% arXiv:2412.15767v1, http://arxiv.org/abs/2412.15767v1
% Some New Modular Rank Three Nahm Sums from a Lift-Dual Operation
\documentclass[12pt,reqno]{amsart} \usepackage{amsmath,amssymb,extarrows} \usepackage{url} \usepackage{tikz,enumerate} \usepackage{diagbox} \usepackage{appendix} \usepackage{epic} \usepackage{float} \vfuzz2pt \usepackage{cite} \usepackage{hyperref} \usepackage{array} \usepackage{booktabs} \setlength{\topmargin}{-3mm} \setlength{\oddsidemargin}{0.2in} \setlength{\evensidemargin}{0.2in} \setlength{\textwidth}{5.9in} \setlength{\textheight}{8.9in} \allowdisplaybreaks[4] \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{defn}{Definition} \theoremstyle{remark} \newtheorem{rem}{Remark} \numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{defn}{section} \DeclareMathOperator{\spt}{spt} \DeclareMathOperator{\RE}{Re} \DeclareMathOperator{\IM}{Im} \DeclareMathOperator{\sg}{sg} \newcommand{\eps}{\varepsilon} \newcommand{\To}{\longrightarrow} \newcommand{\h}{\mathcal{H}} \newcommand{\s}{\mathcal{S}} \newcommand{\A}{\mathcal{A}} \newcommand{\J}{\mathcal{J}} \newcommand{\M}{\mathcal{M}} \newcommand{\W}{\mathcal{W}} \newcommand{\X}{\mathcal{X}} \newcommand{\BOP}{\mathbf{B}} \newcommand{\BH}{\mathbf{B}(\mathcal{H})} \newcommand{\KH}{\mathcal{K}(\mathcal{H})} \newcommand{\Real}{\mathbb{R}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\Field}{\mathbb{F}} \newcommand{\RPlus}{\Real^{+}} \newcommand{\Polar}{\mathcal{P}_{\s}} \newcommand{\Poly}{\mathcal{P}(E)} \newcommand{\EssD}{\mathcal{D}} \newcommand{\Lom}{\mathcal{L}} \newcommand{\States}{\mathcal{T}} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\seq}[1]{\left<#1\right>} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\essnorm}[1]{\norm{#1}_{\ess}} \newcommand{\sgn}{\mathrm{sgn}} \newcommand*\diff{\mathop{}\!\mathrm{d}} \newcommand*\Diff[1]{\mathop{}\!\mathrm{d^#1}} 
\begin{document} \title[Some New Modular Rank Three Nahm Sums] {Some New Modular Rank Three Nahm Sums from a Lift-Dual Operation} \author{Zhineng Cao and Liuquan Wang} \address[Z.\ Cao]{School of Mathematics and Statistics, Wuhan University, Wuhan 430072, Hubei, People's Republic of China} \email{[email protected]} \address[L.\ Wang]{School of Mathematics and Statistics, Wuhan University, Wuhan 430072, Hubei, People's Republic of China} \email{[email protected];[email protected]} \subjclass[2010]{11P84, 33D15, 33D60, 11F03} \keywords{Nahm sums; Rogers--Ramanujan type identities; Bailey pairs; modular triples}
\begin{abstract} Around 2007, Zagier discovered some rank two and rank three Nahm sums, and their modularity has now been confirmed in all cases. Zagier also observed that the dual of a modular Nahm sum is likely to be modular. This duality observation motivates us to discover some new modular rank three Nahm sums by a lift-dual operation. We first lift Zagier's rank two Nahm sums to rank three and then calculate their duals, and we show that these dual Nahm sums are indeed modular. We achieve this by establishing the corresponding Rogers--Ramanujan type identities, which express these Nahm sums as modular infinite products. \end{abstract} \maketitle
\section{Introduction} As an important problem linking the theory of $q$-series and modular forms, Nahm's problem asks for all positive definite matrices $A\in \mathbb{Q}^{r\times r}$, $r$-dimensional column vectors $B\in \mathbb{Q}^r$ and rational scalars $C$ such that the Nahm sum \begin{align}\label{eq-Nahm} f_{A,B,C}(q):=\sum_{n=(n_1,\dots,n_r)^\mathrm{T} \in \mathbb{N}^r}\frac{q^{\frac{1}{2}n^\mathrm{T}An+n^\mathrm{T}B+C}}{(q;q)_{n_1} \cdots (q;q)_{n_r}} \end{align} is modular. Here and below we use the standard $q$-series notation: for $n\in \mathbb{N}\cup \{\infty\}$ we define \begin{align} (a;q)_n&:=\prod_{k=0}^{n-1} (1-aq^k), \\ (a_1,\dots,a_m;q)_n&:=(a_1;q)_n\cdots (a_m;q)_n.
\end{align} Modular Nahm sums usually appear as characters of some rational conformal field theories. A famous example arises from the Rogers--Ramanujan identities \cite{Rogers}: \begin{align} \sum_{n=0}^\infty\frac{q^{n^2}}{(q;q)_n} = \frac{1}{(q,q^4;q^5)_\infty}, \quad \sum_{n=0}^\infty\frac{q^{n^2+n}}{(q;q)_n} = \frac{1}{(q^2,q^3;q^5)_\infty}.\label{RR} \end{align} They imply that the Nahm sums $f_{2,0,-1/60}(q)$ and $f_{2,1,11/60}(q)$ are modular, and they correspond to two characters of the Lee--Yang model (see e.g.\ \cite{Kac}). For convenience, when the Nahm sum $f_{A,B,C}(q)$ is modular, we call $(A,B,C)$ a modular triple. Nahm's conjecture, stated explicitly by Zagier \cite{Zagier}, provides a criterion on the matrix $A$ for it to be the matrix part of a modular triple. This conjecture has been confirmed in the rank one case by Zagier \cite{Zagier}. It does not hold in general rank, since Vlasenko and Zwegers \cite{VZ} found that the matrices \begin{align}\label{matrix-VZ} A= \begin{pmatrix} 3/4 & -1/4 \\ -1/4 & 3/4 \end{pmatrix}, \begin{pmatrix} 3/2 & 1/2 \\ 1/2 & 3/2 \end{pmatrix} \end{align} do not satisfy Nahm's criterion but do appear as the matrix part of some modular triples. Recently, Calegari, Garoufalidis and Zagier \cite{CGZ} proved that one direction of Nahm's conjecture is true. When the rank $r\geq 2$, Nahm's problem is far from being solved. One way to tackle this problem is to provide as many modular triples as possible; the problem is solved once the list of modular triples is complete. In the rank two and three cases, after an extensive search, Zagier \cite[Table 2]{Zagier} provided 11 and 12 sets of possible modular Nahm sums, respectively. Their modularity has now been confirmed in all cases by the works of Vlasenko--Zwegers \cite{VZ}, Cherednik--Feigin \cite{Feigin}, Cao--Rosengren--Wang \cite{CRW} and Wang \cite{Wang-rank2,Wang-rank3}.
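Identities of Rogers--Ramanujan type such as \eqref{RR} can be checked to any finite order by comparing series coefficients on both sides. The following sketch (an illustration only, with our own helper names; not part of the formal argument) verifies both identities of \eqref{RR} up to $q^{39}$:

```python
N = 40  # compare series coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    """Product of two truncated power series (coefficient lists) mod q^N."""
    r = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                r[i + j] += ai * b[j]
    return r

def inv(a):
    """Power series inverse mod q^N, assuming a[0] == 1."""
    r = [0] * N
    r[0] = 1
    for n in range(1, N):
        r[n] = -sum(a[k] * r[n - k] for k in range(1, n + 1))
    return r

def poch(a, m):
    """(q^a; q^m)_infty as a truncated coefficient list."""
    c = [0] * N
    c[0] = 1
    e = a
    while e < N:
        c = [c[n] - (c[n - e] if n >= e else 0) for n in range(N)]  # multiply by (1 - q^e)
        e += m
    return c

def qfac(n):
    """(q; q)_n as a truncated coefficient list."""
    c = [0] * N
    c[0] = 1
    for k in range(1, n + 1):
        c = [c[j] - (c[j - k] if j >= k else 0) for j in range(N)]
    return c

def rr_sum(shift):
    """sum_{n >= 0} q^{n^2 + shift*n} / (q; q)_n, truncated mod q^N."""
    s = [0] * N
    n = 0
    while n * n + shift * n < N:
        t = inv(qfac(n))
        e = n * n + shift * n
        for j in range(N - e):
            s[e + j] += t[j]
        n += 1
    return s

# product sides: 1/((q;q^5)(q^4;q^5))_infty and 1/((q^2;q^5)(q^3;q^5))_infty
lhs1, rhs1 = rr_sum(0), inv(mul(poch(1, 5), poch(4, 5)))
lhs2, rhs2 = rr_sum(1), inv(mul(poch(2, 5), poch(3, 5)))
```

The same coefficient comparison applies to any of the product identities appearing later in the paper, with the truncation order $N$ raised as needed.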
Zagier \cite[p.\ 50, (f)]{Zagier} observed that there might exist some dual structure among modular triples. For a modular triple $(A,B,C)$, we define its dual as the image under the operator \begin{align} \mathcal{D}:(A,B,C)\longmapsto (A^\star, B^\star, C^\star)=(A^{-1},A^{-1}B,\frac{1}{2}B^\mathrm{T} A^{-1}B-\frac{r}{24}-C). \end{align} Zagier conjectured that $\mathcal{D}(A,B,C)$ is still a modular triple. Recently, Wang \cite{Wang2024} presented some counterexamples to this conjecture involving rank four Nahm sums. This work aims to provide more modular Nahm sums. Our construction of new modular triples consists of two steps. We first lift some known rank two modular triples to rank three, and then we consider their duals. For any \begin{align*} &A=\begin{pmatrix} a_1 & a_2 \\ a_2 & a_3\end{pmatrix}, \quad B=\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \end{align*} we define an operator, which we call the \emph{lifting operator}, that lifts $A$ to a $3\times 3$ matrix and $B$ to a three-dimensional vector while keeping the value of $C$: \begin{align} \mathcal{L}: (A,B,C)\longmapsto (\widetilde{A},\widetilde{B},C) \end{align} where \begin{align*} &\widetilde{A}=\begin{pmatrix} a_1 & a_2+1 & a_1+a_2 \\ a_2+1 & a_3 & a_2+a_3 \\ a_1+a_2 & a_2+a_3 & a_1+2a_2+a_3 \end{pmatrix}, \quad \widetilde{B}=\begin{pmatrix} b_1 \\ b_2 \\ b_1+b_2\end{pmatrix}. \end{align*} It is known that \begin{align}\label{eq-lift-id} f_{A,B,0}(q)=f_{\widetilde{A},\widetilde{B},0}(q). \end{align} This fact first appeared in Zwegers' unpublished work \cite{ZwegersTalk}, according to Lee's thesis \cite{LeeThesis}. See \cite{LeeThesis} and \cite{CRW} for a proof. If $(A,B,C)$ is a rank two modular triple, then from \eqref{eq-lift-id} we get a rank three modular triple $(\widetilde{A},\widetilde{B}, C)$ for free, subject to the condition that $\widetilde{A}$ is positive definite. Zagier's duality conjecture then motivates us to consider the dual of $(\widetilde{A},\widetilde{B},C)$.
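The lift identity \eqref{eq-lift-id} is also easy to test numerically before it is used. The sketch below (an illustration with our own helper names, not the proof cited above) evaluates both sides as truncated sums at a numerical value of $q$, for the matrix of Zagier's Example 1 with $a=2$:

```python
import itertools

def qpoch(q, n):
    """(q; q)_n at a numerical value of q."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 - q ** k
    return p

def nahm(A, B, q, M=18):
    """Truncated Nahm sum f_{A,B,0}(q); each index runs over 0 <= n_i < M."""
    r = len(B)
    total = 0.0
    for n in itertools.product(range(M), repeat=r):
        e = sum(A[i][j] * n[i] * n[j] for i in range(r) for j in range(r)) / 2 \
            + sum(B[i] * n[i] for i in range(r))
        d = 1.0
        for ni in n:
            d *= qpoch(q, ni)
        total += q ** e / d
    return total

def lift(A, B):
    """The lifting operator L applied to a rank two pair (A, B)."""
    a1, a2, a3 = A[0][0], A[0][1], A[1][1]
    At = [[a1, a2 + 1, a1 + a2],
          [a2 + 1, a3, a2 + a3],
          [a1 + a2, a2 + a3, a1 + 2 * a2 + a3]]
    return At, [B[0], B[1], B[0] + B[1]]

q = 0.3
A = [[2, -1], [-1, 2]]   # Example 1 with a = 2
err0 = abs(nahm(A, [0, 0], q) - nahm(*lift(A, [0, 0]), q))    # b = 0
err1 = abs(nahm(A, [1, -1], q) - nahm(*lift(A, [1, -1]), q))  # b = 1
```

Since the quadratic forms involved are positive definite, the truncated sums converge rapidly and the two sides agree to machine precision.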
That is, from a rank two modular triple $(A,B,C)$ we get a candidate rank three modular triple $\mathcal{D}\mathcal{L}(A,B,C)$ whenever $\widetilde{A}$ is positive definite. We shall call this process the \emph{lift-dual operation} on $(A,B,C)$. It should be noted that the lift-dual process does not always generate new modular triples. For example, the two matrices in \eqref{matrix-VZ} lift to singular matrices. Therefore, they do not generate new rank three modular triples from the lift-dual operation. The main objective of this work is to apply the lifting operator to Zagier's rank two examples \cite[Table 2]{Zagier} and check whether we obtain new modular triples. We list the lifted matrices in Table \ref{tab-lift}. It is easy to see that only four of them are positive definite. Namely, the lifts of Zagier's matrices are positive definite only for Examples 1, 3, 9 and 11. \begin{table}[htbp]\label{tab-lift} \renewcommand{\arraystretch}{1.9} \begin{tabular}{cccc} \hline Exam.\ No.\ & Matrix $A$ & Lift $\widetilde{A}$ & $\det \widetilde{A}$ \\ \hline 1 & $\left(\begin{smallmatrix} a & 1-a \\ 1-a & a \end{smallmatrix} \right)$ & $\left(\begin{smallmatrix} a & 2-a & 1 \\ 2-a & a & 1 \\ 1 & 1 & 2 \end{smallmatrix} \right)$ & $4a-4$ \\ \hline 2 & $\left(\begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 2 & 2 & 3 \\ 2 & 1 & 2 \\ 3 & 2 & 5 \end{smallmatrix}\right)$ & $-3$ \\ \hline 3 & $\left(\begin{smallmatrix} 1 & -1 \\ -1 & 2 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 1 \end{smallmatrix}\right)$ & 1 \\ \hline 4 & $\left(\begin{smallmatrix} 4 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 4 & 2 & 5 \\ 2 & 1 & 2 \\ 5 & 2 & 7\end{smallmatrix}\right)$ & $-1$\\ \hline 5 & $\left(\begin{smallmatrix} 1/3 & -1/3 \\ -1/3 & 4/3 \end{smallmatrix} \right)$ & $\left(\begin{smallmatrix} 1/3 & 2/3 & 0 \\ 2/3 & 4/3 & 1 \\ 0 & 1 & 1 \end{smallmatrix}\right)$ & $-1/3$ \\ \hline 6 &
$\left(\begin{smallmatrix} 4 & 2 \\ 2 & 2 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 4 & 3 & 6 \\ 3 & 2 & 4 \\ 6 & 4 & 10 \end{smallmatrix}\right)$ & $-2$ \\ \hline 7 & $\left(\begin{smallmatrix} 1/2 & -1/2 \\ -1/2 & 1 \end{smallmatrix} \right)$ & $\left(\begin{smallmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1 & 1/2 \\ 0 & 1/2 & 1/2 \end{smallmatrix}\right)$ & 0 \\ \hline 8 & $\left( \begin{smallmatrix} 3/2 & 1 \\ 1 & 2 \end{smallmatrix} \right)$ & $\left(\begin{smallmatrix} 3/2 & 2 &5/2 \\ 2 & 2 & 3 \\ 5/2 & 3 & 11/2 \end{smallmatrix}\right)$ & $-3/2$ \\ \hline 9 & $\left( \begin{smallmatrix} 1 & -1/2 \\ -1/2 & 3/4 \end{smallmatrix} \right)$ & $\left(\begin{smallmatrix} 1& 1/2 & 1/2 \\ 1/2 & 3/4 & 1/4 \\ 1/2 & 1/4 & 3/4 \end{smallmatrix}\right)$ & $1/4$ \\ \hline 10 & $\left(\begin{smallmatrix} 4/3 & 2/3 \\ 2/3 & 4/3 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 4/3 & 5/3 & 2 \\ 5/3 & 4/3 & 2 \\ 2 & 2 & 4 \end{smallmatrix}\right)$ & $-4/3$ \\ \hline 11 & $\left(\begin{smallmatrix} 1 &-1/2\\ -1/2 & 1 \end{smallmatrix}\right)$ & $\left(\begin{smallmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/2 \\ 1/2 & 1/2 &1 \end{smallmatrix}\right)$ & $1/2$ \\ \hline \end{tabular} \\[2mm] \caption{Matrices from Zagier's rank two examples and their lifts} \label{tab-lift} \end{table} The dual of the lift of the matrix in Example 3 is \begin{align} \widetilde{A}^\star=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 2 \end{pmatrix}. \end{align} Obviously, any Nahm sum for this matrix can be decomposed into the product of a rank one Nahm sum and a rank two Nahm sum, and hence is not essentially new. Therefore, we will focus on the duals of the lifts of Examples 1, 9 and 11. We find that they indeed produce new modular Nahm sums. For each of the Nahm sums we consider, we investigate its modularity by establishing the corresponding Rogers--Ramanujan type identities.
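Since both $\mathcal{L}$ and $\mathcal{D}$ are finite exact computations, the dual triples can be reproduced with exact rational arithmetic. A sketch (our own helper names, implementing the operator $\mathcal{D}$ as defined in the introduction), applied to the lift of Example 1 with $a=2$ and $b=1$:

```python
from fractions import Fraction as Fr

def inverse(A):
    """Exact inverse of a square matrix with rational entries (Gauss-Jordan)."""
    n = len(A)
    M = [[Fr(A[i][j]) for j in range(n)] + [Fr(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def dual(A, B, C):
    """The operator D: (A, B, C) -> (A^{-1}, A^{-1}B, B^T A^{-1} B / 2 - r/24 - C)."""
    r = len(B)
    Ai = inverse(A)
    Bs = [sum(Ai[i][j] * B[j] for j in range(r)) for i in range(r)]
    Cs = sum(B[i] * Bs[i] for i in range(r)) / 2 - Fr(r, 24) - C
    return Ai, Bs, Cs

# lift of Example 1 with a = 2 (third column of the table), B = (b, -b, 0) with b = 1
At = [[2, 0, 1], [0, 2, 1], [1, 1, 2]]
Bt = [Fr(1), Fr(-1), Fr(0)]
C1 = Fr(1, 4) - Fr(1, 24)  # C_1 = b^2/(2a) - 1/24
As, Bs, Cs = dual(At, Bt, C1)
```

The output agrees with the general formulas for this example worked out in Section \ref{sec-exam1}.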
To be precise, we express the Nahm sums using the functions \begin{align}\label{Jm} J_m:=(q^m;q^m)_\infty \quad \text{and} \quad J_{a,m}:=(q^a,q^{m-a},q^m;q^m)_\infty. \end{align} Let $q=e^{2\pi i \tau}$ where $\mathrm{Im}~ \tau>0$. The functions $J_m$ and $J_{a,m}$ are closely related to the Dedekind eta function \begin{align}\label{eta-defn} \eta(\tau):=q^{1/24}(q;q)_\infty \end{align} and the generalized Dedekind eta function \begin{align}\label{general-eta} \eta_{m,a}(\tau):=q^{mB(a/m)/2}(q^a,q^{m-a};q^m)_\infty \end{align} where $B(x)=x^2-x+1/6$. It is well-known that $\eta(\tau)$ is a modular form of weight $1/2$ and $\eta_{m,a}(\tau)$ is a modular form of weight zero. The modularity of a Nahm sum will be clear once we write it in terms of $J_m$ and $J_{a,m}$. We shall use an example to briefly illustrate our work. Zagier's Example 11 asserts that $(A,B_i,C_i)$ are modular triples where \begin{equation} \begin{split} &A=\begin{pmatrix} 1 & -1/2 \\ -1/2 & 1 \end{pmatrix}, ~~ B_1=\begin{pmatrix} -1/2 \\ 0 \end{pmatrix}, ~~ B_2=\begin{pmatrix} 0 \\ -1/2 \end{pmatrix}, ~~ B_3=\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \\ &C_1=1/20, \quad C_2=1/20, \quad C_3=-1/20. \end{split} \end{equation} This lifts to the modular triples $(\widetilde{A},\widetilde{B},C_i)$ where $C_i$ is as above and \begin{align} \widetilde{A}=\begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/2 \\ 1/2 & 1/2 & 1 \end{pmatrix}, ~~\widetilde{B}_1 = \begin{pmatrix} -1/2 \\ 0 \\ -1/2 \end{pmatrix}, ~~\widetilde{B}_2= \begin{pmatrix} 0 \\ -1/2 \\ -1/2 \end{pmatrix}, ~~ \widetilde{B}_3=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \end{align} We may also include the vector $(-1/2,-1/2,0)^\mathrm{T}$ since $n_1,n_2,n_3$ are symmetric to each other in the quadratic form $\frac{1}{2}n^\mathrm{T}\widetilde{A}n$. 
Considering its dual, we expect that $(\widetilde{A}^\star,\widetilde{B}_i^\star,C_i^\star)$ ($i=1,2,3$) are modular triples where \begin{align} &\widetilde{A}^\star=\begin{pmatrix} 3/2 & -1/2 & -1/2 \\ -1/2 & 3/2 & -1/2 \\ -1/2 & -1/2 & 3/2 \end{pmatrix}, ~~ \widetilde{B}_1^\star= \begin{pmatrix} -1/2 \\ 1/2 \\ -1/2 \end{pmatrix}, ~~\widetilde{B}_2^\star= \begin{pmatrix} 1/2 \\ -1/2 \\ -1/2 \end{pmatrix}, ~~\widetilde{B}_3^\star=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \nonumber \\ &C_1^\star=-3/40, \quad C_2^\star=-3/40, \quad C_3^\star=3/40. \end{align} We can also include the vector $(-1/2,-1/2,1/2)^\mathrm{T}$. Due to the symmetry of the quadratic form generated by $\widetilde{A}^\star$, there are essentially only two different Nahm sums to consider. We establish the following identities to confirm their modularity. \begin{theorem}\label{thm-lift-11} We have \begin{align} & \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+3k^2-2ij-2ik-2jk}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=\frac{J_6^5J_{28,60}}{J_3^2J_4^2J_{12}^2}+2q^3\frac{J_2^2J_3J_{12}J_{12,60}}{J_1J_4^3J_6}-q^4\frac{J_6^5J_{8,60}}{J_3^2J_4^2J_{12}^2}, \label{eq-thm-11-1} \\ & \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+3k^2-2ij-2ik-2jk-2i+2j-2k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=2\frac{J_2^2J_3J_{12}J_{24,60}}{J_1J_4^3J_6}+q\frac{J_6^5J_{16,60}}{J_3^2J_4^2J_{12}^2}+q^5\frac{J_6^5J_{4,60}}{J_3^2J_4^2J_{12}^2}. \label{eq-thm-11-2} \end{align} \end{theorem} The main difficulty of this work lies in finding and proving the Rogers--Ramanujan type identities for Nahm sums. We achieve this by applying various $q$-series techniques, including the constant term method, the integral method and Bailey pairs. Some unexpected features arise in our proofs. For instance, in order to prove Theorem \ref{thm-lift-11}, we need to use two identities on Zagier's Example 10 found by Vlasenko--Zwegers \cite{VZ}. The rest of this paper is organized as follows.
In Section \ref{sec-pre} we review some known $q$-series identities which will be used in our proof. We also recall Bailey's lemma and its consequences. Sections \ref{sec-exam1}--\ref{sec-exam11} are devoted to discussing the new modular triples we found after applying the lift-dual operation to Zagier's Examples 1, 9 and 11, respectively. Finally, in Section \ref{sec-applictaion}, we discuss a special case of the matrix with parameters in Section \ref{sec-exam1}, and we show that there are other modular triples in addition to those described in Section \ref{sec-exam1}. \section{Preliminaries}\label{sec-pre} We need the Jacobi triple product identity (see e.g. \cite[Theorem 2.8]{Andrews-book}): \begin{align}\label{JTP} (q,z,q/z;q)_\infty=\sum_{n=-\infty}^\infty (-1)^nq^{\binom{n}{2}}z^n. \end{align} For convenience, besides \eqref{Jm} we also use the notation: \begin{align} \overline{J}_{a,m}=(-q^a,-q^{m-a},q^m;q^m)_\infty=\frac{J_m^2J_{2a,2m}}{J_{a,m}J_{2m}}. \end{align} Whenever we write a $q$-series $f(q)$ in terms of $J_m$ and $J_{a,m}$ defined in \eqref{Jm}, we can use \eqref{eta-defn} and \eqref{general-eta} to find suitable $C$ so that $q^Cf(q)$ is modular. In the case of Nahm sums, if we are able to write a Nahm sum $f_{A,B,0}(q)$ as \begin{align} f_{A,B,0}(q)=f_1(q)+f_2(q)+\cdots+f_k(q) \end{align} where $q^{C}f_i(q)$ are modular of weight zero for $i=1,2,\dots,k$, then we see that the Nahm sum $f_{A,B,C}(q)=q^Cf_{A,B,0}(q)$ is modular. The basic hypergeometric series ${}_r\phi_s$ is defined as: $${}_r\phi_s\bigg(\genfrac{}{}{0pt}{} {a_1,\dots,a_r}{b_1,\dots,b_s};q,z \bigg):=\sum_{n=0}^\infty \frac{(a_1,\dots,a_r;q)_n}{(q,b_1,\dots,b_s;q)_n}\Big((-1)^nq^{\binom{n}{2}} \Big)^{1+s-r}z^n.$$ We now list some formulas for such series which will be invoked in our proofs. 
\begin{enumerate}[(1)] \item The $q$-binomial theorem \cite[Theorem 2.1]{Andrews-book} asserts that \begin{align}\label{q-binomial} \sum_{n\geq 0} \frac{(a;q)_n}{(q;q)_n}z^n=\frac{(az;q)_\infty}{(z;q)_\infty}, \quad |z|<1. \end{align} \item As corollaries to \eqref{q-binomial}, we have Euler's $q$-exponential identities \cite[Corollary 2.2]{Andrews-book}: \begin{align}\label{Euler1} \sum_{n\geq 0} \frac{z^n}{(q;q)_n}=\frac{1}{(z;q)_\infty}, \quad \sum_{n\geq 0} \frac{z^nq^{\frac{n^2-n}{2}}}{(q;q)_n}=(-z;q)_\infty. \end{align} Here we need $|z|<1$ for the first identity. \item The $q$-Gauss summation formula \cite[(1.5.1)]{Gasper-Rahman}: \begin{align} \label{Gauss} {}_2\phi_1\bigg(\genfrac{}{}{0pt}{} {a,b}{c};q,c/ab \bigg)=\frac{(c/a,c/b;q)_\infty}{(c,c/ab;q)_\infty}, \quad \left| \frac{c}{ab} \right|<1. \end{align} \item The sum of a ${}_1\phi_1$ series \cite[\uppercase\expandafter{\romannumeral2}.5]{Gasper-Rahman}: \begin{align} \label{1phi1} {}_1\phi_1\bigg(\genfrac{}{}{0pt}{} {a}{c};q,c/a \bigg)=\frac{(c/a;q)_\infty}{(c;q)_\infty}. \end{align} \item A $q$-analogue of Bailey's ${}_2F_{1}(-1)$ sum \cite[\uppercase\expandafter{\romannumeral2}.10]{Gasper-Rahman}: \begin{align} \label{Bailey's} {}_2\phi_2\bigg(\genfrac{}{}{0pt}{} {a,q/a}{-q,b};q,-b \bigg)=\frac{(ab,bq/a;q^2)_\infty}{(b;q)_\infty}. \end{align} \item A $q$-analogue of Gauss' ${}_2F_{1}(-1)$ sum \cite[\uppercase\expandafter{\romannumeral2}.11]{Gasper-Rahman}: \begin{align} \label{Gauss'} {}_2\phi_2\bigg(\genfrac{}{}{0pt}{} {a^2,b^2}{abq^{1/2},-abq^{1/2}};q,-q \bigg)=\frac{(a^2q,b^2q;q^2)_\infty}{(q,a^2b^2q;q^2)_\infty}. \end{align} \end{enumerate} One main step in our calculations of rank three Nahm sums is to reduce them to single or double sums and then employ some known identities. Here we recall some single sum Rogers--Ramanujan type identities: \begin{align} &\sum_{n\geq 0} \frac{q^{n^2}(-q;q^2)_n}{(q^4;q^4)_{n}}=\frac{J_2J_{3,6}}{J_{1}J_{4}}, \quad \text{(S. 25)} \label{S. 
25}\\ &\sum_{n\geq 0} \frac{q^{(n^2+n)/2}}{(q;q)_n(q;q^2)_{n+1}}=\frac{J_2J_{14}^{3}}{J_1J_{1,14}J_{4,14}J_{6,14}}, \quad \text{(S. 80)} \label{S. 80}\\ &\sum_{n\geq 0} \frac{q^{(n^2+n)/2}}{(q;q)_n(q;q^2)_{n}}=\frac{J_2J_{14}^{3}}{J_1J_{2,14}J_{3,14}J_{4,14}}, \quad \text{(S. 81)} \label{S. 81}\\ &\sum_{n\geq 0} \frac{q^{(n^2+3n)/2}}{(q;q)_n(q;q^2)_{n+1}}=\frac{J_2J_{14}^{3}}{J_1J_{2,14}J_{5,14}J_{6,14}}, \quad \text{(S. 82)} \label{S. 82}\\ &\sum_{n\geq 0} \frac{q^{n^2}}{(q;q^2)_n(q^4;q^4)_{n}}=\frac{J_2J_{14}J_{3,28}J_{11,28}}{J_1J_{28}J_{4,28}J_{12,28}}, \quad \text{(S. 117)} \label{S. 117}\\ &\sum_{n\geq 0} \frac{q^{n^2+2n}}{(q;q^2)_n(q^4;q^4)_{n}}=\frac{J_2J_{1,14}J_{12,28}}{J_1J_{4}J_{28}}, \quad \text{(S. 118)} \label{S. 118}\\ &\sum_{n\geq 0} \frac{q^{n^2+2n}}{(q;q)_{2n+1}(-q^2;q^2)_{n}}=\frac{J_2J_{5,14}J_{4,28}}{J_1J_{4}J_{28}}, \quad \text{(S. 119)} \label{S. 119}\\ &\sum_{n\geq 0} \frac{(-1)^nq^{n^2}}{(q^4;q^4)_{n}(-q;q^2)_{n}}=\frac{J_{1,14}J_{5,14}J_{7}}{J_{2,14}J_{4,14}J_{14}}, \quad \text{(\cite[Entry 3.5.4]{Andrews-Berndt})} \label{Entry 3.5.4}\\ &\sum_{n\geq 0} \frac{(-1)^nq^{n^2+2n}}{(q^4;q^4)_{n}(-q;q^2)_{n}}=\frac{J_{3,14}J_{5,14}J_{7}}{J_{4,14}J_{6,14}J_{14}}, \quad \text{(\cite[Entry 3.5.5]{Andrews-Berndt})} \label{Entry 3.5.5}\\ &\sum_{n\geq 0} \frac{(-1)^nq^{n^2+2n}}{(q^4;q^4)_{n}(-q;q^2)_{n+1}}=\frac{J_{1,14}J_{3,14}J_{7}}{J_{2,14}J_{6,14}J_{14}}, \quad \text{(\cite[Entry 3.5.6]{Andrews-Berndt})} \label{Entry 3.5.6}\\ &\sum_{n\geq 0} \frac{q^{n^2}(-1;q)_{n}}{(q;q)_{n}(q;q^2)_{n}} =\sum_{n\geq 0} \frac{q^{n^2}(-q;q)_{n}}{(q;q)_{n}(q;q^2)_{n+1}} =\frac{J_{2}J_{3}^2}{J_{1}^2J_{6}}, \label{Entry 4.2.8+4.2.9} \\ &\qquad \qquad \qquad \text{(\cite[Entries 4.2.8 and 4.2.9]{Andrews-Berndt})} \nonumber \\ &\sum_{n\geq 0} \frac{(-1)^nq^{n^2+2n}(q;q^2)_{n}}{(q^4;q^4)_{n}}=\frac{J_{1}J_{6}^2J_{2,12}}{J_{2}^2J_{1,6}J_{12}}, \quad \text{(\cite[Entry 4.2.11]{Andrews-Berndt})} \label{Entry 4.2.11} \\ &\sum_{n\geq 0} 
\frac{q^{2n^2}(-aq,-q/a;q^2)_{n}}{(q^2;q^2)_{2n}}=\frac{(-aq^3,-q^3/a,q^6;q^6)_\infty}{(q^2;q^2)_\infty}, \quad \text{(\cite[Entry 5.3.1]{Andrews-Berndt})} \label{Entry 5.3.1} \\ &\sum_{n\geq 0} \frac{(-x;q)_{n+1}(-q/x,\rho_{1},\rho_{2};q)_{n}}{(q;q)_{2n+1}}\Big(\frac{q^2}{\rho_{1}\rho_{2}}\Big)^n =\frac{(q^2/\rho_{1},q^2/\rho_{2};q)_{\infty}}{(q,q^2/\rho_{1}\rho_{2};q)_{\infty}} \nonumber \\ &\qquad \times \sum_{n\geq 0} \frac{(\rho_{1},\rho_{2};q)_{n}}{(q^2/\rho_{1},q^2/\rho_{2};q)_{n}}(x^{n+1}+x^{-n})\Big(\frac{q^2}{\rho_{1}\rho_{2}}\Big)^nq^{(n^2+n)/2}. \quad \text{(\cite[(5.2.4)]{Andrews-Berndt})} \label{Part2-5.2.4} \end{align} Here we use the label (S.\ $n$) to denote the equation $(n)$ in Slater's list \cite{Slater}. Given any series $f(z)=\sum_{n\in \mathbb{Z}} a(n)z^n$, we define the constant term extractor (with respect to $z$) $\mathrm{CT}_z$ as \begin{align} \mathrm{CT}_z f(z)=a(0). \end{align} Clearly, when the integral of $f(z)$ along a positively oriented simple closed contour around the origin is well-defined, we have \begin{align} \mathrm{CT}_z f(z)=\oint f(z) \frac{dz}{2\pi iz}. \end{align} We recall a useful formula from \cite{Gasper-Rahman} for evaluating contour integrals of infinite products. Suppose that $$P(z):=\frac{(a_1 z,\dots,a_A z,b_1/z,\dots,b_B/ z;q)_{\infty}} {(c_1 z,\dots,c_C z,d_1/z,\dots,d_D/z;q)_{\infty}}$$ has only simple poles. We have \cite[Eq. (4.10.5)]{Gasper-Rahman} \begin{align}\label{Eq.
(4.10.5)} &\oint P(z)\frac{dz}{2\pi iz} =\frac{(a_1d_1,\dots,a_Ad_1,b_1/d_1,\dots,b_B/d_1;q)_{\infty}} {(c_1d_1,\dots,c_Cd_1,d_2/d_1,\dots,d_D/d_1;q)_{\infty}} \nonumber\\ &\quad \times \sum_{n=0}^{\infty}\frac{(c_1d_1,\dots,c_Cd_1,qd_1/b_1,\dots,qd_1/b_B;q)_{n}} {(q,a_1d_1,\dots,a_Ad_1,qd_1/d_2,\dots,qd_1/d_D;q)_{n}} (-d_1q^{(n+1)/2})^{n(D-B)}\Big(\frac{b_1\cdots b_B}{d_1\cdots d_D}\Big)^n \nonumber\\ &\qquad +\mathrm{idem}(d_1;d_2,\dots,d_D) \end{align} when $D>B$ or if $D=B$ and $$\left| \frac{b_1\cdots b_B}{d_1\cdots d_D}\right|<1.$$ Here the symbol $\mathrm{idem}(d_1;d_2,\dots,d_D)$ after an expression stands for the sum of the $(D-1)$ expressions obtained from the preceding expression by interchanging $d_1$ with each $d_k$, $k=2,3,\dots,D$, and the integration is over a positively oriented simple contour so that \begin{enumerate}[(1)] \item the poles of $1/(c_1 z,\cdots,c_C z;q)_{\infty}$ lie outside the contour; \item the origin and poles of $1/(d_1/z,\cdots,d_D/z;q)_{\infty}$ lie inside the contour. \end{enumerate} We will also need the Bailey lemma (see e.g. \cite{BIS,Lovejoy2004,McLaughlin}). A pair of sequences $(\alpha_n(a;q),\beta_n(a;q))$ is called a Bailey pair relative to $a$ if for all $n\geq 0$, \begin{align}\label{defn-BP} \beta_n(a;q)=\sum_{k=0}^n\frac{\alpha_k(a;q)}{(q;q)_{n-k}(aq;q)_{n+k}}. \end{align} \begin{lemma}[Bailey's lemma] Suppose that $(\alpha_{n}(a;q),\beta_{n}(a;q))$ is a Bailey pair relative to $a$. Then $(\alpha_{n}'(a;q),\beta_{n}'(a;q))$ is another Bailey pair relative to $a$, where \begin{align}\label{Bailey's lemma} &\alpha_{n}'(a;q)=\frac{(\rho_{1},\rho_{2};q)_{n}}{(aq/\rho_{1},aq/\rho_{2};q)_{n}}\left( \frac{aq}{\rho_{1}\rho_{2}}\right)^{n}\alpha_{n}(a;q), \\ &\beta_{n}'(a;q)=\sum_{k=0}^{n}\frac{(\rho_{1},\rho_{2};q)_{k}(aq/\rho_{1}\rho_{2};q)_{n-k}}{(aq/\rho_{1},aq/\rho_{2};q)_{n}(q;q)_{n-k}}\left( \frac{aq}{\rho_{1}\rho_{2}}\right)^{k}\beta_{k}(a;q).
\end{align} Equivalently, if $(\alpha_n(a;q),\beta_n(a;q))$ is a Bailey pair, then \begin{align}\label{eq-Bailey-general-id} &\frac{1}{(aq/\rho_1,aq/\rho_2;q)_n}\sum_{j=0}^n \frac{(\rho_1,\rho_2;q)_j(aq/\rho_1\rho_2;q)_{n-j}}{(q;q)_{n-j}}\Big(\frac{aq}{\rho_1\rho_2} \Big)^j\beta_j(a;q) \nonumber \\ &=\sum_{r=0}^n \frac{(\rho_1,\rho_2;q)_r}{(q;q)_{n-r}(aq;q)_{n+r}(aq/\rho_1,aq/\rho_2;q)_r}\Big(\frac{aq}{\rho_1\rho_2} \Big)^r \alpha_r(a;q). \end{align} \end{lemma} We need the following special consequences of Bailey's lemma. \begin{enumerate}[(1)] \item Letting $\rho_1,\rho_2\rightarrow \infty$, we obtain the Bailey pair \cite[Eq.\ (S1)]{BIS}: \begin{align}\label{BP-S1} \alpha_n'(a;q)=a^nq^{n^2}\alpha_n(a;q), \quad \beta_n'(a;q)=\sum_{r=0}^n \frac{a^rq^{r^2}}{(q;q)_{n-r}}\beta_r(a;q). \end{align} If we further let $n\rightarrow \infty$, then we deduce from \eqref{eq-Bailey-general-id} that \begin{align}\label{eq-BP-id-key} \sum_{n=0}^\infty a^nq^{n^2}\beta_n(a;q)=\frac{1}{(aq;q)_\infty} \sum_{n=0}^\infty a^n q^{n^2}\alpha_n(a;q). \end{align} \item Letting $a=1$, $q \rightarrow q^4$, $\rho_{1}=-q^2/w$, $\rho_{2} \rightarrow \infty$, we obtain the Bailey pair \begin{align} &\alpha_{n}'(1;q^4)=\frac{(-q^2/w;q^4)_{n}}{(-q^2w;q^4)_{n}}w^nq^{2n^2}\alpha_{n}(1;q^4), \label{Bailey's lemma-1.1} \\ &\beta_{n}'(1;q^4)=\sum_{k=0}^{n}\frac{(-q^2/w;q^4)_{k}w^kq^{2k^2}}{(-q^2w;q^4)_{n}(q^4;q^4)_{n-k}}\beta_{k}(1;q^4).\label{Bailey's lemma-1.2} \end{align} If we further let $n\rightarrow \infty$, then we deduce from \eqref{eq-Bailey-general-id} that \begin{align}\label{Bailey's lemma-1} &\sum_{k=0}^{\infty}(-q^2/w;q^4)_{k}w^kq^{2k^2}\beta_{k}(1;q^4) \nonumber \\ &=\frac{(-q^2w;q^4)_{\infty}}{(q^4;q^4)_{\infty}}\sum_{r=0}^{\infty}\frac{(-q^2/w;q^4)_{r}w^rq^{2r^2}}{(-q^2w;q^4)_{r}}\alpha_{r}(1;q^4). 
\end{align} \item Letting $a=q^4$, $q \rightarrow q^4$, $\rho_{1}=-q^4/w$, $\rho_{2} \rightarrow \infty$, we obtain the Bailey pair \begin{align} &\alpha_{n}'(q^4;q^4)=\frac{(-q^4/w;q^4)_{n}}{(-q^4w;q^4)_{n}}w^nq^{2n^2+2n}\alpha_{n}(q^4;q^4), \label{Bailey's lemma-2.1} \\ &\beta_{n}'(q^4;q^4)=\sum_{k=0}^{n}\frac{(-q^4/w;q^4)_{k}w^kq^{2k^2+2k}}{(-q^4w;q^4)_{n}(q^4;q^4)_{n-k}}\beta_{k}(q^4;q^4).\label{Bailey's lemma-2.2} \end{align} If we further let $n\rightarrow \infty$, then we deduce from \eqref{eq-Bailey-general-id} that \begin{align}\label{Bailey's lemma-2} &\frac{1}{1-q^4}\sum_{n=0}^{\infty}(-q^4/w;q^4)_{n}w^nq^{2n^2+2n}\beta_{n}(q^4;q^4) \nonumber \\ &=\frac{(-wq^4;q^4)_{\infty}}{(q^4;q^4)_{\infty}}\sum_{r=0}^{\infty}\frac{(-q^4/w;q^4)_{r}w^rq^{2r^2+2r}}{(-wq^4;q^4)_{r}}\alpha_{r}(q^4;q^4). \end{align} \end{enumerate} The following result of Lovejoy \cite[p.\ 1510]{Lovejoy2004} allows us to change the parameter $a$ to $aq$. \begin{lemma}\label{lem-BP-lift} If $(\alpha_n(a;q),\beta_n(a;q))$ is a Bailey pair relative to $a$, then $(\alpha_n',\beta_n')$ is a Bailey pair relative to $aq$ where \begin{equation} \begin{split} \alpha_n'(aq;q)&=\frac{(1-aq^{2n+1})(aq/b;q)_n(-b)^nq^{n(n-1)/2}}{(1-aq)(bq;q)_n}\sum_{r=0}^n \frac{(b;q)_r}{(aq/b;q)_r} \\ &\qquad \times (-b)^{-r}q^{-r(r-1)/2} \alpha_r(a;q), \\ \beta_n'(aq;q)&=\frac{(b;q)_n}{(bq;q)_n}\beta_n(a;q). \end{split} \end{equation} \end{lemma} In particular, when $b\rightarrow 0$ we obtain \begin{equation}\label{eq-BP-lift} \begin{split} &\alpha_n'(aq;q)=\frac{(1-aq^{2n+1})a^nq^{n^2}}{1-aq}\sum_{r=0}^n a^{-r}q^{-r^2}\alpha_r(a;q), \\ &\beta_n'(aq;q)=\beta_n(a;q). \end{split} \end{equation} The following result of McLaughlin \cite{McLaughlin} allows us to change the parameter $a$ to $a/q$.
\begin{lemma}\label{lem-BP-down} If $(\alpha_n(a;q),\beta_n(a;q))$ is a Bailey pair relative to $a$, then $(\alpha_n',\beta_n')$ is a Bailey pair relative to $a/q$ where \begin{align} \alpha_0'(a/q;q)&=\alpha_0(a;q), \quad \alpha_n'(a/q;q)=(1-a)\Big(\frac{\alpha_n(a;q)}{1-aq^{2n}}-\frac{aq^{2n-2}\alpha_{n-1}(a;q)}{1-aq^{2n-2}} \Big), \nonumber \\ \beta_n'(a/q;q)&=\beta_n(a;q). \end{align} \end{lemma} \section{Dual to the lift of Zagier's Example 1}\label{sec-exam1} From now on we will present some new modular triples discovered by applying the lift-dual operation to Zagier's rank two examples. For convenience, we write $(n_1,n_2,n_3)$ as $(i,j,k)$. The same symbols such as $A,B,C,F(u,v,w;q)$ may have different meanings in different places, but this will not cause any confusion. Zagier's Example 1 states that for $a>\frac{1}{2}$ ($a\neq 1$) and $b\in \mathbb{Q}$, we have modular triples $(A,B_i,C_i)$ ($i=1,2,3,4$) where \begin{equation}\label{eq-exam1-original} \begin{split} &A=\begin{pmatrix} a & 1-a \\ 1-a & a \end{pmatrix}, B_1=\begin{pmatrix} b \\ -b \end{pmatrix}, B_2=\begin{pmatrix} -1/2 \\ -1/2 \end{pmatrix}, B_3=\begin{pmatrix} 1-a/2 \\ a/2 \end{pmatrix}, \\ &B_4=\begin{pmatrix} a/2 \\ 1-a/2 \end{pmatrix}, C_1=\frac{b^2}{2a}-\frac{1}{24}, C_2=\frac{1}{8a}-\frac{1}{24}, C_3=\frac{a}{8}-\frac{1}{24}, C_4=\frac{a}{8}-\frac{1}{24}. \end{split} \end{equation} Here the last three vectors were found by Vlasenko--Zwegers \cite{VZ}. 
The identities justifying their modularity were given in \cite[Table 2]{VZ}: \begin{align} \sum_{i,j\geq 0}\frac{q^{\frac{1}{2}ai^2+(1-a)ij+\frac{1}{2}aj^2+b(i-j)}}{(q;q)_i(q;q)_j}&=\frac{1}{(q;q)_\infty} \sum_{n=-\infty}^\infty q^{\frac{1}{2}an^2+bn}, \label{VZ-id-1}\\ \sum_{i,j\geq 0}\frac{q^{\frac{1}{2}ai^2+(1-a)ij+\frac{1}{2}aj^2-\frac{1}{2}i-\frac{1}{2}j}}{(q;q)_i(q;q)_j}&=\frac{2}{(q;q)_\infty} \sum_{n=-\infty}^\infty q^{\frac{1}{2}an^2+\frac{1}{2}n}, \label{VZ-id-2} \\ \sum_{i,j\geq 0}\frac{q^{\frac{1}{2}ai^2+(1-a)ij+\frac{1}{2}aj^2+(1-\frac{1}{2}a)i+\frac{1}{2}aj}}{(q;q)_i(q;q)_j}&=\frac{1}{2(q;q)_\infty} \sum_{n=-\infty}^\infty q^{\frac{1}{2}an^2+\frac{1}{2}an}. \label{VZ-id-3} \end{align} This lifts to rank three modular triples $(\widetilde{A},\widetilde{B}_i,C_i)$ where $C_i$ is as above and \begin{equation}\label{exam1-lift} \begin{split} &\widetilde{A}=\begin{pmatrix} a & 2-a & 1 \\ 2-a & a & 1 \\ 1 & 1 & 2 \end{pmatrix}, ~~ \widetilde{B}_1=\begin{pmatrix} b \\ -b \\ 0 \end{pmatrix}, ~~ \widetilde{B}_2= \begin{pmatrix} -1/2 \\ -1/2 \\ -1 \end{pmatrix}, \\ &\widetilde{B}_3=\begin{pmatrix} 1-a/2 \\ a/2 \\ 1 \end{pmatrix}, ~~ \widetilde{B}_4=\begin{pmatrix} a/2 \\ 1-a/2 \\ 1 \end{pmatrix}. \end{split} \end{equation} The matrix $\widetilde{A}$ also appeared in a recent work of Gang--Kim--Park--Stubbs \cite[Sec.\ 4.2.3]{GKPS}.
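Identities of this shape can be sanity-checked in floating point by truncating both sides. A sketch (our own code; an illustration, not a proof) that checks \eqref{VZ-id-2} for two sample values of $a$:

```python
def qpoch(q, n):
    """(q; q)_n at a numerical value of q."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 - q ** k
    return p

def vz2_sides(a, q, M=25, T=60):
    """Truncations of both sides of the identity for B = (-1/2, -1/2)."""
    lhs = sum(q ** (a * i * i / 2 + (1 - a) * i * j + a * j * j / 2 - i / 2 - j / 2)
              / (qpoch(q, i) * qpoch(q, j))
              for i in range(M) for j in range(M))
    qinf = 1.0  # (q; q)_infty to machine precision
    k = 1
    while q ** k > 1e-18:
        qinf *= 1 - q ** k
        k += 1
    theta = sum(q ** (a * n * n / 2 + n / 2) for n in range(-T, T + 1))
    return lhs, 2 * theta / qinf

l1, r1 = vz2_sides(2, 0.3)
l2, r2 = vz2_sides(3, 0.3)
```

The fractional powers of $q$ arising for odd $a$ cause no difficulty here, since both sides are evaluated at a positive numerical value of $q$.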
Considering its dual, we obtain possible modular triples $(\widetilde{A}^\star,\widetilde{B}_i^\star,C_i^\star)$ where \begin{equation}\label{A-exam1-lift-dual} \begin{split} &\widetilde{A}^\star=\begin{pmatrix} \frac{2a-1}{4(a-1)} & \frac{2a-3}{4(a-1)} & -\frac{1}{2} \\ \frac{2a-3}{4(a-1)} & \frac{2a-1}{4(a-1)} & -\frac{1}{2} \\ -\frac{1}{2} & -\frac{1}{2} & 1 \end{pmatrix}, ~~ \widetilde{B}_1^\star=\begin{pmatrix} \frac{b}{2(a-1)} \\ -\frac{b}{2(a-1)} \\ 0 \end{pmatrix}, ~~ \widetilde{B}_2^\star=\begin{pmatrix} 0 \\ 0 \\ -1/2 \end{pmatrix}, \\ & \widetilde{B}_3^\star=\begin{pmatrix} -1/4 \\ 1/4 \\ 1/2 \end{pmatrix}, ~~ \widetilde{B}_4^\star=\begin{pmatrix} 1/4 \\ -1/4 \\ 1/2 \end{pmatrix}, ~~ C_1^\star=\frac{b^2}{2a(a-1)}-\frac{1}{12}, \\ & C_2^\star=\frac{1}{6}-\frac{1}{8a}, ~~ C_3^\star=\frac{1}{24}, ~~ C_4^\star=\frac{1}{24}. \end{split} \end{equation} Setting $m=\frac{1}{4(a-1)}$ and $\nu=\frac{b}{2(a-1)}$, we may rewrite them as \begin{align}\label{eq-vector-B-dual-exam1} & \widetilde{A}^\star=\begin{pmatrix} \frac{1}{2}+m & \frac{1}{2}-m & -\frac{1}{2} \\ \frac{1}{2}-m & \frac{1}{2}+m & -\frac{1}{2} \\ -\frac{1}{2} & -\frac{1}{2} & 1 \end{pmatrix}, ~~\widetilde{B}_1^\star=\begin{pmatrix} \nu \\ -\nu \\ 0 \end{pmatrix},\widetilde{B}_2^\star=\begin{pmatrix} 0 \\ 0 \\ -1/2 \end{pmatrix}, \widetilde{B}_3^\star=\begin{pmatrix} -1/4 \\ 1/4 \\ 1/2 \end{pmatrix}, \nonumber \\ &\widetilde{B}_4^\star=\begin{pmatrix} 1/4 \\ -1/4 \\ 1/2 \end{pmatrix},~~ C_1^\star=\frac{2\nu^2}{4m+1}-\frac{1}{12}, ~~ C_2^\star=\frac{m+1}{6(4m+1)}, ~~ C_3^\star=\frac{1}{24}, ~~ C_4^\star=\frac{1}{24}. \end{align} We now prove that they are indeed modular. Since $i$ and $j$ are symmetric in the quadratic form generated by $\widetilde{A}^\star$, there are essentially only three Nahm sums to consider.
\begin{theorem}\label{2-Ex1-in} We have \begin{align} &\sum_{i,j,k\geq 0} \frac{q^{2m(i-j)^2+(i+j)^2+2k^2-2(i+j)k+\nu(i-j)}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=\frac{J_4^3\overline{J}_{2(4m+\nu+1),4(4m+1)}}{J_2^2J_8^2} +2q^{2m+\nu+1} \frac{J_8^2\overline{J}_{-2\nu,4(4m+1)}}{J_4^3}, \label{thm1-id-1}\\ &\sum_{i,j,k\geq 0} \frac{q^{2m(i-j)^2+(i+j)^2+2k^2-2(i+j)k-2k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} =4\frac{J_{8}^{2}\overline{J}_{8m,16m+4}}{J_{4}^{3}}+2q^{2m-1}\frac{J_{4}^{3}\overline{J}_{2,16m+4}}{J_{2}^{2}J_{8}^{2}}, \label{thm1-id-2}\\ &\sum_{i,j,k\geq 0} \frac{q^{2m(i-j)^2+(i+j)^2+2k^2-2(i+j)k+i-j+2k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=\frac{J_8^2\overline{J}_{8m+2,16m+4}}{J_4^3}+q^{2m}\frac{J_4^3J_{32m+8}^2}{J_2^2J_8^2J_{16m+4}}. \label{thm1-id-3} \end{align} \end{theorem} \begin{proof} We define \begin{align} F(u,v,w;q^4)=\sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{2m(i-j)^2+(i+j)^2+2k^2-2(i+j)k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}. \end{align} We have \begin{align} &F(u,u^{-1},w;q^4)=\sum_{i,j,k\geq 0} \frac{q^{2m(i-j)^2+(i+j)^2+2k^2-2(i+j)k}u^{i-j}w^k}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}\nonumber\\ &=\sum_{i,j\geq 0} \frac{q^{2m(i-j)^2+(i+j)^2}u^{i-j}(-wq^{2-2(i+j)};q^4)_{\infty}}{(q^4;q^4)_i(q^4;q^4)_j}\quad \text{(set $i+j=n$}) \nonumber\\ &=\sum_{n=0}^{\infty}\sum_{j=0}^{n} \frac{q^{2m(n-2j)^2+n^2}u^{n-2j}(-wq^{2-2n};q^4)_{\infty}}{(q^4;q^4)_j(q^4;q^4)_{n-j}} \nonumber\\ &=S_0(q)+S_1(q) \label{1-proof-S}, \end{align} where $S_0(q)$ and $S_1(q)$ correspond to the sum with $n$ even and odd, respectively, namely, \begin{align} S_0(q)&=(-wq^{2};q^4)_{\infty}\sum_{n=0}^\infty \sum_{j=0}^{2n} \frac{q^{8m(n-j)^2+2n^2}u^{2(n-j)}w^n(-q^{2}/w;q^4)_{n}}{(q^4;q^4)_j(q^4;q^4)_{2n-j}}, \\ S_1(q)&=(-w;q^4)_{\infty}\sum_{n=0}^\infty \sum_{j=0}^{2n+1} \frac{q^{8m(n-j)^2+8m(n-j)+2m+2n^2+2n+1}u^{2(n-j)+1}w^n(-q^{4}/w;q^4)_{n}}{(q^4;q^4)_j(q^4;q^4)_{2n-j+1}}. 
\end{align} We have \begin{align} &S_0(q)=(-wq^{2};q^4)_{\infty}\sum_{n=0}^\infty w^{n}q^{2n^2}(-q^{2}/w;q^4)_{n} \Big(\sum_{r=0}^{n} \frac{q^{8mr^2}u^{2r}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r}} \nonumber \\ &\qquad \qquad +\sum_{r=1}^{n} \frac{q^{8mr^2}u^{-2r}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r}}\Big) \nonumber\\ &=(-wq^{2};q^4)_{\infty}\sum_{n=0}^\infty w^nq^{2n^2}(-q^{2}/w;q^4)_{n} \beta_n(1;q^4). \label{1-proof-S0} \end{align} Here $\beta_n(1;q^4)$ are defined by \eqref{defn-BP} with \begin{align*} \alpha_r(1;q^4):=\left\{\begin{array}{ll} 1 & r=0, \\ q^{8mr^2}(u^{2r}+u^{-2r}) & r\geq 1. \end{array} \right. \end{align*} By \eqref{Bailey's lemma-1} we obtain \begin{align} S_0(q)=\frac{(-wq^{2};q^4)_{\infty}^2}{(q^4;q^4)_\infty}\Big(1+\sum_{r=1}^{\infty}\frac{(-q^2/w;q^4)_{r}}{(-q^2w;q^4)_{r}}w^rq^{(8m+2)r^2}(u^{2r}+u^{-2r})\Big). \label{id-S0} \end{align} Similarly, we have \begin{align} &S_1(q)=(-w;q^4)_{\infty}\sum_{n=0}^{\infty}w^nq^{2n^2+2n+1}(-q^{4}/w;q^4)_{n} \Big(\sum_{r=0}^{n} \frac{q^{8mr^2+8mr+2m}u^{2r+1}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r+1}}\nonumber\\ &\qquad \qquad +\sum_{r=0}^{n} \frac{q^{8mr^2+8mr+2m}u^{-2r-1}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r+1}}\Big) \nonumber\\ &= \frac{q^{2m+1}}{1-q^4}(-w;q^4)_{\infty}\sum_{n=0}^{\infty}w^nq^{2n^2+2n}(-q^{4}/w;q^4)_{n}\beta_n(q^4;q^4). \label{1-proof-S1} \end{align} Here $\beta_n(q^4;q^4)$ are defined by \eqref{defn-BP} with \begin{align*} \alpha_r(q^4;q^4)=q^{8mr^2+8mr}(u^{2r+1}+u^{-2r-1}). \end{align*}By \eqref{Bailey's lemma-2} we obtain \begin{align} S_1(q)&=\frac{q^{2m+1}(-w,-wq^{4};q^4)_{\infty}}{(q^4;q^4)_\infty} \nonumber \\ &\qquad \times \sum_{r=0}^{\infty}\frac{(-q^4/w;q^4)_{r}}{(-q^4w;q^4)_{r}}w^rq^{(8m+2)r^2+(8m+2)r}(u^{2r+1}+u^{-2r-1}). 
\label{id-S1} \end{align} Substituting \eqref{id-S0} and \eqref{id-S1} into \eqref{1-proof-S}, we deduce that \begin{align}\label{exam1-S-result} &F(u,u^{-1},w;q^4)=\frac{(-wq^{2};q^4)_{\infty}^2}{(q^4;q^4)_\infty}\Big(1+\sum_{r=1}^{\infty}\frac{(-q^2/w;q^4)_{r}}{(-q^2w;q^4)_{r}}w^rq^{(8m+2)r^2}(u^{2r}+u^{-2r})\Big) \nonumber \\ & +\frac{q^{2m+1}(-w,-wq^{4};q^4)_{\infty}}{(q^4;q^4)_\infty}\sum_{r=0}^{\infty}\frac{(-q^4/w;q^4)_{r}}{(-q^4w;q^4)_{r}}w^rq^{(8m+2)r^2+(8m+2)r}(u^{2r+1}+u^{-2r-1}). \end{align} (1) Setting $w=1$ in \eqref{exam1-S-result} and using \eqref{JTP}, we have \begin{align} &F(u,u^{-1},1;q^4)=\frac{(-q^{2};q^4)_{\infty}^2}{(q^4;q^4)_\infty}\sum_{r=-\infty}^{\infty}q^{(8m+2)r^2}u^{2r}\nonumber\\ &\qquad +\frac{q^{2m+1}(-1,-q^{4};q^4)_{\infty}}{(q^4;q^4)_\infty}\sum_{r=-\infty}^{\infty}q^{(8m+2)r^2+(8m+2)r}u^{2r+1} \nonumber\\ &=\frac{J_{4}^{3}(-u^2q^{8m+2},-q^{8m+2}/u^2,q^{16m+4};q^{16m+4})_{\infty}}{J_{2}^{2}J_{8}^{2}}\nonumber\\ &\qquad +\frac{2uq^{2m+1}J_{8}^{2}(-u^2q^{16m+4},-1/u^2,q^{16m+4};q^{16m+4})_{\infty}}{J_{4}^{3}}. \end{align} In particular, when $u=q^\nu$ we obtain \eqref{thm1-id-1}. 
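Before continuing, \eqref{thm1-id-1} can be tested numerically for specific parameters. The Python sketch below (an illustration, outside the formal argument) takes $m=1$ and $\nu=0$, so that the right side of \eqref{thm1-id-1} becomes $J_4^3\overline{J}_{10,20}/(J_2^2J_8^2)+2q^3J_8^2\overline{J}_{0,20}/J_4^3$, and compares truncated $q$-expansions up to $q^{40}$:

```python
# Numerical sanity check of the first identity of the theorem for m = 1,
# nu = 0 (illustration only), with all series truncated at q^N.  Here
# J_m = (q^m;q^m)_inf and Jbar_{a,m} = (-q^a,-q^{m-a},q^m;q^m)_inf.
N = 40

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):  # series inverse, assuming a[0] == 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def prod_terms(exps, sign):  # prod over e of (1 + sign*q^e), truncated
    p = [0] * N
    p[0] = 1
    for e in exps:
        if e < N:
            f = [0] * N
            f[0] = 1
            f[e] += sign
            p = mul(p, f)
    return p

def J(m):
    return prod_terms(range(m, N, m), -1)

def Jbar(a, m):  # (-q^a, -q^{m-a}, q^m; q^m)_inf
    p = mul(prod_terms(range(a, N, m), 1), prod_terms(range(m - a, N, m), 1))
    return mul(p, J(m))

# Left side with m = 1, nu = 0; the exponent equals
# 2(i-j)^2 + ((i+j)-k)^2 + k^2, so contributing triples have i, j, k <= 12.
lhs = [0] * N
for i in range(13):
    for j in range(13):
        for k in range(13):
            E = 2 * (i - j) ** 2 + (i + j) ** 2 + 2 * k * k - 2 * (i + j) * k
            if E < N:
                den = mul(mul(prod_terms(range(4, 4 * i + 1, 4), -1),
                              prod_terms(range(4, 4 * j + 1, 4), -1)),
                          prod_terms(range(4, 4 * k + 1, 4), -1))
                t = inv(den)
                for s in range(N - E):
                    lhs[E + s] += t[s]

# Right side.
J2, J4, J8 = J(2), J(4), J(8)
t1 = mul(mul(mul(J4, mul(J4, J4)), Jbar(10, 20)),
         inv(mul(mul(J2, J2), mul(J8, J8))))
t2 = mul(mul(J8, J8), mul(Jbar(0, 20), inv(mul(J4, mul(J4, J4)))))
rhs = [0] * N
for n in range(N):
    rhs[n] += t1[n]
    if n + 3 < N:
        rhs[n + 3] += 2 * t2[n]

assert lhs == rhs
```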
(2) Setting $w=q^{-2}$ in \eqref{exam1-S-result} and using \eqref{JTP}, we have \begin{align} &F(u,u^{-1},q^{-2};q^4)=\frac{(-1;q^4)_{\infty}^2}{(q^4;q^4)_\infty}\Big(1+\sum_{r=1}^{\infty}\frac{1+q^{4r}}{2}q^{(8m+2)r^2-2r}(u^{2r}+u^{-2r})\Big)\nonumber\\ &\qquad +\frac{q^{2m+1}(-q^{-2},-q^{2};q^4)_{\infty}}{(q^4;q^4)_\infty}\sum_{r=0}^{\infty}\frac{1+q^{4r+2}}{1+q^2}q^{(8m+2)r^2+8mr}(u^{2r+1}+u^{-2r-1}) \nonumber\\ &=\frac{2J_{8}^{2}}{J_{4}^{3}}\sum_{r=-\infty}^{\infty}q^{(8m+2)r^2-2r}(u^{2r}+u^{-2r}) \nonumber \\ &\qquad \qquad +q^{2m-1}\frac{J_{4}^{3}}{J_{2}^{2}J_{8}^{2}}\sum_{r=-\infty}^{\infty}q^{(8m+2)r^2+8mr}(u^{2r+1}+u^{-2r-1}) \nonumber\\ &=2\frac{J_{8}^{2}}{J_{4}^{3}}\Big((-u^2q^{8m},-q^{8m+4}/u^2,q^{16m+4};q^{16m+4})_{\infty} \nonumber\\ &\qquad \qquad +(-u^2q^{8m+4},-q^{8m}/u^2,q^{16m+4};q^{16m+4})_{\infty} \Big) \nonumber \\ &\qquad +q^{2m-1}\frac{J_{4}^{3}}{J_{2}^{2}J_{8}^{2}}\Big(u(-u^2q^{16m+2},-q^{2}/u^2,q^{16m+4};q^{16m+4})_{\infty} \nonumber \\ &\qquad \qquad +u^{-1}(-u^2q^{2},-q^{16m+2}/u^2,q^{16m+4};q^{16m+4})_{\infty} \Big). \end{align} In particular, when $u=1$ we obtain \eqref{thm1-id-2}. (3) Setting $w=q^2$ in \eqref{exam1-S-result}, we have \begin{align} &F(u,u^{-1},q^{2};q^4)=\frac{(-q^4;q^4)_{\infty}^2}{(q^4;q^4)_\infty}\Big(1+\sum_{r=1}^{\infty}\frac{2}{1+q^{4r}}q^{(8m+2)r^2+2r}(u^{2r}+u^{-2r})\Big)\\ &\qquad +\frac{q^{2m+1}(-q^{2},-q^{6};q^4)_{\infty}}{(q^4;q^4)_\infty}\sum_{r=0}^{\infty}\frac{1+q^2}{1+q^{4r+2}}q^{(8m+2)r^2+(8m+4)r}(u^{2r+1}+u^{-2r-1}). 
\nonumber \end{align} Setting $u=q$ and using \eqref{JTP}, we deduce that \begin{align*} &F(q,q^{-1},q^{2};q^4)=\frac{(-q^4;q^4)_{\infty}^2}{(q^4;q^4)_\infty}\Big(1+2\sum_{r=1}^{\infty}q^{(8m+2)r^2}\Big)\nonumber\\ &\qquad +\frac{q^{2m}(-q^{2};q^4)_{\infty}^{2}}{(q^4;q^4)_\infty}\sum_{r=0}^{\infty}q^{(8m+2)r^2+(8m+2)r}\nonumber\\ &=\frac{(-q^4;q^4)_{\infty}^2(-q^{8m+2},-q^{8m+2},q^{16m+4};q^{16m+4})_{\infty}}{(q^4;q^4)_\infty}\nonumber\\ &\qquad +\frac{q^{2m}(-q^{2};q^4)_{\infty}^{2}(-1,-q^{16m+4},q^{16m+4};q^{16m+4})_{\infty}}{2(q^4;q^4)_\infty}. \qedhere \end{align*} \end{proof} \section{Dual to the lift of Zagier's Example 9}\label{sec-exam9} Example 9 provides three modular triples $(A,B_i,C_i)$ ($i=1,2,3$) where \begin{align} &A=\begin{pmatrix} 1 & -1/2 \\ -1/2 & 3/4 \end{pmatrix}, \quad B_1=\begin{pmatrix} -1/2 \\ 1/4 \end{pmatrix}, ~~ B_2=\begin{pmatrix} 0 \\ 0 \end{pmatrix}, ~~ B_3=\begin{pmatrix} 0 \\ 1/2 \end{pmatrix}\nonumber \\ &C_1=1/28, \quad C_2=-3/56, \quad C_3=1/56. \end{align} This lifts to the modular triples $(\widetilde{A},\widetilde{B}_i,C_i)$ where \begin{align} \widetilde{A}=\begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 3/4 & 1/4 \\ 1/2 & 1/4 & 3/4 \end{pmatrix}, \quad \widetilde{B}_1= \begin{pmatrix} -1/2 \\ 1/4 \\ -1/4 \end{pmatrix}, \widetilde{B}_2= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} , \widetilde{B}_3=\begin{pmatrix} 0 \\ 1/2 \\ 1/2 \end{pmatrix}. \end{align} Considering its dual, we expect that $(\widetilde{A}^\star,\widetilde{B}_i^\star,C_i^\star)$ are modular triples where \begin{align} &\widetilde{A}^\star=\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 2 \end{pmatrix}, \quad \widetilde{B}_1^\star= \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \quad \widetilde{B}_2^\star=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \quad \widetilde{B}_3^\star=\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} \nonumber \\ &C_1^\star=3/14, \quad C_2^\star=-1/14, \quad C_3^\star=5/14. 
\end{align} The matrix $\widetilde{A}^\star$ is essentially the Cartan matrix of the Dynkin diagram $A_3$ (see also \cite[Sec.\ 4.1.3]{GKPS}). We establish the following identities to prove their modularity. \begin{theorem}\label{thm-3} We have \begin{align} &\sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ij-2ik-2i+2j}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \label{Thm2.4-1} \\ &=4 \frac{J_4^{6}J_{28}^{3}}{J_2^{6}J_{4,28}J_{6,28}J_{8,28}}-\frac{1}{2}\frac{J_{1}^{5}J_{7}J_{1,14}J_{3,14}}{J_{2}^{5}J_{2,14}J_{6,14}J_{14}}-\frac{1}{2}\frac{J_{2}^{11}J_{5,14}J_{4,28}}{J_1^{6}J_{4}^{6}J_{28}}, \nonumber \\ &\sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ij-2ik}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \label{Thm2.4-2} \\ &=\frac{1}{2}\frac{J_{2}^{11}J_{1,14}J_{12,28}}{J_1^{6}J_{4}^{6}J_{28}} +\frac{1}{2}\frac{J_{1}^{5}J_{7}J_{3,14}J_{5,14}}{J_{2}^{5}J_{4,14}J_{6,14}J_{14}} -4q^2 \frac{J_4^{6}J_{28}^{3}}{J_2^{6}J_{4,28}J_{10,28}J_{12,28}}, \nonumber \\&\sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ij-2ik-2i+2j+2k}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \label{Thm2.4-3} \\ &=\frac{1}{2q}\frac{J_{2}^{11}J_{14}J_{3,28}J_{11,28}}{J_1^{6}J_{4}^{5}J_{28}J_{4,28}J_{12,28}} -\frac{1}{2q}\frac{J_{1}^{5}J_{7}J_{1,14}J_{5,14}}{J_{2}^{5}J_{2,14}J_{4,14}J_{14}} -4\frac{J_4^{6}J_{28}^{3}}{J_2^{6}J_{2,28}J_{8,28}J_{12,28}}. \nonumber \end{align} \end{theorem} \begin{proof} We define \begin{align} F(u,v,w;q^2):=\sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{2i^2+2j^2+2k^2-2ij-2ik}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k}. 
\end{align} By \eqref{Euler1} and \eqref{JTP} we have \begin{align}\label{integration1} &F(u,v,w;q^2)= \sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{(i-j-k)^2+(j-k)^2+i^2}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \nonumber\\ &=\oint \oint \sum_{i\geq 0}\frac{(uz)^iq^{i^2}}{(q^2;q^2)_i}\sum_{j\geq 0}\frac{(vy/z)^j}{(q^2;q^2)_j}\sum_{k\geq 0}\frac{(w/yz)^k}{(q^2;q^2)_k}\sum_{m=-\infty}^{\infty}z^{-m}q^{m^2}\sum_{n=-\infty}^{\infty}y^{-n}q^{n^2}\frac{dy}{2\pi iy}\frac{dz}{2\pi iz} \nonumber\\ &=\oint \oint \frac{(-quz,-qz,-q/z,q^2;q^2)_{\infty}(-qy,-q/y,q^2;q^2)_{\infty}}{(vy/z,w/yz;q^2)_{\infty}}\frac{dy}{2\pi iy}\frac{dz}{2\pi iz}\nonumber\\ &=\oint (-quz,-qz,-q/z,q^2,q^2;q^2)_{\infty} \nonumber\\ &\quad \times \oint \sum_{i\geq 0}\frac{(-qz/v;q^2)_i}{(q^2;q^2)_i}(vy/z)^i\sum_{j\geq 0}\frac{(-qz/w;q^2)_j}{(q^2;q^2)_j}(w/yz)^j \frac{dy}{2\pi iy}\frac{dz}{2\pi iz}\quad \text{(by (\ref{q-binomial}))} \nonumber\\ &=\oint (-quz,-qz,-q/z,q^2,q^2;q^2)_{\infty}\sum_{i\geq 0}\frac{(-qz/v,-qz/w;q^2)_i}{(q^2,q^2;q^2)_i}(\frac{vw}{z^2})^i \frac{dz}{2\pi iz} \nonumber\\ &=\oint (-quz,-qz,-q/z,q^2,q^2;q^2)_{\infty}\frac{(-qv/z,-qw/z;q^2)_{\infty}}{(vw/z^2,q^2;q^2)_{\infty}} \frac{dz}{2\pi iz} \quad \text{(by (\ref{Gauss}))} \nonumber\\ &=\oint \frac{(-quz,-qv/z,-qw/z,-qz,-q/z,q^2;q^2)_{\infty}}{(vw/z^2;q^2)_{\infty}} \frac{dz}{2\pi iz}. \end{align} (1) By \eqref{integration1}, the left side of \eqref{Thm2.4-1} is the same as \begin{align} F(q^{-2},q^2,1;q^2)&=\oint \frac{(-z/q,-q^3/z,-q/z,-qz,-q/z,q^2;q^2)_{\infty}}{(q^2/z^2;q^2)_{\infty}} \frac{dz}{2\pi iz} \nonumber\\ &=\oint \frac{(-z/q,-qz,-q/z,-q^3/z,q^2;q^2)_{\infty}}{(q/z,-q^2/z,q^2/z;q^2)_{\infty}} \frac{dz}{2\pi iz}. \end{align} Applying \eqref{Eq. 
(4.10.5)} with $$(A,B,C,D)=(2,2,0,3),$$ $$(a_1,a_2)=(-1/q,-q),\quad (b_1,b_2)=(-q,-q^3),\quad (d_1,d_2,d_3)=(q,-q^2,q^2),$$ we deduce that \begin{align}\label{Thm2.4-1-R} F(q^{-2},q^2,1;q^2)=J_2(R_1(q)+R_2(q)+R_3(q)), \end{align} where \begin{align} &R_1(q)=\frac{(-1,-q^2,-1,-q^2;q^2)_{\infty}}{(q^2,-q,q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q^2,-1;q^2)_{n}}{(q^2,-1,-q^2,-q,q;q^2)_{n}}q^{n^2+n},\\ &R_2(q)=\frac{(q,q^3,1/q,q;q^2)_{\infty}}{(q^2,-1/q,-1;q^2)_{\infty}}\sum_{n\geq 0}\frac{(q^3,q;q^2)_{n}}{(q^2,q,q^3,-q^3,-q^2;q^2)_{n}}(-1)^nq^{n^2+2n},\\ &R_3(q)=\frac{(-q,-q^3,-1/q,-q;q^2)_{\infty}}{(q^2,1/q,-1;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q^3,-q;q^2)_{n}}{(q^2,-q,-q^3,q^3,-q^2;q^2)_{n}}q^{n^2+2n}. \end{align} By \eqref{S. 81} with $q$ replaced by $q^2$, we have \begin{align}\label{Thm2.4-1-R1} R_1(q)=4\frac{(q^4;q^4)_{\infty}^{4}}{(q^2;q^2)_{\infty}^{5}(q^2;q^4)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2+n}}{(q^2;q^2)_{n}(q^2;q^4)_{n}} =4\frac{J_{4}^{6}J_{28}^{3}}{J_{2}^{7}J_{4,28}J_{6,28}J_{8,28}}. \end{align} Similarly, by \eqref{Entry 3.5.6} we have \begin{align}\label{Thm2.4-1-R2} R_2(q)=-\frac{1}{2}\frac{(q;q^2)_{\infty}^{4}}{(q^2;q^2)_{\infty}(-q;q)_{\infty}}\sum_{n\geq 0}\frac{(-1)^nq^{n^2+2n}}{(q^4;q^4)_{n}(-q;q^2)_{n+1}} =-\frac{1}{2}\frac{J_{1}^{5}J_{1,14}J_{3,14}J_{7}}{J_{2}^{6}J_{2,14}J_{6,14}J_{14}}. \end{align} Next, by \eqref{S. 119} we have \begin{align}\label{Thm2.4-1-R3} R_3(q)=-\frac{1}{2}\frac{(-q;q^2)_{\infty}^{4}}{(q;q)_{\infty}(-q^2;q^2)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2+2n}}{(q^4;q^4)_{n}(q;q^2)_{n+1}} =-\frac{1}{2}\frac{J_{2}^{10}J_{5,14}J_{4,28}}{J_{1}^{6}J_{4}^{6}J_{28}}. \end{align} Now substituting \eqref{Thm2.4-1-R1}--\eqref{Thm2.4-1-R3} into \eqref{Thm2.4-1-R}, we get \eqref{Thm2.4-1}. 
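As an independent check of \eqref{Thm2.4-1} (outside the formal argument), the following Python sketch compares truncated $q$-expansions of both sides up to $q^{30}$, using exact rational arithmetic for the $\pm\tfrac{1}{2}$ multipliers:

```python
# Numerical sanity check of the first identity of the theorem, with all
# series truncated at q^N (illustration only).  Here J_m = (q^m;q^m)_inf
# and J_{a,m} = (q^a, q^{m-a}, q^m; q^m)_inf.
from fractions import Fraction

N = 30

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):  # series inverse, assuming a[0] == 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def prod_terms(exps, sign):  # prod over e of (1 + sign*q^e), truncated
    p = [0] * N
    p[0] = 1
    for e in exps:
        if e < N:
            f = [0] * N
            f[0] = 1
            f[e] += sign
            p = mul(p, f)
    return p

def J(m):
    return prod_terms(range(m, N, m), -1)

def Jam(a, m):  # J_{a,m}
    p = mul(prod_terms(range(a, N, m), -1), prod_terms(range(m - a, N, m), -1))
    return mul(p, J(m))

def pw(a, e):  # a**e for truncated series
    p = [0] * N
    p[0] = 1
    for _ in range(e):
        p = mul(p, a)
    return p

# Left side; contributing triples all have i, j, k < 12.
lhs = [0] * N
for i in range(12):
    for j in range(12):
        for k in range(12):
            E = 2*i*i + 2*j*j + 2*k*k - 2*i*j - 2*i*k - 2*i + 2*j
            if 0 <= E < N:
                den = mul(mul(prod_terms(range(2, 2*i + 1, 2), -1),
                              prod_terms(range(2, 2*j + 1, 2), -1)),
                          prod_terms(range(2, 2*k + 1, 2), -1))
                t = inv(den)
                for s in range(N - E):
                    lhs[E + s] += t[s]

# Right side: the three eta/theta quotients.
t1 = mul(mul(pw(J(4), 6), pw(J(28), 3)),
         inv(mul(mul(pw(J(2), 6), Jam(4, 28)), mul(Jam(6, 28), Jam(8, 28)))))
t2 = mul(mul(pw(J(1), 5), mul(J(7), mul(Jam(1, 14), Jam(3, 14)))),
         inv(mul(mul(pw(J(2), 5), Jam(2, 14)), mul(Jam(6, 14), J(14)))))
t3 = mul(mul(pw(J(2), 11), mul(Jam(5, 14), Jam(4, 28))),
         inv(mul(pw(J(1), 6), mul(pw(J(4), 6), J(28)))))
rhs = [4 * t1[n] - Fraction(1, 2) * (t2[n] + t3[n]) for n in range(N)]

assert lhs == rhs
```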
(2) By (\ref{integration1}), the left side of (\ref{Thm2.4-2}) is the same as \begin{align} F(1,1,1;q^2)&=\oint \frac{(-qz,-q/z,-q/z,-qz,-q/z,q^2;q^2)_{\infty}}{(1/z^2;q^2)_{\infty}} \frac{dz}{2\pi iz} \nonumber\\ &=\oint \frac{(-qz,-qz,-q/z,-q/z,q^2;q^2)_{\infty}}{(1/z,-1/z,q/z;q^2)_{\infty}} \frac{dz}{2\pi iz}. \end{align} Applying (\ref{Eq. (4.10.5)}) with $$(A,B,C,D)=(2,2,0,3),$$ $$(a_1,a_2)=(-q,-q),\quad (b_1,b_2)=(-q,-q),\quad (d_1,d_2,d_3)=(1,-1,q),$$ we deduce that \begin{align}\label{Thm2.4-2-S} F(1,1,1;q^2)=J_2(S_1(q)+S_2(q)+S_3(q)), \end{align} where \begin{align} &S_1(q)=\frac{(-q,-q,-q,-q;q^2)_{\infty}}{(q^2,-1,q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q,-q;q^2)_{n}}{(q^2,-q,-q,-q^2,q;q^2)_{n}}q^{n^2+2n},\\ &S_2(q)=\frac{(q,q,q,q;q^2)_{\infty}}{(q^2,-1,-q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(q,q;q^2)_{n}}{(q^2,q,q,-q^2,-q;q^2)_{n}}(-1)^nq^{n^2+2n},\\ &S_3(q)=\frac{(-q^2,-q^2,-1,-1;q^2)_{\infty}}{(q^2,1/q,-1/q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q^2,-q^2;q^2)_{n}}{(q^2,-q^2,-q^2,q^3,-q^3;q^2)_{n}}q^{n^2+3n}. \end{align} By \eqref{S. 118} we have \begin{align}\label{Thm2.4-2-S1} S_1(q)=\frac{1}{2}\frac{(q^2;q^4)_{\infty}^{4}}{(q;q^2)_{\infty}^{5}(q^4;q^4)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2+2n}}{(q;q^2)_{n}(q^4;q^4)_{n}} =\frac{1}{2}\frac{J_{2}^{10}J_{1,14}J_{12,28}}{J_{1}^{6}J_{4}^{6}J_{28}}. \end{align} Similarly, by \eqref{Entry 3.5.5} we have \begin{align}\label{Thm2.4-2-S2} S_2(q)=\frac{1}{2}\frac{(q;q^2)_{\infty}^{5}}{(q^2;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-1)^nq^{n^2+2n}}{(q^4;q^4)_{n}(-q;q^2)_{n}} =\frac{1}{2}\frac{J_{1}^{5}J_{3,14}J_{5,14}J_{7}}{J_{2}^{6}J_{4,14}J_{6,14}J_{14}}. \end{align} Next, by \eqref{S. 82} with $q$ replaced by $q^2$, we have \begin{align}\label{Thm2.4-2-S3} S_3(q)=-4q^2\frac{(q^4;q^4)_{\infty}^{4}}{(q^2;q^2)_{\infty}^{5}(q^2;q^4)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2+3n}}{(q^2;q^2)_{n}(q^2;q^4)_{n+1}} =-4q^2\frac{J_{4}^{6}J_{28}^{3}}{J_{2}^{7}J_{4,28}J_{10,28}J_{12,28}}. 
\end{align} Now substituting \eqref{Thm2.4-2-S1}--\eqref{Thm2.4-2-S3} into \eqref{Thm2.4-2-S}, we get \eqref{Thm2.4-2}. (3) By (\ref{integration1}), the left side of (\ref{Thm2.4-3}) is the same as \begin{align} F(q^{-2},q^2,q^2;q^2)&=\oint \frac{(-z/q,-q^3/z,-q^3/z,-qz,-q/z,q^2;q^2)_{\infty}}{(q^4/z^2;q^2)_{\infty}} \frac{dz}{2\pi iz} \nonumber\\ &=\oint \frac{(-z/q,-qz,-q^3/z,-q/z,q^2;q^2)_{\infty}}{(q^2/z,-q^2/z,q^3/z;q^2)_{\infty}} \frac{dz}{2\pi iz}. \end{align} Applying (\ref{Eq. (4.10.5)}) with $$(A,B,C,D)=(2,2,0,3),$$ $$(a_1,a_2)=(-1/q,-q),\quad (b_1,b_2)=(-q^3,-q),\quad (d_1,d_2,d_3)=(q^2,-q^2,q^3),$$ we deduce that \begin{align}\label{Thm2.4-3-T} F(q^{-2},q^2,q^2;q^2)=J_2(T_1(q)+T_2(q)+T_3(q)), \end{align} where \begin{align} &T_1(q)=\frac{(-q,-q^3,-q,-1/q;q^2)_{\infty}}{(q^2,-1,q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q,-q^3;q^2)_{n}}{(q^2,-q,-q^3,-q^2,q;q^2)_{n}}q^{n^2},\\ &T_2(q)=\frac{(q,q^3,q,1/q;q^2)_{\infty}}{(q^2,-1,-q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(q,q^3;q^2)_{n}}{(q^2,q,q^3,-q^2,-q;q^2)_{n}}(-1)^nq^{n^2},\\ &T_3(q)=\frac{(-q^2,-q^4,-1,-1/q^2;q^2)_{\infty}}{(q^2,1/q,-1/q;q^2)_{\infty}}\sum_{n\geq 0}\frac{(-q^2,-q^4;q^2)_{n}}{(q^2,-q^2,-q^4,q^3,-q^3;q^2)_{n}}q^{n^2+n}. \end{align} By \eqref{S. 117} we have \begin{align}\label{Thm2.4-3-T1} T_1(q)=\frac{1}{2q}\frac{(-q;q^2)_{\infty}^{4}}{(q;q^2)_{\infty}(q^4;q^4)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2}}{(q;q^2)_{n}(q^4;q^4)_{n}} =\frac{1}{2q}\frac{J_{2}^{10}J_{14}J_{3,28}J_{11,28}}{J_{1}^{6}J_{4}^{5}J_{28}J_{4,28}J_{12,28}}. \end{align} Similarly, by \eqref{Entry 3.5.4} we have \begin{align}\label{Thm2.4-3-T2} T_2(q)=-\frac{1}{2q}\frac{(q;q^2)_{\infty}^{4}}{(-q;q^2)_{\infty}(q^4;q^4)_{\infty}}\sum_{n\geq 0}\frac{(-1)^nq^{n^2}}{(q^4;q^4)_{n}(-q;q^2)_{n}} =-\frac{1}{2q}\frac{J_{1}^{5}J_{1,14}J_{5,14}J_{7}}{J_{2}^{6}J_{2,14}J_{4,14}J_{14}}. \end{align} Next, by \eqref{S. 
80} with $q$ replaced by $q^2$, we have \begin{align}\label{Thm2.4-3-T3} T_3(q)=-4\frac{(-q^2;q^2)_{\infty}^{4}}{(q^2;q^2)_{\infty}(q^2;q^4)_{\infty}}\sum_{n\geq 0}\frac{q^{n^2+n}}{(q^2;q^2)_{n}(q^2;q^4)_{n+1}} =-4\frac{J_{4}^{6}J_{28}^{3}}{J_{2}^{7}J_{2,28}J_{8,28}J_{12,28}}. \end{align} Now substituting \eqref{Thm2.4-3-T1}--\eqref{Thm2.4-3-T3} into \eqref{Thm2.4-3-T}, we get \eqref{Thm2.4-3}. \end{proof} \begin{rem} The idea behind the deduction of \eqref{integration1} comes from Wang's proof \cite[Theorem 4.8]{Wang-rank3} of Zagier's seventh example on rank three Nahm sums \cite[Table 3]{Zagier}. This will also be used in \eqref{integration2} in Section \ref{sec-exam11}. \end{rem} \section{Dual to the lift of Zagier's Example 11}\label{sec-exam11} Surprisingly, the proof of Theorem \ref{thm-lift-11} will rely on the following identities associated with Zagier's Example 10: \begin{align} \sum_{i,j\geq 0} \frac{q^{2i^2+2ij+2j^2}}{(q^3;q^3)_i(q^3;q^3)_j} &=\frac{1}{J_3}\left(J_{21,45}-q^3J_{6,45}+2q^2J_{9,45} \right), \label{conj-10-2} \\ \sum_{i,j\geq 0} \frac{q^{2i^2+2ij+2j^2-2i-j}}{(q^3;q^3)_i(q^3;q^3)_j}&=\frac{1}{J_3}\left(2J_{18,45}+qJ_{12,45}+q^4J_{3,45}\right). \label{conj-10-1} \end{align} They were conjectured by Vlasenko--Zwegers \cite[p.\ 633, Table 1]{VZ} and proved by Cao--Rosengren--Wang \cite{CRW} through a purely $q$-series approach. The identities \eqref{conj-10-2} and \eqref{conj-10-1} give 3-dissection formulas for the Nahm sums on their left sides. We record the following consequence, which plays a key role in our proof of Theorem \ref{thm-lift-11}. \begin{lemma}\label{lem-3-dissection} For $r\in \{-1,0,1\}$ we define \begin{align} S_r(q):=\sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r \!\!\! \pmod{3} \end{smallmatrix}} \frac{q^{2i^2+2ij+2j^2}}{(q^3;q^3)_i(q^3;q^3)_j}, \\ T_r(q):=\sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r \!\!\! \pmod{3} \end{smallmatrix}} \frac{q^{2i^2+2ij+2j^2-2i-j}}{(q^3;q^3)_i(q^3;q^3)_j}.
\end{align} We have \begin{align} & S_0(q)=\frac{J_{21,45}-q^3J_{6,45}}{J_3}, \label{11-S0-result}\\ & S_1(q)=S_{-1}(q)=q^2\frac{J_{9,45}}{J_3}, \label{11-S1-result} \\ &T_0(q)+T_1(q)=2\frac{J_{18,45}}{J_3}, \label{11-T0T1-result} \\ &T_{-1}(q)=\frac{qJ_{12,45}+q^4J_{3,45}}{J_3}. \label{11-T2-result} \end{align} \end{lemma} \begin{proof} Interchanging $i$ with $j$, we see that $S_1(q)=S_{-1}(q)$. Note that \begin{align} &2i^2+2ij+2j^2=2(i-j)^2+6ij \nonumber \\ &\equiv 2(i-j)^2 \equiv \left\{\begin{array}{ll} 0 \pmod{3} & i-j\equiv 0 \pmod{3},\\ -1 \pmod{3} & i-j\equiv 1,-1 \pmod{3}. \end{array} \right. \end{align} From \eqref{conj-10-2} we obtain \eqref{11-S0-result} and \eqref{11-S1-result}. Note that \begin{align} &2i^2+2ij+2j^2-2i-j=2(i-j)^2+(i-j)+6ij-3i \nonumber \\ &\equiv 2(i-j)^2+(i-j)\equiv \left\{\begin{array}{ll} 0 \pmod{3} & i-j\equiv 0,1 \pmod{3},\\ 1 \pmod{3} & i-j\equiv -1 \pmod{3}. \end{array} \right. \end{align} From \eqref{conj-10-1} we obtain \eqref{11-T0T1-result} and \eqref{11-T2-result}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-lift-11}] We define \begin{align} F(u,v,w;q^4):=\sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{3i^2+3j^2+3k^2-2ij-2ik-2jk}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}.
\end{align} By \eqref{JTP} and \eqref{Euler1} we have \begin{align}\label{integration2} &F(u,v,w;q^4)= \sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{(i-j-k)^2+2(j-k)^2+2i^2}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} \nonumber\\ &=\mathrm{CT}_y \mathrm{CT}_z \sum_{i\geq 0}\frac{(uz)^iq^{2i^2}}{(q^4;q^4)_i}\sum_{j\geq 0}\frac{(vy/z)^j}{(q^4;q^4)_j}\sum_{k\geq 0}\frac{(w/yz)^k}{(q^4;q^4)_k}\sum_{m=-\infty}^{\infty}z^{-m}q^{m^2}\sum_{n=-\infty}^{\infty}y^{-n}q^{2n^2} \nonumber\\ &=\mathrm{CT}_y\mathrm{CT}_z \frac{(-q^2uz;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}(-q^2y,-q^2/y,q^4;q^4)_{\infty}}{(vy/z,w/yz;q^4)_{\infty}} \nonumber\\ &= \mathrm{CT}_z (-q^2uz,q^4;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty} \nonumber\\ &\quad \times \mathrm{CT}_y \sum_{i\geq 0}\frac{(-q^2z/v;q^4)_i}{(q^4;q^4)_i}(vy/z)^i\sum_{j\geq 0}\frac{(-q^2z/w;q^4)_j}{(q^4;q^4)_j}(w/yz)^j \quad \text{(by (\ref{q-binomial}))} \nonumber\\ &=\mathrm{CT}_z (-q^2uz,q^4;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}\sum_{i\geq 0}\frac{(-q^2z/v,-q^2z/w;q^4)_i}{(q^4,q^4;q^4)_i}(\frac{vw}{z^2})^i \nonumber\\ &=\mathrm{CT}_z (-q^2uz,q^4;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}\frac{(-q^2v/z,-q^2w/z;q^4)_{\infty}}{(vw/z^2,q^4;q^4)_{\infty}} \quad \text{(by (\ref{Gauss}))} \nonumber\\ &=\mathrm{CT}_z \frac{(-q^2uz,-q^2v/z,-q^2w/z;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}}{(vw/z^2;q^4)_{\infty}}. 
\end{align} (1) By \eqref{integration2} we have \begin{align} &(q^4;q^4)_\infty F(1,1,1;q^4) \nonumber \\ &=\mathrm{CT}_z \frac{(-q^2/z;q^4)_\infty}{(1/z^2;q^4)_{\infty}}(-q^2z,-q^2/z,q^4;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty} \nonumber\\ &= \mathrm{CT}_z \sum_{i=0}^\infty \frac{z^{-i}q^{2i^2}}{(q^4;q^4)_i} \sum_{j=0}^\infty \frac{z^{-2j}}{(q^4;q^4)_j} \sum_{m=-\infty}^\infty z^{-m}q^{2m^2} \sum_{n=-\infty}^\infty z^nq^{n^2} \quad \nonumber \\ &\qquad \qquad \qquad \qquad \qquad \qquad \text{(by \eqref{JTP} and \eqref{Euler1})} \nonumber \\ &=\sum_{i,j\geq 0} \frac{q^{2i^2}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{2m^2+(m+i+2j)^2} \nonumber \\ &=\sum_{i,j\geq 0} \frac{q^{2i^2+(i+2j)^2}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3m^2+2m(i+2j)} \nonumber \\ &=\sum_{i,j\geq 0} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}(i+2j))^2} \label{11-key-step-1} \\ &=\sum_{r=-1}^ 1 \sum_{n\geq 0} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i+2j=3n+r \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}(3n+r))^2} \nonumber \\ &=\sum_{r=-1}^ 1 \sum_{n\geq 0} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i+2j=3n+r \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}r)^2} \nonumber \\ &=\sum_{r=-1}^ 1 q^{\frac{1}{3}r^2} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r \!\!\!\! \pmod{3} \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3m^2+2mr} \nonumber \\ &=\sum_{r=-1}^ 1 q^{\frac{1}{3}r^2} (-q^{3-2r},-q^{3+2r},q^6;q^6)_\infty \sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r \!\!\!\! 
\pmod{3} \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2)}}{(q^4;q^4)_i(q^4;q^4)_j} \nonumber \\ &=(-q^3,-q^3,q^6;q^6)_\infty S_0(q^{\frac{4}{3}}) +q^{\frac{1}{3}}(-q,-q^5,q^6;q^6)_\infty (S_1(q^{\frac{4}{3}})+S_{-1}(q^{\frac{4}{3}})). \label{11-proof-1} \end{align} Here the second-to-last equality follows from \eqref{JTP}. Substituting \eqref{11-S0-result} and \eqref{11-S1-result} with $q$ replaced by $q^{4/3}$ into \eqref{11-proof-1}, we obtain \eqref{eq-thm-11-1}. (2) By \eqref{integration2} we have \begin{align} &(q^4;q^4)_\infty F(q^{-2},q^{2},q^{-2};q^4) \nonumber \\ &=\mathrm{CT}_z \frac{(-1/z;q^4)_\infty}{(1/z^2;q^4)_{\infty}}(-z,-q^4/z,q^4;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty} \nonumber\\ &= \mathrm{CT}_z \sum_{i=0}^\infty \frac{z^{-i}q^{2i^2-2i}}{(q^4;q^4)_i} \sum_{j=0}^\infty \frac{z^{-2j}}{(q^4;q^4)_j} \sum_{m=-\infty}^\infty z^{-m}q^{2m^2+2m} \sum_{n=-\infty}^\infty z^nq^{n^2} \quad \nonumber \\ &\qquad \qquad \qquad \qquad \qquad \qquad \text{(by \eqref{JTP} and \eqref{Euler1})} \nonumber \\ &=\sum_{i,j\geq 0} \frac{q^{2i^2-2i}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{2m^2+2m+(m+i+2j)^2} \nonumber \\ &=\sum_{i,j\geq 0} \frac{q^{2i^2-2i+(i+2j)^2}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3m^2+2m(i+2j+1)} \nonumber \\ &=q^{-\frac{1}{3}}\sum_{i,j\geq 0} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2-2i-j)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}(i+2j+1))^2} \label{11-key-step-2} \\ &=q^{-\frac{1}{3}}\sum_{r=-1}^{1} \sum_{n\geq 0} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i+2j=3n+r-1 \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2-2i-j)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}(3n+r))^2} \nonumber \\ &=q^{-\frac{1}{3}}\sum_{r=-1}^{1} \sum_{n\geq 0} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i+2j=3n+r-1 \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2-2i-j)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3(m+\frac{1}{3}r)^2} \nonumber \\ &=q^{-\frac{1}{3}}\sum_{r=-1}^{1}
q^{\frac{1}{3}r^2} \sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r-1 \!\!\!\! \pmod{3} \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2-2i-j)}}{(q^4;q^4)_i(q^4;q^4)_j} \sum_{m=-\infty}^\infty q^{3m^2+2mr} \nonumber \\ &=q^{-\frac{1}{3}}\sum_{r=-1}^{1} q^{\frac{1}{3}r^2} (-q^{3-2r},-q^{3+2r},q^6;q^6)_\infty \sum_{\begin{smallmatrix} i,j\geq 0 \\ i-j\equiv r-1 \!\!\!\! \pmod{3} \end{smallmatrix}} \frac{q^{\frac{4}{3}(2i^2+2ij+2j^2-2i-j)}}{(q^4;q^4)_i(q^4;q^4)_j} \nonumber \\ &=q^{-\frac{1}{3}}(-q^3,-q^3,q^6;q^6)_\infty T_{-1}(q^{\frac{4}{3}}) +(-q,-q^5,q^6;q^6)_\infty (T_0(q^{\frac{4}{3}})+T_{1}(q^{\frac{4}{3}})). \label{11-proof-2} \end{align} Here the second-to-last equality follows from \eqref{JTP}. Substituting \eqref{11-T0T1-result} and \eqref{11-T2-result} with $q$ replaced by $q^{4/3}$ into \eqref{11-proof-2}, we obtain \eqref{eq-thm-11-2}. \end{proof} We remark that the involvement of \eqref{conj-10-2} and \eqref{conj-10-1} was entirely unexpected until we arrived at \eqref{11-key-step-1} and \eqref{11-key-step-2}. \section{Applications and Some Other Nahm Sums}\label{sec-applictaion} Section \ref{sec-exam1} provides two general matrices $\widetilde{A}$ and $\widetilde{A}^\star$ for which four modular triples can be found (see \eqref{exam1-lift} and \eqref{A-exam1-lift-dual}). Besides the data listed there, for specific values of $a$ one may find extra vectors and scalars that form modular triples with these matrices. For instance, if we set $a=2$ in \eqref{exam1-lift}, we obtain the seventh matrix in Zagier's rank three examples \cite[Table 3]{Zagier}. There are seven choices of vectors $B$ so that $(\widetilde{A},B,C)$ is modular for some suitable $C$. See \cite[Theorem 4.8]{Wang2024} for the corresponding identities. We will not consider its dual here since the dual examples of all of Zagier's rank three examples will be discussed in a forthcoming paper as promised in \cite{Wang-rank3}.
Instead, we give another interesting example, corresponding to $a=3/2$, to illustrate the phenomenon. \subsection{A special matrix and the corresponding Nahm sums} If we set $a=3/2$ in \eqref{eq-exam1-original} and \eqref{exam1-lift}, we obtain \begin{align}\label{eq-A-32} A=\begin{pmatrix} 3/2 & -1/2 \\ -1/2 & 3/2 \end{pmatrix}, \quad \widetilde{A}=\begin{pmatrix} 3/2 & 1/2 & 1 \\ 1/2 & 3/2 & 1 \\ 1 & 1 & 2 \end{pmatrix}. \end{align} Let \begin{align}\label{F-defn-special} F(u,v,w;q^4):=\sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{3i^2+3j^2+4k^2+2ij+4ik+4jk}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}. \end{align} Recalling the identity \eqref{eq-lift-id}, we have \begin{align} F(q^{b_1},q^{b_2},q^{b_1+b_2};q^4)=\sum_{i,j\geq 0} \frac{q^{3i^2-2ij+3j^2+b_1i+b_2j}}{(q^4;q^4)_i(q^4;q^4)_j}. \label{eq-thm-32-relation} \end{align} The modular triples in \eqref{exam1-lift} with $a=3/2$ give the first four modular triples $(\widetilde{A},B,C)$ in Table \ref{tab:32-triple}. \begin{table}[htbp] \centering \begin{tabular}{ccccccc} \hline $B$ & $\begin{pmatrix} c \\ -c \\ 0 \end{pmatrix}$ & $\begin{pmatrix} -1/2 \\ -1/2 \\ -1 \end{pmatrix}$ & $\begin{pmatrix} 1/4 \\ 3/4 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 3/4 \\ 1/4 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} -1/4 \\ -3/4 \\ -1/2 \end{pmatrix}$ & $\begin{pmatrix} -3/4 \\ -1/4 \\ -1/2 \end{pmatrix}$ \\ $C$ & ${c^2}/{3}-{1}/{24}$ & ${1}/{24}$ & ${7}/{48}$ & ${7}/{48}$ & ${1}/{48}$ & $1/48$ \\ \hline $B$ & $\begin{pmatrix} 0 \\ -1/2 \\ 0 \end{pmatrix}$ & $\begin{pmatrix} -1/2 \\ 0 \\ 0 \end{pmatrix}$ & $\begin{pmatrix} 1/4 \\ -1/4 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} -1/4 \\ 1/4 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} 1/2 \\ 1/2 \\ 0\end{pmatrix}$ & $\begin{pmatrix} 1 \\ 1 \\ 1\end{pmatrix}$ \\ $C$ & $-1/96$ & $-1/96$ & $1/48$ & $1/48$ & $1/24$ & $7/24$ \\ \hline \end{tabular} \caption{Modular triples associated with the matrix $\widetilde{A}$ in \eqref{eq-A-32}.} \label{tab:32-triple} \end{table} The corresponding Nahm sum identities
associated with the first three triples inherited from \eqref{VZ-id-1}--\eqref{VZ-id-3} are \begin{align} \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk+ci-cj}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}&=\frac{(-q^{3+c},-q^{3-c},q^6;q^6)_\infty}{(q^4;q^4)_\infty}, \label{lift-VZ-1} \\ \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk-2i-2j-4k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}&=2\frac{(-q,-q^5,q^6;q^6)_\infty}{(q^4;q^4)_\infty}, \label{lift-VZ-2} \\ \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk+i+3j+4k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}&=\frac{(-q^6,-q^6,q^6;q^6)_\infty}{(q^4;q^4)_\infty}. \label{lift-VZ-3} \end{align} Interchanging $i$ with $j$ in \eqref{lift-VZ-3} yields the identity for the fourth triple. As recorded in Table \ref{tab:32-triple}, we find eight more possible modular triples. Due to the symmetry of $i$ and $j$ in the quadratic form associated with $\widetilde{A}$, there are essentially five different Nahm sums to be considered. The modularity of three of them follows from the cases $u=1,q^{-1},q$ of the following theorem. \begin{theorem}\label{thm-2} We have \begin{align} \sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk-2j}u^{i+j+2k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}= (-uq;q^2)_{\infty}. \end{align} \end{theorem} \begin{proof} By \eqref{Euler1} and \eqref{JTP} we have \begin{align}\label{integration3} &F(u,v,w;q^4)= \sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{2i^2+2j^2+(i+j+2k)^2}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=\mathrm{CT}_z \sum_{i\geq 0}\frac{(uz)^iq^{2i^2}}{(q^4;q^4)_i} \sum_{j\geq 0}\frac{(vz)^jq^{2j^2}}{(q^4;q^4)_j}\sum_{k\geq 0}\frac{(wz^2)^k}{(q^4;q^4)_k}\sum_{m=-\infty}^{\infty}z^{-m}q^{m^2} \nonumber \\ &=\mathrm{CT}_z \frac{(-q^2uz,-q^2vz;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}}{(wz^2;q^4)_{\infty}}.
\end{align} By \eqref{integration3} we have \begin{align*} &F(u,q^{-2}u,u^{2};q^4)=\mathrm{CT}_z \frac{(-q^2uz,-uz;q^4)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}}{(u^2z^2;q^4)_{\infty}}\nonumber \\ &=\mathrm{CT}_z \frac{(-uz;q^2)_{\infty}(-qz,-q/z,q^2;q^2)_{\infty}}{(u^2z^2;q^4)_{\infty}} =\mathrm{CT}_z \frac{(-qz,-q/z,q^2;q^2)_{\infty}}{(uz;q^2)_{\infty}} \nonumber \\ &=\mathrm{CT}_z \sum_{i=0}^\infty \frac{u^iz^i}{(q^2;q^2)_i} \sum_{m=-\infty}^{\infty}z^{-m}q^{m^2} \nonumber \\ &=\sum_{i=0}^\infty \frac{u^iq^{i^2}}{(q^2;q^2)_i} =(-uq;q^2)_{\infty}. \quad \text{(by \eqref{Euler1})} \qedhere \end{align*} \end{proof} The last two choices of $(B,C)$ in Table \ref{tab:32-triple} were found by applying the dual operator to the Nahm sums in Theorem \ref{thm-2-Ex1-3/2-in} below. Zagier's duality conjecture implies that $q^{1/6}F(q^2,q^2,1;q^4)$ and $q^{7/6}F(q^4,q^4,q^4;q^4)$ are likely to be modular. However, we find that they can be expressed as sums of two modular forms of weights 0 and 1, respectively. Hence $q^{C}F(q^2,q^2,1;q^4)$ and $q^{C}F(q^4,q^4,q^4;q^4)$ are not modular for any $C$. \begin{theorem}\label{thm-nonmodular} We have \begin{align} &\sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk+2i+2j}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}= \frac{1}{3}\frac{J_1^2J_4}{J_2}+\frac{2}{3}\frac{J_2^2J_3J_{12}}{J_1J_4^2J_6}, \label{nonmodular-id-1}\\ &\sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk+4i+4j+4k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}= -\frac{1}{3}q^{-1}\frac{J_1^2J_4}{J_2}+\frac{1}{3}q^{-1}\frac{J_2^2J_3J_{12}}{J_1J_4^2J_6}. \label{nonmodular-id-2} \end{align} \end{theorem} \begin{proof} From the definition \eqref{F-defn-special} we have \begin{align}\label{eq-proof-nonmodular-F} &F(u,u,w;q^4)=\sum_{n,k\geq 0} \sum_{j=0}^n \frac{u^nw^kq^{2n^2+(n-2j)^2+4k^2+4nk}}{(q^4;q^4)_{n-j}(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=S_0(u,w;q^4)+S_1(u,w;q^4). 
\end{align} Here $S_0(u,w;q^4)$ and $S_1(u,w;q^4)$ correspond to the sum with even and odd values of $n$, respectively. That is, \begin{align} &S_0(u,w;q^4)=\sum_{n,k\geq 0} \sum_{j=0}^{2n} \frac{u^{2n}w^kq^{8n^2+4(n-j)^2+4k^2+8nk}}{(q^4;q^4)_{2n-j}(q^4;q^4)_j(q^4;q^4)_k}, \\ &S_1(u,w;q^4)=\sum_{n,k\geq 0} \sum_{j=0}^{2n+1} \frac{u^{2n+1}w^kq^{8n^2+8n+4(n-j)^2+4(n-j)+4k^2+8nk+4k+3}}{(q^4;q^4)_{2n-j+1}(q^4;q^4)_j(q^4;q^4)_k}. \end{align} We have \begin{align} &S_0(u,w;q^4)=\sum_{n,k\geq 0} \sum_{j=0}^{2n} \frac{u^{2n}w^kq^{4n^2+4(n-j)^2+4(n+k)^2}}{(q^4;q^4)_{2n-j}(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=\sum_{n,k\geq 0} \frac{u^{2n}w^kq^{4n^2+4(n+k)^2}}{(q^4;q^4)_k}\Big(\frac{1}{(q^4;q^4)_n^2}+2\sum_{r=1}^n \frac{q^{4r^2}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r}} \Big) \nonumber \\ &=\sum_{m=0}^\infty \sum_{n=0}^m \frac{u^{2n}w^{m-n}q^{4n^2+4m^2}}{(q^4;q^4)_{m-n}} \Big(\frac{1}{(q^4;q^4)_n^2}+2\sum_{r=1}^n \frac{q^{4r^2}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r}}\Big). \label{nonmodular-S0-start} \end{align} Similarly, we have \begin{align} &S_1(u,w;q^4)=\sum_{n,k\geq 0} \sum_{j=0}^{2n+1} \frac{u^{2n+1}w^kq^{4n^2+4n+4(n-j)^2+4(n-j)+4(n+k)^2+4(n+k)+3}}{(q^4;q^4)_{2n-j+1}(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=2\sum_{n,k\geq 0} \frac{u^{2n+1}w^kq^{4n^2+4n+3+4(n+k)^2+4(n+k)}}{(q^4;q^4)_k}\sum_{r=0}^n \frac{q^{4r^2+4r}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r+1}} \nonumber \\ &=2\sum_{m=0}^\infty \sum_{n=0}^m \frac{u^{2n+1}w^{m-n}q^{4n^2+4n+3+4m^2+4m}}{(q^4;q^4)_{m-n}} \sum_{r=0}^n \frac{q^{4r^2+4r}}{(q^4;q^4)_{n-r}(q^4;q^4)_{n+r+1}}. \label{nonmodular-S1} \end{align} Letting $u=q^2$ and $w=1$ and then replacing $q$ by $q^{1/4}$, we have \begin{align} &S_0(q^{1/2},1;q)=\sum_{m=0}^\infty q^{m^2} \sum_{n=0}^m \frac{q^{n^2+n}}{(q;q)_{m-n}} \Big(\frac{1}{(q;q)_n^2}+2\sum_{r=1}^n \frac{q^{r^2}}{(q;q)_{n-r}(q;q)_{n+r}} \Big), \label{add-S0-start} \\ &S_1(q^{1/2},1;q)=2q^{5/4}\sum_{m=0}^\infty q^{m^2+m} \sum_{n=0}^m \frac{q^{n^2+2n}}{(q;q)_{m-n}} \sum_{r=0}^n \frac{q^{r^2+r}}{(q;q)_{n-r}(q;q)_{n+r+1}}. 
\label{add-S1-start} \end{align} In order to calculate $S_0(q^{1/2},1;q)$ we define a Bailey pair $(\alpha_n^{(0)}(1;q),\beta_n^{(0)}(1;q))$ with \begin{align} \alpha_n^{(0)}(1;q):=\left\{\begin{array}{ll} 1 & n=0, \\ 2q^{n^2} & n\geq 1. \end{array}\right. \end{align} Using \eqref{eq-BP-lift} we obtain a Bailey pair $(\alpha_n^{(1)}(q;q),\beta_n^{(1)}(q;q))$ where \begin{align}\label{S0-BP-1} \alpha_n^{(1)}(q;q)=(2n+1)\frac{q^{n^2}(1-q^{2n+1})}{1-q}. \end{align} Applying \eqref{BP-S1} we obtain the Bailey pair \begin{align}\label{S0-BP-2} \alpha_n^{(2)}(q;q)=(2n+1)\frac{q^{2n^2+n}(1-q^{2n+1})}{1-q}, \quad \beta_n^{(2)}(q;q)=\sum_{k=0}^n \frac{q^{k^2+k}}{(q;q)_{n-k}}\beta_k^{(1)}(q;q). \end{align} Next, applying Lemma \ref{lem-BP-down} we obtain the Bailey pair \begin{equation}\label{S0-BP-3} \begin{split} \alpha_0^{(3)}(1;q)&=1, ~~ \alpha_n^{(3)}(1;q)=(2n+1)q^{2n^2+n}-(2n-1)q^{2n^2-n} ~~ (n\geq 1), \\ \beta_n^{(3)}(1;q)&=\beta_n^{(2)}(q;q). \end{split} \end{equation} Using \eqref{eq-BP-id-key} with the Bailey pairs $(\alpha_n^{(i)};\beta_n^{(i)})$ ($i=0,1,2,3$) in \eqref{add-S0-start}, we deduce that \begin{align} &S_0(q^{1/2},1;q)=\sum_{m=0}^\infty q^{m^2} \sum_{n=0}^m \frac{q^{n^2+n}}{(q;q)_{m-n}} \beta_n^{(0)}(1;q) \nonumber \\ &=\sum_{m=0}^\infty q^{m^2} \beta_m^{(2)}(q;q) =\sum_{m=0}^\infty q^{m^2} \beta_m^{(3)}(1;q) \nonumber \\ &=\frac{1}{(q;q)_\infty} \sum_{n=0}^\infty q^{n^2} \alpha_n^{(3)}(1;q) \nonumber \\ &=\frac{1}{(q;q)_\infty} \Big(1+\sum_{n=1}^\infty \big((2n+1)q^{3n^2+n}-(2n-1)q^{3n^2-n}\big)\Big) \nonumber \\ &=\frac{1}{(q;q)_\infty} \sum_{n=-\infty}^\infty (2n+1)q^{3n^2+n}. \label{nonmodular-S0-result} \end{align} In order to calculate $S_1(q^{1/2},1;q)$ we define a Bailey pair $(\widetilde{\alpha}_n^{(0)}(q;q),\widetilde{\beta}_n^{(0)}(q;q))$ with \begin{align} \widetilde{\alpha}_n^{(0)}(q;q)=q^{n^2+n}, \quad n\geq 0.
\end{align} Using \eqref{eq-BP-lift} we obtain the Bailey pair \begin{align} \widetilde{\alpha}_n^{(1)}(q^2;q)=(n+1)\frac{(1-q^{2n+2})q^{n^2+n}}{1-q^2}, \quad \widetilde{\beta}_n^{(1)}(q^2;q)=\widetilde{\beta}_n^{(0)}(q;q). \end{align} Using \eqref{BP-S1} we obtain the Bailey pair \begin{equation} \begin{split} \widetilde{\alpha}_n^{(2)}(q^2;q)&=(n+1)\frac{q^{2n^2+3n}(1-q^{2n+2})}{1-q^2}, \\ \widetilde{\beta}_n^{(2)}(q^2;q)&=\sum_{k=0}^n \frac{q^{k^2+2k}}{(q;q)_{n-k}}\widetilde{\beta}_k^{(1)}(q^2;q). \end{split} \end{equation} Next, using Lemma \ref{lem-BP-down} we obtain the Bailey pair \begin{align} \widetilde{\alpha}_n^{(3)}(q;q)=(n+1)q^{2n^2+3n}-nq^{2n^2+n-1}, \quad \widetilde{\beta}_n^{(3)}(q;q)=\widetilde{\beta}_n^{(2)}(q^2;q). \end{align} Using \eqref{eq-BP-id-key} with the Bailey pairs $(\widetilde{\alpha}_n^{(i)};\widetilde{\beta}_n^{(i)})$ ($i=0,1,2,3$) in \eqref{add-S1-start}, we deduce that \begin{align} &S_1(q^{1/2},1;q)=2\frac{q^{5/4}}{1-q} \sum_{m=0}^\infty q^{m^2+m} \sum_{n=0}^m \frac{q^{n^2+2n}}{(q;q)_{m-n}} \widetilde{\beta}_n^{(0)}(q;q) \nonumber \\ &=2\frac{q^{5/4}}{1-q} \sum_{m=0}^\infty q^{m^2+m} \widetilde{\beta}_m^{(2)}(q^2;q)=2\frac{q^{5/4}}{1-q} \sum_{m=0}^\infty q^{m^2+m}\widetilde{\beta}_m^{(3)}(q;q) \nonumber \\ &=2\frac{q^{5/4}}{(q;q)_\infty} \sum_{n=0}^\infty q^{n^2+n}\widetilde{\alpha}_n^{(3)}(q;q) \nonumber \\ &=2\frac{q^{5/4}}{(q;q)_\infty} \sum_{n=0}^\infty \Big( (n+1)q^{3n^2+4n}-nq^{3n^2+2n-1}\Big) \nonumber \\ &=-2\frac{q^{1/4}}{(q;q)_\infty} \sum_{n=-\infty}^\infty nq^{3n^2+2n}. \label{nonmodular-S1-result} \end{align} Substituting \eqref{nonmodular-S0-result} and \eqref{nonmodular-S1-result} with $q$ replaced by $q^4$ into \eqref{eq-proof-nonmodular-F}, we deduce that \begin{align} &F(q^2,q^2,1;q^4)=\frac{1}{(q^4;q^4)_\infty} \Big(\sum_{n=-\infty}^\infty (2n+1)q^{12n^2+4n}-\sum_{n=-\infty}^\infty (2n)q^{12n^2+8n+1}\Big) \nonumber \\ &=\frac{1}{(q^4;q^4)_\infty} \sum_{k=-\infty}^\infty (k+1)q^{3k^2+2k}.
\label{nonmodular-F-final} \end{align} Recall the following identity from \cite[Eq.\ (2.34)]{Wang2024}: \begin{align} \sum_{k=-\infty}^\infty (k+1)q^{3k^2+2k}=\frac{1}{3}\frac{J_1^2J_4^2}{J_2}+\frac{2}{3}\frac{J_2^2J_3J_{12}}{J_1J_4J_6}. \label{Wang-id-1} \end{align} Substituting \eqref{Wang-id-1} into \eqref{nonmodular-F-final}, we obtain \eqref{nonmodular-id-1}. We can prove \eqref{nonmodular-id-2} in a similar way, but here we prefer to use a different approach. Note that \begin{align} &F(q^2,q^{-2},1;q^4)-F(q^2,q^2,1;q^4)=\sum_{i,j,k\geq 0} \frac{q^{3i^2+3j^2+4k^2+2ij+4ik+4jk+2i-2j}(1-q^{4j})}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} \nonumber \\ &=\sum_{i,j,k\geq 0} \frac{q^{3i^2+3(j+1)^2+4k^2+2i(j+1)+4ik+4(j+1)k+2i-2(j+1)}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k} =qF(q^4,q^4,q^4;q^4). \label{nonmodular-relation} \end{align} Substituting \eqref{lift-VZ-1} and \eqref{nonmodular-id-1} into \eqref{nonmodular-relation}, we obtain \eqref{nonmodular-id-2}. \end{proof} \subsection{Dual Nahm sums} Now we consider the dual examples of Table \ref{tab:32-triple}. We expect that $(A,B,C)$ are modular triples where (set $a=3/2$ in \eqref{A-exam1-lift-dual}) \begin{align}\label{A-exam1-lift-dual-32} &A=\begin{pmatrix} 1 & 0 & -1/2 \\ 0 & 1 & -1/2 \\ -1/2 & -1/2 & 1 \end{pmatrix}, \end{align} and $B,C$ are given in Table \ref{tab-exam1-lift-dual}. Here the last two vectors $(\frac{1}{2},\frac{1}{2},-\frac{1}{2})^\mathrm{T}$ and $(\frac{1}{2},\frac{1}{2},0)^\mathrm{T}$ were found through a Maple search, and the corresponding values of $C$ were determined after establishing the identities \eqref{thm3.7-6} and \eqref{thm3.7-7}. These two modular triples led us to the discovery of the last two choices of $(B,C)$ in Table \ref{tab:32-triple} by the dual operation.
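Although no computer verification is needed for the proofs, identities of this kind are easy to check numerically by truncating all $q$-series. The following Python sketch (our own illustration; the truncation bound $N=30$ is an arbitrary choice) compares both sides of \eqref{thm3.7-7} as integer power series modulo $q^{30}$.

```python
from itertools import product

N = 30  # compare power series modulo q^N

def mul(a, b):
    """Truncated product of two series given as length-N coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Series inverse of a, assuming a[0] == 1 (so the inverse is integral)."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def qpoch(step, terms):
    """(q^step; q^step)_terms as a truncated coefficient list."""
    p = [0] * N
    p[0] = 1
    for r in range(1, terms + 1):
        f = [0] * N
        f[0] = 1
        if step * r < N:
            f[step * r] = -1
        p = mul(p, f)
    return p

# Left-hand side of (thm3.7-7):
# sum of q^{i^2+j^2+k^2-ik-jk+i+j} / ((q^2;q^2)_i (q^2;q^2)_j (q^2;q^2)_k)
inv_poch = [inv(qpoch(2, n)) for n in range(N)]
lhs = [0] * N
for i, j, k in product(range(N), repeat=3):
    e = i*i + j*j + k*k - i*k - j*k + i + j
    if 0 <= e < N:
        term = [0] * N
        term[e] = 1
        for n in (i, j, k):
            term = mul(term, inv_poch[n])
        lhs = [x + y for x, y in zip(lhs, term)]

# Right-hand side J_3^3/(J_1 J_2^2), where J_m = (q^m; q^m)_infinity
def J(m):
    return qpoch(m, N // m + 1)

rhs = mul(mul(J(3), mul(J(3), J(3))), inv(mul(J(1), mul(J(2), J(2)))))
print(lhs == rhs)  # expected to print True if the identity holds
```

Analogous checks apply to the other identities of this section after adjusting the quadratic form and the product side.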
\begin{table}[H] \centering \begin{tabular}{ccccccc} \hline $B$ & $\begin{pmatrix} -c \\ c \\ 0 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ 0 \\ -1/2 \end{pmatrix}$ & $\begin{pmatrix} -1/4 \\ 1/4 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} 1/4 \\ -1/4 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ -1/2 \\ 0 \end{pmatrix}$ & $\begin{pmatrix} -1/2 \\ 0 \\ 0 \end{pmatrix}$ \\ $C$ & $(8c^2-1)/12$ & $1/12$ & $1/24$ & $1/24$ & $1/{24}$ & $1/24$ \\ \hline $B$ & $\begin{pmatrix} 0 \\ -1/2 \\ 1/4 \end{pmatrix}$ & $\begin{pmatrix} -1/2 \\ 0 \\ 1/4 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ -1/2 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} -1/2 \\ 0 \\ 1/2 \end{pmatrix}$ & $\begin{pmatrix} 1/2\\ 1/2 \\ -1/2 \end{pmatrix}$ & $\begin{pmatrix} 1/2\\ 1/2 \\ 0 \end{pmatrix}$ \\ $C$ & $1/96$ & $1/96$ & $1/24$ & $1/24$ & $1/12$ & $1/12$ \\ \hline \end{tabular} \caption{Modular triples associated with the matrix $A$ in \eqref{A-exam1-lift-dual-32}.} \label{tab-exam1-lift-dual} \end{table} Since $i$ and $j$ are symmetric in the quadratic form generated by $A$, there are essentially eight different Nahm sums to be considered. Their modularity is proved by the following theorem. 
\begin{theorem}\label{thm-2-Ex1-3/2-in} We have \begin{align} & \sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ik-2jk-2j+ak}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=(-1,-q^a;q^2)_\infty, \label{thm3.7-1} \\ & \sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ik-2jk+i-j+2k}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=\frac{J_2^2J_6^2}{J_1J_3J_4^2}, \label{thm3.7-3} \\ & \sum_{i,j,k\geq 0} \frac{q^{2i^2+2j^2+2k^2-2ik-2jk-ci+cj}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}=\frac{\overline{J}_{2+c,4}\overline{J}_{6+c,12}}{J_4^2}+q^2\frac{\overline{J}_{-c,4}\overline{J}_{c,12}}{J_4^2}, \label{thm3.7-4} \\ & \sum_{i,j,k\geq 0} \frac{q^{i^2+j^2+k^2-ik-jk-k}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k}=6\frac{J_3^3}{J_1J_2^2}, \label{thm3.7-5} \\ &\sum_{i,j,k\geq 0} \frac{q^{i^2+j^2+k^2-ik-jk+i+j-k}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k}=2\frac{J_3^3}{J_1J_2^2}, \label{thm3.7-6} \\ &\sum_{i,j,k\geq 0} \frac{q^{i^2+j^2+k^2-ik-jk+i+j}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k}=\frac{J_3^3}{J_1J_2^2}.\label{thm3.7-7} \end{align} \end{theorem} \begin{proof} We define \begin{align} F(u,v,w;q^4):=\sum_{i,j,k\geq 0} \frac{u^iv^jw^kq^{2i^2+2j^2+2k^2-2ik-2jk}}{(q^4;q^4)_i(q^4;q^4)_j(q^4;q^4)_k}. \end{align} By \eqref{Euler1} we have \begin{align}\label{rank2-in3.30} &F(u,v,w;q^4)=\sum_{k\geq 0} \frac{w^kq^{2k^2}}{(q^4;q^4)_k}(-uq^{2-2k};q^4)_{\infty}(-vq^{2-2k};q^4)_{\infty} \nonumber \\ &=(-uq^{2},-vq^{2};q^4)_{\infty}\sum_{i\geq 0} \frac{u^{i}v^iw^{2i}q^{4i^2}(-q^2/u,-q^2/v;q^4)_{i}}{(q^4;q^4)_{2i}} \nonumber \\ &\qquad +(-u,-v;q^4)_{\infty}\sum_{i\geq 0} \frac{u^iv^iw^{2i+1}q^{4i^2+4i+2}(-q^4/u,-q^4/v;q^4)_{i}}{(q^4;q^4)_{2i+1}}. 
\end{align} (1) By \eqref{rank2-in3.30} we have \begin{align} &F(1,q^{-2},q^a;q^4)=(-1;q^2)_{\infty}\sum_{i\geq 0} \frac{q^{4i^2-2i+2ai}(-q^2;q^2)_{2i}}{(q^4;q^4)_{2i}} \nonumber \\ & \qquad +(-q^{-2};q^2)_{\infty}\sum_{i\geq 0} \frac{q^{4i^2+2i+2ai+a+2}(-q^4;q^2)_{2i}}{(q^4;q^4)_{2i+1}}\nonumber \\ &=(-1;q^2)_{\infty}\Big(\sum_{i\geq 0} \frac{q^{4i^2-2i+2ai}}{(q^2;q^2)_{2i}}+\sum_{i\geq 0} \frac{q^{4i^2+2i+2ai+a}}{(q^2;q^2)_{2i+1}}\Big) \nonumber \\ &=(-1;q^2)_{\infty}\sum_{k\geq 0} \frac{q^{k^2-k+ak}}{(q^2;q^2)_{k}} =(-1,-q^a;q^2)_{\infty}. \end{align} (2) By \eqref{rank2-in3.30} we have \begin{align} &F(q,q^{-1},q^b;q^4)=(-q;q^2)_{\infty}\sum_{i\geq 0} \frac{q^{4i^2+2bi}(-q;q^2)_{2i}}{(q^4;q^4)_{2i}} \nonumber \\ &\qquad \qquad \qquad \qquad +(-q^{-1};q^2)_{\infty}\sum_{i\geq 0} \frac{q^{4i^2+4i+2bi+b+2}(-q^3;q^2)_{2i}}{(q^4;q^4)_{2i+1}} \nonumber \\ =&(-q;q^2)_{\infty}\Big(\sum_{i\geq 0} \frac{q^{4i^2+2bi}(-q;q^2)_{2i}}{(q^4;q^4)_{2i}}+\sum_{i\geq 0} \frac{q^{4i^2+4i+2bi+b+1}(-q;q^2)_{2i+1}}{(q^4;q^4)_{2i+1}}\Big) \nonumber \\ =&(-q;q^2)_{\infty}\sum_{k\geq 0} \frac{q^{k^2+bk}(-q;q^2)_{k}}{(q^4;q^4)_{k}}. \label{thm3.7-proof-2} \end{align} Setting $b=2$ in \eqref{thm3.7-proof-2} and using \eqref{Entry 4.2.11} with $q$ replaced by $-q$, we obtain \eqref{thm3.7-3}. (3) By \eqref{rank2-in3.30} we have \begin{align}\label{proof-dual32-3} F(q^{-c},q^{c},1;q^4)=(-q^{2+c},-q^{2-c};q^4)_{\infty}S_0(q)+(-q^{c},-q^{-c};q^4)_{\infty}S_1(q). 
\end{align} Here \begin{align} &S_0(q)=\sum_{i\geq 0} \frac{q^{4i^2}(-q^{2-c},-q^{2+c};q^4)_{i}}{(q^4;q^4)_{2i}}=\frac{(-q^{6+c},-q^{6-c},q^{12};q^{12})_{\infty}}{(q^4;q^4)_{\infty}} ~~ \text{(by \eqref{Entry 5.3.1})}, \label{proof-32-3-S0}\\ &S_1(q)=\sum_{i\geq 0} \frac{q^{4i^2+4i+2}(-q^{4-c},-q^{4+c};q^4)_{i}}{(q^4;q^4)_{2i+1}} \nonumber \\ &= \frac{q^2}{1+q^c}\sum_{i\geq 0} \frac{q^{4i^2+4i}(-q^{4-c};q^4)_{i}(-q^{c};q^4)_{i+1}}{(q^4;q^4)_{2i+1}} \nonumber \\ &=\frac{q^2}{1+q^c}\lim\limits_{\rho_{1},\rho_{2}\to \infty}\sum_{i\geq 0} \frac{(-q^{4-c},\rho_{1},\rho_{2};q^4)_{i}(-q^{c};q^4)_{i+1}}{(q^4;q^4)_{2i+1}}\left(\frac{q^8}{\rho_{1}\rho_{2}}\right)^i \nonumber \\ &=\frac{q^2}{1+q^c}\times \frac{1}{(q^4;q^4)_{\infty}}\sum_{i\geq 0} (q^{c(i+1)}+q^{-ci})q^{6i^2+6i} \quad\text{(by \eqref{Part2-5.2.4})}\nonumber \\ &=\frac{q^2}{1+q^c}\times \frac{1}{(q^4;q^4)_{\infty}}\sum_{i=-\infty}^{\infty} q^{6i^2+(6-c)i} \nonumber \\ &=\frac{q^2}{1+q^c}\times \frac{(-q^{c},-q^{12-c},q^{12};q^{12})_{\infty}}{(q^4;q^4)_{\infty}}. \label{proof-32-3-S1} \end{align} Substituting \eqref{proof-32-3-S0} and \eqref{proof-32-3-S1} into \eqref{proof-dual32-3}, we obtain \eqref{thm3.7-4}. (4) By \eqref{rank2-in3.30} we have \begin{align} &F(1,1,q^{-1};q^2)=(-q,-q;q^2)_{\infty}\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q,-q;q^2)_{i}}{(q^2;q^2)_{2i}} \nonumber \\ &\qquad +(-1,-1;q^2)_{\infty}\sum_{i\geq 0} \frac{q^{2i^2}(-q^{2},-q^{2};q^2)_{i}}{(q^2;q^2)_{2i+1}} \nonumber \\ &=\frac{J_2^4}{J_1^2J_4^2}\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i}}{(q;q^2)_{i}(q^4;q^4)_i} +4\frac{J_4^3J_6^2}{J_2^4J_{12}}.\label{add-id-1} \end{align} Here we used \eqref{Entry 4.2.8+4.2.9} with $q$ replaced by $q^2$. From \eqref{proof-32-3-S1} with $c=2$ we deduce that \begin{align} \sum_{i\geq 0} \frac{q^{2i^2+2i}(-q;q^2)_{i}}{(q;q^2)_{i+1}(q^4;q^4)_{i}} =\frac{\overline{J}_{1,6}}{J_2}. \label{add-id-2} \end{align} We now show that the first sum in \eqref{add-id-1} is equal to twice the sum in \eqref{add-id-2}. 
For each integer $r\geq 0$ we define \begin{align}\label{PrEntry4.2.9} f_{r}(q^2):=\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i+r}}{(q;q^2)_{i+r}(q^4;q^4)_{i}} -2\sum_{i\geq 0} \frac{q^{2i^2+2i}(-q;q^2)_{i+r}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i}}. \end{align} Hence, \begin{align} &f_{r}(q^2)=\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i+r}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i}} ((1-q^{2(i+r)+1})-2q^{4i}) \nonumber \\ &=\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i+r}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i}} (2(1-q^{4i})-(1+q^{2(i+r)+1})) \nonumber \\ &=2\sum_{i\geq 1} \frac{q^{2i^2-2i}(-q;q^2)_{i+r}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i-1}} -\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i+r+1}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i}} \nonumber \\ &=2\sum_{i\geq 0} \frac{q^{2i^2+2i}(-q;q^2)_{i+r+1}}{(q;q^2)_{i+r+2}(q^4;q^4)_{i}} -\sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i+r+1}}{(q;q^2)_{i+r+1}(q^4;q^4)_{i}} \nonumber\\ &=-f_{r+1}(q^2). \end{align} Note that \begin{align} &\lim\limits_{r \to \infty}f_{r}(q^2)=\frac{(-q;q^2)_{\infty}}{(q;q^2)_{\infty}}\Big(\sum_{i\geq 0} \frac{q^{2i^2-2i}}{(q^4;q^4)_{i}} -2\sum_{i\geq 0} \frac{q^{2i^2+2i}}{(q^4;q^4)_{i}}\Big)\nonumber\\ &=\frac{(-q;q^2)_{\infty}}{(q;q^2)_{\infty}}\left((-1;q^4)_{\infty}-2(-q^4;q^4)_{\infty} \right)=0. \end{align} Therefore, the recurrence formula above implies that $f_{r}(q^2)=0$. Letting $r=0$ in \eqref{PrEntry4.2.9}, we prove the previous assertion that \begin{align} \sum_{i\geq 0} \frac{q^{2i^2-2i}(-q;q^2)_{i}}{(q;q^2)_i(q^4;q^4)_{i}} =2\frac{\overline{J}_{1,6}}{J_2}. \label{add-id-3} \end{align} Substituting \eqref{add-id-3} into \eqref{add-id-1}, we deduce that \begin{align} &F(1,1,q^{-1};q^2)=2\frac{J_2^5J_3J_{12}}{J_1^3J_4^3J_6}+4\frac{J_4^3J_6^2}{J_2^4J_{12}} \nonumber \\ &=2\frac{J_2^5J_{12}}{J_4^3J_6}\left( \frac{J_4^6J_6^3}{J_2^9J_{12}^2}+3q\frac{J_4^2J_6J_{12}^2}{J_2^7} \right)+4\frac{J_4^3J_6^2}{J_2^4J_{12}} =6\frac{J_3^3}{J_1J_2^2}. 
\label{add-id-4} \end{align} Here for the last two equalities we used the following identities \cite[Eqs.\ (3.75) and (3.38)]{XiaYao}: \begin{align} \frac{J_3^3}{J_1}&=\frac{J_4^3J_6^2}{J_2^2J_{12}}+q\frac{J_{12}^3}{J_4}, \label{3core-id-1} \\ \frac{J_3}{J_1^3}&=\frac{J_4^6J_6^3}{J_2^9J_{12}^2}+3q\frac{J_4^2J_6J_{12}^2}{J_2^7}. \label{3core-id-2} \end{align} This proves \eqref{thm3.7-5}. (5) Recall the following identities from \cite[Corollary 2.2]{Wang2024}: \begin{align} \sum_{n=0}^\infty \frac{q^{n^2}(-1;q)_n^2}{(q;q)_{2n}}&=\frac{4}{3}\frac{J_2J_3^2}{J_1^2J_6}-\frac{1}{3}\frac{J_1^4}{J_2^2}, \label{key-id-1} \\ \sum_{n=0}^\infty \frac{q^{2n^2+2n}(-q;q^2)_n^2}{(q^2;q^2)_{2n+1}} &=\frac{1}{3}\frac{J_1^2J_4^2}{J_2^2}+\frac{2}{3}\frac{J_2J_3J_{12}}{J_1J_4J_6}. \label{key-id-2} \end{align} We have \begin{align}\label{add-6-F-start} &F(q,q,q^{-1};q^2)=(-q^2;q^2)_\infty^2 \sum_{n=0}^\infty \frac{q^{2n^2}(-1;q^2)_n^2}{(q^2;q^2)_{2n}}+(-q;q^2)_\infty^2 \sum_{n=0}^\infty \frac{q^{2n^2+2n}(-q;q^2)_n^2}{(q^2;q^2)_{2n+1}} \nonumber \\ &=\frac{J_4^2}{J_2^2}\Big(\frac{4}{3}\frac{J_4J_6^2}{J_2^2J_{12}}-\frac{1}{3}\frac{J_2^4}{J_4^2}\Big)+\frac{J_2^4}{J_1^2J_4^2}\Big( \frac{1}{3}\frac{J_1^2J_4^2}{J_2^2}+\frac{2}{3}\frac{J_2J_3J_{12}}{J_1J_4J_6}\Big) \nonumber \\ &=\frac{4}{3}\frac{J_4^3J_6^2}{J_2^4J_{12}}+\frac{2}{3}\frac{J_2^5J_3J_{12}}{J_1^3J_4^3J_6}=2\frac{J_3^3}{J_1J_2^2}. \end{align} Here for the last equality we used \eqref{add-id-4}. This proves \eqref{thm3.7-6}. (6) Using \eqref{thm3.7-4} with $c=2$ and \eqref{add-id-4} we obtain \begin{align} F(q^{-1},q,1;q^2)=2\frac{J_4^3J_6^2}{J_2^4J_{12}}+\frac{J_2^5J_3J_{12}}{J_1^3J_4^3J_6}=3\frac{J_3^3}{J_1J_2^2}. 
\label{add-id-5} \end{align} It is easy to see that \begin{align} &F(q^{-1},q,1;q^2)-F(q,q,1;q^2)=\sum_{i,j,k\geq 0} \frac{q^{i^2+j^2+k^2-ik-jk-i+j}(1-q^{2i})}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \nonumber \\ &=\sum_{i,j,k\geq 0} \frac{q^{(i+1)^2+j^2+k^2-(i+j+1)k+j-i-1}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k} \nonumber \\ &=\sum_{i,j,k\geq 0} \frac{q^{i^2+j^2+k^2-ik-jk+i+j-k}}{(q^2;q^2)_i(q^2;q^2)_j(q^2;q^2)_k}=F(q,q,q^{-1};q^2). \label{thm3.7-proof-5} \end{align} Substituting \eqref{add-6-F-start} and \eqref{add-id-5} into \eqref{thm3.7-proof-5}, we obtain \eqref{thm3.7-7}. \end{proof} \begin{rem} If we set $b=0$ in \eqref{thm3.7-proof-2} and use \eqref{S. 25}, we deduce that \begin{align} F(q,q^{-1},1;q^4)=(-q;q^2)_{\infty}\sum_{k\geq 0} \frac{q^{k^2}(-q;q^2)_{k}}{(q^4;q^4)_{k}} =\frac{J_{2}^{3}J_{3,6}}{J_{1}^{2}J_{4}^{2}}. \end{align} This proves the special case $c=-1$ of \eqref{thm3.7-4}. \end{rem} \subsection{Concluding remarks} A striking byproduct of our study consists of two new counterexamples to Zagier's duality conjecture. Recall that Wang \cite{Wang2024} provided the first set of counterexamples to it. He showed that the Nahm sums $f_{A,B_i,1/16}(q)$ ($i=1,2$) are modular for \begin{align}\label{exam-data-1} A=\begin{pmatrix} 1 & 0 & 0 & -1/2 \\ 0 & 1 & 0 & -1/2 \\ 0 & 0 & 1 & -1/2 \\ -1/2 & -1/2 & -1/2 & 1 \end{pmatrix}, \quad B_1=\begin{pmatrix} 0 \\ 1/2 \\ 1/2 \\ -1/2 \end{pmatrix}, \quad B_2=\begin{pmatrix} 0 \\ 1/2 \\ 1/2 \\ 0\end{pmatrix}. \end{align} However, the dual Nahm sums $f_{A^\star,B_i^\star,C'}(q)$ ($i=1,2$) are not modular for any $C'$ where \begin{align}\label{exam-data-2} A^\star=\begin{pmatrix} 2 & 1 & 1 & 2 \\ 1 & 2 & 1 & 2 \\ 1 & 1 & 2 & 2 \\ 2 & 2 & 2 & 4 \end{pmatrix}, \quad B_1^\star=\begin{pmatrix} 0 \\ 1/2 \\ 1/2 \\ 0 \end{pmatrix}, \quad B_2^\star=\begin{pmatrix} 1 \\ 3/2 \\ 3/2 \\ 2\end{pmatrix}.
\end{align} The reason for the failure of modularity is that each dual Nahm sum can be expressed as a sum of two modular forms of weights zero and one. Motivated by Wang's discovery, during the above study of the lift of Zagier's Example 1 we considered, in a similar spirit, the following matrices: \begin{align} A=\begin{pmatrix} 3/2 & 1/2 & 1 \\ 1/2 & 3/2 & 1 \\ 1 & 1 & 2 \end{pmatrix}, \quad A^\star=\begin{pmatrix} 1 & 0 & -1/2 \\ 0 & 1 & -1/2 \\ -1/2 & -1/2 & 1 \end{pmatrix}. \end{align} We proved that $f_{A,B_i,C}(q)$ are nonmodular for any $C$ where $B_1=(\frac{1}{2},\frac{1}{2},0)^\mathrm{T}$, $B_2=(1,1,1)^\mathrm{T}$, but their dual Nahm sums $f_{A^\star,B_i^\star,C_i}(q)$ are modular. Hence they serve as new counterexamples to Zagier's duality conjecture. \begin{thebibliography}{0} \bibitem{Andrews-book}G.E. Andrews, The Theory of Partitions, Addison--Wesley, 1976; Reissued Cambridge, 1998. \bibitem{Andrews-Berndt} G.E. Andrews and B.C. Berndt, Ramanujan's Lost Notebook, Part II, Springer, 2009. \bibitem{BIS} D.M. Bressoud, M.E.H. Ismail and D. Stanton, Change of base in Bailey pairs, Ramanujan J. 4 (2000), 435--453. \bibitem{CGZ} F. Calegari, S. Garoufalidis and D. Zagier, Bloch groups, algebraic K-theory, units, and Nahm's conjecture, Ann. Sci. \'Ec. Norm. Sup\'er. (4) 56 (2023), no.\ 2, 383--426. \bibitem{CRW} Z. Cao, H. Rosengren and L. Wang, On some double Nahm sums of Zagier, J. Combin. Theory Ser. A 202 (2024), Paper No. 105819. \bibitem{Feigin} I. Cherednik and B. Feigin, Rogers--Ramanujan type identities and Nil-DAHA, Adv. Math. 248 (2013), 1050--1088. \bibitem{GKPS} D. Gang, H. Kim, B. Park and S. Stubbs, Three dimensional topological field theories and Nahm sum formulas, arXiv:2411.06081. \bibitem{Gasper-Rahman} G. Gasper and M. Rahman, Basic Hypergeometric Series, 2nd Edition, Encyclopedia of Mathematics and Its Applications, Vol. 96, Cambridge University Press, 2004. \bibitem{Kac} V. Kac and M.
Wakimoto, Modular invariant representations of infinite dimensional Lie algebras and superalgebras, Proc. Nat. Acad. Sci. 85 (1988), 4956--4960. \bibitem{McLaughlin} J. Mc Laughlin, Topics and methods in $q$-series, Monographs in Number Theory, 8, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2018. \bibitem{LeeThesis} C.-H. Lee, Algebraic structures in modular $q$-hypergeometric series, PhD Thesis, University of California, Berkeley, 2012. \bibitem{Lovejoy2004} J. Lovejoy, A Bailey lattice, Proc. Amer. Math. Soc. 132 (2004), 1507--1516. \bibitem{Rogers}L.J. Rogers, Second memoir on the expansion of certain infinite products, Proc. London Math. Soc. 25 (1894), 318--343. \bibitem{Slater}L.J. Slater, Further identities of the Rogers--Ramanujan type, Proc. London Math. Soc. (2) 54 (1) (1952), 147--167. \bibitem{VZ} M. Vlasenko and S. Zwegers, Nahm's conjecture: asymptotic computations and counterexamples, Commun. Number Theory Phys. 5(3) (2011), 617--642. \bibitem{XiaYao} E.X.W. Xia and O.X.M. Yao, Analogues of Ramanujan's partition identities, Ramanujan J. 31 (2013), 373--396. \bibitem{Wang-rank2} L. Wang, Identities on Zagier's rank two examples for Nahm's problem, Res.\ Math.\ Sci. (2024) 11:49. \bibitem{Wang-rank3} L. Wang, Explicit forms and proofs of Zagier's rank three examples for Nahm's problem, Adv. Math. 450 (2024), 109743. \bibitem{Wang2024} L. Wang, Counterexamples to Zagier's duality conjecture on Nahm sums, arXiv:2411.09701v3. \bibitem{Zagier} D. Zagier, The dilogarithm function, in Frontiers in Number Theory, Physics and Geometry, II, Springer, 2007, 3--65. \bibitem{ZwegersTalk} S. Zwegers, presentation. In: Workshop on Mock Modular Forms in Combinatorics and Arithmetic Geometry, American Institute of Mathematics, Palo Alto, California, Mar. 8--12, 2010. \end{thebibliography} \end{document}
2412.15866v1
http://arxiv.org/abs/2412.15866v1
The common ground of DAE approaches. An overview of diverse DAE frameworks emphasizing their commonalities
\documentclass[11pt]{article} \usepackage[english]{babel} \usepackage[a4paper, left=3.4cm, right=3.4cm, top=3cm]{geometry} \usepackage{xfrac} \usepackage{mathptmx} \usepackage{amsmath,amsthm} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{oldgerm} \usepackage{mathrsfs} \usepackage{comment} \usepackage{tikz} \usepackage{pgfplots} \usepackage[scanall]{psfrag} \usepackage{graphicx,pst-all,bm} \usepackage{flushend} \usepackage{subfig} \usepackage{mdwlist} \usepackage{multirow} \usepackage{color} \usepackage{longtable} \usepackage{cuted} \usepackage{verbatim} \usepackage{siunitx} \DeclareMathOperator{\loc}{loc} \newcommand{\C}{\mathbb{C}} \newcommand{\K}{\mathbb{K}} \newcommand{\N}{\mathbb{N}} \newcommand{\Np}{\mathbb{N}\setminus \{0\}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\fQ}{\mathfrak{Q}} \DeclareMathOperator{\im}{im \, } \DeclareMathOperator{\diag}{diag} \newcommand{\Real}{\mathbb{R}} \newcommand{\Natu}{\mathbb{N}} \newcommand{\pPi}{\mathnormal{\Pi}} \newcommand{\rank}{\operatorname{rank}} \newtheorem{proposition}{Proposition}[section] \newtheorem{theorem}[proposition]{Theorem} \newtheorem{corollary}[proposition]{Corollary} \newtheorem{lemma}[proposition]{Lemma} \newtheorem{definition}[proposition]{Definition} \newtheorem{remark}[proposition]{Remark} \newtheorem{example}[proposition]{Example} \newtheorem{assumption}[proposition]{Assumption} \newtheorem{conjecture}[proposition]{Conjecture} \newtheorem{hypothesis}[proposition]{Hypothesis} \newcommand{\corank}{\operatorname{corank}} \renewcommand{\it}{\itshape} \renewcommand{\rm}{\mathrm} \pgfplotsset{compat=1.17} \usepackage{authblk} \setlength{\parindent}{0cm} \begin{document} \title{The common ground of DAE approaches.\\ \Large An overview of diverse DAE frameworks \\ emphasizing their commonalities.} \author[1]{Diana Est\'evez Schwarz} \author[2]{Ren\'e Lamour} \author[2]{Roswitha M\"arz} \affil[1]{\small Berliner Hochschule f\"ur Technik} 
\affil[2]{\small Humboldt University of Berlin, Institute of Mathematics} \maketitle \begin{abstract} We analyze different approaches to differential-algebraic equations with attention to the rank conditions imposed on various matrix functions. These conditions appear to be very different, and certain rank drops in some matrix functions actually indicate critical solution behavior. We look for common ground by considering various index and regularity notions from the literature that generalize the Kronecker index of regular matrix pencils. In detail, starting from the most transparent reduction framework, we work out a comprehensive regularity concept with canonical characteristic values applicable across all frameworks and prove the equivalence of thirteen distinct definitions of regularity. This makes it possible to use the findings of all these concepts together. Additionally, we show why not only the index but also these canonical characteristic values are crucial to describe the properties of the DAE. \end{abstract} \textbf{Keywords:} Differential-Algebraic Equation, Higher Index, Regularity, Critical Points, Singularities, Structural Analysis, Persistent Structure, Index Concepts, Canonical Characteristic Values \medskip \\ \textbf{AMS Subject Classification:} 34A09, 34A12, 34A30, 34A34, 34-02 \setcounter{secnumdepth}{3} \setcounter{tocdepth}{3} \tableofcontents \pagebreak \section{Introduction}\label{sec:Introduction} \begin{flushright} \textsc{How proven concepts differ is remarkable,\\ but what they have in common is essential.} \end{flushright} \emph{Who coined the term DAEs?} is asked in the engaging essay \cite{Simeon}, and the answer is given there: Bill Gear.
The first occurrence of the term \emph{Differential-Algebraic Equation} can be found in the title of Gear's paper from 1971, \emph{Simultaneous numerical solution of differential-algebraic equations} \cite{Gear71}, and in his book \cite{Gear71B}, where he considers examples from electric circuit analysis. The German term \emph{Algebro-Differentialgleichungssysteme} comes from physicists and electronics engineers; it first appears as a chapter title in the book \emph{Rechnergest{\"u}tzte Analyse in der Elektronik} from 1977, \cite{EMR77}, in which the above two works are already cited. Obviously, electric circuit analysis, accompanied by the diverse computer-aided engineering emerging at the time, gave the impetus for many developments in the following 50 years. Actually, there are several quite different approaches with a large body of literature, such as the ten volumes of the DAE-Forum book series, but still too few commonalities have been revealed. We would like to contribute to this, in particular by showing equivalences. \bigskip We mainly focus on linear differential-algebraic equations (DAEs) in standard form, \begin{align}\label{1.DAE} Ex'+Fx=q, \end{align} in which $E,F:\mathcal I\rightarrow \Real^{m\times m}$ are sufficiently smooth, at least continuous, matrix functions on the interval $\mathcal I\subseteq\Real$ so that all the index concepts we look at apply\footnote{With regard to linearizations of nonlinear DAEs, we explicitly do not assume that $E, F$ are real analytic or from $C^{\infty}$.}. The matrix $E(t)$ is singular for all $t\in\mathcal I$. \bigskip If $E$ and $F$ are constant matrices, the regularity of the DAE means the regularity of the matrix pair $\{E,F\}$, i.e., the determinant $\det(sE+F)$, which is a polynomial in $s$, must not vanish identically. However, it must be conceded that, so far, for DAEs with variable coefficients, there are partially quite different definitions of regularity, each bound to the technical concepts behind it.
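For constant coefficient pairs this regularity criterion is straightforward to check computationally. The following Python sketch (a toy illustration with matrices of our own choosing, not taken from the literature) computes the coefficient list of $\det(sE+F)$ for $2\times 2$ pairs:

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def polyadd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pencil_det_2x2(E, F):
    """Coefficients [c0, c1, c2] of det(sE + F) for a 2x2 pair {E, F}."""
    entry = lambda i, j: [F[i][j], E[i][j]]   # the linear polynomial F_ij + s*E_ij
    return polyadd(polymul(entry(0, 0), entry(1, 1)),
                   [-c for c in polymul(entry(0, 1), entry(1, 0))])

# Singular E, yet the pair {E, F} is regular: det(sE + F) = -1 is not identically zero.
E = [[1, 0], [0, 0]]
F = [[0, 1], [1, 0]]
print(pencil_det_2x2(E, F))    # [-1, 0, 0]

# A nonregular pair: det(sE + F) vanishes identically.
E2 = [[0, 1], [0, 0]]
F2 = [[0, 0], [0, 1]]
print(pencil_det_2x2(E2, F2))  # [0, 0, 0]
```

The first pair illustrates exactly the situation of interest here: $E$ is singular, yet the pencil is regular.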
Surely, regular DAEs have no freely selectable solution components and do not yield any consistency conditions for the inhomogeneities. But that is not all: certain qualitative characteristics of the flow and of the input-output behavior are just as important, the latter especially with regard to applications. We are pursuing the question: To what extent are the various rank conditions which support DAE-index notions appropriate, informative and comparable? The answer results from an overview of diverse approaches to DAEs emphasizing their commonalities. We hope that our analysis will also contribute to a harmonization of understanding in this matter. To our understanding, our main equivalence theorem from Section \ref{sec:MainTheorem} is a significant step in this direction. \medskip In the vast majority of papers about DAEs, continuously differentiable solutions $x\in\mathcal C^{1}(\mathcal I,\Real^{m})$ are assumed, and smoother ones if necessary. On the other hand, since $E(t)$ is singular for every $t\in\mathcal I$, obviously only a part of the first derivative of the unknown solution is actually involved\footnote{For instance, the Lagrange parameters in DAE-formulations of mechanical systems do not belong to the differentiated unknowns.} in the DAE \eqref{1.DAE}. To emphasize this fact, the DAE \eqref{1.DAE} can be reformulated by means of a suitable factorization $E=AD$ as \begin{align}\label{1.propDAE} A(Dx)'+Bx=q, \end{align} in which $B=F-AD'$. This allows one to admit merely continuous solutions $x$ with continuously differentiable parts $Dx$. However, we do not make use of this possibility here. Since we focus on the original coefficient pair $\{E, F\}$ and smooth solutions in the present paper, we underline the identity, \begin{align*} A(Dx)'+Bx=Ex'+Fx,\quad \text{ for }\; x\in\mathcal C^{1}(\mathcal I,\Real^{m}), \end{align*} which is valid equally for each special factorization.
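For illustration, consider the toy factorization (our own example, not taken from the literature)
\begin{align*}
E(t)=\begin{pmatrix} 1 & t \\ 0 & 0 \end{pmatrix}
=\underbrace{\begin{pmatrix} 1 \\ 0 \end{pmatrix}}_{=:A}
\underbrace{\begin{pmatrix} 1 & t \end{pmatrix}}_{=:D},
\qquad B=F-AD'=F-\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\end{align*}
for which one checks directly that $A(Dx)'+Bx=A(D'x+Dx')+Fx-AD'x=ADx'+Fx=Ex'+Fx$ for every $x\in\mathcal C^{1}(\mathcal I,\Real^{2})$.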
In addition, we will highlight that the auxiliary coefficient triple $\{A,D,B\}$ takes over the structural rank characteristics of $\{E, F\}$, and vice versa. With this we want to dispel the frequently occurring misunderstanding that so-called \textit{DAEs with properly and quasi-properly stated leading term} are something completely different from standard form DAEs.\footnote{A \textit{DAE with properly involved derivative} or \textit{properly stated leading term} is a DAE of the form \eqref{1.propDAE} with the properties $\im A=\im AD$, $\ker D=\ker AD$. We refer to \cite{CRR} for the general description and properties.} Based on the realization that the \textsl{Kronecker index} is an adequate means to understand DAEs with constant coefficients, we survey and compare different notions which generalize the Kronecker index for regular matrix pairs. We shed light on the concerns behind the concepts, but emphasize common features to a large extent, as opposed to simply listing them next to each other or stressing differences without further argument. We are convinced that especially the basic rank conditions within the various concepts prove to be an essential, unifying characteristic and open up the possibility of a better understanding and use. \medskip This paper is organized as follows. After clarifying important notions like solvability and equivalence transformations in Sections \ref{s.Arrangements} and \ref{s.Equivalence}, we start by introducing a reference basic concept with its associated characteristic values, which depend on the rank of certain matrices, in Section \ref{s.Basic&more}. This basic notion is our starting point for proving many equivalences. \\ The structure of the paper reflects that, roughly speaking, there are two types of frameworks to analyze DAEs: \begin{itemize} \item Approaches based on the direct construction of a matrix chain or a sequence of matrix pairs without using the so-called derivative array.
The basic concept and all concepts discussed in Section \ref{s.Solvab} are of this type. They turn out to be equivalent and lead to a common notion of regularity. This is also equivalent to transformability into a specifically structured standard canonical form. \item Approaches based on the derivative array are addressed in Section \ref{s.notions}. In this case, it turns out that some of these are equivalent to the basic concept, whereas others are different in the sense that weaker regularity properties are used. The latter lead to our notion of almost regular DAEs. \end{itemize} \begin{center} \begin{table}[ht] \begin{tabular}{|c|c|c|} \hline & \multicolumn{1}{ c|}{without derivative array} & \multicolumn{1}{c|}{with derivative array} \\ \hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{regularity}}} \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{ }}} & & \\ & Basic (Sec. \ref{s.regular}) &\\ & Elimination (Sec. \ref{subs.elimination}) & Regular Differentiation (Sec. \ref{subs.qdiff})\\ & Dissection (Sec. \ref{subs.dissection}) & \\ & Regular Strangeness (Sec. \ref{subs.strangeness})& Projector Based Differentiation (Sec. \ref{subs.pbdiff})\\ & Tractability (Sec. \ref{subs.tractability})&\\ & & \\ \hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{regularity or}}} \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{almost regularity}}} & & \\ & & \\ & & Differentiation (Sec. \ref{subs.diff}) \\ & & \\ & & Strangeness (Sec. \ref{subs.Hyp}) \\ & &\\ & & \\ \hline \end{tabular} \caption{Overview of the discussed index notions. The different regularity properties are defined in Section \ref{subs.equivalence}.} \label{t.overview} \end{table} \end{center} An overview of the approaches we discuss for linear DAEs can be found in Table \ref{t.overview}. Illustrative examples for the different types of regularity are compiled in Section \ref{s.examples}.
\\ All approaches use their own characteristic values that correspond to ranks of matrices or dimensions of subspaces, and in the end it turns out that, in the case of regularity, they can be calculated from the so-called canonical characteristic values and vice versa.\\ Section \ref{s.Notions} starts with a summary of all the obtained equivalence results in a quite extensive theorem with hopefully enlightening and pleasant content. Based on this, there follows a discussion of the meaning of regularity, completed by an inspection of the related literature. \\ Finally, in Section \ref{s.nonlinearDAEs} we briefly outline the generalization of the discussed approaches to nonlinear DAEs with a view to linearizations. To facilitate reading, some technical details are provided in the appendix. \section{Special arrangements for this paper}\label{s.Arrangements} Throughout this paper the coefficients of the DAE \eqref{1.DAE} are matrix functions $E, F:\mathcal I\rightarrow\Real^{m\times m}$ that are sufficiently smooth to allow the application of all the approaches discussed here: by convention of class $\mathcal C^{m}$, or of class $\mathcal C^{\mu}$ if an index $\mu \leq m$ is already known, but not necessarily of class $\mathcal C^{\infty}$ or real-analytic. Our aim is to uncover the common ground between the various concepts, in particular the rank conditions. We will not go into the undoubted differences between the concepts in terms of smoothness requirements here, although these are of course very important. Please refer to the relevant literature. \medskip This is neither a historical treatise nor a comprehensive overview of approaches and results, but rather an attempt to reveal what is common to the popular approaches. Wherever possible, we cite widely used works such as monographs and refer to the references therein for the classification of corresponding original works.
Our particular goal is the harmonizing comparison of the basic rank conditions behind the various concepts, combined with the characterization of the class of regular pairs $\{E,F\}$ or DAEs \eqref{1.DAE}. Details regarding solvability statements within the individual concepts would go beyond the scope of this paper. Here we merely point out the considerable diversity of approaches. While, on the one hand, in many papers attention is paid, from a rather functional-analytic point of view, to the lowest possible smoothness, suitable function spaces, rigorous solvability assertions, and precise statements about relevant operator properties such as surjectivity and continuity, e.g., \cite{GM86,CRR,Ma2014,Jansen2014}, we observe, on the other hand, that solvability in the sense of Definition \ref{d.solvableDAE} below is assumed and integrated into several developments from the very beginning, e.g., \cite{BCP89,KuMe2006,BergerIlchmann}. We quote \cite[Definition 2.4.1]{BCP89}\footnote{See also Remark \ref{r.generalform} below.}: \begin{definition}\label{d.solvableDAE} The system \eqref{1.DAE} is \emph{solvable} on the interval $\mathcal I$ if for every $m$-times differentiable $q$, there is at least one continuously differentiable solution to \eqref{1.DAE}. In addition, solutions are defined on all of $\mathcal I$ and are uniquely determined by their value at any $t\in\mathcal I$. \end{definition} \medskip Here we examine and compare only those approaches whose characteristics do not change under equivalence transformations and which generalize the Kronecker index of regular matrix pairs. This rules out the so-called \emph{structural index}, e.g. \cite{Pantelides88,Pryce1998,RMB2000,PryceDAESA}. \medskip A widely used and popular means of investigating DAEs is the so-called \emph{perturbation index}, which according to \cite{HairerWanner} can be interpreted as a sensitivity measure in relation to perturbations of the given problem.
For time-invariant coefficients $\{E,F\}$, the perturbation index coincides with the regular Kronecker index. We adapt \cite[Definition 5.3]{HairerWanner} to be valid for the linear DAE \eqref{1.DAE} on the interval $\mathcal I=[a,b]$: \begin{definition}\label{d.perturbation} The system \eqref{1.DAE} has \emph{perturbation index} $\mu_p\in\Natu$ if $\mu_p$ is the smallest integer such that for all functions $x:\mathcal I \rightarrow \Real^{m}$ having a defect $\delta = Ex'+Fx$ there exists an estimate \begin{align*} |x(t)|\leq c\{|x(a)|+ \max_{a\leq\tau\leq t}|\delta(\tau)|+\cdots+ \max_{a\leq\tau\leq t}|\delta^{(\mu_p-1)}(\tau)|\}, \quad t\in\mathcal I. \end{align*} \end{definition} The perturbation index does not contain any information about whether the DAE has a solution for an arbitrarily given $\delta$, but only records the resulting defects. In the following, we do not devote an extra section to the perturbation index, but combine it with the proofs of corresponding solvability statements and repeatedly involve it in the relevant discussions. \medskip We close this section with a comment on the index names below, more precisely on the various additional epithets used in the literature such as differentiation, dissection, elimination, geometric, strangeness, tractability, etc. We try to organize them and stick to the original names as far as possible, if there were any. In earlier works, simply the term \emph{index} is used, likewise \emph{local index} and \emph{global index}; other modifiers were usually only added in attempts at comparison, e.g., \cite{GHM,Mehrmann,RR2008}. After it became clear that the so-called \emph{local index} (the Kronecker index of the matrix pencil $\lambda E(t)+F(t)$ at fixed $t$) is irrelevant for the general characterization of time-varying linear DAEs, the term \emph{global index} was used in contrast. We do not use the extra label \emph{global} here, as all the notions considered here could carry it.
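A simple constant-coefficient illustration of Definition \ref{d.perturbation} (our own standard example) is the pair
\begin{align*}
E=\begin{bmatrix} 0&1\\0&0 \end{bmatrix},\quad F=I_2:\qquad
x_2'+x_1=\delta_1,\quad x_2=\delta_2
\quad\Longrightarrow\quad
x_2=\delta_2,\quad x_1=\delta_1-\delta_2'.
\end{align*}
Every function $x$ with defect $\delta$ thus satisfies $|x(t)|\leq c\{\max_{a\leq\tau\leq t}|\delta(\tau)|+\max_{a\leq\tau\leq t}|\delta'(\tau)|\}$, and the first derivative of $\delta$ cannot be avoided, so that $\mu_p=2$, in accordance with the Kronecker index two of this pair.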
\section{Comments on equivalence relations}\label{s.Equivalence} Equivalence relations and specially structured forms have been an important part of DAE theory from the beginning. Two pairs of matrix functions $\{E,F\}$ and $\{\tilde E,\tilde F\}$, and also the associated DAEs, are called \textit{equivalent}\footnote{In the context of the strangeness index \textit{globally equivalent}, e.g. \cite[Definition 2.1]{KuMe1996}, and \textit{analytically equivalent} in \cite[Section 2.4.22]{BCP89}. We underline that \eqref{1.Equivalence} actually defines a reflexive, symmetric, and transitive equivalence relation $\{E,F\}\sim \{\tilde E,\tilde F\}$.}, if there exist pointwise nonsingular, sufficiently smooth\footnote{$L$ is at least continuous, $K$ continuously differentiable. The further smoothness requirements in the individual concepts differ; they are highest when derivative arrays play a role.} matrix functions $L, K:\mathcal I\rightarrow \Real^{m\times m}$, such that \begin{align}\label{1.Equivalence} \tilde E=LEK,\quad \tilde F=LFK+LEK'. \end{align} An equivalence transformation goes along with the premultiplication of \eqref{1.DAE} by $L$ and the coordinate change $x=K\tilde x$, resulting in the transformed DAE $\tilde E\tilde x'+\tilde F\tilde x=Lq$. \medskip It makes no difference whether one applies the equivalence transformation to the standard DAE \eqref{1.DAE} or to the version with properly involved derivative \eqref{1.propDAE}, owing to the following relations: \begin{align*} \tilde A&=LAK,\;\tilde D=K^{-1}DK,\;\tilde B=LBK+LAK'K^{-1}DK,\\ \tilde A\tilde D&=\tilde E,\quad \tilde B=\tilde F-\tilde A\tilde D'.
\end{align*} The DAE \eqref{1.DAE} is in \textit{standard canonical form} (SCF) \cite[Definition 2.4.5]{BCP89}, if \begin{align}\label{1.SCF} E=\begin{bmatrix} I_{d}&0\\0&N \end{bmatrix},\quad F=\begin{bmatrix} \Omega&0\\0&I_{m-d} \end{bmatrix}, \end{align} and $N$ is strictly upper triangular.\footnote{Analogously, $N$ may also have strict lower triangular form.} The matrix function $N$ does not need to have constant rank or nilpotency index. Trivially, choosing \begin{align*} A=E,\; D=\diag\{I_{d},0,1,\ldots,1\},\; B=F, \end{align*} one obtains the form \eqref{1.propDAE}. Obviously, a DAE in SCF decomposes into two essentially different parts, on the one hand a regular explicit ordinary differential equation (ODE) in $\Real^{d}$ and on the other hand some algebraic relations which require certain differentiations of components of the right-hand side $q$. More precisely, if $N^{\mu}$ vanishes identically, but $N^{\mu-1}$ does not, then derivatives up to order $\mu-1$ are involved. The dynamical degree of freedom of the DAE in SCF is determined by the first part and equals $d$. \medskip In the particular case of constant $N$ and $\Omega$, the matrix pair $\{E,F\}$ in \eqref{1.SCF} has Weierstra{\ss}--Kronecker form \cite[Section 1.1]{CRR} or Quasi-Kronecker form \cite{BergerReis}, and the nilpotency index of $N$ is again called \emph{Kronecker index} of the pair $\{E,F\}$ and of the matrix pencil $\lambda E+F$, respectively.\footnote{In general the Kronecker canonical form is complex-valued and $\Omega$ is in Jordan form. We refer to \cite[Remark 3.2]{BergerReis} for a plea not to call \eqref{1.SCF} a canonical form.} The basic regularity notion of Definition \ref{d.2} below generalizes regular matrix pairs (pencils) and their Kronecker index.
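For constant $N$, the algebraic part of the SCF can be resolved explicitly. Writing it as $Nv'+v=q_{2}$, where $q_{2}$ collects the last $m-d$ components of $q$ and $N^{\mu}=0$, $N^{\mu-1}\neq 0$, one verifies by substitution that
\begin{align*}
v=\sum_{k=0}^{\mu-1}(-1)^{k}N^{k}q_{2}^{(k)},
\end{align*}
since in $Nv'+v$ all terms of order $k\geq 1$ cancel pairwise and $N^{\mu}=0$. This makes explicit that precisely the derivatives of $q$ up to order $\mu-1$ enter; for time-varying $N$ the structure is analogous, but additional terms containing derivatives of $N$ appear.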
Thereby, the Jordan structure of the nilpotent matrix $N$, in particular the characteristic values $\theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0$, \begin{align*} &\theta_0 \quad \text{number of Jordan blocks of order } \geq 2,\\ &\theta_1 \quad \text{number of Jordan blocks of order } \geq 3,\\ &\quad\vdots\\ &\theta_{\mu-2}\quad \text{number of Jordan blocks of order } \mu, \end{align*} play their role, and one has $d=\rank E-\sum_{i=0}^{\mu-2}\theta_i $. Generalizations of these characteristic numbers play a major role further on. \medskip For readers who are familiar with at least one of the DAE concepts discussed in this article, we recommend, for a better understanding of the meaning of the characteristic values $\theta_i$, taking a look at Theorem \ref{t.Sum_equivalence} right away. \section{Basic terms and beyond that}\label{s.Basic&more} \subsection{What serves as our basic regularity notion}\label{s.regular} In our view, the elimination-reduction approach to DAEs is the most immediately obvious and accessible one, requiring the least technical effort, which is why we choose it as the basis here. We largely use the representation from \cite{RaRh}. We turn to the ordered pair $\{E,F\}$ of matrix functions $E,F:\mathcal I\rightarrow\Real^{m\times m}$ being sufficiently smooth, at least continuous, and consider the associated DAE \begin{align}\label{DAE0} E(t)x'(t)+F(t)x(t)=q(t),\quad t\in \mathcal I, \end{align} as well as the accompanying time-varying subspaces in $\Real^{m}$, \begin{align}\label{sub} \ker E(t),\quad S(t)=\{z\in \Real^m:F(t)z\in\im E(t)\},\quad t\in\mathcal I.
\end{align} Let $S_{can}$ denote the so-called \emph{flow-subspace} of the DAE, which means that $S_{can}(\bar t)$ is the subspace containing the overall flow of the homogeneous DAE at time $\bar t$, that is, the set of all possible function values $x(\bar t)$ of solutions of the DAE $Ex'+Fx=0$\footnote{$S_{can}(\bar t)$ is also called \emph{linear subspace of initial values which are consistent at time }$\bar t$, e.g., \cite{BergerIlchmann}.}, \begin{align*} S_{can}(\bar t):=\{\bar x\in \Real^m: \text{there is a solution \;} x:(\bar t-\delta,\bar t+\delta)\cap\mathcal I\rightarrow \Real^m,\; \delta>0,\\ \text{ of the homogeneous DAE such that } x(\bar t)=\bar x\},\quad \bar t\in\mathcal I. \end{align*} In accordance with various concepts, see \cite[Remark 3.4]{HaMae2023}, we agree on what \emph{regular} DAEs are, and show that then the time-varying flow-subspace $S_{can}(\bar t)$ is well-defined on all of $\mathcal I$ and has constant dimension. \begin{definition}\label{d.qualified} The pair $\{E,F\}$ is called \emph{qualified} on $\mathcal I$ if \begin{align*} \im [E(t) \,F(t)]=\Real^m,\quad \rank E(t)=r,\quad t\in\mathcal I, \end{align*} with integers $0\leq r\leq m$. \end{definition} \begin{definition}\label{d.prereg} The pair $\{E,F\}$ and the DAE \eqref{DAE0}, respectively, are called \emph{pre-regular} on $\mathcal I$ if \begin{align*} \im [E(t) \,F(t)]=\Real^m,\quad \rank E(t)=r,\quad \dim S(t)\cap \ker E(t)=\theta, \quad t\in\mathcal I, \end{align*} with integers $0\leq r\leq m$ and $\theta\geq 0$. Additionally, if $\theta =0$ and $r<m$, then the DAE is called \emph{regular with index one}, but if $\theta =0$ and $r=m$, then the DAE is called \emph{regular with index zero}. \end{definition} We underline that any pre-regular pair $\{E,F\}$ features three subspaces $S(t)$, $\ker E(t)$, and $S(t)\cap \ker E(t)$ having constant dimensions $r$, $m-r$, and $\theta$, respectively.
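As a minimal constant-coefficient illustration of Definition \ref{d.prereg} (our own example), consider
\begin{align*}
E=\begin{bmatrix} 0&1\\0&0 \end{bmatrix},\quad F=I_2:\qquad
\ker E=S=\{z\in\Real^2: z_2=0\},
\end{align*}
so that $\im[E\,F]=\Real^2$, $r=\rank E=1$, and $\theta=\dim S\cap\ker E=1$. The pair is pre-regular, but because of $\theta>0$ it is neither regular with index zero nor with index one.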
\medskip We emphasize and keep in mind that now not only the coefficients but also the resulting subspaces are time-dependent. Nevertheless, in the following we mostly suppress the argument $t$ for the sake of more readable formulas. The equations and relations are then meant pointwise for all arguments. \medskip The different cases for $\theta=0$ are well-understood. A regular index-zero DAE is actually a regular implicit ODE and $S_{can}=S=\Real^m,\, \ker E=\{0\}$. Regular index-one DAEs feature $S_{can}=S,\, \dim\ker E> 0$, e.g., \cite{GM86,CRR}. Note that $r=0$ leads to $S_{can}=\{0\}$. All these cases are only interesting here as intermediate results. \bigskip We turn back to the general case, describe the flow-subspace $S_{can}$, and end up with a regularity notion associated with a regular flow. The pair $\{E, F\}$ is supposed to be \emph{pre-regular}. The first step of the reduction procedure from \cite{RaRh} is then well-defined; we refer to \cite[Section 12]{RaRh} for the substantiating arguments. In the first instance, we apply this procedure to homogeneous DAEs only. We start with $E_0=E,\,F_0=F,\,m_0=m,\,r_{0}=r$, $\theta_0=\theta$, and consider the homogeneous DAE \begin{align*} E_0x'+F_0 x=0. \end{align*} By means of a basis $Z_0:\mathcal I\rightarrow \Real^{m_0\times(m_0-r_0)}$ of $(\im E_0)^{\perp}=\ker E_0^{*}$ and a basis $Y_0:\mathcal I\rightarrow \Real^{m_0\times r_0}$ of $\im E_0$ we divide the DAE into the two parts \begin{align*} Y_0^*E_0x'+Y_0^*F_0x=0,\quad Z_0^*F_0x=0. \end{align*} From $\im[E_0,\,F_0]=\Real^m$ we derive that $\rank Z_0^*F_0= m_0-r_{0}$, and hence the subspace $S_{0}=\ker Z_0^*F_0$ has dimension $r_{0}$. Obviously, each solution of the homogeneous DAE must stay in the subspace $S_{0}$.
Choosing a continuously differentiable basis $C_0:\mathcal I\rightarrow \Real^{m_0\times r_0}$ of $S_{0}$, each solution of the DAE can be represented as $x=C_0 x_{(1)}$, with a function $x_{(1)}:\mathcal I\rightarrow \Real^{r_0}$ satisfying the DAE reduced to size $m_1=r_{0}$, \begin{align*} Y_0^*E_0C_0 x_{(1)}'+Y_0^*(F_0C_0+E_0C'_0)x_{(1)}=0. \end{align*} Denote $E_1=Y_0^*E_0C_0$ and $F_1=Y_0^*(F_0C_0+E_0C'_0)$, which have size $m_1\times m_1$. The pre-regularity ensures that $E_1$ has constant rank $r_{1}=r_0-\theta_0\leq r_{0}$. Namely, we have \begin{align*} \ker E_1=\ker E_0C_0=C^{+}_0 (\ker E_0\cap S_{0}),\quad \dim \ker E_1= \dim (\ker E_0\cap S_{0})=\theta_0. \end{align*} Here, $C_0(t)^+$ denotes the Moore--Penrose generalized inverse of $C_0(t)$. Next we repeat the reduction step, \begin{equation}\label{basic_reduction} \begin{array}{rl} E_{i}&:=Y_{i-1}^*E_{i-1}C_{i-1},\quad F_{i}:= Y_{i-1}^*(F_{i-1}C_{i-1}+E_{i-1}C'_{i-1}),\\ &Y_{i-1}, Z_{i-1}, C_{i-1} \text{ are smooth bases of the three subspaces}\\ &\quad \im E_{i-1},\; (\im E_{i-1})^{\perp},\; \text{ and }\; S_{i-1}:=\ker Z_{i-1}^*F_{i-1},\\ &\theta_{i-1}=\dim (\ker E_{i-1}\cap S_{i-1}), \end{array} \end{equation} supposing that the new pair $\{E_{i},F_{i}\}$ is pre-regular again, and so on. The pair $\{E_{i},F_{i}\}$ has size $m_{i}:=r_{i-1}$ and $E_{i}$ has rank $r_{i}=r_{i-1}-\theta_{i-1}$. This yields the decreasing sequence $ m\geq r_{0}\geq \cdots\geq r_{j-1}\geq r_{j}\geq\cdots \geq 0 $ and rectangular matrix functions $C_i: \mathcal I \rightarrow \Real^{r_{i-1}\times r_i}$ with full column-rank $r_i$. Denote by $\mu$ the smallest integer such that either $r_{\mu-1}=r_{\mu}>0$ or $r_{\mu-1}=0$. Then it follows that $(\ker E_{\mu-1})\cap S_{\mu-1}=\{0\}$, which means in turn that \begin{align*} E_{\mu-1}x_{(\mu-1)}'+F_{\mu-1}x_{(\mu-1)}=0 \end{align*} represents a regular index-1 DAE.
If $r_{\mu-1}=0$, that is $E_{\mu-1}=0$, then $F_{\mu-1}$ is nonsingular due to the pre-regularity of the pair, which leads to $S_{\mu-1}=\{0\}$, $C_{\mu-1}=0$, and a zero flow $x_{(\mu-1)}(t)\equiv 0 $. In turn there is only the identically vanishing solution \[x=C_0C_1\cdots C_{\mu-2}x_{(\mu-1)}=0\] of the homogeneous DAE, and $C_0C_1\cdots C_{\mu-2}C_{\mu-1}=0$. On the other hand, if $r_{\mu-1}=r_{\mu}>0$ then $x_{(\mu-1)}=C_{\mu-1}x_{(\mu)}$, $\rank C_{\mu-1}=r_{\mu-1}$, and $E_{\mu}$ is nonsingular, such that the DAE \begin{align*} E_{\mu}x_{(\mu)}'+F_{\mu}x_{(\mu)}=0 \end{align*} is actually an implicit regular ODE living in $\Real^{m_{\mu}}$,\;$m_{\mu}=r_{\mu-1}$, and $S_{\mu}=\Real^{r_{\mu-1}}$. Letting $C_{\mu}=I_{m_{\mu}}= I_{r_{\mu-1}}$, each solution of the original homogeneous DAE \eqref{DAE0} has the form \begin{align*} x=Cx_{(\mu)}, \quad C:=C_0C_1\cdots C_{\mu-1}=C_0C_1\cdots C_{\mu-1}C_{\mu} :\mathcal I\rightarrow\Real^{m\times r_{\mu-1}},\; \rank C=r_{\mu-1}. \end{align*} Moreover, for each $\bar t\in \mathcal I$ and each $z\in\im C(\bar t)$, there is exactly one solution of the original homogeneous DAE passing through it, $x(\bar t)=z$, which indicates that $\im C=S_{can}$. As proved in \cite{RaRh}, the ranks $r=r_{0}> r_{1}>\cdots> r_{\mu-1}$ are independent of the special choice of the involved basis functions. In particular, \begin{align*} d:=r_{\mu-1}=r-\sum_{i=0}^{\mu-2}\theta_i=\rank C \end{align*} appears to be the dynamical degree of freedom of the DAE. The property of pre-regularity does not necessarily carry over to the subsequent reduction pairs, e.g., \cite[Example 3.2]{HaMae2023}.
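A minimal worked illustration of the reduction procedure (our own example) is provided by the constant pair
\begin{align*}
&E_0=\begin{bmatrix} 0&1\\0&0 \end{bmatrix},\quad F_0=I_2:\qquad
Z_0=\begin{bmatrix} 0\\1 \end{bmatrix},\quad
Y_0=\begin{bmatrix} 1\\0 \end{bmatrix},\quad
S_0=\ker Z_0^{*}F_0=\im \begin{bmatrix} 1\\0 \end{bmatrix},\quad
C_0=\begin{bmatrix} 1\\0 \end{bmatrix},\\
&E_1=Y_0^{*}E_0C_0=0,\quad F_1=Y_0^{*}(F_0C_0+E_0C_0')=1.
\end{align*}
Here $r_0=1$, $\theta_0=1$, and the reduced pair $\{E_1,F_1\}=\{0,1\}$ is pre-regular with $r_1=0$, hence $\mu=2$, $d=r_0-\theta_0=0$, and $S_{can}=\{0\}$, in accordance with the Kronecker index two of this matrix pencil.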
\begin{definition}\label{d.2a} The pre-regular pair $\{E,F\}$ with $r<m$ and the associated DAE \eqref{DAE0}, respectively, are called \emph{regular} if there is an integer $\mu\in\Natu$ such that the above reduction procedure \eqref{basic_reduction} is well-defined up to level $\mu-1$, each pair $\{E_{i},F_{i}\}$, $i=0,\ldots,\mu-1$, is pre-regular, and, if $r_{\mu-1}>0$, the matrix function $E_{\mu}$ is well-defined and nonsingular, $r_{\mu}=r_{\mu-1}$. If $r_{\mu-1}=0$ we set $r_{\mu}=r_{\mu-1}=0$. The integer $\mu$ is called \emph{the index of the DAE \eqref{DAE0} and of the given pair $\{E,F\}$}. The index $\mu$ and the ranks $r=r_{0}> r_{1}>\cdots > r_{\mu-1}=r_{\mu}$ are called \emph{characteristic values} of the pair and the DAE, respectively. \end{definition} By construction, for a regular pair it follows that $r_{i+1}= r_{i}-\theta_{i}$, $i=0,\ldots,\mu-1$. Therefore, in place of the above $\mu+1$ rank values $r_0,\ldots,r_{\mu}$, the following rank and dimensions, \begin{align}\label{theta} r \quad \text{and} \quad \theta_0 \geq\theta_1 \geq \cdots \geq \theta_{\mu-2} >\theta_{\mu-1}=0, \end{align} \begin{align}\label{thetadef} \theta_{i}=\dim (\ker E_{i}\cap S_{i}),\; i\geq0, \end{align} can serve as characteristic quantities. Later it will become clear that these data play an important role in other concepts, too, which is the reason for the following definition, equivalent to Definition \ref{d.2a}. \begin{definition}\label{d.2} The pre-regular pair $\{E,F\}$, $E,F:\mathcal I\rightarrow \Real^{m\times m}$, with $r=\rank E<m$, and the associated DAE \eqref{DAE0}, respectively, are called \emph{regular} if there is an integer $\mu\in\Natu$ such that the above reduction procedure is well-defined up to level $\mu-1$, with each pair $\{E_{i},F_{i}\}$, $i=0,\ldots,\mu-1$, being pre-regular, and associated values \eqref{theta}. The integer $\mu$ is called \emph{the index of the DAE \eqref{DAE0} and the given pair $\{E,F\}$}.
The index $\mu$ and the values \eqref{theta} are called \emph{characteristic values} of the pair and the DAE, respectively. \end{definition} At this point we add the further relationship \begin{align}\label{thetarank} \theta_{i}=\dim\ker \begin{bmatrix} Y_{i}^{*}E_{i}\\Z_{i}^{*}F_{i} \end{bmatrix}= m_{i} -\rank \begin{bmatrix} Y_{i}^{*}E_{i}\\Z_{i}^{*}F_{i} \end{bmatrix},\quad i=0,\ldots,\mu-1, \end{align} with which all quantities in \eqref{theta} are related to rank functions. \begin{remark}\label{r.pencil} If $\{E,F\}$ is actually a pair of matrices $E,F\in \Real^{m\times m}$, then the pair is regular with index $\mu$ and characteristics $\theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0$ if and only if the matrix pencil is regular and the nilpotent matrix in its Kronecker normal form has \begin{align*} &\theta_0 \quad \text{Jordan blocks of order } \geq 2,\\ &\theta_1 \quad \text{Jordan blocks of order } \geq 3,\\ &\quad\vdots\\ &\theta_{\mu-2}\quad \text{Jordan blocks of order } \mu. \end{align*} \end{remark} \begin{remark}\label{r.regularity} As mentioned above, the presentation in this section mainly goes back to \cite{RaRh}. However, we have not taken up their notations \emph{regular} and \emph{completely regular} for the coefficient pairs and \emph{reducible} and \emph{completely reducible} for DAEs, but rather those of other works, which we consider more appropriate to the matter.\footnote{In \cite{RaRh}, the coefficient pairs of DAEs which have arbitrarily many solutions, like \cite[Example 3.2 ]{HaMae2023}, may belong to the \emph{regular} ones.} Not by the authors themselves, but sometimes by others, the index from \cite{RaRh} is also called \emph{geometric index}, e.g., \cite[Subsection 2.4]{RR2008}. An early predecessor version of this reduction procedure was already proposed and analyzed in \cite{Cis1982} under the name \emph{elimination of the unknowns}, even for more general pairs of rectangular matrix functions, see also Subsection \ref{subs.elimination}.
The regularity notion given in \cite{Cis1982} is consistent with Definition \ref{d.2}. Another closely related reduction technique was presented and extended a few years ago under the name \emph{dissection concept} \cite{Jansen2014}. This notion of regularity also agrees with Definition \ref{d.2}, see Section \ref{subs.dissection}. \end{remark} \begin{theorem}\label{t.Scan} Let the DAE \eqref{DAE0} be regular on $\mathcal I$ with index $\mu$ and characteristic values \eqref{theta}. \begin{description} \item[\textrm{(1)}] Then the subspace $S_{can}(t)\subset \Real^m$ has dimension $d=r-\sum_{i=0}^{\mu-2}\theta_i=r_{\mu-1}$ for all $t\in\mathcal I$, and the matrix function $C:\mathcal I\rightarrow \Real^{m\times d}$, $C=C_{0}\cdots C_{\mu-1}$, generated by the reduction procedure is a basis of $S_{can}$. \item[\textrm{(2)}] The DAE features precisely the same structure on each subinterval $\mathcal I_{sub}\subset \mathcal I$. \end{description} \end{theorem} \begin{proof} Regarding the relation $r_{i+1}= r_{i}-\theta_{i}$, $i=0,\ldots,\mu-2$, directly resulting from the reduction procedure, the assertion is an immediate consequence of \cite[Theorem 13.3]{RaRh}. \end{proof} Two canonical subspaces varying with time in $\Real^m$ are associated with a regular DAE \cite{CRR,HaMae2023}. The first one is the flow-subspace $S_{can}$. The second one is a unique pointwise complement $N_{can}$ to the flow-subspace, such that \begin{align*} S_{can}(t)\oplus N_{can}(t)=\Real^m,\quad N_{can}(t)\supset \ker E(t),\quad t\in \mathcal I, \end{align*} and the initial condition $x(\bar t)-\bar x\in N_{can}(\bar t)$ fixes exactly one of the DAE solutions for each given\\ $\bar t\in \mathcal I,\, \bar x\in \Real^{m}$ without any consistency conditions for the right-hand side $q$ or its derivatives, \cite[Theorem 5.1]{HaMae2023}, also \cite{CRR}.
\begin{theorem}\label{t.solvability} If the DAE \eqref{DAE0} is regular on $\mathcal I$ with index $\mu$ and characteristics \eqref{theta}, then the following assertions are valid: \begin{description} \item[\textrm{(1)}] The DAE is solvable at least for each right-hand side $q\in C^{m}(\mathcal I,\Real^{m})$. \item[\textrm{(2)}] $d=r-\sum_{i=0}^{\mu-2}\theta_i=r_{\mu-1}$ is the dynamical degree of freedom. \item[\textrm{(3)}] The condition $r=\sum_{i=0}^{\mu-2}\theta_i$ indicates a DAE with zero degree of freedom\footnote{So-called \emph{purely algebraic} systems.} and $S_{can}=\{0\}$, i.e. $d=0$. \item[\textrm{(4)}] For arbitrarily given $q\in C^{m}(\mathcal I,\Real^{m})$, $\bar t\in \mathcal I$, and $\bar x\in\Real^m$, the initial value problem \begin{align*} Ex'+Fx=q,\quad x(\bar t)=\bar x, \end{align*} is uniquely solvable if the consistency condition \eqref{cons2} in the proof below is satisfied. Otherwise there is no solution. \item[\textrm{(5)}] The DAE has perturbation index $\mu$ on each compact subinterval of $\mathcal I$. \end{description} \end{theorem} \begin{proof} \textrm (1): Given $q\in C^{m}(\mathcal I,\Real^{m})$, we apply the previous reduction now to the inhomogeneous DAE \eqref{DAE0}. We describe the first level only. The general solution of the derivative-free part $Z_0^{*}F_0x=Z_0^{*}q$ of the given DAE now reads \begin{align*} x=(I-(Z_0^{*}F_0)^{+}Z_0^{*}F_0)x+ (Z_0^{*}F_0)^{+}Z_0^{*}F_0x= C_0x_{(1)}+ (Z_0^{*}F_0)^{+}Z_0^{*}q, \end{align*} and inserting into $Y_0^{*}E_0x'+Y_0^{*}F_0x=Y_0^{*}q$ yields the reduced DAE $E_{1}x_{(1)}'+F_{1}x_{(1)}=q_{(1)}$, with \begin{align*} q_{(0)}=q,\; q_{(1)}=Y_0^{*}q_{(0)}-Y_0^{*}E_0((Z_0^{*}F_0)^{+}Z_0^{*}q_{(0)})'-Y_0^{*}F_0(Z_0^{*}F_0)^{+}Z_0^{*}q_{(0)}.
\end{align*} Finally, using the matrix function sequence constructed above, each solution of the DAE has the form \begin{align} x&=C_0x_{(1)}+(Z_0^*F_0)^{+}Z_0^*q_{(0)} =C_0(C_1x_{(2)}+(Z_1^*F_1)^{+}Z_1^*q_{(1)})+(Z_0^*F_0)^{+}Z_0^*q_{(0)}=\cdots\nonumber\\ &=\underbrace{C_0C_1\cdots C_{\mu-1}}_{= C}x_{(\mu)}+p,\label{DAEsol}\\ p&=(Z_0^*F_0)^{+}Z_0^*q_{(0)}+C_0(Z_1^*F_1)^{+}Z_1^*q_{(1)}+\cdots+ C_0C_1\cdots C_{\mu-2}(Z_{\mu-1}^*F_{\mu-1})^{+}Z_{\mu-1}^*q_{(\mu-1)}, \nonumber \\\nonumber \quad &q_{(j+1)}=Y_{j}^{*}q_{(j)}-Y_{j}^{*}E_{j}((Z_{j}^{*}F_{j})^{+}Z_{j}^{*}q_{(j)})'-Y_{j}^{*}F_{j}(Z_{j}^{*}F_{j})^{+}Z_{j}^{*}q_{(j)},\; j=0,\ldots,\mu-1,\nonumber \end{align} in which $x_{(\mu)}$ is any solution of the implicit regular ODE \[ E_{\mu}x_{(\mu)}'+F_{\mu}x_{(\mu)}=q_{(\mu)}. \] Since $q$ and the coefficients are supposed to be smooth, all derivatives exist, and no further conditions with respect to $q$ will arise. {\textrm (4)} Expression \eqref{DAEsol} yields $x(\bar t)= C(\bar t)x_{(\mu)}(\bar t)+p(\bar t)$. The initial condition $x(\bar t)=\bar x$ splits by means of the projector $\Pi_{can}(\bar t)$ onto $S_{can}(\bar t)$ along $N_{can}(\bar t)$ into the two parts \begin{align} \Pi_{can}(\bar t)\bar x= C(\bar t) x_{(\mu)}(\bar t)+ \Pi_{can}(\bar t)p(\bar t)\label{cons1},\\ ( I-\Pi_{can}(\bar t))\bar x= (I-\Pi_{can}(\bar t))p(\bar t)\label{cons2}. \end{align} Merely part \eqref{cons1} contains the component $x_{(\mu)}(\bar t)$, which can be freely selected in $\Real^{r_{\mu-1}}$, and \\ $x_{(\mu)}(\bar t)=C(\bar t)^+\Pi_{can}(\bar t)(\bar x-p(\bar t))$ is the only solution. In contrast, \eqref{cons2} does not contain any free components. It is a strong consistency requirement and must be satisfied a priori for solvability. Otherwise this (overdetermined) initial value problem fails to be solvable. {\textrm (2),(3),(5)} are straightforward now; for details see \cite[Theorem 5.1]{HaMae2023}.
\end{proof} The following proposition comprises enlightening special cases which will be a useful tool for proving equivalence assertions later on. Namely, for given integers $\kappa \geq 2$, $d\geq 0$, $l=l_{1}+\cdots +l_{\kappa}$, $l_{i}\geq 1$, $m=d+l$ we consider the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow \Real^{m\times m}$, in the special block structured form, \begin{align}\label{blockstructure} E=\begin{bmatrix} I_{d}&\\&N \end{bmatrix},\quad F=\begin{bmatrix} \Omega&\\&I_{l} \end{bmatrix}, \quad N=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1\kappa}\\ &0&N_{23}&&N_{2\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1, \kappa}\\ &&&&0 \end{bmatrix},\\ \text{with blocks}\quad N_{ij} \quad\text{of sizes}\quad l_{i}\times l_{j}.\nonumber \end{align} If $d=0$ then the respective parts are absent. All blocks are sufficiently smooth on the given interval $\mathcal I$. $N$ is strictly block upper triangular, thus nilpotent with $N^{\kappa}=0$. We further set $N=0$ for $\kappa=1$. Obviously, the pair $\{E,F\}$ is then pre-regular with $r=d$ and $\theta_0=0$, and hence the DAE has index $\mu=\kappa=1$. Below we are mainly interested in the case $\kappa\geq 2$. \begin{proposition}\label{p.STform} Let the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow \Real^{m\times m}$, be given in the form \eqref{blockstructure} and let $\kappa \geq 2$. \begin{description} \item[\textrm{(1)}] If the secondary diagonal blocks $N_{i, i+1}:\mathcal I\rightarrow \Real^{l_{i}\times l_{i+1}}$ in \eqref{blockstructure} have full column-rank, that is, \[\rank N_{i, i+1}=l_{i+1}, \quad i=1,\ldots,\kappa-1, \] then $l_{1}\geq \cdots\geq l_{\kappa}$ and the corresponding DAE is regular with index $\mu=\kappa$ and characteristic values \begin{align*} r=m-l_{1},\; \theta_{0}=l_{2}, \ldots,\, \theta_{\mu-2}=l_{\mu}.
\end{align*} \item[\textrm{(2)}] If the secondary diagonal blocks $N_{i, i+1}$ in \eqref{blockstructure} have full row-rank, that is, \[\rank N_{i, i+1}=l_{i}, \quad i=1,\ldots,\kappa-1, \] then $l_{1}\leq \cdots\leq l_{\kappa}$ and the corresponding DAE is regular with index $\mu=\kappa$ and characteristic values \begin{align*} r=m-l_{\mu},\; \theta_{0}=l_{\mu-1}, \ldots,\, \theta_{\mu-2}=l_{1}. \end{align*} \end{description} \end{proposition} \begin{proof} \textrm (1) Suppose the secondary diagonal blocks $N_{i, i+1}$ have full column-ranks $l_{i+1}$. It results that $r=\rank E =d+l-l_1=m-l_1$ and $\theta_0=\dim S\cap\ker E=\dim(\ker N\cap\im N)= \rank N_{12}=l_2$, thus the pair is pre-regular. For deriving the reduction step we form the two auxiliary matrix functions \begin{align*} \tilde N=\begin{bmatrix} N_{12}&&\cdots&N_{1\kappa}\\ 0 &N_{23}&&N_{2\kappa}\\ &&\ddots&\vdots\\ &&&N_{\kappa-1, \kappa}\\ 0&&\cdots&0 \end{bmatrix}:\mathcal I\rightarrow\Real^{l\times (l-l_1)},\quad \tilde E=\begin{bmatrix} I_d&\\&\tilde N \end{bmatrix}:\mathcal I\rightarrow\Real^{m\times (m-l_1)}, \end{align*} which have full column rank, $l-l_1$ and $m-l_1$, respectively. By construction, one has $\im \tilde N=\im N$, $\im \tilde E=\im E$. The matrix function $C=\tilde E$ serves as basis of the subspace \begin{align*} S=\left\{ \begin{bmatrix} u\\v \end{bmatrix}\in \Real^{d+l}:v\in \im N \right\}. \end{align*} Furthermore, with any smooth pointwise nonsingular matrix function $M:\mathcal I\rightarrow \Real^{(m-l_1)\times (m-l_1)}$, the matrix function $Y=\tilde EM$ serves as a basis of $\im E$. We will specify $M$ subsequently. 
Since $\tilde N^*\tilde N$ remains pointwise nonsingular, one obtains the relations \begin{align*} \mathfrak A:=[ \underbrace{0}_{l_1} \;\underbrace{\tilde N^*\tilde N}_{l-l_1} ]\,\tilde N = \tilde N^*\tilde N \,[ \underbrace{0}_{l_1} \,I_{l-l_1}]\, \tilde N = \tilde N^*\tilde N \; \mathring{N_1} \end{align*} with the structured matrix function \begin{align*} \mathring{N_1}=\begin{bmatrix} 0&N_{23}&&\cdots&N_{2\kappa}\\ &0&N_{34}&&N_{3\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1, \kappa}\\ &&&&0 \end{bmatrix} :\mathcal I\rightarrow \Real^{(l-l_1)\times(l-l_1)},\\ \text{again with the full column-rank blocks}\; N_{ij}. \end{align*} We will show that the reduced pair $\{E_1,F_1\}$ actually features an analogous structure. We have \begin{align*} E_1= Y^{*}EC=M^{*}\begin{bmatrix} I_d&\\&\mathfrak A \end{bmatrix} =M^{*}\begin{bmatrix} I_d&\\& \tilde N^*\tilde N \; \mathring{N_1} \end{bmatrix} = M^{*}\begin{bmatrix} I_d&\\& \tilde N^*\tilde N \end{bmatrix} \begin{bmatrix} I_d&\\& \mathring{N_1} \end{bmatrix}, \end{align*} and \begin{align*} F_1&= Y^{*}FC+ Y^{*}EC'=M^{*}\begin{bmatrix} I_d&\\& \tilde N^*\tilde N \end{bmatrix} \begin{bmatrix} \Omega&\\& I_{l-l_1} +\mathring{N_1'} \end{bmatrix}\\ &= M^{*}\begin{bmatrix} I_d&\\& \tilde N^*\tilde N (I_{l-l_1} +\mathring{N_1'}) \end{bmatrix} \begin{bmatrix} \Omega&\\& I_{l-l_1} \end{bmatrix}. \end{align*} Since $I_{l-l_1}+\mathring{N_1'}$ is pointwise nonsingular, we choose \begin{align*} M^{*}=\begin{bmatrix} I_d&\\& (I_{l-l_1} +\mathring{N_1'})^{-1}(\tilde N^*\tilde N )^{-1} \end{bmatrix}, \end{align*} which leads to \begin{align*} E_1&=\begin{bmatrix} I_d&\\& (I_{l-l_1}+\mathring{N_1'})^{-1} \end{bmatrix}\begin{bmatrix} I_d&\\& \mathring{N_1} \end{bmatrix}= \begin{bmatrix} I_d&\\& (I_{l-l_1}+\mathring{N_1'})^{-1} \mathring{N_1} \end{bmatrix}=: \begin{bmatrix} I_d&\\& N_1 \end{bmatrix},\\ F_1&=\begin{bmatrix} \Omega&\\& I_{l-l_1} \end{bmatrix}.
\end{align*} By construction, see Lemma \ref{l.SUT1}, the resulting matrix function $N_1$ has again a strictly upper triangular block structure and shares its secondary diagonal blocks with those of $N$ (except for $N_{12}$), that is \begin{align*} N_1=\begin{bmatrix} 0&N_{23}&*&\cdots&*\\ &0&N_{34}&&*\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1, \kappa}\\ &&&&0 \end{bmatrix} :\mathcal I\rightarrow \Real^{(l-l_1)\times(l-l_1)}. \end{align*} Thus, the new pair has an analogous block structure to the given one, is again pre-regular but now with $m_1=r=m-l_1 $, $r_1=m_1-l_2=m-l_1-l_2 $, $\theta_1=\rank N_{23}=l_3$. Proceeding further in such a way we arrive at the pair $\{E_{\kappa-2},F_{\kappa-2}\}$, \begin{align*} E_{\kappa-2}=\begin{bmatrix} I_d&\\&N_{\kappa-2} \end{bmatrix},\quad F_{\kappa-2}=\begin{bmatrix} \Omega&\\&I_{l_{\kappa-1}+l_{\kappa}} \end{bmatrix},\quad N_{\kappa-2}=\begin{bmatrix} 0&N_{\kappa-1 ,\kappa}\\0&0 \end{bmatrix}, \end{align*} with $m_{\kappa-2}=m-l_1-\cdots-l_{\kappa-2}$, $r_{\kappa-2}=m-l_1-\cdots-l_{\kappa-1}=d+l_{\kappa}$, and $\theta_{\kappa-2}=\rank N_{\kappa-1,\kappa}=l_{\kappa}$, and the final pair $\{E_{\kappa-1},F_{\kappa-1}\}$, \begin{align*} E_{\kappa-1}=\begin{bmatrix} I_d&\\&0 \end{bmatrix},\quad F_{\kappa-1}=\begin{bmatrix} \Omega&\\&I_{l_{\kappa}} \end{bmatrix},\quad m_{\kappa-1}=d+l_{\kappa}, r_{\kappa-1}=d, \theta_{\kappa-1}=0, \end{align*} which completes the proof of the first assertion. (2): We suppose now that the secondary diagonal blocks $N_{i, i+1}$ have full row-ranks $l_i$, and thus nullspaces of dimension $l_{i+1}-l_i$, $ i=1,\ldots,\kappa-1$. The pair $\{E,F\}$ is pre-regular and $r=\rank E=d+\rank N=d+l-l_{\kappa}= m-l_{\kappa}$, and $\dim (\ker E\cap S)=\dim (\ker N\cap\im N)=l_{1}+(l_2-l_1)+\cdots + (l_{\kappa -1}-l_{\kappa-2})= l_{\kappa-1} $, thus $\theta_0=l_{\kappa-1} $.
The constant matrix function \begin{align*} C=\begin{bmatrix} I_d&&&\\ &I_{l_1}&&\\ &&\ddots&\\ &&&I_{l_{\kappa -1}}\\ &&&0 \end{bmatrix} \end{align*} serves as a basis of $S$ and also as a basis of $\im E$, $Y=C$. This leads simply to \begin{align*} E_1=C^*EC=\begin{bmatrix} I_d&\\&N_1 \end{bmatrix}, \quad F_1=C^*FC=\begin{bmatrix} \Omega&\\&I_{l-l_{\kappa}} \end{bmatrix}, \end{align*} with $m_1=m-l_{\kappa}$, $r_1=m_1-l_{\kappa-1}$, and \begin{align*} N_1=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1,\kappa-1}\\ &0&N_{23}&&N_{2,\kappa-1}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-2, \kappa-1}\\ &&&&0 \end{bmatrix}. \end{align*} It results that $\theta_1=l_{\kappa-2}$, and so on. \end{proof} In Section \ref{sec:SCF} and Section \ref{subs.A_strictly} we go into further detail about these two structural forms from Proposition \ref{p.STform} and also illustrate there the difference to the Weierstraß–Kronecker form with a simple example. \subsection{A specifically geometric view on the matter}\label{subs.degree} A regular DAE living in $\Real^m$ can now be viewed as an embedded regular implicit ODE in $\Real^d$, which in turn uniquely defines a vector field on the configuration space $\Real^d$. Of course, this perspective has an impressive potential in the case of nonlinear problems, when smooth submanifolds replace linear subspaces, etc. We will give a brief outline and references in Section \ref{s.nonlinearDAEs} below. An important aspect hereby is that one first provides the manifold that makes up the configuration space, and only then examines the flow, which also allows for a flow that is not necessarily regular. In this context, the extra notion \emph{degree of the DAE} introduced by \cite[Definition 8]{Reich}\footnote{Definition \ref{d.degree} below.} is relevant. It actually measures the degree of the embedding depth. In the present section we concentrate on the linear case and do not use the special geometric terminology.
Instead we adapt the notion so that it fits in with our presentation. \medskip Let us start with a further look at the basic procedure yielding a regular DAE. In the second-to-last step of our basis reduction, the pair $\{E_{\mu-1},F_{\mu-1}\}$ is pre-regular and $\theta_{\mu-1}=0$ on all $\mathcal I$. If thereby $r_{\mu-1}=0$ then there is no dynamic part, one has $d=0$ and $S_{can}=\{0\}$. This instance is of no further interest within the geometric context. However, the interest comes alive if $r_{\mu-1}>0$. Recall that by construction $r_{\mu-1}=r_0-\sum_{i=0}^{\mu-2}\theta_i=d$. In the regular case we see \begin{align*} \im C_0\cdots C_{\mu-2}\supsetneqq \im C_0\cdots C_{\mu-1}=\im C_0\cdots C_{\mu}, \quad r_{\mu-2}>r_{\mu-1}=r_{\mu}. \end{align*} If now the second-to-last pair failed to be pre-regular, but were qualified, with the associated rank function $\theta_{\mu-1}$ being positive at a certain point $t_*\in\mathcal I$ and zero elsewhere on $\mathcal I$, then the resulting last matrix function $E_{\mu}(t)$ would fail to remain nonsingular precisely at this critical point, because of $\rank E_{\mu}(t)=r_{\mu-1}-\theta_{\mu-1}(t)$. Nevertheless, we could set $C_{\mu}=I_{r_{\mu-1}}$ and arrive at \begin{align*} \im C_0\cdots C_{\mu-2}\supsetneqq \im C_0\cdots C_{\mu-1}=\im C_0\cdots C_{\mu}, \quad r_{\mu-2}>r_{\mu-1}\geq r_{\mu}(t). \end{align*} Clearly, then the resulting ODE in $\Real^{r_{\mu-1}}$ and in turn the given DAE are no longer regular and one is confronted with a singular vector field.
\begin{example}\label{e.degree} Consider the qualified pair with $m=2$, $r=1$, \begin{align*} E(t)=\begin{bmatrix} 1&-t\\1&-t \end{bmatrix},\quad F(t)=\begin{bmatrix} 2&0\\0&2 \end{bmatrix},\quad t\in \Real, \end{align*} yielding \begin{align*} Z_0&=\begin{bmatrix} 1\\-1 \end{bmatrix},\; Z_0^*F_0= \begin{bmatrix} 2&-2 \end{bmatrix}, \; C_0= \begin{bmatrix} 1\\1 \end{bmatrix},\; Y_0= \begin{bmatrix} 1\\1 \end{bmatrix},\\ E_1(t)&=2(1-t),\; F_1(t)=4,\; m_1=r_0=1, \\ &\ker E_0(t) \cap \ker (Z_0^*F_0)(t)=\{z\in\Real^2: z_1-tz_2=0, z_1=z_2\}, \end{align*} and further $\theta_0(t)=0$ for $t\neq 1$, but $\theta_0(1)=1$. The homogeneous DAE has the solutions \begin{align*} x(t)=\gamma (1-t)^2\begin{bmatrix} 1\\1 \end{bmatrix},\; t\in \Real, \quad \text{with arbitrary}\; \gamma\in \Real, \end{align*} which manifests the singularity of the flow at the point $t_*=1$. Observe that now the canonical subspace varies its dimension, more precisely, \begin{align*} S_{can}(t_*)=\{0\},\quad S_{can}(t)=\im C_0,\; \text{ for all}\;t\neq t_*. \end{align*} \end{example} \begin{definition}\label{d.degreelin} The DAE given by the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow\Real^{m\times m}$ has, if it exists, \emph{degree $s\in \Natu $}, if the reduction procedure in Section \ref{s.regular} is well-defined up to level $s-1$, the pairs $\{E_i,F_i\}$, $i=0,\ldots,s-1,$ are pre-regular, the pair $\{E_s,F_s\}$ is qualified, \begin{align*} \im C_0\cdots C_{s-1}\supsetneqq \im C_0\cdots C_{s}, \quad r_{s-1}>r_{s}, \end{align*} and $s$ is the largest such integer. The subspace $\im C_0\cdots C_{s}$ is called \emph{configuration space} of the DAE. \end{definition} We mention that $\im C_0\cdots C_{s}= C_0\cdots C_{s} (\Real^{r_{s}})$ and admit that, depending on the view, alternatively, $\Real^{r_{s}}$ can be regarded as the configuration space, too.
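For orientation we note, as a sketch of our own reading (with the empty product of the $C_i$ understood as $I_m$ and $r_{-1}:=m$), how Definition \ref{d.degreelin} applies to Example \ref{e.degree}: the pair $\{E_0,F_0\}$ there is qualified with $r_0=1$, but fails to be pre-regular since $\theta_0$ is nonconstant, so the largest admissible integer is $s=0$, with
\begin{align*}
\im C_{0}=\left\{\gamma\begin{bmatrix}1\\1\end{bmatrix}:\gamma\in\Real\right\}\subsetneqq\Real^{2},\qquad r_{-1}=2>r_{0}=1.
\end{align*}
This matches the observation $S_{can}(t)=\im C_0$ for $t\neq t_*$ made in the example.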
\medskip If the pair $\{E,F\}$ is regular with index $\mu\in\Natu$, then its degree is $s=\mu-1$ and \begin{align*} \im C_0\cdots C_{\mu-2}\supsetneqq \im C_0\cdots C_{\mu-1}=\im C_0\cdots C_{\mu}, \quad r_{\mu-2}>r_{\mu-1}=r_{\mu}. \end{align*} On the other hand, if the DAE has degree $s$ and $r_{s}=0$ then it results that $C_{s}=0$, in turn $\im C_0\cdots C_{s}=\{0\}$ and $\theta_{s}=0$. Then the DAE is regular with index $\mu=s+1$ but the configuration space is trivial. As mentioned already, since the dynamical degree is zero, this instance is of no further interest in the geometric context. \medskip Conversely, if the DAE has degree $s$ and $r_{s}>0$, then the pair $\{E_{s},F_{s}\}$ is not necessarily pre-regular but merely qualified such that, nevertheless, the next level $\{E_{s+1},F_{s+1}\}$ is well-defined, we can state $m_{s+1}=r_{s}$, $C_{s+1}=I_{r_{s}}$, and \begin{align*} \rank E_{s+1}(t)&=m_{s+1}-\dim(\ker E_{s}(t)\cap \ker Z^*_{s}(t)F_{s}(t)) =r_{s}-\theta_{s}(t),\quad t\in\mathcal I. \end{align*} It turns out that if $\theta_{s}(t)$ vanishes on $\mathcal I$ except at isolated points, then a vector field with isolated singular points results. If $\theta_{s}(t)$ vanishes identically, then the DAE is regular. This approach unfolds its potential especially for quasi-linear autonomous problems, see \cite{RaRh,Reich} and Section \ref{subs.nonlinearDAEsGeo}, however, the questions concerning the sensitivity of the solutions with respect to perturbations of the right-hand sides fall by the wayside. \section{Further direct concepts without recourse to derivative arrays}\label{s.Solvab} We are concerned here with the regularity notions and approaches from \cite{Cis1982,Jansen2014,KuMe2006,CRR} associated with the elimination procedure, the dissection concept, the strangeness reduction, and the tractability framework compared to Definition \ref{d.2}.
The approaches in \cite{Cis1982,Jansen2014,KuMe2006,RaRh} are de facto special solution methods including reduction steps by elimination of variables and differentiations of certain variables. In contrast, the concept in \cite{CRR} aims at a structural projector-based decomposition of the given DAE in order to analyze it subsequently. Each of the concepts is associated with a sequence of pairs of matrix functions, each supported by certain rank conditions that look very different. Thus also the regularity notions, which require in each case that the sequences are well-defined with well-defined termination, are apparently completely different. However, at the end of this section, we will see that all these regularity notions agree with our Definition \ref{d.2}, and that the characteristics \eqref{theta} capture all the rank conditions involved. When describing the individual methods, traditionally the same characters are used to clearly highlight certain parallels, in particular, $\{E_j, F_j \}$ or $\{G_j, B_j \}$ for the matrix function pairs and $r_j$ for the characteristic values. Except for the dissection concept, $r_j$ is the rank of the first pair member $E_j$ and $G_j$, respectively. To avoid confusion we label the different characters with corresponding top indices $E$ (elimination), $D$ (dissection), $S$ (strangeness) and $T$ (tractability), respectively. The letters without upper index refer to the basic regularity in Section \ref{s.regular}. In some places we also give an upper index, namely $B$ (basic), for better clarity. \medskip Theorem \ref{t.equivalence} below will provide the index relations $\mu^{E}=\mu^{D}=\mu^{T}=\mu^{S}+1=\mu^B$ as well as expressions of all $r_j^{E}$, $r_j^{D}$, $r_j^{S}$, and $r_j^{T}$ in terms of \eqref{theta}.
\bigskip \subsection{Elimination of the unknowns procedure}\label{subs.elimination} A special predecessor version of the procedure described in \cite{RaRh} was already proposed and analyzed in \cite{Cis1982} and entitled \emph{elimination of the unknowns}, even for more general pairs of rectangular matrix functions. Here we describe the issue already in our notation and confine the description to square matrix functions. Let the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow\Real^{m\times m} $, be qualified in the sense of Definition \ref{d.qualified}, i.e., $\im [E(t)\;F(t)]=\Real^{m},\;t\in\mathcal I, $ and $E(t)$ has constant rank $r$ on $\mathcal I$. Let $T,T^c,Z$, and $Y $ represent bases of $\ker E, (\ker E)^{\perp}, (\im E)^{\perp}$, and $\im E $, respectively. By scaling with $ [Y\, Z]^*$ one splits the DAE \begin{align*} Ex'+Fx=q \end{align*} into the partitioned shape \begin{align} Y^*Ex'+Y^*Fx&=Y^*q,\label{A.1}\\ Z^*Fx&=Z^*q.\label{A.2} \end{align} Then the $(m-r)\times m$ matrix function $ Z^*F$ features full row-rank $m-r$ and the subspace $S=\ker Z^*F$ has dimension $r$. Equation \eqref{A.2} represents an underdetermined system. The idea is to provide its general solution in the following special way. Taking a nonsingular matrix function $K$ of size $m\times m$ such that $Z^*FK=: [\mathfrak A\, \mathfrak B]$, with $\mathfrak B:\mathcal I\rightarrow \Real^{(m-r)\times(m-r)}$ being nonsingular, the transformation $x=K\tilde x$ turns \eqref{A.2} into \begin{align*} &Z^*FK\tilde x=\mathfrak A u+\mathfrak B v=Z^*q,\quad \tilde x=\begin{bmatrix} u\\v \end{bmatrix} \\ &\text{yielding}\quad v=-\mathfrak B^{-1}\mathfrak A u + \mathfrak B^{-1}Z^*q. \end{align*} The further matrix function \begin{align*} C:= K \begin{bmatrix} I_{r}\\-\mathfrak B^{-1}\mathfrak A \end{bmatrix} :\mathcal I\rightarrow \Real^{m\times r}, \end{align*} has full column-rank $r$ on all $\mathcal I$ and serves as a basis of $\ker Z^*F= S$.
Each solution of \eqref{A.2} can be represented in terms of $u$ as \begin{align*} x=Cu+ p,\quad p:= K\begin{bmatrix} 0\\\mathfrak B^{-1}Z^*q \end{bmatrix}. \end{align*} Next we insert this expression into \eqref{A.1}, that is, \begin{align}\label{elimnew} Y^*ECu'+(Y^*FC + Y^*EC')u=Y^*q-Y^*Ep'-Y^*Fp. \end{align} Now the variable $v$ is eliminated and we are confronted with a new DAE with respect to $u$ living in $\Real^{r}$. By construction, it holds that \begin{align*} \rank (Y^*EC)(t)= r-\dim (S(t)\cap \ker E(t))=:r-\theta(t). \end{align*} Therefore, the new matrix function has constant rank precisely if the pair $\{E, F\}$ is pre-regular such that $\theta$ is constant. We underline again that the procedure in \cite{RaRh} and Section \ref{s.regular} allows for the choice of an arbitrary basis for $S$. Obviously, the earlier elimination procedure of \cite{Cis1982} can now be classified as its special version. In this way a sequence of matrix function pairs $\{E_{j}^{E}, F_{j}^{E}\}$ of size $m_{j}^{E}$, $j\geq 0$, is generated, starting from \[ m_{0}^{E}=m,\; r_{0}^{E}=r,\; E_{0}^{E}=E,\; F_{0}^{E}=F, \] and letting \[ m_{j+1}^{E}=r_{j}^{E},\; E_{j+1}^{E}=Y_{j}^*E_{j}^{E}C_{j},\; r_{j+1}^{E}=\rank E_{j+1}^{E},\; F_{j+1}^{E}=Y_{j}^*F_{j}^{E}C_{j}+ Y_{j}^*E_{j}^{E}C'_{j}. \] The corresponding regularity notion from \cite[p.\ 58]{Cis1982} is then: \begin{definition}\label{d.Elim} The DAE \eqref{DAE0} is called \emph{regular} on the interval $\mathcal I$ if the above process of dimension reduction is well-defined, i.e., at each level $\im [E_{j}^{E}\;F_{j}^{E}]=\Real^{m_{j}^{E}}$ and $E_{j}^{E}$ has constant rank $r^E_j$, and there is a number $\kappa$ such that either $E_{\kappa}^{E}$ is nonsingular or $E_{\kappa}^{E}=0$, but then $F_{\kappa}^{E}$ is nonsingular. \end{definition} This regularity definition obviously agrees with Definition \ref{d.2} both in substance and in name, but without specifying the characteristic values.
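The single reduction step above can be traced on a minimal constant pair; the following computation is a sketch of our own, with the choices $Y=\begin{bmatrix}1\\0\end{bmatrix}$, $Z=\begin{bmatrix}0\\1\end{bmatrix}$, and $K=I_2$:
\begin{align*}
E=\begin{bmatrix}0&1\\0&0\end{bmatrix},\quad F=I_{2}:\qquad
Z^*F=\begin{bmatrix}0&1\end{bmatrix},\quad
\mathfrak A=0,\;\mathfrak B=1,\quad
C=\begin{bmatrix}1\\0\end{bmatrix},\quad
E_{1}^{E}=Y^*EC=0,\quad F_{1}^{E}=Y^*FC=1.
\end{align*}
Here $S=\ker Z^*F=\ker E$, hence $\theta_0=\dim(S\cap\ker E)=1$ and indeed $r_{1}^{E}=\rank E_{1}^{E}=r-\theta_0=0$.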
It is evident that \begin{align}\label{elimchar} \kappa=\mu \quad \text{and}\quad r_{j}^{E}=\rank E_j^{E}= r-\sum_{i=0}^{j-1}\theta_i, \quad j=0,\ldots,\mu, \end{align} and each pair $\{E_{j}^{E},\,F_{j}^{E}\}$ must be pre-regular. The relevant solvability statements from \cite{Cis1982} match those in Section \ref{s.regular}. \subsection{Dissection concept}\label{subs.dissection} A decoupling technique has been presented and extended to apply to nonlinear DAEs quite recently under the name \emph{dissection concept} \cite{Jansen2014}. The intention behind this is to modify the nonlinear theory belonging to the projector based analysis in \cite{CRR} by using appropriate basis functions along the lines of \cite{KuMe2006} instead of projector valued functions. This is, by its very nature, highly technical. We filter out the corresponding linear version here. Let the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow\Real^{m\times m}$, be pre-regular with constants $r$ and $\theta$ according to Definition \ref{d.prereg}. Let $T,T^c,Z$, and $Y $ represent bases of $\ker E, (\ker E)^{\perp}, (\im E)^{\perp}$, and $\im E $, respectively. The matrix function $Z^*FT$ has size $(m-r)\times(m-r)$ and \begin{align*} \dim \ker Z^*FT =\dim T^+( \ker E \cap S)=\theta,\quad \rank Z^*FT =m-r-\theta=:a. \end{align*} By scaling with $ [Y\, Z]^*$ one splits the DAE \begin{align*} Ex'+Fx=q \end{align*} into the partitioned shape \begin{align} Y^*Ex'+Y^*Fx&=Y^*q,\label{A.1D}\\ Z^*Fx&=Z^*q.\label{A.2D} \end{align} Owing to the pre-regularity, the $(m-r)\times m$ matrix function $ Z^*F$ features full row-rank $m-r$. We keep in mind that $S=\ker Z^*F$ has dimension $r$. The approach in \cite{Jansen2014} needs several additional splittings. Let $V,W$ be bases of $\im Z^*FT$ and $(\im Z^*FT)^{\perp}$, respectively. By construction, $V$ has size $(m-r)\times a$ and $W$ has size $(m-r)\times \theta$.
One starts with the transformation \begin{align*} x= \begin{bmatrix} T^c& T \end{bmatrix}\tilde x, \quad \tilde x=\begin{bmatrix} \tilde x_1\\\tilde x_2 \end{bmatrix},\quad x=T^c\tilde x_1+ T\tilde x_2. \end{align*} The background is the possibility to suppress the derivative of the nullspace part $T\tilde x_2$, similarly as in the context of properly formulated DAEs, and to set $Ex'= ET^c\tilde x_1'+E{T^c}'\tilde x_1 +ET'\tilde x_2$, which, however, does not play a role in our context, where altogether continuously differentiable solutions are assumed. Furthermore, an additional partition of the derivative-free equation \eqref{A.2D} by means of the scaling with $[V\,W]^*$ is applied, which results in the system \begin{align} Y^*ET^c\tilde x'_1 +Y^*(FT^c+E{T^c}')\tilde x_1+ Y^*(FT+E{T}')\tilde x_2&=Y^*q,\label{A3}\\ V^*Z^*FT^c\tilde x_1+V^*Z^*FT\tilde x_2&=V^*Z^*q,\label{A4}\\ W^*Z^*FT^c\tilde x_1 \hspace*{18mm} &=W^*Z^*q.\label{A5} \end{align} The matrix function $W^*Z^*FT^c$ has full row-rank $\theta$ and $V^*Z^*FT$ has full row-rank $a$. Now comes another split. Choosing bases $G, H$ of $\ker W^*Z^*FT^c\subset\Real^{r}$ and $\ker V^*Z^*FT\subset\Real^{m-r}$, as well as bases of respective complementary subspaces, we transform \begin{align*} \tilde x_1= \begin{bmatrix} G^c& G \end{bmatrix}\bar x_1,\quad \bar x_1=\begin{bmatrix} \bar x_{1,1}\\\bar x_{1,2} \end{bmatrix},\quad \tilde x_1=G^c\bar x_{1,1}+ G\bar x_{1,2},\\ \tilde x_2= \begin{bmatrix} H^c& H \end{bmatrix}\bar x_2,\quad \bar x_2=\begin{bmatrix} \bar x_{2,1}\\\bar x_{2,2} \end{bmatrix},\quad \tilde x_2=H^c\bar x_{2,1}+ H\bar x_{2,2}.
\end{align*} Thus equations \eqref{A4} and \eqref{A5} are split into \begin{align} V^*Z^*FT^c(G^c\bar x_{1,1}+ G\bar x_{1,2})+V^*Z^*FT H^c\bar x_{2,1}&=V^*Z^*q,\label{A6}\\ W^*Z^*FT^c G^c\bar x_{1,1} \hspace*{38mm} &=W^*Z^*q.\label{A7} \end{align} The matrix functions $V^*Z^*FT H^c$ and $W^*Z^*FT^c G^c$ are each nonsingular, which allows solving for $\bar x_{1,1}$ and $\bar x_{2,1}$. In particular, for $q=0$ it results that $\bar x_{1,1}= 0$ and $\bar x_{2,1}= \mathfrak E \bar x_{1,2}$, with \[\mathfrak E:=-(V^*Z^*FT H^c)^{-1}V^*Z^*FT^cG. \] Overall, therefore, the latter procedure presents again a transformation, namely \begin{align*} x= K\bar x,\quad K=\begin{bmatrix} T^cG^c&T^cG&TH^c&TH \end{bmatrix},\quad \bar x=\begin{bmatrix} \bar x_{1,1}\\\bar x_{1,2}\\\bar x_{2,1}\\\bar x_{2,2} \end{bmatrix} \in \Real^{\theta}\times\Real^{r-\theta}\times\Real^{a}\times\Real^{\theta}, \end{align*} and we realize that we have found again a basis of the subspace $S$, namely \begin{align*} S=\im C,\quad C=K \begin{bmatrix} 0&0\\I_{r-\theta}&0\\\mathfrak E&0\\0&I_{\theta} \end{bmatrix}= \begin{bmatrix} T^cG+TH^c\mathfrak E &\; TH \end{bmatrix}, \end{align*} which makes the dissection approach a particular case of \cite{RaRh} and Section \ref{s.regular}. Consequently, the corresponding reduction procedure from there is well-defined for all regular DAEs in the sense of our basic Definition \ref{d.2}. \medskip In \cite{Jansen2014} the approach is somewhat different. Again a sequence of matrix function pairs $\{E_{i}^{D},\,F_{i}^{D}\}$ is built up starting from $E^{D}_0=E$, $F^{D}_0=F$. The construction of $\{E_{1}^{D},\,F_{1}^{D}\}$ is closely related to the system given by \eqref{A3}, \eqref{A6}, and \eqref{A7}, where the last two equations are solved with respect to $\bar x_{1,1}$ and $\bar x_{2,1}$ and these variables are replaced in \eqref{A3} accordingly.
This leads to \begin{align*} E^{D}_1=\begin{bmatrix} 0&Y^*ET^{c}G&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix},\quad \rank E^{D}_1=\rank Y^*ET^{c}G= \rank G= r-\theta. \end{align*} In contrast to the basic procedure in Section \ref{s.regular} in which the dimension is reduced and variables are actually eliminated on each level, now all variables stay included and the original dimension $m$ is kept, analogously to the strangeness concept in Section \ref{subs.strangeness}. We omit the further technically complex representation here and refer to \cite{Jansen2014}. It is evident that $\rank E^{D}_0>\rank E^{D}_1$ and so on. The characteristic values of the dissection concept are formally adapted to certain corresponding values of the tractability index framework. It starts with $r^{D}_0=r$, and is continued in ascending order as the following definition from \cite[Definition 4.13, p.~83]{Jansen2014} says. \begin{definition}\label{d.diss} Let all basis functions exist and have constant ranks on $\mathcal I$ and let the sequence of the matrix function pairs be well-defined. The characteristic values of the DAE \eqref{DAE0} are defined as \begin{align*} r^{D}_0=r,\quad r^{D}_{i+1}=r^{D}_{i}+ a^{D}_{i}=r^{D}_{i}+\rank Z^*_{i}F^{D}_i T_i,\quad i\geq 0. \end{align*} If $r_0^{D}=r=m$ then the DAE is said to be regular with dissection index zero. If there is an integer $\kappa\in \Natu$ and $r^{D}_{\kappa-1}<r^{D}_{\kappa}=m$ then the DAE is said to be \emph{regular with dissection index} $\mu^{D}=\kappa$. The DAE is said to be \emph{regular}, if it is regular with any dissection index. \end{definition} In particular, in the first step one has \begin{align*} r^{D}_1= r+a=r+(m-r-\theta)=m-\theta=(m-r)+r-\theta =(m-r)+\rank E^{D}_1.
\end{align*} Owing to \cite[Theorem 4.25, p.~101]{Jansen2014}, the tractability index (see Section \ref{subs.tractability}) and the dissection index coincide, and also the corresponding characteristic values, that is, \begin{align*} \mu^{D}=\mu^{T},\quad r_{i}^{D}=r_{i}^{T},\quad i=0,\ldots, \mu^{D}. \end{align*} \subsection{Regular strangeness index}\label{subs.strangeness} The strangeness concept applies to rectangular matrix functions in general, but here we are interested in the case of square sizes only, i.e., $E,F:\mathcal I\rightarrow \Real^{m\times m}$. Within the strangeness reduction framework the following five rank-values of the matrix function pair $\{E,F\}$ play their role, e.g., \cite[p. 59]{KuMe2006}: \begin{align} r&=\rank E,\label{S1}\\ a&=\rank Z^*FT, \;(\text{algebraic part})\label{S2}\\ s&=\rank V^*Z^*FT^{c}, \;(\text{strangeness})\label{S3}\\ d&=r-s, \;(\text{differential part})\label{S4}\\ v&=m-r-a-s, \;(\text{vanishing equations})\label{S5} \end{align} whereby $T,T^{c}, Z,V$ represent orthonormal bases of $\ker E$, $(\ker E)^{\bot}$, $(\im E)^{\bot}$, and $(\im Z^*FT)^{\bot}$, respectively. The strangeness concept is tied to the requirement that $r,a$, and $s$ are well-defined constant integers. Owing to \cite[Lemma 4.1]{HaMae2023}, the pair $\{E,F\}$ is pre-regular if and only if the rank-functions \eqref{S1}-\eqref{S5} are constant and $v=0$. In case of pre-regularity, see Definition \ref{d.prereg}, one has \begin{align*} a=m-r-\theta,\quad s=\theta,\quad d=r-\theta. \end{align*} Let the pair $\{E,F\}$ have constant rank values \eqref{S1}--\eqref{S5}, and $v=0$. We describe the related step from $\{E^{S}_0,F^{S}_0\}:=\{E,F\} $ to the next matrix function pair $\{E^{S}_1,F^{S}_1\}$.
Applying the basic arguments of the strangeness reduction \cite[p.\ 68f]{KuMe2006} the pair $\{E,F\}$ is equivalently transformed to $\{\tilde{E},\tilde F\}$, \begin{align*} \tilde{E}=\begin{bmatrix} I_s&&&\\&I_d&&\\ &&0&\\ &&&0 \end{bmatrix},\quad \tilde{F}=\begin{bmatrix} 0&\tilde F_{12}&0&\tilde F_{14}\\ 0&0&0&\tilde F_{24}\\ 0&0&I_a&0\\ I_s&0&0&0 \end{bmatrix}, \end{align*} with $d+s=r,\; a+s=m-r$. This means that the DAE is transformed into the intermediate form \begin{align*} \tilde x_1' +\tilde F_{12}\tilde x_2 +\tilde F_{14}\tilde x_4 &= \tilde q_1,\\ \tilde x_2' +\tilde F_{24}\tilde x_4 &= \tilde q_2,\\ \tilde x_3&=\tilde q_3,\\ \tilde x_1&=\tilde q_4. \end{align*} Replacing now in the first line $\tilde x_1'$ by $\tilde q_4'$ leads to the new pair defined as \begin{align*} E^{S}_1=\begin{bmatrix} 0&&&\\&I_d&&\\ &&0&\\ &&&0 \end{bmatrix},\quad F^{S}_1=\begin{bmatrix} 0&\tilde F_{12}&0&\tilde F_{14}\\ 0&0&0&\tilde F_{24}\\ 0&0&I_a&0\\ I_s&0&0&0 \end{bmatrix}. \end{align*} Proceeding further in this way, each pair $\{E^{S}_j, F^{S}_j\}$ must be supposed to be pre-regular for obtaining well-defined characteristic triples $(r^S_j,a^S_j,s^S_j)$ and $v^S_j=0$. Owing to \cite[Theorem 3.14]{KuMe2006} these characteristics persist under equivalence transformations. The obvious relation $r^{S}_{j+1}=r^{S}_{j}-s^{S}_{j}$ guarantees that after a finite number of steps the so-called strangeness $s^{S}_{j}$ must vanish. We adapt Definition 3.15 from \cite{KuMe2006} accordingly\footnote{The notion \cite[Definition 3.15]{KuMe2006} is valid for more general rectangular matrix functions $E,F$.
For the square matrix functions $E,F$ we are interested in here, it allows also nonzero values $v_j^{S}=m-r_j^S-a_j^S-s_j^S$, thus instead of pre-regularity of $\{E^{S}_j, F^{S}_j\}$, it is only required that $ r^S_j, a^S_j, s^S_j$ are constant on $\mathcal I$.}: \begin{definition}\label{d.strangeness} Let each pair $\{E^{S}_j, F^{S}_j\}$, $j\geq 0$, be pre-regular and \begin{align*} \mu^{S}=\min\{j\geq 0:s^{S}_{j}=0 \}. \end{align*} Then the pair $\{E,F\}$ and the associated DAE are called \emph{regular with strangeness index} $ \mu^{S}$ and characteristic values $(r^S_j,a^S_j,s^S_j)$, $j\geq 0$. In the case that $\mu^{S}=0$ the pair and the DAE are called \emph{strangeness-free}. \end{definition} Finally, if the DAE $Ex'+Fx=q$ is regular with strangeness index $ \mu^{S}$, this reduction procedure ends up with the strangeness-free pair \begin{align} E^S_{\mu^S}=\begin{bmatrix} I_{d^S}&\\&0 \end{bmatrix},\; F^S_{\mu^S}=\begin{bmatrix} 0&\\&I_{a^S} \end{bmatrix},\quad d^S:=d^S_{\mu^S},\; a^S:=a^S_{\mu^S},\; d^S+a^S=m, \end{align} and the transformed DAE showing a simple form, which already incorporates its solution, namely \begin{align*} \tilde{\tilde x}'_1&=\tilde{\tilde q}_1,\\ \tilde{\tilde x}_2&=\tilde{\tilde q}_2. \end{align*} The function $\tilde{\tilde x}:\mathcal I\rightarrow \Real^m$ results from a solution $x:\mathcal I\rightarrow \Real^m$ of the original DAE by transformation with a pointwise nonsingular matrix function. \medskip As a consequence of Theorem 2.5 from \cite{KuMe1996}, each pair $\{E,F\}$ being regular with strangeness index $\mu^S$ can be equivalently transformed into a pair $\{\tilde E,\tilde F\}$, \begin{align}\label{SCFs} \tilde E=\begin{bmatrix} I_{d^S}&*\\0&N \end{bmatrix},\quad \tilde F=\begin{bmatrix} *&0\\0&I_{a^S} \end{bmatrix},\quad d^S:=d^S_{\mu^S},\; a^S:=a^S_{\mu^S}, \end{align} in which the matrix function $N$ is pointwise nilpotent with nilpotency index $\kappa=\mu^S +1$ and has size $a^S\times a^S$.
$N$ is pointwise strictly block upper triangular and the entries $N_{1,2}, \ldots, N_{\kappa-1,\kappa}$ have full row-ranks $l_1=s^S_{\mu^S-1},\ldots, l_{\kappa-1}=s^S_0$. Additionally, one has $l_{\kappa}=s^S_0+a^S_0 =m-r$, and $N$ has exactly the structure that is required in \eqref{blockstructure} and Proposition \ref{p.STform}(2). It results that each DAE having a well-defined regular strangeness index is regular in the sense of Definition \ref{d.2}. \subsection{Tractability index}\label{subs.tractability} The background of the tractability index concept is the projector based analysis which aims at an immediate characterization of the structure of the originally given DAE, its relevant subspaces and components, e.g., \cite{CRR}. In contrast to the reduction procedures with their transformations and built-in differentiations of the right-hand side, the original DAE is actually only written down in a very different pattern using the projector functions. No differentiations are carried out, but it is only made clear which components of the right-hand side must be correspondingly smooth. This is important in the context of input-output analyses and also when functional analytical properties of relevant operators are examined \cite{Ma2014}. The decomposition using projector functions reveals the inherent structure of the DAE, including the inherent regular ODE. Transformations of the unknown solution are avoided in this decoupling framework, which is favourable for stability investigations and also for the analysis of discretization methods \cite{CRR,HMT}. As before we assume $E,F:\mathcal I\rightarrow \Real^{m\times m}$ to be sufficiently smooth and the pair $\{E, F\}$ to be pre-regular.
We choose any continuously differentiable projector-valued function $P$ such that \[P:\mathcal I\rightarrow \Real^{m\times m},\quad P(t)^2=P(t),\; \ker P(t)=\ker E(t),\quad t\in \mathcal I, \] and, observing that $Ex'=EPx'=E(Px)'-EP'x$ for each continuously differentiable function $x:\mathcal I\rightarrow\Real^m$, we rewrite the DAE $Ex'+Fx=q$ as \begin{align}\label{DAEP} E(Px)'+(F-EP')x=q. \end{align} \begin{rem}\label{r.AD} The DAE \eqref{DAEP} is a special version of a DAE with \emph{properly stated leading term} or properly involved derivative, e.g., \cite{CRR}, \begin{align}\label{2.DAE} A(Dx)'+Bx=q, \end{align} which is obtained by a special \emph{proper factorization} of $E$, subject to the general requirements: $E=AD$, $A:\mathcal I\rightarrow \Real^{m\times n}$ is continuous, $D:\mathcal I\rightarrow \Real^{n\times m}$ is continuously differentiable, $B=F-AD'$, and \begin{align*} \ker A\oplus \im D=\Real^{n},\; \ker D=\ker E, \end{align*} whereby both subspaces $\ker A$ and $\im D$ have continuously differentiable basis functions. As mentioned already above, a properly involved derivative makes sense if not all components of the unknown solution are expected to be continuously differentiable, which does not matter here. In contrast, in view of applications and numerical treatment the model \eqref{2.DAE} is quite reasonable \cite{CRR}. \end{rem} In order to be able to directly apply the more general results of the relevant literature, in the following we denote \begin{align*} P=:D,\quad G_0:=E,\quad B_0:=F-ED',\quad A:=E. \end{align*} Observe that the pair $\{G_0, B_0\}$ is pre-regular with constants $r$ and $\theta$ at the same time as $\{E, F\}$. Now we build a sequence of matrix functions pairs starting from the pair $\{G_0, B_0\}$. Denote $N_0=\ker G_0$ and choose a second projector valued function $P_0:\mathcal I\rightarrow\Real^{m\times m}$, such that $\ker P_0=N_0$.
With the complementary projector function $Q_{0}:=I-P_{0}$ and $D^{-}:=P_{0}$ it results that \begin{align*} DD^{-}D=D,\quad D^{-}DD^{-}=D^{-},\quad DD^{-}=P_{0},\quad D^{-}D=P_{0}. \end{align*} Against this background we construct the following sequence of matrix functions and associated projector functions: Set $r^T_{0}=r=\rank G_0$ and $\pPi_{0}=P_{0}$ and build successively for $i\geq 1$, \begin{align} G_{i}&=G_{i-1}+B_{i-1}Q_{i-1},\quad r^T_{i}=\rank G_{i},\label{2.Gi}\\ N_{i}&=\ker G_{i},\quad \widehat{N_{i}}=(N_{0}+\cdots+N_{i-1})\cap N_{i},\quad u^T_{i}=\dim \widehat{N_{i}},\nonumber \end{align} fix a subspace $X_{i}\subseteq N_{0}+\cdots+N_{i-1}$ such that $\widehat{N_{i}}+X_{i}=N_{0}+\cdots+N_{i-1}$ and then choose a projector function $Q_{i}:\mathcal I\rightarrow\Real^{m\times m}$ to achieve \begin{align}\label{2.Qi} \im Q_{i}=N_{i},\quad X_{i}\subseteq\ker Q_{i},\quad P_{i}=I-Q_{i},\quad \pPi_{i}=\pPi_{i-1}P_i, \end{align} and then form \begin{align}\label{2.Bi} B_{i}=B_{i-1}P_{i-1}-G_{i}D^{-}(D\pPi_{i}D^{-})'D\pPi_{i-1}. \end{align} By construction, the inclusions \begin{align*} \im G_{0}\subseteq \im G_{1}&\subseteq\cdots\subseteq \im G_{k}\subseteq \Real^{m},\\ \widehat{N_{1}}&\subseteq\widehat{N_{2}}\subseteq\cdots\subseteq\widehat{N_{k}}, \end{align*} follow, which leads to the inequalities \begin{align*} 0\leq r^T_{0}&\leq r^T_{1}\leq \cdots\leq r^T_{k},\\ 0&\leq u^T_{1}\leq \cdots\leq u^T_{k}. \end{align*} The sequence $G_{0},\ldots, G_{k}$ is said to be \emph{admissible} if, for each $i=1,\ldots,k$, the two rank functions $r^T_{i}$, $u^T_{i}$ are constant, $\pPi_{i}$ is continuous and $D\pPi_{i}D^{-}$ is continuously differentiable. It is worth mentioning that the matrix functions $G_{0},\ldots,G_{k}$ of an admissible sequence are continuous and the products $\pPi_{i}$ and $D\pPi_{i}D^{-}$ are projector functions again \cite{CRR}. Moreover, if $u^T_{k}=0$, then $u^T_{i}=0$ for $i<k$. We refer to \cite[Section 2.2]{CRR} for further useful properties.
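For a constant pair $\{E,F\}$ all derivative terms in \eqref{2.Bi} vanish, so the chain reduces to $G_{i+1}=G_{i}+B_{i}Q_{i}$ and $B_{i+1}=B_{i}P_{i}$. The following minimal numerical sketch (our own illustration, not taken from the cited literature; the admissible projectors are chosen by hand) runs two steps of the chain for an index-2 pencil:

```python
import numpy as np

# Constant-coefficient sketch of the chain (2.Gi)-(2.Bi): all derivative
# terms vanish, leaving G_{i+1} = G_i + B_i Q_i and B_{i+1} = B_i P_i.
# Example pencil: E = [[0,1],[0,0]], F = I (tractability index 2).
E = np.array([[0., 1.], [0., 0.]])
F = np.eye(2)

G0, B0 = E, F
Q0 = np.array([[1., 0.], [0., 0.]])   # projector onto N0 = ker G0 = span(e1)
P0 = np.eye(2) - Q0
G1 = G0 + B0 @ Q0                     # [[1,1],[0,0]], still singular
B1 = B0 @ P0
# Q1: im Q1 = N1 = ker G1 = span((1,-1)), with X1 = N0 contained in ker Q1
Q1 = np.array([[0., -1.], [0., 1.]])
G2 = G1 + B1 @ Q1                     # [[1,1],[0,1]], nonsingular

ranks = [int(np.linalg.matrix_rank(G)) for G in (G0, G1, G2)]
print(ranks)   # [1, 1, 2]
```

The printed ranks reproduce the characteristic values $r^T_0=r^T_1=1<r^T_2=2=m$, hence $\mu^T=2$ and $d^T=0$ for this pencil.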
\begin{definition}{\cite[Section 2.2.2]{CRR}}\label{d.trac} The smallest number $\kappa\geq 0$, if it exists, leading to an admissible matrix function sequence ending up with a nonsingular matrix function $G_{\kappa}$ is called the \emph{tractability index (regular case)}\footnote{We refer to \cite[Sections 2.2.2 and 10.2.1]{CRR} for details and more general notions including also nonregular DAEs.} of the pair $\{E,F\}$, and the DAEs \eqref{1.DAE} and \eqref{2.DAE}, respectively. It is indicated by $\kappa=: \mu^T$. The associated characteristics \begin{align}\label{2.characvalues} 0\leq r^T_{0}\leq r^T_{1}\leq \cdots\leq r^T_{\kappa-1}< r^T_{\kappa}=m,\quad d^T=m-\sum_{i=0}^{\kappa-1}(m-r^T_{i}), \end{align} are called characteristic values of the pair $\{E,F\}$ and the DAEs \eqref{1.DAE} and \eqref{2.DAE}, respectively. The pair $\{E,F\}$ and the DAEs \eqref{1.DAE} and \eqref{2.DAE} are then each called regular. \end{definition} By definition, if the DAE is regular, then $r^T_{\mu^T}=m$, $u^T_{\mu^T}=0$, and all rank functions $u^T_{i}$ have to be zero and play no further role here. The particular choice of the projector functions $P, P_0,\ldots ,P_{\mu-1}$ does not affect regularity and the characteristic values \cite{CRR}. \begin{remark}\label{r.Riaza} An alternative way to construct admissible matrix function sequences for the regular case if $u^T_i=0$, $i\geq1$, is described in \cite[Section 2.2.4]{RR2008}. It avoids the explicit use of the nullspace projector functions onto $N_i$. One starts with $G_0, B_0$, and $\pPi_0$ as above, introduces $M_0:=I-\pPi_0$, $G_1=G_0+B_0M_0$, and then for $i\geq 1$: \begin{align*} &\text{choose a projector function }\; \pPi_i \; \text{ along }\; N_0\oplus\cdots\oplus N_i,\;\text{ with }\; \im \pPi_i \subseteq \im \pPi_{i-1} ,\\ &B_i=(B_{i-1}-G_iD^-(D\pPi_iD^-)'D)\pPi_i,\\ &M_i=\pPi_{i-1}-\pPi_i,\\ &G_{i+1}=G_i+B_iM_i.
\end{align*} \end{remark} \begin{remark}\label{r.Tpairs} If the pair $(E,F)$ is regular in the sense of Definition \ref{d.trac} then the subspace $S^T_j(t)$, \begin{align*} S^T_j(t):=\{z\in\Real^m: B_j(t)z\in \im G_j(t)\}=\ker W^T_j(t)B_j(t),\quad W^T_j:=I-G_jG_j^+, \end{align*} has constant dimension $r^T_{j}$ on all of $\mathcal I$. Moreover, \begin{align*} \rank [G_j \; B_j]=\rank [G_j \; W^T_jB_j]= r^T_j+ m-r^T_j=m,\\ \dim \ker G_{j+1}=\dim (\ker G_j\cap S^T_j)= m-r^T_{j+1}, \quad j=0,\ldots, \mu^T-1. \end{align*} All intermediate pairs $\{G_j,B_j\}$ are pre-regular. It is worth highlighting that in terms of the basic regularity notion\footnote{See Definition \ref{d.2} and Theorem \ref{t.equivalence}.} one has $\mu^T=\mu$ and \begin{align*} \dim (\ker G_j\cap S^T_j)= \theta_{j}, \quad j=0,\ldots, \mu-1. \end{align*} \end{remark} The decomposition \begin{align*} I_{m}=\pPi_{\mu^T-1}+Q_{0}+\pPi_{0}Q_{1}+\cdots+\pPi_{\mu^T-2}Q_{\mu^T-1} \end{align*} is valid and the involved projector functions have constant ranks, in particular, \begin{align}\label{2.ranks} \rank Q_{0}=m-r^T_{0},\;\rank \pPi_{i-1}Q_{i}=m-r^T_{i},\;i=1,\ldots,\mu^T-1,\; \rank \pPi_{\mu^T-1}=d^T. \end{align} \medskip Let the DAE \eqref{1.DAE} be regular with tractability index $\mu^T\in \Natu$ and characteristic values \eqref{2.characvalues}. Then the admissible matrix functions and associated projector functions provide a far-reaching decoupling of the DAE, which exposes the intrinsic structure of the DAE; for details see \cite[Section 2.4]{CRR}.
In particular, the following representation of the DAE scaled by $G_{\mu^T}^{-1}$ was proved in \cite[Proposition 2.23]{CRR}: \begin{align*} G_{\mu^T}^{-1}A(Dx)'+G_{\mu^T}^{-1}Bx&=G_{\mu^T}^{-1}q,\\ G_{\mu^T}^{-1}A(Dx)'+G_{\mu^T}^{-1}Bx &=D^{-}(D\pPi_{\mu^T-1}x)'+G_{\mu^T}^{-1}B_{\mu^T}x\\&+ \sum_{l=0}^{\mu^T-1}\{Q_{l}x-(I-\pPi_{l})Q_{l+1}D^{-}(D\pPi_{l}Q_{l+1}x)'+V_{l}D\pPi_{l}x\}, \end{align*} with $V_{l}=(I-\pPi_{l})\{P_{l}D^{-}(D\pPi_{l}D^{-})'-Q_{l+1}D^{-}(D\pPi_{l+1}D^{-})'\}D\pPi_{l}D^{-}$. Regarding the decomposition of the unknown function \begin{align*} x=\pPi_{\mu^T-1}x+Q_{0}x+\pPi_{0}Q_{1}x+\cdots+\pPi_{\mu^T-2}Q_{\mu^T-1}x \end{align*} and several projector properties, we get \begin{align}\label{Grundformel} G_{\mu^T}^{-1}A(Dx)'+G_{\mu^T}^{-1}Bx =&D^{-}(D\pPi_{\mu^T-1}x)'- \sum_{l=0}^{\mu^T-1}(I-\pPi_{l})Q_{l+1}D^{-}(D\pPi_{l}Q_{l+1}x)' \\ &+G_{\mu^T}^{-1}B_{\mu^T}\pPi_{\mu^T-1}x+ \sum_{l=0}^{\mu^T-1}V_{l}D\pPi_{\mu^T-1}x\nonumber\\ &+Q_{0}x + \sum_{l=0}^{\mu^T-1}Q_{l}\pPi_{l-1}Q_{l}x + \sum_{l=0}^{\mu^T-2}V_{l} \sum_{s=0}^{\mu^T-2}D\pPi_{s}Q_{s+1}x.\nonumber \end{align} The representation \eqref{Grundformel} is the basis of two closely related versions of fine and complete structural decouplings of the DAE \eqref{1.DAE} into the so-called \textit{inherent regular ODE} (and its compressed version, respectively), \begin{align}\label{IRODE} (D\pPi_{\mu^T-1}x)'-(D\pPi_{\mu^T-1}D^-)'D\pPi_{\mu^T-1}x+D\pPi_{\mu^T-1}G_{\mu^T}^{-1}B_{\mu^T}D^-D\pPi_{\mu^T-1}x=D\pPi_{\mu^T-1}G_{\mu^T}^{-1}q, \end{align} and the extra part indicating and including all the necessary differentiations of $q$. It is worth mentioning that the explicit ODE \eqref{IRODE} is not affected at all by derivatives of $q$.
While the first decoupling version is an enlarged system residing in an $m$-dimensional subspace of $\Real^{(\mu^T+1)m}$, the second version remains in $\Real^{m}$ and represents an equivalently transformed DAE\footnote{In the literature there are quite a few misunderstandings about this.}. More precisely, owing to \cite[Theorem 2.65]{CRR}, each pair $\{E,F\}$ being regular with tractability index $\mu^T$ can be equivalently transformed into a pair $\{\tilde E,\tilde F\}$, \begin{align}\label{SCFt} \tilde E=\begin{bmatrix} I_{d^T}&0\\0&N \end{bmatrix},\quad \tilde F=\begin{bmatrix} \Omega&0\\0&I_{m-d^T} \end{bmatrix}, \end{align} in which the matrix function $N$ is pointwise nilpotent with nilpotency index $\kappa=\mu^T$ and has size $(m-d^T)\times (m-d^T)$. $N$ is pointwise strictly block upper triangular and the entries $N_{1,2}, \ldots, N_{\kappa-1,\kappa}$ have full column-ranks $l_2=m-r^T_{1},\ldots, l_{\kappa}=m-r^T_{\kappa-1}$. Additionally, one has $l_{1}=m-r$, and $N$ has exactly the structure that is required in \eqref{blockstructure} and Proposition \ref{p.STform}(1). The projector-based approach sheds light on the role of several subspaces. In particular, the two canonical subspaces $S_{can}$ and $N_{can}$, see \cite{HaMae2023}, originate from this concept, e.g., \cite{CRR}. For regular pairs it holds that $N_{can}=N_0+\cdots+N_{\mu^T-1}$. The following assertion, provided in \cite{LinhMae,HaMae2023}, plays its role when analyzing DAEs and their canonical subspaces.
\begin{proposition}\label{p.adjoint} If the DAE \eqref{1.DAE} is regular with tractability index $\mu^T$ and characteristics $0<r_0^T\leq\cdots<r^{T}_{\mu^T}=m$, then the adjoint DAE \begin{align*} -E^*y'+(F^*-{E^*}')y=0 \end{align*} is also regular with the same index and characteristics, and the canonical subspaces $S_{can}, N_{can}$ and $S_{adj, can}, N_{adj, can}$, are related by \begin{align*} N_{can}=\ker C^*_{adj}E,\quad N_{adj, can}=\ker C^*E^*, \end{align*} in which $C$ and $C_{adj}$ are bases of the flow-subspaces $S_{can}$ and $S_{adj, can}$, respectively. \end{proposition} \subsection{Equivalence results and other commonalities}\label{s.equivalence} \begin{theorem}\label{t.equivalence} Let $E, F:\mathcal I\rightarrow\Real^{m\times m}$ be sufficiently smooth and $\mu\in\Natu$. The following assertions are equivalent in the sense that the individual characteristic values of each two of the variants are mutually uniquely determined. \begin{description} \item[\textrm{(1)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with index $\mu\in \Natu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0. \end{align*} \item[\textrm{(2)}] The strangeness index $\mu^S$ is well-defined for $\{E,F\}$ and regular, and $\mu^S=\mu-1$. The associated characteristics are the triples \begin{align*} (r^S_i,\;a^S_i, s^S_i ),\quad i=0,\ldots, \mu^S,\quad r^S_0=r, \quad \mu^S=\min\{i\in \Natu_0:s^S_i=0\}. \end{align*} \item[\textrm{(3)}] The pair $\{E,F\}$ is regular with tractability index $\mu^T=\mu$ and characteristics \begin{align*} r^T_0=r,\quad r^T_0\leq \cdots\leq r^T_{\mu-1}<r^T_{\mu}=m. \end{align*} \item[\textrm{(4)}] The pair $\{E,F\}$ is regular with dissection index $\mu^D=\mu$ and characteristics \begin{align*} r^D_0=r,\quad r^D_0\leq \cdots\leq r^D_{\mu-1}<r^D_{\mu}=m.
\end{align*} \item[\textrm{(5)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with elimination index $\mu^E=\mu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0. \end{align*} \end{description} \end{theorem} \begin{proof} Owing to \cite[Theorem 4.3]{HaMae2023}, it remains to verify the implication {\textrm (3)}$\Rightarrow ${\textrm (1)}. A DAE being regular with tractability index $\mu^{T}$ and characteristics \eqref{trac} is equivalent to a DAE in the form \eqref{blockstructure} with $\kappa=\mu^{T}$, $r=r_0^{T}$, and $l_i=m-r_{i-1}^{T}$ for $i=1,\ldots,\kappa$. Hence, by Proposition \ref{p.STform}, the DAE is regular with index $\mu=\kappa=\mu^{T}$ and characteristic values $r=r_0^{T}$, $m-\theta_0=r_1^{T},\ldots,m-\theta_{\mu-2}=r_{\mu-1}^{T}$, and $m=m-\theta_{\mu-1}=r_{\mu}^{T}$. \end{proof} Next we highlight the relations between the various characteristic values and trace all of them back to \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0. \end{align*} \begin{theorem}\label{t.indexrelation} Let the pair $\{E,F\}$ be regular on $\mathcal I$ with index $\mu\in \Natu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0. \end{align*} Then the following relations concerning the various characteristic values arise: \begin{description} \item[\textrm{(1)}] The pair $\{E,F\}$ is regular with strangeness index $\mu^S=\mu-1$. The associated characteristics are \begin{align*} r^S_0&=r,\\ s^S_i&=\theta_i,\\ d^S_i&=r^S_i-\theta_i = r-\sum_{j=0}^{i} \theta_j,\\ a^S_i&=m-r^S_i-\theta_i = m-r + \sum_{j=0}^{i-1} \theta_j - \theta_i,\\ v^S_i&=0,\\ r^S_{i+1}&=d^S_i = r-\sum_{j=0}^{i} \theta_j,\quad i=0,\ldots,\mu-1.
\end{align*} \item[\textrm{(2)}] The pair $\{E,F\}$ is regular with tractability index $\mu^T=\mu$ and characteristics \begin{align}\label{trac} r^T_0=r,\quad r^T_i=m-\theta_{i-1},\quad i=1,\ldots,\mu. \end{align} \item[\textrm{(3)}] The pair $\{E,F\}$ is regular with dissection index $\mu^D=\mu$ and characteristics \begin{align*} r^D_0=r,\quad r^D_i=m-\theta_{i-1},\quad i=1,\ldots,\mu. \end{align*} \item[\textrm{(4)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with elimination index $\mu^E=\mu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0. \end{align*} \end{description} \end{theorem} Thus, the statements of Theorems \ref{t.Scan} and \ref{t.solvability} apply equally to all concepts in this section. Every regular DAE with index $\mu\in\Natu$ is a solvable system in the sense of Definition \ref{d.solvableDAE}, and it has the perturbation index $\mu$. \begin{remark}\label{r.inf} Obviously, for a regular pair $\{E,F\}$ with index $\mu$, each of the above procedures is feasible up to infinity and will eventually stabilize. This can now be recorded by setting \begin{align*} \theta_k:=0,\quad k\geq \mu. \end{align*} Namely, in particular, the strangeness index is well defined and regular, $\mu^S=\mu-1$, \begin{align*} r^S_0=r,\quad r^S_i=r^S_{i-1}-\theta_{i-1},\quad i=1,\ldots, \mu-1,\\ s^S_i=\theta_i,\quad i=0,\ldots, \mu-1,\\ a^S_i=m-r^S_i-\theta_i,\quad i=0,\ldots, \mu-1. \end{align*} After reaching the zero-strangeness $s^S_{\mu-1}=0$ the corresponding sequence $\{E^S_i,F^S_i\}$ can be continued and for $i\geq\mu$ it becomes stationary \cite[p.\ 73]{KuMe2006}, \begin{align*} r^S_i=r^S_{\mu-1}=d^S=d,\quad i\geq \mu,\\ s^S_i=0,\quad i\geq \mu,\\ a^S_i=m-r^S_{\mu-1}=m-d,\quad i\geq \mu, \end{align*} which goes along with $\theta_i=0$ for $i\geq\mu$ and justifies the setting $\theta_k=0$ for $k\geq\mu$.
\end{remark} \begin{corollary}\label{c.degree} The dynamical degree of freedom of a regular DAE is \begin{align*} d=r-\sum_{i=0}^{\mu -2}\theta_i=d^S=d^T=\dim S_{can}. \end{align*} \end{corollary} After we have recognized that the rank conditions in Definition \ref{d.2} are appropriate for a regular DAE, the question arises as to what rank violations can mean. Based on the above equivalence statements, the findings of the projector-based analysis on regular and critical points, for instance in \cite{RR2008,CRR}, are generally valid. The characterization of critical and singular points presupposes a corresponding definition of regular points. \begin{definition}\label{d.regpoint} Given is the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow\Real^{m\times m}$. The point $t_*\in\mathcal I$ is said to be a \emph{regular point} of the pair and the associated DAE, if there is an open neighborhood $\mathcal U\ni t_*$ such that the pair, restricted to $\mathcal I\cap\mathcal U$, is regular. Otherwise $t_*\in\mathcal I$ will be called \emph{critical or singular}. In the regular case the characteristic values \eqref{theta} are then also assigned to the regular point. The set of all regular points within $\mathcal I$ will be denoted by $\mathcal I_{reg}$. A subinterval $\mathcal I_{sub}\subset \mathcal I$ is called a regularity interval if all its points are regular ones. \end{definition} We refer to \cite[Chapter 4]{RR2008} for a careful discussion and classification of possible critical points. Section \ref{s.examples} below comprises a series of relevant but simple examples. Critical points arise when rank conditions ensuring regularity are violated. We now realize that the question of whether a point is regular or critical can be answered independently of the chosen approach. According to our equivalence result, critical points arise, if at all, simultaneously in all concepts at the corresponding levels.
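In numerical terms, candidates for critical points can be located by monitoring the rank functions along the interval; constancy of $\rank E(t)$ already enters the pre-regularity requirements. A small sketch (with a hypothetical pair chosen purely for illustration) flags the parameter values where $\rank E(t)$ drops:

```python
import numpy as np

# Sketch: flag candidate critical points of a time-varying pair {E, F}
# as the points where rank E(t) changes; a locally constant rank of E(t)
# is necessary for regularity on a neighborhood.
# Hypothetical example: E(t) = [[t, 0], [0, 0]] loses rank exactly at t = 0.
def E_of_t(t):
    return np.array([[t, 0.], [0., 0.]])

ts = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
ranks = np.array([np.linalg.matrix_rank(E_of_t(t)) for t in ts])
candidates = ts[ranks < ranks.max()]   # rank drops mark critical candidates
print(candidates)   # [0.]
```

In practice one would of course also monitor the ranks arising on the higher levels of the chosen index concept, since, by the equivalence result, a rank violation at any level is detected by all concepts simultaneously.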
\medskip When viewing a DAE as a vector field on a manifold, critical points are allowed exclusively in the very last step of the basis reduction, with the intention of then being able to examine singularities of the flow, see Section \ref{subs.degree}. The concept of geometric reduction basically covers regular DAEs and those with well-defined degree and configuration space, i.e., only rank changes in the very last reduction level are permitted. \begin{remark}\label{Nonregular} We end this section with a very important note: The strangeness index and the tractability index are defined also for DAEs of rectangular size, with $E,F:\mathcal I\rightarrow\Real^{n\times m}$, $n\neq m$, but then they differ substantially from each other \cite{HM2020, CRR}. It remains to be seen whether and to what extent the above findings can be generalized. \end{remark} \subsection{Standard canonical forms}\label{sec:SCF} DAEs in standard canonical form (SCF), that is, \begin{align}\label{SCFDAE} \begin{bmatrix} I_d&0\\0&N(t) \end{bmatrix} x'(t)+ \begin{bmatrix} \Omega(t)&0\\0&I_a \end{bmatrix} x(t)=q(t),\quad t\in\mathcal I, \end{align} where $N$ is strictly upper (or lower) triangular, but it need not have constant rank or index, see \cite[Definition 2.4.5]{BCP89}, play a special role in the DAE literature \cite{BCP89,BergerIlchmann}. Their coefficient pairs represent generalizations of the Weierstraß–Kronecker form\footnote{Quasi-Weierstraß form in \cite{BergerIlchmannTrenn,Trenn2013}} of matrix pencils. If $N$ is even constant, then the DAE is said to be in \emph{strong standard canonical form}. A DAE in SCF is also characterized by the simplest canonical subspaces which are even orthogonal to each other, namely \begin{align*} S_{can}=\im \begin{bmatrix} I_d\\0 \end{bmatrix},\quad N_{can}=\im \begin{bmatrix} 0\\I_a \end{bmatrix}.
\end{align*} DAEs being transformable into SCF are solvable systems in the sense of Definition \ref{d.solvableDAE}, but they are not necessarily regular, see Examples \ref{e.1}, \ref{e.7} in Section \ref{s.examples}. The critical points that occur here are called \emph{harmless} \cite{RR2008,CRR} because they do not generate a singular flow. We will come back to this below. Furthermore, not all solvable systems can be transformed into SCF, as Example \ref{e.2} below confirms. We refer to \cite{BCP89} and in turn to Remark \ref{r.generalform} below for the description of the general form of solvable systems. \medskip In Sections \ref{s.regular} and \ref{subs.tractability} we have already encountered DAEs in SCF with a special structure, which in turn represent narrower generalizations of the Weierstraß–Kronecker form. For given integers $\kappa \geq 2$, $d\geq 0$, $l=l_{1}+\cdots +l_{\kappa}$, $l_{i}\geq 1$, $l=a$, $m=d+l$ the pair $\{E,F\}$, $E,F:\mathcal I\rightarrow \Real^{m\times m}$, is structured as follows: \begin{align}\label{blockstructureSCF} E=\begin{bmatrix} I_{d}&\\&N \end{bmatrix},\quad F&=\begin{bmatrix} \Omega&\\&I_{l} \end{bmatrix}, \quad N=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1\kappa}\\ &0&N_{23}&&N_{2\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1 \kappa}\\ &&&&0 \end{bmatrix},\\ &\text{with blocks}\; N_{ij} \;\text{of sizes}\; l_{i}\times l_{j}.\nonumber \end{align} If $d=0$ then the respective parts are absent. All blocks are sufficiently smooth on the given interval $\mathcal I$. $N$ is strictly block upper triangular, thus nilpotent and $N^{\kappa}=0$. \medskip The following theorem shows that, and to what extent, regular DAEs are distinguished by a uniform inner structure of the matrix function $N$ and thus of the canonical subspace $N_{can}$.
\begin{theorem}\label{t.SCF} Each regular DAE with index $\mu\in\Natu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0, \end{align*} is transformable into a structured SCF \eqref{blockstructureSCF} where $\kappa=\mu$, $l_{1}=m-r$, and all blocks of the secondary diagonal have full column rank, that means, \begin{align*} \rank N_{i,i+1}&=l_{i+1}=\theta_{i-1},\; \ker N_{i,i+1}=\{0\},\quad i=1,\ldots,\mu-1, \end{align*} and the powers of $N$ feature constant rank, \begin{align*} \rank N&=r-d=\theta_0+\cdots+\theta_{\mu-2},\\ \rank N^2&=\theta_1+\cdots+\theta_{\mu-2},\\ &\cdots\\ \rank N^{\mu-1}&=\theta_{\mu-2}. \end{align*} \end{theorem} \begin{proof} Owing to Theorem \ref{t.equivalence} the DAE is regular with tractability index $\mu^T=\mu$ and the associated characteristics given by formula \eqref{trac}. By \cite[Theorem 2.65]{CRR}, each DAE being regular with tractability index $\mu$ can be equivalently transformed into a structured SCF, with $N$ having the block upper triangular structure as in \eqref{blockstructureSCF}, $\kappa=\mu$, $l_1=m-r, l_2=m-r^T_1,\ldots, l_{\kappa}=m-r^T_{\kappa-1}$. Now the assertion follows by straightforward computations. \end{proof} Sometimes structured SCFs, in which the blocks on the secondary diagonal have full row rank, are more convenient to handle, as can be seen in the case of the proof of Proposition \ref{p.STform}, for example.
\begin{corollary}\label{c.SCT} Given is the strictly upper block triangular matrix function with full row-rank blocks on the secondary block diagonal, \begin{align*} \tilde N&=\begin{bmatrix} 0&\tilde N_{12}&&\cdots&\tilde N_{1\kappa}\\ &0&\tilde N_{23}&&\tilde N_{2\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&\tilde N_{\kappa-1 \kappa}\\ &&&&0 \end{bmatrix}:\mathcal I\rightarrow\Real^{l\times l},\\ \text{with blocks}\; &\tilde N_{ij} \;\text{of sizes}\; \tilde l_{i}\times \tilde l_{j},\; \rank \tilde N_{i,i+1}=\tilde l_{i}, \quad 1\leq\tilde l_1\leq \tilde l_2\leq\cdots\leq \tilde l_{\kappa}, \\&l=\sum_{i=1}^{\kappa}\tilde l_i,\; r_{\tilde N}=\rank \tilde N. \end{align*} Then the following two assertions are valid: \begin{description} \item[\textrm{(1)}] The pair $\{\tilde N,I_l\}$ can be equivalently transformed to a pair $\{N,I_l\}$ with full column-rank blocks on the secondary block diagonal, \begin{align*} N&=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1\kappa}\\ &0&N_{23}&&N_{2\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1 \kappa}\\ &&&&0 \end{bmatrix}:\mathcal I\rightarrow\Real^{l\times l},\\ \text{with blocks}\; &N_{ij} \;\text{of sizes}\; l_{i}\times l_{j},\; \rank N_{i,i+1}=l_{i+1}, \quad l_1\geq l_2\geq\cdots\geq l_{\kappa}\geq 1,\\&l=\sum_{i=1}^{\kappa} l_i,\; r_{N}=\rank N, \end{align*} such that there are pointwise nonsingular matrix functions $L,K:\mathcal I\rightarrow \Real^{l\times l}$ yielding \begin{align}\label{NN} LNK=\tilde N,\; LK+LNK'=I_l. \end{align} Furthermore, both pairs $\{N,I_l\}$ and $\{\tilde N,I_l\}$ are regular with index $\mu=\kappa$ and characteristics \begin{align*} r_{N}= r_{\tilde N}&=l-\tilde l_{\mu}=l-l_1,\\ \theta_0&=\tilde l_{\mu-1}=l_2,\\ \theta_1&=\tilde l_{\mu-2}=l_3,\\ &\cdots\\ \theta_{\mu-2}&=\tilde l_{1}=l_{\mu}.
\end{align*} \item[\textrm{(2)}] The pairs $\{E,F\}$ and $\{\tilde E,\tilde F\}$, given by \begin{align*} \tilde E=\begin{bmatrix} I_d&0\\0&\tilde N \end{bmatrix},\; \tilde F=\begin{bmatrix} \Omega &0\\0&I_l \end{bmatrix},\quad E=\begin{bmatrix} I_d&0\\0&N \end{bmatrix},\; F=\begin{bmatrix} \Omega &0\\0&I_l \end{bmatrix},\; \end{align*} with $N$ from {\textrm (1)} having full column-rank blocks on the secondary block diagonal, are equivalent. Both pairs are regular with index $\mu=\kappa$ and characteristics $r=d+r_{N}$ and $\theta_0,\ldots, \theta_{\mu-2}$ from {\textrm (1)}. \end{description} \end{corollary} \begin{proof} {\textrm (1):} The pair $\{\tilde N,I_l\}$ is regular with the characteristics $r_{\tilde N}=l-\tilde l_{\mu}, \theta_0=\tilde l_{\mu-1}, \theta_1=\tilde l_{\mu-2}, \ldots, \theta_{\mu-2}=\tilde l_{1}$ and $d=0$ owing to Proposition \ref{p.STform}(2). By Theorem \ref{t.SCF} it is equivalent to the pair $\{N,I_l\}$, which proves the assertion. The characteristic values are provided by Proposition \ref{p.STform}. {\textrm (2):} By means of the transformation \begin{align*} L=\begin{bmatrix} I_d&0\\0&\mathring L \end{bmatrix},\; K=\begin{bmatrix} I_d &0\\0&\mathring K \end{bmatrix}:\mathcal I\rightarrow\Real^{(d+l)\times(d+l)}, \end{align*} in which $\mathring L,\mathring K:\mathcal I\rightarrow \Real^{l\times l}$ represent the transformation from {\textrm (1)}, we verify the equivalence by \begin{align*} LEK&=\begin{bmatrix} I_d&0\\0&\mathring LN\mathring K \end{bmatrix}= \begin{bmatrix} I_d&0\\0&\tilde N \end{bmatrix}=\tilde E,\\ LFK+LEK'&= \begin{bmatrix} \Omega&0\\0&\mathring L\mathring K \end{bmatrix}+ \begin{bmatrix} I_d&0\\0&\mathring L N \end{bmatrix} \begin{bmatrix} 0&0\\0&\mathring K' \end{bmatrix} = \begin{bmatrix} \Omega&0\\0&\mathring L \mathring K+\mathring L N\mathring K' \end{bmatrix} = \begin{bmatrix} \Omega &0\\0&I_l \end{bmatrix}=\tilde F. \end{align*} The characteristic values are provided by Proposition \ref{p.STform}.
\end{proof} In case of constant matrices $\tilde N$ and $N$, $K$ is constant, too, and relation \eqref{NN} simplifies to the similarity transform $K^{-1}NK=\tilde N$. \begin{example}\label{e.D} Consider the following DAE in Weierstraß–Kronecker form: \[ \left[\begin{array}{@{}ccc|cc|cc@{}} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] \begin{bmatrix} x'_1\\ x'_2 \\ x'_3 \\ x'_4 \\ x'_5 \\ x'_6 \\ x'_7 \end{bmatrix} + \begin{bmatrix} x_1\\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{bmatrix}=\begin{bmatrix} q_1\\ q_2 \\ q_3 \\ q_4 \\ q_5 \\ q_6 \\ q_7 \end{bmatrix} \] with $m=7$, $r=4$, $d=0$, $\theta_0=3$, $\theta_1=1$, $\theta_2=0$. \begin{itemize} \item An equivalent DAE with blockstructure \eqref{blockstructure} with full column rank secondary blocks is \[ \left[\begin{array}{@{}ccc|ccc|c@{}} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] \begin{bmatrix} x'_4\\ x'_6\\ x'_1 \\ x'_5 \\ x'_7 \\ x'_2 \\ x'_3 \end{bmatrix} + \begin{bmatrix} x_4\\ x_6\\ x_1 \\ x_5 \\ x_7 \\ x_2 \\ x_3 \end{bmatrix}=\begin{bmatrix} q_4\\ q_6 \\ q_1 \\ q_5 \\ q_7 \\ q_2 \\ q_3 \end{bmatrix} \] with $\rank N_{1,2} = l_2=3$, $\rank N_{2,3}=l_3=1$, $l_1 \geq l_2 \geq l_3$, and again $\theta_0=3$, $\theta_1=1$, $\theta_2=0$.
\item An equivalent DAE with blockstructure \eqref{blockstructure} with full row rank secondary blocks is \[ \left[\begin{array}{@{}c|ccc|ccc@{}} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] \begin{bmatrix} x'_1\\ x'_2 \\ x'_4 \\ x'_6 \\ x'_3 \\ x'_5 \\ x'_7 \end{bmatrix} + \begin{bmatrix} x_1\\ x_2 \\ x_4\\ x_6 \\ x_3 \\ x_5 \\ x_7 \end{bmatrix}=\begin{bmatrix} q_1\\ q_2 \\ q_4 \\ q_6 \\ q_3 \\ q_5 \\ q_7 \end{bmatrix} \] with $\rank N_{1,2} = l_1=1$, $\rank N_{2,3}=l_2=3$, $l_1 \leq l_2 \leq l_3$, and again $\theta_0=3$, $\theta_1=1$, $\theta_2=0$. \end{itemize} \end{example} \begin{remark}\label{r.Scanform} Theorem \ref{t.SCF} ensures that each pair with regular strangeness index is also equivalently transformable into SCF. It should be added here that the canonical form\footnote{Global canonical form in \cite{KuMe1994}} of regular pairs obtained in the context of the strangeness index \cite{KuMe1994,KuMe2006} reads \begin{align}\label{StrangeSCF} E=\begin{bmatrix} I_{d}&M\\&N \end{bmatrix},\quad F&=\begin{bmatrix} \Omega&\\&I_{l} \end{bmatrix}, \quad N=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1\kappa}\\ &0&N_{23}&&N_{2\kappa}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\kappa-1 \kappa}\\ &&&&0 \end{bmatrix},\\ M&=\begin{bmatrix} 0&M_2&\cdots&M_{\kappa} \end{bmatrix},\nonumber \end{align} with full row-rank blocks $N_{i,i+1}$ and $l=a=m-d$, $\kappa-1=\mu^S$. In \cite[Theorem 3.21]{KuMe2006} one even has $\Omega=0$, taking into account that this is the result of the equivalence transformation \begin{align*} LEK=\begin{bmatrix} I_{d}&K_{11}^{-1}M\\&N \end{bmatrix},\quad LFK+LEK'&=\begin{bmatrix} 0&\\&I_{l} \end{bmatrix}, \end{align*} in which $K_{11}$ is the fundamental solution matrix of the ODE $y'+\Omega y=0$. Nevertheless this form fails to be in SCF if the entry $M$ does not vanish.
This is apparently a technical problem caused by the special transformations used there. \end{remark} \begin{remark}\label{r.SCFgeometric} The structured SCF in Theorem \ref{t.SCF} makes the limitation of the geometric view from Section \ref{subs.degree} above and Section \ref{subs.nonlinearDAEsGeo} below obvious. These are regular DAEs with index $\mu$ and degree $s=\mu-1$, and the configuration space is $\Real^d$ resp.\ $\im\begin{bmatrix}I_d\\0 \end{bmatrix}$. Of course, this enables the user to study the flow of the inherent ODE $u'+\Omega u=p$; however, the other part $Nv'+v=r$, which involves the actual challenges from an application point of view, no longer plays any role. \end{remark} \section{Notions defined by means of derivative arrays}\label{s.notions} \subsection{Preliminaries and general features}\label{subs.Preliminaries} Here we consider the DAE \eqref{1.DAE} on the given interval $\mathcal I\subseteq \Real$. Differentiating the DAE $k\geq 1$ times yields the inflated system \begin{align*} Ex^{(1)}+Fx&=q,\\ Ex^{(2)}+(E^{(1)}+F)x^{(1)}+F^{(1)}x&=q^{(1)},\\ Ex^{(3)}+(2E^{(1)}+F)x^{(2)}+(E^{(2)}+2F^{(1)})x^{(1)}+F^{(2)}x&=q^{(2)},\\ &\ldots\\ Ex^{(k+1)}+(kE^{(1)}+F)x^{(k)}+\cdots+(E^{(k)}+kF^{(k-1)})x^{(1)}+F^{(k)}x&=q^{(k)}, \end{align*} or, compactly arranged, \begin{align}\label{1.inflated1} \mathcal E_{[k]}x'_{[k]}+ \mathcal F_{[k]}x =q_{[k]}, \end{align} with the continuous matrix functions $\mathcal E_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times(mk+m)}$,\\ $\mathcal F_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times m}$, \begin{align}\label{1.GkLR} \mathcal E_{[k]}= \begin{bmatrix} E&0&&\cdots&0\\ E^{(1)}+F&E&&&\vdots\\ E^{(2)}+2F^{(1)}&2E^{(1)}+F&E&&\\ \vdots&\vdots&&\ddots&\\ E^{(k)}+kF^{(k-1)}&\cdots&& kE^{(1)}+F&E \end{bmatrix},\quad \mathcal F_{[k]}= \begin{bmatrix} F\\ F^{(1)}\\ F^{(2)}\\ \vdots\\ F^{(k)} \end{bmatrix}, \end{align} and the variables and right-hand sides \begin{align*} x_{[k]}= \begin{bmatrix} x\\ x^{(1)}\\ x^{(2)}\\
\vdots\\ x^{(k)} \end{bmatrix},\qquad q_{[k]}= \begin{bmatrix} q\\ q^{(1)}\\ q^{(2)}\\ \vdots\\ q^{(k)} \end{bmatrix}:\mathcal I\rightarrow\Real^{mk+m}. \end{align*} Set $\mathcal F_{[0]}=F,\, \mathcal E_{[0]}=E$,\ $x_{[0]}=x, \, q_{[0]}=q$, such that the DAE \eqref{1.DAE} itself coincides with \begin{align}\label{1.inflated0} \mathcal E_{[0]}x'_{[0]}+\mathcal F_{[0]}x =q_{[0]}. \end{align} By its design, the system \eqref{1.inflated1} includes all previous systems with lower dimensions, \begin{align*} \mathcal E_{[j]}x'_{[j]}+\mathcal F_{[j]}x =q_{[j]},\quad j=0,\ldots,k-1, \end{align*} and the sets \begin{align}\label{1.consitent1} \mathcal C_{[j]}(t)=\{z\in \Real^{m}: \mathcal F_{[j]}(t)z -q_{[j]}(t) \in \im \mathcal E_{[j]}(t)\},\quad t\in \mathcal I,\quad j=0,\ldots,k, \end{align} satisfy the inclusions \begin{align}\label{1.consitent2} \mathcal C_{[k]}(t)\subseteq\mathcal C_{[k-1]}(t)\subseteq\cdots\subseteq \mathcal C_{[0]}(t)=\{z\in\Real^{m}:F(t)z-q(t)\in \im E(t)\}, \quad t\in \mathcal I. \end{align} Therefore, each smooth solution $x$ of the original DAE must meet the so-called constraints, that is, \begin{align*} x(t)\in \mathcal C_{[k]}(t), \quad t\in\mathcal I. \end{align*} In the following, the rank functions $r_{[k]}:\mathcal I\rightarrow \Real$, \begin{align}\label{eq:rankGR} r_{[k]}(t)=\rank \mathcal E_{[k]}(t),\quad t\in \mathcal I,\; k\geq 0, \end{align} and the projector-valued functions $\mathcal W_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times(mk+m)}$, \begin{align}\label{eq:WGR} \mathcal W_{[k]}(t)=I_{mk+m}- \mathcal E_{[k]}(t)\mathcal E_{[k]}(t)^{+},\quad t\in \mathcal I,\; k\geq 0, \end{align} will play their role, and further the associated linear subspaces \begin{align}\label{1.subspace1} S_{[k]}(t)=\{z\in \Real^{m}: \mathcal F_{[k]}(t)z \in \im \mathcal E_{[k]}(t)\}=\ker \mathcal W_{[k]}(t)\mathcal F_{[k]}(t) ,\quad t\in \mathcal I,\quad k\geq 0.
\end{align} Obviously, it holds that \begin{align}\label{1.subspace2} S_{[k]}(t)\subseteq S_{[k-1]}(t)\subseteq\cdots\subseteq S_{[0]}(t)=\{z\in\Real^{m}:F(t)z\in \im E(t)\}, \quad t\in \mathcal I. \end{align} It should be emphasized that, if the rank function $r_{[k]}$ is constant, then the pointwise Moore-Penrose inverse ${\mathcal E_{[k]}}^{+}$ and the projector function $\mathcal W_{[k]}$ are as smooth as $\mathcal E_{[k]}$. Otherwise one is confronted with discontinuities. \begin{remark}[A necessary regularity condition]\label{r.a1} One aspect of regularity is that the DAE \eqref{1.DAE} should be such that it has a correspondingly smooth solution for every $m$ times continuously differentiable function $q:\mathcal I\rightarrow \Real^{m}$. If this is so, all matrix functions \begin{align*} \begin{bmatrix} \mathcal E_{[k]}&\mathcal F_{[k]} \end{bmatrix}:\mathcal I\rightarrow \Real^{(mk+m)\times(mk+2m)} \end{align*} must have full row rank, i.e., \begin{align}\label{eq:fullrank} \rank \begin{bmatrix} \mathcal E_{[k]}(t)&\mathcal F_{[k]}(t) \end{bmatrix}=mk+m,\quad t\in\mathcal I, \; k\geq 0. \end{align} If, on the contrary, condition \eqref{eq:fullrank} is not valid, i.e., there are a $\bar k$ and a $\bar t$ such that \begin{align*} \rank \begin{bmatrix} \mathcal E_{[\bar k]}(\bar t)&\mathcal F_{[\bar k]}(\bar t) \end{bmatrix}< m\bar k+m, \end{align*} then there exists a nontrivial $w\in\Real^{m\bar k+m}$ such that \begin{align*} w^{*}\begin{bmatrix} \mathcal E_{[\bar k]}(\bar t)&\mathcal F_{[\bar k]}(\bar t) \end{bmatrix}=0. \end{align*} Regarding the relation \begin{align*} \mathcal E_{[\bar k]}(\bar t)x'_{[\bar k]}(\bar t)+\mathcal F_{[\bar k]}(\bar t)x(\bar t)=q_{[\bar k]}(\bar t) \end{align*} one is confronted with the restriction $w^{*}q_{[\bar k]}(\bar t)=0$ for all inhomogeneities. \end{remark} \begin{remark}[Representation of $\mathcal C_{[k]}(t)$]\label{r.a2} The full row rank condition \eqref{eq:fullrank}, i.e.
also \begin{align}\label{eq:fullrank(t)} \im [\mathcal E_{[k]}(t)\, \mathcal F_{[k]}(t)]=\Real^{mk+m} \end{align} implies \begin{align*} \im \underbrace{\mathcal W_{[k]}(t)[\mathcal F_{[k]}(t)\, \mathcal E_{[k]}(t)]}_{[\mathcal W_{[k]}(t)\mathcal F_{[k]}(t)\quad 0]}=\im \mathcal W_{[k]}(t), \end{align*} thus \begin{align}\label{eq:WRGL} \im \mathcal W_{[k]}(t)\mathcal F_{[k]}(t)=\im \mathcal W_{[k]}(t), \end{align} and in turn \begin{align} \mathcal C_{[k]}(t)=S_{[k]}(t)+(\mathcal W_{[k]}(t)\mathcal F_{[k]}(t))^{+}\mathcal W_{[k]}(t)q_{[k]}(t),\label{eq:RepresC}\\ \dim S_{[k]}(t)=m-\rank \mathcal W_{[k]}(t)=r_{[k]}(t)-mk.\label{eq:Represrank} \end{align} By representation \eqref{eq:RepresC}, $\mathcal C_{[k]}(t)$ is an affine subspace of $\Real^{m}$ associated with $S_{[k]}(t)$. \end{remark} It becomes clear that under the necessary regularity condition \eqref{eq:fullrank} the dimensions of the subspaces $S_{[k]}(t)$ are fully determined by the ranks of $\mathcal E_{[k]}(t)$ and vice versa. In particular, $\dim S_{[k]}(t)$ is then independent of $t$ if and only if $r_{[k]}(t)$ is so, a matter that will later play a quite significant role. \medskip If the DAE \eqref{1.DAE} is interpreted, as in \cite{Chist1996,ChistShch}, as a Volterra integral equation \begin{align}\label{Int} E(t)x(t)+\int_a^t (F(s)-E'(s))x(s)\,\mathrm{d}s= c+\int_a^t q(s)\,\mathrm{d}s \end{align} then the inflated system created on this basis reads \begin{align*} \mathcal{D}_{[k]}x_{[k]}=\begin{bmatrix} -\int_a^t (F(s)-E'(s))x(s)\,\mathrm{d}s+ c+\int_a^t q(s)\,\mathrm{d}s\\q_{[k-1]} \end{bmatrix}, \end{align*} with the array function \begin{align}\label{1.Dk} \mathcal{D}_{[k]} = \begin{bmatrix} E & 0 \\ \mathcal F_{[k-1]}& \mathcal E_{[k-1]} \end{bmatrix}:\mathcal I\rightarrow \Real^{(m+mk)\times(m+mk)}. \end{align} To get an idea about the rank of $\mathcal D_{[k]}(t)$ we take a closer look at the time-varying subspace $\ker \mathcal D_{[k]}(t)$.
We have for $k\geq 1$ that \begin{align} \ker \mathcal D_{[k]}&=\left\{\begin{bmatrix} z\\w \end{bmatrix}\in \Real^{m}\times\Real^{mk}: Ez=0, \mathcal F_{[k-1]}z+ \mathcal E_{[k-1]}w=0 \right\}\nonumber\\ &=\left\{\begin{bmatrix} z\\w \end{bmatrix}\in \Real^{m}\times\Real^{mk}: z\in\ker E, \mathcal W_{[k-1]}\mathcal F_{[k-1]}z=0, \mathcal E_{[k-1]}^{+}\mathcal E_{[k-1]}w=- \mathcal E_{[k-1]}^{+}\mathcal F_{[k-1]}z \right\}\nonumber\\ &=\left\{\begin{bmatrix} z\\w \end{bmatrix}\in \Real^{m}\times\Real^{mk}: z\in\ker E\cap S_{[k-1]}, \mathcal E_{[k-1]}^{+}\mathcal E_{[k-1]}w=-\mathcal E_{[k-1]}^{+}\mathcal F_{[k-1]}z \right\},\label{1.kerDk} \end{align} and consequently, \begin{align} \rank \mathcal D_{[k]}&= m-\dim(\ker E\cap S_{[k-1]})+r_{[k-1]}. \label{1.rankDk} \end{align} If $ \mathcal E_{[k]}$ has constant rank, then the projector functions $\mathcal W_{[k]}$ and the Moore-Penrose inverse $\mathcal E_{[k]}^{+}$ inherit the smoothness of $\mathcal E_{[k]}$. The following proposition makes clear that, in any case, both $r_{[k]}(t)=\rank \mathcal E_{[k]}(t)$ and $\rank \mathcal D_{[k]}(t)$ as well as $\dim S_{[k]}(t)$ and $\dim (\ker E(t)\cap S_{[k]}(t))$, $t\in\mathcal I$, are invariant under equivalence transformations. \begin{proposition}\label{p.equivalenc} Given are two equivalent coefficient pairs $\{E,F\}$ and $\{\tilde E,\tilde F\}$, $\tilde E=LEK$, $\tilde F=LFK+LEK'$, $E,F,L,K:\mathcal I\rightarrow \Real^{m\times m}$ sufficiently smooth, $L$ and $K$ pointwise nonsingular. 
Then, the inflated matrix function pair $\tilde{\mathcal E}_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times(mk+m)}$, $\tilde{\mathcal F}_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times m}$ and the subspace $\tilde S_{[k]}$ related to $\{\tilde E,\tilde F\}$ satisfy the following: \begin{align*} \tilde{\mathcal E}_{[k]}&=\mathcal L_{[k]}\mathcal E_{[k]}\mathcal K_{[k]},\quad \tilde{\mathcal F}_{[k]}=\mathcal L_{[k]}\mathcal F_{[k]}K+\mathcal L_{[k]}\mathcal E_{[k]}\mathcal H_{[k]},\quad \mathcal H_{[k]}= \begin{bmatrix} K'\\\vdots\\K^{(k+1)} \end{bmatrix},\\ \tilde S_{[k]}&=K^{-1}S_{[k]}, \quad \tilde S_{[k]}\cap\ker \tilde E=K^{-1}(S_{[k]}\cap\ker E), \end{align*} in which the matrix functions $\mathcal L_{[k]},\mathcal K_{[k]}:\mathcal I\rightarrow \Real^{(m+mk)\times (m+mk)}$ are uniquely determined by $L$ and $K$ and their derivatives. They are pointwise nonsingular and have lower triangular block structure, \begin{align*} \mathcal L_{[k]}=\begin{bmatrix} L&0&\cdots&0\\ \ast&L&\cdots&0\\ \vdots&&\ddots&0\\ \ast&&\cdots&L \end{bmatrix},\quad \mathcal K_{[k]}=\begin{bmatrix} K&0&\cdots&0\\ \ast&K&\cdots&0\\ \vdots&&\ddots&0\\ \ast&&\cdots&K \end{bmatrix}=: \begin{bmatrix} K&0\\\mathcal K_{{[k]}\,{21}}&\mathcal K_{{[k]}\;{22}} \end{bmatrix}. \end{align*} \end{proposition} \begin{proof} The representation of $\tilde{\mathcal E}_{[k]}$ and $\tilde{\mathcal F}_{[k]}$ is given by a slight adaptation of \cite[Theorem 3.29]{KuMe2006}. We turn to $\tilde S_{[k]}$. $\tilde z\in \tilde S_{[k]}$ means $\tilde{\mathcal F}_{[k]}\tilde z\in \im \tilde{\mathcal E}_{[k]}$, thus $ \mathcal F_{[k]}K\tilde z +\mathcal E_{[k]} \mathcal H_{[k]}\tilde z\in \im \mathcal E_{[k]}$, then also $ \mathcal F_{[k]}K\tilde z \in \im \mathcal E_{[k]}$, that is, $K\tilde z\in S_{[k]}$. Regarding also that $\tilde z\in \ker \tilde E$ means $K\tilde z\in \ker E$, we are done. \end{proof} The following lemma gives a certain first idea about the size of the rank functions.
\begin{lemma}\label{l.rk} The rank functions $r_{[k]}=\rank \mathcal E_{[k]}$ and $r^{\mathcal D}_{[k]}=\rank \mathcal D_{[k]}$, $k\geq 1$, $r^{\mathcal D}_{[0]}= r_{[0]}=\rank E$, satisfy the inequalities \begin{align*} r_{[k]}(t)+r(t)\leq r_{[k+1]}(t)\leq r_{[k]}(t)+m,\quad t\in\mathcal I, \; k\geq0,\\ r^{\mathcal D}_{[k]}(t)+r(t)\leq r^{\mathcal D}_{[k+1]}(t)\leq r^{\mathcal D}_{[k]}(t)+m,\quad t\in\mathcal I, \; k\geq0. \end{align*} \end{lemma} \begin{proof} The special structure of both matrix functions meets the requirements of Lemma \ref{l.app2}, which ensures the inequalities. \end{proof} The question of whether the ranks $r_{[i]}$ of the matrix functions $\mathcal E_{[i]}$ are constant will play an important role below. We are also interested in the relationships to the rank conditions associated with Definition \ref{d.2}. We regard points where these rank conditions are violated as critical points which require closer examination. In Section \ref{s.examples} below a few examples are discussed in detail to illustrate the matter. \begin{lemma}\label{l.R1} Let the matrix functions $E,F:\mathcal I\rightarrow\Real^{m\times m}$ be such that, for all $t\in \mathcal I$, $\rank E(t)=r$, $\rank [E(t)\, F(t)]=m$. Denote $\theta_{0}(t)=\dim(\ker E(t)\cap S_{[0]}(t))=\dim(\ker E(t)\cap \ker Z(t)^*F(t))$ in which $Z:\mathcal I\rightarrow\Real^{m\times(m-r)}$ is a basis of $(\im E)^{\perp}$. Then it results that \[r_{[1]}(t)=\rank\mathcal E_{[1]}(t)=\rank\mathcal D_{[1]}(t)=m+r-\theta_{0}(t),\quad t\in \mathcal I, \] and both $\mathcal E_{[1]}$ and $\mathcal D_{[1]}$ have constant rank precisely if the pair is pre-regular.
\end{lemma} \begin{proof} We consider the nullspaces of $\mathcal D_{[1]}$ and $\mathcal E_{[1]}$, that is \begin{align*} \ker \mathcal D_{[1]}&=\ker \begin{bmatrix} E&0\\F&E \end{bmatrix} =\{z\in\Real^{2m}:Ez_1=0, Fz_1+Ez_2=0 \}\\ &=\{z\in\Real^{2m}:Ez_1=0, Fz_1\in \im E, E^+Ez_2=-E^+Fz_1 \}\\ &=\{z\in\Real^{2m}:z_1\in \ker E\cap \ker Z^*F, E^+Ez_2=-E^+Fz_1 \},\\ \ker \mathcal E_{[1]}&=\ker \begin{bmatrix} E&0\\E'+F&E \end{bmatrix} =\{z\in\Real^{2m}:Ez_1=0, (E'+F)z_1+Ez_2=0 \} \\ &=\{z\in\Real^{2m}:Ez_1=0, (E'+F)z_1\in \im E, E^+Ez_2=-E^+(E'+F)z_1 \}\\ &=\{z\in\Real^{2m}:z_1\in \ker E\cap \ker Z^*(E'+F), E^+Ez_2=-E^+(E'+F)z_1 \}. \end{align*} Since $Z^*E'(I-E^+E)=-Z^*E(I-E^+E)'=0$ we know that $\ker E\cap \ker Z^*F=\ker E\cap \ker Z^*(E'+F)$ and hence $\dim \ker \mathcal E_{[1]}=\dim \ker \mathcal D_{[1]}= \dim (\ker E\cap\ker Z^*F) +m-r=\theta_0+m-r$, thus $\rank \mathcal E_{[1]}=\rank \mathcal D_{[1]}= 2m-(\theta_0+m-r)=m+r-\theta_0$. \end{proof} \subsection{Array functions for DAEs being transformable into SCF and for regular DAEs}\label{subs.SCFarrays} In this section, we consider important properties of the array functions $\mathcal E_{[k]}$ and $\mathcal D_{[k]}$ from \eqref{1.GkLR} and \eqref{1.Dk}. First of all we observe that both are special cases of the matrix function \begin{footnotesize} \begin{align}\label{eq.arrayHk_SCF} \mathcal H_{[k]}:= \begin{bmatrix} E&0&&\cdots&&0\\ \alpha_{2,1}E^{(1)}+ F&E&&&&\vdots\\ \alpha_{3,1}E^{(2)}+\beta_{3,1}F^{(1)}&\alpha_{3,2}E^{(1)}+F&E&\\ \vdots&\ddots&\ddots&\ddots&&0\\ \alpha_{k+1,1}E^{(k)}+\beta_{k+1,1}F^{(k-1)}&\cdots& \alpha_{k+1,k-1} E^{(2)}+ \beta_{k+1,k-1} F^{(1)}& \alpha_{k+1,k} E^{(1)}+ F&&E \end{bmatrix}, \end{align} \end{footnotesize} each with different coefficients $\alpha_{i,j}$ and $\beta_{i,j}$. We do not specify them, as they do not play any role later on.
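The rank formula of Lemma \ref{l.R1} is easy to check numerically. For constant coefficients one has $E'=0$, so $\mathcal E_{[1]}$ and $\mathcal D_{[1]}$ coincide. The following is a minimal NumPy sketch; the two concrete pairs are illustrative choices, not taken from the text:

```python
import numpy as np

def inflate1(E, F, dE=None):
    """Arrays E_[1] = [[E, 0], [E' + F, E]] and D_[1] = [[E, 0], [F, E]]
    for a pair {E, F}; dE is E' (zero for constant coefficients)."""
    m = E.shape[0]
    O = np.zeros((m, m))
    dE = O if dE is None else dE
    return np.block([[E, O], [dE + F, E]]), np.block([[E, O], [F, E]])

rk = np.linalg.matrix_rank

# Pair 1: E = diag(1,0), F = diag(0,1); m = 2, r = 1, theta_0 = 0.
E1, D1 = inflate1(np.diag([1.0, 0.0]), np.diag([0.0, 1.0]))
print(rk(E1), rk(D1))  # 3 3  (= m + r - theta_0)

# Pair 2: E = N nilpotent, F = I; m = 2, r = 1, theta_0 = 1.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
E1, D1 = inflate1(N, np.eye(2))
print(rk(E1), rk(D1))  # 2 2
```

Both pairs satisfy the hypothesis $\rank[E\,F]=m$; for the nilpotent pair the intersection $\ker E\cap S_{[0]}$ is one-dimensional, so the rank drops by one, exactly as the lemma predicts.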
\medskip Let for a moment the given DAE be in SCF, see \eqref{1.SCF}, that is, \begin{align*} E=\begin{bmatrix} I_d & 0 \\ 0 & N \end{bmatrix}, \quad F =\begin{bmatrix} \Omega & 0 \\ 0 & I_{m-d} \end{bmatrix}, \end{align*} with a strictly upper triangular matrix function $N$. We evaluate the nullspace of the corresponding matrix $\mathcal H_{[k]}(t)\in \Real^{(m+km)\times (m+km)}$ for each fixed $t$, but drop the argument $t$ again. Denote \begin{align*} z=\begin{bmatrix} z_0\\\vdots\\z_k \end{bmatrix}\in \Real^{(k+1)m},\; z_j=\begin{bmatrix} x_j\\y_j \end{bmatrix}\in \Real^{m},\; x_j\in \Real^d,\; y_j\in \Real^{m-d},\; \begin{bmatrix} y_0\\\vdots\\y_k \end{bmatrix}=:y\in \Real^{(k+1)(m-d)} \end{align*} and evaluate the linear system $\mathcal H_{[k]}z=0$. The first block line gives \begin{align*} x_0=0, \quad Ny_0=0, \end{align*} and the entire system decomposes into parts for $x$ and $y$. All components $x_j$ are fully determined and equal to zero, and it results that $\mathcal{N}_{[k]}y=0$, with \begin{align}\label{eq.arrayNk} \mathcal N_{[k]}:= \begin{bmatrix} N&0&&&\cdots&0\\ I+\alpha_{2,1} N^{(1)}&N&&&&\vdots\\ \alpha_{3,1} N^{(2)}&I+\alpha_{3,2}N^{(1)}&N&&\\ \vdots& \ddots&\ddots&\ddots& &\\ \vdots& &\ddots&\ddots&\ddots&0\\ \alpha_{k+1,1}N^{(k)}&\cdots&&\alpha_{k+1,k-1}N^{(2)}&I+ \alpha_{k+1,k}N^{(1)}&N \end{bmatrix}. \end{align} This leads to the relations \begin{align*} \rank \mathcal H_{[k]}=(k+1)d+\rank \mathcal N_{[k]},\\ \dim\ker \mathcal H_{[k]}= \dim\ker \mathcal N_{[k]}, \end{align*} such that the question of how $\rank \mathcal H_{[k]}$ behaves can be traced back to $\mathcal N_{[k]}$. We have prepared relevant properties of $\rank \mathcal N_{[k]}$ in some detail in Appendix \ref{subs.A_strictly}, which enables us to formulate the following basic general results. Obviously, if the pair $\{E,F\}$ is transferable into SCF and $N$ changes its rank on the given interval, then $E$ and $\mathcal E_{[0]}=E$ do so, too.
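For constant coefficients all derivative terms in \eqref{1.GkLR} vanish, so $\mathcal E_{[k]}$ is block lower bidiagonal and the relation $\rank \mathcal H_{[k]}=(k+1)d+\rank \mathcal N_{[k]}$ can be tested directly. A small sketch with an illustrative constant SCF pair ($d=1$, $a=2$, $N$ the nilpotent $2\times 2$ Jordan block, $\Omega=2$ an arbitrary choice; none of this data is taken from the text):

```python
import numpy as np

def inflate(E, F, k):
    """E_[k] for a CONSTANT pair {E, F}: all derivatives of E and F
    vanish, leaving E on the block diagonal and F on the first block
    subdiagonal."""
    m = E.shape[0]
    M = np.zeros(((k + 1) * m, (k + 1) * m))
    for i in range(k + 1):
        M[i*m:(i+1)*m, i*m:(i+1)*m] = E
        if i > 0:
            M[i*m:(i+1)*m, (i-1)*m:i*m] = F
    return M

d, a = 1, 2
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # strictly upper triangular, N^2 = 0
E = np.zeros((3, 3)); E[0, 0] = 1.0; E[1:, 1:] = N
F = np.diag([2.0, 1.0, 1.0])             # blockdiag(Omega, I_a) with Omega = 2
m = d + a
for k in (1, 2, 3):
    r_k = np.linalg.matrix_rank(inflate(E, F, k))
    print(k, r_k, r_k == k * m + d)      # rank stabilizes at k*m + d from k = a-1 on
```

Here $\rank\mathcal N_{[k]}=ka$ once $N^{k+1}=0$, so the printed ranks equal $(k+1)d+ka=km+d$ for every $k\geq a-1$.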
It may also happen that $N$ and in turn $\mathcal E_{[0]}=E$ show constant rank but further $\mathcal E_{[i]}$ suffer from rank changes, as Example \ref{e.5} confirms for $i=1$. Nevertheless, the subsequent matrix functions eventually have constant rank, as the next assertion shows. \begin{theorem}\label{t.rankSCF} If the pair $\{E,F\}$ is transferable into SCF with characteristics $d$ and $a=m-d$ then \begin{description} \item[\textrm{(1)}] the derivative array functions $\mathcal E_{[k]}$ and $\mathcal D_{[k]}$ have constant ranks for $k\geq a-1$, namely \begin{align*} r_{[k]}= \rank \mathcal E_{[k]}= \rank \mathcal D_{[k]}=km+d,\quad k \geq a-1. \end{align*} \item[\textrm{(2)}] Moreover, \begin{align*} \dim (\ker E \cap S_{[k]})=0,\quad k\geq a. \end{align*} \end{description} \end{theorem} \begin{proof} Owing to Proposition \ref{p.equivalenc} we may turn to the SCF, which leads to \begin{align*} \rank \mathcal D_{[k]}=\rank \mathcal E_{[k]}=\rank \mathcal H_{[k]}=(k+1)d+\rank \mathcal N_{[k]}, \end{align*} and regarding Proposition \ref{prop.rank.Nk} we obtain \begin{align*} \rank \mathcal H_{[k]}=(k+1)d+\rank \mathcal N_{[k]}=(k+1)d +ka+\rank N\tilde N_2 \cdots \tilde N_{k+1}, \end{align*} in which $N\tilde N_2 \cdots \tilde N_{k+1}$ is a product of $k+1$ strictly upper triangular matrix functions of size $a\times a$. Clearly, if $k\geq a-1$ then $N\tilde N_2 \cdots \tilde N_{k+1}= 0$ and in turn \begin{align*} \rank \mathcal H_{[k]}=(k+1)d +ka=km+d. \end{align*} Now formula \eqref{1.rankDk} implies for $k\geq a$, \begin{align*} \dim(\ker E\cap S_{[k]})&=m+r_{[k]}-\rank \mathcal D_{[k+1]}\\&=m+r_{[k]}-r_{[k+1]}=m+(km+d)-((k+1)m+d)=0. \end{align*} \end{proof} It is an advantage of regular pairs that all associated matrix function arrays have constant rank, as we know from the following assertion.
\begin{theorem}\label{th.ranks} Let the pair $\{E,F\}$ be regular on $\mathcal I$ with index $\mu $ and characteristic values $r$ and $\theta_0\geq\cdots\geq \theta_{\mu-2}>\theta_{\mu-1}=0$. Set $\theta_k=0$ for $k\geq\mu$. Then the following assertions are valid: \begin{description} \item[\textrm{(1)}] The derivative array functions $\mathcal E_{[k]}$ and $\mathcal D_{[k]}$ have constant ranks, namely \begin{align*} r_{[k]}= \rank \mathcal E_{[k]}= \rank \mathcal D_{[k]}=km+r-\sum_{i=0}^{k-1}\theta_i,\quad k \geq 1. \end{align*} \item[\textrm{(2)}] In particular, $r_{[k]}= \rank \mathcal E_{[k]} = km+d$,\; $\dim\ker\mathcal E_{[k]}=m-d=a $,\; if $k\geq\mu-1$. \item[\textrm{(3)}] For $k\geq \mu$, there is a continuous function $H_k:\mathcal I\rightarrow\Real^{km\times km}$ such that the nullspace of $\mathcal E_{[k]}$ has the special form \begin{align*} \ker \mathcal E_{[k]}=\{\begin{bmatrix} z\\w \end{bmatrix}\in\Real^{m+km}:z=0, H_kw=0\}. \end{align*} \item[\textrm{(4)}] $\dim S_{[k]}=r-\sum_{i=0}^{k-1}\theta_i, \; k\geq 1$, and $\dim S_{[\mu-1]}= \dim S_{[\mu]}=d$. \item[\textrm{(5)}] $S_{[\mu-1]}= S_{[\mu]}=S_{can}$. \item[\textrm{(6)}] $ \dim(\ker E\cap S_{[k]})=\theta_k,\quad k\geq 0$. \end{description} \end{theorem} \begin{proof} {\textrm (1):} We note that this assertion is a straightforward consequence of \cite[Theorem 3.30]{KuMe2006}. Nevertheless, we formulate here a more transparent direct proof based on the preceding arguments, which at the same time serves as an auxiliary means for the further proofs. For $\mu=1$ we are done by Lemma \ref{l.R1}, so we assume $\mu\geq 2$. 
Each regular pair $\{ E,F \}$ with index $\mu \ge 2$ and characteristic values $r, \theta_0 \ge \cdots \ge \theta_{\mu-2} > \theta_{\mu-1} = 0$, features also the regular tractability index $\mu$ and can be equivalently transformed into the structured SCF \cite[Theorem 2.65]{CRR} \begin{align}\label{N0} E=\begin{bmatrix} I_d&0\\0&N \end{bmatrix},\quad F=\begin{bmatrix} \Omega&0\\0&I_a \end{bmatrix} \end{align} in which the matrix function $N$ is strictly block upper triangular with exclusively full column-rank blocks on the secondary diagonal and $N^{\mu}=0$, in more detail, see Proposition \ref{p.STform}(1), \begin{align*} N=\begin{bmatrix} 0&N_{12}&&\cdots&N_{1,\mu}\\ &0&N_{23}&&N_{2,\mu}\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\mu-1,\mu}\\ &&&&0 \end{bmatrix},\\ \text{with blocks}\; N_{ij} \; \text{ of sizes}\; l_{i}\times l_{j},\; \rank N_{i,i+1}=l_{i+1}, \end{align*} and $l_1=m-r$, $l_2=\theta_0,\ldots, l_{\mu}=\theta_{\mu-2}$. Since $\rank \mathcal E_{[k]}$ and $\rank \mathcal D_{[k]}$ are invariant with respect to equivalence transformations, we can turn to the array function $ \mathcal H_{[k]}$ applied to the structured SCF, and further to $ \mathcal N_{[k]}$. Regarding the relation \begin{align*} \rank \mathcal H_{[k]}&=(k+1)d+\rank \mathcal N_{[k]},\\ \dim\ker \mathcal H_{[k]}&= \dim\ker \mathcal N_{[k]}, \end{align*} we obtain by Proposition \ref{prop.rank.Nk}, formula \eqref{Nk_rank}, \begin{align*} \rank \mathcal H_{[k]}&=(k+1)d+\rank \mathcal N_{[k]}=(k+1)d+k(m-d)+\rank N^{k+1}\\ &=km+d+\rank N^{k+1}. \end{align*} Lemma \ref{l.Ncol} (with $l=m-d$) implies $\rank N^{k+1}= m-d-(l_1+\cdots+l_{k+1})$, thus $\rank N^{k+1}= m-d-(m-r+\theta_0+\cdots+\theta_{k-1})= r-d-(\theta_0+\cdots+\theta_{k-1})$, and therefore \begin{align*} \rank \mathcal H_{[k]} =km+r-\sum_{j=0}^{k-1}\theta_j. \end{align*} {\textrm(2):} This is a direct consequence of {\textrm(1)}. {\textrm(3):} This follows from Corollary \ref{c.Nk_-full}.
{\textrm(4):} This is a consequence of relation \eqref{eq:Represrank} and the solvability properties provided by Theorem \ref{t.solvability}. {\textrm(5):} This results from the inclusions $S_{[\mu]}\subseteq S_{[\mu-1]}$ and $S_{[\mu]}\subseteq S_{can}$ since all these subspaces have the same dimension, namely $d$. {\textrm(6):} Next we investigate the intersection $S_{[k]} \cap \ker E$. Applying {\textrm(1)} formula \eqref{1.rankDk} (which concerns the nullspace of $\mathcal D_{[k]}$) immediately yields \begin{align*} \dim(\ker E\cap S_{[k-1]})=m+r_{[k-1]}-\rank \mathcal D_{[k]}=m+r_{[k-1]}-r_{[k]}=\theta_{k-1}. \end{align*} \end{proof} \bigskip \subsection{Differentiation index}\label{subs.diff} The most popular idea behind the index of a DAE is to filter an explicit ordinary differential equation (ODE) with respect to $x$ out of the inflated system \eqref{1.inflated1}, a so-called \textit{completion ODE}, also \emph{underlying ODE}, of the form \begin{align}\label{1.completionODE} x^{(1)}+ Ax=f, \end{align} with a continuous matrix function $A:\mathcal I\rightarrow\Real^{m\times m}$. The index of the DAE is the minimum number of differentiations needed to determine such an explicit ODE, e.g., \cite[Definition 2.4.2]{BCP89}. At this point it should be emphasized that in early work the index type was not yet specified. It was simply spoken of the\emph{ index}. Only later epithets were used for distinction of various approaches. In particular in \cite{GHM} the term \emph{differentiation index} is used which is now widely practiced, e.g., \cite[Section 3.3]{KuMe2006}, \cite[Section 3.7]{RR2008}. The following definition after \cite{Campbell87}\footnote{In \cite{BCP89}, this is the statement of \cite[Proposition 2.4.2]{BCP89}.} is the specification common today. 
\begin{definition}\label{d.diff} The smallest number $\nu\in\mathbb{N}$, if it exists, for which the matrix function $\mathcal E_{[\nu]}$ has constant rank and is smoothly $1$-full is called the \emph{differentiation index} of the pair $\{E,F\}$ and the DAE \eqref{1.DAE}, respectively.\footnote{For $1$-fullness we refer to Appendix \ref{subs.A1}.} We then denote the differentiation index by $\mu^{diff}=\nu$. \end{definition} If $\mathcal E_{[\nu]}$ is smoothly $1$-full, then there is a nonsingular, continuous matrix function $\mathcal T$ such that \begin{align}\label{1.1full} \mathcal T \mathcal E_{[\nu]}=\begin{bmatrix} I_{m}&0\\0&H_{\mathcal E} \end{bmatrix}, \end{align} and the first block-line of the inflated system \eqref{1.inflated1} scaled by $\mathcal T$ is actually an explicit ODE with respect to $x$, i.e., \begin{align*} x^{(1)}+(\mathcal T\mathcal F_{[\nu]})_{1}x =(\mathcal Tq_{[\nu]})_{1}, \end{align*} with a continuous matrix coefficient $(\mathcal T\mathcal F_{[\nu]})_{1}:\mathcal I\rightarrow\Real^{m\times m}$. Supposing a consistent initial value for $x$, that is, $x(t_0)=x_0\in \mathcal{C}_{[{\nu}]}(t_0)$\footnote{See representation \eqref{eq:RepresC}.}, the solution of the IVP for this ODE is a solution of the DAE \cite[Theorem 2.4.8]{BCP89}. \begin{proposition}\label{p.diffind} The differentiation index remains invariant under sufficiently smooth equivalence transformations. \end{proposition} \begin{proof} Let $\mathcal E_{[k]}$ have constant rank and be smoothly $1$-full such that \eqref{1.1full} is given. The transformed $\tilde{\mathcal E}_{[k]}$ has the same constant rank as $\mathcal E_{[k]}$.
Following \cite[Theorem 3.38]{KuMe2006}, with the notation of Proposition \ref{p.equivalenc}, we derive \begin{align*} \underbrace{\begin{bmatrix} K^{-1}&0\\ -H_{\mathcal E}\mathcal K_{[k]\;21}K^{-1}&I \end{bmatrix} \mathcal T\mathcal L_{[k]}^{-1}}_{=:\tilde{\mathcal T}}\tilde{\mathcal E}_{[k]}= \begin{bmatrix} I&0\\0&H_{\mathcal E}\mathcal K_{[k]\;{22}} \end{bmatrix}. \end{align*} $\tilde{\mathcal T}$ is pointwise nonsingular and continuous. The matrix functions $\mathcal E_{[k]}$ and $\tilde{\mathcal E}_{[k]}$ are smoothly $1$-full simultaneously, which completes the proof. \end{proof} \begin{proposition}\label{p.index1} The DAE \eqref{1.DAE} and the pair $\{E,F\}$ have differentiation index one if and only if they are regular with index $\mu=1$ in the sense of Definition \ref{d.2}. The index-one case goes along with $S_{[0]}=S_{[1]}=S_{can}$ and $d=\dim S_{can}=\rank E=r$. \end{proposition} \begin{proof} Let $\mathcal E_{[1]}$ be smoothly $1$-full, \[\mathcal E_{[1]}=\begin{bmatrix} E&0\\E'+F&E \end{bmatrix}. \] Owing to Lemma \ref{l.app} there is a continuous matrix function $H:\mathcal I\rightarrow\Real^{m\times m}$ with constant rank such that \begin{align}\label{index1} \ker \mathcal E_{[1]}=\{z\in\Real^{2m}: z_1=0, Hz_2=0 \}. \end{align} On the other hand we derive \begin{align*} \ker \mathcal E_{[1]}&=\{z\in\Real^{2m}: Ez_1=0, (E'+F)z_1+Ez_2=0 \}\\ &=\{z\in\Real^{2m}: Ez_1=0, (E'+F)z_1\in \im E, Ez_2=-(E'+F)z_1 \}. \end{align*} Introduce the subspace $\tilde S:= \{w\in\Real^m: (E'+F)w\in \im E\}$. It comes out that \begin{align*} \ker \mathcal E_{[1]} =\{z\in\Real^{2m}: z_1\in \ker E\cap \tilde S, Ez_2=-(E'+F)z_1 \}. \end{align*} Comparing with \eqref{index1} we obtain that the condition $\ker E\cap \tilde S=\{0\}$ must be valid, and hence \begin{align*} \ker \mathcal E_{[1]} =\{z\in\Real^{2m}: z_1=0, Ez_2=0 \},\quad \dim \ker \mathcal E_{[1]}=\dim \ker E
\end{align*} Then, in particular, $\rank E$ is constant and the projector functions $Q:=I-E^+E, W:=I-EE^+$ are as smooth as $E$. This leads to $WE'Q=-WEQ'=0$, thus $\ker E\cap \ker WF=\ker E\cap\tilde S=\{0\}$. Then the matrix function $E+WF$ remains nonsingular and $\im [E\,F]=\im [E\,WF]=\Real^m$. Now it is evident that the pair $\{E,F\}$ is pre-regular with $\theta=0$ and furthermore regular with index $\mu=1$. Conversely, we assume the pair $\{E,F\}$ to be regular with index $\mu=1$. Then it is also regular with tractability index one and the matrix function $G_1=E+(F-EP')Q:\mathcal I\rightarrow\Real^{m\times m}$ remains nonsingular, $P:=I-Q$. With \begin{align*} \mathcal T:=\begin{bmatrix} (I+QG_1^{-1}E'P)^{-1}&0\\ -PG_1^{-1}E'(I+QG_1^{-1}E'P)^{-1}&I \end{bmatrix} \begin{bmatrix} P&Q\\Q-PG_1^{-1}F&P \end{bmatrix} \begin{bmatrix} G_1^{-1}&0\\0&G_1^{-1} \end{bmatrix} \end{align*} we obtain that \begin{align*} \mathcal T\mathcal E_{[1]}=\begin{bmatrix} I&0\\0&P \end{bmatrix}, \end{align*} and we are done. \end{proof} \begin{proposition}\label{p.diff} If the differentiation index $\mu^{diff}$ is well-defined for the pair $\{E,F\}$, then it follows that \begin{description} \item[\textrm{(1)}] $\mathcal E_{[\mu^{diff}]}$ has constant rank $r_{[\mu^{diff}]}$. \item[\textrm{(2)}] The DAE has a solution for each $q\in \mathcal C^{(m)}(\mathcal I,\Real^m)$ and the necessary solvability condition in Remark \ref{r.a1} is satisfied, that is, \[ \rank [\mathcal E_{[k]} \,\mathcal F_{[k]}]= (k+1)m,\; k=0,\ldots, \mu^{diff}. \] \item[\textrm{(3)}] $\mathcal E_{[\mu^{diff}-1]}$ has constant rank $r_{[\mu^{diff}-1]}=r_{[\mu^{diff}]}-m$. \item[\textrm{(4)}] $S_{[\mu^{diff}-1]}=S_{[\mu^{diff}]}=S_{can}$. \item[\textrm{(5)}] $\ker E\cap S_{can}=\{0\}$. \end{description} \end{proposition} \begin{proof} Assertion {\textrm (1)} is already part of the definition.
{\textrm (2)}: The solvability assertion is evident and the necessary solvability condition is validated in Remark \ref{r.a1}. {\textrm (3)}: Owing to \cite[Lemma 3.6]{KuMe1996} one has $\corank \mathcal E_{[\mu^{diff}]}=\corank \mathcal E_{[\mu^{diff}-1]}$ yielding $(\mu^{diff}+1)m-r_{[\mu^{diff}]}=\mu^{diff}m-r_{[\mu^{diff}-1]}$, and hence $r_{[\mu^{diff}-1]}=r_{[\mu^{diff}]}-m$. {\textrm (4)}: Remark \ref{r.a2} provides the subspace dimensions $\dim S_{[\mu^{diff}-1]}=r_{[\mu^{diff}-1]}-(\mu^{diff}-1)m$ and $\dim S_{[\mu^{diff}]}=r_{[\mu^{diff}]}-\mu^{diff}m$. Regarding {\textrm (3)} this gives $\dim S_{[\mu^{diff}-1]}=\dim S_{[\mu^{diff}]}$. Due to the inclusion \eqref{1.subspace2} we arrive at $S_{[\mu^{diff}-1]}=S_{[\mu^{diff}]}$. It only remains to state that $S_{[\mu^{diff}]}=S_{can}$ by \cite[Theorem 2.4.8]{BCP89}. {\textrm (5)}: This is a straightforward consequence of Assertion (4) and Lemma \ref{l.app}. \end{proof} \begin{theorem}\label{t.regulardiff} Let the pair $\{E,F\}$ and the DAE \eqref{1.DAE} be regular on $\mathcal I$ with index $\mu$ and characteristic values $r$ and $\theta_0\geq\cdots\geq \theta_{\mu-2}>\theta_{\mu-1}=0$. Then the differentiation index is well-defined, $\mu^{diff}=\mu$, and, additionally, the matrix functions $\mathcal E_{[k]}$ have constant ranks and the subspaces $S_{[k]}$ have constant dimensions. \end{theorem} \begin{proof} This is an immediate consequence of Theorem \ref{th.ranks}. \end{proof} In contrast to our basic index notion in Section \ref{s.regular}, the differentiation index allows for certain rank changes, which is often particularly emphasized\footnote{There have been repeated scientific disputes about this.}, e.g., \cite{BCP89,KuMe2006}. In the special Examples \ref{e.1}, \ref{e.2}, \ref{e.7} in Section \ref{s.examples} below the rank of the leading matrix function $E(t)$ changes; nevertheless, the differentiation index is well-defined.
In Example \ref{e.5} $E(t)$ has constant rank, but $r_{[1]}$ varies; nevertheless, the DAE has differentiation index three on the entire given interval. However, it may well happen that a DAE having on $\mathcal I$ a well-defined differentiation index features a different differentiation index on a subinterval. We consider the points where the rank changes to be \emph{critical points} for good reason. \begin{remark}\label{r.generalform} The formation of the differentiation index approach is closely related to the search for a general form for solvable linear DAEs (in the sense of Definition \ref{d.solvableDAE}) with time-varying coefficients from the very beginning \cite{CamPet1983,Campbell87}. We quote \cite[Theorems 2.4.4 and 2.4.5]{BCP89} and a result from \cite{BergerIlchmann} for coefficients $E,F:\mathcal I\rightarrow \Real^{m\times m}$. \begin{itemize} \item Suppose that $E, F$ are real analytic. Then \eqref{1.DAE} is solvable if and only if it is equivalent to a system in standard canonical form (SCF) \eqref{1.SCF} using real analytic coordinate changes \cite[Theorem 2.4.4]{BCP89}. \item Suppose that the DAE \eqref{1.DAE} is solvable on the compact interval $\mathcal I$. Then it is equivalent to the DAE in Campbell canonical form\footnote{This appreciatory name is introduced in \cite{KuMe2024}.} \begin{align*} \begin{bmatrix} I_d&G\\0&N \end{bmatrix}z'+ \begin{bmatrix} 0&0\\0&I_{m-d} \end{bmatrix}z= \begin{bmatrix} g\\h \end{bmatrix} \end{align*} where $Nz_2'+z_2=h$ has only one solution for each function $h$.
Furthermore, there exists a countable family\footnote{As we understand it, this set is not necessarily countable, see Theorem \ref{t.M}.} of disjoint open intervals $\mathcal I^{\ell}$ such that $\cup \mathcal I^{\ell}$ is dense in $\mathcal I$ and on each $\mathcal I^{\ell}$, the system $Nz_2'+z_2=h$ is equivalent to one in standard canonical form of the form $Mw'+w=f$ with $M$ structurally nilpotent\footnote{A square matrix $A$ is structurally nilpotent if and only if there is a permutation matrix $P$ such that $PAP^{-1}$ is strictly triangular, see \cite[Theorem 2.3.6]{BCP89}.} \cite[Theorem 2.4.5]{BCP89}. \item Let $\mathcal I$ be an open interval. Then every system transferable into SCF with $\mathcal C^m$-coefficients is solvable. \end{itemize} \end{remark} Reviewing our examples in Section \ref{s.examples} we observe the following: If the pair $\{E,F\}$ has on the interval $\mathcal I$ the differentiation index $\mu^{diff}$, then on each subinterval $\mathcal I_{sub}\subseteq \mathcal I$ the differentiation index $\mu_{sub}^{diff}$ is also well-defined, which can, however, be smaller than $\mu^{diff}$; this has an impact on the input-output behavior of the system. Our next theorem captures the previous observations and generalizes them. Recall that we know from Proposition \ref{p.index1} that a DAE having differentiation index one is regular with index one in the sense of Definition \ref{d.2} and vice versa. We are interested in what happens in the higher-index cases. The following assertion says that, for any DAE with well-defined differentiation index $\mu^{diff}$ on a compact interval $\mathcal I$, the subset of regular points $\mathcal I_{reg}$ is dense in $\mathcal I$ with uniform degree of freedom, but there might be subintervals on which the DAE features a strictly smaller differentiation index than $\mu^{diff}$.
\begin{theorem}\label{t.diffinterval} Let the pair $\{E,F\}$ and the DAE \eqref{1.DAE} be given on the compact interval $\mathcal I$ and have there the differentiation index $\nu=\mu^{diff}\geq 2$. Then there is a partition of the interval $\mathcal I$ by a collection $\mathfrak S$ of open, non-overlapping subintervals\footnote{We apply Theorem \ref{t.M} according to which the set of rank discontinuity points can also be uncountable. This is why we use the name \emph{collection} in contrast to a countable family.} such that \begin{align*} \overline{\bigcup_{\ell \in \mathfrak S}\mathcal I^{\ell}}=\mathcal I,\quad \mathcal I^{\ell} \text{ open },\quad \mathcal I^{\ell_i}\cap \mathcal I^{\ell_j}=\emptyset \quad\text{for}\quad \ell_i\neq \ell_j,\quad \ell_i, \ell_j \in\mathfrak S, \end{align*} and the pair $\{E,F\}$ and the DAE \eqref{1.DAE} restricted to any subinterval $\mathcal I^{\ell}$ are regular in the sense of Definition \ref{d.2} with individual characteristics, \[\mu^{\ell}\leq \mu^{diff},\; r^{\ell},\; \theta_0^{\ell}\geq \cdots \geq \theta^{\ell}_{\mu^{\ell}-2}>\theta^{\ell}_{\mu^{\ell}-1}=0,\quad \ell \in \mathfrak S, \] but necessarily with uniform degree of freedom $d$, which means \begin{align*} d= d^{\ell}=r^{\ell}-\sum_{i=0}^{\mu^{\ell}-2}\theta_i^{\ell},\quad \ell \in\mathfrak S. \end{align*} Furthermore, it holds that $\mu^{diff}=\max\{\mu^{\ell}: \ell \in\mathfrak S\}$. \end{theorem} \begin{proof} Owing to \cite[Corollary 3.26]{KuMe2006}, which is based on Theorem \ref{t.M}, there is a decomposition of the compact interval $\mathcal I$ by open non-overlapping subintervals $\mathcal I^{\ell}$, $\ell \in\mathfrak S$, such that the interval $\mathcal I$ is the closure of $\cup_{ \ell \in\mathfrak S}\mathcal I^{\ell}$, and the DAE has a well-defined regular strangeness index on each subinterval $\mathcal I^{\ell}$.
In turn, by Theorem \ref{t.equivalence}, the DAE is regular on each subinterval $\mathcal I^{\ell}$ in the sense of Definition \ref{d.2} with individual index $\mu^{\ell}$ and characteristics \begin{align*} r^{\ell}, \quad\theta^{\ell}_0\geq \theta^{\ell}_1\geq\cdots\geq\theta^{\ell}_{\mu^{\ell}-2}>\theta^{\ell}_{\mu^{\ell}-1}=0, \quad d^{\ell}=r^{\ell}-\sum_{l=0}^{\mu^{\ell}-2}\theta^{\ell}_l. \end{align*} As in Proposition \ref{th.ranks} we set $\theta^{\ell}_j=0$ for $j>\mu^{\ell}-1$. Since the matrix function $\mathcal E_{[\nu]}$ is pointwise $1$-full and has constant rank $r_{[\nu]}$ on all of $\mathcal I$, owing to Proposition \ref{th.ranks} we have $\mu^{\ell} \leq \nu$ on each subinterval $\mathcal I^{\ell}$ and \begin{align*} r_{[\nu-1]}&=(\nu-1)m+ r^{\ell}-\theta_0^{\ell} -\theta_1^{\ell}-\cdots-\theta_{\nu-2}^{\ell}-\theta_{\nu-1}^{\ell} = (\nu-1)m+d^{\ell},\\ r_{[j]}&=j m+ d^{\ell},\quad j\geq \nu-1. \end{align*} Therefore, the values $d^{\ell}$ are equal on all subintervals, $d^{\ell}=d$. Denote $\kappa=\max\{\mu^{\ell}:{\ell}\in \mathfrak S\}$, $\kappa\leq\nu$, and observe that $r_{[\kappa]}=\kappa m+d$ on each subinterval $\mathcal I^{\ell}$, thus $r_{[\kappa]}\leq\kappa m+d$ on all $\mathcal I$. Owing to Proposition \ref{p.diff}, on all $\mathcal I$ it holds that $S_{[\nu]}=S_{can}$, $\dim S_{can}=\dim S_{[\nu]}=r_{[\nu]}-\nu m=d$. The inclusion $S_{[\kappa]}(t)\supseteq S_{can}(t)$, which is valid for all $t\in\mathcal I$, implies $r_{[\kappa]}-\kappa m \geq d$ on all $\mathcal I$. This leads to $r_{[\kappa]}=\kappa m+d$ on all $\mathcal I$ and $S_{[\kappa]}=S_{can}$ as well. Finally, $\mathcal E_{[\kappa]}$ has constant rank on the whole interval, and additionally, regarding again Proposition \ref{p.diff}, it follows that $\ker E\cap S_{[\kappa]}=\ker E\cap S_{can}=\{0\}$. This implies $\kappa=\nu$, since $\nu$ is the smallest such integer.
\end{proof} \begin{remark}\label{r.Chist} To a large extent, similar results are developed in the monographs \cite[Chapter 3]{Chist1996} and \cite[Chapter 2]{ChistShch} using operator theory. For the given DAE, represented as the first-order operator equation $\Lambda_{1}x=Ex'+Fx=q$, a left regularization operator $\Lambda_{\nu}=A_{\nu}\frac{d^{\nu}}{dt^{\nu}}+\cdots +A_{1}\frac{d^{1}}{dt^{1}}+A_{0}$ is constructed, if possible, by evaluating derivative arrays $\mathcal D_{{k}}$ as introduced in Subsection \ref{subs.Preliminaries} above and using generalized inverses, such that $ \Lambda_{\nu}\circ\Lambda_{1}x=x'+ Bx$. The minimal possible number $\nu$ is called the \emph{non-resolvedness index} (\cite[p.\ 85]{Chist1996}), and this is the same as the differentiation index. In \cite{Chist1996,ChistShch} the SCF is renamed the central canonical form. Instead of solvable systems in the sense of Definition \ref{d.solvableDAE}, DAEs that have a \emph{general Cauchy-type solution} now form the background, see \cite[p.\ 110]{Chist1996}. \end{remark} \subsection{Regular differentiation index by geometric approaches}\label{subs.qdiff} In concepts that assume certain continuous projector-valued functions, especially where geometric ideas play a role, one finds an understanding of the index that is somewhat restricted or qualified by additional rank conditions. In \cite{Griep92}, based on the rank theorem, a modified version of the differentiation index is given, which is closely related to the differential-geometric concepts in \cite{Reich,RaRh}. Indeed, the presentation and index definition in \cite{Griep92} is a more analytical rendering of the differential-geometric concept in \cite{Reich}, and this version fits well with the rest of our presentation. As before, we are dealing with linear DAEs. Basically, the derivative array functions $\mathcal E_{[k]}$ introduced in Subsection \ref{subs.Preliminaries} are assumed to feature constant ranks $r_{[k]}$ for all $k$.
Due to the rank theorem there are smooth pointwise nonsingular matrix functions $U_{[k]}, V_{[k]}:\mathcal I\rightarrow \Real^{(km+m)\times(km+m)}$ providing the factorization \begin{align*} \mathcal E_{[k]}= U_{[k]}\bar{P}_{[k]}V_{[k]},\quad \bar{P}_{[k]}:=\diag( I_{r_{[k]}},0,\ldots, 0 )\in \Real^{(km+m)\times(km+m)}. \end{align*} Then, letting $\bar{Q}_{[k]}=I-\bar{P}_{[k]}$ we form the projector functions \begin{align*} R_{[k]}&=U_{[k]}\bar{P}_{[k]}U_{[k]}^{-1}\quad \text{onto}\quad \im \mathcal E_{[k]},\\ W_{[k]}&=U_{[k]}\bar{Q}_{[k]}U_{[k]}^{-1}\quad \text{along}\quad \im \mathcal E_{[k]},\\ Q_{[k]}&=V_{[k]}^{-1}\bar{Q}_{[k]}V_{[k]}\quad \text{onto}\quad \ker\mathcal E_{[k]},\\ P_{[k]}&=V_{[k]}^{-1}\bar{P}_{[k]}V_{[k]}\quad \text{along}\quad \ker\mathcal E_{[k]}, \end{align*} and turn to the equation \begin{align}\label{G1} \mathcal E_{[k]}x'_{[k]}+\mathcal F_{[k]}x=q_{[k]}, \end{align} which is divided into the two parts, \begin{align} \mathcal E_{[k]}x'_{[k]}+R_{[k]}\mathcal F_{[k]}x&=R_{[k]}q_{[k]},\label{G2}\\ W_{[k]}\mathcal F_{[k]}x&=W_{[k]}q_{[k]}.\label{G3} \end{align} Applying the factorization one obtains the reformulation of \eqref{G2} to \begin{align}\label{G4} V_{[k]}^{-1}\bar{P}_{[k]}V_{[k]}x'_{[k]}+ V_{[k]}^{-1}\bar{P}_{[k]}U_{[k]}^{-1}(\mathcal F_{[k]}x-q_{[k]}) =0. \end{align} Regarding \eqref{G3} one has $\mathcal F_{[k]}x-q_{[k]}= U_{[k]}\bar{P}_{[k]}U_{[k]}^{-1} (\mathcal F_{[k]}x-q_{[k]})$, thus $V_{[k]}^{-1}U_{[k]}^{-1} (\mathcal F_{[k]}x-q_{[k]})= V_{[k]}^{-1}\bar{P}_{[k]}U_{[k]}^{-1} (\mathcal F_{[k]}x-q_{[k]})$, and \eqref{G4} becomes \begin{align} V_{[k]}^{-1}\bar{P}_{[k]}V_{[k]}x'_{[k]}= - V_{[k]}^{-1}U_{[k]}^{-1}(\mathcal F_{[k]}x-q_{[k]}),\nonumber\\ x'_{[k]}=V_{[k]}^{-1}\bar{Q}_{[k]}V_{[k]}x'_{[k]}- V_{[k]}^{-1}U_{[k]}^{-1}(\mathcal F_{[k]}x-q_{[k]}). 
\label{G5} \end{align} Coming from \eqref{G1} we deal now with the equation \begin{align}\label{G6} \mathcal E_{[k]}(t)y+\mathcal F_{[k]}(t)x=q_{[k]}(t),\quad t\in \mathcal I, \end{align} where $y\in\Real^{(k+1)m}$ and $x\in \Real^m$ are placeholders for $x'_{[k]}(t)$ and $x(t)$. Denote by $\tilde C_{[k]}$ the so-called \emph{constraint manifold of order} $k$, which contains exactly all pairs $(x,t)$ for which equation \eqref{G6} is solvable with respect to $y$, that is \begin{align*} \tilde C_{[k]} &=\{(x,t)\in \Real^m\times\mathcal I: W_{[k]}(t)(\mathcal F_{[k]}(t)x-q_{[k]}(t))=0\}\\ &=\{(x,t)\in \Real^m\times\mathcal I: W_{[k]}(t)\mathcal F_{[k]}(t)x =W_{[k]}(t)q_{[k]}(t)\}\\ &=\{(x,t)\in \Real^m\times\mathcal I: x\in C_{[k]}(t)\}, \end{align*} with $C_{[k]}(t)$ from \eqref{1.consitent1}, which represents the fibres at $t$ of the constraint manifold $\tilde C_{[k]}$. The inclusion chain \begin{align*} \tilde C_{[0]}\supseteq \tilde C_{[1]}\supseteq\cdots \supseteq\tilde C_{[k]} \end{align*} is obviously valid. For each $(x,t)\in \tilde C_{[k]}$ we form the manifold $M_{[k]}(x,t)\subseteq \Real^{(k+1)m}$ of all $y\in\Real^{m+km}$ solving the equation \eqref{G6}.
Regarding the representation \eqref{G5} we know that $M_{[k]}(x,t)$ is an affine subspace parallel to $\ker \mathcal E_{[k]}(t)$ and it depends linearly on $x$: \begin{align} M_{[k]}(x,t)&=\{y\in\Real^{m+km}: y=z- (U_{[k]}V_{[k]})^{-1}(t)(\mathcal F_{[k]}(t)x-q_{[k]}(t)), z\in\ker \mathcal E_{[k]}(t)\}\nonumber\\ &=\ker \mathcal E_{[k]}(t)+ \{- (U_{[k]}V_{[k]})^{-1}(t)(\mathcal F_{[k]}(t)x-q_{[k]}(t))\}.\label{G7} \end{align} Using the truncation matrices \begin{align*} \hat{T}_{[k]}&=[I_{km}\; 0]\in\Real^{km\times(m+km)},\\ {T}_{[k]}&= \hat{T}_{[1]}\cdots \hat{T}_{[k]}=[I_{m}\; 0]\in\Real^{m\times(m+km)}, \end{align*} the inclusions \begin{align*} M_{[0]}(x,t)&\supseteq T_{[1]}M_{[1]}(x,t)\supseteq\cdots\supseteq T_{[k]}M_{[k]}(x,t),\\ \ker E(t)= \ker \mathcal E_{[0]}(t)&\supseteq T_{[1]}\ker \mathcal E_{[1]}(t)\supseteq\cdots\supseteq T_{[k]}\ker \mathcal E_{[k]}(t), \end{align*} are provided in \cite{Griep92}. Each DAE solution proceeds within the constraint manifolds of order $k\geq 0$, and we have \begin{align*} x(t)\in C_{[k]}(t),\quad x'_{[k]}(t)\in M_{[k]}(x(t),t),\quad t\in \mathcal I,\quad k\geq 0. \end{align*} The corresponding index definition from \cite[Section 3]{Griep92} reads: \begin{definition}\label{d.G1} The equation \eqref{1.DAE} is called a DAE with \emph{regular differentiation index $\nu$} if all $\mathcal E_{[j]}$ feature constant ranks, $T_{[\nu]} M_{[\nu]}(x,t)$ is a singleton for all $(x,t)\in \tilde C_{[\nu]}$, and $\nu$ is the smallest integer with these properties. We then indicate the regular differentiation index by $\nu=:\mu^{rdiff}$. \end{definition} From representation \eqref{G7} it follows that $T_{[\nu]} M_{[\nu]}(x,t)$ is a singleton exactly if $T_{[\nu]} \ker \mathcal E_{[\nu]}=\{0\}$, thus $T_{[\nu]}Q_{[\nu]}=0$. 
\medskip With the resulting vector field $v(x,t):=- T_{[\nu]}(U_{[\nu]}V_{[\nu]})^{-1}(t)(\mathcal F_{[\nu]}(t)x-q_{[\nu]}(t))$, the DAE \eqref{1.DAE} having the regular differentiation index $\nu$ may be seen as a vector field on a manifold, that is, \begin{align*} x'(t)=v(x(t),t),\quad (x(t),t)\in \tilde C_{[\nu]}. \end{align*} It must be added here that in early works like \cite{Griep92,Reich} no special epithet was given to the index term. It was a matter of specifying the idea formulated in \cite{Gear88} that the index of a DAE is determined as the smallest number of differentiations necessary to filter out from the inflated system a well-defined explicit ODE. In particular, in \cite{Griep92} one speaks only of an \emph{index-$\nu$ DAE}, without the epithet \emph{regular}, but in \cite{Reich} regularity is central and the characterization of the DAE as \emph{regular} is particularly emphasized, and so we added here the label \emph{regular differentiation} to distinguish it from other notions, specifically also from the differentiation index in Subsection \ref{subs.diff}. Based on closely related index concepts, some variants of index transformation\footnote{Also called index reduction.} are discussed in \cite{Griep92,Reich}, i.e., for a given DAE, a new DAE with an index lower by one is constructed. We pick out the respective basic idea from \cite{Reich,Griep92}, which is in turn closely related to the geometric reduction in \cite{RaRh}. Let the DAE \eqref{1.DAE} be given with a pair $\{E,F\}$ featuring the regular differentiation index $\mu^{rdiff}=\nu$. Then $\{E,F\}$ is pre-regular with $r=r_{[0]}$ and $\theta= m+r-r_{[1]}$, see Lemma \ref{l.R1}. Let $W$ and $P_S$ be the ortho-projector functions with $\ker W=\im E$ and $\im P_S=\ker WF=S$. We represent $P_{S}=I-(WF)^+WF$. Differentiating the derivative-free part $WFx-Wq=0$ leads to $(WF)'x-(Wq)'=-WFx'$, and in turn to $x'=P_{S}x'+(WF)^+WFx'=P_{S}x'-(WF)^+((WF)'x-(Wq)')$.
Inserting this into the DAE \eqref{1.DAE} yields \begin{align*} EP_Sx'+(F-E(WF)^+(WF)')x=q-E(WF)^+(Wq)'. \end{align*} Regarding that $P_{S}'=-{(WF)^{+}}'WF-(WF)^+(WF)'$ and $WFx=Wq$ we arrive at \begin{align}\label{R1} EP_Sx'+(F+EP'_S)x=q-E((WF)^+Wq)'. \end{align} We quote \cite[Theorem 12]{Griep92}: The transfer from the DAE \eqref{1.DAE} to the DAE \eqref{R1} reduces the (regular differentiation) index by 1. Next we show the close connection to the basic reduction step described in Section \ref{s.regular}. By means of a smooth basis $C$ of the subspace $S$ we represent the above projector function $P_{S}$ by $P_S=CC^+$, \\$C^+=(C^*C)^{-1}C^*$, and rewrite \eqref{R1} as \begin{align}\label{R2} EC(C^+x)'+(F+EC'C^+)x=q-E((WF)^+Wq)'. \end{align} Letting $y=C^+x$, so that $x=P_Sx=CC^+x=Cy$, we obtain \begin{align*} ECy'+(FC+EC')y=q-E((WF)^+Wq)', \end{align*} and finally, using a basis $Y$ of $\im E$ as in Section \ref{s.regular}, \begin{align}\label{R3} Y^*ECy'+Y^*(FC+EC')y=Y^*(q-E((WF)^+Wq)'), \end{align} which illuminates the consistency of \eqref{R1} with the basic reduction step in \cite{RaRh} and Section \ref{s.regular}. \begin{theorem}\label{t.rdiff} The following assertions are valid: \begin{description} \item[\textrm{(1)}] If the DAE \eqref{1.DAE} is regular with index $\mu\geq 1$ in the sense of Definition \ref{d.2} then it also has the regular differentiation index $\mu^{rdiff}=\mu$, and vice versa, and the characteristic values are related by \begin{align*} r_{[0]}&= r,\\ r_{[1]}&=m+ r-\theta_0,\\ r_{[2]}&=2m+ r-\theta_0 -\theta_1,\\ \cdots\\ r_{[\mu-2]}&=(\mu-2)m+ r-\theta_0 -\theta_1-\ldots-\theta_{\mu-3},\\ r_{[\mu-1]}&=(\mu-1)m+ r-\theta_0-\theta_1-\ldots-\theta_{\mu-2},\\ r_{[j]}&=j m+ d,\quad j\geq \mu-1. \end{align*} \item[\textrm{(2)}] If the DAE has regular differentiation index $\mu^{rdiff}$ then it also has the differentiation index $\mu^{diff}=\mu^{rdiff}$.
\item[\textrm{(3)}] If the DAE has regular differentiation index $\mu^{rdiff}$ on the interval $\mathcal I$ then, on each subinterval $\mathcal I_{sub}\subset\mathcal I$, it shows the same regular differentiation index $\mu^{rdiff}$. \end{description} \end{theorem} \begin{proof} {\textrm (1):} Owing to Proposition \ref{p.index1} there is nothing to do in the index-$1$ case. If the DAE is regular with index $\mu\geq 2$ in the sense of Definition \ref{d.2} then it has regular differentiation index $\mu^{rdiff}=\mu$ as an immediate consequence of Proposition \ref{th.ranks} and Lemma \ref{l.app}. Contrariwise, let the DAE \eqref{1.DAE} have regular differentiation index $\mu^{rdiff}=\nu\geq 2$. Then the pair $\{E,F\}$ is pre-regular, and by \cite[Theorem 12]{Griep92} the DAE \begin{align}\label{R4} EP_Sx'+(F+EP'_S)x=p,\quad p:=q-E((WF)^+Wq)', \end{align} has regular differentiation index $\nu-1$. Using smooth bases $Y, Z, C$, and $D$ of $\im E, (\im E)^{\perp}, \ker Z^*F$, and $ (\ker Z^*F)^{\perp}$, respectively, we form the pointwise nonsingular matrix functions \begin{align*} K=\begin{bmatrix} C&D \end{bmatrix}, \; L=\begin{bmatrix} Y^*\\(Z^*FD)^{-1}Z^* \end{bmatrix}, \end{align*} scale the DAE \eqref{R4} by $L$ and transform $x=K\tilde x=: C\tilde x_C+D\tilde x_D$, which leads to an equivalent DAE of the form $\tilde E\tilde x'+\tilde F\tilde x=Lp$ with coefficients \begin{align*} \tilde E&=LEP_{S}K=\begin{bmatrix} Y^*EC&0\\0&0 \end{bmatrix},\\ \tilde F&=L(F+EP_{S}')K+LEP_{S}K'=LFK+LE(P_{S}K)'=\begin{bmatrix} Y^*FC+Y^*EC'&Y^*FD\\0&I \end{bmatrix}. \end{align*} The resulting DAE reads in detail \begin{align} Y^*EC\tilde x'_C+(Y^*FC+Y^*EC')\tilde x_C +Y^*FD\tilde x_D&=Y^*p, \label{R5}\\ \tilde x_D&=(Z^*FD)^{-1}Z^*q.\label{R6} \end{align} As a DAE featuring regular differentiation index $\nu-1$, the DAE \eqref{R5}, \eqref{R6} is pre-regular. 
Observe that \begin{align*} \ker \tilde E\cap\ker \tilde W\tilde F=\left\{\begin{bmatrix} u\\v \end{bmatrix}\in\Real^m: u\in \ker Y^*EC, (Y^*FC+Y^*EC')u\in \im Y^*EC, v=0\right\}, \end{align*} which allows us to restrict the further investigation to the inherent part \begin{align}\label{R7} \underbrace{Y^*EC}_{=E_1}\tilde x'_C+\underbrace{(Y^*FC+Y^*EC')}_{=F_1}\tilde x_C &=Y^*p-Y^*FD (Z^*FD)^{-1}Z^*q, \end{align} which also has regular differentiation index $\nu-1$, and $\dim \ker E_1\cap \ker Z^*_1F_1= \dim \ker \tilde E\cap\ker \tilde W\tilde F$. If $\nu=2$ we are done. If $\nu>2$ then we repeat the whole procedure, providing in this way a basic sequence of pairs $\{E_k,F_k\}$. {\textrm (2):} Lemma \ref{l.app} makes this evident. {\textrm (3):} This is given by the construction. \end{proof} We observe that the regular differentiation index is well-defined if and only if the (standard) differentiation index is well-defined, and, additionally, all preceding $\mathcal E_{[j]}$ have constant ranks. \begin{remark}\label{r.diff} If $\{E, F\}$ has differentiation index $\mu^{diff}=:\nu$, then the matrix function $\mathcal E_{[\nu]}$ has constant rank, and, due to Lemma \ref{l.app}, it holds that $T_{[\nu]}Q_{[\nu]}=0$ such that the above formula \eqref{G5} immediately provides an underlying ODE in the form \begin{align*} x'= T_{[\nu]}x'_{[\nu]}=- T_{[\nu]}V_{[\nu]}^{-1}U_{[\nu]}^{-1}(\mathcal F_{[\nu]}x-q_{[\nu]}), \end{align*} without the predecessors $\mathcal E_{[j]}$ having to have constant rank and without the background of geometric reduction. \end{remark} \subsection{Projector based differentiation index for initialization}\label{subs.pbdiff} The index concept developed in \cite{EstLam2016Decoupling,EstLamNewApproach2018,EstLamDecoupling2020} has its origin in the computation of consistent initial values and was initially intended as a reinterpretation of the differentiation index.
Although it has therefore not yet had a name of its own, in this article we will call this index concept the \textit{projector based differentiation index}. Roughly speaking, the projector based differentiation index is reached as soon as sufficiently many (hidden) constraints have been found by differentiation. For an appropriate description, orthogonal projectors are used to decouple different components of $x$. \medskip Let $P=E^{+}E$, $Q=I-P$, and $W_{0}=I-EE^{+}$ be the orthoprojector functions onto $(\ker E)^{\bot}$, $\ker E$, and $(\im E)^{\bot}$. Given that $E(t)$ has constant rank on $\mathcal I$, these projector functions are as smooth as $E$ is itself. Then we decompose the unknown $x=Px+Qx$ and rewrite the DAE as proposed in \cite{GM86}, \begin{align}\label{1.modDAE} E(Px)'+(F-EP')x=q. \end{align} All solutions of the homogeneous DAE with $q=0$ reside within the time-varying subspace of $\Real^{m}$ \begin{align}\label{1.S0} S_{0}=\{z\in \Real^{m}:Fz\in \im E\}=\ker W_{0}F =S_{[0]}. \end{align} The DAE \eqref{1.modDAE} splits into the following two equations: \begin{align} P(Px)'+E^{+}(F-EP')(Px+Qx)&=E^{+}q,\label{1.ODE}\\ W_{0}FQx&=-W_{0}FPx+W_{0}q.\label{1.alg} \end{align} Obviously, if the second equation \eqref{1.alg} uniquely determines $Qx$ in terms of $Px$ and $q$, then replacing $Qx$ in \eqref{1.ODE} by the expression resulting from \eqref{1.alg} yields an explicit ODE for $Px$. This happens precisely under the condition $S_{0}\cap\ker E=\{0\}$, which, as is well known, characterizes regular index-1 DAEs \cite{GM86,CRR}. For higher-index DAEs this condition is no longer met. In the context of the \textit{projector based differentiation index} one aims to extract the needed information concerning $Qx$ from the inflated system. Thereby, the further matrix function $\mathcal B_{[k]}:\mathcal I\rightarrow \Real^{(mk+m)\times(mk+m)}$, \begin{align}\label{1.Bk} \mathcal{B}_{[k]} = \begin{bmatrix} P & 0 \\ \mathcal F_{[k-1]}& \mathcal E_{[k-1]} \end{bmatrix}
\end{align} plays its role. For $k\geq 1$ we evaluate the inflated system \begin{align*} \mathcal E_{[k-1]}x'_{[k-1]}+ \mathcal F_{[k-1]}x= q_{[k-1]}. \end{align*} Introducing the variable $\omega=Qx$ and decomposing $x=Qx+Px=\omega+Px$ yields the system \begin{align*} P\omega&=0,\\ \mathcal E_{[k-1]} x'_{[k-1]}+\mathcal F_{[k-1]}\omega &= q_{[k-1]}-\mathcal F_{[k-1]}Px, \end{align*} that is, \begin{align}\label{1.B} \mathcal B_{[k]}\begin{bmatrix} \omega\\x'_{[k-1]} \end{bmatrix}= \begin{bmatrix} 0\\q_{[k-1]}-\mathcal F_{[k-1]}Px \end{bmatrix}. \end{align} If $\mathcal B_{[k]}$ is smoothly $1$-full, \begin{align*} \mathcal T_{\mathcal B} \mathcal B_{[k]}=\begin{bmatrix} I&\\ 0&H_{\mathcal B} \end{bmatrix}, \end{align*} then the first block-row of system \eqref{1.B} multiplied by $\mathcal T_{\mathcal B}$ reads \begin{align*} \omega=\left(\mathcal T_{\mathcal B}\begin{bmatrix} 0\\q_{[k-1]}-\mathcal F_{[k-1]}Px \end{bmatrix} \right)_{1}, \end{align*} which is precisely the representation of $Qx=\omega$ in terms of $Px$ and $q_{[k-1]}$ that we are looking for. Inserting this expression into equation \eqref{1.ODE} leads to the explicit ODE for the component $Px$, \begin{align*} (Px)'-P'Px+E^{+}(F-EP')(Px+\omega)=E^{+}q, \end{align*} and, supposing a consistent initial value for $Px$, eventually to a solution $x=Px+Qx$ of the DAE. \begin{remark} Recall that the regular differentiation index focuses on the 1-fullness condition of $\mathcal E_{[k]}$ for its first $m$ rows corresponding to $x'$. In contrast, we want to emphasize that the projector based differentiation index focuses on the 1-fullness condition of $\mathcal B_{[k]}$ for its first $m$ rows, which correspond to $x$.
\end{remark} To get an idea about the rank of $\mathcal B_{[k]}(t)$ we point out its connection to the matrix $\mathcal D_{[k]}(t)$ defined in \eqref{1.Dk}: \begin{align} \ker \mathcal B_{[k]}(t) &= \ker \mathcal D_{[k]}(t) \nonumber \\ &=\left\{\begin{bmatrix} z\\w \end{bmatrix}\in \Real^{m}\times\Real^{mk}: z\in\ker E\cap S_{[k-1]}, \mathcal E_{[k-1]}^{+}\mathcal E_{[k-1]}w=-\mathcal E_{[k-1]}^{+}\mathcal F_{[k-1]}z \right\},\label{1.kerBk} \end{align} for the subspace $S_{[k-1]}$ defined in \eqref{1.subspace1}, and consequently, \begin{align} \rank \mathcal B_{[k]}&= m-\dim(\ker E\cap S_{[k-1]})+r_{[k-1]}. \label{1.rankBk} \end{align} In addition, regarding the inclusions \eqref{1.subspace2}, we recognize immediately the inclusions \begin{align*} S_{0}\cap\ker E=S_{[0]}\cap\ker E\supseteq S_{[1]}\cap\ker E\supseteq\cdots\supseteq S_{[k-1]}\cap\ker E\supseteq S_{[k]}\cap\ker E. \end{align*} With the projector $\mathcal W_{[k]}$ from \eqref{eq:WGR} and \begin{align*} \rho_{k}:=\rank\begin{bmatrix} P\\\mathcal W_{[k]}\mathcal F_{[k]} \end{bmatrix}=\rank\begin{bmatrix} E\\\mathcal W_{[k]}\mathcal F_{[k]} \end{bmatrix}= m-\dim (\ker E\cap S_{[k]}), \end{align*} these inclusions obviously yield the inequalities \begin{align*} \rho_{0}\leq \rho_{1}\leq\cdots\leq \rho_{k-1}\leq\rho_{k}\leq m. \end{align*} \begin{definition}[\cite{EstLam2016Decoupling}]\label{d.pbdiffA} If there is a number $\nu\in\Natu$ such that the matrix functions $\mathcal E_{[0]},\ldots,\mathcal E_{[\nu-1]}$ have constant ranks, the rank functions $\rho_{0},\ldots,\rho_{\nu-1}$ are constant, too, and \begin{align*} \rho_{\nu-2}<\rho_{\nu-1}=m, \end{align*} then $\nu$ is called the \emph{projector based differentiation index} of the pair $\{E,F\}$ and the DAE \eqref{1.DAE}, respectively. We indicate it by $\nu=:\mu^{pbdiff}$.
\end{definition} Having in mind the known index-1 criterion $S_{[0]}\cap\ker E=\{0\}$, we recognize the condition $S_{[\mu^{pbdiff}-1]}\cap\ker E=\{0\}$ as characterizing the projector based differentiation index in general. If the index $\mu^{pbdiff}$ is well-defined, then owing to Lemma \ref{l.app}, the matrix function $\mathcal B_{[\mu^{pbdiff}]}$ is smoothly $1$-full and has constant rank $r_{\mathcal B}=m+r_{[\mu^{pbdiff}-1]}$. Additionally, then $E$ and $\mathcal B_{[i]}$, $i=1,\ldots,\mu^{pbdiff}$, have constant ranks, too. Conversely, if there is a $\nu\in\Natu$ such that $E$ and $\mathcal B_{[i]}$, $i=1,\ldots,\nu$, have constant ranks, $\mathcal B_{[\nu]}$ is $1$-full, and $\nu$ is the smallest such integer, then the rank functions $\rho_i=\rank \mathcal B_{[i+1]}-r_{[i]}$, $i=0,\ldots,\nu-1$, are constant, and $\rho_{\nu-2}<\rho_{\nu-1}=m$. This makes it obvious that the following alternative definition, which is equivalent to Definition \ref{d.pbdiffA}, may also be considered: \begin{definition}\label{d.pbdiffB} If there is a $\nu\in\Natu$ such that the matrix functions $E$, $\mathcal B_{[1]},\ldots,\mathcal B_{[\nu]}$ have constant ranks $r^{\mathcal B}_{[0]}:=r, r^{\mathcal B}_{[1]},\ldots,r^{\mathcal B}_{[\nu]}$, respectively, and $\nu$ is the smallest number for which the matrix function $\mathcal B_{[\nu]}$ is smoothly 1-full, then $\nu$ is called the \emph{projector based differentiation index} of the pair $\{E,F\}$ and the DAE \eqref{1.DAE}, respectively. Again, we use the notation $\nu=:\mu^{pbdiff}$. \end{definition} Regarding again Lemma \ref{l.app}, this means precisely that \begin{align*} r^{\mathcal B}_{[i]}< r_{[i-1]}+m, \; i=1,\ldots, \nu-1,\; r^{\mathcal B}_{[\nu]}= r_{[\nu-1]}+m.
\end{align*} \begin{remark} With regard to the computation of consistent initial values, if the index is $\mu^{pbdiff}$, then for $k=\mu^{pbdiff}$, according to \cite{EstLamNewApproach2018}, a uniquely determined consistent initial value $x_0 \in S_{[k-1]}(t_0)$ can be computed as the solution of \begin{eqnarray*} \mbox{ minimize } && \left\| P(t_0) (x_0 - \alpha)\right\|_2 \\ \mbox{ subject to } && \mathcal W_{[k-1]} \mathcal F_{[k-1]}(t_0)x_0 = \mathcal W_{[k-1]} q_{[k-1]}(t_0), \label{eq:RegSysConsInit} \end{eqnarray*} for a given guess $\alpha$. This solution can also be computed as the solution $x_0$ of the optimization problem \begin{eqnarray*} \mbox{ minimize } && \left\| P(t_0) (x_0 - \alpha)\right\|_2 \\ \mbox{ subject to } &&\mathcal F_{[k-1]}(t_0)x_0 + \mathcal E_{[k-1]}(t_0)w = q_{[k-1]}(t_0) \label{eq:OptimizationConsInit} \end{eqnarray*} for a vector $w \in \Real^{km}$ that is not uniquely determined, cf.\ \eqref{1.kerBk} and the solvability results from \cite{EstLamNewApproach2018}. There, the convenience of considering the orthogonal projector $P$ instead of the matrix $E$ for the objective function is discussed, which led to the consideration of $\mathcal B_{[k]}$ instead of $\mathcal D_{[k]}$. Moreover, for $k> \mu^{pbdiff}$, the last optimization problem permits the additional computation of consistent Taylor coefficients as parts of $w$. \end{remark} \begin{proposition}\label{p.pbdiff} The projector based differentiation index remains invariant under sufficiently smooth equivalence transformations. \end{proposition} \begin{proof} Owing to Proposition \ref{p.equivalenc} and \eqref{1.rankBk}, the rank functions $\rank \mathcal B_{[k]}$ and $\rho_{k}$ are invariant under sufficiently smooth equivalence transformations, which makes the assertion evident. \end{proof} We emphasize again that the \textit{projector based differentiation index} focuses on a 1-full condition for a different matrix than the \textit{regular differentiation index}.
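For constant coefficients, the $1$-fullness of $\mathcal B_{[k]}$ with respect to its first $m$ rows can be tested directly via a kernel basis. A minimal NumPy sketch for an ad hoc index-2 pair (the matrices are chosen for illustration only, not taken from the cited works):

```python
import numpy as np

E = np.array([[0., 1.], [0., 0.]])  # ad hoc index-2 pair: x2' + x1 = q1, x2 = q2
F = np.eye(2)
m = 2
P = np.linalg.pinv(E) @ E           # orthoprojector onto (ker E)^perp

def inflated(E, F, k):
    # constant-coefficient E_[k] and F_[k] = (F, 0, ..., 0)^T
    mm = E.shape[0]
    n = (k + 1) * mm
    Ek = np.zeros((n, n))
    Fk = np.zeros((n, mm))
    Fk[:mm] = F
    for j in range(k + 1):
        Ek[j*mm:(j+1)*mm, j*mm:(j+1)*mm] = E
        if j >= 1:
            Ek[j*mm:(j+1)*mm, (j-1)*mm:j*mm] = F
    return Ek, Fk

def B(k):
    # B_[k] = [[P, 0], [F_[k-1], E_[k-1]]] of size (km+m) x (km+m)
    Ek1, Fk1 = inflated(E, F, k - 1)
    top = np.hstack([P, np.zeros((m, k * m))])
    return np.vstack([top, np.hstack([Fk1, Ek1])])

def is_one_full(A, m, tol=1e-10):
    # 1-full iff every kernel vector vanishes in its first m entries
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return bool(np.all(np.abs(Vt[rank:].T[:m]) < tol))

print([is_one_full(B(k), m) for k in (1, 2)])  # [False, True]: mu^pbdiff = 2
```

Here $\mathcal B_{[1]}$ fails to be $1$-full while $\mathcal B_{[2]}$ is, so $\mu^{pbdiff}=2$, in agreement with the regular differentiation index of this pair.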
However, if the pair $\{E,F\}$ is regular on the interval $\mathcal I$, we can show that they turn out to be equivalent. \begin{theorem}\label{t.projector_based_diff} Let the pair $\{E,F\}$ be regular on $\mathcal I$ with index $\mu $ and characteristic values $r$ and \\ $\theta_0\geq\cdots\geq \theta_{\mu-2}>\theta_{\mu-1}=0$. Then the projector based differentiation index is well-defined and coincides with the regular differentiation index, i.e., $\mu^{rdiff}=\mu=\mu^{pbdiff}$. Moreover, \[ \rank \mathcal B_{[k]} = \rank \mathcal D_{[k]} = \rank \mathcal E_{[k]}=km+r-\sum_{i=0}^{k-1}\theta_i,\quad k\geq 1, \] \begin{align*} \rho_k&=m-\dim (\ker E\cap S_{[k]})=m-\theta_k,\quad k=0,\ldots ,\mu-1, \end{align*} and in particular \begin{align*} \rho_{\mu-1}&=m, \quad \theta_{\mu-1} = 0, \quad \ker E\cap S_{[\mu-1]}=\left\{0\right\}. \end{align*} \end{theorem} \begin{proof} This follows directly from $\ker \mathcal B_{[k]} = \ker \mathcal D_{[k]}$, the definition of $\rho_k$ and Proposition \ref{th.ranks}. \end{proof} A closer look at the matrix \begin{eqnarray} \begin{bmatrix} {\mathcal F}_{[k-1]} & {\mathcal E}_{[k-1]} \end{bmatrix} \label{eq:DefGk} \end{eqnarray} and the orthogonal projectors $Q$ and $P$ permits an orthogonal decoupling of the different components of $x$ with further orthogonal projectors\footnote{That is why in this paper we have chosen the label \textit{projector based differentiation index} for this DAE approach.}. We briefly summarize these results from \cite{EstLam2016Decoupling} and \cite{EstLamDecoupling2020}. \begin{itemize} \item To decouple the $Q$-component for $k=1, \ldots, \mu$, the projector $T_{k}$ is defined as the orthogonal projector onto \[ \ker E \cap S_{[k-1]} = \ker \begin{bmatrix} P \\ \mathcal W_{[k-1]} \mathcal F_{[k-1]} \end{bmatrix} =: \im T_{k}. \] Consequently, $T_{k}x$ corresponds to the part of the $Q$-component that, after $k-1$ differentiations, cannot yet be represented as a function of $(Px,t)$.
\item To characterize the different parts of the $P$-component, the matrix $\mathcal F_{[k-1]}$ is further split into $\mathcal F_{[k-1]} P$ and $\mathcal F_{[k-1]}Q$, such that \[ \begin{bmatrix} \mathcal F_{[k-1]} P && \mathcal F_{[k-1]} Q && \mathcal E_{[k-1]} \end{bmatrix} \] is considered instead of \eqref{eq:DefGk}. With this decoupling, the orthogonal projector $\mathcal V_{[k-1]}$ with \[ \ker \mathcal V_{[k-1]}= \im \begin{bmatrix} \mathcal F_{[k-1]} Q &\quad & \mathcal E_{[k-1]}\end{bmatrix} \] is defined, finally permitting the definition of the orthogonal projector $V_k$ onto \[ \ker \begin{bmatrix} Q \\ \mathcal V_{[k-1]} \mathcal F_{[k-1]} \end{bmatrix} =: \im V_k. \] By definition, $V_k x$ represents the part of the $P$-component that is not determined by the constraints resulting after $k-1$ differentiations, such that $d=\rank V_{\mu} = \rank V_{\mu-1} $ holds. \end{itemize} To determine the rank of $T_k$ and $V_k$ we will use the fact that $\rank \mathcal W_{[k-1]} \mathcal F_{[k-1]} $ is the number of explicit and hidden constraints resulting after $k-1$ differentiations, and that with \eqref{eq:WRGL}, \eqref{eq:Represrank} and Proposition \ref{th.ranks} it holds \begin{align} \rank \mathcal W_{[k-1]} \mathcal F_{[k-1]} = \rank \mathcal W_{[k-1]} =m -\dim S_{[k-1]} = m-r + \sum_{i=0}^{k-2} \theta_i. \label{eq:rankW_[k-1]} \end{align} \begin{proposition} \label{rankTkVk} For every regular pair $\{E, F\}$ on $\mathcal I$ with index $\mu$ it holds \[ \rank T_{k} = \theta_{k-1}, \quad \rank V_k = r-\sum_{i=0}^{k-1} \theta_i. \] \end{proposition} \begin{proof} For $T_k$, the assertion follows directly from the definition.
For $ \rank V_k$, we use \eqref{eq:rankW_[k-1]} to obtain \begin{align*} r- \sum_{i=0}^{k-2} \theta_i &= \dim \ker \mathcal W_{[k-1]} \mathcal F_{[k-1]} = \dim \ker \begin{bmatrix} I_m& 0 \\ 0 & \mathcal W_{[k-1]} \mathcal F_{[k-1]} \end{bmatrix} \begin{bmatrix} Q & P \\ P & Q \end{bmatrix} \\ &= \dim \ker \begin{bmatrix} Q & P \\ \mathcal W_{[k-1]} \mathcal F_{[k-1]}P & \mathcal W_{[k-1]} \mathcal F_{[k-1]}Q \end{bmatrix} \end{align*} and take a closer look at the nullspace of the last matrix \begin{align*} & \left\{\begin{bmatrix} z_1\\z_2 \end{bmatrix}\in \Real^{2m}: \quad Qz_1=0, \quad Pz_2=0, \quad \mathcal W_{[k-1]} \mathcal F_{[k-1]}Pz_1 +\mathcal W_{[k-1]} \mathcal F_{[k-1]}Qz_2=0 \right\}\nonumber\\ &= \left\{\begin{bmatrix} z_1\\z_2 \end{bmatrix}\in \Real^{2m}: \quad \begin{matrix} z_1 \in \ker Q \cap \ker \mathcal V_{[k-1]} \mathcal F_{[k-1]}, \\ Pz_2=0, \quad \mathcal W_{[k-1]} \mathcal F_{[k-1]}z_2 =- \mathcal W_{[k-1]} \mathcal F_{[k-1]}Pz_1 \end{matrix} \right\} . \end{align*} Consequently, it holds \[ \dim \ker \mathcal W_{[k-1]} \mathcal F_{[k-1]} = \dim \left\{\begin{bmatrix} z_1\\z_2 \end{bmatrix}\in \Real^{2m}: \quad \begin{matrix} z_1 \in \ker Q \cap \ker \mathcal V_{[k-1]} \mathcal F_{[k-1]} = \im V_k, \\ z_2 \in \ker P \cap \ker \mathcal W_{[k-1]} \mathcal F_{[k-1]}=\im T_k \end{matrix} \right\}, \] leading to \[ r- \sum_{i=0}^{k-2} \theta_i = \rank V_k + \rank T_k, \] i.e. \[ \rank V_k = r- \sum_{i=0}^{k-2} \theta_i - \rank T_k =r- \sum_{i=0}^{k-1} \theta_i. \] \end{proof} This means that the $m-r + \sum_{i=0}^{k-2} \theta_i$ linearly independent constraints from \[ \mathcal W_{[k-1]} (t)\mathcal F_{[k-1]}(t)x = \mathcal W_{[k-1]}(t) q_{[k-1]}(t) \] uniquely determine $(I-T_k-V_k)x$ as a function of $(V_kx,t)$.
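These rank formulas can be verified numerically. The NumPy sketch below uses an ad hoc constant-coefficient pair with $m=3$, $r=2$, $\mu=2$, $\theta_0=1$, $\theta_1=0$ (hence $d=1$), namely the DAE $x_1'=q_1$, $x_3'+x_2=q_2$, $x_3=q_3$, and builds $T_k$ and $V_k$ from orthoprojectors as defined above:

```python
import numpy as np

# Ad hoc pair with m = 3:  x1' = q1,  x3' + x2 = q2,  x3 = q3,
# regular with mu = 2, r = 2, theta_0 = 1, theta_1 = 0, d = 1.
E = np.array([[1., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
F = np.diag([0., 1., 1.])
m = 3
P = np.linalg.pinv(E) @ E            # orthoprojector onto (ker E)^perp
Q = np.eye(m) - P                    # orthoprojector onto ker E

def inflated(k):
    # constant-coefficient E_[k] and F_[k] = (F, 0, ..., 0)^T
    n = (k + 1) * m
    Ek = np.zeros((n, n))
    Fk = np.zeros((n, m))
    Fk[:m] = F
    for j in range(k + 1):
        Ek[j*m:(j+1)*m, j*m:(j+1)*m] = E
        if j >= 1:
            Ek[j*m:(j+1)*m, (j-1)*m:j*m] = F
    return Ek, Fk

def proj_ker(A, tol=1e-10):          # orthoprojector onto ker A
    _, s, Vt = np.linalg.svd(A)
    r = int((s > tol).sum())
    K = Vt[r:].T
    return K @ K.T

def proj_im(A, tol=1e-10):           # orthoprojector onto im A
    U, s, _ = np.linalg.svd(A)
    r = int((s > tol).sum())
    return U[:, :r] @ U[:, :r].T

def T_V(k):
    Ek1, Fk1 = inflated(k - 1)
    n = Ek1.shape[0]
    W = np.eye(n) - proj_im(Ek1)                        # W_[k-1] along im E_[k-1]
    Tk = proj_ker(np.vstack([P, W @ Fk1]))              # onto ker E cap S_[k-1]
    V = np.eye(n) - proj_im(np.hstack([Fk1 @ Q, Ek1]))  # V_[k-1]
    Vk = proj_ker(np.vstack([Q, V @ Fk1]))
    return Tk, Vk

for k in (1, 2):
    Tk, Vk = T_V(k)
    print(k, int(np.linalg.matrix_rank(Tk)), int(np.linalg.matrix_rank(Vk)))
```

The computed ranks match $\rank T_1=\theta_0=1$, $\rank T_2=\theta_1=0$, and $\rank V_1=\rank V_2=r-\theta_0=1=d$.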
\\ For the orthogonal projector $\Pi:= V_{\mu}= V_{\mu-1} $ with $\rank \Pi = d = r-\sum_{i=0}^{\mu-2} \theta_i $, $\Pi x$ represents the part of the $P$-component that is not determined by the constraints and can be used to formulate an orthogonally projected explicit ODE \begin{eqnarray} (\Pi x)'-\Pi'(\Pi x) + \Pi C(t) (\Pi x) = \Pi c(t) \label{eq:Proj_Pi_ODELinDAE} \end{eqnarray} for suitable $C(t)$ and $c(t)$, cf.\ \cite{EstLamDecoupling2020}. The remaining components $(I-\Pi)x$ can then be computed accordingly from the constraints. Note that $\Pi$ is not orthogonal to $S_{[\mu-1]}$ in general, since $\mathcal V_{[\mu-1]}$ does not coincide with $\mathcal W_{[\mu-1]}$ in general. \medskip Note further that, by definition, $T_k=QT_k=T_kQ$ as well as \[ T_{k+1}T_k = T_k T_{k+1}= T_{k+1}, \quad V_{k+1}V_k = V_k V_{k+1}= V_{k+1}, \quad T_{k_1} V_{k_2} =0 \] holds, cf.\ \cite{EstLamDecoupling2020}. Therefore $(V_k - V_{k+1})$ is an orthogonal projector as well, fulfilling $\rank (V_k - V_{k+1}) = \theta_k$ and $(V_k - V_{k+1})=P(V_k - V_{k+1})=(V_k - V_{k+1})P$. \begin{remark}\label{r.mod1} It is opportune to mention that $\rank\mathcal B_{[k]}$ serves as a proven monitor for indicating singular points by means of the algorithms from \cite{EstLamInitDAE}. In \cite{ELMRoboticArm2020} several simple examples are discussed, and in \cite{EstLam2017Discovering} the well-known nonlinear benchmark robotic arm is analyzed in detail.\\ Indeed, for many applications, the projector $T_k$ is constant. If this is not the case, the changes in $T_{k}$ may provide an indication of which entries of $E$ or $F$ lead to a change of $\theta_{k-1}$. Comparing the obtained projectors at a regular and a singular point, critical parameter combinations or model errors may be identified, cf.\ Example \ref{e.5}, Example \ref{e.robotic_arm} and \cite{ELMRoboticArm2020}.
\end{remark} \subsection{Strangeness index via derivative array}\label{subs.Hyp} DAEs with differentiation index zero are (possibly implicit) regular ODEs; they are well-understood and of no interest in our context here. Further, a DAE with differentiation index one is a priori\footnote{See Proposition \ref{p.index1}.} a regular DAE with index $\mu=1$ in the sense of Definition \ref{d.2}, and it is rather unreasonable\footnote{In view of different properties such as stability behavior and numerical handleability.} here to change to an underlying ODE. So the question arises as to whether one should rather look for an index-1 DAE instead of a regular ODE in general. \medskip The aim is now to filter out a regular index-one DAE, more precisely, a strangeness-free DAE, from an inflated system instead of the underlying ODE in the context of the differentiation index. Again we consider the DAE \begin{align*} Ex'+Fx=q \end{align*} and try to find an associated new DAE with the same unknown function $x$ in the partitioned form \begin{align} \hat E_1x'+\hat F_1x&=\hat q_1,\label{5.5.1}\\ \hat F_2x&=\hat q_2,\label{5.5.2} \end{align} which is strangeness-free, i.e., regular with index zero or index one in the sense of Definition \ref{d.2}. We assume the given DAE to have a well-defined differentiation index, say $\nu:=\mu^{diff}$. By means of Proposition \ref{p.diff} we obtain the constant numbers $d=r_{[\nu]}-\nu m=\dim S_{can}$ and $a=m-d$ such that \begin{align*} r_{[\nu-1]}=\rank \mathcal E_{[\nu-1]}&=(\nu-1) m+d= \nu m-a,\\ \dim\ker \mathcal E_{[\nu-1]}&=a,\\ \im [ \mathcal E_{[\nu-1]} \mathcal F_{[\nu-1]}]&=\Real^{\nu m}. \end{align*} Then we form a smooth full-column-rank function $Z:\mathcal I\rightarrow \Real^{\nu m\times a}$ such that $Z^*\mathcal E_{[\nu-1]}=0$ and thus $\ker Z^*\mathcal F_{[\nu-1]}= S_{[\nu-1]}$ has constant dimension $m-a=d$. Recall that $S_{[\nu-1]}=S_{can}$.
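To make the construction of $Z$ tangible, the following Python sketch (our own numerical illustration; the constant pair below is a hypothetical toy example with $m=2$, $\nu=2$, $d=0$, not taken from the text) builds the inflated matrix $\mathcal E_{[\nu-1]}$, computes a basis $Z$ of its left nullspace via the SVD, and verifies that $\ker Z^*\mathcal F_{[\nu-1]}$ has the expected dimension $m-a=d$:

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis of ker M (as columns), computed via the SVD."""
    U, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# Hypothetical constant pair with m = 2: the DAE  x1' + x2 = q1,  x1 = q2
# has differentiation index nu = 2 and d = 0 dynamical degrees of freedom.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
F = np.array([[0.0, 1.0], [1.0, 0.0]])
m, nu = 2, 2

# Inflated pair at level nu - 1 = 1 (E, F constant, hence E' = F' = 0):
# the rows stem from  E x' + F x = q  and  E x'' + F x' = q'.
E_infl = np.block([[E, np.zeros((m, m))], [F, E]])   # acts on (x', x'')
F_infl = np.vstack([F, np.zeros((m, m))])

r_infl = np.linalg.matrix_rank(E_infl)   # r_[nu-1] = (nu-1)*m + d = 2
a = nu * m - r_infl                      # a = m - d = 2 here
Z = null_basis(E_infl.T)                 # Z* E_infl = 0, full column rank a
ZF = Z.T @ F_infl                        # rank a, so dim ker(Z* F_infl) = m - a = d
print(r_infl, a, np.linalg.matrix_rank(ZF))   # 2 2 2
```

In this toy case $Z^*\mathcal F_{[\nu-1]}$ already has full rank $a=m$, so the reduced system is purely algebraic, in accordance with $d=0$.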
Let $C:\mathcal I\rightarrow \Real^{m\times d}$ define a smooth basis of the subspace $S_{[\nu-1]}$, so that $Z^*\mathcal F_{[\nu-1]}C=0$. The matrix function $EC$ has full column-rank $d$ due to Proposition \ref{p.diff}. Therefore, with any matrix function $Y:\mathcal I\rightarrow\Real^{m\times d}$ forming a basis of $\im EC$ we obtain a nonsingular product $Y^*EC$. Letting in \eqref{5.5.1}, \eqref{5.5.2} \begin{align*} \hat E_1=Y^*E,\quad &\hat F_1=Y^*F, \quad \hat q_1=Y^*q,\\ &\hat F_2=Z^*\mathcal F_{[\nu-1]}, \quad \hat q_2=Z^*q_{[\nu-1]}, \end{align*} we obtain $\hat{\theta}_0=\dim (\ker \hat E_1\cap \ker \hat F_2) =0$ since \begin{align*} \ker \hat E_1\cap \ker \hat F_2&=\{z\in\Real^m: Y^*Ez=0, z\in S_{[\nu-1]}\}\\ &=\{z\in\Real^m: z=Cw, Y^*ECw=0\}=\{0\}, \end{align*} and hence we are done. This matter was developed as part of the index concept and formally tied into a so-called hypothesis. We quote \cite[Hypothesis 3.48]{KuMe2006} in a form adapted to our notation. \begin{hypothesis}[\textbf{Strangeness-Free-Hypothesis (SF-Hypothesis)}]\label{SHyp} Given are sufficiently smooth matrix functions $E,F:\mathcal I\rightarrow\Real^{m\times m}$. There exist integers $\hat\mu, \hat a,$ and $\hat d=m-\hat a$ such that the inflated pair $\{\mathcal E_{[\hat\mu]},\mathcal F_{[\hat\mu]}\}$ associated with the given pair $\{E, F\}$ has the following properties: \begin{description} \item[\textrm{(1)}] $\rank \mathcal E_{[\hat\mu]}(t)=(\hat\mu+1)m-\hat a,\; t\in \mathcal I$, such that there is a smooth matrix function $Z:\mathcal I\rightarrow\Real^{(\hat\mu m+m)\times\hat a}$ with full column-rank $\hat a$ on $\mathcal I$ and $Z^*\mathcal E_{[\hat\mu]}=0$. \item[\textrm{(2)}] For $\hat F_2:= Z^* \mathcal F_{[\hat\mu]}$ one has $\rank \hat F_2(t)=\hat a, t\in \mathcal I$, such that there is a smooth matrix function \\ $C:\mathcal I\rightarrow\Real^{m\times\hat d}$ with constant rank $\hat d$ on $\mathcal I$ and $\hat F_2C=0$.
\item[\textrm{(3)}] $\rank E(t)C(t)=\hat d,\;t\in \mathcal I$, such that there is a smooth matrix function $Y:\mathcal I\rightarrow\Real^{m\times\hat d}$ with constant rank $\hat d$ on $\mathcal I$, and, for $\hat E_1:= Y^*E$, one has $\rank \hat E_1(t)=\hat d,\;t\in\mathcal I$. \end{description} \end{hypothesis} The next definition simplifies \cite[Definition 4.4]{KuMe2006} for linear DAEs. \begin{definition}\label{d.HypStrangeness} Given are matrix functions $E,F:\mathcal I\rightarrow\Real^{m\times m}$. The smallest value of $\hat \mu$ such that the SF-Hypothesis \ref{SHyp} is satisfied is called the \emph{strangeness index} of the pair $\{E,F\}$ and of the DAE \eqref{1.DAE}. If $\hat \mu=0$ then the DAE is called \emph{strangeness-free}. \end{definition} Obviously, since the SF-Hypothesis is satisfied for DAEs having a well-defined differentiation index, it is in particular satisfied for regular DAEs in the sense of Definition \ref{d.2}, which also cover all DAEs featuring the regular strangeness index from Section \ref{subs.strangeness}. \begin{remark}\label{r.HypStrange} The notion \emph{regular strangeness index} is also sometimes used in the context of the SF-Hypothesis, e.g., \cite[p.\ 1261]{Baum}, \cite[p.\ 154]{KuMe2006}, with the reasoning that a differential-algebraic operator somehow (see \cite[Section 3.4]{KuMe2006}) associated to the strangeness-free reduced system \eqref{5.5.1}, \eqref{5.5.2} is a continuous bijection. Unfortunately, this is not a viable argument, because it says far too little about the nature of the original DAE and its associated operator\footnote{We refer to \cite{HaMae2023} and the references therein for basics on differential-algebraic operators.} $Tx=Ex'+Fx$. All differentiations are analytically assumed in advance and available from the derivative array. In addition, the term \emph{regular strangeness index} is already used for the regular case of the original strangeness index (see Definition \ref{d.strangeness} and footnote).
The SF-Hypothesis is associated with the fact that a DAE with differentiation index one always contains a regular index-1 DAE, cf. Proposition \ref{p.index1}. \end{remark} It is claimed in \cite[Theorem 3.50]{KuMe2006} that, if the pair $\{E, F\}$ has differentiation index $\mu^{diff}\geq1$ on a compact interval, then the SF-Hypothesis is satisfied with $\hat\mu=\mu^{diff}-1$, $\hat a=a$, and $\hat d=d$. Conversely, according to \cite[Corollary 3.53]{KuMe2006}, if the pair $\{E, F\}$ satisfies the SF-Hypothesis then it features a well-defined differentiation index, and $\mu^{diff}=\hat\mu+1$ applies if $\hat\mu$ is minimal. \subsection{Equivalence issues}\label{subs.equivalence} Let us summarize the most relevant results of the present section concerning the equivalence. We start with a well-known fact. \begin{theorem}\label{t.diff2} Let $E,F:\mathcal I\rightarrow \Real^{m\times m}$ be sufficiently smooth on the compact interval $\mathcal I$. The following two assertions are equivalent: \begin{description} \item[\textrm{(1)}] The differentiation index $\mu^{diff}$ of the pair $\{E,F\}$ on the interval $\mathcal I$ is well-defined according to Definition \ref{d.diff}. \item[\textrm{(2)}] The pair $\{E,F\}$ satisfies the SF-Hypothesis \ref{SHyp} on the interval $\mathcal I$ with strangeness index $\hat{\mu}$ according to Definition \ref{d.HypStrangeness}. \end{description} If these statements are valid, then $\mu^{diff}=\hat{\mu}+1$ and $\hat d=d=\dim S_{can}$ and $\dim \ker \mathcal E_{[\hat \mu]}=\hat a=a=m-d$. \end{theorem} \begin{proof} The direction (1) $\Rightarrow$ (2) immediately results from Section \ref{subs.Hyp} for an arbitrary interval. For the more complicated proof of (2) $\Rightarrow$ (1) we refer to \cite[Corollary 3.53]{KuMe2006}, cf.\ Section \ref{subs.Hyp}. \end{proof} The other index concepts considered in the present section require additional rank conditions.
It turns out that they are equivalent among each other and comprise just the regular DAEs in the sense of the basic Definition \ref{d.2}. \begin{theorem} \label{t.equivalence_array} Let $E, F:\mathcal I\rightarrow\Real^{m\times m}$ be sufficiently smooth, $\mu\in\Natu$. The following assertions are equivalent in the sense that the individual characteristic values of each two of the variants are mutually uniquely determined. \begin{description} \item[\textrm{(1)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with index $\mu\in \Natu$, according to Definition \ref{d.2}. \item[\textrm{(2)}] The DAE \eqref{1.DAE} has regular differentiation index $\mu^{rdiff}=\mu$. \item[\textrm{(3)}] The DAE \eqref{1.DAE} has projector-based differentiation index $\mu^{pbdiff}=\mu$. \item[\textrm{(4)}] The DAE has differentiation index $\mu^{diff}=\mu$ and, additionally, the rank functions $r_{[k]},\ k<\mu^{diff}$, are constant. \item[\textrm{(5)}] The DAE fulfills the Hypothesis \ref{SHyp} with $\hat{\mu}=\mu-1$ and, additionally, the rank functions $r_{[k]},\ k<\hat{\mu}$, are constant. \end{description} \end{theorem} \begin{proof} The equivalence of \textrm{(1)} and \textrm{(2)} has been shown in Theorem \ref{t.rdiff} \textrm{(1)}. The equivalence of \textrm{(2)} and \textrm{(4)} follows from the equivalence of \textrm{(1)} and \textrm{(4)} in Lemma \ref{l.app}, since in both cases all ranks are assumed to be constant. The equivalence of \textrm{(4)} and \textrm{(5)} is a consequence of Theorem \ref{t.diff2}. \textrm{(1)} implies \textrm{(3)} by Theorem \ref{t.projector_based_diff}. For the last step of the proof of equivalences we verify that \textrm{(3)} implies \textrm{(5)}. Let \textrm{(3)} be given, so that $r_{[i]}=\rank \mathcal E_{[i]}$ is constant, $i=0,\ldots,\mu-1$, $\ker E\cap S_{[\mu-1]}=\{0\}$, and the necessary solvability condition \eqref{eq:fullrank} is satisfied. 
We set $\hat{\mu}:=\mu-1$, $\hat a:=\mu m-r_{[\mu-1]}$, $\hat d:=m-\hat a$ and show that the Hypothesis \ref{SHyp} with these values is satisfied. First, it results that $\dim (\im \mathcal E_{[\mu-1]})^{\perp}=\hat a$ and there is a smooth basis $Z:\mathcal I\rightarrow \Real^{\mu m\times\hat a}$ of $(\im \mathcal E_{[\mu-1]})^{\perp}$ such that $Z^*\mathcal E_{[\mu-1]}=0$. Next, regarding the condition \eqref{eq:fullrank} we evaluate \[ \rank Z^*\mathcal F_{[\mu-1]}=m-\dim \ker Z^*\mathcal F_{[\mu-1]}=m-\dim S_{[\mu-1]}=\mu m-r_{[\mu-1]}=\hat a. \] Finally, since $\dim S_{[\mu-1]}=\hat d$, with a smooth matrix function $C:\mathcal I\rightarrow \Real^{m\times\hat d}$ forming a basis of $S_{[\mu-1]}$, we obtain a product $EC$ that features full column-rank $\hat d$. Namely, it holds \[ \ker EC=\{z\in\Real^{\hat d}:Cz\in\ker E\}=\{z\in\Real^{\hat d}:Cz\in\ker E\cap S_{[\mu-1]}\}=\{0\}, \] and the Hypothesis \ref{SHyp} is satisfied, and thus statement \textrm{(5)} holds. \end{proof} We now indicate how the individual characteristic values depend on those of the base concept. Because of the equivalence, this allows us to determine the relationships between the values of any two concepts providing regular DAEs. \begin{theorem}\label{t.theta_relation_array} Let the pair $\{E,F\}$ be regular on $\mathcal I$ with index $\mu\in \Natu$ and characteristics $r<m$, $\theta_0=0$ if $\mu=1$, and, for $\mu>1$, \begin{align*} r<m,\quad \theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0,\quad d=r-\sum_{j=0}^{\mu-2}\theta_{j}.
\end{align*} Then the three array functions $\mathcal E_{[k]},\mathcal D_{[k]}$, and $\mathcal B_{[k]}$ feature shared constant ranks, \[ r_{[k]}=\rank \mathcal E_{[k]} = \rank \mathcal D_{[k]} = \rank \mathcal B_{[k]}, \] and the following relations concerning the characteristic values arise: \begin{align*} r_{[k]} &=km+r-\sum_{j=0}^{k-1}\theta_{j},\quad k=1,\ldots, \end{align*} in particular, \begin{align*} r_{[\mu-1]} &= (\mu-1)m+r-\sum_{j=0}^{\mu-2}\theta_{j}=(\mu-1)m+d,&& r_{[\mu]} = \mu m+r-\sum_{j=0}^{\mu-1}\theta_{j}=\mu m+d, \end{align*} and, moreover, \begin{align*} \rho_k=m-\dim (\ker E\cap S_{[k]})=m-\theta_k,\quad &k=0,1, \ldots \ ,\\ \rank T_{k} = \theta_{k-1},\quad \rank V_k = r-\sum_{j=0}^{k-1} \theta_{j},\quad &k=0,1, \ldots . \end{align*} and, conversely, \begin{align*} r&=r_{[0]},\\ \theta_0&=m+r_{[0]}-r_{[1]},\\ &\cdots \\ \theta_{\mu-2}&=m+r_{[\mu-2]}-r_{[\mu-1]},\\ \theta_{\mu-1}&=m+r_{[\mu-1]}-r_{[\mu]},\\ d&=r_{[\mu-1]}-(\mu-1)m=r_{[\mu]}-\mu m. \end{align*} \end{theorem} \begin{proof} The relations for the characteristic values follow from Theorems \ref{t.rdiff} and \ref{t.projector_based_diff}. \end{proof} \begin{corollary}\label{c.crit} Regular and critical points in the sense of the Definition \ref{d.regpoint} are independent of the specific approach. \end{corollary} We emphasize that, for the approaches gathered in Theorems \ref{t.equivalence_array} and \ref{t.theta_relation_array} that capture regular DAEs, constant $r$ and $\theta_i$ are mandatory. In contrast, the two concepts recorded in Theorem \ref{t.diff2} allow changes of $r$ as well as the $\theta_i$, as long as $\mu$ and $d$ remain constant. This motivates the following two definitions. \begin{definition} \label{d.harmless} Given are $E,F:\mathcal I\rightarrow\Real^{m\times m}$. 
The critical\footnote{See Definition \ref{d.regpoint}.} point $t_*\in\mathcal I$ is said to be a \emph{harmless critical point}\footnote{Note that this definition is consistent with that in \cite{Dokchan2011,CRR,RR2008}, which is formulated in more specific terms of the projector-based analysis.} of the pair $\{E,F\}$ and the associated DAE \eqref{1.DAE}, if there is an open neighborhood $\mathcal U\ni t_*$ such that the DAE restricted to $\mathcal I\cap \mathcal U$ is solvable in the sense of Definition \ref{d.solvableDAE}. \end{definition} Harmless critical points only become apparent with less smooth problems and in the input/output behavior of the systems. For smooth problems, we quote from \cite[p. 180]{RR2008}: \textit{the local behavior around a harmless critical point is entirely analogous to the one near a regular point}. That is precisely why they are called harmless. Examples \ref{e.2}, \ref{e.5} and \ref{e.7} in the next section confirm this; see also \cite[Section 2.9]{CRR}. By Theorem \ref{t.diffinterval}, for a DAE featuring a well-defined differentiation index on a compact interval, the set of regular points is dense and all critical points are harmless. \begin{definition} \label{d.almost-reg} Given are $E,F:\mathcal I\rightarrow\Real^{m\times m}$. The pair $\{E,F\}$ and the associated DAE \eqref{1.DAE} are \emph{almost regular} if all points of $ \mathcal I$ are regular points in the sense of Definition \ref{d.regpoint} or harmless critical points in the sense of Definition \ref{d.harmless}, and the regular points are dense in $\mathcal I$. \end{definition} \section{A selection of simple examples to illustrate possible critical points}\label{s.examples} We use a few simple examples to illustrate several critical points that can arise with DAEs. For deeper insight we refer to \cite{RR2008}.
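Throughout the examples below, the pointwise characteristic values $r(t)=\rank E(t)$ and $\theta_0(t)=\dim(\ker E(t)\cap S_0(t))$ can also be checked numerically, using $S_0=\ker (W^*F)$, where the columns of $W$ span $(\im E)^{\perp}$. The following Python sketch (a helper of our own, not one of the cited algorithms) evaluates these quantities for the pair of Example \ref{e.degree_cont}:

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis of ker M (as columns), computed via the SVD."""
    U, s, Vt = np.linalg.svd(M)
    return Vt[int(np.sum(s > tol)):].T

def characteristics(E, F, tol=1e-10):
    """Pointwise r = rank E and theta0 = dim of the intersection of ker E and S_0."""
    r = np.linalg.matrix_rank(E, tol=tol)
    N = null_basis(E)      # columns span ker E
    W = null_basis(E.T)    # columns span (im E)^perp, hence S_0 = ker(W* F)
    if N.shape[1] == 0:    # E nonsingular: ker E = {0}, so theta0 = 0
        return r, 0
    # z = N c lies in S_0  iff  W* F N c = 0
    theta0 = N.shape[1] - np.linalg.matrix_rank(W.T @ F @ N, tol=tol)
    return r, int(theta0)

# Pair of Example e.degree_cont: E(t) = [[1, -t], [1, -t]], F = 2 I.
for t in (0.0, 1.0, 2.0):
    E = np.array([[1.0, -t], [1.0, -t]])
    print(t, characteristics(E, 2.0 * np.eye(2)))   # theta0 jumps to 1 at t = 1
```

At $t_\star=1$ the helper reports $\theta_0=1$ while $r\equiv 1$, in accordance with the singular point identified in that example.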
\subsection{Serious singularities} \begin{example}[$r$ constant, $\theta_0$ changes]\label{e.degree_cont} Recall Example \ref{e.degree} from Section \ref{subs.degree}: \begin{align*} E(t)=\begin{bmatrix} 1&-t\\1&-t \end{bmatrix},\quad F(t)=\begin{bmatrix} 2&0\\0&2 \end{bmatrix},\quad t\in \Real. \end{align*} We already know that the homogeneous DAE has the nontrivial solution \begin{align*} x(t)=\gamma (1-t)^2\begin{bmatrix} 1\\1 \end{bmatrix},\; t\in \Real, \quad \text{with }\; \gamma\in \Real, \end{align*} and $t_\star = 1$ is obviously a singular point of the flow, because of \begin{align*} \ker E(t) \cap S_0(t) = \{z \in \Real^2: z_1-t z_2=0, z_1 = z_2\},\text{ i.e., } \theta_0(t) = \begin{cases} 0 &t \neq 1\\ 1 &t = 1 \end{cases} \end{align*} \begin{itemize} \item The framework of the tractability index with \begin{align*} A:=E, D=\begin{bmatrix} 1&-t\\0&0 \end{bmatrix},G_0=A D = E, Q_0=\begin{bmatrix} 0 & t\\0 & 1 \end{bmatrix},B_0=F-A D' = \begin{bmatrix} 2 & -1\\0 & 1 \end{bmatrix} \end{align*} leads to \begin{align*} G_1=G_0 + B_0 Q_0 = \begin{bmatrix} 1 & t-1\\ 1 & -t+1 \end{bmatrix}, \det G_1 = 2(1-t), r_1(t) = \begin{cases} 2 & t \neq 1\\ 1 & t = 1 \end{cases}. \end{align*} This indicates $t_\star = 1$ as a critical point. \item By Lemma \ref{l.R1} we obtain $r_{[1]}(t) = m+r - \theta_0(t) = 3 -\theta_0(t)$. \end{itemize} \end{example} \begin{example}[$r$ changes, $\theta_0$ constant]\label{e.rank_drop_E} This example is a special case of \cite[Example 2.69]{CRR}. We consider the pair \[ E=\begin{bmatrix}0 & \alpha\\ 0&0 \end{bmatrix}, \quad F=\begin{bmatrix} \beta & \gamma\\ 1 & 1 \end{bmatrix}, \quad m=2, \] where $\alpha, \beta, \gamma: \mathcal{I} \rightarrow \Real$ are smooth functions.
$\alpha^2 + (\beta - \gamma)^2 > 0$ ensures that $\rank [E(t), F(t)] \equiv 2$ and $\alpha(t) = 0$ requires $\beta(t) \neq \gamma(t)$.\\ We have $r(t) = \begin{cases} 1 & \alpha(t) \neq 0\\ 0 & \alpha(t) = 0 \end{cases}\quad $, $\ker E(t) = \begin{cases} \im \begin{bmatrix} 1 \\ 0 \end{bmatrix} & \alpha(t) \neq 0\\ \Real^2 & \alpha(t) = 0 \end{cases}$ and\\ $S_0(t) =\{z \in \Real^m: F(t)z \in \im E(t)\} =\begin{cases} \im \begin{bmatrix} 1 \\ -1 \end{bmatrix} & \alpha(t) \neq 0,\\ \{0\} & \alpha(t) = 0. \end{cases}$\\ Surprisingly, we obtain $\theta_0(t) = \dim (\ker E(t) \cap S_0(t)) = 0$ for all $t$.\\ A simple reformulation of $Ex'+Fx = q$ yields the equations \begin{align*} \alpha x_2' + (\gamma-\beta) x_2 &= q_1-\beta q_2,\\ x_1 &= -x_2 + q_2. \end{align*} We consider the particular case where $\gamma(t)-\beta(t) \equiv M \neq 0$ is constant, $q \equiv 0$ and $\alpha(t)=t$, which leads to the singular scalar homogeneous ODE for $x_2$ \begin{align}\label{solution_x2} t x_2'(t) + M x_2(t) =0. \end{align} The solution is $x_2(t) = c\, t^{-M}$ with an arbitrary real constant $c$, see Figure \ref{fig:solM}. \begin{figure} \includegraphics[width=6.5cm]{figureMm2_mit_t.jpg} \includegraphics[width=6.5cm]{figureMp2_mit_t.jpg} \caption{Solution of \eqref{solution_x2} for $M>0$ and $M<0$} \label{fig:solM} \end{figure} \end{example} \begin{example}[$r$ constant, $\theta_0$ constant, $\theta_1$ changes]\label{e.6} Given a smooth function $\beta:\mathcal I\rightarrow\Real$, we investigate the pair $\{E,F\}$, \begin{align*} E(t)=\begin{bmatrix} 1&0&0\\0&1&0\\0&0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 0&0&\beta(t)\\1&1&0\\1&0&0 \end{bmatrix}. \end{align*} A look at the associated DAE reveals the type of singularity. The DAE reads \begin{align*} x_1'+\beta x_3&=q_1,\\ x_2'+x_1+x_2&=q_2,\\ x_1&=q_3, \end{align*} which can be rearranged to \begin{align*} x_1&=q_3,\\ x_2'+x_2&=q_2-q_3,\\ \beta x_3&=q_1-q_3'.
\end{align*} It is now evident that, if $\beta$ has zeros, then the DAE is no longer solvable for all sufficiently smooth right-hand sides $q$. From a more general point of view, the pair $\{E,F\}$ is pre-regular with $m=3$, $r=2$, and $\theta_0=1$, and the singularity can be detected by different approaches. We start with the basic approach. Letting \begin{align*} Y_0(t)=\begin{bmatrix} 1&0\\0&1\\0&0 \end{bmatrix},\quad C_0(t)=\begin{bmatrix} 0&0\\1&0\\0&1 \end{bmatrix}, \end{align*} the first step in the basic reduction procedure leads to the new pair \begin{align*} E_1(t)=\begin{bmatrix} 0&0\\1&0 \end{bmatrix},\quad F_1(t)=\begin{bmatrix} 0&\beta(t)\\1&0 \end{bmatrix}. \end{align*} The new pair $\{E_1,F_1\}$ is pre-regular if and only if the function $\beta$ has no zeros, and then one has $\theta_1=0$, and hence the pair $\{E,F\}$ is regular with index two. In contrast, if $\beta(t_*)=0$ at a point $t_*\in\mathcal I$, we are confronted with $\theta_1(t_*)=1$ and $\rank[E_1(t_*) F_1(t_*)]=1<m$, and the pair $\{E_1,F_1\}$ fails to be pre-regular. In turn, the original pair $\{E,F\}$ is no longer regular. The tractability framework \eqref{2.Gi} leads to \begin{align*} G_0=E, B_0=F, Q_0=\begin{bmatrix} 0&&\\ &0&\\ &&1 \end{bmatrix}, G_1=\begin{bmatrix} 1& 0 & \beta\\ 0& 1& 0\\ 0 & 0 & 0 \end{bmatrix}, \end{align*} which immediately indicates zeros of $\beta$ as critical points, too, because of \begin{align*} N_1 \cap N_0 =\{z \in \Real^3: z_1=0, z_2=0, \beta z_3 =0\}, \end{align*} i.e., $u_1^T(t)= \begin{cases} 0 & \beta \neq 0\\ 1 & \beta = 0 \end{cases} $,\quad which is called ``B-singularity'' in \cite[Definition 2.75]{CRR} and \cite[p.\ 144]{RR2008}. Next we consider the first array functions \begin{align*} \mathcal E_{[1]}(t)=\begin{bmatrix} 1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&0&0\\0&0&\beta(t)&1&0&0\\1&1&0&0&1&0\\1&0&0&0&0&0 \end{bmatrix},\quad \mathcal F_{[1]}(t)=\begin{bmatrix} 0&0&\beta(t)\\1&1&0\\1&0&0\\0&0&\beta'(t)\\0&0&0\\0&0&0 \end{bmatrix}.
\end{align*} and compute $\rank \mathcal E_{[1]}(t)=4=m+r-\theta_0$ independently of how the function $\beta$ behaves. However, the necessary solvability requirement $\rank[\mathcal E_{[1]}(t), \mathcal F_{[1]}(t)]=6$ is satisfied only if $\beta(t)\neq 0$, but otherwise one has $\rank[\mathcal E_{[1]}(t_*), \mathcal F_{[1]}(t_*)]=5$. \end{example} \begin{example}[$r$ constant, $\theta_0$ constant, $\theta_1$ changes]\label{e.3} Given is the pair $\{E, F\}$ with $m=2$, \begin{align*} E(t)=\begin{bmatrix} 1&-1\\1&-1 \end{bmatrix},\quad F(t)=\begin{bmatrix} 2&0\\0&t+2 \end{bmatrix}, \quad t\in \mathcal I=[-1,1], \end{align*} such that $E(t)$ has constant rank $r=1$ and $\rank [E(t), F(t)]=m$. Following the basic reduction procedure in Section \ref{s.regular} we choose and find \begin{align*} Z_0(t)=\begin{bmatrix} 1\\-1 \end{bmatrix},\; Y_0(t)=\begin{bmatrix} 1\\1 \end{bmatrix},\; C_0(t)=\begin{bmatrix} \frac{t+2}{2}\\1 \end{bmatrix},\\ \ker E(t)\cap\ker (Z^*_0F)(t)=\{z\in\Real^2: z_1=z_2, tz_2=0\}, \end{align*} and \begin{align*} E_1(t)=(Y^*EC_0)(t)=t,\; F_1(t)=(Y^*FC_0)(t)+(Y^*EC'_0)(t)= 2t+5. \end{align*} The pair $\{E, F\}$ fails to be pre-regular on $\mathcal I$ because of \begin{align*} \theta_0(t)=\Bigg\lbrace \quad\begin{matrix} 0& \text{ if }t\neq 0, \\ 1& \text{ if }t= 0\;; \end{matrix} \end{align*} however, it is pre-regular and regular with index one on the subintervals $[-1,0)$ and $(0,1]$. The corresponding DAE, \begin{align*} x_1'-x_2'+2x_1&=q_1,\\ x_1'-x_2'+(t+2)x_2&=q_2, \end{align*} reads in slightly rearranged form as \begin{align*} -tx_2+2(x_1-x_2)&=q_1-q_2,\\ (x_1-x_2)'+\frac{2}{t}(t+2)(x_1-x_2)&=q_2+\frac{1}{t}(t+2)(q_1-q_2). \end{align*} Having a solution of the ODE for the difference $x_1-x_2$, we find the original solution components by $x_1=\frac{1}{2}(q_1-(x_1-x_2)')$ and $x_2=x_1-(x_1-x_2)$. No doubt, $t_*=0$ is a critical point causing a singular inherent ODE of the DAE.
We refer to \cite{Koch}, where this example also comes from, for the specification of bounded solutions. Inspecting the array functions \begin{align*} \mathcal E_{[1]}=\begin{bmatrix} 1&-1&0&0\\1&-1&0&0\\2&0&1&-1\\0&t+2&1&-1 \end{bmatrix},\quad \mathcal E_{[2]}=\begin{bmatrix} 1&-1&0&0&0&0\\1&-1&0&0&0&0\\2&0&1&-1&0&0\\0&t+2&1&-1&0&0\\ 0&0&2&0& 1&-1\\0&2&0&t+2&1&-1 \end{bmatrix}, \end{align*} we see that \begin{align*} \rank\mathcal E_{[1]}(t)&=\Bigg\lbrace\quad\begin{matrix} 2m-1=3& \text{if }t\neq 0 \\ 2m-2=2& \text{if }t= 0\\ \end{matrix}\\ \rank\mathcal E_{[2]}(t)&=\Bigg\lbrace\quad\begin{matrix} 3m-1=5& \text{if }t\neq 0 \\ 3m-2=4& \text{if }t= 0,\;\\ \end{matrix} \end{align*} and the rank of the array functions also indicates the point $t_*=0$ as critical. \end{example} \begin{example}[$r$ and $\theta_0$ change]\label{e.4} Given a smooth function $\alpha:\mathcal I\rightarrow\Real$, we investigate the pair $\{E,F\}$, \begin{align*} E(t)=\begin{bmatrix} 0&\alpha(t)&0\\0&0&1\\0&0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} -6&0&0\\0&1&0\\1&0&1 \end{bmatrix},\quad t\in\mathcal I, \end{align*} and the associated DAE living in $\Real^m$, $m=3$, \begin{align*} \alpha x_2'-6x_1&=q_1,\\ x_3'+x_2&=q_2,\\ x_1+x_3&=q_3. \end{align*} Rearranging the DAE as \begin{align*} \alpha x_2'+6x_3&=q_1+6q_3,\\ x_3'+x_2&=q_2,\\ x_1+x_3&=q_3, \end{align*} we immediately see that it is essential whether the function $\alpha$ has zeros or even vanishes on subintervals. Note that $\rank \, [E(t) F(t)]=m=3$ for all $t\in \mathcal I$, but $\rank E(t)=2$ if $\alpha(t)\neq 0$ and otherwise $\rank E(t)=1$. Obviously, on subintervals where $\alpha(t)$ does not vanish, we see a regular index-one DAE with characteristics $r=2, \theta_0=0$ and $d=2$. In contrast, if the function $\alpha$ vanishes on a subinterval, then a regular index-two DAE results there, with $r=1, \theta_0=1, \theta_1=0$ and $d=0$.
There is no doubt that points with zero crossings of $\alpha$ are critical and may cause singularities of the solution flow, see Figure \ref{fig:solMATHEMATICA}. It should be emphasized that the rank conditions associated with regularity Definition \ref{d.2} exclude such critical points on regularity intervals and the corresponding reduction procedure reliably recognizes them. Investigating the corresponding low level array functions, $\mathcal E_{[0]}=E$, $\mathcal F_{[0]}=F$, \begin{align*} \mathcal E_{[1]}=\begin{bmatrix} 0&\alpha&0&0&0&0\\0&0& 1&0&0&0\\0&0&0&0&0&0\\-6&\alpha'&0&0&\alpha&0 \\0&1&0&0&0&1 \\1&0&1&0&0&0 \end{bmatrix},\quad \mathcal F_{[1]}=\begin{bmatrix} -6&0&0\\0&1&0\\1&0&1\\0&0&0\\0&0&0\\0&0&0 \end{bmatrix} \end{align*} \begin{align*} \mathcal E_{[2]}=\begin{bmatrix} 0&\alpha&0&0&0&0&0&0&0\\0&0& 1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\-6&\alpha'&0&0&\alpha&0 &0&0&0\\0&1&0&0&0&1 &0&0&0\\1&0&1&0&0&0 &0&0&0\\0&\alpha''&0&-6&2\alpha'&0&0&\alpha&0 \\0&0&0&0&1&0&0&0&1\\0&0&0&1&0&1&0&0&0 \end{bmatrix},\quad \mathcal F_{[2]}=\begin{bmatrix} -6&0&0\\0&1&0\\1&0&1\\0&0&0\\0&0&0\\0&0&0\\0&0&0\\0&0&0\\0&0&0 \end{bmatrix}, \end{align*} we find that the ranks of the derivative arrays also indicate those critical points, namely \begin{align*} \rank\mathcal E_{[0]}(t)&= \begin{cases} m-1= 2& \text{if }\alpha(t)\neq 0 \\ m-2= 1& \text{if }\alpha(t)= 0 \end{cases}\\ \rank\mathcal E_{[1]}(t)&=\begin{cases} 2m-1=5& \text{if }\alpha(t)\neq 0 \\ 2m-2=4& \text{if }\alpha(t)= 0,\alpha'(t)\neq 0\\ 2m-3=3& \text{if }\alpha(t)= 0,\alpha'(t)=0, \alpha''(t)\neq 0 \end{cases}\\ \rank\mathcal E_{[2]}(t)&=\begin{cases} 3m-1=8& \text{if }\alpha(t)\neq 0 \\ 3m-2=7& \text{if }\alpha(t)= 0,\alpha'(t)\neq 0\\ 3m-3=6& \text{if }\alpha(t)= 0,\alpha'(t)=0, \alpha''(t)\neq 0, \alpha''(t)\neq 6\\ 3m-3=6& \text{if }\alpha(t)= 0,\alpha'(t)=0, \alpha''(t)= 0\\ 3m-4=5& \text{if }\alpha(t)= 0,\alpha'(t)=0, \alpha''(t)= 6 \;. 
\end{cases} \end{align*} \begin{figure} \includegraphics[width=6.5cm]{e4_x2_neu.png}\includegraphics[width=6.5cm]{e4_x3_neu.png} \caption{Solutions for $x_2$ and $x_3$ computed with MATHEMATICA, Version 13 for \\ $\alpha(t)=\begin{cases} 0&\text{ for }t\in (-\infty,0]\\ t^{4}&\text{ for }t\in (0,\infty) \end{cases}$, $q_1=q_2=q_3=0$ in Example \ref{e.4}. The difficulty in plotting the solution around $0$ is due to the singularity.} \label{fig:solMATHEMATICA} \end{figure} \end{example} \subsection{Harmless critical points} The first example of the present section, which shows an almost regular DAE, is of particular interest because $r$ and $d$ are constant, while $\theta_0$ and $\theta_1$ change. To our knowledge such a circumstance has not been discussed in the literature before, since harmless critical points were usually tied to rank changes of $E$. Therefore, for this example we illustrate in detail how four different approaches identify critical points.\\ The three other examples of the section are classical cases discussed in the literature showing rank changes of $E$. \begin{example}[$r$ constant, $\theta_0$ and $\theta_1$ change]\label{e.5} Given is the pair $\{E, F\}$ with $m=4$, and smooth functions $\alpha, \beta:\mathcal I\rightarrow\Real$, \begin{align*} E(t)=\begin{bmatrix} 0&1&\alpha(t)&0\\0&0&0&\beta(t)\\0&0&0&1\\0&0&0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1 \end{bmatrix}, \quad t\in \mathcal I, \end{align*} such that $E(t)$ has constant rank $r=2$ and $\rank \, [E(t) F(t)]=m$. The associated DAE reads \begin{align*} x_2'+\alpha x_3'+x_1&=q_1,\\ \beta x_4'+x_2&=q_2,\\ x_4'+x_3&=q_3,\\ x_4&=q_4.
\end{align*} For each sufficiently smooth right-hand side this DAE possesses the unique solution \begin{align*} x_1&=q_1-q_2'-\alpha q_3'+\beta'q_4'+(\alpha+\beta)q_4'',\\ x_2&=q_2-\beta q_4',\\ x_3&=q_3- q_4',\\ x_4&=q_4, \end{align*} that is, the DAE is a solvable system in the sense of Definition \ref{d.solvableDAE} with zero dynamical degree of freedom. It can be checked immediately that in the sense of Definition \ref{d.2} the DAE is regular with index $\mu=3$ and $d=0$ on all subintervals where $\alpha+\beta$ has no zeros, and it is regular with index $\mu=2$ and $d=0$ on all subintervals where $\alpha+\beta$ vanishes identically. \medskip We analyse this example with four different approaches. All of them lead to the same values for the characteristics $\theta$: \begin{align}\label{e5.theta} \begin{array}{cccc} & \theta_0 & \theta_1 &\theta_2\\ \alpha+\beta \neq 0 & 1 & 1 & 0\\ \alpha+\beta = 0 & 2 & 0& \end{array} \end{align} \medskip \textbf{Basic reduction procedure}, cf.\ \eqref{basic_reduction}. We have $E_0=E$ and $F_0=I$. We obtain the basis $Z_0 = \begin{bmatrix} 0 & 0\\ 1 & 0\\ -\beta & 0\\ 0 & 1 \end{bmatrix}$ of $\ker E_0^\star$ and the basis $Y_0 = \begin{bmatrix} 1 & 0\\ 0 & \beta\\ 0 & 1\\ 0 & 0 \end{bmatrix}$ of $\im E_0$. $S_0 = \ker Z_0^\star F_0 = \{z \in \Real^4: z_2 = \beta z_3, z_4 = 0 \}$. $C_0 = \begin{bmatrix} 1 & 0\\ 0 & \beta\\ 0 & 1\\ 0 & 0 \end{bmatrix} $ forms a basis of $S_0$.\\ The next reduction step yields $E_1 = Y_0^\star E_0 C_0 = \begin{bmatrix} 0 & \alpha + \beta \\ 0 & 0 \end{bmatrix}$ and $F_1 = Y_0^\star F_0 C_0 + Y_0^\star E_0 C'_0 = \begin{bmatrix} 1 & \beta' \\ 0 & 1 + \beta^2 \end{bmatrix}$.\\ To determine $\theta_0$ we investigate $\ker E_0 \cap \ker Z_0^\star F_0 = \{ z \in \Real^4: (\alpha + \beta) z_3 = 0, z_4 = 0, z_2 = -\alpha z_3\}$.\\ We now have to distinguish two different cases.
\begin{itemize} \item $\alpha+\beta \neq 0$: It results that $\ker E_0 \cap \ker Z_0^\star F_0 = \{ z \in \Real^4: z_3 = 0, z_4 = 0, z_2 = 0\} = \im \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix}$ and therefore $\theta_0 =1$.\\ The new pair $[E_1, F_1]$ is pre-regular. $Z_1 = \begin{bmatrix} 0\\ 1 \end{bmatrix} $, $Y_1 = \begin{bmatrix} 1\\ 0 \end{bmatrix} $, and $C_1 = \begin{bmatrix} 1\\ 0 \end{bmatrix} $, i.e., $E_2 = Y_1^\star E_1 C_1 = 0$ and $F_2 = Y_1^\star F_1 C_1 = 1$.\\ $\ker E_1 \cap \ker Z_1^\star F_1 =\{z \in \Real^2: z_2 = 0\}$, which means that $\theta_1 = 1$.\\ The last reduction step delivers the pre-regular pair $E_2 = 0$ and $F_2 = 1$, which leads to $\theta_2 = 0$ and therefore $\mu = 3$. \item $\alpha+\beta = 0$: In this case we obtain $\ker E_0 \cap \ker Z_0^\star F_0 = \{ z \in \Real^4: z_4 = 0, z_2 = -\alpha z_3\} = \im \begin{bmatrix} 1 & 0\\ 0 & -\alpha\\ 0 & 1\\ 0 & 0 \end{bmatrix} $ and therefore $\theta_0 = 2$.\\ The next matrix pair is $E_1 = 0$ and the nonsingular $F_1 = \begin{bmatrix} 1 & \beta'\\ 0 & 1 + \beta^2 \end{bmatrix} $. $[E_1, F_1]$ is pre-regular and we obtain $\theta_1 = 0$ and therefore $\mu = 2$. \end{itemize} \medskip \textbf{Projector-based analysis (tractability index) with the related matrix chain, cf.\ \eqref{2.Gi}}. \begin{equation*} G_0 = E,\quad B_0 = F - E D',\quad Q_0 = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & -\alpha & 0\\ 0 & 0 & 1 & 0\\ 0 & 0& 0& 0 \end{bmatrix},\quad D = P_0 = I-Q_0. \end{equation*} \begin{equation*} G_1=G_0+B_0 Q_0 = \begin{bmatrix} 1 & 1 & \alpha - \alpha' & 0\\ 0 & 0 & -\alpha & \beta\\ 0 & 0 & 1 & 1\\ 0 & 0& 0& 0 \end{bmatrix}. \end{equation*} To determine an admissible nullspace projector $Q_1$ we analyse $\ker G_1$.
\begin{align*} \ker G_1 &= \{z \in \Real^4: z_1+z_2+(\alpha - \alpha')z_3 = 0,-\alpha z_3+\beta z_4 = 0,z_3+z_4 = 0\}\\ &= \{z \in \Real^4: z_1+z_2+(\alpha - \alpha')z_3 = 0,(\alpha +\beta) z_4 = 0,z_3+z_4 = 0\}. \end{align*} Here, too, we have to distinguish two different cases. \begin{itemize} \item $\alpha+\beta \neq 0$: We obtain that $\ker G_1 =\{z \in \Real^4: z_4 = 0, z_3 = 0, z_1+z_2 = 0\} = \im \begin{bmatrix} -1 \\1\\0\\0 \end{bmatrix} $ and $\rank G_1 = 3$. An admissible nullspace projector is $Q_1= \begin{bmatrix} 0 & -1 & -\alpha & 0\\ 0 & 1 & \alpha & 0\\ 0 & 0 & 0 & 0\\ 0 & 0& 0& 0 \end{bmatrix}$ and $\Pi_1 = P_0P_1 = P_0(I-Q_1)= \begin{bmatrix} 0&&&\\ &0&&\\ &&0&\\ &&&1 \end{bmatrix} $.\\ The next matrix chain level starts with $B_1 = B_0P_0 -G_1D^-(D\Pi_1D^-)'D\Pi_0 $ $=B_0P_0 -G_1D^-\Pi_1'D\Pi_0 = B_0P_0$ and we obtain \begin{align*} G_2&=G_1+B_1 Q_1= \begin{bmatrix} 1&1&\alpha - \alpha'&0\\ 0&1& 0&\beta\\ 0&0& 1&1\\ 0&0& 0&0 \end{bmatrix},\\ \ker G_2 &= \{z \in \Real^4: z_1+z_2 + (\alpha-\alpha')z_3=0, z_2+\beta z_4 = 0, z_3+z_4 = 0\} \\ &= \im \begin{bmatrix} \alpha-\alpha'+\beta\\ -\beta\\ -1\\ 1 \end{bmatrix} \text{ and } \rank G_2 = 3. \end{align*} As an admissible nullspace projector we choose $Q_2 = \begin{bmatrix} 0&0&0 &\alpha -\alpha'+ \beta\\ 0&0& 0&-\beta\\ 0&0& 0&-1\\ 0&0& 0&1 \end{bmatrix}$ and \\ $\Pi_2 = \Pi_1(I-Q_2) = 0$. The nonsingular matrix \begin{align*} G_3 = G_2+B_2 Q_2= \begin{bmatrix} 1&1&\alpha - \alpha'&0\\ 0&1& 0&\beta\\ 0&0& 1&1\\ 0&0& 0&1 \end{bmatrix} \end{align*} indicates that the index is 3. \item $\alpha+\beta = 0$: For this case we obtain the nullspace\\ $\ker G_1 = \{z \in \Real^4: z_1+z_2+(\alpha-\alpha')z_3=0, z_3+z_4=0 \} = \im \begin{bmatrix} \alpha-\alpha' & -1\\ 0 & 1\\ -1 & 0\\ 1 & 0 \end{bmatrix} $ and $\rank G_1 = 2$.
As an admissible nullspace projector we can choose \begin{equation*} Q_1= \begin{bmatrix} 0 & -1 & -\alpha & -\alpha'\\ 0 & 1 & \alpha & \alpha\\ 0 & 0 & 0 & -1\\ 0 & 0& 0& 1 \end{bmatrix} \end{equation*} and, since $\Pi_1 = 0$, we obtain as the next matrix chain element the nonsingular matrix \begin{align*} G_2 = \begin{bmatrix} 1&1&\alpha-\alpha' &0\\ 0&1& 0&\beta\\ 0&0& 1&1\\ 0&0& 0&1 \end{bmatrix}, \end{align*} which indicates an index-2 DAE.\\ \end{itemize} For the characteristics we have, cf.\ \eqref{trac}, $\theta_{i-1} = m- \rank G_i$, leading to \eqref{e5.theta}. \medskip \textbf{Differentiation index concept}. Inspecting the array functions \setcounter{MaxMatrixCols}{15} \begin{align*} \mathcal E_{[1]}=\begin{bmatrix} 0&1&\alpha&0&0&0&0&0\\0&0&0&\beta&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0\\ 1&0&\alpha'&0&0&1&\alpha&0\\0&1&0&\beta'&0&0&0&\beta\\0&0&1&0&0&0&0&1\\0&0&0&1&0&0&0&0 \end{bmatrix},\\ \mathcal E_{[2]}=\begin{bmatrix} 0&1&\alpha&0&0&0&0&0&0&0&0&0\\ 0&0&0&\beta&0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&\alpha'&0&0&1&\alpha&0&0&0&0&0\\ 0&1&0&\beta'&0&0&0&\beta&0&0&0&0\\ 0&0&1&0&0&0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&\alpha''&0&1&0&2\alpha'&0&0&1&\alpha&0\\ 0&0&0&\beta''&0&1&0&2\beta'&0&0&0&\beta\\ 0&0&0&0&0&0&1&0&0&0&0&1\\ 0&0&0&0&0&0&0&1&0&0&0&0 \end{bmatrix}, \end{align*} we find \begin{align*} r_{[1]}= \rank\mathcal E_{[1]}(t)&=\Bigg\lbrace\quad\begin{matrix} 5& \text{if }\alpha(t)+\beta(t)\neq 0 \\ 4& \text{if }\alpha(t)+\beta(t)= 0,\\ \end{matrix} \end{align*} but, in contrast, $\mathcal E_{[2]}(t)$ does not undergo any rank changes, \begin{align*} \dim \ker \mathcal E_{[2]}(t)= 4,\quad \rank\mathcal E_{[2]}(t)= 3m-4=8. \end{align*} Using $\theta_i = m + r_{[i]}-r_{[i+1]}$ (cf.\ Theorem \ref{t.theta_relation_array}), we obtain the same characteristic values $\theta$ as in \eqref{e5.theta}. Nevertheless, this DAE has a well-defined differentiation index equal to three.
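The derivative-array ranks stated above can be spot-checked numerically. The following sketch (ours, not part of the original computations) uses numpy and assumes constant functions $\alpha$, $\beta$, so that all terms with $\alpha'$, $\beta'$, $\alpha''$, $\beta''$ vanish and $\mathcal E_{[k]}$ becomes lower block bidiagonal with $E$ on the diagonal and $F=I$ on the subdiagonal:

```python
# Sketch: derivative-array ranks of the 4x4 example for constant alpha, beta.
import numpy as np

def e5_array(alpha, beta, k):
    """Derivative array E_[k] for constant coefficients: lower block
    bidiagonal with E on the diagonal and F = I on the subdiagonal."""
    E = np.array([[0., 1., alpha, 0.],
                  [0., 0., 0.,    beta],
                  [0., 0., 0.,    1.],
                  [0., 0., 0.,    0.]])
    m = 4
    A = np.zeros(((k + 1) * m, (k + 1) * m))
    for i in range(k + 1):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = E
        if i:
            A[i*m:(i+1)*m, (i-1)*m:i*m] = np.eye(m)  # F = I
    return A

# beta = 1: alpha+beta != 0;  beta = -1: alpha+beta = 0
ranks = {beta: tuple(int(np.linalg.matrix_rank(e5_array(1.0, beta, k)))
                     for k in (0, 1, 2))
         for beta in (1.0, -1.0)}
thetas = {beta: [4 + r[i] - r[i + 1] for i in (0, 1)]
          for beta, r in ranks.items()}
print(ranks)    # {1.0: (2, 5, 8), -1.0: (2, 4, 8)}
print(thetas)   # {1.0: [1, 1], -1.0: [2, 0]}
```

In both cases $\rank\mathcal E_{[2]}=8$, while $\rank\mathcal E_{[1]}$ drops from $5$ to $4$ at $\alpha+\beta=0$, exactly as stated.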
We add that the DAE according to \cite[Chapter 9]{CRR} is quasi-regular with an index less than or equal to three. \medskip \textbf{Projector based differentiation concept}. Starting from \begin{align*} \ker E(t)= \im \begin{bmatrix} 1 & 0 \\0 & -\alpha \\ 0 & 1 \\0 & 0 \end{bmatrix}, \quad Q=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{\alpha^2}{1+\alpha^2 } & \frac{-\alpha}{1+\alpha^2 } & 0 \\ 0 & \frac{-\alpha}{1+\alpha^2 } & \frac{1}{1+\alpha^2 } & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad P=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & \frac{1}{1+\alpha^2 } & \frac{\alpha}{1+\alpha^2 } & 0 \\ 0 & \frac{\alpha}{1+\alpha^2 } & \frac{\alpha^2}{1+\alpha^2 } & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{align*} we recognize \begin{align*} \ker Q= \ker \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -\alpha & 1 & 0 \end{bmatrix}, \quad \ker P = \ker \begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 1 & \alpha & 0 \end{bmatrix}. \end{align*} On the one hand, we have \[ \im \begin{bmatrix} \mathcal F_{[0]} Q &\quad & \mathcal E_{[0]}\end{bmatrix} = \im \begin{bmatrix} Q &\quad & E \end{bmatrix} = \im \begin{bmatrix} 1 & 0& 0\\ 0 & -\alpha & \beta \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \] such that $\mathcal V_{[0]}$ and its rank depend on whether $\alpha = -\beta$ is given or not. On the other hand the explicit constraints are \begin{align*} x_2-\beta(t) x_3 &= q_2-\beta(t) q_3, \\ x_4 &= q_4, \end{align*} such that \begin{align*} \ker E \cap S_{[0]} = \ker \begin{bmatrix} P \\ \mathcal W_{[0]} \mathcal F_{[0]} \end{bmatrix} = \ker \begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 1 & \alpha & 0 \\ \hline 0 & 1 & -\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \end{align*} Again, the dimension of this space depends on $\alpha+\beta$. 
Therefore, we consider two cases: \begin{itemize} \item $\alpha +\beta \neq 0 $: Then \[ \ker E \cap S_{[0]} = \ker \begin{bmatrix} P \\ \mathcal W_{[0]} \mathcal F_{[0]} \end{bmatrix} = \im \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}, \quad T_1= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}, \] and \[ \ker \begin{bmatrix} Q \\ \mathcal V_{[0]} \mathcal F_{[0]} \end{bmatrix} = \ker \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -\alpha & 1 & 0 \\ \hline 0 & 0 & 0 & 1 \end{bmatrix}, \quad V_1= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & \frac{1}{1+\alpha^2 } & \frac{\alpha}{1+\alpha^2 } & 0 \\ 0 & \frac{\alpha}{1+\alpha^2 } & \frac{\alpha^2}{1+\alpha^2 } & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \] The next steps lead to $V_2=0$, $T_2=T_1$, $V_3=0$, $T_3=0$, such that \[ r=2,\quad \theta_0=1, \quad \theta_1=1, \quad \theta_2=0, \quad \mu=3, \quad d=0. \] \item $\alpha+\beta=0$: Then \[ \ker E \cap S_{[0]} = \ker \begin{bmatrix} P \\ \mathcal W_{[0]} \mathcal F_{[0]} \end{bmatrix} = \im \begin{bmatrix} 1 & 0 \\ 0 & -\alpha \\ 0 & 1 \\ 0 & 0 \\ \end{bmatrix}, \quad T_1= Q, \] and \[ \ker \begin{bmatrix} Q \\ \mathcal V_{[0]} \mathcal F_{[0]} \end{bmatrix} = \ker \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & -\alpha & 1 & 0 \\ \hline 0 & 1 & \alpha & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad V_1= 0. \] The next step leads to $V_2=0$, $T_2=0$, such that \[ r=2, \quad \theta_0=2, \quad \theta_1=0, \quad \mu=2, \quad d=0. \] \end{itemize} In summary, all approaches lead to the same characteristic values \eqref{e5.theta} and reveal the same critical points at the zeros of $\alpha+\beta$. We realize that we have a solvable DAE here, although all the procedures show critical points and in particular not all derivative-array functions have constant rank. By Definition \ref{d.harmless}, these critical points are harmless. From the solution representation we recognize the precise dependence of the solution on the derivatives of the right-hand side $q$.
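This dependence can also be made explicit. The following symbolic sketch (sympy; our own computation, assuming constant $\alpha$ and $\beta$) solves the four equations from the bottom up and shows that $q_4''$ enters $x_1$ precisely with the factor $\alpha+\beta$, whose zeros are the critical points found by all procedures:

```python
# Sketch: explicit solution of the example DAE for constant alpha, beta.
import sympy as sp

t = sp.symbols('t')
alpha, beta = sp.symbols('alpha beta')
q1, q2, q3, q4 = [sp.Function(f'q{i}')(t) for i in range(1, 5)]

x4 = q4                                   # from x4 = q4
x3 = q3 - sp.diff(x4, t)                  # from x4' + x3 = q3
x2 = q2 - beta * sp.diff(x4, t)           # from beta*x4' + x2 = q2
x1 = q1 - sp.diff(x2, t) - alpha * sp.diff(x3, t)   # from the first equation

residuals = [sp.simplify(sp.diff(x2, t) + alpha*sp.diff(x3, t) + x1 - q1),
             sp.simplify(beta*sp.diff(x4, t) + x2 - q2),
             sp.simplify(sp.diff(x4, t) + x3 - q3),
             sp.simplify(x4 - q4)]

# coefficient of q4'' inside x1
c = sp.expand(x1).coeff(sp.Derivative(q4, (t, 2)))
print(residuals, sp.simplify(c - (alpha + beta)))   # all zero
```

All residuals vanish, and the coefficient of $q_4''$ in $x_1$ equals $\alpha+\beta$, which is consistent with the drop from index three to index two where $\alpha+\beta$ vanishes.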
Accordingly, a sharp perturbation index three is only valid on the subintervals where $\alpha+\beta$ has no zeros, and perturbation index two on subintervals where $\alpha+\beta$ is identically zero. This is very important when it comes to minimal smoothness and the input-output behavior from a functional analysis perspective. \end{example} \begin{example}[\cite{BCP89}, $d=1$, $r$ and $\theta_0$ change, in SCF]\label{e.1}\hfill Given the function $\alpha(t)=\begin{cases} 0&\text{ for }t\in[-1,0)\\ t^{3}&\text{ for }t\in [0,1] \end{cases}$\quad we consider the pair $\{E,F\}$, \begin{align*} E(t)=\begin{bmatrix} 1&0&0\\0&0&\alpha(t)\\0&0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 1&0&0\\0&1&0\\0&0&1 \end{bmatrix},\quad t\in\mathcal I=[-1,1], \end{align*} and the associated DAE \eqref{1.DAE}, \begin{align*} x_1'+x_1&=q_1,\\ \alpha x_3'+x_2&=q_2,\\ x_3&=q_3. \end{align*} By straightforward evaluations we know that the DAE has differentiation index two on the entire interval $[-1,1]$, but differentiation index one on the subinterval $[-1,0]$. Obviously the DAE forms a solvable system in the sense of Definition \ref{d.solvableDAE} with dynamical degree of freedom $d=1$ on the entire interval $[-1,1]$. The DAE is regular with index two in the sense of Definition \ref{d.2} on the subinterval $(0,1]$, and regular with index one on $[-1,0]$. Similarly, the perturbation index equals one on $[-1,0]$, but two on each closed subinterval of $(0,1]$. Observe that $\mathcal E_{[0]}(t)=E(t)$ changes the rank at $t=0$, but $\rank \mathcal E_{[1]}(t)=4$, $\rank \mathcal E_{[2]}(t)=7$, $t\in [-1,1]$, and $r(t)-\theta_0(t)=d=1$ is constant.
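The constant array ranks can be confirmed numerically. In the sketch below (numpy, our own construction) the arrays are assembled as lower block triangular matrices from $E$, its derivatives, and the constant $F=I$, which is our reading of the array layout, and evaluated on both subintervals:

```python
# Sketch: ranks of E_[1] and E_[2] for alpha(t) = 0 (t < 0) and t^3 (t >= 0).
import numpy as np

def E_blocks(t):
    a, a1, a2 = (t**3, 3*t**2, 6*t) if t >= 0 else (0.0, 0.0, 0.0)
    E  = np.array([[1., 0., 0.], [0., 0., a],  [0., 0., 0.]])
    E1 = np.array([[0., 0., 0.], [0., 0., a1], [0., 0., 0.]])   # E'
    E2 = np.array([[0., 0., 0.], [0., 0., a2], [0., 0., 0.]])   # E''
    return E, E1, E2

def array_ranks(t):
    E, E1, E2 = E_blocks(t)
    F, Z = np.eye(3), np.zeros((3, 3))
    A1 = np.block([[E, Z], [E1 + F, E]])
    A2 = np.block([[E, Z, Z], [E1 + F, E, Z], [E2, 2*E1 + F, E]])
    return (int(np.linalg.matrix_rank(A1)), int(np.linalg.matrix_rank(A2)))

ranks = {t: array_ranks(t) for t in (-0.5, 0.0, 0.5)}
print(ranks)   # every sample point yields (4, 7)
```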
\end{example} \begin{example}[\cite{BCP89} Example 2.4.3, $d$ constant, $r$ and $\theta_0$ change, not transferable into SCF]\label{e.2}\hfill For the functions \[ \alpha(t)=\begin{cases} 0& \text{ for } t\in [-1,0)\\ t^{3}&\text{ for } t\in [0,1] \end{cases} \text{ and }\quad \beta(t)=\begin{cases} t^{3}&\text{ for } t\in [-1,0)\\ 0&\text{ for }t\in [0,1] \end{cases} \] we consider the DAE \eqref{1.DAE} with the coefficients \begin{align*} E(t)=\begin{bmatrix} 0&\alpha(t)\\\beta(t)&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 1&0\\0&1 \end{bmatrix},\quad t\in\mathcal I=[-1,1]. \end{align*} For each arbitrary smooth right-hand side $q$, the DAE has the unique solution \begin{align*} x_1&=q_1-\alpha q_2',\\ x_2&=q_2-\beta q_1', \end{align*} so that it is solvable with zero dynamical degree of freedom. The DAE obviously has perturbation index two. Observe that $\mathcal E_{[0]}(t)=E(t)$ has a rank drop at $t=0$, but $\rank \mathcal E_{[1]}(t)=2$, $\rank \mathcal E_{[2]}(t)=4$, $t\in [-1,1]$. The DAE has differentiation index two on the entire interval $[-1,1]$ and also on each subinterval. In contrast, the basic reduction procedure from Section \ref{s.regular} indicates the point $t=0$ as critical. The DAE is regular with index two and $r=1, \theta_0=1, \theta_1=0, d=0$ on both subintervals $[-1,0)$ and $(0,1]$. \end{example} \begin{example}[$d=0$, $r$ changes, index 1 or 3, in SCF]\label{e.7} With this example in SCF we illustrate that a change of $r$ can lead to local indices on different subintervals that differ by more than one.
Given the function\\ $\alpha(t)=\begin{cases} 0 &\text{ for } t \in [-1,0)\\ t^{3} &\text{ for } t\in [0,1] \end{cases}$\quad we consider the pair $\{E,F\}$, \begin{align*} E(t)=\begin{bmatrix} 0&\alpha(t)&0\\0&0&\alpha(t)\\0&0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 1&0&0\\0&1&0\\0&0&1 \end{bmatrix},\quad t\in\mathcal I=[-1,1], \end{align*} and the associated DAE \eqref{1.DAE}, \begin{align*} \alpha x_2'+x_1&=q_1,\\ \alpha x_3'+x_2&=q_2,\\ x_3&=q_3. \end{align*} By straightforward evaluations we know that the DAE has differentiation index three on the entire interval $[-1,1]$, but differentiation index one on the subinterval $[-1,0]$. The DAE forms a solvable system in the sense of Definition \ref{d.solvableDAE} with zero dynamical degree of freedom $d=0$ on the entire interval $[-1,1]$. The DAE is regular with index three, $r=2, \theta_0=1, \theta_1=1, \theta_2=0$, and $d=0$ in the sense of Definition \ref{d.2} on the subinterval $(0,1]$, and it is regular with index one, $r=0, \theta_0=0$, and $d=0$ on $[-1,0]$. Similarly, the perturbation index equals one on $[-1,0]$, but three on each closed subinterval of $(0,1]$. The rank of $\mathcal E_{[0]}(t)$ and $\mathcal E_{[1]}(t)$ changes at $t=0$ but $\mathcal E_{[2]}(t)$, $\mathcal E_{[3]}(t)$ have constant rank each. More precisely, we have \begin{align*} \dim \ker\mathcal E_{[0]}(t)&=\Bigg\lbrace\quad\begin{matrix} 1& \text{if }\alpha(t)\neq 0 \\ 3& \text{if }\alpha(t)= 0,\\ \end{matrix}\\ \dim \ker\mathcal E_{[1]}(t)&=\Bigg\lbrace\quad\begin{matrix} 2& \text{if }\alpha(t)\neq 0 \\ 3& \text{if }\alpha(t)= 0,\\ \end{matrix}\\ \dim \ker\mathcal E_{[2]}(t) = \dim \ker\mathcal E_{[3]}(t)&=3, \quad t\in\mathcal I=[-1,1]. \end{align*} \end{example} \subsection{A case study}\label{e.struc} With the following case study we emphasize that monitoring the index of DAEs is not sufficiently informative. 
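The kernel dimensions listed in Example \ref{e.7} can first be reproduced numerically; the sketch below (numpy, our own construction, assembling $\mathcal E_{[k]}$ as lower block triangular matrices from $E$, its derivatives, and $F=I$) evaluates the arrays at one point of each subinterval:

```python
# Sketch: kernel dimensions of E_[0],...,E_[3] for alpha(t)=t^3 (t>=0), 0 (t<0).
import numpy as np

def kernel_dims(t):
    a, a1, a2, a3 = (t**3, 3*t**2, 6*t, 6.0) if t >= 0 else (0., 0., 0., 0.)
    def U(c):    # strictly upper triangular 3x3 with entry c, as in E(t)
        return np.array([[0., c, 0.], [0., 0., c], [0., 0., 0.]])
    E, E1, E2, E3 = U(a), U(a1), U(a2), U(a3)
    F, Z = np.eye(3), np.zeros((3, 3))
    arrays = [
        E,
        np.block([[E, Z], [E1 + F, E]]),
        np.block([[E, Z, Z], [E1 + F, E, Z], [E2, 2*E1 + F, E]]),
        np.block([[E, Z, Z, Z], [E1 + F, E, Z, Z],
                  [E2, 2*E1 + F, E, Z], [E3, 3*E2, 3*E1 + F, E]]),
    ]
    return tuple(A.shape[0] - int(np.linalg.matrix_rank(A)) for A in arrays)

dims = {t: kernel_dims(t) for t in (-0.5, 0.5)}
print(dims)   # {-0.5: (3, 3, 3, 3), 0.5: (1, 2, 3, 3)}
```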
For a deeper understanding of their properties, all characteristic values $r$ and $\theta_i$ should be considered.\\ For identity matrices $I_i\in \Real^{m_i \times m_i}$, $i=1,2,3$ and constant strictly upper triangular matrices $N_2 \in \Real^{m_2 \times m_2}$, $N_3 \in \Real^{m_3 \times m_3}$ of the special form \[ N_i=\begin{bmatrix} 0 & 1 & \cdots & 0\\ 0 & 0 & \ddots \\ \vdots & & \ddots & 1\\ 0 & 0 & \cdots & 0 \end{bmatrix}, \quad i=2,3, \] let us consider DAEs of the form \[ \begin{bmatrix} \alpha_1(t) I_1 & 0 & 0\\ 0 & \alpha_2(t) N_2 & 0 \\ 0 & 0 & \alpha_3(t) N_3 \end{bmatrix}\begin{bmatrix} x_1' \\ x_2'\\ x_3' \end{bmatrix} + \begin{bmatrix} \beta_1(t) I_1 & 0 & 0\\ 0 & \beta_2(t) I_2 & 0 \\ 0 & 0 & \beta_3(t) I_3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2\\ x_3 \end{bmatrix} =\begin{bmatrix} q_1 \\ q_2\\ q_3 \end{bmatrix}, \] with $m=m_1+m_2+m_3$ and smooth functions $\alpha_i, \beta_i: \mathcal I \rightarrow \Real$. \begin{itemize} \item We focus first on the functions $\beta_i$: \begin{itemize} \item Zeros of $\beta_1(t)$ are not critical at all. \item If $\beta_2(t_*)$ or $\beta_3(t_*)$ is zero for $t_* \in \mathcal I$, then $\left[ E(t_*) \ F(t_*)\right]$ has a trivial row. Then $\left\{E,F\right\}$ is not qualified on $\mathcal I$, cf.\ Definition \ref{d.qualified}, and the necessary solvability condition \eqref{eq:fullrank} is violated, as in Example \ref{e.6}. \end{itemize} \item Let us suppose now that $\beta_2$ and $\beta_3$ have no zeros and focus on $\alpha_1$: \begin{itemize} \item Zeros of $\alpha_1(t)$ obviously cause a singular ODE $\alpha_1(t)x_1' + \beta_1(t)x_1=q_1$. \item If $\alpha_1$ has no zeros, then the degree of freedom is constant $d=m_1$ regardless of whether the $\alpha_i$, $i=2,3$, have zeros or not.
\end{itemize} \item Let us suppose now that $\alpha_1$, $\beta_2$ and $\beta_3$ have no zeros and focus on $\alpha_2$, $\alpha_3$: \begin{itemize} \item For $\alpha_2(t)\neq 0$, $\alpha_3(t) \neq 0$ for all $ t \in \mathcal I$, the DAE is regular with index $\mu=\max \left\{m_2, m_3 \right\}$. \item For $m_2\geq m_3$ and $\alpha_2(t)\neq 0$ for all $ t \in \mathcal I$, the DAE has differentiation index $\mu=m_2$ and all points $t_*$ such that $\alpha_3(t_*)=0$, but $\alpha_3$ does not identically vanish in a neighborhood of $t_*$, are harmless critical points. \item For $m_2 > m_3$, all points $t_*$ such that $\alpha_2(t_*)=0$, but $\alpha_2$ does not identically vanish in a neighborhood of $t_*$, are harmless critical points. \item For $m_2 > m_3$, if $\alpha_2$ vanishes identically on a subinterval $\mathcal I_*\subset\mathcal I$, then the DAE restricted to this subinterval has differentiation index $\mu\leq m_3<m_2$. \item In general it may happen, if both $\alpha_2$ and $\alpha_3$ vanish on a subinterval, that there the index reduces to one. \end{itemize} \end{itemize} In general, for $\alpha_i(t)\neq0$, $\beta_j(t)\neq 0$ for $i=1,2,3$ and $j=2,3$, by construction it holds \begin{align*} r&=m_1+(m_2-1)+(m_3-1),\\ \mu&=\max \{m_2,m_3\}, \\ \theta_i &= \begin{cases} 2 & \text{ for } i \leq \min\{m_2-2,m_3-2\}, \\ 1 &\text{ for } \min\{m_2-2,m_3-2\} < i \leq \mu-2, \\ 0 & \text{ else}, \end{cases}\\ d&= m_1. \end{align*} For instance, for $m_1=4, m_2=2, m_3=3$ this means \begin{align*} r&=7,\ \quad &\mu&=3, \ \quad\ &\theta_0 &= 2,\\ \theta_1 &= 1,\ \quad &\theta_2 &=0,\ \quad &d&= 4. \end{align*} In general, zeros of $\alpha_i$ or $\beta_j$ imply a change of these characteristic values.\\ For general linear DAEs, the components are intertwined in a complex manner, such that it is not possible to guess directly whether zeros of some coefficients in $\{E,F\}$ are critical or not. However, monitoring these characteristic values may provide crucial indications. 
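For the instance $m_1=4$, $m_2=2$, $m_3=3$ the stated characteristic values can be reproduced numerically from the derivative arrays via $\theta_i = m + r_{[i]}-r_{[i+1]}$. The sketch below (numpy, our own construction, with constant $\alpha_i=\beta_j=1$ so that the arrays are lower block bidiagonal) is an illustration, not part of the argument:

```python
# Sketch: r, theta_i for the case study with m1, m2, m3 = 4, 2, 3.
import numpy as np

m1, m2, m3 = 4, 2, 3
m = m1 + m2 + m3
E = np.zeros((m, m))
E[:m1, :m1] = np.eye(m1)                       # alpha_1 * I_1
E[m1:m1+m2, m1:m1+m2] = np.eye(m2, k=1)        # alpha_2 * N_2
E[m1+m2:, m1+m2:] = np.eye(m3, k=1)            # alpha_3 * N_3
F = np.eye(m)                                  # beta_j * I_j with beta_j = 1

def r(k):   # rank of the constant-coefficient derivative array E_[k]
    A = np.zeros(((k + 1) * m, (k + 1) * m))
    for i in range(k + 1):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = E
        if i:
            A[i*m:(i+1)*m, (i-1)*m:i*m] = F
    return int(np.linalg.matrix_rank(A))

ranks = [r(k) for k in range(4)]
thetas = [m + ranks[i] - ranks[i + 1] for i in range(3)]
print(ranks, thetas)   # [7, 14, 22, 31] [2, 1, 0]
```

The values $r=7$, $\theta_0=2$, $\theta_1=1$, $\theta_2=0$, and hence $d=r-\theta_0-\theta_1=4$, agree with the values stated above.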
\section{Main equivalence theorem concerning regular DAEs and further comments on regularity and index notions}\label{s.Notions} \subsection{Main equivalence theorem concerning regular DAEs}\label{sec:MainTheorem} We finally summarize the main equivalence results of the preceding sections in statements for regular pairs $\{E,F\}$ and associated DAEs \eqref{DAE0} on the interval $\mathcal I$. Each of the involved equivalent DAE frameworks is based on a number of specific characteristics. The characteristic values correspond to requirements for constant dimensions of subspaces or ranks of matrix functions. Recall that we always assume for regularity that the matrix functions $E, F : \mathcal I\rightarrow \Real^{m\times m}$ are sufficiently smooth and $E$ has constant rank $r$. \medskip In the previous chapters, we precisely described the relationships between the individual characteristics of each concept and the $\theta$-values \eqref{theta}, \[ \theta_{0}\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0, \] introduced in our basic regularity Definition \ref{d.2}. The following main theorem emphasizes the universality of the $\theta$-characteristic, which is why we take the liberty of calling them, together with $r$, {\it canonical characteristics}; for regular DAEs they in particular indicate the dynamical degree of freedom $d=\dim S_{can}$ and the inner structure of the canonical subspace $N_{can}$, cf.\ Remark \ref{r.pencil}. We claim and highlight that the characteristics $\theta_i$ can be found regardless of which DAE concept we start from. \medskip Let us briefly consider the easier cases $\mu=0,1$: \begin{itemize} \item For constant $r=m$ the regular DAE is a regular implicit ODE, and the index is $\mu=0$. We can interpret this as $\theta_i=0$ for all $i$ and $d=m$. \item For constant $r<m$ the condition $\theta_0=0$ is equivalent to $\mu=1$. We interpret this as $\theta_i=0$ for all $i$ and $d=r<m$.
Then we have a regular index-one DAE and a strangeness-free DAE, respectively. \end{itemize} Owing to Proposition \ref{p.index1} we know that the DAE associated to the pair $\{E, F\}$ has differentiation index one if and only if the pair is regular with index one in the sense of Definition \ref{d.2}. Moreover, all further index-1 concepts are consistent with each other, too, which is well-known. To enable simpler formulations, we now turn to the case that the index is higher than or equal to two. For regular DAEs with index $\mu\geq 2$ we have seen that $\theta_0>0$ follows by definition, as will be specified in the following. For the sake of uniformity, in general for $i>\mu-2$ we set $\theta_i=0$, cf.\ Remark \ref{r.inf}. \begin{theorem}[Main Equivalence Theorem]\label{t.Sum_equivalence} Let $E, F:\mathcal I\rightarrow\Real^{m\times m}$ be sufficiently smooth, and $\mu\in \Natu$, $\mu\geq2$. \textbf{Part A (Equivalence):} The following 13 assertions are equivalent in the sense that the characteristic values (constants) of each of the statements can be uniquely reproduced by those of any other statement: \begin{description} \item[\textrm{(0)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with index $\mu\in \Natu$ according to Definition \ref{d.2}, with the associated characteristics \[ r=\rank E,\ \theta_{0}\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0,\quad \theta_i:=\dim (\ker E_i\cap S_i). \] \item[\textrm{(1)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with index $\mu\in \Natu$ according to Definition \ref{d.2a}, with the associated characteristics \[ r=r_0>r_1> \ldots > r_{\mu-1}=r_{\mu}, \quad r_i:=\rank E_i. \] \item[\textrm{(2)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with elimination index $\mu^E=\mu$ according to Definition \ref{d.Elim}, with the associated characteristics \[ r=r^E_0>r^E_1> \ldots >r^E_{\mu-1}=r^E_{\mu},\quad r^E_i:=\rank E^E_i. 
\] \item[\textrm{(3)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with dissection index $\mu^D=\mu $ according to Definition \ref{d.diss}, with the associated characteristics \begin{align*} r=r^D_0\leq r^D_1\leq \cdots \leq r^D_{\mu-1}<r^D_{\mu} =m,\quad r^D_i:=\rank E^D_i=r^D_{i-1}+a^D_{i-1}. \end{align*} \item[\textrm{(4)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with strangeness index $\mu^S=\mu-1$ according to Definition \ref{d.strangeness}, with associated characteristic triples \[ (r^S_i,a^S_i,s^S_i),\ i=0,\ldots,\mu-1,\;r_0^S=r,\, s^S_{\mu-1}=0. \] \item[\textrm{(5)}] The pair $\{E,F\}$ is regular on $\mathcal I$ with tractability index $\mu^T=\mu$ according to Definition \ref{d.trac}, with associated characteristics \[ r=r^T_0\leq r^T_1\leq\dots\leq r^T_{\mu-1}<r^T_{\mu}=m,\quad r^T_i=\rank G_i. \] \item[\textrm{(6)}] The pair $\{E,F\}$ can be equivalently transformed to block-structured Standard Canonical Form \eqref{blockstructure}, in which the nilpotent matrix function $N(t)=(\tilde N_{ij}(t))^{\mu}_{i,j=1}, \,\tilde N_{i,j}:\mathcal I\rightarrow\Real^{\tilde l_i\times \tilde l_j}$, is block-upper-triangular, has nilpotency index $\mu$ on $\mathcal I$, and has full row-rank blocks on the secondary block diagonal, with characteristics \[ 1\leq\tilde l_1\leq\cdots\leq\tilde l_{\mu},\; \tilde l_i=\rank \tilde N_{i,i+1}, \,i=1,\ldots,\mu-1,\, \tilde l_{\mu}=m-r, r:=\rank E. \] \item[\textrm{(7)}] The pair $\{E,F\}$ can be equivalently transformed to block-structured Standard Canonical Form \eqref{blockstructure}, in which the nilpotent matrix function $N(t)=(N_{ij}(t))^{\mu}_{i,j=1}, \, N_{i,j}:\mathcal I\rightarrow\Real^{l_i\times l_j}$, is block-upper-triangular, has nilpotency index $\mu$ on $\mathcal I$, and has full column-rank blocks on the secondary block diagonal, with characteristics \[ l_1\geq\cdots\geq l_{\mu},\; l_{1}=m-r,\, l_{i+1}=\rank N_{i,i+1}, \,i=1,\ldots,\mu-1, r:=\rank E.
\] \item[\textrm{(8)}] For the pair $\{E,F\}$ the DAE \eqref{1.DAE} has regular differentiation index $\mu^{rdiff}=\mu$ on $\mathcal I$ according to Definition \ref{d.G1}, with associated characteristics \[ r_{[0]}=r,\, r_{[i]}< r_{[i-1]}+m,\, i=1,\ldots, \mu-2,\, r_{[\mu-1]}= r_{[\mu]}-m,\: r_{[i]}:=\rank \mathcal E_{[i]}. \] \item[\textrm{(9)}] For the pair $\{E,F\}$ the DAE \eqref{1.DAE} has projector-based differentiation index $\mu^{pbdiff}=\mu$ on $\mathcal I$ according to Definition \ref{d.pbdiffA}, with associated characteristics \[ r_{[0]}=r,\, r_{[i]}< r_{[i-1]}+m,\, i=1,\ldots, \mu-2,\,\: r_{[i]}:=\rank \mathcal E_{[i]}, \] \[ \rho_{0}\leq\cdots\leq \rho_{\mu-2}<\rho_{\mu-1}=m, \;\rho_{i}:=m-\dim \ker E\cap S_{[i]}. \] \item[\textrm{(10)}] For the pair $\{E,F\}$ the DAE \eqref{1.DAE} has projector-based differentiation index $\mu^{pbdiff}=\mu$ on $\mathcal I$ according to Definition \ref{d.pbdiffB}, with associated characteristics \[ r^{\mathcal B}_{[0]}=r,\, r^{\mathcal B}_{[i]}< r^{\mathcal B}_{[i-1]}+m,\, i=1,\ldots, \mu-2,\, r^{\mathcal B}_{[\mu-1]}= r^{\mathcal B}_{[\mu]}-m,\: r^{\mathcal B}_{[i]}:=\rank \mathcal B_{[i]}. \] \item[\textrm{(11)}] For the pair $\{E,F\}$ the DAE \eqref{1.DAE} has differentiation index $\mu^{diff}=\mu$ on $\mathcal I$ according to Definition \ref{d.diff} and, additionally, also the matrix functions $\mathcal E_{[i]}$, $i<\mu$, have constant rank on $\mathcal I$, so that the characteristics are \[ r_{[0]}=r,\, r_{[i]}< r_{[i-1]}+m,\, i=1,\ldots, \mu-2,\, r_{[\mu-1]}= r_{[\mu]}-m,\: r_{[i]}:=\rank \mathcal E_{[i]}.
\] \item[\textrm{(12)}] For the pair $\{E,F\}$ the DAE \eqref{1.DAE} satisfies the Strangeness-Free-Hypothesis \eqref{SHyp} on $\mathcal I$ with $\hat{\mu}=\mu-1$, with associated characteristics $\hat a$ and $\hat d=m-\hat a$, and, additionally, also the matrix functions $\mathcal E_{[i]}$, $i<\hat{\mu}$, have constant rank on $\mathcal I$, so that the characteristics are \[ r_{[0]}=r,\, r_{[i]}< r_{[i-1]}+m,\, i=1,\ldots, \mu-2,\, r_{[\mu-1]}= \mu m-\hat a,\: r_{[i]}:=\rank \mathcal E_{[i]}. \] \end{description} \textbf{Part B (Relations between characteristics):} Let the DAE \eqref{1.DAE} be regular with index $\mu$ in the sense of Definition \ref{d.2} or one of the equivalent statements of \textbf{Part A}. Then the following relations concerning the diverse characteristic values are valid: \begin{align*} r_0 &= r_{0}^E=r_0^S=r_0^D=r_0^T=r,\\ l_{1}&= m-r, \quad \quad \tilde{l}_{\mu}=m-r, \end{align*} and for $i=1, \ldots, \mu-1$ \begin{align*} r_i&= r_{i}^E=r_i^S=r-\sum_{j=0}^{i-1} \theta_j, \\ r_i^D&=r_i^T=\rho_{i-1}=m-\theta_{i-1},\\ s_i^S&= \theta_i, \quad a_i^S=m-r+\sum_{j=0}^{i-1} \theta_j-\theta_i,\\ l_{i+1}&=\theta_{i-1}, \quad \tilde{l}_i= \theta_{\mu-i-1},\\ r_{[i]}&=\rank \mathcal B_{[i]} = \rank \mathcal D_{[i]} = \rank \mathcal E_{[i]}=im+r-\sum_{j=0}^{i-1}\theta_{j}. \end{align*} Conversely, for $i=0, \ldots, \mu-1$, we obtain \begin{align*} \theta_i&=r_i-r_{i+1}=s_i^S=r_i^S-r_{i+1}^S=r_i^E-r_{i+1}^E=m-r_{i+1}^T=m-r_{i+1}^D \\ &=m-\rho_i =r_{[i]}-r_{[i+1]}+m, \end{align*} which leads to $\theta_{\mu-1}=0$, and \begin{align*} \theta_i &=l_{i+2}=\tilde{l}_{\mu-i-1}, \quad \mbox{for} \quad i=0, \ldots, \mu-2. \end{align*} \textbf{Part C (Description of resulting main features):} Let the DAE \eqref{1.DAE} be regular with index $\mu$ in the sense of Definition \ref{d.2} or one of the equivalent statements of \textbf{Part A}.
Then the following descriptions concerning the dynamical degree of freedom $d$ of the DAE and the number of constraints $a$ are given: \begin{align*} d&=r-\sum_{i=0}^{\mu-2} \theta_i,\\ a&=m-d=m-r+\sum_{i=0}^{\mu-2} \theta_i,\\ a&=\hat a=\mu m-r_{[\mu-1]}=\sum_{i=1}^{\mu}l_i=\sum_{i=1}^{\mu}\tilde{l}_i,\\ d&=\hat d=m-\hat a = r_{[\mu-1]}-(\mu-1)m=r_{[\mu]}-\mu m. \end{align*} \end{theorem} \begin{proof} \textbf{Part A:} \begin{itemize} \item The equivalence of any two of the five statements \textrm{(0)}, \textrm{(2)}, \textrm{(3)}, \textrm{(4)}, and \textrm{(5)} is given by Theorem \ref{t.equivalence}. The main tools for achieving this are \cite[Theorem 4.3]{HaMae2023} and Proposition \ref{p.STform}. \item The equivalence of any two of the statements \textrm{(0)}, \textrm{(8)}, \textrm{(9)}, \textrm{(11)}, \textrm{(12)} has been provided by Theorem \ref{t.equivalence_array}. An important step for this proof is the equivalence of \textrm{(0)} and \textrm{(8)}, which has been verified by Theorem \ref{t.rdiff} \textrm{(1)}. \item The equivalence of \textrm{(0)} and \textrm{(1)} has been verified in Section \ref{s.regular} right after Definition \ref{d.2a}. \item The equivalence of \textrm{(0)} and \textrm{(6)} as well as that of \textrm{(0)} and \textrm{(7)} are implications of Theorem \ref{t.SCF} and Proposition \ref{p.STform}. \item The equivalence of \textrm{(9)} and \textrm{(10)} has been shown in Section \ref{subs.pbdiff} right after Definition \ref{d.pbdiffA}. \end{itemize} \textbf{Part B and Part C:} These are straightforward summaries of the related findings of Theorem \ref{t.indexrelation}, Corollary \ref{c.SCT}, Theorem \ref{t.theta_relation_array}, and Corollary \ref{c.degree}.
\end{proof} Obviously, each one of the equivalent statements \textrm{(0)}-\textrm{(12)} from Theorem \ref{t.Sum_equivalence} implies both statements \textrm{(1)} and \textrm{(2)} from Theorem \ref{t.diff2}, but the reverse direction does not apply, cf.\ discussion in Section \ref{subs.equivalence} and examples in Section \ref{s.examples}. Indeed, \textrm{(1)}-\textrm{(2)} from Theorem \ref{t.diff2} only require constant $r_{[\mu]}$ and $d$. \bigskip In case of regularity the dynamical degree of freedom $d=r-\sum_{i=0}^{\mu-2} \theta_i$ is precisely the dimension of the flow-subspace $S_{can}$ of the DAE. This basic canonical subspace is characterized in different manners in the literature; we emphasized \begin{itemize} \item $S_{can} = \im C$ (see Section \ref{s.regular}), \item $S_{can} = \im \Pi_{can}$ (see Section \ref{subs.tractability}), \item $S_{can} =S_{[\mu-1]}$ (see Section \ref{subs.SCFarrays}). \end{itemize} The second canonical subspace $N_{can}$ is defined to be the so-called canonical complement to the flow-subspace, \[ S_{can}(t)\oplus N_{can}(t)=\Real^m,\quad t\in \mathcal I. \] Actually $N_{can}$ accommodates important information about the structure of the DAE and the necessary differentiations. This is closely related to the perturbation index. Only a uniform inner structure of the part of the DAE that resides in the subspace $N_{can}$ ensures a uniform perturbation index over the given interval. We know the representations: \begin{itemize} \item $N_{can} = \ker C^*_{adj}E$ (see Proposition \ref{p.adjoint}), \item $N_{can} = \ker \Pi_{can}=\ker \Pi_{\mu-1}= N_0+N_1+\cdots+N_{\mu-1}$ (see Section \ref{subs.tractability}).
\end{itemize} If the DAE is in standard canonical form \begin{align}\label{SCFDAEneu} \begin{bmatrix} I_d&0\\0&N(t) \end{bmatrix} x'(t)+ \begin{bmatrix} \Omega(t)&0\\0&I_a \end{bmatrix} x(t)=q(t),\quad t\in\mathcal I, \end{align} where $N$ is strictly upper triangular, then the canonical subspaces are simply \begin{align*} S_{can}=\im \begin{bmatrix} I_d\\0 \end{bmatrix},\quad N_{can}=\im \begin{bmatrix} 0\\I_a \end{bmatrix}, \end{align*} and the behaviour of the solution components evolving in $N_{can}$ is governed by the properties of the matrix function $N$. Clearly, if $N$ is constant, which means that the DAE is in strong standard canonical form, then the DAE is regular with index $\mu$ and characteristics $\theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0$, where $\mu$ is the nilpotency index of the matrix $N$, and the Jordan normal form of the matrix $N$ shows\footnote{Compare also Remark \ref{r.pencil} and Section \ref{sec:SCF}.} \begin{align*} &\theta_0 \quad \text{Jordan blocks of order } \geq 2,\\ &\theta_1 \quad \text{Jordan blocks of order } \geq 3,\quad\\ &...\\ &\theta_{\mu-3}\quad \text{Jordan blocks of order } \geq \mu-1,\\ &\theta_{\mu-2}\quad \text{Jordan blocks of order } \mu. \end{align*} Consequently, each DAE that is transformable into strong standard canonical form is regular, and its characteristics are determined by the structure of the nilpotent matrix $N$. This gives the characteristic values \[ r \text{ and }\quad \theta_0\geq\cdots\geq \theta_{\mu-2}>\theta_{\mu-1}=0 \] a rather phenomenological background apart from special approaches and the further justification to call them canonical. The latter is all the more important if our following conjecture proves correct. \medskip \begin{conjecture}\label{conjecture} Let $E, F:\mathcal I\rightarrow\Real^{m\times m}$ be sufficiently smooth.
If the pair $\{E,F\}$ is regular with index $\mu\geq2$ then it is transformable into strong standard canonical form and the Jordan normal form of the matrix $N$ is exactly made of \begin{align*} &m-r-\theta_0 \quad \text{Jordan blocks of order } 1,\\ &\theta_0-\theta_1 \quad \text{Jordan blocks of order } 2,\\ &\theta_1-\theta_2 \quad \text{Jordan blocks of order } 3,\quad \\ &...\\ &\theta_{\mu-3}-\theta_{\mu-2}\quad \text{Jordan blocks of order } \mu-1,\\ &\theta_{\mu-2}\quad \text{Jordan blocks of order } \mu. \end{align*} \end{conjecture} \subsection{What is regularity supposed to mean?}\label{subs.regularity} As we have seen, our formal basic definition of regularity in Section \ref{s.regular} agrees with the view of many other authors. We keep in mind that the characteristic values in all corresponding concepts are derived from specific rank functions and represent constant rank requirements. The equivalence results then allow the simultaneous application of all the corresponding different concepts. We associate regularity of a linear DAE $Ex'+Fx=q$, $E,F:\mathcal I\rightarrow\Real^{m\times m}$, with the following five criteria ensuring a regular flow combined with a homogeneous dependency on the input function $q$ on the given interval: \begin{description} \item[\textrm{(1)}] The homogeneous DAE $Ex'+Fx=0$ has a finite-dimensional solution space $S_{can}(t), t\in\mathcal I,$ with constant dimension $d=\dim S_{can}(t), t\in\mathcal I$, which serves as the dynamical degree of freedom of the system. \item[\textrm{(2)}] The DAE possesses a solution at least for each $q\in \mathcal C^{m}(\mathcal I,\Real^{m})$. \item[\textrm{(3)}] Regularity, the index $\mu$, and the canonical characteristic values \begin{align}\label{char} r<m,\; \theta_{0}\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0,\; d=r-\sum_{k=0}^{\mu-2}\theta_k, \end{align} persist under equivalence transformations. \item[\textrm{(4)}] Regularity generalizes the Kronecker structure of regular matrix pencils.
\item[\textrm{(5)}] If the DAE is regular with canonical characteristic values \eqref{char} on the given interval $\mathcal I$, then its restriction to any subinterval of $\mathcal I$ is regular with the same characteristics. \end{description} In other words, a regular linear DAE is characterized by its two canonical subspaces $S_{can}$ and $N_{can}$, both featuring a homogeneous structure on the given interval. \medskip A point $t_*\in\mathcal I$ is called a \emph{regular point} of the DAE, if there is an open interval $\mathcal I_*$ containing $t_*$ such that the DAE is regular on $\mathcal I_*\cap\mathcal I$. Otherwise the point $t_*$ is called a \emph{critical point}. If the DAE is regular on the interval $\mathcal I$, then all points of $\mathcal I$ are regular points. It should not go unmentioned that there are also other names for this, including \emph{singular point} in \cite[Chapter 4]{RR2008} and \emph{exceptional point} in \cite[Page 80]{KuMe2006}. We prefer the term \emph{critical} because in our opinion it leaves it open whether there are strong singularities in the flow or harmless critical points in solvable systems that only become apparent under rigorously low smoothness assumptions, as pointed out, e.g., in \cite[Section 2.9]{CRR}, see also Definition \ref{d.harmless}. We refer to \cite[Chapter 4]{RR2008} for a careful investigation and classification of points failing to be regular. \medskip The \emph{class of regular linear DAEs} comprises exactly all those DAEs that can be transformed equivalently into a structured SCF from Theorem \ref{t.SCF}, that is, the inner algebraic structure of the subspace $N_{can}(t),\; t\in \mathcal I$, which represents the pointwise canonical complement to the flow-subspace $S_{can}(t)$, does not vary with time. Correspondingly, there is a homogeneous structure on the whole interval $\mathcal I$ of how the solutions depend on derivatives of the right-hand side $q$.
In particular, the perturbation index does not change if one turns to subintervals. \bigskip The \emph{class of almost regular linear DAEs} established in Definition \ref{d.almost-reg} coincides with the class of solvable DAEs by Definition \ref{d.solvableDAE}, see also Remark \ref{r.generalform}, and it is actually more capacious than the class of regular linear DAEs, which are all solvable, of course. Almost regular systems possess a well-defined differentiation index $\mu^{diff}$ and they satisfy the SF-Hypothesis with $\hat{\mu}=\mu^{diff}-1$. They feature a dense set of regular points. More precisely, they satisfy the above regularity criteria {\textrm (1)}, {\textrm (2)}, and {\textrm (4)}. Criterion {\textrm (5)} is not valid. Almost regular systems allow for so-called harmless critical points which do not affect the flow in sufficiently smooth problems. Instead of criterion {\textrm (3)} one has merely a flow-subspace $S_{can}(t)$ of constant dimension $d$ and a pointwise canonical complement $N_{can}(t)$ of constant dimension $a=m-d$, but the inner algebraic structure of the latter is no longer constant. The perturbation index may vary on subintervals. \subsection{Regularity, accurately stated initial condition, well-posedness, and ill-posedness}\label{subs.posedness} Let $\{E,F\}$, $E,F:\mathcal I=[a,b]\rightarrow\Real^{m\times m}$, be a regular pair with index $\mu$ and canonical characteristics $r$ and $\theta_0\geq\cdots\geq\theta_{\mu-2}>\theta_{\mu-1}=0$. The DAE has the dynamical degree of freedom $d$. Obviously, for fixing a special solution from the flow one needs precisely $d$ scalar requirements, but also the right way to frame them. We consider the initial value problem (IVP) \begin{align} Ex'+Fx&=q,\label{IVP1}\\ Gx(a)&=\gamma,\quad \gamma\in \im G,\label{IVP2} \end{align} with sufficiently smooth data and the matrix $G\in\Real^{s\times m}$, $s\geq d$, $\rank G=d$.
The initial condition \eqref{IVP2} for the DAE \eqref{IVP1} is said to be \emph{accurately stated}, e.g. \cite[Section 5]{HaMae2023}, \cite[Definition 2.3]{LMW}, if there is a solution $x_*$, each IVP with a slightly perturbed initial condition, \begin{align} Ex'+Fx&=q,\label{IVP3}\\ Gx(a)&=\gamma+\varDelta\gamma,\quad \varDelta\gamma\in\im G,\label{IVP4} \end{align} has a unique solution $x$, and the inequality \begin{align*} \max_{t\in [a,b]}|x(t)-x_*(t)|\leq K |\varDelta\gamma| \end{align*} is valid with a constant $K$. As pointed out in \cite{HaMae2023}, the two canonical time-varying subspaces, the flow-subspace $S_{can}$ and its canonical complement $N_{can}$, which have long been established in the context of the projector based analysis, e.g., \cite{CRR}, are well-defined for regular DAEs, and the initial condition \eqref{IVP2} is accurately stated exactly if \begin{align}\label{IVP5} \ker G=N_{can}(a). \end{align} This assertion is evident if one deals with a DAE in SCF, that is, \begin{align} u'+\Omega u&=f,\nonumber\\ Nv'+v&=g,\label{IVP6} \end{align} and \begin{align*} E=\begin{bmatrix} I_d&0\\0&N \end{bmatrix},\; F=\begin{bmatrix} \Omega&0\\0&I_a \end{bmatrix},\; S_{can}= \im \begin{bmatrix} I_d\\0 \end{bmatrix}, \; N_{can}= \im \begin{bmatrix} 0\\I_a \end{bmatrix},\; G=\begin{bmatrix} I_d&0 \end{bmatrix}. \end{align*} To achieve easier insight we suppose a constant $N$ in \eqref{IVP6}. Denoting by $P_{N^i}$ a projector matrix along the nullspace of the power $N^i$, such that $N^i=N^iP_{N^i}$, the unique solution $v_*$ corresponding to $g$ is given by the formula \begin{align}\label{IVP7a} v_*=g+\sum_{i=1}^{\mu-1}(-N)^i(P_{N^i}g)^{(i)}, \end{align} which precisely indicates all involved derivatives of the right-hand side $g$. It becomes evident that each initial requirement for $v_*(a)$ would immediately be passed on as a consistency condition for the right-hand side $g$ and its derivatives. Condition \eqref{IVP5} has to prevent this.
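Formula \eqref{IVP7a} is easy to verify numerically in a small case. The following sketch is ours and purely illustrative: it assumes $\mu=2$ with the nilpotent block $N=\left[\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right]$, the projector $P_N=\left[\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right]$ along $\ker N$ (so that $N=NP_N$), and the sample right-hand side $g(t)=(\sin t,\cos t)^T$; it checks that $v_*=g-N(P_Ng)'$ indeed satisfies $Nv_*'+v_*=g$.

```python
import math

# Illustrative data (our choice): N = [[0,1],[0,0]] is nilpotent of index 2,
# P_N = [[0,0],[0,1]] is the projector along ker N = span{e1}, so N = N P_N.
def g(t):        return (math.sin(t), math.cos(t))
def g2_prime(t): return -math.sin(t)          # derivative of g2 = cos

# v_*(t) = g(t) - N (P_N g)'(t) = (g1 - g2', g2), written in closed form
def v(t):        return (math.sin(t) - g2_prime(t), math.cos(t))
def v_prime(t):  return (2*math.cos(t), -math.sin(t))

def residual(t):
    # components of N v' + v - g, where N v' = (v2', 0)
    v1, v2 = v(t)
    g1, g2 = g(t)
    return (v_prime(t)[1] + v1 - g1, v2 - g2)

for t in (0.0, 0.3, 1.7, -2.2):
    assert max(abs(r) for r in residual(t)) < 1e-12
print("N v' + v = g confirmed at sample points")
```

Note how the first solution component picks up the derivative $g_2'$, which is exactly the derivative dependence that formula \eqref{IVP7a} makes explicit.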
\bigskip If the initial conditions are stated accurately, small changes have only a small effect on the solution. Unfortunately, this can look completely different if the right-hand side of the DAE itself is changed. The smallest changes in $g$ can cause huge changes in the solution. We take a closer look at the initial value problem with perturbed DAE, \begin{align} Ex'+Fx&=q+\varDelta q,\label{IVP8}\\ Gx(a)&=\gamma,\quad \gamma\in \im G,\nonumber \end{align} and suppose $\ker G=N_{can}$. Again we turn to the SCF, which decomposes the original problem into an IVP for a regular ODE living in $\Real^d$ and an index-$\mu$ DAE with zero dynamical degree of freedom, \begin{align} u'+\Omega u&=f+\varDelta f,\quad u(a)= \gamma, \nonumber\\ Nv'+v&=g+\varDelta g.\label{IVP9} \end{align} Again, to make it easier to understand what is going on, we suppose a constant $N$. Then the unique solution of \eqref{IVP9} reads \begin{align}\label{IVP7b} v=g+\varDelta g+\sum_{i=1}^{\mu-1}(-N)^i(P_{N^i}(g+\varDelta g))^{(i)}. \end{align} According to the traditional definition of Hadamard, an operator equation $Ty=z$ with a linear bounded operator $T$ between Banach spaces $Y$ and $Z$ is called \emph{well-posed} if $T$ is a bijection, and \emph{ill-posed} otherwise. In the well-posed case, there is a unique solution $y\in Y$ for each arbitrary $z\in Z$, and the inverse $T^{-1}$ is bounded, too, such that, if $z$ tends to $z_*$ in $Z$, then $y:=T^{-1}z$ necessarily tends in $Y$ to $y_*:=T^{-1} z_*$, and \begin{align*} \lVert y-y_*\rVert_{Y} \leq \lVert T^{-1}\rVert_{Z\rightarrow Y} \; \lVert z-z_*\rVert_{Z}. \end{align*} Whether a problem is well-posed or ill-posed depends essentially on the choice of the function spaces $Y$ and $Z$, which, however, should be practicable with respect to applications. It is of no use if errors to be investigated are simply ignored by artificial topologies, see \cite[Chapter 2]{Ma2014} for a discussion in the DAE context.
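The failure of continuous dependence visible in \eqref{IVP7b} can be made quantitative by a toy computation. In the following sketch (our illustrative index-two analogue, with $N$ and $P_N$ as in a $2\times2$ nilpotent SCF block) the data perturbation $\varDelta g_n(t)=(0,n^{-1}\sin(n^2t))^T$ tends to zero uniformly, while the induced solution perturbation $\varDelta v_n=\varDelta g_n-N(P_N\varDelta g_n)'$ has first component $-n\cos(n^2t)$ and blows up:

```python
import math

def sup_norm(f, a=0.0, b=1.0, samples=2000):
    # crude approximation of max |f(t)| on [a, b] by sampling
    return max(abs(f(a + (b - a)*k/samples)) for k in range(samples + 1))

for n in (10, 100, 1000):
    dg2 = lambda t, n=n: math.sin(n*n*t)/n   # data perturbation, size 1/n
    dv1 = lambda t, n=n: -n*math.cos(n*n*t)  # first component of the solution perturbation
    print(n, sup_norm(dg2), sup_norm(dv1))   # data -> 0, response -> infinity
```

The size of the response grows like $n$ here; for higher index $\mu$ the same mechanism involves derivatives of order up to $\mu-1$ and hence even stronger amplification.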
\medskip What about the operator $L:Y\rightarrow Z$ representing the IVP \eqref{IVP1}, \eqref{IVP2} with accurately stated homogeneous initial condition, that is, $\gamma=0$? It makes sense to set \begin{align*} Lx&=(Ex)'-E'x+Fx,\quad x\in Y\\ &Y=\{y\in \mathcal C([a,b],\Real^m): Ey\in \mathcal C^1([a,b],\Real^m), Gy(a)=0\},\\ &Z= \mathcal C([a,b],\Real^m). \end{align*} If the DAE has index $\mu=1$, then $N_{can}=\ker G=\ker E$, in turn \begin{align*} Y=\{y\in \mathcal C([a,b],\Real^m): Ey\in \mathcal C^1([a,b],\Real^m), Ey(a)=0\}, \end{align*} and $L$ is actually a bijection, e.g., \cite{GM86,CRR,Ma2014}. A particular result is then the inequality \begin{align}\label{IVP10} \max_{t\in [a,b]}|x(t)-x_*(t)|\leq M \max_{t\in [a,b]}|\varDelta q(t)|, \end{align} for $x_*$ and $x$ corresponding to $q$ and $q+\varDelta q$, respectively. For higher-index cases, that is $\mu\geq 2$ and $N\neq 0$ in the SCF, the situation is more complex. Then the operator $L$ is by no means surjective anymore. Its range $\im L\subset Z$ is a proper, nonclosed subset, so that $L$ is not a Fredholm operator either. This can be recognized from the simplified case $E=N$, $F=I$, $d=0$, $x=v$. Noting that \begin{align*} \rank P_{N} =\rank N=\sum_{j=0}^{\mu-1}\theta_j,\quad \rank P_{N^i} = \rank N^i=\sum_{j=i-1}^{\mu-1}\theta_j,\quad i=2,\ldots,\mu, \end{align*} formula \eqref{IVP7a} leads to the representation \begin{align*} \im L=\{g\in\mathcal C([a,b],\Real^m): P_{N^i}g\in\mathcal C^i([a,b],\Real^m),\; i=1,\ldots , \mu-1\}, \end{align*} which rigorously describes in detail all involved derivatives. This representation also reveals that the DAE index is only one important aspect, and that the exact structure can only be described by all the canonical characteristic values together. Moreover, from \eqref{IVP7a}, \eqref{IVP7b} it follows that \begin{align*} v-v_*=\varDelta g+\sum_{i=1}^{\mu-1}(-N)^i(P_{N^i}(\varDelta g))^{(i)}, \end{align*} which makes an inequality like \eqref{IVP10} impossible.
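The rank identities above can be tested experimentally. The sketch below (pure Python; the Jordan block sizes are an arbitrary illustrative choice of ours) builds a nilpotent $N$ block-diagonally from Jordan blocks, reads off $\theta_j$ as the number of blocks of order greater than $j+1$, in accordance with the conjectured Jordan structure, and confirms $\rank N^i=\sum_{j=i-1}^{\mu-1}\theta_j$:

```python
# Illustrative block sizes (our choice); mu is the nilpotency index of N.
block_sizes = [3, 2, 2, 1]
mu = max(block_sizes)

def jordan_nilpotent(sizes):
    # block-diagonal nilpotent matrix with 1s on each block's superdiagonal
    n = sum(sizes)
    A = [[0]*n for _ in range(n)]
    off = 0
    for s in sizes:
        for i in range(s - 1):
            A[off + i][off + i + 1] = 1
        off += s
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def rank_shift(A):
    # nonzero rows of powers of such shift matrices are distinct unit vectors,
    # hence the rank equals the number of nonzero rows
    return sum(1 for row in A if any(row))

# theta_j = number of Jordan blocks of order > j+1
theta = [sum(1 for s in block_sizes if s > j + 1) for j in range(mu)]

N = jordan_nilpotent(block_sizes)
Ni = N
for i in range(1, mu + 1):
    assert rank_shift(Ni) == sum(theta[i - 1:])
    Ni = matmul(Ni, N)
print("theta =", theta, "- rank identities confirmed")
```

For `block_sizes = [3, 2, 2, 1]` this yields $\theta=(3,1,0)$ with $\mu=3$, matching $\rank N=4$, $\rank N^2=1$, $\rank N^3=0$.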
\bigskip We emphasize that an IVP \eqref{IVP1}, \eqref{IVP2} with accurately stated initial conditions is a well-posed problem in this setting only in the index-one case. In the case of a DAE \eqref{IVP1} with a higher index $\mu\geq2$, an ill-posed problem generally arises. Thus, a higher-index regular DAE always has a twofold character: it is, on the one hand, a well-behaved dynamical system and, on the other hand, an ill-posed input-output problem. To give an impression of this ill-posedness, we mention a very simple example elaborated in \cite[Example 2.3]{Ma2014}, in which $E$ and $F$ are constant $5\times 5$ matrices, the matrix pencil is regular with index four, the input is $q=0$, and the corresponding output is $x_*=0$; the perturbation $\varDelta q$ has size $\epsilon n^{-1}$ and tends to zero in $Z$ as $n$ tends to $\infty$, but the corresponding difference $x-x_*$ has size $\epsilon n^2$, grows unboundedly for increasing $n$, and by no means tends to zero in $Y$. For more profound mathematical analyses, for instance in \cite{HMT} to provide instability thresholds, individual topologies specially adapted to $\im L$ can be useful. Then one can enforce a bounded bijection $L:Y\rightarrow \tilde Z$ by setting $\tilde Z=\im L$ and equipping $\tilde Z$ with an appropriate norm to become a Banach space; however, this requires a precise description of $\im L$. Of course, in this decidedly peculiar setting the problem becomes well-posed in this solely theoretical sense \cite{Ma2014}. \bigskip At this point it must also be mentioned that in some areas of mathematics the question of continuous dependence is completely ignored and yet the term \emph{formally well-posed} is used, e.g. \cite[p.\ 298]{SeilerZerz}. Unfortunately, this can lead to considerable misunderstandings, as the above-mentioned simple example \cite[Example 2.3]{Ma2014} already shows.
The aim of \cite{SeilerZerz} is to make results from the algebraic theory of linear systems usable for DAEs, in particular methods of symbolic computation together with the theory of Gr{\"o}bner bases to provide formally well-posed problems. Among other things, it is shown by means of a size-three Hessenberg DAE that the first Gr{\"o}bner index $\gamma_1$ recovers the strangeness index $\mu^S$ and the differentiation index via $\gamma_1=\mu^{diff}-1$. Moreover, \cite[Proposition 5.3]{SeilerZerz} provides the relation $\mu_p\leq\gamma_1+1$ as an upper bound of the perturbation index for DAEs that are not under-determined. It is well known that $\mu_p=\mu=\mu^T=\mu^S+1$ for regular linear DAEs, and $\mu_p=\mu^{diff}$ for DAEs that are solvable in the sense of Definition \ref{d.solvableDAE}. However, for over-determined DAEs the differentiation index is not defined, and the strangeness index and the tractability index are quite different, \cite{HM2020,CRR}. It seems to be open whether any of them is recovered by a Gr{\"o}bner index. \subsection{Other views on regularity that we do not adopt}\label{subs.views} In early work, when less was known about DAEs, regularity was associated with special technical requirements. But there are also different views on the matter in current research. We pick out just a few of the different versions. At the beginning it was assumed that the so-called local pencils $\lambda E(t)+F(t)$, for fixed $t$, and their Kronecker index were relevant for the characterization not only of DAEs with constant coefficients. The associated term is the so-called \emph{local index}. In contrast, the further index notions were then given the attribute \emph{global}. Today it goes without saying that our index notions in this sense are of a global nature, i.e. related to an interval, and we avoid the attribute \textit{global}.
\bigskip \textbullet\quad In the famous monograph \cite[Page 23]{BCP89} regularity of the DAE $Ex'+Fx=q$ is considered as regularity of the associated \emph{local matrix pencils} $\lambda E(t)+F(t), t\in \mathcal I$. The intention behind this is to obtain feasible numerical integration methods. However, it is pointed out ibidem that this regularity does not imply solvability in the sense of Definition \ref{d.solvableDAE}. For instance, the DAE with coefficients \begin{align*} E(t)=\begin{bmatrix} -t&t^2\\-1&t \end{bmatrix},\quad F(t)=\begin{bmatrix} 1&0\\0&1 \end{bmatrix},\quad t\in\mathcal I=[-1,1], \end{align*} features solely regular local pencils, $\det(\lambda E(t)+F(t))=1, t\in \mathcal I$. However, it can be checked that, for an arbitrary smooth function $\gamma:\mathcal I\rightarrow \Real$, \[ x_*(t)=\gamma(t)\begin{bmatrix}t\\1\end{bmatrix}, \; t\in\mathcal I \] solves the homogeneous DAE. This specific pair $\{E,F\}$ is pre-regular ($r=1, \theta=1$) but not regular in our context, since the next pair $\{E_1,F_1\}$ is no longer pre-regular, $\im[E_1 \, F_1]=\{0\}\neq \Real^r$. Apart from the inappropriate definition of regularity, the focus at the time on numerical methods bore much fruit and, with the exclamation ``Differential/algebraic equations are not ODEs'', \cite{Petzold1982}, generated a great deal of interest in DAEs. \bigskip \textbullet\quad With a completely different intention, namely to obtain theoretical solvability statements, in the monographs \cite{Boya1980,BDLCH1989} it is assumed for regularity, among other things, that there is a number $c$ such that $cE(t)+F(t)$ is non-singular for all $t \in \mathcal I$. Even for the pair \begin{align*} E(t)=\begin{bmatrix} 1&t\\0&0 \end{bmatrix},\quad F(t)=\begin{bmatrix} 0&0\\1&t \end{bmatrix}, \end{align*} which we recognize today as regular with index two and characteristics $r=1, \theta_0=1, \theta_1=0$, this requirement is obviously not met.
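Both pencil computations can be double-checked numerically. The following pure-Python sketch (the sample points and the choice $\gamma(t)=\sin t$ are ours, purely illustrative) confirms $\det(\lambda E(t)+F(t))\equiv1$ and the homogeneous solution $x_*$ for the first pair, as well as $\det(cE(t)+F(t))\equiv0$ for the second pair:

```python
import math, random

# First pair: E(t) = [[-t, t^2], [-1, t]], F = I_2.
def det_pencil1(lam, t):
    a, b = 1 - lam*t, lam*t*t          # first row of lam*E(t) + I
    c, d = -lam, 1 + lam*t             # second row
    return a*d - b*c

# Residual of E x' + x for x(t) = sin(t)*(t, 1), with exact derivatives.
def residual1(t):
    g, gp = math.sin(t), math.cos(t)
    x1, x2 = g*t, g
    x1p, x2p = gp*t + g, gp
    return (-t*x1p + t*t*x2p + x1, -x1p + t*x2p + x2)

# Second pair: E(t) = [[1, t], [0, 0]], F(t) = [[0, 0], [1, t]].
def det_pencil2(c, t):
    return c*t - (c*t)*1               # det [[c, c*t], [1, t]] = 0 identically

random.seed(0)
for _ in range(100):
    lam, c, t = (random.uniform(-5, 5) for _ in range(3))
    assert abs(det_pencil1(lam, t) - 1.0) < 1e-9
    assert det_pencil2(c, t) == 0.0
    assert max(abs(r) for r in residual1(t)) < 1e-9
print("pencil identities and homogeneous solution confirmed")
```
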
Nonetheless, in \cite[Chapter 5]{Boya1980} and \cite[Chapter 2]{BDLCH1989}, which are devoted to this kind of regular system, some statements about solvability are provided. However, these results are very restricted by the working tools available at the time, such as the Drazin inverse and the like. \bigskip \textbullet\quad In \cite{BergerIlchmann} regularity is understood to mean equivalent transformability into SCF. The class of DAEs that can be transformed into SCF is more comprehensive than the class of regular DAEs defined by Definition \ref{d.2}, and also includes DAEs with harmless critical points, which we have excluded for good reason. On the other hand, the class of DAEs transformable into an SCF is only a subclass of the almost regular DAEs according to Definition \ref{d.almost-reg}, which contains DAEs featuring further harmless critical points. \bigskip \textbullet\quad We quote from \cite[p.\ 154]{KuMe2006}, in which Hypothesis 3.48 is the local name for the SF-Hypothesis \ref{SHyp}: ``...Hypothesis 3.48 is the correct way to define a regular differential-algebraic equation. Regularity here is to be understood as follows. A linear problem satisfying Hypothesis 3.48 fixes a strangeness-free differential-algebraic equation.... The underlying differential-algebraic operator ... with appropriately chosen spaces, is invertible and the inverse is continuous.'' This definition de facto declares the solvable DAEs to be regular, contrary to what we already argued in Subsection \ref{subs.regularity}. What is more, the attempt to justify this is somewhat confusing. Surely, strangeness-free DAEs, i.e. index-zero or index-one problems, are well-posed operator equations in natural spaces, which is well known at least through \cite{GM86,CRR,Ma2014}; see also Subsection \ref{subs.posedness} above.
In detail, the original DAE $Ex'+Fx=q$ which satisfies the SF-Hypothesis\footnote{See Subsection \ref{subs.Hyp}} is remodelled to \begin{align*} Y^*Ex'+Y^*Fx&=Y^*q,\\ Z^*\mathcal F_{[\nu-1]}x&=Z^*q_{[\nu-1]}=:p. \end{align*} But to analyze the original equation, perturbations of the right-hand side, $q+\varDelta q$, are appropriate, which leads to $p+Z^*\varDelta q_{[\nu-1]}$ within the transformed DAE. This fact is not taken into account and only continuous functions $\varDelta p$ are applied, which actually hides the DAE structure. In the context of local statements on nonlinear DAEs, e.g. \cite[p.\ 164]{KuMe2006}, this seems to be even more critical. \bigskip \textbullet\quad Recently, in the textbook \cite{KuMe2024}, that is, the second edition of \cite{KuMe2006}, the following regularity definition (which we adapted to our notation) is emphasized as a central notion.\\ \cite[Definition 3.3.1]{KuMe2024}: The pair $\{E,F\}$ and the corresponding DAE $Ex'+Fx=q$ are called regular if \begin{enumerate} \item the DAE is solvable for every sufficiently smooth $q$, \item the solution is unique for every $t_0$ in a compact interval $\mathcal I$ and every consistent initial condition $x_0 \in \Real^m$\footnote{In \cite{KuMe2024} $x_0 \in \Complex^m$ is considered.} given at $t_0\in \mathcal I$, \item the solution depends smoothly on $q$, $t_0$, and $x_0$. \end{enumerate} Items (1) and (2) capture the solvable systems, see Definition \ref{d.solvableDAE}, and the almost regular DAEs, see Definition \ref{d.almost-reg}, since harmless critical points are allowed. However, item (3) is inappropriate: \begin{itemize} \item On the one hand, for DAEs continuous or even smooth dependence on consistent values $x_0$ cannot be assumed in general, not even for index-one DAEs. For arbitrary perturbations $\varDelta_0 \in \Real^m$, the vector $x_0+\varDelta_0$ is not necessarily consistent. Indeed, the perturbation $\varDelta_0$ has to be restricted accordingly such that $\varDelta_0 \in S_{can}(t_0)$.
\item On the other hand, as sketched in Subsection \ref{subs.posedness}, starting from corresponding spaces of continuous and differentiable functions, continuous dependence of the DAE solutions on the inhomogeneity $q$ is given exclusively for index-one problems (e.g.\ \cite{GM86}, \cite{CRR}, \cite{Ma2014}). For higher-index DAEs, surjectivity needs sophisticated, strongly problem-specific function spaces, see \cite{Ma2014}. \end{itemize} \textbullet\quad In the monograph \cite{Chist1996}, devoted to algebro-differential operators having finite-dimensional nullspaces, DAEs are organized in the framework of a ring $\mathcal M_A$ of linear differential and integral operators acting in $\mathcal C^{\infty}$. In this context, the first-order operator $\Lambda_{1}x=Ex'+Fx$ is called \emph{regular} if $E(t)=I$ on the given interval. If it exists, the minimal order $\nu$ of a differential operator $\Lambda_{\nu}$ belonging to the ring $\mathcal M_A$ which serves as a \emph{left regularization operator} for $\Lambda_1$, such that $\Lambda_{\nu}\circ \Lambda_{1}$ is regular, is called the \emph{non-resolvedness index} of the operator $\Lambda_{1}$, see also Remark \ref{r.Chist}. This is nothing other than the differentiation index. \section{About nonlinear DAEs}\label{s.nonlinearDAEs} There are numerous important studies of nonlinear DAEs that are based on special structural requirements, in particular such as those for simulating multibody system dynamics and for circuit modeling, which are not to be reflected here in detail. We refer to \cite{Arnold,Riaza} for overviews. Here we look solely at general unstructured non-autonomous DAEs \begin{align}\label{N.0} f(t,x(t),x'(t))=0,\quad t\in \mathcal I, \end{align} given by a sufficiently smooth function $f:\mathcal I_f\times \mathcal D_{f}\times \Real^m\rightarrow \Real^m$, $\mathcal I_f\times \mathcal D_f\subseteq \Real\times\Real^m$ open, and ask about their general properties.
The only exception to the restriction of the structure, which is allowed in some cases below, is the assumption that the subspace $\ker D_yf(t,x,y)$ is independent of the variables $y$ or $(x,y)$. If circumstances require, a change to the augmented form \begin{align*} f(t,x(t),y(t))=0,\quad y(t)-x'(t)=0, \quad t\in \mathcal I, \end{align*} can be made\footnote{However, this is accompanied by an increase in the index!}. As we have already seen with linear DAEs, certain subspaces play a crucial role, which will also be the case here. However, while in the linear case the subspaces depend on only one parameter, namely $t\in\Real$, there are now the parameters $(t,x)\in \Real^{m+1}$ and more. These subspaces are handled by means of both projector functions and basis functions. If such a subspace depends only on $t$ varying on a given interval $\mathcal I$ and has constant dimension there, then both smooth basis functions and smooth projector functions defined on the entire interval are available. In contrast, if a subspace depends on a variable $z\in \Real^n$, $n\geq 2$, and has constant dimension, then there are globally defined smooth projector functions, but smooth basis functions exist only locally, e.g., \cite[Sections A3, A4]{CRR}. This must be taken into account: it favors the use of projector functions, which are usually more difficult to determine in practice, while the practically often easier handling of bases becomes theoretically more complicated. \bigskip It should be recalled that, in the case of nonlinear ODEs, the uniqueness of solutions is no longer guaranteed with merely continuous vector fields. A vector field of class $\mathcal C^1$ is locally Lipschitz and hence ensures uniqueness. The same applies to vector fields on smooth manifolds. This must be taken into account when it comes to a regularity notion which should include uniqueness of solutions to the corresponding initial value problems.
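The asymmetry between projector functions and basis functions noted above can be illustrated by a classical line-field example (all concrete choices here are ours, not from the cited sources): along the unit circle, the one-dimensional subspace spanned by $u(\theta)=(\cos(\theta/2),\sin(\theta/2))^T$ is a smooth, globally well-defined subspace field, since the orthoprojector $P(\theta)=u(\theta)u(\theta)^T$ is $2\pi$-periodic; yet any continuous choice of basis vector returns with the opposite sign after one loop, so no global smooth basis exists.

```python
import math

def u(theta):
    # candidate basis vector of the line field; NOT 2*pi-periodic
    return (math.cos(theta/2), math.sin(theta/2))

def P(theta):
    # orthoprojector onto span{u(theta)}; this IS 2*pi-periodic
    a, b = u(theta)
    return ((a*a, a*b), (a*b, b*b))

# the projector defines the subspace field globally on the circle ...
for theta in (0.0, 1.0, 2.5, 4.0):
    P0, P1 = P(theta), P(theta + 2*math.pi)
    assert all(abs(P0[i][j] - P1[i][j]) < 1e-12 for i in range(2) for j in range(2))

# ... but the basis vector flips its sign after one loop
u0, u1 = u(0.0), u(2*math.pi)
assert abs(u1[0] + u0[0]) < 1e-12 and abs(u1[1] + u0[1]) < 1e-12
print("projector globally smooth, basis vector flips sign after one loop")
```
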
We can only give a rough sketch of the approaches for nonlinear DAEs and confine ourselves to the rank conditions used in each case. \medskip In the present section, in order to achieve better clarity in all of the very different following approaches, we use the notations $g'(t)$ but also $\frac{\mathrm d}{\mathrm d t}g(t)$ for the derivatives of a function $g(t)$ of the independent variable $t\in \Real$. For the partial derivatives of a function $g(u,v,t)$ depending on several independent variables $u\in\Real^n,v\in\Real^k,t\in\Real$ we use the notations $g_u(u,v,t)$, $g_v(u,v,t)$, $g_t(u,v,t)$, but also $D_ug(u,v,t)$, $D_vg(u,v,t)$, $D_tg(u,v,t)$. \subsection{Approaches by means of derivative arrays}\label{subs.arrays} The approaches via derivative arrays and geometric reduction procedures have been developed for nonlinear DAEs almost from the beginning \cite{Gear88,BCP89,Griep92,Reich,RaRh,KuMe2006,ChistShch,EstLam2016Decoupling}. To treat the DAE \begin{align}\label{N.1} f(t,x(t),x'(t))=0,\quad t\in \mathcal I, \end{align} on $\mathcal I\times \mathcal D\times \Real^m \subseteq \mathcal I_f\times \mathcal D_{f}\times \Real^m$, one forms the derivative array functions \cite{Gear88,BCP89,Griep92,KuMe2006,ChistShch} \begin{align}\label{N.2} \mathfrak F_{[k]}(t,x,\underbrace{x^1,\cdots,x^{k+1}}_{y_{[k]}})=\begin{bmatrix} f_{[0]}(t,x,x^1)\\f_{[1]}(t,x,x^1,x^2)\\ \vdots \\f_{[k]}(t,x,x^1,\cdots,x^{k+1}) \end{bmatrix}\in \Real^{(k+1)m}, \\ \text{for} \quad (t,x)\in \mathcal I\times \mathcal D,\quad \begin{bmatrix} x^1\\\vdots\\x^{k+1} \end{bmatrix}=:y_{[k]} \in \Real^{km+m},\quad k\geq 0, \nonumber \end{align} in which \begin{align*} f_{[0]}(t,x,x^1)&=f(t,x,x^1),\\ f_{[j]}(t,x,\underbrace{x^1,\cdots,x^{j+1}}_{y_{[j]}})&={f_{[j-1]}}'_t(t,x,\underbrace{x^1,\cdots,x^{j}}_{y_{[j-1]}})+{f_{[j-1]}}'_x(t,x,y_{[j-1]})x^1+\sum_{i=1}^{j}{f_{[j-1]}}'_{x^i}(t,x,y_{[j-1]})x^{i+1}, \end{align*} such that, for each arbitrary smooth reference function $x_*:\mathcal I\rightarrow
\Real^m$ whose graph runs in $\mathcal I\times \mathcal D$ it results that \begin{align*} \mathfrak F_{[k]}(t,x_*(t),\underbrace{x_*^{(1)}(t),\cdots,x_*^{(k+1)}(t)}_{x_{*[k]}'})&=\begin{bmatrix} f(t,x_*(t),x_*'(t))\\\frac{{\rm d}}{{\rm d}t}f(t,x_*(t),x_*'(t))\\ \vdots \\\frac{{\rm d}^{k}}{{\rm d}t^{k}} f(t,x_*(t),x_*'(t)) \end{bmatrix}. \end{align*} By construction the sets $\mathfrak L_{[{k}]}$, \begin{align}\label{N.L} \mathfrak L_{[{k}]}=\{(t,x,y_{[k]})\in \mathcal I\times \mathcal D\times\Real^{(k+1)m}:\mathfrak F_{[k]}(t,x,y_{[k]})=0\},\quad k\geq 0, \end{align} house the extended graphs of all smooth solutions $x_{*}:\mathcal I\rightarrow \mathcal D$ of the DAE \eqref{N.1}. The partial Jacobians of the function $\mathfrak F_{[k]}$ with respect to $y_{[k]}$ and to $x$ show the structure \begin{align*} \mathcal E_{[k]}= D_{y_{[k]}}\mathfrak F_{[k]} =\begin{bmatrix} f_{x^1}&&&\\ *&f_{x^1}&&\\ &&\ddots&\\ *&\cdots&*&f_{x^1} \end{bmatrix},\quad \mathcal F_{[k]}= D_{x}{\mathfrak F_{[k]}} =\begin{bmatrix} f_{x}\\ *\\ \vdots\\ * \end{bmatrix}. \end{align*} Using again the arbitrary reference function $x_*$, which is not necessarily a solution, and denoting \begin{align*} E_*(t)=f_{x^1}(t,x_*(t),x_*'(t)),\; F_*(t)=f_{x}(t,x_*(t),x_*'(t)),\; t\in\mathcal I, \end{align*} one arrives at matrix functions as introduced by \eqref{1.GkLR} in Section \ref{subs.Preliminaries}, namely \begin{align*} \mathcal E_{*[k]}(t)&=\mathcal E_{[k]}(t,x_*(t),x'_{*[k]}(t))=\begin{bmatrix} E_*(t)&&&\\ *&E_*(t)&&\\ &&\ddots&\\ *&\cdots&*&E_*(t) \end{bmatrix}, \\ \mathcal F_{*[k]}(t)&=\mathcal F_{[k]}(t,x_*(t),x'_{*[k]}(t))=\begin{bmatrix} F_*(t)\\ *\\ \vdots\\ * \end{bmatrix}. \end{align*} This opens up the option of using linearization to trace questions about nonlinear DAEs back to the linear case. Here too, as in the linear case, there are different views on rank conditions for the Jacobians.
As we will see below, again, in the concepts of the (standard) differentiation index and of the strangeness index there is no need for the Jacobians $\mathcal E_{[k]}$ with lower $k$ to have constant rank. In contrast, for the regular differentiation index and the projector-based differentiation index, each of these Jacobians is explicitly supposed to have constant rank, which allows one to use parts of the geometric toolkit such as manifolds, tangent bundles, etc. \bigskip \subsubsection{Differentiation index} What we now call the differentiation index was originally simply called the index and was introduced into the discussion in \cite[p.\ 39]{Gear88} as follows: \textit{Consider \eqref{N.2} as a system of equations in the \emph{separate} dependent variables $x^1,\ldots, x^{k+1}$, and solve for these variables as functions of $x$ and $t$ considered as \emph{independent} variables. If it is possible to solve for $x^1$ for some finite $k$, then the index, $\mu$, is defined as smallest $k$ for which \eqref{N.2} can be solved for $x^1(x,t)$.} We quote the corresponding definition \cite[Definition 2.5.1]{BCP89} that incorporates this idea: \begin{definition}\label{d.N1} The \emph{index} $\nu$ of the DAE \eqref{N.1} is the smallest integer $\nu$ such that $\mathfrak F_{[\nu]}$ uniquely determines the variable $x^1$ as a continuous function of $(x,t)$. \end{definition} Unfortunately, this definition is rather vague, which triggered a lively discussion at the time. There subsequently were a series of attempts at a more precise definition, partly with a variety of new terms, see e.g. \cite{CaGear95}. We will come back to this below when dealing with the perturbation index, see also Example \ref{e.simeon}. It should also not go unmentioned that what we call the \emph{regular} differentiation index below was also simply called the index in \cite{Griep92}, and it was explicitly intended as an adjustment of the index notion in \cite{Gear88}.
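Gear's idea of solving the array for $x^1$ can be traced in the constant-coefficient model $Nx'+x=q$ with a nilpotent $2\times2$ matrix $N$ (our illustrative choice). The array Jacobian with respect to $(x^1,\ldots,x^{k+1})$ is then block bidiagonal with diagonal blocks $N$ and subdiagonal blocks $I$, and the first block of every null vector vanishes, i.e.\ the array determines $x^1$, precisely for $k\geq2$, reflecting differentiation index two. The sketch below checks this through the rank criterion $\rank M=m+\rank M_2$, where $M_2$ is $M$ without its first $m$ columns; this criterion is equivalent to $1$-fullness as we use it here, since $\dim(T\ker M)=m-\rank M+\rank M_2$.

```python
from fractions import Fraction

def rank(M):
    # exact Gaussian elimination over the rationals
    A = [[Fraction(x) for x in row] for row in M]
    rows, r = len(A), 0
    cols = len(A[0]) if A else 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f*b for a, b in zip(A[i], A[r])]
        r += 1
    return r

m = 2
N = [[0, 1], [0, 0]]     # nilpotent of index 2: the DAE N x' + x = q has index 2

def E_array(k):
    # Jacobian of the derivative array of N x' + x = q w.r.t. (x^1, ..., x^{k+1})
    n = (k + 1)*m
    A = [[0]*n for _ in range(n)]
    for b in range(k + 1):
        for i in range(m):
            for j in range(m):
                A[b*m + i][b*m + j] = N[i][j]   # diagonal block N
            if b > 0:
                A[b*m + i][(b - 1)*m + i] = 1   # subdiagonal block I
    return A

def one_full(M):
    # 1-full iff every null vector has vanishing first m components,
    # equivalently rank M = m + rank of M with the first m columns removed
    return rank(M) == m + rank([row[m:] for row in M])

print([one_full(E_array(k)) for k in (0, 1, 2)])
```

For this example $1$-fullness first holds at $k=2$, in agreement with the nilpotency index of $N$.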
We underline that in \cite{BCP89} the above definition \cite[Definition 2.5.1]{BCP89} is immediately followed by a proposition \cite[Proposition 2.5.1]{BCP89} from which we learn that the matter of the standard differentiation index of nonlinear DAEs can be traced back to properties of the partial Jacobians as follows: \begin{proposition}\label{p.N1}\cite{BCP89} Sufficient conditions for \begin{align}\label{N.3} \mathfrak F_{[k]}(t,x,y_{[k]})=0 \end{align} to uniquely determine $x^1$ as a continuous function of $(x,t)$ are that the Jacobian matrix of $\mathfrak F_{[k]}$ with respect to $y_{[k]}$ is $1$-full with constant rank and \eqref{N.3} is consistent. \end{proposition} In the meantime, this has become established as one of the possible definitions, being at the same time the straightforward generalization of Definition \ref{d.diff}: \begin{definition}\label{d.N1a} The \emph{index} $\nu$ of the DAE \eqref{N.1} is the smallest integer $\nu$ such that $\mathcal E_{[\nu]}$ is $1$-full with constant rank and \eqref{N.3} is consistent. \end{definition} \bigskip \subsubsection{Strangeness index} There is also a straightforward generalization of the SF-Hypothesis \ref{SHyp} with the strangeness index. We adapt \cite[Hypothesis 4.2]{KuMe2006} and \cite[Definition 4.4]{KuMe2006} to our notation.
\begin{hypothesis}[\textbf{Strangeness-Free-Hypothesis for nonlinear DAEs}]\label{SHypN} There exist integers $\hat \mu, \hat a$, and $\hat d=m-\hat a$ such that the set \begin{align*} \mathfrak L_{[\hat{\mu}]}=\{(t,x,y_{[\hat{\mu}]})\in \mathcal I\times \mathcal D\times\Real^{(\hat{\mu}+1)m}:\mathfrak F_{[\hat{\mu}]}(t,x,y_{[\hat{\mu}]})=0\} \end{align*} associated with $f$ is nonempty and such that for every $(t,x,y_{[\hat{\mu}]})\in \mathfrak L_{[\hat{\mu}]}$ there exists a (sufficiently small) neighborhood in $\mathcal I_f\times \mathcal D_f\times\Real^{(\hat{\mu}+1)m}$ in which the following properties hold: \begin{description} \item[\textrm{(1)}] We have $\rank \mathcal E_{[\hat\mu]}(t,x,y_{[\hat\mu]})=(\hat\mu+1)m-\hat a$ on $\mathfrak L_{[\hat{\mu}]}$ such that there is a smooth matrix function $Z$ of size $((\hat\mu+1)m)\times\hat a$ and pointwise maximal rank, satisfying $Z^*\mathcal E_{[\hat\mu]}=0$ on $\mathfrak L_{[\hat{\mu}]}$. \item[\textrm{(2)}] We have $\rank Z^*(t,x,y_{[\hat\mu]})\mathcal F_{[\hat\mu]}(t,x,y_{[\hat\mu]})=\hat a$ such that there exists a smooth matrix function $C$ of size $m\times\hat d$ and pointwise maximal rank, satisfying $Z^*\mathcal F_{[\hat\mu]}C=0$. \item[\textrm{(3)}] We have $\rank f'_{x^1}(t,x,x^1)C(t,x,y_{[\hat\mu]})=\hat d$ such that there exists a smooth matrix function $Y$ of size $m\times\hat d$ and pointwise maximal rank, satisfying $\rank Y^* f'_{x^1}C=\hat d$. \end{description} \end{hypothesis} \begin{definition}\label{d.HypStrangenessN} Given a DAE as in \eqref{N.1}, the smallest value of $\hat \mu$ such that the SF-Hypothesis \ref{SHypN} is satisfied is called the \emph{strangeness index} of \eqref{N.1}. If $\hat \mu=0$ then the DAE is called strangeness-free.
\end{definition} \bigskip \subsubsection{Regular differentiation index} Next we follow the ideas of \cite{Griep92} to a nonlinear version of the regular differentiation index described for linear DAEs in Subsection \ref{subs.qdiff}, which will turn out to be closely related to the geometric reduction and thus to the geometric index. Suppose that the partial Jacobians $\mathcal E_{[k]}$ have constant rank for all $k\geq 0$. The set \begin{align*} \tilde C_{[k]}=\{ (t,x)\in \mathcal I\times\mathcal D: \exists y_{[k]}\in\Real^{km+m},\mathfrak F_{[k]}(t,x,y_{[k]})=0 \} \end{align*} is called the \emph{constraint manifold of order $k$}, and for each $(t,x)\in \tilde C_{[k]}$ one obtains the manifold $M_{[k]}(t,x)$ and its tangent space given by \begin{align*} M_{[k]}(t,x)=\{y_{[k]}\in \Real^{km+m}: \mathfrak F_{[k]}(t,x,y_{[k]})=0 \},\; TM_{[k]}(t,x; y_{[k]})=\ker \mathcal E_{[k]}(t,x,y_{[k]}). \end{align*} The following generalizes Definition \ref{d.G1} via linearization. \begin{definition}\label{d.regulardiff} The DAE \eqref{N.1} has \emph{regular differentiation index} $\nu$ if the partial Jacobians $\mathcal E_{[k]}$ have constant rank for $k\geq 0$, $\tilde C_{[\nu]}$ is non-empty, $T_{[\nu]} M_{[\nu]}(t,x)$\footnote{As in Subsection \ref{subs.qdiff}, $T_{[\nu]}=[I_m\; 0\cdots 0]\in \Real^{m\times(m+m\nu)}$ is a truncation matrix.} is a singleton for each $(t,x)\in \tilde C_{[\nu]}$, and $\nu$ is the smallest such integer. \end{definition} We note that, analogously to Subsection \ref{subs.qdiff}, $T_{[\nu]} M_{[\nu]}(t,x)$ is a singleton if and only if $T_{[\nu]}\ker \mathcal E_{[\nu]}(t,x,y_{[\nu]})=0$ for all $y_{[\nu]}\in M_{[\nu]}(t,x)$. The main intention in \cite{Griep92} is to give index reduction procedures a rigorous background.
In particular, owing to \cite[Theorem 16]{Griep92}, the transfer from the DAE \eqref{N.1} to \begin{align}\label{N.5} (I-&W(t,x(t),x'(t)))f(t,x(t),x'(t)) \nonumber \\ +&W(t,x(t),x'(t))\big(D_xf(t,x(t),x'(t))x'(t)+D_tf(t,x(t),x'(t))\big)=0,\quad t\in \mathcal I, \end{align} subject to the initial restriction $f(t_0,x(t_0),x'(t_0))=0$, reduces the (regular differentiation) index by one. Thereby, $W(t,x,y)$ denotes the orthoprojector function along $\im D_yf(t,x,y)$. \medskip Finally, in this segment it should be mentioned that in \cite{ChenTrenn}, for autonomous quasi-linear DAEs (with $\mathcal C^{\infty}$ functions), the version of the (regular) differentiation index from \cite{Griep92} was recast in a rigorous geometric language and shown to be consistent with the geometric index, cf. Remark \ref{r.RaRh} below. \bigskip \subsubsection{Projector-based differentiation index} Next we turn to the concept associated with the projector-based differentiation index. Suppose that the nullspace $\ker f_{x^1}(t,x,x^1)$ is actually independent of the variables $(x,x^1)$ and does not change its dimension. With the orthoprojector $P(t)$ along $\ker f_{x^1}(t,x,x^1)$ it results that \begin{align*} f(t,x,x^1)=f(t,x,P(t)x^1),\quad (t,x,x^1)\in \mathcal I\times \mathcal D\times\Real^m, \end{align*} and the DAE \eqref{N.1} rewrites to \begin{align*} f(t,x(t),(Px)'(t)-P'(t)x(t))=0,\quad t\in \mathcal I. \end{align*} This makes clear that the given DAE accommodates an equation $(Px)'(t)=\phi(x(t),t)$. The idea now is to extract the remaining component $(I-P(t))x$ in terms of $(P(t)x,t)$ from the derivative array. As in the linear case, the further matrix functions $\mathcal B_{[k]}$ of size $(mk+m)\times(mk+m)$, \begin{align*} \mathcal{B}_{[k]} = \begin{bmatrix} P & 0 \\ \mathcal F_{[k-1]}& \mathcal E_{[k-1]} \end{bmatrix} \end{align*} play their role here.
\begin{definition}\label{d.pbdiff} The DAE \eqref{N.1} has \emph{projector-based differentiation index} $\nu$ if the matrix functions $\mathcal B_{[k]}$ have constant rank for $k\geq 0$, and $\nu$ is the smallest integer such that $\mathcal B_{[\nu]}$ is $1$-full. \end{definition} In contrast to Definition \ref{d.N1a}, we do not assume the consistency of \eqref{N.3} in the above definition. Nevertheless, to compute consistent initial values, of course \begin{align*} \mathfrak F_{[\nu-1]}(t,x,y_{[\nu-1]})=0 \end{align*} has to be consistent. \bigskip \subsection{Geometric reduction}\label{subs.nonlinearDAEsGeo} The geometric reduction procedures are intended right from the start for nonlinear DAEs \cite{Rhe84,Reich,Griep92,RR2008}. While \cite{Griep92}, see Definition \ref{d.regulardiff} above, still uses a rather analytical representation with the rank theorem as background, \cite{Reich} and \cite{Rhe84,RaRh} use the means of geometry more consistently. The explicit rank conditions from \cite{Griep92} become inherent components of the corresponding terms, the rank theorem is replaced by the subimmersion theorem, etc.\footnote{We recommend \cite[Section 3.3]{RR2008} for a nice roundup.} A clear presentation of the issue for autonomous DAEs is given in \cite{RaRh}. Here we follow the lines of \cite{Reich}, whose depiction of nonautonomous DAEs fits best with the rest of the material in our treatise. \medskip The stated intention of \cite{Reich} is to elaborate a concept of regularity for general non-autonomous DAEs \eqref{N.1} considered on the open connected set $\mathcal I\times \mathcal D\times \Real^m$. We recall some standard terminology for this purpose. For a $\rho$-dimensional differentiable manifold $M$ we consider the tangent bundle $TM$ of $M$ and the tangent space $T_zM$ of $M$ at $z\in M$.
We deal with manifolds $M$ being embedded in $\Real\times\Real^m$, that is, sub-manifolds of $\Real\times\Real^m$, and we can accordingly assume that $T_zM$ has been identified with a $\rho$-dimensional linear subspace of $\Real\times\Real^m$. Denote by $\pi_1:\Real\times\Real^m\rightarrow \Real$ the projection onto the first factor in $\Real\times\Real^m$, and let $\mathcal J$ be the open subset of $\Real$ with $\pi_1(M)=\mathcal J$. Introduce further the restriction $\pi:M\rightarrow\mathcal J$ of $\pi_1$ to $M$, that is, $\pi=\pi_1|M$. Then the triple $(M,\pi,\mathcal J)$ is a sub-bundle of $\Real\times\Real^m$ if $\pi(T_{(t,x)}M)=\Real$ for all $(t,x)\in M$. The manifolds $M(t)\subset\Real^m$ defined by $\{t\}\times M(t)=M\cap(\{t\}\times \Real^m)$ are called the fibres of $M$ at $t\in\mathcal J$. \bigskip The origin of the following regularity notion is \cite[Definition 2]{Reich}. We have included the explicit requirement for $\mathcal C^1$ classes to ensure the uniqueness of solutions to initial value problems. Judging from the context, we believe that this was originally intended. \begin{definition}\label{d.regularReich} The DAE \eqref{N.1} is called \emph{regular} if there is a unique sub-bundle $(\mathcal C,\pi,\mathcal I)$ of $\Real\times\Real^m$ and a unique vector field $v:\mathcal C\rightarrow \Real^m$ on $\mathcal C$, both of class $\mathcal C^1$, such that a differentiable mapping $x:I\subset \mathcal I\rightarrow \Real^m$ is a solution of the vector field $v$ if and only if $x$ is a solution of the given DAE. The manifold $\mathcal C$ is called \emph{configuration space} and $v$ the \emph{corresponding vector field}. \end{definition} A technique is stated in \cite{Reich} by means of which the configuration space and the corresponding vector field for a given DAE can be obtained. In more detail, one starts from the set \begin{align*} L =\{(t,x,p)\in \mathcal I\times \mathcal D\times\Real^m: f(t,x,p)=0\}.
\end{align*} A differentiable map $x:I\subseteq\mathcal I\rightarrow \Real^m$ is a solution of the DAE if and only if \begin{align*} (t,x(t),x'(t))\in L\quad\text{for all}\; t\in I. \end{align*} We form the new set \begin{align*} \mathcal C_1=\pi_{1,2}(L)\subseteq \Real\times\Real^m, \end{align*} where $\pi_{1,2}: \Real\times \Real^m\times \Real^m\rightarrow \Real\times \Real^m$ is the projection onto the first two factors in $\Real\times \Real^m\times \Real^m$. This set reflects algebraic constraints on the solution of the DAE. If the triple $(\mathcal C_1,\pi,\mathcal I)$ is a differentiable sub-bundle of $\Real\times\Real^m$, then the differentiable map $x:I\rightarrow \Real^m$ is a solution of the DAE if and only if \begin{align*} (t,x(t),x'(t))\in L\cap S\mathcal C_1\subseteq L\quad\text{for all}\; t\in I. \end{align*} Thereby, $S\mathcal C_1=\bigcup_{(t,x)\in \mathcal C_1}\{(t,x)\}\times S_{(t,x)}\mathcal C_1$ and $S_{(t,x)}\mathcal C_1=\{p\in\Real^m:(1,p)\in T_{(t,x)}\mathcal C_1\}$ denote the so-called\footnote{This notion is motivated by the fact that the variable $t$ in \eqref{N.1} satisfies the differential equation $t'=1$. For autonomous DAEs, the variable $t$ is absent from \eqref{N.1} and from the set $\mathcal C_1$, so that one operates with the usual tangent bundle $T\mathcal C_1$ and tangent space $T_x\mathcal C_1$ at $x$.} \emph{restricted tangent bundle} of $\mathcal C_1$ and \emph{restricted tangent space} of $\mathcal C_1$ at $(t,x)$, respectively. Then we form the next set \begin{align} \mathcal C_2=\pi_{1,2}( L\cap S\mathcal C_1). \end{align} This procedure leads to a sequence of sub-manifolds $\mathcal C_k$ of $\Real\times\Real^m$ associated with the DAE. We quote \cite[Definition 8]{Reich} and mention the modification for linear DAEs in Section \ref{subs.degree} for comparison. \begin{definition}\label{d.degree} Let $L$ be the corresponding set of the given DAE \eqref{N.1}.
We define a family $\{ \mathcal C_k\}_{k=0,\ldots, s}$ of submanifolds $\mathcal C_k$ of $\Real\times\Real^m$ by the recursion \begin{align*} \mathcal C_0&=\mathcal I\times\mathcal D,\\ \mathcal C_k&= \pi_{1,2}( L\cap S\mathcal C_{k-1}),\quad k=1,\ldots, s, \end{align*} where $s$ is the largest non-negative integer such that the triples $(\mathcal C_k, \pi, \mathcal I)$ are differentiable sub-bundles and $\mathcal C_{s-1}\neq\mathcal C_s$. In case of $\mathcal C_1=\mathcal I\times\mathcal D$ we define $s=0$. We call the family $\{ \mathcal C_k\}_{k=0,\ldots, s}$ the \emph{family of constraint manifolds} and the integer $s$ the \emph{degree of the given DAE}. \end{definition} It is shown that the degree, if it is well-defined, satisfies $s\leq m$. Furthermore, by \cite[Theorem 9]{Reich}, the DAE \eqref{N.1} is regular if it has degree $s$ and, additionally, for each fixed $(t,x)\in \mathcal C_{s}$, the set \begin{align*} L\cap\big(\{(t,x)\} \times S_{(t,x)}\mathcal C_{s}\big) \end{align*} contains exactly one element, $L\cap S\mathcal C_s$ is of class $\mathcal C^1$, and, for all $(t,x,p)\in L\cap S\mathcal C_s$, $\dim \mathcal C_s=\rank \pi_{1,2}T_{(t,x,p)}(L\cap S\mathcal C_s)$. Then, $\mathcal C=\mathcal C_{s}$ is the configuration space and the vector field is uniquely defined by \begin{align*} (t,x,v(t,x))\in L\cap S\mathcal C. \end{align*} Owing to \cite[Theorem 12]{Reich}, regularity of the DAE is ensured by properties of the reduced derivative arrays given by \begin{align}\label{N.4} \mathfrak G_{[k]}(t,x,p)&=\begin{bmatrix} g_{[0]}(t,x,p)\\g_{[1]}(t,x,p)\\ \vdots \\g_{[k]}(t,x,p) \end{bmatrix}, \quad (t,x,p)\in \mathcal I\times \mathcal D\times\Real^m, \end{align} for $k\geq 0$, with \begin{align*} g_{[0]}(t,x,p)&=f(t,x,p),\\ g_{[j]}(t,x,p)&=W_{[j-1]}(t,x,p)(D_t g_{[j-1]}(t,x,p)+D_x g_{[j-1]}(t,x,p)p), \end{align*} in which $W_{[j-1]}(t,x,p)$ denotes a projector along $\im D_pg_{[j-1]}(t,x,p)$. Aiming at smooth projector functions, the Jacobian $ D_pg_{[j-1]}(t,x,p)$ is supposed to have constant rank.
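For a constant-coefficient linear DAE $Ex'+Fx=0$ the fibres of the constraint manifolds from Definition \ref{d.degree} are linear subspaces, and the recursion collapses to the subspace iteration $V_0=\Real^m$, $V_{k+1}=F^{-1}(EV_k)$ (a Wong-type sequence). The following sketch is our own numerical illustration of this special case (function names ours); the preimage is computed from a kernel basis of the stacked matrix $[F,\,-EB]$:

```python
import numpy as np

def orth(A, tol=1e-10):
    """Orthonormal basis of the column space of A (possibly empty)."""
    if A.size == 0:
        return np.zeros((A.shape[0], 0))
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > tol]

def preimage(F, B, tol=1e-10):
    """Basis of F^{-1}(im B): x-parts of ker [F, -B]."""
    m = F.shape[1]
    M = np.hstack([F, -B])
    _, s, Vt = np.linalg.svd(M)
    null = Vt[np.sum(s > tol):].T
    return orth(null[:m, :])

def constraint_dims(E, F, kmax=10):
    """Dimensions of V_0 = R^m, V_{k+1} = F^{-1}(E V_k) until stabilization."""
    m = E.shape[1]
    V, dims = np.eye(m), [m]
    for _ in range(kmax):
        V = preimage(F, orth(E @ V))
        dims.append(V.shape[1])
        if dims[-1] == dims[-2]:
            break
    return dims

# index-2 pair: x2' + x1 = 0, x2 = 0
E = np.array([[0.0, 1.0], [0.0, 0.0]]); F = np.eye(2)
dims = constraint_dims(E, F)
```

For this pair the fibre dimensions are $2,1,0$, stabilizing at the zero space, in line with a zero degree of freedom of the example.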
In contrast to the array functions introduced in Section \ref{subs.Preliminaries}, not all equations are differentiated, but only those that are actually needed, namely the so-called derivative-free equations on each level. This leads to overdetermined systems of equations with regard to the variables $(x,p)$, which then have to be consistent. This tool is often used in structured DAEs, e.g., in multibody dynamics. A comparison with the index transformation from \eqref{N.1} to \eqref{N.5} shows that this makes sense in general. \medskip Using the sets \begin{align*} L_k=\{(t,x,p)\in\mathcal I\times\mathcal D\times\Real^m: \mathfrak G_{[k]}(t,x,p)=0 \}, \; k\geq 0, \end{align*} the above constraint manifolds $\mathcal C_k$ can be represented by \begin{align*} \mathcal C_{k}=\pi_{1,2}(L_{k-1})=\{(t,x)\in \mathcal I\times\mathcal D: \exists p\in\Real^m,\mathfrak G_{[k-1]}(t,x,p)=0 \}, \end{align*} which is verified in \cite{Reich}. Owing to \cite[Theorem 12]{Reich}, the DAE \eqref{N.1} is regular if there is a non-negative integer $\nu$ such that the matrix functions ${D_{p}\mathfrak G_{[k]}}$ and $[{D_{x}\mathfrak G_{[k]}} \; {D_{p}\mathfrak G_{[k]}} ]$ have constant ranks for $0\leq k<\nu$, and the row echelon form of ${D_{p}\mathfrak G_{[\nu]}}$ is $\begin{bmatrix} I_{m}\\0 \end{bmatrix}$, independent of $(t,x,p)\in \mathcal I\times\mathcal D\times \Real^m$. Then the DAE \eqref{N.1} has regular differentiation index $\nu$ if this is the smallest such non-negative integer. \begin{remark} Let us briefly turn to the linear DAE \begin{align}\label{lin} Ex'+Fx=0.
\end{align} We have here \begin{align*} L&=\{(t,x,p)\in \mathcal I\times\Real^m\times\Real^m:E(t)p+F(t)x=0\},\\ \mathcal C_1&=\{(t,x)\in \mathcal I\times\Real^m:F(t)x\in \im E(t)\},\\ \mathcal C_1(t)&=\{x\in \Real^m:F(t)x\in \im E(t)\}=: S(t),\\ S_{(t,x)}\mathcal C_1 &=\{p\in\Real^m: p=P_S(t)p+D_tP_S(t)x\},\\ L\cap S\mathcal C_1&=\{(t,x,p)\in\mathcal I\times\Real^m\times\Real^m:E(t)p+F(t)x=0, p\in S_{(t,x)}\mathcal C_1\}, \end{align*} in which $P_S(t):=I-(WF)^+WF$ denotes the same projector function onto $S(t)=\mathcal C_1(t)$ as used in Subsection \ref{subs.qdiff}, since the set $S(t)=\ker W(t)F(t)$ from Subsection \ref{subs.qdiff} obviously coincides with $\mathcal C_1(t)$. If $x:\mathcal I\rightarrow\Real^m$ is a solution of the DAE, then it holds that \begin{align*} x(t)&=P_S(t)x(t),\\ x'(t)&=D_tP_S(t)x(t)+P_S(t)x'(t),\\ (t,x(t),x'(t))&=(t,x(t),D_tP_S(t)x(t)+P_S(t)x'(t))\in L\cap S\mathcal C_1, \quad t\in \mathcal I. \end{align*} This leads to the new DAE \begin{align*} EP_Sx'+(F+EP_S')x=0, \end{align*} and this procedure is shown in \cite{Reich} to reduce the degree by one. We underline that the same procedure is applied in Subsection \ref{subs.qdiff} for obtaining \eqref{R1}. Furthermore, as pointed out in Subsection \ref{subs.qdiff}, there is a close relationship with our basic reduction in Section \ref{s.regular}, see formulas \eqref{R2} and \eqref{R3}. Supposing the DAE \eqref{lin} to be pre-regular in the sense of Definition \ref{d.prereg}, we find that $\rank EP_S= r-\theta$, which sheds further light on the connection to the basic reduction in Section \ref{s.regular}.
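For a concrete constant pair the reduction step above can be carried out numerically in a few lines (our toy example is the index-2 pair $x_2'+x_1=0$, $x_2=0$; constant coefficients, so $P_S'=0$):

```python
import numpy as np

E = np.array([[0.0, 1.0], [0.0, 0.0]])     # index-2 pair
F = np.eye(2)

W   = np.eye(2) - E @ np.linalg.pinv(E)    # orthoprojector along im E
WF  = W @ F
P_S = np.eye(2) - np.linalg.pinv(WF) @ WF  # projector onto S = ker WF

E_new = E @ P_S                            # reduced leading coefficient
F_new = F                                  # F + E P_S' = F (constant case)
```

One verifies that $P_S$ is a projector, that $F P_S$ maps into $\im E$ (i.e. $WFP_S=0$), and that the leading coefficient loses rank, reflecting the degree reduction.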
\end{remark} \begin{remark}\label{r.RaRh} In \cite[Chapter IV]{RaRh} the geometric reduction of quasilinear autonomous DAEs (with $\mathcal C^{\infty}$ data), \begin{align}\label{RR0} E(x)x'+h(x)=0, \end{align} given by functions $E\in \mathcal C^{\infty}(\mathcal D,\Real^{m\times m}),\; h\in \mathcal C^{\infty}(\mathcal D,\Real^m)$, $\mathcal D\subseteq \Real^m$ open, is developed most revealingly in the spirit of reduction of manifolds. Among other things, the corresponding dimensions are also specified, which more clearly emphasizes the connection with our basic reduction procedure in Section \ref{s.regular} that has its prototype in \cite[Chapter II]{RaRh}. First, a general procedure to specify the associated configuration space is created. Starting from a smooth $\bar r_0$-dimensional submanifold $\mathcal C_0$ of $\Real^m$, $T\mathcal C_0$ becomes a smooth $2\bar r_0$-dimensional submanifold of $T\Real^m=\Real^m\times\Real^m$. For any given functions $E_0\in \mathcal C^{\infty}(\mathcal C_0,\Real^{m\times m}),\; h_0\in \mathcal C^{\infty}(\mathcal C_0,\Real^m)$, set \begin{align*} f_0(x,p)&=E_0(x)p+h_0(x)\in \Real^m,\quad (x,p)\in T\mathcal C_0, \end{align*} and form \begin{align*} L_0&=\{(x,p)\in T\mathcal C_0: f_0(x,p)=0\},\\ \mathcal C_1&=\pi(L_0)=\pi|_{L_0}(L_0), \end{align*} with the projection $\pi:\Real^m\times\Real^m\rightarrow\Real^m$ onto the first factor. The following two assumptions play a crucial role in this approach: \begin{description} \item[A1:] The set $L_0$ is a smooth $\bar r_0$-dimensional submanifold of $T\mathcal C_0$ with tangent space $T_{(x,p)}L_0=\ker T_{(x,p)}f_0$ for every $(x,p)\in L_0$. \item[A2:] There exists a nonnegative integer $\bar r_1\leq \bar r_0$ such that $\rank E_0(x)|_{T_x\mathcal C_0}=\bar r_1$ for all $x\in \mathcal C_0$. \end{description} In particular, these two assumptions ensure that $\pi|_{L_0}:L_0\rightarrow \Real ^m$ is a subimmersion with rank $\bar r_1$ and an open mapping into $\mathcal C_1$.
In turn, $\mathcal C_1$ is a smooth $\bar r_1$-dimensional submanifold of both $\mathcal C_0$ and $\Real^m$. This shows how manifolds $\mathcal C_i$ and $L_i$, $i\geq0$, can be defined inductively. We introduce \begin{align*} f_1(x,p)&=E_0(x)p+h_0(x)\in \Real^m,\quad (x,p)\in T\mathcal C_1 \end{align*} and form \begin{align*} L_1&=\{(x,p)\in T\mathcal C_1: f_1(x,p)=0\}= T\mathcal C_1\cap L_0,\\ \mathcal C_2&=\pi(L_1)=\pi|_{L_1}(L_1), \end{align*} and so on. If the requirements of the above two assumptions hold at each step, one obtains sequences of manifolds $L_0\supset L_1\supset\cdots\supset L_i\supset\dots $ and $\mathcal C_0\supset \mathcal C_1\supset\cdots\supset \mathcal C_i\supset\dots $, where $L_{i+1}$ is an $\bar r_{i+1}$-dimensional submanifold of $L_{i}$ and $\mathcal C_{i+1}$ is an $\bar r_{i+1}$-dimensional submanifold of $\mathcal C_{i}$, satisfying the relations \begin{align*} \mathcal C_{i+1}=\pi(L_i),\quad L_{i+1}=T\mathcal C_{i+1}\cap L_{i}. \end{align*} The pair of manifolds $(\mathcal C_0, L_0)$ is said to be \emph{completely reducible} if the sequence is well-defined up to infinity, and hence $\bar r_0\geq \bar r_1\geq\cdots\geq \bar r_i\geq \cdots $. The nonincreasing sequence of integers must eventually stabilize. Owing to \cite[Theorem]{RaRh}, if $L_{\nu}\neq\emptyset$ and $\bar r_{\nu}=\bar r_{\nu+1}$ for some integer $\nu\geq 0$, then $L_j=L_{\nu}$, $\bar r_j=\bar r_{\nu}$, for $j\geq \nu$, and $\mathcal C_j=\mathcal C_{\nu+1}$ for $j\geq \nu+1$. \medskip The described reduction applies to the DAE \eqref{RR0} by letting \begin{align*} E_{0}(x)=E(x), \; h_0(x)=h(x), \; \mathcal C_0=\mathcal D,\; \bar r_0=m,\; L_0= f_0^{-1}(0).
\end{align*} By \cite[Definition 24.1]{RaRh} the quasilinear DAE \eqref{RR0} has \emph{(geometric)\footnote{In \cite{RaRh} the suffix \emph{geometric} is still missing; it was added later in \cite[Section 3.4.1]{RR2008} to distinguish it from other terms.} index} $\nu$, $0\leq\nu\leq m$, if the pair $(\mathcal C_0, L_0)$ is completely reducible and has index $\nu$ (that is, $\nu$ is the smallest integer with $\bar r_{\nu}=\bar r_{\nu+1}$) with $L_{\nu}\neq\emptyset$. A DAE \eqref{RR0} with well-defined geometric index features locally existing and unique solutions \cite[Theorem 24.1]{RaRh}, and hence regularity. \medskip At this point, it makes sense to compare once again with linear DAEs. In \cite[Chapter II]{RaRh} the DAE $Ex'+Fx=q$ is called \emph{completely reducible} on the given interval if our basic reduction procedure described in Section \ref{s.regular}, starting from $E_0=E, F_0=F$, is well-defined up to infinity, with constants $r_{-1}:=m$, $r_j:=\rank E_j$, $j\geq 0$. The smallest integer $0\leq\nu\leq m$ such that $r_{\nu-1}=r_{\nu}$ is the \emph{(geometric) index} of the DAE. Then $E_{\nu}$ remains nonsingular, and $\rank [E_j\;F_j]= r_{j-1}$, $j\geq 0$. This means that complete reducibility is the same as regularity in the sense of Definition \ref{d.2a}. On the other hand, if $E$ and $F$ are constant matrices, then this becomes a special case of the above autonomous geometric reduction with $\bar r_{j+1}=r_j$ for all $j\geq 0$. \end{remark} \subsection{Direct approaches without using array functions} Direct concepts without recourse to derivative arrays should be possible, starting from the fact that derivatives of a function cannot contain information that is not already present in the function itself. Derivative arrays or their restricted versions are no longer used here. Instead, sequences of special matrix functions, including several projector functions, are built pointwise from the given function on its domain.
In the context of the dissection and tractability index, more general equations, \begin{align}\label{NT1} g(t,x(t),\frac{\mathrm{d}}{\mathrm{d}t}\varphi(t,x(t)))=0, \end{align} given by the two functions $g:\mathcal I_g\times\mathcal D_g\times\Real^n\rightarrow\Real^m$ and $\varphi:\mathcal I_g\times\mathcal D_g\rightarrow\Real^n$, $n\leq m$, are investigated. This has advantages both in terms of an extended solution concept and corresponding strict solvability statements with lower smoothness \cite{CRR,Jansen2014}. Since we deal with smoothness more generously here and assume $\mathcal C^1$ solutions, this equation can also be written in standard form \begin{align}\label{NT2} f(t,x(t),x'(t)):=g(t,x(t),\varphi_t(t,x(t))+\varphi_x(t,x(t))x'(t))=0. \end{align} For each smooth reference function $x_*:\mathcal I\rightarrow\Real^m$, not necessarily a solution but residing in the definition domain $\mathcal I_g\times \mathcal D_g$, it results that \begin{align*} f(t,x_*(t),x'_*(t))&=g(t,x_*(t),\varphi_t(t,x_*(t))+\varphi_x(t,x_*(t))x'_*(t))\\ &=g(t,x_*(t),\frac{\mathrm{d}}{\mathrm{d}t}\varphi(t,x_*(t))). \end{align*} Below, linear DAEs \begin{align}\label{NT3} A_*(t)(D_*x)'(t)+B_*(t)x(t)=q(t), \quad t\in \mathcal I, \end{align} with coefficients \begin{align*} A_*(t)=g_y(t,x_*(t),\frac{\mathrm{d}}{\mathrm{d}t}\varphi(t,x_*(t))),\; D_*(t)=\varphi_x(t,x_*(t)),\; B_*(t)=g_x(t,x_*(t),\frac{\mathrm{d}}{\mathrm{d}t}\varphi(t,x_*(t))), \end{align*} which arise from linearizations of the nonlinear DAE \eqref{NT1} along the given reference function, play an important role. Roughly speaking, we will decompose the domain $\mathcal I_g\times\mathcal D_g$ into certain so-called regularity regions so that all linearizations along smooth reference functions residing in one and the same region are regular with uniform index and characteristic values. The borders of a maximal regularity region then consist of critical points.
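The linearization coefficients of \eqref{NT3} can be approximated by finite differences at any point of a reference function. As a toy illustration (our own choice: $\varphi(t,x)=x$, so $n=m$, $D_*=I$, and $g=f$; the point and the sample DAE, which reappears in Example \ref{e.exp} below, are ours):

```python
import numpy as np

gamma = 1.0

def f(t, x, p):
    """Toy DAE: x1' - gamma*x1 = 0, x1^2 + x2^2 - 1 = 0 (phi(t,x) = x, g = f)."""
    return np.array([p[0] - gamma * x[0], x[0]**2 + x[1]**2 - 1.0])

def jac(F, z, h=1e-6):
    """Forward-difference Jacobian of F at z."""
    Fz = F(z)
    J = np.zeros((Fz.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z); dz[j] = h
        J[:, j] = (F(z + dz) - Fz) / h
    return J

# an admissible jet of a reference function: a point on the unit circle
t0, x0, p0 = 0.0, np.array([0.6, 0.8]), np.array([0.6, -0.45])
A_star = jac(lambda p: f(t0, x0, p), p0)   # = g_y at the reference jet
B_star = jac(lambda x: f(t0, x, p0), x0)   # = g_x at the reference jet
```

Here $A_*=\begin{bmatrix}1&0\\0&0\end{bmatrix}$ and $B_*=\begin{bmatrix}-\gamma&0\\2x_{*1}&2x_{*2}\end{bmatrix}$, up to the finite-difference error.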
\medskip The DAE \eqref{NT1} has a so-called \emph{properly involved derivative} if the decomposition \begin{align}\label{NT5} \ker g_y(t,x,y)\oplus\im \varphi_x(t,x)=\Real^n,\quad t\in \mathcal I_g,\;x\in \mathcal D_g,\; y\in \Real^n, \end{align} is valid and both matrix functions $g_y$ and $\varphi_x$ feature constant rank $r$. At this point it is worth mentioning that there are weaker versions, namely the quasi-properly involved derivative in \cite[Chapter 9]{CRR}, admitting certain rank drops of $g_y$, and the semi-properly involved derivative in \cite{Jansen2014}, requiring constant ranks but merely $\im g_y=\im g_y \varphi_x$ instead of \eqref{NT5}. The simplest version, already applied in \cite{GM86}, starts from the standard form DAE \[ f(t,x(t),x'(t))=0 \] and supposes that the partial Jacobian $f_{x^1}(t,x,x^1)$ has constant rank $r$ and $\ker f_{x^1}(t,x,x^1)=N(t)$ is independent of the variables $x$ and $x^1$. Using a smooth projector function $P:\mathcal I\rightarrow \Real^{m\times m}$ such that $\ker P(t)=N(t)$, we set $n=m$, $\varphi(t,x)=P(t)x$, and $g(t,x, y)=f(t,x,y-P'(t)x)$. Then one has $\varphi_x(t,x)=P(t)$, $g_y(t,x, y)=f_{x^1}(t,x,y-P'(t)x)$, and $\ker g_y(t,x, y)=N(t)$, and hence we arrive at a DAE with properly involved derivative. \medskip We turn back to the general case \eqref{NT1} and suppose a properly involved derivative. Analogously to Section \ref{subs.tractability} for linear DAEs, we associate to the DAE \eqref{NT1} a sequence of matrix functions built pointwise now for $t\in\mathcal I_g$, $x\in\mathcal D_g$, $x^1\in\Real^m$. It will provide relevant information about the DAE, quite comparable to the array functions above. We start letting \begin{align*} D(t,x)&:=\varphi_x(t,x),\\ A(t,x,x^1)&:= g_y(t,x,\varphi_t(t,x)+D(t,x)x^1),\\ G_0(t,x,x^1)&:=A(t,x,x^1)D(t,x),\\ B_0(t,x,x^1)&:=g_x(t,x,\varphi_t(t,x)+D(t,x)x^1).
\end{align*} Let $P_0(t,x)\in \Real^{m\times m}$ denote a smooth projector such that $\ker P_0(t,x)=\ker D(t,x)=:N_0(t,x)$ and \begin{align}\label{NT4} Q_0(t,x)=I -P_0(t,x),\quad \pPi_0(t,x)=P_0(t,x), \end{align} and introduce the generalized inverse $D(t,x,x^1)^-$ being uniquely determined by the four relations \begin{align*} D(t,x,x^1)^-D(t,x)D(t,x,x^1)^-&=D(t,x,x^1)^-,\\ D(t,x)D(t,x,x^1)^-D(t,x)&=D(t,x),\\ D(t,x,x^1)^-D(t,x)&=P_0(t,x),\\ \ker D(t,x)D(t,x,x^1)^-&=\ker A(t,x,x^1). \end{align*} Since the derivative is properly involved, it holds that $\ker G_0(t,x,x^1)=\ker D(t,x)=N_0(t,x)$. We form \begin{align*} G_1(t,x,x^1)&:=G_0(t,x,x^1)+B_0(t,x,x^1)Q_0(t,x),\\ N_1(t,x,x^1)&:=\ker G_1(t,x,x^1),\\ \widehat{N_1}(t,x,x^1)&:=N_1(t,x,x^1)\cap N_0(t,x), \end{align*} and choose projector functions $Q_1, P_1, \pPi_1: \mathcal I_g\times \mathcal D_g \times\Real^{m}\rightarrow\Real^{m\times m}$ such that pointwise \begin{align*} \im Q_1&=N_1,\quad \ker Q_1\supseteq X_1, \quad \text{with any complement}\; X_1\subseteq N_0, \; N_0=\widehat N_1\oplus X_1,\\ P_1&=I-Q_1,\; \pPi_1=\pPi_0P_1. \end{align*} We are interested in a smooth matrix function $G_1$ and require constant rank. From the case of linear DAEs we know of the necessity to incorporate the derivative of $D\pPi_1D^-$ into the next expressions. Instead of the time derivative $(D\pPi_1D^-)'$ in the linear case, we now use the total derivative in jet variables $[D\pPi_1D^-]'$ given by \begin{align*} [D\pPi_1D^-]'(t,x,x^1,x^2)&:=(D\pPi_1D^-)_t(t,x,x^1)+(D\pPi_1D^-)_x(t,x,x^1)x^1\\ &+(D\pPi_1D^-)_{x^1}(t,x,x^1)x^2. \end{align*} The subsequent matrix function $B_1= B_0P_0-G_1D^-[D\pPi_1D^-]'D\pPi_0$ now depends on the variables $t,x,x^1$, and $x^2$. On each following level of the sequence a new variable comes in owing to the involved total derivative. Now we are ready to adapt \cite[Definition 3.21]{CRR} and \cite[Definition 3.28]{CRR} concerning admissible matrix function sequences and regularity.
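The four relations above pin $D^-$ down uniquely. In the special case where $P_0$ is the orthoprojector and $\ker A$ happens to be exactly the orthogonal complement of $\im D$, the Moore--Penrose pseudoinverse satisfies all four of them, which a toy computation (matrices ours) confirms:

```python
import numpy as np

D = np.array([[1.0, 0.0]])     # stands in for phi_x, full row rank
A = np.array([[1.0], [0.0]])   # stands in for g_y; ker A = {0} = (im D)^perp
Dminus = np.linalg.pinv(D)     # candidate generalized inverse D^-
P0 = Dminus @ D                # orthoprojector along ker D
R  = D @ Dminus                # projector D D^-, here with ker = ker A = {0}
```

The first three relations hold by the defining properties of the pseudoinverse; the fourth reduces here to both $DD^-$ and $A$ being injective.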
Both are straightforward generalizations of the linear case discussed in Section \ref{subs.tractability} above. \begin{definition}\label{d.NTadmissible} Let $\mathfrak G\subseteq \mathcal I_g\times\mathcal D_g$ be an open and connected set. For a given level $\kappa\in \Natu$, we call the sequence $G_0,\ldots,G_{\kappa}$ an \emph{admissible matrix function sequence} associated with the DAE \eqref{NT1} on the set $\mathfrak G$ if it is built by the rule: \begin{align*} G_i&=G_{i-1}+B_{i-1}Q_{i-1}:\mathfrak G\times\Real^{im}\rightarrow\Real^{m\times m},\quad r_i^T=\rank G_i,\\ B_{i}&=B_{i-1}P_{i-1}-G_{i}D^-[D\pPi_{i}D^-]'D\pPi_{i-1}:\mathfrak G\times\Real^{(i+1)m}\rightarrow\Real^{m\times m},\\ &\quad N_{i}=\ker G_{i},\quad \widehat{N_{i}}=(N_0+\cdots+N_{i-1})\cap N_{i},\quad u_{i}^T=\dim \widehat{N_i},\\ & \text{fix a complement}\; X_{i}\;\text{ such that}\; N_0+\cdots+N_{i-1}=\widehat{N_{i}}\oplus X_{i},\\ &\text{choose a smooth projector function}\; Q_{i}\;\text{such that}\; \im Q_{i}=N_{i},\;\ker Q_{i}\supseteq X_{i},\\ &\text{set}\; P_{i}=I-Q_{i},\; \pPi_{i}=\pPi_{i-1}P_{i},\\ i&=1,\ldots,\kappa-1,\\ G_{\kappa}&=G_{\kappa-1}+B_{\kappa-1}Q_{\kappa-1}:\mathfrak G\times\Real^{\kappa m}\rightarrow\Real^{m\times m},\quad r_{\kappa}^T=\rank G_{\kappa}, \end{align*} and, additionally, all the involved functions $r_i^T$ and $u_i^T$ are constant. \end{definition} The total derivative used here reads in detail: \begin{align*} [D\pPi_{i}D^-]'(t,x,x^1,\ldots,x^{i+1})&=(D\pPi_{i}D^-)_t(t,x,x^1,\ldots,x^{i})+(D\pPi_{i}D^-)_x(t,x,x^1,\ldots,x^{i})x^1\\ &+\sum_{j=1}^{i}(D\pPi_{i}D^-)_{x^j}(t,x,x^1,\ldots,x^{i})x^{j+1}. \end{align*} At this point, the general agreement of this work on the smoothness of the given data also ensures the existence of these derivatives. Then the required smooth projector functions actually exist due to the demanded constancy of the ranks $r_i^T$ and the dimensions $u_i^T$.
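For the constant-coefficient toy DAE $x_1'+x_2=0$, $x_1=0$, written as $A(Dx)'+Bx=0$ (our example), the rule of Definition \ref{d.NTadmissible} terminates after two steps; all total derivatives $[D\pPi_iD^-]'$ vanish since the data are constant:

```python
import numpy as np

A = np.array([[1.0], [0.0]])            # 2x1 leading coefficient
D = np.array([[1.0, 0.0]])              # 1x2, properly involved derivative
B = np.array([[0.0, 1.0], [1.0, 0.0]])

G0 = A @ D                              # rank r_0 = 1
P0 = np.linalg.pinv(D) @ D              # projector along ker D
Q0 = np.eye(2) - P0
G1 = G0 + B @ Q0                        # rank r_1 = 1, still singular
# admissible Q1: im Q1 = ker G1 = span{(1,-1)}, ker Q1 = ker D = X_1
Q1 = np.array([[1.0, 0.0], [-1.0, 0.0]])
B1 = B @ P0                             # B_1 = B_0 P_0 (derivative term is 0)
G2 = G1 + B1 @ Q1                       # nonsingular: tractability index 2
ranks = [int(np.linalg.matrix_rank(M)) for M in (G0, G1, G2)]
```

The rank profile $1\leq 1<2=m$ reproduces the characteristic values of this index-2 example.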
The inclusions \begin{align*} \im G_i\subseteq \im G_{i+1},\quad r_i^T\leq r_{i+1}^T, \end{align*} are meant pointwise and result immediately from the construction. We refer to \cite[Section 3.2]{CRR} for further useful properties. In particular, it is possible to determine the projector functions $Q_i$ in such a way that the $\pPi_{i}$ and $\pPi_{i-1}Q_i$ are pointwise symmetric projector functions \cite[p.\ 205]{CRR}. \begin{definition}\label{d.NTregularity} Let $\mathfrak G\subseteq \mathcal I_g\times\mathcal D_g$ be an open and connected set. The DAE \eqref{NT1} is said to be \emph{regular on $\mathfrak G$ with tractability index $\mu\in\Natu$} if there is an admissible matrix function sequence reaching a pointwise nonsingular matrix function $G_{\mu}$ with $r_{\mu-1}^T<r_{\mu}^T=m$. The rank values \begin{align}\label{NT6} r=r_0^T\leq\cdots\leq r_{\mu-1}^T<r_{\mu}^T=m \end{align} are said to be \emph{characteristic values} of the DAE. The set $\mathfrak G$ is called a \emph{regularity region} of the DAE with associated index $\mu$ and characteristics \eqref{NT6}.\footnote{One can also understand $\mathfrak G^{[\mu]}=\mathfrak G\times \Real^{\mu m}$ as a regularity region. We refer to \cite[Section 3.8]{CRR} for a relevant refinement of the definition.} If $\mathfrak G$ has the structure $\mathfrak G = \mathcal I \times \mathcal G $, $\mathcal I , \mathcal G$ open, then simply $\mathcal G$ is called a regularity region, too. \end{definition} \begin{definition}\label{d.NTregpoint} The point $(\bar t,\bar x)\in \mathcal I_g\times\mathcal D_g$ is called a \emph{regular point} of the DAE if there is an open neighborhood $\mathfrak G\ni (\bar t,\bar x)$, $\mathfrak G\subseteq \mathcal I_g\times\mathcal D_g$, being a regularity region.\footnote{In case of a regularity region $\mathfrak G^{[\mu]}=\mathfrak G\times \Real^{\mu m}$ we speak of \emph{regular jets} $(\bar t,\bar x, \bar x^1,\ldots,\bar x^{\mu})$.} Otherwise, the point is called a \emph{critical point} of the DAE.
If this $\mathfrak G$ has the structure $\mathfrak G = \mathcal I \times \mathcal G $, $\mathcal I , \mathcal G$ open, then simply $\bar{x}$ is called a regular point, too. \end{definition} Regularity goes along with $u_i^T=0$ for all $i\geq 0$. It is important to add that both the index and the characteristic values do not depend on the particular choice of projector functions in the admissible sequence of matrix functions. They are also invariant with respect to regular transformations, cf.\ \cite{CRR}. The main result in the framework of the projector-based analysis of nonlinear DAEs is given by \cite[Theorem 3.33]{CRR}, which claims: \begin{description} \item[\textbullet] The DAE \eqref{NT1} is regular on $\mathfrak G$ if all linearizations \eqref{NT3} along smooth reference functions residing in $\mathfrak G$ are regular DAEs, and vice versa. \item[\textbullet] If the nonlinear DAE is regular on $\mathfrak G$ with index $\mu$ and the characteristics \eqref{NT6}, all linearizations built from reference functions residing in $\mathfrak G$ inherit this. \item[\textbullet] If all linearizations built from reference functions residing in $\mathfrak G$ are regular, then they feature a uniform index $\mu$ and uniform characteristics \eqref{NT6}. The nonlinear DAE then has the same index and characteristics. \end{description} This allows questions concerning the properties of the DAE to be traced back to its linearizations. \bigskip We underline that the concept of regularity regions does not assume the existence of solutions. However, if a solution resides in a regularity region, then for $d>0$ it is part of a regular flow with the canonical characteristics from the regularity region. In any case, also for $d=0$, the solution has no critical points. \bigskip In the dissection index concept in \cite{Jansen2014}, similar results are reproduced by using smooth basis functions instead of the projector functions.
For the linear parts, the decomposition described in Section \ref{subs.dissection} above is applied, and this is combined with rules of the tractability framework to construct a matrix function sequence emulating that from the tractability concept. This is theoretically much more intricate but may be useful in practical realizations. However, when using basis functions instead of projector functions, it must also be taken into account that there are not necessarily global bases in the multidimensional case, e.g., \cite[Remark A.16]{CRR}. Regarding linear DAEs, the dissection concept in Section \ref{subs.dissection} and the regular strangeness concept in Section \ref{subs.strangeness} are, in turn, closely related. In contrast to the basic reduction for linear DAEs in Section \ref{s.regular}, which encompasses the elimination of variables and a reduction in dimension, the original dimension is retained and all variables stay involved. This is one of the cornerstones for the adoption of the linearization concept. We quote from \cite[p.\ 65]{Jansen2014}: \textit{The index arises as we use the linearization concept of the Tractability Index and the decoupling procedure of the Strangeness Index}. In this way, the regular strangeness index also finds a variant for nonlinear DAEs by means of corresponding sequences of matrix functions and linearization. \subsection{Regularity regions and perturbation index}\label{subs.pert} The perturbation index of nonlinear DAEs is an immediate generalization of the version for linear DAEs. We slightly extend \cite[Definition 5.3]{HairerWanner} to be valid also for nonautonomous DAEs, cf.
also Definition \ref{d.perturbation} above: \begin{definition}\label{d.perturbationN} The equation \eqref{N.0} has perturbation index $\mu_p=\nu\in \Natu$ along a solution $x_*:[a,b]\rightarrow \Real^m$, if $\nu$ is the smallest integer such that, for all functions $x:[a,b]\rightarrow \Real^m$ having a defect \begin{align*} \delta(t):=f(t,x(t),x'(t)), \; t\in [a,b], \end{align*} there exists a constant $c$ such that the estimate \begin{align*} |x(t)-x_*(t)|\leq c \{|x(a)-x_*(a)|+\max_{a\leq\tau\leq t}|\delta(\tau)|+\cdots+ \max_{a\leq\tau\leq t}|\delta^{(\mu_p-1)}(\tau)|\}, \; t\in[a,b], \end{align*} holds whenever the expression on the right-hand side is sufficiently small. \end{definition} Because of its significance, we adopt the authors' comment in \cite[p.\ 479]{HairerWanner} on this definition: \emph{We deliberately do not write ``Let $x(\cdot)$ be the solution of $f(t,x(t),x'(t))=\delta(t), t\in [a,b]$ ...'' in this definition, because the existence of such a solution for an arbitrarily given $\delta(\cdot)$ is not assured.} Actually, we are confronted with a problem belonging to functional analysis, with mapping properties and the question of how solutions and their components, respectively, depend on perturbations and their derivatives. Some answers concerning linear DAEs are given by means of the projector-based analysis in \cite{CRR,Ma2014,HM2020}. In the case of nonlinear DAEs, this most significant question has hardly played an adequate role to date. Among other things, it is associated with the relationship of the differentiation index to the perturbation index and the controversies surrounding it, e.g., \cite{CaGear95,Simeon}, see also Examples \ref{e.CamGear}, \ref{e.simeon} below. The attempts made in \cite{CaGear95} to clarify the relation between these two different index notions did not achieve the intended goal.
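The role of the derivatives of the defect in Definition \ref{d.perturbationN} can be made concrete with a toy index-2 system (our example): for $x_1'+x_2=\delta_1$, $x_1=\delta_2$ the exact perturbed solution is $x_1=\delta_2$, $x_2=\delta_1-\delta_2'$, so the deviation from $x_*=0$ contains $\delta_2'$, and a small but rapidly oscillating perturbation produces an arbitrarily large deviation:

```python
import numpy as np

# x1' + x2 = 0, x1 = 0 has the solution x_* = 0.  Perturb the constraint by
# delta2(t) = eps*sin(omega*t) (delta1 = 0); then exactly
#   x1 = delta2,   x2 = -delta2' = -eps*omega*cos(omega*t).
eps = 1e-3
t = np.linspace(0.0, 1.0, 2001)

def max_deviation(omega):
    x1 = eps * np.sin(omega * t)
    x2 = -eps * omega * np.cos(omega * t)
    return max(np.max(np.abs(x1)), np.max(np.abs(x2)))

devs = [max_deviation(w) for w in (1.0, 100.0, 10000.0)]
```

While the defect stays of size $\varepsilon$ for every frequency $\omega$, the deviation grows like $\varepsilon\omega$, so no estimate without $\delta'$ on the right-hand side can hold: the perturbation index is $2$ here.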
A number of additional index terms are introduced in \cite{CaGear95} (so-called uniform and maximum indices); however, this is not helpful because quite special solvability properties of the DAE are assumed in advance. \medskip The geometric reduction approaches concentrate exclusively on the dynamic properties of unperturbed systems. The approaches via derivative arrays are based on the assumption that all derivatives are or can be calculated correctly. They are primarily intended to figure out and approximate a particular solution of an unperturbed DAE. For linear DAEs, the nature of the sensitivity of the solutions with respect to perturbations $\delta$ is determined by the structure of the canonical subspace $N_{can}$, i.e., not only by the index, but just as much by the characteristic values, see Subsection \ref{subs.posedness}. The projector-based analysis and the decoupled system in the tractability framework allow a precise and detailed insight into the dependencies. For regular linear index-$\mu$ DAEs, both the differentiation index and the perturbation index are equal to $\mu$, and the homogeneous structure of $N_{can}$ ensures homogeneous dependencies over the given interval. In contrast, for linear almost-regular DAEs featuring differentiation index $\mu$, on subintervals the perturbation index and the differentiation index may both be lower than $\mu$. Of course, nonlinear DAEs are much more complicated. As shown in the previous section, the sequences of admissible matrix functions for nonlinear DAEs allow the determination of regularity regions. All points where the required rank conditions are not fulfilled are critical points. Regarding the related method equivalences obtained for linear regular DAEs by Theorem \ref{t.Sum_equivalence}, the linearization concept of the previous section can be utilized in all these cases.
It seems that on each regularity region, the perturbation index coincides with the differentiation index and the other index notions, as is the case in the examples below. We emphasize again that regularity regions characterize the DAE without assuming the existence of solutions. Naturally, the maximal possible regularity regions are bordered by critical points. Then the definition domain of the DAE may be decomposed into maximal regularity regions. Each regularity region comprises solely regular points with the very same index and characteristics. But different regularity regions may feature different indices and characteristics. Since the matrix function sequence is built from the partial Jacobians of the given data, the same sequence evidently arises for the unperturbed and the perturbed DAE. The solutions may reside in one of the regularity regions, but they may also cross the borders or stay there. If they cross the border of regularity regions with different index values, then the perturbation index of the related solution segments changes accordingly, as in Example \ref{e.CamGear} below. \subsection{Some further comments}\label{subs.comments} In \cite{Mehrmann} it is pointed out that the requirements of Hypothesis \ref{SHypN} and that of a well-defined differentiation index are equivalent up to some (technical) smoothness requirements. Harmless critical points are not indicated at all. It may happen that the differentiation index is lower, even down to one, on partial segments. For details we refer to \cite[Remark 4.29]{KuMe2006}. A comparison of the regular differentiation index, the projector-based differentiation index, and the geometric index shows full consistency with regard to the rank conditions and a slight difference with regard to smoothness. All kinds of critical points are excluded for regularity, but they are detected in the course of the procedures.
\subsection{Nonlinear examples}\label{subs.Nexamples} We give a brief outlook for nonlinear DAEs considering some small, representative, and easy-to-follow examples and emphasize again the importance of taking into account all canonical characteristic values in addition to the index. The first two Examples \ref{e.exp} and \ref{e.sin-cos} have a positive degree of freedom and show expected singularities from a geometric point of view. The next Examples \ref{e.CamGear} and \ref{e.simeon} are classics from the literature and show harmless critical points as well as changing characteristics. With the next Example \ref{e.Ricardo} we emphasize the fact that problems with harmless critical points do not allow the geometric reduction. Our last Example \ref{e.robotic_arm} shows the so-called robotic arm DAE, a problem with zero degree of freedom, which nevertheless has serious singularities. \begin{example}[Singular index-one DAE with bifurcation and impasse points]\label{e.exp} Consider the very simple autonomous DAE \begin{align}\label{ex.2} \begin{matrix} x_1'-\gamma x_1 &=&0,\\ (x_1)^2+(x_2)^2-1 &=& 0, \end{matrix}\quad \Bigg\rbrace \end{align} its perturbed version \begin{align}\label{ex.2_pert} \begin{matrix} x_1'-\gamma x_1 &=\delta_1,\\ (x_1)^2+(x_2)^2-1 &= \delta_2, \end{matrix}\quad \Bigg\rbrace \end{align} and the associated functions \begin{align*} f(t,x,x^1)=\begin{bmatrix} x^1_1-\gamma x_1-\delta_1(t)\\(x_1)^2+(x_2)^2-1-\delta_2(t) \end{bmatrix}, \quad t\in \Real, x,x^1\in \Real^2,\\ f_{x^1}(t,x,x^1)=\begin{bmatrix} 1&0\\0&0 \end{bmatrix}, f_x(t,x,x^1)=\begin{bmatrix} -\gamma &0\\2x_1&2x_2 \end{bmatrix},\quad \gamma\in \Real \;\text{ is a given parameter}.
\end{align*} \begin{figure}[ht] \includegraphics[width=6.5cm]{CircleArrows-exp1.pdf} \includegraphics[width=6.5cm]{CircleArrows-exp2.pdf} \caption{Behavior of solutions for the DAE \eqref{ex.2} from Example \ref{e.exp} for $\gamma>0$ (left) or $\gamma<0$ (right), critical points (red) and stationary solutions (blue).} \label{fig:CircleArrows-exp} \end{figure} Obviously, the points $\begin{bmatrix} 0\\1 \end{bmatrix}$ and $\begin{bmatrix} 0\\-1 \end{bmatrix}$ serve as stationary solutions of the autonomous DAE \eqref{ex.2}. Further, for each initial point $x_0\in \Real^2$ lying on the unit circle, except for the two points on the $x_1$-axis, there exists exactly one solution to the autonomous DAE \eqref{ex.2} passing through it at $t_0=0$, namely: \begin{itemize} \item if $\gamma<0$ and $x_{0,2}>0$ then \begin{align*} x_*(t)=\begin{bmatrix} \exp{(\gamma t)} x_{0,1} \\ \, \\ \sqrt{1-\exp{(2\gamma t)}x_{0,1}^2} \end{bmatrix},\; t\in [0,\infty),\quad x_*(t)\xrightarrow{t\rightarrow \infty}\begin{bmatrix} 0\\1 \end{bmatrix}, \end{align*} \item if $\gamma<0$ and $x_{0,2}<0$ then \begin{align*} x_*(t)=\begin{bmatrix} \exp{(\gamma t)} x_{0,1} \\ \, \\ - \sqrt{1-\exp{(2\gamma t)}x_{0,1}^2} \end{bmatrix},\; t\in [0,\infty),\quad x_*(t)\xrightarrow{t\rightarrow \infty}\begin{bmatrix} 0\\-1 \end{bmatrix}, \end{align*} \item if $\gamma>0$ and $x_{0,2}>0$ then \begin{align*} x_*(t)=\begin{bmatrix} \exp{(\gamma t)} x_{0,1} \\ \, \\ \sqrt{1-\exp{(2\gamma t)}x_{0,1}^2} \end{bmatrix},\; t\in [0,t_f], \end{align*} \item if $\gamma>0$ and $x_{0,2}<0$ then \begin{align*} x_*(t)=\begin{bmatrix} \exp{(\gamma t)} x_{0,1} \\ \, \\ - \sqrt{1-\exp{(2\gamma t)}x_{0,1}^2} \end{bmatrix},\; t\in [0,t_f], \end{align*} \end{itemize} where the final time $t_f$ of the existence intervals is determined by the equation $\exp(\gamma t_f)=1/|x_{0,1}|$ and \begin{align*} x_*(t_f)=\begin{bmatrix} 1\\0 \end{bmatrix}\; \text{ for } x_{0,1}>0, \;x_*(t_f)=\begin{bmatrix} -1\\0 \end{bmatrix}\; \text{ for }
x_{0,1}<0. \end{align*} It is now evident that only the two points $\begin{bmatrix} 1\\0 \end{bmatrix}$ and $\begin{bmatrix} -1\\0 \end{bmatrix}$ are critical. For $\gamma<0$, two solutions start from each of these points, but, for $\gamma>0$, these points are so-called impasse points, cf. Figure \ref{fig:CircleArrows-exp}. From the geometric point of view, the DAE has degree $s=1$ and the unit circle can be seen as the configuration space. \medskip In the case of nontrivial perturbations $\delta_1, \delta_2$, the situation is on the one hand quite similar, but on the other hand much more intricate. In particular, the configuration space becomes time-dependent, and we are confronted with different configuration spaces for different perturbations $\delta_2$, see Figures \ref{fig:Circle_growing} and \ref{fig:Circle_growing_sin}. The final time $t_f$ also depends on the perturbations, and the location of the critical points seemingly varies. The solution representations show that, on correspondingly small time intervals $[a,b]$, this is a DAE with perturbation index one away from the critical points.
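For $\gamma>0$ the approach to the impasse point can be checked numerically. The following sketch (with illustrative values $\gamma=1$, $x_{0,1}=0.5$, not taken from the text) evaluates the explicit solution branch with $x_{0,2}>0$ up to $t_f$ and confirms both the constraint and the terminal point $\begin{bmatrix} 1\\0 \end{bmatrix}$:

```python
import numpy as np

# Numerical sketch with illustrative values: the solution branch of the
# DAE (ex.2) for gamma > 0 and x_{0,2} > 0, followed up to the impasse
# time t_f determined by exp(gamma*t_f) = 1/|x_{0,1}|.
gamma, x01 = 1.0, 0.5
t_f = np.log(1.0 / abs(x01)) / gamma

t = np.linspace(0.0, t_f, 1001)
x1 = np.exp(gamma * t) * x01
# the clip guards against tiny negative values from rounding at t = t_f
x2 = np.sqrt(np.clip(1.0 - np.exp(2 * gamma * t) * x01**2, 0.0, None))

# the constraint x1^2 + x2^2 = 1 holds along the whole solution, and
# the solution ends in the impasse point (1, 0)
constraint_defect = np.max(np.abs(x1**2 + x2**2 - 1.0))
print(constraint_defect, x1[-1], x2[-1])
```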
\begin{figure}[ht] \includegraphics[width=7cm]{Example_exp_delta0.png} \hspace{0.3cm} \includegraphics[width=7cm]{Example_exp_deltathoch2.png} \caption{Solution of the DAE \eqref{ex.2} from Example \ref{e.exp} for $\gamma=-1$ and initial value $x_{0,1}=0.98$ (left), as well as solution of \eqref{ex.2_pert} for $\delta_1(t)=0$, $\delta_2(t)=t^2$ (right), both for $t \in \left[0,1\right]$.} \label{fig:Circle_growing} \end{figure} \begin{figure}[ht] \includegraphics[width=7cm]{Example_exp_bis2Pi.png} \hspace{0.3cm} \includegraphics[width=7cm]{Example_exp_delta_07sin.png} \caption{Solution of the DAE \eqref{ex.2} from Example \ref{e.exp} for $\gamma=-1$ and initial value $x_{0,1}=0.98$ (left), as well as solution of \eqref{ex.2_pert} for $\delta_1(t)=0$, $\delta_2(t)=0.7 \sin(t)$ (right), both for $t \in \left[0,2\pi\right]$.} \label{fig:Circle_growing_sin} \end{figure} \medskip Note that the partial derivatives $f_{x^1}$ and $f_x$ are independent of the perturbations $\delta_1, \delta_2$. Applying the projector-based analysis we form the matrix function \begin{align*} G_1(t,x,x^1)=f_{x^1}(t,x,x^1)+f_x(t,x,x^1)Q_0=\begin{bmatrix} 1&0\\0&2x_2 \end{bmatrix}, \quad Q_0=\begin{bmatrix}0&0\\ 0&1 \end{bmatrix}. \end{align*} Obviously, $G_1(t,x,x^1)$ is nonsingular if and only if $x_2\neq 0$. On the other hand, the inflated system $\mathfrak F_{[1]}=0$ yields the partial Jacobian \[ \mathcal E_{[1]}(t,x,x^1,x^2)=\begin{bmatrix} 1&0&0&0\\0&0&0&0\\-\gamma&0&1&0\\2x_1&2x_2&0&0 \end{bmatrix}, \] that undergoes a rank drop from 3 for $x_2\neq 0$ to 2 for $x_2=0$. Now it becomes clear that $x_2=0$ indicates critical points, which splits $\Real^2$ into the two regularity regions \[ \mathcal G_{+}=\{x\in \Real^2:x_2>0\}\;\text{ and }\; \mathcal G_{-}=\{x\in \Real^2:x_2<0\} , \] see Figure \ref{fig:RegReg1+2} (left). The border set consists of critical points, \[ \mathcal G_{crit}=\{x\in \Real^2:x_2=0\}. 
\] On each of these regularity regions, the DAE is said to be regular with index $\mu=\mu^T=\mu^{pbdiff}=\mu^{diff}=1$ and canonical characteristics $r=1$, $\theta_0=0$. This means that for all perturbed versions of our DAE, the intersection of the corresponding configuration space with $\mathcal G_{crit}$ contains the singular points of the flow. \end{example} \begin{example}[Singular index-one DAE with critical-point-crossing solution]\label{e.sin-cos} Given the value $\gamma=1$ or $\gamma=-1$, let us have a closer look at the simple autonomous DAE \begin{align}\label{ex.1} x_1'-\gamma x_2 &=0,\\ x_1^2+x_2^2-1 &= 0.\nonumber \end{align} This DAE possesses obvious solutions, namely \begin{itemize} \item if $\gamma=1$: \begin{align*} x_*(t)=\begin{bmatrix} \sin t\\\cos t \end{bmatrix}, \; x_{**}(t)=\begin{bmatrix} 1\\0 \end{bmatrix}, \;\text{and}\; x_{***}(t)=\begin{bmatrix} -1\\0 \end{bmatrix}, \quad t\in [0, 2\pi], \end{align*} \item if $\gamma=-1$: \begin{align*} x_*(t)=\begin{bmatrix} \cos t\\\sin t \end{bmatrix}, \; x_{**}(t)=\begin{bmatrix} 1\\0 \end{bmatrix}, \;\text{and}\; x_{***}(t)=\begin{bmatrix} -1\\0 \end{bmatrix}, \quad t\in [0, 2\pi], \end{align*} \end{itemize} together with phase-shifted variants. It is evident that the first solution crosses the other ones at $t=\frac{\pi}{2}$ and at $t=\frac{3\pi}{2}$, thus the points $\begin{bmatrix} 1\\0 \end{bmatrix}, \begin{bmatrix} -1\\0 \end{bmatrix}$ appear to be singular, see Figure \ref{fig:CircleArrows-sin-cos}. \medskip As in the previous example, applying the projector-based analysis we form the matrix function \begin{align*} G_1(t,x,x^1)=f_{x^1}(t,x,x^1)+f_x(t,x,x^1)Q_0=\begin{bmatrix} 1&-\gamma\\0&2x_2 \end{bmatrix}, \quad Q_0=\begin{bmatrix}0&0\\ 0&1 \end{bmatrix}.
\end{align*} Again, $G_1(t,x,x^1)$ is nonsingular if and only if $x_2\neq 0$ and, on the other hand, the array function \[ \mathcal E_{[1]}(x)=\begin{bmatrix} 1&0&0&0\\0&0&0&0\\0&-\gamma&1&0\\2x_1&2x_2&0&0 \end{bmatrix}, \] undergoes a rank drop from 3 for $x_2\neq 0$ to 2 for $x_2=0$. It becomes clear that $x_2=0$ indicates critical points, which splits $\Real^2$ into the two regularity regions \[ \mathcal G_{+}=\{x\in \Real^2:x_2>0\}\;\text{ and }\; \mathcal G_{-}=\{x\in \Real^2:x_2<0\}, \] see Figure \ref{fig:RegReg1+2} (left), and the border consisting of critical points \[ \mathcal G_{crit}=\{x\in \Real^2:x_2=0\}. \] From the geometric point of view, the DAE has degree $s=1$ and the unit circle can be seen as the configuration space. On each of the regularity regions, the DAE is regular with index $\mu=\mu^T=\mu^{pbdiff}=\mu^{diff}=1$ and canonical characteristics $r=1$, $\theta_0=0$. The intersection of the configuration space with $\mathcal G_{crit}$ contains the singular points of the flow. Indeed, for instance, if we simulate the first solution $x_*$ from above, then we start in a regularity region. However, at $t=\frac{\pi}{2}$, this solution crosses the constant solution $x_{**}$, and at $t=\frac{3\pi}{2}$ the other constant solution $x_{***}$, which are flow singularities. In terms of characteristic monitoring, at these times the canonical characteristic value $\theta_0$ changes, and this indicates that the solution crosses the border of a regularity region.
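These rank statements are easy to confirm numerically. A small sketch for the matrices of this example, say with $\gamma=1$ (the sample points are chosen purely for illustration):

```python
import numpy as np

def G1(x2, gamma=1.0):
    # matrix function G1 of the projector-based analysis for (ex.1)
    return np.array([[1.0, -gamma], [0.0, 2.0 * x2]])

def E1(x1, x2, gamma=1.0):
    # partial Jacobian of the inflated system for (ex.1)
    return np.array([[1, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, -gamma, 1, 0],
                     [2 * x1, 2 * x2, 0, 0]], dtype=float)

# G1 is nonsingular and E1 has rank 3 away from x2 = 0 ...
print(np.linalg.matrix_rank(G1(0.3)), np.linalg.matrix_rank(E1(0.5, 0.3)))
# ... while on the border x2 = 0 both ranks drop
print(np.linalg.matrix_rank(G1(0.0)), np.linalg.matrix_rank(E1(1.0, 0.0)))
```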
\begin{figure} \includegraphics[width=6.5cm]{CircleArrows1.pdf} \includegraphics[width=6.5cm]{CircleArrows2.pdf} \caption{Behavior of solutions for Example \ref{e.sin-cos} and critical points (violet) that are also stationary solutions.} \label{fig:CircleArrows-sin-cos} \end{figure} \end{example} \begin{figure}[h] \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{RegRegSinCos.pdf} \caption*{Examples \ref{e.exp} and \ref{e.sin-cos}} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{RegRegCaGear.pdf} \caption*{Example \ref{e.CamGear}} \end{minipage} \caption{Regularity regions of three examples with $m=2$.} \label{fig:RegReg1+2} \end{figure} \begin{example}[Index change and harmless critical points]\label{e.CamGear} We revisit now \cite[Example 12]{CaGear95}, which played a role in the early discussion concerning index notions. For $\epsilon>0$, let $\gamma : \Real \rightarrow \Real$ be an infinitely differentiable function which has the property \[ \begin{array}{ll} \gamma(z)=0 & \text{ for } \left|z\right| \leq \epsilon, \\ \gamma(z)\neq 0 & \text{ else }, \end{array} \] and consider the DAE \begin{align*} \gamma(x_2)x_2'+x_1&= \delta_1, \\ x_2 &= \delta_2, \end{align*} by means of the associated functions \begin{align*} f(t,x,x^1)&=\begin{bmatrix} \gamma(x_2) x_2^1+x_1\\x_2 \end{bmatrix}-\delta(t), \quad t\in \Real,\; x,x^1\in \Real^2,\\ f_{x^1}(t,x,x^1)&=\begin{bmatrix} 0&\gamma(x_2)\\0&0 \end{bmatrix}, \; f_x(t,x,x^1)=\begin{bmatrix} 1&\gamma'(x_2)x_2^1\\0&1 \end{bmatrix}. \end{align*} The DAE is solvable for each arbitrary smooth perturbation $\delta$. The solutions are given by \begin{align*} x_1&=\delta_1-\gamma(\delta_2)\delta'_2,\\ x_2&=\delta_2, \end{align*} which indicates that the perturbation index is not greater than two and the dynamic degree of freedom is $d=0$.
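This solution formula can be verified symbolically, keeping $\gamma$ as an arbitrary smooth function. A sketch with sympy (the function names are ours):

```python
import sympy as sp

# Symbolic check of the solution formula for the perturbed DAE of this
# example, with gamma kept as an arbitrary (smooth) function symbol.
t = sp.symbols('t')
gamma = sp.Function('gamma')
d1, d2 = sp.Function('d1')(t), sp.Function('d2')(t)

x2 = d2
x1 = d1 - gamma(d2) * sp.diff(d2, t)

# the defects of both equations vanish identically
res1 = sp.simplify(gamma(x2) * sp.diff(x2, t) + x1 - d1)
res2 = sp.simplify(x2 - d2)
print(res1, res2)
```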
Not surprisingly, using the projector-based analysis, we observe three regularity regions showing different characteristics, \[ \mathcal G_{+}=\{x\in \Real^2:x_2>\epsilon\}\;\text{ and }\; \mathcal G_{-}=\{x\in \Real^2:x_2<-\epsilon\}, \] and \[ \mathcal G_{\epsilon}=\{x\in \Real^2:|x_2|<\epsilon\}, \] see Figure \ref{fig:RegReg1+2} (right), and in detail \begin{align*} \text{on}\quad \mathcal G_{+}:&\quad r=1, \quad \theta_0=1,\quad \theta_1=0,\quad \mu=2,\\ \text{on}\quad \mathcal G_{\epsilon}:&\quad r=0, \quad \theta_0=0,\quad \mu=1,\\ \text{on}\quad \mathcal G_{-}:&\quad r=1, \quad \theta_0=1, \quad \theta_1=0,\quad \mu=2. \end{align*} The borders between these regularity regions consist of critical points. All these critical points are obviously harmless. If a solution fully resides in $\mathcal G_{+}$ or $\mathcal G_{-}$ then the DAE has perturbation index $\mu_p=2$ along this solution. In contrast, if a solution fully resides in $\mathcal G_{\epsilon}$ then the DAE has perturbation index $\mu_p=1$ along this solution. Of course, there might be solutions crossing the borders, and then the perturbation index changes accordingly along the solution. \end{example} \begin{example}[Campbell's counterexample] \label{e.simeon} This is a special case of \cite[Example 10]{CaGear95} which was picked out and discussed in the essay \cite[p.\ 73]{Simeon}. It is about the relationship between the differentiation index and the perturbation index. Consider the DAE \begin{align*} x_3x_2'+x_1 &=\delta_1,\\ x_3x_3' +x_2 &= \delta_2,\\ x_3&= \delta_3, \end{align*} and the associated functions \begin{align*} f(t,x,x^1)&=\begin{bmatrix} x_3x_2^1+x_1\\x_3x_3^1+x_2\\x_3 \end{bmatrix}-\delta(t), \quad t\in \Real,\, x,x^1\in \Real^3,\\ f_{x^1}(t,x,x^1)&=\begin{bmatrix} 0&x_3&0\\0&0&x_3\\0&0&0 \end{bmatrix}, f_x(t,x,x^1)=\begin{bmatrix} 1&0&x_2^1\\0&1&x_3^1\\0&0&1 \end{bmatrix}.
\end{align*} The DAE obviously has a unique solution for each arbitrary smooth perturbation $\delta$, namely \begin{align*} x_3&=\delta_3,\\ x_2&=\delta_2-\delta_3'\delta_3,\\ x_1&= \delta_1-\delta_3 (\delta_2-\delta_3'\delta_3)'=\delta_1-\delta_3\delta'_2+\delta_3(\delta'_3)^2+(\delta_3)^2\delta''_3, \end{align*} which clearly indicates perturbation index $\mu_p=3$ and zero dynamic degree of freedom. In contrast, by Definition \ref{d.N1} (which is controversial) the unperturbed DAE with $\delta=0$ has differentiation index $\mu^{diff}=1$, with the underlying ODE $x'=0$ given on the single point $x=0$. Note that the related array function $ \mathcal E_{[1]}$ in Definition \ref{d.N1a} undergoes a rank drop at $x_3=0$, \[ \mathcal E_{[1]}=\begin{bmatrix} 0&x_3&0&0&0&0\\ 0&0&x_3&0&0&0\\ 0&0&0&0&0&0\\ 1&x_3^1&0&0&x_3&0\\ 0&1&x_3^1&0&0&x_3\\ 0&0&1&0&0&0 \end{bmatrix},\quad r_{[1]}=\rank \mathcal E_{[1]}=\Bigg\lbrace \begin{matrix} 4\;\text{ for } x_3\neq 0\\ 3\;\text{ for } x_3= 0 \end{matrix}\;. \] In particular, the rank function $r_{[1]}$ fails to be constant on every neighborhood of the origin, which would be necessary for an index-1 DAE in the sense of the precise Definition \ref{d.N1a}. According to our understanding, the DAE has the regularity regions \begin{align*} \mathcal G_{+}=\{z\in \Real^3:x_3>0\},\quad \mathcal G_{-}=\{z\in \Real^3:x_3<0\}, \end{align*} and the critical point set \begin{align*} \mathcal G_{crit}=\{z\in \Real^3:x_3=0\}.
\end{align*} At the points of the regularity regions we form the admissible matrix functions from the projector-based framework, \begin{align*} A=f_{x^1},\; D=P=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\; G_0=AD=\begin{bmatrix} 0 & x_3 & 0 \\ 0 & 0 & x_3 \\ 0 & 0 & 0 \end{bmatrix}, \; r_0^T=2, \; B_0=\begin{bmatrix} 1&0&x_2^1\\0&1&x_3^1\\0&0&1 \end{bmatrix}, \end{align*} and \begin{align*} Q_0&=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\; G_1=\begin{bmatrix} 1 & x_3 & 0 \\ 0 & 0 & x_3 \\ 0 & 0 & 0 \end{bmatrix}, \; r_1^T=2,\; B_1=\begin{bmatrix} 0&0&x_2^1\\0&1&x_3^1\\0&0&1 \end{bmatrix},\\ Q_1&=\begin{bmatrix} 0 & -x_3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},\; G_2=\begin{bmatrix} 1 & x_3 & 0 \\ 0 & 1 & x_3 \\ 0 & 0 & 0 \end{bmatrix},\; r_2^T=2, \; B_2=\begin{bmatrix} 0&0&x_2^1\\0&0&x_3^1\\0&0&1 \end{bmatrix},\\ Q_2&=\begin{bmatrix} 0 & 0 & (x_3)^2 \\ 0 & 0 & -x_3 \\ 0 & 0 & 1 \end{bmatrix}, G_3=\begin{bmatrix} 1 & x_3 & 0 \\ 0 & 1 & x_3 \\ 0 & 0 & 1 \end{bmatrix},\; r_3^T=3. \end{align*} Therefore, on both regularity regions, the DAE is regular with index $\mu^T=\mu_p=\mu^{diff}=3$ and canonical characteristics \begin{align*} r=2, \quad \theta_0=1, \quad \theta_1= 1, \quad \theta_2=0, \quad d=0. \end{align*} All points from $\mathcal G_{crit}$ are harmless critical points, as the above solution representation confirms. It should also be added that the further array functions $\mathcal E_{[2]}$ and $\mathcal E_{[3]}$ feature constant ranks, and the differentiation index is well-defined and equal to one on the entire domain. \end{example} \begin{example}[Riaza's counterexample] \label{e.Ricardo} The following example is part of the discussion of whether problems with harmless critical points are accessible to treatment by the geometric reduction from \cite{RaRh}.
It is commented in \cite[p.\ 186]{RR2008} with the words: \emph{There is no way to apply the framework of \cite{RaRh} neither globally nor locally around the origin}. We consider the perturbed version of the system \cite[(4.6), p.\ 186]{RR2008}, \begin{align*} x_1'-\alpha(x_1,x_2, x_3) &=\delta_1, \\ x_1x_2' - x_3 &= \delta_2, \\ x_2 &= \delta_3, \end{align*} in which $\alpha$ denotes a smooth function. We recognize that \begin{align*} x_2 &= \delta_3,\\ x_3 &= -\delta_2+x_1\delta_3', \\ x_1'-\alpha(x_1,\delta_3, -\delta_2+x_1\delta_3') &=\delta_1, \end{align*} so that it becomes clear that the DAE is solvable for all sufficiently smooth perturbations $\delta$ and initial conditions for the first solution component. To apply the projector-based analysis we use the associated functions \begin{align*} f(t,x,x^1)&=\begin{bmatrix} x^1_1-\alpha(x_1,x_2,x_3) \\x_1x_2^1-x_3\\x_2 \end{bmatrix}, \quad t\in \Real, x,x^1\in \Real^3,\\ f_{x^1}(t,x,x^1)&=\begin{bmatrix} 1&0&0\\0&x_1&0\\0&0&0 \end{bmatrix}, \\ f_x(t,x,x^1)&=\begin{bmatrix} -\alpha_{x_1}(x_1,x_2,x_3)& -\alpha_{x_2}(x_1,x_2,x_3)& -\alpha_{x_3}(x_1,x_2,x_3)\\x_2^1&0&-1\\0&1&0 \end{bmatrix}. \end{align*} Supposing $x_1\neq 0$, we form the admissible matrix functions. We drop the arguments of the functions whenever it is reasonable. We obtain \begin{align*} Q_0=\begin{bmatrix} 0&0&0\\0&0&0\\0&0&1 \end{bmatrix},\; G_1=f_{x^1}+f_xQ_0=\begin{bmatrix} 1&0&-\alpha_{x_3}\\0&x_1&-1\\0&0&0 \end{bmatrix},\; r_1^T=\rank G_1=2,\; \theta_0=1,\\ Q_1=\begin{bmatrix} 0&\alpha_{x_3}x_1&0\\0&1&0\\0&x_1&0 \end{bmatrix},\; G_2=G_1+B_1Q_1=\begin{bmatrix} 1&*&*\\0&x_1&-1\\0&1&0 \end{bmatrix}, \; r_2^T=\rank G_2=3,\; \theta_1=0. \end{align*} Consequently, the DAE has the two regularity regions \begin{align*} \mathcal G_{+}=\{z\in \Real^3:x_1>0\},\quad \mathcal G_{-}=\{z\in \Real^3:x_1<0\} \end{align*} and the critical point set \begin{align*} \mathcal G_{crit}=\{z\in \Real^3:x_1=0\}.
\end{align*} Regarding the solvability properties, we know the critical points to be harmless. If a solution does not cross or touch the critical point set, then the perturbation index is two along this solution. Obviously, along reference functions $x_*$ with vanishing first components, the perturbation index reduces to one. \end{example} \begin{example}[DAE describing a two-link robotic arm]\label{e.robotic_arm} The so-called robotic arm DAE is a well-understood benchmark for higher-index DAEs of the form \begin{align*}\left(\begin{bmatrix} I_6 & \\ & 0_2 \end{bmatrix}x \right)'+b(x,t)=0, \end{align*} which is well described in the literature, see \cite{CaKu2019}, \cite{ELMRoboticArm2020} and the references therein. It results from a tracking problem in robotics and presents two types of singularities. Without going into the details of the $m=8$ equations and variables with $r=6$, here we interpret them in terms of the characteristics $\theta_i$. The DAE describes a two-link robotic arm with an elastic joint moving on a horizontal plane, see Figure \ref{fig:RA} (left). The third variable $x_3$ of the equations corresponds to the rotation of the second link with respect to the first link. \begin{figure}[htbp] \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{TwoArms+Path.pdf} \caption*{The blue marker corresponds to the elastic joint of the two links. At the black marker the end of one link is fixed to the origin. The position of the red endpoint of the outer link is prescribed by a path. } \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{Circles+Path.pdf} \caption*{If the prescribed path crosses a singularity circle $C_r$ with radius $r$, then singularities of the DAE arise. $C_0$ and $C_2$ correspond to $\sin(x_3)=0$, while $\cos(x_3)=z_*$ leads to $C_{1.7159}$ for certain model parameters.
} \end{minipage} \caption{As $x_3$ is the angle between the two links, the singularities can be interpreted geometrically, see \cite{ELMRoboticArm2020}, where also the original figures and the discussion of the parameters can be found.} \label{fig:RA} \end{figure} In \cite{ELMRoboticArm2020} it has been shown that critical points arise at \[ \cos(x_3)=z_* \quad \text{or} \quad \sin(x_3)=0, \] where the constant value $z_*$ depends on the particular parameters of the model. As a consequence, the original definition domain of the DAE $\mathcal I\times\mathcal D=\mathcal I\times\Real^8$ decomposes into an infinite number of regularity regions whose borders are hyperplanes consisting of the corresponding critical points. By \cite[Proposition 5.1]{ELMRoboticArm2020}, the canonical characteristics are the same on all regularity regions. If the component $x_{*,3}$ of a solution $x_*$ crosses or touches such a critical hyperplane, then this gives rise to a singular behavior. In the case of the robotic arm, this happens if the prescribed path crosses so-called singularity circles, see Figure \ref{fig:RA} (right). In regularity regions, we obtain $\mu^T=\mu^{pbdiff}=5$ and \begin{align*} \theta_0 &=8-r_1^T=\rank T_1=2, \quad & \theta_1 &=8-r_2^T= \rank T_2=2, \\ \theta_2 &=8-r_3^T=\rank T_3=1, \quad & \theta_3 &=8-r_4^T=\rank T_4=1, \\ \theta_4 &=8-r_5^T=\rank T_5=0, \quad & d & =0. \end{align*} Indeed, monitoring these ranks is how the singularities $\cos(x_3)=z_*$ were detected, which to our knowledge had not been described before. \end{example} \section{Conclusions} Until now, the DAE literature has been rather heterogeneous, since each approach uses quite different starting points, definitions, and assumptions, leading to its own results. The diversity of the frameworks made it difficult to compare them. Although a few equivalence statements were proven, a general and rigorous framework was missing so far.
To get to our Main Theorem \ref{t.Sum_equivalence} for linear DAEs, we revised and compiled many results from the literature and closed several gaps. In doing so, a characterization of regularity and almost regularity that is interpretable for all approaches emerged in a straightforward manner. This theorem with the canonical characteristics is, in our opinion, the common ground of all the considered approaches. For nonlinear DAEs, we worked out aspects for possible further investigations. \bibliography{CommonGround_DAEs}{} \bibliographystyle{plain} \section{Appendix}\label{s.Anhang} \subsection{1-full matrix functions and rank estimations}\label{subs.A1} A matrix $M\in \Real^{(s+1)m\,\times\, (s+1)m}$ with block structure built from $m\times m$ blocks is said to be \emph{$1$-full} if there is a nonsingular matrix $T$ such that $TM=\begin{bmatrix}I_{m}&0\\0&H \end{bmatrix}$. Let $M:\mathcal I\rightarrow \Real^{(s+1)m\,\times\, (s+1)m}$ be a continuous matrix function which has block structure built from $m\times m$ matrix functions. $M$ is said to be \emph{smoothly $1$-full} if there is a pointwise nonsingular, continuous matrix function $T$ such that $TM=\begin{bmatrix}I_{m}&0\\0&H \end{bmatrix}$. \begin{lemma}\label{l.app} Let $M:\mathcal I\rightarrow \Real^{(s+1)m\,\times\, (s+1)m}$ be a continuous matrix function which has block structure built from $m\times m$ matrix functions. The following assertions are equivalent: \begin{description} \item[\textrm{(1)}] $M$ is smoothly $1$-full and has constant rank $r_{M}$. \item[\textrm{(2)}] $M$ has constant rank $r_{M}$ and $M(t)$ is $1$-full pointwise for each $t\in \mathcal I$.
\item[\textrm{(3)}] There is a continuous function $H:\mathcal I\rightarrow \Real^{sm\,\times\, sm}$ with constant rank $r_{H}$ such that \begin{align}\label{a.kerM} \ker M=\{\begin{bmatrix} z\\w \end{bmatrix}\in \Real^{m}\times\Real^{sm}: z=0, w\in \ker H \}. \end{align} \item[\textrm{(4)}] $M$ has constant rank $r_{M}$ and \begin{align*} T_{[s]}\ker M=\{0\}, \end{align*} with the truncation matrix $T_{[s]}=[I_{m}\, 0\cdots 0]\in\Real^{m\times (s+1)m}$. \end{description} \end{lemma} \begin{proof} {\textrm (1)}$\leftrightarrow${\textrm (2)}: The forward direction is trivial; the opposite direction is provided, e.g., by \cite[Lemma 3.36]{KuMe2006}. {\textrm (3)}$\leftrightarrow${\textrm (4)}: The forward direction is trivial; we immediately turn to the opposite one. Let $M$ have constant rank $r_{M}$ and $T_{[s]} \ker M=\{0\}$. Denote by $Q_{M}$ the continuous orthoprojector function onto $\ker M$. Then, $Q_{M}$ must have the special form \begin{align*} Q_{M}&=\begin{bmatrix} 0&0\\0&K \end{bmatrix}:\mathcal I\rightarrow \Real^{(m+sm)\times (m+sm)},\\ &K=K^*:\mathcal I\rightarrow \Real^{sm\times sm},\quad \rank K=\rank Q_{M}=(m+sm)-r_{M}. \end{align*} Let $Z:\mathcal I\rightarrow \Real^{sm \times (r_{M}-m)}$ denote a continuous basis of $(\im K)^{\perp}$ such that $\rank Z=r_{M}-m$. Then the assertion becomes true with \begin{align*} H=\begin{bmatrix} Z^*\\0 \end{bmatrix}:\mathcal I\rightarrow \Real^{(sm)\times (sm)}, \quad \rank H=\rank Z^*=r_{M}-m. \end{align*} {\textrm (1)}$\leftrightarrow${\textrm (3)}: Since the forward direction is trivial again, we immediately turn to the opposite one. Let $H:\mathcal I\rightarrow \Real^{sm\,\times\, sm}$ be a continuous function with constant rank $r_{H}$ such that \eqref{a.kerM} is valid. Then $\ker M$ has dimension $\dim\ker H=sm-r_{H}$, which is constant. Therefore, $M$ has constant rank $r_{M}=(s+1)m-(sm-r_{H})=m+r_{H}$.
The projector-valued matrix functions \begin{align*} W_{M}=I_{sm+m}-MM^{+} \quad \text{ and } \quad Q_{M}=I_{sm+m}-M^{+}M=\begin{bmatrix} 0&0\\0&I_{sm}-H^{+}H \end{bmatrix} \end{align*} are continuous and both have constant rank $sm+m-r_{M}=:\rho$. Then there are continuous matrix functions $U_{W}, V_{W}, U_{Q}, V_{Q}$, pointwise orthogonal, such that (e.g. \cite[Theorem 3.9]{KuMe2006}) \begin{align*} Q_{M}= U_{Q}\begin{bmatrix} \Sigma_{Q}&0\\0&0 \end{bmatrix} V^{T}_{Q},\quad W_{M}=U_{W}\begin{bmatrix} \Sigma_{W}&0\\0&0 \end{bmatrix}V^{T}_{W}, \end{align*} with nonsingular sigma blocks of size $\rho$. Set \begin{align*} \mathcal C=V_{Q}\begin{bmatrix} \Sigma_{Q}^{-1}\Sigma_{W}^{-1}&0\\0&0 \end{bmatrix}U^{T}_{W} \end{align*} such that \begin{align*} Q_{M}\mathcal C W_{M}=U_{Q}\begin{bmatrix} I_{\rho}&0\\0&0 \end{bmatrix}V^{T}_{W},\quad \im Q_{M}\mathcal CW_{M}= \im Q_{M},\quad \ker Q_{M}\mathcal CW_{M}=\ker W_{M}, \end{align*} and the matrix function $ T= Q_{M}\mathcal C W_{M}+M^{+}$ is continuous and nonsingular. It follows that $TM=I_{sm+m}- Q_{M}=M^{+}M=\diag(I_{m},H^{+}H)$, which means that $M$ is smoothly $1$-full. \end{proof} \begin{lemma}\label{l.app2} For $m\in \Natu$, let $E: \mathcal I\rightarrow \Real^{m\,\times \, m}$ be a matrix function and let matrix functions \[ \mathcal M_{[0]}(t) :=E(t), \quad \mathcal M_{[k]}: \mathcal I\rightarrow \Real^{(k+1)m\,\times\, (k+1)m}, \quad k=1 , \ldots \] be defined such that the structure \begin{eqnarray} \mathcal M_{[k+1]}(t) := \begin{bmatrix} \mathcal M_{[k]}(t) & 0 \\ * & E(t) \end{bmatrix} \label{eq:Structure_M} \end{eqnarray} is given. Then, for $r(t):= \rank E(t)$ and $r_{[k]}(t):=\rank \mathcal M_{[k]}(t)$, it holds that \[ r_{[k]}(t) + r(t) \leq r_{[k+1]}(t) \leq r_{[k]}(t) + m, \quad t \in {\mathcal I}, \, k \geq 0. \] \end{lemma} \begin{proof} The structure \eqref{eq:Structure_M} obviously leads to \[ r_{[k+1]}(t) \geq r_{[k]}(t) + r(t).
\] and \[ r_{[k+1]}(t) = \dim \im \begin{bmatrix} \mathcal M_{[k]}(t) & 0 \\ * & E(t) \end{bmatrix} \leq \dim \im \begin{bmatrix} \mathcal M_{[k]}(t) \end{bmatrix} +m= r_{[k]}(t)+m. \] \end{proof} \subsection{Continuous matrix function with rank changes}\label{subs.M} We quote the following useful result from \cite[Proof of Theorem 10.5.2]{CaMe}: \begin{theorem}\label{t.M} Let the matrix function $M:[a, b]\rightarrow\Real^{m\times n}$ be continuous and let \begin{align*} \varphi=\{t_0\in [a, b]: \rank M(t)\;\text{ is not continuous at }\; t_0\} \end{align*} denote the set of its rank-change points. Then the set $\varphi$ is closed and has no interior, and there exists a collection of open intervals $\{(a^{\ell}, b^{\ell})\}_{{\ell}\in \mathfrak S}$, indexed by a set $\mathfrak S$, such that \begin{align*} \overline{\bigcup_{ {\ell} \in \mathfrak S}(a^{{\ell}}, b^{\ell})}=[a, b],\quad (a^{\ell_i}, b^{\ell_i})\cap (a^{\ell_j}, b^{\ell_j})=\emptyset \quad\text{for}\quad \ell_i \neq \ell_j, \end{align*} and integers $r^{\ell}\geq 0$, ${\ell}\in \mathfrak S$, such that \begin{align*} \rank M(t)=r^{\ell} \quad \text{for all}\quad t\in (a^{\ell}, b^{\ell}),\quad {\ell} \in \mathfrak S. \end{align*} \end{theorem} As emphasized already in \cite{CaMe}, the set $\varphi$ and in turn the index set $\mathfrak S$ may be finite, countable, or even uncountable.
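As a minimal illustration of the theorem (our own toy example, not from \cite{CaMe}), the continuous matrix function $M(t)=\diag(t,1)$ on $[-1,1]$ has the single rank-change point $t=0$, and the open intervals $(-1,0)$ and $(0,1)$ carry the constant rank $2$:

```python
import numpy as np

# Toy illustration of the rank-change theorem: M(t) = diag(t, 1) is
# continuous on [-1, 1]; its rank is 2 on (-1, 0) and (0, 1) and drops
# to 1 exactly at the single rank-change point t = 0.
def M(t):
    return np.array([[t, 0.0], [0.0, 1.0]])

ranks = {t: int(np.linalg.matrix_rank(M(t))) for t in (-0.5, 0.0, 0.5)}
print(ranks)
```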
\subsection{Strictly block upper triangular matrix functions and array functions of them}\label{subs.A_strictly} In this part, for given integers $\nu\geq 2, l\geq\nu, l_1\geq 1,\ldots,l_{\nu}\geq1$, such that $l=l_1+\ldots+l_{\nu}$, we denote by $SUT=SUT(l,\nu,l_1,\ldots,l_{\nu})$ the set of all strictly upper triangular matrix functions $N:\mathcal I\rightarrow \Real^{l\times l}$ showing the block structure \begin{align*} N=\begin{bmatrix} 0&N_{12}&*&\cdots&*\\ &0&N_{23}&*&*\\ &&\ddots&\ddots&\vdots\\ &&&&N_{\nu-1, \nu}\\ &&&&0 \end{bmatrix}, \quad N_{ij}=(N)_{ij}:\mathcal I\rightarrow \Real^{l_i\times l_j},\quad N_{ij}=0 \quad\text{for}\quad i\geq j. \end{align*} If $l=\nu$ and $l_1=\cdots=l_{\nu}=1$, then $N$ is strictly upper triangular in the usual sense. The following lemma collects some rules that can be checked by straightforward computations. \begin{lemma}\label{l.SUT1} $N,\hat N\in SUT$ and $N_1,\ldots,N_k\in SUT$ imply \begin{description} \item[\textrm{(1)}] $N+\hat N \in SUT$. \item[\textrm{(2)}] $N\hat N \in SUT$, and the entries of the secondary diagonals are \begin{align*} (N\hat N)_{i,i+1}&=0, \quad i=1,\ldots,\nu-1,\\ (N\hat N)_{i,i+2}&=(N)_{i,i+1}(\hat N)_{i+1,i+2},\quad i=1,\ldots,\nu-2. \end{align*} \item[\textrm{(3)}] $N^{\nu}=0$ and $N_1\cdots N_k=0$ for $k\geq\nu$. \item[\textrm{(4)}] $I-N$ is nonsingular and $(I-N)^{-1}=I+N+\cdots+N^{\nu-1}$. \item[\textrm{(5)}] $(I-\hat N)^{-1}N=N+\hat NN+\cdots+(\hat N)^{\nu-2}N$ and \begin{align*} ((I-\hat N)^{-1}N)_{i,i+1}=(N)_{i,i+1}, \quad i=1,\ldots,\nu-1. \end{align*} \end{description} \end{lemma} The following two subsets of $SUT$ are of special interest because they enable rank determinations and beyond. \textbullet\; Supposing $l_1\geq \cdots\geq l_{\nu}$ we denote by $SUT_{column}\subset SUT$ the set of all $N\in SUT$ having exclusively blocks $(N)_{i,i+1}$ with full column rank, that is \begin{align}\label{N.col} \rank (N)_{i,i+1}=l_{i+1},\quad i=1,\ldots,\nu-1.
\end{align} \textbullet\; Supposing $l_1\leq \cdots\leq l_{\nu}$ we denote by $SUT_{row}\subset SUT$ the set of all $N\in SUT$ having exclusively blocks $(N)_{i,i+1}$ with full row rank, that is \begin{align}\label{N.row} \rank (N)_{i,i+1}=l_{i},\quad i=1,\ldots,\nu-1. \end{align} \begin{lemma}\label{l.Ncol} Each $N\in SUT_{column}$ has constant rank $l-l_1$ and, for $k\leq \nu-1$, one has \begin{align*} \ker N=\im \begin{bmatrix} I_{l_1}\\0 \end{bmatrix},\quad \ker N^k=\im \begin{bmatrix} I_{l_1}&& \\ &\ddots&\\ & & I_{l_k}\\ 0&&0 \end{bmatrix}. \end{align*} Moreover, for the product of any $k$ elements $N_1,\ldots, N_k\in SUT_{column}$ it holds that \begin{align*} \ker N_1\cdots N_k=\im \begin{bmatrix} I_{l_1}&& \\ &\ddots& \\ & & I_{l_k}\\ 0&&0 \end{bmatrix}, \quad \rank N_1\cdots N_k = l-(l_1+\cdots+l_k). \end{align*} \end{lemma} \begin{proof} This follows by generalizing Lemma \ref{l.SUT1} (2) to products of several matrices and using \eqref{N.col}. \end{proof} \begin{lemma}\label{l.Nrow} Each $N\in SUT_{row}$ has constant rank $l-l_{\nu}$ and, for $k\leq \nu-1$, one has \begin{align*} \im N=\im \begin{bmatrix} I_{l_1}& & \\ &\ddots& \\ & & I_{l_{\nu-1}}\\ 0&&0 \end{bmatrix},\quad \im N^k=\im \begin{bmatrix} I_{l_1}& & \\ &\ddots& \\ & & I_{l_{\nu-k}} \\ 0& & 0 \end{bmatrix}. \end{align*} Moreover, for the product of any $k$ elements $N_1,\ldots, N_k\in SUT_{row}$ it holds that \begin{align*} \im N_1\cdots N_k=\im \begin{bmatrix} I_{l_1}&& \\ &\ddots& \\ & & I_{l_{\nu-k}}\\ 0&&0 \end{bmatrix}, \quad \rank N_1\cdots N_k = l_1+\cdots+l_{\nu-k}. \end{align*} \end{lemma} \begin{proof} This follows by generalizing Lemma \ref{l.SUT1} (2) to products of several matrices and using \eqref{N.row}.
\end{proof} Next we turn to the derivative array function\footnote{See \eqref{eq.arrayNk} in Section \ref{subs.SCFarrays}.} $\mathcal N_{[k]}:\mathcal I\rightarrow\Real^{(kl+l)\times(kl+l)}$ associated with $N\in SUT$, that is, \begin{align}\label{N.array} \mathcal N_{[k]}:= \begin{bmatrix} N&0&&&\cdots&0\\ I+\alpha_{2,1} N^{(1)}&N&&&&\vdots\\ \alpha_{3,1} N^{(2)}&I+\alpha_{3,2}N^{(1)}&N&&\\ \vdots& \ddots&\ddots&\ddots& &\\ \vdots& &\ddots&\ddots&\ddots&0\\ \alpha_{k+1,1}N^{(k)}&\cdots&&\alpha_{k+1,k-1}N^{(2)}&I+ \alpha_{k+1,k}N^{(1)}&N \end{bmatrix}. \end{align} Since the derivatives $N^{(i)}$ also inherit the basic structure of $SUT$, we rewrite \begin{align}\label{N.M} \mathcal N_{[k]}:= \begin{bmatrix} N&0&&&\cdots&0\\ I+M_{2,1}&N&&&&\vdots\\ M_{3,1}&I+M_{3,2}&N&&\\ \vdots& \ddots&\ddots&\ddots& &\\ \vdots& &\ddots&\ddots&\ddots&0\\ M_{k+1,1}&\cdots&&M_{k+1,k-1}&I+ M_{k+1,k}&N \end{bmatrix}, \end{align} and keep in mind that $M_{i,j}\in SUT$ for all $i$ and $j$. \begin{lemma} \label{l.existence.Ntilde} Given $N\in SUT$, there exists a nonsingular lower block triangular matrix function $\mathcal L_{[k]}$, such that \[ \mathcal L_{[k]} \cdot \mathcal N_{[k]} = \begin{bmatrix} N&0&&&\cdots&0\\ I&\tilde{N}_2&&&&\vdots\\ 0 &I&\tilde{N}_3&&\\ \vdots& \ddots&\ddots&\ddots& &\\ \vdots& &\ddots&\ddots&\ddots& 0\\ 0&\cdots&&0&I&\tilde{N}_{k+1} \end{bmatrix}= \mathcal{\tilde{N}}_{[k]}, \] where the matrix functions $\tilde{N}_{2},\ldots,\tilde{N}_{k+1}$ again belong to $SUT$ and inherit the secondary diagonal blocks of $N$, that is, $(\tilde N_s)_{i,i+1}=N_{i,i+1}$, $i=1,\ldots,\nu-1$, $s=2,\ldots,k+1$. It holds that \begin{align*} N\tilde N_2\cdots \tilde N_{k+1}=0\quad \text{if}\quad k\geq l-1. \end{align*} Moreover, for $i=1,\ldots,l$, as long as $i+k+1\leq l$, we have \begin{align*} (N\tilde N_2\cdots \tilde N_{k+1})_{i,i+j}&=(N^{k+1})_{i,i+j}=0,\quad j=1,\ldots,k,\\ (N\tilde N_2\cdots \tilde N_{k+1})_{i,i+k+1}&=(N^{k+1})_{i,i+k+1}.
\end{align*} \end{lemma} \begin{proof} Since $I+M_{2,1}$ is nonsingular, with the nonsingular lower block-triangular matrix function \[ \mathcal L_{[k]}^1 =\begin{bmatrix} I &0 & 0 & & \cdots & 0 \\ 0 & (I+M_{2,1})^{-1} & 0 & & \cdots & 0 \\ 0 & -M_{3,1}(I+M_{2,1} )^{-1} & I & 0 & \cdots& 0 \\ 0 & -M_{4,1}(I+M_{2,1})^{-1} & 0 &I & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots& \ddots & 0\\ 0 & -M_{k+1,1}(I+M_{2,1})^{-1} & 0 & \cdots & 0&I \end{bmatrix} \] we generate zero blocks in the first column of $\mathcal N_{[k]}$ such that \[ \mathcal L^{1}_{[k]} \cdot \mathcal N_{[k]} = \begin{bmatrix} N&0&&&\cdots&0\\ I&\tilde{N}_2&&&&\vdots\\ 0&I+\hat{M}_{3,2}&N&&\\ \vdots& \ddots&\ddots&\ddots& &\\ \vdots& &\ddots&\ddots&\ddots&0\\ 0&\hat{M}_{k+1,2}&\cdots &M_{k+1,k-1}&I+ M_{k+1,k}&N \end{bmatrix}, \] with $\tilde{N}_2=(I+M_{2,1})^{-1}N$. According to Lemma \ref{l.SUT1}, $\tilde{N}_2$ has the same secondary diagonal blocks as $N$, and $I+\hat{M}_{3,2}$ is nonsingular, $\hat{M}_{3,2}$ having the same pattern as $M_{3,2}$. Performing $k$ such elimination steps, we obtain \[ \mathcal L^{k}_{[k]} \cdot \cdots \cdot \mathcal L^{1}_{[k]} \cdot \mathcal N_{[k]} = \mathcal{\tilde{N}}_{[k]}, \] analogously to an LU-decomposition, and $\mathcal L_{[k]} := \mathcal L^{k}_{[k]} \cdot \cdots \cdot \mathcal L^{1}_{[k]}$. The remaining assertions are straightforward consequences of the properties of matrix functions belonging to the set $SUT$.
\end{proof} \begin{proposition}\label{prop.rank.Nk} Given $N\in SUT$, the associated array function $\mathcal N_{[k]} $ has the nullspace \begin{align*} \ker \mathcal{{N}}_{[k]} &= \{ y=\begin{bmatrix} y_0\\\vdots\\y_k \end{bmatrix} \in \Real^{(k+1)l}: N\tilde{N}_2 \cdots \tilde{N}_{k+1} y_{k} = 0, \\ & \quad \quad \quad \quad y_i=(-1)^{k-i}\tilde{N}_{i+2} \cdots \tilde{N}_{k+1}y_{k}, i=0,\ldots,k-1 \}, \end{align*} and \begin{align*} \dim \ker \mathcal{{N}}_{[k]} &= \dim\ker N\tilde{N}_2 \cdots \tilde{N}_{k+1},\\ \rank \mathcal N_{[k]} & = kl +\rank N\tilde{N}_2 \cdots \tilde{N}_{k+1}. \end{align*} Moreover, if $N$ belongs to $SUT_{row}$ or $SUT_{column}$, then \begin{align}\label{Nk_rank} \rank \mathcal N_{[k]} & = kl +\rank N^{k+1}= \text{constant}. \end{align} \end{proposition} \begin{proof} Lemma \ref{l.existence.Ntilde} implies $\ker \mathcal{\tilde{N}}_{[k]}=\ker \mathcal{{N}}_{[k]}$ and $\rank \mathcal{\tilde{N}}_{[k]}=\rank \mathcal{{N}}_{[k]}$. We evaluate the nullspace $\ker \mathcal{\tilde{N}}_{[k]}$, \begin{align*} \ker \mathcal{\tilde{N}}_{[k]} &= \{ y \in \Real^{(k+1)l}: Ny_0=0, y_{i}=-\tilde{N}_{i+2}y_{i+1} , i=0,\ldots,k-1 \}\\ &= \{ y \in \Real^{(k+1)l}: N\tilde{N}_2 \cdots \tilde{N}_{k+1} y_{k} = 0, \\ & \quad \quad \quad \quad y_i=(-1)^{k-i}\tilde{N}_{i+2} \cdots \tilde{N}_{k+1}y_{k}, i=0,\ldots,k-1 \}, \end{align*} which yields \begin{align*} \dim \ker \mathcal{\tilde{N}}_{[k]} &= \dim\ker N\tilde{N}_2 \cdots \tilde{N}_{k+1},\\ \rank \tilde{\mathcal N}_{[k]} & = kl +\rank N\tilde{N}_2 \cdots \tilde{N}_{k+1}. \end{align*} If, additionally, $N\in SUT_{column}$, then owing to Lemma \ref{l.Ncol} it follows that also $\dim\ker N\tilde{N}_2 \cdots \tilde{N}_{k+1}=\dim \ker N^{k+1} =l_1+\cdots+l_{k+1}$ such that $\rank N\tilde{N}_2 \cdots \tilde{N}_{k+1}=\rank N^{k+1} =l-(l_1+\cdots+l_{k+1})$, and hence \[ \rank \tilde{\mathcal N}_{[k]} = kl +\rank N^{k+1}.
\] If, instead, $N\in SUT_{row}$, then owing to Lemma \ref{l.Nrow} it follows that also $\rank N\tilde{N}_2 \cdots \tilde{N}_{k+1}=\rank N^{k+1} =l_1+\cdots+l_{\nu-(k+1)}$ and hence \[ \rank \tilde{\mathcal N}_{[k]} = kl +\rank N^{k+1}. \] \end{proof} \begin{corollary}\label{c.Nk_-full} If $N\in SUT$ then, for $k\geq\nu$, \begin{align*} \ker \mathcal{{N}}_{[k]} = \{ y=\begin{bmatrix} y_0\\\vdots\\y_k \end{bmatrix} \in \Real^{(k+1)l}: y_0 = 0, y_i=(-1)^{k-i}\tilde{N}_{i+2} \cdots \tilde{N}_{k+1}y_{k}, i=1,\ldots,k-1 \}. \end{align*} \end{corollary} \end{document}
2412.16031v1
http://arxiv.org/abs/2412.16031v1
Learning sparsity-promoting regularizers for linear inverse problems
\documentclass{article} \usepackage[a4paper,left=3cm,right=3cm,top=3.5cm,bottom=3.cm]{geometry} \usepackage[title]{appendix} \usepackage{mathrsfs} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{nicefrac} \usepackage{microtype} \usepackage{xcolor} \usepackage{ulem} \usepackage{graphicx} \usepackage{latexsym} \usepackage{epsfig} \usepackage{amsthm} \usepackage{amssymb} \usepackage{amstext} \usepackage{amsgen} \usepackage{amsxtra} \usepackage{amsgen} \usepackage{mathrsfs,mathtools} \usepackage{bbm} \usepackage{enumerate,subfigure,color} \usepackage{tikz} \usepackage{mathtools} \newtheorem{thm}{Theorem}[section] \newtheorem{proposition}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{example}[thm]{Example} \newtheorem{assumption}[thm]{Assumption} \newtheorem{scheme}{Scheme} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\Sp}{\mathbb{S}} \newcommand{\T}{\mathbb{T}} \newcommand{\E}{\mathbb{E}} \newcommand{\Prob}{\mathbb{P}} \newcommand{\Rt}{\mathcal{R}} \newcommand{\Sm}{\mathcal{S}} \newcommand{\Tm}{\mathcal{T}} \newcommand{\LO}{\mathcal{L}} \newcommand{\upchi}{{\text{\raisebox{2pt}{$\chi$}}}} \renewcommand{\epsilon}{\varepsilon} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\prox}{\ensuremath{\mathrm{prox}}} \newcommand{\sign}{\ensuremath{\mathrm{sign}}} \newcommand{\supp}{\ensuremath{\mathrm{supp}}} \renewcommand{\vec}[1]{\boldsymbol{#1}} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\tr}{\operatorname{tr}} \newcommand{\norm}[1]{\left\|#1 \right\|} \newcommand{\nsG}[1]{\left\|#1 \right\|_{\text{sG}}} \def\<{\langle} \def\>{\rangle} \newcommand{\lY}{\langle} \newcommand{\lK}{\langle} \newcommand{\lX}{\langle} \newcommand{\rY}{\rangle_{Y}} \newcommand{\rK}{\rangle_{Y}} \newcommand{\rX}{\rangle} \newcommand{\Se}{\Sigma_{\varepsilon}} \newcommand{\Sx}{\Sigma_x} \newcommand{\cT}{c_\Theta} 
\newcommand{\wx}{{\widetilde{x}}} \newcommand{\vH}{H} \newcommand{\vh}{h} \newcommand{\vZ}{Z} \newcommand{\vz}{z} \newcommand{\vzv}{{\mathbf{\vz}}} \newcommand{\Bs}{B} \newcommand{\Ba}{B^*} \newcommand{\Bds}{\widetilde{B}} \newcommand{\Bda}{\widetilde{B}^*} \newcommand{\hOp}{\vh^{\star}} \newcommand{\BOp}{B^\star} \newcommand{\thetaOp}{\theta^\star} \newcommand{\xOp}{x^\star} \newcommand{\WOp}{W^\star} \newcommand{\hU}{\widehat{\vh}_U} \newcommand{\BU}{\widehat{B}_U} \newcommand{\thetaU}{\widehat{\theta}_U} \newcommand{\xU}{\widehat{x}_U} \newcommand{\hS}{\widehat{\vh}_S} \newcommand{\BS}{\widehat{B}} \newcommand{\thetaS}{\widehat{\theta}_S} \newcommand{\bS}{\widehat{b}_S} \newcommand{\WS}{\widehat{W}_S} \newcommand{\MS}{\widehat{M}_S} \newcommand{\wh}[1]{\widehat{#1}} \newcommand{\wt}[1]{\widetilde{#1}} \newcommand{\discr}[1]{\boldsymbol{#1}} \newcommand{\spec}[1]{\bar{#1}} \newcommand{\specc}[1]{\bar{\bar{#1}}} \newcommand{\Kop}{K_B} \newcommand{\cFBI}{c_{\text{FBI}}} \newcommand{\jj}{j_2} \newcommand{\vartau}{\tau} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{\node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \newcommand{\hh}{H} \newcommand{\J}{j} \newcommand{\eps}{\varepsilon} \title{Learning sparsity-promoting regularizers \\ for linear inverse problems} \author{Giovanni S.~Alberti\footnotemark[1]\thanks{MaLGa Center, University of Genoa, Italy (\{giovanni.alberti, ernesto.devito, matteo.santacesaria\}@unige.it).} \and Ernesto De Vito\footnotemark[1] \and Tapio Helin\thanks{Computational Engineering, Lappeenranta-Lahti University of Technology, Finland ([email protected]).} \and Matti Lassas\thanks{Department of Mathematics and Statistics, University of Helsinki, Finland ({[email protected]}).} \and Luca Ratti\thanks{Department of Mathematics, University of Bologna, Italy ({[email protected]}).} \and Matteo Santacesaria\footnotemark[1]} \date{ } \begin{document} \maketitle \begin{abstract} This paper introduces a novel approach to learning 
sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as \( B \), which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of \( B \). We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions. \end{abstract} {\bf Keywords:} inverse problems, statistical learning, bilevel optimization, operator learning, sparsity-promoting regularization.\\ \section{Problem formulation and main contributions} \label{sec:intro} Consider a linear inverse problem \begin{equation} y = Ax + \varepsilon, \label{eq:invprob} \end{equation} where we assume that $A\colon X \rightarrow Y$ is a bounded linear operator between the Hilbert spaces $X,Y$, whereas the inverse $A^{-1}$ (if it exists) is, in general, an unbounded operator. We introduce a variational strategy \cite{engl1996} to regularize the inverse problem \eqref{eq:invprob} while simultaneously promoting the sparsity of the solution with respect to a suitable basis or frame.
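To see concretely why an unbounded (or severely ill-conditioned) inverse calls for regularization, here is a small numerical sketch of our own, using an assumed diagonal operator whose singular values decay rapidly, as for a compact $A$:

```python
import numpy as np

# Our toy example (not from the paper): a diagonal operator with rapidly
# decaying singular values mimics a compact A whose inverse is unbounded
# in the continuum limit.  A tiny data perturbation is amplified hugely.
n = 50
s = np.exp(-0.5 * np.arange(n))        # singular values e^0, ..., e^{-24.5}
A = np.diag(s)

x = np.ones(n)                         # ground truth
eps = np.zeros(n)
eps[-1] = 1e-3                         # tiny noise on the last singular mode
y = A @ x + eps

x_naive = np.linalg.solve(A, y)        # unregularized "inversion"
amplification = np.linalg.norm(x_naive - x) / np.linalg.norm(eps)
print(amplification)                   # equals 1/s[-1] = e^{24.5} ~ 4.4e10
```

The reconstruction error is the data error amplified by $1/\sigma_{\min}(A)$, which is exactly the instability that the variational strategy below is designed to suppress.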
To do so, we consider an operator $B \in \mathcal{L}(\ell^2,X)$ (where $\mathcal{L}(\ell^2,X)$ denotes the space of bounded linear operators from the sequence space $\ell^2(\N)$ to the Hilbert space $X$, equipped with the operator norm) and define the following minimization problem: \begin{equation} \hat x_B = \Bs \hat u_B, \quad \hat u_B = \argmin_{u \in \ell^2} \Big\{ \frac{1}{2}\| \Se^{-1/2} A\Bs u \|_Y^2 - \lK y, \Se^{-1}A\Bs u \rK + \| u \|_{\ell^1} \Big\}, \label{eq:xhat} \end{equation} where $\Se$ is the covariance of the noise $\epsilon$ (see Assumption~\ref{ass:stat} below). In Section \ref{sec:optimization}, we introduce a set of assumptions that guarantee the well-definedness and well-posedness of problem \eqref{eq:xhat}. To better interpret the proposed regularization strategy, we remark that in a finite-dimensional setup (e.g., $X=\R^n$ and $Y = \R^{m}$), if the matrix $B \in \R^{n \times n}$ is invertible, the minimization problem \eqref{eq:xhat} admits the following formulation: \[ \hat x_B = \argmin_{x \in \R^n} \Big\{ \frac{1}{2}\| Ax - y \|^2_{\Se} + \| B^{-1} x \|_{1} \Big\}, \] where $\| \cdot \|_{\Se} = \| \Se^{-1/2} \cdot \|$ is a norm on $\R^m$ that leverages the knowledge of the noise covariance $\Se$ to whiten the residual error. Further details about this simplified formulation can be found in Appendix~\ref{app:finite_dim}. The main focus of this paper is the choice of the synthesis operator $B \colon \ell^2 \rightarrow X$ within \eqref{eq:xhat}. In particular, let us introduce the map \begin{equation}\label{eq:RB-def} R_B\colon Y \rightarrow X, \qquad R_B(y) = \hat{x}_B \quad \text{as in \eqref{eq:xhat}}. \end{equation} For a fixed $B$ satisfying the theoretical requirements expressed in Section \ref{sec:optimization}, the map $R_B$ is a \textit{stable} reconstruction operator, i.e., it provides a continuous approximation of the inverse map $A^{-1}$.
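In finite dimensions, the inner minimization in \eqref{eq:xhat} can be solved by proximal gradient iterations (ISTA). The following is our own minimal sketch, not the authors' implementation: we whiten with a Cholesky factor of $\Se$ (so that $\|W v\|^2 = v^\top \Se^{-1} v$), which turns \eqref{eq:xhat} into a standard $\ell^1$-penalized least-squares problem up to an additive constant; all function names are illustrative.

```python
import numpy as np

def ista(K, z, n_iter=500):
    """Minimize 0.5*||K u - z||^2 + ||u||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = K.T @ (K @ u - z)              # gradient of the quadratic part
        v = u - g / L
        u = np.sign(v) * np.maximum(np.abs(v) - 1.0 / L, 0.0)  # soft threshold
    return u

def reconstruct(A, B, Sigma_e, y):
    """Hypothetical finite-dimensional sketch of R_B in (1.3):
    whiten with W = L^{-1}, Sigma_e = L L^T, then ||W v||^2 = v^T Sigma_e^{-1} v,
    so the objective equals that of (1.2) up to the constant 0.5*||W y||^2."""
    W = np.linalg.inv(np.linalg.cholesky(Sigma_e))
    u_hat = ista(W @ (A @ B), W @ y)
    return B @ u_hat

# Sanity check: A = B = Sigma_e = Id reduces to soft-thresholding of y at 1.
x_hat = reconstruct(np.eye(4), np.eye(4), np.eye(4),
                    np.array([3.0, 0.5, -2.0, 0.0]))
# x_hat == [2, 0, -1, 0]
```

The design choice of penalizing $\|u\|_{\ell^1}$ with unit weight mirrors \eqref{eq:xhat}; a tunable regularization parameter would simply rescale the threshold.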
In particular, $R_B$ promotes the sparsity of the regularized solution in terms of the synthesis operator $B$: namely, $R_B(y) = B \hat{u}_B$ is chosen so as to balance good data fidelity ($A R_B(y) \approx y$) with having only a few components of $\hat{u}_B$ different from $0$. The choice of the synthesis operator $B$ encodes crucial information regarding the prior distribution of $x$. Consider, for simplicity, a signal processing problem (such as denoising, or deblurring), which, after a discretization of the spaces of signals, can be formulated as a linear inverse problem in $X=Y=\R^n$. Then, promoting the sparsity of the reconstructed signals under the choice $B=\operatorname{Id}$ encodes the prior information that the ground truths are expected to have few isolated spikes. Choosing $B$ to be a basis of (discretized) sines and cosines promotes band-limited signals in the Fourier domain. Setting $B$ to a wavelet transform, instead, promotes signals with few jump discontinuities that are smooth elsewhere. In general, the synthesis operator $B$ should be carefully chosen to incorporate, in the reconstruction operator $R_B$, any significant prior knowledge, or prior belief, on the ground truths. In this paper, we follow a statistical learning approach for the selection of $B$. In particular, we assume that $x$ and $y$ are random objects with a joint probability distribution $\rho$. We then leverage the (partial) knowledge of such a probability distribution to define a data-driven rule for selecting $B$. To do so, we must introduce some assumptions on the probability distribution $\rho$, which are carefully detailed in Section \ref{sec:stat}. Before delving into the details, though, let us briefly sketch the overall idea of the learning-based choice of $B$, introducing some minimal requirements on the random objects $x,y$.
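To make the role of $B$ concrete, the following small sketch of ours shows that a piecewise-constant signal is dense in the standard basis ($B=\operatorname{Id}$) but $1$-sparse in an orthonormal Haar wavelet basis; the `haar_matrix` construction below is our own illustrative helper, not taken from the paper.

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar analysis matrix (rows = Haar basis vectors),
    # for n a power of two.  Small recursive construction for illustration.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])                    # coarse averages
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])  # local differences
        H = np.vstack([top, bot])
    return H / np.linalg.norm(H, axis=1, keepdims=True)

n = 8
x = np.array([1.0] * 4 + [-1.0] * 4)    # one jump: dense in the spike basis
H = haar_matrix(n)
coeffs = H @ x                          # analysis coefficients u = B^T x, B = H^T

nnz_identity = int(np.sum(np.abs(x) > 1e-12))      # all 8 entries nonzero
nnz_haar = int(np.sum(np.abs(coeffs) > 1e-12))     # a single Haar coefficient
```

Only the Haar vector aligned with the jump carries a nonzero coefficient, which is exactly the kind of prior the wavelet choice of $B$ encodes.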
\begin{assumption} \label{ass:stat} Let $(x,y) \sim \rho$ with $y = Ax+\varepsilon$, where: \begin{itemize} \item $x$ is a square-integrable random variable on $X$; \item $\varepsilon$ is a zero-mean square-integrable random variable on $Y$, independent of $x$, with known, trace-class, and injective covariance $\Se\colon Y \rightarrow Y$. \end{itemize} \end{assumption} The assumption that $\eps$ is a zero-mean random variable with $\E{[\norm{\eps}^2_Y]}<+\infty$ is in principle quite restrictive and, for example, the case of white noise requires careful treatment when $Y$ is infinite-dimensional. In its most common formulation, white noise is modeled as a random process on a Hilbert space (for example, $Y=L^2(\Omega)$) with zero mean and covariance equal to the identity. Such a noise model clearly fails to satisfy the hypotheses whenever $Y$ is infinite dimensional, because the covariance $\Se = \operatorname{Id}$ is not trace-class, but only bounded. However, it is possible to represent such a process as a square-integrable random variable by carefully selecting the space $Y$, as described in Appendix~\ref{app:white_noise}. For a specific choice of synthesis operator $B$, the quality of the reconstruction operator $R_B$ can be evaluated through the \textit{expected loss} $L\colon \mathcal{L}(\ell^2,X) \rightarrow \R$: \begin{equation} L(B) = \mathbb{E}_{(x,y)\sim \rho}[\|R_B(y)-x\|_X^2]. \label{eq:loss} \end{equation} Let us now consider a suitable class of operators $\mathcal{B} \subset \mathcal{L}(\ell^2,X)$: we define the optimal regularizer $R^\star = R_{B^\star}$, where $B^\star$ is a minimizer of the expected loss over the set of admissible operators $\mathcal B$.
In particular, putting together the expressions of \eqref{eq:loss} and \eqref{eq:xhat}, we obtain: \begin{equation} \label{eq:bilevel} \begin{gathered} B^\star \in \argmin_{B\in\mathcal B} L(B), \\ R_B(y) = B \argmin_{u \in \ell^2} \Big\{ \frac{1}{2}\| \Se^{-1/2} A\Bs u \|_Y^2 - \lK y, \Se^{-1}A\Bs u \rK + \| u \|_{\ell^1} \Big\}, \end{gathered} \end{equation} which is usually referred to as a \textit{bilevel} optimization problem. Note that the optimality of $B^\star$ strongly depends on the choice of the class $\mathcal B$. We moreover stress that the optimal target $\BOp$ can only be computed if the joint probability distribution $\rho$ of $x$ and $y$ is known. This is of course not the case in practical applications, but in many contexts we may assume access to a sample of $m$ pairs $\vzv= \{(x_j,y_j)\}_{j=1}^m$ such that the pairs are independent and identically distributed as $(x,y)$. In this case, following the paradigm of \textit{supervised learning}, we can approximate the expected loss by an empirical average, also known as \textit{empirical risk}, namely \begin{equation} \label{eq:risk} \widehat{L}(B) = \frac{1}{m} \sum_{j=1}^m \| R_{B} (y_j) - x_j \|_{X}^2. \end{equation} A natural estimator of $B^\star$ is then given by any minimizer $\BS$ of the empirical loss: \begin{equation} \BS \in \argmin_{B\in\mathcal B} \wh{L}(B). \label{eq:empiricaltarget} \end{equation} We stress that both $\wh{L}(B)$ and $\BS$ are random variables depending on the sample $\vzv$. We simply denote this dependence by $\wh{\cdot}$. The task of \textit{sample error estimates} is to quantify the dependence of the empirical target $\BS$ on the sample $\vzv$, in particular by bounding (either in probability or in expectation) the \textit{excess risk} $L(\BS) - L(B^\star)$ in terms of the sample size $m$.
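The empirical risk minimization \eqref{eq:empiricaltarget} can be illustrated with our own toy instance (all simplifications are ours, not the paper's): for $A=\operatorname{Id}$, $\Se=\operatorname{Id}$ and orthogonal $B$, the inner problem decouples and has the closed-form solution $R_B(y)=B\,\mathcal S_1(B^\top y)$, with $\mathcal S_1$ the soft-thresholding operator; we then compare two candidate operators on simulated spike data.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def R(B, y):
    # Closed form valid only in our simplified setting
    # (A = Id, Sigma = Id, B orthogonal): synthesis lasso decouples.
    return B @ soft(B.T @ y, 1.0)

def empirical_risk(B, xs, ys):
    # Empirical risk (1.7): mean squared reconstruction error on the sample.
    return np.mean([np.linalg.norm(R(B, y) - x) ** 2 for x, y in zip(xs, ys)])

# Simulated training pairs: ground truths with two large spikes each.
n, m = 20, 50
xs = []
for _ in range(m):
    x = np.zeros(n)
    x[rng.choice(n, size=2, replace=False)] = 5.0
    xs.append(x)
ys = [x + 0.3 * rng.standard_normal(n) for x in xs]

B_id = np.eye(n)                                    # "right" synthesis operator
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # a "wrong" orthogonal basis

risk_id = empirical_risk(B_id, xs, ys)
risk_q = empirical_risk(Q, xs, ys)
# Minimizing the empirical risk over {B_id, Q} selects the spike basis B_id.
```

This is the outer (bilevel) selection step in miniature: the data alone identify which synthesis operator yields the better sparse reconstructions.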
\subsection*{Main contribution of this paper} In our analysis, we pursue the following main goals: \begin{enumerate} \item studying the theoretical properties of the inner minimization problem \eqref{eq:xhat}, and in particular its well-posedness for fixed $B$ and the continuous dependence of the minimizer $\hat{x}_B$ in terms of $B \in \mathcal{B}$ (see Theorems \ref{thm:xhat_is_unique} and \ref{thm:stab} in Section \ref{sec:optimization}); \item studying the approximation properties of the empirical target $\BS$ (constructed leveraging the knowledge of $A$, $\Se$, and of the sample $\vzv$) with respect to the optimal target $B^\star$, deriving sample error estimates for the excess risk (see Theorem \ref{thm:CuckSmale} in Section \ref{sec:stat}); \item formulating some relevant applications which satisfy the assumptions on $x$, $y$, $A$, and $\mathcal{B}$ introduced in Sections \ref{sec:optimization} and \ref{sec:stat} (see Section \ref{sec:examples}). \end{enumerate} \subsection*{Comparison with existing literature} This work originated from the analysis carried out by a subset of the authors in \cite{alberti2021learning}, where we considered the problem of learning (also) the optimal operator $B$ in generalized Tikhonov regularization, i.e., when a quadratic penalty term is considered within the inner problem \eqref{eq:xhat} instead of the $\ell^1$ norm. The main difficulty associated with the proposed extensions resides in the lack of strong convexity and differentiability of the $\ell^1$ norm. Moreover, unlike in the Tikhonov case, the inner minimization problem \eqref{eq:xhat} does not possess a closed-form solution: unfortunately, this also prevents the explicit computation of the optimal regularizer $B^\star$, which was one of the main results in \cite{alberti2021learning}.
As a final consequence, it is not possible to formulate here an \textit{unsupervised} strategy to learn $B^\star$, i.e., based only on a training set of ground truths $\{x_j\}_{j=1}^m$: indeed, the unsupervised strategy proposed in \cite{alberti2021learning} extensively leveraged the explicit expression of the optimal operator $B^\star$. Other extensions of the work \cite{alberti2021learning} have been carried out in \cite{ratti2023learned,burger2023learned,chirinos2023learning,brauer2024learning,alberti2024learning}, even though a statistical learning approach for sparse optimization in infinite dimensions had not yet been considered. See also \cite{hauptmann2024convergent} for a connection between generalized Tikhonov and linear plug-and-play denoisers. Let us just mention that bilevel approaches for inverse problems in imaging have been studied, from numerical and optimization points of view, for many years \cite{calatroni2017bilevel,de2017bilevel}. In particular, the works \cite{horesh-haber-2009,huang-haber-horesh-2012,2022-ghosh-etal,ghosh-etal-2024} study the bilevel learning of $\ell^1$ regularizers, focusing on the finite-dimensional case and mostly on the algorithmic aspects, without considering the generalization issue from the theoretical point of view. Moreover, our work is also related to statistical inverse learning for inverse problems \cite{blanchard2018optimal,helin2023statistical, bubba2023convex} and, more generally, to the growing field of operator learning \cite{nelsen2021random, lanthaler2022error,kovachki2023neural,de2023convergence,boulle2023mathematical}. Let us also mention some of the main works on using machine learning techniques for solving inverse problems that motivated this work \cite{adler2017solving,2017-kobler-etal,adler2018learned,lunz2018adversarial,arridge2019,li2020nett,mukherjee2020learned,mukherjee2021adversarially}.
It is worth observing that the problem of learning an operator $B$ yielding a sparse representation of a dataset $\{x_j\}_{j=1}^m$ is deeply connected with the well-known task of Dictionary Learning. Although it is possible to provide a formulation of a dictionary learning problem very close to the bilevel problem \eqref{eq:bilevel}, the two problems pursue distinct aims and may lead to different results, as shown in Appendix \ref{app:DL}. Despite this, the sample complexity bounds we obtain in this work are comparable to those derived for dictionary learning \cite{2015-gribonval-etal,2022-sulam-etal}. \section{Theoretical results on the deterministic optimization problem \texorpdfstring{\eqref{eq:xhat}}{(1.2)}} \label{sec:optimization} The goal of this section is to study the well-posedness of the minimization problem formulated in \eqref{eq:xhat} for a fixed $B \in \mathcal{L}(\ell^2,X)$. Moreover, we study the stability of the minimization with respect to perturbations of $B$. To do so, let us introduce the following set of hypotheses. \begin{assumption}[Compatibility assumption between $\Se$ and $A$]\label{ass:compatibility} Let the covariance operator $\Se$ satisfy Assumption \ref{ass:stat}, and assume $\operatorname{Im}(A) \subset\operatorname{Im}(\Se)$. Moreover, we assume that \begin{equation} \Se^{-1} A \colon X \rightarrow Y \text{ is a compact operator}. \label{eq:compactibility} \end{equation} \end{assumption} For the rest of the paper, we use the convention $J_B(u) = F_B(u) + \Phi(u)$, where \begin{equation} F_B(u) = \frac{1}{2}\| \Se^{-1/2} A\Bs u \|_Y^2 - \lK y, \Se^{-1} A\Bs u \rK \quad \text{and} \quad \Phi(u) = \| u \|_{\ell^1}. \end{equation} Notice that, thanks to Assumption \ref{ass:compatibility}, the functional $J_B$ is well-defined for fixed $B \in \mathcal{L}(\ell^2,X)$.
Indeed, the first term can be rewritten as $\displaystyle \frac{1}{2} \lK \Se^{-1}A\Bs u, A\Bs u \rK$, and $\Se^{-1}A\Bs u$ exists and is unique since $\operatorname{Im}(A\Bs) \subset \operatorname{Im}\Se = \operatorname{dom}(\Se^{-1})$, and analogously for the second term. Finally, since the minimization problem \eqref{eq:xhat} is set in the Hilbert space $\ell^2 \supset \ell^1$, the third term in \eqref{eq:xhat} should be interpreted as $\| u\|_{\ell^1}$ if $u \in \ell^1$ and $+\infty$ if $u \in \ell^2 \setminus \ell^1$. Since $J_B$ is non-differentiable and not strictly convex, further assumptions on the interplay between $A$ and $B$ are needed to guarantee well-posedness of the minimization task. Our analysis is confined to the case where $AB$ is assumed to be finite basis injective. \begin{assumption}[Finite Basis Injectivity (FBI) of $AB$]\label{ass:FBI} For all $I\subset \N$ with $\operatorname{card}(I) < \infty$, the operator $(AB)|_{\ell^2_I }$ is injective, where we denote $\N = \{1,2,\dots\}$ and $\ell^2_I := \operatorname{span}\{ e_i: i \in I\}$. \end{assumption} Note that while this assumption is satisfied when $B$ is injective (provided $A$ is injective), it also holds for more general structures (e.g.\ FBI frames). The well-posedness of the minimization problem \eqref{eq:xhat} is now characterized by the following theorem. \begin{thm} \label{thm:xhat_is_unique} Let $A \in \mathcal{L}(X,Y), \Se \in \mathcal{L}(Y,Y)$, and $B \in \mathcal{L}(\ell^2,X)$ satisfy Assumptions \ref{ass:compatibility} and \ref{ass:FBI}, and let $y \in Y$. There exists a unique minimizer $\hat u_B = \hat u_B(y)$ of $J_B$, and, consequently, a unique $\hat x_B = B \hat u_B$ in \eqref{eq:xhat}. \end{thm} The proof is presented in two parts: the existence is proved at the end of Section \ref{subsec:existence} while the uniqueness follows as a consequence of Theorem \ref{thm:stab} at the end of Section \ref{subsec:stability}.
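The FBI condition admits a simple finite-dimensional sanity check (our own sketch): for a finite index set $I$, $(AB)|_{\ell^2_I}$ is injective exactly when the columns of $AB$ indexed by $I$ are linearly independent. Note that a $5\times 12$ matrix cannot be injective on supports of size larger than $5$, so in this truncated setting one can only verify the restricted analogue $\operatorname{card}(I)\le 5$ of Assumption \ref{ass:FBI}; a generic Gaussian matrix satisfies it with probability one.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 12
K = rng.standard_normal((m, n))   # stand-in for AB; generic Gaussian matrix

def injective_on(K, I):
    # K restricted to span{e_i : i in I} is injective iff the columns
    # indexed by I are linearly independent.
    return np.linalg.matrix_rank(K[:, list(I)]) == len(I)

# Restricted FBI check over all supports of size at most m = 5.
ok = all(
    injective_on(K, I)
    for r in range(1, m + 1)
    for I in itertools.combinations(range(n), r)
)
```

In infinite dimensions the assumption is genuinely weaker than injectivity of $AB$, which is why FBI frames are admissible.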
Let us next consider the stability of the minimizer with respect to perturbing $B$ in a suitable class of operators $\mathcal{B}$. Towards that end, let us introduce the following assumption. \begin{assumption}[Requirements on $\mathcal{B}$] \label{ass:mathcalB} Let $\mathcal{B} \subset \mathcal{L}(\ell^2,X)$ be a compact set of operators such that every $B \in \mathcal{B}$ satisfies Assumption \ref{ass:FBI}. \end{assumption} A first consequence of the compactness of $\mathcal{B}$ is that the norms $\| B \|$, $B\in\mathcal{B}$, are uniformly bounded by $|{\mathcal B}| := \sup_{B\in{\mathcal B}} \norm{B} < \infty$. Moreover, as we show in the following subsections, Assumption \ref{ass:mathcalB} allows us to provide a uniform expression of several properties of the solutions of problem \eqref{eq:xhat}, namely, independently of the choice of $B$. The most relevant consequence of this discussion is the following result, providing a global stability estimate of the minimizers $\hat{x}_B$ with respect to perturbations of $B$ within $\mathcal{B}$. \begin{thm} \label{thm:stab} Let $\| y\|_Y \leq Q$. There exists a constant $c_{\rm ST} = c_{\rm ST}(\mathcal{B},A,\Se,Q)$ such that for every $B_1,B_2 \in \mathcal{B}$, \begin{equation} \|R_{B_1}(y)-R_{B_2}(y) \|_{X} \leq c_{\rm ST} \| B_1 - B_2 \|^{1/2}. \label{eq:stab} \end{equation} \end{thm} \subsection{Existence of the minimizer with fixed \texorpdfstring{$B \in {\mathcal B}$}{B in B}} \label{subsec:existence} To establish the existence of a minimizer $\hat{u}_B$ in \eqref{eq:xhat}, we first show that the sublevel sets of $J_B$ are bounded, which ensures the weak convergence of any minimizing sequence. \begin{lemma} \label{lem:aux_lb} There exists a monotonically increasing function $h \colon \R_+ \to \R_+$ depending on $A$, $\Se$, and $|\mathcal{B}|$ such that the following holds: if $\norm{w}_X \leq |{\mathcal B}|$ and $\norm{\Se^{-1}Aw}_Y \geq \gamma>0$, then $\norm{\Se^{-1/2}Aw}_Y \geq h(\gamma)>0$.
\end{lemma} \begin{proof} Let us prove existence of a general $h$ by contradiction: assume that for all $n$ there exists $w_n \in X$ satisfying $\norm{w_n}_X \leq |{\mathcal B}|$, $\norm{\Se^{-1}Aw_n}_Y \geq \gamma$ and $\norm{\Se^{-1/2}Aw_n}_Y < \frac 1n$. Since $\{w_n\}_{n=1}^\infty \subset X$ is bounded, we can find a subsequence $\{w_{n_k}\}_{k=1}^\infty$ that weakly converges to some $w\in X$. Due to weak lower semicontinuity of the norm, we must have $w\in \operatorname{ker}(A)$ and, therefore, by the compactness of $\Se^{-1}A$ we also have that $\| \Se^{-1}A w_{n_k}\|_{Y} \rightarrow 0$. This contradicts our assumption and, therefore, a lower bound $h(\gamma) >0$ must exist. The existence of a monotonically increasing $h$ can also be demonstrated by contradiction: suppose that there exist $\gamma_1<\gamma_2$ such that for any $h$ satisfying the statement, we have $h(\gamma_1)>h(\gamma_2)$. Then we immediately see that $\tilde h$ defined by \begin{equation*} \tilde h(\gamma) = \begin{cases} h(\gamma)& \gamma\neq \gamma_1 \\ h(\gamma_2)& \gamma = \gamma_1, \end{cases} \end{equation*} also satisfies the claim, yielding the contradiction. \end{proof} \begin{proposition} \label{prop:sublevels} Let $M \in \R$, and suppose Assumptions \ref{ass:stat}, \ref{ass:compatibility}, \ref{ass:FBI} and \ref{ass:mathcalB} hold. Let $\| y \|_Y \leq Q$. Then, there exists a constant $c_{\rm UB}(M) = c_{\rm UB}(M; A,\Se, Q,|{\mathcal B}|)$ such that, for every $B \in \mathcal{B}$, we have \begin{equation} \label{eq:sublevel} \{u \in \ell^2 \; | \; J_B(u) \leq M\} \subset \{u \in \ell^2 \; | \; \| u \|_{\ell^1} \leq c_{\rm UB}(M)\}. \end{equation} \end{proposition} \begin{proof} The condition $J_B(u)\leq M$ reads as \begin{equation*} \frac{1}{2} \| \Se^{-1/2} A B u \|_Y^2 + \norm{u}_{\ell^1} \leq M + \langle y,\Se^{-1}A Bu \rK.
\end{equation*} Let us next make a change of variables with $w = Bu/\norm{u}_{\ell^1}$ and $\tau = \norm{u}_{\ell^1}$, which leads to \begin{equation} \label{eq:aux_prop_bound2} \frac{1}{2} \| \Se^{-1/2} A w \|_Y^2\tau ^2 + \tau \leq M + \tau \norm{y}_Y \norm{\Se^{-1}A w}_Y. \end{equation} In particular, we have \begin{equation*} 1 \leq \frac M \tau + \norm{y}_Y \norm{\Se^{-1}A w}_Y. \end{equation*} Hence, either $\norm{u}_{\ell^1} = \tau \leq \max\{2M,0\}$, or $\tau > \max\{2M,0\}$, in which case \begin{equation*} \norm{\Se^{-1}A w}_Y \geq \frac{1}{2 \norm{y}_Y}. \end{equation*} Since $\norm{Bu}_X \leq |\mathcal{B}|\norm{u}_{\ell^2}$ for all $u \in \ell^2$ and $B \in \mathcal{B}$, we have \begin{equation*} \| u \|_{\ell^1} \geq \| u \|_{\ell^2} \geq \frac{1}{|{\mathcal B|}} \| Bu \|_X \qquad \forall u \in \ell^2, \forall B \in \mathcal{B}, \end{equation*} which also implies that $\norm{w}_X \leq |{\mathcal B}|$. Next, by Lemma \ref{lem:aux_lb} it follows that \begin{equation*} \norm{\Se^{-1/2} A w}_Y \geq h, \end{equation*} where we abbreviate $h= h\left(\frac{1}{2 \norm{y}_Y}\right)$ for convenience. Using the bound $\norm{\Se^{-1}A w}_Y \leq \norm{\Se^{-1}A} |\mathcal{B}|$ and Young's inequality in \eqref{eq:aux_prop_bound2}, we obtain \begin{equation*} \frac{1}{2} h^2\tau^2 \leq M + \tau \norm{y}_Y \norm{\Se^{-1}A} |\mathcal{B}| \leq M + \frac 14 h^2\tau^2 + \frac{\norm{y}_Y^2 \norm{\Se^{-1}A}^2 |\mathcal{B}|^2}{h^2} \end{equation*} and solving for $\tau$ yields \begin{equation} \label{eq:sublevel_tau_bound} \tau^2 \leq \frac{4M}{h^2} + \frac{4\norm{y}_Y^2 \norm{\Se^{-1}A}^2 |\mathcal{B}|^2}{h^4}. \end{equation} Note that, by the monotonicity of $h$, from $\| y\|_Y \leq Q$ it follows that $h\geq h\left(\frac{1}{2Q}\right)$, hence \begin{equation} \label{eq:sublevel_tau_bound_Q} \tau^2 \leq \frac{4M}{h\left(\frac{1}{2Q}\right)^2} + \frac{4Q^2 \norm{\Se^{-1}A}^2 |\mathcal{B}|^2}{h\left(\frac{1}{2Q}\right)^4}. \end{equation} Combining the two cases, the desired bound for $\tau = \norm{u}_{\ell^1}$ follows for any $u \in \ell^2$ and $B \in \mathcal{B}$.
\end{proof} \begin{proof}[Proof of Thm.~\ref{thm:xhat_is_unique} (existence)] The existence of a minimizer $\hat{u}_B$ in \eqref{eq:xhat} is now guaranteed by standard arguments: any minimizing sequence $\{u_j\}_{j=1}^\infty \subset \ell^2$ belongs to some sublevel set and, therefore, to some $\ell^1$-ball according to Proposition \ref{prop:sublevels}. By the Banach--Alaoglu theorem, the sequence has a weak-* converging subsequence in $\ell^1$. Since $F_B$ is continuous and the $\ell^1$-norm is lower semicontinuous with respect to the weak-* topology, one can show that the limit is a minimizer of $J_B$. \end{proof} Let us note that since $J_B(\hat{u}_B) \leq J_B(0) = 0$, denoting by $c_{\rm UB}$ the constant $c_{\rm UB}(0)$ in \eqref{eq:sublevel}, we have the following bound for such minimizers, uniformly in $B \in \mathcal{B}$: \begin{equation} \|\hat{u}_B\|_{\ell^2} \leq \|\hat{u}_B\|_{\ell^1} \leq c_{\rm UB}. \label{eq:boundmin} \end{equation} The uniqueness of such a minimizer is a consequence of Proposition \ref{prop:condition}, as we will show later. \subsection{Stability in \texorpdfstring{${\mathcal B}$}{B} and uniqueness of the minimizer} \label{subsec:stability} The key result of this section is Proposition \ref{prop:condition}, as it directly yields the uniqueness of the minimizer in Theorem \ref{thm:xhat_is_unique}, as well as its stability with respect to $B$. Such a result is in the spirit of \cite[Theorem 2]{bredies2008linear}, and its proof is partially based on the one of \cite[Lemma 3]{bredies2008linear}. The main contribution of Proposition \ref{prop:condition} is that the constant appearing in \eqref{eq:condition} does not depend on the choice of $B$, provided that we consider an operator class ${\mathcal B}$ satisfying Assumption \ref{ass:mathcalB}. The proof of Proposition \ref{prop:condition} requires some preliminary lemmas.
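As a concrete aside, the variational problem \eqref{eq:xhat} can be explored numerically on a truncated, finite-dimensional instance. The following sketch (all dimensions, operators, and the identity noise covariance are illustrative stand-ins, not objects from this paper) minimizes $J_B(u) = \frac12\|\Se^{-1/2}ABu\|_Y^2 - \langle y, \Se^{-1}ABu\rangle + \|u\|_{\ell^1}$ with the standard proximal-gradient (ISTA) iteration, and then checks that the computed minimizer satisfies the $\ell^1$ subdifferential condition.

```python
import numpy as np

# Toy finite-dimensional instance of the variational problem (hypothetical sizes).
rng = np.random.default_rng(0)
n, d = 30, 10                                     # dim(Y) and coefficient dimension
A = rng.standard_normal((n, d)) / np.sqrt(n)      # stand-in forward operator
B = np.linalg.qr(rng.standard_normal((d, d)))[0]  # stand-in synthesis operator
Sinv = np.eye(n)                                  # noise covariance Sigma_e = Id
y = A @ B @ (3.0 * np.eye(d)[:, 0]) + 0.01 * rng.standard_normal(n)

def J(u):
    """J_B(u) = 1/2 ||S^{-1/2} A B u||^2 - <y, S^{-1} A B u> + ||u||_1."""
    v = A @ B @ u
    return 0.5 * v @ Sinv @ v - y @ Sinv @ v + np.abs(u).sum()

def grad_F(u):
    """Gradient of the smooth part: B^* (S^{-1} A)^* (A B u - y)."""
    return B.T @ (Sinv @ A).T @ (A @ B @ u - y)

# ISTA: a gradient step on F_B followed by soft-thresholding, the prox of ||.||_1.
H = B.T @ A.T @ Sinv @ A @ B          # Hessian of F_B
L = np.linalg.norm(H, 2)              # Lipschitz constant of grad F_B
u = np.zeros(d)
for _ in range(5000):
    g = u - grad_F(u) / L
    u = np.sign(g) * np.maximum(np.abs(g) - 1.0 / L, 0.0)

# Optimality: w = -grad_F(u) must lie in the subdifferential of ||.||_1 at u,
# i.e. |w_k| <= 1 for all k, with w_k = sign(u_k) on the support of u.
w = -grad_F(u)
assert np.all(np.abs(w) <= 1 + 1e-8)
assert J(u) <= 0.0                    # J(u) <= J(0) = 0, as in the existence proof
```

At the fixed point of the iteration, coordinates where the residual correlation satisfies $|w_k| < 1$ are forced to zero, which mirrors the finite-support mechanism exploited in the stability analysis below.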
Let us first note that $F_B \colon \ell^2 \rightarrow \R$ is convex and differentiable with gradient \[ F_B'(u) = B^*\left(\Se^{-1}A\right)^* (AB u - y), \] where we have used the identity $A^* (\Se^{-1} A) = \left(\Se^{-1}A\right)^* A$, since $\langle \Se^{-1} Av, Aw \rangle = \langle Av, \Se^{-1}Aw \rangle = \langle (\Se^{-1}A)^*Av, w \rangle$ for all $v,w \in X$ as $\Se^{-1}$ is self-adjoint and $\Im(A) \subset \operatorname{dom}({\Se^{-1}})$. Notice that $F_B'$ is affine in $u$ and Lipschitz continuous, and the Lipschitz constant is uniformly bounded by $L_{F'} = |{\mathcal B}|^2 \| \Se^{-1/2} A \|^2$ for $B \in {\mathcal B}$. Since both $F_B$ and $\Phi$ are convex, $\hat{u}_B$ being a minimizer of $J_B$ is equivalent to $0$ belonging to the subdifferential of $J_B$ at $\hat{u}_B$, i.e.\ $0\in \partial J_B(\hat{u}_B)$. Now, due to the differentiability of $F_B$, an equivalent optimality condition is given by $\hat{w}_B \in \partial \Phi(\hat{u}_B)$, where \begin{equation} \hat{w}_B = -F'_B(\hat{u}_B) = B^*\left(\Se^{-1}A\right)^* (y- AB \hat{u}_B). \label{eq:OC} \end{equation} Moreover, thanks to the explicit expression of the subdifferential of the $\ell^1$-norm, we can specify that $\hat{w}_B \in \partial \Phi(\hat{u}_B)$ is equivalent to \begin{equation} \hat{w}_{B,k} \left\{ \begin{aligned} = \sign(\hat{u}_{B,k}) \quad &\text{if } \hat{u}_{B,k} \neq 0, \\ \in [-1,1] \quad &\text{if } \hat{u}_{B,k} = 0. \\ \end{aligned}\right. \label{eq:ell1subdiff} \end{equation} The following lemmas explore properties that are valid uniformly in the compact operator class $\mathcal{B}$. For $N \in \N$, we introduce the orthogonal projection onto the first $N$ components of $\ell^2$ by setting \begin{equation*} P_N\colon \ell^2 \rightarrow \ell^2: \ P_N u = (u_1, \ldots, u_N, 0, 0, \ldots), \qquad P_N^\perp = \operatorname{Id} - P_N. \end{equation*} \begin{lemma}\label{lem:PN1} Suppose that Assumptions \ref{ass:compatibility}, \ref{ass:FBI}, and \ref{ass:mathcalB} are satisfied.
Let $\mathbb{B}_{Y}(Q) = \{y \in Y: \| y\|_Y \leq Q\}$. Then the elements $\hat{w}_B \in \ell^2$ corresponding to the minimizers $\hat{u}_B \in \ell^2$ of $J_B$ via \eqref{eq:OC} satisfy \begin{equation*} \lim_{N \rightarrow \infty} \sup\big\{ \| P_N^\perp \hat{w}_B \|_{\ell^2}: \; B \in \mathcal{B},\ y \in \mathbb{B}_{Y}(Q) \big\} = 0. \end{equation*} \end{lemma} \begin{proof} By contradiction, suppose there exist $\varepsilon > 0$ and a diverging sequence $N_M$, $M=1,2,\dots$, such that \[ \sup\bigl\{ \big\| P_{N_M}^\perp B^* \big( \Se^{-1}A \big)^*\big( y - A B \hat{u}_B \big) \big\|_{\ell^2}: B \in \mathcal{B},\ y \in \mathbb{B}_{Y}(Q) \bigr\} \geq 2\varepsilon \quad \forall M. \] Consider in particular a sequence $\{B_M\}_{M=1}^\infty \subset \mathcal{B}$ and a sequence $\{y_M\}_{M=1}^\infty \subset \mathbb{B}_Y(Q)$ satisfying \[ \big \| P_{N_M}^\perp B_M^* \big( \Se^{-1}A \big)^*\big( y_M - A B_M \hat{u}_{B_M} \big) \big\|_{\ell^2} \geq \varepsilon \quad \forall M, \] where $\hat{u}_{B_M}$ denotes the solution of \eqref{eq:xhat} with $B=B_M$ and $y=y_M$. By the compactness of $\mathcal{B}$, there exists $B \in \mathcal{B}$ such that $ B_M \rightarrow B$ in the operator norm, up to a subsequence. Moreover, consider the sequence $h_M = A B_M \hat{u}_{B_M}$, $M=1,2,\dots$. Since $\norm{\hat{u}_{B_M}}_{\ell^2}$ is bounded uniformly with respect to $B\in \mathcal{B}$ and $y \in \mathbb{B}_Y(Q)$ by \eqref{eq:boundmin}, $\norm{B_M} \leq |{\mathcal B}|$ independently of $M$, and $A$ is bounded, the sequence $\{h_M\}_{M=1}^\infty\subset Y$ is bounded, thus weakly convergent (up to a subsequence) to an element $h \in Y$. Since $\big(\Se^{-1} A\big)^*$ is compact by the Schauder theorem, it maps weakly convergent sequences to strongly convergent ones; hence $(\Se^{-1} A)^*h_M \rightarrow (\Se^{-1} A)^* h$ in $X$. Similarly, the bounded sequence $\{y_M\}_{M=1}^\infty$ admits a weak limit $y \in Y$, and $(\Se^{-1} A)^*y_M \rightarrow (\Se^{-1} A)^* y$.
In conclusion, we have that \begin{eqnarray*} \varepsilon &\leq & \| P_{N_M}^\perp B_M^*(\Se^{-1} A)^*(y_M - h_M)\|_{\ell^2} \\ & \leq & \| P_{N_M}^\perp B_M^*(\Se^{-1} A)^*(y_M - y)\|_{\ell^2} + \| P_{N_M}^\perp B_M^*(\Se^{-1} A)^*(h_M - h)\|_{\ell^2}\\ & &\quad + \| P_{N_M}^\perp B_M^*(\Se^{-1} A)^*(y - h)\|_{\ell^2} \\ & \leq & |\mathcal{B}| \| (\Se^{-1} A)^*(y_M - y)\|_{X} + |\mathcal{B}| \| (\Se^{-1} A)^*(h_M - h)\|_{X} \\ & &\quad+ \| P_{N_M}^\perp (B_M-B)^*(\Se^{-1} A)^*(y - h)\|_{\ell^2} + \| P_{N_M}^\perp B^*(\Se^{-1} A)^*(y - h)\|_{\ell^2} \\ & \leq & |\mathcal{B}| \| (\Se^{-1} A)^*(y_M - y)\|_{X} + |\mathcal{B}| \| (\Se^{-1} A)^*(h_M - h)\|_{X} \\ & & \quad+ \| B_M- B \|_{\ell^2 \rightarrow X} \|(\Se^{-1} A)^*(y - h)\|_{X} + \| P_{N_M}^\perp B^*(\Se^{-1} A)^*(y - h)\|_{\ell^2}. \end{eqnarray*} As $M \rightarrow \infty$, all the terms on the right-hand side converge to $0$, which entails a contradiction. \end{proof} \begin{lemma} Suppose that Assumptions \ref{ass:compatibility}, \ref{ass:FBI}, and \ref{ass:mathcalB} are satisfied. Take $N\in\N$. Then, there exists a constant $c_{\rm LB} = c_{\rm LB}(A,\mathcal{B},N)>0$ such that \begin{equation*} \| \Se^{-1/2} ABP_N u \|_{Y}^2 \geq c_{\rm LB} \| P_N u \|_{\ell^2}^2 \end{equation*} for all $B \in {\mathcal B}$ and $u\in\ell^2$. \label{lem:PN2} \end{lemma} \begin{proof} We prove this claim by contradiction. Assume that there exist two sequences $\{u_M\}_{M=1}^\infty \subset \ell^2$ and $\{B_M\}_{M=1}^\infty \subset \mathcal{B}$ such that \begin{equation} \label{eq:lower_bound_aux1} \frac{\| \Se^{-1/2} A B_M P_N u_M\|_{Y}^2}{\|P_N u_M \|_{\ell^2}^2} < \frac{1}{M} \quad \forall M. \end{equation} Let us write $p_M = \frac{P_N u_M}{\|P_N u_M\|_{\ell^2}}$, so that $\{p_M\}_{M=1}^\infty \subset H_N$, where $H_N = \{ u \in \ell^2: u_k = 0 \ \forall k >N, \|u \|_{\ell^2}=1\}$. Notice that $H_N\subset \ell^2$ is a bounded and closed subset of a finite-dimensional subspace and hence compact.
Rephrasing \eqref{eq:lower_bound_aux1} we have \[ \| \Se^{-1/2} A B_M p_M\|_{Y}^2 < \frac{1}{M} \quad \forall M. \] Since both $\{p_M\}_{M=1}^\infty\subset H_N$ and $\{B_M\}_{M=1}^\infty \subset {\mathcal B}$ belong to compact sets, up to a subsequence, the sequences converge to limit points $p \in H_N$ and $B \in \mathcal{B}$, respectively. By the continuity of $\Se^{-1/2}A$ and by the equi-continuity of the operators in $\mathcal{B}$, we deduce that $\| \Se^{-1/2} A B p \|_Y = 0$, which by the injectivity of $\Se^{-1/2}$ and by Assumption \ref{ass:FBI} implies that $p = 0$. This yields a contradiction with the assumption that $\|p\|_{\ell^2}=1$ and completes the proof. \end{proof} We can finally state and prove the main result of this subsection: \begin{proposition} \label{prop:condition} Let $\hat{u}_B \in \ell^2$ be a minimizer of $J_B$, where $B\in \mathcal{B}$ and $y \in \mathbb{B}_Y(Q)$. For every $M$, there exists a constant $\tilde c_{\rm ST} = \tilde c_{\rm ST}(M;A,\Se,\mathcal{B},Q)$ such that for any $B\in{\mathcal B}$ we have \begin{equation} \label{eq:condition} \{v \in \ell^2 \; | \; J_B(v) \leq M\} \subset \{v \in \ell^2 \; | \; \| v - \hat{u}_B \|_{\ell^2}^2 \leq \tilde c_{\rm ST} (J_B(v) - J_B(\hat{u}_B))\}. \end{equation} \end{proposition} \begin{proof} For clarity, the proof is divided into several steps. \textit{Step $1$.} Let us show that there exists $N_0 = N_0(A,\Se,\mathcal{B},Q)$ such that $P_{N_0}^\perp \hat{u}_B = 0$. Rephrasing Lemma \ref{lem:PN1}, we have that there exists a sequence $\delta_N$, $N=1,2,\dots$, converging to zero such that \[ \| P_N^\perp\hat{w}_B \|_{\ell^2} \leq \delta_N \quad \forall B \in \mathcal{B}, \ y \in \mathbb{B}_Y(Q). \] Therefore, it is possible to pick $N_0$ such that \[ \| P_{N_0}^{\perp} \hat{w}_B \|_{\ell^2} \leq \frac{1}{2} \] for all $B\in {\mathcal B}$ and $y \in \mathbb{B}_Y(Q)$.
Notice that such $N_0$ is independent of the specific choice of $B$, and depends on $y$ only through $Q$: thus, we denote it as $N_0(A,\Se,\mathcal{B},Q)$. Since $\| P_{N_0}^{\perp} \hat{w}_B \|_{\ell^\infty} \leq \| P_{N_0}^{\perp} \hat{w}_B \|_{\ell^2}$, we also deduce that \begin{equation} |\hat{w}_{B,k} | := \left|[\hat{w}_B]_k\right| \leq \frac{1}{2} \quad \text{for $k > {N_0}$,} \label{eq:wkPN} \end{equation} and by the optimality condition satisfied by $\hat{w}_B$, together with the characterization of the subdifferential in \eqref{eq:ell1subdiff}, we get \begin{equation} \hat{u}_{B,k} := [\hat{u}_B]_k = 0 \quad \text{for $k > {N_0}$,} \label{eq:ukPN} \end{equation} whence $P_{N_0}^\perp \hat{u}_B = 0$, i.e.\ $P_{N_0} \hat{u}_B = \hat{u}_B$, independently of $B$. \textit{Step $2$.} We show that \begin{equation} \| P_{N_0}^\perp (v- \hat{u}_B) \|_{\ell^2}^2 \leq 2 c_{\rm UB}(M) \left( J_B(v) - J_B(\hat{u}_B) - \frac{1}{2} \| \Se^{-1/2} AB(v-\hat{u}_B)\|_Y^2 \right), \label{eq:step2} \end{equation} where $c_{\rm UB}(M) = c_{\rm UB}(M; A, \Se, Q, |{\mathcal B}|)$ is given in Proposition \ref{prop:sublevels}. For ease of notation, let us denote \[ r_B(v) = J_B(v) - J_B(\hat{u}_B) - \frac{1}{2}\| \Se^{-1/2} A B (v-\hat{u}_B)\|_Y^2. \] By direct computations, we have \[ \begin{aligned} r_B(v) &= \Phi(v) - \Phi(\hat{u}_B) + \frac{1}{2}\| \Se^{-1/2}AB v \|_Y^2 - \frac{1}{2}\| \Se^{-1/2}AB\hat{u}_B\|_Y^2 - \frac{1}{2}\| \Se^{-1/2}AB(v-\hat{u}_B)\|_Y^2 \\& \qquad - \langle y, \Se^{-1} AB (v - \hat{u}_B) \rK \\ &= \Phi(v) - \Phi(\hat{u}_B) + \lY \Se^{-1/2} AB \hat{u}_B, \Se^{-1/2} AB (v-\hat{u}_B) \rY - \langle B^*( \Se^{-1} A)^* y, v - \hat{u}_B \rangle \\ &= \Phi(v) - \Phi(\hat{u}_B) + \< B^*(\Se^{-1} A)^* ( AB \hat{u}_B - y), v-\hat{u}_B \>.
\end{aligned} \] Using the definition of $\hat{w}_B$ in \eqref{eq:OC}, we obtain the expression \[ r_B(v) = \Phi(v) - \Phi(\hat{u}_B) - \langle \hat{w}_B, v-\hat{u}_B \rangle = \sum_{k \in \N} \left( |v_k| - |\hat{u}_{B,k}| - \hat{w}_{B,k}(v_k - \hat{u}_{B,k}) \right). \] Moreover, thanks to \eqref{eq:ell1subdiff}, it is easy to verify that each term of the previous summation is non-negative. Consider now $N_0$ introduced in Step 1: by \eqref{eq:wkPN} and \eqref{eq:ukPN}, \[ \begin{aligned} r_B(v) &\geq \sum_{k > {N_0}} \left( |v_k| - |\hat{u}_{B,k}| - \hat{w}_{B,k}(v_k - \hat{u}_{B,k}) \right) = \sum_{k > {N_0}} \left(|v_k| - \hat{w}_{B,k} v_k \right) \\ &\geq \sum_{k > {N_0}} \left( |v_k| - \frac{1}{2}| v_k| \right) = \frac{1}{2} \| P_{N_0}^\perp v \|_{\ell^1} \geq \frac{1}{2} \| P_{N_0}^\perp v \|_{\ell^2} = \frac{1}{2} \| P_{N_0}^\perp ( v- \hat{u}_B) \|_{\ell^2}. \end{aligned} \] In conclusion, \[ \| P_{N_0}^\perp ( v- \hat{u}_B) \|_{\ell^2}^2 \leq 2 r_B(v) \| P_{N_0}^\perp (v-\hat{u}_B) \|_{\ell^2} \leq 2 r_B(v) \| v \|_{\ell^2}. \] Since $J_B(v) \leq M$, by Proposition~\ref{prop:sublevels} we have $\| v\|_{\ell^2} \leq c_{\rm UB}(M)$, hence \eqref{eq:step2} holds. \textit{Step $3$.} Consider again the projection operators associated with the choice $N=N_0$ discussed in Step 1: \begin{equation} \| v - \hat{u}_B \|_{\ell^2}^2 = \| P_{N_0} (v-\hat{u}_B) \|_{\ell^2}^2 + \| P_{N_0}^\perp (v-\hat{u}_B) \|_{\ell^2}^2. 
\label{eq:start} \end{equation} We can bound the first term on the right-hand side of \eqref{eq:start} by means of Lemma \ref{lem:PN2} and get, for any $B \in \mathcal{B}$, that \[ \begin{aligned} \| P_{N_0} (v - \hat{u}_B)\|_{\ell^2}^2 &\leq \frac{1}{c_{\rm LB}} \|\Se^{-1/2} ABP_{N_0}(v-\hat{u}_B) \|_Y^2 \\ & \leq \frac{2}{c_{\rm LB}} \|\Se^{-1/2} AB (v-\hat{u}_B) \|_Y^2 + \frac{2L_{F'}}{c_{\rm LB}} \| P_{N_0}^\perp (v-\hat{u}_B) \|_{\ell^2}^2, \end{aligned} \] where the constant $c_{\rm LB}$ depends on $N_0$, which is nevertheless independent of $B$. Collecting now the terms and observing that \eqref{eq:step2} trivially holds with the larger constant $C = \max\left\{2 c_{\rm UB}(M), \frac{4}{c_{\rm LB}}\left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right)^{-1}\right\}$, by \eqref{eq:start} we obtain \[ \begin{aligned} &\| v - \hat{u}_B \|_{\ell^2}^2 \leq \frac{2\|\Se^{-1/2} AB (v-\hat{u}_B) \|_Y^2}{c_{\rm LB}} + \left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right) \| P_{N_0}^\perp (v-\hat{u}_B) \|_{\ell^2}^2 \\ &\leq \frac{2\|\Se^{-1/2} AB (v-\hat{u}_B) \|_Y^2}{c_{\rm LB}} + C \left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right) \left( J_B(v) - J_B(\hat{u}_B) - \frac{1}{2} \| \Se^{-1/2} AB(v-\hat{u}_B)\|_Y^2 \right). \end{aligned} \] Since $\frac{2}{c_{\rm LB}}-\frac{C}{2} \left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right) \leq 0$ we have that \eqref{eq:condition} holds with \begin{equation} \label{eq:tilde_cst} \tilde c_{\rm ST} = C \left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right) = \max\left\{2 c_{\rm UB}(M)\left( 1 + \frac{2L_{F'}}{c_{\rm LB}} \right), \frac 4{c_{\rm LB}} \right\}. \end{equation} This completes the proof. \end{proof} Now the uniqueness of the minimizer $\hat{u}_B$ follows directly from Proposition~\ref{prop:condition}. \begin{proof}[Proof of Thm.~\ref{thm:xhat_is_unique} (uniqueness)] Let $\hat{u}$ and $\hat{u}'$ be two minimizers of $J_B$. 
By Proposition~\ref{prop:condition}, choosing $v = \hat{u}'$ and $M = 0$ (indeed, $J_B(\hat{u}) = J_B(\hat{u}') \leq J_B(0) = 0$), we obtain \[ \| \hat{u} - \hat{u}' \|_{\ell^2}^{2} \leq \tilde c_{\rm ST} (J_B(\hat{u})-J_B(\hat{u}')) = 0, \] whence $\hat{u} = \hat{u}'$. \end{proof} Finally, we obtain a stability estimate for the perturbation of $\hat{u}_B$ with respect to modifications of $B$ within the class $\mathcal{B}$. \begin{proof}[Proof of Thm. \ref{thm:stab}] Notice that, thanks to the uniform bound \eqref{eq:boundmin} on the minimizers, it holds that, independently of $B_1,B_2 \in \mathcal{B}$, \begin{eqnarray} \label{eq:uniform_bound_for_minimizers} J_{B_2}(\hat{u}_1) & = & \frac{1}{2} \| \Se^{-1/2} A B_2 \hat{u}_1 \|_Y^2 - \langle y, \Se^{-1}A B_2\hat{u}_1\rangle_Y + \| \hat{u}_1\|_{\ell^1} \nonumber \\ & \leq & \frac{1}{2}L_{F'} c_{\rm UB}^2 + Q \| \Se^{-1}A\||\mathcal{B}| c_{\rm UB} + c_{\rm UB}. \end{eqnarray} Then, we can apply Proposition \ref{prop:condition} to $J_{B_2}$ for the choice $v = \hat{u}_1$, with $M$ given by the upper bound in \eqref{eq:uniform_bound_for_minimizers}. We obtain \[ \| \hat{u}_1 - \hat{u}_2 \|_{\ell^2}^2 \leq \tilde c_{\rm ST}(J_{B_2}(\hat{u}_1)-J_{B_2}(\hat{u}_2)), \] where the explicit expression of the constant $\tilde c_{\rm ST}$ is given in \eqref{eq:tilde_cst}. Now, since $J_{B_1}(\hat{u}_1) \leq J_{B_1}(\hat{u}_2)$, we have \[ \begin{aligned} \| \hat{u}_1 - \hat{u}_2 \|_{\ell^2}^2 &\leq \tilde c_{\rm ST} \big( J_{B_2}(\hat{u}_1)-J_{B_2}(\hat{u}_2) + J_{B_1}(\hat{u}_2)-J_{B_1}(\hat{u}_1) \big) \\ & \leq \tilde c_{\rm ST} \big(|J_{B_2}(\hat{u}_1) -J_{B_1}(\hat{u}_1) | + |J_{B_1}(\hat{u}_2) -J_{B_2}(\hat{u}_2)|\big). \end{aligned} \] Consider now each term $|J_{B_1}(\hat{u}_i) - J_{B_2}(\hat{u}_i)|$ with $i = 1,2$.
Since $\| \hat{u}_i\|_{\ell^2} \leq c_{\rm UB}$, we have \[ \begin{aligned} |J_{B_1}(\hat{u}_i) - J_{B_2}(\hat{u}_i)| & \leq \big| \| \Se^{-1/2}AB_1 \hat{u}_i \|_{Y}^2 - \| \Se^{-1/2}AB_2 \hat{u}_i \|_{Y}^2 \big| + \big| \lK y, \Se^{-1} A(B_1 - B_2)\hat{u}_i\rY \big| \\ & \leq \big|\lY \Se^{-1}A(B_1 - B_2) \hat{u}_i , A(B_1+B_2) \hat{u}_i \rY \big| + \big| \lX \big(\Se^{-1}A\big)^*y,(B_1 - B_2)\hat{u}_i\rX \big| \\ & \leq \Big( \| \Se^{-1} A\| \| A \| \| B_1 + B_2 \| \|\hat{u}_i\|_{\ell^2}^2 + \| \Se^{-1}A \| \| y\| \|\hat{u}_i\|_{\ell^2} \Big) \| B_1 - B_2 \| \\ & \leq (2|\mathcal{B}| \|\Se^{-1} A\| \|A\| c_{\rm UB}^2 + \|\Se^{-1}A\| Q c_{\rm UB}) \|B_1 - B_2 \| = C \|B_1 - B_2 \|, \end{aligned} \] where $C = \|\Se^{-1} A\|c_{\rm UB}(2|\mathcal{B}| \|A\| c_{\rm UB} + Q ).$ Thus, we obtain \[ \| \hat{u}_1 - \hat{u}_2 \|_{\ell^2} \leq \left( 2 C \tilde c_{\rm ST} \| B_1 - B_2 \| \right)^{1/2}, \] and, in conclusion, since $\hat{x}_i = B_i \hat{u}_i$, it follows that \[ \begin{aligned} \| \hat{x}_1 - \hat{x}_2 \|_{X} &\leq \| B_1 - B_2 \| \|\hat{u}_1\|_{\ell^2} + \| B_2 \| \| \hat{u}_1 - \hat{u}_2 \|_{\ell^2} \\ & \leq \big( \| B_1 - B_2\|^{1/2} \| \hat{u}_1\|_{\ell^2} + \| B_2\|(2C\tilde{c}_{\rm ST})^{1/2}\big) \| B_1 - B_2\|^{1/2} \\ & \leq \big((2|\mathcal{B}|)^{1/2} c_{\rm UB} + |\mathcal{B}| (2\tilde c_{\rm ST}C)^{1/2} \big) \| B_1 - B_2 \|^{1/2}, \end{aligned} \] which concludes the proof with the choice $c_{\rm ST} = (2|\mathcal{B}|)^{1/2} c_{\rm UB} + |\mathcal{B}| (2\tilde c_{\rm ST}C)^{1/2}.$ \end{proof} \section{Statistical learning framework} \label{sec:stat} As shown in the previous section, for every choice of $B\in \mathcal{B}$ and any noisy output $y\in Y$, we can associate a solution $x=R_B(y)\in X$ of the inverse problem $Ax=y$, which depends on $B$.
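The dependence of the reconstruction on $B$, and the idea of selecting $B$ from data by empirical risk minimization, can be sketched numerically. In the toy example below (all dimensions, operators, and noise levels are illustrative stand-ins, not objects from this paper), a hypothetical two-element candidate class is compared on simulated training pairs, with $R_B(y)$ computed by a standard proximal-gradient (ISTA) iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 30, 10, 20        # dim(Y), coefficient dimension, sample size (toy values)
A = rng.standard_normal((n, d)) / np.sqrt(n)   # stand-in forward operator
Sinv = np.eye(n)                               # noise covariance Sigma_e = Id

def reconstruct(B, y, n_iter=3000):
    """R_B(y) = B u_B, where u_B minimizes
    1/2 ||S^{-1/2} A B u||^2 - <y, S^{-1} A B u> + ||u||_1 (solved by ISTA)."""
    H = B.T @ A.T @ Sinv @ A @ B
    L = np.linalg.norm(H, 2)
    b = B.T @ (Sinv @ A).T @ y
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = u - (H @ u - b) / L
        u = np.sign(g) * np.maximum(np.abs(g) - 1.0 / L, 0.0)
    return B @ u

# Hypothetical candidate class: the synthesis operator generating the data,
# and a perturbation of it.
B_true = np.linalg.qr(rng.standard_normal((d, d)))[0]
B_alt = B_true + 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)
candidates = [B_true, B_alt]

# Training pairs: x sparse in B_true, data y = A x + noise.
samples = []
for _ in range(m):
    u = np.zeros(d)
    u[rng.choice(d, 2, replace=False)] = 3.0 * rng.standard_normal(2)
    x = B_true @ u
    samples.append((x, A @ x + 0.01 * rng.standard_normal(n)))

def empirical_risk(B):
    """L_hat(B) = (1/m) sum_j ||R_B(y_j) - x_j||^2."""
    return float(np.mean([np.sum((reconstruct(B, y) - x) ** 2)
                          for x, y in samples]))

risks = [empirical_risk(B) for B in candidates]
B_hat = candidates[int(np.argmin(risks))]
```

With such a data model one typically observes that the dictionary generating the data attains the smaller empirical risk; the finite sample bounds derived in this section quantify how fast this empirical selection approaches the minimizer of the expected risk.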
As discussed in Section~\ref{sec:intro} (see in particular Assumption~\ref{ass:stat}), the output $y$ and hence the solution $R_B(y)$ are random variables, and the optimal $B^\star$ is defined as the minimizer of the expected risk $L$ defined by \eqref{eq:loss}. Since $L$ depends on the unknown distribution of the pair $(x,y)$, $B^\star $ is estimated by its empirical version $\widehat{B}$, given by~\eqref{eq:empiricaltarget}. In this section, we provide a finite sample bound on the discrepancy \[ L(\widehat{B})-L(B^\star),\] under the assumption that both $x$ and $\eps$ are bounded random variables, and $\mathcal B$ is compact. In fact, boundedness is assumed for convenience; for similar techniques with sub-Gaussian random variables, see e.g.\ \cite{ratti2023learned}. \begin{assumption} \label{ass:bdd_rv} There exists $Q_0>0$ such that $\norm{x}_X\leq Q_0$ and $\norm{\varepsilon}_Y \leq Q_0$ almost surely. \end{assumption} Assumption \ref{ass:bdd_rv} directly implies a bound $\norm{y}_Y \leq Q = (\norm{A} +1)Q_0$, which was used in Section~\ref{sec:optimization}. The following result shows the existence of $B^\star$ and $\widehat{B}$, and provides a finite sample bound on $L(\BS) - L(\BOp)$ in probability. It is proved analogously to Proposition~4 in \cite{cucker2002mathematical} (see also \cite[Lemma A.5]{alberti2021learning}). We recall that, for any $r>0$, $\mathcal{N}(\mathcal{B},r)$ denotes the \textit{covering number} of $\mathcal{B}$, i.e., the minimum number of balls of radius $r$ (in the operator norm on $\LO(\ell^2,X)$) whose union contains $\mathcal{B}$. \begin{thm}\label{thm:CuckSmale} Let Assumptions \ref{ass:stat}, \ref{ass:compatibility}, \ref{ass:FBI}, \ref{ass:mathcalB}, and \ref{ass:bdd_rv} hold. There exist a minimizer $\BOp$ of $L$ and, with probability $1$, a minimizer $\BS$ of $\wh{L}$ over $\mathcal{B}$.
Furthermore, for all $\eta > 0$, \begin{equation} \Prob_{\vzv \sim \rho^m} \left[ L(\BS) - L(\BOp) \leq \eta \right] \geq 1 - 2\mathcal{N}\left( \mathcal{B},C\eta^2 \right) e^{-C'm\eta^2} \label{eq:CuckSmale} \end{equation} for some $C,C'>0$ depending only on $A$, $\Se$, $Q_0$ and $\mathcal B$. \end{thm} In the above statement, the quantity $\BS$ depends on $\vzv$ and is therefore a random variable. For convenience, in the rest of the section, we denote by $\Prob$ and $\E$ the probability and the expected value computed with respect to the sample $\vzv$, respectively. \begin{proof} Consider two different elements $B_1,B_2 \in \mathcal{B}$. Then, $\rho$-a.e.\ in $X\times Y$ \[ \begin{aligned} \left| \| R_{B_1}(y) - x\|_X^2 - \| R_{B_2}(y) - x\|_X^2 \right| & = \left| \langle R_{B_1}(y) - R_{B_2}(y), R_{B_1}(y)-x + R_{B_2}(y)- x \rangle \right| \\ & \leq \| R_{B_1}(y) - R_{B_2}(y) \|_X ( \|R_{B_1}(y) \|_X + \| R_{B_2}(y)\|_X + 2 Q_0 ) \\ & \leq 2 (|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST} \| B_1 - B_2 \|^{1/2}, \end{aligned} \] where we have used Theorem~\ref{thm:stab} and \eqref{eq:boundmin}. By integrating with respect to the probability distribution $\rho$, we deduce that the above bound also holds for $L$. Indeed, \begin{eqnarray}\label{eq:lip1} |L(B_1)-L(B_2)| &=& \left|\E[\|R_{B_1}(y) - x\|_X^2] - \E[\|R_{B_2}(y) - x\|_X^2]\right| \nonumber \\ & \leq &\E\left[|\|R_{B_1}(y) - x\|_X^2 - \|R_{B_2}(y) - x\|_X^2 |\right] \nonumber \\ & \leq & 2(|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST} \| B_1-B_2\|^{1/2}, \end{eqnarray} and, by replacing $\rho$ with its empirical counterpart, with probability $1$, \begin{equation}\label{eq:lip2} \begin{split} |\wh{L}(B_1)-\wh{L}(B_2)| &= \left| \frac{1}{m} \sum_{j = 1}^m \|R_{B_1}(y_j) - x_j\|_X^2 - \frac{1}{m} \sum_{j = 1}^m \|R_{B_2}(y_j) - x_j\|_X^2 \right| \\ & \leq \frac{1}{m} \sum_{j = 1}^m \left| \|R_{B_1}(y_j) - x_j\|_X^2 - \|R_{B_2}(y_j) - x_j\|_X^2 \right| \\ &\leq 2(|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST} \| B_1-B_2\|^{1/2}.
\end{split} \end{equation} Since both $L$ and $\wh{L}$ are continuous functionals on $\mathcal{B}$ and $\mathcal{B}$ is compact with respect to the operator norm topology, the corresponding minimizers $\BOp$ and $\BS$ over $\mathcal{B}$ exist (almost surely for $\BS$). Next, we notice that \[ \sup_{B \in \mathcal{B}} |\wh{L}(B) - L(B)| \leq \frac{\eta}{2} \quad \Rightarrow \quad L(\BS) - \wh{L}(\BS) \leq \frac{\eta}{2} \quad \text{and} \quad \wh{L}(\BOp) - L(\BOp) \leq \frac{\eta}{2}, \] which ultimately implies that \[ 0\leq L(\BS) - L(\BOp) = \left(L(\BS) - \wh{L}(\BS)\right)+ \left(\wh{L}(\BS) - \wh{L}(\BOp)\right) + \left(\wh{L}(\BOp) - L(\BOp)\right) \leq \eta, \] since the central difference is non-positive by the definition of $\BS$. Thus, \[ \Prob \left[ L(\BS) - L(\BOp) \leq \eta \right] \geq \Prob \left[ \sup_{B \in \mathcal{B}} |\wh{L}(B) - L(B)| \leq \frac{\eta}{2} \right]. \] We now provide a lower bound for the latter term. In view of \eqref{eq:lip1} and \eqref{eq:lip2}, by using the reverse triangle inequality, for every $B_1,B_2 \in \mathcal{B}$, \[ \begin{aligned} \left| |\wh{L}(B_1) - L(B_1)| - |\wh{L}(B_2) - L(B_2)| \right| & \leq |\wh{L}(B_1) - \wh{L}(B_2)| + |L(B_1)-L(B_2)| \\ & \leq 4 (|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST} \| B_1-B_2\|^{1/2}. \end{aligned} \] Let now $N=\mathcal{N}\left( \mathcal{B},\left(\frac{\eta}{16(|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST}} \right)^2 \right)$ and consider a discrete set $B_1, \ldots, B_N$ such that the balls $\mathcal{B}_k$ centered at $B_k$ with radius $r = \left(\frac{\eta}{16(|\mathcal{B}|c_{\rm UB} + Q_0) c_{\rm ST}} \right)^2$ cover the entire $\mathcal{B}$. For every $k$ and every $B \in \mathcal{B}_k$, it holds that \[ \left| |\wh{L}(B) - L(B)| - |\wh{L}(B_k) - L(B_k)| \right| \leq 4(|\mathcal{B}|c_{\rm UB}+Q_0)c_{\rm ST} \| B - B_k \|^{1/2} \leq \frac{\eta}{4}.
\] Therefore, for every $B \in \mathcal{B}_k$, the event $|\wh{L}(B) - L(B)| > \frac{\eta}{2}$ implies the event $|\wh{L}(B_k) - L(B_k)| > \frac{\eta}{4}$, and a bound (in probability) of this term can be provided by standard concentration results. Indeed, $\wh{L}(B_k)$ is the sample average of $m$ realizations of the random variable $\|R_{B_k}(y) - x\|_X^2$, whose expectation is $L(B_k)$. Moreover, this random variable is bounded by $(|\mathcal{B}|c_{\rm UB} + Q_0)^2$ thanks to \eqref{eq:boundmin} and Assumption \ref{ass:bdd_rv}, and therefore via Hoeffding's inequality \[ \begin{aligned} \Prob \left[{\sup_{B \in \mathcal{B}_k}} |\wh{L}(B) - L(B)| > \frac{\eta}{2} \right] & \leq \Prob \left[ |\wh{L}(B_k) - L(B_k)| > \frac{\eta}{4} \right] \leq 2 e^{-\frac{m\eta^2}{8(|\mathcal{B}|c_{\rm UB}+Q_0)^4}}. \end{aligned} \] Notice that this inequality holds uniformly in $k$. Finally, we obtain \begin{equation*} \begin{split} \Prob \left[ \sup_{B \in \mathcal{B}} |\wh{L}(B) - L(B)| \leq \frac{\eta}{2} \right] &= 1 - \Prob \left[ \sup_{B \in \mathcal{B}} |\wh{L}(B) - L(B)| > \frac{\eta}{2} \right] \\ & \geq 1- \sum_{k=1}^N \Prob \left[ \sup_{B \in \mathcal{B}_k} |\wh{L}(B) - L(B)| > \frac{\eta}{2} \right] \\ &\geq 1 - 2N e^{-\frac{m\eta^2}{8(|\mathcal{B}|c_{\rm UB}+Q_0)^4}}. \end{split} \end{equation*} This concludes the proof. \end{proof} We now provide a more explicit expression for the sample error estimate under the assumption that the covering numbers of the set $\mathcal{B}$ have a specific decay rate. It is possible to obtain similar bounds, for example, whenever the singular values of the compact embedding of $\mathcal{B}$ into its ambient space show a polynomial decay. \begin{cor} \label{thm:sample_error_prob} Let Assumptions \ref{ass:stat}, \ref{ass:compatibility}, \ref{ass:FBI}, \ref{ass:mathcalB}, and \ref{ass:bdd_rv} hold. Assume further that \begin{equation} \log(\mathcal{N}(\mathcal{B},r)) \leq C r^{-1/s}, \label{eq:covering} \end{equation} holds for some constants $C,s>0$. Let $\tau>0$.
Then, for sufficiently large values of $m$, there holds \begin{equation}\label{eq:tail} \Prob \left[ L(\hat{B})-L(\BOp)\leq \left(\frac{\alpha_1 + \alpha_2\sqrt{\tau}}{\sqrt{m}}\right)^{1-\frac{1}{1+s}}\right] \geq 1-e^{-\tau} \end{equation} for some $\alpha_1,\alpha_2>0$ depending only on $A$, $\mathcal{B}$, $\Se$, and $Q_0$. \end{cor} \begin{proof} By \eqref{eq:CuckSmale}, for $\eta\in (0,1]$ and $m$ sufficiently large (namely, such that $a_2 m\eta^{2+2/s}\geq a_1$) we have \begin{equation} \label{eq:tailbound2} \Prob \left[ L(\BS) - L(\BOp) \leq \eta \right] \geq 1-e^{-\eta^{-2/s}(a_2 m\eta^{2+2/s}-a_1)} \geq 1-e^{-(a_2 m\eta^{2+2/s}-a_1)} =1 - e^{-\tau}, \end{equation} where $a_1 = C\bigl(16(|\mathcal{B}| c_{\rm UB}+Q_0)c_{\rm ST}\bigr)^{2/s}$, $a_2 = (8(|\mathcal{B}|c_{\rm UB}+Q_0)^4)^{-1}$ and $ \tau = a_2 m \eta^{2+2/s} - a_1. $ We can rephrase this relationship by expressing $\eta$ as a function of $\tau$ and $m$: \[ 1 - e^{-\tau} \leq \Prob\left[ L(\BS) - L(\BOp) \leq \left(\frac{a_1 + \tau}{a_2 m} \right)^{\frac{1}{2+2/s}} \right] \leq \Prob\left[ L(\hat{B})-L(\BOp)\leq \left(\frac{\alpha_1 + \alpha_2\sqrt{\tau}}{\sqrt{m}}\right)^{1-\frac{1}{1+s}}\right], \] where $\alpha_1=\sqrt{a_1/a_2}$ and $\alpha_2=\sqrt{1/a_2}$, as $\sqrt{a_1+\tau}\leq \sqrt{a_1} + \sqrt{\tau}$. This concludes the proof. \end{proof} Note that, besides the bound in probability \eqref{eq:tail}, we can also provide a bound in expectation for the excess risk.
Indeed, by \eqref{eq:tailbound2}, \[ \Prob\left[ L(\BS) - L(\BOp) > \eta \right] \leq \min\big\{1,e^{a_1 \eta^{-2/s} -a_2 m\eta^2}\big\}, \] and by the tail integral formula (notice that $e^{a_1 \eta^{-2/s} - a_2 m\eta^2} = 1$ for $\eta = \hat{\eta} = k_1 m^{-\frac{s}{2(s+1)}}$) \[ \begin{aligned} \E\big[L(\hat{B})-L(\BOp)\big] &= \int_{0}^\infty \Prob \left[ L(\BS) - L(\BOp) > \eta \right]d \eta \\ &\leq \hat{\eta} + e^{a_1 \hat{\eta}^{-2/s}} \int_{\hat{\eta}}^\infty e^{-a_2 m \eta^2}d\eta \leq k_1 m^{-\frac{s}{2(s+1)}} + k_2 m^{-\frac{1}{2}}, \end{aligned} \] which allows us to conclude that, for sufficiently large $m$, \begin{equation} \E\big[L(\hat{B})-L(\BOp)\big] \lesssim m^{-\frac{s}{2(s+1)}}. \label{eq:bound_covering} \end{equation} The bound~\eqref{eq:bound_covering} should be compared with the one in \cite[Corollary 7.2]{ratti2023learned} setting $\alpha = \frac{1}{2}$ and $q=2$, namely \begin{equation} \E \big[L(\hat{B})-L(\BOp)\big] \lesssim m^{-\frac{1}{2}} \label{eq:bound_chaining} \end{equation} provided that $s>1$, compare with the assumptions of \cite[Proposition 7.3]{ratti2023learned}. The rate in \eqref{eq:bound_covering} is worse than the rate in \eqref{eq:bound_chaining} obtained via a chaining argument. On the other hand, the bound in Corollary~\ref{thm:sample_error_prob} holds in probability, whereas the bound~\eqref{eq:bound_chaining} holds only in expectation. \section{On the choice of the parameter class \texorpdfstring{$\mathcal{B}$}{B}} \label{sec:examples} \subsection{Compact perturbations of a known operator}\label{sec:perturbation_known} We first consider an example of a class $\mathcal{B}$ consisting of certain compact perturbations of a reference operator $B_0$. Under rather general assumptions, we can provide an estimate of the covering numbers of $\mathcal{B}$. The proofs of this section are standard, and are postponed to Appendix~\ref{sec:proofs_example}. Assume that $A$ is injective.
We fix an injective operator $B_0 \colon \ell^2 \rightarrow X$ and two compact operators $E_1,E_2\colon \ell^2\to\ell^2$ such that $\| B_0 \|_{\LO(\ell^2,X)}\leq 1$ and $\| E_i \|_{\LO(\ell^2)}\leq 1$ for $i=1,2$ (here, $\LO(X):=\LO(X,X)$). For every finite set $I\subset\N$, let $c_I>0$. We define \begin{equation} \mathcal{B} = \{ B_0(\operatorname{Id} + K): K \in \mathcal{H}\}, \label{eq:Bexample} \end{equation} where the set of operators $\mathcal{H}$ is defined as \begin{multline} \mathcal{H} = \{ K = E_1 T E_2 : T \in \LO(\ell^2), \; \| T \|_{\LO(\ell^2)}\leq 1 \; \text{and} \\ \|(\operatorname{Id} + K)u\|_{\ell^2}\ge c_I \|u\|_{\ell^2} \; \text{ for every finite $I\subset\N$ and $u\in \ell^2_I$}\}, \label{eq:Hexample} \end{multline} where $\ell^2_I := \operatorname{span}\{ e_i: i \in I\}$. \begin{lemma}\label{lem:Bcompactexample} The set $\mathcal{B}$ defined in \eqref{eq:Bexample}-\eqref{eq:Hexample} satisfies Assumption~\ref{ass:mathcalB}. \end{lemma} The theoretical results of Section~\ref{sec:stat} rely on the covering numbers of $\mathcal{B}$. We now derive an estimate of the form \eqref{eq:covering} in the case when $\mathcal{B}$ is given by \eqref{eq:Bexample}. For the sake of clarity, we indicate the norm used to perform the covering explicitly. \begin{proposition}\label{prop:coveringexample} Let $\mathcal{B}$ be defined as in \eqref{eq:Bexample}-\eqref{eq:Hexample}. Suppose that $E_1 = E_2 = E$, where $E\in\LO(\ell^2)$ is a positive operator whose eigenvalues $\lambda_n$ satisfy \[ \lambda_n \le c n^{-s},\qquad n\in\N \] for some $c,s>0$. Take $0<s'<s$. Then there exists $C>0$ such that \begin{equation} \log \mathcal{N}(\mathcal{B},r; \|\cdot \|_{\LO(\ell^2,X)}) \leq C r^{-\frac{2s+1}{2ss'}},\qquad r>0.
\label{eq:covering_example_comp} \end{equation} \end{proposition} With this estimate on the covering numbers, it is possible to apply Corollary~\ref{thm:sample_error_prob} and obtain a finite sample bound in the case when $\mathcal{B}$ is defined as in \eqref{eq:Bexample}. \subsection{Learning the mother wavelet} We consider here the problem of learning the mother wavelet. In other words, the set $\mathcal{B}$ will consist of synthesis operators associated to a family of wavelets. Consider the Hilbert space $X=L^2(\R^d)$ and, for $\psi\in L^2(\R^d)$, define \[ \psi_{j,k}(x)=2^{\frac{jd}{2}}\psi(2^j x-k),\qquad \text{for a.e.\ $x\in\R^d$, $j\in\Z$, $k\in\Z^d$.} \] For $f\in L^2(\R^d)\cap L^1(\R^d)$ (the space $L^1(\R^d)$ is added just to simplify the exposition, thanks to the continuity of $\hat f$), define \[ \|f\|_W^2=\sup_{\xi\in\R^d}\sum_{j,k}|\hat f(2^{-j}\xi+2\pi k)|^2,\qquad \hat f(s)=\int_{\R^d} f(x) e^{-ix\cdot s}\,dx, \] and consider \[ W=\{f\in L^2(\R^d)\cap L^1(\R^d):\|f\|_W<+\infty\}. \] It is easy to verify that $\|\cdot\|_W$ is a norm on $W$. It is well known that, for $\psi\in W$, the family $\{\psi_{j,k}\}_{j\in\Z,k\in\Z^d}$ is a Bessel sequence of $L^2(\R^d)$, namely, the synthesis operator \[ B_\psi\colon \ell^2(\Z\times \Z^d)\to L^2(\R^d),\qquad (c_{j,k})\mapsto \sum_{j,k} c_{j,k} \psi_{j,k}, \] is well defined and bounded (see, e.g., \cite[Section~3.3]{daubechies-1992} and \cite[Theorem~3]{1999-jing}). We now construct a compact family of ``mother wavelets'' $\psi$ in $W$. (Note that the term mother wavelet here is used even though the family $\psi_{j,k}$ need not be a frame of $L^2(\R^d)$.) \begin{lemma}\label{lem:wave} Let $\Psi\subseteq W$ be a compact set (with respect to $\|\cdot\|_W$) and $a>0$. Set \[ \Psi_a = \{\psi\in\Psi: \|B_\psi x\|_{L^2(\R^d)} \geq a \| x \|_{\ell^2} \ \forall x \in \ell^2\}. 
\] Then \[ \mathcal{B}_{\Psi_a} = \{B_\psi:\psi\in\Psi_a\} \] is a compact subset of $\LO (\ell^2,L^2(\R^d))$ with respect to the operator norm, and \[ \mathcal{N}\left( \mathcal{B}_{\Psi_a},\delta; \|\cdot \|_{\LO(\ell^2,L^2(\R^d))} \right) \le \mathcal{N}\left( \Psi,(2\pi)^{\frac32 d}\,\delta; \|\cdot \|_{W} \right),\qquad \delta>0. \] \end{lemma} \begin{proof} By the estimates in the proof of Theorem~3 in \cite{1999-jing}, we have \[ \sum_{j,k}|\langle f,\psi_{j,k}\rangle |^2 \le (2\pi)^{-3d} \|\psi\|_W^2\|f\|^2_{L^2(\R^d)},\qquad \psi\in W,\,f\in L^2(\R^d). \] In other words, we have the following bound on the norm of the analysis operator: $ \|B^*_\psi\|\le (2\pi)^{-\frac32 d} \|\psi\|_W. $ As a consequence, we have \[ \|B_\psi\|\le (2\pi)^{-\frac32 d} \|\psi\|_W,\qquad \psi\in W. \] In other words, the map \[ \zeta\colon W\to \LO(\ell^2,L^2(\R^d)),\qquad \psi\mapsto B_\psi \] is linear and bounded, with $\|\zeta\|\le (2\pi)^{-\frac32 d}$. Since $\zeta$ is bounded and $\Psi$ is compact, we have that $ \{B_\psi:\psi\in\Psi\}=\zeta(\Psi)$ is compact. Thus, $\mathcal{B}_{\Psi_a}$ is the intersection of the compact set $\zeta(\Psi)$ and the closed set $\{B\in\LO(\ell^2,L^2(\R^d)): \|Bx\|_{L^2(\R^d)} \geq a \| x \|_{\ell^2} \ \forall x \in \ell^2\}$, and so it is compact. Finally, the estimate on the covering numbers of $\mathcal{B}_{\Psi_a}$ immediately follows: \[ \mathcal{N}\left( \mathcal{B}_{\Psi_a},\delta \right)\le \mathcal{N}\left( \zeta(\Psi),\delta \right) \le \mathcal{N}\left( \Psi,\|\zeta\|^{-1}\delta \right) \le \mathcal{N}\left( \Psi,(2\pi)^{\frac32 d}\,\delta \right). \] \end{proof} We can now combine this result with Corollary~\ref{thm:sample_error_prob} and obtain the following corollary, in which we compare the loss corresponding to the optimal mother wavelet $\psi^*$ with the loss corresponding to the mother wavelet $\widehat\psi$ obtained by minimizing the empirical risk. \begin{cor} Consider the settings of Corollary~\ref{thm:sample_error_prob} and of Lemma~\ref{lem:wave}.
Suppose that \[ \log \mathcal{N} (\Psi,r; \|\cdot \|_{W}) \le C r^{-1/s},\qquad r>0, \] for some $C,s>0$. There exist $\alpha_1,\alpha_2>0$ such that the following is true. Take $\tau>0$ and \[ \psi^* \in\argmin_{\psi\in\Psi_a} L(B_\psi),\qquad \widehat\psi \in\argmin_{\psi\in\Psi_a} \widehat L(B_\psi). \] Then, with probability larger than $1-e^{-\tau}$, we have \[ L(B_{\widehat \psi})-L(B_{\psi^*})\leq \left(\frac{\alpha_1 + \alpha_2\sqrt{\tau}}{\sqrt{m}}\right)^{1-\frac{1}{1+s}}. \] \end{cor} \begin{proof} By Lemma~\ref{lem:wave}, we have \[ \log \mathcal{N}\left( \mathcal{B}_{\Psi_a},r \right) \le \log \mathcal{N}\left( \Psi,(2\pi)^{\frac32 d}\,r \right) \le C (2\pi)^{-\frac{3d}{2s}} r^{-1/s}. \] The result is now a direct consequence of Corollary~\ref{thm:sample_error_prob}. \end{proof} \begin{appendices} \section{Finite-dimensional setting} \label{app:finite_dim} In order to derive a more interpretable expression for the minimization problem \eqref{eq:xhat}, let us first suppose that, on top of Assumption \ref{ass:stat}, the covariance operator $\Se \colon Y \rightarrow Y$ is also surjective. Notice in particular that this can only hold if the space $Y$ is finite-dimensional, for otherwise it would be impossible for $\Se$ to be both compact and invertible. In this case, we can add the term $\displaystyle \frac{1}{2}\| \Se^{-1/2}y\|_Y^2$ in \eqref{eq:xhat} (which is irrelevant for optimization purposes) and get \begin{equation} \hat{x}_B = B \hat{u}_B, \quad \hat{u}_B = \argmin_{u \in \ell^1} \Big\{ \frac{1}{2} \| \Se^{-1/2}(A\Bs u-y) \|_Y^2 + \| u \|_{\ell^1} \Big\}. \label{eq:easy_xhat} \end{equation} When $\Se = \sigma^2 I$, this reduces to the familiar form of Lasso regression with $\ell^1$ regularization. In this context, it is easier to interpret \eqref{eq:easy_xhat} as a regularization of the inverse problem \eqref{eq:invprob}, which promotes the sparsity of the solution with respect to the frame associated with the operator $B$.
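When $\Se = \sigma^2 I$ and all spaces are truncated to finite dimension, the minimization in \eqref{eq:easy_xhat} can be solved by the classical iterative soft-thresholding algorithm (ISTA). The following sketch is our own illustration of this point, not part of the paper's analysis; all function names are ours.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, applied componentwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def synthesis_lasso(A, B, y, sigma=1.0, n_iter=500):
    """ISTA sketch for u_B = argmin_u 0.5*||(A B u - y)/sigma||^2 + ||u||_1,
    returning the reconstruction x_B = B u_B (finite-dimensional analogue
    of the synthesis formulation, assuming a white-noise covariance)."""
    M = A @ B
    # Step size 1/L, with L the Lipschitz constant of the gradient
    # of the smooth data-fidelity term.
    L = np.linalg.norm(M / sigma, 2) ** 2
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ u - y) / sigma**2
        u = soft_threshold(u - grad / L, 1.0 / L)
    return B @ u
```

With $A = B = \operatorname{Id}$ and $\sigma = 1$, the iteration reduces after one step to a single soft-thresholding of the data $y$, which is the familiar closed-form Lasso solution in the orthonormal case.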
In particular, the expression in \eqref{eq:easy_xhat} is known as the \textit{synthesis} formulation of such a problem, since the synthesis operator $\Bs \colon \ell^2 \rightarrow X$ appears in it, and the minimization takes place in the whole space of coefficients $\ell^2$. An alternative problem, which is in general not equivalent to \eqref{eq:easy_xhat}, is the \textit{analysis} formulation of the sparsity-promoting regularization: for a bounded linear operator $C\colon X \rightarrow \ell^2$, let \begin{equation} \tilde{x}_C = \argmin_{x \in X} \Big\{ \frac{1}{2} \| \Se^{-1/2}(Ax -y) \|_Y^2 + \| C x \|_{\ell^1} \Big\}. \label{eq:easy_ana} \end{equation} The equivalence of \eqref{eq:easy_xhat} and \eqref{eq:easy_ana} holds if $B$ is invertible and $C = B^{-1}$: in that case, $\tilde{x}_{B^{-1}} = \hat{x}_B$. The task of learning an optimal sparsity-promoting regularizer could have been expressed in terms of the analysis operator $C$; however, for theoretical purposes, we preferred the synthesis formulation. \section{White noise}\label{app:white_noise} As we mentioned in Section~\ref{sec:intro}, the assumption on $\Se$ does not allow us, in principle, to consider white noise in infinite dimensions, for which the covariance $\Se=\mathrm{Id}$ is not trace-class, but only bounded. However, this situation can be considered by embedding $Y$ into a larger space. Instead of providing a general and abstract construction, we prefer to discuss two particular examples. For more details, see \cite[Section~A.1]{alberti2021learning}. \begin{example}\label{white_noise} Let $\Omega$ be a connected bounded open subset of $\R^d$, and $\{\eps_y\}_{y\in L^2(\Omega)}$ be a Hilbert random process such that \[ \E{[\eps_y]}=0,\qquad \E{[\eps_y \eps_{y'}]}=\<y,y'\>_{L^2(\Omega)}, \qquad y,y'\in L^2(\Omega), \] which is the classical model for white noise, since the covariance operator of $\eps$ is the identity.
Assume that $\Omega$ satisfies a cone condition and fix $s>d/2$. Then the Rellich-Kondrachov theorem \cite[Theorem 6.3]{adams2003sobolev} implies that the Sobolev space $H^s(\Omega)$ is embedded into $C_b(\Omega)$, the space of bounded continuous functions on $\Omega$, and that the canonical inclusion $\iota\colon H^s(\Omega)\to L^2(\Omega)$ is a compact operator. The above embedding implies that $H^s(\Omega)$ is a reproducing kernel Hilbert space with bounded reproducing kernel $K\colon\Omega\times \Omega\to \R$. An easy calculation shows that \[ \tr(\iota^*\circ\iota)=\int_\Omega K(x,x)\, dx <+\infty.\] This fact implies that in the Gelfand triple \begin{equation} H^s(\Omega)\xrightarrow[]{\;\;\iota\;\;} L^2(\Omega) \xrightarrow[]{\;\;\iota^*\;\;} H^{-s}(\Omega),\label{gelfand} \end{equation} both $\iota$ and $\iota^*$ are Hilbert-Schmidt operators. Hence, it is possible to define a random variable $\widehat{\eps}$ taking values in $H^{-s}(\Omega)$ such that \[ \<\widehat{\eps},y\>=\eps_{\iota(y)},\qquad y\in H^s(\Omega). \] It is immediate to check that \[ \E{[ \<\widehat{\eps},y\>_{H^{-s},H^s}\ \<\widehat{\eps},y'\>_{H^{-s},H^s}]}=\<\iota(y),\iota(y')\>_{L^2(\Omega)}, \] so that the covariance operator of $\widehat{\eps}$ is $\iota^*\circ\iota$, which is a trace class operator. Thus, $\widehat{\eps}$ is a square-integrable random variable in $H^{-s}(\Omega)$. To apply the results of our paper, it is enough to set $Y=H^{-s}(\Omega)$ and to lift the inverse problem \eqref{eq:invprob} to $H^{-s}(\Omega)$: \[ \iota^*(y) = (\iota^*\circ A)x + \widehat\varepsilon. \] This requires identifying $H^{s}(\Omega)$ and $H^{-s}(\Omega)$ using the Riesz lemma. Note that this identification is not standard, since it is not compatible with the double embedding of~\eqref{gelfand}. However, the intermediate space $L^2(\Omega)$ does not matter once $\widehat{\eps}$ is defined.
\end{example} In the next example, we consider the sequence space $\ell^2$, as a prototypical Hilbert space (once an orthonormal basis has been fixed). \begin{example} Let $X$ be a Hilbert space and set $Y=\ell^2$. Let $A\colon X\to\ell^2$ be a bounded and linear map, and consider the inverse problem \eqref{eq:invprob} \begin{equation}\label{eq:inv-white} y = Ax+\varepsilon. \end{equation} Let $\{e_n\}_{n\in\N}$ be the canonical basis of $\ell^2$, defined by $(e_n)_i = \delta_{i,n}$. We consider the case when $\varepsilon$ is a white Gaussian noise with variance $\sigma^2$, namely, \begin{equation*} \varepsilon = \sum_{n\in\N}\varepsilon_n e_n, \end{equation*} where the random variables $\varepsilon_n$ are i.i.d.\ scalar Gaussians with mean zero and variance $\sigma^2$, i.e.\ $\varepsilon_n\sim\mathcal{N}(0,\sigma^2)$. The above expression for $\varepsilon$ is only formal, because the series is divergent in $\ell^2$ with probability 1. As a consequence, \eqref{eq:inv-white} is not well-defined in $\ell^2$. However, writing \eqref{eq:inv-white} in components with respect to the orthonormal basis $\{e_n\}_{n\in\N}$ yields \begin{equation}\label{eq:family} y_n = (Ax)_n + \varepsilon_n,\qquad n\in\N, \end{equation} where we wrote $Ax = \sum_n (Ax)_n e_n$. This is a family of well-defined scalar equations. Let us now see how it is possible to reformulate this as a problem in a Hilbert space. Equivalently, we can rewrite \eqref{eq:family} as \begin{equation}\label{eq:family2} \frac{y_n}{n^s} = \frac{(Ax)_n}{n^s} + \frac{\varepsilon_n}{n^s},\qquad n\in\N, \end{equation} for some $s>\frac12$. Let us introduce the embedding $ \iota^*\colon\ell^2\to\ell^2$ defined by $\iota^*(e_n)= e_n/n^s $ and the random variable \begin{equation*} \widehat\varepsilon = \sum_{n\in\N}\frac{\varepsilon_n}{n^s} e_n. \end{equation*} Note that $\E{[\norm{\widehat\eps}^2_2]}<+\infty$, and $\widehat\varepsilon\in \ell^2$ with probability 1. 
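For the reader's convenience, the first claim follows from a direct computation, by monotone convergence:
\[
\E{[\norm{\widehat\eps}^2_2]} = \sum_{n\in\N}\frac{\E{[\varepsilon_n^2]}}{n^{2s}} = \sigma^2 \sum_{n\in\N} \frac{1}{n^{2s}} < +\infty,
\]
since $2s>1$.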
Thus, we can rewrite \eqref{eq:family2} as \[ \iota^*(y) = \iota^*(Ax) + \widehat\varepsilon. \] This equation is meaningful in $\ell^2$, and has the same form as the original inverse problem \eqref{eq:inv-white}. \end{example} \section{Connections with Dictionary Learning and unsupervised strategies} \label{app:DL} Although our approach shares the same aim as dictionary learning, i.e., promoting the sparse representation of some ground truths by selecting a suitable synthesis operator, it is possible to outline some substantial differences. The key observation is that the optimal operator sought in dictionary learning is independent of the forward operator and the noise distribution, and depends only on the distribution of $x$. On the contrary, the optimal $\BS$ in \eqref{eq:bilevel} depends on both $\varepsilon$ and $A$, yielding, in general, a smaller MSE and, consequently, better statistical guarantees for the solution of the inverse problem. Let us briefly introduce the standard dictionary learning framework \cite{tosic-frossard}. If, instead of the training dataset $\vzv = \{(x_j,y_j)\}_{j=1}^m$ employed in \eqref{eq:empiricaltarget} to discretize \eqref{eq:bilevel}, only a collection of ground truths $\{x_j\}_{j=1}^m$ is available, sampled i.i.d.\ from the (unknown) marginal $\rho_x$, we may consider the following unsupervised technique: let $\wt{R}_B$ be a sparsity-promoting regularizer of the form \eqref{eq:xhat}-\eqref{eq:RB-def} associated with $A = \operatorname{Id}$ and without assuming the knowledge of the covariance $\Se$, namely: \begin{gather*} \wt{R}_B(x) = B \wt{u}_B(x), \quad \wt{u}_B(x) = \argmin_{u \in \ell^1} \left\{ \frac{1}{2} \|Bu - x \|^2 + \|u\|_{\ell^1}\right\} \\ \BS_{DL} \in \argmin_{B \in \mathcal{B}} \left\{ \frac{1}{m} \sum_{j=1}^m \| x_j - \wt{R}_B(x_j) \|^2 \right\}. \label{eq:DL_out} \end{gather*} This problem yields a bilevel formulation of the well-known Dictionary Learning problem \cite{peyre-fadili,yang-et-al}.
Our supervised strategy resembles dictionary learning: indeed, recall that, according to \eqref{eq:empiricaltarget}, \begin{align*} \BS &\in \argmin_{B \in \mathcal{B}} \left\{ \frac{1}{m} \sum_{j=1}^m \| x_j - R_B(A x_j+\varepsilon_j) \|^2 \right\}, \end{align*} where $R_B$ is as in \eqref{eq:RB-def}. However, since $R_B$ is a nonlinear map, in contrast with our previous work on quadratic regularizers \cite{alberti2021learning}, it is easy to show that the two problems are not equivalent in general. In particular, while $\BS_{DL} $ is independent of the forward operator $A$ and of the noise $\varepsilon$ by construction, $\BS$ will in general depend on both. We illustrate this with a simple 1D example. Let $\sigma^2=\mathbb{E}(\varepsilon^2)>0$ denote the variance of the noise (with zero mean) and consider the 1D problem \[ A x = ax \quad \text{with } a \in (0,\sigma] \] and the regularization $B^{-1}x = bx$, $b>0$. Given an unknown $x^\dagger$ and data $y= Ax^\dagger + \varepsilon = ax^\dagger + \varepsilon$, the Lasso reconstruction is given by \[ \hat x = \argmin_x \frac12|\sigma^{-1}(A x - y)|^2 + |bx| = \argmin_x \frac12| x - y/a|^2 + \frac{\sigma^2 b}{a^2}|x|. \] Setting $\gamma = \frac{a}{\sigma}$, this may be rewritten by using the soft-thresholding operator $S_\lambda$ as \[ \hat x = S_{\frac{b}{\gamma^2}}\left(x^\dagger +\gamma^{-1}\tilde \varepsilon\right), \] where $\tilde\varepsilon = \varepsilon/\sigma$ satisfies $\mathbb{E}(\tilde \varepsilon^2)=1$. Now consider the mean squared error \[ \text{MSE} = \mathbb{E}_{x, \tilde\varepsilon}\left[\left|S_{\frac{b}{\gamma^2}}\left(x +\gamma^{-1}\tilde \varepsilon\right) -x\right|^2\right]. \] Note that we consider the minimization of the expected risk, and not of the empirical risk, because it is more meaningful for this comparison. For simplicity, we choose the following zero-mean independent distributions \[ \mathbb P(x = \pm 1) = \frac{1}{2}, \quad \mathbb P (\tilde\varepsilon = \pm 1) = \frac{1}{2}.
\] After a series of elementary computations, we can show that \begin{equation*} \text { MSE }= \begin{cases}\frac{1}{\gamma^4}( b-\gamma )^2 & b \in\left(0, \gamma-\gamma^2\right), \\ \frac{1}{2}\left[1+\frac{1}{\gamma^4}( b-\gamma)^2\right] & b \in\left[\gamma-\gamma^2, \gamma+\gamma^2\right), \\ 1 & b \geqslant \gamma+\gamma^2.\end{cases} \end{equation*} Therefore, the optimal value for the MSE is achieved at $b=\gamma =\frac{a}{\sigma}$ and depends on both $a$ and $\sigma$, namely, on both the forward operator and the noise level. Any unsupervised choice for $b$ would be independent of $a$ and $\sigma$ and give rise, in general, to larger values of the MSE. \section{Proofs of Section~\ref{sec:perturbation_known}}\label{sec:proofs_example} \begin{proof}[Proof of Lemma~\ref{lem:Bcompactexample}] Every element $B\in\mathcal{B}$ satisfies Assumption~\ref{ass:FBI} because $A$ is injective and $B|_{\ell^2_I}$ is injective for every finite subset $I\subset\N$ by \eqref{eq:Hexample}. It remains to show that $\mathcal B$ is compact. Since the map \begin{equation}\label{eq:map-lipschitz} \LO(\ell^2)\ni U \longmapsto B_0 (\operatorname{Id} + U) \in \LO(\ell^2,X) \end{equation} is continuous, it is enough to show that $\mathcal{H}\subset\LO(\ell^2)$ is compact. Write $\mathcal{H}=\mathcal{K}\cap\mathcal{C}$, where \begin{align*} &\mathcal{K} = \{ E_1 T E_2 : T \in \LO(\ell^2), \; \| T \|_{\LO(\ell^2)}\leq 1\},\\ &\mathcal{C} =\{K\in\LO(\ell^2): \|(\operatorname{Id} + K)u\|\ge c_I \|u\| \; \text{ for every finite $I\subset\N$ and $u\in \ell^2_I$}\}. \end{align*} The set $\mathcal{K}$ is compact by \cite[Theorem 3]{vala1964}, because $\left\{ T \in \LO(\ell^2): \| T \|_{\LO(\ell^2)}\leq 1 \right\}$ is closed. The set $\mathcal{C}$ is closed, because convergence in operator norm implies pointwise convergence. Hence, $\mathcal{H}=\mathcal{K}\cap\mathcal{C}$ is compact. 
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop:coveringexample}] Note that \begin{equation} \mathcal{N}(\mathcal{B},r; \|\cdot \|_{\LO(\ell^2,X)}) \le \mathcal{N}(\mathcal{H},r;\|\cdot \|_{\LO(\ell^2)}), \label{eq:B_step1} \end{equation} since the map in \eqref{eq:map-lipschitz} is Lipschitz with constant $1$ (recall that $\| B_0 \|_{\LO(\ell^2,X)}\leq 1$). Using the notation of the proof of Lemma~\ref{lem:Bcompactexample}, we have $\mathcal{H}=\mathcal{K} \cap \mathcal{C}$, so that $\mathcal{H}\subseteq\mathcal{K}$. Thus \begin{equation} \mathcal{N}(\mathcal{H},r;\|\cdot \|_{\LO(\ell^2)}) \leq \mathcal{N}(\mathcal{K},r;\|\cdot \|_{\LO(\ell^2)}). \label{eq:B_step2} \end{equation} To further bound the right-hand side of \eqref{eq:B_step2}, we consider the spectral decomposition of $E$. Since $E$ is self-adjoint and compact, we can write it in terms of its eigenvalues $\{\lambda_n\}$ and eigenvectors $\{e_n\}$, and define its rank-$N$ approximation $E_N$ as follows: \begin{equation}\label{eq:EE_N} E = \sum_{n=1}^{\infty} \lambda_n e_n \otimes e_n, \quad \qquad E_N = \sum_{n=1}^{N} \lambda_n e_n \otimes e_n=P_NE=EP_N, \end{equation} where we denote by $u \otimes v$ the rank-$1$ operator such that $(u \otimes v)x = \lX v,x \rX u$, where $\lX \cdot, \cdot \rX$ denotes the inner product in $\ell^2$, and \( P_N=\sum_{n=1}^{N} e_n \otimes e_n \) is the orthogonal projection onto $\operatorname{span}\{e_1,\dots,e_N\}$. Assuming that the sequence $\{\lambda_n\}$ is arranged in non-increasing order, we know that $\| E - E_N \|_{\LO(\ell^2)} \leq \lambda_{N+1}$. We now introduce the sets \[ \mathcal{K}_N = \left\{ K = E_N T E_N: T \in \LO(\ell^2), \ \| T \|_{\LO(\ell^2)}\leq 1 \right\}, \] and prove that the following bound holds: \begin{equation} \mathcal{N}(\mathcal{K}, \rho + 2\lambda_{N+1}; \| \cdot \|_{\LO(\ell^2)}) \leq \mathcal{N}(\mathcal{K}_N,\rho; \| \cdot \|_{\LO(\ell^2)})=:{\hat{\mathcal{N}}} .
\label{eq:B_step3} \end{equation} In order to prove \eqref{eq:B_step3}, consider a $\rho$-covering $\{K_1, \ldots, K_{\hat{\mathcal{N}}}\}$ of $\mathcal{K}_N$. For any $K = ETE\in\mathcal{K}$, let $K^N = E_N T E_N\in\mathcal{K}_N$, and let $K_i$ be such that $\| K^N - K_i\| \leq \rho$. Then, \[ \begin{aligned} \| K - K_i \|_\LO &\leq \| K - K^N \|_\LO + \|K^N - K_i \|_\LO \\ & \leq \| ETE - ETE_N \|_\LO + \| ETE_N - E_N T E_N \|_\LO + \rho \\ & \leq \| E\|_\LO \| T \|_\LO \| E - E_N\|_\LO + \| E-E_N\|_\LO \| T \|_\LO \| E_N \|_\LO + \rho \\ & \leq 2 \lambda_{N+1} + \rho, \end{aligned} \] which shows that $\{K_1, \ldots, K_{\hat{\mathcal{N}}}\}$ is a $(2 \lambda_{N+1} + \rho)$-covering of $\mathcal{K}$. In order to bound the covering numbers of $\mathcal{K}_N$, we claim that $\mathcal{K}_N \subset \mathcal{F}_N$, where \[ \mathcal{F}_N = \left\{ K = E T E: T \in \operatorname{HS}(\ell^2), \ \| T \|_{\operatorname{HS}(\ell^2)}\leq \sqrt{N} \right\}, \] and $\operatorname{HS}(\ell^2)$ denotes the class of Hilbert-Schmidt operators from $\ell^2$ to $\ell^2$, namely, the ones for which the singular values are square-summable. In order to show this, take $K \in \mathcal{K}_N$, with $K = E_N T E_N$ for some $T \in \LO(\ell^2)$ such that $\| T\|_{\LO(\ell^2)} \leq 1$. Setting $ T_N = P_N T P_N $, by \eqref{eq:EE_N} we have \[ K = E_NTE_N = E P_N T P_N E = E T_N E. \] Note that $T_N$ is a rank-$N$ operator, thus belonging to the Hilbert-Schmidt class. Furthermore, \[ \|T_N\|_{\operatorname{HS}(\ell^2)} = \|P_N T P_N\|_{\operatorname{HS}(\ell^2)}\le \|P_N T\|_{\LO(\ell^2)} \|P_N\|_{\operatorname{HS}(\ell^2)}\le \sqrt{N}, \] because $\|P_N\|_{\operatorname{HS}(\ell^2)}=\sqrt{N}$, $\|P_N T\|_{\LO(\ell^2)}\le 1$ and $$\|P_N T P_N\|^2_{\operatorname{HS}(\ell^2)} = \sum_{n=1}^\infty \|P_N T P_N e_n \|^2_{\ell^2} \leq \sum_{n=1}^\infty \| P_N T \|^2_{\mathcal{L}(\ell^2)} \| P_N e_n\|_{\ell^2}^2 = \| P_N T \|^2_{\mathcal{L}(\ell^2)} \| P_N \|_{\operatorname{HS}(\ell^2)}^2.
$$ This shows that $K\in \mathcal{F}_N$, as claimed. By a simple scaling argument, we have \begin{equation} \label{eq:B_step4} \mathcal{N}(\mathcal{K}_N,\rho;\| \cdot \|_\LO) \leq \mathcal{N}(\mathcal{F}_N,\rho; \| \cdot \|_\LO) = \mathcal{N}(\mathcal{F}_1,\rho N^{-1/2}; \| \cdot \|_\LO). \end{equation} We finally have to estimate the covering numbers $\mathcal{N}(\mathcal{F}_1,\varrho;\| \cdot \|_\LO)$. Let us define \[ F_1 = \left\{ T \in \operatorname{HS}(\ell^2): \ \| T \|_{\operatorname{HS}(\ell^2)}\leq 1 \right\}, \] which entails that $\mathcal{F}_1 = j(F_1)$, where $j$ is the (compact) embedding $j\colon \operatorname{HS}(\ell^2) \rightarrow \operatorname{HS}(\ell^2)$ defined as $j( T) = ETE$. Since $\| \cdot \|_{\LO(\ell^2)} \le \| \cdot \|_{\operatorname{HS}(\ell^2)}$, a $\varrho$-covering of $\mathcal{F}_1$ with respect to $\| \cdot \|_{\operatorname{HS}(\ell^2)}$ is also a $\varrho$-covering with respect to $\| \cdot \|_{\mathcal{L}(\ell^2)}$. Thus, \begin{equation}\label{eq:boh} \mathcal{N}(\mathcal{F}_1,\varrho;\| \cdot \|_{\LO(\ell^2)}) \leq \mathcal{N}(\mathcal{F}_1,\varrho; \| \cdot \|_{\operatorname{HS}(\ell^2)}). \end{equation} The quantity $\mathcal{N}(\mathcal{F}_1,\varrho; \| \cdot \|_{\operatorname{HS}(\ell^2)})$ is the covering number of the image of the unit ball $F_1$ of the Hilbert space $\operatorname{HS}(\ell^2)$ through the embedding $j$, and is linked to the \textit{entropy numbers} $\varepsilon_k(j)$: indeed, according to the definitions in \cite[Chapter 1]{cast90}, one clearly sees that \[ \mathcal{N}(\mathcal{F}_1, \varrho; \| \cdot \|_{\operatorname{HS}(\ell^2)}) \leq k \quad \iff \quad \varepsilon_k(j) \leq \varrho. \] In order to estimate the entropy numbers of $j$, we can rely on its singular values, thanks to \cite[Theorem~3.4.2]{cast90}.
In view of the decay $\lambda_n \lesssim n^{-s}$ of the eigenvalues of $E$, following the proof of \cite[Lemma A.9]{alberti2021learning} we can show that the singular values of $j$ decay as $n^{-s'}$, for any $s'< s$. Then, by the same argument used in the proof of \cite[Lemma A.8]{alberti2021learning}, we have that $\varepsilon_k(j) \lesssim (\log k)^{-s'}$, which implies $\mathcal{N}(\mathcal{F}_1, (\log k)^{-s'}; \| \cdot \|_{\operatorname{HS}(\ell^2)})\leq k$ and ultimately \begin{equation} \log \mathcal{N}(\mathcal{F}_1, \varrho; \| \cdot \|_{\operatorname{HS}(\ell^2)}) \lesssim \varrho^{-1/s'} . \label{eq:B_step5} \end{equation} We can now conclude the proof: by \eqref{eq:B_step1}, \eqref{eq:B_step2} and \eqref{eq:B_step3}, setting $\rho = \frac{r}{2}$ and choosing $N$ as the smallest integer such that $\lambda_{N+1} \le \frac{r}{4}$ (so that $\rho + 2\lambda_{N+1} \le r$; moreover, $\lambda_N > \frac{r}{4}$ together with $\lambda_N \le c N^{-s}$ yields $N \lesssim r^{-1/s}$), we get \[ \mathcal{N}(\mathcal{B},r; \|\cdot \|_{\LO(\ell^2,X)}) \le \mathcal{N}(\mathcal{K}_{N},\tfrac{r}{2};\| \cdot\|_{\LO(\ell^2)}). \] Next, by \eqref{eq:B_step4}, we obtain \[ \mathcal{N}(\mathcal{B},r; \|\cdot \|_{\LO(\ell^2,X)}) \le \mathcal{N}(\mathcal{F}_1, \tfrac{r}{2} N^{-1/2}; \| \cdot \|_{\LO(\ell^2)}), \] and by \eqref{eq:boh} and \eqref{eq:B_step5} we finally obtain \[ \log\mathcal{N}(\mathcal{B},r; \|\cdot \|_{\LO(\ell^2,X)}) \lesssim \bigl(rN^{-1/2}\bigr)^{-1/s'} \lesssim r^{-\frac{2s+1}{2ss'}}. \] This shows \eqref{eq:covering_example_comp}, and the proof is concluded. \end{proof} \end{appendices} \section*{Acknowledgments} This material is based upon work supported by the Air Force Office of Scientific Research under award number FA8655-23-1-7083. Co-funded by the European Union (ERC, SAMPDE, 101041040, ERC, SLING, 819789, ERC, AdG project 101097198 and Next Generation EU). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council.
The research of LR has been funded by PNRR - M4C2 - Investimento 1.3. Partenariato Esteso PE00000013 - ``FAIR - Future Artificial Intelligence Research'' - Spoke 8 ``Pervasive AI'', which is funded by the European Commission under the NextGeneration EU programme. The research was supported in part by the MUR Excellence Department Project awarded to Dipartimento di Matematica, Università di Genova, CUP D33C23001110001. The research by GSA, EDV and MS has been supported by the MUR grants PRIN 202244A7YL, 2022B32J5C, and P2022XT498, and by FAIR Project HAOISL 33C24000410007 funded by the European Commission under the NextGeneration EU programme. GSA, EDV, LR and MS are members of the ``Istituto Nazionale di Alta Matematica''. TH and ML were supported by the Research Council of Finland (decision numbers 348504, 353094, 359182 and 359183). \bibliographystyle{siamplain} \bibliography{references} \end{document}
2412.16314v1
http://arxiv.org/abs/2412.16314v1
Thurston construction mapping classes with minimal dilatation
\documentclass[11pt, letterpaper]{amsart} \usepackage[top=3cm, bottom=4cm, left=2.5cm, right=2.5cm]{geometry} \usepackage{verbatim} \usepackage{amsmath,amsthm,amsfonts,amssymb,amscd} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage[mathcal]{eucal} \usepackage{enumerate} \usepackage{mathrsfs} \usepackage{xcolor} \usepackage{float} \usepackage{graphicx} \usepackage{listings} \usepackage{hyperref} \usepackage{todonotes} \usepackage{bm} \usepackage{derivative} \usepackage{centernot} \usepackage{array} \usepackage{cellspace} \usepackage{geometry} \usepackage[skip=2pt plus1pt, indent=30pt]{parskip} \setlength{\cellspacetoplimit}{3pt} \setlength{\cellspacebottomlimit}{3pt} \hypersetup{ bookmarksnumbered=true, colorlinks=true, linkcolor=blue, citecolor=blue, filecolor=blue, menucolor=blue, urlcolor=blue, pdfnewwindow=true, pdfstartview=FitBH } \clubpenalty=9999 \widowpenalty=9999 \newcommand{\notimplies}{\centernot\implies} \newcommand{\ox}{\otimes} \newcommand{\x}{\times} \newcommand{\ceq}{\coloneqq} \newcommand{\too}{\rightarrow} \newcommand{\un}[1]{\underline{ #1 }} \newcommand{\ov}[1]{\overline{ #1 }} \newcommand{\el}{\ell} \renewcommand{\O}{\emptyset} \newcommand{\W}{\wedge} \newcommand{\bd}{\partial} \newcommand{\T}{\top} \newcommand{\pM}[1]{\begin{pmatrix}#1\end{pmatrix}} \newcommand{\bM}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\psM}[1]{\begin{psmallmatrix}#1\end{psmallmatrix}} \newcommand{\bsM}[1]{\begin{bsmallmatrix}#1\end{bsmallmatrix}} \newcommand{\diff}{\D{x}} \newcommand{\nea}{\nearrow} \newcommand{\ua}{\uparrow} \newcommand{\da}{\downarrow} \newcommand{\li}{\lim\limits} \newcommand{\linf}{\liminf\limits} \newcommand{\lsup}{\limsup\limits} \newcommand{\unif}{\rightrightarrows} \newcommand{\lnorm}[2]{\left[\sum_{k=1}^{\infty}{#1}\right]^{1/{#2}}} \providecommand{\ar}{\rightarrow} \providecommand{\arr}{\longrightarrow} \DeclarePairedDelimiter{\inr}{\langle}{\rangle} \DeclareMathOperator{\ext}{ext} \DeclareMathOperator{\osc}{osc} 
\DeclareMathOperator{\id}{id} \DeclareMathOperator{\cl}{Cl} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\intr}{int} \DeclareMathOperator{\jac}{Jac} \DeclareMathOperator{\grd}{grad} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\range}{range} \DeclareMathOperator{\homeo}{{Homeo}^+} \DeclareMathOperator{\isom}{{Isom}^+} \DeclareMathOperator{\mcg}{Mod} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\St}{St} \def\phi{\varphi} \DeclarePairedDelimiter{\inn}{\langle}{\rangle} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \def\uu{\mathbf{u}} \def\vv{\mathbf{v}} \def\ww{\mathbf{w}} \def\im{\mathrm{im}} \def\H{\mathbf{H}} \def\qbar{\overline{q}} \def\SO{\mathrm{SO}} \def\OO{\mathrm{O}} \def\ZZ{\mathrm{Z}} \def\C{\mathbf{C}} \def\R{\mathbf{R}} \def\Q{\mathbf{Q}} \def\Z{\mathbf{Z}} \def\F{\mathbf{F}} \def\GL{\mathrm{GL}} \def\Cu{\mathrm{Cu}} \def\SL{\mathrm{SL}} \def\PGL{\mathrm{PGL}} \def\PSL{\mathrm{PSL}} \def\Aut{\mathrm{Aut}} \def\Inn{\mathrm{Inn}} \def\Out{\mathrm{Out}} \def\RO{\mathrm{RO}} \def\bbR{\mathbb{R}} \def\bbN{\mathbb{N}} \def\bbZ{\mathbb{Z}} \def\dimh{\operatorname{dim_H}} \def\leqtilde{ \underset{\sim}{<}} \def\defequals{\stackrel{\text{def}}{=}} \def\htop{h_{\text{top}}} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem*{lem*}{Lemma} \newtheorem{cor}[thm]{Corollary} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem*{ex}{Exercise} \newtheorem{exmp}{Example} \newtheorem{rem}[thm]{Remark} \newtheorem*{question}{Question} \numberwithin{equation}{subsection} \graphicspath{ {./images/} } \title{Thurston construction mapping classes with minimal dilatation} \author{Maryam Contractor and Otto Reed} \date{} \begin{document} \begin{abstract} Given a pair of filling curves $\alpha, \beta$ 
on a surface of genus $g$ with $n$ punctures, we explicitly compute the mapping classes realizing the minimal dilatation over all the pseudo-Anosov maps given by the Thurston construction on $\alpha,\beta$. We do so by solving for the minimal spectral radius in a congruence subgroup of $\PSL_2(\bbZ)$. We apply this result to realized lower bounds on the intersection number of $\alpha$ and $\beta$ to give the minimal dilatation over any Thurston construction pA map on $\Sigma_{g,n}$ given by a filling pair $\alpha \cup \beta$. \end{abstract} \maketitle \section{Statement of Results} Let $\Sigma_{g,n}$ denote the surface of genus $g$ with $n$ punctures and let $\mcg(\Sigma_{g,n})$ denote the associated mapping class group. If $[f] \in \mcg(\Sigma_{g,n})$ is an isotopy class of pseudo-Anosov (pA) homeomorphisms of $\Sigma_{g,n}$, then there is an associated ``stretch factor'' $\lambda > 1$ which quantifies the scaling of its stable and unstable foliations (\cite{primer}, Section 13.2.3). This ``stretch factor'' or \emph{dilatation} $\lambda$ gives multiple perspectives on $f$. Among other things, $\lambda$ is the growth rate of the unstable foliation of $f$ under iteration and $\log(\lambda)$ is the topological entropy of $f$ (\cite{primer}, Theorem 13.2). In addition, there is a bijective correspondence between the set of dilatations in $\mcg(\Sigma_{g,n})$ and the length spectrum of closed geodesics in the moduli space of $\Sigma_{g,n}$. Finally, $\log(\lambda)$ gives the Teichm\"uller translation length, that is, the infimum over Teichm\"uller space of the distance (in the Teichm\"uller metric) between a point and its image under the action of $[f]$. Thus finding minimal dilatation maps extends to minimizing entropy in subsets of the mapping class group, the length of closed geodesics in moduli space, and the Teichm\"uller translation length.
There is literature on the problem of minimizing dilatation over pA maps in $\mcg(\Sigma_{g,n})$ (see \cite{farbleiningermargalit}, \cite{penner}). We consider a specific class of pA maps related to \emph{filling pairs} of curves in $\Sigma_{g,n}$. If $\alpha$ and $\beta$ are representatives of isotopy classes of simple closed curves $a$ and $b$ on $\Sigma_{g,n}$ and are in minimal position (i.e., the geometric intersection number of $a$ and $b$ equals $\abs{\alpha\cap\beta}$), we say $\alpha,\beta$ \emph{fill} $\Sigma_{g,n}$ if the complement $\Sigma_{g,n} \setminus (\alpha \cup \beta)$ is a union of topological disks or punctured disks. For any such filling pair $\alpha \cup \beta$, let $\Gamma_{\alpha,\beta}$ be the subgroup of $\mcg(\Sigma_{g,n})$ generated by the Dehn twists about $\alpha$ and $\beta$. Thurston showed that any infinite order element of $\Gamma_{\alpha,\beta}$ not conjugate to a power of $T_\alpha$ or $T_\beta$ is pA (Theorem \ref{thurston construction}). Additionally, we call pseudo-Anosov elements of $\Gamma_{\alpha,\beta} \subset \mcg(\Sigma_{g,n})$ \emph{Thurston pA maps}. \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{figure1.jpeg} \caption{Two filling pairs on the torus: the second pair has greater geometric intersection number and consequently corresponds to a higher dilatation pA. Moreover, the composition of Dehn twists about the curves on the first torus is the minimal dilatation pA mapping class.} \end{figure} In this paper, we minimize dilatation over all Thurston pA elements in $\Gamma_{\alpha,\beta}$ for any genus $g$ and number of punctures $n$ and find the following: \begin{thm}\label{main theorem} For $g\neq 0,2$, $n>2$, let $\alpha,\beta$ be any filling pair on $\Sigma_{g,n}$ and let $i(\alpha,\beta)$ be the geometric intersection number of $\alpha$ and $\beta$. Then the minimal dilatation over Thurston pA maps in $\Gamma_{\alpha,\beta}$ is achieved by the product $T_\alpha \cdot T_\beta$.
This dilatation equals \[\frac{1}{2}\left(i(\alpha,\beta)^2+i(\alpha,\beta)\sqrt{i(\alpha,\beta)^2-4}-2\right).\] \end{thm} We find that the minimum dilatation increases monotonically with the geometric intersection number $i(\alpha,\beta)$ for a filling pair $\alpha,\beta$. Using realized minima for the intersection number given by Aougab, Huang, and Taylor (\cite{aougab-huang}, Lemmas 2.1-2.2, \cite{aougab-taylor}, Lemma 3.1, summarized in Section \ref{filling section}), we prove the following corollary giving the minimal dilatation of a Thurston pA map over all possible filling pairs. \begin{cor}\label{main corollary} The minimal dilatation over all Thurston pA mapping classes in $\Gamma_{\alpha,\beta}$ for all filling pairs $\alpha,\beta$ in $\Sigma_{g,n}$, $g\neq 0,2$, is given for $n = 0$ by \[\frac{1}{2}((2g-1)^2+(2g-1)\sqrt{(2g-1)^2-4}-2)\] and for $n \geq 1$ by \[\frac{1}{2}((2g+n-2)^2+(2g+n-2)\sqrt{(2g+n-2)^2-4}-2).\] \\ Additionally, we have the following characterization: \\ \begin{center} \def\arraystretch{1.5} \begin{tabular}{|c|c|c|c|} \hline Genus & Punctures & $i(\alpha,\beta)$ & Minimal Dilatation Thurston pA\\ \hline\hline $g = 0$ & $n \geq 4$ even & $n-2$ & $\frac{1}{2}((n-2)^2+(n-2)\sqrt{(n-2)^2-4}-2)$\\ \hline $g = 0$ & $n \geq 5$ odd & $n-1$ & $\frac{1}{2}((n-1)^2+(n-1)\sqrt{(n-1)^2-4}-2)$\\ \hline $g = 2$ & $n \leq 2$ & $4$ & $7+4\sqrt{3}$\\ \hline $g = 2$ & $n > 2$ & $2g+n-2$ & $\frac{1}{2}((2g+n-2)^2+(2g+n-2)\sqrt{(2g+n-2)^2-4}-2)$\\ \hline \end{tabular} \end{center} \end{cor} \subsection{Proof idea.} To prove Theorem \ref{main theorem}, we use a theorem due to Thurston, which gives a representation into $\PSL_2(\bbR)$ for the subgroup of $\mcg(\Sigma_{g,n})$ generated by twists about filling pairs of curves (\cite{primer}, Section 14.1).\footnote{The construction also generalizes to multicurves, or disjoint collections of simple closed curves.
Here we only present Theorem \ref{thurston construction} for two filling curves--see \cite{primer}, Section 14.1 for the generalized version.} \begin{thm}[Thurston's Construction]\label{thurston construction} Suppose $\alpha,\beta$ are simple closed curves in $\Sigma_{g,n}$, $g,n\ge 0$, such that $\alpha \cup \beta$ fills $\Sigma_{g,n}$. Let $i(\alpha,\beta)$ denote the geometric intersection number of $\alpha$ and $\beta$ and let $\Gamma_{\alpha,\beta}$ be the subgroup generated by the Dehn twists $T_\alpha$ and $T_\beta$ about $\alpha$ and $\beta$, respectively. Then there is a representation $\rho: \Gamma_{\alpha,\beta} \rightarrow \PSL_2(\bbZ)$ given by \[T_\alpha \mapsto \bM{1 & -i(\alpha,\beta) \\ 0 & 1} \quad T_\beta \mapsto \bM{1 & 0 \\ i(\alpha,\beta) & 1}.\] Moreover, $\rho$ has the following properties: \begin{enumerate}[(i)] \item There is a bijective correspondence between the sets of periodic, reducible, and pA elements in $\Gamma_{\alpha,\beta}$ and the sets of elliptic, parabolic, and hyperbolic elements in $\rho(\Gamma_{\alpha,\beta}) \subseteq \PSL_2(\bbR)$, respectively. \item $\rho(f)$ is parabolic exactly when $f$ is conjugate to a power of $T_\alpha$ or $T_\beta$. \item If $\rho(f)$ is hyperbolic then the dilatation of $[f] \in \mcg(\Sigma_{g,n})$ is exactly the spectral radius of $\rho(f)$. \end{enumerate} \end{thm} Using Thurston's representation, we minimize the spectral radius over the subgroup $\langle \rho(T_\alpha),\rho(T_\beta)\rangle \subseteq \PSL_2(\bbZ)$ to find the minimal dilatation mapping class in $\Gamma_{\alpha,\beta}$. Specifically, the smallest spectral radius matrices in the subgroup of $\SL_2(\bbZ)$ given by \[\Lambda_n \coloneqq \left \langle \bM{1 & -n \\ 0 & 1}, \bM{1 & 0 \\ n & 1}\right \rangle,~n\ge 3\] achieve the dilatations given in Corollary \ref{main corollary}. \subsection{Comparison with prior literature.} There are interesting comparisons between our bounds in $\Gamma_{\alpha,\beta}$ and universal bounds for the entire mapping class group.
Let $(\log(\lambda))_g$ denote the smallest achieved topological entropy in $\mcg(\Sigma_{g,n})$ for $g \geq 2$, and define a relation $\sim$ on real-valued functions $f, g$ by $f(x) \sim g(x)$ if there exists some constant $C> 0$ such that \[\frac{f(x)}{g(x)} \in [1/C,C], \quad \text{for all $x$}.\] Penner \cite{penner} shows that $(\log(\lambda))_g \sim \frac{1}{g}$. In particular, as $g \rightarrow \infty$, there are pseudo-Anosov mapping classes whose dilatations get arbitrarily close to $1$. In contrast, in the most general case of our bound ($g \neq 0, 2$ and $n = 0$), the minimal dilatation in $\Gamma_{\alpha,\beta}$ increases monotonically with genus. This contrast with general bounds in $\mcg(\Sigma_{g,n})$ reflects the fact that the proportion of pA maps in $\Gamma_{\alpha,\beta}$ among pA maps in $\mcg(\Sigma_{g,n})$ decreases as genus increases; thus, Thurston pA maps become increasingly less representative of the mapping class group for large genus. \newline The authors would like to thank Benson Farb for posing the question that motivated this paper, continually supporting them for the duration of the project, and providing extensive comments on this paper; Faye Jackson for her invaluable explanations and intuition; and Amie Wilkinson for teaching a wonderful course in analysis where the authors first began collaborating. The authors would also like to thank Aaron Calderon for his patience in teaching them about entropy, Peter Huxford for his help on Proposition \ref{form of lambda}, Tarik Aougab for helpful remarks on Section \ref{filling section}, and Noah Caplinger for discussing Theorem~\ref{min dilatation}; this paper would not have been possible without their insight.
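Before proceeding, the minimization over $\Lambda_n$ described above admits a quick numerical sanity check (ours, for illustration only; it plays no role in the proofs). The following Python sketch enumerates all short words in the two generators of $\Lambda_n$ and their inverses, and confirms that the smallest trace magnitude exceeding $2$ is $n^2-2$, attained already at word length two by the product of the generators.

```python
from itertools import product
import math

def mat_mul(X, Y):
    """Product of 2x2 integer matrices stored as nested tuples."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def min_hyperbolic_trace(n, max_len=6):
    """Smallest |trace| exceeding 2 over all words of length <= max_len
    in the generators of Lambda_n and their inverses."""
    A = ((1, -n), (0, 1))
    B = ((1, 0), (n, 1))
    Ainv = ((1, n), (0, 1))
    Binv = ((1, 0), (-n, 1))
    best = None
    for length in range(1, max_len + 1):
        for word in product((A, B, Ainv, Binv), repeat=length):
            M = ((1, 0), (0, 1))
            for g in word:
                M = mat_mul(M, g)
            t = abs(M[0][0] + M[1][1])
            if t > 2 and (best is None or t < best):
                best = t
    return best

# For n > 2 the minimum should be n^2 - 2, attained by the product AB.
for n in (3, 4, 5):
    assert min_hyperbolic_trace(n, max_len=5) == n * n - 2

# The corresponding spectral radius (t + sqrt(t^2 - 4)) / 2 with t = n^2 - 2
# simplifies to (n^2 + n * sqrt(n^2 - 4) - 2) / 2; check this for n = 3.
t = 3 * 3 - 2
assert abs((t + math.sqrt(t * t - 4)) / 2 - (9 + 3 * math.sqrt(5) - 2) / 2) < 1e-12
```

The exhaustive search only certifies word lengths up to the cutoff, of course; the trace computation in Section 2 is what rules out longer words.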
\section{Minimal spectral radii in $\Lambda_n$} Recall we defined $\Lambda_n$ as \[\Lambda_n =\left \langle \bM{1 & -n \\ 0 & 1}, \bM{1 & 0 \\ n & 1}\right \rangle,~n\ge 3.\] The \emph{minimal dilatation} for any hyperbolic map in $\Lambda_n$ (and thus pA maps in $\Gamma_{\alpha,\beta}$) is given by \[\inf\{|\lambda(\alpha)|: |\lambda(\alpha)|>2, \alpha \in \Lambda_n\},\] where $\lambda(\alpha)$ is the spectral radius of $\alpha$. Since $\Lambda_n$ is discrete, this infimum must be realized. We begin with the case $n = 1$, where $\Lambda_1$ is defined by the same generators. In this case, the solution is well-known since $\Lambda_1\simeq \SL_2(\bbZ)$ (Theorem 2.5, \cite{primer}). So, the problem reduces to minimizing the roots of the characteristic polynomial \[x^2-\tr(\alpha)x + 1.\] In $\SL_2(\bbZ)$, eigenvalues grow monotonically as a function of trace; the smallest trace magnitude exceeding $2$ is $3$, so we have \[x^2-3x+1=0 \implies \lambda = \frac{3+\sqrt{5}}{2}.\] Now, finding $\alpha=\begin{bsmallmatrix} w & x \\ y & z\end{bsmallmatrix}$ follows immediately from the conditions $w + z = 3, wz-xy=1$: a solution is given by $\alpha = \begin{bsmallmatrix} 2 & 1 \\ 1 & 1 \end{bsmallmatrix}.$ Furthermore, $\alpha$ has two distinct real eigenvalues, so this solution is unique up to conjugacy. For the general case, let \begin{align*} A=\bM{1 & -n \\ 0 & 1}, \qquad B=\bM{1 & 0 \\ n & 1}. \end{align*} We will assume $n \neq 2$; later (Remark \ref{no n equals 2}) we show that $\Lambda_2$ does not arise from the Thurston construction for any genus $g$ or number of punctures $n$. \begin{thm}\label{min dilatation} Fix $n > 2$. The minimal spectral radius in $\Lambda_n$ is given by $\frac{1}{2}(n^2+n\sqrt{n^2-4}-2)$, corresponding to the matrix $\begin{bsmallmatrix} 1-n^2 & -n \\ n & 1 \end{bsmallmatrix}$. \end{thm} Fix $n > 2$.
In $\PSL_2(\bbZ)$, the spectral radius of a matrix $\alpha$ is given by the larger root of the characteristic polynomial \[x^2-\tr(\alpha)x+1=0.\] Explicitly, these roots are \[x = \frac{\tr(\alpha)\pm \sqrt{(\tr(\alpha))^2-4}}{2}.\] We wish to minimize the spectral radius over hyperbolic matrices, so we assume also that $|\tr(\alpha)| > 2$. For a hyperbolic matrix in $\SL_2(\bbZ)$, it is also known that the spectral radius increases monotonically as a function of the magnitude of the trace; it follows that minimizing the spectral radius is equivalent to minimizing the trace magnitude. Here we minimize the latter and then compute the corresponding dilatation. To begin, we show the following, which was observed initially by Chorna, Geller and Shpilrain (Theorem 4(a), \cite{chorna}): \begin{prop}\label{form of lambda} Let $\alpha \in \Lambda_n$, $n > 2$. Then $\alpha$ has the form \[\bM{1+k_1n^2 & k_2n \\ k_3n & 1+k_4n^2} \quad k_i \in \bbZ.\] \end{prop} \begin{proof} For simplicity, we say that a matrix $\gamma$ is \emph{congruent}, denoted \[\gamma \cong \bM{1 \mod n^2 & 0 \mod n \\ 0 \mod n & 1 \mod n^2},\] if $\gamma$ takes on the form \begin{equation}\label{form of gamma} \gamma=\bM{1 + k_1n^2 & k_2n \\ k_3n & 1+k_4n^2}, \quad k_i \in \bbZ. \end{equation} Define $S \subseteq \SL_2(\bbZ)$ as \[S\coloneqq \left\{\gamma \in \SL_2(\bbZ): \gamma \cong \bM{1 \mod n^2 & 0 \mod n \\ 0 \mod n & 1 \mod n^2}\right\}.\] We claim that $S$ is a subgroup of $\SL_2(\bbZ)$. Then, since $A, B \in S$, it would follow that every $\gamma \in \Lambda_n$ takes on the form given by \eqref{form of gamma}. To prove the claim, consider the natural homomorphism $\phi: \SL_2(\bbZ) \rightarrow \SL_2(\bbZ/n^2\bbZ)$ given by reduction modulo $n^2$. Then $S = \phi^{-1}(S')$, where \[S'\coloneqq \left\{\bM{1 & k_1n \\ k_2n & 1}: k_1, k_2 \in \bbZ\right\}.\] We show that $S'$ is a subgroup of $\SL_2(\bbZ/n^2\bbZ)$.
Define $N, M \in \SL_2(\bbZ/n^2\bbZ)$ as \[N = \bM{1 & k_1n \\ k_2n & 1}, \quad M = \bM{1 & k_3n \\ k_4n & 1}.\] Then we have \begin{align*} NM^{-1} &= \bM{1 & k_1n \\ k_2n & 1}\bM{1 & -k_3n \\ -k_4n & 1} \\ & = \bM{1 -k_1k_4n^2 & n(k_1-k_3) \\ n(k_2-k_4) & 1-k_2k_3n^2} \\ & \equiv \bM{1 & n(k_1-k_3) \\ n(k_2-k_4) & 1} \pmod{n^2}, \end{align*} which lies in $S'$. It follows that $S'$ is a subgroup of $\SL_2(\bbZ/n^2\bbZ)$. Then $S = \phi^{-1}(S')$, so $S$ is a subgroup of $\SL_2(\bbZ)$, giving the desired result. \end{proof} \noindent \textit{Proof of Theorem \ref{min dilatation}.} By Proposition \ref{form of lambda} it suffices to minimize the trace over all matrices of the form \begin{equation}\label{general alpha} \alpha = \bM{k_1n^2+1 & k_2n \\ k_3n & k_4n^2+1} \quad \text{ such that } k_i \in \bbZ, (k_1n^2+1)(k_4n^2+1)-k_2k_3n^2=1. \end{equation} Note that we impose the second constraint because $\alpha \in \SL_2(\bbZ)$, so we have the determinant condition \[(k_1n^2+1)(k_4n^2+1)-k_2k_3n^2=1.\] Rearranging the determinant equation gives $k_2k_3=k_1k_4n^2+(k_1+k_4) \in \bbZ$. Thus, given any fixed $k_1, k_4 \in \bbZ$, there exist $k_2, k_3$ such that the matrix $\bsM{1+k_1n^2 & k_2n \\ k_3n & 1+k_4n^2}$ is in $\SL_2(\bbZ)$. For any $\alpha$ given by \eqref{general alpha}, $|\tr(\alpha)|$ is given by \[|2+n^2(k_1+k_4)|,\] which is smallest when $k_1+k_4 = 0$. In this case $\tr(\alpha)=2$. Then $\alpha$ is not hyperbolic, so we disregard it. When $k_1+k_4=-1$, we have $|\tr(\alpha)|=|2-n^2|=n^2-2$. For $k_1+k_4=1$, we have $|\tr(\alpha)| = 2+n^2 > n^2-2$. Finally, for $|k_1+k_4|>1$, we have \[|2+n^2(k_1+k_4)| \in \{|k_1+k_4|n^2-2, |k_1+k_4|n^2+2\},\] which in either case is greater than $n^2-2$. It is left to show that a matrix in $\Lambda_n$ achieves the minimum trace magnitude of $n^2-2$. Choosing $k_1 = -1, k_4 = 0$ gives a matrix of the form $\bsM{1-n^2 & k_2n \\ k_3n & 1}$, and the determinant condition forces $k_2k_3=-1$; taking $k_2=-1$ and $k_3=1$ yields $\bsM{1 -n^2 & -n \\ n & 1}$, which is exactly the product $AB$.
Thus both $AB$ and $BA$ (which are conjugate) in $\Lambda_n$ achieve the minimum dilatation of $\frac{1}{2}(n^2+n\sqrt{n^2-4}-2)$. \qed \newline To prove Theorem \ref{main theorem}, note that for two filling curves $\alpha, \beta$ on $\Sigma_{g,n}$ with $i(\alpha,\beta)=n$, the group $\Lambda_n = \langle \rho(T_\alpha), \rho(T_\beta)\rangle$ achieves its smallest dilatation at $\rho(T_\alpha) \cdot \rho(T_\beta)=AB \in \Lambda_n$. This matrix corresponds to $T_\alpha \cdot T_\beta$ in the associated mapping class group. \section{Construction of Filling Curves}\label{filling section} We exposit work of Aougab-Huang-Taylor \cite{aougab-huang}, \cite{aougab-taylor} and Jeffreys \cite{jeffreys}. For a fixed surface $\Sigma_{g,n}$, our goal is to obtain a lower bound for the intersection number of a pair of filling curves and subsequently construct examples achieving these minima. We use lower bounds given by the filling permutations of Aougab-Huang \cite{aougab-huang} and Aougab-Taylor \cite{aougab-taylor} and the generalized filling permutations of Jeffreys \cite{jeffreys}, which give us an algebraic way to describe ``gluing patterns'' of polygons. The idea is to construct polygons whose sides are identified in such a way that, once glued, they form the surface $\Sigma_{g,n}$ with the glued sides becoming the filling curves $\alpha,\beta$. Each polygon will correspond to a disk in the complement of $\alpha\cup \beta$ on $\Sigma_{g,n}$, so we can retroactively puncture the polygons to form $\Sigma_{g,n}$. Since we will ``place'' the punctures, our convention will be to treat them as marked points and thus exclude them from the Euler characteristic. We begin with a general lower bound for the intersection number on any surface $\Sigma_{g,n}$ from Aougab-Huang (\cite{aougab-huang}, Lemma 2.1). \begin{lem}\label{lower bound} Fix $g \geq 1, n \geq 0$.
If $\alpha, \beta$ fill $\Sigma_{g,n}$, then $i(\alpha,\beta) \geq 2g-1$, where $i$ denotes geometric intersection number. \end{lem} \begin{proof} We model $\alpha,\beta$ as a $4$-valent graph $G$ (where the vertices $v$ are intersection points) since the complement $\Sigma_{g,n} \setminus (\alpha\cup\beta)$ is a union of topological discs $D$. The Euler characteristic of the resulting cell structure must match that of $\Sigma_{g,n}$. We know \[\sum_{v \in G} \deg_v(G) = 2|E| = 4|V| = 4i(\alpha,\beta).\] Then we obtain \[\chi(\Sigma_g) = 2-2g = |D|-2i(\alpha,\beta)+i(\alpha,\beta)\] and since $|D| \geq 1$, we have the result. \end{proof} This bound is only realized in the case when $n=0$. For punctured surfaces, however, we can come very close. To construct an explicit example where equality is realized, we now introduce the notion of \emph{filling permutations} from \cite{aougab-huang} and \cite{jeffreys}. Fix a surface $\Sigma_{g,n}$ and a filling pair $\alpha,\beta$. We will label the subarcs of the curves (segments connecting two intersection points) in the following manner, beginning with the curve $\alpha$. Fix an orientation for $\alpha$ and choose a starting intersection point $x_0\in\alpha\cap\beta$. Travel in the direction of $\alpha$ until we reach an intersection point $x_1\neq x_0$ and label the subarc of $\alpha$ joining $x_0$ to $x_1$ as $\alpha_1$. Continue this process until we arrive back at $x_0$--this will occur since the curve $\alpha$ is closed--labeling the subarcs $\{\alpha_1,\ldots,\alpha_m\}$; note that $m=i(\alpha,\beta)$. Repeat this process with $\beta$ to obtain a labeling $\{\beta_1,\ldots,\beta_m\}$. Now, cutting the surface along $\alpha\cup\beta$, we obtain $m+2-2g$ polygons whose sides correspond to subarcs of $\alpha$ and $\beta$ and whose vertices are intersection points in $\alpha\cap\beta$. Orient these polygons clockwise. Our goal is to describe the polygons algebraically in terms of permutations acting on their edges.
First note that since we cut along $\alpha$ and $\beta$ to obtain these polygons, every subarc $\alpha_k$ of $\alpha$ will have an inverse $\alpha_k^{-1}$ with the opposite orientation; similarly for $\beta$. Define \[ A=\{\alpha_1,\beta_1,\ldots,\alpha_m,\beta_m,\alpha_1^{-1},\beta_1^{-1},\ldots,\alpha_m^{-1},\beta_m^{-1}\} \] and identify $A$ with the set $\{1,\ldots,4m\}$. Label the sides of the polygons with the corresponding elements of $A$. Now, we define the \emph{filling permutation} of a polygon as $\sigma(j)=k$ if, and only if, traveling clockwise around the polygon the edge labeled by the $j$th element of $A$ is followed by the edge labeled by the $k$th element of $A$. The filling permutation of each polygon is a cycle in $S_{4m}$, the symmetric group on $4m$ elements, so since there are $m+2-2g$ polygons we have $m+2-2g$ corresponding cycles; their product is the filling permutation $\sigma=\sigma_{\alpha,\beta}\in S_{4m}$ of the pair. \begin{figure}[H] \centering \includegraphics[width=0.75\linewidth]{figure2.jpeg} \caption{The polygons corresponding to a filling pair on $\Sigma_{2,3}$. The associated filling permutations are, from left to right, $(1,2,19,14),$ $(3,8,15,16,9,17,18,5,10,11,12)$, and $(6,13,20,7)$.} \end{figure} There are two more geometrically significant permutations we are interested in. Take $Q=Q_{\alpha,\beta} \in S_{4m}$ as $Q = (1, 2, \dots, 4m)^{2m}$. We note that $Q$ acts on the edges by reversing their orientation, i.e., it sends $j$ to $k$ if and only if the $j$th and $k$th elements of $A$ are inverses of each other. Finally, define $\tau=\tau_{\alpha,\beta} \in S_{4m}$ as \[ \tau=(1,3,5,\ldots,2m-1)(2,4,6,\ldots,2m)(4m-1,4m-3,\ldots,2m+1)(4m,4m-2,\ldots,2m+2). \] The first cycle represents sending $\alpha_i$ to $\alpha_{i+1}$, the second $\beta_i$ to $\beta_{i+1}$, the third $\alpha^{-1}_k$ to $\alpha^{-1}_{k+1}$ and the fourth $\beta^{-1}_{k}$ to $\beta^{-1}_{k+1}$. In other words, $\tau$ moves each arc in $\alpha$ to the next arc of $\alpha$ with the same orientation, and similarly for $\beta$.
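To make $Q$ and $\tau$ concrete, the following Python sketch (ours, purely illustrative) builds both permutations, as dictionaries on the labels $\{1,\ldots,4m\}$, for a chosen $m=i(\alpha,\beta)$, and checks two facts stated above: $Q$ is a fixed-point-free involution pairing each arc with its oppositely oriented copy, and $\tau$ carries $\alpha$-arcs to $\alpha$-arcs and $\beta$-arcs to $\beta$-arcs, so it preserves the parity of labels.

```python
def make_Q(m):
    """Q = (1,2,...,4m)^{2m}: shift every label by 2m (mod 4m),
    pairing each arc with its oppositely oriented copy."""
    return {j: (j - 1 + 2 * m) % (4 * m) + 1 for j in range(1, 4 * m + 1)}

def make_tau(m):
    """tau as the product of the four cycles defined in the text."""
    perm = {}
    cycles = [list(range(1, 2 * m, 2)),          # arcs of alpha (odd labels)
              list(range(2, 2 * m + 1, 2)),      # arcs of beta (even labels)
              list(range(4 * m - 1, 2 * m, -2)),  # inverse alpha arcs
              list(range(4 * m, 2 * m + 1, -2))]  # inverse beta arcs
    for cyc in cycles:
        # each cycle sends every entry to the next, wrapping around
        for i, j in zip(cyc, cyc[1:] + cyc[:1]):
            perm[i] = j
    return perm

m = 5  # e.g. a filling pair with i(alpha, beta) = 5, as in the figure
Q, tau = make_Q(m), make_tau(m)
# Q reverses orientation, so applying it twice restores every label
# and no label is paired with itself.
assert all(Q[Q[j]] == j and Q[j] != j for j in Q)
# tau never mixes the two curves, so it is parity-respecting.
assert all((tau[j] - j) % 2 == 0 for j in tau)
```

With such dictionaries in hand, the relation $\sigma Q \sigma = \tau$ of the lemma below can be tested mechanically for any candidate permutation $\sigma$.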
We will say that a permutation is \emph{parity-respecting} if it sends even numbers to even numbers and odd numbers to odd numbers and \emph{parity-reversing} if it sends even numbers to odd numbers and odd numbers to even numbers. The following lemma from Jeffreys (\cite{jeffreys}, Lemma 2.3) gives the conditions necessary to define a filling permutation on a surface $\Sigma_{g,n}$; we will subsequently construct the filling curves by finding a permutation that satisfies these hypotheses. \begin{lem}\label{construction} Let $\alpha,\beta$ be a filling pair on $\Sigma_{g,n}$ with $i(\alpha,\beta)=m\ge i_{g,n}$, the minimal intersection number of a filling pair on $\Sigma_{g,n}$. Then, $\sigma=\sigma_{\alpha,\beta}$ satisfies $\sigma Q \sigma=\tau$. Conversely, a parity-reversing permutation $\sigma\in S_{4m}$ consisting of $m+2-2g$ cycles and no more than $n$ 2-cycles that satisfies the above relation defines a filling pair on $\Sigma_{g,n}$ with intersection number $m$. \end{lem} Now we have the necessary ingredients to compute the minimal realized number of intersection points on $\Sigma_{g,n}$; we closely follow the argument of \cite{aougab-taylor}, Lemma 3.1. \begin{prop} \label{intersection numbers} Suppose $g \neq 0,2$. If $n = 0$ and $\alpha,\beta$ are minimally intersecting filling curves on $\Sigma_{g,n}$, then \[i(\alpha,\beta)=2g-1.\] If $n \ge 1$, then \[i(\alpha, \beta)=2g+n-2.\] \end{prop} \begin{proof} Using the same argument as in Lemma \ref{lower bound}, we have that $i(\alpha,\beta)=2g+n-2+|D|$, where $|D|$ is the number of unpunctured topological disks in the complement of $\alpha \cup \beta$ in $\Sigma_{g,n}$ and each of the $n$ punctures lies in its own complementary disk. Thus we have the lower bounds and it is left to show that these bounds are realized. The first case is given explicitly by Lemma \ref{construction}; for the second, we induct on $n$. When $n = 1$, we have $2g-1=2g+n-2$.
Thus the filling curves given in Lemma \ref{construction}, which have a single disk $D$ in their complement, still fill $\Sigma_{g,1}$, obtained by puncturing $D$ once. To begin constructing the filling pairs for surfaces with $n \geq 1$ punctures, we give an example for when $g = 1$; there is a formula for the intersection number of curves on the torus (\cite{primer}, Section 1.2.3). Namely, if $\alpha$ is a $(p,q)$-curve and $\beta$ is an $(r,s)$-curve, then \[i(\alpha,\beta)=|ps-qr|.\] Taking $\alpha$ to be an $(n,1)$-curve and $\beta$ to be a $(0,1)$-curve gives two curves intersecting exactly $n$ times. The complement of these two curves is a union of $n$ topological disks, and puncturing each realizes $i(\alpha,\beta)=2g+n-2=n$ on $\Sigma_{1,n}$. Now, we describe the \textit{double bigon method,} which begins with a filling pair $\alpha,\beta$ on $\Sigma_{g,n}$ and constructs a filling pair on $\Sigma_{g,n+2}$ with intersection number $i(\alpha,\beta)+2$. As before, let $\alpha,\beta$ be a filling pair on $\Sigma_{g,n}$, and orient and label them into subarcs $\alpha_1, \dots, \alpha_{i(\alpha,\beta)}$ and $\beta_1, \dots, \beta_{i(\alpha,\beta)}$. Suppose $i(\alpha_1,\beta_{i(\alpha,\beta)}) \neq 0$. Then pushing $\alpha_1$ across $\beta_{i(\alpha,\beta)}$ and back over forms $2$ bigons. Puncturing each of these bigons gives the same pair of filling curves on $\Sigma_{g,n+2}$ with intersection number $i(\alpha,\beta)+2$. See Figure \ref{double bigon} for reference. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{figure3.jpeg} \caption{The ``double bigon method.'' Given a pair of filling curves $\alpha,\beta$ on a surface $\Sigma_{g,n}$ with intersection number $i(\alpha,\beta)$, the same pair fills $\Sigma_{g,n+2}$ with intersection number $i(\alpha,\beta)+2$.} \label{double bigon} \end{figure} Suppose $n = 2k+1$ is odd and $g > 2$. Take a pair $\alpha,\beta$ which fill $\Sigma_{g,0}$, whose complement is connected, i.e.
is a single topological disk, and such that $i(\alpha,\beta)=2g-1$ (we know such an $\alpha,\beta$ exist by Lemma \ref{construction}). Then, puncturing this disk gives that $\alpha, \beta$ fill $\Sigma_{g,1}$. For the remaining $2k$ punctures, perform the double bigon method $k$ times; each application increases $i(\alpha,\beta)$ by $2$ and results in $\alpha,\beta$ filling $\Sigma_{g,2k+1}$ with intersection number \[i(\alpha,\beta)+2k=(2g-1)+2k=2g-2+n.\] For $n=2k$ even, the same argument generalizes if there exists a filling pair $\alpha,\beta$ on $\Sigma_{g,0}$ intersecting $2g$ times; we refer the reader to \cite{aougab-taylor}, Lemma 3.1, for the construction of such a pair. \end{proof} A similar application of the double bigon method gives minimal intersection numbers for $\Sigma_{g,n}$ for $g=0, 2$ (see \cite{aougab-taylor}, Lemma 3.1 and \cite{jeffreys}, Theorem 3.3). We summarize the results as follows: \begin{center} \def\arraystretch{1.5} \begin{tabular}{|c|c|c|} \hline Genus & Punctures & $i(\alpha,\beta)$ \\ \hline\hline $g = 0$ & $n \geq 4$ even & $n-2$ \\ \hline $g = 0$ & $n \geq 5$ odd & $n-1$ \\ \hline $g = 2$ & $n \leq 2$ & $4$ \\ \hline $g = 2$ & $n > 2$ & $2g+n-2$ \\ \hline \end{tabular} \end{center} \begin{rem}\label{no n less than 4} The case $g=0$, $n<4$ is not considered because the filling curves have intersection number zero: if there are two or fewer punctures then a single curve fills, and if there are exactly three punctures then the filling pair does not intersect. \end{rem} \noindent\textit{Proof of Corollary \ref{main corollary}.} The proof follows immediately from plugging the values of Proposition \ref{intersection numbers} into the matrices in Theorem \ref{min dilatation} and applying Thurston's construction. Fix a surface $\Sigma_{g,n}$, $g\neq 0,2$, and let $\{\alpha,\beta\}$ be a minimally intersecting filling pair, so that $i(\alpha,\beta)=i_{g,n}$.
Letting $A=\bsM{1 & -i_{g,n} \\ 0 & 1}$ and $B=\bsM{1 & 0 \\ i_{g,n} & 1}$, by Thurston's Construction (Theorem \ref{thurston construction}) the Thurston pA maps in $\Gamma_{\alpha,\beta}\subset\mcg(\Sigma_{g,n})$--the subgroup of the mapping class group generated by Dehn twists $T_\alpha,T_\beta$ about the curves $\alpha$ and $\beta$--correspond to the hyperbolic elements of $\Lambda_{i(\alpha,\beta)}=\langle A,B\rangle$. Moreover, the spectral radii of elements of $\Lambda_{i(\alpha,\beta)}$ correspond to the dilatations of the associated pA maps. By Theorem \ref{min dilatation}, the minimal spectral radius over hyperbolic elements of $\Lambda_{i(\alpha,\beta)}$ (and thus the minimal dilatation in $\Gamma_{\alpha,\beta}$) is given by \[ \frac{1}{2}\left(i(\alpha,\beta)^2+i(\alpha,\beta)\sqrt{i(\alpha,\beta)^2-4}-2\right), \] achieved by the hyperbolic matrix \[ AB=\bM{1 & -i(\alpha,\beta) \\ 0 & 1}\bM{1 & 0 \\ i(\alpha,\beta) & 1} =\bM{1-i(\alpha,\beta)^2 & -i(\alpha,\beta) \\ i(\alpha,\beta) & 1}. \] By Thurston's Construction, $AB$ represents the pA map $T_\alpha T_\beta$, the product of Dehn twists about $\alpha$ and $\beta$. The specific values for the dilatation in Corollary \ref{main corollary} are obtained by substituting the corresponding values of $i_{g,n}$ from Proposition \ref{intersection numbers} for $i(\alpha,\beta)$. \qed\newline \begin{rem}\label{no n equals 2} We note that an intersection number of $2$ is never realized by a filling pair on any $\Sigma_{g,n}$, justifying the exclusion of this value in Proposition \ref{intersection numbers}. \end{rem} \section{Future Directions} Throughout this paper we exclusively explored the case where $A$ and $B$ are single curves $\alpha$ and $\beta$, respectively, but the problem of finding the minimal dilatation Thurston pA map extends to the general case of \emph{multicurves} $A=\{\alpha_1,\ldots,\alpha_k\},~ B=\{\beta_1,\ldots,\beta_{\ell}\}$ on $\Sigma_{g,n}$ (i.e. disjoint collections of simple closed curves).
The \emph{multitwists} about $A$ and $B$ are the products $T_A=\prod_{i=1}^{k} T_{\alpha_i}$ and $T_B=\prod_{j=1}^{\ell} T_{\beta_j}$, respectively. We recall that the Thurston construction generalizes to multicurve systems which fill $\Sigma_{g,n}$ to obtain a representation $\rho: \Gamma_{A,B} \rightarrow \PSL_2(\bbR)$ given by \[T_A \mapsto \bM{1 & -\mu^{1/2} \\ 0 & 1} \quad T_B \mapsto \bM{1 & 0 \\ \mu^{1/2} & 1},\] where $\mu$ is the square of the largest singular value of the $k\times\ell$ intersection matrix $N$ whose $(s,t)$ entry is given by \[ N_{s,t}=i(\alpha_s,\beta_t), \] i.e., $\mu$ is the \emph{Perron-Frobenius eigenvalue} of $N^{\mathsf{T}}N$ (note that we must work with this matrix instead of $N$ since the latter is not necessarily square). We refer the reader to \cite{primer}, Section 14.1.2 for some background on Perron-Frobenius theory. Leininger \cite{Leininger_2004} derived several useful facts regarding minimal pseudo-Anosov dilatation elements in groups generated by multitwists. However, as we noted after stating Corollary \ref{main corollary}, twists about two filling curves become less representative of the entire mapping class group with increasing genus. Thus another question to ask is whether $\lambda(\Gamma_{A,B})$ also increases monotonically with genus, particularly when $A$ and $B$ each consist of $g$ curves; that is, are twists about $2g$ filling curves more characteristic of $\Sigma_{g,n}$, particularly for large $g$? \bibliographystyle{plain} \bibliography{refs} \end{document}
2412.16331v1
http://arxiv.org/abs/2412.16331v1
Efficient points in a sum of sets of alternatives
\documentclass[pdflatex,sn-apa,11pt]{sn-jnl}\linespread{1.25} \usepackage[shortlabels]{enumitem} \usepackage{graphicx}\usepackage{multirow}\usepackage{amsmath,amssymb,amsfonts}\usepackage{amsthm}\usepackage{mathrsfs}\usepackage[title]{appendix}\usepackage{xcolor}\usepackage{textcomp}\usepackage{manyfoot}\usepackage{booktabs}\usepackage{algorithm}\usepackage{algorithmicx}\usepackage{algpseudocode}\usepackage{listings} \newtheorem{theorem}{Theorem}\newtheorem*{theoremfive}{Theorem 7} \newtheorem*{theoremsix}{Theorem 8} \newtheorem{proposition}{Proposition} \newtheorem{example}{Example}\newtheorem{remark}{Remark}\newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \raggedbottom \begin{document} \title[Efficiency in a sum of sets]{Efficient points in a sum of sets of alternatives} \author[]{\fnm{Anas} \sur{Mifrani}\footnote{Toulouse Mathematics Institute, University of Toulouse, F-31062 Toulouse Cedex 9, France. Email address: [email protected].}} \abstract{ The concept of efficiency plays a prominent role in the formal solution of decision problems that involve incomparable alternatives. This paper develops necessary and sufficient conditions for the efficient points in a sum of sets of alternatives to be identical to the efficient points in one of the summands. Some of the conditions cover both finite and infinite sets; others are shown to hold only for finite sets. A theorem with useful implications for multiple objective optimization is obtained as a corollary of our findings. Examples are provided that illustrate these results. } \keywords{Pareto optimal point, Sum set, Multi-criteria decision making, Multi-objective optimization.} \maketitle \section{Introduction}\label{sec1} Let $G$ denote a set of alternatives and $R$ denote a relation for comparing pairs of alternatives. For every $x$ and $y$ in $G$, $xRy$ shall have the interpretation that $x$ is at least as good as $y$. 
Where $x$ is better than $y$, we shall write $xPy$, indicating that $xRy$ but that $x \neq y$. If neither $xRy$ nor $yRx$ obtains for some $x, y \in G$, we say $x$ and $y$ are incomparable, and write $xIy$ or $yIx$. We shall concern ourselves with the set equality \begin{equation}\tag{E}\label{E}\mathscr{E}(A + B) = \mathscr{E}(A),\end{equation} where $A$ and $B$ are nonempty subsets of $G$, $A + B = \{a + b: a \in A, b \in B\}$, and $\mathscr{E}(S)$ is the set of all \textit{efficient} points in a set $S \subseteq G$. We say a point $g \in G$ is efficient in $S$ if $g \in S$ and no $g' \in S$ exists such that $g'Pg$; equivalently, $g$ is efficient in $S$ if $g \in S$ and $g'Rg$ implies $g' = g$ for all $g' \in S$. The concept of efficiency plays a prominent role in economics, game theory, statistical decision theory, as well as in the formal analysis and solution of multiple objective optimization problems \citep{GEOFFRION1968618, benson1978existence}. A special case of this equality appears in \cite{yu2013multiple}. When $A$ is an arbitrary subset of $\mathbb{R}^{q}$ and $B$ is the non-positive orthant of $\mathbb{R}^{q}$ in the sense of the usual, ``larger-is-better'', product order, which is given by $xRy \iff x_i \geqq y_i$ for all $i = 1, ..., q$ and $x, y \in \mathbb{R}^{q}$, equality (\ref{E}) holds \citep[Theorem 3.2]{yu2013multiple}. Stated differently, if $A$ is shifted by non-positive amounts including zero, then the components of a point in $A$ can only deteriorate or remain intact, so that if the point is maximal (efficient) in $A$ it is also maximal in the shifted set, and vice-versa (Yu's proof of this important fact is partly incorrect; we shall return to this in Section \ref{sec3}). Perhaps the most readily apparent applications of equality (\ref{E}) are in the area of multiple objective optimization.
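The special case just described can be exercised on a small finite instance. The Python sketch below (ours, purely illustrative) computes efficient points by brute force under the componentwise ``larger-is-better'' order; $B$ is taken to be a finite sample of the non-positive orthant that contains the zero shift, and equality (E) is then checked directly.

```python
def dominates(x, y):
    """x P y in the componentwise 'larger-is-better' order on R^q."""
    return all(xi >= yi for xi, yi in zip(x, y)) and x != y

def efficient(S):
    """Efficient (maximal) points of a finite set S of tuples."""
    return {s for s in S if not any(dominates(t, s) for t in S)}

# A toy instance of the special case above: B is a finite sample of the
# non-positive orthant of R^2, containing the zero shift.
A = {(3, 1), (1, 3), (2, 2), (0, 0), (1, 1)}
B = {(0, 0), (-1, 0), (0, -2), (-1, -1)}
sum_set = {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}
assert efficient(sum_set) == efficient(A)  # equality (E)
```

Of course, a finite sample cannot substitute for the full orthant of Yu's theorem; the check merely illustrates the mechanism by which non-positive shifts leave the efficient set untouched.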
Let $f(x) = (f_1(x), ..., f_q(x))$ be a criterion function defined on a space $X$ of feasible decisions, where the underlying $f_i: X \rightarrow \mathbb{R}$, $i = 1, ..., q$, are functions to be maximized simultaneously over $X$. Take $R$ to be the relation defined in the last paragraph, and consider the case, often arising in practice, where no single decision $x \in X$ maximizes the $f_i$ all at once -- in other words, the $f_i$ are in conflict. The concept of efficiency has provided a viable solution concept for this type of problem \citep{soland1979multicriteria}. A decision $x^{*} \in X$ is called an efficient solution of the multiple objective optimization problem if it achieves an outcome $f(x^{*})$ efficient in the outcome space $f(X)$. Equality (\ref{E}) can be a useful tool when investigating properties of efficient solutions. For example, in \citep[p. 27]{yu2013multiple}, the author uses the equality to establish that if $f(X)$ is a $\Lambda^{\leqq}$-\textit{convex} set, any efficient solution is an optimal solution to some ordinary optimization problem (see \citep[p. 25]{yu2013multiple} for a definition of $\Lambda^{\leqq}$-convex sets). In the same context, one could also conceive of the following application. Suppose that $f(X)$ could be expressed as a sum set, as $f(X) = A + B$, say, with $A$ and $B$ satisfying such properties as those developed in this paper, and with $A$ having a simpler structure, from the point of view of the determination of efficient points, than $f(X)$. Then, because $\mathscr{E}(f(X)) = \mathscr{E}(A)$, the computational demands of finding all efficient solutions may be mitigated by confining oneself to $A$. Efficiency in sum sets has been the subject of a handful of publications (see, e.g., \cite{moskowitz1975recursion} and \cite{white1980generalized}), but, so far as we can detect, this paper carries the first general presentation and treatment of equality (\ref{E}).
The assumptions of this work are described in Section \ref{sec2}. In Section \ref{sec3}, an examination of the validity of (\ref{E}) under various conditions on $A$, $B$ and $R$ is undertaken. In the course of this examination, it will transpire, for example, that if $R$ is \textit{isotone} (a property to be defined), and $\mathscr{E}(A)$ is nonempty, and $B$ contains an element which dominates a certain reference point in $G$, then (\ref{E}) fails (Theorem \ref{thm3}). The remaining conditions are similarly straightforward and, in many practical instances, not difficult to verify. We adduce examples to illustrate this, as well as to illustrate the fact that some of these conditions fall short of being both necessary and sufficient. Our analysis will make it clear that, depending on whether the sets considered are finite or infinite, widely discrepant conclusions may follow as to the validity of certain theorems. We summarize the paper and collect some final remarks on the subject in Section \ref{sec4}. \section{Notation and assumptions}\label{sec2} In the subsequent development, we assume $(G, +)$ to be a group of which $A, B$ are nonempty subsets. We do not require $G$ to be abelian, nor that either subset be a subgroup. We let $0_{G}$ represent the identity of $G$. For any integer $p \geqq 1$ and any $g \in G$, we let $pg$ signify the sum of $p$ copies of $g$ in $G$. When $p = 0$ we set $pg = 0_{G}$. For every $g \in G$, $-g$ shall denote the group inverse of $g$. Efficiency in any subset of $G$ is taken with respect to $R$, and we adopt the convention that $\mathscr{E}(\emptyset) = \emptyset$. Borrowing slightly from the parlance of \cite{MacLane1999-zu}, we will say that $R$ is isotone if comparisons with respect to it are preserved under addition, that is, if $gRg'$ implies $(z + g)R(z + g')$ whenever $g, g', z \in G$.
Depending on the particular result, we may assume $R$ to satisfy some or all of the following properties: \begin{itemize} \item[(P1)] $R$ is transitive: for every $x, y, z \in G$, if $xRy$ and $yRz$, then $xRz$; \item[(P2)] $R$ is antisymmetric: for every $x, y \in G$, if $xRy$ and $yRx$, then $x = y$; \item[(P3)] $R$ is isotone; \item[(P4)] for all $p \geqq 1$ and all $b \in B$, $(pb)P0_{G}$ implies $bP0_{G}$; \item[(P5)] for all $p \geqq 1$ and all $b \in B$, $0_{G}P(pb)$ implies $0_{G}Pb$. \end{itemize} The first two properties are standard axioms of modern utility theory \citep{white1972uncertain, luce1956semiorders}. There has been some debate over the extent to which they accurately capture the preference pattern of a rational decision maker\footnote{Almost any axiom of utility theory has come under attack for this reason; see, for example, \cite{luce1956semiorders} and the references therein. It should be noted, however, that even at the skeptical end of the controversy, commentators like \cite{luce1956semiorders} acknowledge the necessity of some of these properties for the development of utility theory-based disciplines. Regarding transitivity, for example, Luce observes that it is ``necessary if one is to have [a] numerical order preserving utility function'', and that such functions ``seem indispensable for theories -- such as game theory -- which rest on preference orderings''. He goes on to conclude that ``we too shall take the attitude that, at least for a normative theory, the preference relation should be transitive''.}, but this should not be of concern to us. The adequacy of a property of $R$ for approximating real-world behavior is a question too remote from the purposes of this paper. Given our wish to explore in as general a setting as possible what in our view is an unexplored theoretical problem, the intuitive grounds on which these properties rest, which are clear enough, provide a compelling justification for considering them.
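For the product order on $\mathbb{Z}^{2}$, our running concrete case, properties (P1)--(P5) can at least be spot-checked by brute force over a finite sample of points. The following Python fragment does so (a sanity check on examples, of course, not a proof):

```python
from itertools import product

pts = list(product(range(-2, 3), repeat=2))  # finite sample of points of Z^2
zero = (0, 0)

def r(x, y):  # product order: xRy iff x_i >= y_i for all i
    return all(a >= b for a, b in zip(x, y))

def p(x, y):  # strict domination: xPy iff xRy and x != y
    return r(x, y) and x != y

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def mult(k, b):  # k*b = b + ... + b (k copies)
    return tuple(k * bi for bi in b)

# (P1) transitivity, (P2) antisymmetry, (P3) isotonicity.
assert all(r(x, z) for x in pts for y in pts for z in pts if r(x, y) and r(y, z))
assert all(x == y for x in pts for y in pts if r(x, y) and r(y, x))
assert all(r(add(z, g), add(z, h)) for g in pts for h in pts for z in pts if r(g, h))
# (P4) (pb)P0 implies bP0, and (P5) 0P(pb) implies 0Pb, for small p.
assert all(p(b, zero) for b in pts for k in range(1, 5) if p(mult(k, b), zero))
assert all(p(zero, b) for b in pts for k in range(1, 5) if p(zero, mult(k, b)))
print("properties (P1)-(P5) hold on the sample")
```

The sample and the range of multiples $p$ are arbitrary; enlarging either does not change the outcome for this particular relation.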
Far more germane to our objectives, if one is to heed the dictum of the Scholastics when searching for hypotheses, \textit{entia non sunt multiplicanda praeter necessitatem}, is the question whether the results contained in this paper \textit{necessitate} these properties. We shall not pursue this question here, but it is one we shall endeavor to probe in future investigations. The intuitive basis for property (P3) is also clear. A relation satisfying (P1) and (P3) has been termed additive \citep{white1972uncertain}. The last two properties are true of the standard product order in $\mathbb{R}^{q}$, and are therefore standard assumptions in multiple objective optimization. Some of the proofs given in this paper make no use whatsoever of these properties. Which properties, if any, are used in a proof is indicated in the statement of the relevant proposition. Let us now consider our results. \section{Conditions for the validity of equality (\ref{E})}\label{sec3} The results developed in this section bear on the domain of validity of equality (\ref{E}). We begin with two theorems that each contain a set of conditions sufficient for the equality to hold. Theorem \ref{thm0} provides a test of validity that merely involves an inspection of the efficient subset of $A$. \begin{theorem} \label{thm0} Under (P3), $\mathscr{E}(A) = \emptyset$ implies that (\ref{E}) holds. \end{theorem} \begin{proof} Let us assume that $\mathscr{E}(A) = \emptyset$, and suppose, for the sake of contradiction, that $\mathscr{E}(A+B) \neq \emptyset$. We may therefore select $z = a + b \in A+B$, $a \in A$, $b \in B$, such that $z \in \mathscr{E}(A+B)$. Since $A$ has no efficient points, $a \notin \mathscr{E}(A)$, and there exists $a' \in A$ such that $a'Pa$. By (P3), this implies that $(a'+b)Pz$, a contradiction of the fact that $z \in \mathscr{E}(A+B)$. Consequently, $\mathscr{E}(A+B) = \emptyset$.
\end{proof} \begin{theorem} \label{thm1} Under (P1)-(P3), if $0_{G} \in B$ and $0_{G} R b$ whenever $b \in B$, then (\ref{E}) holds. \end{theorem} \begin{proof} If $\mathscr{E}(A) = \emptyset$, then we are done as per Theorem \ref{thm0}. Suppose in what follows that $\mathscr{E}(A) \neq \emptyset$. The proof is in two parts. (1) Let $w^{\circ} \in \mathscr{E}(A)$. Observe that $w^{\circ} = w^{\circ} + 0_{G} \in A + B$, since $w^{\circ} \in A$ and $0_{G} \in B$. Suppose that, for some $w = a + b \in A + B$, $wRw^{\circ}$. Since $0_{G}Rb$, (P3) gives $(a + 0_{G})R(a + b)$, that is, $aRw$; by (P1), then, $aRw^{\circ}$. Because $a \in A$ and $w^{\circ} \in \mathscr{E}(A)$, this means that $a = w^{\circ}$, whence $w^{\circ}Rw$, and therefore, by (P2), $w = w^{\circ}$. We have thus demonstrated that $w^{\circ} \in A+B$ and that, for all $w \in A+B$, $wRw^{\circ}$ implies $w = w^{\circ}$. Consequently $w^{\circ} \in \mathscr{E}(A+B)$, and $\mathscr{E}(A) \subseteq \mathscr{E}(A+B)$. (2) If $\mathscr{E}(A+B) = \emptyset$, then $\mathscr{E}(A+B) \subseteq \mathscr{E}(A)$. Otherwise, let $w^{\circ} \in \mathscr{E}(A+B)$ and assume, contrary to the theorem, that $w^{\circ} \notin \mathscr{E}(A)$. Two cases must be considered: $w^{\circ} \in A$, and $w^{\circ} \notin A$. We may write $w^{\circ} = a^{\circ} + b^{\circ}$ for some $a^{\circ} \in A$ and $b^{\circ} \in B$. If $w^{\circ} \in A$, then, in view of the fact that $w^{\circ} \notin \mathscr{E}(A)$, there exists $a \in A$ such that $aRw^{\circ}$ and $a \neq w^{\circ}$. However, $a = a + 0_{G}$ being a point in $A + B$, this contradicts the efficiency of $w^{\circ}$ in $A+B$. If $w^{\circ} \notin A$, then $b^{\circ} \neq 0_{G}$, for $b^{\circ} = 0_{G}$ would clearly imply $w^{\circ} \in A$. Now, since $0_{G} R b^{\circ}$, it follows from (P3) that $a^{\circ} R w^{\circ}$. Moreover, we have that $a^{\circ} \neq w^{\circ}$. To see why, suppose the opposite were true.
Then, recalling that $G$ is a group and that $a^{\circ}$ admits an inverse element $-a^{\circ}$, we obtain $0_{G} = -a^{\circ} + a^{\circ} = -a^{\circ} + (a^{\circ} + b^{\circ}) = (-a^{\circ} + a^{\circ}) + b^{\circ} = b^{\circ}$, a contradiction. Therefore, $a^{\circ}Rw^{\circ}$ with $a^{\circ} \neq w^{\circ}$. But this patently contradicts the fact that $w^{\circ} \in \mathscr{E}(A+B)$, as $a^{\circ} = a^{\circ} + 0_{G} \in A+B$. It follows from the previous two paragraphs that $w^{\circ} \in \mathscr{E}(A)$, and therefore that $\mathscr{E}(A+B) \subseteq \mathscr{E}(A)$. Combining parts (1) and (2) of this argument yields the requisite equality. \end{proof} \begin{remark} \label{rem_generalized_equality} The following extension of Theorem \ref{thm1} can be established fairly easily using induction. Under our hypotheses on $R$, if $B_1, ..., B_n$ are $n$ nonempty subsets of $G$ such that $0_{G} \in B_i$ and $0_{G}Rb_i$ whenever $b_i \in B_i$, for each $i = 1, ..., n$, then \begin{equation*}\mathscr{E}\biggl(A + \sum_{i = 1}^{n}B_i\biggr) = \mathscr{E}(A),\end{equation*} where $\sum_{i = 1}^{n}B_i = \{b_1 + ... + b_n: b_1 \in B_1, ..., b_n \in B_n\}$. \end{remark} Po-Lung Yu, in his extensive review of the mathematical theory of multiple criteria decision making, asserts a special case of equality (\ref{E}) where $G = \mathbb{R}^{q}$, $R$ is the product order defined in Section \ref{sec1}, $A \subseteq \mathbb{R}^{q}$ and $B = \{d \in \mathbb{R}^{q}: 0Rd\}$ \citep[p. 22]{yu2013multiple}. It can be seen from the choice of sets and relation that Yu's result is in fact a direct corollary of Theorem \ref{thm1}. However, part of his proof is in error. He proceeds in roughly the same fashion as the above. To demonstrate that $\mathscr{E}(A+B) \subseteq \mathscr{E}(A)$, he assumes by way of contradiction that a point $y^{\circ} \in \mathscr{E}(A + B)$ exists which lies outside $\mathscr{E}(A)$.
From this he infers that $y^{\circ}$ must be dominated in $A$ -- that there must exist $y \in A$ such that $yPy^{\circ}$. But this inference is simply unwarranted, for if $y^{\circ} \notin \mathscr{E}(A)$, and all we know is that $y^{\circ} \in \mathscr{E}(A + B)$, then either $y^{\circ} \in A$ and $y^{\circ}$ is indeed dominated in $A$, or $y^{\circ} \notin A$ and the status of $y^{\circ}$ vis-à-vis the elements of $A$ is unclear. Yu overlooks the latter possibility, but nevertheless reaches the correct conclusion. The conditions of Theorem \ref{thm1} are sufficient but not necessary, as highlighted by the next examples. Example \ref{ex1} deals with a case where $A+B=A$, whereas in Example \ref{ex2}, $A+B \neq A$. \begin{example} \label{ex1} Let $G = \mathbb{Z}^{2}$, $A = \{(-n, n): n \in \mathbb{Z}\}$ and $B = \{(-1, 1)\}$. The relation $R$ is given by $xRy \iff x_1 \geqq y_1$ and $x_2 \geqq y_2$, for all $x, y \in \mathbb{Z}^{2}$. Properties (P1)-(P3) are satisfied. Clearly, $(0, 0) \notin B$ and $(0, 0)I(-1, 1)$, yet $A + B = A$ and so equality (\ref{E}) is valid. \end{example} \begin{example} \label{ex2} Let $G = \mathbb{R}^{2}$, $A = \{(x, y): y = -2x\}$ and $B = \{(-1, 2), (-1, 1)\}$. The relation $R$ is taken to be the standard product order on $\mathbb{R}^{2}$ (see the second paragraph of Section \ref{sec1}). As in Example \ref{ex1}, properties (P1)-(P3) are satisfied, and neither condition in Theorem \ref{thm1} obtains. Furthermore, the sum set $A+B$ can easily be shown to be given by \begin{equation*}A + B = A \cup \{(x-1, y+1): (x, y) \in A\}.\end{equation*} Considering that the sets $A$ and $\{(x-1, y+1): (x, y) \in A\}$ have no points in common (this is readily verifiable), equality (\ref{E}) will follow immediately from the fact, which we shall now elucidate, that for each point $z$ in $\{(x-1, y+1): (x, y) \in A\}$ there is a corresponding point $z'$ in $A$ such that $z'Pz$. 
Indeed, if $(x, y) \in A$, then $(x-1, -2x+2)P(x-1, y+1)$, with $(x-1, -2x+2) = (x-1, -2(x-1)) \in A$. Consequently, no point in $\{(x-1, y+1): (x, y) \in A\}$ can be efficient in $A + B$, hence \begin{equation*}\mathscr{E}(A+B) = \mathscr{E}\biggl(A \cup \{(x-1, y+1): (x, y) \in A\}\biggr) = \mathscr{E}(A).\end{equation*} \end{example} The next two theorems indicate situations when equality (\ref{E}) is violated. \textit{In enunciating these theorems we assume $A+B \neq A$} so as to avoid trivialities. Obviously, when $A+B = A$, the equality holds irrespective of any conditions imposed on $A$ or $B$. \begin{theorem} \label{thm2} Assume that $\mathscr{E}(A + B)$ is nonempty. Under (P3), if $0_{G}Pb$ for all $b \in B$, then (\ref{E}) fails. \end{theorem} \begin{proof} Let us suppose that $0_{G}Pb$ for all $b \in B$. Let $z = a + b \in A+B$ be an efficient point in $A+B$. The only case of interest is when $z \in A$, because $z \notin A$ implies by definition that $z$ is inefficient in $A$. If $z \in A$, then, since $0_{G}Pb$, (P3) gives $aRz$, and $a \neq z$ because $b \neq 0_{G}$; hence $aPz$, and we conclude that $z \notin \mathscr{E}(A)$. As a result, $\mathscr{E}(A + B) \neq \mathscr{E}(A)$. \end{proof} \begin{theorem} \label{thm3} Assume that $\mathscr{E}(A)$ is nonempty. Under (P3), if there exists a point $b^{\circ} \in B$ such that $b^{\circ}P0_{G}$, then (\ref{E}) fails. \end{theorem} \begin{proof} Suppose that such a point $b^{\circ}$ exists. We have assumed that $\mathscr{E}(A) \neq \emptyset$. Accordingly, let $a \in A$ be an efficient element in $A$. For the reason explained in the last proof, we need only examine the case where $a \in A+B$. If $a \in A+B$, then, because $(a+b^{\circ})Pa$ and $a+b^{\circ} \in A+B$, $a$ is inefficient in $A+B$. Thus, $\mathscr{E}(A + B) \neq \mathscr{E}(A)$. \end{proof} Theorems \ref{thm2} and \ref{thm3} can usefully be rephrased as follows. If $\mathscr{E}(A+B)$ is nonempty, then (\ref{E}) is true only if there exists a $b^{\circ} \in B$ such that $0_{G}Pb^{\circ}$ fails to hold.
Moreover, if $\mathscr{E}(A)$ is nonempty, then (\ref{E}) holds only if no $b \in B$ satisfies $bP0_{G}$. \begin{lemma} \label{lem1} Under (P3), if $\mathscr{E}(A) = A$ and $B$ is a singleton $\{b\}$, $b \in G$, then $\mathscr{E}(A+B) = A + B$. \end{lemma} \begin{proof} Let $z, z' \in A+B$ such that $zRz'$, $z = a + b$ and $z' = a' + b$, where $a, a' \in A$. Replicating the calculations involving inverse elements in the proof of Theorem \ref{thm1} yields the conclusion that $aRa'$. We have assumed that every point in $A$ is efficient in $A$. Therefore, $a = a'$ and $z = z'$. To summarize, we have proven that whenever $zRz'$ for some $z, z' \in A+B$, $z = z'$: this means that $A+B \subseteq \mathscr{E}(A+B)$. Ergo, $\mathscr{E}(A+B) = A+B$. \end{proof} \begin{lemma} \label{lem2} If $A$ is finite and $B$ is a singleton $\{b\}$, $b \in G$, then $A+B = A$ if and only if $b = 0_{G}$. \end{lemma} \begin{proof} Write $A = \{a^1, ..., a^n\}$ for $n \geqq 1$ distinct points $a^1, ..., a^n$ in $G$. The superscript $i$ in $a^i$ is not to be confused with a power. The \textit{if} portion of the statement is self-evident. To prove the \textit{only if} portion, we assume that $A+B = A$. If $n = 1$, then $a^{1} + b = a^{1}$, whence $b = -a^{1} + a^{1} = 0_G$. Suppose, for the remainder of the proof, that $n \geqq 2$. Then, for every index $i = 1, ..., n$ there exists an index $j = 1, ..., n$ such that $a^{i}+b = a^{j}$; this index shall be denoted by $j_i$ to underscore the dependency on $i$. Thus, we have that \begin{equation*} \left\{ \begin{aligned} & b = (-a^1) + a^{j_1}, \\ & b = (-a^2) + a^{j_2}, \\ & \dots \\ & b = (-a^n) + a^{j_n}. \end{aligned} \right. \end{equation*} Suppose, contrary to the lemma, that $b \neq 0_{G}$. Then, for every index $i$, $j_i \neq i$. We claim that this last fact nevertheless forces $b = 0_{G}$, contradicting our supposition.
In general, if $g^1, ..., g^r$ are $r \geqq 2$ points in $G$ such that \begin{equation*} \left\{ \begin{aligned} & b = (-g^1) + g^{j_1}, \\ & b = (-g^2) + g^{j_2}, \\ & \dots \\ & b = (-g^r) + g^{j_r}, \end{aligned} \right. \end{equation*} for some $r$ indices $j_1, ..., j_r$ satisfying $j_i \neq i$ for each $i = 1, ..., r$, then $b$ must equal $0_G$. We shall substantiate this property through induction on $r$ before applying it to the specific situation involving $a^1, ..., a^n$. For $r = 2$, the existence of such $g^i$ entails $b = (-g^1) + g^2 = (-g^2) + g^1$, which in turn entails, from basic group operations, $b = 0_{G}$. Assume now that the property is true for a fixed $r \geqq 2$. If $g^1, ..., g^{r+1} \in G$ are $r+1$ points that fulfill the property, then direct application of the induction hypothesis to, say, $g^1, g^2, ..., g^r$, and correspondingly to $g^{j_1}, g^{j_2}, ..., g^{j_r}$, yields that $b = 0_{G}$. Since $n \geqq 2$, the result of the preceding paragraph gives $b = 0_G$, a contradiction. This completes the proof of the \textit{only if} portion of the lemma, and with it the proof of the lemma. \end{proof} Taken together, Lemmas \ref{lem1} and \ref{lem2} reveal an interesting feature of equality (\ref{E}). If a finite $A$ is its own efficient set and $B$ is a singleton $\{b\}$, then equality (\ref{E}) holds if and only if $b = 0_G$, that is, if and only if $A+B=A$. \begin{theorem} \label{cor1} Assume (P3) and $\mathscr{E}(A) = A$. If $A$ is finite and $B$ is a singleton $\{b\}$, then (\ref{E}) holds if and only if $A+B=A$, if and only if $b = 0_G$. \end{theorem} \begin{remark} \label{rem_cor1} In general, neither Theorem \ref{cor1} nor Lemma \ref{lem2} carries over to the case when $A$ has infinitely many points.
Indeed, in Example \ref{ex1}, $A$ is infinite, (P3) is satisfied, $\mathscr{E}(A) = A$, (\ref{E}) holds and yet $b \neq 0_{G}$.\end{remark} Guided by Theorem \ref{thm1}, we have focused our investigation so far on instances of $B$ which contain points equal or comparable to $0_G$. In the spirit of Theorems \ref{thm1}, \ref{thm2} and \ref{thm3}, we might ask what happens when $B$ contains only points incomparable to $0_G$, or when it contains such points \textit{alongside} $0_G$. It should be noted that the answers to these questions cannot appeal to the arguments invoked in the proofs of the previous three theorems, for the centerpiece of these arguments was the ability to compare a point in $A$ with some point in $A+B$ selected to enable the inference that the former, while efficient in $A$, is inefficient in $A+B$ (Theorem \ref{thm3}), or that the latter, while efficient in $A+B$, is inefficient in $A$ (Theorem \ref{thm2}). To see the problem that would arise from applying this approach to the two situations now under study, take just the situation where no points in $B$ compare to $0_{G}$, and suppose one were to proceed as in the proof of Theorem \ref{thm2} or of Theorem \ref{thm3}. In this case, we have that, for each $a \in A$ and $b \in B$, $aI(a+b)$, a fact which in and of itself carries no implications for the efficiency of $a$ or of $a+b$, neither in $A$ nor in $A+B$, even if, as in the proof of Theorem \ref{thm2} or of Theorem \ref{thm3}, one of the two points were known to be efficient. It should by now be clear that different ideas are needed for the two situations we have outlined. We spend the remainder of this section introducing and applying such ideas in the context of a study of the validity of (\ref{E}). Our chief findings in the case where all of $B$ is incomparable with $0_{G}$ are Theorems \ref{thm5} and \ref{thm6}. For ease of exposition, we will state the theorems now, deferring their substantiation until enough background has been presented.
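As a numerical preview of these statements (the sets below are our own illustrative choices, with the product order on $\mathbb{Z}^{2}$): for a finite $A$ and $B = \{b\}$ with $bI0_{G}$, a brute-force computation confirms that the two efficient sets differ.

```python
# Product order on Z^2 and efficient sets, as in the earlier examples.
def r(x, y):
    return all(xi >= yi for xi, yi in zip(x, y))

def efficient(s):
    return {g for g in s if not any(r(h, g) and h != g for h in s)}

A = {(0, 0), (2, -1)}
b = (1, -1)  # incomparable with (0, 0): neither b R 0 nor 0 R b holds
AB = {(x0 + b[0], x1 + b[1]) for (x0, x1) in A}  # the sum set A + {b}

print(efficient(A) == A)              # True: A is its own efficient set
print(efficient(AB) == efficient(A))  # False: equality (E) fails
```

Here $A+B = \{(1,-1), (3,-2)\}$ is again an antichain, so $\mathscr{E}(A+B) = A+B$, which is disjoint from $\mathscr{E}(A) = A$.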
\begin{theoremfive} Under (P1), (P3) and (P4), if $A$ is finite and $B = \{b\}$ with $bI0_G$, then (\ref{E}) fails. \end{theoremfive} \begin{theoremsix} Under identical hypotheses to those of Theorem \ref{thm5}, if $B = \{b^{1}, ..., b^{m}\}$, $m \geqq 2$, such that $b^{i}I0_G$ for each $i = 1, ..., m$, then (\ref{E}) fails. \end{theoremsix} Theorem \ref{thm5} is, to be sure, a special case of Theorem \ref{thm6}, but because the proof of the former is much shorter, and because the essentials of the techniques employed in both instances are the same, we give only the proof of the former, leaving the reader with a sketch of a program for accomplishing the generalization. The bases for Theorems \ref{thm5} and \ref{thm6} are Theorem \ref{thm4} and Propositions \ref{prop1rep} and \ref{prop1}. Theorem \ref{thm4} is a general result on efficient sets, reported in \cite{white1977kernels} and credited to a theorem in graph theory due to \cite{berge1985graphs}. Its utility here will soon become apparent. \begin{theorem}[appears in \cite{white1977kernels} as Theorem 3] \label{thm4} If $A$ is finite, then, under (P1), the efficient set $\mathscr{E}(A)$ is nonempty, and for every $a \in A$ there exists $a' \in \mathscr{E}(A)$ such that $a'Ra$. In particular, if $a \notin \mathscr{E}(A)$, then $a'Pa$. \end{theorem} \begin{remark}In commenting on a special application of Theorem \ref{thm4} in \cite{Mifrani2024}, we suggest that Theorem \ref{thm4} could be interpreted as generalizing the fact that the maximum of a totally ordered set, if it exists, is ``greater'' than or equal to any element of that set. The sole efficient point in such a set relative to the order relation is its maximum. Therefore, if we denote this set with $X$, Theorem \ref{thm4} asserts that $\max(X)Rx$ for all $x \in X$.\end{remark} Equality (\ref{E}) can only be true if all efficient points in $A$ are members of $A+B$.
We give two propositions, Propositions \ref{prop1rep} and \ref{prop1}, which show that the hypotheses of Theorem \ref{thm5} preclude this situation. It should be recalled that in Theorem \ref{thm5}, $B = \{b\}$, $bI0_{G}$. Furthermore, since $A$ is finite, $\mathscr{E}(A)$ is nonempty (Theorem \ref{thm4}), and so Theorem \ref{thm0} does not apply. \begin{proposition} \label{prop1rep} Assume (P1), (P3) and (P4). Suppose that $A$ is finite and let $A = \{a^1, ..., a^n\}$, $n \geqq 1$, and $b \in B$. Then, for each set of indices $i_1, ..., i_n$ in $\{1, ..., n\}$, the system \begin{equation} \tag{$S^{0}$} \label{$S^{0}$} \left\{ \begin{aligned} & a^1 = a^{i_1} + b, \\ & \dots \\ & a^n = a^{i_n} + b, \\ & bI{0_G}, \end{aligned} \right. \end{equation} is inconsistent. \end{proposition} \begin{proposition} \label{prop1} Assume (P1), (P3) and (P4). Suppose $A$ is finite, and suppose $\mathscr{E}(A) \neq A$, so that $A$ contains at least two points. Let $A = \{a^1, ..., a^n\}$, $n \geqq 2$, and $b \in B$. Then, for each $k = 1, ..., n-1$, for each set of indices $i_1, ..., i_k$ in $\{1, ..., n\}$, and for each set of indices $j_{k+1}, ..., j_{n}$ in $\{1, ..., k\}$, the system \begin{equation} \tag{$S^{1}$} \label{$S^{1}$} \left\{ \begin{aligned} & a^1 = a^{i_1} + b, \\ & \dots \\ & a^k = a^{i_k} + b, \\ & a^{j_{k+1}}Pa^{k+1}, \\ & \dots \\ & a^{j_{n}}Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation} is inconsistent. \end{proposition} \begin{remark} These propositions and the two theorems which derive from them are of interest primarily where $A$ contains more than one point. The case of a singleton $A$ falls within the scope of Theorem \ref{cor1}. One implication of that theorem is that if $A = \{a^{1}\}$, $a^{1} \in G$, and $bI0_{G}$, then $\mathscr{E}(A+B) \neq \mathscr{E}(A)$. \end{remark} To gain some insight into the significance of these propositions, one should think of $a^1, ..., a^{k}$ as the efficient points of $A$. 
Either these constitute the whole of $A$ or they do not. In the former case, Proposition \ref{prop1rep} tells us that, since $bI0_{G}$, at least one of the points must lie outside $A+B$, hence $A = \mathscr{E}(A) \not \subseteq A+B$. In the latter case, the points $a^{k+1}, ..., a^{n}$ represent the inefficient portion of $A$, so that by Theorem \ref{thm4} we will find for each $i = k+1, ..., n$ an index $j_i = 1, ..., k$ satisfying $a^{j_i}Pa^{i}$. Proposition \ref{prop1} merely states that, because $bI0_{G}$, these domination relations are incompatible with a situation in which all of the $a^{i}$, $i = 1, ..., k$, were in $A+B$, thus implying that $\mathscr{E}(A) \not \subseteq A+B$ as before. The proof that system (\ref{$S^{0}$}) is inconsistent is fairly straightforward. Take any set of indices $i_1, ..., i_n$ from $\{1, ..., n\}$. If there exists an index $i_j$ such that $i_j = j$, then the $j$-th equation in (\ref{$S^{0}$}) will imply that $b = 0_{G}$, which precludes $bI0_{G}$. In the event that $n \geqq 2$, and the indices were selected in such a way that $i_j \neq j$ for each $j$, then a simple proof by induction will yield the sought inconsistency conclusion. We now give such a proof. \begin{proof}[Proof of Proposition \ref{prop1rep}] To dispel any confusion down the line, we shall denote the system (\ref{$S^{0}$}) associated with a set of $n$ integers $i_1, ..., i_n$ by (\ref{$S^{0}$}($i_1, ..., i_n$)). The goal is to show that (\ref{$S^{0}$}($i_1, ..., i_n$)) is inconsistent for all $i_1, ..., i_n$, for all $n \geqq 2$, whenever $i_j \neq j$ for each $j$, the case where $i_j = j$ for some $j$ having been dealt with. For $n = 2$, if $i_j \neq j$ for each $j = 1, 2$, then $i_1 = 2$ and $i_2 = 1$. Assuming (\ref{$S^{0}$}($i_1, i_2$)) were consistent, we would have that $a^{2} = a^{1} + b$ and $a^{1} = a^{2} + b$, ergo $b = 0_{G}$ and this would contradict $bI0_{G}$.
Now, let $n \geqq 2$ and assume that (\ref{$S^{0}$}($i_1, ..., i_n$)) is inconsistent for any choice of $n$ indices such that $i_j \neq j$ for each $j$. Furthermore, let $i_1, ..., i_{n+1}$ be $(n+1)$ indices from $\{1, ..., n+1\}$ satisfying $i_j \neq j$ for each $j = 1, ..., n+1$. It is obvious that the induction hypothesis applies to $i_1, ..., i_n$, meaning that (\ref{$S^{0}$}($i_1, ..., i_n$)) is inconsistent. Because this system is implied by (\ref{$S^{0}$}($i_1, ..., i_{n+1}$)), it follows that (\ref{$S^{0}$}($i_1, ..., i_{n+1}$)) is itself inconsistent. \end{proof} With regard to Proposition \ref{prop1}, one can see that system (\ref{$S^{1}$}) depends on $k$ as well as on the choice of $i_1, ..., i_k$ and $j_{k+1}, ..., j_n$. We omit this dependency in order to simplify notation, but it ought always to be borne in mind. We believe it helps, as a prelude to proving this proposition, to offer some corroborating examples; see Examples \ref{ex4}, \ref{ex5} and \ref{ex6}. The treatment of these simple and, as we shall soon see, illuminating examples will serve to illustrate the proof's basic strategy. \begin{example} \label{ex4} Let $A = \{a^1, a^2, a^3, a^4, a^5\}$, $b \in G$ and $k = 3$. Suppose $a^1, ..., a^5$ and $b$ satisfy \begin{equation*} \left\{ \begin{aligned} & a^1 = a^{2} + b, \\ & a^2 = a^{4} + b, \\ & a^3 = a^{5} + b, \\ & a^{3}Pa^{4}, \\ & a^{1}Pa^{5}, \\ & bI{0_G}. \end{aligned} \right. \end{equation*} The two comparisons involving $P$ can be rewritten as $(a^{5}+b)Pa^{4}$ and $(a^{4}+2b)Pa^{5}$. By (P3), we can again rewrite the second comparison as $(a^{4}+3b)P(a^{5}+b)$. By (P1), then, $(a^{4}+3b)Pa^{4}$. This last fact implies $(3b)P0_{G}$, which in turn, by (P4), implies $bP0_{G}$. This conclusion is inconsistent with the fact that $bI0_{G}$. \end{example} \begin{example} \label{ex5} Take the same setting as in Example \ref{ex4} and substitute $a^{2}Pa^{4}$ for $a^{3}Pa^{4}$.
Since $a^2 = a^4 + b$, this comparison directly implies that $bP0_{G}$, again in contradiction with $bI0_G$. \end{example} \begin{example} \label{ex6} Let $A = \{a^1, a^2, a^3, a^4, a^5, a^6\}$, $b \in G$ and $k = 3$. Suppose $a^1, ..., a^6$ and $b$ satisfy \begin{equation*} \left\{ \begin{aligned} & a^1 = a^{4} + b, \\ & a^2 = a^{5} + b, \\ & a^3 = a^{6} + b, \\ & a^{2}Pa^{4}, \\ & a^{3}Pa^{5}, \\ & a^{1}Pa^{6}, \\ & bI{0_G}. \end{aligned} \right. \end{equation*} The equations allow us to rewrite the comparisons as $(a^5 + b)Pa^4$, $(a^6+b)Pa^5$ and $(a^4 + b)Pa^{6}$. The import of the first two comparisons is that $(a^6 + 2b)Pa^{4}$. By combining the latter comparison with $(a^4 + b)Pa^6$ we derive the further comparison $(a^4 + 3b)Pa^{4}$, hence $(3b)P0_G$ and therefore, by (P4), $bP0_G$. However, this contradicts the fact that $bI0_G$. \end{example} These examples suggest at once a procedure for demonstrating the inconsistency of system (\ref{$S^{1}$}) in general. First, we eliminate $a^1, ..., a^k$ from the comparisons by replacing them with the expressions provided by the $k$ equations. Then, if a comparison of the form $(a^{i}+pb)Pa^{i}$ is revealed, we are done (Example \ref{ex5}); if a pair of comparisons of the form $(a^{i}+pb)Pa^{j}$ and $(a^{j}+p'b)Pa^{i}$, $j \neq i$, appear instead, we are also done (Example \ref{ex4}). If neither situation arises, we generate, following Example \ref{ex6}, a new comparison $(a^{i}+pb)Pa^{j}$ with $j \neq i$, then we search the original set of comparisons for one of the form $(a^{j}+p'b)Pa^{i}$. If such a comparison exists, we are done; otherwise we continue generating comparisons as indicated until this situation obtains, taking at each iteration the preceding set of comparisons as our point of departure\footnote{Readers trained in linear programming will notice a slight resemblance between this procedure and Fourier's procedure \citep{williams1986fourier} for eliminating variables from a system of linear inequalities.
For example, when in the final step of Example \ref{ex6} we inferred that $(a^{4}+3b)Pa^{4}$, we, in Fourier's language, eliminated the ``variable'' $a^{6}$ between the two ``inequalities'' $a^{6}P(a^{4} + (-2b))$ and $(a^{4} + b)Pa^6$.}. Two issues must be attended to before these steps can be operationalized as a method of proof. In the first place, although none of the examples we have given bears this out, it is sometimes impossible to eliminate from the comparisons all of $a^1, ..., a^k$. Consider, for instance, a variation of Example \ref{ex4} where the equations read successively $a^1 = a^2 + b$, $a^2 = a^1 + b$ and $a^3 = a^5 + b$, subject to the same comparisons as before. It is clear that neither $a^1$ nor $a^2$ can be discarded from the comparison $a^1Pa^5$, and so the first step of the procedure fails. In the second place, it is not immediately obvious why, if no comparison of the form $(a^i + pb)Pa^{i}$ is present initially, there must exist a pair of comparisons of the form specified above, either in the original system or in the sequence of systems produced by the transformation in Example \ref{ex6}. In light of these problems, we introduce the idea of a \textit{cycle}. Given some $k = 1, ..., n-1$ and some choice of associated indices, we shall classify as a cycle of (\ref{$S^{1}$}) any equation of the form $a^i = a^i + b$, $i = 1, ..., k$, and any group of $r \leqq k$ equations of the form \begin{equation*} \left\{ \begin{aligned} & a^{1} = a^{2} + b, \\ & a^{2} = a^{3} + b, \\ & \dots \\ & a^{r} = a^{1} + b, \end{aligned} \right. \end{equation*}where the indices above the $a^i$ are purely for convenience, need not be those of the original system, and should not occasion a loss of generality. A cycle is formed, for example, by the equations $a^{1} = a^{2} + b$ and $a^2 = a^1 + b$ of the above-described variant of Example \ref{ex4}.
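Operationally, the cycle condition concerns only the index map $i \mapsto i_i$ defined by the equations $a^{i} = a^{i_i} + b$: a cycle exists precisely when iterating this map from some starting index revisits an index without ever leaving $\{1, ..., k\}$. A minimal Python sketch of this detection (the index maps below are hypothetical):

```python
def has_cycle(index_map, k):
    """Detect a cycle of the system a^i = a^{index_map[i]} + b, i = 1..k.

    A cycle is a walk i -> index_map[i] -> ... that stays inside {1,...,k}
    and revisits an index; walks leaving {1,...,k} terminate harmlessly.
    """
    for start in range(1, k + 1):
        seen, i = set(), start
        while 1 <= i <= k:
            if i in seen:
                return True
            seen.add(i)
            i = index_map.get(i, 0)  # an absent or out-of-range index ends the walk
    return False

# a^1 = a^2 + b and a^2 = a^1 + b form a cycle (forcing b = 0_G):
print(has_cycle({1: 2, 2: 1, 3: 5}, k=3))  # True
# Every equation points outside {a^1, a^2, a^3}: no cycle.
print(has_cycle({1: 4, 2: 5, 3: 6}, k=3))  # False
```

The first call corresponds to the variant of Example \ref{ex4} discussed above; a self-loop $a^i = a^i + b$ is the degenerate cycle of length one.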
Cycles need not exist for a particular choice of $k$, of $i_1, ..., i_k$ and of $j_{k+1}, ..., j_n$, but it is readily seen that where they do, we will find at least one index $i = 1, ..., k$ and an integer\footnote{In reality, we will have that $a^{i} = a^{i} + pb$ for \textit{any} $p \geqq 1$.} $p \geqq 1$ such that $a^{i} = a^{i} + pb$, from which it will follow that $b = 0_{G}$, an absurdity if $bI0_{G}$. This exhausts the treatment of cyclical systems. What if (\ref{$S^{1}$}) contains no cycles? In that case, Proposition \ref{prop2} effectively establishes that each of $a^1, ..., a^k$ can be eliminated from the comparisons $a^{i}Pa^{j}$, $i = 1, ..., k$, $j = k+1, ..., n$. \begin{proposition} \label{prop2} Given an index $k = 1, ..., n-1$, if system (\ref{$S^{1}$}) contains no cycles, then every $a^i$, $i = 1, ..., k$, satisfies $a^i = a^j + pb$ for some $j = k+1, ..., n$ and $p \geqq 1$. As a result, each of $a^1, ..., a^{k}$ is expressible in terms of $a^{k+1}, ..., a^{n}$ and $b$ alone. \end{proposition} \begin{proof} Let us suppose that no cycles exist in (\ref{$S^{1}$}). Then, in the choice of the point $a^{j_1}$ associated with $a^1$ in the equation $a^1 = a^{j_1} + b$, there are $(n-1)$ possibilities, namely $a^2, ..., a^k, a^{k+1}, ..., a^{n}$. For $a^2$, we may select any point bar $a^1$ and $a^2$. Pursuing this construction until $a^k$, we see that of the $(n-k)$ choices remaining, none of them lies in $\{a^1, ..., a^k\}$. Therefore, $a^{j_k}$ must be in $\{a^{k+1}, ..., a^{n}\}$. \end{proof} We are now in a position to demonstrate Proposition \ref{prop1}. \begin{proof}[Proof of Proposition \ref{prop1}] The observation that $A$ contains at least two points as a result of $\mathscr{E}(A)$ being distinct from $A$ stems from $P$'s irreflexivity. Let $k = 1, ..., n-1$. Let $i_1, ..., i_k$ and $j_{k+1}, ..., j_{n}$ denote sets of indices in $\{1, ..., n\}$ and $\{1, ..., k\}$, respectively.
We need not concern ourselves with the case where (\ref{$S^{1}$}) exhibits a cycle (see the discussion preceding Proposition \ref{prop2}). Rather, we assume that no cycles exist. According to Proposition \ref{prop2}, there exist some indices $m_{k+1}, ..., m_{n}$ in $\{1, ..., k\}$ and some integers $p_{k+1}, ..., p_{n} \geqq 1$ for which \begin{equation*} \left\{ \begin{aligned} & (a^{m_{k+1}} + p_{k+1}b)Pa^{k+1}, \\ & \dots \\ & (a^{m_{n}} + p_{n}b)Pa^{n}, \\ & bI{0_G}. \end{aligned} \right. \end{equation*} If $k+1 = n$, then $m_n = n$ and we are done, as Example \ref{ex5} and the ensuing discussion make clear. Assume henceforth that $k+1 < n$, which presupposes that $n \geqq 3$ (if $n = 2$, then $k+1=n$ by definition). If any $m_{i}$, $i = k+1, ..., n$, satisfies $m_{i} = i$, we are done. The alternative situation, which we now consider, is if $m_{i} \neq i$ for all $i = k+1, ..., n$. Here we shall prove that, in general, given an index $r = 1, ..., n-1$ (such as $k$), given $n-r$ indices $s_{r+1}, ..., s_{n}$ in $\{r+1, ..., n\}$ such that $s_{i} \neq i$ for all $i = r+1, ..., n$ (such as the $m_i$), and given integers $p'_{r+1}, ..., p'_{n} \geqq 1$ (such as the $p_i$), the system \begin{equation*} \left\{ \begin{aligned} & (a^{s_{r+1}} + p'_{r+1}b)Pa^{r+1}, \\ & \dots \\ & (a^{s_{n}} + p'_{n}b)Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation*} is inconsistent. Our argument proceeds by induction on $r$. For $r = n-2 \geqq 1$, if we have that \begin{equation*} \left\{ \begin{aligned} & (a^{s_{n-1}} + p'_{n-1}b)Pa^{n-1}, \\ & (a^{s_{n}} + p'_{n}b)Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation*} while assuming $s_{n-1} \neq n-1$ and $s_{n} \neq n$, then \begin{equation*} \left\{ \begin{aligned} & (a^{n} + p'_{n-1}b)Pa^{n-1}, \\ & (a^{n-1} + p'_{n}b)Pa^{n}, \\ & bI{0_G} \end{aligned} \right. \end{equation*} holds. 
Now, because $R$ is transitive, the import of the first two comparisons is \begin{equation*} (a^{n} + (p'_{n-1} + p'_{n})b)Pa^{n}, \end{equation*} which means $(p'_{n-1} + p'_{n})bP0_{G}$, and hence $bP0_{G}$, an impossibility given $bI0_{G}$. This completes the first step of the induction. For the inductive step, it suffices to notice that the system \begin{equation*} \left\{ \begin{aligned} & (a^{s_{r}} + p'_{r}b)Pa^{r}, \\ & \dots \\ & (a^{s_{n}} + p'_{n}b)Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation*} implies the system \begin{equation*} \left\{ \begin{aligned} & (a^{s_{r+1}} + p'_{r+1}b)Pa^{r+1}, \\ & \dots \\ & (a^{s_{n}} + p'_{n}b)Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation*} so that if the latter is inconsistent, then so is the former. We conclude from this that for $r = k$, for $s_i = m_i$, for $p_i = p'_i$, $i = k+1, ..., n$, the system \begin{equation*} \left\{ \begin{aligned} & (a^{m_{k+1}} + p_{k+1}b)Pa^{k+1}, \\ & \dots \\ & (a^{m_{n}} + p_{n}b)Pa^{n}, \\ & bI{0_G}, \end{aligned} \right. \end{equation*} is inconsistent. As a result, (\ref{$S^{1}$}) is inconsistent. \end{proof} From Proposition \ref{prop1} and Theorem \ref{thm4} follows Theorem \ref{thm5}. \begin{theorem} \label{thm5} Under (P1), (P3) and (P4), if $A$ is finite and $B = \{b\}$ with $bI0_G$, then (\ref{E}) fails. \end{theorem} \begin{proof} Write $A = \{a^1, ..., a^n\}$, $n \geqq 2$. After relabeling the points if necessary, write $\mathscr{E}(A) = \{a^1, ..., a^k\}$, with $k = 1, ..., n$. If $\mathscr{E}(A) = A$, then $\mathscr{E}(A) \not\subseteq A+B$ by Proposition \ref{prop1rep}, from which we conclude that $\mathscr{E}(A) \neq \mathscr{E}(A+B)$. If $\mathscr{E}(A) \neq A$, then $k < n$, and we let $D(A) = \{a^{k+1}, ..., a^{n}\}$ denote the dominated portion of $A$. Since $A$ is finite, Theorem \ref{thm4} assures us that for each $i = k+1, ..., n$ there exists $j_i = 1, ..., k$ satisfying $a^{j_i}Pa^{i}$.
Now, if $\mathscr{E}(A+B) = \mathscr{E}(A)$ were true, we would have that $\mathscr{E}(A) \subseteq A+B$, i.e., \begin{equation} \left\{ \begin{aligned} & a^1 = a^{i_1} + b, \\ & \dots \\ & a^k = a^{i_k} + b, \\ \end{aligned} \right. \end{equation} for some $k$ indices $i_1, ..., i_k$ in $\{1, ..., n\}$. But Proposition \ref{prop1} tells us that this cannot be so, as we already have $bI0_G$ and $a^{j_i}Pa^{i}$ for all $i = k+1, ..., n$. Consequently, $\mathscr{E}(A) \not \subseteq A+B$, and therefore $\mathscr{E}(A) \neq \mathscr{E}(A+B)$. \end{proof} If, in addition to the notation of Proposition \ref{prop1}, we let $b^1, ..., b^m$ represent $m$ points in $G$ and $s_1, ..., s_k$ represent a set of indices in $\{1, ..., m\}$, it can be shown that the modified system \begin{equation} \tag{$S^{2}$} \label{$S^{2}$} \left\{ \begin{aligned} & a^1 = a^{i_1} + b^{s_1}, \\ & \dots \\ & a^k = a^{i_k} + b^{s_k}, \\ & a^{j_{k+1}}Pa^{k+1}, \\ & \dots \\ & a^{j_{n}}Pa^{n}, \\ & b^{i}I{0_G}, \forall i = 1, ..., m, \end{aligned} \right. \end{equation}is inconsistent for all $k$, $i_1, ..., i_k$ and $j_{k+1}, ..., j_{n}$. The proof is similar to when $m = 1$ (i.e., when $b^{1} = ... = b^{m} = b$), and employs a generalization of Proposition \ref{prop2} wherein the concept of a cycle is duly adjusted to reflect the changes in (\ref{$S^{1}$}). \begin{proposition} \label{prop3} Given an index $k = 1, ..., n-1$, if (\ref{$S^{2}$}) contains no cycles, then every $a^{i}$, $i = 1, ..., k$, satisfies $a^{i} = a^{j} + p_{i, 1}b^{1} + ... + p_{i, m}b^{m}$ for some $j = k+1, ..., n$ and some set of integers $p_{i, 1}, ..., p_{i, m} \geqq 0$. As a result, each of $a^1, ..., a^{k}$ is expressible as a function of and only of $a^{k+1}, ..., a^{n}$. \end{proposition} The extension of Proposition \ref{prop1} to system (\ref{$S^{2}$}) admits the next theorem as its corollary.
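The smallest instance of (\ref{$S^{2}$}), with $n = 2$, $k = 1$, $m = 2$ and $i_1 = 2$, can be probed exhaustively in a concrete model. In the sketch below, the product order on $\mathbb{Z}^2$ stands in for a relation with the required properties; the encoding and the grid bounds are our own choices, not the paper's.

```python
from itertools import product

def leq(x, y):                      # componentwise (product) order on Z^2
    return x[0] <= y[0] and x[1] <= y[1]

def P(x, y):                        # strict dominance: x P y
    return leq(y, x) and x != y

def I(x, y):                        # incomparability
    return not leq(x, y) and not leq(y, x)

# Smallest instance of (S^2): a^1 = a^2 + b^s, a^1 P a^2, b^1 I 0, b^2 I 0.
# Search a grid for a witness; the system should admit none.
zero = (0, 0)
grid = list(product(range(-2, 3), repeat=2))
witnesses = []
for a2, b1, b2 in product(grid, repeat=3):
    if not (I(b1, zero) and I(b2, zero)):
        continue
    for bs in (b1, b2):             # the index s_1 may point at either b
        a1 = (a2[0] + bs[0], a2[1] + bs[1])
        if P(a1, a2):
            witnesses.append((a1, a2, bs))
assert witnesses == []              # (S^2) is inconsistent here, as claimed
```

Indeed, $a^1 = a^2 + b^{s}$ together with $a^1Pa^2$ would force $b^{s}P0$, which incomparability rules out.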
\begin{theorem} \label{thm6} Under identical hypotheses to those of Theorem \ref{thm5}, if $B = \{b^{1}, ..., b^{m}\}$, $m \geqq 2$, such that $b^{i}I0_G$ for each $i = 1, ..., m$, then (\ref{E}) fails. \end{theorem} \begin{remark} \label{rem_thm6} Theorems \ref{thm5} and \ref{thm6} do not extend to infinite $A$; see Examples \ref{ex1} and \ref{ex2}. \end{remark} Let us turn, finally, to the case when $0_G \in B$ and $bI0_{G}$ for all $b \in B \backslash \{0_G\}$. Contrary to the preceding case, if $0_{G} \in B$, then all efficient points in $A$ must belong to $A+B$. Our main conclusions can be summarized as follows. \begin{theorem} Suppose $A$ is finite. Under (P1), (P3) and (P5), if $\mathscr{E}(A) = A$ and there exists $b \in G$ such that $B = \{0_{G}, b\}$ and $bI0_{G}$, then (\ref{E}) fails. \end{theorem} \begin{theorem} Under identical hypotheses to those of Theorem \ref{thm7}, if $\mathscr{E}(A) = A$, and $B = \{0_{G}, b^{1}, ..., b^{m}\}$, $m \geqq 2$, where $b^{i}I0_{G}$ for each $i = 1, ..., m$, then (\ref{E}) fails. \end{theorem} Remarks \ref{rem_thm7} and \ref{rem_thm8} will show that these theorems do not generalize to infinite $A$. We begin with two important preliminaries. Proposition \ref{prop4} states a logical inconsistency result akin to Propositions \ref{prop1rep} and \ref{prop1}. Lemma \ref{lem3} presents a rather technical result that will be of use when assessing points of a certain peculiar type for efficiency in $A+B$. \begin{proposition} \label{prop4} Assume (P1), (P3) and (P5). Let $a^1, ..., a^n$ be $n \geqq 2$ distinct points in $G$, and let $b \in B$. Choose a set of indices $i_1, ..., i_n$ from $\{1, ..., n\}$ with $i_j \neq j$ for each $j$. Then the system \begin{equation} \tag{$S^{3}$} \label{$S^{3}$} \left\{ \begin{aligned} & a^{i_1}P(a^{1} + b), \\ & \dots \\ & a^{i_{n}}P(a^{n} + b), \\ & bI{0_G}, \end{aligned} \right. \end{equation} is inconsistent. \end{proposition} \begin{proof} We proceed by induction on $n$. 
For $n = 2$, the only candidates for $i_1$ and $i_2$ are $i_1 = 2$ and $i_2 = 1$. Suppose, contrary to the proposition, that we had simultaneously $a^{2}P(a^{1} + b)$, $a^{1}P(a^{2}+b)$ and $bI0_{G}$. Then it would follow, in particular, that $(a^{2}+(-a^{1}))Pb$ and $(a^{1}+(-a^{2}))Pb$. By (P3), this would imply $0_{G}P(2b)$, and hence $0_{G}Pb$ by this proposition's assumption. However, as $bI0_{G}$, we would obtain a contradiction, showing that the system is inconsistent as claimed. For the inductive step, let $n \geqq 2$ be an integer such that for any $n$ points $a^1, ..., a^n$ in $G$ and each set of indices $i_1, ..., i_n$ satisfying $i_j \neq j$, we do not have \begin{equation*} \left\{ \begin{aligned} & a^{i_1}P(a^{1} + b), \\ & \dots \\ & a^{i_{n}}P(a^{n} + b), \\ & bI{0_G}, \end{aligned} \right. \end{equation*} for any $b \in G$. Then it is clear that for any $(n+1)$ points $a^1, ..., a^{n+1} \in G$, any choice of indices $i_1, ..., i_{n+1}$ satisfying $i_j \neq j$ for each $j = 1, ..., n+1$, and for any $b \in G$, the system \begin{equation*} \left\{ \begin{aligned} & a^{i_1}P(a^{1} + b), \\ & \dots \\ & a^{i_{n+1}}P(a^{n+1} + b), \\ & bI{0_G}, \end{aligned} \right. \end{equation*} is inconsistent because it implies the system \begin{equation*} \left\{ \begin{aligned} & a^{i_1}P(a^{1} + b), \\ & \dots \\ & a^{i_{n}}P(a^{n} + b), \\ & bI{0_G}, \end{aligned} \right. \end{equation*} to which the induction hypothesis is applicable and yields the desired inconsistency conclusion. The proof is complete. \end{proof} \begin{lemma} \label{lem3} Under the hypotheses of Proposition \ref{prop4}, let $b \in G$ such that $bI0_{G}$. 
For each integer $n \geqq 2$, if there exist $n$ distinct points $a^1, ..., a^{n} \in G$ such that $A = \{a^1, ..., a^n\}$, and the set $I = \{i = 1, ..., n: a^{i} = a^{j} + b \text{ for some } j\}$ is nonempty, then so is the set $J = \{j = 1, ..., n: a^{j} + b = a^{i} \text{ for some } i\}$, and for each $i \notin I$ and $j \notin J$, $a^{i}P(a^{j} + b)$ implies $0_{G}Pb$. \end{lemma} \begin{proof} That $J$ is nonempty as a result of $I$ being nonempty is evident. We will use induction to justify the remainder of the lemma. To verify the base case, let $A = \{a^1, a^2\}$ for $a^1, a^2 \in G$, and suppose that the set $I$ corresponding to $A$ is nonempty. Only one of two situations obtains: either $I = \{1\}$, or $I = \{2\}$. The case $I = \{1, 2\}$ is impossible because, if it were true, it would engender one of the following consequences, all of which contradict the premise that $bI0_{G}$: $a^1 = a^1 + b$; $a^2 = a^2 + b$; or $a^1 = a^2 + b$ concurrently with $a^2 = a^1 + b$. If $I = \{1\}$, then $J = \{2\}$, $a^1 = a^2 + b$, and $a^{2}P(a^{1} + b)$ implies $a^{2}P(a^{2} + 2b)$, whence $0_{G}Pb$; if, on the other hand, $I = \{2\}$, then $J = \{1\}$, $a^2 = a^1 + b$, and $a^{1}P(a^{2} + b)$ implies $a^{1}P(a^{1} + 2b)$, whence $0_{G}Pb$. We have thus demonstrated the property asserted by the lemma for $n = 2$. To complete the proof, let $n \geqq 2$ be an integer for which the property holds. Let $A = \{a^1, ..., a^{n+1}\}$, $a^i \in G$, and let the sets $I$ and $J$ be defined with respect to $A$. Let $I'$ and $J'$ be the lemma's sets associated with $A \setminus \{a^{n+1}\}$. It is then clear that if some $i, j = 1, ..., n+1$ satisfied $i \notin I$ and $j \notin J$, we would have $i \notin I'$ and $j \notin J'$. Now, the set $A \setminus \{a^{n+1}\}$ contains exactly $n$ points, so that, by the induction hypothesis, $a^{i}P(a^{j} + b)$ implies $0_{G}Pb$ whenever $i \notin I'$ and $j \notin J'$. 
It follows that $a^{i}P(a^{j} + b)$ implies $0_{G}Pb$ for each $i \notin I$ and $j \notin J$. The property holds for $n+1$, and the proof is hereby complete. \end{proof} \begin{theorem} \label{thm7} Suppose $A$ is finite. Under (P1), (P3) and (P5), if $\mathscr{E}(A) = A$ and there exists $b \in G$ such that $B = \{0_{G}, b\}$ and $bI0_{G}$, then (\ref{E}) fails. \end{theorem} \begin{proof} Assume $B = \{0_{G}, b\}$ for some $b \in G$ incomparable with $0_{G}$, and write $A = \{a^1, ..., a^n\}$ for $n \geqq 1$. If $n = 1$, then $bI0_{G}$ implies $a^{1}I(a^{1} + b)$, so that $\mathscr{E}(A+B) = \{a^1, a^1 + b\}$, and equality (\ref{E}) fails. We assume henceforth that $n \geqq 2$. Let us suppose, for the sake of contradiction, that (\ref{E}) holds. We know that $a^i \in A+B$ for each $i = 1, ..., n$ because $0_{G} \in B$. Therefore, since $\mathscr{E}(A+B) = \mathscr{E}(A)$, this must mean that all the points in $A+B$ different from the $a^i$ are dominated in $A+B$. Such points do exist, because the opposite would imply falsely that $b = 0_{G}$, a fact established in the first half of the proof of Lemma \ref{lem2}. Put otherwise, there is at least one index $j$ for which $a^{j} + b$ coincides with none of the $a^{i}$. Two cases may arise. \begin{itemize} \item{\textbf{Case 1:}} $a^{i} \neq a^{j} + b$ for each $i \neq j$. Let $z^{i} = a^{i} + b$, $i = 1, ..., n$, be any member of $A+B$. We have assumed here that $z^{i}$ is different from $a^{j}$ for each $j \neq i$. Furthermore, $z^{i} \neq a^{i}$ because $bI0_{G}$. This establishes that $z^{i} \notin \mathscr{E}(A+B)$, and hence, by Theorem \ref{thm4}, the existence of an index $j$ such that $a^{j}Pz^{i}$. As $bI0_{G}$, the comparison $a^{i}Rz^{i}$, and by extension $a^{i}Pz^{i}$, does not hold, so that $j \neq i$.
Repeating this argument for all $z^{i}$, $i = 1, ..., n$, we construct a set of indices $i_1, ..., i_n$ with $i_j \neq j$ for all $j$ such that \begin{equation*} \left\{ \begin{aligned} & a^{i_1}P(a^{1} + b), \\ & \dots \\ & a^{i_{n}}P(a^{n} + b), \\ \end{aligned} \right. \end{equation*} an absurdity in view of Proposition \ref{prop4}. \item{\textbf{Case 2:}} there exists a pair of indices $i^{*} \neq j^{*}$ such that $a^{i^{*}} = a^{j^{*}} + b$. It will prove useful to work with the sets $I = \{i: a^{i} = a^{j} + b \text{ for some } j\}$ and $J = \{j: a^{j} + b = a^{i} \text{ for some } i\}$ introduced in Lemma \ref{lem3}. We have assumed $I$, and therefore $J$, to be nonempty. The complement of $J$, as we have seen in the opening paragraph of this proof, is also nonempty. Therefore, we may choose a $z = a^{j} + b \in A+B$ where $j \notin J$. By construction, $z$ is dominated in $A+B$ by some $z'$, and Theorem \ref{thm4} allows us to take $z'$ in $\mathscr{E}(A+B) = \mathscr{E}(A)$. There exists, therefore, an index $i = 1, ..., n$ such that $a^{i}Pz$. If $i \in I$, then $a^{i} = a^{l} + b$ for some $l \neq i$, and the statement $a^{i}Pz$ is equivalent to $a^{l}Pa^{j}$, an absurdity given that $a^{l} \in A$ and $a^{j}$ is efficient in $A$. If $i \notin I$, then, because $z = a^{j} + b$ and $j \notin J$, Lemma \ref{lem3} tells us that $0_{G}Pb$, again in contradiction with the fact that $bI0_{G}$. \end{itemize} A contradiction is obtained in both cases, indicating the falsity of the initial assumption that $\mathscr{E}(A+B) = \mathscr{E}(A)$. \end{proof} \begin{remark} \label{rem_thm7} The requirement that $A$ be finite is indispensable to this theorem. For an illustration, modify Example \ref{ex1} by replacing $B$ with the set $\{(0, 0), (-1, 1)\}$. Evidently, $\mathscr{E}(A) = A$, and each of (P1), (P3) and (P5) holds. However, because we still have $A+B = A$, (\ref{E}) holds.
\end{remark} \begin{example} \label{ex7} To afford the reader some intuition about Theorem \ref{thm7} and its proof, we will show through a two-dimensional example why, if $R$ and $A$ satisfy the hypotheses of Theorem \ref{thm7}, we cannot find a set $B = \{0_{G}, b\}$ where $bI0_{G}$ and for which equality (\ref{E}) holds. Let $G = \mathbb{R}^{2}$, $A = \{(-1, 0), (0, -1)\}$ and $B = \{(0, 0), b\}$ with $b = (b_1, b_2)I0_{\mathbb{R}^2}$, all endowed with the product order of Example \ref{ex2}. This order satisfies the hypotheses of Proposition \ref{prop4}, and $A$ is, as desired, its own efficient set. Let us consider the problem of finding values of $b_1$ and $b_2$ such that $\mathscr{E}(A+B) = \mathscr{E}(A) = \{(-1, 0), (0, -1)\}$. Notice, first, that because $(b_1, b_2)I(0, 0)$, we have that $(b_{1}-1, b_{2})I(-1, 0)$ and $(b_{1}, b_{2}-1)I(0, -1)$. From this it follows that $(b_{1}-1, b_{2}) \neq (-1, 0)$ and $(b_{1}, b_{2}-1) \neq (0, -1)$. Therefore, the sum set $A+B$ can take one of three forms: \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}-1, b_{2}), (b_{1}, b_{2}-1)\}, \end{equation*} or \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}, b_{2}-1)\}, \end{equation*} or \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}-1, b_{2})\}. \end{equation*} The first form obtains when $(b_{1}-1, b_{2}) \neq (0, -1)$ and $(b_{1}, b_{2}-1) \neq (-1, 0)$, the second when $(b_{1}-1, b_{2}) = (0, -1)$ and $(b_{1}, b_{2}-1) \neq (-1, 0)$, and the third when $(b_{1}, b_{2}-1) = (-1, 0)$ and $(b_{1}-1, b_{2}) \neq (0, -1)$. The first form corresponds to Case 1 in the theorem's proof, and the second and third forms correspond to Case 2. We will deal with each of these cases in turn. Let us suppose momentarily that $(b_{1}-1, b_{2}) \neq (0, -1)$ and $(b_{1}, b_{2}-1) \neq (-1, 0)$, so that \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}-1, b_{2}), (b_{1}, b_{2}-1)\}.
\end{equation*} For $b_1$ and $b_2$ satisfying the object of our search, $(b_{1}-1, b_{2}), (b_{1}, b_{2}-1) \notin \mathscr{E}(A+B)$, and Theorem \ref{thm4} allows us to conclude that $(-1, 0)P(b_{1}, b_{2}-1)$ and $(0, -1)P(b_{1}-1, b_{2})$, since $(0, -1)I(b_{1}, b_{2}-1)$ and $(-1, 0)I(b_{1}-1, b_{2})$. The first comparison implies that $b_{1} < 0$ and $b_{2} \leqq 1$, the second that $b_{1} \leqq 1$ and $b_{2} < 0$; consequently, $(0, 0)P(b_{1}, b_{2})$, and this directly contradicts the fact that $(b_{1}, b_{2})I(0, 0)$. Next, suppose that $(b_{1}-1, b_{2}) = (0, -1)$ and $(b_{1}, b_{2}-1) \neq (-1, 0)$. Then $(b_{1}, b_{2}) = (1, -1)$, and \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}, b_{2}-1)\}. \end{equation*} For $b_1$ and $b_2$ satisfying the object of our search, we have that $(-1, 0)P(b_{1}, b_{2}-1)$ (recall that $(b_{1}, b_{2}-1)I(0, -1)$), which leads to the false inference that $b_{1} = 1 \leqq -1$. Suppose, finally, that $(b_{1}, b_{2}-1) = (-1, 0)$ and $(b_{1}-1, b_{2}) \neq (0, -1)$. Then $(b_{1}, b_{2}) = (-1, 1)$, and \begin{equation*} A+B = \{(-1, 0), (0, -1), (b_{1}-1, b_{2})\}. \end{equation*} A contradiction follows in similar fashion to the preceding case. In sum, there does not exist $B = \{0_{\mathbb{R}^{2}}, b\}$ such that $bI0_{\mathbb{R}^{2}}$ and $\mathscr{E}(A+B) = \mathscr{E}(A)$. \end{example} We may suggest the following generalization of Theorem \ref{thm7}. \begin{theorem} \label{thm8} Under identical hypotheses to those of Theorem \ref{thm7}, if $\mathscr{E}(A) = A$, and $B = \{0_{G}, b^{1}, ..., b^{m}\}$, $m \geqq 2$, where $b^{i}I0_{G}$ for each $i = 1, ..., m$, then (\ref{E}) fails. \end{theorem} \begin{remark} \label{rem_thm8} This theorem fails if the assumption that $A$ is finite is eliminated. 
To take Example \ref{ex2} again, we have, replacing $B$ with $\{(0, 0), (-1, 1), (-1, 2)\}$, that $\mathscr{E}(A) = A$, $(-1, 1)I(0, 0)$, $(-1, 2)I(0, 0)$, that properties (P1), (P3) and (P5) are satisfied, but that \begin{equation*}\mathscr{E}(A+B) = \mathscr{E}\biggl(A \cup \{(x-1, y+1): (x, y) \in A\}\biggr) = \mathscr{E}(A)\end{equation*} by the same arguments as the original example's. Notice that despite the change in $B$, it is still the case that \begin{equation*}A+B = A \cup \{(x-1, y+1): (x, y) \in A\}.\end{equation*} \end{remark} To justify Theorem \ref{thm8}, one may proceed by analogy to Theorem \ref{thm7}. Recall that our proof of that theorem drew on Proposition \ref{prop4}, Lemma \ref{lem3} and Theorem \ref{thm4}. Theorem \ref{thm4} was invoked for an element of the sum set $A+B$. The only condition for applying this theorem is that the target set be finite. In the present situation, $A+B$ is finite, and Theorem \ref{thm4} could therefore be used as is. On the other hand, Proposition \ref{prop4} would have to be extended to cover the system \begin{equation} \tag{$S^{4}$} \label{$S^{4}$} \left\{ \begin{aligned} & a^{i_{11}}P(a^{1} + b^{1}), \\ & \dots \\ & a^{i_{1m}}P(a^{1} + b^{m}), \\ & a^{i_{21}}P(a^{2} + b^{1}), \\ & \dots \\ & a^{i_{2m}}P(a^{2} + b^{m}), \\ & \dots \\ & a^{i_{nm}}P(a^{n} + b^{m}), \\ & b^{j}I{0_G}, \forall j = 1, ..., m, \end{aligned} \right. \end{equation}where, for each $k = 1, ..., n$ and $j = 1, ..., m$, $i_{kj}$ denotes any integer in $\{1, ..., n\}$ subject to $i_{kj} \neq k$. System (\ref{$S^{3}$}) is (\ref{$S^{4}$}) with $m = 1$. A variant of Lemma \ref{lem3} in which the sets $I$ and $J$ were redefined as $I = \{i: a^{i} = a^{j} + b^{l} \text{ for some } j, l\}$ and $J = \{j: a^{j} + b^{l} = a^{i} \text{ for some } i, l\}$ can likewise be demonstrated. Essentially no techniques beyond those used in connection with the original results are required for accomplishing these generalizations. 
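As a concrete sanity check on these inconsistency claims, the smallest case of (\ref{$S^{4}$}), namely $m = 1$ and $n = 2$ (the base case of (\ref{$S^{3}$})), can be searched exhaustively in a model. The sketch below is ours, not the paper's: the product order on $\mathbb{Z}^2$ plays the role of a relation satisfying (P1), (P3) and (P5), and the grid bounds are arbitrary.

```python
from itertools import product

def leq(x, y):                       # the product order on Z^2
    return x[0] <= y[0] and x[1] <= y[1]

def P(x, y):                         # strict dominance: x P y
    return leq(y, x) and x != y

def I(x, y):                         # incomparability
    return not leq(x, y) and not leq(y, x)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

# (S^4) with m = 1 and n = 2 is (S^3) with n = 2: search exhaustively for
# a^1, a^2, b with a^2 P (a^1 + b), a^1 P (a^2 + b) and b I 0.
zero = (0, 0)
grid = list(product(range(-3, 4), repeat=2))
counterexamples = [
    (a1, a2, b)
    for a1, a2, b in product(grid, repeat=3)
    if I(b, zero) and P(a2, add(a1, b)) and P(a1, add(a2, b))
]
assert counterexamples == []         # the system is infeasible, as proved above
```

The search comes up empty for the same reason as in the proof of Proposition \ref{prop4}: adding the two comparisons would force $0Pb$, contradicting $bI0$.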
\section{Summary and discussion} \label{sec4} The purpose of this paper was to develop conditions for the validity of set equality (\ref{E}). The problem originates, in part, in a theorem in \citep[p. 22]{yu2013multiple} which asserts the equality for specific subsets of $\mathbb{R}^{q}$ and a specific choice of relation. This work was an effort at generalizing (\ref{E}) to situations where the sets considered are arbitrary, possibly non-Euclidean, and where the relation used for comparing alternative points in those sets need not constitute an order relation. In Theorems \ref{thm0} and \ref{thm1}, we obtained conditions sufficient for (\ref{E}) to hold. Theorem \ref{thm1} subsumes the theorem in \citep[p. 22]{yu2013multiple}. Theorems \ref{thm2}, \ref{thm3} and \ref{thm5}-\ref{thm8} describe situations in which (\ref{E}) is false, and, although this was not made explicit, can be read as providing necessary conditions for the validity of (\ref{E}). Theorem \ref{cor1} states a necessary and sufficient condition in the case where $A$ is finite and is its own efficient set, and $B$ contains a single point. Theorems \ref{thm2} and \ref{thm3} are each predicated on some efficient set -- $\mathscr{E}(A)$ in one case and $\mathscr{E}(A+B)$ in the other -- being nonempty. In a finite set, efficient points are guaranteed to exist provided $R$ is transitive (Theorem \ref{thm4}). The situation tends to be more complex where infinite sets are involved. As we have seen, a potential application of equality (\ref{E}) is in multiple objective optimization. A substantial amount of research has been done on the existence of efficient solutions. To cite only one example, \cite{benson1978existence} shows that, given a multiple objective optimization problem, a corresponding single-objective optimization problem can be constructed with the property that any optimal solution to the latter, if such exists, yields an efficient solution to the former.
He gives three groups of conditions that ensure that the efficient solution set, and therefore the efficient outcome set, is empty should the auxiliary problem be unbounded. Thus, where the outcome set $f(X) \subseteq \mathbb{R}^{q}$ can be expressed as a sum $A+B$, it should in principle be feasible to determine, with the help of these and similar results, whether or not $\mathscr{E}(A+B)$ is empty. The same results would be appropriate for determining whether $\mathscr{E}(A)$ is empty, with a view to applying Theorem \ref{thm0} or \ref{thm2}, if $A$ were itself the outcome space of some criterion function $g(x) = (g_1(x), ..., g_q(x))$ that fulfilled the requisite conditions. Theorems \ref{thm0}-\ref{thm3} cover both finite and infinite sets. Theorems \ref{thm5}-\ref{thm8} deal strictly with finite sets, while the counterexamples in Remarks \ref{rem_cor1}, \ref{rem_thm6}, \ref{rem_thm7} and \ref{rem_thm8} stress the necessity of this assumption for the four theorems. It remains to be seen whether these theorems have counterparts in cases when $A$ is infinite, $B$ is infinite, or both. This shall be the object of future study. Consider, finally, our contention in Remark \ref{rem_generalized_equality} that Theorem \ref{thm1} carries over to the generalized set equality \begin{equation*}\mathscr{E}\biggl(A + \sum_{i = 1}^{n}B_i\biggr) = \mathscr{E}(A),\end{equation*} where the $B_i$ denote $n$ nonempty subsets of $G$, and $\sum_{i = 1}^{n}B_i = \{b_1 + ... + b_n: b_1 \in B_1, ..., b_n \in B_n\}$. It would appear that all of the results presented here are capable, \textit{mutatis mutandis}, of such a generalization. Further research will have to be conducted to establish (or refute) this formally. \backmatter \section*{Declarations of interest} The author has no competing interests to declare.
\bibliography{sn-bibliography} \end{document} On page 22 of \textit{Multiple-criteria Decision Making: Concepts, Techniques and Extensions}\footnote{New York, 2013.}, Po-Lung Yu asserts the following for an arbitrary set $Y \subseteq \mathbb{R}^{q}$, $q$ being an integer greater than or equal to 2 \citep[Theorem 3.2]{yu2013multiple}. If $\Lambda^{\leqq} = \{d \in \mathbb{R}^{q}: -d \geqq 0\}$, then $N(Y + \Lambda^{\leqq}) = N(Y)$, where $N(\cdot)$ denotes the set of efficient (nondominated) points of a set. Stated differently, if $Y$ is shifted by non-positive amounts, then, supposing $Y$ contains efficient points, among the points not affected by the shift (due to $0 \in \Lambda^{\leqq}$) are the efficient points, and these are also efficient in the shifted set. This insight plays an important role in Yu's investigation of the efficient points in the outcome (image) space of a multiple objective mathematical program. It is used, for example, to show that these points are optimal solutions of a parametric family of ordinary mathematical programs when $Y + \Lambda^{\leqq}$ is convex \citep[Theorem 3.8]{yu2013multiple}. However, Yu's proof is partly incorrect. To demonstrate that $N(Y + \Lambda^{\leqq}) \subseteq N(Y)$, he assumes by way of contradiction that a point $y^{\circ} \in N(Y + \Lambda^{\leqq})$ exists which is not in $N(Y)$. From this he infers that $y^{\circ}$ must be dominated in $Y$, i.e., that there is a $y \in Y$ such that $y \geqq y^{\circ}$ and $y \neq y^{\circ}$. But the inference is unwarranted. In fact, if $y^{\circ} \notin N(Y)$, and all we know is that $y^{\circ} \in N(Y + \Lambda^{\leqq})$, then either $y^{\circ} \in Y$ and $y^{\circ}$ is indeed dominated, or $y^{\circ} \notin Y$. The proof omits to consider the latter case. Notwithstanding this issue, Yu's identity holds, and can furthermore be generalized to situations where $Y$ and $\Lambda^{\leqq}$ are substituted with arbitrary, possibly non-Euclidean, sets, and where the relation used for comparing alternative points in those sets need not constitute an order relation.
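Yu's identity also admits a quick numerical sanity check in a finite setting. In the sketch below (our own surrogate, not Yu's construction), the cone $\Lambda^{\leqq}$ is replaced by a finite set $D$ of non-positive shift vectors containing the origin, and $N(\cdot)$ is computed as the Pareto-nondominated subset.

```python
import random

random.seed(0)

def dominates(y, x):
    """y >= x componentwise with y != x (domination in R^q)."""
    return all(u >= v for u, v in zip(y, x)) and y != x

def N(Y):
    """The efficient (nondominated) points of a finite set Y."""
    return {x for x in Y if not any(dominates(y, x) for y in Y)}

q = 3
for _ in range(200):
    Y = {tuple(random.randint(-5, 5) for _ in range(q)) for _ in range(8)}
    # D plays the role of Lambda^<=: non-positive shifts, with 0 included
    D = {(0,) * q} | {tuple(-random.randint(0, 3) for _ in range(q))
                      for _ in range(4)}
    YD = {tuple(u + v for u, v in zip(y, d)) for y in Y for d in D}
    assert N(YD) == N(Y)   # the finite analogue of Yu's identity
```

That the identity holds for such finite $D$ is exactly what the finite-set results of this paper (with $0_{G} \in B$ and every nonzero $b \leqq 0$) lead one to expect.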
The aim of this paper is to accomplish such a generalization and to amend Yu's proof.

Comment for Figure: ``Notice that $\mathscr{E}(A) = A$''.
2412.16386v1
http://arxiv.org/abs/2412.16386v1
Groupoid Cardinality and Random Permutations
\documentclass[reqno]{amsart} \usepackage{amssymb,amsmath,stmaryrd,txfonts,mathrsfs,amsthm}\usepackage{comment} \usepackage[mathscr]{euscript} \usepackage[neveradjust]{paralist} \usepackage{mathtools} \usepackage{multirow} \usepackage[outline]{contour} \contourlength{1.2pt} \usepackage{tikz} \usetikzlibrary{intersections,arrows.meta,calc,quotes,math,decorations.pathreplacing,decorations.markings,cd,arrows,positioning,fit,matrix, shapes.geometric,external,decorations.pathmorphing,backgrounds,circuits,circuits.ee.IEC,shapes} \usepackage[a4paper,top=3cm,bottom=3cm,inner=3cm,outer=3cm]{geometry} \usepackage[foot]{amsaddr} \usepackage{graphicx} \usepackage{stackrel} \usepackage{xcolor} \usepackage{framed,color} \definecolor{shadecolor}{rgb}{1,0.8,0.3} \definecolor{myurlcolor}{rgb}{0.5,0,0} \definecolor{mycitecolor}{rgb}{0,0,0.8} \definecolor{myrefcolor}{rgb}{0,0,0.8} \definecolor{hyperrefcolor}{rgb}{0.5,0,0} \usepackage{hyperref} \hypersetup{ colorlinks, linkcolor={mycitecolor}, citecolor={mycitecolor}, urlcolor={hyperrefcolor} } \newcommand{\backref}[1]{(Referred to on page #1.)} \usepackage[draft]{fixme} \usepackage[capitalize]{cleveref} \crefname{equation}{}{} \crefname{defn}{Definition}{Definitions} \crefname{thm}{Theorem}{Theorems} \crefname{lem}{Lemma}{Lemmas} \newcommand{\Int}{\raisebox{.3\depth}{$\smallint\hspace{-.01in}$}} \DeclareMathOperator\colim{colim} \DeclareMathOperator\eq{eq} \DeclareMathOperator\Aut{Aut} \DeclareMathOperator\End{End} \DeclareMathOperator\Hom{Hom} \DeclareMathOperator\Map{Map} \newcommand{\To}{\Rightarrow} \newcommand{\too}[1][]{\ensuremath{\overset{#1}{\longrightarrow}}} \newcommand{\oot}[1][]{\ensuremath{\overset{#1}{\longleftarrow}}} \let\toot\rightleftarrows \let\otto\leftrightarrows \let\maps\colon \let\xto\xrightarrow \let\xot\xleftarrow \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{unthm}{Unproved ``Theorem''} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} 
\newtheorem{qstn}[thm]{Question} \newtheorem*{utheorem}{Theorem} \newtheorem*{ulem}{Lemma} \newtheorem*{uprop}{Proposition} \newtheorem*{ucor}{Corollary} \newtheorem{prob}[thm]{Problem} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem{note}[thm]{Note} \newtheorem*{uremark}{Remark} \newtheorem*{unote}{Note} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem*{udefn}{defin} \newtheorem{expl}[thm]{Example} \newtheorem*{uexample}{Example} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\F}{\mathbb{F}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\Ob}{\mathrm{Ob}} \newcommand{\Mor}{\mathrm{Mor}} \newcommand{\Set}{\mathsf{Set}} \newcommand{\Perm}{\mathsf{Perm}} \newcommand{\B}{\mathsf{B}} \newcommand{\C}{\mathsf{C}} \newcommand{\J}{\mathsf{J}} \newcommand{\T}{\mathsf{T}} \newcommand{\D}{\mathsf{D}} \newcommand{\X}{\mathsf{X}} \newcommand{\Y}{\mathsf{Y}} \newcommand{\Z}{\mathsf{Z}} \newcommand{\Fin}{\mathsf{Fin}} \newcommand{\Rel}{\mathsf{Rel}} \newcommand{\Cospan}{\mathsf{Cospan}} \newcommand{\Csp}{\mathsf{Csp}} \newcommand{\one}{\mathsf{1}} \newcommand{\bicat}{\mathbf} \newcommand{\Dbl}{\bicat{Dbl}} \newcommand{\bCsp}{\bicat{Csp}} \newcommand{\bA}{\bicat{A}} \newcommand{\bB}{\bicat{B}} \newcommand{\bX}{\bicat{X}} \newcommand{\bY}{\bicat{Y}} \newcommand{\bD}{\bicat{D}} \newcommand{\Cat}{\bicat{Cat}} \newcommand{\MonCat}{\bicat{MonCat}} \newcommand{\Rex}{\bicat{Rex}} \newcommand{\SMC}{\bicat{SymMonCat}} \newcommand{\OpICat}{\bicat{OpICat}}\newcommand{\OpFib}{\bicat{OpFib}} \newcommand{\define}[1]{{\bf \boldmath{#1}}} \title{Groupoid cardinality and random permutations} \author{John\ C.\ Baez$^{1}$} \address{$^1$Department of Mathematics, University of California, Riverside CA, USA 92521} \email{[email protected]} \begin{document} \begin{abstract} If we treat the symmetric group \(S_n\) as a probability measure space where each element has measure \(1/n!\), then the number of cycles in a permutation becomes a random 
variable. The Cycle Length Lemma describes the expected values of products of these random variables. Here we categorify the Cycle Length Lemma by showing that it follows from an equivalence between groupoids. \end{abstract} \maketitle \section{Introduction} There is a well-behaved generalization of the concept of cardinality from finite sets to finite groupoids \cite{BD}. But what is it good for? As an illustration, here we use it to give a new proof of a known fact about random permutations: the Cycle Length Lemma \cite{Fo2}. In this lemma one treats the number of \(k\)-cycles in a permutation of \(n\) things as a random variable, where each permutation occurs with equal probability. The lemma says that in the limit as \(n \to \infty\), this random variable approaches a Poisson distribution with mean \(1/k\). Furthermore, in the \(n \to \infty\) limit these random variables become independent for different choices of \(k\). These are quick rough statements. In Section \ref{sec:cycle} we state the Cycle Length Lemma in a precise way. In Section \ref{sec:categorified} we prove a \emph{categorified} version of the Cycle Length Lemma, which asserts an equivalence of groupoids. In Section \ref{sec:groupoid} we derive the original version of the lemma from this categorified version by taking the cardinalities of these groupoids. The categorified version contains more information, so it is not just a trick for proving the original lemma (which is, after all, quite easy to show). Instead, it reveals the original lemma as a consequence of a stronger fact about groupoids. In Section \ref{sec:conclusion} we sketch how some of the ideas here generalize to other finite groups. \section{The Cycle Length Lemma} \label{sec:cycle} In the theory of random permutations, we treat the symmetric group \(S_n\) as a probability measure space where each element has the same measure, namely \(1/n!\). 
Functions \(f \colon S_n \to \mathbb{R}\) then become random variables, and we can study their expected values: \[ E(f) = \frac{1}{n!} \sum_{\sigma \in S_n} f(\sigma). \] An important example is the function \[ c_k \colon S_n \to \mathbb{N} \] that counts, for any permutation \(\sigma \in S_n\), its number of cycles of length \(k\), also called \(k\)-cycles. A well-known but striking fact about random permutations is that whenever \(k \le n\), the expected number of \(k\)-cycles is \(1/k\): \[ E(c_k) = \frac{1}{k} \] For example, a random permutation of any finite set has, on average, one fixed point! Another striking fact is that whenever \(j \ne k\) and \(j + k \le n\), so that it's possible for a permutation \(\sigma \in S_n\) to have both a \(j\)-cycle and a \(k\)-cycle, the random variables \(c_j\) and \(c_k\) are uncorrelated in the following sense: \[ E(c_j c_k) = E(c_j) E(c_k) . \] You might at first think that having many \(j\)-cycles for some large \(j\) would tend to inhibit the presence of \(k\)-cycles for some other large value of \(k\), but that is not true unless \(j + k > n\), when it suddenly becomes \emph{impossible} to have both a \(j\)-cycle and a \(k\)-cycle! These two facts are special cases of the Cycle Length Lemma. To state this lemma in full generality, recall that the number of ordered \(p\)-tuples of distinct elements of an \(n\)-element set is the \define{falling power} \[ x^{\underline{p}} = x(x-1)(x-2) \, \cdots \, (x-p+1). \] It follows that the function \[ c_k^{\underline{p}} \colon S_n \to \mathbb{N} \] counts, for any permutation in \(S_n\), its ordered \(p\)-tuples of distinct \(k\)-cycles. We can also replace the word `distinct' here by `disjoint', without changing the meaning, since distinct cycles must be disjoint. 
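The identity \(E(c_k) = 1/k\) is easy to check exhaustively for small \(n\). The following Python sketch (our own illustration, not part of the paper; the function names are ours) averages the number of \(k\)-cycles over all of \(S_n\), using exact rational arithmetic:

```python
import math
from fractions import Fraction
from itertools import permutations

def count_k_cycles(perm, k):
    """Number of k-cycles of a permutation of {0,...,n-1} in one-line notation."""
    seen, count = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:          # trace the cycle through start
            seen.add(i)
            i = perm[i]
            length += 1
        if length == k:
            count += 1
    return count

def expected_k_cycles(n, k):
    """E(c_k) over the uniform distribution on S_n, as an exact fraction."""
    total = sum(count_k_cycles(p, k) for p in permutations(range(n)))
    return Fraction(total, math.factorial(n))
```

For instance, `expected_k_cycles(5, 3)` returns `Fraction(1, 3)`, and `expected_k_cycles(4, 1)` returns \(1\): a random permutation of four things has one fixed point on average.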
The two striking facts mentioned above generalize as follows: \begin{enumerate} \item First, whenever \(p k \le n\), so that it is \emph{possible} for a permutation in \(S_n\) to have \(p\) distinct \(k\)-cycles, then \[ E(c_k^{\underline{p}}) = \frac{1}{k^p}. \] For readers familiar with the moments of a Poisson distribution, here is a nice equivalent way to state this equation: when \(p k \le n\), the \(p\)th moment of the random variable \(c_k\) equals that of a Poisson distribution with mean \(1/k\). \item Second, as \(n \to \infty\) the random variables \(c_k\) become better and better approximated by independent Poisson distributions. To state this precisely we need a bit of notation. Let \(\vec{p}\) denote an \(n\)-tuple \((p_1 , \dots, p_n)\) of natural numbers, and let \[ |\vec{p}| = p_1 + 2p_2 + \cdots + n p_n. \] If \(|\vec{p}| \le n\), it is possible for a permutation \(\sigma \in S_n\) to have a collection of distinct cycles, with \(p_1\) cycles of length 1, \(p_2\) cycles of length 2, and so on up to \(p_n\) cycles of length \(n\). If \(|\vec{p}| > n\), this is impossible. In the former case, where \(|\vec{p}| \le n\), we always have \[ E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) = \prod_{k=1}^n E( c_k^{\underline{p}_k}) . \] \end{enumerate} Taken together, 1) and 2) are equivalent to the Cycle Length Lemma, which may be stated in a unified way as follows: \vskip 1em \textbf{The Cycle Length Lemma}. Suppose \(p_1 , \dots, p_n \in \mathbb{N}\). Then \[ E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) = \left\{ \begin{array}{ccc} \displaystyle{ \prod_{k=1}^n \frac{1}{k^{p_k}} } & & \mathrm{if} \; |\vec{p}| \le n \\ \\ 0 & & \mathrm{if} \; |\vec{p}| > n \end{array} \right. \] This appears, for example, in Ford's comprehensive review of the statistics of cycle lengths in random permutations \cite[Lem.\ 3.1]{Fo2}. He attributes it to Watterson \cite[Thm.\ 7]{W}. 
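The full lemma can likewise be verified by brute force for small \(n\). The sketch below (our own, not from the paper) computes both sides exactly: the left side by averaging \(\prod_k c_k^{\underline{p_k}}\) over all of \(S_n\), the right side from the closed formula:

```python
import math
from fractions import Fraction
from itertools import permutations

def cycle_counts(perm):
    """counts[k] = number of k-cycles of a permutation in one-line notation."""
    counts, seen = {}, set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        counts[length] = counts.get(length, 0) + 1
    return counts

def falling(x, p):
    """Falling power x(x-1)...(x-p+1); the empty product 1 when p = 0."""
    result = 1
    for i in range(p):
        result *= x - i
    return result

def lemma_lhs(n, ps):
    """E(prod_k c_k^{falling p_k}) over S_n; ps = (p_1, ..., p_n). Exact."""
    total = 0
    for perm in permutations(range(n)):
        counts = cycle_counts(perm)
        term = 1
        for k, p in enumerate(ps, start=1):
            term *= falling(counts.get(k, 0), p)
        total += term
    return Fraction(total, math.factorial(n))

def lemma_rhs(n, ps):
    """prod_k 1/k^{p_k} if |p| = p_1 + 2 p_2 + ... + n p_n <= n, else 0."""
    if sum(k * p for k, p in enumerate(ps, start=1)) > n:
        return Fraction(0)
    result = Fraction(1)
    for k, p in enumerate(ps, start=1):
        result /= Fraction(k) ** p
    return result
```

Exhaustive checks for small \(n\) agree with the lemma in both regimes: for example, with \(n = 5\) and \(\vec{p} = (1,2,0,0,0)\) both sides equal \(1/4\), while \(\vec{p} = (0,0,0,0,2)\) has \(|\vec{p}| = 10 > 5\) and both sides vanish.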
The most famous special case is when \(|\vec{p}| = n\), which apparently goes back to Cauchy. For more details on the sense in which random variables \(c_k\) approach independent Poisson distributions, see Arratia and Tavar\'e \cite{AT}. \section{The Categorified Cycle Length Lemma} \label{sec:categorified} To categorify the Cycle Length Lemma, the key is to treat a permutation as an extra structure that we can put on a set, and then consider the groupoid of \(n\)-element sets equipped with this extra structure: \begin{defn} Let \(\Perm_n\) be the groupoid in which \begin{itemize} \item an object is an \(n\)-element set equipped with a permutation \(\sigma \colon X \to X\) \end{itemize} and \begin{itemize} \item a morphism from \(\sigma \colon X \to X\) to \(\sigma' \colon X' \to X'\) is a bijection \(f \colon X \to X'\) that is \define{permutation-preserving} in the following sense: \[ f \circ \sigma \circ f^{-1} = \sigma'. \] \end{itemize} \end{defn} We'll need the following strange fact below: if \(n < 0\) then \(\Perm_n\) is the empty groupoid (that is, the groupoid with no objects and no morphisms). More importantly, we'll need a fancier groupoid where a set is equipped with a permutation together with a list of distinct cycles of specified lengths. For any \(n \in \mathbb{N}\) and any \(n\)-tuple of natural numbers \(\vec{p} = (p_1 , \dots, p_n)\), recall that we have defined \[ |\vec{p}| = p_1 + 2p_2 + \cdots + n p_n. \] \begin{defn} Let \(\C_{\vec{p}}\) be the groupoid of \(n\)-element sets \(X\) equipped with a permutation \(\sigma \colon X \to X\) that is in turn equipped with a choice of an ordered \(p_1\)-tuple of distinct \(1\)-cycles, an ordered \(p_2\)-tuple of distinct \(2\)-cycles, and so on up to an ordered \(p_n\)-tuple of distinct \(n\)-cycles. A morphism in this groupoid is a bijection that is permutation-preserving and also preserves the ordered tuples of distinct cycles. 
\end{defn} Note that if \(|\vec{p}| > n\), no choice of disjoint cycles with the specified property exists, so \(\C_{\vec{p}}\) is the empty groupoid. Finally, we need a bit of standard notation. For any group \(G\) we write \(\mathsf{B}(G)\) for its \define{delooping}: that is, the groupoid that has one object \(\star\) and \(\mathrm{Aut}(\star) = G\). \begin{thm} {\bf (The Categorified Cycle Length Lemma.)} For any \(\vec{p} = (p_1 , \dots, p_n) \in \mathbb{N}^n\) we have \[ \C_{\vec{p}} \simeq \Perm_{n - |\vec{p}|} \; \times \; \prod_{k = 1}^n \mathsf{B}(\mathbb{Z}/k)^{p_k} \] \end{thm} \begin{proof} Both sides are empty groupoids when \(|\vec{p}| > n\), so assume \(|\vec{p}| \le n\). A groupoid is equivalent to any full subcategory of that groupoid containing at least one object from each isomorphism class. So, fix an \(n\)-element set \(X\) and a subset \(Y \subseteq X\) with \(n - |\vec{p}|\) elements. Partition \(X - Y\) into subsets \(S_{k\ell}\) where \(S_{k \ell}\) has cardinality \(k\), \(1 \le k \le n\), and \(1 \le \ell \le p_k\). Every object of \(\C_{\vec{p}}\) is isomorphic to the chosen set \(X\) equipped with some permutation \(\sigma \colon X \to X\) that has each subset \(S_{k \ell}\) as a \(k\)-cycle. Thus \(\C_{\vec{p}}\) is equivalent to its full subcategory containing only objects of this form. An object of this form consists of an arbitrary permutation \(\sigma_Y \colon Y \to Y\) and a cyclic permutation \(\sigma_{k \ell} \colon S_{k \ell} \to S_{k \ell}\) for each \(k,\ell\) as above. Consider a second object of this form, say \(\sigma'_Y \colon Y \to Y\) equipped with cyclic permutations \(\sigma'_{k \ell}\). Then a morphism from the first object to the second consists of two pieces of data. First, a bijection \[ f \colon Y \to Y \] such that \[ \sigma'_Y = f \circ \sigma_Y \circ f^{-1}. 
\] Second, for each \(k,\ell\) as above, a bijection \[ f_{k \ell} \colon S_{k \ell} \to S_{k \ell} \] such that \[ \sigma'_{k \ell} = f_{k \ell} \circ \sigma_{k \ell} \circ f_{k \ell}^{-1}. \] Since \(Y\) has \(n - |\vec{p}|\) elements, the data \(\sigma_Y\) and \(f\) form a groupoid equivalent to \(\Perm_{n - |\vec{p}|}\). Meanwhile, any two cyclic permutations of the \(k\)-element set \(S_{k \ell}\) are conjugate, and the bijections commuting with such a cyclic permutation are precisely its powers, forming a copy of \(\mathbb{Z}/k\); so the data \(\sigma_{k \ell}\) and \(f_{k \ell}\) form a groupoid equivalent to \(\mathsf{B}(\mathbb{Z}/k)\). It follows that \(\C_{\vec{p}}\) is equivalent to \[ \Perm_{n - |\vec{p}|} \; \times \; \prod_{k = 1}^n \mathsf{B}(\mathbb{Z}/k)^{p_k}. \qedhere \] \end{proof} The case where \(|\vec{p}| = n \) is especially pretty, since then our chosen cycles completely fill up our \(n\)-element set and we have \[ \C_{\vec{p}} \simeq \prod_{k = 1}^n \mathsf{B}(\mathbb{Z}/k)^{p_k}. \] \section{Groupoid Cardinality} \label{sec:groupoid} The cardinality of finite sets has a natural extension to finite groupoids, which turns out to be the key to extracting results on random permutations from category theory. We briefly recall this concept \cite{BD}. Any finite groupoid \(\mathsf{G}\) is equivalent to a coproduct of finitely many one-object groupoids, which are deloopings of finite groups \(G_1, \dots, G_m\): \[ \mathsf{G} \simeq \sum_{i = 1}^m \mathsf{B}(G_i), \] and then the \define{cardinality} of \(\mathsf{G}\) is defined to be \[ |\mathsf{G}| = \sum_{i = 1}^m \frac{1}{|G_i|}. \] This concept of groupoid cardinality has various nice properties. For example it is additive: \[ |\mathsf{G} + \mathsf{H}| = |\mathsf{G}| + |\mathsf{H}| \] and multiplicative: \[ |\mathsf{G} \times \mathsf{H}| = |\mathsf{G}| \times |\mathsf{H}| \] and invariant under equivalence of groupoids: \[ \mathsf{G} \simeq \mathsf{H} \implies |\mathsf{G}| = |\mathsf{H}|. \] None of these three properties forces us to define \(|\mathsf{G}|\) as the sum of the \emph{reciprocals} of the cardinalities \(|G_i|\): any other power of these cardinalities would work just as well.
What makes the reciprocal cardinalities special is that if \(G\) is a finite group acting on a set \(S\), we have \[ |S\sslash G| = |S|/|G| \] where the groupoid \(S \sslash G\) is the \define{weak quotient} or \define{homotopy quotient} of \(S\) by \(G\), also called the \define{action groupoid}. This is the groupoid with elements of \(S\) as objects and one morphism from \(s\) to \(s'\) for each \(g \in G\) with \(g s = s'\), with composition of morphisms coming from multiplication in \(G\). The groupoid of \(n\)-element sets equipped with a permutation, \(\Perm_n\), has a nice description in terms of weak quotients: \begin{lem} \label{lem:Perm} For all \(n \in \mathbb{N}\) we have an equivalence of groupoids \[ \Perm_n \simeq S_n \sslash S_n \] where the group \(S_n\) acts on the underlying set of \(S_n\) by conjugation. \end{lem} \begin{proof} We use the fact that \(\Perm_n\) is equivalent to any full subcategory of \(\Perm_n\) containing at least one object from each isomorphism class. For \(\Perm_n\) we can get such a subcategory by fixing an \(n\)-element set, say \(X = \{1,\dots, n\}\), and taking only objects of the form \(\sigma \colon X \to X\), i.e. \(\sigma \in S_n\). A morphism from \(\sigma \in S_n\) to \(\sigma' \in S_n\) is then a permutation \(\tau \in S_n\) such that \[ \sigma' = \tau \sigma \tau^{-1} .\] But this subcategory is precisely \(S_n \sslash S_n\). \end{proof} \begin{cor} For all \(n \in \mathbb{N}\) we have \[ |\Perm_n| = 1 \] \end{cor} \begin{proof} We have \(|\Perm_n| = |S_n \sslash S_n| = |S_n|/|S_n| = 1\). \end{proof} It should now be clear why we can prove results on random permutations using the groupoid \(\Perm_n\): this groupoid is equivalent to \(S_n \sslash S_n\), which has one object for each permutation \(\sigma \in S_n\), with each object contributing \(1/n!\) to the groupoid cardinality. Now let us use these ideas to derive the original Cycle Length Lemma from the categorified version.
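As a sanity check on the Corollary, one can also compute \(|S_n \sslash S_n|\) directly from the definition of groupoid cardinality: there is one term per conjugacy class, and the automorphism group of a permutation in this groupoid is its centralizer, of order \(z_\lambda = \prod_k k^{m_k}\, m_k!\) for cycle type \(\lambda\) with \(m_k\) parts equal to \(k\). A Python sketch (our own illustration, not from the paper):

```python
import math
from fractions import Fraction

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def centralizer_order(cycle_type):
    """|Z(sigma)| in S_n for sigma of the given cycle type: prod_k k^{m_k} m_k!."""
    mult = {}
    for part in cycle_type:
        mult[part] = mult.get(part, 0) + 1
    order = 1
    for k, m in mult.items():
        order *= k ** m * math.factorial(m)
    return order

def perm_groupoid_cardinality(n):
    """|Perm_n| = |S_n // S_n|: one term 1/|Aut| per conjugacy class."""
    return sum(Fraction(1, centralizer_order(lam)) for lam in partitions(n))
```

For every small \(n\) this sum of reciprocal centralizer orders is exactly \(1\), as the Corollary predicts.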
\begin{thm} {\bf (The Cycle Length Lemma.)} \label{thm:cycle_length_lemma} Suppose \(p_1 , \dots, p_n \in \mathbb{N}\). Then \[ E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) = \left\{ \begin{array}{ccc} \displaystyle{ \prod_{k=1}^n \frac{1}{k^{p_k}} } & & \mathrm{if} \; |\vec{p}| \le n \\ \\ 0 & & \mathrm{if} \; |\vec{p}| > n \end{array} \right. \] \end{thm} \begin{proof} We know that \[ \C_{\vec{p}} \simeq \Perm_{n - |\vec{p}|} \; \times \; \prod_{k = 1}^n \mathsf{B}(\mathbb{Z}/k)^{p_k} \] So, to prove the Cycle Length Lemma it suffices to show three things: \[ |\C_{\vec{p}}| = E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) \] \[ \left|\Perm_{n - |\vec{p}|}\right| = \left\{ \begin{array}{ccc} 1 & & \mathrm{if} \; |\vec{p}| \le n \\ \\ 0 & & \mathrm{if} \; |\vec{p}| > n \end{array} \right. \] and \[ |\mathsf{B}(\mathbb{Z}/k)| = 1/k \] The last of these is immediate from the definition of groupoid cardinality. The second follows from the Corollary above, together with the fact that \(\Perm_{n - |\vec{p}|}\) is the empty groupoid when \(|\vec{p}| > n\). Thus we are left needing to show that \[ |\C_{\vec{p}}| = E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right). \] We prove this by computing the cardinality of a groupoid equivalent to \(\C_{\vec p}\). We claim this groupoid is of the form \(Q_{\vec{p}} \sslash S_n\) where \(Q_{\vec{p}}\) is some set on which \(S_n\) acts. As a result we have \[ |\C_{\vec{p}}| = |Q_{\vec{p}} \sslash S_n| = |Q_{\vec{p}}| / n! \] and to finish the proof we need to show \[ E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) = |Q_{\vec{p}}| / n!\,. \] What is the set \(Q_{\vec{p}}\), and how does \(S_n\) act on it? An element of \(Q_{\vec{p}}\) is a permutation \(\sigma \in S_n\) equipped with an ordered \(p_1\)-tuple of distinct \(1\)-cycles, an ordered \(p_2\)-tuple of distinct \(2\)-cycles, and so on up to an ordered \(p_n\)-tuple of distinct \(n\)-cycles.
Any element \(\tau \in S_n\) acts on \(Q_{\vec{p}}\) in a natural way, by conjugating the permutation \(\sigma \in S_n\) to obtain a new permutation, and mapping the chosen cycles of \(\sigma\) to the corresponding cycles of this new conjugated permutation \(\tau \sigma \tau^{-1}\). Recalling the definition of the groupoid \(\C_{\vec{p}}\), it is clear that any element of \(Q_{\vec{p}}\) gives an object of \(\C_{\vec{p}}\), and any object is isomorphic to one of this form. Furthermore any permutation \(\tau \in S_n\) gives a morphism between such objects, all morphisms between such objects are of this form, and composition of these morphisms is just multiplication in \(S_n\). It follows that \[ \C_{\vec{p}} \simeq Q_{\vec{p}} \sslash S_n. \] To finish the proof, note that \[ E\left( \prod_{k=1}^n c_k^{\underline{p}_k} \right) \] is \(1/n!\) times the number of ways of choosing a permutation \(\sigma \in S_n\) and equipping it with an ordered \(p_1\)-tuple of distinct \(1\)-cycles, an ordered \(p_2\)-tuple of distinct \(2\)-cycles, and so on. This is the same as \( |Q_{\vec{p}}| / n!\). \end{proof} \section{Conclusion} \label{sec:conclusion} We have opted to treat an example rather than develop a general theory, but many of the ideas here go beyond the symmetric group. Any finite group \(G\) acts on itself by conjugation and gives a groupoid \(G \sslash G\) of cardinality \(1\). Any functor \(F \colon G \sslash G \to \Fin\Set\) describes a conjugation-equivariant structure we can put on elements of \(G\), with \(F(g)\) being the set of structures we can put on the element \(g \in G\). Taking the ordinary cardinality of these sets, we obtain a function \(|F| \colon G \to \N\). 
Its expected value with respect to the normalized Haar measure on \(G\) is, by definition, \[ E(|F|) = \frac{1}{|G|} \sum_{g \in G} |F(g)| .\] However, \(E(|F|)\) also equals the cardinality of a certain groupoid for which an object is an element \(g \in G\) equipped with a structure \(x \in F(g)\). This groupoid is the familiar \define{category of elements} of \(F\), denoted \(\Int F\), for which: \begin{enumerate} \item an object is a pair \((g,x)\) where \(g \in G\) and \(x \in F(g) \); \item a morphism from \((g,x)\) to \((g',x')\) is an element \(h \in G\) such that \(g' = h g h^{-1}\) and \(x' = F(h)(x)\); \item composition of morphisms is multiplication of group elements. \end{enumerate} \begin{thm} \label{thm:general} If \(G\) is a finite group and \(F \colon G \sslash G \to \Fin\Set\) is a functor, then \[ E(|F|) = \left|\Int F \right|. \] \end{thm} \begin{proof} Let \(\Ob(\Int F)\) be the set of objects of \(\Int F\). The group \(G\) acts on \(\Ob(\Int F)\), with \(h \in G\) mapping the object \((g,x)\) to the object \((hgh^{-1}, F(h)(x))\). Using the explicit description of \(\Int F\) in items (1)--(3) above, there is an evident isomorphism of groupoids \[ \Int F \cong \Ob(\Int F) \sslash G \] that is the identity on objects and sends each morphism \(h\) from \((g,x)\) to \((g',x')\) to the analogous morphism in \(\Ob(\Int F) \sslash G \). It follows that \[ \left|\Int F \right| = \left| \Ob(\Int F) \sslash G \right| = |\Ob(\Int F)|/|G| = \displaystyle{ \frac{1}{|G|} \sum_{g \in G} |F(g)| } = E(|F|) . \qedhere\] \end{proof} The expected value of \(|F|\) is its integral over \(G\) with respect to normalized Haar measure, so we can write it as \(\Int \, |F|\), and then the theorem above takes an amusing though perhaps confusing form: \[ \Int \, |F| = \left| \Int F \right|. 
\] The above theorem sheds new light on the proof of Theorem \ref{thm:cycle_length_lemma}, because the \(S_n\)-set \(Q_{\vec{p}}\) in that proof is none other than \(\Ob(\Int C_{\vec p})\) for the functor \(C_{\vec{p}} \colon S_n \sslash S_n \to \Fin\Set \) assigning to any permutation the set in which an element consists of an ordered \(p_1\)-tuple of distinct \(1\)-cycles, an ordered \(p_2\)-tuple of distinct \(2\)-cycles, and so on. Thus, the groupoid \(\C_{\vec{p}}\) is equivalent to \(\Int C_{\vec{p}}\). The same ideas apply to other structures that we can put on a finite set equipped with a permutation. The above theorem may also let us derive results about random elements of other groups from equivalences of groupoids. Results on \(\GL(n,\F_q)\) are promising candidates \cite{Fu}, since some are already proved using generating functions, which are connected to the category-theoretic techniques used here \cite{BD,BLL,J}, and there are powerful analogies between finite sets and finite-dimensional vector spaces over finite fields \cite{Le,Lo}. \begin{thebibliography}{5} \bibitem{AT} Richard Arratia and Simon Tavar\'e, The cycle structure of random permutations, \textsl{The Annals of Probability} \textbf{20} (1992), 1567--1591. Available at \href{https://doi.org/10.1214/aop/1176989707}{https://doi.org/10.1214/aop/1176989707}. \bibitem{BD} John C.\ Baez and James Dolan, From finite sets to Feynman diagrams, in \textsl{Mathematics Unlimited---2001 and Beyond}, vol. 1, eds. Bj\"orn Engquist and Wilfried Schmid, Springer, Berlin, 2001, pp.\ 29--50. Available as \href{http://arxiv.org/abs/math.QA/0004133}{arXiv:math.QA/0004133}. \bibitem{BLL} Fran\c cois Bergeron, Gilbert Labelle and Pierre Leroux, \textsl{Combinatorial Species and Tree-like Structures}, Cambridge U.\ Press, Cambridge, 1998. \bibitem{Fo2} Kevin Ford, Cycle type of random permutations: a toolkit, \textsl{Discrete Analysis} \textbf{29} (2022). Available as \href{https://arxiv.org/abs/2104.12019}{arXiv:2104.12019}.
\bibitem{Fu} Jason Fulman, Random matrix theory over finite fields: a survey, \textsl{Bulletin of the American Mathematical Society} \textbf{39} (2001), 51--85. Available as \href{https://arxiv.org/abs/math/0003195}{arXiv:math/0003195}. \bibitem{J} Andr\'e Joyal, Une th\'eorie combinatoire des s\'eries formelles, \textsl{Advances in Mathematics} \textbf{42} (1981), 1--82. \bibitem{Le} Tom Leinster, The probability that an operator is nilpotent, \textsl{American Mathematical Monthly} \textbf{128} (2021), 371--375. Available as \href{https://arxiv.org/abs/1912.12562}{arXiv:1912.12562}. \bibitem{Lo} Oliver Lorscheid, Algebraic groups over the field with one element, \textsl{Mathematische Zeitschrift} \textbf{271} (2012), 117--138. Available as \href{https://arxiv.org/abs/0907.3824}{arXiv:0907.3824}. \bibitem{W} G.\ A.\ Watterson, The sampling theory of selectively neutral alleles, \textsl{Advances in Applied Probability} \textbf{6} (1974), 463--488. \end{thebibliography} \end{document}
\documentclass{article} \usepackage{graphicx} \usepackage{amsmath,amssymb,fullpage,xcolor} \usepackage{amsthm,enumitem} \definecolor{darkgreen}{RGB}{51,117,56} \definecolor{burgundy}{RGB}{46,37,113} \definecolor{babyblue}{RGB}{30,144,255} \definecolor{beige}{RGB}{220,205,125} \definecolor{burgundy}{RGB}{126,041,084} \definecolor{pinkcheeks}{RGB}{194,106,119} \definecolor{realpurple}{RGB}{159,074,150} \definecolor{babyteal}{RGB}{093,168,153} \usepackage{tikz,verbatim} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{decorations.markings} \usetikzlibrary{arrows} \usepackage{ytableau, ifthen} \usepackage{hyperref} \usepackage{stmaryrd} \usepackage{subcaption} \newcommand{\op}{\operatorname} \newcommand{\ytab}[1]{\begin{ytableau} #1 \end{ytableau}} \ytableausetup{centertableaux, smalltableaux} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{quest}[thm]{Question} \newtheorem*{thmA}{Theorem \ref{thm:A}} \newtheorem*{thmB}{Theorem \ref{thm:B}} \newtheorem*{thmMotzBij}{Theorem \ref{thm:Motzkin_bijection}} \newtheorem*{thmwalks_bijection}{Theorem \ref{thm:walks_bijection}} \newtheorem*{thmICAn}{Theorem \ref{thm:ICAn}} \newtheorem*{thmICP}{Theorem \ref{thm:ICP}} \newtheorem*{cor3xn}{Corollary \ref{cor:3xncor}} \theoremstyle{definition} \newtheorem{definition}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{remark}[thm]{Remark} \newcommand{\IC}{\mathcal{IC}} \renewcommand{\O}{\mathcal{O}} \newcommand{\row}{\mathrm{Row}} \newcommand{\Max}{\mathrm{Max}} \newcommand{\Min}{\mathrm{Min}} \newcommand{\fl}{\mathrm{Floor}} \newcommand{\inc}{\mathrm{Inc}} \newcommand{\comp}{\mathrm{Comp}} \newcommand{\f}{\nabla} \newcommand{\oi}{\Delta} \newcommand{\tog}{\mathfrak{T}} \newcommand{\ceil}[1]{\mathrm{Ceil}({#1})} \newcommand{\A}{\inc_I\big(\ceil{I}\big)} \newcommand{\B}{\ceil{I}} 
\newcommand{\C}{\Min(I)} \newcommand{\F}{\Min(I)\cap\oi\ceil{I}} \newcommand{\arow}{\inc(I)\cup\Big(\oi\inc_{I}\big(\ceil{I}\big) -\big(I\cup\oi\ceil{I}\big)\Big)\cup\Big(\oi\ceil{I}-\oi(\F) \Big)} \newcommand{\arowcomp}{\Big(\oi\inc_I(\ceil{I})-\big(I\cup\oi\ceil{I}\big)\Big)\cup\Big(\oi\ceil{I}-\oi\big(\F\big)\Big)} \newcommand{\mm}{\mathfrak{M}} \newcommand\Lmn{\mathcal{L}_{m,n}} \newcommand\Lmnr{\mathcal{L}_{m,n;r}} \newcommand\LLmn{\mathcal{L}^{2}_{m,n}} \newcommand\LLmnr{\mathcal{L}^{2}_{m,n;r}} \newcommand\MMl{\mathcal{M}^{2}_\ell} \newcommand\MMmn{\mathcal{M}^{2}_{m,n}} \newcommand\MMn{\mathcal{M}^{2}_{2n}} \newcommand\MM{\mathcal{M}^{2}} \newcommand\tMM{\widetilde{\mathcal{M}}^{2}} \newcommand\tMMl{\widetilde{\mathcal{M}}^{2}_\ell} \newcommand\tMMmn{\widetilde{\mathcal{M}}^{2}_{m,n}} \renewcommand\SS{\mathcal{S}^{2}} \newcommand\SSn{\mathcal{S}^{2}_n} \newcommand\tSS{\widetilde{\SS}} \newcommand\tSSn{\widetilde{\SSn}} \newcommand\card[1]{\left|#1\right|} \newcommand{\bA}{\mathbf A} \newcommand{\fB}{\mathfrak B} \newcommand{\bB}{\mathbf B} \newcommand\Dn{\mathcal{D}_{n}} \newcommand\DDn{\mathcal{D}^{2}_{n}} \newcommand\Wo{\mathcal{W}^0} \newcommand\W{\mathcal{W}} \newcommand\tW{\widetilde{\mathcal{W}}} \newcommand\tWo{\widetilde{\mathcal{W}}^0} \newcommand\tWu{\widetilde{\mathcal{W}}} \newcommand{\e}{\textnormal{\texttt{e}}} \newcommand{\w}{\textnormal{\texttt{w}}} \newcommand{\nw}{\textnormal{\texttt{nw}}} \newcommand{\se}{\textnormal{\texttt{se}}} \newcommand{\uu}{\textnormal{\texttt{u}}} \newcommand{\dd}{\textnormal{\texttt{d}}} \newcommand{\hh}{\textnormal{\texttt{h}}} \newcommand{\jessica}[1]{\textcolor{teal}{Jessica:[#1]}} \newcommand{\mandy}[1]{\textcolor{magenta}{Mandy:[#1]}} \newcommand{\erin}[1]{\textcolor{purple}{Erin:[#1]}} \newcommand{\nadia}[1]{\textcolor{orange}{Nadia:[#1]}} \newcommand{\jbl}[1]{\textcolor{darkgreen}{Joel: [#1]}} \newcommand{\sergi}[1]{\textcolor{red}{Sergi:[#1]}} \newcommand{\bb}{\textbf} \title{Enumeration of 
interval-closed sets via Motzkin paths and quarter-plane walks} \author{Sergi Elizalde$^a$ \and Nadia Lafreni\`ere$^b$ \and Joel Brewster Lewis$^c$ \and Erin McNicholas$^d$ \and Jessica Striker$^e$ \and Amanda Welch$^f$} \date{\small $^a$ Dartmouth College, Department of Mathematics, 6188 Kemeny Hall, Hanover, NH 03755, USA. [email protected]\\ $^b$ Concordia University, Department of Mathematics and Statistics, 1455 De Maisonneuve Blvd.\ W., Montreal, Quebec H3G 1M8, Canada. [email protected]\\ $^c$ The George Washington University, Department of Mathematics, 801 22nd St.\ NW, Washington, DC, USA. [email protected]\\ $^d$ Willamette University, Department of Mathematics, 900 State St, Salem, Oregon 97301, USA. [email protected]\\ $^e$ North Dakota State University, Department of Mathematics, 1340 Administration Ave, Fargo, ND 58105, USA. [email protected]\\ $^f$ Eastern Illinois University, Department of Mathematics and Computer Science, 600 Lincoln Avenue, Charleston IL, 61920, USA. [email protected]\\ } \begin{document} \maketitle \begin{abstract} We find a generating function for interval-closed sets of the product of two chains poset by constructing a bijection to certain bicolored Motzkin paths. We also find a functional equation for the generating function of interval-closed sets of truncated rectangle posets, including the type $A$ root poset, by constructing a bijection to certain quarter-plane walks. \end{abstract} \section{Introduction} Interval-closed sets of partially ordered sets, or posets, are an interesting generalization of both order ideals (downward-closed subsets) and order filters (upward-closed subsets). Also called convex subsets, the interval-closed sets of a poset $P$ are defined to be the subsets $I\subseteq P$ such that if $x,y\in I$ and there is an element $z$ with $x<z<y$, then $z\in I$. In other words, $I$ contains all elements of $P$ between any two elements of $I$. 
Interval-closed sets are important in operations research and arise in applications such as project scheduling and assembly line balancing \cite{Convex2015}. Although order ideals of posets have been well studied from enumerative, bijective, and dynamical perspectives, interval-closed sets have not received as much attention. A recent paper \cite{ELMSW} initiated the study of interval-closed sets of various families of posets from enumerative and dynamical perspectives. In this paper, we continue to study the enumeration of interval-closed sets of specific families of posets, finding useful bijections along the way, while in the companion paper \cite{LLMSW}, we extend the study of interval-closed set rowmotion dynamics. The main results of the present paper include a generating function for interval-closed sets of the product of two chains poset $[m]\times[n]$, from which we extract explicit formulas for small values of $m$, and functional equations for the generating functions of interval-closed sets of truncated rectangle posets, a family that includes the type $A$ root posets. In both cases, we define bijections from interval-closed sets to various kinds of lattice paths, namely, certain bicolored Motzkin paths and quarter-plane walks. Our first main result, stated as Theorem~\ref{thm:Motzkin_bijection}, is a bijection between the set of interval-closed sets of $[m]\times[n]$ and the set of bicolored Motzkin paths with certain restrictions; specifically, the number of up steps and horizontal steps of the first color is $m$, the number of down steps and horizontal steps of the second color is $n$, and no horizontal step of the second color on the $x$-axis is followed by a horizontal step of the first color. We use this bijection to find the following generating function.
\begin{thmA} The generating function of interval-closed sets of $[m]\times[n]$ is given by $$\sum_{m,n\ge0} \card{\IC([m]\times[n])}\, x^m y^n=\frac{2}{1-x-y+2xy+\sqrt{(1-x-y)^2-4xy}}.$$ \end{thmA} One may use this generating function to extract counting formulas for fixed values of $m$, such as the following result. \begin{cor3xn} The cardinality of $\IC([3]\times[n])$ is $$\frac{n^{6}+9 n^{5}+61 n^{4}+159 n^{3}+370 n^{2}+264 n +144}{144}.$$ \end{cor3xn} Let $\fB_n$ denote the type $B_n$ minuscule poset (illustrated in Figure~\ref{fig:B_minuscule}), whose interval-closed sets are in bijection with vertically symmetric interval-closed sets of $[n]\times[n]$. \begin{thmB} The generating function of interval-closed sets of $\fB_n$ is given by $$\sum_{n\ge0} \card{\IC(\fB_n)}\, x^n=\frac{4-10x+8x^2}{2-11x+14x^2-8x^3-(2-3x)\sqrt{1-4x}}.$$ \end{thmB} Let $\bA_n$ denote the type $A_n$ positive root poset (illustrated in Figure~\ref{fig:A14}). In Theorem~\ref{thm:walks_bijection}, we construct a bijection between the set of interval-closed sets of $\bA_{n-1}$ and the set of lattice walks in the first quadrant that start and end at the origin and consist of $2n$ steps from the set $\{ (1,0),(-1,0),(1,-1),(-1,1)\}$, where no $(-1,0)$ step on the $x$-axis is immediately followed by a $(1,0)$ step. We use this bijection to derive the following functional equation for the generating function. \begin{thmICAn} The generating function of interval-closed sets of $\bA_{n-1}$ can be expressed as $$\sum_{n\ge0} \card{\IC(\bA_{n-1})}z^{2n}=F(0,0,z),$$ where $F(x,y):=F(x,y,z)$ satisfies the functional equation \begin{equation*} F(x,y)= 1+z\left(x+\frac{1}{x}+\frac{x}{y}+\frac{y}{x}\right)F(x,y) - z \left(\frac{1}{x}+\frac{y}{x}\right)F(0,y) - z\, \frac{x}{y} F(x,0) - z^2\, \left(F(x,0)-F(0,0)\right). 
\end{equation*} \end{thmICAn} We derive in Theorems~\ref{thm:walks_bijection_truncated} and~\ref{thm:ICP} generalizations of these theorems to the poset obtained by truncating the bottom $d$ ranks from $[m] \times [n]$. (Note that $\bA_{n-1}$ may be obtained by truncating the bottom $n$ ranks from $[n]\times[n]$.) We also find a similar functional equation in Theorem~\ref{thm:BrootGF} for symmetric ICS of $\bA_{n-1}$ and use this to extract the enumeration of ICS of the type $B$ positive root poset (illustrated in Figure~\ref{ex_typeB}). The paper is organized as follows. Section~\ref{sec:def} gives necessary poset-theoretic definitions and states relevant enumerative theorems from \cite{ELMSW}. Section~\ref{sec:rectangle} studies interval-closed sets of $[m]\times[n]$ and their corresponding bicolored Motzkin paths, proving the bijection of Theorem~\ref{thm:Motzkin_bijection}, and the generating functions of Theorems \ref{thm:A} and \ref{thm:B}. It also proves Theorem \ref{thm:Motzkin_stats_bijection}, which translates statistics of interest on each side of the bijection. Section~\ref{sec:TypeAroot} studies interval-closed sets of {the type $A$ root posets} and truncated rectangle posets, proving Theorems~\ref{thm:walks_bijection} and \ref{thm:ICAn} on the poset $\bA_{n-1}$, Theorem \ref{thm:BrootGF} on symmetric ICS of $\bA_{n-1}$, and Theorems \ref{thm:walks_bijection_truncated} and \ref{thm:ICP} on truncated rectangle posets. Section~\ref{sec:TypeAroot} also contains Theorem~\ref{statistics_walks}, which again translates statistics across the relevant bijection. We end in Section~\ref{sec:future} with some ideas for future work. \section{Definitions and background} \label{sec:def} Let $P$ be a partially ordered set (poset). All posets in this paper are finite. Below we introduce the poset-theoretic definitions that are most relevant to this paper, and refer to \cite[Ch.\ 3]{Stanley2011} for a more thorough discussion. 
\begin{definition} \label{def:ics} Let $I\subseteq P$. We say that $I$ is an \emph{interval-closed set (ICS)} of $P$ if for all $x, y \in I$ and $z\in P$ such that $x < z < y$, we have $z \in I$. Let $\IC(P)$ denote the set of all interval-closed sets of $P$. \end{definition} \begin{definition}\label{def:oi_of} A subset $J\subseteq P$ is an \emph{order ideal} if whenever $b\in J$ and $a\leq b$, we have $a\in J$. A subset $K$ is an \emph{order filter} if whenever $a\in K$ and $a\leq b$, we have $b\in K$. Given $S\subseteq P$, let $\oi(S)$ denote the smallest order ideal containing $S$, and let $\f(S)$ denote the smallest order filter containing $S$. \end{definition} \begin{definition}\label{def:chain} The $n$-element \textit{chain poset} has elements $1<2<\cdots<n$ and is denoted by $[n]$. In this paper, we study the poset constructed as the \emph{Cartesian product} of two chains. Its elements are $[m]\times [n]=\{(i,j) \ | \ 1\leq i\leq m, 1\leq j\leq n\}$, and the partial order is given by $(a,b)\leq (c,d)$ if and only if $a\leq c$ and $b\leq d$. \end{definition} Our convention is to draw the Hasse diagram of $[m]\times[n]$ as a tilted rectangle with poset element $(1,1)$ at the bottom, incrementing the first coordinate in the northeast direction and the second coordinate in the northwest direction, as in Figure \ref{fig:ex_ICS}. 
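Definition \ref{def:ics} is straightforward to implement. The following Python sketch (our own illustration, not part of the paper; the names are ours) tests whether a subset of $[m]\times[n]$ is interval-closed, using the componentwise order of Definition \ref{def:chain}:

```python
def leq(a, b):
    """Componentwise order on [m] x [n]: (a1, a2) <= (b1, b2)."""
    return a[0] <= b[0] and a[1] <= b[1]

def is_interval_closed(subset, poset_elements):
    """True if every z strictly between two elements of `subset` lies in `subset`."""
    s = set(subset)
    return all(z in s
               for x in s for y in s
               for z in poset_elements
               if leq(x, z) and leq(z, y) and z != x and z != y)

# The diamond poset [2] x [2]:
diamond = [(i, j) for i in (1, 2) for j in (1, 2)]
```

For instance, $\{(1,2),(2,1)\}$ is interval-closed in $[2]\times[2]$ since its two elements are incomparable, while $\{(1,1),(2,2)\}$ is not: both $(1,2)$ and $(2,1)$ lie strictly between $(1,1)$ and $(2,2)$.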
\begin{figure}[htbp] \centering \begin{tikzpicture}[scale=.5] \foreach \x in {0,...,6} {\foreach \y in {0,...,8} {\fill (\x - \y, \x + \y) circle (0.1cm) {}; \ifthenelse{\x < 6} {\draw (\x - \y, \x + \y) -- (\x - \y + 1, \x + \y + 1);}{} \ifthenelse{\y < 8} {\draw (\x - \y, \x + \y) -- (\x - \y - 1, \x + \y+1);}{} } } \fill[blue] (5 - 0, 5 + 0) circle (0.2cm) {}; \fill[blue] (5 - 1, 5 + 1) circle (0.2cm) {}; \fill[blue] (4 - 2, 4 + 2) circle (0.2cm) {}; \fill[blue] (3 - 2, 3 + 2) circle (0.2cm) {}; \fill[blue] (3 - 3, 3 + 3) circle (0.2cm) {}; \fill[blue] (0 - 8, 0 + 8) circle (0.2cm) {}; \fill[blue] (0 - 7, 0 + 7) circle (0.2cm) {}; \fill[blue] (0 - 6, 0 + 6) circle (0.2cm) {}; \fill[blue] (1 - 7, 1 + 7) circle (0.2cm) {}; \fill[blue] (1 - 6, 1 + 6) circle (0.2cm) {}; \fill[blue] (1 - 5, 1 + 5) circle (0.2cm) {}; \draw (0 - 8, 0 + 8) node[left=.25em] {$(1, 9)$}; \draw (6 - 0, 6 + 0) node[right=.25em] {$(7, 1)$}; \draw[decoration={brace, raise=.5em},decorate] (0 - 8,0 + 8) -- node[above left=.5em] {$m = 7$} (6 - 8, 6 + 8); \draw[decoration={brace, raise=.5em, mirror},decorate] (6 - 0,6 + 0) -- node[above right=.5em] {$n = 9$} (6 - 8, 6 + 8); \end{tikzpicture} \caption{An interval-closed set of the poset $[7]\times[9]$} \label{fig:ex_ICS} \end{figure} \begin{definition}\label{def:antichain} An \emph{antichain poset} of $m$ distinct, pairwise incomparable elements is denoted by $\mathbf{m}$. The \emph{ordinal sum of $n$ antichains} $\mathbf{a}_1\oplus\mathbf{a}_2\oplus\cdots\oplus\mathbf{a}_n$ is the poset on the union of the elements of these antichain posets, with order relation $a\leq b$ whenever $a\in\mathbf{a}_i$, $b\in\mathbf{a}_j$, and $i<j$, together with the reflexive relations $a\leq a$. \end{definition} In \cite{ELMSW}, the authors enumerated interval-closed sets of various families of posets. Generalizing the simple fact that the cardinality of $\IC([n])$ is $\binom{n+1}{2}+1$, they counted interval-closed sets of ordinal sums of antichains.
\begin{thm}[\protect{\cite[Thm.\ 3.3]{ELMSW}}]\label{thm:gen_ord_sum_ics_card} The cardinality of $\IC(\mathbf{a}_1\oplus\mathbf{a}_2\oplus\cdots\oplus\mathbf{a}_n)$ is $1+\sum_{1\leq i\leq n}(2^{a_i}-1)+\sum_{1\leq i<j\leq n}(2^{a_i}-1)(2^{a_j}-1)$. \end{thm} They also gave a direct enumeration of ICS in $[2]\times[n]$. \begin{thm}[\protect{\cite[Thm.\ 4.2]{ELMSW}}]\label{prodofchainICS} The cardinality of $\IC([2] \times [n])$ is $1+n+n^2+ \frac{n+1}{2} \binom{n+2}{3}$. \end{thm} Finally, they enumerated certain ICS in $[m]\times[n]$. \begin{thm}[\protect{\cite[Thm.\ 4.4]{ELMSW}}]\label{thm:Narayana} The number of interval-closed sets of $[m] \times [n]$ containing at least one element of the form $(a, b)$ for each $a \in [m]$ is the Narayana number \[ N(m+n,n) = \frac{1}{m+n}\binom{m+n}{n}\binom{m+n}{n-1} . \] \end{thm} In the next section, we study interval-closed sets of $[m]\times[n]$, interpreting them in terms of pairs of lattice paths as well as certain colored Motzkin paths; we then derive an explicit generating function for their enumeration. \section{Interval-closed sets of rectangle posets and bicolored Motzkin paths} \label{sec:rectangle} In this section, we prove Theorem~\ref{thm:A}, which gives a generating function enumerating interval-closed sets of the poset $[m]\times[n]$. We begin by giving two bijections from interval-closed sets of $[m]\times[n]$ to pairs of lattice paths. The first pair $(L,U)$ consists of the \emph{upper} and \emph{lower} paths that trace out the smallest order ideal and order filter, respectively, containing an interval-closed set. We discuss this bijection and its implications in Subsection~\ref{ssec:latticepaths_rectangles}. In Subsection~\ref{ssec:bicolored} we give a bijection to the pair of paths $(B,T)$ (\emph{bottom} and \emph{top} paths) which trace out, respectively, the largest order ideal that does not contain the ICS and the smallest order ideal that does contain the ICS. 
We then prove Theorem \ref{thm:Motzkin_bijection}, which uses these paths to give a bijection between $\IC([m]\times[n])$ and certain bicolored Motzkin paths. Subsection~\ref{sec:directGF} uses this bijection to prove Theorem~\ref{thm:A}. Subsection~\ref{ssec:extracting_formulas} extracts the coefficients of this generating function for small parameter values, giving for example a formula for $\card{\IC([3]\times[n])}$. Subsection~\ref{sec:Motzkin_stats} translates statistics between interval-closed sets and Motzkin paths via the bijection of Theorem \ref{thm:Motzkin_bijection}. Finally, Subsection~\ref{sec:Bminuscule} proves Theorem~\ref{thm:B}, giving a generating function for interval-closed sets of the type $B_n$ minuscule poset, or, equivalently, vertically symmetric ICS in $[n]\times[n]$. \subsection{A bijection to pairs of paths} \label{ssec:latticepaths_rectangles} In this subsection, we associate a pair of paths $(L,U)$ to each interval-closed set in $[m]\times [n]$. We then use these paths in Proposition~\ref{prop:fullNarayana} to show that certain interval-closed sets, which we call \emph{full}, are enumerated by the Narayana numbers. Finally, we characterize in Lemma~\ref{prop:paths_in_poset_language} several subsets of the poset in terms of these paths. Denote by $\mathcal{L}_{m,n}$ the set of lattice paths in $\mathbb{R}^2$ from $(0, n)$ to $(m + n, m)$ with steps $\uu=(1,1)$ and $\dd=(1,-1)$. It is well known that $\card{\mathcal{L}_{m,n}}=\binom{m+n}{m}$. There is a standard bijection between order ideals of $[m]\times[n]$ and $\mathcal{L}_{m,n}$ (see e.g.,~\cite[Def.~4.14, Fig.~6]{SW2012}). This bijection proceeds by constructing, on the dual graph of the Hasse diagram, a path that separates the order ideal from the rest of the poset. 
The path begins to the left of the leftmost poset element ($(1,n)$ in poset coordinates), ends to the right of the rightmost poset element ($(m,1)$ in poset coordinates), and consists of $m$ up-steps $\uu$ and $n$ down-steps $\dd$. (Note that the Cartesian coordinates in $\mathbb{R}^2$, which we use for the paths, are different from the coordinates that we use to refer to elements of the poset.) A similar path may be constructed to separate an order filter from the rest of the poset. Given an interval-closed set $I$ of $[m] \times [n]$, let us describe how to associate a pair of lattice paths $(L,U)$ to $I$. Let $U$ be the path separating the order ideal $\oi(I)$ from the rest of the poset, and $L$ be the path separating the order filter $\f(I)$ from the rest of the poset. Both paths begin at $\left(0,n\right)$, end at $\left(m + n,m\right)$, and consist of steps $\uu = (1, 1)$ and $\dd = (1, -1)$. Among all such paths, the \emph{upper path} $U$ is the lowest path that leaves all the elements of $I$ below it, while the \emph{lower path} $L$ is the highest path that leaves all the elements of $I$ above it. See Figure \ref{fig:UL} for an example. 
\begin{figure}[htb] \centering \rotatebox{45}{\begin{tikzpicture}[scale=.7] \fill[beige] (-.25, 7.25) -- (5.25, 7.25) -- (5.25, 1.75) -- (4.75, 1.75) -- (4.75, 2.75) -- (3.75, 2.75) -- (3.75, 3.75) -- (2.75, 3.75) -- (2.75, 4.75) -- (1.75, 4.75) -- (1.75, 6.75) -- (-.25, 6.75) -- cycle; \fill[pinkcheeks] (2, 4) circle (.35cm); \fill[lightgray] (-.25, .75) -- (-.25, 5.25) -- (.25, 5.25) -- (.25, 4.25) -- (1.25, 4.25) --(1.25, 3.25) -- (2.25, 3.25) --(2.25, 1.25) --(4.25, 1.25) --(4.25, .75) --cycle; \foreach \x in {0,...,5} {\foreach \y in {1,...,7} {\fill (\x, \y) circle (0.07cm) {}; \ifthenelse{\x < 5} {\draw (\x , \y) -- (\x + 1, \y);}{} \ifthenelse{\y < 7} {\draw (\x, \y) -- (\x, \y+1);}{} } } \fill[blue] (5 , 1) circle (0.14cm) {}; \fill[blue] (4 , 2) circle (0.14cm) {}; \fill[blue] (3 , 2) circle (0.14cm) {}; \fill[blue] (3 , 3) circle (0.14cm) {}; \fill[blue] (0 , 6) circle (0.14cm) {}; \fill[blue] (1 , 6) circle (0.14cm) {}; \fill[blue] (1 , 5) circle (0.14cm) {}; \draw[very thick, realpurple, dashed] (5.5, .5) -- (5.5, 1.52) node[xshift=0.25cm, yshift=0.25cm] {\rotatebox{-45}{\large $U$}} -- (4.52, 1.52) -- (4.52, 2.5) -- (3.5, 2.5) -- (3.5, 3.5) -- (1.5, 3.5) -- (1.5, 6.5) -- (-0.48, 6.5) -- (-0.48, 7.5); \draw[very thick, darkgreen] (5.5, .5) -- (4.48, 0.5) node[xshift=-.25cm, yshift=-.25cm]{\rotatebox{-45}{\large $L$}} -- (4.48, 1.48) -- (2.5, 1.48) -- (2.5, 4.5) --(0.5, 4.5) -- (0.5, 5.5) -- (-.52, 5.5) -- (-0.52, 7.5); \end{tikzpicture}} \caption{An interval-closed set $I$ of $P = [6]\times[7]$ (shown with the small blue dots) and its associated upper and lower paths $U$ (dashed) and $L$. The large pink dot is the only element of $P$ incomparable with $I$, as it is below $L$ and above $U$. The order filter $\f(I)$ consists of the elements of $I$ and the elements in the beige region, whereas $\oi(I)$ consists of the elements of $I$ and the elements in the gray region.} \label{fig:UL} \end{figure} Say that $I$ is \emph{full} if $L$ and $U$ share no points other than their endpoints.
The enumeration of full interval-closed sets is closely related to Theorem~\ref{thm:Narayana}. \begin{prop} \label{prop:fullNarayana} The number of full interval-closed subsets of $[m] \times [n]$ is the Narayana number \[ N(m+n-1,n) = \frac{1}{m + n - 1} \binom{m + n - 1}{m} \binom{m + n - 1}{n}. \] \end{prop} \begin{proof} Consider $I\in \IC([m]\times[n])$ and define a ``shift'' map $\varphi$ on the associated paths $U$ and $L$, as follows: $\varphi$ adds an up-step $\uu$ to the beginning of $U$ and an up-step $\uu$ to the end of $L$. This results in a pair of paths $\varphi(U)=\uu U$ and $\varphi(L)=L\uu$ in the poset $[m+1]\times[n]$; see Figure \ref{fig:shiftmap} for an example. When we start with an ICS in $[m] \times [n]$ that has at least one element of the form $(a, b)$ for each $a \in [m]$, the associated path $U$ is weakly above the path $L$. Therefore, after shifting, the new path $\varphi(U)$ is strictly above the new path $\varphi(L)$ (except at their endpoints), and so the associated ICS in $[m+1]\times[n]$ is full. 
\begin{figure}[htb] \begin{center} \rotatebox{45}{\begin{tikzpicture}[scale=.7] \foreach \x in {1,...,3} {\foreach \y in {1,...,7} {\fill (\x, \y) circle (0.07cm) {}; \ifthenelse{\x < 3} {\draw (\x , \y) -- (\x + 1, \y);}{} \ifthenelse{\y < 7} {\draw (\x, \y) -- (\x, \y+1);}{} } } \fill[blue] (1, 6) circle (0.14cm) {}; \fill[blue] (1, 5) circle (0.14cm) {}; \fill[blue] (2, 4) circle (0.14cm) {}; \fill[blue] (3, 2) circle (0.14cm) {}; \fill[blue] (3, 1) circle (0.14cm) {}; \draw[realpurple, very thick, dashed] (3.5, .5) -- (3.5, 2.5) -- (2.52, 2.5) -- (2.52, 4.52) -- (1.52, 4.52) -- (1.52, 6.5) -- (.52, 6.5) -- (.52, 7.5); \draw[darkgreen, very thick] (3.5, .5) -- (2.48, .5) -- (2.48, 3.5) -- (1.5, 3.5) -- (1.48, 4.48) -- (0.48, 4.5) -- (.48, 7.5); \end{tikzpicture}} \raisebox{3cm}{$\longrightarrow$} \rotatebox{45}{\begin{tikzpicture}[scale=.7] \foreach \x in {1,...,4} {\foreach \y in {1,...,7} {\fill (\x, \y) circle (0.07cm) {}; \ifthenelse{\x < 4} {\draw (\x , \y) -- (\x + 1, \y);}{} \ifthenelse{\y < 7} {\draw (\x, \y) -- (\x, \y+1);}{} } } \fill[blue] (1, 6) circle (0.14cm) {}; \fill[blue] (1, 5) circle (0.14cm) {}; \fill[blue] (2, 4) circle (0.14cm) {}; \fill[blue] (3, 2) circle (0.14cm) {}; \fill[blue] (3, 1) circle (0.14cm) {}; \draw[realpurple, very thick, dashed] (4.5, .5) -- (4.5, 2.5) -- (3.5, 2.5) -- (3.5, 4.5) -- (2.5, 4.5) -- (2.5, 6.5) -- (1.5, 6.5) -- (1.5, 7.5) -- (.5, 7.5); \draw[darkgreen, very thick] (4.5, .5) -- (2.5, .5) -- (2.5, 3.5) -- (1.5, 3.5) -- (1.5, 4.5) -- (0.5, 4.5) -- (.5, 7.5); \fill[cyan] (1, 7) circle (0.14cm) {}; \fill[cyan] (2, 6) circle (0.14cm) {}; \fill[cyan] (2, 5) circle (0.14cm) {}; \fill[cyan] (3, 4) circle (0.14cm) {}; \fill[cyan] (3, 3) circle (0.14cm) {}; \fill[cyan] (4, 2) circle (0.14cm) {}; \fill[cyan] (4, 1) circle (0.14cm) {}; \end{tikzpicture}} \end{center} \caption{An illustration of the shift map $\varphi$ from the proof of Proposition~\ref{prop:fullNarayana}.} \label{fig:shiftmap} \end{figure} One can see that $\varphi$ is invertible, and so it is a bijection between
interval-closed subsets of $[m] \times [n]$ that have at least one element of the form $(a, b)$ for each $a \in [m]$ and full interval-closed subsets of $[m + 1] \times [n]$. The enumeration then follows from Theorem~\ref{thm:Narayana}. \end{proof} The paths $L$ and $U$ can also be described in poset language. We will use this lemma in Section~\ref{sec:Motzkin_stats} to translate statistics via the bijections of this paper. An illustration of the four sets in the lemma appears in Figure~\ref{fig:UL}. Note we state this lemma not only for the poset $[m]\times[n]$, but also for any subposet that is itself a full interval-closed set of $[m]\times[n]$. \begin{lem}\label{prop:paths_in_poset_language} Let the poset $P$ be a full interval-closed set of $[m]\times[n]$. Given $I\in\IC(P)$ with lower path $L$ and upper path $U$, one has the following characterization of the elements of $P$ according to their position in relation to $L$ and $U$: \begin{itemize} \item the elements above $L$ and below $U$ are exactly those in $I$, \item the elements below both $L$ and $U$ are exactly those in $\oi{(I)}\setminus I$, \item the elements above both $L$ and $U$ are exactly those in $\f{(I)}\setminus I$, and \item the elements below $L$ and above $U$ are those that are incomparable with $I$. \end{itemize} \end{lem} \begin{proof} By definition, the elements of $P$ below $U$ are exactly those in the order ideal $\oi{(I)}$, and the elements of $P$ above $L$ are exactly those in the order filter $\f{(I)}$. An element $z\in P$ is in the intersection $\oi{(I)}\cap\f{(I)}$ if and only if $x\le z$ for some $x\in I$ and $z\le y$ for some $y\in I$. Since $I$ is an interval-closed set, this implies that $z\in I$. Hence, $\f{(I)} \cap \oi{(I)}= I$, proving the first three statements. For the fourth statement, note that elements below $L$ and above $U$ are those in $P \setminus (\f{(I)} \cup \oi{(I)})$, that is, elements in $P$ that are neither larger nor smaller than any element in $I$. 
In other words, these are the elements that are incomparable with $I$. \end{proof} This perspective will be used in \cite{LLMSW} to analyze the action of \emph{rowmotion} on interval-closed sets of $[m]\times[n]$. \subsection{From pairs of paths to bicolored Motzkin paths}\label{ssec:bicolored} In this subsection, we associate a slightly different pair of paths $(B,T)$ to each interval-closed set in $[m]\times [n]$ as an intermediate step towards a bijection between $\IC([m]\times[n])$ and certain bicolored Motzkin paths. As described in Section~\ref{ssec:latticepaths_rectangles}, the set of order ideals of $[m]\times[n]$ is in natural bijection with the set of lattice paths $\Lmn$ from $(0,n)$ to $(m+n,m)$ with steps $\uu$ and $\dd$. Let $J_1,J_2$ be order ideals of $[m]\times[n]$, and let $B,T\in\Lmn$ be their corresponding lattice paths. Then $J_1\subseteq J_2$ if and only if $B$ lies weakly below $T$. We will write this as $B\le T$. Let $\LLmn=\{(B,T):B,T\in\Lmn, B\le T\}$. Our goal is to enumerate interval-closed sets of $[m]\times[n]$. Any interval-closed set can be expressed as $J_2\setminus J_1$ for some pair of order ideals $J_1,J_2$ such that $J_1\subseteq J_2$, and any such pair of order ideals determines an ICS. However, $J_1$ and $J_2$ are not unique in general; for example, the empty set can be written as $J\setminus J$ for any order ideal $J$. In general, given $(B,T)\in\LLmn$, the steps where $B$ and $T$ coincide are irrelevant when determining the corresponding interval-closed set. This is because the interval-closed set has elements in the $i$th vertical ``file'' (i.e., elements $(a,b)\in[m]\times [n]$ such that $b-a=i+n-1$) if and only if the $i$th step of $B$ is strictly below the $i$th step of $T$. 
Thus, interval-closed sets of $[m]\times[n]$ are in bijection with equivalence classes of pairs $(B,T)\in\LLmn$, where the equivalence relation allows us to freely change the portions of $B$ and $T$ where these two paths coincide, as long as we preserve the portions of $B$ and $T$ that are disjoint. To enumerate these equivalence classes, let us introduce another type of lattice paths. Denote by $\MMl$ the set of {\em bicolored Motzkin paths} of length $\ell$. These are lattice paths from $(0,0)$ to $(\ell,0)$ that never go below the $x$-axis and consist of steps of four types: $\uu=(1,1)$, $\dd=(1,-1)$, and two kinds of horizontal steps $(1,0)$, which we will denote by $\hh_1$ and $\hh_2$. Denote by $u(M)$ the number of $\uu$ steps in $M$, and define $d(M)$, $h_1(M)$ and $h_2(M)$ similarly. Let $\MM=\bigcup_{\ell\ge0}\MMl$. Consider the following well known bijection (see e.g.,~\cite{Elizalde-symmetry}) between $\bigcup_{m+n=\ell}\LLmn$ and $\MMl$. Given $(B,T)\in\LLmn$ and $\ell=m+n$, let $M\in\MMl$ be the path whose $i$th step $m_i$ is determined by the $i$th steps of $B$ and $T$, as follows: \begin{equation}\label{eq:mi} m_i=\begin{cases} \uu & \text{if $b_i=\dd$ and $t_i=\uu$},\\ \dd & \text{if $b_i=\uu$ and $t_i=\dd$},\\ \hh_1 & \text{if $b_i=\uu$ and $t_i=\uu$},\\ \hh_2 & \text{if $b_i=\dd$ and $t_i=\dd$}. \end{cases} \end{equation} Under this bijection, we have $(B,T)\in\LLmn$ if and only if $u(M)+h_1(M)=m$ and $d(M)+h_2(M)=n$. Let $\MM_{m,n}$ denote the set of $M\in\MM_{m+n}$ such that $u(M)+h_1(M)=m$ and $d(M)+h_2(M)=n$. The fact that $B\le T$ guarantees that $M$ stays weakly above the $x$-axis, and that steps where $B$ and $T$ coincide correspond to horizontal steps ($\hh_1$ or $\hh_2$) of $M$ that lie on the $x$-axis. 
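The case analysis in~\eqref{eq:mi} is easy to implement directly. The following Python sketch (our own illustration, not part of the paper; the character encoding of steps is ours) converts the pair $(B,T)$ of Example~\ref{ex:Motzkin_bijection} into its bicolored Motzkin path.

```python
# Sketch of the map (B, T) -> M: steps of B and T are encoded as the
# characters 'u', 'd'; steps of M as 'U', 'D', '1' (h_1) and '2' (h_2).

def paths_to_motzkin(B, T):
    """Apply the four-case rule step by step, assuming B lies weakly below T."""
    rule = {('d', 'u'): 'U',   # B falls, T rises: up-step of M
            ('u', 'd'): 'D',   # B rises, T falls: down-step of M
            ('u', 'u'): '1',   # both rise: horizontal step h_1
            ('d', 'd'): '2'}   # both fall: horizontal step h_2
    return ''.join(rule[(b, t)] for b, t in zip(B, T))

# The pair (B, T) from the example in the text, with m = 13 and n = 14:
T = 'duuuddduuduuudddudududdduud'
B = 'ddudduuuudduddduuuudddduuud'
M = paths_to_motzkin(B, T)
```

One can verify on this example that $u(M)+h_1(M)=13$ and $d(M)+h_2(M)=14$, as the discussion above predicts.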
In particular, changing steps where $B$ and $T$ coincide (while preserving the portions where $B$ and $T$ are disjoint) corresponds to rearranging the horizontal steps of $M$ within each maximal block of adjacent horizontal steps on the $x$-axis. Thus, interval-closed sets of $[m]\times[n]$ are in bijection with equivalence classes of paths in $\MM_{m,n}$, where the equivalence relation is given by the above rearrangements. An easy way to pick one representative from each equivalence class is to consider paths where no $\hh_2$ on the $x$-axis is immediately followed by a $\hh_1$, i.e., every block of horizontal steps on the $x$-axis is of the form $\hh_1^r\hh_2^s$ for some $r,s\ge0$. Let $\tMM$, $\tMMl$, and $\tMMmn$ respectively be the sets of paths in $\MM$, $\MMl$, and $\MMmn$ with this property. In terms of the paths $(B,T)$, this convention for picking a representative corresponds to requiring the blocks where $B$ and $T$ coincide to be of the form $\uu^r\dd^s$. In particular, the resulting path $B$ coincides with the path $L$ of the previous subsection. The above discussion yields the following theorem. \begin{thm}\label{thm:Motzkin_bijection} The set $\IC([m]\times[n])$ of interval-closed sets of $[m]\times[n]$ is in bijection with the set $\tMMmn$ of bicolored Motzkin paths where no $\hh_2$ on the $x$-axis is immediately followed by a $\hh_1$, and such that $u(M)+h_1(M)=m$ and $d(M)+h_2(M)=n$. \end{thm} \begin{example}\label{ex:Motzkin_bijection} Figure~\ref{ex_paths} shows an example of an interval-closed set of $[13] \times [14]$ with paths $T$ (in blue, dashed) and $B$ (in green) with their overlap in purple.
We have \begin{align*} T&=\dd \ \uu \ \uu \ \uu \ \dd \ \dd \ \dd \ \uu \ \uu \ \dd \ \uu \ \uu \ \uu \ \dd \ \dd \ \dd \ \uu \ \dd \ \uu \ \dd \ \uu \ \dd \ \dd \ \dd \ \uu \ \uu \ \dd,\\ B&= \dd \ \dd \ \uu \ \dd \ \dd \ \uu \ \uu \ \uu \ \uu \ \dd \ \dd \ \uu \ \dd \ \dd \ \dd \ \uu \ \uu \ \uu \ \uu \ \dd \ \dd \ \dd \ \dd \ \uu \ \uu \ \uu \ \dd.\end{align*} Using~\eqref{eq:mi}, we obtain $$M = \hh_2 \ \uu \ \hh_1 \ \uu \ \hh_2 \ \dd \ \dd \ \hh_1 \ \hh_1 \ \hh_2 \ \uu \ \hh_1 \ \uu \ \hh_2 \ \hh_2 \ \dd \ \hh_1 \ \dd \ \hh_1 \ \hh_2 \ \uu \ \hh_2 \ \hh_2 \ \dd \ \hh_1 \ \hh_1 \ \hh_2,$$ which is shown in Figure \ref{ex_motzkin_path}. \end{example} \begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=.5] \foreach \x in {1,...,13} {\foreach \y in {1,...,14} {\fill (\x - \y, \x + \y) circle (0.1cm) {}; \ifthenelse{\x < 13} {\draw (\x - \y, \x + \y) -- (\x - \y + 1, \x + \y + 1);}{} \ifthenelse{\y < 14} {\draw (\x - \y, \x + \y) -- (\x - \y - 1, \x + \y+1);}{} } } \fill[blue] (-12, 14) circle (0.2cm) {}; \fill[blue] (1 - 12, 3 + 12) circle (0.2cm) {}; \fill[blue] (2 - 12, 4 + 12) circle (0.2cm) {}; \fill[blue] (2 - 12, 2 + 12) circle (0.2cm) {}; \fill[blue] (3 - 12, 3 + 12) circle (0.2cm) {}; \fill[blue] (3 - 12, 1 + 12) circle (0.2cm) {}; \fill[blue] (4 - 12, 2 + 12) circle (0.2cm) {}; \fill[blue] (-3, 1 + 14) circle (0.2cm) {}; \fill[blue] (-2, 16) circle (0.2cm) {}; \fill[blue] (-1, 17) circle (0.2cm) {}; \fill[blue] (-1, 15) circle (0.2cm) {}; \fill[blue] (0, 16) circle (0.2cm) {}; \fill[blue] (0, 14) circle (0.2cm) {}; \fill[blue] (1, 15) circle (0.2cm) {}; \fill[blue] (1, 13) circle (0.2cm) {}; \fill[blue] (2, 14) circle (0.2cm) {}; \fill[blue] (3, 15) circle (0.2cm) {}; \fill[blue] (7, 15) circle (0.2cm) {}; \fill[blue] (8, 14) circle (0.2cm) {}; \fill[blue] (9, 13) circle (0.2cm) {}; \draw[burgundy, ultra thick] (-14, 15) -- (-13, 14); \draw[babyblue, ultra thick, dashed] (-13, 14) -- (-10, 17) -- (-7, 14); \draw[burgundy, ultra thick] (-7, 14) -- (-5, 16) -- (-4, 15); \draw[babyblue, ultra thick, dashed] (-4, 15) --
(-1, 18)node[above right] {{ \large $T$}} -- (2, 15) -- (3, 16) -- (4, 15); \draw[burgundy, ultra thick] (4, 15) -- (5, 16) -- (6, 15); \draw[babyblue, ultra thick, dashed] (6, 15) -- (7, 16) -- (10, 13); \draw[burgundy, ultra thick] (10, 13) -- (12, 15) -- (13, 14); \draw[darkgreen, ultra thick] (-13, 14) -- (-12, 13) -- (-11, 14) -- (-9, 12) -- (-7, 14); \draw[darkgreen, ultra thick] (-4, 15) -- (-3, 14) -- (-2, 15) -- (1, 12)node[below left] {{\large $B$}} -- (4, 15); \draw[darkgreen, ultra thick] (6, 15) -- (9, 12) -- (10, 13); \end{tikzpicture} \end{center} \caption{An interval-closed set in $P = [13] \times [14]$ with associated lattice paths $T$ (dashed) and $B$.}\label{ex_paths} \end{figure} \begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=.5] \draw[gray,thin] (0,0) grid (27,3); \draw (-1, -1) node {M =}; \draw (0.5, -1) node {$\hh_2$}; \draw (1.5, -1) node {$\uu$}; \draw (2.5, -1) node {$\hh_1$}; \draw (3.5, -1) node {$\uu$}; \draw (4.5, -1) node {$\hh_2$}; \draw (5.5, -1) node {$\dd$}; \draw (6.5, -1) node {$\dd$}; \draw (7.5, -1) node {$\hh_1$}; \draw (8.5, -1) node {$\hh_1$}; \draw (9.5, -1) node {$\hh_2$}; \draw (10.5, -1) node {$\uu$}; \draw (11.5, -1) node {$\hh_1$}; \draw (12.5, -1) node {$\uu$}; \draw (13.5, -1) node {$\hh_2$}; \draw (14.5, -1) node {$\hh_2$}; \draw (15.5, -1) node {$\dd$}; \draw (16.5, -1) node {$\hh_1$}; \draw (17.5, -1) node {$\dd$}; \draw (18.5, -1) node {$\hh_1$}; \draw (19.5, -1) node {$\hh_2$}; \draw (20.5, -1) node {$\uu$}; \draw (21.5, -1) node {$\hh_2$}; \draw (22.5, -1) node {$\hh_2$}; \draw (23.5, -1) node {$\dd$}; \draw (24.5, -1) node {$\hh_1$}; \draw (25.5, -1) node {$\hh_1$}; \draw (26.5, -1) node {$\hh_2$}; \draw[red, very thick] (0, 0) to[out=45, in=225, looseness=1.5] (1, 0); \draw[blue, very thick] (1,0) -- (2, 1) -- (3, 1) -- (4, 2); \draw[red, very thick] (4, 2) to[out=45, in=225, looseness=1.5] (5, 2); \draw[blue, very thick] (5,2) -- (6, 1) -- (7, 0) -- (8, 0) -- (9, 0); \draw[red, very thick] 
(9, 0) to[out=45, in=225, looseness=1.5] (10, 0); \draw[blue, very thick] (10, 0) --(11, 1) -- (12, 1) -- (13,2); \draw[red, very thick] (13, 2) to[out=45, in=225, looseness=1.5] (14, 2) to[out=45, in=225, looseness=1.5] (15, 2); \draw[blue, very thick] (15, 2) -- (16, 1) -- (17, 1) -- (18, 0) -- (19, 0); \draw[red, very thick] (19, 0) to[out=45, in=225, looseness=1.5] (20, 0); \draw[blue, very thick] (20, 0) -- (21, 1); \draw[red, very thick] (21, 1) to[out=45, in=225, looseness=1.5] (22, 1) to[out=45, in=225, looseness=1.5] (23, 1); \draw[blue, very thick] (23, 1) -- (24, 0) -- (25, 0) -- (26, 0); \draw[red, very thick] (26, 0) to[out=45, in=225, looseness=1.5] (27, 0); \fill[black] (0,0) circle (0.2cm) {}; \fill[black] (1,0) circle (0.2cm) {}; \fill[black] (2,1) circle (0.2cm) {}; \fill[black] (3,1) circle (0.2cm) {}; \fill[black] (4,2) circle (0.2cm) {}; \fill[black] (5,2) circle (0.2cm) {}; \fill[black] (6,1) circle (0.2cm) {}; \fill[black] (7,0) circle (0.2cm) {}; \fill[black] (8,0) circle (0.2cm) {}; \fill[black] (9,0) circle (0.2cm) {}; \fill[black] (10,0) circle (0.2cm) {}; \fill[black] (11,1) circle (0.2cm) {}; \fill[black] (12,1) circle (0.2cm) {}; \fill[black] (13,2) circle (0.2cm) {}; \fill[black] (14,2) circle (0.2cm) {}; \fill[black] (15,2) circle (0.2cm) {}; \fill[black] (16, 1) circle (0.2cm) {}; \fill[black] (17,1) circle (0.2cm) {}; \fill[black] (18,0) circle (0.2cm) {}; \fill[black] (19,0) circle (0.2cm) {}; \fill[black] (20,0) circle (0.2cm) {}; \fill[black] (21,1) circle (0.2cm) {}; \fill[black] (22,1) circle (0.2cm) {}; \fill[black] (23,1) circle (0.2cm) {}; \fill[black] (24,0) circle (0.2cm) {}; \fill[black] (25,0) circle (0.2cm) {}; \fill[black] (26,0) circle (0.2cm) {}; \fill[black] (27,0) circle (0.2cm) {}; \end{tikzpicture} \end{center} \caption{The bicolored Motzkin path $M\in\MM_{13,14}$, with $\hh_1$ drawn as blue and straight, and $\hh_2$ as red and curved.} \label{ex_motzkin_path} \end{figure} \subsection{Deriving the generating function} \label{sec:directGF} In this subsection, we obtain an expression for the
generating function $$A(x,y)=\sum_{m,n\ge0} \card{\IC([m]\times[n])}\, x^m y^n$$ of interval-closed sets of $[m]\times[n]$. \begin{thm}\label{thm:A} The generating function of interval-closed sets of $[m]\times[n]$ is given by $$A(x,y)=\frac{2}{1-x-y+2xy+\sqrt{(1-x-y)^2-4xy}}.$$ \end{thm} \begin{proof} Using the bijection of Theorem~\ref{thm:Motzkin_bijection}, we can write $$A(x,y)=\sum_{M\in\tMM} x^{u(M)+h_1(M)} y^{d(M)+h_2(M)}.$$ We start by recalling the derivation of the generating function for bicolored Motzkin paths, $$C(x,y)=\sum_{M\in\MM} x^{u(M)+h_1(M)} y^{d(M)+h_2(M)},$$ as in~\cite[Lemma 2.1]{Elizalde-symmetry}. Any non-empty path in $\MM$ is either of the form $M=\hh_1M'$ or $M=\hh_2M'$, where $M'\in\MM$, or of the form $M=\uu M_1 \dd M_2$, where $M_1,M_2\in\MM$. This gives the equation $$C(x,y)=1+(x+y)C(x,y)+xyC(x,y)^2,$$ from which we conclude \begin{equation}\label{eq:C} C(x,y)=\frac{1-x-y-\sqrt{(1-x-y)^2-4xy}}{2xy}. \end{equation} We now give a similar decomposition for non-empty paths in $\tMM$. Paths that start with a horizontal step must be of the form $M=\hh_1M'$, where $M'\in\tMM$, or $M=\hh_2M'$, where $M'$ is any path in $\tMM$ that does not start with $\hh_1$. Paths that start with an up-step are of the form $M=\uu M_1\dd M_2$, where $M_1\in\MM$ and $M_2\in\tMM$. This decomposition yields the equation $$A(x,y)=1+xA(x,y)+y(A(x,y)-xA(x,y))+xyC(x,y)A(x,y),$$ from which we conclude $$ A(x,y)=\frac{1}{1-x-y+xy-xyC(x,y)}=\frac{2}{1-x-y+2xy+\sqrt{(1-x-y)^2-4xy}}.\qedhere $$ \end{proof} Equation~\eqref{eq:C} gives an alternative proof of Proposition~\ref{prop:fullNarayana}: via the bijection in Section~\ref{ssec:bicolored}, full interval-closed sets of $[m]\times[n]$ correspond to pairs $(B,T)$ where $B$ and $T$ only touch at their endpoints, which in turn correspond to bicolored Motzkin paths that only touch the $x$-axis at their endpoints. 
These are paths of the form $\uu M\dd$, where $M\in\MM$, and so their generating function is $$xy\,C(x,y)=\frac{1-x-y-\sqrt{(1-x-y)^2-4xy}}{2}.$$ The coefficient of $x^my^n$ in this generating function is $N(m+n-1,n)$, recovering Proposition~\ref{prop:fullNarayana}. \subsection{Extracting formulas for small parameter values} \label{ssec:extracting_formulas} From the expression in Theorem~\ref{thm:A}, one can obtain generating functions counting interval-closed sets of $[m]\times [n]$ where one of the parameters is fixed. For example, differentiating twice with respect to $x$, we have $$ \frac{\partial^2 A(x,y)}{\partial x^2}=\sum_{m\ge2,n\ge0} m(m-1)\card{\IC([m]\times[n])}\, x^{m-2} y^n. $$ Setting $x=0$ and using Theorem~\ref{thm:A}, we get $$\sum_{n\ge0} \card{\IC([2]\times[n])}\, y^n=\frac{1}{2} \left.\frac{\partial^2 A(x,y)}{\partial x^2}\right|_{x=0}=\frac{1-y+3y^2-2y^3+y^4}{(1-y)^5}.$$ Extracting the coefficient of $y^n$ gives $$\card{\IC([2]\times[n])}=\binom{n+4}{4}-\binom{n+3}{4}+3\binom{n+2}{4}-2\binom{n+1}{4}+\binom{n}{4}=\frac{n^4+4n^3+17n^2+14n+12}{12},$$ recovering Theorem~\ref{prodofchainICS}. Similarly, we have $$\sum_{n\ge0} \card{\IC([3]\times[n])}\, y^n=\frac{1}{6} \left.\frac{\partial^3 A(x,y)}{\partial x^3}\right|_{x=0}=\frac{1+5y^2-5y^3+6y^4-3y^5+y^6}{(1-y)^7},$$ from which we obtain the following. \begin{cor} \label{cor:3xncor} The cardinality of $\IC([3]\times[n])$ is $$\frac{n^{6}+9 n^{5}+61 n^{4}+159 n^{3}+370 n^{2}+264 n +144}{144}.$$ \end{cor} In general, for any fixed $m$, we have $$\sum_{n\ge0} \card{\IC([m]\times[n])}\, y^n=\frac{1}{m!} \left.\frac{\partial^m A(x,y)}{\partial x^m}\right|_{x=0},$$ which is a rational generating function, since the square roots in the partial derivatives of $A(x,y)$ disappear when setting $x=0$. Extracting the coefficient of $y^n$ gives an expression for $\card{\IC([m]\times[n])}$, which, according to our computations for $m\le10$, appears to be a polynomial in $n$ of degree $2m$ with non-negative coefficients.
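As an independent sanity check (our own brute-force computation in Python, not part of the paper), one can enumerate interval-closed sets of $[m]\times[n]$ directly for small $m$ and $n$ and compare with the polynomial formulas above.

```python
# Brute-force count of interval-closed sets of [m] x [n] (small m*n only),
# compared against the closed formulas for m = 2 and m = 3 quoted above.
from itertools import product

def num_ics(m, n):
    """Count subsets I of [m] x [n] such that x <= z <= y with x, y in I
    forces z in I, by iterating over all 2^(mn) subsets."""
    P = list(product(range(1, m + 1), range(1, n + 1)))
    count = 0
    for mask in range(1 << len(P)):
        I = {P[k] for k in range(len(P)) if mask >> k & 1}
        if all(z in I
               for x in I for y in I for z in P
               if x[0] <= z[0] <= y[0] and x[1] <= z[1] <= y[1]):
            count += 1
    return count

def f2(n):  # polynomial formula for |IC([2] x [n])| from the text
    return (n**4 + 4*n**3 + 17*n**2 + 14*n + 12) // 12

def f3(n):  # polynomial formula for |IC([3] x [n])| from the corollary above
    return (n**6 + 9*n**5 + 61*n**4 + 159*n**3 + 370*n**2 + 264*n + 144) // 144
```

Note that $f2(n)$ and $f3(n)$ agree with the brute-force counts, and that `num_ics(3, 2) == num_ics(2, 3)` as forced by the symmetry $[m]\times[n]\cong[n]\times[m]$.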
\subsection{Translating statistics between interval-closed sets and bicolored Motzkin paths} \label{sec:Motzkin_stats} We now translate some statistics between interval-closed sets and bicolored Motzkin paths, via the bijection of Theorem~\ref{thm:Motzkin_bijection}. See Example~\ref{ex:stats} below. \begin{thm} \label{thm:Motzkin_stats_bijection} Let $I\in\IC([m]\times[n])$, and let $M\in\tMMmn$ be its image under the bijection of Theorem~\ref{thm:Motzkin_bijection}. Then, \begin{enumerate}[label=(\alph*)] \item the cardinality of $I$ is the area under $M$ and above the $x$-axis; \item the number of elements of $[m]\times[n]$ that are incomparable with $I$ is equal to $\sum \#\hh_1\, \#\hh_2$, where the sum is over all maximal runs of horizontal steps of $M$ at height $0$, and $\#\hh_1$ and $\#\hh_2$ denote the number of $\hh_1$ and $\hh_2$ steps in each such run; and \item the number of connected components of $I$ is the number of returns of $M$ to the $x$-axis. \end{enumerate} \end{thm} \begin{proof} Let $B$ and $T$ be the lattice paths obtained from $I$ using the bijection from Subsection~\ref{ssec:bicolored}. Let $(i, \beta_i)$ and $(i, \tau_i)$ be the coordinates of the vertices of $B$ and $T$ after $i$ steps, respectively. Since the paths start at $(0,n)$ and consist of steps $(1,1)$ and $(1,-1)$, we have $i+\beta_i\equiv i+\tau_i\equiv n \bmod 2$. Note that, in the Cartesian coordinates that we use for lattice paths, the points $(i,j)\in\mathbb{R}^2$ satisfying $i+j \equiv n+1\bmod 2$ correspond to the elements of the poset $[m]\times [n]$, provided that they are inside the rectangle with vertices $(0,n)$, $(m,n+m)$, $(m+n, m)$ and $(n,0)$. This is shown in Figure~\ref{fig:ICS_coordinates}. 
\begin{figure}[htbp] \centering \begin{tikzpicture}[scale=.5] \foreach \x in {0,1,6} {\foreach \y in {0,1,7,8} {\fill (\x - \y, \x + \y) circle (0.1cm) {}; \ifthenelse{\x < 6} {\draw (\x - \y, \x + \y) -- (\x - \y + 1, \x + \y + 1);}{} \ifthenelse{\y < 8} {\draw (\x - \y, \x + \y) -- (\x - \y - 1, \x + \y+1);}{} } } \draw[dotted] (-1,1) --(-6,6); \draw[dotted] (0,2) --(-5,7); \draw[dotted] (5,7) --(0,12); \draw[dotted] (-6,10) --(-3,13); \draw (-3,13) --(-2,14); \draw[dotted] (-5,9)--(-2,12); \draw (-2,12) --(-1,13); \draw[dotted] (1,3) --(4,6); \draw (4,6) --(5,7); \draw[dotted] (2,2) --(5,5); \draw (5,5) --(6,6); \draw (-7,7) --(-6,6); \draw (-6,8) --(-5,7); \draw (-1,13) --(0,12); \fill[blue] (-7, 8) circle (0.35cm) {}; \draw (0 - 8, 0 + 8) node[left=.25em] {$(0, n)$}; \draw (6 - 0, 6 + 0) node[right=.25em] {$(m+n,m)$}; \draw (- 6, 8) node[right=.25em] {$(2,n)$}; \draw (- 7, 7) node[below left] {$(1,n-1)$}; \draw (- 7, 9) node[above left] {$(1,n+1)$}; \draw (0,0) node[below=.25em] {$(n,0)$}; \draw (-1,14) node[above=.25em] {$(m, m+n)$}; \end{tikzpicture} \caption{The Cartesian coordinates used for lattice paths in $\Lmn$. The blue circle is element $(1,n)$ of the poset $[m]\times[n]$.} \label{fig:ICS_coordinates} \end{figure} Let $d_i(B,T)=\tau_i-\beta_i$ be the distance between the paths $B$ and $T$ after $i$ steps. Since $T$ is weakly above $B$, we can write $d_i(B,T) = 2k$ for some nonnegative integer $k$, which is equal to the difference between the number of $\uu$ steps of $B$ and $T$ within their first $i$ steps. In the corresponding bicolored Motzkin path $M$, constructed from $B$ and $T$ using equation~\eqref{eq:mi}, this difference $k$ is equal to the number of $\uu$ steps (which occur in positions where $T$ has a $\uu$ step but $B$ does not) minus the number of $\dd$ steps (which occur in positions where $B$ has a $\uu$ step but $T$ does not) within the first $i$ steps of $M$, which in turn equals the height of $M$ after $i$ steps.
Summing over $i$, it follows that the area under $M$ (defined as the number of full squares plus half the number of triangles under $M$) and above the $x$-axis is equal to $\frac{1}{2} \sum_i d_i(B,T)$. Let us now show that $\frac{1}{2}d_i(B,T)$ is also equal to the number of elements of $I$ with coordinates of the form $(i, j)$ in the lattice path coordinate system, for each fixed $i$. After $i$ steps, if $T$ is strictly above $B$, the points $(i, \beta_i +1), (i, \beta_i+3), \ldots, (i, \tau_i-1)$ are the elements of the poset above $B$ and below $T$, so they are exactly the elements of $I$ of the form $(i,j)$. There are $\frac{1}{2}d_i(B,T) = \frac{1}{2}(\tau_i-\beta_i)$ of them. Summing over $i$, we obtain $|I|=\frac{1}{2} \sum_i d_i(B,T)$. This proves part~(a). By the last part of Lemma~\ref{prop:paths_in_poset_language}, the elements that are incomparable with $I$ are those that lie below the path $L$ and above the path $U$, defined in Subsection~\ref{ssec:latticepaths_rectangles}. When $L$ is below $U$, the paths $B$ and $T$ coincide with $L$ and $U$, respectively. On the other hand, when $L$ is above $U$, then $B$ and $T$ coincide with each other, and they also coincide with $L$; in these portions, $M$ consists of horizontal steps at height $0$. Consider a maximal block where $L$ is above $U$, or equivalently, where $B$ and $T$ coincide. Let $\#\hh_1$ and $\#\hh_2$ denote the number of $\hh_1$ and $\hh_2$ steps in this block, which equals the number of $\uu$ and $\dd$ steps of $L$, respectively. In this block, the elements of $[m]\times [n]$ below $L$ and above $U$ (hence incomparable with $I$) form an $\#\hh_1\times\#\hh_2$ rectangle, and so this block contributes $\#\hh_1\,\#\hh_2$ elements. Summing over all the blocks where $L$ is above $U$, we obtain part~(b). Finally, the connected components in $I$ correspond to the maximal blocks where $B$ is strictly below $T$, or equivalently, $M$ is strictly above the $x$-axis.
Thus, the number of connected components is the number of returns of $M$ to the $x$-axis, i.e., $\dd$ steps that end at height~$0$, proving part~(c). \end{proof} \begin{example} \label{ex:stats} Continuing Example~\ref{ex:Motzkin_bijection}, we note that the cardinality of the interval-closed set $I$ in Figure~\ref{ex_paths} is 20, which equals the area under the associated bicolored Motzkin path $M$ in Figure~\ref{ex_motzkin_path}. The number of components of $I$ is 3, which equals the number of returns of $M$. And the number of elements of the poset incomparable with $I$ is 5, which equals $\sum \#\hh_1\ \#\hh_2=2\cdot 1+1\cdot1+2\cdot1$. \end{example} \subsection{Counting interval-closed sets of the type $B$ minuscule poset } \label{sec:Bminuscule} The product of chains poset $[m]\times[n]$ may be interpreted as a \emph{minuscule poset} associated to the type $A_{m+n-1}$ Dynkin diagram. One can study interval-closed sets of other minuscule posets. The next simplest is the type $B_n$ minuscule poset, which is the triangular poset constructed as half of the $[n]\times[n]$ diamond poset, as seen in Figure~\ref{fig:B_minuscule}. We denote this poset by $\fB_n$. See~\cite[Section 3]{Okada21} for background on minuscule posets. As illustrated in Figure~\ref{fig:B_minuscule}, interval-closed sets of $\fB_n$ are in bijection with vertically-symmetric interval-closed sets of $[n]\times[n]$. 
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.5] \foreach \y in {0,...,8} {\foreach \x in {0,...,\y} {\fill (\x - \y, \x + \y) circle (0.1cm) {}; \ifthenelse{\x < \y} {\draw (\x - \y, \x + \y) -- (\x - \y + 1, \x + \y + 1);}{} \ifthenelse{\y < 8} {\draw (\x - \y, \x + \y) -- (\x - \y - 1, \x + \y + 1);}{} } } \fill[blue] (-2, 8) circle (0.2cm) {}; \fill[blue] (-1, 9) circle (0.2cm) {}; \fill[blue] (-4, 8) circle (0.2cm) {}; \fill[blue] (-5, 7) circle (0.2cm) {}; \fill[blue] (-6, 8) circle (0.2cm) {}; \fill[blue] (-6, 6) circle (0.2cm) {}; \fill[blue] (-7, 7) circle (0.2cm) {}; \fill[blue] (-8, 8) circle (0.2cm) {}; \fill[blue] (0, 8) circle (0.2cm) {}; \draw[<->] (1,8)--(2,8); \end{tikzpicture}\quad \begin{tikzpicture}[scale=.5] \foreach \y in {0,...,8} {\foreach \x in {0,...,8} {\fill (\y - \x, \x + \y) circle (0.1cm) {}; \ifthenelse{\x < 8} {\draw (\y - \x, \x + \y) -- (\y - \x - 1, \x + \y + 1);}{} \ifthenelse{\y < 8} {\draw (\y - \x, \x + \y) -- (\y - \x + 1, \x + \y + 1);}{} } } \fill[blue] (2, 8) circle (0.2cm) {}; \fill[blue] (1, 9) circle (0.2cm) {}; \fill[blue] (4, 8) circle (0.2cm) {}; \fill[blue] (5, 7) circle (0.2cm) {}; \fill[blue] (6, 8) circle (0.2cm) {}; \fill[blue] (6, 6) circle (0.2cm) {}; \fill[blue] (7, 7) circle (0.2cm) {}; \fill[blue] (8, 8) circle (0.2cm) {}; \fill[blue] (-2, 8) circle (0.2cm) {}; \fill[blue] (-1, 9) circle (0.2cm) {}; \fill[blue] (-4, 8) circle (0.2cm) {}; \fill[blue] (-5, 7) circle (0.2cm) {}; \fill[blue] (-6, 8) circle (0.2cm) {}; \fill[blue] (-6, 6) circle (0.2cm) {}; \fill[blue] (-7, 7) circle (0.2cm) {}; \fill[blue] (-8, 8) circle (0.2cm) {}; \fill[blue] (0,8) circle (0.2cm) {}; \end{tikzpicture} \end{center} \caption{An interval-closed set of the minuscule poset $\fB_9$ and its corresponding symmetric interval-closed set in $[9]\times[9]$.} \label{fig:B_minuscule} \end{figure} In this subsection, we find the generating function of $\IC(\fB_n)$.
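Before deriving this generating function, the first few values of $\card{\IC(\fB_n)}$ can be checked by brute force, using the bijection with vertically-symmetric interval-closed sets of $[n]\times[n]$ illustrated in Figure~\ref{fig:B_minuscule}. A minimal Python sketch (the function name `ic_fB` and the orbit encoding are ours):

```python
def ic_fB(n):
    """Count interval-closed sets of the type B minuscule poset fB_n,
    realized as vertically-symmetric interval-closed sets of [n]x[n]."""
    # Orbits of the reflection (i, j) -> (j, i): diagonal singletons
    # and off-diagonal pairs; a symmetric subset is a union of orbits.
    orbits = [[(i, i)] for i in range(1, n + 1)] + \
             [[(i, j), (j, i)] for i in range(1, n + 1)
                               for j in range(i + 1, n + 1)]
    elems = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
    leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]
    count = 0
    for mask in range(1 << len(orbits)):
        s = {p for k, orb in enumerate(orbits) if mask >> k & 1 for p in orb}
        # interval-closed: x <= z <= y with x, y in s forces z in s
        if all(z in s or not any(leq(x, z) and leq(z, y)
                                 for x in s for y in s)
               for z in elems):
            count += 1
    return count

print([ic_fB(n) for n in range(1, 5)])  # -> [2, 7, 26, 96]
```

Building candidates from reflection orbits keeps the search to $2^{n(n+1)/2}$ symmetric subsets rather than all $2^{n^2}$ subsets, so the first few terms are computed in seconds.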
\begin{thm}\label{thm:B} The generating function of interval-closed sets of $\fB_n$ is given by $$\sum_{n\ge0} \card{\IC(\fB_n)}\, x^n=\frac{4-10x+8x^2}{2-11x+14x^2-8x^3+(2-3x)\sqrt{1-4x}}.$$ \end{thm} \begin{proof} Denote by $\SSn$ the set of bicolored Motzkin paths $M\in\MMn$ that have the following symmetry: for all $i\in[2n]$, the $i$th step of $M$ is a $\uu$ if and only if the $(2n+1-i)$th step is a $\dd$, and the $i$th step is a $\hh_1$ if and only if the $(2n+1-i)$th step is a $\hh_2$. When restricted to symmetric interval-closed sets of $[n]\times[n]$, the bijection from Section~\ref{ssec:bicolored} becomes a bijection between $\IC(\fB_n)$ and equivalence classes of paths in $\SSn$, where two paths are equivalent if one is obtained from the other by rearranging steps within the maximal blocks of $\hh_1$ and $\hh_2$ steps that lie on the $x$-axis. A canonical representative of each class can be picked by considering paths where no $\hh_2$ on the $x$-axis is immediately followed by a $\hh_1$. Let $\tSSn$ be the set of paths in $\SSn$ with this property. The above bijection implies that $\card{\IC(\fB_n)}=\card{\tSSn}$. By the symmetry property, paths in $\SSn$ are uniquely determined by their left halves, which are lattice paths with $n$ steps $\uu,\dd,\hh_1,\hh_2$, starting at $(0,0)$ and never going below the $x$-axis. Left halves of paths in $\tSSn$ are those that do not end with an $\hh_2$ on the $x$-axis, and where no $\hh_2$ on the $x$-axis is immediately followed by a $\hh_1$. Let $\SS=\bigcup_{n\ge0}\SSn$, $\tSS=\bigcup_{n\ge0}\tSSn$, and define the generating functions $S(x)=\sum_{n\ge0} |\SSn| x^n$, and $$\widetilde{S}(x)=\sum_{n\ge0} |\tSSn| x^n=\sum_{n\ge0} \card{\IC(\fB_n)} x^n.$$ We obtain $S(x)$ by observing that if $H$ is the left half of a non-empty path in $\SS$, then $H=\hh_1H'$, $H=\hh_2H'$, $H=\uu H'$ (if $H$ does not return to the $x$-axis), or $H=\uu M\dd H'$ (if it does return), where $H'$ is the left half of a path in $\SS$, and $M\in\MM$.
This gives the equation $$S(x)=1+3xS(x)+x^2C(x,x)S(x),$$ where $C(x,y)$ is given by equation~\eqref{eq:C}. It follows that \begin{equation} \label{eq:S} S(x)=\frac{2}{1-4x+\sqrt{1-4x}}.\end{equation} Similarly, if $H$ is the left half of a non-empty path in $\tSS$, then $H=\hh_1H'$, $H=\hh_2H''$, $H=\uu H'''$, or $H=\uu M\dd H'$, where $H'$ is the left half of a path in $\tSS$, $H''$ is the left half of a nonempty path in $\tSS$ that does not start with $\hh_1$, $H'''$ is the left half of a path in $\SS$, and $M\in\MM$. This decomposition yields the equation $$\widetilde{S}(x)=1+x\widetilde{S}(x)+x(\widetilde{S}(x)-x\widetilde{S}(x)-1)+xS(x)+x^2C(x,x)\widetilde{S}(x).$$ Solving for $\widetilde{S}(x)$ and using equations~\eqref{eq:C} and~\eqref{eq:S}, we obtain $$\widetilde{S}(x)=\frac{4-10x+8x^2}{2-11x+14x^2-8x^3+(2-3x)\sqrt{1-4x}},$$ as desired. \end{proof} Extracting coefficients, we see that the values of $\card{\IC(\fB_n)}$ for $1\le n\le 10$ are $$2,7,26,96,356,1331,5014,19006,72412,277058.$$ \section{Interval-closed sets of triangular posets and quarter-plane walks} \label{sec:TypeAroot} Another poset with a notable set of order ideals is the poset of positive roots associated to the type $A$ Dynkin diagram. This poset is triangular, as shown in Figure~\ref{fig:A14}. We denote this poset with $n$ minimal elements as $\bA_{n}$. See \cite[Section 4.6]{BjornerBrenti} for background on root posets. This poset is itself an interval-closed set of $[n]\times[n]$, and we can associate to each ICS in $\bA_{n}$ the same pair of lattice paths $B$ and $T$ as in the prior section, restricted to the triangular grid. 
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.5] \foreach \y in {0,...,13} {\foreach \x in {0,...,\y} {\fill[black] (\x + \y, \y - \x) circle (0.1cm) {}; \ifthenelse{\x < \y} {\draw[black] (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x - 1);}{} \ifthenelse{\y < 13} {\draw[black] (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x+1);}{} } } \fill[blue] (2, 0) circle (0.2cm) {}; \fill[blue] (3, 1) circle (0.2cm) {}; \fill[blue] (4, 0) circle (0.2cm) {}; \fill[blue] (5, 1) circle (0.2cm) {}; \fill[blue] (8, 2) circle (0.2cm) {}; \fill[blue] (9, 3) circle (0.2cm) {}; \fill[blue] (10, 2) circle (0.2cm) {}; \fill[blue] (11, 1) circle (0.2cm) {}; \fill[blue] (12, 2) circle (0.2cm) {}; \fill[blue] (13, 1) circle (0.2cm) {}; \fill[blue] (16, 2) circle (0.2cm) {}; \fill[blue] (17, 3) circle (0.2cm) {}; \fill[blue] (18, 2) circle (0.2cm) {}; \fill[blue] (18, 4) circle (0.2cm) {}; \fill[blue] (19, 3) circle (0.2cm) {}; \fill[blue] (23, 3) circle (0.2cm) {}; \fill[blue] (24, 2) circle (0.2cm) {}; \fill[blue] (25, 1) circle (0.2cm) {}; \draw[ultra thick, babyblue, dashed] (1, 2.25) node { \large $T$}; \draw[ultra thick, babyblue, dashed] (22, 3) -- (23.03, 4.03) -- (26, 1); \draw[ultra thick, babyblue, dashed] (1, 0) -- (3, 2) -- (4, 1) --(5, 2) -- (6,1); \draw[ultra thick, babyblue, dashed] (7,2) -- (9, 4) -- (11, 2) -- (12, 3) -- (14, 1); \draw[ultra thick, babyblue, dashed] (15,2) -- (18, 5) -- (20, 3); \draw[very thick, burgundy] (-2, -1) -- (0, 1) -- (1,0); \draw[ultra thick, darkgreen] (6, -1) node { \large $B$}; \draw[ultra thick, darkgreen] (1,0) -- (2, -1) -- (3, 0) -- (4, -1) -- (6,1); \draw[ultra thick, darkgreen] (7, 2) -- (8, 1) -- (9, 2) -- (11, 0) -- (12, 1) -- (13, 0) -- (14,1); \draw[ultra thick, darkgreen] (15, 2) -- (16, 1) -- (17, 2) -- (18, 1) -- (20, 3); \draw[ultra thick, darkgreen] (22,3) -- (25, 0) -- (26, 1); \draw[burgundy, very thick] (20, 3) -- (21,4) -- (22,3); \draw[burgundy, very thick] (14,1) -- (15,2); \draw[burgundy, very thick] (6, 1) -- (7,2); \draw[burgundy, very thick] (26, 1) -- (28, -1);
\end{tikzpicture} \end{center} \caption{An interval-closed set of the poset $\mathbf{A}_{14}$ and the corresponding pair of paths $B$ and $T$.} \label{fig:A14} \end{figure} Note that $\bA_{n}$ may be obtained from $[n]\times[n]$ by truncating the bottom $n-1$ ranks. This motivates the extension of our enumeration from $\bA_n$ to the class of \emph{truncated rectangles}. For nonnegative integers $d, m, n$ with $d < \min(m, n)$, let $P_{m\times n; d}$ be the poset obtained by truncating the bottom $d$ ranks from $[m] \times [n]$. See the left of Figure~\ref{fig:truncated} for an example. So we have $\bA_n=P_{n\times n; n-1}$. This section studies interval-closed sets of $\bA_n$ and $P_{m\times n; d}$. In Subsection~\ref{sec:functional_equation}, we prove Theorem~\ref{thm:walks_bijection}, giving a bijection from $\IC(\bA_{n-1})$ to certain quarter-plane walks, and Theorem~\ref{thm:ICAn}, yielding a functional equation for the generating function. Subsection~\ref{sec:typeB} proves Theorem~\ref{thm:BrootGF} giving a functional equation for the generating function of vertically symmetric interval-closed sets in $\bA_{n}$, which we then apply to the type $B$ root poset. Subsection~\ref{ssec:truncated_rectangles} proves Theorems \ref{thm:walks_bijection_truncated} and \ref{thm:ICP}, giving analogous results for truncated rectangle posets $P_{m\times n; d}$. Subsection~\ref{sec:quarter_plane_stats} translates statistics between interval-closed sets and quarter-plane walks via the bijections of Theorems~\ref{thm:walks_bijection} and \ref{thm:walks_bijection_truncated}. \subsection{A functional equation to count interval-closed sets in the type $A$ root poset} \label{sec:functional_equation} Order ideals of $\bA_{n-1}$ are in bijection with Dyck paths of length $2n$, that is, lattice paths from $(0,0)$ to $(2n,0)$ with steps $\uu$ and $\dd$ that do not go below the $x$-axis. 
As in the bijection described in Section~\ref{sec:rectangle}, we associate each order ideal to the path that separates it from the rest of the poset, but here it will be more convenient to consider the origin as the starting point of the paths (see, e.g., Catalan objects 25 and 178 in \cite[Ch.~2]{Stanley_Catalan}). Denote the set of Dyck paths of length $2n$ by $\Dn$, and let $\DDn=\{(B,T):B,T\in\Dn, B\le T\}$, the set of pairs of nested Dyck paths. Similarly to Section~\ref{ssec:bicolored}, enumerating ICS of $\bA_{n-1}$ is equivalent to enumerating pairs $(B,T)\in\DDn$ up to the equivalence relation that allows us to change the portions of $B$ and $T$ where these two paths coincide. One can canonically pick a representative of each equivalence class by requiring that, in each maximal block of steps where $B$ and $T$ coincide, up-steps come before down-steps. Note that this choice guarantees that the paths do not go below the $x$-axis. Next we describe a bijection between $\DDn$ and the set $\Wo_{2n}$ of lattice walks in the first quadrant $\{(x,y):x,y\ge0\}$, consisting of $2n$ steps from among $\e=(1,0),\w=(-1,0),\se=(1,-1),\nw=(-1,1)$, and starting and ending at the origin. Given $(B,T)\in\DDn$, let $W\in\Wo_{2n}$ be the walk whose $i$th step $w_i$ is determined by the $i$th steps of $B$ and $T$ as follows: \begin{equation}\label{eq:wi} w_i=\begin{cases} \nw & \text{if $b_i=\dd$ and $t_i=\uu$},\\ \se & \text{if $b_i=\uu$ and $t_i=\dd$},\\ \e & \text{if $b_i=\uu$ and $t_i=\uu$},\\ \w & \text{if $b_i=\dd$ and $t_i=\dd$}. \end{cases} \end{equation} Note that the condition that $B$ does not go below the $x$-axis guarantees that $W$ stays in $x\ge0$, and the condition that $B\le T$ guarantees that $W$ stays in $y\ge0$. Steps where $B$ and $T$ coincide correspond to steps in $W$ that lie on the $x$-axis. 
Thus, the above canonical representatives of the equivalence classes in $\DDn$ correspond to walks in $\Wo_{2n}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step. Let $\tWo_{2n}\subseteq\Wo_{2n}$ be the subset of walks with this property. This discussion yields the following theorem. \begin{thm} \label{thm:walks_bijection} The set $\IC(\bA_{n-1})$ of interval-closed sets of $\bA_{n-1}$ is in bijection with the set $\tWo_{2n}$ of lattice walks in the first quadrant starting and ending at the origin, and consisting of $2n$ steps from the set $\{ \e,\w,\se,\nw \}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step. \end{thm} An example illustrating Theorem \ref{thm:walks_bijection} is given below. \begin{example} Figure \ref{ex_typeA}, left, shows an interval-closed set of $\mathbf{A}_{5}$ with paths $T$ (in blue, dashed) and $B$ (in green); the portions where they coincide are drawn in purple. We have \begin{align*}T &= \uu \ \uu \ \uu \ \dd \ \dd \ \uu \ \uu \ \dd \ \uu \ \dd \ \dd \ \dd,\\ B & = \uu \ \uu \ \dd \ \dd \ \uu \ \uu \ \uu \ \dd \ \dd \ \uu \ \dd \ \dd. \end{align*} Using \eqref{eq:wi}, we form the quarter-plane walk $$W = \e \ \e \ \nw \ \w \ \se \ \e \ \e \ \w \ \nw \ \se \ \w \ \w,$$ which is shown in Figure \ref{ex_typeA}, right.
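The step-by-step translation in this example can be sketched directly from equation~\eqref{eq:wi}; a minimal Python sketch (the step letters and the function name `walk` are our own encoding):

```python
# Equation (eq:wi): the i-th step of the walk W is determined by the
# i-th steps (b_i, t_i) of the nested Dyck paths B and T.
STEP = {('D', 'U'): 'NW', ('U', 'D'): 'SE',
        ('U', 'U'): 'E',  ('D', 'D'): 'W'}

def walk(B, T):
    """Apply the map of equation (eq:wi) position by position."""
    return [STEP[b, t] for b, t in zip(B, T)]

T = list("UUUDDUUDUDDD")
B = list("UUDDUUUDDUDD")
print(walk(B, T))
# -> ['E', 'E', 'NW', 'W', 'SE', 'E', 'E', 'W', 'NW', 'SE', 'W', 'W']
```

The output agrees with the walk $W$ displayed above.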
\end{example} \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.5] \foreach \y in {0,...,4} {\foreach \x in {0,...,\y} {\fill[gray] (\x + \y, \y - \x) circle (0.1cm) {}; \ifthenelse{\x < \y} {\draw[gray] (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x - 1);}{} \ifthenelse{\y < 4} {\draw[gray] (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x+1);}{} } } \fill[blue] (2, 0) circle (0.2cm) {}; \fill[blue] (1, 1) circle (0.2cm) {}; \fill[blue] (7, 1) circle (0.2cm) {}; \draw[ultra thick, babyblue, dashed] (0,1) -- (1, 2) -- (3,0); \draw[ultra thick, babyblue, dashed] (6,1) -- (7, 2) -- (8,1); \draw[ultra thick, darkgreen] (0, 1) --(2, -1) -- (5, 2) -- (7,0) -- (8,1); \draw[darkgreen] (3.15, -1) node {\large $B$}; \draw[ultra thick, babyblue] (.5, 2.25) node { \large $T$}; \draw[very thick, burgundy] (-2, -1) -- (0,1); \draw[very thick, burgundy] (8,1) -- (10, -1); \draw[very thick, burgundy] (3, 0) -- (5, 2) -- (6, 1); \draw[->] (9.5,1.5)--(11,1.5); \end{tikzpicture}\quad \begin{tikzpicture}[scale=1.7,decoration={ markings, mark=at position 0.55 with {\arrow{>}}}] \draw[gray,dotted] (0,0) grid (3,1); \draw[darkgray,->] (0,-.25)--(0,1.25); \draw[darkgray,->] (-.25,0)--(3.25,0); \fill[black] (0, 0) circle (.7mm) {}; \fill[black] (1, 0) circle (.7mm) {}; \fill[black] (2, 0) circle (.7mm) {}; \fill[black] (3, 0) circle (.7mm) {}; \fill[black] (0, 1) circle (.7mm) {}; \fill[black] (1, 1) circle (.7mm) {}; \draw[thick,bend right=15,postaction={decorate}] (0, 0) to node[below,blue]{1} (1, 0); \draw[thick,bend right=15,postaction={decorate}] (1,0) to node[below,blue]{2,6} (2,0); \draw[thick,bend right=15,postaction={decorate}] (2,0) to node[above right,blue]{3,9} (1,1); \draw[thick,bend right=15,postaction={decorate}] (1, 1) to node[left,blue]{10} (2, 0); \draw[thick,bend right=15,postaction={decorate}] (2,0) to node[below,blue]{7} (3, 0); \draw[thick,bend right=15,postaction={decorate}] (3,0) to node[above,blue]{8} (2,0); \draw[thick,bend right=15,postaction={decorate}] (2,0) to node[above,blue]{11}
(1,0); \draw[thick,bend right=15,postaction={decorate}] (1,0) to node[above,blue]{12} (0,0); \draw[thick,postaction={decorate}] (1,1) -- node[above,blue]{4} (0,1); \draw[thick,postaction={decorate}] (0,1)-- node[above right,blue]{5} (1,0); \draw (0,0) node[below left]{$(0,0)$}; \end{tikzpicture} \end{center} \caption{An interval-closed set of the poset $\mathbf{A}_5$ with associated lattice paths $T$ and $B$, along with the associated lattice walk, with labels representing each step of the walk.} \label{ex_typeA} \end{figure} We now obtain an expression for the generating function of $\IC(\bA_{n-1})$. \begin{thm}\label{thm:ICAn} The generating function of interval-closed sets of $\bA_{n-1}$ can be expressed as $$\sum_{n\ge0} \card{\IC(\bA_{n-1})}z^{2n}=F(0,0,z),$$ where $F(x,y):=F(x,y,z)$ satisfies the functional equation \begin{equation}\label{eq:F} F(x,y)= 1+z\left(x+\frac{1}{x}+\frac{x}{y}+\frac{y}{x}\right)F(x,y) - z \left(\frac{1}{x}+\frac{y}{x}\right)F(0,y) - z\, \frac{x}{y} F(x,0) - z^2\, \left(F(x,0)-F(0,0)\right). \end{equation} \end{thm} \begin{proof} By the bijection of Theorem~\ref{thm:walks_bijection}, we have $\card{\IC(\bA_{n-1})}=\card{\tWo_{2n}}$. In order to enumerate walks in $\tWo_{2n}$, we will consider more general walks that are not restricted to ending at the origin. Let $\tWu_\ell$ be the set of walks in the first quadrant starting at the origin, and consisting of $\ell$ steps from the set $\{\e,\w,\se,\nw\}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step. 
Define the generating function \begin{equation}\label{eq:Fdef} F(x,y,z)=\sum_{i,j,\ell\ge0}\card{\{W\in\tWu_\ell \text{ ending at }(i,j)\}}\,x^iy^jz^\ell, \end{equation} and note that $$F(0,0,z)=\sum_{\ell\ge0}\card{\{W\in\tWu_\ell \text{ ending at }(0,0)\}}\,z^\ell=\sum_{n\ge0}\card{\tWo_{2n}}z^{2n}= \sum_{n\ge0} \card{\IC(\bA_{n-1})}z^{2n}.$$ The structure of the walks in $\tWu_\ell$ yields the following functional equation for $F(x,y):=F(x,y,z)$: \begin{align*} F(x,y)=& \ 1+z\left(x+\frac{1}{x}+\frac{x}{y}+\frac{y}{x}\right)F(x,y) &\\ & - z \left(\frac{1}{x}+\frac{y}{x}\right)F(0,y) & \text{(no $\w$ or $\nw$ steps when on $y$-axis)}\\ & - z\, \frac{x}{y} F(x,0) & \text{(no $\se$ steps when on $x$-axis)}\\ & - z^2\, \left(F(x,0)-F(0,0)\right). & \text{(no $\w$ followed by an $\e$ when on $x$-axis)} \end{align*} Indeed, a non-empty walk in $\tWu_n$ is obtained by appending a step to a walk in $\tWu_{n-1}$. This can be any step from among $\e,\w,\se,\nw$, with the following exceptions: \begin{itemize} \item we cannot append a $\w$ or $\nw$ step when on the $y$-axis --- the generating function for walks ending on the $y$-axis is $F(0,y)$, \item we cannot append a $\se$ step when on the $x$-axis --- the generating function for walks ending on the $x$-axis is $F(x,0)$, \item we cannot append an $\e$ step to a path ending with a $\w$ step on the $x$-axis --- the generating function for walks ending with a $\w$ step on the $x$-axis is $\dfrac{z}{x}\left(F(x,0)-F(0,0)\right)$, and the appended $\e$ step would contribute $xz$.\qedhere \end{itemize} \end{proof} The functional equation~\eqref{eq:F} is similar to the one obtained when describing Gouyou-Beauchamps walks~\cite{GB}. These are like the walks in $\tWu_\ell$, but without the restriction that no $\w$ step on the $x$-axis is immediately followed by an $\e$ step.
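The coefficients of $F(0,0,z)$ can also be generated by a direct dynamic program over the walks of Theorem~\ref{thm:walks_bijection}, whose state records the current position together with whether the previous step was a $\w$ taken on the $x$-axis; a minimal Python sketch (the function and step names are ours):

```python
from collections import defaultdict

# Quarter-plane walks with steps E, W, SE, NW, where no W step on the
# x-axis is immediately followed by an E step.
STEPS = {'E': (1, 0), 'W': (-1, 0), 'SE': (1, -1), 'NW': (-1, 1)}

def count_walks(length, start=(0, 0), end=(0, 0)):
    """Count restricted walks of the given length from start to end."""
    states = defaultdict(int)
    states[(start, False)] = 1      # flag: previous step was a W at y = 0
    for _ in range(length):
        nxt = defaultdict(int)
        for ((x, y), blocked), c in states.items():
            for name, (dx, dy) in STEPS.items():
                if blocked and name == 'E':
                    continue        # forbidden W-then-E on the x-axis
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:
                    nxt[((nx, ny), name == 'W' and ny == 0)] += c
        states = nxt
    return sum(c for (pos, _), c in states.items() if pos == end)

print([count_walks(2 * n) for n in range(6)])  # -> [1, 1, 2, 8, 45, 307]
```

The values for $n\ge1$ agree with the sequence $\card{\IC(\bA_{n-1})}$ reported below.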
The functional equation for Gouyou-Beauchamps walks can be solved using the kernel method~\cite{MBM-MM}; alternatively, those ending at the origin are in bijection with pairs of nested Dyck paths, which can be easily counted by the Lindstr\"om--Gessel--Viennot lemma. However, we have not been able to solve the functional equation~\eqref{eq:F}. The term in the fourth line of the equation, which comes from our additional restriction, prevents some cancellations from occurring when we try to apply the kernel method. Still, equation~\eqref{eq:F} can be used to quickly generate hundreds of terms of the series expansion of $F(x,y,z)$ in the variable $z$. In particular, the values of $\card{\IC(\bA_{n-1})}$ for $1\le n\le 10$ are $$1, 2, 8, 45, 307, 2385, 20362, 186812, 1814156, 18448851.$$ \subsection{Counting interval-closed sets of the type $B$ root poset} \label{sec:typeB} \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.48] \foreach \y in {0,...,6} {\foreach \x in {0,...,\y} {\fill (\x + \y, \y - \x) circle (0.1cm) {}; \ifthenelse{\x < \y} {\draw (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x - 1);}{} \ifthenelse{\y < 6} {\draw (\x + \y, \y - \x) -- (\x + \y + 1, \y - \x+1);}{} } } \fill[blue] (4, 0) circle (0.2cm) {}; \fill[blue] (3, 1) circle (0.2cm) {}; \fill[blue] (2, 0) circle (0.2cm) {}; \fill[blue] (1, 1) circle (0.2cm) {}; \fill[blue] (2, 2) circle (0.2cm) {}; \fill[blue] (9, 1) circle (0.2cm) {}; \fill[blue] (8,0) circle (0.2cm) {}; \fill[blue] (10, 0) circle (0.2cm) {}; \fill[blue] (11, 1) circle (0.2cm) {}; \fill[blue] (10, 2) circle (0.2cm) {}; \draw[<->] (12.5,2)--(13.5,2); \end{tikzpicture} \quad \begin{tikzpicture}[scale=.5] \draw (-2,-2) -- (4,4); \draw (0,-2) -- (4,2); \draw (2,-2) -- (4,0); \draw (1,1) -- (4,-2);\draw (3,3) -- (4,2); \draw (0,0) -- (2,-2); \draw (-1,-1) -- (0,-2); \draw (2,2) -- (4,0); \fill (0,0) circle (0.1cm) {}; \fill (2,0) circle (0.1cm) {}; \fill (4,0) circle (0.1cm) {}; \fill (1,1) circle (0.1cm) {}; \fill (3,1) circle (0.1cm) {}; \fill (2,2) circle (0.1cm) {}; \fill
(4,2) circle (0.1cm) {}; \fill (3,3) circle (0.1cm) {}; \fill (4,4) circle (0.1cm) {}; \fill (0,-2) circle (0.1cm) {}; \fill (2,-2) circle (0.1cm) {}; \fill (4,-2) circle (0.1cm) {}; \fill (1,-1) circle (0.1cm) {}; \fill (3,-1) circle (0.1cm) {}; \fill (-1,-1) circle (0.1cm) {}; \fill (-2,-2) circle (0.1cm) {}; \fill[blue] (2, -2) circle (0.2cm) {}; \fill[blue] (1, -1) circle (0.2cm) {}; \fill[blue] (0, -2) circle (0.2cm) {}; \fill[blue] (0, 0) circle (0.2cm) {}; \fill[blue] (-1, -1) circle (0.2cm) {}; \end{tikzpicture} \end{center} \caption{A symmetric interval-closed set of the poset $\mathbf{A}_7$, along with the corresponding interval-closed set of $\mathbf{B}_4$.} \label{ex_typeB} \end{figure} One can study interval-closed sets of root posets of other Lie types. The next simplest is the type $B_n$ root poset, which is the poset constructed as half of the poset $\bA_{2n-1}$, as seen in Figure~\ref{ex_typeB}. We denote this poset by $\bB_n$. See~\cite[Section 4.6]{BjornerBrenti} for background on root posets and \cite[Appendix]{RingelRootPosets} for diagrams of root posets of classical type. As illustrated in Figure~\ref{ex_typeB}, interval-closed sets of $\bB_n$ are in bijection with vertically-symmetric interval-closed sets of $\bA_{2n-1}$. Note that in order for a symmetric ICS of $\bA_{m}$ to correspond to an ICS in a type $B$ root poset, $m$ must be odd. Below, we study symmetric ICS of $\bA_{m}$ for all $m$ (both odd and even) and then extract coefficients corresponding to ICS of type $B$ root posets. Using the same construction from Section~\ref{sec:functional_equation}, symmetric order ideals of $\bA_{n-1}$ (that is, those invariant under vertical symmetry) are in bijection with symmetric Dyck paths in $\Dn$. Similarly, symmetric ICS of $\bA_{n-1}$ are in bijection with pairs of symmetric paths $(B,T)\in\DDn$ such that, in each maximal block where $B$ and $T$ coincide, up-steps come before down-steps.
Such pairs of symmetric paths are uniquely determined by their left halves, which we denote by $(B_L,T_L)$. Each of $B_L$ and $T_L$ has $n$ steps from $\{\uu,\dd\}$, starts at the origin, and does not go below the $x$-axis. Because of the restrictions on $B$ and $T$, the path $B_L$ stays weakly below $T_L$, and in each maximal block where $B_L$ and $T_L$ coincide, up-steps come before down-steps. Additionally, $B_L$ and $T_L$ cannot end with a $\dd$ step where they coincide, since in the original paths $B$ and $T$, this $\dd$ would be followed by a $\uu$. Finally, we apply the map from equation~\eqref{eq:wi} to the pair $(B_L,T_L)$, and note that $\dd$ steps where $B_L$ and $T_L$ coincide translate into $\w$ steps of the walk on the $x$-axis. This yields a bijection between symmetric ICS of $\bA_{n-1}$ and lattice walks in the first quadrant starting at the origin, consisting of $n$ steps from the set $\{ \e,\w,\se,\nw \}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step, and not ending with a $\w$ step on the $x$-axis. These are walks in the set $\tWu_n$, defined in the proof of Theorem~\ref{thm:ICAn}, with the additional restriction that they cannot end with a $\w$ step on the $x$-axis. The generating function for such restricted walks can be expressed in terms of the generating function $F(x,y,z)$ from Theorem~\ref{thm:ICAn}. \begin{thm}\label{thm:BrootGF} The generating function of symmetric interval-closed sets of $\bA_{n-1}$ can be expressed as $$\sum_{n\ge0} \card{\IC_{\text{sym}}(\bA_{n-1})}z^{n}=F(1,1,z)-zF(1,0,z)+zF(0,0,z),$$ where $F(x,y):=F(x,y,z)$ satisfies the functional equation~\eqref{eq:F}. \end{thm} \begin{proof} From the generating function $F(x,y,z)$ defined in equation~\eqref{eq:Fdef}, we subtract the generating function for walks ending with a $\w$ step on the $x$-axis, which is $\frac{z}{x}\left(F(x,0,z)-F(0,0,z)\right)$. Setting $x=y=1$ to disregard the ending vertex, we obtain the desired expression.
\end{proof} Using that $\IC_{\text{sym}}(\bA_{2n-1})=\IC(\bB_n)$, the coefficients of the even powers of the generating function in Theorem~\ref{thm:BrootGF} give the sequence $\card{\IC(\bB_n)}$, whose values for $1\le n\le 9$ are $$ 2, 13, 115, 1166, 12883, 150912, 1844322, 23276741, 301289155.$$ \subsection{Generalization to truncated rectangles}\label{ssec:truncated_rectangles} The approach from Section~\ref{sec:functional_equation} can be generalized to count ICS of the poset $P_{m\times n;r}$ obtained by truncating the bottom $r$ ranks from $[m] \times [n]$ (see Figure~\ref{fig:truncated}). Throughout this section, we assume that $r\le\min(m,n)$. We will allow negative values of $r$, with the convention that $P_{m\times n;r}=P_{m\times n;0}$ if $r<0$. Note that $\bA_{n-1}=P_{n\times n;n}=P_{(n-1)\times (n-1);n-2}$. Using the bijection from Section~\ref{sec:rectangle}, order ideals of $P_{m\times n;r}$ are in bijection with lattice paths from $(0,n)$ to $(m+n,m)$ with steps $\uu$ and $\dd$ that do not go below the line $y=r$. Denote this set of paths by $\Lmnr$, and let $\LLmnr=\{(B,T):B,T\in\Lmnr, B\le T\}$. Interval-closed sets of $P_{m\times n;r}$ are in bijection with equivalence classes of pairs $(B,T)\in\LLmnr$, where the equivalence relation allows us to change the portions of $B$ and $T$ where these two paths coincide. As before, we pick a representative of each equivalence class by requiring that, in each maximal block of steps where $B$ and $T$ coincide, up-steps come before down-steps. Let $\W^{h,s}_{\ell}$ be the set of lattice walks in the first quadrant starting at $(h,0)$ and ending at $(s,0)$, and consisting of $\ell$ steps from the set $\{\e,\w,\se,\nw\}$. The same map described in equation~\eqref{eq:wi} gives a bijection between pairs $(B,T)\in\LLmnr$ and walks $W\in\W^{n-r,m-r}_{m+n}$.
The condition that $B$ does not go below $y=r$ translates into the fact that $W$ stays in $x\ge0$ (since it never moves more than $n-r$ units to the left of where it started), and the condition that $B\le T$ translates into the fact that $W$ stays in $y\ge0$. The above canonical representatives of the equivalence classes in $\LLmnr$ correspond to walks in $\W^{n-r,m-r}_{m+n}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step. Let $\tW^{n-r,m-r}_{m+n}$ be the subset of walks with this property. This discussion yields the following theorem. \begin{thm} \label{thm:walks_bijection_truncated} The set $\IC(P_{m\times n;r})$ of interval-closed sets of $P_{m\times n;r}$ is in bijection with the set $\tW^{n-r,m-r}_{m+n}$ of lattice walks in the first quadrant starting at $(n-r,0)$ and ending at $(m-r,0)$, and consisting of $m+n$ steps from the set $\{ \e,\w,\se,\nw \}$ where no $\w$ step on the $x$-axis is immediately followed by an $\e$ step. \end{thm} \begin{example} Figure~\ref{fig:truncated} shows an interval-closed set of $P_{4 \times 5; 1}$ with paths $T = \uu \dd \uu \dd \dd \uu \uu \dd \dd$ (in blue, dashed) and $B = \dd \dd \dd \dd \uu \uu \dd \uu \uu$ (in green). Its image is the quarter-plane walk $W = \nw\,\w\,\nw\,\w\,\se\,\e\,\nw\,\se\,\se$ from $(4,0)$ to $(3,0)$. 
\end{example} \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.5] \foreach \y in {0,...,4} {\foreach \x in {0,...,3} {\ifthenelse{\x > -\y}{ \fill[gray] (\x - \y, \y + \x) circle (0.1cm) {}; \ifthenelse{\x < 3} {\draw[gray] (\x - \y, \y + \x) -- (\x - \y + 1, \y + \x + 1);}{} \ifthenelse{\y < 4} {\draw[gray] (\x - \y, \y + \x) -- (\x - \y - 1, \y + \x+1);}{} }{} } } \fill[blue] (-1, 1) circle (0.2cm) {}; \fill[blue] (-2, 2) circle (0.2cm) {}; \fill[blue] (0, 2) circle (0.2cm) {}; \fill[blue] (2, 2) circle (0.2cm) {}; \fill[blue] (-3, 3) circle (0.2cm) {}; \fill[blue] (-1, 3) circle (0.2cm) {}; \fill[blue] (1, 3) circle (0.2cm) {}; \fill[blue] (3, 3) circle (0.2cm) {}; \fill[blue] (-4, 4) circle (0.2cm) {}; \fill[blue] (-2, 4) circle (0.2cm) {}; \fill[blue] (2, 4) circle (0.2cm) {}; \draw[ultra thick, babyblue, dashed] (-5,4)--(-4, 5)--(-3,4)--(-2,5)--(-1,4)--(0,3)--(1,4)--(2,5)--(3,4)--(4,3); \draw[ultra thick, darkgreen] (-5,4)--(-4, 3)--(-3,2)--(-2,1)--(-1,0)--(0,1)--(1,2)--(2,1)--(3,2)--(4,3); \draw[darkgreen] (-3.75, 1.75) node {\large $B$}; \draw[babyblue] (-3.75, 5.5) node { \large $T$}; \draw[->] (5,3.5)--(6,3.5); \end{tikzpicture} \quad \begin{tikzpicture}[scale=1.5,decoration={ markings, mark=at position 0.55 with {\arrow{>}}}] \draw[gray,dotted] (0,0) grid (4,2); \draw[darkgray,->] (0,-.25)--(0,2.25); \draw[darkgray,->] (-.25,0)--(4.25,0); \fill[black] (4, 0) circle (0.7mm) {}; \fill[black] (3, 0) circle (0.7mm) {}; \fill[black] (3, 1) circle (0.7mm) {}; \fill[black] (2, 1) circle (0.7mm) {}; \fill[black] (1, 1) circle (0.7mm) {}; \fill[black] (1, 2) circle (0.7mm) {}; \fill[black] (0, 2) circle (0.7mm) {}; \draw[thick,postaction={decorate}] (4, 0) to node[above,blue]{1} (3, 1); \draw[thick,postaction={decorate}] (3,1) to node[above,blue]{2} (2,1); \draw[thick,bend right=15,postaction={decorate}] (2,1) to node[above right,blue]{3,7} (1,2); \draw[thick,postaction={decorate}] (1, 2) to node[above,blue]{4} (0, 2); \draw[thick,postaction={decorate}] (0,2) to node[below left,blue]{5} (1,1);
\draw[thick,postaction={decorate}] (1,1) to node[below,blue]{6} (2,1); \draw[thick,bend right=15,postaction={decorate}] (1,2) to node[below left,blue]{8} (2,1); \draw[thick,postaction={decorate}] (2,1) to node[below left,blue]{9} (3,0); \draw (4,0) node[below]{$(4,0)$}; \draw (3,0) node[below]{$(3,0)$}; \end{tikzpicture} \end{center} \caption{An interval-closed set of the poset $P_{4 \times 5; 1}$ with associated lattice paths $T$ and $B$, along with the associated lattice walk. } \label{fig:truncated} \end{figure} We use this bijection to prove the following functional equation for the generating function. \begin{thm}\label{thm:ICP} The generating function of interval-closed sets of truncated rectangles can be expressed as $$\sum_{m,n\ge \max(0, r)} \card{\IC(P_{m\times n;r})}t^{n-r}x^{m-r}z^{m+n}=G(t,x,0,z),$$ where $G(x,y):=G(t,x,y,z)$ satisfies the functional equation \[ G(x,y)= \ \frac{1}{1-tx}+z\left(x+\frac{1}{x}+\frac{x}{y}+\frac{y}{x}\right)G(x,y) - z \left(\frac{1}{x}+\frac{y}{x}\right)G(0,y) - z\, \frac{x}{y} G(x,0) - z^2\, \left(G(x,0)-G(0,0)\right). \] \end{thm} \begin{proof} By the above bijection, we have $\card{\IC(P_{m\times n;r})}=\card{\tW^{n-r,m-r}_{m+n}}$. Let $h=n-r$, $s=m-r$ and $\ell=m+n$. The inverse of this change of variables is $m=\frac{\ell+s-h}{2}$, $n=\frac{\ell-s+h}{2}$, $r=\frac{\ell-s-h}{2}$. Note that $\tW^{h,s}_{\ell}$ is empty unless $h+s+\ell\equiv0\bmod2$. To enumerate walks in $\tW^{h,s}_{\ell}$, we will consider more general walks that are not restricted to ending at a particular point. Let $\tW^{h}_{\ell}$ be the set of walks in the first quadrant, consisting of $\ell$ steps from among $\e,\w,\se,\nw$, starting at $(h,0)$, and with no $\w$ step on the $x$-axis immediately followed by an $\e$ step.
Define the generating function $$G_h(x,y,z)=\sum_{i,j,\ell\ge0}\card{\{W\in\tW^{h}_{\ell} \text{ ending at }(i,j)\}}\,x^iy^jz^\ell.$$ Now let $$G(t,x,y,z)=\sum_{h\ge0} G_h(x,y,z)\,t^h,$$ whose evaluation at $y=0$ equals $$G(t,x,0,z)=\sum_{h,s,\ell\ge0}\card{\tW^{h,s}_{\ell}}t^hx^sz^\ell=\sum_{\substack{h,s,\ell\ge0\\ h+s+\ell\equiv0\bmod2}}\card{\IC(P_{\frac{\ell+s-h}{2}\times\frac{\ell-s+h}{2};\frac{\ell-s-h}{2}})}t^hx^sz^{\ell}.$$ Note that, by definition, we have the symmetry $G(t,x,0,z)=G(x,t,0,z)$. A similar argument to the one we used in the proof of Theorem~\ref{thm:ICAn}, by considering the possible last step of walks in $\tW^{h,s}_{\ell}$, shows that the generating function $G(x,y):=G(t,x,y,z)$ satisfies the stated functional equation. \end{proof} We have not been able to solve the functional equation in Theorem~\ref{thm:ICP}, but we can use it to compute hundreds of terms of the series expansion in the variable $z$. The first few terms of the series expansion of $G(t,x,0,z)$ are \begin{multline*}\frac{1}{1-tx}\left(1+(t+x)\,z+(1+tx+t^2+x^2)\,z^2+ (2t+2x+2t^2x+2tx^2+t^3 + x^3)\,z^3\right.\\ \left.+(2+6tx+4t^2+4x^2+3t^3x+4tx^3+5t^2x^2+t^4+x^4)\,z^4+ \dots\right).\end{multline*} The factor $\frac{1}{1-tx}$ appears because an ICS of $P_{m\times n;r}$ can also be viewed as an ICS of $P_{m\times n;r'}$ for any $r'<r$. For example, for the poset $P_{3\times2;1}$, we have $h=2-1=1$, $s=3-1=2$, and $\ell=3+2=5$, so $\card{\IC(P_{3\times2;1})}$ is the coefficient of $tx^2z^5$ in $G(t,x,0,z)$, which is 24. For the poset $P_{n\times n;n}$, we have $h=0$, $s=0$, and $\ell=2n$. In this case, $G_0(x,y,z)$ is the generating function $F(x,y,z)$ from Theorem~\ref{thm:BrootGF}, and the coefficient of $x^0$ in $G_0(x,0,z)$ is simply $F(0,0,z)$. 
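The coefficients above can be checked by brute force. The following Python sketch (ours, not part of the paper's SageMath computations; the function name \texttt{count\_walks} is an assumption for illustration) enumerates the walks of Theorem~\ref{thm:walks_bijection_truncated} directly and confirms that $\card{\tW^{1,2}_{5}}=24$, the coefficient of $tx^2z^5$ identified above.

```python
from itertools import product

# Steps e, w, se, nw as (dx, dy) displacements.
STEPS = {"e": (1, 0), "w": (-1, 0), "se": (1, -1), "nw": (-1, 1)}

def count_walks(h, s, length):
    """Count quarter-plane walks from (h,0) to (s,0) with `length` steps
    from {e,w,se,nw}, where no w step taken on the x-axis is immediately
    followed by an e step (the set tW^{h,s}_length of the theorem)."""
    total = 0
    for word in product(STEPS, repeat=length):
        x, y = h, 0
        ok = True
        prev_w_on_axis = False
        for step in word:
            # Forbidden pattern: a w step at height 0 immediately followed by e.
            if prev_w_on_axis and step == "e":
                ok = False
                break
            dx, dy = STEPS[step]
            x, y = x + dx, y + dy
            if x < 0 or y < 0:  # must stay in the first quadrant
                ok = False
                break
            prev_w_on_axis = (step == "w" and y == 0)
        if ok and (x, y) == (s, 0):
            total += 1
    return total

# |IC(P_{3x2;1})| corresponds to h = n-r = 1, s = m-r = 2, l = m+n = 5.
print(count_walks(1, 2, 5))  # prints 24
```

The same routine also reproduces the constant coefficient of $z^2$ in the series expansion: the only two-step walk from $(0,0)$ back to itself is $\e\,\w$.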
\subsection{Translating statistics between interval-closed sets and quarter-plane walks} \label{sec:quarter_plane_stats} Similarly to Theorem~\ref{thm:Motzkin_stats_bijection}, the quarter-plane walks obtained via the bijection from Theorem~\ref{thm:walks_bijection_truncated} give us information about the associated interval-closed set. In the next theorem, a return of the walk to the $x$-axis is a $\se$ step ending on the $x$-axis, and a return to the $y$-axis is a $\w$ or $\nw$ step ending on the $y$-axis (we use this term even if the walk did not start on the $y$-axis). \begin{thm} \label{statistics_walks} Let $I\in\IC(P_{m\times n;r})$, and let $W\in\tW^{n-r,m-r}_{m+n}$ be its image under the bijection from Theorem~\ref{thm:walks_bijection_truncated}. Then, \begin{enumerate}[label=(\alph*)] \item the cardinality of $I$ is the sum of the heights ($y$-coordinates) after each step of $W$, \item the number of connected components of $I$ is the number of returns of $W$ to the $x$-axis, and \item the number of minimal elements of $P_{m\times n;r}$ that are in $I$ is the number of returns of $W$ to the $y$-axis, not counting its last step. \end{enumerate} \end{thm} \begin{proof} Let $I\in\IC(P_{m\times n;r})$, and let $B$ and $T$ be the nested lattice paths from the bijection in Subsection~\ref{ssec:truncated_rectangles}. These are the same paths that we would have associated to $I$ in Subsection~\ref{ssec:bicolored} by viewing $I$ as an interval-closed set of $[m]\times[n]$. Therefore, as in the proof of Theorem~\ref{thm:Motzkin_stats_bijection}, we have $|I|=\frac{1}{2} \sum_i d_i(B,T)$, where $d_i(B,T)$ is the distance between $B$ and $T$ after $i$ steps. Let us show that $\frac{1}{2}d_i(B,T)$ is also the $y$-coordinate at the end of the $i$-th step of the walk $W$. As in the proof of Theorem~\ref{thm:Motzkin_stats_bijection}, $\frac{1}{2}d_i(B,T)$ is the difference between the number of $\uu$ steps of $B$ and $T$ within their first $i$ steps. 
Using equation~\eqref{eq:wi}, this difference is equal to the number of $\nw$ steps (which occur in positions where $T$ has a $\uu$ step but $B$ does not) minus the number of $\se$ steps (which occur in positions where $B$ has a $\uu$ step but $T$ does not) within the first $i$ steps of $W$, which in turn equals the $y$-coordinate of $W$ after $i$ steps, noting that $W$ starts on the $x$-axis. Summing over $i$, we obtain part~(a). The connected components of $I$ correspond to the maximal blocks where $B$ is strictly below $T$, or equivalently, $W$ is strictly above the $x$-axis. Part~(b) follows. To prove part~(c), observe that, after any given number of steps, the height of the path $B$ always equals the $x$-coordinate of the walk $W$. This is because $W$ starts at $(n-r,0)$ and, by equation~\eqref{eq:wi}, its $i$th step $w_i$ is $\se$ or $\e$ if $b_i=\uu$, and it is $\nw$ or $\w$ if $b_i=\dd$. An element of $P_{m\times n;r}$ is in $I$ if and only if it is above $B$ and below $T$. With the exception of its last step, the path $B$ goes below a minimal element when it reaches height $0$, which happens precisely when $W$ returns to the $y$-axis. By construction, the path $T$ does not lie below any minimal element of the poset. Hence, the minimal elements of $P_{m\times n;r}$ that are in $I$ correspond exactly to the returns of $W$ to the $y$-axis, not counting the last step of~$W$. \end{proof} \begin{example} Let $I$ be the interval-closed set of the poset $\mathbf{A}_5$ drawn in Figure \ref{ex_typeA}. Then $|I|=3$, which equals the sum of the heights of the corresponding walk $W$ after steps 3, 4 and 9, all at height $1$, since all the other steps end at height $0$. Also, $I$ has two connected components, corresponding to the two returns of $W$ to the $x$-axis, after steps 5 and 10. 
Finally, $I$ contains one of the minimal elements of the poset, corresponding to the return of $W$ to the $y$-axis after step 4 (the other return, after step 12, is not counted since it is the last step). \end{example} \begin{example} Let $I$ be the interval-closed set of $P_{4\times 5;1}$ drawn in Figure \ref{fig:truncated}. Then $|I|=11$, which equals the sum of the heights of the steps of the corresponding walk $W$ (three steps ending at height~$2$, and five steps ending at height~$1$). Here $I$ has only one connected component, and $W$ returns to the $x$-axis only once (at the end). Finally, $I$ contains one minimal element of the poset, which corresponds to the return of $W$ to the $y$-axis after step $4$. \end{example} \section{Future directions} \label{sec:future} It would be interesting to enumerate interval-closed sets of other posets. One natural generalization of the poset $[m]\times[n]$ is the product of three chains $[\ell]\times[m]\times[n]$. Table~\ref{tab:ICS_2_by_m_by_n} lists the number of interval-closed sets of $[2]\times[m]\times[n]$ for small values of $m$ and $n$, computed in SageMath~\cite{sage}. Other posets of interest include root and minuscule posets of other Lie types. \begin{table}[htbp] \centering \begin{tabular}{r|c|c|c|c|c|c|c} $m$\ \textbackslash\ $n$ & 2 &3&4&5& 6 & 7 & 8\\ \hline 2 & 101 &526 & 2,085& 6,793 & 19,100 & 47,883 & 109,501\\ 3& 526& 5,030 & 33,792 &175,507 &749,468 & 2,743,751 &8,870,441\\ 4 &2,085 & 33,792& 361,731 & 2,851,562 & 17,768,141 & 91,871,593 &408,168,856\\ 5 &6,793&175,507&2,851,562&32,797,595& 288,594,237 &2,050,193,127 & 12,225,400,806 \end{tabular} \caption{The number of interval-closed sets of $[2]\times [m]\times [n]$ for small values of $m$ and $n$.} \label{tab:ICS_2_by_m_by_n} \end{table} \section*{Acknowledgements} The authors thank Torin Greenwood for helpful conversations and the developers of SageMath~\cite{sage}, which was useful in this research. 
This work benefited from opportunities to present and collaborate at the Fall 2023 AMS Central Sectional Meeting as well as the June 2024 conference on Statistical and Dynamical Combinatorics at MIT. Lewis was partially supported by Simons Collaboration Grant \#634530 and gift MPS-TSM-00006960. Elizalde was partially supported by Simons Collaboration Grant \#929653. Striker was supported by a Simons Foundation gift MP-TSM-00002802 and NSF grant DMS-2247089. \bibliographystyle{abbrv} \bibliography{master} \end{document}
2412.16437v1
http://arxiv.org/abs/2412.16437v1
Central limit theorem for periodic solutions of stochastic differential equations driven by Levy noise
\documentclass[final,1p,times]{elsarticle} \usepackage{amssymb} \usepackage{amsthm} \usepackage{latexsym} \usepackage{amsmath} \usepackage{color} \usepackage{graphicx} \usepackage{indentfirst} \usepackage{mathrsfs} \usepackage{lipsum} \allowdisplaybreaks \def\e{\varepsilon} \renewcommand\thesection{\arabic{section}} \renewcommand\theequation{\thesection.\arabic{equation}} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \renewcommand{\thefootnote}{\arabic{footnote}} \newtheorem{theorem}{\color{black}\indent \textbf{Theorem}}[section] \newtheorem{lemma}{\color{black}\indent Lemma}[section] \newtheorem{proposition}{\color{black}\indent Proposition}[section] \newtheorem{definition}{\color{black}\indent Definition}[section] \newtheorem{remark}{\color{black}\indent Remark}[section] \newtheorem{corollary}{\color{black}\indent Corollary}[section] \newtheorem{example}{\color{black}\indent Example}[section] \newtheorem{condition}{\color{black}\indent Condition}[section] \DeclareMathOperator{\diag}{{diag}} \newcommand\blfootnote[1]{ \begingroup \renewcommand\thefootnote{}\footnote{#1} \addtocounter{footnote}{-1} \endgroup } \begin{document} \begin{frontmatter} \title{Central limit theorem for periodic solutions of stochastic differential equations driven by L{\'e}vy noise} \author{{ \blfootnote{$^{*}$Corresponding author.} Xinying Deng$^{a}$\footnote{ E-mail address : [email protected]},~ Yong Li$^{b,c*}$} \footnote{E-mail address : [email protected]}, ~ Xue Yang$^{b}$\footnote{ E-mail address : [email protected]}. \\ {$^{a}$School of Mathematics and Statistics, Northeast Normal University,} {Changchun, $130024$, P. R. China.}\\ {$^{b}$School of Mathematics, Jilin University,} {Changchun, $130012$, P. R. China.}\\ {$^{c}$ Center for Mathematics and Interdisciplinary Sciences, Northeast Normal University,} {Changchun, $130024$, P. R. China. 
} } \begin{abstract} Through appropriate constructions, we establish periodic solutions in distribution for some stochastic differential equations with infinite-dimensional L{\'e}vy noise. Additionally, we obtain the corresponding periodic measures and periodic transition semigroup. Under suitable conditions, we also establish a contractivity property on the space of probability measures. By constructing an appropriate invariant measure, we standardize the observation functions. Utilizing the classical martingale approximation approach, we establish the law of large numbers and the central limit theorem. {\bf Keywords} {periodic measure, martingale approximation theorem, strong law of large numbers, central limit theorem.} \end{abstract}\end{frontmatter} \section{Introduction} L{\'e}vy processes are stochastic processes characterized by stationary and independent increments. They are particularly valuable as they can capture the discontinuous and abrupt fluctuations encountered in practical scenarios. From a mathematical perspective, L{\'e}vy processes represent a significant class of semimartingales and Markov processes, encompassing Wiener processes and Poisson processes as notable special cases. Therefore, L{\'e}vy processes are important both in theory and in practical applications. For a comprehensive treatment of the theory, and some computational methods, see, e.g., \cite{ref51}; for the properties of solutions to some stochastic differential equations, see, e.g., \cite{ref31}; for the asymptotic stability of the solutions to some semilinear stochastic differential equations with infinite-dimensional L{\'e}vy noise, see \cite{ref40}. As widely recognized, the strong law of large numbers and the central limit theorem for ergodic Markov processes have garnered significant attention over the decades, dating back to \cite{ref1} in 1937.
In the case of time homogeneity, the strong law of large numbers (SLLN) and the central limit theorem (CLT) show that the asymptotic behavior of an observation along a Markov process can be characterized by the invariant measure, a concept elucidated in the earlier work of Kuksin and Shirikyan \cite{ref22}. But what about inhomogeneous systems? In 1956, Dobrushin proved a definitive central limit theorem for inhomogeneous Markov chains, which played a key role in later research on the strong law of large numbers and the central limit theorem. It is well known that the Markov property plays a crucial role in the study of stochastic differential equations: the future evolution depends only on the current state, not on the past. Another important property is time homogeneity, which means that the transition probability from state $i$ to state $j$ depends only on the length of the time interval and not on the starting time. The strong law of large numbers and the central limit theorem for homogeneous processes have been studied extensively; see, e.g., \cite{ref90}\cite{ref16}. However, there has been limited research on the inhomogeneous case. When the system lacks time homogeneity, meaning that the drift and diffusion coefficients are time-dependent, many of the advantageous properties of the system become unavailable. Faced with these challenges, we seek to identify properties that compensate for the shortcomings arising from the absence of time homogeneity. If we can address this issue, we can generalize many conclusions from time-homogeneous systems to inhomogeneous ones. Inspired by \cite{ref4}, we consider the periodicity of the system in the sense of distribution.
For a periodic system driven by a L{\'e}vy process, the dynamic behaviors in two periods can be entirely different due to the independent increment property of the L{\'e}vy process; in other words, the paths are not equal almost everywhere, but their probability distributions are the same. One of the aims of this article is to investigate the SLLN and CLT for inhomogeneous Markov processes driven by L{\'e}vy noise. The inconveniences of inhomogeneity can be addressed by introducing periodicity. Since the notion was introduced by Khasminskii \cite{ref15}, periodic solutions in the context of periodic Markov processes have been studied extensively. The paper \cite{ref4} explored stochastic periodic solutions in distribution to the Fokker-Planck equation, assuming an unconventional Lyapunov condition. With a growing focus on stochastic periodic solutions in distribution, several contributions have been made in the literature. For instance, see \cite{ref41} for affine periodic solutions of stochastic differential equations via a LaSalle-type stationary oscillation principle; \cite{ref33} for periodic solutions in distribution of mean-field stochastic differential equations; \cite{ref5} for the ergodicity of a periodic probability measure for SDEs; \cite{ref13} for random periodic solutions of semilinear stochastic differential equations; and \cite{ref24} for a unique ergodic invariant measure for the incompressible 2D Navier-Stokes equations with periodic boundary conditions, the first work to extend the results of \cite{ref38}\cite{ref36}\cite{ref20} to the time-inhomogeneous setting on the torus with highly degenerate noise. Inspired by this, and combining it with periodic solutions in distribution, we derive a periodic measure and an invariant measure, the latter being ergodic.
Using the Wasserstein metric, we establish a certain contractivity on the space of probability measures, ensuring the ergodicity of the invariant measure and thus exponential convergence for a specific class of observation functions. Employing the method of martingale approximation, which traces back to the work of Gordin and Lif{\v{s}}ic \cite{ref16} and of Kipnis and Varadhan \cite{ref23}, we achieve the desired SLLN and CLT by analyzing a special martingale (the residual term is negligible; see Lemma \ref{26}). The rest of the paper is arranged as follows. In Section 2, we introduce some notation that will be used throughout the paper. In Section 3, we introduce definitions essential for the study of periodic solutions in distribution and periodic measures. We also give certain assumptions under which we derive moment estimates of the solutions and a unique periodic solution in the sense of distribution. Additionally, we establish contractivity properties on the space of probability measures, which are crucial for subsequent estimations. In Section 4, we consider a special case of the original equation. By means of classical martingale approximation, we derive the strong law of large numbers in the space of weighted observation functions. In Section 5, inspired by \cite{ref25}, we obtain the central limit theorem by validating Lindeberg-type conditions and integrating them with the law of large numbers for the conditioned martingale differences. For clarity, we include in the appendix the fundamental concepts and lemmas necessary for the proofs.
\section{Preliminaries} We consider the following SDE for the $\mathbb{R}^d$-valued stochastic process $X$ with L{\'e}vy noise: \begin{align}\label{1} dX(t) = f(t, X(t))dt + g(t, X(t))d L(t), \end{align} where $f$ is $\mathbb{R}^d$-valued, $g$ is $\mathbb{R}^{d \times d}$-valued, and $L$ is a two-sided $\mathbb{R}^{d}$-valued L{\'e}vy process (for more details, see Definition \ref{3}) defined on $(\Omega, \mathcal{F}, \mathbb{P}, (\mathcal{F}_t)_{t \in \mathbb{R}})$, i.e., \begin{equation*} \begin{aligned} L(t) : = \begin{cases} L_1(t), & t \geq 0,\\ -L_2(-t), & t < 0, \end{cases} \end{aligned} \end{equation*} where $L_1$ and $L_2$ are two independent, identically distributed L{\'e}vy processes, involving $b, Q, W, N$ in Proposition \ref{2}. For convenience, we assume that trace $Q < \infty$. With the assistance of \eqref{4}, we derive \begin{align*} \int_{|x| \geq 1} \nu(dx) < \infty, \end{align*} and we denote this finite quantity by $e := \int_{|x| \geq 1} \nu(dx)$. Utilizing the L{\'e}vy-It{\^o} decomposition (Proposition \ref{2}), \eqref{1} can be written as \begin{align*} dX(t) &= (f(t, X(t)) + g(t, X(t))b) dt + g(t, X(t))dW(t) +\int_{|x| < 1} g(t, X(t-))x N_1(dt, dx) \\&~~+ \int_{|x| \geq 1}g(t, X(t-)) xN(dt, dx).
\end{align*} More generally, we consider the following SDE: \begin{align}\label{6} dX(t) &= f(t, X(t))dt +g(t, X(t))dW(t)+ \int_{|x| < 1}F(t, X(t-), x)N_1(dt, dx)\notag\\&~~+ \int_{|x|\geq 1}G(t, X(t-), x)N(dt, dx), \end{align} where \begin{align*} &f: \mathbb{R}^+ \times \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d) \rightarrow \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d) ,\\& g: \mathbb{R}^+ \times \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d) \rightarrow \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^{d \times d}),\\& F: \mathbb{R}^+ \times \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d) \times \mathbb{R}^d \rightarrow \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d),\\& G: \mathbb{R}^+ \times \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^d) \times \mathbb{R}^d \rightarrow \mathcal{L}^2 (\mathbb{P}, \mathbb{R}^{d \times d}). \end{align*} Throughout the paper, we define $C_b(\mathbb{R}^d)$ as the space of all bounded and continuous functions. Let $C_{b L}(\mathbb{R}^d)$ be the space of all bounded, Lipschitz continuous functions on $\mathbb{R}^d$ endowed with the norm $||\cdot||_{bL}$ given by \begin{align*} ||f||_{bL} := \sup_{x \in \mathbb{R}^d}|f(x)| + \sup_{x_1, x_2 \in \mathbb{R}^d, x_1 \neq x_2}\frac{|f(x_1) - f(x_2)|}{|x_1 -x_2|}. \end{align*} The scalar product and norm in $\mathbb{R}^d$ are denoted by $\langle \cdot, \cdot \rangle$ and $|\cdot|$ respectively. As usual, $\mathbb{E}_x$ denotes the expectation with respect to a stochastic process $\{X(t)\}_{t \geq 0}$ when its initial value is $X(0) = x \in \mathbb{R}^d$. For $p > 0$, $\mathcal{L}^p(\Omega, \mathbb{R}^d)$ denotes the space of all $\mathbb{R}^d$-valued random variables $\xi$, such that $\mathbb{E}|\xi|^p = \int_{\Omega} |\xi|^p d\mathbb{P} < \infty$.
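For intuition about the metric $d_L$ recalled in the next paragraph: by Kantorovich--Rubinstein duality, the supremum of $\int f\,d\mu_1-\int f\,d\mu_2$ over $1$-Lipschitz $f$ equals the $1$-Wasserstein distance, and for two empirical measures on $\mathbb{R}$ with the same number of atoms this reduces to the mean absolute difference of the sorted samples. A minimal Python sketch (ours, for illustration only; the paper works on $\mathbb{R}^d$, and the function name \texttt{d\_L} is an assumption):

```python
def d_L(xs, ys):
    """1-Wasserstein distance between two empirical measures on R with the
    same number of atoms.  By Kantorovich-Rubinstein duality this equals
    sup over 1-Lipschitz f of |int f dmu1 - int f dmu2|; on the line the
    optimal coupling is the monotone one, so sorting the samples suffices."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Translating an empirical measure by c moves it exactly distance |c|:
print(d_L([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # prints 0.5
```

This one-dimensional shortcut is only a sanity check on the definition; the contractivity arguments later in the paper use the dual (supremum) form of $d_L$ directly.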
$\mathcal{P}(\mathbb{R}^d)$ is the space of probability measures on $\mathbb{R}^d$, and for any $\mu_1, \mu_2 \in \mathcal{P}(\mathbb{R}^d)$, we introduce a Wasserstein metric: \begin{align*} d_L(\mu_1, \mu_2) = \sup_{Lip(f) \leq 1} \left|\int_{\mathbb{R}^d} f(x) \mu_1(dx) - \int_{\mathbb{R}^d} f(x) \mu_2(dx) \right|, \end{align*} where $$Lip(f):= \sup_{x_1, x_2 \in \mathbb{R}^d, x_1 \neq x_2} \frac{|f(x_1) - f(x_2)|}{|x_1 - x_2|}.$$ \section{Existence of $\tau$-periodic measure} We now introduce some definitions that will be used in this section. \begin{definition} An $\mathbb{R}^d$-valued stochastic process $X(t)$ is said to be $\tau$-periodic in distribution if its distribution $\mu_{X(\cdot)} : \mathbb{R} \rightarrow \mathcal{P}(\mathbb{R}^d)$ is a $\tau$-periodic function, i.e.,\begin{align*} \mu_{X(t +\tau)}(A) = \mu_{X(t)}(A), ~~~~\forall t \in \mathbb{R}, A \in \mathcal{B}(\mathbb{R}^d),\end{align*} where $\mathcal{B}(\mathbb{R}^d)$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$, $\mu_{X(t)} := \mathbb{P} \circ [X(t)]^{-1}$. \end{definition} \begin{definition} Let $X_{\xi}(t)$ be the solution to \eqref{6} with initial value $\xi \in \mathbb{R}^d$. It is said to be a $\tau$-periodic solution in distribution provided that the conditions below hold: (H1) $X_{\xi}(t)$ is $\tau$-periodic in distribution; (H2) There exist a stochastic process $\tilde{W}$ with the same distribution as $W$ and a Poisson random measure $\tilde{N}$ with the same distribution as $N$, with compensated Poisson random measure $\tilde{N}_1$, such that $X_{\xi}(t+\tau)$ is a solution of the following: \begin{align*} dY(t)& = f(t, Y(t)) dt +g(t, Y(t)) d\tilde{W}(t) +\int_{|x| < 1}F(t, Y(t-), x)\tilde{N}_1(dt, dx)\\&~~+\int_{|x|\geq 1} G(t, Y(t-), x) \tilde{N}(dt, dx).
\end{align*} \end{definition} \begin{definition} A sequence of measures $\{\mu_n\} \subset \mathcal{P}(\mathbb{R}^d)$ is said to be weakly convergent to a measure $\mu$, if for any $\phi \in C_b(\mathbb{R}^d)$, \begin{align*} \int_{\mathbb{R}^d} \phi(x)\mu_n(dx) \rightarrow \int_{\mathbb{R}^d} \phi(x)\mu(dx), ~~~~n \rightarrow \infty. \end{align*} For convenience, we also denote it by $\mu_n \overset{w}{\rightarrow}\mu.$ \end{definition} \begin{definition} A sequence of $\mathbb{R}^d$-valued stochastic processes $\{Y_n\}$ is said to be convergent in distribution to an $\mathbb{R}^d$-valued stochastic process $Y$, if for all $t \in \mathbb{R}$, \begin{align*} \mu_{Y_n(t)} \overset{w}{\rightarrow}\mu_{Y(t)}. \end{align*} For convenience, we also denote it by \begin{align*} Y_n \overset{\mathcal{D}}{\rightarrow}Y. \end{align*} \end{definition} We now make some hypotheses needed for the subsequent work. We also assume for convenience that the initial time is 0. (H3) $f, g, F, G$ in \eqref{6} are $\tau$-periodic in $t \in \mathbb{R}$, i.e., for any $x \in \mathbb{R}^d$ \begin{align*} f(t, x) &= f(t+\tau, x), ~~~~~~~~~g(t, x) = g(t+\tau, x),\\ F(t, x, u) &= F(t+\tau, x, u), ~~~~G(t, x, u) = G(t+\tau, x, u). \end{align*} (H4) There exists a positive constant $M$ such that for all $t \geq 0$ and $2\leq p \leq 4$, \begin{align*} |f(t, 0)|^p \vee |g(t, 0)|^p \vee \int_{|u|< 1} \left|F(t, 0, u)\right|^p \nu(du) \vee \int_{|u| \geq 1} \left|G(t, 0, u)\right|^p\nu(du) \leq M^p.
\end{align*} (H5) There exists a positive constant $L$ such that for any $x_1, x_2 \in \mathbb{R}^d$ and $2 \leq p \leq 4$, \begin{align*} |f(t, x_1) - f(t, x_2)|^p &\vee |g(t, x_1) - g(t, x_2)|^p \vee \int_{|u|\leq 1}|F(t, x_1, u) - F(t,x_2, u)|^p \nu(du) \\&\vee \int_{|u| \geq 1} |G(t, x_1, u) - G(t, x_2, u)|^p \nu(du) \leq L^p|x_1 - x_2|^p.\end{align*} Based on (H4) and (H5), the existence and uniqueness of strong solutions for \eqref{6} with initial value $\xi \in \mathcal{L}^2(\mathbb{P}, \mathbb{R}^d)$ can be established; the solution will be denoted by $X_{\xi}(t)$. For more details, see Theorem $3.1$ in \cite{ref31}. For $t \in [0, \tau)$, let \begin{align*}(Y^0(t), \tilde{W}^0(t), \tilde{N}_1^0(dt, dx), \tilde{N}^0(dt, dx)) &= (X_{\xi}(t), W(t), N_1(dt, dx),N(dt, dx)),\\ W^1(t) &= W(t+\tau) - W(\tau),\\ N^1(t, x) &= N(t+\tau, x) - N(\tau, x),\\ N_1^1(t, x) &= N_1(t+\tau, x) - N_1(\tau, x).\end{align*} Then \begin{align*} &X_{\xi}(t+\tau)\\& =\xi + \int_0^{t+\tau} f(r, X_{\xi}(r)) dr + \int_{0}^{t+\tau} g(r, X_{\xi}(r))dW(r) +\int_{0}^{t+\tau} \int_{|x| < 1}F(r, X_{\xi}(r-), x) N_1(dr, dx)\\& ~~~~~~+ \int_0^{t+\tau}\int_{|x|\geq 1}G(r, X_{\xi}(r-), x)N(dr, dx)\\&= X_{\xi}(\tau) +\int_{\tau}^{t+\tau} f(r, X_{\xi}(r))dr +\int_{\tau}^{t+\tau}g(r, X_{\xi}(r))dW(r) +\int_{\tau}^{t+ \tau} \int_{|x| < 1}F(r, X_{\xi}(r-), x) N_1(dr, dx) \\&~~~~~~+ \int_{\tau}^{t+\tau} \int_{|x| \geq 1}G(r, X_{\xi}(r-), x)N(dr, dx) \\& \overset{\mathcal{D}}{=} X_{\xi}(\tau) +\int_0^t f(u+\tau, X_{\xi}(u+\tau)) du + \int_{0}^t g(u+\tau, X_{\xi}(u+\tau)) d(W(u+\tau)- W(\tau))\\& ~~~~~~+\int_0^t \int_{|x| < 1}F(u+\tau, X_{\xi}(u+\tau-), x)d(N_1(u+\tau, x) - N_1(\tau, x))\\&~~~~~~+\int_0^t \int_{|x|\geq 1}G(u+\tau, X_{\xi}(u+\tau-), x) d(N(u+\tau, x) - N(\tau, x))\\& = X_{\xi}(\tau) +\int_0^t f(u+\tau, X_{\xi}(u+\tau)) du + \int_{0}^t g(u+\tau, X_{\xi}(u+\tau)) dW^1(u)\\& ~~~~~~+\int_0^t \int_{|x| < 1}F(u+\tau, X_{\xi}(u+\tau-), x)N_1^1(du, dx)+\int_0^t \int_{|x|\geq 1}G(u+\tau,
X_{\xi}(u+\tau-), x) N^1(du, dx), \end{align*} where the third equality holds in the sense of distribution. Let $Y^1(t) = X_{\xi}(t+\tau)$; then $(Y^1(t), W^1(t), N_1^1(t, x), N^1(t, x))$ is a weak solution to \eqref{6}. By continuing this process, let \begin{align*}Y^k(t) &= X_{\xi}(t+k\tau),\\ W^k(t)&= W(t+k\tau)- W(k\tau),\\ N_1^k(t, x) &= N_1(t+k\tau, x) - N_1(k\tau, x),\\ N^k(t, x) &= N(t+k\tau, x) - N(k\tau, x).\end{align*} Then $(Y^k(t), W^k(t), N_1^k(t,x), N^k(t, x))_{k \in \mathbb{N}}$ also satisfy \eqref{6}. The following theorem states that, under appropriate conditions, the solutions of \eqref{6} have finite $p$th moments within a finite time interval, where $2 \leq p \leq 4$. \begin{theorem}\label{5} Suppose that (H4)-(H5) hold. For $2 \leq p \leq 4, \xi \in \mathcal{L}^p(\mathbb{P}, \mathbb{R}^d)$, $s \in [0, \tau]$, we have $$\mathbb{E}(\sup_{0 \leq t \leq s}|X_{\xi}(t)|^p) \leq (1+5^{p-1}\mathbb{E}|\xi|^p)e^{as},$$ where $a= 5^{p-1}(L^p +M^p)2^{\frac{p}{2}-1}\left[(1+(2e)^{p-1})\tau^{p-1}+ 2(1+2^{p-2})\left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}}\tau^{\frac{p-2}{2}} \right]$. \end{theorem} \begin{proof}Note that \begin{align*} &\mathbb{E}(\sup_{0 \leq t\leq s}|X_{\xi}(t)|^p)\\&\leq 5^{p-1} \mathbb{E}|X_{\xi}(0)|^p +5^{p-1} \mathbb{E}\left(\int_0^s |f(r, X_{\xi}(r))| dr\right)^p +5^{p-1} \mathbb{E}\left(\sup_{0\leq t \leq s} |\int_0^t g(r, X_{\xi}(r))dW(r)|^p\right) \\&~~+5^{p-1} \mathbb{E}\left (\sup_{0\leq t\leq s} |\int_0^t \int_{|x| <1}F(r, X_{\xi}(r-), x) N_1(dr, dx)|^p\right)\\ &~~+5^{p-1} \mathbb{E}\left(\sup_{0\leq t\leq s} |\int_0^t\int_{|x|\geq 1}G(r, X_{\xi}(r-), x)N(dr, dx)|^p\right) \\&=:I_1 +I_2+I_3+I_4+I_5. \end{align*} By H{\"o}lder's inequality, we have the following estimations: \begin{align*} I_2 \leq (5\tau)^{p-1}\mathbb{E}\left(\int_0^s |f(r, X_{\xi}(r))|^p dr\right) \leq (5\tau)^{p-1} \left[2^{p-1}\mathbb{E}\int_0^s L^p|X_{\xi}(r)|^p dr +2^{p-1}M^p\right].
\end{align*} Combining with Lemma \ref{9}, we have \begin{align*} I_3:&= 5^{p-1} \mathbb{E}\left(\sup_{0\leq t \leq s}|\int_0^t g(r, X_{\xi}(r)) dW(r)|^p\right) \leq 5^{p-1} \left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}}\mathbb{E}\int_0^{s}|g(r, X_{\xi}(r))|^p dr\\& \leq 5^{p-1} \left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}}\left[\mathbb{E}\int_0^{s} 2^{p-1}|g(r, X_{\xi}(r)) - g(r, 0)|^p +2^{p-1}|g(r, 0)|^p dr \right]\\& \leq5^{p-1}\left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}}\left[\mathbb{E}\int_0^{s} 2^{p-1}L^p|X_{\xi}(r)|^p +2^{p-1}M^p dr \right]\\& \leq 5^{p-1}\left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}}2^{p-1} \tau^{\frac{p-2}{2}}\left[\mathbb{E}\int_0^{s}L^p |X_{\xi}(r)|^p dr +M^p \tau \right]\\& \leq 5^{p-1}\left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}}2^p \tau^{\frac{p-2}{2}}(L^p+M^p)\left[\mathbb{E}\int_0^{s} |X_{\xi}(r)|^p dr +1 \right]. \end{align*} Similar to the estimation of $I_3$, with the help of Lemma \ref{10}, we perform the necessary estimates for $I_4$ and $I_5$: \begin{align*} I_4:&= 5^{p-1}\mathbb{E}\left(\sup_{0\leq t \leq s} \left|\int_0^t \int_{|x| < 1}F(r, X_{\xi}(r-), x) N_1(dr, dx)\right|^p\right) \\& \leq 5^{p-1} \left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\left(\int_0^s\int_{|x| < 1} |F(r, X_{\xi}(r-), x)|^p\nu(dx)dr\right)\\& \leq 5^{p-1} \left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\left(\int_0^s\int_{|x|< 1} 2^{p-1}|F(r, X_{\xi}(r-), x) - F(r, 0, x)|^p \nu(dx)dr +2^{p-1}M^p \tau \right)\\& \leq 5^{p-1} \left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\left(\int_0^s\int_{|x|< 1} 2^{p-1}L^p |X_{\xi}(r)|^p \nu(dx)dr +2^{p-1}M^p \tau \right), \end{align*} \begin{align*} I_5:&=5^{p-1}\mathbb{E}\left(\sup_{0 \leq t \leq s}\left|\int_0^t \int_{|x|\geq 1}G(r, X_{\xi}(r-), x) N(dr, dx)\right|^p\right)\\& \leq 5^{p-1} \mathbb{E}\left(2^{p-1}\left| \int_0^t \int_{|x|\geq 1} G(r,
X_{\xi}(r-), x)N_1(dr, dx)\right|^p +2^{p-1}\left| \int_0^t \int_{|x|\geq 1} G(r, X_{\xi}(r-), x) \nu(dx)dr\right|^p\right)\\& \leq 10^{p-1}\left( \frac{p^3}{2(p-1)} \right)^{\frac{p}{2}}\tau^{\frac{p-2}{2}}\mathbb{E}\left(\int_0^t\int_{|x|\geq 1}|G(r, X_{\xi}(r-), x)|^p \nu(dx) dr\right) \\&~~+ (10e\tau)^{p-1} \mathbb{E}\left(\int_0^t\int_{|x|\geq 1}|G(r, X_{\xi}(r-), x)|^p \nu(dx)dr \right)\\&\leq 10^{p-1}\left( \frac{p^3}{2(p-1)} \right)^{\frac{p}{2}}\tau^{\frac{p-2}{2}}\mathbb{E}\left(\int_0^t\int_{|x|\geq 1}2^{p-1}L^p|X_{\xi}(r)|^p \nu(dx) dr + 2^{p-1}M^p\tau\right)\\&~~+ (10e\tau)^{p-1} \mathbb{E}\left(\int_0^t\int_{|x|\geq 1}2^{p-1}L^p|X_{\xi}(r)|^p \nu(dx) dr + 2^{p-1}M^p\tau \right). \end{align*} Hence \begin{align*} 1+\mathbb{E}(\sup_{0\leq t \leq s}|X_{\xi}(t)|^p) \leq 1+ 5^{p-1}\mathbb{E}|\xi|^p +a\int_0^s[1+\mathbb{E}(\sup_{0\leq u \leq r}|X_{\xi}(u)|^p)]dr, \end{align*} where $a= 5^{p-1}(L^p +M^p)2^{\frac{p}{2}-1}\left[(1+(2e)^{p-1})\tau^{p-1}+ 2(1+2^{p-2})\left( \frac{p^3}{2(p-1)}\right)^{\frac{p}{2}}\tau^{\frac{p-2}{2}} \right]$. Then it follows from Gronwall's inequality that \begin{align*} 1+\mathbb{E}(\sup_{0\leq t\leq s}|X_{\xi}(t)|^p) \leq (1+5^{p-1}\mathbb{E}|\xi|^p)e^{as} \end{align*} for $s \in [0, \tau]$, and therefore \begin{align*} \mathbb{E}(\sup_{0\leq t\leq s}|X_{\xi}(t)|^p) \leq (1+5^{p-1}\mathbb{E}|\xi|^p)e^{as} \end{align*} for $s \in [0, \tau]$. \end{proof} \begin{remark} When $0 < p <2$, from H{\"o}lder's inequality, it holds that \begin{align*} \mathbb{E}|X_{\xi}(t)|^p \leq (\mathbb{E}|X_{\xi}(t)|^2)^{\frac{p}{2}} \leq (1+5\mathbb{E}|\xi|^2)^{\frac{p}{2}} e^{\frac{pat}{2}}, \end{align*} where $a$ is from Theorem \ref{5}. \end{remark} In the subsequent discussion, we will establish the existence of the $\tau$-periodic measure for \eqref{6}. In the proof, we will rely on certain facts related to the Skorokhod theorem (\cite{ref42}).
For ease of presentation, we collect the required known results in the appendix, specifically Lemma \ref{11} and Lemma \ref{12}. Before stating our theorem, we introduce two further hypotheses: (H6) For any $t \in [0, \tau), k\in \mathbb{N}$, $\mu_{Y^k(t)}= \mu_{X_{\xi}(t+k\tau)}:= \mathbb{P}\circ Y^k(t)^{-1}$, satisfying \begin{align}\label{13} \lim_{k \rightarrow \infty} \frac{1}{n_k+1} \sum_{N=0}^{n_k}d_L(\mu_{X_{\xi}(t+(N+1)\tau)}, \mu_{X_{\xi}(t+N\tau)}) = 0, \end{align} where $\{n_k\}$ is a sequence of integers tending to $+ \infty$. (H7) For $2 \leq p \leq 4$, $\{X_{\xi}(k\tau)\}_{k \in \mathbb{N}}$ is uniformly bounded, i.e., there exists a constant $C > 0$ such that $$ \mathbb{E}|X_{\xi}(k\tau)|^p \leq C $$ for any $k \in \mathbb{N}$. \begin{theorem} Suppose that (H3)-(H7) hold. Then \eqref{6} has a $\tau$-periodic measure. \end{theorem} \begin{proof} For $t \in [0, \tau), k \in \mathbb{N}$, recall the construction of $(Y^k(t), W^k(t), N_1^k(t,x), N^k(t, x))_{k \in \mathbb{N}}$. Define a random variable $\eta_k$ with $\mathbb{P}(\eta_k = N) = \frac{1}{k+1}$, $N= 0, 1, \cdots, k$, independent of $W$, $N_1$ and $\xi$; then $(Y^{\eta_k}(t), W^{\eta_k}(t), N_1^{\eta_k}(t,x), N^{\eta_k}(t, x))_{k \in \mathbb{N}}$ is a solution to \eqref{6}. For any $A \in \mathcal{B}(\mathbb{R}^d)$, $k \in \mathbb{N}$, \begin{align*} \mathbb{P}(Y^{\eta_k}(t) \in A) = \frac{1}{k+1} \sum_{N=0}^k \mathbb{P}(X_{\xi}(t+N\tau) \in A). \end{align*} Then, combining (H7) and Chebyshev's inequality, we obtain a bound uniform in $k$, namely \begin{align*} \mathbb{P}(|Y^{\eta_k}(0)| > R) &= \frac{1}{k+1} \sum_{N=0}^k \mathbb{P}(|X_{\xi}(N \tau)| > R) \\& \leq \frac{1}{k+1} \sum_{N=0}^k \frac{\mathbb{E}|X_{\xi}(N\tau)|^2}{R^2} \rightarrow 0, ~~~~R\rightarrow \infty, \end{align*} which verifies the conditions of Lemma \ref{11}, Lemma \ref{12}, and Lemma \ref{39}.
Indeed, there is another probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{\mathbb{P}})$ carrying a sequence $\tilde{Y}^{\eta_k}(0)$, $k \in \mathbb{N}$, with the same distribution as $Y^{\eta_k}(0)$, together with a subsequence $\tilde{Y}^{\eta_{n_k}}(0)$ which converges to some $\tilde{Y}(0)$ in probability. On the original probability space $(\Omega, \mathcal{F}, \mathbb{P})$ we can find random variables with the same distributions as $\tilde{Y}(0)$ and $\tilde{Y}^{\eta_{n_k}}(0)$, which we denote by $Y(0)$ and $Y^{\eta_{n_k}}(0)$, respectively. Using (H7) again, for $2 \leq p \leq 4$, we have \begin{align*} \tilde{ \mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0)|^p = \mathbb{E}|Y^{\eta_{n_k}}(0)|^p \leq C < \infty. \end{align*} Thanks to Lemma \ref{12} and Remark \ref{15}, for any $\epsilon > 0$ there exists a $\delta > 0$ such that for any $A \in \tilde{\mathcal{F}}$ with $\tilde{\mathbb{P}}(A) \leq \delta$, we have $$ \sup_{k \in \mathbb{N}}\int_{A}|\tilde{Y}^{\eta_{n_k}}(0)|^2 d \tilde{\mathbb{P}} \leq \epsilon.$$ Then, applying Vitali's convergence theorem together with a corollary of Lebesgue's dominated convergence theorem (Lemma \ref{16}), we obtain \begin{align*} \tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0) - \tilde{Y}(0)|^2 \rightarrow 0, ~~~~k\rightarrow \infty. \end{align*} Let $(\tilde{Y}^{\eta_{n_k}}(t), W^{\eta_{n_k}}(t), N_1^{\eta_{n_k}}(t, x), N^{\eta_{n_k}}(t, x))$ be a weak solution to \eqref{6} with initial condition $\tilde{Y}^{\eta_{n_k}}(0)$ on the probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{\mathbb{P}})$. It follows from the Cauchy--Schwarz inequality and Lemma \ref{9} that \begin{align*} \tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(t) - \tilde{Y}(t)|^2 \rightarrow 0, ~~~~k \rightarrow \infty.
\end{align*} Indeed, \begin{align*} &\tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(t) - \tilde{Y}(t)|^2 \\&\leq 5 \tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0) - \tilde{Y}(0)|^2 + 5\tilde{\mathbb{E}}\left|\int_0^t f(r, \tilde{Y}^{\eta_{n_k}}(r))- f(r, \tilde{Y}(r)) dr\right|^2 \\&~~+ 5 \tilde{\mathbb{E}}\left|\int_0^t g(r, \tilde{Y}^{\eta_{n_k}}(r))- g(r, \tilde{Y}(r))d W^{\eta_{n_k}}(r) \right|^2\\&~~+ 5\tilde{\mathbb{E}}\left|\int_0^t \int_{|x|< 1} F(r, \tilde{Y}^{\eta_{n_k}}(r), x) - F(r, \tilde{Y}(r), x) N_1(dr, dx) \right|^2 \\&~~+5\tilde{\mathbb{E}}|\int_0^t \int_{|x| \geq 1}G(r, \tilde{Y}^{\eta_{n_k}}(r), x)- G(r, \tilde{Y}(r), x) N_1(dr, dx)\\&~~+ \int_0^t \int_{|x| \geq 1}G(r,\tilde{Y}^{\eta_{n_k}}(r), x) - G(r, \tilde{Y}(r), x) \nu^1(dx)dr |^2\\& \leq 5\tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0) - \tilde{Y}(0)|^2 +5t \tilde{\mathbb{E}}\int_0^t |f(r, \tilde{Y}^{\eta_{n_k}}(r))- f(r, \tilde{Y}(r))|^2 dr \\&~~+ 5 \tilde{\mathbb{E}}\int_0^t|g(r, \tilde{Y}^{\eta_{n_k}}(r))- g(r, \tilde{Y}(r))|^2dr\\&~~+ 5\tilde{\mathbb{E}}\int_0^t \int_{|x|< 1} |F(r, \tilde{Y}^{\eta_{n_k}}(r), x) - F(r, \tilde{Y}(r), x)|^2 \nu^1(dx)dr \\&~~+10\tilde{\mathbb{E}}\int_0^t \int_{|x| \geq 1}|G(r, \tilde{Y}^{\eta_{n_k}}(r), x)- G(r, \tilde{Y}(r), x)|^2\nu^1(dx)dr\\&~~+ 10\tilde{\mathbb{E}}\left[\int_0^t \int_{|x|\geq 1}\nu^1(dx)dr \int_0^t \int_{|x|\geq 1}|G(r, \tilde{Y}^{\eta_{n_k}}(r), x) - G(r, \tilde{Y}(r), x)|^2\nu^1(dx)dr\right]\\& \leq 5\tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0) - \tilde{Y}(0)|^2 +(20tL^2 +10teL^2) \tilde{\mathbb{E}}\int_0^t |\tilde{Y}^{\eta_{n_k}}(r) - \tilde{Y}(r)|^2 dr. \end{align*} Applying Gronwall's inequality, we have \begin{align*} \tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(t) - \tilde{Y}(t)|^2 \leq 5\tilde{\mathbb{E}}|\tilde{Y}^{\eta_{n_k}}(0) - \tilde{Y}(0)|^2 e^{(20tL^2 +10teL^2)t} \rightarrow 0,~~~~k \rightarrow \infty. \end{align*} Then we have \begin{align*} \mathbb{P} \circ Y^{\eta_{n_k}}(t)^{-1} = \tilde{\mathbb{P}} \circ \tilde{Y}^{\eta_{n_k}}(t)^{-1} \rightarrow \tilde{\mathbb{P}}\circ \tilde{Y}(t)^{-1} \end{align*} uniformly on $[0, \tau]$.
Since $Y(0)$ is identically distributed with $\tilde{Y}(0)$, $Y(t)$ has the same distribution as $\tilde{Y}(t)$. We now verify that $Y(\tau) \overset{d}{=} Y(0)$: \begin{align*} d_{L}(\mathbb{P} \circ Y(\tau)^{-1}, \mathbb{P}\circ Y(0)^{-1}) & = \lim_{k \rightarrow \infty} d_L(\tilde{\mathbb{P}}\circ \tilde{Y}^{\eta_{n_k}}(\tau)^{-1}, \tilde{\mathbb{P}} \circ \tilde{Y}^{\eta_{n_k}}(0)^{-1})\\& = \lim_{k \rightarrow \infty} d_L(\mathbb{P}\circ Y^{\eta_{n_k}}(\tau)^{-1}, \mathbb{P} \circ Y^{\eta_{n_k}}(0)^{-1})\\& =\lim_{k \rightarrow \infty} \sup_{Lip(\Phi) \leq 1} \left| \int_{\Omega} \Phi(Y^{\eta_{n_k}}(\tau)) - \Phi(Y^{\eta_{n_k}}(0)) d \mathbb{P}\right|\\& \leq\lim_{k \rightarrow \infty} \sup_{Lip(\Phi) \leq 1}\frac{1}{n_k +1} \sum_{N=0}^{n_k} \left|\int_{\Omega} \Phi(X_{\xi}(\tau+ N\tau)) - \Phi(X_{\xi}(N\tau))d \mathbb{P} \right|\\& \leq \lim_{k \rightarrow \infty} \frac{1}{n_k +1} \sum_{N=0}^{n_k} d_L(\mu_{X_{\xi}(N\tau +\tau)}, \mu_{X_{\xi}(N\tau)}) = 0. \end{align*} So $Y(t)$ is a $\tau$-periodic solution in distribution to \eqref{6}, and $\mu_{Y(t)} := \mathbb{P}\circ Y(t)^{-1}$ is a $\tau$-periodic measure; in other words, $\mu_{Y(t)} = \mu_{Y(t+\tau)}$. \end{proof} \begin{remark} Due to the independent increment property of the L{\'e}vy process, we know that $\{Y(t)\}_{t \geq 0}$ is a Markov process, satisfying the crucial Markov property: the future behaviour of the process, given what has occurred up to time $s$, is the same as the behaviour obtained when starting the process at $Y(s)$. \end{remark} Now we introduce some notation commonly used in the context of Markov processes: let $\mathbb{E}_x$ denote the expectation with respect to the stochastic process $\{Y(t)\}_{t \geq 0}$ when its initial value is $Y(0) = x \in \mathbb{R}^d$, and let $\mathbb{E}_{s,Y(s)}$ denote the expectation of $Y(t)$ when its initial time and initial value are $(s, Y(s)) \in \mathbb{R} \times \mathbb{R}^d$.
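The metric $d_L$ used in the proof above is, on the real line, the Kantorovich--Rubinstein (Wasserstein-1) distance, which for two empirical measures with equally many atoms reduces to the mean absolute difference of sorted samples. The sketch below is a toy numerical illustration (the contracting chain, its coefficients, and the period are our own choices, not equation \eqref{6}): it estimates $d_L$ between the laws of a periodically forced chain at times $T$ and $T+\tau$, which should nearly vanish.

```python
import math, random

def w1_empirical(xs, ys):
    # On the real line, the dual-Lipschitz (Kantorovich-Rubinstein) distance
    # between two empirical measures with the same number of atoms equals
    # the mean absolute difference of the sorted samples.
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def sample_at(n_steps, rng, period=8):
    # Toy periodically forced, contracting Markov chain: its law at times
    # T and T + period should (asymptotically) coincide.
    x = 0.0
    for n in range(n_steps):
        x = 0.5 * x + math.sin(2.0 * math.pi * n / period) + rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
law_T = [sample_at(64, rng) for _ in range(2000)]
law_T_period = [sample_at(64 + 8, rng) for _ in range(2000)]
# Small up to Monte Carlo error: the law is (nearly) periodic.
print(w1_empirical(law_T, law_T_period))
```

Note also that `w1_empirical` on two Dirac samples reproduces $d_L(\delta_{x_1}, \delta_{x_2}) = |x_1 - x_2|$.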
So $\mathbb{E}[\Phi(Y(t)) \mid \mathcal{F}_{N\tau}] = \mathbb{E}_{N\tau, Y(N\tau)} [\Phi(Y(t))]$. Furthermore, from the uniqueness of weak solutions, i.e., $Y(t+N\tau) \overset{\mathcal{D}}{=} Y_{Y(N\tau)}(t)$, we have $\mathbb{E}[\Phi(Y(t)) \mid \mathcal{F}_{N\tau}] = \mathbb{E}_{Y(N\tau)}[\Phi(Y(t-N\tau))]$. For $\Phi \in C_b(\mathbb{R}^d)$, let $P_{s, t}\Phi(Y(s)) = \mathbb{E}[\Phi(Y(t)) \mid \mathcal{F}_s],$ and let $P_{s, t}^*$ denote the dual operator corresponding to $P_{s, t}$. Then by the Markov property, we have $P_{s, t}^* \mu_{Y(s)} = \mu_{Y(t)}$. \begin{remark}\label{35} For $x_1 \in \mathbb{R}^d$, we have $P_{0, \tau}^* \delta_{x_1} = \delta_{X_{x_1}(\tau)}$, where $\delta_x$ denotes the Dirac measure at $x \in \mathbb{R}^d$.\begin{proof}In fact, \begin{align*} d_{L}(P_{0, \tau}^* \delta_{x_1}, \delta_{X_{x_1}(\tau)})&= \sup_{Lip(f) \leq 1}\left|\int_{\mathbb{R}^d} f P_{0, \tau}^* \delta_{x_1}(dx) - \mathbb{E} \int_{\mathbb{R}^d} f \delta_{X_{x_1}(\tau)}(dx)\right| \\& = \sup_{Lip(f) \leq 1}|\mathbb{E}[f(X(\tau))\mid X(0)= x_1] - \mathbb{E}f(X_{x_1}(\tau))|\\& =\sup_{Lip(f) \leq 1}|\mathbb{E}[f(X_{x_1}(\tau)) - f(X_{x_1}(\tau))] |\\& =0. \end{align*} \end{proof} \end{remark} \begin{remark}\label{36} For any $x_1, x_2 \in \mathbb{R}^d$, \begin{align*} d_{L}(\delta_{x_1}, \delta_{x_2})= \sup_{Lip(f) \leq 1}\left|f(x_1) - f(x_2)\right|= |x_1 - x_2|. \end{align*} \end{remark} We also make the following assumptions. (H8) There exists a continuous function $a: \mathbb{R}^+ \rightarrow \mathbb{R}^+ \setminus \{0\}$ with $$r:= \lim_{k \rightarrow \infty} a(k \tau) < 1,$$ such that for any $x_1, x_2 \in \mathbb{R}^d, t \geq 0$, \begin{align*} d_{L}(P_{0, t}^*\delta_{x_1}, P_{0, t}^* \delta_{x_2}) \leq a(t)|x_1 -x_2|.
\end{align*} (H9) Suppose that there exists a $\lambda \in( L+8M^2 +\frac{1}{8}, L+8M^2 +\frac{1}{8} + \frac{1}{4}\log 2)$, such that \begin{align*} 2 x\cdot f(r, x) +\int_{|u| \geq 1}2x \cdot G(r, x, u)\nu(du) \leq -\lambda(1+|x|^2). \end{align*} Now we have the following. \begin{theorem}\label{21} Suppose that (H3)-(H8) hold. Then there exist $C > 0$ and $\gamma \in (0, 1)$ such that for any $\mu_1, \mu_2 \in \mathcal{P}(\mathbb{R}^d), t \geq 0$, \begin{align*} d_{L}(P_{0, t}^* \mu_1, P_{0, t}^* \mu_2) \leq Ce^{-\gamma t} d_L(\mu_1, \mu_2). \end{align*} \end{theorem} \begin{proof} By the definition of $r$, there exist an $N \in \mathbb{N}$ and $\alpha \in (r, 1)$, such that for all $k \geq N$, we have $a(k\tau) \leq \alpha$. Now for any $x_1, x_2 \in \mathbb{R}^d$ and $0 \leq t \leq N\tau$, write $t = k\tau +s$ for a unique $0 \leq k \leq N$ and $s \in [0, \tau)$; combining with Remark \ref{35}, \begin{align}\label{37} d_L(P_{0, t}^* \delta_{x_1}, P_{0, t}^* \delta_{x_2})= d_L(P_{k\tau, t}^*P_{(k-1)\tau, k\tau}^*\cdots P_{0, \tau}^*\delta_{x_1}, P_{k\tau, t}^*P_{(k-1)\tau, k\tau}^*\cdots P_{0, \tau}^*\delta_{x_2}). \end{align} Since $a(t)$ is a continuous function, there exists an $M_1 > 0$ such that $|a(t)| \leq M_1$ for $t \in [0, N\tau]$, and \begin{align}\label{38} d_{L}(P_{k\tau, t}^*\delta_{X_{x_1}(k\tau)}, P_{k\tau, t}^* \delta_{X_{x_2}(k\tau)})\notag & ~~\leq M_1 |X_{x_1}(k\tau) - X_{x_2}(k\tau)|= M_1 d_{L}(\delta_{X_{x_1}(k\tau)}, \delta_{X_{x_2}(k\tau)}) \notag\\&~~= M_1d_{L} (P_{0, \tau}^* \delta_{X_{x_1}((k-1)\tau)}, P_{0, \tau}^* \delta_{X_{x_2}((k-1)\tau)})\notag\\&~~\leq M_1^2 d_{L}(\delta_{X_{x_1}((k-1)\tau)}, \delta_{X_{x_2}((k-1)\tau)})\notag\\&~~\leq \cdots \leq M_1^k d_L(\delta_{X_{x_1}(\tau)}, \delta_{X_{x_2}(\tau)}) \leq M_1^{k+1}|x_1 - x_2|, \end{align} where we use the fact from Remark \ref{36}.
Considering \eqref{37} and \eqref{38}, $$d_{L}(P_{0, t}^* \delta_{x_1}, P_{0, t}^*\delta_{x_2}) \leq M_1^{k+1}|x_1 - x_2|.$$ By choosing $\tilde{C}= M_1^{N+1} e^{N\tau}$, we have \begin{align*} d_{L}(P_{0, t}^* \delta_{x_1}, P_{0, t}^* \delta_{x_2}) \leq \tilde{C} e^{-N\tau} |x_1 - x_2| \leq \tilde{C} e^{-t} |x_1 - x_2|. \end{align*} For $t > N\tau$, one has $t = k N\tau +\beta$ with $k \in \mathbb{N}$ and $0 \leq \beta \leq N\tau$. Combining (H8) and the definition of $r$, we have \begin{align*} d_{L}(P_{0, kN\tau}^*\delta_{x_1}, P_{0, kN\tau}^* \delta_{x_2}) \leq \alpha^k |x_1 -x_2|, \end{align*} then for an appropriate $\gamma \in (0, 1)$, \begin{align*} d_{L}(P_{0, t}^*\delta_{x_1}, P_{0, t}^*\delta_{x_2}) &= d_{L}(P_{kN\tau, kN\tau+\beta}^* P_{0, kN\tau}^* \delta_{x_1}, P_{kN\tau, kN\tau+\beta}^*P_{0, kN\tau}^*\delta_{x_2}) \\&\leq \tilde{C}e^{-\beta} d_L(P_{0, kN\tau}^* \delta_{x_1}, P_{0, kN\tau}^*\delta_{x_2}) \leq \tilde{C} e^{-\gamma \beta} \alpha^{\frac{t-\beta}{N\tau}}|x_1 - x_2| \leq Ce^{-\gamma t}|x_1 -x_2| \end{align*} for some constants $C, \gamma > 0$. Combining this with Lemma \ref{18}, we obtain the desired result. \end{proof} \section{Strong law of large numbers} Now we consider a special case of equation \eqref{6}, i.e., \begin{align}\label{17} dX(t) = f(t, X(t))dt +g(t) d \tilde{W}(t) +\int_{|x| < 1} F(t, X(t-), x)\tilde{N}_1(dt, dx) +\int_{|x|\geq 1} G(t, X(t-), x)\tilde{N}(dt, dx). \end{align} The coefficients of this equation are assumed to satisfy (H3)-(H9); thus \eqref{17} has a $\tau$-periodic solution in distribution, which will be denoted by $X(t)$ in the following, and $X_{\xi}(t)$ denotes the solution to equation \eqref{17} with initial value $\xi \in \mathbb{R}^d$.
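The geometric decay in Theorem \ref{21} can be illustrated with a synchronous coupling on a toy contracting chain: running two copies from $x_1$ and $x_2$ with the same noise makes the gap contract deterministically, which yields an (H8)-type bound $d_L(P_{0,n}^*\delta_{x_1}, P_{0,n}^*\delta_{x_2}) \leq \rho^n |x_1 - x_2|$. The chain and all parameter values below are illustrative choices of ours, not the equation studied in the paper.

```python
import random

def coupled_gap(x1, x2, rho, steps, rng):
    # Synchronous coupling: run two copies of the contracting chain
    # X_{n+1} = rho * X_n + noise driven by the SAME noise. The gap
    # contracts deterministically: |X1_n - X2_n| = rho**n * |x1 - x2|,
    # which upper-bounds the dual-Lipschitz distance between the two laws.
    a, b = x1, x2
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        a = rho * a + z
        b = rho * b + z
    return abs(a - b)

rng = random.Random(1)
gap = coupled_gap(5.0, -3.0, 0.5, 10, rng)
print(gap)  # approximately 0.5**10 * 8, independent of the noise drawn
```

The coupling argument is one standard way such contraction hypotheses are verified in practice; here it holds exactly because the noise is additive.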
For $\gamma \in (0, 1]$, let $C_{bL}^{\gamma}(\mathbb{R}^d)$ be the space of continuous functions with finite norm weighted by the Lyapunov function $e^{|x|^2}$: \begin{align*} C_{bL}^{\gamma}(\mathbb{R}^d):= \{\phi \in C_{bL}(\mathbb{R}^d): ||\phi||_{bL, \gamma} < \infty\}, \end{align*} where \begin{align*} ||\phi||_{bL, \gamma} := \sup_{x \in \mathbb{R}^d} \frac{|\phi(x)|}{e^{|x|^2}}+\sup_{0 < |x_1 -x_2| \leq 1} \frac{|\phi(x_1)-\phi(x_2)|}{|x_1 - x_2|(e^{|x_1|^2} + e^{|x_2|^2})}. \end{align*} For $\Phi \in C_{bL}^{\gamma}(\mathbb{R}^d)$, define the centered observable $$\tilde{\Phi}(X_{\xi}(t)) = \Phi(X_{\xi}(t)) - \int_{\mathbb{R}^d} \Phi(x) \mu^*(dx),$$ where $$\mu^* = \frac{1}{\tau} \int_0^{\tau} \mu_{X_{\xi}(t)} dt.$$ Before proving the key theorem of this section, namely the strong law of large numbers, we first make some preparations. \begin{lemma}\label{22} For $t \geq 0$, $\lambda \in( L+8M^2 +\frac{1}{8}, L+8M^2 +\frac{1}{8} + \frac{1}{4}\log 2)$, $4 < \eta_0 < 8$, $\eta \in (0, \eta_0]$, and $X(0) = \xi \in \mathbb{R}^d$, we have \begin{align*} \mathbb{E}e^{\eta|X(t)|^2} \leq Ce^{\eta e^{-\alpha t}|\xi|^2} e^{\frac{\eta(6M^2-4\lambda)}{\alpha}} < Ce^{\eta e^{-\alpha t}|\xi|^2}, \end{align*} where $\alpha = 4\lambda - 4L-32 M^2 - \frac{1}{2} \in (0, \log 2)$.
\end{lemma} \begin{proof} For $\alpha = 4\lambda - 4L-32 M^2 - \frac{1}{2} > 0$, applying the It{\^o} formula to $e^{\alpha t}|X(t)|^2$ yields \begin{align*} e^{\alpha t}|X(t)|^2&= |\xi|^2 +\int_0^t e^{\alpha r} \Big(\alpha |X(r)|^2 +2 X(r)\cdot f(r, X(r)) +|g(r)|^2 +\int_{|u| < 1} \big(|X(r) +F(r, X(r), u)|^2 \\&~~- |X(r)|^2 -2X(r)\cdot F(r, X(r), u)\big)\nu(du) + \int_{|u| \geq 1}\big(|X(r)+ G(r, X(r), u)|^2 \\&~~-|X(r)|^2\big)\nu(du)\Big) dr+ \int_{0}^t 2e^{\alpha r} X(r)\cdot g(r)d \tilde{W}(r)\\& =|\xi|^2 +\int_0^t e^{\alpha r} \Big(\alpha |X(r)|^2 +2 X(r)\cdot f(r, X(r)) +|g(r)|^2 +\int_{|u| < 1} |F(r, X(r), u)|^2\nu(du) \\&~~+\int_{|u| \geq 1}\big(2X(r)\cdot G(r, X(r), u) +|G(r, X(r), u)|^2\big) \nu(du)\Big)dr + \int_{0}^t 2e^{\alpha r} X(r)\cdot g(r)d \tilde{W}(r). \end{align*} Set $M(t) = \int_0^t 2 X(r) \cdot g(r) d\tilde{W}(r)$, so that $[M]_t = 4\int_0^t |X(r) \cdot g(r)|^2 dr.$ Then \begin{align*} &\int_0^t e^{\alpha(r-t)} dM(r) - 8\int_0^t e^{\alpha(r-t)} d[M]_r\\& =|X(t)|^2 -e^{-\alpha t}|\xi|^2 -\int_0^t e^{\alpha (r-t)} \Big(\alpha |X(r)|^2 + 2X(r) \cdot f(r, X(r)) +|g(r)|^2 \\&~~+\int_{|u| < 1} |F(r, X(r), u)|^2 \nu(du) + \int_{|u| \geq 1}\big(2 X(r) \cdot G(r, X(r), u) + |G(r, X(r), u)|^2\big) \nu(du)\Big)dr\\&~~ -8\int_0^t e^{\alpha (r-t)} 4|X(r) \cdot g(r)|^2 dr\\&\geq |X(t)|^2 -e^{-\alpha t }|\xi|^2 -\int_0^t e^{\alpha(r-t)} \big(\alpha |X(r)|^2 - 4\lambda (1+|X(r)|^2)+6M^2+4L|X(r)|^2 +32M^2|X(r)|^2\big)dr\\&\geq |X(t)|^2-e^{-\alpha t }|\xi|^2 -\int_0^t e^{\alpha(r-t)} \big((\alpha - 4\lambda+4L+32M^2)|X(r)|^2 +(6M^2 - 4\lambda)\big)dr\\& \geq |X(t)|^2-e^{-\alpha t }|\xi|^2 -\int_0^t e^{\alpha(r-t)}(6M^2 - 4\lambda)dr\\& =|X(t)|^2 -e^{-\alpha t }|\xi|^2 - \frac{(6M^2 - 4\lambda)(1-e^{-\alpha t})}{\alpha}\\& \geq |X(t)|^2 -e^{-\alpha t }|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha}.
\end{align*} Then for any $K > 0$, we have an estimate similar to Lemma A.1 in \cite{ref50}: \begin{align*} & \mathbb{P}\left( |X(t)|^2 -e^{-\alpha t}|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha} > \frac{K e^{\alpha}}{16}\right)\\&\leq \mathbb{P}\left( \int_0^t e^{\alpha(r-t)} dM(r) - 8\int_0^t e^{\alpha(r-t)} d[M]_r > \frac{K e^{\alpha}}{16}\right) \leq e^{-K}, \end{align*} which is equivalent to \begin{align*} \mathbb{P}\left(e^{ \eta_0(|X(t)|^2 -e^{-\alpha t}|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha})} > e^{\frac{K e^{\alpha} \eta_0}{16}}\right) \leq e^{-K}. \end{align*} We know that for any $c >1$, if a random variable $X$ satisfies $$ \mathbb{P}(X \geq C) \leq \frac{1}{C^c}$$ for every $C \geq 1$, then \begin{align*} \mathbb{E}X &= \int_{\Omega} X d\mathbb{P} \leq \int_{0 \leq X \leq 1} X d\mathbb{P} + \int_{X \geq 1} X d\mathbb{P} \leq 1+ \sum_{n=0}^{\infty} \int_{2^n \leq X \leq 2^{n+1}} X d\mathbb{P} \\&\leq 1+\sum_{n=0}^{\infty}2^{n+1}\frac{1}{2^{cn}} \leq \frac{4}{1-2^{1-c}}. \end{align*} Now let $c = \frac{16}{e^{\alpha}\eta_0 }> 1$ and $C= e^{\frac{K e^{\alpha} \eta_0}{16}} > 1$; then \begin{align*} \mathbb{E}e^{ \eta_0(|X(t)|^2 -e^{-\alpha t}|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha})} \leq \frac{4}{1 - 2^{1-\frac{16}{e^{\alpha}\eta_0 }}}. \end{align*} By H\"{o}lder's inequality, for any $\eta \in (0, \eta_0]$, \begin{align*} \mathbb{E}e^{ \eta(|X(t)|^2 -e^{-\alpha t}|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha})} \leq \left(\mathbb{E} e^{\eta_0 (|X(t)|^2-e^{-\alpha t}|\xi|^2 - \frac{6M^2 - 4\lambda}{\alpha})} \right)^{\frac{\eta}{\eta_0}} \leq \frac{4}{1-2^{1-\frac{16}{e^{\alpha}\eta_0 }}}. \end{align*} Then \begin{align*} \mathbb{E}e^{ \eta|X(t)|^2} \leq \frac{4}{1- 2^{1-\frac{16}{e^{\alpha}\eta_0 }}} \left[e^{\eta e^{-\alpha t}|\xi|^2 + \frac{\eta(6M^2-4\lambda)}{\alpha}} \right] \leq \frac{4}{1- 2^{1- \frac{16}{e^{\alpha}\eta_0 }}} e^{\eta e^{-\alpha t}|\xi|^2}.
\end{align*} \end{proof} Combining the above results, we now estimate the centered observable $\tilde{\Phi}$. \begin{theorem}\label{20} For $\Phi \in C_{bL}^{\gamma}(\mathbb{R}^d)$, $t \geq N\tau$, $N \in \mathbb{N}$, and $\mathbb{E}|\xi|^2 < \infty,$ \begin{align*} |\mathbb{E}[\Phi(X_{\xi}(t))\mid \mathcal{F}_{N\tau}] - \langle \mu^*, \Phi(\cdot) \rangle| \leq C\|\Phi\|_{bL, \gamma}e^{2|\xi|^2} e^{-\frac{\gamma (t-N\tau)}{{5}}}. \end{align*} \end{theorem} \begin{proof} For any $R > 0$, let $\chi_{R}: \mathbb{R}^d \rightarrow \mathbb{R}$ satisfy $0 \leq \chi_R \leq 1$ with $\chi_R(x) =1$ for $|x| \leq R$, and $\chi_{R}(x) = 0$ for $|x| \geq R+1$. We can choose such a $\chi_R$ with $||\chi_{R}||_{bL, \gamma} \leq 2.$ Without loss of generality, assume $||\Phi||_{bL, \gamma} \leq 1$. Let $\bar{\chi}_{R} = 1- \chi_{R}$; then \begin{align*} &|\mathbb{E}[\Phi(X_{\xi}(t))\mid \mathcal{F}_{N\tau}] - \langle \mu^*, \Phi(\cdot)\rangle|\\& = |\mathbb{E}_{X_{\xi}(N\tau)}[\Phi(X_{\xi}(t - N\tau))] - \langle\mu^*, \Phi(\cdot) \rangle|\\& =|\mathbb{E}_{X_{\xi}(N\tau)}[(\chi_R\Phi) (X_{\xi}(t - N\tau))] + \mathbb{E}_{X_{\xi}(N\tau)} [(\bar{\chi}_{R}\Phi)(X_{\xi}(t-N\tau))] - \langle \mu^*, \chi_R\Phi \rangle - \langle \mu^*, \bar{\chi}_R \Phi\rangle| \\&\leq |\mathbb{E}_{X_{\xi}(N\tau)}[(\chi_R\Phi) (X_{\xi}(t - N\tau))] - \langle \mu^*, \chi_R\Phi \rangle | + | \mathbb{E}_{X_{\xi}(N\tau)} [(\bar{\chi}_{R}\Phi)(X_{\xi}(t-N\tau))] -\langle \mu^*, \bar{\chi}_R\Phi \rangle|\\&=: I_1+I_2 . \end{align*} Since $\chi_R\Phi$ vanishes outside of the ball $|x| \leq R+1$, we have \begin{align}\label{19} \sup_{x \in \mathbb{R}^d} |\chi_{R}(x)\Phi(x)|\leq \sup_{x\in\mathbb{R}^d, |x|\leq R+1} |\Phi(x)| \leq ||\Phi||_{bL, \gamma}e^{(R+1)^2}.
\end{align} Let $$ \mathbb{S}:= \{(x_1, x_2) \in \mathbb{R}^d \times \mathbb{R}^d: |x_1| \leq R+1, |x_2| \geq R+1, 0 < |x_1 -x_2| \leq 1\}.$$ On $\mathbb{S}$ we have $\chi_{R}(x_2) = 0$ and $|x_2| \leq R+2$, so it follows from \eqref{19} that \begin{align*} \sup_{(x_1, x_2) \in \mathbb{S}}\frac{|(\chi_R\Phi)(x_1) - (\chi_R \Phi)(x_2)|}{|x_1-x_2|} &=\sup_{(x_1, x_2) \in \mathbb{S}}\frac{|\chi_R(x_1)\Phi(x_1) - \chi_R(x_2) \Phi(x_1)|}{|x_1-x_2| }\\& \leq 2||\chi_R||_{bL, \gamma}e^{(R+2)^2} ||\Phi||_{bL, \gamma} e^{(R+1)^2} \leq 2||\chi_R||_{bL, \gamma} ||\Phi||_{bL, \gamma} e^{2(R+2)^2}. \end{align*} Let $$\mathbb{S}^R := \{(x_1, x_2) \in \mathbb{R}^d \times \mathbb{R}^d: |x_1| \leq R+1, |x_2| \leq R+1, 0 < |x_1 - x_2| \leq 1\}.$$ Then \begin{align*} \sup_{(x_1, x_2) \in \mathbb{S}^R}\frac{|(\chi_{R}\Phi)(x_1) - (\chi_R\Phi)(x_2)|}{|x_1 -x_2|}&=\sup_{(x_1, x_2) \in \mathbb{S}^R} \frac{|(\chi_{R}\Phi)(x_1) - \chi_{R}(x_1)\Phi(x_2) +\chi_{R}(x_1)\Phi(x_2)-(\chi_R\Phi)(x_2)|}{|x_1 -x_2|}\\& \leq \sup_{(x_1, x_2) \in \mathbb{S}^R} \frac{|\chi_R(x_1)||\Phi(x_1)-\Phi(x_2)|}{|x_1 -x_2|} +\frac{|\Phi(x_2)||\chi_{R}(x_1) - \chi_R(x_2)|}{|x_1 - x_2|}\\&\leq \sup_{(x_1, x_2) \in \mathbb{S}^R} ||\Phi||_{bL, \gamma}e^{(R+1)^2}+2||\chi_R||_{bL, \gamma} ||\Phi||_{bL, \gamma}e^{2(R+1)^2} \\& \leq 6e^{2(R+1)^2}. \end{align*} So $\chi_R \Phi \in C_{bL}(\mathbb{R}^d)$ and $||\chi_R \Phi||_{bL} \leq 6e^{2(R+2)^2}$. It is known that the dual H\"{o}lder metric on $\mathcal{P}(\mathbb{R}^d) $ is bounded by the Wasserstein metric: \begin{align*} \sup_{\Phi \in C_{bL}(\mathbb{R}^d), ||\Phi||_{bL} \leq 1} |\langle \mu_1, \Phi\rangle - \langle \mu_2, \Phi\rangle| \leq 5d_L(\mu_1, \mu_2) \end{align*} for any $\mu_1, \mu_2 \in \mathcal{P}(\mathbb{R}^d)$. It should also be noted that $$ \mathbb{E}_{X_{\xi}(N\tau)} \Phi(X_{\xi}(t - N\tau)) = P_{0, t-N\tau} \Phi(X_{\xi}(N\tau)).
$$ Combining this with Theorem \ref{21} yields \begin{align*} I_1:&= |\mathbb{E}_{X_{\xi}(N\tau)}[(\chi_R\Phi) (X_{\xi}(t - N\tau))] - \langle \mu^*, \chi_R\Phi \rangle |\\&= |\langle P_{0, t - N\tau}^*\delta_{X_{\xi}(N\tau)}, \chi_{R}\Phi \rangle - \langle \mu^*, \chi_R\Phi \rangle| \\& \leq 5\|\chi_R\Phi\|_{bL} d_L(P_{0, t-N\tau}^* \delta_{X_{\xi}(N\tau)}, \mu^*)\\& = 5||\chi_R \Phi||_{bL} d_L(P_{0, t-N\tau}^* \delta_{X_{\xi}(N\tau)}, P_{0, t-N\tau}^* \mu^*)\\& \leq 5C ||\chi_R\Phi||_{bL} d_L(\delta_{X_{\xi}(N\tau)}, \mu^*) e^{-\gamma(t-N\tau)}\\& \leq C e^{2(R+2)^2} e^{|X_{\xi}(N\tau)|^2} e^{-\gamma(t -N\tau)}, \end{align*} where \begin{align*} d_{L}(\delta_{X_{\xi}(N\tau)}, \mu^*) &= d_L\Big(\delta_{X_{\xi}(N\tau)}, \frac{1}{\tau}\int_0^{\tau}\mu_{X_{\xi}(s)}ds\Big) = \sup_{Lip(f) \leq 1}\Big|f(X_{\xi}(N\tau)) - \frac{1}{\tau}\int_0^{\tau}\mathbb{E}f(X_{\xi}(s))ds\Big|\\&= \sup_{Lip(f) \leq 1} \frac{1}{\tau}\Big|\int_0^{\tau}\mathbb{E}[f(X_{\xi}(N\tau))- f(X_{\xi}(s))]ds\Big| \leq \frac{1}{\tau}\int_0^{\tau} \mathbb{E}|X_{\xi}(N\tau) - X_{\xi}(s)|ds < \infty.\end{align*} Here we used Theorem \ref{5}. Note that \begin{align*} I_2&:= |\mathbb{E}_{X_{\xi}(N\tau)}(\bar{\chi}_R\Phi)(X_{\xi}(t - N\tau)) -\langle \mu^*, \bar{\chi}_R\Phi \rangle| \leq |\mathbb{E}_{X_{\xi}(N\tau)} (\bar{\chi}_{R}\Phi) (X_{\xi}(t - N\tau))| + |\langle \mu^*, \bar{\chi}_{R}\Phi \rangle|.
\end{align*} Due to Lemma \ref{22}, we have \begin{align*} |\mathbb{E}_{X_{\xi}(N\tau)} (\bar{\chi}_{R}\Phi) (X_{\xi}(t - N\tau))| &\leq (\mathbb{E}_{X_{\xi}(N\tau)}|\bar{\chi}_R(X_{\xi}(t-N\tau))|^2)^{\frac{1}{2}} (\mathbb{E}_{X_{\xi}(N\tau)}|\Phi(X_{\xi}(t-N\tau))|^2)^{\frac{1}{2}}\\& \leq \left( \frac{\mathbb{E}_{X_{\xi}(N\tau)}e^{2|X_{\xi}(t-N\tau)|^2}}{e^{2R^2}}\right)^{\frac{1}{2}} (\mathbb{E}_{X_{\xi}(N\tau)}e^{2|X_{\xi}(t-N\tau)|^2})^{\frac{1}{2}}\\& \leq Ce^{2|\xi|^2}e^{-R^2}, \end{align*} where the second inequality follows from a Chernoff-type bound: $$ \mathbb{P}(X \geq a) \leq \frac{\mathbb{E}e^{\theta X}}{e^{\theta a}}.$$ Then \begin{align*} \langle \mu^*, \bar{\chi}_{R} \Phi\rangle & \leq \left( \int_{\mathbb{R}^d}|\Phi(x)|^2 \mu^*(dx)\right)^{\frac{1}{2}} \left( \int_{\mathbb{R}^d} \bar{\chi}_R(x)\mu^*(dx)\right)^{\frac{1}{2}} \\& \leq (\mu^*(|x| \geq R))^{\frac{1}{2}} \left( \int_{\mathbb{R}^d} e^{2|x|^2} \mu^*(dx)\right)^{\frac{1}{2}}||\Phi||_{bL, \gamma}\\& \leq C e^{-R^2} e^{2|\xi|^2}, \end{align*} and therefore \begin{align*} |\mathbb{E}_{X_{\xi}(N\tau)}(\bar{\chi}_{R}\Phi)(X_{\xi}(t - N\tau)) -\langle \mu^*, \bar{\chi}_R\Phi \rangle|\leq C e^{-R^2} e^{2|\xi|^2}. \end{align*} Hence\begin{align*} |\mathbb{E}[\Phi(X_{\xi}(t))\mid \mathcal{F}_{N\tau} ] - \langle \mu^*, \Phi(\cdot)\rangle| &\leq Ce^{-R^2}e^{2|\xi|^2} + Ce^{2(R+2)^2} e^{2|\xi|^2} e^{-\gamma(t-N\tau)}\\& = Ce^{2|\xi|^2}(e^{-R^2} +e^{2(R+2)^2-\gamma(t-N\tau)}) \leq Ce^{2|\xi|^2} e^{-\frac{\gamma (t-N\tau)}{{5}}} \end{align*} by choosing $R^2 = \frac{\gamma(t-N\tau)}{5}$.
\end{proof} For $\Phi \in C_{bL}^{\gamma} (\mathbb{R}^d)$ as above, define $$\Pi(N\tau) = \int_{N\tau}^{\infty}\mathbb{E}[\tilde{\Phi}(X_{\xi}(u)) \mid \mathcal{F}_{N\tau}] du = \int_{0}^{\infty} \mathbb{E}_{X_{\xi}(N\tau)}(\tilde{\Phi}(X_{\xi}(u)))du.$$ For $t \in \mathbb{R}^+$ and $N = \lfloor \frac{t}{\tau} \rfloor$, we write \begin{align}\label{40} \int_0^t \tilde{\Phi}(X_{\xi}(u))du = \int_0^{N\tau}\tilde{\Phi}(X_{\xi}(u)) du +\int_{N\tau}^t \tilde{\Phi}(X_{\xi}(u)) du. \end{align} Following the martingale approximation approach, we decompose the left-hand side of \eqref{40} into a martingale term $M_{N\tau}$ and a residual term $R_{N\tau, t}$: \begin{align*} M_{N\tau} = \Pi(N\tau) - \Pi(0) +\int_0^{N\tau} \tilde{\Phi}(X_{\xi}(u)) du, \end{align*} \begin{align*} R_{N\tau, t} = -\Pi(N\tau) +\Pi(0) +\int_{N\tau}^t \tilde{\Phi}(X_{\xi}(u)) du, \end{align*} and we also define the martingale differences \begin{align*} Z_{N} = M_{N\tau}- M_{(N-1)\tau}, ~~~~N \geq 1, \end{align*} noting that $M_{0} = 0$. \begin{lemma}\label{41} $\{M_{N\tau}\}_{N=1}^{\infty}$ is a martingale w.r.t. the filtration $\{\mathcal{F}_{N\tau}\}_{N = 1}^{\infty}$ with zero mean.
\end{lemma} \begin{proof}For $0 \leq k \leq N-1, k \in \mathbb{N}$,\begin{align*} &\mathbb{E}[\Pi(N\tau) - \Pi(0) +\int_0^{N\tau} \tilde{\Phi}(X_{\xi}(u))du \mid \mathcal{F}_{k\tau}] \\& =\mathbb{E}[\Pi(N\tau)\mid \mathcal{F}_{k\tau}] -\mathbb{E}[\Pi(0)\mid \mathcal{F}_{k\tau}]+\mathbb{E}[\int_{k\tau}^{N\tau} \tilde{\Phi}(X_{\xi}(u))du \mid \mathcal{F}_{k\tau}] +\int_0^{k\tau}\tilde{\Phi}(X_{\xi}(u))du\\& =\mathbb{E}[\int_{N\tau}^{\infty} \mathbb{E}[\tilde{\Phi}(X_{\xi}(u)) \mid \mathcal{F}_{N\tau}] du \mid \mathcal{F}_{k\tau}] -\Pi(0)+ \int_{0}^{k\tau}\tilde{\Phi}(X_{\xi}(u))du +\mathbb{E}[\int_{k\tau}^{N\tau}\tilde{\Phi}(X_{\xi}(u)) du\mid \mathcal{F}_{k\tau}]\\&= \int_{N\tau}^{\infty} \mathbb{E}_{X_{\xi}(k\tau)}\tilde{\Phi}(X_{\xi}(u-k\tau))du -\Pi(0) + \int_0^{k\tau} \tilde{\Phi}(X_{\xi}(u))du+ \int_{k\tau}^{N \tau} \mathbb{E}_{X_{\xi}(k\tau)}\tilde{\Phi}(X_{\xi}(u-k\tau))du\\& =\int_{(N-k)\tau}^{\infty}\mathbb{E}_{X_{\xi}(k\tau)}\tilde{\Phi}(X_{\xi}(u))du - \Pi(0) +\int_{0}^{k\tau} \tilde{\Phi}(X_{\xi}(u))du +\int_0^{(N-k)\tau} \mathbb{E}_{X_{\xi}(k\tau)}\tilde{\Phi}(X_{\xi}(u))du\\&= \int_0^{\infty}\mathbb{E}_{X_{\xi}(k\tau)} \tilde{\Phi}(X_{\xi}(u))du -\Pi(0) +\int_0^{k\tau}\tilde{\Phi}(X_{\xi}(u))du \\& =M_{k\tau}. \end{align*}Moreover, \begin{align*} \mathbb{E}M_{N\tau}& =\mathbb{E}[\Pi(N\tau) - \Pi(0) +\int_0^{N\tau}\tilde{\Phi}(X_{\xi}(u))du]\\&= \mathbb{E}[\int_{N\tau}^{\infty}\mathbb{E}[\tilde{\Phi}(X_{\xi}(u))\mid \mathcal{F}_{N\tau}]du]-\mathbb{E}\int_0^{\infty} \tilde{\Phi}(X_{\xi}(u))du +\mathbb{E}\int_0^{N\tau}\tilde{\Phi}(X_{\xi}(u))du\\& = \int_{N\tau}^{\infty}\mathbb{E}[\tilde{\Phi}(X_{\xi}(u))]du - \int_0^{\infty}\mathbb{E} \tilde{\Phi}(X_{\xi}(u))du +\int_{0}^{N\tau}\mathbb{E}\tilde{\Phi}(X_{\xi}(u))du =0. \end{align*} \end{proof} We then obtain the following lemma, which plays a crucial role in the subsequent proof.
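In discrete time, the martingale property just proved is equivalent to the corrector $\Pi$ solving a Poisson equation $\Pi - P\Pi = \tilde{\Phi}$. The toy two-state Markov chain check below (the transition matrix and observable are our own illustrative choices) builds $\Pi$ as a truncated Neumann series and verifies the Poisson equation numerically.

```python
# Two-state toy version of the martingale decomposition: the corrector
# Pi = sum_{n>=0} P^n phi_c solves the Poisson equation Pi - P*Pi = phi_c,
# which is the discrete-time content of the martingale property of M_N.
P = [[0.9, 0.1], [0.2, 0.8]]            # transition matrix (illustrative)
pi = [2.0 / 3.0, 1.0 / 3.0]             # its stationary distribution
phi = [1.0, -2.0]                       # observable (illustrative)
mean = pi[0] * phi[0] + pi[1] * phi[1]
phi_c = [phi[0] - mean, phi[1] - mean]  # centered observable, cf. tilde-Phi

def apply_P(v):
    # (P v)(i) = sum_j P[i][j] * v[j]
    return [P[i][0] * v[0] + P[i][1] * v[1] for i in range(2)]

Pi_corr = [0.0, 0.0]
term = phi_c[:]
for _ in range(500):                    # truncated Neumann series
    Pi_corr = [Pi_corr[i] + term[i] for i in range(2)]
    term = apply_P(term)

PPi = apply_P(Pi_corr)
residual = [Pi_corr[i] - PPi[i] - phi_c[i] for i in range(2)]
print(residual)  # both entries are numerically zero
```

The series converges geometrically because $P^n \tilde{\Phi} \to \langle \pi, \tilde{\Phi}\rangle = 0$, mirroring the role of the exponential decay in Theorem \ref{20}.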
\begin{lemma}\label{23} For $1\leq p \leq 2$, $N\in \mathbb{N}, \Phi \in C_{bL}^{\gamma}(\mathbb{R}^d),$ \begin{align}\label{24} \mathbb{E}|M_{N\tau}|^{2^p} \leq C((N\tau)^{2-2^{-p}}+1) e^{2^{p+1}|\xi|^2} , \end{align} \begin{align}\label{25} \mathbb{E}|Z_N|^{2^p} \leq Ce^{2^{p+1}|\xi|^2}, \end{align} where $C$ does not depend on $N$. \end{lemma} \begin{proof}Note that \begin{align*} \mathbb{E}|M_{N\tau}|^{2^p} \leq 3^{2^p-1}[\mathbb{E}|\Pi(N\tau)|^{2^p} + \mathbb{E}|\Pi(0)|^{2^p} + \mathbb{E}|\int_0^{N\tau}\tilde{\Phi}(X_{\xi}(u))du|^{2^p}]. \end{align*} \begin{align*} \mathbb{E}|\Pi(N\tau)|^{2^p} &= \mathbb{E}|\int_0^{\infty} \mathbb{E}_{X_{\xi}(N\tau)}(\tilde{\Phi}(X_{\xi}(u)))du|^{2^p}\leq Ce^{2^{p+1}|\xi|^2} \int_0^{\infty} e^{-\frac{\gamma u 2^p}{5}} du\\& = Ce^{2^{p+1}|\xi|^2} \frac{5}{\gamma 2^{p}}, \end{align*} where the first inequality is from Theorem \ref{20}. Similarly, we have \begin{align*} \mathbb{E}|\Pi(0)|^{2^p} \leq Ce^{2^{p+1}|\xi|^2} \frac{5}{\gamma 2^{p}}, \end{align*} and \begin{align*} \mathbb{E}|\int_0^{N\tau} \tilde{\Phi}(X_{\xi}(u))du|^{2^p} &\leq (N\tau)^{1-2^{-p}} \int_0^{N\tau} \mathbb{E}|\tilde{\Phi} (X_{\xi}(u))|^{2^p} du\\& \leq (N\tau)^{1-2^{-p}}||\Phi||_{bL, \gamma}^{2^p} \int_0^{N\tau}Ce^{2^{p+1}|\xi|^2} du\\& \leq (N\tau)^{2-2^{-p}}||\Phi||_{bL, \gamma}^{2^p} Ce^{2^{p+1}|\xi|^2}. \end{align*} Then \begin{align*} \mathbb{E}|M_{N\tau}|^{2^p}& \leq 3^{2^p-1}\left[ Ce^{2^{p+1}|\xi|^2}\frac{5}{\gamma 2^{p}} + Ce^{2^{p+1}|\xi|^2} \frac{5}{\gamma 2^{p}} + (N\tau)^{2-2^{-p}}||\Phi||_{bL, \gamma}^{2^p} e^{2^{p+1}|\xi|^2} \right]\\& \leq C((N\tau)^{2-2^{-p}}+1) e^{2^{p+1}|\xi|^2}. \end{align*} Similar to the steps in the proof above, we can obtain \begin{align*} \mathbb{E}|Z_N|^{2^p} \leq Ce^{2^{p+1}|\xi|^2} \end{align*} for any $N \geq 1$, where $C$ does not depend on $N$.
\end{proof} \begin{lemma}\label{26} For $\Phi \in C_{bL}^{\gamma}(\mathbb{R}^d)$, \begin{align*} \lim_{t \rightarrow \infty} \frac{R_{N\tau, t}}{t} = 0, ~~~~\mathbb{P}\text{-a.s.} \end{align*} \end{lemma} \begin{proof} Since $N = \lfloor \frac{t}{\tau} \rfloor$, it suffices to show \begin{align}\label{29} \lim_{N \rightarrow \infty}\frac{1}{\sqrt{N}} \sup_{N\tau \leq t \leq (N+1)\tau} |R_{N\tau, t}| = 0, ~~~~\mathbb{P}\text{-a.s.} \end{align} Recall that \begin{align*} |\Pi(N\tau)| &= |\int_0^{\infty}\mathbb{E}_{X_{\xi}(N\tau)}(\tilde{\Phi}(X_{\xi}(u)))du|\leq \int_{0}^{\infty} C e^{2|X_{\xi}(N\tau)|^2}e^{- \frac{\gamma u}{5}}du\\& \leq C e^{2|X_{\xi}(N\tau)|^2} \int_0^{\infty} e^{-\frac{\gamma u}{5}} du = C e^{2|X_{\xi}(N\tau)|^2}\frac{5}{\gamma},\\ \sup_{N\tau \leq t \leq (N+1)\tau} |\int_{N\tau}^t \tilde{\Phi}(X_{\xi}(s))ds| &\leq C \tau \sup_{N\tau \leq t \leq (N+1)\tau}e^{2|X_{\xi}(t)|^2}. \end{align*} It then follows from the Markov inequality that, for any $K > 0$, \begin{align*} \mathbb{P}\left(\sup_{N\tau \leq t \leq (N+1)\tau} e^{2|X_{\xi}(t)|^2} > K \right) \leq \frac{C e^{2^4 |\xi|^2}}{K^8}. \end{align*} Hence \begin{align*} \sum_{N=1}^{\infty}\mathbb{P} &\left(\sup_{N\tau \leq t \leq (N+1)\tau} \left(|\Pi(N\tau)| +|\Pi(0)| +|\int_{N\tau}^t \tilde{\Phi}(X_{\xi}(s))ds|\right) \geq N^{\frac{1}{4}} \right)\\& \leq \sum_{N=1}^{\infty}\mathbb{P}\left(C\tau \sup_{N\tau \leq t \leq (N+1)\tau} e^{2|X_{\xi}(t)|^2} \geq N^{\frac{1}{4}}\right) \leq Ce^{2^4|\xi|^2} \sum_{N=1}^{\infty} N^{-2} < \infty. \end{align*} By the Borel-Cantelli lemma, there is an almost surely finite random integer $N_0(\omega)$ such that for $N \geq N_0(\omega)$, \begin{align*} \sup_{N\tau \leq t \leq (N+1)\tau} |R_{N\tau, t}| \leq N^{\frac{1}{4}}, \end{align*} which leads to \begin{align*} \lim_{N \rightarrow \infty}\frac{1}{\sqrt{N}} \sup_{N\tau \leq t \leq (N+1)\tau} |R_{N\tau, t}| = 0, ~~~~\mathbb{P}\text{-a.s.} \end{align*} \end{proof} Now we arrive at one of the main results in this article, the strong law of large numbers.
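The Borel--Cantelli mechanism in the proof of Lemma \ref{26} can be illustrated numerically: when the probability that a block supremum exceeds $N^{1/4}$ is summable, only finitely many blocks exceed the threshold. In the toy sketch below, block maxima of standard Gaussians stand in for the residual suprema; the block size and all other choices are illustrative, not derived from the paper.

```python
import random

# Borel-Cantelli mechanism: block maxima with uniformly bounded moments
# exceed the growing threshold N**0.25 only finitely often, so the
# normalized residual sup / sqrt(N) tends to zero almost surely.
rng = random.Random(3)
exceedances = 0
for N in range(1, 3000):
    # max of 10 |N(0,1)| draws stands in for the block supremum of R
    block_max = max(abs(rng.gauss(0.0, 1.0)) for _ in range(10))
    if block_max > N ** 0.25:
        exceedances += 1
print(exceedances)  # only early blocks exceed, before N**0.25 grows large
```

With Gaussian tails the exceedance probabilities decay far faster than the summable $N^{-2}$ rate used in the proof, so exceedances stop after a moderate $N$.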
\begin{theorem} Suppose that (H3)-(H9) hold. For $\Phi \in C_{bL}^{\gamma}(\mathbb{R}^d)$ and $\epsilon > 0$,\begin{align}\label{33} \lim_{t \rightarrow \infty}\frac{\int_0^t \tilde{\Phi}(X_{\xi}(u))du}{t^{\frac{1}{2}+ \epsilon}} = 0,~~~~\mathbb{P}\text{-a.s.} \end{align} \end{theorem} \begin{proof} In view of Lemma \ref{23}, Lemma \ref{26}, and Theorem \ref{27} in the appendix, we obtain the desired \eqref{33}. To be specific, taking $p=1$ in Lemma \ref{23}, the conditions of Theorem \ref{27} are met with $C_{N} = N^{\frac{1}{2} + \epsilon}$; since the residual term $R_{N\tau, t}$ is negligible by Lemma \ref{26}, \eqref{33} follows from the decomposition into $M_{N\tau}$ and $R_{N\tau, t}$. \end{proof} \section{Central limit theorem} Before proving the key theorem, we make some preparations. Let $a_n(\xi) = \sum_{k=0}^n P_{0, k\tau}\tilde{\Phi}(\xi)$; we have the following lemma.\begin{lemma} The sequence of functions $a_n$ converges uniformly on bounded sets. The pointwise limit $$ a:= \lim_{n \rightarrow \infty} a_n$$ is a Lipschitz function and for any $n \geq m$, \begin{align*} \mathbb{E}[a(X_{\xi}(n\tau))\mid \mathcal{F}_{m\tau}] = \lim_{N \rightarrow \infty} \mathbb{E}[a_N(X_{\xi}(n\tau)) \mid \mathcal{F}_{m\tau}]. \end{align*} \end{lemma} \begin{proof} Thanks to Theorem \ref{21}, for any $\epsilon > 0$, we have \begin{align*} |a_n(\xi) - a_m(\xi)| &= \left|\sum_{k = m+1}^n \left(P_{0, k\tau} \Phi(\xi) - \langle \mu^*, \Phi(\cdot) \rangle\right)\right| \leq \sum_{k=m+1}^n C\,Lip(\Phi)d_L(\delta_{\xi}, \mu^*) e^{-\gamma k\tau}\\& \leq C Lip(\Phi) d_L(\delta_{\xi}, \mu^*)\frac{e^{-\gamma (m+1)\tau}}{1-e^{-\gamma \tau}} < \epsilon \end{align*} for $n \geq m > N$ with $N$ large enough. Hence $a_n$ converges uniformly on bounded sets.
Note that \begin{align*} |a_N(\xi_1) - a_{N}(\xi_2)| \leq \sum_{k=0}^{N} |\langle P_{0, k\tau}^* \delta_{\xi_1} - P_{0, k\tau}^* \delta_{\xi_2}, \Phi \rangle| \leq C Lip(\Phi)|\xi_1 - \xi_2| \sum_{k=0}^{N} e^{-\gamma k\tau}, \end{align*} hence $$ |a(\xi_1) - a(\xi_2)| \leq C Lip(\Phi)|\xi_1 -\xi_2|,$$ where $C$ does not depend on $\xi_1, \xi_2 \in \mathbb{R}^d$. So $a$ is also a Lipschitz function. Since $a_N$ and $a$ are Lipschitz and $$ d_L(P_{0, k\tau}^* \delta_{\xi}, \delta_0) < \infty, $$ we know that they are $P_{0, k\tau}^*\delta_{\xi}$-integrable. By the dominated convergence theorem, for any $\xi \in \mathbb{R}^d$, \begin{align*} \lim_{N \rightarrow \infty}P_{0, (n-m)\tau}a_{N}(\xi) &= \lim_{N \rightarrow \infty} \mathbb{E}_{\xi}[a_N(X_{\xi}((n-m)\tau))]\\& =\lim_{N \rightarrow \infty}\langle P_{0, (n-m)\tau}^*\delta_{\xi}, a_N\rangle = \langle P_{0, (n-m)\tau}^*\delta_{\xi}, a\rangle = P_{0, (n-m)\tau}a(\xi). \end{align*} Hence \begin{align*} \lim_{N \rightarrow \infty}\mathbb{E}_{\xi}[a_N(X_{\xi}(n\tau)) \mid \mathcal{F}_{m\tau}] = \lim_{N \rightarrow \infty} P_{0, (n-m)\tau} a_{N}(X_{\xi}(m\tau)) = \mathbb{E}[a(X_{\xi}(n\tau)) \mid \mathcal{F}_{m\tau}]. \end{align*} \end{proof} In order to obtain the key theorem of this section, we need a crucial theorem from \cite{ref20}, namely Theorem \ref{28} in the appendix, which is based on the martingale approximation approach. Now we are in a position to verify the conditions (M1)-(M3) in Theorem \ref{28}, which lead to the central limit theorem directly. It should also be noted that $$ \nu_N:= \frac{1}{N} \sum_{j=1}^N P_{0, (j-1)\tau}^*\delta_{\xi} \overset{w}{\rightarrow} \mu^*.$$ \begin{theorem}\label{34} Suppose that (H3)-(H9) hold. For $ \Phi \in C_{bL}^{\gamma}(\mathbb{R}^d)$, $$ \frac{1}{\sqrt{t}} \int_0^t \tilde{\Phi}(X_{\xi}(u)) du \overset{\mathcal{D}}{\longrightarrow} N(0, \sigma^2), ~~~~t \rightarrow \infty, $$ where $N(0, \sigma^2)$ denotes the normal random variable with mean $0$ and variance $\sigma^2$.
In fact, \begin{align*} \sigma^2 = \langle \mu^*, \mathbb{E}_{\xi}M_{\tau}^2\rangle. \end{align*} \end{theorem} \begin{proof}Set $F(\xi) = \mathbb{E}_{\xi}[Z_1^2 \mid |Z_1| \geq \epsilon \sqrt{N}] = \mathbb{E}_{\xi}[(M_{\tau} - M_0)^2 \mid |M_{\tau}| \geq \epsilon \sqrt{N}]$. By the Markov property, we have \begin{align*} P_{0, (k-1)\tau} F(\xi) &= \mathbb{E}_{\xi}[F(X_{\xi}((k-1)\tau))]\\& = \mathbb{E}_{\xi}[\mathbb{E}_{X_{\xi}((k-1)\tau)}[Z_1^2 \mid |Z_1| \geq \epsilon \sqrt{N} ]]\\& = \mathbb{E}_{\xi}[Z_{k}^2 \mid |Z_k| \geq \epsilon \sqrt{N}]. \end{align*} Hence $$\frac{1}{N} \sum_{j=0}^{N-1}\mathbb{E}_{\xi}[Z_{j+1}^2 \mid |Z_{j+1}| \geq \epsilon \sqrt{N}] = \langle \frac{1}{N} \sum_{j=1}^N P_{0, (j-1)\tau}^* \delta_{\xi}, F \rangle.$$ To verify (M1), it remains to show that $ F_N(\xi) = F(\xi)$ converges to $0$ uniformly on compact sets and that there is some $\eta > 0$ satisfying \eqref{30}. Note that for any $\delta \in (0, 2]$, \begin{align*} F_N(\xi)&:= \mathbb{E}_{\xi}[Z_1^2 \mid |Z_1| \geq \epsilon \sqrt{N}] \leq (\mathbb{E}_{\xi}|Z_1|^{2+\delta})^{\frac{2}{2+\delta}} \mathbb{P}(|Z_1| \geq \epsilon \sqrt{N})^{\frac{\delta}{2+\delta}}\\&\leq (\mathbb{E}_{\xi}|Z_1|^{2+\delta})^{\frac{2}{2+\delta}} \left(\frac{\mathbb{E}_{\xi}|Z_1|^{2+\delta}}{\epsilon^{2+\delta} N^{\frac{2+\delta}{2}}}\right)^{\frac{\delta}{2+\delta}} = \frac{\mathbb{E}_{\xi}|Z_1|^{2+\delta}}{\epsilon^{\delta} N^{\frac{\delta}{2}}}. \end{align*} Moreover, \begin{align*} \mathbb{E}_{\xi}|Z_1|^{2+\delta} &= \mathbb{E}_{\xi}\Big|\Pi(\tau) - \Pi(0) +\int_0^{\tau} \tilde{\Phi}(X_{\xi}(u))du\Big|^{2+\delta} \leq (\mathbb{E}_{\xi}|Z_1|^4)^{\frac{2+\delta}{4}}\\&\leq (Ce^{8|\xi|^2})^{\frac{2+\delta}{4}}= Ce^{2(2+\delta)|\xi|^2} < \infty. \end{align*} Then for any $R > 0$, $$ \sup_{|\xi| \leq R}F_N(\xi) \leq \frac{Ce^{2(2+\delta)R^2}}{\epsilon^{\delta}N^{\frac{\delta}{2}}} \overset{N \rightarrow \infty}{\rightarrow} 0.$$ Hence $F_N$ converges to $0$ uniformly on compact sets.
And \begin{align*} \lim_{N \rightarrow \infty}\langle \nu_N, |F_N|^{1+\frac{\delta}{2}} \rangle &\leq \lim_{N \rightarrow \infty}\frac{1}{N} \sum_{j=1}^N P_{0, (j-1)\tau} Ce^{(2+\delta)^2|\xi|^2}\\& \leq \lim_{N \rightarrow \infty} \frac{C}{N} \sum_{j=1}^{N} \mathbb{E}e^{(2+\delta)^2 |X_{\xi}((j-1)\tau)|^2} < \infty. \end{align*} From \cite{ref20}, we know that if $\nu_N$ converges to $\mu^*$ weakly, $F_N \rightarrow 0$ uniformly on compact sets, and there is an $\eta > 0$ such that \begin{align}\label{30} \lim_{N \rightarrow \infty}\langle \nu_N, |F_N|^{1+\eta} \rangle < \infty,\end{align} then \begin{align}\label{31} \lim_{N \rightarrow \infty} \langle \nu_N, F_N \rangle =0,\end{align}which leads to (M1) directly. For (M2), by periodicity and the Markov property, for any $\sigma \geq 0$, \begin{align*} \frac{1}{l}\sum_{m=1}^l \mathbb{E}_{\xi}\left|\frac{1}{K}\mathbb{E} [[M]_{mK\tau} - [M]_{(m-1)K\tau} \mid \mathcal{F}_{(m-1)K\tau}]-\sigma^2\right|= \frac{1}{l}\sum_{m=1}^l P_{0, (m-1)K\tau}|H_K|(\xi), \end{align*} where $H_K(\xi) = \mathbb{E}_{\xi}\left[\frac{1}{K}[M]_{K\tau} - \sigma^2 \right] = \mathbb{E}_{\xi} [\frac{1}{K}M_{K\tau}^2 - \sigma^2]$. From Lemma \ref{23}, we know that \begin{align*} \limsup_{l \rightarrow \infty} \frac{1}{l}\sum_{m=1}^l \langle P_{0, (m-1)K\tau}^* \delta_{\xi}, |H_K| \rangle < \infty. \end{align*} Hence, using the weak convergence $\frac{1}{l}\sum_{m=1}^l P_{0, (m-1)K\tau}^*\delta_{\xi} \overset{w}{\rightarrow} \mu^*$, \begin{align*} \lim_{l \rightarrow \infty}\frac{1}{l}\sum_{m=1}^l \langle P_{0, (m-1)K\tau}^* \delta_{\xi}, |H_K| \rangle = \langle \mu^* ,|H_K|\rangle = \int_{\mathbb{R}^d}\Big|\frac{1}{K} \sum_{j=0}^{K-1}P_{0, j\tau} J(\xi) \Big|\mu^*(d\xi), \end{align*} where $J(\xi) = \mathbb{E}_{\xi} M_{\tau}^2 - \sigma^2$.
Since $\mu^*$ is ergodic under the Markov semigroup $\{P_{0, k\tau}\}_{k\geq 0}$, by the Birkhoff ergodic theorem, \begin{align*} \frac{1}{K}\sum_{j=0}^{K-1} P_{0, j\tau}(\mathbb{E}_{\cdot}M_{\tau}^2)(\xi) \rightarrow \int_{\mathbb{R}^d} \mathbb{E}_z M_{\tau}^2\, \mu^*(dz), ~~~~K \rightarrow \infty,\end{align*} for $\mu^*$-a.e. $\xi$. Hence if we choose $\sigma^2 = \langle \mu^*, \mathbb{E}_{\xi}M_{\tau}^2\rangle$, then$$ \langle \mu^*, |H_K| \rangle \rightarrow 0,~~~~ K\rightarrow \infty.$$ Finally, we consider the condition (M3). By periodicity and the Markov property, we rewrite the expression in condition (M3) as follows: \begin{align*} \frac{1}{lK}\sum_{m=1}^l \sum_{j=(m-1)K}^{mK-1} \mathbb{E}_{\xi}[1+Z_{j+1}^2 \mid |M_{j\tau} -M_{(m-1)K\tau}| \geq \epsilon \sqrt{lK}] = \frac{1}{K}\sum_{j=0}^{K-1} \langle Q_{l, K}^*\delta_{\xi} , G_{l, j}\rangle, \end{align*} where $$\langle Q_{l, K}^*\delta_{\xi} , G_{l, j}\rangle = \frac{1}{l} \sum_{m=1}^l \langle P_{0, (m-1)K\tau}^* \delta_{\xi}, G_{l, j} \rangle,$$and $$G_{l, j}(\xi) = \mathbb{E}_{\xi} [1+Z_{j+1}^2 \mid |M_{j\tau}| \geq \epsilon \sqrt{lK}].$$ To verify (M3), it remains to prove \begin{align}\label{32} \limsup_{l \rightarrow \infty} \langle Q_{l, K}^* \delta_{\xi}, G_{l, j} \rangle = 0, \end{align} for $j= 0, 1, \cdots, K-1.$ In fact, \begin{align*} G_{l,j}(\xi)& \leq (\mathbb{E}(1+Z_{j+1}^2)^2)^{\frac{1}{2}} \mathbb{P}(|M_{j\tau}| \geq \epsilon \sqrt{lK})^{\frac{1}{2}}\\& \leq(\mathbb{E}(1+Z_{j+1}^2)^2)^{\frac{1}{2}} \left(\frac{\mathbb{E}|M_{j\tau}|^2}{(\epsilon \sqrt{lK})^2}\right)^{\frac{1}{2}}\\& =(\mathbb{E}(1+Z_{j+1}^2)^2)^{\frac{1}{2}} \frac{(\mathbb{E}|M_{j\tau}|^2)^{\frac{1}{2}}}{\epsilon \sqrt{lK}}\\& \leq C(\epsilon) \frac{1+\mathbb{E}_{\xi}|M_{(j+1)\tau}|^4 + \mathbb{E}_{\xi}|M_{j\tau}|^4}{\epsilon \sqrt{lK}}.
\end{align*} In view of Lemma \ref{23}, for any $R > 0$, $0 \leq j \leq K-1$, \begin{align*} \sup_{\xi \in B_{R}(0)} G_{l, j}(\xi) &\leq C(\epsilon)\frac{1+C(((j+1)\tau)^{2-2^{-2}}+1)e^{8R^2} + C(1+(j\tau)^{2-2^{-2}})e^{8R^2}}{\epsilon \sqrt{lK}}\\& \leq \frac{C(\epsilon, R, K)}{\sqrt{lK}}, \end{align*} where $B_{R}(0)$ denotes the ball with center $0$ and radius $R$ in $\mathbb{R}^d$. Then we have \begin{align*} \langle \frac{1}{l} \sum_{m=1}^l P_{0, (m-1)K\tau}^* \delta_{\xi}, G_{l, j}^2 \rangle \leq C(K, \xi), \end{align*}and \begin{align*} \limsup_{l \rightarrow \infty} \langle \frac{1}{l} \sum_{m=1}^l P_{0, (m-1)K\tau}^* \delta_{\xi}, G_{l, j}^2 \rangle < \infty. \end{align*} So, arguing as for \eqref{30} and \eqref{31}, \begin{align*} \limsup_{l \rightarrow \infty} \langle \frac{1}{l} \sum_{m=1}^l P_{0, (m-1)K\tau}^* \delta_{\xi}, G_{l, j}\rangle =0, \end{align*} for $j = 0, 1, \cdots, K-1$. Then \eqref{32} is achieved. \end{proof} \section{Appendix} \begin{definition}\label{3} An $\mathbb{R}^d$-valued stochastic process $L= (L(t), t \geq 0)$ is called a L{\'e}vy process if (1) $L(0) = 0$, a.s.; (2) $L$ has independent and stationary increments; (3) $L$ is stochastically continuous, i.e., for any $\epsilon > 0$ and $s \geq 0$, \begin{align*} \lim_{t \rightarrow s} \mathbb{P}(|L(t) - L(s)| > \epsilon) = 0. \end{align*} \end{definition} \begin{remark} Under (1) and (2), (3) can be expressed as: \begin{align*} \lim_{t \downarrow 0} \mathbb{P} (|L(t)| > \epsilon) = 0 \end{align*} for all $\epsilon > 0$. \end{remark} \begin{proposition}\label{2} (L{\'e}vy-It{\^o} decomposition) If $L$ is an $\mathbb{R}^d$-valued L{\'e}vy process, then there exist $b \in \mathbb{R}^d$, an $\mathbb{R}^d$-valued Wiener process $W$ with covariance operator $Q$, the so-called $Q$-Wiener process, and an independent Poisson random measure $N$ on $(\mathbb{R}^+ \cup \{0\}) \times (\mathbb{R}^d \setminus \{0\})$ such that: for each $t \geq 0$, \begin{align*} L(t) = bt +W(t) +\int_{|x| < 1} x N_1(t, dx) + \int_{|x| \geq 1} x N(t, dx).
\end{align*} Here the Poisson random measure $N$ has the intensity measure $\nu$ which satisfies \begin{align}\label{4} \int_{\mathbb{R}^d} (|y|^2 \wedge 1) \nu(dy) < \infty, \end{align} and $N_1$ is the compensated Poisson random measure of $N$. \end{proposition} Suppose that $X(t), 0 \leq t \leq \tau$, is the unique solution of \begin{align*} dX(t) = f(t, X(t))dt + g(t, X(t)) dW(t) \end{align*} with initial value $\xi \in \mathcal{L}^p(\mathbb{P}, \mathbb{R}^d)$. There are existing results on the $p$th moment of the solution; for more details we refer to \cite{ref31}, and we present some of them in the lemmas below. \begin{lemma}\label{7} Suppose that there exists a constant $\alpha > 0$ such that for all $p \geq 2$, $x \in \mathbb{R}^d$, $t \in [0, \tau]$, $$ \langle x, f(t, x) \rangle +\frac{p-1}{2}|g(t, x)|^2 \leq \alpha (1+|x|^2). $$ Then for $\xi \in \mathcal{L}^p(\mathbb{P}, \mathbb{R}^d)$, $$ \mathbb{E}|X(t)|^p \leq 2^{\frac{p-2}{2}}(1+\mathbb{E}|\xi|^p)e^{p\alpha t} $$ for all $t \in [0, \tau]$. \end{lemma} \begin{lemma}\label{8} Let $p \geq 2$ and $\tau > 0$, and suppose that $$ \mathbb{E}\int_0^\tau |g(s)|^2 ds < \infty.$$ Then \begin{align*} \mathbb{E}\left|\int_0^{\tau}g(s) dW(s)\right|^p \leq \left(\frac{p(p-1)}{2}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\int_0^{\tau}|g(s)|^p ds. \end{align*} In particular, if $p=2$, this is an equality. \end{lemma} More generally, we have the following lemma. \begin{lemma}\label{9} Let $p \geq 2$ and $\tau > 0$, and suppose that $$ \mathbb{E}\int_0^\tau |g(s)|^2 ds < \infty.$$ Then \begin{align*} \mathbb{E}\left(\sup_{0 \leq t \leq \tau}\left|\int_0^t g(s) dW(s)\right|^p \right)\leq \left(\frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\int_0^{\tau} |g(s)|^p ds. \end{align*} \end{lemma} Similar to the results above, we have the following lemma.
\begin{lemma}\label{10} Let $p \geq 2$ and $\tau > 0$, and suppose that $$\mathbb{E}\int_0^\tau\int_{|x|< 1} |f(r, x)|^p \nu(dx)dr < \infty.$$ Then \begin{align*} \mathbb{E}\left|\int_0^\tau \int_{|x| < 1} f(r, x) N_1(dr, dx) \right|^2 &=\mathbb{E}\int_0^{\tau}\int_{|x|< 1} |f(r, x)|^2 \nu(dx)dr ;\\ \mathbb{E}\left|\int_0^\tau \int_{|x| < 1} f(r, x) N_1(dr, dx) \right|^p &\leq \left(\frac{p(p-1)}{2}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\int_0^{\tau}\int_{|x|< 1}|f(r, x)|^p \nu(dx)dr; \end{align*} and \begin{align*} \mathbb{E}\left(\sup_{0 \leq t \leq \tau}\left| \int_0^t\int_{|x|< 1}f(s, x)N_1(ds, dx)\right|^p\right) \leq \left(\frac{p^3}{2(p-1)}\right)^{\frac{p}{2}} \tau^{\frac{p-2}{2}} \mathbb{E}\int_0^{\tau}\int_{|x|< 1} |f(r, x)|^p \nu(dx)dr. \end{align*} \end{lemma} \begin{lemma}\label{11} Let $\{Y_n\}_{n=1}^{\infty}$ be a sequence of stochastic processes in $\mathbb{R}^d$ such that for any $\{t_i\}_{i=1}^k \subset \mathbb{R}^+$, the joint distribution of $\{Y_n(t_i)\}_{i=1}^k$ is weakly convergent as $n \rightarrow \infty$, and the sequence $\{Y_n\}_{n=1}^{\infty}$ is uniformly stochastically continuous, that is, $$ \sup_{n,\, |s_1 - s_2| < \delta} \mathbb{P}\{|Y_n(s_1) - Y_n(s_2)| > \epsilon\} \rightarrow 0, ~~~~\delta \rightarrow 0.$$ Then a sequence of stochastic processes $\{\tilde{Y}_n\}_{n=1}^{\infty}$ can be constructed on another probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{P})$ such that (1) $\tilde{Y}_n \overset{\mathcal{D}}{\rightarrow} \tilde{Y}$ for some process $\tilde{Y}$; (2) the finite-dimensional distributions of the processes $Y_n$ and $\tilde{Y}_n$ coincide for $n > 0$. \end{lemma} \begin{lemma}\label{12} Suppose that the sequence of stochastic processes $\{Y_n\}_{n=1}^{\infty}$ is uniformly stochastically continuous and uniformly bounded in probability, that is, $$ \sup_{t, n}\mathbb{P}\{|Y_n(t)| > M\} \rightarrow 0, ~~~~ M\rightarrow \infty.$$ Then $\{Y_n\}_{n=1}^{\infty}$ contains a subsequence $\{Y_{n_k}\}_{k=1}^{\infty}$ with
weakly convergent finite-dimensional distributions. \end{lemma} \begin{lemma}\label{39} Let $p > 1$, and let $\mathcal{A}$ be a family of integrable random variables in $\mathcal{L}^p(\Omega, \mathbb{R}^d)$. If there exists a positive constant $C$ such that $$ \sup_{\xi \in \mathcal{A}} \mathbb{E}|\xi|^p =: C < \infty,$$ then \begin{align}\label{14} \sup_{\xi \in \mathcal{A}} \int_{|\xi| \geq M}|\xi| d \mathbb{P}\rightarrow 0,~~~~M \rightarrow \infty. \end{align} \end{lemma} \begin{remark}\label{15} Let $\mathcal{A} \subset \mathcal{L}^1(\Omega, \mathbb{R}^d)$; then \eqref{14} is equivalent to: (1) $a_0 = \sup_{\xi \in \mathcal{A}}\{\mathbb{E}|\xi|\} < \infty$; (2) for any $\epsilon > 0$, there exists a $\delta > 0$ such that $$ \sup_{\xi \in \mathcal{A}}\int_{A}|\xi| d \mathbb{P} \leq \epsilon$$ for every $A \in \mathcal{F}$ satisfying $\mathbb{P}(A) \leq \delta$. \end{remark} Let us give a corollary of Lebesgue's dominated convergence theorem. \begin{lemma}\label{16} Let $\eta, \xi, \xi_1, \cdots$ be random variables such that $|\xi_n| \leq \eta$, $\xi_n \overset{a.s.}{\rightarrow} \xi$ and $\mathbb{E}\eta^p < \infty$ for some $p > 0$. Then $\mathbb{E}|\xi|^p < \infty$ and $\mathbb{E}|\xi - \xi_n|^p \rightarrow 0$ as $n \rightarrow \infty$. \end{lemma} \begin{lemma}\label{18} For any $t \geq 0$, if there is an $\alpha > 0$ such that for any $x_1, x_2 \in \mathbb{R}^d$, $$ d_{L}(P_{0, t}^* \delta_{x_1}, P_{0, t}^*\delta_{x_2}) \leq \alpha|x_1 - x_2|,$$ then for any $\mu_1, \mu_2 \in \mathcal{P}(\mathbb{R}^d)$, \begin{align*} d_L(P_{0, t}^*\mu_1, P_{0, t}^*\mu_2) \leq \alpha d_L(\mu_1, \mu_2). \end{align*} \end{lemma} \begin{theorem}\label{27}(\cite{ref17}, \cite{ref22}) Let $\{M_{N\tau}\}_{N \geq 1}$ be a zero mean square integrable martingale and let $\{C_N\}_{N \geq 1}$ be an increasing sequence going to $\infty$ such that $$ \sum_{N=1}^{\infty} C_{N}^{-2} \mathbb{E} Z_{N}^2 < \infty,$$ where $Z_N = M_{N\tau} - M_{(N-1)\tau}$ and $M_0 = 0$.
Then \begin{align*} \lim_{N \rightarrow \infty} C_{N}^{-1} M_{N\tau} = 0, ~~~~\mathbb{P}-a.s.. \end{align*} \end{theorem} \begin{theorem}\label{28} Assume that the martingale $M_{N\tau}$, its quadratic variation $[M]_{N\tau}$ and the associated martingale differences $Z_N = M_{N\tau} - M_{(N-1)\tau}$ (with $M_0 = 0$) satisfy the following: for every $\epsilon > 0$, [M1] $\lim_{N \rightarrow \infty} \frac{1}{N} \sum_{j=0}^{N-1} \mathbb{E}[Z_{j+1}^2 \mid |Z_{j+1}| \geq \epsilon \sqrt{N}] = 0$. [M2] There exists $\sigma > 0$ such that \begin{align*} \lim_{K \rightarrow \infty} \limsup_{l \rightarrow \infty} \frac{1}{l} \sum_{m=1}^l \mathbb{E}\left|\frac{1}{K} \mathbb{E} [[M]_{mK\tau} - [M]_{(m-1)K\tau}\mid \mathcal{F}_{(m-1)K\tau}] - \sigma^2 \right| =0 \end{align*} together with the uniform square integrability condition $\sup_{N \geq 1} \mathbb{E}Z_{N}^2 < \infty$. [M3] $\lim_{K \rightarrow \infty} \limsup_{l \rightarrow \infty} \frac{1}{lK} \sum_{m=1}^l \sum_{j=(m-1)K}^{mK-1} \mathbb{E}[1+Z_{j+1}^2 \mid |M_{j\tau} -M_{(m-1)K\tau}| \geq \epsilon \sqrt{lK}] =0.$ Then one has $$\lim_{N \rightarrow \infty}\frac{\mathbb{E}[M]_{N\tau}}{N} = \sigma^2$$ and $$ \lim_{N \rightarrow \infty} \mathbb{E}e^{i\theta \frac{M_{N\tau}}{\sqrt{N}}} = e^{-\frac{\sigma^2 \theta^2}{2}}$$ for any $\theta \in \mathbb{R}$. \end{theorem} \section*{Acknowledgment} This work was supported by the National Natural Science Foundation of China (12071175, 12371191). \begin{thebibliography}{00} \bibitem{ref51}Applebaum, D.: L{\'e}vy processes and stochastic calculus. Second edition. Cambridge Studies in Advanced Mathematics, 116. Cambridge University Press, Cambridge (2009) \bibitem{ref4} Chen, F., Han, Y., Li, Y., Yang, X.: Periodic solutions of Fokker-Planck equations. J. Differ. Equ. 263, 285-298 (2017) \bibitem{ref5}Cheng, M., Liu, Z.: Periodic, almost periodic and almost automorphic solutions for SPDEs with monotone coefficients. Discrete Contin. Dyn. Syst. Ser.
B 26 (2021)\bibitem{ref48}Cloez, B., Hairer, M.: Exponential ergodicity for Markov processes with random switching. Bernoulli 21 (2015)\bibitem{ref39}Czapla, D., Horbacz, K., Wojew{\'o}dka-{\'S}ci{\k a}{\.z}ko, H.: The central limit theorem for Markov processes that are exponentially ergodic in the bounded-Lipschitz norm. Qual. Theory Dyn. Syst. 23 (2024) \bibitem{ref7}Da Prato, G., Tudor, C.: Periodic and almost periodic solutions for semilinear stochastic equations. Stoch. Anal. Appl. 13, 13-33 (1995) \bibitem{ref8}Da Prato, G., Iannelli, M., Tubaro, L.: Stochastic differential equations in Banach spaces, variational formulation. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. (8) 61 (1976/1977) \bibitem{ref9}Da Prato, G.: An integral inequality for the invariant measure of some finite dimensional stochastic differential equation. Discrete Contin. Dyn. Syst. Ser. B 21 (2016)\bibitem{ref1}Dobrushin, R.: Central limit theorem for nonstationary Markov chains. II. Teor. Veroyatnost. i Primenen. 1 (1956) \bibitem{ref13}Feng, C., Zhao, H., Zhou, B.: Pathwise random periodic solutions of stochastic differential equations. J. Differ. Equ. 251 (2011) \bibitem{ref90}Gharib, M., Abdel Fattah, M.: A uniform estimate for the rate of convergence in the central limit theorem for homogeneous Markov chains. Appl. Math. Lett. 8 (1995)\bibitem{ref16}Gordin, M. I., Lif{\v{s}}ic, B. A.: Central limit theorem for stationary Markov processes. Dokl. Akad. Nauk SSSR 239 (1978) \bibitem{ref38}Hairer, M., Mattingly, J. C.: Ergodicity of the 2D Navier-Stokes equations with degenerate stochastic forcing. Ann. of Math. (2) 164 (2006)\bibitem{ref36} Hairer, M., Mattingly, J. C.: Spectral gaps in Wasserstein distances and the 2D stochastic Navier-Stokes equations. Ann. Probab. 36(6), 2050-2091 (2008) \bibitem{ref17}Hall, P., Heyde, C. C.: Martingale limit theory and its application. Probability and Mathematical Statistics. Academic Press, Inc.
[Harcourt Brace Jovanovich, Publishers], New York-London, 1980. xii+308 pp. ISBN: 0-12-19350-8 \bibitem{ref18}Holzmann, H.: Martingale approximations for continuous-time and discrete-time stationary Markov processes. Stochastic Process. Appl. 115 (2005)\bibitem{ref41} Jiang, X., Li, Y., Yang, X.: LaSalle-type stationary oscillation principle for stochastic affine periodic systems. Stoch. Dyn. 22 (2022) \bibitem{ref15} Khasminskii, R.: Stochastic stability of differential equations. With contributions by G. N. Milstein and M. B. Nevelson. Completely revised and enlarged second edition. Stochastic Modelling and Applied Probability, 66. Springer, Heidelberg, 2012. xviii+339 pp. ISBN: 978-3-642-23279-4 \bibitem{ref20}Komorowski, T., Walczuk, A.: Central limit theorem for Markov processes with spectral gap in the Wasserstein metric. Stochastic Process. Appl. 122 (2012)\bibitem{ref22}Kuksin, S., Shirikyan, A.: Mathematics of Two-Dimensional Turbulence, Volume 194 of Cambridge Tracts in Math. Cambridge University Press (2012) \bibitem{ref23}Kipnis, C., Varadhan, S. R. S.: Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104 (1986) \bibitem{ref24}Liu, R., Lu, K.: Statistical Properties of 2D Stochastic Navier-Stokes Equations with Time-Periodic Forcing and Degenerate Stochastic Forcing. arXiv preprint arXiv:2105.00598. (2021) \bibitem{ref25}Liu, R., Lu, K.: Exponential mixing and limit theorems of quasi-periodically forced 2D stochastic Navier-Stokes Equations in the hypoelliptic setting. arXiv preprint arXiv:2205.14348. (2022) \bibitem{ref26}Liu, W., T{\"o}lle, J.: Existence and uniqueness of invariant measures for stochastic evolution equations with weakly dissipative drifts. Electron. Commun. Probab. 16 (2011) \bibitem{ref40} Liu, Z., Sun, K.: Almost automorphic solutions for stochastic differential equations driven by L{\'e}vy noise. J. Funct. Anal.
266 (2014) \bibitem{ref50}Mattingly, J. C.: The dissipative scale of the stochastic Navier-Stokes equation: regularization and analyticity. J. Statist. Phys. 108 (2002) \bibitem{ref31} Mao, X.: Stochastic Differential Equations and Applications. Second edition. Horwood Publishing Limited, Chichester (2008) \bibitem{ref29}Miller, W., Akin, E.: Invariant measures for set-valued dynamical systems. Trans. Amer. Math. Soc. 351 (1999) \bibitem{ref30}Nualart, D.: The Malliavin Calculus and Related Topics. Springer, Berlin (2006) \bibitem{ref32}R{\"u}schendorf, L., Schnurr, A., Wolf, V.: Comparison of time-inhomogeneous Markov processes. Adv. in Appl. Probab. 48 (2016) \bibitem{ref42} Skorokhod, A. V.: Studies in the theory of random processes. Translated from the Russian by Scripta Technica, Inc. Addison-Wesley Publishing Co., Inc., Reading, MA, 1965. viii+199 pp. \bibitem{ref49}Walczuk, A.: Central limit theorem for an additive functional of a Markov process, stable in the Wasserstein metric. Ann. Univ. Mariae Curie-Sk{\l}odowska Sect. A 62 (2008) \bibitem{ref33} Zhou, X., Xing, J., Jiang, X., Li, Y.: Periodic solutions in distribution of mean-field stochastic differential equations. J. Stat. Phys. 190 (2023) \end{thebibliography} \end{document} \endinput
\documentclass[12pt,reqno]{amsbook} \usepackage{amscd,amssymb} \usepackage{epsfig} \usepackage{float} \usepackage{url} \usepackage[all]{xy} \usepackage{stmaryrd} \usepackage{imakeidx} \makeindex \sloppy \renewcommand{\theequation}{\mbox{\arabic{chapter}.\arabic{equation}}} \renewcommand{\thefigure}{\mbox{\arabic{chapter}.\arabic{figure}}} \renewcommand{\thesection}{\mbox{\arabic{chapter}.\arabic{section}}} \newcommand{\N}{{\mathbb N}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\C}{{\mathbb C}} \newcommand{\D}{{\mathbb D}} \newcommand{\R}{{\mathbb R}} \newcommand{\F}{{\mathbb F}} \renewcommand{\P}{{\mathbb P}} \renewcommand{\H}{{\mathbb H}} \newcommand{\T}{{\mathbb T}} \newcommand{\AAA}{{\mathcal A}} \newcommand{\BB}{{\mathcal B}} \newcommand{\CC}{{\mathcal C}} \newcommand{\DD}{{\mathcal D}} \newcommand{\EE}{{\mathcal E}} \newcommand{\FF}{{\mathcal F}} \newcommand{\GG}{{\mathcal G}} \newcommand{\HH}{{\mathcal H}} \newcommand{\KK}{{\mathcal K}} \newcommand{\LL}{{\mathcal L}} \newcommand{\MM}{{\mathcal M}} \newcommand{\NN}{{\mathcal N}} \newcommand{\Nu}{{\mathcal V}} \newcommand{\OO}{{\mathcal O}} \newcommand{\PP}{{\mathcal P}} \newcommand{\QQ}{{\mathcal Q}} \newcommand{\RR}{{\mathcal R}} \newcommand{\SSS}{{\mathcal S}} \newcommand{\TT}{{\mathcal T}} \newcommand{\UU}{{\mathcal U}} \newcommand{\VV}{{\mathcal V}} \newcommand{\WW}{{\mathcal W}} \newcommand{\XX}{{\mathcal X}} \newcommand{\YY}{{\mathcal Y}} \newcommand{\aaa}{{\bf a}} \newcommand{\mmm}{{\bf m}} \newcommand{\qqq}{{\bf q}} \newcommand{\ddd}{{\rm d}} \newcommand{\www}{\widetilde} \newcommand{\whh}{\widehat} \newcommand{\oooo}{\overline} \newcommand{\uuuu}{\underline} \newcommand{\iiii}{\infty} \newcommand{\ddiv}{{\rm div}} \newcommand{\mult}{{\rm mult}} \newcommand{\paa}{\partial} \newcommand{\zdz}{{z\partial_z}} \newcommand{\pr}{{\rm pr}} \newcommand{\Or}{{\rm Or}} \newcommand{\Br}{{\rm Br}} \newcommand{\im}{{\rm im}} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\diag}{diag} 
\DeclareMathOperator{\divis}{div} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Imm}{Im} \DeclareMathOperator{\Isom}{Isom} \DeclareMathOperator{\Lie}{Lie} \DeclareMathOperator{\lcm}{lcm} \DeclareMathOperator{\mmod}{mod} \DeclareMathOperator{\Ord}{Ord} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\Rad}{Rad} \DeclareMathOperator{\Ree}{Re} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\Sp}{Sp} \DeclareMathOperator{\Spp}{Spp} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\tr}{tr} \begin{document} \theoremstyle{plain} \newtheorem{lemma}{Lemma}[chapter] \newtheorem{definition/lemma}[lemma]{Definition/Lemma} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{conjectures}[lemma]{Conjectures} \theoremstyle{definition} \newtheorem{definition}[lemma]{Definition} \newtheorem{withouttitle}[lemma]{} \newtheorem{remark}[lemma]{Remark} \newtheorem{remarks}[lemma]{Remarks} \newtheorem{example}[lemma]{Example} \newtheorem{examples}[lemma]{Examples} \newtheorem{notations}[lemma]{Notations} \newtheorem{remarksandnotations}[lemma]{Remarks and Notations} \title[Induced structures from upper triangular matrices] {Unimodular bilinear lattices,\\ automorphism groups,\\ vanishing cycles,\\ monodromy groups,\\ distinguished bases,\\ braid group actions\\ and moduli spaces from \\upper triangular matrices} \author[C. Hertling and K. 
Larabi] {Claus Hertling and Khadija Larabi} \address{Claus Hertling\\ Lehrstuhl f\"ur algebraische Geometrie, Universit\"at Mannheim, B6 26, 68159 Mannheim, Germany} \email{[email protected]} \address{Khadija Larabi\\ Lehrstuhl f\"ur algebraische Geometrie, Universit\"at Mannheim, B6 26, 68159 Mannheim, Germany} \email{[email protected]} \date{December 21, 2024} \subjclass[2020]{06B15, 20F55, 20F36, 14D05, 57M10, 32S30} \keywords{Unimodular bilinear lattice, upper triangular matrix, Seifert form, even and odd intersection form, even and odd monodromy group, vanishing cycle, braid group action, distinguished basis, manifold of Stokes regions, isolated hypersurface singularity} \thanks{This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- 494849004} \begin{abstract} This monograph starts with an upper triangular matrix with integer entries and 1's on the diagonal. It develops from this a spectrum of structures, which appear in different contexts, in algebraic geometry, representation theory and the theory of irregular meromorphic connections. It provides general tools to study these structures, and it studies systematically the cases of rank 2 and 3. The rank 3 cases already lead to a rich variety of phenomena and give an idea of the general landscape. Their study takes up a large part of the monograph. Special cases are related to Coxeter groups, generalized Cartan lattices and exceptional sequences, or to isolated hypersurface singularities, their Milnor lattices and their distinguished bases. But these make up only a small part of all cases. One case in rank 3 which is beyond them is related to the quantum cohomology of $\P^2$ and to Markov triples. The first structure associated to the matrix is a $\Z$-lattice with unimodular bilinear form (called Seifert form) and a triangular basis.
It leads immediately to an even and an odd intersection form, reflections and transvections, an even and an odd monodromy group, even and odd vanishing cycles. Braid group actions lead to braid group orbits of distinguished bases and of upper triangular matrices. Finally, complex manifolds, which consist of correctly glued Stokes regions, are associated to these braid group orbits. A report on the case of isolated hypersurface singularities concludes the monograph. \end{abstract} \maketitle \tableofcontents \setcounter{chapter}{0} \chapter{Introduction}\label{s1} \setcounter{equation}{0} \setcounter{figure}{0} This book develops many structures, starting from a single upper triangular $n\times n$ matrix $S$ with integer entries and 1's on the diagonal. The structures are introduced in section \ref{s1.1}; they are the playing characters of the book. Such a matrix $S$ and induced structures appear in very different mathematical areas. Section \ref{s1.2} gives a panorama of these areas, though they are not the subject of the book, except for the area of isolated hypersurface singularities. Section \ref{s1.3} describes the results of this book. The book provides general tools and facts. It treats the cases $n=2$ and $n=3$ systematically. In chapter \ref{s10} it gives an overview of the area of isolated hypersurface singularities. \section{Playing characters} \label{s1.1} $H_\Z$ will always be a {\it $\Z$-lattice}, so a free $\Z$-module of some finite rank $n\in\N=\{1,2,3,...\}$. Then $L$ will always be a nondegenerate bilinear form $L:H_\Z\times H_\Z\to\Z$. It is called a {\it Seifert form}. The pair $(H_\Z,L)$ is called a {\it bilinear lattice}. If for some $\Z$-basis $\uuuu{e}\in M_{1\times n}(H_\Z)$ of $H_\Z$ the determinant $\det L(\uuuu{e}^t,\uuuu{e})$ is $1$, the pair $(H_\Z,L)$ is called a {\it unimodular bilinear lattice}. The notion {\it bilinear lattice} is from \cite{HK16}. In chapter \ref{s2} we develop the following structures for bilinear lattices, following \cite{HK16}.
In this introduction and in the chapters \ref{s3}--\ref{s10}, however, we restrict to unimodular bilinear lattices. \begin{eqnarray*} T^{uni}_n(\Z)&:=&\{ S\in M_{n\times n}(\Z)\,|\, S_{ij}=0\textup{ for }i>j,\ S_{ii}=1\} \end{eqnarray*} denotes the set of all upper triangular matrices with integer entries and 1's on the diagonal. Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n$. A basis $\uuuu{e}$ of $H_\Z$ is called {\it triangular} if $L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. The transpose in the matrix is motivated by the case of isolated hypersurface singularities. The set of triangular bases is called $\BB^{tri}$. By far not every unimodular bilinear lattice has triangular bases. But here we care only about those which do. For fixed $n\in\N$ there is an obvious 1-1 correspondence between the set of isomorphism classes of unimodular bilinear lattices $(H_\Z,L,\uuuu{e})$ with triangular bases and the set $T^{uni}_n(\Z)$, given by the map $(H_\Z,L,\uuuu{e})\mapsto S:=L(\uuuu{e}^t,\uuuu{e})^t$. To a given matrix $S\in T^{uni}_n(\Z)$ we always associate the corresponding triple $(H_\Z,L,\uuuu{e})$. Let a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis $\uuuu{e}$ be given, with matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. The following objects are associated to this triple canonically. The names are motivated by the case of isolated hypersurface singularities. \begin{list}{}{} \item[(i)] A symmetric bilinear form $I^{(0)}:H_\Z\times H_\Z\to\Z$ and a skew-symmetric bilinear form $I^{(1)}:H_\Z\times H_\Z\to\Z$ with \begin{eqnarray*} I^{(0)}&=&L^t+L,\quad\textup{so } I^{(0)}(\uuuu{e}^t,\uuuu{e})=S+S^t,\\ I^{(1)}&=&L^t-L,\quad\textup{so } I^{(1)}(\uuuu{e}^t,\uuuu{e})=S-S^t, \end{eqnarray*} which are called {\it even} respectively {\it odd intersection form}.
\item[(ii)] An automorphism $M:H_\Z\to H_\Z$ which is defined by \begin{eqnarray*} L(Ma,b)=L(b,a),\quad\textup{so }M(\uuuu{e}) = \uuuu{e}\cdot S^{-1}S^t, \end{eqnarray*} and which is called {\it the monodromy}. It respects $L$ (and $I^{(0)}$ and $I^{(1)}$) because $L(Ma,Mb)=L(Mb,a)=L(a,b)$. \item[(iii)] Six automorphism groups \begin{eqnarray*} O^{(k)}&:=& \Aut(H_\Z,I^{(k)})\qquad\textup{for }k\in\{0;1\},\\ G_\Z^M&:=& \Aut(H_\Z,M):=\{g:H_\Z\to H_\Z \textup{ automorphism }\,|\, gM=Mg\},\\ G_\Z^{(k)}&:=& \Aut(H_\Z,I^{(k)},M)=O^{(k)}\cap G_\Z^M \qquad\textup{for }k\in\{0;1\},\\ G_\Z&:=&\Aut(H_\Z,L)=\Aut(H_\Z,L,I^{(0)},I^{(1)},M). \end{eqnarray*} \item[(iv)] The set of {\it roots} \begin{eqnarray*} R^{(0)}:=\{a\in H_\Z\,|\, L(a,a)=1\}, \end{eqnarray*} and the set \begin{eqnarray*} R^{(1)}:= H_\Z. \end{eqnarray*} \item[(v)] For $k\in\{0;1\}$ and $a\in R^{(k)}$ the {\it reflection} (if $k=0$) respectively {\it transvection} (if $k=1$) $s^{(k)}_a\in O^{(k)}$ with \begin{eqnarray*} s^{(k)}_a(b):= b-I^{(k)}(a,b)a\quad\textup{for }b\in H_\Z. \end{eqnarray*} \item[(vi)] For $k\in\{0;1\}$ the {\it even} (if $k=0$) respectively {\it odd} (if $k=1$) {\it monodromy group} \begin{eqnarray*} \Gamma^{(k)}:=\langle s^{(k)}_{e_1},...,s^{(k)}_{e_n} \rangle \subset O^{(k)}. \end{eqnarray*} \item[(vii)] For $k\in\{0;1\}$ the set of {\it even} (if $k=0)$ respectively {\it odd} (if $k=1)$ {\it vanishing cycles} \begin{eqnarray*} \Delta^{(k)}:=\Gamma^{(k)}\{\pm e_1,...,\pm e_n\} \subset R^{(k)}. \end{eqnarray*} \end{list} The definitions of all these objects require only $S\in SL_n(\Z)$ and $e_1,...,e_n\in R^{(0)}$, not $S\in T^{uni}_n(\Z)$. But the formula (Theorem \ref{t2.6}) \begin{eqnarray*} s^{(k)}_{e_1}...s^{(k)}_{e_n}=(-1)^{k+1}M \quad\textup{for }k\in\{0;1\} \end{eqnarray*} depends crucially on $S\in T^{uni}_n(\Z)$.
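This formula can be checked by direct computation. The following sketch (our illustration, not part of the text) writes all maps as matrices in the basis $\uuuu{e}$: the monodromy $M$ has matrix $S^{-1}S^t$ by (ii), and by (v) the reflection/transvection $s^{(k)}_{e_i}$ is the identity with row $i$ replaced by $\delta_{ij}-I^{(k)}(e_i,e_j)$, where $I^{(0)}$ and $I^{(1)}$ have Gram matrices $S+S^t$ and $S-S^t$.

```python
# A computational check (our sketch) of the product formula above:
# s^(k)_{e_1} ... s^(k)_{e_n} = (-1)^{k+1} M, for a sample matrix S.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def inv_unitriangular(S):
    # inverse of an upper triangular integer matrix with 1's on the diagonal
    n = len(S)
    inv = identity(n)
    for j in range(n):
        for i in range(j - 1, -1, -1):
            inv[i][j] = -sum(S[i][k] * inv[k][j] for k in range(i + 1, j + 1))
    return inv

def reflection_matrix(Ak, i, n):
    # matrix of s^(k)_{e_i}: b |-> b - I^(k)(e_i, b) e_i, with Ak the Gram matrix
    T = identity(n)
    for j in range(n):
        T[i][j] -= Ak[i][j]
    return T

S = [[1, 2, 1],
     [0, 1, 3],
     [0, 0, 1]]                               # an example element of T^{uni}_3(Z)
n = len(S)
St = transpose(S)
Mmat = matmul(inv_unitriangular(S), St)       # the monodromy M in the basis e

for k in (0, 1):
    sgn = 1 if k == 0 else -1                 # A_0 = S + S^t, A_1 = S - S^t
    Ak = [[S[i][j] + sgn * St[i][j] for j in range(n)] for i in range(n)]
    prod = identity(n)
    for i in range(n):                        # compose s_{e_1} ... s_{e_n}
        prod = matmul(prod, reflection_matrix(Ak, i, n))
    expected = [[((-1) ** (k + 1)) * x for x in row] for row in Mmat]
    assert prod == expected
print("s_{e_1}...s_{e_n} = (-1)^{k+1} M holds for k = 0 and k = 1")
```

Replacing `S` by any other element of $T^{uni}_n(\Z)$ should leave the assertions intact; for a matrix in $SL_n(\Z)$ that is not triangular, they may fail.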
The even data $I^{(0)},O^{(0)},\Gamma^{(0)}$ and $\Delta^{(0)}$ are more important in many areas and usually better understood than the odd data $I^{(1)},O^{(1)},\Gamma^{(1)}$ and $\Delta^{(1)}$. But in the area of isolated hypersurface singularities both turn up. For $k\in\{0;1\}$ the group $\Gamma^{(k)}$ contains all reflections/transvections $s^{(k)}_a$ with $a\in\Delta^{(k)}$. In the case of a bilinear lattice which is not unimodular this holds for $k=0$, but not for $k=1$ (Remark \ref{t2.9} (iii)). This is one reason why we restrict in the chapters \ref{s3}--\ref{s10} to unimodular bilinear lattices. Section \ref{s3.2} gives an action of a semidirect product $\Br_n\ltimes\{\pm 1\}^n$ of the braid group $\Br_n$ on $n$ strands and the sign group $\{\pm 1\}^n$ on the set $(R^{(k)})^n$ for $k\in\{0;1\}$. It is compatible with the Hurwitz action of $\Br_n$ on $(\Gamma^{(k)})^n$ with connecting map $$(R^{(k)})^n\to (\Gamma^{(k)})^n,\quad \uuuu{v}=(v_1,...,v_n)\mapsto (s^{(k)}_{v_1},...,s^{(k)}_{v_n}).$$ Both actions restrict to the same action on $\BB^{tri}$. In particular, one obtains the orbit $$\BB^{dist}:=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})\subset \BB^{tri}$$ of {\it distinguished bases} of $H_\Z$. The triple $(H_\Z,L,\BB^{dist})$ (up to isomorphism) is in many cases a canonical object, whereas the choice of a distinguished basis $\uuuu{e}\in\BB^{dist}$ is a true choice. Whether $\BB^{dist}=\BB^{tri}$ or $\BB^{dist}\subsetneqq \BB^{tri}$ holds is usually a difficult question. The subgroup \begin{eqnarray*} G_\Z^{\BB}:=\{g\in G_\Z\,|\, g(\BB^{dist})=\BB^{dist}\} \end{eqnarray*} is in many important cases equal to $G_\Z$. But if $\BB^{dist}\subsetneqq \BB^{tri}$, then $G_\Z^{\BB}\subsetneqq G_\Z$ is possible. The action of $\Br_n\ltimes\{\pm 1\}^n$ on $\BB^{tri}$ is compatible with an action on $T^{uni}_n(\Z)$.
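The Hurwitz action mentioned here is easy to experiment with. The following sketch (illustration only; the book's precise conventions are fixed in section \ref{s3.2}) uses one common convention for the Hurwitz move of $\Br_n$ on $n$-tuples of group elements and checks, on $2\times 2$ integer matrices of determinant 1, the braid relation and the invariance of the product of the tuple; this invariance is what keeps products like $s^{(k)}_{v_1}...s^{(k)}_{v_n}$ constant on orbits.

```python
# Sketch (one common convention, assumed here): the Hurwitz move sigma_i
# replaces (g_i, g_{i+1}) by (g_i g_{i+1} g_i^{-1}, g_i).
def mul(A, B):
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def inv(A):  # inverse of a 2x2 matrix with det A = 1
    return ((A[1][1], -A[0][1]), (-A[1][0], A[0][0]))

def sigma(i, tup):
    t = list(tup)
    t[i], t[i + 1] = mul(mul(t[i], t[i + 1]), inv(t[i])), t[i]
    return tuple(t)

def prod(tup):
    P = ((1, 0), (0, 1))
    for A in tup:
        P = mul(P, A)
    return P

g = (((1, 1), (0, 1)), ((1, 0), (1, 1)), ((2, 1), (1, 1)))

# the braid relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2
assert sigma(0, sigma(1, sigma(0, g))) == sigma(1, sigma(0, sigma(1, g)))
# the product of the tuple is invariant under each move
assert prod(sigma(0, g)) == prod(g) == prod(sigma(1, g))
```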
The orbit of $S$ is called $$\SSS^{dist}:=\Br_n\ltimes\{\pm 1\}^n(S)\subset T^{uni}_n(\Z),$$ and the matrices in it are called {\it distinguished matrices}. As $\{\pm 1\}^n$ is the normal subgroup in the semidirect product $\Br_n\ltimes\{\pm 1\}^n$, one can first divide out the action of $\{\pm 1\}^n$. One obtains actions of $\Br_n$ on $\BB^{tri}/\{\pm 1\}^n$ and on $T^{uni}_n(\Z)/\{\pm 1\}^n$. It will be interesting to determine the stabilizers $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ of $\uuuu{e}/\{\pm 1\}^n$ and $(\Br_n)_{S/\{\pm 1\}^n}$ of $S/\{\pm 1\}^n$. Finally, chapter \ref{s8} introduces several complex manifolds which are in fact {\it semisimple F-manifolds with Euler fields} (Definition \ref{t8.8}). First, the following manifolds are universal and depend only on $n\in\N$, \begin{eqnarray*} C_n^{pure}&:=& \{\uuuu{u}=(u_1,...,u_n)\in\C^n\,|\, u_i\neq u_j\textup{ for all }i\neq j\},\\ C_n^{conf}&:=& C_n^{pure}/S_n \subset \C^n/S_n\cong \C^n,\\ C_n^{univ}&:=& \textup{the universal covering of }C_n^{conf} \textup{ and }C_n^{pure}. \end{eqnarray*} $C_n^{conf}$ is the configuration space of $\Br_n$, with $\pi_1(C_n^{conf})=\Br_n$. The subset of $C_n^{pure}$ \begin{eqnarray*} F_n:=\{\uuuu{u}\in C_n^{pure}\,|\, \Imm(u_1)<...<\Imm(u_n)\} \end{eqnarray*} is a fundamental domain of the action of the symmetric group $S_n$ on $C_n^{pure}$ and maps under the projection $C_n^{pure}\to C_n^{conf}$ almost bijectively to $C_n^{conf}$. Second, two manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ between $C_n^{conf}$ and $C_n^{univ}$ are constructed in section \ref{s8.1}. They are those coverings of $C_n^{conf}$ whose fundamental groups embed into $\Br_n=\pi_1(C_n^{conf})$ as the stabilizer $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ respectively the stabilizer $(\Br_n)_{S/\{\pm 1\}^n}$. More naively, they are obtained by glueing copies of $F_n$ appropriately, one copy for each element of $\BB^{dist}/\{\pm 1\}^n$ respectively $\SSS^{dist}/\{\pm 1\}^n$.
These manifolds, especially $C_n^{\uuuu{e}/\{\pm 1\}^n}$, are related to manifolds from algebraic geometry which are associated to triples $(H_\Z,L,\uuuu{e})$, and they have to be compared with those. \section {The playing characters in action: areas, literature} \label{s1.2} Upper triangular integer matrices $S$ with 1's on the diagonal and the induced structures, which are described in section \ref{s1.1}, arise in many different contexts. Figure \ref{Fig:1.1} offers a landscape of mathematical areas where they arise. The lines between the boxes indicate connections. Of course, the chart is very incomplete and subjective and by no means gives all connections between the areas. \begin{figure}\begin{xy} \xymatrix{ \fbox{\parbox{3.5cm}{$\Br_n\ltimes\{\pm 1\}^n$ orbit in $T^{uni}_n(\Z)$}} \ar@{-}[d]\ar@{-}[drr] & &\\ \fbox{\parbox{5cm}{Milnor lattice of an isola\-ted hypersurface singularity, distinguished basis}} \ar@{-}[d] & & \fbox{\parbox{4.5cm}{generalized Cartan lattice, crystallographic Coxeter group}} \ar@{-}[d] \\ \fbox{\parbox{5cm}{hom. mirror symmetry for invertible qh. singularities: Berglund-H\"ubsch duality}} \ar@{-}[drr] \ar@{-}[d]& & \fbox{\parbox{4.2cm}{non-crossing partition, representation theory, Grothendieck group of a hereditary algebra}} \ar@{-}[d] \ar@{-}[d]\\ \fbox{\parbox{4.5cm}{$\Z$-lattice of Lefschetz thimbles for a tame function}} \ar@{-}[d] \ar@{-}^{\textup{hom.
mirror}}_{\textup{symmetry}}[rr] & & \fbox{\parbox{4.2cm}{semiorthogonal decom\-position in derived algebraic geometry}} \ar@{-}[d]\\ \fbox{\parbox{4.5cm}{moduli spaces from algebraic geometry}} \ar@{-}[d] \ar@{-}[drr] \ar@{-}[rr]^{\textup{mirror}}_{\textup{symmetry}} & & \fbox{\parbox{4cm}{moduli spaces from quantum cohomology}} \ar@{-}[d] \\ \fbox{\parbox{4.5cm}{isomonodromic family of connections with pole of Poincar\'e rank 2}} \ar@{-}[d] \ar@{-}[rr] & & \fbox{\parbox{4cm}{semisimple Frobenius manifold}} \ar@{-}[dll] \ar@{-}[d]\\ \fbox{\parbox{4.5cm}{irregular meromorphic connection with semisimple pole of order 2}} \ar@{-}[d] \ar@{-}[rr] & & \fbox{\parbox{4.5cm}{$N=2$ topological field theory, $tt^*$ geometry,\\ e.g. Toda equations}} \\ \fbox{\parbox{3cm}{$S\in T^{uni}_n(\C)\quad \& \\ \uuuu{u}\in C_n^{pure}\subset\C^n$}} \ar@{-}[rr] & & \fbox{\parbox{3.5cm}{$\Br_n\ltimes\{\pm 1\}^n$-action on $T^{uni}_n(\C)$}} } \end{xy} \caption[Figure 1.1]{Mathematical areas where upper triangular (integer) matrices with 1's on the diagonal appear} \label{Fig:1.1} \end{figure} In the following, we say a few words and give a few references for each of the boxes in Figure \ref{Fig:1.1} (and for a few more points). One can start with these references and use them to find more. The mathematical areas in these boxes are not the subject of this book, except for the area of isolated hypersurface singularities, which is treated in chapter \ref{s10}. (Therefore a reader could skip the rest of this section \ref{s1.2}.) Historically, the first appearance of unimodular bilinear lattices $(H_\Z,L,\uuuu{e})$ with triangular bases $\uuuu{e}$ is in the theory of isolated hypersurface singularities. There they arise as Milnor lattices with chosen distinguished bases \cite[Appendix]{Br70}. All distinguished bases together form a $\Br_n\ltimes\{\pm 1\}^n$ orbit $\BB^{dist}$. A single distinguished basis $\uuuu{e}$ is a choice.
The triple $(H_\Z,L,\BB^{dist})$ is a canonical object associated to a singularity. These structures have been studied extensively \cite{AGV88}\cite{AGLV98}\cite{Eb01}\cite{Eb20}. Nevertheless it is not clear how to characterize the triples $(H_\Z,L,\uuuu{e})$ or (equivalently) the matrices $S\in T^{uni}_n(\Z)$ which come from singularities with Milnor number $n$. See chapter \ref{s10} for more details and more references. Closely related are the triples $(H_\Z,L,\uuuu{e})$ which come by essentially the same construction from tame functions on affine manifolds \cite{Ph83}\cite{Ph85}. They were, however, less in the focus of singularity theorists. More recently they arose on one side of a version of mirror symmetry which connects such functions on the B-side with quantum cohomology of toric orbifolds on the A-side. Iritani considered the integral structures on both sides \cite{Ir09}. On the A-side this leads to the K-group of orbifold vector bundles on toric orbifolds with the Mukai pairing. This is part of a general appearance of triples $(H_\Z,L,\uuuu{e})$ in derived algebraic geometry. Gorodentsev \cite{Go94-1}\cite{Go94-2} also studied them abstractly. Chapter 4 in the book \cite{CDG24} gives an excellent survey on Gorodentsev's results. It starts with a general point of view, the Grothendieck group of a small triangulated category. Later it considers the bounded derived category $\DD^b(X)$ of a complex projective manifold and its Grothendieck group. Chapter 3 in the same book gives a survey on helices in triangulated categories, which are behind the appearance of triples $(H_\Z,L,\uuuu{e})$ in derived algebraic geometry. Gorodentsev also classified pairs $(H_\C,L)$ over $\C$. They split into indecomposable objects, which can be classified. A finer classification of pairs $(H_\R,L)$ over $\R$ was done by Nemethi \cite{Ne98} and in a more explicit way in \cite{BH19}. Also pairs $(H_\R,L)$ split into indecomposable objects, which can be classified.
Pairs $(H_\R,L)$ up to isomorphism are in 1-1 correspondence with tuples of {\it spectral pairs} where the first entry is modulo $2\Z$. A homological mirror symmetry for polynomials with singularities, beyond the one considered by Iritani, was proposed by Takahashi. It compares structures above pairs of invertible polynomials. One result for chain type singularities is in \cite{AT20}. The appearance of triples $(H_\Z,L,\uuuu{e})$ in derived algebraic geometry is related to the appearance of such triples in representation theory. This motivated Hubery and Krause \cite{HK16} to consider bilinear lattices $(H_\Z,L)$ with triangular bases $\uuuu{e}$. Chapter \ref{s2} in this book takes up their definition. However, the chapters \ref{s3} to \ref{s10} in this book restrict to {\it unimodular} bilinear lattices with triangular bases. One reason is that we consider, contrary to \cite{HK16}, also the odd monodromy group $\Gamma^{(1)}$ and the set $\Delta^{(1)}$ of odd vanishing cycles. They do not seem to behave well in the case of a non-unimodular bilinear lattice with triangular basis (Remark \ref{t2.9} (iii)). \cite{HK16} is especially interested in the case of a {\it generalized Cartan lattice}. There one starts with a matrix $S\in M_{n\times n}(\Z)$ with entries $S_{ij}=0$ for $i>j$, $S_{ij}\leq 0$ for $i<j$, and diagonal entries $S_{ii}\in\N$ with $\frac{S_{ij}}{S_{ii}},\frac{S_{ij}}{S_{jj}}\in\Z_{\leq 0}$ for $i< j$, and any such matrix works. This is easier than in the case of isolated hypersurface singularities. The intersection of both cases consists only of the ADE-singularities. Every generalized Cartan lattice arises as the Grothendieck group of a hereditary artin algebra \cite[Proposition 4.6]{HK16}. A crucial result here (cited in Theorem \ref{t3.6}) is the transitivity of the braid group action on the set of those $n$-tuples of reflections in the Coxeter group of a Coxeter system, whose product is the monodromy \cite{IS10}\cite{BDSW14}.
Using this and additional work, \cite{HK16} can reprove results of Ingalls-Thomas, Igusa-Schiffler, Crawley-Boevey and Ringel on exceptional sequences and non-crossing partitions related to finite dimensional modules of finite dimensional hereditary algebras. \cite{BBGKK19} gives a survey on non-crossing partitions. \cite{BWY19} is between the points of view of \cite{HK16} and of \cite{AT20}. The last result in it concerns the hyperbolic singularities. The homological mirror symmetry is related to a more down-to-earth version of mirror symmetry, which compares moduli spaces on the A-side and B-side. On the B-side one has moduli spaces of (generalized) complex structures, on the A-side they are related to symplectic data, namely Gromov-Witten invariants. In good cases, both sides lead to Frobenius manifolds which are then isomorphic by mirror symmetry \cite{RS15}. These Frobenius manifolds are generically semisimple and also in other aspects rather special, as they come from algebraic geometry. This leads to the last appearance of upper triangular matrices $S\in T^{uni}_n(\Z)$ which we want to mention, namely as the carrier of the Stokes structure of a holomorphic vector bundle on the unit disk $\D$ with a (flat) meromorphic connection with a semisimple order 2 pole at $0\in\D$ and pairwise different eigenvalues of the pole part. After some choices this structure is given by a tuple $\uuuu{u}\in C_n^{pure}\subset\C^n$ of eigenvalues of the pole part, two upper triangular matrices in $T^{uni}_n(\C)$, which are called {\it Stokes matrices}, and a diagonal matrix carrying the exponents of the formal connection \cite{Ma83}\cite{Sa02}\cite{HS11}. In cases from algebraic geometry, the diagonal matrix carrying the exponents is the unit matrix times a constant in $\frac{1}{2}\Z$, and the two Stokes matrices coincide.
The Stokes matrix is then usually in $T^{uni}_n(\Z)$, because the connection comes from oscillating integrals over a distinguished basis of Lefschetz thimbles, which therefore give a splitting of the Stokes structure. Varying $\uuuu{u}$ in a suitable covering of $C_n^{conf}$ (the best is the covering $C_n^{\uuuu{e}/\{\pm 1\}^n}$ in the notation of chapter \ref{s8}) gives rise to a holomorphic bundle on $\D\times C_n^{\uuuu{e}/\{\pm 1\}^n}$ with a meromorphic connection with a pole of Poincar\'e rank 1 along $\{0\}\times C_n^{\uuuu{e}/\{\pm 1\}^n}$ \cite{Sa02}. Some additional choices/enrichments allow one to construct from this isomonodromic family of connections the structure of a Frobenius manifold on the covering $C_n^{\uuuu{e}/\{\pm 1\}^n}$ \cite{Sa02}\cite{He03}\cite{Du99} (chapter \ref{s8} describes the underlying F-manifold). The construction of Frobenius manifolds from quantum cohomology is described in \cite{Ma99}. A different way (without additional choices) to enrich the isomonodromic family leads to a TEZP structure in the sense of \cite{He03} on the manifold $C_n^{\uuuu{e}/\{\pm 1\}^n}$. This formalizes the $tt^*$ geometry of \cite{CV93}. It induces interesting differential geometric structure on the manifold $C_n^{\uuuu{e}/\{\pm 1\}^n}$. \cite{CV93} starts with families of $N=2$ topological field theories, which are objects from mathematical physics. But at their heart are real analytic isomonodromic families of holomorphic bundles on $\P^1$ with meromorphic connections with poles of order 2 at 0 and $\infty$ which both come from the same matrix $S\in T^{uni}_n(\Z)$. As concrete cases, \cite{CV93} classifies and interprets those $\Br_3\ltimes\{\pm 1\}^3$ orbits of matrices $S\in T^{uni}_3(\Z)$ whose monodromy matrices $S^{-1}S^t$ have eigenvalues in $S^1$. We refine this in Theorem \ref{t4.6}. \cite{GL12} studies certain higher rank cases, which are related to the Toda equations.
\cite{GL12}, too, considers especially the cases related to matrices $S\in T^{uni}_n(\Z)$. $tt^*$ geometry also makes sense for matrices $S\in T^{uni}_n(\R)$. Frobenius manifolds can also be constructed from matrices $S\in T^{uni}_n(\C)$ (and additional choices). The group $\Br_n\ltimes\{\pm 1\}^n$ also acts on $T^{uni}_n(\C)$. This manifold, its coverings, and other enrichments of it parametrize Frobenius manifolds \cite{Ma99} or meromorphic connections (their monodromy data) and come equipped with interesting differential geometric structure. But in this book we care only about the points in the discrete subset $T^{uni}_n(\Z)\subset T^{uni}_n(\C)$. \section{Results} \label{s1.3} Section \ref{s1.1} associated to each matrix $S\in T^{uni}_n(\Z)$ an impressive list of algebraic-combinatorial data and also a few complex manifolds. For a given matrix $S$ there are many natural questions which all aim at controlling parts of these data, for example: \begin{list}{}{} \item[(i)] What can one say about the $\Z$-lattice $(H_\Z,I^{(0)})$ with the even intersection form, e.g. its signature? \item[(ii)] What can one say about the $\Z$-lattice $(H_\Z,I^{(1)})$ with the odd intersection form? \item[(iii)] What are the eigenvalues and the Jordan block structure of the monodromy $M$? \item[(iv)] How big are the groups $G_\Z$, $G_\Z^{(0)}$, $G_\Z^{(1)}$ and $G_\Z^M$? \item[(v)] How well can one understand the even monodromy group $\Gamma^{(0)}$? Is it determined by the pair $(H_\Z,I^{(0)})$ alone? \item[(vi)] How well can one understand the odd monodromy group $\Gamma^{(1)}$? Is it determined by the pair $(H_\Z,I^{(1)})$ alone? \item[(vii)] Is $\Delta^{(0)}=R^{(0)}$ or $\Delta^{(0)}\subsetneqq R^{(0)}$? How explicitly can one control these two sets? How explicitly can one control $\Delta^{(1)}$? \item[(viii)] Is there an easy description of the set $\BB^{dist}$ of distinguished bases? Is $\BB^{dist}=\BB^{tri}$ or $\BB^{dist}\subsetneqq \BB^{tri}$?
\item[(ix)] Is $G_\Z^{\BB}=G_\Z$ or $G_\Z^{\BB}\subsetneqq G_\Z$? \item[(x)] What do the manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ look like? Do they have natural partial compactifications? \end{list} In the cases from isolated hypersurface singularities, $S$ is quite special. There some of the questions have beautiful answers ((i), (ii), (iii) and (v), partially (vi) and (vii)), others are wide open. The even case is better understood than the odd case. Chapter \ref{s10} overviews this. In the case of a generalized Cartan lattice, such matrices $S$ are easy to write down. The even case is well understood ((i), (iii) and (v), partially (vii) and (viii)), the odd case is wide open and probably has much less beautiful structure. In this book we concentrate on general tools and on the cases of rank 2 and rank 3. The cases of rank 2 are already interesting, but still very special. The cases of rank 3 are still in some sense small, but they show already a big variety of different types and phenomena. We consider them sufficiently general to give an idea of the landscape for arbitrary rank $n\in\N$. The large number of pages of this book is due to the systematic study of all cases of rank 3. Here the singularity cases form just two cases ($A_3$, $A_2A_1$), and also the cases from generalized Cartan lattices form a subset which one can roughly estimate as one third of all cases, not containing some of the most interesting cases ($\HH_{1,2}$, $\P^2$). \begin{examples}\label{t1.1} In the following examples, some matrices in $T^{uni}_2(\Z)$ and $T^{uni}_3(\Z)$ are singled out. They cover the most important cases in $T^{uni}_2(\Z)$ and $T^{uni}_3(\Z)$. This will be made precise in Theorem \ref{t1.2}, which gives results on the braid group action on $T^{uni}_3(\Z)$.
\begin{eqnarray*} \begin{array}{ccccc} S(A_1^2) & S(A_2) & S(\P^1) & S(x)\textup{ for }x\in\Z & S(A_1^3) \\ \begin{pmatrix}1&0\\0&1\end{pmatrix} & \begin{pmatrix}1&-1\\0&1\end{pmatrix} & \begin{pmatrix}1&-2\\0&1\end{pmatrix} & \begin{pmatrix}1&x\\0&1\end{pmatrix} & \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} \end{array} \end{eqnarray*} \begin{eqnarray*} \begin{array}{cccc} S(\P^2) & S(A_2A_1) & S(A_3) & S(\P^1A_1) \\ \begin{pmatrix}1&-3&3\\0&1&-3\\0&0&1\end{pmatrix} & \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix} & \begin{pmatrix}1&-1&0\\0&1&-1\\0&0&1\end{pmatrix} & \begin{pmatrix}1&-2&0\\0&1&0\\0&0&1\end{pmatrix} \end{array} \end{eqnarray*} \begin{eqnarray*} \begin{array}{cccc} & & S(-l,2,-l) & S(x_1,x_2,x_3) \\ S(\widehat{A}_2) & S(\HH_{1,2}) & \textup{for }l\geq 3 & \textup{for }x_1,x_2,x_3\in\Z\\ \begin{pmatrix}1&-1&-1\\0&1&-1\\0&0&1\end{pmatrix} & \begin{pmatrix}1&-2&2\\0&1&-2\\0&0&1\end{pmatrix} & \begin{pmatrix}1&-l&2\\0&1&-l\\0&0&1\end{pmatrix} & \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \end{array} \end{eqnarray*} The notations $A_1^2$ \index{$A_1^2,\ A_2,\, A_1^3,\ A_2A_1,\ A_3,\ \whh{A}_2$}, $A_2$, $A_1^3$, $A_2A_1$, $A_3$ and $\whh{A}_2$ are due to the fact that $(H_\Z,I^{(0)})$ is in these cases the corresponding root lattice, respectively in the case $\whh{A}_2$ the affine root lattice of type $\whh{A}_2$. The notations $\P^1$\index{$\P^1,\ \P^2$} and $\P^2$ come from the quantum cohomology of $\P^1$ and $\P^2$: The matrix $S$ is in these cases one Stokes matrix of the associated meromorphic connection with a semisimple pole of order 2 \cite{Gu99}\cite[Example 4.4 and (4.97)]{Du99}. The notation $\HH_{1,2}$\index{$\HH_{1,2}$} is related to a Hurwitz space: Here $(H_\Z,L)$ comes from a $\Z$-lattice of Lefschetz thimbles for a branched covering of degree 2 from an elliptic curve to $\P^1$ with 4 simple branch points, one of them above $\infty$ \cite[Lecture 5]{Du96}.
A different point of view is that here $(H_\Z,I^{(0)})$ is an extended affine root lattice of type $A_1^{(1,1)*}$. We prefer the notation $\HH_{1,2}$. \end{examples} A large part of this book is devoted to answering the questions above for the cases of rank 2 and 3. However, the chapters \ref{s2}, \ref{s3}, \ref{s8} and the sections \ref{s5.1}, \ref{s6.1} and \ref{s7.1} also offer a lot of background material and tools. Chapter \ref{s10} overviews the state of the art in the case of isolated hypersurface singularities. In the following, we present some key results from the chapters \ref{s4} to \ref{s9}. The action of $\Br_3\ltimes\{\pm 1\}^3$ on $T^{uni}_3(\Z)$ boils down to an action of $PSL_2(\Z)\ltimes G^{sign}$ on $T^{uni}_3(\Z)$ where $G^{sign}\cong \{\pm 1\}^2$ comes from the action of the sign group $\{\pm 1\}^3$. As the action of $PSL_2(\Z)$ is partially nonlinear, it is good to write it as a semidirect product $PSL_2(\Z)\cong G^{phi}\rtimes\langle\gamma\rangle$ where $\gamma$ acts cyclically and linearly of order 3 and $G^{phi}$ is a free Coxeter group with 3 generators which act nonlinearly. The sections \ref{s4.2}--\ref{s4.4} analyze the action on $T^{uni}_3(\Z)$ carefully. The first result, Theorem \ref{t4.6}, builds on coarser classifications of Kr\"uger \cite[\S 12]{Kr90} and Cecotti-Vafa \cite[Ch. 6.2]{CV93}. The following theorem gives a part of Theorem \ref{t4.6}. \begin{theorem}\label{t1.2} (Part of Theorem \ref{t4.6}) (a) The characteristic polynomial of $S^{-1}S^t$ and of the monodromy $M$ of $(H_\Z,L,\uuuu{e})$ for $S=S(\uuuu{x})\in T^{uni}_3(\Z)$ with $\uuuu{x}\in\Z^3$ is \begin{eqnarray*} p_{ch,M}&=&(t-1)(t^2-(2-r(\uuuu{x}))t+1),\\ \textup{where}&& r:\Z^3\to\Z,\quad \uuuu{x}\mapsto x_1^2+x_2^2+x_3^2-x_1x_2x_3. \end{eqnarray*} The characteristic polynomial and $r(\uuuu{x})$ are invariants of the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $S(\uuuu{x})$. All roots of $p_{ch,M}$ are roots of unity if and only if $r(\uuuu{x})\in\{0,1,2,3,4\}$.
(b) For $\rho\in\Z-\{4\}$ the fiber $r^{-1}(\rho)\subset\Z^3$ consists only of finitely many $\Br_3\ltimes\{\pm 1\}^3$ orbits. The following table gives the symbols in Example \ref{t1.1} for the fibers over $r\in\{0,1,2,3,4\}$; so over these fibers there are only seven orbits plus one series of orbits. \begin{eqnarray*} \begin{array}{c|c|c|c|c|c} r(\uuuu{x}) & 0 & 1 & 2 & 3 & 4 \\ \hline & A_1^3,\P^2 & A_2A_1 & A_3 & - & \P^1A_1,\whh{A}_2,\HH_{1,2}, S(-l,2,-l)\textup{ with }l\geq 3 \end{array} \end{eqnarray*} \end{theorem} With the help of certain (beautiful) graphs, in Theorem \ref{t4.13} the stabilizers $(\Br_3)_{S/\{\pm 1\}^3}$ are calculated for certain representatives of all $\Br_3\ltimes\{\pm 1\}^3$-orbits in $T^{uni}_3(\Z)$. We work with 14 graphs and 24 sets of representatives. Lemma \ref{t5.8} gives information on the characteristic polynomial and the signature of $I^{(0)}$ in all rank 3 cases. \begin{lemma}\label{t1.3} (Lemma \ref{t5.8} (b)) Consider $\uuuu{x}\in\Z^3$ with $r=r(\uuuu{x})<0$ or $>4$ or with $S(\uuuu{x})$ one of the cases in the table in Theorem \ref{t1.2}. Then $p_{ch,M}=(t-\lambda_1)(t-\lambda_2)\Phi_1$ and $\sign I^{(0)}$ are as follows ($\Phi_m$ denotes the cyclotomic polynomial of the primitive $m$-th roots of unity). \begin{eqnarray*} \begin{array}{llll} r(\uuuu{x}) & p_{ch,M} & \sign I^{(0)}\hspace*{1cm} & S(\uuuu{x}) \\ r<0 & \lambda_1,\lambda_2>0 & (+--) & S(\uuuu{x}) \\ r=0 & \Phi_1^3 & (+++) & S(A_1^3)\\ r=0 & \Phi_1^3 & (+--) & S(\P^2)\\ r=1 & \Phi_6\Phi_1 & (+++) & S(A_2A_1) \\ r=2 & \Phi_4\Phi_1 & (+++) & S(A_3) \\ r=4 & \Phi_2^2\Phi_1 & (++\ 0) & S(\P^1A_1) \\ r=4 & \Phi_2^2\Phi_1 & (++\ 0) & S(\whh{A}_2) \\ r=4 & \Phi_2^2\Phi_1 & (+\ 0\ 0) & S(\HH_{1,2}) \\ r=4 & \Phi_2^2\Phi_1 & (+\ 0\ -) & S(-l,2,-l) \textup{ with }l\geq 3 \\ r>4 & \lambda_1,\lambda_2<0 & (++-) & S(\uuuu{x}) \end{array} \end{eqnarray*} \end{lemma} Chapter \ref{s5} analyzes the groups $G_\Z,G_\Z^{(0)},G_\Z^{(1)}$ and $G_\Z^M$ in all rank 3 cases.
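The formula for $p_{ch,M}$ and the fibers of $r$ in Theorem \ref{t1.2} can be checked mechanically. The following sketch (illustration only, not code from the book) uses that $\det S(\uuuu{x})=1$ implies $p_{ch,M}(t)=\det(tS-S^t)$, so no matrix inversion is needed; two cubic polynomials agreeing at four points are equal. The triples are read off from the matrices in Example \ref{t1.1} (with $l=5$ in the series $S(-l,2,-l)$).

```python
# Sketch (not from the book): since det S(x) = 1, the characteristic
# polynomial of M = S^{-1} S^t equals det(t*S - S^t); we compare it with
# (t-1)(t^2 - (2 - r(x)) t + 1) at four points, which suffices for cubics.
def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def r(x1, x2, x3):
    return x1*x1 + x2*x2 + x3*x3 - x1*x2*x3

def check(x1, x2, x3):
    S = [[1, x1, x2], [0, 1, x3], [0, 0, 1]]
    for t in (0, 1, 2, 3):
        A = [[t*S[i][j] - S[j][i] for j in range(3)] for i in range(3)]
        assert det3(A) == (t - 1) * (t*t - (2 - r(x1, x2, x3)) * t + 1)

triples = [(0, 0, 0), (-3, 3, -3), (-1, 0, 0), (-1, 0, -1),
           (-2, 0, 0), (-1, -1, -1), (-2, 2, -2), (-5, 2, -5), (2, 5, -7)]
for x in triples:
    check(*x)
# r-values of A_1^3, P^2, A_2A_1, A_3, P^1A_1, widehat A_2, H_{1,2}, S(-5,2,-5)
assert [r(*x) for x in triples[:8]] == [0, 0, 1, 2, 4, 4, 4, 4]
```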
The analysis leads to an intricate case discussion. The case $\HH_{1,2}$ is different from all other cases as it is the only case where $G_\Z$ is not abelian and where the subgroup $\{\pm M^m\,|\, m\in\Z\}$ does not have finite index in $G_\Z$. The automorphism $Q\in G_\Q:=\Aut(H_\Q,L)$ is defined for $r(\uuuu{x})\neq 0$. It is $\id$ on $\ker(M-\id)$ and $-\id$ on $\ker(M^2-(2-r)M+\id)$ (Definition \ref{t5.9}). It lies in $G_\Z$ only in a few cases (Theorem \ref{t5.11}). \begin{theorem}\label{t1.4} (Part of the Theorems \ref{t5.11}, \ref{t5.13}, \ref{t5.14}, \ref{t5.16}, \ref{t5.18}, \ref{t3.28}) (a) In the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $S(\HH_{1,2})$ $$G_\Z\cong SL_2(\Z)\times\{\pm 1\},\quad M=Q,$$ and the subgroup $\{\pm M^m\,|\, m\in\Z\}=\{\pm \id,\pm Q\}$ has infinite index in $G_\Z$. (b) In all other rank 3 cases the subgroup $\{\pm M^m\,|\, m\in\Z\}$ has finite index in $G_\Z$ and $G_\Z$ is abelian. Then one of the following five possibilities holds: \begin{eqnarray}\label{1.1} G_\Z&=&O_3(\Z),\\ G_\Z&=&\{\id,Q\}\times\{\pm (M^{root})^m\,|\, m\in\Z\}, \label{1.2}\\ G_\Z&=&\{\pm (M^{root})^m\,|\, m\in\Z\},\label{1.3}\\ G_\Z&=&\{\id,Q\}\times\{\pm M^m\,|\, m\in\Z\},\label{1.4}\\ G_\Z&=&\{\pm M^m\,|\, m\in\Z\},\label{1.5} \end{eqnarray} where $M^{root}$ is a root of $\pm M$ or of $MQ$. The following table gives the index $[G_\Z:\{\pm M^m\,|\, m\in\Z\}]\in\N$ and information on $M^{root}$.
\begin{eqnarray*} \begin{array}{llll} & \textup{matrix} & \textup{index} & M^{root} \\ \hline \eqref{1.1} & S(A_1^3) & 24 & \\ \hline \eqref{1.2} & S(x,0,0)\textup{ with }x<0 & 4 & (M^{root})^2=MQ \\ \eqref{1.2} & S(-l,2,-l)\textup{ with }l\textup{ even} & l^2-4 & (M^{root})^{l^2/2-2}=MQ \\ \eqref{1.2} & S(4,4,4)\textup{ and }S(5,5,5) & 6 & (M^{root})^3=-M \\ \eqref{1.2} & S(4,4,8) & 4 & (M^{root})^2=M \\ \hline \eqref{1.3} & S(\P^2) & 3 & (M^{root})^3=M \\ \eqref{1.3} & S(\whh{A}_2) \textup{ and }S(x,x,x)& 3 & (M^{root})^3=-M \\ & \textup{with }x\in\Z-\{-1,0,...,5\} & & \\ \eqref{1.3} & S(-l,2,-l) \textup{ with }l\textup{ odd} & l^2-4 & (M^{root})^{l^2-4}=-M \\ \eqref{1.3} & S(2y,2y,2y^2) \textup{ with }y\in\Z_{\geq 3} & 2 & (M^{root})^2=M \\ \hline \eqref{1.4} & S(3,3,4)\textup{ and }S(x,x,0) & 2 & \\ & \textup{with }x\in\Z_{\geq 2} & & \\ \hline \eqref{1.5} & S(A_3)\textup{ and }S(\uuuu{x}) & 1 & \\ & \textup{in other }\Br_3\ltimes\{\pm 1\}^3\textup{ orbits} & & \end{array} \end{eqnarray*} (c) $G_\Z=G_\Z^{\BB}$ holds for all rank 3 cases except four cases, the $\Br_3\ltimes\{\pm 1\}^3$ orbits of $S(\uuuu{x})$ with $$\uuuu{x}\in\{(3,3,4),(4,4,4),(5,5,5),(4,4,8)\}.$$ In these four cases $Q\in G_\Z-G_\Z^{\BB}$. \end{theorem} However, in higher rank it is easier to construct matrices $S$ with $G_\Z^\BB\subsetneqq G_\Z$ (Remarks \ref{t3.29}). Chapter \ref{s6} studies the even and odd monodromy groups and the sets of even and of odd vanishing cycles in the rank 2 and rank 3 cases. The following theorem captures some of the results on the even monodromy group $\Gamma^{(0)}$ and the set $\Delta^{(0)}$ of even vanishing cycles for the rank 3 cases. The group $O^{(0),*}$ (Definition \ref{t6.4}) is a certain subgroup of $O^{(0)}$ which is determined by $(H_\Z,I^{(0)})$ alone (so independently of $\BB$). Part (b) discusses only the (in general difficult) problem whether $\Delta^{(0)}=R^{(0)}$ or $\Delta^{(0)}\subsetneqq R^{(0)}$.
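The claim $M=Q$ in the $\HH_{1,2}$ case of Theorem \ref{t1.4} (a) has a quick concrete verification. For $S=S(\HH_{1,2})=S(-2,2,-2)$ one has $r=4$, so $Q$ is $\id$ on $\ker(M-\id)$ and $-\id$ on $\ker(M^2+2M+\id)$; hence it suffices to see that $M$ is an involution with trace $-1$ (eigenvalues $1,-1,-1$), for then $H_\Q=\ker(M-\id)\oplus\ker(M+\id)$ and $M$ acts on both summands as $Q$ does. A sketch (not code from the book):

```python
# Sketch (not from the book): M = S^{-1} S^t for S = S(H_{1,2}) = S(-2,2,-2)
# is an involution with trace -1, hence has eigenvalues 1, -1, -1 and
# coincides with Q.
def mul3(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(3)) for j in range(3)]
            for i in range(3)]

E3   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
S    = [[1, -2, 2], [0, 1, -2], [0, 0, 1]]   # S(H_{1,2})
Sinv = [[1, 2, 2], [0, 1, 2], [0, 0, 1]]     # inverse of S, written down directly
St   = [[1, 0, 0], [-2, 1, 0], [2, -2, 1]]   # transpose of S
assert mul3(S, Sinv) == E3                   # double-check the inverse

M = mul3(Sinv, St)                           # the monodromy matrix
assert mul3(M, M) == E3                      # M^2 = id
assert M[0][0] + M[1][1] + M[2][2] == -1     # trace -1, so eigenvalues 1,-1,-1
```

Since $S$ is unitriangular, its inverse is integral, so the whole computation stays over $\Z$.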
Theorem \ref{t6.14} contains much more information on $\Delta^{(0)}$. Theorem \ref{t6.11} contains much more information on $\Gamma^{(0)}$ than part (a) below. Remarkably, $\Gamma^{(0)}\cong G^{fCox,3}$ (the free Coxeter group with three generators) holds not only in the Coxeter cases $\uuuu{x}\in\Z^3_{\leq -2}$ (which all satisfy $r(\uuuu{x})>4$), but also in all cases $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})<0$ and in the case $\P^2$. \begin{theorem}\label{t1.5} (a) (Part of Lemma \ref{t2.11} and Theorem \ref{t6.11}) (i) (Part of Lemma \ref{t2.11}) The case $A_1^n$, $n\in\N$: \begin{eqnarray*} \Gamma^{(0)}\cong \{\pm 1\}^n,\quad \Gamma^{(1)}=\{\id\},\quad \Delta^{(0)}=R^{(0)}=\Delta^{(1)}=\{\pm e_1,...,\pm e_n\}. \end{eqnarray*} (ii) The cases with $r(\uuuu{x})>0$ and the cases $A_3,\whh{A}_2,A_2A_1,\P^1A_1$: They contain all reducible rank 3 cases except $A_1^3$. Then $\Gamma^{(0)}$ is a Coxeter group. If $\uuuu{x}\in\Z_{\leq -2}^3$ then $\Gamma^{(0)}\cong G^{fCox,3}$. (iii) The cases $A_3,\whh{A}_2,\HH_{1,2}$: Then $\Gamma^{(0)}= O^{(0),*}$. (iv) The cases $S(-l,2,-l)$ with $l\geq 3$: Then $\Gamma^{(0)}\stackrel{1:l}{\subset} O^{(0),*}.$ (v) The cases $\P^2$ and $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})<0$: Then $\Gamma^{(0)}\cong G^{fCox,3}$. (b) (Part of Theorem \ref{t6.14}) (i) $\Delta^{(0)}=R^{(0)}$ holds in the following cases: $A_3,\whh{A}_2,\P^2$, all $S(\uuuu{x})$ with $\uuuu{x}\in\{0,-1,-2\}^3$, all reducible cases. (ii) $\Delta^{(0)}\subsetneqq R^{(0)}$ holds in the following cases: $\HH_{1,2}$, all $S(-l,2,-l)$ with $l\geq 3$, $S(3,3,4),S(4,4,4),S(5,5,5),S(4,4,8)$. (iii) In the cases of the other $\Br_3\ltimes\{\pm 1\}^3$ orbits in $T^{uni}_3(\Z)$, we do not know whether $\Delta^{(0)}=R^{(0)}$ or $\Delta^{(0)}\subsetneqq R^{(0)}$ holds. \end{theorem} Let $E_n$ denote the $n\times n$ unit matrix.
Given $S\in T^{uni}_n(\Z)$ with associated triple $(H_\Z,L,\uuuu{e})$, consider the matrix $\www{S}:=2E_n-S\in T^{uni}_n(\Z)$ with the associated triple $(H_\Z,\www{L},\uuuu{e})$. Then $\www{L}$, $\www{I}^{(0)}$ and $\www{M}$ are far from $L$, $I^{(0)}$ and $M$, but $\www{I}^{(1)}=-I^{(1)}$, $\www{\Gamma}^{(1)}=\Gamma^{(1)}$ and $\www{\Delta}^{(1)}=\Delta^{(1)}$ (see the Remarks \ref{t4.17}). For example the cases $A_3$ and $\whh{A}_2$ are related in this way, and also the Coxeter case $(-2,-2,-2)$ and the case $\HH_{1,2}$ are related in this way (in both cases after an action of $\Br_3\ltimes\{\pm 1\}^3$). In the rank 3 cases this motivates considering the action of the bigger group $(G^{phi}\ltimes\www{G}^{sign})\rtimes\langle\gamma\rangle$ on $\Z^3$ where $\www{G}^{sign}$ is generated by $G^{sign}$ and the total sign change $\delta^\R:\uuuu{x}\mapsto -\uuuu{x}$. Lemma \ref{t4.18} gives representatives for all orbits of this action on $\Z^3$ (respectively $T^{uni}_3(\Z)$). Still, for a given triple $\uuuu{x}\in\Z^3$ it is difficult to see in which orbit it lies. For some time we had the hope that the beautiful facts on the even monodromy group $\Gamma^{(0)}$ for the Coxeter cases $\uuuu{x}\in\Z^3_{\leq 0}$ would have analogues for the odd monodromy group $\Gamma^{(1)}$, but this does not hold in general. In the case $(-2,-2,-2)$ we have $\Gamma^{(0)}\cong G^{fCox,3}$, but not in the case $\HH_{1,2}$, and in both cases $\Gamma^{(1)}\not\cong G^{free,3}$. On the other hand, $\Gamma^{(1)}\cong G^{free,3}$ for $\uuuu{x}\in B_1$, where $B_1\subset\Z^3$ is as follows. \begin{eqnarray*} B_1&:=& (G^{phi}\ltimes \www{G}^{sign})\rtimes \langle\gamma\rangle (\{\uuuu{x}\in\Z^3-\{(0,0,0)\} \,|\, r(\uuuu{x})\leq 0 \}),\\ B_2&:=& \{\uuuu{x}\in\Z^3-\{(0,0,0)\}\,|\, S(\uuuu{x}) \textup{ is reducible}\},\\ B_3&:=& \{(0,0,0)\}. \end{eqnarray*} However, the set $B_1$ is difficult to understand (see the Examples \ref{t4.20}). It contains $(3,3,3)$, so the orbit of $\P^2$.
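The elementary core of the lift-dual relation $\www{S}=2E_n-S$ above can be verified in one line from the definitions in section \ref{s1.1} (a sketch; the full statements are part of the Remarks \ref{t4.17}):

```latex
% Gram matrices with respect to the basis e:
\www{S}-\www{S}^t=(2E_n-S)-(2E_n-S^t)=-(S-S^t),
\qquad\textup{so}\qquad \www{I}^{(1)}=-I^{(1)}.
% Since I^{(1)}(a,a)=0, the odd transvections of the two triples are
% mutually inverse:
\www{s}^{(1)}_a(b)=b+I^{(1)}(a,b)a=\bigl(s^{(1)}_a\bigr)^{-1}(b).
```

Hence $\www{\Gamma}^{(1)}$ is generated by the inverses of the generators of $\Gamma^{(1)}$, so $\www{\Gamma}^{(1)}=\Gamma^{(1)}$, and then $\www{\Delta}^{(1)}=\www{\Gamma}^{(1)}\{\pm e_1,...,\pm e_n\}=\Gamma^{(1)}\{\pm e_1,...,\pm e_n\}=\Delta^{(1)}$.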
$B_2\cup B_3$ consists of the triples $\uuuu{x}$ with reducible $S(\uuuu{x})$, so with two or three zero entries. Consider $\uuuu{x}\in\Z^3-B_3$. The radical $\Rad I^{(1)}$ has rank 1, so the quotient lattice $\oooo{H_\Z}^{(1)}:= H_\Z/\Rad I^{(1)}$ has rank 2. Denote by $\Gamma^{(1)}_s$ the image of $\Gamma^{(1)}$ under the natural homomorphism $\Gamma^{(1)}\to \Aut(\oooo{H_\Z}^{(1)})$ and by $\Gamma^{(1)}_u$ its kernel. There is an exact sequence $$\{1\}\to\Gamma^{(1)}_u\to \Gamma^{(1)}\to \Gamma^{(1)}_s\to\{1\}.$$ Denote by $\oooo{\Delta^{(1)}}\subset \oooo{H_\Z}^{(1)}$ the image of $\Delta^{(1)}$ in $\oooo{H_\Z}^{(1)}$. Often $\oooo{\Delta^{(1)}}$ is easier to describe than $\Delta^{(1)}$. The long Theorems \ref{t6.18} and \ref{t6.21} offer detailed results about $\Gamma^{(1)}$ and $\Delta^{(1)}$ for the representatives in Lemma \ref{t4.18} of the $(G^{phi}\ltimes\www{G}^{sign})\rtimes\langle\gamma\rangle$ orbits in $\Z^3$. The next theorem gives only a rough impression. \begin{theorem}\label{t1.6} Consider $S=S(\uuuu{x})\in T^{uni}_3(\Z)$ and the associated triple $(H_\Z,L,\uuuu{e})$. (a) (Part of Theorem \ref{t6.18}) Consider $\uuuu{x}\neq (0,0,0)$. \begin{eqnarray*} \Gamma^{(1)}\cong G^{free,3}&\iff& \uuuu{x}\in B_1,\\ \Gamma^{(1)}_u=\{\id\}&\iff& \uuuu{x}\in B_1\cup B_2,\\ \Gamma^{(1)}_u\cong\Z^2&\iff& \uuuu{x}\in\Z^3- (B_1\cup B_2\cup B_3), \end{eqnarray*} \begin{eqnarray*} \Gamma^{(1)}_s&\cong& \textup{one of the groups } SL_2(\Z),G^{free,2},G^{free,2}\times\{\pm 1\}\\ &&\textup{for }\uuuu{x}\in \Z^3-(B_1\cup B_3). \end{eqnarray*} (b) (Part of Theorem \ref{t6.21}) (i) In the cases of $A_3$ and $\whh{A}_2$, $\oooo{\Delta^{(1)}}=\oooo{H_\Z}^{(1),prim}$, so $\oooo{\Delta^{(1)}}$ is the set of primitive vectors in $\oooo{H_\Z}^{(1)}$, and $\Delta^{(1)}$ is the full preimage in $H_\Z$ of $\oooo{\Delta^{(1)}}$.
(ii) In many other cases, however, $\oooo{\Delta^{(1)}}\not\subset \oooo{H_\Z}^{(1),prim}$, and $\Delta^{(1)}$ is not the full preimage in $H_\Z$ of $\oooo{\Delta^{(1)}}$; instead, each fiber of the map $\Delta^{(1)}\to\oooo{\Delta^{(1)}}$ has infinitely many elements. (iii) But for $\uuuu{x}\in B_1$ the map $\Delta^{(1)}\to\oooo{\Delta^{(1)}}$ is a bijection. In particular, for $\P^2$ the set $\oooo{\Delta^{(1)}}$ is easy to describe (Theorem \ref{t6.21} (h)), but $\Delta^{(1)}$ is not. \end{theorem} Chapter \ref{s7} studies the set $\BB^{dist}=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})$ of distinguished bases for a given triple $(H_\Z,L,\uuuu{e})$. In general, it is difficult to characterize this orbit in easy terms. We know that the inclusions in \eqref{3.3} and \eqref{3.4} hold. We are interested in when they are equalities. \begin{eqnarray*} \BB^{dist}\subset\{\uuuu{v}\in(\Delta^{(0)})^n\,|\, s_{v_1}^{(0)}...s_{v_n}^{(0)}=-M\},\hspace*{2cm}(3.3)\\ \BB^{dist}\subset\{\uuuu{v}\in(\Delta^{(1)})^n\,|\, s_{v_1}^{(1)}...s_{v_n}^{(1)}=M\}.\hspace*{2.4cm}(3.4) \end{eqnarray*} In general, this is a difficult question. In the rank 3 cases, Theorem \ref{t7.3} and Theorem \ref{t7.7} give our results for \eqref{3.3} and \eqref{3.4}. \begin{theorem}\label{t1.7} Consider $S(\uuuu{x})\in T^{uni}_3(\Z)$ and the associated triple $(H_\Z,L,\uuuu{e})$. (a) (Part of Theorem \ref{t7.3}) \eqref{3.3} is an equality in all cases except for $\uuuu{x}$ in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $\HH_{1,2}$. There, the right hand side of \eqref{3.3} consists of countably many $\Br_3\ltimes\{\pm 1\}^3$ orbits. (b) (Part of Theorem \ref{t7.7}) (i) The inclusion in \eqref{3.4} is an equality $\iff\uuuu{x}\in B_1\cup B_2$. (ii) The cases $A_3$, $\whh{A}_2$, $\HH_{1,2}$ and $S(-l,2,l)$ with $l\geq 3$ are not in $B_1$. But there the inclusion in \eqref{3.4} becomes an equality if one adds the condition $\sum_{i=1}^n \Z v_i=H_\Z$ on the right hand side of \eqref{3.4}.
\end{theorem} The last section \ref{s7.4} of chapter \ref{s7} builds on Theorem \ref{t4.16}, which determines for a representative $S\in T^{uni}_3(\Z)$ of each $\Br_3\ltimes\{\pm 1\}^3$ orbit in $T^{uni}_3(\Z)$ the stabilizer $(\Br_3)_{S/\{\pm 1\}^3}$. Theorem \ref{t7.11} determines in each of these cases the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. The graphs $\GG_1,...,\GG_{14}$ in section \ref{s4.4} use the group $G^{phi}\rtimes\langle\gamma\rangle$. At the end of section \ref{s7.4} different graphs, which use the group $\Br_3$, are introduced for the orbits of matrices as well as the orbits of triangular bases. For the cases of finite orbits and for the case $\whh{A}_2$ the graphs are given explicitly. In the case of $A_3$ the orbit $\SSS^{dist}/\{\pm 1\}^3$ has four elements and the orbit $\BB^{dist}/\{\pm 1\}^3$ has 16 elements. The stabilizers $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ and $(\Br_3)_{S/\{\pm 1\}^3}$ are used in section \ref{s9} for the construction of the manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ in the rank 3 cases. Recall that they are obtained by gluing Stokes regions, one for each element of an orbit $\BB^{dist}/\{\pm 1\}^3$ or $\SSS^{dist}/\{\pm 1\}^3$. Theorem \ref{t9.3} gives complete results; it covers all cases. Remarkably, in all cases where $\Gamma^{(0)}\cong G^{fCox,3}$ or $\Gamma^{(1)}\cong G^{free,3}$, the manifold $C_3^{\uuuu{e}/\{\pm 1\}^3}$ is just $C_3^{univ}\cong\C^2\times\H$. In particular, this holds for the case $\P^2$. In this respect, the case $\P^2$ is easier than the case $A_3$, where $C_3^{\uuuu{e}/\{\pm 1\}^3}$ consists of 16 Stokes regions and is isomorphic to $\C^3$ minus two hypersurfaces (a caustic and a Maxwell stratum). Corollary \ref{t9.8} discusses natural partial compactifications. Section \ref{s9.1} prepares this by giving explicitly deck transformations of $C_3^{univ}$ which generate the group $\Br_3$ of deck transformations of the covering $\pr_3^{u,c}: C_3^{univ}\to C_3^{conf}$.
The formulas for the deck transformations use a Schwarzian triangle function which is equivalent to the $\lambda$-function, and a certain lift of the logarithm. Chapter \ref{s8} gives general background for these manifolds: the configuration space $C_n^{conf}$ of $\Br_n$ and the related manifolds $C_n^{pure}$ and $C_n^{univ}$, the notion of a (semisimple) F-manifold with Euler field, distinguished systems of paths, two a priori different, but closely related braid group actions on (homotopy classes of) distinguished systems of paths, and a construction of $\Z$-lattice bundles on $C_n^{univ}$, $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$. They are the gateway to more transcendental structures on these manifolds, like Dubrovin-Frobenius manifolds. We refer to the beginning of section \ref{s10} for an introduction to its material, which presents known results in the case of isolated hypersurface singularities. Appendix \ref{sa} recalls properties of the hyperbolic plane and subgroups of its group of isometries. The upper half plane, M\"obius transformations from matrices in $SL_2(\R)$ and hyperbolic polygons are discussed. Three special cases of the Poincar\'e-Maskit theorem are made explicit. Later, a model of the hyperbolic plane via a cone of positive vectors in $\R^3$ with a metric of signature $(+--)$ is also explained. This connects groups of $3\times 3$ matrices with groups of isometries. Appendix \ref{sa} is mainly used in chapter \ref{s6} for the determination of some groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$, but also in chapter \ref{s4} for the group $G^{phi}$. Appendix \ref{sb} is used for the explicit formulas for some generating deck transformations of the covering $\pr_3^{u,c}:C_3^{univ}\to C_3^{conf}$ in section \ref{s9.1}. It recalls properties of the principal congruence subgroups $\Gamma(n)\subset SL_2(\Z)$ with $n\in\{2,3,4,5\}$, with special emphasis on the most important case $n=2$.
In this case, the Schwarzian triangle function $T:\H\to \C-\{0;1\}\cong\H/\Gamma(2)$, which we use, is obtained by composing the $\lambda$-function with some automorphism of $\C-\{0;1\}$. Later we introduce a lift $\kappa:\H\to\C$ of the restriction $\ln:\C-\{0;1\}\dashrightarrow\C$ of the logarithm $\ln:\C-\{0\}\dashrightarrow\C$, lifting it via $T$. Appendix \ref{sc} contains Lemma \ref{tc.1} which determines the unit groups in two families of rings of quadratic algebraic integers. The lemma is used in many places, especially in chapter \ref{s5} for the determination of the groups $G_\Z,G_\Z^{(0)},G_\Z^{(1)},G_\Z^M$. In order to present an elegant proof of Lemma \ref{t6.1}, we use the theory of continued fractions. The results are certainly known, but we did not find a completely satisfying reference. Appendix \ref{sd} is also used in chapter \ref{s5}. It studies powers of units in rings of quadratic algebraic integers. {\bf Acknowledgements.} We thank Martin Guest and Atsushi Takahashi for discussions on topics related to this book. \chapter{Bilinear lattices and induced structures}\label{s2} \setcounter{equation}{0} \setcounter{figure}{0} This chapter fixes the basic notions: a bilinear lattice and its associated data, namely a Seifert form, an even and an odd intersection form, a monodromy, the roots, the triangular bases, an even and an odd monodromy group, and the even and the odd vanishing cycles. The notion of a bilinear lattice and the even part of the associated data are considered in \cite{HK16}. The more special case of a unimodular bilinear lattice with even and odd data has long been considered in singularity theory \cite{AGV88}\cite{Eb01}. In this paper we are mainly interested in unimodular bilinear lattices. Only this chapter \ref{s2} treats the general case, partially following \cite{HK16}. \begin{notations}\label{t2.1} In these notations, $R$ will be either the ring $\Z$ or one of the fields $\Q$, $\R$ or $\C$.
Later we will work mainly with $\Z$. If $R=\Z$ write $\www{R}:=\Q$, else write $\www{R}:=R$. In the whole paper, \index{$H_R$} $H_R\supsetneqq\{0\}$ is a finitely generated free $R$-module, so a $\Z$-lattice\index{$\Z$-lattice} if $R=\Z$, and a finite dimensional $R$-vector space if $R$ is $\Q$, $\R$ or $\C$. Its rank will usually be called $n\in\N=\{1,2,3,...\}$ \index{$\N=\{1,2,3,...\}$} (it is its dimension if $R$ is $\Q$, $\R$ or $\C$). If $R_1$ and $R_2$ are both in the list $\Z,\Q,\R,\C$ and $R_1$ is left of $R_2$ and $H_{R_1}$ is given, then $H_{R_2}:=H_{R_1}\otimes_{R_1}R_2$. In the whole paper, $L:H_R\times H_R\to R$ will be a nondegenerate $R$-bilinear form. If $U\subset H_R$ is an $R$-submodule, then $U^{\perp}:=\{b\in H_R\,|\, L(U,b)=0\}$ and ${}^{\perp}U:=\{a\in H_R\,|\, L(a,U)=0\}$. In the case $R=\Z$, $U^{\perp}$ and ${}^{\perp}U$ are obviously primitive $\Z$-submodules of $H_\Z$. In Lemma \ref{t2.2} we will start with $H_R$ and a symmetric $R$-bilinear form $I^{[0]}:H_R\times H_R\to R$ or a skew-symmetric $R$-bilinear form $I^{[1]}:H_R\times H_R\to R$. With the square brackets in the index we distinguish them from the bilinear forms $I^{(0)}$ and $I^{(1)}$, which are induced in Definition \ref{t2.3} by a given bilinear form $L$. Later, however, they will be identified. Suppose that $M:H_R\to H_R$ is an automorphism. Then $M_s,M_u,N:H_{\www{R}}\to H_{\www{R}}$ denote the semisimple part\index{semisimple part}, the unipotent part\index{unipotent part} and the \index{nilpotent part}nilpotent part of $M$ with $M=M_sM_u=M_uM_s$ and $N=\log M_u,e^N=M_u$. Denote $H_\lambda:=\ker(M_s-\lambda\cdot \id):H_\C\to H_\C$. For $m\in\N$ denote by $\Phi_m\in\Z[t]$ \index{$\Phi_m$} the cyclotomic polynomial \index{cyclotomic polynomial} whose zeros are the primitive $m$-th roots of unity. \end{notations} The following lemma is elementary and classical. We skip the proof.
\begin{lemma}\label{t2.2} Let $R\in\{\Q,\R,\C\}$ and let $H_R$ be an $R$-vector space of dimension $n\in\N$. (a) Let $I^{[0]}:H_R\times H_R\to R$ be a symmetric bilinear form. Consider $a\in H_R$ with $I^{[0]}(a,a)\neq 0$. The map \begin{eqnarray*} s_a^{[0]}:H_R\to H_R,\quad s_a^{[0]}(b):=b-\frac{2I^{[0]}(a,b)}{I^{[0]}(a,a)}a, \end{eqnarray*} is a {\sf \index{reflection}reflection}, so it is in $\Aut(H_R,I^{[0]})$, it fixes the codimension 1 subspace $\{b\in H_R\,|\, I^{[0]}(a,b)=0\}$ and it maps $a$ to $-a$. In particular $(s_a^{[0]})^2=\id$. (b) Let $I^{[1]}:H_R\times H_R\to R$ be a skew-symmetric bilinear form. Consider $a\in H_R$. The map \begin{eqnarray*} s_a^{[1]}:H_R\to H_R,\quad s_a^{[1]}(b):=b-I^{[1]}(a,b)a, \end{eqnarray*} is in $\Aut(H_R,I^{[1]})$ with $$(s_a^{[1]})^{-1}(b)=b+I^{[1]}(a,b)a.$$ It is $\id$ if $a\in \Rad(I^{[1]})$. If $a\notin \Rad(I^{[1]})$ then it fixes the codimension 1 subspace $\{b\in H_R\,|\, I^{[1]}(a,b)=0\}$, and $s_a^{[1]}-\id$ is nilpotent with a single $2\times 2$ Jordan block. In this case it is called a {\sf \index{transvection}transvection}. (c) Fix $k\in\{0;1\}$ and consider $I^{[k]}$ as in (a) or (b). An element $g\in\Aut(H_R,I^{[k]})$ and an element $a\in H_R$, with $I^{[0]}(a,a)\neq 0$ if $k=0$, satisfy \begin{eqnarray*} g\, s_a^{[k]}\, g^{-1}=s_{g(a)}^{[k]}. \end{eqnarray*} \end{lemma} \begin{definition}\label{t2.3} (a) \cite[ch. 2]{HK16} A {\it \index{bilinear lattice}bilinear lattice} is a pair \index{$H_\Z$}\index{$(H_\Z,L)$} $(H_\Z,L)$ with $H_\Z$ a $\Z$-lattice of some rank $n\in\N$ together with a nondegenerate bilinear form $L:H_\Z\times H_\Z\to \Z$. \index{$L$: Seifert form} If $\det L(\uuuu{e}^t,\uuuu{e})=1$ for some $\Z$-basis $\uuuu{e}=(e_1,...,e_n)$ of $H_\Z$ then $L$ and the pair $(H_\Z,L)$ are called {\it \index{unimodular bilinear lattice}unimodular}. The bilinear form is called {\it \index{Seifert form}Seifert form} in this paper. (b) A bilinear lattice induces several structures: \begin{list}{}{} \item[(i)] \cite[ch.
2]{HK16} A symmetric bilinear form \begin{eqnarray*} I^{(0)}=L^t+L:H_\Z\times H_\Z\to\Z,\quad \textup{so }I^{(0)}(a,b)=L(b,a)+L(a,b), \end{eqnarray*} \index{$I^{(0)},\ I^{(1)}$} which is called {\it \index{even intersection form}even intersection form}. \item[(ii)] A skew-symmetric bilinear form \begin{eqnarray*} I^{(1)}=L^t-L:H_\Z\times H_\Z\to\Z,\quad \textup{so }I^{(1)}(a,b)=L(b,a)-L(a,b), \end{eqnarray*} which is called {\it \index{odd intersection form}odd intersection form}. \item[(iii)] \cite[ch. 2]{HK16} An automorphism $M:H_\Q\to H_\Q$ which is defined by \index{$M$: monodromy} \begin{eqnarray*} L(Ma,b)=L(b,a)\quad\textup{for }a,b\in H_\Q, \end{eqnarray*} which is called {\it \index{monodromy}monodromy}. \item[(iv)] Six \index{automorphism group}automorphism groups \index{$O^{(0)},\ O^{(1)}$} \index{$G_\Z^M,\ G_\Z^{(0)},\ G_\Z^{(1)},\ G_\Z$} \begin{eqnarray*} O^{(k)}&:=& \Aut(H_\Z,I^{(k)})\qquad\textup{for }k\in\{0;1\},\\ G_\Z^M&:=& \Aut(H_\Z,M):=\{g:H_\Z\to H_\Z \textup{ automorphism }\,|\, gM=Mg\},\\ G_\Z^{(k)}&:=& \Aut(H_\Z,I^{(k)},M)=O^{(k)}\cap G_\Z^M \qquad\textup{for }k\in\{0;1\},\\ G_\Z&:=&\Aut(H_\Z,L). \end{eqnarray*} \item[(v)] \cite[ch. 2]{HK16} The set $R^{(0)}\subset H_\Z$ of \index{$R^{(0)}$}{\it \index{root}roots}, \begin{eqnarray*} R^{(0)}:=\{a\in H_\Z\,|\, L(a,a)>0; \frac{L(a,b)}{L(a,a)},\frac{L(b,a)}{L(a,a)}\in \Z \textup{ for all }b\in H_\Z\}. \end{eqnarray*} \item[(vi)] \cite[ch. 2]{HK16} The set $\BB^{tri}$ \index{$\BB^{tri}$} of {\it \index{triangular basis}triangular bases}, \begin{eqnarray*} \BB^{tri}:=\{\uuuu{e}=(e_1,...,e_n)\in (R^{(0)})^n\,|\, \bigoplus_{i=1}^n\Z e_i=H_\Z, L(e_i,e_j)=0\textup{ for }i<j\}. \end{eqnarray*} \end{list} (c) Let $n\in\N$ and $R\in\{\Z,\Q,\R,\C\}$.
The sets $T^{tri}_n$ and $T^{uni}_n(R)$ of \index{$T^{uni}_n(\Z)$} \index{upper triangular matrix}upper triangular matrices are defined by \begin{eqnarray*} T^{uni}_n(R):= \{S=(s_{ij})\in M_{n\times n}(R)&|& s_{ii}=1,s_{ij}=0\textup{ for }i>j\},\\ T^{tri}_n:= \{S=(s_{ij})\in M_{n\times n}(\Z)&|& s_{ii}\in \N, s_{ij}=0\textup{ for }i>j, \\ && \frac{s_{ij}}{s_{ii}},\frac{s_{ji}}{s_{ii}}\in\Z \textup{ for }i\neq j\}. \end{eqnarray*} Obviously $T^{uni}_n(\Z)\subset T^{tri}_n$. \end{definition} \begin{remarks}\label{t2.4} (i) There are bilinear lattices with $\BB^{tri}=\emptyset$. We are interested only in bilinear lattices with $\BB^{tri}\neq\emptyset$. (ii) A triangular basis $\uuuu{e}\in\BB^{tri}$ is called in \cite{HK16} a {\it complete exceptional sequence}. (iii) In the case $\BB^{tri}\neq\emptyset$, \cite{HK16} considers the bilinear form $L^t$ (with $L^t(a,b)=L(b,a)$). Our choice $L$ is motivated by singularity theory. Also the names for $L$, $I^{(0)}$, $I^{(1)}$ and $M$, namely {\it Seifert form, even intersection form, odd intersection form} and {\it monodromy} are motivated by singularity theory. The roots in $R^{(0)}$ are in \cite{HK16} also called {\it pseudo-real roots}. (iv) In this paper we are mainly interested in the cases of unimodular bilinear lattices with $\BB^{tri}\neq\emptyset$. Singularity theory leads to such cases. (v) \cite{HK16} is mainly interested in the cases of {\it \index{generalized Cartan lattice}generalized Cartan lattices}. A generalized Cartan lattice is a triple $(H_\Z,L,\uuuu{e})$ with $(H_\Z,L)$ a bilinear lattice and $\uuuu{e}\in \BB^{tri}$ with $L(e_i,e_j)\leq 0$ for $i>j$. \end{remarks} \begin{remarks}\label{t2.5} (i) The classification of pairs $(H_\R,L)$ and pairs $(H_\C,L)$ with $L$ a nondegenerate bilinear form on $H_\R$ respectively $H_\C$ is well understood. Such a pair decomposes into an orthogonal sum of irreducible pairs. 
This and the classification of the irreducible pairs over $\R$ are carried out in \cite{Ne98} and, more explicitly, in \cite{BH19}. In both references it is also proved that a pair $(H_\R,L)$ of rank $n\in\N$ is, up to isomorphism, uniquely determined by an unordered tuple of $n$ spectral pairs modulo $2\Z$, i.e. by $n$ pairs $([\alpha_1],l_1),...,([\alpha_n],l_n) \in\R/2\Z\times\Z$. Here $\alpha_1,...,\alpha_n\in\R$. The eigenvalues of the monodromy $M$ are the numbers $e^{-2\pi i\alpha_1},...,e^{-2\pi i \alpha_n}$. The numbers $l_1,...,l_n$ determine the Jordan block structure, see \cite{BH19} for details. The classification over $\C$ follows easily. It was, however, carried out earlier in \cite{Go94-1}\cite{Go94-2}, and it is also formulated in \cite[Theorem 4.22]{CDG24}. (ii) A unimodular bilinear lattice $(H_\Z,L)$ is called in \cite{CDG24} a {\it Mukai pair}. In \cite[4.1--4.4]{CDG24} basic results of Gorodentsev for $R=\Z$ or $R=\C$ are rewritten. The monodromy is there called {\it canonical operator}. A triangular basis is there called {\it exceptional}. (iii) The classification over $\Z$, so of unimodular bilinear lattices $(H_\Z,L)$, is wide open for larger $n$. The case $n=3$ is treated in great detail in this book. \end{remarks} \begin{lemma}\label{t2.6} (a) Let $(H_\Z,L)$ be a bilinear lattice of rank $n\in\N$. \begin{list}{}{} \item[(i)] Let $\uuuu{e}=(e_1,...,e_n)$ be a $\Z$-basis of $H_\Z$. Define $S:=L^t(\uuuu{e}^t,\uuuu{e})=L(\uuuu{e}^t,\uuuu{e})^t \in M_{n\times n}(\Z)\cap GL_n(\Q)$. Then \begin{eqnarray*} I^{(0)}(\uuuu{e}^t,\uuuu{e})=S+S^t,\ I^{(1)}(\uuuu{e}^t,\uuuu{e})=S-S^t,\ M(\uuuu{e})=\uuuu{e}S^{-1}S^t. \end{eqnarray*} \item[(ii)] \begin{eqnarray*} I^{(0)}(a,b)=L((M+\id)a,b),\quad \Rad I^{(0)}=\ker((M+\id):H_\Z\to H_\Z),\\ I^{(1)}(a,b)=L((M-\id)a,b),\quad \Rad I^{(1)}=\ker((M-\id):H_\Z\to H_\Z).
\end{eqnarray*} \item[(iii)] \begin{eqnarray*} G_\Z=\Aut(H_\Z,L,I^{(0)},I^{(1)},M)\subset \left\{\begin{array}{c}G^{(0)}_\Z\\ G^{(1)}_\Z\end{array} \right\}\subset G^M_\Z. \end{eqnarray*} \item[(iv)] $M\in G_\Z$ if $(H_\Z,L)$ is unimodular or if $\BB^{tri}\neq\emptyset$. \item[(v)] If $a\in R^{(0)}$ then \index{$s_a^{(0)},\ s_a^{(1)}$} \begin{eqnarray*} s_a^{(0)}:=s_a^{[0]}\textup{ and } s_a^{(1)}:=s_{a/\sqrt{L(a,a)}}^{[1]},\quad \textup{so }s_a^{(1)}(b)=b-\frac{I^{(1)}(a,b)}{L(a,a)}a, \end{eqnarray*} are in $O^{(0)}$ respectively $O^{(1)}$. \item[(vi)] \cite[Lemma 2.1]{HK16} If $a,b\in R^{(0)}$ then $s_a^{(0)}(b)\in R^{(0)}$ (but not necessarily $s_a^{(1)}(b)\in R^{(0)}$). \item[(vii)] If $a,b\in R^{(0)}$ with $L(a,b)=0$ then \begin{eqnarray*} L(s_a^{(1)}b,s_a^{(1)}b)=L(b,b),\quad s_a^{(1)}(b)\in R^{(0)},\quad s_{s_a^{(1)}(b)}^{(1)}=s_a^{(1)} s_b^{(1)}(s_a^{(1)})^{-1}. \end{eqnarray*} \end{list} (b) The map \begin{eqnarray*} \{(H_\Z,L,\uuuu{e})\,|\, \begin{array}{ll}(H_\Z,L) \textup{ is a bilinear}\\ \textup{lattice of rank }n,\ \uuuu{e}\in\BB^{tri}\end{array} \}/\textup{isomorphism} \to T^{tri}_n \end{eqnarray*} is a bijection and restricts to a bijection \begin{eqnarray*} \{(H_\Z,L,\uuuu{e})\,|\, \begin{array}{ll}(H_\Z,L) \textup{ is a unimodular}\\ \textup{bilinear lattice of rank }n,\ \uuuu{e}\in\BB^{tri}\end{array} \}/\textup{isom.} \to T^{uni}_n(\Z). \end{eqnarray*} (c) \cite[Lemma 3.10]{HK16} Let $(H_\Z,L)$ be a unimodular bilinear lattice with $\BB^{tri}\neq\emptyset$. Then \begin{eqnarray*} R^{(0)}=\{a\in H_\Z\,|\, L(a,a)=1\}. \end{eqnarray*} (d) Let $(H_\Z,L)$ be a unimodular bilinear lattice with $\BB^{tri}\neq\emptyset$. Define for $a\in H_\Z$ $s_a^{(1)}:=s_a^{[1]}\in O^{(1)}$. This definition is compatible with the definition of $s_a^{(1)}$ for $a\in R^{(0)}$ in part (a) (v). 
Furthermore, now for all $a,b\in H_\Z$ $$s^{(1)}_{s^{(1)}_a(b)}=s^{(1)}_a s^{(1)}_b (s^{(1)}_a)^{-1}.$$ \end{lemma} {\bf Proof:} (a) (i) The defining equation for $M$ can be written as $L((M\uuuu{e})^t,\uuuu{e})=L(\uuuu{e}^t,\uuuu{e})^t$, which implies $M\uuuu{e}=\uuuu{e}S^{-1}S^t$. The rest is trivial. (ii) Trivial. (iii) $g\in G_\Z$ commutes with $M$ because \begin{eqnarray*} L(gMa,gb)=L(Ma,b)=L(b,a)=L(gb,ga)=L(Mga,gb). \end{eqnarray*} Of course it respects $I^{(0)}$ and $I^{(1)}$. Therefore $G_\Z=\Aut(H_\Z,L,I^{(0)},I^{(1)},M)$. The rest is trivial. (iv) The calculation $L(Ma,Mb)=L(Mb,a)=L(a,b)$ shows that $M$ respects $L$. It remains to show $M\in \Aut(H_\Z)$. This is clear if $(H_\Z,L)$ is unimodular. Suppose $\BB^{tri}\neq\emptyset$ and $(H_\Z,L)$ not unimodular. Consider $\uuuu{e}\in \BB^{tri}$, $S:=L(\uuuu{e}^t,\uuuu{e})^t\in T^{tri}_n$ and $D:=\textup{diag}(s_{11},...,s_{nn})\in M_{n\times n}(\Z)$. Then $D^{-1}S,SD^{-1}\in T^{uni}_n(\Z)$ and \begin{eqnarray*} S^{-1}S^t=S^{-1}DD^{-1}S^t=(D^{-1}S)^{-1}(SD^{-1})^t\in GL_n(\Z), \end{eqnarray*} so $M\in\Aut(H_\Z)$. (v) If $a\in R^{(0)}$ and $b\in H_\Z$ then $\frac{L(a,b)}{L(a,a)},\frac{L(b,a)}{L(a,a)}\in \Z$, so \begin{eqnarray*} \frac{2 I^{(0)}(a,b)}{I^{(0)}(a,a)},\frac{I^{(1)}(a,b)}{L(a,a)} \in\Z\textup{ and }s^{(0)}_a(b),s^{(1)}_a(b),(s^{(1)}_a)^{-1}(b) \in H_\Z. \end{eqnarray*} (vi) $L(s_a^{(0)}(b),s_a^{(0)}(b))=L(b,b)$ because $s_a^{(0)}\in O^{(0)}$ and $I^{(0)}(b,b)=2L(b,b)$ (in general $L(s_a^{(1)}(b),s_a^{(1)}(b))\neq L(b,b)$). For $c\in H_\Z$ \begin{eqnarray*} \frac{L(s_a^{(0)}(b),c)}{L(s_a^{(0)}(b),s_a^{(0)}(b))} &=& \frac{L(b-\frac{L(a,b)+L(b,a)}{L(a,a)}a,c)}{L(b,b)}\\ &=&\frac{L(b,c)}{L(b,b)} -\frac{L(a,b)+L(b,a)}{L(b,b)} \frac{L(a,c)}{L(a,a)}\in\Z, \end{eqnarray*} and analogously $\frac{L(c,s_a^{(0)}(b))}{L(s_a^{(0)}(b),s_a^{(0)}(b))}\in\Z$. (vii) $I^{(1)}(a,b)=L(b,a)$ because of $L(a,b)=0$.
\begin{eqnarray*} L(s_a^{(1)}(b),s_a^{(1)}(b)) &=& L(b-\frac{L(b,a)}{L(a,a)}a,b-\frac{L(b,a)}{L(a,a)}a)\\ &=& L(b,b)-\frac{L(b,a)}{L(a,a)}L(b,a)-0+ (-\frac{L(b,a)}{L(a,a)})^2L(a,a)\\ &=& L(b,b). \end{eqnarray*} For $c\in H_\Z$ \begin{eqnarray*} \frac{L(s_a^{(1)}(b),c)}{L(s_a^{(1)}(b),s_a^{(1)}(b))} &=& \frac{L(b-\frac{L(b,a)}{L(a,a)}a,c)}{L(b,b)}\\ &=& \frac{L(b,c)}{L(b,b)}-\frac{L(b,a)}{L(b,b)} \frac{L(a,c)}{L(a,a)}\in\Z, \end{eqnarray*} so $s_a^{(1)}(b)\in R^{(0)}$. Finally \begin{eqnarray*} s_a^{(1)}s_b^{(1)}(s_a^{(1)})^{-1} &=& s_a^{(1)}s_{b/\sqrt{L(b,b)}}^{[1]}(s_a^{(1)})^{-1}\\ &\stackrel{\textup{Lemma \ref{t2.2} (c)}}{=}& s_{s_a^{(1)}(b/\sqrt{L(b,b)})}^{[1]} =s_{s_a^{(1)}(b)/\sqrt{L(b,b)}}^{[1]}\\ &=& s_{s_a^{(1)}(b)/\sqrt{L(s_a^{(1)}(b),s_a^{(1)}(b))}}^{[1]} =s_{s_a^{(1)}(b)}^{(1)}. \end{eqnarray*} (b) Starting with $S\in T^{tri}_n$, one can define $H_\Z:=M_{n\times 1}(\Z)$ with standard $\Z$-basis $\uuuu{e}=(e_1,...,e_n)$, and one can define $L:H_\Z\times H_\Z\to\Z$ by $L(\uuuu{e}^t,\uuuu{e})=S^t$. Then $\uuuu{e}\in \BB^{tri}$. If $(H_\Z,L)$ is unimodular then $\pm 1=\det L(\uuuu{e}^t,\uuuu{e})=L(e_1,e_1)... L(e_n,e_n)$ and $L(e_i,e_i)\in \N$, so $L(e_i,e_i)=1$ and $L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. The rest is trivial. (c) The inclusion $R^{(0)}\supset\{a\in H_\Z\,|\, L(a,a)=1\}$ is obvious. Consider $\uuuu{e}\in\BB^{tri}$. By part (b), the matrix $S:=L(\uuuu{e}^t,\uuuu{e})^t$ is in $T^{uni}_n(\Z)$. Consider a root $a=\sum_{i=1}^n \alpha_ie_i\in R^{(0)}$. Then \begin{eqnarray*} (L(a,e_1),...,L(a,e_n))&=&(\alpha_1,...,\alpha_n)S^t,\\ \textup{ so } \gcd(L(a,e_1),...,L(a,e_n))&=&\gcd(\alpha_1,...,\alpha_n). \end{eqnarray*} But $L(a,a)$ divides $\gcd(L(a,e_1),...,L(a,e_n))$ because $a$ is a root. Therefore $L(a,a)$ divides each $\alpha_i$. Thus $L(a,a)^2$ divides $\sum_{i=1}^n\sum_{j=1}^n\alpha_is_{ij}\alpha_j=L(a,a)$, so $L(a,a)=1$. (d) By part (c) $L(a,a)=1$ for $a\in R^{(0)}$.
\hfill$\Box$ \bigskip Up to now, the triangular shape of the matrix $L(\uuuu{e}^t,\uuuu{e})^t\in T^{tri}_n$ has not been used. It leads to the result in Theorem \ref{t2.7}. In algebraic geometry and the theory of meromorphic differential equations, this result is well known; it is a piece of Picard-Lefschetz theory. In the framework of singularity theory, it is treated in \cite{AGV88} and \cite{Eb01}. An elementary direct proof for a unimodular lattice is given in \cite{BH20}. The case $k=0$ is proved in \cite{HK16}. \begin{theorem}\label{t2.7} Let $(H_\Z,L,\uuuu{e})$ be a bilinear lattice with a triangular basis $\uuuu{e}$. Let $k\in\{0,1\}$. Then \begin{eqnarray*} s_{e_1}^{(k)}...s_{e_n}^{(k)}=(-1)^{k+1}M. \end{eqnarray*} \end{theorem} {\bf Proof:} The case $k=0$ is a special case of Proposition 2.4 in \cite{HK16}. The case $k=1$ can be proved by an easy modification of Lemma 2.3 (5) and Proposition 2.4 in \cite{HK16}. Both cases are proved for a unimodular bilinear lattice in \cite[Theorem 4.1]{BH20}. \hfill$\Box$ \bigskip The following notions are also standard in Picard-Lefschetz theory and singularity theory. \begin{definition}\label{t2.8} Let $(H_\Z,L,\uuuu{e})$ be a bilinear lattice with a triangular basis $\uuuu{e}$. It induces several structures: \index{$\Gamma^{(0)},\ \Gamma^{(1)}$} \index{$\Delta^{(0)},\ \Delta^{(1)}$} \begin{list}{}{} \item[(a)] The {\it \index{even monodromy group}even monodromy group} $\Gamma^{(0)}:=\langle s_{e_1}^{(0)},...,s_{e_n}^{(0)}\rangle \subset O^{(0)}.$ \item[(b)] The {\it \index{odd monodromy group}odd monodromy group} $\Gamma^{(1)}:=\langle s_{e_1}^{(1)},...,s_{e_n}^{(1)}\rangle \subset O^{(1)}.$ \item[(c)] The set $\Delta^{(0)}:=\Gamma^{(0)}\{\pm e_1,...,\pm e_n\} \subset H_\Z$ of {\it \index{even vanishing cycle}even vanishing cycles}. \item[(d)] The set $\Delta^{(1)}:=\Gamma^{(1)}\{\pm e_1,...,\pm e_n\} \subset H_\Z$ of {\it \index{odd vanishing cycle}odd vanishing cycles}.
\end{list} \end{definition} \begin{remarks}\label{t2.9} (i) The even vanishing cycles are roots, i.e. $\Delta^{(0)}\subset R^{(0)}$, because of Lemma \ref{t2.6} (a) (vi). In general $\Delta^{(1)}\not\subset R^{(0)}$. The name {\it \index{vanishing cycle}vanishing cycles} for the elements of $\Delta^{(0)}$ and $\Delta^{(1)}$ and the name {\it \index{monodromy group}monodromy group} stem from singularity theory. In \cite{HK16} the elements of $\Delta^{(0)}$ are called {\it real roots}. $\Gamma^{(1)}$ and $\Delta^{(1)}$ are not considered in \cite{HK16}. (ii) A matrix $S\in T^{tri}_n$ or $T^{uni}_n(\Z)$ determines by Lemma \ref{t2.6} (b) a bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis (up to isomorphism). This leads to the program to determine for a given matrix $S$ the data $I^{(0)},I^{(1)},G_\Z,G_\Z^{(0)}, G_\Z^{(1)},G_\Z^M,\Gamma^{(0)},\Gamma^{(1)},\Delta^{(0)}$ and $\Delta^{(1)}$. One should start with relevant invariants like $\sign I^{(0)}$, $\Rad I^{(0)}$, $\Rad I^{(1)}$, the characteristic polynomial and the Jordan normal form of $M$. (iii) The odd monodromy group $\Gamma^{(1)}$ arises naturally in many cases where $(H_\Z,L)$ is a unimodular bilinear lattice, for example in cases from isolated hypersurface singularities. But it is not clear whether it is natural in cases where $(H_\Z,L)$ is a bilinear lattice which is not unimodular. Theorem \ref{t2.7} is positive evidence, but the following is negative evidence. The monodromy group \begin{eqnarray*} \Gamma^{(1)}=\langle s^{[1]}_{e_i/\sqrt{L(e_i,e_i)}}\,|\, i\in\{1,...,n\}\rangle \end{eqnarray*} contains, because of Lemma \ref{t2.2} (c), all transvections $s^{[1]}_{g(e_i)/\sqrt{L(e_i,e_i)}}$ for $g\in \Gamma^{(1)}$. Only in the unimodular cases do these coincide with the transvections $s^{[1]}_a$ for $a\in\Delta^{(1)}$. We will only consider the unimodular cases.
(iv) We will work on this program rather systematically in chapters \ref{s5} and \ref{s6} for $S\in T^{uni}_2(\Z)$ and $S\in T^{uni}_3(\Z)$. We will discuss in chapter \ref{s9} known results for matrices $S\in T^{uni}_n(\Z)$ from singularity theory, with emphasis on the simple singularities (ADE-type, i.e. the ADE root lattices) and the simple elliptic singularities (i.e. $\www{E_6},\www{E_7},\www{E_8}$). \end{remarks} Definition \ref{t2.10} and Lemma \ref{t2.11} discuss the case when a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis is {\it reducible}. Then the monodromy groups, the set of roots and the sets of vanishing cycles also split. But beware that here reducibility involves not only $(H_\Z,L)$, but also $\uuuu{e}$. \begin{definition}\label{t2.10} (a) Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\in\N$ with a triangular basis $\uuuu{e}$. Let $\{1,...,n\}=I_1\ \dot\cup\ I_2$ be a decomposition into disjoint subsets such that \begin{eqnarray*} L(e_i,e_j)=L(e_j,e_i)=0\quad\textup{for }i\in I_1,\ j\in I_2. \end{eqnarray*} Then the triple $(H_\Z,L,\uuuu{e})$ is called {\it \index{reducible triple}reducible}. If such a decomposition does not exist the triple is called {\it \index{irreducible triple}irreducible}. (b) A matrix $S\in T^{uni}_n(\Z)$ is called reducible if the triple $(H_\Z,L,\uuuu{e})$ (which is unique up to isomorphism) is reducible, where $(H_\Z,L)$ is a unimodular bilinear lattice and $\uuuu{e}$ is a triangular basis with $S=L(\uuuu{e}^t,\uuuu{e})^t$. \end{definition} \begin{lemma}\label{t2.11} Keep the situation of Definition \ref{t2.10}. For $l\in\{1,2\}$ let $\sigma_l:\{1,2,...,|I_l|\}\to I_l$ be the unique bijection with $\sigma_l(i)<\sigma_l(j)$ for $i<j$. Define \begin{eqnarray*} \uuuu{e}_l&:=&(e_{1,l},e_{2,l},...,e_{|I_l|,l}):= (e_{\sigma_l(1)},e_{\sigma_l(2)},...,e_{\sigma_l(|I_l|)}),\\ H_{\Z,l}&:=&\bigoplus_{i=1}^{|I_l|}\Z\cdot e_{i,l},\quad L_l:=L|_{H_{\Z,l}}.
\end{eqnarray*} Then $(H_{\Z,l},L_l,\uuuu{e}_l)$ is a unimodular bilinear lattice with triangular basis. The decomposition $H_\Z=H_{\Z,1}\oplus H_{\Z,2}$ is left and right \index{orthogonal}$L$-orthogonal. Denote by $\Gamma^{(0)}_l$, $\Gamma^{(1)}_l$, $\Delta^{(0)}_l$, $\Delta^{(1)}_l$ and $R^{(0)}_l$ the monodromy groups and sets of vanishing cycles and roots of $(H_{\Z,l},L_l,\uuuu{e}_l)$. Denote by $\www{M}_l$ the automorphism of $H_\Z$ which extends the monodromy $M_l$ on $H_{\Z,l}$ by the identity on $H_{\Z,m}$, where $\{l,m\}=\{1,2\}$. Then \begin{eqnarray*} \Gamma^{(k)}&=& \Gamma^{(k)}_1\times \Gamma^{(k)}_2,\\ R^{(0)}&=& R^{(0)}_1\ \dot\cup\ R^{(0)}_2,\\ \Delta^{(k)}&=& \Delta^{(k)}_1\ \dot\cup \ \Delta^{(k)}_2,\\ M&=& \www{M}_1\www{M}_2=\www{M}_2\www{M}_1. \end{eqnarray*} \end{lemma} The proof is trivial. Because of this lemma, we will study the monodromy groups and the sets of vanishing cycles only for irreducible triples. In the Examples \ref{t1.1} this excludes the cases $S(A^2_1)$, $S(A^3_1)$, $S(A_2A_1)$, $S(\P^1A_1)$ and all cases $S(x_1,x_2,x_3)$ where two of the three numbers $x_1,x_2,x_3$ are zero. The following lemma treats the cases $S(A_1^n):=E_n$ for $n\in\N$. It is a trivial consequence of the special case $S(A_1)=(1)\in M_{1\times 1}(\Z)$ and Lemma \ref{t2.11}, but worth stating. \begin{lemma}\label{t2.12} The case \index{$A_1^n$}$A_1^n$ for any $n\in\N$: \begin{eqnarray*} H_\Z=\bigoplus_{i=1}^n\Z\cdot e_i,\quad S=S(A_1^n):=E_n,\quad I^{(0)}=2L,\quad I^{(1)}=0, \end{eqnarray*} the reflections $s_{e_i}^{(0)}\textup{ with } s_{e_i}^{(0)}(e_j)=\left\{\begin{array}{ll} e_j&\textup{if }j\neq i,\\ -e_i&\textup{if }j=i,\end{array}\right\}$ commute, the transvections $s_{e_i}^{(1)}$ are $s_{e_i}^{(1)}=\id$, \begin{eqnarray*} \Gamma^{(0)}&=&\{ \prod_{i=1}^n (s_{e_i}^{(0)})^{l_i}\,|\, (l_1,...,l_n)\in\{0;1\}^n\} \cong \{\pm 1\}^n,\\ \Gamma^{(1)}&=&\{\id\},\\ \Delta^{(0)}&=&R^{(0)}=\{\pm e_1,...,\pm e_n\}=\Delta^{(1)}.
\end{eqnarray*} \end{lemma} \chapter{Braid group actions}\label{s3} \setcounter{equation}{0} \setcounter{figure}{0} In sections \ref{s3.2}--\ref{s3.4} a unimodular bilinear lattice $(H_\Z,L)$ of some rank $n\geq 2$ is considered. The braid group $\Br_n$ is introduced in section \ref{s3.1}. It acts on several sets of $n$-tuples and of matrices associated to $(H_\Z,L)$. Section \ref{s3.1} starts with the Hurwitz action on $G^n$ where $G$ is a group. Results of Artin, Birman-Hilden and Igusa-Schiffler for $G$ a free group, a free Coxeter group or any Coxeter group are cited and applied. This is relevant as many of the monodromy groups $\Gamma^{(1)}$ and $\Gamma^{(0)}$ of rank 2 or rank 3 unimodular bilinear lattices with triangular bases are such groups. It turns out that the Hurwitz action of $\Br_n$ on $(O^{(k)})^n$ lifts to an action of a semidirect product $\Br_n\ltimes\{\pm 1\}^n$ on sets of certain $n$-tuples of cycles in $H_\Z$. This is discussed in section \ref{s3.2}. Most important are the set $\BB^{tri}$ of triangular bases of $(H_\Z,L)$ (if this set is not empty) and the subset $\BB^{dist}=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})$, the orbit of a chosen triangular basis $\uuuu{e}$. Section \ref{s3.3} poses questions on the characterization of such a set $\BB^{dist}$ of {\it distinguished bases} which will guide our work in chapter \ref{s7}. It also offers several examples with quite different properties. Section \ref{s3.4} connects the group $\Br_n\ltimes\{\pm 1\}^n$, via its action on the orbit of a triangular basis $\uuuu{e}$, with the group $G_\Z$. There is a group antihomomorphism $Z:(\Br_n\ltimes\{\pm 1\}^n)_S\to G_\Z$, where $(\Br_n\ltimes\{\pm 1\}^n)_S$ denotes the stabilizer of a matrix $S\in T^{uni}_n(\Z)$. In this way certain braids induce automorphisms in $G_\Z$, and in many cases these automorphisms generate $G_\Z$, i.e. $Z$ is surjective. Theorem \ref{t3.26} (b) states the well-known fact $Z((\delta^{1-k}\sigma^{root})^n)=(-1)^{k+1}M$ for $k\in\{0;1\}$.
Theorem \ref{t3.26} (c) gives a condition when $Z(\delta^{1-k}\sigma^{root})$ is in $G_\Z$ and thus an $n$-th root of $(-1)^{k+1}M$. Theorem \ref{t3.28} states that for almost all cases with rank $\leq 3$ the map $Z$ is surjective. There are only four exceptional cases. \section[The braid group and the Hurwitz action] {The braid group and the Hurwitz action, some classical results} \label{s3.1} Choose $n\in\Z_{\geq 2}$. The \index{braid group}braid group \index{$\Br_n$}$\Br_n$ of braids with $n$ strings was introduced by Artin \cite{Ar25}. It is the fundamental group of a configuration space. We will come to this geometric point of view in chapter \ref{s8}. Here we take a purely algebraic point of view. Artin \cite[Satz 1]{Ar25} showed that $\Br_n$ is generated by $n-1$ elementary braids $\sigma_1,...,\sigma_{n-1}$ \index{$\sigma_1,...,\sigma_{n-1}$} and that all relations come from the relations \begin{eqnarray*} \sigma_i\sigma_j&=&\sigma_j\sigma_i\quad \textup{for }i,j\in\{1,...,n-1\}\textup{ with }|i-j|\geq 2,\\ \sigma_i\sigma_{i+1}\sigma_i&=&\sigma_{i+1}\sigma_i\sigma_{i+1} \quad\textup{for }i\in\{1,...,n-2\}. \end{eqnarray*} He also showed \cite[Theorem 19]{Ar47} that the \index{center}center of $\Br_n$ is \begin{eqnarray*} \textup{Center}(\Br_n)=\langle \sigma^{mon}\rangle, \end{eqnarray*} where \index{$\sigma^{root},\ \sigma^{mon}$} \begin{eqnarray*} \sigma^{root}:=\sigma_{n-1}\sigma_{n-2}...\sigma_2\sigma_1,\quad \sigma^{mon}:=(\sigma^{root})^n. \end{eqnarray*} An important action of $\Br_n$ is the {\it \index{Hurwitz action}Hurwitz action} on the $n$-th power $G^n$ for any group $G$. The braid group $\Br_n$ acts via \begin{eqnarray*} \sigma_i(g_1,...,g_n)&:=& (g_1,...,g_{i-1},g_ig_{i+1}g_i^{-1},g_i, g_{i+2},...,g_n),\\ \sigma_i^{-1}(g_1,...,g_n)&:=& (g_1,...,g_{i-1},g_{i+1}, g_{i+1}^{-1}g_ig_{i+1},g_{i+2},...,g_n).
\end{eqnarray*} The fibers of the map \begin{eqnarray*} \pi_n:G^n\to G,\quad \uuuu{g}=(g_1,...,g_n)\mapsto g_1...g_n, \end{eqnarray*} are invariant under this action, \begin{eqnarray*} \pi_n(\uuuu{g})=\pi_n(\sigma_i\uuuu{g}) =\pi_n(\sigma_i^{-1}\uuuu{g}). \end{eqnarray*} We will study this action for $n=3$ in the cases of the monodromy groups for the rank 3 unimodular bilinear lattices. The following results in Theorem \ref{t3.2} of Artin \cite{Ar25} and Birman-Hilden \cite{BH73} will be relevant. \begin{definition}\label{t3.1} (a) Let $G^{free,n}$ be the \index{free group} \index{$G^{free,n},\ G^{fCox,n}$} free group with $n$ generators $x_1,...,x_n$. Let $$\Delta(G^{free,n}):=\bigcup_{i=1}^n\{wx_iw^{-1}\,|\, w\in G^{free,n}\}$$ be the set of elements conjugate to $x_1,...,x_n$. Obviously $\Br_n((x_1,...,x_n))\subset\Delta(G^{free,n})^n$. (b) Let $G^{fCox,n}$ be the \index{free Coxeter group}free Coxeter group with $n$ generators $x_1,...,x_n$, so all relations are generated by the relations $x_1^2=...=x_n^2=e$. Let $$\Delta(G^{fCox,n}):=\bigcup_{i=1}^n\{wx_iw^{-1}\,|\, w\in G^{fCox,n}\}$$ be the set of elements conjugate to $x_1,...,x_n$. Obviously $\Br_n((x_1,...,x_n))\subset\Delta(G^{fCox,n})^n$. \end{definition} \begin{theorem}\label{t3.2} (a) \cite[Satz 7 and Satz 9]{Ar25} $\Br_n$ acts simply transitively on the set of tuples $$\{(w_1,...,w_n)\in \Delta(G^{free,n})^n\,|\, w_1...w_n=x_1...x_n\}.$$ (b) \cite[Theorem 7]{BH73} $\Br_n$ acts simply transitively on the set of tuples $$\{(w_1,...,w_n)\in \Delta(G^{fCox,n})^n\,|\, w_1...w_n=x_1...x_n\}.$$ \end{theorem} \begin{remarks}\label{t3.3} (i) Both results were reproved by Kr\"uger in \cite[Satz 7.6]{Kr90}. (ii) Theorem 1.31 in \cite{KT08} gives a weaker version of Artin's result Theorem \ref{t3.2} (a). 
Theorem 1.31 in \cite{KT08} is equivalent to the statement that $\Br_n$ acts simply transitively on the set of tuples \begin{eqnarray*} &&\{(w_1,...,w_n)\in \Delta(G^{free,n})^n\,|\, w_1...w_n=x_1...x_n,\\ &&\hspace*{1cm} w_1,...,w_n\textup{ generate }G^{free,n},\ \textup{ a permutation}\\ &&\hspace*{1cm} \sigma\in S_n\textup{ exists with } w_i\textup{ conjugate to }x_{\sigma(i)}\}. \end{eqnarray*} (iii) The formulation of Theorem 1.31 in \cite{KT08} is different. There a group automorphism $\varphi$ of $G^{free,n}$ is called a {\it \index{braid automorphism}braid automorphism} if $\varphi(x_1...x_n)=x_1...x_n$ and if a permutation $\sigma\in S_n$ with $\varphi(x_i)$ conjugate to $x_{\sigma(i)}$ exists. The group of all braid automorphisms is called $\www{\Br}_n$. Theorem 1.31 in \cite{KT08} states that the map \begin{eqnarray*} \www{Z}:\{\sigma_1^{\pm 1},...,\sigma_{n-1}^{\pm 1}\} \to \www{\Br}_n\quad\textup{with}\\ (\www{Z}(\sigma_i)(x_1),...,\www{Z}(\sigma_i)(x_n)) = \sigma_i^{-1}(x_1,...,x_n),\\ (\www{Z}(\sigma_i^{-1})(x_1),...,\www{Z}(\sigma_i^{-1})(x_n)) = \sigma_i(x_1,...,x_n), \end{eqnarray*} extends to a group isomorphism $\Br_n\to \www{\Br}_n$. (iv) In order to understand the equivalence of the statements in (ii) and (iii), it is crucial to see that the extension $\www{Z}:\Br_n\to\www{\Br}_n$ which is defined by \begin{eqnarray*} (\www{Z}(\beta)(x_1),...,\www{Z}(\beta)(x_n)) =\beta^{-1}(x_1,...,x_n)\quad\textup{for }\beta\in\Br_n \end{eqnarray*} is a group homomorphism. This follows from the equations \begin{eqnarray*} \www{Z}(\beta\sigma_i)=\www{Z}(\beta)\www{Z}(\sigma_i) \quad\textup{and}\quad \www{Z}(\beta\sigma_i^{-1})=\www{Z}(\beta)\www{Z}(\sigma_i^{-1}) \end{eqnarray*} for $\beta\in\Br_n$ and $i\in\{1,...,n-1\}$.
The first equation holds because of \begin{eqnarray*} &&(\www{Z}(\beta\sigma_i)(x_1),..., \www{Z}(\beta\sigma_i)(x_n))\\ &=& (\beta\sigma_i)^{-1}(x_1,...,x_n)\\ &=&\sigma_i^{-1}(\beta^{-1}(x_1,...,x_n))\\ &=& \sigma_i^{-1}(\www{Z}(\beta)(x_1),...,\www{Z}(\beta)(x_n))\\ &=& (\www{Z}(\beta)(x_1),...,\www{Z}(\beta)(x_{i-1}), \www{Z}(\beta)(x_{i+1}),\\ && (\www{Z}(\beta)(x_{i+1}))^{-1}\www{Z}(\beta)(x_i) \www{Z}(\beta)(x_{i+1}), \www{Z}(\beta)(x_{i+2}),...,\www{Z}(\beta)(x_n))\\ &=& (\www{Z}(\beta)(x_1),...,\www{Z}(\beta)(x_{i-1}), \www{Z}(\beta)(x_{i+1}),\\ && \www{Z}(\beta)(x_{i+1}^{-1}x_ix_{i+1}), \www{Z}(\beta)(x_{i+2}),...,\www{Z}(\beta)(x_n))\\ &=& (\www{Z}(\beta)\www{Z}(\sigma_i)(x_1),..., \www{Z}(\beta)\www{Z}(\sigma_i)(x_n)). \end{eqnarray*} The second equation is proved by a similar calculation. \end{remarks} \begin{example}\label{t3.4} Theorem \ref{t3.2} will be applied in the following situation, which in fact arises quite often. Consider a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ of rank $n\geq 2$ with triangular basis $\uuuu{e}$ such that for some $k\in\{0,1\}$ the following holds: \begin{eqnarray*} \Gamma^{(k)}=\left\{\begin{array}{lll} G^{fCox,n}&\textup{ with generators }s_{e_1}^{(0)},..., s_{e_n}^{(0)}&\textup{ if }k=0,\\ G^{free,n}&\textup{ with generators }s_{e_1}^{(1)},..., s_{e_n}^{(1)}&\textup{ if }k=1. \end{array}\right. \end{eqnarray*} Then in the notation of Definition \ref{t3.1} \begin{eqnarray*} \Delta(G^{fCox,n})= \{s_v^{(0)}\,|\,v\in\Delta^{(0)}\}&& \textup{if }k=0,\\ \Delta(G^{free,n})= \{s_v^{(1)}\,|\,v\in\Delta^{(1)}\}&& \textup{if }k=1. \end{eqnarray*} By Theorem \ref{t3.2}, two statements hold: \begin{list}{}{} \item[(1)] The set \begin{eqnarray*} \{(s_{v_1}^{(k)},...,s_{v_n}^{(k)})\,|\, v_1,...,v_n\in\Delta^{(k)}, s_{v_1}^{(k)}...s_{v_n}^{(k)}=(-1)^{k+1}M\} \end{eqnarray*} is a single orbit under the Hurwitz action of $\Br_n$. 
\item[(2)] The stabilizer of any such tuple $(s_{v_1}^{(k)},...,s_{v_n}^{(k)})$ under the Hurwitz action of $\Br_n$ is $\{\id\}$. \end{list} \end{example} Theorem \ref{t3.2} (b) concerns a free Coxeter group with $n$ generators. The transitivity of the action generalizes to arbitrary Coxeter groups and can be applied to generalize the statement (1) in Example \ref{t3.4}, as is explained in the following. \begin{definition}\label{t3.5} (Classical, e.g.\ \cite[5.1]{Hu90}) A {\it \index{Coxeter system}Coxeter system} $(W,S^{gen})$ consists of a group $W$ and a finite set $S^{gen}=\{s_1,...,s_n\}\subset W$, for some $n\in\N$, of generators of the group, with generating relations as follows. There is a subset $I\subset\{(i,j)\in\{1,...,n\}^2\,|\, i<j\}$ and a map $a:I\to\Z_{\geq 2}$ such that the generating relations are \begin{eqnarray*} s_1^2=...=s_n^2=1,\quad 1=(s_is_j)^{a(i,j)}\textup{ for } (i,j)\in I. \end{eqnarray*} The group $W$ is then called a {\it \index{Coxeter group}Coxeter group}. \end{definition} The following theorem was proved by Deligne \cite{De74} for the ADE Weyl groups and in general by Igusa and Schiffler \cite{IS10}. A short proof was given by Baumeister, Dyer, Stump and Wegener \cite{BDSW14}. \begin{theorem}\label{t3.6} \cite{De74}\cite{IS10}\cite{BDSW14} Let $(W,S^{gen})$ with $S^{gen}=\{s_1,...,s_n\}$ be a Coxeter system with $n\geq 2$. Define $\Delta(W,S^{gen}):=\bigcup_{i=1}^n \{ws_iw^{-1}\,|\, w\in W\}$. The set \begin{eqnarray*} \{(w_1,...,w_n)\in \Delta(W,S^{gen})^n\,|\, w_1...w_n=s_1...s_n\} \end{eqnarray*} is a single orbit under the Hurwitz action of $\Br_n$. \end{theorem} Part (a) of the following theorem is classical if $S_{ij}\in\{0,-1,-2\}$ for $i<j$ and due to Vinberg \cite{Vi71} in the general case.
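The transitivity in Theorem \ref{t3.6} can be checked by hand in the smallest nontrivial case. The following worked computation is added here purely as an illustration: take the Coxeter system of type $A_2$, i.e. $W=S_3$ with generators $s_1,s_2$ and relation $(s_1s_2)^3=1$, so that $s_1s_2s_1=s_2s_1s_2$ and $\Delta(W,S^{gen})=\{s_1,\,s_2,\,s_1s_2s_1\}$.

```latex
% Hurwitz orbit of (s_1,s_2) in the Coxeter system of type A_2.
% The three pairs (w_1,w_2) of reflections with w_1 w_2 = s_1 s_2 are
% (s_1,s_2), (s_1 s_2 s_1, s_1), (s_2, s_1 s_2 s_1), and sigma_1 cycles them:
\begin{align*}
\sigma_1(s_1,s_2) &= (s_1 s_2 s_1^{-1},\, s_1) = (s_1 s_2 s_1,\, s_1),\\
\sigma_1^2(s_1,s_2) &= \bigl((s_1 s_2 s_1)\, s_1\, (s_1 s_2 s_1)^{-1},\, s_1 s_2 s_1\bigr)
   = (s_1 s_2 s_1 s_2 s_1,\, s_1 s_2 s_1) = (s_2,\, s_1 s_2 s_1),\\
\sigma_1^3(s_1,s_2) &= (s_2\,(s_1 s_2 s_1)\,s_2^{-1},\, s_2) = (s_1,\, s_2).
\end{align*}
```

So the fiber over $s_1s_2$ is a single $\Br_2$ orbit of size three, in accordance with Theorem \ref{t3.6}. Note that here $\sigma_1^3$ already acts as the identity on pairs of reflections, while on the level of tuples of cycles (section \ref{s3.2}) the braid $\sigma_1^3$ produces a sign, compare Lemma \ref{t3.17} (a).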
\begin{theorem}\label{t3.7} Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\geq 2$ and let $\uuuu{e}$ be a triangular basis such that the matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$ satisfies $S_{ij}\leq 0$ for $i<j$. (a) (Classical for $S_{ij}\in\{0,-1,-2\}$, \cite[Proposition 6, Theorem 1, Theorem 2, Proposition 17]{Vi71} for $S_{ij}\leq 0$) The pair $(\Gamma^{(0)},\{s_{e_1}^{(0)},...,s_{e_n}^{(0)}\})$ is a Coxeter system with \begin{eqnarray*} I&=&\{(i,j)\in\{1,...,n\}^2\,|\, i<j,S_{ij}\in\{0,-1\}\} \quad\textup{and}\\ a(i,j)&=&\left\{\begin{array}{ll} 2&\textup{ if }S_{ij}=0,\\ 3&\textup{ if }S_{ij}=-1. \end{array}\right. \end{eqnarray*} (b) The set \begin{eqnarray*} \{(g_1,...,g_n)\in\bigl(\{s_v^{(0)}\,|\, v\in\Delta^{(0)}\}\bigr)^n\,|\, g_1...g_n=-M\} \end{eqnarray*} is a single orbit under the Hurwitz action of $\Br_n$. \end{theorem} {\bf Proof of part (b):} Observe \begin{eqnarray*} \Delta(\Gamma^{(0)},\{s_{e_1}^{(0)},...,s_{e_n}^{(0)}\}) =\{s_v^{(0)}\,|\, v\in\Delta^{(0)}\}. \end{eqnarray*} Apply Theorem \ref{t3.6}. \hfill$\Box$ \begin{remarks}\label{t3.8} (i) The transitivity result in part (b) holds also for a bilinear lattice $(H_\Z,L)$ which is not necessarily unimodular, if it comes equipped with a triangular basis $\uuuu{e}$ with $L(e_i,e_j)\leq 0$ for $i>j$. This is the case of a generalized Cartan lattice (Remark \ref{t2.4} (v)). This is crucial in \cite{HK16}. (ii) Especially in the case of a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$ with $S_{ij}\leq -2$ for all $i<j$ we have $\Gamma^{(0)}=G^{fCox,n}$ with generators $s_{e_1}^{(0)},...,s_{e_n}^{(0)}$. 
(iii) In the case $n=3$, Theorem \ref{t6.11} (g) gives $\Gamma^{(0)}=G^{fCox,3}$ with generators $s_{e_1}^{(0)},s_{e_2}^{(0)},s_{e_3}^{(0)}$ also in the following cases: if $S_{ij}\geq 3$ for $i<j$ and if additionally $$2S_{12}\leq S_{13}S_{23},\quad 2S_{13}\leq S_{12}S_{23},\quad 2S_{23}\leq S_{12}S_{13}.$$ (iv) Theorem \ref{t6.18} (g) gives in the situation of part (iii) also $\Gamma^{(1)}=G^{free,3}$ with generators $s_{e_1}^{(1)},s_{e_2}^{(1)},s_{e_3}^{(1)}$. (v) In the situation of part (ii) there are cases with $\Gamma^{(1)}= G^{free,n}$ and cases with $\Gamma^{(1)}\neq G^{free,n}$. The odd cases are more complicated than the even cases. For the cases with $n=3$ see the Remarks \ref{t4.17}, Lemma \ref{t4.18} and Theorem \ref{t6.18}. \end{remarks} \section{Braid group action on tuples of cycles} \label{s3.2} Consider a unimodular bilinear lattice $(H_\Z,L)$ of some rank $n\geq 2$ and the groups $O^{(k)}=\Aut(H_\Z,I^{(k)})$ for $k\in\{0;1\}$. Recall that here the set of roots $R^{(0)}$ is $$R^{(0)}=\{\delta\in H_\Z\,|\, L(\delta,\delta)=1\}.$$ In order to treat the even case $k=0$ and the odd case $k=1$ uniformly, we define \index{$R^{(1)}$} $$R^{(1)}:=H_\Z.$$ The Hurwitz action of $\Br_n$ on $(O^{(k)})^n$ restricts because of \begin{eqnarray}\label{3.1} s_a^{(k)}s_b^{(k)}(s_a^{(k)})^{-1} =s_{s_a^{(k)}(b)}^{(k)}\quad\textup{for}\quad a,b\in R^{(k)} \end{eqnarray} (Lemma \ref{t2.2} (c)) to an action on the subset $(\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n$. It turns out that this action has a natural lift to an action of a certain semidirect product $\Br_n\ltimes\{\pm 1\}^n$ on the set $(R^{(k)})^n$. Here the sets $(R^{(k)})^n$ and $(\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n$ are related by the map \begin{eqnarray*} \pi_n^{(k)}: (R^{(k)})^n&\to& (\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n \subset (O^{(k)})^n, \\ \quad \uuuu{v}=(v_1,...,v_n)&\mapsto& (s_{v_1}^{(k)},...,s_{v_n}^{(k)}).
\end{eqnarray*} \index{$\pi_n^{(k)},\ \pi_n$} Recall also the map $$\pi_n: (O^{(k)})^n\to O^{(k)},\quad (g_1,...,g_n)\mapsto g_1...g_n$$ which was defined for an arbitrary group before Definition \ref{t3.1}. Furthermore it turns out that both actions, for $k=0$ and for $k=1$, restrict to the same action on the set $\BB^{tri}$ of triangular bases if this set is not empty. This is the action in which we are most interested. In the case of a unimodular bilinear lattice from singularity theory, it is well known \cite[5.7]{Eb01} \cite[\S 1.9]{AGV88} and has been studied by A'Campo, Brieskorn, Ebeling, Gabrielov, Gusein-Zade, Kr\"uger and others. In fact it works also for bilinear lattices with $\BB^{tri}\neq\emptyset$ which are not necessarily unimodular, see Remark \ref{t3.14}. Finally, the actions induce actions of $\Br_n\ltimes \{\pm 1\}^n$ on several spaces of matrices. The purpose of this section is to fix all these well-known actions. Lemma \ref{t3.9} presents the semidirect product $\Br_n\ltimes\{\pm 1\}^n$. Lemma \ref{t3.10} gives its action on $(R^{(k)})^n$. Lemma \ref{t3.11} gives its action on $\BB^{tri}$ if this set is not empty. \begin{lemma}\label{t3.9} Fix $n\in\Z_{\geq 2}$. (a) The multiplicative group $\{\pm 1\}^n$ is called the {\it sign group}. It is generated by the elements $\delta_j=((-1)^{\delta_{ij}})_{i=1,...,n}\in\{\pm 1\}^n$ (here $\delta_{ij}$ is the Kronecker symbol) for $j\in\{1,...,n\}$. (b) The following relations define a \index{semidirect product}semidirect product \index{$\Br_n\ltimes\{\pm 1\}^n$}$\Br_n\ltimes\{\pm 1\}^n$ of $\Br_n$ and $\{\pm 1\}^n$ with $\{\pm 1\}^n$ as normal subgroup, \begin{eqnarray*} \sigma_j\delta_i\sigma_j^{-1}&=& \delta_i\quad \textup{for }i\in\{1,...,n\}-\{j,j+1\},\\ \sigma_j\delta_j\sigma_j^{-1}&=&\delta_{j+1},\quad \sigma_j\delta_{j+1}\sigma_j^{-1}=\delta_j. \end{eqnarray*} In the following $\Br_n\ltimes\{\pm 1\}^n$ always means this semidirect product. \end{lemma} {\bf Proof:} Part (a) is just notation.
Part (b) requires a proof. We have the exact sequence \begin{eqnarray*} \{1\}\to \Br_n^{pure}\to\Br_n\to S_n\to\{1\} \end{eqnarray*} where $\Br_n^{pure}\subset \Br_n$ is the normal subgroup of pure braids (and $\sigma_i\in \Br_n$ maps to the transposition $(i\ i+1)\in S_n$). See Remark \ref{t8.4} (vii) for this group and this exact sequence. The natural action of $S_n$ on $\{\pm 1\}^n$, \begin{eqnarray*} S_n\owns \alpha: \uuuu{\varepsilon} =(\varepsilon_1,...,\varepsilon_n)\mapsto (\varepsilon_{\alpha^{-1}(1)},...,\varepsilon_{\alpha^{-1}(n)}) =:\alpha . \uuuu{\varepsilon} \end{eqnarray*} lifts to an action of $\Br_n$ on $\{\pm 1\}^n$, $\sigma:\uuuu{\varepsilon}\mapsto \sigma .\uuuu{\varepsilon}.$ This action can be used to define a semidirect product of $\Br_n$ and $\{\pm 1\}^n$ by $\sigma\uuuu{\varepsilon}\sigma^{-1}:=\sigma .\uuuu{\varepsilon}$. It is the semidirect product in part (b).\hfill$\Box$ \begin{lemma}\label{t3.10} Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\geq 2$. Fix $k\in\{0,1\}$. (a) The following formulas define an action of the semidirect product $\Br_n\ltimes \{\pm 1\}^n $ from Definition \ref{t3.9} (b) on the set $(R^{(k)})^n$, \begin{eqnarray*} \sigma_j(\uuuu{v})&=& (v_1,...,v_{j-1}, s_{v_j}^{(k)}(v_{j+1}),v_j,v_{j+2},...,v_n),\\ \sigma_j^{-1}(\uuuu{v})&=& (v_1,...,v_{j-1}, v_{j+1},(s_{v_{j+1}}^{(k)})^{-1}(v_j),v_{j+2},...,v_n),\\ \delta_j(\uuuu{v})&=& (v_1,...,v_{j-1},-v_j,v_{j+1},...,v_n), \end{eqnarray*} for $\uuuu{v}=(v_1,...,v_n)\in (R^{(k)})^n$. (b) The map $\pi_n^{(k)}:(R^{(k)})^n\to (O^{(k)})^n$ is compatible with the action of $\Br_n\ltimes \{\pm 1\}^n$ on $(R^{(k)})^n$ from part (a) and the Hurwitz action of $\Br_n$ on $(O^{(k)})^n$, so the diagram \begin{eqnarray*} \begin{CD} (R^{(k)})^n @>{\sigma_j}>> (R^{(k)})^n\\ @V{\pi^{(k)}_n}VV @VV{\pi^{(k)}_n}V \\ (O^{(k)})^n @>{\sigma_j}>> (O^{(k)})^n \end{CD} \end{eqnarray*} commutes. Here the sign group $\{\pm 1\}^n$ acts trivially on $(O^{(k)})^n$. 
Especially, each orbit in $(R^{(k)})^n$ is contained in one fiber of the projection $\pi_n\circ\pi_n^{(k)}:(R^{(k)})^n\to O^{(k)}$. \end{lemma} {\bf Proof:} (a) We denote the actions in part (a) by $\sigma_j^{(k)},(\sigma_j^{-1})^{(k)}$ and $\delta_j^{(k)}$ (of course $\delta_j^{(0)}=\delta_j^{(1)}$). The identities \begin{eqnarray*} \sigma_j^{(k)}(\sigma_j^{-1})^{(k)} =(\sigma_j^{-1})^{(k)}\sigma_j^{(k)}=\id&& \textup{for }j\in\{1,...,n-1\},\\ \sigma_i^{(k)}\sigma_j^{(k)}=\sigma_j^{(k)}\sigma_i^{(k)}&& \textup{for }|i-j|\geq 2,\\ \sigma_j^{(k)}\delta_i^{(k)}(\sigma_j^{-1})^{(k)}=\delta_i^{(k)}&& \textup{for }i\in\{1,...,n\}-\{j,j+1\},\\ \sigma_j^{(k)}\delta_j^{(k)}(\sigma_j^{-1})^{(k)} =\delta_{j+1}^{(k)}&&\textup{and}\\ \sigma_j^{(k)}\delta_{j+1}^{(k)}(\sigma_j^{-1})^{(k)} =\delta_j^{(k)}&&\textup{for }j\in\{1,...,n-1\} \end{eqnarray*} are obvious or easy to see. The identities \begin{eqnarray*} \sigma_i^{(k)}\sigma_{i+1}^{(k)}\sigma_i^{(k)} =\sigma_{i+1}^{(k)}\sigma_i^{(k)}\sigma_{i+1}^{(k)} &&\textup{for }i\in\{1,...,n-2\} \end{eqnarray*} are proved by the following calculation with $\uuuu{v}\in (R^{(k)})^n$, \begin{eqnarray*} && \sigma_i^{(k)}\sigma_{i+1}^{(k)}\sigma_i^{(k)}(\uuuu{v})\\ &=& \sigma_i^{(k)}\sigma_{i+1}^{(k)}(...,v_{i-1},s^{(k)}_{v_i}(v_{i+1}),v_i, v_{i+2},v_{i+3}...)\\ &=& \sigma_i^{(k)}(...,v_{i-1},s^{(k)}_{v_i}(v_{i+1}), s^{(k)}_{v_i}(v_{i+2}),v_i,v_{i+3},...)\\ &=& (...,v_{i-1},s^{(k)}_{s^{(k)}_{v_i}(v_{i+1})} (s^{(k)}_{v_i}(v_{i+2})),s^{(k)}_{v_i}(v_{i+1}),v_i,v_{i+3},...)\\ &\stackrel{\eqref{3.1}}{=}&(...,v_{i-1},s^{(k)}_{v_i}s^{(k)}_{v_{i+1}} (s^{(k)}_{v_i})^{-1}(s^{(k)}_{v_i}(v_{i+2})), s^{(k)}_{v_i}(v_{i+1}),v_i,v_{i+3},...)\\ &=& (...,v_{i-1},s^{(k)}_{v_i}(s^{(k)}_{v_{i+1}}(v_{i+2})), s^{(k)}_{v_i}(v_{i+1}),v_i,v_{i+3},...)\\ &=&\sigma_{i+1}^{(k)}(...,v_{i-1},s^{(k)}_{v_i} (s^{(k)}_{v_{i+1}}(v_{i+2})),v_i,v_{i+1},v_{i+3},...)\\ &=&\sigma_{i+1}^{(k)}\sigma_i^{(k)}(...,v_{i-1},v_i, s^{(k)}_{v_{i+1}}(v_{i+2}),v_{i+1},v_{i+3},...)\\
&=&\sigma_{i+1}^{(k)}\sigma_i^{(k)}\sigma_{i+1}^{(k)}(\uuuu{v}). \end{eqnarray*} The maps $\sigma_j^{(k)}$ and $\delta_i^{(k)}$ satisfy all relations between the generators $\sigma_j$ and $\delta_i$ of the group $\Br_n\ltimes\{\pm 1\}^n$. Therefore the formulas in part (a) define an action of this group on the set $(R^{(k)})^n$. (b) The actions are compatible because of \eqref{3.1}. The sign group acts trivially on $(O^{(k)})^n$ because $s_v^{(k)}=s_{-v}^{(k)}$ for $v\in R^{(k)}$. Each orbit of the Hurwitz action on $(O^{(k)})^n$ is contained in one fiber of the map $\pi_n$, as was remarked in section \ref{s3.1}.\hfill$\Box$ \begin{lemma}\label{t3.11} Let $(H_\Z,L)$ be a unimodular bilinear lattice with nonempty set $\BB^{tri}$ of triangular bases. The actions in Lemma \ref{t3.10} of $\Br_n\ltimes\{\pm 1\}^n$ on $(R^{(0)})^n$ and on $(R^{(1)})^n$ both restrict to the same action on $\BB^{tri}$. This action can also be written as follows, \begin{eqnarray*} \sigma_j(\uuuu{v})&=& (v_1,...,v_{j-1}, v_{j+1}-L(v_{j+1},v_j)v_j,v_j,v_{j+2},...,v_n),\\ \sigma_j^{-1}(\uuuu{v})&=& (v_1,...,v_{j-1}, v_{j+1},v_j-L(v_{j+1},v_j)v_{j+1},v_{j+2},...,v_n),\\ \delta_j(\uuuu{v})&=& (v_1,...,v_{j-1},-v_j,v_{j+1},...,v_n), \end{eqnarray*} for $\uuuu{v}=(v_1,...,v_n)\in \BB^{tri}$. \end{lemma} {\bf Proof:} $\uuuu{v}\in\BB^{tri}$ implies $L(v_j,v_{j+1})=0$ and $2L(v_j,v_j)=2=I^{(0)}(v_j,v_j)$. Recall $I^{(0)}=L+L^t$ and $I^{(1)}=L^t-L$. Therefore \begin{eqnarray*} s_{v_j}^{(k)}(v_{j+1}) &=& v_{j+1}-I^{(k)}(v_j,v_{j+1})v_j =v_{j+1}-L(v_{j+1},v_j)v_j,\\ (s_{v_{j+1}}^{(k)})^{-1}(v_j) &=& v_j-(-1)^kI^{(k)}(v_{j+1},v_j)v_{j+1} =v_j-L(v_{j+1},v_j)v_{j+1}. \end{eqnarray*} So $\sigma_j(\uuuu{v})$ and $\sigma_j^{-1}(\uuuu{v})$ are given by the formulas in Lemma \ref{t3.11}. It remains to see that the images are again in $\BB^{tri}$. They are in $(R^{(0)})^n$ because of the even case $k=0$. They form $\Z$-bases of $H_\Z$ because $\uuuu{v}$ is a $\Z$-basis of $H_\Z$. 
They are triangular bases because \begin{eqnarray*} L(\sigma_j(\uuuu{v})_j,\sigma_j(\uuuu{v})_{j+1}) &=& L(v_{j+1}-L(v_{j+1},v_j)v_j,v_j)=0,\\ L(\sigma_j^{-1}(\uuuu{v})_j,\sigma_j^{-1}(\uuuu{v})_{j+1}) &=& L(v_{j+1},v_j-L(v_{j+1},v_j)v_{j+1})=0. \end{eqnarray*} Of course $\delta_j(\uuuu{v})\in\BB^{tri}$.\hfill$\Box$ \begin{definition}\label{t3.12} Fix $n\in\N$ and $R\in\{\Z,\Q,\R,\C\}$. (a) Recall the definition of the set $T^{uni}_n(R)$ of upper triangular $n\times n$ matrices with entries in $R$ and 1's on the diagonal in Definition \ref{t2.3} (c). Additionally we define the sets of symmetric and skew-symmetric matrices \begin{eqnarray*} T^{(0)}_n(R)&:=& \{A\in M_{n\times n}(R)\,|\, A^t=A, A_{ii}=2\},\\ T^{(1)}_n(R)&:=& \{A\in M_{n\times n}(R)\,|\, A^t=-A\}. \end{eqnarray*} (b) For $a\in\Z$ and $j\in\{1,...,n-1\}$ define the $n\times n$ matrix \begin{eqnarray*} C_{n,j}(a)&=& \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & a & 1 & & \\ & & 1 & 0 & & \\ & & & & \ddots & \\ & & & & & 1 \end{pmatrix} \end{eqnarray*} which differs from the unit matrix only in the positions $(j,j),(j,j+1),(j+1,j),(j+1,j+1)$. Its inverse is \begin{eqnarray*} C_{n,j}^{-1}(a)&=& \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 0 & 1 & & \\ & & 1 & -a & & \\ & & & & \ddots & \\ & & & & & 1 \end{pmatrix} \end{eqnarray*} with $-a$ at the position $(j+1,j+1)$. \end{definition} \begin{lemma}\label{t3.13} Fix $n\in\Z_{\geq 2}$ and $R\in\{\Z,\Q,\R,\C\}$.
(a) The following formulas define an action of the semidirect product $\Br_n\ltimes\{\pm 1\}^n$ on each of the sets of matrices $T^{(0)}_n(R)$, $T^{(1)}_n(R)$ and $T^{uni}_n(R)$, \begin{eqnarray*} \sigma_j(A)&=& C_{n,j}(-A_{j,j+1})\cdot A\cdot C_{n,j}(-A_{j,j+1}) \quad\textup{for }j\in\{1,...,n-1\},\\ \sigma_j^{-1}(A)&=& C_{n,j}^{-1}(A_{j,j+1})\cdot A\cdot C_{n,j}^{-1}(A_{j,j+1}) \quad\textup{for }j\in\{1,...,n-1\},\\ \delta_j(A)&=& \diag(((-1)^{\delta_{ij}})_{i=1,...,n})\cdot A\cdot \diag(((-1)^{\delta_{ij}})_{i=1,...,n})\\ &&\hspace*{4cm} \textup{for }j\in\{1,...,n\}. \end{eqnarray*} (b) Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n$. (i) Fix $k\in\{0;1\}$. The map \begin{eqnarray*} (R^{(k)})^n\to T^{(k)}_n(\Z),\quad \uuuu{v}\mapsto I^{(k)}(\uuuu{v}^t,\uuuu{v}), \end{eqnarray*} is compatible with the actions of $\Br_n\ltimes\{\pm 1\}^n$ on $(R^{(k)})^n$ and on $T^{(k)}_n(\Z)$. (ii) Suppose that $\BB^{tri}$ is not empty. The map \begin{eqnarray*} \BB^{tri}\to T^{uni}_n(\Z),\quad \uuuu{v}\mapsto L(\uuuu{v}^t,\uuuu{v})^t, \end{eqnarray*} is compatible with the actions of $\Br_n\ltimes\{\pm 1\}^n$ on $\BB^{tri}$ and on $T^{uni}_n(\Z)$. \end{lemma} {\bf Proof:} (a) For $A\in T^{(k)}_n(\Z)$ there is a unique matrix $S\in T^{uni}_n(\Z)$ with $A=S+(-1)^kS^t$. Consider a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis $\uuuu{e}$ with matrix $S=L(\uuuu{e}^t,\uuuu{e})^t$. Then \begin{eqnarray*} \sigma_j(\uuuu{e})&=& (...,e_{j-1},e_{j+1}-S_{j,j+1}e_j,e_j,e_{j+2},...)\\ &=& \uuuu{e}\cdot C_{n,j}(-S_{j,j+1}),\\ \sigma_j^{-1}(\uuuu{e})&=& (...,e_{j-1},e_{j+1},e_j-S_{j,j+1}e_{j+1},e_{j+2},...)\\ &=& \uuuu{e}\cdot C_{n,j}^{-1}(S_{j,j+1}),\\ \delta_j(\uuuu{e})&=& (...,e_{j-1},-e_j,e_{j+1},...)\\ &=& \uuuu{e}\cdot \diag(((-1)^{\delta_{ij}})_{i=1,...,n}). \end{eqnarray*} Observe that $C_{n,j}(a)$ for $a\in\Z$ is symmetric.
Therefore for $k\in\{0;1\}$ \begin{eqnarray*} I^{(k)}(\sigma_j(\uuuu{e})^t,\sigma_j(\uuuu{e})) &=& C_{n,j}(-S_{j,j+1})\cdot I^{(k)}(\uuuu{e}^t,\uuuu{e}) \cdot C_{n,j}(-S_{j,j+1}),\\ I^{(k)}(\sigma_j^{-1}(\uuuu{e})^t,\sigma_j^{-1}(\uuuu{e})) &=& C_{n,j}^{-1}(S_{j,j+1})\cdot I^{(k)}(\uuuu{e}^t,\uuuu{e}) \cdot C_{n,j}^{-1}(S_{j,j+1}), \end{eqnarray*} similarly for $L$ instead of $I^{(k)}$, and also similarly for the action of $\delta_j$. This shows part (a) for $R=\Z$. Changing the set of scalars does not change the matrix identities which say that the group $\Br_n\ltimes\{\pm 1\}^n$ acts. (b) This follows from the proof of part (a).\hfill$\Box$ \begin{remarks}\label{t3.14} Let $(H_\Z,L)$ be a bilinear lattice, not necessarily unimodular. (i) The action of $\Br_n\ltimes\{\pm 1\}^n$ on $(R^{(0)})^n$ in Lemma \ref{t3.10} (a) works also in this case. It restricts as in Lemma \ref{t3.11} to an action on $\BB^{tri}$, if this set is not empty, though here for $\uuuu{v}\in\BB^{tri}$ \begin{eqnarray*} \sigma_j(\uuuu{v})&=& (v_1,...,v_{j-1}, v_{j+1}-\frac{L(v_{j+1},v_j)}{L(v_j,v_j)}v_j,v_j,v_{j+2},...,v_n),\\ \sigma_j^{-1}(\uuuu{v})&=& (v_1,...,v_{j-1}, v_{j+1},v_j-\frac{L(v_{j+1},v_j)}{L(v_{j+1},v_{j+1})} v_{j+1},v_{j+2},...,v_n). \end{eqnarray*} (ii) The action of $\Br_n\ltimes\{\pm 1\}^n$ in Lemma \ref{t3.10} on $(R^{(1)})^n$ does not generalize. In Lemma \ref{t2.6} (a) (v) we defined $s_a^{(1)}$ in the case of a general bilinear lattice only for $a\in R^{(0)}$. We defined $s_a^{(1)}$ for any $a\in R^{(1)}$ only in the case of a unimodular bilinear lattice, in Lemma \ref{t2.6} (d). (iii) On the other hand, part (a) (vii) of Lemma \ref{t2.6} says that the action in Lemma \ref{t3.10} for $k=1$ works for $\uuuu{v}\in \BB^{tri}$. But in the end this is just the action in (i) above. (iv) The action in (i) on $\BB^{tri}$ is compatible with an action on the set $T^{tri}_n$ of matrices in Lemma \ref{t2.3} (c), which generalizes the action in Lemma \ref{t3.13} (a).
Here $C_{n,j}(-S_{j,j+1})$ and $C_{n,j}^{-1}(S_{j,j+1})$ in Lemma \ref{t3.13} (a) have to be replaced by $C_{n,j}(-\frac{S_{j,j+1}}{S_{j,j}})$ and $C_{n,j}^{-1}(\frac{S_{j,j+1}}{S_{j+1,j+1}})$. \end{remarks} $\{\pm 1\}^n$ is the normal subgroup in the semidirect product $\Br_n\ltimes\{\pm 1\}^n$. Therefore, if $\Br_n\ltimes\{\pm 1\}^n$ acts on some set $\Sigma$, the group $\Br_n$ acts on the quotient $\Sigma/\{\pm 1\}^n$. Often it is good to consider this quotient and the action of $\Br_n$ on it. \begin{lemma}\label{t3.15} Let $(H_\Z,L)$ be a unimodular bilinear lattice of some rank $n\geq 2$. Fix $k\in\{0;1\}$. (a) The map \begin{eqnarray*} \pi_n^{(k)}:(R^{(k)})^n\to (\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n \subset (O^{(k)})^n,\quad \uuuu{v}\mapsto (s^{(k)}_{v_1},...,s^{(k)}_{v_n}), \end{eqnarray*} factors into maps \begin{eqnarray*} \begin{CD} (R^{(k)})^n @>>> (R^{(k)})^n/\{\pm 1\}^n @>{\pi_n^{(k)}/\{\pm 1\}^n}>> (\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n. \end{CD} \end{eqnarray*} $\Br_n$ acts on the quotient $(R^{(k)})^n/\{\pm 1\}^n$, and the second map $\pi_n^{(k)}/\{\pm 1\}^n$ is $\Br_n$ equivariant. The image of $\uuuu{v}$ in $(R^{(k)})^n/\{\pm 1\}^n$ is denoted by $\uuuu{v}/\{\pm 1\}^n$. (b) The second map $$\pi_n^{(k)}/\{\pm 1\}^n: (R^{(k)})^n/\{\pm 1\}^n\to (\{s_v^{(k)}\,|\, v\in R^{(k)}\})^n$$ in part (a) is a bijection if $k=0$ or if $k=1$ and $\Rad I^{(1)}=\{0\}$. (c) Consider the case $k=1$ and $\Rad I^{(1)}\supsetneqq\{0\}$. Consider a triangular basis $\uuuu{e}\in\BB^{tri}$ and the induced set $\Delta^{(1)}$ of odd vanishing cycles. The second map restricts to a $\Br_n$ equivariant bijection \begin{eqnarray*} (\Delta^{(1)})^n/\{\pm 1\}^n\to (\{s_v^{(1)}\,|\, v\in\Delta^{(1)}\})^n,\quad \uuuu{v}\mapsto (s_{v_1}^{(1)},...,s_{v_n}^{(1)}), \end{eqnarray*} if $(H_\Z,L,\uuuu{e})$ is either irreducible or reducible with at most one summand of type $A_1$. \end{lemma} {\bf Proof:} Part (a) is trivial. (b) Suppose $k=0$ or ($k=1$ and $\Rad I^{(1)}=\{0\}$). 
If $k=1$ and $v=0$ then $s_v^{(1)}=\id$. If $k=0$ and $v\in R^{(0)}$ or if $k=1$ and $v\in R^{(1)}-\{0\}$ then $v\notin\Rad I^{(k)}$ and $s_v^{(k)}\neq\id$. Then one can recover $\pm v$ from $s_v^{(k)}$, essentially because of \begin{eqnarray*} \{0\}\subsetneqq (s_v^{(k)}-\id)(H_\Z)\subset\Z v. \end{eqnarray*} (c) If $(H_\Z,L,\uuuu{e})$ is irreducible then $\Delta^{(1)}\cap \Rad I^{(1)}=\emptyset$, and the argument of part (b) holds. If $(H_\Z,L,\uuuu{e})$ is reducible with only one summand of type $A_1$ then $e_j\in \Rad I^{(1)}$ for a unique $j\in\{1,...,n\}$, and then $\Delta^{(1)}\cap \Rad I^{(1)}=\{\pm e_j\}$. Then $v\in\Delta^{(1)}$ satisfies $s_v^{(1)}=\id$ if and only if $v=\pm e_j$. So in this case, too, one can recover $\pm v$ from $s_v^{(1)}$ for any $v\in\Delta^{(1)}$.\hfill$\Box$ \begin{remarks}\label{t3.16} (i) Consider the action of $\Br_n$ on $T^{uni}_n(\Z)$. The elementary braid $\sigma_j$ maps $S=(S_{ij})\in T^{uni}_n(\Z)$ to \begin{eqnarray*} \sigma_j(S)&=& C_{n,j}(-S_{j,j+1})\cdot S\cdot C_{n,j}(-S_{j,j+1})\\ \textup{ with }&& \begin{pmatrix} \sigma_j(S)_{jj} & \sigma_j(S)_{j,j+1} \\ \sigma_j(S)_{j+1,j} & \sigma_j(S)_{j+1,j+1}\end{pmatrix} =\begin{pmatrix} 1 & -S_{j,j+1} \\ 0 & 1 \end{pmatrix}. \end{eqnarray*} (ii) Especially, in the case $n=2$, $\delta_1$, $\delta_2$ and $\sigma_1$ all map $S=\begin{pmatrix}1 & x\\0&1\end{pmatrix}$ to $\begin{pmatrix} 1& -x\\0&1\end{pmatrix}$, so the $\Br_2\ltimes\{\pm 1\}^2$ orbit equals the $\Br_2$ orbit and the $\langle \delta_1\rangle$ orbit and consists only of $S=\begin{pmatrix}1 & x\\0&1\end{pmatrix}$ and $\begin{pmatrix}1 &-x\\0&1\end{pmatrix}$. (iii) Under rather special circumstances, also in higher rank $n$ the sign group action is eaten up by the braid group action. Ebeling proved the following lemma. \end{remarks} \begin{lemma}\label{t3.17} Let $(H_\Z,L)$ be a unimodular bilinear lattice and $\uuuu{e}\in\BB^{tri}$ a triangular basis with $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$.
(a) \cite[proof of Prop. 2.2]{Eb83} Suppose $S_{j,j+1}=\varepsilon\in\{\pm 1\}$ for some $j\in\{1,...,n-1\}$. Then \begin{eqnarray*} \sigma_j^{3\varepsilon}(\uuuu{e})=\delta_j(\uuuu{e}),\quad \sigma_j^{-3\varepsilon}(\uuuu{e})=\delta_{j+1}(\uuuu{e}). \end{eqnarray*} (b) \cite[Prop. 2.2]{Eb83} Suppose $S_{ij}\in\{0,1,-1\}$ for all $i,j$ and that $(H_\Z,L)$ is irreducible. Then the orbit of $\uuuu{e}$ under $\Br_n$ coincides with the orbit of $\uuuu{e}$ under $\Br_n\ltimes\{\pm 1\}^n$. \end{lemma} \section{Distinguished bases} \label{s3.3} In this section we continue the discussion of the braid group action on $\BB^{tri}$. Now we fix one triangular basis $\uuuu{e}$. Definition \ref{t3.18} gives notations. The Remarks \ref{t3.19} pose questions on the orbit of $\uuuu{e}$ under $\Br_n\ltimes\{\pm 1\}^n$. The questions will guide the work which will be done in chapter \ref{s7}. \begin{definition}\label{t3.18} Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\geq 2$ with nonempty set $\BB^{tri}$ of triangular bases. Given a triangular basis $\uuuu{e}\in\BB^{tri}$ and $k\in\{0,1\}$, we are interested in the following orbits: \index{$\BB^{dist}$}\index{$\RR^{(k),dist}$}\index{$\SSS^{dist}$} \begin{eqnarray*} \textup{the set }\BB^{dist} &:=&\Br_n\ltimes\{\pm 1\}^n(\uuuu{e}) \textup{ of {\it \index{distinguished basis}distinguished bases}},\\ \textup{the set }\RR^{(k),dist} &:=&\Br_n(\pi_n^{(k)}(\uuuu{e})) \textup{ of {\it \index{distinguished tuple of reflections}distinguished tuples of}}\\ &&\textup{\it reflections or transvections}, \\ \textup{the set }\SSS^{dist} &:=&\Br_n\ltimes\{\pm 1\}^n(S) \textup{ of {\it \index{distinguished matrix}distinguished matrices},}\\ &&\textup{where }S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z), \end{eqnarray*} the quotient sets $\BB^{dist}/\{\pm 1\}^n$ and $\SSS^{dist}/\{\pm 1\}^n$, which are $\Br_n$ orbits.
We are also interested in the \index{stabilizer}stabilizers in $\Br_n$ of the points $\uuuu{e}/\{\pm 1\}^n\in \BB^{dist}/\{\pm 1\}^n$ and $S/\{\pm 1\}^n\in \SSS^{dist}/\{\pm 1\}^n$, namely the groups \index{$(\Br_n)_{\uuuu{e}/\{\pm 1\}^n},\ (\Br_n)_{S/\{\pm 1\}^n}$} \begin{eqnarray*} (\Br_n)_{\uuuu{e}/\{\pm 1\}^n} \subset (\Br_n)_{S/\{\pm 1\}^n} \subset \Br_n. \end{eqnarray*} \end{definition} \begin{remarks}\label{t3.19} In the situation of Definition \ref{t3.18} the following constraints on the set $\BB^{dist}$ of distinguished bases are clear from what has been said, \begin{eqnarray}\nonumber \BB^{dist}&\subset& \BB^{tri}\cap (\Delta^{(0)})^n \cap (\Delta^{(1)})^n \cap \{\uuuu{v}\in (H_\Z)^n\,|\, \sum_{i=1}^n\Z v_i=H_\Z\}\\ &&\cap \ (\pi_n\circ\pi_n^{(0)})^{-1}(-M) \cap (\pi_n\circ\pi_n^{(1)})^{-1}(M),\label{3.2} \end{eqnarray} where $\pi_n\circ\pi_n^{(k)}:(R^{(k)})^n\to O^{(k)},\quad \uuuu{v}\mapsto s_{v_1}^{(k)}...s_{v_n}^{(k)}$. An interesting problem is which, if any, of these constraints are sufficient to characterize the orbit $\BB^{dist}$. We are most interested in the question whether the inclusions \begin{eqnarray}\label{3.3} \BB^{dist}&\subset& \{\uuuu{v}\in (\Delta^{(0)})^n\,|\, \pi_n\circ\pi_n^{(0)}(\uuuu{v})=-M\},\\ \BB^{dist}&\subset& \{\uuuu{v}\in (\Delta^{(1)})^n\,|\, \pi_n\circ\pi_n^{(1)}(\uuuu{v})=M\},\label{3.4} \end{eqnarray} are equalities. We will study this problem systematically in chapter \ref{s7} for $n=2$ and $n=3$. In this section \ref{s3.3} we give some examples. \end{remarks} \begin{remarks}\label{t3.20} Consider a unimodular bilinear lattice $(H_\Z,L)$ of rank $n\geq 2$ and $k\in\{0;1\}$.
(i) Two basic invariants of the $\Br_n\ltimes \{\pm 1\}^n$ orbit of a tuple $\uuuu{v}\in (R^{(k)})^n$ are the product $(\pi_n\circ\pi_n^{(k)})(\uuuu{v}) =s_{v_1}^{(k)}...s_{v_n}^{(k)}\in O^{(k)}$ and the sublattice $\sum_{i=1}^n\Z v_i\subset H_\Z$, namely
\begin{eqnarray*}
(\pi_n\circ\pi_n^{(k)})(\sigma_j({\uuuu{v}})) =(\pi_n\circ\pi_n^{(k)})(\uuuu{v}) \quad\textup{and}\quad \sum_{i=1}^n\Z \sigma_j(\uuuu{v})_i=\sum_{i=1}^n\Z v_i.
\end{eqnarray*}
(ii) A triangular basis $\uuuu{e}\in \BB^{tri}$ induces the even and odd monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ and the sets $\Delta^{(0)}$ and $\Delta^{(1)}$ of even and odd vanishing cycles. Each distinguished basis $\uuuu{v}\in\BB^{dist}$ induces the same even and odd monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ and the same sets $\Delta^{(0)}$ and $\Delta^{(1)}$ of even and odd vanishing cycles. This is obvious from the action of $\Br_n\ltimes \{\pm 1\}^n$ on $\BB^{dist}$, the Hurwitz action of $\Br_n$ on $\RR^{(k),dist}$ and the definition of $\Gamma^{(k)}$ and $\Delta^{(k)}$. So they are invariants of the set $\BB^{dist}$ of distinguished bases. We did not mention the monodromy $M$, because it is by Theorem \ref{t2.7} an invariant of $(H_\Z,L)$ if $\BB^{tri}\neq\emptyset$, so it does not depend on the choice of a $\Br_n\ltimes\{\pm 1\}^n$ orbit in $\BB^{tri}$.

(iii) A matrix $S\in T^{uni}_n(\Z)$ determines a triple $(H_\Z,L,\uuuu{e})$ with $(H_\Z,L)$ a unimodular bilinear lattice and $\uuuu{e}$ a triangular basis with $S=L(\uuuu{e}^t,\uuuu{e})^t$ up to isomorphism. For a second matrix $\www{S}$ in the $\Br_n\ltimes\{\pm 1\}^n$ orbit of $S$, a triangular basis $\www{\uuuu{e}}$ of $(H_\Z,L)$ with $\www{S}=L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t$ exists (but it is not unique in general). Therefore the triple $(H_\Z,L,\BB^{dist})$ and all induced data depend only on the $\Br_n\ltimes\{\pm 1\}^n$ orbit $\SSS^{dist}$ of $S$.
These induced data comprise $R^{(0)}$, $\Gamma^{(0)}$, $\Gamma^{(1)}$, $\Delta^{(0)}$ and $\Delta^{(1)}$.

(iv) Choose a triangular basis $\uuuu{e}$. Then $s_\delta^{(k)}\in \Gamma^{(k)}$ for $\delta\in\Delta^{(k)}$. This follows from the definition of a vanishing cycle and from formula \eqref{3.2}. It also implies that the set $(\Delta^{(k)})^n$ is invariant under the action of $\Br_n\ltimes\{\pm 1\}^n$ on $(R^{(k)})^n$. \end{remarks}

\begin{remarks}\label{t3.21}
(i) Consider a unimodular bilinear lattice which is not a lattice of type $A_1^n$ and a fixed triangular basis $\uuuu{e}$. Then $\Delta^{(0)}\subset R^{(0)}$, but $\Delta^{(1)}\not\subset R^{(0)}$ by Corollary \ref{t6.22} (a). Nevertheless $\BB^{dist}\subset (\Delta^{(0)}\cap\Delta^{(1)})^n$, so many odd vanishing cycles do not turn up in bases in the braid group orbit of $\uuuu{e}$, i.e. in distinguished bases.

(ii) In all cases except $A_1^n$ we have $\Delta^{(1)}\not\subset\Delta^{(0)}$ because $\Delta^{(1)}\not\subset R^{(0)}$. In some cases $\Delta^{(0)}\subset \Delta^{(1)}$, in many cases $\Delta^{(0)}\not\subset \Delta^{(1)}$. See Corollary \ref{t6.22}. \end{remarks}

Given a unimodular bilinear lattice $(H_\Z,L)$, any element $g\in O^{(k)}$ acts on $(R^{(k)})^n$ by $g(\uuuu{v}):=(g(v_1),...,g(v_n))$. Part (a) of the next Lemma \ref{t3.22} says in particular that this action commutes with the action of $\Br_n\ltimes\{\pm 1\}^n$ on $(R^{(k)})^n$. Part (b) gives implications which will be used to construct the interesting Examples \ref{t3.23} (i) and (ii).

\begin{lemma}\label{t3.22}
Let $(H_\Z,L)$ be a unimodular bilinear lattice. Fix $k\in \{0,1\}$.

(a) If $g\in O^{(k)}$ and $(\alpha,\varepsilon)\in \Br_n\ltimes \{\pm 1\}^n$ then for $\uuuu{v}\in (R^{(k)})^n$
\begin{eqnarray*}
g((\alpha,\varepsilon)(\uuuu{v})) &=& (\alpha,\varepsilon)(g(\uuuu{v})),\\
(\pi_n\circ\pi_n^{(k)})(g(\uuuu{v})) &=& g\circ (\pi_n\circ \pi_n^{(k)})(\uuuu{v})\circ g^{-1}.
\end{eqnarray*} Here $g(\uuuu{v})$ means $(g(v_1),...,g(v_n))$, and similarly for $g((\alpha,\varepsilon)(\uuuu{v}))$. (b) If $g\in G_\Z^{(k)}-G_\Z$ and $\uuuu{e}\in\BB^{tri}$ then \begin{eqnarray*} g(\uuuu{e})&\notin& \BB^{tri},\\ \textup{so especially}\quad g(\uuuu{e})&\notin& \Br_n\ltimes\{\pm 1\}^n(\uuuu{e}),\\ \textup{but}\quad (\pi_n\circ\pi_n^{(k)})(g(\uuuu{e}))&=&(\pi_n\circ\pi_n^{(k)}) (\uuuu{e})=(-1)^{k+1}M. \end{eqnarray*} \end{lemma} {\bf Proof} (a) $g((\id,\varepsilon)(\uuuu{v})) =(\id,\varepsilon)(g(\uuuu{v}))$ is trivial. Consider $(\alpha,\varepsilon)=(\sigma_j, (1,...,1))=\sigma_j$. \begin{eqnarray*} g(\sigma_j(\uuuu{v})) &=&(g(v_1),...,g(v_{j-1}),gs_{v_j}^{(k)}(v_{j+1}), g(v_j),g(v_{j+2}),...,g(v_n))\\ &=& (g(v_1),...,g(v_{j-1}),s_{g(v_j)}^{(k)}(gv_{j+1}), g(v_j),g(v_{j+2}),...,g(v_n))\\ &=&\sigma_j(g(\uuuu{v})), \end{eqnarray*} because of $gs_{v_j}^{(k)}g^{-1}=s_{g(v_j)}^{(k)}$ (Lemma \ref{t2.2} (c)). \begin{eqnarray*} (\pi_n\circ\pi_n^{(k)})(g(\uuuu{v})) &=& s_{g(v_1)}^{(k)}...s_{g(v_n)}^{(k)} =\bigl(gs_{v_1}^{(k)}g^{-1}\bigr)... \bigl(gs_{v_n}^{(k)}g^{-1}\bigr)\\ &=& gs_{v_1}^{(k)}...s_{v_n}^{(k)}g^{-1} =g\circ (\pi_n\circ \pi_n^{(k)})(\uuuu{v})\circ g^{-1}. \end{eqnarray*} (b) Suppose $g\in G_\Z^{(k)}-G_\Z$ and $\uuuu{e}\in\BB^{tri}$. Then $gMg^{-1}=M$ and \begin{eqnarray*} (\pi_n\circ \pi_n^{(k)})(g(\uuuu{e})) &\stackrel{(a)}{=}& g\circ(\pi_n\circ\pi_n^{(k)})(\uuuu{e})\circ g^{-1}\\ &=&g\bigl( (-1)^{k+1}M\bigr)g^{-1} =(-1)^{k+1}M=(\pi_n\circ\pi_n^{(k)})(\uuuu{e}). 
\end{eqnarray*}
Furthermore $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$, $I^{(k)}=L^t+(-1)^{k}L$ and
\begin{eqnarray*}
I^{(k)}(g(\uuuu{e})^t,g(\uuuu{e})) &=& I^{(k)}(\uuuu{e}^t,\uuuu{e})=S+(-1)^{k}S^t \quad\textup{because }g\in G_\Z^{(k)},\\
L(g(\uuuu{e})^t,g(\uuuu{e}))^t&\neq& L(\uuuu{e}^t,\uuuu{e})^t=S \quad\textup{because }g\notin G_\Z,\\
\textup{so }L(g(\uuuu{e})^t,g(\uuuu{e}))^t&\notin& T^{uni}_n(\Z),
\end{eqnarray*}
so $g(\uuuu{e})\notin \BB^{tri}$, so $g(\uuuu{e})\notin \Br_n\ltimes \{\pm 1\}^n(\uuuu{e})$. \hfill$\Box$

\begin{examples}\label{t3.23}
Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\geq 2$, $\uuuu{e}\in \BB^{tri}$ a triangular basis, $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$ and $k\in\{0,1\}$.

(i) $k=1$, $n=3$, $S=S(-3,3,-3)=S(\P^2)$, the odd case $\P^2$. To carry out this example we need two results which will be proved later, Theorem \ref{t5.14} (b) and Theorem \ref{t6.21} (h). By Theorem \ref{t5.14} (b) (i) there is a root $M^{root}\in G_\Z$ of the monodromy with $(M^{root})^3=M$ and
\begin{eqnarray*}
G_\Z^{(1)}&=&\{\pm (M^{root})^l(\id +a(M^{root}-\id)^2)\,|\, l,a\in\Z\}\\
&\supsetneqq& G_\Z=\{\pm (M^{root})^l\,|\, l\in \Z\}.
\end{eqnarray*}
Also, here $\Rad I^{(1)}=\Z f_3$ with $f_3=e_1+e_2+e_3$. The shape of $M^{root}$ in Theorem \ref{t5.14} (b) shows
\begin{eqnarray*}
(M^{root}-\id)^2(\uuuu{e})=\uuuu{e} (\begin{pmatrix}3&-3&1\\1&0&0\\0&1&0\end{pmatrix}-E_3)^2 =f_3(1,-2,1).
\end{eqnarray*}
For example $g:=\id +(M^{root}-\id)^2\in G_\Z^{(1)}-G_\Z$ satisfies
\begin{eqnarray*}
g(\uuuu{e})&=&\uuuu{e}+f_3(1,-2,1),\\
I^{(1)}(g(\uuuu{e})^t,g(\uuuu{e})) &=& I^{(1)}(\uuuu{e}^t,\uuuu{e})=S-S^t =\begin{pmatrix}0&-3&3\\3&0&-3\\-3&3&0\end{pmatrix},\\
L(g(\uuuu{e})^t,g(\uuuu{e}))^t &=&\begin{pmatrix}3&-7&5\\-4&9&-7\\2&-4&3\end{pmatrix},\\
s_{g(e_1)}^{(1)}s_{g(e_2)}^{(1)}s_{g(e_3)}^{(1)} &=&M,\\
g(e_1),g(e_2),g(e_3)&\notin& R^{(0)},\quad\textup{because } 3\neq 1\textup{ and }9\neq 1,\\
g(e_1),g(e_2),g(e_3)&\notin& \Delta^{(1)}.
\end{eqnarray*}
The last claim $g(e_j)\notin \Delta^{(1)}$ holds because $g(e_j)\neq e_j$ but $g(e_j)\in e_j+\Z f_3$, and because the projection $H_\Z\to H_\Z/\Z f_3$ restricts to an injective map $\Delta^{(1)}\to H_\Z/\Z f_3$ by Theorem \ref{t6.21} (h).

(ii) $k=0$, $n=3$, $S=S(-2,2,-2)=S(\HH_{1,2})$, the even case $\HH_{1,2}$. To carry out this example we need two results which will be proved later, Theorem \ref{t5.14} (a) and Theorem \ref{t6.14} (e). Recall Theorem \ref{t5.14} (a),
\begin{eqnarray*}
(H_\Z,L)&=& (H_{\Z,1},L_1)\oplus (H_{\Z,2},L_2)\\
\textup{with }H_{\Z,1}&=& \Z f_1\oplus \Z f_2=\ker\Phi_2(M),\quad H_{\Z,2}=\Z f_3=\ker \Phi_1(M),\\
(f_1,f_2,f_3)&=& \uuuu{e} \begin{pmatrix}1&0&1\\1&1&1\\0&1&1\end{pmatrix},\\
G_\Z^{(0)}&=& G_{\Z,1}^{(0)}\times G_{\Z,2}^{(0)} \supsetneqq G_\Z=G_{\Z,1}\times G_{\Z,2}\\
\textup{with } G_{\Z,1}^{(0)}&=&\Aut(H_{\Z,1})\supsetneqq G_{\Z,1}=\{g\in\Aut(H_{\Z,1})\,|\, \det g=1\},\\
G_{\Z,2}^{(0)}&=&G_{\Z,2}=\{\pm\id|_{\Z f_3}\}.
\end{eqnarray*}
For example $g:=((f_1,f_2,f_3)\mapsto (f_1,-f_2,f_3)) \in G_\Z^{(0)}-G_\Z$ satisfies
\begin{eqnarray*}
g(\uuuu{e})&=& g(f_3-f_2,-f_3+f_1+f_2,f_3-f_1)\\
&=& (f_3+f_2,-f_3+f_1-f_2,f_3-f_1)\\
&=&\uuuu{e}+f_2(2,-2,0),\\
s_{g(e_1)}^{(0)}s_{g(e_2)}^{(0)}s_{g(e_3)}^{(0)}&=&-M,
\end{eqnarray*}
\begin{eqnarray*}
I^{(0)}(g(\uuuu{e})^t,g(\uuuu{e})) &=&I^{(0)}(\uuuu{e}^t,\uuuu{e})=S+S^t,\\
L(g(\uuuu{e})^t,g(\uuuu{e}))^t&=& \begin{pmatrix}1&0&0\\-2&1&0\\2&-2&1\end{pmatrix} =S^t\neq S =L(\uuuu{e}^t,\uuuu{e})^t,\\
g(\uuuu{e})&\notin& \BB^{tri}, \textup{ so }g(\uuuu{e})\notin \BB^{dist},\\
g(e_1),g(e_2),g(e_3)&\in& \Delta^{(0)}.
\end{eqnarray*}
The last claim $g(e_j)\in\Delta^{(0)}$ holds because of Theorem \ref{t6.14} (e). Therefore
\begin{eqnarray*}
g(\uuuu{e})\in \{\uuuu{v}\in (\Delta^{(0)})^3\,|\, (\pi_3\circ\pi_3^{(0)})(\uuuu{v})=-M, \sum_{i=1}^3\Z v_i=H_\Z\}.
\end{eqnarray*}
Especially, here the inclusion in \eqref{3.3} is not an equality.
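The matrices $L(g(\uuuu{e})^t,g(\uuuu{e}))^t$ computed in parts (i) and (ii) can be confirmed by machine: if $g(\uuuu{e})=\uuuu{e}\,C$ for a matrix $C$, then $L(g(\uuuu{e})^t,g(\uuuu{e}))^t=C^tSC$. The following short script (a sketch in Python with sympy; an illustration only, with variable names of our choosing, not part of the argument) carries out this check:

```python
# Check of the matrices L(g(e)^t, g(e))^t in Examples (i) and (ii):
# for a base change g(e) = e*C one has L(g(e)^t, g(e))^t = C^t * S * C.
import sympy as sp

# (i): S = S(-3,3,-3), g(e) = e + f_3*(1,-2,1) with f_3 = e_1+e_2+e_3
S1 = sp.Matrix([[1, -3, 3], [0, 1, -3], [0, 0, 1]])
C1 = sp.eye(3) + sp.Matrix([1, 1, 1]) * sp.Matrix([[1, -2, 1]])
assert C1.T * S1 * C1 == sp.Matrix([[3, -7, 5], [-4, 9, -7], [2, -4, 3]])

# (ii): S = S(-2,2,-2), g(e) = e + f_2*(2,-2,0) with f_2 = e_2+e_3
S2 = sp.Matrix([[1, -2, 2], [0, 1, -2], [0, 0, 1]])
C2 = sp.eye(3) + sp.Matrix([0, 1, 1]) * sp.Matrix([[2, -2, 0]])
assert C2.T * S2 * C2 == S2.T   # the result is S^t, not S
```

In part (ii) the result is $S^t\neq S$, which is not in $T^{uni}_3(\Z)$, in accordance with $g(\uuuu{e})\notin\BB^{tri}$.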
In a certain sense, this example is the worst case within all cases $S(\uuuu{x})$ with $k=0$, $n=3$ and all eigenvalues of $M$ roots of unity. See Theorem \ref{t7.3} (b).

(iii) $n\in\N$, $S=E_n$, the even and odd case $A_1^n$. Compare Lemma \ref{t2.12}.
\begin{eqnarray*}
\Delta^{(0)}&=& R^{(0)}=\Delta^{(1)}=\{\pm e_1,...,\pm e_n\},\\
M&=&\id,\\
\BB^{dist}&=& \{(\varepsilon_1e_{\sigma(1)},..., \varepsilon_ne_{\sigma(n)})\,|\, \varepsilon_1,...,\varepsilon_n \in \{\pm 1\},\sigma\in S_n\}\\
&=&\{\uuuu{v}\in(\Delta^{(0)})^n\,|\, (\pi_n\circ\pi_n^{(0)})(\uuuu{v})=-M=-\id\}.
\end{eqnarray*}
The last equality follows from $-M=-\id$ and
\begin{eqnarray*}
s_{e_i}^{(0)}|_{\Z e_i}=-\id,\quad s_{e_i}^{(0)}|_{\sum_{j\neq i}\Z e_j}=\id.
\end{eqnarray*}
Here the inclusion in \eqref{3.3} is an equality. By contrast, the inclusion in \eqref{3.4} is not an equality: the set
\begin{eqnarray*}
\{\uuuu{v}\in(\Delta^{(1)})^n\,|\, (\pi_n\circ\pi_n^{(1)})(\uuuu{v})=M=\id\}
\end{eqnarray*}
is much bigger than $\BB^{dist}$ if $n\geq 2$; it consists of many $\Br_n\ltimes \{\pm 1\}^n$ orbits. Each of these orbits contains a unique one of the following tuples,
\begin{eqnarray*}
(e_{l(1)},e_{l(2)},...,e_{l(n)})\quad\textup{with}\quad 1\leq l(1)\leq l(2)\leq ...\leq l(n)\leq n.
\end{eqnarray*}
This follows from $s_{e_i}^{(1)}=\id$. The braids act just by permutations on the $\Br_n$ orbit $\BB^{dist}/\{\pm 1\}^n$. Therefore the stabilizer of $\uuuu{e}/\{\pm 1\}^n$ is the group $\Br_n^{pure}$ of pure braids (see Remark \ref{t8.2} (vii) for this group). The stabilizer of $S/\{\pm 1\}^n$ is the whole group $\Br_n$.

(iv) Reconsider Example \ref{t3.4}, so a case where
\begin{eqnarray*}
\Gamma^{(k)}=\left\{\begin{array}{ll} G^{fCox,n}&\textup{ with generators }s_{e_1}^{(0)},..., s_{e_n}^{(0)}\textup{ if }k=0,\\ G^{free,n}&\textup{ with generators }s_{e_1}^{(1)},..., s_{e_n}^{(1)}\textup{ if }k=1. \end{array}\right.
\end{eqnarray*}
In the notation of Definition \ref{t3.1} $\Delta(G^{fCox,n})=\{s_\delta^{(0)}\,|\,\delta\in\Delta^{(0)}\}$ if $k=0$ and $\Delta(G^{free,n})=\{s_\delta^{(1)}\,|\,\delta\in\Delta^{(1)}\}$ if $k=1$. By Theorem \ref{t3.2}
\begin{eqnarray*}
\RR^{(k),dist}=\{(s_{v_1}^{(k)},...,s_{v_n}^{(k)})\,|\, \uuuu{v}\in(\Delta^{(k)})^n,s_{v_1}^{(k)}...s_{v_n}^{(k)} =(-1)^{k+1}M\}.
\end{eqnarray*}
Furthermore, the shape of $\Gamma^{(k)}$ shows that $(H_\Z,L,\uuuu{e})$ is irreducible. Lemma \ref{t3.15} (b) or (c) applies. Therefore
\begin{eqnarray*}
\BB^{dist}=\{\uuuu{v}\in (\Delta^{(k)})^n\,|\, s_{v_1}^{(k)}...s_{v_n}^{(k)}=(-1)^{k+1}M\}.
\end{eqnarray*}
So in this case only the constraints $\uuuu{v}\in(\Delta^{(k)})^n$ and $s_{v_1}^{(k)}...s_{v_n}^{(k)}=(-1)^{k+1}M$ in the Remarks \ref{t3.19} are needed in order to characterize the orbit $\BB^{dist}$. The inclusions in \eqref{3.3} and \eqref{3.4} are here equalities.

By Theorem \ref{t3.2} the stabilizer of $\pi_n^{(k)}(\uuuu{e})$ and of $\uuuu{e}/\{\pm 1\}^n$ is $\{\id\}\subset\Br_n$. The size of the stabilizer $(\Br_n)_{S/\{\pm 1\}^n}$ depends on the case. Theorem \ref{t7.11} gives cases with $n=3$ where it is $\langle \sigma_2\sigma_1\rangle$ or $\langle \sigma_2\sigma_1^2\rangle$ or $\langle \sigma^{mon}\rangle$.

(v) Suppose $S_{ij}\leq 0$ for $i<j$, so $(H_\Z,L,\uuuu{e})$ is a generalized Cartan lattice as in Theorem \ref{t3.7}. By Theorem \ref{t3.7} (b)
\begin{eqnarray*}
\RR^{(0),dist}&=& \{(g_1,...,g_n)\in\bigl(\{s_\delta^{(0)}\,|\, \delta\in\Delta^{(0)}\}\bigr)^n\,|\, g_1...g_n=-M\}.
\end{eqnarray*}
With Lemma \ref{t3.15} (b) this implies
\begin{eqnarray*}
\BB^{dist}&=& \{\uuuu{v}\in (\Delta^{(0)})^n\,|\, s_{v_1}^{(0)}...s_{v_n}^{(0)}=-M\}.
\end{eqnarray*}
Also here only the two constraints $\uuuu{v}\in(\Delta^{(0)})^n$ and $s_{v_1}^{(0)}...s_{v_n}^{(0)}=-M$ in the Remarks \ref{t3.19} are needed in order to characterize the orbit $\BB^{dist}$. The inclusion in \eqref{3.3} is here an equality.
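Returning to the case $A_1^n$ of part (iii), the description of $\BB^{dist}$ can also be confirmed for small $n$ by enumerating the orbit by machine. The following sketch (Python; the helper names \texttt{refl} and \texttt{neighbors} are ours, and the script is only an illustration for $n=2$, not part of the argument) computes the orbit of $(e_1,e_2)$ under $\Br_2\ltimes\{\pm 1\}^2$:

```python
# Enumeration of B^dist in the case A_1^2 of part (iii):
# in A_1^n the even reflection is s_v(x) = x - I(x,v)*v with I = 2*E_n,
# and sigma_1 acts on tuples by (v1, v2) -> (s_{v1}(v2), v1).
def refl(v, x):
    ip = 2 * sum(a * b for a, b in zip(x, v))
    return tuple(xi - ip * vi for xi, vi in zip(x, v))

def neighbors(t):
    v1, v2 = t
    yield (refl(v1, v2), v1)              # sigma_1
    yield (v2, refl(v2, v1))              # sigma_1^{-1} (refl is an involution)
    yield (tuple(-a for a in v1), v2)     # delta_1
    yield (v1, tuple(-a for a in v2))     # delta_2

e1, e2 = (1, 0), (0, 1)
orbit, todo = {(e1, e2)}, [(e1, e2)]
while todo:
    for t in neighbors(todo.pop()):
        if t not in orbit:
            orbit.add(t)
            todo.append(t)

# the 8 tuples (eps_1*e_{sigma(1)}, eps_2*e_{sigma(2)})
expected = {(tuple(s1 * a for a in u), tuple(s2 * a for a in w))
            for s1 in (1, -1) for s2 in (1, -1)
            for (u, w) in [(e1, e2), (e2, e1)]}
assert orbit == expected
```

The enumeration returns exactly the $8$ tuples $(\varepsilon_1e_{\sigma(1)},\varepsilon_2e_{\sigma(2)})$, in agreement with part (iii).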
\end{examples}

\section{From $\Br_n\ltimes\{\pm 1\}^n$ to $G_\Z$}
\label{s3.4}

Definition \ref{t3.24} gives a map $Z:\Br_n\ltimes\{\pm 1\}^n\to \Aut(H_\Z)$, which restricts to a group antihomomorphism $Z:(\Br_n\ltimes\{\pm 1\}^n)_S\to G_\Z$. The definition and the restriction to a group antihomomorphism are classical. Lemma \ref{t3.25} provides basic facts around this map. Also Theorem \ref{t3.26} (b) is classical. It states that $Z((\delta_n^{1-k}\sigma^{root})^n)=(-1)^{k+1}M$. Theorem \ref{t3.26} (c) gives a condition under which $Z(\delta_n^{1-k}\sigma^{root})\in G_\Z$. Then this is an $n$-th root of $(-1)^{k+1}M$. Theorem \ref{t3.26} (c) subsumes Theorem 4.5 (a)+(b) in \cite{BH20}. It gives more, because no braids are considered in \cite{BH20}. The braids allow a new and more elegant proof than the one in \cite{BH20}. Theorem \ref{t3.26} (c) will be used in the discussion of the groups $G_\Z$ in many cases in chapter \ref{s5}.

\begin{definition}\label{t3.24}
Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\geq 2$ with a triangular basis $\uuuu{e}$. For $(\alpha,\varepsilon)\in\Br_n\ltimes \{\pm 1\}^n$ define an automorphism $Z((\alpha,\varepsilon))\in \Aut(H_\Z)$ by the following action on the $\Z$-basis $\uuuu{e}$ of $H_\Z$, \index{$Z$}
\begin{eqnarray*}
Z:\Br_n\ltimes\{\pm 1\}^n&\to& \Aut(H_\Z),\\
(\alpha,\varepsilon)&\mapsto& Z((\alpha,\varepsilon)) =(\uuuu{e}\to (\alpha,\varepsilon)(\uuuu{e})).
\end{eqnarray*}
\end{definition}

\begin{lemma}\label{t3.25}
Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\geq 2$ with a triangular basis $\uuuu{e}$.

(a) For $(\alpha,\varepsilon)\in\Br_n\ltimes\{\pm 1\}^n$
\begin{eqnarray*}
Z((\alpha,\varepsilon))\in G_\Z \iff Z((\alpha,\varepsilon))\in G_\Z^{(0)} \iff Z((\alpha,\varepsilon))\in G_\Z^{(1)}.
\end{eqnarray*} (b) The stabilizer of $S$ in $\Br_n\ltimes\{\pm 1\}^n$ is \begin{eqnarray*} (\Br_n\ltimes\{\pm 1\}^n)_S=\{(\alpha,\varepsilon) \in\Br_n\ltimes\{\pm 1\}^n\,|\, Z((\alpha,\varepsilon))\in G_\Z\}. \end{eqnarray*} (c) The restriction of the map $Z$ to the stabilizer $(\Br_n\ltimes\{\pm 1\}^n)_S$ is also denoted $Z$, \begin{eqnarray*} Z:(\Br_n\ltimes \{\pm 1\}^n)_S\to G_\Z. \end{eqnarray*} It is a group antihomomorphism with kernel the stabilizer $(\Br_n\ltimes\{\pm 1\}^n)_{\uuuu{e}}$ of $\uuuu{e}$. (d) The triple $(H_\Z,L,\BB^{dist})$ with $\BB^{dist}=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})$ the set of distinguished bases (Definition \ref{t3.18}) gives rise to the subgroup $G_\Z^{\BB}$ of $G_\Z$, \index{$G_\Z^{\BB}$} \begin{eqnarray*} G_\Z^{\BB}:=\Aut(H_\Z,L,\BB^{dist}) :=\{g\in G_\Z\,|\, g(\BB^{dist})=\BB^{dist}\}\subset G_\Z. \end{eqnarray*} It does not depend on $\uuuu{e}$, but only on the triple $(H_\Z,L,\BB^{dist})$. Then $G_\Z^\BB$ is the image of $(\Br_n\ltimes\{\pm 1\}^n)_S$ under $Z$ in $G_\Z$, \begin{eqnarray*} G_\Z^{\BB}=Z((\Br_n\ltimes\{\pm 1\}^n)_S)\subset G_\Z. \end{eqnarray*} (e) The subgroup $Z((\{\pm 1\}^n)_S)$ of $G_\Z^{\BB}$ is a normal subgroup of $G_\Z^{\BB}$, and the group antihomomorphism $Z$ in part (c) induces a group antihomomorphism \begin{eqnarray*} \oooo{Z}:(\Br_n)_{S/\{\pm 1\}^n}\to G_\Z^{\BB}/Z((\{\pm 1\}^n)_S) \end{eqnarray*} with kernel $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$, which is isomorphic to $(\Br_n\ltimes \{\pm 1\}^n)_{\uuuu{e}}$. (f) Suppose that $(H_\Z,L,\uuuu{e})$ is irreducible (Definition \ref{t2.10} (a)). Then \begin{eqnarray*} (\{\pm 1\}^n)_S &=&\{(1,...,1),(-1,...,-1)\},\\ Z((-1,...,-1))&=&-\id\in G_\Z,\\ Z((\{\pm 1\}^n)_S)&=& \{\pm \id\}. \end{eqnarray*} $\{\pm \id\}$ is a normal subgroup of $G_\Z$. 
The group antihomomorphism $\oooo{Z}$ in part (e) becomes
\begin{eqnarray*}
\oooo{Z}:(\Br_n)_{S/\{\pm 1\}^n}\to G_\Z/\{\pm \id\}
\end{eqnarray*}
with kernel $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ and image $G_\Z^{\BB}/\{\pm \id\}$. \end{lemma}

{\bf Proof:} (a) Fix $k\in\{0,1\}$ and $(\alpha,\varepsilon)\in \Br_n\ltimes\{\pm 1\}^n$. Then
\begin{eqnarray*}
Z((\alpha,\varepsilon))\in G_\Z^{(k)} \iff I^{(k)}((\alpha,\varepsilon)(\uuuu{e})^t, (\alpha,\varepsilon)(\uuuu{e}))=S+(-1)^{k}S^t.
\end{eqnarray*}
If this equality holds then $I^{(k)}=L^t+(-1)^{k}L$ and $L((\alpha,\varepsilon)(\uuuu{e})^t, (\alpha,\varepsilon)(\uuuu{e}))^t\in T^{uni}_n(\Z)$ imply $L((\alpha,\varepsilon)(\uuuu{e})^t, (\alpha,\varepsilon)(\uuuu{e}))^t=S$, so $Z((\alpha,\varepsilon))\in G_\Z$.

(b) Trivial with the compatibility of the actions of $\Br_n\ltimes\{\pm 1\}^n$ on $\BB^{dist}$ and on $T^{uni}_n(\Z)$ in Theorem \ref{t3.4} (d).

(c) The following calculation shows that the map $Z:(\Br_n\ltimes \{\pm 1\}^n)_S\to G_\Z$ is a group antihomomorphism,
\begin{eqnarray*}
Z((\alpha,\varepsilon)(\beta,\www{\varepsilon}))(\uuuu{e}) &=& (\alpha,\varepsilon)(\beta,\www{\varepsilon})(\uuuu{e})\\
&=& (\alpha,\varepsilon)(Z((\beta,\www{\varepsilon}))(e_1),..., Z((\beta,\www{\varepsilon}))(e_n))\\
&\stackrel{\textup{\ref{t3.22} (a)}}{=}& Z((\beta,\www{\varepsilon}))(\alpha,\varepsilon)(\uuuu{e}) =Z((\beta,\www{\varepsilon}))Z((\alpha,\varepsilon))(\uuuu{e}).
\end{eqnarray*}
It is trivial that the kernel of this map $Z$ is $(\Br_n\ltimes\{\pm 1\}^n)_{\uuuu{e}}$.

(d) $G_\Z^{\BB}\subset Z((\Br_n\ltimes\{\pm 1\}^n)_S)$: Consider $g\in G_\Z^{\BB}$. Then $g(\uuuu{e})\in\BB^{dist}$ comes with the same matrix $S$ as $\uuuu{e}$ because $g$ respects $L$. There is a pair $(\alpha,\varepsilon)\in \Br_n\ltimes\{\pm 1\}^n$ with $Z((\alpha,\varepsilon))(\uuuu{e}) =(\alpha,\varepsilon)(\uuuu{e})=g(\uuuu{e})$. Therefore $g=Z((\alpha,\varepsilon))$.
$G_\Z^{\BB}\supset Z((\Br_n\ltimes\{\pm 1\}^n)_S)$: Consider $(\alpha,\varepsilon)\in (\Br_n\ltimes\{\pm 1\}^n)_S$, $(\beta,\www{\varepsilon})\in\Br_n\ltimes\{\pm 1\}^n$ and $\uuuu{v}:=(\beta,\www{\varepsilon})(\uuuu{e})$. We have to show $Z((\alpha,\varepsilon))(\uuuu{v})\in \BB^{dist}$. This is rather obvious with the commutativity of the actions of $O^{(k)}$ and $\Br_n\ltimes\{\pm 1\}^n$ in Lemma \ref{t3.22} (a),
\begin{eqnarray*}
Z((\alpha,\varepsilon))(\uuuu{v}) &=&Z((\alpha,\varepsilon))(\beta,\www{\varepsilon})(\uuuu{e})\\
&\stackrel{\ref{t3.22} (a)}{=}& (\beta,\www{\varepsilon})Z((\alpha,\varepsilon))(\uuuu{e})\\
&=& (\beta,\www{\varepsilon})(\alpha,\varepsilon)(\uuuu{e}).
\end{eqnarray*}

(e) Elementary group theory gives the group isomorphisms
\begin{eqnarray*}
(\Br_n)_{S/\{\pm 1\}^n}&\cong& \frac{(\Br_n\ltimes\{\pm 1\}^n)_S} {(\{\pm 1\}^n)_S},\\
(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}&\cong& \frac{(\Br_n\ltimes\{\pm 1\}^n)_{\uuuu{e}}} {(\{\pm 1\}^n)_{\uuuu{e}}} \cong (\Br_n\ltimes\{\pm 1\}^n)_{\uuuu{e}}.
\end{eqnarray*}
Here use $(\{\pm 1\}^n)_{\uuuu{e}}=\{(1,...,1)\}$. $\{\pm 1\}^n$ is a normal subgroup of $\Br_n\ltimes\{\pm 1\}^n$. Therefore $(\{\pm 1\}^n)_S$ is a normal subgroup of $(\Br_n\ltimes\{\pm 1\}^n)_S$. Therefore $Z((\{\pm 1\}^n)_S)$ is a normal subgroup of $G_\Z^{\BB}$. Therefore $\oooo{Z}$ is well defined. Its kernel is still $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n} \cong (\Br_n\ltimes \{\pm 1\}^n)_{\uuuu{e}}$ because $(\Br_n\ltimes \{\pm 1\}^n)_{\uuuu{e}}\cap (\{\pm 1\}^n)_{S}=(\{\pm 1\}^n)_{\uuuu{e}}=\{(1,...,1)\}$.

(f) If $(H_\Z,L,\uuuu{e})$ is irreducible, then the following graph is connected: its vertices are $e_1,...,e_n$, and it has an edge between $e_i$ and $e_j$ for $i<j$ if $S_{ij}(=L(e_j,e_i))\neq 0$. Therefore $(\{\pm 1\}^n)_S=\{(1,...,1),(-1,...,-1)\}$. Everything else follows from this and from parts (d) and (e). \hfill$\Box$

\bigskip
The antihomomorphism $Z:(\Br_n\ltimes \{\pm 1\}^n)_S\to G_\Z$ is not always surjective, but it is in many cases.
See Theorem \ref{t3.28}, the Remarks \ref{t3.29} and the Remarks \ref{t9.1}. Theorem \ref{t3.26} (b) writes $(-1)^{k+1}M$ as an image of a braid by $Z$. Theorem \ref{t3.26} (c) gives conditions under which it has an $n$-th root which is also an image of a braid by $Z$.

\begin{theorem}\label{t3.26}
Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\geq 2$ with a triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. Fix $k\in\{0,1\}$. Recall from section \ref{s3.1}
\begin{eqnarray*}
\sigma^{root}&:=&\sigma_{n-1}\sigma_{n-2}...\sigma_2\sigma_1 \in \Br_n,\\
\sigma^{mon}&:=&(\sigma^{root})^n,\\
\textup{center}(\Br_n)&=&\langle \sigma^{mon}\rangle.
\end{eqnarray*}

(a)
\begin{eqnarray*}
\delta_n^{1-k}\sigma^{root}(\uuuu{e}) &=& Z(\delta^{1-k}_n\sigma^{root})(\uuuu{e}) \\
&=&(s_{e_1}^{(k)}(e_2),s_{e_1}^{(k)}(e_3),..., s_{e_1}^{(k)}(e_n),s_{e_1}^{(k)}(e_1))\\
&=& \uuuu{e}\cdot R
\end{eqnarray*}
\begin{eqnarray*}
\textup{with }R&:=&\left(\begin{array}{ccc|c} -q_{n-1}& \dots & -q_1 & -q_0 \\ \hline & & & \\ & E_{n-1} & & \\ & & & \end{array}\right)\\
\textup{so }R_{ij}&=&\left\{\begin{array}{ll} -q_{n-j}& \textup{if }i=1, \\ \delta_{i-1,j}& \textup{if }i\geq 2, \end{array}\right.
\end{eqnarray*}
where $q_0=(-1)^k$, $q_{n+1-j}=S_{1j}$ for $j\in\{2,...,n\}$.

(b)
\begin{eqnarray*}
Z((\delta^{1-k}_n\sigma^{root})^n)&=&(-1)^{k+1}M,\\
\textup{so especially } (\delta^{1-k}_n\sigma^{root})^n&\in& (\Br_n\ltimes\{\pm 1\}^n)_S.
\end{eqnarray*}

(c) Write $q_0=(-1)^k$, $q_{n+1-j}=S_{1j}$ for $j\in\{2,...,n\}$ as in part (a) and additionally $q_n:=1$. Suppose $q_{n-j}=q_0q_j$ for $j\in\{1,...,n-1\}$, and suppose that $S$ has the following shape,
\begin{eqnarray*}
S&=&\begin{pmatrix}1&q_{n-1}& \dots & q_1\\ & \ddots & \ddots & \vdots \\ & & \ddots & q_{n-1}\\ & & & 1 \end{pmatrix}, \\
\textup{so }S_{ij}&=&\left\{\begin{array}{ll} 0& \textup{if }i>j, \\ q_{n-(j-i)}& \textup{if }i\leq j, \end{array}\right.
\end{eqnarray*} Then \begin{eqnarray*} M^{root}&:=&Z(\delta^{1-k}_n\sigma^{root})\in G_\Z\\ \textup{ with }(M^{root})^n&=&(-1)^{k+1}M. \end{eqnarray*} \index{root of the monodromy}\index{$M^{root}$} $M^{root}$ is regular and cyclic and has the characteristic polynomial $q(t):=\sum_{i=0}^n q_it^i\in\Z[t]$. \end{theorem} {\bf Proof:} (a) The second line follows from the definition of the action of $\delta^{1-k}_n\sigma^{root}$ on $\uuuu{e}$. For the third line observe $s_{e_1}^{(k)}(e_j)=e_j-S_{1j}e_1$ for $j\geq 2$ and $s_{e_1}^{(k)}(e_1)=-q_0e_1$. (b) Use part (a) and \begin{eqnarray*} s_{s_{e_1}^{(k)}(e_2)}^{(k)}(s_{e_1}^{(k)}(e_j)) =s_{e_1}^{(k)}s_{e_2}^{(k)}(s_{e_1}^{(k)})^{-1}s_{e_1}^{(k)}(e_j) =s_{e_1}^{(k)}s_{e_2}^{(k)}(e_j) \end{eqnarray*} to find \begin{eqnarray*} (\delta^{1-k}_n\sigma^{root})^2(\uuuu{e}) =(s_{e_1}^{(k)}s_{e_2}^{(k)}(e_3),..., s_{e_1}^{(k)}s_{e_2}^{(k)}(e_n), s_{e_1}^{(k)}s_{e_2}^{(k)}(e_1), s_{e_1}^{(k)}s_{e_2}^{(k)}(e_2)). \end{eqnarray*} One continues inductively and finds \begin{eqnarray*} (\delta^{1-k}_n\sigma^{root})^n(\uuuu{e}) =(s_{e_1}^{(k)}...s_{e_n}^{(k)}(e_1),..., s_{e_1}^{(k)}...s_{e_n}^{(k)}(e_n)) =(-1)^{k+1}M(\uuuu{e}), \end{eqnarray*} so $Z((\delta^{1-k}_n\sigma^{root})^n)=(-1)^{k+1}M$. (c) If $S$ is as in part (c) then \begin{eqnarray*} I^{(k)}(\delta^{1-k}_n\sigma^{root}(\uuuu{e})^t, \delta^{1-k}_n\sigma^{root}(\uuuu{e})) &\stackrel{(1)}{=}& I^{(k)}((e_2\ e_3\ ...\ e_n\ e_1)^t,(e_2\ e_3\ ...\ e_n\ e_1))\\ &\stackrel{(2)}{=}&S+(-1)^kS^t =I^{(k)}(\uuuu{e}^t,\uuuu{e}). \end{eqnarray*} Here $\stackrel{(1)}{=}$ uses $s_{e_1}^{(k)}\in O^{(k)}$, and $\stackrel{(2)}{=}$ uses that $I^{(k)}(\uuuu{e}^t,\uuuu{e})=S+(-1)^kS^t$ and that $S$ is as in part (c). Therefore $M^{root}:=Z(\delta^{1-k}_n\sigma^{root}) \in G_\Z^{(k)}$, so by Lemma \ref{t3.25} (a) \begin{eqnarray*} M^{root}\in G_\Z\quad\textup{and}\quad\delta^{1-k}_n\sigma^{root} \in (\Br_n\ltimes\{\pm 1\}^n)_S. 
\end{eqnarray*}
Also
\begin{eqnarray*}
(M^{root})^n=(Z(\delta^{1-k}_n\sigma^{root}))^n =Z((\delta^{1-k}_n\sigma^{root})^n) =(-1)^{k+1}M.
\end{eqnarray*}
Let $\uuuu{e}^*$ be the $\Z$-basis of $H_\Z$ which is left $L$-dual to the $\Z$-basis $\uuuu{e}$, so with $L((\uuuu{e}^*)^t,\uuuu{e})=E_n$. Remark 4.8 in \cite{BH20} says
\begin{eqnarray*}
M^{root}\uuuu{e}^* = \uuuu{e}^*R^{-t}=\uuuu{e}^*\cdot \left(\begin{array}{ccc|c} & & & -q_0 \\ \hline & & & -q_1 \\ & E_{n-1} & & \vdots \\ & & & -q_{n-1}\end{array}\right).
\end{eqnarray*}
The matrix $R^{-t}$ is the companion matrix of the polynomial $q(t)$. Therefore $M^{root}$ is regular, cyclic with generating vector $c=e_1^*$ and has the characteristic polynomial $q(t)$. \hfill$\Box$

\begin{remarks}\label{t3.27}
The main part of part (c) of Theorem \ref{t3.26} also has the following matrix version: For $q(t)=\sum_{i=0}^nq_it^i\in\Z[t]$ with $q_n=1$, $q_0=(-1)^k$ for some $k\in\{0;1\}$ and $q_{n-j}=q_0q_j$, the matrix $R$ in part (a) and the matrix $S$ in part (c) of Theorem \ref{t3.26} satisfy
$$R^n=(-1)^{k+1}S^{-1}S^t.$$
A proof of this version of part (c) of Theorem \ref{t3.26} using matrices was given in \cite[Theorem 4.5 (a)+(b)]{BH20}. The proof here with the braid group action is more elegant.
\end{remarks}

The antihomomorphism $Z:(\Br_n\ltimes\{\pm 1\}^n)_S\to G_\Z$ is not surjective in general. A simple example with $n=4$ is given in the Remarks \ref{t3.29}. But it is surjective in the case $n=1$, in all cases with $n=2$ and in almost all cases with $n=3$. Theorem \ref{t3.28} gives precise statements. Its proof requires, first, good control of the braid group action on $T^{uni}_3(\Z)$, which is the subject of chapter \ref{s4}, and, second, complete knowledge of the group $G_\Z$ for all cases with $n\leq 3$, which is the subject of chapter \ref{s5}.
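The matrix identity $R^n=(-1)^{k+1}S^{-1}S^t$ of Remark \ref{t3.27} can be verified symbolically for small $n$. The following sketch (Python with sympy; an illustration only, not a substitute for the proofs) checks it for $n=3$, where the conditions $q_3=1$, $q_0=(-1)^k$ and $q_{n-j}=q_0q_j$ leave one free parameter $q_1$:

```python
# Symbolic check of R^n = (-1)^(k+1) * S^(-1) * S^t for n = 3:
# q_3 = 1, q_0 = (-1)^k, and the condition q_{n-j} = q_0*q_j gives q_2 = q_0*q_1.
import sympy as sp

q1 = sp.symbols('q1')
for k in (0, 1):
    q0 = (-1) ** k
    q2 = q0 * q1
    S = sp.Matrix([[1, q2, q1], [0, 1, q2], [0, 0, 1]])   # shape of Thm. (c)
    R = sp.Matrix([[-q2, -q1, -q0], [1, 0, 0], [0, 1, 0]])  # matrix of Thm. (a)
    diff = (R ** 3 - (-1) ** (k + 1) * S.inv() * S.T).applyfunc(sp.expand)
    assert diff == sp.zeros(3, 3)
```

Since $q_1$ is kept symbolic, the check covers all matrices $S$ of the required shape with $n=3$, for both values of $k$.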
Theorem \ref{t3.28} is proved within the theorems in chapter \ref{s5} which treat the different cases with $n\in\{1,2,3\}$, namely Lemma \ref{t5.4} (the cases $A_1^n$), Theorem \ref{t5.5} (the rank 2 cases), Theorem \ref{t5.13} (the reducible rank 3 cases), Theorem \ref{t5.14} (the irreducible rank 3 cases with all eigenvalues in $S^1$), Theorem \ref{t5.16} (some special other rank 3 cases), Theorem \ref{t5.18} (the rest of the rank 3 cases).

\begin{theorem}\label{t3.28}
Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\leq 3$ with triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. The group antihomomorphism $Z:(\Br_n\ltimes\{\pm 1\}^n)_S \to G_\Z$ is not surjective in the four cases with $n=3$ where $S$ is in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $S(\uuuu{x})$ with
$$\uuuu{x}\in\{(3,3,4),(4,4,4),(5,5,5),(4,4,8)\},$$
so then $G_\Z\supsetneqq G_\Z^{\BB}$. It is surjective in all other cases with $n\leq 3$, so then $G_\Z=G_\Z^{\BB}$.
\end{theorem}

\begin{remarks}\label{t3.29}
(i) In contrast to the cases $n=1,2,3$ of Theorem \ref{t3.28}, for $n\geq 4$ it is easy to find matrices $S\in T^{uni}_n(\Z)$ such that the group antihomomorphism $Z:(\Br_n\ltimes \{\pm 1\}^n)_S\to G_\Z$ is not surjective. However, the construction which we propose in part (ii) and carry out in one example in part (iii) leads to matrices which are rather particular. For a given matrix $S$ it is in general not easy to see whether $Z$ is surjective or not.

(ii) Consider a reducible unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ of rank $n$ with triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. There exist an $L$-orthogonal decomposition $H_\Z=\bigoplus_{j=1}^l H_{\Z,j}$ with $l\geq 2$ and $\textup{rank}\, H_{\Z,j}\geq 1$ and a surjective map $\alpha:\{1,...,n\}\to\{1,...,l\}$ with $e_i\in H_{\Z,\alpha(i)}$.
Then for $k\in\{0,1\}$
\begin{eqnarray*}
\Gamma^{(k)}\{e_i\}\subset H_{\Z,\alpha(i)},\\
\textup{so}\quad \Delta^{(k)}\subset \bigcup_{j=1}^l H_{\Z,j}.
\end{eqnarray*}
Especially any $g\in G_\Z^{\BB}\subset G_\Z$ maps each $e_i$ to an element of $\bigcup_{j=1}^l H_{\Z,j}$. This does not necessarily hold for all $g\in G_\Z$. Part (iii) gives an example.

(iii) Consider the unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ of rank 4 with triangular basis $\uuuu{e}$ and matrix
$$S=L(\uuuu{e}^t,\uuuu{e})^t= \begin{pmatrix}1&2&0&0\\0&1&0&0\\0&0&1&2\\0&0&0&1\end{pmatrix} \in T^{uni}_4(\Z).$$
Then $H_\Z= H_{\Z,1}\oplus H_{\Z,2}$ with $H_{\Z,1}=\Z e_1\oplus \Z e_2$ and $H_{\Z,2}=\Z e_3\oplus \Z e_4$. The $\Z$-linear map $g:H_\Z\to H_\Z$ with
\begin{eqnarray*}
(g(e_1),g(e_2),g(e_3),g(e_4)) =(e_1+(e_3-e_4),e_2+(e_3-e_4),\\ e_3+(e_1-e_2),e_4+(e_1-e_2))
\end{eqnarray*}
is not in $G_\Z^{\BB}$ because $g(e_1)$, $g(e_2)$, $g(e_3)$, $g(e_4)\notin H_{\Z,1}\cup H_{\Z,2}$. But $g\in G_\Z$ because
\begin{eqnarray*}
L(e_1-e_2,e_1-e_2)=0=L(e_3-e_4,e_3-e_4),\\
\textup{so}\quad L(g(e_i),g(e_j))=L(e_i,e_j)\quad\textup{for } \{i,j\}\subset\{1,2\}\textup{ or }\{i,j\}\subset\{3,4\},
\end{eqnarray*}
and also
\begin{eqnarray*}
L(g(e_i),g(e_j))=L(e_i,e_j)\quad\textup{for } (i,j)\in(\{1,2\}\times\{3,4\})\cup (\{3,4\}\times\{1,2\}).
\end{eqnarray*}
So here $g\in G_\Z-G_\Z^{\BB}$, so $G_\Z^{\BB}\subsetneqq G_\Z$.
\end{remarks}

\chapter{Braid group action on upper triangular $3\times 3$ matrices }\label{s4}
\setcounter{equation}{0}
\setcounter{figure}{0}

The subject of this chapter is the case $n=3$ of the action in Lemma \ref{t3.13} of $\Br_n\ltimes\{\pm 1\}^n$ on the matrices in $T^{uni}_n(\Z)$. In section \ref{s4.1} the action on $T^{uni}_3(\R)$ is made concrete. The (quotient) group of actions is given in new generators.
It is
$$ (G^{phi}\ltimes G^{sign})\rtimes \langle\gamma\rangle\cong (G^{phi}\rtimes \langle\gamma\rangle)\ltimes G^{sign},$$
where $G^{phi}$ is a free Coxeter group with three generators, $G^{sign}$ is the group of actions on $T^{uni}_3(\R)$ which the sign group $\{\pm 1\}^3$ induces, and $\gamma$ acts cyclically of order 3. In fact, $G^{phi}\rtimes \langle\gamma\rangle \cong PSL_2(\Z),$ so we have a nonlinear action of $PSL_2(\Z)$, but this point of view is less useful than the presentation as $G^{phi}\rtimes\langle\gamma\rangle$.

The action on $T^{uni}_3(\Z)$ was already studied by Kr\"uger \cite[\S 12]{Kr90} and by Cecotti-Vafa \cite[Ch. 6.2]{CV93}. Section \ref{s4.2} recovers and refines their results. Like them, it puts emphasis on the cases where the monodromy of a corresponding unimodular bilinear lattice has eigenvalues in $S^1$. Section \ref{s4.2} follows largely Kr\"uger \cite[\S 12]{Kr90}.

Section \ref{s4.3} uses {\it pseudo-graphs} to systematically study all cases, not only those where the monodromy has eigenvalues in $S^1$. This goes far beyond Kr\"uger and Cecotti-Vafa. The results of section \ref{s4.3} and the pseudo-graphs are used in section \ref{s4.4} to determine in all cases the stabilizer $(\Br_3\ltimes\{\pm 1\}^3)_S$ and the stabilizer $(\Br_3)_{S/\{\pm 1\}^3}$. Section \ref{s7.4} will build on this and determine the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of a distinguished basis $\uuuu{e}\in\BB^{dist}$ for any unimodular bilinear lattice of rank 3 with a fixed triangular basis.

Section \ref{s4.5} starts with the observation that a matrix $S\in T^{uni}_n(\Z)$ and the matrix $\www{S}\in T^{uni}_n(\Z)$ with $\www{S}_{ij}=-S_{ij}$ for $i<j$ lead to unimodular bilinear lattices with the same odd monodromy groups and the same odd vanishing cycles. This motivates the study of the action on $T^{uni}_n(\Z)$ which extends the action of $\Br_n\ltimes\{\pm 1\}^n$ by this global sign change.
Section \ref{s4.5} carries this out in the case $n=3$ and gives standard representatives for each orbit. However, examples show that the action is rather wild: similar-looking triples in $\Z^3$ are in the orbits of very different standard representatives.

\section[Real upper triangular $3\times 3$ matrices] {Braid group action on real upper triangular $3\times 3$ matrices} \label{s4.1}

The action of $\Br_3\ltimes \{\pm 1\}^3$ on $T^{uni}_3(\Z)$ will be studied in the next sections. It extends to an action on $T^{uni}_3(\R)\cong\R^3$ which will be studied here. By Theorem \ref{t3.4} (d), $\sigma_1$ acts on $T^{uni}_3(\Z)$ by
\begin{eqnarray*}
\sigma_1:\begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} &\mapsto& \begin{pmatrix}-x_1&1&0\\1&0&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \begin{pmatrix}-x_1&1&0\\1&0&0\\0&0&1\end{pmatrix}\\
&=&\begin{pmatrix}1&-x_1&x_3-x_1x_2\\0&1&x_2\\0&0&1\end{pmatrix}.
\end{eqnarray*}
It extends to an action on $T^{uni}_3(\R)$. With the isomorphism
\begin{eqnarray*}
T^{uni}_3(R)\stackrel{\cong}{\longrightarrow}R^3,\quad \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \mapsto (x_1,x_2,x_3) \quad\textup{for }R\in\{\Z,\Q,\R,\C\}
\end{eqnarray*}
this gives the action
\begin{eqnarray*}
\sigma_1^{\R}:\R^3\to\R^3,\quad (x_1,x_2,x_3)\mapsto (-x_1,x_3-x_1x_2,x_2).
\end{eqnarray*}
Analogously \index{$\sigma_j^\R:\R^3\to\R^3$} \index{$\delta_j^\R:\R^3\to\R^3$}
\begin{eqnarray*}
(\sigma_1^{\R})^{-1}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (-x_1,x_3,x_2-x_1x_3), \\
\sigma_2^{\R}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_2-x_1x_3,x_1,-x_3), \\
(\sigma_2^{\R})^{-1}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_2,x_1-x_2x_3,-x_3), \\
\delta_1^{\R}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (-x_1,-x_2,x_3), \\
\delta_2^{\R}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (-x_1,x_2,-x_3), \\
\delta_3^{\R}:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_1,-x_2,-x_3).
\end{eqnarray*}
One sees \index{$G^{sign}$}
\begin{eqnarray*}
\delta_3^{\R}=\delta_1^{\R}\delta_2^{\R}\quad \textup{and}\quad G^{sign}:=\langle\delta_1^{\R},\delta_2^{\R}\rangle \cong\{\pm 1\}^2.
\end{eqnarray*}
The group $\langle \sigma_1^{\R},\sigma_2^{\R}\rangle \ltimes G^{sign}\subset\Aut_{pol}(\R^3)$ of polynomial automorphisms of $\R^3$ will become more transparent in other generators.

\begin{definition}\label{t4.1}
Define the polynomial automorphisms of $\R^3$ \index{$\varphi_j:\R^3\to\R^3$}\index{$\gamma:\R^3\to\R^3$}
\begin{eqnarray*}
\varphi_1:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_2x_3-x_1,x_3,x_2), \\
\varphi_2:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_3,x_1x_3-x_2,x_1), \\
\varphi_3:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_2,x_1,x_1x_2-x_3), \\
\gamma:\R^3\to\R^3,&& (x_1,x_2,x_3)\mapsto (x_3,x_1,x_2),
\end{eqnarray*}
and the group $G^{phi}:=\langle \varphi_1,\varphi_2,\varphi_3 \rangle\subset\Aut_{pol}(\R^3)$. \index{$G^{phi}$}
\end{definition}

\begin{theorem}\label{t4.2}
(a) The group $G^{phi}$ is a free Coxeter group with the three generators $\varphi_1,\varphi_2,\varphi_3$, so $G^{phi}\cong G^{fCox,3}$.

(b) $\langle\gamma\rangle\cong \Z/3\Z\cong A_3\subset S_3$.

(c) $\langle \sigma_1^{\R},\sigma_2^{\R}\rangle \ltimes G^{sign} = (G^{phi}\ltimes G^{sign})\rtimes \langle\gamma\rangle.$
\end{theorem}

{\bf Proof:} (a) $\varphi_1^2=\varphi_2^2=\varphi_3^2=\id$ is obvious, and also that $(2,2,2)\in\R^3$ is a fixed point of $G^{phi}$. We will show that the group $\langle d_{(2,2,2)}\varphi_1,d_{(2,2,2)}\varphi_2, d_{(2,2,2)}\varphi_3\rangle$ of induced actions on the tangent space $T_{(2,2,2)}\R^3$ is a free Coxeter group with three generators. This will imply $G^{phi}\cong G^{fCox,3}$.

Affine linear coordinates $(\www{x}_1,\www{x}_2,\www{x}_3)$ on $\R^3$ which vanish at $(2,2,2)$ with
\begin{eqnarray*}
(x_1,x_2,x_3)=(2+\www{x}_1,2+\www{x}_2,2+\www{x}_3) =(2,2,2)+(\www{x}_1,\www{x}_2,\www{x}_3)
\end{eqnarray*}
are also linear coordinates on $T_{(2,2,2)}\R^3$.
We have
\begin{eqnarray*}
\varphi_1((2,2,2)+(\www{x}_1,\www{x}_2,\www{x}_3)) =(2,2,2)+(\www{x}_2\www{x}_3+2\www{x}_2+2\www{x}_3-\www{x}_1, \www{x}_3,\www{x}_2),\\
d_{(2,2,2)}\varphi_1(\www{x}_1,\www{x}_2,\www{x}_3) =(\www{x}_1,\www{x}_2,\www{x}_3) \begin{pmatrix}-1&0&0\\2&0&1\\2&1&0\end{pmatrix},
\end{eqnarray*}
and analogously
\begin{eqnarray*}
d_{(2,2,2)}\varphi_2(\www{x}_1,\www{x}_2,\www{x}_3) =(\www{x}_1,\www{x}_2,\www{x}_3) \begin{pmatrix}0&2&1\\0&-1&0\\1&2&0\end{pmatrix},\\
d_{(2,2,2)}\varphi_3(\www{x}_1,\www{x}_2,\www{x}_3) =(\www{x}_1,\www{x}_2,\www{x}_3) \begin{pmatrix}0&1&2\\1&0&2\\0&0&-1\end{pmatrix}.
\end{eqnarray*}
The group $G^{phi}$ respects the fibers of the map
\begin{eqnarray*}
r_\R:\R^3\to \R,\quad (x_1,x_2,x_3)\mapsto x_1^2+x_2^2+x_3^2-x_1x_2x_3.
\end{eqnarray*}
The group $\langle d_{(2,2,2)}\varphi_1,d_{(2,2,2)}\varphi_2, d_{(2,2,2)}\varphi_3\rangle$ respects the tangent cone at $(2,2,2)$ of the fiber $r_\R^{-1}(4)$. This tangent cone is the zero set of the quadratic form
\begin{eqnarray*}
q_{\R^3,(2,2,2)}:\R^3&\to& \R,\\
(\www{x}_1,\www{x}_2,\www{x}_3)&\mapsto& -\www{x}_1^2-\www{x}_2^2-\www{x}_3^2+2\www{x}_1\www{x}_2 +2\www{x}_1\www{x}_3+2\www{x}_2\www{x}_3.
\end{eqnarray*}
This quadratic form is indefinite with signature $(+,-,-)$. As in Theorem \ref{ta.4}, its cone of positive vectors is called $\KK$. Consider the six vectors
\begin{eqnarray*}
v_1=(1,1,0),\ v_2=(1,0,1),\ v_3=(0,1,1),\\
w_1=v_1+v_2,\ w_2=v_1+v_3,\ w_3=v_2+v_3.
\end{eqnarray*}
Then
\begin{eqnarray*}
v_1,v_2,v_3\in\paa\KK,\quad w_1,w_2,w_3\in\KK,\\
d_{(2,2,2)}\varphi_1:v_1\leftrightarrow v_2,\ w_1\mapsto w_1,\\
d_{(2,2,2)}\varphi_2:v_1\leftrightarrow v_3,\ w_2\mapsto w_2,\\
d_{(2,2,2)}\varphi_3:v_2\leftrightarrow v_3,\ w_3\mapsto w_3.
\end{eqnarray*}
Compare Theorem \ref{ta.4}.
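As a side remark, these linear-algebra facts can be checked mechanically. The following Python sketch is a verification aid of ours and not part of the proof; it confirms that the three matrices above, acting on row vectors, perform the stated swaps of the $v_j$, fix the corresponding $w_i$, and preserve the quadratic form $q_{\R^3,(2,2,2)}$.

```python
# Sanity check (not part of the proof): the matrices of d_{(2,2,2)}phi_i
# act on row vectors as v -> v @ M_i, swap the stated pair of v's,
# fix w_i, and preserve the quadratic form q_{R^3,(2,2,2)}.
M = {1: [[-1, 0, 0], [2, 0, 1], [2, 1, 0]],
     2: [[0, 2, 1], [0, -1, 0], [1, 2, 0]],
     3: [[0, 1, 2], [1, 0, 2], [0, 0, -1]]}

def act(v, m):  # row vector times matrix
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

def q(v):       # q_{R^3,(2,2,2)}, signature (+,-,-)
    a, b, c = v
    return -a*a - b*b - c*c + 2*a*b + 2*a*c + 2*b*c

v1, v2, v3 = (1, 1, 0), (1, 0, 1), (0, 1, 1)
w1, w2, w3 = (2, 1, 1), (1, 2, 1), (1, 1, 2)  # w1=v1+v2, w2=v1+v3, w3=v2+v3

assert q(v1) == q(v2) == q(v3) == 0           # v_i on the boundary of K
assert q(w1) > 0 and q(w2) > 0 and q(w3) > 0  # w_i inside K
assert act(v1, M[1]) == v2 and act(v2, M[1]) == v1 and act(w1, M[1]) == w1
assert act(v1, M[2]) == v3 and act(v3, M[2]) == v1 and act(w2, M[2]) == w2
assert act(v2, M[3]) == v3 and act(v3, M[3]) == v2 and act(w3, M[3]) == w3
for m in M.values():                          # the differentials preserve q
    for v in [(1, 2, 3), (0, 1, 5), (-2, 3, 1)]:
        assert q(act(v, m)) == q(v)
```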
In the model $\KK/\R^*$ of the hyperbolic plane, $d_{(2,2,2)}\varphi_i$ for $i\in\{1,2,3\}$ gives a rotation with angle $\pi$ and elliptic fixed point $\R^* w_i$; it maps the hyperbolic line with euclidean boundary points $\R^*v_1$ and $\R^*v_2$ (for $i=1$), $\R^*v_1$ and $\R^*v_3$ (for $i=2$), respectively $\R^*v_2$ and $\R^*v_3$ (for $i=3$) to itself. Theorem \ref{ta.2} (b) applies and shows $\langle d_{(2,2,2)}\varphi_i\,|\, i\in\{1,2,3\}\rangle \cong G^{fCox,3}$. Figure \ref{Fig:4.1} illustrates this.

\begin{figure}[H]
\includegraphics[width=0.5\textwidth]{pic-4-1.png}
\caption[Figure 4.1]{$G^{fCox,3}$ generated by 3 elliptic M\"obius transformations, an application of Theorem \ref{ta.2} (b)}
\label{Fig:4.1}
\end{figure}

(b) Trivial.

(c) The equality of groups
\begin{eqnarray*}
\langle \sigma_1^{\R},\sigma_2^{\R}\rangle\ltimes G^{sign} =\langle\varphi_1,\varphi_2,\varphi_3,\gamma, \delta_1^{\R},\delta_2^{\R}\rangle
\end{eqnarray*}
follows from
\begin{eqnarray}\label{4.1}
\gamma &=& \delta_3^{\R}\sigma_2^{\R}\sigma_1^{\R},\\
\varphi_1&=& \delta_1^{\R}\gamma^{-1}(\sigma_2^{\R})^{-1}, \label{4.2}\\
\varphi_2&=& \delta_1^{\R}\gamma \sigma_2^{\R} =\delta_3^{\R}\gamma^{-1}(\sigma_1^{\R})^{-1}, \label{4.3}\\
\varphi_3&=& \delta_3^{\R}\gamma\sigma_1^{\R},\label{4.4}
\end{eqnarray}
and
\begin{eqnarray}\label{4.5}
\sigma_1^{\R}=\gamma^{-1}\delta_3^{\R}\varphi_3,\quad \sigma_2^{\R}=\gamma^{-1}\delta_1^{\R}\varphi_2.
\end{eqnarray}
$G^{phi}$ fixes $(2,2,2)$, and therefore $G^{phi}\cap G^{sign}=\{\id\}$. As $G^{sign}$ is a normal subgroup of $\langle \sigma_1^{\R},\sigma_2^{\R}\rangle \ltimes G^{sign}$, it is also a normal subgroup of $\langle \varphi_1,\varphi_2,\varphi_3,\delta_1^{\R}, \delta_2^{\R}\rangle$, so $\langle\varphi_1,\varphi_2,\varphi_3,\delta_1^{\R}, \delta_2^{\R}\rangle =G^{phi}\ltimes G^{sign}$.
More precisely
\begin{eqnarray*}
\varphi_i\delta_j^{\R}\varphi_i^{-1}=\delta_k^{\R} \quad\textup{for }(i,j,k)\in\{(1,3,3),(2,2,2),(3,1,1), \\
(1,1,2),(1,2,1),(2,1,3),(2,3,1),(3,2,3),(3,3,2)\}.\end{eqnarray*}
We claim $\gamma\notin G^{phi}\ltimes G^{sign}$. If $\gamma$ were in $G^{phi}\ltimes G^{sign}$ then $\gamma\in G^{phi}$, as $\gamma$ fixes $(2,2,2)$. But all elements of finite order in $G^{phi}\cong G^{fCox,3}$ have order two, whereas $\gamma$ has order three. Hence $\gamma\notin G^{phi}$ and $\gamma\notin G^{phi}\ltimes G^{sign}$.

The claim and
\begin{eqnarray}\label{4.6}
\gamma\varphi_1\gamma^{-1}=\varphi_2,\ \gamma\varphi_2\gamma^{-1}=\varphi_3,\ \gamma\varphi_3\gamma^{-1}=\varphi_1,\\
\gamma\delta_1^{\R}\gamma^{-1}=\delta_3^{\R},\ \gamma\delta_2^{\R}\gamma^{-1}=\delta_1^{\R},\ \gamma\delta_3^{\R}\gamma^{-1}=\delta_2^{\R},\label{4.7}
\end{eqnarray}
show
\begin{eqnarray*}
\langle \varphi_1,\varphi_2,\varphi_3,\delta_1^{\R}, \delta_2^{\R},\gamma\rangle =(G^{phi}\ltimes G^{sign})\rtimes \langle\gamma\rangle. \hspace*{2cm}\Box
\end{eqnarray*}

\section[Integer upper triangular $3\times 3$ matrices] {Braid group action on integer upper triangular $3\times 3$ matrices} \label{s4.2}

In this section we will give a partial classification of the orbits of the action of $\Br_3\ltimes\{\pm 1\}^3$ on $T^{uni}_3(\Z)$. This refines results which were obtained independently by Kr\"uger \cite[\S 12]{Kr90} and Cecotti-Vafa \cite[Ch. 6.2]{CV93} (building on Mordell \cite[p 106 ff]{Mo69}). The refinement consists in the following. By Theorem \ref{t4.2} the action of $\Br_3\ltimes\{\pm 1\}^3$ on $T^{uni}_3(\Z)$ coincides with the action of $(G^{phi}\ltimes G^{sign})\rtimes\langle\gamma\rangle$.
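Since this identification of the two actions rests on the relations \eqref{4.1}--\eqref{4.7}, it may be reassuring to confirm them numerically. The following Python sketch is our own verification aid, not part of the sources; composition is read right to left, so $\texttt{comp}(f,g)(x)=f(g(x))$, matching the convention $\gamma=\delta_3^{\R}\sigma_2^{\R}\sigma_1^{\R}$.

```python
# Numerical check (verification aid) of the relations (4.1)-(4.7)
# on random integer triples; composition is right to left.
import random

def s1(x):   x1, x2, x3 = x; return (-x1, x3 - x1*x2, x2)   # sigma_1
def s2(x):   x1, x2, x3 = x; return (x2 - x1*x3, x1, -x3)   # sigma_2
def s2i(x):  x1, x2, x3 = x; return (x2, x1 - x2*x3, -x3)   # sigma_2^{-1}
def d1(x):   x1, x2, x3 = x; return (-x1, -x2, x3)          # delta_1
def d3(x):   x1, x2, x3 = x; return (x1, -x2, -x3)          # delta_3
def g(x):    x1, x2, x3 = x; return (x3, x1, x2)            # gamma
def gi(x):   x1, x2, x3 = x; return (x2, x3, x1)            # gamma^{-1}
def phi1(x): x1, x2, x3 = x; return (x2*x3 - x1, x3, x2)
def phi2(x): x1, x2, x3 = x; return (x3, x1*x3 - x2, x1)
def phi3(x): x1, x2, x3 = x; return (x2, x1, x1*x2 - x3)

def comp(*fs):
    def h(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return h

random.seed(0)
for _ in range(100):
    x = tuple(random.randint(-9, 9) for _ in range(3))
    assert g(x) == comp(d3, s2, s1)(x)        # (4.1)
    assert phi1(x) == comp(d1, gi, s2i)(x)    # (4.2)
    assert phi2(x) == comp(d1, g, s2)(x)      # (4.3)
    assert phi3(x) == comp(d3, g, s1)(x)      # (4.4)
    assert s1(x) == comp(gi, d3, phi3)(x)     # (4.5)
    assert s2(x) == comp(gi, d1, phi2)(x)     # (4.5)
    assert phi2(x) == comp(g, phi1, gi)(x)    # (4.6)
    assert d3(x) == comp(g, d1, gi)(x)        # (4.7)
    for p in (phi1, phi2, phi3):
        assert p(p(x)) == x                   # the phi_i are involutions
```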
For reasons unknown to us, Kr\"uger and Cecotti-Vafa considered the action of the slightly larger group $(G^{phi}\ltimes G^{sign})\rtimes\langle \gamma,\gamma_2\rangle$ with
\begin{eqnarray*}
\gamma_2:\R^3\to\R^3,\quad (x_1,x_2,x_3)\mapsto (x_2,x_1,x_3), \quad\textup{so }\langle \gamma,\gamma_2\rangle\cong S_3.
\end{eqnarray*}
Thus they obtained a slightly coarser classification. Nevertheless Theorem \ref{t4.6} is essentially due to them (and Mordell \cite[p 106 ff]{Mo69}). The following definition and lemma prepare for it. They are due to Kr\"uger \cite[\S 12]{Kr90}.

\begin{definition}\label{t4.3} \cite[Def. 12.2]{Kr90}
For $\uuuu{x}=(x_1,x_2,x_3)\in\R^3$ we set as usual $\|\uuuu{x}\|:=\sqrt{x_1^2+x_2^2+x_3^2}$. A tuple $\uuuu{x}\in\R^3$ is called a {\it local minimum} if \index{local minimum}
\begin{eqnarray*}
\| \uuuu{x}\| \leq\min(\|\sigma_1^{\R}(\uuuu{x})\|, \|(\sigma_1^{\R})^{-1}(\uuuu{x})\|, \|\sigma_2^{\R}(\uuuu{x})\|, \|(\sigma_2^{\R})^{-1}(\uuuu{x})\|).
\end{eqnarray*}
This is obviously equivalent to
\begin{eqnarray*}
\| \uuuu{x}\| \leq\min(\|\varphi_1(\uuuu{x})\|, \|\varphi_2(\uuuu{x})\|, \|\varphi_3(\uuuu{x})\|).
\end{eqnarray*}
\end{definition}

\begin{lemma}\label{t4.4} \cite[Lemma 12.3]{Kr90}
$\uuuu{x}\in\R^3$ is a local minimum if and only if it satisfies (i) or (ii),
\begin{list}{}{}
\item[(i)] $x_1x_2x_3\leq 0$,
\item[(ii)] $x_1x_2x_3>0,\ 2|x_1|\leq |x_2x_3|,\ 2|x_2|\leq |x_1x_3|,\ 2|x_3|\leq |x_1x_2|$.
\end{list}
In the case (ii) also $|x_1|\geq 2$, $|x_2|\geq 2$ and $|x_3|\geq 2$ hold.
\end{lemma}

{\bf Proof:} $\uuuu{x}\in\R^3$ is a local minimum if and only if for all $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$
\begin{eqnarray*}
x_i^2+x_j^2+x_k^2\leq x_i^2+x_j^2+(x_k-x_ix_j)^2
\end{eqnarray*}
holds, which is equivalent to
\begin{eqnarray*}
2x_ix_jx_k\leq x_i^2x_j^2.
\end{eqnarray*}

{\bf 1st case,} $x_1x_2x_3\leq 0$: Then $\uuuu{x}$ is a local minimum.
{\bf 2nd case,} $x_1x_2x_3>0$: Then the condition $2x_ix_jx_k\leq x_i^2x_j^2$ is equivalent to
\begin{eqnarray*}
2|x_k|\leq |x_ix_j|.
\end{eqnarray*}
These three conditions together imply
\begin{eqnarray*}
4|x_k|\leq 2|x_i||x_j|\leq |x_i||x_i||x_k|,\quad\textup{so } 4\leq |x_i|^2,\quad\textup{so }2\leq |x_i|. \hspace*{1cm}\Box
\end{eqnarray*}

\bigskip
The square $\|.\|^2$ of the norm takes values in $\Z_{\geq 0}$ on $\Z^3$. Therefore each $\Br_3\ltimes\{\pm 1\}^3$ orbit in $\Z^3$ has local minima. Kr\"uger showed that the only $\Br_3\ltimes\{\pm 1\}^3$ orbits in $\R^3$ without local minima are of the following shape. We will not use this result, but we find it interesting enough to cite it.

\begin{theorem}\label{t4.5} \cite[Theorem 12.6]{Kr90}
Let $\uuuu{x}\in\R^3$ be a point whose $\Br_3\ltimes\{\pm 1\}^3$ orbit does not contain a local minimum. Then
\begin{eqnarray*}
x_1x_2x_3>0,\quad 2<\min(|x_1|,|x_2|,|x_3|),\\
4=r_\R(\uuuu{x})(=x_1^2+x_2^2+x_3^2-x_1x_2x_3).
\end{eqnarray*}
Furthermore, there is a sequence $(\psi_n)_{n\in\N}$ with $\psi_n\in\{\varphi_1,\varphi_2,\varphi_3\}$ with $\psi_n\neq\psi_{n+1}$ such that the sequence $(\uuuu{x}^{(n)})_{n\in\N\cup\{0\}}$ with $\uuuu{x}^{(0)}=\uuuu{x}$ and $\uuuu{x}^{(n+1)}=\psi_n(\uuuu{x}^{(n)})$ satisfies
\begin{eqnarray*}
\| \uuuu{x}^{(n+1)}\| < \| \uuuu{x}^{(n)}\| \quad \textup{for all }n\in\N\cup\{0\},\\
\lim_{n\to\infty} (|x^{(n)}_1|,|x^{(n)}_2|,|x^{(n)}_3|)=(2,2,2).
\end{eqnarray*}
\end{theorem}

Now we come to the classification of $\Br_3\ltimes\{\pm 1\}^3$ orbits in $\Z^3$. The following result is, except for its part (f), a refinement of \cite[Theorem 12.7]{Kr90} and of \cite[Ch 6.2]{CV93}. The proof below follows (except for part (f)) the proof in \cite{Kr90}. Recall that each $\Br_3\ltimes\{\pm 1\}^3$ orbit in $\R^3$ is contained in one fiber of the map $r_\R:\R^3\to\R$, $\uuuu{x}\mapsto x_1^2+x_2^2+x_3^2-x_1x_2x_3$.
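Both the invariance of $r_\R$ under $\varphi_1,\varphi_2,\varphi_3$ and the criterion of Lemma \ref{t4.4} lend themselves to a machine check. The following Python sketch is our own verification aid (the box $[-6,6]^3$ is an arbitrary choice); it compares the norm condition of Definition \ref{t4.3} with Lemma \ref{t4.4} on all integer triples in the box and recovers the six local minima in the fiber $r^{-1}(1)$.

```python
# Verification aid: on the box [-6,6]^3 of integer triples,
# (1) r is constant along the maps phi_i (hence on braid orbits),
# (2) the norm criterion of Definition 4.3 agrees with Lemma 4.4,
# (3) the only local minima with r = 1 are (+-1,0,0),(0,+-1,0),(0,0,+-1).
def r(x):
    x1, x2, x3 = x
    return x1*x1 + x2*x2 + x3*x3 - x1*x2*x3

def phis(x):
    x1, x2, x3 = x
    return [(x2*x3 - x1, x3, x2), (x3, x1*x3 - x2, x1), (x2, x1, x1*x2 - x3)]

def n2(x):
    return sum(t*t for t in x)

def is_local_min_def(x):      # Definition 4.3 (phi version)
    return all(n2(x) <= n2(y) for y in phis(x))

def is_local_min_lemma(x):    # Lemma 4.4, condition (i) or (ii)
    x1, x2, x3 = x
    if x1*x2*x3 <= 0:
        return True
    return (2*abs(x1) <= abs(x2*x3) and 2*abs(x2) <= abs(x1*x3)
            and 2*abs(x3) <= abs(x1*x2))

B = range(-6, 7)
box = [(a, b, c) for a in B for b in B for c in B]
for x in box:
    assert all(r(y) == r(x) for y in phis(x))
    assert is_local_min_def(x) == is_local_min_lemma(x)

lm1 = [x for x in box if r(x) == 1 and is_local_min_lemma(x)]
assert len(lm1) == 6
```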
\begin{theorem}\label{t4.6} (a) Each fiber of \index{$r,\ r_\R$} $$r:\Z^3\to\Z,\ r(\uuuu{x})=x_1^2+x_2^2+x_3^2-x_1x_2x_3$$ except the fiber $r^{-1}(4)$ contains only finitely many local minima. (b) Each fiber of $r:\Z^3\to\Z$ except the fiber $r^{-1}(4)$ consists of only finitely many $\Br_3\ltimes\{\pm 1\}^3$ orbits. (c) For $\rho\in\Z_{<0}$, each local minimum $\uuuu{x}\in r^{-1}(\rho)$ satisfies $x_1x_2x_3>0$ and $|x_1|\geq 3$, $|x_2|\geq 3$, $|x_3|\geq 3$. (d) For $\rho\in\N-\{4\}$, each local minimum $\uuuu{x}\in r^{-1}(\rho)$ satisfies $x_1x_2x_3\leq 0$. (e) The following table gives all local minima in $r^{-1}(\{0,1,2,3,4\})$. The local minima in one $\Br_3\ltimes\{\pm 1\}^3$ orbit are in one line. The last entry in each line is one matrix in the corresponding orbit in $T^{uni}_3(\Z)$. \begin{eqnarray*} \begin{array}{rll} r=3 & - & - \\ r=0 & (0,0,0) & S(A_1^3) \\ r=0 & (3,3,3),(-3,-3,3),(-3,3,-3),(3,-3,-3) & S(\P^2) \\ r=1 & (\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1) & S(A_2A_1) \\ r=2 & (\pm 1,\pm 1,0),(\pm 1,0,\pm 1),(0,\pm 1,\pm 1) & S(A_3)\\ r=4 & (\pm 2,0,0),(0,\pm 2,0),(0,0,\pm 2) & S(\P^1A_1) \\ r=4 & (-1,-1,-1),(1,1,-1),(1,-1,1),(-1,1,1) & S(\whh{A}_2)\\ r=4 & (2,2,2),(-2,-2,2),(-2,2,-2),(2,-2,-2) & S(\HH_{1,2})\\ \left\{\begin{array}{c}r=4 \\ l\in\Z_{\geq 3}\end{array}\right\} & \left\{\begin{array}{l} (\varepsilon_1 2,\varepsilon_2 l,\varepsilon_3 l), (\varepsilon_1 l,\varepsilon_2 2,\varepsilon_3 l), (\varepsilon_1 l,\varepsilon_2 l,\varepsilon_3 2)\\ \textup{for }\varepsilon_1,\varepsilon_2,\varepsilon_3\in\{\pm 1\} \textup{ with }\varepsilon_1\varepsilon_2\varepsilon_3=1 \end{array}\right\} & S(-l,2,-l) \end{array} \end{eqnarray*} So there are seven single $\Br_3\ltimes \{\pm 1\}^3$ orbits and one series with parameter $l\in\Z_{\geq 3}$ of $\Br_3\ltimes\{\pm 1\}^3$ orbits with $r\in\{0,1,2,3,4\}$. 
These are the most interesting orbits as the monodromy matrix $S(\uuuu{x})^{-1}S(\uuuu{x})^t$ for $\uuuu{x}\in\R^3$ has eigenvalues in $S^1$ if and only if $r(\uuuu{x})\in[0,4]$. (f) For a given local minimum $\uuuu{x}\in\Z^3$ the set of all local minima in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $\uuuu{x}$ is either the set $G^{sign}\rtimes \langle\gamma\rangle(\uuuu{x})$ or the set $G^{sign}\rtimes \langle\gamma,\gamma_2\rangle(\uuuu{x})$ (see Theorem \ref{t4.13} (b) for details). \end{theorem} {\bf Proof:} (a) Fix $\rho\in\Z-\{4\}$. Let $\uuuu{x}\in \Z^3$ be a local minimum with $r(\uuuu{x})=\rho$. {\bf 1st case,} $x_1x_2x_3\leq 0$: Then $\rho=r(\uuuu{x})=x_1^2+x_2^2+x_3^2-x_1x_2x_3\geq \|\uuuu{x}\|^2$, so $\rho\geq 0$. The closed ball of radius $\sqrt{\rho}$ around $0$ in $\R^3$ intersects $\Z^3$ only in finitely many points. {\bf 2nd case,} $x_1x_2x_3>0$: We can suppose $x_i>0$ for $i\in\{1,2,3\}$ because of the action of $G^{sign}$ on $\R^3$ and $\Z^3$. Lemma \ref{t4.4} says $2x_1\leq x_2x_3$, $2x_2\leq x_1x_3$, $2x_3\leq x_1x_2$, $x_i\geq 2$ for $i\in\{1,2,3\}$. We can suppose $x_1=\min(x_1,x_2,x_3)$ (the other cases are analogous). If $x_1=2$ then $4\neq \rho=r(\uuuu{x})=4+x_2^2+x_3^2-2x_2x_3=4+(x_2-x_3)^2$, so $x_2\neq x_3$, which is a contradiction to $2x_2\leq x_1x_3=2x_3$, $2x_3\leq x_1x_2=2x_2$. Therefore $x_1\geq 3$. We can suppose $x_1\leq x_2\leq x_3$ (the other cases are analogous). \begin{eqnarray*} \rho&=& r(\uuuu{x}) = x_1^2+x_2^2+(x_3-\frac{1}{2}x_1x_2)^2 -\frac{1}{4}x_1^2x_2^2\\ &\leq & x_1^2+x_2^2+(x_2-\frac{1}{2}x_1x_2)^2 - \frac{1}{4}x_1^2x_2^2 \quad(\textup{because }x_2\leq x_3\leq \frac{1}{2}x_1x_2)\\ &=& x_1^2+2x_2^2-x_1x_2^2 =(x_1-2)(x_1+2-x_2^2)+4 \\ &\leq& (x_1-2)(x_1+2-x_1^2) + 4 =-(x_1-2)(x_1-2)(x_1+1) + 4 \\ &\leq& \left\{\begin{array}{ll} -(3-2)(3-2)(3+1)+4 =0, \\ -(x_1-2)^3+4.\end{array}\right. 
\end{eqnarray*}
This implies $\rho\leq 0$ and $x_1\leq 2+\sqrt[3]{4-\rho}$, so $x_1$ is one of the finitely many values in $\Z\cap [3,2+\sqrt[3]{4-\rho}]$. The inequality $\rho\leq (x_1-2)(x_1+2-x_2^2)+4$ implies
\begin{eqnarray*}
x_2^2\leq \frac{4-\rho}{x_1-2}+x_1+2,
\end{eqnarray*}
so $x_2$ is one of the finitely many values in $\Z\cap[x_1,\sqrt{\frac{4-\rho}{x_1-2}+x_1+2}]$. Because of $x_3\leq \frac{1}{2}x_1x_2$, also $x_3$ can take only finitely many values.

(b) Each $\Br_3\ltimes\{\pm 1\}^3$ orbit in $\Z^3$ is mapped by $\|.\|^2$ to a subset of $\Z_{\geq 0}$. A preimage in this orbit of the minimum of this subset is a local minimum. Therefore (a) implies (b).

(c) Suppose $\rho<0$ and $\uuuu{x}\in r^{-1}(\rho)$ is a local minimum. $0>\rho=\|\uuuu{x}\|^2-x_1x_2x_3$ implies $x_1x_2x_3>0$. Lemma \ref{t4.4} gives $|x_1|\geq 2$. If $x_1=2\varepsilon$ with $\varepsilon\in\{\pm 1\}$ then $\rho=r(\uuuu{x})=4+(x_2-\varepsilon x_3)^2\geq 4$, a contradiction. So $|x_1|\geq 3$. Analogously $|x_2|\geq 3$ and $|x_3|\geq 3$.

(d) Suppose $\rho\in\N-\{4\}$ and $\uuuu{x}\in r^{-1}(\rho)$ is a local minimum. In the second case $x_1x_2x_3>0$ in the proof of part (a) we concluded $\rho\leq 0$. Therefore we are in the first case in the proof of part (a), so $x_1x_2x_3\leq 0$.

(e) Suppose $\rho\in\{0,1,2,3,4\}$, and $\uuuu{x}\in r^{-1}(\rho)$ is a local minimum. In the cases $\rho\in\{1,2,3\}$ by part (d) $x_1x_2x_3\leq 0$ and $\rho=r(\uuuu{x})=x_1^2+x_2^2+x_3^2+|x_1x_2x_3|$, so in these cases it is impossible that all $x_i\neq 0$; hence some $x_i=0$, and $\rho=r(\uuuu{x})=x_j^2+x_k^2$ where $\{i,j,k\}=\{1,2,3\}$.

{\bf The case $\rho=3$:} $3=x_j^2+x_k^2$ has no solution, so there are no local minima with $\rho=3$. Since each $\Br_3\ltimes\{\pm 1\}^3$ orbit in $\Z^3$ contains a local minimum, $r^{-1}(3)=\emptyset$.

{\bf The case $\rho=1$:} $1=x_j^2+x_k^2$ is solved only by $(x_j,x_k)\in\{(\pm 1,0),(0,\pm 1)\}$. The six local minima $(\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1)$ are in one orbit of $\Br_3\ltimes\{\pm 1\}^3$ because $\gamma(1,0,0)=(0,1,0)$, $\gamma(0,1,0)=(0,0,1)$.
{\bf The case $\rho=2$:} $x_j^2+x_k^2=2$ is solved only by $(x_j,x_k)\in\{(\pm 1,\pm 1)\}$. The twelve local minima $(\pm 1,\pm 1,0),(\pm 1,0,\pm 1), (0,\pm 1,\pm 1)$ are in one orbit of $\Br_3\ltimes\{\pm 1\}^3$ because $\gamma(1,1,0)=(0,1,1)$, $\gamma(0,1,1)=(1,0,1)$.

{\bf The case $\rho=0$:} We use the proof of part (a).

{\bf 1st case,} $x_1x_2x_3\leq 0$: $0=\rho=\|\uuuu{x}\|^2-x_1x_2x_3\geq\|\uuuu{x}\|^2$, so $\uuuu{x}=(0,0,0)$. Its $\Br_3\ltimes\{\pm 1\}^3$ orbit consists only of $(0,0,0)$.

{\bf 2nd case,} $x_1x_2x_3>0$: We can suppose $x_i>0$ for each $i\in\{1,2,3\}$. Suppose $x_i\leq x_j\leq x_k$ for $\{i,j,k\}=\{1,2,3\}$. The proof of part (a) gives
\begin{eqnarray*}
3\leq x_i\leq 2+\sqrt[3]{4-\rho}=2+\sqrt[3]{4}, \textup{ so }x_i=3
\end{eqnarray*}
and
\begin{eqnarray*}
3=x_i\leq x_j\leq \sqrt{\frac{4-\rho}{x_i-2}+x_i+2}=3, \textup{ so }x_j=3.
\end{eqnarray*}
$0=r(\uuuu{x})=9+9+x_k^2-9x_k=(x_k-3)(x_k-6)$ and $x_k\leq \frac{1}{2}x_ix_j=\frac{9}{2}$ show $x_k=3$. The four local minima $(3,3,3)$, $(-3,-3,3)$, $(-3,3,-3)$ and $(3,-3,-3)$ are in one $\Br_3\ltimes \{\pm 1\}^3$ orbit because of the action of $G^{sign}$.

{\bf The case $\rho=4$:}

{\bf 1st case,} some $x_i=0$: Then with $\{i,j,k\}=\{1,2,3\}$ $4=r(\uuuu{x})=x_j^2+x_k^2$. This is solved only by $(x_j,x_k)\in\{(\pm 2,0),(0,\pm 2)\}$. The six local minima $(\pm 2,0,0)$, $(0,\pm 2,0)$, $(0,0,\pm 2)$ are in one $\Br_3\ltimes\{\pm 1\}^3$ orbit because $\gamma(2,0,0)=(0,2,0)$, $\gamma(0,2,0)=(0,0,2)$.

{\bf 2nd case,} all $x_i\neq 0$ and $x_1x_2x_3<0$: $4=r(\uuuu{x})=x_1^2+x_2^2+x_3^2+|x_1x_2x_3|$, so $(x_1,x_2,x_3)\in \{(-1,-1,-1),(1,1,-1),(1,-1,1),(-1,1,1)\}$. These four local minima are in one $\Br_3\ltimes\{\pm 1\}^3$ orbit because of the action of $G^{sign}$.

{\bf 3rd case,} all $x_i\neq 0$ and $x_1x_2x_3>0$: We can suppose $x_i>0$ for each $i\in\{1,2,3\}$ and $x_i\leq x_j\leq x_k$ for some $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$.
As in the proof of part (a) we obtain the estimate
\begin{eqnarray*}
4=\rho=r(\uuuu{x})\leq -(x_i-2)^3+4,\quad\textup{so }x_i=2,
\end{eqnarray*}
and
\begin{eqnarray*}
4=\rho=r(\uuuu{x})=4+(x_j-x_k)^2,\quad\textup{so } l:=x_j=x_k\geq 2.
\end{eqnarray*}
For $l=2$ the four local minima
\begin{eqnarray*}
(2,2,2),(-2,-2,2),(-2,2,-2),(2,-2,-2)
\end{eqnarray*}
and for $l\geq 3$ the 24 local minima
\begin{eqnarray*}
(\varepsilon_1 2,\varepsilon_2 l,\varepsilon_3 l), (\varepsilon_1 l,\varepsilon_2 2,\varepsilon_3 l), (\varepsilon_1 l,\varepsilon_2 l,\varepsilon_3 2)\\
\textup{ with }\varepsilon_1,\varepsilon_2,\varepsilon_3 \in\{\pm 1\}, \varepsilon_1\varepsilon_2\varepsilon_3=1,
\end{eqnarray*}
are in one $\Br_3\ltimes\{\pm 1\}^3$ orbit because of the action of $G^{sign}$ and $\gamma$.

It remains to see that local minima in different lines in the list in part (e) are in different $\Br_3\ltimes\{\pm 1\}^3$ orbits. One reason is part (f). Another way to argue is given in the Remarks \ref{t4.7}.

(f) See Lemma \ref{t4.10} (e). \hfill$\Box$

\begin{remarks}\label{t4.7}
Part (f) of Theorem \ref{t4.6} is strong and makes it easy to see when two $\Br_3\ltimes\{\pm 1\}^3$ orbits are distinct. Nevertheless it is also interesting to find invariants of the orbits which separate them. Now we discuss several invariants which help to prove the claim that local minima in different lines in the list in part (e) are in different $\Br_3\ltimes\{\pm 1\}^3$ orbits.

The number $r(\uuuu{x})\in\Z$ is such an invariant. Furthermore the set $\{(0,0,0)\}$ is a single orbit and thus different from the orbit of $S(\P^2)$. Therefore the claim is true for the lines with $r\in\{0,1,2,3\}$. It remains to consider the $3+\infty$ lines with $r=4$. Certainly the reducible case $S(\P^1A_1)$ is separate from the other cases, which are all irreducible. The signature of $I^{(0)}$ is an invariant. It is given in Lemma \ref{t5.7}.
It allows one to see that the orbits of $S(\whh{A}_2)$ and $S(\HH_{1,2})$ are different from one another and from the orbits of $S(-l,2,-l)$ for $l\geq 3$. In order to see that the orbits of $S(-l,2,-l)$ for $l\geq 3$ are pairwise different, we refer to Lemma \ref{t7.10}, which in fact allows one to separate all the lines with $r=4$. It considers the induced monodromy on the quotient lattice $H_\Z/\Rad I^{(1)}$.
\end{remarks}

\begin{remarks}\label{t4.8}
Kr\"uger \cite[\S 12]{Kr90} and Cecotti-Vafa \cite[Ch. 6.2]{CV93} considered the action of the group $(G^{phi}\ltimes G^{sign})\rtimes \langle\gamma,\gamma_2\rangle$ with $\gamma_2:\R^3\to\R^3$, $(x_1,x_2,x_3)\mapsto (x_2,x_1,x_3)$, \index{$\gamma_2:\R^3\to\R^3$} so $\langle \gamma,\gamma_2\rangle\cong S_3$, which is slightly larger than $(G^{phi}\ltimes G^{sign})\rtimes\langle\gamma\rangle$. Because of
\begin{eqnarray*}
\gamma_2\varphi_1\gamma_2^{-1}=\varphi_2, \ \gamma_2\varphi_2\gamma_2^{-1}=\varphi_1, \ \gamma_2\varphi_3\gamma_2^{-1}=\varphi_3, \ \gamma_2\gamma\gamma_2^{-1}=\gamma^{-1},
\end{eqnarray*}
we have
\begin{eqnarray*}
\Br_3\ltimes\{\pm 1\}^3\bigl(\gamma_2(\uuuu{x})\bigr) =\gamma_2\bigl(\Br_3\ltimes\{\pm 1\}^3(\uuuu{x})\bigr).
\end{eqnarray*}
In particular, the $\Br_3\ltimes \{\pm 1\}^3$ orbit of $\uuuu{x}$ coincides with the $(G^{phi}\ltimes G^{sign})\rtimes \langle\gamma,\gamma_2\rangle$ orbit of $\uuuu{x}$ in the following cases:
\begin{list}{}{}
\item[(i)] if $x_i=x_j$ for some $i\neq j$,
\item[(ii)] if $x_i=0$ for some $i$ (observe $\delta_3^{\R}\gamma \varphi_1(x_1,x_2,0)=(x_2,x_1,0)$),
\item[(iii)] if $\uuuu{x}=(x_1,x_2,\frac{1}{2}x_1x_2)$ with $|x_i|\geq 3$ and $|x_1|\neq |x_2|$ (observe $\varphi_3(\uuuu{x})=(x_2,x_1,\frac{1}{2}x_1x_2)$).
\end{list}
In Lemma \ref{t4.12} 24 sets $C_1,...,C_{24}$ of local minima are considered. The only local minima $\uuuu{x}\in\bigcup_{i=1}^{24} C_i$ which satisfy none of the conditions (i), (ii) and (iii) are those in $C_{16}\cup C_{22}\cup C_{24}$.
Theorem \ref{t4.13} (b) shows that in these cases the $\Br_3\ltimes\{\pm 1\}^3$ orbits of $\uuuu{x}$ and of $\gamma_2(\uuuu{x})$ are indeed disjoint. In particular, all orbits in the fibers $r^{-1}(\rho)$ with $\rho\in\{0,1,2,3,4\}$ contain local minima which satisfy (i), (ii) or (iii), so for these fibers the classification in Theorem \ref{t4.6} and the classification by Kr\"uger and Cecotti-Vafa coincide.

We do not know whether for $\uuuu{x}\in C_{16}\cup C_{22}\cup C_{24}$ and $\gamma_2(\uuuu{x})$ the corresponding unimodular bilinear lattices with sets of distinguished bases are isomorphic or not. For $\uuuu{x}$ and $\gamma_2(\uuuu{x})$ in one $\Br_3\ltimes\{\pm 1\}^3$ orbit they are isomorphic, see Remark \ref{t3.20} (iii).
\end{remarks}

\section{A classification of the $\Br_3\ltimes \{\pm 1\}^3$ orbits in $\Z^3$} \label{s4.3}

This section refines the results of section \ref{s4.2} on the braid group action on integer upper triangular $3\times 3$ matrices. Using pseudo-graphs, it gives a classification of all orbits of $\Br_3$ on $\Z^3/\{\pm 1\}^3$. Definition \ref{t4.9} makes precise what is meant here by a pseudo-graph, and it defines a pseudo-graph $\GG(\uuuu{x})$ for any local minimum $\uuuu{x}\in\Z^3$. As $\Br_3\ltimes\{\pm 1\}^3$ and $(G^{phi}\ltimes\{\pm 1\}^3) \rtimes\langle \gamma\rangle$ are semidirect products with normal subgroups $\{\pm 1\}^3$, the groups $\Br_3$ and $G^{phi}\rtimes\langle\gamma\rangle$ act on $\Z^3/\{\pm 1\}^3$.

\begin{definition}\label{t4.9}
(a) For any set $\VV$, $\PP(\VV)$ denotes its power set, so the set of all its subsets, and $\PP_k(\VV)$ for some $k\in\N$ denotes the set of all subsets with $k$ elements. We will use only $\PP_1(\VV)$ and $\PP_2(\VV)$.

(b) A {\it pseudo-graph} \index{pseudo-graph} is here a tuple $\GG=(\VV,\VV_0,\VV_1,\VV_2,v_0,\EE_1,\EE_2,\EE_3,\EE_\gamma)$ with the following ingredients: $\VV$ is a non-empty finite or countably infinite set of vertices.
$\VV_0,\VV_1,\VV_2\subset\VV$ are pairwise disjoint subsets, $\VV_0$ is not empty (the sets $\VV_1$ and $\VV_2$ may be empty, the union $\VV_0\cup\VV_1\cup\VV_2$ can be equal to $\VV$ or a proper subset of $\VV$). $v_0\in\VV_0$ is a distinguished vertex in $\VV_0$. $\EE_1,\EE_2,\EE_3\subset \PP_1(\VV)\cup\PP_2(\VV)$ are sets of undirected edges. A subset of $\VV$ with two elements means an edge between the two vertices. A subset of $\VV$ with one element means a loop from the vertex to itself. $\EE_\gamma=\{(v_0,v_1),(v_2,v_0)\}$ for some $v_1,v_2\in\VV_0$ is a set of two directed edges, or of only one directed edge if $v_1=v_2=v_0$; in that case it is a directed loop.

(c) An isomorphism between two pseudo-graphs $\GG$ and $\www{\GG}$ is a bijection $\phi:\VV\to\www{\VV}$ with $\phi(v_0)=\www{v_0}$ which induces bijections $\phi:\VV_i\to \www{\VV_i}$, $\phi:\EE_j\to\www{\EE_j}$ and $\phi:\EE_\gamma\to\www{\EE_\gamma}$.

(d) $\GG|_{\VV_0\cup\VV_1}$ denotes the restriction of a pseudo-graph $\GG$ to the vertex set $\VV_0\cup\VV_1$, so one deletes all vertices in $\VV-(\VV_0\cup\VV_1)$ and all edges with at least one end in $\VV-(\VV_0\cup\VV_1)$. Analogously, $\GG|_{\VV_0}$ denotes the restriction of a pseudo-graph $\GG$ to the vertex set $\VV_0$.

(e) Define
\begin{eqnarray*}
\LL_0&:=& \{\uuuu{x}/\{\pm 1\}^3\,|\, \uuuu{x}\in\Z^3 \textup{ is a local minimum}\},\\
\LL_1&:=& \{\uuuu{y}/\{\pm 1\}^3\,|\,\uuuu{y}\in\Z^3\textup{ with } |y_i|=1\textup{ for some }i\}-\LL_0,\\
\LL_2&:=& \Z^3/\{\pm 1\}^3-(\LL_0\cup\LL_1).
\end{eqnarray*}

(f) A pseudo-graph $\GG(\uuuu{x})$ is associated to a local minimum $\uuuu{x}\in\Z^3$ in the following way:
\begin{eqnarray*}
\VV&:=&\Br_3(\uuuu{x}/\{\pm 1\}^3)\subset\Z^3/\{\pm 1\}^3,\\
\VV_0&:=& \VV\cap \LL_0\textup{ is the set of sign classes in }\VV\textup{ of local minima},\\
\VV_1&:=& \VV\cap \LL_1,\\
\VV_2&:=& \{w\in\VV\cap\LL_2\,|\, \textup{an } i\in\{1,2,3\}\textup{ with } \varphi_i(w)\in\VV_0\cup\VV_1\textup{ exists}\},\\
v_0&:=& \uuuu{x}/\{\pm 1\}^3,\\
\EE_i&:=& \{\{w,\varphi_i(w)\}\,|\, w\in \VV\} \quad\textup{ for }i\in\{1,2,3\},\\
\EE_\gamma&:=& \{(v_0,\gamma(v_0)),(\gamma^{-1}(v_0),v_0)\}.
\end{eqnarray*}

(g) An {\it infinite tree} \index{infinite tree} $(\WW,\FF)$ consists of a countably infinite set $\WW$ of vertices and a set $\FF\subset\PP_2(\WW)$ of undirected edges such that the graph is connected and has no cycles. A $(2,\infty\times 3)$-tree \index{$(2,\infty\times 3)$-tree} is an infinite tree with a distinguished vertex with two neighbours such that any other vertex has three neighbours.
\end{definition}

The next lemma already gives structural results about the pseudo-graphs $\GG(\uuuu{x})$ for the local minima $\uuuu{x}\in\Z^3$. Theorem \ref{t4.13} and the Remarks \ref{t4.14} will give a complete classification of all isomorphism classes of pseudo-graphs $\GG(\uuuu{x})$ for the local minima $\uuuu{x}\in\Z^3$.

\begin{lemma}\label{t4.10}
Let $\uuuu{x}\in\Z^3$ be a local minimum with pseudo-graph $\GG(\uuuu{x})=(\VV,\VV_0,\VV_1,\VV_2,v_0,\EE_1,\EE_2,\EE_3, \EE_\gamma)$.

(a) ($\uuuu{x}$ and $\GG(\uuuu{x})$ are not used in part (a)) For $w\in\LL_2$, there are $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$ and $\|\varphi_i(w)\|<\|w\|$, $\|\varphi_j(w)\|>\|w\|$, $\|\varphi_k(w)\|>\|w\|$, $\varphi_j(w)\in\LL_2$, $\varphi_k(w)\in\LL_2$ and $\varphi_j(w)\neq\varphi_k(w)$.

(b) Let $w\in\VV_2$, so in particular $w\in\LL_2$. Choose $i,j,k$ as in part (a). The edge which connects $w\in\VV_2$ with $\VV_0\cup\VV_1$ is in $\EE_i$.
After deleting this edge, the component of the remaining pseudo-graph which contains $w$ is a $(2,\infty\times 3)$-tree with distinguished vertex $w$ and all vertices in $\LL_2$.

(c) The pseudo-graph $\GG(\uuuu{x})$ is connected.

(d) The pseudo-graph $\GG(\uuuu{x})|_{\VV_0\cup \VV_1}$ is connected.

(e) The pseudo-graph $\GG(\uuuu{x})|_{\VV_0\cup\VV_1}$ is finite, and
\begin{eqnarray*}
\VV_0=\langle\gamma\rangle (v_0)\quad\textup{or}\quad \VV_0=\langle\gamma,\gamma_2\rangle(v_0).
\end{eqnarray*}
\end{lemma}

{\bf Proof:} (a) Consider $w=\uuuu{y}/\{\pm 1\}^3\in\LL_2$. Then $|y_i|\geq 2$ for $i\in\{1,2,3\}$ because $w\notin\LL_0\cup\LL_1$. $w\notin\LL_0$ implies $y_1y_2y_3>0$. We can suppose $y_1,y_2,y_3\in\Z_{\geq 2}$. Observe for $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$
\begin{eqnarray*}
&&\|\varphi_j(\uuuu{y})\|^2-\|\varphi_i(\uuuu{y})\|^2\\
&=& (y_i^2+(y_iy_k-y_j)^2+y_k^2) -((y_jy_k-y_i)^2+y_j^2+y_k^2)\\
&=& (y_i^2-y_j^2)y_k^2.
\end{eqnarray*}
Consider the case $2\leq y_1\leq y_2\leq y_3$. The other cases are analogous. Then
\begin{eqnarray*}
\|\varphi_3(\uuuu{y})\|\leq \|\varphi_2(\uuuu{y})\| \leq \|\varphi_1(\uuuu{y})\|.
\end{eqnarray*}
Because $w\notin\LL_0$, $\|\varphi_3(\uuuu{y})\|<\|\uuuu{y}\|$. Also $\varphi_2(\uuuu{y})=(y_3,y_1y_3-y_2,y_1)$ with
\begin{eqnarray*}
\varphi_2(\uuuu{y})_2=y_1y_3-y_2 \geq (y_1-1)y_3\left\{\begin{array}{ll} \geq 2y_3>y_2&\textup{ if }y_1>2,\\ = y_3>y_2&\textup{ if }y_1=2.\end{array}\right.
\end{eqnarray*}
Here $y_3>y_2$ if $y_1=2$ because $\uuuu{y}=(2,y_2,y_3)$ is not a local minimum. Therefore $\|\varphi_2(\uuuu{y})\|>\|\uuuu{y}\|$, and also $\|\varphi_1(\uuuu{y})\| \geq \|\varphi_2(\uuuu{y})\| >\|\uuuu{y}\|$. In particular, $\varphi_2(\uuuu{y})$ and $\varphi_1(\uuuu{y})$ are not local minima, so $\varphi_2(w)\notin\LL_0$ and $\varphi_1(w)\notin\LL_0$. Obviously $\varphi_2(\uuuu{y})_i\geq 2$ and $\varphi_1(\uuuu{y})_i\geq 2$ for $i\in\{1,2,3\}$, so $\varphi_2(w)\in\LL_2$ and $\varphi_1(w)\in\LL_2$.
The inequality $\varphi_2(w)\neq \varphi_1(w)$ follows from
\begin{eqnarray*}
\varphi_2(\uuuu{y})_2\geq 2y_3>y_3=\varphi_1(\uuuu{y})_2 &&\textup{if }y_1>2,\\
\varphi_2(\uuuu{y})_2=y_1y_3-y_2>(y_1-1)y_3=y_3=\varphi_1(\uuuu{y})_2 &&\textup{if }y_1=2.
\end{eqnarray*}

(b) Because $\varphi_j(w),\varphi_k(w)\in\LL_2$, the edge which connects $w$ to $\VV_0\cup\VV_1$ cannot be in $\EE_j$ or $\EE_k$, so it is in $\EE_i$. Applying part (a) repeatedly, one sees that the component of $\GG(\uuuu{x})-\textup{(this edge)}$ which contains $w$ is a $(2,\infty\times 3)$-tree.

(c) Any vertex $w\in\VV=(G^{phi}\rtimes \langle\gamma\rangle) (v_0)$ is obtained from $v_0$ by applying an element $\psi\gamma^\xi$ with $\psi\in G^{phi}$ and $\xi\in\{0,\pm 1\}$. As $G^{phi}$ is a free Coxeter group with generators $\varphi_1$, $\varphi_2$, $\varphi_3$, applying $\psi\gamma^\xi$ to $v_0$ yields a path in $\GG(\uuuu{x})$ from $v_0$ to $w$.

(d) This follows from (b) and (c).

(e) Consider $w=\uuuu{y}/\{\pm 1\}^3\in \VV_1$. Because $w\notin \VV_0$, $y_1y_2y_3>0$. We can suppose $y_1,y_2,y_3\in\N$, and one of them is equal to $1$. Suppose $1=y_1\leq y_2\leq y_3$. Then
\begin{eqnarray}
\varphi_3(\uuuu{y})&=&(y_2,1,y_2-y_3),\textup{ so } y_2\cdot 1\cdot (y_2-y_3)\leq 0,\nonumber\\
&&\textup{so }\varphi_3(w)\in\VV_0,\label{4.8}\\
\varphi_2(\uuuu{y})&=&(y_3,y_3-y_2,1),\nonumber\\
&&\textup{so }\varphi_2(w)\in\VV_0\cup\VV_1\ (\textup{in }\VV_0\textup{ only if }y_3=y_2),\label{4.9}\\
\varphi_1\varphi_2(\uuuu{y})&=& (-y_2,1,y_3-y_2),\nonumber\\
&&\textup{so }\varphi_1\varphi_2(w)=\varphi_3(w)\in\VV_0. \label{4.10}
\end{eqnarray}
In particular, each vertex in $\VV_1$ is connected by an edge to a vertex in $\VV_0$. Therefore the main point is to show $\VV_0=\langle \gamma\rangle(v_0)$ or $\VV_0=\langle \gamma,\gamma_2\rangle(v_0)$. Then $\VV_0$ and $\VV_1$ are finite.

First case, the restricted pseudo-graph $\GG(\uuuu{x})|_{\VV_0}$ is connected: Then $\|w\|=\|v_0\|$ for each $w\in\VV_0$.
This easily implies $\VV_0=\langle\gamma\rangle(v_0)$ or $\VV_0=\langle\gamma,\gamma_2\rangle(v_0)$. Second case: the restricted pseudo-graph $\GG(\uuuu{x})|_{\VV_0}$ is not connected. We will derive a contradiction. Among all paths in $\GG(\uuuu{x})|_{\VV_0\cup\VV_1}$ which connect vertices in different components of $\GG(\uuuu{x})|_{\VV_0}$, consider a shortest one. It does not contain an edge in $\EE_\gamma$: otherwise one could pass to a path of the same length which connects the same components and has an edge in $\EE_\gamma$ at one end, and dropping that edge would yield a shorter such path. Because each vertex in $\VV_1$ is connected by an edge to a vertex in $\VV_0$, a shortest path contains either one or two vertices in $\VV_1$. In both cases the observations \eqref{4.8}--\eqref{4.10} show that the vertices at the ends of the path lie in the same component of $\GG(\uuuu{x})|_{\VV_0}$, a contradiction. \hfill$\Box$ \begin{examples}\label{t4.11} The following 14 figures show the pseudo-graphs $\GG_1,...,\GG_{14}$ \index{$\GG_1,...,\GG_{14}$} for 14 values $v_0=\uuuu{x}/\{\pm 1\}^3$ with $\uuuu{x}\in\Z^3$ a local minimum. The ingredients of the figures have the following meaning. $\bullet$\qquad a vertex in $\VV_0$, {\tiny{$\otimes$}} \qquad a vertex in $\VV_1$, \noindent \hspace*{0.14cm} \includegraphics[width=0.04\textwidth]{pic-4-1-tree.png} \qquad a vertex in $\VV_2$ together with the $(2,\infty\times 3)$-tree (compare Lemma \ref{t4.10} (b)), $\stackrel{i}{\mbox{-----}}$ \qquad an edge in $\EE_i$, $\stackrel{\gamma}{\rightarrow\hspace*{-0.3cm}\mbox{---}}$ \qquad an edge in $\EE_\gamma$. The pseudo-graphs are enriched in the following way. Each vertex is labeled with a value $\uuuu{y}$ of its sign class $\uuuu{y}/\{\pm 1\}^3$.
We have chosen $\uuuu{y}\in\N^3$ if $y_1y_2y_3>0$ (this holds for all $\uuuu{y}/\{\pm 1\}^3\in\VV_1\cup\VV_2$ and some $\uuuu{y}/\{\pm 1\}^3\in\VV_0$) and $\uuuu{y}\in\Z_{\leq 0}^3$ if $y_1y_2y_3\leq 0$ (this holds for some $\uuuu{y}/\{\pm 1\}^3\in \VV_0$). The vertex $v_0$ can be recognized by the edges in $\EE_\gamma$ leading to and from it. The sets $C_i$ are defined in Lemma \ref{t4.12}. The relations $\GG_j\,:\, C_i$ are explained in Theorem \ref{t4.13}. \end{examples} \begin{figure}[H] \includegraphics[width=0.3\textwidth]{pic-4-2.png} \caption[Figure 4.2]{$\GG_1:\ C_1\ (A_1^3),\ C_2\ (\HH_{1,2}). \quad \textup{Here }\uuuu{x}=(0,0,0)\in C_1.$} \label{Fig:4.2} \end{figure} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{pic-4-3.png} \caption[Figure 4.3]{$\GG_2:\ C_3, C_4, C_5$, so the reducible cases without $A_1^3$. \quad Here $\uuuu{x}=(x,0,0)\in C_3\cup C_4\cup C_5,\ x<0.$} \label{Fig:4.3} \end{figure} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{pic-4-4.png} \caption[Figure 4.4]{$\GG_3:\ C_6\ (A_3)$. \quad Here $\uuuu{x}=(-1,0,-1)$.} \label{Fig:4.4} \end{figure} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{pic-4-5.png} \caption[Figure 4.5]{$\GG_4:\ C_7\ (\widehat{A}_2)$. \quad Here $\uuuu{x}=(-1,-1,-1)$.} \label{Fig:4.5} \end{figure} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{pic-4-6.png} \caption[Figure 4.6]{$\GG_5:\ C_8,C_9$.
\quad Here $\uuuu{x}=(l,2,l)\sim (-l,2,-l)$ with $l\geq 3$.} \label{Fig:4.6} \end{figure} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{pic-4-7.png} \caption[Figure 4.7]{$\GG_6:\ C_{10},C_{11},C_{12}.$ \quad Here $\uuuu{x}=(3,3,3)\in C_{10}$.} \label{Fig:4.7} \end{figure} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{pic-4-8.png} \caption[Figure 4.8]{$\GG_7:\ C_{13}.$ \quad Here $\uuuu{x}=(4,4,8)$.} \label{Fig:4.8} \end{figure} \begin{figure}[H] \includegraphics[width=0.85\textwidth]{pic-4-9.png} \caption[Figure 4.9]{$\GG_8:\ C_{14}.$ \quad Here $\uuuu{x}=(3,4,6)$.} \label{Fig:4.9} \end{figure} \begin{figure}[H] \includegraphics[width=0.85\textwidth]{pic-4-10.png} \caption[Figure 4.10]{$\GG_9:\ C_{15},C_{16},C_{23}, C_{24}.$ \quad Here $\uuuu{x}=(3,4,5)\in C_{16}$.} \label{Fig:4.10} \end{figure} \begin{figure}[H] \includegraphics[width=0.75\textwidth]{pic-4-11.png} \caption[Figure 4.11]{$\GG_{10}:\ C_{17}.$ \quad Here $\uuuu{x}=(-2,-2,0)$.} \label{Fig:4.11} \end{figure} \begin{figure}[H] \includegraphics[width=0.65\textwidth]{pic-4-12.png} \caption[Figure 4.12]{$\GG_{11}:\ C_{18}.$ \quad Here $\uuuu{x}=(-3,-2,0)$.} \label{Fig:4.12} \end{figure} \begin{figure}[H] \includegraphics[width=0.8\textwidth]{pic-4-13.png} \caption[Figure 4.13]{$\GG_{12}:\ C_{19}.$ \quad Here $\uuuu{x}=(-2,-1,0)$.} \label{Fig:4.13} \end{figure} \begin{figure}[H] \includegraphics[width=0.85\textwidth]{pic-4-14.png} \caption[Figure 4.14]{$\GG_{13}:\ C_{20}.$ \quad Here $\uuuu{x}=(-2,-1,-1)$.} \label{Fig:4.14} \end{figure} \begin{figure}[H] \includegraphics[width=0.85\textwidth]{pic-4-15.png} \caption[Figure 4.15]{$\GG_{14}:\ C_{21},C_{22}.$ \quad Here $\uuuu{x}=(-2,-2,-1)\in C_{21}$.} \label{Fig:4.15} \end{figure} \begin{lemma}\label{t4.12} Consider the following 24 sets $C_i$, $i\in\{1,2,...,24\}$, of triples in $\Z^3$.
\index{$C_1,...,C_{24}$} \begin{eqnarray*} C_1&=& \{(0,0,0)\}\quad (A_1^3),\\ C_2&=& \{(2,2,2)\}\quad (\HH_{1,2}),\\ C_3&=& \{(-1,0,0)\}\quad (A_2A_1),\\ C_4&=& \{(-2,0,0)\}\quad (\P^1A_1),\\ C_5&=& \{(x,0,0)\,|\, x\in\Z_{\leq -3}\}\\ && (\textup{the reducible cases without } A_1^3,A_2A_1,\P^1A_1),\\ C_6&=& \{(-1,0,-1)\}\quad (A_3),\\ C_7&=& \{(-1,-1,-1)\}\quad (\widehat{A}_2),\\ C_8&=& \{(-l,2,-l)\,|\, l\in\Z_{\geq 3}\textup{ odd}\},\\ C_9&=& \{(-l,2,-l)\,|\, l\in\Z_{\geq 4}\textup{ even}\},\hspace*{6cm}\\ C_{10}&=& \{(3,3,3)\}\quad (\P^2),\\ C_{11}&=& \{(x,x,x)\,|\, x\in\Z_{\geq 4}\},\\ C_{12}&=& \{(x,x,x)\,|\, x\in\Z_{\leq -2}\},\\ C_{13}&=& \{(2y,2y,2y^2)\,|\, y\in\Z_{\geq 2}\},\\ C_{14}&=& \{(x_1,x_2,\frac{1}{2}x_1x_2)\,|\, 3\leq x_1<x_2, x_1x_2\textup{ even}\},\\ C_{15}&=& \{(x_1,x_1,x_3)\,|\, 3\leq x_1<x_3<\frac{1}{2}x_1^2\} \\ && \cup\ \{(x_1,x_2,x_2)\,|\, 3\leq x_1<x_2\},\\ C_{16}&=& \{(x_1,x_2,x_3)\,|\, 3\leq x_1<x_2<x_3 <\frac{1}{2}x_1x_2\}\\ &&\cup\ \{(x_1,x_2,x_3)\,|\, 3\leq x_2<x_1<x_3<\frac{1}{2}x_1x_2\},\\ C_{17}&=& \{(x,x,0)\,|\, x\in\Z_{\leq -2}\},\\ C_{18}&=& \{(x_1,x_2,0)\,|\, x_1<x_2\leq -2\},\\ C_{19}&=& \{(x,-1,0)\,|\, x\in\Z_{\leq -2}\},\\ C_{20}&=& \{(x,-1,-1)\,|\, x\in\Z_{\leq -2}\},\\ C_{21}&=& \{(x,x,-1)\,|\, x\in\Z_{\leq -2}\},\\ C_{22}&=& \{(x_1,x_2,-1)\,|\, x_1<x_2\leq -2\}\\ &&\cup\ \{(x_1,x_2,-1)\,|\, x_2<x_1\leq -2\},\\ C_{23}&=& \{(x_1,x_1,x_3)\,|\, x_1<x_3\leq -2\}\\ &&\cup\ \{(x_1,x_2,x_2)\,|\, x_1<x_2\leq -2\},\\ C_{24}&=& \{(x_1,x_2,x_3)\,|\, x_1<x_2<x_3\leq -2\}\\ &&\cup\ \{(x_1,x_2,x_3)\,|\, x_2<x_1<x_3\leq -2\}. \end{eqnarray*} (a) Each triple in $\bigcup_{i=1}^{24}C_i$ is a local minimum.
For all $\uuuu{x}$ in one set $C_i$, the value $r(\uuuu{x})$ equals the entry $\rho$ in the following table or satisfies the stated inequality, \begin{eqnarray*} \begin{array}{l|l} \rho & i\textup{ with }r(\uuuu{x})=\rho\textup{ for } \uuuu{x}\in C_i\\ \hline 0 & 1,\ 10\\ 1 & 3 \\ 2 & 6 \\ 4 & 2,\ 4,\ 7,\ 8,\ 9\\ <0 & 11,\ 13,\ 14,\ 15,\ 16 \\ >4 & 5,\ 12,\ 17,\ 18,\ 19,\ 20,\ 21,\ 22,\ 23,\ 24 \end{array} \end{eqnarray*} (b) The following table makes statements about the $\langle\gamma\rangle$ orbits and the $\langle\gamma,\gamma_2\rangle$ orbits of $v_0=\uuuu{x}/\{\pm 1\}^3$ with $\uuuu{x}\in \bigcup_{i=1}^{24}C_i$, \begin{eqnarray*} \begin{array}{l|l} i\in\{1,2,7,10,11,12\} & \langle\gamma\rangle(v_0) \textup{ is }\gamma_2\textup{-invariant}\\ & \qquad\textup{ and has size }1\\ \hline i\in\{3,4,5,6,8,9,13,15,17,20,21,23\}& \langle\gamma\rangle(v_0) \textup{ is }\gamma_2\textup{-invariant}\\ & \qquad\textup{ and has size }3\\ \hline i\in\{14,16,18,19,22,24\}& \langle\gamma\rangle(v_0) \textup{ is not }\gamma_2\textup{-invariant}\\ & \qquad\textup{ and has size }3,\\ & \langle\gamma,\gamma_2\rangle(v_0) \textup{ has size }6 \end{array} \end{eqnarray*} (c) The set of all local minima in $\Z^3$ is the following disjoint union, \begin{eqnarray*} &&\Bigl(\dot\bigcup_{i\in\{1,...,24\}-\{14,18,19\}} \dot\bigcup_{\uuuu{x}\in C_i} (G^{sign}\rtimes\langle\gamma\rangle)\{\uuuu{x}\}\Bigr)\\ &\dot\cup& \Bigl(\dot\bigcup_{i\in\{14,18,19\}} \dot\bigcup_{\uuuu{x}\in C_i} (G^{sign}\rtimes\langle\gamma,\gamma_2\rangle)\{\uuuu{x}\} \Bigr). \end{eqnarray*} \end{lemma} {\bf Proof:} Part (b) is trivial. Parts (a) and (c) follow from the characterization of local minima in Lemma \ref{t4.4} and Theorem \ref{t4.6} (c)--(e).
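As an aside, the $\rho$-values and bounds in the table of part (a) can be spot-checked mechanically. The sketch below assumes $r(\uuuu{x})=x_1^2+x_2^2+x_3^2-x_1x_2x_3$, which is consistent with the relation $r(-\uuuu{x})=r(\uuuu{x})+2x_1x_2x_3$ used in the proof of Lemma \ref{t4.18}:

```python
# Spot-check of the table, assuming r(x) = x1^2 + x2^2 + x3^2 - x1*x2*x3
# (consistent with r(-x) = r(x) + 2*x1*x2*x3 used in the proof of Lemma 4.18).
def r(x):
    x1, x2, x3 = x
    return x1**2 + x2**2 + x3**2 - x1*x2*x3

assert r((0, 0, 0)) == 0 and r((3, 3, 3)) == 0                  # i = 1, 10
assert r((-1, 0, 0)) == 1                                       # i = 3
assert r((-1, 0, -1)) == 2                                      # i = 6
assert r((2, 2, 2)) == r((-2, 0, 0)) == r((-1, -1, -1)) == 4    # i = 2, 4, 7
assert all(r((-l, 2, -l)) == 4 for l in range(3, 50))           # i = 8, 9
assert r((4, 4, 4)) < 0 and r((4, 4, 8)) < 0 and r((3, 4, 6)) < 0   # i = 11, 13, 14
assert r((-3, 0, 0)) > 4 and r((-2, -2, -2)) > 4 and r((-2, -2, 0)) > 4  # i = 5, 17, 12
```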
\hfill$\Box$ \begin{theorem}\label{t4.13} (a) $\Z^3$ is the disjoint union $$\dot\bigcup_{i\in\{1,...,24\}}\dot\bigcup_{\uuuu{x}\in C_i} (\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x}).$$ (b) For $v_0=\uuuu{x}/\{\pm 1\}^3$ with $\uuuu{x}\in \bigcup_{i=1}^{24}C_i$, the set $\VV_0=\Br_3(v_0)\cap\LL_0$ of sign classes of local minima in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $\uuuu{x}$ is as follows, \begin{eqnarray*} \VV_0= \langle\gamma\rangle(v_0)&\textup{ if }i\in \{1,...,24\}-\{14,18,19\},\\ \VV_0= \langle\gamma,\gamma_2\rangle(v_0)&\textup{ if }i\in \{14,18,19\}. \end{eqnarray*} (c) The set $\{\GG(\uuuu{x})\,|\, \uuuu{x}\in \bigcup_{i=1}^{24}C_i\}$ of pseudo-graphs $\GG(\uuuu{x})$ for $\uuuu{x}\in\bigcup_{i=1}^{24}C_i$ consists of the 14 isomorphism classes $\GG_1,...,\GG_{14}$ in the Examples \ref{t4.11}. All $\uuuu{x}$ in one set $C_i$ have the same pseudo-graph. The first and second column in the following table give for each of the 14 pseudo-graphs $\GG_j$ the set or sets $C_i$ with $\GG(\uuuu{x})=\GG_j$ for $\uuuu{x}\in C_i$. The third and fourth column in the following table are the subject of Theorem \ref{t4.16}. \begin{eqnarray*} \begin{array}{l|l|l|l} & \textup{sets} & (G^{phi}\rtimes \langle \gamma\rangle )_{\uuuu{x}/\{\pm 1\}^3} & (\Br_3)_{\uuuu{x}/\{\pm 1\}^3} \\ \hline \GG_1 & C_1\ (A_1^3),\ C_2\ (\HH_{1,2}) & G^{phi}\rtimes\langle\gamma\rangle & \Br_3 \\ \GG_2 & C_3,C_4,C_5\textup{ (red.
cases)} & \langle \varphi_1,\gamma^{-1}\varphi_3\rangle & \langle \sigma_1,\sigma_2^2\rangle \\ \GG_3 & C_6\ (A_3) & \langle \gamma\varphi_3,\varphi_2\varphi_1\varphi_3\rangle & \langle\sigma_1\sigma_2,\sigma_1^3\rangle \\ \GG_4 & C_7 \ (\widehat{A}_2) & \langle\gamma,\varphi_2\varphi_1\varphi_3\rangle & \langle\sigma_2\sigma_1,\sigma_1^3\rangle\\ \GG_5 & C_8,\ C_9\ ((-l,2,-l)) & \langle\gamma^{-1}\varphi_1\rangle &\langle\sigma^{mon},\sigma_1^{-1}\sigma_2^{-1}\sigma_1\rangle\\ \GG_6 & C_{10} (\P^2),\ C_{11},\ C_{12} & \langle\gamma\rangle & \langle\sigma_2\sigma_1\rangle\\ \GG_7 & C_{13}\ (\textup{e.g. }(4,4,8)) & \langle \varphi_3\rangle & \langle \sigma_2\sigma_1^2\rangle \\ \GG_8 & C_{14} (\textup{e.g. }(3,4,6)) & \langle\id\rangle & \langle \sigma^{mon}\rangle \\ \GG_9 & C_{15},\ C_{16},\ C_{23},\ C_{24} & \langle\id\rangle &\langle \sigma^{mon}\rangle \\ \GG_{10} & C_{17}\ (\textup{e.g. }(-2,-2,0)) & \langle \gamma^{-1}\varphi_2\rangle & \langle \sigma^{mon},\sigma_2\rangle \\ \GG_{11} & C_{18}\ (\textup{e.g. }(-3,-2,0)) & \langle \gamma^{-1}\varphi_3\varphi_1\rangle & \langle \sigma^{mon},\sigma_2^2\rangle \\ \GG_{12} & C_{19}\ (\textup{e.g. }(-2,-1,0)) & \langle \gamma^{-1}\varphi_3\varphi_1, \varphi_3\varphi_2\varphi_1\rangle & \langle \sigma^{mon},\sigma_2^2,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle \\ \GG_{13} & C_{20}\ (\textup{e.g. }(-2,-1,-1)) & \langle \varphi_2\varphi_3\varphi_1,\varphi_3\varphi_2\varphi_1 \rangle & \langle \sigma^{mon},\sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle \\ \GG_{14} & C_{21},\ C_{22}\ (\textup{e.g. }(-2,-2,-1)) & \langle \varphi_2\varphi_3\varphi_1\rangle & \langle \sigma^{mon},\sigma_2^3\rangle \end{array} \end{eqnarray*} \end{theorem} {\bf Proof:} We start with part (c). It can be seen rather easily for all $\uuuu{x}$ in one family $C_i$ simultaneously. We do not give more details. (b) The pseudo-graphs $\GG_8$, $\GG_{11}$ and $\GG_{12}$ are the only ones among the 14 pseudo-graphs with $|\VV_0|=6$.
By inspection of them or by Lemma \ref{t4.10} (e), for them $\VV_0=\langle\gamma,\gamma_2\rangle(v_0)$. The table in part (c) gives the correspondence $\GG_8\leftrightarrow C_{14}$, $\GG_{11}\leftrightarrow C_{18}$, $\GG_{12}\leftrightarrow C_{19}$. The other 11 pseudo-graphs satisfy $|\VV_0|=1$ or $|\VV_0|=3$, so in any case $\VV_0=\langle\gamma\rangle(v_0)$. (a) Part (c) of Lemma \ref{t4.12} alone already shows that $\Z^3$ is the stated union. Part (b) of Theorem \ref{t4.13} adds only the fact that this is a disjoint union. \hfill$\Box$ \begin{remarks}\label{t4.14} (i) We have 14 pseudo-graphs $\GG_1,...,\GG_{14}$, but 24 sets $C_1,...,C_{24}$, because the subdivision into sets must be fine enough for the table in Theorem \ref{t4.13} (c) and the tables in Lemma \ref{t4.12} (a) and (b). (ii) In the pseudo-graphs $\GG_j$ with $|\VV_0|=3$ or $|\VV_0|=6$ one can choose another distinguished vertex $\www{v_0}\in\VV_0$ and change the set $\EE_\gamma$ to a set $\www{\EE_\gamma}$ accordingly. This gives a pseudo-graph $\www{\GG_j}$ which is not equal to $\GG_j$, but closely related. The graphs $\GG_5$ and $\GG_{10}$ are related by such a change. The following table shows for each $\GG_j$ except $\GG_{10}$ the number of isomorphism classes of pseudo-graphs obtained in this way (including the original pseudo-graphs). In the cases $\GG_8$ and $\GG_{11}$ one has $|\VV_0|=6$, but because of a symmetry of the pseudo-graph without $\EE_\gamma$, there are only 3 related pseudo-graphs. \begin{eqnarray*} \begin{array}{c|cccccccccccccc|c} \GG_i,\ i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & \sum \\ \hline \textup{related} & 1 & 3 & 3 & 1 & 3 & 1 & 3 & 3 & 3 & \GG_5 & 3 & 6 & 3 & 3 & 36\\ \textup{pseudo-graphs} &&&&&&&&&&&&&&& \end{array} \end{eqnarray*} The total number 36 is the number of isomorphism classes of pseudo-graphs $\GG(\uuuu{x})$ for $\uuuu{x}\in\Z^3$ a local minimum.
\end{remarks} \section{The stabilizers of upper triangular $3\times 3$ matrices} \label{s4.4} The groups $G^{phi}\rtimes \langle\gamma\rangle$ and $\Br_3$ act on $\Z^3/\{\pm 1\}^3$. The pseudo-graphs in the Examples \ref{t4.11} and Theorem \ref{t4.13} offer a convenient way to determine the stabilizers $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$ and $(\Br_3)_{v_0}$ for $v_0=\uuuu{x}/\{\pm 1\}^3$ with $\uuuu{x}\in\bigcup_{i=1}^{24}C_i$ a local minimum. The stabilizers of $v_0$ depend only on the pseudo-graph $\GG_j$ with $\GG_j=\GG(\uuuu{x})$. The results are presented in Theorem \ref{t4.16}. The Remarks \ref{t4.15} prepare for this. \begin{remarks}\label{t4.15} (i) First we recall some well-known facts about the groups $SL_2(\Z)$ and $PSL_2(\Z)$ and their relation to $\Br_3$. The group $SL_2(\Z)$ is generated by the matrices $$A_1:=\begin{pmatrix}1&-1\\0&1\end{pmatrix} \quad\textup{and}\quad A_2:=\begin{pmatrix}1&0\\1&1\end{pmatrix}.$$ \index{generators of $SL_2(\Z)$} Generating relations are \begin{eqnarray*} A_1A_2A_1=A_2A_1A_2\quad\textup{and}\quad (A_2A_1)^6=E_2. \end{eqnarray*} The group $\Br_3$ is generated by the elementary braids $\sigma_1$ and $\sigma_2$. The only generating relation is $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2.$ Therefore there is a surjective group homomorphism \begin{eqnarray*} \Br_3\to SL_2(\Z),\quad \sigma_1\mapsto A_1, \quad\sigma_2\mapsto A_2, \end{eqnarray*} with kernel $\langle (\sigma_2\sigma_1)^6\rangle =\langle (\sigma^{mon})^2\rangle$. It induces a surjective group homomorphism \begin{eqnarray*} \Br_3\to PSL_2(\Z),\quad \sigma_1\mapsto[A_1], \quad \sigma_2\mapsto [A_2], \end{eqnarray*} with kernel $\langle\sigma^{mon}\rangle$ because $(A_2A_1)^3=-E_2$. (ii) The action of $\Br_3\ltimes \{\pm 1\}^3$ on $T^{uni}_3(\Z)$ and on $\Z^3$ is fixed at the beginning of section \ref{s4.1}. One sees that $\sigma^{mon}=(\sigma_2\sigma_1)^3$ acts trivially on $T^{uni}_3(\Z)$ and $\Z^3$. This can be checked directly.
Or it can be seen as a consequence of the following two facts. \begin{list}{}{} \item[(1)] The action of $\Br_3\ltimes\{\pm 1\}^3$ on $(\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x})$ for some $\uuuu{x}\in\Z^3$ is induced by the action of $\Br_3\ltimes\{\pm 1\}^3$ on the set $\BB^{dist}$ of distinguished bases of a triple $(H_\Z,L,\uuuu{e})$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})$ by $S((\alpha,\varepsilon)(\uuuu{x}))= L((\alpha,\varepsilon)(\uuuu{e})^t, (\alpha,\varepsilon)(\uuuu{e}))^t$ for $(\alpha,\varepsilon)\in \Br_3\ltimes\{\pm 1\}^3$. \item[(2)] For $(\alpha,\varepsilon)=(\sigma^{mon},(1,1,1))$, $(\alpha,\varepsilon)(\uuuu{e})=Z((\alpha,\varepsilon)) (\uuuu{e})=M(\uuuu{e})$ by Theorem \ref{t3.10}, and $L(M(\uuuu{e})^t,M(\uuuu{e}))=L(\uuuu{e}^t,\uuuu{e})$. \end{list} In any case the action of $\Br_3\ltimes\{\pm 1\}^3$ on $\Z^3$ boils down to a nonlinear action of $PSL_2(\Z)\ltimes G^{sign}$ where $G^{sign}=\langle\delta_1^\R,\delta_2^\R\rangle$ captures the action of $\{\pm 1\}^3$ on $\Z^3$, see section \ref{s4.1}. (iii) The shape of this nonlinear action led us in Definition \ref{t4.1} and Theorem \ref{t4.2} to the group $(G^{phi}\ltimes G^{sign})\rtimes \langle\gamma\rangle =(G^{phi}\rtimes \langle\gamma\rangle)\ltimes G^{sign}$. In fact, $G^{phi}\rtimes\langle\gamma\rangle \cong PSL_2(\Z)$. This can be seen as follows. The formulas \eqref{4.1}--\eqref{4.4} in the proof of Theorem \ref{t4.2} (c) give lifts to $\Br_3\ltimes\{\pm 1\}^3$ of the generators $\varphi_1,\varphi_2,\varphi_3$ and $\gamma$ of $G^{phi}\rtimes\langle\gamma\rangle$. Dropping the generators of the sign action in these lifts, we obtain the following lifts to $\Br_3$, \begin{eqnarray} \left.\begin{array}{l} l(\gamma)=\sigma_2\sigma_1, \quad l(\gamma^{-1})=\sigma_1^{-1}\sigma_2^{-1},\\ l(\varphi_1)=l(\gamma)^{-1}\sigma_2^{-1} =\sigma_1^{-1}\sigma_2^{-2},\\ l(\varphi_2)=l(\gamma)\sigma_2=\sigma_2\sigma_1\sigma_2 =\sigma_1\sigma_2\sigma_1,\\ l(\varphi_3)=l(\gamma)\sigma_1=\sigma_2\sigma_1^2.
\end{array}\right\}\label{4.11} \end{eqnarray} The equality of groups in Theorem \ref{t4.2} (c) boils down after dropping the sign action to an equality of groups \begin{eqnarray}\label{4.12} \langle \sigma_1^\R,\sigma_2^\R\rangle\cong G^{phi}\rtimes\langle\gamma\rangle. \end{eqnarray} As $(\sigma_2^\R\sigma_1^\R)^3=\id$, we obtain a surjective group homomorphism $PSL_2(\Z)\to G^{phi}\rtimes\langle\gamma\rangle$ with $[A_i]\mapsto \sigma_i^\R$. The subgroup $\langle [A_1]^{-1}[A_2]^{-2},[A_2][A_1][A_2], [A_2][A_1]^2\rangle$ of $PSL_2(\Z)$ is mapped to $G^{phi}$. One easily calculates that this subgroup is the free Coxeter group with three generators which was considered in Remark \ref{t6.12} (iv) and which has index three in $PSL_2(\Z)$. As $G^{phi}$ is also a free Coxeter group with three generators and has index 3 in $G^{phi}\rtimes\langle\gamma\rangle$, the map $PSL_2(\Z)\to G^{phi}\rtimes\langle\gamma\rangle$ is a group isomorphism. (iv) For use in the proof of Theorem \ref{t4.16} we recall the formulas \eqref{4.6} \begin{eqnarray} \left.\begin{array}{ccccccccc} \gamma\varphi_1&=&\varphi_2\gamma,& \gamma\varphi_2&=&\varphi_3\gamma,& \gamma\varphi_3&=&\varphi_1\gamma,\\ \varphi_1\gamma^{-1}&=&\gamma^{-1}\varphi_2,& \varphi_2\gamma^{-1}&=&\gamma^{-1}\varphi_3,& \varphi_3\gamma^{-1}&=&\gamma^{-1}\varphi_1, \end{array}\right\}\label{4.13} \end{eqnarray} from the proof of Theorem \ref{t4.2}. (v) The relation $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$ is equivalent to each of the two relations \begin{eqnarray*} \sigma_1\sigma_2\sigma_1^{-1}=\sigma_2^{-1}\sigma_1\sigma_2 \quad\textup{and}\quad \sigma_1^{-1}\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2^{-1} \end{eqnarray*} and induces for any $m\in\Z$ the relations \begin{eqnarray}\label{4.14} \sigma_1\sigma_2^m\sigma_1^{-1}=\sigma_2^{-1}\sigma_1^m\sigma_2 \quad\textup{and}\quad \sigma_1^{-1}\sigma_2^m\sigma_1=\sigma_2\sigma_1^m\sigma_2^{-1}. 
\end{eqnarray} This will also be useful in the proof of Theorem \ref{t4.16}. \end{remarks} \begin{theorem}\label{t4.16} Consider $v_0:=\uuuu{x}/\{\pm 1\}^3$ with $\uuuu{x}\in\bigcup_{i=1}^{24}C_i\subset\Z^3$ a local minimum, and consider the pseudo-graph $\GG_j$ with $\GG_j=\GG(\uuuu{x})$. The entries in the third and fourth columns of the table in Theorem \ref{t4.13} which are in the row of $\GG_j$ give the stabilizers $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$ and $(\Br_3)_{v_0}$ of $v_0$. \end{theorem} {\bf Proof:} First we treat the stabilizer $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$. The {\it total set} $|\GG_j|$ of the pseudo-graph $\GG_j$ means the union of vertices and edges in an embedding of the pseudo-graph in the real plane $\R^2$ as in the figures in the Examples \ref{t4.11}. The fundamental group $\pi_1(|\GG_j|,v_0)$ is a free group with 0, 1, 3, 4, 5 or 6 generators. The number of generators is the number of compact components of $\R^2-|\GG_j|$. A generator is the class of a closed path which starts and ends at $v_0$ and turns once around one of these compact components. Any such closed path induces a word in $\varphi_1,\varphi_2,\varphi_3,\gamma$ and $\gamma^{-1}$. This word gives an element of $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$. We obtain a group homomorphism \begin{eqnarray*} \pi_1(|\GG_j|,v_0)\to (G^{phi}\rtimes\langle\gamma\rangle)_{v_0}. \end{eqnarray*} It is surjective because any element of $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$ can be written as $\psi\gamma^\xi$ with $\psi\in G^{phi}$ and $\xi\in\{0,\pm 1\}$, and the element $\psi\gamma^\xi$ corresponds to a closed path in $|\GG_j|$ which starts and ends at $v_0$. In fact, this shows that we could restrict to closed paths which pass through an edge in $\EE_\gamma$ either never or only once, at the beginning. But we will not use this fact.
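The matrix identities quoted in Remarks \ref{t4.15} and the relations \eqref{4.14} used in the rewriting below can be consistency-checked in the image $SL_2(\Z)$. This detects errors but, since $\Br_3\to SL_2(\Z)$ has kernel $\langle(\sigma^{mon})^2\rangle$, it does not by itself prove relations in $\Br_3$. A minimal sketch:

```python
# Consistency check in SL2(Z) with A1 = [[1,-1],[0,1]] and A2 = [[1,0],[1,1]].
def mul(*ms):
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = [[sum(out[i][k]*m[k][j] for k in range(2)) for j in range(2)]
               for i in range(2)]
    return out

A1, A1i = [[1, -1], [0, 1]], [[1, 1], [0, 1]]
A2, A2i = [[1, 0], [1, 1]], [[1, 0], [-1, 1]]

def pw(a, ai, m):                       # m-th power, m may be negative
    return mul(*([a]*m)) if m >= 0 else mul(*([ai]*(-m)))

assert mul(A1, A2, A1) == mul(A2, A1, A2)                  # braid relation
P3 = mul(A2, A1, A2, A1, A2, A1)
assert P3 == [[-1, 0], [0, -1]]                            # (A2*A1)^3 = -E2
assert mul(P3, P3) == [[1, 0], [0, 1]]                     # (A2*A1)^6 = E2

# images of the relations (4.14) for several m
for m in range(-4, 5):
    assert mul(A1, pw(A2, A2i, m), A1i) == mul(A2i, pw(A1, A1i, m), A2)
    assert mul(A1i, pw(A2, A2i, m), A1) == mul(A2, pw(A1, A1i, m), A2i)
```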
The following list gives for each of the 14 pseudo-graphs $\GG_1,...,\GG_{14}$ in the first line one word in $\varphi_1,\varphi_2,\varphi_3,\gamma$ and $\gamma^{-1}$ for each compact component of $\R^2-|\GG_j|$. In the following lines the relations \eqref{4.13} are used to show that all these words are generated in $G^{phi}\rtimes \langle\gamma\rangle$ by the generators in the third column in the table in Theorem \ref{t4.13}. The generators are underlined. \begin{list}{}{} \item[$\GG_1:$] $\uuuu{\varphi_1}$, $\uuuu{\varphi_2}$, $\uuuu{\varphi_3}$, $\uuuu{\gamma}$. \item[$\GG_2:$] $\uuuu{\varphi_1}$, $\varphi_3\gamma$, $\gamma\varphi_2$, $\gamma^{-1}\varphi_2\gamma$, $\gamma\varphi_3\gamma^{-1}$, $\gamma\varphi_1\gamma$. \begin{eqnarray*} \gamma\varphi_1\gamma&=&\gamma^2\varphi_3 =\uuuu{\gamma^{-1}\varphi_3},\\ \gamma\varphi_2&=& \varphi_3\gamma =(\gamma^{-1}\varphi_3)^{-1},\\ \gamma^{-1}\varphi_2\gamma&=&\varphi_1,\\ \gamma\varphi_3\gamma^{-1}&=&\varphi_1. \end{eqnarray*} \item[$\GG_3:$] $\varphi_1\gamma$, $\varphi_2\varphi_3\gamma$, $\gamma\varphi_2\gamma$, $\gamma\varphi_1\varphi_2$, $\uuuu{\gamma\varphi_3}$. \begin{eqnarray*} \varphi_1\gamma&=& \gamma\varphi_3,\\ \varphi_2\varphi_3\gamma&=& \varphi_2\gamma\varphi_2 =\gamma\varphi_1\varphi_2 = (\gamma\varphi_3) (\uuuu{\varphi_2\varphi_1\varphi_3})^{-1},\\ \gamma\varphi_2\gamma&=& \gamma^2\varphi_1 =\gamma^{-1}\varphi_1 =\varphi_3\gamma^{-1} =(\gamma\varphi_3)^{-1}. \end{eqnarray*} \item[$\GG_4:$] $\uuuu{\gamma}$, $\uuuu{\varphi_2\varphi_1\varphi_3}$, $\varphi_3\varphi_2\varphi_1$, $\varphi_1\varphi_3\varphi_2$. \begin{eqnarray*} \gamma\varphi_2\varphi_1\varphi_3\gamma^{-1} &=&\gamma\varphi_2\varphi_1\gamma^{-1}\varphi_1 =\gamma\varphi_2\gamma^{-1}\varphi_2\varphi_1 =\varphi_3\varphi_2\varphi_1,\\ \gamma^{-1}\varphi_2\varphi_1\varphi_3\gamma &=&\gamma^{-1}\varphi_2\varphi_1\gamma\varphi_2 =\gamma^{-1}\varphi_2\gamma\varphi_3\varphi_2 =\varphi_1\varphi_3\varphi_2. 
\end{eqnarray*} \item[$\GG_5:$] $\uuuu{\gamma^{-1}\varphi_1}$, $\gamma\varphi_2\gamma$, $\gamma\varphi_3$. \begin{eqnarray*} \gamma\varphi_2\gamma&=& \gamma^2\varphi_1 =\gamma^{-1}\varphi_1,\\ \gamma\varphi_3&=& \varphi_1\gamma =(\gamma^{-1}\varphi_1)^{-1}. \end{eqnarray*} \item[$\GG_6:$] $\uuuu{\gamma}$. \item[$\GG_7:$] $\uuuu{\varphi_3}$, $\gamma^{-1}\varphi_1\gamma$, $\gamma\varphi_2\gamma^{-1}$. \begin{eqnarray*} \gamma^{-1}\varphi_1\gamma&=& \varphi_3,\\ \gamma\varphi_2\gamma^{-1}&=& \varphi_3. \end{eqnarray*} \item[$\GG_8:$] $\uuuu{\id}$. \item[$\GG_9:$] $\uuuu{\id}$. \item[$\GG_{10}:$] $\uuuu{\gamma^{-1}\varphi_2}$, $\gamma\varphi_3\gamma$, $\gamma\varphi_1$. \begin{eqnarray*} \gamma\varphi_3\gamma&=&\gamma^2\varphi_2 =\gamma^{-1}\varphi_2,\\ \gamma\varphi_1&=&\varphi_2\gamma =(\gamma^{-1}\varphi_2)^{-1}. \end{eqnarray*} \item[$\GG_{11}:$] $\uuuu{\gamma^{-1}\varphi_3\varphi_1}$, $\gamma\varphi_1\varphi_2\gamma$, $\varphi_2\varphi_3\gamma^{-1}$. \begin{eqnarray*} \gamma\varphi_1\varphi_2\gamma &=& \gamma\varphi_1\gamma\varphi_1 =\gamma^2\varphi_3\varphi_1=\gamma^{-1}\varphi_3\varphi_1,\\ \varphi_2\varphi_3\gamma^{-1} &=& \varphi_2\gamma^{-1}\varphi_1=\gamma^{-1}\varphi_3\varphi_1. \end{eqnarray*} \item[$\GG_{12}:$] $\uuuu{\varphi_3\varphi_2\varphi_1}$, $\uuuu{\gamma^{-1}\varphi_3\varphi_1}$, $\gamma^{-1}\varphi_1\varphi_3\varphi_2\gamma$, $\gamma\varphi_1\varphi_2\gamma$, $\varphi_2\varphi_3\gamma^{-1}$, $\gamma\varphi_2\varphi_1\varphi_3\gamma^{-1}$. 
\begin{eqnarray*} \gamma^{-1}\varphi_1\varphi_3\varphi_2\gamma &=& \gamma^{-1}\varphi_1\varphi_3\gamma\varphi_1 =\gamma^{-1}\varphi_1\gamma\varphi_2\varphi_1 =\varphi_3\varphi_2\varphi_1,\\ \gamma\varphi_1\varphi_2\gamma &=& \gamma\varphi_1\gamma\varphi_1 =\gamma^2\varphi_3\varphi_1=\gamma^{-1}\varphi_3\varphi_1,\\ \varphi_2\varphi_3\gamma^{-1} &=& \varphi_2\gamma^{-1}\varphi_1 =\gamma^{-1}\varphi_3\varphi_1,\\ \gamma\varphi_2\varphi_1\varphi_3\gamma^{-1} &=& \gamma\varphi_2\varphi_1\gamma^{-1}\varphi_1 =\gamma\varphi_2\gamma^{-1}\varphi_2\varphi_1 =\varphi_3\varphi_2\varphi_1. \end{eqnarray*} \item[$\GG_{13}:$] $\uuuu{\varphi_2\varphi_3\varphi_1}$, $\uuuu{\varphi_3\varphi_2\varphi_1}$, $\gamma^{-1}\varphi_3\varphi_1\varphi_2\gamma$, $\gamma^{-1}\varphi_1\varphi_3\varphi_2\gamma$, $\gamma\varphi_1\varphi_2\varphi_3\gamma^{-1}$, $\gamma\varphi_2\varphi_1\varphi_3\gamma^{-1}$. \begin{eqnarray*} \gamma^{-1}\varphi_3\varphi_1\varphi_2\gamma &=& \gamma^{-1}\varphi_3\varphi_1\gamma\varphi_1 =\gamma^{-1}\varphi_3\gamma\varphi_3\varphi_1 =\varphi_2\varphi_3\varphi_1,\\ \gamma^{-1}\varphi_1\varphi_3\varphi_2\gamma &=& \gamma^{-1}\varphi_1\varphi_3\gamma\varphi_1 =\gamma^{-1}\varphi_1\gamma\varphi_2\varphi_1 =\varphi_3\varphi_2\varphi_1,\\ \gamma\varphi_1\varphi_2\varphi_3\gamma^{-1} &=& \gamma\varphi_1\varphi_2\gamma^{-1}\varphi_1 =\gamma\varphi_1\gamma^{-1}\varphi_3\varphi_1 =\varphi_2\varphi_3\varphi_1,\\ \gamma\varphi_2\varphi_1\varphi_3\gamma^{-1} &=& \gamma\varphi_2\varphi_1\gamma^{-1}\varphi_1 =\gamma\varphi_2\gamma^{-1}\varphi_2\varphi_1 =\varphi_3\varphi_2\varphi_1. \end{eqnarray*} \item[$\GG_{14}:$] $\uuuu{\varphi_2\varphi_3\varphi_1}$, $\gamma^{-1}\varphi_3\varphi_1\varphi_2\gamma$, $\gamma\varphi_1\varphi_2\varphi_3\gamma^{-1}$. \begin{eqnarray*} \gamma^{-1}\varphi_3\varphi_1\varphi_2\gamma &=& \varphi_2\varphi_3\varphi_1\quad (\textup{see }\GG_{13}),\\ \gamma\varphi_1\varphi_2\varphi_3\gamma^{-1} &=& \varphi_2\varphi_3\varphi_1 \quad (\textup{see }\GG_{13}). 
\end{eqnarray*} \end{list} Therefore the stabilizer $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$ is as claimed in the third column of the table in Theorem \ref{t4.13}. Now we treat the stabilizer $(\Br_3)_{v_0}$. It is the preimage in $\Br_3$ of $(G^{phi}\rtimes\langle\gamma\rangle)_{v_0}$ under the surjective group homomorphism $\Br_3\to G^{phi}\rtimes \langle\gamma\rangle$ with kernel $\langle \sigma^{mon}\rangle$. So if $(G^{phi}\rtimes \langle\gamma\rangle)_{v_0} =\langle g_1,...,g_m\rangle$ and $h_1,...,h_m$ are any lifts to $\Br_3$ of $g_1,...,g_m$, then $(\Br_3)_{v_0}=\langle \sigma^{mon},h_1,...,h_m\rangle$. For any word in $\varphi_1,\varphi_2,\varphi_3,\gamma$ and $\gamma^{-1}$ we use the lifts in \eqref{4.11} to construct a lift of this word. The following list gives, for each of the 14 pseudo-graphs $\GG_1,...,\GG_{14}$ and each generator of $(G^{phi}\rtimes \langle\gamma\rangle)_{v_0}$ in the third column of the table in Theorem \ref{t4.13}, this lift and rewrites it using the relations \eqref{4.14}. The generators in the fourth column of the table in Theorem \ref{t4.13} are underlined.
\begin{eqnarray*} \GG_1:& G^{phi}&\rightsquigarrow \Br_3,\\ \GG_2:& \varphi_1 &\rightsquigarrow \sigma_1^{-1}\sigma_2^{-2} =\sigma_2(\sigma_2^{-1}\sigma_1^{-1}\sigma_2^{-1})\sigma_2^{-1} =\sigma_2(\sigma_1^{-1}\sigma_2^{-1}\sigma_1^{-1})\sigma_2^{-1} \\ &&=(\sigma_2^2\sigma_1)(\sigma_1^{-1}\sigma_2^{-1})^3 =\uuuu{\sigma_2^2}\sigma_1(\sigma^{mon})^{-1} ,\\ & \gamma^{-1}\varphi_3 &\rightsquigarrow (\sigma_1^{-1}\sigma_2^{-1})(\sigma_2\sigma_1^2) =\uuuu{\sigma_1} ,\\ \GG_3:& \gamma\varphi_3 &\rightsquigarrow (\sigma_2\sigma_1)(\sigma_2\sigma_1^2) =(\sigma_2\sigma_1\sigma_2)\sigma_1^2 =(\uuuu{\sigma_1\sigma_2})\sigma_1^3 ,\\ & \varphi_2\varphi_1\varphi_3 &\rightsquigarrow (\sigma_1\sigma_2\sigma_1)(\sigma_1^{-1}\sigma_2^{-2}) (\sigma_2\sigma_1^2)=\uuuu{\sigma_1^3} ,\\ \GG_4:& \gamma &\rightsquigarrow \uuuu{\sigma_2\sigma_1} ,\\ & \varphi_2\varphi_1\varphi_3 &\rightsquigarrow \uuuu{\sigma_1^3} ,\\ \GG_5:& \gamma^{-1}\varphi_1 &\rightsquigarrow (\sigma_1^{-1}\sigma_2^{-1})(\sigma_1^{-1}\sigma_2^{-2}) =(\sigma_1^{-1}\sigma_2^{-1})^3\sigma_2\sigma_1\sigma_2^{-1}\\ &&\stackrel{\eqref{4.14}}{=}(\sigma^{mon})^{-1}(\uuuu{\sigma_1^{-1}\sigma_2^{-1} \sigma_1})^{-1},\\ \GG_6:& \gamma &\rightsquigarrow \uuuu{\sigma_2\sigma_1},\\ \GG_7:& \varphi_3 &\rightsquigarrow \uuuu{\sigma_2\sigma_1^2}, \end{eqnarray*} \begin{eqnarray*} \GG_8:& \id &\rightsquigarrow \uuuu{\id} ,\\ \GG_9:& \id &\rightsquigarrow \uuuu{\id} ,\\ \GG_{10}:& \gamma^{-1}\varphi_2 &\rightsquigarrow (\sigma_1^{-1}\sigma_2^{-1})(\sigma_2\sigma_1\sigma_2) =\uuuu{\sigma_2} ,\\ \GG_{11}:& \gamma^{-1}\varphi_3\varphi_1 &\rightsquigarrow (\sigma_1^{-1}\sigma_2^{-1})(\sigma_2\sigma_1^2) (\sigma_1^{-1}\sigma_2^{-2}) =\sigma_2^{-2}=(\uuuu{\sigma_2^2})^{-1} ,\\ \GG_{12}:& \gamma^{-1}\varphi_3\varphi_1 &\rightsquigarrow (\uuuu{\sigma_2^2})^{-1} ,\\ & \varphi_3\varphi_2\varphi_1 &\rightsquigarrow (\sigma_2\sigma_1^2)(\sigma_1\sigma_2\sigma_1) (\sigma_1^{-1}\sigma_2^{-2}) =\uuuu{\sigma_2\sigma_1^3\sigma_2^{-1}} ,\\ \GG_{13}:& 
\varphi_2\varphi_3\varphi_1 &\rightsquigarrow (\sigma_1\sigma_2\sigma_1)(\sigma_2\sigma_1^2) (\sigma_1^{-1}\sigma_2^{-2}) =(\sigma_1\sigma_2)^3\sigma_2^{-3}\\ &&=\sigma^{mon}(\uuuu{\sigma_2^3})^{-1} ,\\ & \varphi_3\varphi_2\varphi_1 &\rightsquigarrow \uuuu{\sigma_2\sigma_1^3\sigma_2^{-1}} ,\\ \GG_{14}:& \varphi_2\varphi_3\varphi_1 &\rightsquigarrow \sigma^{mon}(\uuuu{\sigma_2^3})^{-1}. \end{eqnarray*} Observe $$(\sigma_2\sigma_1)^3=\sigma^{mon},\quad (\sigma_2\sigma_1^2)^2=\sigma^{mon}.$$ Therefore the stabilizer $(\Br_3)_{v_0}$ is as claimed in the fourth column of the table in Theorem \ref{t4.13}. \hfill$\Box$ \section{A global sign change, relevant for the odd case} \label{s4.5} \begin{remarks}\label{t4.17} (i) For $S\in T^{uni}_n(\Z)$ consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,...,e_n)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. Consider also the matrix $\www{S}\in T^{uni}_n(\Z)$ with $\www{S}_{ij}=-S_{ij}$ for $i<j$. On the same lattice $H_\Z$ and the same basis $\uuuu{e}$ we define a second unimodular bilinear form $\www{L}$ by $\www{L}(\uuuu{e}^t,\uuuu{e})^t=\www{S}$ and denote all objects associated to $(H_\Z,\www{L},\uuuu{e})$ with a tilde, $\www{I}^{(k)}$, $\www{\BB}^{tri}$, $\www{\Gamma}^{(k)}$, $\www{\Delta}^{(k)}$. Most of them differ a lot from the objects associated to $(H_\Z,L,\uuuu{e})$. But the odd intersection forms differ only by the sign. 
Therefore the monodromies $M$ and $\www{M}$ are different (in general), but the odd monodromy groups and the sets of odd vanishing cycles coincide: \begin{eqnarray*} \www{I}^{(1)}&=& -I^{(1)},\\ \textup{so }\www{s}_{e_i}^{(1)}&=&(s_{e_i}^{(1)})^{-1},\\ \www{M}&=& \www{s}_{e_1}^{(1)}\circ ...\circ \www{s}_{e_n}^{(1)} \stackrel{\textup{in general}}{\neq} M =s_{e_1}^{(1)}\circ ...\circ s_{e_n}^{(1)},\\ \textup{but }\www{\Gamma}^{(1)} &=&\langle \www{s}_{e_1}^{(1)},...,\www{s}_{e_n}^{(1)}\rangle =\langle s_{e_1}^{(1)},...,s_{e_n}^{(1)}\rangle =\Gamma^{(1)},\\ \www{\Delta}^{(1)}&=& \www{\Gamma}^{(1)}\{\pm e_1,...,\pm e_n\} ={\Gamma}^{(1)}\{\pm e_1,...,\pm e_n\}=\Delta^{(1)}. \end{eqnarray*} Because of $\www{\Gamma}^{(1)}=\Gamma^{(1)}$ and $\www{\Delta}^{(1)}=\Delta^{(1)}$ the global sign change from $S$ to $\www{S}$ is interesting. (ii) In this section we will study the action on $T^{uni}_3(\Z)$ of the extension of the action of $\Br_3\ltimes\{\pm 1\}^3$ by this global sign change. \index{global sign change} Define \index{$\delta^\R:\R^3\to\R^3$} \begin{eqnarray*} \delta^{\R}:\R^3\to\R^3,\ (x_1,x_2,x_3)\mapsto (-x_1,-x_2,-x_3) \end{eqnarray*} and \begin{eqnarray*} \www{G}^{sign}:=\langle \delta_1^{\R},\delta_2^{\R}, \delta^{\R}\rangle\cong\{\pm 1\}^3. \end{eqnarray*} It is easy to see that the double semidirect product $(G^{phi}\ltimes G^{sign})\rtimes \langle\gamma\rangle$ extends to the double semidirect product $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle\gamma\rangle$. \end{remarks} \begin{lemma}\label{t4.18} Each $(G^{phi}\ltimes\www{G}^{sign})\rtimes \langle\gamma\rangle$ orbit in $\Z^3$ contains at least one local minimum of one of the following types: \begin{list}{}{} \item[(a)] $\uuuu{x}\in\Z^3_{\geq 3}$ with $2x_i\leq x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$. \item[(b)] $(-l,2,-l)$ for some $l\in\Z_{\geq 2}$. \item[(c)] $(x_1,x_2,0)$ for some $x_1,x_2\in\Z_{\geq 0}$ with $x_1\geq x_2$. 
\end{list} \end{lemma} {\bf Proof:} In (c) we can restrict to $x_1\geq x_2$ because $\delta^\R_3\gamma\varphi_1(x_1,x_2,0)=(x_2,x_1,0)$. Each $(G^{phi}\ltimes \www{G}^{sign})\rtimes \langle\gamma\rangle$ orbit consists of one or several $(G^{phi}\ltimes G^{sign})\rtimes\langle\gamma\rangle$ orbits and thus contains local minima. Suppose that $\uuuu{x}\in\Z^3$ is such a local minimum and is not obtained with $G^{sign}$ from a local minimum in (a), (b) or (c). Then either $\uuuu{x}$ is a local minimum associated to $S(\whh{A}_2)$, so $\uuuu{x}\in \{(-1,-1,-1),(1,1,-1), (1,-1,1),(-1,1,1)\}$ or $x_1x_2x_3<0$ and $r(\uuuu{x})>4$. In the first case $\delta^{\R}(-1,-1,-1)=(1,1,1) =\varphi_3(1,1,0)$, so the orbit contains a local minimum in (c). In the second case $r(\delta^{\R}(\uuuu{x}))=r(-\uuuu{x}) =r(\uuuu{x})+2x_1x_2x_3<r(\uuuu{x})$. We consider a local minimum $\uuuu{x}^{(1)}$ in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $-\uuuu{x}$. If it is not obtained with $G^{sign}$ from a local minimum in (a), (b) or (c), then $\uuuu{x}^{(1)}$ is a local minimum associated to $S(\whh{A}_2)$ or $x_1^{(1)}x_2^{(1)}x_3^{(1)}<0$ and $r(\uuuu{x}^{(1)})>4$. We repeat this procedure until we arrive at a local minimum obtained with $G^{sign}$ from one in (a), (b) or (c). This stops after finitely many steps because $r(\uuuu{x}^{(1)})=r(-\uuuu{x})<r(\uuuu{x})$. \hfill$\Box$ \begin{remark}\label{t4.19} Corollary \ref{t6.23} will say that the $(G^{phi}\ltimes \www{G}^{sign})\rtimes \langle\gamma\rangle$ orbits of the local minima in the parts (b) and (c) of Lemma \ref{t4.18} are pairwise different and also different from the orbits of the local minima in part (a). \end{remark} \begin{examples}\label{t4.20} Given an element $\uuuu{x}\in\Z^3$, it is not obvious which local minimum of a type in (a), (b) or (c) is contained in the $(G^{phi}\ltimes \www{G}^{sign}) \rtimes\langle\gamma\rangle$ orbit of $\uuuu{x}$. We give four families of examples. 
An arrow $\mapsto$ between two elements of $\Z^3$ means that these two elements are in the same orbit. (i) Start with $(x_1,x_2,-1)$ with $x_1\geq x_2>0$. \begin{eqnarray*} (x_1,x_2,-1)&\stackrel{\www{G}^{sign}}{\mapsto}& (x_1,x_2,1)\stackrel{\varphi_1}{\mapsto} (x_2-x_1,1,x_2)\stackrel{\www{G}^{sign}}{\mapsto} (x_1-x_2,1,x_2)\\ &\mapsto & ... \mapsto (\gcd(x_1,x_2),1,0). \end{eqnarray*} (ii) Start with $(x_1,x_2,-2)$ with $x_1\geq x_2\geq 2$. \begin{eqnarray*} (x_1,x_2,-2)&\stackrel{\www{G}^{sign}}{\mapsto}& (x_1,x_2,2)\stackrel{\varphi_1}{\mapsto} (2x_2-x_1,2,x_2)\mapsto ...\\ & \mapsto& \left\{\begin{array}{l} (\gcd(x_1,x_2),2,0) \quad\textup{ if }x_1\textup{ and }x_2\\ \hspace*{2cm}\textup{contain different powers of }2,\\ (-\gcd(x_1,x_2),2,-\gcd(x_1,x_2))\quad \textup{ if } x_1\textup{ and }x_2\\ \hspace*{2cm}\textup{contain the same power of }2 \end{array}\right. \end{eqnarray*} In order to understand the case discussion, observe that $2x_2-x_1$ and $x_2$ contain the same power of $2$ if and only if $x_1$ and $x_2$ contain the same power of $2$. Furthermore $0$ is divisible by an arbitrarily large power of $2$. (iii) In the special case $\gcd(x_1,x_2)=1$ the examples in part (ii) lead to the following, \begin{eqnarray*} (x_1,x_2,-2)&\mapsto& ... \mapsto (1,2,0) \textup{ or }(-1,2,-1) \\ &\mapsto& (2,1,0)\textup{ or }(1,1,0), \end{eqnarray*} again depending on whether $x_1$ and $x_2$ contain different powers of $2$ or the same power of $2$. (iv) Start with $(-3,-3,-l)$ for some $l\in\Z_{\geq 2}$. \begin{eqnarray*} (-3,-3,-l)&\stackrel{\delta^{\R}}{\mapsto}& (3,3,l)\mapsto (3,3,\pm (l-9))\\ &\mapsto& (3,3,\www{l})\textup{ for some } \www{l}\in\{0,1,2,3,4\}\\ &\mapsto& (3,3,0),\ (3,1,0),\ (-3,2,-3),\ (3,3,3)\textup{ or }(4,3,3). \end{eqnarray*} $(3,3,0)$ and $(3,1,0)$ are in part (c), $(-3,2,-3)$ is in part (b), and $(3,3,3)$ and $(4,3,3)$ are in part (a) of Lemma \ref{t4.18}.
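The reductions in these example families can be replayed mechanically. The following sketch is an illustration under stated assumptions, not part of the text: it reads off $\varphi_1(x_1,x_2,x_3)=(x_2x_3-x_1,x_3,x_2)$ from the examples above, and it assumes that all sign changes (the full group $\www{G}^{sign}$) and coordinate permutations are available in the orbit, as used in the proof of Lemma \ref{t4.18}; the helper names are hypothetical.

```python
from math import gcd

def phi1(x):
    # braid action read off from examples (i) and (ii) above
    # (an assumption of this sketch): phi_1(x1,x2,x3) = (x2*x3 - x1, x3, x2)
    x1, x2, x3 = x
    return (x2 * x3 - x1, x3, x2)

def normalize(x):
    # sign changes and coordinate permutations: sort the
    # absolute values in decreasing order
    return tuple(sorted((abs(v) for v in x), reverse=True))

def reduce_triple(x, max_steps=10_000):
    # iterate phi_1 followed by normalization until a state repeats
    seen, x = set(), normalize(x)
    while x not in seen and max_steps > 0:
        seen.add(x)
        x = normalize(phi1(x))
        max_steps -= 1
    return x

# family (i): (x1, x2, -1) reduces to (gcd(x1,x2), 1, 0)
for a, b in [(5, 3), (6, 4), (9, 6), (12, 8)]:
    assert reduce_triple((a, b, -1)) == (gcd(a, b), 1, 0)

# family (ii): (x1, x2, -2), different vs. the same power of 2
assert reduce_triple((6, 4, -2)) == (2, 2, 0)   # different powers of 2
assert reduce_triple((6, 2, -2)) == (2, 2, 2)   # same power of 2
```

The terminal triples agree (up to signs and permutations) with the local minima predicted in parts (i) and (ii) of the examples.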
\end{examples} \chapter{Automorphism groups}\label{s5} \setcounter{equation}{0} \setcounter{figure}{0} A unimodular bilinear lattice $(H_\Z,L)$ comes equipped with four automorphism groups $G_\Z$, $G_\Z^{(0)}$, $G_\Z^{(1)}$ and $G_\Z^M$ of $H_\Z$, which all respect the monodromy $M$ and possibly some bilinear form. They are the subject of this chapter. Section \ref{s5.1} gives two basic observations which serve as general tools to control these groups under reasonable conditions. Another very useful tool is Theorem \ref{t3.26} (c), which gives in favourable situations an $n$-th root of $(-1)^{k+1}M$ in $G_\Z$. Section \ref{s5.1} also treats the cases $A_1^n$. Section \ref{s5.2} takes care of the rank 2 cases. It makes use of some statements on quadratic units in Lemma \ref{tc.1} (a) in Appendix \ref{sc}. All further sections \ref{s5.3}--\ref{s5.7} are devoted to the rank 3 cases. Section \ref{s5.3} discusses the setting and the basic data. It also introduces the special automorphism $Q\in\Aut(H_\Q,L)$ which is $\id$ on $\ker(M-\id)$ and $-\id$ on $\ker(M^2-(2-r)M+\id)$ and determines the (rather few) cases where $Q$ is in $G_\Z$. The treatment of the reducible rank 3 cases in section \ref{s5.4} builds on the rank 2 cases and is easy. The irreducible rank 3 cases with all eigenvalues in $S^1$ comprise the four single cases $A_3$, $\whh{A}_2$, $\P^2$, $\HH_{1,2}$ and the series $S(-l,2,-l)$ with $l\in\Z_{\geq 3}$. In section \ref{s5.5} third roots of the monodromy turn up, and in the series $S(-l,2,-l)$ even higher roots of the monodromy. The sections \ref{s5.6} and \ref{s5.7} treat all irreducible rank 3 cases with eigenvalues not all in $S^1$ (1 is always an eigenvalue). Section \ref{s5.6} takes care of those families of cases where $G_\Z\supsetneqq \{\pm M^m\,|\, m\in\Z\}$, section \ref{s5.7} of all others. Section \ref{s5.6} is rather long. Here again roots of the monodromy turn up, and statements on quadratic units in Lemma \ref{tc.1} are used.
Section \ref{s5.7} is very long. The main result $G_\Z= \{\pm M^m\,|\, m\in\Z\}$ in these cases requires an extensive case discussion. This chapter determines $G_\Z$ in all cases with rank $n\leq 3$. An application is a proof of Theorem \ref{t3.28}, which says that in almost all cases with rank $n\leq 3$ the map $Z:(\Br_n\ltimes\{\pm 1\}^n)_S\to G_\Z$ is surjective, the exceptions being four cases in section \ref{s5.6}. \section{Basic observations} \label{s5.1} Given a bilinear lattice $(H_\Z,L)$, the most important of the four automorphism groups $G_\Z^M,G_\Z^{(0)},G_\Z^{(1)}$ and $G_\Z$ in Definition \ref{t2.3} (b) (iv) and Lemma \ref{t2.6} (a) (iii) is the smallest group $G_\Z$. But the key to it is often the largest group $G_\Z^M$. We collect some elementary observations on these groups. \begin{lemma}\label{t5.1} Let $H_\Z$ be a $\Z$-lattice of some rank $n\in\N$ and let $M:H_\Z\to H_\Z$ be an automorphism of it. (a) The characteristic polynomial $p_{ch,M}(t)\in\Z[t]$ of the automorphism $M$ is monic. Each eigenvalue $\lambda\in\C$ of $M$ is an algebraic integer and a unit in the ring $\OO_{\Q[\lambda]}\subset\Q[\lambda]$ of algebraic integers \index{$\OO_{\Q[\lambda]},\ \OO_{\Q[\lambda]}^*$} in $\Q[\lambda]$, so in $\OO_{\Q[\lambda]}^*$. Also $\lambda^{-1}\in \OO_{\Q[\lambda]}^*$. (b) Suppose that $M$ is {\sf regular}, \index{regular endomorphism} that means, $M:H_\C\to H_\C$ has for each eigenvalue only one Jordan block. \begin{list}{}{} \item[(i)] Then \begin{eqnarray*} \Q[M]=\bigoplus_{i=0}^{n-1}\Q M^i\stackrel{!}{=} \End(H_\Q,M) :=\{g:H_\Q\to H_\Q\,|\, gM=Mg\}. \end{eqnarray*} \item[(ii)] Consider a polynomial $p(t)=\sum_{i=0}^{n-1}p_it^i\in\Q[t]$ and the endomorphism $g=p(M)\in \End(H_\Q,M)$ of $H_\Q$ and $H_\C$. Then $g$ has the eigenvalue $p(\lambda)$ on the generalized eigenspace $H_\lambda$ of $M$ with eigenvalue $\lambda$. If $g\in \End(H_\Z,M)$ then $p(\lambda)\in\OO_{\Q[\lambda]}$ for each eigenvalue $\lambda$ of $M$.
If $g\in G_\Z^M$ then $p(\lambda)\in \OO_{\Q[\lambda]}^*$ for each eigenvalue $\lambda$ of $M$. \end{list} (c) Suppose that $M$ is the monodromy of a bilinear lattice $(H_\Z,L)$. Suppose that $M$ is regular. \begin{list}{}{} \item[(i)] Then \begin{eqnarray*} G_\Z^{(0)}\cup G_\Z^{(1)}\subset \{p(M)\,|\, p(t)=\sum_{i=0}^{n-1}p_it^i\in\Q[t],\ p(M)\in\End(H_\Z),\\ p(\lambda)p(\lambda^{-1})=1\textup{ for each eigenvalue } \lambda\textup{ of }M\}. \end{eqnarray*} \item[(ii)] If $M$ is semisimple then $G_\Z=G_\Z^{(0)}=G_\Z^{(1)}$, and this set is equal to the set on the right hand side of (i). \end{list} \end{lemma} {\bf Proof:} (a) Trivial. (b) (i) As $M$ is regular, one can choose a vector $c\in H_\Q$ with $H_\Q=\bigoplus_{i=0}^{n-1}\Q M^ic$. The inclusion $\Q[M]\subset\End(H_\Q,M)$ is clear. Suppose $g\in\End(H_\Q,M)$. Write $gc=p(M)c$ for some polynomial $p(t)=\sum_{i=0}^{n-1}p_it^i \in\Q[t]$. As $$gM^kc=M^kgc=M^kp(M)c=p(M)M^kc$$ for each $k\in\{0,1,...,n-1\}$, $g=p(M)$. Thus $\Q[M]=\End(H_\Q,M)$. (ii) Similar to part (a). (c) (i) Suppose $g=p(M)\in G_\Z^{(k)}$ for some $k\in\{0;1\}$ with $p(t)=\sum_{i=0}^{n-1}p_it^i\in\Q[t]$. Recall $\Rad I^{(k)}=\ker(M-(-1)^{k+1}\id:H_\Z\to H_\Z)$. For $\lambda\neq (-1)^{k+1}$ the generalized eigenspaces $H_\lambda$ and $H_{\lambda^{-1}}$ are dual to one another with respect to $I^{(k)}$, and $g$ has eigenvalue $p(\lambda)$ on $H_\lambda$ and eigenvalue $p(\lambda^{-1})$ on $H_{\lambda^{-1}}$. That $g$ respects $I^{(k)}$ implies $p(\lambda)p(\lambda^{-1})=1$. For $\lambda=(-1)^{k+1}$, $g$ restricts to an automorphism of the sublattice $H_\lambda\cap H_\Z$ of $H_\Z$ with determinant $\pm 1=\det (g|_{H_\lambda\cap H_\Z}) =p(\lambda)^{\dim H_\lambda}$, so $p(\lambda)=\pm 1$. (ii) Suppose additionally that $M$ is semisimple and that $g=p(M)$ with $p(t)=\sum_{i=0}^{n-1}p_it^i\in\Q[t]$ satisfies $g\in\End(H_\Z)$ and $p(\lambda)p(\lambda^{-1})=1$ for each eigenvalue $\lambda$ of $M$.
As $M$ is regular and semisimple, each eigenvalue has multiplicity 1. The (1-dimensional) eigenspaces $H_\lambda$ and $H_{\lambda^{-1}}$ are dual to one another with respect to $L$. The conditions $p(\lambda)p(\lambda^{-1})=1$ imply that $g$ respects $L$ and also that $\det g=\prod_{\lambda\textup{ eigenvalue}}p(\lambda)=\pm 1$. Together with $g\in \End(H_\Z)$ this shows $g\in G_\Z$. \hfill$\Box$ \bigskip The situation in the following Lemma \ref{t5.2} arises surprisingly often. One reason is Theorem \ref{t3.26} (c). See the Remarks \ref{t5.3} below. \begin{lemma}\label{t5.2} Let $H_\Z$ be a $\Z$-lattice of some rank $n\in\N$, let $M:H_\Z\to H_\Z$ and $M^{root}:H_\Z\to H_\Z$ be automorphisms of $H_\Z$, and let $l\in\N$ and $\varepsilon\in\{\pm 1\}$ be such that the following holds: $M$ is regular, $$(M^{root})^l=\varepsilon M,$$ and $M^{root}$ is {\sf cyclic}, \index{cyclic endomorphism} that means, a vector $c\in H_\Z$ with $\bigoplus_{i=0}^{n-1}\Z(M^{root})^ic=H_\Z$ exists. (a) Then $M^{root}$ is regular, and \begin{eqnarray*} \End(H_\Z,M) & = & \End(H_\Z,M^{root})=\Z[M^{root}],\\ \Aut(H_\Z,M)&=&\Aut(H_\Z,M^{root})=\{p(M^{root})\,|\, p(t)=\sum_{i=0}^{n-1}p_it^i\in\Z[t],\\ && p(\kappa)\in(\Z[\kappa])^*\textup{ for each eigenvalue }\kappa\textup{ of }M^{root}\}. \end{eqnarray*} (b) Suppose that $M$ is the monodromy of a bilinear lattice $(H_\Z,L)$ and that the set of eigenvalues of $M^{root}$ is invariant under inversion, that means, with $\kappa$ an eigenvalue of $M^{root}$ also $\kappa^{-1}$ is an eigenvalue of $M^{root}$. \begin{list}{}{} \item[(i)] Then $M^{root}\in G_\Z$ and \begin{eqnarray*} G_\Z^{(0)}\cup G_\Z^{(1)}&\subset& \{p(M^{root})\,|\, p(t)=\sum_{i=0}^{n-1}p_it^i\in\Z[t],\\ &&p(\kappa)p(\kappa^{-1})=1\textup{ for each eigenvalue } \kappa\textup{ of }M^{root}\}. \end{eqnarray*} \item[(ii)] If $M$ is semisimple then $G_\Z=G_\Z^{(0)}=G_\Z^{(1)}$, and this set is equal to the set on the right hand side of (i).
\end{list} \end{lemma} {\bf Proof:} (a) $M^{root}$ is regular as $M$ is regular and $(M^{root})^l=\varepsilon M$. This equation also implies $\Q[M]\subset \Q[M^{root}]$. As these $\Q$-vector spaces both have dimension $n$, $$\End(H_\Q,M)=\Q[M]=\Q[M^{root}]=\End(H_\Q,M^{root}).$$ Then $\End(H_\Z,M)=\End(H_\Z)\cap\Q[M^{root}]$. Consider $g\in \End(H_\Z,M)$. Choose a cyclic generator $c\in H_\Z$ with $\bigoplus_{i=0}^{n-1}\Z(M^{root})^ic=H_\Z$. Write $g(c)=p(M^{root})c$ for some polynomial $p(t)=\sum_{i=0}^{n-1}p_it^i\in\Z[t]$. As in the proof of Lemma \ref{t5.1}, one finds $g=p(M^{root})$, so $\End(H_\Z,M)=\Z[M^{root}]$. The element $g=p(M^{root})$ above is in $\Aut(H_\Z,M)$ if and only if $\det g=\pm 1$, and this holds if and only if the algebraic integer $p(\kappa)$ is a unit in $\Z[\kappa]$ for each eigenvalue $\kappa$ of $M^{root}$. (b) (i) The main point is to show $M^{root}\in G_\Z$. As $M$ and $M^{root}$ are both regular, the map $\kappa\mapsto \varepsilon \kappa^l$ is a bijection from the set of eigenvalues of $M^{root}$ to the set of eigenvalues of $M$. For $\lambda=\varepsilon\kappa^l$, the generalized eigenspaces $H_\lambda$ and $H_{\lambda^{-1}}$ of $M$ are the generalized eigenspaces of $M^{root}$ with eigenvalues $\kappa$ and $\kappa^{-1}$. These two spaces are dual to one another with respect to $I^{(0)}$ (if $\lambda\neq -1$), $I^{(1)}$ (if $\lambda\neq 1$) and $L$. Consider the decomposition of $M^{root}$ into the commuting semisimple part $M^{root}_s$ and unipotent part $M^{root}_u$ with nilpotent part $N^{root}$ with $\exp(N^{root})=M^{root}_u$, and also the decomposition $M=M_sM_u$ with nilpotent part $N$ with $\exp N=M_u$. $M^{root}_s$ and $M_s$ respect $L$ because they have eigenvalue $\kappa$ and $\lambda$ on $H_\lambda$ and eigenvalue $\kappa^{-1}$ and $\lambda^{-1}$ on $H_{\lambda^{-1}}$. As $M$ and $M_s$ respect $L$, also $M_u$ respects $L$. Therefore $N$ is an infinitesimal isometry.
Because $N=lN^{root}$, also $N^{root}$ is an infinitesimal isometry. Therefore $M^{root}_u$ respects $L$. Thus also $M^{root}=M^{root}_sM^{root}_u$ respects $L$, so $M^{root}\in G_\Z$. Part (ii) and the rest of part (i) are proved as part (c) of Lemma \ref{t5.1}. \hfill$\Box$ \begin{remarks}\label{t5.3} (i) The pair $(H_\Z,M^{root})$ in Lemma \ref{t5.2} is an {\it Orlik block} if $M^{root}$ is of finite order. Orlik blocks will be defined and discussed in the beginning of section \ref{s10.3}. They are important building blocks in the unimodular bilinear lattices $(H_\Z,L)$ for many isolated hypersurface singularities. (ii) If the matrix $S=L(\uuuu{e}^t,\uuuu{e})^t$ of a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}$ has the special shape in Theorem \ref{t3.26} (c), then the monodromy $(-1)^{k+1} M$ has by Theorem \ref{t3.26} (c) a specific $n$-th root $M^{root}\in G_\Z$. This situation is special, but it arises surprisingly often, in singularity theory and in the cases in part (iii). (iii) Theorem \ref{t3.26} (c) applies to all matrices $S(x)=\begin{pmatrix}1&x\\0&1\end{pmatrix}$ with $x\in\Z$ and to all matrices $S=\begin{pmatrix}1&x&\varepsilon x\\ 0&1&x\\ 0&0&1\end{pmatrix}$ with $x\in\Z$ and $\varepsilon\in\{\pm 1\}$. It applies especially to the matrices $S(A_1^3)$, $S(\widehat{A}_2)$, $S(\HH_{1,2})$ and $S(\P^2)$ in the Examples \ref{t1.1} and to the matrix $S(-1,1,-1)$ in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $S(A_3)$. However, it is not useful in the cases $S(A_1^3)$ and $S(\HH_{1,2})$ because their monodromies are not regular. In the case $S(A_3)$ we will not use it as there the monodromy itself is cyclic. \end{remarks} The completely reducible cases $A_1^n$ for $n\in\N$ can be treated easily, building on Lemma \ref{t2.12}. \begin{lemma}\label{t5.4} Fix $n\in\N$ and consider the case $A_1^n$ with $S=S(A_1^n)=E_n$.
Then \index{$A_1^n$} \begin{eqnarray*} G_\Z=G_\Z^{(0)}&\cong& O_n(\Z)=\{A\in GL_n(\{0;\pm 1\})\,|\, \exists\ \sigma\in S_n\\ &&\exists\ \varepsilon_1,...,\varepsilon_n\in\{\pm 1\} \textup{ such that }A_{ij}=\varepsilon_i\delta_{i\sigma(j)}\},\\ G_\Z^{(1)}=G_\Z^M&=&\Aut(H_\Z)\cong GL_n(\Z). \end{eqnarray*} The map $Z:(\Br_n\ltimes\{\pm 1\}^n)_{S}=\Br_n\ltimes\{\pm 1\}^n \to G_\Z$ is surjective. \end{lemma} {\bf Proof:} The groups $G_\Z^{(1)}$ and $G_\Z^M$ are as claimed because $M=\id$ and $I^{(1)}=0$. The groups $G_\Z$ and $G_\Z^{(0)}$ map the set $R^{(0)}=\{\pm e_1,...,\pm e_n\}$ to itself and are therefore also as claimed. The stabilizer of $S=E_n$ is the whole group $\Br_n\ltimes\{\pm 1\}^n$. The subgroup $\{\pm 1\}^n$ gives all sign changes of the basis $\uuuu{e}=(e_1,...,e_n)$. The subgroup $\Br_n$ gives under $Z$ all permutations of the elements of the tuple $(e_1,...,e_n)$. Therefore $Z$ is surjective. \hfill$\Box$ \section{The rank 2 cases}\label{s5.2} For $x\in\Z$ consider the matrix $S=S(x)=\begin{pmatrix}1&x\\0&1\end{pmatrix}\in T^{uni}_2(\Z)$, and consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,e_2)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. Then \begin{eqnarray*} M\uuuu{e}&=&\uuuu{e}S^{-1}S^t=\uuuu{e} \begin{pmatrix}1-x^2 & -x\\ x & 1\end{pmatrix},\\ p_{ch,M}(t)&=& t^2-(2-x^2)t+1\\ \textup{ with zeros } \lambda_{1/2}&=&\frac{2-x^2}{2}\pm \frac{1}{2}x\sqrt{x^2-4}. \end{eqnarray*} Theorem \ref{t3.26} (c) applies with $(n,k,q_0,q_1)=(2,0,1,x)$, namely \begin{eqnarray*} \delta_2\sigma^{root}&=&\delta_2\sigma_1\in (\Br_2\ltimes\{\pm 1\}^2)_S,\\ M^{root}&:=&Z(\delta_2\sigma_1)\in G_\Z\\ \textup{with } M^{root}\uuuu{e}&=&\uuuu{e}\begin{pmatrix}-x&-1\\1&0\end{pmatrix}\\ \textup{and }(M^{root})^2&=&-M. 
\end{eqnarray*} $M^{root}$ is regular and cyclic, \begin{eqnarray*} p_{ch,M^{root}}(t) &=& t^2+xt+1\\ \textup{ with zeros } \kappa_{1/2}&=&-\frac{x}{2}\pm \frac{1}{2}\sqrt{x^2-4}\\ \textup{with }\kappa_i^2&=&-\lambda_i=-x\kappa_i-1, \ \kappa_1+\kappa_2=-x, \ \kappa_1\kappa_2=1. \end{eqnarray*} $M$ is regular if $x\neq 0$. $M$ and $M^{root}$ are semisimple if $x\neq \pm 2$. If $x=\pm 1$, $M$ has eigenvalues $e^{\pm 2\pi i/6}$ and $M^{root}$ has eigenvalues $e^{\pm 2\pi i /3}$ respectively $e^{\pm 2\pi i/6}$. If $|x|\geq 3$, $M$ and $M^{root}$ have real eigenvalues and infinite order. If $x=\pm 2$, they have a $2\times 2$ Jordan block with eigenvalue $-1$ respectively $-\frac{x}{2}$. \begin{theorem}\label{t5.5} (a) If $x\neq 0$ then \begin{eqnarray*} G_\Z&=& G_\Z^{(0)}=G_\Z^{(1)}=\{\pm (M^{root})^l\,|\, l\in\Z\},\\ G_\Z^M&=& \left\{\begin{array}{ll} G_\Z & \textup{if }x\neq \pm 3,\\ \{\pm (M^{root}+\frac{x}{|x|}\id)^l\,|\, l\in\Z\} & \textup{if }x=\pm 3. \end{array}\right. \end{eqnarray*} If $x=0$ then \begin{eqnarray*} && G_\Z=G_\Z^{(0)}\cong O_2(\Z)= \{\begin{pmatrix}\varepsilon_1 & 0\\ 0 &\varepsilon_2\end{pmatrix}, \begin{pmatrix} 0& \varepsilon_1 \\ \varepsilon_2&0\end{pmatrix} \,|\, \varepsilon_1,\varepsilon_2\in\{\pm 1\}\},\\ && G_\Z^{(1)}=G_\Z^M =\Aut(H_\Z)\cong GL_2(\Z). \end{eqnarray*} In all cases $G_\Z=G_\Z^{\BB}$, so $Z:(\Br_2\ltimes\{\pm 1\}^2)_S\to G_\Z$ is surjective. (b) Properties of $I^{(0)}$ and $I^{(1)}$: \begin{eqnarray*} x=0:&& I^{(1)}=0,\ \Rad I^{(1)}=H_\Z,\ L(\uuuu{e}^t,\uuuu{e})^t=E_2,\ I^{(0)}(\uuuu{e}^t,\uuuu{e})=2E_2.\\ x\neq 0:&& \Rad I^{(1)}=\{0\}.\\ |x|\leq 1:&& I^{(0)}\textup{ is positive definite.}\\ |x|=2:&& I^{(0)}\textup{ is positive semi-definite},\ \Rad I^{(0)}=\Z(e_1-\frac{x}{|x|}e_2).\\ |x|>2:&& I^{(0)}\textup{ is indefinite},\ \Rad I^{(0)}=\{0\}. \end{eqnarray*} \end{theorem} {\bf Proof:} Part (b) is obvious. The case $x=0$ is the case $A_1^2$. It is covered by Lemma \ref{t5.4}. Consider the cases $x\neq 0$ in part (a). 
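The matrix identities displayed before Theorem \ref{t5.5} are easy to verify numerically before entering the case discussion. The following is a minimal sketch (pure Python, $2\times 2$ matrices as nested lists; the helper names are hypothetical and the matrices for $M$ and $M^{root}$ are those displayed above):

```python
def mat2(a, b, c, d):
    return [[a, b], [c, d]]

def mul2(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def M(x):
    # matrix of the monodromy M with respect to e, as displayed above
    return mat2(1 - x * x, -x, x, 1)

def Mroot(x):
    # matrix of M^root = Z(delta_2 sigma_1), as displayed above
    return mat2(-x, -1, 1, 0)

for x in range(-10, 11):
    R = Mroot(x)
    # (M^root)^2 = -M
    assert mul2(R, R) == [[-v for v in row] for row in M(x)]
    # p_{ch,M^root}(t) = t^2 + x t + 1: trace = -x, det = 1
    assert R[0][0] + R[1][1] == -x
    assert R[0][0] * R[1][1] - R[0][1] * R[1][0] == 1
```

The loop confirms $(M^{root})^2=-M$ and $p_{ch,M^{root}}(t)=t^2+xt+1$ for $|x|\leq 10$.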
We can restrict to $x<0$ because of $L((e_1,-e_2)^t,(e_1,-e_2))^t=\begin{pmatrix} 1& -x\\0&1\end{pmatrix}$. So suppose $x<0$. We know $\{\pm (M^{root})^l\,|\, l\in\Z\}\subset G_\Z\subset G_\Z^M$. By Lemma \ref{t5.2} (a) \begin{eqnarray*} G_\Z^M=\{p(M^{root})\,|\, p(t)=p_1t+p_0\in\Z[t], \ p(\kappa_1)p(\kappa_2)\in\{\pm 1\}\}. \end{eqnarray*} The map \begin{eqnarray*} Q_2:\Z^2\to\Z,\quad (p_1,p_0)\mapsto p(\kappa_1)p(\kappa_2)=p_1^2-p_1p_0x+p_0^2, \end{eqnarray*} is a quadratic form. Lemma \ref{t5.2} (a) shows \begin{eqnarray*} G_\Z^M=\{p_1M^{root}+p_0\id\,|\, (p_1,p_0)\in\Z^2,\ Q_2(p_1,p_0)\in\{\pm 1\}\}. \end{eqnarray*} Lemma \ref{t5.2} (b) (ii) shows for $x\neq -2$ \begin{eqnarray*} G_\Z=G_\Z^{(0)}=G_\Z^{(1)}=\{p_1M^{root}+p_0\id\,|\, (p_1,p_0)\in\Z^2, Q_2(p_1,p_0)=1\}. \end{eqnarray*} For $x=-2$ it shows only \begin{eqnarray*} G_\Z,G_\Z^{(0)},G_\Z^{(1)} \subset \{p_1M^{root}+p_0\id\,|\, (p_1,p_0)\in\Z^2, Q_2(p_1,p_0)=1\}. \end{eqnarray*} We discuss the cases $x=-1$, $x=-2$ and $x\leq -3$ separately. {\bf The case $x=-1$:} $Q_2$ is positive definite, so $Q_2(p_1,p_0)=-1$ is impossible, and \begin{eqnarray*} \{(p_1,p_0)\,|\, Q_2(p_1,p_0)=1\} =\{\pm (0,1),\pm(1,0),\pm(1,-1)\},\\ G_\Z=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z^M=\{\pm\id,\pm M^{root},\pm (M^{root})^2\}. \end{eqnarray*} Because of $M^2=-M^{root}$ this equals $\{\pm \id,\pm M,\pm M^2\}$. {\bf The case $x=-2$:} This follows from Lemma \ref{t5.6} below. Remark: Here $Q_2$ is positive semidefinite with $Q_2(p_1,p_0)=(p_1+p_0)^2$. The solution $(p_1,p_0)=(p_1,-p_1+\varepsilon_2)$ with $\varepsilon_2\in\{\pm 1\}$ of $Q_2(p_1,p_0)=1$ corresponds to \begin{eqnarray*} p_1M^{root}+(-p_1+\varepsilon_2)\id &=&\varepsilon_2(\id+\varepsilon_2p_1 (M^{root}-\id))\\ &=&\varepsilon_2(\id+(M^{root}-\id))^{\varepsilon_2p_1}\\ &=&\varepsilon_2 (M^{root})^{\varepsilon_2p_1}.
\end{eqnarray*} {\bf The cases $x\leq -3$:} The arguments above show \begin{eqnarray*} &&G_\Z^M\cong (\Z[\kappa_1])^*\\ &\supset &G_\Z\cong\{p_1\kappa_1+p_0\in\Z[\kappa_1]\,|\, (p_1\kappa_1+p_0)(p_1\kappa_2+p_0)=1\}. \end{eqnarray*} So we need to understand the unit group of $\Z[\kappa_1]$ and the subgroup of elements with norm 1. Both are treated in Lemma \ref{tc.1} (a) in Appendix C. It remains to show $G_\Z=G_\Z^{\BB}$ in all cases $x\in\Z_{\leq -1}$. This follows from $G_\Z=\{\pm (M^{root})^l\,|\, l\in\Z\}$ and $$Z(\delta_1\delta_2)=-\id,\quad Z(\delta_2\sigma_1)=M^{root}. \hspace*{2cm}\Box$$ \begin{lemma}\label{t5.6} Let $H_\Z$ be a $\Z$-lattice of rank 2, and let $\www{M}:H_\Z\to H_\Z$ be an automorphism with a $2\times 2$ Jordan block and eigenvalue $\lambda\in\{\pm 1\}$. (a) Then a cyclic automorphism $\www{M}^{root}:H_\Z\to H_\Z$ with eigenvalue 1 and a number $l\in\N$ with $(\www{M}^{root})^l=\lambda \www{M}$ exist. They are unique. $$\Aut(H_\Z,\www{M})=\{\pm (\www{M}^{root})^m\,|\, m\in\Z\}.$$ (b) If $\www{I}:H_\Z\times H_\Z\to\Z$ is an $\www{M}$-invariant bilinear form then it is also $\www{M}^{root}$-invariant and $$\Aut(H_\Z,\www{M},\www{I})=\Aut(H_\Z,\www{M}) =\{\pm (\www{M}^{root})^m\,|\, m\in\Z\}.$$ \end{lemma} {\bf Proof:} (a) There is a $\Z$-basis $\uuuu{f}=(f_1,f_2)$ of $H_\Z$ and an $l\in\N$ with $$\www{M}\uuuu{f}=\uuuu{f}\lambda\begin{pmatrix}1&l\\0&1\end{pmatrix}.$$ Here $f_1$ is a generator of the rank 1 $\Z$-lattice $\ker(\www{M}-\lambda\id)\subset H_\Z$. It is unique up to the sign. It is a primitive element of $H_\Z$. An element $f_2$ with $H_\Z=\Z f_1\oplus \Z f_2$ exists. It is unique up to sign and up to adding a multiple of $f_1$. The sign is fixed by requiring the entry $l$ in the matrix above to be positive; then $l$ is unique. Define $\www{M}^{root}:H_\Z\to H_\Z$ by $$\www{M}^{root}\uuuu{f}=\uuuu{f}\begin{pmatrix}1&1\\0&1\end{pmatrix}.$$ Obviously $(\www{M}^{root})^l=\lambda \www{M}$. Any $g\in \Aut(H_\Z,\www{M})$ must fix $\Z f_1=\ker(\www{M}-\lambda\id)$.
Therefore it must be up to the sign a power of $\www{M}^{root}$. (b) That $\www{M}^{root}$ respects $\www{I}$ follows by the same arguments as $M^{root}\in G_\Z$ in the proof of Lemma \ref{t5.2} (b) (i) (but now the situation is simpler, as $\www{M}$ and $\www{M}^{root}$ have a single $2\times 2$ Jordan block). The rest follows with part (a). \hfill$\Box$ \section{Generalities on the rank 3 cases}\label{s5.3} For $\uuuu{x}=(x_1,x_2,x_3)\in\Z^3$ consider the matrix $S=S(\uuuu{x})= \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \in T^{uni}_3(\Z)$, and consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,e_2,e_3)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. Then \begin{eqnarray*} S^{-1}&=&\begin{pmatrix}1&-x_1&x_1x_3-x_2\\0&1&-x_3\\0&0&1\end{pmatrix},\\ S^{-1}S^t&=&\begin{pmatrix}1-x_1^2-x_2^2+x_1x_2x_3&-x_1-x_2x_3+x_1x_3^2 &x_1x_3-x_2\\x_1-x_2x_3&1-x_3^2&-x_3\\x_2&x_3&1\end{pmatrix},\\ M\uuuu{e}&=&\uuuu{e}S^{-1}S^t, \end{eqnarray*} \begin{eqnarray*} I^{(0)}(\uuuu{e}^t,\uuuu{e})&=&S+S^t= \begin{pmatrix}2&x_1&x_2\\x_1&2&x_3\\x_2&x_3&2\end{pmatrix},\\ I^{(1)}(\uuuu{e}^t,\uuuu{e})&=&S-S^t= \begin{pmatrix}0&x_1&x_2\\-x_1&0&x_3\\-x_2&-x_3&0\end{pmatrix},\\ p_{ch,M}(t)&=& (t-1)(t^2-(2-r(\uuuu{x}))t+1), \end{eqnarray*} where \begin{eqnarray*} r:\Z^3\to\Z,\quad \uuuu{x}=(x_1,x_2,x_3)\mapsto x_1^2+x_2^2+x_3^2-x_1x_2x_3. \end{eqnarray*} \index{$r,\ r_\R$} For $(x_1,x_2,x_3)\neq (0,0,0)$ define \begin{eqnarray*} f_3&:=&\uuuu{e}\, \frac{1}{\gcd(x_1,x_2,x_3)}\begin{pmatrix}-x_3\\x_2\\-x_1 \end{pmatrix}. \end{eqnarray*} \index{$f_3$} This is a primitive vector in $H_\Z$. \begin{eqnarray*} \Rad I^{(1)} \stackrel{\textup{2.5 (a)(ii)}}{=} \ker (M-\id)=\left\{\begin{array}{ll} \Z f_3&\textup{if }(x_1,x_2,x_3)\neq (0,0,0),\\ H_\Z&\textup{if }(x_1,x_2,x_3)=(0,0,0).\end{array}\right. 
\end{eqnarray*} Also \begin{eqnarray*} p_{ch,S+S^t}(t)&=& t^3-6t^2+(12-x_1^2-x_2^2-x_3^2)t-2(4-r),\\ L(f_3,f_3)&=&\frac{r(\uuuu{x})}{\gcd(x_1,x_2,x_3)^2},\quad I^{(0)}(f_3,f_3)=2L(f_3,f_3). \end{eqnarray*} The eigenvalues of $M$ are called \begin{eqnarray*} \lambda_{1/2}=\frac{2-r}{2}\pm \frac{1}{2}\sqrt{r(r-4)},\quad \lambda_3=1 \end{eqnarray*} with $\lambda_1+\lambda_2=2-r$, $\lambda_1\lambda_2=1$. The following lemma implicitly gives precise information on $p_{ch,M}$ and $\sign I^{(0)}$ for all $\uuuu{x}\in\Z^3$. {\it Implicitly}, because in the cases $r(\uuuu{x})\in\{0,1,2,4\}$ one has to determine with the tools from section \ref{s4.2} in which $\Br_3\ltimes \{\pm 1\}^3$ orbit of Theorem \ref{t4.6} (e) the matrix $S(\uuuu{x})$ lies. \begin{lemma}\label{t5.7} (a) $r^{-1}(3l)=\emptyset$ for $l\in\Z-3\Z$. (b) Consider $\uuuu{x}\in\Z^3$ with $r=r(\uuuu{x})<0$ or $>4$ or with $S(\uuuu{x})$ one of the cases in Theorem \ref{t4.6} (e). Then $p_{ch,M}$ and $\sign I^{(0)}$ are as follows. \begin{eqnarray*} \begin{array}{llll} & p_{ch,M} & \sign I^{(0)}\hspace*{1cm} & S(\uuuu{x}) \\ r<0 & \lambda_1,\lambda_2>0 & (+--) & S(\uuuu{x}) \\ r=0 & \Phi_1^3 & (+++) & S(A_1^3)\\ r=0 & \Phi_1^3 & (+--) & S(\P^2)\\ r=1 & \Phi_6\Phi_1 & (+++) & S(A_2A_1) \\ r=2 & \Phi_4\Phi_1 & (+++) & S(A_3) \\ r=4 & \Phi_2^2\Phi_1 & (++\ 0) & S(\P^1A_1) \\ r=4 & \Phi_2^2\Phi_1 & (++\ 0) & S(\whh{A}_2) \\ r=4 & \Phi_2^2\Phi_1 & (+\ 0\ 0) & S(\HH_{1,2}) \\ r=4 & \Phi_2^2\Phi_1 & (+\ 0\ -) & S(-l,2,-l) \textup{ with }l\geq 3 \\ r>4 & \lambda_1,\lambda_2<0 & (++-) & S(\uuuu{x}) \end{array} \end{eqnarray*} \end{lemma} {\bf Proof:} (a) If $(3| x_1,3| x_2,3| x_3)$ then $9| r$. If $(3| x_1,3| x_2,3\nmid x_3)$ then $3| (r-1),3\nmid r$. If $(3| x_1,3\nmid x_2,3\nmid x_3)$ then $3| (r-2),3\nmid r$. If $(3\nmid x_1,3\nmid x_2,3\nmid x_3)$ then $3| (x_1^2+x_2^2+x_3^2),3\nmid r$. (b) The statements on $p_{ch,M}$ are obvious.
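For instance, the factorization $p_{ch,M}=(t-1)(t^2-(2-r(\uuuu{x}))t+1)=t^3-(3-r)t^2+(3-r)t-1$ can be confirmed numerically from the matrix $S^{-1}S^t$ displayed in section \ref{s5.3}. A minimal sketch (pure Python, hypothetical helper names), comparing the coefficients of the characteristic polynomial:

```python
from itertools import product

def r(x1, x2, x3):
    return x1*x1 + x2*x2 + x3*x3 - x1*x2*x3

def monodromy(x1, x2, x3):
    # the matrix S^{-1} S^t computed in section 5.3
    return [
        [1 - x1*x1 - x2*x2 + x1*x2*x3, -x1 - x2*x3 + x1*x3*x3, x1*x3 - x2],
        [x1 - x2*x3,                    1 - x3*x3,             -x3],
        [x2,                            x3,                     1],
    ]

def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def charpoly_coeffs(A):
    # for a 3x3 matrix, p_ch(t) = t^3 - c2 t^2 + c1 t - c0 with
    # c2 = trace, c1 = sum of principal 2x2 minors, c0 = det
    c2 = A[0][0] + A[1][1] + A[2][2]
    c1 = sum(A[i][i]*A[j][j] - A[i][j]*A[j][i]
             for i, j in [(0, 1), (0, 2), (1, 2)])
    c0 = det3(A)
    return c2, c1, c0

# p_{ch,M}(t) = t^3 - (3-r) t^2 + (3-r) t - 1 on a grid of triples
for x in product(range(-4, 5), repeat=3):
    rr = r(*x)
    assert charpoly_coeffs(monodromy(*x)) == (3 - rr, 3 - rr, 1)
```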
$$\Rad I^{(0)}\stackrel{\textup{2.5 (a)(ii)}}{=} \ker (M+\id)\supsetneqq\{0\} \iff \Phi_2|p_{ch,M}\iff r=4.$$ In the cases with $r=4$, one calculates $p_{ch,S+S^t}(t)$ and reads off $\sign I^{(0)}$ from the zeros of $p_{ch,S+S^t}(t)$. The case $S(A_1^3)=S(0,0,0)$ is trivial. Consider the cases with $r\neq 4$ and $\uuuu{x}\neq (0,0,0)$. Then $I^{(0)}$ is nondegenerate. The product of the signs in the signature of $I^{(0)}$ is the sign of $\det(S+S^t)=2(4-r)$. Because of the $2$'s on the diagonal of $S+S^t$, $I^{(0)}$ cannot be negative definite. Also recall $I^{(0)}(f_3,f_3)=2r(\gcd(x_1,x_2,x_3))^{-2}$. This shows $\sign I^{(0)}=(+--)$ for $r<0$, $\sign I^{(0)}=(++-)$ for $r>4$ and $\sign I^{(0)}=(+++)$ or $(+--)$ for $r\in\{0,1,2\}$. The classification of $\Br_3\ltimes\{\pm 1\}^3$ orbits in $T^{uni}_3(\Z)$ in Theorem \ref{t4.6} (e) says that for each of the cases $r\in\{0,1,2\}$ there is only one orbit (with $\uuuu{x}\neq (0,0,0)$ in the case $r=0$), namely $S(\P^2)$, $S(A_2A_1)$ and $S(A_3)$. One checks the claims on $\sign I^{(0)}=\sign (S+S^t)$ immediately. \hfill$\Box$ \begin{remarks}\label{t5.8} (i) It is very remarkable that the fibers $r^{-1}(1)$ and $r^{-1}(2)\subset\Z^3$ of $r:\Z^3\to\Z$ each consist of only one orbit. If one looks at the fibers of the real map \begin{eqnarray*} r_\R:\R^3\to\R,\quad \uuuu{x}\mapsto x_1^2+x_2^2+x_3^2-x_1x_2x_3, \end{eqnarray*} this does not hold. Each real fiber $r_\R^{-1}(\rho)$ with $\rho\in(0,4)$ has five components, one compact (homeomorphic to a 2-sphere), four non-compact (homeomorphic to $\R^2$). The four non-compact components are related by the action of $G^{sign}$. It is remarkable that the fibers $r_\R^{-1}(1)$ and $r_\R^{-1}(2)\subset\R^3$ intersect $\Z^3$ only in the central piece. (ii) By $p_{ch,M}=(t-1)(t^2-(2-r(\uuuu{x}))t+1)$, the monodromy matrix $S^{-1}S^t$ for $\uuuu{x}\in\R^3$ and $S=S(\uuuu{x})$ has all eigenvalues in $S^1$ if and only if $r_\R(\uuuu{x})\in[0,4]$.
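Two arithmetic facts from Lemma \ref{t5.7} and its proof can be confirmed by brute force: part (a), which amounts to the implication that $3\,|\,r(\uuuu{x})$ forces $9\,|\,r(\uuuu{x})$, and the identity $\det(S+S^t)=2(4-r)$. A minimal sketch (pure Python, hypothetical helper names, checking a finite grid of triples):

```python
from itertools import product

def r(x1, x2, x3):
    return x1*x1 + x2*x2 + x3*x3 - x1*x2*x3

def det_sym(x1, x2, x3):
    # det(S + S^t) for the matrix S + S^t displayed in section 5.3
    A = [[2, x1, x2], [x1, 2, x3], [x2, x3, 2]]
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

for x in product(range(-8, 9), repeat=3):
    rr = r(*x)
    # Lemma 5.7 (a): r never takes a value 3l with 3 not dividing l,
    # i.e. if 3 divides r then 9 divides r
    if rr % 3 == 0:
        assert rr % 9 == 0
    # identity from the proof of Lemma 5.7 (b)
    assert det_sym(*x) == 2 * (4 - rr)
```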
(iii) The semialgebraic subvariety $r_\R^{-1}([0,4])\subset\R^3$ was studied in \cite[5.2]{BH20}. It has a central piece which is $G^{sign}$ invariant and which looks like a tetrahedron with smoothened edges and four other pieces which are permuted by $G^{sign}$. Each other piece is homeomorphic to $[0,1]\times\R^2$ and is glued in one point (its only singular point) to one of the vertices of the central piece. The four vertices are $(2,2,2),(2,-2,-2),(-2,2,-2),(-2,-2,2)$, so the elements of the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $(2,2,2)$. For $\uuuu{x}$ inside the central piece $S+S^t$ is positive definite; on its boundary except the vertices $\sign (S+S^t)=(++0)$; at a vertex $\sign(S+S^t)=(+00)$. On those boundary components of the other four pieces which contain one of the vertices we have $\sign(S+S^t)=(+0-)$ (except at the vertex). On the interior of $r_\R^{-1}((-\infty,4])$ except the central piece we have $\sign(S+S^t)=(+--)$. On the exterior of $r_\R^{-1}((-\infty,4])$ we have $\sign(S+S^t)=(++-)$. (iv) Due to Lemma \ref{t5.7} (b) and Theorem \ref{t4.6}, the seven cases $S(A_1^3)$, $S(\P^2)$, $S(A_2A_1)$, $S(A_3)$, $S(\P^1A_1)$, $S(\widehat{A}_2)$, $S(\HH_{1,2})$ and the series $S(-l,2,-l)$ for $l\geq 3$ give the only rank 3 unimodular bilinear lattices where all eigenvalues of the monodromy are roots of unity. In the sections \ref{s5.4} and \ref{s5.5} we will focus on the reducible cases and these cases. In the sections \ref{s5.6} and \ref{s5.7} we will treat the other cases. \end{remarks} The following definition presents a special automorphism $Q$ in $\Aut(H_\Q,L)$. Theorem \ref{t5.11} will say in which cases $Q$ is in $G_\Z$ and in which cases not. The determination of the group $G_\Z$ in all irreducible rank 3 cases in the sections \ref{s5.5}--\ref{s5.7} will build on this result. It is preceded by Lemma \ref{t5.10} which provides notations and estimates which will be used in the proof of Theorem \ref{t5.11} and also later.
\begin{definition}\label{t5.9} Consider $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})\neq 0$. Then $H_\Q=H_{\Q,1}\oplus H_{\Q,2}$ with $H_{\Q,1}:=\ker(M^2-(2-r(\uuuu{x}))M+\id:H_\Q\to H_\Q)$ and $H_{\Q,2}:=\ker(M-\id:H_\Q\to H_\Q)$. This decomposition is left and right $L$-orthogonal. Then $Q:H_\Q\to H_\Q$ denotes the automorphism with $Q|_{H_{\Q,1}}=-\id$ and $Q|_{H_{\Q,2}}=\id$. It is in $\Aut(H_\Q,L)$. \index{$Q$} \end{definition} \begin{lemma}\label{t5.10} (a) For $\uuuu{x}\in \Z^3-\{(0,0,0)\}$ write $r:=r(\uuuu{x})$ and define \index{$g=g(\uuuu{x})=\gcd(x_1,x_2,x_3)$} \index{$\www{\uuuu{x}}=(\www{x}_1,\www{x}_2,\www{x}_3)=g^{-1}\uuuu{x}$} \begin{eqnarray*} g:=g(\uuuu{x})&:=&\gcd(x_1,x_2,x_3)\in\N,\nonumber\\ \uuuu{\www{x}}:=(\www{x}_1,\www{x}_2,\www{x}_3) &:=&g^{-1}\uuuu{x}\in\Z^3. \end{eqnarray*} Then $f_3=-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3$, $\gcd(\www{x}_1,\www{x}_2,\www{x}_3)=1$ and \begin{eqnarray} g^2\,|\, r,\quad \frac{r}{g^2}= \www{x}_1^2+\www{x}_2^2+\www{x}_3^2-g\www{x}_1\www{x}_2\www{x}_3. \label{5.1} \end{eqnarray} (b) Consider a local minimum (Definition \ref{t4.3}) $\uuuu{x}\in\Z^3_{\geq 3}$ with $x_i\leq x_j\leq x_k$ for some $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$. Then \begin{eqnarray} \www{x}_i&\leq& \frac{2+(4-r)^{1/3}}{g},\label{5.2}\\ x_j^2&\leq & \frac{4-r}{x_i-2}+x_i+2,\label{5.3}\\ x_k&\leq& \frac{1}{2}x_ix_j \quad\textup{and}\quad \www{x}_k\leq \frac{g}{2}\www{x}_i\www{x}_j.\label{5.4} \end{eqnarray} \end{lemma} {\bf Proof:} (a) Trivial. (b) Lemma \ref{t4.4} shows $x_k\leq\frac{1}{2}x_ix_j$, which is \eqref{5.4}. This is equivalent to $\www{x}_k\leq\frac{g}{2}\www{x}_i\www{x}_j$. We also have $\www{x}_i\leq \www{x}_j\leq\www{x}_k$, and we know $r\leq 0$ from Theorem \ref{t4.6}. The proof of \eqref{5.2} is similar to the second case in the proof of Theorem \ref{t4.6} (a).
\begin{eqnarray*} \frac{r}{g^2}&=& \www{x}_i^2+\www{x}_j^2+ (\www{x}_k-\frac{g}{2}\www{x}_i\www{x}_j)^2 -\frac{g^2}{4}\www{x}_i^2\www{x}_j^2\\ &\leq& \www{x}_i^2+\www{x}_j^2+ (\www{x}_j-\frac{g}{2}\www{x}_i\www{x}_j)^2 -\frac{g^2}{4}\www{x}_i^2\www{x}_j^2 \quad(\textup{because }\www{x}_j\leq\www{x}_k\leq \frac{g}{2}\www{x}_i\www{x}_j)\\ &=& \www{x}_i^2+2\www{x}_j^2-g\www{x}_i\www{x}_j^2\\ &=& (\www{x}_i-g\www{x}_j^2+\frac{2}{g})(\www{x}_i-\frac{2}{g}) +\frac{4}{g^2}. \end{eqnarray*} If $\www{x}_i<\frac{2}{g}$, then \eqref{5.2} holds anyway. If $\www{x}_i\geq \frac{2}{g}$ then we can further estimate the last formula using $-g\www{x}_j^2\leq -g\www{x}_i^2$. Then we obtain \begin{eqnarray*} \frac{r}{g^2}&\leq& (\www{x}_i-g\www{x}_i^2+\frac{2}{g}) (\www{x}_i-\frac{2}{g})+\frac{4}{g^2}\\ &=& -g(\www{x}_i-\frac{2}{g})^2(\www{x}_i+\frac{1}{g}) +\frac{4}{g^2}\\ &\leq& -g(\www{x}_i-\frac{2}{g})^3+\frac{4}{g^2},\\ \textup{so } (\www{x}_i-\frac{2}{g})^3&\leq& \frac{4-r}{g^3}. \end{eqnarray*} This shows \eqref{5.2}. The inequality \eqref{5.3} was proved within the second case in the proof of Theorem \ref{t4.6} (a).\hfill$\Box$ \begin{theorem}\label{t5.11} Consider $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})\neq 0$.
The automorphism $Q\in \Aut(H_\Q,L)$ which was defined in Definition \ref{t5.9} can be written in two interesting ways, \begin{eqnarray}\label{5.5} Q&=& (\uuuu{e}\mapsto -\uuuu{e}+\frac{2g^2}{r}f_3 (-\www{x}_3,\www{x}_2-g\www{x}_1\www{x}_3,-\www{x}_1)),\\ Q&=& \id + 2(M-\id)+\frac{2}{r}(M-\id)^2.\label{5.6} \end{eqnarray} We have \begin{eqnarray} Q\in G_\Z\iff \frac{2g^2}{r}\in\Z \iff \frac{r}{g^2}\in\{\pm 1,\pm 2\}.\label{5.7} \end{eqnarray} This holds if and only if $\uuuu{x}$ is in the $\Br_3\ltimes \{\pm 1\}^3$ orbit of a triple in the following set: \begin{eqnarray*} &&\{(x,0,0)\,|\, x\in\N\} \quad (\textup{these are the reducible cases except }A_1^3),\\ &\cup& \{(x,x,0)\,|\, x\in\N\}\quad \textup{(these cases include }A_3),\\ &\cup& \{(-l,2,-l)\,|\, l\geq 2\textup{ even}\}\quad \textup{(these cases include }\HH_{1,2}),\\ &\cup&\{(3,3,4),(4,4,4),(5,5,5),(4,4,8)\}. \end{eqnarray*} So, within the cases with $r\in\{0,1,2,3,4\}$, $Q$ is not defined for $A_1^3$ and $\P^2$, and $Q\notin G_\Z$ for $\widehat{A}_2$ and $S(-l,2,-l)$ with $l\geq 3$ odd. \end{theorem} {\bf Proof:} First we prove \eqref{5.5}. The 2-dimensional subspace $H_{\Q,1}=\ker(M^2-(2-r)M+\id)\subset H_\Q$, on which $Q$ is $-\id$, can also be characterized as the right $L$-orthogonal subspace $H_{\Q,1}=(\Q f_3)^{\perp}$ to $\Q f_3$. 
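As an aside, Definition \ref{t5.9} amounts to the factorization $p_{ch,M}=(t-1)(t^2-(2-r)t+1)$, and this, the divisibility \eqref{5.1} and $M(f_3)=f_3$ can be confirmed numerically for sample triples. A sketch, assuming as in the preceding chapters that $M$ has the matrix $S^{-1}S^t$ with respect to $\uuuu{e}$ and that $r(\uuuu{x})=x_1^2+x_2^2+x_3^2-x_1x_2x_3$:

```python
from math import gcd

def r(x):
    return x[0]**2 + x[1]**2 + x[2]**2 - x[0]*x[1]*x[2]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def monodromy(x):
    # matrix of M with respect to the basis e is S^{-1} S^t
    x1, x2, x3 = x
    S_inv = [[1, -x1, x1*x3 - x2], [0, 1, -x3], [0, 0, 1]]
    St = [[1, 0, 0], [x1, 1, 0], [x2, x3, 1]]
    return mul(S_inv, St)

def check(x):
    g = gcd(gcd(abs(x[0]), abs(x[1])), abs(x[2]))
    xt = [c // g for c in x]
    assert r(x) % (g*g) == 0                                     # (5.1)
    assert r(x)//(g*g) == xt[0]**2 + xt[1]**2 + xt[2]**2 - g*xt[0]*xt[1]*xt[2]
    M = monodromy(x)
    v = [-xt[2], xt[1], -xt[0]]               # coordinates of f_3
    assert [sum(M[i][k]*v[k] for k in range(3)) for i in range(3)] == v
    # p_ch,M = (t-1)(t^2-(2-r)t+1): this product annihilates M
    I3 = [[int(i == j) for j in range(3)] for i in range(3)]
    M2 = mul(M, M)
    P = [[M2[i][j] - (2 - r(x))*M[i][j] + I3[i][j] for j in range(3)]
         for i in range(3)]
    N = [[M[i][j] - I3[i][j] for j in range(3)] for i in range(3)]
    assert mul(N, P) == [[0, 0, 0] for _ in range(3)]

for x in [(3, 3, 4), (4, 4, 8), (-2, 2, -2), (5, 5, 5), (2, 2, 2)]:
    check(x)
```

The sample triples are an arbitrary choice; the checks pass for any $\uuuu{x}$ under these conventions.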
For $b=\uuuu{e}\cdot \uuuu{y}^t\in H_\Q$ with $\uuuu{y}\in\Q^3$ \begin{eqnarray*} L(f_3,b)&=&(-\www{x}_3, \www{x}_2, -\www{x}_1) \begin{pmatrix}1&0&0\\x_1&1&0\\x_2&x_3&1\end{pmatrix} \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\\ &=&(-\www{x}_3, \www{x}_2-g\www{x}_1\www{x}_3,-\www{x}_1) \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}, \end{eqnarray*} so \begin{eqnarray} H_{\Q,1}=\{\uuuu{e}\cdot \uuuu{y}^t\,|\, \uuuu{y}\in\Q^3, 0=(-\www{x}_3, \www{x}_2-g\www{x}_1\www{x}_3,-\www{x}_1) \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\}.\label{5.8} \end{eqnarray} Denote the endomorphisms on the right hand sides of \eqref{5.5} and \eqref{5.6} by $Q^{\eqref{5.5}}$ and $Q^{\eqref{5.6}}$, respectively. The formulas \eqref{5.5} and \eqref{5.8} show $Q^{\eqref{5.5}}|_{H_{\Q,1}}=-\id$. Also \begin{eqnarray*} Q^{\eqref{5.5}}(f_3)=-f_3+f_3\frac{2g^2}{r} (\www{x}_3^2+(\www{x}_2-g\www{x}_1\www{x}_3)\www{x}_2 +\www{x}_1^2) =f_3. \end{eqnarray*} Therefore $Q^{\eqref{5.5}}=Q$, so \eqref{5.5} holds. Now we prove \eqref{5.6}. Because $(M-\id)(f_3)=0$, we have $Q^{\eqref{5.6}}(f_3)=f_3$. Consider $b\in H_{\Q,1}$. Then \begin{eqnarray*} 0&=&(M^2-(2-r)M+\id)(b),\quad\textup{so}\\ (M-\id)^2(b)&=&-rM(b),\quad\textup{so}\\ Q^{\eqref{5.6}}(b) &=& (\id+2(M-\id)+\frac{2}{r}(-rM))(b) =(-\id)(b)=-b. \end{eqnarray*} Therefore $Q^{\eqref{5.6}}=Q$, so \eqref{5.6} holds. \eqref{5.7} can be proved either with \eqref{5.6} and Lemma \ref{t5.17} below or with \eqref{5.5}, which is easier and which we do now. Observe $\gcd(-\www{x}_3, \www{x}_2-g\www{x}_1\www{x}_3,-\www{x}_1)= \gcd(\www{x}_1,\www{x}_2,\www{x}_3)=1$. Also, $f_3$ is a primitive vector in $H_\Z$. This shows that $Q(e_1),Q(e_2),Q(e_3)$ are all in $H_\Z$ if and only if $\frac{2g^2}{r}\in\Z$. This shows \eqref{5.7}.
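Both descriptions \eqref{5.5} and \eqref{5.6} of $Q$, its basic properties, and the list of exceptional triples in Theorem \ref{t5.11} can be cross-checked by machine. A sketch under the same conventions as above (matrix of $M$ is $S^{-1}S^t$; exact rational arithmetic, since $\frac{2}{r}$ need not be integral; the criterion $2x_k\leq x_ix_j$ for local minima comes from Lemma \ref{t4.4}, and the search box $30$ is an ad hoc choice, generous compared with the bound \eqref{5.2}):

```python
from fractions import Fraction
from math import gcd

def r(x):
    return x[0]**2 + x[1]**2 + x[2]**2 - x[0]*x[1]*x[2]

def check_Q(x):
    x1, x2, x3 = x
    rr = r(x)
    g = gcd(gcd(abs(x1), abs(x2)), abs(x3))
    xt = [c // g for c in x]
    mul = lambda A, B: [[sum(Fraction(A[i][k]) * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    S_inv = [[1, -x1, x1*x3 - x2], [0, 1, -x3], [0, 0, 1]]
    St = [[1, 0, 0], [x1, 1, 0], [x2, x3, 1]]
    M = mul(S_inv, St)
    I3 = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
    N = [[M[i][j] - I3[i][j] for j in range(3)] for i in range(3)]  # M - id
    N2 = mul(N, N)
    # (5.6): Q = id + 2(M - id) + (2/r)(M - id)^2
    Q56 = [[I3[i][j] + 2*N[i][j] + Fraction(2, rr)*N2[i][j]
            for j in range(3)] for i in range(3)]
    # (5.5): Q(e_j) = -e_j + (2 g^2 / r) w_j f_3
    v = [-xt[2], xt[1], -xt[0]]                     # coordinates of f_3
    w = [-xt[2], xt[1] - g*xt[0]*xt[2], -xt[0]]     # row vector of L(f_3, .)
    c = Fraction(2*g*g, rr)
    Q55 = [[-I3[i][j] + c*v[i]*w[j] for j in range(3)] for i in range(3)]
    assert Q55 == Q56
    assert mul(Q56, Q56) == I3                      # Q^2 = id
    assert mul(Q56, M) == mul(M, Q56)               # Q commutes with M
    assert Q56[0][0] + Q56[1][1] + Q56[2][2] == -1  # eigenvalues -1, -1, 1
    return Q56

for x in [(3, 3, 4), (-2, 2, -2), (5, 5, 5), (4, 4, 8), (2, 2, 0)]:
    check_Q(x)

# classification: local minima 3 <= x_i <= x_j <= x_k with 2 x_k <= x_i x_j
# and r/g^2 in {-1,-2}, searched in a box; g^2 | r holds by (5.1)
found = set()
for xi in range(3, 31):
    for xj in range(xi, 31):
        for xk in range(xj, min(30, xi*xj//2) + 1):
            g = gcd(gcd(xi, xj), xk)
            rr = r((xi, xj, xk))
            if rr != 0 and rr // g**2 in (-1, -2):
                found.add((xi, xj, xk))
assert found == {(3, 3, 4), (4, 4, 4), (5, 5, 5), (4, 4, 8)}
```

The search recovers exactly the four exceptional triples of Theorem \ref{t5.11}.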
It is easy to see that all triples in the set in Theorem \ref{t5.11} satisfy $\frac{r}{g^2}\in\{\pm 1,\pm 2\}$: \begin{eqnarray*} \frac{r}{g^2}((x,0,0))=\frac{x^2}{x^2}=1,\quad \frac{r}{g^2}((x,x,0))=\frac{2x^2}{x^2}=2,\\ \textup{for even }l\quad \frac{r}{g^2}((-l,2,-l)) =\frac{4}{4}=1, \end{eqnarray*} \begin{eqnarray*} \frac{r}{g^2}((3,3,4))= \frac{-2}{1} = -2,\quad \frac{r}{g^2}((4,4,4))= \frac{-16}{16} = -1,\\ \frac{r}{g^2}((5,5,5))= \frac{-50}{25} = -2,\quad \frac{r}{g^2}((4,4,8))= \frac{-32}{16} = -2. \end{eqnarray*} The difficult part is to see that there are no other $\uuuu{x}\in\Z^3$ with $r\neq 0$ and $\frac{r}{g^2}\in\{\pm 1,\pm 2\}$. It is sufficient to consider local minima (Definition \ref{t4.3}). The calculations \begin{eqnarray*} \frac{r}{g^2}((-l,2,-l))=\frac{4}{1}=4 \quad\textup{for odd }l\geq 3\\ \textup{and}\quad \frac{r}{g^2}((-1,-1,-1))=\frac{4}{1}=4 \end{eqnarray*} deal with the other cases with $r\in\{1,2,3,4\}$, see Theorem \ref{t4.6} (e). Consider $\uuuu{x}\in\Z^3_{\leq 0}$ with $x_i\leq x_j\leq x_k$ for some $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$. Then $\frac{r}{g^2}=\www{x}_1^2+\www{x}_2^2+\www{x}_3^2 +g|\www{x}_1\www{x}_2\www{x}_3|$ can be $1$ or $2$ only if $\www{x}_i=-1$ and $\www{x}_j=\www{x}_k=0$, or if $\www{x}_i=\www{x}_j=-1$ and $\www{x}_k=0$. Then $\uuuu{x}=(-g,0,0)$ or $\uuuu{x}=(-g,-g,0)$. These triples are in the $\Br_3\ltimes\{\pm 1\}^3$ orbits of $(g,0,0)$ and $(g,g,0)$. Consider a local minimum $\uuuu{x}\in\Z^3_{\geq 3}$ with $x_i\leq x_j\leq x_k$ for some $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$. Suppose $\frac{r}{g^2}\in\{-1,-2\}$. We have to show $\uuuu{x}\in\{(3,3,4),(4,4,4),(5,5,5),(4,4,8)\}$. Of course $\uuuu{x}\neq (3,3,3)$ because $r\neq 0$. If $g=1$ then $r\in\{-1,-2\}$. One sees easily that $r=-1$ is impossible and that $r=-2$ is only satisfied for $\uuuu{x}=(3,3,4)$. From now on suppose $g\geq 2$. Write $\rho:=|\frac{r}{g^2}|\in\{1,2\}$. \eqref{5.2} takes the shape \begin{eqnarray*} \www{x}_i\leq \frac{2}{g}+(\frac{\rho}{g}+\frac{4}{g^3})^{1/3}.
\end{eqnarray*} The only pairs $(\www{x}_i,g)\in\N\times\Z_{\geq 2}$ which satisfy this and $x_i=\www{x}_ig\geq 3$ are in the following two tables, \begin{eqnarray*} \rho=1: \begin{array}{l|l|l|l|l} g & 2 & 3 & 4 & 5 \\ \hline \www{x}_i & 2 & 1 & 1 & 1 \end{array},\quad \rho=2: \begin{array}{l|l|l|l|l|l} g & 2 & 3 & 4 & 5 & 6 \\ \hline \www{x}_i & 2 & 1 & 1 & 1 & 1\end{array} \end{eqnarray*} The following table discusses these nine cases. Three of them lead to $(4,4,4)$, $(5,5,5)$ and $(4,4,8)$, six of them are impossible. The symbol $\circledast$ denotes {\it impossible}. The inequalities $x_i\leq x_j\leq x_k$ and \eqref{5.3} and \eqref{5.4} are used, and also $x_i=g\www{x}_i$ and $x_j,x_k\in g\N$. \begin{eqnarray*} \begin{array}{l|l|l|l|l|l|l|l|l} \rho & g & -r & \www{x}_i & x_i & x_j^2\leq \frac{4-r}{x_i-2}+x_i+2 & x_j & x_k\leq \frac{1}{2}x_ix_j & x_k \\ \hline 1 & 2 & 4 & 2 & 4 & x_j^2 \leq 10 & \circledast & & \\ 1 & 3 & 9 & 1 & 3 & x_j^2 \leq 18 & 3 & x_k\leq \frac{9}{2} & 3 \ \circledast \\ 1 & 4 & 16 & 1 & 4 & x_j^2 \leq 16 & 4 & x_k\leq 8 & 4 \\ 1 & 5 & 25 & 1 & 5 & x_j^2 \leq 16+\frac{2}{3} & \circledast & & \\ 2 & 2 & 8 & 2 & 4 & x_j^2 \leq 12 & \circledast & & \\ 2 & 3 & 18 & 1 & 3 & x_j^2 \leq 27 & 3 & x_k\leq \frac{9}{2} & 3 \ \circledast \\ 2 & 4 & 32 & 1 & 4 & x_j^2 \leq 24 & 4 & x_k\leq 8 & 8 \\ 2 & 5 & 50 & 1 & 5 & x_j^2 \leq 25 & 5 & x_k\leq 12+\frac{1}{2} & 5 \\ 2 & 6 & 72 & 1 & 6 & x_j^2 \leq 27 & \circledast & & \end{array} \end{eqnarray*} This finishes the proof of Theorem \ref{t5.11}.\hfill$\Box$ \section{The reducible rank 3 cases} \label{s5.4} Definition \ref{t2.10} proposed the notion of a reducible triple $(H_\Z,L,\uuuu{e})$, where $(H_\Z,L)$ is a unimodular bilinear lattice and $\uuuu{e}$ is a triangular basis. The following Remarks propose the weaker notion of reducibility for a unimodular bilinear lattice $(H_\Z,L)$ without a triangular basis.
Then the groups $G_\Z,G_\Z^{(0)},G_\Z^{(1)}$ and $G_\Z^M$ split accordingly if also the eigenvalues of $M$ split in a suitable sense. \begin{remarks}\label{t5.12} (i) Suppose that a unimodular bilinear lattice $(H_\Z,L)$ splits into a direct sum $H_{\Z,1}\oplus H_{\Z,2}$ which is left and right $L$-orthogonal. Then the restrictions of $L,I^{(0)},I^{(1)}$ and $M$ to $H_{\Z,i}$ are called $L_i,I^{(0)}_i,I^{(1)}_i$ and $M_i$ for $i\in\{1,2\}$. We say that $(H_\Z,L)$ is {\it reducible} and that it splits into the direct sum $(H_{\Z,1},L_1)\oplus (H_{\Z,2},L_2)$. (ii) In the situation of (i), suppose that the eigenvalues of $M_1$ are pairwise different from the eigenvalues of $M_2$. Then any element of $G_\Z^M$ respects the splitting. For $\{i,j\}=\{1,2\}$ write $G_{\Z,i}^M:=G_\Z^M(H_{\Z,i},L_i)$. Then \begin{eqnarray*} G_\Z^M = G_{\Z,1}^M\times G_{\Z,2}^M, \end{eqnarray*} and with analogous notations \begin{eqnarray*} G_\Z = G_{\Z,1}\times G_{\Z,2}, \ G_\Z^{(0)} = G_{\Z,1}^{(0)}\times G_{\Z,2}^{(0)}, \ G_\Z^{(1)} = G_{\Z,1}^{(1)}\times G_{\Z,2}^{(1)}. \end{eqnarray*} (iii) There is only one unimodular bilinear lattice of rank 1. We call it the $A_1$-lattice and denote its matrix by $S=S(A_1)=(1)\in M_{1\times 1}(\Z)$. Here $G_\Z=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z^M=\{\pm\id\}$. (iv) Suppose that the characteristic polynomial $p_{ch,M}(t)\in\Z[t]$ of the monodromy $M$ of a unimodular bilinear lattice $(H_\Z,L)$ splits into a product $p_{ch,M}=p_1p_2$ of non-constant polynomials $p_1$ and $p_2$ with $\gcd(p_1,p_2)=1$. Then $\ker p_1(M)\oplus \ker p_2(M)$ is a sublattice of finite index in $H_\Z$, and the summands are left and right $L$-orthogonal to one another. If the index is 1, we are in the situation of (ii). Theorem \ref{t5.14} will show that this applies to the cases $S(\HH_{1,2})$ and $S(-l,2,-l)$ with $l\geq 4$ even, but not to the cases $S(A_3)$, $S(\widehat{A}_2)$ and $S(-l,2,-l)$ with $l\geq 3$ odd.
\end{remarks} These remarks apply especially to the reducible $3\times 3$ cases except $A_1^3$ (which is part of Lemma \ref{t5.4}). This includes the two reducible cases $A_2A_1$ and $\P^1A_1$ with eigenvalues in $S^1$. \begin{theorem}\label{t5.13} Consider $\uuuu{x}=(x,0,0)\in\Z^3$ with $x\neq 0$ and the unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})\in T^{uni}_3(\Z)$. Then $(H_\Z,L,\uuuu{e})$ is reducible with the summands $(H_{\Z,1},L_1,(e_1,e_2))$ and $(H_{\Z,2},L_2,e_3)$ with $H_{\Z,1}=\Z e_1\oplus \Z e_2$ and $H_{\Z,2}=\Z e_3$. The first summand is an irreducible rank two unimodular bilinear lattice with triangular basis. Its groups $G_{\Z,1},G_{\Z,1}^{(0)},G_{\Z,1}^{(1)}$ and $G_{\Z,1}^M$ are treated in Theorem \ref{t5.5}. The second summand is of type $A_1$. See Remark \ref{t5.12} (iii) for its groups. The decompositions in Remark \ref{t5.12} (ii) hold for the groups $G_\Z$, $G_\Z^{(0)}$, $G_\Z^{(1)}$ and $G_\Z^M$. Here $G_\Z^M=G_\Z$ if $x\neq \pm 3$ and $G_\Z=G_\Z^{(0)}=G_\Z^{(1)}$ always. The map $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ is surjective. \end{theorem} {\bf Proof:} The first point to see is that Remark \ref{t5.12} (ii) applies. It does because the characteristic polynomials of the monodromies $M_1$ and $M_2$ of the two summands are $t^2-(2-x^2)t+1$ and $t-1$, and here $x\neq 0$, so that the eigenvalues of $M_1$ are not equal to the eigenvalue $1$ of $M_2$. The second point to see is the surjectivity of the map $Z$. This follows from the surjectivity of the map $Z$ in the irreducible rank 2 cases in Theorem \ref{t5.5} and in the case $A_1$ in Lemma \ref{t5.4}. \hfill$\Box$ \section{The irreducible rank 3 cases with all eigenvalues in $S^1$} \label{s5.5} Theorem \ref{t5.14} is the only result in this section. It treats the irreducible rank 3 cases with all eigenvalues in $S^1$.
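As a numerical aside to Theorem \ref{t5.13}: the splitting and the eigenvalue condition of Remark \ref{t5.12} (ii) are visible directly in the matrix of the monodromy. A sketch under the same conventions as before ($M$ has the matrix $S^{-1}S^t$):

```python
def monodromy_x00(x):
    # u = (x,0,0):  S = [[1,x,0],[0,1,0],[0,0,1]],  matrix of M is S^{-1} S^t
    S_inv = [[1, -x, 0], [0, 1, 0], [0, 0, 1]]
    St = [[1, 0, 0], [x, 1, 0], [0, 0, 1]]
    return [[sum(S_inv[i][k]*St[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

for x in [1, 2, 3, 5, -4]:
    M = monodromy_x00(x)
    # M respects the splitting (Z e1 + Z e2) + Z e3
    assert M[0][2] == M[1][2] == M[2][0] == M[2][1] == 0 and M[2][2] == 1
    tr = M[0][0] + M[1][1]                      # trace of the block M_1
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]     # determinant of M_1
    assert tr == 2 - x*x and det == 1           # p_ch,M1 = t^2-(2-x^2)t+1
    assert 1 - tr + det == x*x != 0             # 1 is not an eigenvalue of M_1
```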
\begin{theorem}\label{t5.14} Consider for each of the matrices $S(\P^2)$, $S(A_3)$, $S(\widehat{A}_2)$, $S(\HH_{1,2})$ and $S(-l,2,-l)$ for $l\geq 3$ in the Examples \ref{t1.1} a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. (a) The cases $S(\HH_{1,2})$ and $S(-l,2,-l)$ for $l\geq 4$ even: Then $(H_\Z,L)$ is reducible (in the sense of Remark \ref{t5.12} (i)), $H_\Z=H_{\Z,1}\oplus H_{\Z,2}$ with \begin{eqnarray*} H_{\Z,1}&:=&\ker(M+\id)^2\textup{ of rank 2}\\ \textup{and }H_{\Z,2}&:=&\ker(M-\id)\textup{ of rank 1.} \end{eqnarray*} $(H_{\Z,2},L_2)$ is an $A_1$-lattice. In all cases the decompositions in Remark \ref{t5.12} (ii) hold for the groups $G_\Z^M$, $G_\Z^{(0)}$, $G_\Z^{(1)}$ and $G_\Z$. The groups $G_{\Z,1}^M$, $G_{\Z,1}^{(0)}$, $G_{\Z,1}^{(1)}$ and $G_{\Z,1}$ are as follows. \begin{list}{}{} \item[(i)] $S(\HH_{1,2})$: $H_{\Z,1}$ has a $\Z$-basis $\uuuu{f}=(f_1,f_2)$ with \begin{eqnarray*} L(\uuuu{f}^t,\uuuu{f})^t=\begin{pmatrix}0&-1\\1&0\end{pmatrix}, \quad I^{(0)}(\uuuu{f}^t,\uuuu{f})=\begin{pmatrix}0&0\\0&0 \end{pmatrix},\\ I^{(1)}(\uuuu{f}^t,\uuuu{f})=\begin{pmatrix}0&-2\\2&0 \end{pmatrix},\quad M\uuuu{f}=\uuuu{f}\begin{pmatrix}-1&0\\0&-1\end{pmatrix}, \end{eqnarray*} \begin{eqnarray*} G_{\Z,1}^M&=&G_{\Z,1}^{(0)}=\Aut(H_{\Z,1})\cong GL_2(\Z),\\ G_{\Z,1}&=&G_{\Z,1}^{(1)}=\{g\in\Aut(H_{\Z,1})\,|\, \det g=1\} \cong SL_2(\Z). \end{eqnarray*} \item[(ii)] $S(-l,2,-l)$ for $l\geq 4$ even: $H_{\Z,1}$ has a $\Z$-basis $\uuuu{f}=(f_1,f_2)$ with \begin{eqnarray*} L(\uuuu{f}^t,\uuuu{f})^t =\begin{pmatrix}0&-1\\1&1-\frac{l^2}{4}\end{pmatrix}, \quad I^{(0)}(\uuuu{f}^t,\uuuu{f}) =\begin{pmatrix}0&0\\0&2-\frac{l^2}{2}\end{pmatrix},\\ I^{(1)}(\uuuu{f}^t,\uuuu{f}) =\begin{pmatrix}0&-2\\2&0 \end{pmatrix},\quad M\uuuu{f}=\uuuu{f} \begin{pmatrix}-1&2-\frac{l^2}{2}\\0&-1\end{pmatrix}, \end{eqnarray*} Define $M_1^{root}\in \Aut(H_{\Z,1})$ by $M_1^{root}\uuuu{f}=\uuuu{f}\begin{pmatrix}1&1\\0&1\end{pmatrix}$. 
Then \begin{eqnarray*} (M_1^{root})^{l^2/2-2}=-M_1\qquad \textup{ and }\\ G_{\Z,1}=G_{\Z,1}^{(0)}=G_{\Z,1}^{(1)}=G_{\Z,1}^M =\{\pm (M_1^{root})^m\,|\, m\in\Z\}. \end{eqnarray*} \end{list} (b) The cases $S(\P^2)$, $S(A_3)$, $S(\widehat{A}_2)$, $S(-l,2,-l)$ with $l\geq 3$ odd: Then $(H_\Z,L)$ is irreducible. \begin{eqnarray*} G_\Z=G_\Z^{(0)}=\{\pm(M^{root})^m\,|\, m\in\Z\}. \end{eqnarray*} Here $M^{root}$ is defined by \index{$M^{root}$} \begin{eqnarray} M^{root}&:=& Z(\sigma^{root}) \quad\textup{ in the case }S(\P^2),\label{5.9}\\ M^{root}&:=& M= Z(\sigma^{mon}) \quad\textup{ in the case }S(A_3),\label{5.10}\\ M^{root}&:=& Z(\delta_3\sigma^{root}) \quad\textup{ in the case }S(\whh{A}_2),\label{5.11}\\ M^{root}&:=& (-M)\circ Z(\delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1)^{(5-l^2)/2} \nonumber \\ &&\textup{ in the case }S(-l,2,-l)\textup{ for odd }l\geq 3. \label{5.12} \end{eqnarray} It satisfies \begin{eqnarray*} M^{root}\uuuu{e}=\uuuu{e} M^{root,mat}\quad\textup{and}\quad (M^{root})^m=\varepsilon M \end{eqnarray*} where $M^{root,mat}$, $m$ and $\varepsilon$ are as follows: \begin{eqnarray*} \begin{array}{ccc} & S(\P^2) & S(A_3) \\ M^{root,mat} & \begin{pmatrix}3&-3&1\\1&0&0\\0&1&0 \end{pmatrix} & \begin{pmatrix}0&0&1\\-1&0&1\\0&-1&1 \end{pmatrix} \\ (m,\varepsilon) & (3,1) & (1,1)\\ \hline \hline & S(\widehat{A}_2) & S(-l,2,-l)\textup{ with }l\geq 3\textup{ odd}\\ M^{root,mat} & \begin{pmatrix}1&1&-1\\1&0&0\\0&1&0 \end{pmatrix} & \frac{1}{2} \begin{pmatrix}1-l^2&l^3-l&-1-l^2\\ -2l&2l^2-2&-2l\\ 1-l^2&l^3-3l&3-l^2 \end{pmatrix} \\ (m,\varepsilon) & (3,-1) & (l^2-4,-1) \end{array} \end{eqnarray*} $M$ and $M^{root}$ are regular. $M^{root}$ is cyclic. In the cases $S(A_3)$, $S(\widehat{A}_2)$ and $S(-l,2,-l)$ for $l\geq 3$ odd \begin{eqnarray*} G_\Z=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z^M =\{\pm (M^{root})^m\,|\, m\in\Z\}. 
\end{eqnarray*} Some additional information: \begin{list}{}{} \item[(i)] $S(\P^2)$: $\sign I^{(0)}=(+--)$, $p_{ch,M}=p_{ch,M^{root}}=\Phi_1^3$, $M$ and $M^{root}$ have a $3\times 3$ Jordan block, \begin{eqnarray*} G_\Z^{(1)}=G_\Z^M =\{\pm (M^{root})^m(\id+a(M^{root}-\id)^2)\,|\, m, a\in\Z\}\supsetneqq G_\Z. \end{eqnarray*} \item[(ii)] $S(A_3)$: $\sign I^{(0)}=(+++)$, $p_{ch,M}=\Phi_4\Phi_1$, $M=M^{root}$, $|G_\Z|=8$. \item[(iii)] $S(\widehat{A}_2)$: $\sign I^{(0)}=(++\, 0)$, $p_{ch,M}=\Phi_2^2\Phi_1$, $ p_{ch,M^{root}}=\Phi_1^2\Phi_2$, $M$ and $M^{root}$ have a $2\times 2$ Jordan block with eigenvalue $-1$ respectively $1$. \item[(iv)] $S(-l,2,-l)$ with $l\geq 3$ odd: $\sign I^{(0)}=(+\, 0\, -)$, $p_{ch,M}=\Phi_2^2\Phi_1$, $p_{ch,M^{root}}=\Phi_1^2\Phi_2$, $M$ and $M^{root}$ have a $2\times 2$ Jordan block with eigenvalue $-1$ respectively $1$. \end{list} (c) In all cases in this theorem the map $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ is surjective, so $G_\Z=G_\Z^{\BB}$. \end{theorem} {\bf Proof:} (a) Recall \begin{eqnarray*} H_{\Z,2}=\ker(M-\id)=\Rad I^{(1)}=\Z f_3,\ f_3=-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3. \end{eqnarray*} We will choose a $\Z$-basis $\uuuu{f}=(f_1,f_2)$ of $H_{\Z,1}:=\ker(M+\id)^2$. Denote $\www{\uuuu{f}}:=(f_1,f_2,f_3)$. \index{$f_1,\ f_2,\ f_3$} In all cases it will be easy to see that $\www{\uuuu{f}}$ is a $\Z$-basis of $H_\Z$. Therefore in all cases $H_\Z=H_{\Z,1}\oplus H_{\Z,2}$. (i) $S(\HH_{1,2})$: Recall $p_{ch,M}=\Phi_2^2\Phi_1$, \begin{eqnarray*} S=\begin{pmatrix}1&-2&2\\0&1&-2\\0&0&1\end{pmatrix},\ S^{-1}S^t=\begin{pmatrix}1&-2&2\\2&-3&2\\2&-2&1\end{pmatrix},\ f_3=\uuuu{e}\begin{pmatrix}1\\1\\1\end{pmatrix}. \end{eqnarray*} Define \begin{eqnarray*} f_1:=\uuuu{e}\begin{pmatrix}1\\1\\0\end{pmatrix},\ f_2:=\uuuu{e}\begin{pmatrix}0\\1\\1\end{pmatrix}. 
\end{eqnarray*} Then \begin{eqnarray*} L(\www{\uuuu{f}}^t,\www{\uuuu{f}})^t =\begin{pmatrix}0&-1&0\\1&0&0\\0&0&1\end{pmatrix},\ M\www{\uuuu{f}}=\www{\uuuu{f}} \begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix},\ H_{\Z,1}=\Z f_1\oplus \Z f_2. \end{eqnarray*} The claims on the groups $G_{\Z,1}^M,G_{\Z,1}^{(1)},G_{\Z,1}^{(0)}$ and $G_{\Z,1}$ follow from the shape of the matrices of $M,I^{(1)},I^{(0)}$ and $L$ with respect to the basis $\uuuu{f}$ of $H_{\Z,1}$. (ii) $S(-l,2,-l)$ with $l\geq 4$ even: Recall $p_{ch,M}=\Phi_2^2\Phi_1$, \begin{eqnarray*} S=\begin{pmatrix}1&-l&2\\0&1&-l\\0&0&1\end{pmatrix},\ S^{-1}S^t=\begin{pmatrix}l^2-3&-l^3+3l&l^2-2\\ l&-l^2+1&l\\2&-l&1\end{pmatrix},\ f_3=\uuuu{e} \begin{pmatrix}\frac{l}{2}\\1\\ \frac{l}{2}\end{pmatrix}. \end{eqnarray*} Define \begin{eqnarray*} f_1:=\uuuu{e}\begin{pmatrix}1\\0\\-1\end{pmatrix},\ f_2:=\uuuu{e} \begin{pmatrix}\frac{l^2-2}{2}\\ \frac{l}{2}\\0\end{pmatrix}. \end{eqnarray*} Then \begin{eqnarray*} L(\www{\uuuu{f}}^t,\www{\uuuu{f}})^t =\begin{pmatrix}0&-1&0\\1&1-\frac{l^2}{4}&0\\0&0&1\end{pmatrix}, \ M\www{\uuuu{f}}=\www{\uuuu{f}} \begin{pmatrix}-1&2-\frac{l^2}{2}&0\\0&-1&0\\0&0&1\end{pmatrix},\\ H_{\Z,1}=\Z f_1\oplus \Z f_2,\ H_\Z=H_{\Z,1}\oplus H_{\Z,2}, \end{eqnarray*} $M_1=M|_{H_{\Z,1}}$ and $M_1^{root}$ have each a $2\times 2$ Jordan block, $M_1^{root}$ is cyclic and $(M_1^{root})^{l^2/2-2}=-M_1$. Lemma \ref{t5.6} shows \begin{eqnarray*} G_{\Z,1}^M=G_{\Z,1}^{(0)}=G_{\Z,1}^{(1)}=G_{\Z,1} =\{\pm (M_1^{root})^m\,|\, m\in\Z\}. 
\end{eqnarray*} (b) Recall \begin{eqnarray*} \begin{array}{ccc} & S(\P^2) & S(A_3) \\ S^{-1}S^t & \begin{pmatrix}10&-15&6\\6&-8&3\\3&-3&1 \end{pmatrix} & \begin{pmatrix}0&0&1\\-1&0&1\\0&-1&1 \end{pmatrix} \\ p_{ch,M} & \Phi_1^3 & \Phi_4\Phi_1 \\ \hline \hline & S(\widehat{A}_2) & S(-l,2,-l)\textup{ with }l\geq 3\textup{ odd}\\ S^{-1}S^t & \begin{pmatrix}-2&-1&2\\-2&0&1\\-1&-1&1 \end{pmatrix} & \begin{pmatrix}l^2-3&-l^3+3l&l^2-2\\ l&-l^2+1&l\\2&-l&1 \end{pmatrix} \\ p_{ch,M} & \Phi_2^2\Phi_1 & \Phi_2^2\Phi_1 \end{array} \end{eqnarray*} The case $S(-l,2,-l)$ with $l\geq 3$ odd will be treated separately below. Theorem \ref{t3.26} (c) applies in the case $S(\P^2)$ with $k=1$ and in the case $S(\whh{A}_2)$ with $k=0$. It shows in these cases $(M^{root})^3=\varepsilon M$. By Theorem \ref{t3.26} (c) the matrices $M^{root,mat}$ are as claimed in the cases $S(\P^2)$ and $S(\whh{A}_2)$. In the case $A_3$ by definition $M^{root}=M$. In all three cases $S(\P^2)$, $S(A_3)$ and $S(\whh{A}_2)$ $M^{root}$ is cyclic with cyclic generator $e_1$. In the cases $S(\P^2)$ and $S(A_3)$ $p_{ch,M^{root}}=p_{ch,M}$. In the case $S(\whh{A}_2)$ $p_{ch,M^{root}}=\phi_1^2\phi_2$ and $p_{ch,M}=\phi_2^2\phi_1$. Lemma \ref{t5.2} (a) shows in all three cases \begin{eqnarray*} G_\Z^M&=& \{p(M^{root})\,|\, p(t)=\sum_{i=0}^2p_it^i\in \Z[t],\\ &&p(\kappa)\in(\Z[\kappa])^* \textup{ for each eigenvalue }\kappa\textup{ of }M^{root}\}. \end{eqnarray*} (i) $S(\P^2)$: $M^{root}-\id$ is nilpotent with $(M^{root}-\id)^2\neq 0$, $(M^{root}-\id)^3=0$. An element of $\Z[M^{root}]$ can be written in the form \begin{eqnarray*} q_0\id+q_1(M^{root}-\id)+q_2(M^{root}-\id)^2\textup{ with } q_0,q_1,q_2\in\Z. \end{eqnarray*} It is in $\Aut(H_\Z)$ if and only if $q_0\in\{\pm 1\}$. Then it can be written as \begin{eqnarray*} q_0(M^{root})^{q_0q_1}(\id + \www{q}_2(M^{root}-\id)^2) \textup{ for some }\www{q}_2\in\Z. \end{eqnarray*} Therefore $G_\Z^M$ is as claimed. 
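As a cross-check, the relations $(M^{root})^m=\varepsilon M$ can be verified numerically from the matrices $M^{root,mat}$ and $S^{-1}S^t$ printed above; for $S(-l,2,-l)$ we substitute $l=3$, where $m=l^2-4=5$ and the matrix $\frac{1}{2}(\dots)$ becomes integral. A sketch:

```python
def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def power(A, m):
    R = [[int(i == j) for j in range(3)] for i in range(3)]
    for _ in range(m):
        R = mul(R, A)
    return R

# tuples (M^{root,mat}, m, eps, S^{-1}S^t), copied from the text
cases = [
    ([[3, -3, 1], [1, 0, 0], [0, 1, 0]], 3, 1,            # S(P^2)
     [[10, -15, 6], [6, -8, 3], [3, -3, 1]]),
    ([[0, 0, 1], [-1, 0, 1], [0, -1, 1]], 1, 1,           # S(A_3)
     [[0, 0, 1], [-1, 0, 1], [0, -1, 1]]),
    ([[1, 1, -1], [1, 0, 0], [0, 1, 0]], 3, -1,           # S(A_2 hat)
     [[-2, -1, 2], [-2, 0, 1], [-1, -1, 1]]),
    ([[-4, 12, -5], [-3, 8, -3], [-4, 9, -3]], 5, -1,     # S(-3,2,-3)
     [[6, -18, 7], [3, -8, 3], [2, -3, 1]]),
]
for Mroot, m, eps, M in cases:
    assert power(Mroot, m) == [[eps*M[i][j] for j in range(3)]
                               for i in range(3)]
```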
Because $(M^{root}-\id)^3=0$, $(M^{root}-\id)^2(H_\Z)\subset\ker (M^{root}-\id) =\Rad I^{(1)}$. Therefore $\id+\www{q}_2(M^{root}-\id)^2$ and thus any element of $G_\Z^M$ respects $I^{(1)}$, so $G_\Z^M=G_\Z^{(1)}$. On the other hand, one easily checks that $\id+\www{q}_2(M^{root}-\id)^2$ respects $I^{(0)}$ only if $\www{q}_2=0$. Therefore \begin{eqnarray*} G_\Z=G_\Z^{(0)}=\{\pm (M^{root})^m\,|\, m\in\Z\} \subsetneqq G_\Z^{(1)}=G_\Z^M. \end{eqnarray*} (ii) $S(A_3)$: For $p(t)=\sum_{i=0}^2p_it^i\in\Z[t]$ write $\mu_j:=p(\lambda_j)$ for $j\in\{1,2,3\}$ for the eigenvalues of the element $p(M)\in\Z[M]=\End(H_\Z,M)$ where $\lambda_1=i,\lambda_2=-i,\lambda_3=1$ are the eigenvalues of $M$. Because of $(\Z[i])^*=\{\pm 1,\pm i\}$, one can multiply a given element of $G_\Z^M$ with a suitable power of $M$ and obtain an element with $\mu_1=\mu_2=1$. Therefore \begin{eqnarray*} G_\Z^M=\{M^{m_1}(\id+m_2\Phi_4(M))&|& m_1\in\{0,1,2,3\}, m_2\in\Z, \\ && 1+m_2\Phi_4(1)\in\{\pm 1\}\}. \end{eqnarray*} This forces $m_2\in\{0,-1\}$. The case $m_2=-1$ gives $\id+(-1)(M^2+\id)=-M^2$. Therefore \begin{eqnarray*} \{\pm M^m\,|\, m\in\{0,1,2,3\}\} =\{\pm M^m\,|\, m\in\Z\} =G_\Z^M=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z. \end{eqnarray*} (iii) $S(\widehat{A}_2)$: Write $\uuuu{f}=(f_1,f_2):=\uuuu{e} \begin{pmatrix}1&2\\1&1\\1&0\end{pmatrix}$. Then \index{$f_1,\ f_2,\ f_3$} \begin{eqnarray*} M^{root}\uuuu{f}&=&\uuuu{f} \begin{pmatrix}1&1\\0&1\end{pmatrix},\\ H_{\Z,1}&=&\ker\Phi_2^2(M)=\ker\Phi_1^2(M^{root}) =\Z f_1\oplus \Z f_2. \end{eqnarray*} Lemma \ref{t5.6} implies \begin{eqnarray*} \{g|_{H_{\Z,1}}\,|\, g\in G_\Z^M\} =\{\pm (M^{root}|_{H_{\Z,1}})^m\,|\, m\in\Z\},\\ G_\Z^M=\{\pm (M^{root})^m\,|\, m\in\Z\}\times \{g\in G_\Z^M\,|\, g|_{H_{\Z,1}}=\id\}. \end{eqnarray*} But $p(t)=1+q\Phi_1^2(t)$ with $q\in\Z$ satisfies $p(-1)=1+q\cdot 2^2\in\{\pm 1\}$ only if $q=0$. Therefore \begin{eqnarray*} \{\pm (M^{root})^m\,|\, m\in\Z\} =G_\Z^M=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z. 
\end{eqnarray*} (iv) $S(-l,2,-l)$ with $l\geq 3$ odd: Recall $f_3$ and define $(f_1,\www{f}_2)$ and $\uuuu{d}=(d_1,d_2,d_3)$ with \begin{eqnarray*} (f_1,\www{f}_2,f_3)=\uuuu{e}\begin{pmatrix} 1&l^2-2&l\\0&l&2\\-1&0&l\end{pmatrix},\quad \uuuu{d}=\uuuu{e}\begin{pmatrix} \frac{l^2-l-2}{2}&\frac{l^2-1}{2}&\frac{l^2-l}{2}\\ \frac{l-1}{2}&\frac{l+1}{2}&\frac{l-1}{2}\\ 0&\frac{l-1}{2}&-1\end{pmatrix}. \end{eqnarray*} The matrix which expresses $(f_1,\www{f}_2,f_3)$ with $\uuuu{e}$ has determinant 4, the matrix which expresses $\uuuu{d}$ with $\uuuu{e}$ has determinant 1. Therefore $\Z f_1\oplus\Z \www{f}_2\oplus \Z f_3$ is a sublattice of index 4 in $H_\Z$, and $\uuuu{d}$ is a $\Z$-basis of $H_\Z$. One calculates \begin{eqnarray*} M(f_1,\www{f}_2,f_3)&=&(f_1,\www{f}_2,f_3)\begin{pmatrix} -1&-l^2+4&0\\0&-1&0\\0&0&1\end{pmatrix}. \end{eqnarray*} Especially \begin{eqnarray*} \Z f_1\oplus \Z \www{f}_2=\ker(M^2+\id)^2\supset\Z f_1 =\ker(M+\id)=\Rad I^{(0)}. \end{eqnarray*} Observe \begin{eqnarray} \delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1\in (\Br_3\ltimes\{\pm 1\}^3)_S.\label{5.13} \end{eqnarray} Define \begin{eqnarray}\label{5.14} \www{M}:=Z(\delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1)\in G_\Z. \end{eqnarray} Then $M^{root}=(-M)\circ \www{M}^{(5-l^2)/2}\in G_\Z$. One calculates \begin{eqnarray} \www{M}\uuuu{e}&=&\uuuu{e}+f_1(-1,l,-1),\label{5.15}\\ \www{M}(f_1,\www{f}_2,f_3)&=&(f_1,\www{f}_2,f_3)\begin{pmatrix} 1&2&0\\0&1&0\\0&0&1\end{pmatrix},\label{5.16}\\ M^{root}(f_1,\www{f}_2,f_3)&=&(f_1,\www{f}_2,f_3)\begin{pmatrix} 1&1&0\\0&1&0\\0&0&-1\end{pmatrix},\nonumber\\ (M^{root})^{l^2-4}&=&-M,\nonumber\\ (M^{root})^2&=& \www{M}.\nonumber \end{eqnarray} Finally, one calculates \begin{eqnarray*} M^{root}\uuuu{d}=\uuuu{d}\begin{pmatrix} 0&0&-1\\1&0&1\\0&1&1\end{pmatrix}. \end{eqnarray*} Therefore $M^{root}$ is cyclic with cyclic generator $d_1$ and regular. 
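The determinants and matrix identities just used can be checked by machine for small odd $l$. A sketch, with all matrices taken from the formulas above (columns are coordinates with respect to $\uuuu{e}$):

```python
def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
            - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
            + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def check_odd_l(l):
    F = [[1, l*l - 2, l], [0, l, 2], [-1, 0, l]]           # (f_1, f_2~, f_3)
    D = [[(l*l - l - 2)//2, (l*l - 1)//2, (l*l - l)//2],   # basis d
         [(l - 1)//2, (l + 1)//2, (l - 1)//2],
         [0, (l - 1)//2, -1]]
    M = [[l*l - 3, -l**3 + 3*l, l*l - 2],                  # S^{-1} S^t
         [l, -l*l + 1, l],
         [2, -l, 1]]
    Mroot = [[(1 - l*l)//2, (l**3 - l)//2, (-1 - l*l)//2],
             [-l, l*l - 1, -l],
             [(1 - l*l)//2, (l**3 - 3*l)//2, (3 - l*l)//2]]
    assert det3(F) == 4 and det3(D) == 1        # index 4 sublattice; d a basis
    # M(f_1, f_2~, f_3) = (f_1, f_2~, f_3) [[-1,-l^2+4,0],[0,-1,0],[0,0,1]]
    assert mul(M, F) == mul(F, [[-1, -l*l + 4, 0], [0, -1, 0], [0, 0, 1]])
    # M^{root} d = d [[0,0,-1],[1,0,1],[0,1,1]]: cyclic with generator d_1
    assert mul(Mroot, D) == mul(D, [[0, 0, -1], [1, 0, 1], [0, 1, 1]])

for l in (3, 5, 7):
    check_odd_l(l)
```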
Lemma \ref{t5.2} (a) shows \begin{eqnarray*} G_\Z^M=\{p(M^{root})\,|\, p(t)=\sum_{i=0}^2p_it^i\in\Z[t], p(1),p(-1)\in\{\pm 1\}\}. \end{eqnarray*} As in the case $S(\widehat{A}_2)$ one finds with Lemma \ref{t5.6} \begin{eqnarray*} \{g|_{H_{\Z,1}}\,|\, g\in G_\Z^M\} =\{\pm (M^{root}|_{H_{\Z,1}})^m\,|\, m\in\Z\},\\ G_\Z^M=\{\pm (M^{root})^m\,|\, m\in\Z\}\times \{g\in G_\Z^M\,|\, g|_{H_{\Z,1}}=\id\}. \end{eqnarray*} But $p(t)=1+q\Phi_1^2(t)$ with $q\in\Z$ satisfies $p(-1)=1+q\cdot 2^2\in\{\pm 1\}$ only if $q=0$. Therefore \begin{eqnarray*} \{\pm (M^{root})^m\,|\, m\in\Z\} =G_\Z^M=G_\Z^{(0)}=G_\Z^{(1)}=G_\Z. \end{eqnarray*} (c) Of course $Z(\delta_1\delta_2\delta_3)=-\id$. In the cases in part (b) $G_\Z=\{\pm (M^{root})^m\,|\, m\in\Z\}$. The definitions \eqref{5.9}--\eqref{5.12} show that $Z$ is surjective. The case $\HH_{1,2}$: $G_\Z\cong SL_2(\Z)\times\{\pm 1\}$. The group $G_\Z$ is generated by $-\id$, $h_1$ and $h_2$ with \begin{eqnarray*} h_1:=(\www{\uuuu{f}}\mapsto \www{\uuuu{f}} \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix}) =(\uuuu{e}\mapsto \uuuu{e} \begin{pmatrix}2&-1&0\\1&0&0\\0&0&1\end{pmatrix}) =Z(\delta_2\sigma_1),\\ h_2:=(\www{\uuuu{f}}\mapsto \www{\uuuu{f}} \begin{pmatrix}1&0&0\\1&1&0\\0&0&1\end{pmatrix}) =(\uuuu{e}\mapsto \uuuu{e} \begin{pmatrix}1&0&0\\0&2&-1\\0&1&0\end{pmatrix}) =Z(\delta_3\sigma_2). \end{eqnarray*} The cases $S(-l,2,-l)$ with $l\geq 4$ even: \eqref{5.13}--\eqref{5.16} hold also for even $l$. With respect to the $\Z$-basis $\www{\uuuu{f}}=(f_1,f_2,f_3) =(f_1,\frac{1}{2}\www{f}_2,f_3)$ of $H_\Z$ \begin{eqnarray*} \www{M}\www{\uuuu{f}}=\www{\uuuu{f}} \begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}. \end{eqnarray*} One sees \begin{eqnarray*} G_\Z&=&\langle -\id,\www{M},Q\rangle \end{eqnarray*} with \begin{eqnarray*} Q&=&(\www{\uuuu{f}}\mapsto \www{\uuuu{f}} \begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix}) = M\circ \www{M}^{2-l^2/2}\\ &=& Z(\sigma^{mon})\circ Z(\delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1)^{2-l^2/2}. 
\end{eqnarray*} \hfill$\Box$ \section{Special rank 3 cases with eigenvalues not all in $S^1$} \label{s5.6} This section starts with a general lemma for all rank 3 cases with eigenvalues not all in $S^1$. It gives coarse information on the four groups $G_\Z^M,G_\Z^{(0)},G_\Z^{(1)}$ and $G_\Z$. Afterwards Theorem \ref{t5.16} determines these groups precisely for three series of cases and one exceptional case. Theorem \ref{t5.18} in section \ref{s5.7} will treat the other irreducible cases with eigenvalues not all in $S^1$. \begin{lemma}\label{t5.15} Fix $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})<0$ or $r(\uuuu{x})>4$. Then \begin{eqnarray*} G_\Z^M\stackrel{(2:1)\textup{ or }(1:1)} {\supset} G_\Z=G_\Z^{(0)}=G_\Z^{(1)} \stackrel{(\textup{finite}:1)}{\supset} \{\pm M^l\,|\, l\in\Z\}. \end{eqnarray*} \end{lemma} {\bf Proof:} For $p(t)=\sum_{i=0}^2p_it^i\in\Q[t]$ write $\mu_j:=p(\lambda_j)$ for $j\in\{1,2,3\}$, where $\lambda_1,\lambda_2,\lambda_3$ denote the eigenvalues of $M$ with $\lambda_3=1$. Then $\mu_1,\mu_2$ and $\mu_3$ are the eigenvalues of $p(M)\in\End(H_\Q)$. Because of $r(\uuuu{x})\in \Z-\{0,1,2,3,4\}$ and Lemma \ref{t5.7}, the monodromy is semisimple and regular. Lemma \ref{t5.1} (c) (i) and (ii) apply and give \begin{eqnarray}\label{5.17} G_\Z^M&=&\{p(M)\,|\, p(t)=\sum_{i=0}^2p_it^i\in\Q[t],\\ &&p(M)\in\End(H_\Z), \mu_1\mu_2\in\{\pm 1\}, \mu_3\in\{\pm 1\}\}\nonumber\\ \supset G_\Z &=& G_\Z^{(0)}=G_\Z^{(1)}=\{p(M)\in G_\Z^M\,|\, \mu_1\mu_2=1\}.\label{5.18} \end{eqnarray} Especially $h\in G_\Z^M\Rightarrow h^2\in G_\Z$. Also, $\End(H_\Q,M)=\Q[M]$, and the map \begin{eqnarray*} \End(H_\Q,M)=\Q[M]\to \Q[\lambda_1]\times\Q,\quad p(M)\mapsto (\mu_1,\mu_3) \end{eqnarray*} is an isomorphism of $\Q$-algebras. This is a special case of the Chinese remainder theorem. $Q$ is mapped to $(-1,1)$. Observe $(-\id)\in G_\Z\subset G_\Z^M$. Therefore the subgroup $\{h\in G_\Z\,|\, \mu_3=1\}$ has index 2 in $G_\Z$, and the subgroup $\{h\in G_\Z^M\,|\, \mu_3=1\}$ has index 2 in $G_\Z^M$.
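As an aside, the semisimplicity and regularity can be tested at the level of the characteristic polynomial $p_{ch,M}=(t-1)(t^2-(2-r)t+1)$ (this factorization follows from the decomposition in Definition \ref{t5.9}): $M$ is regular and semisimple if and only if this cubic is squarefree, i.e. coprime to its derivative. A sketch, testing squarefreeness via the Euclidean algorithm over $\Q$:

```python
from fractions import Fraction

def pmod(a, b):
    """Remainder of polynomial division; coefficients listed high to low."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)                        # leading coefficient is now zero
        while a and a[0] == 0:
            a.pop(0)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def regular_semisimple(r):
    # p_ch,M(t) = (t-1)(t^2-(2-r)t+1) = t^3 - (3-r)t^2 + (3-r)t - 1
    p = [1, r - 3, 3 - r, -1]
    dp = [3, 2*(r - 3), 3 - r]
    return len(pgcd(p, dp)) == 1        # gcd is a nonzero constant

# r < 0 or r > 4: squarefree, so M is regular and semisimple
assert all(regular_semisimple(r) for r in (-50, -2, 5, 8, 25))
# r = 0 and r = 4 give a repeated eigenvalue 1 resp. -1
assert not regular_semisimple(0) and not regular_semisimple(4)
```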
The map \begin{eqnarray} \{h\in G_\Z^M\,|\, \mu_3=1\}\to \OO_{\Q[\lambda_1]}^*,\quad h\mapsto \mu_1,\label{5.19} \end{eqnarray} is injective. The element $-1\in \OO_{\Q[\lambda_1]}^*$ is in the image of the map in \eqref{5.19} if and only if $Q\in G_\Z^M$, and then it is the image of $Q$. By Dirichlet's unit theorem \cite[Ch. 2 4.3 Theorem 5]{BSh73} the group $\OO_{\Q[\lambda_1]}^*$ is isomorphic to the group $\{\pm 1\}\times \Z$. Therefore $\{h\in G_\Z^M\,|\, \mu_3=1\}$ is isomorphic to $\{\pm 1\}\times \Z$ if $Q\in G_\Z^M$ and to $\Z$ if $Q\notin G_\Z^M$. If $Q\in G_\Z^M$, then $Q\in G_\Z$ because of $Q\in \Aut(H_\Q,L)$. This and the implication $h\in G_\Z^M\Rightarrow h^2\in G_\Z$ show \begin{eqnarray*} [G_\Z^M:G_\Z] =[\{h\in G_\Z^M\,|\, \mu_3=1\}:\{h\in G_\Z\,|\, \mu_3=1\}] \in\{1,2\}. \end{eqnarray*} The group $\{\pm M^l\,|\, l\in\Z\}\subset G_\Z \subset G_\Z^M$ is isomorphic to $\{\pm 1\}\times\Z$, so it has a free part of rank 1, just as $G_\Z$ and $G_\Z^M$. Therefore $[G_\Z:\{\pm M^l\,|\, l\in\Z\}]<\infty$. \hfill$\Box$ \bigskip Later we will see precisely how much bigger $G_\Z^M$ and $G_\Z$ are than $\{\pm M^l\,|\, l\in\Z\}$. In the majority of the cases they are not bigger, i.e. $G_\Z^M=G_\Z=\{\pm M^l\,|\, l\in\Z\}$. The following theorem determines the groups $G_\Z$ and $G_\Z^M\supset G_\Z$ for three series of triples and the exceptional case $(3,3,4)$. In Theorem \ref{t5.18} in section \ref{s5.7} we will see that the $\Br_3\ltimes\{\pm 1\}^3$ orbits of these three series and of the triple $(3,3,4)$ contain all triples $\uuuu{x}$ with $r(\uuuu{x})\in\Z_{<0}\cup\Z_{>4}$, $(H_\Z,L,\uuuu{e})$ irreducible and {\it not} $G_\Z^M=G_\Z=\{\pm M^l\,|\, l\in\Z\}$. \begin{theorem}\label{t5.16} For each $\uuuu{x}\in\Z^3$ below fix also the associated triple $(H_\Z,L,\uuuu{e})$. (a) Consider $\uuuu{x}=(x,x,x)$ with $x\in\Z-\{-1,0,1,2,3\}$ and $S=S(\uuuu{x})$.
Then $\delta_3\sigma_2\sigma_1\in (\Br_3\ltimes\{\pm 1\}^3)_S$ and by Theorem \ref{t3.26} (c) (with $k=0$) \begin{eqnarray*} M^{root,3}:=Z(\delta_3\sigma_2\sigma_1)\in G_\Z \quad\textup{with}\quad (M^{root,3})^3=-M. \end{eqnarray*} $M^{root,3}$ is cyclic with $M^{root,3}(f_3)=-f_3$. In the case $x=4$ define also \begin{eqnarray*} M^{root,6}:=-(M^{root,3})^2-2M^{root,3}. \end{eqnarray*} Then \begin{eqnarray*} G_\Z&=&G_\Z^M=\{\pm (M^{root,3})^l\,|\, l\in\Z\}\quad\textup{if } x\notin\{4,5\},\\ G_\Z&=&G_\Z^M=\{\id,Q\}\times \{\pm (M^{root,3})^l\,|\, l\in\Z\}\quad\textup{if }x=5,\\ G_\Z&=& G_\Z^{(0)}=G_\Z^{(1)}= \{\id,Q\}\times \{\pm (M^{root,3})^l\,|\, l\in\Z\}\\ &\stackrel{1:2}{\subset}& G_\Z^M=\{\id,Q\}\times \{\pm (M^{root,6})^l\,|\, l\in\Z\} \quad\textup{ if }x=4. \end{eqnarray*} If $x=4$ then $(M^{root,6})^2=-M^{root,3}$. (b) Consider $\uuuu{x}=(2y,2y,2y^2)$ with $y\in\Z_{\geq 2}$ and $S=S(\uuuu{x})$. Then $\sigma_2\sigma_1^2\in (\Br_3\ltimes\{\pm 1\}^3)_S$. By Lemma \ref{t3.25} (b) \begin{eqnarray*} M^{root,2}:=Z(\sigma_2\sigma_1^2)\in G_\Z. \end{eqnarray*} It satisfies $(M^{root,2})^2=M$ and $M^{root,2}(f_3)=-f_3$. In the case $y=2$ define also \begin{eqnarray*} M^{root,4}:=-\frac{1}{4}M-2M^{root,2}-\frac{3}{4}\id. \end{eqnarray*} Then \begin{eqnarray*} G_\Z&=&G_\Z^M=\{\pm (M^{root,2})^l\,|\, l\in\Z\} \quad\textup{if } y\geq 3,\\ G_\Z&=&G_\Z^{(0)}=G_\Z^{(1)}= \{\id,Q\}\times \{\pm (M^{root,2})^l\,|\, l\in\Z\}\\ &\stackrel{1:2}{\subset}& G_\Z^M=\{\id,Q\}\times \{\pm (M^{root,4})^l\,|\, l\in\Z\} \quad\textup{ if }y=2. \end{eqnarray*} If $y=2$ then $(M^{root,4})^2=-M^{root,2}$. (c) Consider $\uuuu{x}=(x,x,0)$ with $x\in\Z_{\geq 2}$ and $S=S(\uuuu{x})$. In the case $x=2$ define also \begin{eqnarray*} M^{root,2}:=\frac{1}{2}M+\frac{1}{2}\id. 
\end{eqnarray*} Then \begin{eqnarray*} G_\Z&=&G_\Z^M=\{\id,Q\}\times\{\pm M^l\,|\, l\in\Z\}\quad\textup{if } x\geq 3,\\ G_\Z&=&G_\Z^{(0)}=G_\Z^{(1)}=\{\id,Q\}\times \{\pm M^l\,|\, l\in\Z\}\\ &\stackrel{1:2}{\subset}& G_\Z^M=\{\id,Q\}\times \{\pm (M^{root,2})^l\,|\, l\in\Z\} \quad\textup{ if }x=2. \end{eqnarray*} If $x=2$ then $(M^{root,2})^2=QM$. (d) Consider $\uuuu{x}=(3,3,4)$. Then \begin{eqnarray*} G_\Z=G_\Z^M=\{\id,Q\}\times\{\pm M^l\,|\, l\in\Z\}. \end{eqnarray*} (e) In all cases in (a)--(d) except for the four cases $$\uuuu{x}\in\{(4,4,4),(5,5,5),(4,4,8),(3,3,4)\}$$ the map $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ is surjective, so $G_\Z=G_\Z^{\BB}$. In the four exceptional cases $Q\in G_\Z-G_\Z^{\BB}$. \end{theorem} {\bf Proof:} In all cases in this theorem $r(\uuuu{x})<0$ or $r(\uuuu{x})>4$, so Lemma \ref{t5.15} applies, so $G_\Z=G_\Z^{(0)}=G_\Z^{(1)}$. (a) $S$ is as in Theorem \ref{t3.26} (c) with $k=0$. Therefore $\delta_3\sigma_2\sigma_1$ is in the stabilizer of $S$ and $M^{root,3}$ is in $G_\Z$, it is cyclic, and it satisfies $(M^{root,3})^3=-M$. Explicitly (by Theorem \ref{t3.26} (a)) $$ M^{root,3}(\uuuu{e})=\uuuu{e}\cdot \begin{pmatrix}-x&-x&-1\\1&0&0\\0&1&0\end{pmatrix}.$$ One sees $M^{root,3}(f_3)=-f_3$ where $f_3=-e_1+e_2-e_3$, so its third eigenvalue is $\kappa_3=-1$. The other two eigenvalues $\kappa_1$ and $\kappa_2$ are determined by the trace $-x=\kappa_1+\kappa_2-1$ and the determinant $-1=\kappa_1\kappa_2(-1)$ of $M^{root,3}$. The eigenvalues are $$\kappa_{1/2}=\frac{1-x}{2}\pm \frac{1}{2}\sqrt{x^2-2x-3},\quad \kappa_3=-1.$$ Because $M$ and $M^{root,3}$ are regular, Lemma \ref{t5.1} applies. It gives an isomorphism of $\Q$-algebras \begin{eqnarray*} \End(H_\Q,M)=\End(H_\Q,M^{root,3})&&\\ =\{p(M^{root,3})\,|\, p(t)=\sum_{i=0}^2p_it^i\in\Q[t]\} &\to& \Q[\kappa_1]\times \Q\\ p(M^{root,3})&\mapsto& (p(\kappa_1),p(-1)). \end{eqnarray*} The image of $-\id$ is $(-1,-1)$, the image of $Q$ is $(-1,1)$, the image of $M^{root,3}$ is $(\kappa_1,-1)$.
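As an independent sanity check (not part of the proof), the relations $(M^{root,3})^3=-M$ and $M^{root,3}(f_3)=-f_3$ can be verified numerically; the following sketch does so for several values of $x$, using the explicit formula for $M^{mat}$ which is recalled in the proof of Lemma \ref{t5.17} (b):

```python
# Sanity check: (M^{root,3})^3 = -M and M^{root,3}(f_3) = -f_3 for the
# triples (x,x,x).  M_mat is the explicit matrix of the monodromy for a
# general triple (x_1,x_2,x_3), as recalled in the proof of Lemma 5.17 (b).

def mul(A, B):
    """Product of two 3x3 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def M_mat(x1, x2, x3):
    """Matrix of the monodromy M in the basis e_1, e_2, e_3."""
    return [[1 - x1**2 - x2**2 + x1*x2*x3, -x1 - x2*x3 + x1*x3**2, x1*x3 - x2],
            [x1 - x2*x3,                   1 - x3**2,              -x3],
            [x2,                           x3,                     1]]

for x in [-3, -2, 4, 5, 6, 10]:
    A = [[-x, -x, -1], [1, 0, 0], [0, 1, 0]]          # matrix of M^{root,3}
    M = M_mat(x, x, x)
    assert mul(mul(A, A), A) == [[-M[i][j] for j in range(3)] for i in range(3)]
    f3 = [-1, 1, -1]                                  # f_3 = -e_1 + e_2 - e_3
    assert [sum(A[i][k] * f3[k] for k in range(3)) for i in range(3)] == [1, -1, 1]
```

The identity holds symbolically in $x$, so checking a handful of integer values of the cubic polynomial entries is already conclusive.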
The image of $G_\Z^M$ is a priori a subgroup of $\OO_{\Q[\kappa_1]}^*\times\{\pm 1\}$. We have to find out which one. By Theorem \ref{t5.11} $Q\in G_\Z^M$ (and then also $Q\in G_\Z$) only for $x\in\{4,5\}$. Lemma \ref{t5.2} applies because $M^{root,3}$ is cyclic. It shows \begin{eqnarray*} G_\Z^M&=& \{p(M^{root,3})\,|\, p(t)=p_2t^2+p_1t+p_0\in\Z[t] \textup{ with }\\ && (p(\kappa_1),p(-1))\in\Z[\kappa_1]^*\times\{\pm 1\}\},\\ G_\Z&=& \{p(M^{root,3})\,|\, p(t)=p_2t^2+p_1t+p_0\in\Z[t] \textup{ with }\\ && (p(\kappa_1),p(-1))\in\Z[\kappa_1]^*\times\{\pm 1\} \textup{ and }p(\kappa_1)p(\kappa_2)=1\}. \end{eqnarray*} Now Lemma \ref{tc.1} (a) is useful. It says \begin{eqnarray*} \Z[\kappa_1]^*=\left\{\begin{array}{ll} \{\pm \kappa_1^l\,|\, l\in\Z\}&\textup{ for }x\notin\{4,-2\},\\ \{\pm (\kappa_1-1)^l\,|\, l\in\Z\}&\textup{ for }x=-2,\\ \{\pm (\kappa_1+1)^l\,|\, l\in\Z\}&\textup{ for }x=4. \end{array}\right. \end{eqnarray*} with \begin{eqnarray*} (\kappa_1-1)(\kappa_2-1)=-1\quad\textup{and}\quad (\kappa_1-1)^2=\kappa_1\quad\textup{for}\quad x=-2,\\ (\kappa_1+1)(\kappa_2+1)=-1\quad\textup{and}\quad (\kappa_1+1)^2=-\kappa_1\quad\textup{for}\quad x=4. \end{eqnarray*} For $x\notin\{4,5,-2\}$ the facts $-\id,M^{root,3}\in G_\Z$ and $Q\notin G_\Z$ show that the image of $G_\Z$ and $G_\Z^M$ in $\Z[\kappa_1]^*\times\{\pm 1\}=\{\pm \kappa_1^l\,|\, l\in\Z\} \times\{\pm 1\}$ has index 2. Therefore then $G_\Z=G_\Z^M$ is as claimed. For $x=5$ the facts $-\id,M^{root,3},Q\in G_\Z$ show that the image of $G_\Z$ and $G_\Z^M$ is $\Z[\kappa_1]^*\times\{\pm 1\} = \{\pm \kappa_1^l\,|\, l\in\Z\} \times\{\pm 1\}$. Therefore then $G_\Z=G_\Z^M$ is as claimed. Consider the case $x=-2$. If $G_\Z^M$ contains an automorphism $p(M^{root,3})$ which corresponds to a pair $(\kappa_1-1,\pm 1)$, then $p(\kappa_1)=\kappa_1-1$ means $p(t)=t-1+l_2(t^2-3t+1)$ for some $l_2\in\Z$. But then $p(-1)=-2+l_2\cdot 5\notin\{\pm 1\}$. So $G_\Z^M$ does not contain such an automorphism.
$G_\Z^M$ and $G_\Z$ are as claimed, because $Q\notin G_\Z^M$. Consider the case $x=4$. The polynomial $p(t)=-t^2-2t$ satisfies $p(-1)=1$, $p(\kappa_1)=-\kappa_1^2-2\kappa_1=-(-3\kappa_1-1)-2\kappa_1 =\kappa_1+1$. Therefore $M^{root,6}=p(M^{root,3})\in G_\Z^M$. Because $\kappa_1+1$ has norm $-1$, $M^{root,6}$ is not in $G_\Z$. The groups $G_\Z^M$ and $G_\Z$ are as claimed. (b) Compare the sections \ref{s4.1} and \ref{s3.2} for the actions of $\Br_3\ltimes\{\pm 1\}^3$ on $\Z^3$ and on $\BB^{dist}$. \begin{eqnarray*} &&\sigma_2\sigma_1^2(2y,2y,2y^2)\\ &=& \sigma_2\sigma_1(-2y,2y^2-2y\cdot 2y,2y) =\sigma_2\sigma_1(-2y,-2y^2,2y)\\ &=& \sigma_2(2y,2y-(-2y)(-2y^2),-2y^2) =\sigma_2(2y,2y-4y^3,-2y^2)\\ &=& ((2y-4y^3)-2y\cdot (-2y^2),2y,2y^2) =(2y,2y,2y^2), \end{eqnarray*} so $\sigma_2\sigma_1^2$ is in the stabilizer of $(2y,2y,2y^2)$, so $M^{root,2}:=Z(\sigma_2\sigma_1^2)\in G_\Z$. \begin{eqnarray*} && M^{root,2}(\uuuu{e})=\sigma_2\sigma_1^2(\uuuu{e})\\ &=& \sigma_2\sigma_1(s_{e_1}^{(0)}(e_2),e_1,e_3)\\ &=&\sigma_2\sigma_1(e_2-2ye_1,e_1,e_3)\\ &=& \sigma_2(s_{e_2-2ye_1}^{(0)}(e_1),e_2-2ye_1,e_3)\\ &=&\sigma_2(e_1+2y(e_2-2ye_1),e_2-2ye_1,e_3)\\ &=& ((1-4y^2)e_1+2ye_2,s_{e_2-2ye_1}^{(0)}(e_3),e_2-2ye_1)\\ &=&((1-4y^2)e_1+2ye_2,e_3+2y^2(e_2-2ye_1),e_2-2ye_1)\\ &=& \uuuu{e}\cdot \begin{pmatrix} 1-4y^2&-4y^3&-2y\\2y&2y^2&1\\0&1&0\end{pmatrix} =:\uuuu{e}\cdot M^{root,2,mat}. \end{eqnarray*} The map $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ in Lemma \ref{t3.25} is a group antihomomorphism. By Theorem \ref{t3.26} $Z(\sigma^{mon})=M$. Therefore \begin{eqnarray*} (M^{root,2})^2&=& Z(\sigma_2\sigma_1^2)Z(\sigma_2\sigma_1^2) =Z(\sigma_2\sigma_1(\sigma_1\sigma_2\sigma_1)\sigma_1)\\ &=& Z(\sigma_2\sigma_1(\sigma_2\sigma_1\sigma_2)\sigma_1) =Z((\sigma_2\sigma_1)^3)=Z(\sigma^{mon})=M. \end{eqnarray*} One sees $M^{root,2}(f_3)=-f_3$, where $f_3=-ye_1+e_2-e_3$, so its third eigenvalue is $\kappa_3=-1$. 
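Both facts just stated — $(M^{root,2,mat})^2=M^{mat}$ and $M^{root,2}(f_3)=-f_3$ — can be confirmed numerically; the following sketch checks them for several $y$, using the explicit formula for $M^{mat}$ recalled in the proof of Lemma \ref{t5.17} (b):

```python
# Sanity check for part (b): M^{root,2,mat} squares to M^mat for the
# triples (2y,2y,2y^2), and f_3 = -y e_1 + e_2 - e_3 is an eigenvector
# with eigenvalue -1.

def mul(A, B):
    """Product of two 3x3 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def M_mat(x1, x2, x3):
    """Matrix of the monodromy M in the basis e_1, e_2, e_3."""
    return [[1 - x1**2 - x2**2 + x1*x2*x3, -x1 - x2*x3 + x1*x3**2, x1*x3 - x2],
            [x1 - x2*x3,                   1 - x3**2,              -x3],
            [x2,                           x3,                     1]]

for y in range(2, 8):
    A = [[1 - 4*y**2, -4*y**3, -2*y], [2*y, 2*y**2, 1], [0, 1, 0]]
    assert mul(A, A) == M_mat(2*y, 2*y, 2*y**2)       # (M^{root,2})^2 = M
    f3 = [-y, 1, -1]                                  # f_3 = -y e_1 + e_2 - e_3
    assert [sum(A[i][k] * f3[k] for k in range(3)) for i in range(3)] == [y, -1, 1]
```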
The other two eigenvalues $\kappa_1$ and $\kappa_2$ are determined by the trace $1-2y^2=\kappa_1+\kappa_2-1$ and the product $\kappa_1\kappa_2=1$, which holds because of $M^{root,2}\in G_\Z$ (or because $\det M^{root,2}=-1$). The eigenvalues are $$\kappa_{1,2}=(1-y^2)\pm y\sqrt{y^2-2},\quad \kappa_3=-1.$$ Because $M$ and $M^{root,2}$ are regular, Lemma \ref{t5.1} applies. It gives an isomorphism of $\Q$-algebras \begin{eqnarray*} \End(H_\Q,M)=\End(H_\Q,M^{root,2})&&\\ =\{p(M^{root,2})\,|\, p(t)=\sum_{i=0}^2p_it^i\in\Q[t]\} &\to& \Q[\kappa_1]\times \Q\\ p(M^{root,2})&\mapsto& (p(\kappa_1),p(-1)). \end{eqnarray*} The image of $-\id$ is $(-1,-1)$, the image of $Q$ is $(-1,1)$, the image of $M^{root,2}$ is $(\kappa_1,-1)$. The image of $G_\Z^M$ is a priori a subgroup of $\OO_{\Q[\kappa_1]}^*\times\{\pm 1\}$. We have to find out which one. By Theorem \ref{t5.11} $Q\in G_\Z^M$ (and then also $Q\in G_\Z$) only for $y=2$. Consider the decomposition $H_\Q=H_{\Q,1}\oplus H_{\Q,2}$ as in Definition \ref{t5.9} and the primitive sublattices $H_{\Z,1}=H_{\Q,1}\cap H_\Z$ and $H_{\Z,2}=H_{\Q,2}\cap H_\Z=\Z f_3$ in $H_\Z$. The sublattice $H_{\Z,1}$ is the right $L$-orthogonal subspace $(\Z f_3)^\perp\subset H_\Z$ of $\Z f_3$, see \eqref{5.8}: \begin{eqnarray*} H_{\Z,1}&=&\{\uuuu{e}\cdot \uuuu{z}^t\,|\, \uuuu{z}\in \Z^3, 0=(-y,1-2y^2,-1)\begin{pmatrix}z_1\\z_2\\z_3\end{pmatrix}\}\\ &=& \langle f_1,f_2\rangle_\Z \quad\textup{with}\quad f_1=e_1-ye_3,\quad f_2=e_2+(1-2y^2)e_3. \end{eqnarray*} Write $\uuuu{f}=(f_1,f_2,f_3)=\uuuu{e}\cdot M(\uuuu{e},\uuuu{f})$, \begin{eqnarray*} M^{root,2}(\uuuu{e})=\uuuu{e}\cdot M^{root,2,mat},\quad M^{root,2}(\uuuu{f})=\uuuu{f}M^{root,2,mat,\uuuu{f}}. 
\end{eqnarray*} Then \begin{eqnarray*} M(\uuuu{e},\uuuu{f}) &=&\begin{pmatrix}1&0&-y\\0&1&1\\-y&1-2y^2&-1\end{pmatrix},\\ M(\uuuu{e},\uuuu{f})^{-1} &=&\frac{1}{y^2-2} \begin{pmatrix}2y^2-2&2y^3-y&y\\-y&-y^2-1&-1\\y&2y^2-1&1 \end{pmatrix}, \end{eqnarray*} \begin{eqnarray*} M^{root,2,mat,\uuuu{f}} =M(\uuuu{e},\uuuu{f})^{-1}M^{root,2,mat}M(\uuuu{e},\uuuu{f}) = \begin{pmatrix} -2y^2+1&-2y&0\\y&1&0\\0&0&-1\end{pmatrix}. \end{eqnarray*} Any element $h=p(M)\in G_\Z^M$ with $p(t)\in\Q[t]$ restricts to an automorphism of $H_{\Z,1}$ which commutes with $M^{root,2}|_{H_{\Z,1}}$, so its restriction to $H_{\Z,1}$ has the shape $h|_{H_{\Z,1}}=a\id+b M^{root,2}|_{H_{\Z,1}}$ with $a,b\in\Q$ with \begin{eqnarray*} a\begin{pmatrix}1&0\\0&1\end{pmatrix} +b\begin{pmatrix}-2y^2+1&-2y\\y&1\end{pmatrix}\in GL_2(\Z). \end{eqnarray*} This implies $by\in\Z$ and $a+b\in\Z$. The eigenvalue $p(\kappa_1)$ is \begin{eqnarray*} &&a+b\kappa_1=a+b(1-y^2+y\sqrt{y^2-2})\\ &=&(a+b)+by(-y+\sqrt{y^2-2})\in\Z[\sqrt{y^2-2}]^*. \end{eqnarray*} Now Lemma \ref{tc.1} (b) is useful. It says \begin{eqnarray*} \Z[\sqrt{y^2-2}]^*=\left\{\begin{array}{ll} \{\pm \kappa_1^l\,|\, l\in\Z\}&\textup{ if }y\geq 3,\\ \{\pm (1+\sqrt{2})^l\,|\, l\in\Z\}&\textup{ if }y=2. \end{array}\right. \end{eqnarray*} Furthermore $\kappa_1$ has norm 1, $1+\sqrt{2}$ has norm $-1$, and $(1+\sqrt{2})^2=-\kappa_2$ if $y=2$. Consider the cases $y\geq 3$. The map $$G_\Z^M\to \OO_{\Q[\kappa_1]}^*\times\{\pm 1\}, \quad p(M^{root,2})\mapsto (p(\kappa_1),p(-1)),$$ has as image the index 2 subgroup of $\{\pm\kappa_1^l\,|\, l\in\Z\}\times\{\pm 1\}$ which is generated by $(\kappa_1,-1)$ and $(-1,-1)$, because $Q\notin G_\Z^M$. Therefore then $G_\Z^M=G_\Z=\{\pm (M^{root,2})^l\,|\, l\in\Z\}$. Consider the case $y=2$. Then $Q\in G_\Z\subset G_\Z^M$. Therefore then $G_\Z=\{\id,Q\}\times\{\pm (M^{root,2})^l\,|\, l\in\Z\}$.
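The base change computed above can be double-checked without fractions: instead of inverting $M(\uuuu{e},\uuuu{f})$, one verifies the equivalent identity $M(\uuuu{e},\uuuu{f})\cdot M^{root,2,mat,\uuuu{f}}=M^{root,2,mat}\cdot M(\uuuu{e},\uuuu{f})$, which avoids the denominator $y^2-2$:

```python
# Fraction-free check of M^{root,2,mat,f} = M(e,f)^{-1} M^{root,2,mat} M(e,f),
# rewritten as  M(e,f) * M^{root,2,mat,f} = M^{root,2,mat} * M(e,f).

def mul(A, B):
    """Product of two 3x3 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

for y in range(2, 10):
    A  = [[1 - 4*y**2, -4*y**3, -2*y], [2*y, 2*y**2, 1], [0, 1, 0]]  # M^{root,2,mat}
    P  = [[1, 0, -y], [0, 1, 1], [-y, 1 - 2*y**2, -1]]               # M(e,f)
    Mf = [[-2*y**2 + 1, -2*y, 0], [y, 1, 0], [0, 0, -1]]             # M^{root,2,mat,f}
    assert mul(P, Mf) == mul(A, P)
```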
The question remains whether $1+\sqrt{2}$ arises as eigenvalue $p(\kappa_1)$ for an element $p(M^{root,2})\in G_\Z^M$. It does. $M^{root,4}$ has the first eigenvalue $$-\frac{1}{4}(-3+2\sqrt{2})^2-2(-3+2\sqrt{2})-\frac{3}{4} =1-\sqrt{2}$$ with $(1-\sqrt{2})^2=3-2\sqrt{2}=-\kappa_1$ and the third eigenvalue $-\frac{1}{4}-2(-1)-\frac{3}{4}=1$. Therefore $(M^{root,4})^2=-M^{root,2}$. $M^{root,4}$ is in $G_\Z^M$ because \begin{eqnarray*} M^{root,4}(\uuuu{e}) &=&\uuuu{e}\left(-\frac{1}{4} \begin{pmatrix} 97 & 220 & 28 \\ -28 & -63 & -8 \\ 4 & 8 & 1\end{pmatrix} -2\begin{pmatrix} -15 & -32 & -4 \\ 4 & 8 & 1 \\ 0&1&0\end{pmatrix} -\frac{3}{4}E_3\right)\\ &=& \uuuu{e}\begin{pmatrix}5&9&1\\-1&-1&0\\-1&-4&-1\end{pmatrix}. \end{eqnarray*} Therefore then $G_\Z^M=\{\id,Q\}\times\{\pm (M^{root,4})^l\,|\, l\in\Z\}$. (c) Observe $r=2x^2$. $M$ has the eigenvalues $\lambda_1,\lambda_2,\lambda_3$ with $$\lambda_{1/2}=(1-x^2)\pm x\sqrt{x^2-2},\quad \lambda_3=1.$$ Because $M$ is regular, Lemma \ref{t5.1} applies. It gives an isomorphism of $\Q$-algebras \begin{eqnarray*} \End(H_\Q,M)&&\\ =\{p(M)\,|\, p(t)=p_2t^2+p_1t+p_0\in\Q[t]\} &\to& \Q[\lambda_1]\times \Q\\ p(M)&\mapsto& (p(\lambda_1),p(1)). \end{eqnarray*} The image of $-\id$ is $(-1,-1)$, the image of $Q$ is $(-1,1)$, the image of $M$ is $(\lambda_1,1)$. The image of $G_\Z^M$ is a priori a subgroup of $\OO_{\Q[\lambda_1]}^*\times\{\pm 1\}$. We have to find out which one. By Theorem \ref{t5.11} $Q\in G_\Z\subset G_\Z^M$. Consider the decomposition $H_\Q=H_{\Q,1}\oplus H_{\Q,2}$ as in Definition \ref{t5.9} and the primitive sublattices $H_{\Z,1}=H_{\Q,1}\cap H_\Z$ and $H_{\Z,2}=H_{\Q,2}\cap H_\Z=\Z f_3$ in $H_\Z$. 
The sublattice $H_{\Z,1}$ is the right $L$-orthogonal subspace $(\Z f_3)^\perp\subset H_\Z$ of $\Z f_3$, see \eqref{5.8}: \begin{eqnarray*} H_{\Z,1}&=&\{\uuuu{e}\cdot \uuuu{z}^t\,|\, \uuuu{z}\in \Z^3, 0=(0,1,-1)\begin{pmatrix}z_1\\z_2\\z_3\end{pmatrix}\}\\ &=& \langle f_1,f_2\rangle_\Z \quad\textup{with}\quad f_1=e_1,\quad f_2=e_2+e_3. \end{eqnarray*} Write $\uuuu{f}=(f_1,f_2,f_3)=\uuuu{e}\cdot M(\uuuu{e},\uuuu{f})$, \begin{eqnarray*} M(\uuuu{e})=\uuuu{e}\cdot M^{mat},\quad M(\uuuu{f})=\uuuu{f}M^{mat,\uuuu{f}}. \end{eqnarray*} Then \begin{eqnarray*} M(\uuuu{e},\uuuu{f}) &=&\begin{pmatrix}1&0&0\\0&1&1\\0&1&-1\end{pmatrix},\quad M(\uuuu{e},\uuuu{f})^{-1} =\frac{1}{2} \begin{pmatrix}2&0&0\\0&1&1\\0&1&-1\end{pmatrix},\\ M^{mat}&=&\begin{pmatrix}1-2x^2&-x&-x\\x&1&0\\x&0&1\end{pmatrix},\\ M^{mat,\uuuu{f}} &=&M(\uuuu{e},\uuuu{f})^{-1}M^{mat}M(\uuuu{e},\uuuu{f}) = \begin{pmatrix} 1-2x^2&-2x&0\\x&1&0\\0&0&1\end{pmatrix}. \end{eqnarray*} The upper left $2\times 2$-matrix in $M^{mat,\uuuu{f}}$ coincides after identification of $x$ and $y$ with the upper left $2\times 2$-matrix in $M^{root,2,mat,\uuuu{f}}$ in the proof of part (b). Therefore we can argue exactly as in the proof of part (b). Lemma \ref{tc.1} (b) applies in the same way. We obtain for $x\geq 3$ $G_\Z^M=G_\Z=\{\id,Q\}\times \{\pm M^l\,|\, l\in\Z\}$ and for $x=2$ $G_\Z=\{\id,Q\}\times \{\pm M^l\,|\, l\in\Z\}$. Consider the case $x=2$. Then $M^{root,2}$ has the first eigenvalue $$\frac{1}{2}\lambda_1+\frac{1}{2}=-1+\sqrt{2}$$ with $(-1+\sqrt{2})^2=3-2\sqrt{2}=-\lambda_1$ and the third eigenvalue $\frac{1}{2}+\frac{1}{2}=1$. Therefore $(M^{root,2})^2=QM$. $M^{root,2}$ is in $G_\Z^M$ because \begin{eqnarray*} M^{root,2}(\uuuu{e})&=&\uuuu{e}\left(\frac{1}{2} \begin{pmatrix}-7&-2&-2\\2&1&0\\2&0&1\end{pmatrix} +\frac{1}{2}E_3\right) = \uuuu{e}\begin{pmatrix}-3&-1&-1\\1&1&0\\1&0&1\end{pmatrix}. \end{eqnarray*} Therefore $G_\Z^M=\{\id,Q\}\times\{\pm (M^{root,2})^l\,|\, l\in\Z\}$ for $x=2$. (d) Here $r=-2$. 
$M$ has the eigenvalues $\lambda_1,\lambda_2,\lambda_3$ with $$\lambda_{1/2}=2\pm \sqrt{3},\quad \lambda_3=1.$$ It is well known and can be seen easily either elementarily or with Theorem \ref{tc.6} that $$\OO_{\Q[\lambda_1]}^*=\{\pm \lambda_1^l\,|\, l\in\Z\}.$$ $Q\in G_\Z\subset G_\Z^M$ by Theorem \ref{t5.11}. Recall the proof of Lemma \ref{t5.15}. The restriction of the map in \eqref{5.19} to the map $$\{\id,Q\}\times \{M^l\,|\, l\in\Z\}\to \OO_{\Q[\lambda_1]}^*$$ is an isomorphism. Therefore the map in \eqref{5.19} is an isomorphism and $$G_\Z^M=G_\Z=\{\id,Q\}\times\{\pm M^l\,|\, l\in\Z\}.$$ (e) Observe in part (c) \begin{eqnarray*} Z(\sigma_2)(\uuuu{e})&=&\sigma_2(\uuuu{e}) =(e_1,e_3,e_2),\\ \textup{so }Z(\sigma_2)(f_1,f_2,f_3)&=&(f_1,f_2,-f_3),\\ \textup{ so }Z(\sigma_2)&=&-Q. \end{eqnarray*} Now in all cases \begin{eqnarray*} -\id= Z(\delta_1\delta_2\delta_3),\quad M=Z(\sigma^{mon}) \end{eqnarray*} and \begin{eqnarray*} &&\textup{ in part (a): }\left\{ \begin{array}{lll} M^{root,3}&=&Z(\delta_3\sigma_2\sigma_1),\\ G_\Z&=&\{\pm (M^{root,3})^l\,|\, l\in\Z\}\textup{ for } x\notin\{4,5\},\end{array}\right. \\ &&\textup{ in part (b): }\left\{ \begin{array}{lll} M^{root,2}&=&Z(\sigma_2\sigma_1^2),\\ G_\Z&=&\{\pm (M^{root,2})^l\,|\, l\in\Z\}\textup{ for } y\neq 2,\end{array}\right. \\ &&\textup{ in part (c): }\left\{ \begin{array}{lll} Q&=&Z(\delta_1\delta_2\delta_3\sigma_2),\\ G_\Z&=&\{\id,Q\}\times \{\pm M^l\,|\, l\in\Z\}. \end{array}\right. \end{eqnarray*} This shows $G_\Z=G_\Z^{\BB}$ in all but the four cases $\uuuu{x}\in \{(4,4,4),(5,5,5),(4,4,8),(3,3,4)\}$. In these four cases $Q\in G_\Z$. It remains to see $Q\notin G_\Z^{\BB}$. We offer two proofs. First proof: It uses that in these four cases the stabilizer of $\uuuu{e}$ in $\Br_3\ltimes\{\pm 1\}^3$ is $\{\id\}$, which will be proved as part of Theorem \ref{t7.11}. 
It also follows from $\Gamma^{(1)}=G^{free,3}$ in Theorem \ref{t6.18} (g) or $\Gamma^{(0)}=G^{fCox,3}$ in Theorem \ref{t6.11} (g) and from Example \ref{t3.4} (respectively Theorem \ref{t3.2} (a) or (b)). This implies that here $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ is injective. Observe furthermore $Q^2=\id$. If $Q=Z(\beta)$ for some braid $\beta$, then $\beta^2=\id$ as $Z$ is a group antihomomorphism. But there is no braid of order two. Second proof: By formula \eqref{5.5} in Theorem \ref{t5.11} in the four cases \begin{eqnarray*} Q(\uuuu{e})&=&-\uuuu{e} + 2f_3(1,3,1) \quad \textup{in the case }\uuuu{x}=(4,4,4),\\ Q(\uuuu{e})&=&-\uuuu{e} + f_3(1,4,1) \quad \textup{in the case }\uuuu{x}=(5,5,5),\\ Q(\uuuu{e})&=&-\uuuu{e} + f_3(2,7,1) \quad \textup{in the case }\uuuu{x}=(4,4,8),\\ Q(\uuuu{e})&=&-\uuuu{e} + f_3(4,9,3) \quad \textup{in the case }\uuuu{x}=(3,3,4). \end{eqnarray*} By Theorem \ref{t6.21} (g) the restriction to $\Delta^{(1)}$ of the projection $\pr^{H,(1)}:H_\Z\to \oooo{H_\Z}^{(1)}$ is injective. Therefore in all four cases $Q(e_i)\notin\Delta^{(1)}$ for $i\in\{1,2,3\}$. But any automorphism in $G_\Z^{\BB}$ maps each $e_i$ to an odd vanishing cycle. Thus $Q\notin G_\Z^{\BB}$. \hfill$\Box$ \section{General rank 3 cases with eigenvalues not all in $S^1$} \label{s5.7} Theorem \ref{t5.18} below will show $G_\Z=G_\Z^M=\{\pm M^l\,|\, l\in\Z\}$ in all irreducible rank 3 cases with eigenvalues not all in $S^1$ which have not been treated in Theorem \ref{t5.16}. This result is simple to write down, but the proof is long. It is a case discussion with many subcases. It builds on part (b) of the technical Lemma \ref{t5.17} which gives necessary and sufficient conditions under which an endomorphism in $\End(H_\Q)$ of a certain shape is in $\End(H_\Z)$. Any element of $\{h\in G_\Z^M\,|\, \mu_3=1\}$ can be written as $h=q(M)$ with $q(t)=1+q_0(t-1)+q_1(t-1)^2$ with unique coefficients $q_0,q_1\in\Q$, but not all values $q_0,q_1\in\Q$ give such an element.
Part (b) of the following lemma says which integrality conditions on $q_0$ and $q_1$ are necessary for $q(M)\in \End(H_\Z)$. Part (a) is good to know in this context. \begin{lemma}\label{t5.17} Fix $\uuuu{x}\in \Z^3-\{(0,0,0)\}$ and the associated triple $(H_\Z,L,\uuuu{e})$. Recall $g=\gcd(x_1,x_2,x_3)$ and $\www{x}_i=g^{-1}x_i$. Define \index{$g_1,\ g_2$} \begin{eqnarray} g_1&:=&\gcd(2x_1-x_2x_3,2x_2-x_1x_3,2x_3-x_1x_2) \in\N\cup\{0\},\nonumber\\ g_2&:=& \frac{g_1}{g}=\gcd(2\www{x}_1-g\www{x}_2\www{x}_3, 2\www{x}_2-g\www{x}_1\www{x}_3,2\www{x}_3-g\www{x}_1\www{x}_2)\in\N\cup\{0\}.\nonumber \\ &&\label{5.20} \end{eqnarray} (a) We separate three cases.\\ (i) Case (three or two of $x_1,x_2,x_3$ are odd): Then $g$ and $g_2$ are odd and \begin{eqnarray*} \gcd(g_2,\www{x}_i)=\gcd(g_2,g)=1, \quad g_2^2\,|\, (r-4). \end{eqnarray*} (ii) Case (exactly one of $x_1,x_2,x_3$ is odd): Then $g$ is odd, $g_2\equiv 2(4)$ and \begin{eqnarray*} \gcd(\frac{g_2}{2},\www{x}_i)=\gcd(\frac{g_2}{2},g)=1, \quad (\frac{g_2}{2})^2\,|\, (r-4). \end{eqnarray*} (iii) Case (none of $x_1,x_2,x_3$ is odd): Then $g$ and $g_2$ are even. More precisely, $g_2\equiv 0(4)$ only if $\frac{g}{2}$ and $\www{x}_1,\www{x}_2,\www{x}_3$ are odd. Else $g_2\equiv 2(4)$. Always \begin{eqnarray*} \gcd(\frac{g_2}{2},\www{x}_i)=\gcd(\frac{g_2}{2},\frac{g}{2})=1, \quad g_2^2\,|\, (r-4). \end{eqnarray*} (b) Consider $q_0,q_1\in\Q$, $q(t):=1+q_0(t-1)+q_1(t-1)^2\in\Q[t]$ and $h=q(M)\in \Q[M]$. Define $q_2:=q_0-2q_1\in\Q$. \index{$q_0,\ q_1,\ q_2$} Then $h\in \End(H_\Z,M)$ if and only if the following integrality conditions \eqref{5.21}--\eqref{5.24} are satisfied. 
\begin{eqnarray} q_2\cdot g^2\in\Z,&&\label{5.21}\\ q_1\cdot gg_1\in\Z,\label{5.22}\\ q_0x_i-q_1x_jx_k\in\Z&&\textup{for}\quad \{i,j,k\}=\{1,2,3\},\label{5.23}\\ q_1(x_i^2-x_j^2)\in\Z&&\textup{for}\quad \{i,j,k\}=\{1,2,3\}.\label{5.24} \end{eqnarray} If these conditions hold, then also the following holds, \begin{eqnarray} q_0\cdot g_1\in\Z.\label{5.25} \end{eqnarray} (c) In part (b) the eigenvalue of $q(M)$ on $H_{\C,\lambda_1}$ is \begin{eqnarray*} \mu_1&:=&q(\lambda_1)= (1-rq_1)+(q_0-rq_1)(\lambda_1-1)\\ &=& (1-q_0)+(q_0-rq_1)\lambda_1. \end{eqnarray*} \end{lemma} {\bf Proof:} (a) $g$ is odd in the cases (i) and (ii) and even in case (iii). Therefore $g_2$ is odd in case (i) and even in the cases (ii) and (iii), and furthermore $\frac{g_2}{2}$ is odd in case (ii). Also $\frac{g_2}{2}$ odd in case (iii) almost always, namely except when $\frac{g}{2}$ and $\www{x}_1,\www{x}_2,\www{x}_3$ are odd, as can be seen from the definition \eqref{5.20} of $g_2$. Here observe that at least one of $\www{x}_1,\www{x}_2,\www{x}_3$ is odd because $\gcd(\www{x}_1,\www{x}_2,\www{x}_3)=1$. Now we consider first case (iii). A common divisor of $\frac{g_2}{2}$ and $\www{x}_1$ would be odd. Because of the second term $2\www{x}_2-g\www{x}_1\www{x}_3$ and the third term $2\www{x}_3-g\www{x}_1\www{x}_2$ in \eqref{5.20} it would also divide $\www{x}_2$ and $\www{x}_3$. This is impossible because of $\gcd(\www{x}_1,\www{x}_2,\www{x}_3)=1$. Therefore $\gcd(\frac{g_2}{2},\www{x}_1)=1$. Analogously for $\www{x}_2$ and $\www{x}_3$. A common divisor of $\frac{g_2}{2}$ and $\frac{g}{2}$ would be odd. Because of all three terms in \eqref{5.20} it would divide $\www{x}_1,\www{x}_2,\www{x}_3$. Therefore $\gcd(\frac{g_2}{2},\frac{g}{2})=1$. 
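These divisibility claims of part (a) can be spot-checked numerically. In the following sketch the sample triples are our own choices, a few for each parity case; $g$, $g_1$, $g_2$ are as in \eqref{5.20}:

```python
# Numerical check of Lemma 5.17 (a) on sample triples (our own choices):
# g = gcd(x_1,x_2,x_3), g_1 and g_2 = g_1/g as in (5.20),
# r = x_1^2 + x_2^2 + x_3^2 - x_1 x_2 x_3.
from math import gcd

samples = [(5, 5, 5), (3, 3, 4),             # case (i):   three or two odd
           (2, 2, 3), (-2, -2, -1),          # case (ii):  exactly one odd
           (4, 4, 8), (4, 4, 0), (2, 2, 6)]  # case (iii): none odd

for x1, x2, x3 in samples:
    g  = gcd(gcd(x1, x2), x3)
    g1 = gcd(gcd(2*x1 - x2*x3, 2*x2 - x1*x3), 2*x3 - x1*x2)
    g2 = g1 // g
    r  = x1**2 + x2**2 + x3**2 - x1*x2*x3
    xt = (x1 // g, x2 // g, x3 // g)
    n_odd = sum(x % 2 != 0 for x in (x1, x2, x3))
    if n_odd >= 2:                                        # case (i)
        assert g % 2 == 1 and g2 % 2 == 1
        assert all(gcd(g2, t) == 1 for t in xt) and gcd(g2, g) == 1
        assert (r - 4) % g2**2 == 0
    elif n_odd == 1:                                      # case (ii)
        assert g % 2 == 1 and g2 % 4 == 2
        h = g2 // 2
        assert all(gcd(h, t) == 1 for t in xt) and gcd(h, g) == 1
        assert (r - 4) % h**2 == 0
    else:                                                 # case (iii)
        assert g % 2 == 0 and g2 % 2 == 0
        h = g2 // 2
        assert all(gcd(h, t) == 1 for t in xt) and gcd(h, g // 2) == 1
        assert (r - 4) % g2**2 == 0
```

The triple $(2,2,6)$ illustrates the subcase $g_2\equiv 0\,(4)$ of case (iii), where $\frac{g}{2}$ and all $\www{x}_i$ are odd.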
For $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$ observe \begin{eqnarray} 2(2\www{x}_i-g\www{x}_j\www{x}_k) + g\www{x}_k(2\www{x}_j-g\www{x}_i\www{x}_k) =\www{x}_i(4-x_k^2),\label{5.26} \\ 4(r-4)= g^2(2\www{x}_i-g\www{x}_j\www{x}_k)^2- (4-x_j^2)(4-x_k^2).\label{5.27} \end{eqnarray} \eqref{5.26} and $\gcd(\frac{g_2}{2},\www{x}_i)=1$ imply in case (iii) that $\frac{g_2}{2}$ divides $4^{-1}(4-x_k^2)$. This and \eqref{5.27} imply that $(\frac{g_2}{2})^2$ divides $\frac{r-4}{4}$, so $g_2^2$ divides $r-4$. The claims for the cases (i) and (ii) follow similarly. (b) Recall the shape of $M^{mat}\in M_{3\times 3}(\Z)$ with $M\uuuu{e}=\uuuu{e}M^{mat}$ from the beginning of Section \ref{s5.3}. It gives \begin{eqnarray*} M^{mat}-E_3= \begin{pmatrix}-x_1^2-x_2^2+x_1x_2x_3&-x_1-x_2x_3+x_1x_3^2 &x_1x_3-x_2\\x_1-x_2x_3&-x_3^2&-x_3\\x_2&x_3&0\end{pmatrix},\\ \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} (M^{mat}-E_3) =\begin{pmatrix}0&-x_1&-x_2\\x_1&0&-x_3\\x_2&x_3&0\end{pmatrix},\\ (M^{mat}-E_3) \begin{pmatrix}1&0&0\\0&1&0\\-x_2&-x_3&1\end{pmatrix} =\begin{pmatrix}-x_1^2&-x_1&x_1x_3-x_2\\x_1&0&-x_3\\x_2&x_3&0\end{pmatrix}. \end{eqnarray*} Now $$q(M)\in \End(H_\Z)\iff q(M^{mat})-E_3\in M_{3\times 3}(\Z),$$ and this is equivalent to the following matrix being in $M_{3\times 3}(\Z)$, \begin{eqnarray*} &&\begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} (q(M^{mat})-E_3) \begin{pmatrix}1&0&0\\0&1&0\\-x_2&-x_3&1\end{pmatrix}\\ &=& \begin{pmatrix}0&-x_1&-x_2\\x_1&0&-x_3\\x_2&x_3&0\end{pmatrix}\\ &&\cdot\left[ q_0\begin{pmatrix}1&0&0\\0&1&0\\-x_2&-x_3&1\end{pmatrix} +q_1\begin{pmatrix}-x_1^2&-x_1&x_1x_3-x_2\\x_1&0&-x_3\\x_2&x_3&0\end{pmatrix} \right]\\ &=& q_0 \begin{pmatrix} x_2^2 & -x_1+x_2x_3 & -x_2 \\ x_1+x_2x_3 & x_3^2 & -x_3 \\ x_2 & x_3 & 0\end{pmatrix} \\ &+& q_1 \begin{pmatrix} -x_1^2-x_2^2 & -x_2x_3 & x_1x_3 \\ -x_1^3-x_2x_3 & -x_1^2-x_3^2 & x_1^2x_3 -x_1x_2 \\ -x_1^2x_2 +x_1x_3 & -x_1x_2 & x_1x_2x_3-x_2^2-x_3^2\end{pmatrix}. 
\end{eqnarray*} This gives nine scalar conditions, which we denote by their place $[a,b]$ with $a,b\in\{1,2,3\}$ in the matrix, so for example $[2,1]$ is the condition $q_0(x_1+x_2x_3)+q_1(-x_1^3-x_2x_3)\in\Z$. These nine conditions are sufficient and necessary for $q(M)\in \End(H_\Z)$. The following trick allows an easy derivation of implied conditions. Recall the cyclic action $\gamma:\Z^3\to\Z^3,\uuuu{x}\mapsto(x_3,x_1,x_2)$. It lifts to an action of $\Br_3\ltimes\{\pm 1\}^3$ on triangular bases of $(H_\Z,L)$. Therefore together with $M^{mat}=S(\uuuu{x})^{-1}S(\uuuu{x})^t$ also the matrix $\www{M}^{mat}:= S(\gamma(\uuuu{x}))^{-1}S(\gamma(\uuuu{x}))^t$ is a monodromy matrix. Integrality of $q(M^{mat})$ is equivalent to integrality of $q(\www{M}^{mat})$. Therefore if the nine conditions hold, also the conditions hold which are obtained from the nine conditions by replacing $(x_1,x_2,x_3)$ by $(x_3,x_1,x_2)$ or by $(x_2,x_3,x_1)$. In the following $[a,b]$ denotes all three so obtained conditions, so for example $[2,1]$ denotes the conditions \begin{eqnarray*} q_0(x_i+x_jx_k)+q_1(-x_i^3-x_jx_k)\in\Z\\\ \textup{for } (i,j,k)\in\{(1,2,3),(3,1,2),(2,3,1)\}. \end{eqnarray*} We have to show the following equivalence: \begin{eqnarray*} \textup{the conditions }[a,b]\textup{ for }a,b\in\{1,2,3\} \iff \textup{ the conditions }\eqref{5.21}-\eqref{5.24}. \end{eqnarray*} $\Longrightarrow$: $[1,3]$ and $[3,2]$ are equivalent to one another and to \eqref{5.23}. $[3,3]$ says $q_1(x_i^2-r)\in\Z$. One derives $q_1(x_i^2-x_j^2)\in\Z$, which is \eqref{5.24}. $[1,1]$ and \eqref{5.24} give $q_2x_i^2\in\Z$, so $q_2\gcd(x_1^2,x_2^2,x_3^2)=q_2g^2\in\Z$ which is \eqref{5.21}. The derivation of $q_1gg_1\in\Z$ is laborious and goes as follows:\\ $[3,1]\& [1,3]$ imply $q_1x_i(2x_k-x_ix_j)\in\Z$.\\ $[3,2]\& [2,3]$ imply $q_1x_i(2x_j-x_ix_k)\in\Z$.\\ $[3,3]\& \eqref{5.24}$ imply $q_1x_i(2x_i-x_jx_k)\in\Z$. \\ One sees $q_1x_ig_1\in\Z$ and then $q_1gg_1\in\Z$. 
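The matrix identity behind the nine conditions can be verified mechanically. Since the displayed expression is linear in $q_0$ and $q_1$, it suffices to check the two coefficient matrices separately, i.e. that $U(M^{mat}-E_3)V$ and $U(M^{mat}-E_3)^2V$ (with $U$, $V$ the two unipotent matrices from the computation above) equal the displayed $q_0$- and $q_1$-matrices:

```python
# Check the displayed identity over a range of integer triples:
# U (M^mat - E_3) V = C  (coefficient matrix of q_0) and
# U (M^mat - E_3)^2 V = D  (coefficient matrix of q_1).
from itertools import product

def mul(A, B):
    """Product of two 3x3 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def M_mat(x1, x2, x3):
    """Matrix of the monodromy M in the basis e_1, e_2, e_3."""
    return [[1 - x1**2 - x2**2 + x1*x2*x3, -x1 - x2*x3 + x1*x3**2, x1*x3 - x2],
            [x1 - x2*x3,                   1 - x3**2,              -x3],
            [x2,                           x3,                     1]]

for x1, x2, x3 in product(range(-3, 4), repeat=3):
    U = [[1, x1, x2], [0, 1, x3], [0, 0, 1]]
    V = [[1, 0, 0], [0, 1, 0], [-x2, -x3, 1]]
    M = M_mat(x1, x2, x3)
    ME = [[M[i][j] - (i == j) for j in range(3)] for i in range(3)]
    C = [[x2**2, -x1 + x2*x3, -x2],
         [x1 + x2*x3, x3**2, -x3],
         [x2, x3, 0]]
    D = [[-x1**2 - x2**2, -x2*x3, x1*x3],
         [-x1**3 - x2*x3, -x1**2 - x3**2, x1**2*x3 - x1*x2],
         [-x1**2*x2 + x1*x3, -x1*x2, x1*x2*x3 - x2**2 - x3**2]]
    assert mul(mul(U, ME), V) == C
    assert mul(mul(U, mul(ME, ME)), V) == D
```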
$\Longleftarrow$: $\eqref{5.23}$ gives $[1,3]$ and $[3,2]$. \eqref{5.21} and \eqref{5.24} give $[1,1]$ and $[2,2]$. \eqref{5.23} reduces $[1,2]$, $[2,1]$, $[2,3]$ and $[3,1]$ to $q_2x_jx_k\in\Z$, $q_0x_jx_k-q_1x_i^3\in\Z$, $q_1x_i(x_ix_k-2x_j)\in\Z$ and $q_1x_i(-x_ix_j+2x_k)\in\Z$. The first follows from \eqref{5.21}, the third and fourth follow from \eqref{5.22}. The second reduces with \eqref{5.24} to $x_j(q_0x_k-q_1x_ix_j)\in\Z$, which follows from \eqref{5.23}. $[3,3]$ reduces with \eqref{5.24} to $q_1x_j(x_ix_k-2x_j)\in\Z$, which follows from \eqref{5.22}. The equivalence of the conditions $[a,b]$ with the conditions \eqref{5.21}--\eqref{5.24} is shown. It remains to show how \eqref{5.21}--\eqref{5.24} imply \eqref{5.25}. One combines two times \eqref{5.23}, $2q_0x_i-2q_1x_jx_k\in\Z$, with \eqref{5.21}, $(q_0-2q_1)x_jx_k\in\Z$, and obtains $q_0(2x_i-x_jx_k)\in\Z$. (c) Recall \begin{eqnarray*} (t-\lambda_1)(t-\lambda_2)=t^2-(2-r)t+1,\\ \textup{so}\quad \lambda_1+\lambda_2=2-r,\quad \lambda_1\lambda_2=1,\\ \lambda_1^2=(2-r)\lambda_1-1,\quad(\lambda_1-1)^2=(-r)\lambda_1, \end{eqnarray*} so \begin{eqnarray*} \mu_1&=&1+q_0(\lambda_1-1)+q_1(\lambda_1-1)^2\\ &=&(1-rq_1)+(q_0-rq_1)(\lambda_1-1).\hspace*{2cm}\Box \end{eqnarray*} \begin{theorem}\label{t5.18} Consider a triple $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})\in\Z_{<0}\cup\Z_{>4}$ which is neither reducible nor in the $\Br_3\ltimes\{\pm 1\}^3$ orbit of a triple in Theorem \ref{t5.16}. More explicitly, the triple $\uuuu{x}$ is any triple in $\Z^3$ which is not in the $\Br_3\ltimes\{\pm 1\}^3$ orbits of the triples in the following set, \begin{eqnarray*} \{(x,0,0)\,|\, x\in\Z\}\cup \{(x,x,x)\,|\, x\in\Z\}\cup \{(2y,2y,2y^2)\,|\, y\in\Z_{\geq 2}\}\\ \cup \{(x,x,0)\,|\, x\in\Z\}\cup \{(-l,2,-l)\,|\, l\in\Z_{\geq 3}\}\cup \{(3,3,4)\}. \end{eqnarray*} Consider the associated triple $(H_\Z,L,\uuuu{e})$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})$. Then \begin{eqnarray*} G_\Z=G_\Z^M=\{\pm M^l\,|\, l\in\Z\}.
\end{eqnarray*} The map $Z:(\Br_3\ltimes\{\pm 1\}^3)_S\to G_\Z$ is surjective, so $G_\Z=G_\Z^{\BB}$. \end{theorem} {\bf Proof:} The surjectivity of $Z$ follows from $G_\Z=\{\pm M^l\,|\, l\in\Z\}$, $-\id=Z(\delta_1\delta_2\delta_3)$ and $M=Z(\sigma^{mon})$. The main point is to prove $G_\Z^M=\{\pm M^l\,|\, l\in\Z\}$. Theorem \ref{t5.11} says for which $\uuuu{x}$ the automorphism $Q$ of $H_\Q$ in Definition \ref{t5.9} is in $G_\Z^M$. They are all excluded here. So here $Q\notin G_\Z^M$. We use the notation $g=g(\uuuu{x})$ in Lemma \ref{t5.10} and the notations from the beginning of section \ref{s5.3}. Especially $r:=r(\uuuu{x})\in\Z_{<0}\cup\Z_{>4}$, and $\lambda_3=1$ and $\lambda_{1/2}=\frac{2-r}{2}\pm\frac{1}{2}\sqrt{r(r-4)}$ are the eigenvalues of the monodromy. The proof of Lemma \ref{t5.15} gives a certain control on $G_\Z^M$ and $G_\Z$. Recall the notations there. For $p(t)=\sum_{i=0}^2p_it^i\in\Q[t]$ write $\mu_j:=p(\lambda_j)$ for the eigenvalues of $p(M)\in \End(H_\Q)$. Recall \eqref{5.17}, \eqref{5.18}, the isomorphism of $\Q$-algebras \begin{eqnarray*} \End(H_\Q,M) =\{p(M)\,|\, p(t)=\sum_{i=0}^2p_it^i\in\Q[t]\}&\to& \Q[\lambda_1]\times \Q,\\ p(M)&\mapsto& (p(\lambda_1),p(1)), \end{eqnarray*} and its restriction in \eqref{5.19}, the injective group homomorphism \begin{eqnarray}\label{5.28} \{h\in G_\Z^M\,|\, \mu_3=1\}\to \OO_{\Q[\lambda_1]}^*, \quad h=p(M)\mapsto \mu_1=p(\lambda_1). \end{eqnarray} The image of $Q\in\End(H_\Q,M)$ in $\Q[\lambda_1]\times \Q$ is $(-1,1)$. Because $Q\notin G_\Z^M$, the image in \eqref{5.28} does not contain $-1$, so it is a cyclic group. It contains $\lambda_1$ which is the image of $M$. Therefore the group $\{h\in G_\Z^M\,|\, \mu_3=1\}$ is cyclic. It has two generators which are inverse to one another. We denote by $h_{gen}$ the generator such that a positive power of it is $M$, namely $(h_{gen})^{l_{gen}}=M$ for a unique number $l_{gen}\in\N$. We have to prove $l_{gen}=1$. We will argue indirectly.
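As a small sanity check on the eigenvalues just recalled: by Cayley--Hamilton, $M^{mat}$ must be annihilated by $(t-1)\bigl(t^2-(2-r)t+1\bigr)$, whose quadratic factor has the roots $\lambda_{1/2}=\frac{2-r}{2}\pm\frac{1}{2}\sqrt{r(r-4)}$. A short numerical verification (sample triples are our own choices):

```python
# Check that M^mat satisfies (M - E)(M^2 - (2-r) M + E) = 0, so its
# eigenvalues are 1 and the two roots of t^2 - (2-r) t + 1.

def mul(A, B):
    """Product of two 3x3 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def M_mat(x1, x2, x3):
    """Matrix of the monodromy M in the basis e_1, e_2, e_3."""
    return [[1 - x1**2 - x2**2 + x1*x2*x3, -x1 - x2*x3 + x1*x3**2, x1*x3 - x2],
            [x1 - x2*x3,                   1 - x3**2,              -x3],
            [x2,                           x3,                     1]]

for x1, x2, x3 in [(3, 3, 4), (2, 2, 3), (-2, -2, -1), (4, 5, 6), (5, 5, 5)]:
    r = x1**2 + x2**2 + x3**2 - x1*x2*x3
    M = M_mat(x1, x2, x3)
    M2 = mul(M, M)
    P = [[M2[i][j] - (2 - r)*M[i][j] + (i == j) for j in range(3)]
         for i in range(3)]
    ME = [[M[i][j] - (i == j) for j in range(3)] for i in range(3)]
    assert mul(ME, P) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```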
We will assume the existence of a root $h=p(M)\in G_\Z^M$ with $h^l=M$ for some $l\geq 2$, first eigenvalue $\mu_1=p(\lambda_1)$ and third eigenvalue $\mu_3=p(1)=1$. Then $\mu_1^l=\lambda_1$ and $\mu_1=(1-q_0)+(q_0-rq_1)\lambda_1$ for certain $q_0,q_1\in\Q$ which must satisfy the properties in \eqref{5.21}--\eqref{5.24}. We will come to a contradiction. We can restrict to $\uuuu{x}$ in the following set, as the $\Br_3\ltimes\{\pm 1\}^3$ orbits of the elements in this set are all $\uuuu{x}$ which we consider in this theorem: Consider the two sets $Y_I$ and $Y_{II}\subset \Z^3$, \begin{eqnarray*} Y_I&:=&\{\uuuu{x}\in \Z^3_{\leq 0}\,|\, x_1\leq x_2\leq x_3\}\\ &-&\bigl[\{(x,0,0)\,|\, x\in\Z_{<0}\}\cup\ \{(x,x,x)\,|\, x\in\Z_{\leq 0}\}\\ && \cup\ \{(x,x,0)\,|\, x\in\Z_{<0}\}\bigr],\\ Y_{II}&:=&\{\uuuu{x}\in \Z^3_{\geq 3}\,|\, x_1\leq x_2\leq x_3, 2x_3\leq x_1x_2\}\\ &-&\bigl[\{(x,x,x)\,|\, x\in\Z_{\geq 3}\}\\ && \cup\ \{(2y,2y,2y^2)\,|\, y\in\Z_{\geq 2}\} \cup\{(3,3,4)\}\bigr]. \end{eqnarray*} All triples $\uuuu{x}$ in this theorem are in the $\Br_3\ltimes\{\pm 1\}^3$ orbits of the triples in $Y_I\cup Y_{II}\cup\{\uuuu{x}\,|\, (x_2,x_1,x_3)\in Y_I\cup Y_{II}\}$. We will restrict to $\uuuu{x}\in Y_I\cup Y_{II}$. For $\uuuu{x}$ with $(x_2,x_1,x_3)\in Y_I\cup Y_{II}$, one can copy the following proof and exchange $x_1$ and $x_2$. Because the triples $(x,x,x)$ are excluded, $x_1<x_3$ and $\www{x}_1<\www{x}_3$. We will assume the existence of a unit $\mu_1 \in \OO_{\Q[\lambda_1]}^*$ with $\mu_1^l=\lambda_1$ for some $l\geq 2$ and some norm $\NN(\mu_1)\in\{\pm 1\}$. The integrality conditions in Lemma \ref{t5.17} (b) for $q_0,q_1,q_2\in\Q$ with $\mu_1=(1-q_0)+(q_0-rq_1)\lambda_1$ and $q_2=q_0-2q_1$ will lead to a contradiction. The proof is a case discussion. The cases split as follows. Case I: $\uuuu{x}\in Y_I$. \hspace*{0.5cm} Subcase I.1: $l\geq 3$ odd. \hspace*{0.5cm} Subcase I.2: $l=2$. Case II: $\uuuu{x}\in Y_{II}$. \hspace*{0.5cm} Subcase II.1: $l\geq 3$ odd. 
\hspace*{0.5cm} Subcase II.2: $l=2$. \hspace*{1cm} Subcase II.2.1: $\NN(\mu_1)=1$. \hspace*{1.5cm} Subcase II.2.1.1: $\mu_1=\kappa_a$ for some $a\in\Z_{\geq 3}$. \hspace*{1.5cm} Subcase II.2.1.2: $\mu_1=-\kappa_a$ for some $a\in\Z_{\geq 3}$. \hspace*{1cm} Subcase II.2.2: $\NN(\mu_1)=-1$. The treatment of the cases I, II.1 and II.2.2 will be fairly short. The treatment of the cases II.2.1 will be laborious. Lemma \ref{td.2} prepares all cases with $\NN(\mu_1)=1$. Consider such a case. Suppose $\mu_1=\kappa_a$ for some $a\in\Z_{\leq -3}\cup\Z_{\geq 3}$. Compare Lemma \ref{td.2} (c) and Lemma \ref{t5.17} (c): \begin{eqnarray*} q_0=q_{0,l}(a),\quad q_1=q_{1,l}(a),\quad q_2=q_{2,l}(a), \quad r=r_l(a). \end{eqnarray*} The integrality condition \eqref{5.21} $q_2g^2\in\Z$ together with \eqref{d.6} and \eqref{d.5} implies that $r/(2-a)=r_l(a)/(2-a)$ divides $g^2$. For $l\geq 3$ odd, $r/(2-a)$ is itself a square by \eqref{d.4}, and $g$ can be written as $g=\gamma_1\gamma_3$ with $\gamma_1,\gamma_3\in\N$ and $\gamma_1^2=r/(2-a)$. For $l=2$, $r/(2-a)=a+2$, and $g$ can be written as $g=\gamma_1\gamma_2\gamma_3$ with $\gamma_1,\gamma_2,\gamma_3\in\N$, $\gamma_2$ squarefree, and $a+2=\gamma_1^2\gamma_2$, so $g^2=(a+2)\gamma_2\gamma_3^2$. On the other hand $g^2$ divides $r=r_l(a)$ by Lemma 1.3. \eqref{5.1} takes the shape \begin{eqnarray}\label{5.29} \www{x}_1^2+\www{x}_2^2+\www{x}_3^2-g\www{x}_1\www{x}_2\www{x}_3 =\frac{r}{g^2} =\left\{\begin{array}{ll} \frac{2-a}{\gamma_3^2}&\textup{ if }l\geq 3\textup{ is odd,}\\ \frac{2-a}{\gamma_2\gamma_3^2}&\textup{ if }l=2. \end{array}\right. \end{eqnarray} This equation will be the key to contradictions in the cases discussed below. The absolute value of the left hand side will be large, the absolute value of the right hand side will be small. Now we start the case discussion. {\bf Case I.1}, $\uuuu{x}\in Y_I$, $l\geq 3$ odd: $\NN(\mu_1)^l=\NN(\lambda_1)=1$ and $l$ odd imply $\NN(\mu_1)=1$.
Here $\lambda_1\in(-1,0)$, so $\mu_1\in(-1,0)$, so $\mu_1=\kappa_a$ for some $a\in\Z_{\leq -3}$. By the discussion above $g=\gamma_1\gamma_3$, $\gamma_1=|b_{(l+1)/2}+b_{(l-1)/2}|$, and \eqref{5.29} holds. {\bf Case I.1.1}, all $x_i<0$: We excluded the triples $(x,x,x)$. Therefore $\www{x}_1\leq -2$. For $l\geq 3$ $\gamma_1=|b_{(l+1)/2}+b_{(l-1)/2}|\geq |b_2+b_1|=|a|-1$ by Lemma \ref{td.2} (b). Now by \eqref{5.29} \begin{eqnarray*} |a|+2\geq \frac{2-a}{\gamma_3^2} \geq 2g+4+1+1=2\gamma_1\gamma_3+6\geq 2(|a|-1)+6, \end{eqnarray*} a contradiction. {\bf Case I.1.2}, $x_3=0$: The integrality conditions \eqref{5.23} and \eqref{5.24} say here \begin{eqnarray*} q_0x_1,q_0x_2,q_1x_1x_2\in\Z,\quad q_1x_1^2,q_1x_2^2\in\Z,\quad\textup{so}\quad q_1g^2\in\Z \end{eqnarray*} (which is a bit stronger than \eqref{5.22}). $q_1g^2\in\Z$ means \begin{eqnarray*} \frac{(b_l(a)-b_{l-1}(a)-1)g^2}{rb_l(a)} =\frac{b_l(a)-b_{l-1}(a)-1}{b_l(a)(2-a)/\gamma_3^2}\in\Z. \end{eqnarray*} But $\frac{b_l(a)-b_{l-1}(a)-1}{b_l(a)}\in (1,2)$ because of Lemma \ref{td.2} (b), and $\frac{2-a}{\gamma_3^2}\in\N$, a contradiction. {\bf Case I.2}, $\uuuu{x}\in Y_{I}$, $l=2$: Here $\lambda_1<0$. Therefore $\mu_1^2=\lambda_1$ is impossible. {\bf Case II.1}, $\uuuu{x}\in Y_{II}$, $l\geq 3$ odd: $\NN(\mu_1)^l=\NN(\lambda_1)=1$ and $l$ odd imply $\NN(\mu_1)=1$. Here $\lambda_1>1$, so $\mu_1>1$, so $\mu_1=\kappa_a$ for some $a\in\Z_{\geq 3}$. By the discussion above $g=\gamma_1\gamma_3$, $\gamma_1=b_{(l+1)/2}+b_{(l-1)/2}$, and \eqref{5.29} holds. 
The proof of Lemma \ref{t5.10} (b) gives the first inequality below, \begin{eqnarray}\nonumber \frac{a-2}{\gamma_3^2}&=&\frac{|r|}{g^2} \geq g\www{x}_1\www{x}_2^2-\www{x}_1^2-2\www{x}_2^2 \geq \www{x}_2^2(g\www{x}_1-3),\\ \textup{so}\quad a-2&\geq& \gamma_3^2\www{x}_2^2(\gamma_1\gamma_3\www{x}_1-3), \nonumber\\ \textup{so}\quad (\gamma_3^2\www{x}_2^2-1)3&\geq& (\gamma_3^2\www{x}_2^2-1)\gamma_1\gamma_3\www{x}_1 +\gamma_1\gamma_3\www{x}_1-(a+1).\label{5.30} \end{eqnarray} Observe with Lemma \ref{td.2} (b) $$\gamma_1=b_{(l+1)/2}+b_{(l-1)/2}\geq b_2+b_1=a+1\geq 4.$$ Therefore the inequality \eqref{5.30} can only hold if $\gamma_3=\www{x}_2=\www{x}_1=1$ and $\gamma_1=a+1$, so $l=3$. Then also $g=\gamma_1\gamma_3=a+1$. \begin{eqnarray*} a-2&=&\frac{a-2}{\gamma_3^2}=\frac{|r|}{g^2}= g\www{x}_3-\www{x}_3^2-1-1=(a+1-\www{x}_3)\www{x}_3-2,\\ 0&=& (\www{x}_3-1)(\www{x}_3-a). \end{eqnarray*} We excluded the triples $(x,x,x)$, so $\www{x}_3>1$. But also $\www{x}_3\leq\frac{g}{2}\www{x}_1\www{x}_2 =\frac{a+1}{2}<a$. A contradiction. {\bf Case II.2}, $\uuuu{x}\in Y_{II}$, $l=2$: Then $\NN(\mu_1)=\varepsilon_1$ for some $\varepsilon_1\in\{\pm 1\}$. Also $\lambda_1>1$ and $\varepsilon_2\mu_1>1$ for some $\varepsilon_2\in\{\pm 1\}$. Then $\mu_1$ is a zero of a polynomial $t^2-\varepsilon_2at+\varepsilon_1$, namely \begin{eqnarray*} \mu_1=\varepsilon_2(\frac{a}{2} +\frac{1}{2}\sqrt{a^2-4\varepsilon_1})\quad \textup{with}\left\{\begin{array}{ll} a\in\Z_{\geq 3}&\textup{ if }\varepsilon_1=1,\\ a\in\N&\textup{ if }\varepsilon_1=-1,\end{array}\right.\\ \mu_1+\mu_1^{conj}=\varepsilon_2a, \quad \mu_1\mu_1^{conj}=\varepsilon_1,\quad \mu_1^2=\varepsilon_2 a\mu_1-\varepsilon_1. 
\end{eqnarray*}
Comparison with
\begin{eqnarray*}
\lambda_1&=&\frac{2-r}{2}+\frac{1}{2}\sqrt{r(r-4)}\\
&=& \mu_1^2=\frac{a^2-2\varepsilon_1}{2}+ \frac{a}{2}\sqrt{a^2-4\varepsilon_1}\\
&=&\varepsilon_2a\mu_1-\varepsilon_1
\end{eqnarray*}
shows
\begin{eqnarray*}
r&=& -a^2+2(\varepsilon_1+1),\\
\mu_1&=& \frac{\varepsilon_1\varepsilon_2}{a} +\frac{\varepsilon_2}{a}\lambda_1\\
&=& (1-q_0)+(q_0-rq_1)\lambda_1,
\end{eqnarray*}
with
\begin{eqnarray*}
q_0&=&\frac{a-\varepsilon_1\varepsilon_2}{a},\\
q_1&=&\frac{\varepsilon_2(\varepsilon_1+1)-a} {a(a^2-2(\varepsilon_1+1))}.
\end{eqnarray*}
{\bf Case II.2.1}, $\NN(\mu_1)=1$: Then $\varepsilon_1=1$ and
\begin{eqnarray*}
r&=& -a^2+4=(2-a)(2+a),\\
q_0&=& \frac{a-\varepsilon_2}{a},\\
q_1&=&\frac{2\varepsilon_2-a}{a(a^2-4)} =\frac{-1}{a(a+2\varepsilon_2)},\\
q_2&=& q_0-2q_1=\frac{a+\varepsilon_2}{a+2\varepsilon_2}.
\end{eqnarray*}
Write $a+2\varepsilon_2=\gamma_1^2\gamma_2$ with $\gamma_1,\gamma_2\in\N$ and $\gamma_2$ squarefree. The integrality condition \eqref{5.21} $q_2g^2\in\Z$ gives
\begin{eqnarray}\label{5.31}
g=\gamma_1\gamma_2\gamma_3\quad\textup{with}\quad a+2\varepsilon_2=\gamma_1^2\gamma_2, \quad g^2=(a+2\varepsilon_2)\gamma_2\gamma_3^2,
\end{eqnarray}
for some $\gamma_3\in\N$. The conditions $r<0$ and $g^2|r$ give
\begin{eqnarray}\label{5.32}
\frac{a-2\varepsilon_2}{\gamma_2\gamma_3^2} &=& \frac{-r}{g^2}=g\www{x}_1\www{x}_2\www{x}_3 -\www{x}_1^2-\www{x}_2^2-\www{x}_3^2\in\N.
\end{eqnarray}
$\gamma_2$ divides $a+2\varepsilon_2$ and $a-2\varepsilon_2$, so it divides $4$. As it is squarefree, $\gamma_2\in\{1,2\}$. Also $\gcd(g,a)=\gcd(\gamma_1\gamma_2\gamma_3,a)\in\{1,2\}$ as $a+2\varepsilon_2=\gamma_1^2\gamma_2$ and $\gamma_2\gamma_3^2$ divides $a-2\varepsilon_2$.
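The algebra of case II.2.1 can be checked mechanically. The following Python snippet is an independent numerical sanity check (not part of the proof): it verifies for small $a$ and both signs $\varepsilon_2$ that $\mu_1=\varepsilon_2\bigl(\frac{a}{2}+\frac{1}{2}\sqrt{a^2-4}\bigr)$ satisfies $\mu_1^2=\lambda_1$ with $r=-a^2+4$, and that the stated formulas for $q_0,q_1$ reproduce $\mu_1=(1-q_0)+(q_0-rq_1)\lambda_1$ and $q_2=q_0-2q_1=\frac{a+\varepsilon_2}{a+2\varepsilon_2}$.

```python
from fractions import Fraction
from math import sqrt

def check(a, eps2):
    # Case II.2.1 (eps1 = +1): mu_1 = eps2*(a + sqrt(a^2-4))/2 is a root
    # of t^2 - eps2*a*t + 1, and lambda_1 = mu_1^2.
    mu1 = eps2 * (a + sqrt(a * a - 4)) / 2
    lam1 = mu1 * mu1
    r = -a * a + 4
    # lambda_1 = (2-r)/2 + sqrt(r(r-4))/2, where r(r-4) = a^2(a^2-4) >= 0
    assert abs(lam1 - ((2 - r) / 2 + sqrt(r * (r - 4)) / 2)) < 1e-6
    q0 = Fraction(a - eps2, a)
    q1 = Fraction(-1, a * (a + 2 * eps2))
    # mu_1 = (1 - q0) + (q0 - r*q1)*lambda_1
    assert abs(float(1 - q0) + float(q0 - r * q1) * lam1 - mu1) < 1e-6
    # q2 = q0 - 2*q1 = (a + eps2)/(a + 2*eps2), as an exact identity
    assert q0 - 2 * q1 == Fraction(a + eps2, a + 2 * eps2)
    return True
```

For example, `check(3, 1)` tests $\mu_1=\frac{3+\sqrt 5}{2}$, $\lambda_1=\mu_1^2$, $r=-5$, $q_0=\frac{2}{3}$, $q_1=-\frac{1}{15}$.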
The integrality conditions \eqref{5.23}, $q_0x_i-q_1x_jx_k\in\Z$, tell
\begin{eqnarray*}
\frac{a-\varepsilon_2}{a}x_i+\frac{1}{a(a+2\varepsilon_2)}x_jx_k =x_i+\frac{1}{a}(-g\varepsilon_2\www{x}_i +\gamma_2\gamma_3^2\www{x}_j\www{x}_k)\in\Z,\nonumber\\
\textup{so after multiplying by }\gamma_1\qquad \frac{\gamma_3}{a}(-(a+2\varepsilon_2)\varepsilon_2\www{x}_i +g\www{x}_j\www{x}_k)\in\Z,\nonumber\\
\textup{so}\quad \frac{\gamma_3}{a} (-2\www{x}_i+g\www{x}_j\www{x}_k)\in\Z,\nonumber\\
\textup{so}\quad \frac{\gamma_3}{a}g_2\in\Z.\nonumber
\end{eqnarray*}
This is a bit stronger than the integrality condition \eqref{5.22} $q_1g^2g_2\in\Z$ which says $\frac{\gamma_2\gamma_3^2}{a}g_2\in\Z$. We can improve it even more, to
\begin{eqnarray}\label{5.33}
\frac{g_2}{a}\in\Z,\quad \frac{1}{a}(-2\www{x}_i+g\www{x}_j\www{x}_k)\in\Z,
\end{eqnarray}
by the following case discussion: If $a\equiv 1(2)$, then $\gamma_3\equiv 1(2)$, so $\gcd(a,\gamma_3)=1$, so \eqref{5.33} holds. If $a\equiv 0(4)$, then $a-2\varepsilon_2\equiv 2(4)$, so $\gamma_3\equiv 1(2)$, so $\gcd(a,\gamma_3)=1$, so \eqref{5.33} holds. If $a\equiv 2(4)$, then a priori $\frac{2}{a}g_2\in\Z$. But then $\frac{a}{2}\equiv 1(2)$ and $g\equiv 0(2)$, so $g_2\equiv 0(2)$, so \eqref{5.33} holds. Finally, the integrality conditions \eqref{5.24} $q_1(x_i^2-x_j^2)\in\Z$ say
\begin{eqnarray}
\frac{\gamma_2\gamma_3^2}{a}(\www{x}_i^2-\www{x}_j^2)\in\Z.
\label{5.34} \end{eqnarray} The following estimate which arises from \eqref{5.2} will also be useful: \begin{eqnarray}\nonumber \www{x}_1^2&\leq& \frac{(2+(4-r)^{1/3})^2}{g^2} = \frac{(2+a^{2/3})^2}{g^2} = \frac{4+4a^{2/3}+a^{4/3}}{g^2}\\ &\leq& \frac{4+2a^{1/3}+2a+a^{4/3}}{g^2} =\frac{(a+2) (2+a^{1/3})}{(a+2\varepsilon_2) \gamma_2\gamma_3^2}.\label{5.35} \end{eqnarray} {\bf Case II.2.1.1}, $\varepsilon_2=1$, $\mu_1=\kappa_a$ for some $a\in\Z_{\geq 3}$: The estimate \eqref{5.35} says \begin{eqnarray}\label{5.36} \www{x}_1^2\leq\lfloor \frac{2+a^{1/3}}{\gamma_2\gamma_3^2} \rfloor \leq \lfloor 2+a^{1/3}\rfloor \leq a\quad (\textup{recall }a\in\Z_{\geq 3}). \end{eqnarray} Recall that $\uuuu{x}$ is a local minimum, so $2\www{x}_3\leq g\www{x}_1\www{x}_2$ and that $\www{x}_1\leq \www{x}_2\leq\www{x}_3$ and $1\leq \www{x}_1<\www{x}_3$ (as $(x,x,x)$ is excluded). {\bf Case II.2.1.1.1}, $2\www{x}_3<g\www{x}_1\www{x}_2$: Then \eqref{5.33} gives the existence of $\alpha\in\N$ with $g\www{x}_1\www{x}_2=\alpha a+2\www{x}_3$. With this we go into \eqref{5.32}, \begin{eqnarray*} \frac{a-2}{\gamma_2\gamma_3^2} &=& \alpha a\www{x}_3-\www{x}_1^2+(\www{x}_3^2-\www{x}_2^2)\\ &\stackrel{\eqref{5.36}}{\geq}& \alpha a\www{x}_3-a+0 \stackrel{\www{x}_3\geq 2}{\geq} a, \end{eqnarray*} a contradiction. {\bf Case II.2.1.1.2}, $2\www{x}_3=g\www{x}_1\www{x}_2$, $\www{x}_1\geq 2$: \eqref{5.32} takes the shape \begin{eqnarray*} \frac{a-2}{\gamma_2\gamma_3^2} &=& \www{x}_3^2-\www{x}_1^2-\www{x}_2^2 =g^2(\frac{\www{x}_1}{2})^2\www{x}_2^2-\www{x}_1^2-\www{x}_2^2\\ &\geq& (g^2-2)\www{x}_2^2\geq ((a+2)-2)\www{x}_2^2=a\www{x}_2^2 \geq a, \end{eqnarray*} a contradiction. {\bf Case II.2.1.1.3,} $2\www{x}_3=g\www{x}_1\www{x}_2$, $\www{x}_1=1$: Write $\gamma_4:=\gamma_2\gamma_3^2$. 
Then \eqref{5.32} takes the shape \begin{eqnarray*} \frac{a-2}{\gamma_4}&=& \www{x}_3^2-1-\www{x}_2^2= (\frac{1}{4}(a+2)\gamma_4-1)\www{x}_2^2-1,\\ &\stackrel{\www{x}_2\geq 1}{\geq} & \frac{1}{4}(a+2)\gamma_4-2,\\ (\gamma_4^2-4)a&\leq& -2(\gamma_4^2-4)+8(\gamma_4-2). \end{eqnarray*} If $\gamma_4>2$ then $a\leq -2+\frac{8}{\gamma_4+2}\leq -2+\frac{8}{5},$ a contradiction. Therefore $\gamma_4\in\{1,2\}$. If $\gamma_4=1$ then \eqref{5.32} becomes \begin{eqnarray*} a-2&=& \frac{a-2}{4}\www{x}_2^2-1, \quad\textup{so}\quad 4=(a-2)(\www{x}_2^2-4), \end{eqnarray*} which has no solution $(a,\www{x}_2)\in\Z_{\geq 3}\times\N$, so a contradiction. If $\gamma_4=2$ then \eqref{5.32} is solved with $\www{x}_2=1$ and $a\in\Z_{\geq 3}$ arbitrary. Then $\www{x}_1=1$, $\gamma_2=2$, $\gamma_3=1$, $g=2\gamma_1$, $\www{x}_3=\frac{g}{2}=\gamma_1$, $\uuuu{x}=(2\gamma_1,2\gamma_1,2\gamma_1^2)$. These cases are excluded. {\bf Case II.2.1.2}, $\varepsilon_2=-1$, $\mu_1=-\kappa_a$ for some $a\in\Z_{\geq 3}$: The estimates \eqref{5.35} say here \begin{eqnarray}\label{5.37} \www{x}_1^2&\leq& \lfloor \frac{(2+a^{2/3})^2} {(a-2)\gamma_2\gamma_3^2} \rfloor \leq \lfloor \frac{(a+2)(2+a^{1/3})}{(a-2)\gamma_2\gamma_3^2}\rfloor. \end{eqnarray} This implies \begin{eqnarray}\label{5.38} \www{x}_1^2<a\quad\textup{if}\quad a\geq 8. \end{eqnarray} We treat small $a$ first. Recall $\gamma_2\in\{1,2\}$ and $a=\gamma_1^2\gamma_2+2$. So if $a\leq 9$ then $a\in\{3,4,6\}$. The following table lists constraints for $a\in\{3,4,6\}$. For $\www{x}_1$ \eqref{5.37} gives an upper bound and $\frac{3}{g}\leq \frac{x_1}{g}=\www{x}_1$ gives a lower bound. Recall the conditions $a-2=\gamma_1^2\gamma_2$ and $\frac{a+2}{\gamma_2\gamma_3^2}\in\N$ . 
\begin{eqnarray*}
\begin{array}{c|c|c|c}
a & 3 & 4 & 6 \\
(\gamma_1,\gamma_2,\gamma_3) & (1,1,1) & (1,2,1) & (2,1,1) \textup{ or }(2,1,2)\\
\gamma_2\gamma_3^2 & 1 & 2 & 1\textup{ or }4\\
g & 1 & 2 & 2 \textup{ or }4\\
\frac{3}{g} & 3 & \frac{3}{2} & \frac{3}{2} \textup{ or }\frac{3}{4} \\
\frac{(2+a^{2/3})^2}{a-2} & 16.64.. & 10.21.. & 7.03.. \\
\www{x}_1 & 3\textup{ or }4 & 2 & 2 \textup{ or }1
\end{array}
\end{eqnarray*}
This gives five cases $(a,\www{x}_1)\in\{(3,3),(3,4),(4,2),(6,2),(6,1)\}$ with $a\leq 9$. We treat these cases first and then all cases with $a\geq 10$. Because of \eqref{5.33}, a number $\alpha\in\Z_{\geq 0}$ with $g\www{x}_1\www{x}_2=\alpha a+2\www{x}_3$ exists. Then \eqref{5.32} becomes
\begin{eqnarray}\label{5.39}
\frac{a+2}{\gamma_2\gamma_3^2}=\alpha a\www{x}_3-\www{x}_1^2 +(\www{x}_3^2-\www{x}_2^2).
\end{eqnarray}
Also recall \eqref{5.34} $\frac{\gamma_2\gamma_3^2}{a}(\www{x}_i^2-\www{x}_j^2)\in\Z$.
{\bf Case II.2.1.2.1}, $(a,\gamma_1,\gamma_2,\gamma_3,g,\www{x}_1) =(3,1,1,1,1,3)$: \eqref{5.39} says
$$5=3 \alpha\www{x}_3-9+(\www{x}_3^2-\www{x}_2^2).$$
\eqref{5.34} says that $3$ divides $\www{x}_3^2-\www{x}_2^2$. A contradiction.
{\bf Case II.2.1.2.2}, $(a,\gamma_1,\gamma_2,\gamma_3,g,\www{x}_1) =(3,1,1,1,1,4)$: \eqref{5.39} says
$$5=3 \alpha\www{x}_3-16+(\www{x}_3^2-\www{x}_2^2).$$
$g\www{x}_1\www{x}_2=4\www{x}_2=\alpha a+2\www{x}_3$ implies that $\alpha$ is even. This and $\www{x}_3>\www{x}_1=4$ and \eqref{5.39} show $\alpha=0$, so $2\www{x}_2=\www{x}_3$, so $5=0-16+3\www{x}_2^2$, a contradiction.
{\bf Case II.2.1.2.3}, $(a,\gamma_1,\gamma_2,\gamma_3,g,\www{x}_1) =(4,1,2,1,2,2)$: \eqref{5.39} says
$$3=4 \alpha\www{x}_3-4+(\www{x}_3^2-\www{x}_2^2).$$
\eqref{5.34} says that $2$ divides $\www{x}_3^2-\www{x}_2^2$. A contradiction.
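The finite enumeration behind the table above can be reproduced mechanically. The following Python snippet is an independent sanity check (not part of the proof): it enumerates all $a\leq 9$ with $a-2=\gamma_1^2\gamma_2$, $\gamma_2\in\{1,2\}$ squarefree, $\frac{a+2}{\gamma_2\gamma_3^2}\in\N$, and all $\www{x}_1$ between the lower bound $\frac{3}{g}\leq\www{x}_1$ and the upper bound \eqref{5.37}, and recovers exactly the five pairs $(a,\www{x}_1)$.

```python
from math import floor

def squarefree_split(n):
    """Write n = s^2 * f with f squarefree; return (s, f)."""
    s, d = 1, 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
            s *= d
        d += 1
    return s, n

pairs = set()
for a in range(3, 10):                     # the cases with a <= 9
    g1, g2 = squarefree_split(a - 2)       # a - 2 = gamma_1^2 * gamma_2
    if g2 not in (1, 2):                   # gamma_2 is squarefree and divides 4
        continue
    g3 = 1
    while g2 * g3 * g3 <= a + 2:
        if (a + 2) % (g2 * g3 * g3) == 0:  # (a+2)/(gamma_2*gamma_3^2) integral
            g = g1 * g2 * g3
            # upper bound (5.37): x1_tilde^2 <= (2+a^(2/3))^2/((a-2)*gamma_2*gamma_3^2)
            upper = ((2 + a ** (2 / 3)) ** 2 / ((a - 2) * g2 * g3 * g3)) ** 0.5
            for x1 in range(1, floor(upper) + 1):
                if g * x1 >= 3:            # lower bound: x_1 = g * x1_tilde >= 3
                    pairs.add((a, x1))
        g3 += 1

print(sorted(pairs))   # -> [(3, 3), (3, 4), (4, 2), (6, 1), (6, 2)]
```

This only confirms the finite search space for small $a$; the bounds themselves come from \eqref{5.37} and $x_1\geq 3$.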
{\bf Case II.2.1.2.4}, $(a,\gamma_1,\gamma_2,\gamma_3,g, \www{x}_1)=(6,2,1,1,2,2)$: \eqref{5.39} says $$8=6 \alpha\www{x}_3-4+(\www{x}_3^2-\www{x}_2^2).$$ $\www{x}_3>\www{x}_1=2$ and \eqref{5.39} imply $\alpha=0$, so $12=\www{x}_3^2-\www{x}_2^2$. Only $\www{x}_2=2$ and $\www{x}_3=4$ satisfy this. But then $\gcd(\www{x}_1,\www{x}_2,\www{x}_3)=2\neq 1$, a contradiction. {\bf Case II.2.1.2.5}, $(a,\gamma_1,\gamma_2,\gamma_3,g,\www{x}_1) =(6,2,1,2,4,1)$: \eqref{5.39} says $$2=6 \alpha\www{x}_3-1+(\www{x}_3^2-\www{x}_2^2).$$ It implies $\alpha=0$ and $\www{x}_2=1$, $\www{x}_3=2$, so $\uuuu{x}=(4,4,8)$. This case was excluded in Theorem \ref{t5.18}. {\bf Case II.2.1.2.6}, $a\geq 10$: {\bf Case II.2.1.2.6.1}, $\alpha>0$: \eqref{5.38} gives $\www{x}_1^2<a$. This and \eqref{5.39} and $\www{x}_3>\www{x}_1\geq 1$ show $\alpha=1$, $\www{x}_3=2$, $\www{x}_1=1$, $\gamma_2=\gamma_3=1$, so $a+2=2a-1+(4-\www{x}_2^2)$. A contradiction to $a\geq 10$. {\bf Case II.2.1.2.6.2}, $\alpha=0$, $\www{x}_1\geq 2$: \eqref{5.39} says \begin{eqnarray*} \frac{a+2}{\gamma_2\gamma_3^2} &=&\www{x}_3^2-\www{x}_1^2-\www{x}_2^2 =((a-2)\gamma_2\gamma_3^2\frac{1}{4}\www{x}_1^2-1)\www{x}_2^2 -\www{x}_1^2,\\ \textup{so}\quad a+2&\geq& ((a-2)-1)\www{x}_2^2-\www{x}_1^2\geq ((a-2)-2)4,\\ \textup{so}\quad 3a&\leq& 18,\quad a\leq 6, \end{eqnarray*} a contradiction. {\bf Case II.2.1.2.6.3}, $\alpha=0$, $\www{x}_1=1$: Write $\gamma_4:=\gamma_2\gamma_3^2$. Then \eqref{5.39} says \begin{eqnarray}\nonumber \frac{a+2}{\gamma_4}&=& \www{x}_3^2-1-\www{x}_2^2 =((a-2)\gamma_4\frac{1}{4}-1)\www{x}_2^2-1,\\ \textup{so}\quad \www{x}_2^2&=& \frac{4}{\gamma_4^2}\cdot \frac{a+2+\gamma_4}{a-2-\frac{4}{\gamma_4}}.\label{5.40} \end{eqnarray} The right hand side must be $\geq 1$. This means \begin{eqnarray*} \gamma_4^2(a-2-\frac{4}{\gamma_4})&\leq& 4(a+2+\gamma_4),\\ (\gamma_4^2-4)a&\leq& 2(\gamma_4^2-4)+8(\gamma_4+2). 
\end{eqnarray*}
If $\gamma_4>2$ then $a\leq 2+\frac{8}{\gamma_4-2}$, which is in contradiction to $a\geq 10$, as $\gamma_4=3$ would mean $\gamma_2=3$, which is impossible. Therefore $\gamma_4\in\{1,2\}$. If $\gamma_4=1$ then \eqref{5.40} says $\www{x}_2^2=4\frac{a+3}{a-6}=4+\frac{36}{a-6}$. But the right hand side is not a square for any $a\geq 10$, a contradiction. If $\gamma_4=2$ then \eqref{5.40} says $\www{x}_2^2=\frac{a+4}{a-4}$, which is also not a square for any $a\geq 10$, a contradiction.
{\bf Case II.2.2}, $\NN(\mu_1)=\varepsilon_1=-1$: Recall the formulas for $r,q_0$ and $q_1$ at the beginning of case II.2. Now
\begin{eqnarray*}
r&=& -a^2,\\
q_0&=& \frac{a+\varepsilon_2}{a},\\
q_1&=& \frac{-1}{a^2},\\
q_2&=& \frac{a^2+\varepsilon_2 a+2}{a^2}.
\end{eqnarray*}
The integrality condition \eqref{5.21} $q_2g^2\in\Z$ says $\frac{a}{2}\,|\, g$ if $a\equiv 2(4)$ and $a\,|\, g$ if $a\equiv 1(2)$ or $a\equiv 0(4)$. Also $g^2\,|\, r=-a^2$. Therefore $g=a$ or $g=\frac{a}{2}$, and $g=\frac{a}{2}$ only if $a\equiv 2(4)$. The case $g=a$ means $\frac{r}{g^2}=-1$, which by Lemma \ref{t5.11} implies $Q\in G_\Z$. But all such cases are excluded in Theorem \ref{t5.18}. Therefore $g=\frac{a}{2}$ and $a\equiv 2(4)$. The integrality condition \eqref{5.22} $q_1g^2g_2\in\Z$ says $\frac{g_2}{4}\in\Z$. But $g_2\equiv 0(4)$ and $g\equiv 1(2)$ are together impossible in view of the definition $g_2=\gcd(2\www{x}_1-g\www{x}_2\www{x}_3, 2\www{x}_2-g\www{x}_1\www{x}_3,2\www{x}_3-g\www{x}_1\www{x}_2)$ and $\gcd(\www{x}_1,\www{x}_2,\www{x}_3)=1$. A contradiction. Therefore in all cases the assumption that a nontrivial root $\mu_1$ of $\lambda_1$ exists which satisfies the integrality conditions \eqref{5.21}--\eqref{5.24} leads to a contradiction. Theorem \ref{t5.18} is proved. \hfill$\Box$
\begin{remark}\label{t5.19}
The results in this chapter give complete results on $G_\Z$ and $G_\Z^M\supset G_\Z$ for all unimodular bilinear lattices of rank 3.
The reducible cases: Lemma \ref{t5.4}, Theorem \ref{t5.13}. The irreducible cases with $r\in\{0,1,2,4\}$: Theorem \ref{t5.14}. The irreducible cases with $r\in\Z_{<0}\cup\Z_{>4}$ and $G_\Z^M\supsetneqq \{\pm M^l\,|\,l\in\Z\}$: Theorem \ref{t5.16}. The irreducible cases with $r\in \Z_{<0}\cup\Z_{>4}$ and $G_\Z^M=\{\pm M^l\,|\,l\in\Z\}$: Theorem \ref{t5.18}.
\end{remark}
\chapter{Monodromy groups and vanishing cycles}\label{s6}
\setcounter{equation}{0}
\setcounter{figure}{0}
This chapter studies the monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ of the unimodular bilinear lattices $(H_\Z,L,\uuuu{e})$ with triangular basis $\uuuu{e}$ which have rank 2 or 3. In rank 3 the even as well as the odd cases split into many different case studies. They make the chapter long.
Section \ref{s6.1} considers for $k\in \{0;1\}$ the quotient lattice $\oooo{H_\Z}^{(k)}:= H_\Z/\Rad I^{(k)}$ and the induced bilinear form $\oooo{I}^{(k)}$ on it. Because $\Gamma^{(k)}$ acts trivially on $\Rad I^{(k)}$, it acts on this quotient lattice and respects $\oooo{I}^{(k)}$. The homomorphism $\Gamma^{(k)}\to \Aut(\oooo{H_\Z}^{(k)}, \oooo{I}^{(k)})$ has an image $\Gamma^{(k)}_s$, the {\it simple part} of $\Gamma^{(k)}$, and a kernel $\Gamma^{(k)}_u$, the {\it unipotent part} of $\Gamma^{(k)}$. There is the exact sequence
$$\{\id\}\to \Gamma^{(k)}_u\to\Gamma^{(k)}\to \Gamma^{(k)}_s\to\{\id\}.$$
We will study $\Gamma^{(k)}$ together with $\Gamma^{(k)}_s$ and $\Gamma^{(k)}_u$. Also the natural homomorphism $j^{(k)}:H_\Z\to H_\Z^\sharp := \Hom_\Z(H_\Z,\Z)$ and, in the even case, the spinor norm will be relevant. Section \ref{s6.1} fixes more or less well-known general facts.
Section \ref{s6.2} treats the rank 2 cases. The even cases $A_1^2$ and $A_2$ are classical and easy. In the other even cases $\Gamma^{(0)}\cong G^{fCox,2}$. There we can characterize $\Delta^{(0)}$ arithmetically and geometrically. In the odd case $A_2$, $\Gamma^{(1)}\cong SL_2(\Z)$.
In the other irreducible odd cases $\Gamma^{(1)}\cong G^{free,2}$. The matrix group $\Gamma^{(1),mat}\subset SL_2(\Z)$ is a Fuchsian group of the second kind, but has infinite index in $SL_2(\Z)$ in most cases. We do not have a characterization of $\Delta^{(1)}$ which is as nice as in the even cases.
The long Theorem \ref{t6.11} in section \ref{s6.3} states our results on the even monodromy group $\Gamma^{(0)}$ in the rank 3 cases. The results are detailed except for the local minima $\uuuu{x}\in\Z^3_{\geq 3}$ with $r(\uuuu{x})\leq 0$, where we only state $\Gamma^{(0)}\cong \Gamma^{(0)}_s\cong G^{fCox,3}$ and $\Gamma^{(0)}_u=\{\id\}$. It is followed by Theorem \ref{t6.14}, which gives the set $\Delta^{(0)}$ of even vanishing cycles in many, but not all cases. Especially in the cases of the local minima $\uuuu{x}\in\Z^3_{\geq 3}$ with $r(\uuuu{x})\leq 0$ we know little and only state $\Delta^{(0)}=R^{(0)}$ in the case $(3,3,3)$, but $\Delta^{(0)}\subsetneqq R^{(0)}$ in the four cases $(3,3,4)$, $(4,4,4)$, $(5,5,5)$ and $(4,4,8)$. The result $\Delta^{(0)}=R^{(0)}$ in the case $(3,3,3)$ seems to be new. Its proof is rather laborious.
Section \ref{s6.4} treats the odd monodromy group $\Gamma^{(1)}$ and the set of odd vanishing cycles $\Delta^{(1)}$ in the rank 3 cases. The long Theorem \ref{t6.18} fixes the results on $\Gamma^{(1)}$. The even longer Theorem \ref{t6.21} fixes the results on $\Delta^{(1)}$. Also their proofs are long. They are preceded by two technical lemmas, the second of which helps to control $\Gamma^{(1)}_u$. Similarly to the even rank 3 cases, in the case of a local minimum $\uuuu{x}\in\Z^3_{\geq 3}$ with $r(\uuuu{x})\leq 0$ we have $\Gamma^{(1)}\cong \Gamma^{(1)}_s\cong G^{free,3}$ and $\Gamma^{(1)}_u=\{\id\}$. In the same case, interestingly, the map $\Delta^{(1)} \to \oooo{H_\Z}^{(1)}$ is injective. This leads in this case to the problem of how to recover an odd vanishing cycle from its image in $\oooo{H_\Z}^{(1)}$.
One solution is offered in the most important case $\uuuu{x}=(3,3,3)$ in Lemma \ref{t6.26}.
One general application of the Theorems \ref{t6.18} and \ref{t6.21} is given in Corollary \ref{t6.23}. It allows us to separate many of the orbits of the bigger group $(G^{phi}\ltimes \www{G}^{sign}) \rtimes\langle\gamma\rangle$ which acts on $T^{uni}_3(\Z)$ and $\Z^3$ in Lemma \ref{t4.18}.
\section{Basic observations}\label{s6.1}
Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\in\N$ with a triangular basis $\uuuu{e}$. Definition \ref{t2.8} gave two monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ and two sets $\Delta^{(0)}$ and $\Delta^{(1)}$ of vanishing cycles. Later in this chapter they will be studied rather systematically in essentially all cases with $n=2$ or $n=3$. For that we need some notation and basic facts, which are collected here. Everything in this section is well known. Most of it is stated in the even case in [Eb84] and in the odd case in [Ja83].
\begin{definition}\label{t6.1}
Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\in\N$. In the following $k\in\{0;1\}$. Denote \index{dual lattice}\index{$H_\Z^\sharp=\Hom(H_\Z,\Z)$} \index{$\oooo{H_\Z}^{(k)},\ \oooo{H_\Z}^{(k),\sharp}$} \index{$\OO^{(k),Rad},\ \OO^{(k),Rad}_u$}
\begin{eqnarray*}
O^{(k)}&:=& \Aut(H_\Z,I^{(k)})\quad\textup{the group of automorphisms of }H_\Z\\
&& \textup{ which respect }I^{(k)}.\\
H_\Z^\sharp &:=&\Hom(H_\Z,\Z)\quad\textup{the dual lattice}.\\
j^{(k)}&:&H_\Z\to H_\Z^\sharp,\quad a\mapsto(b\mapsto I^{(k)}(a,b)).\\
t^{(k)}&:&O^{(k)}\to \Aut(H_\Z^\sharp),\quad g\mapsto (l\mapsto l\circ g^{-1}).
\end{eqnarray*} \begin{eqnarray*} \oooo{H_\Z}^{(k)}&:=& H_\Z/\Rad I^{(k)},\quad \oooo{H_\R}^{(k)}:=H_\R/\Rad_\R I^{(k)}.\\ \pr^{H,(k)}&=&\oooo{(.)}^{(k)}:H_\Z\to \oooo{H_\Z}^{(k)},\quad a\mapsto \oooo{a}^{(k)},\quad\textup{the projection}.\\ \oooo{I}^{(k)}&:&\oooo{H_\Z}^{(k)}\times \oooo{H_\Z}^{(k)}\to\Z \quad\textup{the bilinear form on }\oooo{H_\Z}^{(k)}\\ && \textup{ which is induced by }I^{(k)},\\ \oooo{H_\Z}^{(k),\sharp}&:=& \Hom(\oooo{H_\Z}^{(k)},\Z) \quad\textup{the dual lattice}.\\ O^{(k),Rad}&:=&\{g\in O^{(k)}\,|\, g|_{\Rad I^{(k)}}=\id\}.\\ \pr^{A,(k)}&=&\oooo{(.)}:O^{(k),Rad}\to \Aut(\oooo{H_\Z}^{(k)},\oooo{I}^{(k)}),\quad g\mapsto\oooo{g},\\ &&\textup{the natural map to the set of induced automorphisms}. \end{eqnarray*} For any subgroup $G^{(k)}\subset O^{(k),Rad}$ define \begin{eqnarray*} G^{(k)}_s&:=& \pr^{A,(k)}(G^{(k)})\subset \Aut(\oooo{H_\Z}^{(k)},\oooo{I}^{(k)}),\\ G^{(k)}_u&:=& \ker(\pr^{A,(k)}:G^{(k)}\to \Aut(\oooo{H_\Z}^{(k)},\oooo{I}^{(k)})). \end{eqnarray*} $G^{(k)}_s$ is called the {\it simple part} of $G^{(k)}$, and $G^{(k)}_u$ is called the {\it unipotent part} of $G^{(k)}$. \index{simple part}\index{unipotent part} \end{definition} \begin{lemma}\label{t6.2} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\in\N$ with a triangular basis $\uuuu{e}$. (a) The map $t^{(k)}:O^{(k)}\to\Aut(H_\Z^\sharp)$ is a group homomorphism. For $g\in O^{(k)}$, $t^{(k)}(g)$ maps $j^{(k)}(H_\Z)\subset H_\Z^\sharp$ to itself, so it induces an automorphism $\tau^{(k)}(g)\in\Aut(H_\Z^\sharp/j^{(k)}(H_\Z))$. The map \index{$\tau^{(k)}$} \begin{eqnarray*} \tau^{(k)}:O^{(k)}\to\Aut (H_\Z^\sharp/j^{(k)}(H_\Z)) \end{eqnarray*} is a group homomorphism. (b) For $a\in R^{(0)}$ if $k=0$ and for $a\in H_\Z$ if $k=1$ the reflection or transvection $s^{(k)}_a\in O^{(k)}$ is in $\ker\tau^{(k)}$. Therefore $\Gamma^{(k)}\subset \ker \tau^{(k)}$. (c) $\ker\tau^{(k)}\subset O^{(k),Rad}$. (d) The horizontal lines of the following diagram are exact sequences. 
\begin{eqnarray*}
\begin{array}{ccccccccc}
\{\id\} & \to & \Gamma^{(k)}_u & \to & \Gamma^{(k)} & \to & \Gamma^{(k)}_s & \to & \{\id\} \\
\| & & \cap & & \cap & & \cap & & \| \\
\{\id\} & \to & (\ker \tau^{(k)})_u & \to & \ker \tau^{(k)} & \to & (\ker \tau^{(k)})_s & \to & \{\id\} \\
\| & & \cap & & \cap & & \cap & & \| \\
\{\id\} & \to & O^{(k),Rad}_u & \to & O^{(k),Rad} & \to & O^{(k),Rad}_s & \to & \{\id\}
\end{array}
\end{eqnarray*}
The second and third exact sequences split non-canonically.
\begin{eqnarray*}
O^{(k),Rad}_s=\Aut(\oooo{H_\Z}^{(k)},\oooo{I}^{(k)}).
\end{eqnarray*}
(e) The map \index{$T:\oooo{H_\Z}^{(k),\sharp}\otimes \Rad I^{(k)} \to O^{(k),Rad}_u$}
\begin{eqnarray*}
T:\oooo{H_\Z}^{(k),\sharp}\otimes \Rad I^{(k)} &\to& O^{(k),Rad}_u,\\
\sum_{i\in I}l_i\otimes r_i&\mapsto& \bigl(a\mapsto a+\sum_{i\in I}l_i(\oooo{a}^{(k)})r_i\bigr),\\
\textup{ shorter: }h&\mapsto& \bigl(a\mapsto a+h(\oooo{a}^{(k)})\bigr),
\end{eqnarray*}
is an isomorphism between abelian groups with
\begin{eqnarray*}
T(h_1+h_2)=T(h_1)\circ T(h_2), \quad T(h)^{-1}=T(-h).
\end{eqnarray*}
It restricts to an isomorphism
\begin{eqnarray*}
T:\oooo{j}^{(k)}(\oooo{H_\Z}^{(k)})\otimes \Rad I^{(k)}\to (\ker \tau^{(k)})_u,
\end{eqnarray*}
where $\oooo{j}^{(k)}:\oooo{H_\Z}^{(k)}\to \oooo{H_\Z}^{(k),\sharp}$ is the map
\begin{eqnarray*}
a\mapsto \bigl(b\mapsto \oooo{I}^{(k)}(a,b)\bigr)\quad \textup{for an arbitrary }b\in \oooo{H_\Z}^{(k)}.
\end{eqnarray*}
(f) For $g\in O^{(k),Rad}$, $a\in\oooo{H_\Z}^{(k)}$, $r\in\Rad I^{(k)}$
\begin{eqnarray*}
g\circ T(\oooo{j}^{(k)}(a)\otimes r)\circ g^{-1} =T(\oooo{j}^{(k)}(\oooo{g}(a))\otimes r).
\end{eqnarray*}
(g) Analogously to $t^{(k)}$ and $\tau^{(k)}$ there are the group homomorphisms
\begin{eqnarray*}
\oooo{t}^{(k)}:O^{(k),Rad}_s=\Aut(\oooo{H_\Z}^{(k)}, \oooo{I}^{(k)})&\to& \Aut(\oooo{H_\Z}^{(k),\sharp}),\ g\mapsto (l\mapsto l\circ g^{-1}),\\
\textup{and}\quad \oooo{\tau}^{(k)}:O^{(k),Rad}_s&\to& \Aut(\oooo{H_\Z}^{(k),\sharp} /\oooo{j}^{(k)}(\oooo{H_\Z}^{(k)})).
\end{eqnarray*}
$\tau^{(k)}$ and $\oooo{\tau}^{(k)}$ satisfy
\begin{eqnarray*}
(\ker \tau^{(k)})_s =\ker \oooo{\tau}^{(k)}.
\end{eqnarray*}
\end{lemma}
{\bf Proof:} (a) The map $t^{(k)}$ is a group homomorphism because
\begin{eqnarray*}
t^{(k)}(g_1g_2)(l)=l\circ(g_1g_2)^{-1}=l\circ g_2^{-1}\circ g_1^{-1}=t^{(k)}(g_1)t^{(k)}(g_2)(l).
\end{eqnarray*}
$t^{(k)}(g)$ maps $j^{(k)}(H_\Z)$ to itself because
\begin{eqnarray*}
t^{(k)}(g)(j^{(k)}(a))&=&j^{(k)}(a)\circ g^{-1}= I^{(k)}(a,g^{-1}(.))\\
&=&I^{(k)}(g(a),(.))=j^{(k)}(g(a)).
\end{eqnarray*}
$\tau^{(k)}$ is a group homomorphism because $t^{(k)}$ is one.
(b) Choose $l\in H_\Z^\sharp$ and $b\in H_\Z$. Then
\begin{eqnarray*}
(t^{(k)}(s^{(k)}_a)(l)-l)(b)&=&l\circ (s^{(k)}_a)^{-1}(b)-l(b)\\
&=& l(b-(-1)^k I^{(k)}(a,b)a)-l(b)\\
&=& (-1)^{k+1}I^{(k)}(a,b)l(a)\\
&=&j^{(k)}((-1)^{k+1}l(a)a)(b),\\
\textup{so }t^{(k)}(s^{(k)}_a)(l)-l&=& j^{(k)}((-1)^{k+1}l(a)a)\in j^{(k)}(H_\Z),\\
\textup{so }\tau^{(k)}(s^{(k)}_a)&=&\id,
\end{eqnarray*}
so $s^{(k)}_a\in\ker \tau^{(k)}$.
(c) Let $g\in \ker\tau^{(k)}$ and let $r\in \Rad I^{(k)}$. Also $g^{-1}\in\ker \tau^{(k)}$. Choose $l\in H_\Z^\sharp$. Now $\tau^{(k)}(g^{-1})=\id$ implies
\begin{eqnarray*}
t^{(k)}(g^{-1})(l)-l = j^{(k)}(a)\quad\textup{for some } a\in H_\Z,\\
0=I^{(k)}(a,r)=j^{(k)}(a)(r)= \bigl(t^{(k)}(g^{-1})(l)-l\bigr)(r) =l((g-\id)(r)).
\end{eqnarray*}
Because $l$ is arbitrary, $(g-\id)(r)=0$, so $g(r)=r$, so $g\in O^{(k),Rad}$.
(d) The exact sequences are obvious.
Choose an arbitrary splitting of $H_\Z$ as $\Z$-module into $\Rad I^{(k)}$ and a suitably chosen $\Z$-module $\www{H_\Z}^{(k)}$, \begin{eqnarray*} H_\Z=\Rad I^{(k)}\oplus \www{H_\Z}^{(k)}. \end{eqnarray*} The projection $\pr^{H,(k)}:H_\Z\to \oooo{H_\Z}^{(k)}$ restricts to an isomorphism \begin{eqnarray*} \pr^{H,(k)}:(\www{H_\Z}^{(k)},I^{(k)}|_{\www{H_\Z}^{(k)}}) \to (\oooo{H_\Z}^{(k)},\oooo{I}^{(k)}). \end{eqnarray*} Via this isomorphism, any element of $\Aut(\oooo{H_\Z}^{(k)},\oooo{I}^{(k)})$ lifts to an element of $O^{(k),Rad}$. This shows $O^{(k),Rad}_s=\Aut(\oooo{H_\Z}^{(k)}, \oooo{I}^{(k)})$, and it gives a non-canonical splitting of the third exact sequence. The end of the proof of part (g) will show that this splitting restricts to a non-canonical splitting of the second exact sequence. (e) The fact $\oooo{r}^{(k)}=0$ for $r\in \Rad I^{(k)}$ easily implies that $T$ is a group homomorphism with $T(h_1+h_2)=T(h_1)T(h_2)$ and with image in $O^{(k),Rad}_u$. Consider $g\in O^{(k),Rad}_u$. Then $g|_{\Rad I^{(k)}}=\id$ and $(g-\id)(a)\in\Rad I^{(k)}$ for any $a\in H_\Z$. If $b\in a+\Rad I^{(k)}$ then $(g-\id)(a-b)=0$, so $(g-\id)(b)=(g-\id)(a)$. Thus there is an element $h\in \oooo{H_\Z}^{(k),\sharp}\otimes \Rad I^{(k)}$ with $h(\oooo{a}^{(k)})=(g-\id)(a)$ for any $a\in H_\Z$, so $T(h)(a)=a+h(\oooo{a}^{(k)})=g(a)$, so $T(h)=g$. Choose a $\Z$-basis $r_1,...,r_m$ of $\Rad I^{(k)}$ and linear forms $l_1,...,l_m\in H_\Z^\sharp$ with $l_i(r_j)=\delta_{ij}$. Then any $r\in \Rad I^{(k)}$ satisfies $r=\sum_{i=1}^ml_i(r)r_i$. Consider $h\in \oooo{H_\Z}^{(k),\sharp}\otimes \Rad I^{(k)}$ with $T(h)\in (\ker\tau^{(k)})_u$. Then \begin{eqnarray*} t^{(k)}(T(h))(l_i)-l_i &=&j^{(k)}(a_i)\quad \textup{for some }a_i\in H_\Z, \textup{ and also}\\ t^{(k)}(T(h))(l_i)-l_i &=& l_i\circ T(h)^{-1}-l_i = l_i\circ T(-h)-l_i\\ &=& l_i\circ (\id -h(\oooo{(.)}^{(k)}))-l_i =-l_i(h(\oooo{(.)}^{(k)})). 
\end{eqnarray*}
For $b\in H_\Z$, $h(\oooo{b}^{(k)})\in\Rad I^{(k)}$, so
\begin{eqnarray*}
h(\oooo{b}^{(k)})&=& \sum_{i=1}^m l_i(h(\oooo{b}^{(k)}))r_i =-\sum_{i=1}^m j^{(k)}(a_i)(b)r_i,\\
\textup{so }h&\in& \oooo{j}^{(k)}(\oooo{H_\Z}^{(k)}) \otimes \Rad I^{(k)}.
\end{eqnarray*}
Going backwards through these arguments, one sees that any $h\in \oooo{j}^{(k)}(\oooo{H_\Z}^{(k)})\otimes \Rad I^{(k)}$ satisfies $T(h)\in (\ker\tau^{(k)})_u$.
(f) For $b\in H_\Z$
\begin{eqnarray*}
(g\circ T(\oooo{j}^{(k)}(a)\otimes r)\circ g^{-1})(b) &=& g(g^{-1}(b) + \oooo{I}^{(k)}(a,\oooo{g^{-1}(b)}^{(k)})r)\\
&=& b+\oooo{I}^{(k)}(\oooo{g}(a),\oooo{b}^{(k)})r\\
&=& T(\oooo{j}^{(k)}(\oooo{g}(a))\otimes r)(b).
\end{eqnarray*}
(g) The projection $\pr^{H,(k)}=\oooo{()}^{(k)}:H_\Z\to \oooo{H_\Z}^{(k)}$ induces the embedding
\begin{eqnarray*}
i^{(k)}:\oooo{H_\Z}^{(k),\sharp}&\hookrightarrow& H_\Z^\sharp,\ l\mapsto l\circ \pr^{H,(k)},\\
\textup{with}\qquad\Imm(i^{(k)}) &=&\{l\in H_\Z^\sharp\,|\, l|_{\Rad I^{(k)}}=0\}\\
\textup{and}\qquad j^{(k)}(H_\Z)&=&i^{(k)}(\oooo{j}^{(k)}(\oooo{H_\Z}^{(k)}))\subset \Imm (i^{(k)}).
\end{eqnarray*}
The three lattices
\begin{eqnarray*}
H_\Z^{\sharp}\supset \Imm(i^{(k)})\supset j^{(k)}(H_\Z)
\end{eqnarray*}
have ranks $n$, $n-\rk \Rad I^{(k)}$, $n-\rk \Rad I^{(k)}$ and are for each $g\in O^{(k),Rad}$ invariant under the map $t^{(k)}(g)=(l\mapsto l\circ g^{-1})$. This map acts trivially on the quotient $H_\Z^\sharp /\Imm(i^{(k)})$. It acts trivially on the quotient $\Imm(i^{(k)})/j^{(k)}(H_\Z)$ if and only if $\oooo{g}^{(k)}\in\ker \oooo{\tau}^{(k)}$. It acts trivially on the quotient $H_\Z^\sharp/j^{(k)}(H_\Z)$ if and only if $g\in \ker\tau^{(k)}$. Therefore $(\ker\tau^{(k)})_s\subset\ker\oooo{\tau}^{(k)}$. It remains to find for each $\www{g}\in \ker\oooo{\tau}^{(k)}$ an element $g\in \ker\tau^{(k)}$ with $\oooo{g}^{(k)}=\www{g}$. Choose a $\Z$-basis $r_1,...,r_n$ of $H_\Z$ such that $r_1,...,r_m$ (with $m=\rk\Rad I^{(k)}$) is a $\Z$-basis of $\Rad I^{(k)}$.
Then $H_\Z=\Rad I^{(k)}\oplus \www{H_\Z}^{(k)}$ with $\www{H_\Z}^{(k)}=\bigoplus_{j=m+1}^n\Z \cdot r_j$ is a splitting of $H_\Z$ with $\www{H_\Z}^{(k)}\cong\oooo{H_\Z}^{(k)}$. Consider the dual $\Z$-basis $l_1,...,l_n$ of $H_\Z^\sharp$ with $l_i(r_j)=\delta_{ij}$. Then $\Imm(i^{(k)}) =\bigoplus_{j=m+1}^n \Z\cdot l_j \supset j^{(k)}(H_\Z)$. An element $\www{g}\in O^{(k),Rad}_s$ has a unique lift to an element $g\in O^{(k),Rad}$ with $g(\www{H_\Z}^{(k)})=\www{H_\Z}^{(k)}$. This splitting of the third exact sequence in part (d) was used already in the proof of part (d). We claim that $g\in \ker\tau^{(k)}$ if $\www{g}\in \ker\oooo{\tau}^{(k)}$. We have
\begin{eqnarray*}
l_j-l_j\circ g^{-1}&\in& j^{(k)}(H_\Z)\quad\textup{ for } j\in\{m+1,...,n\}\\
&& \textup{ because of } \www{g}\in \ker\oooo{\tau}^{(k)},\\
l_i-l_i\circ g^{-1}&=&0\quad\textup{ for }i\in\{1,...,m\}.
\end{eqnarray*}
This shows the claim. Therefore $(\ker\tau^{(k)})_s =\ker\oooo{\tau}^{(k)}$. The claim also shows that the non-canonical splitting of the third exact sequence in part (d) restricts to a non-canonical splitting of the second exact sequence in part (d). \hfill$\Box$
\begin{remarks}\label{t6.3}
(i) The exact sequence $\{\id\}\to \Gamma^{(k)}_u\to\Gamma^{(k)}\to \Gamma^{(k)}_s\to\{\id\}$ splits sometimes, sometimes not. When it splits and when $\Gamma^{(k)}_u$, $\Gamma^{(k)}_s$ and the splitting are known, then also $\Gamma^{(k)}$ is known.
(ii) Suppose that one has a presentation of $\Gamma^{(k)}_s$, namely an isomorphism
\begin{eqnarray*}
\Gamma^{(k)}_s&\stackrel{\cong}{\longrightarrow}& \langle g_1,...,g_n\,|\, w_1(g_1,...,g_n), ...,w_m(g_1,...,g_n)\rangle,\\
\oooo{s^{(k)}_{e_i}}&\mapsto& g_i,
\end{eqnarray*}
where $w_1(g_1,...,g_n),...,w_m(g_1,...,g_n)$ are certain words in $g_1^{\pm 1},...,g_n^{\pm 1}$. Then the group $\Gamma^{(k)}_u$ is the normal subgroup of $\Gamma^{(k)}$ generated by the elements $w_1(s^{(k)}_{e_1},...,s^{(k)}_{e_n})$,..., $w_m(s^{(k)}_{e_1},...,s^{(k)}_{e_n})$.
In many of the cases with $n=2$ or $n=3$ we have such a presentation. (iii) The symmetric bilinear form $\oooo{I}^{(0)}$ on $\oooo{H_\Z}^{(0)}$ is nondegenerate. It is well known that for any $g\in\Aut(\oooo{H_\R}^{(0)},\oooo{I}^{(0)})$ there exist $m\in\N$ and elements $a_1,...,a_m\in\oooo{H_\R}^{(0)}$ with $\oooo{I}^{(0)}(a_i,a_i)\in\R^*$ and $g=\oooo{s}^{(0)}_{a_1}...\oooo{s}^{(0)}_{a_m}$, and that the sign \begin{eqnarray*} \oooo{\sigma}(g):=\prod_{i=1}^m\sign(\oooo{I}^{(0)}(a_i,a_i)) \in\{\pm 1\} \end{eqnarray*} is independent of $m$ and $a_1,...,a_m$. This sign $\oooo{\sigma}(g)\in\{\pm 1\}$ is the {\it spinor norm} of $g$. The map $\oooo{\sigma}: \Aut(\oooo{H_\R}^{(0)},\oooo{I}^{(0)})\to\{\pm 1\}$ is obviously a group homomorphism. \index{spinor norm} \end{remarks} \begin{definition}\label{t6.4} Keep the situation of Definition \ref{t6.1}. Define the {\it spinor norm homomorphism} \begin{eqnarray*} \sigma: O^{(0),Rad}\to \{\pm 1\},\quad \sigma(g):= \oooo{\sigma}(\oooo{g}). \end{eqnarray*} Define the subgroup $O^{(k),*}$ of $O^{(k),Rad}$ \begin{eqnarray*} O^{(k),*}&:=& \left\{\begin{array}{ll} \ker\tau^{(1)}& \textup{if }k=1,\\ \ker\tau^{(0)}\cap\ker\sigma & \textup{if }k=0. \end{array}\right. \end{eqnarray*} \end{definition} \begin{remarks}\label{t6.5} (i) For $a\in R^{(0)}$, of course $\sigma(s^{(0)}_a)=1$. Therefore $\Gamma^{(0)}\subset\ker\sigma$. Thus \begin{eqnarray*} \Gamma^{(k)}\subset O^{(k),*}\quad\textup{for }k\in\{0;1\}. \end{eqnarray*} (ii) For $g\in (\ker\tau^{(0)})_u$, $\sigma(g)=1$ because $\oooo{g}=\id$. Therefore \begin{eqnarray*} O^{(k),*}_u=(\ker\tau^{(k)})_u\quad\textup{for }k\in\{0;1\}. \end{eqnarray*} \end{remarks} \begin{remarks}\label{t6.6} Finally we make some comments on the sets of vanishing cycles in a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis. (i) Let $H_\Z^{prim}$ denote the set of primitive vectors in $H_\Z$, i.e.
vectors $a\in H_\Z-\{0\}$ with $\Z a=\Q a\cap H_\Z$, and analogously $\oooo{H_\Z}^{(k),prim}$. Then \begin{eqnarray*} \Delta^{(0)}\subset R^{(0)}\subset H_\Z^{prim},\quad \Delta^{(1)}\subset H_\Z^{prim},\\ \oooo{\Delta}^{(0)}\subset \oooo{R}^{(0)}\subset \oooo{H_\Z}^{(0),prim},\quad\textup{where } \oooo{\Delta}^{(k)}:=\pr^{H,(k)}(\Delta^{(k)}). \end{eqnarray*} Here $R^{(0)}\subset H_\Z^{prim}$ and $\oooo{R}^{(0)}\subset\oooo{H_\Z}^{(0),prim}$ because of $2=I^{(0)}(a,a)=\oooo{I}^{(0)}(\oooo{a}^{(0)},\oooo{a}^{(0)})$ for $a\in R^{(0)}$. Furthermore \begin{eqnarray*} \oooo{e_i}^{(1)}\in \oooo{H_\Z}^{(1),prim}&\iff& \Gamma^{(1)}_s\{\oooo{e_i}^{(1)}\}\subset \oooo{H_\Z}^{(1),prim}. \end{eqnarray*} Whether $\oooo{e_i}^{(1)}\in \oooo{H_\Z}^{(1),prim}$ holds depends on the situation. The inclusion $\oooo{\Delta}^{(1)}\subset\oooo{H_\Z}^{(1),prim}$ may or may not hold. (ii) In general, an element $a\in H_\Z$ satisfies \begin{eqnarray*} \oooo{a}^{(k)}\in \oooo{H_\Z}^{(k),prim}&\iff& a+\Rad I^{(k)}\subset H_\Z^{prim}. \end{eqnarray*} (iii) The set $\oooo{\Delta}^{(k)}\subset \oooo{H_\Z}^{(k)}$ is often simpler to describe than the set $\Delta^{(k)}$. The control of $\oooo{\Delta}^{(k)}$ is a step towards the control of $\Delta^{(k)}$. (iv) Given $a\in\Delta^{(k)}$, it is interesting to understand the three sets \begin{eqnarray*} (a+\Rad I^{(k)})\cap\Delta^{(k)} \stackrel{(1)}{\supset} (a+\Rad I^{(k)})\cap \Gamma^{(k)}\{a\} \stackrel{(2)}{\supset} \Gamma^{(k)}_u\{a\}. \end{eqnarray*} The next lemma comments on the inclusions $\stackrel{(1)}{\supset}$ and $\stackrel{(2)}{\supset}$. \end{remarks} \begin{lemma}\label{t6.7} Keep the situation in Remark \ref{t6.6} (iv). (a) In $\stackrel{(1)}{\supset}$ equality holds if and only if the image in $\oooo{H_\Z}^{(k)}$ of any $\Gamma^{(k)}$ orbit different from $\Gamma^{(k)}\{a\}$ is different from $\Gamma^{(k)}_s\{\oooo{a}^{(k)}\}$.
(b) The following is an inclusion of groups, \begin{eqnarray*} \Stab_{\Gamma^{(k)}}(\oooo{a}^{(k)})\stackrel{(3)}{\supset} \Gamma^{(k)}_u\cdot \Stab_{\Gamma^{(k)}}(a). \end{eqnarray*} In $\stackrel{(2)}{\supset}$ equality holds if and only if in $\stackrel{(3)}{\supset}$ equality holds. \end{lemma} {\bf Proof:} Trivial. \hfill$\Box$ \section{The rank 2 cases}\label{s6.2} For $x\in\Z-\{0\}$ consider the matrix $S=S(x)=\begin{pmatrix}1&x\\0&1\end{pmatrix}\in T^{uni}_2(\Z)$, and consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,e_2)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. Recall the formulas and the results in section \ref{s5.2}, especially $M^{root}:H_\Z\to H_\Z$ and its eigenvalues $\kappa_{1/2}=\frac{-x}{2}\pm\frac{1}{2}\sqrt{x^2-4}$. We can restrict to $x<0$ because of $L((e_1,-e_2)^t,(e_1,-e_2))^t=\begin{pmatrix}1&-x\\0&1\end{pmatrix}$. So suppose $x<0$. First we consider the even cases. Then $\Gamma^{(0)}$ is a Coxeter group, and $\Gamma^{(0)}$ and $\Delta^{(0)}$ are well known. Still we want to document and derive the facts in our way. \begin{theorem}\label{t6.8} (a) We have \begin{eqnarray*} s^{(0)}_{e_i}\uuuu{e}&=&\uuuu{e}\cdot s_{e_i}^{(0),mat} \quad\textup{ with }\\ s^{(0),mat}_{e_1}&=&\begin{pmatrix}-1&-x\\0&1\end{pmatrix},\quad s^{(0),mat}_{e_2}=\begin{pmatrix}1&0\\-x&-1\end{pmatrix},\\ \Gamma^{(0)}&\cong& \Gamma^{(0),mat}:= \langle s^{(0),mat}_{e_1},s^{(0),mat}_{e_2}\rangle \subset GL_2(\Z),\\ R^{(0)}&=&\{y_1e_1+y_2e_2\in H_\Z\,|\, 1=y_1^2+xy_1y_2+y_2^2\}. \end{eqnarray*} (b) The case $x=-1$: $(H_\Z,I^{(0)})$ is the $A_2$ root lattice. 
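The matrix formulas and the description of $R^{(0)}$ in part (a) are easy to check by machine. The following Python sketch is purely illustrative (the function names are ours, not part of the text); it verifies that the two matrices are involutive isometries of $I^{(0)}$, whose Gram matrix with respect to $\uuuu{e}$ is $S+S^t$, and it searches for roots in a box.

```python
# Illustrative check of Theorem 6.8 (a): for sample x < 0 the matrices
# s_{e_i}^{(0),mat} are involutions preserving the Gram matrix
# G = S + S^t of I^{(0)}, and R^{(0)} is cut out by
# 1 = y1^2 + x*y1*y2 + y2^2.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def check_reflections(x):
    G = [[2, x], [x, 2]]            # Gram matrix of I^{(0)} w.r.t. (e1, e2)
    s1 = [[-1, -x], [0, 1]]         # s_{e_1}^{(0),mat}
    s2 = [[1, 0], [-x, -1]]         # s_{e_2}^{(0),mat}
    E2 = [[1, 0], [0, 1]]
    for s in (s1, s2):
        assert mat_mul(s, s) == E2                         # involution
        assert mat_mul(transpose(s), mat_mul(G, s)) == G   # isometry of I^{(0)}
    return True

def roots_in_box(x, N):
    # solutions of 1 = y1^2 + x*y1*y2 + y2^2 with |y_i| <= N
    return [(y1, y2) for y1 in range(-N, N + 1) for y2 in range(-N, N + 1)
            if y1 * y1 + x * y1 * y2 + y2 * y2 == 1]
```

For $x=-1$ the box search returns exactly the six roots of the $A_2$ root lattice listed in part (b).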
$\Gamma^{(0)}\cong D_6$ is a dihedral group with six elements, the identity, three reflections and two rotations, \index{dihedral group} \begin{eqnarray*} \Gamma^{(0)}&=& \langle\id,\ s^{(0)}_{e_1},\ s^{(0)}_{e_2}, \ s^{(0)}_{e_1}s^{(0)}_{e_2}s^{(0)}_{e_1}, \ s^{(0)}_{e_1}s^{(0)}_{e_2},\ s^{(0)}_{e_2}s^{(0)}_{e_1}\rangle \cong\Gamma^{(0),mat}\\ &=&\langle E_2,\begin{pmatrix}-1&1\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\1&-1\end{pmatrix}, \begin{pmatrix}0&-1\\-1&0\end{pmatrix}, \begin{pmatrix}0&-1\\1&-1\end{pmatrix}, \begin{pmatrix}-1&1\\-1&0\end{pmatrix}\rangle. \end{eqnarray*} The set $\Delta^{(0)}$ of vanishing cycles coincides with the set $R^{(0)}$ of roots and is $$\Delta^{(0)}=R^{(0)}=\{\pm e_1,\pm e_2,\pm(e_1+e_2)\}.$$ The following picture shows the action of $\Gamma^{(0)}$ on $\Delta^{(0)}$. One sees the action of $D_6$ on the vertices of a regular 6-gon. \begin{figure}[H] \includegraphics[width=1.0\textwidth]{pic-6-1.png} \caption[Figure 6.1]{A regular 6-gon, actions of $D_6$ and $D_{12}$} \label{Fig:6.1} \end{figure} One also sees the action of $D_{12}$ on the regular 6-gon. This fits with the following. \begin{eqnarray*} D_6\cong \Gamma^{(0)}=\ker\tau^{(0)}=O^{(0),*} \stackrel{1:2}{\subset}O^{(0)}\cong D_{12}. \end{eqnarray*} (c) The case $x=-2$: $\Gamma^{(0)}\cong G^{fCox,2}$ is a free Coxeter group with the two generators $s^{(0)}_{e_1}$ and $s^{(0)}_{e_2}$. Here \begin{eqnarray*} \Rad I^{(0)}=\Z f_1\quad\textup{with}\quad f_1=e_1+e_2. \end{eqnarray*} The set $\Delta^{(0)}$ of vanishing cycles coincides with the set $R^{(0)}$ of roots and is \begin{eqnarray*} \Delta^{(0)}=R^{(0)}&=&\{y_1e_1+y_2e_2\in H_\Z\,|\, 1=(y_1-y_2)^2\}\\ &=& (e_1+\Z f_1)\ \dot\cup\ (e_2+\Z f_1). \end{eqnarray*} It splits into the two disjoint orbits \begin{eqnarray*} \Gamma^{(0)}\{e_1\}=\Gamma^{(0)}\{-e_1\} &=& (e_1+\Z 2f_1)\ \dot\cup\ (-e_1+\Z 2f_1),\\ \Gamma^{(0)}\{e_2\}=\Gamma^{(0)}\{-e_2\} &=& (e_2+\Z 2f_1)\ \dot\cup\ (-e_2+\Z 2f_1).
\end{eqnarray*} $s^{(0)}_{e_1}$ acts on $\Delta^{(0)}$ by permuting vanishing cycles horizontally, so by adding $\pm 2e_1$. $s^{(0)}_{e_2}$ acts on $\Delta^{(0)}$ by permuting vanishing cycles vertically, so by adding $\pm 2e_2$, see the following formulas and Figure \ref{Fig:6.2}. For $\varepsilon\in\{\pm 1\}$ and $m\in\Z$ \begin{eqnarray*} s_{e_1}^{(0)}(\varepsilon e_1+mf_1)=-\varepsilon e_1+mf_1,\quad s_{e_1}^{(0)}(\varepsilon e_2+mf_1)=-\varepsilon e_2+(m+2\varepsilon)f_1,\\ s_{e_2}^{(0)}(\varepsilon e_1+mf_1)=-\varepsilon e_1+(m+2\varepsilon)f_1,\quad s_{e_2}^{(0)}(\varepsilon e_2+mf_1)=-\varepsilon e_2+mf_1. \end{eqnarray*} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{pic-6-2.png} \caption[Figure 6.2]{Even vanishing cycles in the case $S=\begin{pmatrix}1&-2\\0&1\end{pmatrix}$} \label{Fig:6.2} \end{figure} \noindent The matrix group $\Gamma^{(0),mat}$ is given by the following formulas for $m\in\Z$, \begin{eqnarray*} (s_{e_1}^{(0)}s_{e_2}^{(0)})^m(\uuuu{e}) &=& \uuuu{e}\begin{pmatrix}2m+1&-2m\\2m&-2m+1\end{pmatrix},\\ (s_{e_1}^{(0)}s_{e_2}^{(0)})^ms_{e_1}^{(0)}(\uuuu{e}) &=& \uuuu{e}\begin{pmatrix}-2m-1&2m+2\\-2m&2m+1\end{pmatrix}. \end{eqnarray*} (d) The cases $x\leq -3$: $\Gamma^{(0)}\cong G^{fCox,2}$ is a free Coxeter group with the two generators $s^{(0)}_{e_1}$ and $s^{(0)}_{e_2}$. The set $\Delta^{(0)}$ of vanishing cycles coincides with the set $R^{(0)}$ of roots. More information on $\Delta^{(0)}$: (i) Recall Lemma \ref{tc.1} (a). The map \begin{eqnarray*} u:\Delta^{(0)}&\to& \{\textup{units in }\Z[\kappa_1]\textup{ with norm }1\} = \{\pm \kappa_1^l\,|\, l\in\Z\}\\ y_1e_1+y_2e_2&\mapsto & y_1-\kappa_1 y_2, \end{eqnarray*} is well defined and a bijection with \begin{eqnarray*} s^{(0)}_{e_1}(u^{-1}(\varepsilon\kappa_1^l)) =u^{-1}(-\varepsilon\kappa_1^{-l}),\quad s^{(0)}_{e_2}(u^{-1}(\varepsilon\kappa_1^l)) =u^{-1}(-\varepsilon\kappa_1^{2-l}), \end{eqnarray*} for $\varepsilon\in\{\pm 1\}$, $l\in\Z$. 
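A necessary part of the unit description in (d)(i) can be tested by exact arithmetic in $\Z[\kappa_1]$, using the relation $\kappa_1^2=-x\kappa_1-1$. The following Python sketch (illustrative only; our names) represents $a+b\kappa_1$ as the pair $(a,b)$ and checks that the powers $\kappa_1^l$, $l\geq 0$, are units of norm $1$ corresponding under $u$ to roots.

```python
# Exact arithmetic in Z[kappa_1] with kappa_1^2 = -x*kappa_1 - 1.
# An element a + b*kappa_1 is stored as the pair (a, b).

def zk_mul(u, v, x):
    a, b = u
    c, d = v
    # (a + b*k)(c + d*k) = (ac - bd) + (ad + bc - x*bd)*k
    return (a * c - b * d, a * d + b * c - x * b * d)

def zk_norm(u, x):
    a, b = u
    # N(a + b*k) = (a + b*kappa_1)(a + b*kappa_2) = a^2 - x*a*b + b^2
    return a * a - x * a * b + b * b

def roots_from_powers(x, l_max):
    # kappa_1^l = a + b*kappa_1 corresponds under u to the root
    # (y1, y2) = (a, -b), since u(y1*e1 + y2*e2) = y1 - kappa_1*y2.
    k, out = (1, 0), []
    for l in range(l_max + 1):
        assert zk_norm(k, x) == 1                       # unit of norm 1
        a, b = k
        y1, y2 = a, -b
        assert y1 * y1 + x * y1 * y2 + y2 * y2 == 1     # a root in R^{(0)}
        out.append((y1, y2))
        k = zk_mul(k, (0, 1), x)                        # multiply by kappa_1
    return out
```

The norm identity $N(y_1-\kappa_1y_2)=y_1^2+xy_1y_2+y_2^2$ is exactly why the map $u$ lands in the norm-$1$ units.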
In particular, $\Delta^{(0)}$ splits into the two disjoint orbits $\Gamma^{(0)}\{e_1\}$ and $\Gamma^{(0)}\{e_2\}$. (ii) The matrix group $\Gamma^{(0),mat}$ is given by the following formulas for $m\in\Z$, \begin{eqnarray*} (s_{e_1}^{(0)}s_{e_2}^{(0)})^m(\uuuu{e}) &=& \uuuu{e}\begin{pmatrix}y_1&-y_2\\y_2&y_1+xy_2\end{pmatrix} =(u^{-1}(\kappa_1^{-2m}),u^{-1}(-\kappa_1^{-2m+1})),\\ &&\textup{where }y_1,y_2\textup{ are determined by } y_1-\kappa_1y_2=\kappa_1^{-2m},\\ (s_{e_1}^{(0)}s_{e_2}^{(0)})^ms_{e_1}^{(0)}(\uuuu{e}) &=& \uuuu{e}\begin{pmatrix}y_1&xy_1+y_2\\y_2&-y_1\end{pmatrix} =(u^{-1}(-\kappa_1^{-2m}),u^{-1}(\kappa_1^{-2m-1})),\\ &&\textup{where }y_1,y_2\textup{ are determined by } y_1-\kappa_1y_2=-\kappa_1^{-2m}. \end{eqnarray*} (iii) $\Delta^{(0)}\subset H_\R\cong\R^2$ is part of the hyperbola $\{y_1e_1+y_2e_2\in H_\R\,|\, (y_1-\kappa_1y_2)(y_1-\kappa_2y_2) =1\}$ with asymptotic lines $y_2=\kappa_2y_1$ and $y_2=\kappa_1y_1$. Both branches of this hyperbola are strictly monotonically increasing. The lower right branch is concave and contains the points $u^{-1}(\kappa_1^l)$ for $l\in\Z$, while the upper left branch is convex and contains the points $u^{-1}(-\kappa_1^l)$ for $l\in\Z$. The horizontal respectively vertical line through a vanishing cycle $a\in \Delta^{(0)}$ intersects the other branch of the hyperbola in $s^{(0)}_{e_1}(a)$ respectively $s^{(0)}_{e_2}(a)$. See Figure \ref{Fig:6.3}. \begin{figure}\includegraphics[width=1.0\textwidth]{pic-6-3.png} \caption[Figure 6.3]{Even vanishing cycles in the case $S=\begin{pmatrix}1&-3\\0&1\end{pmatrix}$} \label{Fig:6.3} \end{figure} (iv) Denote by $\www{s}^{(0)}_{e_i}$, $\www{\Gamma}^{(0)}$ and $\www{\Delta}^{(0)}$ the objects for $x=-2$ and as usual by $s^{(0)}_{e_i}$, $\Gamma^{(0)}$ and $\Delta^{(0)}$ the objects for an $x\leq -3$. The map $\www{s}^{(0)}_{e_i}\mapsto s^{(0)}_{e_i}$ extends to a group isomorphism $\www{\Gamma}^{(0)}\to\Gamma^{(0)}$.
The map \begin{eqnarray*} &&\www{\Delta}^{(0)}\to\Delta^{(0)},\\ &&\uuuu{e}\begin{pmatrix}1-l\\-l\end{pmatrix}\mapsto u^{-1}(\kappa_1^l),\quad \uuuu{e}\begin{pmatrix}l-1\\l\end{pmatrix}\mapsto u^{-1}(-\kappa_1^l)\quad \textup{ for }l\in\Z, \end{eqnarray*} is a bijection. The bijections $\www{\Gamma}^{(0)}\to\Gamma^{(0)}$ and $\www{\Delta}^{(0)}\to\Delta^{(0)}$ are compatible with the action of $\www{\Gamma}^{(0)}$ on $\www{\Delta}^{(0)}$ and of $\Gamma^{(0)}$ on $\Delta^{(0)}$. (e) More on the cases $x\leq -2$: The automorphism $g_{1,2}:H_\Z\to H_\Z$ with $g_{1,2}:e_1\leftrightarrow e_2$ is in $O^{(0)}$. The set $\{\pm \id,\pm g_{1,2}\}$ is a subgroup of $O^{(0)}$ with \begin{eqnarray*} \Gamma^{(0)}=\ker\tau^{(0)}=O^{(0),*} \stackrel{1:4}{\subset} O^{(0)}=\Gamma^{(0)}\rtimes \{\pm \id,\pm g_{1,2}\}. \end{eqnarray*} \end{theorem} {\bf Proof:} (a) Everything except possibly the shape of $R^{(0)}$ is obvious. \begin{eqnarray*} R^{(0)}&=&\{y_1e_1+y_2e_2\in H_\Z\,|\, 2=I^{(0)}(y_1e_1+y_2e_2,y_1e_1+y_2e_2)\}\\ &=&\{y_1e_1+y_2e_2\in H_\Z\,|\, 2=\begin{pmatrix}y_1&y_2\end{pmatrix} \begin{pmatrix}2&x\\x&2\end{pmatrix} \begin{pmatrix}y_1\\y_2\end{pmatrix}\}\\ &=&\{y_1e_1+y_2e_2\in H_\Z\,|\, 1=y_1^2+xy_1y_2+y_2^2\}. \end{eqnarray*} (b) This is classical and elementary. $R^{(0)}=\{\pm e_1,\pm e_2,\pm (e_1+e_2)\}$. The actions of $s^{(0)}_{e_1}$ and $s^{(0)}_{e_2}$ on this set extend to the action of the dihedral group $D_6$ on the vertices of a regular 6-gon. Therefore $\Delta^{(0)}=R^{(0)}$ and $\Gamma^{(0)}\cong D_6$. $O^{(0)}\cong D_{12}$ is obvious as the vanishing cycles form the vertices of a regular 6-gon in $(H_\Z,I^{(0)})$. It remains to show for some element $g\in O^{(0)}-\Gamma^{(0)}$ $g\notin\ker\tau^{(0)}$. Consider the reflection $g\in O^{(0)}$ with $g(\uuuu{e})=(e_1,-e_1-e_2)$ and the linear form $l:H_\Z\to\Z$ with $l(\uuuu{e})=(1,0)$. 
Then \begin{eqnarray*} t^{(0)}(g)(l)=l\circ g^{-1},\quad (l\circ g^{-1})(\uuuu{e})=(l(e_1),l(-e_1-e_2))=(1,-1),\\ (l-l\circ g^{-1})(\uuuu{e})=(0,1),\\ l-l\circ g^{-1}\notin j^{(0)}(H_\Z)= \langle (\uuuu{e}\mapsto (2,-1)),(\uuuu{e}\mapsto (-1,2))\rangle, \end{eqnarray*} so $g\notin\ker\tau^{(0)}$. (c) and (d) The group $\Gamma^{(0)}$ for $x\leq -2$: Recall the Remarks and Notations \ref{ta.1}. The matrices $s_{e_1}^{(0),mat}=\begin{pmatrix}-1&-x\\0&1\end{pmatrix}$ and $s_{e_2}^{(0),mat}=\begin{pmatrix}1&0\\-x&-1\end{pmatrix}$ have the eigenvectors $\begin{pmatrix}1\\0\end{pmatrix}$ respectively $\begin{pmatrix}0\\1\end{pmatrix}$ with eigenvalue $-1$ and the eigenvectors $\begin{pmatrix}-x/2\\1\end{pmatrix}$ respectively $\begin{pmatrix}-2/x\\1\end{pmatrix}$ with eigenvalue $1$. Therefore $\mu(s_{e_1}^{(0),mat})$ and $\mu(s_{e_2}^{(0),mat}) \in\Isom(\H)$ are reflections along the hyperbolic lines $A(\infty,-\frac{x}{2})$ respectively $A(0,-\frac{2}{x})$. As $x\leq -2$, we have $-\frac{2}{x}\leq -\frac{x}{2}$, so $A(0,-\frac{2}{x})\cap A(-\frac{x}{2},\infty)=\emptyset$, see the pictures in Figure \ref{Fig:6.4}. \begin{figure}[H] \includegraphics[width=0.9\textwidth]{pic-6-4.png} \caption[Figure 6.4]{Fundamental domain in $\H$ of $\Gamma^{(0),mat}$ for $x=-2$ and $x\leq -3$} \label{Fig:6.4} \end{figure} Theorem \ref{ta.2} (a) applies and shows that $\langle \mu(s_{e_1}^{(0),mat}),\mu(s_{e_2}^{(0),mat})\rangle \subset \Isom(\H)$ is a free Coxeter group with the two given generators. Therefore $\Gamma^{(0)}$ is also a free Coxeter group with the two generators $s_{e_1}^{(0)}$ and $s_{e_2}^{(0)}$. (c) The set $\Delta^{(0)}$ for $x=-2$: \begin{eqnarray*} R^{(0)}&=& \{y_1e_1+y_2e_2\in H_\Z\,|\, 1=(y_1-y_2)^2\}\\ &=& (e_1+\Z f_1)\ \dot\cup\ (e_2+\Z f_1).
\end{eqnarray*} For $m\in\Z$, $\varepsilon\in\{\pm 1\}$, \begin{eqnarray*} s_{e_1}^{(0)}(\varepsilon e_1+mf_1)&=& -\varepsilon e_1+mf_1,\\ s_{e_2}^{(0)}(\varepsilon e_2+mf_1)&=& -\varepsilon e_2+mf_1,\\ s_{e_1}^{(0)}s_{e_2}^{(0)}(e_1+mf_1,e_2+mf_1) &=&(e_1+(m+2)f_1,e_2+(m-2)f_1). \end{eqnarray*} This shows all claims on $\Delta^{(0)}$ in part (c). (d) (i) Recall $\kappa_1+\kappa_2=-x$, $\kappa_1\kappa_2=1$, $0=\kappa_i^2+x\kappa_i+1$, $\kappa_2=\kappa_1^{-1}=-\kappa_1-x$, $\kappa_1=-\kappa_2-x$. Because of $R^{(0)}=\{y_1e_1+y_2e_2\in H_\Z\,|\, 1=y_1^2+xy_1y_2+y_2^2\}$ the map \begin{eqnarray*} u:R^{(0)}&\to&\{\textup{the units with norm }1\textup{ in } \Z[\kappa_1]\}\\ y_1e_1+y_2e_2&\mapsto& y_1-\kappa_1y_2 \end{eqnarray*} is well defined and a bijection. Because of Lemma \ref{tc.1} (a) \begin{eqnarray*} \{\textup{the units with norm }1\textup{ in }\Z[\kappa_1]\} =\{\pm\kappa_1^l\,|\, l\in\Z\}. \end{eqnarray*} Now \begin{eqnarray*} s_{e_1}^{(0)}(u^{-1}(\varepsilon \kappa_1^l)) =u^{-1}(-\varepsilon\kappa_1^{-l})\quad\textup{and}\quad s_{e_2}^{(0)}(u^{-1}(\varepsilon \kappa_1^l)) =u^{-1}(-\varepsilon\kappa_1^{2-l}) \end{eqnarray*} follow from \begin{eqnarray*} s_{e_1}^{(0)}\uuuu{e}\begin{pmatrix}y_1\\y_2\end{pmatrix} =\uuuu{e}\begin{pmatrix}-1&-x\\0&1\end{pmatrix} \begin{pmatrix}y_1\\y_2\end{pmatrix} =\uuuu{e}\begin{pmatrix}-y_1-xy_2\\y_2\end{pmatrix},\\ (-y_1-xy_2)-\kappa_1y_2 = -(y_1-\kappa_2y_2) =-(y_1-\kappa_1y_2)^{-1},\\ s_{e_2}^{(0)}\uuuu{e}\begin{pmatrix}y_1\\y_2\end{pmatrix} =\uuuu{e}\begin{pmatrix}1&0\\-x&-1\end{pmatrix} \begin{pmatrix}y_1\\y_2\end{pmatrix} =\uuuu{e}\begin{pmatrix}y_1\\-xy_1-y_2\end{pmatrix},\\ y_1-\kappa_1(-xy_1-y_2) = \kappa_1((\kappa_2+x)y_1+y_2) =\kappa_1(-\kappa_1y_1+y_2)\\ =-\kappa_1^2(y_1-\kappa_2y_2) =-\kappa_1^2(y_1-\kappa_1y_2)^{-1}. \end{eqnarray*} This shows $\Delta^{(0)}=R^{(0)}$ and that $\Delta^{(0)}$ splits into the two disjoint orbits $\Gamma^{(0)}\{e_1\}$ and $\Gamma^{(0)}\{e_2\}$. 
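The reflection formulas just proved can also be machine-checked by exact arithmetic in $\Z[\kappa_1]$. The following illustrative Python sketch (our names; a check under the stated relation $\kappa_1^2=-x\kappa_1-1$) verifies $s_{e_1}^{(0)}(u^{-1}(\varepsilon\kappa_1^l))=u^{-1}(-\varepsilon\kappa_1^{-l})$ and $s_{e_2}^{(0)}(u^{-1}(\varepsilon\kappa_1^l))=u^{-1}(-\varepsilon\kappa_1^{2-l})$ for sample values of $x$ and $l$.

```python
# Pairs (a, b) stand for a + b*kappa_1, with kappa_1^2 = -x*kappa_1 - 1.

def zk_mul(u, v, x):
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c - x * b * d)

def zk_pow(l, x):
    # kappa_1^l for l in Z; kappa_1^{-1} = -kappa_1 - x, i.e. the pair (-x, -1)
    base = (0, 1) if l >= 0 else (-x, -1)
    k = (1, 0)
    for _ in range(abs(l)):
        k = zk_mul(k, base, x)
    return k

def reflect_check(x, l_range):
    for l in l_range:
        for eps in (1, -1):
            a, b = zk_pow(l, x)
            y1, y2 = eps * a, -eps * b        # coordinates of u^{-1}(eps*k^l)
            s1_img = (-y1 - x * y2, y2)       # s_{e_1}^{(0)} on coordinates
            s2_img = (y1, -x * y1 - y2)       # s_{e_2}^{(0)} on coordinates
            for img, t in ((s1_img, -l), (s2_img, 2 - l)):
                c, d = zk_pow(t, x)
                # u^{-1}(-eps*kappa_1^t) has coordinates (-eps*c, eps*d)
                assert img == (-eps * c, eps * d)
    return True
```

The coordinate formulas for $s_{e_1}^{(0)}$ and $s_{e_2}^{(0)}$ used here are read off from the matrices in Theorem \ref{t6.8} (a).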
(ii) The formulas in part (i) show immediately $(s_{e_1}^{(0)}s_{e_2}^{(0)})^m(u^{-1}(\varepsilon\kappa_1^l)) =u^{-1}(\varepsilon\kappa_1^{l-2m})$ for $l,m\in\Z$, $\varepsilon\in\{\pm 1\}$. Together with $u(e_1)=1$ and $u(e_2)=-\kappa_1$ this implies the formulas in part (ii). (iv) This follows from the formulas for the action of $\www{s}_{e_i}^{(0)}$ on $\www{R}^{(0)}$ and of $s_{e_i}^{(0)}$ on $R^{(0)}$. (iii) First we consider the lower right branch of the hyperbola. There $y_1-\kappa_1 y_2>0$ and $y_1-\kappa_2y_2>0$. We consider $y_2$ as an implicit function of $y_1$. The equation \begin{eqnarray*} 1=y_1^2+xy_1y_2+y_2^2=(y_1-\kappa_1y_2)(y_1-\kappa_2y_2) \end{eqnarray*} implies \begin{eqnarray*} 0&=&(y_1-\kappa_1y_2)(1-\kappa_2y_2') +(y_1-\kappa_2y_2)(1-\kappa_1y_2')\\ &=& [(y_1-\kappa_1y_2)+(y_1-\kappa_2y_2)] -[(y_1-\kappa_1y_2)\kappa_2+(y_1-\kappa_2y_2)\kappa_1]y_2',\\ \textup{so}&&y_2'>0\quad\textup{and}\quad (1-\kappa_1y_2')(1-\kappa_2y_2')<0,\\ 0&=& (y_1-\kappa_1y_2)(-\kappa_2y_2'') +2(1-\kappa_1y_2')(1-\kappa_2y_2')\\ &&+(y_1-\kappa_2y_2)(-\kappa_1y_2'')\\ &=&-[(y_1-\kappa_1y_2)\kappa_2+(y_1-\kappa_2y_2)\kappa_1]y_2'' +2(1-\kappa_1y_2')(1-\kappa_2y_2'),\\ \textup{so}&&y_2''<0. \end{eqnarray*} Therefore the lower right branch of the hyperbola is strictly monotonically increasing and concave. The upper left branch is obtained from the lower right branch by the reflection $H_\R\to H_\R,\ (y_1,y_2)\mapsto (y_2,y_1),$ along the diagonal. Therefore it is strictly monotonically increasing and convex. By definition $s_{e_1}^{(0)}$ maps each horizontal line in $H_\R$ to itself, and $s_{e_2}^{(0)}$ maps each vertical line in $H_\R$ to itself. As they map $\Delta^{(0)}=R^{(0)}$ to itself, this shows all statements in (iii). (e) Obviously $g_{1,2}\in O^{(0)}$ and $\{\pm \id,\pm g_{1,2}\}$ is a subgroup of $O^{(0)}$.
Recall \begin{eqnarray*} H_\Z^\sharp \supset j^{(0)}(H_\Z) =\langle j^{(0)}(e_1),j^{(0)}(e_2)\rangle\\ \textup{with}\quad j^{(0)}(e_1)(\uuuu{e})=(2,x),j^{(0)}(e_2)(\uuuu{e})=(x,2). \end{eqnarray*} Define $l\in H_\Z^\sharp$ with $l(\uuuu{e})=(1,0)$. Then \begin{eqnarray*} (l-l\circ(-\id)^{-1})(\uuuu{e})=2l(\uuuu{e})=(2,0),\\ \textup{so}\quad l-l\circ(-\id)^{-1}\notin j^{(0)}(H_\Z), \textup{so}\quad -\id\notin\ker\tau^{(0)}.\\ (l-l\circ g_{1,2}^{-1})(\uuuu{e})=(1,-1),\\ \textup{so}\quad l-l\circ g_{1,2}^{-1}\notin j^{(0)}(H_\Z), \textup{so}\quad g_{1,2}\notin\ker\tau^{(0)}. \end{eqnarray*} Denote for a moment by $\www{O}^{(0)}$ the subgroup of $O^{(0)}$ which is generated by $\{\pm\id,\pm g_{1,2}\}$ and $\Gamma^{(0)}$. We just saw \begin{eqnarray*} (\ker\tau^{(0)})\cap \www{O}^{(0)} =\Gamma^{(0)},\\ \textup{so}\quad \www{O}^{(0)}=\Gamma^{(0)}\rtimes \{\pm \id,\pm g_{1,2}\}. \end{eqnarray*} It remains to show that this subgroup is $O^{(0)}$. By the parts (c) and (d) (iii) the set $\Delta^{(0)}=R^{(0)}$ splits into two $\Gamma^{(0)}$ orbits, $\Gamma^{(0)}\{e_1\}$ and $\Gamma^{(0)}\{e_2\}$. The element $g_{1,2}$ interchanges them, so $\Delta^{(0)}=R^{(0)}$ is a single $\www{O}^{(0)}$ orbit. Therefore each element of $O^{(0)}$ can be written as a product of an element in $\www{O}^{(0)}$ and an element $g\in O^{(0)}$ with $g(e_1)=e_1$. It is sufficient to show $g\in\www{O}^{(0)}$. Observe that the set $\{v\in H_\Z\,|\, I^{(0)}(v,v)>0\}$ consists of two components, with $e_1$ in one component and $e_2$ in the other component. Because of $g(e_1)=e_1$, $g(e_2)$ is in the same component as $e_2$. Now $H_\Z=\Z e_1+\Z g(e_2)$ shows $g(e_2)\in\{e_2,s_{e_1}^{(0)}(-e_2)\}$ and $g\in\{\id,-s_{e_1}^{(0)}\}$. Therefore $\www{O}^{(0)}=O^{(0)}$. \hfill$\Box$ \begin{remarks}\label{t6.9} (i) By part (iv) of Theorem \ref{t6.8} (d) the pairs $(\Gamma^{(0)},\Delta^{(0)})$ with the action of $\Gamma^{(0)}$ on $\Delta^{(0)}$ are isomorphic for all $x\leq -2$. 
This is interesting, as in the case $x=-2$ the set $\Delta^{(0)}$ and this action can be written down in a very simple way. The parts (i) and (iii) of Theorem \ref{t6.8} (d) offer two ways to control the set $\Delta^{(0)}$ and this action also for $x\leq -3$, a number-theoretic way and a geometric way. But both ways are less simple than $\Delta^{(0)}$ in the case $x=-2$. (ii) In the odd cases the situation will be partly similar, partly different. The pairs $(\Gamma^{(1)},\Delta^{(1)})$ with the action of $\Gamma^{(1)}$ on $\Delta^{(1)}$ are isomorphic for all $x\leq -2$. In the case $x=-2$ the set $\Delta^{(1)}$ and this action can be written down in a fairly simple way. But we lack analogues of the parts (i) and (iii) in Theorem \ref{t6.8} (d). We do not have good control over the sets $\Delta^{(1)}$ for $x\leq -3$. (iii) For each $x\leq -2$ and $i\in\{1,2\}$, $\Stab_{\Gamma^{(0)}}(e_i)=\{\id\}$, so the map $\Gamma^{(0)}\to \Gamma^{(0)}\{e_i\}$, $\gamma\mapsto \gamma(e_i)$, is a bijection. The action of $\Gamma^{(0)}$ on $\Gamma^{(0)}\{e_i\}$ again shows immediately that $\Gamma^{(0)}$ is a free Coxeter group with generators $s_{e_1}^{(0)}$ and $s_{e_2}^{(0)}$. \end{remarks} Now we come to the odd cases. As before we restrict to $x\in \Z_{<0}$.
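Before stating the theorem we record a simple computational check: the odd reflection matrices $s^{(1),mat}_{e_1}=\begin{pmatrix}1&-x\\0&1\end{pmatrix}$ and $s^{(1),mat}_{e_2}=\begin{pmatrix}1&0\\x&1\end{pmatrix}$ from part (a) of Theorem \ref{t6.10} below are congruent to $E_2$ modulo $x$ and have determinant $1$, so every word in them lies in $\Gamma(x)$, consistent with $\Gamma^{(1)}\subset\ker\tau^{(1)}$. A short illustrative Python sketch (our names):

```python
# Check that all words in the odd reflection matrices lie in
# Gamma(x) = {A in SL_2(Z) | A = E_2 mod x}.
from itertools import product

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def in_gamma(A, x):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    congruent = all((A[i][j] - (1 if i == j else 0)) % x == 0
                    for i in range(2) for j in range(2))
    return det == 1 and congruent

def words_in_gamma_x(x, max_len):
    s1 = [[1, -x], [0, 1]]   # s_{e_1}^{(1),mat}
    s2 = [[1, 0], [x, 1]]    # s_{e_2}^{(1),mat}
    for n in range(max_len + 1):
        for word in product((s1, s2), repeat=n):
            A = [[1, 0], [0, 1]]
            for g in word:
                A = mat_mul(A, g)
            if not in_gamma(A, x):
                return False
    return True
```

(Python's modulo with a negative modulus still returns $0$ exactly on multiples, so the congruence test is valid for $x<0$.)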
\begin{theorem}\label{t6.10} (a) We have \begin{eqnarray*} O^{(1)}&\cong & SL_2(\Z),\\ \ker\tau^{(1)}&\cong& \Gamma(x):=\{A\in SL_2(\Z)\,|\, A\equiv E_2\mmod x\}, \end{eqnarray*} \begin{eqnarray*} s^{(1)}_{e_i}\uuuu{e}&=&\uuuu{e}\cdot s_{e_i}^{(1),mat} \quad\textup{ with }\\ s^{(1),mat}_{e_1}&=&\begin{pmatrix}1&-x\\0&1\end{pmatrix},\quad s^{(1),mat}_{e_2}=\begin{pmatrix}1&0\\x&1\end{pmatrix},\\ \Gamma^{(1)}&\cong& \Gamma^{(1),mat}:= \langle s^{(1),mat}_{e_1},s^{(1),mat}_{e_2}\rangle \subset SL_2(\Z). \end{eqnarray*} The map from $\Delta^{(1)}$ to its image in $\widehat{\R}=\R\cup\{\infty\}$ under the composition $C:\Delta^{(1)}\to\widehat{\R}$ of maps \begin{eqnarray*} \Delta^{(1)}&\to& (\R^*\Delta^{(1)})/\R^* =\{\textup{lines through vanishing cycles}\}\\ &\hookrightarrow& (H_\R-\{0\})/\R^* =\{\textup{lines in }H_\R\} \stackrel{\cong}{\longrightarrow}\widehat{\R},\\ &&\R^*\uuuu{e}\begin{pmatrix}y_1\\1\end{pmatrix}\mapsto y_1,\quad \R^*e_1\mapsto\infty, \end{eqnarray*} is two-to-one. (b) The case $x=-1$: $\Gamma^{(1),mat}=SL_2(\Z)$. \begin{eqnarray*} \Delta^{(1)}=\Gamma^{(1)}\{e_1\} = \{y_1e_1+y_2e_2\in H_\Z\,|\, \gcd(y_1,y_2)=1\} =H_\Z^{prim}, \end{eqnarray*} where $H_\Z^{prim}$ denotes the set of primitive vectors in $H_\Z$. The image $C(\Delta^{(1)})\subset\widehat{\R}$ is $\widehat{\Q}=\Q\cup\{\infty\}$. (c) The case $x=-2$: $\Gamma^{(1)}\cong G^{free,2}$ is a free group with the two generators $s_{e_1}^{(1)}$ and $s_{e_2}^{(1)}$. The (isomorphic) matrix group $\Gamma^{(1),mat}$ is \begin{eqnarray*} \Gamma^{(1),mat}&=& \{\begin{pmatrix}a&b\\c&d\end{pmatrix} \in SL_2(\Z)\,|\, a\equiv d\equiv 1(4),\ b\equiv c\equiv 0(2)\}. \end{eqnarray*} It is a subgroup of index 2 in the principal congruence subgroup \begin{eqnarray*} \Gamma(2)=\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in SL_2(\Z)\,|\, a\equiv d\equiv 1(2),\ b\equiv c\equiv 0(2)\} \end{eqnarray*} of $SL_2(\Z)$.
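For $x=-2$ the congruence description of $\Gamma^{(1),mat}$ in part (c) can be tested on words in the generators $T=\begin{pmatrix}1&2\\0&1\end{pmatrix}$ and $U=\begin{pmatrix}1&0\\-2&1\end{pmatrix}$ and their inverses. A short illustrative Python sketch (our names):

```python
# Every word in T^{+-1}, U^{+-1} should satisfy a = d = 1 (mod 4) and
# b = c = 0 (mod 2), the congruence conditions of part (c).
from itertools import product

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

GENS = [[[1, 2], [0, 1]], [[1, -2], [0, 1]],     # T, T^{-1}
        [[1, 0], [-2, 1]], [[1, 0], [2, 1]]]     # U, U^{-1}

def in_congruence_subgroup(A):
    return (A[0][0] % 4 == 1 and A[1][1] % 4 == 1
            and A[0][1] % 2 == 0 and A[1][0] % 2 == 0)

def all_words_ok(max_len):
    for n in range(max_len + 1):
        for word in product(GENS, repeat=n):
            A = [[1, 0], [0, 1]]
            for g in word:
                A = mat_mul(A, g)
            if not in_congruence_subgroup(A):
                return False
    return True
```

Note that $-E_2$ violates the condition $a\equiv 1(4)$, matching the fact that $-E_2\notin\Gamma^{(1),mat}$.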
The set $\Delta^{(1)}=\Gamma^{(1)}\{\pm e_1,\pm e_2\}$ is $$\Delta^{(1)}= \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1+y_2\equiv 1(2)\}.$$ It splits into the four disjoint orbits \begin{eqnarray*} \Gamma^{(1)}\{e_1\}&=& \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1\equiv 1(4),y_2\equiv 0(2)\}\\ \Gamma^{(1)}\{-e_1\}&=& \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1\equiv 3(4),y_2\equiv 0(2)\}\\ \Gamma^{(1)}\{e_2\}&=& \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1\equiv 0(2),y_2\equiv 1(4)\}\\ \Gamma^{(1)}\{-e_2\}&=& \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1\equiv 0(2),y_2\equiv 3(4)\}. \end{eqnarray*} The set $H_\Z^{prim}$ of primitive vectors is the disjoint union of $\Delta^{(1)}$ and the set \begin{eqnarray*} \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\,y_1\equiv y_2\equiv 1(2)\}. \end{eqnarray*} The image $C(\Delta^{(1)})\subset\widehat{\R}$ is \begin{eqnarray*} \{\infty\}\cup\{\frac{a}{b}\,|\, a\in\Z,b\in\N, \gcd(a,b)=1,a\equiv 0(2)\textup{ or }b\equiv 0(2)\} \subset\widehat{\Q}, \end{eqnarray*} and is dense in $\widehat{\R}$. (d) The cases $x\leq -3$. (i) $\Gamma^{(1)}\cong G^{free,2}$ is a free group with the two generators $s_{e_1}^{(1)}$ and $s_{e_2}^{(1)}$. (ii) The matrix group $\Gamma^{(1),mat}$ is a Fuchsian group of \index{Fuchsian group} the second kind. It has infinite index in the group \begin{eqnarray*} \{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in SL_2(\Z) \,|\, a\equiv d\equiv 1(x^2),b\equiv c\equiv 0(x)\}, \end{eqnarray*} which has finite index in $SL_2(\Z)$. (iii) The image $C(\Delta^{(1)})\subset\widehat{\R}$ is a subset of $\widehat{\Q}$ which is nowhere dense in $\widehat{\R}$. (iv) Denote by $\www{s}_{e_i}^{(1)}$, $\www{\Gamma}^{(1)}$ and $\www{\Delta}^{(1)}$ the objects for $x=-2$ and as before by $s_{e_i}^{(1)}$, $\Gamma^{(1)}$ and $\Delta^{(1)}$ the objects for an $x\leq -3$. 
The map $\www{s}_{e_i}^{(1)}\mapsto s_{e_i}^{(1)}$ extends to a group isomorphism $\www{\Gamma}^{(1)}\to\Gamma^{(1)}$, which maps the stabilizer $\langle \www{s}_{e_i}^{(1)}\rangle$ of $e_i$ in $\www{\Gamma}^{(1)}$ to the stabilizer $\langle s_{e_i}^{(1)}\rangle$ of $e_i$ in $\Gamma^{(1)}$. The induced map \begin{eqnarray*} \www{\Delta}^{(1)}\to\Delta^{(1)},\ \www{\gamma}(\varepsilon e_i)\mapsto \gamma(\varepsilon e_i) \quad\textup{for }\varepsilon\in\{\pm 1\}, \www{\gamma}\mapsto\gamma, \end{eqnarray*} is a bijection. The bijections $\www{\Gamma}^{(1)}\to \Gamma^{(1)}$ and $\www{\Delta}^{(1)}\to \Delta^{(1)}$ are compatible with the actions of $\www{\Gamma}^{(1)}$ on $\www{\Delta}^{(1)}$ and of $\Gamma^{(1)}$ on $\Delta^{(1)}$. In particular, $\Delta^{(1)}$ splits into the four disjoint orbits $\Gamma^{(1)}\{e_1\}$, $\Gamma^{(1)}\{-e_1\}$, $\Gamma^{(1)}\{e_2\}$, $\Gamma^{(1)}\{-e_2\}$. (e) In the case $A_1^2$, $\Delta^{(0)}=\Delta^{(1)}=\{\pm e_1,\pm e_2\}$. In all other rank 2 cases $\Delta^{(0)}\subsetneqq \Delta^{(1)}$. \end{theorem} {\bf Proof:} (a) Because of $I^{(1)}(\uuuu{e}^t,\uuuu{e}) =\begin{pmatrix}0&x\\-x&0\end{pmatrix}$ we have $O^{(1)}\cong SL_2(\Z)$. In order to see $\ker \tau^{(1)}\cong \Gamma(x)$, consider the generators $l_1,l_2\in H_\Z^\sharp$ of $H_\Z^\sharp$ with $l_1(\uuuu{e})=(1,0)$ and $l_2(\uuuu{e})=(0,1)$. Observe first $$j^{(1)}(e_1)(\uuuu{e})=(0,x),\ j^{(1)}(e_2)(\uuuu{e})=(-x,0), \quad\textup{so }j^{(1)}(H_\Z)=x H_\Z^\sharp,$$ and second that $g\in O^{(1)}$ with $g^{-1}(\uuuu{e})=\uuuu{e}\cdot\begin{pmatrix}a&b\\c&d\end{pmatrix}$ satisfies \begin{eqnarray*} (l_1-l_1\circ g^{-1})(\uuuu{e})&=&(1-a,b),\\ (l_2-l_2\circ g^{-1})(\uuuu{e})&=&(-c,1-d), \end{eqnarray*} so $g\in \ker \tau^{(1)}$ if and only if $\begin{pmatrix}a&b\\c&d\end{pmatrix}\equiv E_2\mmod x$. It remains to prove that the line $\R\cdot \delta\subset H_\R$ through a vanishing cycle $\delta\in\Delta^{(1)}$ intersects $\Delta^{(1)}$ only in $\pm \delta$.
To prove this we can restrict to $\delta=e_i$. There it follows from the fact that any matrix $A\in SL_2(\Z)$ with a zero in an entry $A_{ij}$ has entries $A_{i,j+1(2)}$, $A_{i+1(2),j}\in\{\pm 1\}$. (b) $\Gamma^{(1),mat}=SL_2(\Z)$ is well known. The standard arguments for this are as follows. The group $\mu(\Gamma^{(1),mat})\subset\Isom(\H)$ is generated by \begin{eqnarray*} \mu(s_{e_1}^{(1),mat})= \mu(\begin{pmatrix}1&1\\0&1\end{pmatrix}) &=&(z\mapsto z+1),\\ \mu(s_{e_1}^{(1),mat}s_{e_2}^{(1),mat}s_{e_1}^{(1),mat}) =\mu(\begin{pmatrix}0&1\\-1&0\end{pmatrix})&=&(z\mapsto -z^{-1}). \end{eqnarray*} One sees almost immediately that it acts transitively on $\widehat{\Q}$ and that the stabilizer $\langle \mu(s_{e_1}^{(1),mat})\rangle$ of $\infty$ in $\mu(\Gamma^{(1),mat})$ coincides with the stabilizer of $\infty$ in $\mu(SL_2(\Z))$. Therefore $\mu(\Gamma^{(1),mat})=\mu(SL_2(\Z))$. But $-E_2\in \Gamma^{(1),mat}$ because of $\begin{pmatrix}0&1\\-1&0\end{pmatrix}^2=-E_2$. Therefore $\Gamma^{(1),mat}=SL_2(\Z)$. The fact that $\mu(SL_2(\Z))$ acts transitively on $\widehat{\Q}$ shows $C(\Delta^{(1)})=\widehat{\Q}$. Together with $-E_2\in SL_2(\Z)$ this shows \begin{eqnarray*} \Delta^{(1)}=\Gamma^{(1)}\{e_1\}=H_\Z^{prim}. \end{eqnarray*} (c) and (d) The group $\Gamma^{(1)}$ for $x\leq -2$: Recall the Remarks and Notations \ref{ta.1}. The elements \begin{eqnarray*} \mu(s_{e_1}^{(1),mat})=\mu(\begin{pmatrix}1&-x\\0&1\end{pmatrix}) &=&(z\mapsto z-x)\quad\textup{and}\\ \mu(s_{e_2}^{(1),mat})=\mu(\begin{pmatrix}1&0\\x&1\end{pmatrix}) &=&(z\mapsto \frac{z}{xz+1}) \end{eqnarray*} of $\Isom(\H)$ are parabolic with fixed points $\infty$ respectively $0$ on $\widehat{\R}$. Observe \begin{eqnarray*} \mu(s_{e_1}^{(1),mat})^{-1}(1)=1+x,\quad \mu(s_{e_2}^{(1),mat})(1)=(1+x)^{-1},\\ (1+x)^{-1}\geq 1+x\quad\textup{for }x\leq -2. \end{eqnarray*} Therefore $\mu(s_{e_1}^{(1),mat})^{-1}(A(\infty,1))=A(\infty,1+x)$ and $\mu(s_{e_2}^{(1),mat})(A(0,1))=A(0,(1+x)^{-1})$ do not intersect. 
See Figure \ref{Fig:6.5}. \begin{figure}[H] \includegraphics[width=0.8\textwidth]{pic-6-5.png} \caption[Figure 6.5]{Fundamental domain in $\H$ of $\Gamma^{(1),mat}$ for $x=-2$ and $x\leq -3$} \label{Fig:6.5} \end{figure} Theorem \ref{ta.2} (c) applies and shows that $\mu(\Gamma^{(1),mat})$ is a free group with the two generators $\mu(s_{e_1}^{(1),mat})$ and $\mu(s_{e_2}^{(1),mat})$. Therefore $\Gamma^{(1)}$ is also a free group with the two generators $s_{e_1}^{(1)}$ and $s_{e_2}^{(1)}$. As the map $\Gamma^{(1),mat}\to \mu(\Gamma^{(1),mat})$ is therefore an isomorphism, $-E_2\notin\Gamma^{(1),mat}$ and $-\id\notin \Gamma^{(1)}$. Theorem \ref{ta.2} (c) also says that the contractible open set $\FF$ whose hyperbolic boundary consists of the four hyperbolic lines which were used above, $A(1+x,\infty)$, $A(\infty,1)$, $A(1,0)$, $A(0,(1+x)^{-1})$, is a fundamental domain for $\mu(\Gamma^{(1),mat})$. If $x=-2$ then its euclidean boundary in $\widehat{\C}$ consists of these four hyperbolic lines and the four points $\infty$, $1$, $0$, $-1=1+x=(1+x)^{-1}$. If $x\leq -3$ then its euclidean boundary consists of these four hyperbolic lines, the three points $\infty$, $1$, $0$, and the interval $[1+x,(1+x)^{-1}]$. (c) $\Gamma^{(1),mat}$ and $\Delta^{(1)}$ for $x=-2$: The following facts together imply $\mu(\Gamma^{(1),mat})=\mu(\Gamma(2))$: \begin{eqnarray*} \Gamma^{(1),mat}\subset\Gamma(2),\quad [SL_2(\Z):\Gamma(2)]=6,\quad -E_2\in \Gamma(2),\\ (\textup{hyperbolic area of the fundamental domain }\FF \textup{ of }\mu(\Gamma^{(1),mat}))=2\pi,\\ (\textup{hyperbolic area of a fundamental domain of } \mu(SL_2(\Z)))=\frac{\pi}{3}. \end{eqnarray*} Therefore either $\Gamma^{(1),mat}=\Gamma(2)$ or $\Gamma^{(1),mat}$ is a subgroup of index 2 in $\Gamma(2)$.
But $\Gamma^{(1),mat}$ is certainly a subgroup of the subgroup \begin{eqnarray*} \{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in SL_2(\Z)\,|\, a\equiv d\equiv 1(4),b\equiv c\equiv 0(2)\} \end{eqnarray*} of $\Gamma(2)$ and does not contain $-E_2$. Therefore $\Gamma^{(1),mat}$ coincides with this subgroup of $\Gamma(2)$ and has index 2 in $\Gamma(2)$. Therefore the orbits $\Gamma^{(1)}\{e_1\}$, $\Gamma^{(1)}\{-e_1\}$, $\Gamma^{(1)}\{e_2\}$, $\Gamma^{(1)}\{-e_2\}$ are contained in the right hand sides of the equations in part (c) which describe them and are disjoint. It remains to show equality. We restrict to $\Gamma^{(1)}\{e_1\}$. The argument for $\Gamma^{(1)}\{e_2\}$ is analogous, and the equations for $\Gamma^{(1)}\{-e_1\}$ and $\Gamma^{(1)}\{-e_2\}$ follow immediately. Suppose $y_1,y_3\in\Z$ with $y_1\equiv 1(4)$, $y_3\equiv 0(2)$, $\gcd(y_1,y_3)=1$. We have to show $\uuuu{e}\begin{pmatrix}y_1\\y_3\end{pmatrix} \in \Gamma^{(1)}\{e_1\}$. For that we have to find $y_2,y_4\in\Z$ with $\begin{pmatrix}y_1&y_2\\y_3&y_4\end{pmatrix}\in \Gamma^{(1),mat}$, so with $1=y_1y_4-y_2y_3$, $y_4\equiv 1(4)$, $y_2\equiv 0(2)$. The condition $\gcd(y_1,y_3)=1$ implies the existence of $\www{y}_2,\www{y}_4\in\Z$ with $1=y_1\www{y}_4-\www{y}_2y_3$. 1st case, $\www{y}_2\equiv 0(2)$: Then $1=y_1\www{y}_4-\www{y}_2y_3$ shows $\www{y}_4\equiv 1(4)$, and $(y_2,y_4)=(\www{y}_2,\www{y}_4)$ works. 2nd case, $\www{y}_2\equiv 1(2)$: Then $(y_2,y_4)=(\www{y}_2+y_1,\www{y}_4+y_3)$ satisfies $1=y_1y_4-y_2y_3$ and $y_2\equiv 0(2)$, so we are in the 1st case, so $(y_2,y_4)$ works. Therefore the orbit $\Gamma^{(1)}\{e_1\}$ is, as claimed, the set $\{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1\equiv 1(4),y_2\equiv 0(2)\}$. The statements on $H_\Z^{prim}$ and $C(\Delta^{(1)})$ are clear now, too. (d) $\Gamma^{(1),mat}$ and $\Delta^{(1)}$ for $x\leq -3$: Part (i) was shown above. (ii) The euclidean boundary of the fundamental domain $\FF$ of $\mu(\Gamma^{(1),mat})$ above contains the real interval $[1+x,(1+x)^{-1}]$.
Therefore its hyperbolic area is $\infty$, $\Gamma^{(1),mat}$ is a Fuchsian group of the second kind, and the index of $\Gamma^{(1),mat}$ in $SL_2(\Z)$ is $\infty$ (see e.g. \cite[34.]{Fo51} \cite[\S 5.3, \S 8.1]{Be83}). (iii) The set $C(\Delta^{(1)})\subset\widehat{\R}$ is the union of the $\mu(\Gamma^{(1),mat})$ orbits of $\infty$ and $0$ in $\widehat{\R}$. Because $\Gamma^{(1),mat}$ is a Fuchsian group of the second kind, these two orbits are nowhere dense in $\widehat{\R}$ (see e.g. \cite{Fo51} \cite{Be83}). For example, they contain no point of the open interval $(1+x,(1+x)^{-1})$ nor of its $\mu(\Gamma^{(1),mat})$ orbit. (iv) The groups $\www{\Gamma}^{(1)}$ and $\Gamma^{(1)}$ are free groups with generators $\www{s}_{e_1}^{(1)}$, $\www{s}_{e_2}^{(1)}$ and $s_{e_1}^{(1)}$, $s_{e_2}^{(1)}$. Therefore there is a unique isomorphism $\www{\Gamma}^{(1)}\to\Gamma^{(1)}$ with $\www{s}_{e_i}^{(1)}\mapsto s_{e_i}^{(1)}$. The other statements follow immediately. (e) The case $x=0$, so $A_1^2$: $$\Delta^{(0)}=\Delta^{(1)}=\{\pm e_1,\pm e_2\}.$$ The case $x=-1$, so $A_2$: $$\Delta^{(0)}=\{\pm e_1,\pm e_2,\pm (e_1+e_2)\}\subsetneqq H_\Z^{prim}=\Delta^{(1)}.$$ The case $x=-2$, so $\P^1A_1$: \begin{eqnarray*} \Delta^{(0)}&=&(e_1+\Z f_1)\, \dot\cup\, (e_2+\Z f_1)\\ &\subsetneqq& \{y_1e_1+y_2e_2\in H_\Z^{prim}\,|\, y_1+y_2\equiv 1(2)\} =\Delta^{(1)}. \end{eqnarray*} The cases $x\leq -3$: Recall $s_{e_1}^{(0)}s_{e_2}^{(0)}=-M=-s_{e_1}^{(1)}s_{e_2}^{(1)}$. Therefore for $m\in\Z,\varepsilon\in\{\pm 1\}$ \begin{eqnarray*} (s_{e_1}^{(0)}s_{e_2}^{(0)})^m(\uuuu{e})&=& (-s_{e_1}^{(1)}s_{e_2}^{(1)})^m(\uuuu{e})\in(\Delta^{(1)})^2,\\ (s_{e_1}^{(0)}s_{e_2}^{(0)})^m s_{e_1}^{(0)}(e_1)&=& -(-s_{e_1}^{(1)}s_{e_2}^{(1)})^m(e_1)\in \Delta^{(1)},\\ (s_{e_1}^{(0)}s_{e_2}^{(0)})^m s_{e_1}^{(0)}(e_2)&=& (s_{e_1}^{(0)}s_{e_2}^{(0)})^{m+1}s_{e_2}^{(0)}(e_2)\\ &=&-(-s_{e_1}^{(1)}s_{e_2}^{(1)})^{m+1}(e_2)\in \Delta^{(1)}.
\end{eqnarray*} This shows $\Delta^{(0)}\subset\Delta^{(1)}$. For example $(s_{e_1}^{(1)})^{-1}(e_2)=xe_1+e_2$ satisfies $$L(xe_1+e_2,xe_1+e_2)=\begin{pmatrix}x&1\end{pmatrix} \begin{pmatrix}1&0\\x&1\end{pmatrix}\begin{pmatrix}x\\1\end{pmatrix} =2x^2+1\neq 1,$$ whereas $L(v,v)=1$ for every $v\in\Delta^{(0)}$, so $\Delta^{(0)}\subsetneqq\Delta^{(1)}$. \hfill$\Box$ \section{The even rank 3 cases}\label{s6.3} For $\uuuu{x}=(x_1,x_2,x_3)\in\Z^3$ consider the matrix $S=S(\uuuu{x})= \begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \in T^{uni}_3(\Z)$, and consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,e_2,e_3)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. In this section we determine the even monodromy group $\Gamma^{(0)}=\langle s_{e_1}^{(0)}, s_{e_2}^{(0)},s_{e_3}^{(0)}\rangle$ in all cases and, in many but not all cases, the set $\Delta^{(0)}=\Gamma^{(0)}\{\pm e_1,\pm e_2,\pm e_3\}$ of even vanishing cycles. The cases where we control $\Delta^{(0)}$ well include all cases with $r(\uuuu{x})\in\{0,1,2,4\}$ (the value $3$ does not occur). The group $\Br_3\ltimes \{\pm 1\}^3$ acts on the set $\BB^{tri}$ of triangular bases of $(H_\Z,L)$, but this action does not change $\Gamma^{(0)}$ and $\Delta^{(0)}$. Therefore the analysis of the action of $\Br_3\ltimes\{\pm 1\}^3$ on $T^{uni}_3(\Z)$ in Theorem \ref{t4.6} allows us to restrict to the matrices $S(\uuuu{x})$ with $\uuuu{x}$ in the following list: \begin{eqnarray*} S(\uuuu{x})\textup{ with }\uuuu{x}\in\Z^3_{\leq 0} \textup{ and }r(\uuuu{x})>4,\\ S(A_1^3),\ S(\P^2),\ S(A_2A_1),\ S(A_3),\ S(\P^1A_1),\ S(\whh{A}_2),\ S(\HH_{1,2}),\\ S(-l,2,-l)\textup{ for }l\geq 3,\\ \left\{\begin{array}{r} S(\uuuu{x})\textup{ with }\uuuu{x}\in\Z^3_{\geq 3} \textup{ and }r(\uuuu{x})<0\textup{ and }\\ x_i\leq \frac{1}{2}x_jx_k \textup{ for }\{i,j,k\}=\{1,2,3\}.\end{array}\right.
\end{eqnarray*} Among these matrices the following satisfy $\uuuu{x}\in\Z^3_{\leq 0}$: \begin{eqnarray*} S(\uuuu{x})\textup{ with }\uuuu{x}\in\Z^3_{\leq 0} \textup{ and }r(\uuuu{x})>4,\\ S(A_1^3),\ S(A_2A_1),\ S(A_3),\ S(\P^1A_1),\ S(\whh{A}_2). \end{eqnarray*} These are all Coxeter matrices. Their even monodromy groups $\Gamma^{(0)}$ are Coxeter groups and are well known. The cases with $\uuuu{x}\in\{0,-1,-2\}^3$ are classical; the extension to all $\uuuu{x}\in\Z^3_{\leq 0}$ is due to Vinberg \cite[Prop. 6, Thm. 1, Thm. 2, Prop. 17]{Vi71}. If we write $(x_1,x_2,x_3)=(S_{12},S_{13},S_{23})$ then the following holds \cite[5.3+5.4]{Hu90} \cite{Vi71} \cite[4.1+4.2]{BB05}: All relations in $\Gamma^{(0)}=\langle s_{e_1}^{(0)}, s_{e_2}^{(0)},s_{e_3}^{(0)}\rangle$ are generated by the relations \begin{eqnarray}\label{6.1} (s_{e_i}^{(0)})^2=\id && \textup{for }i\in\{1,2,3\},\\ (s_{e_i}^{(0)}s_{e_j}^{(0)})^2=\id &&\textup{for }\{i,j,k\}=\{1,2,3\}\textup{ with }S_{ij}=0 \label{6.2}\\ {}[\textup{equivalent: }&&s_{e_i}^{(0)}\textup{ and } s_{e_j}^{(0)}\textup{ commute}],\nonumber\\ (s_{e_i}^{(0)}s_{e_j}^{(0)})^3=\id &&\textup{for }\{i,j,k\}=\{1,2,3\}\textup{ with }S_{ij}=-1, \label{6.3}\\ \textup{no relation }&& \textup{for }\{i,j,k\}=\{1,2,3\}\textup{ with }S_{ij}\leq -2. \label{6.4} \end{eqnarray} Especially, $\Gamma^{(0)}\cong G^{fCox,3}$ is a free Coxeter group with three generators if $\uuuu{x}\in\Z^3_{\leq -2}$. In Theorem \ref{t6.11} we recover this result, say more about the Coxeter groups with $r\in\{0,1,2,4\}$, i.e. the cases $S(A_1^3),S(A_2A_1),S(A_3),S(\P^1A_1),S(\whh{A}_2)$, and also treat the other cases where $\Gamma^{(0)}$ is not a Coxeter group. The only cases where $\Rad I^{(0)}\supsetneqq\{0\}$ are the cases with $r(\uuuu{x})=4$, so the cases $S(\P^1A_1)$, $S(\whh{A}_2)$, $S(\HH_{1,2})$ and $S(-l,2,-l)$ with $l\geq 3$.
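The dichotomy $\Rad I^{(0)}\neq\{0\}\Leftrightarrow r(\uuuu{x})=4$ admits a quick numerical cross-check. The following sketch is ours, not part of the paper; it assumes the formula $r(\uuuu{x})=x_1^2+x_2^2+x_3^2-x_1x_2x_3$ from the earlier chapters and verifies $\det(S(\uuuu{x})+S(\uuuu{x})^t)=2(4-r(\uuuu{x}))$, which vanishes exactly in the $r(\uuuu{x})=4$ cases listed above.

```python
# Cross-check (our sketch, not the author's code): the Gram matrix of I^(0)
# is S(x) + S(x)^t = [[2,x1,x2],[x1,2,x3],[x2,x3,2]], and its determinant
# equals 2*(4 - r(x)), so Rad I^(0) is nonzero exactly when r(x) = 4.
# Assumption: r(x) = x1^2 + x2^2 + x3^2 - x1*x2*x3 (from earlier chapters).

def r(x1, x2, x3):
    return x1**2 + x2**2 + x3**2 - x1*x2*x3

def det_gram(x1, x2, x3):
    # determinant of [[2,x1,x2],[x1,2,x3],[x2,x3,2]], expanded by hand
    return 2*(4 - x3**2) - x1*(2*x1 - x2*x3) + x2*(x1*x3 - 2*x2)

# the r(x) = 4 families from the text: P^1A_1, hat A_2, H_{1,2}, (-l,2,-l)
cases = [(-2, 0, 0), (-1, -1, -1), (-2, 2, -2)] + [(-l, 2, -l) for l in range(3, 8)]
for x in cases:
    assert r(*x) == 4 and det_gram(*x) == 0

# generic check of det = 2*(4 - r) on a small grid
for x1 in range(-3, 4):
    for x2 in range(-3, 4):
        for x3 in range(-3, 4):
            assert det_gram(x1, x2, x3) == 2*(4 - r(x1, x2, x3))
```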
In these cases, we have the exact sequence \begin{eqnarray}\{1\}\to \Gamma^{(0)}_u\to\Gamma^{(0)}\to \Gamma^{(0)}_s\to\{1\}\label{6.5} \end{eqnarray} in Lemma \ref{t6.2} (d). \begin{theorem}\label{t6.11} (a) We have \begin{eqnarray*} s_{e_i}^{(0)}\uuuu{e}&=&\uuuu{e}\cdot s_{e_i}^{(0),mat} \quad\textup{with}\quad s_{e_1}^{(0),mat}= \begin{pmatrix}-1&-x_1&-x_2\\0&1&0\\0&0&1\end{pmatrix},\\ s_{e_2}^{(0),mat}&=& \begin{pmatrix}1&0&0\\-x_1&-1&-x_3\\0&0&1\end{pmatrix},\ s_{e_3}^{(0),mat}= \begin{pmatrix}1&0&0\\0&1&0\\-x_2&-x_3&-1\end{pmatrix},\\ \Gamma^{(0)}&\cong&\Gamma^{(0),mat} :=\langle s_{e_1}^{(0),mat},s_{e_2}^{(0),mat},s_{e_3}^{(0),mat}\rangle \subset GL_3(\Z),\\ R^{(0)}&=&\{y_1e_1+y_2e_2+y_3e_3\in H_\Z\,|\, \\ &&1=y_1^2+y_2^2+y_3^2+x_1y_1y_2+x_2y_1y_3+x_3y_2y_3\}. \end{eqnarray*} (b) In the cases $S(\uuuu{x})$ with $\uuuu{x}\in\Z^3_{\leq 0}$ and $r(\uuuu{x})>4$ and in the reducible cases $S(A_1^3),S(A_2A_1),S(\P^1A_1)$, all relations in $\Gamma^{(0)}$ are generated by the relations in \eqref{6.1}--\eqref{6.4}. Especially \begin{eqnarray*} \Gamma^{(0)}(A_1^3)&\cong&\Gamma^{(0)}(A_1)\times \Gamma^{(0)}(A_1)\times \Gamma^{(0)}(A_1) \cong (G^{fCox,1})^3\cong \{\pm 1\}^3,\\ \Gamma^{(0)}(A_2A_1)&\cong& \Gamma^{(0)}(A_2)\times \Gamma^{(0)}(A_1)\cong D_6\times \{\pm 1\}\cong S_3\times \{\pm 1\},\\ \Gamma^{(0)}(\P^1A_1)&\cong& \Gamma^{(0)}(\P^1)\times \Gamma^{(0)}(A_1)\cong G^{fCox,2}\times\{\pm 1\}. \end{eqnarray*} (c) In the case $S(A_3)$ the group $\Gamma^{(0)}$ is the Weyl group \index{Weyl group} of the root system $A_3$, so $\Gamma^{(0)}=\ker\tau^{(0)}=O^{(0),*}\cong S_4$. (d) In the case $S(\whh{A}_2)$ the group $\Gamma^{(0)}$ is the Weyl group of the affine root system $\whh{A}_2$. More concretely, the following holds. 
\begin{eqnarray*} \Rad I^{(0)}&=&\Z f_1\textup{ with }f_1=e_1+e_2+e_3,\\ \oooo{H_\Z}^{(0)}&=&\Z\oooo{e_1}^{(0)}\oplus \Z\oooo{e_2}^{(0)},\\ \Gamma^{(0)}_u&=&\bigl(\ker \tau^{(0)}\bigr)_u =T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes\Z f_1)\\ &=&\langle T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1), T(\oooo{j}^{(0)}(\oooo{e_2}^{(0)})\otimes f_1) \rangle\cong\Z^2 \quad\textup{ with}\\ &&T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1)(\uuuu{e}) =\uuuu{e}+f_1(2,-1,-1),\\ &&T(\oooo{j}^{(0)}(\oooo{e_2}^{(0)})\otimes f_1)(\uuuu{e}) =\uuuu{e}+f_1(-1,2,-1),\\ \Gamma^{(0)}_u&=&\bigl(\ker \tau^{(0)}\bigr)_u \stackrel{1:3}{\subset} O^{(0),Rad}_u =T(\oooo{H_\Z}^{(0),\sharp}\otimes \Z f_1),\\ \Gamma^{(0)}_s&=& (\ker\tau^{(0)})_s \cong\Gamma^{(0)}(A_2) \cong D_6\cong S_3,\\ \Gamma^{(0)}_s&\stackrel{1:2}{\subset}& O^{(0),Rad}_s =\Aut(\oooo{H_\Z}^{(0)}, \oooo{I}^{(0)})\cong D_{12},\\ \Gamma^{(0)}&=&\ker \tau^{(0)}=O^{(0),*} \stackrel{1:6}{\subset} O^{(0),Rad}. \end{eqnarray*} The exact sequence \eqref{6.5} splits non-canonically with $\Gamma^{(0)}_s\cong\langle s_{e_1}^{(0)},s_{e_2}^{(0)}\rangle \subset\Gamma^{(0)}$ (for example). (e) The case $S(\HH_{1,2})$: The following holds. 
\begin{eqnarray*} H_\Z&=&\Z f_3\oplus \Rad I^{(0)}\textup{ with } f_3=e_1+e_2+e_3,\\ \oooo{H_\Z}^{(0)}&=& \Z\oooo{f_3}^{(0)},\\ \Rad I^{(0)}&=&\Z f_1\oplus \Z f_2\textup{ with } f_1=e_1+e_2,\ f_2=e_2+e_3,\\ \Gamma^{(0)}_u&=&(\ker \tau^{(0)})_u =T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes\Rad I^{(0)})\\ &=&\langle T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_1), T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_2)\rangle\cong \Z^2 \quad \textup{with }\\ &&T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_1)(f_1,f_2,f_3)=(f_1,f_2,f_3+2f_1),\\ &&T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_2)(f_1,f_2,f_3)=(f_1,f_2,f_3+2f_2),\\ \Gamma^{(0)}_u &\stackrel{1:4}{\subset}& O^{(0),Rad}_u =T(\oooo{H_\Z}^{(0),\sharp}\otimes \Rad I^{(0)}),\\ \Gamma^{(0)}_s&=&(\ker \tau^{(0)})_s =O^{(0),Rad}_s \cong \Gamma^{(0)}(A_1)\cong\{\pm 1\},\\ \Gamma^{(0)}&=&\ker \tau^{(0)} =O^{(0),*}\stackrel{1:4}{\subset} O^{(0),Rad}. \end{eqnarray*} The exact sequence \eqref{6.5} splits non-canonically with $\Gamma^{(0)}_s\cong\langle -M\rangle\subset \Gamma^{(0)}$ and $-M(f_1,f_2,f_3)=(f_1,f_2,-f_3)$. Therefore \begin{eqnarray*} \Gamma^{(0)}=\{(f_1,f_2,f_3)\mapsto (f_1,f_2,\varepsilon f_3+ 2\beta_1f_1+2\beta_2f_2)\,|\, \varepsilon\in\{\pm 1\}, \beta_1,\beta_2\in \Z\}. \end{eqnarray*} (f) The cases $S(-l,2,-l)$ with $l\geq 3$: The following holds.
\begin{eqnarray*} \Rad I^{(0)}&=&\Z f_1\textup{ with }f_1=e_1-e_3,\\ \oooo{H_\Z}^{(0)}&=&\Z\oooo{e_1}^{(0)}\oplus\Z\oooo{e_2}^{(0)},\\ &&\oooo{I}^{(0)}((\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})^t, (\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})) =\begin{pmatrix}2&-l\\-l&2\end{pmatrix},\\ \Gamma^{(0)}_u&=&\langle T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)}) \otimes f_1), T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1)\rangle\cong\Z^2 \quad \textup{with }\\ &&T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)}) \otimes f_1)(\uuuu{e})=\uuuu{e}+f_1(2,-l,2),\\ &&T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1) (\uuuu{e})=\uuuu{e}+f_1(-l^2,2l,-l^2),\\ \Gamma^{(0)}_u&\stackrel{1:l}{\subset}& (\ker\tau^{(0)})_u\stackrel{1:(l^2-4)}{\subset} O^{(0),Rad}_u\cong\Z^2,\\ \Gamma^{(0)}_s &\cong& \Gamma^{(0)}(S(-l))\cong G^{fCox,2},\\ \Gamma^{(0)}_s &=& (\ker \tau^{(0)})_s\cap \ker\oooo{\sigma} \stackrel{1:4}{\subset} O^{(0),Rad}_s,\\ \Gamma^{(0)}&\stackrel{1:l}{\subset}& O^{(0),*} \stackrel{1:4(l^2-4)}{\subset} O^{(0),Rad}. \end{eqnarray*} The exact sequence \eqref{6.5} splits non-canonically with $\Gamma^{(0)}_s\cong\langle s^{(0)}_{e_1},s^{(0)}_{e_2}\rangle \subset\Gamma^{(0)}$ (for example). (g) The case $S(\P^2)$ and the cases $S(\uuuu{x})$ with $\uuuu{x}\in \Z^3_{\geq 3}$, $r(\uuuu{x})<0$ and $x_i\leq \frac{1}{2}x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$: $\Gamma^{(0)}\cong G^{fCox,3}$ is a free Coxeter group with the three generators $s^{(0)}_{e_1},s^{(0)}_{e_2},s^{(0)}_{e_3}$. \end{theorem} {\bf Proof:} (a) This follows from the definitions in Lemma \ref{t2.6} (a) and in Definition \ref{t2.8}. Especially \begin{eqnarray*} R^{(0)}&=&\{\uuuu{e}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} \in H_\Z\,|\, 2=I^{(0)}( (\uuuu{e}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix})^t, \uuuu{e}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix})\}\\ &=&\{\uuuu{e}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} \in H_\Z\,|\, 2=(y_1\ y_2\ y_3) \begin{pmatrix}2&x_1&x_2\\x_1&2&x_3\\x_2&x_3&2\end{pmatrix} \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\}.
\end{eqnarray*} (b) First we consider the cases $S(\uuuu{x})$ with $\uuuu{x}\in\Z^3_{\leq 0}$ and $r(\uuuu{x})>4$. By Lemma \ref{t5.7} (b) $\sign I^{(0)}=(++-)$. We will apply Theorem \ref{ta.4} with $I^{[0]}:=-I^{(0)}$, which has signature $\sign I^{[0]}=(+--)$. The vectors $e_1,e_2,e_3$ are negative with respect to $I^{[0]}$. By Theorem \ref{ta.4} (c) (vi) the reflection $s_{e_i}^{(0)}$ acts on the model $\KK/\R^*$ of the hyperbolic plane as reflection along the hyperbolic line $((\R e_i)^\perp\cap\KK)/\R^*\subset \KK/\R^*$. The corresponding three planes $(\R e_1)^\perp$, $(\R e_2)^\perp$ and $(\R e_3)^\perp$ in $H_\R$ intersect pairwise in the following three lines \begin{eqnarray*} (\R e_1)^\perp\cap(\R e_2)^\perp=\R y^{[1]},&& y^{[1]}= (-2x_2+x_1x_3,-2x_3+x_1x_2,4-x_1^2),\\ (\R e_1)^\perp\cap(\R e_3)^\perp=\R y^{[2]},&& y^{[2]}= (-2x_1+x_2x_3,4-x_2^2,-2x_3+x_1x_2),\\ (\R e_2)^\perp\cap(\R e_3)^\perp=\R y^{[3]},&& y^{[3]}= (4-x_3^2,-2x_1+x_2x_3,-2x_2+x_1x_3). \end{eqnarray*} $\uuuu{x}\in\Z^3_{\leq 0}$ and $y^{[1]}=0$ would imply $x_2=x_3=0$, $x_1=-2$, $r(\uuuu{x})=4$. But $r(\uuuu{x})>4$ by assumption. Therefore $y^{[1]}\neq 0$. Analogously $y^{[2]}\neq 0$ and $y^{[3]}\neq 0$. One calculates \begin{eqnarray*} I^{[0]}(y^{[i]},y^{[i]})&=& 2(4-x_i^2)(r(\uuuu{x})-4) \quad\textup{for }i\in\{1,2,3\}\\ &&\left\{\begin{array}{ll} \leq 0&\textup{ for }x_i\leq -2,\\ >0&\textup{ for }x_i\in\{0,-1\}.\end{array}\right. \end{eqnarray*} Therefore two of the three hyperbolic lines $((\R e_j)^\perp\cap\KK)/\R^*$ $(j\in\{1,2,3\})$ intersect in $\KK/\R^*$ if and only if the corresponding $x_i$ is $0$ or $-1$. \medskip {\bf Claim:} {\it If $x_i\in\{0,-1\}$ then the angle between the two hyperbolic lines at the intersection point $\R^* y^{[i]}\in\KK/\R^*$ is $\frac{\pi}{2}$ if $x_i=0$ and $\frac{\pi}{3}$ if $x_i=-1$.} \medskip We prove the claim in an indirect way. 
Observe in general \begin{eqnarray}\nonumber x_1=0\Rightarrow \bigl(s_{e_1}^{(0),mat}s_{e_2}^{(0),mat}\bigr)^2 &=&(\begin{pmatrix}-1&0&-x_2\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&-1&-x_3\\0&0&1\end{pmatrix})^2\\ &=&\begin{pmatrix}-1&0&-x_2\\0&-1&-x_3\\0&0&1\end{pmatrix}^2 =E_3,\label{6.6}\\ \nonumber x_1=-1\Rightarrow \bigl(s_{e_1}^{(0),mat}s_{e_2}^{(0),mat}\bigr)^3 &=&(\begin{pmatrix}-1&1&-x_2\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\1&-1&-x_3\\0&0&1\end{pmatrix})^3\\ &=&\begin{pmatrix}0&-1&-x_2-x_3\\1&-1&-x_3\\0&0&1\end{pmatrix}^3 =E_3,\label{6.7} \end{eqnarray} and analogously for $x_2$ and $x_3$. Therefore the angle between the hyperbolic lines must be $\frac{\pi}{2}$ if $x_i=0$ and $\frac{\pi}{3}$ or $\frac{2\pi}{3}$ if $x_i=-1$. But in the case $x_1=x_2=x_3=-1$ the three intersection points are the vertices of a hyperbolic triangle, so then the angles are all $\frac{\pi}{3}$ (the angle sum of a hyperbolic triangle is less than $\pi$, which excludes the value $\frac{2\pi}{3}$). Deforming $x_2$ and $x_3$ does not change the angle at $\R^*y^{[1]}$, so it is $\frac{\pi}{3}$ if $x_1=-1$. This proves the Claim. \hfill$(\Box)$ \medskip Now Theorem \ref{ta.2} (a) shows that in the group of automorphisms of $\KK/\R^*$ which is induced by $\Gamma^{(0)}$ all relations are generated by the relations in \eqref{6.1}--\eqref{6.4}. Therefore this holds also for $\Gamma^{(0)}$ itself. Now we consider the three reducible cases $S(A_1^3)$, $S(A_2A_1)$ and $S(\P^1A_1)$. Lemma \ref{t2.11} gives the first isomorphisms in part (b) for $\Gamma^{(0)}(A_1^3)$, $\Gamma^{(0)}(A_2A_1)$ and $\Gamma^{(0)}(\P^1A_1)$. Lemma \ref{t2.12} (for $A_1$) and Theorem \ref{t6.8} (b) and (c) give the second isomorphisms in part (b). In these three cases the isomorphisms show that all relations in $\Gamma^{(0)}$ are generated by the relations in \eqref{6.1}--\eqref{6.4}. (c) It is classical that in the case of the $A_3$ root lattice the monodromy group $\Gamma^{(0)}$ is the Weyl group and is $\ker\tau^{(0)}\cong S_4$.
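Both the order-3 relation for $x_1=-1$ and the claim in (c) are finite matrix computations; the following sketch (ours, not part of the proof) builds the matrices $s_{e_i}^{(0),mat}$ of Theorem \ref{t6.11} (a) and closes them under multiplication.

```python
# Cross-check (our sketch): for x = (-1,0,-1), the A_3 case, the matrix
# group generated by the three reflections s_{e_i}^{(0),mat} of Theorem
# t6.11 (a) has order 24, in accordance with Gamma^(0) = ker tau^(0) ≅ S_4.

E3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def refl_mats(x1, x2, x3):
    # the three matrices from Theorem t6.11 (a)
    return [((-1, -x1, -x2), (0, 1, 0), (0, 0, 1)),
            ((1, 0, 0), (-x1, -1, -x3), (0, 0, 1)),
            ((1, 0, 0), (0, 1, 0), (-x2, -x3, -1))]

def mul(a, b):
    return tuple(tuple(sum(a[i][k]*b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def closure(gens):
    # close the generating set under multiplication (finite groups only)
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = mul(g, h)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

assert len(closure(refl_mats(-1, 0, -1))) == 24   # Weyl group of A_3

# relation (6.3): for x_1 = -1 the product s_1 s_2 has order 3
s1, s2, s3 = refl_mats(-1, 5, 7)          # x_2, x_3 arbitrary
p = mul(s1, s2)
assert mul(p, mul(p, p)) == E3
```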
(d) The proof of Theorem \ref{t5.14} (b) (iii) shows \begin{eqnarray*} \Rad I^{(0)}&=&\ker\Phi_2(M)=\ker\Phi_1(M^{root})=\Z f_1,\\ \oooo{H_\Z}^{(0)}&=&\Z\oooo{e_1}^{(0)}\oplus\Z\oooo{e_2}^{(0)},\\ && \oooo{I}^{(0)}((\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})^t, (\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})) =\begin{pmatrix}2&-1\\-1&2\end{pmatrix}, \end{eqnarray*} so $(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)})$ is an $A_2$ root lattice. This was treated in Theorem \ref{t6.8} (b). We have \begin{eqnarray*} O^{(0),Rad}_s&=&\Aut(\oooo{H_\Z}^{(0)}, \oooo{I}^{(0)})\cong D_{12},\\ \oooo{e_3}^{(0)}&=&-\oooo{e_1}^{(0)}-\oooo{e_2}^{(0)},\\ R^{(0)}(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)})&=& \{\pm\oooo{e_1}^{(0)},\pm\oooo{e_2}^{(0)},\pm\oooo{e_3}^{(0)}\},\\ \oooo{\Gamma^{(0)}}=\Gamma^{(0)}_s &=&\langle \oooo{s^{(0)}_{e_1}},\oooo{s^{(0)}_{e_2}}\rangle \cong \Gamma^{(0)}(A_2)\cong D_6\cong S_3,\\ \Gamma^{(0)}_s&=& (\ker \tau^{(0)})_s\stackrel{1:2}{\subset} O^{(0),Rad}_s. \end{eqnarray*} Observe also \begin{eqnarray*} &&s^{(0)}_{e_2}s^{(0)}_{e_1}s^{(0)}_{e_2}s^{(0)}_{e_3}(\uuuu{e})\\ &=&\uuuu{e} \begin{pmatrix}1&0&0\\1&-1&1\\0&0&1\end{pmatrix} \begin{pmatrix}-1&1&1\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\1&-1&1\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\1&1&-1\end{pmatrix}\\ &=&\uuuu{e}+f_1(1,1,-2) = T(\oooo{j}^{(0)}(-\oooo{e_3}^{(0)}) \otimes f_1)(\uuuu{e}), \end{eqnarray*} so $T(\oooo{j}^{(0)}(-\oooo{e_3}^{(0)})\otimes f_1)\in \Gamma^{(0)}_u$. Compare Lemma \ref{t6.2} (f) and recall that $\Gamma^{(0)}_s$ acts transitively on $\{\pm \oooo{e_1}^{(0)}, \pm\oooo{e_2}^{(0)},\pm\oooo{e_3}^{(0)}\}$.
Therefore $T(\oooo{j}^{(0)}(\oooo{e_j}^{(0)})\otimes f_1) \in\Gamma^{(0)}_u$ for $j\in\{1,2,3\}$ with \begin{eqnarray*} T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1)(\uuuu{e}) &=&\uuuu{e}+f_1(2,-1,-1),\\ T(\oooo{j}^{(0)}(\oooo{e_2}^{(0)})\otimes f_1)(\uuuu{e}) &=&\uuuu{e}+f_1(-1,2,-1), \end{eqnarray*} so \begin{eqnarray*} \Gamma^{(0)}_u&=&T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes f_1) =(\ker \tau^{(0)})_u \\ &\subset& T(\oooo{H_\Z}^{(0),\sharp}\otimes f_1) =O^{(0),Rad}_u\cong\Z^2. \end{eqnarray*} $T(\oooo{H_\Z}^{(0),\sharp}\otimes f_1)$ is generated by \begin{eqnarray*} (\uuuu{e}\mapsto \uuuu{e}+f_1(1,-1,0))\quad\textup{and}\quad (\uuuu{e}\mapsto \uuuu{e}+f_1(0,1,-1)). \end{eqnarray*} Therefore $\Gamma^{(0)}_u\stackrel{1:3}{\subset} O^{(0),Rad}_u$. Together $\Gamma^{(0)}_s=(\ker\tau^{(0)})_s \stackrel{1:2}{\subset}O^{(0),Rad}_s$ and $\Gamma^{(0)}_u=(\ker\tau^{(0)})_u \stackrel{1:3}{\subset}O^{(0),Rad}_u$ show \begin{eqnarray*} \Gamma^{(0)}=\ker\tau^{(0)}\stackrel{1:6}{\subset} O^{(0),Rad}. \end{eqnarray*} Equation \eqref{6.7} and $x_1=-1$ show $\langle s^{(0)}_{e_1},s^{(0)}_{e_2}\rangle\cong D_6$, so $\Gamma^{(0)}_s\cong \langle s^{(0)}_{e_1}, s^{(0)}_{e_2}\rangle\subset \Gamma^{(0)}$, so the exact sequence \eqref{6.5} splits non-canonically. (e) Recall from the proof of Theorem \ref{t5.14} (a) (i) that \begin{eqnarray*} \uuuu{f}=(f_1,f_2,f_3):=\uuuu{e} \begin{pmatrix}1&0&1\\1&1&1\\0&1&1\end{pmatrix} \end{eqnarray*} is a $\Z$-basis of $H_\Z$ and \begin{eqnarray*} \Rad I^{(0)}&=& \Z f_1\oplus \Z f_2,\\ H_\Z&=& \Z f_3\oplus \Rad I^{(0)},\\ \oooo{H_\Z}^{(0)}&=& \Z \oooo{f_3}^{(0)}. \end{eqnarray*} Also observe \begin{eqnarray*} s^{(0)}_{e_i}|_{\Rad I^{(0)}}&=&\id\quad\textup{ for } i\in\{1,2,3\},\\ s^{(0)}_{e_1}(f_3)&=&-f_3+2f_2,\\ s^{(0)}_{e_2}(f_3)&=&-f_3+2f_1+2f_2,\\ s^{(0)}_{e_3}(f_3)&=&-f_3+2f_1,\\ \oooo{\Gamma^{(0)}}=\Gamma^{(0)}_s &=&\{\pm\id\} =(\ker \tau^{(0)})_s=O^{(0),Rad}_s \cong \Gamma^{(0)}(A_1)\cong\{\pm 1\}.
\end{eqnarray*} Therefore \begin{eqnarray*} \Gamma^{(0)}_u\owns s^{(0)}_{e_1} s^{(0)}_{e_2}= (\uuuu{f}\mapsto \uuuu{f}+2f_1(0,0,1)) =T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_1),\\ \Gamma^{(0)}_u\owns s^{(0)}_{e_3} s^{(0)}_{e_2}= (\uuuu{f}\mapsto \uuuu{f}+2f_2(0,0,1)) =T(\oooo{j}^{(0)}(\oooo{f_3}^{(0)})\otimes f_2), \end{eqnarray*} so \begin{eqnarray*} \Gamma^{(0)}_u&=&(\ker \tau^{(0)})_u =T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes \Rad I^{(0)})\\ &\stackrel{1:4}{\subset}& O^{(0),Rad}_u =T(\oooo{H_\Z}^{(0),\sharp}\otimes \Rad I^{(0)})\\ &=&\langle (\uuuu{f}\mapsto \uuuu{f}+f_1(0,0,1)), (\uuuu{f}\mapsto \uuuu{f}+f_2(0,0,1))\rangle\cong\Z^2. \end{eqnarray*} Together the statements on $\Gamma^{(0)}_s$ and $\Gamma^{(0)}_u$ imply \begin{eqnarray*} \Gamma^{(0)}=\ker\tau^{(0)} \stackrel{1:4}{\subset} O^{(0),Rad}. \end{eqnarray*} The exact sequence \eqref{6.5} splits non-canonically with $\Gamma^{(0)}_s=\{\pm \id\}\cong \langle -M\rangle \subset\Gamma^{(0)}$ (for example). (f) Recall from the proof of Theorem \ref{t5.14} (a) (ii) and (b) (iv) \begin{eqnarray*} \Rad I^{(0)}&=& \Z f_1\quad\textup{with}\quad f_1=e_1-e_3,\quad\textup{so}\\ \oooo{H_\Z}^{(0)}&=& \Z\oooo{e_1}^{(0)}\oplus\Z \oooo{e_2}^{(0)},\\ &&\oooo{I}^{(0)}((\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})^t, (\oooo{e_1}^{(0)},\oooo{e_2}^{(0)})) =\begin{pmatrix}2&-l\\-l&2\end{pmatrix}. \end{eqnarray*} Observe \begin{eqnarray*} s^{(0),mat}_{e_1} =\begin{pmatrix}-1&l&-2\\0&1&0\\0&0&1\end{pmatrix},\ s^{(0),mat}_{e_2} =\begin{pmatrix}1&0&0\\l&-1&l\\0&0&1\end{pmatrix},\\ s^{(0),mat}_{e_3} =\begin{pmatrix}1&0&0\\0&1&0\\-2&l&-1\end{pmatrix},\\ \oooo{s_{e_1}^{(0)}}(\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}) =\oooo{s_{e_3}^{(0)}}(\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}) =(\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}) \begin{pmatrix}-1&l\\0&1\end{pmatrix},\\ \oooo{s_{e_2}^{(0)}}(\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}) =(\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}) \begin{pmatrix}1&0\\l&-1\end{pmatrix}.
\end{eqnarray*} Theorem \ref{t6.8} (d) shows \begin{eqnarray*} \oooo{\Gamma^{(0)}}=\Gamma^{(0)}_s \cong \Gamma^{(0)}(S(-l))\cong G^{fCox,2}. \end{eqnarray*} Therefore with respect to the generators $\oooo{s_{e_1}^{(0)}},\oooo{s_{e_2}^{(0)}}$ and $\oooo{s_{e_3}^{(0)}}$, all relations in $\Gamma^{(0)}_s$ are generated by the relations \begin{eqnarray*} (\oooo{s_{e_1}^{(0)}})^2=(\oooo{s_{e_2}^{(0)}})^2 =\oooo{s_{e_1}^{(0)}}\ \oooo{s_{e_3}^{(0)}}=\id. \end{eqnarray*} Therefore $\Gamma^{(0)}_u$ is generated by the set $\{g s^{(0)}_{e_1}s^{(0)}_{e_3}g^{-1}\,|\, g\in\Gamma^{(0)}\}$ of conjugates of $s^{(0)}_{e_1}s^{(0)}_{e_3}$. Observe \begin{eqnarray*} s^{(0)}_{e_1}s^{(0)}_{e_3}(\uuuu{e}) &=&\uuuu{e}\begin{pmatrix}-1&l&-2\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\-2&l&-1\end{pmatrix}\\ &=&\uuuu{e}\begin{pmatrix}3&-l&2\\0&1&0\\-2&l&-1\end{pmatrix} =\uuuu{e}+f_1(2,-l,2),\\ \textup{so }s^{(0)}_{e_1}s^{(0)}_{e_3} &=& T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1), \end{eqnarray*} and recall Lemma \ref{t6.2} (f). The $\Z$-lattice generated by the $\Gamma^{(0)}$ orbit of $e_1$ is $\Z e_1\oplus \Z le_2\oplus \Z \gcd(2,l)e_3$. Therefore \begin{eqnarray*} \Gamma^{(0)}_u = \langle T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1), T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1)\rangle\cong\Z^2 \textup{ with}\\ T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1)(\uuuu{e}) =\uuuu{e}+f_1(-l^2,2l,-l^2). \end{eqnarray*} Compare \begin{eqnarray*} (\ker \tau^{(0)})_u &=& T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes f_1)\\ &=&\langle T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1), T(\oooo{j}^{(0)}(\oooo{e_2}^{(0)})\otimes f_1)\rangle,\\ O^{(0),Rad}_u &=& T(\oooo{H_\Z}^{(0),\sharp}\otimes f_1)\\ &=&\langle (\uuuu{e}\mapsto \uuuu{e}+f_1(1,0,1)), (\uuuu{e}\mapsto\uuuu{e}+f_1(0,1,0))\rangle. \end{eqnarray*} Therefore \begin{eqnarray*} \Gamma^{(0)}_u\stackrel{1:l}{\subset} (\ker\tau^{(0)})_u \stackrel{1:(l^2-4)}{\subset} O^{(0),Rad}_u\cong\Z^2.
\end{eqnarray*} Theorem \ref{t6.8} (e) also shows \begin{eqnarray*} \Gamma^{(0)}_s =(\ker\tau^{(0)})_s\cap \ker\oooo{\sigma} \stackrel{1:4}{\subset} O^{(0),Rad}_s. \end{eqnarray*} Therefore \begin{eqnarray*} \Gamma^{(0)}\stackrel{1:l}{\subset} O^{(0),*} \stackrel{1:4(l^2-4)}{\subset} O^{(0),Rad}. \end{eqnarray*} (g) By Lemma \ref{t5.7} (b) $\sign I^{(0)}=(+--)$. We will apply Theorem \ref{ta.4} with $I^{[0]}=I^{(0)}$ and Theorem \ref{ta.2} (b). The vectors $e_1,e_2$ and $e_3$ are positive. By Theorem \ref{ta.4} (c) (vii) the reflection $s^{(0)}_{e_i}$ acts on the model $\KK/\R^*$ of the hyperbolic plane as an elliptic element of order $2$ with fixed point $\R^*e_i\in\KK/\R^*$. Consider the three vectors $v_1,v_2,v_3\in H_\Z\subset H_\R$ \begin{eqnarray*} v_1&:=& -x_3e_1+x_2e_2+x_1e_3,\\ v_2&:=& x_3e_1-x_2e_2+x_1e_3,\\ v_3&:=& x_3e_1+x_2e_2-x_1e_3, \end{eqnarray*} and observe \begin{eqnarray*} v_1+v_2= 2x_1e_3,\ v_1+v_3=2x_2e_2,\ v_2+v_3=2x_3e_1,\\ I^{(0)}(v_i,v_i)=2r(\uuuu{x})\leq 0. \end{eqnarray*} The three planes $\R v_1\oplus \R v_2$, $\R v_1\oplus \R v_3$ and $\R v_2\oplus \R v_3$ contain the lines $\R e_3$, $\R e_2$ respectively $\R e_1$. Any two of the three planes intersect in one of the lines $\R v_1$, $\R v_2$ and $\R v_3$, and these three lines do not meet $\KK$. Therefore the three hyperbolic lines $((\R v_1\oplus\R v_2)\cap\KK)/\R^*$, $((\R v_1\oplus\R v_3)\cap\KK)/\R^*$ and $((\R v_2\oplus\R v_3)\cap\KK)/\R^*$ in $\KK/\R^*$ contain the points $\R^* e_3$, $\R^* e_2$ respectively $\R^* e_1$ and pairwise do not meet. Now Theorem \ref{ta.2} (b) shows that the group of automorphisms of $\KK/\R^*$ which is induced by $\Gamma^{(0)}$ is isomorphic to $G^{fCox,3}$. Therefore also $\Gamma^{(0)}$ itself is isomorphic to $G^{fCox,3}$.\hfill$\Box$ \begin{remarks}\label{t6.12} (i) In part (g) of Theorem \ref{t6.11} we have less information than in the other cases.
We do not even know in which cases in part (g) $\Gamma^{(0)}=O^{(0),*}$ holds and in which cases $\Gamma^{(0)}\subsetneqq O^{(0),*}$ holds. (ii) In the case of $S(\P^2)$, the proof of Theorem \ref{t6.11} (g) gave three hyperbolic lines in the model $\KK/\R^*$ which form a degenerate hyperbolic triangle, i.e. one with vertices on the euclidean boundary of the hyperbolic plane. These vertices are the lines $\R^* v_1$, $\R^* v_2$, $\R^* v_3$, which are isotropic in the case of $S(\P^2)$ because there $I^{(0)}(v_i,v_i)=2r(\uuuu{x})=0$. The reflections $s^{(0)}_{e_1}$, $s^{(0)}_{e_2}$, $s^{(0)}_{e_3}$ act as elliptic elements of order 2 with fixed points on these hyperbolic lines. Therefore the degenerate hyperbolic triangle is a fundamental domain of this action of $\Gamma^{(0)}$. (iii) Milanov \cite[4.1]{Mi19} had a different point of view on $\Gamma^{(0)}$ in the case of $S(\P^2)$. He gave an isomorphism $\Gamma^{(0)}\cong U$ to a certain subgroup $U$ of index 3 in $PSL_2(\Z)$. First we describe $U$ in (iv), then we present our way to see this isomorphism in (v). (iv) The class in $PSL_2(\Z)$ of a matrix $A\in SL_2(\Z)$ is denoted by $[A]$. It is well known that there is an isomorphism of the free product of $\Z/2\Z$ and $\Z/3\Z$ with $PSL_2(\Z)$, \begin{eqnarray*} \langle \alpha\,|\, \alpha^2=e\rangle * \langle \beta\,|\, \beta^3=e\rangle \to PSL_2(\Z), \\ \alpha\mapsto [\begin{pmatrix}0&-1\\1&0\end{pmatrix}],\ \beta\mapsto [\begin{pmatrix}-1&-1\\1&0\end{pmatrix}]. \end{eqnarray*} Consider the character \begin{eqnarray*} \chi:\langle \alpha\,|\, \alpha^2=e\rangle * \langle \beta\,|\, \beta^3=e\rangle \to \{1,e^{2\pi i/3},e^{2\pi i 2/3}\}, \ \alpha\mapsto 1,\ \beta\mapsto e^{2\pi i /3}, \end{eqnarray*} and the corresponding character $\www{\chi}$ on $PSL_2(\Z)$.
Then \begin{eqnarray*} U&=&\ker\www{\chi}\stackrel{1:3}{\subset} PSL_2(\Z),\\ \ker\chi&=& \langle \alpha,\beta\alpha\beta^2, \beta^2\alpha\beta\rangle\\ \textup{with}&& \alpha\simeq [F_1],\ \beta\alpha\beta^2\simeq[F_2],\ \beta^2\alpha\beta\simeq [F_3]\textup{ and}\\ F_1&=&\begin{pmatrix}0&-1\\1&0\end{pmatrix},\quad F_2=\begin{pmatrix}-1&-2\\1&1\end{pmatrix},\quad F_3=\begin{pmatrix}-1&-1\\2&1\end{pmatrix}. \end{eqnarray*} It is easy to see $\ker\chi\cong G^{fCox,3}$, with the three generators $\alpha$, $\beta\alpha\beta^2$, $\beta^2\alpha\beta$. It is well known and easy to see that \begin{eqnarray*} \langle F_1,F_2,F_3\rangle =\{\begin{pmatrix}a&b\\c&d\end{pmatrix} \in SL_2(\Z)\,|\, a\equiv d\equiv 0\mmod 3\textup{ or }b\equiv c\mmod 3\}. \end{eqnarray*} The M\"obius transformations $\mu(F_1)$, $\mu(F_2)$, $\mu(F_3)$ are elliptic of order 2 with fixed points $z_1=i$, $z_2=-1+i$, $z_3=-\frac{1}{2}+\frac{1}{2}i$. The hyperbolic lines $\www{l_1}:=A(\infty,0)$, $\www{l_2}:=A(-1,\infty)$, $\www{l_3}:=A(0,-1)$ (notations from the Remarks and Notations \ref{ta.1} (iii)) form a degenerate hyperbolic triangle, and $\www{l_i}$ contains $z_i$. (v) Consider the matrices \begin{eqnarray*} B&:=&\begin{pmatrix}z_1\oooo{z_1} & -z_2\oooo{z_2} & 2z_3\oooo{z_3}\\ \Ree(z_1) & -\Ree(z_2) & 2\Ree(z_3) \\ 1&-1&2\end{pmatrix} =\begin{pmatrix} 1&-2&1\\0&1&-1\\1&-1&2\end{pmatrix} \quad\textup{and}\\ B^{-1}&=&\frac{1}{2}\begin{pmatrix}1&3&1\\-1&1&1\\-1&-1&1 \end{pmatrix}. \end{eqnarray*} One checks \begin{eqnarray*} B^t\begin{pmatrix}0&0&1\\0&-2&0\\1&0&0\end{pmatrix}B &=&\begin{pmatrix}2&-3&3\\-3&2&-3\\3&-3&2\end{pmatrix} =S(\P^2)+S(\P^2)^t,\\ B^{-1}\Theta(F_i)B&=& -s_{e_i}^{(0),mat}\quad \textup{for }i\in\{1,2,3\} \end{eqnarray*} (see Theorem \ref{ta.4} (i) for $\Theta$). The $\Z$-basis $\uuuu{e}$ of $H_\Z$ and the $\R$-basis $\uuuu{f}$ of $H_\R$ in Theorem \ref{ta.4} are related by $\uuuu{e}=\uuuu{f}\cdot B$.
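The finite matrix checks above can be reproduced directly; the following sketch (ours, not part of the paper) re-verifies $BB^{-1}=E_3$, the identity $B^t\begin{pmatrix}0&0&1\\0&-2&0\\1&0&0\end{pmatrix}B=S(\P^2)+S(\P^2)^t$, and the fixed points $z_i$ of the M\"obius transformations $\mu(F_i)$ from (iv).

```python
# Re-checking the finite computations in (iv) and (v) (our sketch):
# B * B^{-1} = E_3, B^t * [[0,0,1],[0,-2,0],[1,0,0]] * B equals the displayed
# matrix S(P^2) + S(P^2)^t, and mu(F_i) fixes z_i for i = 1,2,3.
from fractions import Fraction as Fr

B    = [[1, -2, 1], [0, 1, -1], [1, -1, 2]]
Binv = [[Fr(1, 2) * v for v in row]
        for row in [[1, 3, 1], [-1, 1, 1], [-1, -1, 1]]]
J    = [[0, 0, 1], [0, -2, 0], [1, 0, 0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(a):
    return [list(row) for row in zip(*a)]

assert mul(B, Binv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert mul(tr(B), mul(J, B)) == [[2, -3, 3], [-3, 2, -3], [3, -3, 2]]

# mu(F_i) fixes the stated point z_i
Fs = [[[0, -1], [1, 0]], [[-1, -2], [1, 1]], [[-1, -1], [2, 1]]]
zs = [1j, -1 + 1j, -0.5 + 0.5j]
for ((a, b), (c, d)), z in zip(Fs, zs):
    assert abs((a*z + b) / (c*z + d) - z) < 1e-12
```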
The tuple $(v_1,v_2,v_3)$ in the proof of Theorem \ref{t6.11} (g) is \begin{eqnarray*} (v_1,v_2,v_3)&=&\uuuu{e} \begin{pmatrix}3&-3&-3\\3&-3&3\\-3&-3&3\end{pmatrix}\\ &=&\uuuu{f}\cdot B\cdot \begin{pmatrix}3&-3&-3\\3&-3&3\\-3&-3&3\end{pmatrix} =\uuuu{f}\cdot (-6) \begin{pmatrix}1&0&1\\-1&0&0\\1&1&0\end{pmatrix}. \end{eqnarray*} Finally observe that $\vartheta:\H\to \KK/\R^*$ in Theorem \ref{ta.4} extends to the euclidean boundary with \begin{eqnarray*} (\vartheta(-1),\vartheta(0),\vartheta(\infty)) =\R^*\cdot \uuuu{f}\cdot \begin{pmatrix}1&0&1\\-1&0&0\\1&1&0\end{pmatrix}. \end{eqnarray*} So the points $-1,0,\infty$ are mapped to the points $\R^* v_1$, $\R^* v_2$, $\R^* v_3$. The groups $\Gamma^{(0),mat}= \langle s^{(0),mat}_{e_i}\,|\, i\in\{1,2,3\}\rangle$ and $\langle -s^{(0),mat}_{e_i}\,|\, i\in\{1,2,3\}\rangle$ are isomorphic because $\Gamma^{(0),mat}$ does not contain $-E_3$; otherwise it would not be a free Coxeter group with three generators. Therefore the group $U=\langle [F_1],[F_2],[F_3]\rangle\subset PSL_2(\Z)$ is isomorphic to the group $\langle B^{-1}\Theta(F_i)B\,|\, i\in\{1,2,3\}\rangle =\langle -s_{e_i}^{(0),mat}\,|\, i\in\{1,2,3\}\rangle$ and to the groups $\Gamma^{(0),mat}$ and $\Gamma^{(0)}$. \end{remarks} Now we turn to the study of the set $R^{(0)}$ of roots and the subset $\Delta^{(0)}\subset R^{(0)}$ of vanishing cycles. For the set $R^{(0)}$ Theorem \ref{t6.11} (a) gave the general formula $R^{(0)}=\{y_1e_1+y_2e_2+y_3e_3\in H_\Z\,|\, 1=Q_3(y_1,y_2,y_3)\}$ with the quadratic form \begin{eqnarray*} Q_3:\Z^3\to\Z,\quad (y_1,y_2,y_3)\mapsto y_1^2+y_2^2+y_3^2+x_1y_1y_2+x_2y_1y_3+x_3y_2y_3. \end{eqnarray*} It also gave good control of $\Gamma^{(0)}$ for all cases $S(\uuuu{x})$ with $\uuuu{x}\in\Z^3$. With respect to $\Delta^{(0)}$ and $R^{(0)}$ we know less. We have good control of them for the cases with $r(\uuuu{x})\in\{0,1,2,4\}$ and the reducible cases, but not for all of the other cases.
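As an illustration of the formula for $R^{(0)}$, one can enumerate the solutions of $Q_3=1$ in a small box. The following sketch (ours, not part of the paper) does this for the $A_3$ case $\uuuu{x}=(-1,0,-1)$, where $I^{(0)}$ is positive definite, and recovers exactly the twelve roots listed in Theorem \ref{t6.14} (c).

```python
# Illustration (our sketch): R^(0) = {y : Q_3(y) = 1}.  For the A_3 case
# x = (-1,0,-1) the form I^(0) is positive definite, and the solutions in a
# small box are exactly the twelve roots listed in Theorem t6.14 (c).

def Q3(y, x):
    (y1, y2, y3), (x1, x2, x3) = y, x
    return y1*y1 + y2*y2 + y3*y3 + x1*y1*y2 + x2*y1*y3 + x3*y2*y3

box = range(-2, 3)
roots = {(y1, y2, y3) for y1 in box for y2 in box for y3 in box
         if Q3((y1, y2, y3), (-1, 0, -1)) == 1}

# +-{e1, e2, e3, e1+e2, e2+e3, e1+e2+e3} in coordinates
expected = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1)}
expected |= {(-a, -b, -c) for (a, b, c) in expected}
assert roots == expected and len(roots) == 12
```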
Theorem \ref{t6.14} treats all cases except those in the Remarks \ref{t6.13} (ii). \begin{remarks}\label{t6.13} (i) The cases $S(\HH_{1,2})$, $S(-l,2,-l)$ for $l\geq 3$ and the four cases $S(3,3,4)$, $S(4,4,4)$, $S(5,5,5)$, $S(4,4,8)$ (more precisely their $\Br_3\ltimes\{\pm 1\}^3$ orbits) are the only cases in rank 3 where we know $\Delta^{(0)}\subsetneqq R^{(0)}$. (ii) We do not know whether $\Delta^{(0)}=R^{(0)}$ or $\Delta^{(0)}\subsetneqq R^{(0)}$ in the following cases: \begin{list}{}{} \item[(a)] All cases $S(\uuuu{x})$ with $r(\uuuu{x})<0$ except the four cases $S(3,3,4)$, $S(4,4,4)$, $S(5,5,5)$, $S(4,4,8)$. With the action of $\Br_3\ltimes\{\pm 1\}^3$ and Theorem \ref{t4.6} (c) they can be reduced to the cases $S(\uuuu{x})$ with $\uuuu{x}\in\Z^3_{\geq 3}$, $r(\uuuu{x})<0$, $x_i\leq\frac{1}{2}x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$. \item[(b)] The irreducible cases $S(\uuuu{x})$ with $\uuuu{x}\in \Z^3_{\leq 0}$, $r(\uuuu{x})>4$ and $\uuuu{x}\notin \{0,-1,-2\}^3$. \end{list} \end{remarks} \begin{theorem}\label{t6.14} (a) Consider the reducible cases (these include $S(A_1^3)$, $S(A_2A_1)$, $S(\P^1A_1)$). More precisely, suppose that $\uuuu{x}=(x_1,0,0)$. Then the tuple $(H_\Z,L,\uuuu{e})$ splits into the two tuples $(\Z e_1\oplus \Z e_2,L|_{\Z e_1\oplus \Z e_2},(e_1,e_2))$ and $(\Z e_3,L|_{\Z e_3},e_3)$ with sets $\Delta^{(0)}_1=R^{(0)}_1\subset \Z e_1\oplus \Z e_2$ and $\Delta^{(0)}_2=R^{(0)}_2=\{\pm e_3\}\subset\Z e_3$ of vanishing cycles and roots, and \begin{eqnarray*} \Delta^{(0)}=\Delta^{(0)}_1\ \dot\cup\ \Delta^{(0)}_2 =R^{(0)}=R^{(0)}_1\ \dot\cup\ R^{(0)}_2. \end{eqnarray*} $\Delta^{(0)}_1=R^{(0)}_1$ is given in Theorem \ref{t6.8}. (b) Consider $S(\uuuu{x})$ with $\uuuu{x}\in\{0,-1,-2\}^3$ and $r(\uuuu{x})>4$. Then $\Delta^{(0)}=R^{(0)}$. (c) The case $S(A_3)$ is classical. There \begin{eqnarray*} \Delta^{(0)}=R^{(0)}=\{\pm e_1,\pm e_2,\pm e_3, \pm (e_1+e_2),\pm (e_2+e_3),\pm (e_1+e_2+e_3)\}. 
\end{eqnarray*} (d) The case $S(\whh{A}_2)$: Recall $\Rad I^{(0)}=\Z f_1$ with $f_1=e_1+e_2+e_3$. There \begin{eqnarray*} \Delta^{(0)}&=&R^{(0)}=\Gamma^{(0)}\{e_1\}\\ &=& (\pm e_1+\Z f_1)\ \dot\cup\ (\pm e_2+\Z f_1)\ \dot\cup\ (\pm e_3+\Z f_1). \end{eqnarray*} (e) The case $S(\HH_{1,2})$: Recall $(f_1,f_2,f_3)=\uuuu{e} \begin{pmatrix}1&0&1\\1&1&1\\0&1&1\end{pmatrix}$ and $\Rad I^{(0)}=\Z f_1\oplus \Z f_2$. The set of roots is \begin{eqnarray*} R^{(0)}= \pm e_1+\Rad I^{(0)} = (e_1+\Rad I^{(0)})\ \dot\cup\ (-e_1+\Rad I^{(0)}), \end{eqnarray*} with \begin{eqnarray*} e_1+\Rad I^{(0)} = -e_2+\Rad I^{(0)} = e_3+\Rad I^{(0)} = f_3+\Rad I^{(0)}. \end{eqnarray*} It splits into the four $\Gamma^{(0)}$ orbits \begin{eqnarray*} \Gamma^{(0)}\{e_1\}=\pm e_1+2\Rad I^{(0)},&& \Gamma^{(0)}\{e_2\}=\pm e_2+2\Rad I^{(0)},\\ \Gamma^{(0)}\{e_3\}=\pm e_3+2\Rad I^{(0)},&& \Gamma^{(0)}\{f_3\}=\pm f_3+2\Rad I^{(0)}. \end{eqnarray*} The set $\Delta^{(0)}$ of vanishing cycles consists of the first three of these sets, \begin{eqnarray*} \Delta^{(0)}=\Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_2\} \ \dot\cup\ \Gamma^{(0)}\{e_3\}, \end{eqnarray*} so $\Delta^{(0)}\subsetneqq R^{(0)}$. (f) The cases $S(-l,2,-l)$ with $l\geq 3$: Recall $\Rad I^{(0)}=\Z f_1$ with $f_1=e_1-e_3$. As the tuple $(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)}, (\oooo{e_1}^{(0)},\oooo{e_2}^{(0)}))$ is isomorphic to the corresponding tuple from the $2\times 2$ matrix $S(-l)=\begin{pmatrix}1&-l\\0&1\end{pmatrix}$, its set of roots and its set of even vanishing cycles coincide by Theorem \ref{t6.8}. This common set is called $R^{(0)}(S(-l))$. Then \begin{eqnarray*} R^{(0)} = \{\www{y}_1e_1+\www{y}_2e_2\in H_\Z^{(0)}\,|\, \www{y}_1\oooo{e_1}^{(0)}+\www{y}_2\oooo{e_2}^{(0)}\in R^{(0)}(S(-l))\} + \Rad I^{(0)}.
\end{eqnarray*} The cases with $l$ even: $R^{(0)}$ splits into the following $l+2$ $\Gamma^{(0)}$ orbits, \begin{eqnarray*} \Gamma^{(0)}\{e_1\},\quad \Gamma^{(0)}\{e_3\},\quad \Gamma^{(0)}\{e_2+mf_1\}\textup{ for }m\in\{0,1,...,l-1\}. \end{eqnarray*} The set $\Delta^{(0)}$ of vanishing cycles consists of the first three $\Gamma^{(0)}$ orbits, \begin{eqnarray*} \Delta^{(0)}=\Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_3\} \ \dot\cup\ \Gamma^{(0)}\{e_2\}. \end{eqnarray*} The cases with $l$ odd: $R^{(0)}$ splits into the following $l+1$ $\Gamma^{(0)}$ orbits, \begin{eqnarray*} \Gamma^{(0)}\{e_1\}=\Gamma^{(0)}\{e_3\},\quad \Gamma^{(0)}\{e_2+mf_1\}\textup{ for }m\in\{0,1,...,l-1\}. \end{eqnarray*} The set $\Delta^{(0)}$ of vanishing cycles consists of the first two $\Gamma^{(0)}$ orbits, \begin{eqnarray*} \Delta^{(0)}=\Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_2\}. \end{eqnarray*} In both cases $\Delta^{(0)}\subsetneqq R^{(0)}$. (g) The case $S(\P^2)$. Then $\Delta^{(0)}=R^{(0)}$, and $R^{(0)}$ splits into three $\Gamma^{(0)}$ orbits, \begin{eqnarray*} R^{(0)}&=& \Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_2\} \ \dot\cup\ \Gamma^{(0)}\{e_3\}\quad\textup{with}\\ \Gamma^{(0)}\{e_i\}&\subset& \pm e_i+3H_\Z\quad \textup{for }i\in\{1,2,3\} \end{eqnarray*} (but we would like to have better control of $R^{(0)}$). (h) The cases $S(3,3,4)$, $S(4,4,4)$, $S(5,5,5)$ and $S(4,4,8)$. Then $$\Delta^{(0)}\subsetneqq R^{(0)}.$$ \end{theorem} {\bf Proof:} (a) The splittings $\Delta^{(0)}=\Delta^{(0)}_1\ \dot\cup\ \Delta^{(0)}_2$ and $R^{(0)}=R^{(0)}_1\ \dot\cup\ R^{(0)}_2$ are part of Lemma \ref{t2.11}. Lemma \ref{t2.12} gives $\Delta^{(0)}_2=R^{(0)}_2=\{\pm e_3\}$ for $A_1$. Theorem \ref{t6.8} gives $\Delta^{(0)}_1=R^{(0)}_1$ for any rank 2 case. (b) The cases where $(H_\Z,L,\uuuu{e})$ is reducible are covered by part (a).
In any irreducible case, the bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis is \index{hyperbolic bilinear lattice} {\it hyperbolic} in the sense of the definition before Theorem 3.12 in \cite{HK16}, because $I^{(0)}$ is indefinite, while the submatrices $(2)$, $\begin{pmatrix}2&x_1\\x_1&2\end{pmatrix}$, $\begin{pmatrix}2&x_2\\x_2&2\end{pmatrix}$, $\begin{pmatrix}2&x_3\\x_3&2\end{pmatrix}$ of the matrix $I^{(0)}(\uuuu{e}^t,\uuuu{e})$ are positive definite or positive semidefinite. Theorem 3.12 in \cite{HK16} applies and gives $\Delta^{(0)}=R^{(0)}$. (c) This is classical. It also follows from \begin{eqnarray*} 2Q_3(y_1,y_2,y_3)&=& 2(y_1^2+y_2^2+y_3^2-y_1y_2-y_2y_3)\\ &=&y_1^2+(y_1-y_2)^2+(y_2-y_3)^2+y_3^2 \end{eqnarray*} and the transitivity of the action of $\Gamma^{(0)}$ on $R^{(0)}$. (d) The quotient lattice $(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)})$ is an $A_2$ lattice with set of roots $\{\pm \oooo{e_1}^{(0)},\pm \oooo{e_2}^{(0)},\pm \oooo{e_3}^{(0)} \}$. Therefore \begin{eqnarray*} R^{(0)}=(\pm e_1+\Z f_1)\ \dot\cup\ (\pm e_2+\Z f_1) \ \dot\cup\ (\pm e_3+\Z f_1). \end{eqnarray*} (One can prove this also using $2Q_3=(y_1-y_2)^2+(y_1-y_3)^2+(y_2-y_3)^2$.) $\Gamma^{(0)}_s\cong D_6$ acts transitively on the set $\{\pm \oooo{e_1}^{(0)},\pm \oooo{e_2}^{(0)},\pm \oooo{e_3}^{(0)} \}$. The group $\Gamma^{(0)}_u\cong \Z^2$ contains the elements \begin{eqnarray*} (\uuuu{e}\mapsto \uuuu{e}+f_1(2,-1,-1))\quad\textup{and}\quad (\uuuu{e}\mapsto \uuuu{e}+f_1(-1,2,-1)). \end{eqnarray*} Therefore it acts transitively on each of the six sets $\varepsilon e_i+\Z f_1$ with $\varepsilon\in\{\pm 1\}$, $i\in\{1,2,3\}$. Thus $\Gamma^{(0)}$ acts transitively on $R^{(0)}$, so $\Delta^{(0)}=R^{(0)}=\Gamma^{(0)}\{e_1\}$. (e) The quotient lattice $(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)})$ is an $A_1$ lattice with set of roots $\{\pm \oooo{e_1}^{(0)}\}$. Therefore \begin{eqnarray*} R^{(0)}=\pm e_1+\Rad I^{(0)} =(e_1+\Rad I^{(0)})\ \dot\cup\ (-e_1+\Rad I^{(0)}).
\end{eqnarray*} (One can prove this also using $Q_3=(y_1-y_2+y_3)^2$.) $s^{(0)}_{e_i}$ exchanges $e_i$ and $-e_i$, and $s^{(0)}_{e_1}$ maps $f_3$ to $-f_3+2f_2$. $\Gamma_u^{(0)}\cong \Z^2$ is generated by the elements \begin{eqnarray*} ((f_1,f_2,f_3)\mapsto (f_1,f_2,f_3+2f_1)) = (\uuuu{e}\mapsto \uuuu{e}+f_1(2,-2,2)),\\ ((f_1,f_2,f_3)\mapsto (f_1,f_2,f_3+2f_2)) = (\uuuu{e}\mapsto \uuuu{e}+f_2(2,-2,2)). \end{eqnarray*} Therefore $R^{(0)}$ splits into the four $\Gamma^{(0)}$ orbits \begin{eqnarray*} \Gamma^{(0)}\{e_1\} &=& \pm e_1+2\Rad I^{(0)},\quad \Gamma^{(0)}\{e_2\} = \pm e_2+2\Rad I^{(0)},\\ \Gamma^{(0)}\{e_3\} &=& \pm e_3+2\Rad I^{(0)},\quad \Gamma^{(0)}\{f_3\} = \pm f_3+2\Rad I^{(0)}. \end{eqnarray*} $\Delta^{(0)}$ consists of the first three of them. (f) The set of roots of the quotient lattice $(\oooo{H_\Z}^{(0)},\oooo{I}^{(0)})$ is called $R^{(0)}(S(-l))$. Theorem \ref{t6.8} describes it. Therefore \begin{eqnarray*} R^{(0)}=\{\www{y}_1e_1+\www{y}_2e_2\in H_\Z\,|\, \www{y}_1\oooo{e_1}^{(0)}+\www{y}_2\oooo{e_2}^{(0)} \in R^{(0)}(S(-l))\} + \Rad I^{(0)}. \end{eqnarray*} By Theorem \ref{t6.8} (d) (iv) and (c), $R^{(0)}(S(-l))$ splits into the two $\Gamma^{(0)}_s$ orbits $\Gamma^{(0)}_s\{\oooo{e_1}^{(0)}\}$ and $\Gamma^{(0)}_s\{\oooo{e_2}^{(0)}\}$, and the action of $\Gamma^{(0)}_s$ on each of these two orbits is simply transitive. $\Gamma^{(0)}_u\cong\Z^2$ is generated by the elements \begin{eqnarray*} (\uuuu{e}\mapsto \uuuu{e}+f_1(2,-l,2))\textup{ and } (\uuuu{e}\mapsto \uuuu{e}+f_1(-l^2,2l,-l^2)). \end{eqnarray*} Therefore for $m\in\{0,1,...,l-1\}$ \begin{eqnarray*} \Gamma^{(0)}\{e_2+m f_1\}\cap (e_2+\Z f_1) =e_2+mf_1+\Z lf_1. \end{eqnarray*} If $l$ is odd then $1=\gcd(2,l^2)$ and \begin{eqnarray*} \Gamma^{(0)}\{e_1\}\supset e_1+\Z f_1\owns e_3=e_1-f_1, \quad\textup{so }\Gamma^{(0)}\{e_1\}=\Gamma^{(0)}\{e_3\}. 
\end{eqnarray*} If $l$ is even then $2=\gcd(2,l^2)$ and \begin{eqnarray*} \Gamma^{(0)}\{e_1\}\cap (e_1+\Z f_1) =e_1+2\Z f_1 \not\owns e_3, \quad\textup{so }\Gamma^{(0)}\{e_1\}\cap\Gamma^{(0)}\{e_3\} =\emptyset. \end{eqnarray*} This shows all claims. (g) The matrices $s^{(0),mat}_{e_i}\in M_{3\times 3}(\Z)$ with $s^{(0)}_{e_i}(\uuuu{e})=\uuuu{e}\cdot s^{(0),mat}_{e_i}$ are \begin{eqnarray*} s^{(0),mat}_{e_1} = \begin{pmatrix} -1&3&-3\\0&1&0\\0&0&1\end{pmatrix},\\ s^{(0),mat}_{e_2} = \begin{pmatrix} 1&0&0\\3&-1&3\\0&0&1\end{pmatrix},\quad s^{(0),mat}_{e_3} = \begin{pmatrix} 1&0&0\\0&1&0\\-3&3&-1\end{pmatrix}. \end{eqnarray*} One sees $\Gamma^{(0)}\{e_i\}\subset \pm e_i+3H_\Z$ for $i\in\{1,2,3\}$. Therefore $\Delta^{(0)}$ splits into three $\Gamma^{(0)}$ orbits, \begin{eqnarray*} \Delta^{(0)} = \Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_2\} \ \dot\cup\ \Gamma^{(0)}\{e_3\}. \end{eqnarray*} It remains to show $\Delta^{(0)}=R^{(0)}$. Write $\www{\uuuu{e}}=(e_1,-e_2,e_3)$, so that \begin{eqnarray*} L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t=\www{S} = \begin{pmatrix}1&3&3\\0&1&3\\0&0&1\end{pmatrix}\quad\textup{and} \quad I^{(0)}(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t =\www{S}+\www{S}^t =\begin{pmatrix}2&3&3\\3&2&3\\3&3&2\end{pmatrix}.
\end{eqnarray*} The quadratic form $\www{Q}_3:\Z^3\to\Z$ with \begin{eqnarray*} \www{Q}_3(\uuuu{y})&=& \frac{1}{2} I^{(0)}((\www{\uuuu{e}} \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix})^t, \www{\uuuu{e}}\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}) =\frac{1}{2}(y_1\ y_2\ y_3) \begin{pmatrix}2&3&3\\3&2&3\\3&3&2\end{pmatrix} \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix}\\ &=&y_1^2+y_2^2+y_3^2+3(y_1y_2+y_1y_3+y_2y_3) \end{eqnarray*} can also be written in the following two ways which will be useful below, \begin{eqnarray}\label{6.8} \www{Q}_3(\uuuu{y})&=& (y_1+y_2)(y_1+y_3)+(y_1+y_2)(y_2+y_3) +(y_1+y_3)(y_2+y_3),\\ \www{Q}_3(\uuuu{y})&=& \frac{3}{2}(y_1+y_2+y_3)^2- \frac{1}{2}(y_1^2+y_2^2+y_3^2).\label{6.9} \end{eqnarray} We have $R^{(0)}=\{y_1\www{e_1}+y_2\www{e_2}+y_3\www{e_3} \in H_\Z\,|\, \www{Q}_3(\uuuu{y})=1\}.$ Define \begin{eqnarray*} \|a\| := \sqrt{y_1^2+y_2^2+y_3^2}\quad\textup{for}\quad a=\sum_{i=1}^3 y_ie_i\in H_\Z. \end{eqnarray*} \medskip {\bf Claim:} {\it For any $a\in R^{(0)}-\{\pm e_1,\pm e_2, \pm e_3\}$ an index $i\in\{1,2,3\}$ with \begin{eqnarray*} \| s^{(0)}_{e_i}(a)\| <\|a\| \end{eqnarray*} exists.} \medskip The Claim implies $\Delta^{(0)}=R^{(0)}$ because it says that any $a\in R^{(0)}$ can be mapped by a suitable sequence of reflections in $\{s^{(0)}_{e_1},s^{(0)}_{e_2},s^{(0)}_{e_3}\}$ to one of $\pm e_1,\pm e_2,\pm e_3$. It remains to prove the Claim. {\bf Proof of the Claim:} Suppose $a\in R^{(0)}-\{\pm e_1,\pm e_2,\pm e_3\}$ satisfies $\| s^{(0)}_{e_i}(a)\|\geq \|a\|$ for each $i\in\{1,2,3\}$. Write $a=y_1\www{e_1}+y_2\www{e_2}+y_3\www{e_3}$. For $j$ and $k$ with $\{i,j,k\}=\{1,2,3\}$ \begin{eqnarray*} &&\| s^{(0)}_{e_i}(a)\| \geq \|a\| \iff \| s^{(0)}_{e_i}(a)\|^2 \geq \|a\|^2 \\ &\iff& (-y_i-3y_j-3y_k)^2 + y_j^2+y_k^2 \geq y_i^2+y_j^2+y_k^2\\ &\iff& 6y_i(y_j+y_k) + 9 (y_j+y_k)^2 \geq 0 \\ &\iff& \left\{\begin{array}{ll} 3(y_j+y_k)\geq -2y_i &\textup{ if }y_j+y_k>0,\\ 3(y_j+y_k)\leq -2y_i &\textup{ if }y_j+y_k<0,\\ \textup{no condition} &\textup{ if }y_j+y_k=0.
\end{array}\right. \end{eqnarray*} $y_j+y_k=0$ is impossible, because otherwise by formula \eqref{6.8} \begin{eqnarray*} 1=\www{Q}_3(\uuuu{y})= (y_i+y_j)(y_i+y_k)=(y_i+y_j)(y_i-y_j)=y_i^2-y_j^2,\\ \quad\textup{so }y_i=\pm 1,y_j=y_k=0, \end{eqnarray*} which is excluded by $a\in R^{(0)}-\{\pm e_1,\pm e_2,\pm e_3\}$. Also $(y_1+y_2>0,y_1+y_3>0,y_2+y_3>0)$ and $(y_1+y_2<0,y_1+y_3<0,y_2+y_3<0)$ are impossible because of $1=\www{Q}_3(\uuuu{y})$ and \eqref{6.8}. We can suppose \begin{eqnarray*} y_1+y_2>0,\quad y_1+y_3>0,\quad y_2+y_3<0,\quad y_1\geq y_2\geq y_3. \end{eqnarray*} Then \begin{eqnarray*} y_1>0,\quad y_3\in\Z\cap [-y_1+1,-1],\quad y_2\in [y_3,-y_3-1],\\ 3(y_1+y_2)\geq -2y_3\quad\textup{ because of }y_1+y_2>0,\\ 3(y_2+y_3)\leq -2y_1\quad\textup{ because of }y_2+y_3<0, \end{eqnarray*} so \begin{eqnarray*} y_1&\geq& 3(y_1+y_2+y_3)\geq y_3\geq -y_1+1>-y_1,\\ |y_1|&\geq& 3|y_1+y_2+y_3|,\\ y_1^2&\geq& 9(y_1+y_2+y_3)^2,\\ \www{Q}_3(\uuuu{y}) &\stackrel{\eqref{6.9}}{=}& \frac{3}{2}(y_1+y_2+y_3)^2-\frac{1}{2}(y_1^2+y_2^2+y_3^2)\\ &\leq& \frac{1}{6}y_1^2-\frac{1}{2}(y_1^2+y_2^2+y_3^2)\leq 0, \end{eqnarray*} a contradiction. Therefore an $a\in R^{(0)}-\{\pm e_1,\pm e_2, \pm e_3\}$ with $\| s^{(0)}_{e_i}(a)\| \geq \| a\|$ for each $i\in\{1,2,3\}$ does not exist. The Claim is proved. (h) By Theorem \ref{t6.11} (g) $\Gamma^{(0)}$ is a free Coxeter group with generators $s_{e_1}^{(0)}$, $s_{e_2}^{(0)}$ and $s_{e_3}^{(0)}$. By Example \ref{t3.23} (iv) equality holds in \eqref{3.3}, so $$\BB^{dist}=\{\uuuu{v}\in (\Delta^{(0)})^3\,|\, s_{v_1}^{(0)}s_{v_2}^{(0)}s_{v_3}^{(0)}=-M\}$$ (see also Theorem \ref{t7.2} (a)). By Theorem \ref{t5.16} (a)+(b)+(d)+(e) $Q\in G_\Z-G_\Z^{\BB}$.
Lemma \ref{t3.22} (a) and $QMQ^{-1}=M$ give $$s_{Q(e_1)}^{(0)}s_{Q(e_2)}^{(0)}s_{Q(e_3)}^{(0)} =Qs_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_3}^{(0)}Q^{-1} =Q(-M)Q^{-1}=-M.$$ If $Q(e_1),Q(e_2),Q(e_3)$ were all in $\Delta^{(0)}$ then equality in \eqref{3.3} would imply $(Q(e_1),Q(e_2),Q(e_3))\in\BB^{dist}$ and $Q\in G_\Z^{\BB}$, a contradiction. So $Q(e_1),Q(e_2),Q(e_3)$ are not all in $\Delta^{(0)}$. But of course they are in $R^{(0)}$. \hfill $\Box$ \begin{remarks}\label{t6.15} (i) In view of Remark \ref{t6.13} (ii) it would be desirable to extend the proof of $\Delta^{(0)}=R^{(0)}$ in part (g) of Theorem \ref{t6.14} to other cases. The useful formulas \eqref{6.8} and \eqref{6.9} generalize as follows: \begin{eqnarray*} (x_1+x_2+x_3)Q_3(\uuuu{y}) = (x_1+x_2+x_3)(y_1^2+y_2^2+y_3^2)\\ -x_1x_2y_1^2-x_1x_3y_2^2-x_2x_3y_3^2 + (x_1y_2+x_2y_3)(x_1y_1+x_3y_3)\\ + (x_1y_2+x_2y_3)(x_2y_1+x_3y_2) + (x_1y_1+x_3y_3)(x_2y_1+x_3y_2). \end{eqnarray*} If $\uuuu{x}\in(\Z-\{0\})^3$ then \begin{eqnarray*} 2 Q_3(\uuuu{y})&=& x_1x_2x_3(\frac{y_1}{x_3}+\frac{y_2}{x_2} +\frac{y_3}{x_1})^2 -(x_1x_2x_3-2x_3^2)\Bigl(\frac{y_1}{x_3}\Bigr)^2 \\ &-&(x_1x_2x_3-2x_2^2)\Bigl(\frac{y_2}{x_2}\Bigr)^2 -(x_1x_2x_3-2x_1^2)\Bigl(\frac{y_3}{x_1}\Bigr)^2. \end{eqnarray*} Also the rephrasing in the proof of part (g) of the inequality $\| s^{(0)}_{e_i}(a)\| \geq \| a\|$ generalizes naturally. But the further arguments do not seem to generalize easily. (ii) In view of Theorem \ref{t6.14}, we know the following for $n=3$: \begin{eqnarray*} \Delta^{(0)}=R^{(0)}&&\textup{in the cases }S(\uuuu{x})\textup{ with } \uuuu{x}\in\{0,-1,-2\}^3\textup{ and }r(\uuuu{x})>4,\\ &&\textup{in all reducible cases }\\ &&\textup{and in the cases }A_3,\whh{A}_2,\P^2.\\ \Delta^{(0)}\subsetneqq R^{(0)}&&\textup{in the cases }\HH_{1,2}, \ S(-l,2,-l)\textup{ with }l\geq 3,\\ && S(3,3,4),\ S(4,4,4),\ S(5,5,5)\textup{ and }S(4,4,8).
\end{eqnarray*} In the following cases with $n=3$ we do not know whether $\Delta^{(0)}=R^{(0)}$ or $\Delta^{(0)}\subsetneqq R^{(0)}$ holds: All cases $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})<0$ except four cases and many cases $\uuuu{x}\in\Z^3$ with $r(\uuuu{x})>4$. \end{remarks} \section{The odd rank 3 cases}\label{s6.4} For $\uuuu{x}\in\Z^3$ consider the matrix $S=S(\uuuu{x}) =\begin{pmatrix}1&x_1&x_2\\0&1&x_3\\0&0&1\end{pmatrix} \in T^{uni}_3(\Z)$, and consider a unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}=(e_1,e_2,e_3)$ with $L(\uuuu{e}^t,\uuuu{e})^t=S$. In this section we will determine in all cases the odd monodromy group $\Gamma^{(1)}=\langle s_{e_1}^{(1)},s_{e_2}^{(1)}, s_{e_3}^{(1)}\rangle\subset O^{(1)}$ and in many, but not all cases the set $\Delta^{(1)}=\Gamma^{(1)}\{\pm e_1,\pm e_2,\pm e_3\}$ of odd vanishing cycles. Recall Remark \ref{t4.17}. The group $\Gamma^{(1)}$ and the set $\Delta^{(1)}$ are determined by the triple $(H_\Z,I^{(1)},\uuuu{e})$, and here $I^{(1)}$ is needed only up to the sign. By Remark \ref{t4.17} and Lemma \ref{t4.18} we can restrict to $\uuuu{x}$ in the union of the following three families. It will be useful to split each of the first two families into the three subfamilies on the right hand side. \begin{eqnarray*} &&(x_1,x_2,0)\textup{ with }x_1\geq x_2\geq 0,\quad \left\{\begin{array}{l} (x_1,0,0):\textup{ reducible cases,}\\ (1,1,0):\quad A_3\textup{ and }\whh{A}_2,\\ (x_1,x_2,0) \textup{ with }2\leq x_1\geq x_2>0, \end{array}\right. \\\ &&(-l,2,-l)\textup{ with }l\geq 2,\quad \left\{\begin{array}{l} (-l,2,-l)\textup{ with }l\equiv 0(4),\\ (-l,2,-l)\textup{ with }l\equiv 2(4) \textup{ (this includes }\HH_{1,2}),\\ (-l,2,-l)\textup{ with }l\equiv 1(2), \end{array}\right. \\ &&(x_1,x_2,x_3)\in\Z^3_{\geq 3}\textup{ with } 2x_i\leq x_jx_k\textup{ for }\{i,j,k\}=\{1,2,3\}\\ &&\hspace*{7cm}(\textup{this includes }\P^2). 
\end{eqnarray*} Recall \begin{eqnarray*} \uuuu{\www{x}}=(\www{x}_1,\www{x}_2,\www{x}_3):= \gcd(x_1,x_2,x_3)^{-1}(x_1,x_2,x_3)\quad\textup{for } \uuuu{x}\neq (0,0,0). \end{eqnarray*} Recall from section \ref{s5.3} the definition \begin{eqnarray*} f_3:=-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3\in H_\Z^{prim} \quad\textup{for }\uuuu{x}\neq (0,0,0) \end{eqnarray*} and the fact \begin{eqnarray*} \Rad I^{(1)}&=& \left\{\begin{array}{ll} \Z f_3&\textup{ if }\uuuu{x}\neq (0,0,0),\\ H_\Z &\textup{ if }\uuuu{x}=(0,0,0). \end{array}\right. \end{eqnarray*} Therefore in all cases except $\uuuu{x}=(0,0,0)$ the exact sequence \begin{eqnarray}\label{6.10} \{1\}\to\Gamma^{(1)}_u\to\Gamma^{(1)}\to \Gamma^{(1)}_s\to\{1\} \end{eqnarray} in Lemma \ref{t6.2} (d) is interesting. \begin{lemma}\label{t6.16} Suppose $x_1\neq 0$ (this holds in the three families above except for the case $\uuuu{x}=(0,0,0)$). (a) The sublattice $\Z\oooo{e_1}^{(1)} + \Z \oooo{e_2}^{(1)} \subset \oooo{H_\Z}^{(1)}$ has index $\www{x}_1$ in $\oooo{H_\Z}^{(1)}$. (b) $\oooo{I}^{(1)}$ is nondegenerate. For each $\Z$-basis $\uuuu{b}=(b_1,b_2)$ of $\oooo{H_\Z}^{(1)}$ \begin{eqnarray*} \oooo{I}^{(1)}(\uuuu{b}^t,\uuuu{b})=\varepsilon\gcd(x_1,x_2,x_3) \begin{pmatrix}0&-1\\1&0\end{pmatrix} \end{eqnarray*} for some $\varepsilon\in\{\pm 1\}$. Also $O^{(1),Rad}_s\cong SL_2(\Z)$. (c) \begin{eqnarray*} &&\oooo{e_1}^{(1)}\in \gcd(\www{x}_1,\www{x}_2) \oooo{H_\Z}^{(1),prim},\quad \oooo{e_2}^{(1)}\in \gcd(\www{x}_1,\www{x}_3) \oooo{H_\Z}^{(1),prim},\\ &&\oooo{e_3}^{(1)}\in \gcd(\www{x}_2,\www{x}_3) \oooo{H_\Z}^{(1),prim}\textup{ if } (\www{x}_2,\www{x}_3)\neq (0,0),\quad\textup{ else } \oooo{e_3}^{(1)}=0. 
\end{eqnarray*} \end{lemma} {\bf Proof:} (a) \begin{eqnarray*} \oooo{H_\Z}^{(1)}&=& \Z\oooo{e_1}^{(1)} + \Z\oooo{e_2}^{(1)} + \Z\oooo{e_3}^{(1)}\\ &=& \Z\oooo{e_1}^{(1)} + \Z\oooo{e_2}^{(1)} + \Z\frac{1}{\www{x}_1}(-\www{x}_3\oooo{e_1}^{(1)} +\www{x}_2\oooo{e_2}^{(1)})\\ &=& \Z\oooo{e_1}^{(1)} + \Z\oooo{e_2}^{(1)} + \Z\frac{\xi}{\www{x}_1}h_2\quad\textup{with}\\ \xi&:=& \gcd(\www{x}_2,\www{x}_3),\quad h_2:=-\frac{\www{x}_3}{\xi}\oooo{e_1}^{(1)} +\frac{\www{x}_2}{\xi}\oooo{e_2}^{(1)}. \end{eqnarray*} The element $h_2$ is in $(\Z \oooo{e_1}^{(1)}+\Z \oooo{e_2}^{(1)})^{prim}$. One can choose a second element $h_1\in \Z \oooo{e_1}^{(1)}+\Z \oooo{e_2}^{(1)}$ with $\Z \oooo{e_1}^{(1)}\oplus\Z \oooo{e_2}^{(1)} =\Z h_1\oplus \Z h_2$. Then \begin{eqnarray*} \Z h_2+\Z\frac{\xi}{\www{x}_1}h_2=\Z \frac{1}{\www{x}_1}h_2 \end{eqnarray*} because $\gcd(\www{x}_1,\xi)=1$. Therefore \begin{eqnarray*} \oooo{H_\Z}^{(1)} = (\Z h_1+\Z h_2)+\Z\frac{\xi}{\www{x}_1}h_2 =\Z h_1\oplus \Z\frac{1}{\www{x}_1}h_2 \stackrel{\www{x}_1:1}{\supset} \Z h_1\oplus \Z h_2. \end{eqnarray*} (b) $\oooo{I}^{(1)}(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)})=x_1\neq0$. With part (a) one sees \begin{eqnarray*} \oooo{I}^{(1)}(b_1,b_2)=\pm \frac{x_1}{\www{x}_1}=\pm \gcd(x_1,x_2,x_3). \end{eqnarray*} A rank two lattice with a nondegenerate skew-symmetric bilinear form has an automorphism group isomorphic to $SL_2(\Z)$. (c) The proof of part (a) and $1=\gcd(\www{x}_3,\gcd(\www{x}_1,\www{x}_2))$ show \begin{eqnarray*} \Q\oooo{e_1}^{(1)}\cap \oooo{H_\Z}^{(1)} =\Z\oooo{e_1}^{(1)} + \Z \frac{-\www{x}_3}{\gcd(\www{x}_1,\www{x}_2)}\oooo{e_1}^{(1)} =\Z \frac{1}{\gcd(\www{x}_1,\www{x}_2)}\oooo{e_1}^{(1)}. \end{eqnarray*} This shows $\oooo{e_1}^{(1)}\in \gcd(\www{x}_1,\www{x}_2) \oooo{H_\Z}^{(1),prim}$. Analogously $\oooo{e_2}^{(1)}\in \gcd(\www{x}_1,\www{x}_3) \oooo{H_\Z}^{(1),prim}$. If $(\www{x}_2,\www{x}_3)=(0,0)$ then $-\www{x}_1\oooo{e_3}^{(1)} =\oooo{f_3}^{(1)}=0$, so $\oooo{e_3}^{(1)}=0$.
If $\www{x}_2\neq 0$ or $\www{x}_3\neq 0$, formulas as in the proof of part (a) hold also for $\Z\oooo{e_1}^{(1)}+\Z\oooo{e_3}^{(1)}$ respectively $\Z\oooo{e_2}^{(1)}+\Z\oooo{e_3}^{(1)}$. In both cases one shows $\oooo{e_3}^{(1)}\in\gcd(\www{x}_2,\www{x}_3) \oooo{H_\Z}^{(1),prim}$ as above. \hfill$\Box$ \bigskip In Theorem \ref{t6.18} we will consider in many cases the three groups $\Gamma^{(1)}_u\subset (\ker\tau^{(1)})_u\subset O^{(1),Rad}_u$. Their descriptions in Lemma \ref{t6.2} (e) simplify because now $\Rad I^{(1)}=\Z f_3$ if $\uuuu{x}\neq (0,0,0)$. In the cases $\uuuu{x}=(-l,2,-l)$ with $l\equiv 2(4)$ also the larger group $$O^{(1),Rad}_{\pm}:=\{g\in O^{(1),Rad}\,|\, \oooo{g}=\pm\id\} \stackrel{2:1}{\supset} O^{(1),Rad}_u$$ will be considered. The following lemma fixes notation and gives a description of $O^{(1),Rad}_{\pm}$ similar to the one for $O^{(1),Rad}_u$ in Lemma \ref{t6.2} (e). It also makes $O^{(1),Rad}_u$ and $(\ker\tau^{(1)})_u$ more explicit, and, under some condition, $\Gamma^{(1)}_u$ and $\Gamma^{(1)}\cap O^{(1),Rad}_\pm$. \begin{lemma}\label{t6.17} Suppose $x_1\neq 0$. Denote \index{$t_\lambda^+,\ t_\lambda^-$} \index{$\Hom_{0\textup{ or }2}(H_\Z,\Z)$} \begin{eqnarray*} \Hom_0(H_\Z,\Z)&:=& \{\lambda:H_\Z\to \Z\,|\, \lambda \textup{ is }\Z\textup{-linear},\lambda(f_3)=0\},\\ \Hom_2(H_\Z,\Z)&:=& \{\lambda:H_\Z\to \Z\,|\, \lambda \textup{ is }\Z\textup{-linear},\lambda(f_3)=2\},\\ t^+_\lambda:H_\Z\to H_\Z&& \textup{with } t^+_\lambda(a)=a+\lambda(a)f_3\quad\textup{for } \lambda\in \Hom_0(H_\Z,\Z),\\ t^-_\lambda:H_\Z\to H_\Z&& \textup{with } t^-_\lambda(a)=-a+\lambda(a)f_3\quad\textup{for } \lambda\in \Hom_2(H_\Z,\Z). \end{eqnarray*} (a) Then $t^+_\lambda\in O^{(1),Rad}_u$ for $\lambda\in \Hom_0(H_\Z,\Z)$, and $t^-_\lambda\in O^{(1),Rad}_{\pm}-O^{(1),Rad}_u$ for $\lambda\in \Hom_2(H_\Z,\Z)$.
The maps \begin{eqnarray*} \Hom_0(H_\Z,\Z)&\to& O^{(1),Rad}_u,\quad \lambda\mapsto t^+_\lambda,\\ \Hom_2(H_\Z,\Z)&\to& O^{(1),Rad}_\pm-O^{(1),Rad}_u,\quad \lambda\mapsto t^-_\lambda, \end{eqnarray*} are bijections, and the first one is a group isomorphism. For $\lambda_1,\lambda_2\in \Hom_0(H_\Z,\Z)$ and $\lambda_3,\lambda_4\in \Hom_2(H_\Z,\Z)$ \begin{eqnarray*} t^+_{\lambda_2}\circ t^+_{\lambda_1} &=& t^+_{\lambda_2+\lambda_1},\quad t^-_{\lambda_3}\circ t^+_{\lambda_1} = t^-_{\lambda_3+\lambda_1},\\ t^+_{\lambda_1}\circ t^-_{\lambda_3} &=& t^-_{-\lambda_1+\lambda_3},\quad t^-_{\lambda_4}\circ t^-_{\lambda_3} = t^+_{-\lambda_4+\lambda_3},\\ &&\textup{and especially }(t^-_{\lambda_3})^2=\id. \end{eqnarray*} (b) \begin{eqnarray*} (\ker\tau^{(1)})_u=\{t^+_\lambda\,|\,\lambda(\uuuu{e})\in \langle (0,x_1,x_2),(-x_1,0,x_3),(x_2,x_3,0)\rangle_\Z\}. \end{eqnarray*} (c) If $\Gamma^{(1)}_u$ is the normal subgroup generated by $t^+_{\lambda_1}$ for some $\lambda_1\in\Hom_0(H_\Z,\Z)$, then $\Gamma^{(1)}_u=\{t^+_\lambda\,|\, \lambda(\uuuu{e}) \in L\}$ where $L\subset\Z^3$ is the smallest sublattice with $\lambda_1(\uuuu{e})\in L$ and $L\cdot (s_{e_i}^{(1),mat})^{\pm 1}\subset L$ for $i\in\{1,2,3\}$. (d) If $\Gamma^{(1)}\cap O^{(1),Rad}_\pm$ is the normal subgroup generated by $t^-_{\lambda_1}$ for some $\lambda_1\in\Hom_2(H_\Z,\Z)$, then \begin{eqnarray*} \Gamma^{(1)}_u&=&\{t^+_\lambda\,|\, \lambda(\uuuu{e}) \in L\}\quad\textup{and}\\ \Gamma^{(1)}\cap O^{(1),Rad}_\pm-\Gamma^{(1)}_u &=&\{t^-_\lambda\,|\, \lambda(\uuuu{e})\in \lambda_1(\uuuu{e}) +L\} \end{eqnarray*} where $L\subset\Z^3$ is the smallest sublattice with $\lambda_1(\uuuu{e})-\lambda_1(\uuuu{e}) \cdot (s_{e_i}^{(1),mat})^{\pm 1}\in L$ and $L\cdot (s_{e_i}^{(1),mat})^{\pm 1}\subset L$ for $i\in\{1,2,3\}$. \end{lemma} {\bf Proof:} (a) By definition $t^+_\lambda =T(\oooo{\lambda}\otimes f_3)$ where $\oooo{\lambda}\in \oooo{H_\Z}^{(1),\sharp}$ denotes the element which is induced by $\lambda$. 
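The composition rules in part (a) can also be checked by direct computation; a minimal numerical sketch (the radical generator $f_3$ and the forms $\lambda_i$ below are arbitrary sample data chosen by us, not taken from the text):

```python
# Numerical check (sample data, not from the text) of the composition rules for
# t^+_lambda : a -> a + lambda(a) f3   (with lambda(f3) = 0)  and
# t^-_lambda : a -> -a + lambda(a) f3  (with lambda(f3) = 2).
f3 = (1, 2, 3)                       # arbitrary sample choice of f3

def dot(lam, a):
    return sum(l*c for l, c in zip(lam, a))

def t_plus(lam):                     # assumes dot(lam, f3) == 0
    return lambda a: tuple(c + dot(lam, a)*f for c, f in zip(a, f3))

def t_minus(lam):                    # assumes dot(lam, f3) == 2
    return lambda a: tuple(-c + dot(lam, a)*f for c, f in zip(a, f3))

lam1, lam2 = (2, -1, 0), (3, 0, -1)  # lambda(f3) = 0 for both
lam3, lam4 = (2, 0, 0), (0, 1, 0)    # lambda(f3) = 2 for both
add = lambda p, q: tuple(a + b for a, b in zip(p, q))
sub = lambda p, q: tuple(a - b for a, b in zip(p, q))

for a in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (5, -7, 2)]:
    assert t_plus(lam2)(t_plus(lam1)(a)) == t_plus(add(lam2, lam1))(a)
    assert t_minus(lam3)(t_plus(lam1)(a)) == t_minus(add(lam3, lam1))(a)
    assert t_plus(lam1)(t_minus(lam3)(a)) == t_minus(sub(lam3, lam1))(a)
    assert t_minus(lam4)(t_minus(lam3)(a)) == t_plus(sub(lam3, lam4))(a)
    assert t_minus(lam3)(t_minus(lam3)(a)) == a   # (t^-)^2 = id
```

Here $t^\pm_\lambda$ are implemented directly from their definitions, so the assertions confirm the multiplication table stated in part (a).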
The map $\Hom_0(H_\Z,\Z)\to O^{(1),Rad}_u$ is an isomorphism by Lemma \ref{t6.2} (e). The proofs of the other statements are similar or easy. (b) The row vectors $(0,x_1,x_2),(-x_1,0,x_3),(-x_2,-x_3,0)$ are the rows of the matrix $I^{(1)}(\uuuu{e}^t,\uuuu{e})$. Because of this and Lemma \ref{t6.2} (e) $(\ker\tau^{(1)})_u$ is as claimed. (c) This follows from \begin{eqnarray*} \Gamma^{(1)}_u&=& \langle g^{-1}\circ t^+_{\lambda_1}\circ g\,|\, g\in \Gamma^{(1)}\rangle,\\ g^{-1}\circ t^+_{\lambda}\circ g&=&t^+_{\lambda\circ g},\\ \textup{and }\lambda\circ (s_{e_i}^{(1)})^{\pm 1}(\uuuu{e}) &=& \lambda(\uuuu{e})\cdot (s_{e_i}^{(1),mat})^{\pm 1}, \end{eqnarray*} for $\lambda\in\Hom_0(H_\Z,\Z)$. (d) Similar to the proof of part (c). \hfill$\Box$ \bigskip The following theorem gives $\Gamma^{(1)}$ for $\uuuu{x}$ in one of the three families above and thus via Remark \ref{t4.17} and Lemma \ref{t4.18} in principle for all $\uuuu{x}\in\Z^3$. Recall, though, that it is nontrivial to find, for a given $\uuuu{x}\in\Z^3$, an element of one of the three families above which lies in the $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle\gamma\rangle$ orbit of $\uuuu{x}$. \begin{theorem}\label{t6.18} (a) We have \begin{eqnarray*} s_{e_i}^{(1)}\uuuu{e}&=& \uuuu{e}\cdot s_{e_i}^{(1),mat} \quad\textup{with}\quad s_{e_1}^{(1),mat} =\begin{pmatrix}1&-x_1&-x_2\\0&1&0\\0&0&1\end{pmatrix},\\ s_{e_2}^{(1),mat} &=&\begin{pmatrix}1&0&0\\x_1&1&-x_3\\0&0&1\end{pmatrix},\quad s_{e_3}^{(1),mat} =\begin{pmatrix}1&0&0\\0&1&0\\x_2&x_3&1\end{pmatrix},\\ \Gamma^{(1)}&\cong& \Gamma^{(1),mat} =\langle s_{e_1}^{(1),mat},s_{e_2}^{(1),mat},s_{e_3}^{(1),mat} \rangle \subset SL_3(\Z).
\end{eqnarray*} (b) In each reducible case $\uuuu{x}=(x_1,0,0)$ \begin{eqnarray*} \Gamma^{(1)}\cong \Gamma^{(1)}(S(-x_1))\times \Gamma^{(1)}(A_1) \cong \Gamma^{(1)}(S(-x_1))\times\{1\}, \end{eqnarray*} and $\Gamma^{(1)}(S(-x_1))$ is given in Theorem \ref{t6.10}, with \begin{eqnarray*} \Gamma^{(1)}(S(0))&\cong&\Gamma^{(1)}(A_1^2)\cong \{1\},\\ \Gamma^{(1)}(S(-1))&\cong&\Gamma^{(1)}(A_2)\cong SL_2(\Z),\\ \Gamma^{(1)}(S(-x_1))&\cong& G^{free,2}\quad\textup{ for } x_1\geq 2. \end{eqnarray*} Also $\Gamma^{(1)}\cong \Gamma^{(1)}_s$ and $\Gamma^{(1)}_u=\{\id\}$. (c) The case $\uuuu{x}=(1,1,0)$: (This is the case of $A_3$ and $\whh{A}_2$.) \begin{eqnarray*} \Rad I^{(1)}&=&\Z f_3\quad \textup{with}\quad f_3=e_2-e_3,\\ \oooo{H_\Z}^{(1)}&=& \Z \oooo{e_1}^{(1)}\oplus \Z\oooo{e_2}^{(1)},\\ \Gamma^{(1)}_u&=& (\ker\tau^{(1)})_u = O^{(1),Rad}_u =\{t^+_\lambda\,|\, \lambda\in\langle\lambda_1,\lambda_2\rangle_\Z \}\cong\Z^2, \\ &&\textup{with}\quad \lambda_1(\uuuu{e})=(1,0,0),\quad \lambda_2(\uuuu{e})=(0,1,1),\\ \Gamma^{(1)}_s&=& (\ker \tau^{(1)})_s=O^{(1),Rad}_s \cong SL_2(\Z),\\ \Gamma^{(1)}&=& \ker\tau^{(1)}=O^{(1),Rad}. \end{eqnarray*} The exact sequence \eqref{6.10} splits non-canonically with $\Gamma^{(1)}_s\cong \langle s_{e_1}^{(1)},s_{e_2}^{(1)}\rangle \subset\Gamma^{(1)}$ (for example). 
(d) The cases $\uuuu{x}=(x_1,x_2,0)$ with $2\leq x_1\geq x_2>0$: Write $$x_{12}:=\gcd(x_1,x_2)=\frac{x_1}{\www{x}_1} =\frac{x_2}{\www{x}_2}.$$ Then \begin{eqnarray*} \Rad I^{(1)}&=& \Z f_3\quad\textup{with}\quad f_3=\www{x}_2e_2-\www{x}_1e_3,\\ \oooo{H_\Z}^{(1)}&=& \Z \oooo{e_1}^{(1)}\oplus \Z g_2\quad\textup{with }g_2:=\frac{1}{\www{x}_1}\oooo{e_2}^{(1)} =\frac{1}{\www{x}_2}\oooo{e_3}^{(1)}\in \oooo{H_\Z}^{(1)}, \end{eqnarray*} \begin{eqnarray*} \Gamma^{(1)}_u&=&\{t^+_\lambda\,|\, \lambda\in\langle\lambda_1, \lambda_2\rangle_\Z\}\cong\Z^2 \quad\textup{with}\\ && \lambda_1(\uuuu{e})=x_{12}\www{x}_1\www{x}_2(1,0,0),\quad \lambda_2(\uuuu{e})=x_1x_2(0,\www{x}_1,\www{x}_2),\\ (\ker\tau^{(1)})_u&=&\{t^+_\lambda\,|\, \lambda(\uuuu{e})\in\langle x_{12}(1,0,0),(0,x_1,x_2) \rangle_\Z\},\\ O^{(1),Rad}_u&=&\{t^+_\lambda\,|\, \lambda(\uuuu{e})\in\langle (1,0,0),(0,\www{x}_1,\www{x}_2)\rangle_\Z\}, \end{eqnarray*} \begin{eqnarray*} \Gamma^{(1)}_s&\cong& \Gamma^{(1)}(S(-x_{12}))\cong \langle\begin{pmatrix}1&-x_{12}\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\x_{12}&1\end{pmatrix}\rangle \\ &\cong& \left\{\begin{array}{ll} G^{free,2}&\textup{ if }x_{12}>1\\ SL_2(\Z)&\textup{ if }x_{12}=1. \end{array}\right. \end{eqnarray*} This matrix group has finite index in $SL_2(\Z)$ if and only if $x_{12}\in\{1,2\}$. The exact sequence \eqref{6.10} splits non-canonically. (e) The cases $\uuuu{x}=(-l,2,-l)$ with $l\geq 2$ even: (This includes the case $\uuuu{x}=(-2,2,-2)$ which is the case of $\HH_{1,2}$.) 
\begin{eqnarray*} \Rad I^{(1)}&=&\Z f_3\quad\textup{with}\quad f_3=\frac{l}{2}(e_1+e_3)+e_2,\\ \oooo{H_\Z}^{(1)}&=& \Z \oooo{e_1}^{(1)}\oplus \Z\oooo{e_3}^{(1)},\quad \oooo{e_2}^{(1)} =-\frac{l}{2}(\oooo{e_1}^{(1)}+\oooo{e_3}^{(1)}), \end{eqnarray*} \begin{eqnarray*} \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle &\cong& \langle \oooo{s_{e_1}^{(1)}},\oooo{s_{e_3}^{(1)}}\rangle \cong \langle \begin{pmatrix}1&-2\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\2&1\end{pmatrix}\rangle \stackrel{1:2}{\subset}\Gamma(2),\\ \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle &\cong& G^{free,2}, \end{eqnarray*} \begin{eqnarray*} (\ker\tau^{(1)})_u&=&\{t^+_\lambda\,|\, \lambda(\uuuu{e}) \in\langle (-2,0,2),(-2,l,0)\rangle_\Z\},\\ O^{(1),Rad}_u&=&\{t^+_\lambda\,|\, \lambda(\uuuu{e}) \in\langle (-1,0,1),(-1,\frac{l}{2},0)\rangle_\Z\}. \end{eqnarray*} (i) The cases with $l\equiv 0(4)$: $\Gamma^{(1)}_s\cong \langle \oooo{s_{e_1}^{(1)}},\oooo{s_{e_3}^{(1)}}\rangle \cong G^{free,2}$. The isomorphism $\Gamma^{(1)}_s\cong \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \subset\Gamma^{(1)}$ gives a splitting of the exact sequence \eqref{6.10}. Here $-\id\notin\Gamma^{(1)}_s$. \begin{eqnarray*} \Gamma^{(1)}_u&=&\{t^+_\lambda\,|\, \lambda\in\langle \lambda_1,\lambda_2\rangle_\Z\}\cong\Z^2 \quad\textup{with}\\ &&\lambda_1(\uuuu{e})=(-l,0,l),\quad \lambda_2(\uuuu{e})=(2l,-l^2,0). \end{eqnarray*} (ii) The cases with $l\equiv 2(4)$: Here $-\id\in\Gamma^{(1)}_s$. \begin{eqnarray*} \Gamma^{(1)}_s&\cong& \langle \oooo{s_{e_1}^{(1)}},\oooo{s_{e_3}^{(1)}},-\id\rangle \cong \langle \begin{pmatrix}1&-2\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\2&1\end{pmatrix}, \begin{pmatrix}-1&0\\0&-1\end{pmatrix}\rangle =\Gamma(2)\\ &\cong& G^{free,2}\times\{\pm 1\}.
\end{eqnarray*} The isomorphism $\Gamma^{(1)}_s/\{\pm \id\}\cong \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle\subset\Gamma^{(1)}$ gives a splitting of the exact sequence \begin{eqnarray*} \{1\}\to \Gamma^{(1)}\cap O^{(1),Rad}_\pm\to\Gamma^{(1)}\to \Gamma^{(1)}_s/\{\pm\id\}\to\{1\}. \end{eqnarray*} \begin{eqnarray*} \Gamma^{(1)}_u&=&\{t^+_\lambda\,|\, \lambda\in\langle 2\lambda_1,\lambda_2\rangle_\Z\}\cong\Z^2 \quad\textup{with}\\ &&\lambda_1(\uuuu{e})=(-l,0,l),\quad \lambda_2(\uuuu{e})=(2l,-l^2,0),\\ \Gamma^{(1)}\cap O^{(1),Rad}_\pm -\Gamma^{(1)}_u &=&\{t^-_\lambda\,|\, \lambda\in\lambda_3+\langle 2\lambda_1,\lambda_2\rangle_\Z\}\quad\textup{with}\\ &&\lambda_3(\uuuu{e})=(-l,2,l),\quad\lambda_3(f_3)=2. \end{eqnarray*} (f) The cases $\uuuu{x}=(-l,2,-l)$ with $l\geq 3$ odd: \begin{eqnarray*} \Rad I^{(1)}&=& \Z f_3\quad\textup{with}\quad f_3=l(e_1+e_3)+2e_2,\\ \oooo{H_\Z}^{(1)}&=& \Z\oooo{e_1}^{(1)}\oplus \Z\oooo{g_2}^{(1)}\quad\textup{with}\quad g_2:=\frac{1}{2}(e_1+e_3)-\frac{l}{2}f_3\in H_\Z,\\ \www{\uuuu{e}}&:=& (e_1,g_2,f_3) \quad\textup{is a }\Z\textup{-basis of }H_\Z. \end{eqnarray*} Consider \begin{eqnarray*} s_4&:=& \bigl(s_{e_3}^{(1)}s_{e_1}^{(1)}\bigr)^{\frac{l^2-1}{4}} s_{e_2}^{(1)}\in\Gamma^{(1)}. \end{eqnarray*} Then \begin{eqnarray*} \Gamma^{(1)}_s&\cong& \langle \oooo{s_{e_1}^{(1)}}, \oooo{s_4}\rangle \cong \langle s_{e_1}^{(1)},s_4\rangle\cong SL_2(\Z), \end{eqnarray*} and the isomorphism $\Gamma^{(1)}_s\cong \langle s_{e_1}^{(1)}, s_4\rangle\subset \Gamma^{(1)}$ gives a splitting of the exact sequence \eqref{6.10}. \begin{eqnarray*} \Gamma^{(1)}_u&=&\{t^+_\lambda\,|\, \lambda\in \langle\lambda_1, \lambda_2\rangle_\Z\}\cong\Z^2 \quad\textup{with}\\ &&\lambda_1(\uuuu{e})=(-l,0,l),\quad \lambda_2(\uuuu{e})=(2l,-l^2,0),\\ (\ker\tau^{(1)})_u&=& O^{(1),Rad}_u= \{t^+_\lambda\,|\, \lambda(\uuuu{e})\in \langle (-1,0,1), (2,-l,0)\rangle_\Z\}.
\end{eqnarray*} (g) The cases $\uuuu{x}\in\Z^3_{\geq 3}$ with $2x_i\leq x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$: (This includes the case $\uuuu{x}=(3,3,3)$ which is the case of $\P^2$.) \begin{eqnarray*} \Rad I^{(1)}&=& \Z f_3\quad\textup{with}\quad f_3=-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3,\\ \Gamma^{(1)}_u&=&\{\id\}\subsetneqq (\ker\tau^{(1)})_u\cong\Z^2,\\ \Gamma^{(1)}&\cong& \Gamma^{(1)}_s\cong G^{free,3}, \end{eqnarray*} $\Gamma^{(1)}$ and $\Gamma^{(1)}_s$ are free groups with the three generators $s_{e_1}^{(1)}$, $s_{e_2}^{(1)}$, $s_{e_3}^{(1)}$ respectively $\oooo{s_{e_1}^{(1)}}$, $\oooo{s_{e_2}^{(1)}}$, $\oooo{s_{e_3}^{(1)}}$. \end{theorem} {\bf Proof:} (a) This follows from the definitions in Lemma \ref{t2.6} (a) and in Definition \ref{t2.8}. (b) This follows from Lemma \ref{t2.11} and Lemma \ref{t2.12}. (c)--(f) In (c)--(f) $(\ker\tau^{(1)})_u$ and $O^{(1),Rad}_u$ are calculated with Lemma \ref{t6.17} (a) and (b). (c) The first statements $\Rad I^{(1)}=\Z f_3$ and $\oooo{H_\Z}^{(1)}=\Z\oooo{e_1}^{(1)}\oplus \Z\oooo{e_2}^{(1)}$ are known respectively obvious. Also $(e_1,e_2,f_3)$ is a $\Z$-basis of $H_\Z$. With respect to this basis \begin{eqnarray*} s_{e_1}^{(1)}(e_1,e_2,f_3)=(e_1,e_2,f_3) \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix},\\ s_{e_2}^{(1)}(e_1,e_2,f_3)=(e_1,e_2,f_3) \begin{pmatrix}1&0&0\\1&1&0\\0&0&1\end{pmatrix}. \end{eqnarray*} This shows $\langle s_{e_1}^{(1)},s_{e_2}^{(1)}\rangle\cong\langle \oooo{s_{e_1}^{(1)}},\oooo{s_{e_2}^{(1)}}\rangle \cong SL_2(\Z)$. Together with Lemma \ref{t6.16} (b) and Lemma \ref{t6.2} (d) we obtain \begin{eqnarray*} \Gamma^{(1)}_s= \langle \oooo{s_{e_1}^{(1)}},\oooo{s_{e_2}^{(1)}}\rangle =(\ker\tau^{(1)})_s=O^{(1),Rad}_s\cong SL_2(\Z), \end{eqnarray*} and that the exact sequence \eqref{6.10} splits non-canonically with $\Gamma^{(1)}_s\cong \langle s_{e_1}^{(1)},s_{e_2}^{(1)}\rangle\subset\Gamma^{(1)}$. 
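The base change just used in the proof of part (c) can be checked numerically; a minimal sketch (our own verification, not part of the proof), with the row-basis convention $s(\uuuu{e})=\uuuu{e}\cdot S$, $\uuuu{x}=(1,1,0)$ and $f_3=e_2-e_3$:

```python
# Numerical check (ours) of the base change in the proof of part (c):
# for x = (1,1,0) and f3 = e2 - e3, the basis (e1, e2, f3) is e.P with the
# involution P below, and a matrix S acting on the row basis e becomes
# P^{-1} S P in the new basis.
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# s_{e_i}^{(1),mat} for (x1, x2, x3) = (1, 1, 0), as in Theorem (a)
s1 = [[1, -1, -1], [0, 1, 0], [0, 0, 1]]
s2 = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
s3 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]

P = [[1, 0, 0], [0, 1, 1], [0, 0, -1]]          # (e1, e2, f3) = e . P
assert mat_mul(P, P) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # P^{-1} = P

def in_new_basis(S):
    return mat_mul(P, mat_mul(S, P))

assert in_new_basis(s1) == [[1, -1, 0], [0, 1, 0], [0, 0, 1]]
assert in_new_basis(s2) == [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
assert in_new_basis(s3) == [[1, 0, 0], [1, 1, 0], [-1, 0, 1]]
```

The three resulting matrices are exactly the ones displayed in the proof of part (c).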
From the actions of $s_{e_1}^{(1)}$ and $s_{e_2}^{(1)}$ on $(e_1,e_2,f_3)$ and from \begin{eqnarray*} s_{e_3}^{(1)}(e_1,e_2,f_3)=(e_1,e_2,f_3) \begin{pmatrix}1&0&0\\1&1&0\\-1&0&1\end{pmatrix}, \end{eqnarray*} one sees that the map \begin{eqnarray*} \Bigl( (e_1,e_2,f_3)\mapsto (e_1,e_2,f_3)\begin{pmatrix} 1&0&0\\0&1&0\\-1&0&1\end{pmatrix}\Bigr) =t^+_{-\lambda_1} \end{eqnarray*} is in $\Gamma^{(1)}_u$ and that \begin{eqnarray*} \Gamma^{(1)}=\langle s_{e_1}^{(1)},s_{e_2}^{(1)}, t^+_{\lambda_1}\rangle. \end{eqnarray*} Also \begin{eqnarray*} (s_{e_1}^{(1)})^{-1}\circ t^+_{\lambda_1}\circ s_{e_1}^{(1)} =t^+_{\lambda_1\circ s_{e_1}^{(1)}},\quad\textup{with}\quad (\lambda_1\circ s_{e_1}^{(1)})(\uuuu{e})=(1,-1,-1), \end{eqnarray*} and therefore $t^+_{\lambda_2}= t^+_{\lambda_1}\circ \bigl(t^+_{\lambda_1\circ s_{e_1}^{(1)}}\bigr)^{-1} \in\Gamma^{(1)}_u$. But $O^{(1),Rad}_u =\langle t^+_{\lambda_1}, t^+_{\lambda_2}\rangle_\Z$, so \begin{eqnarray*} \Gamma^{(1)}_u=(\ker \tau^{(1)})_u = O^{(1),Rad}_u =\langle t^+_{\lambda_1},t^+_{\lambda_2}\rangle_\Z. \end{eqnarray*} In the diagram of exact sequences in Lemma \ref{t6.2} (d) the inclusions in the second and fourth columns are bijections, hence so are the inclusions in the middle column, \begin{eqnarray*} \Gamma^{(1)}=\ker\tau^{(1)}=O^{(1),Rad}. \end{eqnarray*} (d) $f_3=\www{x}_2e_2-\www{x}_1e_3$ implies $\www{x}_2\oooo{e_2}^{(1)}=\www{x}_1\oooo{e_3}^{(1)}$. Lemma \ref{t6.16} (c) gives $\oooo{e_2}^{(1)}\in \www{x}_1\oooo{H_\Z}^{(1),prim}$, so $g_2:=\frac{1}{\www{x}_1}\oooo{e_2}^{(1)} =\frac{1}{\www{x}_2}\oooo{e_3}^{(1)}$ is in $\oooo{H_\Z}^{(1),prim}$. Thus \begin{eqnarray*} \oooo{H_\Z}^{(1)}=\Z\oooo{e_1}^{(1)}+\Z\oooo{e_2}^{(1)} +\Z\oooo{e_3}^{(1)}=\Z\oooo{e_1}^{(1)}\oplus\Z g_2. \end{eqnarray*} First we consider $\Gamma^{(1)}_s$. Define $\uuuu{g}:=(\oooo{e_1}^{(1)},g_2)$.
One sees \begin{eqnarray*} \oooo{s_{e_1}^{(1)}}\uuuu{g}=\uuuu{g} \begin{pmatrix}1&-x_{12}\\0&1\end{pmatrix},\quad \oooo{s_{e_2}^{(1)}}\uuuu{g}=\uuuu{g} \begin{pmatrix}1&0\\x_1\www{x}_1&1\end{pmatrix},\quad \oooo{s_{e_3}^{(1)}}\uuuu{g}=\uuuu{g} \begin{pmatrix}1&0\\x_2\www{x}_2&1\end{pmatrix}. \end{eqnarray*} Choose $y_1,y_2\in\Z$ with $1=y_1\www{x}_1^2+y_2\www{x}_2^2$ and define \begin{eqnarray*} s_4:= (s_{e_2}^{(1)})^{y_1}(s_{e_3}^{(1)})^{y_2}\in\Gamma^{(1)}. \end{eqnarray*} Then \begin{eqnarray*} \oooo{s_4}\,\uuuu{g}=\uuuu{g} \begin{pmatrix}1&0\\x_{12}&1\end{pmatrix},\quad s_4\,\uuuu{e}=\uuuu{e}\cdot s_4^{mat}\textup{ with } s_4^{mat}=\begin{pmatrix}1&0&0\\y_1x_1&1&0\\y_2x_2&0&1 \end{pmatrix}. \end{eqnarray*} $\oooo{s_{e_3}^{(1)}}$ and $\oooo{s_{e_2}^{(1)}}$ are powers of $\oooo{s_4}$, so $\Gamma^{(1)}_s=\langle \oooo{s_{e_1}^{(1)}}, \oooo{s_4}\rangle$, so \begin{eqnarray*} \Gamma^{(1)}_s\cong\langle \begin{pmatrix}1&-x_{12}\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\x_{12}&1\end{pmatrix}\rangle =\Gamma^{(1),mat}(S(-x_{12}))\subset SL_2(\Z). \end{eqnarray*} The matrix group $\Gamma^{(1),mat}(S(-x_{12}))$ was treated in Theorem \ref{t6.10} (b)--(d): \begin{eqnarray*} \Gamma^{(1),mat}(S(-x_{12}))\cong\langle \begin{pmatrix}1&-x_{12}\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\x_{12}&1\end{pmatrix}\rangle \left\{\begin{array}{ll} \cong G^{free,2}&\textup{ if }x_{12}>1,\\ = SL_2(\Z)&\textup{ if }x_{12}=1. \end{array}\right. \end{eqnarray*} {\bf The case $x_{12}>1$:} Then $\Gamma^{(1)}_s=\langle \oooo{s_{e_1}^{(1)}},\oooo{s_4} \rangle$ and $\langle s_{e_1}^{(1)},s_4\rangle\subset\Gamma^{(1)}$ are free groups with the two given generators. Then $\langle s_{e_1}^{(1)},s_4\rangle\subset\Gamma^{(1)}$ gives a splitting of the exact sequence \eqref{6.10}. 
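The identity $\oooo{s_4}\,\uuuu{g}=\uuuu{g}\left(\begin{smallmatrix}1&0\\x_{12}&1\end{smallmatrix}\right)$ only uses $y_1x_1\www{x}_1+y_2x_2\www{x}_2=x_{12}(y_1\www{x}_1^2+y_2\www{x}_2^2)=x_{12}$. A minimal numerical sanity check (not part of the proof), with the hypothetical sample values $x_1=4$, $x_2=6$ and the B\'ezout pair $(y_1,y_2)=(-2,1)$:

```python
# Sanity check (sketch, not part of the proof): with x1 = 4, x2 = 6 one has
# x12 = gcd(x1, x2) = 2, wx1 = 2, wx2 = 3, and (y1, y2) = (-2, 1) satisfies
# 1 = y1*wx1^2 + y2*wx2^2.  We check that (s_{e2}-bar)^{y1} (s_{e3}-bar)^{y2}
# acts on g = (e1-bar, g2) by the matrix [[1, 0], [x12, 1]], as claimed for s4-bar.
from math import gcd

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    # integer power; a negative n uses the det-1 inverse [[d,-b],[-c,a]]
    if n < 0:
        (a, b), (c, d) = A
        assert a*d - b*c == 1
        A, n = [[d, -b], [-c, a]], -n
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

x1, x2 = 4, 6
x12 = gcd(x1, x2)
wx1, wx2 = x1 // x12, x2 // x12
y1, y2 = -2, 1
assert y1*wx1**2 + y2*wx2**2 == 1        # Bezout relation for (wx1^2, wx2^2)
S2 = [[1, 0], [x1*wx1, 1]]               # matrix of s_{e2}-bar on g
S3 = [[1, 0], [x2*wx2, 1]]               # matrix of s_{e3}-bar on g
S4 = matmul(matpow(S2, y1), matpow(S3, y2))
assert S4 == [[1, 0], [x12, 1]]          # matrix of s4-bar, as claimed
```

Since the lower-triangular unipotent matrices commute, the check reduces to the additivity of their lower-left entries.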
The generating relations in $\Gamma^{(1)}_s$ with respect to the four elements $\oooo{s_{e_1}^{(1)}}$, $\oooo{s_{e_2}^{(1)}}$, $\oooo{s_{e_3}^{(1)}}$ and $\oooo{s_4}$ are \begin{eqnarray}\label{6.11} \oooo{s_{e_2}^{(1)}}=(\oooo{s_4})^{\www{x}_1^2},\quad \oooo{s_{e_3}^{(1)}}=(\oooo{s_4})^{\www{x}_2^2}. \end{eqnarray} Therefore $\Gamma^{(1)}_u$ is the normal subgroup of $\Gamma^{(1)}$ generated by the elements $s_4^{\www{x}_1^2}(s_{e_2}^{(1)})^{-1}$ and $s_4^{\www{x}_2^2}(s_{e_3}^{(1)})^{-1}$. {\bf The case $x_{12}=1$:} Then $\Gamma^{(1)}_s=\langle \oooo{s_{e_1}^{(1)}},\oooo{s_4} \rangle\cong SL_2(\Z)$. \medskip {\bf Claim:} Also $\langle s_{e_1}^{(1)},s_4\rangle\cong SL_2(\Z)$. \medskip {\sf Proof of the Claim:} The generating relations in $SL_2(\Z)$ with respect to the generators $\oooo{s_{e_1}^{(1)}}^{mat}=\begin{pmatrix}1&-1\\0&1\end{pmatrix}$ and $\oooo{s_4}^{mat}=\begin{pmatrix}1&0\\1&1\end{pmatrix}$ are \begin{eqnarray*} \oooo{s_{e_1}^{(1)}}^{mat} \oooo{s_4}^{mat} \oooo{s_{e_1}^{(1)}}^{mat} =\oooo{s_4}^{mat} \oooo{s_{e_1}^{(1)}}^{mat} \oooo{s_4}^{mat}\quad\textup{and}\quad E_2=\bigl(\oooo{s_{e_1}^{(1)}}^{mat}\oooo{s_4}^{mat}\bigr)^6. \end{eqnarray*} One checks by direct calculation that they lift to $\Gamma^{(1),mat}$, \begin{eqnarray*} s_{e_1}^{(1),mat} s_4^{mat} s_{e_1}^{(1),mat} =s_4^{mat} s_{e_1}^{(1),mat} s_4^{mat}\quad\textup{and}\quad E_3=\bigl(s_{e_1}^{(1),mat} s_4^{mat}\bigr)^6. \hspace*{0.5cm}(\Box) \end{eqnarray*} \medskip Because of the Claim, $\langle s_{e_1}^{(1)},s_4\rangle\subset \Gamma^{(1)}$ gives a splitting of the exact sequence \eqref{6.10}. The generating relations in $\Gamma^{(1)}_s$ with respect to the four elements $\oooo{s_{e_1}^{(1)}}$, $\oooo{s_{e_2}^{(1)}}$, $\oooo{s_{e_3}^{(1)}}$ and $\oooo{s_4}$ are the relations between $\oooo{s_{e_1}^{(1)}}$ and $\oooo{s_4}$ in the proof of the Claim and the relations in \eqref{6.11}. The relations in the proof of the Claim lift to $\Gamma^{(1)}$.
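The two $SL_2(\Z)$ relations used in the proof of the Claim can be confirmed independently; a minimal sketch (not part of the proof) checking the braid relation and the sixth-power relation for the matrices $\left(\begin{smallmatrix}1&-1\\0&1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$:

```python
# Independent check (sketch, not part of the proof) of the two relations:
# the braid relation A B A = B A B and (A B)^6 = E_2, for the standard
# generators of SL_2(Z) appearing in the Claim.
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [0, 1]]   # matrix of s_{e1}-bar
B = [[1, 0], [1, 1]]    # matrix of s4-bar
assert matmul(matmul(A, B), A) == matmul(matmul(B, A), B)   # braid relation
P6 = [[1, 0], [0, 1]]
for _ in range(6):
    P6 = matmul(P6, matmul(A, B))
assert P6 == [[1, 0], [0, 1]]                               # (A B)^6 = E_2
```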
Therefore again $\Gamma^{(1)}_u$ is the normal subgroup of $\Gamma^{(1)}$ generated by the elements $s_4^{\www{x}_1^2}(s_{e_2}^{(1)})^{-1}$ and $s_4^{\www{x}_2^2}(s_{e_3}^{(1)})^{-1}$. {\bf Back to both cases $x_{12}\geq 1$ together:} We have to determine these two elements of $\Gamma^{(1)}_u$. The first one is given by the following calculation, \begin{eqnarray*} &&s_4^{\www{x}_1^2}(s_{e_2}^{(1)})^{-1}(\uuuu{e})\\ &=&\uuuu{e} \begin{pmatrix}1&0&0\\y_1x_1\www{x}_1^2&1&0\\y_2x_2\www{x}_1^2 &0&1\end{pmatrix} \begin{pmatrix}1&0&0\\-x_1&1&0\\0&0&1\end{pmatrix} =\uuuu{e}\begin{pmatrix}1&0&0\\-x_1+y_1x_1\www{x}_1^2&1&0\\ y_2x_2\www{x}_1^2 &0&1\end{pmatrix}\\ &=&\uuuu{e}\begin{pmatrix}1&0&0\\-y_2x_1\www{x}_2^2&1&0\\ y_2x_2\www{x}_1^2&0&1\end{pmatrix} =\uuuu{e}+f_3(-y_2x_{12}\www{x}_1\www{x}_2,0,0),\\ &&\textup{so } s_4^{\www{x}_1^2}(s_{e_2}^{(1)})^{-1} =t^+_{-y_2\lambda_1}. \end{eqnarray*} A similar calculation gives \begin{eqnarray*} s_4^{\www{x}_2^2}(s_{e_3}^{(1)})^{-1} &=& t^+_{y_1\lambda_1}. \end{eqnarray*} Observe $\gcd(y_1,y_2)=1$. Therefore $\Gamma^{(1)}_u$ is the normal subgroup of $\Gamma^{(1)}$ generated by $t^+_{\lambda_1}$. Lemma \ref{t6.17} (c) and the calculations \begin{eqnarray*} \lambda_1\circ s_{e_1}^{(1)}(\uuuu{e})=x_{12}\www{x}_1\www{x}_2 (1,-x_1,-x_2),\\ \textup{so } \langle \lambda_1,\lambda_1\circ s_{e_1}^{(1)}\rangle_\Z =\langle \lambda_1,\lambda_2\rangle_\Z,\\ \lambda_1\circ (s_{e_i}^{(1)})^{\pm 1}, \lambda_2\circ (s_{e_i}^{(1)})^{\pm 1}\in \langle\lambda_1,\lambda_2\rangle_\Z, \end{eqnarray*} give \begin{eqnarray*} \Gamma^{(1)}_u=\{t^+_\lambda\,|\, \lambda\in \langle\lambda_1,\lambda_2\rangle_\Z\}.
\end{eqnarray*} (e) The first statements $\Rad I^{(1)}=\Z f_3$, $f_3=\frac{l}{2}(e_1+e_3)+e_2$ and $\oooo{H_\Z}^{(1)}=\Z \oooo{e_1}^{(1)}\oplus \Z\oooo{e_3}^{(1)}$ are obvious, also \begin{eqnarray*} \oooo{s_{e_i}^{(1)}}(\oooo{e_1}^{(1)},\oooo{e_3}^{(1)}) =(\oooo{e_1}^{(1)},\oooo{e_3}^{(1)})\oooo{s_{e_i}^{(1)}}^{mat} \quad\textup{with}\quad \oooo{s_{e_1}^{(1)}}^{mat} =\begin{pmatrix}1&-2\\0&1\end{pmatrix},\\ \oooo{s_{e_2}^{(1)}}^{mat} =\begin{pmatrix}1+\frac{l^2}{2}&-\frac{l^2}{2}\\ \frac{l^2}{2}&1-\frac{l^2}{2}\end{pmatrix},\quad \oooo{s_{e_3}^{(1)}}^{mat} =\begin{pmatrix}1&0\\2&1\end{pmatrix}. \end{eqnarray*} By Theorem \ref{t6.10} (c) $\langle \oooo{s_{e_1}^{(1)}}^{mat}, \oooo{s_{e_3}^{(1)}}^{mat}\rangle\cong G^{free,2}$ and therefore \begin{eqnarray*} \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle\cong G^{free,2}. \end{eqnarray*} We have \begin{eqnarray*} \oooo{s_{e_2}^{(1)}}^{mat}&\equiv& \begin{pmatrix}1&0\\0&1\end{pmatrix}\mmod 4\quad \textup{if }l\equiv 0(4),\\ \oooo{s_{e_2}^{(1)}}^{mat}&\equiv& \begin{pmatrix}3&2\\2&3\end{pmatrix}\mmod 4\quad \textup{if }l\equiv 2(4). \end{eqnarray*} If $l\equiv 0(4)$ then by Theorem \ref{t6.10} (c) $\oooo{s_{e_2}^{(1)}}^{mat}\in \langle \oooo{s_{e_1}^{(1)}}^{mat}, \oooo{s_{e_3}^{(1)}}^{mat}\rangle$, so then $\Gamma^{(1)}_s=\langle \oooo{s_{e_1}^{(1)}}, \oooo{s_{e_3}^{(1)}}\rangle,$ and the isomorphism $\Gamma^{(1)}_s \cong \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \subset\Gamma^{(1)}$ gives a splitting of the exact sequence \eqref{6.10}. If $l\equiv 2(4)$ then by Theorem \ref{t6.10} (c) $\oooo{s_{e_2}^{(1)}}^{mat}\notin \langle \oooo{s_{e_1}^{(1)}}^{mat}, \oooo{s_{e_3}^{(1)}}^{mat}\rangle$, so then $\langle \oooo{s_{e_i}^{(1)}}^{mat}\,|\, i\in\{1,2,3\} \rangle=\Gamma^{(2)} \cong G^{free,2}\times\{\pm 1\}$, $-\id\in\Gamma^{(1)}_s$, and the isomorphism $\Gamma^{(1)}_s/\{\pm \id\} \cong \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \subset\Gamma^{(1)}$ gives a splitting of the exact sequence in part (ii). 
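The two congruences for $\oooo{s_{e_2}^{(1)}}^{mat}$ modulo $4$ can be spot-checked by machine; a minimal sketch (not part of the proof), looping over small even values of $l$:

```python
# Spot check (sketch, not part of the proof) of the congruences for
# s_{e2}-bar^{mat} = [[1 + l^2/2, -l^2/2], [l^2/2, 1 - l^2/2]] modulo 4.
def sbar_e2(l):
    k = (l*l) // 2            # l^2/2, an integer for even l
    return [[1 + k, -k], [k, 1 - k]]

def mod4(M):
    return [[m % 4 for m in row] for row in M]

for l in (4, 8, 12):          # l = 0 mod 4: congruent to the identity
    assert mod4(sbar_e2(l)) == [[1, 0], [0, 1]]
for l in (2, 6, 10):          # l = 2 mod 4: congruent to [[3,2],[2,3]]
    assert mod4(sbar_e2(l)) == [[3, 2], [2, 3]]
for l in (2, 4, 6, 8):        # each of these matrices lies in SL_2(Z)
    (a, b), (c, d) = sbar_e2(l)
    assert a*d - b*c == 1
```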
\medskip {\bf Claim:} The group $\Gamma^{(1)}_u$ if $l\equiv 0(4)$, respectively $\Gamma^{(1)}\cap O^{(1),Rad}_\pm$ if $l\equiv 2(4)$, is the normal subgroup of $\Gamma^{(1)}$ generated by $s_4:=\bigl(s_{e_3}^{(1)}s_{e_1}^{(1)}\bigr)^{l^2/4} s_{e_2}^{(1)}$. \medskip {\sf Proof of the Claim:} Consider the $\Z$-basis $\www{\uuuu{e}}:=(e_1,e_1+e_3,f_3)$ of $H_\Z$. \begin{eqnarray*} s_{e_1}^{(1)}\www{\uuuu{e}}&=&\www{\uuuu{e}} \begin{pmatrix}1&-2&0\\0&1&0\\0&0&1\end{pmatrix},\ s_{e_2}^{(1)}\www{\uuuu{e}}=\www{\uuuu{e}} \begin{pmatrix}1&0&0\\\frac{l^2}{2}&1&0\\-l&0&1\end{pmatrix},\ s_{e_3}^{(1)}\www{\uuuu{e}}=\www{\uuuu{e}} \begin{pmatrix}-1&-2&0\\2&3&0\\0&0&1\end{pmatrix},\\ s_4\, \www{\uuuu{e}}&=&\www{\uuuu{e}} \begin{pmatrix}-1&0&0\\2&-1&0\\0&0&1\end{pmatrix}^{l^2/4} \begin{pmatrix}1&0&0\\\frac{l^2}{2}&1&0\\-l&0&1\end{pmatrix}\\ &=&\www{\uuuu{e}} \begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix}^{l/2} \begin{pmatrix}1&0&0\\-\frac{l^2}{2}&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\\frac{l^2}{2}&1&0\\-l&0&1\end{pmatrix}\\ &=&\www{\uuuu{e}} \begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix}^{l/2} \begin{pmatrix}1&0&0\\0&1&0\\-l&0&1\end{pmatrix} =\www{\uuuu{e}} \begin{pmatrix}(-1)^{l/2}&0&0\\0&(-1)^{l/2}&0\\-l&0&1\end{pmatrix}, \end{eqnarray*} \begin{eqnarray} s_4\, \uuuu{e}=(-1)^{l/2}\uuuu{e}+f_3(-l,1-(-1)^{l/2},l),\nonumber\\ s_4=\left\{\begin{array}{ll}t^+_{\lambda_1}\in\Gamma^{(1)}_u& \textup{ if }l\equiv 0(4),\\ t^-_{\lambda_3}\in \Gamma^{(1)}\cap O^{(1),Rad}_\pm & \textup{ if }l\equiv 2(4). \end{array}\right. \label{6.12} \end{eqnarray} The splittings above of the exact sequences give semidirect products \begin{eqnarray*} \Gamma^{(1)}=\left\{\begin{array}{ll} \Gamma^{(1)}_u\rtimes \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle & \textup{ if }l\equiv 0(4),\\ \Gamma^{(1)}\cap O^{(1),Rad}_\pm \rtimes \langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle & \textup{ if }l\equiv 2(4). \end{array}\right. \end{eqnarray*} $s_{e_2}^{(1)}$ occurs exactly once in the word $s_4$.
Therefore $\Gamma^{(1)}=\langle s_{e_1}^{(1)}, s_{e_3}^{(1)},s_4\rangle$. Together these facts show that $\Gamma^{(1)}_u$ respectively $\Gamma^{(1)}\cap O^{(1),Rad}_\pm $ is the normal subgroup generated by $s_4$. This finishes the proof of the Claim. \hfill($\Box$) \medskip Now we can apply Lemma \ref{t6.17} (c) if $l\equiv 0(4)$ and Lemma \ref{t6.17} (d) if $l\equiv 2(4)$. The following calculations show the claims on $\Gamma^{(1)}_u$ and (in the case $l\equiv 2(4)$) $\Gamma^{(1)}\cap O^{(1),Rad}_\pm$. {\bf The case $l\equiv 0(4)$:} \begin{eqnarray*} \lambda_1\circ s_{e_1}^{(1)}(\uuuu{e})=(-l,-l^2,3l),\\ \textup{so }\lambda_1\circ s_{e_1}^{(1)} =3\lambda_1+\lambda_2,\\ \lambda_1\circ (s_{e_i}^{(1)})^{\pm 1}, \lambda_2\circ (s_{e_i}^{(1)})^{\pm 1} \in\langle \lambda_1,\lambda_2\rangle_\Z. \end{eqnarray*} {\bf The case $l\equiv 2(4)$:} \begin{eqnarray*} \lambda_3\circ s_{e_1}^{(1)}(\uuuu{e})=(-l,2-l^2,3l),\\ \textup{so }\lambda_3\circ s_{e_1}^{(1)} =\lambda_3+2\lambda_1+\lambda_2,\\ \lambda_3\circ (s_{e_2}^{(1)})^{-1}(\uuuu{e})=(l,2,-l),\\ \textup{so }\lambda_3\circ (s_{e_2}^{(1)})^{-1} =\lambda_3-2\lambda_1,\\ \lambda_3\circ (s_{e_i}^{(1)})^{\pm 1} \in\lambda_3+\langle 2\lambda_1,\lambda_2\rangle_\Z,\\ 2\lambda_1\circ (s_{e_i}^{(1)})^{\pm 1}, \lambda_2\circ (s_{e_i}^{(1)})^{\pm 1} \in\langle 2\lambda_1,\lambda_2\rangle_\Z. \end{eqnarray*} (f) $\Rad I^{(1)}=\Z f_3$ is known. $e_2=-\frac{l}{2}(e_1+e_3)+\frac{1}{2}f_3$ shows $\oooo{H_\Z}^{(1)}=\Z \oooo{e_1}^{(1)} \oplus \Z (\frac{1}{2}(\oooo{e_1}^{(1)}+ \oooo{e_3}^{(1)}))$. We have \begin{eqnarray*} \www{\uuuu{e}}=\uuuu{e} \begin{pmatrix} 1&\frac{1-l^2}{2}&l\\ 0&-l&2\\0&\frac{1-l^2}{2}&l \end{pmatrix},\quad \uuuu{e}=\www{\uuuu{e}} \begin{pmatrix} 1&0&-1\\0&-l&2\\0&\frac{1-l^2}{2}&l \end{pmatrix}, \end{eqnarray*} so $\www{\uuuu{e}}$ is a $\Z$-basis of $H_\Z$. 
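That the two displayed matrices are mutually inverse over $\Z$ (so that $\www{\uuuu{e}}$ is indeed a $\Z$-basis of $H_\Z$) can be spot-checked numerically; a minimal sketch (not part of the proof) for sample odd values of $l$:

```python
# Check (sketch, not part of the proof) that the two displayed base-change
# matrices are mutually inverse over Z, so e-tilde is a Z-basis of H_Z.
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for l in (3, 5, 7):
    h = (1 - l*l) // 2                       # (1-l^2)/2, integral for odd l
    P = [[1, h, l], [0, -l, 2], [0, h, l]]   # e-tilde = e . P
    Q = [[1, 0, -1], [0, -l, 2], [0, h, l]]  # e = e-tilde . Q
    assert matmul(P, Q) == I3 and matmul(Q, P) == I3
```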
First we calculate $s_4$ with respect to the $\Q$-basis $\www{\uuuu{f}} :=(e_1,\frac{1}{2}(e_1+e_3),f_3)$ of $H_\Q$, \begin{eqnarray*} s_4(\www{\uuuu{f}})&=&\www{\uuuu{f}} \bigl( \begin{pmatrix}-1&-1&0\\4&3&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix} \bigr)^{\frac{l^2-1}{4}} \begin{pmatrix}1&0&0\\ l^2&1&0\\ -\frac{l}{2}&0&1\end{pmatrix}\\ &=& \www{\uuuu{f}} \begin{pmatrix}-1&0&0\\4&-1&0\\0&0&1 \end{pmatrix}^{\frac{l^2-1}{4}} \begin{pmatrix}1&0&0\\ l^2&1&0\\ -\frac{l}{2}&0&1\end{pmatrix}\\ &=& \www{\uuuu{f}} \begin{pmatrix}1&0&0\\1-l^2&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\ l^2&1&0\\ -\frac{l}{2}&0&1\end{pmatrix}\ =\www{\uuuu{f}} \begin{pmatrix}1&0&0\\1&1&0\\-\frac{l}{2}&0&1 \end{pmatrix}. \end{eqnarray*} Therefore \begin{eqnarray}\label{6.13} s_4(\www{\uuuu{e}})=\www{\uuuu{e}} \begin{pmatrix}1&0&0\\1&1&0\\0&0&1\end{pmatrix},\quad s_{e_1}^{(1)}(\www{\uuuu{e}})=\www{\uuuu{e}} \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix}. \end{eqnarray} Thus \begin{eqnarray*} \Gamma^{(1)}_s=\langle \oooo{s_{e_1}^{(1)}},\oooo{s_4} \rangle \cong \langle s_{e_1}^{(1)},s_4\rangle\subset\Gamma^{(1)}, \end{eqnarray*} and this isomorphism gives a splitting of the exact sequence \eqref{6.10}. \eqref{6.13} shows that the group $\langle s_{e_1}^{(1)},s_4\rangle$ contains an element $s_5$ with \begin{eqnarray*} s_5(\www{\uuuu{e}})=\www{\uuuu{e}} \begin{pmatrix}3&1&0\\-4&-1&0\\0&0&1\end{pmatrix}, \end{eqnarray*} thus \begin{eqnarray*} s_5s_{e_3}^{(1)}(\www{\uuuu{e}})&=&\www{\uuuu{e}} \begin{pmatrix}3&1&0\\-4&-1&0\\0&0&1\end{pmatrix} \begin{pmatrix}-1&-1&0\\4&3&0\\2l&l&1\end{pmatrix} =\www{\uuuu{e}} \begin{pmatrix}1&0&0\\0&1&0\\2l&l&1\end{pmatrix},\\ s_5s_{e_3}^{(1)}(\uuuu{e})&=&\uuuu{e}+ f_3(2l,-l^2,0),\\ s_5s_{e_3}^{(1)}&=& t^+_{\lambda_2}\quad\textup{with} \quad \lambda_2(\uuuu{e})=(2l,-l^2,0).
\end{eqnarray*} Also \begin{eqnarray*} \Gamma^{(1)}=\langle s_{e_1}^{(1)},s_{e_2}^{(1)}, s_{e_3}^{(1)}\rangle =\langle s_4,s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle =\langle s_4,s_{e_1}^{(1)},t^+_{\lambda_2}\rangle, \end{eqnarray*} so $\Gamma^{(1)}_u$ is the normal subgroup generated by $t^+_{\lambda_2}$. Now we can apply Lemma \ref{t6.17} (c). The following formulas show $\Gamma^{(1)}_u=\{t^+_\lambda\,|\, \lambda\in\langle \lambda_1,\lambda_2\rangle_\Z\}.$ \begin{eqnarray*} (2l,-l^2,0) s_{e_1}^{(1),mat}=(2l,l^2,-4l) =(2l,-l^2,0)+(0,2l^2,-4l),\\ 2(2l,-l^2,0)+(0,2l^2,-4l)=(4l,0,-4l),\\ (2l,-l^2,0)s_{e_2}^{(1),mat}=(2l,-l^2,0)+(l^3,0,-l^3),\\ \gcd(4l,l^3)=l,\quad \textup{so }\lambda_1\in\Gamma^{(1)}_u,\\ \lambda_1\circ (s_{e_i}^{(1)})^{\pm 1}, \lambda_2\circ (s_{e_i}^{(1)})^{\pm 1}\in \langle\lambda_1,\lambda_2\rangle_\Z. \end{eqnarray*} (g) Because of the cyclic action of $\gamma\in (G^{phi}\ltimes \www{G}^{sign})\rtimes \langle\gamma\rangle$ on $\Z^3$, we can suppose $x_1=\max(x_1,x_2,x_3)$. With respect to the $\Q$-basis $(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)})$ of $\oooo{H_\Q}^{(1)}$, $\oooo{s_{e_1}^{(1)}}$, $\oooo{s_{e_2}^{(1)}}$ and $\oooo{s_{e_3}^{(1)}}$ take the shape $\oooo{s_{e_i}^{(1)}}(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)}) =(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)})B_i$ with \begin{eqnarray*} B_1=\begin{pmatrix}1&-x_1\\0&1\end{pmatrix},\quad B_2=\begin{pmatrix}1&0\\x_1&1\end{pmatrix},\quad B_3=\begin{pmatrix}1-\frac{x_3x_2}{x_1}&-\frac{x_3^2}{x_1}\\ \frac{x_2^2}{x_1}&1+\frac{x_3x_2}{x_1}\end{pmatrix}. \end{eqnarray*} We will show that the group $\langle \mu_1,\mu_2,\mu_3\rangle$ of M\"obius transformations $\mu_i:=\mu(B_i)$ is a free group with the three generators $\mu_1,\mu_2,\mu_3$. This implies first $\Gamma^{(1)}_s\cong G^{free,3}$ and then $\Gamma^{(1)}\cong \Gamma^{(1)}_s\cong G^{free,3}$ and $\Gamma^{(1)}_u=\{\id\}$. We will apply Theorem \ref{ta.2} (c). 
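As a numerical sanity check (not part of the proof): for a hypothetical sample triple such as $\uuuu{x}=(5,4,3)$, which satisfies $2x_i\leq x_jx_k$, each $B_i$ has determinant $1$ and trace $2$, and the M\"obius transformation of $B_3$ fixes $-\frac{x_3}{x_2}$:

```python
# Sanity check (sketch, not part of the proof) for the sample triple
# (x1, x2, x3) = (5, 4, 3): each B_i has determinant 1 and trace 2 (so the
# Moebius transformation mu_i is parabolic), and mu_3 fixes -x3/x2.
from fractions import Fraction as F

x1, x2, x3 = 5, 4, 3
B1 = [[F(1), F(-x1)], [F(0), F(1)]]
B2 = [[F(1), F(0)], [F(x1), F(1)]]
B3 = [[1 - F(x3*x2, x1), -F(x3*x3, x1)],
      [F(x2*x2, x1), 1 + F(x3*x2, x1)]]
for Bm in (B1, B2, B3):
    (a, b), (c, d) = Bm
    assert a*d - b*c == 1 and a + d == 2    # determinant 1, trace 2
(a, b), (c, d) = B3
z = -F(x3, x2)
assert (a*z + b) / (c*z + d) == z           # mu_3 fixes -x3/x2
```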
$\mu_1,\mu_2$ and $\mu_3$ are parabolic, \begin{eqnarray*} \mu_1&=&\bigl(z\mapsto z-x_1\bigr) \quad\textup{with fixed point }\infty,\\ \mu_2&=&\Bigl(z\mapsto \frac{z}{x_1z+1}\Bigr) \quad\textup{with fixed point }0,\\ \mu_3&=&\Bigl(z\mapsto \frac{(1-\frac{x_3x_2}{x_1})z -\frac{x_3^2}{x_1}} {\frac{x_2^2}{x_1}z+(1+\frac{x_3x_2}{x_1})} =\frac{(x_1-x_2x_3)z-x_3^2}{x_2^2z+(x_1+x_2x_3)}\Bigr)\\ &&\hspace*{2cm}\textup{with fixed point }-\frac{x_3}{x_2}. \end{eqnarray*} Consider \begin{eqnarray*} r_1:=\mu_1(1)=1-x_1,\quad r_2:=\mu_2^{-1}(1)=\frac{1}{1-x_1},\\ r_3:=\mu_3(r_1) =\frac{(x_1-x_2x_3)(1-x_1)-x_3^2} {x_2^2(1-x_1)+(x_1+x_2x_3)}. \end{eqnarray*} It is sufficient to show the inequalities \begin{eqnarray}\label{6.14} -\infty < r_1<-\frac{x_3}{x_2}< r_3\leq r_2<0<1<\infty. \end{eqnarray} \begin{figure}[H] \includegraphics[width=0.8\textwidth]{pic-6-6.png} \caption[Figure 6.6]{$G^{free,3}$ generated by three parabolic M\"obius transformations, an application of Theorem \ref{ta.2} (c)} \label{Fig:6.6} \end{figure} Then Theorem \ref{ta.2} (c) applies. Compare Figure \ref{Fig:6.6}. We treat the case $\uuuu{x}=(3,3,3)$ separately and first. Then \begin{eqnarray*} (r_1,-\frac{x_3}{x_2},r_3,r_2)=(-2,-1,-\frac{1}{2},-\frac{1}{2}), \end{eqnarray*} so then \eqref{6.14} holds. From now on we suppose $x_1\geq 4$. The inequality $r_2<0$ is trivial. Consider the number \begin{eqnarray*} r_4:=-\frac{x_1+x_2x_3}{x_2^2} =-\frac{x_1}{x_2^2}-\frac{x_3}{x_2} <-\frac{x_3}{x_2},\quad\textup{with } \mu_3(r_4)=\infty. \end{eqnarray*} We will show in this order the following claims: \begin{list}{}{} \item[(i)] $r_1<r_4$, which implies $r_1<-\frac{x_3}{x_2}$. \item[(ii)] $r_3\in (-\frac{x_3}{x_2},\infty)$. \item[(iii)] The numerator $(x_2x_3-x_1)(x_1-1)-x_3^2$ of $r_3$ is positive. \item[(iv)] The denominator $x_2^2(1-x_1)+(x_1+x_2x_3)$ of $r_3$ is negative. \item[(v)] $r_3\leq r_2$. \end{list} Together (i)--(v) and $r_2<0$ show \eqref{6.14}. 
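Before proving (i)--(v), the chain \eqref{6.14} can be spot-checked numerically; a minimal sketch (not part of the proof) for a few sample triples with $x_1\geq x_2\geq x_3$, including the boundary case $\uuuu{x}=(3,3,3)$, where $r_3=r_2$:

```python
# Numerical verification (sketch, not part of the proof) of the chain of
# inequalities (6.14) for sample triples in Z^3_{>=3} with 2 x_i <= x_j x_k.
from fractions import Fraction as F

def chain(x1, x2, x3):
    r1 = 1 - x1                                  # r1 = mu_1(1)
    r2 = F(1, 1 - x1)                            # r2 = mu_2^{-1}(1)
    r3 = F((x1 - x2*x3)*(1 - x1) - x3*x3,
           x2*x2*(1 - x1) + (x1 + x2*x3))        # r3 = mu_3(r1)
    return r1, -F(x3, x2), r3, r2                # (r1, -x3/x2, r3, r2)

assert chain(3, 3, 3) == (-2, -1, F(-1, 2), F(-1, 2))
for x1, x2, x3 in [(4, 3, 3), (5, 4, 3), (6, 4, 4), (7, 5, 3)]:
    assert 2*x1 <= x2*x3 and 2*x2 <= x1*x3 and 2*x3 <= x1*x2
    r1, fp, r3, r2 = chain(x1, x2, x3)
    assert r1 < fp < r3 <= r2 < 0                # the chain (6.14)
```

For $\uuuu{x}=(3,3,3)$ this reproduces the values $(-2,-1,-\frac{1}{2},-\frac{1}{2})$ computed below.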
Two of the three inequalities $2x_i\leq x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$ in the assumption on $\uuuu{x}$ can be improved, \begin{eqnarray*} x_1x_2\geq 3x_1\geq 3x_3,\quad x_1x_3\geq 3x_1\geq 3x_2,\quad \textup{we keep}\quad x_2x_3\geq 2x_1. \end{eqnarray*} (i) holds because \begin{eqnarray*} r_1<r_4\iff 1<x_1-\frac{x_1}{x_2^2}-\frac{x_3}{x_2} \quad\textup{and}\\ x_1-\frac{x_1}{x_2^2}-\frac{x_3}{x_2} \geq x_1-\frac{x_1}{9}-\frac{x_1}{3}=\frac{5x_1}{9} \geq \frac{20}{9}. \end{eqnarray*} (ii) $r_1<r_4$ and $\mu_3(r_4)=\infty$ imply $r_3=\mu_3(r_1)\in(-\frac{x_3}{x_2},\infty)$. (iii) holds because \begin{eqnarray*} (x_2x_3-x_1)(x_1-1)-x_3^2>0\iff x_1-1>\frac{x_3^2}{x_2x_3-x_1} =\frac{x_3}{x_2}\frac{1}{1-\frac{x_1}{x_2x_3}}\\ \textup{and } \frac{x_3}{x_2}\frac{1}{1-\frac{x_1}{x_2x_3}} \leq \frac{x_1}{3}\frac{1}{1-\frac{1}{2}}=\frac{2x_1}{3},\quad x_1-1>\frac{2x_1}{3}\Leftarrow x_1\geq 4. \end{eqnarray*} (iv) holds because \begin{eqnarray*} x_2^2(x_1-1)-(x_1+x_2x_3)>0\iff x_1-1>\frac{x_1+x_2x_3}{x_2^2} =\frac{x_3}{x_2}(1+\frac{x_1}{x_2x_3})\\ \textup{and } \frac{x_3}{x_2}(1+\frac{x_1}{x_2x_3}) \leq \frac{x_1}{3}(1+\frac{1}{2})=\frac{x_1}{2},\quad x_1-1>\frac{x_1}{2}\Leftarrow x_1\geq 4. \end{eqnarray*} (ii)--(iv) show $r_3\in(-\frac{x_3}{x_2},0)$. Now \begin{eqnarray*} r_3\leq r_2&\iff & \bigl[(x_2x_3-x_1)(x_1-1)-x_3^2\bigr](x_1-1)\\ &&-\bigl[x_2^2(x_1-1)-(x_1+x_2x_3)\bigr]\geq 0. \end{eqnarray*} The right hand side is \begin{eqnarray*} g(x_1,x_2,x_3):= (x_2x_3-x_1)(x_1-1)^2 -(x_2^2+x_3^2)(x_1-1)+(x_1+x_2x_3). \end{eqnarray*} This is symmetric and homogeneous of degree 2 in $x_2$ and $x_3$. \begin{eqnarray*} \frac{\paa g}{\paa x_2}(\uuuu{x}) =(x_1-1)^2x_3-2(x_1-1)x_2+x_3, \end{eqnarray*} so for fixed $x_1$ and $x_3$ $g(\uuuu{x})$ takes its maximum in \begin{eqnarray*} x_2^0=\frac{(x_1-1)^2x_3+x_3}{2(x_1-1)} =\frac{x_3(x_1-1)}{2}+\frac{x_3}{2(x_1-1)} >\frac{3(x_1-1)}{2}>x_1, \end{eqnarray*} and is monotonously increasing left of $x_2^0$. 
Because of the symmetry we can restrict to the cases $x_1\geq x_2\geq x_3$. Then $g(\uuuu{x})$ takes for fixed $x_1$ and $x_3$ its minimum in $x_2=x_3$. \begin{eqnarray*} \www{g}(x_1,x_3):= g(x_1,x_3,x_3) =x_3^2(x_1-2)^2-(x_1-1)^2x_1+x_1. \end{eqnarray*} $\www{g}$ takes for fixed $x_1$ its minimum at the minimal possible $x_3$, which is $x_3^0:=\max(3,\sqrt{2x_1})$ because of $2x_1\leq x_2x_3=x_3^2$. \begin{eqnarray*} \www{g}(x_1,x_3^0)=\left\{\begin{array}{ll} \www{g}(4,3)=4>0&\textup{ if }x_1=4,\\ \www{g}(x_1,\sqrt{2x_1})=x_1^3-6x_1^2+8x_1 & \\ \hspace*{1cm}=x_1(x_1-2)(x_1-4)>0 &\textup{ if }x_1\geq 5. \end{array}\right. \end{eqnarray*} Therefore $r_3<r_2$ and \eqref{6.14} is proved. \hfill$\Box$ \begin{remarks}\label{t6.19} (i) In the proof of part (g) of Theorem \ref{t6.18} the hyperbolic polygon $P$ whose relative boundary is the union of the six arcs \begin{eqnarray*} A(\infty,r_1),\ A(r_1,-\frac{x_3}{x_2}),\ A(-\frac{x_3}{x_2},r_3),\ A(r_2,0),\ A(0,1),\ A(1,\infty), \end{eqnarray*} is a fundamental domain for the action of the group $\langle \mu_1,\mu_2,\mu_3\rangle$ on the upper half plane $\H$. (ii) In part (g) of Theorem \ref{t6.18} the case $\uuuu{x}=(3,3,3)$ is especially interesting. It is the only case within part (g) where $r_3=r_2$, so the only case in part (g) where the hyperbolic polygon $P$ has finite hyperbolic area. In this case \begin{eqnarray*} \langle B_1,B_2,B_3\rangle = \langle \begin{pmatrix}1&-3\\0&1\end{pmatrix}, \begin{pmatrix}1&0\\3&1\end{pmatrix}, \begin{pmatrix}-2&-3\\3&4\end{pmatrix}\rangle =\Gamma(3). \end{eqnarray*} \end{remarks} Now we turn to the study of the set $\Delta^{(1)}$ of odd vanishing cycles. The shape of $\Delta^{(1)}$ and our knowledge on it are very different for different $\uuuu{x}\in\Z^3$. \begin{remarks}\label{t6.20} (i) In the cases in the parts (c)--(f) of Theorem \ref{t6.21} we will give $\Delta^{(1)}$ rather explicitly. 
There $\oooo{\Delta^{(1)}}$ is known, and we can give a subset of $\Delta^{(1)}$ explicitly which maps bijectively to $\oooo{\Delta^{(1)}}$. Furthermore $\Gamma^{(1)}_u\cong\Z^2$, and for $\varepsilon\in\{\pm 1\}$ and $i\in\{1,2,3\}$ there is a subset $F_{\varepsilon,i}\subset\Z f_3$ with \begin{eqnarray*} \Delta^{(1)}\cap (\varepsilon e_i+\Z f_3) &=&\varepsilon e_i+ F_{\varepsilon,i}\quad\textup{and}\\ \Delta^{(1)}\cap (a+\Z f_3) &=&a+ F_{\varepsilon,i}\quad\textup{for any } a\in\Gamma^{(1)}\{\varepsilon e_i\}. \end{eqnarray*} In many, but not all, cases $F_{\varepsilon,i}\subset\Z f_3$ is a sublattice. (ii) By contrast, in the cases in part (g) of Theorem \ref{t6.21} we know rather little. There $\Gamma^{(1)}_u=\{\id\}$, and remarkably the projection $\Delta^{(1)}\to\oooo{\Delta^{(1)}} \subset\oooo{H_\Z}^{(1)}$ is a bijection. In the case $\uuuu{x}=(3,3,3)$ we know $\oooo{\Delta^{(1)}}$. But the lift $a\in\Delta^{(1)}$ of an element $\oooo{a}^{(1)}\in\oooo{\Delta^{(1)}}$ is difficult to determine. See Lemma \ref{t6.26}. \end{remarks} \begin{theorem}\label{t6.21} (a) (Empty: (b)--(g) shall correspond to (b)--(g) in Theorem \ref{t6.18}.) (b) In each reducible case $\uuuu{x}=(x_1,0,0)$ \begin{eqnarray*} \Delta^{(1)}&=&\Delta^{(1)}\cap (\Z e_1\oplus \Z e_2) \ \dot\cup\ \{\pm e_3\}\\ \textup{with}&& \Delta^{(1)}\cap(\Z e_1\oplus\Z e_2)\cong\Delta^{(1)}(S(-x_1)), \end{eqnarray*} and $\Delta^{(1)}(S(-x_1))$ is given in Theorem \ref{t6.10}. (c) The case $\uuuu{x}=(1,1,0)$: \begin{eqnarray*} \oooo{\Delta^{(1)}}&=&\oooo{H_\Z}^{(1),prim},\\ \Delta^{(1)} &=& (pr^{H,(1)})^{-1}(\oooo{\Delta^{(1)}}) =(\Z e_1\oplus\Z e_2)^{prim}+\Z f_3\subset H_\Z^{prim},\\ \Delta^{(1)}&=& \Gamma^{(1)}\{e_1\}. \end{eqnarray*} $\Delta^{(1)}$ and $\oooo{\Delta^{(1)}}$ each consist of a single orbit. (d) The cases $\uuuu{x}=(x_1,x_2,0)$ with $x_1\geq 2$ and $x_1\geq x_2>0$: Recall $x_{12}:=\gcd(x_1,x_2)$. (i) The cases with $x_{12}=1$ and $x_2>1$: Then $x_1=\www{x}_1$, $x_2=\www{x}_2$ and $x_1>x_2>1$.
Choose $y_1,y_2\in\Z$ with $1=y_1x_1^2+y_2x_2^2$. $\oooo{\Delta^{(1)}}$ consists of the three orbits \begin{eqnarray*} \Gamma^{(1)}_s\{\oooo{e_1}^{(1)}\}&=&\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\oooo{e_2}^{(1)}\} &=&x_1\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\oooo{e_3}^{(1)}\} &=&x_2\oooo{H_\Z}^{(1),prim}. \end{eqnarray*} $\Delta^{(1)}$ consists of five orbits: \begin{eqnarray*} \Gamma^{(1)}\{e_1\}=\Gamma^{(1)}\{\pm e_1\},\quad \Gamma^{(1)}\{\varepsilon e_2\},\quad \Gamma^{(1)}\{\varepsilon e_3\}\quad\textup{with } \varepsilon\in\{\pm 1\}. \end{eqnarray*} Here \begin{eqnarray*} &&\Delta^{(1)}\cap(e_1+\Z f_3) = \Gamma^{(1)}\{e_1\}\cap (e_1+\Z f_3)\\ &&\hspace*{1cm}=\Gamma^{(1)}_u\{e_1\} =e_1+\Z x_1x_2f_3,\\ &&\Delta^{(1)}\cap(\varepsilon e_2+\Z f_3)\\ &&\hspace*{1cm} = \Bigl(\Gamma^{(1)}\{\varepsilon e_2\}\cap (\varepsilon e_2+\Z f_3) \Bigr) \ \dot\cup\ \Bigl( \Gamma^{(1)}\{-\varepsilon e_2\}\cap (\varepsilon e_2+\Z f_3) \Bigr)\\ &&\hspace*{1cm} = \Bigl(\varepsilon e_2+\Z x_1^2x_2 f_3\Bigr) \ \dot\cup\ \Bigl(\varepsilon e_2-\varepsilon 2y_2x_2 f_3+\Z x_1^2x_2f_3\Bigr) ,\\ &&\Delta^{(1)}\cap(\varepsilon e_3+\Z f_3)\\ &&\hspace*{1cm} = \Bigl(\Gamma^{(1)}\{\varepsilon e_3\}\cap (\varepsilon e_3+\Z f_3) \Bigr)\ \dot\cup\ \Bigl( \Gamma^{(1)}\{-\varepsilon e_3\}\cap (\varepsilon e_3+\Z f_3) \Bigr)\\ &&\hspace*{1cm} = \Bigl(\varepsilon e_3+\Z x_1x_2^2 f_3\Bigr) \ \dot\cup\ \Bigl( \varepsilon e_3+\varepsilon 2y_1x_1 f_3+\Z x_1x_2^2f_3\Bigr). \end{eqnarray*} (ii) The cases with $x_2=1$: Then $x_1=\www{x}_1>x_2=\www{x}_2=x_{12}=1$. $\oooo{\Delta^{(1)}}$ consists of the two orbits \begin{eqnarray*} \Gamma^{(1)}_s\{\oooo{e_1}^{(1)}\}&=& \Gamma^{(1)}_s\{\pm \oooo{e_1}^{(1)},\pm \oooo{e_3}^{(1)}\} =\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\oooo{e_2}^{(1)}\} &=&\Gamma^{(1)}_s\{\pm \oooo{e_2}^{(1)}\} =x_1\oooo{H_\Z}^{(1),prim}.
\end{eqnarray*} $\Delta^{(1)}$ consists of three orbits, \begin{eqnarray*} \Gamma^{(1)}\{e_1\}=\Gamma^{(1)}\{\pm e_1,\pm e_3\},\quad \Gamma^{(1)}\{\varepsilon e_2\},\quad \textup{with } \varepsilon\in\{\pm 1\}. \end{eqnarray*} Here \begin{eqnarray*} &&\Delta^{(1)}\cap(e_1+\Z f_3) = \Gamma^{(1)}\{e_1\}\cap (e_1+\Z f_3) =\Gamma^{(1)}_u\{e_1\} =e_1+\Z x_1f_3,\\ &&\Delta^{(1)}\cap(\varepsilon e_2+\Z f_3)\\ &&\hspace*{1cm} = \Bigl(\Gamma^{(1)}\{\varepsilon e_2\}\cap (\varepsilon e_2+\Z f_3) \Bigr) \ \dot\cup\ \Bigl( \Gamma^{(1)}\{-\varepsilon e_2\}\cap (\varepsilon e_2+\Z f_3) \Bigr)\\ &&\hspace*{1cm} = \Bigl(\varepsilon e_2+\Z x_1^2 f_3\Bigr) \ \dot\cup\ \Bigl(\varepsilon e_2-\varepsilon 2 f_3+\Z x_1^2f_3\Bigr). \end{eqnarray*} (iii) The cases with $x_{12}>1$ and $x_1>x_2>1$: Then $\www{x}_1>\www{x}_2\geq 1$. Recall from Theorem \ref{t6.18} (d) $g_2=\frac{1}{\www{x}_1}\oooo{e_2}^{(1)}= \frac{1}{\www{x}_2}\oooo{e_3}^{(1)}\in \oooo{H_\Z}^{(1)}$. Here $\oooo{\Delta^{(1)}}\subset\oooo{H_\Z}^{(1)}$ consists of the six orbits (with $\varepsilon\in\{\pm 1\}$) \begin{eqnarray*} \Gamma^{(1)}_s\{\varepsilon \oooo{e_1}^{(1)}\} &\subset&\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\varepsilon \oooo{e_2}^{(1)}\} &=&\www{x}_1\Gamma^{(1)}_s\{\varepsilon g_2\} \subset \www{x}_1\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\varepsilon \oooo{e_3}^{(1)}\} &=&\www{x}_2\Gamma^{(1)}_s\{\varepsilon g_2\} \subset \www{x}_2\oooo{H_\Z}^{(1),prim}. \end{eqnarray*} Theorem \ref{t6.10} (c) and (d) describe the orbits $\Gamma^{(1)}_s\{\oooo{e_1}^{(1)}\}$ and $\Gamma^{(1)}_s\{g_2\}$. Also $\Delta^{(1)}$ consists of six orbits. \begin{eqnarray*} \Delta^{(1)}\cap(\varepsilon e_1+\Z f_3) &=& \varepsilon e_1 +\Z x_{12}\www{x}_1\www{x}_2f_3,\\ \Delta^{(1)}\cap(\varepsilon e_2+\Z f_3) &=& \varepsilon e_2 +\Z x_1x_2\www{x}_1f_3,\\ \Delta^{(1)}\cap(\varepsilon e_3+\Z f_3) &=& \varepsilon e_3 +\Z x_1x_2\www{x}_2f_3.
\end{eqnarray*} (iv) The cases with $x_1=x_2\geq 2$: Then $x_{12}=x_1=x_2$, $\www{x}_1=\www{x}_2=1$, $\oooo{e_2}^{(1)}=\oooo{e_3}^{(1)}=g_2$. Then $\oooo{\Delta^{(1)}}\subset\oooo{H_\Z}^{(1)}$ consists of the four orbits (with $\varepsilon\in\{\pm 1\}$) \begin{eqnarray*} \Gamma^{(1)}_s\{\varepsilon \oooo{e_1}^{(1)}\} &\subset&\oooo{H_\Z}^{(1),prim},\\ \Gamma^{(1)}_s\{\varepsilon \oooo{e_2}^{(1)}\} &=&\Gamma^{(1)}_s\{\varepsilon g_2\}\subset\oooo{H_\Z}^{(1),prim}. \end{eqnarray*} Theorem \ref{t6.10} (c) and (d) describe these orbits. $\Delta^{(1)}$ consists of six orbits. \begin{eqnarray*} &&\Delta^{(1)}\cap(\varepsilon e_1+\Z f_3) = \Gamma^{(1)}\{\varepsilon e_1\}\cap (\varepsilon e_1+\Z f_3) = \Gamma^{(1)}_u\{\varepsilon e_1\}\\ &&\hspace*{1cm}= \varepsilon e_1 +\Z x_1f_3,\\ &&\Delta^{(1)}\cap(\varepsilon e_2+\Z f_3) = \Delta^{(1)}\cap (\varepsilon e_3+\Z f_3)\\ && \hspace*{1cm}= \Bigl(\Gamma^{(1)}\{\varepsilon e_2\} \cap (\varepsilon e_2+\Z f_3)\Bigr) \ \dot\cup\ \Bigl(\Gamma^{(1)}\{\varepsilon e_3\} \cap (\varepsilon e_2+\Z f_3)\Bigr)\\ && \hspace*{1cm}= \Bigl(\varepsilon e_2 +\Z x_1^2f_3\Bigr)\ \dot\cup\ \Bigl(\varepsilon e_2 -\varepsilon f_3+\Z x_1^2f_3\Bigr). \end{eqnarray*} (e) The cases $\uuuu{x}=(-l,2,-l)$ with $l\geq 2$ even: Consider $\varepsilon\in\{\pm 1\}$. (i) The cases with $l\equiv 0(4)$: $\Delta^{(1)}$ consists of six orbits, \begin{eqnarray*} \Gamma^{(1)}\{\varepsilon e_1\}&=& \{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv \varepsilon(4),y_2\equiv 0(2)\} + \Z lf_3,\\ \Gamma^{(1)}\{\varepsilon e_3\}&=& \{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 0(2),y_2\equiv \varepsilon(4)\} + \Z lf_3,\\ \Gamma^{(1)}\{\varepsilon e_2\}&=& \frac{l}{2}\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 1(2),y_2\equiv 1(2)\}\\ && + \varepsilon f_3 + \Z l^2f_3. \end{eqnarray*} $\oooo{\Delta^{(1)}}$ consists of five orbits, $\Gamma_s^{(1)}\{\oooo{e_2}^{(1)}\} =\Gamma_s^{(1)}\{-\oooo{e_2}^{(1)}\}$. 
(ii) The cases with $l\equiv 2(4)$: $\Delta^{(1)}$ consists of six orbits, \begin{eqnarray*} &&\Gamma^{(1)}\{\varepsilon e_1\}= \Bigl(\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv \varepsilon(4),y_2\equiv 0(2)\} + \Z 2lf_3\Bigr)\\ &&\hspace*{0.5cm} \dot\cup \Bigl(\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv -\varepsilon(4),y_2\equiv 0(2)\}+lf_3+\Z 2lf_3\Bigr),\\ &&\Gamma^{(1)}\{\varepsilon e_3\}= \Bigl(\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 0(2),y_2\equiv \varepsilon(4)\} + \Z 2lf_3\Bigr)\\ &&\hspace*{0.5cm} \dot\cup \Bigl(\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 0(2),y_2\equiv -\varepsilon(4)\}+lf_3+\Z 2lf_3\Bigr),\\ &&\Gamma^{(1)}\{\varepsilon e_2\}= \frac{l}{2}\{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 1(2),y_2\equiv 1(2)\}\\ &&\hspace*{2cm} + \varepsilon f_3 + \Z l^2f_3. \end{eqnarray*} Especially \begin{eqnarray*} \Delta^{(1)}\cap (\varepsilon e_1+\Z f_3) &=&\varepsilon e_1+\Z lf_3\\ \supsetneqq \Gamma^{(1)}\{\varepsilon e_1\} \cap (\varepsilon e_1+\Z f_3) &=&\varepsilon e_1+\Z 2l f_3, \end{eqnarray*} and similarly for $\varepsilon e_3$. $\oooo{\Delta^{(1)}}$ consists of three orbits, $\Gamma_s^{(1)}\{\oooo{e_i}^{(1)}\} =\Gamma_s^{(1)}\{-\oooo{e_i}^{(1)}\}$ for $i\in\{1,2,3\}$. (f) The cases $\uuuu{x}=(-l,2,-l)$ with $l\geq 3$ odd: $\Delta^{(1)}$ consists of three orbits (with $\varepsilon\in\{\pm 1\}$) \begin{eqnarray*} \Gamma^{(1)}\{e_1\} &=& \Gamma^{(1)}\{\pm e_1,\pm e_3\} =(\Z e_1\oplus \Z g_2)^{prim}+\Z lf_3,\\ \Gamma^{(1)}\{\varepsilon e_2\} &=&l(\Z e_1\oplus \Z g_2)^{prim} +\varepsilon \frac{1-l^2}{2}f_3+\Z l^2 f_3. \end{eqnarray*} $\oooo{\Delta^{(1)}}$ consists of two orbits, $\Gamma_s^{(1)}\{\oooo{e_2}^{(1)}\} =\Gamma_s^{(1)}\{-\oooo{e_2}^{(1)}\}$. 
(g) The cases $\uuuu{x}\in\Z^3_{\geq 3}$ with $2x_i\leq x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$: $\Delta^{(1)}$ and $\oooo{\Delta^{(1)}}$ each consist of six orbits, \begin{eqnarray*} \Gamma^{(1)}\{\varepsilon e_i\}\quad\textup{respectively}\quad \Gamma^{(1)}_s\{\varepsilon\oooo{e_i}^{(1)}\} \quad\textup{for }(\varepsilon,i)\in\{\pm 1\}\times\{1,2,3\}. \end{eqnarray*} The projection $\Delta^{(1)}\to\oooo{\Delta^{(1)}}$ is a bijection. (h) The case $\uuuu{x}=(3,3,3)$: In this subcase of (g) the statements in (g) hold, and \begin{eqnarray*} \Gamma^{(1)}\{\varepsilon e_i\}\subset \varepsilon e_i+3H_\Z,\quad \Gamma^{(1)}_s\{\varepsilon\oooo{e_i}^{(1)}\} = (\varepsilon \oooo{e_i}^{(1)}+3 \oooo{H_\Z}^{(1)})^{prim} \end{eqnarray*} for $\varepsilon\in\{\pm 1\}$, $i\in\{1,2,3\}$. \end{theorem} {\bf Proof:} (b) The splitting of $\Delta^{(1)}$ follows with Lemma \ref{t2.11}. The first and second subsets are treated by Theorem \ref{t6.10} and Lemma \ref{t2.12}. (c) $\oooo{\Delta^{(1)}}=\oooo{H_\Z}^{(1),prim}$ follows with $\Gamma^{(1)}_s\cong SL_2(\Z)$ and Theorem \ref{t6.10} (b). $\Delta^{(1)}$ is the full preimage in $H_\Z$, because $\Gamma^{(1)}_u=O^{(1),Rad}_u=\{t^+_\lambda\,|\, \lambda(\uuuu{e})\in\langle (1,0,0),(0,1,1)\rangle_\Z\}$. (d) (i) All statements on $\oooo{\Delta^{(1)}}$ follow from $\Gamma^{(1)}_s\cong SL_2(\Z)$ and $\oooo{e_2}^{(1)}=x_1g_2$, $\oooo{e_3}^{(1)}=x_2g_2$. Recall the definition and the properties of $s_4\in\Gamma^{(1)}$ in the proof of Theorem \ref{t6.18}. For $\Delta^{(1)}$ we have to use Lemma \ref{t6.7}. \medskip {\bf Claim 1:} For $a\in\{\pm e_1,\pm e_2,\pm e_3\}$ equality in $\stackrel{(3)}{\supset}$ in Lemma \ref{t6.7} holds. \medskip {\sf Proof of Claim 1:} Suppose $h\in \textup{Stab}_{\Gamma^{(1)}}(\oooo{a}^{(1)})$.
Then
\begin{eqnarray*}
\oooo{h}\in\textup{Stab}_{\Gamma^{(1)}_s}(\oooo{a}^{(1)}) =\left\{\begin{array}{ll} \langle \oooo{s_{e_1}^{(1)}}\rangle & \textup{ if }a\in\{\pm e_1\},\\ \langle \oooo{s_4}\rangle &\textup{ if }a\in\{\pm e_2,\pm e_3\}, \end{array}\right.
\end{eqnarray*}
so for a suitable $l\in\Z$
\begin{eqnarray*}
\oooo{h}(\oooo{s_{e_1}^{(1)}})^l=\id &\textup{and}& h(s_{e_1}^{(1)})^l\in \Gamma^{(1)}_u \quad\textup{for }a\in\{\pm e_1\},\\
\oooo{h}(\oooo{s_4})^l=\id &\textup{and}& h(s_4)^l\in\Gamma^{(1)}_u \quad\textup{for }a\in\{\pm e_2,\pm e_3\}.
\end{eqnarray*}
As $s_{e_1}^{(1)}\in\textup{Stab}_{\Gamma^{(1)}} (\varepsilon e_1)$ and $s_4\in \textup{Stab}_{\Gamma^{(1)}} (\varepsilon e_2)=\textup{Stab}_{\Gamma^{(1)}} (\varepsilon e_3)$, it follows that
$$h\in\Gamma^{(1)}_u\textup{Stab}_{\Gamma^{(1)}}(a).$$
\hfill($\Box$)
\medskip
By Lemma \ref{t6.7} (b)
\begin{eqnarray}\label{6.15}
\Gamma^{(1)}\{\varepsilon e_i\}\cap (\varepsilon e_i+\Z f_3) =\Gamma^{(1)}_u\{\varepsilon e_i\}.
\end{eqnarray}
\medskip
{\bf Claim 2:} $\bigl(s_{e_1}^{(1)}s_4 s_{e_1}^{(1)}\bigr)^2 =t^-_{\lambda_3}$ with $\lambda_3\in\Hom_2(H_\Z,\Z)$, $\lambda_3(\uuuu{e}) =(0,2y_2x_2,-2y_1x_1)$.
\medskip
{\sf Proof of Claim 2:} This is a straightforward calculation with $s_{e_1}^{(1),mat}$ and $s_4^{mat}$.\hfill($\Box$)
\medskip
Therefore
\begin{eqnarray*}
\Gamma^{(1)}\{e_1\}&=&\Gamma^{(1)}\{-e_1\},\\
\Delta^{(1)}\cap (e_1+\Z f_3) &=&\Gamma^{(1)}\{e_1\}\cap (e_1+\Z f_3) =\Gamma^{(1)}_u\{e_1\}\\
&=&e_1+\Z x_1x_2 f_3.
\end{eqnarray*}
Also
\begin{eqnarray*}
\Gamma^{(1)}\{e_2\}\owns -e_2+2y_2x_2f_3,\quad \Gamma^{(1)}\{e_3\}\owns -e_3-2y_1x_1f_3.
\end{eqnarray*}
This together with \eqref{6.15} and the shape of $\Gamma^{(1)}_u$ shows the statements on $\Delta^{(1)}\cap (\varepsilon e_2+\Z f_3)$ and $\Delta^{(1)}\cap (\varepsilon e_3+\Z f_3)$.
(ii) All statements on $\oooo{\Delta^{(1)}}$ follow from $\Gamma^{(1)}_s\cong SL_2(\Z)$ and $\oooo{e_2}^{(1)}=x_1g_2$ and $\oooo{e_3}^{(1)}=g_2$.
For $\Delta^{(1)}$ we have to use Lemma \ref{t6.7}. Claim 1 in the proof of part (i), its proof and the implication \eqref{6.15} still hold. Now we can choose $(y_1,y_2)=(0,1)$, so $s_4=s_{e_3}^{(1)}$. One calculates \begin{eqnarray*} s_{e_1}^{(1)}s_{e_3}^{(1)}s_{e_1}^{(1)}\, \uuuu{e}=\uuuu{e} \begin{pmatrix}0&-x_1&-1\\0&1&0\\1&-x_1&0\end{pmatrix}. \end{eqnarray*} Therefore $\Gamma^{(1)}\{e_1\}=\Gamma^{(1)}\{\pm e_1,\pm e_3\}$. Claim 2 in the proof of part (i) still holds. It gives $(s_{e_1}^{(1)}s_{e_3}^{(1)}s_{e_1}^{(1)})^2=t^-_{\lambda_3}$ with $\lambda_3\in\Hom_2(H_\Z,\Z)$, $\lambda_3(\uuuu{e})=(0,2,0)$. Especially $t^-_{\lambda_3}(-e_2)=e_2-2f_3$. This fact, \eqref{6.15} and the shape of $\Gamma^{(1)}_u$ imply the statements on $\Delta^{(1)}\cap (e_1+\Z f_3)$ and $\Delta^{(1)}\cap (\varepsilon e_2+\Z f_3)$. (iii) and (iv) By Theorem \ref{t6.18} (d) $\Gamma^{(1)}_s\cong \Gamma^{(1)}(S(-x_{12}))$. By Theorem \ref{t6.10} (c) and (d) $\Gamma^{(1)}_s\{\pm \oooo{e_1}^{(1)},\pm g_2\}$ consists of the four orbits $\Gamma^{(1)}_s\{\varepsilon \oooo{e_1}^{(1)}\}$ and $\Gamma^{(1)}_s\{\varepsilon g_2\}$ with $\varepsilon\in\{\pm 1\}$. Therefore if $x_1> x_2$ then $\oooo{\Delta^{(1)}}$ consists of the six orbits in (iii), and if $x_1=x_2$ then $\oooo{\Delta^{(1)}}$ consists of the four orbits $\Gamma^{(1)}_s\{\varepsilon \oooo{e_1}^{(1)}\}$ and $\Gamma^{(1)}_s\{\varepsilon \oooo{e_2}^{(1)}\} =\Gamma^{(1)}_s\{\varepsilon \oooo{e_3}^{(1)}\} =\Gamma^{(1)}_s\{\varepsilon g_2\}$. For $\Delta^{(1)}$ we have to use Lemma \ref{t6.7}. Claim 1 in the proof of part (i), its proof and the implication \eqref{6.15} still hold. In the case $x_1>x_2$, $\oooo{\Delta^{(1)}}$ consists of six orbits. Part (a) of Lemma \ref{t6.7}, \eqref{6.15} and the shape of $\Gamma^{(1)}_u$ imply the statements on $\Delta^{(1)}\cap (\varepsilon e_i+\Z f_3)$ in part (iii). In the case $x_1=x_2$, $\oooo{\Delta^{(1)}}$ consists of four orbits. 
Then $e_3=e_2-f_3$, \eqref{6.15} and the shape of $\Gamma^{(1)}_u$ imply the statements on $\Delta^{(1)}\cap (\varepsilon e_i+\Z f_3)$ in part (iv).
(e) Theorem \ref{t6.10} (c) and
\begin{eqnarray*}
s_{e_1}^{(1)}(e_1,e_3)=(e_1,e_3) \begin{pmatrix}1&-2\\0&1\end{pmatrix},\quad s_{e_3}^{(1)}(e_1,e_3)=(e_1,e_3) \begin{pmatrix}1&0\\2&1\end{pmatrix},
\end{eqnarray*}
imply
\begin{eqnarray*}
\langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \{\varepsilon e_1\} &=& \{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv\varepsilon (4), y_2\equiv 0(2)\},\\
\langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \{\varepsilon e_3\} &=& \{y_1e_1+y_2e_3\in H_\Z^{prim}\,|\, y_1\equiv 0(2), y_2\equiv \varepsilon(4)\}.
\end{eqnarray*}
In the case (i), the semidirect product $\Gamma^{(1)}=\Gamma^{(1)}_u\rtimes \langle s_{e_1}^{(1)}, s_{e_3}^{(1)}\rangle $ and the shape of $\Gamma^{(1)}_u$ show that $\Gamma^{(1)}\{\varepsilon e_1\}$ and $\Gamma^{(1)}\{\varepsilon e_3\}$ are as claimed. In the case (ii), the semidirect product $\Gamma^{(1)}=\Gamma^{(1)}\cap O^{(1),Rad}_\pm \rtimes \langle s_{e_1}^{(1)}, s_{e_3}^{(1)}\rangle $ and the shape of $\Gamma^{(1)} \cap O^{(1),Rad}_\pm$ show that $\Gamma^{(1)}\{\varepsilon e_1\}$ and $\Gamma^{(1)}\{\varepsilon e_3\}$ are as claimed.
The following fact was not mentioned in Theorem \ref{t6.10} (c):
\begin{eqnarray*}
\{y_1e_1+y_2e_3\in H_\Z ^{prim}\,|\, y_1\equiv y_2\equiv 1(2)\} =\langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle \{e_1+e_3\},
\end{eqnarray*}
so this set is a single $\langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle$ orbit. We skip its proof (it contains the observation $s_{e_3}^{(1)}s_{e_1}^{(1)} (e_1+e_3)=-e_1-e_3$). This fact, the above semidirect product decompositions of $\Gamma^{(1)}$, the shape of $\Gamma^{(1)}_u$ in case (i) and of $\Gamma^{(1)}\cap O^{(1),Rad}_\pm$ in case (ii), and $e_2=-\frac{l}{2}(e_1+e_3)+f_3$ show in case (i) and case (ii) that $\Gamma^{(1)}\{\varepsilon e_2\}$ is as claimed.
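The $2\times 2$ matrix action of $s_{e_1}^{(1)}$ and $s_{e_3}^{(1)}$ in part (e) and the congruence description of the orbit of $e_1$ can be double-checked by machine. The following is an editorial verification sketch in Python (not part of the paper's text); it enumerates a finite part of the $\langle s_{e_1}^{(1)},s_{e_3}^{(1)}\rangle$ orbit of $e_1$ in coordinates with respect to $(e_1,e_3)$ and checks the congruences $y_1\equiv 1\ (4)$, $y_2\equiv 0\ (2)$:

```python
# Generators of <s_{e_1}^{(1)}, s_{e_3}^{(1)}> acting on the coordinate
# column (y1, y2) with respect to (e_1, e_3), as displayed in part (e).
A = ((1, -2), (0, 1))   # s_{e_1}^{(1)}
B = ((1, 0), (2, 1))    # s_{e_3}^{(1)}

def inv(M):
    # both generators have determinant 1
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

def apply(M, v):
    (a, b), (c, d) = M
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

# Breadth-first enumeration of the orbit of e_1 = (1, 0) up to word length 6.
orbit = {(1, 0)}
frontier = {(1, 0)}
gens = [A, B, inv(A), inv(B)]
for _ in range(6):
    frontier = {apply(g, v) for g in gens for v in frontier} - orbit
    orbit |= frontier

# Every reached vector should satisfy y1 = 1 (mod 4), y2 = 0 (mod 2).
assert all(y1 % 4 == 1 and y2 % 2 == 0 for (y1, y2) in orbit)
print(len(orbit), "orbit vectors checked")
```

This checks only one inclusion, and only up to word length 6; that every primitive vector with these congruences is actually reached is what the proof above establishes.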
(f) Theorem \ref{t6.10} (b), $\www{\uuuu{e}}=(e_1, g_2,f_3)$, \begin{eqnarray*} s_{e_1}^{(1)}(\www{\uuuu{e}})&=&\www{\uuuu{e}} \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix},\quad s_4(\www{\uuuu{e}})=\www{\uuuu{e}} \begin{pmatrix}1&0&0\\1&1&0\\0&0&1\end{pmatrix},\\ e_3&=&-e_1+2 g_2+lf_3\quad\textup{and}\quad e_2=-l g_2+\frac{1-l^2}{2}f_3 \end{eqnarray*} imply \begin{eqnarray*} \langle s_{e_1}^{(1)},s_4\rangle (e_1) &=& (\Z e_1\oplus \Z g_2)^{prim},\\ \langle s_{e_1}^{(1)},s_4\rangle (e_3) &=& (\Z e_1\oplus \Z g_2)^{prim}+lf_3,\\ \langle s_{e_1}^{(1)},s_4\rangle (e_2) &=& l(\Z e_1\oplus \Z g_2)^{prim}+\frac{1-l^2}{2}f_3. \end{eqnarray*} The semidirect product $\Gamma^{(1)}=\Gamma^{(1)}_u\rtimes \langle s_{e_1}^{(1)},s_4\rangle$ and the shape of $\Gamma^{(1)}_u$ show (with $\varepsilon\in\{\pm 1\}$) \begin{eqnarray*} \Gamma^{(1)}\{e_1\}&=& \Gamma^{(1)}\{\pm e_1,\pm e_3\} =(\Z e_1\oplus \Z g_2)^{prim}+\Z l f_3,\\ \Gamma^{(1)}\{\varepsilon e_2\}&=& l(\Z e_1\oplus \Z g_2)^{prim}+\varepsilon\frac{1-l^2}{2}f_3 + \Z l^2 f_3. \end{eqnarray*} (g) In the proof of part (g) of Theorem \ref{t6.18} the hyperbolic polygon $P$ with the six arcs in Remark \ref{t6.19} was used. It is a fundamental polygon of the action of the group $\langle \mu_1,\mu_2,\mu_3\rangle$ on $\H$. Here $\mu_1,\mu_2,\mu_3$ are parabolic M\"obius transformations with fixed points $\infty,0$ and $-\frac{x_3}{x_2}$. These fixed points are cusps of $P$. This geometry implies \begin{eqnarray*} \textup{Stab}_{\langle \mu_1,\mu_2,\mu_3\rangle}(\infty) &=&\langle\mu_1\rangle,\\ \textup{Stab}_{\langle \mu_1,\mu_2,\mu_3\rangle}(0) &=&\langle\mu_2\rangle,\\ \textup{Stab}_{\langle \mu_1,\mu_2,\mu_3\rangle}(-\frac{x_3}{x_2}) &=&\langle\mu_3\rangle. 
\end{eqnarray*} As $\langle \mu_1,\mu_2,\mu_3\rangle\cong\Gamma^{(1)}_s \bigl(\cong G^{free,3}\bigr)$ with $\mu_i\sim \oooo{s_{e_i}^{(1)}}$, this implies \begin{eqnarray*} \textup{Stab}_{\Gamma^{(1)}_s}(\{\pm \oooo{e_i}^{(1)}\}) = \langle \oooo{s_{e_i}^{(1)}}\rangle =\textup{Stab}_{\Gamma^{(1)}_s}(\oooo{e_i}^{(1)}) \quad \textup{for}\quad i\in\{1,2,3\}. \end{eqnarray*} Especially $-\oooo{e_i}^{(1)}\notin \Gamma^{(1)}_s(\oooo{e_i}^{(1)})$, so the orbits $\Gamma^{(1)}_s\{\oooo{e_i}^{(1)}\}$ and $\Gamma^{(1)}_s\{-\oooo{e_i}^{(1)}\}$ are disjoint. The cusps $\infty$, $0$ and $-\frac{x_3}{x_2}$ of the fundamental domain $P$ of the group $\langle \mu_1,\mu_2,\mu_3\rangle$ are in disjoint orbits of $\langle \mu_1,\mu_2,\mu_3\rangle$. Therefore the sets $\Gamma^{(1)}_s\{\pm \oooo{e_1}^{(1)}\}$, $\Gamma^{(1)}_s\{\pm \oooo{e_2}^{(1)}\}$ and $\Gamma^{(1)}_s\{\pm \oooo{e_3}^{(1)}\}$ are disjoint. Therefore $\oooo{\Delta^{(1)}}$ consists of the six disjoint orbits $\Gamma^{(1)}_s\{\varepsilon \oooo{e_i}^{(1)}\}$ with $(\varepsilon,i)\in\{\pm 1\}\times \{1,2,3\}$. Therefore also $\Delta^{(1)}$ consists of the six orbits $\Gamma^{(1)}\{\varepsilon e_i\}$ with $(\varepsilon,i)\in\{\pm 1\}\times \{1,2,3\}$. \medskip {\bf Claim:} For $a\in\{\pm e_1,\pm e_2,\pm e_3\}$ equality in $\stackrel{(1)}{\supset}$ and $\stackrel{(2)}{\supset}$ before Lemma \ref{t6.7} holds. \medskip {\bf Proof of the Claim:} Equality in $\stackrel{(1)}{\supset}$ holds because of Lemma \ref{t6.7} (a) and because $\Delta^{(1)}$ and $\oooo{\Delta^{(1)}}$ each consist of six orbits. Equality in $\stackrel{(2)}{\supset}$ is by Lemma \ref{t6.7} (b) equivalent to equality in $\stackrel{(3)}{\supset}$. 
Here $\Gamma^{(1)}\cong \Gamma^{(1)}_s(\cong G^{free,3})$, $\Gamma^{(1)}_u=\{\id\}$, the lift $s_{e_i}^{(1)}\in \Gamma^{(1)}$ of $\oooo{s_{e_i}^{(1)}}\in \Gamma^{(1)}_s$ is in $\Stab_{\Gamma^{(1)}}(\varepsilon e_i)$, and therefore
\begin{eqnarray*}
\Stab_{\Gamma^{(1)}}(\varepsilon\oooo{e_i}^{(1)}) =\langle s_{e_i}^{(1)}\rangle =\Stab_{\Gamma^{(1)}}(\varepsilon e_i) =\Gamma^{(1)}_u\cdot \Stab_{\Gamma^{(1)}}(\varepsilon e_i),
\end{eqnarray*}
so equality in $\stackrel{(3)}{\supset}$ holds. The Claim is proved. \hfill ($\Box$)
\medskip
The Claim and $\Gamma^{(1)}_u=\{\id\}$ show
\begin{eqnarray*}
(\varepsilon e_i+\Z f_3)\cap \Delta^{(1)} =\varepsilon e_i.
\end{eqnarray*}
Therefore the projection $\Delta^{(1)}\to\oooo{\Delta^{(1)}}$ is a bijection.
(h) In the case $\uuuu{x}=(3,3,3)$
\begin{eqnarray*}
\www{\uuuu{x}}=(1,1,1),\ f_3=-e_1+e_2-e_3,\ \oooo{e_3}^{(1)}=-\oooo{e_1}^{(1)}+\oooo{e_2}^{(1)},\\
\oooo{H_\Z}^{(1)}=\Z\oooo{e_1}^{(1)}\oplus \Z\oooo{e_2}^{(1)}.
\end{eqnarray*}
Recall that the matrices $B_1,B_2,B_3$ in the proof of Theorem \ref{t6.18} (g) with $\oooo{s_{e_i}^{(1)}}(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)}) =(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)})B_i$ are here
\begin{eqnarray*}
B_1=\begin{pmatrix}1&-3\\0&1\end{pmatrix},\ B_2=\begin{pmatrix}1&0\\3&1\end{pmatrix},\ B_3=\begin{pmatrix}-2&-3\\3&4\end{pmatrix},
\end{eqnarray*}
and generate $\Gamma(3)$. Because $s_{e_i}^{(1),mat}\equiv E_3\mmod 3$ and $B_i\equiv E_2\mmod 3$,
\begin{eqnarray*}
\Gamma^{(1)}\{\varepsilon e_i\}\subset \varepsilon e_i +3H_\Z \quad\textup{and}\quad \Gamma^{(1)}_s\{\varepsilon \oooo{e_i}^{(1)}\}\subset \varepsilon\oooo{e_i}^{(1)}+3\oooo{H_\Z}^{(1)}.
\end{eqnarray*}
It remains to show $\Gamma^{(1)}_s\{\oooo{e_i}^{(1)}\}= (\oooo{e_i}^{(1)}+3\oooo{H_\Z}^{(1)})^{prim}$.
This is equivalent to the following three statements:
\begin{list}{}{}
\item[(i)] For $(a_1,a_3)\in\Z^2$ with $(a_1,a_3)\equiv (1,0)\mmod 3$ and $\gcd(a_1,a_3)=1$ a pair $(a_2,a_4)\in\Z^2$ with $\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix}\in\Gamma(3)$ exists.
\item[(ii)] For $(a_2,a_4)\in\Z^2$ with $(a_2,a_4)\equiv (0,1)\mmod 3$ and $\gcd(a_2,a_4)=1$ a pair $(a_1,a_3)\in\Z^2$ with $\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix}\in\Gamma(3)$ exists.
\item[(iii)] For $(b_1,b_2)\in\Z^2$ with $(b_1,b_2)\equiv (-1,1)\mmod 3$ and $\gcd(b_1,b_2)=1$ a matrix $\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix}\in\Gamma(3)$ with $\begin{pmatrix}-a_1+a_2\\-a_3+a_4\end{pmatrix} =\begin{pmatrix}b_1\\b_2\end{pmatrix}$ exists.
\end{list}
(i) is proved as follows. There exists $(\www{a_2},\www{a_4})\in\Z^2$ with $1=a_1\www{a_4}-a_3\www{a_2}$. Thus $\www{a_4}\equiv 1(3)$. Let $\www{a_2}\equiv r(3)$ with $r\in\{0,1,2\}$. Choose $(a_2,a_4):=(\www{a_2}-ra_1,\www{a_4}-ra_3)$. The proofs of (ii) and (iii) are similar.
The proof of Theorem \ref{t6.21} (h) is finished. \hfill$\Box$
\bigskip
In quite a few cases $\Delta^{(0)}\subset \Delta^{(1)}$, but nevertheless in general $\Delta^{(0)}\not\subset\Delta^{(1)}$. Corollary \ref{t6.22} gives some details.
\begin{corollary}\label{t6.22}
(a) $\Delta^{(0)}=\Delta^{(1)}$ holds only in the cases $A_1^n$, so the cases with $S=E_n$ for some $n\in\N$. In all other cases $\Delta^{(1)}\not\subset R^{(0)}$.
(b) $\Delta^{(0)}\subsetneqq\Delta^{(1)}$ in the cases $n=2$ except $A_1^2$, in the reducible cases with $n=3$ except $A_1^3$ and in the case $A_3$.
(c) $\Delta^{(0)}\not\subset\Delta^{(1)}$ holds in the following cases with $n=3$: $\whh{A}_2$, $\HH_{1,2}$, $S(-l,2,-l)$ with $l\geq 3$ and $\P^2$.
\end{corollary}
{\bf Proof:} (a) In the cases $A_1^n$ $\Delta^{(0)}=\Delta^{(1)}= \{\pm e_1,...,\pm e_n\}$ by Lemma \ref{t2.12}. In a case with $S\in T^{uni}_n(\Z)-\{E_n\}$ there is an entry $S_{ij}\neq 0$ for some $i<j$, so $L(e_j,e_i)\neq 0$.
We can restrict to the rank 2 unimodular bilinear lattice $(\Z e_i+\Z e_j,L|_{\Z e_i+\Z e_j},(e_i,e_j))$ with triangular basis $(e_i,e_j)$. Part (e) of Theorem \ref{t6.10} shows that it has odd vanishing cycles which are not roots. They are also odd vanishing cycles of $(H_\Z,L,\uuuu{e})$, so then $\Delta^{(1)}\not\subset R^{(0)}\supset \Delta^{(0)}$.
(b) For the cases $n=2$ see Theorem \ref{t6.10} (e). The reducible cases with $n=3$ follow from the case $A_1$ and the cases with $n=2$. In the case $A_3$ the twelve elements of $\Delta^{(0)}$ are given in Theorem \ref{t6.14} (c). The set $\Delta^{(1)}$ is by Theorem \ref{t6.21} (c)
\begin{eqnarray*}
(\pr^{H,(1)})^{-1}(\oooo{H_\Z}^{(1),prim})+\Z f_3,
\end{eqnarray*}
which contains $\Delta^{(0)}$ as a strict subset.
(c) The case $\whh{A}_2$: With $\uuuu{x}=(-1,-1,-1)$ we have $f_1=e_1+e_2+e_3$ and $f_3=e_1-e_2+e_3$. By Theorem \ref{t6.14} (d)
\begin{eqnarray*}
\Delta^{(0)}=(\pm e_1+\Z f_1)\,\dot\cup\, (\pm e_2+\Z f_1) \, \dot\cup\, (\pm e_3+\Z f_1).
\end{eqnarray*}
By Theorem \ref{t6.21} (c)
\begin{eqnarray*}
\Delta^{(1)}= (\pr^{H,(1)})^{-1}(\oooo{H_\Z}^{(1),prim})+\Z f_3.
\end{eqnarray*}
Here, for example, for $m\in\Z-\{0,-1\}$
$$\Delta^{(0)}\owns e_2+mf_1=(2m+1)e_2+mf_3\notin\Delta^{(1)}.$$
The case $\HH_{1,2}$: With $\uuuu{x}=(-2,2,-2)$ we have $\Rad I^{(0)}=\Z(e_1+e_2)\oplus \Z(e_2+e_3)$ and $f_3=e_1+e_2+e_3$. By Theorem \ref{t6.14} (e)
$$\Delta^{(0)}=(\pm e_1+2\Rad I^{(0)})\,\dot\cup\, (\pm e_2+2\Rad I^{(0)})\,\dot\cup\, (\pm e_3+2\Rad I^{(0)}).$$
By Theorem \ref{t6.21} (e) (ii)
$$\Delta^{(1)}\subset (\Z e_1+\Z e_3)^{prim}+ \Z f_3.$$
Here, for example,
\begin{eqnarray*}
\Delta^{(0)}&\owns& e_1+6(e_1+e_2)+4(e_2+e_3)\\
&=&(-3)e_1+(-6)e_3+10f_3\notin (\Z e_1+\Z e_3)^{prim}+ \Z f_3.
\end{eqnarray*}
This element is not contained in $\Delta^{(1)}$ because by Theorem \ref{t6.21} (e)
$$\Delta^{(1)}\subset \Bigl((\Z e_1+\Z e_3)^{prim}+\Z f_3\Bigr) \,\dot\cup\, \Bigl(\frac{l}{2}(\Z e_1+\Z e_3)^{prim}+\Z f_3\Bigr).$$
The cases $S(-l,2,-l)$: Recall $\Rad I^{(0)}=\Z f_1$, $f_1=e_1-e_3$,
\begin{eqnarray*}
&&f_3=\frac{1}{2}(le_1+2e_2+le_3)\quad\textup{if }l\geq 3\textup{ is even},\\
&&\left. \begin{array}{l} f_3=le_1+2e_2+le_3\\ g_2=\frac{1}{2}(e_1+e_3)-\frac{l}{2}e_2\end{array}\right\} \textup{ if }l\geq 3\textup{ is odd.}
\end{eqnarray*}
Consider the element
\begin{eqnarray*}
h_1(e_1)-la_1f_1&=&h_1(e_1-la_1f_1)\in H_\Z\quad\textup{with}\\
h_1&:=& (s_{e_1}^{(0)}s_{e_2}^{(0)})^3\in\Gamma^{(0)},\\
a_1&:=& \frac{1}{2}(l^5-4l^3+3l) \in\Z.
\end{eqnarray*}
By Theorem \ref{t6.11} (f) $T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1)$ and $T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1)\in \Gamma_u^{(0)}$ with
\begin{eqnarray*}
T(\oooo{j}^{(0)}(\oooo{e_1}^{(0)})\otimes f_1)(e_1)&=&e_1+2f_1,\\
T(\oooo{j}^{(0)}(l\oooo{e_2}^{(0)})\otimes f_1)(e_1)&=&e_1-l^2f_1,
\end{eqnarray*}
so
\begin{eqnarray*}
\Delta^{(0)}&\supset&\left\{\begin{array}{ll} e_1+2\Z f_1 & \textup{ if }l\textup{ is even,}\\ e_1+\Z f_1 & \textup{ if }l\textup{ is odd.} \end{array}\right.
\end{eqnarray*}
Therefore $h_1(e_1)-la_1f_1=h_1(e_1-la_1f_1)\in\Delta^{(0)}$. One calculates
\begin{eqnarray*}
h_1(e_1)-la_1f_1&=& (s_{e_1}^{(0)}s_{e_2}^{(0)})^3 (e_1)-la_1f_1\\
&=&\uuuu{e}(\begin{pmatrix}-1&l&-2\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\l&-1&l\\0&0&1\end{pmatrix})^3 \begin{pmatrix}1\\0\\0\end{pmatrix}-la_1f_1\\
&=&\uuuu{e}\begin{pmatrix}l^2-1&-l&l^2-2\\l&-1&l\\0&0&1\end{pmatrix}^3 \begin{pmatrix}1\\0\\0\end{pmatrix}-la_1f_1\\
&=&\uuuu{e}\begin{pmatrix}l^6-5l^4+6l^2-1\\l^5-4l^3+3l\\0\end{pmatrix} -la_1f_1\\
&=&(-l^4+3l^2-1) e_1+ \left\{\begin{array}{ll} 2a_1f_3&\textup{ if }l\textup{ is even,}\\ a_1f_3&\textup{ if }l\textup{ is odd.}\end{array}\right.
\end{eqnarray*} This element is not contained in $\Delta^{(1)}$ because $(-l^4+3l^2-1)\notin\{\pm 1,\pm \frac{l}{2},\pm l\}$ and because by Theorem \ref{t6.21} (e) and (f) \begin{eqnarray*} \Delta^{(1)}&\subset& \Bigl((\Z e_1+\Z e_3)^{prim}+\Z f_3\Bigr) \,\dot\cup\, \Bigl(\frac{l}{2}(\Z e_1+\Z e_3)^{prim}+\Z f_3\Bigr)\\ && \textup{ if } l\textup{ is even,}\\ \Delta^{(1)}&\subset& \Bigl((\Z e_1+\Z g_2)^{prim}+\Z f_3\Bigr) \,\dot\cup\, \Bigl(l(\Z e_1+\Z g_2)^{prim}+\Z f_3\Bigr)\\ && \textup{ if }l\textup{ is odd.} \end{eqnarray*} The case $\P^2$: With $\uuuu{x}=(-3,3,-3)$ we have $f_3=e_1+e_2+e_3$. \begin{eqnarray*} \Delta^{(1)}&\owns& s_{e_3}^{(1)}(s_{e_2}^{(1)})^{-2}(e_1)\\ &=&\uuuu{e}\begin{pmatrix}1&0&0\\0&1&0\\3&-3&1\end{pmatrix} \begin{pmatrix}1&0&0\\3&1&-3\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\3&1&-3\\0&0&1\end{pmatrix} \begin{pmatrix}1\\0\\0\end{pmatrix} =\uuuu{e}\begin{pmatrix}1\\6\\-15\end{pmatrix}. \end{eqnarray*} By Theorem \ref{t6.21} (g) the projection $\Delta^{(1)}\to\oooo{\Delta^{(1)}}$ is a bijection. Therefore \begin{eqnarray*} \Delta^{(1)}\not\owns (e_1+6e_2-15e_3)+9f_3 =10e_1+15e_2-6e_3. \end{eqnarray*} On the other hand \begin{eqnarray*} &&L(10e_1+15e_2-6e_3,10e_1+15e_2-6e_3)\\ &=&\begin{pmatrix}10&15&-6\end{pmatrix} \begin{pmatrix}1&0&0\\-3&1&0\\3&-3&1\end{pmatrix} \begin{pmatrix}10\\15\\-6\end{pmatrix} =1, \end{eqnarray*} so $10e_1+15e_2-6e_3\in R^{(0)}=\Delta^{(0)}$. \hfill$\Box$ \bigskip Consider a triple $\uuuu{x}\in\Z^3$ and a corresponding unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t =S(\uuuu{x})$. The Remarks \ref{t4.17} explained that the tuple $(H_\Z,\pm I^{(1)},\Gamma^{(1)},\Delta^{(1)})$ depends only on the $(G^{phi}\ltimes \www{G}^{sign})\rtimes \langle\gamma\rangle$ orbit of $\uuuu{x}\in\Z^3$. Lemma \ref{t4.18} gave at least one element of each orbit of this group in $\Z^3$. 
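Several matrix identities in the proof of Corollary \ref{t6.22} are pure integer computations and can be double-checked by machine. The following is an editorial verification sketch in Python (not part of the paper's text), using only the matrices displayed in the proof:

```python
def matmul(X, Y):
    # product of two integer 3x3 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(3)) for i in range(3)]

# Cases S(-l,2,-l): the displayed matrix power, applied to e_1, should give
# the column (l^6 - 5l^4 + 6l^2 - 1, l^5 - 4l^3 + 3l, 0).
for l in range(3, 10):
    P = [[l * l - 1, -l, l * l - 2], [l, -1, l], [0, 0, 1]]
    v = matvec(matmul(P, matmul(P, P)), [1, 0, 0])
    assert v == [l**6 - 5 * l**4 + 6 * l**2 - 1, l**5 - 4 * l**3 + 3 * l, 0]
    assert (l**5 - 4 * l**3 + 3 * l) % 2 == 0   # a_1 is indeed an integer

# Case P^2: s_{e_3}^{(1)} (s_{e_2}^{(1)})^{-2} (e_1) = e_1 + 6 e_2 - 15 e_3.
M3 = [[1, 0, 0], [0, 1, 0], [3, -3, 1]]
M2i = [[1, 0, 0], [3, 1, -3], [0, 0, 1]]
assert matvec(matmul(M3, matmul(M2i, M2i)), [1, 0, 0]) == [1, 6, -15]

# L(10 e_1 + 15 e_2 - 6 e_3, itself) = 1, so this element is a root.
Lmat = [[1, 0, 0], [-3, 1, 0], [3, -3, 1]]
w = [10, 15, -6]
assert sum(w[i] * Lmat[i][j] * w[j] for i in range(3) for j in range(3)) == 1
print("all identities verified")
```

The sketch re-derives the identities in exact integer arithmetic for $3\leq l\leq 9$; it is an illustration, not a replacement for the general argument.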
Theorem \ref{t6.18} and Theorem \ref{t6.21} gave detailed information on the tuple $(H_\Z,\pm I^{(1)},\Gamma^{(1)}, \Delta^{(1)})$ for the elements in Lemma \ref{t4.18} (b)+(c) and rather coarse information for the elements in Lemma \ref{t4.18} (a). The next corollary uses this information to conclude that the $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle\gamma\rangle$ orbits of the elements in Lemma \ref{t4.18} (b)+(c) are pairwise different and also different from the orbits of the elements in Lemma \ref{t4.18} (a), because the corresponding tuples $(H_\Z,\pm I^{(1)},\Gamma^{(1)},\Delta^{(1)})$ are not isomorphic. As Theorem \ref{t6.18} and Theorem \ref{t6.21} give only coarse information on the cases in Lemma \ref{t4.18} (a), Corollary \ref{t6.23} is also vague about them. The set of local minima in Lemma \ref{t4.18} (b)+(c) together with $(3,3,3)$ is called $\Lambda_1$, and the set of local minima in Lemma \ref{t4.18} (a) without $(3,3,3)$ is called $\Lambda_2$,
\begin{eqnarray*}
\Lambda_1&:=& \{(3,3,3)\}\cup\{(-l,2,-l)\,|\, l\geq 2\}\\
&&\cup\{(x_1,x_2,0)\,|\, x_1,x_2\in\Z_{\geq 0},x_1\geq x_2\},\\
\Lambda_2&:=&\{\uuuu{x}\in\Z^3_{\geq 3}\,|\, 2x_i\leq x_jx_k \textup{ for }\{i,j,k\}=\{1,2,3\}\}-\{(3,3,3)\}.
\end{eqnarray*}
\begin{corollary}\label{t6.23}
Consider $\uuuu{x}$ and $\www{\uuuu{x}}\in \Lambda_1$ or $\uuuu{x}\in \Lambda_1$ and $\www{\uuuu{x}}\in \Lambda_2$. Suppose $\uuuu{x}\neq \www{\uuuu{x}}$. Then the tuples $(H_\Z,\pm I^{(1)},\Gamma^{(1)},\Delta^{(1)})$ of $\uuuu{x}$ and $\www{\uuuu{x}}$ are not isomorphic. Consequently, the $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle\gamma\rangle$ orbits of $\uuuu{x}$ and $\www{\uuuu{x}}$ are disjoint.
\end{corollary}
{\bf Proof:} In the following, (b), (c), (d), (d)(i), (d)(ii), (d)(iii), (d)(iv), (e), (e)(i), (e)(ii), (f), (g), (h)($\subset$ (g)) mean the corresponding families of cases in Theorem \ref{t6.21}. Of course, (c) and (h) are single cases.
We will first discuss how to separate the families by properties of the isomorphism classes of the tuples $(H_\Z,\pm I^{(1)},\Gamma^{(1)},\Delta^{(1)})$, and then how to separate the cases within one family.
The pair $(\Gamma^{(1)}_u,\Gamma^{(1)}_s)$ gives the following incomplete separation of families,
\begin{eqnarray*}
\begin{array}{l|c|r}
\Gamma^{(1)}_u\cong \ ? &\Gamma^{(1)}_s\cong \ ? & \textup{families}\\
\hline
\{\id\}& G^{free,3} & (g) \\
\{\id\}&\not\cong G^{free,3} & (b) \\
\Z^2& SL_2(\Z) & (c),(d)(i)+(ii),(f)\\
\Z^2& SL_2(\Z)\times\{\pm 1\} & (e)(ii) \\
\Z^2& G^{free,2} & (d)(iii)+(iv), (e)(i)
\end{array}
\end{eqnarray*}
The fundamental polygon $P$ in Remark \ref{t6.19} (ii) has finite area in the case (h) (i.e. $\uuuu{x}=(3,3,3)$) and infinite area in the other cases in (g). So it separates the case (h) from the other cases in (g).
The number of $\Br_3\ltimes\{\pm 1\}^3$ orbits in $\oooo{\Delta^{(1)}}$ separates the families (c),(d),(e)(i),(f) almost completely:
\begin{eqnarray*}
\begin{array}{l|c|c|c|c|c|c}
|\{\textup{orbits in }\oooo{\Delta^{(1)}}\}| & 1 & 2 & 3 & 4 & 5 & 6 \\
\textup{families} & (c) & (d)(ii), (f) & (d)(i) & (d)(iv) & (e)(i) & (d)(iii)
\end{array}
\end{eqnarray*}
The separation of the family (d)(ii) from the family (f) is more difficult and can be done as follows. In both families $\Delta^{(1)}$ consists of three orbits, and $\oooo{\Delta^{(1)}}$ consists of two orbits. The two orbits $\Gamma^{(1)}\{e_2\}$ and $\Gamma^{(1)}\{-e_2\}$ project to the single orbit $\Gamma^{(1)}_s\{\oooo{e_2}^{(1)}\} =\Gamma^{(1)}_s\{-\oooo{e_2}^{(1)}\}$. The set
\begin{eqnarray*}
\{n\in\N&|& \textup{there exist } a_1\in \Gamma^{(1)}\{e_2\} \textup{ and }\varepsilon\in\{\pm 1\}\\
&&\textup{ with } a_1+\varepsilon nf_3\in \Gamma^{(1)}\{-e_2\}\}
\end{eqnarray*}
is well defined.
Its minimum is $2$ in each case in (d)(ii) because there $x_1>x_2=1$ and
\begin{eqnarray*}
\Gamma^{(1)}\{e_2\}\cap (e_2+\Z f_3)=e_2+\Z x_1^2f_3,\\
\Gamma^{(1)}\{-e_2\}\cap (e_2+\Z f_3)=e_2-2f_3+\Z x_1^2f_3.
\end{eqnarray*}
Its minimum is $1$ in each case in (f) because there
\begin{eqnarray*}
\Gamma^{(1)}\{e_2\}\cap (le_1+\Z f_3) &=&le_1+\frac{1+l^2}{2}f_3+\Z l^2f_3,\\
\Gamma^{(1)}\{-e_2\}\cap (le_1+\Z f_3) &=&le_1+\frac{-1+l^2}{2}f_3+\Z l^2f_3.
\end{eqnarray*}
It remains to separate the cases within each of the families (b), (d), (e) and (f) ((c) and (h) are single cases). The pair $(\oooo{H_\Z}^{(1)},\pm\oooo{I}^{(1)})$ and Lemma \ref{t6.16} (b) allow one to recover $\gcd(x_1,x_2,x_3)$, which is as follows in these families,
\begin{eqnarray*}
\begin{array}{c|c|c|c|c}
\textup{family of cases} & (b) & (d) & (e) & (f) \\
\gcd(x_1,x_2,x_3) & x_1 & x_{12} & 2 & 1
\end{array}
\end{eqnarray*}
Within the family (b) this separates the cases. For the family (d) we need additionally the pair $(\www{x}_1,\www{x}_2)$ because $(x_1,x_2)=(x_{12}\www{x}_1,x_{12}\www{x}_2)$. The pair $(\www{x}_1,\www{x}_2)$ can be read off from $\oooo{\Delta}^{(1)}\subset \oooo{H_\Z}^{(1)}$, more precisely, from the relation of the $\Gamma^{(1)}_s$ orbits in $\oooo{\Delta^{(1)}}$ to the set $\oooo{H_\Z}^{(1),prim}\subset \oooo{H_\Z}^{(1)}$. In the family (e) one can read off $\frac{l}{2}$, and in the family (f) one can read off $l$ from the relation of the $\Gamma_s^{(1)}$ orbits in $\oooo{\Delta}^{(1)}$ to the set $\oooo{H_\Z}^{(1),prim}\subset\oooo{H_\Z}^{(1)}$. \hfill$\Box$
\begin{remarks}\label{t6.24}
Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\geq 2$, and let $\uuuu{e}$ be a triangular basis with matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. Recall Theorem \ref{t3.7} (a). If $S_{ij}\leq 0$ for $i<j$ then $(\Gamma^{(0)},\{s_{e_1}^{(0)},...,s_{e_n}^{(0)}\})$ is a Coxeter system, and the presentation in Definition \ref{t3.15} of the Coxeter group $\Gamma^{(0)}$ is determined by $S$.
Especially $\Gamma^{(0)}\cong G^{fCox,n}$ if $S_{ij}\leq -2$ for $i<j$. One might hope for a similar easy control of $\Gamma^{(1)}$ if $S_{ij}\leq 0$ for $i<j$. In the cases with $n=2$ this works by Lemma \ref{t2.12} and Theorem \ref{t6.10}: \begin{eqnarray*} \Gamma^{(1)}&\cong& \left\{\begin{array}{ll} \{\id\}&\textup{ if }x=0,\\ SL_2(\Z)&\textup{ if }x=-1,\\ G^{free,2}&\textup{ if }x\leq -2. \end{array}\right. \end{eqnarray*} But in the cases with $n=3$ this fails. The Remarks \ref{t4.17} show $\Gamma^{(1)}(S(\uuuu{x}))\cong \Gamma^{(1)}(S(-\uuuu{x}))$ for any $\uuuu{x}\in\Z^3$. The cases $S(\www{\uuuu{x}})$ with $\www{\uuuu{x}}\in\Z^3_{\geq 0}$ lead by the action of $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle\gamma\rangle$ to all cases in Theorem \ref{t6.18}. Especially, the cases $S(\www{\uuuu{x}})$ with $\www{\uuuu{x}}\in\Z^3_{\geq 2}$ contain the nice cases in Theorem \ref{t6.18} (g) with $\Gamma^{(1)}\cong G^{free,3}$, but also many other cases. Compare the family $\{(3,3,l)\,|\, l\geq 2\}$ in the Examples \ref{t4.20} (iv) or the case $S=S(2,2,2)\sim S(-2,-2,-2)\sim S(\HH_{1,2})$ with $\Gamma^{(1)}$ far from $G^{free,3}$. \end{remarks} \begin{remarks}\label{t6.25} In the cases $\uuuu{x}\in\Z^3$ in Lemma \ref{t4.18} (a), so $\uuuu{x}\in\Z^3_{\geq 2}$ with $2x_i\leq x_jx_k$ for $\{i,j,k\}=\{1,2,3\}$, Theorem \ref{t6.18} and Theorem \ref{t6.21} give rather coarse information, \begin{eqnarray*} \Gamma^{(1)}_u=\{\id\}\quad\textup{and}\quad \Gamma^{(1)}\cong\Gamma^{(1)}_s\cong G^{free,3} \quad\textup{by Theorem \ref{t6.18}},\\ \Delta^{(1)}\to \oooo{\Delta^{(1)}}\quad\textup{is a bijection} \qquad\textup{by Theorem \ref{t6.21}}. \end{eqnarray*} But it is nontrivial to determine the unique preimage in $\Gamma^{(1),mat}$ of an element of $\Gamma^{(1)}_s$ and the unique preimage in $\Delta^{(1)}$ of an element of $\oooo{\Delta^{(1)}}$. This holds especially for the case $\uuuu{x}=(3,3,3)$ where $\Gamma^{(1)}_s\cong\Gamma(3)$ and $\oooo{\Delta^{(1)}}$ are known. 
Part (c) of the following lemma gives for $\uuuu{x}=(3,3,3)$ at least an inductive way to determine the preimage in $\Gamma^{(1),mat}$ of a matrix in $\Gamma(3)\cong \Gamma^{(1)}_s$. \end{remarks} \begin{lemma}\label{t6.26} Consider the case $\uuuu{x}=(3,3,3)$. Denote by $L_{\P^2}:\Gamma(3)\to\Gamma^{(1),mat}$ the inverse of the natural group isomorphism \begin{eqnarray*} \begin{CD} \Gamma^{(1),mat} @>>> \Gamma^{(1)} @>>> \Gamma^{(1)}_s @>>> \Gamma(3),\\ s_{e_i}^{(1),mat} @>>> s_{e_i}^{(1)} @>>> \oooo{s_{e_i}^{(1)}} @>>> B_i \\ g^{mat} @<<< g @>>> \oooo{g} @>>> \oooo{g}^{mat} \end{CD} \end{eqnarray*} with \begin{eqnarray*} g(\uuuu{e})&=&\uuuu{e}\cdot g^{mat}\qquad\textup{and}\\ \oooo{g}(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)}) &=&(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)})\cdot \oooo{g}^{mat} \end{eqnarray*} for $g\in \Gamma^{(1)}$. Define the subgroup of $SL_3(\Z)$ \index{$G^{(3,3,3)}\subset SL_3(\Z)$} \begin{eqnarray*} G^{(3,3,3)}&:=& \{F\in SL_3(\Z)\,|\, F\equiv E_3\mmod 3, F\begin{pmatrix}-1\\1\\-1\end{pmatrix}= \begin{pmatrix}-1\\1\\-1\end{pmatrix}\}. \end{eqnarray*} Define the map (st for standard) \begin{eqnarray*} L_{st}:\Gamma(3)&\to& M_{3\times 3}(\Z),\quad \begin{pmatrix}a&b\\c&d\end{pmatrix}\mapsto \begin{pmatrix}a&b&1-a+b\\c&d&-1-c+d\\0&0&1\end{pmatrix}, \end{eqnarray*} and the three matrices \begin{eqnarray*} K_1&:=& \begin{pmatrix}3&0&-3\\-3&0&3\\3&0&-3\end{pmatrix},\quad K_2:= \begin{pmatrix}0&3&3\\0&-3&-3\\0&3&3\end{pmatrix},\\ K_3&:=&K_1+K_2= \begin{pmatrix}3&3&0\\-3&-3&0\\3&3&0\end{pmatrix}. \end{eqnarray*} (a) $L_{st}$ is an injective group homomorphism $L_{st}:\Gamma(3)\to G^{(3,3,3)}$, \begin{eqnarray*} K_iK_j=0\quad\textup{for }i,j\in\{1,2,3\},\\ L_{st}(C)K_i=K_i\quad\textup{for }C\in\Gamma(3),\ i\in\{1,2,3\},\\ G^{(3,3,3)}=\{L_{st}(C)+\alpha K_1+\beta K_2\,|\, C\in \Gamma(3),\alpha,\beta\in\Z\}. 
\end{eqnarray*}
The following sequence is an exact sequence of group homomorphisms,
\begin{eqnarray*}
\begin{CD}
\{1\} @>>> \Z^2 @>>> G^{(3,3,3)} @>>> \Gamma(3) @>>> \{1\}\\
@. (\alpha,\beta) @>>> E_3+\alpha K_1+\beta K_2 @. @. \\
@. @. L_{st}(C)+\alpha K_1+\beta K_2 @>>> C @.
\end{CD}
\end{eqnarray*}
$L_{st}$ is a splitting of this exact sequence.
(b) $\Gamma^{(1),mat}\subset G^{(3,3,3)}$, and $L_{\P^2}:\Gamma(3)\to \Gamma^{(1),mat}$ is another splitting of the exact sequence in part (a). It satisfies
\begin{eqnarray*}
L_{\P^2}(B_1)&=&L_{st}(B_1)=s_{e_1}^{(1),mat} =\begin{pmatrix}1&-3&-3\\0&1&0\\0&0&1\end{pmatrix} \quad\textup{for }B_1=\begin{pmatrix}1&-3\\0&1\end{pmatrix},\\
L_{\P^2}(B_2)&=&L_{st}(B_2)=s_{e_2}^{(1),mat} =\begin{pmatrix}1&0&0\\3&1&-3\\0&0&1\end{pmatrix} \quad\textup{for }B_2=\begin{pmatrix}1&0\\3&1\end{pmatrix},\\
L_{\P^2}(B_3)&=&L_{st}(B_3)+K_3=s_{e_3}^{(1),mat} =\begin{pmatrix}1&0&0\\0&1&0\\3&3&1\end{pmatrix} \quad\textup{for }B_3=\begin{pmatrix}-2&-3\\3&4\end{pmatrix},\\
L_{\P^2}(B_3^{-1})&=&L_{st}(B_3^{-1})-K_3.
\end{eqnarray*}
(c) An arbitrary element $C\in\Gamma(3)$ can be written in a unique way as a product
\begin{eqnarray*}
C=C_1 B_3^{\varepsilon_1} C_2 B_3^{\varepsilon_2} C_3 ... C_m B_3^{\varepsilon_m} C_{m+1}\\
\textup{with}\quad m\in\Z_{\geq 0},\ C_1,...,C_{m+1}\in \langle B_1^{\pm 1},B_2^{\pm 1}\rangle,\ \varepsilon_1,...,\varepsilon_m\in\{\pm 1\}.
\end{eqnarray*}
Then
\begin{eqnarray*}
L_{\P^2}(C)&=& L_{st}(C) + K_3\Bigl( \varepsilon_1 L_{st}(C_2B_3^{\varepsilon_2}C_3...C_m B_3^{\varepsilon_m}C_{m+1}) \Bigr. \\
&& \hspace*{2cm}+\varepsilon_2 L_{st}(C_3B_3^{\varepsilon_3}C_4...C_m B_3^{\varepsilon_m}C_{m+1}) \\
&& \hspace*{2cm} \Bigl. + ... + \varepsilon_m L_{st}(C_{m+1})\Bigr).
\end{eqnarray*}
\end{lemma}
{\bf Proof:} The parts (a) and (b) are easy.
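The identities in parts (a) and (b), and the formula in part (c) on a small example, can be checked mechanically. The following is an editorial verification sketch in Python (not part of the paper's text); the example word $C=B_1B_3B_2$ (so $m=1$, $C_1=B_1$, $\varepsilon_1=+1$, $C_2=B_2$) is chosen only for illustration:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(3)] for i in range(3)]

def L_st(C):
    # the standard splitting of the lemma
    (a, b), (c, d) = C
    return [[a, b, 1 - a + b], [c, d, -1 - c + d], [0, 0, 1]]

B1, B2, B3 = ((1, -3), (0, 1)), ((1, 0), (3, 1)), ((-2, -3), (3, 4))
K3 = [[3, 3, 0], [-3, -3, 0], [3, 3, 0]]
M1 = [[1, -3, -3], [0, 1, 0], [0, 0, 1]]     # s_{e_1}^{(1),mat}
M2 = [[1, 0, 0], [3, 1, -3], [0, 0, 1]]      # s_{e_2}^{(1),mat}
M3 = [[1, 0, 0], [0, 1, 0], [3, 3, 1]]       # s_{e_3}^{(1),mat}

# Part (b): L_st(B_1), L_st(B_2) and L_st(B_3)+K_3 are the generator matrices.
assert L_st(B1) == M1 and L_st(B2) == M2 and madd(L_st(B3), K3) == M3

# Part (a): L_st(C) K_3 = K_3 (checked for the generators), K_3 K_3 = 0.
for C in (B1, B2, B3):
    assert matmul(L_st(C), K3) == K3
assert matmul(K3, K3) == [[0] * 3 for _ in range(3)]

# Part (c) on the example C = B_1 B_3 B_2:
def mul2(C, D):
    (a, b), (c, d) = C
    (e, f), (g, h) = D
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

C = mul2(mul2(B1, B3), B2)
direct = matmul(M1, matmul(M3, M2))               # L_{P^2}(C) directly
formula = madd(L_st(C), matmul(K3, L_st(B2)))     # formula of part (c)
assert direct == formula
print("splitting identities verified")
```

The general case of the formula is exactly the telescoping argument given in the proof of part (c) below; the sketch only confirms one instance in exact integer arithmetic.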
(c) By part (b) $L_{\P^2}(C_j)=L_{st}(C_j)$ and $L_{\P^2}(B_3^{\varepsilon_j})=L_{st}(B_3^{\varepsilon_j}) +\varepsilon_j K_3$, so
\begin{eqnarray*}
L_{\P^2}(C)&=& L_{\P^2}(C_1)L_{\P^2}(B_3^{\varepsilon_1}) L_{\P^2}(C_2)...L_{\P^2}(C_m)L_{\P^2}(B_3^{\varepsilon_m}) L_{\P^2}(C_{m+1})\\
&=& L_{st}(C_1)(L_{st}(B_3^{\varepsilon_1})+\varepsilon_1K_3) L_{st}(C_2)...\\
&& L_{st}(C_m)(L_{st}(B_3^{\varepsilon_m})+ \varepsilon_mK_3)L_{st}(C_{m+1}).
\end{eqnarray*}
Observe $K_3 L_{st}(\www{C})K_3=K_3K_3=0$ for $\www{C}\in\Gamma(3)$. Therefore if one writes the product above as a sum of $2^m$ terms, only the $1+m$ terms in which $K_3$ turns up at most once do not vanish. This leads to the claimed formula for $L_{\P^2}(C)$.\hfill$\Box$
\chapter{Distinguished bases in the rank 2 and rank 3 cases}\label{s7}
\setcounter{equation}{0}
\setcounter{figure}{0}
In section \ref{s3.3} we introduced the set of distinguished bases of a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis. It is the orbit $\BB^{dist}=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})$ of $\uuuu{e}$ under the group $\Br_n\ltimes\{\pm 1\}^n$. We also posed the question of when this set can be characterized in an easy way, more precisely, when the inclusions in \eqref{3.3} or \eqref{3.4} are equalities,
\begin{eqnarray*}
\BB^{dist}\subset\{\uuuu{v}\in(\Delta^{(0)})^n\,|\, s_{v_1}^{(0)}...s_{v_n}^{(0)}=-M\},\hspace*{2cm}(3.3)\\
\BB^{dist}\subset\{\uuuu{v}\in(\Delta^{(1)})^n\,|\, s_{v_1}^{(1)}...s_{v_n}^{(1)}=M\}.\hspace*{2.4cm}(3.4)
\end{eqnarray*}
Theorem \ref{t3.2} (a) and (b) imply that \eqref{3.4} is an equality if $\Gamma^{(1)}$ is a free group with generators $s_{e_1}^{(1)},...,s_{e_n}^{(1)}$ and that \eqref{3.3} is an equality if $\Gamma^{(0)}$ is a free Coxeter group with generators $s_{e_1}^{(0)},...,s_{e_n}^{(0)}$, see the Examples \ref{t3.23} (iv).
More generally, if $(\Gamma^{(0)},s_{e_1}^{(0)},...,s_{e_n}^{(0)})$ is a Coxeter system (Definition \ref{t3.5}) then by Theorem \ref{t3.6} \eqref{3.3} is an equality, see the Examples \ref{t3.23} (v). It is remarkable that the property $\sum_{i=1}^n \Z v_i=H_\Z$, which each distinguished basis $\uuuu{v}\in \BB^{dist}$ satisfies, is not needed in the characterization in these cases.
These are positive results. In the sections \ref{s7.1}--\ref{s7.3} we systematically study all cases of rank 2 and 3 and also find negative results. In rank 2, in section \ref{s7.1}, \eqref{3.3} is always an equality, and \eqref{3.4} is an equality in all cases except the case $A_1^2$. In the even rank 3 cases in section \ref{s7.2}, \eqref{3.3} is an equality in all cases except the case $\HH_{1,2}$. In the case $\HH_{1,2}$ the set on the right hand side contains $\Br_3\ltimes\{\pm 1\}^3$ orbits of tuples $\uuuu{v}$ with arbitrarily large finite index $[H_\Z:\sum_{i=1}^3\Z v_i]$ and two orbits with index 1, namely $\BB^{dist}$ and one other orbit. In the odd rank 3 cases in section \ref{s7.3} we understand the set $B_1\cup B_2$ of triples $\uuuu{x}\in\Z^3$ such that \eqref{3.4} is an equality, and we also know a set $B_3\cup B_4$ of triples $\uuuu{x}\in\Z^3$ such that \eqref{3.4} becomes an equality if one adds the condition $H_\Z=\sum_{i=1}^3 \Z v_i$. But for $\uuuu{x}\in\Z^3-\bigcup_{j=1}^4B_j$, we know little.
Section \ref{s7.4} builds on section \ref{s4.4}, where for a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ the stabilizer $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ had been determined. It determines the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. It uses the systematic results in chapter \ref{s5} on the group $G_\Z$ and on the map $Z:(\Br_n\ltimes\{\pm 1\}^n)_S \to G_\Z$ in the rank 3 cases.
In the sections \ref{s4.3} and \ref{s4.4} the pseudo-graph $\GG(\uuuu{x})$ with vertex set an orbit $\Br_3(\uuuu{x}/\{\pm 1\}^3)$ and edge set from generators of the group $G^{phi}\rtimes\langle\gamma\rangle$ had been crucial. In section \ref{s7.4} we introduce a variant with the same vertex set, but a different edge set, namely oriented edges coming from the elementary braids $\sigma_i^{\pm 1}$. We also define the much larger $\sigma$-pseudo-graph with vertex set the set $\BB^{dist}/\{\pm 1\}^3$ of distinguished bases up to signs and with oriented edges coming from the elementary braids $\sigma_i^{\pm 1}$. We consider especially the examples where the set $\Br_3(\uuuu{x}/\{\pm 1\}^3)$ is finite. \section{Distinguished bases in the rank 2 cases} \label{s7.1} In the rank 2 cases the inclusion \eqref{3.3} is always an equality, and the inclusion \eqref{3.4} is almost always an equality, namely in all cases except the case $A_1^2$. \begin{theorem}\label{t7.1} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank 2 with a triangular basis $\uuuu{e}=(e_1,e_2)$ with matrix $S=S(x)=\begin{pmatrix}1&x\\0&1\end{pmatrix} =L(\uuuu{e}^t,\uuuu{e})^t$ with $x\in\Z$. Fix $k\in\{0,1\}$. (a) The inclusion \eqref{3.3} respectively \eqref{3.4} in Remark \ref{t3.19} is an equality in all cases except the odd case $A_1^2$, so the case $(k,x)=(1,0)$. In that case the right hand side in \eqref{3.4} splits into the orbits of the three pairs $(e_1,e_1)$, $(e_1,e_2)$, $(e_2,e_2)$. (b) The stabilizers in $\Br_2$ of $S/\{\pm 1\}^2$ and of $\uuuu{e}/\{\pm 1\}^2$ are \begin{eqnarray*} (\Br_2)_{S/\{\pm 1\}^2}&=&\Br_2\quad\textup{and}\\ (\Br_2)_{\uuuu{e}/\{\pm 1\}^2}&=& \left\{\begin{array}{ll} \langle \sigma_1^2\rangle & \textup{ if }x=0,\\ \langle \sigma_1^3\rangle & \textup{ if }x\in\{\pm 1\},\\ \{\id\} & \textup{ if }|x|\geq 2. \end{array}\right. \end{eqnarray*} \end{theorem} {\bf Proof:} (a) The even and odd cases $A_1^2$: See the Examples \ref{t3.23} (iii).
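For the odd case $A_1^2$, i.e.\ $(k,x)=(1,0)$, the splitting of the right hand side of \eqref{3.4} into three orbits can also be confirmed by a brute-force enumeration. The following sketch (an illustration only, not part of the formal argument) encodes $e_1,e_2$ as coordinate pairs; since $I^{(1)}=0$ here, every reflection $s_v^{(1)}$ is the identity, so $\sigma_1$ merely swaps the two entries of a pair:

```python
from itertools import product

# Delta^{(1)} = {±e1, ±e2} for the odd case A_1^2, where x = 0.
E = [(1, 0), (-1, 0), (0, 1), (0, -1)]
pairs = list(product(E, repeat=2))

def neighbors(p):
    (v1, v2) = p
    yield (v2, v1)                                # sigma_1 (its own inverse here)
    for s1, s2 in product((1, -1), repeat=2):     # sign changes from {±1}^2
        yield (tuple(s1 * c for c in v1), tuple(s2 * c for c in v2))

# Count orbits of Br_2 ⋉ {±1}^2 on the 16 pairs by depth-first search.
seen, orbits = set(), 0
for p in pairs:
    if p in seen:
        continue
    orbits += 1
    stack = [p]
    while stack:
        q = stack.pop()
        if q in seen:
            continue
        seen.add(q)
        stack.extend(neighbors(q))

print(orbits)  # -> 3
```

The 16 pairs in $(\Delta^{(1)})^2$ fall into the 3 orbits represented by $(e_1,e_1)$, $(e_1,e_2)$ and $(e_2,e_2)$, in accordance with part (a) of Theorem \ref{t7.1}.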
The cases with $|x|\geq 2$: Theorem \ref{t6.8} (c)+(d) and Theorem \ref{t6.10} (c)+(d) show \begin{eqnarray*} \Gamma^{(k)}\cong \left\{\begin{array}{ll} G^{fCox,2}&\textup{ with generators }s_{e_1}^{(0)}, s_{e_2}^{(0)}\textup{ if }k=0,\\ G^{free,2}&\textup{ with generators }s_{e_1}^{(1)}, s_{e_2}^{(1)}\textup{ if }k=1. \end{array}\right. \end{eqnarray*} The Examples \ref{t3.23} (iv) apply and give equality in \eqref{3.3} and \eqref{3.4}. The cases with $x=\pm 1$: We can restrict to the case $x=-1$. The even case is a simple case of Example \ref{t3.23} (v) (in the Remarks \ref{t7.2} we will offer an elementary proof for the even case). It remains to show equality in \eqref{3.4} in the odd case $(k,x)=(1,-1)$. Consider $\uuuu{v}\in (\Delta^{(1)})^2$ with $s_{v_1}^{(1)}s_{v_2}^{(1)}=M$. Let $b:=I^{(1)}(v_1,v_2)\in\Z$. If $b=0$ then $v_2=\pm v_1$ and $M=(s_{v_1}^{(1)})^2$ would have an eigenvalue $1$, a contradiction. Therefore $b\neq 0$ and $\Z v_1+\Z v_2$ has rank 2. Then \begin{eqnarray*} M(\uuuu{v})&=& s_{v_1}^{(1)}s_{v_2}^{(1)}(\uuuu{v}) = s_{v_1}^{(1)}(v_1+bv_2,v_2)\\ &=&(v_1+bv_2-b^2v_1,v_2-bv_1) =\uuuu{v}\begin{pmatrix}1-b^2&-b\\b&1\end{pmatrix}, \end{eqnarray*} $1=\tr M=(1-b^2)+1$, so $b=\pm 1$. By possibly changing the sign of $v_2$, we can suppose $b=-1=x$. Then \begin{eqnarray*} I^{(1)}(\uuuu{v}^t,\uuuu{v}) =\begin{pmatrix}0&-1\\1&0\end{pmatrix}. \end{eqnarray*} Therefore $\uuuu{v}$ is a $\Z$-basis of $H_\Z$, and the automorphism $g\in\Aut(H_\Z)$ with $(g(e_1),g(e_2))=(v_1,v_2)$ is in $O^{(1)}$. By Lemma \ref{t3.22} (a) \begin{eqnarray*} gMg^{-1}=g\circ((\pi_2\circ \pi_2^{(1)})(\uuuu{e}))\circ g^{-1} =(\pi_2\circ\pi_2^{(1)})(\uuuu{v})= s_{v_1}^{(1)}s_{v_2}^{(1)}= M, \end{eqnarray*} so $gMg^{-1}=M$, so $g\in G_\Z^{(1)}=G_\Z^M\cap O^{(1)}$. Theorem \ref{t5.5} applies and gives the equalities marked $\stackrel{(*)}{=}$, \begin{eqnarray*} G_\Z^{(1)}\stackrel{(*)}{=}G_\Z\stackrel{(*)}{=} \{\pm (M^{root})^l\,|\, l\in\Z\} \stackrel{(*)}{=}Z(\Br_2\ltimes \{\pm 1\}^2).
\end{eqnarray*} Therefore $\uuuu{v}\in\BB^{dist}$. This shows equality in \eqref{3.4}. (b) Because of $\sigma_1\begin{pmatrix}1&x\\0&1\end{pmatrix} =\begin{pmatrix}1&-x\\0&1\end{pmatrix}$, the stabilizer $(\Br_2)_{S/\{\pm 1\}^2}$ is the whole group $\Br_2=\langle \sigma_1\rangle$. If $x=0$, \begin{eqnarray*} (e_1,e_2)\stackrel{\sigma_1}{\mapsto} (e_2,e_1) \stackrel{\sigma_1}{\mapsto}(e_1,e_2), \quad\textup{so }(\Br_2)_{\uuuu{e}/\{\pm 1\}^2} =\langle \sigma_1^2\rangle. \end{eqnarray*} If $x=-1$, \begin{eqnarray*} (e_1,e_2)\stackrel{\sigma_1}{\mapsto} (e_1+e_2,e_1) \stackrel{\sigma_1}{\mapsto}(-e_2,e_1+e_2) \stackrel{\sigma_1}{\mapsto}(e_1,-e_2),\\ \textup{so }(\Br_2)_{\uuuu{e}/\{\pm 1\}^2} =\langle \sigma_1^3\rangle. \end{eqnarray*} If $|x|\geq 2$, Theorem \ref{t3.2} (a) or (b) and $\Gamma^{(1)}\cong G^{free,2}$ or $\Gamma^{(0)}\cong G^{fCox,2}$ show $(\Br_2)_{\uuuu{e}/\{\pm 1\}^2}=\{\id\}$.\hfill$\Box$ \begin{remarks}\label{t7.2} (i) A direct elementary proof of equality in \eqref{3.3} for the even case $A_2$, so $(k,x)=(0,-1)$, is instructive. Recall from Theorem \ref{t6.8} (b) that $\Delta^{(0)}=\{\pm e_1,\pm e_2,\pm (e_1+e_2)\}$. The map $\pi_2\circ\pi_2^{(0)}:(\Delta^{(0)})^2\to \Gamma^{(0)}$ has the three values $-M$, $M^2$ and $\id$ and the three fibers \begin{eqnarray*} (\pi_2\circ\pi_2^{(0)})^{-1}(-M) &=& \{(\pm e_1,\pm e_2),(\pm (e_1+e_2),\pm e_1), (\pm e_2,\pm (e_1+e_2))\}\\ &=&\BB^{dist},\\ (\pi_2\circ\pi_2^{(0)})^{-1}(M^2) &=& \{(\pm e_2,\pm e_1),(\pm (e_1+e_2),\pm e_2), (\pm e_1,\pm (e_1+e_2))\}\\ &=& \Br_2\ltimes\{\pm 1\}^2(e_2,e_1),\\ (\pi_2\circ\pi_2^{(0)})^{-1}(\id) &=& \{(\pm e_1,\pm e_1),(\pm e_2,\pm e_2), (\pm (e_1+e_2),\pm (e_1+e_2))\}. \end{eqnarray*} This gives equality in \eqref{3.3} in the case $(k,x)=(0,-1)$. (ii) Also in the cases $(k=0,x\leq -2)$ a direct elementary proof of equality in \eqref{3.3} is instructive. Equality in \eqref{3.3} for $(k,x)=(0,-2)$ and Theorem \ref{t6.8} (d) (iv) imply equality in \eqref{3.3} for $(k=0,x\leq -3)$.
Therefore we restrict to the case $(k,x)=(0,-2)$. Recall \begin{eqnarray*} \Rad I^{(0)}=\Z f_1\quad\textup{with}\quad f_1=e_1+e_2. \end{eqnarray*} By Theorem \ref{t6.8} (c) \begin{eqnarray*} \Delta^{(0)}=(e_1+\Z f_1)\ \dot\cup\ (-e_1+\Z f_1) =(e_1+\Z f_1)\ \dot\cup\ (e_2+\Z f_1). \end{eqnarray*} One easily sees for $b_1,b_2\in\Z$ \begin{eqnarray*} s_{e_1+b_1f_1}^{(0)}s_{e_2+b_2f_1}^{(0)}=-M &\iff& b_1+b_2=0, \end{eqnarray*} thus \begin{eqnarray*} \{\uuuu{v}\in (\Delta^{(0)})^2\,|\, s_{v_1}^{(0)}s_{v_2}^{(0)}=-M\} =\{(\pm (e_1+bf_1),\pm (e_2-bf_1))\,|\, b\in\Z\}. \end{eqnarray*} This set is a single $\Br_2\ltimes \{\pm 1\}^2$ orbit because of \begin{eqnarray*} \delta_2\sigma_1(e_1+bf_1,e_2-bf_1)=(e_1+(b+1)f_1,e_2-(b+1)f_1). \hspace*{1cm}\Box \end{eqnarray*} \end{remarks} \section{Distinguished bases in the even rank 3 cases} \label{s7.2} In the even cases with $n=3$ we have complete results on the question when the inclusion in \eqref{3.3} is an equality. It is an equality in all cases except the case $\HH_{1,2}$. \begin{theorem}\label{t7.3} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank 3 with a triangular basis $\uuuu{e}$ with matrix $S=S(\uuuu{x})=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_3(\Z)$. (a) Suppose $S\notin (\Br_3\ltimes\{\pm 1\}^3)(S(\HH_{1,2}))$. Then the inclusion in \eqref{3.3} is an equality. (b) Suppose $S=S(\HH_{1,2})=S(-2,2,-2)$. Recall the basis $(f_1,f_2,f_3)=\uuuu{e} \begin{pmatrix}1&0&1\\1&1&1\\0&1&1\end{pmatrix}$ of $H_\Z$ with $\Rad I^{(0)}=\Z f_1\oplus \Z f_2$ and $\Rad I^{(1)}=\Z f_3$. The set $\{\uuuu{v}\in (\Delta^{(0)})^3\,|\, (\pi_3\circ\pi_3^{(0)})(\uuuu{v})=-M\}$ splits into countably many orbits. The following list gives one representative for each orbit, \begin{eqnarray*} (f_3-g_1,-f_3+g_1+g_2,f_3-g_2)\quad\textup{with}\quad \begin{pmatrix}g_1\\g_2\end{pmatrix} =\begin{pmatrix}0&c_2\\c_1&c_3\end{pmatrix} \begin{pmatrix}f_1\\f_2\end{pmatrix},\\ c_1\in\N\textup{ odd},\ c_2\in\Z\textup{ odd}, \ c_3\in\{0,1,...,|c_2|-1\}.
\end{eqnarray*} The sublattice $\langle f_3-g_1,-f_3+g_1+g_2,f_3-g_2\rangle =\langle f_3,g_1,g_2\rangle\subset H_\Z$ has finite index $c_1\cdot |c_2|$ in $H_\Z$. It is $H_\Z$ in the following two cases: \begin{eqnarray*} \uuuu{e}=(f_3-f_2,-f_3+f_1+f_2,f_3-f_1),\\ \textup{so}\quad (g_1,g_2,c_1,c_2,c_3)=(f_2,f_1,1,1,0),\\ (f_3+f_2,-f_3+f_1-f_2,f_3-f_1),\\ \textup{so}\quad (g_1,g_2,c_1,c_2,c_3)=(-f_2,f_1,1,-1,0) \end{eqnarray*} (see also Example \ref{t3.23} (ii) for the second case). \end{theorem} {\bf Proof:} (a) We can replace $\uuuu{e}$ by an arbitrary element $\www{\uuuu{e}}\in\BB^{dist}$. By Theorem \ref{t4.6} the following cases exhaust all $\Br_3\ltimes\{\pm 1\}^3$ orbits except that of $\HH_{1,2}$: \begin{list}{}{} \item[(A)] $(H_\Z,L,\uuuu{e})$ is irreducible with $\uuuu{x}\in\Z^3_{\leq 0}$. \item[(B)] $r(\uuuu{x})\leq 0$ and $\uuuu{x}\neq (0,0,0)$. \item[(C)] $\uuuu{x}=(x_1,0,0)$ with $x_1\in\Z_{\leq 0}$, so $(H_\Z,L,\uuuu{e})$ is reducible (this includes the case $A_1^3$). \item[(D)] $\uuuu{x}=(-l,2,-l)$ with $l\geq 3$. \end{list} The cases (A): $\Gamma^{(0)}$ is a Coxeter group by Theorem \ref{t3.7} (a). Theorem \ref{t3.7} (b) applies. The cases (B): By Theorem \ref{t6.11} (g) $\Gamma^{(0)}$ is a free Coxeter group with generators $s_{e_1}^{(0)},s_{e_2}^{(0)},s_{e_3}^{(0)}$. Theorem \ref{t3.7} (b) or Example \ref{t3.23} (iv) can be used. The cases (C): Consider a triple $\uuuu{v}\in (\Delta^{(0)})^3$ with $s_{v_1}^{(0)}s_{v_2}^{(0)}s_{v_3}^{(0)}=-M$. The set $\Delta^{(0)}$ splits into the subsets $\Delta^{(0)}\cap (\Z e_1+\Z e_2)$ and $\{\pm e_3\}$. Compare $-M|_{\Z e_3}=-\id|_{\Z e_3}$ with $s_{e_3}^{(0)}|_{\Z e_3}=-\id|_{\Z e_3}$ and $s_a^{(0)}|_{\Z e_3}=\id|_{\Z e_3}$ for $a\in \Delta^{(0)}\cap (\Z e_1+\Z e_2)$. That all three $v_i$ lie in $\{\pm e_3\}$ is impossible because $(s_{e_3}^{(0)})^3\neq -M$. Therefore there are $i,j,k$ with $\{i,j,k\}=\{1,2,3\}$, $i<j$, $v_i,v_j\in \Delta^{(0)} \cap(\Z e_1+\Z e_2)$ and $v_k\in\{\pm e_3\}$.
The reflection $s_{v_k}^{(0)}$ acts trivially on $\Z e_1+\Z e_2$ and commutes with $s_{v_i}^{(0)}$ and $s_{v_j}^{(0)}$. Therefore \begin{eqnarray*} s_{v_i}^{(0)}s_{v_j}^{(0)}s_{v_k}^{(0)} =s_{v_1}^{(0)}s_{v_2}^{(0)}s_{v_3}^{(0)} =(\pi_3\circ\pi_3^{(0)})(\uuuu{v})=-M= s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_3}^{(0)},\\ \textup{so}\quad s_{v_i}^{(0)}s_{v_j}^{(0)} =s_{e_1}^{(0)}s_{e_2}^{(0)}. \end{eqnarray*} The reflections $s_{v_i}^{(0)}$, $s_{v_j}^{(0)}$, $s_{e_1}^{(0)}$ and $s_{e_2}^{(0)}$ act trivially on $\Z e_3$. The inclusion in \eqref{3.3} is an equality because of Theorem \ref{t7.1} (a) for the rank 2 cases. The cases (D): The proof of these cases is prepared by Lemma \ref{t7.4} and Lemma \ref{t7.5} and comes after the proof of Lemma \ref{t7.5}. (b) The proof of part (b) is prepared by Lemma \ref{t7.6} and comes after the proof of Lemma \ref{t7.6}. \hfill ($\Box$) \bigskip The following lemma is related to $t^+_\lambda$ in Lemma \ref{t6.17}. Recall also $j^{(k)}:H_\Z\to H_\Z^\sharp$, $a\mapsto I^{(k)}(a,.)$, in Definition \ref{t6.1}. \begin{lemma}\label{t7.4} Let $(H_\Z,L)$ be a unimodular bilinear lattice of rank $n\in\N$. Fix $k\in\{0,1\}$. Suppose $\Rad I^{(k)}\neq\{0\}$ and choose an element $f\in \Rad I^{(k)}-\{0\}$. Denote \index{$t_\lambda$}\index{$\Hom_{0,f}(H_\Z,\Z)$} \begin{eqnarray*} \Hom_{0,f}(H_\Z,\Z):= \{\lambda:H_\Z\to\Z\,|\, \lambda \textup{ is }\Z\textup{-linear}, \lambda(f)=0\},\\ t_\lambda:H_\Z\to H_\Z\textup{ with } t_\lambda(a)=a+\lambda(a)f\textup{ for } \lambda\in \Hom_{0,f}(H_\Z,\Z). \end{eqnarray*} Then $t_\lambda\in O^{(k),Rad}_u$. The map \begin{eqnarray*} \Hom_{0,f}(H_\Z,\Z)\to O^{(k),Rad}_u,\quad\lambda\mapsto t_\lambda, \end{eqnarray*} is an injective group homomorphism.
For $b\in R^{(k)}$ (with $R^{(1)}=H_\Z$, see \ref{t3.9} (i)) and $a\in\Z$ \begin{eqnarray*} s_{b+af}^{(k)} &=& s_b^{(k)}\circ t_{-aj^{(k)}(b)} =t_{(-1)^kaj^{(k)}(b)}\circ s_b^{(k)},\\ s_b^{(k)}\circ t_\lambda &=& t_{\lambda-(-1)^k\lambda(b)j^{(k)}(b)}\circ s_b^{(k)}. \end{eqnarray*} \end{lemma} {\bf Proof:} The proof is straightforward. We skip the details. \hfill$\Box$ \bigskip The following lemma studies the Hurwitz action of $\Br_3$ on triples of reflections in $G^{fCox,2}$. It is related to Theorem \ref{t3.2} (b). \begin{lemma}\label{t7.5} As in Definition \ref{t3.1}, $G^{fCox,2}$ denotes the free Coxeter group with two generators $z_1$ and $z_2$, so the defining relations are $z_1^2=z_2^2=1$. (a) Its set of reflections is \begin{eqnarray*} \Delta(G^{fCox,2}) =\bigcup_{i=1}^2 \{wz_iw^{-1}\,|\, w\in G^{fCox,2}\} =\{(z_1z_2)^mz_1\,|\, m\in\Z\}. \end{eqnarray*} The complement of this set is \begin{eqnarray*} G^{fCox,2}-\Delta(G^{fCox,2}) =\{(z_1z_2)^m\,|\, m\in\Z\}. \end{eqnarray*} $\Delta(G^{fCox,2})$ respectively its complement consists of the elements which can be written as words of odd respectively even length in $z_1$ and $z_2$. (b) The set \begin{eqnarray*} \{(w_1,w_2,w_3)\in (\Delta(G^{fCox,2}))^3\,|\, w_1w_2w_3=z_1z_2z_1\} \end{eqnarray*} is the disjoint union of the following $\Br_3$ orbits: \begin{eqnarray*} \dot\bigcup_{m\in \Z_{\geq 0}} \Br_3\bigl((z_1z_2z_1, (z_1z_2)^{1-m}z_1,(z_1z_2)^{1-m}z_1)\bigr). \end{eqnarray*} \end{lemma} {\bf Proof:} (a) Clear. (b) The map \begin{eqnarray*} \{(w_1,w_2,w_3)\in(\Delta(G^{fCox,2}))^3\,|\, w_1w_2w_3=z_1z_2z_1\}&\to& M_{2\times 1}(\Z),\\ (w_1,w_2,w_3)&\mapsto& (m_1,m_2)^t\\ \textup{ with } w_1w_2=(z_1z_2)^{m_1},\ w_2w_3=(z_1z_2)^{m_2}, \end{eqnarray*} is a bijection because \begin{eqnarray*} z_1z_2z_1&=&(w_1w_2)w_2(w_2w_3),\quad\textup{so}\\ w_2&=&(w_1w_2)^{-1}z_1z_2z_1(w_2w_3)^{-1},\\ w_1&=&(w_1w_2)w_2,\\ w_3&=&w_2(w_2w_3), \end{eqnarray*} so a given column vector $(m_1,m_2)^t$ has a unique preimage.
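The bijection in this proof can be sanity-checked by direct computation. The following sketch (an illustration only, not part of the formal argument) encodes elements of $G^{fCox,2}$ as freely reduced words in the letters $1=z_1$ and $2=z_2$ (adjacent equal letters cancel, since $z_i^2=1$) and reconstructs the triple $(w_1,w_2,w_3)$ from a given column vector $(m_1,m_2)^t$:

```python
def freely_reduce(word):
    # Cancel adjacent equal letters (z_i^2 = 1); a stack handles cascades.
    out = []
    for x in word:
        if out and out[-1] == x:
            out.pop()
        else:
            out.append(x)
    return out

def mul(*words):
    prod = []
    for w in words:
        prod = freely_reduce(prod + list(w))
    return prod

def inv(word):
    return list(reversed(word))  # the letters are involutions

def pow_z1z2(m):
    # (z1 z2)^m for m in Z
    return [1, 2] * m if m >= 0 else [2, 1] * (-m)

TARGET = [1, 2, 1]  # z1 z2 z1

def triple(m1, m2):
    """The unique (w1,w2,w3) with w1 w2 = (z1 z2)^m1 and w2 w3 = (z1 z2)^m2."""
    w12, w23 = pow_z1z2(m1), pow_z1z2(m2)
    w2 = mul(inv(w12), TARGET, inv(w23))
    w1 = mul(w12, w2)
    w3 = mul(w2, w23)
    return w1, w2, w3

for m1 in range(-3, 4):
    for m2 in range(-3, 4):
        w1, w2, w3 = triple(m1, m2)
        assert mul(w1, w2, w3) == TARGET      # the product is z1 z2 z1
        for w in (w1, w2, w3):
            assert len(w) % 2 == 1            # odd length: each factor is a reflection
```

For $(m_1,m_2)=(m,0)$ one recovers the representative $(z_1z_2z_1,(z_1z_2)^{1-m}z_1,(z_1z_2)^{1-m}z_1)$ from part (b).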
The Hurwitz action of $\Br_3$ on the set on the left hand side of the bijection above translates as follows to an action on $M_{2\times 1}(\Z)$. \begin{eqnarray*} \sigma_1(w_1,w_2,w_3)&=& (w_1w_2w_1,w_1,w_3),\\ w_1w_2w_1\cdot w_1&=&w_1w_2=(z_1z_2)^{m_1}, \\ w_1w_3&=&(w_1w_2)(w_2w_3)=(z_1z_2)^{m_1+m_2},\\ \sigma_1\begin{pmatrix}m_1\\m_2\end{pmatrix} &=&\begin{pmatrix}m_1\\m_1+m_2\end{pmatrix} =\begin{pmatrix}1&0\\1&1\end{pmatrix} \begin{pmatrix}m_1\\m_2\end{pmatrix},\\ \sigma_2(w_1,w_2,w_3)&=& (w_1,w_2w_3w_2,w_2),\\ w_1\cdot w_2w_3w_2=(w_1w_2)(w_2w_3)^{-1} &=&(z_1z_2)^{m_1-m_2},\\ w_2w_3w_2\cdot w_2&=&w_2w_3=(z_1z_2)^{m_2},\\ \sigma_2\begin{pmatrix}m_1\\m_2\end{pmatrix} &=&\begin{pmatrix}m_1-m_2\\m_2\end{pmatrix} =\begin{pmatrix}1&-1\\0&1\end{pmatrix} \begin{pmatrix}m_1\\m_2\end{pmatrix}. \end{eqnarray*} So $\Br_3$ acts as multiplication with matrices in $SL_2(\Z)$ from the left on $M_{2\times 1}(\Z)$. Each orbit has a unique element of the shape $\begin{pmatrix}m\\0\end{pmatrix}$ with $m\in\Z_{\geq 0}$. This element corresponds to \begin{eqnarray*} (z_1z_2z_1,(z_1z_2)^{1-m}z_1,(z_1z_2)^{1-m}z_1). \hspace*{1cm}\Box \end{eqnarray*} \bigskip {\bf Proof} of Theorem \ref{t7.3} (a) in the cases (D), $\uuuu{x}=(-l,2,-l)$ with $l\geq 3$: Recall from Theorem \ref{t6.11} (f) \begin{eqnarray*} \Rad I^{(0)}&=&\Z f_1\quad\textup{with}\quad f_1=e_1-e_3,\\ \Gamma^{(0)}_s&\cong& G^{fCox,2}\quad\textup{with generators } z_1=\oooo{s_{e_1}^{(0)}}=\oooo{s_{e_3}^{(0)}}, \ z_2=\oooo{s_{e_2}^{(0)}}. \end{eqnarray*} Suppose $\uuuu{v}\in (\Delta^{(0)})^3$ with $s_{v_1}^{(0)}s_{v_2}^{(0)}s_{v_3}^{(0)}=-M$. We want to show $\uuuu{v}\in \BB^{dist}$ or equivalently $(s_{v_1}^{(0)},s_{v_2}^{(0)},s_{v_3}^{(0)})\in \RR^{(0),dist}$. First we look at the images in $\Gamma^{(0)}_s$: $\oooo{s_{v_1}^{(0)}} \oooo{s_{v_2}^{(0)}} \oooo{s_{v_3}^{(0)}}=z_1z_2z_1$. 
Because of Lemma \ref{t7.5} (b), we can make a suitable braid group action and then suppose \begin{eqnarray*} (\oooo{s_{v_1}^{(0)}} ,\oooo{s_{v_2}^{(0)}}, \oooo{s_{v_3}^{(0)}})=(z_1z_2z_1,r,r)\textup{ with } r=(z_1z_2)^{1-m}z_1\textup{ for some }m\in\Z_{\geq 0}. \end{eqnarray*} Write $\www{e_2}:=s_{e_1}^{(0)}(e_2)=e_2+le_1$ and observe \begin{eqnarray*} z_1z_2z_1=\oooo{s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_3}^{(0)}} =\oooo{s_{\www{e_2}}^{(0)}}. \end{eqnarray*} After possibly changing the signs of $v_1$ and $v_3$, $\oooo{s_{v_1}^{(0)}}=\oooo{s_{\www{e_2}}^{(0)}}$ and $\oooo{s_{v_2}^{(0)}}=r=\oooo{s_{v_3}^{(0)}}$ imply \begin{eqnarray*} v_1=\www{e_2}+a_1f_1\quad\textup{and}\quad v_3=v_2+a_2f_1\quad\textup{for some }a_1,a_2\in\Z. \end{eqnarray*} With Lemma \ref{t7.4} and $f_1=(f\textup{ in Lemma \ref{t7.4}})$ we calculate \begin{eqnarray*} -M&=& s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_3}^{(0)} = s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_1-f_1}^{(0)}\\ &=& s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_1}^{(0)} t_{j^{(0)}(e_1)} =s_{\www{e_2}}^{(0)}t_{j^{(0)}(e_1)},\\ -M&=& s_{v_1}^{(0)}s_{v_2}^{(0)}s_{v_3}^{(0)} = s_{\www{e_2}+a_1f_1}^{(0)}s_{v_2}^{(0)}s_{v_2+a_2f_1}^{(0)}\\ &=& s_{\www{e_2}}^{(0)}t_{-a_1j^{(0)}(\www{e_2})} s_{v_2}^{(0)}s_{v_2}^{(0)}t_{-a_2j^{(0)}(v_2)}\\ &=& s_{\www{e_2}}^{(0)}t_{-a_1j^{(0)}(\www{e_2})-a_2j^{(0)}(v_2)}, \end{eqnarray*} so \begin{eqnarray*} j^{(0)}(e_1)=-a_1j^{(0)}(\www{e_2})-a_2j^{(0)}(v_2). \end{eqnarray*} Write \begin{eqnarray*} \oooo{v_2}^{(0)}=b_1\oooo{e_1}^{(0)}+b_2\oooo{e_2}^{(0)} \quad\textup{with }b_1,b_2\in\Z. \end{eqnarray*} By Theorem \ref{t6.14} (f) the tuple $(\oooo{H_{\Z}}^{(0)},\oooo{I}^{(0)}, (\oooo{e_1}^{(0)}, \oooo{e_2}^{(0)}))$ is isomorphic to the corresponding tuple from the $2\times 2$ matrix $S(-l)= \begin{pmatrix}1&-l\\0&1\end{pmatrix}$. The set of roots of this tuple is called $R^{(0)}(S(-l))$. It contains $\oooo{v_2}^{(0)}$, $\oooo{e_1}^{(0)}$ and $\oooo{e_2}^{(0)}$.
By Theorem \ref{t6.8} (d)(i) the map \begin{eqnarray*} R^{(0)}(S(-l))&\to& \{\textup{units in }\Z[\kappa_1] \textup{ with norm }1\} =\{\pm \kappa_1^m\,|\, m\in\Z\}\\ y_1\oooo{e_1}^{(0)}+y_2\oooo{e_2}^{(0)} &\mapsto& y_1-\kappa_1y_2, \end{eqnarray*} is a bijection, where $\kappa_1=\frac{l}{2}+\frac{1}{2}\sqrt{l^2-4}$. The norm of $b_1-b_2\kappa_1$ is \begin{eqnarray*} 1=b_1^2-lb_1b_2+b_2^2. \end{eqnarray*} Now \begin{eqnarray*} (2,-l)&=& j^{(0)}(e_1)(e_1,e_2) =(-a_1j^{(0)}(\www{e_2})-a_2j^{(0)}(v_2))(e_1,e_2)\\ &=& -a_1((-l,2)+l(2,-l))-a_2(b_1(2,-l)+b_2(-l,2))\\ &=& (-a_1l-a_2b_1)(2,-l)+(-a_1-a_2b_2)(-l,2), \end{eqnarray*} so \begin{eqnarray*} a_1=-a_2b_2,\quad 1=-a_1l-a_2b_1=a_2(b_2l-b_1),\\ a_2=\pm 1\quad\textup{and}\quad b_1=-a_2+b_2l. \end{eqnarray*} Calculate \begin{eqnarray*} 0&=&-1+b_1^2-lb_1b_2+b_2^2=-1+(-a_2+b_2l)(-a_2)+b_2^2\\ &=& b_2(b_2-a_2l). \end{eqnarray*} We obtain the four solutions \begin{eqnarray*} (a_1,a_2,b_1,b_2)&\in&\{(0,1,-1,0),(0,-1,1,0),\\ &&(-l,1,l^2-1,l),(-l,-1,1-l^2,-l)\}. \end{eqnarray*} In the case of the third solution \begin{eqnarray*} \oooo{v_2}^{(0)}&=&(l^2-1)\oooo{e_1}^{(0)}+l\oooo{e_2}^{(0)} =\oooo{s_{e_1}^{(0)}s_{e_2}^{(0)}(e_1)},\\ r&=&\oooo{s_{v_2}^{(0)}} =\oooo{s_{s_{e_1}^{(0)}s_{e_2}^{(0)}(e_1)}^{(0)}} =\oooo{s_{e_1}^{(0)}s_{e_2}^{(0)}s_{e_1}^{(0)} s_{e_2}^{(0)}s_{e_1}^{(0)}}\\ &=& (z_1z_2)^2z_1=(z_1z_2)^{1-m}z_1\quad\textup{with }m=-1. \end{eqnarray*} As $m=-1$ is not in the set $\Z_{\geq 0}$, we can discard the third solution. In fact, $(z_1z_2z_1,(z_1z_2)^2z_1,(z_1z_2)^2z_1)$ is in the $\Br_3$ orbit of $(z_1z_2z_1,z_1,z_1)$ because $\begin{pmatrix}-1\\0\end{pmatrix}$ is in the $SL_2(\Z)$ orbit of $\begin{pmatrix}1\\0\end{pmatrix}$. We can discard also the fourth solution because its vector $\oooo{v_2}^{(0)}$ differs from the vector $\oooo{v_2}^{(0)}$ in the third solution only by the sign.
Also the vector $\oooo{v_2}^{(0)}$ in the first solution differs from the vector $\oooo{v_2}^{(0)}$ in the second solution only by the sign. The second solution gives $\oooo{v_2}^{(0)}=\oooo{e_1}^{(0)}$ and thus for some $b_3\in\Z$ \begin{eqnarray*} \uuuu{v}=(\www{e_2},e_1+b_3f_1,e_1+b_3f_1-f_1) =(\www{e_2},e_1+b_3f_1,e_3+b_3f_1). \end{eqnarray*} The observation \begin{eqnarray*} \delta_2\sigma_2(\uuuu{v}) &=& \delta_2(\www{e_2},e_3+b_3f_1-2(e_1+b_3f_1),e_1+b_3f_1)\\ &=& (\www{e_2},e_1+(b_3+1)f_1,e_3+(b_3+1)f_1) \end{eqnarray*} shows $\uuuu{v}\in \Br_3\ltimes\{\pm 1\}^3(\www{e_2},e_1,e_3)$. This orbit is $\BB^{dist}$ because $(\www{e_2},e_1,e_3)=\sigma_1(\uuuu{e})$. \hfill$\Box$ \bigskip Lemma \ref{t7.6} states some facts which arise in the proof of part (b) of Theorem \ref{t7.3} and which are worth formulating explicitly. \begin{lemma}\label{t7.6} Let $(H_\Z,L,\uuuu{e})$ be the unimodular bilinear lattice of rank 3 with triangular basis $\uuuu{e}$ with matrix $S=S(\HH_{1,2})=S(-2,2,-2)=L(\uuuu{e}^t,\uuuu{e})^t$. Recall $\Rad I^{(0)}=\Z f_1+\Z f_2$ and $R^{(0)}=\pm f_3+\Rad I^{(0)}$ (Theorem \ref{t6.14} (f)). (a) For $g_1,g_2,g_3\in\Rad I^{(0)}$ \begin{eqnarray*} s_{f_3-g_1}^{(0)}s_{f_3-g_2}^{(0)}s_{f_3-g_3}^{(0)}=-M \iff g_2=g_1+g_3. \end{eqnarray*} (b) The map \begin{eqnarray*} \Phi:M_{2\times 2}(\Z)&\to& \{\uuuu{v}\in (R^{(0)})^3\,|\, (\pi_3\circ\pi_3^{(0)})(\uuuu{v}) =-M\}/\{\pm 1\}^3,\\ A &\mapsto& (f_3-g_1,f_3-g_1-g_3,f_3-g_3)/\{\pm 1\}^3\\ &&\textup{with } \begin{pmatrix}g_1\\g_3\end{pmatrix}= A \begin{pmatrix}f_1\\f_2\end{pmatrix}, \end{eqnarray*} is a bijection. The action of $\Br_3$ on the right hand side translates to the following action on the left hand side, \begin{eqnarray*} \sigma_1(A) = \begin{pmatrix}1&-1\\0&1\end{pmatrix} A,\quad \sigma_2(A) = \begin{pmatrix}1&0\\1&1\end{pmatrix} A.
\end{eqnarray*} (c) $\uuuu{v}\in (R^{(0)})^3$ with $(\pi_3\circ\pi_3^{(0)})(\uuuu{v})=-M$ satisfies either (i) or (ii): \begin{list}{}{} \item[(i)] There exists a permutation $\sigma\in S_3$ with $v_i\in \Gamma^{(0)}\{e_{\sigma(i)}\}$ for $i\in\{1,2,3\}$. \item[(ii)] Either $v_1,v_2,v_3\in \Gamma^{(0)}\{f_3\}$ or there exists a permutation $\sigma\in S_3$ and an $l\in\{1,2,3\}$ with $v_{\sigma(1)}\in \Gamma^{(0)}\{f_3\}$ and $v_{\sigma(2)},v_{\sigma(3)}\in \Gamma^{(0)}\{e_l\}$. \end{list} (i) holds if and only if $\Phi^{-1}(\uuuu{v}/\{\pm 1\}^3)$ has an odd determinant. (d) Let $SL_2(\Z)$ act by multiplication from the left on $\{A\in M_{2\times 2}(\Z)\,|\,\det A\textup{ is odd}\}$. Each orbit has a unique representative of the shape \begin{eqnarray*} \begin{pmatrix}0&c_2\\c_1&c_3\end{pmatrix}\quad \textup{with}\quad c_1\in\N\textup{ odd}, c_2\in\Z\textup{ odd}, c_3\in\{0,1,...,|c_2|-1\}. \end{eqnarray*} \end{lemma} {\bf Proof:} (a) For $g_1,g_2,g_3\in \Rad I^{(0)}$ \begin{eqnarray*} s_{f_3-g_1}^{(0)}|_{\Rad I^{(0)}}=\id,\quad s_{f_3-g_1}^{(0)}(f_3+g_2)=-(f_3-g_2-2g_1),\\ s_{f_3-g_1}^{(0)}s_{f_3-g_2}^{(0)}s_{f_3-g_3}^{(0)}(f_3) =-f_3+2(g_1-g_2+g_3). \end{eqnarray*} Compare $-M|_{\Rad I^{(0)}}=\id$, $-M(f_3)=-f_3$. (b) $\Phi$ is a bijection because of $R^{(0)}=\pm f_3+\Rad I^{(0)}$ and part (a). The action of $\Br_3$ on the right hand side translates to the claimed action on the left hand side because of the following, \begin{eqnarray*} \delta_1\sigma_1(f_3-g_1,f_3-g_1-g_3,f_3-g_3) &=& (f_3-g_1+g_3,f_3-g_1,f_3-g_3),\\ \begin{pmatrix} g_1-g_3\\g_3\end{pmatrix} &=& \begin{pmatrix} 1&-1\\0&1\end{pmatrix} \begin{pmatrix} g_1\\g_3\end{pmatrix},\\ \delta_2\sigma_2(f_3-g_1,f_3-g_1-g_3,f_3-g_3) &=& (f_3-g_1,f_3-2g_1-g_3,f_3-g_1-g_3),\\ \begin{pmatrix} g_1\\g_1+g_3\end{pmatrix} &=& \begin{pmatrix} 1&0\\1&1\end{pmatrix} \begin{pmatrix} g_1\\g_3\end{pmatrix}. 
\end{eqnarray*} (c) Recall that by Theorem \ref{t6.14} (e) \begin{eqnarray*} \Gamma^{(0)}\{e_i\}=\pm e_i+2\Rad I^{(0)},\quad \Gamma^{(0)}\{f_3\}=\pm f_3+2\Rad I^{(0)},\\ (e_1,e_2,e_3)=(f_3-f_2,-f_3+f_1+f_2,f_3-f_1),\\ \Delta^{(0)}=\Gamma^{(0)}\{e_1\}\ \dot\cup\ \Gamma^{(0)}\{e_2\}\ \dot\cup\ \Gamma^{(0)}\{e_3\},\quad R^{(0)}=\Delta^{(0)}\ \dot\cup\ \Gamma^{(0)}\{f_3\}. \end{eqnarray*} Observe that $g_1,g_2,g_3\in \Rad I^{(0)}$ with $g_2=g_1+g_3$ satisfy either (i)' or (ii)', \begin{list}{}{} \item[(i)'] $g_1,g_2,g_3\notin 2\Rad I^{(0)}$, \item[(ii)'] There exists a permutation $\sigma\in S_3$ with $g_{\sigma(1)}\in 2\Rad I^{(0)}$ and $g_{\sigma(2)}-g_{\sigma(3)}\in 2\Rad I^{(0)}$. \end{list} $\uuuu{v}=(f_3-g_1,f_3-g_2,f_3-g_3)$ satisfies (i) if (i)' holds, and it satisfies (ii) if (ii)' holds. If $\begin{pmatrix}g_1\\g_2\end{pmatrix}=A \begin{pmatrix}f_1\\f_2\end{pmatrix}$ then (i)' holds if and only if $\det A$ is odd. (d) This is elementary. We skip the details. \hfill$\Box$ \bigskip {\bf Proof of Theorem \ref{t7.3} (b):} $\uuuu{v}\in (\Delta^{(0)})^3$ with $(\pi_3\circ\pi_3^{(0)})(\uuuu{v})=-M$ satisfies property (i) in Lemma \ref{t7.6} (c) because it does not satisfy property (ii) in Lemma \ref{t7.6} (c). The parts (b) and (d) of Lemma \ref{t7.6} show that the set $(\Delta^{(0)})^3\cap (\pi_3\circ\pi_3^{(0)})^{-1}(-M)$ consists of countably many orbits. The parts (b) and (d) of Lemma \ref{t7.6} also give the claimed representative in each orbit. The rest is obvious. \hfill$\Box$ \section{Distinguished bases in the odd rank 3 cases} \label{s7.3} Also in the odd cases with $n=3$ we have complete results on the question when the inclusion \eqref{3.4} is an equality. 
It is one if and only if $\uuuu{x}\in B_1\cup B_2$ where \index{$B_1,\ B_2,\ B_3,\ B_4,\ B_5$} \begin{eqnarray*} B_1&=& \{\uuuu{x}\in\Z^3-\{(0,0,0)\}\,| \\ &&\hspace*{1.5cm} ((G^{phi}\ltimes \www{G}^{sign})\rtimes\langle \gamma\rangle) (\uuuu{x})\cap r^{-1}(\Z_{\leq 0})\neq\emptyset\},\\ B_2&=& \{\uuuu{x}\in\Z^3-\{(0,0,0)\}\,|\, S(\uuuu{x})\textup{ is reducible, i.e. there are }i,j,k\\ && \hspace*{1cm}\textup{ with } \{i,j,k\}=\{1,2,3\}\textup{ and }x_i\neq 0=x_j=x_k\},\\ B_3&:=& \{(0,0,0)\},\\ B_4&:=& \{\uuuu{x}\in\Z^3\,|\, S(\uuuu{x})\in (\Br_3\ltimes\{\pm 1\}^3)\Bigl( \{S(A_3),S(\whh{A}_2),S(\HH_{1,2})\}\Bigr.\\ &&\Bigl.\hspace*{6cm}\cup \{S(-l,2,-l)\,|\, l\geq 3\}\Bigr)\},\\ B_5&:=& \Z^3-(B_1\cup B_2\cup B_3\cup B_4). \end{eqnarray*} $B_2$ is the set of $\uuuu{x}\neq (0,0,0)$ which give reducible cases. $B_1$ contains $r^{-1}(\Z_{\leq 0})-\{(0,0,0)\}$, but is bigger. $\uuuu{x}\in B_1$ if and only if the $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle \gamma\rangle$ orbit of $\uuuu{x}$ contains a triple $\www{\uuuu{x}}\in\Z^3$ as in Lemma \ref{t4.18} (a), so with $\www{\uuuu{x}}\in\Z^3_{\geq 3}$ and $2\www{x}_i\leq \www{x}_j\www{x}_k$ for $\{i,j,k\}=\{1,2,3\}$. The Examples \ref{t4.20} show that it is not so easy to describe $B_1$ more explicitly. In Theorem \ref{t7.7} we show $B_4\subset \Z^3-(B_1\cup B_2\cup B_3)$. For $\uuuu{x}\in B_3\cup B_4$ the inclusion in \eqref{3.4} is not an equality, but we can add the constraint $\sum_{i=1}^3\Z v_i= H_\Z$ to \eqref{3.4} and obtain an equality. For $\uuuu{x}\in B_5$ we do not know whether \eqref{3.4} with the additional constraint $\sum_{i=1}^3\Z v_i= H_\Z$ becomes an equality. \begin{theorem}\label{t7.7} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $3$ with a triangular basis $\uuuu{e}$ with matrix $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})\in T^{uni}_3(\Z)$ for some $\uuuu{x}\in \Z^3$. (a) $(H_\Z,L,\uuuu{e})$ is reducible if and only if $\uuuu{x}\in B_2\cup B_3$. Then $\Gamma^{(1)}_u=\{\id \}$. 
(b) The following conditions are equivalent: \begin{list}{}{} \item[(i)] $\uuuu{x}\in B_1$. \item[(ii)] $\Gamma^{(1)}\cong G^{free,3}$. \item[(iii)] $(H_\Z,L,\uuuu{e})$ is irreducible and $\Gamma^{(1)}_u=\{\id\}$. \end{list} (c) $\Z^3=\dot\bigcup_{i\in\{1,2,3,4,5\}} B_i$. (d) The inclusion in \eqref{3.4} is an equality $\iff \uuuu{x}\in B_1\cup B_2.$ (e) Consider $\uuuu{x}=(0,0,0)$. The set $\{\uuuu{v}\in (\Delta^{(1)})^3\,|\, (\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M\}$ is $(\Delta^{(1)})^3$. It consists of ten $\Br_3\ltimes\{\pm 1\}^3$ orbits, the orbit $\BB^{dist}$ of $\uuuu{e}=(e_1,e_2,e_3)$ and the orbits of the nine triples \begin{eqnarray*} (e_1,e_1,e_1),(e_1,e_1,e_2),(e_1,e_2,e_2),(e_2,e_2,e_2),\\ (e_1,e_1,e_3), (e_1,e_3,e_3),(e_3,e_3,e_3),(e_2,e_2,e_3),(e_2,e_3,e_3). \end{eqnarray*} (f) Consider $\uuuu{x}\in B_4\cup B_5$. Then $\Gamma^{(1)}_u\cong \Z^2$. The map \begin{eqnarray*} \Psi: \{\uuuu{v}\in (\Delta^{(1)})^3\,|\, (\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M\} &\to& \N\cup\{\infty\},\\ \uuuu{v}&\mapsto& \Bigl(\textup{index of }\sum_{i=1}^3\Z v_i \textup{ in }H_\Z\Bigr), \end{eqnarray*} has infinitely many values. The set $\{\uuuu{v}\in (\Delta^{(1)})^3\,|\, (\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M\}$ contains besides $\BB^{dist}$ infinitely many $\Br_3\ltimes\{\pm 1\}^3$ orbits. (g) For $\uuuu{x}\in B_3\cup B_4$ \begin{eqnarray*} \BB^{dist}= \{\uuuu{v}\in (\Delta^{(1)})^3\,|\, (\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M,\sum_{i=1}^3\Z v_i =H_\Z\}. \end{eqnarray*} \end{theorem} {\bf Proof:} (a) Compare Definition \ref{t2.10} in the case $n=3$. For $\Gamma^{(1)}_u=\{\id\}$ see Theorem \ref{t6.18} (b). (b) By the Remarks \ref{t4.17} the tuple $(H_\Z,\pm I^{(1)},\Gamma^{(1)},\Delta^{(1)})$ depends up to isomorphism only on the $(G^{phi}\ltimes \www{G}^{sign})\rtimes\langle \gamma\rangle$ orbit of $\uuuu{x}$. Lemma \ref{t4.18} gives representatives of each such orbit. Theorem \ref{t6.18} studies their groups $\Gamma^{(1)}$. Theorem \ref{t6.18} (b) treats $\uuuu{x}\in B_2\cup B_3$. 
Theorem \ref{t6.18} (g) treats $\uuuu{x}\in B_1$. Theorem \ref{t6.18} (c)--(f) treats $\uuuu{x}\in \Z^3-(B_1\cup B_2\cup B_3)$. One sees \begin{eqnarray*} \Gamma^{(1)}\cong G^{free,3}&\iff& \uuuu{x}\in B_1,\\ \Gamma^{(1)}_u=\{\id\}&\iff& \uuuu{x}\in B_1\cup B_2\cup B_3,\\ \Gamma^{(1)}_u\cong\Z^2&\iff& \uuuu{x}\in \Z^3 -(B_1\cup B_2 \cup B_3). \end{eqnarray*} (c) $B_1\cap B_2=\emptyset$, $B_2\cap B_4=\emptyset$ and $(0,0,0)\notin B_1\cup B_2\cup B_4$ are clear. $B_1\cap B_4=\emptyset$ follows from $\Gamma^{(1)}(\uuuu{x})\not\cong G^{free,3}$ for $\uuuu{x}\in B_4$. (d) The parts (e) and (f) will give $\Longrightarrow$. Here we show $\Longleftarrow$, first for $\uuuu{x}\in B_1$, then for $\uuuu{x}\in B_2$. Let $\uuuu{x}\in B_1$. Then by the Remarks \ref{t4.17} and Theorem \ref{t6.18} (g) $\Gamma^{(1)}$ is a free group with generators $s_{e_1}^{(1)}$, $s_{e_2}^{(1)}$ and $s_{e_3}^{(1)}$. Example \ref{t3.23} (iv) applies. Let $\uuuu{x}\in B_2$. Because of the actions of $\gamma$ and $\www{G}^{sign}$ (here $G^{sign}$ is sufficient) on $B_2$ we can suppose $\uuuu{x}=(x_1,0,0)$ with $x_1\in \Z_{<0}$. Then $e_3\in\Rad I^{(1)}$, $s_{e_3}^{(1)}=\id$, $\Gamma^{(1)}=\langle s_{e_1}^{(1)},s_{e_2}^{(1)}\rangle$ and by Theorem \ref{t6.21} (b) $\Delta^{(1)}=\Delta^{(1)}\cap (\Z e_1+\Z e_2)\ \cup\ \{\pm e_3\}$. The monodromy $M$ has the characteristic polynomial $(t-1)(t^2-(2-r(\uuuu{x}))t+1)=(t-1)(t^2-(2-x_1^2)t+1)$, so the eigenvalue $1$ of $M$ has multiplicity $1$. Consider $\uuuu{v}\in (\Delta^{(1)})^3 $ with $s_{v_1}^{(1)}\circ s_{v_2}^{(1)}\circ s_{v_3}^{(1)}=M$. Now $v_1,v_2,v_3\in\{\pm e_3\}$ is impossible because $M\neq \id$. That two of $v_1,v_2,v_3$ lie in $\{\pm e_3\}$ is also impossible, because then $M$ would have the eigenvalue $1$ with multiplicity 3. \medskip {\bf Claim:} All three $v_1,v_2,v_3\in\Delta^{(1)}\cap (\Z e_1+\Z e_2)$ is impossible. \medskip {\bf Proof of the Claim:} Suppose $v_1,v_2,v_3\in \Delta^{(1)}\cap (\Z e_1+\Z e_2)$. First we consider the cases with $x_1\leq -2$.
By Theorem \ref{t6.18} (b) and Theorem \ref{t6.10} (c)+(d) $\Gamma^{(1)}\cong G^{free,2}$ with generators $s_{e_1}^{(1)}$ and $s_{e_2}^{(1)}$. There is a unique group homomorphism \begin{eqnarray*} \Gamma^{(1)}\to\{\pm 1\}\quad \textup{with}\quad s_{e_1}^{(1)}\mapsto -1,\ s_{e_2}^{(1)}\mapsto -1. \end{eqnarray*} Each $s_{v_i}^{(1)}$ is conjugate to $s_{e_1}^{(1)}$ or $s_{e_2}^{(1)}$ and thus has image $-1$. Also their product $s_{v_1}^{(1)}\circ s_{v_2}^{(1)}\circ s_{v_3}^{(1)}$ has image $-1$. But $M=s_{e_1}^{(1)}\circ s_{e_2}^{(1)}$ has image $1$, a contradiction. Now consider the case $x_1=-1$. By Theorem \ref{t6.18} (b) and Theorem \ref{t6.10} (a)+(b) $\Gamma^{(1)}\cong SL_2(\Z)$ with $s_{e_1}^{(1)}\sim \begin{pmatrix}1&1\\0&1\end{pmatrix}$ and $s_{e_2}^{(1)}\sim \begin{pmatrix}1&0\\-1&1\end{pmatrix}$. It is well known that the group $SL_2(\Z)$ is isomorphic to the group with the presentation \begin{eqnarray*} \langle x_1,x_2\,|\, x_1x_2x_1=x_2x_1x_2,\ 1=(x_1x_2)^6\rangle \end{eqnarray*} where $x_1\mapsto \begin{pmatrix}1&1\\0&1\end{pmatrix}$ and $x_2\mapsto \begin{pmatrix}1&0\\-1&1\end{pmatrix}$. The differences of the lengths of the words in $x_1^{\pm 1}$ and $x_2^{\pm 1}$ which are connected by these relations are $3-3=0$ and $12-0=12$, so even. Therefore also in this situation there is a unique group homomorphism \begin{eqnarray*} \Gamma^{(1)}\to\{\pm 1\}\quad \textup{with}\quad s_{e_1}^{(1)}\mapsto -1,\ s_{e_2}^{(1)}\mapsto -1. \end{eqnarray*} The argument in the case $\Gamma^{(1)}\cong G^{free,2}$ goes through here, too. The Claim is proved. \hfill ($\Box$) \medskip Therefore a permutation $\sigma\in S_3$ with $v_{\sigma(1)},v_{\sigma(2)}\in\Delta^{(1)}\cap (\Z e_1+\Z e_2)$, $\sigma(1)<\sigma(2)$ and $v_{\sigma(3)}\in\{\pm e_3\}$ exists. Then $s_{v_{\sigma(3)}}^{(1)}=\id$ and \begin{eqnarray*} s_{e_1}^{(1)}s_{e_2}^{(1)}=M=s_{v_1}^{(1)}s_{v_2}^{(1)} s_{v_3}^{(1)}=s_{v_{\sigma(1)}}^{(1)}s_{v_{\sigma(2)}}^{(1)}.
\end{eqnarray*} Because of Theorem \ref{t7.1}, $(v_{\sigma(1)},v_{\sigma(2)})$ is in the $\Br_2\ltimes\{\pm 1\}^2$ orbit of $(e_1,e_2)$. Therefore $\uuuu{v}$ is in $(\Br_3\ltimes\{\pm 1\}^3)(\uuuu{e})=\BB^{dist}$. (e) See Example \ref{t3.23} (iii). (f) $\Gamma^{(1)}_u\cong\Z^2$ for $\uuuu{x}\in B_4\cup B_5$ follows from Theorem \ref{t6.21} (c)--(f), the Remarks \ref{t4.17}, Lemma \ref{t4.18} and the definition of $B_4$ and $B_5$. The last statement in (f) follows from the middle statement because the sublattice $\sum_{i=1}^3\Z v_i\subset H_\Z$ and its index in $H_\Z$ are invariants of the $\Br_3\ltimes\{\pm 1\}^3$ orbit of $\uuuu{v}$. For the middle statement we consider \begin{eqnarray*} \uuuu{v}=(e_1+a(-\www{x}_3)f_3,e_2+a(\www{x}_2-\www{x}_1x_3)f_3, e_3+a(-\www{x}_1)f_3)\quad\textup{with }a\in\Z. \end{eqnarray*} The next Lemma \ref{t7.8} implies \begin{eqnarray*} (\pi_3\circ\pi_3^{(1)})(\uuuu{v})&=&M,\\ \Bigl(\textup{index of }\sum_{i=1}^3\Z v_i\textup{ in }H_\Z\Bigr) &=&\left|1+a\frac{r(\uuuu{x})}{\gcd(x_1,x_2,x_3)^2}\right| \end{eqnarray*} and that, for a suitable $a_0\in\N$ and any $a\in\Z a_0$, $\uuuu{v}\in(\Delta^{(1)})^3$. As $r(\uuuu{x})\neq 0$ for $\uuuu{x}\in B_4\cup B_5$, this shows that the map $\Psi$ has countably many values. (g) Part (g) will be prepared by Lemma \ref{t7.10} and will be proved after the proof of Lemma \ref{t7.10}. \hfill$\Box$ \begin{lemma}\label{t7.8} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank 3 with a triangular basis $\uuuu{e}$ with matrix $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})$ for some $\uuuu{x}\in \Z^3-\{(0,0,0)\}$. Recall $\Rad I^{(1)}=\Z f_3$ with $f_3=-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3$ and $(\www{x}_1,\www{x}_2,\www{x}_3) =\gcd(x_1,x_2,x_3)^{-1}(x_1,x_2,x_3).$ (a) For $\uuuu{a}=(a_1,a_2,a_3)\in\Z^3$ \begin{eqnarray*} &&s_{e_1+a_1f_3}^{(1)}\circ s_{e_2+a_2f_3}^{(1)}\circ s_{e_3+a_3f_3}^{(1)} =M\\ &\iff& (a_1,a_2,a_3)\in \Z(-\www{x}_3,\www{x}_2-\www{x}_1x_3, -\www{x}_1).
\end{eqnarray*} (b) For $\uuuu{a}=(a_1,a_2,a_3)=a (-\www{x}_3,\www{x}_2-\www{x}_1x_3,-\www{x}_1)$ with $a\in\Z$, the index of $\sum_{i=1}^3\Z(e_i+a_if_3)$ in $H_\Z$ is $|1+a\frac{r(\uuuu{x})}{\gcd(x_1,x_2,x_3)^2}|$. (c) If $\Gamma^{(1)}_u\cong\Z^2$ then there is a number $a_0\in\N$ with \begin{eqnarray*} (e_1+a(-\www{x}_3)f_3,e_2+a(\www{x}_2-\www{x}_1x_3)f_3, e_3+a(-\www{x}_1)f_3)\in (\Delta^{(1)})^3 \textup{ for }a\in \Z a_0. \end{eqnarray*} \end{lemma} {\bf Proof:} (a) With Lemma \ref{t7.4} and $f_3=(f\textup{ in Lemma \ref{t7.4}})$ one calculates \begin{eqnarray*} &&s_{e_1+a_1f_3}^{(1)}\circ s_{e_2+a_2f_3}^{(1)}\circ s_{e_3+a_3f_3}^{(1)} \circ M^{-1}\\ &=& t_{-a_1j^{(1)}(e_1)}\circ s_{e_1}^{(1)} \circ t_{-a_2j^{(1)}(e_2)}\circ s_{e_2}^{(1)} \circ t_{-a_3j^{(1)}(e_3)}\circ s_{e_3}^{(1)}\circ M^{-1}\\ &=& t_{-a_1j^{(1)}(e_1)}\circ t_{-a_2j^{(1)}(e_2)-a_2j^{(1)}(e_2)(e_1)j^{(1)}(e_1)}\circ s_{e_1}^{(1)}\\ &&\circ \ t_{-a_3j^{(1)}(e_3)-a_3j^{(1)}(e_3)(e_2)j^{(1)}(e_2)}\circ s_{e_2}^{(1)}\circ s_{e_3}^{(1)}\circ M^{-1}\\ &=& t_{-A} \end{eqnarray*} with \begin{eqnarray*} A&=& a_1j^{(1)}(e_1)+a_2j^{(1)}(e_2) +a_2I^{(1)}(e_2,e_1)j^{(1)}(e_1)\\ &&+a_3j^{(1)}(e_3)+a_3I^{(1)}(e_3,e_2)j^{(1)}(e_2) +a_3I^{(1)}(e_3,e_1)j^{(1)}(e_1)\\ &&+a_3I^{(1)}(e_3,e_2)I^{(1)}(e_2,e_1)j^{(1)}(e_1)\\ &=& j^{(1)}\Bigl((a_1-a_2x_1-a_3x_2+a_3x_1x_3)e_1 +(a_2-a_3x_3)e_2+a_3e_3\Bigr). \end{eqnarray*} $t_{-A}=\id$ holds if and only if $A=0$, so if and only if \begin{eqnarray*} (a_1-a_2x_1-a_3x_2+a_3x_1x_3)e_1+(a_2-a_3x_3)e_2+a_3e_3 \in \Rad I^{(1)}=\Z f_3. \end{eqnarray*} The ansatz that it is $af_3=a(-\www{x}_3e_1+\www{x}_2e_2-\www{x}_1e_3)$ with $a\in\Z$ gives \begin{eqnarray*} -a\www{x}_1=a_3,\ a\www{x}_2=a_2-a_3x_3,\ -a\www{x}_3=a_1-a_2x_1-a_3x_2+a_3x_1x_3,\\ \textup{so}\quad (a_1,a_2,a_3)=a(-\www{x}_3,\www{x}_2-\www{x}_1x_3, -\www{x}_1). \end{eqnarray*} (b) Write $\uuuu{a}=\www{a}\gcd(x_1,x_2,x_3)(-x_3,x_2-x_1x_3,-x_1)$ with $\www{a}=\gcd(x_1,x_2,x_3)^{-2}a\in \gcd(x_1,x_2,x_3)^{-2}\Z$. 
Then \begin{eqnarray*} &&(e_1+a_1f_3,e_2+a_2f_3,e_3+a_3f_3)\\ &=&\uuuu{e}\begin{pmatrix} 1+\www{a}(-x_3)(-x_3) & \www{a}(x_2-x_1x_3)(-x_3) & \www{a}(-x_1)(-x_3) \\ \www{a}(-x_3)x_2 & 1+\www{a}(x_2-x_1x_3)x_2 & \www{a}(-x_1)x_2 \\ \www{a}(-x_3)(-x_1) & \www{a}(x_2-x_1x_3)(-x_1) & 1+\www{a}(-x_1)(-x_1)\end{pmatrix}. \end{eqnarray*} The determinant of this matrix is $1+\www{a}r(\uuuu{x})$. The index of the lattice $\sum_{i=1}^3\Z(e_i+a_if_3)$ in $H_\Z$ is the absolute value of this determinant. (c) Suppose $\Gamma^{(1)}_u\cong\Z^2$. Compare Lemma \ref{t6.17}. The set \begin{eqnarray*} \Lambda:=\{\lambda\in \Hom_0(H_\Z,\Z)\,|\, t_\lambda^+\in \Gamma^{(1)}_u\} \end{eqnarray*} is a sublattice of rank 2 in the lattice $\Hom_0(H_\Z,\Z)$ of rank 2. For $i\in\{1,2,3\}$ \begin{eqnarray*} \Gamma^{(1)}_u\{e_i\}=\{e_i+\lambda(e_i)f_3\,|\, \lambda\in\Lambda\} \subset (e_i+\Z f_3)\cap\Delta^{(1)}. \end{eqnarray*} The triple $(H_\Z,L,\uuuu{e})$ is irreducible because of $\Gamma_u^{(1)}\cong\Z^2$ and Theorem \ref{t7.7} (a). Therefore it is not reducible with a summand of type $A_1$, and thus $\{e_1,e_2,e_3\}\cap\Rad I^{(1)}=\emptyset$. Because of this and because $\Lambda$ has finite index in $\Hom_0(H_\Z,\Z)$, there is a number $b_i\in\N$ with \begin{eqnarray*} \Z b_i= \{b\in\Z\,|\, e_i+bf_3\in\Gamma^{(1)}_u\{e_i\}\}. \end{eqnarray*} For each $\uuuu{a}=(a_1,a_2,a_3)$ with $a_i\in \Z b_i$ we have $e_i+a_if_3\in\Delta^{(1)}$. Any number $a_0\in\N$ (for example the smallest one) with \begin{eqnarray*} a_0\www{x}_3\in \Z b_1,\ a_0(\www{x}_2-\www{x}_1x_3)\in\Z b_2,\ a_0\www{x}_1\in \Z b_3 \end{eqnarray*} works. \hfill$\Box$ \begin{remarks}\label{t7.9} If $\uuuu{x}\in B_1\cup B_2$ then the map $\Delta^{(1)}\to \oooo{\Delta^{(1)}}$ is a bijection by Theorem \ref{t6.18} (b)+(g).
Therefore \begin{eqnarray*} \uuuu{v}=(e_1+a(-\www{x}_3)f_3,e_2+a(\www{x}_2-\www{x}_1x_3)f_3, e_3+a(-\www{x}_1)f_3)\in H_\Z^3 \end{eqnarray*} for $a\in\Z-\{0\}$ satisfies $(\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M$, but $\uuuu{v}\notin (\Delta^{(1)})^3$. This fits with Theorem \ref{t7.7} (d). \end{remarks} \begin{lemma}\label{t7.10} The $\Br_3\ltimes\{\pm 1\}^3$ orbits in $r^{-1}(4)\subset\Z^3$ are classified in Theorem \ref{t4.6} (e). They are separated by the isomorphism classes of the pairs $(\oooo{H_\Z}^{(1)},\oooo{M})$ for corresponding unimodular bilinear lattices $(H_\Z,L,\uuuu{e})$ with triangular bases $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})$ and $r(\uuuu{x})=4$. More precisely, $\oooo{H_\Z}^{(1)}$ has a $\Z$-basis $(c_1,c_2)$ with $\oooo{M}(c_1,c_2)=(c_1,c_2) \begin{pmatrix}-1&\gamma\\0&-1\end{pmatrix}$ with a unique $\gamma\in\Z_{\geq 0}$, which is as follows: \begin{eqnarray*} \begin{array}{c|c|c|c|c|c} S(\uuuu{x}) & S(\HH_{1,2}) & S(\P^1A_1) & S(\whh{A}_2) & S(-l,2,-l) & S(-l,2,-l)\\ & & & & \textup{ with }l\equiv 0(2) & \textup{ with }l\equiv 1(2) \\ \hline \gamma & 0 & 2 & 3 & \frac{l^2}{2}-2 & l^2-4\end{array} \end{eqnarray*} The numbers $\gamma$ in this table are pairwise different. \end{lemma} {\bf Proof:} $S(\HH_{1,2})$: See Theorem \ref{t5.14} (a) (i). $S(\P^1A_1)$: By Theorem \ref{t5.13} $(\oooo{H_\Z}^{(1)},\oooo{M})\cong (H_{\Z,1},M_1)$ which comes from $S(\P^1)=\begin{pmatrix}1&-2\\0&1\end{pmatrix}$ with $S(\P^1)^{-1}S(\P^1)^t=\begin{pmatrix}-3&2\\-2&1\end{pmatrix}$. This monodromy matrix is conjugate to $\begin{pmatrix}-1&2\\0&-1\end{pmatrix}$ with respect to $GL_2(\Z)$.
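As an explicit check of this conjugacy (the base change matrix $P$ below is one convenient choice, not taken from the cited theorems), one can verify

```latex
P:=\begin{pmatrix}1&0\\1&1\end{pmatrix}\in GL_2(\Z),\qquad
\begin{pmatrix}-3&2\\-2&1\end{pmatrix}P
=\begin{pmatrix}-1&2\\-1&1\end{pmatrix}
=P\begin{pmatrix}-1&2\\0&-1\end{pmatrix},
```

so $P^{-1}\begin{pmatrix}-3&2\\-2&1\end{pmatrix}P=\begin{pmatrix}-1&2\\0&-1\end{pmatrix}$, which realizes the value $\gamma=2$ in the table.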
$S(\whh{A}_2)$: Compare Theorem \ref{t5.14} (b) (iii) and its proof: \begin{eqnarray*} f_3&=&e_1-e_2+e_3,\\ \oooo{H_\Z}^{(1)}&=&\Z \oooo{e_1}^{(1)}+\Z \oooo{e_2}^{(1)},\\ M\uuuu{e}&=&\uuuu{e} \begin{pmatrix}-2&-1&2\\-2&0&1\\-1&-1&1\end{pmatrix},\\ \oooo{M}(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)}) &=&(\oooo{e_1}^{(1)},\oooo{e_2}^{(1)}) \begin{pmatrix}-1&0\\-3&-1\end{pmatrix}. \end{eqnarray*} This monodromy matrix is conjugate to $\begin{pmatrix}-1&3\\0&-1\end{pmatrix}$ with respect to $GL_2(\Z)$. $S(-l,2,-l)$ with $l\geq 4,l\equiv 0(2)$: See Theorem \ref{t5.14} (a) (ii). Here $(\oooo{H_\Z}^{(1)},\oooo{M})\cong (H_{\Z,1},M_1)$. $S(-l,2,-l)$ with $l\geq 3,l\equiv 1(2)$: Compare Theorem \ref{t5.14} (b) (iv) and its proof. Define elements $a_1,a_2\in H_\Z$, \begin{eqnarray*} a_1&:=& \frac{l+1}{2}e_1+e_2+\frac{l+1}{2}e_3 = \frac{1}{2}f_1+\frac{1}{2}f_3,\\ a_2&:=& -e_1 = \frac{1}{2}\www{f}_2-\frac{l^2}{4}f_1-\frac{l}{4}f_3. \end{eqnarray*} The triple $(a_1,a_2,f_3)$ is a $\Z$-basis of $H_\Z$. The equality in the proof of Theorem \ref{t5.14} (b) (iv), $$M(f_1,\www{f}_2)=(f_1,\www{f}_2) \begin{pmatrix}-1&l^2-4\\0&-1\end{pmatrix},$$ implies $$\oooo{M}(\oooo{a_1}^{(1)},\oooo{a_2}^{(1)}) =(\oooo{a_1}^{(1)},\oooo{a_2}^{(1)}) \begin{pmatrix}-1&l^2-4\\0&-1\end{pmatrix}.\hspace*{1cm}\Box$$ {\bf Proof of Theorem \ref{t7.7} (g):} The case $\uuuu{x}=(0,0,0)$ is treated first and separately. Compare part (e). Of the ten triples listed there, only the triple $\uuuu{v}=(e_1,e_2,e_3)$ satisfies $\sum_{i=1}^3\Z v_i =H_\Z$. This shows part (g) in the case $\uuuu{x}=(0,0,0)$. Now consider $\uuuu{x}\in B_4$. We can suppose \begin{eqnarray*} \uuuu{x}\in\{(-1,0,-1),(-1,-1,-1),(-2,2,-2)\} \cup\{(-l,2,-l)\,|\, l\geq 3\}, \end{eqnarray*} which are the cases $S(\uuuu{x})\in \{S(A_3),S(\whh{A}_2), S(\HH_{1,2})\}\cup\{S(-l,2,-l)\,|\, l\geq 3\}$. Consider $\uuuu{v}\in (\Delta^{(1)})^3$ with $(\pi_3\circ\pi_3^{(1)})(\uuuu{v})=M$ and $\uuuu{v}$ a $\Z$-basis of $H_\Z$. We want to show $\uuuu{v}\in \BB^{dist}$.
We have \begin{eqnarray*} I^{(1)}(\uuuu{v}^t,\uuuu{v})=\begin{pmatrix} 0&y_1&y_2\\-y_1&0&y_3\\-y_2&-y_3&0\end{pmatrix} =S(\uuuu{y})-S(\uuuu{y})^t\quad\textup{for some } \uuuu{y}\in\Z^3. \end{eqnarray*} Define a new Seifert form $\www{L}:H_\Z\times H_\Z\to \Z$ by $\www{L}(\uuuu{v}^t,\uuuu{v})^t=S(\uuuu{y})$ (only at the end of the proof will it turn out that $\www{L}=L$). Then \begin{eqnarray*} \www{L}^t-\www{L}=I^{(1)}=L^t-L. \end{eqnarray*} $\uuuu{v}$ is a triangular basis with respect to $(H_\Z,\www{L})$. By Theorem \ref{t2.7} for $(H_\Z,\www{L})$ (alternatively, one can calculate the product of the matrices of $s_{v_1}^{(1)}$, $s_{v_2}^{(1)}$ and $s_{v_3}^{(1)}$ with respect to $\uuuu{v}$) \begin{eqnarray*} M\uuuu{v}=(\pi_3\circ\pi_3^{(1)})(\uuuu{v})= (s_{v_1}^{(1)}\circ s_{v_2}^{(1)}\circ s_{v_3}^{(1)}) (\uuuu{v})=\uuuu{v}S(\uuuu{y})^{-1}S(\uuuu{y})^t. \end{eqnarray*} Then \begin{eqnarray*} 3-r(\uuuu{x})=\tr(M)=\tr(S(\uuuu{y})^{-1}S(\uuuu{y})^t) =3-r(\uuuu{y}), \end{eqnarray*} so $r(\uuuu{x})=r(\uuuu{y})$. In the case of $A_3$, $r^{-1}(2)$ is a single $\Br_3\ltimes\{\pm 1\}^3$ orbit. In the cases of $\whh{A}_2$, $\HH_{1,2}$ and $\uuuu{x}\in \{(-l,2,-l)\,|\, l\geq 3\}$, Lemma \ref{t7.10} and $M\uuuu{v}=\uuuu{v}S(\uuuu{y})^{-1}S(\uuuu{y})^t$ show that $\uuuu{y}$ is in the same $\Br_3\ltimes\{\pm 1\}^3$ orbit as $\uuuu{x}$. Therefore in any case there is an element of $\Br_3\ltimes\{\pm 1\}^3$ which maps $\uuuu{v}$ to a $\Z$-basis $\uuuu{w}$ of $H_\Z$ with \begin{eqnarray*} \www{L}(\uuuu{w}^t,\uuuu{w})^t=S(\uuuu{x}). \end{eqnarray*} Then \begin{eqnarray*} I^{(1)}(\uuuu{w}^t,\uuuu{w})=S(\uuuu{x})-S(\uuuu{x})^t =I^{(1)}(\uuuu{e}^t,\uuuu{e}). \end{eqnarray*} Define an automorphism $g\in O^{(1)}$ by $g(\uuuu{e})=\uuuu{w}$.
Because of \begin{eqnarray*} M&=& s_{v_1}^{(1)}s_{v_2}^{(1)}s_{v_3}^{(1)} = s_{w_1}^{(1)}s_{w_2}^{(1)}s_{w_3}^{(1)}\\ &=& s_{g(e_1)}^{(1)}s_{g(e_2)}^{(1)}s_{g(e_3)}^{(1)} =gs_{e_1}^{(1)}s_{e_2}^{(1)}s_{e_3}^{(1)}g^{-1} =gMg^{-1} \end{eqnarray*} $g$ is in $G_\Z^{(1)}$. But for the considered cases of $\uuuu{x}$ \begin{eqnarray*} G_\Z^{(1)}\stackrel{\textup{Theorem \ref{t5.14}}}{=} G_\Z \stackrel{\textup{Theorem \ref{t3.28}}}{=} Z(\Br_3\ltimes \{\pm 1\}^3). \end{eqnarray*} Therefore there is an element of $\Br_3\ltimes \{\pm 1\}^3$ which maps $\uuuu{e}$ to $\uuuu{w}$. Altogether $\uuuu{v}\in \Br_3\ltimes\{\pm 1\}^3(\uuuu{e}) =\BB^{dist}$. (Now also $L(\uuuu{v}^t,\uuuu{v})^t\in T^{uni}_3(\Z)$ and thus $L=\www{L}$ are clear.)\hfill$\Box$ \section[The stabilizers of distinguished bases] {The stabilizers of distinguished bases in the rank 3 cases} \label{s7.4} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $3$ with a triangular basis $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})\in T^{uni}_3(\Z)$ for some $\uuuu{x}\in\Z^3$. We are interested in the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. The surjective map \begin{eqnarray*} \BB^{dist}=(\Br_3\ltimes\{\pm 1\}^3)(\uuuu{e})&\to& (\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x})\\ \www{\uuuu{e}}&\mapsto& \www{\uuuu{x}}\ \textup{ with }\ L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t=S(\www{\uuuu{x}}), \end{eqnarray*} is $\Br_3\ltimes\{\pm 1\}^3$ equivariant. By Theorem \ref{t4.13} (a) $$\Z^3=\dot\bigcup_{\uuuu{x}\in \bigcup_{i=1}^{24}C_i} (\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x}).$$ Therefore we can and will restrict to $\uuuu{x}\in \bigcup_{i=1}^{24}C_i$. The stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is by Lemma \ref{t3.25} (e) the kernel of the group antihomomorphism \begin{eqnarray*} \oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}\to G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}).
\end{eqnarray*} Here this simplifies to \begin{eqnarray*} \oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}\to G_\Z/Z((\{\pm 1\}^3)_{\uuuu{x}}) \end{eqnarray*} because $G_\Z^{\BB}=G_\Z$ in the reducible cases (and also in most irreducible cases), and because in the irreducible cases $Z((\{\pm 1\}^3)_{\uuuu{x}})=\{\pm \id\}$, a normal subgroup of $G_\Z$, by Lemma \ref{t3.25} (f). Theorem \ref{t4.16} gives the stabilizer $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ in all cases. The following Theorem \ref{t7.11} gives the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ in all cases. \begin{theorem}\label{t7.11} Consider a local minimum $\uuuu{x}\in C_i \subset\Z^3$ for some $i\in\{1,...,24\}$ and the pseudo-graph $\GG_j$ with $\GG_j=\GG(\uuuu{x})$. In the following table, the entry in the fourth column and in the line of $C_i$ is the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. The first, second and third column are copied from the table in Theorem \ref{t4.13}. \begin{eqnarray*} \begin{array}{l|l|l|l} & \textup{sets} & (\Br_3)_{\uuuu{x}/\{\pm 1\}^3} & (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}\\ \hline \GG_1 & C_1\ (A_1^3) & \Br_3 & \Br_3^{pure}\\ \GG_1 & C_2\ (\HH_{1,2}) & \Br_3 & \langle (\sigma^{mon})^2 \rangle \\ \GG_2 & C_3\ (A_2A_1) & \langle \sigma_1,\sigma_2^2\rangle & \langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2, \sigma^{mon}\sigma_1\rangle \\ & & & = \langle \sigma_2^2, \sigma_1\sigma_2^2\sigma_1^{-1}, \sigma_1^3\rangle \\ \GG_2 & C_4\ (\P^1A_1),C_5 & \langle \sigma_1,\sigma_2^2\rangle & \langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2\rangle \\ & & & =\langle \sigma_2^2,\sigma_1\sigma_2^2\sigma_1^{-1} \rangle \\ \GG_3 & C_6\ (A_3) & \langle\sigma_1\sigma_2,\sigma_1^3\rangle & \langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle \\ \GG_4 & C_7 \ (\widehat{A}_2) & \langle\sigma_2\sigma_1,\sigma_1^3\rangle & \langle \sigma_1^3,\sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle \\ \GG_5 & C_8,\ C_9\ ((-l,2,-l)) &\langle\sigma^{mon},\sigma_1^{-1}\sigma_2^{-1}\sigma_1\rangle & \langle
(\sigma^{mon})^2\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1\rangle \\ \GG_6 & C_{10}\ (\P^2),\ C_{11},\ C_{12} & \langle\sigma_2\sigma_1\rangle & \langle \id\rangle \\ \GG_7 & C_{13}\ (\textup{e.g. }(4,4,8)) & \langle \sigma_2\sigma_1^2\rangle & \langle\id \rangle \\ \GG_8 & C_{14}\ (\textup{e.g. }(3,4,6)) & \langle \sigma^{mon}\rangle & \langle \id\rangle \\ \GG_9 & C_{15},\ C_{16},\ C_{23},\ C_{24} &\langle \sigma^{mon}\rangle & \langle \id\rangle \\ \GG_{10} & C_{17}\ (\textup{e.g. }(-2,-2,0)) & \langle \sigma^{mon},\sigma_2\rangle & \langle \sigma_2^2\rangle \\ \GG_{11} & C_{18}\ (\textup{e.g. }(-3,-2,0)) & \langle \sigma^{mon},\sigma_2^2\rangle & \langle \sigma_2^2\rangle \\ \GG_{12} & C_{19}\ (\textup{e.g. }(-2,-1,0)) & \langle \sigma^{mon},\sigma_2^2, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle & \langle \sigma_2^2,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle \\ \GG_{13} & C_{20}\ (\textup{e.g. }(-2,-1,-1)) & \langle \sigma^{mon},\sigma_2^3, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle & \langle \sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle \\ \GG_{14} & C_{21},\ C_{22} & \langle \sigma^{mon},\sigma_2^3\rangle & \langle \sigma_2^3\rangle \end{array} \end{eqnarray*} \end{theorem} {\bf Proof:} {\bf The reducible case $\GG_1\, \&\, C_1\, (A_1^3)$:} Here $\uuuu{x}=(0,0,0)$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\Br_3$. $$\BB^{dist}=\{(\varepsilon_1 e_{\sigma(1)}, \varepsilon_2 e_{\sigma(2)},\varepsilon_3 e_{\sigma(3)})\,|\, \varepsilon_1,\varepsilon_2,\varepsilon_3\in\{\pm 1\}, \sigma\in S_3\},$$ and $\Br_3$ acts by permutation of the entries of triples on $\BB^{dist}$. Therefore $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is the kernel of the natural group homomorphism $\Br_3\to S_3$, so it is the subgroup $\Br_3^{pure}$ of pure braids (see Remark \ref{t8.2} (vii) for this group). {\bf The case $\GG_1\, \&\, C_2\, (\HH_{1,2})$:} Here $\uuuu{x}=(2,2,2)\sim(-2,2,-2)$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\Br_3$. 
Recall the case $\HH_{1,2}$ in the proof of Theorem \ref{t5.14}, recall the $\Z$-basis $\www{f}$ of $H_\Z=H_{\Z,1}\oplus H_{\Z,2}$, and recall $$G_\Z=\{g\in\Aut(H_\Z,1)\,|\, \det g=1\}\times \Aut(H_{\Z,2})\cong SL_2(\Z)\times \{\pm 1\}.$$ We found in the proof of Theorem \ref{t5.14} (c) \begin{eqnarray*} Z(\delta_2\sigma_1)=(\www{f}\mapsto \www{f} \begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix}),\quad Z(\delta_3\sigma_2)=(\www{f}\mapsto \www{f} \begin{pmatrix}1&0&0\\1&1&0\\0&0&1\end{pmatrix}). \end{eqnarray*} The group antihomomorphism $\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\Br_3\to G_\Z/\{\pm \id\}\cong SL_2(\Z)$ is surjective with $\oooo{Z}(\sigma_1)\equiv A_1$ and $\oooo{Z}(\sigma_2)\equiv A_2$. It almost coincides with the group homomorphism $\Br_3\to SL_2(\Z)$ in Remark \ref{t4.15} (i). It has the same kernel $\langle (\sigma^{mon})^2\rangle$. {\bf The reducible cases $\GG_2\, \&\, C_3\, (A_2A_1), C_4\, (\P^1A_1),C_5$:} Here $\uuuu{x}=(x_1,0,0)$ with $x_1\leq -1$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle\sigma_1,\sigma_2^2 \rangle$. The quotient group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}/(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is by Theorem \ref{t3.28} (c) and Lemma \ref{t3.25} (e) isomorphic to the quotient group $G_\Z/Z((\{\pm 1\}^3)_{\uuuu{x}})$. Here $Z((\{\pm 1\}^3)_{\uuuu{x}}) =\langle (-1,-1,-1),(-1,-1,1)\rangle$ with \begin{eqnarray*} Z((-1,-1,-1))&=&-\id,\\ Z((-1,-1,1))&=&(\uuuu{e}\mapsto (-e_1,-e_2,e_3))=Q. \end{eqnarray*} Define $$M^{root}:=Z(\delta_2\sigma_1)= (\uuuu{e}\mapsto \uuuu{e}\begin{pmatrix}-x_1&-1&0\\ 1&0&0\\0&0&1\end{pmatrix}),$$ and recall from Theorem \ref{t5.13} and Theorem \ref{t5.5} \begin{eqnarray*} G_\Z=\{\pm (M^{root})^l\,|\, l\in\Z\}\times \{\id,Q\}.
\end{eqnarray*} Therefore $$(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} /(\Br_3)_{\uuuu{e}/\{\pm 1\}^3} \cong G_\Z/Z((\{\pm 1\}^3)_{\uuuu{x}})\cong \{(M^{root})^l\,|\, l\in\Z\}.$$ In the case $C_3\, (A_2A_1)$, $x_1=-1$ and $M^{root}$ has order three, so the quotient group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}/(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is cyclic of order three with generator the class $[\sigma_1]$ of $\sigma_1$. In the cases $C_4$ and $C_5$, $x_1\leq -2$ and $M^{root}$ has infinite order, so the quotient group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}/(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is cyclic of infinite order with generator the class $[\sigma_1]$ of $\sigma_1$. The cases $C_4\, (\P^1A_1), C_5$: Theorem \ref{t7.1} (b) can be applied to the subbasis $(e_2,e_3)$ with $x_3=0$. It shows $\sigma_2^2\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Therefore $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ contains the normal closure of $\sigma_2^2$ in $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle\sigma_1,\sigma_2^2\rangle$. This normal subgroup is obviously $$\langle \sigma_1^l\sigma_2^2\sigma_1^{-l}\,|\,l\in\Z\rangle.$$ It can also be written with two generators, namely it is $$\langle \sigma_2^2,\sigma_1\sigma_2^2\sigma_1^{-1}\rangle = \langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2\rangle.$$ The equality of the left-hand and right-hand sides follows from $$\sigma_2^2\cdot\sigma_1\sigma_2^2\sigma_1^{-1} =\sigma_2^2\sigma_1\sigma_2^2\sigma_1\cdot \sigma_1^{-2} =\sigma^{mon}\sigma_1^{-2}.$$ The equality of this group with $\langle \sigma_1^l\sigma_2^2\sigma_1^{-l}\,|\,l\in\Z\rangle$ follows from the fact that $\sigma^{mon}$ is in the center of $\Br_3$. The quotient group $\langle\sigma_1,\sigma_2^2\rangle/ \langle \sigma_1^l\sigma_2^2\sigma_1^{-l}\,|\, l\in\Z\rangle$ is cyclic of infinite order with generator the class $[\sigma_1]$ of $\sigma_1$.
Therefore \begin{eqnarray*} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3} &=& \langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2\rangle =\langle \sigma_2^2,\sigma_1\sigma_2^2\sigma_1^{-1}\rangle \\ &=& \langle \sigma_1^l\sigma_2^2\sigma_1^{-l}\,|\, l\in\Z\rangle\\ &=& (\textup{the normal closure of } \sigma_2^2\textup{ in }\langle\sigma_1,\sigma_2^2\rangle). \end{eqnarray*} The case $C_3\, (A_2A_1)$: Theorem \ref{t7.1} (b) can be applied to the subbasis $(e_1,e_2)$ with $x_1=-1$ and to the subbasis $(e_2,e_3)$ with $x_3=0$. It shows $\sigma_1^3$ and $\sigma_2^2 \in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Therefore $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ contains the normal closure of $\sigma_1^3$ and $\sigma_2^2$ in $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma_1,\sigma_2^2 \rangle$. The quotient group $$\langle\sigma_1,\sigma_2^2\rangle / (\textup{the normal closure of }\sigma_1^3\textup{ and } \sigma_2^2\textup{ in }\langle \sigma_1,\sigma_2^2\rangle)$$ is cyclic of order three with generator the class $[\sigma_1]$ of $\sigma_1$. Therefore \begin{eqnarray*} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= (\textup{the normal closure of }\sigma_1^3 \textup{ and }\sigma_2^2\textup{ in } \langle\sigma_1,\sigma_2^2\rangle). \end{eqnarray*} It coincides with the subgroup generated by $\sigma_1^3$ and by the normal closure $\langle \sigma_2^2,(\sigma^{mon})^{-1} \sigma_1^2\rangle$ of $\sigma_2^2$ in $\langle\sigma_1, \sigma_2^2\rangle$. Therefore \begin{eqnarray*} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}&=& \langle\sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2,\sigma_1^3 \rangle =\langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2, \sigma^{mon}\sigma_1\rangle. \end{eqnarray*} {\bf The case $\GG_3\, \&\, C_6\, (A_3)$:} Here $\uuuu{x}=(-1,0,-1)$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma_1\sigma_2, \sigma_1^3\rangle$. By Theorem \ref{t5.14} (b) $G_\Z=\{\pm M^l\,|\, l\in\{0,1,2,3\}\}$, and $M$ has order four.
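The order of $M$ can also be checked directly: by Theorem \ref{t2.7}, the matrix of $M$ with respect to $\uuuu{e}$ is $S(\uuuu{x})^{-1}S(\uuuu{x})^t$, which for $\uuuu{x}=(-1,0,-1)$ gives the following routine computation (recorded here only as a check; $E_3$ denotes the $3\times 3$ unit matrix):

```latex
S(\uuuu{x})^{-1}S(\uuuu{x})^t
=\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}
\begin{pmatrix}1&0&0\\-1&1&0\\0&-1&1\end{pmatrix}
=\begin{pmatrix}0&0&1\\-1&0&1\\0&-1&1\end{pmatrix},
\qquad
\begin{pmatrix}0&0&1\\-1&0&1\\0&-1&1\end{pmatrix}^2
=\begin{pmatrix}0&-1&1\\0&-1&0\\1&-1&0\end{pmatrix}
\neq\pm E_3,
\qquad
\begin{pmatrix}0&0&1\\-1&0&1\\0&-1&1\end{pmatrix}^4=E_3.
```

The characteristic polynomial of this matrix is $(t-1)(t^2+1)$, in accordance with $(t-1)(t^2-(2-r(\uuuu{x}))t+1)$ for $r(\uuuu{x})=2$, so the eigenvalues $1,\pm i$ confirm the order four.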
By Theorem \ref{t3.28} and Lemma \ref{t3.25} (f), the antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} / (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}\to G_\Z/\{\pm \id\}$$ is an antiisomorphism. Therefore the quotient group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} / (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is cyclic of order four. Theorem \ref{t7.1} (b) can be applied to the subbasis $(e_1,e_2)$ with $x_1=-1$. It shows $\sigma_1^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Observe also \begin{eqnarray*} \delta_1\sigma_1\sigma_2(\uuuu{e}) &=& \delta_1\sigma_1(e_1,e_3+e_2,e_2) =\delta_1(e_1+e_2+e_3,e_1,e_2)\\ &=& (-e_1-e_2-e_3,e_1,e_2)=-M^{-1}(\uuuu{e}),\\ \textup{so } Z(\delta_1\sigma_1\sigma_2)&=& -M^{-1}. \end{eqnarray*} $M$ and $-M^{-1}$ have order four. Therefore $(\sigma_1\sigma_2)^4\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Thus $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3} \supset\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle.$ We will show first that $\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle$ is a normal subgroup of $\langle \sigma_1\sigma_2,\sigma_1^3\rangle$ and then that the quotient group is cyclic of order four. This will imply $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3} =\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle.$ Recall that $\sigma^{mon}=(\sigma_2\sigma_1)^3 =(\sigma_1\sigma_2)^3$ generates the center of $\Br_3$. Therefore \begin{eqnarray*} (\sigma_1\sigma_2)^l\sigma_1^3(\sigma_1\sigma_2)^{-l} =(\sigma_1\sigma_2)^{4l}\sigma_1^3(\sigma_1\sigma_2)^{-4l} \in\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle \textup{ for any }l\in\Z. \end{eqnarray*} Thus $\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle$ is a normal subgroup of $\langle \sigma_1\sigma_2,\sigma_1^3\rangle$. This also shows that the quotient group $\langle\sigma_1\sigma_2,\sigma_1^3\rangle / \langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle$ is cyclic of order four. Therefore $\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle =(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$.
{\bf The case $\GG_4\, \&\, C_7\, (\whh{A}_2)$:} Here $\uuuu{x}=(-1,-1,-1)$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma_2\sigma_1, \sigma_1^3\rangle$. By Theorem \ref{t5.14} (b) $G_\Z=\{\pm (M^{root})^l\,|\, l\in\Z\}$, and $M^{root}$ has infinite order. By Theorem \ref{t3.28} and Lemma \ref{t3.25} (f), the antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} / (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}\to G_\Z/\{\pm \id\}$$ is an antiisomorphism. By Theorem \ref{t3.26} (c) and Theorem \ref{t5.14} (b) $Z(\delta_3\sigma_2\sigma_1)=M^{root}$. Therefore the quotient group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} / (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is cyclic of infinite order with generator the class $[\sigma_2\sigma_1]$ of $\sigma_2\sigma_1$. Theorem \ref{t7.1} (b) can be applied to the subbasis $(e_1,e_2)$ with $x_1=-1$. It shows $\sigma_1^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Therefore $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ contains the normal closure of $\sigma_1^3$ in $\langle \sigma_2\sigma_1,\sigma_1^3\rangle$. We will first determine this normal closure and then show that it equals $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. As $\sigma^{mon}=(\sigma_2\sigma_1)^3$ generates the center of $\Br_3$, \begin{eqnarray*} (\sigma_2\sigma_1)^{\varepsilon+3l}\sigma_1^3 (\sigma_2\sigma_1)^{-\varepsilon-3l} =(\sigma_2\sigma_1)^{\varepsilon}\sigma_1^3 (\sigma_2\sigma_1)^{-\varepsilon} \quad\textup{for }\varepsilon\in\{0;\pm 1\},l\in\Z. \end{eqnarray*} One sees \begin{eqnarray*} (\sigma_2\sigma_1)\sigma_1(\sigma_2\sigma_1)^{-1} &=&\sigma_2\sigma_1\sigma_2^{-1},\quad\textup{so}\\ (\sigma_2\sigma_1)\sigma_1^3(\sigma_2\sigma_1)^{-1} &=&\sigma_2\sigma_1^3\sigma_2^{-1},\\ (\sigma_2\sigma_1)^{-1}\sigma_1(\sigma_2\sigma_1) &=&\sigma_1^{-1}(\sigma_2^{-1}\sigma_1\sigma_2)\sigma_1 \stackrel{\eqref{4.14}}{=} \sigma_1^{-1}(\sigma_1\sigma_2\sigma_1^{-1})\sigma_1 =\sigma_2, \quad\textup{so}\\ (\sigma_2\sigma_1)^{-1}\sigma_1^3(\sigma_2\sigma_1) &=&\sigma_2^3. 
\end{eqnarray*} Therefore the normal closure of $\sigma_1^3$ in $\langle \sigma_2\sigma_1,\sigma_1^3\rangle$ is $\langle \sigma_1^3,\sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle$. The quotient group is an infinite cyclic group with generator the class $[\sigma_2\sigma_1]$ of $\sigma_2\sigma_1$. Therefore \begin{eqnarray*} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}&=& \langle \sigma_1^3,\sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle\\ &=& (\textup{the normal closure of }\sigma_1^3\textup{ in } \langle \sigma_2\sigma_1,\sigma_1^3\rangle). \end{eqnarray*} {\bf The cases $\GG_5\, \&\, C_8$:} Here $\uuuu{x}=(-l,2,-l)$ with $l\geq 3$ odd and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma^{mon}, \sigma_1^{-1}\sigma_2^{-1}\sigma_1\rangle$. By Theorem \ref{t5.14} (b) $G_\Z=\{\pm (M^{root})^l\,|\,l\in\Z\}$ with $M^{root}$ as in Theorem \ref{t5.14} (b) with $(M^{root})^{l^2-4}=-M$. Because $l$ is odd, the cyclic group $G_\Z/\{\pm \id\}$ with generator $[M^{root}]$ can also be written as $$G_\Z/\{\pm \id\} =\langle [M],[(M^{root})^2]\rangle \quad\textup{with }[M]=[M^{root}]^{l^2-4}.$$ By Theorem \ref{t3.28} and Lemma \ref{t3.25} (f), the antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}\to G_\Z/\{\pm \id\}$$ is surjective with kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. By the proof of Theorem \ref{t5.14} (b)(iv) $[M]=\oooo{Z}(\sigma^{mon})$ and $[(M^{root})^2]=\oooo{Z}(\sigma_1^{-1}\sigma_2^{-1}\sigma_1)$. The single relation between $[M]$ and $[(M^{root})^2]$ is $[\id]=[M]^2([(M^{root})^2])^{4-l^2}$. Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $\oooo{Z}$ above is generated by $(\sigma^{mon})^2(\sigma_1^{-1}\sigma_2^{-1}\sigma_1)^{4-l^2} =(\sigma^{mon})^2\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1$. {\bf The cases $\GG_5\, \&\, C_9$:} Here $\uuuu{x}=(-l,2,-l)$ with $l\geq 4$ even and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma^{mon}, \sigma_1^{-1}\sigma_2^{-1}\sigma_1\rangle$.
By Theorem \ref{t3.28} and Lemma \ref{t3.25} (f), the antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}\to G_\Z/\{\pm \id\}$$ is surjective with kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. By Theorem \ref{t5.14} (a) and the proof of Theorem \ref{t5.14} (c) \begin{eqnarray*} G_\Z&=&\langle -\id,\www{M},Q\rangle\\ \textup{with}\quad \www{M}&=&Z(\delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1),\ Q=Z(\sigma^{mon}) Z(\delta_3\sigma_1^{-1}\sigma_2^{-1}\sigma_1)^{2-l^2/2}, \end{eqnarray*} $\www{M}$ and $Q$ commute, $\www{M}$ has infinite order, $Q$ has order two. Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $\oooo{Z}$ above is generated by $(\sigma^{mon}(\sigma_1^{-1}\sigma_2^{-1}\sigma_1)^{2-l^2/2})^2 =(\sigma^{mon})^2\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1$. {\bf The cases $\GG_6\, \&\, C_{10}\, (\P^2),C_{11},C_{12}$:} Here $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma_2\sigma_1\rangle$. Here $Z(\sigma_2\sigma_1)=M^{root}$ is a third root of the monodromy. The monodromy $M$ and $M^{root}$ have infinite order. Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma_2\sigma_1\rangle\to G_\Z/\{\pm \id\}$$ is $\langle \id\rangle$. {\bf The cases $\GG_7\, \&\, C_{13}\, (\textup{e.g. }(4,4,8))$:} Here $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma_2\sigma_1^2\rangle$. Here $Z(\sigma_2\sigma_1^2)=M^{root}$ is a root of the monodromy. The monodromy $M$ and $M^{root}$ have infinite order. Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma_2\sigma_1^2\rangle\to G_\Z/\{\pm \id\}$$ is $\langle \id\rangle$. {\bf The cases $\GG_8\, \&\, C_{14}, \GG_9\, \&\, C_{15},C_{16},C_{23},C_{24}$:} Here $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma^{mon}\rangle$. The monodromy $M=Z(\sigma^{mon})$ has infinite order.
Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma^{mon}\rangle\to G_\Z/\{\pm \id\}$$ is $\langle \id\rangle$. {\bf The cases $\GG_{10}\, \&\, C_{17}\, (\textup{e.g. } (-2,-2,0))$:} Here $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma^{mon},\sigma_2\rangle$. Recall from Theorem \ref{t5.16} (c) that \begin{eqnarray*} G_\Z=\{\id,Q\}\times \{\pm M^l\,|\, l\in\Z\} =\langle -\id,Q,M\rangle, \end{eqnarray*} $Q$ and $M$ commute, $Q$ has order two, $M$ has infinite order, $-Q=Z(\sigma_2)$ (see the proof of Theorem \ref{t5.17} (e)), $M=Z(\sigma^{mon})$. Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma^{mon},\sigma_2\rangle\to G_\Z/\{\pm \id\}$$ is $\langle \sigma_2^2\rangle$. {\bf The cases $\GG_{11}\, \&\, C_{18}, \GG_{12}\, \&\, C_{19}, \GG_{13}\, \&\, C_{20},\GG_{14}\, \&\, C_{21},C_{22}$:} By Theorem \ref{t4.16} in all cases $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ is generated by $\sigma^{mon}$ and some other generators. We claim that the other generators are all in $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. Application of Theorem \ref{t7.1} (b) to the subbasis $(e_2,e_3)$ shows this for the following generators:\\ $\sigma_2^2$ in the cases $\GG_{11}\,\&\, C_{18}$ and $\GG_{12}\,\&\, C_{19}$ because there $x_3=0$;\\ $\sigma_2^3$ in the cases $\GG_{13}\,\&\, C_{20}$ and $\GG_{14}\,\&\, C_{21},C_{22}$ because there $x_3=-1$. $\sigma_2^{-1}$ maps $\uuuu{e}$ to the basis $(e_1,e_3,s^{(0)}_{e_3}(e_2))$. Therefore application of Theorem \ref{t7.1} (b) to the subbasis $(e_1,e_3)$ with $x_2=-1$ in the cases $\GG_{12}\,\&\, C_{19}$ and $\GG_{13}\,\&\, C_{20}$ shows $\sigma_2\sigma_1^3\sigma_2^{-1} \in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$. The claim $\langle \textup{other generators}\rangle \subset (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is proved. The monodromy $M=Z(\sigma^{mon})$ has infinite order.
Therefore the kernel $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ of the group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma^{mon},\textup{other generators} \rangle\to G_\Z/\{\pm \id\}$$ is $\langle \textup{other generators}\rangle$. \hfill$\Box$ \begin{remarks}\label{t7.12} (i) In the cases $\GG_6\,\&\, C_{10},C_{11},C_{12}$, $\GG_7\,\&\, C_{13}$, $\GG_8\,\&\, C_{14}$ and $\GG_9\,\&\, C_{15},C_{16},C_{23},C_{24}$, the even monodromy group $\Gamma^{(0)}$ is a free Coxeter group with three generators by Theorem \ref{t6.11} (b) and (g). Example \ref{t3.23} (iv), which builds on Theorem \ref{t3.2}, shows $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle\id\rangle$. Using this fact, one also derives $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ in the following way. In all cases $(H_\Z,L,\uuuu{e})$ is irreducible. The group antihomomorphism $$\oooo{Z}:(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}\to G_\Z/\{\pm \id\}$$ is injective because the kernel is $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle\id\rangle$. By Theorem \ref{t3.28} $\oooo{Z}$ is surjective in almost all cases. The proof of Theorem \ref{t3.28} provides in all cases preimages of generators of $\Imm(\oooo{Z})\subset G_\Z/\{\pm \id\}$. These preimages generate $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$. So the arguments here on the one side and the Theorems \ref{t4.13}, \ref{t4.16} and \ref{t7.11} on the other side offer two independent ways to derive the stabilizers $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ and $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ in the considered cases. (ii) But the arguments in (i) cannot easily be adapted to the other cases. In the cases $\GG_{10}\,\&\, C_{17}$, $\GG_{11}\,\&\, C_{18}$, $\GG_{12}\,\&\, C_{19}$, $\GG_{13}\,\&\, C_{20}$ and $\GG_{14}\,\&\, C_{21},C_{22}$, the even monodromy group $\Gamma^{(0)}$ is a non-free Coxeter group. Theorem \ref{t3.2} (b) generalizes in Theorem \ref{t3.7} (b) to a statement on the size of $\BB^{dist}$, but not to a statement on the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$.
The cases $\GG_1$, $\GG_2$, $\GG_3$, $\GG_4$ and $\GG_5$ are, with the exception of the reducible cases $C_5$ and the case $C_{10}$, the cases with $r(\uuuu{x})\in\{0,1,3,4\}$. For them it looks possible, but difficult, to generalize the arguments in (i). The conceptual derivation of the stabilizer groups for all cases with the Theorems \ref{t4.13}, \ref{t4.16} and \ref{t7.11} is more elegant. \end{remarks} \begin{remarks}\label{t7.13} (i) The table in Theorem \ref{t7.11} describes the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ by generators, except for the case $\GG_1\,\&\, C_1(A_1^3)$ where $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\Br_3^{pure}$. In fact \begin{eqnarray*} \Br_3^{pure}&=& \langle \sigma_1^2,\sigma_2^2, \sigma_2\sigma_1^2\sigma_2^{-1}, \sigma_2^{-1}\sigma_1^2\sigma_2\rangle\\ &=& (\textup{the normal closure of }\sigma_1^2\textup{ in } \Br_3), \end{eqnarray*} because $\sigma_2\sigma_1^2\sigma_2^{-1} =\sigma_1^{-1}\sigma_2^2\sigma_1$, $\sigma_2^{-1}\sigma_1^2\sigma_2 =\sigma_1\sigma_2^2\sigma_1^{-1}$ by \eqref{4.14}. (ii) In some cases the proof of Theorem \ref{t7.11} provides elements so that $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is the normal closure of these elements in $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$: \begin{eqnarray*} \begin{array}{lcl} & \textup{elements} & (\Br_3)_{\uuuu{x}/\{\pm 1\}^3} \\ \hline \GG_2\,\&\, C_3\, (A_2A_1) & \sigma_1^3,\sigma_2^2 & \langle \sigma_1,\sigma_2^2\rangle \\ \GG_2\,\&\, C_4\, (\P^1A_1),C_5 & \sigma_2^2 & \langle \sigma_1,\sigma_2^2\rangle \\ \GG_4\,\&\, C_7\, (\whh{A}_2) & \sigma_1^3 & \langle \sigma_2\sigma_1,\sigma_1^3\rangle \end{array} \end{eqnarray*} (iii) In the case $\GG_3\,\&\, C_6\, (A_3)$, the stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3} =\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle$ was determined already in \cite[Satz 7.3]{Yu90}.
\end{remarks} The pseudo-graph $\GG(\uuuu{x})$ for $\uuuu{x}\in\bigcup_{i=1}^{24}C_i$ with vertex set $\VV=\Br_3(\uuuu{x}/\{\pm 1\}^3)$ in Definition \ref{t4.9} (f), Lemma \ref{t4.10} and the Examples \ref{t4.11} has been very useful. All except two edges came from the generators $\varphi_1,\varphi_2,\varphi_3$ of the free Coxeter group $G^{phi}$, and two edges came from $\gamma(v_0)$ and $\gamma^{-1}(v_0)$. An a priori more natural choice of edges comes from the elementary braids $\sigma_1$ and $\sigma_2$. It is less useful, but also interesting. \begin{definition}\label{t7.14} Let $\VV$ be a non-empty finite or countably infinite set on which $\Br_3$ acts. The triple $\GG_\sigma(\VV):=(\VV,\EE_1,\EE_2)$ with $\EE_1:=\{(v,\sigma_1(v))\,|\, v\in\VV\}$ and $\EE_2:=\{(v,\sigma_2(v))\,|\, v\in \VV\}$ is called {\it $\sigma$-pseudo-graph of $\VV$}. \index{$\sigma$-pseudo-graph} Here $\EE_1$ and $\EE_2$ are two families of directed edges. A {\it loop} in $\EE_i$ is an edge $(v,\sigma_i(v))=(v,v)$. \end{definition} \begin{remarks}\label{t7.15} (i) In a picture of a $\sigma$-pseudo-graph, edges in $\EE_1$ and in $\EE_2$ are denoted as follows. \includegraphics[height=0.03\textheight]{pic-7-1-sigma1.png} \quad an edge in $\EE_1$, \includegraphics[height=0.03\textheight]{pic-7-1-sigma2.png} \quad an edge in $\EE_2$. (ii) Consider a $\sigma$-pseudo-graph $\GG_\sigma(\VV)$. Because $\sigma_1:\VV\to\VV$ and $\sigma_2:\VV\to\VV$ are bijections, each vertex $v\in\VV$ is the starting point of one edge in $\EE_1$ and one edge in $\EE_2$, and the end point of one edge in $\EE_1$ and one edge in $\EE_2$. The $\sigma$-pseudo-graph is connected if and only if $\VV$ is a single $\Br_3$ orbit. (iii) Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice with a triangular basis $\uuuu{e}$ with $L(\uuuu{e}^t,\uuuu{e})^t=S(\uuuu{x})$ for some $\uuuu{x}\in\Z^3$. Two $\sigma$-pseudo-graphs are associated to it, $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ and $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$.
The natural map \begin{eqnarray*} \BB^{dist}/\{\pm 1\}^3&\to& \Br_3(\uuuu{x}/\{\pm 1\}^3)\\ \www{\uuuu{e}}/\{\pm 1\}^3&\mapsto& \www{\uuuu{x}}/\{\pm 1\}^3\textup{ with } L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t=S(\www{\uuuu{x}}), \end{eqnarray*} is $\Br_3$-equivariant and surjective. It induces a {\it covering} \begin{eqnarray*} \GG_\sigma(\BB^{dist}/\{\pm 1\}^3)&\to& \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3)) \end{eqnarray*} of $\sigma$-pseudo-graphs. This is even a {\it normal covering} with group of deck transformations $G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}})$ where $G_\Z^{\BB}=Z((\Br_3\ltimes\{\pm 1\}^3)_{\uuuu{x}})\subset G_\Z$ is as in Lemma \ref{t3.25} (e). Now we explain what this means and why it holds. The group $G_\Z^{\BB}$ acts transitively on the fiber over $\uuuu{x}$ of the map $\BB^{dist}\to (\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x})$. By Lemma \ref{t3.22} (a) the action of this group $G_\Z^{\BB}$ and the action of the group $\Br_3\ltimes\{\pm 1\}^3$ on $\BB^{dist}$ commute, so that $G_\Z^{\BB}$ acts transitively on each fiber of the map $\BB^{dist}\to (\Br_3\ltimes\{\pm 1\}^3)(\uuuu{x})$. Therefore the group $G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}})$ acts simply transitively on each fiber of the covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ and is a group of automorphisms of the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$. The quotient by this group is the $\sigma$-pseudo-graph $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$. These statements are the meaning of the {\it normal covering} $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$. (iv) In part (iii) the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ contains no loops, and for any $v\in\BB^{dist}/\{\pm 1\}^3$ we have $\sigma_1(v) \neq \sigma_2(v)$. The $\sigma$-pseudo-graph $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ contains loops in a few cases.
It contains a vertex $v$ with $\sigma_1(v)=\sigma_2(v)$ only in the cases $\uuuu{x}=(0,0,0)$ ($A_1^3$) and $\uuuu{x}=(2,2,2)$ ($\HH_{1,2}$) where $\Br_3(\uuuu{x}/\{\pm 1\}^3)$ has only one vertex anyway. \end{remarks} \begin{figure} \parbox{2cm}{\includegraphics[height=0.15\textheight] {pic-7-1-1.png}} $\uuuu{x}\in C_1\cup C_2=\{(0,0,0),(2,2,2)\},\, A_1^3,\HH_{1,2}$\\ \includegraphics[height=0.15\textheight]{pic-7-1-2.png}\\ Two equivalent pictures for the cases $\uuuu{x}=(x,0,0)\in C_3\cup C_4\cup C_5=\{(\www{x},0,0)\,|\, \www{x}<0\}$\\ ($A_2A_1,\P^1A_1$, other reducible cases without $A_1^3$)\\ \parbox{6cm}{\includegraphics[height=0.2\textheight]{pic-7-1-3.png}} $\uuuu{x}=(-1,0,-1)\in C_6,\ A_3$\\ \parbox{6cm}{\includegraphics[height=0.2\textheight]{pic-7-1-4.png}} $\uuuu{x}=(-1,-1,-1)\in C_7,\, \whh{A}_2$ \caption[Figure 7.1]{Examples \ref{t7.16} (i): The $\sigma$-pseudo-graphs for the finite $\Br_3$ orbits in $\Z^3/\{\pm 1\}^3$} \label{Fig:7.1} \end{figure} \begin{examples}\label{t7.16} (i) By Theorem \ref{t4.13} (a) $\Z^3/\{\pm 1\}^3$ consists of the $\Br_3$ orbits $\Br_3(\uuuu{x}/\{\pm 1\}^3)$ for $\uuuu{x}\in \bigcup_{i=1}^{24}C_i$. Precisely for $\uuuu{x}\in\bigcup_{i=1}^7C_i$ such an orbit is finite. This led to the four pseudo-graphs $\GG_1,\GG_2,\GG_3,\GG_4$ in the Examples \ref{t4.11}. The four corresponding $\sigma$-pseudo-graphs $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ are listed in Figure \ref{Fig:7.1}. A vertex $\www{\uuuu{x}}/\{\pm 1\}^3\in\Z^3/\{\pm 1\}^3$ is denoted by a representative $\www{\uuuu{x}}\in\Z^3_{>0}\cup\Z^3_{\leq 0}$. The vertices are positioned at the same places as in the pictures in the Examples \ref{t4.11} for $\GG_1,\GG_2,\GG_3,\GG_4$. (ii) The case $\HH_{1,2}$, $\uuuu{x}=(2,2,2)$: Here $\Br_3(\uuuu{x}/\{\pm 1\}^3)=\{\uuuu{x}/\{\pm 1\}^3\}$ has only one vertex, but the group $$G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}) =G_\Z/\{\pm \id\} \cong SL_2(\Z)$$ is big. 
There is a natural bijection $\BB^{dist}/\{\pm 1\}^3\to SL_2(\Z)$, and the elementary braids $\sigma_1$ and $\sigma_2$ act by multiplication from the left with the matrices $A_1=\begin{pmatrix} 1 &-1\\0&1\end{pmatrix}$ and $A_2=\begin{pmatrix} 1&0\\1&1\end{pmatrix}$ on $SL_2(\Z)$. This gives a clear description of the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$. We do not attempt a picture. (iii) The reducible case $A_1^3$, $\uuuu{x}=(0,0,0)$: Also here $\Br_3(\uuuu{x}/\{\pm 1\}^3)=\{\uuuu{x}/\{\pm 1\}^3\}$ has only one vertex. The group $$ G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}) =G_\Z/\{\pm 1\}^3\cong O_3(\Z)/\{\pm 1\}^3\cong S_3$$ has six elements. Therefore $\BB^{dist}/\{\pm 1\}^3$ has six elements, and $\sigma_1$ and $\sigma_2$ act as involutions. The right hand side of the first line in Figure \ref{Fig:7.3} gives the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$. Part (iv) offers a different description which applies also to $A_1^3$ if one sees it as $A_1^2A_1$. (iv) The reducible cases, $\uuuu{x}\in \bigcup_{i\in\{3,4,5\}}C_i$ ($A_2A_1$, $\P^1A_1$, other reducible cases): Here $(H_\Z,L,\uuuu{e})=(H_{\Z,1},L_1,(e_1,e_2))\oplus (H_{\Z,2},L_2,e_3)$ with $H_{\Z,1}=\Z e_1\oplus \Z e_2$ and $H_{\Z,2}=\Z e_3$. The group of deck transformations of the normal covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ is $$ G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}) =G_\Z/\{\pm \id,\pm Q\}\cong \{(M^{root})^l\,|\, l\in\Z\}.$$ Here $M^{root}$ has order 3 in the case $A_2A_1$ and infinite order in the other cases. Therefore the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ can be obtained as a threefold or infinite cyclic covering of the $\sigma$-pseudo-graph $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$.
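A small consistency check for the description in (ii): by direct matrix multiplication, $A_1$ and $A_2$ satisfy the braid relation, as they must for an action of $\sigma_1$ and $\sigma_2$ by left multiplication,
\begin{eqnarray*}
A_1A_2A_1=\begin{pmatrix}0&-1\\1&0\end{pmatrix}=A_2A_1A_2.
\end{eqnarray*}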
\begin{figure}[H] \includegraphics[width=0.6\textwidth]{pic-7-2.png} \caption[Figure 7.2]{In the reducible cases (without $A_1^3$) one sheet of the covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$} \label{Fig:7.2} \end{figure} More concretely, the type of the covering is determined by the $\Br_2$ orbit of distinguished bases up to signs of $(H_{\Z,1},L_1,(e_1,e_2))$. One such distinguished basis modulo signs $(\www{e}_1,\www{e}_2)/\{\pm 1\}^2$ gives rise to one sheet in the covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$. Figure \ref{Fig:7.2} shows the part of a $\sigma$-pseudo-graph which corresponds to one such sheet. \begin{figure} \includegraphics[height=0.2\textheight]{pic-7-3-1.png}\\ $A_1^2$ \hspace*{7cm} $A_1^3$ \\ \hspace*{-0.5cm}\includegraphics[height=0.2\textheight]{pic-7-3-2.png}\\ $A_2$ \hspace*{7cm}$A_2A_1$ \\ \hspace*{1cm}\includegraphics[height=0.3\textheight] {pic-7-3-3.png} $\P^1$, $f_1=e_1+e_2$ \hspace*{5cm} $\P^1A_1$ \\ \caption[Figure 7.3]{Examples \ref{t7.16} (iii) and (iv): The $\sigma$-pseudo-graphs for distinguished bases modulo signs in the reducible cases} \label{Fig:7.3} \end{figure} The six pictures in Figure \ref{Fig:7.3} show on the left hand side analogous $\sigma_1$-pseudo-graphs for the distinguished bases modulo signs of the rank 2 cases $A_1^2$, $A_2$ and $\P^1$, and on the right hand side the $\sigma$-pseudo-graphs $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ for $A_1^3$, $A_2A_1$ and $\P^1A_1$ (respectively only a part of the $\sigma$-pseudo-graph in the case of $\P^1A_1$). The $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ for $\uuuu{x}=(x,0,0)$ with $x<-2$ looks the same as the one for $\P^1A_1$, though of course the distinguished bases are different. (v) The case $A_3$, $\uuuu{x}=(-1,0,-1)$: The $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ was first given in \cite[page 40, Figur 6]{Yu90}. We recall and explain it in our own words.
The group of deck transformations of the normal covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ is $$ G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}) =G_\Z/\{\pm \id\}\cong \{M^l\,|\, l\in\{0,1,2,3\}\}.$$ Here the monodromy $M$ acts in the natural way, $$M((\www{e}_1,\www{e}_2,\www{e}_3)/\{\pm 1\}^3) =(M(\www{e}_1),M(\www{e}_2),M(\www{e}_3))/\{\pm 1\}^3,$$ on $\BB^{dist}/\{\pm 1\}^3$ and has order four, $M^4=\id$. Here $M$ and its powers are \begin{eqnarray*} M(\uuuu{e})=\uuuu{e} \begin{pmatrix}0&0&1\\ -1&0&1\\0&-1&1\end{pmatrix}, M^2(\uuuu{e})=\uuuu{e} \begin{pmatrix}0&-1&1\\0&-1&0\\1&-1&0\end{pmatrix}, \\ M^3(\uuuu{e})=\uuuu{e} \begin{pmatrix}1&-1&0\\1&0&-1\\1&0&0\end{pmatrix}. \end{eqnarray*} Because of the shape of $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$, $$b^{1,0}:=\uuuu{e}/\{\pm 1\}^3,\quad b^{2,0}:=\sigma_1 b^{1,0},\quad b^{3,0}:=\sigma_1^2 b^{1,0},\quad b^{4,0}:=\sigma_2^{-1} b^{1,0}$$ form one sheet of the fourfold cyclic covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$. Define $$b^{i,l}:=M^l(b^{i,0})\quad \textup{for}\quad l\in\{1,2,3\}.$$ Then $$\BB^{dist}/\{\pm 1\}^3 =\{b^{i,l}\,|\, i\in\{1,2,3,4\},l\in\{0,1,2,3\}\}$$ has sixteen elements. We claim for $l\in\{0,1,2,3\}$ \begin{eqnarray*} \begin{array}{ll} \sigma_1 b^{1,l}=b^{2,l},& \sigma_2 b^{1,l}=b^{3,l+3(\mmod 4)},\\ \sigma_1 b^{2,l}=b^{3,l},& \sigma_2 b^{2,l}=b^{2,l+2(\mmod 4)},\\ \sigma_1 b^{3,l}=b^{1,l},& \sigma_2 b^{3,l}=b^{4,l+1(\mmod 4)},\\ \sigma_1 b^{4,l}=b^{4,l+2(\mmod 4)},& \sigma_2 b^{4,l}=b^{1,l}. \end{array} \end{eqnarray*} It is sufficient to prove the claim for $l=0$. The equations $\sigma_1 b^{1,0}=b^{2,0}$, $\sigma_1 b^{2,0}=b^{3,0}$, $\sigma_2 b^{4,0}=b^{1,0}$ follow from the definitions of $b^{2,0}$, $b^{3,0}$, $b^{4,0}$. The inclusion $\sigma_1^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ gives $\sigma_1 b^{3,0}=b^{1,0}$.
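As an aside, the relation $M^4=\id$ can also be read off directly from the matrices above, since the matrix of $M^2$ squares to the identity,
\begin{eqnarray*}
\begin{pmatrix}0&-1&1\\0&-1&0\\1&-1&0\end{pmatrix}^2
=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix},
\quad\textup{so}\quad (M^2)^2(\uuuu{e})=\uuuu{e}.
\end{eqnarray*}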
It remains to show $$\sigma_1 b^{4,0}=b^{4,2},\ \sigma_2 b^{1,0}=b^{3,3},\ \sigma_2 b^{2,0}=b^{2,2},\ \sigma_2 b^{3,0}=b^{4,1}.$$ One sees \begin{eqnarray*} \begin{array}{llll} i & b^{i,0} & \www{\uuuu{e}} & \www{\uuuu{x}} \textup{ with }L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t =S(\www{\uuuu{x}})\\ \hline 1 & \uuuu{e}/\{\pm 1\}^3 & \uuuu{e} & (-1,0,-1)\\ 2 & \sigma_1(\uuuu{e})/\{\pm 1\}^3 & \sigma_1(\uuuu{e})=(e_1+e_2,e_1,e_3) & (1,-1,0) \\ 3 & \sigma_1^2(\uuuu{e})/\{\pm 1\}^3 & \sigma_1^2(\uuuu{e})=(-e_2,e_1+e_2,e_3) & (-1,1,-1)\\ 4 & \sigma_2^{-1}(\uuuu{e})/\{\pm 1\}^3 & \sigma_2^{-1}(\uuuu{e})=(e_1,e_3,e_2+e_3) & (0,-1,1) \end{array} \end{eqnarray*} \begin{eqnarray*} \sigma_1 b^{4,0}&=& (e_3,e_1,e_2+e_3)/\{\pm 1\}^3 =M^2b^{4,0}=b^{4,2},\\ \sigma_2 b^{1,0}&=& (e_1,e_2+e_3,e_2)/\{\pm 1\}^3 =M^3b^{3,0}=b^{3,3},\\ \sigma_2 b^{2,0}&=& (e_1+e_2,e_3,e_1)/\{\pm 1\}^3 =M^2b^{2,0}=b^{2,2}. \end{eqnarray*} $\sigma_2b^{1,0}=b^{3,3}$, $\sigma_2b^{4,0}=b^{1,0}$ and $\sigma_2^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ show $\sigma_2b^{3,3}=b^{4,0}$. This implies $\sigma_2b^{3,0}=b^{4,1}$. The claim is proved. The $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ is given in Figure \ref{Fig:7.4}. \begin{figure} \includegraphics[width=0.7\textwidth]{pic-7-4.png} \caption[Figure 7.4]{Example \ref{t7.16} (v): The $\sigma$-pseudo-graph for distinguished bases modulo signs in the case $A_3$} \label{Fig:7.4} \end{figure} (vi) The case $\whh{A}_2$, $\uuuu{x}=(-1,-1,-1)$: The group of deck transformations of the normal covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$ is $$ G_\Z^{\BB}/Z((\{\pm 1\}^3)_{\uuuu{x}}) =G_\Z/\{\pm \id\}\cong \{(M^{root})^l\,|\, l\in\Z\}.$$ Here $M^{root}$ acts in the natural way on $\BB^{dist}/\{\pm 1\}^3$. $M^{root}$ has infinite order and satisfies $(M^{root})^3=-M$.
Recall $f_1=e_1+e_2+e_3$, $\Z f_1=\Rad I^{(0)}$, \begin{eqnarray*} M^{root}(\uuuu{e})&=&\uuuu{e} \begin{pmatrix}1&1&-1\\1&0&0\\0&1&0\end{pmatrix},\quad M^{root}(f_1)=f_1,\\ (M^{root})^2(\uuuu{e})&=&\uuuu{e}+f_1(1,0,-1). \end{eqnarray*} Because of the shape of $\GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$, $$b^{1,0}:=\uuuu{e}/\{\pm 1\}^3,\quad b^{2,0}:=\sigma_1 b^{1,0},\quad b^{3,0}:=\sigma_1^2 b^{1,0},\quad b^{4,0}:=\sigma_2 b^{1,0}$$ form one sheet of the infinite cyclic covering $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)\to \GG_\sigma(\Br_3(\uuuu{x}/\{\pm 1\}^3))$. Define $$b^{i,l}:=(M^{root})^l(b^{i,0})\quad \textup{for}\quad i\in\{1,2,3,4\},\ l\in\Z-\{0\}.$$ Then $$\BB^{dist}/\{\pm 1\}^3 =\{b^{i,l}\,|\, i\in\{1,2,3,4\},\ l\in\Z\}.$$ We claim for $l\in\Z$ \begin{eqnarray*} \begin{array}{ll} \sigma_1 b^{1,l}=b^{2,l},& \sigma_2 b^{1,l}=b^{4,l},\\ \sigma_1 b^{2,l}=b^{3,l},& \sigma_2 b^{2,l}=b^{1,l+1},\\ \sigma_1 b^{3,l}=b^{1,l},& \sigma_2 b^{3,l}=b^{3,l+2},\\ \sigma_1 b^{4,l}=b^{4,l+2},& \sigma_2 b^{4,l}=b^{2,l-1}. \end{array} \end{eqnarray*} It is sufficient to prove the claim for $l=0$. The equations $\sigma_1 b^{1,0}=b^{2,0}$, $\sigma_1 b^{2,0}=b^{3,0}$, $\sigma_2 b^{1,0}=b^{4,0}$ follow from the definitions of $b^{2,0}$, $b^{3,0}$, $b^{4,0}$. The inclusion $\sigma_1^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ gives $\sigma_1 b^{3,0}=b^{1,0}$.
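The formula for $(M^{root})^2$ above can be checked by squaring the matrix of $M^{root}$,
\begin{eqnarray*}
\begin{pmatrix}1&1&-1\\1&0&0\\0&1&0\end{pmatrix}^2
=\begin{pmatrix}2&0&-1\\1&1&-1\\1&0&0\end{pmatrix}
=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+\begin{pmatrix}1\\1\\1\end{pmatrix}(1,0,-1),
\end{eqnarray*}
which gives $(M^{root})^2(\uuuu{e})=\uuuu{e}+f_1(1,0,-1)$ because $\uuuu{e}\,(1,1,1)^t=f_1$.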
It remains to show $$\sigma_1 b^{4,0}=b^{4,2},\ \sigma_2 b^{2,0}=b^{1,1},\ \sigma_2 b^{3,0}=b^{3,2},\ \sigma_2 b^{4,0}=b^{2,-1}.$$ One sees \begin{eqnarray*} \begin{array}{llll} i & b^{i,0} & \www{\uuuu{e}} & \www{\uuuu{x}} \textup{ with }L(\www{\uuuu{e}}^t,\www{\uuuu{e}})^t =S(\www{\uuuu{x}})\\ \hline 1 & \uuuu{e}/\{\pm 1\}^3 & \uuuu{e} & (-1,-1,-1)\\ 2 & \sigma_1(\uuuu{e})/\{\pm 1\}^3 & \sigma_1(\uuuu{e})=(e_1+e_2,e_1,e_3) & (1,-2,-1) \\ 3 & \sigma_1^2(\uuuu{e})/\{\pm 1\}^3 & \sigma_1^2(\uuuu{e})=(-e_2,e_1+e_2,e_3) & (-1,1,-2)\\ 4 & \sigma_2(\uuuu{e})/\{\pm 1\}^3 & \sigma_2(\uuuu{e})=(e_1,e_2+e_3,e_2) & (-2,-1,1) \end{array} \end{eqnarray*} \begin{eqnarray*} \sigma_1 b^{4,0}&=& (2e_1+e_2+e_3,e_1,e_2)/\{\pm 1\}^3\\ &=&(e_1+f_1,e_2+e_3-f_1,e_2)/ \{\pm 1\}^3 =(M^{root})^2b^{4,0}=b^{4,2},\\ \sigma_2 b^{2,0}&=& (e_1+e_2,e_1+e_3,e_1)/\{\pm 1\}^3 =M^{root}b^{1,0}=b^{1,1},\\ \sigma_2 b^{3,0}&=& (-e_2,2e_1+2e_2+e_3,e_1+e_2)/\{\pm 1\}^3\\ &=&(-e_2,e_1+e_2+f_1,-e_3+f_1)/\{\pm 1\}^3 =(M^{root})^2b^{3,0}=b^{3,2}. \end{eqnarray*} $\sigma_2b^{2,0}=b^{1,1}$ implies $\sigma_2b^{2,-1}=b^{1,0}$. This, $\sigma_2b^{1,0}=b^{4,0}$ and $\sigma_2^3\in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ show $\sigma_2b^{4,0}=b^{2,-1}$. The claim is proved. A part of the $\sigma$-pseudo-graph $\GG_\sigma(\BB^{dist}/\{\pm 1\}^3)$ is given in Figure \ref{Fig:7.5}. \begin{figure} \includegraphics[width=1.0\textwidth]{pic-7-5.png} \caption[Figure 7.5]{Example \ref{t7.16} (vi): A part of the $\sigma$-pseudo-graph for distinguished bases modulo signs in the case $\whh{A}_2$} \label{Fig:7.5} \end{figure} \end{examples} \chapter[Manifolds induced by braid group orbits] {Manifolds induced by the orbit of distinguished bases and the orbit of distinguished matrices}\label{s8} \setcounter{equation}{0} \setcounter{figure}{0} A single matrix $S\in T^{uni}_n(\Z)$ induces a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis $\uuuu{e}$. This triple is unique up to isomorphism.
The chapters \ref{s2}--\ref{s7} studied this structure and the associated structures $\Gamma^{(k)}$, $\Delta^{(k)}$ and $\BB^{dist}$ in general, and also systematically in the cases of rank 2 and 3. These chapters are all of an algebraic/combinatorial type. Chapter \ref{s8} turns to geometry. It will present two complex $n$-dimensional manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ which are induced by $S$ respectively by $(H_\Z,L,\uuuu{e})$. If $n\geq 2$ they are certain coverings of the configuration space of the braid group $\Br_n$. They can also be seen as the results of gluing many {\it Stokes regions}, one for each element in $\BB^{dist}/\{\pm 1\}^n$ respectively in $\SSS^{dist}/\{\pm 1\}^n$. Their definition is not difficult, but worth discussing in detail. This is done in section \ref{s8.1}. Many of them are important in algebraic geometry and singularity theory. See chapter \ref{s10} for statements on the cases in singularity theory. They are all carriers of much richer structures. One version of these richer structures is presented in section \ref{s8.5}, certain {\it $\Z$-lattice bundles} on them. Another rather elementary differential geometric structure on them is given in section \ref{s8.2}. Section \ref{s8.2} recalls the notion of an F-manifold and states that $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ are semisimple F-manifolds with Euler field and empty Maxwell stratum. As such, they often have partial compactifications to which the F-manifold structure extends. Section \ref{s8.3} considers first the case when $(H_\Z,L,\uuuu{e})$ is reducible. It states that then $C_n^{\uuuu{e}/\{\pm 1\}^n}$ embeds into the product of the corresponding manifolds for the summands of $(H_\Z,L,\uuuu{e})$. Afterwards it discusses the rank 2 cases. Section \ref{s8.4} recalls a notion which was developed within singularity theory, the {\it distinguished systems of $n$ paths}.
There are a priori two actions of the braid group $\Br_n$ on sets of homotopy classes of distinguished systems of paths. They are discussed and compared in section \ref{s8.4}. They are needed in section \ref{s8.5}. It constructs from $(H_\Z,L,\uuuu{e})$ natural families of $\Z$-lattice structures over the manifolds $C_n^{univ}$ and $C_n^{\uuuu{e}/\{\pm 1\}^n}$. Beforehand, it constructs a single $\Z$-lattice structure on $\C-\{u_1,...,u_n\}$ from $(H_\Z,L,\uuuu{e})$ and the additional choices of a pair $(\uuuu{u},r)\in C_n^{pure}\times\R$ with $r$ big enough and of a distinguished system of paths with starting point $r$ and endpoints in $\{u_1,...,u_n\}$. The $\Z$-lattice bundle over $C_n^{\uuuu{e}/\{\pm 1\}^n}$ leads in general (with an additional choice) to a {\it Dubrovin-Frobenius manifold structure} on a Zariski open subset of $C_n^{\uuuu{e}/\{\pm 1\}^n}$. Remark \ref{t8.9} (ii) offers one definition of a Dubrovin-Frobenius manifold and references. We do not go further into this big story. \section[Natural coverings of the configuration space] {Natural coverings of the configuration space of the braid group $\Br_n$} \label{s8.1} First we fix some notations and recall some facts from the theory of covering spaces and some facts around the configuration space of the braid group $\Br_n$. After this the Remarks \ref{t8.4} start the discussion of the manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$. \begin{definition}\label{t8.1} Let $n\in \Z_{\geq 2}$.
Define \index{$C_n^{conf},\ C_n^{pure},\ C_n^{univ}$} \index{$D_n^{conf},\ D_n^{pure},\ D_n^{univ}$} \index{$b_n^{conf},\ b_n^{pure},\ b_n^{univ}$} \begin{eqnarray*} D_n^{pure}&:=& \bigcup_{1\leq i<j\leq n} \{\uuuu{u}\in\C^n\,|\, u_i=u_j\}\subset \C^n,\\ &&\textup{the union of the partial diagonals in }\C^n,\\ C_n^{pure}&:=& \C^n-D_n^{pure},\\ C_n^{conf}&:=& C_n^{pure}/S_n \subset \C^n/S_n\\ D_n^{conf}&:=& D_n^{pure}/S_n \subset \C^n/S_n,\quad \textup{so }\C^n/S_n=C_n^{conf}\, \dot\cup\, D_n^{conf},\\ C_n^{univ}&:=&(\textup{the universal covering of } C_n^{pure}\textup{ and }C_n^{conf}),\\ b_n^{pure}&:=& (i,2i,...,ni)\in C_n^{pure},\\ b_n^{conf}&:=&[b_n^{pure}]=b_n^{pure}/S_n,\\ b_n^{univ}&:=&(\textup{the preimage of }b_n^{pure} \textup{ in }C_n^{univ} \textup{ which corresponds}\\ &&\textup{to the trivial path in }C_n^{pure}). \end{eqnarray*} \index{universal covering} $C_n^{conf}$ is called {\it configuration space} of the braid group $\Br_n$. The three covering maps are denoted as follows, \index{$\pr_n^{p,c},\ \pr_n^{u,p},\ \pr_n^{u,c}$} \begin{eqnarray*} \pr_n^{p,c}:C_n^{pure}\to C_n^{conf},&&\\ \pr_n^{u,p}:C_n^{univ}\to C_n^{pure},&&\\ \pr_n^{u,c}=\pr_n^{p,c}\circ\pr_n^{u,p}:C_n^{univ}\to C_n^{conf}.&& \end{eqnarray*} \end{definition} \begin{remarks}\label{t8.2} (i) The permutation group $S_n$ acts from the left on $\C^n$ by the following action for $\sigma\in S_n$, \begin{eqnarray*} \sigma(\uuuu{u})&=& \sigma((u_1,u_2,...,u_n)) :=(u_{\sigma^{-1}(1)},u_{\sigma^{-1}(2)},..., u_{\sigma^{-1}(n)}). \end{eqnarray*} So the entry of $\uuuu{u}$ at the $\sigma^{-1}(j)$-th place is put to the $j$-th place. If $\tau\in S_n$, then $\sigma(\tau(\uuuu{u}))$ has at the $j$-th place the entry $u_{\tau^{-1}(\sigma^{-1}(j))}$ of $\tau(\uuuu{u})$ at the $\sigma^{-1}(j)$-th place, so the entry $u_{(\sigma\circ\tau)^{-1}(j)}$. Therefore this is an action from the left of $S_n$ on $\C^n$. 
Although this is an action from the left, we write in the quotient $\C^n/S_n$ the group on the right. (ii) The quotient $\C^n/S_n$ is again isomorphic to $\C^n$. It can be identified with the set \begin{eqnarray*} \C[x]_n:=\{f(x)=x^n+\sum_{j=1}^nf_jx^{n-j}\in\C[x]\,|\, f=(f_1,...,f_n)\in\C^n\} \end{eqnarray*} of complex monic polynomials of degree $n$. The identification is given by the map $[\uuuu{u}]=[(u_1,...,u_n)]\mapsto \prod_{i=1}^n(x-u_i)$. The covering $C_n^{pure}\to C_n^{conf}$ extends to a branched covering $\C^n\to \C^n/S_n$. It maps each partial diagonal $\{\uuuu{u}\in\C^n\,|\, u_i=u_j\}$ for $1\leq i<j\leq n$ to the same irreducible algebraic hypersurface $D_n^{conf}=D_n^{pure}/S_n\subset \C^n/S_n$. This hypersurface $D_n^{conf}$ and its image $$\C[x]_n^{mult}:=\{f(x)\in\C[x]_n\,|\, f(x) \textup{ has multiple roots}\}$$ \index{$\C[x]_n\, \C[x]_n^{mult},\ \C[x]_n^{reg}$} in $\C[x]_n$ are called {\it discriminants}. \index{discriminant} The configuration space $C_n^{conf}$ is identified with the complement $$\C[x]_n^{reg}:=\{f(x)\in\C[x]_n\,|\, f(x) \textup{ has no multiple roots}\}.$$ (iii) The braid group $\Br_n$ is isomorphic to the fundamental \index{braid group} group $\pi_1(C_n^{conf},b_n^{conf})$ \index{fundamental group} (e.g. \cite[Theorem 1.8]{Bi75}, \cite[1.4.3]{KT08}). A closed loop $\gamma:[0,1]\to C_n^{conf}$ with $\gamma(0)=\gamma(1)=b_n^{conf}$ can be visualized by a braid with $n$ strings. Figure \ref{Fig:8.1} shows five braids with three strings which represent the braids $\sigma_1,\sigma_1^{-1},\sigma_2,\sigma_2^{-1}, \sigma_2^{-1}\sigma_1$. The interval $[0,1]$ is going top down. The starting points and the end points of the strings are $i,2i,...,ni\in\C$ where $\C$ is the complex line in which the entries of the loop live. Recall that in a fundamental group a product of elements is gone through from left to right.
Therefore in the braid $\sigma_2^{-1}\sigma_1$ in Figure \ref{Fig:8.1} $\sigma_2^{-1}$ comes first and $\sigma_1$ comes second. \begin{figure} \includegraphics[width=0.9\textwidth]{pic-8-1.png} \caption[Figure 8.1]{Five braids} \label{Fig:8.1} \end{figure} The relations \begin{eqnarray*} \sigma_j\sigma_{j+1}\sigma_j&=&\sigma_{j+1}\sigma_j\sigma_{j+1} \quad\textup{for }j\in\{1,...,n-2\}\\ \textup{and}\qquad \sigma_j\sigma_k&=&\sigma_k\sigma_j\quad\textup{for }|j-k|>1 \end{eqnarray*} are now obvious. It is true, but not obvious, that they are generating relations in $\Br_n$ for the generators $\sigma_1,...,\sigma_{n-1}$. (iv) $C_n^{conf}$ is a connected complex manifold and especially locally simply connected. Therefore the main results of the theory of covering spaces apply. The {\it Galois correspondence} \index{covering}\index{connected covering}\index{Galois correspondence} for connected coverings tells the following \cite[Theorem 1.38]{Ha01}. Each connected covering $p:(Y,y_0)\to (C_n^{conf},b_n^{conf})$ with base point $y_0\in Y$ with $p(y_0)=b_n^{conf}$ gives rise to an injective group homomorphism $p_*:\pi_1(Y,y_0)\to \pi_1(C_n^{conf},b_n^{conf})$. Vice versa, for each subgroup $U\subset \Br_n$, there is such a connected covering $p:(Y,y_0)\to(C_n^{conf},b_n^{conf})$ with $p_*(\pi_1(Y,y_0))=U$, and it is unique up to isomorphism of connected coverings with base points. This gives a 1--1 correspondence between connected coverings of $C_n^{conf}$ with base points up to isomorphism and subgroups of $\Br_n$. If one forgets the base points, one obtains a 1--1 correspondence between connected coverings and conjugacy classes of subgroups of $\Br_n$. (v) The group of deck transformations \index{deck transformation} of a connected covering $p:(Y,y_0)\to(C_n^{conf},b_n^{conf})$ is the group of automorphisms of $Y$ which map each fiber of $p$ to itself. A deck transformation $f:Y\to Y$ is determined by the single value $f(y_0)\in p^{-1}(b_n^{conf})$.
The group of deck transformations is isomorphic to the quotient $$N(p_*(\pi_1(Y,y_0)))/p_*(\pi_1(Y,y_0)),$$ where $N(p_*(\pi_1(Y,y_0)))\subset \Br_n$ is the largest subgroup of $\Br_n$ in which $p_*(\pi_1(Y,y_0))$ is a normal subgroup. The covering is normal if $N(p_*(\pi_1(Y,y_0)))=\Br_n$. This holds if and only if the group of deck transformations acts transitively on the fiber $p^{-1}(b_n^{conf})$ (and then transitively on each fiber of $p$). For a given subgroup $U\subset \Br_n$, the covering $p:(Y,y_0)\to (C_n^{conf},b_n^{conf})$ can be constructed as the quotient $C_n^{univ}/U$ where $U$ acts from the left as subgroup of the group $\Br_n$, which acts as group of deck transformations of $C_n^{univ}$. (vi) The universal covering \index{universal covering} $\pr_n^{u,c}:C_n^{univ}\to C_n^{conf}$ has trivial fundamental group $\{\id\}\subset \Br_n$ which is normal in $\Br_n$, so it is a normal covering with $\Br_n$ as group of deck transformations. One can identify $C_n^{univ}$ as a set with the following set, \begin{eqnarray*} \bigcup_{b\in C_n^{conf}} \{\textup{homotopy classes of paths in }C_n^{conf} \textup{ from }b_n^{conf}\textup{ to }b\}. \end{eqnarray*} $\Br_n$ acts on this set from the left as follows: a braid $\alpha$ maps a homotopy class $\beta$ to the homotopy class $\alpha\beta$, where one first goes along a loop which represents $\alpha$ and then along a path which represents $\beta$. Remarkably, this gives an action {\it from the left} of $\Br_n$ on $C_n^{univ}$, although in $\Br_n$ in a product of two loops one walks first along the left loop, whereas in a product of two automorphisms of $C_n^{univ}$ the right automorphism is carried out first. (vii) The covering $\pr_n^{p,c}:C_n^{pure}\to C_n^{conf}$ is normal of degree $n!$ with group of deck transformations isomorphic to $S_n$. The fundamental group $\pi_1(C_n^{pure},b_n^{pure})$ embeds into $\Br_n$ as group $\Br_n^{pure}$ of {\it pure braids}. 
\index{pure braid} A pure braid is represented by a closed loop in $C_n^{pure}$ which starts and ends at $b_n^{pure}$. So in a pure braid with $n$ strings, each string ends at the point at which it starts. There is an exact sequence $$\{1\}\to \Br_n^{pure}\to\Br_n\to S_n\to\{1\}$$ of group homomorphisms. $\Br_n^{pure}$ is the kernel of the map $\Br_n\to S_n$, so normal in $\Br_n$, so $N(\Br_n^{pure})=\Br_n$. Here the permutation $\sigma\in S_n$ associated to a braid $\alpha\in \Br_n$ is the one such that for a closed loop in $C_n^{conf}$ which represents $\alpha$, the lift to $C_n^{pure}$ which starts at $b_n^{pure}=(i,2i,...,ni)$ ends at $(\sigma^{-1}(1)i,\sigma^{-1}(2)i,...,\sigma^{-1}(n)i)$. This fits with the action of $S_n$ on $\C^n$ in part (i) of these remarks. (viii) Arnold \cite{Ar68} showed that the universal covering $C_n^{univ}$ is homeomorphic to an open ball in $\R^{2n}$. Kaliman \cite{Ka75} \cite{Ka93} showed that $C_n^{univ}$ is biholomorphic to $\C^2\times M_{n-2}^{univ}$ where $M_{n-2}^{univ}$ is a bounded domain in $\C^{n-2}$ and a Teichm\"uller space. We will need explicitly only the cases $C_2^{univ}\cong\C^2$ and $C_3^{univ}\cong \C^2\times\H$. They are treated together with the actions of $\Br_2$ respectively $\Br_3$ as groups of deck transformations in Theorem \ref{t8.12} ($n=2$) and in Theorem \ref{t9.1} and Remark \ref{t9.2} ($n=3$). (ix) As $C_n^{conf}$ is a connected complex manifold, each connected covering is a connected complex manifold. As $C_n^{conf}$ is an affine algebraic manifold, each finite connected covering is an affine algebraic manifold. This follows from Grothendieck's version of the Riemann existence theorem, relating especially finite \'etale coverings of an affine algebraic manifold \index{affine algebraic manifold} to finite coverings of the underlying complex manifold \cite[XII Th\'eor\`eme 5.1]{Gr61}. \end{remarks} \begin{definition}\label{t8.3} Fix $n\in\Z_{\ge 2}$.
Let $X\neq\emptyset$ be a set on which $\Br_n$ acts transitively from the left, and let $x_0\in X$. The covering of $C_n^{conf}$ with fundamental group the stabilizer $(\Br_n)_{x_0}$ and with a base point $b_n^{x_0}\in C_n^{x_0}$ is denoted by $$\pr_n^{x_0,c}:(C_n^{x_0},b_n^{x_0})\to(C_n^{conf},b_n^{conf}). $$ \index{$C_n^{x_0}$}\index{$b_n^{x_0}$} It is isomorphic to the quotient manifold $C_n^{univ}/(\Br_n)_{x_0}$ (in the quotient, we write the group on the right although the action on $C_n^{univ}$ is from the left). \end{definition} \begin{remarks}\label{t8.4} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\geq 2$ with a triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. Consider the coverings \begin{eqnarray*} \pr_n^{\uuuu{e},c}:(C_n^{\uuuu{e}/\{\pm 1\}^n}, b_n^{\uuuu{e}/\{\pm 1\}^n})\to (C_n^{conf},b_n^{conf}),\\ \pr_n^{S,c}:(C_n^{S/\{\pm 1\}^n}, b_n^{S/\{\pm 1\}^n})\to (C_n^{conf},b_n^{conf}). \end{eqnarray*} \index{$C_n^{\uuuu{e}/\{\pm 1\}^n},\ C_n^{S/\{\pm 1\}^n}$} \index{$b_n^{\uuuu{e}/\{\pm 1\}^n},\ b_n^{S/\{\pm 1\}^n}$} Especially the covering $C_n^{\uuuu{e}/\{\pm 1\}^n}$ will be important. (i) They are the following quotients of the universal covering, \begin{eqnarray*} C_n^{\uuuu{e}/\{\pm 1\}^n}&\cong & C_n^{univ}/(\Br_n)_{\uuuu{e}/\{\pm 1\}^n},\quad \textup{with }b_n^{\uuuu{e}/\{\pm 1\}^n}=[b_n^{univ}],\\ C_n^{S/\{\pm 1\}^n}&\cong & C_n^{univ}/(\Br_n)_{S/\{\pm 1\}^n},\quad \textup{with }b_n^{S/\{\pm 1\}^n}=[b_n^{univ}]. \end{eqnarray*} (ii) Because of Lemma \ref{t3.25} (e) $(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ is a normal subgroup of $(\Br_n)_{S/\{\pm 1\}^n}$.
Therefore there is a normal covering $C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{S/\{\pm 1\}^n}$ with group of deck transformations the quotient group $(\Br_n)_{S/\{\pm 1\}^n}/(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$, and the covering space $C_n^{S/\{\pm 1\}^n}$ is canonically isomorphic to the quotient $$C_n^{\uuuu{e}/\{\pm 1\}^n} / \frac{(\Br_n)_{S/\{\pm 1\}^n}} {(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}}$$ (we write the group on the right although it acts from the left). (iii) Recall from Lemma \ref{t3.25} (e) that the quotient group $(\Br_n)_{S/\{\pm 1\}^n}/(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ is isomorphic to the group $G_\Z^{\BB}/Z((\{\pm 1\}^n)_S)$. Often $G_\Z^{\BB}=G_\Z$. Recall from Lemma \ref{t3.25} (f) that in the case of $(H_\Z,L,\uuuu{e})$ irreducible $Z((\{\pm 1\}^n)_S)=\{\pm \id\}\subset G_\Z$, so the quotient group above is then isomorphic to $G_\Z^{\BB}/\{\pm \id\}\subset G_\Z/\{\pm \id\}$. Often equality holds. (iv) If $I^{(0)}$ is positive definite, then $\www{S}+\www{S}^t$ for $\www{S}\in \SSS^{dist}$ is positive definite with $(\www{S}+\www{S}^t)_{jj}=2$, so $\www{S}_{ij}\in\{0,\pm 1\}$ for $i<j$. Then the sets $R^{(0)}$, $\BB^{dist}$ and $\SSS^{dist}$ are finite, the covering $C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{conf}$ is finite of degree $|\BB^{dist}/\{\pm 1\}^n|$, the covering $C_n^{S/\{\pm 1\}^n}\to C_n^{conf}$ is finite of degree $|\SSS^{dist}/\{\pm 1\}^n|$, and the manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ are affine algebraic. (v) If $I^{(0)}$ is positive semidefinite, then $\www{S}+\www{S}^t$ for $\www{S}\in \SSS^{dist}$ is positive semidefinite with $(\www{S}+\www{S}^t)_{jj}=2$, so $\www{S}_{ij}\in\{0,\pm 1,\pm 2\}$ for $i<j$. Then the set $\SSS^{dist}$ is finite, the covering $C_n^{S/\{\pm 1\}^n}\to C_n^{conf}$ is finite of degree $|\SSS^{dist}/\{\pm 1\}^n|$, and the manifold $C_n^{S/\{\pm 1\}^n}$ is affine algebraic. 
(vi) Often the covering space $C_n^{\uuuu{e}/\{\pm 1\}^n}$ has a natural smooth partial compactification \index{partial compactification} $\oooo{C_n^{\uuuu{e}/\{\pm 1\}^n}}^p$, and the covering map $\pr_n^{\uuuu{e},c}:C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{conf}$ extends to a holomorphic map $\oooo{C_n^{\uuuu{e}/\{\pm 1\}^n}}^p\to \C^n/S_n$. (vii) If $H_\Z$ has rank $n=1$, we have $S=(1)$, and we set \begin{eqnarray*} C_1^{univ}=C_1^{\uuuu{e_1}/\{\pm 1\}}=C_1^{S/\{\pm 1\}} =C_1^{pure}=C_1^{conf}=\C,\\ \Br_1=(\Br_1)_{\uuuu{e_1}/\{\pm 1\}} =(\Br_1)_{S/\{\pm 1\}}=\{1\}. \end{eqnarray*} Then the statements in the parts (i)--(iv) still hold, though in a rather trivial way. The coordinate on $\C$ is here called $u_1$. \end{remarks} The covering $C_n^{x_0}\to C_n^{conf}$ in Definition \ref{t8.3} can and will be described in Lemma \ref{t8.6} in a more explicit way, by gluing manifolds with boundaries, one for each element of the set $X$. The manifolds with boundaries are all copies of $\oooo{F_n}$, which is given in Definition \ref{t8.5}. \begin{definition}\label{t8.5} Fix $n\in\Z_{\ge 2}$. Define for $j\in\{1,...,n-1\}$ \begin{eqnarray*} W_n^{j,+}&:=&\{\uuuu{u}\in C_n^{pure}\,|\, \Imm(u_1)\leq ...\leq \Imm(u_n),\\ && \hspace*{2.5cm} \Imm(u_j)=\Imm(u_{j+1}),\ \Ree(u_j)<\Ree(u_{j+1})\},\\ W_n^{j,-}&:=&\{\uuuu{u}\in C_n^{pure}\,|\, \Imm(u_1)\leq ...\leq \Imm(u_n),\\ && \hspace*{2.5cm} \Imm(u_j)=\Imm(u_{j+1}),\ \Ree(u_j)>\Ree(u_{j+1})\},\\ W_n&:=& \bigcup_{j=1}^{n-1}\bigl(W_n^{j,+}\cup W_n^{j,-}\bigr), \\ F_n&:=& \{\uuuu{u}\in C_n^{pure}\,|\, \Imm(u_1)<...<\Imm(u_n)\}. \end{eqnarray*} \index{$W_n^+,\ W_n^-,\ W_n$}\index{$F_n$} \end{definition} \begin{lemma}\label{t8.6} Fix $n\in\Z_{\geq 2}$. (a) The set $W_n^{j,\varepsilon}\subset C_n^{pure}$ for $\varepsilon\in\{\pm 1\}$ is called {\sf wall} or \index{wall} {\sf $\sigma_j^\varepsilon$-wall}. 
It is closed in $C_n^{pure}$, it has real codimension 1 in $C_n^{pure}$, it has boundary in $C_n^{pure}$ if $n\geq 3$, it is (outside its boundary if $n\geq 3$) smooth and affine linear, it is contractible. If $n\geq 4$ its boundary in $C_n^{pure}$ is only piecewise smooth and affine linear. The set $F_n$ is an open convex polyhedron in $\C^n\supset C_n^{pure}$ and therefore contractible. Its boundary consists of the union $W_n$ of the $2(n-1)$ walls. Two walls $W_n^{j_1,\varepsilon_1}$ and $W_n^{j_2,\varepsilon_2}$ with $j_1\neq j_2$ intersect in real codimension 1. Two walls $W_n^{j,+}$ and $W_n^{j,-}$ do not intersect. (b) The covering $\pr_n^{p,c}:C_n^{pure}\to C_n^{conf}$ restricts to a homeomorphism from $F_n$ to its image in $C_n^{conf}$. The complement of $\pr_n^{p,c}(F_n)$ in $C_n^{conf}$ is $\pr_n^{p,c}(W_n)$. It has real codimension 1 and is the boundary of $\pr_n^{p,c}(F_n)$. The covering $\pr_n^{p,c}$ maps $W_n^{j,+}$ and $W_n^{j,-}$ to the same set. The covering $\pr_n^{p,c}$ is not injective on $W_n^{j,\varepsilon}$ because it maps the disjoint sets $W_n^{j,\varepsilon}\cap W_n^{j_1,+}$ and $W_n^{j,\varepsilon}\cap W_n^{j_1,-}$ for $j_1\neq j$ to the same set. (c) Let $X\neq \emptyset$ be a set on which $\Br_n$ acts transitively from the left, and let $x_0\in X$. The following are basic general facts from covering theory. The group $\Br_n$ acts from the left on the fiber $(\pr_n^{x_0,c})^{-1}(b_n^{conf})$ in the following way. For $\alpha\in Br_n=\pi_1(C_n^{conf},b_n^{conf})$ and $b\in (\pr_n^{x_0,c})^{-1}(b_n^{conf})$ let $\www{\alpha}$ be the lift to $C_n^{x_0}$ with end point $\www{\alpha}(1)=b$ of a loop in $C_n^{conf}$ which represents $\alpha$. Then $\alpha(b):=\www{\alpha}(0)$. Write $U:=(\Br_n)_{x_0}$. 
This action induces the two bijections \begin{eqnarray*} \begin{array}{ccccc} X&\stackrel{1:1}{\longleftarrow} &\Br_n/U & \stackrel{1:1}{\longrightarrow} & (\pr_n^{x_0,c})^{-1}(b_n^{conf})\\ x=\alpha(x_0) & \longmapsfrom & \alpha U & \longmapsto & \alpha(b_n^{x_0}) \end{array} \end{eqnarray*} For $x=\alpha(x_0)$ write $b_n^{x_0,x}:=\alpha(b_n^{x_0}) \in (\pr_n^{x_0,c})^{-1}(b_n^{conf})$ (especially $b_n^{x_0}=b_n^{x_0,x_0}$). So there is a natural bijection between $X$ and the fiber in $C_n^{x_0}$ over $b_n^{conf}$, and $x$ corresponds to $b_n^{x_0,x}$. (d) Let $X$ and $x_0$ be as in part (c). The covering $\pr_n^{x_0,c}:(C_n^{x_0},b_n^{x_0})\to (C_n^{conf},b_n^{conf})$ restricts over the open subset $\pr_n^{p,c}(F_n)$ of $C_n^{conf}$ to an even covering, that means that $(\pr_n^{x_0,c})^{-1}(\pr_n^{p,c}(F_n))$ is a union of disjoint open subsets of $C_n^{x_0}$ each of which is mapped homeomorphically to $\pr_n^{p,c}(F_n)$. The components of $(\pr_n^{x_0,c})^{-1}(\pr_n^{p,c}(F_n))$ are called {\sf Stokes regions}. \index{Stokes region} Each contains one element of the fiber $(\pr_n^{x_0,c})^{-1}(b_n^{conf})$. For $x\in X$ let $F_n^{x_0,x}$ be the component which contains the point $b_n^{x_0,x}$. The complement $C_n^{x_0}-(\pr_n^{x_0,c})^{-1}(\pr_n^{p,c}(F_n))$ is the preimage of the walls $\pr_n^{p,c}(W_n)$ in $C_n^{conf}$, it has everywhere real codimension 1. It is the boundary of $(\pr_n^{x_0,c})^{-1}(\pr_n^{p,c}(F_n))$. (e) In $C_n^{conf}$ the images of the walls $W_n^{j,+}$ and $W_n^{j,-}$ coincide, but this image is a smooth oriented real hypersurface and has two sides, the $W_n^{j,+}$-side and the $W_n^{j,-}$-side. The braid $\sigma_j$ has a representative by a loop which goes once through this hypersurface, from the $W_n^{j,+}$-side to the $W_n^{j,-}$-side. (f) Let $X$ and $x_0$ be as in the parts (c) and (d). 
The covering space $C_n^{x_0}$ can be obtained by gluing copies of the closure $\oooo{F_n}$ in $C_n^{pure}$ in the following way along their boundaries. More precisely, it is isomorphic to the quotient of the disjoint union $\bigcup_{x\in X}(\oooo{F_n}\times \{x\})$ by the equivalence relation which is generated by the following equivalences. \begin{eqnarray*} (\uuuu{u},x)&\sim& ((u_1,...,u_{j-1},u_{j+1},u_j,u_{j+2},..., u_n),\sigma_j^{-1}(x)) \quad \textup{for }x\in X\\ && \textup{ and }\uuuu{u}\in W_n^{j,+} \textup{ (and thus }(u_1,...,u_{j+1},u_j,...,u_n)\in W_n^{j,-}\textup{)}. \end{eqnarray*} So the walls $W_n^{j,+}\times\{x\}$ and $W_n^{j,-}\times \{\sigma_j^{-1}(x)\}$ are identified. The isomorphism maps $F_n\times\{x\}$ to $F_n^{x_0,x}$. \end{lemma} {\bf Proof:} (a) Trivial. (b) Trivial. (c) The bijection $\Br_n/U\to X$, $\alpha U\mapsto \alpha(x_0)$, is elementary group theory. The group action from the left of $\Br_n$ on the fiber $(\pr_n^{x_0,c})^{-1}(b_n^{conf})$ is described for example in \cite[page 65, section 1.3]{Ha01}. Here the image $\alpha(b)$ of the {\it end} point $b=\www{\alpha}(1)$ under $\alpha$ is the {\it starting} point $\www{\alpha}(0)$, because composition of paths is written from left to right, whereas the action of $\Br_n$ on the fiber $(\pr_n^{x_0,c})^{-1}(b_n^{conf})$ shall be an action from the left. The stabilizer of the base point $b_n^{x_0}$ is by construction of the covering the group $U=(\Br_n)_{x_0}$. This gives the second bijection $\Br_n/U\to (\pr_n^{x_0,c})^{-1}(b_n^{conf})$, $\alpha U\mapsto \alpha(b_n^{x_0})$. (d) This follows from the shape of the subset $\pr_n^{p,c}(F_n)$ in $C_n^{conf}$, from the definition of a covering of $C_n^{conf}$ and from part (c). (e) Compare the description of the elementary braid $\sigma_j$ in Remark \ref{t8.2} (iii) and the two pictures in Figure \ref{Fig:8.2}.
\begin{figure} \includegraphics[width=1.0\textwidth]{pic-8-2.png} \caption[Figure 8.2]{Crossing a wall with an elementary braid} \label{Fig:8.2} \end{figure} The path $[0,1]\to C_n^{pure}$, $t\mapsto \uuuu{u}(t)$, which represents $\sigma_j$ is chosen such that $u_1(t),...,u_{j-1}(t),u_{j+2}(t),...,u_n(t)$ are constant and $u_j(t)$ and $u_{j+1}(t)$ turn clockwise by a half-turn around their constant center $\frac{1}{2}(u_j(t)+u_{j+1}(t))$. (f) This follows from the parts (c), (d) and (e). \hfill$\Box$ \begin{remarks}\label{t8.7} Let $X$ and $x_0$ be as in the parts (c), (d) and (f) in Lemma \ref{t8.6}. (i) The union \begin{eqnarray*} W_n^{sing}:= \bigcup_{j_1,j_2\in\{1,...,n-1\},j_1\neq j_2} \bigcup_{\varepsilon_1,\varepsilon_2\in\{\pm 1\}} W_n^{j_1,\varepsilon_1}\cap W_n^{j_2,\varepsilon_2} \subset W_n\subset C_n^{pure} \end{eqnarray*} has real codimension 2 in $C_n^{pure}$. Also the image $\pr_n^{p,c}(W_n^{sing})\subset C_n^{conf}$ has real codimension 2 in $C_n^{conf}$. (ii) Choose a sequence $(\sigma_{j_a}^{\varepsilon_a})_{a\in\{1,...,l\}}$ of elementary braids for some $l\in\N$, $j_a\in\{1,...,n-1\}$, $\varepsilon_a\in\{\pm 1\}$. Choose a closed path $p$ in $C_n^{conf}$ with starting point and end point $b_n^{conf}$ which avoids $\pr_n^{p,c}(W_n^{sing})$ and which passes $l$ times through $\pr_n^{p,c}(W_n)$, first from the $W_n^{j_1,\varepsilon_1}$-side through $\pr_n^{p,c}(W_n^{j_1,\varepsilon_1})$, then from the $W_n^{j_2,\varepsilon_2}$-side through $\pr_n^{p,c}(W_n^{j_2,\varepsilon_2})$, and so on. The path $p$ represents the braid $\alpha=\sigma_{j_1}^{\varepsilon_1}\sigma_{j_2}^{\varepsilon_2} ...\sigma_{j_l}^{\varepsilon_l}$.
The lift of $p$ to $C_n^{x_0}$ which starts at $b_n^{x_0,x_0}$ starts in the Stokes region $F_n^{x_0,x_0}$ and goes through the wall $W_n^{j_1,\varepsilon_1}$ of it to the Stokes region $F_n^{x_0,\sigma_{j_1}^{-\varepsilon_1}(x_0)}$, then through the wall $W_n^{j_2,\varepsilon_2}$ of this Stokes region to the Stokes region $F_n^{x_0,\sigma_{j_2}^{-\varepsilon_2} \sigma_{j_1}^{-\varepsilon_1}(x_0)}$, and so on. It ends in the Stokes region $F_n^{x_0,\alpha^{-1}(x_0)}$ at the point $b_n^{x_0,\alpha^{-1}(x_0)}$. (iii) A homotopy from a path as in part (ii) for $\sigma_j\sigma_{j+1}\sigma_j$ to a path as in part (ii) for $\sigma_{j+1}\sigma_j\sigma_{j+1}$ contains at least one path which contains a point in $\pr_n^{p,c}(W_n^{j,+}\cap W_n^{j+1,+})$. A preimage in $C_n^{pure}$ of this point satisfies $\Imm(u_j)=\Imm(u_{j+1})=\Imm(u_{j+2})$. (iv) If the covering $\pr_n^{x_0,c}:C_n^{x_0}\to C_n^{conf}$ is normal, its deck transformation group acts transitively on the set of Stokes regions $\{F_n^{x_0,x}\,|\, x\in X\}$. Then $F_n^{x_0,x_0}$ is a fundamental domain in $C_n^{x_0}$ for the action of the deck transformation group. \end{remarks} \section{Semisimple F-manifolds} \label{s8.2} Each covering space of $C_n^{conf}$ is a semisimple F-manifold with Euler field and with empty Maxwell stratum. We explain these notions here. \begin{definition}\label{t8.8} \cite{HM98}\cite[Definition 2.8]{He02} (a) A (holomorphic) \index{F-manifold} {\it F-manifold} is a tuple $({\MM},\circ,e)$ \index{$(M,\circ,e,E)$} where ${\MM}$ is a complex manifold, $\circ$ is a commutative and associative multiplication on the holomorphic tangent bundle \index{multiplication on the holomorphic tangent bundle} $T{\MM}$ of ${\MM}$ which satisfies the integrability condition \begin{eqnarray*} \Lie_{X\circ Y}(\circ)&=& Y\circ \Lie_X(\circ)+ X\circ\Lie_Y(\circ) \end{eqnarray*} for local holomorphic vector fields $X$ and $Y$, and $e$ is a global holomorphic vector field with $e\,\circ=\id$.
It is called the {\it unit field}. \index{unit field} (b) An {\it Euler field} \index{Euler field} $E$ in an F-manifold $({\MM},\circ,e)$ is a global holomorphic vector field with $\Lie_E(\circ)=\circ$. (c) An F-manifold is semisimple \index{semisimple F-manifold} at a point $t\in {\MM}$ if the algebra $(T_t{\MM},\circ_t,e_t)$ is semisimple, that means, it splits canonically into a sum of 1-dimensional algebras, so $T_t{\MM}=\bigoplus_{j=1}^n\C\cdot e_{t,j}$ with $e_{t,i}\circ e_{t,j}=\delta_{ij}e_{t,i}$ (and automatically $e_t=\sum_{j=1}^n e_{t,j}$). \index{semisimple multiplication} \end{definition} \begin{remarks}\label{t8.9} (i) The theory of F-manifolds is developed in \cite[chapters 1--5]{He02}. (ii) The condition to be semisimple is an open condition. If an F-manifold is semisimple at one point, it is generically semisimple. More precisely, the set $\KK_3\subset {\MM}$ of points where it is not semisimple is then either empty or an analytic hypersurface \cite[Proposition 2.6]{He02}. The set $\KK_3$ is called {\it caustic}. \index{caustic}\index{$\KK_3$: caustic} (iii) A complex manifold with a semisimple (commutative and associative) multiplication on the holomorphic tangent bundle has locally a basis $e_1,...,e_n$ of holomorphic vector fields which are the {\it partial units}, so $e_i\circ e_j=\delta_{ij}e_i$. It is unique up to indexing. Here the integrability condition of an F-manifold says $[e_i,e_j]=0$, so the partial units are locally coordinate vector fields \cite[Theorem 3.2]{He02}. The corresponding coordinates are unique up to addition of constants. The choice of an Euler field $E$ makes them unique, because the eigenvalues $u_1,...,u_n$ of the endomorphism $E\,\circ$ on the holomorphic tangent bundle are such local coordinates with $e_1=\frac{\paa}{\paa u_1},..., e_n=\frac{\paa}{\paa u_n}$ and $E=\sum_{j=1}^n u_je_j$.
These local coordinates are unique up to indexing in the case of a semisimple F-manifold with Euler field and are called {\it canonical coordinates}. \index{canonical coordinates} (iv) Let $({\MM},\circ,e,E)$ be a generically semisimple F-manifold with Euler field. Besides the caustic $\KK_3$ there is often a second analytic hypersurface $\KK_2$, the {\it Maxwell stratum}. \index{Maxwell stratum}\index{$\KK_2$: Maxwell stratum} It is the closure of the set of points $t\in {\MM}$ such that $(T_t{\MM},\circ_t,e_t)$ is semisimple (so locally canonical coordinates exist), but at least two eigenvalues of $E\, \circ$ coincide. Usually $\KK_3$ and $\KK_2$ intersect. The map \begin{eqnarray*} \LL:{\MM}\to \C[x]_n,\quad t\mapsto \textup{ (the characteristic polynomial of }E_t\,\circ), \end{eqnarray*} is holomorphic. In singularity theory it is called {\it Lyashko-Looijenga map}. \index{Lyashko-Looijenga map}\index{$\LL$} $\KK_3\cup\KK_2$ is mapped to the discriminant $\C[x]_n^{mult}$; the complement ${\MM}-(\KK_3\cup\KK_2)$ is mapped locally biholomorphically to the complement $\C[x]_n^{reg}\cong C_n^{conf}$. Near a generic point of $\KK_2$ there are local coordinates such that two of them take the same value at points of $\KK_2$. Therefore $\LL$ is a branched covering of degree two near a generic point of $\KK_2$. The index 3 in $\KK_3$ stems from the fact that in a family of important cases $\LL$ is a branched covering of order 3 near generic points of $\KK_3$. (v) Conversely, a locally biholomorphic map $\LL:{\MM}\to\C[x]_n^{reg}\cong C_n^{conf}$ induces on ${\MM}$ the structure of a semisimple F-manifold with Euler field and empty Maxwell stratum. (vi) Define \index{$\C[x]_n^{mult,reg}$} \begin{eqnarray*} \C[x]_n^{mult,reg}:=\{f\in \C[x]_n^{mult}\,|\, f\textup{ has precisely }n-1\textup{ different roots}\}. \end{eqnarray*} It is an open part of $\C[x]_n^{mult}$.
The complement is a complex hypersurface in $\C[x]_n^{mult}$ (which itself is a complex hypersurface in $\C[x]_n$). Consider the following situation: a locally biholomorphic map $\LL:{\MM}\to \C[x]_n^{reg}$, a point $p\in \C[x]_n^{mult,reg}$, a neighborhood $U\subset \C[x]_n$ of $p$ with $U\cap \C[x]_n^{mult}\subset\C[x]_n^{mult,reg}$, $U':=U-\C[x]_n^{mult}$, a component $\www{U'}$ of $\LL^{-1}(U')$ such that the restriction $\LL:\www{U'}\to U'$ is an $m$-fold covering for some $m\in\N$. Then $\www{U'}$ has a partial compactification $\www{U}$ \index{partial compactification} such that $\LL$ extends to a branched covering $\www{\LL}:\www{U}\to U$, which is branched of order $m$ over $U-U'=U\cap \C[x]_n^{mult}$. If $m\geq 2$, the F-manifold structure extends from $\www{U'}$ to $\www{U}$. If $m=2$, then $\www{U}-\www{U'}$ is the Maxwell stratum in $\www{U}$. If $m\geq 3$, then $\www{U}-\www{U'}$ is the caustic in $\www{U}$. See the Examples \ref{t8.10}. (vii) A semisimple F-manifold with Euler field is locally a rather trivial object. More interesting is a generically semisimple F-manifold near points of the caustic $\KK_3$. (viii) The notion of a {\it Dubrovin-Frobenius manifold} \index{Dubrovin-Frobenius manifold}\index{Frobenius manifold} is much richer. It is an F-manifold $({\MM},\circ,e,E)$ with Euler field together with a flat holomorphic metric $g$ on the holomorphic tangent bundle such that $e$ is flat (with respect to the Levi-Civita connection of $g$), $g$ is {\it multiplication invariant}, that means \begin{eqnarray*} g(X\circ Y,Z)=g(X,Y\circ Z) \end{eqnarray*} for local holomorphic vector fields $X,Y,Z$, and $$\Lie_E(g)=(2-d)g \quad\textup{for some}\quad d\in\C.$$ It was first defined by Dubrovin \cite{Du92}\cite{Du96}. To see the equivalence of the definition above with the definition in \cite{Du92}\cite{Du96} one needs \cite[Theorem 2.15 and Lemma 2.16]{He02}.
Dubrovin-Frobenius manifolds arise in singularity theory, in quantum cohomology, in mirror symmetry and in integrable systems. They are all highly transcendental and interesting objects. The construction in singularity theory of a metric $g$ which makes a semisimple F-manifold with Euler field into a Dubrovin-Frobenius manifold uses additional structure above the F-manifold. One version of this additional structure is presented in section \ref{s8.5}. \end{remarks} \begin{examples}\label{t8.10} (i) Of course, $\C^n$ with coordinates $u_1,...,u_n$ is a semisimple F-manifold with partial units $e_i=\frac{\paa}{\paa u_i}$ and Euler field $E=\sum_{i=1}^n u_ie_i$. It is called of type $A_1^n$. The Maxwell stratum is the union $D_n^{pure}$ of partial diagonals. (ii) For $m\in\Z_{\geq 2}$ the manifold ${\MM}=\C^2$ with coordinates $z_1,z_2$ becomes a generically semisimple F-manifold with Euler field in the following way \cite[Theorem 4.7]{He02}. It is called of type $I_2(m)$. \index{$I_2(m)$} Denote by $\paa_1:=\frac{\paa}{\paa z_1}$ and $\paa_2:=\frac{\paa}{\paa z_2}$ the coordinate vector fields and define the unit field, the multiplication $\circ$ and the Euler field $E$ as follows, \begin{eqnarray*} e&:=& \paa_1,\quad\textup{ so }\paa_1\,\circ =\id \textup{ on the holomorphic tangent bundle},\\ \paa_2\circ\paa_2&:=& z_2^{m-2}\paa_1,\\ E&:=& z_1\paa_1+\frac{2}{m}z_2\paa_2. \end{eqnarray*} If $m=2$ it is everywhere semisimple, and the Maxwell stratum is $\KK_2=\{(z_1,z_2)\in {\MM}\,|\, z_2=0\}$. If $m\geq 3$ it is generically semisimple, and the caustic is $\KK_3=\{(z_1,z_2)\in {\MM}\,|\, z_2=0\}$. For $m=2$ there are global canonical coordinates, for even $m\geq 3$ there are global canonical coordinates on ${\MM}-\KK_3$, for odd $m\geq 3$ there are local canonical coordinates on ${\MM}-\KK_3$, which in fact are globally 2-valued. In all cases they are \begin{eqnarray*} u_{1/2}=z_1\pm \frac{2}{m}z_2^{m/2}. 
\end{eqnarray*} The partial units are \begin{eqnarray*} e_{1/2}=\frac{1}{2}e\pm \frac{1}{2} z_2^{1-\frac{m}{2}} \partial_2. \end{eqnarray*} The Lyashko-Looijenga map \begin{eqnarray*} \LL:{\MM}\to \C[x]_2,&& (z_1,z_2)\mapsto (-u_1-u_2,u_1u_2)= (-2z_1,z_1^2-\frac{4}{m^2}z_2^m)\\ && \cong x^2+(-u_1-u_2)x+u_1u_2=(x-u_1)(x-u_2), \end{eqnarray*} restricts to an $m$-fold covering $\LL:{\MM}-\{z_2=0\}\to \C[x]_2^{reg}$, it maps $\{z_2=0\}$ to $\C[x]_2^{mult}$, and it is branched of order $m$ along $\{z_2=0\}$. (iii) The type $I_2(2)$ is the type $A_1^2$, the type $I_2(3)$ is also called type $A_2$. (iv) Theorem 4.7 in \cite{He02} also says that each 2-dimensional irreducible germ of a generically semisimple F-manifold with Euler field is isomorphic to the germ $(({\MM},t^0),\circ,e,E)$ in the F-manifold ${\MM}=\C^2$ of type $I_2(m)$ for some $t^0=(z_1^0,0)\in\KK_3$ and some $m\in\Z_{\geq 3}$. (v) If $({\MM}^{(i)},\circ^{(i)},e^{(i)},E^{(i)})$ for $i\in\{1,2\}$ are F-manifolds with Euler fields, their product becomes an F-manifold $({\MM}^{(1)}\times {\MM}^{(2)},\circ^{(1)}\times \circ^{(2)}, e^{(1)}+e^{(2)},E^{(1)}+E^{(2)})$ in the natural way, with Maxwell stratum $\KK_2^{(1)}\times {\MM}^{(2)}\, \cup\, {\MM}^{(1)}\times \KK_2^{(2)}$ and caustic $\KK_3^{(1)}\times {\MM}^{(2)}\, \cup\, {\MM}^{(1)}\times \KK_3^{(2)}$. For example, one obtains for $m\geq 2$ and $n\geq 3$ the generically semisimple F-manifold of type $I_2(m)A_1^{n-2}$ with Euler field on $\C^2\times\C^{n-2}$. \end{examples} \section{Reducible cases and rank 2 cases} \label{s8.3} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\geq 2$ with a triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. First we state a result on $C_n^{\uuuu{e}/\{\pm 1\}^n}$ in the case when $(H_\Z,L,\uuuu{e})$ is reducible. Then we treat the rank 2 cases.
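The combinatorics of shuffles which underlies the reducible case can be made concrete in a small computation: a distinguished basis up to signs for a direct sum is an interleaving of distinguished bases up to signs for the two summands, and there are $\binom{n}{n_1}$ such interleavings. The following Python sketch enumerates them; the helper {\tt shuffles} and the string labels are only illustrative and not part of the text.

```python
from itertools import combinations
from math import comb

def shuffles(x1, x2):
    """All interleavings of the tuples x1 and x2 which preserve
    the internal order of each tuple (the shuffles of x1 and x2)."""
    n, n1 = len(x1) + len(x2), len(x1)
    result = []
    for pos in combinations(range(n), n1):  # positions receiving the entries of x1
        it1, it2 = iter(x1), iter(x2)
        result.append(tuple(next(it1) if i in pos else next(it2)
                            for i in range(n)))
    return result

# n = 3, n1 = 2: the comb(3, 2) = 3 shuffles of (e1, e2) with (e3,)
print(shuffles(("e1", "e2"), ("e3",)))
```

For tuples of lengths $n_1$ and $n_2=n-n_1$ this returns exactly $\binom{n}{n_1}$ tuples, which is the count of distinguished bases up to signs with given projections appearing in the proof of the next theorem.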
\begin{theorem}\label{t8.11} Let $(H_\Z,L,\uuuu{e})= (H_{\Z,1},L_1,\uuuu{e}^{(1)}) \oplus (H_{\Z,2},L_2,\uuuu{e}^{(2)})$ be reducible with $\uuuu{e}^{(1)}=(e_1,e_2,...,e_{n_1})$ and $\uuuu{e}^{(2)}=(e_{n_1+1},...,e_n)$, so $S_{ij}=0$ for $i\leq n_1<j$. Write $n_2:=n-n_1$. There is a natural embedding \begin{eqnarray*} C_n^{\uuuu{e}/\{\pm 1\}^n}\hookrightarrow C_{n_1}^{\uuuu{e}^{(1)}/\{\pm 1\}^{n_1}}\times C_{n_2}^{\uuuu{e}^{(2)}/\{\pm 1\}^{n_2}}. \end{eqnarray*} It is an embedding of semisimple F-manifolds with Euler fields. The complement is a complex hypersurface. This hypersurface is the Maxwell stratum of the semisimple F-manifold with Euler field on the right hand side. \end{theorem} {\bf Proof:} In order to write down the embedding concretely, we will make use of the construction of the three manifolds by the gluing of Stokes regions in Lemma \ref{t8.6} (f). Here \begin{eqnarray*} x_0&:=&\uuuu{e}/\{\pm 1\}^n\in X:=\Br_n(x_0)=\BB^{dist}/\{\pm 1\}^n,\\ x_0^{(1)}&:=& \uuuu{e}^{(1)}/\{\pm 1\}^{n_1}\in X^{(1)}:=\Br_{n_1}(x_0^{(1)})=\BB^{dist,(1)}/\{\pm 1\}^{n_1},\\ x_0^{(2)}&:=& \uuuu{e}^{(2)}/\{\pm 1\}^{n_2}\in X^{(2)}:=\Br_{n_2}(x_0^{(2)})=\BB^{dist,(2)}/\{\pm 1\}^{n_2}, \end{eqnarray*} \begin{eqnarray*} C_n^{x_0}&=& \Bigl(\bigcup_{x\in X}\oooo{F_n} \times \{x\}\Bigr)_\sim,\\ C_{n_1}^{x_0^{(1)}}&=& \Bigl(\bigcup_{x^{(1)}\in X^{(1)}} \oooo{F_{n_1}}\times \{x^{(1)}\}\Bigr)_{\sim^{(1)}},\\ C_{n_2}^{x_0^{(2)}}&=& \Bigl(\bigcup_{x^{(2)}\in X^{(2)}} \oooo{F_{n_2}}\times \{x^{(2)}\}\Bigr)_{\sim^{(2)}}. \end{eqnarray*} For any $x=(x_1,...,x_n)\in X$ there are unique indices $i_1,...,i_{n_1}$ and $j_1,...,j_{n_2}$ with \begin{eqnarray*} 1\leq i_1<...<i_{n_1}\leq n,\quad 1\leq j_1<...<j_{n_2}\leq n,\\ \{i_1,...,i_{n_1},j_1,...,j_{n_2}\} =\{1,2,...,n\},\\ x^{(1)}:=(x_{i_1},x_{i_2},...,x_{i_{n_1}})\in X^{(1)},\ x^{(2)}:=(x_{j_1},x_{j_2},...,x_{j_{n_2}})\in X^{(2)}.
\end{eqnarray*} In words, the distinguished basis $x$ up to signs for $(H_\Z,L,\uuuu{e})$ is a shuffle of the distinguished basis $x^{(1)}$ up to signs for $(H_{\Z,1},L_1,\uuuu{e}^{(1)})$ and the distinguished basis $x^{(2)}$ up to signs for $(H_{\Z,2},L_2,\uuuu{e}^{(2)})$. Here $x^{(1)}$ and $x^{(2)}$ are unique. A point $(\uuuu{u},x)\in \oooo{F_n}\times\{x\}$ is mapped to the point \begin{eqnarray*} (\uuuu{u}^{(1)},x^{(1)};\uuuu{u}^{(2)},x^{(2)}) &:=& ((u_{i_1},u_{i_2},...,u_{i_{n_1}}),x^{(1)}; (u_{j_1},u_{j_2},...,u_{j_{n_2}}),x^{(2)})\\ &\in& (\oooo{F_{n_1}}\times\{x^{(1)}\})\times (\oooo{F_{n_2}}\times \{x^{(2)}\}). \end{eqnarray*} The following observations (1), (2) and (3) are crucial. (1) Given $x^{(1)}\in X^{(1)}$ and $x^{(2)}\in X^{(2)}$, there are $\binom{n}{n_1}$ distinguished bases $y\in X$ up to signs with $y^{(1)}=x^{(1)}$ and $y^{(2)}=x^{(2)}$. They are all the tuples obtained by shuffling $x^{(1)}$ and $x^{(2)}$. (2) For $x^{(1)}\in X^{(1)}$ and $x^{(2)}\in X^{(2)}$ the map \begin{eqnarray*} \dot\bigcup_{y\in X:\, y^{(1)}=x^{(1)},y^{(2)}=x^{(2)}} \oooo{F_n}\times\{y\}\to (\oooo{F_{n_1}}\times\{x^{(1)}\})\times (\oooo{F_{n_2}}\times\{x^{(2)}\}) \end{eqnarray*} is almost surjective and almost injective. More precisely, the following two points (i) and (ii) hold. (i) The map \begin{eqnarray*} \left(\dot\bigcup_{y\in X:\, y^{(1)}=x^{(1)},y^{(2)}=x^{(2)}} \oooo{F_n}\times\{y\}\right)_\sim\to (\oooo{F_{n_1}}\times\{x^{(1)}\})\times (\oooo{F_{n_2}}\times\{x^{(2)}\}) \end{eqnarray*} is well defined and injective because of the following. The restriction to $\dot\bigcup_{y\in X:\, y^{(1)}=x^{(1)},y^{(2)}=x^{(2)}} F_n\times\{y\}$ is injective, anyway. Consider $\uuuu{u}\in W_n^{j,+}$ and $y\in X$ with $x^{(1)}=y^{(1)}=(\sigma_j^{-1}y)^{(1)}$, $x^{(2)}=y^{(2)}=(\sigma_j^{-1}y)^{(2)}$. Then $\Imm(u_j)=\Imm(u_{j+1})$, and $(\uuuu{u},y)$ and $((u_1,...,u_{j+1},u_j,...,u_n),\sigma_j^{-1}(y))$ have the same image.
(ii) Consider any two points $\uuuu{u}^{(1)}\in \oooo{F_{n_1}}$ and $\uuuu{u}^{(2)}\in \oooo{F_{n_2}}$ such that the entries of $\uuuu{u}^{(1)}$ are pairwise different from the entries of $\uuuu{u}^{(2)}$. There are indices $\www{i}_1,...,\www{i}_{n_1}$ and $\www{j}_1,...,\www{j}_{n_2}$ with \begin{eqnarray*} 1\leq \www{i}_1<...<\www{i}_{n_1}\leq n,\quad 1\leq \www{j}_1<...<\www{j}_{n_2}\leq n,\\ \{\www{i}_1,...,\www{i}_{n_1},\www{j}_1,...,\www{j}_{n_2}\}=\{1,...,n\}, \end{eqnarray*} such that the tuple $\uuuu{u}\in\C^n$ which is defined by \begin{eqnarray*} \uuuu{u}^{(1)}=(u_{\www{i}_1},...,u_{\www{i}_{n_1}}),\quad \uuuu{u}^{(2)}=(u_{\www{j}_1},...,u_{\www{j}_{n_2}}), \end{eqnarray*} satisfies \begin{eqnarray*} \Imm(u_1)\leq \Imm(u_2)\leq ...\leq \Imm(u_n). \end{eqnarray*} The tuple $y=(y_1,...,y_n)$ which is defined by \begin{eqnarray*} x^{(1)}=(y_{\www{i}_1},y_{\www{i}_2},...,y_{\www{i}_{n_1}}),\quad x^{(2)}=(y_{\www{j}_1},y_{\www{j}_2},...,y_{\www{j}_{n_2}}), \end{eqnarray*} is in $X$ and is obtained by shuffling of $x^{(1)}$ and $x^{(2)}$. The pair $(\uuuu{u},y)$ is mapped to the tuple $(\uuuu{u}^{(1)},x^{(1)};\uuuu{u}^{(2)},x^{(2)})$. Therefore the only points $(\uuuu{u}^{(1)},x^{(1)};\uuuu{u}^{(2)},x^{(2)}) \in (\oooo{F_{n_1}}\times \{x^{(1)}\})\times (\oooo{F_{n_2}}\times\{x^{(2)}\})$ which are not met by the map in part (i) are those points where some entry of $\uuuu{u}^{(1)}$ coincides with some entry of $\uuuu{u}^{(2)}$. In the semisimple F-manifold $(F_{n_1}\times \{x^{(1)}\})\times (F_{n_2}\times\{x^{(2)}\})$ with Euler field these points form the Maxwell stratum. (3) (i) The map \begin{eqnarray*} \dot\bigcup_{x\in X}\oooo{F_n}\times\{x\}\to \Bigl(\dot\bigcup_{x^{(1)}\in X^{(1)}} \oooo{F_{n_1}}\times\{x^{(1)}\}\Bigr)\times \Bigl(\dot\bigcup_{x^{(2)}\in X^{(2)}} \oooo{F_{n_2}}\times\{x^{(2)}\}\Bigr) \end{eqnarray*} is well defined (and almost surjective and almost injective). This follows from (2)(i) and (ii). 
(ii) {\bf Claim:} The map in (3)(i) induces a well defined injective map on the quotients \begin{eqnarray*} &&\Bigl(\dot\bigcup_{x\in X}\oooo{F_n}\times\{x\}\Bigr)_{\sim} \to\\ &&\Bigl(\dot\bigcup_{x^{(1)}\in X^{(1)}} \oooo{F_{n_1}}\times\{x^{(1)}\}\Bigr)_{\sim^{(1)}}\times \Bigl(\dot\bigcup_{x^{(2)}\in X^{(2)}} \oooo{F_{n_2}}\times\{x^{(2)}\}\Bigr)_{\sim^{(2)}}. \end{eqnarray*} Proof of the Claim: The restriction to $\dot\bigcup_{x\in X}F_n\times\{x\}$ is well defined and injective, anyway. Consider $\uuuu{u}\in W_n^{j,+}$ and $x\in X$. If $(\sigma_j^{-1}(x))^{(1)}=x^{(1)}$ and $(\sigma_j^{-1}(x))^{(2)}=x^{(2)}$, we are in the case treated in (2)(i). If not, then either $x_j$ and $x_{j+1}$ are both in $H_{\Z,1}/\{\pm 1\}^{n_1}$ or $x_j$ and $x_{j+1}$ are both in $H_{\Z,2}/\{\pm 1\}^{n_2}$. In the first case, $(\uuuu{u}^{(1)},x^{(1)})\sim^{(1)} ((u_1,...,u_{j+1},u_j,...,u_n)^{(1)}, (\sigma_j^{-1}(x))^{(1)})$; in the second case, $(\uuuu{u}^{(2)},x^{(2)})\sim^{(2)} ((u_1,...,u_{j+1},u_j,...,u_n)^{(2)}, (\sigma_j^{-1}(x))^{(2)})$. This proves the Claim in (3)(ii). The observations (1)--(3) show that there is a natural open embedding \begin{eqnarray*} C_n^{x_0}\hookrightarrow C_{n_1}^{x_0^{(1)}} \times C_{n_2}^{x_0^{(2)}} \end{eqnarray*} and that the complement is a hypersurface, which is the Maxwell stratum of $C_{n_1}^{x_0^{(1)}} \times C_{n_2}^{x_0^{(2)}}$ as a semisimple F-manifold with Euler field. \hfill$\Box$ \bigskip Now we come to the rank 2 cases. The parts (a)--(c) of the following theorem treat $C_2^{pure}$, $C_2^{conf}$ and $C_2^{univ}$ as semisimple F-manifolds with Euler fields; part (d) considers $C_2^{\uuuu{e}/\{\pm 1\}^2}$ for a unimodular bilinear lattice of rank 2 with a triangular basis. \begin{theorem}\label{t8.12} (a) $C_2^{pure}$ is isomorphic to the semisimple F-manifold with Euler field $I_2(2)$ in the Examples \ref{t8.10}, minus its Maxwell stratum.
In formulas: \begin{eqnarray*} C_2^{pure}=\{(u_1,u_2)\in\C^2\,|\, u_1\neq u_2\} &\stackrel{\cong}{\longrightarrow}& \C\times \C^*,\\ (u_1,u_2)&\mapsto& (z_1,z_2) =(\frac{u_1+u_2}{2},\frac{u_1-u_2}{2}). \end{eqnarray*} Therefore $C_2^{pure}$ extends to an F-manifold on $\C^2$ with Maxwell stratum $\{(u_1,u_2)\in \C^2\,|\, u_1=u_2\}$. Multiplication $\circ$, unit field $e$, Euler field $E$ and partial units $e_1$ and $e_2$ are explicitly as follows, \begin{eqnarray*} u_{1/2}=z_1\pm z_2,\quad e_i=\frac{\paa}{\paa u_i}, \quad e_i\circ e_j=\delta_{ij}e_i,\\ e=e_1+e_2=\frac{\paa}{\paa z_1}, \quad\textup{ so }e\,\circ =\id,\\ \frac{\paa}{\paa z_2}=e_1-e_2,\quad \frac{\paa}{\paa z_2}\circ \frac{\paa}{\paa z_2}=e,\\ E=u_1e_1+u_2e_2=z_1\frac{\paa}{\paa z_1} +z_2\frac{\paa}{\paa z_2}. \end{eqnarray*} (b) The covering $C_2^{pure}\to C_2^{conf}$ has degree 2. In the coordinates $(z_1,z_2)$ on $C_2^{pure}$ it extends to the branched covering \begin{eqnarray*} \C^2\to \C^2,\quad (z_1,z_2) \mapsto (\www{z}_1,\www{z}_2) = (z_1,z_2^2). \end{eqnarray*} On $C_2^{conf}\cong \C\times\C^*$ with coordinates $(\www{z}_1,\www{z}_2)$, unit field $e$, Euler field $E$, multiplication $\circ$ and partial units $e_1$ and $e_2$ are as follows, \begin{eqnarray*} e=e_1+e_2=\frac{\paa}{\paa \www{z}_1},\quad E=\www{z}_1e +2\www{z}_2\frac{\paa}{\paa \www{z}_2},\\ \frac{\paa}{\paa\www{z}_2}\circ \frac{\paa}{\paa\www{z}_2} =\frac{1}{4\www{z}_2}e,\\ u_{1/2}=\www{z}_1\pm\sqrt{\www{z}_2},\quad e_{1/2}=\frac{1}{2}e \pm \sqrt{\www{z}_2}\frac{\paa}{\paa \www{z}_2}. \end{eqnarray*} The canonical coordinates and the partial units are 2-valued. The multiplication does not extend to $\C\times\{0\}$. (c) The universal covering $C_2^{univ}\to C_2^{pure}$ can be written as follows, \begin{eqnarray*} C_2^{univ}\cong \C\times\C&\to& \C\times\C^*\cong C_2^{pure},\\ (z_1,\zeta)&\mapsto& (z_1,\exp(\zeta))=(z_1,z_2). 
\end{eqnarray*} In the semisimple F-manifold $C_2^{univ}\cong\C^2$ with coordinates $(z_1,\zeta)$, unit field $e$, Euler field $E$, multiplication $\circ$ and partial units $e_1$ and $e_2$ are as follows, \begin{eqnarray*} e=e_1+e_2=\frac{\paa}{\paa z_1}, \quad E=z_1e +\frac{\paa}{\paa \zeta},\\ \frac{\paa}{\paa\zeta}\circ\frac{\paa}{\paa\zeta} =\exp(2\zeta)e,\\ u_{1/2}=z_1\pm \exp(\zeta),\quad e_{1/2}=\frac{1}{2}e\pm \frac{1}{2}\exp(-\zeta)\frac{\paa}{\paa \zeta}. \end{eqnarray*} (d) Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank 2 with triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t =\begin{pmatrix}1&x\\0&1\end{pmatrix}\in T^{uni}_2(\Z)$ for some $x\in\Z_{\leq 0}$. Then $$C_2^{S/\{\pm 1\}^2}=C_2^{conf},$$ and \begin{eqnarray*} C_2^{\uuuu{e}/\{\pm 1\}^2} =\left\{\begin{array}{ll} C_2^{pure}&\textup{ if }x=0,\quad \textup{the case }A_1^2,\\ C_2^{A_2}&\textup{ if }x=-1,\quad \textup{the case }A_2,\\ C_2^{univ}&\textup{ if }x\leq -2,\quad\textup{the cases }\P^1 \textup{ and beyond}.\end{array}\right. \end{eqnarray*} Here $C_2^{A_2}\cong\C\times \C^*$ with coordinates $(z_1,z_2)$ is the semisimple F-manifold of type $A_2$ in the Examples \ref{t8.10} (ii), minus its caustic $\C\times\{0\}$. Explicitly, unit field $e$, Euler field $E$, multiplication $\circ$ and partial units are as follows, \begin{eqnarray*} e=e_1+e_2=\frac{\paa}{\paa z_1},\quad E=z_1e +\frac{2}{3}z_2\frac{\paa}{\paa z_2},\\ \frac{\paa}{\paa z_2}\circ \frac{\paa}{\paa z_2}=z_2e,\\ u_{1/2}=z_1\pm\frac{2}{3}z_2^{3/2},\quad e_{1/2}=\frac{1}{2}e \pm \frac{1}{2}z_2^{-1/2}\frac{\paa}{\paa z_2}. \end{eqnarray*} The canonical coordinates and the partial units are 2-valued. The semisimple F-manifold $C_2^{A_2}$ extends to an F-manifold on $\C^2$ with $\C\times\{0\}$ as caustic. \end{theorem} {\bf Proof:} (a) Trivial.
(b) Locally on $C_2^{pure}$, $(\www{z}_1,\www{z}_2)=(z_1,z_2^2)$ is a coordinate system with $\ddd \www{z}_2=2z_2\ddd z_2$ and $\frac{\paa}{\paa \www{z}_2} =\frac{1}{2z_2}\frac{\paa}{\paa z_2}$. One applies part (a).

(c) Locally on $C_2^{univ}\cong \C\times\C$, $(z_1,z_2)=(z_1,e^\zeta)$ is a coordinate system with $\ddd \zeta=e^{-\zeta}\ddd (e^\zeta)=z_2^{-1}\ddd z_2$ and $\frac{\paa}{\paa \zeta}=z_2\frac{\paa}{\paa z_2}$. One applies part (a).

(d) Recall from Theorem \ref{t7.1} (b)
\begin{eqnarray*}
(\Br_2)_{S/\{\pm 1\}^2}&=& \Br_2 \quad\textup{for any }x,\\
(\Br_2)_{\uuuu{e}/\{\pm 1\}^2} &=& \left\{\begin{array}{ll} \langle\sigma_1^2\rangle &\textup{ for }x=0,\textup{ the case } A_1^2,\\ \langle\sigma_1^3\rangle &\textup{ for }x=-1,\textup{ the case }A_2,\\ \{\id\} &\textup{ for }x\leq -2,\textup{ the cases }\P^1 \textup{ and beyond.}\end{array}\right.
\end{eqnarray*}
Therefore
$$C_2^{S/\{\pm 1\}^2}=C_2^{conf},$$
and $C_2^{\uuuu{e}/\{\pm 1\}^2}$ is a cyclic covering of $C_2^{conf}$ of degree 2 for $x=0$, of degree 3 for $x=-1$ and of infinite degree for $x\leq -2$. Together with parts (a)--(c) and the case of type $A_2$ in Example \ref{t8.10} (ii), this shows part (d).\hfill$\Box$

\section[Distinguished systems of paths] {Distinguished systems of paths, two braid group actions} \label{s8.4}

For the $\Z$-lattice bundles in section \ref{s8.5}, {\it distinguished systems of paths} are needed. This is a notion which was developed within the theory of isolated hypersurface singularities. A good source is \cite[5.2 and 5.7]{Eb01}. Definition \ref{t8.13} and Theorem \ref{t8.14} define them and describe one action of the braid group $\Br_n$ on the set of distinguished systems of $n$ paths with fixed starting point and fixed set of endpoints.
There is also an action of the groupoid of homotopy classes of paths in $C_n^{pure}$ on the set of distinguished systems of $n$ paths with fixed starting point but variable set of endpoints (Remark \ref{t8.15} and Definition/Lemma \ref{t8.16}). It leads to a second action of the braid group $\Br_n$ on the set of distinguished systems of $n$ paths with fixed starting point and fixed set of endpoints (Remark \ref{t8.17} (ii)). This second action is not discussed in \cite{Eb01}. Lemma \ref{t8.18} clarifies the relationship with the first action. The two actions commute, and one can be expressed through the other. Example \ref{t8.19} illustrates this.

\begin{definition}\label{t8.13} Let $\uuuu{u}=(u_1,u_2,...,u_n)\in C_n^{pure}$ and $r\in\R$ with $r>\max(\Ree u_1,...,\Ree u_n)$.

(a) A {\it $(\uuuu{u},r)$-path} $\gamma$ \index{$(\uuuu{u},r)$-path} is a continuous embedding $\gamma:[0,1]\to \C$ with $\gamma(0)=r$, $\gamma(1)\in\{u_1,...,u_n\}$ and $\gamma((0,1)) \subset\{z\in\C\,|\, \Ree z<r\}-\{u_1,...,u_n\}$.

(b) \cite[5.2]{Eb01} A {\it distinguished system of paths} \index{distinguished system of paths} is a tuple $(\gamma_1,...,\gamma_n)$ of $(\uuuu{u},r)$-paths together with a permutation $\sigma\in S_n$ such that $\gamma_j(1)=u_{\sigma(j)}$, $\gamma_i((0,1))\cap \gamma_j((0,1))=\emptyset$ for $i\neq j$, and $\gamma_1,...,\gamma_n$ leave their common starting point $r=\gamma_1(0)=...=\gamma_n(0)$ in clockwise order.

(c) \cite[5.7]{Eb01} Two distinguished systems of $(\uuuu{u},r)$-paths $(\gamma_1,...,\gamma_n;\sigma)$ and $(\www{\gamma}_1,...,\www{\gamma}_n;\sigma)$ are \index{homotopic} {\it homotopic} if continuous maps $\Gamma_1,...,\Gamma_n :[0,1]\times [0,1]\to\C$ exist such that $(\Gamma_1(.,s),...,\Gamma_n(.,s);\sigma)$ is for each $s\in[0,1]$ a distinguished system of $(\uuuu{u},r)$-paths and $(\Gamma_1(.,0),...,\Gamma_n(.,0))=(\gamma_1,...,\gamma_n)$, $(\Gamma_1(.,1),...,\Gamma_n(.,1))=(\www{\gamma}_1,..., \www{\gamma}_n)$.
Denote by $\PP(\uuuu{u},r)$ the set of all homotopy classes of distinguished systems of $(\uuuu{u},r)$-paths.

(d) A {\it standard system of paths} \index{standard system of paths} is a distinguished system of paths $(\gamma_1,...,\gamma_n)$ with the following properties (1)--(4).\\
(1) $\sigma\in S_n$ is determined by
\begin{list}{}{}
\item[(i)] $\Imm u_{\sigma(1)}\leq ...\leq \Imm u_{\sigma(n)}$.
\item[(ii)] If $\Imm u_{\sigma(j)}=\Imm u_{\sigma(j+1)}$ then $\Ree u_{\sigma(j)}<\Ree u_{\sigma(j+1)}$.
\end{list}
(2) Choose $r_1\in \R$ with $\max(\Ree u_1,...,\Ree u_n) < r_1 < r$. Consider $j\in\{1,...,n\}$ and $n_1\in\N$ with
\begin{eqnarray*}
\Imm u_{\sigma(j-n_1)}<\Imm u_{\sigma(j-n_1+1)} =...=\Imm u_{\sigma(j)}<\Imm u_{\sigma(j+1)}.
\end{eqnarray*}
(3) Choose $\varepsilon_j>0$ so small that $n_1\varepsilon_j < \Imm u_{\sigma(j)}-\Imm u_{\sigma(j-n_1)}$. \\
(4) Choose the path $\gamma_{j-k}$ for $k\in\{0,1,...,n_1-1\}$ as follows:\\
First go straight from $r$ to $r_1+i(\Imm u_{\sigma(j)}-k\varepsilon_j)$.\\
Then go straight (horizontally to the left) to $\Ree(u_{\sigma(j-k)})+i(\Imm u_{\sigma(j)}-k\varepsilon_j)$.\\
Then go straight (vertically upwards) to $u_{\sigma(j-k)}$.
\end{definition}

Obviously any two standard systems of paths are homotopic. Figure \ref{Fig:8.3} shows three distinguished systems of paths. The lower one is a standard system of paths, the upper left one is homotopic to a standard system of paths, the upper right one is not.

\begin{figure} \includegraphics[width=1.0\textwidth]{pic-8-3.png} \caption[Figure 8.3]{Three distinguished systems of paths} \label{Fig:8.3} \end{figure}

\begin{theorem}\label{t8.14} \cite[5.7]{Eb01} Let $\uuuu{u}=(u_1,...,u_n)\in C_n^{pure}$ and let $r\in\R$ with $r>\max(\Ree u_1,...,\Ree u_n)$.

(a) (Definition) Define actions of the elementary braids $\sigma_j$ and $\sigma_j^{-1}$ in $\Br_n$ on the set $\PP(\uuuu{u},r)$ as follows.
$\sigma_j$ maps the homotopy class of a distinguished system of paths $(\gamma_1,...,\gamma_n)$ to the homotopy class of a distinguished system of paths $(\gamma_1',...,\gamma_n')$ where
\begin{eqnarray*}
\gamma_i'&:=&\gamma_i\textup{ for }i\in\{1,...,n\}-\{j,j+1\},\\
\gamma_{j+1}'&:=&\gamma_j,\\
\gamma_j'&:& \textup{\it\ go from }r\textup{\it\ along }\gamma_j \textup{\it\ almost to }u_{\sigma(j)},\\
&&\textup{\it\ turn around }u_{\sigma(j)} \textup{\it\ once clockwise},\\
&&\textup{\it\ go along }\gamma_j^{-1}\textup{\it\ back to }r,\\
&& \textup{\it\ go from }r\textup{\it\ along }\gamma_{j+1} \textup{\it\ to }u_{\sigma(j+1)},\\
\sigma'&:=& \sigma\circ (j\, j+1).
\end{eqnarray*}
$\sigma_j^{-1}$ maps the homotopy class of a distinguished system of paths $(\gamma_1,...,\gamma_n)$ to the homotopy class of a distinguished system of paths $(\gamma_1'',...,\gamma_n'')$ where
\begin{eqnarray*}
\gamma_i''&:=&\gamma_i\textup{ for }i\in\{1,...,n\}-\{j,j+1\},\\
\gamma_j''&:=&\gamma_{j+1},\\
\gamma_{j+1}''&:& \textup{\it\ go from }r\textup{\it\ along } \gamma_{j+1}\textup{\it\ almost to }u_{\sigma(j+1)},\\
&&\textup{\it\ turn around }u_{\sigma(j+1)} \textup{\it\ once counterclockwise,}\\
&&\textup{\it\ go along }\gamma_{j+1}^{-1}\textup{\it\ back to }r,\\
&& \textup{\it\ go from }r\textup{\it\ along }\gamma_j \textup{\it\ to }u_{\sigma(j)},\\
\sigma''&:=& \sigma\circ (j\, j+1).
\end{eqnarray*}
Figure \ref{Fig:8.4} shows the old and new paths under both operations.

\begin{figure} \includegraphics[width=0.9\textwidth]{pic-8-4.png} \caption[Figure 8.4]{Actions of $\sigma_j$ and $\sigma_j^{-1}$ on distinguished systems of paths} \label{Fig:8.4} \end{figure}

(b) The actions of $\sigma_j$ and $\sigma_j^{-1}$ extend to an action of the braid group $\Br_n$ from the left on the set $\PP(\uuuu{u},r)$ of homotopy classes of distinguished systems of $(\uuuu{u},r)$-paths.

(c) The action of $\Br_n$ on the set $\PP(\uuuu{u},r)$ is transitive.
\end{theorem}

The sequence of pictures in Figure \ref{Fig:8.5} illustrates that the action of $\sigma_j^{-1}\sigma_j$ on $\PP(\uuuu{u},r)$ is trivial. The pictures 5.27 and 5.28 in \cite{Eb01} illustrate that the braids $\sigma_j\sigma_{j+1}\sigma_j$ and $\sigma_{j+1}\sigma_j\sigma_{j+1}$ act in the same way on $\PP(\uuuu{u},r)$.

\begin{figure} \includegraphics[width=1.0\textwidth]{pic-8-5.png} \caption[Figure 8.5]{The action of $\sigma_j^{-1}\sigma_j$ on distinguished systems of paths is trivial} \label{Fig:8.5} \end{figure}

In the case of $\uuuu{u}=b_n^{pure}=(i,2i,...,ni)$ and $r=1$ (here $r=1$ is just an example; any $r>0$ works) there is also an action of $\Br_n$ from the right on the set $\PP(b_n^{pure},1)$. It comes from an action of the fundamental groupoid of $C_n^{pure}$ on the union $\bigcup_{(\uuuu{u},r)}\PP(\uuuu{u},r)$. This action and the compatibility with the action of $\Br_n$ from the left on $\PP(b_n^{pure},1)$ in Theorem \ref{t8.14} are not discussed in \cite{Eb01}. They are given in the points \ref{t8.15} to \ref{t8.19}.

\begin{remark}\label{t8.15} \cite[1.7]{Sp66} For $\uuuu{u}^0$ and $\uuuu{u}^1\in C_n^{pure}$ denote by $H(\uuuu{u}^0,\uuuu{u}^1)$ the set of homotopy classes \index{homotopy class of a path} of (continuous maps =) paths $\alpha:[0,1]\to C_n^{pure}$ with $\alpha(0)=\uuuu{u}^0$ and $\alpha(1)=\uuuu{u}^1$. The tuple $(H(\uuuu{u}^0,\uuuu{u}^1))_{\uuuu{u}^0,\uuuu{u}^1 \in C_n^{pure}}$ is a {\it groupoid} \index{groupoid} with $[\alpha_0][\alpha_1]=[\alpha_0\alpha_1]\in H(\uuuu{u}^0, \uuuu{u}^2)$ for $[\alpha_0]\in H(\uuuu{u}^0,\uuuu{u}^1)$, $[\alpha_1]\in H(\uuuu{u}^1,\uuuu{u}^2)$. The multiplication is associative, and $[\alpha][\alpha^{-1}]=[\textup{trivial path}]\in H(\uuuu{u}^0,\uuuu{u}^0)$ for $[\alpha]\in H(\uuuu{u}^0,\uuuu{u}^1)$. See \cite[1.7]{Sp66} for more details on this groupoid. \end{remark}

\begin{definition/lemma}\label{t8.16} (a) Let $\alpha:[0,1]\to C_n^{pure}$ be a path with $[\alpha]\in H(\uuuu{u}^0,\uuuu{u}^1)$.
We will construct a natural bijection $\Phi(\alpha):\PP(\uuuu{u}^0,r_0)\to\PP(\uuuu{u}^1,r_1)$ (in the notation $\Phi(\alpha)$ we should include $r_0$ and $r_1$, but for simplicity we drop them). Choose $r\in\R$ with
$$r\geq \max(r_0,r_1),\quad r>\max_{i\in\{1,...,n\}}\max_{t\in [0,1]}\Ree\alpha_i(t).$$
Because there are natural bijections $\PP(\uuuu{u}^0,r_0)\to \PP(\uuuu{u}^0,r)$ and $\PP(\uuuu{u}^1,r_1)\to\PP(\uuuu{u}^1,r)$, we can replace $r_0$ and $r_1$ by $r$. Choose a family of homeomorphisms $\Psi_s:\C\to\C$ depending continuously on $s\in[0,1]$ with the properties
\begin{eqnarray*}
&& \Psi_0=\id_\C,\\
&& \Psi_s|_{\{z\in\C\,|\, \Ree z\geq r\}}=\id,\\
&& \Psi_s(\{u_1^0,...,u_n^0\})=\{\alpha_1(s),...,\alpha_n(s)\}.
\end{eqnarray*}
For a distinguished system of paths $(\uuuu{\gamma};\sigma)=(\gamma_1,...,\gamma_n;\sigma)$ with $[(\uuuu{\gamma};\sigma)]\in\PP(\uuuu{u}^0,r)$, the composition $(\Psi_s\circ\uuuu{\gamma};\sigma) =(\Psi_s\circ\gamma_1,...,\Psi_s\circ\gamma_n;\sigma)$ is a distinguished system of paths with $[(\Psi_s\circ\uuuu{\gamma};\sigma)]\in\PP(\alpha(s),r)$. The class $[(\Psi_s\circ\uuuu{\gamma};\sigma)]$ depends only on the class $[(\uuuu{\gamma};\sigma)]$ and on the path $\alpha$. Define
\begin{eqnarray*}
\Phi(\alpha)([(\uuuu{\gamma};\sigma)]) := [(\Psi_1\circ\uuuu{\gamma};\sigma)]\in \PP(\uuuu{u}^1,r).
\end{eqnarray*}

(b) The map $\Phi(\alpha):\PP(\uuuu{u}^0,r)\to \PP(\uuuu{u}^1,r)$ depends only on the homotopy class $[\alpha]\in H(\uuuu{u}^0,\uuuu{u}^1)$.

(c) If $[\alpha_1]\in H(\uuuu{u}^0,\uuuu{u}^1)$ and $[\alpha_2]\in H(\uuuu{u}^1,\uuuu{u}^2)$ then $[\alpha_1\alpha_2]\in H(\uuuu{u}^0,\uuuu{u}^2)$ and
$$\Phi(\alpha_1\alpha_2)=\Phi(\alpha_2)\Phi(\alpha_1).$$
The groupoid $(H(\uuuu{u}^0,\uuuu{u}^1))_{\uuuu{u}^0, \uuuu{u}^1\in C_n^{pure}}$ acts from the right on the tuple $(\PP(\uuuu{u},r))_{\uuuu{u}\in C_n^{pure}, r>\max(\Ree u_1,...,\Ree u_n)}$. \end{definition/lemma}

\begin{remarks}\label{t8.17} (i) Let $\uuuu{u}\in C_n^{pure}$ and $r>\max(\Ree u_1,..., \Ree u_n)$.
Let $\tau\in S_n$. Then $\tau$ as deck transformation of the covering $\pr_n^{p,c}:C_n^{pure}\to C_n^{conf}$ maps $\uuuu{u}=(u_1,...,u_n)$ to $\tau(\uuuu{u})=(u_{\tau^{-1}(1)},...,u_{\tau^{-1}(n)})$, see the Remarks \ref{t8.2} (i) and (vii). A distinguished system of paths $(\uuuu{\gamma};\sigma)$ for $\uuuu{u}$ and $r$ is almost the same as the distinguished system of paths $(\uuuu{\gamma};\tau\sigma)$ for $\tau(\uuuu{u})$ and $r$. They just differ by the indexing of the entries of $\uuuu{u}$. This gives a natural bijection $\PP(\uuuu{u},r)\to \PP(\tau(\uuuu{u}),r)$, $[(\uuuu{\gamma};\sigma)]\mapsto [(\uuuu{\gamma};\tau\sigma)]$.

(ii) A closed path $\beta$ in $C_n^{conf}$ with starting point and end point $b_n^{conf}$ represents a braid $[\beta]\in \Br_n$. It lifts to a path $\www{\beta}$ in $C_n^{pure}$ from $b_n^{pure}=(i,2i,...,ni)$ to $\sigma(b_n^{pure})=(\sigma^{-1}(1)i,\sigma^{-1}(2)i,..., \sigma^{-1}(n)i)$ for some $\sigma\in S_n$ with homotopy class $[\www{\beta}]\in H(b_n^{pure},\sigma(b_n^{pure}))$. Now $\Phi(\www{\beta})$ maps $\PP(b_n^{pure},1)$ to $\PP(\sigma(b_n^{pure}),1)$ which can be identified with $\PP(b_n^{pure},1)$ because of (i). Therefore $[\beta]$ acts on $\PP(b_n^{pure},1)$. We obtain an action of $\Br_n$ on $\PP(b_n^{pure},1)$, which is an action from the right because of Lemma \ref{t8.16} (c). \end{remarks}

Remark \ref{t8.17} (ii) gives an intuitive geometric action of $\Br_n$ on $\PP(b_n^{pure},1)$ from the right. It comes from moving the endpoints of a distinguished system of paths along a braid and deforming the distinguished system of paths accordingly. Lemma \ref{t8.18} connects it to the more algebraic action of $\Br_n$ on $\PP(b_n^{pure},1)$ from the left in Theorem \ref{t8.14} (a)+(b). Lemma \ref{t8.18} shows first that these actions commute and second that one action can be expressed through the other.
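The left action of Theorem \ref{t8.14} induces, on the level of distinguished bases, an action of $\Br_n$ on tuples of lattice vectors, which reappears in Theorem \ref{t8.23} (a). As a plausibility check one can verify numerically that this induced action satisfies the braid relation and that $\sigma_j^{-1}\sigma_j$ acts trivially, mirroring Figure \ref{Fig:8.5}. The following sketch is not part of the text; it assumes the standard formulas $\sigma_j\colon (f_j,f_{j+1})\mapsto (s_{f_j}(f_{j+1}),f_j)$ and $s_v(x)=x-I(v,x)v$ with $I=S+S^t$, and a sample matrix $S$ of type $A_3$ (our choice).

```python
import numpy as np

# Hypothetical check of the induced braid action on tuples of lattice
# vectors: sigma_j replaces (f_j, f_{j+1}) by (s_{f_j}(f_{j+1}), f_j),
# where s_v(x) = x - I(v,x) v is the reflection for the even form
# I = S + S^T.  S below is a sample A_3-type upper triangular matrix.

S = np.array([[1, -1, 0],
              [0, 1, -1],
              [0, 0, 1]])
I = S + S.T

def s(v, x):
    """Reflection s_v(x) = x - I(v,x) v."""
    return x - (v @ I @ x) * v

def sig(j, f):
    """Elementary braid sigma_j acting on a tuple f (j is 1-based)."""
    f = list(f)
    f[j - 1], f[j] = s(f[j - 1], f[j]), f[j - 1]
    return tuple(f)

def sig_inv(j, f):
    """The inverse braid sigma_j^{-1}."""
    f = list(f)
    f[j - 1], f[j] = f[j], s(f[j], f[j - 1])
    return tuple(f)

eq = lambda f, g: all((a == b).all() for a, b in zip(f, g))
e = tuple(np.eye(3, dtype=int))

# sigma_j^{-1} sigma_j acts trivially (compare Figure 8.5) ...
assert eq(sig_inv(1, sig(1, e)), e)
# ... and the braid relation holds (compare pictures 5.27/5.28 in [Eb01]).
assert eq(sig(1, sig(2, sig(1, e))), sig(2, sig(1, sig(2, e))))
```

Here the formula for $\sigma_j^{-1}$ uses that $s_v$ is an involution on the vectors occurring, since $I(v,v)=2$ for them.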
\begin{lemma}\label{t8.18} The action of $\Br_n$ on $\PP(b_n^{pure},1)$ from the left in Theorem \ref{t8.14} and the action of $\Br_n$ on $\PP(b_n^{pure},1)$ from the right in Remark \ref{t8.17} (ii) commute,
\begin{eqnarray*}
\beta_1([(\uuuu{\gamma};\sigma)].\beta_2) = (\beta_1[(\uuuu{\gamma};\sigma)]).\beta_2 \quad\textup{for }\beta_1,\beta_2\in\Br_n, [(\uuuu{\gamma};\sigma)]\in\PP(b_n^{pure},1),
\end{eqnarray*}
and satisfy
\begin{eqnarray*}
\beta[(\uuuu{\gamma};\sigma)] = [(\uuuu{\gamma};\sigma)].\beta \quad\textup{for }\beta\in\Br_n,[(\uuuu{\gamma};\sigma)] \in \PP(b_n^{pure},1).
\end{eqnarray*}
\end{lemma}

{\bf Proof:} The action on the right comes from a continuous family of homeomorphisms as in Lemma \ref{t8.16} (a) which deforms the distinguished system of paths. The action on the left comes from a construction of new paths which follow closely the old paths. The new paths are deformed together with the old paths. A deformation of the old and new paths does not change this construction of new paths from old paths. Therefore the actions commute.

The action of $\Br_n$ from the left on $\PP(b_n^{pure},1)$ is transitive by Theorem \ref{t8.14} (c). Therefore it is sufficient to show, for an elementary braid $\sigma_j$ and a standard system of paths $(\uuuu{\gamma}^{st};\sigma)$ with $[(\uuuu{\gamma}^{st};\sigma)] \in \PP(b_n^{pure},1)$, that
$$\sigma_j[(\uuuu{\gamma}^{st};\sigma)] = [(\uuuu{\gamma}^{st};\sigma)].\sigma_j.$$
This is essentially obvious; it is shown in Figure \ref{Fig:8.6}. \hfill$\Box$

\begin{figure} \includegraphics[width=0.7\textwidth]{pic-8-6.png} \caption[Figure 8.6]{Moving along a braid in $C_n^{pure}$ changes a distinguished system of paths} \label{Fig:8.6} \end{figure}

\begin{example}\label{t8.19} Figure \ref{Fig:8.7} shows for $n=3$ on the left the action of $\sigma_2^{-1}\sigma_1$ on $[(\uuuu{\gamma}^{st};\sigma)]$ from the left and on the right the action of $\sigma_2^{-1}\sigma_1$ on $[(\uuuu{\gamma}^{st};\sigma)]$ from the right.
\begin{figure} \includegraphics[width=1.0\textwidth]{pic-8-7.png} \caption[Figure 8.7]{Actions of $\sigma_2^{-1}\sigma_1$ from the left and from the right: same result} \label{Fig:8.7} \end{figure} \end{example}

\section[Two families of $\Z$-lattice structures over $C_n^{\uuuu{e}/\{\pm 1\}^n}$]{Two families of $\Z$-lattice structures over the manifold $C_n^{\uuuu{e}/\{\pm 1\}^n}$} \label{s8.5}

Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank $n\in\N$ with a triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. The manifold $C_n^{\uuuu{e}/\{\pm 1\}^n}$ is not just a semisimple F-manifold with Euler field. It is the carrier of much richer structures, which we call {\it $\Z$-lattice structures} and which we explain in this section.

\begin{remark}\label{t8.20} A $\Z$-lattice structure often leads, in several steps, to a strong enrichment of the F-manifold structure, namely a Dubrovin-Frobenius manifold. We will not go through these steps or treat Dubrovin-Frobenius manifolds here, but we will say a few words in this remark. Details on the steps are given in \cite{Sa02} \cite{He02} \cite{He03}.

The $\Z$-lattice structure can be extended to a holomorphic vector bundle on $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}$ with a meromorphic connection with a logarithmic pole along a certain discriminant which will be defined below and a $\Z$-lattice bundle on the complement. A Fourier-Laplace transformation of this bundle along the first factor $\C$ of the base space leads to another holomorphic vector bundle on $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}$ with a meromorphic connection with a pole of Poincar\'e rank 1 along the hypersurface $\{0\}\times C_n^{\uuuu{e}/\{\pm 1\}^n}$, a $\Z$-lattice bundle on the complement and a certain pairing. In the notation of \cite{He03} it is a {\it TEZP-structure} (Twistor Extension $\Z$-lattice Pairing).
From this and from some additional choice one can usually construct a Dubrovin-Frobenius manifold structure on $C_n^{\uuuu{e}/\{\pm 1\}^n}$ or at least on the complement of a hypersurface in it. See \cite{Sa02} \cite{He02} \cite{He03} for this construction. Dubrovin-Frobenius manifolds were first defined by Dubrovin \cite{Du92} \cite{Du96}. \index{Dubrovin-Frobenius manifold} \index{Frobenius manifold} A definition is given above in Remark \ref{t8.9} (viii). Here we restrict ourselves to explaining two natural families of $\Z$-lattice structures over $C_n^{\uuuu{e}/\{\pm 1\}^n}$. \end{remark}

Definition/Lemma \ref{t8.21} (c) presents the notion of a single $\Z$-lattice structure. The construction of natural families of $\Z$-lattice structures over $C_n^{univ}$ and $C_n^{\uuuu{e}/\{\pm 1\}^n}$ comes in Definition \ref{t8.22} (c) and Theorem \ref{t8.23} (d). The unimodular lattice $(H_\Z,L,\uuuu{e})$ with distinguished basis $\uuuu{e}$ induces such families. The construction uses distinguished systems of paths. Theorem \ref{t8.23} (a)--(c) discusses how $\Z$-lattice structures over different points in $C_n^{univ}$ in such a family of $\Z$-lattice structures are related by the actions of braids.

\begin{definition/lemma}\label{t8.21} Let $(H_\Z,L,\uuuu{e})$ and $S$ be as above.

(a) (Definition) The {\sf discriminant} \index{discriminant}\index{$D_{1,n}^{univ}$} $D_{1,n}^{univ}\subset \C\times C_n^{univ}$ is the smooth complex hypersurface
$$\{(\tau,b)\in \C\times C_n^{univ}\,|\, \LL(b)(\tau)=0\}.$$
Recall the map $\LL:C_n^{univ}\to \C[x]_n^{reg}$ in Remark \ref{t8.9} (iv). The polynomial $\LL(b)\in \C[x]_n^{reg}\cong C_n^{conf}$ corresponds to $\pr_n^{u,c}(b)\in C_n^{conf}$.

(Trivial lemma) The projection $\pr_2:\C\times C_n^{univ}\to C_n^{univ}$, $(\tau,b)\mapsto b$, restricts to a covering $D_{1,n}^{univ}\to C_n^{univ}$ of degree $n$.
(b) (Trivial lemma) The group $\Br_n$ of deck transformations of the universal covering $\pr_n^{u,c}:C_n^{univ}\to C_n^{conf}$ extends with $\id_\C$ on the first factor of $\C\times C_n^{univ}$ to a group of automorphisms of $\C\times C_n^{univ}$. It leaves the discriminant $D_{1,n}^{univ}$ invariant.

(Definition) The image of $D_{1,n}^{univ}$ under the group $\id_\C\times(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ is the discriminant \index{$D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$} $D_{1,n}^{\uuuu{e}/\{\pm 1\}^n} \subset \C\times C_n^{\uuuu{e}/\{\pm 1\}^n}$,
$$D_{1,n}^{\uuuu{e}/\{\pm 1\}^n} =\{(\tau,b)\in \C\times C_n^{\uuuu{e}/\{\pm 1\}^n}\,|\, \LL(b)(\tau)=0\},$$
where now $\LL$ is the Lyashko-Looijenga map $\LL:C_n^{\uuuu{e}/\{\pm 1\}^n}\to \C[x]_n^{reg}.$

(c) (Definition) A $\Z$-lattice structure means an (automatically flat) $\Z$-lattice bundle over $\C-\{n\textup{ points}\}$.

(Remark) Therefore a $\Z$-lattice bundle over $\C\times C_n^{univ}-D_{1,n}^{univ}$ is considered as a ({\sf flat} or {\sf isomonodromic}) family of $\Z$-lattice structures over $C_n^{univ}$. A $\Z$-lattice bundle over $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n} - D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$ is considered as a ({\sf flat} or {\sf isomonodromic}) family of $\Z$-lattice structures over $C_n^{\uuuu{e}/\{\pm 1\}^n}$.

(Remark) The $\Z$-lattice structures which we will construct will come equipped with an (automatically flat) even or odd intersection form on each fiber. \end{definition/lemma}

Recall that a unimodular bilinear lattice $(H_\Z,L)$ with triangular basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$ was fixed at the beginning of this section. Together with the choice of a pair $(\uuuu{u},r)\in C_n^{pure}\times \R$ with $r>\max(\Ree u_1,...,\Ree u_n)$ and the choice of a homotopy class $[(\uuuu{\gamma};\sigma)]\in \PP(\uuuu{u},r)$ of a distinguished system of paths $(\uuuu{\gamma};\sigma)$, it induces a $\Z$-lattice structure on $\C-\{u_1,...,u_n\}$.
It also induces (without any additional choice) a natural $\Z$-lattice bundle over $C_n^{univ}$. The two constructions are given in Definition \ref{t8.22} (b) and (c).

\begin{definition}\label{t8.22} (a) Let $\uuuu{u}\in C_n^{pure}$ and $r\in\R$ with $r>\max(\Ree u_1,...,\Ree u_n)$. Let $\gamma:[0,1]\to \C$ be a $(\uuuu{u},r)$-path (Definition \ref{t8.13} (a)). An {\it associated loop} $\delta:[0,1]\to\C$ \index{associated loop} is a closed path which starts and ends at $r$, which first goes along $\gamma$ almost to $\gamma(1)$, which then moves on a small circle once counterclockwise around $\gamma(1)$ and which finally goes along $\gamma^{-1}$ back to $r$. Figure \ref{Fig:8.8} shows three loops which are associated to three paths.

\begin{figure} \includegraphics[width=0.5\textwidth]{pic-8-8.png} \caption[Figure 8.8]{Three loops associated to three paths} \label{Fig:8.8} \end{figure}

(b) Let $\uuuu{u}\in C_n^{pure}$ and $r\in\R$ with $r>\max(\Ree u_1,...,\Ree u_n)$. Let $(\uuuu{\gamma};\sigma)$ be a distinguished system of paths with $[(\uuuu{\gamma};\sigma)]\in \PP(\uuuu{u},r)$. Let $\uuuu{f}\in\BB^{dist}$ be a distinguished basis of $H_\Z$. Let $k\in\{0,1\}$. Then $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)], \uuuu{f})$ is the $\Z$-lattice bundle on $\C-\{u_1,...,u_n\}$ which is determined by the following conditions (i) and (ii):
\begin{list}{}{}
\item[(i)] The restriction of the bundle to $\{z\in\C\,|\, \Ree z\geq r\}$ is trivial, and each fiber over a point in this set is identified with $H_\Z$.
\item[(ii)] Let $\delta_1,...,\delta_n$ be loops which are associated to $\gamma_1,...,\gamma_n$. The monodromy along the loop $\delta_j$ is, under the identification of the fiber $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)],\uuuu{f})_r$ with $H_\Z$, the automorphism $s^{(k)}_{f_j}:H_\Z\to H_\Z$.
\end{list}
It is well known that (i) and (ii) determine uniquely a $\Z$-lattice bundle on $\C-\{u_1,...,u_n\}$.
This bundle \index{$V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)],\uuuu{f})$} $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)], \uuuu{f})$ is a $\Z$-lattice structure in the sense of Definition \ref{t8.21} (c).

(c) Let $k\in\{0,1\}$. $V_\Z^{(k),univ}$ \index{$V_\Z^{(k),univ}$} is the $\Z$-lattice bundle on $\C\times C_n^{univ}-D_{1,n}^{univ}$ which is determined by the following conditions (i) and (ii):
\begin{list}{}{}
\item[(i)] The restriction of the bundle to $\{(z,b)\in\C\times C_n^{univ}\,|\, \Ree z>\max(\Ree u\,|\, (u,b)\in D_{1,n}^{univ})\}$ is trivial, and each fiber over a point in this set is identified with $H_\Z$.
\item[(ii)] The restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times \{b_n^{univ}\}$ is equal to the $\Z$-lattice bundle $V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma}^{st};\id)], \uuuu{e})$. Here $(\uuuu{\gamma}^{st};\id)$ is a standard system of paths which start at $r=1$ and end at the entries of $b_n^{pure}=(i,2i,...,ni)$.
\end{list}
Because $\pr_2:\C\times C_n^{univ}\to C_n^{univ}$ restricts to a trivial $n$-sheeted covering $\pr_2:D_{1,n}^{univ}\to C_n^{univ}$ over the simply connected manifold $C_n^{univ}$, (i) and (ii) determine uniquely a $\Z$-lattice bundle on $\C\times C_n^{univ}-D_{1,n}^{univ}$. The bundle $V_\Z^{(k),univ}$ is a flat family of $\Z$-lattice structures over the manifold $C_n^{univ}$ in the sense of Definition \ref{t8.21} (c). \end{definition}

The next theorem is central in this section. It will construct in part (d) a $\Z$-lattice bundle over $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}- D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$. This is a flat family of $\Z$-lattice structures over $C_n^{\uuuu{e}/\{\pm 1\}^n}$ in the sense of Definition \ref{t8.21} (c). The parts (a), (b) and (c) prepare this. Part (a) starts with a $\Z$-lattice bundle $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)],\uuuu{f})$ on $\C-\{u_1,...,u_n\}$ as in Definition \ref{t8.22} (b).
It answers the question of how one has to change $\uuuu{f}$ in order to obtain the same $\Z$-lattice bundle if one changes the distinguished system of paths from $[(\uuuu{\gamma};\sigma)]$ to $\beta[(\uuuu{\gamma};\sigma)]$ with some braid $\beta\in\Br_n$.

Part (b) starts with the $\Z$-lattice bundle $V_\Z^{(k),univ}$ on $\C\times C_n^{univ}-D_{1,n}^{univ}$ as in Definition \ref{t8.22} (c). It considers for a point $\uuuu{\www{u}}\in C_n^{univ}$ and its image $\uuuu{u}^1=\pr^{u,p}(\uuuu{\www{u}}) \in C_n^{pure}$ the restriction of $V_\Z^{(k),univ}$ to a $\Z$-lattice bundle on $(\C-\{u_1^1,...,u_n^1\})\times\{\uuuu{\www{u}}\}$. It studies how this restriction is obtained from the restriction to $(\C-\{i,2i,...,ni\})\times\{b_n^{univ}\}$ if one moves from $b_n^{univ}$ to $\uuuu{\www{u}}$. The groupoid action in Definition/Lemma \ref{t8.16} is used.

Part (c) specializes the situation in part (b) to the case where $\pr^{u,c}(\uuuu{\www{u}})=b_n^{conf}$. Then a path in $C_n^{univ}$ from $b_n^{univ}$ to $\uuuu{\www{u}}$ corresponds to a braid $\beta\in \Br_n$. Part (c) shows with the help of parts (a) and (b) and Lemma \ref{t8.18} that then the restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times\{\uuuu{\www{u}}\}$ is simply $V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma}^{st};\id)], \beta^{-1}(\uuuu{e}))$.

\begin{theorem}\label{t8.23} (a) Let $(\uuuu{\gamma};\sigma)$ be a distinguished system of paths with $[(\uuuu{\gamma};\sigma)]\in\PP(b_n^{pure},1)$. Let $\uuuu{f}$ be a distinguished basis of $H_\Z$. Let $k\in\{0,1\}$. Let $\beta\in \Br_n$. Then
\begin{eqnarray*}
V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma};\sigma)], \uuuu{f}) =V_\Z^{(k)}((b_n^{pure},1),\beta[(\uuuu{\gamma};\sigma)], \beta(\uuuu{f})).
\end{eqnarray*}

(b) Let $\alpha:[0,1]\to C_n^{pure}$ be a path with $[\alpha]\in H(b_n^{pure},\uuuu{u}^1)$ for some $\uuuu{u}^1\in C_n^{pure}$. Let $\www{\alpha}:[0,1]\to C_n^{univ}$ be the lift to $C_n^{univ}$ of $\alpha$ with starting point $\www{\alpha}(0)=b_n^{univ}$.
Let $k\in\{0,1\}$. The restriction of $V_\Z^{(k),univ}$ to $(\C-\{u_1^1,...,u_n^1\})\times\{\www{\alpha}(1)\}$ is equal to $V_\Z^{(k)}((\uuuu{u}^1,r), \Phi(\alpha)[(\uuuu{\gamma}^{st};\id)],\uuuu{e})$ for some $r\in \R_{\geq 1}$ with $r>\max(\Ree u_1^1,...,\Ree u_n^1)$.

(c) Let $\beta\in \Br_n$. Let $\www{\beta}:[0,1]\to C_n^{univ}$ be the lift to $C_n^{univ}$ with $\www{\beta}(0)=b_n^{univ}$ of a loop in $C_n^{conf}$ which represents $\beta$. Let $k\in\{0,1\}$. Then the restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times \{\www{\beta}(1)\}$ is equal to $V_\Z^{(k)}((b_n^{pure},1), [(\uuuu{\gamma}^{st};\id)],\beta^{-1}(\uuuu{e}))$.

(d) For $\beta\in (\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$ denote the deck transformation of $C_n^{univ}$ which it induces by $\beta^{deck}:C_n^{univ}\to C_n^{univ}$. It extends to an automorphism $\beta^{deck,V,(k)}$ of $V_\Z^{(k),univ}$ which is $(\id_{H_\Z},\id_\C\times\beta^{deck})$ over $\{(z,b)\in\C\times C_n^{univ}\,|\, \Ree z> \max(\Ree u\,|\,(u,b)\in D_{1,n}^{univ})\}$. Therefore the quotient $V_\Z^{(k),univ}/ (\Br_n)_{\uuuu{e}/\{\pm 1\}^n}^{deck,V,(k)}$ is a $\Z$-lattice bundle over $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n} -D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$ whose restriction to
$$\{(z,b)\in \C\times C_n^{\uuuu{e}/\{\pm 1\}^n}\,|\, \Ree z>\max(\Ree u\,|\, (u,b)\in D_{1,n}^{\uuuu{e}/\{\pm 1\}^n})\}$$
is trivial. We call this bundle \index{$V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$} $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$. It is a family of $\Z$-lattice structures over $C_n^{\uuuu{e}/\{\pm 1\}^n}$. \end{theorem}

{\bf Proof:} (a) Let $\delta_1,...,\delta_n$ be loops associated to $\gamma_1,...,\gamma_n$. By definition, in the bundle $V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma};\sigma)],\uuuu{f})$ the local monodromy along the loop $\delta_j$ is given by $s^{(k)}_{f_j}$. Write $[(\uuuu{\gamma}';\sigma')] := \beta[(\uuuu{\gamma};\sigma)]$ and $\uuuu{f}':=\beta(\uuuu{f})$.
We have to show that in the bundle $V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma};\sigma)],\uuuu{f})$ the local monodromy around the loop $\delta_j'$ is given by $s^{(k)}_{f_j'}$. First we consider $\beta=\sigma_j$. Then
\begin{eqnarray*}
&&\gamma_i'=\gamma_i\textup{ for }i\in\{1,...,n\}-\{j,j+1\},\\
&& \gamma_{j+1}'=\gamma_j,\\
&&\gamma_j'\textup{ is homotopic to }\delta_j^{-1}\gamma_{j+1},\\
\textup{so }&&\delta_i'=\delta_i\textup{ for } i\in\{1,...,n\}-\{j,j+1\},\\
&&\delta_{j+1}'=\delta_j,\\
&&\delta_j'\textup{ is homotopic to }\delta_j^{-1}\delta_{j+1} \delta_j.
\end{eqnarray*}
Also
\begin{eqnarray*}
&&f_i'=f_i\textup{ for }i\in\{1,...,n\}-\{j,j+1\},\\
&&f_{j+1}'=f_j,\\
&&f_j'=s_{f_j}^{(k)}(f_{j+1}).
\end{eqnarray*}
Because of
\begin{eqnarray*}
s^{(k)}_{f_j'} = s^{(k)}_{s^{(k)}_{f_j}(f_{j+1})} =s^{(k)}_{f_j} s^{(k)}_{f_{j+1}} (s^{(k)}_{f_j})^{-1}
\end{eqnarray*}
the local monodromy along $\delta_i'$ is indeed given by $s^{(k)}_{f_i'}$ for $i\in\{1,...,n\}$ (recall that a composition of paths is read from the left, a composition of automorphisms is read from the right). The case $\beta=\sigma_j^{-1}$ is treated analogously. The general case $\beta\in \Br_n$ follows.

(b) In the restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times \{b_n^{univ}\}$ the local monodromies along loops associated to a standard system of paths $(\uuuu{\gamma}^{st};\id)$ are $s^{(k)}_{e_1},...,s^{(k)}_{e_n}$. Moving in the base space $C_n^{univ}$ from $b_n^{univ}$ to $\www{\alpha}(1)$, the standard system of paths is deformed to a representative of $\Phi(\alpha)[(\uuuu{\gamma}^{st};\id)]$, associated loops are deformed accordingly, and the local monodromies along the deformed associated loops are still $s^{(k)}_{e_1},...,s^{(k)}_{e_n}$. This shows the claim.

(c) One has to apply part (b), Remark \ref{t8.17} (ii), Lemma \ref{t8.18} and part (a), as follows. Let $\oooo{\beta}:[0,1]\to C_n^{pure}$ be the lift of a representative of $\beta$ with $\oooo{\beta}(0)=b_n^{pure}$.
Then $\oooo{\beta}(1)=\sigma(b_n^{pure})$ for some $\sigma\in S_n$. We have the following equalities
\begin{eqnarray*}
&&\textup{the restriction of }V_\Z^{(k),univ}\textup{ to } (\C-\{i,2i,...,ni\})\times \{\www{\beta}(1)\}\\
&=& V_\Z^{(k)}((\sigma(b_n^{pure}),1), \Phi(\oooo{\beta})[(\uuuu{\gamma}^{st};\id)],\uuuu{e}) \quad(\textup{by part (b)})\\
&=& V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma}^{st};\id)].\beta, \uuuu{e}) \quad(\textup{by Remark \ref{t8.17} (ii)})\\
&=& V_\Z^{(k)}((b_n^{pure},1),\beta[(\uuuu{\gamma}^{st};\id)], \uuuu{e})\quad(\textup{by Lemma \ref{t8.18}})\\
&=& V_\Z^{(k)}((b_n^{pure},1),[(\uuuu{\gamma}^{st};\id)], \beta^{-1}(\uuuu{e}))\quad(\textup{by part (a)}).
\end{eqnarray*}

(d) For $\beta\in (\Br_n)_{\uuuu{e}/\{\pm 1\}^n}$, $\beta^{-1}(\uuuu{e})$ coincides with $\uuuu{e}$ up to signs, $\beta^{-1}(\uuuu{e}/\{\pm 1\}^n)=\uuuu{e}/\{\pm 1\}^n$. Because of $s^{(k)}_{-e_i}=s^{(k)}_{e_i}$ and part (c), the restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times\{\www{\beta}(1)\}$ is canonically isomorphic to the restriction of $V_\Z^{(k),univ}$ to $(\C-\{i,2i,...,ni\})\times \{b_n^{univ}\}$. Therefore $\beta^{deck}$ extends to an automorphism of $V_\Z^{(k),univ}$ as claimed. The quotient bundle $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ is well defined.\hfill$\Box$

\bigskip
Part (a) of the next lemma says that in all $\Z$-lattice structures and $\Z$-lattice bundles which are constructed from $(H_\Z,L,\uuuu{e})$, the bilinear form $I^{(k)}$ becomes a flat bilinear form on the bundle. Part (b) is a nice observation of K. Saito \cite[Lemma 2]{Sa82}. It says that $I^{(k)}$ is essentially the unique flat bilinear form on these bundles.

\begin{lemma}\label{t8.24} Let $k\in\{0,1\}$.
(a) Each of the following $\Z$-lattice bundles comes with a $(-1)^k$-symmetric bilinear form on its fibers, which is induced by $I^{(k)}$ on $H_\Z$ and which is also called $I^{(k)}$: \\ the bundle $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)], \uuuu{f})$ over $\C-\{u_1,...,u_n\}$ in Definition \ref{t8.22} (b),\\ the bundle $V_\Z^{(k),univ}$ over $\C\times C_n^{univ}-D_{1,n}^{univ}$ in Definition \ref{t8.22} (c),\\ the bundle $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ over $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n} -D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$ in Theorem \ref{t8.23} (d). (b) \cite[Lemma 2]{Sa82} Suppose that $(H_\Z,L,\uuuu{e})$ is irreducible and $(n,k)\neq (1,1)$. In each of the bundles in part (a), the only flat bilinear forms on its fibers are the bilinear forms $a\cdot I^{(k)}$ with $a\in\Z$, i.e. the integer multiples of $I^{(k)}$. \end{lemma} {\bf Proof:} (a) This follows in all cases from $s^{(k)}_{f_i}\in\OO^{(k)}$, which says that $s^{(k)}_{f_i}:H_\Z\to H_\Z$ respects $I^{(k)}$. (b) In each of the bundles in part (a), the monodromy group, which is the image of the group antihomomorphism \begin{eqnarray*} \pi_1(\textup{base space},\textup{base point})\to \OO^{(k)} \end{eqnarray*} is $\Gamma^{(k)}$. This is clear for the bundle $V_\Z^{(k)}((\uuuu{u},r),[(\uuuu{\gamma};\sigma)],\uuuu{f})$ in Definition \ref{t8.22} (b) and for $V_\Z^{(k),univ}$. It follows for the bundle $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ from the fact that the restriction of this bundle to $\{(z,b)\in\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}\,|\, \Ree z>\max(\Ree u\,|\, (u,b)\in D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}) \}$ is trivial (without this fact, one might have additional {\it transversal} monodromy). The proof of Lemma 2 in \cite{Sa82} shows that the only $\Gamma^{(k)}$-invariant bilinear forms on $H_\Z$ are the forms $a\cdot I^{(k)}$ with $a\in \Z$.
\hfill$\Box$ \begin{example}\label{t8.25} The case $A_2$, $(H_\Z,L,\uuuu{e})$ of rank 2 with the matrix $S=L(\uuuu{e}^t,\uuuu{e})^t =\begin{pmatrix}1&-1\\0&1\end{pmatrix}$. We saw in Theorem \ref{t8.12} \begin{eqnarray*} C_2^{\uuuu{e}/\{\pm 1\}^2}=C_2^{A_2}\cong \C\times\C^* \quad\textup{with coordinates }(z_1,z_2)\\ \textup{with }u_{1/2}=z_1\pm \frac{2}{3}z_2^{3/2},\quad z_1=\frac{1}{2}(u_1+u_2), z_2=(\frac{3}{4}(u_1-u_2))^{2/3}. \end{eqnarray*} The base point $b_2^{\uuuu{e}/\{\pm 1\}^2}\in C_2^{\uuuu{e}/\{\pm 1\}^2}$ is a point with $(u_1,u_2)=(i,2i)$. We choose $b_2^{\uuuu{e}/\{\pm 1\}^2}$ with $(z_1^0,z_2^0):=(\frac{3}{2}i, (\frac{3}{4})^{2/3}e^{2\pi i/6})$ in the coordinates $(z_1,z_2)$. Recall \begin{eqnarray*} (e_1,e_2)\stackrel{\sigma_1}{\longmapsto} (e_1+e_2,e_1)\stackrel{\sigma_1}{\longmapsto} (-e_2,e_1+e_2)\stackrel{\sigma_1}{\longmapsto} (e_1,-e_2). \end{eqnarray*} The set $\BB^{dist}/\{\pm 1\}^2$ has the three elements \begin{eqnarray*} x_0&=&(e_1,e_2)/\{\pm 1\}^2=\sigma_1^{-1}x_2,\\ x_1&=&(-e_2,e_1+e_2)/\{\pm 1\}^2 =\sigma_1^{-1}x_0,\\ x_2&=&(e_1+e_2,e_1)/\{\pm 1\}^2 = \sigma_1^{-1}x_1. \end{eqnarray*} The walls in $C_2^{\uuuu{e}/\{\pm 1\}^2}$ are the real hypersurfaces of points $(z_1,z_2)$ with $\Imm u_1=\Imm u_2$. They are in the coordinates $(z_1,z_2)$ \begin{eqnarray*} \C\times\{z_2\in\C^*\,|\, \Imm z_2^{3/2}=0\} =\C\times (\R_{>0}\, \dot\cup\, \R_{>0}\cdot e^{2\pi i /3} \, \dot\cup\, \R_{>0}\cdot e^{2\pi i 2/3}). \end{eqnarray*} Between them are the three Stokes regions $F_2^{x_0,x_0},F_2^{x_0,x_1},F_2^{x_0,x_2}$. The picture in Figure \ref{Fig:8.9} shows in the middle a part of the slice $\{z_1^0\}\times \C \subset C_2^{\uuuu{e}/\{\pm 1\}^2}$ and the intersection of this part with the three walls and the three Stokes regions. The three lines with arrows are the three lifts of $\sigma_1\in \pi_1(C_2^{conf},b_2^{conf})$ to paths in $C_2^{\uuuu{e}/\{\pm 1\}^2}$. 
In the outer part, the picture shows for different values of $(z_1^0,z_2)$ standard systems of paths (recall $u_{1/2}=z_1\pm \frac{2}{3}z_2^{3/2}$). The small circle between the points $u_1$ and $u_2$ marks the value $0$. We have chosen the indexing of $u_1$ and $u_2$ in each Stokes region so that $\Imm u_1<\Imm u_2$. Therefore there is a discontinuity of indexing along each wall. \begin{figure} \includegraphics[width=0.9\textwidth]{pic-8-9.png} \caption[Figure 8.9]{Moving with powers of $\sigma_1$ through the three Stokes regions of $C_2^{A_2}$} \label{Fig:8.9} \end{figure} \end{example} \begin{remarks}\label{t8.26} (i) Part (c) and part (a) of Theorem \ref{t8.23} give the following. Let $k\in\{0,1\}$ and let $\beta\in \Br_n$. Let $\www{\beta}:[0,1]\to C_n^{\uuuu{e}/\{\pm 1\}^n}$ be the lift to $C_n^{\uuuu{e}/\{\pm 1\}^n}$ with $\www{\beta}(0)=b_n^{\uuuu{e}/\{\pm 1\}^n}$ of a loop in $C_n^{conf}$ which represents $\beta$. Then the restriction of $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ to $(\C-\{i,2i,...,ni\})\times \{\www{\beta}(1)\}$ is equal to $V_\Z^{(k)}((b_n^{pure},1), [(\uuuu{\gamma}^{st};\id)],\beta^{-1}(\uuuu{e}))$. (ii) This fits well with the indexing of the Stokes regions in Lemma \ref{t8.6} (d) in the case $X=\BB^{dist}/\{\pm 1\}^n$. Then $x_0:=\uuuu{e}/\{\pm 1\}^n$. The Stokes region which contains $\www{\beta}(1)$ is $F_n^{x_0,x}$ with $x=\beta^{-1}(\uuuu{e})/\{\pm 1\}^n\in X$. See also Remark \ref{t8.7} (ii). (iii) One can modify the construction of the bundle $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ in Theorem \ref{t8.23} (d). Suppose that a group antihomomorphism \begin{eqnarray*} \pi_1(C_n^{\uuuu{e}/\{\pm 1\}^n},b_n^{\uuuu{e}/\{\pm 1\}^n}) \to (\{\pm 1\}^n)_S\subset G_\Z \end{eqnarray*} is given. Here an element of $\{\pm 1\}^n$ means an automorphism of $H_\Z$ which changes some signs in the basis $\uuuu{e}$. Then $(\{\pm 1\}^n)_S\subset G_\Z$.
One can twist the automorphism $\beta^{deck,V,(k)}$ in Theorem \ref{t8.23} (d) with the correct element of $(\{\pm 1\}^n)_S$ and obtain a twisted quotient bundle over $\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}- D_{1,n}^{\uuuu{e}/\{\pm 1\}^n}$. But one loses the property of $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ of being a trivial bundle over the open set \begin{eqnarray*} \{(z,b)\in\C\times C_n^{\uuuu{e}/\{\pm 1\}^n}\,|\, \Ree z>\max(\Ree u\,|\, (u,b)\in D_{1,n}^{\uuuu{e}/\{\pm 1\}^n})\}. \end{eqnarray*} This property is needed for a possible extension of such a bundle over a partial compactification of $C_n^{\uuuu{e}/\{\pm 1\}^n}$. Therefore the bundle $V_\Z^{(k),\uuuu{e}/\{\pm 1\}^n}$ is more natural than the twisted bundles. (iv) Consider a bundle $V_\Z^{(k)}((\uuuu{u},r), [(\uuuu{\gamma};\sigma)],\uuuu{f})$ as in Definition \ref{t8.22} (a). In most cases one can recover $\uuuu{f}/\{\pm 1\}^n$ from the bundle and from $[(\uuuu{\gamma};\sigma)]$, namely in all cases with $k=0$ and in all cases with $k=1$ where $(H_\Z,L,\uuuu{e})$ is not reducible with at least two summands of type $A_1$, so especially in all irreducible cases. We explain this. In any case, one recovers from the bundle and the distinguished system of paths $(\uuuu{\gamma};\sigma)$ the local monodromies $s^{(k)}_{f_1},...,s^{(k)}_{f_n}$ along the associated loops. With Lemma \ref{t3.15} (b) and (c) one obtains from the tuple $(s_{f_1}^{(k)},...,s_{f_n}^{(k)})$ of reflections or transvections the distinguished basis $\uuuu{f}/\{\pm 1\}^n$ up to signs. \end{remarks} \chapter[The manifolds in the rank 3 cases] {The manifolds in the rank 3 cases}\label{s9} \setcounter{equation}{0} \setcounter{figure}{0} The manifolds $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ from chapter \ref{s8} were discussed in the rank $2$ cases in section \ref{s8.3}.
There, only three different manifolds $C_2^{\uuuu{e}/\{\pm 1\}^2}$ arose, namely $C_2^{pure}$, $C_2^{A_2}$ and $C_2^{univ}$, and only one manifold $C_2^{S/\{\pm 1\}^2}=C_2^{conf}$. In rank 3 the classification is already much richer. The rank 3 cases are treated in this chapter. Section \ref{s9.1} makes the deck transformations of $C_3^{univ}$ as universal covering of $C_3^{conf}$ and of $C_3^{pure}$ explicit. For that it introduces new coordinates on $C_3^{pure}$. In fact, the new coordinates come from the restriction to $C_3^{pure}$ of a blowing up of the complex line $\{u_1=u_2=u_3\}$ in $\C^3\supset C_3^{pure}$. For the deck transformations the Schwarzian triangle function $T:\H\to \C-\{0,1\}\cong \H/\oooo{\Gamma(2)}$ and a lift $\kappa:\H\to\C$ with $T$ of the logarithm $\ln:\C-\{0\}\dashrightarrow \C$ are needed. Both are treated in Appendix \ref{sc}. Theorem \ref{t9.3} in section \ref{s9.2} gives the main result, a description of all possible manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ in rank 3. It starts with a unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a triangular basis $\uuuu{e}$ and matrix $S=S(\uuuu{x})=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_3(\Z)$ for some $\uuuu{x}\in\Z^3$. Thanks to Theorem \ref{t4.14} we can and will restrict to a local minimum $\uuuu{x}\in C_i$ for some $i\in\{1,2,...,24\}$. Theorem \ref{t7.11} lists all 16 possible pairs $((\Br_3)_{\uuuu{e}/\{\pm 1\}^3}, (\Br_3)_{S/\{\pm 1\}^3})$ of stabilizers; the case $S(-l,2,-l)$ comes with a parameter $l\in \Z_{\geq 3}$. Theorem \ref{t9.3} gives in all 16 cases the manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$. However, this is only coarse information. The covering map of the normal covering $C_3^{\uuuu{e}/\{\pm 1\}^3}\to C_3^{S/\{\pm 1\}^3}$ is also important. But its description requires the whole discussion in the proof, and therefore it is given only in the proof of Theorem \ref{t9.3}.
Also the F-manifold structure and the Euler field are important, as well as possible partial compactifications of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ such that the F-manifold structure extends. They are treated in section \ref{s9.3} in Lemma \ref{t9.7} and Corollary \ref{t9.8}. The cases $A_3$ and $A_2A_1$ have partial compactifications which are well known from singularity theory. Seeing them here is a bit involved, especially in the case $A_3$. \section{The deck transformations on $C_3^{univ}$}\label{s9.1} Theorem \ref{t9.1} first gives new coordinates on $C_3^{pure}$. They are better suited to treat the deck transformations of $C_3^{univ}$ as universal covering $\pr_3^{u,c}:C_3^{univ}\to C_3^{conf}$ and as universal covering $\pr_3^{u,p}:C_3^{univ}\to C_3^{pure}$. It writes the F-manifold structure in these new coordinates. It gives in these coordinates explicitly the six deck transformations of $C_3^{pure}$ as normal covering of $C_3^{conf}$. Finally, it gives explicitly several deck transformations of $C_3^{univ}$ as universal covering of $C_3^{conf}$. Here the Schwarzian triangle function $T:\H\to\C-\{0,1\}$ and the lift $\kappa:\H\to\C$ with $T$ of the logarithm $\ln:\C-\{0\}\dashrightarrow\C$ from Appendix \ref{sc} are used. \begin{theorem}\label{t9.1} (a) The map \begin{eqnarray*} f:C_3^{pure}&\to& \C\times \C^*\times (\C-\{0,1\}),\\ (u_1,u_2,u_3) &\mapsto& (z_1,z_2,z_3)=(u_1+u_2+u_3,u_2-u_1,\frac{u_3-u_1}{u_2-u_1}),\\ \end{eqnarray*} is an isomorphism of complex manifolds. The inverse map is \begin{eqnarray*} f^{-1}:\C\times\C^*\times (\C-\{0,1\}) \to C_3^{pure}\\ (z_1,z_2,z_3) \mapsto (u_1,u_2,u_3)= \frac{1}{3}(z_1-z_2(1+z_3),\\ z_1+z_2(2-z_3),z_1+z_2(-1+2z_3)). \end{eqnarray*} Let $\paa_j:=\frac{\paa}{\paa z_j}$ be the coordinate vector fields of the coordinates $(z_1,z_2,z_3)$. 
The base changes between them and the partial units $e_i=\frac{\paa}{\paa u_i}$ are as follows, \begin{eqnarray*} (\paa_1,\paa_2,\paa_3)&=&(e_1,e_2,e_3)\frac{1}{3} \begin{pmatrix}1&-1-z_3&-z_2\\1&2-z_3&-z_2\\1&-1+2z_3&2z_2 \end{pmatrix},\\ (e_1,e_2,e_3)&=&(\paa_1,\paa_2,\paa_3) \begin{pmatrix}1&1&1\\-1&1&0\\ \frac{z_3-1}{z_2} & \frac{-z_3}{z_2} & \frac{1}{z_2}\end{pmatrix}. \end{eqnarray*} Unit field $e$, Euler field $E$ and multiplication $\circ$ are in canonical coordinates given by \begin{eqnarray*} e=\sum_{j=1}^3 e_j,\quad E=\sum_{j=1}^3 u_je_j,\quad e_i\circ e_j=\delta_{ij}e_i. \end{eqnarray*} In the coordinates $(z_1,z_2,z_3)$ they are given by \begin{eqnarray*} e&=&3\paa_1,\quad E=z_1\paa_1+z_2\paa_2,\\ \paa_2\circ\paa_2&=& \frac{2-2z_3+2z_3^2}{3}\paa_1 + \frac{1-2z_3}{3}\paa_2+\frac{-z_3+z_3^2}{z_2}\paa_3,\\ \paa_2\circ\paa_3&=& \frac{-z_2+2z_2z_3}{3}\paa_1 + \frac{-z_2}{3}\paa_2 + \frac{-1+2z_3}{3}\paa_3,\\ \paa_3\circ\paa_3&=& \frac{2z_2^2}{3}\paa_1 +\frac{z_2}{3}\paa_3. \end{eqnarray*} (b) The map \begin{eqnarray*} \sigma\mapsto \chi_\sigma:= (C_3^{pure}\to C_3^{pure},\ \uuuu{u}\mapsto (u_{\sigma^{-1}(1)},u_{\sigma^{-1}(2)},u_{\sigma^{-1}(3)})) \end{eqnarray*} is an isomorphism from $S_3$ to the group of deck transformations of the covering $C_3^{pure}\to C_3^{conf}$. The corresponding automorphisms $\phi_\sigma:=f\circ \chi_\sigma \circ f^{-1}$ of $\C\times\C^*\times (\C-\{0,1\})$ are as follows (see Theorem \ref{tb.1} (d) for $g_\sigma$): \begin{eqnarray*} \phi_{\id}(\uuuu{z})&=&\uuuu{z},\\ \phi_{(12)}(\uuuu{z})&=& (z_1,-z_2,g_{(12)}(z_3)),\\ \phi_{(13)}(\uuuu{z})&=& (z_1,z_2(1-z_3),g_{(13)}(z_3)),\\ \phi_{(23)}(\uuuu{z})&=& (z_1,z_2z_3,g_{(23)}(z_3)),\\ \phi_{(123)}(\uuuu{z})&=& (z_1,-z_2z_3,g_{(123)}(z_3)),\\ \phi_{(132)}(\uuuu{z})&=& (z_1,z_2(z_3-1),g_{(132)}(z_3)).
\end{eqnarray*} (c) A universal covering of $\C\times\C^*\times (\C-\{0,1\})$ is given by the map \begin{eqnarray*} \www{\pr}_3^{u,p}:\C\times \C\times \H&\to& \C\times\C^*\times (\C-\{0,1\})\\ (z_1,\zeta,\tau)&\mapsto& (z_1,e^\zeta,T(\tau)), \end{eqnarray*} with $T$ as in Theorem \ref{tb.1} (d). The group of deck transformations of the universal covering \begin{eqnarray*} \pr_3^{p,c}\circ f^{-1}\circ \www{\pr}_3^{u,p}: \C\times\C\times\H &\longrightarrow& \C\times \C^*\times (\C-\{0,1\})\\ &\stackrel{\cong}{\longrightarrow}& C_3^{pure}\\ &\stackrel{/S_3}{\longrightarrow}& C_3^{conf} \end{eqnarray*} is isomorphic to the group $\Br_3$. An isomorphism $\psi$ \index{$\psi,\ \psi(\sigma_1),\ \psi(\sigma_2),\ \psi(\sigma^{mon})$} is determined by the images $\psi(\sigma_1)$ and $\psi(\sigma_2)$ of the elementary braids $\sigma_1$ and $\sigma_2$. The following formulas work (recall the lift $\kappa:\H\to \C$ of the (multivalued) logarithm $\log:\C^*\dashrightarrow\C$ in Definition \ref{tb.2} and Lemma \ref{tb.3} (c)), \begin{eqnarray*} \psi(\sigma_1):\C\times\C\times\H&\to& \C\times\C\times\H,\\ (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-\pi i,\tau-1) \quad(\textup{recall }\tau-1=\mu_{(12)}(\tau)),\\ \psi(\sigma_2):\C\times\C\times\H&\to& \C\times\C\times\H,\\ (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta+\kappa(\tau+1),\ \frac{\tau}{\tau+1})\\ &&\qquad (\textup{recall }\frac{\tau}{\tau+1}=\mu_{(13)}(\tau)). \end{eqnarray*} The isomorphism $\psi$ satisfies \begin{eqnarray*} \psi(\sigma^{mon}):(z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-2\pi i,\tau),\\ \psi((\sigma^{mon})^{-1}\sigma_1^2):(z_1,\zeta,\tau)&\mapsto& (z_1,\zeta,\tau-2),\\ \psi(\sigma_2^2):(z_1,\zeta,\tau)&\mapsto& (z_1,\zeta,\frac{\tau}{2\tau+1}),\\ \psi(\sigma_1\sigma_2^2):(z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-\pi i,\frac{-\tau-1}{2\tau+1}),\\ \psi(\sigma_2\sigma_1):(z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-\pi i+\kappa(\tau),\frac{\tau-1}{\tau}).
\end{eqnarray*} Here recall \begin{eqnarray*} \mu_{(12)}^2=(\tau\mapsto \tau-2),\quad \mu_{(13)}^2=(\tau\mapsto\frac{\tau}{2\tau+1}),\\ \mu_{(123)}=(\tau\mapsto \frac{\tau-1}{\tau}) =\textup{rotation of order 3 around }e^{2\pi i/6},\\ \mu(\begin{pmatrix}-1&-1\\2&1\end{pmatrix}) =\textup{rotation of order 2 around }\frac{-1+i}{2}. \end{eqnarray*} \end{theorem} {\bf Proof:} (a) It is rather obvious that $f$ is an isomorphism of complex manifolds with inverse $f^{-1}$ as claimed. The base changes between the two bases $(\paa_1,\paa_2,\paa_3)$ and $(e_1,e_2,e_3)$ of coordinate vector fields are obtained by differentiating $f(\uuuu{u})$ and $f^{-1}(\uuuu{z})$ suitably. Unit field $e$, Euler field $E$ and multiplication $\circ$ in the new coordinates $\uuuu{z}=(z_1,z_2,z_3)$ can be calculated straightforwardly. (b) Straightforward calculations. (c) $\psi(\sigma_1)$ and $\psi(\sigma_2)$ are lifts to $\C\times\C\times\H$ of the automorphisms $\phi_{(12)}$ and $\phi_{(13)}$ of $\C\times\C^*\times(\C-\{0,1\})$ because \begin{eqnarray*} \www{\pr}_3^{u,p}(\psi(\sigma_1)(z_1,\zeta,\tau)) &=& \www{\pr}_3^{u,p}(z_1,\zeta-\pi i,\mu_{(12)}(\tau))\\ &=&(z_1,e^{\zeta-\pi i},T(\mu_{(12)}(\tau)))\\ &=& (z_1,-e^{\zeta},g_{(12)}(T(\tau)))\\ &=&(z_1,-z_2,g_{(12)}(z_3)),\\ \www{\pr}_3^{u,p}(\psi(\sigma_2)(z_1,\zeta,\tau)) &=& \www{\pr}_3^{u,p}(z_1,\zeta+\kappa(\tau+1), \mu_{(13)}(\tau))\\ &=&(z_1,e^{\zeta}T(\tau+1),T(\mu_{(13)}(\tau)))\\ &=& (z_1,e^{\zeta}(1-T(\tau)),g_{(13)}(T(\tau)))\\ &=&(z_1,z_2(1-z_3),g_{(13)}(z_3)).
\end{eqnarray*} They satisfy the relation \begin{eqnarray*} \psi(\sigma_1)\psi(\sigma_2)\psi(\sigma_1) =\psi(\sigma_2)\psi(\sigma_1)\psi(\sigma_2) \end{eqnarray*} because \begin{eqnarray*} \psi(\sigma_1)\psi(\sigma_2)\psi(\sigma_1):(z_1,\zeta,\tau) &\stackrel{\sigma_1}{\longmapsto}& (z_1,\zeta-\pi i,\tau-1)\\ &\stackrel{\sigma_2}{\longmapsto}& (z_1,\zeta-\pi i+\kappa(\tau),\frac{\tau-1}{\tau})\\ &\stackrel{\sigma_1}{\longmapsto}& (z_1,\zeta-2\pi i+\kappa(\tau),\frac{-1}{\tau}),\\ \psi(\sigma_2)\psi(\sigma_1)\psi(\sigma_2):(z_1,\zeta,\tau) &\stackrel{\sigma_2}{\longmapsto}& (z_1,\zeta+\kappa(\tau+1),\frac{\tau}{\tau+1})\\ &\stackrel{\sigma_1}{\longmapsto}& (z_1,\zeta-\pi i+\kappa(\tau+1),\frac{-1}{\tau+1})\\ &\stackrel{\sigma_2}{\longmapsto}& (z_1,\zeta-\pi i+\kappa(\tau+1)+\kappa(\frac{\tau}{\tau+1}), \frac{-1}{\tau}) \end{eqnarray*} and $-2\pi i +\kappa(\tau) =-\pi i +\kappa(\tau+1) +\kappa(\frac{\tau}{\tau+1})$ by Theorem \ref{tb.3} (c) (iv). Therefore $\psi$ is a group homomorphism from $\Br_3$ to the group of deck transformations of the universal covering $\C\times\C\times \H\to C_3^{conf}$. It is surjective because $\psi(\sigma^{mon})$ is \begin{eqnarray*} \psi(\sigma^{mon}) &=& (\psi(\sigma_1)\psi(\sigma_2)\psi(\sigma_1))^2:\\ (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-4\pi i+\kappa(\tau)+\kappa(\frac{-1}{\tau}),\tau)\\ &&= (z_1,\zeta-2\pi i,\tau) \end{eqnarray*} because \begin{eqnarray*} \kappa(\tau)+\kappa(\frac{-1}{\tau}) &=& \kappa(\tau)+\kappa(\frac{-1}{\tau}+2)+2\pi i\\ &=& \kappa(\tau)+\kappa(\frac{2\tau-1}{\tau})+2\pi i\\ &=& 2\pi i \quad\textup{ by Theorem \ref{tb.3} (c) (iv).} \end{eqnarray*} $\psi(\sigma_2^2)$ is as claimed because by Theorem \ref{tb.3} (c) (iv) $$\zeta + \kappa(\tau+1)+\kappa(\frac{2\tau+1}{\tau+1}) =\zeta + \kappa(\tau+1)+\kappa(\frac{2(\tau+1)-1}{\tau+1}) =\zeta.$$ The formulas for $\psi((\sigma^{mon})^{-1}\sigma_1^2)$, $\psi(\sigma_1\sigma_2^2)$ and $\psi(\sigma_2\sigma_1)$ are calculated straightforwardly.
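For instance, for $\psi(\sigma_2\sigma_1)=\psi(\sigma_2)\psi(\sigma_1)$ one applies $\psi(\sigma_1)$ first (a composition of automorphisms is read from the right), \begin{eqnarray*} (z_1,\zeta,\tau) &\stackrel{\sigma_1}{\longmapsto}& (z_1,\zeta-\pi i,\tau-1)\\ &\stackrel{\sigma_2}{\longmapsto}& (z_1,\zeta-\pi i+\kappa((\tau-1)+1),\frac{\tau-1}{(\tau-1)+1})\\ &&=(z_1,\zeta-\pi i+\kappa(\tau),\frac{\tau-1}{\tau}). \end{eqnarray*}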
\hfill$\Box$ \begin{remarks}\label{t9.2} (i) The group of deck transformations of the universal covering $\C\times\C\times\H\to C_3^{pure}$ splits into the product \begin{eqnarray*} \langle \psi(\sigma^{mon})\rangle\times \langle \psi((\sigma^{mon})^{-1}\sigma_1^2),\psi(\sigma_2^2) \rangle \\ =\langle ((z_1,\zeta,\tau)\mapsto (z_1,\zeta-2\pi i,\tau)) \rangle \times \\ \langle((z_1,\zeta,\tau)\mapsto(z_1,\zeta,\mu_{(12)}^2(\tau))), ((z_1,\zeta,\tau)\mapsto(z_1,\zeta,\mu_{(13)}^2(\tau)))\rangle. \end{eqnarray*} This fits with the splitting of $\Br_3^{pure}$, \begin{eqnarray*} \Br_3^{pure}=\langle\sigma^{mon}\rangle \times \langle (\sigma^{mon})^{-1}\sigma_1^2,\sigma_2^2\rangle. \end{eqnarray*} (ii) The coordinate change $$\C^3\to\C^3,\quad (u_1,u_2,u_3)\mapsto (z_1,z_2,z_4):=(u_1+u_2+u_3,u_2-u_1,u_3-u_1),$$ is linear. $f$ is the restriction to $C_3^{pure}$ of the composition of this coordinate change with the blowing up \index{blowing up} $$\C^3 \dashrightarrow \C\times \OO_{\P^{1}}(-1),\quad (z_1,z_2,z_4)\dashrightarrow (z_1,z_2,\frac{z_4}{z_2}) =(z_1,z_2,z_3),$$ of $\C\times\{(0,0)\}$ in one of the two natural charts of $\C\times \OO_{\P^1}(-1)$. \begin{figure}[H] \includegraphics[width=1.0\textwidth]{pic-9-1new.png} \caption[Figure 9.1]{Blowing up of $\C\times\{(0,0)\} \subset \C^3$} \label{Fig:9.1} \end{figure} Also, the multiplication is calculated only in this chart $\C^3$ with coordinates $(z_1,z_2,z_3)$. We refrain from calculating it in the other chart $\C^3$ with coordinates $(z_1,\www{z}_2,\www{z}_3)=(z_1,\frac{z_2}{z_4},z_4)$ as we do not really need it. However, in the chart considered, the hyperplane in the Maxwell stratum where $u_2=u_1$ lies above $z_3=\infty$, so for that hyperplane the other chart would be useful. The hyperplanes where $u_3=u_1$ respectively $u_3=u_2$ lie above $z_3=0$ respectively $z_3=1$. (iii) The multiplication on $\C\times\C\times\H\cong C_3^{univ}$ involves $\frac{\paa T(\tau)}{\paa\tau}$.
Unit field and Euler field on it are $e=3\paa_1$ and $E=z_1\paa_1+\paa_{\zeta}$. \end{remarks} \section{The manifolds as quotients and their covering maps} \label{s9.2} Let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice of rank 3 with triangular basis $\uuuu{e}$ and matrix $S=S(\uuuu{x})=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_3(\Z)$ for some $\uuuu{x}\in\Z^3$. We can and will restrict to a local minimum $\uuuu{x}\in C_i$ for some $i\in\{1,2,...,24\}$ as in Theorem \ref{t7.11}. This theorem lists all 16 possible pairs $((\Br_3)_{\uuuu{e}/\{\pm 1\}^3}, (\Br_3)_{S/\{\pm 1\}^3})$ of stabilizers; the case $S(-l,2,-l)$ comes with a parameter $l\in \Z_{\geq 3}$. Theorem \ref{t9.3} below gives in all 16 cases the manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$. However, this is only coarse information. The covering map of the normal covering $C_3^{\uuuu{e}/\{\pm 1\}^3}\to C_3^{S/\{\pm 1\}^3}$ is also important. But its description requires the whole discussion in the proof, and therefore it is given only in the proof of Theorem \ref{t9.3}. The group $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ acts on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$ via its image in $PSL_2(\Z)$ under the homomorphism $\Br_3\to PSL_2(\Z)$ in Remark \ref{t4.15} (i). In all cases except $A_3$ and $A_2A_1$ this action is free and the quotient of $\H$ by this action is a noncompact complex curve $C$. Furthermore, in all cases except $A_1^3$, $\HH_{1,2}$, $A_2A_1$, and $A_3$ the group $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ does not contain a power of $\sigma^{mon}$. Then the quotient $\C\times\C\times\H/(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}\cong C_3^{\uuuu{e}/\{\pm 1\}^3}$ is $$\C\times (\textup{an affine }\C\textup{-bundle over }C).$$ Remark \ref{t9.4} (i) explains what this means, and Remark \ref{t9.4} (ii) states that it is isomorphic to $\C\times\C\times C$. Remark \ref{t9.4} (iv) states that any $\C^*$-bundle over a noncompact complex curve $C$ is isomorphic to $\C^*\times C$.
\begin{theorem}\label{t9.3} Consider a local minimum $\uuuu{x}\in C_i\subset \Z^3$ for some $i\in\{1,2,...,24\}$ and the pseudo-graph $\GG_j$ with $\GG_j=\GG(\uuuu{x})$. In the following table the first and second column are copied from the table in Theorem \ref{t7.11}. The third and fourth column give the manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$. Here $\D:=\{z\in\C\,|\, |z|<1\}$ denotes the unit disk, \index{$\D,\ \D^*,\ \D^{**}$} $\D^*:=\D-\{0\}$, $\D^{**}:=\D-\{0,\frac{1}{2}\}$ and $\C^{***}:=\C-\{z\in\C\,|\, z^3=1\}$. \index{$\C^{***}$} \begin{eqnarray*} \begin{array}{l|l|l|l} & \textup{sets} & C_3^{\uuuu{e}/\{\pm 1\}^3} & C_3^{S/\{\pm 1\}^3}\\ \hline \GG_1 & C_1\ (A_1^3) & C_3^{pure} & C_3^{conf}\\ \GG_1 & C_2\ (\HH_{1,2}) & \C\times\C^*\times\H & C_3^{conf} \\ \GG_2 & C_3\ (A_2A_1) & \frac{\C\times\C^*\times (\C-\{0,1\})} {\langle\phi_{(12)}\rangle} & C_3^{pure}/\langle\phi_{(12)}\rangle \\ \GG_2 & C_4\ (\P^1A_1),C_5 & \C^2\times (\C-\{0,1\}) & C_3^{pure}/\langle\phi_{(12)}\rangle \\ \GG_3 & C_6\ (A_3) & \C\times\frac{\C^*\times\C^{***}} {\textup{group of order 3}} & \C\times\frac{\C^*\times\C^{***}} {\textup{group of order 3}} \\ \GG_4 & C_7 \ (\whh{A}_2) & \C^2\times \C^{***} & \C\times\frac{\C^*\times \C^{***}} {\textup{group of order 3}} \\ \GG_5 & C_8,\ C_9\ ((-l,2,-l)) &\C\times\C\times\D^* & \C\times\C^*\times \D^* \\ \GG_6 & C_{10}\ (\P^2),\ C_{11},\ C_{12} & C_3^{univ} & \frac{\C\times\C^*\times\H } {\textup{group of order 3}}\\ \GG_7 & C_{13}\ (\textup{e.g. }(4,4,8)) & C_3^{univ} & \frac{\C\times\C^*\times\H} {\textup{group of order 2}} \\ \GG_8 & C_{14}\ (\textup{e.g. }(3,4,6)) & C_3^{univ} & \C\times\C^*\times \H \\ \GG_9 & C_{15},\ C_{16},\ C_{23},\ C_{24} & C_3^{univ} & \C\times\C^*\times\H \\ \GG_{10} & C_{17}\ (\textup{e.g. }(-2,-2,0)) & \C\times\C\times\D^* & \C\times\C^*\times\D^* \\ \GG_{11} & C_{18}\ (\textup{e.g. }(-3,-2,0)) & \C\times\C\times\D^* & \C\times\C^*\times\D^* \\ \GG_{12} & C_{19}\ (\textup{e.g. 
}(-2,-1,0)) & \C\times\C\times\D^{**} & \C\times\C^*\times\D^{**} \\ \GG_{13} & C_{20}\ (\textup{e.g. }(-2,-1,-1)) & \C\times\C\times\D^{**} & \C\times\C^*\times\D^{**} \\ \GG_{14} & C_{21},\ C_{22} & \C\times\C\times \D^* & \C\times\C^*\times\D^* \end{array} \end{eqnarray*} In the case $\GG_2\,\&\, C_3\ (A_2A_1)$ the moduli spaces and the covering can also be presented as follows, \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3}\cong @. \hspace*{0.3cm}\C\times \{(z_8,z_9)\in\C^*\times\C\,|\, z_8^3-4z_9^2\neq 0\} @. \hspace*{0.5cm}(z_1,z_8,z_9)\\ @VV{3:1}V @VV{3:1}V @VV{3:1}V\\ C_3^{S/\{\pm 1\}^3}\cong @. \hspace*{0.3cm}\C\times\{(z_{10},z_9)\in\C^*\times\C\,|\, z_{10}-4z_9^2\neq 0\} @. \hspace*{0.5cm}(z_1,z_8^3,z_9) \end{CD} \end{eqnarray*} In the case $\GG_3\,\&\, C_6\ (A_3)$ the moduli spaces and the covering can also be presented as follows, \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3}\cong @. \hspace*{0.3cm}\C\times \{(z_{13},z_{14})\in\C^*\times\C\,|\, z_{14}^3-z_{13}^2\neq 0\} @. \hspace*{0.5cm} (z_1,z_{13},z_{14})\\ @VV{4:1}V @VV{4:1}V @VV{4:1}V\\ C_3^{S/\{\pm 1\}^3}\cong @. \hspace*{0.3cm}\C\times\frac{\{(z_{15},z_{14})\in\C^*\times\C\,|\,z_{14}^3-z_{15}\neq 0\}} {\langle (z_{15},z_{14})\mapsto (-z_{15},-z_{14})\rangle} @. \hspace*{0.5cm} [(z_1,z_{13}^2,z_{14})] \end{CD} \end{eqnarray*} \end{theorem} {\bf Proof:} {\bf The reducible case $\GG_1\,\&\, C_1\, (A_1^3)$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \Br_3^{pure}$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \Br_3$. Therefore $C_3^{\uuuu{e}/\{\pm 1\}^3}=C_3^{pure}$ and $C_3^{\uuuu{x}/\{\pm 1\}^3}=C_3^{conf}$. The normal covering $C_3^{pure}\to C_3^{conf}$ is known. \medskip {\bf The case $\GG_1\,\&\, C_2\, (\HH_{1,2})$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle(\sigma^{mon})^2 \rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \Br_3$. Therefore $C_3^{\uuuu{x}/\{\pm 1\}^3}=C_3^{conf}$.
The equality $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle(\sigma^{mon})^2 \rangle$ and \begin{eqnarray*} \psi((\sigma^{mon})^2): \C\times\C\times\H\to \C\times\C\times\H, \ (z_1,\zeta,\tau)\mapsto (z_1,\zeta-4\pi i,\tau) \end{eqnarray*} show \begin{eqnarray*} C_3^{univ}\cong \C\times\C\times\H&\to& \C\times\C^*\times\H \cong C_3^{\uuuu{e}/\{\pm 1\}^3},\\ (z_1,\zeta,\tau)&\mapsto& (z_1,e^{\zeta/2},\tau)=(z_1, z_5,\tau). \end{eqnarray*} The quotient $(\Br_3)_{S/\{\pm 1\}^3}/(\Br_3)_{\uuuu{e}/\{\pm 1\}^3} =\Br_3/\langle (\sigma^{mon})^2\rangle$ is by Remark \ref{t4.15} (i) isomorphic to $SL_2(\Z)=\langle A_1,A_2 \rangle$ where $A_1=\begin{pmatrix}1&-1\\0&1\end{pmatrix}$, $A_2=\begin{pmatrix}1&0\\1&1\end{pmatrix}$ and $\sigma_1,\sigma_2,\sigma^{mon}$ are mapped to $A_1,A_2,-E_2$. They induce the following deck transformations of $\C\times\C^*\times\H \cong C_3^{\uuuu{e}/\{\pm 1\}^3}$, \begin{eqnarray*} \sigma_1:\ (z_1,z_5,\tau) &\mapsto & (z_1,(-i)z_5,\tau-1),\\ \sigma_2:\ (z_1,z_5,\tau) &\mapsto & (z_1,z_5\cdot e^{\kappa(\tau+1)/2},\frac{\tau}{\tau+1}),\\ \sigma^{mon}:\ (z_1,z_5,\tau) &\mapsto & (z_1,-z_5,\tau). \end{eqnarray*} Dividing out $\langle\sigma^{mon}\rangle$ first, one obtains $\C\times\C^*\times\H$ with coordinates $(z_1,z_2,\tau)=(z_1,z_5^2,\tau)$ and an action of $PSL_2(\Z)$ on it, which is induced by the following action of the classes $[A_1]$ and $[A_2]$ in $PSL_2(\Z)$, \begin{eqnarray*} {}[A_1]\sim \sigma_1: (z_1,z_2,\tau)&\mapsto & (z_1,-z_2,\tau-1) =(z_1,-z_2,\mu_{(12)}(\tau)),\\ {}[A_2]\sim \sigma_2: (z_1,z_2,\tau)&\mapsto & (z_1,z_2T(\tau+1),\frac{\tau}{\tau+1})\\ &&=(z_1,z_2(1-T(\tau)),\mu_{(13)}(\tau)). \end{eqnarray*} The normal subgroup $\oooo{\Gamma(2)} =\langle [A_1^2],[A_2^2] \rangle$ acts nontrivially only on the third factor $\H$ with quotient $\C-\{0,1\}$, so its quotient is $C_3^{pure}$. The action of the quotient group $PSL_2(\Z)/\oooo{\Gamma(2)} \cong S_3$ gives $C_3^{conf}$.
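The deck transformations of $\C\times\C^*\times\H\cong C_3^{\uuuu{e}/\{\pm 1\}^3}$ above follow directly from Theorem \ref{t9.1} (c): with $z_5=e^{\zeta/2}$, the shifts of $\zeta$ under $\psi(\sigma_1)$, $\psi(\sigma_2)$ and $\psi(\sigma^{mon})$ give \begin{eqnarray*} \zeta\mapsto\zeta-\pi i &\Longrightarrow& z_5\mapsto e^{-\pi i/2}z_5=(-i)z_5,\\ \zeta\mapsto\zeta+\kappa(\tau+1) &\Longrightarrow& z_5\mapsto z_5\cdot e^{\kappa(\tau+1)/2},\\ \zeta\mapsto\zeta-2\pi i &\Longrightarrow& z_5\mapsto e^{-\pi i}z_5=-z_5. \end{eqnarray*}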
\medskip We treat the cases $\GG_2\,\&\, C_4\, (\P^1A_1), C_5$ before the case $\GG_2\,\&\, C_3\, (A_2A_1)$. \medskip {\bf The reducible cases $\GG_2\,\&\, C_4\, (\P^1A_1), C_5$:}\\ By the proof of Theorem \ref{t7.11}, the group $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle\sigma_2^2, (\sigma^{mon})^{-1}\sigma_1^2\rangle$ is the normal closure of $\sigma_2^2$ in the group $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle \sigma_1,\sigma_2^2\rangle$, so especially a normal subgroup. The deck transformations $\psi(\sigma_2^2)$ and $\psi((\sigma^{mon})^{-1}\sigma_1^2)$ of $C_3^{univ}$ in Theorem \ref{t9.1} (c) act nontrivially only on the third factor $\H$, and there they act as the generators $[A_2^2]$ and $[A_1^2]$ of $\oooo{\Gamma(2)}$. Therefore \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times(\C-\{0,1\}) \quad \textup{with coordinates }(z_1,\zeta,z_3). \end{eqnarray*} To obtain $C_3^{S/\{\pm 1\}^3}$ from $C_3^{\uuuu{e}/\{\pm 1\}^3}$, it is a priori sufficient to divide out the action on $C_3^{\uuuu{e}/\{\pm 1\}^3}$ which the deck transformation $\psi(\sigma_1)$ of $C_3^{univ}$ induces. But it is easier to first divide out the action of $\sigma^{mon}\in\langle \sigma_1,\sigma_2^2\rangle =(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$ on $C_3^{\uuuu{e}/\{\pm 1\}^3}$. The quotient is $C_3^{pure}\cong \C\times \C^*\times (\C-\{0,1\})$. On this space the action of $\sigma_1$ is the action of $\phi_{(12)}$, which is a fixed point free involution. Therefore $C_3^{S/\{\pm 1\}^3} = C_3^{pure}/\langle \phi_{(12)}\rangle$. \medskip {\bf The reducible case $\GG_2\,\&\, C_3\, (A_2A_1)$:}\\ Also here $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle \sigma_1,\sigma_2^2\rangle$, so $C_3^{S/\{\pm 1\}^3}=C_3^{pure}/\langle\phi_{(12)}\rangle$. Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle \sigma_2^2, (\sigma^{mon})^{-1}\sigma_1^2,\sigma^{mon}\sigma_1\rangle$. First, $C_3^{univ}$ is divided by the action of the normal subgroup $\langle \sigma_2^2,(\sigma^{mon})^{-1}\sigma_1^2 \rangle$.
The quotient is isomorphic to $\C\times\C\times (\C-\{0,1\})$ with coordinates $(z_1,\zeta,z_3)$ by the discussion in the cases $\GG_2\,\&\, C_4,C_5$. The deck transformation $\psi(\sigma^{mon}\sigma_1)$ on $C_3^{univ}$ induces on this manifold the automorphism $$(z_1,\zeta,z_3)\mapsto (z_1,\zeta-3\pi i,1-z_3).$$ If we first divide out the square of this action, the quotient is isomorphic to $\C\times\C^*\times (\C-\{0,1\})$ with coordinates $(z_1,z_6,z_3)$ with $z_6=e^{\zeta/3}$, and $\psi(\sigma^{mon}\sigma_1)$ induces on it the action of $\phi_{(12)}: (z_1,z_6,z_3)\mapsto (z_1,-z_6,1-z_3)$. We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,z_6,z_3) @>>> (z_1,z_6^3,z_3)=(z_1,z_2,z_3) \\ \C\times\C^*\times (\C-\{0,1\}) @>{3:1}>\text{covering}> \C\times\C^*\times (\C-\{0,1\}) \cong C_3^{pure} \\ @VV{/\langle\phi_{(12)}\rangle}V @VV{/\langle\phi_{(12)}\rangle}V \\ C_3^{\uuuu{e}/\{\pm 1\}^3} @>{3:1}>\text{covering}> C_3^{S/\{\pm 1\}^3} = C_3^{pure}/\langle \phi_{(12)}\rangle \\ \end{CD} \end{eqnarray*} However, $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ can be presented in a better way. Write $z_7:=z_3-\frac{1}{2}$. Then $\phi_{(12)}:(z_1,z_6,z_7)\mapsto (z_1,-z_6,-z_7)$ and $\phi_{(12)}:(z_1,z_2,z_7)\mapsto (z_1,-z_2,-z_7)$. The following two diagrams belong together. The upper diagram shows the sets, the lower diagram shows the maps. The horizontal maps are isomorphisms. The sets on the right side in the upper diagram are better presentations of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ than the quotients on the left side. See Corollary \ref{t9.8}.
\begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \frac{\C\times\C^*\times (\C-\{\pm\frac{1}{2}\})}{\langle\phi_{(12)}\rangle} @>{\cong}>> \C\times \{(z_8,z_9)\in\C^*\times\C\,|\, z_8^3-4z_9^2\neq 0\}\\ @VV{3:1}V @VV{3:1}V \\ C_3^{S/\{\pm 1\}^3}\cong \frac{\C\times\C^*\times (\C-\{\pm\frac{1}{2}\})}{\langle\phi_{(12)}\rangle} @>{\cong}>> \C\times\{(z_{10},z_9)\in\C^*\times\C\,|\, z_{10}-4z_9^2\neq 0\} \end{CD} \end{eqnarray*} \begin{eqnarray*} \begin{CD} [(z_1,z_6,z_7)] @>>> (z_1,z_6^2,z_6^3z_7) @. =(z_1,z_8,z_9) \\ @VV{3:1}V @. @VV{3:1}V \\ [(z_1,z_6^3,z_7)] @. @. (z_1,z_8^3,z_9) \\ @| @. @| \\ [(z_1,z_2,z_7)] @>>> (z_1,z_2^2,z_2z_7) @. =(z_1,z_{10},z_9) \end{CD} \end{eqnarray*} Geometrically, the horizontal maps are, on the level of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$, restrictions of the inverse of the blowing up described in Remark \ref{t9.2} (ii). \medskip We treat the case $\GG_4\,\&\, C_7\, (\widehat{A}_2)$ before the case $\GG_3\,\&\, C_6\, (A_3)$. \medskip {\bf The case $\GG_4\,\&\, C_7\, (\widehat{A}_2)$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle \sigma_1^3,\sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle \sigma_2\sigma_1,\sigma_1^3\rangle$. The surjective homomorphism $\Br_3\to PSL_2(\Z)$ in Remark \ref{t4.15} (i) catches the action of $\Br_3$ on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$. It maps the subgroup $\langle \sigma_1^3,\sigma_2^3, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle$ to the subgroup $\langle [A_1^3],[A_2^3],[A_2A_1^3A_2^{-1}]\rangle =\oooo{\Gamma(3)}\subset PSL_2(\Z)$. This group is isomorphic to $G^{free,3}$. The action of $\oooo{\Gamma(3)}$ on $\H$ is free.
The quotient $\H/\oooo{\Gamma(3)}$ is by Theorem \ref{tb.1} (c) isomorphic to $$\P^1-\{\textup{the four vertices of a tetrahedron}\} \cong \C-\{z_{11}\,|\, z_{11}^3=1\}=:\C^{***}.$$ Therefore the quotient $C_3^{\uuuu{e}/\{\pm 1\}^3}$ of $C_3^{univ}$ by $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ is isomorphic to \begin{eqnarray*} &&\C\times (\textup{an affine }\C\textup{-bundle over } \C^{***})\\ &\cong& \C\times\C\times \C^{***} \quad\textup{with coordinates }(z_1,\zeta,z_{11}) \end{eqnarray*} by Remark \ref{t9.4} (ii). Because of $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle \sigma_2\sigma_1,\sigma_1^3\rangle$, in order to obtain $C_3^{S/\{\pm 1\}^3}$ it is a priori sufficient to divide out the action on $C_3^{\uuuu{e}/\{\pm 1\}^3}$ which the deck transformation $\psi(\sigma_2\sigma_1)$ of $C_3^{univ}$ induces. But it is easier to first divide out the action of $\sigma^{mon}\in\langle \sigma_2\sigma_1,\sigma_1^3\rangle$ on $C_3^{\uuuu{e}/\{\pm 1\}^3}$. The quotient is isomorphic to $\C\times\C^*\times \C^{***}$ with coordinates $(z_1,z_2,z_{11})$. Now, because of $(\sigma_2\sigma_1)^3 =\sigma^{mon}$, $\sigma_2\sigma_1$ acts as an automorphism of order three on the $\C^*$-bundle $\C^*\times \C^{***}$. To determine this automorphism, recall the action of $\sigma_2\sigma_1$ on $\C\times\C\times\H\cong C_3^{univ}$ in Theorem \ref{t9.1} (c), \begin{eqnarray*} \psi(\sigma_2\sigma_1):(z_1,\zeta,\tau)\mapsto (z_1,\zeta-\pi i+\kappa(\tau),\frac{\tau-1}{\tau}). \end{eqnarray*} The M\"obius transformation $\mu_{(123)}=(\tau\mapsto \frac{\tau-1}{\tau})$ is elliptic of order three with fixed point $\tau_0:=e^{2\pi i /6}$. It acts on the tangent space $T_{\tau_0}\H$ by multiplication with $e^{-2\pi i/3}$, because $\mu_{(123)}(\tau_0+\varepsilon) \approx \tau_0+\varepsilon e^{-2\pi i/3}$ for small $\varepsilon\in\C$. The coordinate $z_{11}$ on $\C^{***}$ can be chosen so that $\sigma_2\sigma_1$ acts on $\C^{***}$ by $z_{11}\mapsto z_{11}e^{-2\pi i /3}$; in particular $\tau_0$ maps to $z_{11}=0$. 
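The claimed fixed point and tangent space action of $\mu_{(123)}$ can be checked directly: with $\tau_0=e^{2\pi i/6}$ one has $\tau_0-1=e^{2\pi i/3}=\tau_0^2$, hence \begin{eqnarray*} \mu_{(123)}(\tau_0)=\frac{\tau_0-1}{\tau_0}=\frac{\tau_0^2}{\tau_0}=\tau_0, \qquad \mu_{(123)}'(\tau_0)=\frac{1}{\tau_0^2}=e^{-2\pi i/3}, \end{eqnarray*} which gives $\mu_{(123)}(\tau_0+\varepsilon)=\tau_0+\varepsilon e^{-2\pi i/3}+O(\varepsilon^2)$ for small $\varepsilon\in\C$.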
Because of $T(\tau_0)=\tau_0$, $\sigma_2\sigma_1$ acts on the $\C^*$-fiber over $z_{11}=0$ by \begin{eqnarray*} z_2=e^\zeta\mapsto e^{\zeta-\pi i+\kappa(\tau_0)} =z_2(-1)T(\tau_0)=z_2 e^{-2\pi i /3}. \end{eqnarray*} By Remark \ref{t9.4} (iv) we can choose the trivialization of the $\C^*$-bundle $\C^*\times \C^{***}$ over $\C^{***}$ such that the action of $\sigma_2\sigma_1$ on it is the action $(z_2,z_{11})\mapsto (z_2e^{-2\pi i /3},z_{11}e^{-2\pi i /3})$. We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,\zeta,z_{11}) @>>> (z_1,e^\zeta,z_{11})=(z_1,z_2,z_{11}) \\ C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C\times \C^{***} @>{/\langle \sigma^{mon}\rangle}>> \C\times\C^*\times \C^{***} \\ @. @VV{/\langle \sigma_2\sigma_1\rangle}V \\ \hspace*{2cm}C_3^{S/\{\pm 1\}^3}\cong @. \C\times\frac{\C^*\times \C^{***}} {\langle (z_2,z_{11})\mapsto (z_2e^{2\pi i /3},z_{11}e^{2\pi i/3}) \rangle} \\ \end{CD} \end{eqnarray*} \medskip {\bf The case $\GG_3\,\&\, C_6\, (A_3)$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle \sigma_1\sigma_2,\sigma_1^3\rangle$. In particular, $(\sigma_1\sigma_2)^4=\sigma_1\sigma_2\sigma^{mon}$ and \begin{eqnarray*} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}&\stackrel{3:1}{\supset}& \langle \sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2 (=\sigma_1\sigma_2^3\sigma_1^{-1}),\sigma_2^3,(\sigma^{mon})^4 \rangle\\ &\supset& \langle\sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2, \sigma_2^3 \rangle, \end{eqnarray*} and the group $\langle\sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2, \sigma_2^3 \rangle$ maps under the surjective homomorphism $\Br_3\to PSL_2(\Z)$ in Remark \ref{t4.15} (i) to the group $$\langle [A_1^3],[A_2^{-1}A_1^3A_2],[A_2^3]\rangle =\oooo{\Gamma(3)}\subset PSL_2(\Z).$$ The action of $\oooo{\Gamma(3)}$ on $\H$ is free, and the quotient is by Theorem \ref{tb.1} (c) isomorphic to $\C^{***}$. 
The quotient of $C_3^{univ}$ by $\langle \sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2, \sigma_2^3\rangle$ is isomorphic to \begin{eqnarray*} &&\C\times (\textup{an affine }\C\textup{-bundle over }\C^{***}) \\ &\cong& \C\times\C\times\C^{***}\quad\textup{with coordinates } (z_1,\zeta,z_{11}) \end{eqnarray*} by Remark \ref{t9.4} (ii). The quotient of $C_3^{univ}$ by $\langle \sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2, \sigma_2^3,(\sigma^{mon})^4\rangle$ is isomorphic to \begin{eqnarray*} \C\times\C^*\times\C^{***}\quad\textup{with coordinates } (z_1,z_{12},z_{11})\textup{ with }z_{12}=e^{\zeta/4}. \end{eqnarray*} On this quotient $(\sigma_1\sigma_2)^4$ acts as an automorphism of order three. We now determine this action. First, it acts on $\C\times\C^*\times\H$ with coordinates $(z_1,z_{12},\tau)$ with $z_{12}=e^{\zeta/4}$ by $$(\sigma_1\sigma_2)^4:(z_1,z_{12},\tau)\mapsto (z_1,z_{12}e^{-2\pi i 3/8}e^{\kappa(\tau+1)/4},\frac{-1}{\tau+1}). $$ The M\"obius transformation $(\tau\mapsto \frac{-1}{\tau+1})$ is elliptic of order three with fixed point $\tau_1:=e^{2\pi i /3}$. It acts on the tangent space $T_{\tau_1}\H$ by multiplication with $e^{-2\pi i/3}$ because $\frac{-1}{(\tau_1+\varepsilon)+1}\approx \tau_1+\varepsilon e^{-2\pi i/3}$ for small $\varepsilon\in\C$. The coordinate $z_{11}$ on $\C^{***}$ can be chosen so that $(\sigma_1\sigma_2)^4$ acts on $\C^{***}$ by $z_{11}\mapsto z_{11}e^{-2\pi i/3}$; in particular $\tau_1$ maps to $z_{11}=0$. Because of $\kappa(\tau_1+1)=\kappa(e^{2\pi i/6})= \frac{2\pi i}{6}$, $(\sigma_1\sigma_2)^4$ acts on the $\C^*$-fiber over $z_{11}=0$ by \begin{eqnarray*} z_{12}=e^{\zeta/4}\mapsto e^{(\zeta-3\pi i+\kappa(\tau_1+1))/4} =z_{12}e^{-2\pi i 3/8+2\pi i/24}=z_{12}e^{-2\pi i/3}. 
\end{eqnarray*} By Remark \ref{t9.4} (iv) we can choose the trivialization of the $\C^*$-bundle $\C^*\times\C^{***}$ over $\C^{***}$ with coordinates $(z_{12},z_{11})$ such that the action of $(\sigma_1\sigma_2)^4$ on it becomes the action $(z_{12},z_{11})\mapsto (z_{12}e^{-2\pi i/3},z_{11}e^{-2\pi i/3})$. $C_3^{S/\{\pm 1\}^3}$ is obtained by dividing out the action of $\sigma^{mon}$, because $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3} =\langle\sigma_1\sigma_2,\sigma_1^3\rangle$ is generated by $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ and by $\sigma^{mon}$, with $(\sigma^{mon})^4\in (\Br_3)_{\uuuu{x}/\{\pm 1\}^3}$. We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,z_{12},z_{11}) @>>> (z_1,z_{12}^4,z_{11})=(z_1,z_2,z_{11}) \\ \C\times\C^*\times \C^{***} @>{4:1}>> \C\times\C^*\times \C^{***} \\ @V{3:1}VV @V{3:1}VV \\ \C\times\frac{\C^*\times \C^{***}} {\langle (z_{12},z_{11})\mapsto (z_{12}e^{2\pi i /3},z_{11}e^{2\pi i/3}) \rangle} @>{4:1}>> \C\times\frac{\C^*\times \C^{***}} {\langle (z_2,z_{11})\mapsto (z_2e^{2\pi i /3},z_{11}e^{2\pi i/3}) \rangle} \\ \cong C_3^{\uuuu{e}/\{\pm 1\}^3} @. \cong C_3^{S/\{\pm 1\}^3} \\ \end{CD} \end{eqnarray*} However, $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ can be presented in a better way. Write $\xi:=e^{2\pi i/3}$. The following two diagrams belong together. The upper diagram shows the sets, the lower diagram shows the maps. The horizontal maps are isomorphisms. In the line of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ the set on the right side is nicer than the set on the left side. In the line of $C_3^{S/\{\pm 1\}^3}$ both sets are quotients. 
\begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \frac{\C\times\C^*\times \C^{***}}{\langle (z_1,z_{12},z_{11})\mapsto(z_1,z_{12}\xi,z_{11}\xi) \rangle} @>{\cong}>> \C\times \{(z_{13},z_{14})\in\C^*\times\C\,|\, z_{14}^3-z_{13}^2\neq 0\}\\ @VV{4:1}V @VV{4:1}V \\ C_3^{S/\{\pm 1\}^3}\cong \frac{\C\times\C^*\times \C^{***}}{\langle (z_1,z_2,z_{11})\mapsto (z_1,z_2\xi,z_{11}\xi) \rangle} @>{\cong}>> \frac{\C\times\{(z_{15},z_{14})\in\C^*\times\C\,|\,z_{14}^3-z_{15}\neq 0\}} {\langle (z_1,z_{15},z_{14})\mapsto (z_1,-z_{15},-z_{14})\rangle} \end{CD} \end{eqnarray*} \begin{eqnarray*} \begin{CD} [(z_1,z_{12},z_{11})] @>>> (z_1,z_{12}^3,z_{12}^2z_{11}) @. =(z_1,z_{13},z_{14}) \\ @VV{4:1}V @. @VV{4:1}V \\ [(z_1,z_{12}^4,z_{11})] @. @. [(z_1,z_{13}^2,z_{14})] \\ @| @. @| \\ [(z_1,z_2,z_{11})] @>>> [(z_1,z_2^{3/2},z_2^{1/2}z_{11})] @. =[(z_1,z_{15},z_{14})] \end{CD} \end{eqnarray*} We claim that geometrically, the horizontal maps are the restrictions of the inversions on the level of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ of the blowing up described in Remark \ref{t9.2} (ii). \medskip {\bf The cases $\GG_5\,\&\, C_8,C_9\, ((-l,2,-l))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle (\sigma^{mon})^2\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1 \rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle \sigma^{mon},\sigma_1^{-1}\sigma_2\sigma_1\rangle$. On the last factor $\H$ of $\C\times\C\times\H \cong C_3^{univ}$, $\sigma_1^{-1}\sigma_2\sigma_1$ and $\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1= (\sigma_1^{-1}\sigma_2\sigma_1)^{l^2-4}$ act as parabolic elements with fixed point $1\in\whh{\R}$, so freely, and $\sigma^{mon}$ acts trivially. 
The quotients of $\H$ by the actions of $\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1$ and $\sigma_1^{-1}\sigma_2\sigma_1$ are both isomorphic to $\D^*=\D-\{0\}$, where $\D=\{z\in\C\,|\,|z|<1\}$ is the unit disk, and the covering is as follows, \begin{eqnarray*} \H/(\textup{action of }\sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1) \cong \D^* &\to& \D^*\cong \H/(\textup{action of } \sigma_1^{-1}\sigma_2\sigma_1),\\ z_{16}& \mapsto & z_{16}^{l^2-4}. \end{eqnarray*} Recall that $\psi(\sigma_2^2)$ acts trivially on the second factor of $\C\times\C\times\H \cong C_3^{univ}$. The total action of $\psi((\sigma^{mon})^2\sigma_1^{-1} \sigma_2^{l^2-4}\sigma_1)$ on $\C\times\C\times \H\cong C_3^{univ}$ is as follows, for even $l$: \begin{eqnarray*} (z_1,\zeta,\tau)&\stackrel{\sigma_1}{\mapsto}& (z_1,\zeta-\pi i ,\tau-1)\\ &\stackrel{\sigma_2^{l^2-4}}{\mapsto}& (z_1,\zeta-\pi i,\frac{\tau-1}{(l^2-4)(\tau-1)+1})\\ &\stackrel{\sigma_1^{-1}}{\mapsto}& (z_1,\zeta,\frac{(\tau-1)+(l^2-4)(\tau-1)+1} {(l^2-4)(\tau-1)+1})\\ &\stackrel{(\sigma^{mon})^2}{\mapsto}& (z_1,\zeta-4\pi i, \frac{(l^2-3)(\tau-1)+1} {(l^2-4)(\tau-1)+1}), \end{eqnarray*} and for odd $l$: \begin{eqnarray*} (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta-4\pi i+\kappa(\tau), \frac{(l^2-3)(\tau-1)+1} {(l^2-4)(\tau-1)+1}). \end{eqnarray*} Because the action on the third factor $\H$ is free, the action on the second factor $\C$ is not important for the quotient up to isomorphism. For even $l$ and for odd $l$ alike, the quotient is \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3} &\cong & \C\times (\textup{an affine }\C\textup{-bundle over }\D^*)\\ &\cong & \C\times\C\times \D^* \quad\textup{with coordinates }(z_1,\zeta,z_{16}) \end{eqnarray*} by Remark \ref{t9.4} (ii). Dividing out the action of $\sigma^{mon}$ gives \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^* &\to & \C\times \C^*\times \D^* \\ (z_1,\zeta,z_{16})&\mapsto& (z_1,e^{\zeta},z_{16})=(z_1,z_2,z_{16}). 
\end{eqnarray*} On this quotient $\sigma_1^{-1}\sigma_2\sigma_1$ acts as a cyclic automorphism of order $l^2-4$ because $(\sigma_1^{-1}\sigma_2\sigma_1)^{l^2-4} \in \langle \sigma^{mon},(\sigma^{mon})^2\sigma_1^{-1} \sigma_2^{l^2-4}\sigma_1\rangle$. The action on the third factor $\D^*$ is free. Therefore we can choose a posteriori the trivialization $\C\times\D^*$ with coordinates $(\zeta,z_{16})$ and $\C^*\times\D^*$ with coordinates $(z_2,z_{16})$ so that we obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,\zeta,z_{16}) @>>> (z_1,e^\zeta,z_{16}) @. \\ C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C\times \D^* @>{/\langle \sigma^{mon}\rangle}>> \C\times\C^*\times \D^* @. \hspace*{0.5cm}(z_1,z_2,z_{16}) \\ @. @V{(l^2-4):1}VV @VV{/\langle \sigma_1^{-1}\sigma_2\sigma_1\rangle}V \\ \hspace*{2cm}C_3^{S/\{\pm 1\}^3}\cong @. \C\times\C^*\times \D^* @. \hspace*{0.5cm}(z_1,z_2,z_{16}^{l^2-4}) \\ \end{CD} \end{eqnarray*} \medskip {\bf The cases $\GG_6\,\&\, C_{10}\, (\P^2),C_{11},C_{12}$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \{\id\}$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma_2\sigma_1 \rangle$. Therefore $$C_3^{\uuuu{e}/\{\pm 1\}^3} = C_3^{univ} \cong \C\times\C\times\H.$$ In order to obtain $C_3^{S/\{\pm 1\}^3}$ it is a priori sufficient to divide out the action of $\sigma_2\sigma_1$. But it is easier to first divide out the action of $\sigma^{mon}=(\sigma_2\sigma_1)^3$. On the quotient $C_3^{\uuuu{e}/\{\pm 1\}^3} /\langle \sigma^{mon}\rangle \cong \C\times\C^*\times \H$ with coordinates $(z_1,z_2,\tau)$ $\sigma_2\sigma_1$ acts as automorphism of order three. Concretely, \begin{eqnarray*} \begin{CD} @. (z_1,\zeta,\tau) @>>> (z_1,\zeta-\pi i+\kappa(\tau), \frac{\tau-1}{\tau}) \\ (z_1,\zeta,\tau)\hspace*{0.5cm} @. C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C^2\times \H @>{\sigma_2\sigma_1}>> C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C^2\times \H \\ @VVV @VV{/\langle \sigma^{mon}\rangle}V @VV{/\langle \sigma^{mon}\rangle}V \\ (z_1,e^\zeta,\tau)\hspace*{0.5cm} @. 
\C\times\C^*\times \H @>{\sigma_2\sigma_1}>> \C\times\C^*\times \H \\ @. (z_1,z_2,\tau) @>>> (z_1,z_2(-T(\tau)),\frac{\tau-1}{\tau}) \\ \end{CD} \end{eqnarray*} The action is fixed point free. One reason is that $C_3^{S/\{\pm 1\}^3}$ is smooth. Another reason is given by the following chain of arguments. The M\"obius transformation $\mu_{(123)}=(\tau\mapsto \frac{\tau-1}{\tau})$ is elliptic of order three with fixed point $e^{2\pi i /6}$. The quotient is again isomorphic to $\H$. On the fiber of $\C^*\times\H$ over $e^{2\pi i /6}$, $\sigma_2\sigma_1$ acts by multiplication with $-T(e^{2\pi i /6})=-e^{2\pi i /6}=e^{-2\pi i/3}$, so fixed point free. Also $$(-T(\tau))(-T(\frac{\tau-1}{\tau}))(-T(\frac{-1}{\tau-1})) =-T(\tau)\frac{T(\tau)-1}{T(\tau)}\frac{1}{1-T(\tau)}=1$$ because of Theorem \ref{tb.1} (d). Therefore $\sigma_2\sigma_1$ acts on $\C\times\C^*\times\H$ fixed point free and has order three. We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,\zeta,\tau) @>>> (z_1,e^\zeta,\tau)=(z_1,z_2,\tau) \\ C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C\times \H @>{/\langle \sigma^{mon}\rangle}>> \C\times\C^*\times \H \\ @. @VV{/\langle \sigma_2\sigma_1\rangle}V \\ C_3^{S/\{\pm 1\}^3}\cong @. \C\times\frac{\C^*\times \H} {\langle(z_2,\tau)\mapsto (z_2(-T(\tau)),\frac{\tau-1}{\tau}) \rangle} \\ \end{CD} \end{eqnarray*} $C_3^{S/\{\pm 1\}^3}$ is $\C\times(\textup{a }\C^* \textup{-orbibundle over }\H/\langle \mu_{(123)}\rangle)$ in the sense of Remark \ref{t9.5} with the only exceptional fiber over $[e^{2\pi i/6}]\in \H/\langle \mu_{(123)}\rangle$. \medskip {\bf The cases $\GG_7\,\&\, C_{13}\ (\textup{e.g. }(4,4,8))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \{\id\}$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma_2\sigma_1^2 \rangle$. Therefore $$C_3^{\uuuu{e}/\{\pm 1\}^3} = C_3^{univ} \cong \C\times\C\times\H.$$ In order to obtain $C_3^{S/\{\pm 1\}^3}$ it is a priori sufficient to divide out the action of $\sigma_2\sigma_1^2$. 
But it is easier to first divide out the action of $\sigma^{mon}=(\sigma_2\sigma_1^2)^2$. On the quotient $C_3^{\uuuu{e}/\{\pm 1\}^3} /\langle \sigma^{mon}\rangle \cong \C\times\C^*\times \H$ with coordinates $(z_1,z_2,\tau)$, $\sigma_2\sigma_1^2$ acts as an automorphism of order two. Concretely, \begin{eqnarray*} \begin{CD} @. (z_1,\zeta,\tau) @>>> (z_1,\zeta-2\pi i+\kappa(\tau-1), \frac{\tau-2}{\tau-1}) \\ (z_1,\zeta,\tau)\hspace*{0.5cm} @. C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C^2\times \H @>{\sigma_2\sigma_1^2}>> C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C^2\times \H \\ @VVV @VV{/\langle \sigma^{mon}\rangle}V @VV{/\langle \sigma^{mon}\rangle}V \\ (z_1,e^\zeta,\tau)\hspace*{0.5cm} @. \C\times\C^*\times \H @>{\sigma_2\sigma_1^2}>> \C\times\C^*\times \H \\ @. (z_1,z_2,\tau) @>>> (z_1,z_2(1-T(\tau)),\frac{\tau-2}{\tau-1}) \\ \end{CD} \end{eqnarray*} The action is fixed point free. One reason is that $C_3^{S/\{\pm 1\}^3}$ is smooth. Another reason is given by the following chain of arguments. The M\"obius transformation $(\tau\mapsto \frac{\tau-2}{\tau-1})$ is elliptic of order two with fixed point $1+i$. The quotient is again isomorphic to $\H$. On the fiber of $\C^*\times\H$ over $1+i$, $\sigma_2\sigma_1^2$ acts by multiplication with $1-T(1+i)=1-2=-1$, so fixed point free. Also \begin{eqnarray*} (1-T(\tau))(1-T(\frac{\tau-2}{\tau-1})) &=&(1-T(\tau))(1-T(\mu_{(123)}\mu_{(12)}(\tau)))\\ &=&(1-T(\tau))(1-g_{(123)}g_{(12)}(T(\tau)))\\ &=&(1-T(\tau))(1-\frac{(1-T(\tau))-1}{1-T(\tau)})=1 \end{eqnarray*} because of Theorem \ref{tb.1} (d). Therefore $\sigma_2\sigma_1^2$ acts on $\C\times\C^*\times\H$ fixed point free and has order two. We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,\zeta,\tau) @>>> (z_1,e^\zeta,\tau)=(z_1,z_2,\tau) \\ C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C\times \H @>{/\langle \sigma^{mon}\rangle}>> \C\times\C^*\times \H \\ @. @VV{/\langle \sigma_2\sigma_1^2\rangle}V \\ C_3^{S/\{\pm 1\}^3}\cong @. 
\C\times\frac{\C^*\times \H} {\langle(z_2,\tau)\mapsto (z_2(1-T(\tau)),\frac{\tau-2}{\tau-1}) \rangle} \\ \end{CD} \end{eqnarray*} $C_3^{S/\{\pm 1\}^3}$ is $\C\times(\textup{a }\C^* \textup{-orbibundle over } \H/\langle (\tau\mapsto \frac{\tau-2}{\tau-1})\rangle)$ in the sense of Remark \ref{t9.5} with the only exceptional fiber over $[1+i]\in \H/\langle (\tau\mapsto \frac{\tau-2}{\tau-1}) \rangle$. \medskip {\bf The cases $\GG_8\,\&\, C_{14}$ and $\GG_9\,\&\, C_{15},C_{16},C_{23},C_{24}$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \{ \id\}$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma^{mon}\rangle$. Therefore \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3} =C_3^{univ}\cong\C\times\C\times\H @>{/\langle \sigma^{mon}\rangle}>> \C\times\C^*\times \H \cong C_3^{S/\{\pm 1\}^3}\\ (z_1,\zeta,\tau) @>>> (z_1,e^{\zeta},\tau) =(z_1,z_2,\tau). \end{CD} \end{eqnarray*} \medskip {\bf The cases $\GG_{10}\,\&\, C_{17} \ (\textup{e.g. }(-2,-2,0))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle\sigma_2^2\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma^{mon}, \sigma_2\rangle$. The deck transformation $\psi(\sigma_2^2)$ of $C_3^{univ}\cong \C\times\C\times\H$ acts nontrivially only on the third factor $\H$, and there it acts as the parabolic transformation $(\tau\mapsto \frac{\tau}{2\tau+1})$ with fixed point $0\in\whh{\R}$ and quotient $\H/\langle (\tau\mapsto \frac{\tau}{2\tau+1})\rangle \cong\D^*$. Therefore $$C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^* \quad\textup{with coordinates }(z_1,\zeta,z_{16}).$$ On this quotient $\sigma_2$ acts as fixed point free automorphism of order two with the action $\D^*\to \D^*$, $z_{16}\mapsto z_{16}^2=z_{17}$, on the last factor. The quotient is isomorphic to $\C\times\C\times\D^*$ with coordinates $(z_1,\zeta,z_{17})$. $\sigma^{mon}$ acts on the quotient with nontrivial action $\zeta\mapsto \zeta-2\pi i$ only on the second factor $\C$. 
We obtain the following diagram, \begin{eqnarray*} \begin{CD} (z_1,\zeta,z_{16}) @>>> (z_1,\zeta+\kappa(\tau+1),z_{16}^2) @. \textup{with }z_{16}=[\tau] \\ C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C\times \D^* @>{/\langle \sigma_2\rangle}>> \C\times\C\times \D^* @. \hspace*{1cm}(z_1,\zeta,z_{17}) \\ @. @VV{/\langle \sigma^{mon}\rangle}V @VVV\\ \hspace*{2cm}C_3^{S/\{\pm 1\}^3}\cong @. \C\times\C^*\times \D^* @. \hspace*{1cm}(z_1,e^{\zeta},z_{17}) \\ \end{CD} \end{eqnarray*} \medskip {\bf The cases $\GG_{11}\,\&\, C_{18} \ (\textup{e.g. }(-3,-2,0))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle\sigma_2^2\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma^{mon}, \sigma_2^2\rangle $. The manifold $C_3^{\uuuu{e}/\{\pm 1\}^3}$ is the same quotient of $C_3^{univ}$ as in the cases $\GG_{10}\,\&\, C_{17}$, namely $$C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^* \quad\textup{with coordinates }(z_1,\zeta,z_{16}).$$ $\sigma^{mon}$ acts nontrivially only on the second factor $\C$ of this manifold, by $\zeta\mapsto \zeta-2\pi i$. We obtain the following diagram, \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong\C\times\C\times\D^* @>{/\langle\sigma^{mon}\rangle}>> \C\times\C^*\times \D^* \cong C_3^{S/\{\pm 1\}^3}\\ (z_1,\zeta,z_{16}) @>>> (z_1,e^{\zeta},z_{16}) =(z_1,z_2,z_{16}). \end{CD} \end{eqnarray*} \medskip {\bf The cases $\GG_{12}\,\&\, C_{19} \ (\textup{e.g. }(-2,-1,0))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle \sigma_2^2, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma^{mon}, \sigma_2^2,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle$. The group $\langle \sigma_2^2,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle$ acts on $\H$ as the group $\langle [A_2^2],[A_2A_1^3A_2^{-1}]\rangle\subset PSL_2(\Z)$. Both generators are parabolic. 
Consider the hyperbolic polygon $P$ whose relative boundary consists of the four arcs $A(\infty,0)$, $A(0,\frac{1}{2})$, $A(\frac{2}{3},1)$, $A(1,\infty)$, see the left picture in Figure \ref{Fig:9.2}. \begin{figure}[H] \includegraphics[width=0.9\textwidth]{pic-9-2.png} \caption[Figure 9.2]{Fundamental domains in $\H$ for the subgroups $\langle [A_2^2],[A_2A_1^3A_2^{-1}]\rangle$ (left) and $\langle [A_2^3],[A_2A_1^3A_2^{-1}]\rangle$ (right) of $PSL_2(\Z)$} \label{Fig:9.2} \end{figure} Recall \begin{eqnarray*} A_1&=&\begin{pmatrix}1&-1\\0&1\end{pmatrix},\ A_2=\begin{pmatrix}1&0\\1&1\end{pmatrix},\ A_2^2=\begin{pmatrix}1&0\\2&1\end{pmatrix},\\ A_2A_1A_2^{-1}&=&\begin{pmatrix}2&-1\\1&0\end{pmatrix},\ A_2A_1^3A_2^{-1}=(A_2A_1A_2^{-1})^3 =\begin{pmatrix}4&-3\\3&-2\end{pmatrix}. \end{eqnarray*} $[A_2^2]$ has the fixed point $0\in\whh{\R}$ and maps $A(\infty,0)$ to $A(0,\frac{1}{2})$, and $[A_2A_1^3A_2^{-1}]$ has the fixed point $1\in\whh{\R}$ and maps $A(\frac{2}{3},1)$ to $A(1,\infty)$. By Theorem \ref{ta.2} (c) the hyperbolic polygon $P$ is a fundamental domain for the group $\langle [A_2^2], [A_2A_1^3A_2^{-1}]\rangle$. Because of the two parabolic fixed points and the euclidean boundary $[\frac{1}{2},\frac{2}{3}]$ of $P$, the quotient $\H/\langle [A_2^2],[A_2A_1^3A_2^{-1}]\rangle$ is isomorphic to $\D-\{0,\frac{1}{2}\}=:\D^{**}$ where $z_{16}=0$ is the image of $0\in\whh{\R}$ and $z_{16}=\frac{1}{2}$ is the image of $1\in\whh{\R}$. As the action of $\langle\sigma_2^2, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle$ on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$ is free, the quotient of $\C\times\H$ with coordinates $(\zeta,\tau)$ is an affine $\C$-bundle over $\H/\langle [A_2^2],[A_2A_1^3A_2^{-1}]\rangle$, so \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3} &\cong&\C\times (\textup{an affine }\C\textup{-bundle over }\D^{**})\\ &\cong& \C\times\C\times \D^{**} \quad\textup{with coordinates }(z_1,\zeta,z_{16}) \end{eqnarray*} by Remark \ref{t9.4} (ii). 
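Returning to the matrices recalled above: the conjugate and its powers can be multiplied out step by step, \begin{eqnarray*} A_2A_1A_2^{-1}&=&\begin{pmatrix}1&-1\\1&0\end{pmatrix} \begin{pmatrix}1&0\\-1&1\end{pmatrix} =\begin{pmatrix}2&-1\\1&0\end{pmatrix},\\ (A_2A_1A_2^{-1})^2&=&\begin{pmatrix}3&-2\\2&-1\end{pmatrix},\qquad (A_2A_1A_2^{-1})^3=\begin{pmatrix}3&-2\\2&-1\end{pmatrix} \begin{pmatrix}2&-1\\1&0\end{pmatrix} =\begin{pmatrix}4&-3\\3&-2\end{pmatrix}, \end{eqnarray*} each with trace $2$, as the parabolicity of $[A_2A_1^3A_2^{-1}]$ requires.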
$\sigma^{mon}$ acts nontrivially only on the second factor of this product, by the action $\zeta\mapsto\zeta-2\pi i$. Therefore \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^{**} @>{/\langle\sigma^{mon}\rangle}>> \C\times\C^*\times \D^{**} \cong C_3^{S/\{\pm 1\}^3}\\ (z_1,\zeta,z_{16}) @>>> (z_1,e^{\zeta},z_{16}) =(z_1,z_2,z_{16}).\\ \end{CD} \end{eqnarray*} \medskip {\bf The cases $\GG_{13}\,\&\, C_{20} \ (\textup{e.g. }(-2,-1,-1))$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}= \langle \sigma_2^3, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}= \langle\sigma^{mon}, \sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle$. This is analogous to the cases $\GG_{12}\,\&\, C_{19}$. The group $\langle \sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1} \rangle$ acts on $\H$ as the group $\langle [A_2^3],[A_2A_1^3A_2^{-1}]\rangle\subset PSL_2(\Z)$. Both generators are parabolic. Consider the hyperbolic polygon $P$ whose relative boundary consists of the four arcs $A(\infty,0)$, $A(0,\frac{1}{3})$, $A(\frac{2}{3},1)$, $A(1,\infty)$, see the right picture in Figure \ref{Fig:9.2}. $[A_2^3]$ has the fixed point $0$ and maps $A(\infty,0)$ to $A(0,\frac{1}{3})$, and $[A_2A_1^3A_2^{-1}]$ has the fixed point $1$ and maps $A(\frac{2}{3},1)$ to $A(1,\infty)$. By Theorem \ref{ta.2} (c) the hyperbolic polygon $P$ is a fundamental domain for the group $\langle [A_2^3], [A_2A_1^3A_2^{-1}]\rangle$. As in the cases $\GG_{12}\,\&\, C_{19}$ we obtain \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^{**} @>{/\langle\sigma^{mon}\rangle}>> \C\times\C^*\times \D^{**} \cong C_3^{S/\{\pm 1\}^3}\\ (z_1,\zeta,z_{16}) @>>> (z_1,e^{\zeta},z_{16}) =(z_1,z_2,z_{16}).\\ \end{CD} \end{eqnarray*} {\bf The cases $\GG_{14}\,\&\, C_{21}, C_{22}$:}\\ Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle\sigma_2^3\rangle$ and $(\Br_3)_{\uuuu{x}/\{\pm 1\}^3}=\langle \sigma^{mon}, \sigma_2^3\rangle$. 
The group $\langle \sigma_2^3\rangle$ acts on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$ as the group $\langle [A_2^3]\rangle\subset PSL_2(\Z)$. The generator $[A_2^3]$ is parabolic. Therefore the quotient $\H/\langle [A_2^3]\rangle$ is isomorphic to $\D^*$. We obtain \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3} &\cong&\C\times (\textup{affine }\C\textup{-bundle over } \D^{*})\\ &\cong& \C\times\C\times \D^{*} \quad\textup{with coordinates }(z_1,\zeta,z_{16}) \end{eqnarray*} by Remark \ref{t9.4} (ii) and the diagram \begin{eqnarray*} \begin{CD} C_3^{\uuuu{e}/\{\pm 1\}^3} \cong \C\times\C\times \D^{*} @>{/\langle\sigma^{mon}\rangle}>> \C\times\C^*\times \D^{*} \cong C_3^{S/\{\pm 1\}^3}\\ (z_1,\zeta,z_{16}) @>>> (z_1,e^{\zeta},z_{16}) =(z_1,z_2,z_{16}).\\ \end{CD} \end{eqnarray*} \hfill$\Box$ \begin{remarks}\label{t9.4} Let $X$ be a non-compact smooth complex curve. \index{non-compact smooth complex curve} (i) An {\it affine $\C$-bundle over $X$} \index{affine $\C$-bundle} is a holomorphic bundle $p:Y\to X$ such that the following exist: An open covering $(U_i)_{i\in I}$ of $X$, isomorphisms $f_i:p^{-1}(U_i)\to\C\times U_i$, and for $i$ and $j\in I$ with $i\neq j$ and $U_i\cap U_j\neq\emptyset$ a holomorphic map $f_{ij}\in\OO(U_i\cap U_j)$ with \begin{eqnarray*} \C\times (U_i\cap U_j)\stackrel{f_i^{-1}}{\longrightarrow} p^{-1}(U_i\cap U_j)\stackrel{f_j}{\longrightarrow} \C\times (U_i\cap U_j),\\ (z,x)\mapsto (z+f_{ij}(x),x). \end{eqnarray*} (ii) $H^1(X,\OO)=0$ (e.g. \cite[26.1 Satz]{Fo77}). This sheaf cohomology coincides with the \v{C}ech cohomology $H^1((U_i)_{i\in I},\OO)$ with values in $\OO$ (e.g. \cite[12.8 Satz]{Fo77}). Therefore any affine $\C$-bundle over $X$ is trivial, so isomorphic to the affine $\C$-bundle $\C\times X$. 
(iii) Let $a:\C\times X\to\C\times X$ be an automorphism of some finite order of $\C\times X$ as affine $\C$-bundle over $X$, with a discrete set of fixed points in $X$, such that $a$ acts as the identity on the fiber of each fixed point. Then $(\C\times X)/\langle a\rangle$ is again an affine $\C$-bundle. The pull back of a trivialization of this quotient bundle gives a new trivialization of $\C\times X$ such that with respect to this new trivialization $a$ acts nontrivially only on the base $X$, $a:(z,x)\mapsto (z,a_2(x))$, where $a_2:X\to X$ is the induced automorphism of the base $X$. (iv) A {\it line bundle on $X$} \index{line bundle} means a holomorphic rank 1 vector bundle. Any line bundle on $X$ is trivial, so isomorphic to $\C\times X$ as vector bundle (e.g. \cite[30.3 Satz]{Fo77}). Therefore any $\C^*$-bundle on $X$ is trivial, so isomorphic to $\C^*\times X$. \end{remarks} \section{Partial compactifications of the F-manifolds} \label{s9.3} Theorem \ref{t9.3} gives all possible manifolds $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and $C_3^{S/\{\pm 1\}^3}$ as complex manifolds. The (long) proof also describes the coverings $C_3^{\uuuu{e}/\{\pm 1\}^3}\to C_3^{S/\{\pm 1\}^3}$. But the F-manifold structure and the Euler field as well as possible partial compactifications are not treated in Theorem \ref{t9.3}. They are the subject of this section. Corollary \ref{t9.8} gives the Euler fields. We refrain from writing down the multiplications in each case. They can in principle be extracted from the construction as quotients of $C_3^{univ}$ or as coverings of $C_3^{conf}$. But we care about possible partial compactifications of $C_3^{\uuuu{e}/\{\pm 1\}^3}$ such that the F-manifold structure extends. Lemma \ref{t9.7} says something conceptual about such partial compactifications. Corollary \ref{t9.8} writes them down in all cases and makes the Maxwell strata $\KK_2$ and the caustics $\KK_3$ in them explicit. Especially interesting are the cases $A_3$ and $A_2A_1$. 
In both cases $C_3^{\uuuu{e}/\{\pm 1\}^3}$ has a partial compactification to $\C^3$ which is well known from singularity theory. Recovering it starting from $C_3^{\uuuu{e}/\{\pm 1\}^3}$ is a bit involved, especially in the case $A_3$. In the case $A_3$ it requires the notion of a $\C^*$-orbibundle, which is introduced in the Remarks \ref{t9.5} and in the Example \ref{t9.6}. \begin{remarks}\label{t9.5} Let $\www{X}$ be a smooth complex curve, compact or not. A {\it $\C^*$-orbibundle over $\www{X}$} \index{orbibundle} slightly generalizes the notion of a $\C^*$-bundle over $\www{X}$. It is a two-dimensional complex manifold $Y$ with a projection $p:Y\to\www{X}$ and a $\C^*$-action which restricts to a transitive $\C^*$-action on each fiber, with trivial stabilizer on almost every fiber. There is a discrete subset $\Sigma\subset\www{X}$ such that the stabilizer of the $\C^*$-action is trivial on $p^{-1}(q)$ for $q\in\www{X}-\Sigma$ and nontrivial on $p^{-1}(q)$ for $q\in\Sigma$. Then $Y|_{\www{X}-\Sigma}$ is a $\C^*$-bundle over $\www{X}-\Sigma$. For any $q\in \Sigma$ the restriction $p^{-1}(\Delta)\subset Y$ for a small disk $\Delta\subset\www{X}$ with center $q$ and $\Delta\cap\Sigma =\{q\}$ is isomorphic to one of the following models, \index{$Y_{a,m}$} \begin{eqnarray*} Y_{a,m}:=(\C^*\times\D)/\langle ((y_1,y_2)\mapsto (e^{2\pi i a/m}y_1,e^{2\pi i/m}y_2))\rangle,\\ \textup{with }\C^*\textup{-action }\C^*\times Y_{a,m}\to Y_{a,m}, \quad (t,[(y_1,y_2)])\mapsto [(ty_1,y_2)],\\ \textup{and projection }p_{a,m}:Y_{a,m}\to\D/\langle (y_2\mapsto e^{2\pi i /m}y_2)\rangle,\quad [(y_1,y_2)]\mapsto [y_2] \end{eqnarray*} for some $m\in\Z_{\geq 2}$ and some $a\in\{1,2,...,m-1\}$ with $\gcd(a,m)=1$. The fiber over $[0]$ in this model is called {\it exceptional fiber of type $(a,m)$}. The stabilizer of the $\C^*$-action on it has order $m$. 
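In the model $Y_{a,m}$ this stabilizer can be computed directly: a point $[(y_1,0)]$ of the fiber over $[0]$ is fixed by $t\in\C^*$ if and only if \begin{eqnarray*} [(ty_1,0)]=[(y_1,0)],\quad\textup{i.e. } t\in\{e^{2\pi i ak/m}\,|\,k\in\Z\}, \end{eqnarray*} and because of $\gcd(a,m)=1$ this set is the full group of $m$-th roots of unity, of order $m$.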
If $\www{X}$ is compact there is a notion of an Euler number $\in\Q$ of a $\C^*$-orbibundle over $\www{X}$ which generalizes the Euler number $\in\Z$ of a $\C^*$-bundle over $\www{X}$ \cite{OW71} (see also \cite{Do83}). If one takes out the center of an isolated quasihomogeneous surface singularity one obtains a $\C^*$-orbibundle over a smooth compact curve $\www{X}$ with negative Euler number. Vice versa, any $\C^*$-orbibundle over a smooth compact curve $\www{X}$ with negative Euler number arises in this way \cite{OW71}. We care about $\C^*$-orbibundles here mainly because of the case $A_3$. The following example will be used in the proof of the case $A_3$ in Corollary \ref{t9.8}. \end{remarks} \begin{example}\label{t9.6} Consider $Y_1:=\C^2-\{(0,0)\}$ with coordinates $(z_{13},z_{14})$ and the $\C^*$-action \begin{eqnarray*} \C^*\times Y_1\to Y_1,\quad (t,z_{13},z_{14})\mapsto (t^3 z_{13},t^2 z_{14}). \end{eqnarray*} It is a $\C^*$-orbibundle over $\www{X}=\P^1$ with projection \begin{eqnarray*} p_1:Y_1\to\P^1,\quad (z_{13},z_{14})\mapsto \frac{z_{14}^3}{z_{13}^2}, \end{eqnarray*} and two exceptional fibers, the exceptional fiber $\{z_{14}=0\}$ over $0$ with stabilizer of order three and the exceptional fiber $\{z_{13}=0\}$ over $\infty$ of type $(1,2)$. We claim that the exceptional fiber $\{z_{14}=0\}$ over $0$ is of type $(1,3)$ and not of type $(2,3)$. We will show this below after studying the $\C^*$-orbibundle $p_2:Y_2\to\P^1$ which is obtained from $p_1:Y_1\to\P^1$ via pull back with the $3:1$ branched covering $b_2:\P^1\to\P^1, z\mapsto z^3$. The $\C^*$-orbibundle $p_2:Y_2\to\P^1$ has only one exceptional fiber, the fiber of type $(1,2)$ over $\infty$. 
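Before describing the charts, a consistency check on $p_1$: it is constant on the $\C^*$-orbits, and on the fiber $\{z_{13}=0\}$ over $\infty$ the stabilizer is $\{\pm 1\}$, of order two, \begin{eqnarray*} p_1(t^3z_{13},t^2z_{14})=\frac{(t^2z_{14})^3}{(t^3z_{13})^2} =\frac{t^6z_{14}^3}{t^6z_{13}^2}=p_1(z_{13},z_{14}),\qquad t\cdot(0,z_{14})=(0,t^2z_{14}). \end{eqnarray*}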
This $\C^*$-orbibundle is given by the following two charts and their glueing.\\ First chart over $\C$: \begin{eqnarray*} p_2^{(1)}:Y_2^{(1)}=\C^*\times\C\to\C,&& (z_{12},z_{11})\mapsto z_{11},\\ \C^*\textup{-action}\quad \C^*\times Y_2^{(1)}\to Y_2^{(1)}, && (t,z_{12},z_{11})\mapsto (tz_{12},z_{11}),\\ Y_2^{(1)}\stackrel{3:1}{\longrightarrow} Y_1^{(1)}=p_1^{-1}(\C)=\C^*\times\C,&& (z_{12},z_{11})\mapsto (z_{12}^3,z_{12}^2z_{11})=(z_{13},z_{14}). \end{eqnarray*} Second chart over $\P^1-\{0\}$: \begin{eqnarray*} p_2^{(2)}:Y_2^{(2)}=\C\times\C^*\to\P^1-\{0\}, && (y_1,y_2)\mapsto \frac{y_2}{y_1^2},\\ \C^*\textup{-action}\quad \C^*\times Y_2^{(2)}\to Y_2^{(2)}, && (t,y_1,y_2)\mapsto (ty_1,t^2y_2),\\ Y_2^{(2)}\longrightarrow Y_1^{(2)}=p_1^{-1}(\P^1-\{0\})=\C\times\C^*,&& (y_1,y_2)\mapsto (y_1^3,y_2)=(z_{13},z_{14}). \end{eqnarray*} Glueing of the two charts: \begin{eqnarray*} Y_2^{(1)}-(p_2^{(1)})^{-1}(0)=\C^*\times\C^* &\to& Y_2^{(2)}-(p_2^{(2)})^{-1}(\infty)=\C^*\times\C^*,\\ (z_{12},z_{11})&\mapsto& (z_{12},z_{12}^2z_{11})=(y_1,y_2). \end{eqnarray*} The map $Y_2^{(1)}\to Y_1^{(1)}$, $(z_{12},z_{11})\mapsto (z_{12}^3,z_{12}^2z_{11})$, shows that the chart $Y_1^{(1)}$ of the first $\C^*$-orbibundle $p_1:Y_1\to\P^1$ is a $\C^*$-orbibundle over $\C$ with exceptional fiber over $0$ and that the $\C^*$-orbibundle is isomorphic to the model of type $(1,3)$ in Remark \ref{t9.5}, namely \begin{eqnarray*} Y_1^{(1)}\cong (\C^*\times\C)/\langle ((z_{12},z_{11})\mapsto (e^{2\pi i /3}z_{12},e^{2\pi i /3}z_{11}))\rangle. \end{eqnarray*} Consider a further $2:1$ branched covering $b_3:\P^1\to\P^1$, $\www{z}_{11}\mapsto \www{z}_{11}^2=z_{11}$. The pull back of $p_2:Y_2\to\P^1$ via $b_3$ gives a $\C^*$-bundle $p_3:Y_3\to\P^1$ over $\P^1$. We claim that its Euler number is $-1$, so it is isomorphic to $(\OO_{\P^1}(-1)-\textup{(zero section)})$. 
To prove this, we consider two charts for $p_3:Y_3\to\P^1$ and their glueing.\\ First chart over $\C$: \begin{eqnarray*} p_3^{(1)}:Y_3^{(1)}=\C^*\times\C\to\C, && (z_{12},\www{z}_{11})\mapsto \www{z}_{11},\\ \C^*\textup{-action}\quad \C^*\times Y_3^{(1)}\to Y_3^{(1)}, && (t,z_{12},\www{z}_{11})\mapsto (tz_{12},\www{z}_{11}),\\ Y_3^{(1)}\stackrel{2:1}{\longrightarrow} Y_2^{(1)}=\C^*\times\C,&& (z_{12},\www{z}_{11})\mapsto (z_{12},\www{z}_{11}^2) =(z_{12},z_{11}). \end{eqnarray*} Second chart over $\P^1-\{0\}$: \begin{eqnarray*} p_3^{(2)}:Y_3^{(2)}=\C^*\times\C\to\P^1-\{0\}, && (y_3,y_4)\mapsto \frac{1}{y_4},\\ \C^*\textup{-action}\quad \C^*\times Y_3^{(2)}\to Y_3^{(2)}, && (t,y_3,y_4)\mapsto (ty_3,y_4),\\ Y_3^{(2)}\longrightarrow Y_2^{(2)}=\C\times\C^*,&& (y_3,y_4)\mapsto (y_3y_4,y_3^2)=(y_1,y_2), \end{eqnarray*} Glueing of the two charts: \begin{eqnarray*} Y_3^{(1)}-(p_3^{(1)})^{-1}(0)=\C^*\times\C^* &\to& Y_3^{(2)}-(p_3^{(2)})^{-1}(\infty)=\C^*\times\C^*,\\ (z_{12},\www{z}_{11})&\mapsto& (z_{12}\www{z}_{11}, \frac{1}{\www{z}_{11}})=(y_3,y_4). \end{eqnarray*} The glueing map shows that $p_3:Y_3\to\P^1$ is the $\C^*$-bundle with Euler number $-1$. \end{example} \begin{lemma}\label{t9.7} Fix $m\in\Z_{\geq 2}$. (a) The following table defines four elements $\beta_1,\beta_2,\beta_3,\beta_4 \in \Br_3$, it gives their images $g_1,g_2,g_3,g_4$ in $PSL_2(\Z)$ under the homomorphism $\Br_3\to PSL_2(\Z)$ in Remark \ref{t4.15} (i), and it defines four points $x_1,x_2,x_3,x_4\in\whh{\R}$, \begin{eqnarray*} \begin{array}{c|c|c|c|c} i & 1 & 2 & 3 & 4 \\ \beta_i & \sigma_2^m & \sigma_1\sigma_2^m\sigma_1^{-1} =\sigma_2^{-1}\sigma_1^m\sigma_2 & \sigma_1^{-1}\sigma_2^m\sigma_1=\sigma_2\sigma_1^m\sigma_2^{-1} & \sigma_1^m \\ g_i & [A_2^m] & [A_1A_2^mA_1^{-1}] = [A_2^{-1}A_1^mA_2] & [A_1^{-1}A_2^mA_1] = [A_2A_1^mA_2^{-1}] &[A_1^m] \\ x_i & 0 & -1 & 1 & \infty \end{array} \end{eqnarray*} The M\"obius transformation $g_i$ is parabolic with fixed point $x_i$. 
The action of the braid $\beta_i$ on $\C\times\C\times\H\cong C_3^{univ}$ restricts to the action of $g_i$ on $\H$. The quotient $\H/\langle g_i\rangle$ of that action is isomorphic to $\D^*=\D-\{0\}$. The quotient $(\C\times\C\times\H)/\langle \beta_i\rangle$ is isomorphic to $\C\times (\textup{an affine }\C\textup{-bundle over }\D^*)$. (b) This quotient extends to $\C\times (\textup{an affine }\C\textup{-bundle over }\D)$ such that this is an F-manifold with Euler field and at each point in the fiber over $0\in\D$ the germ is of type $I_2(m)A_1$ (compare Example \ref{t8.10} (ii)). (c) (Concretization of a family of cases in part (b)) If $m$ is even then \begin{eqnarray*} (\C\times\C\times\H)/\langle \sigma_2^m\rangle & \stackrel{\cong}{\longrightarrow} & \C\times\C\times\D^* \\ {}[(z_1,\zeta,\tau)] &\mapsto & (z_1,\zeta,z_4) \quad\textup{with }z_4=z_4(\tau), \end{eqnarray*} and the partial compactification in part (b) is $\C\times\C\times\D$. (d) The semisimple F-manifold with Euler field $\C\times\C^*\times (\C-\{0,1\})\cong C_3^{pure}$ extends to a semisimple F-manifold with Euler field \begin{eqnarray*} \C\times (\textup{the }\C^*\textup{-bundle over }\P^1 \textup{ which is }\OO_{\P^1}(-1)\textup{ minus the }0 \textup{-section}) \end{eqnarray*} with Maxwell stratum the union of the fibers of $0,1,\infty\in\P^1$. At each point in the Maxwell stratum the germ is of type $I_2(2)A_1$ (compare Example \ref{t8.10} (ii)). (e) Consider $m\in\N$. The F-manifold with Euler field \begin{eqnarray*} \C\times\C^*\times\H \cong (\C\times\C\times\H)/\langle (\sigma^{mon})^m\rangle \cong C_3^{univ}/\langle (\sigma^{mon})^m\rangle \end{eqnarray*} extends to $\C\times\C\times\H$ if and only if $m\geq 2$. If $m\geq 2$ then at each point of $\C\times\{0\}\times\H$ the germ of the F-manifold is irreducible. 
\end{lemma} {\bf Proof:} (a) The equations $\sigma_2\sigma_1^m\sigma_2^{-1}=\sigma_1^{-1}\sigma_2^m\sigma_1$ and $\sigma_2^{-1}\sigma_1^m\sigma_2=\sigma_1\sigma_2^m\sigma_1^{-1}$ are \eqref{4.14}. They show that $\beta_1,\beta_2,\beta_3$ and $\beta_4$ are all conjugate in $\Br_3$. Therefore also $g_1,g_2,g_3$ and $g_4$ are all conjugate in $PSL_2(\Z)$. They are parabolic with the fixed points $x_1,x_2,x_3$ and $x_4$. The rest of part (a) is also obvious. Part (d) is proved before the parts (b) and (c). (d) $C_3^{pure}=\C^3-D_3^{pure}$ extends into the semisimple F-manifold $\C^3$ of type $A_1^3$ with Maxwell stratum $D_3^{pure}$, which consists of three hyperplanes, all of which intersect in the complex line $\{\uuuu{u}\in\C^3\,|\, u_1=u_2=u_3\}$. The isomorphism $C_3^{pure}\cong \C\times\C^*\times(\C-\{0,1\})$ is the restriction to $C_3^{pure}$ of the blowing up of this complex line, \begin{eqnarray*} \C\times \OO_{\P^1}(-1) &\longrightarrow& \C^3 \\ \cup & & \cup \\ \C\times \C\times\C & \longrightarrow & \C\times (\C\times\C -\{0\}\times\C^*) \\ \cup & & \cup \\ \C\times\C^*\times (\C-\{0,1\}) & \longrightarrow & \{(u_1+u_2+u_3,u_2-u_1,u_3-u_1)\in\C^3\,|\, \\ && \hspace*{1cm}u_1\neq u_2\neq u_3\neq u_1\} \cong C_3^{pure}\\ (z_1,z_2,z_3) &\mapsto & (z_1,z_2,z_2z_3)\\ && \hspace*{0.5cm}=(u_1+u_2+u_3,u_2-u_1,u_3-u_1) \end{eqnarray*} In Theorem \ref{t9.1} (a) and here only one of the two standard charts of $\C\times \OO_{\P^1}(-1)$ is made explicit. This chart excludes the fiber over $z_3=\infty\in\P^1$. The three hyperplanes in $D_3^{pure}$ give the fibers over $z_3=0,1,\infty\in\P^1$. Compare Remark \ref{t9.2} (ii). (b) and (c) Because $\beta_1,\beta_2,\beta_3$ and $\beta_4$ are all conjugate in $\Br_3$, it suffices to prove part (b) for $\beta_1=\sigma_2^m$.
The action of $\sigma_2^m$ on $\C\times\C\times\H\cong C_3^{univ}$ is as follows, \begin{eqnarray*} \psi(\sigma_2^m):\ (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta, \frac{\tau}{m\tau+1})\quad\textup{for even }m,\\ \psi(\sigma_2^m):\ (z_1,\zeta,\tau)&\mapsto& (z_1,\zeta+\kappa(\tau+1), \frac{\tau}{m\tau+1})\quad\textup{for odd }m. \end{eqnarray*} So, for even $m$, $\psi(\sigma_2^m)$ acts nontrivially only on the third factor $\H$. Part (d) and this fact give the parts (b) and (c) in the case $m=2$. For even $m$ the map $(\C\times\C\times\H)/\langle\sigma_2^m\rangle \to (\C\times\C\times\H)/\langle\sigma_2^2\rangle$ is a branched covering of degree $\frac{m}{2}$, cyclically branched along the fiber over $[0]\in\H/\langle[A_2^2]\rangle$. With the Examples \ref{t8.10} (ii) and \ref{t8.10} (v) (the case $I_2(m)A_1$) this shows part (b) for $\beta_1$ and part (c) for even $m$. For odd $m$, observe that as $\tau\to 0$ along any hyperbolic line in $\H$, $\kappa(\tau+1)$ tends to the limit $\kappa(1)=0$. Therefore for odd $m$ we obtain a 2-valued map $(\C\times\C\times\H)/\langle\sigma_2^m\rangle \to (\C\times\C\times\H)/\langle\sigma_2^2\rangle$ which can be considered as a branched covering of half-integer degree $\frac{m}{2}$, cyclically branched along the fiber over $[0]\in\H/\langle[A_2^2]\rangle$. Again with the Examples \ref{t8.10} (ii) and \ref{t8.10} (v) this shows part (b) for $\beta_1=\sigma_2^m$. (e) For $m=1$ the multiplication in $\C\times\C^*\times\H \cong C_3^{univ}/\langle \sigma^{mon}\rangle$ degenerates near $\C\times\{0\}\times\H$ in the same way as the multiplication in $\C\times\C^*\times(\C-\{0,1\})\cong C_3^{pure}$ degenerates near $\C\times\{0\}\times (\C-\{0,1\})$.
Because of the denominator $z_2$ in the third summand on the right hand side of \begin{eqnarray*} \paa_2\circ\paa_2= \frac{2-2z_3+2z_3^2}{3}\paa_1 + \frac{1-2z_3}{3}\paa_2 + \frac{-z_3+z_3^2}{z_2}\paa_3, \end{eqnarray*} the multiplication does not extend holomorphically to $\C\times\{0\}\times(\C-\{0,1\})$. For $m\geq 2$ the (vertical) maps \begin{eqnarray*} \begin{CD} C_3^{univ}/\langle (\sigma^{mon})^m\rangle @. \ \cong\ @. \C\times\C^*\times \H @. \hspace*{0.5cm}(z_1,z_{18},\tau) @. \\ @VVV @. @VVV @VVV @. \\ C_3^{univ}/\langle\sigma^{mon}\rangle @. \ \cong\ @. \C\times\C^*\times\H @. \hspace*{0.5cm}(z_1,z_{18}^m,\tau)@. \ =(z_1,z_2,\tau) \end{CD} \end{eqnarray*} are $m$-fold coverings. Again, we can go from $\H$ to $\C-\{0,1\}$. The multiplication upstairs is obtained by pull back from the multiplication downstairs via the $m$-fold covering, \begin{eqnarray*} \begin{CD} @. \C\times\C^*\times (\C-\{0,1\}) @. \hspace*{0.5cm}(z_1,z_{18},z_3)@. \\ @. @VVV @VVV @.\\ C_3^{pure} \cong\ @. \C\times\C^*\times(\C-\{0,1\}) @. \hspace*{0.5cm}(z_1,z_{18}^m,z_3)@. \ =(z_1,z_2,z_3) \end{CD} \end{eqnarray*} Write $\paa_{18}:=\frac{\paa}{\paa z_{18}}$. Because of $\paa_{18}\mapsto mz_{18}^{m-1}\paa_2$, the multiplication is given by $e=3\paa_1$ and \begin{eqnarray*} \paa_{18}\circ\paa_{18}&=& m^2z_{18}^{2m-2}\frac{2-2z_3+2z_3^2}{3}\paa_1 + mz_{18}^{m-1}\frac{1-2z_3}{3}\paa_{18}\\ &+& m^2z_{18}^{m-2}(-z_3+z_3^2)\paa_3,\\ \paa_{18}\circ\paa_3&=& mz_{18}^{m-1}\frac{-z_3+2z_{18}^mz_3}{3}\paa_1 - z_{18}^m\frac{1}{3}\paa_{18} + mz_{18}^{m-1}\frac{-1+2z_3}{3}\paa_3,\\ \paa_3\circ\paa_3&=& \frac{2z_{18}^{2m}}{3}\paa_1 +\frac{z_{18}^m}{3}\paa_3. \end{eqnarray*} The multiplication extends holomorphically to $\C\times\{0\}\times(\C-\{0,1\})$. Moreover $\paa_{18}\circ\paa_3$ and $\paa_3\circ\paa_3$ vanish there, and also $\paa_{18}\circ\paa_{18}$ if $m\geq 3$. If $m=2$ then $\paa_{18}\circ\paa_{18}=4(-z_3+z_3^2)\paa_3$ on $\C\times \{0\}\times (\C-\{0,1\})$. 
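The first of these formulas can be checked directly, and the other two follow in the same way: substituting $\paa_2=\frac{1}{m}z_{18}^{1-m}\paa_{18}$ and $z_2=z_{18}^m$ into the formula for $\paa_2\circ\paa_2$ above gives
\begin{eqnarray*}
\paa_{18}\circ\paa_{18}= m^2z_{18}^{2m-2}\,(\paa_2\circ\paa_2)
= m^2z_{18}^{2m-2}\frac{2-2z_3+2z_3^2}{3}\paa_1
+ mz_{18}^{m-1}\frac{1-2z_3}{3}\paa_{18}
+ m^2z_{18}^{m-2}(-z_3+z_3^2)\paa_3.
\end{eqnarray*}
All exponents of $z_{18}$ which occur are nonnegative precisely because $m\geq 2$; this is the reason why the multiplication extends holomorphically.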
Therefore for any $m\geq 2$ the algebra $(T_p,\circ)$ for $p\in\C\times\{0\}\times (\C-\{0,1\})$ is irreducible, and thus the germ at $p$ of the F-manifold is irreducible. \hfill$\Box$ \bigskip Recall that $C_3^{\uuuu{e}/\{\pm 1\}^3}$ is in all cases a semisimple F-manifold (with Euler field) with empty Maxwell stratum. We consider the manifold in Theorem \ref{t9.3} which is isomorphic to $C_3^{\uuuu{e}/\{\pm 1\}^3}$ with the coordinates in the proof of Theorem \ref{t9.3}. Together Theorem \ref{t7.11}, Theorem \ref{t9.1} (a), Theorem \ref{t9.3} and its proof, and Lemma \ref{t9.7} allow us in many cases to embed $C_3^{\uuuu{e}/\{\pm 1\}^3}$ into a bigger F-manifold $\oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p$ with nonempty Maxwell stratum or nonempty caustic or both. This is the subject of Corollary \ref{t9.8}. \begin{corollary}\label{t9.8} In many cases in Theorem \ref{t9.3}, $C_3^{\uuuu{e}/\{\pm 1\}^3}$ has a partial compactification, \index{partial compactification} which we call $\oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p$, to an F-manifold with nonempty Maxwell stratum or nonempty caustic or both. (a) The exceptions are the cases $\GG_5,\GG_6,\GG_7,\GG_8$ and $\GG_9$. There $E=z_1\paa_1+\paa_\zeta$ and \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3}&=&\C\times\C\times\D^* \quad\textup{in the case }\GG_5,\\ C_3^{\uuuu{e}/\{\pm 1\}^3}&=&C_3^{univ} \quad\textup{in the cases } \GG_6,\GG_7,\GG_8\textup{ and }\GG_9. \end{eqnarray*} (b) All cases except the cases in part (a) and the cases $A_2A_1$ and $A_3$: The first table below gives the manifold in Theorem \ref{t9.3} isomorphic to $C_3^{\uuuu{e}/\{\pm 1\}^3}$ and the manifold isomorphic to its extension $\oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p$. The second table gives the strata in the Maxwell stratum $\KK_2$ and in the caustic $\KK_3$ of $\oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p$. In the case $\HH_{1,2}$ at each point $p\in\KK_3$ the germ of the F-manifold is irreducible.
In all other cases with $\KK_3\neq\emptyset$, at each point $p\in\KK_3$ the germ of the F-manifold is of type $I_2(3)A_1$. \begin{eqnarray*} \begin{array}{l|l|l|l} & \textup{sets} & C_3^{\uuuu{e}/\{\pm 1\}^3} & \oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p \\ \hline \GG_1 & C_1\ (A_1^3) & C_3^{pure} & \C^3 \\ \GG_1 & C_2\ (\HH_{1,2}) & \C\times\C^*\times\H & \C\times\C\times\H \\ \GG_2 & C_4\ (\P^1A_1),C_5 & \C^2\times (\C-\{0,1\}) & \C^3 \\ \GG_4 & C_7 \ (\whh{A}_2) & \C^2\times \C^{***} & \C^3 \\ \GG_{10} & C_{17} & \C\times\C\times\D^* & \C\times\C\times\D \\ \GG_{11} & C_{18} & \C\times\C\times\D^* & \C\times\C\times\D \\ \GG_{12} & C_{19} & \C\times\C\times\D^{**} & \C\times\C\times\D \\ \GG_{13} & C_{20} & \C\times\C\times\D^{**} & \C\times\C\times\D \\ \GG_{14} & C_{21},\ C_{22} & \C\times\C\times \D^* & \C\times\C\times\D \end{array} \end{eqnarray*} \begin{eqnarray*} \begin{array}{l|l|l|l} & \textup{sets} & \KK_2 & \KK_3 \\ \hline \GG_1 & C_1\ (A_1^3) & D_3^{pure} & - \\ \GG_1 & C_2\ (\HH_{1,2}) & - & \C\times\{0\}\times\H\\ \GG_2 & C_4\ (\P^1A_1),C_5 & \C^2\times \{0,1\} & - \\ \GG_4 & C_7 \ (\whh{A}_2) & - & \C^2\times\{z_5\,|\, z_5^3=1\} \\ \GG_{10} & C_{17} & \C\times\C\times\{0\} & - \\ \GG_{11} & C_{18} & \C\times\C\times\{0\} & - \\ \GG_{12} & C_{19} & \C\times\C\times\{0\} & \C\times\C\times\{\frac{1}{2}\} \\ \GG_{13} & C_{20} & - & \C\times\C\times\{0,\frac{1}{2}\} \\ \GG_{14} & C_{21},\ C_{22} & - & \C\times\C\times\{0\} \end{array} \end{eqnarray*} The Euler field is \begin{eqnarray*} \begin{array}{l|l|l|l} & A_1^3 & \HH_{1,2} & (\GG_2\,\&\, C_4,C_5), \GG_4, \GG_{10},\GG_{11},\GG_{12},\GG_{13},\GG_{14} \\ \hline E & \sum_{i=1}^3 u_ie_i & z_1\paa_1+\frac{1}{4}z_5\paa_5 & z_1\paa_1+\paa_\zeta \end{array} \end{eqnarray*} (c) The case $A_2A_1$: We use the second presentation in Theorem \ref{t9.1}. 
\begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3}&\cong& \C\times\{(z_8,z_9)\in\C^*\times\C\,|\, z_8^3-4z_9^2\neq 0\},\\ \oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p&\cong& \C^3,\\ \KK_2&\cong& \C\times\{(z_8,z_9)\in\C\times\C\,|\, z_8^3-4z_9^2= 0\},\\ \KK_3&\cong& \C\times\{0\}\times\C,\\ E&=& z_1\paa_1+\frac{2}{3}z_8\paa_8+z_9\paa_9. \end{eqnarray*} At each point $p\in\KK_3$ the germ of the F-manifold is of type $I_2(3)A_1$. (d) The case $A_3$: We use the second presentation in Theorem \ref{t9.1}. \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3}&\cong& \C\times\{(z_{13},z_{14})\in\C^*\times\C\,|\, z_{14}^3-z_{13}^2 \neq 0\},\\ \oooo{C_3^{\uuuu{e}/\{\pm 1\}^3}}^p&\cong& \C^3,\\ \KK_2&\cong& \C\times\{0\}\times\C,\\ \KK_3&\cong& \C\times\{(z_{13},z_{14})\in\C\times\C\,|\, z_{14}^3-z_{13}^2= 0\},\\ E&=& z_1\paa_1+\frac{3}{4}z_{13}\paa_{13} +\frac{1}{2}z_{14}\paa_{14}. \end{eqnarray*} At each point $p\in \KK_3-\KK_2 =\KK_3-\C\times\{(0,0)\}$ the germ of the F-manifold is of type $I_2(3)A_1$. At each point $p\in \C\times\{(0,0)\}=\KK_3\cap\KK_2$ it is irreducible. \begin{figure}[H] \includegraphics[width=0.8\textwidth]{pic-9-3.png} \caption[Figure 9.3]{In the cases $A_2A_1$ (left) and $A_3$ (right) the restrictions of $\KK_2$ and $\KK_3$ to a hyperplane $\{z_1=\textup{constant}\}$ in the F-manifold $\C^3$} \label{Fig:9.3} \end{figure} \end{corollary} {\bf Proof:} (a) Here nothing has to be proved. We add, however, a remark on the cases $\GG_5\,\&\, C_8,C_9$: Here $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle (\sigma^{mon})^2 \sigma_1^{-1}\sigma_2^{l^2-4}\sigma_1\rangle$. Without the factor $(\sigma^{mon})^2$ in the generator, Lemma \ref{t9.7} (b) would apply with $m=l^2-4$, and it would lead to an extension of the F-manifold to $\C\times\C\times\D$ with caustic $\C\times\C\times\{0\}$. (b) The case $\GG_1\,\&\, C_1\ (A_1^3)$: See Example \ref{t8.10} (i). The case $\GG_1\,\&\, C_2\ (\HH_{1,2})$: Lemma \ref{t9.7} (e) applies with $m=2$.
The cases $\GG_2\,\&\, C_4\ (\P^1A_1),C_5$: Recall $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle \sigma_2^2, \sigma_1\sigma_2^2\sigma_1^{-1}\rangle$ and consider its action on the third factor $\H$ of $\C\times\C\times\H \cong C_3^{univ}$. The point $z_3=0$ in the closure of the quotient $\C-\{0,1\}$ of $\H$ is the image of the fixed point $0\in\whh{\R}$ of $\sigma_2^2$, and the point $z_3=1$ is the image of the fixed point $-1\in\whh{\R}$ of $\sigma_1\sigma_2^2\sigma_1^{-1}$. By Lemma \ref{t9.7} (b) the F-manifold extends to the fibers over $0$ and $1$ of $\C\times(\textup{an affine }\C\textup{-bundle over }\C)$ and has there its Maxwell stratum. By Remark \ref{t9.4} (ii) one can choose the affine $\C$-bundle over $\C$ to be $\C\times\C$. The case $\GG_4\,\&\, C_7\ (\whh{A}_2)$: Recall $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}=\langle \sigma_1^3, \sigma_2^3, \sigma_2\sigma_1^3\sigma_2^{-1}\rangle$ and consider its action on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$. The points $z_5$ with $z_5^3=1$ in the closure of the quotient $\C^{***}$ of $\H$ are the images of the fixed points $\infty,0$ and $1\in\whh{\R}$ of $\sigma_1^3,\sigma_2^3$ and $\sigma_2\sigma_1^3\sigma_2^{-1}$. By Lemma \ref{t9.7} (b) the F-manifold extends to the fibers over $z_5$ with $z_5^3=1$ of $\C\times(\textup{an affine }\C\textup{-bundle over }\C)$ and has there its caustic $\KK_3$, and at each point $p\in\KK_3$ the germ of the F-manifold is of type $I_2(3)A_1$. By Remark \ref{t9.4} (ii) one can choose the affine $\C$-bundle over $\C$ to be $\C\times\C$. A remark on this case: Observe $$(\sigma_1^3)(\sigma_2\sigma_1^3\sigma_2^{-1})(\sigma_2^3) =(\sigma^{mon})^2(\sigma_2^{-1}\sigma_1^3\sigma_2)^{-1}.$$ Therefore $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ contains $(\sigma^{mon})^{-2}\sigma_2^{-1}\sigma_1^3\sigma_2$. The point $z_5=\infty$ is the image of the fixed point of $A_2^{-1}A_1^3A_2$, which is the image in $PSL_2(\Z)$ of this braid.
But Lemma \ref{t9.7} (b) does not apply to $z_5=\infty$, because $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ does not contain $\sigma_2^{-1}\sigma_1^3\sigma_2$. The cases $\GG_{10},\GG_{11},\GG_{12},\GG_{13},\GG_{14}$: Recall from Theorem \ref{t7.11} \begin{eqnarray*} \begin{array}{l|l|l|l|l} & \GG_{10}\textup{ and }\GG_{11} & \GG_{12} & \GG_{13} & \GG_{14} \\ (\Br_3)_{\uuuu{e}/\{\pm 1\}^3} & \langle \sigma_2^2\rangle & \langle \sigma_2^2,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle & \langle \sigma_2^3,\sigma_2\sigma_1^3\sigma_2^{-1}\rangle & \langle \sigma_2^3\rangle \end{array} \end{eqnarray*} The point $0\in\D$ is the image of the fixed point $0\in\whh{\R}$ of $\sigma_2^2$ respectively $\sigma_2^3$, and in the cases $\GG_{12}$ and $\GG_{13}$ the point $\frac{1}{2}\in\D$ is the image of the fixed point $-1\in\whh{\R}$ of the action of $\sigma_2\sigma_1^3\sigma_2^{-1}$ on the third factor $\H$ of $\C\times\C\times\H\cong C_3^{univ}$. Lemma \ref{t9.7} (b) applies in all cases and yields an extension to an F-manifold $\C\times (\textup{an affine }\C\textup{-bundle over }\D)$ with the claimed Maxwell stratum and caustic. By Remark \ref{t9.4} (ii) one can choose the affine $\C$-bundle over $\D$ to be $\C\times\D$. On the Euler field: For the case $A_1^3$ see Example \ref{t8.10} (i). In the case $\HH_{1,2}$ the moduli space is $C_3^{\uuuu{e}/\{\pm 1\}^3}\cong \C\times\C^*\times\H$ with coordinates $(z_1,z_5,\tau)$ with $z_5=e^{\zeta/4}$. Therefore the Euler field $z_1\paa_1+\paa_\zeta$ on $C_3^{univ}$ pulls down to $E=z_1\paa_1+\frac{1}{4}z_5\paa_5$. In all other cases no power of $\sigma^{mon}$ is in $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$, and \begin{eqnarray*} C_3^{\uuuu{e}/\{\pm 1\}^3}&\cong& \C\times (\textup{an affine }\C\textup{-bundle over a noncompact curve } C) \\ &\cong& \C\times\C\times C\quad\textup{with coordinates } (z_1,\zeta,z_{3/11/16}).
\end{eqnarray*} Therefore the Euler field $z_1\paa_1+\paa_\zeta$ on $C_3^{univ}$ pulls down to $E=z_1\paa_1+\paa_\zeta$ on $C_3^{\uuuu{e}/\{\pm 1\}^3}$. (c) Recall the discussion of the case $A_2A_1$ in the proof of Theorem \ref{t9.3}. Because $z_6=e^{\zeta/3}$, the Euler field on $\C\times\C^*\times (\C-\{\pm\frac{1}{2}\})$ with coordinates $(z_1,z_6,z_7)$ is $z_1\paa_1+\frac{1}{3}z_6\paa_6$. Under the $2:1$ map \begin{eqnarray*} \C\times\C^*\times (\C-\{\pm\frac{1}{2}\}) &\to& \C\times \{(z_8,z_9)\in\C^*\times\C\,|\, z_8^3-4z_9^2\neq 0\}\\ (z_1,z_6,z_7)&\mapsto& (z_1,z_6^2,z_6^3z_7) =(z_1,z_8,z_9), \end{eqnarray*} it becomes \begin{eqnarray*} E=z_1\paa_1+\frac{1}{3}z_6(2z_6\paa_8+3z_6^2z_7\paa_9) =z_1\paa_1+\frac{2}{3}z_8\paa_8+z_9\paa_9. \end{eqnarray*} Consider the isomorphism \begin{eqnarray*} C_3^{pure}\cong \C\times\C^*\times(\C-\{\pm\frac{1}{2}\})&\to& \C\times\{(z_2,z_9)\in\C^*\times\C\,|\, z_2^2-4z_9^2\neq 0\},\\ (u_1,u_2,u_3)\hspace*{2cm} (z_1,z_2,z_7) &\mapsto& (z_1,z_2,z_2z_7) =(u_1+u_2+u_3,\\ && u_2-u_1,(u_3-u_1)-\frac{1}{2}(u_2-u_1)). \end{eqnarray*} The multiplication in the manifold on the right side extends to $\C^3$ with Maxwell stratum the three hyperplanes \begin{eqnarray*} \C\times\{0\}\times\C\cup \C\times\{(z_2,z_9)\in\C^2\,|\, z_2^2-4z_9^2=0\}. \end{eqnarray*} The two-valued map \begin{eqnarray*} \C^3\dashrightarrow\C^3,\quad (z_1,z_8,z_9)\mapsto (z_1,z_8^{3/2},z_9) =(z_1,z_2,z_9) \end{eqnarray*} is, outside of the branching locus $\{z_8=0\}$, locally an isomorphism of F-manifolds, and along $\{z_8=0\}$ it is branched of order $3/2$. Therefore the F-manifold $\C^3$ on the left side with coordinates $(z_1,z_8,z_9)$ has Maxwell stratum $\KK_2=\C\times\{(z_8,z_9)\in\C^2\,|\, z_8^3-4z_9^2=0\}$ and caustic $\KK_3=\C\times\{0\}\times\C$, and at each point of the caustic the germ of the F-manifold is of type $I_2(3)A_1$. (d) Recall the discussion of the case $A_3$ in the proof of Theorem \ref{t9.3}.
Because $z_{12}=e^{\zeta/4}$, the Euler field on $\C\times\C^*\times \C^{***}$ with coordinates $(z_1,z_{12},z_{11})$ is $z_1\paa_1+\frac{1}{4}z_{12}\paa_{12}$. Under the $3:1$ map \begin{eqnarray*} \C\times\C^*\times \C^{***} &\to& \C\times \{(z_{13},z_{14})\in\C^*\times\C\,|\, z_{14}^3 -z_{13}^2\neq 0\}\\ (z_1,z_{12},z_{11})&\mapsto& (z_1,z_{12}^3,z_{12}^2z_{11}) =(z_1,z_{13},z_{14}), \end{eqnarray*} it becomes \begin{eqnarray*} E=z_1\paa_1+\frac{1}{4}z_{12}(3z_{12}^2\paa_{13} +2z_{12}z_{11}\paa_{14}) =z_1\paa_1+\frac{3}{4}z_{13}\paa_{13}+\frac{1}{2}z_{14}\paa_{14}. \end{eqnarray*} In the proof of Theorem \ref{t9.3} the quotient of $C_3^{univ}$ by \begin{eqnarray*} \langle\sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2,\sigma_2^3, (\sigma^{mon})^4\rangle \stackrel{1:3}{\subset} (\Br_3)_{\uuuu{e}/\{\pm 1\}^3} =\langle (\sigma_1\sigma_2)^4,\sigma_1^3\rangle \end{eqnarray*} was constructed as \begin{eqnarray*} \C\times\C^*\times\C^{***}\quad\textup{with coordinates } (z_1,z_{12},z_{11})\\ \textup{ with } z_{12}=e^{\zeta/4},z_{11}=z_{11}(\tau). \end{eqnarray*} The three values $z_{11}\in\C$ with $z_{11}^3=1$ are the images of the parabolic fixed points $\infty,-1,0\in\whh{\R}$ of the action of $\sigma_1^3,\sigma_2^{-1}\sigma_1^3\sigma_2, \sigma_2^3$ on $\H$. By Lemma \ref{t9.7} (b) and Remark \ref{t9.4} (iv) the F-manifold $\C\times\C^*\times\C^{***}$ extends to $\C\times\C^*\times\C$ with caustic $\C\times\C^*\times\{z_{11}\,|\, z_{11}^3=1\}$, and at each point $p$ in the caustic the germ of the F-manifold is of type $I_2(3)A_1$. 
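As a consistency check, the $3:1$ map above is compatible with the projection $p_1$ of the $\C^*$-orbibundle $Y_1$ in Example \ref{t9.6}:
\begin{eqnarray*}
\frac{z_{14}^3}{z_{13}^2}=\frac{(z_{12}^2z_{11})^3}{(z_{12}^3)^2}
=z_{11}^3,
\end{eqnarray*}
so the base coordinate of $p_1$ is the third power of the coordinate $z_{11}$, in accordance with the $3:1$ branched covering $b_2$ in Example \ref{t9.6}.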
Under the extension \begin{eqnarray*} \C\times\C^*\times\C &\to& \C\times\C^*\times\C\\ (z_1,z_{12},z_{11}) &\mapsto& (z_1,z_{12}^3,z_{12}^2z_{11}) =(z_1,z_{13},z_{14}) \end{eqnarray*} of the $3:1$ map above this caustic maps to the complex surface \begin{eqnarray*} \C\times\{(z_{13},z_{14})\in\C^*\times\C\,|\, z_{14}^3-z_{13}^2=0\}, \end{eqnarray*} and the F-manifold extends to $\C\times\C^*\times\C$ with coordinates $(z_1,z_{13},z_{14})$ and with this surface as caustic. We claim that the F-manifold also extends to a fiber over $z_{11}=\infty$. Proving this and understanding how this fiber glues in, however, requires more care. The stabilizer $(\Br_3)_{\uuuu{e}/\{\pm 1\}^3}$ contains $(\sigma^{mon})^4$ and \begin{eqnarray*} (\sigma_2^3)(\sigma_2^{-1}\sigma_1^3\sigma_2)(\sigma_1^3) = (\sigma^{mon})^2 (\sigma_2\sigma_1^3\sigma_2^{-1})^{-1}, \end{eqnarray*} so also $\sigma_2\sigma_1^6\sigma_2^{-1}$. The parabolic fixed point $1\in\whh{\R}$ of the action of $\sigma_2\sigma_1^6\sigma_2^{-1}$ corresponds to $z_{11}=\infty$. By Lemma \ref{t9.7} (b) the F-manifold $\C\times\C^*\times\C$ with coordinates $(z_1,z_{12},z_{11})$ extends to an F-manifold with a fiber over $z_{11}=\infty$, namely \begin{eqnarray*} \C\times (\textup{a certain }\C^*\textup{-orbibundle over } \P^1)=:\C\times \www{Y}_2, \end{eqnarray*} and for a point in the fiber over $z_{11}=\infty$ the germ of the F-manifold is of type $I_2(6)A_1$. The fiber over $z_{11}=\infty$ is exceptional of type $(1,2)$ (in the sense of Remark \ref{t9.5}), because $$(\sigma^{mon})^4, (\sigma^{mon})^2\sigma_2\sigma_1^3\sigma_2^{-1}, \sigma_2\sigma_1^6\sigma_2^{-1} \in (\Br_3)_{\uuuu{e}/\{\pm 1\}^3}.$$ The pull back of $\www{p}_2:\www{Y}_2\to\P^1$ via the $2:1$ branched covering $b_3:\P^1\to\P^1$, $\www{z}_{11}\mapsto \www{z}_{11}^2=z_{11}$, is a $\C^*$-bundle $\www{Y}_3$ with projection $\www{p}_3:\www{Y}_3\to\P^1$. We claim that its Euler number is $-1$.
To see this, compare it with $C_3^{pure}$ as $\C^*$-bundle with the standard action of $\C^*$. This is the tautological bundle $\OO_{\P^1}(-1)$ minus its zero section, so it has Euler number $-1$. Because of the $2:1$ branched covering $b_3:\P^1\to\P^1$ and because of $[PSL_2(\Z):\oooo{\Gamma(2)}]=6$ and $[PSL_2(\Z):\oooo{\Gamma(3)}]=12$, the base space $\P^1$ of $\www{Y}_3$ is, as a quotient of $\H\cup\{\textup{cusps}\}$, four times as large as the base space $\P^1$ of $C_3^{pure}$ as $\C^*$-bundle. On the other hand, to obtain $C_3^{pure}$ from $C_3^{univ}$, we divided out $\sigma^{mon}$. To obtain $\www{Y}_3$, we divided out only $(\sigma^{mon})^4$, so the fibers of $\www{Y}_3$ are four times as long. The factors four in the base and in the fibers cancel in the Euler number, so the Euler number of $\www{Y}_3$ is also $-1$. Now Example \ref{t9.6} applies. One sees $\www{Y}_3=Y_3$ and $\www{Y}_2=Y_2$. The automorphism of order three in the proof of Theorem \ref{t9.3}, which $(\sigma_1\sigma_2)^4$ induces on the chart $Y_2^{(1)}$, $(z_{12},z_{11})\mapsto (e^{-2\pi i /3}z_{12},e^{-2\pi i/3}z_{11})$, extends to $Y_2$. Because of $y_2=z_{12}^2z_{11}$, it restricts to the identity on the fiber over $z_{11}=\infty$, which is the fiber $\{0\}\times\C^*$ in the chart $Y_2^{(2)}$. The quotient is $Y_1$. The quotient map is a $3:1$ branched covering with branching locus the fiber over $z_{11}=\infty$ and the image $\{0\}\times\C^*\subset Y_1$ of this fiber. Therefore at each point in $\C\times (\textup{this image})=\C\times\{0\}\times\C^*$ the germ of the F-manifold is of type $I_2(2)A_1$. Therefore $\C\times\{0\}\times\C^*$ is the Maxwell stratum of $\C\times Y_1$ as F-manifold. Finally, the complement of $\C\times Y_1$ in $\C^3$ has codimension 2. Therefore the multiplication extends to this complement holomorphically, so the F-manifold extends to $\C^3$ with coordinates $(z_1,z_{13},z_{14})$. Because this codimension 2 complement is in the closure of the caustic of the F-manifold $\C\times Y_1$, it is part of the caustic of $\C^3$. Part (d) is proved.
\hfill$\Box$ \chapter{Isolated hypersurface singularities}\label{s10} \setcounter{equation}{0} \setcounter{figure}{0} An isolated hypersurface singularity gives rise to a unimodular bilinear lattice $(H_\Z,L)$ of some rank $n\in\N$ and a $\Br_n\ltimes\{\pm 1\}^n$-orbit $\BB^{dist}$ of triangular bases. This chapter reports on these structures and gives references. It does not contain new results. The theory of isolated hypersurface singularities came to life in the 1960s. Milnor's book \cite{Mi69} was a crucial early step. Soon several groups around the world joined, especially Arnold and his students, and also the first author's teacher Brieskorn. Nowadays there are several thorough and extensive presentations of the theory of isolated hypersurface singularities, foremost the two volumes \cite{AGV85}, \cite{AGV88}, but also the more recent books \cite{AGLV98} and \cite{Eb01}. A few years ago Ebeling wrote a survey \cite{Eb20} on distinguished bases for isolated hypersurface singularities. But the sections below offer quite a lot of material which is not covered in that survey. Section \ref{s10.1} introduces basic structures induced by one isolated hypersurface singularity, especially a unimodular bilinear lattice $(H_\Z,L)$, but also the notion of a universal unfolding. The base space $\MM$ of such an unfolding has to be compared with a space $C_n^{\uuuu{e}/\{\pm 1\}^n}$. The section finishes with a loose description of Arnold's classification of singularities with modality $\leq 2$. Section \ref{s10.2} equips the pair $(H_\Z,L)$ with a natural $\Br_n\ltimes\{\pm 1\}^n$-orbit $\BB^{dist}$ of distinguished bases. Here the $\Z$-lattice bundles from section \ref{s8.5} come from geometry and induce this set $\BB^{dist}$. These triples $(H_\Z,L,\BB^{dist})$ are rather special. The only triples in rank 2 and rank 3 which appear here are those of $A_2$ and $A_3$. Section \ref{s10.2} points to techniques and cites results on the triples from singularities.
It ends with the best-studied cases, the simple singularities and the simple elliptic singularities. It remains an interesting open problem to find properties which distinguish the triples $(H_\Z,L,\BB^{dist})$ for singularities from all other triples. Section \ref{s10.3} discusses results of the first author and coauthors on the groups $G_\Z$ for singularities. A crucial notion is that of an {\it Orlik block}. The most concrete results are those on the singularities with modality $\leq 2$. Section \ref{s10.4} explains classical results from the 1980s on the monodromy groups and the vanishing cycles for singularities. The situation for the even monodromy group $\Gamma^{(0)}$ is very satisfying, thanks to results of Ebeling. In almost all cases it is determined by the pair $(H_\Z,I^{(0)})$. The odd case is more complicated. There the main work was done by Wajnryb, Chmutov and Janssen. It is also the subject of section \ref{s10.4}. Section \ref{s10.5} turns to the few moduli spaces $C_n^{\uuuu{e}/\{\pm 1\}^n}$ for singularities which have been studied, namely those for the simple singularities and the simple elliptic singularities. They turn out to be the complements of Maxwell stratum and caustic in base spaces of certain global unfoldings. In the case of the simple singularities, this is essentially due to Looijenga and Deligne in 1974. Hertling and Roucairol generalized it to the simple elliptic singularities. The section offers key ideas of the proofs. \section{Topology, Milnor lattice, unfolding, classification}\label{s10.1} \begin{definition}\label{t10.1} (E.g. \cite{Mi69}\cite{Lo84}\cite{AGV88}\cite{Eb01}) \index{isolated hypersurface singularity}\index{singularity} An isolated hypersurface singularity is a holomorphic function germ $$f:(\C^{m+1},0)\to(\C,0)$$ (with $m\in\Z_{\geq 0}$) with $f(0)=0$ and with an isolated singularity at 0, i.e.
the Jacobi ideal $J_f:=\bigl(\frac{\paa f}{\paa z_0},...,\frac{\paa f}{\paa z_m}\bigr) \subset \C\{z_0,...,z_m\}$ has an isolated zero at 0. Its Milnor number $n$ \index{Milnor number} is \begin{eqnarray}\label{10.1} n:=\dim\C\{z_0,...,z_m\}/J_f\in\N. \end{eqnarray} \end{definition} Usually the Milnor number is called $\mu$, and the number of variables is called $n$ \cite{AGV88} or $n+1$ \cite{Eb01}. In order to be consistent with the other chapters of this book, here the Milnor number is called $n$ and the number of variables is called $m+1$. The first thing to do is to construct {\it a good representative} \index{good representative of $f$} of $f$. One chooses a small positive number $\varepsilon>0$ such that the boundary of each ball $B_{\www{\varepsilon}}\subset \C^{m+1}$ with center at 0 and radius $\www{\varepsilon}\in(0,\varepsilon]$ intersects the fiber $f^{-1}(0)$ transversally. Then one chooses a small positive number $\eta>0$ such that each fiber $f^{-1}(\tau)$ for $\tau\in\Delta_\eta$ intersects the boundary of $B_\varepsilon$ transversally. Here $\Delta_\eta:=\{\tau\in\C\, |\, |\tau|<\eta\}\subset\C$ is the disk of radius $\eta$ around 0. Then \begin{eqnarray}\label{10.2} f:X\to\Delta_\eta\quad\textup{with}\quad X:=B_\varepsilon\cap f^{-1}(\Delta_\eta) \end{eqnarray} is a good representative of $f$ with fibers $X_\tau=f^{-1}(\tau)$ for $\tau\in\Delta_\eta$. The restriction $f:X-X_0\to \Delta_\eta-\{0\}$ is a locally trivial $C^\infty$ fiber bundle. Only the 0-fiber $X_0$ is singular, and its only singularity is at 0. The fibers $X_\tau$ for $\tau\in\Delta_\eta-\{0\}$ are called {\it Milnor fibers}. Proofs of all of this can be found in \cite{Mi69}, \cite{Lo84}, \cite{AGV88} or \cite{Eb01}. See Figure \ref{Fig:10.1} for a schematic picture.
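A simple example illustrates the Milnor number in \eqref{10.1}: for the singularity $f:(\C^2,0)\to(\C,0)$, $f(z_0,z_1)=z_0^2+z_1^3$, of type $A_2$ (so $m=1$),
\begin{eqnarray*}
J_f=\bigl(2z_0,3z_1^2\bigr)=(z_0,z_1^2)\subset\C\{z_0,z_1\},
\end{eqnarray*}
and $\C\{z_0,z_1\}/J_f$ has the basis $([1],[z_1])$, so $n=2$.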
\begin{figure} \includegraphics[width=0.5\textwidth]{pic-10-1.png} \caption[Figure 10.1]{A schematic picture of a representative $f:X\to\Delta_\eta$ of a singularity $f:(\C^{m+1},0)\to(\C,0)$} \label{Fig:10.1} \end{figure} The references \cite[(5.11)]{Lo84}, \cite[ch. 2]{AGV88} or \cite{Eb01} also give the following. Our signs and conventions are consistent with those in \cite[ch. 2]{AGV88}. \begin{theorem}\label{t10.2} Let $f:(\C^{m+1},0)\to(\C,0)$ be an isolated hypersurface singularity with Milnor number $n\in\N$. Let $f:X\to\Delta_\eta$ be a good representative as above. (a) Each regular fiber $X_\tau$ with $\tau\in\Delta_\eta-\{0\}$ is homotopy equivalent to a bouquet of $n$ $m$-spheres, so its only (reduced, if $m=0$) homology is the middle homology, namely $$H_m^{(red)}(X_\tau,\Z)\cong\Z^n.$$ This $\Z$-module is called a {\sf Milnor lattice}. \index{Milnor lattice} (b) For $\zeta\in S^1$ there is a canonical isomorphism \begin{eqnarray}\label{10.3} H_m^{(red)}(X_{\zeta\eta},\Z)\cong H_{m+1}(X,X_{\zeta\eta},\Z) =:Ml(f,\zeta) \end{eqnarray} of the Milnor lattice over a point $\zeta\eta\in\paa\Delta_\eta$ to the relative homology group with boundary in $X_{\zeta\eta}$. The union $\bigcup_{\zeta\in S^1}Ml(f,\zeta)$ is a flat bundle over $S^1$ of $\Z$-lattices of rank $n$. (c) An intersection form for Lefschetz thimbles \index{intersection form for Lefschetz thimbles} \index{Lefschetz thimble} is well defined on relative homology groups with different boundary parts. It is for any $\zeta\in S^1$ a $(-1)^{m+1}$-symmetric unimodular bilinear form \begin{eqnarray}\label{10.4} I^{Lef}:Ml(f,\zeta)\times Ml(f,-\zeta)\to\Z. \end{eqnarray} Let $\gamma_\pi$ be the isomorphism $Ml(f,\zeta)\to Ml(f,-\zeta)$ by flat shift in mathematically positive direction.
Then the classical Seifert form is given by \begin{eqnarray}\label{10.5} L^{sing}: Ml(f,\zeta)\times Ml(f,\zeta)\to\Z,\\ L^{sing}(a,b):=(-1)^{m+1}I^{Lef}(a,\gamma_{\pi} b).\nonumber \end{eqnarray} It has determinant $(-1)^{n(m+1)(m+2)/2}$. The classical monodromy $M^{sing}$ and the intersection form $I^{sing}$ on $Ml(f,\zeta)$ are given by \begin{eqnarray}\label{10.6} L^{sing}(M^{sing}a,b)&=&(-1)^{m+1}L^{sing}(b,a),\\ I^{sing}(a,b)&=& -L^{sing}(a,b)+(-1)^{m+1}L^{sing}(b,a) \label{10.7}\\ &=&L^{sing}((M^{sing}-\id)a,b).\nonumber \end{eqnarray} The monodromy $M^{sing}$ is the monodromy operator of the flat $\Z$-lattice bundle $\bigcup_{\zeta\in S^1}Ml(f,\zeta) \cong \bigcup_{\zeta\in S^1}H_m(X_{\zeta\eta},\Z)$. The intersection form $I^{sing}$ is the natural intersection form on $H_m(X_\tau,\Z)$. It is $(-1)^m$-symmetric. (d) The pair $(Ml(f,1),L^{sing})$ is up to a canonical isomorphism independent of the choice of a good representative $f:X\to\Delta_\eta$ of the function germ $f$. So it is an invariant of the singularity $f$. \end{theorem} \begin{definition}\label{t10.3} Consider the situation in Theorem \ref{t10.2}. We define a unimodular bilinear lattice $(H_\Z,L)$ by \begin{eqnarray}\label{10.8} H_\Z(f):=H_\Z&:=& Ml(f,1),\\ L(f):= L&:=&(-1)^{(m+1)(m+2)/2}\cdot L^{sing}.\label{10.9} \end{eqnarray} We call $L$ the {\it normalized} Seifert form. \index{normalized Seifert form} Define $k\in\{0;1\}$ by \begin{eqnarray}\label{10.10} k:=m\textup{ mod}\, 2,\quad\textup{i.e. }\left\{\begin{array}{ll} k:=0&\textup{if }m\textup{ is even,}\\ k:=1&\textup{if }m\textup{ is odd.}\end{array}\right. \end{eqnarray} Obviously, the monodromy $M$ and the bilinear form $I^{(k)}$ of the unimodular bilinear lattice $(H_\Z,L)$ are related to $M^{sing}$ and $I^{sing}$ by \begin{eqnarray} M&=& (-1)^{m+1}M^{sing},\label{10.11}\\ I^{(k)}&=&(-1)^{m(m+1)/2}I^{sing}.\label{10.12} \end{eqnarray} We call $M$ the {\it normalized} monodromy. 
\index{normalized monodromy} \end{definition} \begin{remark}\label{t10.4} Above the Seifert form $L^{sing}$ \index{Seifert form} is defined with the help of the isomorphism \eqref{10.3} and the intersection form $I^{Lef}$ in \eqref{10.4}. An alternative is to derive it from a {\it linking number} of cycles in different Milnor fibers in the closely related fibration $$\paa B_\varepsilon-f^{-1}(0) \to S^1, \quad z\mapsto \frac{f(z)}{|f(z)|}.$$ We prefer $I^{Lef}$ as the Lefschetz thimbles appear implicitly anyway in the construction of distinguished bases in section \ref{s10.2}. \end{remark} The notion of an {\it unfolding} of an isolated hypersurface singularity $f$ goes back to Thom and Mather. An {\it unfolding} \index{unfolding} of $f:(\C^{m+1},0)\to(\C,0)$ is a holomorphic function germ $F:(\C^{m+1}\times {\MM},0)\to (\C,0)$ such that $F|_{(\C^{m+1},0)\times\{0\}}=f$ and such that $( {\MM},0)$ is the germ of a complex manifold. Its Jacobi ideal is $J_F:=(\frac{\paa F}{\paa z_i})\subset \OO_{\C^{m+1}\times {\MM},0}$, its critical space is the germ $(C,0)\subset (\C^{m+1}\times {\MM},0)$ of the zero set of $J_F$ with the canonical complex structure. The projection $(C,0)\to ( {\MM},0)$ is finite and flat of degree $n$. A kind of Kodaira-Spencer map \index{Kodaira-Spencer map} is the $\OO_{ {\MM},0}$-linear map \begin{eqnarray}\label{10.13} \aaa_C:\TT_{ {\MM},0}\to \OO_{C,0},\quad X\mapsto\www X(F)|_{(C,0)} \end{eqnarray} where $\www X$ is an arbitrary lift of $X\in\TT_{ {\MM},0}$ to $(\C^{m+1}\times {\MM},0)$. We will use the following notion of morphism between unfoldings. Let $F_i:(\C^{m+1}\times {\MM}_i,0)\to(\C,0)$ for $i\in\{1,2\}$ be two unfoldings of $f$ with projections $\pr_i:(\C^{m+1}\times {\MM}_i,0)\to( {\MM}_i,0)$. 
A {\it morphism} from $F_1$ to $F_2$ is a pair $(\Phi,\varphi)$ of map germs such that the following diagram commutes, \begin{eqnarray*} \begin{xy} \xymatrix{ (\C^{m+1}\times {\MM}_1,0) \ar[r]^\Phi \ar[d]^{\pr_1} & (\C^{m+1}\times {\MM}_2,0) \ar[d]^{\pr_2}\\ ( {\MM}_1,0) \ar[r]^{\varphi} & ( {\MM}_2,0) } \end{xy} \end{eqnarray*} and \begin{eqnarray*} \Phi|_{(\C^{m+1},0)\times\{0\}}&=&\id,\\ F_1 &=& F_2\circ\Phi \end{eqnarray*} hold. Then one says that $F_1$ {\it is induced} by $(\Phi,\varphi)$ from $F_2$. An unfolding is {\it versal} if any unfolding is induced from it by a suitable morphism. A versal unfolding $F:(\C^{m+1}\times {\MM},0)\to(\C,0)$ is {\it universal} \index{universal unfolding} if the dimension of the parameter space $( {\MM},0)$ is minimal. Universal unfoldings exist by work of Thom and Mather. More precisely, an unfolding is versal if and only if the map $\aaa_C$ is surjective, and it is universal if and only if the map $\aaa_C$ is an isomorphism (see e.g. \cite[ch. 8]{AGV85} for a proof). Observe that $\aaa_C$ is surjective/an isomorphism if and only if its restriction to $0\in\MM$, the map \begin{eqnarray}\label{10.14} \aaa_{C,0}:T_0 {\MM}\to \OO_{\C^{m+1},0}/J_f \end{eqnarray} is surjective/an isomorphism. Therefore an unfolding \begin{eqnarray}\label{10.15} F(z_0,...,z_m,t_1,...,t_n)=F(z,t)=F_t(z)=f(z)+\sum_{j=1}^n m_jt_j, \end{eqnarray} with $( {\MM},0)=(\C^n,0)$ and coordinates $t=(t_1,...,t_n)$, where $m_1,...,m_n\in\OO_{\C^{m+1},0}$ represent a basis of the Jacobi algebra $\OO_{\C^{m+1},0}/J_f$, is universal. In a versal unfolding the critical space $C$ is smooth. Just as for $f$, one can also choose a good representative for an unfolding $F$. In the following we suppose that such a representative is chosen. Then $ {\MM}$ is an open subset of $\C^n$. \begin{theorem}\label{t10.5}\cite[Theorem 5.3]{He02} Let $f:(\C^{m+1},0)\to(\C,0)$ be an isolated hypersurface singularity with Milnor number $n$.
The base space $\MM$ of a good representative of a universal unfolding is a generically semisimple {\it F-manifold} with {\it Euler field}. The multiplication comes from the multiplication on the right side of the isomorphism ${\bf a}_C:\TT_{ {\MM},0}\to\OO_{C,0}$ in \eqref{10.13}, the unit field is $e={\bf a}_C^{-1}([1])$, the Euler field is $E={\bf a}_C^{-1}([F])$. For each $t\in {\MM}$, the eigenvalues of $E\circ$ are the critical values of $F_t$. \end{theorem} The algebra $(T_0 {\MM},\circ)\cong (\OO_{\C^{m+1},0}/J_f)$ is irreducible. Therefore the caustic $\KK_3\subset {\MM}$ is not empty, and it is a hypersurface. Also the Maxwell stratum $\KK_2\subset {\MM}$ is not empty, and it is a hypersurface as well. Within the caustic there is the {\it $\mu$-constant stratum} \index{$\mu$-constant stratum} \begin{eqnarray} \MM_\mu := \{t\in {\MM}\,|\, F_t\textup{ has only one singularity }z^0, F_t(z^0)=0\}.&&\label{10.16} \end{eqnarray} The unique singularity of $F_t$ for $t\in \MM_\mu$ automatically has Milnor number $n$ (because the projection $C\to{\MM}$ is finite and flat of degree $n$). The $\mu$-constant stratum is a complex subvariety of ${\MM}$. Its dimension is called the {\it modality} \index{modality} of the singularity $f$. It is nontrivial, but true (see e.g. \cite[Theorem 2.2]{He11}) that the lattices $(H_\Z,L)(F_t)$ for $t\in \MM_\mu$ glue to a local system of $\Z$-lattices with Seifert forms (and monodromy $M$ and intersection forms $I^{(0)}$ and $I^{(1)}$) over $\MM_\mu$. Arnold classified all singularities with modality $\leq 2$ up to coordinate changes. We will describe them loosely in the following theorem. We give normal forms for the simple singularities, the simple elliptic singularities and the hyperbolic singularities. Normal forms of the other singularities with modality $\leq 2$ can be found in \cite[II ch. 15.1]{AGV85}. Wall also classified all singularities with modality $3$ \cite{Wa99}.
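In the simplest case these strata can be computed by hand. For the $A_2$-singularity $f=z_0^3$ a universal unfolding as in \eqref{10.15} is $F_t(z_0)=z_0^3+t_1z_0+t_2$ with $( {\MM},0)=(\C^2,0)$. The function $F_t$ has a degenerate critical point if and only if $\frac{\paa F_t}{\paa z_0}=3z_0^2+t_1$ and $\frac{\paa^2 F_t}{\paa z_0^2}=6z_0$ vanish simultaneously, which forces $z_0=0$ and $t_1=0$. Therefore the caustic is the hypersurface
\begin{eqnarray*}
\KK_3=\{t\in {\MM}\,|\, t_1=0\},
\end{eqnarray*}
and the $\mu$-constant stratum is $\MM_\mu=\{0\}$, in accordance with the modality $0$ of $A_2$.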
From a normal form $F_t(z)$ in $m+1$ variables $z_0,...,z_m$ and some parameters $t$ one obtains also a normal form $F_t(z)+\sum_{j=m+1}^{m_2}z_j^2$ in $m_2+1$ variables. In the following theorem, $(m\geq 2)$ after the symbol of a family means that the smallest $m$ for which this family exists is $m=2$, i.e. the family needs at least three variables. The meaning of $(m\geq 0)$ and $(m\geq 1)$ is analogous. See also Remark \ref{t10.11}. \begin{theorem}\label{t10.6} (Arnold, see \cite[II 15.1]{AGV85}) (a) \index{classification of singularities}\index{singularity} The singularities with modality 0 are called {\sf simple singularities}. \index{simple singularity} There are two series and three exceptional cases (the index is the Milnor number): \begin{eqnarray*} \begin{array}{c|c|c|c|c} A_n\ (m\geq 0) & D_n\ (m\geq 1) & E_6\ (m\geq 1) & E_7\ (m\geq 1)& E_8\ (m\geq 1)\\ \hline z_0^{n+1} & z_0^{n-1}+z_0z_1^2 & z_0^4+z_1^3 & z_0^3z_1+z_1^3 & z_0^5+z_1^3 \end{array} \end{eqnarray*} (b) The singularities with modality 1 are called {\sf unimodal singularities}. \index{unimodal singularity} They fall into three groups of 1-parameter families: (i) The three (1-parameter families of) {\sf simple elliptic singularities} \index{simple elliptic singularity} $\www{E}_6$, $\www{E}_7$ and $\www{E}_8$, where the parameter is $\lambda\in \C-\{0,1\}$, \begin{eqnarray*} \begin{array}{l|l|l}\textup{name} & n & \textup{normal form} \\ \hline \www{E}_6\ (m\geq 2) & 8 & f_\lambda=z_1(z_1-z_0)(z_1-\lambda z_0)-z_0z_2^2 \\ \www{E}_7\ (m\geq 1) & 9 & f_\lambda=z_0z_1(z_1-z_0)(z_1-\lambda z_0) \\ \www{E}_8\ (m\geq 1) & 10 & f_\lambda=z_1(z_1-z_0^2)(z_1-\lambda z_0^2) \end{array} \end{eqnarray*} These normal forms are called {\sf Legendre normal forms}; they are from \cite[1.9]{Sa74}.
(ii) The (1-parameter families of) {\sf hyperbolic singularities} \index{hyperbolic singularity} $T_{pqr}$ with three discrete parameters $p,q,r\in\Z_{\geq 2}$ with $p\geq q\geq r$ and $\frac{1}{p}+\frac{1}{q}+\frac{1}{r}<1$, where the continuous parameter is $t\in\C^*$, \begin{eqnarray*} z_0^p+z_1^q+z_2^r+tz_0z_1z_2. \end{eqnarray*} (iii) 14 (1-parameter families of) {\sf exceptional unimodal singularities} \index{exceptional unimodal singularity} with the following names (the index is the Milnor number), \begin{eqnarray*} m\geq 1:&& E_{12},\ E_{13},\ E_{14},\ Z_{11},\ Z_{12},\ Z_{13},\ W_{12},\ W_{13} \\ m\geq 2:&& Q_{10},\ Q_{11},\ Q_{12},\ S_{11},\ S_{12},\ U_{12}. \end{eqnarray*} (c) The singularities with modality 2 are called {\sf bimodal singularities}. \index{bimodal singularity} They fall into three groups of 2-parameter families: (i) The six (2-parameter families of) {\sf quadrangle singularities} \index{quadrangle singularity} with the following names, \begin{eqnarray*} \begin{array}{l|l|l|l|l||l|l|l|l|l} m\geq 1 & \textup{name} & E_{3,0} & Z_{1,0} & W_{1,0} & m\geq 2 & \textup{name} & Q_{2,0} & S_{1,0} & U_{1,0} \\ & n & 16 & 15 & 15 & & n & 14 & 14 & 14 \end{array} \end{eqnarray*} (ii) The eight (2-parameter families of) {\sf bimodal series} \index{bimodal series} with one discrete parameter $p\in\N$ and with the following names, \begin{eqnarray*} \begin{array}{l|l|l|l|l|l} m\geq 1 & \textup{name} & E_{3,p} & Z_{1,p} & W_{1,p} & W_{1,p}^\sharp \\ & n & 16+p & 15+p & 15+p & 15+p \\ \hline m\geq 2 & \textup{name} & Q_{2,p} & S_{1,p} & S_{1,p}^\sharp & U_{1,p} \\ & n & 14+p & 14+p & 14+p & 14+p \end{array} \end{eqnarray*} (iii) The 14 (2-parameter families of) {\sf exceptional bimodal singularities} \index{exceptional bimodal singularity} with the following names (the index is the Milnor number), \begin{eqnarray*} m\geq 1:&& E_{18},\ E_{19},\ E_{20},\ Z_{17},\ Z_{18},\ Z_{19},\ W_{17},\ W_{18} \\ m\geq 2:&& Q_{16},\ Q_{17},\ Q_{18},\ S_{16},\ S_{17},\
U_{16}. \end{eqnarray*} \end{theorem} The simple singularities and the simple elliptic singularities can be characterized in many different ways. Arnold found the following characterizations. \begin{theorem}\label{t10.7} (a) (Arnold \cite{Ar73-1}) The simple singularities are the only isolated hypersurface singularities where $I^{(0)}$ on $H_\Z$ is positive definite. (Classical) Then $(H_\Z,I^{(0)})$ is a root lattice of the same type as the name, $M$ is a Coxeter element, $\Gamma^{(0)}$ is the Weyl group, $\Delta^{(0)}$ is the set of roots, so especially $\Delta^{(0)}=R^{(0)}$. (b) (Arnold \cite{Ar73-2}) The simple elliptic singularities are the only isolated hypersurface singularities where $I^{(0)}$ on $H_\Z$ is positive semidefinite. \end{theorem} \section{Distinguished bases}\label{s10.2} Let $f:(\C^{m+1},0)\to(\C,0)$ be an isolated hypersurface singularity. Theorem \ref{t10.2} and Definition \ref{t10.3} gave a unimodular bilinear lattice $(H_\Z,L)$ which is associated canonically to the singularity $f$. Here $H_\Z$ is the Milnor lattice of the fiber over a value in $\R_{>0}$ and $L$ is the normalized Seifert form. In fact, the pair $(H_\Z,L)$ comes naturally equipped with a $\Br_n\ltimes\{\pm 1\}^n$ orbit $\BB^{dist}$ of triangular bases. They are called {\it distinguished bases}. Also the set $\BB^{dist}$ is an invariant of the singularity $f$. This section gives the construction of the distinguished bases and lists several results on their special properties. Many open questions surround them. Let $F:\XX\to\Delta_\eta$ be a good representative of a universal unfolding $F$ of $f$, with $\XX=F^{-1}(\Delta_\eta)\cap (B_\varepsilon\times {\MM})$ where $ {\MM}\subset \C^n$ is a sufficiently small open ball around 0 in $\C^n$. As above the critical space is $C\subset \XX$.
Its image under the map $$(F,\pr_{{\MM}}):\XX\subset B_\varepsilon\times {\MM}\to \Delta_\eta\times {\MM}, \quad (z,t)\mapsto (F(z,t),t)$$ is the {\it discriminant} $D_{1,n}$, a hypersurface in $\Delta_\eta\times {\MM}$. The projection $D_{1,n}\to {\MM}$ is finite and flat of degree $n$. Of course, for that $ {\MM}$ must have been chosen small enough. For any $t\in {\MM}$ the intersection $D_{1,n}\cap(\Delta_\eta\times\{t\})\subset \Delta_\eta\times\{t\}$ is the set of critical values of the critical points of $F_t$. See Figure \ref{Fig:10.2}. \begin{figure} \includegraphics[width=0.7\textwidth]{pic-10-2.png} \caption[Figure 10.2]{A schematic picture of $\Delta_\eta\times\MM\supset D_{1,n}$ for a representative $F:\XX\to\Delta_\eta$ of a universal unfolding $F$ of a singularity $f$} \label{Fig:10.2} \end{figure} Recall that the caustic $\KK_3\subset {\MM}$ and the Maxwell stratum $\KK_2\subset {\MM}$ are complex hypersurfaces. Choose any point $t^0\in {\MM}-(\KK_3\cup\KK_2)$, i.e. a generic point. Then $F_{t^0}:\XX_{t^0}\to\Delta_\eta$ with $\XX_{t^0}:=\XX\cap(B_\varepsilon\times\{t^0\})$ is a holomorphic function with $n$ $A_1$-singularities such that the critical values $u_1,...,u_n\in\Delta_\eta$ of these $A_1$-singularities are pairwise different. The union $\bigcup_{\tau\in \Delta_\eta-\{u_1,...,u_n\}} H_m(F_{t^0}^{-1}(\tau),\Z)$ is a flat $\Z$-lattice bundle of rank $n$. Near each critical value $u_j$ there is a lattice vector, which is unique up to sign and which comes geometrically from the vanishing cycle of the $A_1$-singularity in the fiber over $u_j$. One chooses a distinguished system $(\uuuu{\gamma};\sigma)$ of paths $\gamma_1,...,\gamma_n$ from $\eta$ to $u_{\sigma(1)},...,u_{\sigma(n)}$ and pushes the $n$ vanishing cycles along these paths to $H_m(F_{t^0}^{-1}(\eta),\Z)$. There is a canonical isomorphism $$H_m(F_{t^0}^{-1}(\eta),\Z)\cong H_m(X_\eta,\Z)\cong Ml(f,1)=H_\Z.$$ One obtains a tuple $\uuuu{v}=(v_1,...,v_n)$ of cycles in $H_\Z$.
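For the $A_2$-singularity $f=z_0^3$ this procedure is completely explicit. With the universal unfolding $F_t(z_0)=z_0^3+t_1z_0+t_2$, choose $t^0=(-3a^2,0)$ for a small $a>0$. Then
\begin{eqnarray*}
F_{t^0}(z_0)=z_0^3-3a^2z_0
\end{eqnarray*}
has the two nondegenerate critical points $z_0=\pm a$ with the two different critical values $u_{1/2}=\mp 2a^3$, so $t^0\notin\KK_3\cup\KK_2$. Two paths $\gamma_1,\gamma_2$ from $\eta$ to $u_1$ and $u_2$ which meet only at $\eta$ yield a tuple $(v_1,v_2)$ of vanishing cycles, a distinguished basis of the rank 2 Milnor lattice of $A_2$.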
Brieskorn \cite[Appendix]{Br70} first wrote a proof of part (a) of the following theorem (see also \cite{AGV88}, \cite[Satz 5.5 in 5.4, and 5.6]{Eb01}, \cite[\S 1 1.5]{AGLV98}). \begin{theorem}\label{t10.8} (a) The resulting tuple $\uuuu{v}$ is a triangular basis of $(H_\Z,L)$ with $S:=L(\uuuu{v}^t,\uuuu{v})^t\in T^{uni}_n(\Z)$. (b) The $\Br_n\ltimes\{\pm 1\}^n$ orbit $\BB^{dist}$ of this basis $\uuuu{v}$ is an invariant of the singularity $f$. Its elements are called {\sf distinguished bases}. \index{distinguished basis} \end{theorem} \begin{remarks}\label{t10.9} (i) If one pushes the vanishing cycle above $u_{\sigma(j)}$ along the path $\gamma_j$ to the fiber $F_{t^0}^{-1}(\eta)$ over $\eta$, the union of cycles in the fibers above $\gamma_j$ forms an $(m+1)$-dimensional cycle with boundary in $F_{t^0}^{-1}(\eta)$, which gives a homology class in $H_{m+1}(\XX_{t^0}, F_{t^0}^{-1}(\eta),\Z)$. It is a {\it Lefschetz thimble}. \index{Lefschetz thimble} Its homology class is mapped by the isomorphism $$H_{m+1}(\XX_{t^0}, F_{t^0}^{-1}(\eta),\Z) \cong H_m(F_{t^0}^{-1}(\eta),\Z)$$ to the homology class of its boundary, which is the cycle in the fiber $F_{t^0}^{-1}(\eta)$. See Figure \ref{Fig:10.3}. \begin{figure} \includegraphics[width=0.7\textwidth]{pic-10-3.png} \caption[Figure 10.3]{Two Lefschetz thimbles above paths from $\eta$ to critical values $u_1$ and $u_2$} \label{Fig:10.3} \end{figure} (ii) Recall $k=0$ if $m\equiv 0(2)$ and $k=1$ if $m\equiv 1(2)$. The $\Z$-lattice bundle $\bigcup_{\tau\in \Delta_\eta-\{u_1,...,u_n\}} H_m(F_{t^0}^{-1}(\tau),\Z)$ is the $\Z$-lattice structure $V_\Z^{(k)}((\uuuu{u},\eta),[(\uuuu{\gamma};\sigma)],\uuuu{v})$. So, in the case of an isolated hypersurface singularity, this bundle comes from geometry and is the source of the triangular basis $\uuuu{v}$ in $(H_\Z,L)$. (iii) In \cite[5.6]{Eb01} the bilinear form $L^{sing}$ is called $v$. Korollar 5.3 (i) there claims that the matrix $V$ is upper triangular. But it is lower triangular.
(iv) In \cite{Br83} Brieskorn considered the larger set $\BB^*:=\bigcup_{g\in\Gamma^{(k)}} g(\BB^{dist})$, possibly because he wanted an invariant of the singularity $f$ and did not see that $\BB^{dist}$ is one. $\BB^{dist}$ is one because the set $\{\eta\}\times {\MM}$ does not meet the discriminant. Therefore the $\Z$-lattice $H_m(F_t^{-1}(\eta),\Z)$ is canonically isomorphic to $H_m(X_\eta,\Z)$. Moving $t\in {\MM}-(\KK_3\cup\KK_2)$ changes the tuple $(u_1,...,u_n)$ of critical values and deforms the distinguished system of paths. It does nothing more; $\eta$ is fixed. \end{remarks} Now we start a report on specific properties of the unimodular bilinear lattices $(H_\Z,L,\uuuu{v})$ with distinguished bases which come from isolated hypersurface singularities. We start with the Thom-Sebastiani sum. \begin{theorem}\label{t10.10} (a) (Definition) The {\sf Thom-Sebastiani sum} \index{Thom-Sebastiani sum} of a singularity $f=f(z_0,...,z_{m_1})$ in $m_1+1$ variables and a singularity $g=g(z_{m_1+1},...,z_{m_1+m_2+1})$ in $m_2+1$ different variables is the singularity $f+g$. (b) Thom and Sebastiani \cite{ST71} observed that there is a canonical isomorphism \begin{eqnarray}\label{10.17} H_\Z(f)\otimes H_\Z(g)&\cong& H_\Z(f+g), \end{eqnarray} which respects the monodromies and the normalized monodromies.
\begin{eqnarray} M^{sing}(f)\otimes M^{sing}(g)&\cong& M^{sing}(f+g),\nonumber\\ M(f)\otimes M(g)&\cong& M(f+g).\label{10.18} \end{eqnarray} Deligne (1973?, cited in \cite{AGV88}) observed that it respects the Seifert forms (up to a sign) and the normalized Seifert forms, \begin{eqnarray} L^{sing}(f)\otimes L^{sing}(g) &\cong& (-1)^{(m_1+1)(m_2+1)}L^{sing}(f+g),\nonumber\\ L(f)\otimes L(g)&\cong& L(f+g).\label{10.19} \end{eqnarray} Any distinguished basis $(\delta^{f}_1,...,\delta^{f}_{n(f)})$ of $f$ and any distinguished basis $(\delta^{g}_1,...,\delta^{g}_{n(g)})$ of $g$ give rise to a basis \begin{eqnarray}\label{10.20} (\delta^{f}_1\otimes \delta^{g}_1,..., \delta^{f}_1\otimes \delta^{g}_{n(g)}, \delta^{f}_2\otimes \delta^{g}_1,..., \delta^{f}_2\otimes \delta^{g}_{n(g)},...,\\ \delta^{f}_{n(f)}\otimes \delta^{g}_1,..., \delta^{f}_{n(f)}\otimes \delta^{g}_{n(g)})\nonumber \end{eqnarray} of $H_\Z(f)\otimes H_\Z(g)$, with the lexicographic order. Gabrielov \cite{Ga73} observed that its image under the isomorphism \eqref{10.17} is a distinguished basis of $H_\Z(f+g)$. \end{theorem} \begin{remark}\label{t10.11} The Thom-Sebastiani sum $f+g$ of an isolated hypersurface singularity $f=f(z_0,...,z_{m_1})$ and an $A_1$-singularity $g=z^2_{m_1+1}+...+z^2_{m_1+m_2+1}$ is close to the singularity $f$. Here $f+g$ is called a {\it suspension} of $f$. The generator of the Milnor lattice $H_\Z(g)\cong \Z$ is unique up to a sign. Therefore here the canonical isomorphism $H_\Z(f)\otimes H_\Z(g)\cong H_\Z(f+g)$ induces an isomorphism $H_\Z(f)\cong H_\Z(f+g)$ which is unique up to sign. Therefore any bilinear form and any endomorphism on $H_\Z(f)$ are also well-defined on $H_\Z(f+g)$. The isomorphism \eqref{10.18} shows that the normalized monodromies $M(f)$ and $M(f+g)$ coincide, because $M(g)=\id$. The isomorphism \eqref{10.19} shows that the normalized Seifert forms $L(f)$ and $L(f+g)$ coincide, because $L(g)(\delta^g,\delta^g)=1$ where $H_\Z(g)=\Z\delta^g$.
Gabrielov's result implies that $f$ and $f+g$ lead to the same unimodular bilinear lattice $(H_\Z,L)$ with $\Br_n\ltimes\{\pm 1\}^n$ orbit $\BB^{dist}$ of distinguished bases. \end{remark} The technique most successfully applied to the calculation of distinguished bases of isolated hypersurface singularities is due to A'Campo and Gusein-Zade. It works for plane curve singularities, i.e. functions in two variables. Together with the Thom-Sebastiani result above, it can be applied to many singularities in more than two variables. We will not describe the recipe here; the following theorem just describes its result roughly. For details see the given references as well as \cite{AGV88}. The recipe builds on the resolution of plane curve singularities. In concrete cases it leads to many pictures of curves in the real plane and an interesting way to construct them and work with them. \begin{theorem}\label{t10.12} (A'Campo \cite{AC75-1}\cite{AC75-2} and Gusein-Zade \cite{Gu74-1}\cite{Gu74-2}, see also \cite[\S 2 2.1]{AGLV98} \cite{BK96} \cite{LSh18}) Let $f$ be a plane curve singularity, \index{plane curve singularity} so an isolated hypersurface singularity with $m=1$. Suppose that $f=f_1...f_r$ is the decomposition of $f$ into its irreducible branches and that $f_j((\R^2,0))\subset (\R,0)$ for any $j$ (so $f$ is a complexification of a {\sf totally real singularity}). Any {\sf totally real morsification} (see e.g.
\cite{AGLV98} for its definition) gives rise to three sets $B^{\ominus},B^{0},B^{\oplus}\subset H_\Z$ with the following properties: If $\uuuu{v}=(v_1,...,v_n)$ is a list of the elements of $B^{\ominus}\cup B^{0}\cup B^{\oplus}$ which lists first the elements of $B^{\ominus}$, then the elements of $B^0$ and finally the elements of $B^{\oplus}$, then $\uuuu{v}$ is a distinguished basis, and its matrix $S$ takes the following shape, \begin{eqnarray}\label{10.21} S=L(\uuuu{v}^t,\uuuu{v})^t=E_n+ \left(\begin{array}{c|c|c} 0 & - & + \\ \hline 0 & 0 & - \\ \hline 0 & 0 & 0\end{array}\right)\in T^{uni}_n(\Z), \end{eqnarray} more precisely, for $i<j$ \begin{eqnarray}\nonumber -I^{sing}(v_i,v_j)=I^{(1)}(v_i,v_j)=L(v_j,v_i)=S_{ij}\\ =\left\{\begin{array}{ll} 0&\textup{if }v_i,v_j\in B^\ominus\textup{ or } v_i,v_j\in B^0\textup{ or }v_i,v_j\in B^\oplus,\\ \leq 0&\textup{if }v_i\in B^\ominus, v_j\in B^0,\\ \geq 0&\textup{if }v_i\in B^\ominus, v_j\in B^\oplus,\\ \leq 0&\textup{if }v_i\in B^0, v_j\in B^\oplus. \end{array}\right. \label{10.22} \end{eqnarray} \end{theorem} Theorem \ref{t10.10} and Theorem \ref{t10.12} together allow one to deal with singularities $f(z_0,...,z_m)$ with $m\geq 2$ if $f$ is an iterated Thom-Sebastiani sum of plane curve singularities. The following Theorem \ref{t10.13} of Gabrielov is stronger. It allows one to construct a distinguished basis of any singularity (in principle) from results for singularities with fewer variables. Again we do not give the precise recipe, but only its rough result. For details see the given references. \begin{theorem}\label{t10.13} (a) \cite{Ga79} (see also \cite[5.10]{Eb01} \cite[\S 2 2.4]{AGLV98}) Given an isolated hypersurface singularity $f:(\C^{m+1},0)\to(\C,0)$, there is a recipe of Gabrielov which allows one to construct a certain distinguished basis of $f$ from the following data: a generic linear function $g:\C^{m+1}\to\C$ and a distinguished basis of $\www f:= f|_{g^{-1}(0)}$.
For the recipe one needs sufficient information on the {\sf polar curve} \index{polar curve} of $f$ with respect to $g$. Let $S$ be the matrix of the distinguished basis of $f$, and let $\www S$ be the matrix of the distinguished basis of $\www f$. Then especially \begin{eqnarray}\label{10.23} \{S_{ij}\,|\, i,j\in\{1,...,n(f)\}\}\subset \{0,\pm 1\}\cup\{\pm\www S_{ij}\,|\, i,j\in \{1,...,n(\www f)\}\}. \end{eqnarray} (b) (Corollary of part (a)) Each isolated hypersurface singularity has a distinguished basis with matrix $S$ with entries only $0,\pm 1$, so $$\{S_{ij}\,|\, i,j\in\{1,...,n(f)\}\}\subset \{0,\pm 1\}.$$ \end{theorem} The following theorem lists further properties of the unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with a distinguished basis $\uuuu{e}$, which comes from a singularity $f$. \begin{theorem}\label{t10.14} Let $f:(\C^{m+1},0)\to(\C,0)$ be an isolated hypersurface singularity with Milnor number $n\in\N$, and let $(H_\Z,L,\uuuu{e})$ be the induced unimodular bilinear lattice with one chosen distinguished basis $\uuuu{e}$ and matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$. Fix $k\in\{0;1\}$. (a) (Classical, e.g. \cite{Ga79}) $\Delta^{(k)}=\Gamma^{(k)}(e_j)$ for any element $e_j$ of the distinguished basis, so $\Delta^{(k)}$ is a single orbit. (b) \cite{Ga74-1}\cite{La73}\cite{Le73} The triple $(H_\Z,L,\uuuu{e})$ is irreducible. (c) If $n\geq 2$ then there are $\delta_1,\delta_2\in \Delta^{(k)}$ with $I^{(k)}(\delta_1,\delta_2)=1$. (d) (Classical) The monodromy $M$ is quasiunipotent. The sizes of the Jordan blocks are at most $m+1$. The sizes of the Jordan blocks with eigenvalue 1 are at most $m$. (e) \cite{AC73} $\tr(M)=1$. \end{theorem} \begin{remarks}\label{t10.15} (i) The discriminant $D_{1,n}\subset\Delta_\eta\times {\MM}$ of a good representative $F$ of a universal unfolding of $f$ is irreducible because it is the image of the smooth critical set $C\subset\XX$. Its irreducibility leads easily to part (a), $\Delta^{(k)}=\Gamma^{(k)}(e_j)$.
See e.g. \cite[Ch. 5 Satz 5.20]{Eb01}. (ii) Part (a) immediately implies part (b), the irreducibility of the triple $(H_\Z,L,\uuuu{e})$. This is the proof of part (b) in \cite{Ga74-1}. The proof of part (b) in \cite{Le73} uses part (e). (iii) Part (c) follows from the fact that any singularity with Milnor number $n\geq 2$ deforms to the $A_2$-singularity. See e.g. \cite{Eb01}. (iv) The triple $(H_\Z,I^{(k)},\Delta^{(k)})$ with the properties in parts (a) and (c), the property that $\Delta^{(k)}$ generates $H_\Z$, and the further property $\Delta^{(0)}\subset R^{(0)}$ if $k=0$, is a {\it vanishing lattice}. \index{vanishing lattice} This notion is defined in \cite{Ja83} and \cite{Eb84}. Here $\Gamma^{(k)}=\langle s_\delta^{(k)}\,|\, \delta\in\Delta^{(k)}\rangle$ is determined by the triple. (v) In rank 2 and rank 3 the only unimodular bilinear lattices whose monodromy has trace 1 are those of $A_2$ and $A_3$. So, the condition $\tr(M)=1$ is very restrictive. (vi) It is an interesting open question how to characterize those matrices $S\in T^{uni}_n(\Z)$ which come from distinguished bases of some isolated hypersurface singularities. Theorem \ref{t10.13} (b) and Theorem \ref{t10.14} give a number of necessary conditions. But very probably these conditions are not sufficient. \end{remarks} Finally we come to concrete results for the first singularities in Arnold's classification. Recall that the matrices $S\in \SSS^{dist}\subset T^{uni}_n(\Z)$ of distinguished bases are called {\it distinguished matrices}. Gabrielov gave distinguished matrices for all unimodal singularities in \cite{Ga73} and \cite{Ga74-2}. Ebeling gave distinguished matrices for all bimodal singularities in \cite{Eb83}. Distinguished matrices for the simple singularities are classical. Figure \ref{Fig:10.4} encodes the information of matrices $S$ of distinguished bases for the simple singularities and the simple elliptic singularities in {\it Coxeter-Dynkin diagrams}.
This is more efficient than writing down series of matrices. See \cite[Ch. 5 Tabelle 5.3]{Eb01} for the given Coxeter-Dynkin diagrams of the simple elliptic singularities. In the given Coxeter-Dynkin diagrams for the simple singularities one can in fact change the numbering arbitrarily. \begin{figure}\includegraphics[width=0.8\textwidth]{pic-10-4.png} \caption[Figure 10.4]{Coxeter-Dynkin diagrams of some distinguished matrices for the simple and the simple elliptic singularities} \label{Fig:10.4} \end{figure} \begin{definition}\label{t10.16} Let $S=(S_{ij})\in T^{uni}_n(\Z)$. Its {\it Coxeter-Dynkin diagram} \index{Coxeter-Dynkin diagram} $\textup{CDD}(S)$ is a graph with $n$ vertices which are numbered from $1$ to $n$ and with weighted edges which are defined as follows: Between the vertices $i$ and $j$ with $i<j$ one has no edge if $S_{ij}=0$ and an edge with weight $S_{ij}$ if $S_{ij}\neq 0$. Alternatively, one draws $|S_{ij}|$ edges if $S_{ij}<0$ and $S_{ij}$ dotted edges if $S_{ij}>0$. \end{definition} The matrix $S$ and its Coxeter-Dynkin diagram $\textup{CDD}(S)$ determine one another. A unimodular bilinear lattice $(H_\Z,L)$ with a triangular basis $\uuuu{e}$ with $S=L(\uuuu{e}^t,\uuuu{e})^t$ is reducible if and only if $CDD(S)$ consists of several components. Recall from Theorem \ref{t10.7} that the simple singularities are the only singularities where $I^{(0)}$ on $H_\Z$ is positive definite and that the simple elliptic singularities are the only singularities where $I^{(0)}$ on $H_\Z$ is positive semidefinite. The following Lemma applies. \begin{lemma}\label{t10.17} Let $S\in T^{uni}_n(\Z)$ and let $(H_\Z,L,\uuuu{e})$ be a unimodular bilinear lattice with triangular basis $\uuuu{e}$ with $S=L(\uuuu{e}^t,\uuuu{e})^t$. (a) Suppose that $I^{(0)}$ is positive definite. Then $S_{ij}\in\{0,\pm 1\}$. The sets $R^{(0)}$, $\BB^{dist}=\Br_n\ltimes\{\pm 1\}^n(\uuuu{e})$ and $\SSS^{dist}=\Br_n\ltimes\{\pm 1\}^n(S)$ are finite. 
(b) Suppose that $I^{(0)}$ is positive semidefinite. Then $S_{ij}\in\{0,\pm 1,\pm 2\}$. The set $\SSS^{dist}=\Br_n\ltimes\{\pm 1\}^n(S)$ is finite. \end{lemma} {\bf Proof:} (a) $R^{(0)}$ is a discrete subset of the compact set $\{\delta\in H_\R\,|\, I^{(0)}(\delta,\delta)=2\}$ and therefore finite. Therefore also the sets $\Delta^{(0)}$, $\BB^{dist}$ and $\SSS^{dist}$ are finite. With $S+S^t$ also the submatrix $\begin{pmatrix}2 & S_{ij} \\ S_{ij} & 2\end{pmatrix}$ (with $i<j$) is positive definite, so $S_{ij}\in\{0,\pm 1\}$. (b) With $S+S^t$ also the submatrix $\begin{pmatrix}2 & S_{ij} \\ S_{ij} & 2\end{pmatrix}$ (with $i<j$) is positive semidefinite, so $S_{ij}\in\{0,\pm 1,\pm 2\}$. Therefore the set $\SSS^{dist}$ is finite.\hfill$\Box$ \bigskip In section \ref{s3.3} we considered the unimodular bilinear lattice $(H_\Z,L,\uuuu{e})$ with triangular basis $\uuuu{e}$ with arbitrary matrix $S=L(\uuuu{e}^t,\uuuu{e})^t\in T^{uni}_n(\Z)$ and posed the question when the following two inclusions are equalities, \begin{eqnarray*} \BB^{dist}&\subset& \{\uuuu{v}\in (\Delta^{(0)})^n\,|\, s_{v_1}^{(0)}...s_{v_n}^{(0)}=-M\},\hspace*{2cm}(3.3)\\ \BB^{dist}&\subset& \{\uuuu{v}\in (\Delta^{(1)})^n\,|\, s_{v_1}^{(1)}...s_{v_n}^{(1)}=M\}.\hspace*{2.4cm}(3.4) \end{eqnarray*} Chapter \ref{s7} was devoted to answering these questions in the cases of rank 2 and 3. General positive answers were given in Theorem \ref{t3.7} and the Examples \ref{t3.22} (iv) and (v). They cover the cases of the simple singularities. References for the results in the next theorem are given after the theorem. \begin{theorem}\label{t10.18} (a) \begin{tabular}{l|l|l|l} $f$ & $S_{ij}$ & $|\BB^{dist}|$ & $|\SSS^{dist}|$ \\ \hline ADE-singularity & in $\{0,\pm 1\}$ & finite & finite \\ simple elliptic sing. & in $\{0,\pm 1,\pm 2\}$ & infinite & finite \\ any other singularity & unbounded & infinite & infinite \end{tabular} \medskip More details are given below in \eqref{10.33} and \eqref{10.47}.
(b) The cases of the simple singularities: $\Delta^{(0)}=R^{(0)}$. The inclusion in \eqref{3.3} is an equality. (c) The cases of the simple elliptic singularities: $\Delta^{(0)}=R^{(0)}$. The inclusion in \eqref{3.3} is an equality. (d) The cases of the hyperbolic singularities: The inclusion in \eqref{3.3} is an equality. \end{theorem} \begin{remarks}\label{t10.19} (a) The first two lines in the table in part (a) follow from Lemma \ref{t10.17} and Theorem \ref{t10.7}, with the exception of $|\BB^{dist}|=\infty$ for the simple elliptic singularities. The fact $|\BB^{dist}|=\infty$ for the simple elliptic singularities is proved for example in \cite{Kl87} or in \cite{HR21}. The third line in the table is due to \cite{Eb18}. (b) $\Delta^{(0)}=R^{(0)}$ is classical. The equality in \eqref{3.3} was first proved by Deligne in a letter to Looijenga \cite{De74}. It was rediscovered and generalized by Igusa and Schiffler \cite{IS10} to all Coxeter groups, see the Theorems \ref{t3.6} and \ref{t3.7}. (c) \cite[Korollar 4.4]{Kl83} says $\Delta^{(0)}=R^{(0)}$, \cite[VI 1.2 Theorem]{Kl87} gives equality in \eqref{3.3}. (d) \cite[Theorem 1.5]{BWY19}. (e) Theorem \ref{t7.1} (a) implies that in the case $A_2$ the inclusion in \eqref{3.4} is an equality. Theorem \ref{t7.7} (g) implies that in the case $A_3$ the inclusion in \eqref{3.4} is not an equality, but that it becomes an equality if one adds on the right hand side the condition $\sum_{i=1}^n\Z v_i=H_\Z$. These two results for \eqref{3.4} and the results in Theorem \ref{t10.18} for \eqref{3.3} are the only known results on the interesting question of which singularities make the inclusions in \eqref{3.3} or \eqref{3.4} equalities.
\end{remarks}

\section[The groups $G_\Z$ for some singularities] {Orlik blocks and the groups $G_\Z$ for the simple, unimodal and bimodal singularities}\label{s10.3}

The groups $G_\Z$ for the unimodular bilinear lattices $(H_\Z,L)$ from isolated hypersurface singularities have been studied mainly by the first author and F. Gau{\ss} \cite{He92}\cite{He11}\cite{GH17}\cite{GH18}. This section reports on results, especially for the singularities with modality $\leq 2$. But it starts with some rather recent general results on Orlik blocks \cite{He20} \cite{HM22-1} \cite{HM22-2} (which would have simplified part of the work in the references above, if they had been available at that time).

\begin{definition}\label{t10.20} Consider a $\Z$-lattice $H_\Z$ of rank $n\in\N$ with an automorphism $M$ which is of finite order and cyclic. Here {\it cyclic} means that an element $e_1\in H_\Z$ with $H_\Z=\bigoplus_{i=0}^{n-1}\Z M^{i}(e_1)$ exists. The pair $(H_\Z,M)$ is called an {\it Orlik block}. \index{Orlik block} \end{definition}

\begin{remarks}\label{t10.21} Consider an Orlik block $(H_\Z,M)$ with $H_\Z$ a $\Z$-lattice of rank $n$.

(i) The eigenvalues of $M$ are roots of unity because $M$ is of finite order. Cyclic implies regular. Finite order and regular together imply that each eigenvalue has multiplicity one. In particular, $M$ is semisimple. Denote by $\Ord(M)\subset\N$ the finite set of orders of the eigenvalues of $M$. Then $p_{ch,M}=\prod_{m\in\Ord(M)}\Phi_m$ is a product of cyclotomic polynomials. The pair $(H_\Z,M)$ is determined up to isomorphism by the set $\Ord(M)$ or by the polynomial $p_{ch,M}$.

(ii) Lemma \ref{t5.2} (a) implies $\End(H_\Z,M)=\Z[M]$. Consider the group
\begin{eqnarray*}
\Aut_{S^1}(H_\Z,M)&:=&\{g\in\Aut(H_\Z,M)\,|\, \textup{all eigenvalues of }g\textup{ are in }S^1\}\\
&=&\{p(M)\,|\, p(t)=\sum_{i=0}^{n-1}p_it^i\in\Z[t],\\
&& p(\lambda)p(\lambda^{-1})=1\textup{ for all eigenvalues } \lambda\textup{ of }M\}.
\end{eqnarray*}
Here we used $\lambda^{-1}=\oooo{\lambda}$ for eigenvalues $\lambda$ of $M$.

(iii) If $M$ is the monodromy of a unimodular bilinear lattice $(H_\Z,L)$ then $\Aut_{S^1}(H_\Z,M)=G_\Z=G_\Z^{(0)}=G_\Z^{(1)}$ by Lemma \ref{t5.2} (b) (ii).

(iv) $\Aut_{S^1}(H_\Z,M)$ contains the finite group $\{\pm M^l\,|\, l\in\Z\}$. The main result Theorem 1.2 in \cite{He20} gives necessary and sufficient conditions on the set $\Ord(M)$ or (equivalently) on the polynomial $p_{ch,M}$ for
\begin{eqnarray*}
\Aut_{S^1}(H_\Z,M)=\{\pm M^l\,|\, l\in\Z\}.
\end{eqnarray*}
\end{remarks}

Orlik \cite{Or72} conjectured that in the case of a {\it quasihomogeneous singularity} the pair $(H_\Z,M)$ decomposes in a specific way into a sum of Orlik blocks. Part of this conjecture was proved in \cite{HM22-1}\cite{HM22-2}. Before stating the conjecture and the results precisely, we define quasihomogeneous singularities and two special families of quasihomogeneous singularities.

\begin{definition}\label{t10.22} (a) An isolated hypersurface singularity $f:(\C^{m+1},0)\to(\C,0)$ is a {\it quasihomogeneous singularity}, \index{quasihomogeneous singularity} if $f\in\C[z_0,...,z_m]$ is a {\it quasihomogeneous polynomial}, that means there exists a weight system $\uuuu{w}=(w_0,...,w_m) \in ((0,\frac{1}{2}]\cap \Q)^{m+1}$ such that for any monomial $\prod_{i=0}^m z_i^{e_i}$ with nonvanishing coefficient in $f$ its weighted degree $\deg_{\uuuu{w}}(\prod_{i=0}^m z_i^{e_i}):= \sum_{i=0}^mw_ie_i$ is equal to $1$.

(b) A {\it chain type singularity} \index{chain type singularity} is a quasihomogeneous singularity of the special shape
\begin{eqnarray}\label{10.24}
f=f(z_0,...,z_m)= z_0^{a_0+1}+\sum_{i=1}^mz_{i-1}z_i^{a_i}
\end{eqnarray}
for some $m\in\N$ and some $a_0,...,a_m\in\N$. This quasihomogeneous polynomial does indeed have an isolated singularity.
(c) A {\it cycle type singularity} \index{cycle type singularity} is a quasihomogeneous singularity of the special shape
\begin{eqnarray}\label{10.25}
f=f(z_0,...,z_m)=\sum_{i=0}^{m-1}z_i^{a_i}z_{i+1}+z_m^{a_m}z_0
\end{eqnarray}
for some $m\in\N$ and some $a_0,...,a_m\in\N$ which satisfy, in the case of odd $m$, that neither $a_j=1$ for all even $j$ nor $a_j=1$ for all odd $j$. This quasihomogeneous polynomial does indeed have an isolated singularity. \end{definition}

The quasihomogeneous singularities form an important and particularly well studied subfamily of all isolated hypersurface singularities. Their monodromies are semisimple.

\begin{remarks}\label{t10.23} Let $f\in\C[z_0,...,z_m]$ be a quasihomogeneous singularity.

(i) Milnor and Orlik \cite{MO70} gave a formula for the characteristic polynomial $p_{ch,M^{sing}}$ of its monodromy. Recall $M=(-1)^{m+1} M^{sing}$.

(ii) The polynomial $p_{ch,M}$ has a unique decomposition $p_{ch,M}=p_1p_2...p_l$ with $p_1\,|\, p_2\,|\, ...\,|\, p_l$ and $p_l$ the minimal polynomial of $M$, for a suitable $l\in\N$.

(iii) Orlik \cite[Conjecture 3.1]{Or72} conjectured that the pair $(H_\Z,M)$ has a decomposition into $l$ Orlik blocks with characteristic polynomials $p_1,...,p_l$.

(iv) His conjecture was proved in \cite[Theorem 1.3]{HM22-1} for all cycle type singularities and in \cite[Theorem 1.3]{HM22-2} for all iterated Thom-Sebastiani sums of chain type singularities and cycle type singularities.

(v) Furthermore, in \cite[Theorem 1.5]{HM22-3} it was proved that each polynomial $p_j$ in (ii) satisfies the sufficient condition (I) in Theorem 1.2 in \cite{He20} for
\begin{eqnarray*}
&&\Aut_{S^1}(\textup{Orlik block for }p_j)\\
&=&\{\pm (\textup{monodromy of the Orlik block})^l\,|\, l\in\Z\}.
\end{eqnarray*}

(vi) Therefore if the singularity $f$ satisfies Orlik's conjecture, one knows the automorphisms of the summands of a decomposition of $(H_\Z,M)$ into Orlik blocks. This is a big step towards $G_\Z$.
However, in general the decomposition is not left or right $L$-orthogonal. Also, automorphisms might mix or exchange Orlik blocks. So there are still more steps to take towards $G_\Z$.

(vii) Many families of singularities with modality $\leq 2$ contain quasihomogeneous singularities, namely all families except the hyperbolic singularities $T_{pqr}$ and the eight bimodal series $E_{3,p},Z_{1,p},W_{1,p},W_{1,p}^\sharp,Q_{2,p},S_{1,p}, S_{1,p}^\sharp,U_{1,p}$. Each family which contains a quasihomogeneous singularity at all also contains a quasihomogeneous singularity which is an iterated Thom-Sebastiani sum of chain type singularities and cycle type singularities. Therefore Orlik's conjecture is true for all of these families of singularities. In \cite{He92} this was proved in case-by-case work using Coxeter-Dynkin diagrams.

(viii) But Orlik blocks are not only useful for quasihomogeneous singularities. Also in the families with modality $\leq 2$ which do not contain quasihomogeneous singularities, $(H_\Z,M)$ usually contains an $M$-invariant sublattice of finite index in $H_\Z$ which decomposes into Orlik blocks, each of which satisfies the sufficient condition (I) in Theorem 1.2 in \cite{He20}. This fact was elaborated and used in many cases in \cite{GH17} and \cite{GH18}. See for example Theorem \ref{t10.26} (ii) below.

(ix) Orlik and Randell \cite{OR77} found for the chain type singularities an $n$-th root of the monodromy, without using Theorem \ref{t3.26} (c). The pair $(H_\Z,\textup{this }n\textup{-th root})$ is an Orlik block. However, they conjectured that a distinguished basis with matrix $S$ as in Theorem \ref{t3.26} (c) exists. This conjecture was proved recently by Varolgunes \cite{Va23}. \end{remarks}

Now we describe a part of the results in \cite{He11}\cite{GH17} and \cite{GH18} on $G_\Z$ in the families of singularities with modality $\leq 2$.
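The notion of an Orlik block can be made concrete by a small computation. The following Python sketch (with our own helper functions, not taken from the references) realizes an Orlik block as the companion matrix of $p=\Phi_{12}\Phi_3=(t^4-t^2+1)(t^2+t+1)=t^6+t^5-t^3+t+1$: the first basis vector is cyclic by construction, and since $p$ is squarefree and divides $t^{12}-1$, the automorphism has finite order $\mathrm{lcm}(12,3)=12$.

```python
# Companion-matrix model of an Orlik block with p_{ch,M} = Phi_12 * Phi_3
#   = t^6 + t^5 - t^3 + t + 1.
# M is cyclic (e_1, M e_1, ..., M^5 e_1 is a Z-basis) and of finite order,
# since p is squarefree and divides t^12 - 1.

def companion(coeffs):
    """Companion matrix of t^n + c_{n-1} t^{n-1} + ... + c_1 t + c_0."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1            # subdiagonal of ones: C e_i = e_{i+1}
    for i in range(n):
        C[i][n - 1] = -coeffs[i]   # last column carries -c_0, ..., -c_{n-1}
    return C

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = matmul(R, A)
    return R

M = companion([1, 1, 0, -1, 0, 1])     # c_0, ..., c_5 of p
I6 = [[int(i == j) for j in range(6)] for i in range(6)]
print(next(e for e in range(1, 13) if matpow(M, e) == I6))  # 12
```

The same check applies to any product of distinct cyclotomic polynomials; the order of the automorphism is the least common multiple of the indices.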
We start with the table of the characteristic polynomials $p_{ch,-M}$ from the proof of Theorem 8.3 in \cite{He11}, for all families of singularities with modality $\leq 2$ which contain quasihomogeneous singularities. The polynomials can be extracted from the tables of spectral numbers in \cite[13.3.4]{AGV88} or from \cite{He92}. The families consist of all singularities with modality $\leq 2$ except the hyperbolic singularities and the eight bimodal series.

\medskip
\begin{eqnarray*}
\begin{array}{ll|ll|ll}
A_n & \frac{t^{n+1}-1}{t-1} & \www E_6 & \Phi_3^3\Phi_1^2 & E_{3,0} & \Phi_{18}^2\Phi_6\Phi_2^2 \\
D_n & (t^{n-1}+1)\Phi_2 & \www E_7 & \Phi_4^2\Phi_2^3\Phi_1^2 & Z_{1,0} & \Phi_{14}^2\Phi_2^3 \\
E_6 & \Phi_{12}\Phi_3 & \www E_8 & \Phi_6\Phi_3^2\Phi_2^2\Phi_1^2 & Q_{2,0} & \Phi_{12}^2\Phi_4^2\Phi_3 \\
E_7 & \Phi_{18}\Phi_2 & & & W_{1,0} & \Phi_{12}^2\Phi_6\Phi_4\Phi_3\Phi_2 \\
E_8 & \Phi_{30} & & & S_{1,0} & \Phi_{10}^2\Phi_5\Phi_2^2 \\
& & & & U_{1,0} & \Phi_9^2\Phi_3
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{ll|ll|ll}
E_{12} & \Phi_{42} & E_{13} & \Phi_{30}\Phi_{10}\Phi_2 & E_{14} & \Phi_{24}\Phi_{12}\Phi_3 \\
Z_{11} & \Phi_{30}\Phi_6\Phi_2 & Z_{12} & \Phi_{22}\Phi_2^2 & Z_{13} & \Phi_{18}\Phi_9\Phi_2 \\
Q_{10} & \Phi_{24}\Phi_3 & Q_{11} & \Phi_{18}\Phi_{6}\Phi_{3}\Phi_{2} & Q_{12} & \Phi_{15}\Phi_{3}^2 \\
W_{12} & \Phi_{20}\Phi_{5} & W_{13} & \Phi_{16}\Phi_{8}\Phi_{2} & \\
S_{11} & \Phi_{16}\Phi_{4}\Phi_{2} & S_{12} & \Phi_{13} & U_{12} & \Phi_{12}\Phi_{6}\Phi_{4}^2\Phi_{2}^2
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{ll|ll|ll}
E_{18} & \Phi_{30}\Phi_{15}\Phi_3 & E_{19} & \Phi_{42}\Phi_{14}\Phi_2 & E_{20} & \Phi_{66} \\
Z_{17} & \Phi_{24}\Phi_{12}\Phi_6\Phi_3\Phi_2 & Z_{18} & \Phi_{34}\Phi_2^2 & Z_{19} & \Phi_{54}\Phi_2 \\
Q_{16} & \Phi_{21}\Phi_3^2 & Q_{17} & \Phi_{30}\Phi_{10}\Phi_{6}\Phi_{3}\Phi_2 & Q_{18} & \Phi_{48}\Phi_{3} \\
W_{17} & \Phi_{20}\Phi_{10}\Phi_{5}\Phi_{2} & W_{18} & \Phi_{28}\Phi_{7} & \\
S_{16} & \Phi_{17} & S_{17}
& \Phi_{24}\Phi_{8}\Phi_{6}\Phi_{3}\Phi_2 & U_{16} & \Phi_{15}\Phi_{5}^2
\end{array}
\end{eqnarray*}
One sees that $p_{ch,-M}$ has no multiple roots in the following cases: the simple singularities $A_n$, $D_{2n+1}$, $E_6,E_7,E_8$, so all simple singularities except the $D_{2n}$, and 22 of the 28 exceptional unimodal and bimodal singularities, namely all except $Z_{12},Q_{12},U_{12}$, $Z_{18},Q_{16}$ and $U_{16}$.

Part (a) of the next theorem follows from the Remarks \ref{t10.21} (iv) and \ref{t10.23} (iv), (v) and (vii).

\begin{theorem}\label{t10.24}
(a) \cite[Theorems 8.3 and 8.4]{He11} In the cases $A_n$, $D_{2n+1}$, $E_6,E_7,E_8$ and the 22 exceptional unimodal and bimodal singularities other than $Z_{12},Q_{12},U_{12},Z_{18},Q_{16}$ and $U_{16}$, the pair $(H_\Z,M)$ is a single Orlik block and
$$G_\Z=\{\pm M^l\,|\, l\in\Z\}.$$

(b) \cite[Theorem 4.1]{GH17} In the cases $D_{2n}$, $Z_{12},Q_{12},U_{12}$, $Z_{18},Q_{16}$ and $U_{16}$, $(H_\Z,M)$ has a decomposition into two Orlik blocks and
\begin{eqnarray*}
G_\Z=\{\pm M^l\, |\, l\in\Z\}\times U
\end{eqnarray*}
with
\begin{eqnarray*}
\begin{array}{l|l|l|l|l|l|l|l|l}
& D_4 & D_{2n}\textup{ with }n\geq 3& Z_{12} & Q_{12} & U_{12} & Z_{18} & Q_{16} & U_{16}\\ \hline
U\cong & S_3 & S_2 & \{\id\} & S_2 & S_3 & \{\id\} & S_2 & S_3
\end{array}
\end{eqnarray*}

(c) \cite[Theorem 3.1]{GH17} In the simple elliptic cases $\Rad I^{(0)}\subset H_\Z$ has rank 2. The restriction map $G_\Z\to G_\Z|_{\Rad I^{(0)}}$ is called $\pr^{\Rad I^{(0)}}$. There is an exact non-splitting sequence
\begin{eqnarray*}
\{\id\}\to \ker\pr^{\Rad I^{(0)}} \to G_\Z \stackrel{\pr^{\Rad I^{(0)}}}{\to} G_\Z|_{\Rad I^{(0)}} \cong SL_2(\Z)\to \{\id\}
\end{eqnarray*}
with finite group $\ker \pr^{\Rad I^{(0)}}$.
The sublattice $(H_{\C,\neq 1}\cap H_\Z,M|_{..})$ has a left and right $L$-orthogonal decomposition into three Orlik blocks $H^{(1)}\oplus H^{(2)}\oplus H^{(3)}$ with restricted monodromies $M^{(1)}$, $M^{(2)}$ and $M^{(3)}$ and characteristic polynomials in the following table,
\begin{eqnarray*}
p_{ch,-M^{(j)}}=\frac{t^{p_j}-1}{t-1}\quad\textup{with}\quad
\begin{array}{l|l|l|l}
& \www{E}_6 & \www{E}_7 & \www{E}_8 \\ \hline
(p_1,p_2,p_3) & (3,3,3) & (4,4,2) & (6,3,2)
\end{array}
\end{eqnarray*}
Then $\ker \pr^{\Rad I^{(0)}}=U_1\rtimes U_2$, where\\
$U_2\cong S_3$ permutes the three Orlik blocks in the case of $\www{E}_6$,\\
$U_2\cong S_2$ permutes $H^{(1)}$ and $H^{(2)}$ in the case of $\www{E}_7$,\\
and $U_2=\{\id\}$ in the case of $\www{E}_8$.\\
The elements of $U_1$ are the extensions to $\Rad I^{(0)}$ by $\id$ of the following automorphisms of $H_{\C,\neq 1}\cap H_\Z$,
\begin{eqnarray*}
\{(M^{(1)})^\alpha\times (M^{(2)})^\beta\times (M^{(3)})^{\gamma}\,|\, (\alpha,\beta,\gamma)\in\prod_{j=1}^3\{0,1,...,p_j-1\}\\
\textup{with}\quad \frac{\alpha}{p_1}+\frac{\beta}{p_2}+\frac{\gamma}{p_3}\equiv 0\mmod\Z\}
\end{eqnarray*}
\end{theorem}

\begin{remark}\label{t10.25}
Kluitmann \cite[III 2.4--2.6]{Kl87} calculated for the simple elliptic singularities the group $\Aut(H_\Z,I^{(0)},M)$, which contains $G_\Z$. Comparison with part (c) above gives the equality $\Aut(H_\Z,I^{(0)},M)= G_\Z$. Kluitmann described the group by an exact sequence
\begin{eqnarray*}
\{\id\}\to \ker \pr^{\neq 1}&\to& \Aut(H_\Z,I^{(0)},M)\\
&\stackrel{\pr^{\neq 1}}{\to}& \Aut(H_{\C,\neq 1}\cap H_\Z,I^{(0)}|_{..}, M|_{..})\to\{\id\}.
\end{eqnarray*}
The group on the right hand side is finite, of order 1296, 768 and 864 in the cases $\www{E}_6$, $\www{E}_7$ and $\www{E}_8$, respectively. The kernel $\ker \pr^{\neq 1}$ is isomorphic to $\Gamma(3)$, $\Gamma(4)$ and $\Gamma(6)\subset SL_2(\Z)$ in the cases $\www{E}_6$, $\www{E}_7$ and $\www{E}_8$, respectively.
\end{remark}

In the cases of the other families of singularities with modality $\leq 2$, namely the hyperbolic singularities, the six families of quadrangle singularities and the eight bimodal series, we restrict ourselves here to the rough information in the following tables and refer to \cite[Theorem 3.1]{GH17} and \cite[Theorem 5.1 and Theorem 6.1]{GH18} for details.

The characteristic polynomial $p_{ch,-M}$ for the family $T_{pqr}$ of hyperbolic singularities with $p\geq q\geq r$ and $\frac{1}{p}+\frac{1}{q}+\frac{1}{r}<1$ is
\begin{eqnarray*}
p_{ch,-M}(t)=(t^p-1)(t^q-1)(t^r-1)(t-1)^{-1}.
\end{eqnarray*}
The next table from \cite[(5.1)]{GH18} gives the characteristic polynomial $p_{ch,-M}$ for the eight bimodal series as a product $b_1b_2$, respectively $b_1b_2b_3$ in the case $Z_{1,p}$. The meaning of $m$ and $r_I$ is explained in Theorem \ref{t10.26}.
\begin{eqnarray}\label{10.26}
\begin{array}{lllllll}
\textup{series} & n & b_1 & b_2 & b_3 & m & r_I\\ \hline
W_{1,p}^\sharp & 15+p & \Phi_{12}& (t^{12+p}-1)/\Phi_1& - & 12 & 1\\
S_{1,p}^\sharp & 14+p & \Phi_{10}\Phi_2 & (t^{10+p}-1)/\Phi_1 & - & 10 & 1\\
U_{1,p} & 14+p & \Phi_9 & (t^{9+p}-1)/\Phi_1 & - & 9 & 1 \\
E_{3,p} & 16+p & \Phi_{18}\Phi_2 & t^{9+p}+1& - & 18 & 2\\
Z_{1,p} & 15+p & \Phi_{14}\Phi_2 & t^{7+p}+1 & \Phi_2 & 14 & 2\\
Q_{2,p} & 14+p & \Phi_{12}\Phi_4\Phi_3 & t^{6+p}+1 & - & 12 & 2\\
W_{1,p} & 15+p & \Phi_{12}\Phi_6\Phi_3\Phi_2 & t^{6+p}+1 & - & 12 & 2 \\
S_{1,p} & 14+p & \Phi_{10}\Phi_5\Phi_2 & t^{5+p}+1 & - & 10 & 2
\end{array}
\end{eqnarray}

\begin{theorem}\cite{He11}\cite{GH17}\cite{GH18}\label{t10.26}
(a) (i) Within the families of singularities with modality $\leq 2$, the monodromy is of infinite order only for the hyperbolic singularities; there it has one $2\times 2$ Jordan block with eigenvalue $-1$.
(ii) In the bimodal series, $H_\Z$ contains with index $r_I$ a direct sum of Orlik blocks, $H^{(1)}\oplus H^{(2)}$, respectively $H^{(1)}\oplus H^{(2)}\oplus H^{(3)}$ for $Z_{1,p}$, with characteristic polynomials $b_1(-t)$ and $b_2(-t)$, and additionally $b_3(-t)$ in the case $Z_{1,p}$.

(iii) Within the bimodal series, the eigenvalue $\zeta:=e^{2\pi i /m}$ has multiplicity 2 exactly in the eight subseries with $m\,|\, p$, namely the subseries $E_{3,18p},Z_{1,14p},W_{1,12p},W_{1,12p}^\sharp,Q_{2,12p},S_{1,10p}, S_{1,10p}^\sharp,U_{1,9p}$. In these cases $G_\Z$ contains automorphisms which act nontrivially on the 2-dimensional eigenspace $H_{\C,\zeta}$ and which do not exist in the other cases.

(b) The quotient $G_\Z/\{\pm M^l\,|\, l\in\Z\}$ looks roughly as follows in the families of singularities with modality $\leq 2$.
\begin{eqnarray*}
\begin{array}{ll}
\textup{Singularity family} & G_\Z/\{\pm M^l\, |\, l\in\Z\} \\ \hline
\textup{ADE-singularities} & \{\id\}\textup{ or }S_2\textup{ or }S_3 \\
\textup{simple elliptic sing.} & \textup{a finite extension of }SL(2,\Z) \\
\textup{hyperbolic sing.} & \textup{a finite group} \\
\textup{exc. unimodal sing.} & \{\id\}\textup{ or }S_2\textup{ or }S_3 \\
\textup{exc. bimodal sing.} & \{\id\}\textup{ or }S_2\textup{ or }S_3 \\
\textup{quadrangle sing.} & \textup{a triangle group} \\
\textup{the 8 series, for }m\not|p & \textup{a cyclic finite group} \\
\textup{the 8 subseries with }m|p & \textup{an infinite Fuchsian group}
\end{array}
\end{eqnarray*}
(This table is taken from the introduction in \cite{GH18}.) \end{theorem}

Finally, we come to the subgroup $G_\Z^{\BB}\subset G_\Z$.

\begin{remarks}\label{t10.27}
In \cite{He11} a subgroup $G^{mar}\subset G_\Z$ is defined for any isolated hypersurface singularity. It is called the {\it $\mu$-constant monodromy group}. It comes from transversal monodromies along $\mu$-constant families and from $\pm\id$.
It has the property
\begin{eqnarray*}
G^{mar}\subset G_\Z^\BB \subset G_\Z
\end{eqnarray*}
(this is stated, though not really explained, in \cite[Remark 3.4]{He11}). In \cite{He11}, \cite{GH17} and \cite{GH18} $G_\Z$ and $G^{mar}$ are determined for all singularities with modality $\leq 2$. It turns out that $G^{mar}=G_\Z^{\BB}=G_\Z$ for almost all of them, namely all except the subseries with $m\,|\, p$ of the eight bimodal series. For these subseries $G^{mar}\subsetneqq G_\Z$, and it is not clear where $G_\Z^{\BB}$ lies in between. \end{remarks}

\section{Monodromy groups and vanishing cycles}\label{s10.4}

Consider the unimodular bilinear lattice $(H_\Z,L,\BB)$ with set of distinguished bases $\BB$ from an isolated hypersurface singularity $f:(\C^{m+1},0)\to(\C,0)$. The distinguished bases induce the even and odd monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ and the sets $\Delta^{(0)}$ and $\Delta^{(1)}$ of even and odd vanishing cycles. These two groups and these two sets have been studied extensively by many people. In the even case the pair $(H_\Z,I^{(0)})$ alone determines $\Gamma^{(0)}$ and $\Delta^{(0)}$ in almost all cases. This is satisfying. The precise results below are due to Ebeling. In the odd case, the situation is more complicated. It was worked on by Wajnryb, Chmutov and Janssen. In the following we report first on the even case and then on the odd case.

The lattice $(H_\Z,I^{(0)})$ with even intersection form is an a priori rather rough invariant. Lattices with even intersection forms are well understood, due to work of Durfee, Kneser, Nikulin, Wall and others. The following theorem of Ebeling was developed by him in several papers, in increasing generality, until its final form in \cite[(5.5) and (2.5)]{Eb84}. The version here is taken from \cite[Theorem 5.9]{Eb01}, as \cite{Eb84} does not cover the characterization of $\Delta^{(0)}$ for some special cases.
\begin{theorem}\label{t10.28} \cite[Theorem 5.9]{Eb01}
Suppose that the singularity $f$ is a hyperbolic singularity of type $T_{pqr}$ with $(p,q,r)\in\{(2,3,7),(2,4,5),(3,3,4)\}$ (so these three triples are allowed, all other triples $(p,q,r)$ with $p\geq q\geq r$ and $\frac{1}{p}+\frac{1}{q}+\frac{1}{r}<1$ are not allowed) or any non-hyperbolic singularity. Then
\begin{eqnarray*}
\Gamma^{(0)}&=&O^{(0),*},\\
\Gamma^{(0)}_u&=&(\ker\tau^{(0)})_u \stackrel{\ref{t5.2}\ (e)}{=} T(\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})\otimes \Rad I^{(0)}),\\
\Gamma^{(0)}_s&=& \ker\oooo{\tau}^{(0)}\cap\ker\oooo{\sigma},\\
{}[O^{(0)}_s:\Gamma^{(0)}_s]&<&\infty,\\
\Delta^{(0)}&=&\{a\in R^{(0)}\,|\, I^{(0)}(a,H_\Z)=\Z\}
\end{eqnarray*}
(recall Lemma \ref{t6.2} (f) for $\oooo{\tau}^{(0)}$ and Remark \ref{t6.3} (iii) for $\oooo{\sigma}$). The exact sequence
\begin{eqnarray*}
\{\id\}\to\Gamma^{(0)}_u\to\Gamma^{(0)}\to\Gamma^{(0)}_s\to\{\id\}
\end{eqnarray*}
splits non-canonically. \end{theorem}

\begin{remarks}\label{t10.29}
(i) The characterizations of $\Gamma^{(0)}$ and $\Delta^{(0)}$ in the theorem show that they are determined by $(H_\Z,I^{(0)})$ alone.

(ii) $\Gamma^{(0)}_u=(\ker\tau^{(0)})_u$ follows from $\Gamma^{(0)}=O^{(0),*}\stackrel{Def.}{=} \ker\tau^{(0)}\cap\ker\sigma$ and $O^{(0),Rad}_u\subset\ker\sigma$. The simple part $\Gamma^{(0)}_s$ has finite index in $O^{(0)}_s$ because $\oooo{H_\Z}^{(0),\sharp}/\oooo{j}^{(0)}(\oooo{H_\Z}^{(0)})$ is a finite abelian group and therefore $\ker\oooo{\tau}^{(0)}$ has finite index in $O^{(0)}_s$.

(iii) We do not know a singularity where $R^{(0)}\supsetneqq \Delta^{(0)}$. Equivalently: we do not know a singularity where a root $a\in R^{(0)}$ with $I^{(0)}(a,H_\Z)\subsetneqq \Z$ exists.

(iv) The first idea of a proof of $\Delta^{(0)}=\{a\in R^{(0)}\,|\, I^{(0)}(a,H_\Z)=\Z\}$ is due to Looijenga. It is given in \cite[Korollar 1 in 4.2]{Br83}. This paper treats the exceptional unimodal singularities.
But it also gives many useful references and general facts.

(v) In the case of a hyperbolic singularity $T_{pqr}$ with $(p,q,r)\notin\{(2,3,7),(2,4,5),(3,3,4)\}$, Gabrielov \cite{Ga74-2} determined $\Gamma^{(0)}$. Then in particular $[O^{(0)}_s:\Gamma^{(0)}_s]=\infty$ (see also \cite[(5.2)]{Eb84}). For example the singularities $T_{2,7,7}$ and $T_{3,3,10}$ have isomorphic pairs $(H_\Z,I^{(0)})$, but the monodromy groups for all hyperbolic singularities are pairwise different \cite{Br83}. So in the case of the hyperbolic singularities, the pair $(H_\Z,I^{(0)})$ does not determine the monodromy group $\Gamma^{(0)}$. See \cite[7.2]{BWY19} for a statement on $\Delta^{(0)}$ for the hyperbolic singularities.

(vi) An example \cite[Example after Theorem 3.3]{Eb83}: The exceptional bimodal singularities $E_{18}$ and $Q_{18}$ have isomorphic pairs $(H_\Z,I^{(0)})$. Both are isomorphic to the orthogonal sum $E_6\perp E_8\perp U\perp U$ where $E_6$ and $E_8$ mean the root lattices and $U$ means the rank 2 hyperbolic lattice, \index{hyperbolic lattice} so with matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ for its even intersection form. Therefore also $\Gamma^{(0)}$ and $\Delta^{(0)}$ are isomorphic for these singularities. But the Seifert form $L$ and the monodromy $M$ are not isomorphic. The monodromy $M$ has order 30 for $E_{18}$ and order 48 for $Q_{18}$. \end{remarks}

Now we come to the odd case. The lattice $(H_\Z,I^{(1)})$ with odd intersection form has a basis $\uuuu{v}$ whose intersection matrix is
\begin{eqnarray*}
I^{(1)}(\uuuu{v}^t,\uuuu{v})=
\begin{pmatrix} 0 &d_1& & & & & & & & \\
-d_1&0 & & & & & & & & \\
& &0 &d_2& & & & & & \\
& &-d_2&0 & & & & & & \\
& & & &\ddots& & & & & \\
& & & & &0 &d_l& & & \\
& & & & &-d_l&0 & & & \\
& & & & & & &0& & \\
& & & & & & & &\ddots& \\
& & & & & & & & &0
\end{pmatrix}
\end{eqnarray*}
(empty places denote 0) where $d_1,...,d_l\in\N$ with $d_l\,|\, d_{l-1}\,|\, ...\,|\, d_1$. This is a linear algebra fact.
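As a cross-check of this normal form, the numbers $d_i$ can be recovered from gcds of $k\times k$ minors, which are invariant under any unimodular base change $A\mapsto U^tAU$. The following Python sketch (our own helper names; the matrix is a toy instance with $(d_1,d_2)=(4,2)$ and a rank 1 radical, not the intersection form of a specific singularity) illustrates this.

```python
# Toy check of the symplectic normal form: the d_i can be read off from
# gcds of k x k minors, which are invariant under unimodular base change.
# Example with (d_1, d_2) = (4, 2) and a rank 1 radical, so n = 5.
from itertools import combinations
from math import gcd

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def minor_gcd(A, k):
    """gcd of all k x k minors of the square integer matrix A."""
    g, n = 0, len(A)
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            g = gcd(g, abs(det([[A[i][j] for j in cols] for i in rows])))
    return g

A = [[ 0, 4, 0, 0, 0],
     [-4, 0, 0, 0, 0],
     [ 0, 0, 0, 2, 0],
     [ 0, 0,-2, 0, 0],
     [ 0, 0, 0, 0, 0]]

# Smallest invariant first: gcd of entries is d_2 = 2, gcd of 2x2 minors
# is d_2^2 = 4; the elementary divisors of A pair up as (d_2, d_2, d_1, d_1).
print(minor_gcd(A, 1), minor_gcd(A, 2))  # 2 4
```

Since the minor gcds do not change under $A\mapsto U^tAU$ with $U$ unimodular, the $d_i$ are independent of the chosen symplectic basis.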
The numbers $l$ and $d_1,...,d_l$ are unique. The last $\rk\Rad I^{(1)}$ vectors of $\uuuu{v}$ are a basis of the radical $\Rad I^{(1)}$ of $I^{(1)}$, so $2l+\rk\Rad I^{(1)}=n$. The basis $\uuuu{v}$ is called a {\it symplectic basis}. \index{symplectic basis} Define the sublattice of rank $n$
\begin{eqnarray*}
H^{(d_1)}_\Z&:=& \{a\in H_\Z\,|\, I^{(1)}(a,H_\Z)\subset d_1\Z\},
\end{eqnarray*}
and the finite abelian quotient group
\begin{eqnarray*}
H^{quot}&:=& H^{(d_1)}_\Z/2d_1H_\Z.
\end{eqnarray*}
Each automorphism in $O^{(1),Rad}$ respects $H^{(d_1)}_\Z$ and $2d_1H_\Z$, so it acts on the quotient $H^{quot}$. The induced finite group of automorphisms of $H^{quot}$ is called $O^{(1),quot}$, the homomorphism is called $p^{quot}:O^{(1),Rad}\to O^{(1),quot}$. Each element of $O^{(1),quot}$ acts trivially on the image of the radical $\Rad I^{(1)}$ in the quotient $H^{quot}$.

The following theorem cites four results of Chmutov in \cite{Ch82}:\\
(i) A relative characterization of $\Gamma^{(1)}$,\\
(ii) a relative characterization of $\Delta^{(1)}$,\\
(iii) that $\Gamma^{(1)}_s$ has finite index in $O^{(1),Rad}_s$, \\
(iv) and a characterization of the subgroup $\ker p^{quot}\subset \Gamma^{(1)}$.

\begin{theorem}\label{t10.30} \cite[Theorem 1, Corollary of Proposition 1, Theorem 2, Proposition 2]{Ch82}

(i) $\Gamma^{(1)}$ is the full preimage in $O^{(1),Rad}$ of $p^{quot}(\Gamma^{(1)})$ in $O^{(1),quot}$. Equivalently:
\begin{eqnarray*}
\ker \bigl(p^{quot}:O^{(1),Rad}\to O^{(1),quot}\bigr)\subset\Gamma^{(1)}.
\end{eqnarray*}
So if one knows the image $p^{quot}(\Gamma^{(1)})$, one knows $\Gamma^{(1)}$.

(ii)
\begin{eqnarray*}
\Delta^{(1)}=\{a\in H_\Z&|& \textup{there exists } b\in \Delta^{(1)} \textup{ with }\\
&& a-b\in 2H_\Z\textup{ and }I^{(1)}(a,H_\Z)=\Z\}.
\end{eqnarray*}
So if one knows the image of $\Delta^{(1)}$ in $H_\Z/2 H_\Z$, one knows $\Delta^{(1)}$.
(iii)
\begin{eqnarray*}
\{g\in O^{(1),Rad}_s\,|\, g\textup{ acts trivially on the quotient }\oooo{H_\Z}^{(1)}/2d_1\oooo{H_\Z}^{(1)}\} \subset \Gamma^{(1)}_s.
\end{eqnarray*}
As the group on the left hand side has finite index in $O^{(1),Rad}_s$, the group $\Gamma^{(1)}_s$ also has finite index in $O^{(1),Rad}_s$.

(iv)
\begin{eqnarray*}
\langle (s^{(1)}_a)^2\,|\, a\in H_\Z\rangle &=& \ker\bigl(p^{quot}:O^{(1),Rad}\to O^{(1),quot}\bigr) \stackrel{(i)}{\subset}\Gamma^{(1)}.
\end{eqnarray*}
\end{theorem}

\begin{remarks}\label{t10.31}
(i) In the case of a curve singularity (so $m=1$) the number of components is $\rk\Rad I^{(1)}+1$. So the curve singularity is irreducible if and only if $I^{(1)}$ is nondegenerate.

(ii) In the case of a curve singularity $d_1=...=d_l=1$.

(iii) Wajnryb \cite{Wa80} proved Theorem \ref{t10.30} for irreducible curve singularities. Chmutov \cite{Ch81} generalized it to all curve singularities. Here $d_1=1$ implies $H^{(d_1)}_\Z=H_\Z$ and $H^{quot}=H_\Z/2H_\Z$.

(iv) In the case of irreducible curve singularities Wajnryb also characterized the image $p^{quot}(\Gamma^{(1)})\subset O^{(1),quot}$ and the image of $\Delta^{(1)}$ in $H_\Z/2H_\Z$. This, too, was generalized by Chmutov, first to all curve singularities \cite{Ch81} and then to all singularities \cite{Ch83}. \end{remarks}

First we discuss the characterization in \cite{Ch83} of the image
$$\Delta^{(1)}_{\F_2}:= \textup{image of }\Delta^{(1)} \textup{ in }H_\Z/2H_\Z$$
of $\Delta^{(1)}$ in $H_\Z/2H_\Z$. The following lemma is elementary. It is essentially formulated in \cite[3.1]{Ch83}, extending an observation of Wajnryb \cite{Wa80}.

\begin{lemma}\label{t10.32}
Let $\www{H}_{\F_2}$ be an $\F_2$-vector space of dimension $\www{n}\in\Z_{\geq 2}$, let $\www{I}^{(1)}$ be an odd bilinear form $\www{I}^{(1)}:\www{H}_{\F_2}\times \www{H}_{\F_2}\to\F_2$ on $\www{H}_{\F_2}$ ({\sf odd} means here only $\www{I}^{(1)}(a,a)=0$ for $a\in\www{H}_{\F_2}$), and let $\uuuu{v}$ be a basis of $\www{H}_{\F_2}$.
(a) (Definition) A {\sf quadratic form} $q:\www{H}_{\F_2}\to\F_2$ is a map with
$$q(x+y)=q(x)+q(y)+\www{I}^{(1)}(x,y)\quad\textup{for all }x,y \in \www{H}_{\F_2}.$$

(b) There is a unique quadratic form $q$ with $q(v_j)=1$ for all $j\in\{1,...,\www{n}\}$. The value $q(x)$ for $x\in\www{H}_{\F_2}$ can be characterized as follows. Write $x=\sum_{j=1}^{\www{n}} a_jv_j$ with $a_j\in\F_2$. Define a graph with vertex set $V(x):=\{j\in\{1,...,\www{n}\}\,|\, a_j=1\}$ and edge set $E(x):=\{(i,j)\in V(x)^2\,|\, i<j,\www{I}^{(1)}(v_i,v_j)=1\}$. Then $q(x)$ is the Euler characteristic of this graph $\mmod 2$, so
$$q(x)\equiv |V(x)|-|E(x)|\ \mmod 2.$$
\end{lemma}

The next theorem collects results of Chmutov and Janssen. The main point is part (b). It characterizes $\Delta^{(1)}_{\F_2}$.

\begin{theorem}\label{t10.33}
Suppose that the singularity $f$ is neither the singularity $A_n$ nor the singularity $D_n$. Choose a distinguished basis $\uuuu{v}\in\BB^{dist}$. Consider the quadratic form $q$ on $H_\Z/2H_\Z$ with $q(v_j)=1$ for all $j\in\{1,...,n\}$ (it exists and is unique by Lemma \ref{t10.32}).

(a) \cite[7.1]{Ch83} The quadratic form $q$ on $H_\Z/2H_\Z$ is independent of the choice of the distinguished basis $\uuuu{v}\in\BB^{dist}$.

(b) \cite[Proposition 1]{Ch83}
$$\Delta^{(1)}_{\F_2}=q^{-1}(1)\setminus(\textup{image of } \Rad I^{(1)}\textup{ in }H_\Z/2H_\Z).$$

(c) \cite[(5.1) and (5.2)]{Ja83} The subset
$$(\Rad I^{(1)})^q:=\{a\in\Rad I^{(1)}\,|\, q(\textup{image of }a \textup{ in }H_\Z/2H_\Z)=0\}$$
of the radical $\Rad I^{(1)}$ is either equal to $\Rad I^{(1)}$ or a subgroup of index 2 in $\Rad I^{(1)}$. In any case it is the set $\{a\in\Rad I^{(1)}\,|\, a+b\in\Delta^{(1)}\}$, where $b\in\Delta^{(1)}$ is an arbitrarily chosen odd vanishing cycle.
(d) \cite[(5.3)]{Ja83}
$$T(\oooo{j}^{(1)}(\oooo{H_\Z}^{(1)})\otimes (\Rad I^{(1)})^q) \subset \Gamma^{(1)}_u.$$
\end{theorem}

\begin{remarks}\label{t10.34}
The singularities $A_n$ and $D_n$ play a special role in the odd case, in \cite{Ch83} and in \cite{Ja83}\cite{Ja85}. They are the only singularities which have no deformation to the singularity $E_6$. The groups $\Gamma^{(1)}$ and the sets $\Delta^{(1)}$ for them had been determined by Varchenko. Chmutov offers an account of this at the end of \cite{Ch83}. \end{remarks}

It remains to characterize the finite group $p^{quot}(\Gamma^{(1)})$. The following holds for curve singularities (irreducible curve singularities without $A_n$ and $D_n$ \cite[Theorem 4 (c)]{Wa80}, arbitrary curve singularities \cite[Theorems 1 and 3]{Ch83}).

\begin{theorem}\label{t10.35}
Suppose that $f$ is a curve singularity. Recall that then $d_1=1$ and $H^{quot}=H_\Z/2H_\Z$. The following three properties of an element $g\in O^{(1),quot}$ are equivalent:
\begin{list}{}{}
\item[(i)] $g\in p^{quot}(\Gamma^{(1)})$.
\item[(ii)] $g$ respects the quadratic form $q$ in Theorem \ref{t10.33}.
\item[(iii)] $g$ maps $\Delta^{(1)}_{\F_2}$ to itself.
\end{list}
\end{theorem}

\begin{remarks}\label{t10.36}
(i) So such an automorphism $g$ maps each of the three sets in the disjoint union
$$H_\Z/2H_\Z=(\textup{image of }\Rad I^{(1)})\,\dot\cup\, \Delta^{(1)}_{\F_2}\,\dot\cup\,(\textup{the complement})$$
to itself, and $g$ is the identity on the first set.

(ii) Chmutov found a generalization of this to arbitrary singularities. In Section 5 in \cite{Ch83} an equivalence relation on the elements of $H^{quot}$ of an arbitrary singularity is defined. Theorem 1 in \cite{Ch83} says that for an arbitrary singularity an automorphism $g\in O^{(1),quot}$ is in $p^{quot}(\Gamma^{(1)})$ if and only if it respects this equivalence relation. For details see \cite[Section 5]{Ch83}. \end{remarks}

Janssen \cite{Ja83}\cite{Ja85} recovered a good part of Chmutov's results.
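The graph formula for $q$ in Lemma \ref{t10.32} (b) is easy to check by machine. A minimal Python sketch (the chain-shaped form below is a made-up toy example, not the intersection form of a specific singularity) computes $q$ from the support graph and verifies the defining quadratic form property:

```python
# Check of the graph formula for the quadratic form q: q(x) is the Euler
# characteristic mod 2 of the graph on the support of x.  Toy odd form
# over F_2: a chain v_1 - v_2 - v_3.
from itertools import product

n = 3
I1 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]        # bilinear form mod 2: symmetric with zero diagonal

def bil(x, y):
    return sum(x[i] * I1[i][j] * y[j] for i in range(n) for j in range(n)) % 2

def q(x):
    V = [i for i in range(n) if x[i]]                         # support of x
    E = [(i, j) for i in V for j in V if i < j and I1[i][j]]  # edges inside V
    return (len(V) - len(E)) % 2       # Euler characteristic mod 2

# q is 1 on every basis vector, and q(x+y) = q(x) + q(y) + I(x,y) holds:
for x in product([0, 1], repeat=n):
    for y in product([0, 1], repeat=n):
        s = [(a + b) % 2 for a, b in zip(x, y)]
        assert q(s) == (q(x) + q(y) + bil(x, y)) % 2
print([q([int(i == j) for i in range(n)]) for j in range(n)])  # [1, 1, 1]
```

The exhaustive loop over all $2^3\times 2^3$ pairs is exactly the polarization identity from Lemma \ref{t10.32} (a); for larger $n$ one would test random pairs instead.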
Whereas Chmutov worked mainly within the singularity cases, Janssen took a more abstract point of view. He started with the notion of an {\it odd vanishing lattice}. He classified odd vanishing lattices over $\F_2$ and over $\Z$.

\begin{definition}\label{t10.37}
Let $R=\Z$ or $R=\F_2$. An {\sf odd vanishing lattice} \index{odd vanishing lattice}\index{vanishing lattice} over $R$ is a triple $(V,I_V,\Delta_V)$ with the following properties. $V$ is a free $R$-module of some rank $n_V\in\Z_{\geq 2}$. $I_V:V\times V\to R$ is an odd bilinear form. $\Delta_V$ is a subset of $V$ with three specific properties. Define the group $\Gamma_{\Delta_V}:= \langle s^{[1]}_a\,|\, a\in\Delta_V\rangle$ where $s^{[1]}_a\in O(V,I_V)$ is as usual the transvection with $s^{[1]}_a(b):=b-I_V(a,b)a$ for $b\in V$. The three specific properties:
\begin{list}{}{}
\item[(i)] $\Delta_V$ is a single $\Gamma_{\Delta_V}$-orbit.
\item[(ii)] $\Delta_V$ generates $V$ as $R$-module.
\item[(iii)] There exist $a_1$ and $a_2\in\Delta_V$ with $I_V(a_1,a_2)=1$.
\end{list}
\end{definition}

\begin{remarks}\label{t10.38}
(i) It is true that for any singularity with $n\geq 2$ the triple $(H_\Z,I^{(1)},\Delta^{(1)})$ is an odd vanishing lattice over $\Z$.

(ii) Janssen classified all odd vanishing lattices over $\F_2$ \cite[(4.8) Theorem]{Ja83} and over $\Z$ \cite[(7.8) Theorem]{Ja85}. There are surprisingly few families, only seven, for $R=\F_2$ as well as for $R=\Z$.

(iii) In the case of $R=\Z$, in each of the seven families, an odd vanishing lattice $(V,I_V,\Delta_V)$ is determined up to isomorphism by the invariants $l,d_1,...,d_l$ and $\rk\Rad I_V$ of the pair $(V,I_V)$, and additionally in two of the seven families by one more number.
(iv) Except for the $A_n$ and $D_n$ singularities, for any singularity $f$ the triple $(H_\Z,I^{(1)},\Delta^{(1)})$ fits into one of the three families of odd vanishing lattices with symbols $O^\sharp_1(d_1,...,d_l,\rho)$, $O^\sharp_0(d_1,...,d_l,\rho)$ and $O^\sharp(d_1,...,d_l,\rho,k_0)$, where $\rho=\rk \Rad I^{(1)}$ and $k_0$ is the additional number \cite[(6.6) Theorem]{Ja83}\cite{Ja85}. (v) But even if one knows that for a singularity the triple $(H_\Z,I^{(1)},\Delta^{(1)})$ is for example of some type $O^\sharp_1(d_1,...,d_l,\rho)$ where all invariants are determined by the pair $(H_\Z,I^{(1)})$, this does not mean that the pair $(H_\Z,I^{(1)})$ determines $\Delta^{(1)}$ uniquely. There might be an automorphism of the pair $(H_\Z,I^{(1)})$ which maps $\Delta^{(1)}$ to a different set. Then the pair $(H_\Z,I^{(1)})$ determines the triple $(H_\Z,I^{(1)},\Delta^{(1)})$ only up to isomorphism. (vi) The singularities $A_6$ and $E_6$ both exist as irreducible curve singularities, so the pairs $(H_\Z,I^{(1)})$ have the same invariants $d_1=d_2=d_3=1$ and $\rk\Rad I^{(1)}=0$, so they are isomorphic. But the sets $\Delta^{(1)}$, the groups $\Gamma^{(1)}$ and the quadratic forms $q$ on $H_\Z/2H_\Z$ are very different. \end{remarks} \begin{example}\label{t10.39} In \cite[5.2 and 5.4]{DM94} several examples of pairs $(f_1,f_2)$ of curve singularities with the following property are given. The pairs $(H_\Z,L)$ of Milnor lattice and normalized Seifert form are isomorphic for $f_1$ and $f_2$, but $f_1$ and $f_2$ have distinct topological types. In all examples $f_1$ and $f_2$ are both reducible, with two components each (so $\rk\Rad I^{(1)}=1$). This leads naturally to the following questions for each of these pairs of curve singularities. We do not know their answers. \begin{list}{}{} \item[(A)] Are the triples $(H_\Z,I^{(1)},\Delta^{(1)})$ isomorphic? \item[(B)] Are the triples $(H_\Z,L,\Delta^{(1)})$ isomorphic? \item[(C)] Are the triples $(H_\Z,L,\BB^{dist})$ isomorphic?
\end{list} \end{example} \section[Moduli spaces]{Moduli spaces for the simple and the simple elliptic singularities}\label{s10.5} The only cases where $C_n^{\uuuu{e}/\{\pm 1\}^n}$ and $C_n^{S/\{\pm 1\}^n}$ had been studied in detail up to now are the cases from the simple singularities and the simple elliptic singularities, the cases from the simple singularities by Looijenga \cite{Lo74} and (implicitly) Deligne \cite{De74}, all cases by Hertling and Roucairol \cite{HR21}. Results on these cases are recalled and explained here. First we discuss the simple singularities, then the simple elliptic singularities. The simple singularities in the normal forms in Theorem \ref{t10.6} are quasihomogeneous polynomials. They have universal unfoldings which are also quasihomogeneous polynomials. Here the good representatives and the base spaces $\MM$ can be chosen as algebraic and global objects. We reproduce the universal unfoldings which are given in \cite{Lo74}: \index{unfolding of a simple singularity} \begin{eqnarray}\label{10.27} &&F^{alg}:\C^{m+1}\times \MM^{alg}\to\C\quad\textup{with}\quad \MM^{alg}=\C^n,\\ &&F^{alg}(z_0,...,z_m,t_1,...,t_n)=F^{alg}(z,t)=F^{alg}_t(z) =f(z)+\sum_{j=1}^n t_jm_j \nonumber \end{eqnarray} with $f=F^{alg}_0$ and $m_1,...,m_n$ the monomials in the tables \eqref{10.28} and \eqref{10.29}, \begin{eqnarray}\label{10.28} \begin{array}{ccccccc} \textup{name} & m_1 & m_2 & m_3 & m_4 & ... & m_n \\ \hline A_n & 1 & z_0 & z_0^2 & z_0^3 & ... & z_0^{n-1} \\ D_n & 1 & z_1 & z_0 & z_0^2 & ... & z_0^{n-2} \end{array} \end{eqnarray} \begin{eqnarray}\label{10.29} \begin{array}{ccccccccc} \textup{name} & m_1 & m_2 & m_3 & m_4 & m_5 & m_6 & m_7 & m_8 \\ \hline E_6 & 1 & z_0 & z_1 & z_0^2 & z_0z_1 & z_0^2z_1 & & \\ E_7 & 1 & z_0 & z_1 & z_0^2 & z_0z_1 & z_0^3 & z_0^4 & \\ E_8 & 1 & z_0 & z_1 & z_0^2 & z_0z_1 & z_0^3 & z_0^2z_1 & z_0^3z_1 \end{array} \end{eqnarray} One checks easily that the monomials form a basis of the Jacobi algebra $\OO_{\C^{m+1},0}/J_f$. 
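For instance, in the case $A_n$, assuming the standard normal form $f=z_0^{n+1}+z_1^2+...+z_m^2$, this check reads as follows: the Jacobi ideal is
\begin{eqnarray*}
J_f=\bigl((n+1)z_0^n,2z_1,...,2z_m\bigr)=(z_0^n,z_1,...,z_m),
\end{eqnarray*}
so the classes of $1,z_0,z_0^2,...,z_0^{n-1}$ form a basis of $\OO_{\C^{m+1},0}/J_f$, and these are precisely the monomials $m_1,...,m_n$ for $A_n$ in table \eqref{10.28}.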
Therefore the unfolding $F^{alg}$ is indeed universal. The following tables list the weights $\deg_{\bf w}t_j$ of the unfolding parameters $t_1,...,t_n$ which make $F$ into a quasihomogeneous polynomial of weighted degree 1. The weights are all positive. The tables also list the Coxeter number $N_{Coxeter}$ of the corresponding root system. \begin{eqnarray*} \begin{array}{l|l|l|l|l|l|l|l|l|l|l|l} & N_{Coxeter} & z_0 & z_1 & t_1 & t_2 & t_3 & t_4 & ... & t_n \\ A_n & n+1 & \frac{1}{n+1} & & 1 & \frac{n}{n+1} & \frac{n-1}{n+1} & \frac{n-2}{n+1} & ... & \frac{2}{n+1} \\ D_n & 2(n-1) & \frac{1}{n-1} & \frac{n-2}{2(n-1)} & 1 & \frac{n}{2(n-1)} & \frac{n-2}{n-1} & \frac{n-3}{n-1} & ... & \frac{1}{n-1} \end{array} \end{eqnarray*} \begin{eqnarray*} \begin{array}{l|l|l|l|l|l|l|l|l|l|l|l} & N_{Coxeter} & z_0 & z_1 & t_1 & t_2 & t_3 & t_4 & t_5 & t_6 & t_7 & t_8 \\ E_6 & 12 & \frac{1}{4} & \frac{1}{3} & 1 & \frac{3}{4} & \frac{2}{3} & \frac{1}{2} & \frac{5}{12} & \frac{1}{6} & & \\[1mm] E_7 & 18 & \frac{2}{9} & \frac{1}{3} & 1 & \frac{7}{9} & \frac{2}{3} & \frac{5}{9} & \frac{4}{9} & \frac{1}{3} & \frac{1}{9} & \\[1mm] E_8 & 30 & \frac{1}{5} & \frac{1}{3} & 1 & \frac{4}{5} & \frac{2}{3} & \frac{3}{5} & \frac{7}{15} & \frac{2}{5} & \frac{4}{15} & \frac{1}{15} \end{array} \end{eqnarray*} Because the weights are positive, for each $t$ the sum of the Milnor numbers of the singular points of $F_t$ equals $n$. It follows with little work that $\MM$ is an F-manifold with $$\textup{unit field }e=\paa_1,\quad \textup{Euler field } E=\sum_{j=1}^n\deg_{\bf w}(t_j)t_j\paa_j,$$ and polynomial multiplication \cite[Theorem 4.3]{HR21}. The positive weights and general properties of the Lyashko-Looijenga map lead easily to the following result, which was found independently by Looijenga and Lyashko (but Lyashko published his version only much later).
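These weights can be read off directly from the quasihomogeneity conditions; we illustrate this for $A_n$, again assuming the standard normal form $f=z_0^{n+1}+z_1^2+...+z_m^2$. The condition $\deg_{\bf w}f=1$ forces $\deg_{\bf w}z_0=\frac{1}{n+1}$, and the condition $\deg_{\bf w}(t_jm_j)=1$ with $m_j=z_0^{j-1}$ forces
\begin{eqnarray*}
\deg_{\bf w}t_j=1-\frac{j-1}{n+1}=\frac{n+2-j}{n+1}
\quad\textup{for }j\in\{1,...,n\},
\end{eqnarray*}
which reproduces the row for $A_n$ in the first table. These values also allow a consistency check of the degree formula in Theorem \ref{t10.40} below against table \eqref{10.33}: for $A_n$,
\begin{eqnarray*}
\prod_{j=1}^n\deg_{\bf w}t_j=\frac{(n+1)!}{(n+1)^n},
\quad\textup{so}\quad
\deg\LL^{alg}=\frac{n!\,(n+1)^n}{(n+1)!}=(n+1)^{n-1}
=\frac{1}{2}\cdot 2(n+1)\cdot(n+1)^{n-2}.
\end{eqnarray*}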
\begin{theorem}\label{t10.40} \cite{Lo74}\cite{Ly79} The Lyashko-Looijenga map \begin{eqnarray}\label{10.30} \LL^{alg}:\MM^{alg} \to \C[x]_n\cong\C^n \end{eqnarray} is a branched covering of degree \begin{eqnarray}\label{10.31} \deg \LL^{alg}=\frac{n!}{\prod_{j=1}^n\deg_{\bf w}t_j}. \end{eqnarray} It is branched along the caustic $\KK_3^{alg}\subset\MM^{alg}$ (at generic points of order 3) and along the Maxwell stratum $\KK_2^{alg}\subset\MM^{alg}$ (at generic points of order 2). The restriction \begin{eqnarray}\label{10.32} \LL^{alg}:\MM^{alg}-(\KK_3^{alg}\cup \KK_2^{alg})\to \C[x]_n^{reg}\cong C_n^{conf} \end{eqnarray} is a covering. \end{theorem} \begin{remarks}\label{t10.41} (i) It is known that $|\Gamma^{(0)}|=(N_{Coxeter})^n\prod_{j=1}^n\deg_{\bf w}t_j$ in the case of an ADE root lattice. Therefore $$\deg \LL^{alg}= \frac{n!\cdot (N_{Coxeter})^n} {|\Gamma^{(0)}|}.$$ (ii) The following table lists the numbers $|G_\Z|$ and $|\SSS^{dist}/\{\pm 1\}^n|$ for the simple singularities. Theorem \ref{t10.42} and \eqref{10.39} will tell \begin{eqnarray*} \deg\LL^{alg}=|\BB^{dist}/\{\pm 1\}^n| =\frac{1}{2}\cdot |G_\Z|\cdot |\SSS^{dist}/\{\pm 1\}^n|. \end{eqnarray*} \begin{eqnarray}\label{10.33} \begin{array}{lll} & |G_\Z| & |\SSS^{dist}/\{\pm 1\}^n| \\ \hline A_n & 2(n+1) & (n+1)^{n-2}\\ D_4 & 36 & 9\\ D_n,n\geq 5 & 4(n-1) & (n-1)^{n-1}\\ E_6 & 24 & 2^7\cdot 3^3 = 3456\\ E_7 & 18 & 2\cdot 3^{10} = 118098\\ E_8 & 30 & 2\cdot 3^4\cdot 5^6 = 2531250 \end{array} \end{eqnarray} \end{remarks} Recall from Definition \ref{t8.5} the open convex polyhedron $F_n\subset C_n^{pure}\subset\C^n$ and its boundary $W_n$, a union of walls. Also recall $$C_n^{conf}\subset \C^n =\pr_n^{p,c}(F_n)\,\dot\cup\, \pr_n^{p,c}(W_n).$$ The union of walls $(\LL^{alg})^{-1}(\pr_n^{p,c}(W_n))$ upstairs in $\MM^{alg}$ contains $\KK_3^{alg}\cup \KK_2^{alg}$. The restriction $$\LL^{alg}: (\LL^{alg})^{-1}(\pr_n^{p,c}(F_n))\to F_n$$ is an even covering. 
The components of the preimages are called {\it Stokes regions}. Each of them is mapped isomorphically to $F_n$. The number of Stokes regions is $\deg \LL^{alg}$. We obtain a map \index{$LD$} \begin{eqnarray*} LD:\{\textup{Stokes regions in }\MM^{alg}\}\to \BB^{dist}/\{\pm 1\}^n \end{eqnarray*} in the following way. For $t$ in one Stokes region choose a standard system $(\uuuu{\gamma};\id)$ of paths and obtain as in Theorem \ref{t10.8} from $F_t$ a distinguished basis up to signs. This works because $\MM=\C^n$ is simply connected and therefore there is a canonical isomorphism $$H_m(F_t^{-1}(\eta),\Z)\cong H_m(X_\eta,\Z)\cong H_\Z$$ for (depending on $t$) sufficiently large $\eta>0$. It is independent of the choice of $t$ within a fixed Stokes region. The distinguished basis up to signs is $LD(\textup{Stokes region})\in\BB^{dist}/\{\pm 1\}^n$. Choose one distinguished basis $\uuuu{e}\in\BB^{dist}$ for reference. The map $LD$ is surjective because the restriction of $\LL^{alg}$ in \eqref{10.32} is a covering and $\BB^{dist}/\{\pm 1\}^n$ is a single $\Br_n$ orbit. Even more, the covering factorizes through the covering $\pr_n^{e,c}:C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{conf}$, so it is a composition of two coverings, \begin{eqnarray}\label{10.34} \MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg}) \stackrel{\pr_n^{alg,e}}{\longrightarrow} C_n^{\uuuu{e}/\{\pm 1\}^n} \stackrel{\pr_n^{e,c}}{\longrightarrow} C_n^{conf}. \end{eqnarray} This follows from the construction of the spaces and the compatibility of the braid group actions. The main result in this section for the simple singularities is the following. \begin{theorem}\label{t10.42}\cite{Lo74}\cite{De74} \cite[Theorem 7.1]{HR21} Consider a simple singularity. The map \begin{eqnarray}\label{10.35} LD:\{\textup{Stokes regions in }\MM^{alg}\}\to \BB^{dist}/\{\pm 1\}^n \end{eqnarray} is a bijection. 
Equivalent: The covering \begin{eqnarray}\label{10.36} \pr_n^{alg,e}: \MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg}) \to C_n^{\uuuu{e}/\{\pm 1\}^n} \end{eqnarray} is an isomorphism. \end{theorem} Looijenga \cite{Lo74} asked whether the map $LD$ is a bijection and proved it in the case of the $A_n$ singularity. Deligne \cite{De74} answered the question for all simple singularities positively in the following way. In a letter to Looijenga he calculated for each simple singularity the number $|\BB^{dist}/\{\pm 1\}^n|$ and found equality with $\deg \LL^{alg}$. The map $LD$ is a surjection between finite sets of the same size, so it is a bijection. This is one proof of Theorem \ref{t10.42}. \cite{HR21} gives a different proof, which generalizes to the simple elliptic singularities where $LD$ is a map between infinite sets. Here we sketch part of this proof. We need a general result about symmetries of a singularity \index{symmetries of a singularity} from \cite[13.2]{He02}. Let $f:(\C^{m+1},0)\to(\C,0)$ be an isolated hypersurface singularity with Milnor number $n$. Let $F:(\C^{m+1}\times \MM,0)\to(\C,0)$ be a universal unfolding with base space a germ $(\MM,0)=(\C^n,0)$. Denote by \begin{eqnarray*} \RR^f:=\{\varphi:(\C^{m+1},0)\to(\C^{m+1},0)&|& \varphi\textup{ is a biholomorphic}\\ &&\textup{map germ with }f=f\circ\varphi\} \end{eqnarray*} the (very large) group of germs of coordinate changes which leave $f$ invariant. Denote by $$\Aut_\MM:=\Aut((\MM,0),\circ,e,E)$$ the group of automorphisms of $(\MM,0)$ as a germ of an F-manifold with Euler field. It is a finite group \cite[Theorem 4.14]{He02}. Consider a good representative $F:\XX\to\Delta_\eta$ of the universal unfolding, whose base space $\MM\subset\C^n$ is a small ball in $\C^n$. The Lyashko-Looijenga map $\LL:\MM\to\C[x]_n$ is holomorphic. The restriction $\LL:\MM-(\KK_3\cup\KK_2)\to C_n^{conf}$ is locally biholomorphic.
The components of the even smaller restriction \begin{eqnarray}\label{10.37} \LL:\LL^{-1}(\pr^{p,c}_n(F_n))\to F_n \end{eqnarray} are also called {\it Stokes regions}, although here the restriction $\LL:(\textup{one Stokes region})\to F_n$ is only injective, in general not an isomorphism. Nevertheless, as in the case of the simple singularities, we obtain a natural map \begin{eqnarray}\label{10.38} LD:\{\textup{Stokes regions in }\MM\}\to\BB^{dist}/\{\pm 1\}^n \end{eqnarray} by the construction in Theorem \ref{t10.8} applied to $F_t$ for some $t$ in the Stokes region together with a standard system of paths. \begin{theorem}\label{t10.43}\cite[13.2]{He02} (a) Each $\varphi\in\RR^f$ can be lifted to an automorphism of the unfolding. It induces an automorphism $(\varphi)_{hom}\in G_\Z$ of $H_\Z$ and an automorphism $(\varphi)_{\MM}\in \Aut_\MM$. The group homomorphism $()_\MM:\RR^f\to\Aut_\MM$ is surjective. The group homomorphism $()_{hom}:\RR^f\to G_\Z$ has finite image. The image is a subgroup of $G_\Z^{\BB}$. (b) \begin{eqnarray*} \begin{array}{cccc} -\id\notin (\RR^f)_{hom}&\textup{ and }& \ker()_\MM=\ker()_{hom}&\textup{ if }\mult(f)\geq 3.\\ -\id\in (\RR^f)_{hom}&\textup{ and }& \ker()_\MM=\ker()_{hom}\times\{\pm \id\}& \textup{ if }\mult(f)=2. \end{array} \end{eqnarray*} (c) One can choose a finite subgroup $\RR^{finite}\subset\RR^f$ which lifts to a group of automorphisms of a (sufficiently) good representative $F:\XX\to\Delta_\eta$ of the universal unfolding and such that $()_{hom}:\RR^{finite}\to G_\Z$ is injective with image $(\RR^f)_{hom}$. Then $(\RR^{finite})_\MM=\Aut_\MM$. This group acts on the set of Stokes regions of $\MM$.
The map $LD$ is equivariant, that is, for $\varphi\in\RR^{finite}$ the following diagram commutes, \begin{eqnarray*} \begin{CD} \{\textup{Stokes regions in }\MM\} @>{LD}>> \BB^{dist}/\{\pm 1\}^n\\ @VV{(\varphi)_\MM}V @VV{(\varphi)_{hom}}V \\ \{\textup{Stokes regions in }\MM\} @>{LD}>> \BB^{dist}/\{\pm 1\}^n\\ \end{CD} \end{eqnarray*} \end{theorem} The last statement in part (a), $(\RR^f)_{hom}\subset G_\Z^{\BB}$, and the last statement in part (c) that the map $LD$ is equivariant, are not formulated in \cite[13.2]{He02}. But they follow easily from the fact that any automorphism $\varphi\in \RR^{finite}$ lifts to an automorphism of the good representative of the universal unfolding. We return to the simple singularities. Recall that the group $G_\Z$ is \begin{eqnarray*} G_\Z&=&\{\pm M^l\,|\, l\in\Z\}\times U\\ \textup{with}\quad U&\cong& \left\{\begin{array}{ll} \{\id\}&\textup{ in the cases }A_n,D_{2n+1},E_6,E_7,E_8,\\ S_2&\textup{ in the cases }D_{2n}\textup{ with }n\geq 3,\\ S_3&\textup{ in the case }D_4. \end{array}\right. \end{eqnarray*} In \cite[Ch. 8]{He11} as well as in \cite[5.1]{HR21} it is proved that the map $()_{hom}:\RR^{finite}\to G_\Z$ is an isomorphism if $\mult(f)=2$ and that its image maps isomorphically to $G_\Z/\{\pm \id\}$ if $\mult(f)=3$. As a conclusion from this discussion of symmetries, we obtain the following lemma for the simple singularities. \begin{lemma}\label{t10.44} The elements $(\varphi)_\MM$ for $\varphi\in\RR^{finite}$ form a group of deck transformations for the covering $$\pr_n^{alg,S}:\MM^{alg}-(\KK_3^{alg}\cup \KK_2^{alg}) \to C_n^{S/\{\pm 1\}^n}$$ which is isomorphic to $G_\Z/\{\pm \id\}$. They are the restrictions to $\MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg})$ of the elements of $\Aut_\MM$. Here $G_\Z=G_\Z^{\BB}$ and $\Aut_\MM\cong G_\Z/\{\pm\id\}$.
\end{lemma} On the other hand, recall that the covering $\pr_n^{e,S}:C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{S/\{\pm 1\}^n}$ is normal with group of deck transformations \begin{eqnarray}\label{10.39} \frac{(\Br_n)_{S/\{\pm 1\}^n}}{(\Br_n)_{\uuuu{e}/\{\pm 1\}^n}} \cong \frac{G_\Z^{\BB}}{Z((\{\pm 1\}^n)_S)} \stackrel{\textup{here}}{=} \frac{G_\Z}{\{\pm \id\}}. \end{eqnarray} In the composition of coverings \begin{eqnarray*} \MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg}) \stackrel{\pr_n^{alg,e}}{\longrightarrow} C_n^{\uuuu{e}/\{\pm 1\}^n} \stackrel{\pr_n^{e,S}}{\longrightarrow} C_n^{S/\{\pm 1\}^n} \end{eqnarray*} the second one is normal, and its group of deck transformations lifts to a group of deck transformations of the composite covering $\pr_n^{alg,S}=\pr_n^{e,S}\circ \pr_n^{alg,e}$. Because of the next lemma, the covering $\pr_n^{alg,e}$ is an isomorphism. \begin{lemma}\label{t10.45} The covering $\pr_n^{alg,S}$ is normal with group of deck transformations the group $\Aut_\MM$. \end{lemma} The proof uses the following idea, which was first used by Jaworski \cite[Proposition 2]{Ja88} in the case of the Lyashko-Looijenga map for the simple elliptic singularities. Consider two points $t^{(1)}$ and $t^{(2)}\in\MM^{alg}$ with the same image $S/\{\pm 1\}^n=\pr_n^{alg,S}(t^{(1)})=\pr_n^{alg,S}(t^{(2)}) \in \SSS^{dist}/\{\pm 1\}^n$. Consider in $C_n^{conf}$ a path from $\pr_n^{alg,c}(t^{(1)})=\pr_n^{alg,c}(t^{(2)})$ to a generic point of $D_n^{conf}$. Then the lifts of this path to $\MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg})$ which start at $t^{(1)}$ or $t^{(2)}$ tend either both to $\KK_3^{alg}$ (if the relevant entry of the relevant matrix in the orbit of $S$ is $\pm 1$) or both to $\KK_2^{alg}$ (if the relevant entry of the relevant matrix in the orbit of $S$ is $0$). Therefore the covering $\pr_n^{alg,c}$ looks the same from $t^{(1)}$ and $t^{(2)}$, so there is a deck transformation which maps $t^{(1)}$ to $t^{(2)}$.
It lifts to a deck transformation of the covering $\pr_n^{alg,S}$. It extends to an automorphism of $\MM$ as F-manifold with Euler field. Now we turn to the simple elliptic singularities. The ideas above go through and lead to Theorem \ref{t10.47}, which is analogous to Theorem \ref{t10.42}. A feature common to the simple and the simple elliptic singularities, but not shared by the other singularities, is that they have global algebraic unfoldings. The 1-parameter families of simple elliptic singularities in the Legendre normal forms in Theorem \ref{t10.6} with parameter $\lambda\in\C-\{0,1\}$ are quasihomogeneous polynomials except for $\lambda$, which has weight $\deg_{\bf w}(\lambda)=0$. They have global unfoldings which are also quasihomogeneous polynomials except for $\lambda$. \index{unfolding of a simple elliptic singularity} \begin{eqnarray} &&F^{alg}:\C^{m+1}\times \MM^{alg}\to\C \quad\textup{with}\quad \MM^{alg}=\C^{n-1}\times(\C-\{0,1\}),\nonumber\\ &&F^{alg}(z_0,...,z_m,t_1,...,t_{n-1},\lambda) =F^{alg}(z,t',\lambda)\nonumber\\ &&=F^{alg}_{t',\lambda}(z) =f_{\lambda}(z)+\sum_{j=1}^{n-1} t_jm_j \label{10.40} \end{eqnarray} with $f_{\lambda}=F^{alg}_{0,\lambda}$ and $m_1,...,m_{n-1}$ the monomials in table \eqref{10.41}, \begin{eqnarray}\label{10.41} \begin{array}{cccccccccc} \textup{name} & m_1 & m_2 & m_3 & m_4 & m_5 & m_6 & m_7 & m_8 & m_9 \\ \hline \www E_6 & 1 & z_0 & z_1 & z_2 & z_0^2 & z_0z_1 & z_1z_2 & & \\ \www E_7 & 1 & z_0 & z_1 & z_0^2 & z_0z_1 & z_1^2 & z_0^2z_1 & z_0z_1^2 & \\ \www E_8 & 1 & z_0 & z_0^2 & z_1 & z_0^3 & z_0z_1 & z_0^2z_1 & z_1^2 & z_0z_1^2 \end{array} \end{eqnarray} Let $\lambda:\H\to\C-\{0,1\},t_n\mapsto \lambda(t_n)$, be the standard universal covering.
For each of the three Legendre families of simple elliptic singularities, we will also consider the global family of functions \begin{eqnarray}\label{10.42} &&F^{mar}:\C^{m+1}\times \MM^{mar}\to\C, (z,t) \mapsto F^{alg}(z,t',\lambda(t_n))\\ &&\textup{where}\quad \MM^{mar}=\C^{n-1}\times\H.\nonumber \end{eqnarray} The $\mu$-constant stratum within $\MM^{alg}$ is the 1-dimensional set $\MM^{alg}_\mu=\{0\}\times(\C-\{0,1\})\subset\MM^{alg}$. Near any point $(0,\lambda)\in \MM^{alg}_\mu$, the global unfolding $F^{alg}$ is a universal unfolding of $f_\lambda=F_{0,\lambda}^{alg}$ \cite[Lemma 4.1]{HR21}. The weights $\deg_{\bf w}t_j$ for $j\in\{1,...,n-1\}$ are all positive. The next table lists them and a rational number which will appear in the degree of a Lyashko-Looijenga map. \begin{eqnarray*} \begin{array}{l|l|l|l|l|l|l|l|l|l|l|l|l|l} & z_0 & z_1 & z_2 & t_1 & t_2 & t_3 & t_4 & t_5 & t_6 & t_7 & t_8 & t_9 & \frac{1}{2}\sum_{j=2}^{n-1}\frac{1}{\deg_{\bf w}t_j}\\ \www E_6 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 1 & \frac{2}{3} & \frac{2}{3} & \frac{2}{3} & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & & & \frac{27}{4} \\ \www E_7 & \frac{1}{4} & \frac{1}{4} & & 1 & \frac{3}{4} & \frac{3}{4} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & & \frac{25}{3} \\ \www E_8 & \frac{1}{6} & \frac{1}{3} & & 1 & \frac{5}{6} & \frac{2}{3} & \frac{2}{3} & \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6} & \frac{101}{10} \end{array} \end{eqnarray*} Because the weights are positive, for each $(t',\lambda)$ the sum of the Milnor numbers of the singular points of $F^{alg}_{t',\lambda}$ equals $n$. It follows with little work that $\MM^{alg}$ is an F-manifold with $$\textup{unit field }e=\paa_1,\quad \textup{Euler field }E=\sum_{j=1}^{n-1}\deg_{\bf w}(t_j)t_j\paa_j$$ and polynomial multiplication \cite[Theorem 4.3]{HR21} and also that $\MM^{mar}$ is an F-manifold with Euler field.
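The entries of the last column can be verified directly from the listed weights. For example, for $\www E_8$ one has $n=10$, and
\begin{eqnarray*}
\frac{1}{2}\sum_{j=2}^{9}\frac{1}{\deg_{\bf w}t_j}
=\frac{1}{2}\Bigl(\frac{6}{5}+\frac{3}{2}+\frac{3}{2}+2+2+3+3+6\Bigr)
=\frac{1}{2}\cdot\frac{101}{5}=\frac{101}{10},
\end{eqnarray*}
and similarly for $\www E_6$ with $n=8$,
\begin{eqnarray*}
\frac{1}{2}\sum_{j=2}^{7}\frac{1}{\deg_{\bf w}t_j}
=\frac{1}{2}\Bigl(3\cdot\frac{3}{2}+3\cdot 3\Bigr)=\frac{27}{4}.
\end{eqnarray*}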
The Lyashko-Looijenga map for $\MM^{alg}$ was first studied by Jaworski \cite{Ja86}\cite{Ja88}. He found that the restriction of $\LL^{alg}:\MM^{alg}\to\C[x]_n$ to $\LL^{alg}:\MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg})\to C_n^{conf}$ is a finite covering. But contrary to Theorem \ref{t10.40} for the simple singularities, this is not at all trivial. The map $\LL^{alg}$ is holomorphic, but not everywhere finite. The 1-dimensional $\mu$-constant stratum $\MM^{alg}_\mu$ is mapped to $0\in\C[x]_n$. His methods did not allow Jaworski to calculate the degree $\deg\LL^{alg}$. This was done with great effort in \cite[sections 5, 8, 9, 10]{HR21} by gluing into $\MM^{alg}$ suitable fibers over $\lambda\in\{0,1,\infty\}$ and extending $\LL^{alg}$ to these fibers. A simpler way to calculate $\deg\LL^{alg}$ was recently found by Takahashi and Zhang \cite{TZ23}. \begin{theorem}\label{t10.46} (a) \cite{Ja86}\cite{Ja88} The restriction $$\LL^{alg}:\MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg})\to C_n^{conf}$$ of the Lyashko-Looijenga map $\LL^{alg}:\MM^{alg}\to \C[x]_n$ is a finite covering. (b) \cite[Theorem 6.3]{HR21}\cite{TZ23} Its degree is \begin{eqnarray}\label{10.43} \deg\LL^{alg} =\frac{n!\cdot\frac{1}{2}\cdot \sum_{j=2}^{n-1}\frac{1}{\deg_{\bf w}t_j}} {\prod_{j=2}^{n-1}\deg_{\bf w}t_j}. \end{eqnarray} \end{theorem} The Stokes regions in $\MM^{alg}$ and in $\MM^{mar}$ are the components of the preimages in $\MM^{alg}$ and $\MM^{mar}$ of $\pr_n^{p,c}(F_n)\subset C_n^{conf}$. The restriction of $\LL^{alg}$ (respectively $\LL^{mar}$) to a Stokes region is an isomorphism to $\pr_n^{p,c}(F_n)$. We obtain a map \index{$LD$} $$LD:\{\textup{Stokes regions in }\MM^{mar}\}\to \BB^{dist}/\{\pm 1\}^n$$ as for the simple singularities. However, here we have to choose a reference point $(0,t_n^{ref})\in\MM^{mar}_\mu$. We choose it with $\lambda(t_n^{ref})=\frac{1}{2}$. Then $H_\Z:=H_\Z(f_{1/2})$.
Because the space $\MM^{mar}=\C^{n-1}\times\H$ is simply connected there is a canonical isomorphism $$H_m(F_t^{-1}(\eta),\Z)\cong H_m(f_{1/2}^{-1}(\eta),\Z) \cong H_\Z(f_{1/2})$$ for any $t\in\MM^{mar}$ and (depending on $t$) sufficiently large $\eta>0$. We do not obtain a map $LD$ for the set of Stokes regions in $\MM^{alg}$ because there we do not have such canonical isomorphisms. The $\Z$-lattice bundle $\bigcup_{\lambda\in\C-\{0,1\}}H_\Z(f_\lambda)$ has transversal monodromy. Choose one distinguished basis $\uuuu{e}\in\BB^{dist}$ for reference. The map $LD$ is surjective because the restriction of $\LL^{mar}$ to a map $$\LL^{mar}:\MM^{mar}-(\KK_3^{mar}\cup \KK_2^{mar})\to C_n^{conf}$$ is by Theorem \ref{t10.46} (a) a covering. Even more, it factorizes in ways described by the following commutative diagram of coverings, \begin{eqnarray}\label{10.44} \begin{xy} \xymatrix{ \MM^{alg}-(\KK_3^{alg}\cup\KK_2^{alg}) \ar[dr]^{\textup{finite}} \ar[rr]^{\textup{finite}} & & C_n^{conf} \\ & C_n^{S/\{\pm 1\}^n} \ar[ur]^{\textup{finite}} & \\ \MM^{mar}-(\KK_3^{mar}\cup\KK_2^{mar}) \ar[uu]^{\infty} \ar[ur]^{\infty} \ar[rr] & & C_n^{\uuuu{e}/\{\pm 1\}^n} \ar[ul]^{\infty} \ar[uu]^{\infty} } \end{xy} \end{eqnarray} This follows from the construction of the spaces and the compatibility of the braid group actions. The main result in this section for the simple elliptic singularities is the following. \begin{theorem}\label{t10.47} \cite[Theorem 7.1]{HR21} Consider a 1-parameter family of simple elliptic singularities. The map \begin{eqnarray}\label{10.45} LD:\{\textup{Stokes regions in }\MM^{mar}\}\to \BB^{dist}/\{\pm 1\}^n \end{eqnarray} is a bijection. Equivalent: The covering \begin{eqnarray}\label{10.46} \pr_n^{mar,e}:\MM^{mar}-(\KK_3^{mar}\cup \KK_2^{mar})\to C_n^{\uuuu{e}/\{\pm 1\}^n} \end{eqnarray} is an isomorphism. \end{theorem} The ideas of the proof are the same as in the proof of Theorem \ref{t10.42} for the simple singularities in \cite{HR21}. 
Recall that the group $G_\Z:=G_\Z(f_{1/2})$ is by Theorem \ref{t10.24} (c) the extension of the group $G_\Z|_{\Rad I^{(0)}}\cong SL_2(\Z)$ by the finite group $U_1\rtimes U_2$. The following results are proved in \cite{HR21} in a way similar to the case of the simple singularities. \begin{lemma}\label{t10.48} (a) $(\RR^{f_{1/2}})_{hom}=U_1\rtimes U_2$ if $\mult(f)=3$, and $(\RR^{f_{1/2}})_{hom}=U_1\rtimes U_2\times\{\pm\id\}$ if $\mult(f)=2$. (b) $G_\Z=G_\Z^{\BB}$. (c) The covering $\pr_n^{mar,S}:\MM^{mar}-(\KK_3^{mar}\cup\KK_2^{mar})\to C_n^{S/\{\pm 1\}^n}$ is a normal covering. The group of deck transformations is isomorphic to the group $\Aut_{\MM^{mar}}:=\Aut(\MM^{mar},\circ,e,E)$ of automorphisms of $\MM^{mar}$ as an F-manifold with Euler field, and isomorphic to $G_\Z/\{\pm \id\}$. (d) The covering $\pr_n^{e,S}:C_n^{\uuuu{e}/\{\pm 1\}^n}\to C_n^{S/\{\pm 1\}^n}$ is a normal covering with group of deck transformations isomorphic to $G_\Z/\{\pm \id\}$. \end{lemma} The proof of part (c) uses Jaworski's idea, which was described after Lemma \ref{t10.45}. Now a matrix $S\in\SSS^{dist}$ can have the entries $0,\pm 1,\pm 2$. The lifts of a path in $C_n^{conf}$ which tends to a generic point of $D_n^{conf}$ tend to $\KK_2^{mar}$ if the relevant entry is $0$, to $\KK_3^{mar}$ if the relevant entry is $\pm 1$, and to $\C^n\times\{0,1,\infty\}$ if the relevant entry is $\pm 2$. Lemma \ref{t10.48} (c)+(d) and the diagram \eqref{10.44} of coverings show Theorem \ref{t10.47}. \begin{remarks}\label{t10.49} (i) The following table gives the degrees of the finite coverings in the diagram \eqref{10.44} of coverings.
\begin{eqnarray}\label{10.47} \begin{array}{lll} & \deg\pr_n^{alg,S}=6|U_1||U_2| & \deg\pr_n^{S,c}=|\SSS^{dist}/\{\pm 1\}^n|\\ \hline \www E_6 & 6\cdot 2\cdot 3\cdot 3^2 =324 & 3^7\cdot 5\cdot 7=76545\\ \www E_7 & 6\cdot 1\cdot 4\cdot 2^2 = 96 & 2^{13}\cdot 5^3\cdot 7 =7168000\\ \www E_8 & 6\cdot 1\cdot 6\cdot 1^2 = 36 & 2^7\cdot 3^8\cdot 7\cdot 101 =593744256 \end{array} \end{eqnarray} The surprising factor $101$ in the case of $\www{E}_8$ comes from the term $\sum_{j=2}^{n-1}\frac{1}{\deg_{\bf w}t_j}$ in formula \eqref{10.43} for $\deg\LL^{alg}$. (ii) The number $\deg\LL^{alg}=\deg \pr_n^{alg,S}\cdot |\SSS^{dist}/\{\pm 1\}^n|$ was calculated in \cite{HR21} and again in \cite{TZ23}. With the easily determined number $\deg \pr_n^{alg,S}=6|U_1||U_2|$ it gives $|\SSS^{dist}/\{\pm 1\}^n|$. However, this number $|\SSS^{dist}/\{\pm 1\}^n|$ had been determined much earlier by Kluitmann in the cases $\www{E}_6$ and $\www{E}_7$, for $\www{E}_6$ in \cite{Kl83} and \cite[VII]{Kl87}, for $\www{E}_7$ in \cite[VII]{Kl87}, by a long combinatorial and inductive procedure. \end{remarks} \begin{remarks}\label{t10.50} For singularities other than the simple and the simple elliptic singularities, there are a priori no natural global unfoldings. A priori the base space of a universal unfolding is a germ $(\MM,0)\cong(\C^n,0)$. Consider a given $\mu$-homotopy class of singularities. In \cite{He11} a global moduli space $\MM^{mar}_\mu$ for {\it right equivalence classes} of {\it marked singularities} is constructed. Locally it is isomorphic to the $\mu$-constant stratum in the base space of a universal unfolding of one singularity. So one can consider it as a global $\mu$-constant stratum for all singularities in one $\mu$-homotopy class.
In order to obtain global moduli spaces of dimension $n$, it would be good to thicken $\MM^{mar}_\mu$ to an $n$-dimensional manifold which near each point of $\MM^{mar}_\mu$ is the base space of a universal unfolding, and then glue this manifold to the corresponding manifold $C_n^{\uuuu{e}/\{\pm 1\}^n}$ where $\uuuu{e}\in\BB^{dist}$ is one chosen distinguished basis. By Theorems \ref{t10.42} and \ref{t10.47} this works for the simple singularities and the simple elliptic singularities. It gives the manifold $\MM^{alg}$ for the simple singularities and the manifold $\MM^{mar}$ for the simple elliptic singularities. For other singularities this is an open project. \end{remarks} \begin{appendix} \chapter{Tools from hyperbolic geometry}\label{sa} \setcounter{equation}{0} \setcounter{figure}{0} \renewcommand{\theequation}{\mbox{A.\arabic{equation}}} \renewcommand{\thefigure}{\mbox{A.\arabic{figure}}} The upper half plane \index{upper half plane}\index{hyperbolic plane} $\H=\{z\in\C\,|\, \Im(z)>0\}$ together with its natural metric (whose explicit form we will not need) is one model of the hyperbolic plane. In the study of the monodromy groups $\Gamma^{(0)}$ and $\Gamma^{(1)}$ for the cases with $n=2$ or $n=3$, we will often encounter subgroups of $\Isom(\H)$. The theorem of Poincar\'e-Maskit \cite{Po82}\cite{Ma71} allows one, under certain conditions, to show that such a group is discrete, to find a fundamental domain for it, and to find a presentation of it. Three special cases of this theorem, which will be sufficient for us, are formulated in Theorem \ref{ta.2}. Beforehand, we collect basic facts and set up notation in the following Remarks and Notations \ref{ta.1}. Subgroups of $\Isom(\H)$ arise here in two ways. Either they come from groups of real $2\times 2$ matrices; this is covered by the Remarks and Notations \ref{ta.1} (v). Or they come from the action of certain groups of real $3\times 3$ matrices on $\R^3$ with an indefinite metric; this will be treated in Theorem \ref{ta.4}.
\begin{remarksandnotations}\label{ta.1}(Some references for the following material are \cite{Fo51}\cite{Le66}\cite{Be83}.) (i) Let $n\in\N$. Recall the notions of the free group $G^{free,n}$ with $n$ generators and of the free Coxeter group $G^{fCox,n}$ with $n$ generators from Definition \ref{t3.1}. (ii) $\widehat{\C}:=\C\cup\{\infty\}$, $\widehat{\R}:=\R\cup\{\infty\}$. The hyperbolic lines \index{hyperbolic line} in $\H$ are the parts in $\H$ of circles and euclidean lines which meet $\R$ orthogonally. For any $z_1,z_2\in\H\cup\widehat{\R}$ with $z_1\neq z_2$, denote by $A(z_1,z_2)$ the part between $z_1$ and $z_2$ of the unique hyperbolic line whose closure in $\H\cup\widehat{\R}$ contains $z_1$ and $z_2$. Here $z_i\in A(z_1,z_2)$ if $z_i\in\H$, but $z_i\notin A(z_1,z_2)$ if $z_i\in\widehat{\R}$. Such sets are called {\it arcs}. \index{arc} (iii) We simplify the definition of a polygon in \cite{Ma71}. A {\it hyperbolic polygon} \index{hyperbolic polygon} $P$ is a contractible open subset $P\subset\H$ whose relative boundary in $\H$ consists of finitely many arcs $A_1=A(z_{1,1},z_{1,2}),..., A_m=A(z_{m,1},z_{m,2})$ (Maskit allows countably many arcs). The arcs and the points are numbered such that one runs through them in the order $A_1,...,A_m$ and $z_{1,1},z_{1,2},z_{2,1},z_{2,2},..., z_{m,1},z_{m,2}$ if one runs in the mathematically positive direction (counterclockwise) along the euclidean boundary of $P$ in $\H\cup\widehat{\R}$. For $A_i$ and $A_{i+1}$ (with $A_{m+1}:=A_1$) there are three possibilities: \begin{list}{}{} \item[(a)] $z_{i,2}=z_{i+1,1}\in\H;$ then this point is called a {\it vertex} of $P$; \item[(b)] $z_{i,2}=z_{i+1,1}\in\widehat{\R}$; \item[(c)] $z_{i,2}\in\widehat{\R},z_{i+1,1}\in\widehat{\R}, z_{i,2}\neq z_{i+1,1}$; then the part of $\widehat{\R}$ between $z_{i,2}$ and $z_{i+1,1}$ (moving from smaller to larger values) is in the euclidean boundary of $P$ between $A_i$ and $A_{i+1}$. \end{list} In the second and third cases $A_i\cap A_{i+1}=\emptyset$.
A polygon has no vertices if and only if all arcs $A_1,...,A_m$ are hyperbolic lines, and if and only if all points $z_{1,1},...,z_{m,2}\in\widehat{\R}$. (iv) Denote \index{$GL_2^{(\pm 1)}(\R)$} \begin{eqnarray*} GL^{(-1)}_2(\R)&:=&\{A\in GL_2(\R)\,|\, \det A=-1\},\\ GL^{(\pm 1)}_2(\R)&:=& \{A\in GL_2(\R)\,|\, \det A=\pm 1\} =SL_2(\R)\cup GL^{(-1)}_2(\R), \end{eqnarray*} and analogously $GL^{(-1)}_2(\Z)$, $GL^{(\pm 1)}_2(\Z)$. Recall $$A^t\begin{pmatrix}0&1\\-1&0\end{pmatrix}A= \det A\cdot \begin{pmatrix}0&1\\-1&0\end{pmatrix} \quad\textup{for }A\in GL^{(\pm 1)}_2(\R).$$ (v) The following map $\mu:GL^{(\pm 1)}_2(\R)\to\Isom(\H)$ is a surjective group homomorphism with kernel $\ker\mu=\{\pm E_2\}$, \begin{eqnarray*} \mu(A)=\Bigl(z\mapsto \frac{za+b}{cz+d}\Bigr) &&\textup{ if }A=\begin{pmatrix}a&b\\c&d\end{pmatrix} \in SL_2(\R),\\ \mu(A)=\Bigl(z\mapsto \frac{\oooo{z}a+b}{c\oooo{z}+d}\Bigr) &&\textup{ if }A=\begin{pmatrix}a&b\\c&d\end{pmatrix} \in GL^{(-1)}_2(\R). \end{eqnarray*} $\mu(A)$ for $A\in SL_2(\R)$ is orientation preserving and is called a {\it M\"obius transformation}. \index{M\"obius transformation} \index{$\mu:GL^{(\pm 1)}_2(\R)\to\Isom(\H)$} $\mu(A)$ for $A\in GL^{(-1)}_2(\R)$ is orientation reversing. If $A\in SL_2(\R)-\{\pm E_2\}$, there are three possibilities: \begin{list}{}{} \item[(a)] $|\tr(A)|<2$; then $A$ has a fixed point in $\H$ (and the complex conjugate number is a fixed point in $-\H$) and is called {\it elliptic}. \index{elliptic M\"obius transformation} \item[(b)] $|\tr(A)|=2$; then $A$ has one fixed point in $\widehat{\R}$ and is called {\it parabolic}. \index{parabolic M\"obius transformation} \item[(c)] $|\tr(A)|>2$; then $A$ has two fixed points in $\widehat{\R}$ and is called {\it hyperbolic}.
\index{hyperbolic M\"obius transformation} \end{list} If $A=\begin{pmatrix}a&b\\c&d\end{pmatrix} \in GL^{(-1)}_2(\R)$ with $\tr(A)=0$ then $\mu(A)$ is a reflection along the hyperbolic line \begin{eqnarray*} \{z\in\H\,|\, z=\mu(A)(z)\} =\{z\in\H\,|\, 0=cz\oooo{z}-2a\Ree(z)-b\}\\ \Bigl(= \{z\in\H\,|\, 0=(z-\frac{a}{c})(\oooo{z}-\frac{a}{c}) -\frac{1}{c^2}\}\quad\textup{if }c\neq 0\Bigr). \end{eqnarray*} \end{remarksandnotations} The theorem of Poincar\'e-Maskit starts with a hyperbolic polygon $P$ whose relative boundary in $\H$ consists of arcs $A_1,...,A_m$, with an involution $\sigma\in S_m$ and with elements $g_1,...,g_m\in \Isom(\H)$ with $g_i(A_i)=A_{\sigma(i)}$ and $g_{\sigma(i)}=g_i^{-1}$. Under some additional conditions, it states that the group $G:=\langle g_1,...,g_m\rangle\subset\Isom(\H)$ is discrete and that $P$ is a fundamental domain (i.e. each orbit of $G$ in $\H$ meets the relative closure of $P$ in $\H$, and no orbit of $G$ in $\H$ meets $P$ in more than one point), and it gives a complete set of relations of $G$ with respect to $g_1,...,g_m$. Poincar\'e \cite{Po82} treated the case when $\H/G$ is compact, Maskit \cite{Ma71} generalized it greatly. In \cite{Ma71} the relative boundary of $P$ in $\H$ may consist of countably many arcs. The following theorem singles out three special cases, which are sufficient for us. Remark \ref{ta.3} (ii) illustrates them with pictures. \begin{theorem}\label{ta.2} \index{Poincar\'e-Maskit theorem} \cite{Ma71} Let $P\subset \H$ be a hyperbolic polygon whose relative boundary in $\H$ consists of arcs $A_1,...,A_m$ with $A_i=A(z_{i,1},z_{i,2})$, where one runs through these arcs and these points in the order $A_1,...,A_m$ and $z_{1,1},z_{1,2},z_{2,1},z_{2,2},...,z_{m,1},z_{m,2}$ if one runs mathematically positive on the euclidean boundary of $P$ in $\H\cup\widehat{\R}$.
(a) Let $I\subset\{1,...,m\}$ be the set of indices such that $z_{i,2}$ is a vertex, so $z_{i,2}=z_{i+1,1}\in\H$, with $A_{m+1}:=A_1$ and $z_{m+1,i}:=z_{1,i}$ ($I$ may be empty). Suppose that at a vertex $z_{i,2}$ the arcs $A_i$ and $A_{i+1}$ meet at an angle $\frac{\pi}{n_i}$ for some number $n_i\in\Z_{\geq 2}$. For $i\in \{1,...,m\}$ let $g_i\in \Isom(\H)$ be the reflection along the hyperbolic line which contains $A_i$. The group $G:=\langle g_1,...,g_m\rangle\subset\Isom(\H)$ is discrete, $P$ is a fundamental domain, and the relations $$g_1^2=...=g_m^2=\id,\quad (g_ig_{i+1})^{n_i}=\id \quad\textup{for }i\in I,$$ form a complete set of relations. Especially, if $I=\emptyset$ then $G$ is a free Coxeter group with generators $g_1,...,g_m$. (b) Let $P\subset\H$ have no vertices. Choose on each hyperbolic line $A_i$ a point $p_i$, and let $g_i$ be the elliptic element with fixed point $p_i$ and rotation angle $\pi$. The group $G:=\langle g_1,...,g_m\rangle\subset\Isom(\H)$ is discrete, $P$ is a fundamental domain, and the relations $g_1^2=...=g_m^2=\id$ form a complete set of relations, so $G$ is a free Coxeter group with generators $g_1,...,g_m$. (c) Let $P\subset\H$ have no vertices. Suppose that $m$ is even. Suppose $z_{2i-1,2}=z_{2i,1}$ for $i\in\{1,2,...,\frac{m}{2}\}$. Let $g_i$ for $i\in\{1,2,...,\frac{m}{2}\}$ be the parabolic element with fixed point $z_{2i-1,2}$ which maps $A_{2i-1}$ to $A_{2i}$. The group $G:=\langle g_1,...,g_{\frac{m}{2}}\rangle\subset\Isom(\H)$ is discrete, $P$ is a fundamental domain, and the group $G$ is a free group with generators $g_1,...,g_{\frac{m}{2}}$. \end{theorem} \begin{remarks}\label{ta.3} (i) The Cayley transformation \index{Cayley transformation} $\widehat{\C}\to\widehat{\C}$, $\bigl(z\mapsto \frac{z-i}{z+i}\bigr)$ maps the upper half plane $\H$ to the unit disk $\D=\{z\in\C\,|\, |z|<1\}$. It leads to the unit disk model of the hyperbolic plane.
The hyperbolic lines in this model are the parts in $\D$ of circles and euclidean lines which intersect $\partial\D$ orthogonally. (ii) The following three pictures illustrate Theorem \ref{ta.2} in the unit disk model instead of the upper half plane model. \begin{figure}[H] \includegraphics[width=1.0\textwidth]{pic-a-1.png} \caption[Figure A.1]{Three pictures for Theorem \ref{ta.2}} \label{Fig:A.1} \end{figure} \end{remarks} The surjective group homomorphism $\mu:Gl^{(\pm 1)}_2(\R)\to\Isom(\H)$ in Remark \ref{ta.1} (v) shows how to go from groups of real $2\times 2$ matrices to subgroups of $\Isom(\H)$. The next theorem shows how to go from groups of certain real $3\times 3$ matrices to subgroups of $\Isom(\H)$. It is classical, but as we need some details, we prefer to explain them here rather than refer to the literature. \begin{theorem}\label{ta.4} Let $(H_\R,I^{[0]})$ be a 3-dimensional real vector space with a symmetric bilinear form $I^{[0]}$ with signature $(+--)$. (a) (Elementary linear algebra) A vector $v\in H_\R-\{0\}$ is called {\sf positive} if $I^{[0]}(v,v)>0$, {\sf isotropic} if $I^{[0]}(v,v)=0$, {\sf negative} if $I^{[0]}(v,v)<0$. The positive vectors \index{positive vector} form a (double) cone $\KK\subset H_\R$, \index{cone}\index{$\KK\subset H_\R$} the isotropic vectors \index{isotropic vector} and the vector 0 form its boundary, the negative vectors \index{negative vector} form its complement. The orthogonal hyperplanes $(\R\cdot v)^\perp$ satisfy the following: \begin{list}{}{} \item[(i)] $(\R\cdot v)^\perp\cap \KK\neq\emptyset$ if $v$ is negative. \item[(ii)] $(\R\cdot v)^\perp\cap\oooo{\KK}=\R\cdot v$ if $v$ is isotropic. \item[(iii)] $(\R\cdot v)^\perp\cap\oooo{\KK}=\{0\}$ if $v$ is positive. \end{list} $\KK/\R^*$ denotes the lines in $\KK$, i.e. the 1-dimensional subspaces. (b) (Basic properties of $\Aut(H_\R,I^{[0]})$) Let $\sigma:\Aut(H_\R,I^{[0]})\to\{\pm 1\}$ be the spinor norm map (see Remark \ref{t6.3} (iii)).
The group $\Aut(H_\R,I^{[0]})$ is a real 3-dimensional Lie group with four components. The components are the fibers of the group homomorphism \begin{eqnarray*} (\det,\sigma):\Aut(H_\R,I^{[0]})\to \{\pm 1\}\times \{\pm 1\}. \end{eqnarray*} $-\id\in \Aut(H_\R,I^{[0]})$ has value $(\det,\sigma)(-\id)=(-1,1)$. If $v$ is positive then $(\det,\sigma)(s^{(0)}_v)=(-1,1)$. If $v$ is negative then $(\det,\sigma)(s^{(0)}_v)=(-1,-1)$. An isometry $g\in \Aut(H_\R,I^{[0]})$ maps each of the two components of the cone $\KK$ to itself if and only if $g$ is in the two components which together form the kernel of $\det\cdot\sigma$, so if $\det(g)\sigma(g)=1$. (c) Choose a basis $\uuuu{f}=(f_1,f_2,f_3)$ of $H_\R$ with $$I^{[0]}(\uuuu{f}^t,\uuuu{f})= \begin{pmatrix}0&0&1\\0&-2&0\\1&0&0\end{pmatrix}.$$ (i) The map \index{$\Theta: Gl^{(\pm 1)}_2(\R)\to \Aut(H_\R)$} \begin{eqnarray*} \Theta: Gl^{(\pm 1)}_2(\R)\to \Aut(H_\R),\quad \begin{pmatrix}a&b\\c&d\end{pmatrix}\mapsto \Bigl(\uuuu{f}\mapsto \uuuu{f} \begin{pmatrix}a^2&2ab&b^2\\ac&ad+bc&bd\\c^2&2cd&d^2 \end{pmatrix}\Bigr), \end{eqnarray*} is a group homomorphism with kernel $\ker\Theta=\{\pm E_2\}$ and image $\ker(\det\cdot\sigma:\Aut(H_\R,I^{[0]})\to\{\pm 1\})$. (ii) The map \index{$\vartheta:\H\to (H_\R-\{0\})/\R^*$} \begin{eqnarray*} \vartheta:\H\to (H_\R-\{0\})/\R^*,\quad z\mapsto\R^*(z\oooo{z}f_1+\Ree(z)f_2+f_3), \end{eqnarray*} is a bijection $\vartheta:\H\to \KK/\R^*$. (iii) For $A\in Gl^{(\pm 1)}_2(\R)$, the automorphism $\vartheta\circ\mu(A)\circ\vartheta^{-1}:\KK/\R^*\to \KK/\R^*$ coincides with the action of $\Theta(A)$ on $\KK/\R^*$. (iv) The natural maps \begin{eqnarray*} \begin{array}{ccccc} \Aut(H_\R,I^{[0]})/\{\pm \id\} & \longleftarrow & \ker(\det\cdot\sigma) & \longrightarrow & \Isom(\H)\\ \{\pm B\} & \longleftarrow & B\ , \ \Theta(A) & \longmapsto & \mu(A) \end{array} \end{eqnarray*} are group isomorphisms. 
(v) For each hyperbolic line $l$ there is a negative vector $v\in H_\R$ with $\vartheta(l)=((\R\cdot v)^\perp\cap\KK)/\R^*$. (vi) Let $\sigma_l\in \Isom(\H)$ be the reflection along a hyperbolic line $l$, and let $v$ be as in (v). The action of $s^{(0)}_v$ on $\KK/\R^*$ coincides with $\vartheta\circ\sigma_l\circ\vartheta^{-1}$. (vii) Let $\delta_p\in\Isom(\H)$ be the elliptic element with fixed point $p\in\H$ and order 2 (so rotation angle $\pi$). Let $v\in H_\R$ be a positive vector with $\R\cdot v=\vartheta(p)$. The action of $s^{(0)}_v$ on $\KK/\R^*$ coincides with $\vartheta\circ \delta_p\circ\vartheta^{-1}$. \end{theorem} {\bf Proof:} (a) and (b) are elementary and classical; their proofs are skipped. (c) (i) Start with a real 2-dimensional vector space $V_\R$ with basis $\uuuu{e}=(e_1,e_2)$ and a skew-symmetric bilinear form $I^{[1]}$ on $V_\R$ with matrix $I^{[1]}(\uuuu{e}^t,\uuuu{e})=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. The tensor product $V_\R\otimes V_\R$ comes equipped with an induced symmetric bilinear form $\www{I}^{(0)}$ via (here $I$ and $J$ are finite index sets and $a_i,b_i,c_j,d_j\in V_\R$) \begin{eqnarray*} \www{I}^{(0)}(\sum_{i\in I}a_i\otimes b_i,\sum_{j\in J}c_j\otimes d_j) =\sum_{i\in I}\sum_{j\in J}I^{[1]}(a_i,c_j)I^{[1]}(b_i,d_j). \end{eqnarray*} An element $g\in\Aut(V_\R)$ with $g\uuuu{e}=\uuuu{e}A$ and $A\in Gl^{(\pm 1)}_2(\R)$ respects $I^{[1]}$ in the following weak sense: \begin{eqnarray*} I^{[1]}(g(v_1),g(v_2))&=&\det A\cdot I^{[1]}(v_1,v_2),\\ \textup{because }A^t\begin{pmatrix}0&1\\-1&0\end{pmatrix}A &=&\det A\cdot \begin{pmatrix}0&1\\-1&0\end{pmatrix}. \end{eqnarray*} It induces an element $\www{\Theta}(g)\in\Aut(V_\R\otimes V_\R,\www{I}^{(0)})$ via \begin{eqnarray*} \www{\Theta}(g)(\sum_{i\in I}a_i\otimes b_i)= \sum_{i\in I}g(a_i)\otimes g(b_i).
\end{eqnarray*} The symmetric part $\www{H}_\R\subset V_\R\otimes V_\R$ of the tensor product has the basis $\www{\uuuu{f}}=(\www{f}_1,\www{f}_2,\www{f}_3) =(e_1\otimes e_1,e_1\otimes e_2+e_2\otimes e_1,e_2\otimes e_2)$. One sees \begin{eqnarray*} \www{I}^{(0)}(\www{\uuuu{f}}^t,\www{\uuuu{f}}) =\begin{pmatrix}0&0&1\\0&-2&0\\1&0&0\end{pmatrix}. \end{eqnarray*} From now on we identify $(\www{H}_\R, \www{I}^{(0)}|_{\www{H}_\R},\www{\uuuu{f}})$ with $(H_\R,I^{[0]},\uuuu{f})$. For an element $g\in \Aut(V_\R)$ with $g\uuuu{e}=\uuuu{e}A$ for $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in Gl^{(\pm 1)}_2(\R)$, the automorphism $\www{\Theta}(g)$ on $V_\R\otimes V_\R$ restricts to an automorphism of the symmetric part $H_\R$ with matrix \begin{eqnarray*} \www{\Theta}(g)\uuuu{f}=\uuuu{f} \begin{pmatrix}a^2&2ab&b^2\\ac&ad+bc&bd\\c^2&2cd&d^2 \end{pmatrix}. \end{eqnarray*} This matches the definition of $\Theta$. In particular, it shows $\Theta(A)\in\Aut(H_\R,I^{[0]})$. The kernel of $\Theta$ is $\{\pm E_2\}$. The Lie groups $Gl^{(\pm 1)}_2(\R)$ and $\Aut(H_\R,I^{[0]})$ are real 3-dimensional. The Lie group $Gl^{(\pm 1)}_2(\R)$ has two components. Furthermore $(\det,\sigma)(\Theta(\begin{pmatrix}1&0\\0&-1\end{pmatrix})) =(\det,\sigma)(s^{(0)}_{f_2})=(-1,-1)$. Therefore the image of $\Theta$ consists of the two components of $\Aut(H_\R,I^{[0]})$ which together form the kernel of $\det\cdot \sigma$. This finishes the proof of part (i). (ii) Define $\www{\vartheta}(z):=z\oooo{z}f_1+\Ree(z)f_2+f_3$. It is a positive vector because \begin{eqnarray*} I^{[0]}(\www{\vartheta}(z),\www{\vartheta}(z)) =z\oooo{z}\cdot 1-2(\Ree(z))^2+1\cdot z\oooo{z}=2(\Im(z))^2>0. \end{eqnarray*} It is easy to see that $\vartheta$ is a bijection from $\H$ to $\KK/\R^*$.
(iii) In fact, $\www{\vartheta}(z)$ is the symmetric part of $$(ze_1+e_2)\otimes (\oooo{z}e_1+e_2) =(\uuuu{e}\begin{pmatrix}z\\1\end{pmatrix})\otimes (\uuuu{e}\begin{pmatrix}\oooo{z}\\1\end{pmatrix}) \in V_\C\otimes V_\C.$$ For $A=\begin{pmatrix}a&b\\c&d\end{pmatrix} \in Gl^{(\pm 1)}_2(\R)$ \begin{eqnarray*} &&\www{\vartheta}(\mu(A)(z))\\ &=& \Bigl(\textup{symmetric part of } (\uuuu{e}\begin{pmatrix}\mu(A)(z)\\1\end{pmatrix})\otimes (\uuuu{e}\begin{pmatrix}\mu(A)(\oooo{z})\\1\end{pmatrix})\Bigr)\\ &=& |cz+d|^{-2}\Bigl(\textup{symmetric part of } (\uuuu{e}A\begin{pmatrix}z\\1\end{pmatrix})\otimes (\uuuu{e}A\begin{pmatrix}\oooo{z}\\1\end{pmatrix})\Bigr)\\ &=& |cz+d|^{-2}\uuuu{f} \begin{pmatrix}a^2&2ab&b^2\\ac&ad+bc&bd\\c^2&2cd&d^2 \end{pmatrix} \begin{pmatrix}z\oooo{z} \\ \Ree(z) \\ 1\end{pmatrix}\\ &=& |cz+d|^{-2}\Theta(A)(\uuuu{f}) \begin{pmatrix}z\oooo{z}\\ \Ree(z)\\1\end{pmatrix}\\ &=& |cz+d|^{-2}\Theta(A)(\www{\vartheta}(z)). \end{eqnarray*} This shows part (iii). Part (iv) follows from part (iii). (v) A hyperbolic line $l$ is the fixed point set of a reflection $\mu(A)$ for a matrix $A=\begin{pmatrix}a&b\\c&-a\end{pmatrix}\in Gl^{(-1)}_2(\R)$, so \begin{eqnarray*} l&=& \{z\in\H\,|\, z=\mu(A)(z)\} =\{z\in \H\,|\, 0=cz\oooo{z}-2a\Ree(z)-b\}. \end{eqnarray*} Observe \begin{eqnarray*} cz\oooo{z}-2a\Ree(z)-b=I^{[0]}(\uuuu{f} \begin{pmatrix}-b\\a\\c\end{pmatrix}, \uuuu{f}\begin{pmatrix}z\oooo{z}\\ \Ree(z)\\1\end{pmatrix}). \end{eqnarray*} Therefore \begin{eqnarray*} \vartheta(l)&=& \Bigl(\R(-bf_1+af_2+cf_3)^\perp\cap\KK\Bigr)/\R^*. \end{eqnarray*} (vi) and (vii) are clear now.\hfill$\Box$ \begin{remarks}\label{ta.5} (i) In the model $\KK/\R^*$ of the hyperbolic plane, the isometries of $\H$ are transformed to linear isometries of $(H_\R,I^{[0]})$. The hyperbolic lines in $\H$ are transformed to linear hyperplanes in $H_\R$ (modulo $\R^*$) which intersect $\KK$.
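The explicit matrices above make this linear model easy to experiment with. The following Python sketch is an informal numerical illustration (the function names are ours, not part of the text): it checks that $\Theta(A)$ preserves the Gram matrix of $I^{[0]}$, that $\Theta$ is multiplicative, and the identity $\Theta(A)(\www{\vartheta}(z))=|cz+d|^2\,\www{\vartheta}(\mu(A)(z))$ from the proof of part (iii).

```python
import numpy as np

# Gram matrix of I^[0] in the basis f = (f_1, f_2, f_3)
G = np.array([[0, 0, 1],
              [0, -2, 0],
              [1, 0, 0]])

def theta(A):
    """Matrix of Theta(A) in the basis f (the symmetric square of A)."""
    (a, b), (c, d) = A
    return np.array([[a * a, 2 * a * b, b * b],
                     [a * c, a * d + b * c, b * d],
                     [c * c, 2 * c * d, d * d]])

def vartheta(z):
    """Coordinate vector of z*zbar*f_1 + Re(z)*f_2 + f_3."""
    return np.array([abs(z) ** 2, z.real, 1.0])

def moebius(A, z):
    (a, b), (c, d) = A
    return (a * z + b) / (c * z + d)
```

For instance, `theta([[1, 1], [0, 1]]) @ theta([[1, 0], [1, 1]])` coincides with `theta([[2, 1], [1, 1]])`, and `theta(A).T @ G @ theta(A)` returns `G` for any `A` with determinant $\pm 1$.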
(ii) If one chooses an affine hyperplane in $H_\R$ which intersects one component of $\KK$ in a disk, this disk gives a new disk model of the hyperbolic plane, which is not conformal to $\H$ and $\D$ (angles are not preserved), but where the hyperbolic lines in $\H$ are transformed to the segments in the new disk of euclidean lines in the affine hyperplane which intersect the disk. The following picture sketches three hyperbolic lines in the unit disk $\D$ and in the new disk which is part of the cone $\KK$. \begin{figure}[H] \includegraphics[width=0.7\textwidth]{pic-a-2.png} \caption[Figure A.2]{Two disk models of the hyperbolic plane} \label{Fig:A.2} \end{figure} \end{remarks} \chapter{The first congruence subgroups}\label{sb} \setcounter{equation}{0} \setcounter{figure}{0} \renewcommand{\theequation}{\mbox{B.\arabic{equation}}} \renewcommand{\thefigure}{\mbox{B.\arabic{figure}}} For the 3-dimensional covering spaces in section \ref{s9.2}, we need some classical facts on the quotient map $\H\to\H/\Gamma(n)$ for $n\in\{2,3\}$. In the case $n=2$ we need additionally a lift of the multivalued logarithm on $\C-\{0,1\}\cong\H/\Gamma(2)$ to a univalued holomorphic function on $\H$. For $n\in\Z_{\geq 2}$ the congruence subgroup \index{congruence subgroup}\index{$\Gamma(n)$} $\Gamma(n)$ is the group \begin{eqnarray*} \Gamma(n):=\{A\in SL_2(\Z)\,|\, A\equiv E_2\mmod n\}. \end{eqnarray*} For $n=2$ we have $-E_2\in\Gamma(2)$, and $\Gamma(2)/\{\pm E_2\}$ embeds into $PSL_2(\Z)$. For $n\geq 3$ we have $-E_2\notin \Gamma(n)$, and $\Gamma(n)$ embeds into $PSL_2(\Z)$. In any case we write $\oooo{\Gamma(n)}\subset PSL_2(\Z)$ for the image in $PSL_2(\Z)$. The following theorem collects classical facts (see e.g. \cite[5.4 and 5.7.3]{La09}). \begin{theorem}\label{tb.1} (a) Fix $n\in\Z_{\geq 2}$. The group $\oooo{\Gamma(n)}$ is a normal subgroup of finite index in $PSL_2(\Z)$. It does not contain elliptic elements. It acts properly discontinuously and freely on $\H$. 
The quotient $\H/\Gamma(n)$ is the complement of finitely many points in a smooth compact complex curve. The finitely many missing points correspond to the $\oooo{\Gamma(n)}$ orbits in $\Q\cup\{\infty\}$, which is the set of fixed points of the parabolic elements in $\oooo{\Gamma(n)}$. The number of the missing points is $[PSL_2(\Z):\oooo{\Gamma(n)}]\cdot n^{-1}$. (b) Fix $n\in\{2,3,4,5\}$. Then $\H/\Gamma(n)\cong \P^1\C- \{\textup{finitely many points}\}$. \begin{eqnarray*} \begin{array}{cccc} n & PSL_2(\Z)/\oooo{\Gamma(n)} & [PSL_2(\Z):\oooo{\Gamma(n)}] & |\{\textup{missing points}\}| \\ \hline 2 & \cong S_3 & 6 & 3 \\ 3 & \cong A_4 & 12 & 4 \\ 4 & \cong S_4 & 24 & 6 \\ 5 & \cong A_5 & 60 & 12 \end{array} \end{eqnarray*} There exist a natural homeomorphism $h:\P^1\C\to S^2\subset\R^3$ and embeddings of the vertices of a \index{platonic solid} tetrahedron, an octahedron and an icosahedron into $S^2$ such that the image $h(\{\textup{missing points}\})$ is the set of vertices of the tetrahedron in the case $n=3$, the set of vertices of the octahedron in the case $n=4$ and the set of vertices of the icosahedron in the case $n=5$. The group $PSL_2(\Z)/\oooo{\Gamma(n)}$ for $n\in\{3,4,5\}$ acts on $S^2$ as the group of rotations which fix the corresponding platonic solid. For $n=2$, $\H/\oooo{\Gamma(2)}\cong\C-\{0,1\}$ (for the group $PSL_2(\Z)/\oooo{\Gamma(2)}$ see part (d)). For $n\in\{2,3,4,5\}$ the composition \begin{eqnarray*} T:\H\to \H/\oooo{\Gamma(n)}\stackrel{\cong}{\longrightarrow} \P^1\C-\{\textup{missing points}\}\hookrightarrow \C \end{eqnarray*} is a Schwarzian triangle function. \index{$T:\H\to \P^1\C-\{\textup{points}\}$} \index{Schwarzian triangle function} It maps the degenerate hyperbolic triangle \index{hyperbolic triangle} with vertices $0,1,\infty$ and angles $0,0,0$ to a spherical triangle with angles $\frac{2\pi}{n},\frac{2\pi}{n},\frac{2\pi}{n}$.
It maps the degenerate hyperbolic triangle with vertices $i,e^{2\pi i/6},\infty$ with angles $\frac{\pi}{2}, \frac{\pi}{3},0$ to a spherical triangle \index{spherical triangle} with angles $\frac{\pi}{2},\frac{\pi}{3},\frac{\pi}{n}$. (c) In the case $n=3$, there is an isomorphism $\P^1\C\cong\C\cup\{\infty\}$ such that the 24 spherical triangles with angles $\frac{\pi}{2},\frac{\pi}{3},\frac{\pi}{3}$ in $\P^1\C$ have the following images in $\C-\{0,1\}\subset \C\cup\{\infty\}$. Figure \ref{Fig:b.1} shows a subdivision of each of the two visible faces of a tetrahedron into 6 triangles. Projecting the tetrahedron to $S^2$ gives 24 spherical triangles. Their boundaries consist of altogether six great circles in $S^2$. Figure \ref{Fig:b.2} shows the images of the six great circles in $\C$. Three are circles, three are lines (where the point $\infty$ is missing). Figure \ref{Fig:b.3} restricts to showing the preimages of the edges of the tetrahedron. The preimages in $\C\cup\{\infty\}$ of the vertices of the tetrahedron are the points $\{\infty,1,e^{2\pi i/3},e^{-2\pi i/3}\}$. Therefore \begin{eqnarray*} \H/\oooo{\Gamma(3)}\cong \C-\{z\,|\, z^3=1\}. \end{eqnarray*} \begin{figure} \parbox{4cm}{\includegraphics[width=0.3\textwidth]{pic-b-1.png}} \parbox{6cm}{ $\bullet$ \hspace*{0.3cm} 4 vertices, \\ $\circ$ \hspace*{0.3cm} 5 (of 6) center points of edges,\\ $*$ \hspace*{0.3cm} 2 (of 4) center points of faces.} \caption[Figure B.1]{A tetrahedron and a subdivision of each face into six triangles, only two faces shown} \label{Fig:b.1} \end{figure} \begin{figure} \includegraphics[width=0.8\textwidth]{pic-b-2.png} \caption[Figure B.2]{Images in $\C\cup\{\infty\}$ of 24 spherical triangles} \label{Fig:b.2} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth]{pic-b-3.png} \caption[Figure B.3]{The edges of the tetrahedron in Figure B.2} \label{Fig:b.3} \end{figure} (d) The case $n=2$. 
The group of M\"obius transformations of $\C\cup\{\infty\}$ which fix the set $\{0,1,\infty\}$ is called {\sf anharmonic group} \index{anharmonic group} $G^{anh}$. The first four entries of the following table fix an isomorphism $g_\bullet:S_3\to G^{anh}$ with $g_{\sigma_1}g_{\sigma_2}=g_{\sigma_1\sigma_2}$. \begin{eqnarray*} \begin{array}{llllllll} \sigma & g_\sigma & g_\sigma(z) & (g_\sigma(1),g_\sigma(0),g_\sigma(\infty)) & \mu_\sigma & \mu_\sigma(\tau) & \mu_\sigma & \textup{fp} \\ \hline \id & g_{\id} & z & (1,0,\infty) & \mu_{\id} & \tau & \textup{identity} & \H\\ (1\, 2) & g_{(12)} & 1-z & (0,1,\infty) & \mu_{(12)} & \tau-1 & \textup{parabolic} & \infty \\ (1\, 3) & g_{(13)} & \frac{z}{z-1} & (\infty,0,1) & \mu_{(13)} & \frac{\tau}{\tau+1} & \textup{parabolic} & 0 \\ (2\, 3) & g_{(23)} & \frac{1}{z} & (1,\infty,0) & \mu_{(23)} & \frac{-1}{\tau} & \textup{rotation} & i \\ (1\, 2\, 3) & g_{(123)} & \frac{z-1}{z} & (0,\infty,1) & \mu_{(123)} & \frac{\tau-1}{\tau} & \textup{rotation} & e^{2\pi i/6} \\ (1\, 3\, 2) & g_{(132)} & \frac{1}{1-z} & (\infty,1,0) & \mu_{(132)} & \frac{-1}{\tau-1} & \textup{rotation} & e^{2\pi i/6} \end{array} \end{eqnarray*} The Schwarzian triangle function $T:\H\to \H/\oooo{\Gamma(2)}\cong \P^1\C-\{\textup{missing points}\}$ \index{Schwarzian triangle function} \index{$T:\H\to \P^1\C-\{\textup{points}\}$} can here be chosen as \begin{eqnarray*} T=g_{(12)}\circ (\textup{the classical }\lambda\textup{-function}): \H\to\C-\{0,1\} \end{eqnarray*} (also the $\lambda$-function itself works, but we prefer $T$ as it maps $0,1,\infty$ to $0,1,\infty$). The following table gives the values under $T$ of some points or sets. 
\begin{eqnarray*} \begin{array}{l|l||l|l} \textup{point} & T(\textup{point}) & \textup{set} & T(\textup{set}) \\ \hline 0 & 0 & i\R_{>0}=A(0,\infty) & \R_{<0}\\ 1 & 1 & 1+i\R_{>0}=A(\infty,1) & \R_{>1}\\ \infty & \infty & A(1,0) & (0,1)\\ i & -1 & \frac{1}{2}+i\R_{>0}=A(\frac{1}{2},\infty) & \frac{1}{2}+i\R\\ \frac{1}{2}(1+i) & \frac{1}{2} & A(1,-1) & S^1-\{1\}\\ e^{2\pi i /6} & e^{2\pi i /6} & A(2,0) & 1+(S^1-\{-1\}) \end{array} \end{eqnarray*} The hyperbolic polygon with boundary $A(1,0)\cup A(0,-1)\cup A(-1,\infty)\cup A(\infty,1)$ is a fundamental domain for the action of $\oooo{\Gamma(2)}$ on $\H$. It consists of 12 degenerate hyperbolic triangles with angles $\frac{\pi}{2},\frac{\pi}{3},0$. They are shown in the left picture in Figure \ref{Fig:b.4}. The right picture in Figure \ref{Fig:b.4} shows their images in $\C-\{0,1\}$. The images are the connected components of the complement of the two circles and the lines $\R$ and $\frac{1}{2}+i\R$. \begin{figure} \includegraphics[width=1.0\textwidth]{pic-b-4.png} \caption[Figure B.4]{$T$ maps 12 hyperbolic triangles to 12 spherical triangles} \label{Fig:b.4} \end{figure} The six M\"obius transformations in the fifth and sixth columns of the table above are representatives of the classes in the quotient group $\mu(SL_2(\Z))/\mu(\Gamma(2))\cong S_3$, with \begin{eqnarray*} T(\mu(A)\mu_\sigma(\tau))=T(\mu_\sigma\mu(A)(\tau)) =g_\sigma(T(\tau))\quad\textup{for }\sigma\in S_3, A\in\Gamma(2). \end{eqnarray*} The seventh and eighth columns give properties of them ({\sf fixed point} is abbreviated {\sf fp}). \end{theorem} {\bf Proof:} See e.g. \cite[5.4 and 5.7.3]{La09}. \hfill$\Box$ \begin{definition}\label{tb.2} The logarithm $\ln:\C-\{0\}\dashrightarrow\C$ is multivalued. As $T:\H\to\C-\{0,1\}$ is a universal covering of $\C-\{0,1\}$, the logarithm has univalued lifts $\H\to\C$.
We denote by $\kappa:\H\to\C$ the lift \index{lift of the logarithm}\index{$\kappa:\H\to\C$} with values in $\R_{>0}$ on $1+i\R_{>0}$ (this set maps under $T$ to $\R_{>1}$). \end{definition} The geometry of this lift is described in part (c) of Theorem \ref{tb.3}. The parts (a) and (b) help to make part (c) more transparent. They contain basic well-known facts. \begin{theorem}\label{tb.3} (a) For each cusp $c\in \Q\cup\{\infty\}$ of $PSL_2(\Z)$, the stabilizer $(\mu(SL_2(\Z)))_c$ of $c$ in $\mu(SL_2(\Z))$ is infinite cyclic with a parabolic generator. For example, for $c\in\{\infty,0,1\}$ generators are as follows, \begin{eqnarray*} \begin{array}{l|l} c & \textup{generator of }(\mu(SL_2(\Z)))_c \\ \hline \infty &\mu_{(12)}=\mu(\begin{pmatrix}1&-1\\0&1\end{pmatrix})\\ 0 &\mu_{(13)}=\mu(\begin{pmatrix}1&0\\1&1\end{pmatrix})\\ 1 &\mu(\begin{pmatrix}2&-1\\1&0\end{pmatrix}) \end{array} \end{eqnarray*} The cusps form a single $\mu(SL_2(\Z))$ orbit. If $c=\mu(B)(\infty)$ then $(\mu(SL_2(\Z)))_c = \langle\mu(B)\mu_{(12)}\mu(B)^{-1}\rangle$. (b) For each cusp $c\in \Q\cup\{\infty\}$ of $\oooo{\Gamma(2)}$, the stabilizer in $\mu(\Gamma(2))$ is infinite cyclic with a parabolic generator. The generator is the square of a generator of $(\mu(SL_2(\Z)))_c$. For example, for $c\in\{\infty,0,1\}$ generators are as follows, \begin{eqnarray*} \begin{array}{l|l} c & \textup{generator of }(\mu(\Gamma(2)))_c \\ \hline \infty &\mu_{(12)}^2=\mu(\begin{pmatrix}1&-2\\0&1\end{pmatrix})\\ 0 &\mu_{(13)}^2=\mu(\begin{pmatrix}1&0\\2&1\end{pmatrix})\\ 1 &\mu(\begin{pmatrix}3&-2\\2&-1\end{pmatrix}) =\mu_{(12)}^{-2}\mu_{(13)}^{-2} \end{array} \end{eqnarray*} The cusps form three $\Gamma(2)$ orbits, the orbits of $0$, $1$ and $\infty$. If $c\in\Q\cup\{\infty\}$ is in the orbit of $c_0\in\{0,1,\infty\}$ with $c=\mu(B)(c_0)$ for some $B\in\Gamma(2)$ then $(\mu(\Gamma(2)))_c = \mu(B)(\mu(\Gamma(2)))_{c_0} \mu(B)^{-1}$.
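The matrix identities behind the tables in parts (a) and (b) can be checked by direct multiplication. A small Python sketch (purely illustrative, with our own variable names):

```python
import numpy as np

S12 = np.array([[1, -1], [0, 1]])   # mu_(12): tau -> tau - 1
S13 = np.array([[1, 0], [1, 1]])    # mu_(13): tau -> tau/(tau + 1)
C1 = np.array([[2, -1], [1, 0]])    # generator at the cusp 1 for mu(SL_2(Z))
B1 = np.array([[3, -2], [2, -1]])   # generator at the cusp 1 for mu(Gamma(2))

def inv(M):
    """Inverse of an integer 2x2 matrix with determinant 1, kept in integers."""
    (a, b), (c, d) = M
    return np.array([[d, -b], [-c, a]])
```

One finds `C1 @ C1 == B1` and `inv(S12) @ inv(S12) @ inv(S13) @ inv(S13) == -B1`, so the two matrices agree in $PSL_2(\Z)$, as stated in the last row of the second table.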
(c) The lift $\kappa:\H\to\C$ of the logarithm $\ln:\C-\{0\}\dashrightarrow\C$ satisfies the following. \index{$\kappa:\H\to\C$} (i) The very fact that it is a lift of the logarithm along $T$ says \begin{eqnarray*} T(\tau)=\exp(\kappa(\tau))\quad\textup{for }\tau\in\H. \end{eqnarray*} \begin{eqnarray*} \begin{xy} \xymatrix{\H \ar[drr]_\kappa \ar[rr]^T && \C-\{0,1\} \ar[d]_{\ln} \ar[dr]_\id & \\ & & \C \ar[r]_\exp & \C-\{0\}} \end{xy} \end{eqnarray*} The following table shows the values under $\kappa$ of some sets. \begin{eqnarray*} \begin{array}{l|l|l|l} \textup{set} & 1+i\R_{>0}=A(\infty,1) & A(1,0) & i\R_{>0} = A(0,\infty) \\ \hline \kappa(\textup{set}) & \R_{>0} & \R_{<0} & \pi i +\R \end{array} \end{eqnarray*} (ii) Let $\nu$ be the group homomorphism (well defined because $\mu(\Gamma(2))=\langle\mu_{(12)}^2,\mu_{(13)}^2\rangle \cong G^{free,2}$ is a free group) \begin{eqnarray*} \nu:\mu(\Gamma(2))\to (2\pi i \Z,+)\quad\textup{with } \nu(\mu_{(12)}^2)=2\pi i,\ \nu(\mu_{(13)}^2)=-2\pi i. \end{eqnarray*} Then \begin{eqnarray*} \kappa(\mu(B)(\tau)) = \kappa(\tau) + \nu(B)\quad \textup{for }B\in\Gamma(2),\tau\in \H. \end{eqnarray*} In particular \begin{eqnarray*} \kappa(\tau-2)&=& \kappa(\tau)+2\pi i,\\ \kappa(\frac{\tau}{2\tau+1})&=& \kappa(\tau)-2\pi i,\\ \kappa(\frac{3\tau-2}{2\tau-1})&=& \kappa(\tau). \end{eqnarray*} For $B\in\Gamma(2)$ and $m\in\Z$ \begin{eqnarray*} \kappa(\mu(B)(\tau))&=& \kappa(\tau)+2\pi i m \quad\textup{if }\mu(B)\textup{ is conjugate in }\Gamma(2) \textup{ to }\mu_{(12)}^{2m},\\ \kappa(\mu(B)(\tau))&=& \kappa(\tau)-2\pi i m \quad\textup{if }\mu(B)\textup{ is conjugate in }\Gamma(2) \textup{ to }\mu_{(13)}^{2m},\\ \kappa(\mu(B)(\tau))&=& \kappa(\tau) \quad\textup{if }\mu(B)\textup{ is conjugate in }\Gamma(2) \textup{ to }\mu(\begin{pmatrix}3&-2\\2&-1\end{pmatrix})^{m}. \end{eqnarray*} (iii) Consider a cusp $c=\mu(B)(1)$ in the $\mu(\Gamma(2))$ orbit of $1$, so $B\in\Gamma(2)$. If $\tau$ moves along any geodesic in $\H$ to $c$, then $\kappa(\tau)\to \nu(B)$.
(iv) Finally for $\tau\in\H$ \begin{eqnarray*} \kappa(\frac{2\tau-1}{\tau})&=& -\kappa(\tau),\\ 0&=&\pi i-\kappa(\tau-1)+\kappa(\tau) +\kappa(\frac{\tau-1}{\tau}). \end{eqnarray*} \end{theorem} {\bf Proof:} (a) and (b) are well known. (c) (i) $T(\tau)=\exp(\kappa(\tau))$ holds by definition of a lift of the logarithm along $T$. The table on the values of $\kappa$ follows from the table in Theorem \ref{tb.1} on the values of $T$ and from properties of the logarithm. (ii) Because of $\mu(\Gamma(2))= \langle \mu_{(12)}^2,\mu_{(13)}^2 \rangle\cong G^{free,2}$, it is sufficient to show \begin{eqnarray*} \kappa(\tau-2)=\kappa(\tau)+2\pi i\quad\textup{and}\quad \kappa(\frac{\tau}{2\tau+1})=\kappa(\tau)-2\pi i. \end{eqnarray*} First consider a path from $z_0\in \R_{>1}\subset \C-\{0,1\}$ to $z_0$ which goes once counterclockwise around 0 along a circle with center 0. The logarithm picks up the summand $2\pi i$. The lift to a path in $\H$ via $T:\H\to\C-\{0,1\}$, which starts at the preimage $\tau_0\in 1+i\R_{>0}$ of $z_0$, ends at $\tau_0-2$. Therefore $\kappa(\tau-2)=\kappa(\tau)+2\pi i$. Now consider a path from $z_1\in (0,1)\subset \C-\{0,1\}$ to $z_1$ which goes once clockwise around 0 along a circle with center 0. The logarithm picks up the summand $-2\pi i$. The lift to a path in $\H$ via $T:\H\to\C-\{0,1\}$, which starts at the preimage $\tau_1\in A(-1,0)$ of $z_1$, ends at $\frac{\tau_1}{2\tau_1+1}\in A(1,0)$. Therefore $\kappa(\frac{\tau}{2\tau+1})=\kappa(\tau)-2\pi i$. (iii) The logarithm is univalued near 1 with value $\ln(1)=0$. Therefore if $\tau\to 1$ along any geodesic in $\H$ then $\kappa(\tau)\to 0$.
Together with $\kappa(\mu(B)(\tau))=\kappa(\tau)+\nu(B)$ for $B\in\Gamma(2)$, this shows $$\kappa(\tau)\to\nu(B)\quad\textup{if}\quad \tau\to\mu(B)(1).$$ (iv) Observe $\mu(\begin{pmatrix}2&-1\\1&0\end{pmatrix}) =\mu_{(12)}^{-1}\mu_{(123)}$ and \begin{eqnarray*} &&\exp\left(\kappa(\frac{2\tau-1}{\tau})+\kappa(\tau)\right) = T\left(\mu_{(12)}^{-1}\mu_{(123)}(\tau)\right) T(\tau)\\ &=& g_{(12)}^{-1}(g_{(123)}(T(\tau))) T(\tau) =g_{(23)}(T(\tau)) T(\tau) = 1, \end{eqnarray*} so $\kappa(\frac{2\tau-1}{\tau})+\kappa(\tau) =2\pi im$ for some $m\in\Z$. If $\tau\to 1$ along a geodesic in $\H$ then $\kappa(\frac{2\tau-1}{\tau})+\kappa(\tau) \to 0+0=0$, so $m=0$. Similarly \begin{eqnarray*} &&\exp\left(\pi i-\kappa(\tau-1)+\kappa(\tau)+ \kappa(\frac{\tau-1}{\tau})\right)\\ &=& (-1)T(\mu_{(12)}(\tau))^{-1} T(\tau) T(\mu_{(13)}\mu_{(12)}(\tau))\\ &=& (-1)g_{(12)}(T(\tau))^{-1}T(\tau)g_{(13)}g_{(12)}(T(\tau))\\ &=& (-1)\frac{1}{1-T(\tau)} T(\tau) \frac{T(\tau)-1}{T(\tau)} = 1, \end{eqnarray*} so $\pi i-\kappa(\tau-1)+\kappa(\tau)+ \kappa(\frac{\tau-1}{\tau})=2\pi i m$ for some $m\in \Z$. If $\tau\in 1+i\R_{>0}=A(\infty,1)$, then \begin{eqnarray*} \tau-1\in i\R_{>0},\quad\textup{so }\kappa(\tau-1) \in\pi i+\R,\\ \kappa(\tau)\in \R_{>0},\\ \frac{\tau-1}{\tau}\in A(1,0),\quad\textup{so } \kappa(\frac{\tau-1}{\tau})\in \R_{<0},\\ \textup{so }\pi i-\kappa(\tau-1)+\kappa(\tau)+ \kappa(\frac{\tau-1}{\tau})\in\R, \end{eqnarray*} so $m=0$.\hfill$\Box$ \chapter{Quadratic units via continued fractions}\label{sc} \setcounter{equation}{0} \setcounter{figure}{0} \renewcommand{\theequation}{\mbox{C.\arabic{equation}}} \renewcommand{\thefigure}{\mbox{C.\arabic{figure}}} The purpose of this appendix is to prove Lemma \ref{tc.1}, which contains two statements on the units in certain rings of algebraic integers. The proof of Lemma \ref{tc.1} will be given after the proof of Theorem \ref{tc.6}.
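The recursion for the partial denominators $a_n$ and the convergents $\frac{p_n}{q_n}$ in Definition \ref{tc.2} below can be run in exact integer arithmetic when $\theta=\frac{p+\sqrt{d}}{q}$ is a quadratic irrational. The following Python sketch is only an informal illustration (the function names and the rescaling trick are ours, not part of the formal development):

```python
from math import isqrt

def cf_quadratic(p, q, d, n_terms):
    """First n_terms terms a_0, a_1, ... of the continued fraction of
    theta = (p + sqrt(d))/q, with d > 0 not a square and q > 0,
    in exact integer arithmetic; rescales first so that q | d - p*p."""
    if (d - p * p) % q != 0:
        p, d, q = p * q, d * q * q, q * q
    s = isqrt(d)
    terms = []
    for _ in range(n_terms):
        # floor((p + sqrt d)/q), valid for either sign of q
        a = (p + s) // q if q > 0 else (-(p + s + 1)) // (-q)
        terms.append(a)
        p = a * q - p              # theta_{n+1} = 1/(theta_n - a_n)
        q = (d - p * p) // q       # stays integral: q | d - p^2
    return terms

def convergents(terms):
    """Pairs (p_n, q_n) from p_n = a_n p_{n-1} + p_{n-2} and the analogous
    recursion for q_n (Definition tc.2)."""
    pm1, qm1, pn, qn = 1, 0, terms[0], 1
    out = [(pn, qn)]
    for a in terms[1:]:
        pm1, pn = pn, a * pn + pm1
        qm1, qn = qn, a * qn + qm1
        out.append((pn, qn))
    return out
```

For $\theta=\kappa_1=\frac{3+\sqrt{5}}{2}$ from Lemma \ref{tc.1} (a) (case $x=3$) this yields the expansion $2,1,1,1,\dots$, and for $\sqrt{2}$ it yields $1,2,2,2,\dots$, illustrating the periodicity statement in Theorem \ref{tc.4} (d).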
A convenient and very classical tool to prove this lemma is the theory of continued fractions as best approximations of irrational numbers, applied to the case of quadratic irrationals which are algebraic integers. Theorem \ref{tc.4} below cites standard results on the continued fractions of real irrationals. It is prepared by Definitions \ref{tc.2} and \ref{tc.3}. Lemma \ref{tc.5} provides the less well-known formulas \eqref{c.2} and \eqref{c.3} for the case of a quadratic irrational. Theorem \ref{tc.6} describes the unit group $\Z[\alpha]^*$, where $\alpha$ is a quadratic irrational and an algebraic integer, in terms of the continued fractions of $\alpha$. \begin{lemma}\label{tc.1} (a) Let $x\in \Z_{\geq 3}$, and let $\kappa_{1/2}:=\frac{x}{2}\pm \frac{1}{2}\sqrt{x^2-4}$ be the zeros of the polynomial $t^2-xt+1$, so $\kappa_1+\kappa_2=x,\kappa_1\kappa_2=1,\kappa_1^2=x\kappa_1-1$. Then \begin{eqnarray*} \Z[\kappa_1]^*=\left\{\begin{array}{ll} \{\pm\kappa_1^l\,|\, l\in\Z\}&\textup{ if }x\in\Z_{\geq 4},\\ \{\pm (\kappa_1-1)^l\,|\, l\in\Z\}&\textup{ if }x=3. \end{array}\right. \end{eqnarray*} $\kappa_1$ has norm 1. If $x=3$ then $(\kappa_1-1)^2=\kappa_1$, and $\kappa_1-1$ has norm $-1$. (b) Let $x\in\Z_{\geq 2}$, and let $\lambda_{1/2} =x^2-1\pm x\sqrt{x^2-2}$ be the zeros of the polynomial $t^2-(2x^2-2)t+1$, so $\lambda_1+\lambda_2=2x^2-2, \lambda_1\lambda_2=1, \lambda_1^2=(2x^2-2)\lambda_1-1$. Then \begin{eqnarray*} \Z[\sqrt{x^2-2}]^*=\left\{\begin{array}{ll} \{\pm \lambda_1^l\,|\, l\in\Z\}&\textup{ if }x\geq 3,\\ \{\pm (1+\sqrt{2})^l\,|\, l\in\Z\}&\textup{ if }x=2. \end{array}\right. \end{eqnarray*} $\lambda_1$ has norm 1. If $x=2$ then $(1+\sqrt{2})^2=\lambda_1$, and $1+\sqrt{2}$ has norm $-1$. \end{lemma} Theorem \ref{tc.4} is mainly taken from several theorems in \cite[1.2 and 1.3]{Ai13}, but with part (b) from \cite[I 2.]{Ca65}. It is preceded by two definitions.
According to \cite[5.9 Lagrange's Theorem]{Bu00}, this part (b) is originally due to Lagrange (1770). In fact, we will not use this part (b), but we find it enlightening. \begin{definition}\label{tc.2} Let $\theta\in\R-\Q$ be an irrational number. \index{irrational number} (a) Define recursively sequences $(a_n)_{n\geq 0}$, $(\theta_n)_{n\geq 0}$, $(p_n)_{n\geq -1},(q_n)_{n\geq -1}$, $(r_n)_{n\geq 0}$ as follows: \begin{eqnarray*} \theta_0&:=&\theta, \\ a_0&:=& \lfloor \theta_0\rfloor\in\Z,\\ \theta_n&:=& \frac{1}{\theta_{n-1}-a_{n-1}}\in\R_{>1}-\Q \quad\textup{for }n\in\N,\\ a_n&:=&\lfloor \theta_n\rfloor\in\N\quad\textup{for }n\in\N,\\ (p_{-1},p_0,q_{-1},q_0)&:=& (1,a_0,0,1),\\ p_n&:=& a_np_{n-1}+p_{n-2}\in\Z\quad\textup{for }n\in\N,\\ q_n&:=& a_nq_{n-1}+q_{n-2}\in\N\quad\textup{for }n\in\N,\\ r_n&:=& \frac{p_n}{q_n}\in\Q\quad\textup{for }n\in\Z_{\geq 0}. \end{eqnarray*} $\theta_n$ and $a_n$ are defined for all $n\in\N$, because each $\theta_{n-1}$ is in $\R-\Q$, so $\theta_{n-1}-a_{n-1}\in(0,1)$. (b) Following \cite[Notation 2.]{Ca65} define \begin{eqnarray*} \|\theta\|:=\min (\theta-\lfloor\theta\rfloor, \lceil\theta\rceil-\theta)\in(0,\frac{1}{2}). \end{eqnarray*} \end{definition} We are interested especially in the case when $\theta\in\R-\Q$ is a quadratic irrational. We recall some notation for this case. \begin{definition}\label{tc.3} Let $\theta\in\R-\Q$ be a quadratic irrational, \index{quadratic irrational} i.e. $\dim_\Q \Q[\theta]=2$. The other root of the minimal polynomial of $\theta$ is called $\theta^{conj}$, so $\theta+\theta^{conj}=:\www{a_0}\in\Q$ and $-\theta\theta^{conj}=:d_0\in\Q$. For any $\alpha=a+b\theta\in\Q[\theta]$ with $a,b\in\Q$ write $\alpha^{conj}:=a+b\theta^{conj}$. It is the algebraic conjugate of $\alpha$. The multiplicative map $$\NN:\Q[\theta]\to\Q,\quad \alpha\mapsto \alpha\alpha^{conj},$$ is the norm map. The number $\alpha$ is called \index{reduced number} {\it reduced} if $\alpha>1$ and $\alpha^{conj}\in (-1,0)$.
Recall that $\alpha$ is an algebraic integer if and only if $\alpha+\alpha^{conj}\in\Z$ and $\NN(\alpha)\in\Z$, and that in this case $\alpha$ is a unit in $\Z[\alpha]$ if and only if $\NN(\alpha)\in\{\pm 1\}$.
\end{definition}

\begin{theorem}\label{tc.4} (Classical)
In the situation of Definition \ref{tc.2} the following holds.

(a) \cite[1.2]{Ai13} $a_0\in\Z$, $a_n\in\N$ for $n\in\N$. For $n\in\Z_{\geq 0}$ the rational number $r_n$ is
\begin{eqnarray*}
r_n=a_0+\frac{1}{a_1+\frac{1}{\ddots \frac{1}{a_{n-1}+\frac{1}{a_n}}}} =:[a_0,a_1,...,a_n].
\end{eqnarray*}
It is called {\sf partial quotient} or {\sf continued fraction} \index{partial quotient}\index{continued fraction}\index{convergent} or {\sf $n$-th convergent} of $\theta$. These numbers approximate $\theta$,
\begin{eqnarray*}
r_0<r_2<r_4<...<\theta<...<r_5<r_3<r_1,\\
|\theta-r_n|<\frac{1}{q_n^2}.
\end{eqnarray*}
This allows us to write $\theta=[a_0,a_1,...]$ as an infinite continued fraction. The numerator $p_n$ and the denominator $q_n$ of $r_n$ are coprime,
\begin{eqnarray*}
\gcd(p_n,q_n)&=&1,\quad\textup{and more precisely }\\
p_nq_{n-1}-p_{n-1}q_n&=&(-1)^{n-1}\textup{ for }n\in\Z_{\geq 0}.
\end{eqnarray*}
The denominators grow strictly from $n=1$ on,
\begin{eqnarray*}
1=q_0\leq q_1<q_2<q_3<... .
\end{eqnarray*}

(b) \cite[I 2.]{Ca65} The partial quotients $r_n$ are in the following precise sense the only best approximations of $\theta$:
\begin{eqnarray*}
|p_n-q_n\theta|&=&\|q_n\theta\|\quad\textup{for }n\in\N,\\
\|q_{n+1}\theta\|&<&\|q_n\theta\|\quad\textup{for }n\in\N,\\
\|q\theta\|&\geq& \|q_n\theta\|\quad\textup{for }n\in\Z_{\geq 0} \textup{ and }q\in\N\textup{ with }q<q_{n+1},\\
|p_0-q_0\theta|&=&\|q_0\theta\|>\|q_1\theta\|\quad \textup{if }q_1>1 (\iff a_1>1),\\
|p_0-q_0\theta|&\in& (\frac{1}{2},1)\textup{ and } |p_0-q_0\theta|>\|q_1\theta\|\quad\textup{if }q_1=1 (\iff a_1=1).
\end{eqnarray*}
In any case
\begin{eqnarray*}
|p_{n+1}-q_{n+1}\theta|<|p_n-q_n\theta|\quad\textup{for }n\in\Z_{\geq 0}.
\end{eqnarray*}

(c) \cite[Theorem 1.19]{Ai13} The partial quotients $r_n$ are also in the following precise sense the only best approximations of $\theta$: A rational number $\frac{p}{q}$ with $p\in\Z,q\in\N$ and $\gcd(p,q)=1$ satisfies
\begin{eqnarray*}
|\theta-\frac{p}{q}|<\frac{1}{2q^2}&\Longrightarrow& (p,q)=(p_n,q_n) \quad\textup{for a suitable}\quad n\in\Z_{\geq 0}.
\end{eqnarray*}

(d) \cite[Theorem 1.17 and Proposition 1.18]{Ai13} The continued fraction is {\sf periodic}, \index{periodic continued fraction} i.e. there exist $k_0\in\Z_{\geq 0}$ and $k_1\in\N$ with
\begin{eqnarray*}
a_{n+k_1}=a_n\textup{ for }n\geq k_0,
\end{eqnarray*}
if and only if $\theta$ is a quadratic irrational, i.e. $\dim_\Q\Q[\theta]=2$. Then one writes $[a_0a_1...]=[a_0a_1...a_{k_0-1}\oooo{a_{k_0} a_{k_0+1}...a_{k_0+k_1-1}}]$. Furthermore, $k_0$ can then be chosen to be $0$ if and only if $\theta$ is reduced. In this case the continued fraction $[a_0a_1...]$ is called {\sf purely periodic}. \index{purely periodic continued fraction}
\end{theorem}

Lemma \ref{tc.5} records useful additional observations for the case of a quadratic irrational $\theta$. These observations are used in the proof of Theorem \ref{tc.6}, which considers an algebraic integer $\alpha\in\R-\Q$ which is a quadratic irrational. We are interested in the group $\Z[\alpha]^*$ of units in $\Z[\alpha]$. Theorem \ref{tc.6} shows how to see a generator of this group (and a quarter of its elements) in the continued fraction of a certain reduced element $\theta$ in $\Z[\alpha]$. Theorem \ref{tc.6} is not new. For example, \cite[Theorem 8.13]{Bu00} gives its main part. But the proof here is more elegant than what we found in the literature.

\begin{lemma}\label{tc.5}
Let $\theta\in\R-\Q$ be a quadratic irrational which is reduced. Let $[\oooo{a_0a_1...a_{k-1}}]$ be its purely periodic continued fraction of minimal length $k\in\N$. We consider the objects in Definition \ref{tc.2} for this $\theta$.
Then \begin{eqnarray}\label{c.1} \theta_{n+k}=\theta_n\quad\textup{for }n\in\Z_{\geq 0}. \end{eqnarray} $\theta_m$ is reduced for $m\in\{0,1,...,k-1\}$, and its purely periodic continued fraction is $[\oooo{a_m...a_{k-1}a_0...a_{m-1}}]$. Write \begin{eqnarray*} \www{a_0}&:=&\theta+\theta^{conj}\in\Q_{>0},\qquad d_0:=-\NN(\theta)=-\theta\theta^{conj}\in\Q_{>0},\\ \beta&:=&p_{k-1}-q_{k-1}\theta\in \Q[\theta]-\Q. \end{eqnarray*} Then for $n\in\Z_{\geq -1}$ \begin{eqnarray}\label{c.2} &&p_{n+k}-q_{n+k}\theta= \beta\cdot (p_n-q_n\theta) \end{eqnarray} and for $n\in\Z_{\geq 0}$ \begin{eqnarray} \theta_n= \frac{-p_{n-2}p_{n-1}+p_{n-2}q_{n-1}\www{a_0} +q_{n-2}q_{n-1}d_0+(-1)^n\theta}{\NN(p_{n-1}-q_{n-1}\theta)}. \label{c.3} \end{eqnarray} \end{lemma} {\bf Proof:} The natural generalization of the notation $[a_0,a_1,...,a_m]$ to numbers $a_0\in\R,a_1,...,a_m\in\R_{>0}$ gives for $n\in\Z_{\geq 0}$ \begin{eqnarray*} \theta&=&[a_0,a_1,...,a_{n-1},\theta_n] =\frac{\theta_np_{n-1}+p_{n-2}}{\theta_nq_{n-1}+q_{n-2}}, \end{eqnarray*} see Proposition 1.9 in \cite{Ai13}. One concludes that the continued fraction of $\theta_n$ is purely periodic, that $\theta_n=\theta_m$ if $n=kl+m$ with $l\in\Z_{\geq 0}$ and $m\in\{0,1,...,k-1\}$, and that its continued fraction is $[\oooo{a_m...a_{k-1}a_0...a_{m-1}}]$. Therefore $\theta_n$ is reduced. Recall $p_{n-1}q_{n-2}-p_{n-2}q_{n-1}=(-1)^n$. Inverting the equation above gives \begin{eqnarray*} \theta_n&=& \frac{\theta q_{n-2}-p_{n-2}}{\theta(-q_{n-1})+p_{n-1}}\\ &=&\frac{(-p_{n-2}+q_{n-2}\theta)(p_{n-1}-q_{n-1}\theta^{conj})} {\NN(p_{n-1}-q_{n-1}\theta)}\\ &=&\frac{-p_{n-2}p_{n-1}+p_{n-2}q_{n-1}\www{a_0} +q_{n-2}q_{n-1}d_0+(-1)^n\theta} {\NN(p_{n-1}-q_{n-1}\theta)}. \end{eqnarray*} The formula $\theta=\theta_{k}=\frac{\theta q_{k-2}-p_{k-2}} {\theta(-q_{k-1})+p_{k-1}}$ shows \begin{eqnarray*} (1,-\theta) \begin{pmatrix}p_{k-1}&p_{k-2}\\q_{k-1}&q_{k-2}\end{pmatrix} = (p_{k-1}-q_{k-1}\theta)(1,-\theta)=\beta(1,-\theta). 
\end{eqnarray*}
The inductive definition of $p_n$ and $q_n$ shows
\begin{eqnarray*}
\begin{pmatrix}p_n& p_{n-1}\\q_n& q_{n-1}\end{pmatrix} =\begin{pmatrix}a_0&1\\1&0\end{pmatrix} \begin{pmatrix}a_1&1\\1&0\end{pmatrix}... \begin{pmatrix}a_n&1\\1&0\end{pmatrix}.
\end{eqnarray*}
With the periodicity $a_{n+k}=a_n$ we obtain
\begin{eqnarray*}
(1,-\theta) \begin{pmatrix}p_{n+k}&p_{n-1+k}\\q_{n+k}&q_{n-1+k}\end{pmatrix} &=&(1,-\theta) \begin{pmatrix}p_{k-1}&p_{k-2}\\q_{k-1}&q_{k-2}\end{pmatrix} \begin{pmatrix}p_{n}&p_{n-1}\\q_{n}&q_{n-1}\end{pmatrix}\\
&=& \beta(1,-\theta) \begin{pmatrix}p_n&p_{n-1}\\q_n&q_{n-1}\end{pmatrix}.
\end{eqnarray*}
This gives formula \eqref{c.2}. \hfill$\Box$

\begin{theorem}\label{tc.6}
Let $\alpha\in\R-\Q$ be a quadratic irrational and an algebraic integer.\index{algebraic integer} \index{quadratic algebraic integer}

(a) There are a unique sign $\varepsilon_\alpha\in\{\pm 1\}$ and a unique number $n_\alpha\in\Z$ such that $\theta:=\varepsilon_\alpha \alpha+n_\alpha$ is reduced. Then $\Z[\alpha]=\Z[\theta]$, and any reduced element $\www\theta\in\Z[\alpha]$ with $\Z[\alpha]=\Z[\www\theta]$ satisfies $\www\theta=\theta$. We consider the objects in Definition \ref{tc.2} for this $\theta$. We define
\begin{eqnarray*}
\www{a_0}&:=&\theta+\theta^{conj}\in\N,\qquad d_0:=-\NN(\theta)=-\theta\theta^{conj}\in\N,\\
\beta&:=&p_{k-1}-q_{k-1}\theta\in \Z[\theta]-\Z
\end{eqnarray*}
as in Lemma \ref{tc.5}. Then $a_0=\www{a_0}$ and $d_0\in\{1,2,...,a_0\}$.

(b) Then $\beta$ is a unit and generates together with $-1$ the unit group $\Z[\alpha]^*$, the $l$-th power of $\beta$ is $\beta^l=p_{lk-1}-q_{lk-1}\theta$ for $l\in\Z_{\geq 0}$, and
\begin{eqnarray*}
\{\pm \beta^l\,|\, l\in\Z\}&=&\Z[\alpha]^*,\\
\{\beta^l\,|\, l\in\Z_{\geq 0}\} &=&\Z[\alpha]^*\cap\{p_{n-1}-q_{n-1}\theta\,|\, n\in\Z_{\geq 0}\}.
\end{eqnarray*}
The element $\beta$ is uniquely characterized by the following properties: (i) $-1$ and $\beta$ generate the unit group $\Z[\alpha]^*$, (ii) $|\beta|<1$, (iii) $\beta=p-q\theta$ with $p\in\Z,q\in\N$ (namely $p=p_{k-1},q=q_{k-1}$).
\end{theorem}

{\bf Proof:} (a) Choose $\varepsilon_\alpha\in\{\pm 1\}$ such that $\varepsilon_\alpha(\alpha-\alpha^{conj})>0$. Then choose $n_\alpha\in\Z$ such that $\varepsilon_\alpha\alpha^{conj}+n_\alpha\in(-1,0)$. Define $\theta:=\varepsilon_\alpha\alpha+n_\alpha$. Then $\theta^{conj}=\varepsilon_\alpha\alpha^{conj}+n_\alpha\in(-1,0)$. Also $\theta>\theta^{conj}$ and $\theta\theta^{conj}\in\Z-\{0\}$. This shows $\theta>1$, so $\theta$ is reduced. Also $\Z[\alpha]=\Z[\theta]$ is clear.

Any reduced element $\www{\theta}\in\Z[\alpha]$ with $\Z[\alpha]=\Z[\www{\theta}]$ has the shape $\www{\theta}=\www{\varepsilon_\alpha}\alpha+\www{n_\alpha}$ with $\www{\varepsilon_\alpha}\in\{\pm 1\}$ and $\www{n_\alpha}\in\Z$. Because of $\www{\theta}>1>0>\www{\theta}^{conj}$, the sign $\www{\varepsilon_\alpha}$ is the unique sign with $\www{\varepsilon_\alpha}(\alpha-\alpha^{conj})>0$, so $\www{\varepsilon_\alpha}=\varepsilon_\alpha$. Now $\www{n_\alpha}$ is the unique integer with $\varepsilon_\alpha\alpha^{conj}+\www{n_\alpha}\in(-1,0)$, so $\www{n_\alpha}=n_\alpha$. Therefore $\www{\theta}=\theta$.

We have $a_0=\lfloor \theta\rfloor=\theta+\theta^{conj}=\www{a_0}$ and $d_0=-\theta\theta^{conj}\leq a_0$, both because $\theta^{conj}\in(-1,0)$.

(b) We apply Lemma \ref{tc.5}. It tells us which of the elements $p_n-q_n\theta$ for $n\in\Z_{\geq 0}$ are units, in the following way. Consider $n\in\Z_{\geq 0}$ and write $n=lk+m$ with $l\in\Z_{\geq 0}$ and $m\in\{0,1,...,k-1\}$. Recall that $\theta_n$ is reduced, that $\theta_n=\theta_m$ because of formula \eqref{c.1}, and that $\theta_m=\theta$ only for $m=0$, because for $m\in\{1,...,k-1\}$ the purely periodic continued fractions of $\theta$ and $\theta_m$ differ.
Recall also from formula \eqref{c.2} that
$$p_{n-1}-q_{n-1}\theta=\beta^l(p_{m-1}-q_{m-1}\theta).$$
If $\NN(p_{n-1}-q_{n-1}\theta)\in\{\pm 1\}$ for some $n\in\Z_{\geq 0}$, then formula \eqref{c.3} shows that $\theta_n$ satisfies $\Z[\alpha]=\Z[\theta] =\Z[\theta_n]$. The uniqueness of $\theta$ in part (a) implies that then $\theta_n=\theta$, so $m=0$. Therefore for $n\in \Z_{\geq 0}-k\Z_{\geq 0}$, $\NN(p_{n-1}-q_{n-1}\theta)\notin\{\pm 1\}$, so then $p_{n-1}-q_{n-1}\theta$ is not a unit.

On the other hand, if $n=kl$, so $m=0$, then $\theta_n=\theta$, and formula \eqref{c.3} shows $\NN(p_{n-1}-q_{n-1}\theta)=(-1)^n$, so $p_{n-1}-q_{n-1}\theta$ is a unit. In fact, formula \eqref{c.2} shows $p_{n-1}-q_{n-1}\theta=\beta^l$. We see
\begin{eqnarray}\label{c.4}
\{p_{n-1}-q_{n-1}\theta\,|\, n\in\Z_{\geq 0}\} \cap \Z[\theta]^* =\{\beta^l\,|\, l\in\Z_{\geq 0}\}.
\end{eqnarray}
It remains to see that $-1$ and $\beta$ generate $\Z[\theta]^*$. By Dirichlet's unit theorem \cite[Ch. 2 4.3 Theorem 5]{BSh73}, the group $\Z[\alpha]^*$ is isomorphic to $\{\pm 1\}\times \Z$. It has two generators $\pm\www\beta$ with $|\pm\www\beta|<1$. They are the unique elements in $\Z[\alpha]^*$ with maximal absolute value $<1$. One of them has the shape $p-q\theta$ with $q\in\N$. This one is called $\www\beta$. Then also $p\in\N$, because $|\www\beta|=|p-q\theta|<1$ and $q\theta>1$.

{\bf 1st case,} $\theta\in(1,2)$: Then $a_0=d_0=1$, and $\theta=\frac{1+\sqrt{5}}{2}$ is the golden section with $\theta^2=\theta+1$. This case is well known. Here the continued fraction of $\theta$ is purely periodic with period $\oooo{1}$ of length one, because $a_0=1$ and
\begin{eqnarray*}
\theta_1&=&(\theta_0-a_0)^{-1}=\theta=\theta_0.
\end{eqnarray*}
Here $\beta=1-\theta=-\theta^{-1}=\theta^{conj}$. It is well known that
$$\Z[\alpha]^*=\Z[\theta]^*=\{\pm\theta^l\,|\, l\in\Z\} =\{\pm \beta^l\,|\, l\in\Z\}.$$

{\bf 2nd case,} $\theta>2$: $\www{\beta}=p-q\theta$ is a unit, so $\pm 1=\NN(p-q\theta)$.
Also $|\www{\beta}|<1$ and $\theta>2$ imply $p\geq 2q$. Therefore
\begin{eqnarray*}
|\frac{p}{q}-\theta|&=& \frac{1}{q(p-q\theta^{conj})} =\frac{1}{q(p+q|\theta^{conj}|)}\\
&<& \frac{1}{q(2q+0)}=\frac{1}{2q^2}.
\end{eqnarray*}
By Theorem \ref{tc.4} (c) there exists $n\in\Z_{\geq 0}$ with $(p,q)=(p_n,q_n)$. By \eqref{c.4} $\www{\beta}$ is a power of $\beta$, so $\www\beta=\beta$. \hfill$\Box$

\bigskip
Parts (a) and (b) of the following proof of Lemma \ref{tc.1} also serve as examples for Theorem \ref{tc.6}.
\bigskip

{\bf Proof of Lemma \ref{tc.1}:} (a) Here
\begin{eqnarray*}
x\geq 3\Rightarrow 2x>5\Rightarrow x^2-4&>&x^2-2x+1\\
\Rightarrow \sqrt{x^2-4}&\in& (x-1,x),\\
\Rightarrow \theta=\kappa_1-1 =\frac{x-2}{2}+\frac{1}{2}\sqrt{x^2-4} &\in& (x-\frac{3}{2},x-1)\\
\textup{and}\quad \theta^{conj}=\kappa_2-1 =\frac{x-2}{2}-\frac{1}{2}\sqrt{x^2-4}&\in& (-1,-\frac{1}{2}).
\end{eqnarray*}
Observe
\begin{eqnarray*}
\theta+\theta^{conj}=x-2,\quad \theta\theta^{conj}=-x+2.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\theta_0&=&\theta,\quad a_0=\lfloor \theta_0\rfloor =x-2,\\
\theta_1&=&(\theta_0-a_0)^{-1} = \frac{\theta^{conj}-(x-2)}{(\theta-(x-2)) (\theta^{conj}-(x-2))} \\
&=& \frac{-\theta}{-x+2}=\frac{\theta}{x-2},\\
a_1&=&\lfloor \theta_1\rfloor = 1, \\
\theta_2&=& (\theta_1-a_1)^{-1} =\frac{x-2}{\theta-(x-2)} =\frac{(x-2)(\theta^{conj}-(x-2))}{-x+2}=\theta,\\
\theta&=&[\oooo{x-2,1}].
\end{eqnarray*}
The continued fraction of $\theta$ is purely periodic with period $\oooo{x-2,1}$ of length 2 if $x\geq 4$ and purely periodic with period $\oooo{1}$ if $x=3$. The norm of $p-q\theta$ for $p,q\in\Z$ is
\begin{eqnarray*}
\NN(p-q\theta):=(p-q\theta)(p-q\theta^{conj}) =p^2-q(p+q)(x-2)\in\Z.
\end{eqnarray*}
It is $\pm 1$ if and only if $p-q\theta$ is a unit.
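The following table of convergents and norms can be reproduced with a short exact computation. This sketch (ours, illustrative only; Python is assumed) runs the recursion $p_n=a_np_{n-1}+p_{n-2}$, $q_n=a_nq_{n-1}+q_{n-2}$ for the period $\oooo{x-2,1}$ and checks the norms $\NN(p_n-q_n\theta)=p_n^2-q_n(p_n+q_n)(x-2)$ for small values of $x$; the proof itself argues for all $x\geq 3$:

```python
# Hedged spot-check (not part of the proof): reproduce the table of
# convergents p_n/q_n and the norms N(p_n - q_n*theta) = p^2 - q(p+q)(x-2)
# for the continued fraction [x-2, 1, x-2, 1, ...] of theta = kappa_1 - 1.

def convergents(partial_quotients):
    """Return the pairs (p_n, q_n) for n >= 0, via p_n = a_n p_{n-1} + p_{n-2}."""
    p, q = [1, partial_quotients[0]], [0, 1]   # (p_{-1}, p_0), (q_{-1}, q_0)
    for a in partial_quotients[1:]:
        p.append(a * p[-1] + p[-2])
        q.append(a * q[-1] + q[-2])
    return list(zip(p[1:], q[1:]))

for x in range(3, 12):
    table = convergents([x - 2, 1, x - 2, 1])
    norms = [p * p - q * (p + q) * (x - 2) for p, q in table]
    # entries of the table in the proof
    assert table == [(x - 2, 1), (x - 1, 1), (x * x - 2 * x, x - 1),
                     (x * x - x - 1, x)]
    assert norms == [-x + 2, 1, -x + 2, 1]
```

For $x=3$ the period collapses to $\oooo{1}$, but the listed formulas for $(p_n,q_n)$ and the norms still hold, as the check confirms.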
\begin{eqnarray*}
\begin{array}{lllll}
n & 0 & 1 & 2 & 3 \\
a_n & x-2 & 1 & x-2 & 1 \\
(p_n,q_n) & (x-2,1) & (x-1,1) & (x^2-2x,x-1) & (x^2-x-1,x) \\
\NN(p_n-q_n\theta) & -x+2 & 1 & -x+2 & 1
\end{array}
\end{eqnarray*}
If $x=3$ then $\beta=p_0-q_0\theta=1-\theta$ in the notation of Lemma \ref{tc.5}, so $\Z[\theta]^*$ is generated by $(-1)$ and $\beta$, or equivalently by $(-1)$ and $-\beta^{-1}=\theta=\kappa_1-1$, so
$$\Z[\theta]^*=\{\pm (1-\theta)^l\,|\, l\in\Z\} =\{\pm(\kappa_1-1)^l\,|\, l\in\Z\}.$$
This is also consistent with the 1st case in the proof of part (b) of Theorem \ref{tc.6}.

If $x\geq 4$ then $\beta=p_1-q_1\theta=x-1-\theta$ in the notation of Lemma \ref{tc.5}, so $\Z[\theta]^*$ is generated by $(-1)$ and $\beta$, or equivalently by $(-1)$ and $\beta^{-1}=x-1-\theta^{conj}=\kappa_1$, so
$$\Z[\theta]^*=\{\pm(x-1-\theta)^l\,|\, l\in\Z\} =\{\pm\kappa_1^l\,|\, l\in\Z\}.$$
This proves part (a) of Lemma \ref{tc.1}.

(b) The case $x=2$ is treated separately and first. This case is well known. Then $\theta=1+\sqrt{2}\in(2,3)$, $\theta^{conj}=1-\sqrt{2}\in (-1,0)$, so $a_0=2$. The continued fraction of $\theta$ is purely periodic with period $\oooo{2}$ of length one, because $a_0=2$ and
\begin{eqnarray*}
\theta_1=(\theta_0-a_0)^{-1}=(\sqrt{2}-1)^{-1}=\theta_0.
\end{eqnarray*}
The element
$$p_0-q_0\theta=2-\theta=1-\sqrt{2}=\theta^{conj}$$
is a unit. This and Theorem \ref{tc.6} (b) show
$$\Z[\alpha]^*=\Z[\theta]^*=\{\pm\theta^l\,|\, l\in\Z\} =\{\pm (1+\sqrt{2})^l\,|\, l\in\Z\}.$$

Now we treat the cases $x\geq 3$. Here
\begin{eqnarray*}
x\geq 3\Rightarrow x^2-2>x^2-x+\frac{1}{4} \Rightarrow \sqrt{x^2-2}>x-\frac{1}{2},\\
\Rightarrow \theta=(x-1)+\sqrt{x^2-2} \in(2x-\frac{3}{2},2x-1)\\
\textup{and}\quad \theta^{conj}=(x-1)-\sqrt{x^2-2} \in(-1,-\frac{1}{2}).
\end{eqnarray*}
Observe
\begin{eqnarray*}
\theta+\theta^{conj}=2x-2,\quad \theta\theta^{conj}=-2x+3.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\theta_0&=&\theta,\quad a_0=\lfloor \theta_0\rfloor=2x-2,\\
\theta_1&=&(\theta_0-a_0)^{-1}=...= \frac{\theta}{2x-3}\in(1,2),\quad a_1=1,\\
\theta_2&=&(\theta_1-a_1)^{-1}=...= \frac{\theta-1}{2}\in (x-2,x-1),\quad a_2=x-2,\\
\theta_3&=&(\theta_2-a_2)^{-1}=...= \frac{\theta-1}{2x-3}\in(1,2),\quad a_3=1,\\
\theta_4&=&(\theta_3-a_3)^{-1}=...= \theta=\theta_0,\\
\theta&=&[\oooo{2x-2,1,x-2,1}].
\end{eqnarray*}
The continued fraction of $\theta$ is purely periodic with period $\oooo{2x-2,1,x-2,1}$ of length four. The norm of $p-q\theta$ is
$$\NN(p-q\theta)=(p-q\theta)(p-q\theta^{conj}) =p^2+q^2-q(p+q)(2x-2).$$
It is $\pm 1$ if and only if $p-q\theta$ is a unit.
\begin{eqnarray*}
\begin{array}{lllll}
n & 0 & 1 & 2 & 3 \\
a_n & 2x-2 & 1 & x-2 & 1 \\
(p_n,q_n) & (2x-2,1) & (2x-1,1) & (2x^2-3x,x-1) & (2x^2-x-1,x) \\
\NN(p_n-q_n\theta) & -2x+3 & 2 & -2x+3 & 1
\end{array}
\end{eqnarray*}
We conclude with Theorem \ref{tc.6} (and with the notation of Lemma \ref{tc.5}) that
$$\beta=p_3-q_3\theta=(2x^2-x-1)-x\theta =(x^2-1)-x\sqrt{x^2-2}=\lambda_2$$
together with $(-1)$ generates $\Z[\sqrt{x^2-2}]^*=\Z[\theta]^*$. Therefore $\lambda_1$ together with $(-1)$ also generates $\Z[\sqrt{x^2-2}]^*$. This proves part (b) of Lemma \ref{tc.1}. \hfill$\Box$

\begin{remark}\label{tc.7}
In the situation of Theorem \ref{tc.6}, Satz 9.5.2 in \cite{Ko97} states that the unit group $\Z[\theta]^*$ is generated by $-1$ and $q_{k-2}+q_{k-1}\theta$. This is consistent with Theorem \ref{tc.6} because of the following.
Here
\begin{eqnarray*}
\theta&=&\frac{\theta_kp_{k-1}+p_{k-2}}{\theta_kq_{k-1}+q_{k-2}} =\frac{\theta p_{k-1}+p_{k-2}}{\theta q_{k-1}+q_{k-2}},\\
\textup{so}\quad 0&=& q_{k-1}\theta^2-(p_{k-1}-q_{k-2})\theta-p_{k-2}, \\
\textup{but also}\quad 0&=&\theta^2-\www{a_0}\theta-d_0,\\
\textup{so}\quad a_0&=&\www{a_0}=\frac{p_{k-1}-q_{k-2}}{q_{k-1}},\quad d_0=\frac{p_{k-2}}{q_{k-1}},\\
p_{k-1}-q_{k-1}\theta^{conj} &=&p_{k-1}-q_{k-1}(a_0-\theta) =q_{k-2}+q_{k-1}\theta.
\end{eqnarray*}
\end{remark}

\chapter{Powers of quadratic units}\label{sd}
\setcounter{equation}{0}
\setcounter{figure}{0}
\renewcommand{\theequation}{\mbox{D.\arabic{equation}}}
\renewcommand{\thefigure}{\mbox{D.\arabic{figure}}}

The following definition and lemma treat powers of units of norm 1 \index{power of a quadratic unit}\index{quadratic unit} in the rings of integers of quadratic number fields, though these powers appear explicitly only in Lemma \ref{td.2} (c). Lemma \ref{td.2} will be used in the proof of Theorem \ref{t5.18}.

\begin{definition}\label{td.1}
(a) Define the polynomials $b_l(a)\in\Z[a]$ for $l\in\Z_{\geq 0}$ by the following recursion.
\begin{eqnarray}
b_0:=0,\quad b_1:=1, \quad b_l:=ab_{l-1}-b_{l-2} \quad\textup{for }l\in\Z_{\geq 2}.\label{d.1}
\end{eqnarray}

(b) Define for $l\in\Z_{\geq 0}$ the polynomial $r_l\in\Z[a]$ and for $l\in\N$ the rational functions $q_{0,l},q_{1,l},q_{2,l}\in\Q(t)$, \index{$q_{0,l},\ q_{1,l},\ q_{2,l}\in\Q(t)$}
\begin{eqnarray*}
r_0&:=&0,\\
r_l&:=& -ab_l+2b_{l-1}+2\quad\textup{for }l\in\N,\\
q_{0,l}&:=& \frac{b_l-b_{l-1}}{b_l},\\
q_{1,l}&:=& \frac{b_l-b_{l-1}-1}{r_lb_l},\\
q_{2,l}&:=& q_{0,l}-2q_{1,l}.
\end{eqnarray*}

(c) A notation: For two polynomials $f_1,f_2\in\Z[a]$, $(f_1,f_2)_{\Z[a]}:=\Z[a]f_1+\Z[a]f_2\subset\Z[a]$ denotes the ideal generated by $f_1$ and $f_2$. Remark: If $(f_1,f_2)_{\Z[a]}=\Z[a]$, then $\gcd(f_1(c),f_2(c))=1$ for any integer $c\in\Z$.
(d) For $a\in\Z_{\leq -3}\cup\Z_{\geq 3}$ define $\kappa_a:=\frac{a}{2}+\frac{1}{2}\sqrt{a^2-4}$ and $\kappa_a^{conj}:=\frac{a}{2}-\frac{1}{2}\sqrt{a^2-4}$ as the zeros of the polynomial $t^2-at+1$, so that $\kappa_a+\kappa_a^{conj}=a$, $\kappa_a\kappa_a^{conj}=1$, $\kappa_a^2=a\kappa_a-1$. They are algebraic integers and units with norm 1.
\end{definition}

The following table gives the first twelve of the polynomials $b_l(a)$. The software Maxima \cite{Maxima22} claims that the factors in the products are irreducible as polynomials in $\Q[a]$. We will not use this claim.
\begin{eqnarray*}
b_0 &=& 0\\
b_1 &=& 1\\
b_2 &=& a\\
b_3 &=& (a-1)(a+1)\\
b_4 &=& a(a^2-2)\\
b_5 &=& (a^2-a-1)(a^2+a-1)\\
b_6 &=& (a-1)a(a+1)(a^2-3)\\
b_7 &=& (a^3-a^2-2a+1)(a^3+a^2-2a-1)\\
b_8 &=& a(a^2-2)(a^4-4a^2+2)\\
b_9 &=& (a-1)(a+1)(a^3-3a-1)(a^3-3a+1)\\
b_{10} &=& a(a^2-a-1)(a^2+a-1)(a^4-5a^2+5)\\
b_{11} &=& (a^5-a^4-4a^3+3a^2+3a-1)(a^5+a^4-4a^3-3a^2+3a+1)
\end{eqnarray*}

\begin{lemma}\label{td.2}
(a) For any $l\in\N$
\begin{eqnarray}
&&1= b_l^2-ab_lb_{l-1}+b_{l-1}^2 =b_l^2-b_{l+1}b_{l-1},\label{d.2}\\
&&(b_{l-1},b_l)_{\Z[a]}= (b_l-b_{l-1},b_l)_{\Z[a]}=\Z[a],\label{d.3}\\
&&r_l=\left\{\begin{array}{ll} (2-a)(b_{(l+1)/2}+b_{(l-1)/2})^2&\textup{ for } l\textup{ odd},\\ (2-a)(a+2)b_{l/2}^2&\textup{ for }l\textup{ even,}\\ &(\textup{also }l=0) \end{array}\right. \label{d.4}\\
&&(r_{l-1}/(2-a),r_l/(2-a))_{\Z[a]}=\Z[a],\label{d.5}
\end{eqnarray}
and
\begin{eqnarray}
q_{2,l}=1-\frac{r_{l-1}/(2-a)}{r_l/(2-a)}.\label{d.6}
\end{eqnarray}

(b) For $a\in\Z_{\leq -3}$
\begin{eqnarray*}
b_l(a)\in(-1)^{l-1}\N\quad\textup{for}\quad l\geq 1,\\
b_1(a)=1,\quad b_2(a)=a,\quad |b_2(a)+b_1(a)|=|a|-1,\\
|b_l(a)|>2|b_{l-1}(a)|\geq |b_{l-1}(a)|+1\quad\textup{for}\quad l\geq 2,\\
|b_{l+1}(a)+b_{l}(a)|>|b_{l}(a)+b_{l-1}(a)| \quad\textup{for}\quad l\geq 1.
\end{eqnarray*} For $a\in\Z_{\geq 3}$ \begin{eqnarray*} b_l(a)>0\quad\textup{for}\quad l\geq 1,\\ b_1(a)=1,\quad b_2(a)=a,\quad b_2(a)+b_1(a)=a+1,\\ b_l(a)>2b_{l-1}(a)\quad\textup{for}\quad l\geq 1,\\ b_{l+1}(a)+b_l(a)>2(b_l(a)+b_{l-1}(a)) \quad\textup{for}\quad l\geq 1. \end{eqnarray*} (c) Consider $a\in\Z_{\leq -3}\cup\Z_{\geq 3}$ and $l\in\N$. Then \begin{eqnarray} \kappa_a^l&=& b_l(a)\kappa_a-b_{l-1}(a),\label{d.7}\\ \kappa_a&=& (1-q_{0,l}(a)) + (q_{0,l}(a)-r_l(a)q_{1,l}(a)) \kappa_a^l, \label{d.8}\\ \kappa_a^l&=&\frac{2-r_l(a)}{2}+\frac{1}{2}\sqrt{r_l(a)(r_l(a)-4)}, \label{d.9} \end{eqnarray} so $\kappa_a^l$ is a zero of the polynomial $t^2-(2-r_l(a))t+1$. \end{lemma} {\bf Proof:} (a) The recursive definition \eqref{d.1} of $b_l$ shows immediately the equality of the middle and right term in \eqref{d.2}, it shows $$b_l^2-ab_lb_{l-1}+b_{l-1}^2 =b_{l-1}^2-ab_{l-1}b_{l-2}+b_{l-2}^2,$$ and it shows $b_1^2-ab_1b_0+b_0^2=1$. This proves \eqref{d.2}. It implies \eqref{d.3}. The sequence $(r_l)_{l\in\N}$ satisfies the recursion \begin{eqnarray*} r_0=0,\quad r_1=2-a,\quad r_l=ar_{l-1}-r_{l-2}+2(2-a)\quad\textup{for }l\geq 2. \end{eqnarray*} For $l=2$ one verifies this immediately. For $l\geq 3$ it follows inductively with \eqref{d.1}, \begin{eqnarray*} r_l&=&-a(ab_{l-1}-b_{l-2})+2(ab_{l-2}-b_{l-3})+2\\ &=& a(-ab_{l-1}+2b_{l-2}+2)-(-ab_{l-2}+2b_{l-3}+2)+2(2-a)\\ &=&ar_{l-1}-r_{l-2}+2(2-a). \end{eqnarray*} For $l=1$ and $l=0$ \eqref{d.4} is obvious. 
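As a numerical cross-check (ours, not used in the argument; Python with exact integer arithmetic is assumed), the recursion for $r_l$ and the closed forms \eqref{d.4} can be verified for many integer values of $a$ and $l$:

```python
# Hedged spot-check of the recursion r_l = a*r_{l-1} - r_{l-2} + 2(2-a)
# and the closed forms (d.4), for small integer values of a and l.

def b_seq(a, n):
    """b_0, ..., b_n with b_0 = 0, b_1 = 1, b_l = a*b_{l-1} - b_{l-2}."""
    b = [0, 1]
    for _ in range(2, n + 1):
        b.append(a * b[-1] - b[-2])
    return b

for a in range(-6, 7):
    b = b_seq(a, 12)
    r = [0] + [-a * b[l] + 2 * b[l - 1] + 2 for l in range(1, 12)]
    for l in range(2, 12):
        # the recursion established above
        assert r[l] == a * r[l - 1] - r[l - 2] + 2 * (2 - a)
        # the closed forms (d.4)
        if l % 2 == 1:
            k = (l - 1) // 2
            assert r[l] == (2 - a) * (b[k + 1] + b[k]) ** 2
        else:
            k = l // 2
            assert r[l] == (2 - a) * (a + 2) * b[k] ** 2
```

Since both sides of \eqref{d.4} are polynomials in $a$ of bounded degree, agreement on sufficiently many integer values would in fact prove the identity, but we keep the check purely illustrative.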
For odd $l=2k+1\geq 3$ as well as even $l=2k\geq 2$, one verifies \eqref{d.4} inductively with this recursion and with \eqref{d.2}, for odd $l=2k+1\geq 3$:
\begin{eqnarray*}
&&(2-a)(b_{k+1}+b_k)^2-ar_{2k}+r_{2k-1}-2(2-a)\\
&=&(2-a)[ ((ab_k-b_{k-1})+b_k)^2]\\
&&-(2-a)[a(a+2)b_{k}^2-(b_k+b_{k-1})^2+2]\\
&=&(2-a)[2b_k^2-2ab_kb_{k-1}+2b_{k-1}^2-2]\\
&=& 0 \quad (\textup{with }\eqref{d.2}),
\end{eqnarray*}
for even $l=2k\geq 2$:
\begin{eqnarray*}
&& (2-a)(a+2)b_k^2-ar_{2k-1}+r_{2k-2}-2(2-a)\\
&=& (2-a)[(a+2)b_k^2]\\
&&-(2-a)[a(b_k+b_{k-1})^2-(a+2)b_{k-1}^2+2] \\
&=&(2-a)[2b_k^2-2ab_kb_{k-1}+2b_{k-1}^2-2]\\
&=&0 \quad (\textup{with }\eqref{d.2}).
\end{eqnarray*}
\eqref{d.5} claims for $k\geq 0$
\begin{eqnarray*}
((b_{k+1}+b_k)^2,(a+2)b_k^2)_{\Z[a]} =\Z[a]\\
\textup{and}\quad ((a+2)b_{k+1}^2,(b_{k+1}+b_k)^2)_{\Z[a]}=\Z[a].
\end{eqnarray*}
The following {\bf claim} is basic: For $f_1,f_2,f_3\in\Z[a]$
\begin{eqnarray*}
(f_1,f_3)_{\Z[a]}=(f_2,f_3)_{\Z[a]}=\Z[a]\quad\Rightarrow\quad (f_1f_2,f_3)_{\Z[a]}=\Z[a].
\end{eqnarray*}
To see this claim consider $1=\alpha_1f_1+\alpha_2f_3$, $1=\beta_1f_2+\beta_2f_3$. Then
\begin{eqnarray*}
1&=& (\alpha_1f_1+\alpha_2f_3)(\beta_1f_2+\beta_2f_3)\\
&=& \alpha_1\beta_1f_1f_2+\alpha_2\beta_2f_3^2 + \alpha_1\beta_2f_1f_3+\alpha_2\beta_1f_2f_3.
\end{eqnarray*}
The claim and \eqref{d.3} show that for \eqref{d.5} it is sufficient to prove
$$(a+2,b_{k+1}+b_k)_{\Z[a]}=\Z[a].$$
This follows inductively in $k$ with
$$b_{k+1}+b_k=(a+2)b_k-(b_k+b_{k-1})\quad\textup{and}\quad b_1+b_0=1.$$
Finally, we calculate $q_{2,l}$:
\begin{eqnarray*}
q_{2,l}&=& (r_lb_l)^{-1}(r_l(b_l-b_{l-1})-2(b_l-b_{l-1}-1))\\
&=& 1+(r_lb_l)^{-1}(-(-ab_l+2b_{l-1}+2)b_{l-1}-2b_l+2b_{l-1}+2)\\
&=& 1+(r_lb_l)^{-1}(b_l(ab_{l-1}-2)-2(b_{l-1}^2-1))\\
&=& \left\{\begin{array}{ll} 1+(r_l)^{-1}((ab_{l-1}-2)-2b_{l-2}) \quad(\textup{with \eqref{d.2}})&\textup{ for }l\geq 2,\\ 1&\textup{ for }l=1\end{array}\right. \\
&=& 1-r_l^{-1}r_{l-1}.
\end{eqnarray*} (b) All inequalities and signs follow inductively with \eqref{d.1}. (c) \eqref{d.7} is true for $l=1$. It follows inductively in $l$ with the following calculation, which uses $\kappa_a^2=a\kappa_a-1$. \begin{eqnarray*} \kappa_a^{l+1}&=& \kappa_a(b_l(a)\kappa_a-b_{l-1}(a))\\ &=& b_l(a) (a\kappa_a-1)-b_{l-1}(a)\kappa_a\\ &=& (ab_l(a)-b_{l-1}(a))\kappa_a-b_l(a)\\ &=&b_{l+1}(a)\kappa_a-b_l(a). \end{eqnarray*} The right hand side of \eqref{d.8} is $$\frac{b_{l-1}(a)}{b_l(a)} + \frac{1}{b_l(a)}\kappa_a^l$$ which is $\kappa_a$ by inverting \eqref{d.7}. Writing $\kappa_a=\frac{a}{2}+\frac{1}{2}\sqrt{a^2-4}$ gives for $\kappa_a^l$ $$\kappa_a^l=\frac{ab_l(a)-2b_{l-1}(a)}{2} +\frac{b_l(a)}{2}\sqrt{a^2-4}.$$ One verifies that this equals the right hand side of \eqref{d.9}. \hfill$\Box$ \end{appendix} \begin{thebibliography}{AGV1} \bibitem[AC73]{AC73} N. A'Campo: \quad{} Le nombre de Lefschetz d'une monodromie. Indagationes Math. {\bf 35} (1973), 113--118. \bibitem[AC75-1]{AC75-1} N. A'Campo: \quad{} Le groupe de monodromie du d\'eploiement des singularit\'es isol\'ees de courbes planes I. Math. Ann {\bf 213} (1975), 1--32. \bibitem[AC75-2]{AC75-2} N. A'Campo: \quad{} Le groupe de monodromie du d\'eploiement des singularit\'es isol\'ees de courbes planes II. In: Proc. Int. Congr. Math., Vancouver 1974, Vol. I, 1975, 395--404. \bibitem[Ai13]{Ai13} M. Aigner: \quad{} Markov's theorem and 100 years of the uniqueness conjecture. Springer, 2013. \bibitem[AT20]{AT20} D. Aramaki, A. Takahashi: \quad{} Maximally-graded matrix factorizations for an invertible polynomial of chain type. Adv. Math. {\bf 373} (2020), 107320, 23 pages. \bibitem[Ar68]{Ar68} V.I. Arnold: \quad On braids of algebraic functions and the cohomology of the "swallow tails". Russian Math. Surveys {\bf 23.4} (1968), 247--248. \bibitem[Ar73-1]{Ar73-1} V.I. Arnold: \quad Remarks on the method of stationary phase and on the Coxeter numbers. Russian Math. Surveys {\bf 28.5} (1973), 19--48. 
\bibitem[Ar73-2]{Ar73-2} V.I. Arnold: \quad A classification of the unimodal critical points of functions. Funct. Anal. Appl. {\bf 7} (1973), 230--231.

\bibitem[AGLV98]{AGLV98} V.I. Arnold, V.V. Goryunov, O.V. Lyashko, V.A. Vasil'ev: \quad{} Singularity theory I. Springer, 1998.

\bibitem[AGV85]{AGV85} V.I. Arnold, S.M. Gusein-Zade, A.N. Varchenko: \quad{} Singularities of differentiable maps, volume I. Birkh\"auser, Boston, 1985.

\bibitem[AGV88]{AGV88} V.I. Arnold, S.M. Gusein-Zade, A.N. Varchenko: \quad{}Singularities of differentiable maps, volume II. Birkh\"auser, Boston, 1988.

\bibitem[Ar25]{Ar25} E. Artin: \quad Theorie der Z\"opfe. Abh. Math. Sem. Univ. Hamburg {\bf 4} (1925), 47--72.

\bibitem[Ar47]{Ar47} E. Artin: \quad Theory of braids. Ann. of Math. \textbf{48} (1947), 101--126.

\bibitem[BK96]{BK96} L. Balke, R. Kaenders: \quad{} On a certain type of Coxeter-Dynkin diagrams of plane curve singularities. Topology {\bf 35.1} (1996), 39--54.

\bibitem[BH19]{BH19} S. Balnojan, C. Hertling: \quad Real Seifert forms and polarizing forms of Steenbrink mixed Hodge structures. Bull. Braz. Math. Soc., New Series {\bf 50.1} (2019), 233--274.

\bibitem[BH20]{BH20} S. Balnojan, C. Hertling: \quad Conjectures on spectral numbers for upper triangular matrices and for singularities. Math. Phys. Anal. Geom. (2020) 23:5, https://doi.org/10.1007/s11040-019-9327-3, 49 pages.

\bibitem[BBGKK19]{BBGKK19} B. Baumeister, K.-U. Bux, F. G\"otze, D. Kielak, H. Krause: \quad Non-crossing partitions. In: EMS Ser. Congr. Rep., EMS Publishing House, Z\"urich, 2019, pp. 235--274.

\bibitem[BDSW14]{BDSW14} B. Baumeister, M. Dyer, C. Stump, P. Wegener: \quad A note on the transitive Hurwitz action on decompositions of parabolic Coxeter elements. Proc. Amer. Math. Soc. Ser. B {\bf 1} (2014), 149--154.

\bibitem[BWY19]{BWY19} B. Baumeister, P. Wegener, S. Yahiatene: \quad{} Extended Weyl groups and Hurwitz transitivity and weighted projective lines II: The wild case. Preprint arXiv:2104.07075v2.
\bibitem[Be83]{Be83} A.F. Beardon: \quad The geometry of discrete groups. Graduate texts in mathematics {\bf 91}, Springer, 1983. \bibitem[Bi75]{Bi75} J.S. Birman: \quad Braids, links and mapping class groups. Ann. of Math. Studies \textbf{82}, Princeton Univ. Press, 1975. \bibitem[BH73]{BH73} J.S. Birman, H.M. Hilden: \quad On isotopies of homeomorphisms of Riemann surfaces. Ann. of Math. {\bf 97} (1973), 424--439. \bibitem[BB05]{BB05} A. Bj\"orner, F. Brenti: \quad Combinatorics of Coxeter groups. Graduate Texts in Mathematics \textbf{231}, Springer, New York, 2005. \bibitem[BSh73]{BSh73} Z.I. Borevich, I.R. Shafarevich: \quad Number theory. Academic Press Inc., 1973. \bibitem[Br70]{Br70} E. Brieskorn: \quad{} Die Monodromie der isolierten Singularit\"aten von Hyperfl\"achen. Manuscripta math. {\bf 2} (1970), 103--160. \bibitem[Br83]{Br83} E. Brieskorn: \quad{} Milnor lattices and Dynkin diagrams. In: Proceedings of Symposia in Pure Mathematics, \textbf{40.1} (1983), 153--165. \bibitem[Bu00]{Bu00} E.B. Burger: \quad{} Exploring the number jungle. Student mathematical library, Volume 8, American Mathematical Society, 2000. \bibitem[Ca65]{Ca65} J.W.S. Cassels: \quad{} An introduction to diophantine approximation. Cambridge University Press, 1965. \bibitem[CV93]{CV93} S. Cecotti, C. Vafa:\quad{} On classification of $N=2$ supersymmetric theories. Commun. Math. Phys. \textbf{158} (1993), 569--644. \bibitem[Ch81]{Ch81} S.V. Chmutov: \quad{} The monodromy groups of singularities of functions of two variables. Funct. Anal. Applications {\bf 15.1} (1981), 61--66. \bibitem[Ch82]{Ch82} S.V. Chmutov: \quad{} Monodromy groups of critical points of functions. Invent. Math. \textbf{67} (1982), 123--131. \bibitem[Ch83]{Ch83} S.V. Chmutov: \quad{} Monodromy groups of critical points of functions II. Invent. Math. \textbf{73} (1983), 491--510. \bibitem[CDG24]{CDG24} G. Cotti, B. Dubrovin, D. Guzzetti: \quad{} Helix Structures in Quantum Cohomology of Fano Varieties. 
\end{thebibliography} \printindex \end{document}
2412.16606v1
http://arxiv.org/abs/2412.16606v1
Genus embeddings of complete graphs minus a matching
\documentclass[12pt]{article} \usepackage{amsmath,amssymb,amsfonts,amsthm,graphicx,subcaption} \usepackage[margin=1in]{geometry} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{problem}[theorem]{Problem} \newtheorem{corollary}[theorem]{Corollary} \title{Genus embeddings of complete graphs minus a matching} \author{Timothy Sun\\Department of Computer Science\\San Francisco State University} \date{} \newcommand{\Z}{\mathbb{Z}} \begin{document} \maketitle \begin{abstract} We show that for all $n \equiv 0 \pmod{6}$, $n \geq 18$, there is an orientable triangular embedding of the octahedral graph on $n$ vertices that can be augmented with handles to produce a genus embedding of the complete graph of the same order. For these values of $n$, the intermediate embeddings of the construction also determine some surface crossing numbers of the complete graph on $n$ vertices and the genus of all graphs on $n$ vertices and minimum degree $n-2$. \end{abstract} \section{Introduction} Let $K_n$ denote the complete graph on $n$ vertices, and for even $n$, let $O_n = K_n-(n/2)K_2$ denote the octahedral graph on $n$ vertices. The orientable genus of the complete graphs \cite{Ringel-MapColor} and the octahedral graphs \cite{JungermanRingel-Octa, Sun-Index2} have the formulas $$\gamma(K_n) = \left\lceil \frac{(n-3)(n-4)}{12} \right\rceil$$ and $$\gamma(O_n) = \left\lceil \frac{(n-2)(n-6)}{12} \right\rceil,$$ which match a lower bound derived from Euler's polyhedral equation. For certain residues $n$ modulo $12$, all of the faces in these genus embeddings are triangular. It is perhaps not too surprising that such embeddings exist for sufficiently large $n$ given that these graphs have $\Theta(n^3)$ 3-cycles, while any triangular embedding would need only $\Theta(n^2)$ triangular faces. 
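Both genus formulas agree with the Euler lower bound $\lceil (|E|-3|V|+6)/6 \rceil$ recalled in Section 2; substituting $|E| = \binom{n}{2}$ (respectively $\binom{n}{2} - n/2$) makes the agreement an algebraic identity. A quick numerical check (the helper names below are our own, not the paper's):

```python
# Check (our own sketch) that the stated genus formulas for K_n and O_n
# equal the Euler lower bound ceil((|E| - 3|V| + 6)/6).
def ceil_div(a, b):
    """Ceiling of a/b for integers, b > 0."""
    return -(-a // b)

def euler_lower_bound(num_vertices, num_edges):
    return ceil_div(num_edges - 3 * num_vertices + 6, 6)

def genus_complete(n):
    """Stated genus formula for the complete graph K_n."""
    return ceil_div((n - 3) * (n - 4), 12)

def genus_octahedral(n):
    """Stated genus formula for the octahedral graph O_n (n even)."""
    return ceil_div((n - 2) * (n - 6), 12)

for n in range(3, 200):
    assert genus_complete(n) == euler_lower_bound(n, n * (n - 1) // 2)
    if n % 2 == 0 and n >= 4:
        # O_n = K_n minus a perfect matching, so n/2 fewer edges.
        assert genus_octahedral(n) == euler_lower_bound(
            n, n * (n - 1) // 2 - n // 2)
```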
Mohar and Thomassen \cite{MoharThomassen} ask whether sufficiently dense graphs are guaranteed to have triangular embeddings: \begin{problem}[Mohar and Thomassen \cite{MoharThomassen}] Does there exist a constant $c \in (0, 1)$ such that every graph $G = (V,E)$ of minimum degree at least $c|V|$ satisfying $|E| \equiv 3|V| \pmod{6}$ triangulates an orientable surface? \label{prob-mohar} \end{problem} While the author believes that this question has a positive answer, it is not close to being resolved in the affirmative. The present work makes progress on the modest case where the minimum degree is $n-2$. Jungerman and Ringel \cite{JungermanRingel-Octa} tackled the $(n-2)$-regular case by constructing triangular embeddings of the octahedral graphs for $n \equiv 0, 2, 6, 8 \pmod{12}$ using ``orientable cascades,'' i.e., nonorientable current graphs with orientable derived embeddings. The remaining even residues $n \equiv 4, 10 \pmod{12}$ were later solved by Sun \cite{Sun-Index2}. The present work strengthens part of Jungerman and Ringel's result by showing that there exist triangular embeddings of $O_n$ for $n \equiv 0,6 \pmod{12}$ where a set of handles can be added to the embedding to obtain genus embeddings of the complete graphs $K_n$. In the original proof of the genus formula for the complete graphs \cite{Ringel-MapColor}, the residues $n \equiv 0,6 \pmod{12}$ had difficult constructions. In fact, the genera of $K_{18}$ and $K_{30}$ were among the last few special cases to be solved, and the $n \equiv 0 \pmod{12}$ case relied on current graphs over nonabelian groups. Simpler constructions for both of these residues are now known \cite{Sun-K12s, Sun-Minimum, Sun-nPrism, Sun-Kainen}. We provide yet another approach whose main advantage is that it also constructs genus embeddings of other graphs.
The first known genus embeddings of $K_{18}$ were found essentially through trial and error \cite{Mayer-Orientables, Jungerman-K18}, but as shown in Sun \cite{Sun-Index2}, an easier construction results from augmenting the triangular embedding of $O_{18}$ of Jungerman and Ringel \cite{JungermanRingel-Octa} with two handles. We generalize this idea to all $n \equiv 0,6 \pmod{12}$, $n \geq 18$. Let $K(n,t)$ denote the complete graph with a matching of size $t \leq n/2$ deleted. For even $n$, the ends of this spectrum are the complete graphs $K_n = K(n,0)$ and the octahedral graphs $O_n = K(n,n/2)$. For these values of $n$, our constructions determine the genus of all such graphs $K(n, t)$, i.e., the graphs on $n$ vertices of minimum degree at least $n-2$: \begin{theorem} When $n \equiv 0 \pmod{6}$, the genus of $K(n,t)$, for $t = 0, \dotsc, n/2$, is $$\gamma(K(n,t)) = \left\lceil \frac{(n-3)(n-4)-2t}{12}\right\rceil.$$ \label{thm-mindeg} \end{theorem} The \emph{surface crossing number} of a graph is the minimum number of crossings needed in a drawing of that graph in some specified closed surface. As a byproduct, the same construction provides surface crossing numbers of complete graphs over a range of surfaces: \begin{theorem} When $n \equiv 0 \pmod{6}$, the surface crossing number of $K_n$ in the surface of genus $\gamma(K_n)-g$, for $g = 1, \dotsc, \lceil n/12\rceil$, is $6g$ if $n \equiv 0 \pmod{12}$ and $6g-3$ if $n \equiv 6 \pmod{12}$. \label{thm-crossing} \end{theorem} The proofs of these results are each divided into two cases: Theorem \ref{thm-mindeg} is proven as Corollaries \ref{corr-g6} and \ref{corr-g0}, and Theorem \ref{thm-crossing} is proven as Corollaries \ref{corr-c6} and \ref{corr-c0}. \section{Background} For more information on topological graph theory, see Gross and Tucker \cite{GrossTucker} or Mohar and Thomassen \cite{MoharThomassen}. Let $S_g$ denote the orientable surface of genus $g$.
In this work, embeddings $\phi\colon G \to S_g$ of a graph $G = (V, E)$ in $S_g$ are usually \emph{cellular}, i.e., $S_g \setminus \phi(G)$ is a disjoint union of open disks. The disks are called \emph{faces}, and we denote the set of faces as $F$. Cellular embeddings satisfy the \emph{Euler polyhedral equation} $$|V|-|E|+|F| = 2-2g.$$ Each face is bounded by a closed walk, and the \emph{length} of a face is the length of this walk. We call faces of length 3 and 4 \emph{triangular} and \emph{quadrangular}, respectively, and an embedding is triangular if all of its faces are triangular. The Euler polyhedral equation leads to well-known relationships between the numbers of edges and vertices of embedded graphs: \begin{proposition} If an embedding of a (not necessarily simple) graph $G = (V, E)$ in the surface $S_g$ is triangular, then $$|E| = 3|V|-6+6g.$$ If a simple graph $G = (V,E)$ on at least 3 vertices is embedded (not necessarily cellularly) in the surface $S_g$, then $$|E| \leq 3|V|-6+6g.$$ \label{prop-bound} \end{proposition} The \emph{genus} $\gamma(G)$ of a graph $G$ is the smallest integer $g$ such that $G$ can be embedded in $S_g$, and such an embedding is referred to as a \emph{genus embedding}. By rearranging the second part of Proposition \ref{prop-bound}, we obtain a result sometimes referred to as the \emph{Euler lower bound}: \begin{corollary} For a simple graph $G = (V,E)$ on at least 3 vertices, its genus is at least $$\gamma(G) \geq \left\lceil \frac{|E|-3|V|+6}{6} \right\rceil.$$ \label{corr-euler} \end{corollary} We observe that the claim in Theorem \ref{thm-mindeg} and the genus formulas of the complete and octahedral graphs are equivalent to the fact that the genus of these graphs matches their Euler lower bound. In practice, these genus formulas are rarely referenced when constructing genus embeddings. Instead, it is more convenient to examine the distribution of faces in the embedding.
If a simple graph has a triangular embedding, then that embedding is automatically a genus embedding. For graphs that cannot triangulate a surface due to the number of edges, one can check whether only a few extra ``diagonals'' are needed to triangulate its nontriangular faces. \begin{proposition} Let $G = (V,E)$ be a graph with a triangular embedding in some orientable surface. If $G'$ is any simple graph formed by deleting up to five edges from $G$, then the genus of $G'$ matches the Euler lower bound. \label{prop-del} \end{proposition} \begin{proof} Let $G' = (V, E')$ be formed by deleting $m$ edges from $G$, for some nonnegative $m \leq 5$. Since $G$ has a triangular embedding, $|V| \geq 3$ and $(|E|-3|V|+6)/6$ is an integer. The embedding induced on $G'$ has genus $$\frac{|E|-3|V|+6}{6} = \left\lceil\frac{|E|-3|V|+6}{6}-\frac{m}{6}\right\rceil = \left\lceil\frac{|E'|-3|V|+6}{6}\right\rceil.$$ This is exactly the Euler lower bound for $G'$. \end{proof} Proposition \ref{prop-del} shows that Problem \ref{prob-mohar} has an equivalent formulation that applies to all graphs, not just those that can triangulate an orientable surface: \begin{problem} Does there exist a constant $c \in (0, 1)$ such that the genus of every graph $G = (V,E)$ of minimum degree at least $c|V|$ matches the Euler lower bound? \label{prob-euler} \end{problem} The \emph{surface crossing number} $cr_g(G)$, introduced by Kainen \cite{Kainen-LowerBound}, is the minimum number of crossings needed to draw $G$ in $S_g$. Kainen proved a lower bound on the surface crossing number based on the girth of the graph, which we state here for girth $3$: \begin{proposition}[Kainen \cite{Kainen-LowerBound}] Let $G = (V,E)$ be a simple graph on at least 3 vertices.
The surface crossing number of $G$ in the surface of genus $g$ is at least $$cr_g(G) \geq |E|-(3|V|-6+6g).$$ \label{prop-kainen} \end{proposition} \begin{proof} Deleting one edge from each crossing yields an embedding in the same surface, so the resulting graph is subject to Proposition \ref{prop-bound}. \end{proof} Like the phenomenon suggested by Problem \ref{prob-euler}, Kainen's bound is often tight when the graph is dense and the genus of the surface is near the genus of the graph. In Sun \cite{Sun-Kainen}, it was shown that in the surface of genus $\gamma(K_n)-1$, for $n \not\equiv 0,3,4,7 \pmod{12}$ (i.e., the complete graphs that do not triangulate an orientable surface), the surface crossing number of the complete graph $K_n$ matches Kainen's lower bound, except when $n = 9$. In this work, we use octahedral graphs to calculate surface crossing numbers of complete graphs for a wider range of surfaces. To this end, we derive a more convenient form for complete graphs: \begin{corollary} If there is a triangular embedding of $K(n, t)$ in the surface $S_g$, then the surface crossing number of $K_n$ in $S_g$ is at least $t$. \label{corr-kainen} \end{corollary} \begin{proof} $cr_g(K_n) \geq \binom{n}{2} - \left(\binom{n}{2}-t\right) = t.$ In words, each missing edge needs to cross at least one other edge because the embedding is already triangular. \end{proof} \subsection{Rotation systems} Cellular embeddings in surfaces can be specified combinatorially in the following way. First, fix an arbitrary orientation of the edges of the graph $G$; each edge $e \in E(G)$ then induces two arcs $e^+$ and $e^-$ with the same endpoints but pointing in opposite directions. We use $E(G)^+$ to denote the set of such arcs. A \emph{rotation} at a vertex is a cyclic permutation of the arcs leaving the vertex. For a simple graph, it suffices to specify a cyclic permutation of the neighbors of that vertex.
A \emph{(general) rotation system} assigns a rotation to each vertex and an \emph{edge signature} $\lambda\colon E(G) \to \{-1,1\}$. If an edge has signature $-1$, we say that it is \emph{twisted}. If the rotation system has no twisted edges, then it is \emph{pure} and the embedding is orientable. When tracing the faces of a rotation system, each face boundary walk begins in \emph{normal behavior}, where rotations are interpreted as, say, clockwise orderings of the edges leaving a vertex. If the walk traverses a twisted edge, the walk switches to \emph{alternate behavior}, where the local orientation is reversed and rotations are now counterclockwise orderings. The walk reverts to normal behavior when it traverses another twisted edge. Given an orientation of each face boundary, an edge is said to be \emph{bidirectional} if the two times a face boundary walk traverses that edge are in opposite directions, and \emph{unidirectional} otherwise. For a graph embedded with one face (like most of our current graphs), this property on edges is independent of the orientation of the face boundary walk. A \emph{vertex flip} at a vertex reverses its rotation and switches the signature of each incident edge (unless it is a self-loop). Vertex flips do not affect the set of faces, and hence two rotation systems are considered to be equivalent if there is a sequence of vertex flips that transforms one rotation system into the other. A rotation system describes an orientable embedding if and only if it can be transformed into a pure rotation system by a sequence of vertex flips. Conversely, if there is a closed walk with an odd number of twisted edges, then no such sequence exists and the embedding is nonorientable.
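For pure rotation systems, the face-tracing procedure just described can be made concrete. The following Python sketch is our own code, not part of the paper; it uses the convention that from an arc $(u,v)$ the next arc of the face boundary is $(v,w)$, where $w$ is the successor of $u$ in the rotation at $v$.

```python
# Face tracing for a pure rotation system (our own sketch). A rotation
# system is given as a dict mapping each vertex to the cyclic list of
# its neighbors; faces are recovered as closed boundary walks.
def trace_faces(rotation):
    """Return the face boundary walks of a pure rotation system."""
    # succ[(v, u)] = the neighbor following u in the rotation at v.
    succ = {}
    for v, neighbors in rotation.items():
        for k, u in enumerate(neighbors):
            succ[(v, u)] = neighbors[(k + 1) % len(neighbors)]
    unused = set(succ)
    faces = []
    while unused:
        start = arc = min(unused)
        face = []
        while True:
            unused.discard(arc)
            face.append(arc[0])
            u, v = arc
            arc = (v, succ[(v, u)])   # next arc of the boundary walk
            if arc == start:
                break
        faces.append(face)
    return faces

# Example: K_4 with every rotation in ascending order yields one face of
# length 4 and one of length 8, so Euler's equation 4 - 6 + 2 = 2 - 2g
# places this embedding on the torus (g = 1).
k4 = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
faces = trace_faces(k4)
```

The same routine, run on the rotation systems derived from the current graphs below, is how one would verify triangularity by machine.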
\subsection{Current graphs} A \emph{current graph} $(\phi, \alpha)$ consists of a cellular embedding $\phi\colon G \to S$ in a possibly nonorientable surface $S$ and an arc-labeling $\alpha\colon E(G)^+ \to \Gamma$ satisfying $\alpha(e^+) = -\lambda(e)\alpha(e^-)$ for every edge $e$. The group $\Gamma$ is called the \emph{current group} and the arc labels are referred to as \emph{currents}. In this paper, the current group is always the (additive) cyclic group $\Z_n$, where $n$ is a multiple of $6$. The \emph{excess} of a vertex is the sum of all of the currents on the arcs entering the vertex, and if the excess is 0, we say that that vertex satisfies \emph{Kirchhoff's current law}. Face boundary walks $(e^\pm_1, e^\pm_2, e^\pm_3, \dotsc)$ are called \emph{circuits}, and the \emph{log} of a circuit replaces each $e^\pm_i$ with $\alpha(e^\pm_i)$ or $-\alpha(e^\pm_i)$ if the walk is in normal or alternate behavior, respectively. With one exception, the current graphs in this work are \emph{orientable cascades}, i.e., current graphs with a 1-face embedding in a nonorientable surface whose derived embeddings are orientable. Paraphrasing from Jungerman and Ringel \cite{JungermanRingel-Octa}, these orientable cascades satisfy a standard set of properties: \begin{enumerate} \item[(C1)] Each vertex is of degree 1 or 3. \item[(C2)] There is one face in the embedding, inducing a single circuit. \item[(C3)] Kirchhoff's current law is satisfied at every vertex of degree 3. \item[(C4)] The excess of a degree 1 vertex has order 3 in $\Z_n$. \item[(C5)] The log of the circuit contains each element of $\Z_n\setminus\{0, n/2\}$ exactly once. \item[(C6)] The current on an edge is even if and only if the edge is bidirectional. \end{enumerate} For example, consider the orientable cascade in Figure \ref{fig-ex} with current group $\Z_{18}$ satisfying these properties. Solid and hollow vertices represent clockwise and counterclockwise rotations, respectively. 
It is identical to one described in Jungerman and Ringel \cite{JungermanRingel-Octa}, except with all of the vertices flipped. If its lone circuit is oriented so that it is in normal behavior as it traverses the degree 1 vertex, then the log is: \begin{figure}[tbp] \centering \includegraphics[scale=1]{figs/case6-s1.pdf} \caption{An orientable cascade that generates a triangular embedding of $O_{18}$.} \label{fig-ex} \end{figure} $$\begin{array}{ccccccccccccccccccccccccccccc} (6 & 12 & 2 & 5 & 4 & 15 & 13 & 17 & 7 & 3 & 16 & 10 & 11 & 14 & 1 & 8). \end{array}$$ Each current graph generates a \emph{derived embedding} of a \emph{derived graph}. The derived graph has vertex set $\Z_n$, and the rotation at each vertex $i \in \Z_n$, and hence the neighborhood of $i$, is determined by adding $i$ to each element of the log. This is sometimes referred to as the \emph{additivity rule}. Normally, for a current graph with twisted edges, one would also need to specify the signatures of the edges in the derived embedding. However, because of property (C6), the twisted edges are exactly those with one odd and one even endpoint (see Section 4.1.6 of Gross and Tucker \cite{GrossTucker} or Section 8.4 of Ringel \cite{Ringel-MapColor}). By flipping, say, the odd vertices, we obtain a pure rotation system, demonstrating that the derived embedding is orientable. While most of the current graphs in this paper are embedded in nonorientable surfaces, all derived embeddings will be orientable, so we often omit mentioning the orientability of such an embedding. Applying this process to the above log results in a triangular embedding of the octahedral graph $O_{18}$, since property (C5) implies that each vertex $i$ is adjacent to all other vertices except $i+9$. It was shown in Sun \cite{Sun-Index2} how to augment this embedding with handles to produce a genus embedding of $K_{18}$. We now describe this construction and its generalization to all larger $n \equiv 0 \pmod{6}$. 
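This derivation can be checked mechanically. The following sketch (our own code, not from the paper) builds the rotation system for $O_{18}$ by the additivity rule, reversing the rotations at the flipped odd vertices, and confirms that every face is a triangle, so the embedding has genus $16 = \gamma(O_{18})$.

```python
# Verify (our own check) that the displayed log yields a triangular
# embedding of O_18: rotation at an even vertex i is the log shifted by i;
# flipping the odd vertices reverses their rotations.
LOG = [6, 12, 2, 5, 4, 15, 13, 17, 7, 3, 16, 10, 11, 14, 1, 8]
N = 18

rotation = {}
for i in range(N):
    row = [(x + i) % N for x in LOG]
    rotation[i] = row if i % 2 == 0 else row[::-1]

# Property (C5): vertex i is adjacent to every vertex except i and i + 9.
for i in range(N):
    assert sorted(rotation[i]) == sorted(set(range(N)) - {i, (i + 9) % N})

# Face tracing: from arc (u, v), the next arc is (v, w), where w is the
# successor of u in the rotation at v.
succ = {(v, row[k]): row[(k + 1) % len(row)]
        for v, row in rotation.items() for k in range(len(row))}
unused = set(succ)
faces = []
while unused:
    start = arc = next(iter(unused))
    face = []
    while True:
        unused.discard(arc)
        face.append(arc[0])
        arc = (arc[1], succ[(arc[1], arc[0])])
        if arc == start:
            break
    faces.append(face)

assert all(len(f) == 3 for f in faces)   # the embedding is triangular
assert len(faces) == 96                  # 18 - 144 + 96 = 2 - 2*16, genus 16
```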
\section{The constructions} In Jungerman and Ringel \cite{JungermanRingel-Octa}, triangular embeddings of the octahedral graphs on $n \equiv 0 \pmod{6}$ vertices are generated by orientable cascades. We require new families satisfying additional properties so that we can attach handles that introduce the missing edges. The handles and edge flips differ between the $n \equiv 0 \pmod{12}$ and $n \equiv 6 \pmod{12}$ cases, so we describe them separately. \subsection{$n \equiv 6 \pmod{12}$} \begin{theorem} For $s \geq 1$, there is an orientable triangular embedding of the octahedral graph $O_{12s+6}$ that can be augmented with $s+1$ handles to obtain a genus embedding of the complete graph $K_{12s+6}$. \label{thm-case6} \end{theorem} \begin{proof} Consider the current graphs in Figures \ref{fig-ex} and \ref{fig-case6} with current group $\Z_{12s+6}$, for each $s \geq 1$. In our infinite families, the ellipses denote a ``ladder,'' such as the one shown in Figure \ref{fig-ladder}, where the currents on the vertical ``rungs'' form a sequence of consecutive integers, the directions of those arcs alternate, and the rotations on the vertices form a checkerboard pattern. The currents on the horizontal arcs can be deduced from Kirchhoff's current law. 
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics[scale=0.9]{figs/case6-s2-new.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics[scale=0.9]{figs/case6-odd.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics[scale=0.9]{figs/case6-even-new.pdf} \caption{} \end{subfigure} \caption{Current graphs for $O_{12s+6}$ for (a) $s = 2$, (b) odd $s \geq 3$, and (c) even $s \geq 4$.} \label{fig-case6} \end{figure} \begin{figure}[tbp] \centering \includegraphics[scale=1]{figs/index2-ladder.pdf} \caption{A ladder and its specification for $s = 5$.} \label{fig-ladder} \end{figure} For readability, we set $r = 2s+1 = (12s+6)/6$ and describe just one of the handles---the other handles will be obtained via the additivity rule: we add $2i$ to each vertex that is mentioned, for appropriate values of $i \in \Z_{12s+6}$. In each current graph, we orient the circuit so that it is in normal behavior as it traverses the degree 1 vertex. Thus, that vertex induces the faces $[0, 4r, 2r]$ and $[r, 3r, 5r]$ (the latter face has the ``opposite'' orientation since $r$ is odd). We connect these two faces with a handle to add the edges $(0, 3r)$, $(2r, 5r)$, $(4r, r)$, $(0, r)$, $(2r, 3r)$, and $(4r, 5r)$, as illustrated on the left of Figure \ref{fig-add6}(a). The latter three edges already exist in the graph, so we remove their original instances. In the resulting quadrangular faces, as seen on the right of Figure \ref{fig-add6}(a), the other diagonals are the missing edges $(q, q+3r)$, $(q+2r, q+5r)$, and $(q+4r, q+r)$, where $q$ is $2r+1$ and $1$ for odd and even $s$, respectively. After adding the handle, the embedding remains triangular.
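The bookkeeping behind this handle and its translates can be checked mechanically. In the following sketch (our own code, with our own helper names), the handle with offset $2i$ contributes the six diagonals $(j+2i, j+3r+2i)$ for $j \in \{0, 1, 2r, 2r+1, 4r, 4r+1\}$, and we verify how these sets cover the missing edges of $K_{12s+6}$.

```python
# Handle schedule check (our own sketch) for the n = 12s+6 construction,
# with r = 2s+1 and n = 6r.
def handle_diagonals(s, i):
    r, n = 2 * s + 1, 12 * s + 6
    return {frozenset(((j + 2 * i) % n, (j + 3 * r + 2 * i) % n))
            for j in (0, 1, 2 * r, 2 * r + 1, 4 * r, 4 * r + 1)}

def check(s):
    r, n = 2 * s + 1, 12 * s + 6
    missing = {frozenset((v, (v + 3 * r) % n)) for v in range(n)}
    assert len(missing) == 3 * r        # K_{12s+6} is missing 6s+3 edges
    handles = [handle_diagonals(s, i) for i in range(s + 1)]
    # The first s handles contribute pairwise disjoint edge sets ...
    for i in range(s):
        for k in range(i + 1, s):
            assert not handles[i] & handles[k]
    # ... the final handle repeats exactly three edges of the first one ...
    assert len(handles[s] & handles[0]) == 3
    # ... and together they account for every missing edge.
    assert set().union(*handles) == missing

for s in range(1, 8):
    check(s)
```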
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=1]{figs/add6.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[scale=1]{figs/cross6.pdf} \caption{} \end{subfigure} \caption{A handle and an edge flip used to incorporate missing edges.} \label{fig-add6} \end{figure} In each case, the six edges added by this handle are of the form $(j, j+3r)$ for $j = 0, 1, 2r, 2r+1, 4r, 4r+1$. Thus, we can sequentially apply this handle-addition process for each $i = 0, \dotsc, s-1$ (with $i$ defined above) while maintaining that the sets of added edges are disjoint. When we try to apply this procedure one more time by setting $i = s$, this final handle overlaps with the $i = 0$ handle at the three edges $(0, 3r)$, $(r, 4r)$, and $(2r, 5r)$ (e.g., $(i,j) = (0, 0)$ and $(i,j) = (s, 2r+1)$ are the same edge). Thus, we obtain a triangular embedding of $K_{12s+6}$ with three extra edges. Figure \ref{fig-schematic} illustrates the case $s = 2$, with the overlap indicated by dashed lines in the rightmost handle. By Proposition \ref{prop-del}, deleting those extra edges results in a genus embedding of $K_{12s+6}$. \begin{figure}[tbp] \centering \includegraphics[scale=1]{figs/schematic.pdf} \caption{Contributions from each handle for $O_{30}$.} \label{fig-schematic} \end{figure} \end{proof} The sequence of embeddings in the above construction yields genus embeddings for all graphs on $n = 12s+6$ vertices with minimum degree $n-2$: \begin{corollary} For $s \geq 0$, there exist orientable triangular embeddings of $K(12s+6, 6s+3-6t)$ for $t = 0, \dotsc, s$. Hence, the genus of $K(12s+6, k)$ for $k = 0, \dotsc, 6s+3$, matches the Euler lower bound. \label{corr-g6} \end{corollary} \begin{proof} Besides the triangular embeddings in the proof of Theorem \ref{thm-case6}, there is also a triangular embedding of $K(6,3)$ in the plane, and the genus of $K_6$ is $1$ \cite{Ringel-MapColor}.
If $k \geq 3$, then define $k'$ to be the largest integer where $k' \leq k$ and $k' \equiv 3 \pmod{6}$. Then, we can apply Proposition \ref{prop-del} to the triangular embedding of $K(12s+6, k')$ to determine the genus of $K(12s+6, k)$. For $k = 1,2$, the Euler lower bound for $K(12s+6,k)$ is the same as that of $K_{12s+6}$. Alternatively, we can apply Proposition \ref{prop-del} again to the final embedding with duplicate edges. \end{proof} The edge flips used in the construction imply tight bounds on surface crossing numbers: \begin{corollary} For $s \geq 0$, the surface crossing number of $K_{12s+6}$ in the surface of genus $\gamma(K_{12s+6})-h$, for $h = 1, \dotsc, s+1$, is $6h-3$. \label{corr-c6} \end{corollary} \begin{proof} For $s = 0$, the planar crossing number of $K_6$ is $3$ \cite{HararyHill}. For larger $s$, each missing edge $e_i = (q+2i, q+3r+2i)$ can be incorporated into the triangular embedding of $O_{12s+6}$ by crossing the edge $(2i, r+2i)$, as in Figure \ref{fig-add6}(b). These crossings do not interfere with each other since the edges are added inside disjoint pairs of faces. We may still insert $e_i$ in this manner in any subsequent triangular embedding in the proof of Theorem \ref{thm-case6} where $e_i$ is still missing. Since each missing edge is incorporated using a single crossing, the total number of crossings matches the lower bound in Corollary \ref{corr-kainen}. \end{proof} \subsection{$n \equiv 0 \pmod{12}$} Once again, we utilize the degree 1 vertex with order 3 excess, but unlike in the previous residue, the process is complicated by the fact that missing edges are between vertices of the same parity. As a result, an extra set of edge flips is needed, but we have increased flexibility in deciding which pairs of faces we connect with handles.
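The parity difference between the two residues can be made explicit (a small observation, not from the paper): the edges missing from $O_n$ join antipodal vertices $x$ and $x + n/2$, and
$$\frac{n}{2} = 6s+3 \quad (\text{odd, for } n = 12s+6), \qquad \frac{n}{2} = 6s \quad (\text{even, for } n = 12s),$$
so a missing edge has endpoints of opposite parity in the former case but of the same parity in the latter, while the additivity rule only translates handles by even amounts.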
\begin{theorem} For $s \geq 2$, there is an orientable triangular embedding of the octahedral graph $O_{12s}$ that can be augmented with $s$ handles to obtain a triangular embedding of the complete graph $K_{12s}$. \end{theorem} \begin{proof} Consider the current graphs in Figures \ref{fig-case0-s2} and \ref{fig-case0} with current group $\Z_{12s}$, defined for all $s \geq 2$. The $s = 2$ case in Figure \ref{fig-case0-s2} uses an orientable index 2 current graph. Instead of describing all the properties needed to explain this single case, we simply list its logs: $$\arraycolsep=4pt\begin{array}{rlccccccccccccccccccccl} \lbrack0\rbrack. & (8 & 16 & 23 & 22 & 3 & 5 & 2 & 1 & 14 & 18 & 15 & 21 & 6 & 20 & 13 & 17 & 4 & 10 & 11 & 19 & 9 & 7) \\ \lbrack1\rbrack. & (16 & 8 & 13 & 23 & 1 & 17 & 2 & 21 & 19 & 22 & 15 & 10 & 6 & 9 & 3 & 18 & 4 & 11 & 7 & 20 & 14 & 5). \end{array}$$ To generate the rotation at vertex $i \in \Z_{24}$, take the log of circuit $[i \bmod{2}]$ and add $i$ to each element. One can check that the resulting rotation system is triangular. For more information on index 2 current graphs and their relationship with orientable cascades, see Sun \cite{Sun-Index2}. \begin{figure}[tbp] \centering \includegraphics[scale=0.9]{figs/case0-s2-new.pdf} \caption{An index 2 current graph for $O_{24}$.} \label{fig-case0-s2} \end{figure} \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics[scale=0.9]{figs/case0-odd.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.99\textwidth} \centering \includegraphics[scale=0.9]{figs/case0-even.pdf} \caption{} \end{subfigure} \caption{Current graphs for $O_{12s}$ for (a) odd $s \geq 3$, and (b) even $s \geq 4$.} \label{fig-case0} \end{figure} As before, we describe just one of the handles and let $r = 2s = (12s)/6$. We connect the faces $[0, 4r, 2r]$ and $[a, a+2r, a+4r]$, where $a$ is some odd number.
As seen in Figure \ref{fig-add0}, we move two groups of existing edges into this handle: edges of the form $(j,a+j)$ and $(j,a+2r+j)$, where $j = 0, 2r, 4r$. In the resulting quadrilaterals, we can add the missing edges $(b+j, b+3r+j)$ and $(c+j, c+3r+j)$, where $b$ and $c$ are numbers of opposite parity. We obtain the other handles by adding $2i$ to each vertex in this construction, for $i = 1, \dotsc, s-1$. Since $b$ and $c$ are of opposite parities, all of these added edges are distinct. \begin{figure}[tbp] \centering \includegraphics[scale=1]{figs/add0.pdf} \caption{A handle and the two kinds of edge flips used to incorporate missing edges.} \label{fig-add0} \end{figure} The values $(a,b,c)$ are $(1, 2, 19)$ for $s = 2$, $(2s-1, 8s+1, 3s-1)$ for odd $s \geq 3$, and $(12s-3, 10s, 8s-1)$ for even $s \geq 4$. \end{proof} As in the $n \equiv 6 \pmod{12}$ case, the construction has additional benefits: \begin{corollary} For $s \geq 1$, there exist orientable triangular embeddings of $K(12s, 6s-6t)$ for $t = 0, \dotsc, s$. Hence, for $k = 0, \dotsc, 6s$, the genus of $K(12s, k)$ matches the Euler lower bound. \label{corr-g0} \end{corollary} \begin{corollary} For $s \geq 1$, the surface crossing number of $K_{12s}$ in the surface of genus $\gamma(K_{12s})-t$, for $t = 0, \dotsc, s$, is $6t$. \label{corr-c0} \end{corollary} \begin{proof} The only remaining case for both results is when $s = 1$. There exists a triangular embedding of $K_{12}$ \cite{Ringel-MapColor}. In Jungerman and Ringel \cite{JungermanRingel-Octa}, the orientable cascade generating a triangular embedding of $O_{12}$ has a log of the form $$\begin{array}{rcccccccccccccccccccccl} (\dotsc & 2 & 11 & 8 & \dotsc), \end{array}$$ so the missing edge $(2,8)$ can be added to the graph, crossing the edge $(0,11)$. Similarly, by the additivity rule, each of the other five missing edges can also be added to the drawing using one crossing (though for the edges between odd vertices, the orientation is reversed).
Thus the surface crossing number of $K_{12}$ in the surface $S_5$ is $6$. \end{proof} \section{Conclusion} We described a construction for transforming genus embeddings of octahedral graphs into those of complete graphs, when the number of vertices is a multiple of 6. The intermediate embeddings generated in the construction provide some evidence in favor of an affirmative answer to Mohar and Thomassen's Problem \ref{prob-mohar}. Additionally, they determine some surface crossing numbers of the complete graphs. Jungerman and Ringel \cite{JungermanRingel-Minimal} found a \emph{minimal triangulation} for each surface $S_g$, $g \neq 2$. In particular, for each such surface, they constructed a triangular embedding of a simple graph on $$M(g) := \left\lceil \frac{7+\sqrt{1+48g}}{2} \right\rceil$$ vertices, and this is the minimum number of vertices possible. Previously, the author \cite{Sun-Kainen} conjectured a generalization of this result, that Kainen's lower bound is tight for the complete graph on $M(g)$ vertices in the surface $S_g$, i.e., that there is a minimal triangulation of $S_g$ where each missing edge can be added to the embedding using only one crossing each. Theorem \ref{thm-crossing} resolves this conjecture for some surfaces, but minimal triangulations can have as many as $n-6$ fewer edges than the complete graph $K_n$. Our constructions exploit the vertex of degree 1, but the current graphs used to find the genus of the other octahedral graphs \cite{JungermanRingel-Octa, Sun-Index2} do not have this feature. Are there other simple structures that allow for a similar kind of handle operation? Furthermore, what about the graphs $K(n,t)$ for $n$ odd? At present, the genus of such graphs, especially for large $t$, is almost completely unknown.
Even for small $t$, there are few results besides those derived from Proposition \ref{prop-del}: Su, Noguchi, and Zhou \cite{Su} showed that $\gamma(K(9,4)) = \gamma(K_9) = 3$, extending a result of Huneke \cite{Huneke-Minimum}. Perhaps the odd $n$ case is the most promising and fertile direction to explore next. \bibliographystyle{alpha} \bibliography{biblio} \end{document}
2412.16629v1
http://arxiv.org/abs/2412.16629v1
Asymptotic formulae for Iwasawa Invariants of Mazur--Tate elements of elliptic curves
\documentclass{amsart} \usepackage{ amsmath, amsxtra, amsthm, amssymb, booktabs, comment, longtable, mathrsfs, mathtools, multirow, stmaryrd, tikz-cd, bbm, xr, color, xcolor} \usepackage[normalem]{ulem} \usepackage{colonequals} \usepackage[bbgreekl]{mathbbol} \usepackage[all]{xy} \usepackage[nobiblatex]{xurl} \usepackage{hyperref} \usepackage{geometry} \geometry{left=1.4in, right=1.4in, top=1.5in, bottom=1.5in} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{defn}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newcommand\robout{\bgroup\markoverwith {\textcolor{blue}{\rule[0.5ex]{2pt}{0.4pt}}}\ULon} \newtheorem{lthm}{Theorem} \renewcommand{\thelthm}{\Alph{lthm}} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{conv}[theorem]{Convention} \setlength{\parskip}{.5\baselineskip} \newcounter{dummy} \makeatletter \newcommand{\mylabel}[2]{#2\def\@currentlabel{#2}\label{#1}} \makeatother \newcommand{\Gal}{\mathrm{Gal}} \newcommand{\BSymb}{\mathrm{BSymb}} \newcommand{\eval}{\mathrm{eval}} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\Symb}{\mathrm{Symb}} \newcommand{\cG}{\mathcal{G}} \newcommand{\SL}{\mathrm{SL}} \newcommand{\ovp}{\overline{\varphi}} \newcommand{\vp}{\varphi} \newcommand{\GL}{\mathrm{GL}} \newcommand{\Div}{\mathrm{Div}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\Frob}{\mathrm{Frob}} \newcommand{\cor}{\mathrm{cor}} \newcommand{\ord}{\mathrm{ord}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\Qp}{\mathbb{Q}_p} \newcommand{\Fp}{\mathbb{F}_p} \newcommand{\Zp}{\ZZ_p} \newcommand{\cE}{\mathcal{E}} \newcommand{\Sel}{\mathrm{Sel}} \newcommand{\res}{\mathrm{res}} \newcommand{\coker}{\mathrm{coker}} \newcommand{\rank}{\mathrm{rank}} \newcommand{\cX}{\mathcal{X}} 
\usepackage[OT2,T1]{fontenc} \DeclareSymbolFont{cyrletters}{OT2}{wncyr}{m}{n} \DeclareMathSymbol{\Sha}{\mathalpha}{cyrletters}{"58} \DeclareMathSymbol\dDelta \mathord{bbold}{"01} \definecolor{Green}{rgb}{0.0, 0.5, 0.0} \newcommand{\green}[1]{\textcolor{Green}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \renewcommand{\Im}{\mathrm{Im}} \renewcommand{\Re}{\mathrm{Re}} \usepackage[utf8]{inputenc} \numberwithin{equation}{section} \author{Antonio Lei} \address{Antonio Lei\newline Department of Mathematics and Statistics\\University of Ottawa\\ 150 Louis-Pasteur Pvt\\ Ottawa, ON\\ Canada K1N 6N5} \email{[email protected]} \author{Robert Pollack} \address{Robert Pollack\newline Department of Mathematics\\The University of Arizona\\617 N. Santa Rita Ave. \\ Tucson\\ AZ 85721-0089\\USA} \email{[email protected]} \author{Naman Pratap} \address{Naman Pratap\newline Indian Institute of Science Education and Research Pune\\The Mathematics Department\\ Dr. Homi Bhabha Road\\ Pune 411008\\ India } \email{[email protected]} \subjclass[2020]{11R23} \keywords{Iwasawa invariants, Mazur--Tate elements, elliptic curves, additive primes} \begin{document} \begin{abstract} We investigate two related questions regarding the $\lambda$-invariants of Mazur--Tate elements of elliptic curves defined over the field of rational numbers. At additive primes, we explain their growth and how these invariants relate to other better understood invariants depending on the potential reduction type. At good ordinary primes dividing the denominator of the normalised $L$-value of the elliptic curve, we prove that the $\lambda$-invariant grows as $p^n-1$, which is the maximum value. In addition, we give examples and a conjecture for the additive potentially supersingular case, supported by computational data from Sage in this setting. 
\end{abstract} \title[Iwasawa Invariants of Mazur--Tate elements of elliptic curves]{Asymptotic formulae for Iwasawa Invariants of Mazur--Tate elements of elliptic curves} \maketitle \section{Introduction}\label{sec:intro} Let $p$ be an odd prime, and $E$ an elliptic curve defined over $\QQ$, with $f_E$ the weight two cusp form of level $N_E$ attached to $E$. Mazur and Swinnerton-Dyer \cite{MSD74} constructed a $p$-adic $L$-function attached to $E$ when it has good ordinary reduction at $p$. The construction of $p$-adic $L$-functions has been extended to bad multiplicative and good supersingular primes in \cite{AmiceVelu} and \cite{VISIK}. In the case of good ordinary and bad multiplicative primes, the $p$-adic $L$-functions constructed in these works belong to $\Zp[[T]]\otimes \Qp$, and thus have finitely many zeros on the open unit $p$-adic disk. Their Iwasawa invariants (which measure the $p$-divisibility and the number of zeros in the open unit disk) can be defined via the $p$-adic Weierstrass preparation theorem. At supersingular primes, the construction in \cite{AmiceVelu,VISIK} yields a pair of $p$-adic $L$-functions which do not necessarily lie in an Iwasawa algebra. Nonetheless, the works \cite{pollack03} and \cite{sprung} show that they can be decomposed into $p$-adic $L$-functions that lie in $\Zp[[T]]\otimes\Qp$ via a logarithmic matrix. In particular, Iwasawa invariants are defined for each of these $p$-adic $L$-functions. The central objects of the present article are Mazur--Tate elements attached to elliptic curves, which are constructed using modular symbols and intimately related to the aforementioned $p$-adic $L$-functions. Originally called \emph{modular elements} in \cite{MT}, they can be realized as $\Theta_M(E)\in\QQ[\Gal(\QQ(\zeta_{M})/\QQ)]$, where $M\geq 1$ is an integer. 
The element $\Theta_M(E)$ interpolates the $L$-values of $E$ twisted by Dirichlet characters on $\Gal(\QQ(\zeta_M)/\QQ)$, normalized by appropriate periods (in the original article of Mazur and Tate, only even characters were considered and $\Theta_M$ were constructed as elements in $\QQ[(\ZZ/M\ZZ)^\times/\{\pm1\}]$). We shall concentrate on the Mazur--Tate elements $\vartheta_n(E)$ that belong to $\QQ[\Gal(\QQ(\zeta_{p^n})/\QQ)]$, where $p$ is our fixed prime number and $n\ge0$ is an integer. Furthermore, we may regard $\vartheta_n(E)$ as an element of $\Zp[\Gal(\QQ(\zeta_{p^n})/\QQ)]$ after an appropriate normalisation. These elements satisfy a norm relation as $n$ varies, which can be derived from the action of Hecke operators on modular symbols. One can define Iwasawa invariants of these Mazur--Tate elements, which are intimately linked to the $p$-adic valuations of the $L$-values of $E$ twisted by Dirichlet characters of $p$-power conductor as a consequence of the aforementioned interpolation property. In cases where the construction of a $p$-adic $L$-function is known (i.e., when $E$ has good ordinary, good supersingular, or bad multiplicative reduction at $p$), one can relate these invariants to those of the $p$-adic $L$-function, see \cite{PW} and \S\ref{sec:known} below for further details. The present article aims to investigate two related questions regarding the $\lambda$-invariants of Mazur--Tate elements. In what follows, we write $\theta_{n,i}(E)$ for the $\omega^i$-isotypic component of $\vartheta_{n+1}(E)$, where $\omega$ is the Teichm\"uller character. When $i=0$, we simply write $\theta_n(E)$. \begin{itemize} \item[(\mylabel{item_Add}{\textbf{Add}})] For elliptic curves over $\QQ$ with bad additive reduction at $p$, the Mazur--Tate elements do not immediately give rise to a $p$-adic $L$-function. 
Furthermore, since $a_p(E)=0$, the norm relation satisfied by the Mazur--Tate elements implies that $\lambda(\theta_n(E))\geq p^{n-1}$ (see \cite[Corollary~5.3]{doyon-lei}). Despite the lack of $p$-adic $L$-functions, these $\lambda$-invariants appear to satisfy regular formulae as observed in \S6 of \textit{op.\ cit.} Under appropriate hypotheses, we give a theoretical explanation of these growth patterns and relate them to other better-understood invariants. \\ \item[(\mylabel{item_Red}{\textbf{Red}})] When $E$ has good ordinary reduction at $p$, the $\lambda$-invariant of the $p$-adic $L$-function can be used to describe the Iwasawa invariants of the Mazur--Tate elements of the ordinary $p$-stabilization of $f_E$. When the mod $p$ representation attached to $E$ is irreducible, they agree with those attached to $\theta_n(E)$. In particular, $\lambda(\theta_n(E))$ stabilizes as $n$ grows. We study cases where $\lambda(\theta_n(E))$ is unbounded. In particular, we consider elliptic curves $E$ with $a_p(E)\equiv 1 \pmod{p}$ whose mod $p$ representation is reducible. \end{itemize} \subsection{Notation} Let $\QQ_\infty/\QQ$ denote the cyclotomic $\Zp$-extension of $\QQ$ with $\Gamma \colonequals \Gal(\QQ_\infty/\QQ) \cong \Zp$. We fix a topological generator $\gamma$ of $\Gamma$. Let $\Gamma_n\colonequals\Gamma^{p^n}$ for an integer $n\ge0$. We write $k_n\colonequals \QQ_\infty^{\Gamma_n}$, which is a cyclic sub-extension of $\QQ_\infty/\QQ$ of degree $p^n$. Let $\mathcal{G}_n \colonequals \Gal(\QQ(\mu_{p^n})/\QQ)$ and $G_n\colonequals \Gal(k_n/\QQ)$. We define the Iwasawa algebra $\Lambda$ as $\displaystyle\varprojlim_{n}\Zp[G_n]$. We fix an isomorphism $\Lambda \cong \Zp[[T]]$ that sends $\gamma$ to $1+T$. The Teichm\"uller character is denoted by $\omega: (\ZZ/p\ZZ)^\times \to \Zp^\times$.
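Though not part of the paper's setup, it may help to recall concretely how the Iwasawa invariants used throughout are read off an element $F(T)=\sum_i a_iT^i$ of $\Zp[[T]]$ via the $p$-adic Weierstrass preparation theorem: $\mu(F)=\min_i v_p(a_i)$, and $\lambda(F)$ is the least index attaining this minimum. A minimal Python sketch for a polynomial truncation:

```python
# Illustration only: Iwasawa invariants of an integral polynomial truncation
# F(T) = a_0 + a_1*T + ... + a_d*T^d, read off the p-adic valuations of the a_i.
def vp(a, p):
    # p-adic valuation of a nonzero integer a
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def iwasawa_invariants(coeffs, p):
    vals = [vp(a, p) if a != 0 else float("inf") for a in coeffs]
    mu = min(vals)             # mu: minimal valuation among the coefficients
    lam = vals.index(mu)       # lambda: first index where the minimum occurs
    return mu, lam

# F(T) = 5 + 25*T + T^2 + 5*T^3 over Z_5 has mu = 0 and lambda = 2
assert iwasawa_invariants([5, 25, 1, 5], 5) == (0, 2)
```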
We use the notation $L_p(E, \omega^i, T)$ to denote the $\omega^i$-isotypic component of the $p$-adic $L$-function of $E$ whenever its construction is possible, for more details see \S~\ref{ssec: MT and Lp}. \subsection{Known results}\label{sec:known} The connection of Iwasawa invariants of Mazur--Tate elements to Iwasawa invariants of $p$-adic $L$-functions is easiest to see in the case of an elliptic curve $E/\QQ$ and a prime $p$ of multiplicative reduction. In this case, the $p$-adic $L$-function of $E$ is nothing other than the inverse limit of $\theta_n(E)/a_p^{n+1}$, which immediately implies that $$ \mu(\theta_n(E))=\mu(E) \quad \text{and} \quad \lambda(\theta_n(E)) = \lambda(E) $$ for $n \gg 0$ where $\mu(E)$ and $\lambda(E)$ are the Iwasawa invariants of the $p$-adic $L$-function of $E$. However, even for a prime of good ordinary reduction, $\lambda$-invariants can be unbounded in $n$. Consider, for instance, $E=X_0(11)$ and $p=5$. In \cite[Example 3.4]{PW}, it is shown that for $n \geq 0$, $$ \mu(\theta_n(E))=0 \quad \text{and} \quad \lambda(\theta_n(E))=p^n-1. $$ Such behavior is, however, limited to elliptic curves where $E[p]$ is reducible as a Galois module. We have the following theorem. \begin{theorem} Let $E/\QQ$ be an elliptic curve with good ordinary reduction at $p$ such that $E[p]$ is irreducible as a Galois module. If $\mu(E) = 0$, then $$ \mu(\theta_n(E)) = 0 \quad \text{and} \quad \lambda(\theta_n(E)) = \lambda(E) $$ for $n \gg 0$. \end{theorem} \begin{proof} See \cite[Proposition 3.7]{PW}. \end{proof} By contrast, for primes $p$ of good supersingular reduction, the $\lambda$-invariants of Mazur--Tate elements are always unbounded. This is related to the fact that the $p$-adic $L$-function of $E$ is not an Iwasawa function and one instead has a pair of Iwasawa invariants, $\mu^\pm(E)$ and $\lambda^\pm(E)$, as defined in \cite{pollack03} and \cite{sprung}.
In this case, results of Kurihara and Perrin-Riou imply that these invariants can be read off from the Iwasawa invariants of Mazur--Tate elements. \begin{theorem}\label{thm:PW-ss} Let $E/\QQ$ be an elliptic curve with good supersingular reduction at $p$. \begin{enumerate} \item For $n \gg 0$, $$ \mu(\theta_{2n}(E)) = \mu^+(E) \quad \text{and} \quad \mu(\theta_{2n-1}(E)) = \mu^-(E). $$ \item If $\mu^+(E) = \mu^-(E)$, then $$ \lambda(\theta_n(E)) = q_n + \begin{cases} \lambda^+ & n \text{~even}\\ \lambda^- & n \text{~odd}, \end{cases} $$ where $$ q_n = p^{n-1} - p^{n-2} + \dots + \begin{cases} p -1 & n \text{~even}\\ p^2 - p & n \text{~odd}. \end{cases} $$ \end{enumerate} \end{theorem} \begin{proof} See \cite[Theorem 4.1]{PW}. \end{proof} \begin{remark} The $q_n$ term in the above formula forces the $\lambda$-invariants to be unbounded as $n$ grows. The interpolation property of the Mazur--Tate elements then implies that the $p$-adic valuation of $L(E,\chi,1)/\Omega_E^+$ (where $\Omega_E^+$ is the real Néron period of $E$ and $\chi$ ranges over Dirichlet characters of $p$-power conductor) is unbounded as $n$ increases. The Birch and Swinnerton-Dyer conjecture thus predicts that some algebraic invariant should grow along the cyclotomic $\Zp$-extension. Consistent with this, it is known that the Tate--Shafarevich group of $E$ (if finite) grows without bound along this extension (see \cite[Theorem 10.9]{kobayashi}). \end{remark} \subsection{Main results} We now discuss the main results we prove in the present article. We begin with our results in the context of \eqref{item_Add} discussed above. For an elliptic curve $E/\QQ$ with additive reduction at a prime $p$, our approach differs depending on the `potential reduction' type of $E$. Recall that when $E$ has bad additive reduction at $p$, it achieves semistable reduction over a finite extension of $\QQ$.
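As an aside, the quantity $q_n$ in Theorem~\ref{thm:PW-ss} admits a closed form (an elementary check, not stated above): summing the alternating geometric series gives
$$q_n = \frac{p^n-1}{p+1} \quad (n \text{ even}), \qquad q_n = \frac{p^n-p}{p+1} \quad (n \text{ odd}),$$
so in both cases $q_n = \lfloor p^n/(p+1) \rfloor$, making the growth rate $\lambda(\theta_n(E)) \sim p^n/(p+1)$ explicit.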
We first study the case where $E$ achieves semistable reduction over the quadratic field $F=\QQ(\sqrt{(-1)^{p-1}p})$ and relate the Mazur--Tate elements of $E$ with its quadratic twist associated with $F$, denoted by $E^{F}$. Since $E^F$ has good reduction at $p$, the Iwasawa invariants of the $p$-adic $L$-function(s) of $E^F$ are well understood. In particular, we prove: \begin{lthm}[Theorem \ref{quad}]\label{thmA} Let $E/\QQ$ be an elliptic curve with additive reduction at an odd prime $p$. Let $i$ be an even integer between $0$ and $p-2$. Assume that \begin{itemize} \item the quadratic twist $E^F$ has either good ordinary or multiplicative reduction at $p$; \item the $\mu$-invariant of $L_p(E^F,\omega^{(p-1)/2+i}, T)$ is zero and the $\mu$-invariant of $\theta_{n,i}(E)$ is non-negative when $n$ is sufficiently large. \end{itemize} For all $n\gg0$, \begin{align*} \mu(\theta_{n,i}(E)) &= 0, \\ \lambda(\theta_{n,i}(E))&= \frac{p-1}{2}\cdot{p^{n-1}} + \lambda(E^F, \omega^{{(p-1)/2+i}})\end{align*} where $\lambda(E^F, \omega^{{(p-1)/2+i}})$ denotes the $\lambda$ invariant of $L_p(E^F, \omega^{{(p-1)/2+i}}, T)$. \end{lthm} Our method of proof is to compare the interpolation properties of $\theta_{n,i}(E)$ with those of $\theta_{n,i+\frac{p-1}{2}}(E^F)$. The corresponding interpolation formulae are nearly the same with the exception of the Néron periods. Here, the ratio of the Néron periods of $E$ and $E^F$ equals $\sqrt{p}$, up to a $p$-adic unit. This factor of $\sqrt{p}$ leads to the presence of the term $\frac{p-1}{2}\cdot p^{n-1}$ in the formula above. \begin{remark} \label{rmk:periods} The term $\frac{p-1}{2}\cdot p^{n-1}$ forces the $\lambda$-invariants to grow without bound. However, unlike the good supersingular case, this is not explained via the Birch and Swinnerton-Dyer conjecture by the growth of the Tate--Shafarevich group along the cyclotomic $\ZZ_p$-extension.
Instead, it is explained by the growth of the $p$-valuation of the ratio of the periods $\Omega_{E/k_n}$ and $\left(\Omega_{E/\QQ}\right)^{p^n}$. This ratio, in turn, captures the lack of a global minimal model for $E$ over the number field $k_n$. See \eqref{perratio} and Proposition \ref{fudge}. \end{remark} Furthermore, we can prove a similar result if $E^F$ has good supersingular reduction at $p$, where a formula for $\lambda(\theta_{n,i}(E))$ in terms of the plus and minus $p$-adic $L$-functions of $E^F$ is proven. The formula we prove resembles that of Theorem~\ref{thm:PW-ss}, except for the presence of the extra term $\frac{p-1}{2}\cdot p^{n-1}$ originating from the ratio of periods; see Theorem~\ref{ssquad} for the precise statement. When $E$ has additive reduction at $p$, but achieves good ordinary reduction over more general extensions, we can again derive exact formulae for the $\lambda$-invariants of Mazur--Tate elements, but now we need to assume the Birch and Swinnerton-Dyer conjecture. Specifically, we require the $p$-primary part of the Tate--Shafarevich group to be finite over $k_n$ and that the leading term of the Taylor expansion of $L(E/k_n,s)$ at $s=1$ predicted in the Birch and Swinnerton-Dyer conjecture holds up to $p$-adic units; see Conjecture~\ref{conj:pBSD}. In the following theorem, $\cX(E/\QQ_\infty)$ denotes the dual of the Selmer group of $E$ over $\QQ_\infty$. \begin{lthm}[Theorem \ref{thm: bsd}]\label{thmB} Let $E/\QQ$ be an elliptic curve with additive, potentially good ordinary reduction at a prime $p\geq 5$ and minimal discriminant $\Delta_E$. Assume that $\cX(E/\QQ_\infty)$ is a $\Lambda$-torsion module. Assume furthermore that \begin{itemize} \item Conjecture~\ref{conj:pBSD} is true over $k_{n}$ for all $n \gg 0$; \item $\mu(\cX(E/\QQ_\infty)) = \mu(\theta_{n,0}(E))$ for $n\gg0$; \item $\lambda(\theta_{n,0}(E))<p^{n-1}(p-1)$ for $n\gg0$.
\end{itemize} Then, when $n$ is sufficiently large, we have \begin{align*} \lambda(\theta_{n,0}(E)) &= \frac{(p-1)\cdot \ord_p(\Delta_E)}{12}\cdot p^{n-1}+{\lambda(\cX(E/\QQ_\infty))}. \end{align*} \end{lthm} Our method is to analyze how each term in the Birch and Swinnerton-Dyer conjecture changes along the cyclotomic $\ZZ_p$-extension. A key step here relies on a control theorem for the $p$-primary Selmer group of $E$ along $\QQ_\infty$ which in turn governs the growth of the Tate--Shafarevich groups (see Theorems~\ref{thm:control} and \ref{sha}). From this analysis, we can determine the $p$-adic valuation of $L(E,\chi,1)/\Omega_E$ for Dirichlet characters $\chi$ of $p$-power conductor and thus the $\lambda$-invariant of $\theta_{n,0}(E)$. The unbounded term in the above formula arises from terms that capture the lack of a global minimal model for $E$ over $k_n$. This formula is consistent with Theorem \ref{thmA}; when good ordinary reduction at $p$ is achieved over a quadratic extension, we have $\ord_p(\Delta_E)=6$. We now discuss our results related to the setting discussed in \eqref{item_Red} above. In particular, $p$ is a good ordinary prime for $E$, and $E[p]$ is reducible as a Galois module. In an isogeny class of elliptic curves over $\QQ$, we consider the \emph{optimal} curve in the sense of Stevens \cite{Stevens1989}. In \cite{GV}, it has been proven that the $p$-adic $L$-function of the optimal curve (when normalised using the Néron periods of the curve) is an integral power series. Based on this, we show the following theorem, which gives a formula for $\lambda(\theta_n(E))$ assuming the occurrence of $p$ in the denominator of the rational number $L(E,1)/\Omega_E^+$ (where $\Omega_E^+$ is the real Néron period of $E$). 
\begin{lthm}[Theorem \ref{thm: Lvaldenom}]\label{thmC} Let $E/\QQ$ be an optimal elliptic curve with good ordinary reduction at $p$ such that $\ord_p(L(E,1)/\Omega_{E}^+)<0$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$, where $\phi_{E,\mathrm{Coh}}$ is the modular symbol attached to $E$ normalised by the cohomological periods $\Omega_{f_E}^\pm$. Then, \[ \lambda(\theta_n(E))=p^n-1\] for all $n\geq 0$. \end{lthm} The proof of Theorem~\ref{thmC} is based on an analysis of the Néron periods and the cohomological periods considered in \cite{PW}. In particular, we compare the `$p$-stabilised' Mazur--Tate elements under these two normalisations. Extending the ideas in \cite{doyon-lei2}, where formulae for the $\lambda$-invariants of Mazur--Tate elements attached to the Ramanujan $\Delta$ function were obtained from congruences with boundary symbols, we prove: \begin{lthm}[Theorem \ref{thm: bsym to Lval}]\label{thmD} Assume $E$ is an optimal elliptic curve with good ordinary reduction at an odd prime $p$ such that $a_p(E)\equiv 1 \pmod{p}$. Assume $\mu(L_p(E,\omega^0, T))=0$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$ where $\phi_{E,\mathrm{Coh}}$ is the modular symbol attached to $E$ normalised by the cohomological periods $\Omega_{f_E}^\pm$. Suppose $\phi_{E,\mathrm{Coh}}^+$ is congruent modulo $p$ to a weight 0 boundary symbol of level $\Gamma_0(N_E)$. Then \[\lambda(\theta_n(E))=p^n-1 \text{ for all }n\geq 0 \text{ and }\ord_p(L(E,1)/\Omega_E^+)<0.\] \end{lthm} We use the convention that weight $0$ boundary symbols can be identified with weight 2 Eisenstein series, see Definition~\ref{defn: bsym}. In particular, Theorem~\ref{thmD} tells us that a congruence of $\phi_{E,\mathrm{Coh}}^+$ with a boundary symbol is reflected in the denominator of $L(E,1)/\Omega_E^+$ under appropriate hypotheses.
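As a sanity check on Theorem~\ref{thmC}, consider again $E = X_0(11)$ and $p = 5$ from \S\ref{sec:known}. Using the classical value $L(E,1)/\Omega_E^+ = 1/5$ for this curve (which we take as known rather than prove here), the hypothesis on the $L$-value reads
$$\ord_5\left(L(E,1)/\Omega_E^+\right) = -1 < 0,$$
and, granting the unit hypothesis on $\phi_{E,\mathrm{Coh}}$, the conclusion $\lambda(\theta_n(E)) = 5^n - 1$ agrees with \cite[Example 3.4]{PW}.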
When the rank of $E(\QQ)$ is zero, the quantity $L(E,1)/\Omega_E^+$ can be expressed in terms of various arithmetic invariants by the Birch and Swinnerton-Dyer Conjecture. In particular, the denominator of $L(E,1)/\Omega_E^+$ should divide $|E(\QQ)_{\mathrm{tors}}|^2$. If $E(\QQ)$ has a point of order $p$, then $f_E$ is congruent to a weight 2 Eisenstein series. In this case, Theorems \ref{thmC} and \ref{thmD} together suggest that there is a congruence between the modular symbol associated with $E$ and the boundary symbol corresponding to the Eisenstein series. This observation is supported by computational evidence (see Example \ref{example1}), which suggests that mod $p$ multiplicity one may hold in this setting. We plan to explore this in a future project. While Theorems \ref{thmC} and \ref{thmD} are only stated for optimal elliptic curves, $\lambda(\theta_n(E))$ is invariant under isogeny, so the stated formula holds for all curves in the same isogeny class. Numerical data suggests that the hypothesis $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$ in Theorems \ref{thmC} and \ref{thmD} is automatic. See Remarks \ref{rem: phi unit} and \ref{rem: phi unit2} for a discussion on this hypothesis. \subsection{Organisation} We begin with preliminaries related to modular symbols and Mazur--Tate elements associated with elliptic curves over $\QQ$ in \S\ref{sec:msmt}. In \S\ref{sec:prelim}, we provide background on elliptic curves with additive reduction and review the notion of `potential semistability', i.e., when $E$ has bad additive reduction over a field $K$, but attains semistable reduction over a finite extension of $K$. Moreover, we study properties of the Selmer group associated with $E$ at additive potentially good ordinary primes. We use this to show that the growth of the $p$-primary part of the Tate--Shafarevich group of $E$ along the cyclotomic $\Zp$-extension of $\QQ$ is similar to the good ordinary case.
In \S\ref{sec:form1}, we prove Theorems~\ref{thmA} and \ref{thmB}. The potentially supersingular case in the generality of Theorem~\ref{thmB} has eluded us so far, but we provide examples and a conjecture supported by computational data from Sage in this setting. In \S \ref{sec: form2}, we study when $\lambda(\theta_n(E))$ grows as $p^n-1$ for an elliptic curve at a good ordinary prime. We also give several explicit examples related to Theorem \ref{thmD}, one of which illustrates an interesting phenomenon of the failure of mod $p$ multiplicity one. \subsection*{Acknowledgement} The research of AL is supported by the NSERC Discovery Grants Program RGPIN-2020-04259 and RGPAS-2020-00096. RP's research has been partially supported by NSF grant DMS-2302285 and by Simons Foundation Travel Support Grant for Mathematicians MPS-TSM-00002405. Parts of this work were carried out during NP's internship at the University of Ottawa in the summer of 2023, supported by a MITACS Globalink Scholarship. This article forms part of the master's thesis of NP at IISER, Pune. The authors thank Anthony Doyon and Rik Sarkar for interesting discussions related to the content of the article. \section{Modular symbols and Mazur--Tate elements}\label{sec:msmt} \subsection{Modular symbols} Let $R$ be any commutative ring and, for any integer $g \geq 0$, let $V_g(R)$ be the space of homogeneous polynomials of degree $g$ in the variables $X$ and $Y$ with coefficients in $R$. Let $\dDelta$ denote the abelian group of divisors on $\mathbb{P}^1(\QQ)$, and let $\dDelta^0$ denote the subgroup of degree 0 divisors. Let $\SL_2(\ZZ)$ act on $\dDelta^0$ by linear fractional transformations; this allows us to endow $\Hom(\dDelta^0, V_{g}(R))$ with a right action of $\SL_2(\ZZ)$ via $$(\varphi \mid_{\gamma})(D) = (\varphi(\gamma \cdot D))\mid_{\gamma},$$ where $\varphi \in \Hom(\dDelta^0, V_{g}(R))$, $\gamma \in \SL_2(\ZZ)$ and $D \in \dDelta^0$.
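The action on the $V_g(R)$ side can be tested concretely. The following Python sketch is an illustration only, using one common coordinate convention, $(P\mid_\gamma)(X,Y) = P(aX+bY,\, cX+dY)$ for $\gamma = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix}$ (a choice not fixed above), and checks the right-action law $(P\mid_{\gamma_1})\mid_{\gamma_2} = P\mid_{\gamma_1\gamma_2}$:

```python
# Sketch (convention-dependent, not from the paper): V_g(Z) as homogeneous
# polynomials {(i, j): coeff} in X^i Y^j, with the right action
# (P|gamma)(X, Y) = P(a*X + b*Y, c*X + d*Y) for gamma = (a, b, c, d).
from math import comb

def act(P, gamma):
    a, b, c, d = gamma
    Q = {}
    for (i, j), coef in P.items():
        # substitute X -> a*X + b*Y, Y -> c*X + d*Y and expand binomially
        for k in range(i + 1):
            for l in range(j + 1):
                mono = (k + l, i + j - k - l)
                term = coef * comb(i, k) * a**k * b**(i - k) \
                            * comb(j, l) * c**l * d**(j - l)
                Q[mono] = Q.get(mono, 0) + term
    return {m: v for m, v in Q.items() if v}

def mul(g1, g2):
    a, b, c, d = g1
    e, f, g, h = g2
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

# right-action law: (P|g1)|g2 == P|(g1*g2)
P = {(2, 0): 1, (1, 1): 3, (0, 2): -2}       # an element of V_2(Z)
g1, g2 = (1, 1, 0, 1), (2, 1, 1, 1)          # two matrices in SL_2(Z)
assert act(act(P, g1), g2) == act(P, mul(g1, g2))
```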
\begin{defn}\label{defn:modsymb} Let $\Gamma\leq \SL_2(\ZZ)$ be a congruence subgroup and let $R$ be a commutative ring. We define $\Hom_{\Gamma}(\dDelta^0, V_g(R))$ to be the space of $R$-valued \textbf{modular symbols} of weight $g$ and level $\Gamma$, and we denote this space by $\Symb(\Gamma, V_g(R))$. \end{defn} \begin{remark} One can identify $\text{Symb}(\Gamma, {V_g(R)})$ with the compactly supported cohomology group $ H^1_c(\Gamma, {V_g(R)})$ (see \cite[Proposition~4.2]{ash-ste}). \end{remark} For $f \in S_k(\Gamma)$, we define the \textbf{modular symbol associated with $f$} as \[\xi_f: \{s\}-\{r\} \mapsto 2\pi i \int_s^r f(z)(zX+Y)^{k-2}dz,\] which is an element of $\Symb(\Gamma, V_{k-2}(\CC))$ as $f$ is a holomorphic cusp form. Let $A_f$ be the field of Fourier coefficients of $f$ and fix a prime $p$. The matrix $\iota \colonequals \begin{psmallmatrix} -1& 0 \\ 0 & 1 \end{psmallmatrix}$ acts as an involution on $\Symb(\Gamma, V_{k-2}(\CC))$ and we decompose $\xi_f=\xi_f^+ + \xi_f^-$ with $\xi_f^\pm$ in the $\pm1$-eigenspace of $\iota$ respectively. By a theorem of Shimura, there exist $\Omega_f^\pm \in \CC$ such that ${\xi_f^\pm/\Omega_f^\pm}$ take values in $V_{k-2}(A_f)$, and in $V_{k-2}(\overline{\QQ}_p)$ upon fixing an embedding $\overline{\QQ}\hookrightarrow \overline{\QQ}_p$ (which we fix for the rest of the article). Define $\Psi_f^\pm \colonequals \xi_f^\pm/\Omega_f^\pm$, and $\Psi_f \colonequals \Psi_f^+ + \Psi_f^-$, which lies in $\Symb(\Gamma, V_{k-2}(\overline{\QQ}_p))$. \begin{remark}[\textbf{On periods}]\label{rem:periods} The periods we choose for normalisation play a crucial role in this article. Let $\mathcal{O}_f$ denote the ring of integers of the completion of the image of $A_f$ in $\overline{\QQ}_p$. We can choose $\Omega^+$ and $\Omega^-$ so that each of $\Psi_f^+$ and $\Psi_f^-$ takes values in $V_{k-2}(\mathcal{O}_f)$ and that each takes on at least one value in $\mathcal{O}_f^\times$.
We denote these periods by $\Omega_f^\pm$; they are called the \textbf{cohomological periods} of $f$ and are well-defined up to $p$-adic units (for more details, see \cite[Def. 2.1]{PW}). For an elliptic curve $E$ defined over $\QQ$, the ring of integers $\mathcal{O}_{f_E}$ is $\Zp$ and so $\Omega_{f_E}^\pm$ ensure that the modular symbols of $E$ take values in $\Zp$, with at least one value being a $p$-adic unit. On the other hand, we are supplied with the real and imaginary \textbf{Néron periods}, which we denote by $\Omega_E^+$ and $\Omega_E^-$ respectively. They ensure that the modular symbols take values in $\Qp$ but \textit{a priori} do not guarantee integrality. In \S \ref{sec:form1}, we exclusively use Néron periods for our normalisation, while in \S \ref{sec: form2}, we make use of both sets of periods. We will implicitly assume that the $p$-adic $L$-function of an elliptic curve $E$ is constructed using the Néron periods of $E$. \end{remark} In \S \ref{sec: form2}, we will encounter boundary symbols, which we introduce here following \cite{bel-das}. For simplicity of notation, let $V$ denote $V_g(R)$ where $R$ is a commutative ring. There is a tautological short exact sequence of abelian groups \begin{equation}\label{eqn:ses} 0 \to \dDelta^0 \to \dDelta \to \ZZ \to 0. \end{equation} Since this sequence splits, applying the functor $\text{Hom}(-,V)$ to (\ref{eqn:ses}) yields the exact sequence of modules $$0 \to V \to \text{Hom}(\dDelta, V) \to \text{Hom}(\dDelta^0, V) \to 0.$$ On taking $\Gamma$-cohomology, we obtain the following exact sequence: \begin{equation}\label{eqn:longcohom} 0 \xrightarrow{} V^\Gamma \xrightarrow{} \text{Hom}_{\Gamma}(\dDelta,V) \xrightarrow{b} \Symb(\Gamma, V) \xrightarrow{h} {H}^1(\Gamma,V).
\end{equation} \begin{defn}\label{defn: bsym} The map $b$ in \eqref{eqn:longcohom} is called the \textbf{boundary map} and its image, denoted by $\BSymb(\Gamma, V)$, is called the module of \textbf{boundary modular symbols} (or simply \textbf{boundary symbols}). For $V=V_g(R)$, $\BSymb(\Gamma, V)$ is the space of weight $g$ boundary symbols. \end{defn} The exact sequence (\ref{eqn:longcohom}) yields an isomorphism of Hecke modules $$\text{BSymb}(\Gamma, V) \cong \text{Hom}_{\Gamma} (\dDelta, V)/ V^\Gamma,$$ relating modular symbols to boundary symbols. Furthermore, there is an exact sequence $$0 \to \text{BSymb}(\Gamma, V_g(R)) \to \Symb(\Gamma,V_g(R)) \to H^1(\Gamma, V_g(R)).$$ The space of boundary symbols can be identified with the space of weight $g+2$ Eisenstein series under the Eichler--Shimura isomorphism (see \cite[Prop.\ 2.5]{bel-das}; note that the notion of modular symbols used therein is dual to the one discussed here). For our purposes, it will be crucial that these symbols can be regarded as $\Gamma$-invariant maps on the full set of divisors $\dDelta$. \subsection{Mazur--Tate elements and $p$-adic $L$-functions}\label{ssec: MT and Lp} Recall the following notation given in the introduction. We fix an elliptic curve $E/\QQ$ and let $f_E$ be the weight 2 newform associated with $E$ by the modularity theorem. For a non-negative integer $n$, let $\mathcal{G}_n \colonequals \Gal(\QQ(\mu_{p^n})/\QQ)$. For $a \in (\ZZ/p^n\ZZ)^\times$, we write $\sigma_a\in\cG_n$ for the element that satisfies $\sigma_a(\zeta)=\zeta^a$ for $\zeta \in \mu_{p^n}$.
\begin{defn} For a modular symbol $\varphi \in \Symb(\Gamma, V_g(R))$, define the associated Mazur--Tate element of level $n\geq 1$ by \[\vartheta_n(\varphi)= \sum_{a \in (\ZZ/p^n\ZZ)^\times}\varphi(\{\infty\}-\{a/p^n\})|_{(X,Y)=(0,1)}\cdot \sigma_a \in R[\mathcal{G}_n].\] When $R$ is a subring of $\overline{\QQ}_p$, decomposing $\mathcal{G}_{n+1}=G_n\times(\ZZ/p\ZZ)^\times$ with $G_n\cong\Gal(k_{n}/\QQ)$, one can project $\vartheta_{n+1}(\varphi)$ to $R[G_n]$ via the characters $\omega^i: (\ZZ/p\ZZ)^\times \to \Zp^\times$, where $0\leq i \leq p-2$. We define the \emph{$\omega^i$-isotypic component of the $p$-adic Mazur--Tate element} of level $n$ associated with a cusp form $f\in S_k(\Gamma)$ as \[\theta_{n,i}(f)\colonequals \omega^i(\vartheta_{n+1}(\Psi_f)) \in \overline{\QQ}_p[G_n].\] \end{defn} We define $\theta_{n,i}(E)\colonequals\theta_{n,i}(f_E) \in \Qp[G_n]$, where the normalisation may use either of the two sets of periods discussed in Remark \ref{rem:periods}. \begin{proposition}\label{interpprop} For a character $\chi$ on $G_n$, $\theta_{n, i}(f)$ satisfies the following interpolation property: \[\chi(\theta_{n,i}(f))=\tau(\omega^i\chi)\cdot\frac{L(f, \overline{\omega^i\chi},1)}{\Omega_f^{\epsilon}},\] where $\tau$ denotes the Gauss sum, and $\epsilon\in\{+,-\}$ is the sign of $\omega^i(-1)$. \end{proposition} \begin{proof} See \cite[Equation 8.6]{MTT}, and consider the projection described above. \end{proof} Let $\gamma_n$ be a generator of ${G}_n$. Then any element $F \in \Zp[{G}_n]$ may be written as a polynomial $\sum_{i=0}^{p^n-1}a_iT^i$ with $T=\gamma_n-1$. \begin{defn}[Iwasawa invariants] The $\mu$- and $\lambda$-invariants of $F=\sum_{i=0}^{p^n-1}a_iT^i \in \Zp[G_n]$ are defined as \begin{align*} \mu(F) &= \underset{i}{\min}\{\ord_p(a_i)\},\\ \lambda(F) &= \min\{ i : \ord_p(a_i) = \mu(F)\}, \end{align*} where $\ord_p$ is the $p$-adic valuation normalised so that $\ord_p(p)=1$.
\end{defn} These invariants are independent of the choice of $\gamma_n$. The $\mu$- and $\lambda$-invariants of an element of the finite level group algebra $\Zp[G_n]$ can also be defined intrinsically, without reference to $\gamma_n$, in a manner equivalent to the above definitions; for more details, see \cite[\S~3.1]{PW}. Let $\pi_{n}^{n+1} : G_{n+1} \to G_n$ be the natural projection map. For $\sigma \in G_{n-1}$, define \[\cor_{n-1}^n(\sigma) \colonequals \sum_{\substack{\pi_{n-1}^{n}(\tau)=\sigma \\ \tau \in G_n}} \tau\in\Zp[G_n],\] which gives a map $G_{n-1} \to \Zp[G_n]$. We extend these to maps on the corresponding group rings and use the same notation for the extension. Finally, we briefly recall the construction of the $p$-adic $L$-function of $E$ when it is good ordinary at $p$. Let $\alpha$ denote the unique $p$-adic unit root of the Hecke polynomial $X^2-a_p(E)X+p$. We consider the $p$-stabilisation \[f_{E, \alpha}(z)\colonequals f_E(z)- \frac{p}{\alpha}f_E(pz),\] which yields the norm-compatible system $\{\frac{1}{\alpha^{n+1}} \theta_{n,i}(f_{E,\alpha})\}_n$. (We shall revisit the notion of $p$-stabilisation in greater detail in \S~\ref{sec: form2}.) Then, \[L_p(E, \omega^i)=\varprojlim_{n}\frac{1}{\alpha^{n+1}} \theta_{n,i}(f_{E,\alpha})\] is the $\omega^i$-isotypic component of the $p$-adic $L$-function attached to $E$. This is an element of $\Lambda\otimes\Qp$. (If we normalise by the cohomological periods, we get an element of $\Lambda$.) We use the notation $L_p(E, \omega^i, T)$ for the image of $L_p(E, \omega^i)$ under the isomorphism $\Lambda\otimes\Qp\cong\Zp[[T]]\otimes\Qp$. One can also define the $p$-adic $L$-function as an element of $\Zp[[\Gal(\QQ(\mu_{p^\infty})/\QQ)]]\otimes \Qp$ by considering the norm-compatible system built from $\frac{1}{\alpha^{n}}\vartheta_n(\Psi_{f_{E,\alpha}})$ directly. We denote this inverse limit by $L_p(E)$, which can be projected by powers of $\omega$ to recover $L_p(E, \omega^i)$.
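Since the Iwasawa invariants introduced above are defined purely in terms of the $p$-adic valuations of the coefficients $a_i$, they are straightforward to compute in examples. The following is a toy Python sketch (ours, not from the paper; elements of $\Zp[G_n]$ are modelled by integer coefficient lists, whereas honest $\Zp$-arithmetic would track precision):

```python
def iwasawa_invariants(coeffs, p):
    """(mu, lambda) of F = sum_i a_i T^i with a_i integers:
    mu = min_i ord_p(a_i), lambda = least index i attaining that minimum."""
    def ord_p(a):
        if a == 0:
            return float("inf")  # convention: ord_p(0) = +infinity
        v = 0
        while a % p == 0:
            a //= p
            v += 1
        return v
    vals = [ord_p(a) for a in coeffs]
    mu = min(vals)
    lam = next(i for i, v in enumerate(vals) if v == mu)
    return mu, lam
```

For example, $F = 9 + 3T + T^2 + 27T^3$ has $\mu(F)=0$ and $\lambda(F)=2$ when $p=3$.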
\section{Preliminaries: Elliptic curves and additive reduction}\label{sec:prelim} In this section, we recall certain facts about elliptic curves over number fields that have additive reduction at a finite place $v$ above $p$. We shall consider the base-change of an elliptic curve $E/\QQ$ to a number field, as well as to the completion of a number field at a finite place (which we refer to as a $p$-adic field). We say that $E$ has \textit{semi-stable} reduction at $v$ if it has either good or multiplicative reduction at $v$. We begin with the following well-known result. \begin{theorem}[Semi-stable reduction theorem]\label{thm:semistable} Let $K$ be a $p$-adic field. There exists a finite extension $K'/K$ such that $E$ has semi-stable reduction over $K'$. \end{theorem} \begin{proof} See \cite[Proposition VII.5.4]{Si}. \end{proof} \begin{remark} We recall that if $E$ has additive reduction at $p$, it attains semi-stable reduction at the places above $p$ after base change to a suitable finite extension. If it has good reduction at $p$, then the reduction type remains the same at all places above $p$. If it has nonsplit multiplicative reduction at $p$, it becomes split after a base change to a quadratic extension. \end{remark} We say that $E$ has \textit{potentially good reduction} at $p$ if there exists a finite extension $F/\QQ$ such that the base-change of the curve to $F$ has good reduction at the places of $F$ above $p$. By \cite[Prop.~VII.5.5]{Si}, this is equivalent to saying that the $j$-invariant of the curve is a $p$-adic integer. \textit{Potentially multiplicative reduction} is defined in a similar way. \subsection{Potentially good reduction}\label{ssec: potgoodred} In this subsection, we assume that $E$ has potentially good reduction at $p$. Let $K$ be a $p$-adic field. Let $m$ be an integer greater than 2 and coprime to $p$. Let $K^{ur}$ be the maximal unramified extension of $K$. Define $L\colonequals K^{ur}(E[m])$. The extension $L$ is independent of $m$.
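For $p\geq 5$, the degree of the extension needed to reach good reduction is governed by a simple formula in $\ord_p(\Delta_E)$, recalled below as \eqref{eq: semistabilitydef}. As a quick numerical illustration of that formula (a toy sketch, ours, not part of the argument):

```python
from math import gcd

def semistability_defect(v):
    """e = 12 / gcd(12, v), where v = ord_p(Delta_E); for additive,
    potentially good reduction at p >= 5 this lies in {2, 3, 4, 6}."""
    return 12 // gcd(12, v)
```

For instance, $\ord_p(\Delta_E)=2$ gives $e=6$, while $\ord_p(\Delta_E)=6$ gives $e=2$, the case treated in \S\ref{sec:form1}.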
Moreover, we have the following lemma. \begin{lemma}[Serre--Tate] The field $L$ is the minimal extension of $K^{ur}$ over which $E$ achieves good reduction. \end{lemma} \begin{proof} See \cite[Section 2, Corollaries 2 and 3]{serretate}. \end{proof} Write $\Phi\colonequals \Gal(L/K^{ur})$ and define the \emph{semistability defect} of $E$ as $e\colonequals \#\Phi$ ($e$ depends on $E$ and $p$, although we suppress this from the notation). We see that $\Phi$ is the inertia subgroup of $\Gal(L/K)$. For a description of $\Phi$ in the case when $p\in\{2,3\}$, see \cite{Kraus1990}. When $p\ge5$, the discussion in \cite[Section 5.6]{Serre1971/72} tells us that $\Phi$ is cyclic of order 2, 3, 4 or 6. Furthermore, the size of $\Phi$ is given by \begin{equation}\label{eq: semistabilitydef} e = \frac{12}{\text{gcd}(12,\ord_p(\Delta_E))}, \end{equation} where $\Delta_E$ is the minimal discriminant of $E/\QQ$. This allows us to show, for $p\geq 5$, that $E$ achieves good reduction over an extension of degree at most $6$. \begin{lemma}\label{lem: Kgdeg} Let $p\geq 5$. Suppose that $E$ has additive potentially good reduction at $p$. Then the semistability defect $e$ is the smallest element of $\{2,3,4,6\}$ such that $E$ obtains good reduction over $\Qp(\sqrt[e]{p})$. \end{lemma} \begin{proof} In this case, $\Phi= \Gal(L/\Qp^{ur})$ is cyclic of order $e$. So $L/\Qp^{ur}$ is tamely ramified and cyclic of order $e$; thus $L=\Qp^{ur}(\sqrt[e]{p})$. Now good reduction is invariant under unramified extensions, so $E$ obtains good reduction over $\Qp(\sqrt[e]{p})$. \end{proof} \begin{lemma}\label{ediv} Assume that $E$ has potentially good reduction at $p\geq 5$ and that $e>2$. Then $E$ is potentially ordinary at $p$ if and only if $e$ divides $p-1$. If $E$ is potentially supersingular at $p$, then $e$ divides $p+1$. \end{lemma} \begin{proof} See \cite[Lemma 2.1]{del-JNT}.
\end{proof} \subsection{Potentially multiplicative reduction}\label{sec:potmult} In the case when $E/\QQ$ has potentially multiplicative reduction, it achieves multiplicative reduction over a quadratic extension. This is because the $j$-invariant of $E$ has negative $p$-adic valuation, and thus $E$ becomes isomorphic to a \emph{Tate curve} upon base change to a quadratic extension by \cite[Theorem 5.3, Corollary 5.4]{silverman1994advanced}. See also \cite[Section 5.6 (b)]{Serre1971/72}. \subsection{The Birch--Swinnerton-Dyer conjecture over number fields}\label{ssec: BSD} The Birch and Swinnerton-Dyer conjecture for elliptic curves over a number field $K$ provides an expression for the leading term of the $L$-function $L(E/K, s)$ at $s=1$ in terms of arithmetic data of $E/K$, which we recall below. \begin{conjecture}\label{conj:BSD} Let $K$ be a number field. Then \begin{itemize} \item $\ord_{s=1} L(E/K,s) = \textup{rank}(E/K)$, \item the Tate--Shafarevich group of $E/K$, denoted by $\Sha(E/K)$, is finite and \item the leading term of the Taylor series at $s\!=\!1$ of the $L$-function $L(E/K, s)$ is given by \[ \frac{L^{(r)}(E/K,1)}{r!\,\Omega_{E/K}}=\frac{\textup{Reg}({E/K})|\Sha{(E/K)}| C_{E/K}}{\sqrt{|\Delta_K|}|E(K)_{\textup{tors}}|^2}, \tag{$\dagger$}\label{bsd1} \] \end{itemize} where $r$ is the order of vanishing of $L(E/K, s)$ at $s=1$, $\Delta_K$ is the discriminant of $K$, $\textup{Reg}$ denotes the regulator and $C_{E/K}$ is the product of Tamagawa numbers at finite places. \vspace{3pt}\\ Here, $\Omega_{E/K} \in \CC^\times$ is a `period' of $E$ which has a precise description in terms of differentials on $E$ over $K$ and its completions (see Definition~\ref{defn: period} below). We will refer to the expression on the right-hand side of \eqref{bsd1} as $\textup{BSD}(E/K)$. \end{conjecture} For our purposes, we will utilize the ``$p$-part'' of Conjecture~\ref{conj:BSD}. \begin{conjecture}\label{conj:pBSD} Let $K$ be a number field.
Then \begin{itemize} \item $\ord_{s=1} L(E/K,s) = \textup{rank}(E/K)$, \item the $p$-primary part of the Tate--Shafarevich group, $\Sha(E/K)[p^\infty]$, is finite and \item the leading term of the Taylor series at $s\!=\!1$ of the $L$-function $L(E/K, s)$ satisfies \[ \ord_p\left(\frac{L^{(r)}(E/K,1)}{r!\,\Omega_{E/K}}\right)=\ord_p\left(\frac{\textup{Reg}({E/K})|\Sha{(E/K)[p^\infty]}| C_{E/K}}{\sqrt{|\Delta_K|}|E(K)_{\textup{tors}}|^2}\right), \tag{$\dagger_p$}\label{bsdp} \] \end{itemize} where we use the same notation as in Conjecture \ref{conj:BSD}. \end{conjecture} \subsubsection{Periods in the Birch and Swinnerton-Dyer conjecture} Let $K$ be a number field. Let $v$ be a non-archimedean place of $K$ and write $K_v$ for the completion of $K$ at $v$ with ring of integers $\mathcal{O}_v$, and choose a uniformizer $\pi_{K_v}$. Let $q_v$ be the cardinality of the residue field. Let $|\cdot|_v$ denote the unique normalized absolute value on $K_v$ with $|\pi_{K_v}|_v=\frac{1}{q_v}$. Given an elliptic curve $E$ defined over $K$ (for our purposes, it is the base-change of $E/\QQ$), for each non-archimedean place $v$ of $K$, we can find a \emph{minimal} Weierstrass equation for $E$. Consequently, there is an associated discriminant $\Delta_v$ and an invariant (minimal) differential $\omega_v^{\min}$. When the class number of $K$ is 1, there exists a global minimal Weierstrass equation (i.e., minimal for the base-change of $E$ to $K_v$ for all non-archimedean places $v$ of $K$); see \cite[\S VIII.8]{Si}. This does not hold for general number fields. We discuss the factor in Conjecture \ref{conj:BSD} that encapsulates this phenomenon. The set of local points $E(K_v)$ admits the structure of a $K_v$-analytic manifold of dimension 1. For an open subset $U\subset E(K_v)$, an open subset $V \subset K_v$ and a chart $\beta:U \to V$, $\omega_v^{\min}$ is of the form $f(z)dz$ on $V$, where $dz$ is the usual differential on $K_v$ and $f$ is a Laurent series in $z$ without poles in $V$.
We define \[\int_{U}|\omega_v^{\min}|_v := \int_V |f(z)|_v d\mu,\] where $\mu$ is the Haar measure on $K_v$ normalized so that $\mathcal{O}_v$ has volume $1$. The integral over $E(K_v)$ is defined by gluing these charts. The following relates the Tamagawa number with the integral over $E(K_v)$. \begin{lemma} Denote the \emph{Tamagawa number} at $v$ by $c(E/K_v)$. We have \[\int_{E(K_v)}|\omega_v^{\min}|_v= c(E/K_v)\cdot{L_v(E, q_v^{-1})}.\] \end{lemma} \begin{proof} See \cite[Lemma 1.5]{AdamMorgan}. \end{proof} If $\omega$ is a non-zero global differential on $E$, there exists $\lambda \in K_v$ such that $\omega= \lambda \omega_v^{\min}$ and \[\int_{E(K_v)}|\omega|_v=|\lambda|_v\frac{c(E/K_v)|\tilde{E}_{ns}(k_v)|}{q_v}= \left|\frac{\omega}{\omega_v^{\min}}\right|_v c(E/K_v)\cdot L_v(E, q_v^{-1}),\] where $k_v$ denotes the residue field at $v$ and $\tilde{E}_{ns}$ the nonsingular locus of the reduction of $E$. We now give the following definitions for the periods occurring in \eqref{bsd1}. \begin{defn}\label{defn: period} For a global differential $\omega$ for $E$ over a number field $K$, we define \begin{align*} \Omega_{E/\CC, \omega}&\colonequals2\int_{E(\CC)}\omega \wedge \overline{\omega},\\ \Omega_{E/\mathbb{R},\omega}&\colonequals\int_{E(\mathbb{R})}|\omega|,\\ \Omega^{*}_{E/\mathbb{R}}&\colonequals\frac{\Omega_{E/\CC, \omega}}{\Omega_{E/\mathbb{R}, \omega}^2}. \end{align*} We define the \textbf{global period} \[\Omega_{E/K}=\prod_{v\nmid\infty}\left|\frac{\omega}{\omega_v^{\min}}\right|_v\cdot\prod_{v \mid \infty}\Omega_{E/K_v, \omega}.\] \end{defn} \begin{remark} For $K=\QQ$, the global minimal differential $\omega$ is also $\omega_v^{\min}$ for all primes $v$. Thus, \[\Omega_{E/\QQ}=\Omega_{E/\mathbb{R},\omega},\] which is the usual (real) Néron period for $E$. \end{remark} \begin{lemma}\label{dok} Let $E$ be an elliptic curve defined over a number field $K$. Let $F/K$ be a finite extension.
Then \[\Omega_{E/F}= \Omega_{E/K}^{[F:K]}\prod_{v \textup{ real}}(\Omega^*_{E/K_v})^{\#\{w\mid v \textup{ complex}\}}\prod_{v, w\mid v} \left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w},\] where $v$ runs over places of $K$ and $w$ over places of $F$ above $v$. \end{lemma} \begin{proof} This is \cite[Lemma 2.4]{Dokchitser_Dokchitser_2015}. \end{proof} We see that for $F=k_n$ (which is a totally real field) and $K=\QQ$, we have \begin{equation}\label{perratio} \Omega_{E/k_n}= \Omega_{E/\QQ}^{p^n} \prod_{v, w\mid v} \left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}, \end{equation} where $v$ runs over all places of $\QQ$ and $w$ over places of $k_n$ above $v$. We conclude with the following explicit description of the periods over number fields that appear in Conjecture~\ref{conj:BSD}. \begin{proposition}\label{fudge} Let $E/K$ be an elliptic curve over a number field and let $F/K$ be a finite extension. Let $v$ be a finite place of $K$ with $w\mid v$ a place of $F$ lying above it. Let $\omega_v^{\min}$ and $\omega_w^{\min}$ be the minimal differentials for $E/K_v$ and $E/F_w$, respectively. \begin{enumerate} \item If $E/K_v$ has good or multiplicative reduction, then $\displaystyle\left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}=1$. \item If $E/K_v$ has potentially good reduction and the residue characteristic is not $2$ or $3$, then $\displaystyle\left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}= q^{\left\lfloor e_{F/K} \ord_v(\Delta_{v})/12\right\rfloor}$, where $q$ is the size of the residue field at $w$, and $e_{F/K}$ is the ramification index of $F_w/K_v$. \end{enumerate} \end{proposition} \begin{proof} This is proved in \cite[Lemma 36 (5), (6)]{DokchitserEvansWiersema+2021+199+230}. \end{proof} \subsection{Iwasawa theory at potentially good, ordinary primes} In this subsection, $K$ denotes a number field.
Let $\overline{K}$ be an algebraic closure of $K$ and for any place $v$, let $K_v$ denote the completion of $K$ at $v$. Let $H^1(K, A)$ denote the cohomology group $H^1(\Gal(\overline{K}/K),A)$ for any $\Gal(\overline{K}/K)$-module $A$. Similarly, let $H^1(L/K, A)$ denote $H^1(\Gal(L/K),A)$. We define the $n$-Selmer group of $E/K$ as \[\Sel_n(E/K) \colonequals \text{ker}\left(H^1(K, E[n])\to \prod_v \frac{H^1(K_v, E[n])}{\text{im}(\kappa_v)}\right),\] where $\kappa_v:E(K_v)/nE(K_v) \to H^1(K_v, E[n])$ is the Kummer map. Let \[\mathcal{G}_E(K) \colonequals \text{im}\left(H^1(K,E[n]) \to \prod_v \frac{H^1(K_v, E[n])}{\text{im}(\kappa_v)}\right),\] where $v$ runs over all primes of $K$. We have the following exact sequence: \[0 \xrightarrow{} \text{Sel}_n(E/K) \xrightarrow{} H^1(K,E[n]) \xrightarrow{} {\mathcal{G}_E(K)} \xrightarrow{} 0. \] We begin with a lemma regarding Selmer groups over finite Galois extensions. \begin{lemma}\label{lem: sel1} Let $F/K$ be a finite Galois extension of degree $d$ such that $(n,d)=1$. Then \[\Sel_n(E/K) \cong \Sel_n(E/F)^{\Gal(F/K)}.\] \end{lemma} \begin{proof} Let $G := \Gal(F/K)$. The inflation-restriction exact sequence gives: \[0\to H^1(F/K, E(F)[n])\to H^1(K, E[n]) \to H^1(F, E[n])^G \to H^2(F/K, E(F)[n]).\] The first and last terms of this exact sequence are finite groups that are annihilated by both $n$ and $d$. As $n$ and $d$ are coprime, both groups are trivial. Thus, the restriction map $\res: H^1(K, E[n]) \to H^1(F, E[n])^G$ is an isomorphism. We have the following commutative diagram with exact rows.
\[\begin{tikzcd} 0 & {\text{Sel}_n(E/K)} && {H^1(K,E[n])} && {\mathcal{G}_E(K)} & 0 \\ \\ 0 & {\text{Sel}_n(E/F)^G} && {H^1(F, E[n])^G} && {\mathcal{G}_E(F)^G} \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-4] \arrow["s", from=1-2, to=3-2] \arrow[from=1-4, to=1-6] \arrow["\res", from=1-4, to=3-4] \arrow[from=1-6, to=1-7] \arrow["g", from=1-6, to=3-6] \arrow[from=3-1, to=3-2] \arrow[from=3-2, to=3-4] \arrow[from=3-4, to=3-6] \end{tikzcd}\] As $\res$ is an isomorphism, the snake lemma gives the following exact sequence: \[0 \to \text{ker}(s) \to 0 \to \text{ker}(g) \to \text{coker}(s) \to 0.\] We show that $\text{ker}(g)=0$ below. For a prime $v$ of $K$, let $w\mid v$ be a prime of $F$ and consider the natural restriction map $r_v: {H^1(K_v, E[n])}/{\text{im}(\kappa_v)} \to {H^1(F_w, E[n])}/{\text{im}(\kappa_w)}$. Then $\text{ker}(g)= \mathcal{G}_E(K) \cap \text{ker}(\prod_v r_v)$, so it suffices to show $\text{ker}(r_v)=0$ for all $v$. The exact sequence \[0 \to E(K_v)/nE(K_v) \to H^1(K_v, E[n]) \to H^1(K_v, E(\overline{K_v}))[n]\to 0 ,\] implies that \[\frac{H^1(K_v, E[n])}{\text{im}(\kappa_v)} \cong H^1(K_v, E(\overline{K_v}))[n].\] Similarly, we have \[\frac{H^1(F_w, E[n])}{\text{im}(\kappa_w)} \cong H^1(F_w, E(\overline{F_w}))[n].\] Thus, it suffices to show that the restriction map $r_{w,v}:H^1(K_v, E(\overline{K_v}))[n] \to H^1(F_w, E(\overline{F_w}))[n]$ is injective. As $\ker(r_{w,v})=H^1(F_w/K_v, E(F_w))[n]$, which is annihilated by $[F_w:K_v]$ and $n$, it follows that $\text{ker}(r_{w,v})=0$, as desired. 
\end{proof} We define the $p$-primary Selmer group \[\text{Sel}_{p^\infty}(E/K) = \lim_{\longrightarrow}\text{Sel}_{p^k}(E/K).\] For a finite Galois extension $F/K$ with degree co-prime to $p$, Lemma~\ref{lem: sel1} implies that \[\text{Sel}_{p^\infty}(E/K)\cong \text{Sel}_{p^\infty}(E/F)^{\Gal(F/K)}.\] For $E/\QQ$ with additive potentially good reduction at a prime $p$, we establish Mazur's control theorem for $p^\infty$-Selmer groups of $E$ along the $\Zp$-extension of $\QQ$. \begin{theorem}\label{thm:control} Let $E/\QQ$ be an elliptic curve with additive potentially good ordinary reduction at $p\geq 5$. Then Mazur's control theorem holds for ${\Sel}_{p^\infty}(E/\QQ_\infty)$, i.e., the kernel and the cokernel of the restriction map \[{\Sel}_{p^\infty}(E/k_n) \to {\Sel}_{p^\infty}(E/\QQ_\infty)^{\Gamma_n}\] are finite. Furthermore, their cardinalities are bounded independently of $n$. \end{theorem} \begin{proof} Let $K_g$ denote the minimal {Galois} extension of $\QQ$ over which $E$ achieves good reduction (note that $K_g\subseteq \QQ(\sqrt[e]{p},\mu_e)$, where $e\in\{2,3,4,6\}$). Let $(K_g)_\infty\colonequals K_g\QQ_\infty$. We have $\Gal((K_g)_\infty/K_g)\cong \Gamma$. Denote $\Gal(K_g/\QQ)$ by $G$. Then, for $p\geq 5$, we have $(|G|, p) = 1$. If we write $(K_g)_n=((K_g)_\infty)^{\Gamma_n}$, we have \[G \cong \Gal((K_g)_n/k_n) \cong \Gal((K_g)_\infty/\QQ_\infty),\quad n\gg0.\] Lemma \ref{lem: sel1} gives \[{\Sel}_{p^\infty}(E/\QQ_\infty)\cong \Sel_{p^\infty}(E/(K_g)_\infty)^G,\] and \[\text{Sel}_{p^\infty}(E/k_n)\cong \text{Sel}_{p^\infty}(E/(K_g)_n)^G\] when $n$ is large enough. As $E$ has good ordinary reduction at the primes of $K_g$ lying above $p$, Mazur's control theorem along the $\Zp$-extension $(K_g)_\infty/K_g$ in \cite{Mazur1972} tells us that the kernel and cokernel of the restriction map \[r_{g,n}: \text{Sel}_{p^\infty}(E/(K_g)_n) \to \text{Sel}_{p^\infty}(E/(K_g)_\infty)^{\Gamma_n}\] are finite and bounded independently of $n$. 
Note that if $A$ is a module over both $G$ and $\Gamma_n$, with the two actions commuting, then \[(A^G)^{\Gamma_n} = (A^{\Gamma_n})^G.\] Thus, the restriction map $r_n:\Sel_{p^\infty}(E/k_n)\rightarrow\Sel_{p^\infty}(E/\QQ_\infty)^{\Gamma_n} $ can be realized as \begin{align*} \Sel_{p^\infty}(E/k_n)\cong\Sel_{p^\infty}(E/(K_g)_n)^G\stackrel{r_{g,n}}\longrightarrow\left(\Sel_{p^\infty}(E/(K_g)_\infty)^{\Gamma_n}\right)^{G}\\ =\left(\Sel_{p^\infty}(E/(K_g)_\infty)^G\right)^{\Gamma_n}\cong\Sel_{p^\infty}(E/\QQ_\infty)^{\Gamma_n}. \end{align*} It follows that $\ker (r_n)= \ker (r_{g,n})^G$ and $\mathrm{Im} (r_n)=\mathrm{Im} (r_{g,n})^G$. Furthermore, as the order of $G$ is coprime to $p$ and $\mathrm{Im}(r_{g,n})$ is a $p$-group, we have $H^1(G,\mathrm{Im}(r_{g,n}))=0$. Taking $G$-cohomology of the short exact sequence \[ 0\rightarrow\mathrm{Im}(r_{g,n})\rightarrow \Sel(E/(K_g)_\infty)^{\Gamma_n}\rightarrow\coker(r_{g,n})\rightarrow0 \] gives $\coker(r_{g,n})^G=\coker(r_n)$, from which the theorem follows. \end{proof} Define the Pontryagin dual of $\Sel_{p^{\infty}}(E/\QQ_\infty)$ as \[\cX(E/\QQ_\infty) \colonequals \textup{Hom}(\text{Sel}_{p^\infty}(E/\QQ_\infty), \QQ_p/\ZZ_p).\] Similarly define $\cX(E/(K_g)_\infty)$. The following conjecture is due to Mazur (see \cite[Conjecture~1.3]{greenberg}). \begin{conjecture}\label{conj:tor} Let $F$ be a number field and let $F_\infty/F$ denote the cyclotomic $\Zp$-extension. Let $E$ be an elliptic curve such that $E/F$ has good ordinary reduction at all primes lying above $p$. Then $\cX(E/F_\infty)$ is a torsion $\Lambda$-module. \end{conjecture} \begin{remark} The best known result in this direction is the work of Kato \cite{kato1} combined with the non-vanishing result for $L$-values due to Rohrlich \cite{Rohrlich1984}, which implies the above when $F$ is an abelian extension over $\QQ$.
\end{remark} \begin{lemma} \label{lem:cortorsion} Let $E/\QQ$ be an elliptic curve with additive potentially good ordinary reduction at $p$. If Conjecture~\ref{conj:tor} holds for $E$ and $F=\QQ(\sqrt[e]{p},\mu_e)$, then $\cX(E/\QQ_\infty)$ is $\Lambda$-torsion. \end{lemma} \begin{proof} It follows from Lemma~\ref{lem: sel1} that there exists a surjective map $\cX(E/(K_g)_\infty)\rightarrow \cX(E/\QQ_\infty)$. In particular, if $\cX(E/(K_g)_\infty)$ is $\Lambda$-torsion, then so is $\cX(E/\QQ_\infty)$. \end{proof} The conclusion of Lemma~\ref{lem:cortorsion}, combined with the control theorem given in Theorem~\ref{thm:control}, implies that $\rank(E(k_n))$ is bounded above by the $\lambda$-invariant of $\cX(E/\QQ_\infty)$. Let $r_\infty=\displaystyle\lim_{n\rightarrow\infty}\rank(E(k_n))$. We have: \begin{theorem}\label{sha} Assume that $E$ is an elliptic curve defined over $\QQ$ and that $E$ has potentially good ordinary reduction at $p \geq 5$. Furthermore, assume that $\cX(E/\QQ_\infty)$ is $\Lambda$-torsion and that $\Sha(E/k_n)[p^\infty]$ is finite for all $n$. Then there exist integers $\lambda_E, \mu\geq 0$ and $\nu$ depending only on $E$ such that \[|\Sha(E/k_n)[p^\infty]|=p^{(\lambda_E- r_\infty)n + \mu p^n + \nu} \text{ for all } n\gg0.\] \end{theorem} \begin{proof} The argument for the good ordinary case as given in \cite[proof of Theorem~1.10]{greenberg} carries over under our hypotheses. \end{proof} \section{Formulae for $\lambda$-invariants at additive primes}\label{sec:form1} \subsection{Potential semi-stable reduction over a quadratic extension} We first focus on the case where $E/\QQ$ is additive at $p$ and achieves good or multiplicative reduction over a quadratic extension, i.e., the case when the semistability defect $e$ is equal to $2$. Let $E^F$ be the quadratic twist of $E$ over $F\colonequals\QQ(\sqrt{(-1)^{(p-1)/2}p})$ as in \S~\ref{sec:intro}.
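Concretely, for odd $p$ the field $\QQ(\sqrt{(-1)^{(p-1)/2}p})$ is $\QQ(\sqrt{p^*})$ for the signed prime $p^*=\pm p$ normalised so that $p^*\equiv 1\pmod 4$; this is the quadratic field of discriminant $p^*$, ramified only at $p$. A one-line Python sketch of this normalisation (an illustration, ours):

```python
def p_star(p):
    """Signed prime p* = (-1)^((p-1)/2) * p for odd p, i.e. p* = p when
    p = 1 (mod 4) and p* = -p when p = 3 (mod 4); always p* = 1 (mod 4)."""
    return p if p % 4 == 1 else -p
```

Thus for $p=5$ the twist is over the real field $\QQ(\sqrt{5})$, while for $p=7$ it is over the imaginary field $\QQ(\sqrt{-7})$, matching the two cases of Theorem~\ref{thm: pal} below.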
We begin with the following proposition, which can be obtained from an analysis of the discriminants and the invariants $c_4$ and $c_6$ associated with the minimal Weierstrass equations for $E$ and $E^F$. \begin{proposition} Let $E$ be an elliptic curve defined over $\QQ$ with additive reduction at $p$ such that $e=2$. Then $E^F$ has semistable reduction at $p$. \end{proposition} Next, we recall the main theorem of \cite{pal}, which gives a relation between the Néron periods of $E$ and those of its quadratic twist, applied to the additive case. \begin{theorem}\label{thm: pal} Let $E^F$ denote the quadratic twist of $E$ over $F=\QQ(\sqrt{(-1)^{(p-1)/2}p})$, with $p$ odd. Assume that $E$ has additive reduction at $p$ but $E^F$ has semistable reduction at $p$. Then the periods of $E$ and $E^F$ are related as follows: If $p\equiv 1 \pmod{4}$, then \[\Omega^+_{E^F} = u_1\sqrt{p}\Omega^+_{E},\] and if $p\equiv 3 \pmod{4}$, then \[\Omega^-_{E^F} = u_2 c_\infty(E^F)\sqrt{p}\Omega^+_{E},\] where $u_1,u_2$ are powers of $2$ and $c_\infty(E^F)$ is the number of connected components of $E^F(\mathbb{R})$. \end{theorem} \begin{proof} The result \cite[Corollary 2.6]{pal} gives the relation for the potentially good case. For the potentially multiplicative case, see Prop. 2.4 of \textit{op. cit.} and consider the change in $p$-adic valuations of the invariants $\Delta_{E^F}$ and $c_4(E^F)$ upon twisting over $F$. \end{proof} In the forthcoming proofs, we relate $\lambda(\theta_{n,i}(E))$ to $\lambda(\theta_{n,i+(p-1)/2}(E^F))$ for even $i$. The analytic $\lambda$-invariants of $\theta_n(E^F)$ are well-behaved for large $n$ since there exists a $p$-adic $L$-function for $E^F$. \begin{theorem}\label{quad} Let $E/\QQ$ be an elliptic curve with additive reduction at an odd prime $p$. Let $i$ be an even integer between $0$ and $p-2$.
Assume that \begin{itemize} \item the quadratic twist $E^F$ has either good ordinary or multiplicative reduction at $p$ and \item the $\mu$-invariant of $L_p(E^F,\omega^{(p-1)/2+i}, T)$ is zero and the $\mu$-invariant of $\theta_{n,i}(E)$ is non-negative. \end{itemize} Let $\lambda(E^F, \omega^{{(p-1)/2+i}})$ denote the $\lambda$-invariant of $L_p(E^F, \omega^{{(p-1)/2+i}}, T)$. Then, for $n$ sufficiently large, \begin{align*} \mu(\theta_{n,i}(E)) &= 0, \\ \lambda(\theta_{n,i}(E))&= \frac{(p-1)}{2}\cdot{p^{n-1}} + \lambda(E^F, \omega^{{(p-1)/2+i}}).\end{align*} \end{theorem} \begin{remark} Recall from the discussion in \S\ref{sec:potmult} that when $E$ has potentially multiplicative reduction, it necessarily achieves multiplicative reduction over a quadratic extension. Thus, Theorem~\ref{quad} gives us a formula for $\lambda(\theta_{n,i}(E))$ for all cases of potentially multiplicative reduction provided that the assumptions on the $\mu$-invariants hold. We also note that the integrality of the $p$-adic $L$-function attached to $E^F$ is not guaranteed \textit{a priori} since we normalise by the Néron periods, but our assumption on the $\mu$-invariant ensures we have an integral power series (otherwise we would have $\mu<0$). Similarly, the assumption on $\mu(\theta_{n,i}(E))$ is to ensure integrality. Alternatively, assuming $\mu(\theta_{n,i}(E))= \mu(L_p(E^F, \omega^{(p-1)/2+i}, T))$ for all large $n$ also gives us the same formula for the $\lambda$-invariant. \end{remark} \begin{proof} We give the proof when $i=0$ for notational convenience; the entire argument remains the same for a general even $i$. For a character $\chi$ on $G_n$, we have \[L(E,\chi, 1) = L(E^F, \omega^{(p-1)/2}\chi, 1),\] where $\omega^{(p-1)/2}$ is the quadratic character corresponding to the quadratic extension $F/\QQ$. 
By the interpolation property of Mazur--Tate elements, we have \begin{align*} \overline{\chi}(\theta_{n, 0}(E)) &= \tau(\overline{\chi})\frac{L(E, \chi, 1)}{\Omega_E^+}, \end{align*} which can be rewritten as \[\overline{\chi}(\theta_{n, 0}(E)) = {\frac{\tau(\overline{\chi})}{\tau(\omega^{(p-1)/2}\overline{\chi})}}\cdot {\frac{\Omega_{E^F}^{\epsilon'}}{\Omega_E^+}}\cdot\left(\tau(\omega^{(p-1)/2}\overline{\chi}) \frac{L(E^F,\omega^{(p-1)/2}{\chi}, 1)}{\Omega_{E^F}^{\epsilon'}}\right),\] where $\epsilon'=(-1)^{(p-1)/2}$. (The theorem's hypothesis that $i$ is even is needed here since Theorem \ref{thm: pal} only gives us expressions for the period ratios corresponding to even characters $\chi\omega^i$.) The ratio of the two Gauss sums is a $p$-adic unit (since $\omega^{(p-1)/2}\overline{\chi}$ and $\overline{\chi}$ have the same conductor when $n$ is large enough), and the ratio of periods, up to $p$-adic units, is $\sqrt{p}$ by Theorem \ref{thm: pal}. Taking valuations on both sides gives \[\ord_p(\overline{\chi}(\theta_{n, 0}(E))) = \frac{1}{2}+ \ord_p\left(\tau(\omega^{(p-1)/2}\overline{\chi}) \frac{L(E^F,\omega^{(p-1)/2}{\chi}, 1)}{\Omega_{E^F}^{\epsilon'}}\right).\] We focus on computing the valuation on the right-hand side. Crucially, we can attach a $p$-adic $L$-function to $E^F$ having the following interpolation property: \[L_p(E^F,\omega^{(p-1)/2}, \zeta_{p^n}-1)= \frac{1}{\alpha_{E^F}^{n+1}}\left(\tau(\omega^{(p-1)/2}\overline{\chi}) \frac{L(E^F,\omega^{(p-1)/2}{\chi}, 1)}{\Omega_{E^F}^{\epsilon'}}\right),\] where $\zeta_{p^n}$ is the image of a topological generator of $\Gamma$ under $\overline{\chi}$, and $\alpha_{E^F}$ is the root of the polynomial $X^2-a_p(E^F)X+p$ with trivial $p$-adic valuation when $E^F$ is ordinary at $p$, and is $\pm1$ when $E^F$ is multiplicative at $p$.
This gives a formula for the valuation of $\overline{\chi}(\theta_{n, 0}(E))$, via the $p$-adic Weierstrass preparation theorem, in terms of the Iwasawa invariants of $L_p(E^F,\omega^{(p-1)/2}, T)$ for $n$ large enough: \begin{equation}\label{ord1} \ord_p(\overline{\chi}(\theta_{n, 0}(E)))= \frac{1}{2} + \frac{\lambda(E^F, \omega^{(p-1)/2})}{p^{n-1}(p-1)} \end{equation} as we have assumed the $\mu$-invariant vanishes for this $p$-adic $L$-function. We now compute $\ord_p(\overline{\chi}(\theta_{n, 0}(E)))$ differently as follows. For each $n$, define $\mu_n\colonequals\mu(\theta_{n,0}(E))$ and $\lambda_n\colonequals\lambda(\theta_{n,0}(E))$. We can write \begin{align*} \theta_{n, 0}(E)(T)&=p^{\mu_n}(T^{\lambda_n}+ p\cdot g_n(T)) u_n(T),\end{align*} where $g_n(T) \in \Zp[T]$ and $u_n(T)\in \Zp[[T]]^\times$. Evaluating at $T=\zeta_{p^n}-1$ and recalling that $\ord_p(\zeta_{p^n}-1)=\frac{1}{p^{n-1}(p-1)}$, we have \begin{align*} \ord_p(\overline{\chi}(\theta_{n, 0}(E))) &\geq \mu_n+ \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}.\end{align*} Combining these together, we get, for $n\gg0$, \begin{equation}\label{compare} \frac{1}{2} + \frac{\lambda(E^F, \omega^{(p-1)/2})}{p^{n-1}(p-1)}\geq \mu_n+ \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}. \end{equation} For $n$ large enough, the left-hand side can be made strictly less than $1$, so under our assumption that $\mu_n\geq 0$, we must have $\mu_n=0$ and \[1 > \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}.\] Since $v_p(g_n(\zeta_{p^n}-1))\geq 0$ (as $g_n(T) \in \Zp[T]$), we deduce that $\frac{\lambda_n}{p^{n-1}(p-1)}<1$. With this, \eqref{compare} becomes an equality and \begin{equation} \frac{\lambda_n}{p^{n-1}(p-1)} = \frac{1}{2} + \frac{\lambda(E^F, \omega^{(p-1)/2})}{p^{n-1}(p-1)}, \end{equation} which results in the desired formula for $\lambda_n$.\end{proof} We investigate the potentially supersingular case next.
Recall from the statement of Theorem~\ref{thm:PW-ss} that we define \[ q_n=\begin{cases} p^{n-1}-p^{n-2}+\cdots+p-1 & \text{if $n$ is even,}\\ p^{n-1}-p^{n-2}+\cdots+p^2-p & \text{if $n$ is odd;} \end{cases} \] equivalently, $q_n=\frac{p^{n}-1}{p+1}$ for $n$ even and $q_n=\frac{p(p^{n-1}-1)}{p+1}$ for $n$ odd. Using a similar argument and the plus and minus $p$-adic $L$-functions defined in \cite{pollack03}, we have: \begin{theorem}\label{ssquad} Let $E/\QQ$ be an elliptic curve with additive reduction at an odd prime $p$. Let $i$ be an even integer between $0$ and $p-2$. Assume that \begin{itemize} \item the quadratic twist $E^F$ has supersingular reduction at $p$ with $a_p(E^F)=0$ and \item the $\mu$-invariants of the $\omega^{(p-1)/2+i}$-isotypic components of the plus and minus $p$-adic $L$-functions are both zero, that is, $\mu(L^\pm_p(E^F, \omega^{(p-1)/2+i}, T)) = 0$, and $\mu(\theta_{n,i}(E))$ is non-negative. \end{itemize} Let $\lambda^\pm(E^F, \omega^{(p-1)/2+i})$ denote the $\lambda$-invariants of $L^\pm_p(E^F, \omega^{(p-1)/2+i}, T)$ respectively. Then we have, for all $n$ large enough, \begin{align*} \mu(\theta_{n,i}(E)) &= 0, \\ \lambda(\theta_{n,i}(E))&= \frac{(p-1)}{2}\cdot p^{n-1} + q_n+ \begin{cases} \lambda^+(E^F, \omega^{(p-1)/2+i}) & \text{if $n$ is even,}\\ \lambda^-(E^F, \omega^{(p-1)/2+i}) & \text{if $n$ is odd}.\end{cases} \end{align*} \end{theorem} \begin{remark} A well-known conjecture of Greenberg (\cite[Conjecture~1.11]{greenberg}) asserts that for a good ordinary prime $p$, the $\mu$-invariant is always zero when $E[p]$ is irreducible as a $\Gal(\overline{\QQ}/\QQ)$-module. In the supersingular case, it is conjectured (\cite[Conjecture~6.3]{pollack03}) that both $\mu(L^\pm_p(E^F, \omega^{i}, T))=0$ for all $0\leq i\leq p-2$. Both conjectures are supported by a large amount of numerical data. \end{remark} \begin{proof} One proceeds as in the proof of Theorem \ref{quad}. The only difference is that we relate the Mazur--Tate elements of $E^F$ to the plus and minus $p$-adic $L$-functions via \cite[Proposition~6.18]{pollack03}.
Indeed, we have \[\ord_p(\overline{\chi}(\theta_{n, 0}(E))) = \frac{1}{2}+\frac{ q_n + \lambda^\pm(E^F, \omega^{(p-1)/2+i})}{p^{n-1}(p-1)},\] where the sign is chosen according to the parity of $n$ (see Theorem \ref{thm:PW-ss}, \cite[Theorem 4.1]{PW}). We write the analogue of equation \eqref{compare} and for large $n$, the inequality \[1 > \frac{1}{2}+ \frac{q_n + \lambda^\pm(E^F, \omega^{(p-1)/2+i})}{p^{n-1}(p-1)}\] allows us to proceed as before to conclude the proof. \end{proof} \subsubsection{Relation to Delbourgo's $p$-adic $L$-functions} We discuss Delbourgo's construction of $p$-adic $L$-functions in \cite{del-compositio} for additive primes briefly to highlight that our proofs of Theorems \ref{quad} and \ref{ssquad} are in fact closely related to the aforementioned work. Delbourgo's construction is based on considering the newform obtained by twisting $f_E$ by a power of $\omega$. This twist has a non-trivial Euler factor at $p$, for which the construction of a $p$-adic $L$-function is possible. The potentially good ordinary case satisfies Hypothesis (G) therein (see \cite[Section 1.5]{del-compositio}) since good reduction is achieved over an abelian extension of $\Qp$ with degree dividing $(p-1)$ (see Lemmas \ref{lem: Kgdeg} and \ref{ediv}). In the potentially multiplicative case, Delbourgo considered the quadratic twist as we have done above. See also \cite[Lemma 1]{Bayer1992}, which explains how a twist of $f_E$ can be used to obtain a $p$-ordinary newform with level $N$ such that $p\mid\mid N$ when $E$ has potentially good ordinary reduction at $p$. Let $e$ be the semistability defect, and let $\tilde{f}$ be the newform $f_E \otimes \omega^{(p-1)/e}$, where $E$ is an elliptic curve over $\QQ$ with potentially good ordinary reduction. To ensure that one of the roots of the Hecke polynomial of $\tilde{f}$ is a $p$-adic unit, one may need to twist $f_E$ by $\omega^{-(p-1)/e}$ instead. In what follows, we assume that the twist is by $\omega^{(p-1)/e}$. 
Consider the $\omega^{-(p-1)/e}$-isotypic component of the $p$-adic $L$-function $L_p(\tilde{f},\omega^{-(p-1)/e},T)$ (the corresponding Mazur--Tate element will be $\theta_{n, -(p-1)/e}(\tilde{f})$). Thus, an argument similar to the proof of Theorem~\ref{quad}, assuming integrality and the vanishing of the $\mu$-invariant, would give formulae of the form \begin{align}\label{eqn: delb} \lambda(\theta_{n,0}(E)) = \ord_p\left(\frac{\Omega_{\tilde{f}}}{\Omega_E}\right)\cdot p^{n-1}(p-1)+\lambda(L_p(\tilde{f},\omega^{-(p-1)/e},T)). \end{align} Note that the assumption on $\mu(L_p(\tilde{f},\omega^{-(p-1)/e},T))$ would pin down a period for $\tilde{f}$. We also mention that when $E$ has additive potentially supersingular reduction with $e=2$, we can use the plus and minus $p$-adic $L$-functions of $E^F$ to construct the counterparts for $E$. Delbourgo commented that his construction would give unbounded $p$-adic $L$-functions. Indeed, combining Delbourgo's construction with the work of \cite{pollack03}, one may decompose these unbounded $p$-adic $L$-functions into bounded ones, resulting in the elements utilized in the proof of Theorem~\ref{ssquad}. \subsection{Potentially good ordinary reduction: The general case} We give a more general result in the potentially good ordinary case assuming Conjectures \ref{conj:BSD} and \ref{conj:tor}. This is Theorem \ref{thmB} from \S~\ref{sec:intro}. \begin{theorem}\label{thm: bsd} Let $E/\QQ$ be an elliptic curve with additive, potentially good ordinary reduction at a prime $p\geq 5$ and minimal discriminant $\Delta_E$. Assume $\mathcal{X}(E/\QQ_\infty)$ is $\Lambda$-torsion. Assume furthermore that \begin{itemize} \item Conjecture~\ref{conj:BSD} is true over $k_{n}$ for all $n \gg 0$; \item $\mu(\cX(E/\QQ_\infty)) = \mu(\theta_{n,0}(E))$ for $n\gg0$; \item $\lambda(\theta_{n,0}(E))<p^{n-1}(p-1)$ for $n\gg0$.
\end{itemize} Then, when $n$ is sufficiently large, we have \begin{align*} \lambda(\theta_{n,0}(E)) &= \frac{(p-1)\cdot \ord_p(\Delta_E)}{12}\cdot p^{n-1}+{\lambda(\cX(E/\QQ_\infty))}. \end{align*} \end{theorem} \begin{proof} By \cite{Rohrlich1984}, $L(E/\QQ, \chi, 1)=0$ for only finitely many Dirichlet characters $\chi$ of $p$-power order. We also have \[L(E/k_n,s)= \prod_{\chi}L(E/\QQ, \chi, s),\] where the product is taken over all characters $\chi$ on $\Gal(k_n/\QQ)$. Thus, the order of vanishing of $L(E/k_n, s)$ at $s=1$ must stabilize for $n$ large. Furthermore, $E(\QQ_\infty)_{\text{tor}}$ is finite by \cite[Theorem~3]{Serre1971/72} (see also the main result of \cite{CDKN} for a precise description). Under the Birch and Swinnerton-Dyer conjecture, this implies that $E(\QQ_\infty)$ is finitely generated. Choose $n$ large enough so that \begin{itemize} \item $\ord_{s=1}L(E/k_{n+1}, s)=\ord_{s=1}L(E/k_{n}, s)$; \item $E(k_{n+1})=E(k_{n})$; \item $\mathrm{Tam}(E/k_{n+1})= \mathrm{Tam}(E/k_{n})$. \end{itemize} For such $n$, we have \begin{equation}\label{pbsd} \frac{|\Sha(E/k_{n+1})[p^\infty]|}{|\Sha(E/k_{n})[p^\infty]|}= \frac{\Omega_{E/k_{n}}}{\Omega_{E/k_{n+1}}}\cdot \prod_{\chi}L(E/\QQ, \chi, 1)\cdot\sqrt{\frac{|\Delta_{k_{n+1}}|}{|\Delta_{k_{n}}|}}\cdot \frac{\textup{Reg}(E/k_{n})}{\textup{Reg}(E/k_{n+1})}, \end{equation} where the product is taken over all characters $\chi$ on $G_{n+1}$ that do not factor through $G_n$. There are $p^{n}(p-1)$ such characters, each of conductor $p^{n+2}$. The conductor-discriminant formula tells us that \[\ord_p\left(\sqrt{\frac{|\Delta_{k_{n+1}}|}{|\Delta_{k_{n}}|}}\right)= p^{n}(p-1)\cdot \frac{n+2}{2}.\] For any finite extension of number fields $L_1/L_2$ such that $E(L_1)=E(L_2)$, we have \[\frac{\textup{Reg}(E/L_1)}{\textup{Reg}(E/L_2)}=[L_1:L_2]^{\text{rank}(E(L_2))}.\] Thus, for $L_1/L_2=k_{n+1}/k_n$, the quotient of the regulators is equal to $p^{\text{rank}(E(k_{n+1}))}$.
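To see the regulator identity, recall that the Néron--Tate height pairing scales with the base field: $\langle P, Q\rangle_{L_1}=[L_1:L_2]\cdot\langle P, Q\rangle_{L_2}$ for $P, Q \in E(L_2)$. Hence, if $P_1,\dots,P_r$ generate $E(L_2)$ modulo torsion, where $r=\text{rank}(E(L_2))$, then \[\textup{Reg}(E/L_1)=\det\left([L_1:L_2]\cdot\langle P_i, P_j\rangle_{L_2}\right)_{1\leq i,j\leq r}=[L_1:L_2]^{r}\cdot \textup{Reg}(E/L_2).\]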
We deduce from \eqref{perratio} that \begin{equation} \frac{\Omega_{E/k_{n+1}}}{\Omega_{E/k_{n}}}= \Omega_{E/\QQ}^{p^n(p-1)} \prod_{v, w\mid v} \left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}, \end{equation} where $v$ runs over the places of $k_{n}$ and $w$ over the places of $k_{n+1}$ above $v$. Putting these together, the $p$-adic valuation of the right-hand side of \eqref{pbsd} can be expressed as \begin{equation*} \ord_p\left(\prod_\chi\frac{L(E/\QQ, \chi, 1)}{\Omega_{E/\QQ}} \right)- \ord_p\left(\prod_{v, w\mid v} \left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}\right) + p^{n}(p-1)\cdot \frac{n+2}{2}- \text{rank}(E(k_{n+1})). \end{equation*} Applying Proposition~\ref{fudge}(2) to the totally ramified extension $(k_{n+1})_p/(k_n)_p$ gives \[\ord_p\left(\prod_{v, w\mid v} \left|\frac{\omega_v^{\min}}{\omega_w^{\min}}\right|_{w}\right)=\left\lfloor \frac{p^n(p-1)\ord_p(\Delta_{E})}{12}\right\rfloor.\] From the interpolation property of Mazur--Tate elements, we have \[\ord_p\left(\tau(\chi)\cdot\frac{L(E/\QQ, \chi, 1)}{\Omega_{E/\QQ}}\right) = \mu(\theta_{n+1,0}(E))+ \frac{\lambda(\theta_{n+1,0}(E))}{p^{n}(p-1)}.\] Since the Gauss sums satisfy $\tau(\chi)\tau(\overline{\chi})=\pm p^{n+2},$ we have \[\ord_p\left(\prod_\chi \tau(\chi)\right)=p^n(p-1)\cdot \frac{n+2}{2}.\] On the left-hand side of \eqref{pbsd}, we know from Theorem \ref{sha} that for sufficiently large $n$, we have \[\ord_p\left(\frac{|\Sha(E/k_{n+1})[p^\infty]|}{|\Sha(E/k_{n})[p^\infty]|}\right)= \mu_E\cdot p^n(p-1) + \lambda_E - \text{rank}(E(\QQ_\infty)).\] Hence, we deduce that \[\mu_E\cdot p^n(p-1) + \lambda_E = \mu(\theta_{n+1,0}(E))\cdot p^n(p-1)+ \lambda(\theta_{n+1,0}(E)) - \left\lfloor \frac{p^n(p-1)\cdot\ord_p(\Delta_{E})}{12}\right\rfloor.\] Assuming $\mu_E=\mu(\theta_{n+1,0}(E))$ for large $n$ gives the formula for $\lambda(\theta_{n+1,0}(E))$, as desired.
\end{proof} \begin{remark} Alternatively, in the final equation above, assuming $\mu_E=0$, and $\mu(\theta_{n+1,0}(E))\geq 0$ would also give us the formula since $\lambda_E$ is a non-negative constant independent of $n$ and $\ord_p(\Delta_E)<12$. \end{remark} \begin{remark} We note that Theorem~\ref{thm: bsd} gives an expression for the $p$-adic valuations of the ratio of periods that occurs in \eqref{eqn: delb}. \end{remark} \subsection{Speculation for the potentially supersingular case} For elliptic curves with additive potential supersingular reduction at a prime $p\geq3$, the extension over which supersingular reduction is achieved is of degree $e$ dividing $p+1$ and is not Galois. Taking its Galois closure gives us a non-abelian extension of the form $\QQ(\zeta_m, \sqrt[e]{p})$ for $m=3$ or 4. Unlike the potentially good ordinary case, the growth in the $\lambda$-invariants cannot be attributed solely to the valuation of the ratio of periods (see \eqref{perratio}), as will become clear from the following example. \begin{example}\label{example} Consider the elliptic curve $E=$\href{https://www.lmfdb.org/EllipticCurve/Q/4232i1/}{4232i1} at $p=23$. This curve achieves supersingular reduction at $p$ over an extension of degree $e=6$. The $\lambda$-invariants of the associated Mazur--Tate elements are seen to satisfy, for $n\leq 9$, \[\lambda(\theta_{n,0}(E))= 19\cdot 23^{n-1}+1.\] One can subtract off the growth coming from the periods (as given by Proposition~\ref{fudge}(2)) to get, under the Birch and Swinnerton-Dyer conjecture, \[\ord_p(\Sha(E/k_{n})[p^\infty])= \begin{cases} 16q_{n-1}+17 \text{ for $n$ even,} \\ 16q_{n-1}+1 \text{ for $n$ odd.} \end{cases}\] The method in the proof of Theorem \ref{ssquad} does not apply in this case. To the authors' knowledge, such growth for $\Sha$ cannot be explained by the currently known plus and minus Iwasawa theory, which is what we would need for a proof similar to that of Theorem \ref{thm: bsd}. 
\end{example} We direct the reader to Table \ref{tab:pot.ss} at the end of the article for more examples. \begin{remark} Note that $\lambda(\theta_{n,0}(E))$ is predicted to be given by $19\cdot 23^{n-1}+1$ according to our numerical data. In particular, it does not exhibit the alternating plus/minus behaviour as in the good supersingular case; when the valuation coming from the period ratios \eqref{perratio} is added to the growth of $\Sha$, the contribution of $q_{n-1}$ seems to be eliminated. We do not have a theoretical explanation for this phenomenon at present. \end{remark} For elliptic curves with $j$-invariant $0$ or $1728$ with additive potentially supersingular reduction, computations in SageMath suggest the following conjecture (see the last 4 entries of Table \ref{tab:pot.ss}). \begin{conjecture}\label{cm} Let $E/\QQ$ be an elliptic curve with $j_E=0$ or $1728$ with additive, potentially supersingular reduction at $p \geq 5$. Assume that $E$ achieves supersingular reduction over an extension $K/\QQ$ with $[K:\QQ]>2$. Then there exist integers $\mathcal{M},\lambda^+, \lambda^-, \nu$ depending only on $E$ and $p$, such that if $|\Sha(E/k_{n})[p^\infty]|= p^{e_n}$ then \[e_n=\mathcal{M} p^n + (q_n+ \lambda^\pm)n+\nu\] for all $n$ large enough, where the sign is chosen according to the parity of $n$. \end{conjecture} \begin{remark} We computed the $\lambda$-invariants using SageMath and combined them with the Birch and Swinnerton-Dyer conjecture to formulate Conjecture~\ref{cm}. This conjecture states that $\Sha(E/k_{n})[p^\infty]$ shows growth similar to what is known for elliptic curves which have good supersingular reduction at $p$ (see, for example, \cite[Section~6.4]{pollack03}). We can equivalently state this in terms of the $\lambda$-invariants of the Mazur--Tate elements after adding the growth from the periods (\eqref{perratio}, Prop.~\ref{fudge}), but the above formulation makes the underlying `plus and minus' behaviour more apparent.
However, the data suggest that curves that do not satisfy these hypotheses have remarkably different growth of $\Sha(E/k_n)[p^\infty]$ as compared to what is known in the literature, as seen in Example \ref{example}. \end{remark} Curves that satisfy these hypotheses have two important properties. First, they have CM by $\QQ(\sqrt{-3})$ or $\QQ(i)$. Secondly, they admit sextic or quartic twists to curves with good supersingular reduction at $p$; however, their $L$-functions are related via twists by Hecke characters of finite order. We speculate that it may be possible to utilize these twists to make progress on Conjecture \ref{cm}, possibly by a modified argument similar to the one used in Theorem \ref{quad}. Finally, we refer the reader to \cite{lario1992serre}, which contains an interesting conjecture involving mod $p$ Galois representations of elliptic curves that are additive potentially supersingular at $p$. A better understanding of this conjecture may help shed light on cases that are not covered by Theorem~\ref{ssquad}. \section{Mazur--Tate elements with maximum $\lambda$-invariants}\label{sec: form2} As discussed in \S\ref{sec:known}, it has been shown in \cite[Example 3.4]{PW} that for the elliptic curve $E=X_0(11)$ at $p=5$, the Mazur--Tate elements of $E$ satisfy \[\lambda(\theta_{n,0}(E))=p^n-1\] for all $n \geq 1$, which is the maximum a non-zero element of $\Zp[G_n]$ can attain. We study this phenomenon for a larger family of elliptic curves. In particular, we show that such a formula for the $\lambda$-invariant is related to the $p$-adic valuation of the normalised $L$-value $L(E,1)/\Omega_E^+$.
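To see why $p^n-1$ is the maximal possible value, note that $G_n$ is cyclic of order $p^n$, so a choice of generator identifies $\Zp[G_n]$ with $\Zp[T]/((1+T)^{p^n}-1)$. Every element is thus represented by a polynomial of degree at most $p^n-1$, and the $\lambda$-invariant of a non-zero element, being the degree of the distinguished polynomial in its Weierstrass factorisation, is therefore at most $p^n-1$.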
\begin{remark} We note that if $\varphi$ is a modular symbol, then \[\theta_{n,0}(\varphi^+)=\theta_{n,0}(\varphi),\] where we decompose $\varphi=\varphi^++\varphi^-$ such that $\varphi^\pm$ is a $\pm1$-eigenvector of the involution $\iota$ (see \S\ref{sec:msmt} and recall that for the projection $i=0$, the Mazur--Tate elements interpolate twists by even characters on $\Gal(k_n/\QQ)$). Since we are concerned with $\theta_{n,0}$, it suffices to consider only $\varphi^+$ in our calculations. To simplify notation, we write $\theta_{n}(\varphi)$ for $\theta_{n,0}(\varphi)=\theta_{n,0}(\varphi^+)$ in our statements. \end{remark} We begin with a lemma involving the map $\cor^n_{n-1}$ defined in \S~\ref{sec:msmt}. \begin{lemma}\label{lem: corlambda} \begin{itemize} \item[(1)] For an integer $N\geq 1$ and a prime $p$, let $\phi \in \Symb(\Gamma_0(N),V_{k-2}(\Zp))$ be a modular symbol. For $n\geq 1$, we have \[\theta_{n}(\phi\mid\begin{psmallmatrix} p &0 \\ 0 & 1 \end{psmallmatrix})= p^{k-2}\cdot\cor^n_{n-1}(\theta_{n-1}(\phi)).\] \item[(2)] For $\theta \in \Zp[G_{n-1}]$, we have \[\mu(\cor_{n-1}^n(\theta))=\mu(\theta) \text{ and } \lambda(\cor_{n-1}^n(\theta))= p^n-p^{n-1}+\lambda(\theta).\] \end{itemize} \end{lemma} \begin{proof} Part (1) is \cite[Lemma 2.6]{PW}, whereas (2) is discussed in \cite[\S~4]{pollack05}. \end{proof} We discuss a setting in which the $\lambda$-invariants of the Mazur--Tate elements coming from weight $0$ modular symbols are of the form $p^n-1$. By an abuse of notation, the same symbol is used to denote a modular symbol and its image modulo $p$ in what follows. \begin{proposition}\label{lem: maxl} For a positive integer $N$ and an odd prime $p$, let $\phi \in \Symb(\Gamma_0(N),\Zp)$. Assume $\phi(\{\infty\}-\{0\}) \in \Zp^\times$.
If \begin{equation}\label{eq:cong} \phi^+(\{{a}/{p^{n+1}}\}-\{{a}/{p^{n}}\}) \equiv 0 \pmod{p} \end{equation} for all $a \in (\ZZ/p^n\ZZ)^\times$ and $n\geq 0$, then for all $n\geq 0$, we have \[\mu(\theta_n(\phi))=0 \text{ and } \lambda(\theta_n(\phi))=p^n-1.\] \end{proposition} \begin{proof} The condition \eqref{eq:cong} is equivalent to \[\phi^+(\{\infty\}-\{{a}/{p^n}\}) \equiv \phi^+\mid\begin{psmallmatrix} p & 0 \\ 0 & 1 \end{psmallmatrix}(\{\infty\}-\{{a}/{p^n}\}) \pmod{p}\] for all $a \in (\ZZ/p^n\ZZ)^\times$ and $n\geq 1$. Since $\theta_n(\phi)$ is determined by the values of $\phi^+$ on divisors of the form $\{\infty\}-\{a/p^n\}$, we use Lemma~\ref{lem: corlambda}(1) with the previous congruence to get \[\theta_n(\phi)\equiv \cor_{n-1}^{n}(\theta_{n-1}(\phi))\equiv \cdots \equiv \cor^n_0(\theta_0(\phi)) \pmod{p}.\] Now $\theta_0(\phi)=(p-1)\cdot\phi(\{\infty\}-\{1\})\cdot \sigma_1 \not \equiv 0 \pmod{p}$ by hypothesis since $\{\infty\}-\{1\}$ and $\{\infty\}-\{0\}$ are $\Gamma_0(N)$-equivalent. The proposition now follows from Lemma~\ref{lem: corlambda}(2). \end{proof} \begin{remark} Instead of the assumption on $\phi(\{\infty\}-\{0\})$, we can impose $\mu(\theta_n(\phi))=0$ for large $n$ to deduce the same formula for $\lambda(\theta_n(\phi))$ from the condition \eqref{eq:cong}. Moreover, if $\phi(\{\infty\}-\{0\})\equiv 0 \pmod{p}$ and the congruence condition on $\phi^+$ holds, then $\mu(\theta_n(\phi))\geq 1$. \end{remark} We now revisit the notion of $p$-stabilisation. Let $f \in S_2(\Gamma_0(N))$ and $p$ be an odd prime. Assume that $p\nmid N\cdot a_p(f)$. Let $\alpha$ be the $p$-adic unit root of $X^2-a_p(f)X+p$. We define \[f_\alpha(z)\colonequals f(z)-\frac{p}{\alpha}f(pz),\] so that $f_\alpha \in S_2(\Gamma_0(pN))$ is a $U_p$-eigenvector with eigenvalue $\alpha$. Let $\xi_f \in \Symb(\Gamma_0(N), \CC)$ be the $\CC$-valued modular symbol associated with $f$ as in \S\ref{sec:msmt}.
Then \[\xi_{f_\alpha}\colonequals \xi_f - \frac{1}{\alpha}\xi_f |\begin{psmallmatrix} p & 0 \\ 0 & 1 \end{psmallmatrix}\] is the `$p$-stabilised' modular symbol lying in $\Symb(\Gamma_0(pN), \CC)$ associated with $f_\alpha$. For $\xi_f$ and $\xi_{f_\alpha}$, there are associated cohomological periods $\Omega_f^\pm$ and $\Omega_{f_\alpha}^\pm$, respectively, and these are determined uniquely up to $p$-adic units. Normalising with these periods ensures that the respective modular symbols take values in $\Zp$. When $f$ corresponds to an elliptic curve $E$ defined over $\QQ$, we write $\xi_E$ for $\xi_f$, and $\xi_{E, \alpha}$ for $\xi_{f_\alpha}$. The following proposition is a straightforward generalisation of \cite[Example 3.4]{PW}. \begin{proposition}\label{coh} Let $f \in S_2(\Gamma_0(N))$ and $p$ be an odd prime not dividing $N$. Assume $a_p(f)\equiv 1 \pmod{p}$ and let $\alpha$ be the $p$-adic unit root of the Hecke polynomial of $f$ at $p$. Let $\varphi_f^+ \colonequals \xi_f^+/\Omega_f^+$ be the ``plus'' modular symbol associated with $f$ normalised by the cohomological period $\Omega_f^+$ (as discussed in Remark~\ref{rem:periods}). Then, ${\varphi}_f^+ \equiv {\varphi}_f^+ \big{|}\begin{psmallmatrix}p & 0 \\ 0 & 1\end{psmallmatrix} \pmod{p}$ if and only if $\ord_p({\Omega_{f_\alpha}^+/}{\Omega_f^+})>0$. If these equivalent conditions hold and $\varphi_f^+(\{\infty\}-\{0\})\in \Zp^\times$, we have \[\lambda(\theta_{n}(f))= p^n-1\] for all $n\geq 0$. \end{proposition} \begin{proof} As $a_p(f)\equiv 1\pmod p$, we have $\alpha\equiv 1\pmod p$. Thus, the equivalence of the stated conditions follows from the definitions of $\Omega_f^+$ and $\Omega_{f_\alpha}^+$. The final assertion follows from Proposition~\ref{lem: maxl}. \end{proof} \begin{remark} In \cite[Lemma 3.6 and Prop. 3.7]{PW}, it has been shown that when the mod $p$ Galois representation attached to $f$ is irreducible, $\ord_p(\Omega_{f_\alpha}/\Omega_f)=0$.
Thus, the proposition indicates that the primes at which the maximum $\lambda$-invariant is attained correspond to `Eisenstein' primes under the hypotheses above. \end{remark} \subsection{The denominator of $L(E,1)/\Omega_E$} For an elliptic curve defined over $\QQ$, let $\Omega_E^+$ denote the real Néron period associated with $E$. Let $\phi_{E,\mathrm{Coh}}$ denote the modular symbol attached to $E$ normalised by the cohomological periods of $E$, so that \[\phi_{E,\mathrm{Coh}} \colonequals \frac{\xi^+_E}{\Omega^+_{f_E}}+ \frac{\xi^-_E}{\Omega^-_{f_E}}.\] We note that $\phi_{E,\mathrm{Coh}}$ is invariant under isogeny since it depends only on $f_E$. Define $\phi_{E,\mathrm{Ner}}^\alpha$ as the `$p$-stabilised' modular symbol attached to $E$ normalised by the Néron periods $\Omega_E^\pm$, i.e., \[\phi_{E,\mathrm{Ner}}^\alpha \colonequals \frac{\xi^+_{E, \alpha}}{\Omega_E^+} + \frac{\xi^-_{E, \alpha}}{\Omega_E^-}.\] We prove the following theorem in this section (this is Theorem \ref{thmC} from \S~\ref{sec:intro}): \begin{theorem}\label{thm: Lvaldenom} Let $E/\QQ$ be an optimal elliptic curve with good ordinary reduction at an odd prime $p$ such that $\ord_p(L(E,1)/\Omega_{E}^+)<0$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$. Then $\lambda(\theta_n(E))=p^n-1$ for all $n\geq 0$. \end{theorem} \begin{remark}\label{rem: phi unit} The assumption on $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})$ is used in two places in the proof of Theorem~\ref{thm: Lvaldenom}. First, $\ord_p(L(E,1)/\Omega_{E}^+)<0$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$ together imply $\ord_p(\Omega_E^+/\Omega_{f_E}^+)>0$ since $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})=L(E,1)/\Omega_{f_E}^+$. Second, we need it to apply Proposition~\ref{lem: maxl}. In all our numerical examples for $E$ with good ordinary reduction at an odd prime $p$ and $\ord_p(L(E,1)/\Omega_E^+)<0$, we see that $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})$ is a $p$-adic unit.
\end{remark} The proof of Theorem~\ref{thm: Lvaldenom} relies on the integrality of certain values of the modular symbol attached to $E$ when normalised by $\Omega_E^\pm$, which we show in Lemma~\ref{mtinteg} below. We use the notion of an \emph{optimal} curve in a given isogeny class of elliptic curves, defined by Stevens in \cite{Stevens1989} (see also \cite[\S 3]{GV}). The optimal curve $E^{\mathrm{opt}}$ in the isogeny class has an integral $p$-adic $L$-function when normalised by the Néron period of $E^{\mathrm{opt}}$. More precisely: \begin{lemma}[Greenberg--Vatsal] Let $E$ be an optimal elliptic curve defined over $\QQ$ with good ordinary reduction at a prime $p>2$. Then the $p$-adic $L$-function $L_p(E, \omega^i, T)$ normalised with $\Omega_E^\pm$ lies in $\Zp[[T]]$ for $0\leq i \leq p-2$. \end{lemma} \begin{proof} This follows from \cite[Prop. 3.7]{GV}. \end{proof} This allows us to deduce the following: \begin{lemma} \label{mtinteg} Let $E$ be an optimal elliptic curve defined over $\QQ$ with good ordinary reduction at a prime $p>2$. Then \[{\phi_{E,\mathrm{Ner}}^\alpha(\{\infty\}-\{a/p^n\})} \in \Zp\] for all $a \in (\ZZ/p^n\ZZ)^\times$ and $n\geq 0$. \end{lemma} \begin{proof} Consider the \emph{full} $p$-adic $L$-function $L_p(E)$, which is an element of \[\Zp[[\Gal(\QQ(\zeta_{p^\infty})/\QQ)]]\cong \underset{n}{\varprojlim}\Zp[\mathcal{G}_n]\cong \Zp[[\Zp^\times]].\] The projection of $L_p(E)$ to $\Zp[\mathcal{G}_n]$ is $\vartheta^\alpha_n(E)/\alpha^n$, where $\vartheta^\alpha_n(E)$ is the $p$-stabilised Mazur--Tate element defined as \[\vartheta^\alpha_n(E) \colonequals \sum_{a \in (\ZZ/p^n\ZZ)^\times} \phi^\alpha_{E,\mathrm{Ner}}(\{\infty\}-\{a/p^n\})\cdot \sigma_a \] for $\sigma_a \in \mathcal{G}_n = \Gal(\QQ(\zeta_{p^n})/\QQ)$. With $\alpha \in \Zp^\times$, the projection gives us \[ \vartheta^\alpha_n(E) \in \Zp[\mathcal{G}_n], \] implying $\phi^\alpha_{E,\mathrm{Ner}}(\{\infty\}-\{a/p^n\}) \in \Zp$ for all $a \in (\ZZ/p^n\ZZ)^\times$ and $n\geq 0$.
\end{proof} \begin{remark}\label{rem: plus} Write $\phi_{E,\mathrm{Ner}}^\alpha = \phi_{E,\mathrm{Ner}}^{\alpha, +}+ \phi_{E,\mathrm{Ner}}^{\alpha, -}$ where $\phi_{E,\mathrm{Ner}}^{\alpha, \pm}$ are in the $\pm 1$-eigenspaces of $\iota$. We have $\phi_{E,\mathrm{Ner}}^{\alpha,+}= (\phi_{E,\mathrm{Ner}}^\alpha+ \phi_{E,\mathrm{Ner}}^\alpha|\iota)/2$, and $\phi_{E,\mathrm{Ner}}^\alpha|\iota(\{\infty\}-\{a/p^n\})=\phi_{E,\mathrm{Ner}}^\alpha(\{\infty\}-\{-a/p^n\})\in \Zp$, so \[\phi_{E,\mathrm{Ner}}^{\alpha,+}(\{\infty\}-\{a/p^n\})\in \Zp.\] \end{remark} \begin{remark}\label{rem: anom} Let $E$ be an optimal elliptic curve with good ordinary reduction at $p$, with $\alpha$ the unit root of the Hecke polynomial of $f_E$ at $p$. If $\ord_p(L(E,1)/\Omega_E)<0$, then $\alpha \equiv 1 \pmod{p}$. This follows from the integrality of $L_p(E, \omega^0 ,0)$ as \[L_p(E, \omega^0, 0)=\left(1-\frac{1}{\alpha}\right)^2\frac{L(E,1)}{\Omega_E} \in \Zp.\] \end{remark} We now give the proof of Theorem \ref{thm: Lvaldenom}. \begin{proof}[Proof of Theorem \ref{thm: Lvaldenom}] By Lemma \ref{mtinteg} and Remark \ref{rem: plus}, we have $\phi_{E,\mathrm{Ner}}^{\alpha, +} (\{\infty\}-\{a/p^n\}) \in \Zp$. Thus \[\phi_{E,\mathrm{Ner}}^{\alpha, +} (\{\infty\}-\{a/p^n\})= \left(\phi_{E,\mathrm{Coh}}^+(\{\infty\}-\{a/p^n\})- \frac{1}{\alpha}\phi_{E,\mathrm{Coh}}^+(\{\infty\}-\{a/p^{n-1}\})\right)\cdot \frac{\Omega^+_{f_E}}{\Omega_E^+} \in \Zp,\] where $\ord_p(\Omega_E^+/\Omega_{f_E}^+)>0$ by Remark \ref{rem: phi unit}. With $\alpha \equiv 1 \pmod{p}$ (see Remark \ref{rem: anom}), this allows us to conclude \[\phi_{E,\mathrm{Coh}}^+(\{\infty\}-\{a/p^n\}) \equiv \phi_{E,\mathrm{Coh}}^+(\{\infty\}-\{a/p^{n-1}\}) \pmod{p}.\] Hence, Prop. \ref{lem: maxl} implies that $\lambda(\theta_n(E))=p^n-1$ for $n\geq 0$. \end{proof} Next, we prove Theorem \ref{thmD}.
\begin{theorem}\label{thm: bsym to Lval} Assume $E$ is an optimal elliptic curve with good ordinary reduction at an odd prime $p$ with $a_p(E)\equiv 1 \pmod{p}$. Assume $\mu(L_p(E,\omega^0, T))=0$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\}) \in \Zp^\times$. Suppose $\phi_{E,\mathrm{Coh}}^+$ is congruent modulo $p$ to a weight 0 boundary symbol of level $\Gamma_0(N_E)$. Then \[\lambda(\theta_n(E))=p^n-1 \text{ for all }n\geq 0 \text{ and }\ord_p(L(E,1)/\Omega_E)<0.\] \end{theorem} \begin{remark} It is expected that the \emph{optimal} curve in an isogeny class always has $\mu(L_p(E, \omega^0, T))=0$ (see \cite[Corollary 4.13]{Stevens1989}). \end{remark} \begin{proof} Due to the congruence with a boundary symbol and the fact that divisors of the form $\{a/p^n\}$ and $\{a/p^{n-1}\}$ are $\Gamma_0(N_E)$-equivalent, we have \[\phi_{E,\mathrm{Coh}}^{+} (\{\infty\}-\{a/p^n\}) \equiv \phi_{E,\mathrm{Coh}}^{+} (\{\infty\}-\{a/p^{n-1}\})\] for all $n\geq 1$. Then Prop.\ \ref{lem: maxl} gives the formula for $\lambda(\theta_n(E))$. Moreover, we have \[\theta_n(\phi_{E,\mathrm{Coh}}) - \theta_n\left(\phi_{E,\mathrm{Coh}}|\begin{psmallmatrix} p & 0 \\ 0 & 1 \end{psmallmatrix}\right)\equiv 0 \pmod{p}.\] The $\mu=0$ assumption implies that \[\theta_{n}(\phi_{E,\mathrm{Ner}}^\alpha) \not\equiv 0 \pmod{p}\] for $n$ large enough. (Integrality is guaranteed by Lemma \ref{mtinteg}.) The previous congruences together imply that $\ord_p(\Omega_E^+/\Omega_{f_E}^+)>0$, and our assumption that $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})\in \Zp^\times$ gives us the result on the normalised $L$-value. \end{proof} \begin{remark}\label{rem: phi unit2} The assumption $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})\in \Zp^\times$ is needed to apply Prop. \ref{lem: maxl} and to get $\ord_p(L(E,1)/\Omega_E^+)<0$ from $\ord_p(\Omega_E/\Omega_{f_E}^+)>0$ (see Remark \ref{rem: phi unit}). Numerical data indicates that this hypothesis is automatic. 
\end{remark} \subsection{Examples} We begin with an example where our results in this section apply. \begin{example}\label{example1} Let $E=$\href{https://www.lmfdb.org/EllipticCurve/Q/26b1/}{26b1} and $p=7$. From LMFDB, we see that $L(E,1)/\Omega_E^+=1/7$ and $L(E,1)/\Omega_{f_E}^+$. Thus, we have \[\lambda(\theta_n(E))=p^n-1\] for $n \geq 0$. We see that $\phi_{E,\mathrm{Coh}}^+$ is congruent modulo $p$ to a boundary symbol (taking values in $\Fp$) defined as \[\psi(\{r\})=\begin{cases} -1 & {\text{ if } r \in \Gamma_0(26)\{0\}\cup \Gamma_0(26)\{1/13\}}\\ 0 & {\text{ if } r \in \Gamma_0(26)\{\infty\}\cup \Gamma_0(26)\{1/2\}} \end{cases}\] for $r \in \QQ$. Here $\Gamma_0(26)\{c\}$ denotes the equivalence class of the cusp $c$. \end{example} Our computations suggest that the $\lambda$-invariant of $\theta_n(E)$ is of the form $p^n-1$ precisely when $\ord_p(L(E,1)/\Omega_E^+)<0$. Moreover, in all the cases where this occurs, the modular symbol associated with the elliptic curve is congruent to a boundary symbol, suggesting that a mod $p$ multiplicity one result holds. We plan to investigate this phenomenon further in a future project. We also have examples of elliptic curves with additive reduction at $p$ that have $\lambda(\theta_n(E))=p^n-1$ for $n \geq 0$. Interestingly, in all these cases, we see that $\ord_p(L(E,1)/\Omega_E^+)<0$. \begin{example} Let $E=$\href{https://www.lmfdb.org/EllipticCurve/Q/50b1/}{50b1} and $p=5$. This curve admits a non-trivial $5$-torsion point over $\QQ$ and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})=1$. We have $L(E,1)/\Omega_E^+=1/5$. We observe, for $n\geq 0$, \[\lambda(\theta_{n,0}(E))=p^n-1.\] Moreover, $\phi_{E,\mathrm{Coh}}^+$ is indeed congruent to a weight 0 boundary symbol on $\Gamma_0(50)$ modulo $5$. \end{example} This suggests a possible generalisation of Theorem~\ref{thm: Lvaldenom} that does not require $p$ to be a good ordinary prime.
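In computations such as those above, the Iwasawa invariants of a Mazur--Tate element are read off from its coefficients: $\mu$ is the minimal $p$-adic valuation of the coefficients, and $\lambda$ is the smallest degree attaining that minimum. A minimal sketch in Python (our own illustrative code, with ordinary integers standing in for elements of $\Zp$):

```python
from math import inf

def p_adic_val(a, p):
    # p-adic valuation of a nonzero integer a
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def iwasawa_invariants(coeffs, p):
    # coeffs[i] is the coefficient of T^i of a polynomial over Z_p
    # (modelled by integers); mu is the minimal valuation of the
    # coefficients and lambda is the smallest degree attaining it
    vals = [p_adic_val(c, p) if c != 0 else inf for c in coeffs]
    mu = min(vals)
    return mu, vals.index(mu)
```

For instance, with $p=5$ the polynomial $5+25T+T^2+10T^3$ has $\mu=0$ and $\lambda=2$.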
Also, note that for bad multiplicative primes, the elements $\theta_{n,i}(E)$ have the same $\lambda$-invariants as the $p$-adic $L$-function for $n\gg 0$, so growth in $\lambda$ is not possible. When $p$ is a supersingular prime, we have a good understanding of $\lambda(\theta_{n,i}(E))$; see \cite[Theorem 4.1]{PW}. We end with another example to demonstrate a consequence of Theorem \ref{thm: bsym to Lval} in this direction. \begin{example} Let $E=$\href{https://www.lmfdb.org/EllipticCurve/Q/174b2/}{174b2} and $p=7$. This curve has $a_p(E)=1$, $\mu=0$, and $\phi_{E,\mathrm{Coh}}(\{\infty\}-\{0\})=1$. The curve $E$ admits a non-trivial $7$-torsion point over $\QQ$, which implies that $f_E$ is congruent to a weight 2 Eisenstein series of level $\Gamma_0(174)$ modulo $7$. However, we have $L(E,1)/\Omega_E^+=1$, so Theorem~\ref{thm: bsym to Lval} tells us that $\phi_E$ cannot be congruent to a weight 0 boundary symbol on $\Gamma_0(174)$ modulo $7$. This implies that mod $p$ multiplicity one fails in this case. \end{example} \newpage \section{Numerical Data} In the following table, for an elliptic curve $E$ defined over $\QQ$ with additive potentially supersingular reduction at a prime $p$, $e$ is the semistability defect (see \eqref{eq: semistabilitydef}) and \[f_n \colonequals \left\lfloor(p^{n}v_p(\Delta_E))/12\right\rfloor- \left\lfloor(p^{n-1}v_p(\Delta_E))/12\right\rfloor\] denotes the valuation coming from the ratio of periods as in \eqref{perratio} and Proposition~\ref{fudge}(2). The formulae for $\lambda(\theta_{n,0}(E))$ are seen to hold for $n\leq 8$ in all the cases.
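The quantity $f_n$ can be tabulated directly from the displayed formula using exact integer arithmetic; a short sketch (the function name is ours):

```python
def f_n(p, v_delta, n):
    # f_n = floor(p^n * v_p(Delta_E) / 12) - floor(p^(n-1) * v_p(Delta_E) / 12),
    # computed with exact integer floor division
    return (p**n * v_delta) // 12 - (p**(n - 1) * v_delta) // 12
```

For example, for the pair (121c1, 11) with $v_p(\Delta_E)=4$ this gives $f_1=3$.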
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline $(E, p)$& $v_p(\Delta)$& $e$ & $\lambda(\theta_{n,0}(E))$&$\lambda(\theta_{n,0}(E))-f_n$ \\ \hline (121c1,11)& 4& 3& $7\cdot11^{n-1}$&$ 4q_n+\begin{cases} 4 & \text{if $n$ is odd}\\ 0 & \text{if $n$ is even} \end{cases}$ \\ \hline (968d1,11)& 2& 6& $2\cdot 11^{n-1}$&$4 q_{n-1}+\begin{cases} 1 & \text{if $n$ is odd}\\ 3 & \text{if $n$ is even} \end{cases}$ \\ \hline (2890h1, 17)& 2& 6& $3\cdot17^{n-1}+1$&$6q_{n-1}+ \begin{cases} 2 & \text{if $n$ is odd}\\ 6 & \text{if $n$ is even} \end{cases}$ \\ \hline (2890e1, 17)& 8& 3& $11\cdot17^{n-1}$&$6 q_{n-1}+\begin{cases} 0 & \text{if $n$ is odd}\\ 6 & \text{if $n$ is even} \end{cases}$ \\ \hline (2116b1,23)& 2& 6& $4\cdot23^{n-1}$&$8 q_{n-1}+\begin{cases} 1 & \text{if $n$ is odd}\\ 7 & \text{if $n$ is even} \end{cases}$ \\ \hline (4232i1, 23)& 10& 6& $19\cdot23^{n-1}+1$&$16 q_{n-1}+\begin{cases} 1 & \text{if $n$ is odd}\\ 17 & \text{if $n$ is even} \end{cases}$ \\ \hline (3844d1, 31)& 3& 4& $8\cdot31^{n-1}$&$16 q_{n-1}+\begin{cases} 1 & \text{if $n$ is odd}\\ 15 & \text{if $n$ is even} \end{cases}$ \\ \hline (1089a1, 11)& 4& 3& $4\cdot 11^{n-1}+ 3q_{n-1}+ \begin{cases} 1 & \text{if $n$ is odd}\\ 3 & \text{if $n$ is even} \end{cases}$&$q_n+\begin{cases} 2 & \text{if $n$ is odd} \\ 0 & \text{if $n$ is even} \end{cases}$ \\ \hline (1089b1, 11)& 10& 6& $9\cdot11^{n-1}+3 q_{n-1}+2$&$q_n+\begin{cases} 2 & \text{if $n$ is odd} \\ 0 & \text{if $n$ is even} \end{cases}$ \\ \hline (3872h2,11)& 3& 4& $3\cdot11^{n-1}+5q_{n-1}+ \begin{cases} 1 & \text{if $n$ is odd} \\ 5 & \text{if $n$ is even} \end{cases}$&$q_n+\begin{cases} 2 & \text{if $n$ is odd} \\ 0 & \text{if $n$ is even} \end{cases}$ \\ \hline (4761b1,23)& 2& 6& $4\cdot23^{n-1}+15 q_{n-1}+\begin{cases}1& \text{if $n$ is odd}\\17 & \text{if $n$ is even} \end{cases}$&$q_n+2$ \\ \hline \end{tabular} \vspace{0.5cm} \caption{$\lambda$ invariants at potentially supersingular primes} \label{tab:pot.ss} \end{table} 
\bibliographystyle{amsalpha} \bibliography{references} \end{document}
% TODO: write about the $p$-adic $L$-function in the quadratic, quartic, and sextic cases more explicitly.
% Delbourgo gives $e \mid p-1$ in the potentially ordinary case, while $e \mid p+1$ in the potentially supersingular case.
Let $E$ be an elliptic curve over $\QQ$. For a quadratic extension $F/\QQ$, let $E^F$ denote the quadratic twist of $E$ by $F$. Equivalently, if $\psi$ is the Dirichlet (quadratic) character associated with $F$, we have the twist of $E$ by $\psi$, which we denote by $E^\psi$; we use $E^\psi$ and $E^F$ interchangeably. Note that $E^F/F$ and $E/F$ are isomorphic as elliptic curves over $F$. \subsection{} We can generalize the above theorem slightly using ideas from [Delbourgo]. We first introduce the following hypothesis: \textbf{Hypothesis (G):} $E$ has good reduction over an extension $L \subset \QQ_p(\mu_p)$ of $\QQ_p$ such that $[L:\QQ_p]=d$. Conjecture: When $E$ has potentially good ordinary reduction over an extension contained in $\QQ(\zeta_p)$, we have \[\lambda_n = \left(\frac{(p-1)\cdot v_p(\Delta_E)}{12}\right)\cdot p^{n-1}+\lambda_0.\] Since $v_p(\Delta_E)=6$ when the quadratic twist of $E$ by $p$ has good reduction at $p$, this reduces to the formula of Theorem~\ref{quad}. \begin{theorem}\label{quad} Let $E/\QQ$ be an elliptic curve with additive reduction at a prime $p$ such that \begin{enumerate} \item the quadratic twist $E^\psi$ of $E$ by a quadratic character $\psi$ of conductor $p$ has semistable reduction at $p$, \item the mod $p$ representation $\overline{\rho_{E^\psi}}$ is irreducible, \item the $\mu$-invariant of $L_p(E^\psi,\omega^{\frac{p-1}{2}})$ is 0.
\end{enumerate} Then, if $E^\psi$ is ordinary at $p$, we have, for $n$ large enough, \begin{align*} \mu(\theta_{n,0}(E)) &= 0, \\ \lambda(\theta_{n,0}(E))&= \left(\frac{p-1}{2}\right)\cdot p^{n-1} + \lambda\left(E^\psi, \frac{p-1}{2}\right).\end{align*} If $E^\psi$ has supersingular reduction at $p$, we have, for $n$ large enough, \begin{align*} \mu(\theta_{n,0}(E)) &= 0, \\ \lambda(\theta_{n,0}(E))&= \left(\frac{p-1}{2}\right)\cdot p^{n-1} + q_n+ \lambda^\pm\left(E^\psi, \frac{p-1}{2}\right). \end{align*} Here $\lambda\left(E^\psi, \frac{p-1}{2}\right)$ denotes the $\lambda$-invariant of $L_p(E^\psi, \omega^{\frac{p-1}{2}})$, and $\lambda^\pm\left(E^\psi, \frac{p-1}{2}\right)$ denote the $\lambda$-invariants of $L_p^{\pm}(E^\psi, \omega^{\frac{p-1}{2}})$. \end{theorem} \begin{remark} In the potentially supersingular case, $E^\psi$ is always residually irreducible, so hypothesis (2) is redundant in that case. \end{remark} \begin{remark} This essentially gives us all the possible formulae for $\lambda$-invariants that we may observe in the potentially multiplicative, irreducible case. \end{remark} Let $E^F$ denote the quadratic twist of $E$ by $F$. Let $\theta_{n,i}(E),\theta_{n,i}(E^F)$ denote the $n$th level Mazur--Tate elements associated with $E$ and $E^F$ respectively. Let $f_E$, $f_{E^F}$ denote the associated weight 2 cusp forms by the modularity theorem. \par The character $\psi_F$ associated with $F$ is of order 2 and conductor $p$, so we can write $\psi=\omega^{\frac{p-1}{2}}$. Let $\chi$ be a primitive Dirichlet character of conductor $p^n$; then by the interpolation property of Mazur--Tate elements, we have the following equations: \begin{align*} \chi(\theta_{n, 0}(E)) &= \tau(\chi)\frac{L(E, \chi, 1)}{\Omega_E}\\ \chi\psi(\theta_{n, \frac{p-1}{2}}(E^F)) &= \tau(\chi\psi)\frac{L({E^F}, \chi\psi, 1)}{\Omega_{E^F}}. \end{align*} (Recall that the $\theta_{n,i}(E)$ interpolate twists by $\omega^i$, which can be understood as branches of the $p$-adic $L$-function.)
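For a primitive character $\chi$ of conductor $p^n$ one has $\tau(\chi)\tau(\bar{\chi})=\chi(-1)p^n$, hence $|\tau(\chi)|=p^{n/2}$; this is the archimedean counterpart of the comparison of valuations of $\tau(\chi)$ and $\tau(\chi\psi)$ used below. A numerical sanity check for the quadratic character (illustrative only, not part of the argument):

```python
import cmath
import math

def quadratic_gauss_sum(p):
    # tau(psi) = sum_{a=1}^{p-1} psi(a) e^{2 pi i a / p} for the
    # Legendre symbol psi modulo an odd prime p
    def legendre(a):
        t = pow(a, (p - 1) // 2, p)
        return -1 if t == p - 1 else t
    return sum(legendre(a) * cmath.exp(2j * math.pi * a / p)
               for a in range(1, p))
```

For example, $|\tau(\psi)|=\sqrt{7}$ for $p=7$.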
Now we have $(E^F)^F = E$, so \[L(E^F, \psi, s)=L(E, s) \implies L(E^F, \chi\psi, 1)=L(E, \chi, 1).\] Thus, we get \begin{align*} \chi(\theta_{n, 0}(E))= \left(\frac{\tau(\chi)}{\tau(\chi\psi)}\right)\cdot\left(\frac{\Omega_{E^F}}{\Omega_E}\right)\cdot\left(\chi\psi\left(\theta_{n, \frac{p-1}{2}}\left(E^F\right)\right)\right). \end{align*} Note that the $p$-adic valuations of $\tau(\chi)$ and $\tau(\chi\psi)$ are equal, and so we get \[\ord_p(\chi(\theta_{n, 0}(E))) = \ord_p\left(\frac{\Omega_{E^\psi}}{\Omega_E}\right) + \ord_p\left(\theta_{n, \frac{p-1}{2}}(E^\psi)\right).\] By \ref{}, we get \[\ord_p(\chi(\theta_{n, 0}(E))) = \frac{1}{2} + \ord_p\left(\theta_{n, \frac{p-1}{2}}(E^\psi)\right).\] Now $\ord_p\left(\theta_{n, \frac{p-1}{2}}(E^\psi)\right)$ is controlled by the $\mu$- and $\lambda$-invariants of $E^F$ for $n$ large enough, as we shall see below. \textbf{Case 1}: $E^F$ has good ordinary reduction at $p$. This is equivalent to saying that $E$ achieves good ordinary reduction over $F$. For $n$ large enough that $\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))<p^{n-1}(p-1)$, we have \begin{align*} \ord_p(\chi(\theta_{n, 0}(E))) &= \frac{1}{2}+ \ord_p\left(\theta_{n, \frac{p-1}{2}}(E^\psi)\right)\\ &=\frac{1}{2} + \mu(\theta_{n, \frac{p-1}{2}}(E^\psi)) + \frac{\lambda(\theta_{n, \frac{p-1}{2}}(E^\psi))}{p^{n-1}(p-1)}. \end{align*} By \cite{PW}, under the given hypotheses, for $n$ large enough, $\mu(\theta_{n,i}(E^F)) = \mu(L_p(E^F, \omega^i))$ and $\lambda(\theta_{n,i}(E^F)) = \lambda(L_p(E^F, \omega^i))$. Thus taking $n$ sufficiently large, we get \begin{align*} \ord_p(\chi(\theta_{n, 0}(E))) &= \frac{1}{2}+ \mu(L_p(E^F, \omega^{\frac{p-1}{2}})) + \frac{\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))}{p^{n-1}(p-1)},\\ \ord_p(\chi(\theta_{n, 0}(E)))&= \frac{1}{2}+\frac{\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))}{p^{n-1}(p-1)}. \end{align*} In the last equation, we used $\mu(L_p(E^F, \omega^{\frac{p-1}{2}}))=0$.
Note in particular that, as $n \to \infty$, $\ord_p(\chi(\theta_{n, 0}(E))) \to \frac{1}{2}$. By the $p$-adic Weierstrass preparation theorem, we write \begin{align*} \theta_{n, 0}(E)(T)&=p^{\mu_n}(T^{\lambda_n}+ p\cdot g_n(T)) u_n(T),\end{align*} where we have used $\mu_n:= \mu(\theta_{n,0}(E)), \lambda_n:=\lambda(\theta_{n,0}(E))$ for ease of notation, $g_n(T) \in \Zp[T]$, and $u_n(T)\in \Zp[[T]]$ is a unit power series. Then we have \begin{align*} \ord_p(\chi(\theta_{n, 0}(E))) &\geq \mu_n+ \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}.\end{align*} This gives us \begin{align*} \frac{1}{2}+\frac{\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))}{p^{n-1}(p-1)} \geq \mu_n+ \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}. \end{align*} Take $n$ large enough so that the left-hand side of this inequality is less than $1$; then we have $\mu_n=0$ and \[1 > \min\left\{\frac{\lambda_n}{p^{n-1}(p-1)}, 1+v_p(g_n(\zeta_{p^n}-1))\right\}.\] This implies $\frac{\lambda_n}{p^{n-1}(p-1)}<1$ since $v_p(g_n(\zeta_{p^n}-1))\geq 0$, and this holds for all $n$ sufficiently large with respect to all the previous conditions. We finally get \begin{align*} \frac{\lambda(\theta_{n, 0}(E))}{p^{n-1}(p-1)} &= \ord_p(\chi(\theta_{n, 0}(E))) \\ &= \frac{1}{2}+\frac{\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))}{p^{n-1}(p-1)}, \end{align*} and hence \[\lambda(\theta_{n, 0}(E)) = \frac{(p-1)p^{n-1}}{2} + \lambda(L_p(E^F, \omega^{\frac{p-1}{2}})).\] \textbf{Case 2}: $E^F$ has multiplicative reduction at $p$. The same argument as in Case 1 goes through, since for large $n$, we again have $\mu(\theta_{n, \frac{p-1}{2}}(E^\psi)) =\mu(L_p(E^F, \omega^{\frac{p-1}{2}}))$ and $\lambda(\theta_{n, \frac{p-1}{2}}(E^\psi)) =\lambda(L_p(E^F, \omega^{\frac{p-1}{2}}))$. In fact, in this case, we do not require

Next, we look at the scenario when $E$ achieves supersingular reduction over a quadratic extension.
This is equivalent to $E^\psi/\QQ$ being supersingular at $p$. \begin{theorem} Assume $E^\psi$ has supersingular reduction at $p$ such that the signed $\mu$-invariants of the $\omega^{\frac{p-1}{2}}$ branch of the $\pm$ $p$-adic $L$-functions are both 0, that is, $\mu(L^\pm_p(E^\psi, \omega^{\frac{p-1}{2}})) = 0$. Let $\lambda^\pm(E, (p-1)/2)$ denote the $\lambda$-invariants of $L^\pm_p(E^\psi, \omega^{\frac{p-1}{2}})$ respectively. Then we have, for large $n$, \begin{align*} \mu(\theta_{n,0}(E)) &= 0, \\ \lambda(\theta_{n,0}(E))&= \left(\frac{p-1}{2}\right)\cdot p^{n-1} + q_n+ \lambda^\pm\left(E^\psi, \frac{p-1}{2}\right). \end{align*} \end{theorem} \begin{remark} This suggests that we can associate signed $p$-adic $L$-functions to $E$ via $L_p(E^\psi, \omega^{\frac{p-1}{2}})$. \end{remark} We know that for a number field $F$, we can write, up to $p$-adic units, \[\Omega_{E/F}\sim\prod_{v \mid \infty}\int_{E(F_v)}|\omega|_v\cdot \prod_{v \nmid \infty}\left| \frac{\omega}{\omega_v^\circ}\right|_v\] and $\Omega_{E/\QQ}^+\sim\int_{E(\mathbb{R})}|\omega|_v$ for $v$ real and $\Omega^-_{E/\QQ}\sim\int_{E(\mathbb{C})}|\omega|_v$ for $v$ complex. So the ratio of periods in Equation \ref{pbsd} becomes \[ \frac{\Omega_{E/k_{n}}}{\Omega_{E/k_{n+1}}}=\frac{1}{\prod{\Omega_{E/\QQ}}}\cdot \frac{ \prod_{v \nmid \infty}\left| \frac{\omega}{\omega_v^\circ}\right|_v}{ \prod_{w \nmid \infty}\left| \frac{\omega}{\omega_w^\circ}\right|_w}.\] where the product of $\Omega_{E/\QQ}$
2412.16621v1
http://arxiv.org/abs/2412.16621v1
Explicit Upper Bounds on Decay Rates of Fourier Transforms of Self-similar Measures on Self-similar Sets
\documentclass[a4paper, 12pt, reqno]{amsart} \usepackage[margin=1in]{geometry} \usepackage{mathrsfs} \usepackage{mathtools} \linespread{1.15} \usepackage{tikz, float, tikzscale} \usepackage{pgfplots} \pgfplotsset{compat=1.18} \definecolor{mycolour}{rgb}{0.7,0.7,0.7} \theoremstyle{definition} \newtheorem*{defi}{Definition} \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem*{thm*}{Theorem} \newtheorem{prop}{Proposition} \usepackage[T1]{fontenc} \usepackage{libertinus,libertinust1math} \title[Explicit Upper Bounds on Decay Rates of Fourier Transforms]{Explicit Upper Bounds on Decay Rates of Fourier Transforms of Self-similar Measures on Self-similar Sets} \author{Ying Wai Lee} \date{\today} \begin{document} \begin{abstract} The study of Fourier transforms of probability measures on fractal sets plays an important role in recent research. Faster decay rates are known to yield enhanced results in areas such as metric number theory. This paper focuses on self-similar probability measures defined on self-similar sets. Explicit upper bounds are derived for their decay rates, improving upon prior research. These findings are illustrated with an application to sets of numbers whose digits in their L\"uroth representations are restricted to a finite set. \end{abstract} \maketitle \tableofcontents \section{Introduction \& Results} The Fourier transform, a fundamental tool in harmonic analysis, is initially defined for functions to analyse their frequency components. It is later generalised to measures, enabling the frequency analysis of distributions. Let $F\subset[0,1]$ and $\mu$ be a probability measure on $F$. Define $\widehat{\mu}:\mathbb{R}\to\mathbb{C}$, the Fourier transform of $\mu$, by, for any $\xi\in\mathbb{R}$, \begin{align*} \widehat{\mu} (\xi)\coloneqq\int_{F}e^{-2\pi i\xi x}\,\mu(\mathrm{d}x). 
\end{align*} One research focus is the study of probability measures with the Rajchman property; that is, measures whose Fourier transform vanishes at infinity: \begin{align} \label{eq: Rajchman} \lim_{\xi\to\pm\infty}\widehat{\mu}(\xi)=0. \end{align} The Rajchman property of measures reflects a form of regularity at infinity; the measure does not exhibit excessive concentration at any specific scale. While one may be interested in establishing the exact convergence rate in \eqref{eq: Rajchman}, upper bounds on the rate already provide information on metric properties of the set. Let $\Phi:\mathbb{N}\to\mathbb{R}^+$ be a function. $\Phi$ is said to be a decay rate of $\mu$ if, as $\xi\to\pm\infty$, \begin{align} \label{eq: Lyons assumption 0} \widehat{\mu}(\xi)=O\left(\Phi\left(|\xi|\right)\right). \end{align} In the case of $\lim_{n\to+\infty}\Phi(n)=0$, \eqref{eq: Lyons assumption 0} quantifies the convergence rate in \eqref{eq: Rajchman}. The classical results of Lyons \cite[Theorems 3 \& 4]{LyonsRussell1986Tmon} provide foundational insights into how the decay rate of a probability measure is useful in metric number theory. These two results can also be compared to highlight how faster decay can lead to stronger conclusions. \begin{thm*}[Lyons, 1986; {\cite[Theorem 4]{LyonsRussell1986Tmon}}] Let $F\subset[0,1]$ and $\mu$ be a probability measure on $F$. Suppose, there exists a non-increasing $\Phi:\mathbb{N}\to\mathbb{R}^+$ such that as $\xi\to\pm\infty$, \eqref{eq: Lyons assumption 0} holds and \begin{align} \label{eq: Lyons assumption 4} \sum_{n=2}^\infty\frac{\Phi(n)}{n\log{n}}<+\infty. \end{align} Then, for $\mu$-almost every $x\in F$ and any sequence $(q_n)_{n\in\mathbb{N}}$ of positive integers, if $\inf_{n\in\mathbb{N}}{q_{n+1}}/{q_n}>1$, then the sequence $(q_nx\mod1)_{n\in\mathbb{N}}$ is uniformly distributed in $[0,1)$.
\end{thm*} In the case of $\Phi$ exhibiting logarithmic decay (i.e., there exists $\varepsilon>0$ such that as $n\to+\infty$, $\Phi(n)=O(\log^{-\varepsilon}{n})$), condition~\eqref{eq: Lyons assumption 4} is satisfied. For $\Phi$ with faster decay, such as power-law decay (i.e., there exists $\varepsilon>0$ such that as $n\to+\infty$, $\Phi(n)=O(n^{-\varepsilon})$), the conclusion of the above result can be strengthened, allowing for broader choices of sequences. \begin{thm*}[Lyons, 1986; {\cite[Theorem 3]{LyonsRussell1986Tmon}}] Let $F\subset[0,1]$ and $\mu$ be a probability measure on $F$. Suppose, there exists a non-increasing $\Phi:\mathbb{N}\to\mathbb{R}^+$ such that as $\xi\to\pm\infty$, \eqref{eq: Lyons assumption 0} holds and \begin{align} \label{eq: Lyons assumption 3} \sum_{n=1}^\infty\frac{\Phi(n)}{n}<+\infty. \end{align} Then, for $\mu$-almost every $x\in F$ and any strictly increasing sequence $(q_n)_{n\in\mathbb{N}}$ of positive integers, the sequence $(q_nx\mod1)_{n\in\mathbb{N}}$ is uniformly distributed in $[0,1)$. \end{thm*} These results illustrate how faster decay has the potential to yield stronger conclusions. Therefore, deriving faster decay rates stands as an important research focus. In the works of Kaufman \cite{kaufman1980continued} and Queff{\'e}lec--Ramar{\'e} \cite{queffelec2003analyse}, sets of numbers whose partial quotients in their continued fraction representations are restricted to a finite set $\mathcal{A}\subset\mathbb{N}$ have been studied. These works prove that each of these sets (provided their Hausdorff dimensions exceed certain thresholds) supports a probability measure $\mu$ satisfying both conditions~\eqref{eq: Lyons assumption 0} and \eqref{eq: Lyons assumption 3} for some $\Phi$ with power-law decay.
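The exponent $\eta$ in the Queff\'elec--Ramar\'e theorem quoted below is an explicit rational function of $\delta$; a quick evaluation routine (the function name is ours):

```python
def qr_exponent(delta):
    # eta = delta * (2*delta - 1) / ((2*delta + 1) * (4 - delta)),
    # positive exactly when 1/2 < delta < 4
    return delta * (2 * delta - 1) / ((2 * delta + 1) * (4 - delta))
```

It vanishes at $\delta=1/2$ and is small but positive just above; for example, $\eta(0.531)\approx 0.0046$ by our computation.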
For any $x\in[0,1)\setminus\mathbb{Q}$, there exists a unique sequence $(a_n)_{n\in\mathbb{N}}$ of positive integers, referred to as partial quotients, such that \begin{align} x &=[a_1,a_2,a_3,\ldots] \label{eq: CF Repr}\\ &\coloneqq{\frac{1}{\displaystyle a_1+\frac{1}{\displaystyle a_2+\frac{1}{a_3+\cdots}}}}; \label{eq: CF Expa} \end{align} where equation~\eqref{eq: CF Repr} is referred to as the continued fraction representation of $x$, and equation~\eqref{eq: CF Expa} as the continued fraction expansion of $x$. Define, for any $\mathcal{A}\subset\mathbb{N}$, $F_\mathcal{A}\coloneqq\{[a_1,a_2,a_3,\ldots]:\text{$a_n\in\mathcal{A}$ for all $n\in\mathbb{N}$}\}$; and for any $N\in\mathbb{N}$, $F_N\coloneqq F_{\mathbb{N}\cap[1,N]}$. These sets play an important role in the study of Diophantine approximation. According to \cite[Theorem 1.4]{beresnevich2016metricdiophantineapproximationaspects}, the countable union $\bigcup_{N\in\mathbb{N}}F_N$ coincides with the set of all badly approximable numbers in $[0,1]$. \begin{thm*}[Kaufman, 1980; \cite{kaufman1980continued}] Let $N\in\mathbb{N}\setminus\{1\}$. Suppose, the Hausdorff dimension of $F_N$, $\dim{F_N}>2/3$. Then, there exists a probability measure $\mu_N$ on $F_N$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_{{N}}}(\xi)=O\left(|\xi|^{-0.0007}\right). \end{align*} \end{thm*} \begin{thm*}[Queff\'elec--Ramar\'e, 2003; {\cite[Th\'eor\`eme 1.4]{queffelec2003analyse}}] Let $\mathcal{A}\subset\mathbb{N}$ be finite. Suppose, $\dim{F_{\mathcal{A}}}>1/2$. Then, for any $\varepsilon>0$ and $1/2<\delta<\dim{F_{\mathcal{A}}}$, there exists a probability measure $\mu_{\mathcal{A},\varepsilon,\delta}$ on $F_{\mathcal{A}}$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_{\mathcal{A},\varepsilon,\delta}}(\xi) =O\left(|\xi|^{-\eta+\varepsilon}\right), \end{align*} where $\eta\coloneqq{\delta(2\delta-1)}/{((2\delta+1)(4-\delta))}>0$. 
\end{thm*} The result of Queff\'elec--Ramar\'e improves upon that of Kaufman by generalising from \( F_N \) to \( F_{\mathcal{A}} \), providing faster decay rates with explicit parametrisation, and relying on a weaker threshold for the Hausdorff dimension. By the approximation of Hensley, \cite[Table 1]{hensley1996polynomial} or \cite[(4.10)]{HENSLEY1989182}, \begin{align*} 1/2<\dim{F_2}=0.5312805\ldots<2/3; \end{align*} the result of Queff\'elec--Ramar\'e applies to more cases than that of Kaufman. Both results construct probability measures satisfying both conditions~\eqref{eq: Lyons assumption 0} and \eqref{eq: Lyons assumption 3}, where $\Phi$ exhibits power-law decay. The result of Lyons \cite[Theorem 3]{LyonsRussell1986Tmon} applies to these sets. In the work of Li--Sahlsten \cite{li2022trigonometric}, self-similar probability measures on self-similar sets are studied. One of their results \cite[Theorem 1.3]{li2022trigonometric} focuses on non-singleton self-similar sets satisfying a non-Liouville condition. This result states that every self-similar probability measure $\mu$ on such sets satisfies both conditions~\eqref{eq: Lyons assumption 0} and \eqref{eq: Lyons assumption 4} for some $\Phi$ with logarithmic decay. \begin{defi}[Self-similar Set] Let $F\subset[0,1]$. $F$ is said to be a self-similar set if, there exist a finite set $\mathcal{A}$ and its collection of similitudes $f_w:[0,1]\to[0,1]$ such that \begin{align} \label{eq: SSS} F=\bigcup_{w\in\mathcal{A}}f_w(F), \end{align} where for any $w\in\mathcal{A}$, there exist a contraction ratio $r_w\in(0,1)$ and a translation $b_w\in[0,1]$ such that for any $x\in[0,1]$, \begin{align} \label{eq: IPS} f_w(x) = r_wx+b_w. \end{align} \end{defi} \begin{defi}[Self-similar Measure] Let $\mu$ be a probability measure on a subset of $[0,1]$.
$\mu$ is said to be self-similar if, there exist a finite set $\mathcal{A}$ and its collection of similitudes \eqref{eq: IPS} such that \begin{align} \label{eq: SSM} \mu=\sum_{w\in\mathcal{A}}p_w\mu\circ {f_w}^{-1}, \end{align} where the weights $(p_w)_{w\in\mathcal{A}}$ satisfy $\sum_{w\in\mathcal{A}}p_w=1$ and for any $w\in\mathcal{A}$, $p_w>0$. \end{defi} According to \cite[Theorem 3.1.(3)(i)]{hutchinson1981fractals}, for any finite collection of similitudes, there is a unique non-empty compact subset of $[0,1]$ satisfying the self-similarity property~\eqref{eq: SSS}. According to the Hutchinson Theorem \cite[Theorem 4.4.(1)(i)]{hutchinson1981fractals}, for any finite collection of similitudes and weights, there is a unique probability measure satisfying the self-similarity property~\eqref{eq: SSM}. In Diophantine approximation, the concept of non-Liouville numbers (sometimes called diophantine numbers) is a natural generalisation of badly approximable numbers. \begin{defi}[Non-Liouville Number] Let $\theta\in\mathbb{R}$ and $l>0$. $\theta$ is said to be non-Liouville of degree $l$ if, there exists $c>0$ such that for any $p\in\mathbb{Z}$ and $q\in\mathbb{N}$, \begin{align}\label{eq: non-Liouville} \left|\theta-\frac{p}{q}\right|\geq\frac{c}{q^l}. \end{align} \end{defi} Badly approximable numbers are precisely the non-Liouville numbers of degree 2. By the Dirichlet Theorem \cite[Theorem 1.2]{beresnevich2016metricdiophantineapproximationaspects}, every non-Liouville number is of degree at least 2. \begin{thm*}[Li--Sahlsten, 2022; {\cite[Theorem 1.3]{li2022trigonometric}}] Let $F\subset[0,1]$ be a self-similar set of the form \eqref{eq: SSS} associated with similitudes \eqref{eq: IPS} for a finite set $\mathcal{A}$. Suppose, $F$ is not a singleton, and there exist $j,k\in\mathcal{A}$ and $l>2$ such that $\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$. 
Then, for any self-similar measure $\mu$ on $F$, there exists $\beta>0$ such that as $\xi\to\pm\infty$, \begin{align} \label{eq: Li-S Theorem 1.3} \widehat{\mu}(\xi) =O\left(\log^{-\beta}{|\xi|}\right). \end{align} \end{thm*} Although \eqref{eq: Li-S Theorem 1.3} is sufficient for both conditions~\eqref{eq: Lyons assumption 0} and \eqref{eq: Lyons assumption 4}, the parameter $\beta$ is implicit, and its explicit dependence on the probability measure and the self-similar set remains undetermined. Determining an explicit value for $\beta$ is crucial for certain applications in the metric theory of Diophantine approximation. For instance, to ensure sufficient decay, the result of Pollington--Velani--Zafeiropoulos--Zorin (PVZZ) \cite[Theorem 1]{pollington2020inhomogeneous} requires $\beta>2$ as a threshold. However, the implicit nature of $\beta$ in \eqref{eq: Li-S Theorem 1.3} prevents verification of whether the threshold can be met, limiting its applicability. This paper establishes an explicit expression of $\beta$ in \eqref{eq: Li-S Theorem 1.3} for general self-similar probability measures $\mu$. The formula for $\beta$ is provided in terms of the upper regularity exponents of $\mu$ and the non-Liouville degree $l$ of the logarithmic ratio of the contraction ratios. The value of $\beta$ is maximised for self-similar sets satisfying the open set condition (a pairwise disjoint condition). An application is demonstrated for sets of numbers whose digits in their L\"uroth representations are restricted to a finite set, analogous to continued fraction representations. \begin{defi}[Upper Regularity Exponent] Let $\mu$ be a probability measure on a subset of $[0,1]$ and $\alpha\in[0,1]$.
$\alpha$ is said to be an upper regularity exponent of $\mu$ if, there exist $C_{\alpha}>0$ and $r_{\alpha}>0$ such that for any interval $I\subset [0,1]$, if $\operatorname{diam}{I}<r_\alpha$ then \begin{align} \label{eq: URE} \mu(I)\leq C_{\alpha}\left(\operatorname{diam}{I}\right)^\alpha, \end{align} where $\operatorname{diam}{I}$ denotes the diameter of $I$. \end{defi} Theorems~\ref{thm: 1}, \ref{thm: 2}, \ref{thm: 3}, and \ref{thm: 4} are the main results of this paper. Theorem~\ref{thm: 1} provides an explicit decay rate to refine and improve the previous result \cite[Theorem 1.3]{li2022trigonometric}. Theorem~\ref{thm: 2} provides a method for computing the Hausdorff dimension of self-similar sets satisfying the open set condition. Theorem~\ref{thm: 3} provides a probability measure achieving the fastest decay rate under Theorem~\ref{thm: 1}. Theorem~\ref{thm: 4} is an application of Theorem~\ref{thm: 3} to number theory. \begin{thm}\label{thm: 1} Let $F\subset[0,1]$ be a self-similar set of the form \eqref{eq: SSS} associated with similitudes \eqref{eq: IPS} for a finite set $\mathcal{A}$. Suppose, there exist $j,k\in\mathcal{A}$ and $l\geq 2$ such that $\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$. Then, for any self-similar probability measure $\mu$ on $F$ and any upper regularity exponent $\alpha\in[0,1]$ of $\mu$, as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu}(\xi) =O\left(\log^{-\alpha/(2(1+2\alpha)(8l-7))}{|\xi|}\right) . \end{align*} \end{thm} According to the Frostman Lemma \cite[Theorem 8.8]{MattilaPertti1995Gosa}, for any probability measure $\mu$ on $F$, every upper regularity exponent $\alpha$ of $\mu$ is at most $\dim{F}$, the Hausdorff dimension of $F$. In the case of $F$ being a singleton, $\dim{F}=0$ implies that the only upper regularity exponent is $\alpha=0$. 
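The exponent in Theorem~\ref{thm: 1} is an elementary function of the upper regularity exponent $\alpha$ and the non-Liouville degree $l$; a direct evaluation (the function name is ours):

```python
from fractions import Fraction

def decay_exponent(alpha, l):
    # beta = alpha / (2 * (1 + 2*alpha) * (8*l - 7)) as in Theorem 1
    alpha = Fraction(alpha)
    return alpha / (2 * (1 + 2 * alpha) * (8 * l - 7))
```

Since $\alpha\mapsto\alpha/(1+2\alpha)$ is increasing, a larger upper regularity exponent yields a faster decay rate, which is why maximising $\alpha$ (as in Theorem~\ref{thm: 3}) maximises the rate.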
After removing the non-singleton assumption from the result of Li--Sahlsten \cite[Theorem 1.3]{li2022trigonometric}, Theorem~\ref{thm: 1} remains consistent in degenerate cases by providing the trivial non-decay rate. When the non-singleton assumption is retained, the result of Feng--Lau \cite[Proposition 2.2]{FENG2009407} ensures that every self-similar probability measure has a positive upper regularity exponent, and Theorem~\ref{thm: 1} guarantees the decay of its Fourier transform in non-degenerate cases. Under the open set condition (a pairwise disjointness condition), a self-similar probability measure $\mu_{\mathcal{A}}$ can be constructed so that the maximal upper regularity exponent equals $\dim{F}$. This measure maximises the decay rate described in Theorem~\ref{thm: 1}. \begin{thm}\label{thm: 2} Let $F\subset[0,1]$ be a self-similar set of the form \eqref{eq: SSS} associated with similitudes \eqref{eq: IPS} for a non-empty finite set $\mathcal{A}$. Suppose, the images $(f_w([0,1]))_{w\in\mathcal{A}}$ are pairwise disjoint except at the endpoints. Then, $\dim{F}$ is the unique solution $s=s_{\mathcal{A}}$ in $[0,1]$ to the following equation: \begin{align} \label{eq: sum rw s =1} \sum_{w\in\mathcal{A}}{r_w}^{s}=1. \end{align} \end{thm} \begin{thm}\label{thm: 3} Let $F\subset[0,1]$ be a self-similar set of the form \eqref{eq: SSS} associated with similitudes \eqref{eq: IPS} for a finite set $\mathcal{A}$. Suppose, the images $(f_w([0,1]))_{w\in\mathcal{A}}$ are pairwise disjoint except at the endpoints, and there exist $j,k\in\mathcal{A}$ and $l\geq 2$ such that $\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$. Then, there exists a self-similar probability measure $\mu_{\mathcal{A}}$ on $F$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_{\mathcal{A}}}(\xi) =O\left(\log^{-\dim{F}/(2(1+2\dim{F})(8l-7))}{|\xi|}\right) .
\end{align*} \end{thm} Theorem~\ref{thm: 2} is essentially the same as the classical result of Moran \cite[Theorem II]{Moran_1946}. Different from the original approach of Moran, which constructs additive functions and applies density lemmas, the proof of Theorem~\ref{thm: 2} employs a novel approach. It first constructs a family of self-similar probability measures and then selects the one maximising its upper regularity exponent. This measure is sufficient to deduce Theorem~\ref{thm: 3} from Theorem~\ref{thm: 1}. Theorems~\ref{thm: 2} and \ref{thm: 3} can be applied together. By equation~\eqref{eq: sum rw s =1}, the Hausdorff dimension of the self-similar set can be computed or approximated numerically. Once the non-Liouville degree is known, the decay rate described in Theorem \ref{thm: 3} can be calculated. Theorem~\ref{thm: 3} provides a framework for number theorists studying sets of numbers with self-similar structures. One application lies in the study of sets of numbers whose digits in their L\"uroth representation are restricted to a finite set. This result is analogous to the work of Queff\'elec--Ramar\'e \cite{queffelec2003analyse} on restricted partial quotients. For any $x\in(0,1]$, there exists a unique sequence $(d_n)_{n\in\mathbb{N}}$ of positive integers not equal to 1, referred to as digits, such that \begin{align} x &=[d_1,d_2,\ldots,d_n,\ldots] \label{eq: Luroth Repr}\\ &\coloneqq\sum_{k=1}^{\infty}\frac{1}{d_k\prod_{j=1}^{k-1}d_j(d_j-1)}; \label{eq: Luroth Expa} \end{align} where equation~\eqref{eq: Luroth Repr} is referred to as the L\"uroth representation of $x$, and equation~\eqref{eq: Luroth Expa} as the L\"uroth expansion of $x$.
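The expansion just defined can be inverted digit by digit: if $x\in(1/d,1/(d-1)]$ then the first digit is $d=\lfloor 1/x\rfloor+1$, and the tail is $x'=(d-1)(dx-1)$. A sketch in exact rational arithmetic (our own illustrative code):

```python
from fractions import Fraction

def luroth_digits(x, n):
    # first n digits of the Lüroth representation of x in (0, 1]:
    # x lies in (1/d, 1/(d-1)] iff its first digit is d, and the
    # tail of the expansion is x' = (d - 1) * (d*x - 1)
    digits = []
    for _ in range(n):
        d = int(1 / x) + 1
        digits.append(d)
        x = (d - 1) * (d * x - 1)
    return digits
```

For example, the L\"uroth digits of $1/2$ begin $3,2,2,2,\ldots$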
The notation $[d_1,d_2,\ldots,d_n,\ldots]$ denotes the L\"uroth representation when the brackets contain $d$'s instead of $a$'s, distinguishing it from the continued fraction representation $[a_1, a_2, \ldots,a_n,\ldots]$. Define, for any $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$, $L_{\mathcal{A}}$ to be the set of numbers whose digits in their L\"uroth representations are restricted to $\mathcal{A}$. Formally, \begin{align*} L_{\mathcal{A}} \coloneqq\left\{[d_1,d_2,\ldots,d_n,\ldots]:\text{$d_n\in\mathcal{A}$ for all $n\in\mathbb{N}$}\right\}. \end{align*} For any finite $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$, $L_{\mathcal{A}}$ is self-similar in $(0,1]$. Specifically, $L_{\mathcal{A}}$ can be expressed as \begin{align} \label{eq: Luroth SSS} L_{\mathcal{A}}=\bigcup_{d\in\mathcal{A}}f_d\left(L_{\mathcal{A}}\right), \end{align} where for any $d\in\mathbb{N}\setminus\{1\}$, the similitude $f_d:(0,1]\to(0,1]$ is defined by, for any $x\in(0,1]$, \begin{align} \label{eq: Luroth IFS} f_d(x)\coloneqq\frac{1}{d}+\frac{1}{d(d-1)}x, \end{align} derived from the following recursion formula: for any sequence $(d_n)_{n\in\mathbb{N}}$ in $\mathbb{N}\setminus\{1\}$, \begin{align*} [d_1,d_2,d_3,\ldots]=\frac{1}{d_1}+\frac{1}{d_1(d_1-1)}[d_2,d_3,d_4,\ldots]. \end{align*} Figure~\ref{fig: L3} illustrates the fractal construction process of $L_3\coloneqq L_{\{2,3\}}$. Starting from the interval $(0,1]$, sub-intervals are partitioned and removed iteratively at each level. At the first level, $(0,1]$ is partitioned into three sub-intervals: $(0,1/3]$, $(1/3,1/2]$, and $(1/2,1]$, which correspond to numbers in $(0,1]$ with $d_1\geq4$, $d_1=3$, and $d_1=2$ respectively. The first sub-interval is removed, while the second and third are retained for the next level. At each subsequent level $n\in\mathbb{N}$, every retained interval is further partitioned into three sub-intervals, with diameters in the ratio of 2$:$1$:$3, corresponding to $d_n\geq4$, $d_n=3$, and $d_n=2$ respectively. 
As before, the first sub-interval is removed, and the remaining two are retained to continue the process. The set $L_3$ is the countable intersection of all levels in this iterative construction. \input{figure_L3_1127} The non-Liouville condition in Theorem~\ref{thm: 3} can be proved by applying the explicit Baker Theorem of Matveev \cite[Theorem 2.1]{matveev2000explicit} for any non-singleton $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$. Theorem~\ref{thm: 4} is established as an application of Theorem~\ref{thm: 3} to number theory. \begin{thm}\label{thm: 4} For any non-singleton finite $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$, there exists a self-similar probability measure $\mu_{\mathcal{A}}$ on $L_{\mathcal{A}}$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_{\mathcal{A}}}(\xi)=O\left(\log^{-\beta_{\mathcal{A}}}{|\xi|}\right), \end{align*} where $a_1$ and $a_2$ are the smallest and second smallest elements of $\mathcal{A}$ respectively, and \begin{align*} \beta_{\mathcal{A}} \coloneqq \frac{\dim{L_\mathcal{A}}}{1+2\dim{L_\mathcal{A}}}\frac{10^{-10}}{\log{a_1}\log{a_2}+1} >0 . \end{align*} \end{thm} By Theorem~\ref{thm: 2}, the Hausdorff dimension of $L_3$ is approximated as \begin{align*} \dim{L_3} =0.60096685161367548572 \ldots . \end{align*} By Theorem~\ref{thm: 4}, there exists a probability measure $\mu_3$ on $L_3$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_3}(\xi)=O\left(\log^{-\beta_3}{|\xi|}\right), \end{align*} where $\beta_3>10^{-11}>0$. Returning to the motivation of establishing an explicit expression for $\beta$ in \eqref{eq: Li-S Theorem 1.3}, the result of PVZZ \cite[Theorem 1]{pollington2020inhomogeneous} sets a significant target by requiring $\beta>2$ as a threshold in its assumption. Under the most favourable conditions, Theorem~\ref{thm: 3}, which achieves the fastest decay rate derived from Theorem~\ref{thm: 1}, yields $\beta=1/54$. 
This value reflects inherent constraints, such as the Hausdorff dimension of subsets in $[0,1]$ being at most 1 and the degree of non-Liouville numbers being at least 2. While the results in this paper do not yet meet this threshold, they offer an explicit framework and improved decay rates that contribute to bridging the gap toward the target. \section{Proof of Theorem~\ref{thm: 1}} The proof of Theorem~\ref{thm: 1} employs quantitative renewal theory, specifically for the stopping time of random walks, as introduced in prior work \cite{li2022trigonometric}. A few definitions and propositions are presented before starting the main proof of Theorem~\ref{thm: 1}. The self-similar measure $\mu$, defined as in equation~\eqref{eq: SSM} associated with similitudes \eqref{eq: IPS}, can be expressed in terms of compositions of similitudes and their corresponding weights. Define, for any $n\in\mathbb{N}$ and $w=(w_1,w_2,\ldots,w_n)\in\mathcal{A}^n$, a similitude $f_w:[0,1]\to[0,1]$ by \begin{align} \label{eq: f_w} f_w\coloneqq f_{w_n}\circ \cdots \circ f_{w_2} \circ f_{w_1}, \end{align} along with the contraction ratio $r_w\coloneqq r_{w_1} r_{w_2} \cdots r_{w_n}$, and the weight $p_w\coloneqq p_{w_1} p_{w_2} \cdots p_{w_n}$. By these definitions, for any $n\in\mathbb{N}$, \begin{align*} \mu=\sum_{w\in\mathcal{A}^n}p_w\mu\circ {f_w}^{-1}. \end{align*} Define, for any $t>0$, the stopping time $n_t:\mathcal{A}^\mathbb{N}\to\mathbb{N}$ by, for any $w=(w_n)_{n\in\mathbb{N}}\in\mathcal{A}^{\mathbb{N}}$, \begin{align*} n_t(w) \coloneqq\min\left\{n\in\mathbb{N}:-\log{\left(r_{w_1}r_{w_2}\cdots r_{w_n}\right)}\geq t\right\}. \end{align*} Define, for any $t>0$, \begin{align*} \mathcal{W}_t \coloneqq\left\{\left(w_1,w_2,\ldots,w_{n_t(w)}\right):(w_n)_{n\in\mathbb{N}}\in\mathcal{A}^\mathbb{N}\right\}. \end{align*} Proposition~\ref{prop: refined Li-S Lemma 3.1}, which is the same as {\cite[Lemma 3.1]{li2022trigonometric}}, serves as a starting point of the proof. 
\begin{prop}[{\cite[Lemma 3.1]{li2022trigonometric}}]\label{prop: refined Li-S Lemma 3.1} For any $\xi\in\mathbb{R}$ and $t>0$, \begin{align*} \left|\widehat{\mu}(\xi)\right|^2 \leq \iint_{[0,1]^2}\sum_{w\in \mathcal{W}_t}p_we^{-2\pi i \xi(f_w(x)-f_w(y))}\,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y) . \end{align*} \end{prop} The integral in Proposition~\ref{prop: refined Li-S Lemma 3.1} is partitioned into two regions, depending on whether the points are near the diagonal or not. Upper bounds for the integrals over each region are established, which together suffice to prove Theorem~\ref{thm: 1}. Proposition~\ref{prop: refined Li-S Proposition 3.2} is an applied version of {\cite[Proposition 3.2]{li2022trigonometric}}. The integral over points near the diagonal is controlled by the upper regularity exponent of the measure. \begin{prop}\label{prop: refined Li-S Proposition 3.2} Let $\alpha\in[0,1]$ be an upper regularity exponent of $\mu$ and $l\geq2$. There exist $D_\alpha>0$ and $t_\alpha>0$ such that for any $\xi\in\mathbb{R}$ and $t>t_\alpha$, \begin{align*} \left| \iint_{A_{t}} \sum_{w \in \mathcal{W}_t} p_w e^{-2 \pi i \xi (f_w(x) - f_w(y))} \,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y)\right| \leq D_\alpha t^{-\alpha/((1+2\alpha)(8l-7))}, \end{align*} where $A_{t}$ is the set of points near the diagonal, defined as \begin{align} \label{eq: def of A} A_{t} \coloneqq \left\{(x,y)\in[0,1]^2:|x-y|\leq t^{-1/((1+2\alpha)(8l-7))}\right\}. \end{align} \end{prop} \begin{proof} Since $\alpha\in[0,1]$ is an upper regularity exponent of $\mu$, there exist $C_{\alpha}>0$ and $r_\alpha>0$ such that for any interval $I\subset[0,1]$, if $\operatorname{diam}{I}<r_\alpha$ then inequality~\eqref{eq: URE} holds. Define \begin{align*} t_\alpha\coloneqq \left(\frac{r_\alpha}{2}\right)^{-(1+2\alpha)(8l-7)}>0. \end{align*} Thus, for any $t>t_\alpha$, the quantity $\delta\coloneqq t^{-1/((1+2\alpha)(8l-7))}$ satisfies $0<2\delta<r_\alpha$. 
Note that for any $\theta\in\mathbb{R}$, $|e^{i\theta}|=1$; and for any $t>0$, $\sum_{w\in\mathcal{W}_t}p_w=1$. By applying the triangle inequality for integrals, the Fubini Theorem, inequality~\eqref{eq: URE}, and the definition of $\delta$, \begin{align*} \left| \iint_{A_{t}} \sum_{w \in \mathcal{W}_t} p_w e^{-2 \pi i \xi (f_w(x) - f_w(y))} \,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y)\right| &\leq \iint_{A_{t}} \sum_{w \in \mathcal{W}_t} p_w \left|e^{-2 \pi i \xi (f_w(x) - f_w(y))}\right| \,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y) \\ &=\iint_{A_{t}}1\,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y) =\int_{[0,1]}\mu([x-\delta,x+\delta])\,\mu(\mathrm{d}x) \\ &\leq C_{\alpha}\left(2\delta\right)^{\alpha} =D_{\alpha}t^{-\alpha/((1+2\alpha)(8l-7))} , \end{align*} where $D_{\alpha}\coloneqq 2^{\alpha}C_{\alpha}>0$. \end{proof} Quantitative renewal theory is applied to control the integral over points that are not near the diagonal. The non-Liouville condition is employed to guarantee that an auxiliary measure is weakly diophantine. \begin{defi}[Weakly Diophantine] Let $\lambda$ be a probability measure on $\mathbb{R}^+$ with finite support $\Lambda$ and $l>0$. The measure $\lambda$ is said to be $l$-weakly diophantine if, \begin{align*} \liminf_{b\to\pm\infty}|b|^l\left|1-\mathscr{L}\lambda(ib)\right|>0, \end{align*} where $\mathscr{L}\lambda$ is the Laplace transform of $\lambda$; that is, for any $z\in\mathbb{C}$, \begin{align*} \mathscr{L}\lambda(z)\coloneqq\int_{\Lambda}e^{-zx}\,\lambda(\mathrm{d}x). \end{align*} \end{defi} In the case of a self-similar measure $\mu$ of the form \eqref{eq: SSM} associated with similitudes \eqref{eq: IPS}, define the corresponding auxiliary measure $\lambda$ by \begin{align} \label{eq: lambda} \lambda\coloneqq\sum_{w\in\mathcal{A}}p_w\delta_{-\log{r_w}}, \end{align} where $\delta$ is the Dirac delta measure. Since $\mathcal{A}$ is finite, $\Lambda=\{-\log{r_w}:w\in\mathcal{A}\}\subset\mathbb{R}^+$, the support of $\lambda$, is also finite. 
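To make these definitions concrete, the sketch below (in Python; the sample measure and the grid of frequencies are illustrative choices, not from the text) builds the auxiliary measure $\lambda$ of \eqref{eq: lambda} for the ratios $1/2$ and $1/6$ with uniform weights, evaluates its Laplace transform on the imaginary axis, and checks on a finite grid that $|b|^{2}\left|1-\mathscr{L}\lambda(ib)\right|$ stays positive. This is only a numerical sanity check of the weakly diophantine property, not a proof, since the $\liminf$ ranges over all $b\to\pm\infty$.

```python
import cmath
import math

# Auxiliary measure lambda = sum_w p_w * delta_{-log r_w}, stored as
# (atom, mass) pairs; ratios 1/2 and 1/6 with uniform weights are a
# sample choice, not fixed by the text.
lam = [(math.log(2), 0.5), (math.log(6), 0.5)]

def laplace(measure, z):
    """Laplace transform: L(z) = sum over atoms of mass * exp(-z * atom)."""
    return sum(p * cmath.exp(-z * x) for x, p in measure)

# L(0) equals the total mass 1, and the expectation sigma is positive.
assert abs(laplace(lam, 0) - 1) < 1e-12
sigma = sum(p * x for x, p in lam)
assert sigma > 0

# Finite sanity check of |b|^l |1 - L(ib)| with l = 2: it never vanishes
# on the grid, because log(6)/log(2) is irrational, so b*log(2) and
# b*log(6) cannot simultaneously lie in 2*pi*Z.
vals = [abs(b) ** 2 * abs(1 - laplace(lam, 1j * b))
        for b in (0.5 + 0.25 * k for k in range(400))]
assert min(vals) > 0
```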
Proposition~\ref{prop: refined Li-S Lemma 3.4} refines \cite[Lemma 3.4]{li2022trigonometric}. \begin{prop}\label{prop: refined Li-S Lemma 3.4} Suppose there exist $j,k\in\mathcal{A}$ and $l\geq 2$ such that $\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$. Then, the probability measure $\lambda$, defined in \eqref{eq: lambda}, is $(2l-2)$-weakly diophantine. \end{prop} \begin{proof} Pick any $b_1\in\mathbb{Z}\setminus\{0\}$. By taking $b\coloneqq 2\pi b_1/\log{r_k}$, \begin{align*} \left|1-\mathscr{L}\lambda(ib)\right| &\geq\left|\Re{\left(p_j\left(1-e^{-ib\log{r_j}}\right)+p_k\left(1-e^{-ib\log{r_k}}\right)\right)}\right| \\ &\gg \max{\left(\operatorname{dist}\left(b\log{r_j},2\pi\mathbb{Z}\right)^2,\operatorname{dist}\left(b\log{r_k},2\pi\mathbb{Z}\right)^2\right)} \\ &\gg\max{\left(\operatorname{dist}\left(\frac{\log{r_j}}{\log{r_k}}b_1,\mathbb{Z}\right)^2,\operatorname{dist}\left(b_1,\mathbb{Z}\right)^2\right)}, \end{align*} where the Vinogradov symbol $\gg$ denotes a quantity being greater than a fixed positive multiple of another. Since $\theta\coloneqq\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$, inequality~\eqref{eq: non-Liouville} implies that there exists $c>0$ such that for any $p\in\mathbb{Z}$ and $q\in\mathbb{N}$, \begin{align*} \left|\frac{\log{r_j}}{\log{r_k}}-\frac{p}{q}\right|\geq\frac{c}{q^l}. \end{align*} In particular, by taking $q=|b_1|\in\mathbb{N}$ and multiplying both sides by $|b_1|>0$, for any $p\in\mathbb{Z}$, \begin{align*} \left|\frac{\log{r_j}}{\log{r_k}} b_1-\frac{pb_1}{|b_1|}\right| \geq\frac{c|b_1|}{{|b_1|}^l} =\frac{c}{{|b_1|}^{l-1}}. \end{align*} Thus, for any $p\in\mathbb{Z}$, \begin{align*} \left|\frac{\log{r_j}}{\log{r_k}} b_1-p\right|&\geq\frac{c}{{|b_1|}^{l-1}}; \end{align*} it follows that, \begin{align*} \left|1-\mathscr{L}\lambda(ib)\right| \gg \operatorname{dist}\left(\frac{\log{r_j}}{\log{r_k}}b_1,\mathbb{Z}\right)^2 \gg |b_1|^{2-2l} \gg |b|^{2-2l} . 
\end{align*} Thus, for any $b_1\in\mathbb{Z}\setminus\{0\}$ and $b=2\pi b_1/\log{r_k}\ne0$, the above implies that \begin{align*} |b|^{2l-2}\left|1-\mathscr{L}\lambda(ib)\right|\gg1. \end{align*} Therefore, $\lambda$ is $(2l-2)$-weakly diophantine. \end{proof} Let $\sigma$ be the expectation of the auxiliary measure $\lambda$, defined in \eqref{eq: lambda}; that is, \begin{align*} \sigma\coloneqq\int_{\Lambda}x\,\lambda(\mathrm{d}x)>0, \end{align*} where $\Lambda$ is the support of $\lambda$. Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of independent and identically distributed random variables in $\mathbb{R}$ with probability distribution $\lambda$. Define, for any $n\in\mathbb{N}$, the partial sum $S_n$ as \begin{align*} S_n\coloneqq X_1+X_2+\cdots+X_n. \end{align*} Define, for any $t>0$, the stopping time $n_t$ as \begin{align*} n_t\coloneqq \min{\{n\in\mathbb{N}:S_n\geq t\}}. \end{align*} Proposition~\ref{prop: refined Li-S Proposition 2.2} is essentially the same as \cite[Proposition 2.2]{li2022trigonometric}. \begin{prop}\label{prop: refined Li-S Proposition 2.2} Let $\lambda$ be a probability measure on $\mathbb{R}^+$ and $l\geq 2$. Suppose $\lambda$ has finite support $\Lambda$, is non-lattice, and is $l$-weakly diophantine. Then, for any $C^1$ function $g:\mathbb{R}\to[0,+\infty)$ and $t>1+\max{\Lambda}$, \begin{align*} \mathbb{E}\left(g\left(S_{n_t}-t\right)\right) =\frac{1}{\sigma}\int_{\mathbb{R}^+}g(z)p(z)\,\mathrm{d}z+O\left(t^{-1/(4l+1)}\right)\|g\|_{C^1}, \end{align*} where $\sigma$ is the expectation of $\lambda$, $p:\mathbb{R}^+\to[0,1]$ is the survival function of $\lambda$, that is, for any $z>0$, $p(z)\coloneqq\lambda((z,+\infty))$; and the local $C^1$-norm of $g$ is defined by \begin{align*} \|g\|_{C^1} \coloneqq \sup{\left\{|g(z)|+|g'(z)|:-1<z<1+\max{\Lambda}\right\}} . 
\end{align*} \end{prop} By combining Propositions~\ref{prop: refined Li-S Lemma 3.4} and \ref{prop: refined Li-S Proposition 2.2}, Proposition~\ref{prop: refined Li-S Proposition 3.5} is established to control the integral over points that are not near the diagonal. This proposition also incorporates optimisations to make the decay rate explicit, serving as a refined and improved version of \cite[Proposition 3.5]{li2022trigonometric}. These optimisations are achieved through carefully chosen parameters. \begin{prop}\label{prop: refined Li-S Proposition 3.5} Let $\alpha\in[0,1]$. Suppose there exist $j,k\in\mathcal{A}$ and $l\geq 2$ such that $\log{r_j}/\log{r_k}$ is non-Liouville of degree $l$. Then, as $\xi\to\pm\infty$, \begin{align*} \iint_{[0,1]^2\setminus A_{t}} \sum_{w\in \mathcal{W}_t}p_we^{-2\pi i\xi (f_w(x)-f_w(y))} \,\mu(\mathrm{d}x)\,\mu(\mathrm{d}y) =O\left(\log^{-\alpha/((1+2\alpha)(8l-7))}{|\xi|}\right), \end{align*} where $A_{t}$ is defined in equation~\eqref{eq: def of A} and $t\coloneqq t(\xi)>1$ is the unique solution to the equation: \begin{align} \label{eq: C1} t^{(1+\alpha)/((1+2\alpha)(8l-7))}e^t=|\xi|. \end{align} \end{prop} \begin{proof} Pick any $\xi\in\mathbb{R}$. Suppose $|\xi|$ is large enough in terms of $\alpha$ and $l$, so that the unique solution $t\coloneqq t(\xi)>1$ to equation~\eqref{eq: C1} satisfies that $t>1+\max{\Lambda}$, where $\Lambda$ is the support of $\lambda$, defined in \eqref{eq: lambda}. Define a probability measure $\mathbb{P}_t$ on $\mathcal{A}^*\coloneqq\bigcup_{n\in\mathbb{N}}\mathcal{A}^n$ by \begin{align*} \mathbb{P}_t\coloneqq\sum_{w\in\mathcal{W}_t}p_w\delta_w, \end{align*} where $\delta$ is the Dirac delta measure. For any continuous function $g:\mathbb{R}\to\mathbb{C}$, the expectation \begin{align} \label{eq: 3.1 Li-S} \mathbb{E}\left(g\left(S_{n_t}-t\right)\right) =\int_{\mathcal{W}_t}g(-\log{r_w}-t)\,\mathbb{P}_t(\mathrm{d}w) =\sum_{w\in\mathcal{W}_t}p_wg(-\log{r_w}-t). 
\end{align} Define, for any $s\in\mathbb{R}$, a smooth function $g_s:\mathbb{R}\to\mathbb{C}$ by, for any $z\in\mathbb{R}$, \begin{align*} g_s(z)\coloneqq\exp{\left(-2\pi ise^{-z}\right)}. \end{align*} In particular, as $s\to\pm\infty$, $\|g_s\|_{C^1}=O(|s|)$. By definition~\eqref{eq: f_w}, for any $x,y\in[0,1]$ and $w\in\mathcal{W}_t$, $e^{-2\pi i\xi (f_w(x)-f_w(y))}=e^{-2\pi i\xi(x-y)r_w}$. By equation~\eqref{eq: 3.1 Li-S} and $s\coloneqq\xi/e^t$, for any $x,y\in[0,1]$, \begin{align} \label{eq: 3.5 Li-S} \mathbb{E}\left(g_{(x-y)s}\left(S_{n_t}-t\right)\right) = \sum_{w\in \mathcal{W}_t}p_we^{-2\pi i\xi (f_w(x)-f_w(y))} . \end{align} By Proposition~\ref{prop: refined Li-S Lemma 3.4}, the measure $\lambda$, defined in \eqref{eq: lambda}, is $(2l-2)$-weakly diophantine; it is also non-lattice, since $\log{r_j}/\log{r_k}$ is irrational. By Proposition~\ref{prop: refined Li-S Proposition 2.2}, \begin{align*} \left|\mathbb{E}\left(g_{(x-y)s}\left(S_{n_t}-t\right)\right)-\frac{1}{\sigma}\int_{\mathbb{R}^+}g_{(x-y)s}(z)p(z)\,\mathrm{d}z\right| =O\left(\frac{|(x-y)s|}{t^{1/(4(2l-2)+1)}}\right) =O\left(\frac{|(x-y)s|}{t^{1/(8l-7)}}\right) . \end{align*} By substituting equation~\eqref{eq: 3.5 Li-S} into the above, \begin{align} \label{eq: renewal} \left|\sum_{w\in \mathcal{W}_t}p_we^{-2\pi i\xi (f_w(x)-f_w(y))} -\frac{1}{\sigma}\int_{\mathbb{R}^+}g_{(x-y)s}(z)p(z)\,\mathrm{d}z\right| =O\left(\frac{|(x-y)s|}{t^{1/(8l-7)}}\right). \end{align} Since $\Lambda$, the support of $\lambda$, is finite, the survival function $p:\mathbb{R}^+\to[0,1]$ of $\lambda$ is piecewise constant with finitely many points of discontinuity. By \cite[Lemma 3.8]{li2018decrease}, the oscillatory integral in the main term satisfies \begin{align*} \left|\int_{\mathbb{R}^+}g_{(x-y)s}(z)p(z)\,\mathrm{d}z\right|=O\left(\frac{1}{|(x-y)s|}\right). 
\end{align*} By applying the triangle inequality, the above and equation~\eqref{eq: renewal} yield that \begin{align} \label{eq: two sums} \left|\sum_{w\in \mathcal{W}_t}p_we^{-2\pi i\xi (f_w(x)-f_w(y))}\right| =O\left(\frac{1}{|(x-y)s|}+\frac{|(x-y)s|}{t^{1/(8l-7)}}\right). \end{align} By the definition in equation~\eqref{eq: def of A}, for any $(x,y)\in{[0,1]^2\setminus A_{t}}$, \begin{align*} t^{\alpha/((1+2\alpha)(8l-7))} \leq |(x-y)s| \leq t^{(1+\alpha)/((1+2\alpha)(8l-7))}, \end{align*} where $|s|=t^{(1+\alpha)/((1+2\alpha)(8l-7))}>1$. Since $\alpha\in[0,1]$, $l\geq2$, and $\xi=se^t$, \begin{align*} \log{|\xi|}=\log{|s|}+t=\frac{1+\alpha}{(1+2\alpha)(8l-7)}\log{t}+t< 2t. \end{align*} The first term in the big-O notation in equation~\eqref{eq: two sums} is bounded above by \begin{align*} \frac{1}{|(x-y)s|} \leq t^{-\alpha/((1+2\alpha)(8l-7))} < 2^{1/27}\log^{-\alpha/((1+2\alpha)(8l-7))}{|\xi|} . \end{align*} Similarly, the second term is bounded above by \begin{align*} \frac{|(x-y)s|}{t^{1/(8l-7)}} \leq\frac{t^{(1+\alpha)/((1+2\alpha)(8l-7))}}{t^{1/(8l-7)}} \leq t^{-\alpha/((1+2\alpha)(8l-7))} < 2^{1/27}\log^{-\alpha/((1+2\alpha)(8l-7))}{|\xi|} . \end{align*} The result follows by the triangle inequality for integrals. \end{proof} It now remains to complete the proof of Theorem~\ref{thm: 1}. \begin{proof}[Proof of Theorem~\ref{thm: 1}] Let $\alpha\in[0,1]$ be an upper regularity exponent of $\mu$. Pick any $\xi\in\mathbb{R}$. Suppose $|\xi|$ is large enough in terms of $\alpha$ and $l$, so that the unique solution $t\coloneqq t(\xi)>1$ to equation~\eqref{eq: C1} satisfies both $t>1+\max{\Lambda}$ and $t>t_\alpha$, where $\Lambda$ is the support of $\lambda$, defined in \eqref{eq: lambda}, and $t_\alpha$ is given in Proposition~\ref{prop: refined Li-S Proposition 3.2}. Note that $\log{|\xi|}<2t$. 
By applying Propositions~\ref{prop: refined Li-S Lemma 3.1}, \ref{prop: refined Li-S Proposition 3.2} and \ref{prop: refined Li-S Proposition 3.5}, \begin{align*} \left|\widehat{\mu}(\xi)\right|^2 &\leq \left(\iint_{A_{t}}+\iint_{[0,1]^2\setminus A_{t}}\right) \sum_{w\in \mathcal{W}_t}p_we^{-2\pi i\xi (f_w(x)-f_w(y))} \, \mu(\mathrm{d}x) \, \mu(\mathrm{d}y) \\ &=O\left( t^{-\alpha/((1+2\alpha)(8l-7))}\right)+O\left( \log^{-\alpha/((1+2\alpha)(8l-7))}{|\xi|} \right) =O\left(\log^{-\alpha/((1+2\alpha)(8l-7))}{|\xi|}\right). \end{align*} Theorem~\ref{thm: 1} is established. \end{proof} \section{Proof of Theorem~\ref{thm: 2}} A few definitions and propositions are presented before starting the main proof of Theorem~\ref{thm: 2}. Let $\mathcal{A}$ be a non-empty finite set. Define $f_{\mathcal{A}}:[0,1]\to\mathbb{R}^+$ by, for any $s\in[0,1]$, \begin{align}\label{eq: f_A} f_{\mathcal{A}}(s) \coloneqq\sum_{w\in\mathcal{A}}{r_w}^s. \end{align} Let $s\in[0,1]$. Define, for any $w\in\mathcal{A}$, a weight $p_{s,w}$ by \begin{align} \label{eq:p_s_w} p_{s,w} \coloneqq\frac{{r_w}^s}{f_{\mathcal{A}}(s)} . \end{align} Define a probability measure $\nu_{s}$ on $\mathcal{A}^\mathbb{N}$ by, for any $(a_n)_{n\in\mathbb{N}}\in\mathcal{A}^\mathbb{N}$ and $n\in\mathbb{N}$, \begin{align*} \nu_s\left(\left\{(w_n)_{n\in\mathbb{N}}\in\mathcal{A}^\mathbb{N} : w_1 = a_1, w_2 = a_2, \ldots, w_n = a_n \right\} \right) \coloneqq p_{s,a_1} \, p_{s,a_2} \cdots p_{s,a_n}. \end{align*} Define a coding map $\pi:\mathcal{A}^\mathbb{N}\to F$ by, for any $w=(w_n)_{n\in\mathbb{N}}\in\mathcal{A}^\mathbb{N}$, \begin{align*} \pi(w) \coloneqq \bigcap_{n\in\mathbb{N}}\left(f_{w_n} \circ \cdots \circ f_{w_2}\circ f_{w_1}\right)(F). \end{align*} By the assumption that the images $(f_w([0,1]))_{w\in\mathcal{A}}$ are pairwise disjoint except at the endpoints, the countable intersection in the expression of $\pi$ is a singleton in $F$. 
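The function $f_{\mathcal{A}}$ and the weights $p_{s,w}$ are straightforward to explore numerically. Since $f_{\mathcal{A}}$ is continuous and strictly decreasing on $[0,1]$, equation~\eqref{eq: sum rw s =1} can be solved by bisection. The following sketch (in Python; the helper names are illustrative, not from the text) does so for the contraction ratios $1/2$ and $1/6$ of $L_3$, recovering the value of $\dim{L_3}$ quoted earlier, and checks that the weights $p_{s,w}$ sum to $1$ at the root.

```python
def f_A(ratios, s):
    """f_A(s) = sum of r^s over the contraction ratios, as in (eq: f_A)."""
    return sum(r ** s for r in ratios)

def dimension(ratios, tol=1e-14):
    """Solve f_A(s) = 1 on [0, 1] by bisection; f_A is continuous and
    strictly decreasing, so the root in [0, 1] is unique when it exists."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_A(ratios, mid) > 1:   # f_A decreasing: root lies above mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Ratios 1/(d(d-1)) for d in {2, 3}, i.e. the set L_3.
ratios = [1 / 2, 1 / 6]
s = dimension(ratios)
assert abs(s - 0.6009668516) < 1e-8    # matches the quoted dim L_3

# At s = s_A, f_A(s) = 1, so p_{s,w} = r^s / f_A(s) = r^s sums to 1.
weights = [r ** s / f_A(ratios, s) for r in ratios]
assert abs(sum(weights) - 1) < 1e-12
```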
Define a measure $\mu_{s}$ to be the push-forward measure of $\nu_s$ under the coding map $\pi$; that is, for any Borel subset $S\subset [0,1]$, \begin{align} \label{eq: mu definition} \mu_s(S) \coloneqq\left(\nu_s\circ\pi^{-1}\right)(S). \end{align} The measure $\mu_s$ is a probability measure on $F$, as \begin{align*} \mu_s(F) =\left(\nu_s\circ\pi^{-1}\right)(F) =\nu_s\left( \mathcal{A}^\mathbb{N} \right) =1 , \end{align*} and $\mu_s$ is self-similar, as \begin{align*} \mu_s =\sum_{w\in\mathcal{A}}p_{s,w}\mu_{s}\circ {f_w}^{-1}. \end{align*} Proposition~\ref{prop: Holder} provides an upper regularity exponent for $\mu_s$. \begin{prop}\label{prop: Holder} Let $s\in[0,1]$. Suppose $f_{\mathcal{A}}(s)\geq1$. Then, for any interval $I\subset [0,1]$, \begin{align*} \mu_s(I) =O\left((\operatorname{diam}{I})^{s}\right), \end{align*} where $f_{\mathcal{A}}$ and $\mu_s$ are defined in \eqref{eq: f_A} and \eqref{eq: mu definition} respectively; and $\operatorname{diam}{I}$ denotes the diameter of $I$. \end{prop} \begin{proof} Pick any interval $I\subset[0,1]$. By the self-similarity property~\eqref{eq: SSS}, there exist a maximal $n\in\mathbb{N}\cup\{0\}$ and $w=(w_1,w_2,\ldots,w_n)\in\mathcal{A}^{n}$ such that $I$ is covered by the $w$-cylinder; that is, \begin{align*} I\subset J_n\coloneqq\left(f_{w_n}\circ\cdots\circ f_{w_2}\circ f_{w_1}\right)([0,1]). \end{align*} By convention, the composition of no functions is defined as the identity function. The interval $I$ intersects at most $\#\mathcal{A}$, the cardinality of $\mathcal{A}$, cylinders of the form \begin{align*} J_{n+1,w_{n+1}}\coloneqq\left(f_{w_{n+1}}\circ f_{w_n}\circ\cdots\circ f_{w_2}\circ f_{w_1}\right)([0,1]), \end{align*} where $w_{n+1}\in\mathcal{A}$. Suppose $I$ intersects exactly one $J_{n+1}\coloneqq J_{n+1,w_{n+1}}$ for some $w_{n+1}\in\mathcal{A}$. Without loss of generality, by discarding sub-intervals of $I$ that contribute zero $\mu_s$-measure to $I$, $I$ satisfies $J_{n+1}\subset I\subset J_n$. 
Thus, $\operatorname{diam}J_{n+1}\leq h\coloneqq \operatorname{diam}I\leq \operatorname{diam}J_n=r_{w_1}r_{w_2}\cdots r_{w_n}$. By the assumption that $f_{\mathcal{A}}(s)\geq1$ and the definition in equation~\eqref{eq:p_s_w}, for any $w\in\mathcal{A}$, $p_{s,w}\leq {r_w}^s$. Thus, \begin{align*} \mu_{s}(I) & \leq \mu_{s}(J_n) \leq p_{s,w_1}p_{s,w_2}\cdots p_{s,w_n} \leq {r_{w_1}}^{s}{r_{w_2}}^{s}\cdots{r_{w_n}}^{s} \\ & \leq {r_{w_{n+1}}}^{-s}\left(r_{w_1}r_{w_2}\cdots r_{w_{n+1}}\right)^{s} \leq {r_{w_{n+1}}}^{-s}h^{s} \leq D_sh^{s}, \end{align*} where $D_s\coloneqq\max_{w\in\mathcal{A}}{{r_w}^{-s}}$. Suppose $I$ intersects more than one $J_{n+1,w_{n+1}}$. By the sub-additivity of measures, \begin{align*} \mu_{s}(I) \leq \sum_{w_{n+1}\in\mathcal{A}}\mu_{s}(J_{n+1,w_{n+1}}) \leq C_sh^{s}, \end{align*} where $C_s\coloneqq D_s\cdot\#\mathcal{A}$. \end{proof} \begin{prop}\label{prop: 7} There exists a unique $s=s_\mathcal{A}\in[0,1]$ such that equation~\eqref{eq: sum rw s =1} is satisfied. \end{prop} \begin{proof} The existence follows from the Intermediate Value Theorem. Since $f_{\mathcal{A}}$ is a finite sum of continuous functions on $[0,1]$, it is itself continuous on $[0,1]$. Notice that $f_{\mathcal{A}}(0)=\#\mathcal{A}\geq1$ and $f_{\mathcal{A}}(1)=\sum_{w\in\mathcal{A}}r_w\leq1$. Otherwise, the total length of the image intervals $(f_w([0,1]))_{w\in\mathcal{A}}$ would exceed 1, forcing overlaps of positive length inside $[0,1]$, which contradicts the assumption that the images are pairwise disjoint except at the endpoints. By the Intermediate Value Theorem, there exists $s_\mathcal{A}\in[0,1]$ such that $f_{\mathcal{A}}(s_\mathcal{A})=1$. The uniqueness follows from the strict monotonicity of $f_{\mathcal{A}}$. For any $w\in\mathcal{A}$, $r_w\in(0,1)$ and consequently $\log{r_w}<0$. Thus, for any $s\in[0,1]$, the first derivative of $f_{\mathcal{A}}$ satisfies \begin{align*} {f_{\mathcal{A}}}'(s) =\sum_{w\in\mathcal{A}}{r_w}^s\log{r_w}<0. \end{align*} It follows that $f_{\mathcal{A}}$ is strictly decreasing on $[0,1]$. 
Hence, the equation $f_{\mathcal{A}}(s)=1$ has at most one solution for $s\in[0,1]$. \end{proof} It now remains to complete the proof of Theorem~\ref{thm: 2}; that is, to show that the unique solution to equation~\eqref{eq: sum rw s =1} given by Proposition~\ref{prop: 7} is the Hausdorff dimension. The proof is separated into two parts, establishing the inequalities $\dim{F}\leq s_\mathcal{A}$ and $\dim{F}\geq s_\mathcal{A}$. The first inequality is proved by a standard covering argument, and the second is proved by applying Proposition~\ref{prop: Holder}. \begin{proof}[Proof of Theorem~\ref{thm: 2}] Define $r^*\coloneqq\max_{w\in\mathcal{A}}{r_w}\in(0,1)$. Pick any $s>s_\mathcal{A}$, $\varepsilon>0$, and $\rho>0$. For these parameters, take \begin{align*} k\coloneqq\max{\left\{1,\left\lceil\frac{\log{\rho}}{\log{r^*}}\right\rceil,\left\lceil\frac{\log{\varepsilon}}{\log{f_{\mathcal{A}}(s)}}\right\rceil\right\}}\in\mathbb{N}, \end{align*} where $f_{\mathcal{A}}$ is defined in equation~\eqref{eq: f_A}. Define $\mathcal{F}_k$ to be the set of all cylinders at level $k$ by \begin{align*} \mathcal{F}_k \coloneqq \left\{\left(f_{w_k}\circ\cdots \circ f_{w_2}\circ f_{w_1}\right)([0,1])\subset[0,1]:(w_1,w_2,\ldots,w_k)\in\mathcal{A}^k \right\} , \end{align*} where each $f_w$ is given by equation~\eqref{eq: IPS}. By the self-similarity property~\eqref{eq: SSS}, $\mathcal{F}_k$ is a cover of $F$. There are ${(\#\mathcal{A})}^k$ pairwise disjoint intervals (except possibly at the endpoints) in $\mathcal{F}_k$, each of length at most $\rho$, as for any $(w_1,w_2,\ldots,w_k)\in\mathcal{A}^k$, \begin{align*} \operatorname{diam}{\left(f_{w_k}\circ\cdots \circ f_{w_2}\circ f_{w_1}\right)([0,1])} =r_{w_1}r_{w_2}\cdots r_{w_k} \leq (r^*)^k \leq \rho. 
\end{align*} Then, $\mathcal{F}_k$ is a $\rho$-cover of $F$, and the $s$-dimensional Hausdorff measure of $F$ is at most \begin{align*} \sum_{I\in \mathcal{F}_k}(\operatorname{diam}{I})^s &=\sum_{w_1\in\mathcal{A}} \sum_{w_2\in\mathcal{A}}\cdots\sum_{w_k\in\mathcal{A}}\left(r_{w_1}r_{w_2}\cdots r_{w_k}\right)^s =\left(\sum_{w\in\mathcal{A}}{r_w}^s\right)^k = \left(f_{\mathcal{A}}(s)\right)^k \leq\varepsilon, \end{align*} as $f_{\mathcal{A}}$ is strictly decreasing on $[0,1]$, and $s>s_\mathcal{A}$ implies that $f_{\mathcal{A}}(s)<f_{\mathcal{A}}(s_\mathcal{A})=1$. Therefore, the $s$-dimensional Hausdorff measure of $F$ is 0, and consequently $\dim{F}\leq s_\mathcal{A}$. Pick any $s<s_\mathcal{A}$ and any countable collection of intervals $(I_j)_{j\in\mathbb{N}}$. Suppose $(I_j)_{j\in\mathbb{N}}$ covers $F$, that is $F\subset\bigcup_{j\in\mathbb{N}}I_j$. Since $f_{\mathcal{A}}$ is strictly decreasing on $[0,1]$, $f_{\mathcal{A}}(s)>f_{\mathcal{A}}(s_\mathcal{A})=1$. By Proposition~\ref{prop: Holder}, \begin{align*} \sum_{j\in\mathbb{N}}(\operatorname{diam}{I_j})^s \gg \sum_{j\in\mathbb{N}}\mu_s{(I_j)} \geq\mu_s(F)=1. \end{align*} Therefore, the $s$-dimensional Hausdorff measure of $F$ is positive, and consequently $\dim{F}\geq s_\mathcal{A}$. Theorem~\ref{thm: 2} is established. \end{proof} \section{Proof of Theorem~\ref{thm: 3}} \begin{proof}[Proof of Theorem~\ref{thm: 3}] By Theorem~\ref{thm: 2} and the definition in equation~\eqref{eq: f_A}, $f_{\mathcal{A}}(\dim{F})=1$. By Proposition~\ref{prop: Holder}, for any interval $I\subset[0,1]$, \begin{align*} \mu_{\mathcal{A}}(I) = O\left((\operatorname{diam}{I})^{\dim{F}}\right), \end{align*} where $\mu_{\mathcal{A}}\coloneqq\mu_{\dim{F}}$, as defined in equation~\eqref{eq: mu definition}. Thus, the Hausdorff dimension of $F$ is the maximal upper regularity exponent of $\mu_{\mathcal{A}}$. By setting $\alpha\coloneqq \dim{F}$ in Theorem~\ref{thm: 1}, the desired decay rate in Theorem~\ref{thm: 3} is established. 
\end{proof} \section{Proof of Theorem~\ref{thm: 4}} By equations~\eqref{eq: Luroth SSS} and \eqref{eq: Luroth IFS}, it is evident that for any finite $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$, the set $L_{\mathcal{A}}\subset[0,1]$ is self-similar, and the images $(f_w([0,1]))_{w \in \mathcal{A}}$ are pairwise disjoint except possibly at the endpoints. To establish Theorem~\ref{thm: 4}, as an application of Theorem~\ref{thm: 3}, it suffices to prove the remaining non-Liouville condition for non-singleton $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$. First, the linear independence of logarithms of the contraction ratios is proved. Then, the explicit Baker Theorem by Matveev \cite[Theorem 2.1]{matveev2000explicit} is applied, establishing the non-Liouville degree explicitly. \begin{prop}\label{prop: Luroth linearly independent} For any $a_1,a_2\in\mathbb{N}\setminus\{1\}$, if $a_1\ne a_2$ then $\log{(a_1(a_1-1))}$ and $\log{(a_2(a_2-1))}$ are linearly independent over $\mathbb{Z}$. \end{prop} \begin{proof} The contrapositive of the statement is proved. Suppose there exists $k\in\mathbb{Q}$ such that \begin{align} \label{eq: Luroth exp form} a_1(a_1-1)=(a_2(a_2-1))^k\geq2. \end{align} Suppose there exist $n,m\in\mathbb{N}$ such that $n\geq2$ and $a_1(a_1-1)=n^m$. Let ${p_1}^{e_1}{p_2}^{e_2}\cdots {p_s}^{e_s}$ be the prime factorisation of $n$, where $p_1,p_2,\ldots,p_s$ are distinct primes and $e_i\in\mathbb{N}$ for any $i\in\{1,2,\ldots,s\}$. Since $\gcd(a_1,a_1-1)=1$, without loss of generality (if necessary, relabel the primes accordingly), there exists $r\in\{1,2,\ldots,s\}$ such that $a_1=({p_1}^{e_1}{p_2}^{e_2}\cdots {p_r}^{e_r})^m$ and $a_1-1=({p_{r+1}}^{e_{r+1}}{p_{r+2}}^{e_{r+2}}\cdots {p_s}^{e_s})^m$. Thus, $a_1$ and $a_1-1$ are consecutive positive integers, and both are $m$-th powers of positive integers. Since no two $m$-th powers of positive integers are consecutive when $m\geq2$, the only possibility is $m=1$. Therefore, both $a_1(a_1-1)$ and $a_2(a_2-1)$ are not perfect powers. 
By equation~\eqref{eq: Luroth exp form}, the only possibility is $k=1$. Hence, $a_1(a_1-1)=a_2(a_2-1)$, and consequently, $a_1=a_2$. \end{proof} \begin{prop}\label{prop: Luroth SSL} For any $a_1,a_2\in\mathbb{N}\setminus\{1\}$, if $a_1\ne a_2$ then \begin{align*} \frac{\log{(a_1(a_1-1))}}{\log{(a_2(a_2-1))}} \end{align*} is non-Liouville of degree $l_{a_1,a_2}$, where \begin{align} \label{eq: l a1 a2} l_{a_1,a_2} &\coloneqq387072e^3(15.8+5.5\log{2})\log{(a_1(a_1-1))}\log{(a_2(a_2-1))}+1 \\ &>189369098. \nonumber \end{align} \end{prop} \begin{proof} Pick any $p\in\mathbb{Z}$ and $q\in\mathbb{N}$. By Proposition~\ref{prop: Luroth linearly independent} and \cite[Theorem 2.1]{matveev2000explicit}, \begin{align} \label{eq: Luroth log linear} L(p,q,a_1,a_2) \coloneqq\log{|q\log{(a_1(a_1-1))}-p\log{(a_2(a_2-1))}|} >-C(n)C_0W_0D^2\Omega, \end{align} where the values in \cite[Theorem 2.1]{matveev2000explicit} are $n=2$, $D=1$, $x=1$, and \begin{align*} C(n) &= 387072e^3; \\ C_0 &= 15.8+5.5\log{2}; \\ B &=\max\left\{1,|p|,\frac{\log{(a_1(a_1-1))}}{\log{(a_2(a_2-1))}}q\right\}\geq\frac{\log{(a_1(a_1-1))}}{\log{(a_2(a_2-1))}}q; \\ W_0 &= \log{\left(\frac{3e\log{(a_1(a_1-1))}}{2\log{(a_2(a_2-1))}}q\right)}; \\ \Omega &=\log{(a_1(a_1-1))}\log{(a_2(a_2-1))}. \end{align*} By taking the exponential function on both sides of equation~\eqref{eq: Luroth log linear}, \begin{align*} \exp{L(p,q,a_1,a_2)} &=\left|q\log{(a_1(a_1-1))}-p\log{(a_2(a_2-1))}\right| \\ &>\left(\frac{3e\log{(a_1(a_1-1))}}{2\log{(a_2(a_2-1))}}q\right)^{-C(n)C_0\Omega} \\ &> c_{a_1,a_2}q^{-387072e^3(15.8+5.5\log{2})\log{(a_1(a_1-1))}\log{(a_2(a_2-1))}}\log{(a_2(a_2-1))}, \end{align*} where \begin{align*} c_{a_1,a_2} \coloneqq\frac{1}{\log{(a_2(a_2-1))}}\left(\frac{3e\log{(a_1(a_1-1))}}{2\log{(a_2(a_2-1))}}\right)^{-C(n)C_0\Omega}>0. 
\end{align*} Dividing both sides by $q\log{(a_2(a_2-1))}$ gives \begin{align*} \left|\frac{\log{(a_1(a_1-1))}}{\log{(a_2(a_2-1))}}-\frac{p}{q}\right|&>\frac{c_{a_1,a_2}}{q^{l_{a_1,a_2}}}, \end{align*} where $l_{a_1,a_2}$ is defined in equation~\eqref{eq: l a1 a2}. \end{proof} Proposition~\ref{prop: 10} offers a faster decay rate than Theorem~\ref{thm: 4}, but its decay parameter involves a more complicated expression. Theorem~\ref{thm: 4}, which is derived from Proposition~\ref{prop: 10}, provides a more accessible formula and makes it easier to see how the decay rate varies with the parameters. \begin{prop}\label{prop: 10} For any non-singleton finite $\mathcal{A}\subset\mathbb{N}\setminus\{1\}$, there exists a self-similar probability measure $\mu_{\mathcal{A}}$ on $L_{\mathcal{A}}$ such that as $\xi\to\pm\infty$, \begin{align*} \widehat{\mu_{\mathcal{A}}}(\xi)=O\left(\log^{-\beta_{\mathcal{A},0}}{|\xi|}\right), \end{align*} where $a_1$ and $a_2$ are the smallest and second smallest elements of $\mathcal{A}$ respectively, and \begin{align*} \beta_{\mathcal{A},0} \coloneqq \frac{1}{2}\frac{\dim{L_\mathcal{A}}}{1+2\dim{L_\mathcal{A}}}\frac{1}{3096576e^3(15.8+5.5\log{2})\log{(a_1(a_1-1))}\log{(a_2(a_2-1))}+1} >0 . \end{align*} \end{prop} \begin{proof} By the formula for the non-Liouville degree in equation~\eqref{eq: l a1 a2}, Theorem~\ref{thm: 3} gives the desired decay rate after substitution and calculation. The fastest decay rate under Theorem~\ref{thm: 3} is achieved when $l_{a_1,a_2}$ is minimised. Since $l_{a_1,a_2}$ is strictly increasing in each of $a_1$ and $a_2$, these are chosen to be the smallest and second smallest elements of $\mathcal{A}$ respectively. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: 4}] Theorem~\ref{thm: 4} follows directly from Proposition~\ref{prop: 10} and the inequality $\beta_{\mathcal{A},0}\geq\beta_{\mathcal{A}}$.
\end{proof} \bibliographystyle{siam} \bibliography{name} \end{document}
% Figure (fig: L3): Construction of the L\"uroth Set $L_3$.
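As an illustrative numerical sanity check of the two propositions above (not part of the proofs; the helper names `is_perfect_power` and `non_liouville_degree` are ad hoc), the following Python sketch verifies that $a(a-1)$ is not a perfect power for small $a$, and evaluates the non-Liouville degree $l_{a_1,a_2}$ at its minimising pair $\{2,3\}$:

```python
import math

# Check the perfect-power step: for 2 <= a < 10000, a*(a-1) is never
# a perfect power n^m with n, m >= 2. (Spot check only; the general
# statement is proved in the text.)
def is_perfect_power(x):
    for m in range(2, x.bit_length() + 1):
        n = round(x ** (1.0 / m))
        # Guard against floating-point rounding of the m-th root.
        if any(c >= 2 and c ** m == x for c in (n - 1, n, n + 1)):
            return True
    return False

assert not any(is_perfect_power(a * (a - 1)) for a in range(2, 10_000))

# Non-Liouville degree from equation (eq: l a1 a2); it is strictly
# increasing in each argument, hence minimised at {a_1, a_2} = {2, 3}.
def non_liouville_degree(a1, a2):
    C = 387072 * math.e ** 3 * (15.8 + 5.5 * math.log(2))
    return C * math.log(a1 * (a1 - 1)) * math.log(a2 * (a2 - 1)) + 1

# The text states l_{a_1,a_2} > 189369098; the minimum is about 1.894e8.
assert 1.89e8 < non_liouville_degree(2, 3) < 1.90e8
```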
2412.16710v1
http://arxiv.org/abs/2412.16710v1
Space-time divergence lemmas and optimal non-reversible lifts of diffusions on Riemannian manifolds with boundary
\documentclass[a4paper,bibliography=totoc]{scrartcl} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm,amsfonts} \usepackage{mathtools,thmtools,thm-restate} \usepackage[british]{babel} \usepackage{csquotes} \usepackage{hyperref} \usepackage[nameinlink,noabbrev]{cleveref} \usepackage{enumerate} \usepackage{graphicx} \usepackage{xcolor} \usepackage[maxbibnames=5,sorting=nyt,sortcites,giveninits=true]{biblatex} \DeclareNameAlias{sortname}{family-given} \addbibresource{main.bib} \title{Space-time divergence lemmas and optimal non-reversible lifts of diffusions on Riemannian manifolds with boundary} \author{Andreas Eberle\thanks{E-Mail: \href{mailto:[email protected]}{[email protected]}, ORCID: \href{https://orcid.org/0000-0003-0346-3820}{0000-0003-0346-3820}}\qquad Francis Lörler\thanks{E-Mail: \href{mailto:[email protected]}{[email protected]}, ORCID: \href{https://orcid.org/0009-0007-3177-1093}{0009-0007-3177-1093}}\medskip\\Institute for Applied Mathematics, University of Bonn.\\ Endenicher Allee 60, 53115 Bonn, Germany.} \newcommand{\R} {\mathbb{R}} \newcommand{\C} {\mathbb{C}} \newcommand{\N} {\mathbb{N}} \newcommand{\Z} {\mathbb{Z}} \newcommand{\T} {\mathrm{T}} \newcommand{\Ncal} {\mathrm{N}} \renewcommand{\H} {\mathbb{H}} \newcommand{\X} {\mathfrak{X}} \newcommand{\D} {\mathcal{D}} \newcommand{\eps} {\varepsilon} \renewcommand{\phi} {\varphi} \let\S\relax \newcommand{\S} {\mathcal{S}} \newcommand{\V} {\mathcal{V}} \newcommand{\Ecal} {\mathcal{E}} \newcommand{\diff} {\mathop{}\!\mathrm{d}} \newcommand{\loc} {\mathrm{loc}} \newcommand{\muq} {\overline\mu} \newcommand{\Mq} {{\overline{M}}} \newcommand{\dM} {{\partial M}} \newcommand{\nuM} {\nu_M} \newcommand{\nudM} {\nu_\dM} \newcommand{\nablad}{\nabla^\dM} \newcommand{\Deltad}{\Delta^\dM} \newcommand{\Ld} {L^\dM} \newcommand{\nablaq}{\overline{\nabla}} \newcommand{\mudM} {\mu_\dM} \newcommand{\muqk} {\lambda\otimes\hat\mu} \newcommand{\stpi} {\mathrm{STPI}} \newcommand{\DC} {\mathrm{D}} 
\newcommand{\sff} {\mathrm{I\!I}} \newcommand{\rel} {\mathrm{rel}} \newcommand{\RHMC} {\textup{RHMC}} \newcommand{\LD} {\textup{LD}} \renewcommand{\c} {\textup{c}} \let\Re\relax \let\Im\relax \DeclareMathOperator{\Re} {Re} \DeclareMathOperator{\Im} {Im} \DeclareMathOperator{\spn} {span} \DeclareMathOperator{\tr} {tr} \DeclareMathOperator{\gap} {gap} \DeclareMathOperator{\spec} {spec} \DeclareMathOperator{\Var} {Var} \DeclareMathOperator{\Ex} {\mathbb{E}} \DeclareMathOperator{\Cov} {Cov} \DeclareMathOperator{\unif} {Unif} \DeclareMathOperator{\dom} {Dom} \DeclareMathOperator{\sing} {s} \DeclareMathOperator{\divg} {div} \DeclareMathOperator{\grad} {grad} \DeclareMathOperator{\ran} {Ran} \DeclareMathOperator{\Ric} {Ric} \DeclareMathOperator{\supp} {supp} \DeclarePairedDelimiterX{\norm}[1]{\lVert}{\rVert}{#1} \theoremstyle{plain} \newtheorem{theo}{Theorem} \newtheorem{lemm}[theo]{Lemma} \newtheorem{coro}[theo]{Corollary} \theoremstyle{definition} \newtheorem{defi}[theo]{Definition} \newtheorem{exam}[theo]{Example} \newtheorem{rema}[theo]{Remark} \newtheorem{rema*}{Remark} \newtheorem{assu}{Assumption} \crefname{lemm}{lemma}{lemmas} \crefname{theo}{theorem}{theorems} \crefname{assu}{assumption}{assumptions} \begin{document} \maketitle \begin{abstract}\vspace{-\baselineskip} Non-reversible lifts reduce the relaxation time of reversible diffusions at most by a square root. For reversible diffusions on domains in Euclidean space, or, more generally, on a Riemannian manifold with boundary, non-reversible lifts are in particular given by the Hamiltonian flow on the tangent bundle, interspersed with random velocity refreshments, or perturbed by Ornstein-Uhlenbeck noise, and reflected at the boundary. In order to prove that for certain choices of parameters, these lifts achieve the optimal square-root reduction up to a constant factor, precise upper bounds on relaxation times are required. 
A key tool for deriving such bounds by space-time Poincaré inequalities is a quantitative space-time divergence lemma. Extending previous work of Cao, Lu and Wang, we establish such a divergence lemma with explicit constants for general locally convex domains with smooth boundary in Riemannian manifolds satisfying a lower, not necessarily positive, curvature bound. As a consequence, we prove optimality of the lifts described above up to a constant factor, provided the deterministic transport part of the dynamics and the noise are adequately balanced. Our results show for example that an integrated Ornstein-Uhlenbeck process on a locally convex domain with diameter $d$ achieves a relaxation time of the order $d$, whereas, in general, the Poincaré constant of the domain is of the order $d^2$. \begin{samepage} \par\vspace\baselineskip \noindent\textbf{Keywords:} Lift; divergence lemma; Riemannian manifold; hypocoercivity; Langevin dynamics; Hamiltonian Monte Carlo; convex domain; reflection.\par \noindent\textbf{MSC Subject Classification:} 60J25, 60J35, 58J65, 58J05, 35H10. \end{samepage} \end{abstract} \section{Introduction} Long-time convergence to equilibrium of strongly continuous contraction semigroups plays an important role in many areas of probability and analysis, ranging from convergence of Markov processes with applications in sampling to kinetic equations, ergodic theory, and non-equilibrium statistical physics. The degenerate case in which the generator of the semigroup is not an elliptic but only hypoelliptic operator has attracted much attention in the past years, and the study of convergence of such dynamics is known as hypocoercivity \cite{Villani2009Hypocoercivity}. Recently, a variational approach based on space-time Poincaré inequalities pioneered by Albritton, Armstrong, Mourrat and Novack \cite{Albritton2021Variational} has proved successful in obtaining sharp and quantitative bounds on the rates of convergence. 
This approach has further been developed and applied in various scenarios by Cao, Lu and Wang \cite{Cao2019Langevin,Lu2022PDMP}. Furthermore, by casting it in a framework of second-order lifts \cite{EberleLoerler2024Lifts}, the present authors have simplified the approach, and an associated lower bound on the relaxation time of lifts has raised the question of the existence of optimal lifts achieving this lower bound. The goal of this paper is to prove upper bounds of the optimal order on the relaxation time of Langevin dynamics and randomised Hamiltonian Monte Carlo with appropriately chosen parameters on locally convex Riemannian manifolds with boundary that satisfy a lower curvature bound. To this end, we provide a quantitative space-time divergence lemma, which is the key ingredient in the proof of space-time Poincaré inequalities. By considering Riemannian manifolds with boundary as spatial domains, our result allows for the treatment of non-reversible, degenerate dynamics on Riemannian manifolds, as well as dynamics with reflective boundary behaviour. In particular, it shows that, up to a constant factor, the processes described above are optimal lifts of an overdamped Langevin diffusion with reflection at the boundary, provided the deterministic transport part of the dynamics and the noise are adequately balanced. Probability distributions supported on Riemannian manifolds arise in many applications: through constrained dynamics in physics \cite{lelievre2012langevin,Lelievre2010freeenergy}, as natural parameter spaces in statistics \cite{Chikuse2012Statistics}, in Bayesian inference \cite{RudolfSprungk2023Sphere,Diaconis2013Sampling}, through introduction of an artificial Riemannian geometry on $\R^n$ \cite{LeeVempala2022ManifoldJoys}, or in volume computation of convex bodies \cite{LeeVempala2018Convergence,Gatmiry2024Sampling}, to name a few.
Sampling from such distributions is a challenging task, and theoretical convergence bounds for many proposed algorithms \cite{ByrneGirolami2013GeodesicSampling,LeeShenVempala2023Riemannian,ChevallierHMCreflections,GirolamiCalderhead2011Riemann,AfsharDomke2015ReflectionHMC} are still lacking. Our quantitative convergence bounds provide a first step towards the theoretical understanding of sampling algorithms based on discretisations of continuous-time processes on Riemannian manifolds. Similarly, dynamics constrained to a convex domain arise in sampling problems with restrictions \cite{AhnChewi2021MirrorLangevin,LanKang2023Constrained}. The relaxation time of reflected Brownian motion on a convex domain with diameter $d$ is the Poincaré constant of the uniform distribution on this domain, which is of the order $d^2$ in general \cite{Li2012Geometric}. Our result shows that critically damped Langevin dynamics, which reduces to an integrated Ornstein-Uhlenbeck process reflected at the boundary, and randomised Hamiltonian Monte Carlo with appropriately chosen refresh rate, which reduces to a billiards process with velocity refreshment, achieve a relaxation time of order $d$. \medskip Let us first give a brief review of the concept of second-order lifts of reversible diffusions introduced in \cite{EberleLoerler2024Lifts}. Consider a time-homogeneous Markov process $(X_t,V_t)_{t\geq 0}$ with invariant probability measure $\hat\mu$ on the tangent bundle $\T M$ of a Riemannian manifold $(M,g)$ with boundary and let $\pi\colon\T M\to M$ be the natural projection. Furthermore, let $(Z_t)_{t\geq 0}$ be a reversible diffusion process on $M$ with invariant probability measure $\mu = \hat\mu\circ\pi^{-1}$. Their associated transition semigroups acting on $L^2(\hat\mu)$ and $L^2(\mu)$ are $(\hat P_t)_{t\geq 0}$ and $(P_t)_{t\geq 0}$, and their infinitesimal generators are denoted by $(\hat L,\dom(\hat L))$ and $(L,\dom(L))$, respectively. 
Then $(\hat P_t)_{t\geq 0}$ is a \emph{second-order lift} of $(P_t)_{t\geq 0}$ if \begin{equation}\label{eq:deflift0} \dom(L)\subseteq\{f\in L^2(\mu)\colon f\circ\pi\in\dom(\hat L)\}\,, \end{equation} and for all $f,g\in\dom(L)$ we have \begin{equation}\label{eq:deflift1} \int_{\T M}\hat L(f\circ \pi)(g\circ\pi)\diff\hat\mu=0 \end{equation} and \begin{equation}\label{eq:deflift2} \frac{1}{2}\int_{\T M}\hat L(f\circ\pi)\hat L(g\circ\pi)\diff\hat\mu =\Ecal(f,g)\,, \end{equation} where $\Ecal(f,g)=-\int_M fLg\diff\mu$ is the Dirichlet form associated to $(L,\dom(L))$. Since the process $(Z_t)_{t\geq 0}$ is assumed to be reversible, its generator is self-adjoint, and the associated semigroup satisfies \begin{equation}\label{eq:Ptnorm} \norm{P_t}_{L_0^2(\mu)\to L_0^2(\mu)} = \exp(-\lambda t) \end{equation} with the decay rate $\lambda$ given by the spectral gap \begin{equation}\label{eq:gap} \gap(L)=\inf\{\Re(\alpha)\colon\alpha\in\spec(-L|_{L_0^2(\mu)})\} \end{equation} of $(L,\dom(L))$ in the Hilbert space $L^2(\mu)$. Here $L_0^2(\mu) = \{f\in L^2(\mu)\colon\int_Mf\diff\mu=0\}$ denotes the subspace of $L^2(\mu)$ of mean-zero functions. In contrast, the strong generator of a \emph{non-reversible} Markov process is no longer self-adjoint and the operator norm of the associated semigroup is no longer a pure exponential. In particular, the spectral gap only coincides with the asymptotic decay rate of the semigroup under additional assumptions ensuring a spectral mapping theorem relating the spectrum of $P_t$ to that of $L$, see \cite{engel1999semigroups}. This motivates the introduction of the non-asymptotic relaxation time \begin{equation*} t_\rel(\hat P) = \inf\{t\geq0\colon\norm{\hat P_tf}_{L^2(\hat\mu)}\leq e^{-1}\norm{f}_{L^2(\hat\mu)}\textup{ for all }f\in L_0^2(\hat\mu)\} \end{equation*} of $(\hat P_t)_{t\geq 0}$. For reversible processes, \eqref{eq:Ptnorm} shows that the relaxation time coincides with the usual definition as the inverse of the spectral gap.
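A guiding example, taken from \cite{EberleLoerler2024Lifts}, is classical Langevin dynamics: on $M=\R^d$, the process on $\T M=\R^d\times\R^d$ with friction parameter $\gamma>0$, generator
\begin{equation*}
\hat Lf(x,v) = \langle v,\nabla_xf\rangle - \langle\nabla U(x),\nabla_vf\rangle + \gamma\left(\Delta_vf - \langle v,\nabla_vf\rangle\right)
\end{equation*}
and invariant probability measure $\hat\mu=\mu\otimes\mathcal{N}(0,I_d)$ is a second-order lift of the overdamped Langevin diffusion with generator $\frac{1}{2}\left(\Delta-\langle\nabla U,\nabla\rangle\right)$. Indeed, $\hat L(f\circ\pi)=\langle v,\nabla f\rangle$, so that \eqref{eq:deflift1} holds because the Gaussian velocity marginal is centred, and \eqref{eq:deflift2} follows from $\int_{\T M}\langle v,\nabla f\rangle\langle v,\nabla g\rangle\diff\hat\mu=\int_M\langle\nabla f,\nabla g\rangle\diff\mu$, since the velocity components are uncorrelated with unit variance.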
A crucial consequence of the lift property is the lower bound \begin{equation*} t_\rel(\hat P) \geq \frac{1}{2\sqrt{2}}\sqrt{t_\rel(P)} \end{equation*} on the relaxation time of an arbitrary lift, see \cite[Theorem 11]{EberleLoerler2024Lifts}, which shows that convergence measured by the relaxation time can at most be accelerated by a square root through lifting. A lift $(\hat P_t)_{t\geq 0}$ is hence called \emph{$C$-optimal} if \begin{equation}\label{eq:Copt} t_\rel(\hat P) \leq \frac{C}{2\sqrt{2}}\sqrt{t_\rel(P)}\,, \end{equation} i.e.\ if it achieves this lower bound up to a constant factor $C>0$. \medskip Proving $C$-optimality of a lift requires precise upper bounds on the relaxation time. The approach recently introduced by Albritton, Armstrong, Mourrat and Novack \cite{Albritton2021Variational} relies on space-time Poincaré inequalities in order to obtain exponential decay \begin{equation*} \frac{1}{T}\int_t^{t+T}\norm{\hat P_sf}_{L^2(\hat\mu)}^2\diff s \leq e^{-\nu t}\frac{1}{T}\int_0^T\norm{\hat P_s f}_{L^2(\hat\mu)}^2\diff s\qquad\textup{for all }f\in L_0^2(\hat\mu)\textup{ and }t\geq 0 \end{equation*} with rate $\nu$ of time-averages over some fixed length $T>0$. Since $s\mapsto\norm{\hat P_sf}_{L^2(\hat\mu)}$ is non-increasing, this yields $t_\rel(\hat P)\leq\frac{2}{\nu}+T$. The key tool in proving a space-time Poincaré inequality is a space-time divergence lemma which can be formulated in terms of the generator $(L,\dom(L))$ of the reversible process alone.
Namely, letting $\lambda$ denote the Lebesgue measure on $[0,T]$, any $f\in L_0^2(\lambda\otimes\mu)$ can be written as \begin{equation*} f = \partial_th-Lg \end{equation*} with functions $h\in H^{1,2}(\lambda\otimes\mu)$ and $g\in H^{2,2}(\lambda\otimes\mu)$ satisfying Dirichlet boundary conditions in time, as well as $g(t,\cdot)\in\dom(L)$ for $\lambda$-almost every $t\in [0,T]$, together with the regularity estimates \begin{eqnarray*} \norm{h}_{L^2(\muq)}^2+\norm{\nabla g}_{L^2(\muq)}^2 &\leq& c_0(T)\,\norm{f}_{L^2(\muq)}^2\, ,\\ \norm{\nablaq h}_{L^2(\muq)}^2+\norm{\nablaq\nabla g}_{L^2(\muq)}^2 &\leq& c_1(T)\,\norm{f}_{L^2(\muq)}^2\, \end{eqnarray*} on $h$ and $g$, where $\nablaq=(\partial_t,\,\nabla)$ denotes the space-time gradient. One goal of this work is to provide solutions to these space-time divergence equations with quantitative expressions for $c_0(T)$ and $c_1(T)$ in the case where $\mu$ has a density $\exp(-U)$ with respect to the Riemannian volume measure and $(Z_t)_{t\geq 0}$ is a Riemannian overdamped Langevin diffusion with generator \begin{equation*} L = \Delta-\langle\nabla U,\nabla\rangle \end{equation*} with Neumann boundary conditions. In this case, $f$ can be written as the space-time divergence of the vector field with components $-h$ and $\nabla g$, which motivated the term `divergence lemma' in the literature. This allows for the treatment of second-order lifts of Riemannian overdamped Langevin dynamics, which include Riemannian randomised Hamiltonian Monte Carlo and Langevin dynamics with reflective boundary behaviour. \medskip We begin in the following section by formally introducing the setting and summarising the Riemannian geometry involved. In particular, we provide an extension of the Reilly formula which arises through an integration by parts of the Bochner formula for $L$ and a careful treatment of the boundary terms. Together with a lower curvature bound, it allows us to bound the Hessian of smooth functions on $M$ from above.
In \Cref{sec:divergence}, we prove the divergence lemma with explicit quantitative bounds on the constructed functions. By considering a splitting of $L^2(\mu)$ into high and low modes associated to a spectral decomposition of $L$, as well as into symmetric and antisymmetric functions, we obtain constants of the optimal order conjectured in \cite{Cao2019Langevin}. We demonstrate an application of the divergence lemma in \Cref{sec:optimallifts} by showing that, with a critical choice of parameters, Langevin dynamics and randomised Hamiltonian Monte Carlo on a Riemannian manifold with boundary satisfying a lower curvature bound are optimal lifts of the corresponding overdamped Langevin diffusion. A brief overview of Riemannian geometry on manifolds with boundary is given in \Cref{sec:appendix}. \section{Preliminaries and the generalised Reilly formula}\label{sec:setting} Consider a complete, connected and oriented Riemannian manifold $(M,g)$ with (possibly empty) boundary. In particular, the boundary $\dM$ of $M$ is a Riemannian submanifold without boundary, and for simplicity we again denote the induced metric on $\dM$ by $g$. We equip the boundary with the outward unit-normal vector field $N$, and for $x\in M$ and $v,w\in \T_xM$ we write $\langle v,w\rangle_x$ for $g_x(v,w)$. For a short overview of manifolds with boundary, see \Cref{ssec:ManifoldBdry}. We equip $M$ with a probability measure $\mu$ absolutely continuous with respect to the Riemannian volume measure $\nu_M$, i.e.\ \begin{equation*} \mu(\diff x)=\exp(-U(x))\,\nuM(\diff x) \end{equation*} for some function $U\in C^2(M)$ with $\int_M\exp(-U)\diff\nuM = 1$. Let $\Delta\colon C^\infty(M)\to C^\infty(M),\,u\mapsto\divg(\nabla u)$, where $\nabla$ is the Riemannian gradient and $\divg$ the Riemannian divergence, be the Laplace-Beltrami operator.
Then the weighted Laplace-Beltrami operator $L$ is defined as \begin{equation*} Lu = \Delta u-\langle\nabla U,\nabla u\rangle\qquad\textup{for all }u\in C^\infty(M)\,. \end{equation*} The operator $L$ satisfies the generalised Bochner-Lichnerowicz-Weitzenböck formula \begin{equation}\label{eq:bochner2} \frac{1}{2}L|\nabla u|^2 = |\nabla^2u|^2 + \langle \nabla u,\nabla L u\rangle + (\Ric+\nabla^2U)(\nabla u,\nabla u) \end{equation} for all $u\in C^\infty(M)$, see \cite[Proposition 3]{BakryEmery1985Diffusions}, where $\nabla^2u$ is the Hessian, or second covariant derivative, of $u$, and $\Ric$ is the Ricci curvature tensor of $M$. Letting $\{E_i\}_{i=1}^d$ denote a local orthonormal frame, $|\nabla^2u|^2=\sum_{i,j=1}^d(\nabla^2_{E_i,E_j}u)^2$ is the squared Frobenius norm of the tensor $\nabla^2u$. Note that \eqref{eq:bochner2} reduces to the usual Bochner identity in case $U$ is constant and thus $L=\Delta$, see e.g.\ \cite{Jost2013Riemannian}. Let \begin{equation*} C_\c^\infty(M) = \{u\in C^\infty(M)\colon \supp(u) \textup{ is compact in } M\} \end{equation*} denote the set of smooth, compactly supported functions on $M$, where elements of $C_\c^\infty(M)$ are \emph{not} required to vanish at the boundary $\dM$. Integrating the Bochner identity by parts with respect to the Riemannian volume $\nu_M$ yields the Reilly formula \cite{Reilly1977Formula} \begin{align*} \MoveEqLeft\int_M\left(|\nabla^2u|^2-(\Delta u)^2+\Ric(\nabla u,\nabla u)\right)\diff\nuM\\ &=\int_\dM\left(H\left(\partial_Nu\right)^2 - 2(\partial_Nu)\Deltad u + h\big(\nablad u,\nablad u\big)\right)\diff\nudM\,, \end{align*} for all $u\in C_\c^\infty(M)$, see also \cite[Lemma 7.10]{ColdingMinicozzi2011Minimal} for a proof. Here $h$ is the scalar second fundamental form, $H$ is the mean curvature of the boundary, see \Cref{ssec:HessianSFF}, and $\partial_Nu = \langle\nabla u,N\rangle$.
We use the notation $\nablad$ and $\Deltad$ for the Riemannian gradient and the Laplace-Beltrami operator on the boundary manifold $(\dM,g)$, respectively, and the induced Riemannian volume measure on $\dM$ is denoted by $\nudM$. Denoting the induced measure on the boundary by \begin{align*} \mudM(\diff x)=\exp(-U(x))\,\nudM(\diff x)\,, \end{align*} the operator $L$ satisfies the integration-by-parts identity \begin{align}\label{eq:intbypartsL} \int_M \phi L\psi\diff\mu = \int_\dM \phi \langle\nabla\psi,N\rangle\diff\mu_\dM - \int_M\langle\nabla\phi,\nabla\psi\rangle\diff\mu\, \end{align} for all smooth, compactly supported functions $\phi,\psi\in C_\c^\infty(M)$. Note that $\mudM$ is not necessarily a probability measure. Now integrating \eqref{eq:bochner2} by parts with respect to $\mu$ yields the following generalisation of the Reilly formula due to Ma and Du \cite{MaDu2010Extension}, which reduces to the usual Reilly formula in case $U$ is constant. For completeness and the convenience of the reader, we include a proof in \Cref{ssec:BochnerReilly}. \begin{restatable}[Generalised Reilly formula]{lemm}{GenReilly}\label{lem:GenReilly} For any $u\in C_\c^\infty(M)$, \begin{align*} \MoveEqLeft\int_M\left(|\nabla^2u|^2-(Lu)^2+(\Ric+\nabla^2U)(\nabla u,\nabla u)\right)\diff\mu\\ &=\int_\dM\left((H+\partial_NU)\left(\partial_Nu\right)^2 - 2(\partial_Nu)\Ld u + h\big(\nablad u,\nablad u\big)\right)\diff\mudM\,, \end{align*} where $\Ld = \Deltad-\langle\nablad U,\nablad\rangle$. \end{restatable} \Cref{lem:GenReilly} has been applied by Kolesnikov and Milman \cite{KolesnikovMilman2017Brascamp,KolesnikovMilman2016Riemannian,KolesnikovMilman2016Poincare} to obtain Brascamp-Lieb-type and log-Sobolev functional inequalities on weighted Riemannian manifolds with boundary. 
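In the flat case of a domain $M\subseteq\R^d$ with the Euclidean metric, where $\Ric=0$, the generalised Bochner formula \eqref{eq:bochner2} can be verified by a direct computation. Indeed, the classical identity $\frac{1}{2}\Delta|\nabla u|^2=|\nabla^2u|^2+\langle\nabla u,\nabla\Delta u\rangle$ combined with
\begin{equation*}
-\frac{1}{2}\langle\nabla U,\nabla|\nabla u|^2\rangle = -\nabla^2u(\nabla u,\nabla U) = \langle\nabla u,\nabla Lu\rangle-\langle\nabla u,\nabla\Delta u\rangle+\nabla^2U(\nabla u,\nabla u)\,,
\end{equation*}
which follows from $\nabla\langle\nabla U,\nabla u\rangle=\nabla^2U\,\nabla u+\nabla^2u\,\nabla U$, yields \eqref{eq:bochner2}.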
\begin{rema}\label{rem:GenReillyCorners} The set of corner points $C$ of a Riemannian manifold $M$ with corners, see \Cref{ssec:ManifoldBdry}, forms a set of measure $0$ with respect to the Riemannian volume measure $\nu_M$, and $M\setminus C$ is a Riemannian manifold with boundary. Hence \Cref{lem:GenReilly} holds analogously for Riemannian manifolds with corners when replacing the domain of integration on the right-hand side with the boundary $\partial(M\setminus C)$ of $M\setminus C$. \end{rema} The Sobolev space $H^{1,2}(M,\mu)$ on $M$ is defined as the closure of $C^\infty(M)\cap L^2(\mu)$ in $L^2(\mu)$ with respect to the norm \begin{equation}\label{eq:MSobolevDef1} \norm{u}_{H^{1,2}(\mu)}^2 = \norm{u}_{L^2(\mu)}^2 + \norm{\nabla u}_{L^2(\mu)}^2\,. \end{equation} Similarly, one defines the second-order Sobolev space $H^{2,2}(M,\mu)$ on $M$ as the closure of $C^\infty(M)\cap L^2(\mu)$ with respect to the norm \begin{equation}\label{eq:MSobolevDef2} \norm{u}_{H^{2,2}(\mu)}^2 = \norm{u}_{L^2(\mu)}^2 + \norm{\nabla u}_{L^2(\mu)}^2 + \norm{\nabla^2 u}_{L^2(\mu)}^2\,, \end{equation} where $\norm{\nabla^2u}_{L^2(\mu)}$ is the $L^2(\mu)$-norm of $|\nabla^2u|$. We make the following assumptions on $(M,g)$ and the probability measure $\mu$. \begin{samepage} \begin{assu}\label{ass:U}\mbox{} \begin{enumerate}[(i)] \item\label{ass:hdefinite}\textbf{Local convexity.} The scalar second fundamental form $h$ on $\dM$ is negative semi-definite, i.e.\ \begin{equation*} h(v,v)\leq 0\qquad\textup{for all }x\in\dM\textup{ and }v\in \T_x\dM\,. \end{equation*} \item\label{ass:RicHess}\textbf{Lower curvature bound.} There is a constant $\rho\in [0,\infty )$ such that $\Ric+\nabla^2 U\geq-\rho$ on $M$, i.e.\ \begin{equation*} \Ric(v,v)+\nabla^2U(v,v) \geq -\rho|v|^2\qquad\textup{for all }x\in M\textup{ and }v\in\T_xM\,. 
\end{equation*} \item\label{ass:poincare}\textbf{Poincaré inequality.} The probability measure $\mu$ satisfies a Poincaré inequality with constant $m^{-1}$, i.e.\ \begin{equation*} \int_{M}f^2\diff\mu\leq\frac{1}{m}\int_{M}|\nabla f|^2\diff\mu\, \quad\text{ for all $f\in H^{1,2}(M,\mu)$ with $\int_{M}f\diff\mu=0$}\,. \end{equation*} \end{enumerate} \end{assu} \end{samepage} The local convexity assumption \eqref{ass:hdefinite} is implied by geodesic convexity of the manifold, i.e.\ the existence of a minimising geodesic between any two interior points that lies in the interior, while the converse is not true in general \cite{Bishop1974Convexity,Bishop1964Geometry,Alexander1977LocallyConvex}. Note that the lower bound \eqref{ass:RicHess} on the generalised curvature tensor $\Ric+\nabla^2U$ is also known as the Bakry-Émery curvature-dimension condition CD($-\rho$,$\infty$) \cite{Bakry2014Analysis}. \begin{rema}\label{rem:Mconvex} Let $M$ be a closed, connected subset of $\R^d$ with smooth boundary. Then $M$ is a Riemannian manifold with boundary when equipped with the Euclidean metric and \Cref{ass:U}\eqref{ass:hdefinite} is equivalent to convexity of $M$. Since the Euclidean metric is flat, \Cref{ass:U}\eqref{ass:RicHess} simplifies to $\nabla^2U\geq-\rho$ on $M$. Hence \Cref{ass:U} is satisfied by probability measures on convex subsets of Euclidean space satisfying a Poincaré inequality and a lower curvature bound on their potential. \end{rema} By the integration-by-parts identity \eqref{eq:intbypartsL}, when equipped with the Neumann boundary conditions \begin{equation}\label{eq:defD} \D=\{u\in C_\c^\infty(M)\colon\partial_Nu=0\textup{ on }\dM\}\,,\end{equation} we have $Lu = -\nabla^*\nabla u$ for all $u\in\D$, where the adjoint is in $L^2(\mu)$. Therefore, the densely defined linear operator $(L,\D)$ on $L^2(\mu)$ is symmetric and negative semi-definite. \begin{coro}\label{cor:reillyestimate} Let \Cref{ass:U} hold. 
\begin{enumerate}[(i)] \item For all $u\in\D$, \begin{equation}\label{eq:HessianUpperBound} \norm{\nabla^2u}_{L^2(\mu)}^2 \leq \norm{Lu}_{L^2(\mu)}^2 + \rho\norm{\nabla u}_{L^2(\mu)}^2\,. \end{equation} \item The operator $(L,\D)$ is essentially self-adjoint. Its unique self-adjoint extension $(L,\dom(L))$ satisfies $\dom(L)\subseteq H^{2,2}(M,\mu)$ and \eqref{eq:HessianUpperBound} holds for all $u\in\dom(L)$. \end{enumerate} \end{coro} \begin{proof}\begin{enumerate}[(i)] \item The estimate \eqref{eq:HessianUpperBound} follows directly from the generalised Reilly formula stated in \Cref{lem:GenReilly}, using \Cref{ass:U}\eqref{ass:hdefinite} and \eqref{ass:RicHess} and the vanishing normal derivative at the boundary. \item In the case of constant potential, the essential self-adjointness of the Laplace-Beltrami operator $(\Delta,\D)$ with Neumann boundary conditions is classical and e.g.\ shown in \cite[Prop. 8.2.5]{Taylor2023PDE}. Another proof following an approach by Chernoff \cite{Chernoff1973Essential} is given in \cite{BianchiGüneysuSetti2024Neumann}. Due to the integration-by-parts identity \eqref{eq:intbypartsL} and the regularity estimate \eqref{eq:HessianUpperBound} guaranteed by the lower bound on the curvature of $U$, the general case $L=\Delta-\langle\nabla U,\nabla\rangle$ can be treated analogously. Finally, since $\norm{\nabla u}_{L^2(\mu)}^2 \leq \norm{u}_{L^2(\mu)}\norm{Lu}_{L^2(\mu)}$ for all $u\in\D$, \eqref{eq:HessianUpperBound} yields \begin{equation*} \norm{\nabla^2u}_{L^2(\mu)}^2\leq (1+\rho)(\norm{u}_{L^2(\mu)} + \norm{Lu}_{L^2(\mu)})^2 \end{equation*} for all $u\in\D$. In particular, this implies $\dom(L)\subseteq H^{2,2}(M,\mu)$ and \eqref{eq:HessianUpperBound} holds for all $u\in\dom(L)$ by an approximation argument. \end{enumerate} \end{proof} We make the following technical assumption which allows us to work with a spectral decomposition of $L$. 
It can likely be relaxed, see \cite{BrigatiStoltz2023Decay}, yet is satisfied in many cases, for instance if $M$ is compact. \begin{assu} \label{ass:discretespec} The spectrum of $(L,\dom(L))$ on $L^2(\mu)$ is discrete. \end{assu} Let $L_0^2(\mu) = \{f\in L^2(\mu)\colon\int_M f\diff\mu=0\}$ denote the subspace of $L^2(\mu)$ of mean-zero functions. By the Poincaré inequality \Cref{ass:U}\eqref{ass:poincare}, \begin{equation*} -L|_{L_0^2(\mu)}\colon\dom(L)\cap L_0^2(\mu)\to L_0^2(\mu) \end{equation*} has a spectral gap and hence admits a bounded linear inverse \begin{equation}\label{eq:defG} G\colon L_0^2(\mu)\to \dom(L)\cap L_0^2(\mu) \end{equation} with operator norm bounded by $\frac{1}{m}$. \Cref{ass:discretespec} allows us to consider an orthonormal basis $\{e_k\colon k\in\N_0\}$ of $L^2(\mu)$ consisting of eigenfunctions of $L$ with eigenvalues $-\alpha_k^2$, where $e_0=1$ and $\alpha_0=0$. In particular, $\inf\{\alpha_k^2\colon k\in\N\}\geq m$ and $e_k\in\dom(L)$ implies $\partial_Ne_k=0$ on $\dM$. The operator $G=(-L|_{L_0^2(\mu)})^{-1}$ satisfies \begin{equation*} Ge_k = \frac{1}{\alpha_k^2}e_k\qquad\textup{for all }k\in\N\,. \end{equation*} \section{The space-time divergence lemma}\label{sec:divergence} We adjoin a time component to the spatial variable and hence consider space-time domains $\Mq=[0,T]\times M$, where $T\in(0,\infty)$ is fixed and, as before, $(M,g)$ is a Riemannian manifold with boundary such that \Cref{ass:U} is satisfied. Note that $\Mq$ is a Riemannian manifold with corners with metric $\overline{g}$ given by the product of the Euclidean metric and $g$. Its corner points are $\{0,T\}\times\dM$. The tangent bundle $\T\Mq$ is canonically isomorphic to $([0,T]\times\R)\oplus \T M$, and we identify elements $V\in\T\Mq$ with tuples $(V_0,V_1)$, where $V_0\in[0,T]\times\R$ and $V_1\in\T M$.
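A simple example to keep in mind is the interval $M=[0,d]\subset\R$ equipped with the uniform probability measure $\mu$, i.e.\ with constant potential $U=\log d$: here the boundary consists of the two endpoints, $h=0$, and \Cref{ass:U} holds with $\rho=0$ and $m=\frac{\pi^2}{d^2}$, since the Neumann eigenfunctions of $L=\Delta=\partial_x^2$ are
\begin{equation*}
e_k(x)=\sqrt{2}\cos\left(\frac{k\pi x}{d}\right)\qquad\textup{with eigenvalues}\qquad -\alpha_k^2=-\frac{k^2\pi^2}{d^2}\,,\qquad k\in\N_0\,.
\end{equation*}
In this case $\Mq=[0,T]\times[0,d]$ is a flat rectangle with corner points $\{0,T\}\times\{0,d\}$, and $\frac{1}{m}=\frac{d^2}{\pi^2}$ is the Poincaré constant of order $d^2$ mentioned in the introduction.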
On $\Mq$ we consider the measure \begin{equation*} \muq = \lambda\otimes\mu\,, \end{equation*} where $\lambda$ is the Lebesgue measure restricted to $[0,T]$. We use the notation $L^2(\muq) = L^2(\Mq,\muq)$ and $L_0^2(\muq) = \{f\in L^2(\muq)\colon\int_\Mq f\diff\muq=0\}$ for the subspace of $L^2(\muq)$ of mean-zero functions. The Riemannian gradient on $\overline{M}$ is denoted by $\nablaq$ and its formal $L^2(\muq)$-adjoint by $\nablaq^*$. The latter acts on a vector field $X\in\Gamma_{C^1}(\T\Mq)$ by \begin{align*} \nablaq^*X &= -\partial_t X_0 + \nabla^*X_1\\ &=-\partial_tX_0 - \divg(X_1) + \langle\nabla U,X_1\rangle\,. \end{align*} For $j\in\{1,2\}$, the Sobolev spaces $H^{j,2}(\muq)$ are defined in analogy to \eqref{eq:MSobolevDef1} and \eqref{eq:MSobolevDef2}. Similarly, the Sobolev space $H^{j,2}_\DC(\muq)$ with Dirichlet boundary conditions in time is the closure of \begin{equation*} \{u\in C^\infty(\Mq)\cap L^2(\muq)\colon u(0,\cdot)=u(T,\cdot)=0\} \end{equation*} in $L^2(\muq)$ with respect to $\norm{\cdot}_{H^{j,2}(\muq)}$. \begin{theo}[Quantitative divergence lemma]\label{thm:divergence} Suppose that \Cref{ass:U,ass:discretespec} are satisfied.
Then for any $T\in (0,\infty )$, there exist constants $c_0(T),c_1(T)\in (0,\infty )$ such that for any $f\in L_0^2(\muq)$ there are functions \begin{equation}\label{eq:hgDirichletcond} h\in H^{1,2}_\DC(\muq)\qquad\textup{and}\qquad g\in H^{2,2}_\DC(\muq) \end{equation} satisfying \begin{equation}\label{eq:gdomLcond} g(t,\cdot)\in\dom(L)\qquad\textup{for }\lambda\textup{-almost all }t\in[0,T] \end{equation} such that \begin{equation} f\ =\ \partial_th-Lg \end{equation} and the functions $h$ and $g$ satisfy \begin{eqnarray}\label{eq:bound0order} \norm{h}_{L^2(\muq)}^2+\norm{\nabla g}_{L^2(\muq)}^2 &\leq& c_0(T)\,\norm{f}_{L^2(\muq)}^2\, ,\\\label{eq:bound1order} \norm{\nablaq h}_{L^2(\muq)}^2+\norm{\nablaq\nabla g}_{L^2(\muq)}^2 &\leq& c_1(T)\,\norm{f}_{L^2(\muq)}^2\,, \end{eqnarray} where \begin{equation}\label{eq:c0c1} \begin{aligned} c_0(T)\, &=\, 2T^2+43\frac{1}{m}\qquad\textup{and}\\ c_1(T)\, &=\, 290+\frac{991}{mT^2}+43\max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\rho\,. \end{aligned} \end{equation} \end{theo} \begin{rema}[Relation to the classical divergence lemma] In \Cref{thm:divergence}, writing \begin{equation*} X=\begin{pmatrix}X_0\\ X_1\end{pmatrix}\qquad\textup{with }X_0=-h\textup{ and }X_1=\nabla g \end{equation*} yields the divergence equation \begin{equation}\label{eq:divergence} f \ =\ \nablaq^*X \end{equation} together with the bounds \begin{eqnarray}\label{eq:boundX} \norm{X}_{L^2(\muq)}^2 &\leq& c_0(T)\,\norm{f}_{L^2(\muq)}^2\, ,\\ \label{eq:boundnablaX} \norm{\overline\nabla X}_{L^2(\muq)}^2 &\leq& c_1(T)\,\norm{f}_{L^2(\muq)}^2\,. \end{eqnarray} The vector field $X$ satisfies Dirichlet boundary conditions on the time boundary, i.e.\ it vanishes on $\{0,T\}\times M$, and it vanishes in the direction normal to the spatial boundary, i.e.\ $\langle X_1,N\rangle=0$ on $[0,T]\times\dM$. 
In comparison, in the classical divergence lemma of Lions as in \cite{Amrouche2015Lions}, one imposes Dirichlet boundary conditions on the solution to \eqref{eq:divergence} on the whole boundary. While \Cref{thm:divergence} only requires the vector field to vanish in the direction normal to the spatial boundary, we obtain the additional gradient structure of $X_1$. \end{rema} \begin{rema}[Related works] In the unbounded Euclidean case $M=\R^d$, a quantitative divergence lemma has been proved by Cao, Lu and Wang \cite[Lemma 2.6]{Cao2019Langevin}. We obtain the order of the constant $c_1(T)$ as conjectured in \cite[Remark 2.7]{Cao2019Langevin} by more closely considering the case of space-time harmonic right-hand sides and splitting these into high and low modes, as well as into symmetric and antisymmetric functions. Again on $\R^d$, the recent preprint \cite{Lehec2024Convergence} takes a variational approach to the divergence lemma, yet obtains an additional term of the order $mT^2$ in the constant $c_1(T)$ in \eqref{eq:c0c1} due to the estimate of $\norm{\nabla h}_{L^2(\muq)}^2$. We avoid this, at the cost of the estimate of $\norm{\partial_t\nabla g}_{L^2(\muq)}^2$ being of the order $1+\frac{1}{mT^2}$ instead of $\frac{1}{mT^2}$ as in \cite{Lehec2024Convergence}. Brigati and Stoltz \cite{Brigati2022FokkerPlanck,BrigatiStoltz2023Decay} prove a related averaging lemma, eliminating the need for a discrete spectrum of $L$, at the cost of worse scaling in $T$. Finally, \cite{EGHLM2024Lifting} uses a variation of our proof by formulating the divergence lemma in terms of the associated Dirichlet form, allowing for the treatment of lifts of diffusions with different boundary behaviour. \end{rema} Let us first give a sketch of the proof of \Cref{thm:divergence}.
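Throughout the sketch, it is helpful to keep in mind the balanced choice $T=\pi/\sqrt{m}$, for which $\max(\frac{1}{m},\frac{T^2}{\pi^2})=\frac{1}{m}$ and the constants \eqref{eq:c0c1} become
\begin{equation*}
c_0(T)=\frac{2\pi^2+43}{m}\qquad\textup{and}\qquad c_1(T)=290+\frac{991}{\pi^2}+\frac{43\rho}{m}\,,
\end{equation*}
so that $c_0(T)$ is of the order of the Poincaré constant $\frac{1}{m}$, while $c_1(T)$ is of the order $1+\frac{\rho}{m}$.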
A first idea would be to solve the equation \begin{equation}\label{eq:Lbaruf} (\partial_t^2+L)u = f \end{equation} with Neumann boundary conditions and to set $X=-\overline{\nabla}u$, or, equivalently, $h=\partial_tu$ and $g = -u$. However, by definition, Neumann boundary conditions only lead to $h$ and $\partial_tg$ vanishing on the time boundary and not $g$ as required by \eqref{eq:hgDirichletcond}. More precisely, let \begin{align*} \overline\D&=\left\{u\in C^\infty(\Mq)\cap L^2(\muq)\colon\partial_Nu=0\textup{ on }[0,T]\times\dM,\,\partial_tu=0\textup{ on }\{0,T\}\times M\right\}\,,\\ \overline{L}u &= (\partial_t^2+L)u = -{\nablaq^*\nablaq} u\qquad\textup{for }u\in\overline{\D}\,. \end{align*} Then, as in \Cref{cor:reillyestimate}, the operator $(\overline{L},\overline{\D})$ on $L^2(\muq)$ is essentially self-adjoint and its unique self-adjoint extension $(\overline{L},\dom(\overline{L}))$ satisfies $\dom(\overline{L})\subseteq H^{2,2}(\muq)$. In particular, $\dom(\overline{L})$ contains all functions $u\in L^2(\muq)$ such that $u(t,\cdot)\in\dom(L)$ for $\lambda$-a.e.\ $t\in[0,T]$, $u(\cdot,x)\in H^{2,2}([0,T])$ for $\mu$-a.e.\ $x\in M$ with the Neumann boundary conditions $\partial_tu(0,\cdot) = \partial_tu(T,\cdot) = 0$, and $(\partial_t^2+L)u\in L^2(\muq)$. The operator $(\overline{L},\dom(\overline{L}))$ has discrete spectrum on $L^2(\muq)$ and satisfies a Poincaré inequality with constant $\overline{m}^{-1}$, where $\overline{m}= \min(m,\frac{\pi^2}{T^2})$. Therefore, the restriction \begin{equation*} -\overline{L}|_{L_0^2(\muq)}\colon\dom(\overline{L})\cap L_0^2(\muq)\to L_0^2(\muq) \end{equation*} admits a bounded linear inverse \begin{equation*} \overline{G}\colon L_0^2(\muq) \to \dom(\overline{L})\cap L_0^2(\muq) \end{equation*} whose operator norm is bounded from above by $\overline{m}^{-1} = \max(\frac{1}{m},\frac{T^2}{\pi^2})$.
Hence $u=\overline{G}f$ solves \eqref{eq:Lbaruf} with right-hand side $-f$ in place of $f$, and bounds as in \eqref{eq:bound0order} and \eqref{eq:bound1order} can be derived for $h=-\partial_tu$ and $g=u$ in a straightforward way. However, in general $u$ does not satisfy Dirichlet boundary conditions in time as required in \eqref{eq:hgDirichletcond}. Indeed, an integration by parts shows that Dirichlet boundary conditions in time hold for $u$ if and only if $f$ is orthogonal to the space $\H$ of space-time harmonic functions, that is, $\H$ consists of all functions $u\in L^2(\muq)$ such that $u(t,\cdot)\in\dom(L)$ for $\lambda$-a.e.\ $t\in[0,T]$ and $u(\cdot,x)\in H^{2,2}([0,T])$ for $\mu$-a.e.\ $x\in M$ and $(\partial_t^2+L)u=0$. Therefore, in the proof of \Cref{thm:divergence}, we treat right-hand sides in $\H_0 = \H\cap L_0^2(\muq)$ and its orthogonal complement $\H_0^\bot$ in $L_0^2(\muq)$ separately. For functions $f\in\H_0^\bot$, we can set $u = \overline{G}f\in H^{2,2}(\muq)$ and $X = \nablaq u$, or equivalently $h=-\partial_tu$ and $g=u$, to obtain $\nablaq^* X = \partial_th-Lg = f$. The orthogonality of $f$ to the space of harmonic functions $\H_0$ then yields the boundary conditions \eqref{eq:hgDirichletcond}, and the bounds \eqref{eq:bound0order} and \eqref{eq:bound1order} follow by the Poincaré inequality and the Reilly formula. Indeed, by \Cref{rem:GenReillyCorners}, we can apply \Cref{lem:GenReilly} on the Riemannian manifold $\Mq$ with corners, so that we obtain the estimate
\begin{equation}\label{eq:HessianBarUpperBound}
\norm{\nablaq^2u}_{L^2(\muq)}^2 \leq \norm{\overline{L}u}_{L^2(\muq)}^2 + \rho\norm{\nablaq u}_{L^2(\muq)}^2
\end{equation}
for any function $u\in \dom(\overline{L})$ in analogy to \Cref{cor:reillyestimate}. In order to treat functions $f\in\H_0$, we decompose $\H_0$ using an orthogonal basis of $L^2(\mu)$ consisting of eigenfunctions of $L$. We then consider the antisymmetric and symmetric parts as well as the high and low modes separately.
In the case of antisymmetric functions in the low modes, $h$ and $g$ can simply be chosen as the time-integral of $f$ and zero. The tricky case is that of symmetric functions in the low modes: this is where the estimates degenerate for $T\to 0$. Finally, the high modes are not expected to affect the results. This can indeed be verified by a technical computation.
\begin{proof}[Proof of \Cref{thm:divergence}]
Let $\H_0 = \H\cap L_0^2(\muq)$ as defined above and decompose
\begin{equation*}
L_0^2(\muq) = \H_0\oplus\H_0^\bot\,.
\end{equation*}
At first consider $f\in \H_0^\bot$. Then setting $h=-\partial_tu$ and $g=u$ with $u = \overline{G}f$ yields $u\in\dom(\overline{L})$ and
\begin{equation*}
f = \partial_th-Lg\,.
\end{equation*}
Furthermore, for any $v\in\H_0$, an integration by parts yields
\begin{align*}
0 = -(f,v)_{L^2(\muq)} &= (\overline{L}u,v)_{L^2(\muq)}\\
&= (u,(\partial_t^2+L)v)_{L^2(\muq)} + \int_{(0,T)}\int_{\dM}v\partial_Nu-u\partial_{N}v\diff\mudM\diff\lambda\\
&\qquad + \int_M\big(u(T,\cdot)\partial_tv(T,\cdot) - u(0,\cdot)\partial_tv(0,\cdot)\big)\diff\mu\\
&=\int_M\big(u(T,\cdot)\partial_tv(T,\cdot) - u(0,\cdot)\partial_tv(0,\cdot)\big)\diff\mu\,.
\end{align*}
Hence $u = 0$ on $\{0,T\}\times M$ since $v\in\H_0$ is arbitrary. Since by definition $u$ also satisfies Neumann boundary conditions, $h$ and $g$ satisfy the boundary conditions \eqref{eq:hgDirichletcond} and \eqref{eq:gdomLcond}.
We obtain the estimates \begin{equation*} \norm{h}_{L^2(\muq)}^2 + \norm{\nabla g}_{L^2(\muq)}^2 = \norm{\nablaq u}_{L^2(\muq)}^2 = (u,f)_{L^2(\muq)}\leq \max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\norm{f}_{L^2(\muq)}^2 \end{equation*} and \begin{align*} \MoveEqLeft\norm{\partial_th}_{L^2(\muq)}^2 + \norm{\partial_t\nabla g}_{L^2(\muq)}^2 + \norm{\nabla h}_{L^2(\muq)}^2 + \norm{\nabla^2 g}_{L^2(\muq)}^2\\ &= \norm{\overline{\nabla}^2u}_{L^2(\muq)}^2\leq \norm{\overline{L}u}_{L^2(\muq)}^2 + \rho\norm{\overline{\nabla}u}_{L^2(\muq)}^2\\ &\leq \left(1+\max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\rho\right)\norm{f}_{L^2(\muq)}^2 \end{align*} by \eqref{eq:HessianBarUpperBound} and $\norm{u}_{L^2(\muq)}\leq\max(\frac{1}{m},\frac{T^2}{\pi^2})\norm{f}_{L^2(\muq)}$. Therefore, for functions $f\in \H_0^\bot$, the bounds \eqref{eq:bound0order} and \eqref{eq:bound1order} are satisfied with the constants \begin{equation*} c_0^\bot = \max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\qquad\textup{and}\qquad c_1^\bot = 1+\max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\rho\,. \end{equation*} Now consider $f\in\H_0$. In terms of the orthonormal basis $(e_k)_{k\in\N_0}$ of eigenfunctions of $L$, any function $f\in\H_0$ can be represented as \begin{equation*} f(t,x) = \sum_{k\in\N_0}f_k(t)e_k(x)\,, \end{equation*} where $f_k = (f,e_k)_{L^2(\mu)}$. This yields \begin{equation*} 0 = (\partial_t^2f+Lf)(t,x) = f_0''(t) + \sum_{k\in\N}(f_k''(t)-\alpha_k^2f_k(t))e_k(x)\,, \end{equation*} so that the coefficients $f_k$ satisfy the ordinary differential equations $f_k'' = \alpha_k^2 f_k$. 
Hence for any $k\in\N$, the function $f_k$ can be expressed as a linear combination of $e^{-\alpha_kt}$ and $e^{-\alpha_k(T-t)}$, and the functions \begin{align*} H_0^a(t,x)&=(t-(T-t))e_0(x)\,,\\ H_k^a(t,x)&=(e^{-\alpha_kt}-e^{-\alpha_k(T-t)})e_k(x)\quad\textup{for }k\in\N\,,\\ H_k^s(t,x)&=(e^{-\alpha_kt}+e^{-\alpha_k(T-t)})e_k(x)\quad\textup{for }k\in\N \end{align*} define an orthogonal basis $\{H_k^a\colon k\in\N_0\}\cup\{H_k^s\colon k\in\N\}$ of $\H_0$. The functions $H_k^a$ and $H_k^s$ are in $H^{2,2}(\muq)$ since $e_k\in\dom(L)\subseteq H^{2,2}(\mu)$ for all $k\in\N_0$. We decompose $\H_0$ into the symmetric and antisymmetric functions as well as into the high and low modes, i.e.\ \begin{equation*} \H_0=\H_{l,a}\oplus\H_{l,s}\oplus\H_{h,a}\oplus\H_{h,s}\,, \end{equation*} where \begin{align*} \H_{l,a}&=\spn\{H_k^a\colon k\in\N_0\textup{ with }\alpha_k\leq\frac{\beta}{T}\}\,,\\ \H_{l,s}&=\spn\{H_k^s\colon k\in\N\textup{ with }\alpha_k\leq\frac{\beta}{T}\}\,,\\ \H_{h,a}&=\overline{\spn}\{H_k^a\colon k\in\N\textup{ with }\alpha_k>\frac{\beta}{T}\}\,,\\ \H_{h,s}&=\overline{\spn}\{H_k^s\colon k\in\N\textup{ with }\alpha_k>\frac{\beta}{T}\}\,, \end{align*} and consider these four subspaces separately. Here $\beta\geq 2$ is the cutoff between high and low modes. For $f\in\H_{l,a}\mathbin{\cup}\H_{l,s}$ we have $\left(f,-Lf\right)_{L^2(\muq)}\leq\frac{\beta^2}{T^2}\norm{f}^2_{L^2(\muq)}$. We will later choose $\beta=2$. \emph{Case 1:} Let $f\in \H_{l,a}$. Set $h(t,x)=\int_0^tf(s,x)\diff s$ and $g(t,x)=0$. Then $h\in H_\DC^{1,2}(\muq)$ due to the antisymmetry of $f$, so that \eqref{eq:hgDirichletcond} and \eqref{eq:gdomLcond} are satisfied. 
Since $\partial_th=f$, the Poincaré inequality on $[0,T]$ yields
\begin{align*}
\norm{h}_{L^2(\muq)}^2 &\leq \frac{T^2}{\pi^2}\norm{\partial_th}_{L^2(\muq)}^2 = \frac{T^2}{\pi^2}\norm{f}_{L^2(\muq)}^2\,,\\
\norm{\nabla h}_{L^2(\muq)}^2 &\leq \frac{T^2}{\pi^2}\norm{\nabla f}_{L^2(\muq)}^2 = \frac{T^2}{\pi^2}\int_0^T\big(f(t,\cdot),-Lf(t,\cdot)\big)_{L^2(\mu)}\diff t\\
&\leq\frac{T^2}{\pi^2}\int_0^T\frac{\beta^2}{T^2}\norm{f(t,\cdot)}_{L^2(\mu)}^2\diff t = \frac{\beta^2}{\pi^2}\norm{f}_{L^2(\muq)}^2\,.
\end{align*}
Hence for $f\in\H_{l,a}$ we obtain \eqref{eq:bound0order} and \eqref{eq:bound1order} with the constants
\begin{align*}
c_0^{l,a} = \frac{T^2}{\pi^2}\qquad\textup{and}\qquad c_1^{l,a} = 1 + \frac{\beta^2}{\pi^2}\,.
\end{align*}
\emph{Case 2:} Let $f\in\H_{l,s}$. Since $f$ is symmetric, it does not necessarily integrate to $0$ in time and we cannot argue as in the previous case. Instead, we consider the decomposition $f = f_0+f_1$ with
\begin{align*}
f_0(t,x)&=f(0,x)\cdot\cos\left(\frac{2\pi t}{T}\right)\,,\\
f_1(t,x)&=f(t,x)-f_0(t,x)
\end{align*}
into a part $f_0$ that integrates to $0$ in time and a part $f_1$ with Dirichlet boundary values in time. Note that $f(t,\cdot),f_0(t,\cdot),f_1(t,\cdot)$ are contained in $\spn\{e_k\colon k\in\N\textup{ with }\alpha_k\leq\frac{\beta}{T}\}$ for all $t\in[0,T]$. We now set
\begin{equation*}
h(t,x)=\int_0^tf_0(s,x)\diff s\quad\textup{and}\quad g(t,x)=Gf_1(t,\cdot)(x)\,,
\end{equation*}
so that $f_0=\partial_th$ and $f_1=-Lg$. The boundary conditions \eqref{eq:hgDirichletcond} are satisfied due to the boundary conditions of $f_0$ and $f_1$, and \eqref{eq:gdomLcond} is satisfied by the definition \eqref{eq:defG} of $G$. In order to obtain the required estimates, we bound $\norm{f_0}_{L^2(\muq)}$ by $\norm{f}_{L^2(\muq)}$. First note that
\begin{equation*}
\norm{f_0}_{L^2(\muq)}^2 = \norm{f(0,\cdot)}_{L^2(\mu)}^2\int_0^T\cos\left(\frac{2\pi t}{T}\right)^2\diff t = \frac{T}{2}\norm{f(0,\cdot)}_{L^2(\mu)}^2\,.
\end{equation*} Expanding $f(t,x) = \sum_{\alpha_k\leq\frac{\beta}{T}}b_kH_k^s(t,x)$ yields \begin{align*} \norm{f(0,\cdot)}_{L^2(\mu)}^2 = \sum_{\alpha_k\leq\frac{\beta}{T}}b_k^2(1+e^{-\alpha_kT})^2\leq 4\sum_{\alpha_k\leq\frac{\beta}{T}}b_k^2 \end{align*} and \begin{align*} \norm{f}_{L^2(\muq)}^2 &= \sum_{\alpha_k\leq\frac{\beta}{T}}b_k^2\int_0^T\left(e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2\diff t\\ &\geq \sum_{\alpha_k\leq\frac{\beta}{T}}T\min_{t\in[0,T]}\left(e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2b_k^2\\ &=\sum_{\alpha_k\leq\frac{\beta}{T}}4Te^{-\alpha_kT}b_k^2 \geq 4Te^{-\beta}\sum_{\alpha_k\leq\frac{\beta}{T}}b_k^2\,, \end{align*} so that \begin{equation}\label{eq:Hls:boundf0} \norm{f_0}_{L^2(\muq)}^2\leq\frac{T}{2}\norm{f(0,\cdot)}_{L^2(\mu)}^2\leq\frac{1}{2}e^{\beta}\norm{f}_{L^2(\muq)}^2 \end{equation} and \begin{equation*} \norm{f_1}_{L^2(\muq)}^2 \leq \left(\norm{f}_{L^2(\muq)}+\norm{f_0}_{L^2(\muq)}\right)^2\leq \big(1+\frac{1}{\sqrt{2}}e^{\beta/2}\big)^2\norm{f}_{L^2(\muq)}^2\leq (2+e^\beta)\norm{f}_{L^2(\muq)}^2\,. \end{equation*} Similarly to the previous case, this gives the estimates \begin{align*} \norm{h}_{L^2(\muq)}^2 &\leq \frac{T^2}{\pi^2}\norm{f_0}_{L^2(\muq)}^2\leq\frac{T^2}{2\pi^2}e^\beta\norm{f}_{L^2(\muq)}^2\,,\\ \norm{\partial_th}_{L^2(\muq)}^2 &= \norm{f_0}_{L^2(\muq)}^2 \leq \frac{1}{2}e^{\beta}\norm{f}_{L^2(\muq)}^2\,,\\ \norm{\nabla h}_{L^2(\muq)}^2 &\leq\frac{\beta^2}{\pi^2}\norm{f_0}_{L^2(\muq)}^2 \leq \frac{\beta^2}{2\pi^2}e^\beta\norm{f}_{L^2(\muq)}^2\,. \end{align*} For the bounds on $g$ we use the fact that \begin{equation*} \norm{\nabla Gu}_{L^2(\mu)}^2 = (u,Gu)_{L^2(\mu)}\leq\frac{1}{m}\norm{u}_{L^2(\mu)}^2 \end{equation*} for any $u\in L_0^2(\mu)$. Then \begin{align*} \norm{\nabla g}_{L^2(\muq)}^2 = \int_0^T\norm{\nabla Gf_1(t,\cdot)}_{L^2(\mu)}^2\diff t\leq\frac{1}{m}\norm{f_1}_{L^2(\muq)}^2\leq \frac{1}{m}\big(1+\frac{1}{\sqrt{2}}e^{\beta/2}\big)^2\norm{f}_{L^2(\muq)}^2\,. 
\end{align*}
Similarly, commuting the derivatives,
\begin{align}\label{eq:Hls:dtf1}
\norm{\partial_t\nabla g}_{L^2(\muq)}^2 = \norm{\nabla\partial_tg}_{L^2(\muq)}^2\leq\frac{1}{m}\norm{\partial_tf_1}_{L^2(\muq)}^2\leq\frac{1}{m}\left(\norm{\partial_tf}_{L^2(\muq)} + \norm{\partial_tf_0}_{L^2(\muq)}\right)^2\,.
\end{align}
Again using the expansion $f(t,x) = \sum_{\alpha_k\leq\frac{\beta}{T}}b_kH_k^s(t,x)$ we obtain
\begin{align*}
\norm{\partial_tf}_{L^2(\muq)}^2 &= \sum_{\alpha_k\leq\frac{\beta}{T}}\alpha_k^2b_k^2\int_0^T\left(-e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2\diff t\\
&\leq \frac{\beta^2}{T^2}\sum_{\alpha_k\leq\frac{\beta}{T}}b_k^2\int_0^T\left(e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2\diff t = \frac{\beta^2}{T^2}\norm{f}_{L^2(\muq)}^2
\end{align*}
as well as
\begin{equation*}
\norm{\partial_tf_0}_{L^2(\muq)}^2 = \frac{2\pi^2}{T}\norm{f(0,\cdot)}_{L^2(\mu)}^2 \leq \frac{2\pi^2}{T^2}e^\beta\norm{f}_{L^2(\muq)}^2
\end{equation*}
by \eqref{eq:Hls:boundf0}. Hence \eqref{eq:Hls:dtf1} yields
\begin{equation*}
\norm{\partial_t\nabla g}_{L^2(\muq)}^2 \leq \frac{(\beta+\sqrt{2}\pi e^{\beta/2})^2}{mT^2}\norm{f}_{L^2(\muq)}^2.
\end{equation*}
Finally, \Cref{cor:reillyestimate} yields
\begin{align*}
\norm{\nabla^2g}_{L^2(\muq)}^2 &\leq \norm{Lg}_{L^2(\muq)}^2+\rho\norm{\nabla g}_{L^2(\muq)}^2\leq \left(1+\frac{\rho}{m}\right)\norm{f_1}_{L^2(\muq)}^2\\
&\leq\left(1+\frac{\rho}{m}\right)\left(1+\frac{1}{\sqrt{2}}e^{\beta/2}\right)^2\norm{f}_{L^2(\muq)}^2\,.
\end{align*}
For $f\in\H_{l,s}$ we therefore obtain \eqref{eq:bound0order} and \eqref{eq:bound1order} with the constants
\begin{align*}
c_0^{l,s} &= \frac{T^2}{2\pi^2}e^\beta + \frac{1}{m}\left(1+\frac{1}{\sqrt{2}}e^{\beta/2}\right)^2\qquad\textup{and}\\
c_1^{l,s} &= \left(\frac{1}{2}+\frac{\beta^2}{2\pi^2}\right)e^\beta + \frac{(\beta+\sqrt{2}\pi e^{\beta/2})^2}{mT^2} + \left(1+\frac{\rho}{m}\right)\left(1+\frac{1}{\sqrt{2}}e^{\beta/2}\right)^2\,.
\end{align*}
\emph{Case 3:} Let $f\in\H_{h,a}$.
We will use the expansion $f(t,x) = \sum_{\alpha_k>\frac{\beta}{T}}b_kH_k^a(t,x)$. As we will show below, it is sufficient to derive a representation
\begin{equation*}
H_k^a = \partial_th_k-Lg_k
\end{equation*}
for each of the basis functions $H_k^a$ with $k\in\N$ fixed. To this end, we write $u_k(t) = e^{-\alpha_kt}-e^{-\alpha_k(T-t)}$, so that $H_k^a(t,x) = u_k(t)e_k(x)$. We employ the ansatz
\begin{equation}\label{eq:ansatzuvw_a}
u_k = \dot v_k+w_k\quad\textup{with }v_k(0)=v_k(T)=w_k(0)=w_k(T)=0\,.
\end{equation}
Then setting
\begin{equation*}
h_k(t,x) = v_k(t)e_k(x)\qquad\textup{and}\qquad g_k(t,x) = \frac{1}{\alpha_k^2}w_k(t)e_k(x)
\end{equation*}
yields $H_k^a = \partial_th_k-Lg_k$. We now construct such functions $v_k$ and $w_k$ for a fixed $k\in\N$. Let
\begin{equation*}
\phi_k(t)=(\alpha_kt-1)^21_{[0,\alpha_k^{-1}]}(t)\,.
\end{equation*}
This function is in $C^1([0,T])$ and piecewise $C^2([0,T])$. Moreover, since $\alpha_k>\frac{\beta}{T}\geq\frac{2}{T}$, it satisfies $\phi_k(\frac{T}{2})=0$. For $t\in[0,\frac{T}{2}]$ we set
\begin{align*}
v_k(t)&=\phi_k(t)\int_0^tu_k(s)\diff s\qquad\textup{and}\\
w_k(t)&=u_k(t)-\dot v_k(t)=(1-\phi_k(t))u_k(t)-\dot\phi_k(t)\int_0^tu_k(s)\diff s\,.
\end{align*}
Analogously, for $t\in[\frac{T}{2},T]$ we set
\begin{align*}
v_k(t)&=-\phi_k(T-t)\int_t^Tu_k(s)\diff s\qquad\textup{and}\\
w_k(t)&=u_k(t)-\dot v_k(t)=(1-\phi_k(T-t))u_k(t)-\dot\phi_k(T-t)\int_t^Tu_k(s)\diff s\,.
\end{align*}
Then $v_k,w_k$ are in $C^0([0,T])$ and piecewise $C^1([0,T])$ with $v_k(0)=v_k(T)=0$, $v_k(\frac{T}{2})=\dot v_k(\frac{T}{2})=0$, and
\begin{align*}
\begin{split}
\dot v_k(0)&=\phi_k(0)u_k(0)=u_k(0)\,,\\
w_k(0)&=u_k(0)-\dot v_k(0)=0\,,
\end{split}
\begin{split}
\dot v_k(T)&=\phi_k(0)u_k(T)=u_k(T)\,,\\
w_k(T)&=u_k(T)-\dot v_k(T)=0\,,
\end{split}
\end{align*}
so that \eqref{eq:ansatzuvw_a} is satisfied.
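The stated matching at $t=\frac{T}{2}$ can be checked directly: since $\frac{1}{\alpha_k}<\frac{T}{2}$, both $\phi_k$ and $\dot\phi_k$ vanish at $\frac{T}{2}$, so that, approaching from the left,
\begin{equation*}
\dot v_k\left(\tfrac{T}{2}\right)=\dot\phi_k\left(\tfrac{T}{2}\right)\int_0^{\frac{T}{2}}u_k(s)\diff s+\phi_k\left(\tfrac{T}{2}\right)u_k\left(\tfrac{T}{2}\right)=0\,,
\end{equation*}
and analogously approaching from the right.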
To obtain the required bounds, notice that $\norm{H_k^a}_{L^2(\muq)}^2 = \int_0^Tu_k(t)^2\diff t$ and \begin{equation*} \begin{split} \norm{h_k}_{L^2(\muq)}^2 &= \int_0^Tv_k(t)^2\diff t\,,\\ \norm{\partial_th_k}_{L^2(\muq)}^2 &= \int_0^T\dot v_k(t)^2\diff t\,,\\ \norm{\nabla h_k}_{L^2(\muq)}^2 &= \alpha_k^2\int_0^Tv_k(t)^2\diff t\,, \end{split} \qquad \begin{split} \norm{\nabla g_k}_{L^2(\muq)}^2 &= \frac{1}{\alpha_k^2}\int_0^Tw_k(t)^2\diff t\,,\\ \norm{\partial_t\nabla g_k}_{L^2(\muq)}^2 &= \frac{1}{\alpha_k^2}\int_0^T\dot w_k(t)^2\diff t\,,\\ \norm{Lg_k}_{L^2(\muq)}^2 &= \int_0^Tw_k(t)^2\diff t\,, \end{split} \end{equation*} where we used that $\norm{\nabla g_k}_{L^2(\muq)}^2=(g_k,-Lg_k)_{L^2(\muq)}$ and $\norm{\partial_t\nabla g_k}_{L^2(\muq)}^2 = (\partial_tg_k,-L\partial_tg_k)_{L^2(\muq)}$. One estimates \begin{align*} \int_0^{\frac{T}{2}}v_k(t)^2\diff t &= \int_0^\frac{1}{\alpha_k}\phi_k(t)^2\left(\int_0^tu_k(s)\diff s\right)^2\diff t \leq \int_0^\frac{1}{\alpha_k}(\alpha_kt-1)^4t\diff t\int_0^\frac{T}{2}u_k(t)^2\diff t\\ &=\frac{1}{30\alpha_k^2}\int_0^\frac{T}{2}u_k(t)^2\diff t\,, \end{align*} and analogously on $[\frac{T}{2},T]$, so that \begin{equation*} \int_0^Tv_k(t)^2\diff t\leq\frac{1}{30\alpha_k^2}\int_0^Tu_k(t)^2\diff t\,. 
\end{equation*}
For $\dot v_k = \dot\phi_k\int_0^\cdot u_k(s)\diff s+\phi_ku_k$ we similarly estimate
\begin{align*}
\int_0^\frac{T}{2}\dot v_k(t)^2\diff t &\leq \left(\left(\int_0^\frac{1}{\alpha_k}\dot\phi_k(t)^2\left(\int_0^tu_k(s)\diff s\right)^2\diff t\right)^\frac{1}{2} + \left(\int_0^\frac{T}{2}u_k(t)^2\diff t\right)^\frac{1}{2}\right)^2\\
&\leq \left(1+\left(\int_0^\frac{1}{\alpha_k}(2\alpha_k(\alpha_kt-1))^2t\diff t\right)^\frac{1}{2}\right)^2\int_0^\frac{T}{2}u_k(t)^2\diff t\\
&=\left(1+\frac{1}{\sqrt{3}}\right)^2\int_0^\frac{T}{2}u_k(t)^2\diff t < \frac{5}{2}\int_0^\frac{T}{2}u_k(t)^2\diff t\,,
\end{align*}
so that
\begin{equation*}
\int_0^T\dot v_k(t)^2\diff t \leq \left(1+\frac{1}{\sqrt{3}}\right)^2\int_0^Tu_k(t)^2\diff t\,.
\end{equation*}
Similarly, since $w_k = -\dot\phi_k\int_0^\cdot u_k(s)\diff s+(1-\phi_k)u_k$ on $[0,\frac{T}{2}]$ and correspondingly on $[\frac{T}{2},T]$, one obtains
\begin{equation*}
\int_0^Tw_k(t)^2\diff t\leq \left(1+\frac{1}{\sqrt{3}}\right)^2\int_0^Tu_k(t)^2\diff t\,.
\end{equation*}
For $\dot w_k = (1-\phi_k)\dot u_k -2\dot\phi_ku_k-\ddot\phi_k\int_0^\cdot u_k(s)\diff s$ we first estimate $\int_0^T\dot u_k(t)^2\diff t$ by $\int_0^Tu_k(t)^2\diff t$. A direct computation yields
\begin{align*}
\int_0^\frac{T}{2}\dot u_k(t)^2\diff t &= \alpha_k^2\int_0^\frac{T}{2}\left(e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2\diff t\\
&=\alpha_ke^{-\alpha_kT}(\sinh(\alpha_kT)+\alpha_kT)
\end{align*}
and similarly
\begin{equation*}
\int_0^\frac{T}{2}u_k(t)^2\diff t = \alpha_k^{-1}e^{-\alpha_kT}(\sinh(\alpha_kT)-\alpha_kT)\,,
\end{equation*}
so that
\begin{equation*}
\int_0^\frac{T}{2}\dot u_k(t)^2\diff t \leq \alpha_k^2\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta} \int_0^\frac{T}{2}u_k(t)^2\diff t\,,
\end{equation*}
since $x\mapsto\frac{\sinh(x)+x}{\sinh(x)-x}$ is decreasing on $(0,\infty)$ and $\alpha_kT>\beta$.
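For the reader's convenience, the two elementary integrals used above follow from the substitution $s=\alpha_kt$:
\begin{equation*}
\int_0^{\frac{1}{\alpha_k}}(\alpha_kt-1)^4t\diff t = \frac{1}{\alpha_k^2}\int_0^1(1-s)^4s\diff s = \frac{1}{30\alpha_k^2}\qquad\textup{and}\qquad
\int_0^{\frac{1}{\alpha_k}}(2\alpha_k(\alpha_kt-1))^2t\diff t = 4\int_0^1(1-s)^2s\diff s = \frac{1}{3}\,.
\end{equation*}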
Therefore, by similar arguments as before,
\begin{align*}
\int_0^\frac{T}{2}\dot w_k(t)^2\diff t &\leq \left(4\alpha_k+\alpha_k\left(\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta}\right)^\frac{1}{2}+\left(\int_0^\frac{1}{\alpha_k}4\alpha_k^4t\diff t\right)^\frac{1}{2}\right)^2\int_0^\frac{T}{2}u_k(t)^2\diff t\\
&=\alpha_k^2\left(4+\sqrt{2}+\left(\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta}\right)^\frac{1}{2}\right)^2\int_0^\frac{T}{2}u_k(t)^2\diff t\,,
\end{align*}
and analogously on $[\frac{T}{2},T]$, yielding
\begin{equation*}
\int_0^T\dot w_k(t)^2\diff t \leq \alpha_k^2\left(4+\sqrt{2}+\left(\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta}\right)^\frac{1}{2}\right)^2\int_0^Tu_k(t)^2\diff t\,.
\end{equation*}
Noting that $\alpha_k^2\geq\max(m,\frac{\beta^2}{T^2})$, we hence obtain the bounds
\begin{align*}
\norm{h_k}_{L^2(\muq)}^2 &\leq\frac{1}{30}\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\norm{H_k^a}_{L^2(\muq)}^2\,,\\
\norm{\partial_th_k}_{L^2(\muq)}^2 &\leq \left(1+\frac{1}{\sqrt{3}}\right)^2\norm{H_k^a}_{L^2(\muq)}^2\,,\\
\norm{\nabla h_k}_{L^2(\muq)}^2 &\leq\frac{1}{30}\norm{H_k^a}_{L^2(\muq)}^2\,,\\
\norm{\nabla g_k}_{L^2(\muq)}^2 &\leq \min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\left(1+\frac{1}{\sqrt{3}}\right)^2\norm{H_k^a}_{L^2(\muq)}^2\,,\\
\norm{\partial_t\nabla g_k}_{L^2(\muq)}^2 &\leq \left(4+\sqrt{2}+\left(\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta}\right)^\frac{1}{2}\right)^2\norm{H_k^a}_{L^2(\muq)}^2\,,\\
\norm{Lg_k}_{L^2(\muq)}^2 &\leq\left(1+\frac{1}{\sqrt{3}}\right)^2\norm{H_k^a}_{L^2(\muq)}^2\,,
\end{align*}
and thus also
\begin{equation*}
\norm{\nabla^2g_k}_{L^2(\muq)}^2 \leq \left(1+\frac{1}{\sqrt{3}}\right)^2\left(1+\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\rho\right)\norm{H_k^a}_{L^2(\muq)}^2\,.
\end{equation*}
For a general function $f=\sum_{\alpha_k>\frac{\beta}{T}}b_kH_k^a$ in $\H_{h,a}$, we set
\begin{equation*}
h = \sum_{\alpha_k>\beta/T}b_kh_k\qquad\textup{and}\qquad g = \sum_{\alpha_k>\beta/T}b_kg_k
\end{equation*}
to obtain $f = \partial_th-Lg$. By orthogonality of the functions $e_k$, the functions $h_k$ and $\partial_th_k$ are also orthogonal systems. Hence
\begin{equation*}
\norm{h}_{L^2(\muq)}^2 = \sum_{\alpha_k>\beta/T}b_k^2\norm{h_k}_{L^2(\muq)}^2\,,\qquad \norm{\partial_th}_{L^2(\muq)}^2 = \sum_{\alpha_k>\beta/T}b_k^2\norm{\partial_th_k}_{L^2(\muq)}^2
\end{equation*}
and
\begin{align*}
\norm{\nabla h}_{L^2(\muq)}^2 &= -(h,Lh)_{L^2(\muq)} = \sum_{\alpha_k>\beta/T}\alpha_k^2b_k^2\norm{h_k}_{L^2(\muq)}^2\\
&= \sum_{\alpha_k>\beta/T}b_k^2(h_k,-Lh_k)_{L^2(\muq)} = \sum_{\alpha_k>\beta/T}b_k^2\norm{\nabla h_k}_{L^2(\muq)}^2\,.
\end{align*}
The same is true for the functions $\nabla g_k$, $\partial_t\nabla g_k$ and $Lg_k$, so that analogous identities hold. Since $\norm{f}_{L^2(\muq)}^2 = \sum_{\alpha_k>\frac{\beta}{T}}b_k^2\norm{H_k^a}_{L^2(\muq)}^2$, this yields \eqref{eq:bound0order} and \eqref{eq:bound1order} with constants
\begin{align*}
c_0^{h,a} &= \left(\frac{1}{30}+\left(1+\frac{1}{\sqrt{3}}\right)^2\right)\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\qquad\textup{and}\\
c_1^{h,a} &= \left(1+\frac{1}{\sqrt{3}}\right)^2\left(2+\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\rho\right)+ \left(4+\sqrt{2}+\left(\frac{\sinh(\beta)+\beta}{\sinh(\beta)-\beta}\right)^\frac{1}{2}\right)^2 + \frac{1}{30}\,.
\end{align*}
\emph{Case 4:} Let $f\in\H_{h,s}$. Similarly to the previous case, we now set $u_k(t) = e^{-\alpha_kt}+e^{-\alpha_k(T-t)}$, so that $H_k^s(t,x) = u_k(t)e_k(x)$. We again derive a representation
\begin{equation*}
H_k^s = \partial_th_k-Lg_k
\end{equation*}
for each of the basis functions $H_k^s$ with $k\in\N$ fixed by employing the ansatz
\begin{equation}
u_k = \dot v_k+w_k\quad\textup{with }v_k(0)=v_k(T)=w_k(0)=w_k(T)=0\,.
\end{equation}
Then setting
\begin{equation*}
h_k(t,x) = v_k(t)e_k(x)\qquad\textup{and}\qquad g_k(t,x) = \frac{1}{\alpha_k^2}w_k(t)e_k(x)
\end{equation*}
yields $H_k^s = \partial_th_k-Lg_k$. The only thing that changes compared to the previous case is the bound on $\int_0^T\dot u_k(t)^2\diff t$. In this case, one has
\begin{align*}
\int_0^\frac{T}{2}\dot u_k(t)^2\diff t &= \alpha_k^2\int_0^\frac{T}{2}\left(-e^{-\alpha_kt}+e^{-\alpha_k(T-t)}\right)^2\diff t\\
&= \alpha_ke^{-\alpha_kT}(\sinh(\alpha_kT)-\alpha_kT)\,,
\end{align*}
whereas
\begin{equation*}
\int_0^\frac{T}{2}u_k(t)^2\diff t = \alpha_k^{-1}e^{-\alpha_kT}(\sinh(\alpha_kT)+\alpha_kT)\,.
\end{equation*}
Hence
\begin{equation*}
\int_0^T\dot u_k(t)^2\diff t = \alpha_k^2\frac{\sinh(\alpha_kT)-\alpha_kT}{\sinh(\alpha_kT)+\alpha_kT}\int_0^Tu_k(t)^2\diff t \leq \alpha_k^2\int_0^Tu_k(t)^2\diff t\,.
\end{equation*}
Again setting
\begin{equation*}
h = \sum_{\alpha_k>\beta/T}b_kh_k\qquad\textup{and}\qquad g = \sum_{\alpha_k>\beta/T}b_kg_k
\end{equation*}
yields $f = \partial_th-Lg$. As in the previous case, the bounds \eqref{eq:bound0order} and \eqref{eq:bound1order} follow with the constants
\begin{align*}
c_0^{h,s} &= \left(\frac{1}{30}+\left(1+\frac{1}{\sqrt{3}}\right)^2\right)\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\qquad\textup{and}\\
c_1^{h,s} &= \left(1+\frac{1}{\sqrt{3}}\right)^2\left(2+\min\left(\frac{1}{m},\frac{T^2}{\beta^2}\right)\rho\right)+ \left(5+\sqrt{2}\right)^2 + \frac{1}{30}\,.
\end{align*}
For any $f\in L_0^2(\muq)$, combining the results on each of the subspaces using linearity, we can hence write $f=\partial_th-Lg$ with $h$ and $g$ satisfying \eqref{eq:hgDirichletcond} and \eqref{eq:gdomLcond}.
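Let us briefly make the combination explicit. The five subspaces $\H_0^\bot$, $\H_{l,a}$, $\H_{l,s}$, $\H_{h,a}$ and $\H_{h,s}$ are pairwise orthogonal in $L_0^2(\muq)$. Writing $f=\sum_{i=1}^5f^{(i)}$ for the corresponding decomposition and $h=\sum_{i=1}^5h^{(i)}$, $g=\sum_{i=1}^5g^{(i)}$ for the associated solutions, the Cauchy--Schwarz inequality yields, for instance,
\begin{equation*}
\norm{h}_{L^2(\muq)}^2\leq\Big(\sum_{i=1}^5\norm{h^{(i)}}_{L^2(\muq)}\Big)^2\leq 5\sum_{i=1}^5\norm{h^{(i)}}_{L^2(\muq)}^2\,,
\end{equation*}
and analogously for the remaining terms in \eqref{eq:bound0order} and \eqref{eq:bound1order}, which is the source of the factor $5$ in the constants.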
Choosing $\beta=2$, the bounds \eqref{eq:bound0order} and \eqref{eq:bound1order} then follow with
\begin{align*}
c_0(T) &= 5\max\left(c_0^{l,a}, c_0^{l,s}, c_0^{h,a}, c_0^{h,s}, c_0^\bot\right) = 5c_0^{l,s}\\
&\leq 2T^2+\frac{43}{m}
\end{align*}
and
\begin{align*}
c_1(T) &= 5\max\left(c_1^{l,a}, c_1^{l,s}, c_1^{h,a}, c_1^{h,s}, c_1^\bot\right)=5\max\left(c_1^{l,s},c_1^{h,a},c_1^\bot\right)\\
&\leq 290+\frac{991}{mT^2}+43\max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\rho\,.
\end{align*}
\end{proof}
\section{Optimal lifts on Riemannian manifolds with boundary}\label{sec:optimallifts}
In the following, let $(M,g)$ be a complete, connected and oriented Riemannian manifold with boundary and let
\begin{equation*}
\mu(\diff x)=\exp(-U(x))\,\nuM(\diff x)
\end{equation*}
with $U\in C^2(M)$ be a probability measure on $M$ such that \Cref{ass:U,ass:discretespec} are satisfied. The operator $(\frac{1}{2}L,\dom(L))$ defined by \Cref{cor:reillyestimate} is the generator in $L^2(\mu)$ of an overdamped Langevin diffusion on $M$ with reflection at the boundary $\dM$ and invariant probability measure $\mu$. Explicitly, the set of compactly supported smooth functions
\begin{equation*}
\D=\{u\in C_\c^\infty(M)\colon\partial_Nu=0\textup{ on }\dM\}
\end{equation*}
with Neumann boundary conditions is a core for the self-adjoint operator $(\frac{1}{2}L,\dom(L))$ and
\begin{equation*}
Lf = -\nabla U\cdot\nabla f + \Delta f\qquad\textup{for }f\in\D\,.
\end{equation*}
The corresponding diffusion process solves the Stratonovich stochastic differential equation
\begin{equation*}
\diff Z_t = -\frac{1}{2}\nabla U(Z_t)\diff t + \circ\diff B_t - N(Z_t)\diff L_t\,,
\end{equation*}
where $(B_t)_{t\geq0}$ is a Brownian motion on $M$ and $(L_t)_{t\geq 0}$ is the local time of $(Z_t)_{t\geq0}$ at the boundary $\dM$, see \cite{IkedaWatanabe1989SDE,Wang2014Analysis}.
By \Cref{cor:reillyestimate}, the associated Dirichlet form is \begin{equation*} \Ecal(f,g) = \frac{1}{2}\int_M \langle\nabla f,\nabla g\rangle\diff\mu \end{equation*} with domain $\dom(\Ecal) = H^{1,2}(M,\mu)$. \subsection{Lifts based on Hamiltonian dynamics on Riemannian manifolds} At first assume that $\dM=\emptyset$. We denote elements of the tangent bundle $\T M$ by $(x,v)$ with $x\in M$ and $v\in \T_xM$. Defining the Hamiltonian \begin{equation*} H(x,v) = U(x) + \frac{1}{2}\langle v,v\rangle_x \end{equation*} on $\T M$ yields the associated Hamiltonian flow $(X_t,V_t)_{t\geq 0}$ on $\T M$, see \cite[Section 3.7]{Abraham1987Mechanics}. It preserves the Hamiltonian $H$ and satisfies the equations \begin{align*} \frac{\diff}{\diff t}X_t = V_t\,,\qquad \frac{\nabla}{\diff t}V_t = -\nabla U(X_t)\,. \end{align*} Using horizontal and vertical lifts of vector fields, the tangent space $\T_{(x,v)}\T M$ of $\T M$ in $(x,v)\in\T M$ can be canonically identified with the direct sum \begin{equation}\label{eq:TTM} \T_{(x,v)}\T M = \T_xM\oplus\T_xM\,, \end{equation} see \cite[Section II.4]{Sakai1996Riemannian}. Using this identification, the vector field generating the Hamiltonian flow is $(x,v)\mapsto(v,-\nabla U(x))$. Furthermore, the Sasaki metric \begin{equation*} \tilde g_{(x,v)}\big((\zeta,\xi),(\zeta',\xi')\big) = g_x(\zeta,\zeta') + g_x(\xi,\xi')\quad\textup{for }(x,v)\in\T M,\,(\zeta,\xi), (\zeta',\xi')\in\T_{(x,v)}\T M \end{equation*} on $\T M$ turns the tangent bundle $(\T M,\tilde g)$ into a Riemannian manifold \cite{Sasaki1958Differential,Sakai1996Riemannian}. By Liouville's theorem, the Hamiltonian flow leaves the associated Riemannian volume form $\diff\nu_{\tilde g}$ on $\T M$ invariant. 
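Writing $(\Phi_t)_{t\in\R}$ for the flow maps of the Hamiltonian flow, both conservation properties combine as follows: for any bounded measurable $f$ on $\T M$,
\begin{equation*}
\int_{\T M}f\circ\Phi_t\,e^{-H}\diff\nu_{\tilde g} = \int_{\T M}f\,e^{-H\circ\Phi_{-t}}\diff\nu_{\tilde g} = \int_{\T M}f\,e^{-H}\diff\nu_{\tilde g}\,,
\end{equation*}
where the first equality uses the invariance of $\diff\nu_{\tilde g}$ under $\Phi_t$ and the second uses $H\circ\Phi_{-t}=H$.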
Therefore, the Hamiltonian flow preserves the probability measure $\hat\mu$ on $\T M$ with density proportional to $\exp(-H)$ with respect to the volume $\diff\nu_{\tilde g}$, which can be disintegrated as \begin{equation*} \hat\mu(\diff x\diff v) = \mu(\diff x)\kappa(x,\diff v) \end{equation*} with $\kappa(x,\cdot) = \mathcal{N}(0,g_x^{-1})$. The generator of the associated transition semigroup on $L^2(\hat\mu)$ is then given by \begin{equation}\label{eq:hatL} \hat Lf(x,v) = \langle v,\nabla_xf(x,v)\rangle - \langle\nabla U(x),\nabla_vf(x,v)\rangle\,, \end{equation} and its domain satisfies \begin{equation*} \dom(\hat L)\supseteq\{f\in C^\infty(\T M)\cap L^2(\hat\mu)\colon\langle v,\nabla_xf\rangle - \langle\nabla U,\nabla_vf\rangle\in L^2(\hat\mu) \}\,. \end{equation*} \begin{lemm}\label{lem:HamDynlift} The Hamiltonian dynamics $(X_t,V_t)_{t\geq 0}$ is a second-order lift of the overdamped Langevin diffusion $(Z_t)_{t\geq 0}$. \end{lemm} \begin{proof} Note that it suffices to verify \eqref{eq:deflift0}--\eqref{eq:deflift2} on the core $\D$ for $(\frac{1}{2}L,\dom(L))$ defined in \eqref{eq:defD}. Hence let $f,g\in\D$. Then $\langle v,\nabla_x(f\circ\pi)\rangle = \langle v,\nabla f\circ\pi\rangle\in L^2(\hat\mu)$, so that $f\circ\pi\in\dom(\hat L)$ and \eqref{eq:deflift0} is satisfied. Furthermore, \begin{equation*} \int_{\T M}\hat L(f\circ\pi)(g\circ\pi)\diff\hat\mu = \int_M\int_{\T_x M}\langle v,\nabla f(x)\rangle\kappa(x,\diff v)\,g(x)\mu(\diff x) = 0\,, \end{equation*} so that \eqref{eq:deflift1} is satisfied. Finally, \eqref{eq:deflift2} is satisfied by \begin{align*} \frac{1}{2}\int_{\T M}(\hat L(f\circ\pi))^2\diff\hat\mu &= \frac{1}{2}\int_M\int_{\T_xM}\langle v,\nabla f(x)\rangle^2\kappa(x,\diff v)\mu(\diff x)\\ &=\frac{1}{2}\int_M\langle\nabla f,\nabla f\rangle\diff\mu = \Ecal(f,f)\,.\qedhere \end{align*} \end{proof} Now let us consider a complete Riemannian manifold $(M,g)$ with boundary $\dM\neq\emptyset$. 
For $x\in\dM$ and $v\in\T_xM$ let
\begin{equation*}
R_xv = v-2N(x)\langle N(x),v\rangle_x
\end{equation*}
be the specular reflection at the boundary. Note that, if $v$ is outward-pointing, then $R_xv$ is inward-pointing, and $R_x$ leaves $\kappa(x,\cdot)$ invariant. The generator of Hamiltonian dynamics with reflection at the boundary is then given by the closure $(\hat L,\dom(\hat L))$ of $(\hat L,\mathcal{R})$ in $L^2(\hat\mu)$ with
\begin{align*}
\mathcal{R} = \{f\in C^\infty(\T M)\cap L^2(\hat\mu)\colon &\langle v,\nabla_xf\rangle - \langle\nabla U,\nabla_vf\rangle\in L^2(\hat\mu)\quad\textup{and}\\
&f(x,R_xv)=f(x,v)\textup{ for all }x\in\dM\textup{ and }v\in\T_xM\}\,,
\end{align*}
and $\hat L$ given by \eqref{eq:hatL}. This definition clearly coincides with the definition above in case $\dM=\emptyset$. The associated process $(X_t,V_t)_{t\geq 0}$ follows the Hamiltonian flow on the interior of $M$, and the normal component of the velocity is reflected when hitting the boundary. \begin{lemm}\label{eq:HamDynBdrylift} The Hamiltonian dynamics $(X_t,V_t)_{t\geq 0}$ with reflection at the boundary is a second-order lift of the overdamped Langevin diffusion $(Z_t)_{t\geq 0}$ with reflection at the boundary. \end{lemm} \begin{proof} Since $f\in\D$ implies $f\circ\pi\in\mathcal{R}$, the domain condition \eqref{eq:deflift0} is satisfied. The remaining conditions \eqref{eq:deflift1} and \eqref{eq:deflift2} follow as in \Cref{lem:HamDynlift}. \end{proof} \begin{lemm}[Adjoint of $\hat L$]\label{lem:Lhatstar} The adjoint of $(\hat L,\dom(\hat L))$ in $L^2(\hat\mu)$ satisfies $\dom(\hat L)\subseteq\dom(\hat L^*)$ and $\hat L^*g = -\hat Lg$ for all $g\in\dom(\hat L)$. \end{lemm} \begin{proof} Let $f,g\in\mathcal{R}$. Integration by parts gives
\begin{align*}
\int_{\T M}(\hat Lf)g\diff\hat\mu = -\int_{\T M}f\hat Lg\diff\hat\mu + \int_{\dM}\int_{\T_x M}fg\langle v,N(x)\rangle\,\kappa(x,\diff v)\,\mudM(\diff x)\,.
\end{align*}
Denoting $h(x,v) = f(x,v)\langle v,N(x)\rangle$ gives $h(x,R_xv) = -h(x,v)$, so that
\begin{align*}
\int_{\T_xM}h(x,v)g(x,v)\kappa(x,\diff v) &= \int_{\T_xM}h(x,R_xv)g(x,R_xv)\kappa(x,\diff v)\\
&=-\int_{\T_xM}h(x,v)g(x,v)\kappa(x,\diff v)
\end{align*}
since $R_x$ leaves $\kappa(x,\cdot)$ invariant, so that the boundary terms vanish. Hence the operator $(-\hat L^*,\dom(\hat L^*))$ extends $(\hat L,\mathcal{R})$ and thus also $(\hat L,\dom(\hat L))$. \end{proof} \emph{Randomised Riemannian Hamiltonian Monte Carlo} with refresh rate $\gamma>0$ is obtained by interspersing Hamiltonian dynamics with complete velocity refreshments according to $\kappa(x,\diff v)$ after random times with exponential distribution $\mathrm{Exp}(\gamma)$. By successively conditioning on the refreshment times, one can show that randomised Riemannian Hamiltonian Monte Carlo again leaves $\hat\mu$ invariant \cite{Bou-Rabee2017RHMC}, and its generator in $L^2(\hat\mu)$ is given by
\begin{equation*}
\hat L^{(\gamma )}_\RHMC= \hat L+\gamma(\Pi-I)\qquad\textup{and}\qquad\dom\big(\hat L^{(\gamma)}_\RHMC\big) = \dom(\hat L)\,,
\end{equation*}
where
\begin{equation*}
(\Pi f)(x,v)=\int_{\T_xM}f(x,w)\kappa(x,\diff w)\,.
\end{equation*}
\emph{Riemannian Langevin dynamics} is obtained by adding Ornstein-Uhlenbeck noise instead of discrete jumps in the velocity, i.e.\ it is the solution $(X_t,V_t)_{t\geq 0}$ to the stochastic differential equation
\begin{align*}
\diff X_t &= V_t\diff t\,,\\
\diff V_t & = -\nabla U(X_t)\diff t - \gamma V_t\diff t + \sqrt{2\gamma}\circ\diff B_t + \diff K_t\,,
\end{align*}
where
\begin{equation*}
K_t = -2\sum_{0< s\leq t}\langle N(X_s),V_{s-}\rangle N(X_s)\cdot 1_{\dM}(X_s)
\end{equation*}
is a confinement process modelling specular reflections at the boundary.
Well-posedness of this equation and existence of solutions, in particular non-accumulation of boundary hits, are studied in \cite{BossyJabir2011Confined,BossyJabir2015Lagrangian} in the Euclidean case, and the associated kinetic Fokker-Planck equation with specular boundary conditions is considered in \cite{Dong2022Specular}. The invariant probability measure of $(X_t,V_t)_{t\geq 0}$ is $\hat\mu$, see \cite{GrothausMertin2022LangevinManifolds}. We denote the associated generator in $L^2(\hat\mu)$ by $(\hat L^{(\gamma)}_\LD,\dom(\hat L^{(\gamma)}_\LD))$. Let \begin{equation}\label{eq:Lvdef} L_vf = -\langle v,\nabla_vf\rangle+\Delta_vf\,, \end{equation} where $\Delta = \Delta_x+\Delta_v$ is the splitting of the Laplace-Beltrami operator $\Delta$ on $\T M$ associated to the decomposition \eqref{eq:TTM}, i.e.\ $\Delta_v = \divg_v\nabla_v$ is the Laplace-Beltrami operator in $v$. Then, for functions $f\in\mathcal{R}$ with $L_vf\in L^2(\hat\mu)$, the generator is given by \begin{equation}\label{eq:hatLgamma} \hat L^{(\gamma)}_\LD f = \hat Lf + \gamma L_vf\,. \end{equation} \begin{lemm} For any choice of $\gamma>0$, both $(\hat L_\RHMC^{(\gamma)},\dom(\hat L_\RHMC^{(\gamma)}))$ and $(\hat L_\LD^{(\gamma)},\dom(\hat L_\LD^{(\gamma)}))$ are lifts of $(\frac{1}{2}L,\dom(L))$. \end{lemm} \begin{proof} Firstly, since $\D$ forms a core for $(L,\dom(L))$, the condition \eqref{eq:deflift0} is satisfied for both $\hat L_\RHMC^{(\gamma)}$ and $\hat L_\LD^{(\gamma)}$. Secondly, both generators are obtained from $\hat L$ by adding a term that only acts on the velocity and leaves $\kappa(x,\cdot)$ invariant. This means that \begin{equation*} \hat L_\RHMC^{(\gamma)}(f\circ\pi) = \hat L^{(\gamma)}_\LD(f\circ\pi) = \hat L(f\circ\pi) \end{equation*} for any $f\in\D$, so that \eqref{eq:deflift1} and \eqref{eq:deflift2} are immediately satisfied.
\end{proof} \begin{rema} Suppose that $M\subseteq\R^d$ is closed and convex with smooth boundary, so that $M$ is a Riemannian manifold with boundary when equipped with the flat Euclidean metric, as in \Cref{rem:Mconvex}. The construction of RHMC and Langevin dynamics on $M$ then reduces to the usual processes on $\R^d$ constrained to $M$ via a specular reflection at the boundary. In case $U$ is constant, the invariant probability measure $\hat\mu$ is the product of the uniform distribution on $M$ and a standard normal distribution in the velocity. In this case, the Hamiltonian dynamics on $M$ reduces to billiards in a convex set, see \cite{Tabachnikov2005Billiards}. \end{rema} \subsection{Space-time Poincaré inequality on Riemannian manifolds} Albritton, Armstrong, Mourrat and Novack \cite{Albritton2021Variational} developed an approach to proving exponential decay for the kinetic Fokker-Planck equation using space-time Poincaré inequalities. Quantitative upper bounds for various non-reversible Markov processes were then obtained by Cao, Lu and Wang \cite{Lu2022PDMP,Cao2019Langevin} by proving such space-time Poincaré inequalities for the Kolmogorov backward equations associated to these processes. In \cite{EberleLoerler2024Lifts}, the authors show how the framework of second-order lifts together with a divergence lemma as proved in \Cref{sec:divergence} can be used to prove such space-time Poincaré inequalities and thereby obtain quantitative bounds on the relaxation time of non-reversible Markov processes. In this section, we prove a space-time Poincaré inequality for RHMC and Langevin dynamics on a Riemannian manifold $M$ with reflection at the boundary. In the following, we fix $T\in(0,\infty)$ and write $L^2(\muqk)=L^2([0,T]\times\T M,\muqk)$. 
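The two properties of the specular reflection used repeatedly above, namely that $R_x$ is an involutive isometry of $\T_xM$ flipping the normal component, and hence leaves the standard Gaussian distribution $\kappa(x,\cdot)$ invariant, are easy to verify numerically. A minimal Python sketch in the Euclidean setting (randomly drawn unit vectors stand in for the outward normal $N(x)$):

```python
import numpy as np

def reflect(n, v):
    # Specular reflection R_x v = v - 2 <N(x), v> N(x).
    return v - 2.0 * np.dot(n, v) * n

rng = np.random.default_rng(1)
for _ in range(100):
    n = rng.standard_normal(3)
    n /= np.linalg.norm(n)                # unit outward normal N(x)
    v = rng.standard_normal(3)            # velocity drawn from kappa(x, .)
    w = reflect(n, v)
    assert np.isclose(np.dot(n, w), -np.dot(n, v))           # normal component flips
    assert np.allclose(reflect(n, w), v)                     # involution: R_x^2 = id
    assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))  # isometry: |R_x v| = |v|
print("ok")
```

Since $R_x$ is a linear isometry, it preserves the rotation-invariant Gaussian density of $\kappa(x,\cdot)$, which is exactly the invariance exploited in the proof of \Cref{lem:Lhatstar}.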
We consider the operator $(A,\dom(A))$ on $L^2(\muqk)$ defined by \begin{equation} \label{def:A} Af=-\partial_tf+\hat Lf \end{equation} with domain consisting of all functions $ f\in L^2(\muqk)$ such that $f(\cdot,x,v)$ is absolutely continuous for $\hat\mu$-a.e.\ $(x,v)\in \T M$ with $\partial_t f\in L^2(\muqk)$, and $f(t,\cdot)\in\dom(\hat L)$ for $\lambda\textup{-a.e.\ }t\in[0,T]$ with $\hat Lf\in L^2(\muqk)$. Here $\hat L$ is the generator \eqref{eq:hatL} of the Hamiltonian flow. We consider the adjoint $(A^*,\dom(A^*))$ in $L^2(\muqk)$ of the operator $(A,\dom(A))$. An integration by parts and \Cref{lem:Lhatstar} yield \begin{align*} (Af,g)_{L^2(\muqk)} &= (f,-Ag)_{L^2(\muqk)} + \int_{\T M}(f(T,\cdot)g(T,\cdot)-f(0,\cdot)g(0,\cdot))\diff\hat\mu \end{align*} for all $f,g\in\dom(A)$. Thus functions $g\in\dom(A)$ satisfying $g(0,\cdot)=g(T,\cdot)=0$ are contained in the domain of $A^*$, and \begin{equation}\label{Astarg} A^*g=-Ag=\partial_tg-\hat Lg\, . \end{equation} In the following, for $(t,x,v)\in [0,\infty )\times \T M$ we set $\pi(t,x,v)=(t,x)$, whereas $\pi(x,v)=x$. \begin{rema}\label{rem:hgdomAstar} For $h$ and $g$ as in \Cref{thm:divergence} and $(x,v)\in\T M$ with $x\in\dM$, we have \begin{align*} (h\circ\pi+\hat L(g\circ\pi))(\cdot,x,R_xv) &= h(\cdot,x)+\big\langle v-2N(x)\langle N(x),v\rangle,\nabla_xg(\cdot,x)\big\rangle \\ &= (h\circ\pi+\hat L(g\circ\pi))(\cdot,x,v)\,, \end{align*} since $\langle N,\nabla_xg\rangle = \partial_Ng = 0$ on $\dM$, and \begin{equation*} (h\circ\pi+\hat L(g\circ\pi))(0,\cdot) = (h\circ\pi+\hat L(g\circ\pi))(T,\cdot) = 0\,, \end{equation*} so that $h\circ\pi+\hat L(g\circ\pi) \in\dom(A^*)$. \end{rema} \begin{lemm} \label{lem:Astar} Let $X=\begin{pmatrix}-h\\\nabla_xg\end{pmatrix}$ as in \Cref{thm:divergence} and $x\in M$.
Then \begin{eqnarray}\label{eq:Astar1} \int_{\T_xM} A^*(h\circ\pi+\hat L(g\circ\pi))\kappa(x,\diff v) &=& \overline{\nabla}^*X\circ\pi \, ,\quad\text{ and}\\ \norm{A^*(h\circ\pi+\hat L(g\circ\pi))-\overline\nabla^*X\circ\pi}_{L^2(\muqk)} &\leq &\sqrt{2}\norm{\overline{\nabla}X}_{L^2(\muq)}\,.\label{eq:Astar2} \end{eqnarray} \end{lemm} \begin{proof} The proof in the Riemannian setting works as in \cite[Lemma 22]{EberleLoerler2024Lifts}. \end{proof} We are now ready to state and prove the space-time Poincaré inequality that is the key ingredient for deriving the relaxation time of lifts. For any $x\in M$ we let $H^{-1}(\kappa_x)$ denote the dual of the Gaussian Sobolev space $H^{1,2}(\kappa_x) = H^{1,2}(\T_xM,\kappa(x,\cdot))$. Note that \begin{equation*} \norm{f}_{H^{1,2}(\kappa_x)}^2 = \norm{f}_{L^2(\kappa_x)}^2 + \norm{\nabla f}_{L^2(\kappa_x)}^2 = \big(f,(I-L_v)f\big)_{L^2(\kappa_x)}\,, \end{equation*} where $L_v$ is given by \eqref{eq:Lvdef}. For $f\in L^2(\kappa_x)$, the $H^{-1}(\kappa_x)$-norm is then \begin{equation*} \norm{f}_{H^{-1}(\kappa_x)}^2 = \big(f,(I-L_v)^{-1}f\big)_{L^2(\kappa_x)}\,, \end{equation*} where the inverse exists by the spectral decomposition. In particular, a spectral decomposition shows that for $f\in H^{2,2}(\kappa_x)$, \begin{equation}\label{eq:H-1comparison1} \norm{L_vf}_{H^{-1}(\kappa_x)}^2 = \big(L_vf,(I-L_v)^{-1}L_vf\big)_{L^2(\kappa_x)} \leq (f,-L_vf)_{L^2(\kappa_x)} \end{equation} and \begin{equation}\label{eq:H-1comparison2} \norm{L_vf}_{H^{-1}(\kappa_x)}^2 \geq \frac{1}{2}(f,-L_vf)_{L^2(\kappa_x)}\,. \end{equation} Let $L^2(\lambda\otimes\mu;H^{1,2}(\kappa))$ denote the subspace of functions $f\in L^2(\muqk)$ such that $f(t,x,\cdot)\in H^{1,2}(\kappa_x)$ for $\lambda\otimes\mu$-a.e.\ $(t,x)\in[0,T]\times M$ and \begin{equation*} \norm{f}_{L^2(\lambda\otimes\mu;H^{1,2}(\kappa))}^2=\int_{[0,T]\times M}\norm{f(t,x,\cdot)}_{H^{1,2}(\kappa_x)}^2\mu(\diff x)\lambda(\diff t)<\infty\,. 
\end{equation*} Its dual is the similarly defined $L^2(\lambda\otimes\mu;H^{-1}(\kappa))$. In particular, for $f\in L^2(\lambda\otimes\mu;H^{-1}(\kappa))$ and $g\in L^2(\lambda\otimes\mu;H^{1,2}(\kappa))$, \begin{equation*} (f,g)_{L^2(\muqk)} \leq \norm{f}_{L^2(\lambda\otimes\mu;H^{-1}(\kappa))} \norm{g}_{L^2(\lambda\otimes\mu;H^{1,2}(\kappa))}\,. \end{equation*} \begin{theo}[Space-time Poincaré inequality]\label{thm:STPI} For any $T>0$, there exist constants $C_0(T),C_1(T)>0$ such that for all functions $f\in\dom(A)$, \begin{equation}\label{eq:STPI1} \norm{f-\int f\diff(\muqk)}_{L^2(\muqk)}^2\leq C_0(T)\norm{Af}_{L^2(\muqk)}^2+C_1(T)\norm{(\Pi-I)f}_{L^2(\muqk)}^2 \end{equation} and \begin{equation}\label{eq:STPI2} \norm{f-\int f\diff(\muqk)}_{L^2(\muqk)}^2\leq 2C_0(T)\norm{Af}_{L^2(\lambda\otimes\mu,H^{-1}(\kappa))}^2+C_1(T)\norm{(\Pi-I)f}_{L^2(\muqk)}^2\,, \end{equation} where \begin{align*} C_0(T) &= 2c_0(T) = 4T^2 + \frac{86}{m}\,,\\ C_1(T) &= 3+4c_1(T) = 1163+\frac{3964}{mT^2}+172\max\left(\frac{1}{m},\frac{T^2}{\pi^2}\right)\rho\,. \end{align*} \end{theo} The proof of \eqref{eq:STPI1} is an adaptation of \cite[Theorem 23]{EberleLoerler2024Lifts} to the Riemannian setting with boundary. An expression similar to \eqref{eq:STPI2} was first shown in the Euclidean setting in \cite{Cao2019Langevin}, but with non-optimal constants. The treatment of Langevin dynamics requires the $L^2(\lambda\otimes\mu;H^{-1}(\kappa))$-norm of $Af$ in \eqref{eq:STPI2}. In the proof we apply the following \emph{space-time property of lifts}, see \cite[Lemma 18]{EberleLoerler2024Lifts}.
It states that for any $f,g,h\in L^2(\lambda\otimes\mu)$ such that $f\circ\pi\in\dom(A)$ and $g(t,\cdot)\in\dom(L)$ for a.e.\ $t\in[0,T]$ with $Lg\in L^2(\lambda\otimes\mu)$, we have \begin{equation}\label{eq:xtproplift} \frac{1}{2}\left(A(f\circ\pi),h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}=-\frac{1}{2}(\partial_tf,h)_{L^2(\lambda\otimes\mu)}+\Ecal_T(f,g)\, , \end{equation} where $\Ecal_T(f,g)=\int_0^T\Ecal(f(t,\cdot),g(t,\cdot ))\diff t$ is the time-integrated Dirichlet form associated to $(\frac{1}{2}L,\dom(L))$. \begin{proof}[Proof of \Cref{thm:STPI}] Fix a function $f_0\in\dom(A)$, let $f=f_0-\int f_0\diff(\muqk)$, and let $\tilde f\colon[0,T]\times M\to\R$ be such that $\Pi f=\tilde f\circ\pi$. First note that \begin{align} \norm{f}_{L^2(\muqk)}^2&=\norm{f-\Pi f}_{L^2(\muqk)}^2+\norm{\tilde f}_{L^2(\muq)}^2\,,\label{L2decompf} \end{align} so that it suffices to bound $\norm{\tilde f}_{L^2(\muq)}^2$ by $\norm{f-\Pi f}_{L^2(\muqk)}^2$ and $\norm{Af}_{L^2(\muqk)}^2$ to obtain \eqref{eq:STPI1}, and by $\norm{f-\Pi f}_{L^2(\muqk)}^2$ and $\norm{Af}_{L^2(\muq,H^{-1}(\kappa))}^2$ to obtain \eqref{eq:STPI2}. By \Cref{thm:divergence} applied to $\tilde f$, there exist functions $h\in H^{1,2}_\DC(\muq)$ and $g\in H^{2,2}_\DC(\muq)$ satisfying \eqref{eq:gdomLcond} such that \begin{equation*} \tilde f = \partial_th-Lg\,. \end{equation*} In particular, \begin{equation}\label{L2tildef} \norm{\tilde f}_{L^2(\muq)}^2=(\tilde f,\tilde f)_{L^2(\muq)}=(\tilde f,\partial_th-Lg)_{L^2(\muq)}\,. \end{equation} Using the space-time property of lifts \eqref{eq:xtproplift}, we see that \begin{align*} (\tilde f,\partial_th-Lg)_{L^2(\muq)}&=\big({-\partial_t\tilde f},h\big)_{L^2(\muq)}+2\Ecal_T(\tilde f,g)\\ &=\left(A(\tilde f\circ\pi),h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}\\ &=\big(Af,h\circ\pi+\hat L(g\circ\pi)\big)_{L^2(\muqk)}\\ &\qquad+\big(A(\Pi f-f),h\circ\pi+\hat L(g\circ\pi)\big)_{L^2(\muqk)}\,. \end{align*} Let us consider the two summands separately.
Since $h\circ\pi+\hat L(g\circ\pi)\in L^2(\lambda\otimes\mu;H^{1,2}(\kappa))$, the first summand can be estimated as \begin{align*} \left(Af,h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}&\leq \norm{Af}_{L^2(\muqk)}\norm{h\circ\pi+\hat L(g\circ\pi)}_{L^2(\muqk)}\\ &\leq c_0(T)^\frac{1}{2}\norm{Af}_{L^2(\muqk)}\norm{\tilde f}_{L^2(\muq)}\, , \end{align*} where we used that \begin{equation*} \norm{h\circ\pi+\hat L(g\circ\pi)}_{L^2(\muqk)}^2 = \norm{h}_{L^2(\muq)}^2 + \norm{\nabla g}_{L^2(\muq)}^2\leq c_0(T)\norm{\tilde f}_{L^2(\muq)}^2 \end{equation*} by \eqref{eq:bound0order}. By \Cref{rem:hgdomAstar}, $h\circ\pi+\hat L(g\circ\pi)\in\dom(A^*)$, so that the second summand satisfies \begin{equation*} \left(A(\Pi f-f),h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}=\left(\Pi f-f,A^*(h\circ\pi+\hat L(g\circ\pi))\right)_{L^2(\muqk)}\,. \end{equation*} Since $\tilde f= \overline\nabla^*X$, \Cref{lem:Astar} and \Cref{thm:divergence} yield \begin{align*} \norm{A^*(h\circ\pi+\hat L(g\circ\pi))}_{L^2(\muqk)}^2 &=\norm{\tilde f}_{L^2(\muq)}^2+\norm{A^*(h\circ\pi+\hat L(g\circ\pi))-\overline\nabla^*X\circ\pi}_{L^2(\muqk)}^2\,\\ &\leq \norm{\tilde f}_{L^2(\muq)}^2+2\norm{\overline\nabla X}_{L^2(\muq)}^2\leq(1+2c_1(T))\norm{\tilde f}_{L^2(\muq)}^2\,, \end{align*} so that the second summand can be estimated as \begin{equation*} \left(A(\Pi f-f),h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}\leq \left(1+2c_1(T)\right)^\frac{1}{2}\norm{f-\Pi f}_{L^2(\muqk)}\norm{\tilde f}_{L^2(\muq)}\,. \end{equation*} Putting things together, \eqref{L2tildef} hence yields \begin{align*} \norm{\tilde f}_{L^2(\muq)}&\leq c_0(T)^\frac{1}{2}\norm{Af}_{L^2(\muqk)}+\left(1+2c_1(T)\right)^\frac{1}{2}\norm{f-\Pi f}_{L^2(\muqk)} \end{align*} so that by \eqref{L2decompf}, \begin{align*} \norm{f}_{L^2(\muqk)}^2 &\leq2c_0(T)\norm{Af}_{L^2(\muqk)}^2+(3+4c_1(T))\norm{f-\Pi f}_{L^2(\muqk)}^2\,, \end{align*} which proves \eqref{eq:STPI1}.
Similarly, to obtain \eqref{eq:STPI2}, the first summand can be estimated as \begin{align*} \left(Af,h\circ\pi+\hat L(g\circ\pi)\right)_{L^2(\muqk)}&\leq \norm{Af}_{L^2(\muq,H^{-1}(\kappa))}\norm{h\circ\pi+\hat L(g\circ\pi)}_{L^2(\muq,H^{1,2}(\kappa))}\\ &\leq (2c_0(T))^\frac{1}{2}\norm{Af}_{L^2(\muq,H^{-1}(\kappa))}\norm{\tilde f}_{L^2(\muq)}\, , \end{align*} where we used that \begin{align*} \norm{h\circ\pi+\hat L(g\circ\pi)}_{L^2(\muq,H^{1,2}(\kappa))}^2 &= \norm{h\circ\pi+\hat L(g\circ\pi)}_{L^2(\muqk)}^2 + \norm{\nabla_v(h\circ\pi+\hat L(g\circ\pi))}_{L^2(\muqk)}^2\\ &=\norm{h}_{L^2(\muq)}^2+\norm{\nabla g}_{L^2(\muq)}^2 + \norm{v\cdot\nabla g}_{L^2(\muqk)}^2\\ &\leq 2c_0(T)\norm{\tilde f}_{L^2(\muq)}^2 \end{align*} and $c_0(T)$ is the constant from \eqref{eq:bound0order}. One then concludes analogously. \end{proof} As demonstrated in \cite{EberleLoerler2024Lifts}, such a space-time Poincaré inequality yields the following exponential decay in $T$-average of the associated semigroup. \begin{theo}[Exponential Decay]\label{thm:decay} Let $\gamma>0$ and $T>0$ and set \begin{equation*} \nu = \frac{\gamma}{\gamma^2C_0(T)+C_1(T)}\,. \end{equation*} \begin{enumerate}[(i)] \item The transition semigroup $(\hat P_t^{(\gamma)})_{t\geq0}$ of Riemannian RHMC with reflection at the boundary and refresh rate $\gamma$ satisfies \begin{equation*} \frac{1}{T}\int_t^{t+T}\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}\diff s\leq e^{-\nu t}\norm{f}_{L^2(\hat\mu)} \end{equation*} for all $f\in L^2_0(\hat\mu)$. \item The transition semigroup $(\hat P_t^{(\gamma)})_{t\geq0}$ of Riemannian Langevin dynamics with reflection at the boundary and friction $\gamma$ satisfies \begin{equation*} \frac{1}{T}\int_t^{t+T}\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}\diff s\leq e^{-\frac{1}{2}\nu t}\norm{f}_{L^2(\hat\mu)} \end{equation*} for all $f\in L^2_0(\hat\mu)$.
\end{enumerate} \end{theo} \begin{proof} \begin{enumerate}[(i)] \item The proof in this case uses \eqref{eq:STPI1} and is identical to that of \cite[Theorem 17]{EberleLoerler2024Lifts}. \item Let $f\in L_0^2(\hat\mu)$. Then by \eqref{eq:hatLgamma} and the antisymmetry of $\hat L$, \begin{align*} \MoveEqLeft\frac{\diff}{\diff t}\int_t^{t+T}\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}^2\diff s=\norm{\hat P_{t+T}^{(\gamma)}f}_{L^2(\hat\mu)}^2-\norm{\hat P_{t}^{(\gamma)}f}_{L^2(\hat\mu)}^2\\ &=\int_t^{t+T}\frac{\diff}{\diff s}\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}^2\diff s = 2\int_t^{t+T}\left(\hat P_s^{(\gamma)}f,(\hat L+\gamma L_v)\hat P_s^{(\gamma)}f\right)_{L^2(\hat\mu)}\diff s\\ &=-2\gamma\int_t^{t+T}\left(\hat P_s^{(\gamma)}f,-L_v\hat P_s^{(\gamma)}f\right)_{L^2(\hat\mu)}\diff s \leq-2\gamma\norm{L_v\hat P_{t+\cdot}^{(\gamma)}f}_{L^2(\lambda\otimes\mu,H^{-1}(\kappa))}^2\,, \end{align*} where we used \eqref{eq:H-1comparison1} in the last step. Since $\hat P_{t+\cdot}^{(\gamma)} f\in\dom(A)$ and $A\hat P_{t+\cdot}^{(\gamma)} f=\gamma L_v\hat P_{t+\cdot}^{(\gamma)} f$, the space-time Poincaré inequality \eqref{eq:STPI2} yields \begin{equation*} \norm{\hat P_{t+\cdot}^{(\gamma)} f}_{L^2(\muqk)}^2\leq 2\gamma^2C_0(T)\norm{L_v\hat P_{t+\cdot}^{(\gamma)} f}_{L^2(\lambda\otimes\mu,H^{-1}(\kappa))}^2 + C_1(T)\norm{(\Pi -I)\hat P_{t+\cdot}^{(\gamma)}f}_{L^2(\muqk)}^2\,. \end{equation*} The Gaussian Poincaré inequality and \eqref{eq:H-1comparison2} yield \begin{equation*} \norm{(\Pi -I)\hat P_{t+\cdot}^{(\gamma)}f}_{L^2(\muqk)}^2\leq 2\norm{L_v\hat P_{t+\cdot}^{(\gamma)} f}_{L^2(\muq,H^{-1}(\kappa))}^2\,, \end{equation*} so that \begin{equation*} \norm{\hat P_{t+\cdot}^{(\gamma)} f}_{L^2(\muqk)}^2\leq (2\gamma^2C_0(T)+2C_1(T))\norm{L_v\hat P_{t+\cdot}^{(\gamma)} f}_{L^2(\lambda\otimes\mu,H^{-1}(\kappa))}^2\,. 
\end{equation*} Therefore, Grönwall's inequality yields \begin{align*} \frac{1}{T}\int_t^{t+T}\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}^2\diff s&\leq e^{-\nu t}\frac{1}{T}\int_0^T\norm{\hat P_s^{(\gamma)}f}_{L^2(\hat\mu)}^2\diff s\leq e^{-\nu t}\norm{f}_{L^2(\hat\mu)}^2\,, \end{align*} and we obtain the exponential contractivity in $T$-average with rate $\frac{1}{2}\nu$ by the Cauchy-Schwarz inequality. \end{enumerate} \end{proof} \subsection{Optimality of randomised Hamiltonian Monte Carlo and Langevin dynamics on Riemannian manifolds with boundary} The exponential decay of $T$-averages in \Cref{thm:decay} immediately yields the bounds \begin{equation*} t_\rel(\hat P^{(\gamma)})\leq\frac{1}{\nu}+T\qquad\textup{or}\qquad t_\rel(\hat P^{(\gamma)})\leq\frac{2}{\nu}+T \end{equation*} for the relaxation times of Riemannian RHMC with reflection at the boundary and refresh rate $\gamma$, as well as Riemannian Langevin dynamics with reflection at the boundary and friction $\gamma$, respectively, see \cite[Lemma 8]{EberleLoerler2024Lifts}. For a fixed $T>0$, the rate $\nu$ is maximised by choosing $\gamma = \sqrt{{C_1(T)}/{C_0(T)}}$, and this choice yields the upper bounds \begin{equation*} t_\rel(\hat P^{(\gamma)})\leq2\sqrt{C_0(T)C_1(T)}+T\qquad\textup{and}\qquad t_\rel(\hat P^{(\gamma)})\leq4\sqrt{C_0(T)C_1(T)}+T\,, \end{equation*} respectively. To obtain explicit bounds, we set $T=\frac{\pi}{\sqrt{m}}$, which yields \begin{align}\label{eq:C0C1explicit} C_0(T)= (4\pi^2+86)\frac{1}{m}\qquad\textup{and}\qquad C_1(T) = 1163+\frac{3964}{\pi^2}+172\frac{\rho}{m}\,. \end{align} \begin{coro}[$C$-optimality of RHMC and Langevin dynamics]\label{cor:Coptimal} Let $c\geq 0$. 
On the class of potentials satisfying \Cref{ass:U} with $\rho\leq cm$ and \Cref{ass:discretespec}, Riemannian randomised Hamiltonian Monte Carlo with reflection at the boundary with refresh rate \begin{equation}\label{eq:gammaopt} \gamma = \sqrt{\frac{(1163+\frac{3964}{\pi^2})m+172\rho}{4\pi^2+86}} \end{equation} is a $C$-optimal lift of the corresponding overdamped Langevin diffusion with reflection at the boundary with \begin{equation*} C = 2\left(887\sqrt{1+c/9}+\pi\right)\,. \end{equation*} Similarly, on the same class of potentials, Langevin dynamics with reflection at the boundary with friction $\gamma$ given by \eqref{eq:gammaopt} is a $C$-optimal lift of the same process with \begin{equation*} C = 2\left(1773\sqrt{1+c/9}+\pi\right)\,. \end{equation*} More generally, for $A\in[1,\infty)$, Riemannian RHMC is a $C(A)$-optimal lift with \begin{equation*} C(A)= \left(3381+344c\right)A+2\pi \end{equation*} for any $\gamma$ satisfying $1/A\le \frac{\gamma}{\sqrt{m}}\le A$. Similarly, for the same choices of $\gamma$, Riemannian Langevin dynamics is a $C(A)$-optimal lift with \begin{equation*} C(A) = \left(6761+344c\right)A+2\pi \,. \end{equation*} \end{coro} \begin{proof} The relaxation time of the corresponding overdamped Langevin diffusion with reflection at the boundary is $\frac{2}{m}$. Choosing $T=\frac{\pi}{\sqrt{m}}$ and plugging \eqref{eq:C0C1explicit} into the expression for $\nu$ gives \begin{align*} \frac{1}{\nu}&=2\sqrt{(4\pi^2 + 86)\left(1163+\frac{3964}{\pi^2}\right)}\frac{1}{\sqrt{m}}\sqrt{1+\frac{172}{1163+\frac{3964}{\pi^2}}\frac{\rho}{m}}\,, \end{align*} so that \begin{equation*} t_\rel(\hat P^{(\gamma)})\leq \frac{1}{\sqrt{m}}\left(887\sqrt{1+\frac{\rho}{9m}}+\pi\right)\quad\textup{and}\quad t_\rel(\hat P^{(\gamma)})\leq \frac{1}{\sqrt{m}}\left(1773\sqrt{1+\frac{\rho}{9m}}+\pi\right) \end{equation*} for RHMC and Langevin dynamics, respectively. 
The final two statements follow similarly from the expression \begin{equation*} \frac{1}{\nu} = \gamma C_0(T) + \frac{1}{\gamma}C_1(T) \leq \sqrt{m}AC_0(T) + \frac{A}{\sqrt{m}}C_1(T)\,.\qedhere \end{equation*} \end{proof} \begin{rema} The explicit values of the constants in the divergence lemma \Cref{thm:divergence} are surely not optimal, so that we expect \Cref{cor:Coptimal} to hold with better constants. Nonetheless, in the case of mildly negative Bakry-\'Emery curvature $\Ric+\nabla^2U$, i.e.\ if the lower bound $\rho$ in \Cref{ass:U} \eqref{ass:RicHess} is of the same order as the inverse Poincar\'e constant $m$, the result shows that Riemannian RHMC and Riemannian Langevin dynamics with reflection at the boundary are close to being optimal lifts. \end{rema} \appendix \section{Appendix}\label{sec:appendix} In this appendix, for the convenience of the reader, we recall some concepts from Riemannian geometry and fix notation. Particular emphasis is placed on manifolds with boundary and curvature properties of the boundary, measured in terms of the second fundamental form. We finally provide a self-contained proof for the generalisation of the Reilly formula stated in \Cref{lem:GenReilly}. We mainly follow \cite{Lee2013Smooth} and \cite{Lee2018Riemannian} as well as \cite{Gallot2004Riemannian}, and point to these sources for a more comprehensive overview of the topic. \subsection{Manifolds with boundary and manifolds with corners}\label{ssec:ManifoldBdry} A \emph{$d$-dimensional topological manifold with boundary} is a second-countable Hausdorff space such that every point has a neighbourhood that is homeomorphic to an open subset of $\R^d$ or a relatively open subset of $\R^d_+ = \{(x_1,\dots,x_d)\in\R^d\colon x_d\geq0\}$. A chart $(U,\phi)$ consists of an open subset $U\subset M$ together with a map $\phi\colon U\to\R^d$ that is a homeomorphism onto an open subset of $\R^d$ or $\R^d_+$.
It is called a \emph{boundary chart} if $\phi(U)$ is an open subset of $\R_+^d$ such that $\phi(U)\cap\partial\R_+^d\neq\emptyset$ and an \emph{interior chart} otherwise. The \emph{boundary} $\dM$ of $M$ is the set of all $p\in M$ that are in the domain of some boundary chart $(U,\phi)$ with $\phi(p)\in\partial\R_+^d$. In case $\dM=\emptyset$, $M$ is just a conventional $d$-dimensional topological manifold (without boundary). The interior $\mathring{M}=M\setminus\dM$ of $M$ is an open subset of $M$ and a $d$-dimensional topological manifold without boundary, whereas $\dM$ is a closed subset of $M$ and a $(d-1)$-dimensional topological manifold without boundary. A \emph{$d$-dimensional smooth manifold with boundary} is a $d$-dimensional topological manifold with boundary such that any two charts $(U,\phi)$ and $(V,\psi)$ are smoothly compatible, i.e.\ \begin{equation*} \phi\circ\psi^{-1}\colon \psi(U\cap V)\to\phi(U\cap V) \end{equation*} is a smooth diffeomorphism. Here smoothness of a function $f$ defined on a subset $A$ of $\R^n$ that may not be open means that it is the restriction to $A$ of some smooth function $\tilde f$ defined on an open set $\tilde A\subset\R^n$ containing $A$. A map $f$ between two smooth manifolds $M$ and $N$ is called smooth if for any charts $(U,\phi)$ in $M$ and $(V,\psi)$ in $N$ with $f(U)\subset V$, the composition \begin{equation*} \psi\circ f\circ\phi^{-1}\colon \phi(U)\to \psi(V) \end{equation*} is smooth. The set of smooth functions from $M$ to $N$ is denoted $C^\infty(M,N)$, and we set $C^\infty(M)=C^\infty(M,\R)$. The boundary $\dM$ together with the smooth structure given by the restrictions of the charts of $M$ to $\dM$ is a $(d-1)$-dimensional smooth manifold (without boundary). In the following, let $M$ be a $d$-dimensional smooth manifold with boundary.
As in the case of smooth manifolds without boundary, for any point $p\in M$, the tangent space $\T_pM$ is the vector space of derivations at $p$, i.e.\ linear maps $v\colon C^\infty(M)\to\R$ such that \begin{equation*} v(fg) = f(p)v(g) + g(p)v(f)\quad\textup{for all }f,g\in C^\infty(M). \end{equation*} Note that $\T_pM$ is a $d$-dimensional vector space, regardless of whether $p\in\dM$ or $p\in\mathring M$. As usual, the \emph{tangent bundle} $\T M$ is the disjoint union \begin{equation*} \T M = \coprod_{p\in M}\T_pM \end{equation*} of the tangent spaces and is again a smooth manifold. A \emph{vector field} on $M$ is a Borel-measurable section of $\T M$, and the space of vector fields on $M$ is denoted by $\Gamma(\T M)$. For $k\in\N_0\cup\{\infty\}$, a \emph{$C^k$-vector field} on $M$ is a $C^k$-section of $\T M$. The space of $C^k$-vector fields is denoted by $\Gamma_{C^k}(\T M)$, and we denote the space of smooth vector fields by $\X(M) = \Gamma_{C^\infty}(\T M)$. As in the case without boundary, a \emph{Riemannian metric} $g$ on $M$ is a smooth covariant $2$-tensor field on $M$ that is symmetric and positive definite. This is nothing else than an inner product $g_p$ on each tangent space $\T_pM$ that varies smoothly in $p$. Such a pair $(M,g)$ is called a \emph{$d$-dimensional Riemannian manifold with boundary}. The smooth submanifold $\dM$ of $M$ can be endowed with a Riemannian metric induced by $g$ in the following way. Letting $\iota\colon\dM\to M$ denote the inclusion map and $\diff\iota_p\colon\T_p\dM\to\T_{\iota(p)}M$ its differential at $p\in\dM$, the smooth submanifold $\dM$ of $M$ can be endowed with the \emph{induced Riemannian metric} \begin{equation*} \tilde g_p(v,w) = g_{\iota(p)}(\diff\iota_p(v),\diff\iota_p(w))\,. \end{equation*} Unwinding the definition, $\diff\iota_p(v)$ is simply the derivation that acts as $\diff\iota_p(v)(f) = v(f|_\dM)$ for all $f\in C^\infty(M)$. 
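As a concrete instance of this construction, for the closed unit ball $M\subset\R^3$ the boundary $\dM=S^2$ carries the round metric induced by the inclusion $\iota$. The following Python sketch (spherical coordinates and a finite-difference Jacobian are illustrative choices, not part of the text above) checks that pulling back the Euclidean metric along $\iota$ yields $\tilde g=\operatorname{diag}(1,\sin^2\theta)$:

```python
import numpy as np

def iota(theta, phi):
    # Inclusion of the unit sphere S^2 (boundary of the closed unit ball) into R^3.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def induced_metric(theta, phi, h=1e-5):
    # Pull back the Euclidean metric along iota via a central finite-difference
    # Jacobian: g~_ij = < d iota(e_i), d iota(e_j) > = (J^T J)_ij.
    J = np.column_stack([
        (iota(theta + h, phi) - iota(theta - h, phi)) / (2 * h),
        (iota(theta, phi + h) - iota(theta, phi - h)) / (2 * h),
    ])
    return J.T @ J

theta, phi = 0.7, 1.3
g = induced_metric(theta, phi)
# round metric on S^2 in these coordinates: diag(1, sin^2 theta)
assert np.allclose(g, np.diag([1.0, np.sin(theta) ** 2]), atol=1e-8)
print(g)
```

The computation $\tilde g = J^\top J$ is exactly $\tilde g_p(v,w) = g_{\iota(p)}(\diff\iota_p(v),\diff\iota_p(w))$ with the Jacobian $J$ playing the role of $\diff\iota_p$ in coordinates.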
For the sake of simplicity, we usually identify $\T_p\dM$ with its image $\diff\iota_p(\T_p\dM)$ in $\T_{\iota(p)}M=\T_pM$ and again denote $\tilde g$ by $g$. Thus the induced metric is just the restriction of $g$ to vectors tangent to $\dM$. Once a Riemannian metric $g$ is fixed, for any $p\in M$ and $v,w\in\T_pM$ we also write $\langle v,w\rangle = g(v,w)$ and $|v| = \langle v,v\rangle^{1/2}$ for the inner product and norm on $\T_pM$ given by $g$. In the following, let $(M,g)$ be a $d$-dimensional Riemannian manifold with boundary. The normal space $\Ncal_p\dM$ at $p\in\dM$ is the orthogonal complement of $\T_p\dM$ in $\T_pM$ with respect to the inner product $\langle\cdot,\cdot\rangle$, and the normal bundle is the disjoint union \begin{equation*} \Ncal\dM = \coprod_{p\in\dM}\Ncal_p\dM \end{equation*} of the normal spaces. The space of smooth sections of $\Ncal\dM$ is denoted by $\mathfrak{N}(\dM)$. There exists a unique smooth outward-pointing unit normal vector field $N\in\mathfrak{N}(\dM)$ along all of $\dM$, see \cite[Prop. 2.17]{Lee2018Riemannian}. Since the product of two smooth manifolds with boundary is not necessarily again a smooth manifold with boundary, it is necessary to introduce \emph{smooth manifolds with corners}, see \cite[Chapter 16]{Lee2013Smooth}. They are defined analogously to smooth manifolds with boundary, with the difference being that every point has a neighbourhood that is homeomorphic to a relatively open subset of $\overline{\R}_+^d = \{(x_1,\dots,x_d)\in\R^d\colon x_1\geq0,\dots,x_d\geq0\}$. In particular, any smooth manifold with boundary is a smooth manifold with corners. The corner points of $\overline{\R}_+^d$ are all the points at which more than one coordinate vanishes, and the corner points of a smooth manifold with corners are all the points $p$ that are in the domain of some corner chart mapping $p$ to a corner point of $\overline{\R}_+^d$.
The tangent bundle, vector fields and Riemannian metrics are defined analogously to the case with boundary. The product of a finite number of smooth manifolds with corners is again a smooth manifold with corners, and removing the corner points from a smooth manifold with corners yields a smooth manifold with boundary. \subsection{The Hessian and the second fundamental form}\label{ssec:HessianSFF} Let $(M,g)$ be a $d$-dimensional Riemannian manifold with boundary. The \emph{Riemannian gradient} of a smooth function $f\in C^\infty(M)$ is the smooth vector field $\nabla f$ defined by \begin{equation*} \langle \nabla f,v\rangle = \diff f(v)\quad\textup{for all }v\in \T M\,, \end{equation*} where $\diff f$ is the differential of $f$. The Levi-Civita connection \begin{align*} \nabla\colon\X(M)\times\X(M)\to\X(M),\quad(X,Y)\mapsto\nabla_XY \end{align*} defines the \emph{covariant derivative} of a vector field $Y$ in direction of the vector field $X$. Since $\nabla_XY|_p$ only depends on $X$ through the value of $X$ at $p$, the derivative $\nabla_vY$ of $Y$ at $p$ in direction $v$ is defined for any $v\in\T_pM$. Furthermore, $\nabla$ uniquely determines a connection in each tensor bundle $\T^{k,l}\T M$ that agrees with the usual Levi-Civita connection on $\T^{1,0}\T M = \T M$ and the Riemannian gradient on $\T^{0,0}\T M=M\times\R$ in the sense that $\nabla_Xf = \langle \nabla f,X\rangle$ for all $f\in C^\infty(M)$ and $X\in\X(M)$, see \cite[Prop. 4.15]{Lee2018Riemannian}. The \emph{Hessian} of a smooth function $f\in C^\infty(M)$ is the symmetric $2$-tensor $\nabla^2f$. Its action on vector fields $X,Y\in\X(M)$ is given by \begin{equation*} \nabla^2f(X,Y)=\nabla^2_{X,Y}f = \nabla_X(\nabla_Yf) - \nabla_{\nabla_XY}f\,. \end{equation*} For any $p\in\dM$ and $Z\in\T_pM$, denote by $Z^\top$ and $Z^\bot$ the projections of $Z$ onto $\T_p\dM$ and $\Ncal_p\dM$, respectively.
Extending vector fields $X$ and $Y$ on $\dM$ arbitrarily to vector fields on $M$, we can decompose \begin{equation*} \nabla_XY = (\nabla_XY)^\top + (\nabla_XY)^\bot\,. \end{equation*} The Gauß formula, see \cite[Theorem 8.2]{Lee2018Riemannian}, identifies $(\nabla_XY)^\top$ as the covariant derivative $\nablad_XY$ with respect to the Levi-Civita connection $\nablad$ on $\T\dM$ of the induced metric $g$ on $\dM$. This leads to the \emph{second fundamental form} $\sff\colon\X(\dM)\times\X(\dM)\to\mathfrak{N}(\dM)$ defined by \begin{equation}\label{eq:defh} \sff(X,Y) = (\nabla_XY)^\bot = \nabla_XY-\nablad_XY\,. \end{equation} The \emph{scalar second fundamental form} $h\colon\X(\dM)\times\X(\dM)\to C^\infty(\dM)$ is then defined by \begin{equation*} h(X,Y) = \langle\sff(X,Y),N\rangle = \langle \nabla_XY,N\rangle\,. \end{equation*} Both $\sff$ and $h$ are bilinear over $C^\infty(\dM)$ and symmetric, and the values of $h(X,Y)$ and $\sff(X,Y)$ at $p\in\dM$ only depend on the values of $X$ and $Y$ at $p$. The \emph{mean curvature} $H$ is \begin{equation*} H = \tr(h)\,, \end{equation*} where $\tr$ is the trace operator. \subsection{The Laplace-Beltrami operator}\label{ssec:LaplaceBeltrami} Let $(M,g)$ be a $d$-dimensional Riemannian manifold with corners. The \emph{Riemannian volume measure} $\nu_M$ is the Borel measure on $M$ given by \begin{equation*} \diff\nuM = \sqrt{\det{g}}\,\lvert\diff x^1\wedge\dots\wedge \diff x^d\rvert \end{equation*} in any local coordinates $(x^1,\dots,x^d)$. The \emph{divergence} of a vector field $X\in\Gamma_{C^1}(\T M)$ is $\divg(X) = \tr(\nabla X)$. In local coordinates $X=\sum_{i=1}^dX^i\frac{\partial}{\partial x^i}$ it takes the form \begin{equation*} \divg(X) = \frac{1}{\sqrt{\det{g}}}\sum_{i=1}^d\frac{\partial}{\partial x^i}\left(X^i\sqrt{\det{g}}\right)\,.
\end{equation*} It satisfies the integration-by-parts identity \begin{equation*} \int_M \langle\nabla f,X\rangle\diff\nuM = \int_\dM f\langle X,N\rangle\diff\nudM - \int_M(f\divg X)\diff\nuM\,, \end{equation*} where $\nudM$ is the Riemannian volume measure on $(\dM,g)$. The \emph{Laplace-Beltrami operator} is the linear operator $\Delta\colon C^\infty(M)\to C^\infty(M)$ defined by \begin{equation*} \Delta u = \divg(\nabla u) = \tr(\nabla^2u)\,. \end{equation*} In coordinates it takes the form \begin{equation*} \Delta u = \frac{1}{\sqrt{\det{g}}}\sum_{i,j=1}^d\frac{\partial}{\partial x^i}\left((g^{-1})_{ij}\sqrt{\det{g}}\frac{\partial u}{\partial x^j}\right)\,. \end{equation*} If $M\subset\R^d$ is equipped with the Euclidean metric, these definitions reduce to the usual divergence and Laplacian. The definition of the Laplace-Beltrami operator $\Delta$ immediately implies the integration-by-parts identity \begin{align*} \int_M \phi \Delta\psi\diff\nuM = \int_\dM \phi \langle\nabla\psi,N\rangle\diff\nudM - \int_M\langle\nabla\phi,\nabla\psi\rangle\diff\nuM \end{align*} for all smooth functions $\phi,\psi\in C^\infty(M)$. \subsection{A generalisation of the Reilly formula}\label{ssec:BochnerReilly} Let $(M,g)$ be a $d$-dimensional Riemannian manifold with boundary. In this section, we prove \Cref{lem:GenReilly}. \GenReilly* \begin{proof} Let $u\in C^\infty(M)$. The integration-by-parts identity \eqref{eq:intbypartsL} yields \begin{equation*} \frac{1}{2}\int_M L|\nabla u|^2\diff\mu = \frac{1}{2}\int_\dM\langle\nabla|\nabla u|^2,N\rangle\diff\mudM \end{equation*} and \begin{equation*} \int_M\langle\nabla u,\nabla L u\rangle\diff\mu = -\int_M (Lu)^2\diff\mu + \int_\dM Lu\langle\nabla u,N\rangle\diff\mudM\,. \end{equation*} Thus integrating the Bochner identity \eqref{eq:bochner2} shows \begin{equation*} \int_M\left(|\nabla^2u|^2-(Lu)^2+(\Ric+\nabla^2U)(\nabla u,\nabla u)\right)\diff\mu=\int_\dM\left\langle\frac{1}{2}\nabla|\nabla u|^2-L u\nabla u,N\right\rangle\diff\mudM\,.
\end{equation*} By splitting $\nabla u = \nabla^\dM u + (\nabla u)^\bot $, one sees that \begin{align*} \frac{1}{2}\langle\nabla|\nabla u|^2,N\rangle &= \langle\nabla_{\nabla u}\nabla u,N\rangle\\ &=\langle\nabla_{\nablad u}\nablad u,N\rangle + \langle\nabla_{\nablad u}(\nabla u)^\bot ,N\rangle + \langle\nabla_{(\nabla u)^\bot}\nabla u,N\rangle\,. \end{align*} The first summand is just the scalar second fundamental form $h(\nablad u,\nablad u)$. For the second summand, note that any vector field $X$ defined on the boundary $\dM$ satisfies \begin{equation*} \nabla_{X^\top}X^\bot = \langle X,N\rangle\nabla_{X^\top}N + \langle X^\top,\nabla\langle X,N\rangle\rangle N\,, \end{equation*} so that \begin{equation*} \langle\nabla_{X^\top}X^\bot,N\rangle = \langle X^\top,\nabla\langle X,N\rangle\rangle = \langle X^\top,\nablad\langle X,N\rangle\rangle\,. \end{equation*} This yields \begin{equation*} \langle\nabla_{\nablad u}(\nabla u)^\bot,N\rangle = \langle\nablad u,\nablad\langle\nabla u,N\rangle\rangle \end{equation*} and thus \begin{equation*} \frac{1}{2}\langle\nabla|\nabla u|^2,N\rangle = h(\nablad u,\nablad u) + \langle\nablad u,\nablad\langle\nabla u,N\rangle\rangle + \langle\nabla_{(\nabla u)^\bot}\nabla u,N\rangle\,. \end{equation*} Furthermore, in an orthonormal boundary frame $(E_1,\dots,E_{d-1},N)$, one has \begin{align*} \Delta u &= \tr(\nabla^2 u) = \langle\nabla_N\nabla u,N\rangle + \sum_{i=1}^{d-1}\langle\nabla_{E_i}\nabla u,E_i\rangle\\ &=\langle\nabla_N\nabla u,N\rangle + \sum_{i=1}^{d-1}\left(\langle\nabla_{E_i}\nablad u,E_i\rangle+\langle\nabla_{E_i}(\nabla u)^\bot,E_i\rangle\right)\\ &=\langle\nabla_N\nabla u,N\rangle + \Deltad u + \sum_{i=1}^{d-1}\langle\nabla_{E_i}(\nabla u)^\bot,E_i\rangle \end{align*} on $\dM$. 
For any tangential vector field $X$ defined on the boundary $\dM$, one has \begin{align*} 0 &= X\langle N,X\rangle = \langle \nabla_XN,X\rangle + \langle N,\nabla_XX\rangle\\ &=\langle \nabla_XN,X\rangle + \langle N,\sff(X,X)+\nablad_XX\rangle\\ &=\langle \nabla_XN,X\rangle + h(X,X)\,. \end{align*} Hence $\langle\nabla_{E_i}(\nabla u)^\bot,E_i\rangle = -\langle\nabla u,N\rangle h(E_i,E_i)$ and thus \begin{align*} Lu &= \Delta u-\langle\nabla U,\nabla u\rangle\\ &= \langle\nabla_N\nabla u,N\rangle + \Deltad u - \langle\nabla u,N\rangle H-\langle\nablad U,\nablad u\rangle-\langle(\nabla U)^\bot,(\nabla u)^\bot\rangle\\ &=\langle\nabla_N\nabla u,N\rangle + \Ld u -(H+\langle\nabla U,N\rangle)\langle\nabla u,N\rangle \end{align*} on the boundary $\dM$. Putting things together and integrating by parts on $\dM$ finally yields \begin{align*} \MoveEqLeft\int_\dM\left\langle\frac{1}{2}\nabla|\nabla u|^2-L u\nabla u,N\right\rangle\diff\mudM\\ &=\int_\dM\left( h(\nablad u,\nablad u) + \langle\nablad u,\nablad\langle\nabla u,N\rangle\rangle + \langle\nabla_{(\nabla u)^\bot}\nabla u,N\rangle \right)\diff\mudM\\ &\quad-\int_\dM \left(\langle\nabla_N\nabla u,N\rangle + \Ld u - (H+\partial_NU)\langle\nabla u,N\rangle\right)\langle\nabla u,N\rangle\diff\mudM\\ &=\int_\dM \left(h(\nablad u,\nablad u)-2(\partial_N u)\Ld u + (H+\partial_NU)(\partial_Nu)^2\right)\diff\mudM\,, \end{align*} completing the proof. \end{proof} \section*{Statements and Declarations} \noindent\textbf{Funding.}\hspace{2ex} Gef\"ordert durch die Deutsche Forschungsgemeinschaft (DFG) im Rahmen der Exzellenzstrategie des Bundes und der L\"ander -- GZ2047/1, Projekt-ID 390685813.\\ The authors were funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- GZ 2047/1, Project-ID 390685813.\smallskip\\ \textbf{Acknowledgements.}\hspace{2ex} The authors would like to thank Lihan Wang, Arnaud Guillin and L\'eo Hahn for many helpful discussions. 
\printbibliography \end{document}
\documentclass[sn-mathphys,Numbered]{sn-jnl} \usepackage{graphicx}\usepackage{multirow}\usepackage{amsmath,amssymb,amsfonts}\usepackage{amsthm}\usepackage{mathrsfs}\usepackage[title]{appendix}\usepackage{xcolor}\usepackage{textcomp}\usepackage{manyfoot}\usepackage{booktabs}\usepackage{algorithm}\usepackage{algorithmicx}\usepackage{algpseudocode}\usepackage{listings}\usepackage{placeins} \numberwithin{equation}{section} \theoremstyle{thmstyleone}\newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition} \theoremstyle{thmstyletwo}\newtheorem{example}{Example}\newtheorem{remark}{Remark}\newtheorem{lemma}{Lemma}\newtheorem{corollary}{Corollary} \theoremstyle{thmstylethree}\newtheorem{definition}{Definition} \raggedbottom \begin{document} \title[Maximum principle for discrete-time systems driven by fractional noises]{Maximum principle for discrete-time control systems driven by fractional noises and related backward stochastic difference equations} \author[1]{\fnm{Yuecai} \sur{Han}}\email{[email protected]} \author*[1]{\fnm{Yuhang} \sur{Li}}\email{[email protected]} \equalcont{These authors contributed equally to this work.} \affil[1]{\orgdiv{School of Mathematics}, \orgname{Jilin University}, \orgaddress{ \city{Changchun}, \postcode{130012}, \state{Jilin Province}, \country{China}}} \abstract{In this paper, the optimal control of discrete-time systems driven by fractional noises is studied. A stochastic maximum principle is obtained by introducing a backward stochastic difference equation that contains both the fractional noises and the constructed white noises. The solution of this type of backward stochastic difference equation is also investigated.
As an application, the linear quadratic case is considered to illustrate the main results.} \keywords{Maximum principle, discrete-time system, fractional noise, backward stochastic difference equations} \maketitle \section{Introduction} Since the last century, the stochastic maximum principle (SMP) and backward stochastic differential equations (BSDEs) have been studied extensively as tools for optimal control problems of stochastic systems. For the SMP, some of the original works are \cite{kushner1972necessary,bismut1978introductory,bensoussan2006lectures}. In 1990, Peng \cite{peng1990general} obtained the stochastic maximum principle for a general control system, where the control domain may be non-convex and the diffusion term contains the control process. Since then, the SMP has been investigated for many different control problems, such as near-optimal control \cite{zhou1998stochastic}, doubly stochastic control systems \cite{han2010maximum,zhang2011maximum}, mean-field optimal control \cite{li2012stochastic,yong2013linear} and delayed control problems \cite{chen2010maximum,yu2012stochastic}. BSDEs, which serve as the adjoint equations in the SMP, originated with Bismut \cite{bismut1973conjugate}. In 1990, Pardoux and Peng \cite{pardoux1990adapted} proved the existence and uniqueness of the solution of nonlinear BSDEs; some related early works are \cite{peng1993backward,el1997backward}. For the SMP for discrete-time systems, the original work is by Lin and Zhang \cite{lin2014maximum}, where a type of backward stochastic difference equation (BS$\Delta$E) is introduced as the adjoint equation. Based on this work, much progress has been made. Discrete-time stochastic games are studied by Wu and Zhang \cite{wu2022maximum}. Dong et al. obtained the SMP for discrete-time systems with mean-field interactions \cite{dong2022maximum} and with delay \cite{dong2023maximum}. Ji and Zhang investigated infinite-horizon recursive optimal control and the infinite-horizon BS$\Delta$E.
To the best of our knowledge, however, few works consider discrete-time control problems with fractional noises or other ``colored noises''. In this paper, motivated by the stochastic optimal control of continuous systems driven by fractional Brownian motion (FBM), we study the discrete-time optimal control of systems driven by fractional noises. Let $H\in(0,\,1)$ be a fixed constant, called the Hurst parameter. The $m$-dimensional FBM $B_t^H=\left(B_1^H(t),\cdots,B_m^H(t)\right), t\in[0,T]$, is a continuous, mean-zero Gaussian process with covariance \begin{align*} \mathbb{E}[B_i^H(t)B_j^H(s)]=\frac{1}{2}\delta_{ij}(t^{2H}+s^{2H}-| t-s|^{2H}), \end{align*} $i,\,j=1,\dots,m$. Moreover, the FBM can be generated from a standard Brownian motion through \begin{align*} B_j^H(t)=\int_0^t Z_H(t,\,s) dB_j(s), \quad 1\le j\le m, \end{align*} for some explicit functions $Z_H(\cdot,\cdot)$. For the control problem of continuous systems driven by FBM, Han et al. \cite{han2013maximum} first obtained the maximum principle for general systems; some other related works are \cite{hu2005stochastic,hu2009backward,sun2021stochastic}. For our problem, the state equation is \begin{align*} \left\{\begin{array}{ll} X_{n+1}=X_n+b(n,X_n,u_n)+\sigma(n,X_n,u_n)\xi_n^H,\, 0\le n\le N-1, \\X_0=x, \end{array}\right. \end{align*} and we aim to minimize the cost function \begin{align*} J(u)=\mathbb{E}\left[\sum_{n=0}^{N-1}l(n,X_n,u_n)+\Phi(X_N)\right]. \end{align*} Here $\xi^H$ is called a fractional noise, described by the increments of an FBM. To obtain the stochastic maximum principle, the following BS$\Delta$E is introduced as the adjoint equation: \begin{align} \left\{\begin{array}{ll} p_n+q_n\eta_n=p_{n+1}+b_x^*(n+1)p_{n+1}+b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}\\ \qquad\qquad\qquad+l_x^*(n+1)+\sigma_x^*(n+1)p_{n+1}\xi_{n+1}^H, \\p_N=\Phi_x(X^*_N),\\ q_N=0, \end{array}\right. \end{align} where $\eta$ is a Gaussian white noise constructed from $\xi^H$.
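Numerically, the passage from the correlated noises $\xi^H$ to the white noise $\eta$ amounts to whitening a Gaussian vector by a Cholesky factorization of its covariance matrix; this is made precise in Section 2. The following minimal sketch illustrates the idea in the one-dimensional case with unit time steps; the helper names `rho` and `whitening` are ours, not from the paper.

```python
import numpy as np

def rho(n, m, H):
    # Covariance E[xi_n^H xi_m^H] of unit-step FBM increments, derived from
    # E[B^H(t)B^H(s)] = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2 (one-dimensional case).
    k = abs(n - m)
    return 0.5 * ((k + 1) ** (2 * H) + abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

def whitening(N, H):
    # Lower-triangular B with B @ B.T == Sigma (Cholesky factor) and A = B^{-1},
    # so that eta = A @ xi is a vector of i.i.d. standard Gaussians.
    Sigma = np.array([[rho(i, j, H) for j in range(N)] for i in range(N)])
    B = np.linalg.cholesky(Sigma)
    return Sigma, B, np.linalg.inv(B)

Sigma, B, A = whitening(8, 0.7)
assert np.allclose(B @ B.T, Sigma)              # B reproduces the covariance
assert np.allclose(A @ Sigma @ A.T, np.eye(8))  # eta = A xi has identity covariance
_, B2, _ = whitening(8, 0.5)
assert np.allclose(B2, np.eye(8))               # H = 1/2: increments already white
```

For $H=1/2$ the increments are already independent, so both factors reduce to the identity matrix, in line with the white-noise case discussed in Section 3.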
\begin{remark} Our techniques can also be used in control problems with some other noises $\omega$, such as AR(p) or MA(q) models. Indeed, we only need the following two conditions to be equivalent for all deterministic functions $a_1,a_2$ and all $m,n\in \mathbb{Z}^+$: (i) $\mathbb{E}\left[\left(\sum_{k=0}^na_1(n,k)\omega_k\right)\cdot\left(\sum_{l=0}^ma_2(m,l)\omega_l\right)\right]=0$; (ii) $\sum_{k=0}^na_1(n,k)\omega_k$ and $\sum_{l=0}^ma_2(m,l)\omega_l$ are independent. \end{remark} A difficulty is that it is hard to estimate the $L^2_\mathcal{F}$-norms of $X$, $p$, $q$ and of the variation equation appearing in Section 3, because of the dependence between $\xi^H$ and its coefficients. We deal with this through more involved calculations; we then prove the uniqueness of the solutions of the state S$\Delta$E and the adjoint BS$\Delta$E, and obtain the maximum principle. The rest of this paper is organized as follows. In Section 2, we introduce the BS$\Delta$E driven by both fractional noise and white noise, and prove the existence and uniqueness of the solution of this type of BS$\Delta$E. In Section 3, we obtain the stochastic maximum principle by proving the existence and uniqueness of the solution of the state S$\Delta$E and showing the convergence of the variation equation. In Section 4, the linear quadratic case is investigated to illustrate the main results. \section{Backward stochastic difference equations} Let $(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n\in\mathbb {Z}^+},P)$ be a filtered probability space and $\mathcal{F}_0\subset \mathcal{F}$ a sub-$\sigma$-algebra. Let $\{\xi_n^H\}_{n\in\mathbb {Z}^+}$ be a sequence of fractional noises described by the increments of an $m$-dimensional fractional Brownian motion, namely $\xi_n^H=B^H(n+1)-B^H(n)$. Define the filtration $\mathbb{F}=(\mathcal{F}_n)_{0\le n\le N}$ by $\mathcal{F}_n=\mathcal{F}_0\lor\sigma(\xi^H_0,\xi_1^H,...,\xi_{n-1}^H)$.
Denote by $L^\beta(\mathcal{F}_n;\mathbb{R}^n)$, or $L^\beta(\mathcal{F}_n)$ for simplicity, the set of all $\mathcal{F}_n$-measurable random variables $X$ taking values in $\mathbb{R}^n$ such that $\mathbb{E}\|X\|^\beta<+\infty$. Denote by $L^\beta_\mathcal{F}(0,T;\mathbb{R}^n)$, or $L^\beta_\mathcal{F}(0,T)$ for simplicity, the set of all $\mathcal{F}_n$-adapted processes $X=(X_n)_{n\in\mathbb {Z}^+}$ such that \begin{align*} \|X_\cdot\|_{\beta}=\left(\sum_{n=0}^N\mathbb{E}\|X_n\|^\beta\right)^{\frac{1}{\beta}}<+\infty. \end{align*} Then we consider the following BS$\Delta$E, \begin{align}\label{BSDE} \left\{\begin{array}{ll} Y_n+Z_n\eta_n=Y_{n+1}+f(n+1,Y_{n+1},Z_{n+1})+g(n+1,Y_{n+1},Z_{n+1})\xi_{n+1}^H, \\Y_N=y, \end{array}\right. \end{align} where $y\in L^{2a}(\mathcal{F}_N)$ for some $a>1$, and $\eta_n\in\mathcal{F}_{n+1}$, which will be defined in the following lemma, is a sequence of independent Gaussian random variables generated by $\{\xi_n^H\}$. We also assume the following conditions on $f, g$: \begin{enumerate} \item[(H2.1)] $f$ and $g$ are adapted processes: $f(\cdot,y,z), g(\cdot,y,z)\in L^{2a}_{\mathcal{F}}(1,N)$ for all $y\in \mathbb{R}^n, z\in\mathbb{R}^{n\times m}$. \item[(H2.2)] There exists a constant $L>0$ such that \begin{align*} |f(n,y_1,z_1)-f(n,y_2,z_2)|&+|g(n,y_1,z_1)-g(n,y_2,z_2)|\le L\left(|y_1-y_2|+|z_1-z_2|\right),\\ &\forall n\in[1,N],\quad y_1,y_2\in\mathbb{R}^n,\quad z_1,z_2\in\mathbb{R}^{n\times m}. \end{align*} \item[(H2.3)] There exist $\mathcal{F}_N$-measurable functions $ f_1, g_1$ such that $f(N,y,z)=f_1(y)$ and $ g(N,y,z)=g_1(y)$ for all $y,z$. \end{enumerate} Then we present the construction and properties of $\{\eta_n\}$. \begin{lemma}\label{lem1} Let $\rho(n,m)=\mathbb{E}\left[\xi_n^H\xi_m^H\right]$, and let $a(\cdot,\cdot)$ be given by $a(i-1,j-1)=(B^{-1})_{ij}$, where $B=(b_{ij})$ is given by \begin{align}\label{bij} b_{ij}=\mathbf{1}_{\{j\le i\}}\frac{\rho(i-1,j-1)-\sum_{k=0}^{j-2}b_{i,k+1}b_{j,k+1}}{b_{jj}}.
\end{align} For $i=j$, equality \eqref{bij} reads $b_{jj}^2=\rho(j-1,j-1)-\sum_{k=0}^{j-2}b_{j,k+1}^2$, so $B$ is the lower-triangular Cholesky factor of the covariance matrix of the noises. Then $\eta_n=\sum_{k=0}^na(n,k)\xi_k^H$ is a sequence of independent Gaussian random variables such that $\eta_n$ is $\mathcal{F}_{n+1}$-measurable but independent of $\mathcal{F}_n$. \end{lemma} \begin{proof} From the properties of fractional Brownian motion, we know \begin{align*} \left(\xi_0^H,\xi_1^H,...,\xi_{N-1}^H\right)^T\sim \mathcal{N}\left(0_{N\times1},\Sigma_{N\times N}\right), \end{align*} where $\Sigma_{ij}=\rho(i-1,j-1)$. Hence there exists an invertible matrix $A$ such that \begin{align}\label{xieta} \left(\eta_0,\eta_1,...,\eta_{N-1}\right)^T=A\left(\xi_0^H,\xi_1^H,...,\xi_{N-1}^H\right)^T\sim \mathcal{N}\left(0_{N\times1},I_{N\times N}\right). \end{align} Since $A$ is not unique, we choose the lower-triangular one, so that each $\eta_n$ is $\mathcal{F}_{n+1}$-measurable. As it is hard to obtain $A$ directly, we first determine $B=A^{-1}$ by checking that its entries $b_{ij}$ satisfy equality (\ref{bij}). Let $b(i-1,j-1)=b_{ij}$ and rewrite (\ref{xieta}) as \begin{align*} B\left(\eta_0,\eta_1,...,\eta_{N-1}\right)^T=\left(\xi_0^H,\xi_1^H,...,\xi_{N-1}^H\right)^T, \end{align*} or \begin{align*} \xi_n^H=\sum_{j=1}^{N}b_{n+1,j}\eta_{j-1}=\sum_{k=0}^{N-1}b(n,k)\eta_k. \end{align*} Since $\eta_k$ is $\mathcal{F}_{k+1}$-measurable and independent of $\mathcal{F}_k$, while $\xi_n^H$ is $\mathcal{F}_{n+1}$-measurable, we have $b(n,k)=0 $ for $ k>n$. Then, imposing $\mathbb{E}\left[\xi_n^H\xi_m^H\right]=\sum_{k=0}^mb(n,k)b(m,k)=\rho(n,m)$ for $m\le n$, we obtain the induction formula for $b(\cdot,\cdot)$: \begin{align*} b(n,m)=\frac{\rho(n,m)-\sum_{k=0}^{m-1}b(n,k)b(m,k)}{b(m,m)}, \end{align*} which is equivalent to (\ref{bij}). \end{proof} \begin{remark} It is clear that $\{\xi_n^H\}$ and $\{\eta_n\}$ generate the same filtration, namely, $\sigma(\xi_0^H,\xi_1^H,...,\xi_{n}^H)=\sigma(\eta_0,\eta_1,...,\eta_{n})$ for $0\le n\le N-1$. \end{remark} Then we show the solvability of BS$\Delta$E (\ref{BSDE}). \begin{theorem} Assume that assumptions (H2.1)-(H2.3) hold. Then BS$\Delta$E (\ref{BSDE}) has a unique solution.
\end{theorem} \begin{proof} We first define $Y_{N-1}$ by constructing \begin{align*} M_n=\mathbb{E}\left[y+f_1(y)+g_1(y)\xi_{N}^H|\mathcal{F}_n\right]. \end{align*} By the Lipschitz condition, H\"older's inequality and the fact that $y\in L^{2a}(\mathcal{F}_N)$, we have \begin{align}\label{2.4} \mathbb{E}|M_n|^{2a\delta}&\le \mathbb{E}\left|y+f_1(y)+g_1(y)\xi_{N}^H\right|^{2a\delta}\notag\\ &\le C\left[\mathbb{E}|y|^{2a\delta}+\mathbb{E}|f(N,0,0)|^{2a\delta}\right]\notag\\ &\quad+C\left[\left(\mathbb{E}|y|^{2a}\right)^\delta+\left(\mathbb{E}|g(N,0,0)|^{2a}\right)^\delta\right]\left(\mathbb{E}|\xi_{N}^H|^{2a\delta\frac{1}{1-\delta}}\right)^{1-\delta}\notag\\ &<+\infty, \end{align} for $\delta\in(\frac{1}{a},1)$. It follows directly that $\mathbb{E}|M_n|^2\le \left(\mathbb{E}|M_n|^{2a\delta}\right)^{\frac{1}{a\delta}}<+\infty$, so that $M_n$ is a square-integrable martingale. Similar to formula (2.5) of \cite{}, there exists a unique adapted process $Z$ such that \begin{align*} M_n=M_0+\sum_{k=0}^{n-1}Z_k\eta_k,\quad 0\le n\le N. \end{align*} Rewrite the above equality as \begin{align*} M_{N-1}+Z_{N-1}\eta_{N-1}=y+f_1(y)+g_1(y)\xi_{N}^H, \end{align*} which shows that $Y_{N-1}=M_{N-1}$. Multiplying both sides by $\eta_{N-1}$ and taking $\mathbb{E}[\,\cdot\,|\mathcal{F}_{N-1}]$, we obtain $Z_{N-1}$ by \begin{align*} Z_{N-1}=\mathbb{E}\left[\eta_{N-1}[y+f_1(y)+g_1(y)\xi_{N}^H]|\mathcal{F}_{N-1}\right]. \end{align*} Similar to formula (\ref{2.4}), we have $\mathbb{E}|Z_{N-1}|^{2a\delta}<+\infty$. Up to now, we have obtained the unique pair $(Y_{N-1},Z_{N-1})\in L^{2a\delta}(\mathcal{F}_{N-1})\times L^{2a\delta}(\mathcal{F}_{N-1})$. Then we proceed by induction: for $0\le n\le N-2$, suppose that $(Y_{n+1},Z_{n+1})\in L^{2b}(\mathcal{F}_{n+1})\times L^{2b}(\mathcal{F}_{n+1})$ for some $b>1$, and define \begin{align*} \tilde{M}_k=\mathbb{E}\left[Y_{n+1}+f(n+1,Y_{n+1},Z_{n+1})+g(n+1,Y_{n+1},Z_{n+1})\xi_{n+1}^H|\mathcal{F}_k\right] \end{align*} for $0\le k\le n+1$.
Then we show $\tilde{M}_k\in L^{2b\delta}(\mathcal{F}_k)$: \begin{align*} \mathbb{E}|\tilde{M}_k|^{2b\delta}&\le\mathbb{E}\left|Y_{n+1}+f(n+1,Y_{n+1},Z_{n+1})+g(n+1,Y_{n+1},Z_{n+1})\xi_{n+1}^H\right|^{2b\delta}\notag\\ &\le C\left[\mathbb{E}|Y_{n+1}|^{2b\delta}+\mathbb{E}|Z_{n+1}|^{2b\delta}+\mathbb{E}|f(n+1,0,0)|^{2b\delta}\right]\notag\\ &\quad+C\left[\mathbb{E}|Y_{n+1}|^{2b\delta}+\mathbb{E}|Z_{n+1}|^{2b\delta}+\mathbb{E}|g(n+1,0,0)|^{2b\delta}\right]\left(\mathbb{E}|\xi_{n+1}^H|^{2b\delta\frac{1}{1-\delta}}\right)^{1-\delta}\notag\\ &<+\infty. \end{align*} Let $Y_n=\tilde{M}_n$ and \begin{align*} Z_n=\mathbb{E}\left[\eta_n\left[Y_{n+1}+f(n+1,Y_{n+1},Z_{n+1})+g(n+1,Y_{n+1},Z_{n+1})\xi_{n+1}^H\right]|\mathcal{F}_n\right], \end{align*} which implies \begin{align*} Y_n+Z_n\eta_n=Y_{n+1}+f(n+1,Y_{n+1},Z_{n+1})+g(n+1,Y_{n+1},Z_{n+1})\xi_{n+1}^H. \end{align*} Choosing $\delta\in[a^{-1/N},1)$, we obtain the unique process pair $(Y,Z)\in L^2_{\mathcal{F}}(0,N)\times L^2_{\mathcal{F}}(0,N-1)$. \end{proof} \section{A maximum principle} In this section, we study the optimal control problem for discrete-time systems driven by fractional noises. Let $(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n\in\mathbb {Z}^+},P)$ be a filtered probability space, $\mathcal{F}_0\subset \mathcal{F}$ a sub-$\sigma$-algebra and $\mathbb{F}=(\mathcal{F}_n)_{0\le n\le N}$ the filtration defined by $\mathcal{F}_n=\mathcal{F}_0\lor\sigma(\xi^H_0,\xi_1^H,...,\xi_{n-1}^H)$. The state equation is \begin{align}\label{state} \left\{\begin{array}{ll} X_{n+1}=X_n+b(n,X_n,u_n)+\sigma(n,X_n,u_n)\xi_n^H,\, 0\le n\le N-1, \\X_0=x, \end{array}\right. \end{align} with the cost function \begin{align}\label{cost} J(u)=\mathbb{E}\left[\sum_{n=0}^{N-1}l(n,X_n,u_n)+\Phi(X_N)\right].
\end{align} Here $x\in L^{2a}(\mathcal{F}_0)$ is independent of $\{\xi_n^H\}$, and $b(n,x,u)$ and $\sigma(n,x,u)$ are measurable functions on $[0,N-1]\times \mathbf{R}^d\times \mathbf{R}^k$ with values in $\mathbf{R}^d$ and $\mathbf{R}^{d\times m}$, respectively. $l(n,x,u)$ and $\Phi(x)$ are measurable functions on $[0,N-1]\times \mathbf{R}^d\times \mathbf{R}^k$ and $\mathbf{R}^d$, respectively, with values in $\mathbf{R}$. Denote by $\mathbb{U}$ the set of progressively measurable processes $u=(u_n)_{0\le n\le N-1}$ taking values in a given closed convex set $\textbf{U}\subset \mathbb{R}^k$ and satisfying $\mathbb{E}\sum_{n=0}^{N-1}|u_n|^{2a} <+\infty$. The problem is to find an optimal control $u^*\in\mathbb{U}$ that minimizes the cost function, i.e., \begin{align*} J(u^*)=\inf_{u\in\mathbb{U}}J(u). \end{align*} To simplify the notation, and without loss of generality, we assume that $d=k=m=1$. We make the following assumptions: \begin{enumerate} \item[(H3.1)] $b$ and $\sigma$ are adapted processes: $b(\cdot,x,u), \sigma(\cdot,x,u)\in L^{2a}_{\mathcal{F}}(0,N-1)$ for all $x,u\in\mathbb{R}$. \item[(H3.2)] $b(n,x,u),\sigma(n,x,u)$ are differentiable w.r.t.\ $(x,u)$ and there exists a constant $L>0$ such that \begin{align*} | \phi(n,x_1,u_1)-\ &\phi(n,x_2,u_2)|\le L\left(|x_1-x_2|+|u_1-u_2|\right),\\ &\forall n\in[0,N-1],\quad\forall x_1,x_2,u_1,u_2\in\mathbb{R}, \end{align*} for $\phi=b,\sigma,b_x,\sigma_x,b_u,\sigma_u.$ \item[(H3.3)] $l(n,x,u),\Phi(x)$ are differentiable w.r.t.\ $(x,u)$ and \begin{align*} |l_x(n,x,u)|+|\Phi_x(x)|\le L(1+|x|+|u|),\quad \forall n\in[0,N-1], \, x,u\in\mathbb{R}. \end{align*} \item[(H3.4)] $b(N,x,u)=\sigma(N,x,u)=l(N,x,u)=0,$ for all $(x,u)$. \end{enumerate} Then we show the solvability of the state equation. \begin{lemma}\label{lem2} Assume that assumptions (H3.1)-(H3.4) hold and $u\in\mathbb{U}$. Then S$\Delta$E (\ref{state}) has a unique solution $X\in L_{\mathcal{F}}^2(0,N)$.
\end{lemma} \begin{proof} Since $x\in L^{2a}(\mathcal{F}_0)$, by assumptions (H3.1) and (H3.2), we have \begin{align*} \mathbb{E}|X_1|^{2a\delta}&\le C\mathbb{E}\left[|x|^{2a\delta}+|b(0,0,0)|^{2a\delta}+|\sigma(0,0,0)|^{2a\delta}|\xi_0^H|^{2a\delta}+|x|^{2a\delta}|\xi_0^H|^{2a\delta}\right]\\ &\le C\left[\left(\mathbb{E}|x|^{2a}\right)^\delta+\left(\mathbb{E}|b(0,0,0)|^{2a}\right)^\delta\right] \\ &\quad +C\left[\left(\mathbb{E}|x|^{2a}\right)^\delta+\left(\mathbb{E}|\sigma(0,0,0)|^{2a}\right)^\delta\right]\times\left(\mathbb{E}|\xi_0^H|^{\frac{2a\delta}{1-\delta}}\right)^{1-\delta}\\ &<+\infty \end{align*} for some $C>0$ and $\delta\in[a^{-1/N},1)$. Then, by induction, if $X_n\in L^{2a\delta^n}(\mathcal{F}_n)$, we show $X_{n+1}\in L^{2a\delta^{n+1}}(\mathcal{F}_{n+1})\subset L^{2}(\mathcal{F}_{n+1})$ for $n=0,1,...,N-1$: \begin{align*} \mathbb{E}|X_{n+1}|^{2a\delta^{n+1}}&\le C\mathbb{E}\left[|X_n|^{2a\delta^{n+1}}+|b(n,0,0)|^{2a\delta^{n+1}}+|\sigma(n,0,0)|^{2a\delta^{n+1}}|\xi_n^H|^{2a\delta^{n+1}}\right]\\ &\quad+C\mathbb{E}\left[|X_n|^{2a\delta^{n+1}}|\xi_n^H|^{2a\delta^{n+1}}\right]\\ &\le C\left[\mathbb{E}|X_n|^{2a\delta^{n+1}}+\mathbb{E}|b(n,0,0)|^{2a\delta^{n+1}}\right] \\ &\quad +C\left[\left(\mathbb{E}|X_n|^{2a\delta^n}\right)^\delta+\left(\mathbb{E}|\sigma(n,0,0)|^{2a\delta^n}\right)^\delta\right]\times\left(\mathbb{E}|\xi_n^H|^{\frac{2a\delta^{n+1}}{1-\delta}}\right)^{1-\delta}\\ &<+\infty. \end{align*} Thus the solution of (\ref{state}) lies in $ L^2_{\mathcal{F}}(0,N)$. Uniqueness: let $\tilde{X}$ and $X$ be two solutions of S$\Delta$E (\ref{state}), so that $\mathbb{E}|\tilde{X}_0-X_0|^{2a}=0$.
If $\mathbb{E}|\tilde{X}_n-X_n|^{2a\delta^n}=0$, we have \begin{align*} \mathbb{E}|\tilde{X}_{n+1}-X_{n+1}|^{2a\delta^{n+1}}\le& C\mathbb{E}\left[|\tilde{X}_n-X_n|^{2a\delta^{n+1}}+|\tilde{X}_n-X_n|^{2a\delta^{n+1}}|\xi_n^H|^{2a\delta^{n+1}}\right]\\ \le& C\mathbb{E}|\tilde{X}_n-X_n|^{2a\delta^{n+1}}+C\left(\mathbb{E}|\tilde{X}_n-X_n|^{2a\delta^n}\right)^\delta\times\left(\mathbb{E}|\xi_n^H|^{\frac{2a\delta^{n+1}}{1-\delta}}\right)^{1-\delta}\\ =& 0, \end{align*} for $n=0,1,...,N-1$. Then we conclude $\sum_{n=0}^N\mathbb{E}|\tilde{X}_n-X_n|^2=0$, which shows that the solution is unique. \end{proof} \begin{remark}\label{rem2} Indeed, we could obtain the unique solution of S$\Delta$E (\ref{state}) in $L^{\beta}_{\mathcal{F}}(0,N)$ for $\beta\in(2,2a)$ as long as we take $\delta\in\left(\sqrt[N]{\frac{\beta}{2a}},1\right).$ \end{remark} For any $u\in\mathbb{U}$ and $\varepsilon\in(0,1)$, let \begin{align*} u_n^\varepsilon=(1-\varepsilon)u^*_n+\varepsilon u_n:=u^*_n+\varepsilon v_n. \end{align*} Denote \begin{align*} \phi^*(n)=\phi(n,X^*_n,u^*_n),\\ \phi^\varepsilon(n)=\phi(n,X^\varepsilon_n,u^\varepsilon_n) \end{align*} for $\phi=b,\sigma,l,b_x,\sigma_x,l_x,b_u,\sigma_u,l_u$. Define the variation equation by \begin{align*} \left\{\begin{array}{ll} V_{n+1}=V_n+b_x^*(n)V_n+b_u^*(n)v_n+\left[\sigma_x^*(n)V_n+\sigma_u^*(n)v_n\right]\xi_n^H,\, 0\le n\le N-1, \\V_0=0. \end{array}\right. \end{align*} Then we have the following convergence result. \begin{lemma}\label{lem3} Let $X^\varepsilon$ and $X^*$ be the state processes corresponding to $u^\varepsilon$ and $u^*$, respectively. Then we have \begin{align}\label{3.3} \sum_{n=0}^N\mathbb{E}|X_n^\varepsilon-X^*_n|^2=O(\varepsilon^2), \end{align} and \begin{align}\label{3.4} \lim_{\varepsilon\to 0}\sum_{n=0}^N\mathbb{E}\left|\frac{X^\varepsilon_n-X^*_n}{\varepsilon }-V_n\right|^2=0.
\end{align} \end{lemma} \begin{proof} The proof of (\ref{3.3}) follows the argument of Lemma \ref{lem2}: if $\mathbb{E}|X_n^\varepsilon-X_n^*|^{2a\delta^n}=O\left(\varepsilon^{2a\delta^n}\right)$, then \begin{align*} \mathbb{E}|X^\varepsilon_{n+1}-X^*_{n+1}|^{2a\delta^{n+1}}\le& C\mathbb{E}\left[|X^\varepsilon_n-X^*_n|^{2a\delta^{n+1}}+|X^\varepsilon_n-X^*_n|^{2a\delta^{n+1}}|\xi_n^H|^{2a\delta^{n+1}}\right]\\ \le& C\left(\mathbb{E}|X^\varepsilon_n-X^*_n|^{2a\delta^n}\right)^\delta+C\left(\mathbb{E}|X^\varepsilon_n-X^*_n|^{2a\delta^n}\right)^\delta\times\left(\mathbb{E}|\xi_n^H|^{\frac{2a\delta^{n+1}}{1-\delta}}\right)^{1-\delta}\\ =& O\left(\varepsilon^{2a\delta^{n+1}}\right). \end{align*} So that \begin{align*} \sum_{n=0}^N\mathbb{E}|X^\varepsilon_n-X^*_n|^2\le \sum_{n=0}^N\left(\mathbb{E}|X^\varepsilon_n-X^*_n|^{2a\delta^n}\right)^{\frac{1}{a\delta^n}}=O(\varepsilon^2). \end{align*} Denote $\hat{X}^\varepsilon=\frac{X^\varepsilon-X^*}{\varepsilon}-V$; it follows that \begin{align*} \hat{X}^\varepsilon_{n+1}=&\hat{X}^\varepsilon_{n}+\frac{b^\varepsilon(n)-b^*(n)}{\varepsilon}-b_x^*(n)V_n-b_u^*(n)v_n\\ &+\left[\frac{\sigma^\varepsilon(n)-\sigma^*(n)}{\varepsilon}-\sigma_x^*(n)V_n-\sigma_u^*(n)v_n\right]\xi_n^H\\ =&[1+b_x^*(n)]\hat{X}^\varepsilon_n+\frac{\tilde{b}_x^\varepsilon(n)-b^*_x(n)}{\varepsilon}[X^\varepsilon_n-X^*_n]+[\tilde{b}_u^\varepsilon(n)-b^*_u(n)]v_n\\ &+\left[\sigma_x^*(n)\hat{X}^\varepsilon_n+\frac{\tilde{\sigma}_x^\varepsilon(n)-\sigma^*_x(n)}{\varepsilon}[X^\varepsilon_n-X^*_n]+[\tilde{\sigma}_u^\varepsilon(n)-\sigma^*_u(n)]v_n\right]\xi^H_n, \end{align*} where \begin{align*} \tilde{\phi}^\varepsilon(n)=\int_0^1 \phi(n,X^*_n+\theta(X^\varepsilon_n-X^*_n),u^*_n+\theta\varepsilon v_n)d\theta, \end{align*} for $\phi=b_x,b_u,\sigma_x,\sigma_u$.
Then \begin{align*} \mathbb{E}\left|\hat{X}^\varepsilon_{n+1}\right|^{2a\delta^{n+1}}\le& C\mathbb{E}\left|\hat{X}^\varepsilon_{n}\right|^{2a\delta^{n+1}}+C\mathbb{E}\|\tilde{b}_u^\varepsilon(n)-b^*_u(n)\|^{2a\delta^{n+1}}|v_n|^{2a\delta^{n+1}}\\ &+C\mathbb{E}\|\tilde{b}_x^\varepsilon(n)-b^*_x(n)\|^{2a\delta^{n+1}}\left|\varepsilon^{-1}(X^\varepsilon_n-X^*_n)\right|^{2a\delta^{n+1}}\\ &+C\mathbb{E}\Bigg[\left|\hat{X}^\varepsilon_{n}\right|^{2a\delta^{n+1}}+\|\tilde{\sigma}_u^\varepsilon(n)-\sigma^*_u(n)\|^{2a\delta^{n+1}}|v_n|^{2a\delta^{n+1}}\\ &\qquad+\|\tilde{\sigma}_x^\varepsilon(n)-\sigma^*_x(n)\|^{2a\delta^{n+1}}\left|\varepsilon^{-1}(X^\varepsilon_n-X^*_n)\right|^{2a\delta^{n+1}}\Bigg]\left|\xi_n^H\right|^{2a\delta^{n+1}}\\ \le&C\left[\left(\mathbb{E}\left|\hat{X}^\varepsilon_{n}\right|^{2a\delta^{n}}\right)^\delta+o(1)\right]\times\left[1+\left(\mathbb{E}|\xi_n^H|^{\frac{2a\delta^{n+1}}{1-\delta}}\right)^{1-\delta}\right]. \end{align*} Since $\mathbb{E}|\hat{X}^\varepsilon_0|^{2a}=0$, by induction, we have \begin{align*} \lim_{\varepsilon\to 0}\mathbb{E}\left|\hat{X}^\varepsilon_{n}\right|^{2a\delta^{n}}=0,\quad \forall n=0,1,...,N, \end{align*} which completes the proof of (\ref{3.4}). \end{proof} Then we give the stochastic maximum principle for the control system (\ref{state}), (\ref{cost}). \begin{theorem} Let assumptions (H3.1)-(H3.4) hold, and let $u^*, X^*$ be the optimal control and the corresponding state process. Let $(p,q)$ be the solution to the following BS$\Delta$E: \begin{align}\label{adjoint} \left\{\begin{array}{ll} p_n+q_n\eta_n=p_{n+1}+b_x^*(n+1)p_{n+1}+b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}\\ \qquad\qquad\qquad+l_x^*(n+1)+\sigma_x^*(n+1)p_{n+1}\xi_{n+1}^H, \\p_N=\Phi_x(X^*_N),\\ q_N=0. \end{array}\right. \end{align} Then the following inequality holds: \begin{align*} \left[b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)\right]\cdot(u_n-u^*_n)\ge 0, \end{align*} a.s., for all $n\in[0,N-1], u\in\mathbb{U}$.
Here $c(n,k)=\sum_{l=0}^{n-1}b(n,l)a(l,k)$ and $a(\cdot,\cdot), b(\cdot,\cdot)$ are given by Lemma \ref{lem1}. \end{theorem} \begin{proof} According to assumptions (H3.1)-(H3.2) and Remark \ref{rem2}, it is easy to check that BS$\Delta$E (\ref{adjoint}) satisfies (H2.1)-(H2.3), so that BS$\Delta$E (\ref{adjoint}) has a unique solution. Through Lemma \ref{lem3}, it is easy to show that the directional derivative of $J$ is \begin{align}\label{3.6} \frac{d}{d\varepsilon}J(u^*+\varepsilon v)\Big|_{\varepsilon=0}=\mathbb{E}\left[\sum_{n=0}^{N-1}\left[l_x^*(n)V_n+l_u^*(n)v_n\right]+\Phi_x(X^*_N)V_N\right]. \end{align} Consider \begin{align}\label{3.7} \Delta(p_nV_n)=&p_{n+1}V_{n+1}-p_nV_n\notag\\ =&-V_{n+1}\big[b_x^*(n+1)p_{n+1}+b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}\notag\\ &\qquad\qquad+l_x^*(n+1)+\sigma_x^*(n+1)p_{n+1}\xi_{n+1}^H\big]\notag\\ &+V_{n+1}q_n\eta_n+p_n\left[b_x^*(n)V_n+b_u^*(n)v_n\right]\notag\\ &+p_n\left[\sigma_x^*(n)V_n+\sigma_u^*(n)v_n\right]\xi_n^H\notag\\ =&-\left[b_x^*(n+1)p_{n+1}V_{n+1}-b_x^*(n)p_nV_n\right]\notag\\ &-\left[\sigma_x^*(n+1)p_{n+1}V_{n+1}\xi_{n+1}^H-\sigma_x^*(n)p_nV_n\xi_n^H\right]\notag\\ &-\left[b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}V_{n+1}-\sigma_x^*(n)q_nV_n\eta_n\xi_n^H\right]\notag\\ &+\left[V_n+b^*(n)V_n\right]q_n\eta_n+\sigma_u^*(n)q_nv_n\eta_n\xi_n^H\notag\\ &-l_x^*(n+1)V_{n+1}+b_u^*(n)p_nv_n+\sigma_u^*(n)p_nv_n\xi_{n}^H.
\end{align} Notice that \begin{align*} \mathbb{E}\left(\left[V_n+b^*(n)V_n\right]q_n\eta_n\right)=\mathbb{E}\left(\left[V_n+b^*(n)V_n\right]q_n\mathbb{E}\left[\eta_n|\mathcal{F}_n\right]\right)=0, \end{align*} and \begin{align}\label{3.8} \mathbb{E}\left[\sigma_x^*(n)q_nV_n\eta_n\xi_n^H\right]=&\mathbb{E}\left[\sigma_x^*(n)q_nV_n\mathbb{E}\left(\eta_n\xi_n^H|\mathcal{F}_n\right)\right]\notag\\ =&\mathbb{E}\left[\sigma_x^*(n)q_nV_n\mathbb{E}\left(\eta_n\sum_{k=0}^nb(n,k)\eta_k|\mathcal{F}_n\right)\right]\notag\\ =&\mathbb{E}\left[\sigma_x^*(n)q_nV_n\sum_{k=0}^{n-1}b(n,k)\eta_k\mathbb{E}\left(\eta_n|\mathcal{F}_n\right)\right]\notag\\ &+\mathbb{E}\left[\sigma_x^*(n)q_nV_n\mathbb{E}\left(b(n,n)\eta_n^2|\mathcal{F}_n\right)\right]\notag\\ =&\mathbb{E}\left[b(n,n)\sigma_x^*(n)q_nV_n\right], \end{align} so, taking summation and expectation in equation (\ref{3.7}), we have \begin{align}\label{3.9} \mathbb{E}\left[\Phi_x(X_N^*)V_N\right]=&\mathbb{E}\sum_{n=0}^{N-1}\Delta(p_nV_n)\notag\\ =&-\mathbb{E}[b_x^*(N)p_NV_N-b_x^*(0)p_0V_0]\notag\\ &-\mathbb{E}[\sigma_x^*(N)p_NV_N\xi_N^H-\sigma_x^*(0)p_0V_0\xi_0^H]\notag\\ &-\mathbb{E}[b(N,N)\sigma_x^*(N)q_NV_N-\sigma_x^*(0)q_0V_0\xi_0^H]\notag\\ &-\mathbb{E}\sum_{n=1}^Nl_x^*(n)V_{n}+\mathbb{E}\sum_{n=0}^{N-1}b_u^*(n)p_nv_n\notag\\ &+\mathbb{E}\sum_{n=0}^{N-1}[\sigma_u^*(n)p_nv_n\xi_{n}^H]+\mathbb{E}\sum_{n=0}^{N-1}[\sigma_u^*(n)q_nv_n\eta_n\xi_n^H]\notag\\ =&-\mathbb{E}\sum_{n=0}^{N-1}l_x^*(n)V_{n}+\mathbb{E}\sum_{n=0}^{N-1}b_u^*(n)p_nv_n\notag\\ &+\mathbb{E}\sum_{n=0}^{N-1}[\sigma_u^*(n)p_nv_n\xi_{n}^H]+\mathbb{E}\sum_{n=0}^{N-1}[\sigma_u^*(n)q_nv_n\eta_n\xi_n^H], \end{align} since $b_x^*(N)=\sigma_x^*(N)=l_x^*(N)=V_0=0$.
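The conditional-expectation identity $\mathbb{E}[\eta_n\xi_n^H\,|\,\mathcal{F}_n]=b(n,n)$ underlying \eqref{3.8} can be spot-checked by Monte Carlo. The sketch below is ours (the names, sample size and seed are arbitrary) and treats the one-dimensional case with unit time steps:

```python
import numpy as np

rng = np.random.default_rng(0)
H, N, n = 0.7, 6, 4

def rho(i, j):
    # covariance E[xi_i^H xi_j^H] of unit-step FBM increments (1-dimensional case)
    k = abs(i - j)
    return 0.5 * ((k + 1) ** (2 * H) + abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

Sigma = np.array([[rho(i, j) for j in range(N)] for i in range(N)])
B = np.linalg.cholesky(Sigma)            # lower-triangular factor, entries b(n, k)
eta = rng.standard_normal((200_000, N))  # i.i.d. standard Gaussian rows
xi = eta @ B.T                           # xi = B eta, the fractional noises
mc = np.mean(eta[:, n] * xi[:, n])       # Monte Carlo estimate of E[eta_n xi_n]
assert abs(mc - B[n, n]) < 0.02          # matches b(n, n)
```

Unconditionally, $\mathbb{E}[\eta_n\xi_n^H]=\sum_{k\le n}b(n,k)\,\mathbb{E}[\eta_n\eta_k]=b(n,n)$, which is exactly what the final assertion checks.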
Substituting equation (\ref{3.9}) into (\ref{3.6}), it follows that \begin{align}\label{3.10} &\frac{d}{d\varepsilon}J(u^*+\varepsilon v)\Big|_{\varepsilon=0}\notag\\ &=\mathbb{E}\sum_{n=0}^{N-1}\left[b_u^*(n)p_n+\sigma_u^*(n)p_n\xi_n^H+\sigma_u^*(n)q_n\eta_n\xi_n^H+l_u^*(n)\right]\cdot v_n. \end{align} Notice that, similar to equation (\ref{3.8}), \begin{align*} \mathbb{E}\left[\sigma_u^*(n)q_n\eta_n\xi_n^H\right]=\mathbb{E}\left[b(n,n)\sigma_u^*(n)q_n\right], \end{align*} and \begin{align*} \mathbb{E}\left[\sigma_u^*(n)p_n\xi_n^H\right]=&\mathbb{E}\left[\sigma_u^*(n)p_n\mathbb{E}\left(\sum_{l=0}^nb(n,l)\eta_l|\mathcal{F}_n\right)\right]\\ =&\mathbb{E}\left[\sigma_u^*(n)p_n\mathbb{E}\left(\sum_{l=0}^{n-1}\sum_{k=0}^lb(n,l)a(l,k)\xi_k^H|\mathcal{F}_n\right)\right]\\ &+\mathbb{E}\left[\sigma_u^*(n)p_nb(n,n)\mathbb{E}\left(\eta_n|\mathcal{F}_n\right)\right]\\ =&\mathbb{E}\left[\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H\right]. \end{align*} Through equation (\ref{3.10}) and the fact that $\frac{d}{d\varepsilon}J(u^*+\varepsilon v)\Big|_{\varepsilon=0}\ge 0$, we conclude \begin{align*} \mathbb{E}\sum_{n=0}^{N-1}\left[b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)\right]\cdot v_n\ge 0. \end{align*} By the arbitrariness of $v$, we have \begin{align*} \mathbb{E}\left\{\mathbf{1}_\mathcal{A}\left[b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)\right]\cdot v_n\right\}\ge 0, \end{align*} for all $n\in[0,N-1],\,\mathcal{A}\in\mathcal{F}_n$, which implies \begin{align*} \left[b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)\right]\cdot(u_n-u^*_n)\ge 0. \end{align*} \end{proof} \begin{remark} If the optimal control process $(u_n^*)_{0\le n\le N-1}$ takes values in the interior of $\mathbb{U}$, then \begin{align*} b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)=0,\quad a.s. \end{align*} for all $n\in[0,N-1]$.
Thus, we obtain the optimal system: \begin{align}\label{sys} \left\{\begin{array}{ll} X_{n+1}=X_n+b(n,X_n,u_n)+\sigma(n,X_n,u_n)\xi_n^H,\\\\ p_n+q_n\eta_n=p_{n+1}+b_x^*(n+1)p_{n+1}+b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}\\\\ \qquad\qquad\qquad+l_x^*(n+1)+\sigma_x^*(n+1)p_{n+1}\xi_{n+1}^H,\\\\ X_0=x, \\\\p_N=\Phi_x(X^*_N),\\\\ q_N=0,\\\\ b_x^*(N)=\sigma_x^*(N)=l_x^*(N)=0,\\\\ b_u^*(n)p_n+\sigma_u^*(n)p_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)\sigma_u^*(n)q_n+l_u^*(n)=0. \end{array}\right. \end{align} \begin{corollary} If $\xi^H$ is white noise ($H=1/2$), then it is easy to check that \begin{equation} a(n,k)=b(n,k)=\left\{ \begin{aligned} &1,\ \ {\rm if}\ n=k, \\ &0,\ \ {\rm if}\ n\neq k, \end{aligned} \right. \notag \end{equation} so that $c(n,k)=0, \,k=0,1,...,n-1$. Moreover, considering the adjoint BS$\Delta$E, $p$ and $q$ are given by \begin{align*} p_n=\mathbb{E}\Big[p_{n+1}+&b_x^*(n+1)p_{n+1}+b(n+1,n+1)\sigma_x^*(n+1)q_{n+1}\\ &+l_x^*(n+1)+\sigma_x^*(n+1)p_{n+1}\xi_{n+1}^H|\mathcal{F}_n\Big]\\ =\mathbb{E}\Big[p_{n+1}+&b_x^*(n+1)p_{n+1}+\sigma_x^*(n+1)q_{n+1}+l_x^*(n+1)|\mathcal{F}_n\Big], \end{align*} and \begin{align*} q_n=\mathbb{E}\Big[\eta_n\big[p_{n+1}+&b_x^*(n+1)p_{n+1}+\sigma_x^*(n+1)q_{n+1}+l_x^*(n+1)\big]|\mathcal{F}_n\Big]. \end{align*} We can also write the adjoint equation $(p,q)$ as \begin{align*} \left\{\begin{array}{ll} p_n+q_n\eta_n=p_{n+1}+b_x^*(n+1)p_{n+1}+\sigma_x^*(n+1)q_{n+1}+l_x^*(n+1), \\p_N=\Phi_x(X^*_N),\\ q_N=0. \end{array}\right.
\end{align*} The condition for optimal control is \begin{align*} b_u^*(n)p_n+\sigma_u^*(n)q_n+l_u^*(n)=0, \quad a.s.\quad n=0,1,...,N-1, \end{align*} which is the same as the result obtained in \cite{lin2014maximum}. \end{corollary} \end{remark} \section{Application to linear quadratic control} In this section, we consider the following linear quadratic (LQ) optimal control problem. The state equation is \begin{align}\label{4.1} \left\{\begin{array}{ll} X_{n+1}=X_n+A_nX_n+B_nu_n+\left[C_nX_n+D_nu_n\right]\xi_n^H,\, 0\le n\le N-1, \\X_0=x, \end{array}\right. \end{align} with the cost function \begin{align}\label{4.2} J(u)=\frac{1}{2}\mathbb{E}\left[\sum_{n=0}^{N-1}\left(Q_nX_n^2+R_nu^2_n\right)+GX_N^2\right], \end{align} where $Q_n,G\ge 0$ and $R_n>0$ for $n=0,1,...,N-1$. According to the optimal system (\ref{sys}), the adjoint equation is \begin{align}\label{4.3} \left\{\begin{array}{ll} p_n+q_n\eta_n=p_{n+1}+A_{n+1}p_{n+1}+b(n+1,n+1)C_{n+1}q_{n+1}\\ \qquad\qquad\qquad+Q_{n+1}X^*_{n+1}+C_{n+1}p_{n+1}\xi_{n+1}^H, \\p_N=GX^*_N,\\ q_N=0,\\ A_N=C_N=Q_N=0. \end{array}\right. \end{align} The optimal control should satisfy \begin{align*} B_np_n+D_np_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)D_nq_n+R_nu_n^*=0, \end{align*} i.e., \begin{align}\label{4.4} u_n^*=-R_n^{-1}\left[B_np_n+D_np_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H+b(n,n)D_nq_n\right]. \end{align} \begin{theorem} Let $(p,q)$ be the solution of BS$\Delta$E (\ref{4.3}). Then function (\ref{4.4}) is the unique optimal control for control problem (\ref{4.1}), (\ref{4.2}). \end{theorem} \begin{proof} Let us first show the sufficiency of the optimal control. Let $u\in\mathbb{U}$ be any other admissible control and $\{X_n\}$ the corresponding state process. Denote $\Delta X_n=X_n-X_n^*$ and $\Delta u_n=u_n-u^*_n$; then \begin{align}\label{4.5} \left\{\begin{array}{ll} \Delta X_{n+1}=\Delta X_n+A_n\Delta X_n+B_n\Delta u_n+[C_n\Delta X_n+D_n\Delta u_n]\xi_n^H, \\\Delta X_0=0, \end{array}\right.
\end{align} and \begin{align*} &p_{n+1}\Delta X_{n+1}-p_n\Delta X_n\\ =&-\Delta X_{n+1}\big[A_{n+1}p_{n+1}+b(n+1,n+1)C_{n+1}q_{n+1}\notag\\ &\qquad\qquad+Q_{n+1}X^*_{n+1}+C_{n+1}p_{n+1}\xi_{n+1}^H\big]\notag\\ &+\Delta X_{n+1}q_n\eta_n+p_n\left[A_n\Delta X_n+B_n\Delta u_n\right]\notag\\ &+p_n\left[C_n\Delta X_n+D_n\Delta u_n\right]\xi_n^H\notag\\ =&-\left[A_{n+1}p_{n+1}\Delta X_{n+1}-A_np_n\Delta X_n\right]\notag\\ &-\left[C_{n+1}p_{n+1}\Delta X_{n+1}\xi_{n+1}^H-C_np_n\Delta X_n\xi_n^H\right]\notag\\ &-\left[b(n+1,n+1)C_{n+1}q_{n+1}\Delta X_{n+1}-C_nq_n\Delta X_n\eta_n\xi_n^H\right]\notag\\ &-Q_{n+1}X^*_{n+1}\Delta X_{n+1}+B_np_n\Delta u_n+D_np_n\Delta u_n\xi_{n}^H\notag\\ &+D_nq_n\Delta u_n\eta_n\xi_n^H. \end{align*} It follows that \begin{align*} &\mathbb{E}\left(GX^*_N\Delta X_N\right)\\=&\sum_{n=0}^{N-1}\mathbb{E}\Big[-Q_{n+1}X^*_{n+1}\Delta X_{n+1}+B_np_n\Delta u_n+D_np_n\Delta u_n\xi_{n}^H +D_nq_n\Delta u_n\eta_n\xi_n^H\Big]\\ =&\sum_{n=0}^{N-1}\mathbb{E}\Big[-Q_{n+1}X^*_{n+1}\Delta X_{n+1}+B_np_n\Delta u_n\\ &\qquad\qquad+D_np_n\Delta u_n\sum_{k=0}^{n-1}c(n,k)\xi_k^H +b(n,n)D_nq_n\Delta u_n\Big] \\=&\sum_{n=0}^{N-1}\mathbb{E}\Big[-Q_{n}X^*_{n}\Delta X_{n}-R_nu^*_n\Delta u_n\Big]. \end{align*} Then \begin{align*} J(u)-J(u^*)=&\frac{1}{2}\mathbb{E}\left[\sum_{n=0}^{N-1}\left(Q_n(X_n^2-X_n^{*2})+R_n(u^2_n-u_n^{*2})\right)+G\left(X_N^2-X_N^{*2}\right)\right]\\ \ge &\mathbb{E}\left[\sum_{n=0}^{N-1}\left(Q_nX_n^*\Delta X_n+R_nu^*_n\Delta u_n\right)+GX_N^*\Delta X_N\right]\\ =&0, \end{align*} which shows that $u^*$ is an optimal control. Next we show the uniqueness of the optimal control. Let $u^{*,1}$ and $u^{*,2}$ both be optimal control processes, with corresponding state processes $X^1$ and $X^2$, respectively. It is easy to check that $\frac{X_n^1+X_n^2}{2}$ is the state process corresponding to $\frac{u_n^{*,1}+u_n^{*,2}}{2}$. Assume that there exist constants $\theta>0, \alpha\ge 0$ such that $R_n\ge \theta$ and \begin{align*} J(u^{*,1})=J(u^{*,2})=\alpha.
\end{align*} Using $a^2+b^2=2\left[(\frac{a+b}{2})^2+(\frac{a-b}{2})^2\right]$, we have that \begin{align*} 2\alpha=&J(u^{*,1})+J(u^{*,2})\\ =&\frac{1}{2}\mathbb{E}\sum_{n=0}^{N-1} \Big[Q_n\big((X_n^1)^2+(X_n^2)^2\big)+R_n\big((u_n^{*,1})^2+(u_n^{*,2})^2\big)\Big]\\ &+\frac{1}{2}\mathbb{E}G\big((X_N^1)^2+(X_N^2)^2\big)\\ \ge&\mathbb{E}\sum_{n=0}^{N-1} \left[Q_n\Big(\frac{X_n^1+X_n^2}{2}\Big)^2+R_n\Big(\frac{u_n^{*,1}+u_n^{*,2}}{2}\Big)^2\right]\\ &+\mathbb{E}G\Big(\frac{X_N^1+X_N^2}{2}\Big)^2+\mathbb{E}\sum_{n=0}^{N-1}R_n\Big(\frac{u_n^{*,1}-u_n^{*,2}}{2}\Big)^2\\ =&2J\Big(\frac{u^{*,1}+u^{*,2}}{2}\Big)+\mathbb{E}\sum_{n=0}^{N-1}R_n\Big(\frac{u_n^{*,1}-u_n^{*,2}}{2}\Big)^2\\ \ge&2\alpha+\frac{\theta}{4}\mathbb{E}\sum_{n=0}^{N-1}|u_n^{*,1}-u_n^{*,2}|^2. \end{align*} Thus, we have \begin{align*} \mathbb{E}\sum_{n=0}^{N-1}|u_n^{*,1}-u_n^{*,2}|^2=0, \end{align*} which shows the uniqueness of the optimal control. \end{proof} \section{Acknowledgments} This work was supported by the National Key R$\&$D Program of China (Grant number 2023YFA1009200) and the National Science Foundation of China (Grant number 12471417). \FloatBarrier \bibliography{main} \end{document}
2412.16957v4
http://arxiv.org/abs/2412.16957v4
Euclidean distance discriminants and Morse attractors
\documentclass[12pt]{amsart} \usepackage[margin=1.15in]{geometry} \usepackage{amsmath,amscd,amssymb,amsfonts,latexsym} \usepackage{wasysym} \usepackage{mathrsfs} \usepackage{mathtools,hhline} \usepackage{color} \usepackage{bm} \usepackage[all, cmtip]{xy} \usepackage{comment} \usepackage{url,mathtools,amsmath} \definecolor{hot}{RGB}{65,105,225} \usepackage[pagebackref=true,colorlinks=true, linkcolor=hot , citecolor=hot, urlcolor=hot]{hyperref} \renewcommand{\theenumi}{(\rm \alph{enumi})} \renewcommand{\labelenumi}{(\rm \alph{enumi})} \renewcommand{\theenumii}{(\roman{enumii})} \renewcommand{\labelenumii}{(\roman{enumii})} \renewcommand{\labelitemi}{\labelenumii} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{\sc Definition} \newtheorem{example}[theorem]{\sc Example} \newtheorem{remark}[theorem]{\sc Remark} \newtheorem{notation}[theorem]{\sc Notation} \newtheorem{note}[theorem]{\sc Note} \numberwithin{equation}{section} \newcommand\hot{\mathrm{h.o.t.}} \newcommand\sC{\mathscr{C}} \newcommand\sS{\mathscr{S}} \newcommand\cD{\mathcal{D}} \newcommand\cO{\mathcal{O}} \newcommand\cB{\mathcal{B}} \newcommand\cE{\mathcal{E}} \newcommand\sW{\mathscr{W}} \newcommand\sZ{\mathscr{Z}} \newcommand\bx{\mathbf{x}} \newcommand\ity{\infty} \def\bZ{\mathbb{Z}} \def\bR{\mathbb{R}} \def\bC{\mathbb{C}} \def\bP{\mathbb{P}} \def\bX{\mathbb{X}} \def\e{\varepsilon} \def\m{\setminus} \def\s{\subset} \renewcommand{\d}{{\mathrm d}} \newcommand{\NCone}{\mathscr{N}\mathrm{Cone}} \DeclareMathOperator{\Sing}{Sing} \DeclareMathOperator{\Jac}{Jac} \DeclareMathOperator{\mult}{mult} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\grad}{grad} \DeclareMathOperator{\gen}{gen} \DeclareMathOperator{\reg}{reg} \DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\sing}{sing} \DeclareMathOperator{\atyp}{atyp} \DeclareMathOperator{\Cone}{Cone} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\lin}{lin} \DeclareMathOperator{\EDdeg}{EDdeg} \DeclareMathOperator{\ED}{ED} \DeclareMathOperator{\Eu}{Eu} \DeclareMathOperator{\cl}{closure} \title[ED discriminants]{Euclidean distance discriminants and Morse attractors} \author{Cezar Joi\c ta} \address{Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania and Laboratoire Europ\' een Associ\'e CNRS Franco-Roumain Math-Mode} \email{[email protected]} \author{Dirk Siersma} \address{Institute of Mathematics, Utrecht University, PO Box 80010, \ 3508 TA Utrecht, The Netherlands.} \email{[email protected]} \author{Mihai Tib\u ar} \address{Univ. Lille, CNRS, UMR 8524 -- Laboratoire Paul Painlev\'e, F-59000 Lille, France} \email{[email protected]} \thanks{The authors acknowledge support from the project ``Singularities and Applications'' - CF 132/31.07.2023 funded by the European Union - NextGenerationEU - through Romania's National Recovery and Resilience Plan, and support by the grant CNRS-INSMI-IEA-329. } \keywords{enumerative geometry, ED discriminant, number of Morse points, Euclidean distance degree} \subjclass[2010]{14N10, 14H50, 51M15, 58K05} \begin{document} \begin{abstract} Our study concerns the Euclidean distance function in case of complex plane curves. We decompose the ED discriminant into 3 parts which are responsible for the 3 types of behavior of the Morse points, and we find the structure of each one. In particular we shed light on the ``atypical discriminant'' which is due to the loss of Morse points at infinity. We find formulas for the number of Morse singularities which abut to the corresponding 3 types of attractors when moving the centre of the distance function toward a point of the discriminant.
\end{abstract} \maketitle \section{Introduction} Early studies dedicated to the Euclidean distance emerged before 2000, with much older roots going back to the 19th century geometers. For instance, if one considers the particular case of a curve $X \subset \bR^2$ given by a real equation $f(x,y) = 0$, the aim is to study the critical points of the Euclidean distance function: \[D_u(x,y) = (x - u_{1})^{2} + (y - u_{2})^{2} \] from a centre $u :=(u_1,u_2)$ to the variety $X$. In the case that $X$ is compact and smooth, $D_{u}$ is generically a Morse function, and the set of values $u$ where $D_{u}$ has degenerate critical points is called the \emph{discriminant}, the \emph{caustic}, or the \emph{evolute}. These objects have been studied intensively in the past, see e.g. the recent study \cite{PRS} with its multiple references, including to Huygens in the 17th century and to the ancient Greek geometer Apollonius. On each connected component of the complement of the caustic, the number of Morse critical points and their indices are constant. Assuming now that $(x,y)$ are complex coordinates, the number of those complex critical points is known as the \emph{ED degree}, and it provides upper bounds for the real setting. The corresponding discriminant is called the \emph{ED discriminant}. These notions have been introduced in \cite{DHOST}, and have been studied in many papers ever since, see e.g. \cite{Hor2017}, \cite{DGS}, \cite{Ho}. They have applications to computer vision, e.g. \cite{PST2017}, to numerical algebraic geometry, data science, and other optimization problems, e.g. \cite{HS2014}, \cite{NRS2010}. The earlier paper \cite{CT} contains a study of the ED discriminant under a different name, with a particular definition and within a restricted class of (projective) varieties.
From the topological side, more involved computations of $\EDdeg(X)$ have been done in \cite{MRW2018}, \cite{MRW5} etc., in terms of the Morse formula from \cite{STV} for the \emph{global Euler obstruction} $\Eu(X)$, and in terms of vanishing cycles of a linear Morsification of a distance function where the data point is on the ED discriminant. In particular the same authors have proved in \cite{MRW2018} the \emph{multiview conjecture} which had been stated in \cite{DHOST}. This type of study based on Morsifications appears to be extendable to singular polynomial functions, see \cite{MT1}, \cite{MT2}. The most recent paper \cite{MT3} treats for the first time the case of Morse points disappearing at infinity, via a new principle of computation based on relative polar curves. \ In this paper we consider the discriminant in the case of plane curves $X$, where several general striking phenomena already manifest themselves. In particular, the ``loss of Morse points at infinity'' has a central place in our study. This phenomenon shows that the bifurcation locus encoded by the discriminant may be partly due to the non-properness of the projection $\pi_2: \cE_X \to \bC^2$, see Definition \ref{d:incidence}. It occurs even in simple examples, and it is specific to the complex setting.\\ \noindent The contents of our study are as follows. In \S\ref{s:discrim} we recall the two definitions of ED discriminants that one usually uses: the total ED discriminant $\Delta_{T}(X)$, and the strict ED discriminant $\Delta_{\ED}(X)$. We explain the first step of a classification for low ED degree, equal to 0 and to 1.
In \S\ref{ss:discrim} we introduce the 3 types of discriminants which compose the total discriminant: the atypical discriminant $\Delta^{\atyp}$ responsible for the loss of Morse points at infinity, the singular discriminant $\Delta^{\sing}$ due to the Morse points which move to singularities of $X$, and the regular discriminant $\Delta^{\reg}$ due to the collision of Morse points on $X_{\reg}$. We find the structure of each of them in the main sections \S\ref{s:struct}, \S\ref{ss:affineMorse}, \S\ref{ss:regdiscrim}. It then follows that we have the equalities:\\ $\bullet$ $\Delta_{\ED}(X) = \Delta^{\reg}\cup \Delta^{\atyp}$.\\ $\bullet$ $\Delta_{T}(X) =\Delta_{\ED}(X) \cup \Delta^{\sing}$. By Corollary \ref{c:reg}, the regular discriminant $\Delta^{\reg}$ may contain lines only if they are isotropic tangents\footnote{``Isotropic tangent line'' means that it is parallel to one of the lines of equation $x^2 +y^2=0$. See \S\ref{e:2ex}.} to flex points on $X_{\reg}$. The atypical discriminant $\Delta^{\atyp}$ consists of complex isotropic lines only (cf.\ Theorem \ref{t:atyp}). In the real setting it then follows that the ED discriminant $\Delta_{\ED}(X)$ does not contain lines. For each type of complex discriminant, we compute in \S\ref{ss:morseinfty}, \S\ref{ss:affineMorse}, and \S\ref{ss:morsereg}, respectively, the number of Morse singularities which abut to attractors of Morse points (as defined in \S\ref{ss:attract}). Several quite simple examples in \S\ref{s:examples} illustrate all these results and phenomena, with detailed computations. \tableofcontents \section{ED degree and ED discriminant}\label{s:discrim} \subsection{Two definitions of the ED discriminant} We consider an algebraic curve $X\subset \bC^{2}$, with reduced structure. Its singular set $\Sing X$ is a finite set of points. For a generic centre $u$, the complex ``Euclidean distance'' function $D_{u}$ is a stratified Morse function.
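As a concrete numerical sketch (ours, not from the text: we use the affine parametrisation $t\mapsto(t^2,t^3)$ of the cusp of Example \ref{ss:cusp}), a short computation gives $\frac{\d}{\d t}D_u(t^2,t^3)=2t\,(3t^4+2t^2-3u_2t-2u_1)$, so the critical points of $D_u$ on $X_{\reg}$ are the nonzero roots of the quartic factor, and a generic centre sees four Morse points:

```python
import numpy as np

# Cusp x^3 = y^2, parametrised by t -> (t^2, t^3).  Critical points of
# D_u(t) = (t^2 - u1)^2 + (t^3 - u2)^2 away from the singular point t = 0
# are the roots of 3 t^4 + 2 t^2 - 3 u2 t - 2 u1 = 0.
u1, u2 = 0.3, 0.7          # a (hypothetical) generic centre
roots = np.roots([3, 0, 2, -3 * u2, -2 * u1])

# Four complex critical points, none at the singular point: EDdeg(X) = 4.
assert len(roots) == 4 and all(abs(t) > 1e-8 for t in roots)
for t in roots:            # each root is indeed critical for D_u
    dD = 2 * (t**2 - u1) * 2 * t + 2 * (t**3 - u2) * 3 * t**2
    assert abs(dD) < 1e-8
```

This agrees with the value $\EDdeg(X)=4$ stated in Example \ref{ss:cusp}.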
\begin{definition}\label{d:defgood} The \emph{ED degree of $X$}, denoted by $\EDdeg(X)$, is the number of Morse points $p\in X_{\reg}$ of a generic distance function $D_{u}$; this number is independent of the choice of the generic centre $u$ in a Zariski-open subset of $\bC^{2}$. The \emph{total ED discriminant} $\Delta_{T}(X)$ is the set of points $u \in \bC^{2}$ such that the function $D_{u}$ has fewer than $\EDdeg(X)$ Morse points on $X_{\reg}$, or such that $D_{u}$ is not a Morse function.\footnote{In particular $u\in\Delta_{T}(X)$ if $D_{u}$ has non-isolated singularities.} \end{definition} Note that by definition $\Delta_{T}(X)$ is a closed set, as the complement of an open set. \ A second definition goes as follows, cf.\ \cite{DHOST}. Consider the following incidence variety, a variant of the conormal of $X$, where $\bx = (x,y)$ and $(u-\bx)$ is viewed as a 1-form: $$ \cE_X := \cl \bigl\{ (\bx,u)\in X_{\reg}\times \bC^{2} \mid \ (u-\bx)|T_{\bx}X_{\reg}=0 \bigr\} \subset X\times \bC^{2} \subset \bC^{2}\times \bC^{2},$$ and let us remark that $\dim \cE_X = 2$. Let $\pi_{1} : \cE_X \to X$ and $\pi_{2} : \cE_X \to \bC^{2}$ be the projections on the first and second factor, respectively. The projection $\pi_{2}$ is generically finite, and the degree of this finite map is the \emph{ED degree of $X$}, as also defined above in Definition \ref{d:defgood}. \begin{definition}\label{d:incidence} The bifurcation set of $\pi_{2}$ is called \emph{the (strict) ED discriminant}, and will be denoted here by $\Delta_{\ED}(X)$. \end{definition} By the above definitions, we have the inclusion $\Delta_{\ED}(X)\subset \Delta_{T}(X)$, which may not be an equality, see e.g. Examples \ref{ss:lines} and \ref{ss:cusp}. We will also use the following terminology: \subsection{Terminology and two simple examples}\label{e:2ex}\ We say that a line in $\bC^2$ is \emph{isotropic} if it verifies the equation $x^2 + y^2 =0$.
We say that a line $K$ is \emph{normal} to a line $L$ at some point $p\in L$ if the Hermitian product $\langle q-p, \overline{r-p} \rangle$ is equal to 0 for any $q\in K$ and any $r\in L$. \begin{example}[Lines] \label{ss:lines}\ Lines in $\bC^{2}$ do not all have the same ED degree, see Theorem \ref{t:lines}(a-b). Let $X$ be the union of two non-isotropic lines intersecting at a point $p$. The ED degree is then $\EDdeg(X) =2$. According to the definitions, the total ED discriminant $\Delta_{T}(X)$ contains the two normal lines at $p$, whereas $\Delta_{\ED}(X)$ is empty. \end{example} \begin{example}[Cusp]\label{ss:cusp}\ The plane cusp $X:= \{ (x,y) \in \bC^2 \mid x^{3}=y^{2}\}$ has $\EDdeg(X)= 4$. The ED discriminant $\Delta_{\ED}(X)$ is a smooth curve of degree 4 passing through the origin. If $u\in \Delta_{\ED}(X)$ is a point different from the origin, then the distance function $D_{u}$ has precisely one non-Morse critical point on $X_{\reg}$ produced by the merging of two of the Morse points. The origin is a special point of $\Delta_{\ED}(X)$: the distance function from the origin, denoted by $D_{0}$, has only two Morse points on $X_{\reg}$ while the two other Morse points have merged at the origin. We have $\Delta_{T}(X) = \Delta_{\ED}(X)\cup\{x=0\}$. At some point $p\in \{x=0\}$ different from the origin, the distance function $D_{p}$ has only 3 Morse points on $X_{\reg}$ while the 4th Morse point has merged with the singular point of $X$. \end{example} \subsection{First step of a classification}\label{ss:classif} \begin{theorem}\label{t:lines} Let $X\subset \bC^{2}$ be an irreducible reduced curve. Then \begin{enumerate} \item $\EDdeg(X) =0$ $\Longleftrightarrow$ $X$ is a line parallel to one of the two isotropic lines $\{ x \pm iy =0\}$. In this case $\Delta_{T}(X)=X$. \item $\EDdeg(X) =1$ $\Longleftrightarrow$ $X$ is a line different from the two isotropic lines. In this case $\Delta_{\ED}(X)$ is empty.
\item The discriminant $\Delta_{\ED}(X)$ contains some point $u= (u_1, u_2) \in \bC^2$ such that $\dim \pi_{2}^{-1}(u)>0$ if and only if: (i). either $X = \{ (x, y)\in \bC^2 \mid (x-u_{1})^{2}+ (y-u_{2})^{2} = \alpha\}$ for a certain $\alpha \in \bC^{*}$. (ii). or $X$ is one of the two isotropic lines. \end{enumerate} \end{theorem} We need the following general classical result. \begin{lemma}[Genericity of Morse functions]\label{l:genericity} Let $u\in \bC^{n}\m X$ be a fixed point. There exists a Zariski open subset $\Omega_{u}\subset \check \bP^{n-1}$ of linear functions $\ell = \sum_{i}a_{i}x_{i}$ such that, for any $\ell \in \Omega_{u}$, the distance function $D_{u+ta}$ is a stratified Morse function for any $t\in \bC$ except finitely many values. \end{lemma} \begin{proof}[Proof of Theorem \ref{t:lines}] In (a) and (b) the implications ``$\Leftarrow$'' are both clear by straightforward computation; we will therefore show ``$\Rightarrow$'' only. \noindent (a). $\EDdeg(X) =0$ implies that the normal to the tangent space $T_{p}X_{\reg}$ is this space itself. If $T_{p}X_{\reg} = \bC\langle(a,b)\rangle$, then the only vectors $(a,b)$ which have this property are those verifying the equation $a^{2}+b^{2} =0$. This means that for any $p\in X_{\reg}$, one has either $T_{p}X_{\reg} = \bC\langle(1, i)\rangle$ or $T_{p}X_{\reg} = \bC\langle(1, -i)\rangle$. This implies that $X_{\reg}$ is one of the lines $\{ x \pm iy = \alpha\}$, for some $\alpha\in \bC$.\\ \noindent (b). By Lemma \ref{l:genericity} we have a dense set $\cD$ of points $u\in \bC^{2}\m X$ such that the distance function $D_{u}$ is a stratified Morse function. Let us now assume $\EDdeg(X) =1$. This implies that there exists a unique line $L_{u}$ passing through $u\in \cD$ which is normal to $X_{\reg}$. It also follows from the condition $\EDdeg(X) =1$ that, for $u\in \cD$, the lines $L_{u}$ do not mutually intersect. These lines are thus parallel, dense in $\bC^{2}$, and normal to $X_{\reg}$.
This implies that $X_{\reg}$ is contained in a line.\\ \noindent (c). The hypothesis implies that for some point $u\in \Delta_{\ED}(X)$, the function $D_{u}$ has a non-isolated singularity on $X$. Since this singular set is necessarily contained in a single level of $D_{u}$, it follows that $X$ contains $\{ (x-u_{1})^{2}+ (y-u_{2})^{2} = \alpha\}$ for some $\alpha\in \bC$, and since $X$ is irreducible, the twofold conclusion follows. \end{proof} \subsection{Three types of discriminants}\label{ss:discrim} The total discriminant $\Delta_{T}(X)$ is the union of 3 types of discriminants that will be discussed in the following:\\ $(1).$ \emph{The atypical discriminant} $\Delta^{\atyp}$, due to the Morse points which are ``lost'' at infinity. See \S\ref{s:atyp}. $(2).$ \emph{The singular discriminant} $\Delta^{\sing}$, due to the Morse points which move to singularities of $X$. See \S\ref{ss:affineMorse}. $(3).$ \emph{The regular discriminant} $\Delta^{\reg}$, due to the collision of Morse points on $X_{\reg}$. See \S\ref{ss:regdiscrim}. \\ We will see that the first two types consist of lines only, whereas the 3rd type may contain components of higher degree. These discriminants may intersect, and may also have common components, which should then be lines. Several examples at the end will illustrate these notions and other phenomena, see \S\ref{s:examples}. \section{The atypical discriminant}\label{s:atyp} We define the discriminant $\Delta^{\atyp}$ as the subset of $\Delta_{\ED}(X)$ which is due to the loss of Morse points to infinity, and we find its structure. \begin{definition}\label{d:atyp} Let $\overline X$ denote the closure of $X$ in $\bP^2$. For some point $\xi\in X^{\infty} :=\overline X\cap H^\ity$, let $\Gamma$ be a local branch of $\overline X$ at $\xi$.
We denote by $\Delta^{\atyp}(\Gamma)\subset\Delta_{\ED}(X)$ the set of all points $u\in \bC^2$ such that there exist a sequence $\{u_n\}_{n\geq 1}\subset \bC^2$ with $u_n\to u$ and a sequence $\{\bx_n\}_{n\geq 1}\subset (\Gamma\m H^\ity)$ with $\bx_n\to\xi$, such that $(u_{n}-\bx_{n})|T_{\bx_n}X_{\reg}=0$. The \emph{atypical discriminant} is then defined as follows: $$\Delta^{\atyp} :=\bigcup_\Gamma \Delta^{\atyp}(\Gamma)$$ where the union runs over all local branches $\Gamma$ of $\overline X$ at all points $\xi\in X^{\infty}$. \end{definition} \subsection{The structure of $\Delta^{\atyp}$}\label{s:struct}\ \\ Let $\gamma:B\to \Gamma$ be a local holomorphic parametrisation of $\Gamma$ at $\xi$, where $B$ is some disk in $\bC$ centred at $0$ of small enough radius, and $\gamma(0)=\xi$. If $x$ and $y$ denote the coordinates of $\bC^2$, then for $t\in B$, we write $x(t)=x(\gamma(t))$ and $y(t)=y(\gamma(t))$. It follows that the functions $x(t)$ and $y(t)$ are meromorphic on $B$ and holomorphic on $B\setminus\{0\}$. We thus may write them on some small enough disk $B'\subset B\subset \bC$ centred at the origin, as follows: $$x(t)=\frac{P(t)}{t^k}, \ y(t)=\frac{Q(t)}{t^k},$$ where $P(t)$ and $Q(t)$ are holomorphic, and $P(0)$ and $Q(0)$ are not both equal to zero. See also Lemma \ref{l:atyp} for the change of coordinates and for the significance of the exponent $k$. \medskip Under these notations, we have $\xi =[P(0);Q(0)]\in H^{\ity}$. For $t\in B\m\{0\}$ and $u=(u_1,u_2)\in\bC^2$, we have: $\bigl( (x(t),y(t)),u\bigr)\in \cE_X$ if and only if $$\frac{(tP'(t)-kP(t))}{t^{k+1}}\Big(\frac{P(t)}{t^k}-u_1\Big) + \frac{(tQ'(t)-kQ(t))}{t^{k+1}}\Big(\frac{Q(t)}{t^k}-u_2\Big)=0.$$ This yields a holomorphic function $h:B\times\bC^2\to \bC$ defined as: $$h(t,u)=\bigl(tP'(t)-kP(t)\bigr)(P(t)-u_1t^k) + \bigl(tQ'(t)-kQ(t)\bigr)\bigl(Q(t)-u_2t^k\bigr) $$ which is linear in the coordinates $(u_1,u_2)$.
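To make the construction concrete, here is a small self-contained sketch with a toy example of our own (not one from the text): the curve $xy-ix^2=1$, whose branch at the isotropic point $[1;i]\in H^\ity$ is parametrised by $x(t)=1/t$, $y(t)=i/t+t$, so that $k=1$, $P(t)=1$, $Q(t)=i+t^2$. Expanding by hand gives $h(t,u)=(u_1+iu_2)\,t-u_2t^3+t^4$; the code (helper names `polymul`, `h_coeffs` are ours) reproduces this, in line with the general description of the coefficients $h_j(u)$:

```python
# Toy branch at an isotropic point at infinity: x(t) = P(t)/t^k,
# y(t) = Q(t)/t^k with k = 1, P(t) = 1, Q(t) = i + t^2.  We expand
#   h(t,u) = (tP' - kP)(P - u1 t^k) + (tQ' - kQ)(Q - u2 t^k)
# as a polynomial in t (lists of complex coefficients, ascending degree).

def polymul(a, b):
    out = [0j] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def h_coeffs(P, Q, k, u1, u2):
    # tP'(t) - kP(t) has coefficients (j - k) * p_j, similarly for Q.
    A = [(j - k) * p for j, p in enumerate(P)]
    B = [(j - k) * q for j, q in enumerate(Q)]
    Pu = P + [0j] * max(0, k + 1 - len(P)); Pu[k] -= u1
    Qu = Q + [0j] * max(0, k + 1 - len(Q)); Qu[k] -= u2
    s, t_ = polymul(A, Pu) + [0j] * 10, polymul(B, Qu) + [0j] * 10
    return [x + y for x, y in zip(s, t_)]

P, Q, k = [1 + 0j], [1j, 0j, 1 + 0j], 1
h = h_coeffs(P, Q, k, 2 + 0j, 3 + 0j)        # generic centre u = (2, 3)
assert abs(h[0]) < 1e-12 and h[1] == 2 + 3j  # h_0 = 0, h_1(u) = u_1 + i u_2

h = h_coeffs(P, Q, k, -3j, 3 + 0j)           # centre on the line u_1 + i u_2 = 0
assert abs(h[1]) < 1e-12 and h[3] == -3      # first nonzero h_j at j = 3
```

For a centre on the line $u_1+iu_2=0$ the first index $j>k$ with $h_j\neq 0$ is $j=3$, so (in the terms introduced below in \S\ref{ss:morseinfty}) two Morse points would be lost at infinity there.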
For $t\in B\m\{0\}$ and $u\in\bC^2$, we then obtain the equivalence: \begin{equation}\label{eq:normal} \bigl( (x(t),y(t)),u\bigr)\in \cE_X \Longleftrightarrow h(t,u)=0. \end{equation} \medskip If we write $h(t,u)=\sum_{j\geq 0} h_j(u)t^j$, then we have: $\bullet$ For any $j\leq k-1$, $h_j(u)=h_j\in\bC$, for all $u\in \bC^{2}$, $\bullet$ The function $h_k(u)$ is of the form $h_k(u)=kP(0)u_1 + kQ(0)u_2+\text{constant}$. Since $P(0)$ and $Q(0)$ are not both zero by our assumption, it also follows that the function $h_k(u)$ is not constant. $\bullet$ For any $i>k$, the function $h_i(u)$ is a (possibly constant) linear function. \ Let us point out the geometric interpretation of the integer $k$, and the role of the isotropic points at infinity: \begin{lemma} \label{l:atyp}\ Let $\xi \in X^{\ity}$ and let $\Gamma$ be a branch of $\overline{X}$ at $\xi$. Then: \begin{enumerate} \item $k = \mult_{\xi}(\Gamma, H^{\ity})$. \item Let $Q^\ity := \{x^{2} + y^{2} =0\} \subset H^\ity$. If $\xi \not\in X^{\ity}\cap Q^\ity$ then $\Delta^{\atyp}(\Gamma)=\emptyset$. \end{enumerate} \end{lemma} \begin{proof} \noindent (a). Since $P(0)$ and $Q(0)$ are not both zero, let us assume that $P(0) \not= 0$. In coordinates at $\xi\in H^{\ity}\subset \bP^{2}$ we then have $z=\frac1x$ and $w = \frac{y}{x}$. Composing with the parametrisation of $\Gamma$ we get $z(t) = \frac{1}{x(t)} = t^{k}r(t)$ where $r$ is holomorphic and $r(0) \not= 0$. We therefore get: \begin{equation}\label{eq:PQ} \mult_{\xi}(\Gamma, H^{\ity}) = \ord_{0} z(t) = k, \end{equation} and we observe that this also holds in the other case $Q(0) \not= 0$. \noindent (b). If $\xi \not\in X^\ity\cap Q^\ity$ then, for any branch $\Gamma$ of $\overline{X}$ at $\xi$, we have $P(0)^2+Q(0)^2\neq 0$, hence $h_0=-k\bigl(P(0)^2+Q(0)^2\bigr)\neq 0$. This shows that the equation $h(t,u)=0$ has no solutions in a small enough neighbourhood of $\xi$.
\end{proof} \begin{theorem} \label{t:atyp} \ Let $\xi\in X^{\ity}\cap Q^\ity$, and let $\Gamma$ be a branch of $\overline{X}$ at $\xi$. Then: \begin{enumerate} \item $u\in \Delta^{\atyp}(\Gamma)$ if and only if $\ord_{t}h(t,u) \ge 1+ \mult_{\xi}(\Gamma, H^{\ity})$. \item If $\Delta^{\atyp}(\Gamma)\neq\emptyset$, then $\Delta^{\atyp}(\Gamma)$ is the line $\{u\in \bC^{2} \mid h_k(u)=0\}$. In particular, $\Delta^{\atyp}$ is a finite union of affine lines parallel to the isotropic lines. \end{enumerate} \end{theorem} \begin{proof} \noindent (a). \sloppy We have to show that $u\in \Delta^{\atyp}(\Gamma)$ if and only if $h_0=\cdots = h_{k-1}=0$ in $h(t,u)$, and $h_{k}(u) =0$. If $h_0,\ldots, h_{k-1}$ are not all equal to $0$, then let $0\leq j_1\leq k-1$ be the first index such that $h_{j_1}\neq 0$. We then have: $$h(t,u)=t^{j_1}\Big(h_{j_1}+\sum_{j>j_1}h_j(u)t^{j-j_1}\Big).$$ Let $K$ be a compact subset of $\bC^2$ containing a neighbourhood of some point $u_{0}\in \Delta^{\atyp}(\Gamma)$. Then, since $(t,u)\to \sum_{j>j_1}h_j(u)t^{j-j_1}$ is holomorphic, we get $\lim_{t\to 0} \sum_{j>j_1}h_j(u)t^{j-j_1}= 0$ uniformly for $u\in K$. This implies that $h(t,u)\neq 0$ for $|t|\neq 0$ small enough and for all $u\in K$, which contradicts the assumption that $u_{0}\in \Delta^{\atyp}(\Gamma)$. We conclude that $\Delta^{\atyp}(\Gamma)=\emptyset$. The remaining implications will be proved in (b). \medskip \noindent (b). Let us assume now that $h_0=\cdots =h_{k-1}=0$. We then write $h(t,u)=t^k\widetilde h(t,u)$ where \begin{equation}\label{eq:morseinfty} \widetilde h(t,u)=h_k(u)+\sum_{j>k}h_j(u)t^{j-k}. \end{equation} We have to show that $u\in \Delta^{\atyp}(\Gamma)$ if and only if $h_k(u)=0$. \medskip ``$\Rightarrow$'': If $h_k(u)\neq 0$, then a similar argument as in (a), applied to $\widetilde h(t,u)$, shows that $u\not\in \Delta^{\atyp}(\Gamma)$. \medskip ``$\Leftarrow$'': Let $h_k(u_{1}, u_{2})=0$.
We have to show that for every neighbourhood $V$ of $u$ and every disk $D \subset B \subset \bC$ centred at the origin, there exist $v\in V$ and $t\in D\m\{0\}$ such that $\widetilde h(t,v)=0$. Suppose that this is not the case. Denoting by $Z(\widetilde h)$ the zero-set of $\widetilde h$, we would then have $$\big(Z(\widetilde h)\cap (D\times V)\big)\subset \{0\} \times V.$$ We also have the equality $Z(\widetilde h)\cap (\{0\} \times V)=\{0\} \times Z(h_k)$. The following inclusion would then hold: \begin{equation}\label{eq:inclZ} \big(Z(\widetilde h)\cap (D\times V)\big)\subset \{0\} \times Z(h_k). \end{equation} The set $\{0\} \times Z(h_k)$ has dimension at most 1, while $Z(\widetilde h)\cap (D\times V)$ has dimension 2 since it cannot be empty, as $\widetilde h(0,u)=0$. We obtain in this way a contradiction to the inclusion \eqref{eq:inclZ}. This shows in particular that $\Delta^{\atyp}(\Gamma)$ is a line parallel to an isotropic line which contains the point $\xi$ in its closure at infinity. We finally note that $\Delta^{\atyp}$ is the union of $\Delta^{\atyp}(\Gamma)$ over all branches at infinity of $\overline{X}$, thus $\Delta^{\atyp}$ is a union of lines, all of which are parallel to the isotropic lines. \end{proof} \begin{corollary}\label{c2} Let $\Gamma$ be a branch of $\overline X$ at $\xi \in X^\ity\cap Q^{\ity}$. Then $\Delta^{\atyp}(\Gamma)\neq\emptyset$ if and only if $\Gamma$ is not tangent at $\xi$ to the line at infinity $H^\ity$. \end{corollary} \begin{proof} Let us assume $\xi = [i;1]$, since a similar proof works for the other point of $Q^{\ity}$.
Let $(w, z)$ be local coordinates of $\bP^2$ at $\xi$, such that $H^\ity=\{z=0\}$ and we have: $$\ x=\frac{w}{z}, \ y=\frac{1}{z}.$$ Our hypothesis ``$H^\ity$ is not tangent to $\Gamma$ at $\xi$'' implies that we may choose a parametrisation for $\Gamma$ at $\xi$ of the form $z(t)=t^k$, $w(t)=i+t^kP_1(t)$, where $P_1$ is a holomorphic function on a neighbourhood of the origin, and where $\ord_0 z(t) = k = \mult_{\xi}(\Gamma, H^{\ity})\ge 1$, as shown in \eqref{eq:PQ}. Under the preceding notations, we have $Q(t)\equiv 1$, $P(t)=i+t^kP_1(t)$, and we get \begin{align*} h(t,u)&=\bigl(t^{k+1}P_1'(t)-ki\bigr)\bigl(i+t^kP_1(t)-u_1t^k\bigr)-k+ku_2t^k\\ &=t^k\Big[tP_1'(t)\bigl(i+t^kP_1(t)-u_1t^k\bigr)-kiP_1(t)+kiu_1+ku_2\Big]. \end{align*} By Theorem \ref{t:atyp}(a), $u \in \Delta^{\atyp}(\Gamma)\neq\emptyset$ if and only if $\ord_t h(t,u) \ge 1+k$. From the above expression of $h(t,u)$ we deduce: $\ord_t h(t,u) \ge 1+k$ $\Longleftrightarrow$ $iu_1+u_2 +K =0$, where $K= -iP_1(0)$ is a constant. This is the equation of a line parallel to one of the two isotropic lines. We deduce that $\Delta^{\atyp}(\Gamma)$ is this line, and therefore it is not empty. \ Reciprocally, let us assume now that $\Gamma$ is tangent to $H^\ity$ at $\xi$. By Lemma \ref{l:atyp}(a), this implies $k\ge 2$. A parametrisation for $\Gamma$ is of the form $z(t)=t^k$, $w(t)=i+\sum_{j\geq r}a_jt^j$, where $1\le r<k$. As before, we have $Q(t)\equiv 1$ and $P(t)=i+a_rt^r+\hot$ where $\hot$ means as usual ``higher order terms''. The expansion of $h(t,u)$ then looks as follows: \begin{align*} h(t,u)&=\bigl(tP'(t)-kP(t)\bigr)(P(t)-u_1t^k) + \bigl(tQ'(t)-kQ(t)\bigr)\bigl(Q(t)-u_2t^k\bigr) \\ &=(ra_rt^r-ki-ka_rt^r+\hot)(i+a_rt^r+\hot)-k+\hot\\ &=k+ia_r(r-2k)t^r-k+\hot=ia_r(r-2k)t^r+\hot. \end{align*} We have $a_r\not= 0$ and $r-2k\neq 0$ since $r<k$, thus $\ord_t h(t,u) = r < k$. Then Theorem \ref{t:atyp}(a) shows that $\Delta^{\atyp}(\Gamma)=\emptyset$.
\end{proof} \subsection{Morse numbers at infinity} \label{ss:morseinfty} We have shown in \S\ref{s:struct} that $\Delta^{\atyp}$ is a union of lines. Our purpose is now to fix a point $\xi \in \overline{X}\cap Q^{\ity}$ and to find the number of Morse singularities of $D_{u}$ which abut to it when the centre $u$ moves from outside $\Delta^{\atyp}$ toward some $u_{0}\in \Delta^{\atyp}$. We will in fact do much more than that. Let $\Gamma$ be a local branch of $\overline X$ at $\xi$. We assume that $u_{0}\in \Delta^{\atyp}(\Gamma)\subset \Delta^{\atyp}$, as defined in \S\ref{s:struct}. We will now prove the formula for the number of Morse points which are lost at infinity. \begin{theorem}\label{t:morseinfty} Let $u_{0}\in \Delta^{\atyp}(\Gamma)\not= \emptyset$. Let $\cB\subset \bC $ denote a small disk centred at the origin, and let $u: \cB \to \bC^{2}$ be a continuous path such that $u(0) = u_{0}$, and such that $h_k(u(s)) \not= 0$ for all $s\not=0$. Then the number of Morse points of $D_{u(s)}$ which abut to $\xi$ along $\Gamma$ when $s\to 0$ is: \begin{equation}\label{eq:morseinfty} m_{\Gamma}(u_{0}) := \ord_{0}\Bigl(\sum_{j>k}h_j(u_{0})t^{j}\Bigr) - \mult_{\xi}(\Gamma, H^{\ity}) \end{equation} if $\ord_{0}\sum_{j>k}h_j(u_{0})t^{j}$ is finite. In this case, the integer $m_{\Gamma}(u_{0}) >0$ is independent of the choice of the path $u(s)$ at $u_{0}$. \end{theorem} \begin{remark}\label{r:ordinfty} The excluded case in Theorem \ref{t:morseinfty} corresponds in fact to a very special curve $X$. Indeed, the order $\ord_{0}\sum_{j>k}h_j(u_{0})t^{j}$ is infinite if and only if the series is identically zero, and this is equivalent to $X = \{ (x-u_{0,1})^{2} + (y-u_{0,2})^{2} =\alpha\}$, for some $\alpha\in \bC$. \end{remark} \begin{proof}[Proof of Theorem \ref{t:morseinfty}] We will use Theorem \ref{t:atyp}, its preliminaries and its proof, with all notations and details.
Replacing $u$ by $u(s)$ in the expression of $\widetilde h$ yields: $$\widetilde h(t,u(s)) =h_k(u(s))+\sum_{j>k}h_j(u(s))t^{j-k}.$$ Note that by our choice of the path $u(s)$ we have $h_k(u(s)) \not= 0$ for all $s\not=0$ close enough to 0. The number of Morse points which abut to $\xi$ is precisely the number of solutions in the variable $t$ of the equation $\widetilde h(t,u(s))=0$ which converge to 0 when $s\to 0$. This is precisely equal to $\ord_{t}\sum_{j>k}h_j(u_{0})t^{j-k}$, and we recall that $k = \mult_{\xi}(\Gamma, H^{\ity})$ by Lemma \ref{l:atyp}(a). In particular, this result is independent of the choice of the path $u(s)$. Since we have assumed that $\Delta^{\atyp}(\Gamma)\not= \emptyset$, there must exist Morse singularities which abut to $\xi = \Gamma\cap Q^\ity$, thus $m_{\Gamma}(u_{0}) >0$. \end{proof} \begin{remark}\label{r:morseinfty} For $j\ge k$, the set $L_{j}:= \{ h_{j}(u) =0 \}$ is a line if $h_{j}(u) \not\equiv 0$. The number of Morse points $m_{\Gamma}(u)$ can be interpreted as $j-k$, where $j>k$ is the first index such that $h_{j}\not\equiv 0$, i.e. such that $L_{j}$ is a line. Since $L_{j}\cap L_{k}$ consists of at most one point, it follows that the number $m_{\Gamma}(u_0)$ is constant for all points $u_0\in L_{k}$, except possibly at the point $u_0 = L_{j}\cap L_{k}$, for which either the Morse number $m_{\Gamma}(u_0)$ takes a higher value, or $\widetilde h(t,u_0) \equiv 0$, in which case $D_{u_0}$ has non-isolated singularities, see Remark \ref{r:ordinfty}. The generic number will be denoted by $m^{\gen}_{\Gamma}$. See Examples \ref{ex:morseind} and \ref{e:isotropic}. \end{remark} This justifies: \begin{definition}\label{d:morseinfty} We call $m^{\gen}_{\Gamma}$ the \emph{generic Morse number at $\xi$ along $\Gamma$}. The number: $$ m^{\gen}_{\xi} := \sum_{\Gamma} m^{\gen}_{\Gamma} $$ will be called the \emph{generic Morse number at $\xi$}.
We say that $m_{\Gamma}(\hat u) > m^{\gen}_{\Gamma}$ is the \emph{exceptional Morse number at $\xi$ along $\Gamma$}, whenever this point $\hat u\in L_k$ exists and the associated number is finite. \end{definition} See Example \ref{ex:morseind}, where $\hat u$ exists but the number $m_{\Gamma}(\hat u)$ is not defined because the order is infinite (cf. Theorem \ref{t:morseinfty}). \section{Local behaviour at attractors, and the structure of discriminants}\label{s:attractors} \subsection{The attractors of Morse points}\label{ss:attract} Let $X\subset \bC^{2}$ be a reduced affine variety of dimension 1. Let $u_{0}\in \Delta_{T}$. A point $p\in \overline{X}$ will be called an \emph{attractor} for $D_{u_{0}}$ if there are Morse points of $D_{u(s)}$ which abut to $p$ when $s\to 0$, where $u(s)$ is some path with $u(0):=u_{0}$. The attractors fall into 3 types, which correspond to the 3 types of discriminants defined in \S\ref{ss:discrim}: (1). One or both points of $\overline{X} \cap Q^{\ity}$ may be attractors, as shown in Lemma \ref{l:atyp}. See \S\ref{ss:morseinfty} for all details, and Examples \ref{ex:morseind} and \ref{e:isotropic}. (2). Any $p\in \Sing X$ is an attractor, since at least one Morse point of $D_{u(s)}$ abuts to it. See \S\ref{ss:affineMorse} for all details. (3). Points $p\in X_{\reg}$ to which more than one Morse singularity of $D_{u(s)}$ abuts. Such a point is a non-Morse singularity of ${D_{u_{0}}}_{|X_{\reg}}$, and it varies with $u_{0}\in \Delta_{T}$. See \S\ref{ss:regdiscrim}, and Example \ref{ex:regatyp}. \medskip \subsection{Structure of $\Delta^{\sing}$, and Morse numbers at the attractors of $\Sing X$.}\label{ss:affineMorse}\ We consider $X$ with reduced structure; consequently $X$ has at most isolated singularities. We recall from \S\ref{ss:discrim} that $\Delta^{\sing}$ is the subset of points $u\in\Delta_T(X)$ such that, when $u(s) \to u$, at least one Morse point of $D_{u(s)}$ abuts to a singularity of $X$.
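As a concrete illustration of these notions, the singular discriminant of the cusp $X=\{x^{3}=y^{2}\}$ (treated in Remark \ref{r:exceptionalpoint} below) can be checked symbolically. The following is a minimal sketch of ours using Python/sympy, not part of the paper's computations:

```python
# Sketch: the singular discriminant of the cusp X = {x^3 = y^2},
# via the Puiseux parametrisation x = t^2, y = t^3 at p = (0,0).
import sympy as sp

t, u1, u2 = sp.symbols('t u1 u2')
x, y = t**2, t**3                      # alpha = mult_p(Gamma) = 2, beta = 3
h = sp.expand(sp.diff(x, t)*(x - u1) + sp.diff(y, t)*(y - u2))
# h = -2*u1*t - 3*u2*t**2 + 2*t**3 + 3*t**5, so h_{alpha-1} = h_1 = -2*u1:
# the line {u1 = 0}, i.e. the normal at p to the tangent cone {y = 0},
# lies in the singular discriminant.

def morse_number(hu, alpha):
    """1 - alpha + ord_t of the tail sum_{j >= alpha} h_j(u0) t^j."""
    tail = sum(hu.coeff(t, j)*t**j for j in range(alpha, sp.degree(hu, t) + 1))
    ordt = min(m[0] for m in sp.Poly(tail, t).monoms())
    return 1 - alpha + ordt

m_generic = morse_number(h.subs({u1: 0, u2: 1}), 2)   # generic point of {u1 = 0}
m_special = morse_number(h.subs({u1: 0, u2: 0}), 2)   # the origin
```

These values agree with Remark \ref{r:exceptionalpoint}: the generic Morse number on the axis $u_1=0$ is $1$, and it jumps to $2$ at the exceptional point $(0,0)$.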
\begin{theorem}\label{t:morseaffine} Let $p\in \Sing X$. Then $p$ is an attractor, and: \begin{enumerate} \item The singular discriminant is the union $$\Delta^{\sing} = \cup_{p\in \Sing X}\NCone_{p}X ,$$ where $\NCone_{p}X$ denotes the union of all the normal lines at $p$ to the tangent cone $\Cone_{p}X$. \item The number of Morse points of $D_{u(s)}$ which abut to $p$ along $\Gamma$ when $s\to 0$ is: $$m_{\Gamma}(u_{0}) := 1 - \mult_{p}\Gamma + \ord_{t}\sum_{j\ge \mult_{p}\Gamma }h_j(u_{0})t^{j}$$ This number is independent of the choice of the path $u(s)$. \end{enumerate} \end{theorem} \begin{proof} \noindent (1). Let us consider a local branch $\Gamma$ of $X$ at $p\in \Sing X$. Let $\gamma: B \to \Gamma$ be the Puiseux parametrisation of $\Gamma$ at $p$, with $\gamma(0)=p$, where $B$ denotes a small enough disk in $\bC$ centred at $0$. For $t\in B$, and up to a switch of coordinates, we have the presentation: $$x(t) :=x(\gamma(t)) = p_{1} + t^{\alpha} \mbox{ and } y(t) :=y(\gamma(t))= p_{2} + ct^{\beta} +\hot,$$ where $c\not=0$, and $(\alpha, \beta)$ are the first Puiseux exponents of $\gamma$, with $\beta \ge \alpha = \mult_{p}\Gamma>1$. We then have the equivalence: \begin{equation}\label{eq:singeq} \bigl((x(t),y(t)),u\bigr)\in \cE_X \Longleftrightarrow x'(t)(x(t) -u_{1}) + y'(t)(y(t) -u_{2}) = 0. \end{equation} The equivalence \eqref{eq:singeq} means that, for a path $u(s)$ with $u(0)=u_{0}$ as in the proof of Theorem \ref{t:morseinfty}, the number of Morse points which abut to $p$ is precisely the maximal number of solutions in $t$ of the equation: \begin{equation}\label{eq:Morsesol} h(t,u) := x'(t)\bigl( x(t) -u_{1}\bigr) + y'(t)\bigl( y(t) -u_{2}\bigr) = 0 \end{equation} which converge to 0 when $s\to 0$.
We have: \[ x'(t)\bigl( x(t) -u_{1}\bigr) = \alpha (p_{1}-u_{1})t^{\alpha-1} + \alpha t^{2\alpha-1},\] \[ y'(t)\bigl( y(t) -u_{2}\bigr) = c\beta (p_{2}-u_{2})t^{\beta-1} + c^{2}\beta t^{2\beta-1} + \hot .\] We write $h(t,u)= \sum_{j\ge 0}h_j(u)t^{j}$ as a holomorphic function of the variable $t$ with coefficients depending on $u$. For any $j\ge 0$, the coefficient $h_j(u)$ is a linear function in $u_{1}, u_{2}$, possibly identically zero. Let $h(t,u) = t^{\alpha-1}\widetilde h(t,u)$, where $\widetilde h(t,u) := \sum_{j\ge \alpha-1}h_j(u)t^{j-\alpha+1}$. Note that the constant term in $\widetilde h(t,u) $ as a series in the variable $t$ is $h_{\alpha-1}(u) = \alpha(p_{1}-u_{1})$ if $\alpha <\beta$, or $h_{\alpha-1}(u) = \alpha(p_{1}-u_{1}) + c\beta (p_{2}-u_{2})$ if $\alpha =\beta$. It follows that either the line $L :=\{ u_{1} - p_{1}=0\}$, or the line $L :=\{ \alpha(p_{1}-u_{1}) + c\beta (p_{2}-u_{2})=0\}$, respectively, is included in the discriminant $\Delta^{\sing}$. Note that this line is the normal at $p$ to the tangent cone of the branch $\Gamma$ in the coordinate system that we have set in the beginning. This proves point (1) of our statement.\\ \noindent (2). Let us now fix a point $u_{0}$ on a line $L\in \NCone_{p}X$. In order to compute how many Morse points abut to $p$ along $\Gamma$ when approaching $u_{0}$, let us consider a small disk $B\subset \bC $ centred at the origin and a continuous path $u: B \to \bC^{2}$ such that $u(0) = u_{0}$, and such that $h_{\alpha-1}(u(s)) \not= 0$ for all $s\not=0$. Replacing $u$ by $u(s)$ yields: $$\widetilde h(t,u(s)) = \sum_{j\ge \alpha-1}h_j(u(s))t^{j-\alpha +1}.$$ The number of Morse points which abut to $p$ is then the number of solutions in the variable $t$ of the equation $\widetilde h(t,u(s))=0$ which converge to 0 when $s\to 0$.
This is precisely equal to $\ord_{t}\sum_{j\ge \alpha}h_j(u_{0})t^{j-\alpha+1}$. In particular, this number is independent of the choice of the path $u(s)$. \end{proof} \begin{remark}\label{r:exceptionalpoint} There is a generic Morse number $m_{p, \Gamma}(u)$ for all $u\in L$, except possibly a unique exceptional point $\tilde u$ of $L$ for which the number is higher, or $D_{\tilde u}$ has a non-isolated singularity. In Example \ref{ss:cusp}, we have $X:= \{x^{3}=y^{2}\}$, and then $\Delta^{\sing} = \NCone_{(0,0)}X$ is the axis $u_{1}=0$. The generic Morse number $m_{(0,0), X}(u)$ is 1, and the exceptional point of this line is $\tilde u=(0,0)$, where the Morse number is 2. \end{remark} \medskip \subsection{The regular discriminant $\Delta^{\reg}$, and Morse attractors on $X_{\reg}$.}\label{ss:regdiscrim}\ We recall from \S\ref{ss:discrim} that $\Delta^{\reg}$ denotes the subset of points $u\in\Delta_{\ED}(X)$ such that $D_u$ has a degenerate critical point\footnote{In the classical real setting, this is known as an \emph{evolute}, or a \emph{caustic}, see e.g. \cite{Mi}, \cite{Tr},\cite{CT}.} on $X_{\reg}$. Such a singularity is an attractor for at least 2 Morse points of some generic small deformation of $D_u$. Let $(x(t),y(t))$ be a local parametrisation of $X$ at some point $p := (x(t_0),y(t_0))\in X_{\reg}$. \begin{definition}\label{d:flex} A point $p= (x(t_0),y(t_0))\in X_{\reg}$ is called a \emph{flex point} if $x'(t_0)y''(t_0)-y'(t_0)x''(t_0) = 0$. We say that the tangent line to $X_{\reg}$ at $p$ is \emph{isotropic} if it verifies the condition $(x'(t_0))^2+(y'(t_0))^2=0$. These two definitions do not depend on the chosen parametrisation. \end{definition} \begin{theorem}\label{t:reg} Let $p=(p_1,p_2)\in X_{\reg}$ and $(x(t),y(t))$ be a local parametrisation for $X$ at $p$. \begin{enumerate} \item If $p$ is not a flex point, then there exists a unique $u\in\bC^2$ such that $p$ is an attractor for $D_u$. 
\item If $p$ is a flex point, and if the tangent to $X$ at $p$ is isotropic, then $p$ is an attractor for $D_u$ for every point $u$ on the line tangent to $X$ at $p$. \item If $p$ is a flex point, and if the tangent to $X$ at $p$ is not isotropic, then $p$ is not an attractor for $D_u$, $\forall u\in\Delta_{\ED}(X)$. \end{enumerate} \end{theorem} \begin{proof} We recall that: $$h(t,u) := x'(t)\bigl( x(t) -u_{1}\bigr) + y'(t)\bigl( y(t) -u_{2}\bigr). $$ Let $t_0$ be the point in the domain of the parametrisation $(x(t),y(t))$ such that $(x(t_0),y(t_0))=p$. The Taylor series at $t_0$ are: \[ \begin{cases} x(t)=p_1+x'(t_0)(t-t_0)+ \frac{x''(t_0)}{2} \cdot (t-t_0)^2 +\hot\\ y(t)=p_2+y'(t_0)(t-t_0)+ \frac{y''(t_0)}{2} \cdot (t-t_0)^2 +\hot \end{cases} \] and therefore: \begin{align*} h(t,u)=&\bigl[ x'(t_0)(p_1-u_1)+y'(t_0)(p_2-u_2)\bigr] + \\ &\bigl[ (x'(t_0))^2+x''(t_0)(p_1-u_1)+(y'(t_0))^2+y''(t_0)(p_2-u_2)\bigr] (t-t_0)+ \hot \end{align*} The point $p\in X_{\reg}$ is an attractor (for at least 2 Morse points) if and only if $\ord_{t_{0}}h(t,u) >1$, thus if and only if $u=(u_{1}, u_{2})$ is a solution of the linear system: \[ \begin{cases} x'(t_0)(p_1-u_1)+y'(t_0)(p_2-u_2)=0\\ x''(t_0)(p_1-u_1)+y''(t_0)(p_2-u_2)=-(x'(t_0))^2-(y'(t_0))^2 \end{cases} \] If its determinant $D = x'(t_0)y''(t_0)-y'(t_0)x''(t_0)$ is not 0, then the system has a unique solution\footnote{This corresponds in the real geometry to the familiar fact that for every non-flex point there is a unique focal centre on the normal line to $X$ through $p$.}. If $D =0$, then the system has solutions if and only if $(x'(t_0))^2+(y'(t_0))^2=0$, and in this case the set of solutions is the line passing through $p$, normal to $X$, and thus tangent to $X$ because it is parallel to one of the two isotropic lines $\{x\pm iy =0\}$. 
\end{proof} \subsection{Structure of $\Delta^{\reg}$}\label{ss:reg}\ \\ As we have seen, $\Delta^{\reg}$ may have line components due to Theorem \ref{t:reg}(b), see also Example \ref{ex:regatyp}. We also have: \begin{corollary}\label{c:reg} If $X$ is irreducible and is not a line, then $\Delta^{\reg}$ has a unique component which is not a line. \end{corollary} \begin{proof} Since the non-flex points are dense in $X$ whenever $X$ is not a line, we may choose a non-flex point $p\in X_{\reg}$, and we consider a local parametrisation $(x(t),y(t))$ at $p:= (x(0),y(0))$. Then the system: \[ \begin{cases} x'(t)(x(t)-u_1)+y'(t)(y(t)-u_2)=0\\ x''(t)(x(t)-u_1)+y''(t)(y(t)-u_2)=-(x'(t))^2-(y'(t))^2 \end{cases} \] has a unique solution $u(t)=(u_{1}(t), u_{2}(t))$ for $t$ close enough to 0. We therefore obtain a parametrisation for the germ of $\Delta^{\reg}$ at $\tilde u = u(0)$, exactly as in the classical real setting (see e.g. ``evolute'' in Wikipedia \cite{Wiki}): \[\begin{cases} u_1(t)=x(t)-\frac{y'(t)((x'(t))^2+(y'(t))^2)}{x'(t)y''(t)-y'(t)x''(t)}\\ u_2(t)=y(t)+\frac{x'(t)((x'(t))^2+(y'(t))^2)}{x'(t)y''(t)-y'(t)x''(t)}. \end{cases}\] This germ of $\Delta^{\reg}$ at $\tilde u$ cannot be a line. Indeed, by taking the derivative with respect to $t$ of the first equation of the system, we get: $$ x''(t)(x(t)-u_1(t))+(x'(t))^2-x'(t)u'_1(t)+y''(t)(y(t)-u_2(t))+(y'(t))^2-y'(t)u'_2(t)=0.$$ By using the second equation of the system we deduce: $$x'(t)u'_1(t)+y'(t)u'_2(t)=0.$$ The germ of $\Delta^{\reg}$ at $u(0)$ is a line if and only if $u'_1(t)/u'_2(t)$ is constant for all $t$ close enough to 0, which by the above equation is equivalent to $x'(t)/y'(t)=$ const. This implies that $X$ is a line at $p$, thus it is an affine line, contradicting our assumption. \end{proof} \subsection{Morse numbers at attractors on $X_{\reg}$.}\label{ss:morsereg} Let $u_{0}\in \Delta^{\reg}(X)$ and let $p\in \Sing {D_{u_{0}}}_{|X}\cap X_{\reg}$.
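Before turning to Morse numbers, we record a symbolic sanity check of the evolute parametrisation from the proof of Corollary \ref{c:reg}, on the real parabola $(x(t),y(t))=(t,t^2)$; this is a sympy sketch of ours, not the authors' computation:

```python
# Evolute of the parabola (t, t^2), following the parametrisation of
# Delta^reg obtained in the proof of Corollary c:reg.
import sympy as sp

t = sp.symbols('t')
x, y = t, t**2
xp, yp = sp.diff(x, t), sp.diff(y, t)
xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)

D = xp*ypp - yp*xpp                          # determinant; the parabola has no flex points
u1 = sp.simplify(x - yp*(xp**2 + yp**2)/D)   # -> -4*t**3
u2 = sp.simplify(y + xp*(xp**2 + yp**2)/D)   # ->  3*t**2 + 1/2
# the classical evolute of the parabola: a semicubical curve, not a line

# the relation x'(t) u1'(t) + y'(t) u2'(t) = 0 derived in the proof:
rel = sp.simplify(xp*sp.diff(u1, t) + yp*sp.diff(u2, t))   # -> 0
```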
We call the \emph{Morse number at $p\in X_{\reg}$}, denoted by $m_{p}$, the number of Morse points which abut to $p$ as $s\to 0$ in a Morse deformation $D_{u(s)}$ with $u(0)=u_{0}$. A point $p\in \Sing {D_{u_{0}}}_{|X_{\reg}}$ is an \emph{attractor} if $m_{p}\ge 2$, see \S\ref{ss:attract} point (3). An attractor is therefore a singularity of ${D_{u_{0}}}_{|X_{\reg}}$ at $p$ which is not Morse. \begin{theorem}[Morse number at an attractor on $X_{\reg}$] \ \\ The Morse number at $p\in \Sing {D_{u_{0}}}_{|X_{\reg}}$ is: \[ m_{p} = \mult_{p}\bigl( X, \{D_{u_{0}}= D_{u_{0}}(p)\}\bigr) - 1. \] \end{theorem} \begin{proof} This is a consequence of general classical results, as follows. The Milnor number of a holomorphic function germ $f: (X,p)\to (\bC,0)$ with isolated singularity at a smooth point $p\in X_{\reg}$ is equal to the number of Morse points in some Morsification $f_{s}$ which abut to $p$ when $s\to 0$, cf. Brieskorn \cite{Bri}, and see also \cite{Ti3} for a more general statement. On the other hand, the Milnor number of $f$ at $p\in X_{\reg}$, in case $\dim_{p}X=1$, is equal to the multiplicity of $f$ at $p$ minus 1. In our case, the function $f$ is the restriction to $X$ of the Euclidean distance function $D_{u_{0}}$, and therefore this multiplicity equals the intersection multiplicity $\mult_{p}\bigl( X, \{D_{u_{0}}= D_{u_{0}}(p)\}\bigr)$. \end{proof} \section{Examples}\label{s:examples} \begin{example}[The ``complex circle'']\label{ex:morseind}\ Let $X := \{x^{2} + y^{2} = 1\}\subset \bC^{2}$, and $D_{u} := (x-u_{1})^{2} + (y-u_{2})^{2}$. We have $X^{\ity}\cap Q^{\ity} = Q^{\ity} = \{[1;i], [i;1]\}$, and $\EDdeg(X)= 2$. A parametrisation of the unique branch of $X$ at $[1;i]$, which we will denote by $X_{[1;i]}$, is $\gamma: x= \frac{1+t^{2}}{2t}, \ y= \frac{(1-t^{2})i}{2t}$, for $t\to 0$. We get, in the notations of \S\ref{s:struct}: $k=1$, $P(t) = \frac{1+t^{2}}{2}$, $P'(t) = t$, and $Q(t) = \frac{(1-t^{2})i}{2}$, $Q'(t) = -it$.
After all simplifications, we obtain: \[ h(t,u) = (u_{1} +i u_{2})t + (-u_{1} +i u_{2})t^{3} \] which yields $\widetilde{h}(t,u) = (u_{1} +i u_{2}) + (-u_{1} +i u_{2})t^{2}$. This shows that $\Delta^{\atyp}(X_{[1;i]}) = \{ u_{1} +i u_{2} =0\}$, and that $m^{\gen}_{X_{[1;i]}} =2$ in the notations of Remark \ref{r:morseinfty}, which means that there are 2 Morse points which abut to $[1;i]$. The exceptional point on the line is $(0,0)$, for which we get $\widetilde{h}(t,u) \equiv 0$, which means that $\dim \Sing D_{(0,0)} >0$; in other words, $D_{(0,0)}$ has non-isolated singularities on $X$. The study at the other point at infinity $[i;1]$ is similar. By symmetry, we get: $\Delta^{\atyp}(X_{[i;1]}) = \{ u_{1} - i u_{2} =0\}$ and $m^{\gen}_{X_{[i;1]}} =2$, with the same exceptional point $(0,0)$. We get $\Delta^{\atyp} = \Delta^{\atyp}(X_{[i;1]})\cup \Delta^{\atyp}(X_{[1;i]}) = \{u_{1}^{2}+u_{2}^{2}=0\}$, and we actually have: $$\Delta_{T}(X)= \Delta_{\ED}(X)= \Delta^{\atyp}.$$ \end{example} \ \begin{example}[where $\Delta^{\atyp}$ is a line component of $\Delta^{\reg}$]\label{ex:regatyp}\ \\ Let $$X=\{(x,y)\in\bC^2:xy^4=iy^5+y^3-3y^2+3y-1\}.$$ We have $\EDdeg(X) = 10$. We will first find $\Delta^{\atyp}$. Let us observe that $\bar X \cap H^\ity = \{[1;0], [i;1]\}$, and that $\bar X\cap Q^{\ity} = \{[i;1]\}$, thus in order to find $\Delta^{\atyp}$ we have to focus on the point $[i;1]$ only. A local parametrisation of $X$ at $[i;1]$, which is actually global, is given by: $$t\in\bC^*\to (x(t),y(t));\ \ \ x(t)=\frac{i+t^2-3t^3+3t^4-t^5}{t},\ \ y(t)=\frac 1t.$$ By our study of the structure of $\Delta^{\atyp}$ in \S\ref{s:struct}, we get: $k=1$, $P(t)=i+t^2-3t^3+3t^4-t^5$ and $Q(t)=1$. Thus: $$h(t,u)=[(-i)(i-u_1)+(-1)(1-u_2)]t + (-3i-u_1)t^3 + \hot=(iu_1+u_2)t + (-3i-u_1)t^3 + \hot$$ and therefore $\Delta^{\atyp}$ is the line $L :=\{iu_1+u_2=0\}$.
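The low-order coefficients of this expansion can be double-checked symbolically with the formula for $h(t,u)$ from \S\ref{s:struct}; the following sympy sketch is ours, not part of the paper:

```python
# Check of h(t,u) for Example ex:regatyp:
# k = 1, P = i + t^2 - 3t^3 + 3t^4 - t^5, Q = 1.
import sympy as sp

t, u1, u2 = sp.symbols('t u1 u2')
k = 1
P = sp.I + t**2 - 3*t**3 + 3*t**4 - t**5
Q = sp.Integer(1)
h = sp.expand((t*sp.diff(P, t) - k*P)*(P - u1*t**k)
              + (t*sp.diff(Q, t) - k*Q)*(Q - u2*t**k))

c1 = h.coeff(t, 1)   # i*u1 + u2  -> the line Delta^atyp = {i u1 + u2 = 0}
c3 = h.coeff(t, 3)   # -u1 - 3i   -> first surviving coefficient on that line
```

On the line $\{iu_1+u_2=0\}$ the first surviving term has order $3$, in agreement with the generic Morse number $3-1=2$ obtained from Theorem \ref{t:morseinfty}.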
By Theorem \ref{t:morseinfty} and Definition \ref{d:morseinfty}, the generic Morse number at infinity is then $m^{\gen}_{[i;1]} = 3-1 = 2$. \ We claim that the inclusion $\Delta^{\atyp}\subset \Delta^{\reg}$ holds. To prove it, we will use Theorem \ref{t:reg} at the point $p=(i,1)$ and the same global parametrisation, thus at the value $t_0=1$. We have: $$x'(t)=-\frac{i}{t^2}+1-6t+9t^2-4t^3,\ \ \ x''(t)=\frac{2i}{t^3}-6+18t-12t^2,$$ $$y'(t)=-\frac{1}{t^2},\ \ \ y''(t)=\frac{2}{t^3},$$ and for $t_{0}=1$, we get: $$x(1)=i,\ \ \ y(1)=1,\ \ \ x'(1)=-i, \ \ \ x''(1)=2i,\ \ \ y'(1)=-1, \ \ \ y''(1)=2,$$ and $$x'(1)y''(1)-y'(1)x''(1)=(-i)2-(-1)(2i)=0,$$ $$(x'(1))^2+(y'(1))^2=(-i)^2+(-1)^2=0.$$ By Theorem \ref{t:reg}(b), every point $(u_1,u_2)$ which satisfies the equation $$x'(1)(x(1)-u_1)+y'(1)(y(1)-u_2)=0$$ is in $\Delta^{\reg}$. In our case we have: $$x'(1)(x(1)-u_1)+y'(1)(y(1)-u_2)=(-i)(i-u_1)+(-1)(1-u_2)=iu_1+u_2,$$ thus our claim is proved. By \S\ref{ss:reg} it follows that $\Delta^{\reg}$ does not contain any other line component. \end{example} \ \subsection{Isotropic coordinates}\label{ss:isotropic} The examples with atypical discriminant do not occur in the real setting. Indeed, the isotropic points at infinity $Q^{\ity}$ are not real, and the atypical discriminant is not real either (since it consists of lines parallel to the isotropic lines). We obtain real coefficients when we use ``isotropic coordinates'', as follows: $z:=x+iy$, $w:= x-iy$. The data point also becomes: $v_{1} := u_{1}+i u_{2}$, $v_{2} := u_{1}-i u_{2}$, and the Euclidean distance function takes the following hyperbolic shape: \[ D_v(z,w) = (z-v_1)(w-v_2). \] In isotropic coordinates, $Q^{\ity}$ reads $\{zw=0\}$, thus it consists of the two points $[0;1]$ and $[1;0]$. In order to study what happens with the Morse points in the neighbourhood of these two points at infinity, we need to change the variables in the formulas of \S\ref{s:struct}.
So we recall and adapt as follows: Let $\gamma:D\to \Gamma$ be a local holomorphic parametrisation of $\Gamma$ at $\xi\in Q^{\ity}$, where $D$ is some small enough disk in $\bC$ centred at $0$, and $\gamma(0)=\xi$. For $t\in D$, we write $z_{1}(t):=z_{1}(\gamma(t))$ and $z_{2}(t):=z_{2}(\gamma(t))$, where $z_{1}(t)$ and $z_{2}(t)$ are meromorphic on $D$. Then there exists a unique positive integer $k$ such that: $$z_{1}(t)=\frac{P(t)}{t^k}, \ z_{2}(t)=\frac{Q(t)}{t^k},$$ where $P(t)$ and $Q(t)$ are holomorphic on $D$, and $P(0)$ and $Q(0)$ are not both equal to zero. Note that under these notations we have $\xi =[P(0);Q(0)]\in H^{\ity}$. For $t\in D\m\{0\}$ and $v=(v_1,v_2)\in\bC^2$, we then have the equivalence $((z_{1}(t),z_{2}(t)),v)\in \cE_X \Longleftrightarrow h(t,v) =0$, where: \begin{equation}\label{eq:test} h(t,v)=(tP'(t)-kP(t))(Q(t)-v_2t^k)+(tQ'(t)-kQ(t))(P(t)-v_1t^k), \end{equation} and note that $h:D\times\bC^2\to \bC$ is a holomorphic function. \begin{example}\label{e:isotropic} Let $X = \{z_1^2 z_2 - z_1 = 1 \}$ in isotropic coordinates. Then $X^{\infty}\cap Q^{\ity} = Q^{\ity}$ consists of the two isotropic points $[0;1]$ and $[1;0]$. One computes that $\EDdeg(X) =3$. At the point $[0;1]\in Q^{\ity}$, the curve $\bar X$ has a single local branch, which we denote by $X_{[0;1]}$. We use the parametrisation: $z_1 = t = \frac{t^3}{t^2}$, $z_2 = \frac{1+t}{t^2}$. \\ Thus $k=2$ and $P = t^3$, $Q= 1+t$, for $t \to 0$. The condition \eqref{eq:test} becomes: $h(t,v) = 2v_1 t^2 + (v_1-1) t^3 - v_2 t^5$. In the notations of \S\ref{s:struct}, we get: $h_0=h_1=0$, and therefore, by Theorem \ref{t:atyp}, we have that $\Delta^{\atyp}(X_{[0;1]}) = \{v_1 = 0\}$. By Theorem \ref{t:morseinfty}, the Morse number is $m_{X_{[0;1]}}^{\gen}= 3-2=1$ at every point of this line. \begin{figure}[hbt!]
\includegraphics[width=65mm]{broughton-pic} \caption{In blue $\Delta^{\reg}$; in brown $\Delta^{\atyp}$; in green a real picture of $X$.} \end{figure} At the other isotropic point $[1;0]$, we also have a single branch of $X $, which we denote by $X_{[1;0]}$. Using the parametrisation $z_1 = \frac{1}{t}$, $z_2 = \frac{t^3 + t^2}{t}$, we get $k=1$, and $P =1$, $Q= t^2 + t^3$. This yields: $h(t,v) = v_2 t + (1-v_1) t^3 - 2 v_1 t^4$. Since $h_{0}=0$, by Theorem \ref{t:atyp}, we have that $\Delta^{\atyp}(X_{[1;0]}) = \{v_2 = 0\}$. By Theorem \ref{t:morseinfty}, the Morse number is $m_{X_{[1;0]}}^{\gen} =3-1=2$ at all points of this line except at the point of intersection $(1,0) =\{v_2 = 0 = 1-v_1\}$, where the Morse number is $m_{X_{[1;0]}}((1,0)) =4-1=3$. At this exceptional point, all the 3 Morse points abut to infinity at $[1;0]$. Here are some more conclusions: \begin{itemize} \item[$\bullet$] $\Delta^{\atyp} = \{ v_1 v_2 = 0\}$: two lines through the isotropic points, \item[$\bullet$] $\Delta^{\reg} = \{ - v_1^3 +27 v_1^2v_2 + 3v_1^2 -3 v_1 +1 = 0 \}$, as computed with Mathematica \cite{Wo}. \item[$\bullet$] $\Delta^{\sing}=\emptyset$. \end{itemize} Notice that $\Delta^{\atyp}\cap \Delta^{\reg} \ni (1,0)$. We have seen above that when moving the data point from outside the discriminant $\Delta_{T}$ to the point $(1,0)$, the 3 Morse points go to infinity at $[1;0]$. In case we move the data point inside $\Delta^{\reg}$, then both the Morse point and the non-Morse singular point go to infinity. Moving the data point along $\{v_1=0\}$, the single Morse point goes to infinity at $[0;1]$. \end{example} \begin{thebibliography}{1000} \bibitem[Br]{Bri} E.~Brieskorn, {\it Die Monodromie der isolierten Singularit\"{a}ten von Hyperfl\"{a}chen}, Manuscripta Math. (1970), no. 2, 103-161. \bibitem[CT]{CT} F. Catanese, C. Trifogli, \emph{Focal loci of algebraic varieties. I.} Comm. Algebra 28 (2000), no. 12, 6017-6057. \bibitem[DGS]{DGS} S. Di Rocco, L. Gustafsson, L. Sodomaco.
\emph{Conditional Euclidean distance optimization via relative tangency.} preprint arXiv:2310.16766 (2023). \bibitem[DHOST]{DHOST} J.~Draisma, E.~Horobe\c{t}, G.~Ottaviani, B.~Sturmfels and R.~R. Thomas. \emph{The Euclidean distance degree of an algebraic variety.} Found. Comput. Math. 16 (2016), no 1, 99-149. \bibitem[GM]{GM} M. Goresky, R. MacPherson, {\it Stratified Morse theory}, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), 14. Springer-Verlag, Berlin, 1988. \bibitem[Ho1]{Hor2017} E.~Horobe\c{t}, \emph{The data singular and the data isotropic loci for affine cones}. Communications in Algebra, 45 (2017), no. 3, 1177-1186. \bibitem[Ho2]{Ho} E.~Horobe\c{t}, \emph{The critical curvature degree of an algebraic variety.} Journal of Symbolic Computation 121 (2024) 102259. \bibitem[HS]{HS2014} J.~Huh and B.~Sturmfels, \emph{Likelihood geometry}. In: Combinatorial algebraic geometry, Lecture Notes in Math. 2108, pag. 63-117. Springer, Cham, 2014. \bibitem[MRW1]{MRW2018} L.~G. Maxim, J.~I. Rodriguez and B.~Wang. \emph{Euclidean distance degree of the multiview variety.} SIAM J. Appl. Algebra Geom. 4 (2020), no. 1, 28-48. \bibitem[MRW2]{MRW5} L.G. Maxim, J.I. Rodriguez and B.~Wang. \emph{A Morse theoretic approach to non-isolated singularities and applications to optimization.} J. Pure Appl. Algebra 226 (2022), no. 3, Paper No. 106865, 23 pp. \bibitem[MT1]{MT1} L. Maxim, M.~Tib\u{a}r, {\it Euclidean distance degree and limit points in a Morsification}, Adv. in Appl. Math. 152 (2024), Paper No. 102597, 20 pp. \bibitem[MT2]{MT2} L. Maxim, M.~Tib\u{a}r, {\it Morse numbers of function germs with isolated singularities}, Q. J. Math. 74 (2023), no. 4, 1535--1544. \bibitem[MT3]{MT3} L. Maxim, M.~Tib\u{a}r, \emph{Morse numbers of complex polynomials}. J. Topology 17 (2024), no. 4, Paper e12362. \bibitem[Mi]{Mi} J.W. Milnor, Morse theory. Ann. of Math. Stud., No. 51 Princeton University Press, Princeton, NJ, 1963, vi+153 pp. 
\bibitem[NRS]{NRS2010} J.~Nie, K.~Ranestad, and B.~Sturmfels. \emph{The algebraic degree of semidefinite programming}. Math. Program. 2 Ser. A, 122 (2010), 379-405. \bibitem[PST]{PST2017} J.~Ponce, B.~Sturmfels, and M.~Trager, Congruences and concurrent lines in multi-view geometry. Adv. in Appl. Math 88 (2017), 62-91. \bibitem[PRS]{PRS} R. Piene, C. Riener, B. Shapiro, \emph{Return of the plane evolute.} https://arxiv.org/abs/2110.11691 \bibitem[STV]{STV} J. Seade, M. Tib\u ar, A. Verjovsky, {\it Global Euler obstruction and polar invariants}, Math. Ann. 333 (2005), no.2, 393-403. \bibitem[Ti]{Ti3} M. Tib\u ar, \emph{Local linear Morsifications.} Rev. Roum. Math. Pures Appl. 69 (2024), no. 2, 295-303. \bibitem[Tr]{Tr} C. Trifogli, \emph{Focal loci of algebraic hypersurfaces: a general theory.} Geom. Dedicata 70 (1998), no. 1, 1-26. \bibitem[Wiki]{Wiki} Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Evolute \bibitem[Wo]{Wo} Mathematica 14.1. Wolfram Mathematica free software, https://www.wolfram.com/mathematica/ \end{thebibliography} \end{document}
\documentclass[12pt]{article} \usepackage{latexsym, amssymb, amsmath, amscd, amsfonts, epsfig, graphicx, colordvi,verbatim,ifpdf,extarrows} \usepackage{amsfonts, amsmath, amssymb} \usepackage{amssymb,amsfonts,amsmath,latexsym,epsfig,cite, psfrag,eepic,color} \usepackage{amscd,graphics} \usepackage{latexsym, amssymb, amsmath,amscd, amsfonts, epsfig, graphicx, colordvi,amsthm} \usepackage{graphicx} \usepackage{epstopdf} \usepackage{color} \usepackage{ifpdf} \usepackage{fancybox} \usepackage[font=small,labelfont=bf,labelsep=none]{caption} \usepackage{float} \allowdisplaybreaks \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{rem}[thm]{\it Remark} \newtheorem{defi}[thm]{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{core}[thm]{Corollary} \newtheorem{exa}[thm]{Example} \def\pf{\noindent{\it Proof.} } \setcounter{section}{1} \def\qed{\nopagebreak\hfill{\rule{4pt}{7pt}} \medbreak} \setlength{\topmargin}{0.25cm} \setlength{\oddsidemargin}{0.25cm} \setlength{\textwidth}{16cm} \setlength{\textheight}{22.1cm} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \renewcommand{\thetable}{\arabic{section}.\arabic{table}} \numberwithin{equation}{section} \def\setP{\mathop{\cal P}} \def\qed{\nopagebreak\hfill{\rule{4pt}{7pt}} \medbreak} \newenvironment{kst} {\setlength{\leftmargini}{2.4\parindent} \begin{itemize} \setlength{\itemsep}{-0.5mm}} {\end{itemize}} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \setcounter{section}{0} \def \n {NC} \def \f {F} \def \GG {G\"ollnitz-Gordon marking} \newlength{\boxedparwidth} \setlength{\boxedparwidth}{1.\textwidth} \newenvironment{boxedtext} {\begin{center} \begin{tabular}{|@{\hspace{.315in}}c@{\hspace{.15in}}|} \hline \\ \begin{minipage}[t]{\boxedparwidth} \setlength{\parindent}{.25in}} {\end{minipage} \\ \\ \hline \end{tabular} \end{center}} \parskip 6pt \begin{document} \begin{center} {\Large \bf Overpartitions with separated overlined parts and 
non-overlined parts} \end{center} \begin{center} {Y.H. Chen}$^{1}$, {Thomas Y. He}$^{2}$, {Y. Hu}$^{3}$ and {Y.X. Xie}$^{4}$ \vskip 2mm $^{1,2,3,4}$ School of Mathematical Sciences, Sichuan Normal University, Chengdu 610066, P.R. China \vskip 2mm $^[email protected], $^[email protected], $^[email protected], $^[email protected] \end{center} \vskip 6mm {\noindent \bf Abstract.} Recently, Andrews considered the partitions with parts separated by parity, in which parts of a given parity are all smaller than those of the other parity, and if the smaller parity is odd then odd parts must appear. Inspired by the partitions with parts separated by parity, we investigate the overpartitions with separated overlined parts and non-overlined parts, in which the sizes of the overlined parts are greater than (resp. greater than or equal to) the sizes of the non-overlined parts, or in which non-overlined parts must appear and the sizes of the non-overlined parts are greater than (resp. greater than or equal to) the sizes of the overlined parts. \noindent {\bf Keywords}: overpartitions, separable overpartition classes, distinct partitions, overlined parts, non-overlined parts \noindent {\bf AMS Classifications}: 05A17, 11P84 \section{Introduction} A partition $\pi$ of a positive integer $n$ is a finite non-increasing sequence of positive integers $\pi=(\pi_1,\pi_2,\ldots,\pi_m)$ such that $\pi_1+\pi_2+\cdots+\pi_m=n$. The empty sequence forms the only partition of zero. The $\pi_i$ are called the parts of $\pi$. Let $\ell(\pi)$ be the number of parts of $\pi$. The weight of $\pi$ is the sum of its parts, denoted by $|\pi|$. We use $\mathcal{P}(n)$ to denote the set of all partitions of $n$. The conjugate $\pi'$ of a partition $\pi$ is the partition whose $i$-th part $\pi'_i$ is the number of parts of $\pi$ greater than or equal to $i$.
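The conjugate can be computed directly from this definition; here is a small Python sketch (the helper name is ours, not from the paper):

```python
def conjugate(pi):
    """Conjugate of a partition, given as a non-increasing list of parts:
    the i-th part of the conjugate is the number of parts of pi that are >= i."""
    if not pi:
        return []
    return [sum(1 for part in pi if part >= i) for i in range(1, pi[0] + 1)]
```

For instance, `conjugate([3, 1])` returns `[2, 1, 1]`, and conjugation is an involution: applying it twice recovers the original partition.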
In \cite{Andrews-2018,Andrews-2019}, Andrews considered partitions in which parts of a given parity are all smaller than those of the other parity, and if the smaller parity is odd then odd parts must appear. Andrews designated both cases: the case where the even (resp. odd) parts are distinct, with the couplet ``$ed$" (resp. ``$od$"), and the case where the even (resp. odd) parts may appear an unlimited number of times, with the couplet ``$eu$" (resp. ``$ou$"). The eight resulting partition functions are denoted $p^{zw}_{xy}$, where $xy$ constrains the smaller parts and $zw$ the larger parts. In \cite{Andrews-2022}, Andrews introduced separable integer partition classes and analyzed some well-known theorems. By virtue of separable integer partition classes with modulus $2$, Passary studied $p^{ou}_{eu}$ and $p^{eu}_{ou}$ in \cite[Section 3.2]{Passary-2019}, and Chen, He, Tang and Wei investigated the remaining six cases in \cite[Section 3]{Chen-He-Tang-Wei-2024}. Inspired by the partitions with parts separated by parity, we consider the overpartitions with separated overlined parts and non-overlined parts. An overpartition, introduced by Corteel and Lovejoy \cite{Corteel-Lovejoy-2004}, is a partition such that the first occurrence of a part can be overlined. For example, there are fourteen overpartitions of $4$: \[(4),(\overline{4}),(3,1),(\overline{3},1),(3,\overline{1}),(\overline{3},\overline{1}),(2,2),(\overline{2},2), \] \[(2,1,1),(\overline{2},1,1),(2,\overline{1},1),(\overline{2},\overline{1},1),(1,1,1,1),(\overline{1},1,1,1).\] We will use $\overline{\mathcal{P}}$ to denote the set of all overpartitions. Let $\pi$ be an overpartition. We sometimes write $\pi=\left(1^{f_1(\pi)}\bar{1}^{f_{\bar{1}}(\pi)}2^{f_2(\pi)}\bar{2}^{f_{\bar{2}}(\pi)}\cdots\right)$, where $f_t(\pi)$ (resp. $f_{\bar{t}}(\pi)$) denotes the number of parts equal to $t$ (resp. $\bar{t}$) in $\pi$.
For a part $\pi_i$ of $\pi$, we say that $\pi_i$ is of size $t$ if $\pi_i=t$ or $\overline{t}$, denoted $|\pi_i|=t$. We define $t+b$ (resp. $\overline{t}+b$) as a non-overlined part (resp. an overlined part) of size $t+b$. For ease of exposition, we use the following notation related to an overpartition $\pi$: \begin{itemize} \item $LN(\pi)$ (resp. $SN(\pi)$) is the size of the largest (resp. smallest) non-overlined part of $\pi$ if there exist non-overlined parts in $\pi$, and $LN(\pi)=0$ (resp. $SN(\pi)=0$) otherwise; \item $LO(\pi)$ (resp. $SO(\pi)$) is the size of the largest (resp. smallest) overlined part of $\pi$ if there exist overlined parts in $\pi$, and $LO(\pi)=0$ (resp. $SO(\pi)=0$) otherwise; \item $\ell_{N\geq O}(\pi)$ (resp. $\ell_{N>O}(\pi)$) is the number of non-overlined parts of $\pi$ with size greater than or equal to (resp. greater than) $SO(\pi)$; \item $\ell_{O\geq N}(\pi)$ (resp. $\ell_{O>N}(\pi)$) is the number of overlined parts of $\pi$ with size greater than or equal to (resp. greater than) $SN(\pi)$. \end{itemize} In this article, we investigate four types of overpartitions. \begin{itemize} \item[(1)] $\mathcal{G}_N^{O}$: the set of overpartitions such that the sizes of the overlined parts (if any) are greater than the sizes of the non-overlined parts; \item[(2)] $\mathcal{E}_N^{O}$: the set of overpartitions such that the sizes of the overlined parts (if any) are greater than or equal to the sizes of the non-overlined parts; \item[(3)] $\mathcal{G}_O^{N}$: the set of overpartitions such that non-overlined parts must appear and the sizes of the non-overlined parts are greater than the sizes of the overlined parts; \item[(4)] $\mathcal{E}_O^{N}$: the set of overpartitions such that non-overlined parts must appear and the sizes of the non-overlined parts are greater than or equal to the sizes of the overlined parts.
\end{itemize} For example, if we consider the overpartitions of $4$, then we have \[(4),(\overline{4}),(3,1),(\overline{3},1),(\overline{3},\overline{1}),(2,2),(2,1,1),(\overline{2},1,1),(1,1,1,1)\in\mathcal{G}_N^{O};\] \[(4),(\overline{4}),(3,1),(\overline{3},1),(\overline{3},\overline{1}),(2,2),(\overline{2},2),(2,1,1),(\overline{2},1,1),(\overline{2},\overline{1},1),(1,1,1,1),(\overline{1},1,1,1)\in\mathcal{E}_N^{O};\] \[(4),(3,1),(3,\overline{1}),(2,2),(2,1,1),(1,1,1,1)\in\mathcal{G}_O^{N};\] \[(4),(3,1),(3,\overline{1}),(2,2),(\overline{2},2),(2,1,1),(2,\overline{1},1),(1,1,1,1),(\overline{1},1,1,1)\in\mathcal{E}_O^{N}.\] In other words, $\mathcal{G}_N^{O}$ (resp. $\mathcal{E}_N^{O}$) is the set of overpartitions $\pi$ such that if $SO(\pi)\geq 1$ then $SO(\pi)>LN(\pi)$ (resp. $SO(\pi)\geq LN(\pi)$), and $\mathcal{G}_O^{N}$ (resp. $\mathcal{E}_O^{N}$) is the set of overpartitions $\pi$ such that $SN(\pi)\geq 1$ and $SN(\pi)>LO(\pi)$ (resp. $SN(\pi)\geq LO(\pi)$). For a nonempty overpartition $\pi$, it is clear that $\ell_{N\geq O}(\pi)=0$ if $\pi\in\mathcal{G}_N^{O}$ and $SO(\pi)\geq 1$, $\ell_{N>O}(\pi)=0$ if $\pi\in\mathcal{E}_N^{O}$ and $SO(\pi)\geq 1$, $\ell_{O\geq N}(\pi)=0$ if $\pi\in\mathcal{G}_O^{N}$, and $\ell_{O>N}(\pi)=0$ if $\pi\in\mathcal{E}_O^{N}$. We will show that $\mathcal{G}_N^{O}$, $\mathcal{E}_N^{O}$, $\mathcal{G}_O^{N}$ and $\mathcal{E}_O^{N}$ are separable overpartition classes introduced by Chen, He, Tang and Wei \cite[Definition 4.1]{Chen-He-Tang-Wei-2024}, which is an extension of separable integer partition classes given by Andrews \cite{Andrews-2022}. 
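The four sets can be checked against the lists above by brute force. In the following sketch (our own helper names, not from the text), an overpartition is encoded as a partition together with the set of distinct sizes whose first occurrence is overlined:

```python
from itertools import combinations

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out.append((k,) + rest)
    return out

def overpartitions(n):
    """All overpartitions of n as (parts, overlined sizes) pairs."""
    out = []
    for p in partitions(n):
        vals = sorted(set(p))
        for r in range(len(vals) + 1):
            for marked in combinations(vals, r):
                out.append((p, frozenset(marked)))
    return out

def sizes(op):
    """Split an overpartition into (overlined sizes, non-overlined sizes)."""
    p, marked = op
    non = list(p)
    for t in marked:
        non.remove(t)          # remove the single overlined copy of size t
    return sorted(marked), non

def in_G_N_O(op):
    over, non = sizes(op)
    return not over or not non or min(over) > max(non)

def in_E_N_O(op):
    over, non = sizes(op)
    return not over or not non or min(over) >= max(non)

def in_G_O_N(op):
    over, non = sizes(op)
    return bool(non) and (not over or min(non) > max(over))

def in_E_O_N(op):
    over, non = sizes(op)
    return bool(non) and (not over or min(non) >= max(over))

counts = [sum(map(f, overpartitions(4)))
          for f in (in_G_N_O, in_E_N_O, in_G_O_N, in_E_O_N)]
print(counts)  # [9, 12, 6, 9]
```

The printed counts agree with the nine, twelve, six and nine overpartitions of $4$ listed above.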
\begin{defi}[separable overpartition classes] A separable overpartition class $\mathcal{P}$ is a subset of all the overpartitions satisfying the following{\rm:} There is a subset $\mathcal{B}\subset\mathcal{P}$ {\rm(}$\mathcal{B}$ is called the basis of $\mathcal{P}${\rm)} such that for each integer $m\geq 1$, the number of overpartitions in $\mathcal{B}$ with $m$ parts is finite and every overpartition in $\mathcal{P}$ with $m$ parts is uniquely of the form \begin{equation}\label{over-form-1} (b_1+\pi_1)+(b_2+\pi_2)+\cdots+(b_m+\pi_m), \end{equation} where $(b_1,b_2,\ldots,b_m)$ is an overpartition in $\mathcal{B}$ and $(\pi_1,\pi_2,\ldots,\pi_m)$ is a non-increasing sequence of nonnegative integers. Moreover, all overpartitions of the form \eqref{over-form-1} are in $\mathcal{P}$. \end{defi} For $m\geq 1$, let $b_\mathcal{B}(m)$ be the generating function for the overpartitions in $\mathcal{B}$ with $m$ parts. The generating function for the overpartitions in $\mathcal{P}$ with $m$ parts is \[\frac{b_\mathcal{B}(m)}{(q;q)_m}.\] Here and in the sequel, we assume that $|q|<1$ and use the standard notation \cite{Andrews-1976}: \[(a;q)_\infty=\prod_{i=0}^{\infty}(1-aq^i)\quad\text{and}\quad(a;q)_n=\frac{(a;q)_\infty}{(aq^n;q)_\infty}.\] Motivated by Kim, Kim and Lovejoy \cite{Kim-Kim-Lovejoy-2021} and Lin and Lin \cite{Lin-Lin-2024}, we consider the following partition functions. \begin{itemize} \item[(1)] Let ${EG}_N^{O}(n)$ (resp. ${OG}_N^{O}(n)$) be the number of overpartitions of $n$ such that there is an even (resp. odd) number of non-overlined parts whose sizes are not less than the sizes of all overlined parts; \item[(2)] Let ${EE}_N^{O}(n)$ (resp. ${OE}_N^{O}(n)$) be the number of overpartitions of $n$ such that there is an even (resp. odd) number of non-overlined parts whose sizes are not less than or equal to the sizes of all overlined parts; \item[(3)] Let ${EG}_O^{N}(n)$ (resp.
${OG}_O^{N}(n)$) be the number of overpartitions of $n$ such that non-overlined parts must appear and there is an even (resp. odd) number of overlined parts whose sizes are not less than the sizes of all non-overlined parts; \item[(4)] Let ${EE}_O^{N}(n)$ (resp. ${OE}_O^{N}(n)$) be the number of overpartitions of $n$ such that non-overlined parts must appear and there is an even (resp. odd) number of overlined parts whose sizes are not less than or equal to the sizes of all non-overlined parts. \end{itemize} In other words, we can rewrite the definitions above as follows. \begin{itemize} \item[(1)] ${EG}_N^{O}(n)$ (resp. ${OG}_N^{O}(n)$) is the number of overpartitions $\pi$ of $n$ with $\ell_{N\geq O}(\pi)$ being even (resp. odd); \item[(2)] ${EE}_N^{O}(n)$ (resp. ${OE}_N^{O}(n)$) is the number of overpartitions $\pi$ of $n$ with $\ell_{N>O}(\pi)$ being even (resp. odd); \item[(3)] ${EG}_O^{N}(n)$ (resp. ${OG}_O^{N}(n)$) is the number of overpartitions $\pi$ of $n$ such that $SN(\pi)\geq 1$ and $\ell_{O\geq N}(\pi)$ is even (resp. odd); \item[(4)] ${EE}_O^{N}(n)$ (resp. ${OE}_O^{N}(n)$) is the number of overpartitions $\pi$ of $n$ such that $SN(\pi)\geq 1$ and $\ell_{O>N}(\pi)$ is even (resp. odd). \end{itemize} This article is organized as follows. We list the results of this article in Section 2. In Section 3, we recall some necessary identities and give an involution on $\overline{\mathcal{P}}$. Then, we prove the results of Sections 2.1 and 2.2 in Sections 4 and 5, respectively. \section{The results of this article} In this section, we will list the results related to $\mathcal{G}_N^{O}$ and $\mathcal{E}_N^{O}$ in Section 2.1 and the results related to $\mathcal{G}_O^{N}$ and $\mathcal{E}_O^{N}$ in Section 2.2.
\subsection{Results related to $\mathcal{G}_N^{O}$ and $\mathcal{E}_N^{O}$} In view of separable overpartition classes, we can get \begin{equation}\label{gen-G-O-N} \sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}=(-q;q)_\infty^2 \end{equation} and \begin{equation}\label{gen-E-O-N} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}=(-q;q)_\infty^2+\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{equation} We will give another two proofs of \eqref{gen-G-O-N} and \eqref{gen-E-O-N} in terms of the largest non-overlined part and the smallest overlined part. For $n\geq 0$, let ${G}_N^{O}(n)$ be the number of overpartitions of $n$ in $\mathcal{G}_N^{O}$ and let $D_2(n)$ be the number of pairs of distinct partitions $(\alpha,\beta)$ such that $|\alpha|+|\beta|=n$. As a corollary of \eqref{gen-G-O-N}, we get \begin{core}\label{core-g-d2} For $n\geq0$, \[{G}_N^{O}(n)=D_2(n).\] \end{core} In \cite{Andrews-Newman-2019}, Andrews and Newman undertook a combinatorial study of the minimal excludant of a partition, which was earlier introduced by Grabner and Knopfmacher \cite{Grabner-Knopfmacher-2006} under the name ``smallest gap". The minimal excludant of a partition $\pi$ is the smallest positive integer that is not a part of $\pi$, denoted $mex(\pi)$. For $n\geq 0$, Andrews and Newman defined \[\sigma mex(n)=\sum_{\pi\in\mathcal{P}(n)}mex(\pi).\] They gave two proofs of the following theorem. \begin{thm}\label{simgamex-dn} For $n\geq0$, \[\sigma mex(n)=D_2(n).\] \end{thm} Ballantine and Merca \cite{Ballantine-Merca-2021} presented a combinatorial proof of Theorem \ref{simgamex-dn}. By Corollary \ref{core-g-d2} and Theorem \ref{simgamex-dn}, we get the following corollary. 
\begin{core}\label{core-g-sigma} For $n\geq0$, \[{G}_N^{O}(n)=\sigma mex(n).\] \end{core} For $n\geq 1$, let ${E}_{ON}(n)$ denote the number of overpartitions of $n$ such that the size of the smallest overlined part equals the size of the largest non-overlined part, that is, ${E}_{ON}(n)$ is the number of overpartitions $\pi$ of $n$ with $SO(\pi)=LN(\pi)$. For example, there are seven overpartitions counted by ${E}_{ON}(6)$, and so we have ${E}_{ON}(6)=7$. \[(\overline{4},\overline{1},1),(\overline{3},3),(\overline{3},\overline{1},1,1),(\overline{2},2,2),(\overline{2},2,1,1),(\overline{2},\overline{1},1,1,1),(\overline{1},1,1,1,1,1).\] It follows from \eqref{gen-G-O-N} and \eqref{gen-E-O-N} that \begin{equation}\label{gen-E-O=N} \sum_{n\geq 1}{E}_{ON}(n)q^{n}=\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{equation} We will give another analytic proof of \eqref{gen-E-O=N}. For $n\geq 1$, let $R(n)$ denote the number of partitions of $n$ with repeated parts. For example, we have $R(6)=7$ since there are seven partitions of $6$ with repeated parts. \[(4,1,1),(3,3),(3,1,1,1),(2,2,2),(2,2,1,1),(2,1,1,1,1),(1,1,1,1,1,1).\] Clearly, the right hand side of \eqref{gen-E-O=N} is the generating function of $R(n)$. So, we have \begin{core}\label{core-ON=R} For $n\geq1$, \[{E}_{ON}(n)=R(n).\] \end{core} Now, we turn to ${EG}_N^{O}(n)$, ${OG}_N^{O}(n)$, ${EE}_N^{O}(n)$ and ${OE}_N^{O}(n)$. \begin{thm}\label{EG-OG-THM} For $n\geq 1$, \begin{equation*}\label{EG-OG-eqn-1} {EG}_N^{O}(n)-{OG}_N^{O}(n)\equiv0\pmod 2, \end{equation*} and \begin{equation*}\label{EG-OG-eqn-2} {EG}_N^{O}(n)-{OG}_N^{O}(n)\geq 0\text{ with strict inequality if }n\geq 3. \end{equation*} \end{thm} Let $p_e(n)$ denote the number of partitions of $n$ with an even number of parts. \begin{thm}\label{EE-OE-THM} For $n\geq 1$, \begin{equation*}\label{EE-OE-eqn-1} {EE}_N^{O}(n)-{OE}_N^{O}(n)=2p_e(n). 
\end{equation*} \end{thm} This yields the following corollary. \begin{core} For $n\geq 1$, \begin{equation*} {EE}_N^{O}(n)-{OE}_N^{O}(n)\equiv0\pmod 2, \end{equation*} and \begin{equation*} {EE}_N^{O}(n)-{OE}_N^{O}(n)\geq 0\text{ with strict inequality if }n\geq 2. \end{equation*} \end{core} In Section 4, we will prove \eqref{gen-G-O-N}, \eqref{gen-E-O-N}, and Theorems \ref{EG-OG-THM} and \ref{EE-OE-THM}, and we will present an analytic proof of \eqref{gen-E-O=N} and bijective proofs of Corollaries \ref{core-g-d2}, \ref{core-g-sigma} and \ref{core-ON=R}. \subsection{Results related to $\mathcal{G}_O^{N}$ and $\mathcal{E}_O^{N}$} By virtue of separable overpartition classes, we can get \begin{equation}\label{gen-G-N-O} \sum_{\pi\in\mathcal{G}_O^{N}}q^{|\pi|}=\sum_{m\geq1}\frac{1}{(q;q)_m} \sum_{k=0}^{m-1}q^{(k+1)m-\binom{k+1}{2}}, \end{equation} and \begin{equation}\label{gen-E-N-O} \sum_{\pi\in\mathcal{E}_O^{N}}q^{|\pi|}=\sum_{m\geq1}\frac{1}{(q;q)_m} \Bigg[q^m+\sum_{k=1}^{m-1}q^{km-\binom{k}{2}}\Bigg]. \end{equation} By considering the smallest non-overlined part, we can get \begin{equation}\label{gen-G-N-O-new} \sum_{\pi\in\mathcal{G}_O^{N}}q^{|\pi|}=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}=(-q;q)_\infty\sum_{n\geq0} \frac{q^{2n+1}}{1-q^{2n+1}}\frac{1}{(q^2;q^2)_n} \end{equation} and \begin{equation}\label{gen-E-N-O-new} \sum_{\pi\in\mathcal{E}_O^{N}}q^{|\pi|}=(-q;q)_\infty\sum_{n\geq0} \frac{q^{2n+1}}{1-q^{2n+1}}\frac{1}{(q^2;q^2)_n}+\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{equation} As a corollary of \eqref{gen-G-N-O-new}, we can get \begin{core} For $n\geq 1$, the number of overpartitions of $n$ in $\mathcal{G}_O^{N}$ equals the number of overpartitions of $n$ such that the largest non-overlined part is odd and the non-overlined parts less than the largest non-overlined part are even. \end{core} In \cite{Cohen-1988}, Cohen observed the following identity: \begin{equation*} \sum_{n\geq 1}q^n(q^2;q^2)_{n-1}=\sum_{n\geq 1}\frac{(-1)^{n-1}q^{n^2}}{(q;q^2)_n}.
\end{equation*} Combining with \eqref{gen-G-N-O-new}, we can get \[\sum_{\pi\in\mathcal{G}_O^{N}}q^{|\pi|}=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}\frac{(-1)^{n-1}q^{n^2}}{(q;q^2)_n}.\] Inspired by the minimal excludant of a partition, Chern \cite{Chern-2021} investigated the maximal excludant of a partition. The maximal excludant of a partition $\pi$ is the largest nonnegative integer smaller than the largest part of $\pi$ that does not appear in $\pi$, denoted $maex(\pi)$. For $n\geq 1$, Chern defined \[\sigma maex(n)=\sum_{\pi\in\mathcal{P}(n)}maex(\pi).\] Let $\sigma L(n)$ be the sum of largest parts over all partitions of $n$, that is, \[\sigma L(n)=\sum_{\pi\in\mathcal{P}(n)}LN(\pi).\] Chern \cite{Chern-2021} obtained the following identity: \[\sum_{n\geq 1}(\sigma L(n)-\sigma maex(n))q^n=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}.\] Combining with \eqref{gen-G-N-O-new}, we have \begin{core}\label{equiv-g-sigma} For $n\geq 1$, the number of overpartitions of $n$ in $\mathcal{G}_O^{N}$ is $\sigma L(n)-\sigma maex(n)$. \end{core} For $n\geq 1$, let ${E}_{NO}(n)$ denote the number of overpartitions of $n$ such that the size of the smallest non-overlined part equals the size of the largest overlined part, that is, ${E}_{NO}(n)$ is the number of overpartitions $\pi$ of $n$ with $SN(\pi)=LO(\pi)$. For example, there are seven overpartitions counted by ${E}_{NO}(6)$, and so we have ${E}_{NO}(6)=7$. \[(4,\overline{1},1),(\overline{3},3),(3,\overline{1},1,1),(\overline{2},2,2),(2,2,\overline{1},1),(2,\overline{1},1,1,1),(\overline{1},1,1,1,1,1).\] It follows from \eqref{gen-G-N-O-new} and \eqref{gen-E-N-O-new} that \begin{equation}\label{gen-E-N=O} \sum_{n\geq 1}{E}_{NO}(n)q^{n}=\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{equation} Recall that the right hand side of \eqref{gen-E-N=O} is the generating function for $R(n)$, so we have \begin{core}\label{core-NO=R} For $n\geq1$, \[{E}_{NO}(n)=R(n).\] \end{core} Let $D(n)$ denote the number of distinct partitions of $n$.
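Cohen's identity above can be sanity-checked as an equality of truncated power series. The following sketch (our own, using naive coefficient-list arithmetic, not part of the original text) compares both sides modulo $q^{30}$:

```python
N = 30  # all series are truncated modulo q^N

def mul(a, b):
    """Product of two coefficient lists modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Reciprocal of a series with constant term 1, modulo q^N."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def factor(e):
    """The polynomial 1 - q^e as a coefficient list modulo q^N."""
    a = [0] * N
    a[0] = 1
    if e < N:
        a[e] -= 1
    return a

# left-hand side: sum_{n>=1} q^n (q^2;q^2)_{n-1}
lhs, poch = [0] * N, [1] + [0] * (N - 1)
for n in range(1, N):
    for i in range(N - n):
        lhs[i + n] += poch[i]           # add q^n * (q^2;q^2)_{n-1}
    poch = mul(poch, factor(2 * n))     # extend to (q^2;q^2)_n

# right-hand side: sum_{n>=1} (-1)^(n-1) q^(n^2) / (q;q^2)_n
rhs, poch, n = [0] * N, [1] + [0] * (N - 1), 1
while n * n < N:
    poch = mul(poch, factor(2 * n - 1))  # extend to (q;q^2)_n
    rec = inv(poch)
    for i in range(N - n * n):
        rhs[i + n * n] += (-1) ** (n - 1) * rec[i]
    n += 1

print(lhs == rhs)  # True
```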
\begin{thm}\label{N-O-THM-1} For $n\geq 1$, \[{EG}_O^{N}(n)-{OG}_O^{N}(n)=D(n).\] \end{thm} This implies the following corollary. \begin{core} For $n\geq 1$, \[{EG}_O^{N}(n)-{OG}_O^{N}(n)>0.\] \end{core} Let ${H}'_{ON}(n)$ be the number of overpartitions of $n$ such that non-overlined parts must appear, the sizes of the non-overlined parts are greater than or equal to the sizes of the overlined parts, and the parts of size less than the size of the largest non-overlined part are overlined, that is, ${H}'_{ON}(n)$ is the number of overpartitions $\pi$ of $n$ with $LN(\pi)=SN(\pi)\geq 1$ and $SN(\pi)\geq LO(\pi)$. \begin{thm}\label{thm-e-o-gen-1} For $n\geq 1$, \[{EE}_O^{N}(n)-{OE}_O^{N}(n)={H}'_{ON}(n).\] \end{thm} This yields the following corollary. \begin{core}For $n\geq 1$, \[{EE}_O^{N}(n)-{OE}_O^{N}(n)>0.\] \end{core} In Section 5, we will give proofs of \eqref{gen-G-N-O}, \eqref{gen-E-N-O}, \eqref{gen-G-N-O-new}, \eqref{gen-E-N-O-new}, Theorems \ref{N-O-THM-1} and \ref{thm-e-o-gen-1}, and we will present an analytic proof of \eqref{gen-E-N=O} and bijective proofs of Corollaries \ref{equiv-g-sigma} and \ref{core-NO=R}. \section{Preliminaries} In this section, we collect some identities needed in this article. We also build an involution on $\overline{\mathcal{P}}$, which plays a crucial role in the proofs of Theorems \ref{EG-OG-THM}, \ref{EE-OE-THM}, \ref{N-O-THM-1} and \ref{thm-e-o-gen-1}. {\noindent \bf The $q$-binomial theorem \cite[Theorem 2.1]{Andrews-1976}:} \begin{equation}\label{chang-1} \sum_{n\geq0} \frac{(a;q)_{n}}{(q;q)_{n}}t^{n}=\frac{{(at;q)}_{\infty}}{(t;q)_{\infty}}. \end{equation} There are two special cases of \eqref{chang-1} (see \cite[Corollary 2.2]{Andrews-1976}): \begin{equation}\label{Euler-1} \sum_{n\geq0}\frac{t^n}{(q;q)_n}=\frac{1}{(t;q)_\infty}, \end{equation} and \begin{equation}\label{Euler-2} \sum_{n\geq0}\frac{t^nq^{{n}\choose 2}}{(q;q)_n}=(-t;q)_\infty.
\end{equation} Setting $t=\pm q$ in \eqref{Euler-1}, we have \begin{equation}\label{Euler-1-000-111} \sum_{n\geq0}\frac{(\pm q)^n}{(q;q)_n}=\frac{1}{(\pm q;q)_\infty}. \end{equation} Letting $q\rightarrow q^2$ and $t=q$ in \eqref{Euler-1}, we have \begin{equation}\label{chang-1-1} \sum_{n\geq0}\frac{q^n}{(q^2;q^2)_n}=\frac{1}{(q;q^2)_\infty}. \end{equation} Letting $q\rightarrow q^2$ and $t=q^2$ in \eqref{Euler-1}, we have \begin{equation}\label{chang-1-2} \sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_n}=\frac{1}{(q^2;q^2)_\infty}. \end{equation} Setting $t=q$ in \eqref{Euler-2}, we have \begin{equation}\label{Euler-2-0000} \sum_{n\geq0}\frac{q^{{n+1}\choose 2}}{(q;q)_n}=(-q;q)_\infty. \end{equation} {\noindent \bf A formula due to Euler \cite[(1.2.5)]{Andrews-1976}:} \begin{equation}\label{Euler-new} \frac{1}{(q;q^2)_\infty}=(-q;q)_\infty. \end{equation} {\noindent \bf A formula due to Gauss \cite[(2.2.13)]{Andrews-1976}:} \begin{equation}\label{Gauss} \sum_{n\geq 0}q^{\binom{n+1}{2}}=\frac{(q^2;q^2)_\infty}{(q;q^2)_\infty}. \end{equation} {\noindent \bf Heine's transformation formula \cite[Corollary 2.3]{Andrews-1976}:} \begin{equation}\label{Heine} \sum_{n\geq 0}\frac{(a;q)_n(b;q)_n}{(q;q)_n(c;q)_n}t^n=\frac{(b;q)_\infty(at;q)_\infty}{(c;q)_\infty(t;q)_\infty}\sum_{n\geq 0}\frac{(c/b;q)_n(t;q)_n}{(q;q)_n(at;q)_n}b^n. \end{equation} We also need the following lemma. \begin{lem}\label{pri-lem-1} For $k\geq 0$, \begin{equation}\label{pri-lem-eqn-1} \sum_{m\geq k+1}\frac{q^m}{(q;q)_m}=\frac{1}{(q;q)_\infty}-\frac{1}{(q;q)_{k}}. \end{equation} \end{lem} \pf Note that for $m\geq 1$, \[\frac{q^m}{(q;q)_m}\] is the generating function for the partitions with $m$ parts, so we see that the left hand side of \eqref{pri-lem-eqn-1} is the generating function for the partitions with at least $k+1$ parts. 
On the other hand, it is clear that \[\frac{1}{(q;q)_\infty}\] is the generating function for all partitions, and \[\frac{1}{(q;q)_{k}}\] is the generating function for the partitions with at most $k$ parts. Hence, the right hand side of \eqref{pri-lem-eqn-1} is also the generating function for the partitions with at least $k+1$ parts. This completes the proof. \qed We conclude this section with an involution on $\overline{\mathcal{P}}$. \begin{defi}\label{defi-involution} For a nonempty overpartition $\pi$ in $\overline{\mathcal{P}}$, the map $\mathcal{I}$ is defined as follows: \begin{itemize} \item[{\rm(1)}] if $LN(\pi)>LO(\pi)$, then change the largest non-overlined part of $\pi$ to an overlined part{\rm;} \item[{\rm(2)}] if $LO(\pi)\geq LN(\pi)$, then change the largest overlined part of $\pi$ to a non-overlined part{\rm.} \end{itemize} We denote the resulting overpartition by $\mathcal{I}(\pi)$. \end{defi} Clearly, the map $\mathcal{I}$ is an involution on the set of nonempty overpartitions in $\overline{\mathcal{P}}$. \section{Proofs of the results in Section 2.1} In this section, we aim to show the results in Section 2.1. We will prove \eqref{gen-G-O-N} in Section 4.1, Corollaries \ref{core-g-d2} and \ref{core-g-sigma} in Section 4.2, \eqref{gen-E-O-N} in Section 4.3, \eqref{gen-E-O=N} and Corollary \ref{core-ON=R} in Section 4.4, Theorem \ref{EG-OG-THM} in Section 4.5, and Theorem \ref{EE-OE-THM} in Section 4.6. \subsection{Proofs of \eqref{gen-G-O-N}} In this subsection, we give three proofs of \eqref{gen-G-O-N} in terms of separable overpartition classes, the largest non-overlined part and the smallest overlined part.
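Before turning to the proofs, identity \eqref{gen-G-O-N} can be confirmed numerically for small weights. The brute-force sketch below (our own helper names, not from the text) compares the enumeration of $\mathcal{G}_N^{O}$ with the truncated coefficients of $(-q;q)_\infty^2$:

```python
from itertools import combinations

N = 12  # compare the coefficients of q^0, ..., q^(N-1)

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out.append((k,) + rest)
    return out

def count_G_N_O(n):
    """Overpartitions of n whose overlined sizes (if any) all exceed
    the non-overlined sizes, counted by brute force."""
    total = 0
    for p in partitions(n):
        vals = sorted(set(p))
        for r in range(len(vals) + 1):
            for marked in combinations(vals, r):
                non = list(p)
                for t in marked:
                    non.remove(t)       # drop the overlined copy
                if not marked or not non or min(marked) > max(non):
                    total += 1
    return total

# coefficients of (-q;q)_inf^2 = prod_{i>=1} (1+q^i)^2, modulo q^N
coeff = [1] + [0] * (N - 1)
for i in range(1, N):
    for _ in range(2):
        coeff = [coeff[m] + (coeff[m - i] if m >= i else 0)
                 for m in range(N)]

lhs = [count_G_N_O(n) for n in range(N)]
print(lhs == coeff)  # True
```

In particular, the coefficient of $q^4$ on both sides is $9$, matching the list of overpartitions of $4$ in $\mathcal{G}_N^{O}$ given in the introduction.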
{\noindent \bf The first proof of \eqref{gen-G-O-N}.} For $m\geq 1$, define \[\mathcal{BG}^O_N(m)=\left\{(1^m),(1^{m-1},\overline 2),\ldots,(1,\overline 2,\overline 3,\ldots,\overline{m-1},\overline m),(\overline 1,\overline 2,\overline 3,\ldots,\overline{m-1},\overline m)\right\}.\] Set \[\mathcal{BG}^O_N=\bigcup_{m\geq 1}\mathcal{BG}^O_N(m).\] Then, it is clear that $\mathcal{G}^O_N$ is a separable overpartition class and $\mathcal{BG}^O_N$ is the basis of $\mathcal{G}^O_N$. So, we get \begin{align*} \sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}&=1+\sum_{m\geq1}\frac{1}{(q;q)_m}\bigg[q^m+\sum_{k=1}^{m-1}q^{(m-k)+2+3+\cdots+(k+1)}+q^{\binom{m+1}{2}}\bigg]\\ &=1+\sum_{m\geq1}\frac{1}{(q;q)_m}\bigg[\sum_{k=0}^{m-1}q^{m+\binom{k+1}{2}}+q^{\binom{m+1}{2}}\bigg]\\ &=1+\sum_{m\geq1}\frac{q^m}{(q;q)_m}\sum_{k=0}^{m-1}q^{\binom{k+1}{2}}+\sum_{m\geq1}\frac{q^{\binom{m+1}{2}}}{(q;q)_m}\\ &=\sum_{k\geq0}q^{\binom{k+1}{2}}\sum_{m\geq k+1}\frac{q^m}{(q;q)_m}+\sum_{m\geq0}\frac{q^{\binom{m+1}{2}}}{(q;q)_m}. \end{align*} Combining with \eqref{Euler-new}, \eqref{Gauss} and \eqref{pri-lem-eqn-1}, we have \begin{align*} \sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}&=\sum_{k\geq0}q^{\binom{k+1}{2}}\bigg[\frac{1}{(q;q)_\infty}-\frac{1}{(q;q)_{k}}\bigg]+\sum_{m\geq0}\frac{q^{\binom{m+1}{2}}}{(q;q)_m}\\ &=\frac{1}{(q;q)_\infty}\sum_{k\geq0}q^{\binom{k+1}{2}}-\sum_{k\geq0}\frac{q^{\binom{k+1}{2}}}{(q;q)_k}+\sum_{m\geq0}\frac{q^{\binom{m+1}{2}}}{(q;q)_m}\\ &=\frac{1}{(q;q)_\infty}\sum_{k\geq0}q^{\binom{k+1}{2}}\\ &=\frac{1}{(q;q)_\infty}\frac{(q^2;q^2)_\infty}{(q;q^2)_\infty}\\ &=(-q;q)_\infty^2. \end{align*} The proof is complete. 
\qed {\noindent \bf The second proof of \eqref{gen-G-O-N}.} By considering the largest non-overlined part, we have \[\sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}=\sum_{n\geq0}\frac{q^n}{(q;q)_n}(-q^{n+1};q)_\infty=(-q;q)_\infty\sum_{n\geq0}\frac{q^n}{(q^2;q^2)_n}.\] Combining with \eqref{chang-1-1} and \eqref{Euler-new}, we get \[\sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}=(-q;q)_\infty\frac{1}{(q;q^2)_\infty}=(-q;q)_\infty^2.\] The proof is complete. \qed {\noindent \bf The third proof of \eqref{gen-G-O-N}.} By virtue of the smallest overlined part, we get \begin{align*} \sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}&=\frac{1}{(q;q)_\infty}+\sum_{n\geq1}\frac{q^n(-q^{n+1};q)_\infty}{(q;q)_{n-1}}\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\sum_{n\geq1}\frac{q^n(1-q^n)}{(q;q)_n(-q;q)_n}\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\Bigg[\sum_{n\geq0}\frac{q^n}{(q^2;q^2)_n}-\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_n}\Bigg]. \end{align*} Using \eqref{chang-1-1}, \eqref{chang-1-2} and \eqref{Euler-new}, we have \[\sum_{\pi\in\mathcal{G}_N^{O}}q^{|\pi|}=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\Bigg[\frac{1}{(q;q^2)_\infty}-\frac{1}{(q^2;q^2)_\infty}\Bigg]=(-q;q)_\infty^2.\] This completes the proof. \qed \subsection{Combinatorial proofs of Corollary \ref{core-g-d2} and \ref{core-g-sigma}} This subsection is devoted to giving combinatorial proofs of Corollary \ref{core-g-d2} and \ref{core-g-sigma}. We first prove Corollary \ref{core-g-d2} by a bijective method, which can be seen as a combinatorial proof of \eqref{gen-G-O-N}. For $n\geq 0$ and $k\geq 0$, let $\mathcal{A}_{k}(n)$ be the set of overpartitions of $n$ in $\mathcal{G}_N^{O}$ with exactly $k$ overlined parts and let $\mathcal{B}_{k}(n)$ be the set of pairs $(\alpha,\beta)$ counted by $D_2(n)$ with $\ell(\alpha)-\ell(\beta)=k$ or $-k-1$. To prove Corollary \ref{core-g-d2}, it suffices to show the following theorem. 
\begin{thm}\label{bijective-ab} For $n\geq 0$ and $k\geq 0$, there is a bijection between $\mathcal{A}_{k}(n)$ and $\mathcal{B}_{k}(n)$. \end{thm} We will give a combinatorial proof of Theorem \ref{bijective-ab} by a modification of Sylvester's bijective proof of Jacobi's product identity \cite{Franklin-Sylvester-1882}, which was later rediscovered by Wright \cite{Wright-1965}. {\noindent \bf Proof of Theorem \ref{bijective-ab}.} Let $\pi=(\pi_1,\pi_2,\ldots,\pi_m)$ be an overpartition in $\mathcal{A}_{k}(n)$. By the definition of $\mathcal{A}_{k}(n)$, we see that $\pi_1,\ldots,\pi_k$ are overlined parts, $\pi_{k+1},\ldots,\pi_m$ are non-overlined parts and \[|\pi_1|>\cdots>|\pi_k|>|\pi_{k+1}|\geq\cdots\geq|\pi_m|.\] Set $\lambda_i=|\pi_i|-(k-i+1)$ for $1\leq i\leq k$, and $\lambda_i=\pi_i$ for $k+1\leq i\leq m$. It is clear that $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ is a partition of $n-{{k+1}\choose 2}$. Let $\lambda'=(\lambda'_1,\lambda'_2,\ldots)$ be the conjugate of $\lambda$. Let $s$ be the largest integer such that $\lambda_{s}\geq k+s$, with the convention that $\lambda_0=+\infty$. By the choice of $s$, we have \begin{equation}\label{conjugate} \lambda_{s+1}\leq k+s,\ \lambda'_{k+s}\geq s\text{ and }\lambda'_{k+s+1}\leq s. \end{equation} For $i\geq 1$, we set $\mu_i=\lambda'_i+k-i+1$ and $\nu_i=\lambda_i-k-i$. Clearly, we have $\mu_1>\mu_2>\cdots$ and $\nu_1>\nu_2>\cdots$. Appealing to \eqref{conjugate}, we derive that \[\mu_{k+s}=\lambda'_{k+s}+k-(k+s)+1\geq s+k-(k+s)+1=1,\] \[\mu_{k+s+1}=\lambda'_{k+s+1}+k-(k+s+1)+1\leq s+k-(k+s+1)+1=0,\] and \[\nu_{s+1}=\lambda_{s+1}-k-(s+1)\leq (k+s)-k-(s+1)=-1.\] Setting $\mu=(\mu_1,\mu_2,\ldots,\mu_{k+s})$, we see that $\mu$ is a distinct partition. Since $\lambda_{s}\geq k+s$, we consider the following two cases. Case 1: $\lambda_s\geq k+s+1$. In this case, we have $\nu_{s}=\lambda_s-k-s\geq (k+s+1)-k-s=1$. Then, we set $\beta=(\nu_1,\nu_2,\ldots,\nu_s)$, which is a distinct partition.
Setting $\alpha=\mu$, we see that $(\alpha,\beta)$ is a pair of distinct partitions with $\ell(\alpha)-\ell(\beta)=k$. Case 2: $\lambda_s=k+s$. In this case, we have $\nu_{s}=\lambda_s-k-s=(k+s)-k-s=0$. Then, we set $\alpha=(\nu_1,\nu_2,\ldots,\nu_{s-1})$, which is a distinct partition. Setting $\beta=\mu$, we see that $(\alpha,\beta)$ is a pair of distinct partitions with $\ell(\alpha)-\ell(\beta)=-k-1$. In either case, we get a pair of distinct partitions $(\alpha,\beta)$ with $\ell(\alpha)-\ell(\beta)=k$ or $-k-1$. It is clear that $|\alpha|+|\beta|=|\lambda|+{{k+1}\choose 2}=n$. So, we have $(\alpha,\beta)\in\mathcal{B}_{k}(n)$. Obviously, the process above is reversible. The proof is complete. \qed For example, let $\pi=(\overline{10},\overline{8},\overline{7},6,4,4,2,1)$ be an overpartition in $\mathcal{A}_{3}(42)$. Set $\lambda_1=|\overline{10}|-3=7$, $\lambda_2=|\overline{8}|-2=6$, $\lambda_3=|\overline{7}|-1=6$, $\lambda_4=6$, $\lambda_5=4$, $\lambda_6=4$, $\lambda_7=2$ and $\lambda_8=1$. Then, we get $\lambda=(7,6,6,6,4,4,2,1)$, which is a partition of $36$. The conjugate of $\lambda$ is $\lambda'=(8,7,6,6,4,4,1)$. It can be checked that $s=3$ is the largest integer such that $\lambda_{s}=\lambda_3=6\geq 3+s=6$. Moreover, we have $\lambda_{s}=\lambda_3=6=3+s$. Then, we can get \[\alpha=(3,1)\text{ and }\beta=(11,9,7,6,3,2),\] where $\alpha_i=\lambda_i-3-i$ for $1\leq i\leq 2$, and $\beta_i=\lambda'_i+3-i+1$ for $1\leq i\leq 6$. Clearly, $(\alpha,\beta)$ is a pair of distinct partitions in $\mathcal{B}_{3}(42)$. The process of obtaining $(\alpha,\beta)$ can be run in reverse. For another example, let $\pi=(7,6,6,6,4,4,2,1)$ be an overpartition in $\mathcal{A}_{0}(36)$. Since $k=0$, we have $\lambda=\pi=(7,6,6,6,4,4,2,1)$. The conjugate of $\lambda$ is $\lambda'=(8,7,6,6,4,4,1)$. It can be checked that $s=4$ is the largest integer such that $\lambda_{s}=\lambda_4=6\geq 0+s=4$. Moreover, we have $\lambda_{s}=\lambda_4=6\geq 0+s+1=5$.
Then, we can get \[\alpha=(8,6,4,3)\text{ and }\beta=(6,4,3,2),\] where $\alpha_i=\lambda'_i+0-i+1$ for $1\leq i\leq 4$, and $\beta_i=\lambda_i-0-i$ for $1\leq i\leq 4$. Clearly, $(\alpha,\beta)$ is a pair of distinct partitions in $\mathcal{B}_{0}(36)$. The process of obtaining $(\alpha,\beta)$ can be run in reverse. Then, we proceed to give the combinatorial proof of Corollary \ref{core-g-sigma}. {\noindent \bf Combinatorial proof of Corollary \ref{core-g-sigma}.} For $n\geq 0$, we define \[\mathcal{M}(n)=\left\{(\pi,k)\mid\pi\in\mathcal{P}(n),0\leq k\leq mex(\pi)-1\right\}.\] Then, we have \[\sigma mex(n)=\sum_{(\pi,k)\in\mathcal{M}(n)}1.\] Let $\mathcal{G}_N^{O}(n)$ be the set of overpartitions counted by ${G}_N^{O}(n)$. It suffices to build a bijection between $\mathcal{G}_N^{O}(n)$ and $\mathcal{M}(n)$. Let $\pi=(\pi_1,\pi_2,\ldots,\pi_m)$ be an overpartition in $\mathcal{G}_N^{O}(n)$. Assume that there are $k\geq 0$ overlined parts in $\pi$. By the same argument as in the proof of Theorem \ref{bijective-ab}, we can get a partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ of $n-{{k+1}\choose 2}$, where \[\lambda_i=|\pi_i|-(k-i+1)\text{ for }1\leq i\leq k,\text{ and }\lambda_i=\pi_i\text{ for }k+1\leq i\leq m.\] Then, we add $1,2,\ldots,k$ as parts into $\lambda$ and denote the resulting partition by $\mu$. Namely, \[f_t(\mu)=f_t(\lambda)+1\text{ for }1\leq t\leq k,\text{ and }f_t(\mu)=f_t(\lambda)\text{ otherwise.}\] Clearly, $\mu$ is a partition of $n$ with $mex(\mu)\geq k+1$, and so we get a pair $(\mu,k)$ in $\mathcal{M}(n)$. Obviously, the process above is reversible. The proof is complete. \qed \subsection{Proofs of \eqref{gen-E-O-N}} By arguments similar to the proofs of \eqref{gen-G-O-N} given in Section 4.1, we present three proofs of \eqref{gen-E-O-N} in this subsection.
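As with \eqref{gen-G-O-N}, identity \eqref{gen-E-O-N} can be confirmed numerically before the formal proofs. The sketch below (our own helper names, not from the text) compares a brute-force enumeration of $\mathcal{E}_N^{O}$ with the truncated coefficients of $(-q;q)_\infty^2+1/(q;q)_\infty-(-q;q)_\infty$:

```python
from itertools import combinations

N = 12  # compare the coefficients of q^0, ..., q^(N-1)

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out.append((k,) + rest)
    return out

def count_E_N_O(n):
    """Overpartitions of n whose overlined sizes (if any) are at least
    as large as every non-overlined size, counted by brute force."""
    total = 0
    for p in partitions(n):
        vals = sorted(set(p))
        for r in range(len(vals) + 1):
            for marked in combinations(vals, r):
                non = list(p)
                for t in marked:
                    non.remove(t)       # drop the overlined copy
                if not marked or not non or min(marked) >= max(non):
                    total += 1
    return total

def neg_q_pow(power):
    """Coefficients of (-q;q)_inf^power modulo q^N."""
    c = [1] + [0] * (N - 1)
    for i in range(1, N):
        for _ in range(power):
            c = [c[m] + (c[m - i] if m >= i else 0) for m in range(N)]
    return c

p = [len(partitions(n)) for n in range(N)]   # coefficients of 1/(q;q)_inf
d2, d1 = neg_q_pow(2), neg_q_pow(1)
rhs = [d2[n] + p[n] - d1[n] for n in range(N)]
lhs = [count_E_N_O(n) for n in range(N)]
print(lhs == rhs)  # True
```

The coefficient of $q^4$ on both sides is $12$, in agreement with the twelve overpartitions of $4$ in $\mathcal{E}_N^{O}$ listed in the introduction.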
{\noindent \bf The first proof of \eqref{gen-E-O-N}.} For $m\geq 1$, define \[\mathcal{BE}^O_N(m)=\left\{(1^m), (1^{m-1},\overline{1}),\ldots,(1,\overline{1},\overline{2},\ldots,\overline{m-1},\overline{m}),(\overline 1,\overline 2,\overline 3,\ldots,\overline{m-1},\overline m)\right\}.\] Set \[\mathcal{BE}^O_N=\bigcup_{m\geq 1}\mathcal{BE}^O_N(m).\] Then, it is clear that $\mathcal{E}^O_N$ is a separable overpartition class and $\mathcal{BE}^O_N$ is the basis of $\mathcal{E}^O_N$. So, we get \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=1+\sum_{m\geq1}\frac{1}{(q;q)_m}\bigg[q^m+\sum_{k=1}^{m-1}q^{(m-k)+1+2+\cdots+k}+q^{\binom{m+1}{2}}\bigg]\\ &=1+\sum_{m\geq1}\frac{1}{(q;q)_m}\sum_{k=0}^{m}q^{m+\binom{k}{2}}\\ &=\sum_{k\geq0}q^{\binom{k}{2}}\sum_{m\geq k}\frac{q^m}{(q;q)_m}\\ &=\sum_{m\geq 0}\frac{q^m}{(q;q)_m}+\sum_{k\geq1}q^{\binom{k}{2}}\sum_{m\geq k}\frac{q^m}{(q;q)_m}\\ &=\frac{1}{(q;q)_\infty}+\sum_{k\geq1}q^{\binom{k}{2}}\bigg[\frac{1}{(q;q)_\infty}-\frac{1}{(q;q)_{k-1}}\bigg], \end{align*} where the final equation follows from \eqref{Euler-1-000-111} and Lemma \ref{pri-lem-1}. Appealing to \eqref{Euler-2-0000}, \eqref{Euler-new} and \eqref{Gauss}, we have \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=\frac{1}{(q;q)_\infty}+\frac{1}{(q;q)_\infty}\sum_{k\geq0}q^{\binom{k+1}{2}}-\sum_{k\geq0}\frac{q^{\binom{k+1}{2}}}{(q;q)_{k}}\\ &=\frac{1}{(q;q)_\infty}+\frac{1}{(q;q)_\infty}\frac{(q^2;q^2)_\infty}{(q;q^2)_\infty}-(-q;q)_\infty\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty^2-(-q;q)_\infty. \end{align*} This completes the proof. 
\qed {\noindent \bf The second proof of \eqref{gen-E-O-N}.} In view of the largest non-overlined part, we have \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=(-q;q)_\infty+\sum_{n\geq1}\frac{q^n}{(q;q)_n}(-q^{n};q)_\infty\\ &=(-q;q)_\infty+(-q;q)_\infty\sum_{n\geq1}\frac{q^n(1+q^n)}{(q;q)_n(-q;q)_n}\\ &=(-q;q)_\infty+(-q;q)_\infty\Bigg[\sum_{n\geq0}\frac{q^n}{(q^2;q^2)_n}+\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_n}-2\Bigg]. \end{align*} It follows from \eqref{chang-1-1}, \eqref{chang-1-2} and \eqref{Euler-new} that \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=(-q;q)_\infty+(-q;q)_\infty\Bigg[\frac{1}{(q;q^2)_\infty}+\frac{1}{(q^2;q^2)_\infty}-2\Bigg]\\ &=(-q;q)_\infty^2+\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{align*} The proof is complete. \qed {\noindent \bf The third proof of \eqref{gen-E-O-N}.} In light of the smallest overlined part, we have \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=\frac{1}{(q;q)_\infty}+\sum_{n\geq1}q^n(-q^{n+1};q)_\infty\frac{1}{(q;q)_{n}}\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\sum_{n\geq 1}\frac{q^n}{(-q;q)_{n}(q;q)_{n}}\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\Bigg[\sum_{n\geq0}\frac{q^n}{(q^2;q^2)_n}-1\Bigg]. \end{align*} Combining with \eqref{chang-1-1} and \eqref{Euler-new}, we arrive at \begin{align*} \sum_{\pi\in\mathcal{E}_N^{O}}q^{|\pi|}&=\frac{1}{(q;q)_\infty}+(-q;q)_\infty\Bigg[\frac{1}{(q;q^2)_\infty}-1\Bigg]\\ &=\frac{1}{(q;q)_\infty}+(-q;q)_\infty^2-(-q;q)_\infty. \end{align*} This completes the proof. \qed \subsection{Proofs of \eqref{gen-E-O=N} and Corollary \ref{core-ON=R}} In this subsection, we first give an analytic proof of \eqref{gen-E-O=N} and then we give a combinatorial proof of Corollary \ref{core-ON=R}. 
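Corollary \ref{core-ON=R} can be cross-checked by brute force for small $n$. In the sketch below (our own helper names, not from the text), an overpartition is again a partition plus a set of distinct sizes whose first occurrence is overlined:

```python
from itertools import combinations

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out.append((k,) + rest)
    return out

def count_E_ON(n):
    """Overpartitions of n in which the smallest overlined size equals
    the largest non-overlined size (both kinds of parts present)."""
    total = 0
    for p in partitions(n):
        vals = sorted(set(p))
        for r in range(1, len(vals) + 1):
            for marked in combinations(vals, r):
                non = list(p)
                for t in marked:
                    non.remove(t)       # drop the overlined copy
                if non and min(marked) == max(non):
                    total += 1
    return total

def count_R(n):
    """Partitions of n with at least one repeated part."""
    return sum(1 for p in partitions(n) if len(set(p)) < len(p))

print(count_E_ON(6), count_R(6))  # 7 7
```

Both counts equal $7$ at $n=6$, matching the two seven-element lists displayed in Section 2.1.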
{\noindent \bf Analytic proof of \eqref{gen-E-O=N}.} By considering the size of the smallest overlined part and the largest non-overlined part, we can get \begin{align*} \sum_{n\geq 1}{E}_{ON}(n)q^{n}&=\sum_{n\geq1}{q^{n}}(-q^{n+1};q)_\infty\frac{q^{n}}{(q;q)_n}\\ &=(-q;q)_\infty\sum_{n\geq1}\frac{q^{2n}}{(-q;q)_{n}(q;q)_{n}}\\ &=(-q;q)_\infty\Bigg[\sum_{n\geq 0}\frac{q^{2n}}{(q^2;q^2)_n}-1\Bigg]. \end{align*} Combining with \eqref{chang-1-2}, we have \[ \sum_{n\geq 1}{E}_{ON}(n)q^{n}=(-q;q)_\infty\Bigg[\frac{1}{(q^2;q^2)_\infty}-1\Bigg]=\frac{1}{(q;q)_\infty}-(-q;q)_\infty.\] The proof is complete. \qed Now, we proceed to give a combinatorial proof of Corollary \ref{core-ON=R}, which can be seen as a bijective proof of \eqref{gen-E-O=N}. {\noindent \bf Combinatorial proof of Corollary \ref{core-ON=R}.} Let $\pi$ be an overpartition counted by ${E}_{ON}(n)$. We define $\lambda=\phi_{ON}(\pi)$ by changing the overlined parts in $\pi$ to non-overlined parts. Clearly, $\lambda$ is a partition enumerated by $R(n)$. Conversely, let $\lambda$ be a partition enumerated by $R(n)$. Assume that $k$ is the largest integer such that $k$ appears at least twice in $\lambda$. Then, we define $\pi=\psi_{ON}(\lambda)$ by changing a $k$ in $\lambda$ to $\overline k$ and changing the parts greater than $k$ to overlined parts. It is obvious that $\pi$ is an overpartition counted by ${E}_{ON}(n)$. Clearly, $\psi_{ON}$ is the inverse map of $\phi_{ON}$. Thus, we complete the proof. \qed \subsection{Proof of Theorem \ref{EG-OG-THM}} This subsection is devoted to showing Theorem \ref{EG-OG-THM}. Let $p^{e}_o(n)$ (resp. $p^{o}_e(n)$) be the number of partitions of $n$ such that the largest part is even (resp. odd) and the smallest part is odd (resp. even).
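These two counts are easy to tabulate by brute force. The sketch below (an informal aside, using only the definitions just given; the enumeration helper is ad hoc) lists small values of $p^{e}_o(n)$ and $p^{o}_e(n)$ and confirms, for small $n$, that the excess $p^{e}_o(n)-p^{o}_e(n)$ is nonnegative and is positive once $n\geq 3$.

```python
# Informal brute-force tabulation (not part of the proofs) of
# p^e_o(n): partitions of n with largest part even and smallest part odd,
# p^o_e(n): partitions of n with largest part odd and smallest part even.

def partitions(n, m=None):
    """Yield the partitions of n as weakly decreasing tuples of parts <= m."""
    if m is None:
        m = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def p_eo(n):
    return sum(1 for p in partitions(n) if p[0] % 2 == 0 and p[-1] % 2 == 1)

def p_oe(n):
    return sum(1 for p in partitions(n) if p[0] % 2 == 1 and p[-1] % 2 == 0)

assert [p_eo(n) for n in range(1, 6)] == [0, 0, 1, 1, 3]
assert [p_oe(n) for n in range(1, 6)] == [0, 0, 0, 0, 1]
# the excess is positive from n = 3 on (small-n check)
assert all(p_eo(n) - p_oe(n) > 0 for n in range(3, 18))
```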
Then, it suffices to show \begin{equation}\label{lem-excess-1} \sum_{n\geq 0}\left({EG}_N^{O}(n)-{OG}_N^{O}(n)\right)q^n=1+2\sum_{n\geq 1}\left(p^{e}_o(n)-p^{o}_e(n)\right)q^n, \end{equation} and for $n\geq 1$, \begin{equation}\label{lem-excess-2} p^{e}_o(n)-p^{o}_e(n)\geq 0\text{ with strict inequality if }n\geq 3. \end{equation} We first give an analytic proof and a combinatorial proof of \eqref{lem-excess-1}. {\noindent \bf Analytic proof of \eqref{lem-excess-1}.} In virtue of the smallest overlined part, we can get \begin{align} \sum_{n\geq 0}\left({EG}_N^{O}(n)-{OG}_N^{O}(n)\right)q^n&=\frac{1}{(-q;q)_\infty}+\sum_{n\geq1} \frac{q^n(-q^{n+1};q)_\infty}{(q;q)_{n-1}}\frac{1}{(-q^n;q)_\infty}\nonumber\\ &=\frac{1}{(-q;q)_\infty}+\sum_{n\geq1}\frac{1}{(q;q)_{n-1}}\frac{q^n}{1+q^n}.\label{proof-lem-excess-1} \end{align} Utilizing \eqref{Euler-1-000-111}, we have \begin{equation}\label{proof-lem-excess-2} \frac{1}{(-q;q)_\infty}=\sum_{n\geq0}\frac{(-q)^n}{(q;q)_n}=1+\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}-\sum_{n\geq0}\frac{q^{2n+1}}{(q;q)_{2n+1}}. \end{equation} Clearly, \[\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}\ \left(\text{resp. }\sum_{n\geq0}\frac{q^{2n+1}}{(q;q)_{2n+1}}\right)\] is the generating function for the nonempty partitions such that the largest part is even (resp. odd). 
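This combinatorial reading of \eqref{proof-lem-excess-2} can be double-checked numerically for small $n$ (an informal aside, not part of the proof; the helpers are ad hoc): the $q^n$ coefficient of $1/(-q;q)_\infty$ should equal the number of nonempty partitions of $n$ with largest part even minus the number with largest part odd.

```python
# Informal numerical aside (not part of the proof): the q^n coefficient of
# 1/(-q;q)_inf should equal
#   #{partitions of n, largest part even} - #{partitions of n, largest part odd}
# for n >= 1, as read off from the two sums above.

N = 22

def partitions(n, m=None):
    if m is None:
        m = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# truncated coefficients of 1/(-q;q)_inf = prod_{k>=1} 1/(1+q^k)
coeff = [0] * N
coeff[0] = 1
for k in range(1, N):
    for n in range(k, N):
        coeff[n] -= coeff[n - k]   # multiply in place by 1/(1+q^k)

for n in range(1, N):
    even = sum(1 for p in partitions(n) if p[0] % 2 == 0)
    odd = sum(1 for p in partitions(n) if p[0] % 2 == 1)
    assert coeff[n] == even - odd
```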
Setting $t=q^{m+1}$ in \eqref{Euler-1}, we have \[\sum_{n\geq0}\frac{q^{(m+1)n}}{(q;q)_n}=\frac{1}{(q^{m+1};q)_\infty}.\] So, we get \begin{align} \sum_{n\geq1}\frac{1}{(q;q)_{n-1}}\frac{q^n}{1+q^n}&=\sum_{n\geq0}\frac{1}{(q;q)_n}\frac{q^{n+1}}{1+q^{n+1}}\nonumber\\ &=\sum_{n\geq 0}\frac{1}{(q;q)_n}\sum_{m\geq 0}(-1)^mq^{(n+1)(m+1)}\nonumber\\ &=\sum_{m\geq0}(-1)^mq^{m+1}\sum_{n\geq 0}\frac{q^{(m+1)n}}{(q;q)_n}\nonumber\\ &=\sum_{m\geq 0}(-1)^mq^{m+1}\frac{1}{(q^{m+1};q)_\infty}\nonumber\\ &=\sum_{m\geq 0}\frac{q^{2m+1}}{(q^{2m+1};q)_\infty}-\sum_{m\geq0}\frac{q^{2m+2}}{(q^{2m+2};q)_\infty}.\label{proof-lem-excess-3} \end{align} Clearly, \[\sum_{m\geq 0}\frac{q^{2m+1}}{(q^{2m+1};q)_\infty}\ \left(\text{resp. }\sum_{m\geq0}\frac{q^{2m+2}}{(q^{2m+2};q)_\infty}\right)\] is the generating function for the nonempty partitions such that the smallest part is odd (resp. even). Then, we obtain that \[\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}-\sum_{m\geq0}\frac{q^{2m+2}}{(q^{2m+2};q)_\infty}\] and \[\sum_{m\geq 0}\frac{q^{2m+1}}{(q^{2m+1};q)_\infty}-\sum_{n\geq0}\frac{q^{2n+1}}{(q;q)_{2n+1}}\] are both equal to the generating function for the nonempty partitions whose largest part is even and smallest part is odd minus the generating function for the nonempty partitions whose largest part is odd and smallest part is even, that is, \begin{align} \sum_{n\geq 1}\left(p^{e}_o(n)-p^{o}_e(n)\right)q^n&=\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}-\sum_{m\geq0}\frac{q^{2m+2}}{(q^{2m+2};q)_\infty}\label{proof-lem-excess-4}\\ &=\sum_{m\geq 0}\frac{q^{2m+1}}{(q^{2m+1};q)_\infty}-\sum_{n\geq0}\frac{q^{2n+1}}{(q;q)_{2n+1}}.\label{proof-lem-excess-5} \end{align} Combining \eqref{proof-lem-excess-1}, \eqref{proof-lem-excess-2}, \eqref{proof-lem-excess-3}, \eqref{proof-lem-excess-4} and \eqref{proof-lem-excess-5}, we arrive at \eqref{lem-excess-1}. This completes the proof.
\qed Before giving the combinatorial proof of \eqref{lem-excess-1}, we introduce the following notations, which will also be used in the combinatorial proof of Theorem \ref{EE-OE-THM}. For $n\geq 1$, let $\overline{\mathcal{P}}(n)$ denote the set of overpartitions of $n$. Then, we divide $\overline{\mathcal{P}}(n)$ into the following four disjoint subsets. \begin{itemize} \item $\mathcal{C}_{NO}(n)$ is the set of overpartitions $\pi$ of $n$ such that $LN(\pi)>LO(\pi)\geq 1$; \item $\mathcal{F}_{NO}(n)$ is the set of overpartitions $\pi$ of $n$ such that there are at least two overlined parts in $\pi$ and $LO(\pi)\geq LN(\pi)$; \item $\mathcal{H}_{NO}(n)$ is the set of overpartitions $\pi$ of $n$ such that there is exactly one overlined part in $\pi$ and $LO(\pi)\geq LN(\pi)$; \item $\mathcal{P}_{NO}(n)$ is the set of overpartitions of $n$ with no overlined parts, that is, $\mathcal{P}_{NO}(n)$ is the set of partitions of $n$. \end{itemize} Clearly, \[\overline{\mathcal{P}}(n)=\mathcal{C}_{NO}(n)\bigcup\mathcal{F}_{NO}(n)\bigcup\mathcal{H}_{NO}(n)\bigcup\mathcal{P}_{NO}(n).\] By restricting the involution $\mathcal{I}$ defined in Definition \ref{defi-involution} on $\mathcal{C}_{NO}(n)\bigcup\mathcal{F}_{NO}(n)$, it is easy to get the following lemma. \begin{lem}\label{lem-involution-CF} For $n\geq 1$, the map $\mathcal{I}$ is a bijection between $\mathcal{C}_{NO}(n)$ and $\mathcal{F}_{NO}(n)$. Moreover, for an overpartition $\pi \in \mathcal{C}_{NO}(n)$, let $\lambda=\mathcal{I}(\pi)$. We have \[\ell_{N\geq O}(\lambda)=\ell_{N\geq O}(\pi)-1\text{ and }\ell_{N>O}(\lambda)=\ell_{N>O}(\pi)-1.\] \end{lem} Then, we proceed to give the combinatorial proof of \eqref{lem-excess-1}.
{\noindent \bf Combinatorial proof of \eqref{lem-excess-1}.} Since \begin{equation*} \sum_{n\geq 0}\left({EG}_N^{O}(n)-{OG}_N^{O}(n)\right)q^n=1+\sum_{n\geq 1}\sum_{\pi\in\overline{\mathcal{P}}(n)}(-1)^{\ell_{N\geq O}(\pi)}q^{n}, \end{equation*} it is equivalent to showing that for $n\geq 1$, \begin{equation}\label{lem-excess-1-combinatorial-1} \sum_{\pi\in\overline{\mathcal{P}}(n)}(-1)^{\ell_{N\geq O}(\pi)}=2\left(p^{e}_o(n)-p^{o}_e(n)\right). \end{equation} To do this, we are required to compute \[\sum_{\pi\in\mathcal{C}_{NO}(n)\bigcup\mathcal{F}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)},\ \sum_{\pi\in\mathcal{H}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)}\text{ and }\sum_{\pi\in\mathcal{P}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)}.\] By Lemma \ref{lem-involution-CF}, we have \begin{equation}\label{lem-excess-1-combinatorial-2} \sum_{\pi\in\mathcal{C}_{NO}(n)\bigcup\mathcal{F}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)}=0. \end{equation} Then, we divide $\mathcal{H}_{NO}(n)$ into four disjoint subsets. \begin{itemize} \item $\mathcal{H}_{NO}^{(1)}(n)$: the set of overpartitions $\pi$ in $\mathcal{H}_{NO}(n)$ such that $\ell(\pi)$ is even and there is an even number of non-overlined parts of size $LO(\pi)$ in $\pi$; \item $\mathcal{H}^{(2)}_{NO}(n)$: the set of overpartitions $\pi$ in $\mathcal{H}_{NO}(n)$ such that $\ell(\pi)$ is even and there is an odd number of non-overlined parts of size $LO(\pi)$ in $\pi$; \item $\mathcal{H}_{NO}^{(3)}(n)$: the set of overpartitions $\pi$ in $\mathcal{H}_{NO}(n)$ such that $\ell(\pi)$ is odd and there is an even number of non-overlined parts of size $LO(\pi)$ in $\pi$; \item $\mathcal{H}_{NO}^{(4)}(n)$: the set of overpartitions $\pi$ in $\mathcal{H}_{NO}(n)$ such that $\ell(\pi)$ is odd and there is an odd number of non-overlined parts of size $LO(\pi)$ in $\pi$.
\end{itemize} Obviously, we have \[\mathcal{H}_{NO}(n)=\mathcal{H}_{NO}^{(1)}(n)\bigcup\mathcal{H}_{NO}^{(2)}(n)\bigcup\mathcal{H}_{NO}^{(3)}(n)\bigcup\mathcal{H}_{NO}^{(4)}(n).\] For an overpartition $\pi\in \mathcal{H}_{NO}(n)$, it is clear that $\ell_{N\geq O}(\pi)$ is the number of non-overlined parts of size $LO(\pi)$ in $\pi$. So, we have $(-1)^{\ell_{N\geq O}(\pi)}=1$ if $\pi\in\mathcal{H}_{NO}^{(1)}(n)\bigcup\mathcal{H}_{NO}^{(3)}(n)$, and $(-1)^{\ell_{N\geq O}(\pi)}=-1$ if $\pi\in\mathcal{H}_{NO}^{(2)}(n)\bigcup\mathcal{H}_{NO}^{(4)}(n)$. We also divide $\mathcal{P}_{NO}(n)$ into four disjoint subsets. \begin{itemize} \item $\mathcal{P}_{NO}^{(1)}(n)$: the set of partitions $\pi$ in $\mathcal{P}_{NO}(n)$ such that $\ell(\pi)$ is even and there is an odd number of parts of size $LN(\pi)$ in $\pi$; \item $\mathcal{P}_{NO}^{(2)}(n)$: the set of partitions $\pi$ in $\mathcal{P}_{NO}(n)$ such that $\ell(\pi)$ is even and there is an even number of parts of size $LN(\pi)$ in $\pi$; \item $\mathcal{P}_{NO}^{(3)}(n)$: the set of partitions $\pi$ in $\mathcal{P}_{NO}(n)$ such that $\ell(\pi)$ is odd and there is an odd number of parts of size $LN(\pi)$ in $\pi$; \item $\mathcal{P}_{NO}^{(4)}(n)$: the set of partitions $\pi$ in $\mathcal{P}_{NO}(n)$ such that $\ell(\pi)$ is odd and there is an even number of parts of size $LN(\pi)$ in $\pi$. \end{itemize} Obviously, we have \[\mathcal{P}_{NO}(n)=\mathcal{P}_{NO}^{(1)}(n)\bigcup\mathcal{P}_{NO}^{(2)}(n)\bigcup\mathcal{P}_{NO}^{(3)}(n)\bigcup\mathcal{P}_{NO}^{(4)}(n).\] For a partition $\pi\in \mathcal{P}_{NO}(n)$, it is clear that $\ell_{N\geq O}(\pi)=\ell(\pi)$. So, we have $(-1)^{\ell_{N\geq O}(\pi)}=1$ if $\pi\in\mathcal{P}_{NO}^{(1)}(n)\bigcup\mathcal{P}_{NO}^{(2)}(n)$, and $(-1)^{\ell_{N\geq O}(\pi)}=-1$ if $\pi\in\mathcal{P}_{NO}^{(3)}(n)\bigcup\mathcal{P}_{NO}^{(4)}(n)$. 
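As an informal aside, conjugation already explains why these subsets line up with the counts $p^{e}_o(n)$ and $p^{o}_e(n)$ defined earlier: conjugating a partition exchanges the number of parts with the largest part, and the multiplicity of the largest part with the smallest part. The brute-force Python sketch below (ad hoc helper code, not part of the proof) verifies this correspondence for small $n$.

```python
# Informal aside (not part of the proof).  Conjugation exchanges the number
# of parts with the largest part, and the multiplicity of the largest part
# with the smallest part.  Hence partitions with an even number of parts and
# an odd number of maximal parts should be equinumerous with those counted
# by p^e_o(n), and partitions with an odd number of parts and an even number
# of maximal parts with those counted by p^o_e(n).

def partitions(n, m=None):
    if m is None:
        m = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def counts(n):
    c1 = c4 = p_eo = p_oe = 0
    for p in partitions(n):
        mult_largest = p.count(p[0])
        if len(p) % 2 == 0 and mult_largest % 2 == 1:
            c1 += 1                       # analogue of P_NO^(1)(n)
        if len(p) % 2 == 1 and mult_largest % 2 == 0:
            c4 += 1                       # analogue of P_NO^(4)(n)
        if p[0] % 2 == 0 and p[-1] % 2 == 1:
            p_eo += 1
        if p[0] % 2 == 1 and p[-1] % 2 == 0:
            p_oe += 1
    return c1, c4, p_eo, p_oe

for n in range(1, 16):
    c1, c4, p_eo, p_oe = counts(n)
    assert c1 == p_eo and c4 == p_oe
```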
By restricting the involution $\mathcal{I}$ defined in Definition \ref{defi-involution} on $\mathcal{H}_{NO}(n)\bigcup\mathcal{P}_{NO}(n)$, we find that the map $\mathcal{I}$ is a bijection between $\mathcal{H}_{NO}^{(i)}(n)$ and $\mathcal{P}_{NO}^{(i)}(n)$ for $1\leq i\leq 4$. So, we get \begin{equation*} \sum_{\pi\in\mathcal{H}_{NO}(n)\bigcup\mathcal{P}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)}=2\left( \sum_{\pi\in\mathcal{P}_{NO}^{(1)}(n)}(-1)^{\ell_{N\geq O}(\pi)}+\sum_{\pi\in\mathcal{P}_{NO}^{(4)}(n)}(-1)^{\ell_{N\geq O}(\pi)}\right). \end{equation*} Let $p_{NO}^{(1)}(n)$ and $p_{NO}^{(4)}(n)$ be the number of partitions in $\mathcal{P}_{NO}^{(1)}(n)$ and $\mathcal{P}_{NO}^{(4)}(n)$ respectively. Then, we have \begin{equation}\label{lem-excess-1-combinatorial-3} \sum_{\pi\in\mathcal{H}_{NO}(n)\bigcup\mathcal{P}_{NO}(n)}(-1)^{\ell_{N\geq O}(\pi)}=2\left({p}_{NO}^{(1)}(n)-{p}_{NO}^{(4)}(n)\right). \end{equation} It follows from \eqref{lem-excess-1-combinatorial-2} and \eqref{lem-excess-1-combinatorial-3} that \begin{equation}\label{lem-excess-1-combinatorial-4} \sum_{\pi\in\overline{\mathcal{P}}(n)}(-1)^{\ell_{N\geq O}(\pi)}=2\left({p}_{NO}^{(1)}(n)-{p}_{NO}^{(4)}(n)\right). \end{equation} By considering the conjugate, we can get \begin{equation}\label{lem-excess-1-combinatorial-5} {p}_{NO}^{(1)}(n)=p^{e}_o(n)\text{ and }{p}_{NO}^{(4)}(n)=p^{o}_e(n). \end{equation} Combining \eqref{lem-excess-1-combinatorial-4} and \eqref{lem-excess-1-combinatorial-5}, we arrive at \eqref{lem-excess-1-combinatorial-1}, and thus the proof is complete. \qed We conclude this subsection with the proof of \eqref{lem-excess-2}. {\noindent \bf Proof of \eqref{lem-excess-2}.} It can be checked that \[p^{e}_o(1)=p^{o}_e(1)=p^{e}_o(2)=p^{o}_e(2)=0,\] which implies that \eqref{lem-excess-2} holds for $n=1,2$.
For $n\geq 3$, let $\mathcal{P}^{o}_e(n)$ be the set of partitions counted by $p^{o}_e(n)$ and let $\hat{\mathcal{P}}^{e}_o(n)$ be the set of partitions $\pi$ enumerated by $p^{e}_o(n)$ such that $\ell(\pi)\geq 4$, $\ell(\pi)$ is even, $\pi_{\frac{\ell(\pi)}{2}}$ is odd and $f_{1}(\pi)\geq\frac{\ell(\pi)}{2}$. Then, we establish a bijection $\phi^o_{e}$ between $\mathcal{P}^{o}_e(n)$ and $\hat{\mathcal{P}}^{e}_o(n)$. For a partition $\lambda\in\mathcal{P}^{o}_e(n)$, we set \[\pi_i=\lambda_i-1\text{ for }1\leq i\leq \ell(\lambda),\text{ and }\pi_i=1\text{ for }\ell(\lambda)+1\leq i\leq 2\ell(\lambda).\] Then, we define $\phi^o_{e}(\lambda)=(\pi_1,\pi_2,\ldots,\pi_{2\ell(\lambda)})$, which is a partition in $\hat{\mathcal{P}}^{e}_o(n)$. Conversely, for a partition $\pi\in\hat{\mathcal{P}}^{e}_o(n)$, we set \[\lambda_i=\pi_i+1\text{ for }1\leq i\leq \frac{\ell(\pi)}{2}.\] Then, we define $\psi^o_{e}(\pi)=(\lambda_1,\lambda_2,\ldots,\lambda_{\frac{\ell(\pi)}{2}})$, which is a partition in $\mathcal{P}^{o}_e(n)$. Clearly, $\psi^o_{e}$ is the inverse map of $\phi^o_{e}$. Now, we have built the bijection $\phi^o_{e}$ between $\mathcal{P}^{o}_e(n)$ and $\hat{\mathcal{P}}^{e}_o(n)$. Note that the partitions in $\hat{\mathcal{P}}^{e}_o(n)$ are counted by $p^{e}_o(n)$. In order to show that \eqref{lem-excess-2} holds for $n\geq 3$, it remains to find a partition which is counted by $p^{e}_o(n)$ but is not in $\hat{\mathcal{P}}^{e}_o(n)$. For a partition $\pi\in\hat{\mathcal{P}}^{e}_o(n)$, by definition, we have $\ell(\pi)\geq 4$. So, we just need to find a partition counted by $p^{e}_o(n)$ with fewer than four parts. There are two cases. Case 1: $n$ is odd. In such case, $(n-1,1)$ is counted by $p^{e}_o(n)$ but is not in $\hat{\mathcal{P}}^{e}_o(n)$. Case 2: $n$ is even. In such case, $(n-2,1,1)$ is counted by $p^{e}_o(n)$ but is not in $\hat{\mathcal{P}}^{e}_o(n)$. Thus, the proof is complete.
\qed \subsection{Proofs of Theorem \ref{EE-OE-THM}} In this subsection, we will give an analytic proof and a combinatorial proof of Theorem \ref{EE-OE-THM}. We first give an analytic proof of Theorem \ref{EE-OE-THM}. {\noindent \bf Analytic proof of Theorem \ref{EE-OE-THM}.} By considering the smallest overlined part, we can get \begin{align*} \sum_{n\geq 0}\left({EE}_N^{O}(n)-{OE}_N^{O}(n)\right)q^n&=\frac{1}{(-q;q)_\infty}+\sum_{n\geq1} q^n(-q^{n+1};q)_\infty\frac{1}{(q;q)_{n}}\frac{1}{(-q^{n+1};q)_\infty}\\ &=\frac{1}{(-q;q)_\infty}+\sum_{n\geq 1}\frac{q^n}{(q;q)_{n}}. \end{align*} It follows from \eqref{proof-lem-excess-2} that \[\sum_{n\geq 0}\left({EE}_N^{O}(n)-{OE}_N^{O}(n)\right)q^n=1+2\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}.\] Note that \[\sum_{n\geq1}\frac{q^{2n}}{(q;q)_{2n}}\] is also the generating function for the nonempty partitions with an even number of parts, so the proof is complete. \qed Then, we give a combinatorial proof of Theorem \ref{EE-OE-THM} with an argument similar to that in the combinatorial proof of \eqref{lem-excess-1}. {\noindent \bf Combinatorial proof of Theorem \ref{EE-OE-THM}.} We find that it is equivalent to showing that for $n\geq 1$, \begin{equation}\label{EE-OE-THM-combinatorial-0} \sum_{\pi\in\overline{\mathcal{P}}(n)}(-1)^{\ell_{N> O}(\pi)}=2p_e(n). \end{equation} For $n\geq1$, we first divide $\mathcal{P}_{NO}(n)$ into two disjoint subsets. \begin{itemize} \item $\mathcal{P}_{e}(n)$: the set of partitions in $\mathcal{P}_{NO}(n)$ with an even number of parts; \item $\mathcal{P}_{o}(n)$: the set of partitions in $\mathcal{P}_{NO}(n)$ with an odd number of parts. \end{itemize} Obviously, we have \[\mathcal{P}_{NO}(n)=\mathcal{P}_{e}(n)\bigcup\mathcal{P}_{o}(n).\] For a partition $\pi\in \mathcal{P}_{NO}(n)$, it is clear that $\ell_{N> O}(\pi)=\ell(\pi)$. So, we have $(-1)^{\ell_{N> O}(\pi)}=1$ if $\pi\in\mathcal{P}_{e}(n)$, and $(-1)^{\ell_{N> O}(\pi)}=-1$ if $\pi\in\mathcal{P}_{o}(n)$.
By definition, we see that $p_e(n)$ is the number of partitions in $\mathcal{P}_{e}(n)$. Let $p_o(n)$ be the number of partitions in $\mathcal{P}_{o}(n)$. So we get \begin{equation}\label{EE-OE-THM-combinatorial-1} \sum_{\pi\in\mathcal{P}_{NO}(n)}(-1)^{\ell_{N>O}(\pi)}=p_e(n)-p_o(n). \end{equation} For an overpartition $\pi\in\mathcal{H}_{NO}(n)$, we have $SO(\pi)=LO(\pi)\geq LN(\pi)$, which yields $\ell_{N>O}(\pi)=0$, and so $(-1)^{\ell_{N> O}(\pi)}=1$. Let $H_{NO}(n)$ be the number of overpartitions in $\mathcal{H}_{NO}(n)$. Then, we get \begin{equation}\label{EE-OE-THM-combinatorial-2} \sum_{\pi\in\mathcal{H}_{NO}(n)}(-1)^{\ell_{N>O}(\pi)}=H_{NO}(n). \end{equation} By restricting the involution $\mathcal{I}$ defined in Definition \ref{defi-involution} on $\mathcal{H}_{NO}(n)\bigcup\mathcal{P}_{NO}(n)$, we find that the map $\mathcal{I}$ is a bijection between $\mathcal{H}_{NO}(n)$ and $\mathcal{P}_{NO}(n)$, and so we have \[H_{NO}(n)=p_e(n)+p_o(n).\] Combining with \eqref{EE-OE-THM-combinatorial-2}, we get \begin{equation}\label{EE-OE-THM-combinatorial-3} \sum_{\pi\in\mathcal{H}_{NO}(n)}(-1)^{\ell_{N>O}(\pi)}=p_e(n)+p_o(n). \end{equation} Using Lemma \ref{lem-involution-CF}, we have \begin{equation*} \sum_{\pi\in\mathcal{C}_{NO}(n)\bigcup\mathcal{F}_{NO}(n)}(-1)^{\ell_{N> O}(\pi)}=0. \end{equation*} Combining with \eqref{EE-OE-THM-combinatorial-1} and \eqref{EE-OE-THM-combinatorial-3}, we arrive at \eqref{EE-OE-THM-combinatorial-0}. This completes the proof. \qed \section{Proofs of the results in Section 2.2} In this section, we aim to prove the results in Section 2.2. We will prove \eqref{gen-G-N-O} and \eqref{gen-E-N-O} in Section 5.1, \eqref{gen-G-N-O-new} and \eqref{gen-E-N-O-new} in Section 5.2, Corollary \ref{equiv-g-sigma} in Section 5.3, \eqref{gen-E-N=O} and Corollary \ref{core-NO=R} in Section 5.4, Theorem \ref{N-O-THM-1} in Section 5.5, and Theorem \ref{thm-e-o-gen-1} in Section 5.6. 
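Before turning to those proofs, the identity established in Theorem \ref{EE-OE-THM} can be checked numerically. The Python sketch below (an informal aside, not part of the paper; the truncated-series helpers are ad hoc) compares both sides of the identity modulo $q^{30}$ and confirms that the $q^n$ coefficient of the right-hand side is $2p_e(n)$ for $n\geq 1$.

```python
# Informal numerical check (outside the formal proofs) of the identity in
# Theorem (EE-OE-THM):
#   1/(-q;q)_inf + sum_{n>=1} q^n/(q;q)_n = 1 + 2 sum_{n>=1} q^{2n}/(q;q)_{2n},
# together with the fact that q^{2n}/(q;q)_{2n} generates partitions with
# largest part exactly 2n (equivalently, by conjugation, exactly 2n parts).
# All series are truncated modulo q^N; the helpers are ad hoc.

N = 30

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    # inverse of a truncated series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(m, sign=-1):
    # truncated (q;q)_m (sign=-1) or (-q;q)_m (sign=+1); m = N means "infinity"
    p = [0] * N
    p[0] = 1
    for k in range(1, min(m, N - 1) + 1):
        f = [0] * N
        f[0], f[k] = 1, sign
        p = mul(p, f)
    return p

def shift(a, s):
    return [0] * s + a[:N - s]

lhs = inv(poch(N, sign=+1))              # 1/(-q;q)_inf
for n in range(1, N):
    lhs = [x + y for x, y in zip(lhs, shift(inv(poch(n)), n))]

rhs = [0] * N
rhs[0] = 1
for n in range(1, N // 2):
    rhs = [x + 2 * y for x, y in zip(rhs, shift(inv(poch(2 * n)), 2 * n))]

assert lhs == rhs

def partitions(n, m=None):
    if m is None:
        m = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for n in range(1, 15):
    p_e = sum(1 for p in partitions(n) if len(p) % 2 == 0)
    assert rhs[n] == 2 * p_e
```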
\subsection{Proofs of \eqref{gen-G-N-O} and \eqref{gen-E-N-O}} In this subsection, we give the proofs of \eqref{gen-G-N-O} and \eqref{gen-E-N-O} in view of separable overpartition classes. {\noindent \bf Proof of \eqref{gen-G-N-O}.} For $m\geq 1$, define \[\mathcal{BG}^N_O(m)=\{(1^m),(\overline 1,2^{m-1}),(\overline 1,\overline 2,3^{m-2}),\ldots,(\overline 1,\overline 2,\overline 3,\ldots,\overline{m-1}, m)\}.\] Set \[\mathcal{BG}^N_O=\bigcup_{m\geq 1}\mathcal{BG}^N_O(m).\] Then, it is clear that $\mathcal{G}^N_O$ is a separable overpartition class and $\mathcal{BG}^N_O$ is the basis of $\mathcal{G}^N_O$. So, we get \begin{align*} \sum_{\pi\in\mathcal{G}_O^{N}}q^{|\pi|}&=\sum_{m\geq1}\frac{1}{(q;q)_m}\sum_{k=0}^{m-1}q^{1+2+\cdots+k+(k+1)(m-k)}\\ &=\sum_{m\geq1}\frac{1}{(q;q)_m}\sum_{k=0}^{m-1}q^{(k+1)m-\binom{k+1}{2}}. \end{align*} The proof is complete. \qed To give a proof of \eqref{gen-E-N-O}, in the remainder of this subsection, we impose the following order on the parts of an overpartition: \begin{equation*} \overline{1}<{1}<\overline{2}<{2}<\cdots. \end{equation*} That is to say, an overpartition is a partition such that the last occurrence of a part can be overlined. For an overpartition $\pi$, we write $\pi=\left(\overline{1}^{f_{\overline{1}}(\pi)} 1^{f_1(\pi)} \overline{2}^{f_{\overline{2}}(\pi)} 2^{f_2(\pi)}\cdots\right)$. Now, we are in a position to show \eqref{gen-E-N-O}. {\noindent \bf Proof of \eqref{gen-E-N-O}.} For $m\geq 1$, define \[\mathcal{BE}^N_O(m)=\left\{(1^m),(\overline 1,1^{m-1}),(\overline 1,\overline 2,2^{m-2}),\ldots,(\overline 1,\overline 2,\ldots,\overline{m-1}, m-1)\right\}.\] Set \[\mathcal{BE}^N_O=\bigcup_{m\geq 1}\mathcal{BE}^N_O(m).\] Then, it is clear that $\mathcal{E}^N_O$ is a separable overpartition class and $\mathcal{BE}^N_O$ is the basis of $\mathcal{E}^N_O$.
So, we get \begin{align*} \sum_{\pi\in\mathcal{E}_O^{N}}q^{|\pi|}&=\sum_{m\geq1}\frac{1}{(q;q)_m}\bigg[q^m+\sum_{k=1}^{m-1}q^{1+2+\cdots+k+k(m-k)}\bigg]\\ &=\sum_{m\geq1}\frac{1}{(q;q)_m}\bigg[q^m+\sum_{k=1}^{m-1}q^{km-\binom{k}{2}}\bigg]. \end{align*} This completes the proof. \qed \subsection{Proofs of \eqref{gen-G-N-O-new} and \eqref{gen-E-N-O-new}} In this subsection, we will show \eqref{gen-G-N-O-new} and \eqref{gen-E-N-O-new} by considering the smallest non-overlined part. {\noindent \bf Proof of \eqref{gen-G-N-O-new}.} In view of the smallest non-overlined part, we can get \begin{align*} \sum_{\pi\in\mathcal{G}_O^{N}}q^{|\pi|}&=\sum_{n\geq1}\frac{q^n}{(q^n;q)_\infty}(-q;q)_{n-1}\\ &=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}. \end{align*} Then, it remains to show that \begin{equation}\label{proof-GNO-1} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}=(-q;q)_\infty\sum_{n\geq0}\frac{q^{2n+1}}{1-q^{2n+1}}\frac{1}{(q^2;q^2)_n}. \end{equation} It is easy to see that \begin{equation}\label{proof-GNO-2} \begin{split} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}&=\frac{1}{(q;q)_\infty}\sum_{n\geq0}q^{n+1}(q^2;q^2)_{n}\\ &=\frac{q}{(q;q)_\infty}\sum_{n\geq0} \frac{(q^2;q^2)_n(q^2;q^2)_n}{(q^2;q^2)_n(0;q^2)_n}q^n. \end{split} \end{equation} Letting $q\rightarrow q^2$, $a=b=q^2$, $c=0$ and $t=q$ in \eqref{Heine}, we get \begin{align*} \sum_{n\geq0} \frac{(q^2;q^2)_n(q^2;q^2)_n}{(q^2;q^2)_n(0;q^2)_n}q^n&=\frac{(q^2;q^2)_\infty(q^3;q^2)_\infty}{(0;q^2)_\infty(q;q^2)_\infty} \sum_{n\geq0}\frac{(0;q^2)_n(q;q^2)_n}{(q^2;q^2)_n(q^3;q^2)_n}q^{2n}\\ &=(q^2;q^2)_\infty\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_n(1-q^{2n+1})}. \end{align*} Combining with \eqref{proof-GNO-2}, we have \begin{align*} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}&=\frac{q}{(q;q)_\infty}(q^2;q^2)_\infty\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_n(1-q^{2n+1})}\\ &=(-q;q)_\infty\sum_{n\geq0}\frac{q^{2n+1}}{1-q^{2n+1}}\frac{1}{(q^2;q^2)_n}.
\end{align*} So, \eqref{proof-GNO-1} is valid. The proof is complete. \qed {\noindent \bf Proof of \eqref{gen-E-N-O-new}.} In virtue of the smallest non-overlined part, we can get \begin{align*} \sum_{\pi\in\mathcal{E}_O^{N}}q^{|\pi|}&=\sum_{n\geq1}\frac{q^n}{(q^n;q)_\infty}(-q;q)_{n}\\ &=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}(1+q^n)\\ &=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^n(q^2;q^2)_{n-1}+\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}. \end{align*} Appealing to \eqref{proof-GNO-1}, we find that it remains to show \begin{equation}\label{proof-GON-1} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}=\frac{1}{(q;q)_\infty}-(-q;q)_\infty. \end{equation} It is easy to see that \begin{equation}\label{proof-GON-2} \begin{split} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}&=\frac{1}{(q;q)_\infty}\sum_{n\geq0}q^{2n+2}(q^2;q^2)_{n}\\ &=\frac{q^2}{(q;q)_\infty}\sum_{n\geq0} \frac{(q^2;q^2)_n(q^2;q^2)_n}{(q^2;q^2)_n(0;q^2)_n}q^{2n}. \end{split} \end{equation} Letting $q\rightarrow q^2$, $a=b=q^2$, $c=0$ and $t=q^2$ in \eqref{Heine}, we get \begin{align*} \sum_{n\geq0} \frac{(q^2;q^2)_n(q^2;q^2)_n}{(q^2;q^2)_n(0;q^2)_n}q^{2n}&=\frac{(q^2;q^2)_\infty(q^4;q^2)_\infty}{(0;q^2)_\infty(q^2;q^2)_\infty} \sum_{n\geq0} \frac{(0;q^2)_n(q^2;q^2)_n}{(q^2;q^2)_n(q^4;q^2)_n}q^{2n}\\ &=(q^4;q^2)_\infty\sum_{n\geq0}\frac{q^{2n}}{(q^4;q^2)_n}\\ &=(q^2;q^2)_\infty\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_{n+1}}. \end{align*} Combining with \eqref{proof-GON-2}, we have \begin{align*} \frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}&=\frac{q^2}{(q;q)_\infty}(q^2;q^2)_\infty\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_{n+1}}\\ &=(-q;q)_\infty\sum_{n\geq0}\frac{q^{2n+2}}{(q^2;q^2)_{n+1}}\\ &=(-q;q)_\infty\left[\sum_{n\geq0}\frac{q^{2n}}{(q^2;q^2)_{n}}-1\right]. 
\end{align*} It follows from \eqref{chang-1-2} that \[\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}=(-q;q)_\infty\Bigg[\frac{1}{(q^2;q^2)_\infty}-1\Bigg]=\frac{1}{(q;q)_\infty}-(-q;q)_\infty.\] We arrive at \eqref{proof-GON-1}. The proof is complete. \qed \subsection{Combinatorial proof of Corollary \ref{equiv-g-sigma}} In this subsection, we will present the combinatorial proof of Corollary \ref{equiv-g-sigma}. For $n\geq 1$, we define \[\mathcal{N}(n)=\left\{(\pi,k)|\pi\in\mathcal{P}(n),0\leq k<LN(\pi)-maex(\pi)\right\}.\] Clearly, we have \[\sigma L(n)-\sigma maex(n)=\sum_{(\pi,k)\in\mathcal{N}(n)}1.\] Let $\mathcal{G}_O^{N}(n)$ denote the set of overpartitions of $n$ in $\mathcal{G}_O^{N}$. We find that in order to give the combinatorial proof of Corollary \ref{equiv-g-sigma}, we just need to build a bijection between $\mathcal{G}_O^{N}(n)$ and $\mathcal{N}(n)$. More precisely, for $n\geq 1$ and $k\geq 0$, we set \[\mathcal{N}(n,k)=\left\{\pi|\pi\in\mathcal{P}(n),0\leq k<LN(\pi)-maex(\pi)\right\}\] and let $\mathcal{G}_O^{N}(n,k)$ be the set of overpartitions in $\mathcal{G}_O^{N}(n)$ with exactly $k$ overlined parts. It suffices to show the following theorem. \begin{thm}\label{equiv-N-sub} For $n\geq 1$ and $k\geq 0$, there is a bijection between $\mathcal{G}_O^{N}(n,k)$ and $\mathcal{N}(n,k)$. \end{thm} \pf Let $\pi=(\pi_1,\pi_2,\ldots,\pi_m)$ be an overpartition in $\mathcal{G}_O^{N}(n,k)$. Using the definition of $\mathcal{G}_O^{N}(n,k)$, we deduce that $m>k$, $\pi_1,\ldots,\pi_{m-k}$ are non-overlined parts, $\pi_{m-k+1},\ldots,\pi_m$ are overlined parts and \[|\pi_1|\geq\cdots\geq|\pi_{m-k}|>|\pi_{m-k+1}|>\cdots>|\pi_m|.\] We first change the overlined parts $\pi_{m-k+1},\ldots,\pi_m$ in $\pi$ to non-overlined parts and denote the resulting partition by $\lambda$. Clearly, we have \begin{equation}\label{conjugate-pro} \lambda_1\geq\cdots\geq\lambda_{m-k}>\lambda_{m-k+1}>\cdots>\lambda_m.
\end{equation} Then, we let $\mu$ be the conjugate of $\lambda$. We proceed to show that $\mu$ is a partition in $\mathcal{N}(n,k)$. We consider the largest part of $\mu$ and the maximal excludant of $\mu$. It follows from the definition of conjugate that the largest part of $\mu$ is $m$, that is, \begin{equation}\label{conjugate-largest} LN(\mu)=m. \end{equation} Again by the definition of conjugate, we have $f_t(\mu)=\lambda_{t}-\lambda_{t+1}$ for $1\leq t<m$. Appealing to \eqref{conjugate-pro}, we obtain that for $m-k\leq t<m$, $f_t(\mu)=\lambda_{t}-\lambda_{t+1}>0$, which implies that $m-k,m-k+1,\ldots,m-1$ occur in $\mu$. Recall that the largest part of $\mu$ is $m$, so we get \begin{equation}\label{conjugate-maex} maex(\mu)<m-k. \end{equation} Combining \eqref{conjugate-largest} and \eqref{conjugate-maex}, we have \[LN(\mu)-maex(\mu)>m-(m-k)=k,\] and so $\mu$ is a partition in $\mathcal{N}(n,k)$. Obviously, the process above is reversible. The proof is complete. \qed \subsection{Proofs of \eqref{gen-E-N=O} and Corollary \ref{core-NO=R}} In this subsection, we first give an analytic proof of \eqref{gen-E-N=O} and then we give a combinatorial proof of Corollary \ref{core-NO=R}. {\noindent \bf Analytic proof of \eqref{gen-E-N=O}.} By considering the size of the smallest non-overlined part and the largest overlined part, we can get \begin{equation*} \sum_{n\geq 1}{E}_{NO}(n)q^{n}=\sum_{n\geq 1}\frac{q^n}{(q^n;q)_\infty}q^{n}(-q;q)_{n-1}=\frac{1}{(q;q)_\infty}\sum_{n\geq 1}q^{2n}(q^2;q^2)_{n-1}. \end{equation*} Combining with \eqref{proof-GON-1}, we arrive at \eqref{gen-E-N=O}, and thus the proof is complete. \qed Then, we give a combinatorial proof of Corollary \ref{core-NO=R}, which can be seen as a bijective proof of \eqref{gen-E-N=O}. {\noindent \bf Combinatorial proof of Corollary \ref{core-NO=R}.} Let $\pi$ be an overpartition counted by ${E}_{NO}(n)$. We define $\lambda=\phi_{NO}(\pi)$ by changing the overlined parts in $\pi$ to non-overlined parts.
Clearly, $\lambda$ is a partition enumerated by $R(n)$. Conversely, let $\lambda$ be a partition enumerated by $R(n)$. Assume that $k$ is the smallest integer such that $k$ appears at least twice in $\lambda$. Then, we define $\pi=\psi_{NO}(\lambda)$ by changing a $k$ in $\lambda$ to $\overline k$ and changing the parts smaller than $k$ in $\lambda$ to overlined parts. It is obvious that $\pi$ is an overpartition counted by ${E}_{NO}(n)$. Clearly, $\psi_{NO}$ is the inverse map of $\phi_{NO}$. Thus, we complete the proof. \qed \subsection{Proofs of Theorem \ref{N-O-THM-1}} This subsection is devoted to presenting an analytic proof and a combinatorial proof of Theorem \ref{N-O-THM-1}. We first give the analytic proof of Theorem \ref{N-O-THM-1}. {\noindent \bf Analytic proof of Theorem \ref{N-O-THM-1}.} In virtue of the smallest non-overlined part, we can get \begin{align*} \sum_{n\geq 1}\left({EG}_O^{N}(n)-{OG}_O^{N}(n)\right)q^n&=\sum_{n\geq 1} \frac{q^n}{(q^n;q)_\infty}(-q;q)_{n-1}(q^n;q)_\infty\\ &=\sum_{n\geq1}q^n(-q;q)_{n-1}\\ &=q\sum_{n\geq0}q^{n}(-q;q)_{n}\\ &=q\sum_{n\geq0}\frac{(q;q)_n(-q;q)_n}{(q;q)_n(0;q)_n}q^n. \end{align*} Using \eqref{Heine} with $a=q$, $b=-q$, $c=0$ and $t=q$, we have \begin{align*} \sum_{n\geq 1}\left({EG}_O^{N}(n)-{OG}_O^{N}(n)\right)q^n&=q\frac{(-q;q)_\infty(q^2;q)_\infty}{(0;q)_\infty(q;q)_\infty} \sum_{n\geq0}\frac{(0;q)_n(q;q)_n}{(q;q)_n(q^2;q)_n}(-q)^{n}\\ &=q\frac{(-q;q)_\infty}{1-q}\sum_{n\geq0}\frac{(-q)^n}{(q^2;q)_n}\\ &=-(-q;q)_\infty\sum_{n\geq 0}\frac{(-q)^{n+1}}{(q;q)_{n+1}}\\ &=-(-q;q)_\infty\left[\sum_{n\geq 0}\frac{(-q)^{n}}{(q;q)_{n}}-1\right]. \end{align*} Utilizing \eqref{Euler-1-000-111}, we arrive at \[\sum_{n\geq 1}\left({EG}_O^{N}(n)-{OG}_O^{N}(n)\right)q^n=-(-q;q)_\infty\Bigg[\frac{1}{(-q;q)_\infty}-1\Bigg]=(-q;q)_\infty-1.\] The proof is complete. \qed Before giving the combinatorial proof of Theorem \ref{N-O-THM-1}, we introduce the following notations.
For $n\geq 1$, let $\overline{\mathcal{P}'}(n)$ denote the set of overpartitions of $n$ in which at least one non-overlined part appears. Then, we divide $\overline{\mathcal{P}'}(n)$ into the following three disjoint subsets. \begin{itemize} \item $\mathcal{C}_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ with $LO(\pi)\geq LN(\pi)\geq1$; \item $\mathcal{F}_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ such that there are at least two non-overlined parts in $\pi$ and $LN(\pi)> LO(\pi)$; \item $\mathcal{H}_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ such that there is exactly one non-overlined part in $\pi$ and $LN(\pi)> LO(\pi)$. \end{itemize} Clearly, \[\overline{\mathcal{P}'}(n)=\mathcal{C}_{ON}(n)\bigcup\mathcal{F}_{ON}(n)\bigcup\mathcal{H}_{ON}(n).\] By restricting the involution $\mathcal{I}$ defined in Definition \ref{defi-involution} on $\mathcal{C}_{ON}(n)\bigcup\mathcal{F}_{ON}(n)$, it is easy to get the following lemma. \begin{lem}\label{lem-involution-CF-NEXT} For $n\geq 1$, the map $\mathcal{I}$ is a bijection between $\mathcal{C}_{ON}(n)$ and $\mathcal{F}_{ON}(n)$. Moreover, for an overpartition $\pi \in \mathcal{C}_{ON}(n)$, let $\lambda=\mathcal{I}(\pi)$. We have $\ell_{O\geq N}(\lambda)=\ell_{O\geq N}(\pi)-1$. \end{lem} With a bijective method, we get \begin{lem}\label{lem-ON-H-NUMBER-11111} For $n\geq 1$, the number of overpartitions in $\mathcal{H}_{ON}(n)$ is $D(n)$. \end{lem} \pf Let $\pi=(\pi_1,\pi_2,\ldots,\pi_m)$ be an overpartition in $\mathcal{H}_{ON}(n)$. If we change the overlined parts $\pi_2,\ldots,\pi_m$ in $\pi$ to non-overlined parts, then we get a partition into distinct parts counted by $D(n)$, and vice versa. This completes the proof. \qed Now, we are in a position to give the combinatorial proof of Theorem \ref{N-O-THM-1}.
{\noindent \bf Combinatorial proof of Theorem \ref{N-O-THM-1}.} Clearly, it is equivalent to showing that for $n\geq 1$, \begin{equation}\label{EO-OO-THM-combinatorial-0} \sum_{\pi\in\overline{\mathcal{P}'}(n)}(-1)^{\ell_{O\geq N}(\pi)}=D(n). \end{equation} Appealing to Lemma \ref{lem-involution-CF-NEXT}, we have \begin{equation}\label{EO-OO-THM-combinatorial-1} \sum_{\pi\in\mathcal{C}_{ON}(n)\bigcup\mathcal{F}_{ON}(n)}(-1)^{\ell_{O\geq N}(\pi)}=0. \end{equation} For an overpartition $\pi\in\mathcal{H}_{ON}(n)$, we have $SN(\pi)=LN(\pi)> LO(\pi)$, which yields $\ell_{O\geq N}(\pi)=0$, and so $(-1)^{\ell_{O\geq N}(\pi)}=1$. Using Lemma \ref{lem-ON-H-NUMBER-11111}, we get \begin{equation*} \sum_{\pi\in\mathcal{H}_{ON}(n)}(-1)^{\ell_{O\geq N}(\pi)}=D(n). \end{equation*} Combining with \eqref{EO-OO-THM-combinatorial-1}, we arrive at \eqref{EO-OO-THM-combinatorial-0}. The proof is complete. \qed \subsection{Proofs of Theorem \ref{thm-e-o-gen-1}} In this subsection, we will give an analytic proof and a combinatorial proof of Theorem \ref{thm-e-o-gen-1}. We first present the analytic proof of Theorem \ref{thm-e-o-gen-1}. {\noindent \bf Analytic proof of Theorem \ref{thm-e-o-gen-1}.} It is clear that \begin{equation}\label{final-000000000} \sum_{n\geq 1}{H}'_{ON}(n)q^n=\sum_{n\geq1} \frac{q^n}{1-q^n}(-q;q)_n. \end{equation} In view of the smallest non-overlined part, we can get \begin{align*} \sum_{n\geq 1}\left({EE}_O^{N}(n)-{OE}_O^{N}(n)\right)q^n&=\sum_{n\geq1}\frac{q^n}{(q^n;q)_\infty}(-q;q)_n(q^{n+1};q)_\infty\\ &=\sum_{n\geq1}\frac{q^n}{1-q^n}(-q;q)_n. \end{align*} Combining with \eqref{final-000000000}, we complete the proof. \qed Then, we give the combinatorial proof of Theorem \ref{thm-e-o-gen-1} with an argument similar to that in the combinatorial proof of Theorem \ref{N-O-THM-1}.
{\noindent \bf Combinatorial proof of Theorem \ref{thm-e-o-gen-1}.} Clearly, it is equivalent to showing that for $n\geq 1$, \begin{equation}\label{EO-OO-THM-combinatorial-000-0} \sum_{\pi\in\overline{\mathcal{P}'}(n)}(-1)^{\ell_{O>N}(\pi)}={H}'_{ON}(n). \end{equation} To do this, we divide $\overline{\mathcal{P}'}(n)$ into the following three disjoint subsets. \begin{itemize} \item $\mathcal{C}'_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ with $LO(\pi)\geq LN(\pi)\geq 1$ and $LO(\pi)>SN(\pi)$; \item $\mathcal{F}'_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ with $LN(\pi)> LO(\pi)$ and $LN(\pi)> SN(\pi)\geq 1$; \item $\mathcal{H}'_{ON}(n)$ is the set of overpartitions $\pi$ of $n$ with $LN(\pi)=SN(\pi)\geq 1$ and $SN(\pi)\geq LO(\pi)$. \end{itemize} By definition, we know that the number of overpartitions in $\mathcal{H}'_{ON}(n)$ is ${H}'_{ON}(n)$. For an overpartition $\pi\in\mathcal{H}'_{ON}(n)$, we have $SN(\pi)\geq LO(\pi)$, which yields $\ell_{O> N}(\pi)=0$, and so $(-1)^{\ell_{O> N}(\pi)}=1$. So, we get \begin{equation}\label{EO-OO-THM-combinatorial-000-1} \sum_{\pi\in\mathcal{H}'_{ON}(n)}(-1)^{\ell_{O> N}(\pi)}={H}'_{ON}(n). \end{equation} By restricting the involution $\mathcal{I}$ defined in Definition \ref{defi-involution} on $\mathcal{C}'_{ON}(n)\bigcup\mathcal{F}'_{ON}(n)$, we find that the map $\mathcal{I}$ is a bijection between $\mathcal{C}'_{ON}(n)$ and $\mathcal{F}'_{ON}(n)$. Moreover, for an overpartition $\pi \in \mathcal{C}'_{ON}(n)$, let $\lambda=\mathcal{I}(\pi)$. We have $\ell_{O> N}(\lambda)=\ell_{O> N}(\pi)-1$. This implies that \begin{equation*} \sum_{\pi\in\mathcal{C}'_{ON}(n)\bigcup\mathcal{F}'_{ON}(n)}(-1)^{\ell_{O> N}(\pi)}=0. \end{equation*} Combining with \eqref{EO-OO-THM-combinatorial-000-1}, we arrive at \eqref{EO-OO-THM-combinatorial-000-0}, and thus the proof is complete. \qed \end{document}
\documentclass[UKenglish, final]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath,mathscinet} \usepackage{amssymb} \usepackage{todonotes} \usepackage{graphicx} \usepackage{mathrsfs} \usepackage{lipsum} \usepackage{ragged2e} \usepackage{xcolor} \usepackage{proof} \usepackage{babel, csquotes, graphicx, textcomp, kantlipsum, booktabs} \usepackage[backend=biber, style=alphabetic, url=false, maxcitenames=4, maxbibnames=4, giveninits=true] {biblatex} \addbibresource{bib.bib} \DeclareNameAlias{default}{family-given} \usepackage[hidelinks, hypertexnames=false]{hyperref} \usepackage{varioref, cleveref} \RequirePackage{amsfonts} \RequirePackage{amsmath} \RequirePackage{amssymb} \RequirePackage{amsthm} \RequirePackage{cancel} \RequirePackage{comment} \RequirePackage{dsfont} \RequirePackage{enumitem} \RequirePackage{etoolbox} \RequirePackage{mathrsfs} \RequirePackage{mathtools} \RequirePackage{multirow} \RequirePackage{pgffor} \RequirePackage{physics} \RequirePackage{showkeys} \RequirePackage{stmaryrd} \RequirePackage{tablefootnote} \RequirePackage{textcomp} \RequirePackage{thmtools} \RequirePackage{tikz} \declaretheorem[style = plain, numberwithin = section]{theorem} \declaretheorem[style = plain, sibling = theorem]{corollary} \declaretheorem[style = plain, sibling = theorem]{lemma} \declaretheorem[style = plain, sibling = theorem]{proposition} \declaretheorem[style = plain, sibling = theorem]{observation} \declaretheorem[style = plain, sibling = theorem]{conjecture} \declaretheorem[style = definition, sibling = theorem]{definition} \declaretheorem[style = definition, sibling = theorem]{example} \declaretheorem[style = definition, sibling = theorem]{notation} \declaretheorem[style = remark, sibling = theorem]{remark} \declaretheorem[style = plain]{question} \DeclareMathOperator\arcsinh{arcsinh} \newcommand\crule[3][black]{\textcolor{#1}{\rule{#2}{#3}}} \title{The Fermat curves \\ and arrangements of lines and conics} \author{Nils Peder Astrup Toft and Torgunn Karoline 
Moe} \begin{document} \maketitle \begin{abstract} In this paper we present new results about arrangements of lines and conics associated to the Fermat curves in the projective plane. The first result is that we compute the $2$-Hessian to a Fermat curve, which is a union of lines, and we determine the singularities of this line arrangement. Then we consider the sextactic points on the Fermat curves and show that they are distributed on three grids. The lines in the grids constitute new line arrangements associated with the Fermat curves, and we determine the singularities on each of them. The main result is that we compute the hyperosculating conics to the Fermat curves, study the arrangement of these conics, and find that they intersect in a special way. \end{abstract} \tableofcontents \section{Introduction} Arrangements of lines, conics and curves of low degree in the projective plane have been a popular research topic in the last decade. This work has been driven by the interest in the so-called \emph{free} and \emph{nearly free} curves, for which we refer to the work by Dimca in \cite{Dim17}; see also \cite{Dim24} for conjectures and open problems on this topic. Moreover, given a plane curve, the study of points with hyperosculating curves, in particular sextactic points and hyperosculating conics, has been revitalized by the correction of Cayley's defining polynomial for the so-called $2$-Hessian by Maugesten and the second author in \cite{MM19}. The correct formula, in combination with modern computer algebra systems, makes it possible to study sextactic points and the hyperosculating conics on any given plane curve, not only smooth curves of low degree, see \cite{Mau17}. The fact that it is possible to construct many examples of free and nearly free curves using curves of low degree together with their inflection tangents or hyperosculating conics makes this a particularly hot topic.
Such arrangements have been studied for, amongst others, the nodal cubic, the Klein quartic, and the Fermat curves of degree $d=3$ and $d=4$. See the work by Szemberg and Szpond in \cite{SS24}, Abe, Dimca and Sticlaru in \cite{ADP24}, Dimca, Ilardi, Malara and Pokora in \cite{DGMP24}, and Merta and Zieli\'{n}ski in \cite{MZ24}. In addition, there are some recent results for the Fermat curve of degree $d \geq 3$ and arrangements of inflection tangents given by Dimca, Ilardi, Pokora and Sticlaru in \cite{DIPS24}. The purpose of this article is to study the Fermat curves $C_d$ of degree $d \geq 3$ and certain arrangements of associated curves. We give a brief overview of well known results about the inflection points and sextactic points with new details. Moreover, we present properties of arrangements of inflection tangents, hyperosculating conics, $2$-Hessian lines and other grid lines associated with the sextactic points. Our main findings can be summed up in the following theorem. \begin{theorem}\label{thm:main} For any set of $d$ co-linear sextactic points on a Fermat curve $C_d$ of degree $d>3$, the corresponding $d$ tangent lines intersect in one point on the fundamental triangle. Moreover, there are two points on the fundamental triangle contained in all of the corresponding $d$ hyperosculating conics. \end{theorem} In \Cref{sec:fermat} we recall the coordinates of the inflection points of the Fermat curve $C_d$. Moreover, we show that these are all higher order inflection points, and as a first new result we compute their inflectionary weight with respect to a complete linear system of degree $n$. Moreover, we show that the Fermat curve admits the maximal number of lines with maximal tangency, and we study the union of the inflection tangents and look at the singularities on this arrangement of curves. In \Cref{sec:hhess} we compute the $2$-Hessian to $C_d$ and the intersection with the Fermat curve to identify the sextactic points.
For completeness we determine the number of sextactic points and provide their coordinates. In \Cref{sec:lines} we provide an overview of interesting line arrangements associated with the Fermat curves. We show that the sextactic points are clustered in three grids. For each line in the grids, the $d$ tangent lines at the $d$ sextactic points on that line intersect in a $d$-fold point. Furthermore, the grids themselves give new line arrangements that turn out to be new examples of free curves. In \Cref{sec:hyposc} we compute the osculating conic at a given point on a Fermat curve. Moreover, we compute the hyperosculating conic at the sextactic points. Finally, proving the last part of our main result, we show that for each set of $d$ co-linear sextactic points, the corresponding $d$ hyperosculating conics intersect in two ordinary $d$-fold points on the fundamental triangle. \section{The Fermat curve}\label{sec:fermat} Let $\mathbb{P}^2$ denote the projective plane over the complex numbers $\mathbb{C}$. We refer to the union of lines $V(xyz)$ in $\mathbb{P}^2$ as \emph{the fundamental triangle}. The plane Fermat curve $C=C_d$ of degree $d \geq 3$ is given as the zero set $V(F)$ of the homogeneous polynomial $F=F_d \in\mathbb{C}[x,y,z]_d$, where \[F_d=x^d+y^d+z^d.\] The partial derivatives $F_x=dx^{d-1}$, $F_y=dy^{d-1}$, and $F_z=dz^{d-1}$ never vanish at the same point, so the Fermat curves are all smooth, of genus \[g=\frac{(d-1)(d-2)}{2}.\] \subsection{The Hessian and inflection points} The Hessian curve $H(C)$ to $C$ is determined by the vanishing locus of the polynomial \begin{align*} H=d^3(d-1)^3(xyz)^{d-2}, \end{align*} computed as the determinant of the Hessian matrix of $F$. It is well known that the intersection of $C$ and $H(C)$ contains the inflection points of $C$, and since $C$ is smooth, the intersection consists of these points only.
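The inflection-point data recalled in the next theorem can be sanity-checked numerically for small $d$; the following pure-Python sketch (the tolerances and sample values are our own choices) verifies that the $3d$ listed points lie on $C$, are pairwise distinct, and that $F$ restricts to $x^d$ on the tangent at $(0:1:u^k)$:

```python
import cmath

def inflection_checks(d, tol=1e-9):
    u = cmath.exp(1j * cmath.pi / d)   # a 2d-th root of unity with u**d = -1
    F = lambda x, y, z: x**d + y**d + z**d
    pts = set()
    for k in range(1, 2 * d, 2):       # k odd
        for p in ((0, 1, u**k), (u**k, 0, 1), (1, u**k, 0)):
            assert abs(F(*p)) < tol    # the point lies on C
            pts.add(tuple((round(complex(c).real, 6), round(complex(c).imag, 6)) for c in p))
    assert len(pts) == 3 * d           # all 3d inflection points are distinct
    # on the tangent y = u**(-k) * z at (0 : 1 : u**k), F(x, y, z) reduces to x**d,
    # so the intersection multiplicity with the tangent is d
    for k in range(1, 2 * d, 2):
        for x0, z0 in ((0.3 + 0.4j, -0.7 + 0.2j), (1.1 - 0.5j, 0.6 + 0.9j)):
            assert abs(F(x0, u**(-k) * z0, z0) - x0**d) < 1e-7
    return True
```

The check passes for, e.g., $d=3,\ldots,6$; it is a floating-point confirmation of the exact computation in the proof, not a replacement for it.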
For completeness and to fix notation, we recall the following result about inflection points on Fermat curves, see also \cite{Wat99}. \begin{theorem}\label{thm:inflcoord} The Fermat curve $C$ of degree $d$ has $3d$ inflection points. With $u\in\mathbb{C}$ a $2d$-root of unity, so that $u^d=-1$, the inflection points have coordinates \[ \bigcup_{\underset{k\text{ odd}}{1\leq k <2d}}\{(0:1:u^k),(u^k:0:1),(1:u^k:0)\}. \] Moreover, the tangent $T_p$ to $C$ at an inflection point $p$ intersects $C$ at $p$ with intersection multiplicity $I_p(T_p,C)=d$. \end{theorem} \begin{proof} First notice that the points in the intersection of $C$ and $H(C)$ must all be on the fundamental triangle $V(xyz)$. By symmetry, we may assume that $x=0$, which gives $y^d=-z^d$. We may further assume $y=1$. Then $z^d=-1$, implying that $z^{2d}=1$, so $z=u^k$ for $k$ odd with $0<k<2d$. The observation that $k$ must be odd follows by contradiction; if $k=2n$ for an integer $n$, then $-1=z^d=(u^{2n})^d=(u^{2d})^n=1$. Since there are $d$ odd numbers between 1 and $2d$, by symmetry there are a total of $3d$ inflection points. The $3d$ tangents to $C$ at the inflection points, the inflection tangents, are all maximal in the sense that they intersect $C$ only once, which can be shown by an easy computation. Indeed, let $p$ be the inflection point $p=(0:1:u^k)$ for $k$ odd. The tangent line $T_p$ to $C$ at the point $p$ is given by \begin{align*} T_p \colon &y-u^{-k}z=0. \end{align*} The intersection of $C$ and $T_p$ can be computed directly as \begin{align*} V(F)\cap V(T_p)&=V(x^d+y^d+z^d,y-u^{-k}z)\\ &=V(x^d+(u^{-k})^dz^d+z^d,y-u^{-k}z)\\ &=V(x^d-z^d+z^d,y-u^{-k}z)\\ &=V(x^d,y-u^{-k}z), \end{align*} from which we conclude that $C$ and $T_p$ intersect only at $p$, and $I_p(T_p,C)=d$. \end{proof} \begin{remark} Note that the fact that $I_p(T_p,C)=d$ means that the tangent line has maximal tangency with $C$ at $p$, and the Fermat curve has the maximal number of such lines on a curve of degree $d$.
\end{remark} \begin{remark} The inflection points are all on the fundamental triangle $V(xyz)$. However, three inflection points are not co-linear if they are not on the same line in the fundamental triangle. \end{remark} \subsection{The inflectionary weight of an inflection point} The study of inflection points and sextactic points on the Fermat curves is classical as they in some cases are examples of Weierstrass points, see \cite{Has50}. The Weierstrass points of a curve with respect to the canonical linear system $\lvert K\rvert$, with associated vector space $\mathcal{L}(K)$, and the higher-order Weierstrass points with respect to the pluricanonical system $\lvert nK\rvert$, with associated vector space $\mathcal{L}(nK)$, coincide with inflection points and sextactic points for smooth cubic and quartic curves, see \cite{EK19} and \cite{Far17}. In general it is well known, see \cite{Has50}, that the Weierstrass weight $w(p)$ of an inflection point, or trivial Weierstrass point, $p$, on the Fermat curve is equal to \[w(p)=\frac{1}{24}(d-1)(d-2)(d-3)(d+4).\] It is well known, by the Riemann-Hurwitz theorem, that \[\sum_{p \in C}w(p)=g(g^2-1),\] and that there are $3d^2$ other Weierstrass points for all $d \geq 3$. We shall later see that these points are sextactic points, and for $d \leq 5$, there are no other Weierstrass points \cite{Wat99}. According to Rohrlich in \cite{Roh82}, there are still open problems concerning Weierstrass points on the Fermat curves. There are, however, some results for curves of degree $d \leq 14$, see \cite{KK99, Tow97, Wat99}. We now consider the inflection points on the Fermat curves with respect to a complete linear system $Q_n$ on $C$, and compute their inflectionary weight $w_p(Q_n)$, see \cite[p. 240]{Mir95}.
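The closed form derived in the next theorem can be cross-checked in exact arithmetic against the double sum $\sum_{k=1}^{n}(n-k+1)\bigl(kd-\sum_{j=1}^{k}(n-j+2)\bigr)$ appearing in its proof; a short sketch (the function names are ours):

```python
from fractions import Fraction

def w_direct(d, n):
    """Inflectionary weight as the double sum over intersection multiplicities."""
    total = 0
    for k in range(1, n + 1):
        inner = k * d - sum(n - j + 2 for j in range(1, k + 1))
        total += (n - k + 1) * inner
    return total

def w_closed(d, n):
    """Closed form w_p(Q_n) = n(n+1)(n+2)(4d - (3n+5)) / 24."""
    return Fraction(n * (n + 1) * (n + 2) * (4 * d - (3 * n + 5)), 24)
```

The two agree for all $3\leq d\leq 12$ and $1\leq n\leq d$; in particular $w_p(Q_1)=d-2$ and $w_p(Q_2)=4d-11$, as in the example following the theorem.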
\begin{theorem}\label{thm:inflw} The inflectionary weight $w_p(Q_n)$ of an inflection point on the Fermat curve $C$ of degree $d$ with respect to a complete linear system $Q_n$ on $C$ with $n \leq d$, $\deg Q_n = nd$, and $r_n=\dim Q_n=\frac{n(n+3)}{2}$ is \[w_p(Q_n)=\frac{1}{24}n(n+1)(n+2)\bigl[4d-(3n+5)\bigr].\] \end{theorem} \begin{proof} We compute the inflectionary weight $w_p(Q_n)$ using the possible intersection multiplicities $h_i$ we get from intersecting curves of degree $n$ with $C$ at an inflection point. Note that this is equivalent to using the gap sequence. The inflectionary weight of an inflection point $p$ is computed by \[w_p(Q_n)=\sum_{i=0}^{r_n}(h_i-i).\] Since the intersection with the tangent line is $d$, the values of $h_i$ can easily be evaluated and seen to sum to \begin{align*} w_p(Q_n)&=\sum_{k=1}^n(n-k+1)\left(kd-\sum_{j=1}^k(n-j+2)\right). \end{align*} Summing over $j$ with the standard formula \[\sum_{j=1}^kj=\frac{k(k+1)}{2},\] we get \begin{align*} w_p(Q_n)&=\sum_{k=1}^n(n-k+1)\left(k\left(d-n-\frac{3}{2}\right) +\frac{1}{2}k^2\right)\\ &=\sum_{k=1}^n\left[-\frac{1}{2}k^3+\left(2-d+\frac{3}{2}n \right)k^2+\left((n+1)\left(d-n-\frac{3}{2}\right) \right)k \right]. \end{align*} The different parts in this sum over $k$ are computable as sums of integers, integers squared and integers cubed, and a straightforward computation with the standard formulas gives the result in the theorem. \end{proof} \begin{example} For $n=1$, so $r_1+1=3$, the result in \Cref{thm:inflw} is equivalent to the inflectionary weight of $p$ as an inflection point, $w_p(Q_1)=d-2$. For $n=2$, i.e., $r_2+1=6$, it is equal to the weight of the inflection point as a sextactic point, and we have $w_p(Q_2)=4d-11$. \end{example} \begin{remark} When $C$ is a smooth quartic, the notion of $(r+1)$-tactic and $n$-Weierstrass point coincide due to the dimension of $\mathcal{L}(K)$, see \cite{EK19}.
\end{remark} \subsection{The union of inflection tangents}\label{sec:inttang} Let the coordinates $x,y,z$ be denoted by $x_1,x_2,x_3$, respectively. Then we may organize the inflection points by letting $p_1^k=(1:u^k:0)$, $p_2^k=(0:1:u^k)$ and $p_3^k=(u^k:0:1)$ where $k< 2d$ is odd. Hence, all inflection points are of the form $p_i^k$. The tangent line to $C$ at $p_i^k$ is given by \begin{align*} T_{p_i^k}=V(x_i-u^{-k}x_{i+1}), \end{align*} with the convention that $x_i=x_{i+3}$. By symmetry, there are essentially two ways of choosing the inflection lines $T_{p_i^k}$ and $T_{p_j^l}$; we either let $i=j$, or $i+1=j$. In the first case, the intersection becomes \[ T_{p_i^k}\cap T_{p_i^l}= V(x_i-u^{-k}x_{i+1},x_i-u^{-l}x_{i+1})=V(x_i,x_{i+1}). \] Since the intersection is independent of $k$ and $l$, we have that at least $d$ inflection lines meet at the point where $x_i=x_{i+1}=0$ and $x_{i-1}=1$. For the second case, one calculates that \begin{align*} T_{p_i^k}\cap T_{p_{i+1}^l}&=V(x_i-u^{-k-l}x_{i-1},x_{i+1}-u^{-l}x_{i-1}). \end{align*} It is clear that each coordinate for a point in the intersection must be non-zero. This means that the three origins are not intersection points in this case. Thus, referring back to the first case, there are exactly $d$ lines through these three points. Moreover, in this second case, we may assume that $x_i=1$, implying by the above identity that $x_{i-1}=u^{k+l}$ and $x_{i+1}=u^{k}$. To show that the number of inflection tangents through a point of this form is exactly two, let $T_{p_n^m}$ be an arbitrary inflection tangent with $m$ odd. We evaluate for which combinations of $m$ and $n$ the point with $x_i=1$, $x_{i-1}=u^{k+l}$ and $x_{i+1}=u^{k}$ is contained in the tangent. It can easily be checked that $n=i$ implies $m=k$, and $n=i+1$ implies $m=l$. Indeed, these two cases correspond to exactly the two inflection lines we started with. Now assume $n=i-1$.
We evaluate the polynomial corresponding to the tangent line $T_{p_n^m}$ at the intersection point from above, to check when it vanishes: \begin{align*} x_n-u^{-m}x_{n+1}&=x_{i-1}-u^{-m}x_i\\ &=u^{k+l}-u^{2d-m}. \end{align*} Furthermore, $u^{k+l}-u^{2d-m}=0$ if and only if $m=2d-k-l$ or $m=4d-k-l$. Observe that $k+l$ is even since $k$ and $l$ are odd integers. Thus, $m$ is even in both cases, contradicting our assumption. This shows that the only inflection tangents containing the point with coordinates $x_i=1$, $x_{i-1}=u^{k+l}$ and $x_{i+1}=u^{k}$ are the two lines we started with, namely $T_{p_i^k}$ and $T_{p_{i+1}^l}$. We sum up these results in a theorem that generalizes \cite[Lemma~2.1]{MZ24}. Note that we refer to Arnold's standard $ADE$-classification of singularities. \begin{theorem} Let $\mathcal{L}$ be the union of the $3d$ inflection tangents of the Fermat curve of degree $d$, \[\mathcal{L}\colon (x^d+y^d)(y^d+z^d)(z^d+x^d)=0.\] With $\mathcal{L}$ partitioned into subsets $\mathcal{L}_i$ consisting of the inflection tangents $T_{p_i^k}$ for $k$ odd, $\mathcal{L}$ has precisely $3d^2$ singularities of type $A_1$ corresponding to intersections between lines $l_i\subset \mathcal{L}_i$ and $l_j\subset \mathcal{L}_j$ with $i\neq j$. When $i=j$, there are $3$ remaining singularities of $\mathcal{L}$, at the origins, where $\mathcal{L}$ has ordinary $d$-fold points corresponding to the common intersection point of all the lines in the subset $\mathcal{L}_i$. \end{theorem} \begin{remark} With the notation that $\mathcal{L}_z=x^d+y^d$ etc., Dimca et al.\ in \cite[Corollary~1.6]{DIPS24} showed that $\mathcal{L}_zF=0$ etc. is a free curve. Moreover, they showed in \cite[Corollary~1.7]{DIPS24} that the union of the Fermat curve, the arrangement $\mathcal{L}$, and the coordinate axes $xyz=0$ (or two of them), is a free curve.
\end{remark} \section{The 2-Hessian and sextactic points}\label{sec:hhess} \subsection{The 2-Hessian} In 1865, Cayley in \cite{Cay65} presented a formula for the $2$-Hessian curve $H_2(C)$ of a given plane curve $C$; a curve that intersects the given curve in its singular points, higher order inflection points, and sextactic points, i.e., points for which there exists a hyperosculating conic. We compute the defining polynomial $H_2=H_2(F)$ of the $2$-Hessian (up to a constant) for the Fermat curve using the corrected formula in \cite{MM19}, which leads to \begin{align*} H_2&= \begin{vmatrix} x^{5d-9} & x^{4d-9} & x^{3d-9}\\ y^{5d-9} & y^{4d-9} & y^{3d-9}\\ z^{5d-9} & z^{4d-9} & z^{3d-9}\\ \end{vmatrix}\\ &=(xyz)^{3d-9}(x^d-y^d)(y^d-z^d)(z^d-x^d). \end{align*} The polynomials $(x^d-y^d)$, $(y^d-z^d)$, and $(z^d-x^d)$ are bivariate, so they factorize further to \begin{align*} x^d-y^d&=\prod_{j=0}^{d-1}\left(x-\zeta^jy\right),\\ y^d-z^d&=\prod_{j=0}^{d-1}\left(y-\zeta^jz\right),\\ z^d-x^d&=\prod_{j=0}^{d-1}\left(z-\zeta^jx\right), \end{align*} where $\zeta\in\mathbb{C}$ is a primitive $d$-root of unity. Hence, the 2-Hessian factorizes as \begin{align*} H_2 &=(xyz)^{3d-9}\prod_{j=0}^{d-1}\left(x-\zeta^jy\right)\prod_{j=0}^{d-1}\left(y-\zeta^jz\right)\prod_{j=0}^{d-1}\left(z-\zeta^jx\right). \end{align*} The intersection of the $2$-Hessian and the Fermat curve consists of the sextactic points and the higher order inflection points of $C$. Removing the factors of $H_2$ that are factors in the Hessian polynomial $H$, we get a polynomial $\hat{H}_2$ of degree $3d$. 
\[\hat{H}_2=\prod_{j=0}^{d-1}\left(x-\zeta^jy\right)\prod_{j=0}^{d-1}\left(y-\zeta^jz\right)\prod_{j=0}^{d-1}\left(z-\zeta^jx\right).\] We denote this arrangement of lines by $\mathcal{B}$ and refer to it as \emph{the core $2$-Hessian lines}, where \[\mathcal{B} \colon \left(x^d-y^d\right)\left(y^d-z^d\right)\left(z^d-x^d\right)=0,\] and we let \[\mathcal{B}_z=V\left(x^d-y^d\right), \quad \mathcal{B}_x=V\left(y^d-z^d\right), \quad \mathcal{B}_y=V\left(z^d-x^d\right).\] \noindent We say that the line $z=0$ is \emph{opposite} to any line in $\mathcal{B}_z$, and similarly for the other variables and arrangements of lines. \begin{remark} \label{rem:cay2} Note that in this case, Cayley's original formula from \cite{Cay65} provides the same defining polynomial as the corrected formula from \cite{MM19}. Similarly, the same polynomial appears as the determinant of the Jacobian of $F$, $H$, and the bordered Hessian $\Psi$, the latter given by \begin{align*} \Psi&=\det \begin{pmatrix} 0 & H_x & H_y & H_z \\ H_x & F_{xx} & F_{xy} & F_{xz}\\ H_y & F_{xy} & F_{yy} & F_{yz}\\ H_z & F_{xz} & F_{yz} & F_{zz}\\ \end{pmatrix}\\ &=d^8(d-1)^8(d-2)^2(xyz)^{2d-6}\left(y^dz^d+x^dz^d+x^dy^d\right). \end{align*} In general, there is no apparent reason why this simplification -- to the determinant of $\mathrm{Jac}(F,H,\Psi)$ -- of Cayley's formula should be possible, even for smooth curves, but this Jacobian seems to be used to compute sextactic points in many cases of smooth curves of low degree, see e.g. \cite{Lev99, PR21}. \end{remark} \subsection{The sextactic points} By intersecting the core $2$-Hessian lines $\mathcal{B}$ with the Fermat curve $C$, we compute the coordinates of the sextactic points. This shows that there are exactly $3d^2$ such points. \begin{theorem} On the Fermat curve $C$ of degree $d$ there are exactly $3d^2$ sextactic points.
The sextactic points have coordinates \[s_{j,k}=(\zeta^{j}u^k:u^k:2^{1/d}),\] for $j \in \{0,\ldots, d-1\}$, and $k$ odd with $0< k< 2d$, or a permutation of these. \end{theorem} \begin{proof} By symmetry, it suffices to intersect the lines $V(x-\zeta^jy)$ with $C$ for all $j$. We get \begin{align*} V(x-\zeta^jy,F)&=V(x-\zeta^jy,x^d+y^d+z^d)\\ &=V(x-\zeta^jy,2y^d+z^d), \end{align*} whose solutions are the $d^2$ points $(\zeta^{j}u^k:u^k:2^{1/d})$, for $k$ odd and $0< k< 2d$. There are six possible permutations of three coordinates, but as the permutation interchanging the first two coordinates only permutes these $d^2$ points, and similarly for two other permutations, the fact that there are exactly $3d^2$ sextactic points on $C$ follows. \end{proof} \begin{remark} These points are exactly the so-called \emph{Leopoldt Weierstrass points}, see \cite{Roh82}. The coordinates of sextactic points are also well known for $d=3$, see \cite{SS24}, and $d=4$, see \cite[Proposition~4.7]{MZ24}. \end{remark} \section{Line arrangements associated with sextactic points}\label{sec:lines} \subsection{The singularities of the core \texorpdfstring{$2$}{2}-Hessian lines} We now consider the $3d$ core $2$-Hessian lines and their intersections. \begin{theorem}\label{thm:2hint} The $3d$ lines in the core $2$-Hessian lines $\mathcal{B}$ intersect in $d^2$ triple points and $3$ ordinary $d$-fold points at the origins. None of these points are contained in $C$. \end{theorem} \begin{proof} The $d$ lines in $\mathcal{B}_z=V(x^d-y^d)$ all intersect in $(0:0:1)$, and similarly for the other groups, making them $d$-fold points. Moreover, taking any line $x-\zeta^{j_1} y=0$ and any line $y-\zeta^{j_2}z=0$, they intersect in a point $q=(\zeta^{j_1+j_2}:\zeta^{j_2}:1)$, which is not contained in $C$, but which is also contained in the line $z-\zeta^{-j}x=0$ for $j=j_1+j_2$. This makes $q$ a triple point. There are $d^2$ ways to choose the two lines, hence $d^2$ such points.
\end{proof} \begin{remark} The line arrangement $V(xyz) \cup \mathcal{B}$ is by \cite[Corollary~2.9]{MV23} well known to be a free curve with exponents $(d+1,2d+1)$. Similar results hold for subarrangements by \cite[Corollary~2.10]{MV23}; in particular, $\mathcal{B}$ is free with exponents $(d+1,2d-2)$. \end{remark} \subsection{The three grids of sextactic points} The $2$-Hessian by Cayley is not the only curve that intersects a given curve in its sextactic points. Indeed, adding any term (of appropriate degree) with $F$ as a factor to the defining polynomial gives a new curve with similar properties. In particular, and more surprisingly, in the case of the Fermat curve we can work with the factors of degree $d$ of the core $2$-Hessian, which gives us three grids of sextactic points. We develop this result for one cluster of points, but note that there are similar results for the other two. Before we state the result, let $\mathcal{M}$ and $\mathcal{N}$ denote the following two line arrangements, referred to as \emph{the modulated lines}, \begin{align*} \mathcal{M} \colon& \left(z^d+2y^d\right)\left(x^d+2z^d\right)\left(y^d+2x^d\right)=0,\\ & \prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(z-u^{-k}2^{1/d}y\right)\prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(x-u^{-k}2^{1/d}z\right) \prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(y-u^{-k}2^{1/d}x\right)=0,\\ \text{ and } \mathcal{N} \colon & \left(y^d+2z^d\right)\left(z^d+2x^d\right)\left(x^d+2y^d\right)=0,\\ & \prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(y-u^{-k}2^{1/d}z\right)\prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(z-u^{-k}2^{1/d}x\right) \prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(x-u^{-k}2^{1/d}y\right)=0. \end{align*} As in the case of $\mathcal{B}$, we use the convention that $\mathcal{M}_x=V(z^d+2y^d)$ etc., where the subscript indicates the variable that is \emph{not} in the defining polynomial, and we say that $\mathcal{M}_x$ is \emph{opposite} to the line $x=0$ etc.
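The factorizations above can be verified numerically, together with the incidences used below: the $d$ lines $z-u^{-k}2^{1/d}y=0$, $k$ odd, multiply out to $z^d+2y^d$, and each sextactic point $s_{j,k}=(\zeta^ju^k:u^k:2^{1/d})$ from the previous section satisfies the defining equations of $\mathcal{B}_z$, $\mathcal{M}_x$ and $\mathcal{N}_y$ simultaneously. A sketch (sample points and tolerances are our own choices):

```python
import cmath
import random

def check_modulated(d, trials=20, tol=1e-8):
    u = cmath.exp(1j * cmath.pi / d)        # 2d-th root of unity with u**d = -1
    zeta = cmath.exp(2j * cmath.pi / d)     # primitive d-th root of unity
    c = 2 ** (1.0 / d)
    rng = random.Random(0)
    # the d linear factors z - u^(-k) 2^(1/d) y, k odd, multiply out to z^d + 2 y^d
    for _ in range(trials):
        y = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        prod = 1
        for k in range(1, 2 * d, 2):
            prod *= z - u ** (-k) * c * y
        assert abs(prod - (z ** d + 2 * y ** d)) < tol
    # each sextactic point s_{j,k} lies on a line of B_z, M_x and N_y at once
    for j in range(d):
        for k in range(1, 2 * d, 2):
            x0, y0, z0 = zeta ** j * u ** k, u ** k, c
            assert abs(x0 ** d - y0 ** d) < tol        # B_z: x^d - y^d
            assert abs(z0 ** d + 2 * y0 ** d) < tol    # M_x: z^d + 2y^d
            assert abs(z0 ** d + 2 * x0 ** d) < tol    # N_y: z^d + 2x^d
    return True
```

The other two families of factorizations follow by the same computation after permuting the variables.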
\begin{theorem}\label{thm:coremod} The $3d^2$ sextactic points on the Fermat curve $C=V(F)$ are clustered in three groups of $d^2$ points. The $d^2$ points of the form \[s_{j,k}=(\zeta^ju^k:u^k:2^{1/d}),\] with $j\in\{0,\ldots,d-1\}$, and $k$ odd with $0<k<2d$, are distributed with $d$ points on each of the $3d$ lines in $\mathcal{B}_z=V\left(x^d-y^d\right)$, $\mathcal{M}_x=V\left(z^d+2y^d\right)$, and $\mathcal{N}_y=V\left(z^d+2x^d\right)$. The intersection of the $3d$ lines consists of $d^2$ ordinary triple points at the sextactic points, and three ordinary $d$-fold points at the origins. \end{theorem} \begin{proof} First observe that $\mathcal{B}_z$, $\mathcal{M}_x$, and $\mathcal{N}_y$ are given by bivariate homogeneous polynomials of degree $d$, hence they factor into $d$ linear polynomials, \begin{align*} \mathcal{B}_z=x^d-y^d&=\prod_{j=0}^{d-1}\left(x-\zeta^jy\right),\\ \mathcal{M}_x=z^d+2y^d&=\prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(z-u^{-k}2^{1/d}y\right),\\ \mathcal{N}_y = z^d+2x^d&=\prod_{\underset{k\text{ odd}}{k=1}}^{2d}\left(z-u^{-k}2^{1/d}x\right). \end{align*} Second, observe that \begin{align*} z^d+2y^d&=F-\left(x^d-y^d\right),\\ z^d+2x^d&=F+\left(x^d-y^d\right). \end{align*} Thus, by construction, both $V\left(z^d+2y^d\right)$ and $V\left(z^d+2x^d\right)$ intersect $C$ in the same $d^2$ sextactic points as $V\left(x^d-y^d\right)$. This implies that through any sextactic point there are exactly three of these lines, and on each of the three lines there are in total $d$ sextactic points. Moreover, the latter claim follows since the $d$ lines in $\mathcal{B}_z$ all contain $(0:0:1)$, the lines in $\mathcal{M}_x$ contain $(1:0:0)$, and the lines in $\mathcal{N}_y$ contain $(0:1:0)$. \end{proof} \begin{corollary} Any union of $\mathcal{B}_i$, $\mathcal{M}_j$ and $\mathcal{N}_k$ where $i$, $j$ and $k$ are pairwise distinct, intersected with the Fermat curve gives exactly one cluster of $d^2$ sextactic points.
Moreover, both the union $\mathcal{B}_i \cup \mathcal{M}_i \cup \mathcal{N}_i$ and any arrangement of lines $\mathcal{K}=\mathcal{K}_x \cup \mathcal{K}_y \cup \mathcal{K}_z$, for $\mathcal{K}$ any of $\mathcal{B}$, $\mathcal{M}$, or $\mathcal{N}$, provide curves that intersect the Fermat curve in all of its $3d^2$ sextactic points. \end{corollary} \begin{remark} The arrangement $\mathcal{B}_y \cup \mathcal{M}_y \cup \mathcal{N}_y$, for $d=4$ and with $z=1$, shows up in \cite[17]{EK19} as the polynomial $P_{12}(x)=(x^4+1)(x^4+2)(2x^4+1)$, the essential component of the Wronskian $W_2(x,y)$ of $\{1,x,y,x^2,xy,y^2\}$, the roots of which determine the sextactic points on $C_4$. We suspect that this indeed is a general pattern. Notice that all the 12 lines in $V(P_{12})$ intersect in $(0:1:0)$. \end{remark} Some of these line arrangements turn out to be new examples of free curves. We state the results for a fixed arrangement, but they hold for all permutations of the variables. \begin{theorem} The line arrangement $\mathcal{B}_z\mathcal{M}_x\mathcal{N}_y$ is a free curve with exponents $(d+1,2d-2)$. \end{theorem} \begin{proof} This can be seen by direct computation of the Jacobian syzygy of the defining polynomial of the line arrangement \[(x^d-y^d)(z^d+2y^d)(z^d+2x^d),\] which is \begin{align*} &2^dx^{d+1}-2^{d+1}xy^d+2xz^d,\\ &-2^{d+1}x^dy+2^dy^{d+1}+2yz^d,\\ &-2^{d+1}x^dz-2^{d+1}y^dz-z^{d+1},\\ &2^dx^{d-1}y^{d-1}, \quad -y^{d-1}z^{d-1},\quad -x^{d-1}z^{d-1}. \end{align*} The requirement for the line arrangement to be free is (see \cite[Theorem~2.3]{Dim24}) \begin{equation}\label{eq:freeness}r^2-(\hat{d}-1)r+(\hat{d}-1)^2=\tau(V(P)),\end{equation} where $\hat{d}$ is the degree of the defining polynomial $P$, $r < \tfrac{\hat{d}}{2}$ is the smallest degree in the syzygy-module of the Jacobian of the polynomial $P$, and $\tau(V(P))$ is the total Tjurina number of the arrangement $V(P)$.
By considering the syzygy-module, we see that the exponents are $(d+1,2d-2)$, so $r=d+1$. Moreover, the arrangement is clearly of degree $\hat{d}=3d$. With this we compute the left hand side of \Cref{eq:freeness} as \begin{equation*} (d+1)^2-(3d-1)(d+1)+(3d-1)^2=7d^2-6d+3. \end{equation*} We then note from \Cref{thm:coremod} that the line arrangement has $d^2$ triple points and three $d$-fold points. At an ordinary $d$-fold point the Tjurina number is $(d-1)^2$; in particular, at an ordinary triple point the Tjurina number is $(3-1)^2=4$. Therefore, the total Tjurina number $\tau(\mathcal{B}_z\mathcal{M}_x\mathcal{N}_y)$ can be found as \begin{equation*} d^2 (3-1)^2+3(d-1)^2=7d^2-6d+3, \end{equation*} and \Cref{eq:freeness} holds, so the arrangement is free. \end{proof} \begin{remark} Note that the arrangement $\mathcal{B}_x\mathcal{M}_x\mathcal{N}_x$ is trivially free. Indeed, since the defining polynomial does not contain $x$, the partial derivative of the defining polynomial with respect to $x$ vanishes, so the arrangement is free with exponents $(0,3d-1)$. This can also be seen from the fact that $(1:0:0)$ is the only intersection point of the $3d$ lines, corresponding to an ordinary $3d$-fold point. The freeness criterion then holds with $r=0$, \[r^2-(3d-1)r+(3d-1)^2=(3d-1)^2.\] \end{remark} \subsection{The modulated lines} We now study the modulated lines and their intersections. We only consider $\mathcal{M}$, as $\mathcal{N}$ by symmetry has exactly the same properties. \begin{theorem} The $3d$ lines in the union of the modulated lines $\mathcal{M}$ intersect in $3d^2$ ordinary double points and $3$ ordinary $d$-fold points at the origins. None of these points are contained in $C$. \end{theorem} \begin{proof} The $d$ lines in $\mathcal{M}_x=V(z^d+2y^d)$ all intersect in $(1:0:0)$, and similarly for the other two groups and origins, making them $d$-fold points.
Moreover, any two lines from two different groups intersect in a point $q$, which is contained neither in $C$ nor in any line from the third group, so $q$ is indeed an ordinary double point on the line arrangement. By symmetry it suffices to demonstrate this for two lines from $\mathcal{M}_x$ and $\mathcal{M}_{z}$, respectively, say $z-u^{-k_1}2^{1/d}y=0$ and $y-u^{-k_2}2^{1/d}x=0$. These two lines intersect in the point $q=\left(1:u^{-k_2}2^{1/d}:u^{-(k_1+k_2)}2^{2/d}\right)$. By inspection, it is impossible that $q$ is contained in any line from $\mathcal{M}_y$, i.e., of the form $x-u^{-k}2^{1/d}z=0$ for $k$ odd with $0<k<2d$. There are $3d^2$ ways to choose two different lines from three groups, each containing $d$ lines, and the result follows. \end{proof} \begin{remark} Neither $\mathcal{M}$ nor $\mathcal{N}$ is a free curve. Indeed, the arrangements have three ordinary $d$-fold points as well as $3d^2$ ordinary double points. The freeness criterion then demands that \[r^2-(3d-1)r+(3d-1)^2=3(d-1)^2+3d^2(2-1)^2.\] This can be simplified to the quadratic equation \[r^2-(3d-1)r+3d^2-2=0,\] which has negative discriminant for $d>1$, so $r$ would have to be complex, a contradiction. \end{remark} \subsection{Tangent lines at sextactic points} We conclude this section with an observation about the tangent lines at sextactic points on the Fermat curves. \begin{theorem}\label{thm:tangsexthes} The $d$ tangents to the Fermat curve $C$ at $d$ sextactic points co-linear on the same line in the core $2$-Hessian lines $\mathcal{B}$ all intersect in the same point. When $d$ is odd, the common intersection point is an inflection point on $C$. \end{theorem} \begin{proof} It suffices to check the first claim for the $d$ sextactic points on one line in the core $2$-Hessian lines, say $V(x-y)$.
At the $d$ sextactic points $s_k$ with coordinates $s_k=(1:1:u^{-k}2^{1/d})$ with $k$ odd and $0<k<2d$, the curve $C$ has tangent lines $T_{s_k} \colon x+y+\left(u^{-k}2^{1/d}\right)^{d-1}z=0$. These lines all contain $(1:-1:0)$, proving the first part. For the second part, observe that when $d$ is even, $(1:-1:0)$ is not on $C$. When $d$ is odd, $(1:-1:0)$ is an inflection point on $C$ by \Cref{thm:inflcoord}. \end{proof} \begin{remark} This observation is well known for all smooth cubic curves, including the Fermat cubic, see \cite[Theorem~B]{MZ25}, as the sextactic points correspond to $6$-torsion points that are not $3$-torsion points. \end{remark} We have a similar result for groups of $d$ sextactic points co-linear with respect to the modulated lines. \begin{theorem}\label{thm:tangsextmod} The $d$ tangents to the Fermat curve $C$ at $d$ sextactic points co-linear on the same line in the modulated lines $\mathcal{M}$ and $\mathcal{N}$, all intersect in the same point. When $d$ is odd, the common intersection point is contained in $C$. \end{theorem} \begin{proof} It suffices to check the first claim for the $d$ sextactic points on one line, say one of the lines in $V(z^d+2y^d)$. At the $d$ sextactic points $s_j$ with coordinates $s_j=(\zeta^j:1:u^{-k}2^{1/d})$ with $j=1,\ldots,d$ and $k$ fixed, the curve $C$ has tangent lines $T_{s_j} \colon \left(\zeta^j\right)^{d-1}x+y+\left(u^{-k}2^{1/d}\right)^{d-1}z=0$. These lines all contain $(1:-\zeta^{-j}:0)$, proving the first part. For the second part, observe that when $d$ is even, $(1:-\zeta^{-j}:0)$ is not on $C$, while when $d$ is odd, $(1:-\zeta^{-j}:0)$ is a point on $C$. \end{proof} \section{Arrangements of hyperosculating conics}\label{sec:hyposc} In 1859, Cayley in \cite{Cay59} gave a formula for the defining polynomial to the osculating conic to a plane curve at a given point. In this section we compute this conic for a given point on the Fermat curve. 
With this general result, it is a simple task to compute the defining polynomials to its hyperosculating conics at sextactic points. These conics happen to intersect in a peculiar way, which is our main result. \subsection{The hyperosculating conics} We start by computing the osculating conic to $C$ at a given point $p$. \begin{theorem}\label{thm:osccon} The osculating conic $O_p$ to a point $p=(p_x:p_y:p_z)$ on the Fermat curve is given by the polynomial \small{ \begin{align*} &p_x^{2d-2}(d+1)\left((2d-1)p_y^dp_z^d+(2-d)(p_y^d+p_z^d)p_x^d\right)x^2\\ &+p_y^{2d-2}(d+1)\left((2d-1)p_x^dp_z^d+(2-d)(p_x^d+p_z^d)p_y^d\right)y^2\\ &+p_z^{2d-2}(d+1)\left((2d-1)p_x^dp_y^d+(2-d)(p_x^d+p_y^d)p_z^d\right)z^2\\ &-\left(2(d+1)(d-2)p_x^{2d-1}p_y^{2d-1}+4(2d-1)(d-2)(p_x^{d-1}p_y^{2d-1}+p_x^{2d-1}p_y^{d-1})p_z^d\right)xy\\ &-\left(2(d+1)(d-2)p_x^{2d-1}p_z^{2d-1}+4(2d-1)(d-2)(p_x^{d-1}p_z^{2d-1}+p_x^{2d-1}p_z^{d-1})p_y^d\right)xz\\ &-\left(2(d+1)(d-2)p_y^{2d-1}p_z^{2d-1}+4(2d-1)(d-2)(p_y^{d-1}p_z^{2d-1}+p_y^{2d-1}p_z^{d-1})p_x^d\right)yz. \end{align*}} \end{theorem} \begin{proof} By Cayley's formula \cite{Cay59}, see \cite{MM19} for notation, $O_p$ is given as the vanishing locus of the polynomial \begin{align*} 9H^3(p)D^2F_p-(6H^2(p)DH_p+9H^3(p)\Lambda(p)DF_p)DF_p, \end{align*} where $9H^3\Lambda=-3\Omega H+4\Psi$. We obtain the following expressions for $\Omega$ and $\Psi$ via straightforward calculations: \begin{align*} \Omega&=d^5(d-1)^5(d-2)(d-3)(xyz)^{d-4}(x^dy^d+y^dz^d+z^dx^d),\\ \Psi&=d^8(d-1)^8(d-2)^2(xyz)^{2d-6}(x^dy^d+y^dz^d+z^dx^d). \end{align*} We substitute back into our formula for $9H^3\Lambda$ and get \begin{align*} 9H^3(p)\Lambda(p)&=-3\Omega(p) H(p)+4\Psi(p)\\ &=d^8(d-1)^8(d+1)(d-2)(p_xp_yp_z)^{2d-6}(p_x^dp_y^d+p_y^dp_z^d+p_z^dp_x^d). 
\end{align*} Moreover, we compute the following evaluations and polynomials \begin{align*} H^2(p)&= d^6(d-1)^6(p_xp_yp_z)^{2d-4},\\ H^3(p)&= d^9(d-1)^9(p_xp_yp_z)^{3d-6},\\ DF_p&= d(p_x^{d-1}x+p_y^{d-1}y+p_z^{d-1}z),\\ D^2F_p&= d(d-1)(p_x^{d-2}x^2+p_y^{d-2}y^2+p_z^{d-2}z^2),\\ DH_p&= d^3(d-1)^3(d-2)(p_xp_yp_z)^{d-3}(p_yp_zx+p_xp_zy+p_xp_yz). \end{align*} The result follows by substituting these expressions into Cayley's formula for the osculating conic. \end{proof} This immediately allows us to compute all the hyperosculating conics for a Fermat curve. We do this explicitly for the $d^2$ sextactic points on $\mathcal{B}_z$. Similar results hold for the other sextactic points. \begin{corollary}\label{cor:hyposccon} Let $s_{j,k}:=(\zeta^j:1:u^{-k}2^{1/d})$ be a sextactic point on $\mathcal{B}_z$. Then the hyperosculating conic to $s_{j,k}$ is given by the polynomial \begin{align*} O_{j,k}=&d(d+1)\zeta^{-2j}x^2\\ &+d(d+1)y^2\\ &-4(d+1)(2d-3)u^{2k}2^{-2/d}z^2\\ &-2(d-2)(5d-3)\zeta^{-j}xy\\ &+8d(d-2)\zeta^{-j}u^{k}2^{-1/d}xz\\ &+8d(d-2)u^{k}2^{-1/d}yz. \end{align*} \end{corollary} \subsection{Intersections of hyperosculating conics} The defining polynomials of the hyperosculating conics allow us to compute their intersections explicitly. Quite surprisingly, these hyperosculating conics intersect in a peculiar pattern. \begin{theorem}\label{thm:hyposches} The $d$ hyperosculating conics to the Fermat curve $C$ at the $d$ sextactic points co-linear on the core $2$-Hessian lines $\mathcal{B}$ have two common intersection points when $d > 3$, and one common intersection point when $d=3$. In particular, the common intersection points are contained in the opposite line in the fundamental triangle. \end{theorem} \begin{proof} The result for $d=3$ is proved in \cite{DGMP24}, so let $d>3$. By symmetry it suffices to show this for co-linear sextactic points on a $2$-Hessian line opposite to $z=0$. 
Given integers $j$ and $k$, where $k$ is odd, let $s_{j,k}:=(\zeta^j:1:u^{-k}2^{1/d})$ be a sextactic point on the line $V(x-\zeta^jy)$. Let $O_{j,k}$ be the hyperosculating conic to the Fermat curve at the point $s_{j,k}$. Fixing $j$, we show that each conic intersects the line opposite to $V(x-\zeta^jy)$, i.e., $V(z)$, in the same two points. The intersection of $V(z)$ with $O_{j,k}$ from \Cref{cor:hyposccon} then gives \begin{align*} O_{j,k}\cap V(z)&=V\left(d(d+1)\zeta^{-j}x^2-2(d-2)(5d-3)xy+d(d+1)\zeta^jy^2,z\right). \end{align*} Observe that when $d>3$, the discriminant of the quadratic polynomial above is always positive, so we obtain two points of intersection. Moreover, $j$ is fixed, and the polynomials are independent of $k$, thus the intersection is independent of $k$, and the result follows. The coordinates of the two points can be found by computing the roots of the above polynomial, \[\left(\frac{(d-2)(5d-3)\pm2(d-1)\sqrt{3(2d-1)(d-3)}}{d(d+1)}:\zeta^j:0\right).\] When $d=3$, the discriminant is equal to 0, and there is only one solution, $(1:\zeta^j:0)$, see \cite{DIPS24}. \end{proof} \begin{remark}\label{rem:twootherpoints} Note that between any two hyperosculating conics with fixed $j$, say $O_{j,k_1}$ and $O_{j,k_2}$, there are two other intersection points outside $V(z)$, but these are not independent of $k$. Indeed, the intersection points of the two conics are the base points of the pencil they span, and they must lie on the degenerate conic we get when we subtract one defining polynomial from the other, $z\cdot \ell=0$, where (up to a constant) \[\ell=2d(d+1)\zeta^{-j}x+2d(d-2)y-(d+1)(2d-3)(u^{k_1}+u^{k_2})2^{-1/d}z.\] Although quite cumbersome, it is possible to verify that the intersection of $O_{j,k_i}$ with $V(\ell)$ consists of exactly two points by direct computation of the coordinates as in the proof of \Cref{thm:hyposches}. \end{remark} \begin{remark} In \cite{DGMP24}, Dimca et al.
showed that the cubic Fermat curve together with three hyperosculating conics at three sextactic points co-linear with respect to $\mathcal{B}$ forms a free curve. We suspect that this is not true for higher degrees. \end{remark} We now consider sets of hyperosculating conics from sextactic points co-linear on the modulated lines. There is a similar pattern here. Note that again the result is only formulated for $\mathcal{M}$, but the same holds for $\mathcal{N}$. \begin{theorem} When $d\geq 3$, the $d$ hyperosculating conics at $d$ sextactic points co-linear with respect to one of the lines in $\mathcal{M}$ (or $\mathcal{N}$) have two common intersection points. In particular, the two common intersection points are contained in the opposite line in the fundamental triangle. \end{theorem} \begin{proof} As before, we may use the defining polynomials from \Cref{cor:hyposccon} to compute the intersection points. Let $O_{j,k}$ be the $d$ hyperosculating conics for fixed $k$ odd, and $j \in \{1,\ldots, d\}$. The intersection of $V(x)$ with $O_{j,k}$ then gives \begin{align*} O_{j,k}\cap V(x)&=V(d(d+1)u^{-k}2^{1/d}y^2+8d(d-2)yz-4(d+1)(2d-3)u^{k}2^{-1/d}z^2,x). \end{align*} This intersection is independent of $j$ (with $k$ fixed), and the discriminant is strictly positive for $d \geq 3$, so the $d$ hyperosculating conics share the same two intersection points, \[\left(0:\frac{-8d(d-2)\pm4(d-1)\sqrt{3d(2d-1)}}{d(d+1)}:u^k\right).\] \end{proof} \begin{remark} As in \Cref{rem:twootherpoints}, it is possible to show that there are two other intersection points (outside $V(x)$) between any two hyperosculating conics co-linear with respect to $\mathcal{M}$. \end{remark} Our main result, \Cref{thm:main}, is a consequence of the above. Exploiting the symmetries in $C$ and the fundamental triangle, we round off with a corollary.
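The square roots appearing in these intersection points come from discriminant factorizations; the following sympy sketch (a verification aid, not part of the text) checks the factorization used in the proof of \Cref{thm:hyposches}:

```python
# Discriminant factorization behind the intersection points in the
# core 2-Hessian case: for a = c = d(d+1), b = -2(d-2)(5d-3), one has
# (b/2)^2 - a*c = (d-2)^2 (5d-3)^2 - d^2 (d+1)^2 = 12 (d-1)^2 (2d-1)(d-3),
# giving the roots ((d-2)(5d-3) +- 2(d-1) sqrt(3(2d-1)(d-3))) / (d(d+1)).
from sympy import symbols, expand

d = symbols('d')
bracket = (d - 2)**2 * (5*d - 3)**2 - d**2 * (d + 1)**2
assert expand(bracket - 12*(d - 1)**2 * (2*d - 1) * (d - 3)) == 0
# zero at d = 3 (a single intersection point), positive for d > 3
assert bracket.subs(d, 3) == 0
assert bracket.subs(d, 4) > 0
```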
\begin{corollary} On each line in the fundamental triangle $V(xyz)$ there are $d$ points in which the tangents to $C$ at $d$ co-linear sextactic points intersect, and $2d$ points in which the hyperosculating conics at the same co-linear sextactic points intersect. When $d=3$, the $3$ hyperosculating conics at sextactic points co-linear with respect to $\mathcal{B}$ intersect in one point, hence in this case there are $3$ of these points on each line in the fundamental triangle. \end{corollary} \printbibliography \end{document}
2412.17024v1
http://arxiv.org/abs/2412.17024v1
Foliation of constant harmonic mean curvature surfaces in asymptotic Schwarzschild spaces
\documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{stmaryrd} \usepackage{amssymb,latexsym} \usepackage{amsfonts,mathrsfs} \usepackage[margin=1.8in]{geometry} \usepackage[all,cmtip]{xy} \usepackage[colorlinks= true,pdfborder={0 0 0}]{hyperref} \usepackage{graphicx} \usepackage{esint} \usepackage {verbatim} \usepackage{color} \vfuzz2pt \hfuzz2pt \def\pdz#1{\dfrac{\partial}{\partial z_{#1}}} \def\pd#1{\dfrac{\partial}{\partial #1}} \def\f#1#2{\frac{#1}{#2}} \def\p#1#2{\dfrac{\partial #1}{\partial#2}} \def\pa{\partial} \def\dt{\frac{d}{dt}} \def\n{\nabla} \def\lap{\bigtriangleup} \def\a{\alpha} \def\b{\beta} \def\ga{\gamma} \def\vph{\varphi} \def\({\left (} \def\){\right )} \def\<{\langle} \def\>{\rangle} \def\ma{|\mathring{A}|^2} \def\ch{\mathring{h}_{ij}} \newcommand{\bel}[1]{\begin{equation}\label{#1}} \newcommand{\lab}[1]{\label{#1}} \newcommand{\beq}{\begin{equation}} \newcommand{\ba}{\begin{eqnarray}} \newcommand{\ea}{\end{eqnarray}} \newcommand{\rf}[1]{(\ref{#1})} \newcommand{\qe}{\end{equation}} \newcommand{\eeq}{\end{equation}} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{claim}{Claim}[section] \newtheorem*{acknowledge}{Acknowledgement} \newtheorem{eg}{Example}[section] \newtheorem{asup}[thm]{Assumption} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\real}{\mathbb R^n} \newcommand{\R}{\mathbb R} \newcommand{\eps}{\varepsilon} \newcommand{\To}{\longrightarrow} \newcommand{\pf}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\wc}{\llcorner} \newcommand{\cta}{\theta} \newcommand{\Om}{\Omega} \newcommand{\om}{\omega} \newcommand{\vp}{\varphi} \newcommand{\vr}{\varrho} \newcommand{\sub}{\subset} \newcommand{\half}{\frac{1}{2}} \newcommand{\tab}[1]{ 
\begin{tabular}[]{||p{11cm}||} #1 \end{tabular}} \newcommand{\foo}[1]{\footnote{#1}} \newcommand{\red}[1]{ {\color{red} #1} } \newcommand{\blue}[1]{ {\color{blue} #1} } \newcommand{\tr}{\mbox{\rm tr\,}} \title[harmonic mean curvature surface]{Foliation of constant harmonic mean curvature surfaces in asymptotically Schwarzschild spaces} \keywords{volume preserving harmonic mean curvature flow; asymptotically Schwarzschild space; foliation; center of mass} \author{Yaoting Gui, Yuqiao Li, Jun Sun} \address{Yaoting Gui, Beijing International Mathematical Research Center, Peking University, Beijing, 100087, P.R.China} \address{Yuqiao Li, Department of Mathematics, Hefei University of Technology, Hefei, 230009, P.R.China} \address{Jun Sun, School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, P. R. of China} \email{[email protected], [email protected], [email protected]} \thanks{2020 Mathematics Subject Classification. 51F99,31E05} \begin{document} \begin{abstract} This paper investigates the volume-preserving harmonic mean curvature flow in asymptotically Schwarzschild spaces. We demonstrate the long-time existence and exponential convergence of this flow with a coordinate sphere of large radius serving as the initial surface in the asymptotically flat end, which eventually converges to a constant harmonic mean curvature surface. We also establish that these surfaces form a foliation of the space outside a large ball. Finally, we utilize this foliation to define the center of mass, proving that it agrees with the center of mass defined by the ADM formulation of the initial data set. \end{abstract} \maketitle \section{Introduction} \allowdisplaybreaks In the description of isolated gravitating systems in general relativity, a spacelike timeslice has the structure of a complete Riemannian three-manifold with an asymptotically flat end. Thus, in this paper, we are concerned with the so-called asymptotically Schwarzschild space.
Precisely, a complete Riemannian manifold $(N, \bar{g})$ is said to be an asymptotically Schwarzschild space, if $(N, \bar{g})$ is a 3-manifold where $N$ is diffeomorphic to $\mathbb{R}^3\backslash B_1(0)$ and the metric $\bar{g}$ is asymptotically flat, namely, \begin{equation}\label{metr} \bar{g}_{\alpha\beta}=\left(1+\frac{m}{2r}\right)^4\delta_{\alpha\beta}+P_{\alpha\beta}, \end{equation} where $r$ is the Euclidean distance, $m>0$ is a constant representing the ADM mass and $P_{\alpha\beta}$ satisfies \[ |\partial^lP_{\alpha\beta}|\leq C_{l+1}r^{-l-2}, \quad 0\leq l\leq5, \] with $\partial$ denoting partial derivatives with respect to the Euclidean metric $\delta$. For notational convenience, in the sequel we set $C_0=\max(1, m, C_1, C_2, C_3, C_4, C_5, C_6)$. The existence of foliations by special surfaces in an asymptotically Schwarzschild space is a significant problem in general relativity. The foliation offers an intrinsic geometric structure near infinity and provides a definition of the center of mass associated with an isolated physical system. It defines a natural coordinate system near infinity which is extensively used in spacetime, see \cite{CK}. In their seminal paper, Huisken-Yau (\cite{Huisken1996DefinitionOC}) constructed a foliation by stable constant mean curvature (CMC) surfaces in an asymptotically Schwarzschild space with $m>0$. These surfaces arise from the volume preserving mean curvature flow introduced by Huisken (\cite{Huisken1987TheVP}). They also proved that the foliation is unique outside a large compact set depending on the mean curvature and defined the geometric center of mass based on their foliation. The uniqueness was further extended by Qing-Tian (\cite{QingTian}), who showed that the foliation is actually unique outside a fixed compact set, which we refer to as global uniqueness.
In \cite{Corvino}, Corvino-Wu proved that the geometric center of mass of the foliation is the ADM center of mass if the metric is conformally flat at infinity. The conformal flatness condition was later removed by Huang in \cite{Huang09}. Ye provided a different proof of the existence of CMC surfaces in \cite{Ye}. Metzger \cite{Metzger} later generalized Huisken-Yau's result to asymptotically flat manifolds with a slightly weaker asymptotic condition, namely, the metric $\bar{g}$ in \eqref{metr} satisfies $ |\partial^lP_{\alpha\beta}|\leq C_{l+1}r^{-l-1-\epsilon}, \quad 0\leq l\leq2$ for $\epsilon\geq0$. However, the metric there has to be assumed rotationally symmetric, while the ADM center of mass can be defined for asymptotically flat manifolds satisfying the Regge-Teitelboim (RT) condition, which can be interpreted as an asymptotic symmetry condition. Huang relaxed the symmetry assumption and proved that the CMC foliation exists in the exterior region of an asymptotically flat manifold satisfying the RT condition in \cite{Huang}. Nerz extended the existence and uniqueness of the CMC foliation to 3-dimensional asymptotically flat manifolds without asymptotic symmetry in \cite{Nerz}. Furthermore, the corresponding result was established by Eichmair-Metzger for dimension greater than three in \cite{Eich-Met}. Finally, we also notice that the global uniqueness of the CMC foliation was characterized by Chodosh-Eichmair in \cite{Cho-Eich}. The CMC foliation is not the only foliation utilized in mathematical general relativity. For instance, Lamm-Metzger-Schulze established the existence of a foliation of the asymptotically Schwarzschild space with positive mass by surfaces which are critical points of the Willmore functional subject to an area constraint in \cite{Lamm}.
The geometric center of mass of the foliation by the area constrained Willmore surfaces was also shown to agree with the ADM center of mass if the scalar curvature is asymptotically vanishing \cite{Willmore}. In this paper, we will establish a novel foliation by surfaces of constant harmonic mean curvature in an asymptotically Schwarzschild space with $m>0$. These surfaces arise from the harmonic mean curvature flow (HMCF), which has been investigated in the Euclidean space \cite{MR2287150}\cite{MR2564404}. For non-Euclidean spaces, Xu explored the unnormalized harmonic mean curvature flow in his thesis (\cite{Xuguoyi}). More general mixed volume preserving curvature flows are investigated in Euclidean space in \cite{McCoy2005MixedVP} and in hyperbolic space in \cite{Wang-xia}. In contrast to these works, we will examine the volume preserving harmonic mean curvature flow in asymptotically Schwarzschild spaces as a generalization of Huisken-Yau's result \cite{Huisken1996DefinitionOC}. To be precise, we first introduce some fundamental notions. Let $S_\sigma(0)$ be the coordinate sphere of radius $\sigma$ centered at the origin in $(N,\bar{g})$ given by a map $$ \phi_0^\sigma:S^2\to N,\quad\phi_0^\sigma(S^2)=S_\sigma(0). $$ We aim to find a one-parameter family of maps $\phi_t^\sigma=\phi^\sigma(\cdot,t)$ solving the initial value problem \bel{flow} \begin{cases} \frac{d}{dt}\phi^{\sigma}(p,t)&=(f-F)\nu(p,t),\quad t\geq0,\:p\in S^{2},\\ \phi^{\sigma}(0)&=\phi_{0}^{\sigma}, \end{cases} \qe where $F(p, t)=\frac{H^2-|A|^2}{2H}$ and $\nu(p, t)$ represent the harmonic mean curvature and the outward unit normal vector of the surface $\Sigma_t=\phi_t^\sigma(S^2)$ respectively. Here $|\Sigma_t|$ denotes the area of $\Sigma_t$, and $f(t)=\frac{\int_{\Sigma_t}Fd\mu_t}{|\Sigma_t|}$ is the average of the harmonic mean curvature. The flow described by equation \eqref{flow} preserves the volume of the region enclosed by $\Sigma_t$ and $S_{\frac{m}{2}}(0)$.
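Writing the principal curvatures as $\lambda_1,\lambda_2$, so that $H=\lambda_1+\lambda_2$ and $|A|^2=\lambda_1^2+\lambda_2^2$, the speed $F$ equals $\left(\lambda_1^{-1}+\lambda_2^{-1}\right)^{-1}$. This, together with the identity $F^{kl}h_{mk}h_{ml}=2F^2$ used below, can be checked symbolically; the following sympy sketch is an illustration only, not part of the argument:

```python
# F = (H^2 - |A|^2)/(2H) equals lam1*lam2/(lam1+lam2) = 1/(1/lam1 + 1/lam2),
# and F^11 lam1^2 + F^22 lam2^2 = 2 F^2, where F^ii = dF/dlam_i.
from sympy import symbols, simplify, diff

l1, l2 = symbols('lambda1 lambda2', positive=True)
H = l1 + l2                    # mean curvature (sum of principal curvatures)
A2 = l1**2 + l2**2             # |A|^2
F = (H**2 - A2) / (2*H)        # harmonic mean curvature as defined in (flow)
assert simplify(F - l1*l2/(l1 + l2)) == 0
assert simplify(F - 1/(1/l1 + 1/l2)) == 0
# derivatives of F with respect to the principal curvatures
F11, F22 = diff(F, l1), diff(F, l2)
assert simplify(F11*l1**2 + F22*l2**2 - 2*F**2) == 0
```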
We designate the flow \eqref{flow} as the {\em volume preserving harmonic mean curvature flow}. It should be noted that, owing to the additional curvature terms, there is no monotonic quantity associated with this flow. The first result of this paper establishes the long time existence and exponential convergence of the flow \eqref{flow}, which converges to a constant harmonic mean curvature surface. \begin{thm}\label{main} If $(N, \bar{g})$ is the asymptotically Schwarzschild manifold with metric \eqref{metr}, then there is $\sigma_0$ depending only on $C_0$ such that for all $\sigma\geq\sigma_0$, the volume preserving harmonic mean curvature flow \eqref{flow} has a smooth solution for all times $t\geq0$. As $t\rightarrow\infty$, the surfaces $\Sigma_t$ converge exponentially fast to a surface $\Sigma_{\sigma}$ of constant harmonic mean curvature $f_{\sigma}$. \end{thm} Having established the existence of the constant harmonic mean curvature surfaces, we can further prove that they form a foliation of the asymptotically Schwarzschild manifold outside a large ball. For $\sigma\geq1$ and $B_1, B_2, B_3$ nonnegative numbers, we now define a set $\mathcal{B}_\sigma(B_1,B_2,B_3)$ of nearly round surfaces in $(N,\bar{g})$ by setting \[ \mathcal{B}_\sigma:=\set{M\subset N \big{|} \sigma-B_1\leq r\leq\sigma+B_1, |\mathring{A}|\leq B_2\sigma^{-3}, |\n\mathring{A}|\leq B_3\sigma^{-4}}, \] where $\mathring{A}$ is the traceless part of the second fundamental form. \begin{thm}\label{main2} There is $\sigma_0$ depending only on $C_0$ and $c=c(n)$ such that for all $\sigma\geq\sigma_0$, the constant harmonic mean curvature surfaces $\Sigma_{\sigma}$ constructed in Theorem \ref{main} form a proper foliation of $N\backslash B_{\sigma_0}(0)$. \end{thm} From (1) of Lemma \ref{lem1.7}, we have $|F-f|\leq c\sigma^{-2}$, where $c$ depends on $C_0, B_2, B_3$. In fact, we have a better decay estimate for $|F-f|$.
\begin{lem}\label{sig3} Suppose $\Sigma_t$ is a solution of \eqref{flow} contained in $\mathcal{B}_{\sigma}(B_1, B_2, B_3)$. Then, for all $t\geq0$, \[ |F-f|\leq c\sigma^{-3}, \] where $c$ depends on $C_0, B_1, B_2, B_3$. \end{lem} \begin{proof} Since $F=\frac{H}{2}-\frac{|A|^{2}}{2 H}$, we can calculate directly that \[ \nabla F=\frac{\nabla H}{2}-\frac{\nabla |A|^{2}}{2H}+\frac{\vert A\vert^{2}}{2H^{2}} \nabla H=\frac{1}{2}\left(1+\frac{\vert A\vert^{2}}{H^{2}}\right) \nabla H-\frac{\vert A\vert \nabla|A|}{H}. \] By Lemma \ref{lem1.4}, we know that \[ |\nabla H|^{2} \leq 8|\nabla\mathring{A}|^{2}+8\omega^{2} \leq c\sigma^{-8}, \] \[ |\nabla |A||^{2} \leq|\nabla A|^{2} \leq 5|\nabla\mathring{A}|^{2}+4\omega^{2} \leq c\sigma^{-8}. \] Then, \[ |\n F|^2\leq c\sigma^{-8}. \] Therefore, \[ |F-f|\leq \mathrm{diam}(\Sigma_t)\max_{\Sigma_t}|\n F|\leq c\sigma^{-3}, \] where $c$ depends on $C_0, B_1, B_2, B_3$. Here $\mathrm{diam}(\Sigma_t)$ is the intrinsic diameter, which by the first variation formula is bounded by the Willmore energy. \end{proof} Next, we show the surfaces $\Sigma_t$ converge exponentially to a constant harmonic mean curvature surface $\Sigma_{\sigma}$. From the Appendix, we have \begin{equation}\label{pf} \pd tF=\mathcal{L}F-(f-F)(F^{kl}h_{mk}h_{lm}-F^{kl}\bar{R}_{3k3l}). \end{equation} Then, in view of the fact that $\int_{\Sigma_t}(f-F)d\mu_t=0$, we obtain \begin{align*} &\frac{d}{dt}\int_{\Sigma_t}(f-F)^2d\mu_{t}\\ =&\int_{\Sigma_t}[-2(f-F)\mathcal{L}F+2(f-F)^2(F^{kl}h_{mk}h_{ml}-F^{kl}\bar{R}_{3k3l})+H(f-F)^3]d\mu_{t}.
\end{align*} By Propositions \ref{2.2} and \ref{3.4}, we can calculate \begin{align}\label{f2} &F^{kl}h_{mk}h_{ml}=F^{11}\lambda_1^2+F^{22}\lambda_2^2=2F^2 \\ =& 2\left( \frac{H}{4}-\frac{|\mathring{A}|^2}{2H} \right)^2 =\frac{1}{2\sigma^2}-\frac{2m}{\sigma^3}+O(\sigma^{-4}),\notag \end{align} and \begin{align} &F^{kl}\bar{R}_{3k3l}=F^{11}\bar{R}_{3131}+F^{22}\bar{R}_{3232}\notag\\ =& \frac{\lambda_2^2-\lambda_1^2}{(\lambda_1+\lambda_2)^2}\bar{R}_{3131}-\frac{\lambda_1^2}{(\lambda_1+\lambda_2)^2}\bar{R}ic(\nu, \nu)\notag\\ =&\frac{m}{2\sigma^3}+O(\sigma^{-4}).\label{r3i3j} \end{align} Using Lemma \ref{mu1}, we compute \begin{align*} &\frac{d}{dt}\int_{\Sigma_t}(f-F)^2d\mu_{t}\\ \leq & -\left( \frac{1}{\sigma^2}-\frac{2m}{\sigma^3}-c\sigma^{-4} \right)\int_{\Sigma_{t}}(F-f)^{2}d\mu_t\\ & +2\left(\f1{2\sigma^{2}}-\f {2m}{\sigma^3}-\frac{m}{2\sigma^{3}}+\f c{\sigma^4}\right)\int_{\Sigma_{t}}(F-f)^{2}d\mu_t +\int_{\Sigma_{t}}H(f-F)^{3}d\mu_t \\ \leq &-\left( \frac{1}{\sigma^2}-\frac{2m}{\sigma^3}-c\sigma^{-4} \right)\int_{\Sigma_{t}}(F-f)^{2}d\mu_t+\left( \frac{1}{\sigma^2}-\frac{5m}{\sigma^3}+c\sigma^{-4} \right)\int_{\Sigma_{t}}(F-f)^{2}d\mu_t\\ & +\int_{\Sigma_{t}}H(f-F)^{3}d\mu_t \\ \leq& \left(-\frac{3m}{\sigma^{3}}+c\sigma^{-4}\right)\int_{\Sigma_t}(F-f)^{2}d\mu_t-\int_{\Sigma_t}H(F-f)^3d\mu_t. \end{align*} By Lemma \ref{sig3}, we have $|H(F-f)|\leq c\sigma^{-4}$. Therefore, we obtain \begin{equation}\label{exp} \frac{d}{dt}\int_{\Sigma_t}(f-F)^2d\mu_{t}\leq (-\frac{3m}{\sigma^{3}}+c\sigma^{-4})\int_{\Sigma_t}(F-f)^{2}d\mu_t, \end{equation} which implies that \begin{equation}\label{expo} \int_{\Sigma_t}(f-F)^2d\mu_{t}\leq e^{-\frac{2mt}{\sigma^3}}\int_{\Sigma_0}(f-F)^2d\mu_{0}, \end{equation} for $\sigma\geq\sigma_0$. Exponential convergence follows by standard interpolation inequalities, completing the proof of Theorem \ref{main}.
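The passage from \eqref{exp} to \eqref{expo} is a Gr\"onwall-type argument: for $\sigma_0$ large enough, $-3m\sigma^{-3}+c\sigma^{-4}\leq-2m\sigma^{-3}$. A toy numerical illustration of this decay mechanism (with assumed values $m=1$, $\sigma=10$, not the actual geometric quantities):

```python
# Gronwall step behind (expo): if (d/dt) Q <= -a Q with a = 2m/sigma^3 > 0,
# then Q(t) <= exp(-a t) Q(0).  Forward-Euler run with assumed toy values.
import math

m, sigma = 1.0, 10.0
a = 2.0 * m / sigma**3          # decay rate 2m / sigma^3
Q0, dt = 5.0, 0.01              # Q stands in for the integral of (f - F)^2
Q, t = Q0, 0.0
for _ in range(200_000):        # integrate up to t = 2000
    Q += dt * (-a * Q)          # dQ/dt = -a Q (forward Euler)
    t += dt
    # Euler iterates satisfy (1 - a*dt)^n <= exp(-a*n*dt), so the bound holds
    assert Q <= math.exp(-a * t) * Q0 + 1e-12
```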
\hfill $\square$ \vspace{.2in} \section{The Foliation} In this section, we prove the existence of the foliation by constant harmonic mean curvature surfaces. We begin by providing a sufficient condition that guarantees that a family of surfaces forms a foliation. \begin{lem}\label{fol} Let $\Sigma_\sigma=\phi^\sigma(\mathbb{S}^2)\hookrightarrow N$ be a family of surfaces parametrized by $\sigma\in[\sigma_0,\infty)$. Define the lapse function by $u=\<\pd\sigma\phi^\sigma,\nu\>$, where $\nu$ is the unit outer normal to $\Sigma_\sigma$. If, furthermore, $u>c>0$ for some constant $c$, then this family of surfaces will foliate the ambient space in $N\setminus B_{2\sigma_0}$. \end{lem} \begin{proof} We will prove that the union of this family of surfaces passes through every point of $N\backslash B_{2\sigma_0}$. In other words, for any $x\in N\setminus B_{2\sigma_0}$, there exists some $\sigma>\sigma_0$ such that $x\in \Sigma_{\sigma}$. It is important to note that, in general, the radial exponential map need not be a global diffeomorphism. Thus, rather than proving a global diffeomorphism, we will prove that there exists a positive constant $\delta_0$ such that for any $i\in\mathbb{N}$, the map defined by \[ F_i:[i\delta_0,(i+1)\delta_0]\times\pa B_{2\sigma_0+i\delta_0}\longrightarrow N\setminus B_{2\sigma_0} \] \[ (t,p)\longmapsto \exp_p^N(t\f{\pa \phi^\sigma}{\pa \sigma}) \] is a local diffeomorphism. Indeed, we can take \[ \delta_0=\inf_{p}\norm{(\exp_p^N)^{-1}}=\inf_{p,X_p\in(\exp_p^N)^{-1}(W_{p,i})}\norm{X_p}, \] where $W_{p,i}=\mathrm{Img}(F_i)$ is the tubular neighbourhood of the surface $\pa B_{2\sigma_0+i\delta_0}$. \begin{claim} $F_i$ is a local diffeomorphism, and the constant $\delta_0$ must be positive. \end{claim} \begin{proof}[Proof of claim] The first statement can be seen as an application of the inverse function theorem.
Hence we calculate the differential of the map $F_i$: for any fixed $(t_0,p_0)$ we have \begin{align*} dF_i|_{(0,p_0)}(t,v) & =\f d{ds}|_{s=0}\left[\exp_{\gamma(s)}((t+s)\f{\pa \phi^\sigma}{\pa \sigma})\right]\\ & =(d\exp_p)_{t\f{\pa \phi^\sigma}{\pa \sigma}}(\f{\pa \phi^\sigma}{\pa \sigma})+U(0,t), \end{align*} where $\gamma(s)$ is a curve starting at $p$ with $\dot{\gamma}(0)=v_p$ and $U(0,t)$ is the variational field along the curve $\xi(t)=\exp_p(t\f{\pa \phi^\sigma}{\pa \sigma})$. We then find that at $s=0$, $U(0,0)=\dot{\gamma}(0)=v$, thus \[ dF_i|_{(0,p_0)}(0,v)=v_p+\f{\pa \phi^\sigma}{\pa \sigma}. \] It follows that $dF_i$ is nonsingular, so $F_i$ is a local diffeomorphism. The inverse function theorem then gives some positive constant $\delta_i$ such that \[ G_i:W_i\longrightarrow [-\delta_i,\delta_i]\times\pa B_{2\sigma_0+\delta_{i-1}} \] is a diffeomorphism. We now prove that there exists some positive constant $\delta_0$ such that $2\delta_i\geq\delta_0$. Fix some $p\in\pa B_{2\sigma_0+\delta_i}$, let $l_i$ be the length of the curve $\xi(t)=\exp_p(t\f{\pa\phi^\sigma}{\pa\sigma})$ and $\theta_i$ the angle between $\f{\pa\phi^\sigma}{\pa\sigma}$ and $\nu$ at $p$, then we have \[ \delta_i\geq \mathrm{dist}(q_i,\pa B_{2\sigma_0+\delta_i})\geq \f12l_i\cos\theta_i\geq \f12c. \] Here $q_i=\exp_p(l_i\f{\pa\phi^\sigma}{\pa\sigma})$ is the end point of $\xi(t)$. Here $l_i$ is bounded below due to the positivity of the injectivity radius. \end{proof} Once the claim is established, we can then glue all the local diffeomorphisms together. It is noted that possibly the gluing part of $F_i$ and $F_{i+1}$ would be non-smooth, but this issue can be addressed by slightly modifying the maps. This completes the proof. \end{proof} Consider the smooth operator $\mathscr{F}: C^3(S^2, N)\rightarrow C^1(S^2)$ which assigns to each embedding $\phi: S^2\rightarrow N$ the harmonic mean curvature $\mathscr{F}(\phi)$ of the surface $\Sigma=\phi(S^2)$.
Given a variation vector field $V$ on a constant harmonic mean curvature surface $\Sigma$, a direct computation shows that the first variation of the operator $\mathscr{F}$ at $\Sigma$ in the direction $V$ is given by \[ d\mathscr{F}(\phi)\cdot V=-\left(\mathcal{L}+F^{ij}h_{jk}h_{ik}-F^{ij}\bar{R}_{3i3j}\right)\<V,\nu\>, \] where $\phi:S^2\rightarrow{N}$ is the embedding of the constant harmonic mean curvature surface. From \eqref{f2}, we know $F^{ij}h_{jk}h_{ik}=2F^2$. Define the linearized harmonic mean curvature operator $L$ on $\Sigma$ by \[ L=-\left(\mathcal{L}+2F^2-F^{ij}\bar{R}_{3i3j}\right). \] Since the operator $L$ is not self-adjoint, many useful tools from functional analysis and PDE theory cannot be applied to it directly. To proceed, we consider the adjoint operator $L^*$ of $L$. By direct computation, we see that \begin{equation}\label{e-adjoint-L} L^*u=Lu-2F^{ij}_{,i}u_j-F^{ij}_{,ij}u. \end{equation} Then the operator \begin{eqnarray}\label{e-S} Su&:=&\frac{1}{2}(L+L^*)u=Lu-F^{ij}_{,i}u_j-\frac{1}{2}F^{ij}_{,ij}u\nonumber\\ &=& -\left(\mathcal{L}u+2F^2u-F^{ij}\bar{R}_{3i3j}u+F^{ij}_{,i}u_j+\frac{1}{2}F^{ij}_{,ij}u\right)\nonumber\\ &=& -(F^{ij}u_j)_{,i}-\left(2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}\right)u \end{eqnarray} is a self-adjoint operator. Denote \[\mu_0:=\inf \left\{\int_{\Sigma}uSud\mu: ||u||_{L^2}=1, \int_{\Sigma}ud\mu=0\right\}.\] Then we have the following lower bound for $\mu_0$: \begin{lem}\label{strc} Let $\Sigma_{\sigma}$ be the constant harmonic mean curvature hypersurface constructed in Theorem \ref{main}. Then for $\sigma\geq\sigma_0$, we have \[ \mu_0\geq \frac{3m}{2}\sigma^{-3}-c\sigma^{-4}. \] \end{lem} \begin{proof} We denote $\Sigma_{\sigma}$ by $\Sigma$ for any $\sigma\geq\sigma_0$. By \eqref{f2} and \eqref{r3i3j}, we see that \begin{equation}\label{fr} 2F^2-F^{ij}\bar{R}_{3i3j}=\frac{1}{2\sigma^2}-\frac{5m}{2\sigma^3}+O(\sigma^{-4}).
\end{equation} Using Lemma \ref{lem1.9} and Proposition \ref{3.10}, we also have \begin{equation}\label{e-F-ij} F^{ij}_{,ij}=O(\sigma^{-4}). \end{equation} For any $u$ with $||u||_{L^2}=1, \int_{\Sigma}ud\mu=0$, by Lemma \ref{mu1}, we have \begin{equation*} \begin{split} \int_{\Sigma}uSud\mu=& \int_{\Sigma}\left[-u\mathcal{L}u-(2F^2-F^{ij}\bar{R}_{3i3j})u^2-\left(F^{ij}_{,i}u_ju+\frac{1}{2}F^{ij}_{,ij}u^2\right)\right]d\mu\\ \geq& \int_{\Sigma}\left[-u\mathcal{L}u-(2F^2-F^{ij}\bar{R}_{3i3j})u^2\right]d\mu-c\sigma^{-4}\\ \geq& \left(\frac{1}{2\sigma^2}-\frac{m}{\sigma^3}-c\sigma^{-4}\right)-\left(\frac{1}{2\sigma^2}-\frac{5m}{2\sigma^3}+c\sigma^{-4}\right)\\ =&\frac{3m}{2\sigma^3}-c\sigma^{-4}. \end{split} \end{equation*} \end{proof} \begin{lem}\label{inv} Let $\Sigma_{\sigma}$ be the constant harmonic mean curvature hypersurface constructed in Theorem \ref{main}. For $\sigma\geq\sigma_0$, the operator $S$ is invertible, and $|S^{-1}|\leq cm^{-1}\sigma^{3}$. \end{lem} \begin{proof} We denote $\Sigma_{\sigma}$ by $\Sigma$ for any $\sigma\geq\sigma_0$. Let $\eta_0$ be the lowest eigenvalue of $S$ without constraints, i.e. \begin{align*} \eta_0=\inf_{\{u: ||u||_{L^2}=1\}}\int_{\Sigma}uSud\mu=\inf_{\{u: ||u||_{L^2}=1\}}\int_{\Sigma}uLud\mu. \end{align*} By Lemma \ref{mu1}, for $u$ with $||u||_{L^2}=1$ and $\int_{\Sigma}ud\mu=0$, there holds, for $\sigma\geq\sigma_0$, \begin{align*} &\int_{\Sigma}u\mathcal{L}ud\mu\\ \leq & -\left(\frac{1}{4}-c\sigma^{-2}\right)\int_{\Sigma}{|\n u|^2d\mu}+c\sigma^{-4}\\ \leq& c\sigma^{-4}. \end{align*} By \eqref{fr}, we have \[ \eta_0\geq-\frac{1}{2\sigma^2}+\frac{5m}{2\sigma^3}-c\sigma^{-4}. \] On the other hand, taking $u$ to be a constant yields the reverse inequality. Hence, \begin{equation*} \eta_0=-\frac{1}{2\sigma^2}+\frac{5m}{2\sigma^3}+O(\sigma^{-4}). \end{equation*} Let $h_0$ be an eigenfunction corresponding to $\eta_0$, \[ Sh_0=\eta_0h_0, \] and let $\bar{h}_0=|\Sigma|^{-1}\int_{\Sigma}h_0d\mu$ be the mean value of $h_0$.
Multiplying the above identity by $(h_0-\bar{h}_0)$ and integrating over $\Sigma$, we obtain \begin{align*} & -\int_{\Sigma}(h_0-\bar{h}_0)\mathcal{L}(h_0-\bar{h}_0)d\mu\\ =& \int_{\Sigma}\left(\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}\right)(h_0-\bar{h}_0)^2d\mu\\ &+\int_{\Sigma}\left(\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}\right)\bar{h}_0(h_0-\bar{h}_0)d\mu\\ &+\int_{\Sigma}F^{ij}_{,i}h_{0,j}(h_0-\bar{h}_0)d\mu\\ =& \int_{\Sigma}\left(\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}\right)(h_0-\bar{h}_0)^2d\mu\\ &+\int_{\Sigma}\left(\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}\right)\bar{h}_0(h_0-\bar{h}_0)d\mu. \end{align*} From Lemma \ref{mu1}, the left-hand side is bounded from below by \[ -\int_{\Sigma}(h_0-\bar{h}_0)\mathcal{L}(h_0-\bar{h}_0)d\mu\geq \left( \frac{1}{2\sigma^2}-\frac{m}{\sigma^3} -c\sigma^{-4} \right)\int_{\Sigma}(h_0-\bar{h}_0)^2d\mu. \] Together with \[ \eta_0+2F^2-F^{ij}\bar{R}_{3i3j}=O(\sigma^{-4}) \] and \eqref{e-F-ij}, we obtain, for $\sigma\geq\sigma_0$, \begin{equation}\label{h0} ||h_0-\bar{h}_0||_{L^2}\leq c\sigma^{-2}|\bar{h}_0||\Sigma|^{\frac{1}{2}}. \end{equation} In particular, $\bar{h}_0\not=0$. Let $\eta_1$ be the next eigenvalue of $S$ with corresponding eigenfunction $h_1$, and let $\bar{h}_1=|\Sigma|^{-1}\int_{\Sigma}h_1d\mu$ be the mean value of $h_1$. Note that \[ 0=\int_{\Sigma}h_0h_1d\mu=\int_{\Sigma}(h_0-\bar{h}_0)(h_1-\bar{h}_1)d\mu+\int_{\Sigma}\bar{h}_0h_1d\mu. \] By the H\"older inequality, we get \[ \left|\int_{\Sigma}h_1d\mu\right|\leq |\bar{h}_0|^{-1}||h_0-\bar{h}_0||_{L^2}||h_1-\bar{h}_1||_{L^2}. \] Then, by \eqref{h0}, there holds \begin{equation}\label{h1} |\bar{h}_1|\leq c\sigma^{-2}|\Sigma|^{-\frac{1}{2}}||h_1-\bar{h}_1||_{L^2}.
\end{equation} Multiplying $Sh_1=\eta_1 h_1$ by $(h_1-\bar{h}_1)$ and integrating over $\Sigma$, we obtain by Lemma \ref{strc} and \eqref{h1}, \begin{align*} &\left(\frac{3m}{2\sigma^3}-c\sigma^{-4}\right)\int_{\Sigma}(h_1-\bar{h}_1)^2d\mu\\ \leq& \int_{\Sigma}(h_1-\bar{h}_1)S(h_1-\bar{h}_1)d\mu\\ =& \eta_1\int_{\Sigma}(h_1-\bar{h}_1)^2d\mu+\bar{h}_1\int_{\Sigma}(h_1-\bar{h}_1)\left(2F^2-F^{kl}\bar{R}_{3k3l}+\frac{1}{2}F^{ij}_{,ij}\right)d\mu\\ \leq &\eta_1\int_{\Sigma}(h_1-\bar{h}_1)^2d\mu +c\sigma^{-4}|\bar{h}_1|\int_{\Sigma}|h_1-\bar{h}_1|d\mu\\ \leq &\eta_1\int_{\Sigma}(h_1-\bar{h}_1)^2d\mu+c\sigma^{-6}\int_{\Sigma}(h_1-\bar{h}_1)^2d\mu. \end{align*} Therefore, for $\sigma\geq\sigma_0$, \[ \eta_1\geq cm\sigma^{-3}. \] Hence $S$ is invertible, and $|S^{-1}|\leq cm^{-1}\sigma^3$. \end{proof} We are now ready to use Lemma \ref{fol} to show that $\{\Sigma_{\sigma}\}$ forms a foliation. \begin{thm}\label{local} There is $\sigma_0$ depending only on $C_0$ such that for all $\sigma\geq\sigma_0$, the constant harmonic mean curvature surfaces $\Sigma_{\sigma}$ constructed in Theorem \ref{main} constitute a proper foliation of $N\backslash B_{\sigma_0}(0)$. \end{thm} \begin{proof} Let $\Sigma_{\sigma}$ be the family of constant harmonic mean curvature surfaces constructed in Theorem \ref{main} with $\sigma\geq\sigma_0$, and let $\phi^{\sigma}: S^2\rightarrow N$ be the embedding for each $\sigma$. For any $\sigma_2>\sigma_1\geq\sigma_0$ such that $\Sigma_{\sigma_i}\in \mathcal{B}_{\sigma}(B_1, B_2, B_3)$, $i=1,2$, the surface $\Sigma_{\sigma_2}$ can be represented by a normal variation over $\Sigma_{\sigma_1}$ of the form \[ \phi^{\sigma_2}=\phi^{\sigma_1}+u(p)\nu(p), \quad p\in S^2. \] We will show that $u$ has a sign; in particular, $u$ cannot be zero. In the following, we denote $\Sigma_{\sigma}$ by $\Sigma$. Let $V=\phi^{\sigma_2}-\phi^{\sigma_1}$.
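Since the variation between the two leaves is purely normal, the linearization computed earlier acts on the scalar $u$ itself. A minimal sketch of the first-order term in the expansion below, under the identification $V=u\nu$, so that $\<V,\nu\>=u$:

```latex
% First-order term, assuming V = u\nu is purely normal, so \<V,\nu\> = u:
\frac{d}{ds}\Big|_{s=0}\mathscr{F}\big(\phi^{\sigma_1}+sV\big)
  = d\mathscr{F}(\phi^{\sigma_1})\cdot V
  = -\left(\mathcal{L}+2F^2-F^{ij}\bar{R}_{3i3j}\right)\<V,\nu\>
  = Lu.
```

This is why the remainder in the Taylor expansion is the only source of nonlinearity in $u$.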
By Taylor's theorem, there is some $\sigma\in(\sigma_1, \sigma_2)$ such that \begin{eqnarray} \mathscr{F}(\phi^{\sigma_2})&=&\mathscr{F}(\phi^{\sigma_1})+Lu+\frac{1}{2}d^2\mathscr{F}(\phi^{\sigma})(V, V)\nonumber\\ &=&\mathscr{F}(\phi^{\sigma_1})+Su+(L-S)u+\frac{1}{2}d^2\mathscr{F}(\phi^{\sigma})(V, V), \end{eqnarray} where $\mathscr{F}(\phi^{\sigma_2})$ and $\mathscr{F}(\phi^{\sigma_1})$ are constants. Since the second variation of the harmonic mean curvature operator involves the second fundamental form and higher derivatives of the metric, we obtain \begin{equation}\label{ddf} ||d^2\mathscr{F}(V,V)||\leq c\sigma^{-3}||V||^2\leq c\sigma^{-3}||u||_{C^{2}}. \end{equation} We also have, from the definition of $S$, that \begin{equation}\label{e-S-L} |(L-S)u|=\frac{|(L-L^*)u|}{2}=\left|F^{ij}_{,i}u_j+\frac{1}{2}F^{ij}_{,ij}u\right|\leq c\sigma^{-3}||u||_{C^{1}}. \end{equation} Hence, $u$ satisfies the elliptic equation \begin{equation}\label{lu} Su=\mathscr{F}(\phi^{\sigma_2})-\mathscr{F}(\phi^{\sigma_1})+E_1, \end{equation} with the error term satisfying $|E_1|\leq c\sigma^{-3}||u||_{C^{2}}$. We employ the approach of Theorem 3.9 in \cite{Huang}. Integrating \eqref{lu} over $S^2$ gives \[ \mathscr{F}(\phi^{\sigma_2})-\mathscr{F}(\phi^{\sigma_1})= \frac{1}{|S^2|}\int_{S^2}Sud\mu-\frac{1}{|S^2|}\int_{S^2}E_1d\mu.
\] By the definition of $S$, we get \begin{equation*} \begin{split} &\frac{1}{|S^2|}\int_{S^2}Sud\mu\\ =& -\frac{1}{|S^2|}\int_{S^2}\left(\mathcal{L}u+2F^2u-F^{ij}\bar{R}_{3i3j}u+F^{ij}_{,i}u_j+\frac{1}{2}F^{ij}_{,ij}u\right)d\mu\\ =& -\frac{1}{|S^2|}\int_{S^2}\left(\mathcal{L}u+2F^2u-F^{ij}\bar{R}_{3i3j}u-\frac{1}{2}F^{ij}_{,ij}u\right)d\mu\\ =& \frac{1}{|S^2|}\int_{S^2}\left[-\frac{1}{4} \Delta u-\frac{1}{H^{2}}\left(\mathring{h}^{ij}-H{g}^{ij}\right)\mathring{h}_{jk} u_{ki}\right]d\mu-\left(\frac{1}{2\sigma^2}-\frac{5m}{2\sigma^3}+O(\sigma^{-4})\right)\bar{u}\\ \leq & c\sigma^{-3}||u||_{C^1}-\left(\frac{1}{2\sigma^2}-\frac{5m}{2\sigma^3}+O(\sigma^{-4})\right)\bar{u}, \end{split} \end{equation*} where $\bar{u}=\frac{1}{|S^2|}\int_{S^2}ud\mu$ is the mean value of $u$, and the last inequality is obtained by integration by parts. It follows that \begin{equation}\label{fphi} \mathscr{F}(\phi^{\sigma_2})-\mathscr{F}(\phi^{\sigma_1})=-\frac{1}{2\sigma^2}\bar{u}+E_2, \end{equation} where \[ |E_2|\leq c\sigma^{-3}||u||_{C^1}+c\sigma^{-3}||u||_{C^2}. \] Now we decompose $u=h_0+u_0$, where $h_0$ is the eigenfunction corresponding to the lowest eigenvalue of $S$ and $\int_{S^2}h_0u_0d\mu=0$. By the De Giorgi--Nash--Moser iteration and \eqref{h0}, there holds the estimate \[ \sup|h_0-\bar{h}_0|\leq c\sigma^{-2}|\bar{h}_0|. \] We note that $(h_0-\bar{h}_0)$ satisfies the equation \[ S(h_0-\bar{h}_0)=\eta_0(h_0-\bar{h}_0)+\left(\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}\right)\bar{h}_0. \] Hence we obtain \begin{equation}\label{hal} \begin{split} &||h_0-\bar{h}_0||_{C^{0, \alpha}}\\ \leq& c\left(\sigma^{-\alpha}\sup_{S^2}|h_0-\bar{h}_0|+|\bar{h}_0|\norm{\eta_0+2F^2-F^{ij}\bar{R}_{3i3j}+\frac{1}{2}F^{ij}_{,ij}}_{L^2}\right)\\ \leq & c\sigma^{-\alpha-2}|\bar{h}_0|, \end{split} \end{equation} for some $\alpha\in(0, 1)$.
Since $u_0=u-h_0$, \eqref{lu} and \eqref{fphi} imply \begin{equation*} \begin{split} Su_0=& Su-Sh_0=-\frac{1}{2\sigma^2}\bar{u}+E_2+E_1-\eta_0h_0\\ =& \frac{1}{2\sigma^2}(h_0-\bar{h}_0)-\frac{1}{2\sigma^2}\bar{u}_0+E_3+E_2+E_1, \end{split} \end{equation*} where $|E_3|\leq c\sigma^{-3}|\bar{h}_0|$. Because $S$ has no kernel, the Schauder estimate and \eqref{hal} give \begin{equation}\label{u0c2} \begin{split} \norm{u_0}_{C^{2, \alpha}}\leq & c(\sigma^{-2}\norm{h_0-\bar{h}_0}_{C^{0, \alpha}}+\sigma^{-2}|\bar{u}_0|+\sigma^{-3}\norm{u}_{C^{2, \alpha}}+\sigma^{-3}|\bar{h}_0|)\\ \leq & c(\sigma^{-\alpha-4}|\bar{h}_0|+\sigma^{-2}|\bar{u}_0|+\sigma^{-3}(\norm{u_0}_{C^{2, \alpha}}+\norm{h_0}_{C^{2, \alpha}})+\sigma^{-3}|\bar{h}_0|). \end{split} \end{equation} Since $h_0$ satisfies $Sh_0=\eta_0h_0$ and $\eta_0=O(\sigma^{-2})$, inequality \eqref{hal} leads to \[ \norm{h_0}_{C^{2, \alpha}}\leq c\sigma^{-2}\norm{h_0}_{C^{0, \alpha}}\leq c\sigma^{-2}(\norm{h_0-\bar{h}_0}_{C^{0, \alpha}}+|\bar{h}_0|)\leq c\sigma^{-2}|\bar{h}_0|. \] Inserting this into \eqref{u0c2}, we arrive at \begin{equation}\label{u0} \norm{u_0}_{C^{2, \alpha}}\leq c(\sigma^{-3}|\bar{h}_0|+\sigma^{-2}|\bar{u}_0|), \end{equation} for $\sigma\geq\sigma_0$. Since \[0=\int_{S^2}h_0u_0d\mu=\int_{S^2}u_0(h_0-\bar{h}_0)+\bar{h}_0\int_{S^2}u_0,\] we deduce \[ |\bar{u}_0|\leq |\bar{h}_0|^{-1}\sup |h_0-\bar{h}_0|\sup |u_0|\leq c\sigma^{-2}\norm{u_0}_{C^{2, \alpha}}. \] Combining this with \eqref{u0}, we arrive at \[ \norm{u_0}_{C^{2, \alpha}}\leq c\sigma^{-3}|\bar{h}_0|. \] Therefore, \[ |u-\bar{h}_0|\leq |u_0|+|h_0-\bar{h}_0|\leq c\sigma^{-2}|\bar{h}_0|,\] which shows that $u$ has a sign, since $\bar{h}_0$ has a sign. Finally, an application of Lemma \ref{fol} completes the proof. \end{proof} We now define the ``center of gravity'' of the constant harmonic mean curvature foliation as it approaches infinity.
Following the approach of Corvino--Wu in \cite{Corvino}, we will show that the center of mass defined by the constant harmonic mean curvature foliation is equivalent to the ADM center of mass. We first present some definitions and necessary propositions. The second derivative of the second fundamental form can be estimated in the same way as in Proposition \ref{3.10}; see the appendix in \cite{Corvino}. Together with Lemma \ref{lem1.4}, we can derive the following lemma. \begin{lem}\label{2of} Let $\Sigma_{\sigma}$ be the constant harmonic mean curvature surface constructed in Theorem \ref{main}. Then, for all $t\geq0$ and $\sigma\geq\sigma_0$ large enough, \[ |\n^2F|\leq c\sigma^{-5}, \] where $c$ depends on $C_0, B_1, B_2, B_3$. \end{lem} \begin{defn} Let $(M,g)$ be an asymptotically Schwarzschild manifold with metric $g_{ij}=\left(1+\frac{m}{2|x|} \right)^4\delta_{ij}+O_2(|x|^{-2})$. The ADM center of mass is defined by \[ C^k=\frac{1}{16m\pi}\lim_{R\to\infty}\int_{|x| = R}\left[\sum_{i}x^{k}\left(g_{ij,i}-g_{ii,j}\right)\nu_{e}^{j}d\mu_{e}-\sum_{i}\left(g_{ik}\nu_{e}^{i}-g_{ii}\nu_{e}^{k}\right)d\mu_{e}\right],\] where $\nu_e$ is the outward unit normal with respect to the Euclidean metric $\delta$ and $k=1, 2, 3$. \end{defn} Consider an exterior region in an asymptotically flat manifold, and assume that there is an asymptotically flat chart in which $g_{ij}=\left(1+\frac{m}{2|x|}+\frac{B_{k}x^{k}}{|x|^{3}}\right)^{4}\delta_{ij}+O_{5}\left(|x|^{-3}\right)$ with $m>0$. By direct calculation, the ADM center of mass is \begin{equation}\label{center} C^k=\frac{2B_k}{m}. \end{equation} \begin{defn} Let $\Sigma_{\sigma}$ be the family of surfaces constructed in Theorem \ref{local} and $\phi^{\sigma}$ be the position vector. The center of gravity of $\Sigma_{\sigma}$ is defined as \[ C_{HM}=\lim_{\sigma\rightarrow\infty}\frac{\int_{\Sigma_{\sigma}}\phi^{\sigma} d\mu_e}{\int_{\Sigma_{\sigma}}d\mu_e}.
\] \end{defn} We note that translating the coordinates by a vector $a^k$ shifts both the ADM center of mass and the center $C_{HM}$ by $a^k$. Thus, we can adjust the coordinates so that the ADM center of mass vector is zero, which by \eqref{center} results in $B_k=0$ in this chart. To prove Theorem \ref{thm-center}, which states that $C_{HM}$ is equivalent to the ADM center of mass, it suffices to consider the following case. \begin{thm} Consider an exterior region in an asymptotically flat manifold, and assume that there is an asymptotically flat chart in which $g_{ij}=\left(1+\frac{m}{2|x|}\right)^{4}\delta_{ij}+O_{5}\left(|x|^{-3}\right)$ with $m>0$. Then $C_{HM}=0$. \end{thm} \begin{proof} By the Sobolev embedding inequality ((3.1) in \cite{Corvino}), for $r>2$ and all $t\geq0$, there is a constant $c(r)$ such that \begin{equation}\label{inter} ||F-f||_{C^0}\leq c\sigma^{-\frac{2}{q}}(\sigma||\n F||_{L^q}+||F-f||_{L^r}^{\frac{1}{q}}||F-f||_{L^2}^{1-\frac{1}{q}}), \end{equation} where $q=\frac{r}{2}+1$ and the norms are calculated on $\Sigma_t$, which is a solution to \eqref{flow} contained in $\mathcal{B}_{\sigma}(B_1, B_2, B_3)$. At $t=0$, we are working in a coordinate system in which $g_{ij}-g^S_{ij}=O_5(|x|^{-3})$, with $g^S_{ij}$ being the Schwarzschild metric. Thus, the difference between the second fundamental forms is \[ h_{ij}-h^S_{ij}=O(|x|^{-4}), \] which implies that $|F-f|=O(\sigma^{-4})$. From \eqref{expo}, for all $t\geq0$, \[ \int_{\Sigma_t}(F-f)^2d\mu_t\leq C\sigma^{-6}e^{-\frac{2mt}{\sigma^3}}.
\] To derive a pointwise bound from \eqref{inter}, we utilize the interpolation inequality from \cite{Hamilton} to estimate, for $\frac{1}{p}+\frac{1}{2}=\frac{2}{q}$, \begin{equation*} \begin{split} \sigma^{1-\frac{2}{q}}||\n F||_{L^q} \leq & q\sigma^{1-\frac{2}{q}}||\n^2F||_{L^p}^{\frac{1}{2}}||F-f||_{L^2}^{\frac{1}{2}} \\ \leq & C \sigma^{1-\frac{2}{q}}(\sigma^{-5}\sigma^{\frac{2}{p}})^{\frac{1}{2}}\cdot(\sigma^{-6}e^{-\frac{2mt}{\sigma^3}})^{\frac{1}{4}}\\ = & C\sigma^{-\frac{7}{2}}e^{-\frac{mt}{2\sigma^3}}, \end{split} \end{equation*} and \begin{equation*} \begin{split} &\sigma^{-\frac{2}{q}}||F-f||_{L^r}^{\frac{1}{q}}||F-f||_{L^2}^{1-\frac{1}{q}}\\ \leq & C\sigma^{-\frac{2}{q}}\cdot\left(\sigma^{-3}\sigma^{\frac{2}{r}}\right)^{\frac{1}{q}}\cdot\left(\sigma^{-6}e^{-2mt\sigma^{-3}}\right)^{\frac{1}{2}\left(1-\frac{1}{q}\right)}\\ = & C\sigma^{-3-\frac{2}{q}\left(1-\frac{1}{r}\right)}\cdot\left(e^{-2mt\sigma^{-3}}\right)^{\frac{1}{2}\left(1-\frac{1}{q}\right)}\\ \leq & C\sigma^{-\frac{7}{2}}e^{-\frac{mt}{2\sigma^3}}. \end{split} \end{equation*} In the last inequality, we have chosen $r\in (2, 4)$. Therefore, combining with \eqref{inter}, we get on $\Sigma_t$, for all $t\geq0$ and $\sigma\geq\sigma_0$, \[ |F-f|\leq C\sigma^{-\frac{7}{2}}e^{-\frac{mt}{2\sigma^3}}. \] By integrating the flow equation $\frac{\partial}{\partial t}\phi^{\sigma}=(f-F)\nu$ and $\partial_t(d\mu_g)=H(f - F)d\mu_g$, we obtain \begin{align*} |\phi^{\sigma}_{\infty}-\phi^{\sigma}_0|\leq & C\sigma^{-\frac{1}{2}},\\ |(\phi^{\sigma}_{\infty})^*(d\mu_g)-(\phi^{\sigma}_0)^*(d\mu_g)|\leq & C\sigma^{-\frac{3}{2}}|(\phi^{\sigma}_0)^*(d\mu_g)|.
\end{align*} Finally, we compute the center of mass: \[ C_{HM}=\lim_{\sigma\rightarrow\infty}\frac{\int_{\Sigma_{\sigma}}\phi^{\sigma} d\mu_e}{\int_{\Sigma_{\sigma}}d\mu_e} =\lim_{\sigma\rightarrow\infty}\frac{\int_{S^2}\phi^{\sigma}_{\infty}d\mu^{\sigma}}{\int_{S^2}d\mu^{\sigma}}=\lim_{\sigma\rightarrow\infty}O(\sigma^{-\frac{1}{2}})=0,\] where $d\mu^{\sigma}=(\phi^{\sigma}_{\infty})^*(d\mu_e)$. \end{proof} \vspace{.2in} \section{Appendix} In this appendix, we give the details of the first variation of the $F$-operator. We begin with the variation of the $H$-operator with respect to an arbitrary variation. Let $V_t$ be the variation vector field, that is, \[ \pd t\phi_t=V_t. \] As before, we let $e_i(t)=d\phi_t(e_i)$, so that $\pa_ te_i(t)=\bar{\n}_{e_i}V$. Recall the variation formulas \begin{enumerate} \item[i)] $$ \begin{aligned} \frac{d}{dt}g_{ij}=\<\bar{\n}_{e_i}V,e_j\>+\<e_i,\bar{\n}_{e_j}V\>,\\ \f d{dt}g^{ij}=-g^{ik}g^{jl}(\<\bar{\n}_{e_k}V,e_l\>+\<e_k,\bar{\n}_{e_l}V\>),\\ \frac{d}{dt}\nu=-\<\nu,\bar{\n}_{e_i}V\>e_i. \end{aligned} $$ \item[ii)] \begin{align*} \pd th_{ij} & = \pd t\<\bar{\n}_{e_i}\nu,e_j\>=\<\bar{\n}_{e_i}\pa_t\nu+\bar{R}(V_t,e_i)\nu,e_j\>+\<\bar{\n}_{e_i}\nu,\pa_te_j\>\\ & = -\<\nu,\bar{\n}_{e_k}V\>\<\bar{\n}_{e_i}e_k,e_j\>-\<\nu,\bar{\n}_{e_i}\bar{\n}_{e_j}V\>+\bar{R}(V,e_i,\nu,e_j)\\ & = -\<\nu,\bar{\n}^2V(e_i,e_j)\>+\bar{R}(V,e_i,\nu,e_j).
\end{align*} \end{enumerate} Now, the variation of the mean curvature is given by \begin{align*} \pd tH=\pd t(g^{ij}h_{ij}) & = \pd tg^{ij}h_{ij}+g^{ij}\pd th_{ij}\\ & = -h^{kl}\left(\<\bar{\n}_{e_k}V,e_l\>+\<e_k,\bar{\n}_{e_l}V\>\right)-\bar{R}ic(V,\nu)-\<\nu,\Delta V\>. \end{align*} Note that \begin{align*} \Delta\<\nu,V\> & = \<\nu,\bar{\n}_{e_i}\bar{\n}_{e_i}V\>+\<V,\bar{\n}_{e_i}\bar{\n}_{e_i}\nu\>+2\<\bar{\n}_{e_i}V,\bar{\n}_{e_i}\nu\>\\ & = \<\nu,\bar{\n}_{e_i}\bar{\n}_{e_i}V\>+\<V,\bar{\n}_{e_i}(h_{ij}e_j)\>+2\<\bar{\n}_{e_i}V,\bar{\n}_{e_i}\nu\>\\ & = \<\nu,\bar{\n}_{e_i}\bar{\n}_{e_i}V\>+2\<\bar{\n}_{e_i}V,\bar{\n}_{e_i}\nu\>+h_{ij,i}\<e_j,V\>+h_{ij}\<V,\bar{\n}_{e_i}e_j\>\\ & = \<\nu,\bar{\n}_{e_i}\bar{\n}_{e_i}V\>+2\<\bar{\n}_{e_i}V,\bar{\n}_{e_i}\nu\>+\<H_je_j,V\>-\bar{R}_{3iji}\<e_j,V\>-\abs{A}^2\<V,\nu\>, \end{align*} and \[ \bar{R}ic(V,\nu)=\bar{R}ic(\nu,\nu)\<V,\nu\>+\bar{R}ic(e_j,V)\<e_j,V\>. \] We thus get \[ \pd tH=-\Delta\<V,\nu\>-\left(\bar{R}ic(\nu,\nu)+\abs{A}^2\right)\<V,\nu\>+\<\n H,V\>. \] Now we proceed to the computation of the variation of the $F$-operator. Note first that \begin{align*} F^{ij}\<V,\nu\>_{ij} & =F^{ij}\left(\<\nu,\bar{\n}_{e_i}\bar{\n}_{e_j}V\>+\<V,\bar{\n}_{e_i}\bar{\n}_{e_j}\nu\>+2\<\bar{\n}_{e_i}V,\bar{\n}_{e_j}\nu\>\right)\\ & = F^{ij}\<\nu,V_{ij}\>+F^{ij}h_{jk,i}\<V,e_k\>-F^{ij}h_{jk}h_{ik}\<V,\nu\>+2F^{ij}\<\bar{\n}_{e_i}V,\bar{\n}_{e_j}\nu\>\\ & = F^{ij}\<\nu,V_{ij}\>+\left(\<V,\n F\>-F^{ij,pq}h_{pq,k}h_{ij}\<V,e_k\>-F^{ij}\bar{R}_{3jki}\<V,e_k\>\right)\\ & -F^{ij}h_{jk}h_{ik}\<V,\nu\>+2F^{ij}\<\bar{\n}_{e_i}V,\bar{\n}_{e_j}\nu\>. \end{align*} We hence obtain \begin{align*} \pd tF =& F^{ij}\pd t h_{ij}+F^{ij}\pd tg^{ik}h_{kj}\\ =& -F^{ij}\<\nu,V_{ij}\>+F^{ij}\bar{R}(V,e_i,\nu,e_j)-F^{ij}h_{kj}(g^{ip}g^{kl}(\<\bar{\n}_{e_p}V,e_l\>+\<e_p,\bar{\n}_{e_l}V\>))\\ =& -\mathcal{L}\<V,\nu\>-F^{ij}\left(h_{jk}h_{ik}-\bar{R}(\nu,e_i,\nu,e_j)\right)\<V,\nu\>\\ &+\<V,\n F\>-F^{ij,pq}h_{pq,k}h_{ij}\<V,e_k\>.
\end{align*} We now conclude that the first variation of the $F$-operator at the constant harmonic mean curvature surface is given by \[ d\mathscr{F}(\phi)\cdot V=-\left(\mathcal{L}+F^{ij}h_{jk}h_{ik}-F^{ij}\bar{R}(\nu,e_i,\nu,e_j)\right)\<V,\nu\>, \] where $\phi:S^2\rightarrow{N}$ is the embedding of the constant harmonic mean curvature surface. Define the operator \[ L=-\left(\mathcal{L}+F^{ij}h_{jk}h_{ik}-F^{ij}\bar{R}(\nu,e_i,\nu,e_j)\right). \] This operator is interpreted as the first variation of the harmonic mean curvature in the normal direction. \vspace{.2in} \textbf{Conflict of Interest} The authors have no conflict of interest to declare. \vspace{.1in} \textbf{Data availability} The authors declare no datasets were generated or analysed during the current study. \bibliography{refofmix} \bibliographystyle{alpha} \end{document}
2412.17000v1
http://arxiv.org/abs/2412.17000v1
Singular vectors, characters, and composition series for the N=1 BMS superalgebra
\documentclass[12pt, reqno]{amsart} \usepackage{amssymb,dsfont} \usepackage{eucal} \usepackage{amsmath} \usepackage{amscd} \usepackage[dvips]{color} \usepackage{multicol} \usepackage[all]{xy} \usepackage{graphicx} \usepackage{color} \usepackage{colordvi} \usepackage{xspace} \usepackage{txfonts} \usepackage{lscape} \usepackage{tikz} \numberwithin{equation}{section} \usepackage[shortlabels]{enumitem} \usepackage{ifpdf} \ifpdf \usepackage[colorlinks,final,backref=page,hyperindex]{hyperref} \else \usepackage[colorlinks,final,backref=page,hyperindex]{hyperref} \usepackage{tikz} \usepackage[active]{srcltx} \usepackage{array} \usepackage{tabularx} \usepackage{colortbl} \renewcommand\baselinestretch{1} \topmargin -.8cm \textheight 22.8cm \oddsidemargin 0cm \evensidemargin -0cm \textwidth 16.3cm \makeatletter \theoremstyle{plain} \numberwithin{equation}{section} \newtheorem{theo}{Theorem}[section] \newtheorem{pro}[theo]{Proposition} \newtheorem{lem}[theo]{Lemma} \newtheorem{cor}[theo]{Corollary} \newtheorem{rem}[theo]{Remark} \theoremstyle{definition} \newtheorem{defi}[theo]{Definition} \newtheorem{exa}[theo]{Example} \def\Vir{\hbox{\rm Vir}} \def\vep{\varepsilon} \def\vn{\varepsilon} \def\ot{\otimes} \def\om{\omega} \def\q{\boldsymbol{q}} \def\bv{\boldsymbol{v}} \def\bc{\boldsymbol{c}} \def\lan{\langle} \def\ran{\rangle} \def\al{\alpha} \def\th{\theta} \def\be{\beta} \def\De{\Delta} \def\ga{\gamma} \def\Ga{\Gamma} \def\Om{\Omega} \def\si{\sigma} \def\tu{\tilde{u}} \def\ep{\epsilon} \def\de{\delta} \def\pa{\partial} \def\La{\Lambda} \def\la{\lambda} \def\bi{\binom} \def\lra{\longrightarrow} \def\lmto{\longmapsto} \def\ra{\rightarrow} \def\ol{\overline} \def\e{{\bf e}} \def\t{{\bf t}} \def\a{{\bf a}} \def\t{{\bf{t}}} \def\i{{\bf{i}}} \def\j{{\bf{j}}} \def\k{{\bf k}} \def\c{{\bf c}} \def\s{\star} \def\wt{{\rm wt}} \newcommand{\N}{{\mathbf N}} \newcommand{\C}{{\mathcal C}} \newcommand{\D}{{\mathcal D}} \newcommand{\B}{{\mathcal B}} \newcommand{\F}{{\mathcal F}} 
\newcommand{\Z}{{\mathcal Z}} \newcommand{\K}{{\mathcal K}} \newcommand{\Hei}{{\mathcal H}} \newcommand{\A}{{\mathcal A}} \def\bN{{\mathbb Z_+}} \def\bZ{{\mathbb Z}} \def\bQ{{\mathbb Q}} \def\bR{{\mathbb R}} \def\bT{{\mathbb T}} \def\bF{{\mathbb F}} \def\bK{{\mathbb K}} \def\bC{{\mathbb C}} \def\sA{{\mathscr A}} \def\P{{\mathcal P}} \def\sB{{\mathscr B}} \def\C{{\mathscr C}} \def\sL{{\mathscr L}} \def\mh{\mathfrak{h}} \def\b{\mathfrak{b}} \def\n{\mathfrak{n}} \def\H{{\mathscr H}} \def\Res{\mbox{\rm Res}} \def\Diag{\mbox{\rm Diag}} \def\rank{\mbox{\rm rank}} \def\Ob{\mbox{\rm Ob}} \def\ad{\mbox{\rm ad}} \def\Hom{\mbox{\rm Hom}} \def\op{\mbox{\scriptsize op}} \def\ext{\mbox{\rm Ext}\,} \def\Ker{\mbox{\rm Ker}\,} \def\udim{{\mathbf {\dim}\,}} \def\mo{\mbox{\rm mod}\,} \def\mx{\mbox{\rm max}} \def\tr{\mbox{\rm tr}\,} \def\rad{\mbox{\rm rad}\,} \def\top{\mbox{\rm top}\,} \def\rep{\mbox{\rm Rep}\,} \def\Supp{\mbox{\rm Supp}\,} \def\End{\text{\rm End}} \def\Ind{\text{\rm Ind}} \def\Im{\text{\rm Im}} \def\id{\text{\rm id}} \def\wt{\text{\rm wt}} \def\e{\mbox{\rm e}} \def\uf{\mbox{\rm f}} \def\f{{\mathbf {\uf}}} \def\bcL{\bar{\cL}} \def\st{\stackrel} \def\1{{\bf 1}} \def\v{\mathbbm{v}} \renewcommand\baselinestretch{1.2} \def\NO{\mbox{\,$\circ\atop\circ$}\,} \def\bms{{\mathfrak{bms}}} \begin{document} \title[The N=1 BMS superalgebra] {Singular vectors, characters, and composition series for the N=1 BMS superalgebra} \author[Jiang]{Wei Jiang} \address{Department of Mathematics, Changshu Institute of Technology, Changshu 215500, China}\email{[email protected]} \author[Liu]{Dong Liu} \address{Department of Mathematics, Huzhou University, Zhejiang Huzhou, 313000, China}\email{[email protected]} \author[Pei]{Yufeng Pei} \address{Department of Mathematics, Huzhou University, Zhejiang Huzhou, 313000, China}\email{[email protected]} \author[Zhao]{Kaiming Zhao} \address{Department of Mathematics, Wilfrid Laurier University, Waterloo, ON, Canada N2L3C5, and School of Mathematical 
Science, Hebei Normal (Teachers) University, Shijiazhuang, Hebei, 050024 P. R. China}\email{[email protected]} \subjclass[2020]{17B65,17B68,17B69,17B70 (primary); 17B10,81R10 (secondary)} \keywords{N=1 BMS superalgebra, Verma module, singular vector, character, composition series} \thanks{} \begin{abstract} This paper investigates the structure of Verma modules over the N=1 BMS superalgebra. We provide a detailed classification of singular vectors, establish necessary and sufficient conditions for the existence of subsingular vectors, uncover the structure of maximal submodules, present the composition series of Verma modules, and derive character formulas for irreducible highest weight modules. \end{abstract} \maketitle \tableofcontents \section{Introduction} Infinite-dimensional symmetries play an important role in physics. In particular, Virasoro-type symmetries have significant applications in two-dimensional field theory, string theory, gravity, and other areas. The representation theory of the Virasoro algebra has also been widely and deeply studied \cite{FF, IK, MY,RW}. In recent years, two-dimensional non-relativistic conformal symmetries have gained importance in establishing holographic dualities beyond the AdS/CFT correspondence \cite{Ba0,BT,SZ}. The Bondi-Metzner-Sachs algebra, commonly known as the BMS algebra, generates the asymptotic symmetry group of three-dimensional Einstein gravity \cite{BM,BH,Sa}. Although the BMS algebra extends the Virasoro algebra, its representation theory differs fundamentally. Studies of special highest weight modules for the BMS algebra have explored various aspects: determinant formulas \cite{BGMM}, character formulas \cite{O}, free field realizations \cite{BJMN}, and modular invariance \cite{BSZ, BNSZ}. However, a complete understanding of highest weight modules is still lacking.
In the mathematical literature, the BMS algebra is known as the Lie algebra $W(2,2)$, an infinite-dimensional Lie algebra first introduced in \cite{ZD} to study the classification of moonshine-type vertex operator algebras generated by two weight-2 vectors. The authors of \cite{ZD} examined the vacuum modules of the $W(2,2)$ algebra (with a VOA structure) and established necessary and sufficient conditions for these modules to be irreducible. Their key insight was the creation of a total ordering on the PBW bases, which facilitated computations of determinant formulas (see also \cite{JPZ}). It is worth mentioning that the $W(2,2)$ algebra has also been discovered and studied in several different mathematical fields, such as \cite{FK, HSSU, Wi}. The irreducibility conditions for Verma modules over the $W(2,2)$ algebra are also given in \cite{Wi}. In \cite{JP}, it was proposed that maximal submodules of reducible Verma modules are generated by singular vectors. However, Radobolja \cite{R} pointed out that this is true only for typical highest weights. For atypical weights, the maximal submodules are generated by both a singular vector and a subsingular vector. He also derived a character formula for irreducible highest weight modules, established necessary conditions for the existence of subsingular vectors, and conjectured that these necessary conditions are also sufficient. Later, \cite{JZ} provided additional support for this conjecture. Adamovic et al. used the free field realization of the twisted Heisenberg-Virasoro algebra at level zero \cite{ACKP,Bi}, along with screening operators constructed in lattice vertex algebras, to derive an expression for singular vectors of Verma modules over the $W(2,2)$ algebra under certain conditions in \cite{AR1,AR2}. To our knowledge, explicit formulas for singular and subsingular vectors, as well as the composition series for general Verma modules over the $W(2,2)$ algebra, remained open prior to the present paper.
The {N}=1 BMS superalgebra, introduced in \cite{BDMT}, is the minimal supersymmetric extension of the BMS$_3$ algebra with central extensions. It incorporates a set of spin-$\frac{3}{2}$ generators $ Q_n $ within the BMS$_3$ algebra framework. Although this superalgebra is a subalgebra of the truncated Neveu-Schwarz superalgebra, its representation theory differs significantly from that of the {N}=1 Neveu-Schwarz superalgebra \cite{BMRW,IK0,IK1,MR}. In the recent papers \cite{LPXZ, DGL}, the authors classified simple smooth modules, including Whittaker modules, over the N=1 BMS superalgebra under mild conditions and provided necessary and sufficient conditions for the irreducibility of Verma modules and Fock modules. A further detailed analysis of the structure of reducible Verma modules over the {N}=1 BMS superalgebra is carried out in the present paper. As established in \cite{LPXZ}, the Verma module $V(c_L,c_M,h_L,h_M)$ over the N=1 BMS superalgebra $\frak g$ is irreducible if and only if $2h_M+\frac{p^2-1}{12}c_M\ne 0$ for any positive integer $p$. If further $c_M=0$, then $h_M=0$, resulting in the degeneration of the irreducible highest weight module into an irreducible highest weight module over the Virasoro algebra (refer to Lemma \ref{degenerated-case}). In this paper, we study the structure of the Verma module $V(c_L,c_M,h_L,h_M)$ over $\frak g$ under the complementary conditions that $$ c_M\ne 0\ \text{and}\ \ 2h_M+\frac{p^2-1}{12}c_M=0\ \text{for some}\ p\in\mathbb Z_+. $$ We classify all singular vectors of the Verma module $V(c_L,c_M,h_L,h_M)$ and provide explicit formulas. We also identify the necessary and sufficient conditions for the existence of subsingular vectors and list them all. Our first main result is as follows: \vskip 0.2cm \noindent {\bf Main Theorem 1.} (Theorems \ref{main1}, \ref{main2}, \ref{main3} below) {\it Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in\mathbb Z_+$ with $c_M\ne0$.
$ (1)$ All singular vectors in $V(c_L,c_M,h_L,h_M)$ are of the form ${\rm S}^i\1$ (when $p$ even) or ${\rm R}^i\1$ (when $p$ odd) for $ i\in \mathbb N$, where ${\rm S}$ and ${\rm R}$ are given in Proposition \ref{singular-S1} and Proposition \ref{singular-R11}, respectively. $(2)$ There exists a subsingular vector of $V(c_L,c_M,h_L,h_M)$ if and only if $h_L=h_{p,r}$ for some $r\in\mathbb Z_+$, where \begin{eqnarray*}\label{e3.37}\label{atypical} h_{p,r}=-\frac{p^2-1}{24}c_L+\frac{(41p+5)(p-1)}{48}+\frac{(1-r)p}{2}-\frac{1+(-1)^p}8p. \end{eqnarray*} In this case, ${\rm T}_{p, r}\1$ is the unique subsingular vector, up to a scalar multiple, where ${\rm T}_{p, r}$ are given in Theorem \ref{main3}. } \vskip 0.2cm By utilizing the information provided in Main Theorem 1 regarding singular and subsingular vectors, we can derive the character formulas for irreducible highest weight modules over $\mathfrak{g}$ as follows: \vskip 0.2cm \noindent {\bf Main Theorem 2.} (Theorems \ref{irreducibility}, \ref{irreducibility1} below) {\it Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in\mathbb Z_+$ with $c_M\ne0$. $ (1)$ If $V(c_L,c_M,h_L,h_M)$ is typical (i.e., $h_L\ne h_{p,r}$ for any $r\in\mathbb Z_+$), then the maximal submodule of $V(c_L,c_M,h_L,h_M)$ is generated by ${\rm S}\1$ (when $p$ is even), or by ${\rm R}\1$ (when $p$ is odd). Additionally, the character formula of the irreducible highest weight module $L(c_L,c_M,h_L,h_M)$ can be expressed as follows: $$ {\rm char}\, L(c_L,c_M,h_L,h_M)= q^{h_L}(1-q^{\frac{p}2})\left(1+\frac12(1+(-1)^p)q^{\frac p2}\right)\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. $$ $ (2)$ If $V(c_L,c_M,h_L,h_M)$ is atypical (i.e., $h_L=h_{p,r}$ for some $r\in\mathbb Z_+$), then the maximal submodule of $V(c_L,c_M,h_L,h_M)$ is generated by the subsingular vector ${\rm T}_{p,r}\1$. 
Additionally, the character formula of the irreducible highest weight module $L(c_L,c_M,h_L,h_M)$ can be expressed as follows: $$ {\rm char}\, L(c_L,c_M,h_L,h_M)= q^{h_{p,r}}(1-q^{\frac{p}2})\left(1+\frac12(1+(-1)^p)q^{\frac p2}\right)(1-q^{rp})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. $$} \vskip 0.2cm Following Main Theorems 1 and 2, we derive the composition series of the Verma modules as follows: \vskip 0.2cm \noindent {\bf Main Theorem 3.} (Theorems \ref{main4-1}, \ref{main4-2} below) {\it Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in\mathbb Z_+$ with $c_M\ne0$. $(1)$ If $V(c_L,c_M,h_L,h_M)$ is typical, then the Verma module $V(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules: \begin{eqnarray*} V(c_L,c_M,h_L,h_M)\supset\langle {\rm S}\1 \rangle \supset \langle {\rm S}^2\1 \rangle\supset\cdots\supset \langle {\rm S}^n\1 \rangle\supset \cdots, \text{ if $p$ is even}; \\ V(c_L,c_M,h_L,h_M)\supset \langle {\rm R}\1 \rangle\supset\langle {\rm R}^2\1 \rangle\supset\cdots\supset \langle {\rm R}^{n}\1 \rangle\supset \cdots, \text{ if $p$ is odd}. 
\end{eqnarray*} $(2)$ If $V(c_L,c_M,h_L,h_M)$ is atypical, then the Verma module $V(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules: $$\aligned\label{filtration-aS1} V(c_L,&c_M,h_L,h_M)=\langle {\rm S}^0\1 \rangle\supset\langle {\rm T}_{p, r}\1 \rangle \supset\langle {\rm S}\1 \rangle \supset \langle {\rm T}_{p, r-2}({\rm S}\1) \rangle\supset\cdots\nonumber\\ &\supset\langle {\rm S}^{[\frac{r-1}2]}\1 \rangle \supset\langle {\rm T}_{p, r-2[\frac{r-1}2]}({\rm S}^{[\frac{r-1}2]}\1) \rangle\supset\langle {\rm S}^{[\frac{r-1}2]+1}\1 \rangle\supset\langle {\rm S}^{[\frac{r-1}2]+2}\1 \rangle\supset \cdots, \text{ if $p$ is even};\\ V(c_L,&c_M,h_L,h_M)=\langle {\rm R}^0\1 \rangle\supset\langle {\rm T}_{p, r}\1 \rangle \supset\langle {\rm R}\1 \rangle \supset \langle {\rm T}_{p, r-1}{\rm R}\1 \rangle \supset \langle {\rm R}^2\1 \rangle \supset \langle {\rm T}_{p, r-2}{\rm R}^2\1 \rangle\supset\cdots\\ &\supset\langle {\rm R}^{r-1}\1 \rangle \supset\langle {\rm T}_{p, 1}{\rm R}^{r-1}\1 \rangle\supset\langle {\rm R}^{r}\1 \rangle\supset\langle {\rm R}^{r+1}\1 \rangle\supset \cdots, \text{ if $p$ is odd}. \endaligned $$ } \vskip 0.2cm As a byproduct, we also explicitly determine all singular vectors, subsingular vectors, and the composition series of Verma modules over the algebra \( W(2,2) \) (see Corollary \ref{main1-w22}, Corollary \ref{main2-w22}, and Corollary \ref{main3-w22} below). It is worth noting that subsingular vectors have been observed in studies of Verma modules for the N=1 Ramond algebra \cite{IK0}, the N=2 superconformal algebra \cite{DG}, and the N=1 Heisenberg-Virasoro superalgebra at level zero \cite{AJR}. Our main theorems reveal significant differences between the structure of Verma modules for the N=1 BMS superalgebra and those for the Virasoro algebra and N=1 Neveu-Schwarz algebra. For Verma modules over the Virasoro algebra, the maximal submodule is usually generated by two distinct weight vectors \cite{As,AF}.
In contrast, the maximal submodule of a Verma module \( V(c_L, c_M, h_{p,r}, h_M) \) can always be generated by a single weight vector. Additionally, some submodules of \( V(c_L, c_M, h_L, h_M) \) cannot be generated by singular vectors. The methods used to prove the main theorems differ from those in \cite{Bi, R} and also from those in \cite{AJR, AR1}. Motivated by \cite{JZ}, in the present paper we introduce key operators \( {\rm S} \), \( {\rm R} \), and \( {\rm T} \), derive their crucial properties, and reveal significant relationships among them. Our method of classifying singular and subsingular vectors differs from those used for the Virasoro, super-Virasoro, and $W(2,2)$ algebras \cite{As, JZ, R}. A significant advancement is the application of the total ordering on PBW bases, as defined in \cite{LPXZ}, to analyze the coefficients of the highest-order terms of the vectors $ {\rm S}\1, {\rm R}\1$ or ${\rm T}\1$ with respect to $L_{-p}$. This approach facilitates the recursive identification of all singular and subsingular vectors. Our future research will focus on the Fock modules of the N=1 BMS superalgebra as introduced in \cite{LPXZ}, with the aim of deepening our understanding of Verma modules. The paper is organized as follows. In Section 2, we briefly review the relevant results on representations of the N=1 BMS superalgebra. In Section 3, in order to determine the maximal submodule of the Verma module $V(c_L,c_M,h_L,h_M)$ over $\frak g$ when it is reducible, we investigate all singular vectors in $V(c_L,c_M,h_L,h_M)_n$ in both cases $n\in\mathbb Z_+$ and $n\in\frac{1}{2}+\mathbb N$. All singular vectors in $V(c_L,c_M,h_L,h_M)$ are in fact determined by a single element ${\rm S}$ (or ${\rm R}$) in $U(\frak{g}_-)$; see Theorems \ref{main1}, \ref{main2}. In Section 4, we study the quotient module of the Verma module by the submodule generated by all singular vectors determined in Section 3.
In particular, we find the necessary and sufficient conditions for the existence of a subsingular vector in $V(c_L,c_M,h_L,h_M)$, and determine all subsingular vectors. See Theorems \ref{necessity}, \ref{subsingular}. In Section 5, we give the maximal submodules of $V(c_L,c_M,h_L,h_M)$ (each of which is generated by a single weight vector) and the character formula for irreducible highest weight modules in both typical and atypical cases, see Theorems \ref{irreducibility}, \ref{irreducibility1}. We obtain the composition series (of infinite length) of Verma modules $V(c_L,c_M,h_L,h_M)$ in both typical and atypical cases, see Theorems \ref{main4-1}, \ref{main4-2}. Throughout this paper, $\mathbb C$, $\mathbb N$, $\mathbb Z_+$ and $\mathbb Z$ denote the sets of complex numbers, non-negative integers, positive integers, and integers, respectively. All vector spaces and algebras are over $\mathbb C$. For a Lie (super)algebra $L$, the universal enveloping algebra of $L$ will be denoted by $U(L)$. We consider a $\mathbb Z_2$-graded vector space $V = V_{\bar 0} \oplus V_{\bar 1}$, where an element $u\in V_{\bar 0}$ (respectively, $u\in V_{\bar 1}$) is called even (respectively, odd). We define $|u|=0$ if $u$ is even and $|u|=1$ if $u$ is odd. The elements in $V_{\bar 0}$ or $V_{\bar 1}$ are referred to as homogeneous, and whenever $|u|$ is used, it means that $u$ is homogeneous. \section{Preliminaries} In this section, we recall some notations and results related to the N=1 BMS superalgebra.
\subsection{The N=1 BMS superalgebra} \begin{defi}\cite{BDMT}\label{Def2.1} The {\bf N=1 BMS superalgebra} $$\frak g=\bigoplus_{n\in\mathbb{Z}}\mathbb{C} L_n\oplus\bigoplus_{n\in\mathbb{Z}}\mathbb{C} M_n\oplus\bigoplus_{n\in\mathbb{Z}+\frac{1}{2}}\mathbb{C} Q_n\oplus\mathbb{C} {\bf c}_L\oplus\mathbb{C} {\bf c}_M$$ is a Lie superalgebra, where $$ \frak g_{\bar0}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C} L_n\oplus\bigoplus_{n\in\mathbb{Z}}\mathbb{C} M_n\oplus\mathbb{C} {\bf c}_L\oplus\mathbb{C} {\bf c}_M,\quad \frak g_{\bar1}=\bigoplus_{r\in\mathbb{Z}+\frac12} \mathbb{C} Q_r, $$ with the following commutation relations: \begin{align*}\label{SBMS} & {[L_m, L_n]}=(m-n)L_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m){\bf c}_L,\nonumber \\ & {[L_m, M_n]}=(m-n)M_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m){\bf c}_M,\nonumber \\ & {[Q_r, Q_s]}=2M_{r+s}+{1\over3}\delta_{r+s, 0}\left(r^2-\frac14\right){\bf c}_M, \end{align*} \begin{align*} & {[L_m, Q_r]}=\left(\frac{m}{2}-r\right)Q_{m+r},\nonumber \\ & {[M_m,M_n]}=[M_n,Q_r]=0, \\ & [{\bf c}_L,\frak g]=[{\bf c}_M, \frak g]=0, \nonumber \end{align*} for any $m, n\in\mathbb{Z}, r, s\in\mathbb{Z}+\frac12$. \end{defi} Note that the even part $\mathfrak{g}_{\bar 0}$ corresponds to the BMS algebra $W(2,2)$. Additionally, the subalgebra ${{\mathfrak{vir}}} = \bigoplus_{n \in \mathbb{Z}} \mathbb{C} L_n \oplus \mathbb{C} \mathbf{c}_L$ represents the Virasoro algebra. The N=1 BMS superalgebra $\mathfrak{g}$ has a $\frac{1}{2}\mathbb{Z}$-grading by the eigenvalues of the adjoint action of $L_0$. 
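For instance, the $\frac{1}{2}\mathbb{Z}$-grading can be read off directly from the relations: $[L_0,L_n]=-nL_n$, $[L_0,M_n]=-nM_n$ and $[L_0,Q_r]=-rQ_r$, so $L_{-n}$, $M_{-n}$ and $Q_{-r}$ raise the $L_0$-eigenvalue by $n$ and $r$, respectively. As a routine consistency check of the central terms (recorded here for the reader's convenience), the super-Jacobi identity for the triple $(L_m, Q_r, Q_s)$ with $m+r+s=0$ requires
\begin{eqnarray*}
\frac{1}{6}(m^3-m){\bf c}_M=\frac{1}{3}\left(\frac{m}{2}-r\right)\left(s^2-\frac14\right){\bf c}_M+\frac{1}{3}\left(\frac{m}{2}-s\right)\left(r^2-\frac14\right){\bf c}_M,
\end{eqnarray*}
and indeed both sides equal $\frac{1}{6}\big((r+s)-(r+s)^3\big){\bf c}_M$ after substituting $m=-(r+s)$.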
It is clear that $\mathfrak{g}$ has the following triangular decomposition: \begin{eqnarray*} \mathfrak{g}={\mathfrak{g}}_{-}\oplus {\mathfrak{g}}_{0}\oplus {\mathfrak{g}}_{+}, \end{eqnarray*} where \begin{eqnarray*} &&{\mathfrak{g}}_{\pm}=\bigoplus_{n\in \mathbb{Z}_+}\bC L_{\pm n}\oplus \bigoplus_{n\in \mathbb{Z}_+}\bC M_{\pm n}\oplus \bigoplus_{r\in \frac{1}{2}+\mathbb{N}}\bC Q_{\pm r},\\ &&{\mathfrak{g}}_{0}=\bC L_0\oplus\bC M_0\oplus\bC {\bf c}_L\oplus \bC {\bf c}_M. \end{eqnarray*} \subsection{Verma modules} For $(c_L,c_M,h_L,h_M)\in\bC^4$, let $\bC$ be the module over ${\mathfrak{g}}_{0}\oplus{\mathfrak{g}}_{+}$ defined by \begin{eqnarray*} {\bf c}_L{ 1}=c_L{ 1},\quad {\bf c}_M{ 1}=c_M{ 1},\quad L_0{ 1}=h_L{ 1},\quad M_0{ 1}=h_M{ 1},\quad{\mathfrak{g}}_{+}1=0. \end{eqnarray*} The Verma module over ${\mathfrak{g}}$ is defined as follows \begin{eqnarray*} V(c_L,c_M,h_L,h_M)=U({\mathfrak{g}})\ot_{U({\mathfrak{g}}_{0}\oplus{\mathfrak{g}}_{+})}\bC\simeq U({\mathfrak{g}}_{-})\1, \end{eqnarray*} where $\1=1\ot 1$. It follows that $V(c_L,c_M,h_L,h_M)=\bigoplus_{n\in \mathbb{N}}V(c_L,c_M,h_L, h_M)_{n}$ and $U({\mathfrak{g}_-})=\bigoplus_{n\in \mathbb{Z}_+}U(\mathfrak{g}_-)_{n},$ where $$V(c_L,c_M,h_L,h_M)_{n} =\{v \in V(c_L,c_M,h_L,h_M)\,|\,L_0v =(h_L+n)v\} $$ and $$U(\mathfrak{g}_-)_{n} =\{x \in U(\mathfrak{g}_-)\,|\,[L_0, x]=nx\}. $$ Moreover, $ V(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)_{\bar{0}}\oplus V(c_L,c_M,h_L,h_M)_{\bar{1}}$ with $$\aligned V(c_L,c_M,h_L,h_M)_{\bar{0}}=&\bigoplus_{n\in\mathbb{N}}V(c_L,c_M,h_L,h_M)_{n},\\ V(c_L,c_M,h_L,h_M)_{\bar{1}}=&\bigoplus_{n\in \mathbb{N}}V(c_L,c_M,h_L,h_M)_{\frac{1}{2}+n}.\endaligned$$ It is clear that $V(c_L,c_M,h_L,h_M)$ has a unique maximal submodule $J(c_L,c_M,h_L,h_M)$ and the factor module $$ L(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)/J(c_L,c_M,h_L,h_M) $$ is an irreducible highest weight ${\mathfrak{g}}$-module. 
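To illustrate the action of $\mathfrak{g}$ on low levels of the Verma module, note that the relations in Definition \ref{Def2.1} give
\begin{eqnarray*}
Q_{\frac12}Q_{-\frac12}\1=[Q_{\frac12}, Q_{-\frac12}]\1=2M_0\1=2h_M\1,\qquad L_1M_{-1}\1=[L_1, M_{-1}]\1=2M_0\1=2h_M\1,
\end{eqnarray*}
while $L_1Q_{-\frac12}\1=Q_{\frac12}\1=0$ and $Q_{\frac12}M_{-1}\1=[Q_{\frac12}, M_{-1}]\1=0$. Hence the vectors $Q_{-\frac12}\1$ and $M_{-1}\1$ are annihilated by ${\mathfrak{g}}_{+}$ if and only if $h_M=0$, which matches the singular vectors $M_{-1}{\bf 1}$ and $Q_{-\frac12}{\bf 1}$ of Theorem \ref{cor3.3}$(1)$.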
Define $${\rm char}\, V(c_L,c_M,h_L,h_M)=q^{h_L}\sum_{i\in\frac12\mathbb N }{\rm dim}\, V(c_L,c_M,h_L,h_M)_iq^{i}.$$ An eigenvector $u$ in $V(c_L,c_M,h_L,h_M)$ with respect to $\mathfrak{g}_0$ is called a {\bf singular vector} if $\mathfrak{g}_{+} u=0$. Let $J'(c_L,c_M,h_L,h_M)$ be the submodule of $V(c_L,c_M,h_L,h_M)$ generated by all singular vectors. A weight vector $u'$ in $V(c_L,c_M,h_L,h_M)$ is called a {\bf subsingular vector} if $u'+J'(c_L,c_M,h_L,h_M)$ is a singular vector in $V(c_L,c_M,h_L,h_M)/J'(c_L,c_M,h_L,h_M)$. Recall that a partition of a positive integer $n$ is a finite non-increasing sequence of positive integers $\la=(\la_1,\la_2,\dots, \la_r)$ such that $n=\sum_{i=1}^r\la_i$. The positive integer $\la_i$ is called the $i$-th entry of the partition $\la$. We call $r$ the length of $\la$, denoted by $\ell(\la)$, and call the sum of $\la_i$'s the weight of $\la$, denoted by $|\la|$. Denote $\la-\frac12=\left(\la_1-\frac12,\la_2-\frac12,\dots, \la_r-\frac12\right)$ and $-\la=(-\la_1,-\la_2,\dots, -\la_r)$. The number of partitions of $n$ is given by the partition function ${\tt p}(n)$. Denote by $\mathcal P$ the set of all partitions (including the empty partition) and $\mathcal P(n)$ the set of all partitions with weight $n\in\mathbb Z_+$. A partition $\la=(\la_1,\la_2,\dots, \la_r)$ is called strict if $\la_1 >\la_2 >\dots >\la_r >0$. The set $\mathcal{SP}$ consists of all strict partitions (including the empty partition). Recall that the natural ordering on $\mathcal P$ and $\mathcal{SP}$ is defined as follows: \begin{eqnarray*} &&\la> \mu\iff |\la|> |\mu|, \text{ or } |\la|= |\mu|, \la_1=\mu_1,\dots, \la_k=\mu_k, \text{ and }\la_{k+1}>\mu_{k+1} \text{ for some }\ k\geq0;\\ &&\la=\mu\iff \la_i=\mu_i \quad\text{for all }\ i. 
\end{eqnarray*} According to the Poincar\'{e}-Birkhoff-Witt ($\mathrm{PBW}$) theorem, every vector $v$ of $V(c_L,c_M,h_L,h_M)$ can be uniquely written in the following form \begin{equation}\label{def2.1} v=\sum_{\lambda, \nu\in\mathcal P, \mu\in\mathcal{SP}}a_{\lambda, \mu, \nu}M_{-\la}Q_{-\mu+\frac12}L_{-\nu}{\bf 1}, \end{equation} where $a_{\lambda, \mu, \nu}\in\mathbb C$ and only finitely many of them are non-zero, and $$M_{-\la}:=M_{-\la_1}\cdots M_{-\la_r},\ Q_{-\mu+\frac12}:=Q_{-\mu_1+\frac12}\cdots Q_{-\mu_s+\frac12},\ L_{-\nu}:=L_{-\nu_1}\cdots L_{-\nu_t}.$$ For any $v\in V(c_L,c_M,h_L,h_M)$ as in \eqref{def2.1}, we denote by $\mathrm{supp}(v)$ the set of all $(\lambda, \mu, \nu)\in \mathcal P\times\mathcal{SP}\times \mathcal P$ such that $a_{\lambda, \mu, \nu}\neq0$. Next, we define \begin{eqnarray*} &&\mathcal{M}={\rm span}_{\mathbb C}\{M_i \mid i\in\mathbb Z\},\quad \mathcal{M}_-={\rm span}_{\mathbb C}\{M_{-i} \mid i\in\mathbb Z_+\},\\ &&\mathcal{Q}={\rm span}_{\mathbb C}\{Q_{-i+\frac12}\mid i\in \mathbb Z\},\ \mathcal{Q}_-={\rm span}_{\mathbb C}\{Q_{-i+\frac12}\mid i\in \mathbb Z_+\}. \end{eqnarray*} Note that $\mathcal{M}+\mathbb{C} {\bf c}_M$ and $ \mathcal{M}+\mathcal{Q}+\mathbb{C} {\bf c}_M$ are ideals of $\mathfrak{g}$. For $y=M_{-\la}Q_{-\mu+\frac12}L_{-\nu}$ or $y\1$, we define $$\ell(y)=\ell(y\1)=\ell(\lambda)+\ell(\mu)+\ell(\nu), \ {\rm deg}(y)=|\la|+|\mu-\frac12|+|\nu|.$$ For \eqref{def2.1}, we define $${\ell}_M(v):={\rm max}\{\ell(\lambda)\mid (\lambda, \mu, \nu)\in {\rm supp}(v)\}.$$ Similarly, we can define ${\ell}_Q(v)$, ${\ell}_L(v)$ and ${\rm deg}(v)$. For $n\in \frac{1}{2}\mathbb Z_+$, let $$ B_{n}=\{M_{-\la}Q_{-\mu+\frac12}L_{-\nu}{\bf 1}\mid |\la|+|\mu-\frac12|+|\nu|=n, \forall\ \la,\nu\in \mathcal P, \mu\in\mathcal{SP} \}. $$ Clearly, $B_{n}$ is a basis of $V(c_L,c_M,h_L,h_M)_n$. 
Then $$ |B_{n}|=\dim V(c_L,c_M,h_L,h_M)_{n}, $$ \begin{eqnarray*}\label{2.6} {\rm char}\, V(c_L,c_M,h_L,h_M)=q^{h_L}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. \end{eqnarray*} Now we can define a total ordering $\succ$ on $B_{n}$: $M_{-\la}Q_{-\mu+\frac{1}{2}}L_{-\nu}{\bf 1} \succ M_{-\la'}Q_{-\mu'+\frac{1}{2}}L_{-\nu'}{\bf 1}$ if and only if one of the following conditions is satisfied: \begin{itemize} \item[(i)]$|\nu|>|\nu'|;$ \item[(ii)]$|\nu|=|\nu'|\ \mbox{and}\ \ell(\nu)>\ell(\nu')$; \item[(iii)]$|\nu|=|\nu'|, \ell(\nu)=\ell(\nu')\ \mbox{and}\ \nu>\nu'$; \item[(iv)]$\nu=\nu',\ \mu>\mu';$ \item[(v)]$\nu=\nu',\ \mu=\mu',\ \mbox{and}\ \la>\la'.$ \end{itemize} Let \begin{eqnarray*} B_{n}=\{b_i\mid b_{i}\succ b_{j}\ \text{for}\ i>j\},\quad\text{where}\quad b_{i}=M_{-\la^{(i)}}Q_{-\mu^{(i)}+\frac12}L_{-\nu^{(i)}}\1, \end{eqnarray*} with $\la^{(i)},\nu^{(i)}\in \mathcal P ,\mu^{(i)}\in \mathcal{SP}$ and $|\la^{(i)}|+|\mu^{(i)}-\frac{1}{2}|+|\nu^{(i)}|=n$ for any $i$. Any non-zero homogeneous vector $X\in V_n=V(c_L,c_M,h_L,h_M)_{n}$, $n\in\frac12\mathbb Z_+$, can be uniquely written as a linear combination of elements in $B_{n}$: $$X=\sum_{i=1}^m a_iX_i,\text{ where } 0\neq a_i\in\mathbb C, X_i\in B_{n}\text{ and }X_1\succ X_2\succ\cdots\succ X_m.$$ We define the {\bf highest term} of $X$ as ${\rm hm}(X)=X_1$. Now we define on $V(c_L,c_M,h_L,h_M)$ the operations of formal partial derivatives $\frac{\partial}{\partial Q_{- i+\frac12}}, i\in \mathbb{Z}_+$, as follows \begin{eqnarray*} \frac{\partial}{\partial Q_{- i+\frac12}}M_{- j}=\frac{\partial}{\partial Q_{- i+\frac12}}L_{- j}=0,\ \frac{\partial}{\partial Q_{- i+\frac12}}Q_{- j+\frac12}=\delta_{ji},\ \frac{\partial}{\partial Q_{- i+\frac12}}\1=0\end{eqnarray*} and then define their actions on monomials \eqref{def2.1} by the super-Leibniz rule. Finally, we extend these to $U(\frak{g}_-)$ by linearity.
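For small exponents, the character above expands as
\begin{eqnarray*}
{\rm char}\, V(c_L,c_M,h_L,h_M)=q^{h_L}\left(1+q^{\frac12}+2q+3q^{\frac32}+6q^2+\cdots\right),
\end{eqnarray*}
where, for instance, the coefficient $3$ of $q^{\frac32}$ counts the basis $B_{\frac32}=\{Q_{-\frac32}\1,\ M_{-1}Q_{-\frac12}\1,\ L_{-1}Q_{-\frac12}\1\}$ and the coefficient $6$ of $q^{2}$ counts $B_{2}=\{M_{-2}\1,\ M_{-1}^2\1,\ Q_{-\frac32}Q_{-\frac12}\1,\ M_{-1}L_{-1}\1,\ L_{-2}\1,\ L_{-1}^2\1\}$. As an example of the super-Leibniz rule, $\frac{\partial}{\partial Q_{-\frac12}}\left(Q_{-\frac32}Q_{-\frac12}\1\right)=-Q_{-\frac32}\1$, the sign arising because the odd derivation passes the odd factor $Q_{-\frac32}$.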
Let us recall the necessary and sufficient conditions for the Verma module $V(c_L,c_M,h_L,h_M)$ to be irreducible. \begin{theo} \label{Sim} \cite[Theorem 3.2]{LPXZ} For $(c_L,c_M,h_L,h_M)\in\bC^4$, the Verma module $V(c_L,c_M,h_L,h_M)$ over $\mathfrak{g}$ is irreducible if and only if $$ 2h_M+\frac{i^2-1}{12}c_M\neq 0,\ \forall i\in \mathbb Z_{+}. $$ \end{theo} From now on we always assume that $$\phi(p)=2h_M+\frac{p^2-1}{12}c_M=0$$ for some $p\in \mathbb Z_+$. This assumption indicates that the Verma module $V(c_L,c_M,h_L,h_M)$ is reducible and contains a singular vector not in $\mathbb{C}{\bf 1}$. \begin{theo}\label{cor3.3}\cite[Theorem 3.3 and Proposition 5.2]{LPXZ} Suppose $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(1)=0$, which implies that $h_M=0$. \begin{itemize} \item[$(1)$] The vectors $M_{-1}{\bf 1}$ and $Q_{-\frac{1}{2}}{\bf 1}$ of $V(c_L,c_M,h_L,0)$ are singular vectors. If further $h_L =0$, then $L_{-1}{\bf 1}$ is a subsingular vector of $V(c_L,c_M,0,0)$, i.e., a singular vector of \\ $V(c_L,c_M,0,0)/\langle Q_{-\frac12}\mathbf 1\rangle$, where $\langle Q_{-\frac12}\mathbf 1\rangle$ is the $\mathfrak{g}$-submodule generated by $Q_{-\frac12}\mathbf 1$. \item[$(2)$] The vacuum module $V(c_L,c_M)=V(c_L,c_M,0,0)/\langle L_{-1}\1\rangle$ is irreducible if and only if $c_M\neq 0$. \item[$(3)$] The vacuum module $V(c_L,c_M)$ with $c_M\ne0$ is endowed with a simple vertex superalgebra structure. There is a one-to-one correspondence between smooth $\mathfrak{g}$-modules of central charge $(c_L, c_M)$ and $V(c_L,c_M)$-modules. \end{itemize} \end{theo} The following result is obvious. \begin{lem}\label{degenerated-case} If $ c_M=0$ and $h_M=0$, then the Verma module $V(c_L,0,h_L,0)$ possesses a submodule $ \langle \mathcal{M}_-\mathbf1, \mathcal{Q}_-\mathbf1\rangle$ and the quotient module $V(c_L,0,h_L,0)/ \langle \mathcal{M}_-\mathbf1, \mathcal{Q}_-\mathbf1\rangle$ is isomorphic to the Verma module $V_{\mathfrak{vir}}(c_L,h_L)$ over the Virasoro algebra.
\end{lem} For the remaining case $p=1$ and $c_M\ne0$ (in this case $h_M=0$), the structure of the Verma module $V(c_L,c_M,h_L,0)$ will be determined in the next sections. \section{Classification of singular vectors of Verma modules} Fix $(c_L,c_M,h_L,h_M)\in\bC^4$. In this section we will determine all singular vectors of the Verma module $V(c_L,c_M,h_L,h_M)$ when it is reducible. From now on we will assume that $\phi(p)=0$ for some $p\in\mathbb Z_+$ with $c_M \ne0$. The case $c_M=0$ (then $h_M=0$) was solved in Theorem \ref{cor3.3} and Lemma \ref{degenerated-case}. \subsection{Singular vectors in $V(c_L,c_M,h_L,h_M)_n$ for $ n\in\mathbb Z_+$} First, we construct a singular vector ${\rm S}\1$ in $V(c_L,c_M,h_L,h_M)_n$ for some $ n\in\mathbb Z_+$. \begin{pro} \label{singular-S1} The Verma module $V(c_L,c_M,h_L,h_M)$ possesses a singular vector \begin{eqnarray}\label{e3.7} u={\rm S}\1=M_{-p}\1+\sum_{\mu\in \mathcal P(p),\, \mu<(p) }s_{\mu}M_{-\mu}\1\in V(c_L,c_M,h_L,h_M)_p, \end{eqnarray} where $$ s_{\mu}=(-1)^{\ell(\mu)-1}\prod_{i=1}^{\ell(\mu)-1}\frac{2(p-\sum_{j=0}^{i-1}\mu_j)-\mu_{i}}{2(p-\sum_{j=1}^{i}\mu_j)\phi(p-\sum_{j=1}^{i}\mu_j)}, $$ and $\mu_0=0$, $\mu=(\mu_1, \mu_2, \cdots, \mu_s)\in\mathcal P(p)$.
\end{pro} \begin{proof} Suppose that $${\rm S}=\sum_{\lambda\in \mathcal P(p) }s_{\lambda}M_{-\lambda}\in U(\mathfrak{g}_{-}), \quad s_{\lambda}\in \mathbb{C},$$ where the summands of ${\rm S}$ are ordered according to the ordering $\succ$ defined in Section 2.2 as follows: \begin{eqnarray*} M_{-p}, M_{-(p-1)}M_{-1}, M_{-(p-2)}M_{-2}, M_{-(p-2)}M_{-1}^{2}, \cdots, M_{-1}^p.\label{W-ordering} \end{eqnarray*} Now we consider the ${\tt p}(p)$ linear equations: \begin{eqnarray} L_{p}u=0,\ L_{p-1}L_{1}u=0,\ L_{p-2}L_{2}u=0,\ L_{p-2}L_{1}^{2}u=0,\ \cdots, L_{1}^{p}u=0.\label{o2.110} \end{eqnarray} The coefficient matrix $A_{p}$ of the linear equations \eqref{o2.110} is $$A_{p}=\left( \begin{array}{ccccc} p\phi(p) & 0 & 0 & \cdots & 0 \\ \star & \star & 0 & \cdots & 0 \\ \star & \star & \star & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \star & \star & \star & \cdots & \star \\ \end{array} \right). $$ Clearly, $A_{p}$ is lower triangular, its first row is zero, and its other diagonal entries, as well as the entries $\star$ in the first column, are non-zero. So there exists a unique solution for ${\rm S}$ with $1$ as the coefficient of $M_{-p}$. Indeed, by acting with $L_i$, $i=p-1, p-2, \cdots, 1$, on $u={\rm S}\1$ we can determine all $s_{\mu}$. \end{proof} \begin{exa} \begin{eqnarray*} &(1)&p=1,h_M=0: {\rm S}=M_{-1};\\ &(2)&p=2,h_M=-\frac{1}{8}c_M: {\rm S}=M_{-2}+\frac{6}{c_M}M_{-1}^2;\\ &(3)&p=3,h_M=-\frac{1}{3}c_M: {\rm S}=M_{-3}+\frac{6}{c_M}M_{-2}M_{-1}+\frac{9}{c_M^2}M_{-1}^3. \end{eqnarray*} \end{exa} \begin{lem}\label{l3.15'} For any $x\in \frak g_+$, we have \begin{eqnarray*} [x, {\rm S}]\in U(\mathcal{M}_-) \left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right)+U(\mathcal{M}_-)\mathcal M_+. \end{eqnarray*} \end{lem} \begin{proof} It follows from the fact that $[x, {\rm S}]\1=0$ in $V(c_L,c_M,h_L,h_M)$ and $[x, {\rm S}]\in U(\mathcal{M}\oplus \mathbb C{\bf c}_M) $ for any $x\in\frak g_+$.
\end{proof} \begin{lem}\label{singular-Sk} Let $u={\rm S}{\bf 1}$ be the singular vector in Proposition \ref{singular-S1}. Then ${\rm S}^k{\bf 1}$ is a singular vector in $V(c_L,c_M,h_L,h_M)_{kp}$ for any $k\in\mathbb Z_+$. \end{lem} \proof It follows from Lemma \ref{l3.15'}.\qed \begin{lem}\label{l3.6} If $u$ is a singular vector in $V(c_L,c_M,h_L,h_M)_n$ for $n\in \mathbb Z_+$ with $\ell_L(u)=0$, then $\ell_Q(u)=0$. \end{lem} \begin{proof} Assume that $\ell_Q(u)\ne0$. Set \begin{eqnarray*} u=\sum_{\mu\in\mathcal {SP}}a_\mu Q_{-\mu+\frac12}\1\in V(c_L,c_M,h_L,h_M)_n, \end{eqnarray*} where $a_\mu \in U(\mathcal M_-)$. Take $\bar\mu=(\bar\mu_1, \cdots, \bar\mu_s)$ among all $\mu$ with $a_{\mu}\ne0$ such that ${\bar\mu}_1$ is maximal. Certainly, $s\ge 2$ since $\ell(\bar\mu)$ is even and $n\in\mathbb Z$. \begin{eqnarray*} 0=Q_{{\bar\mu}_1-\frac12}u=\left(2h_M+\frac{(2\bar \mu_1-1)^2-1}{12}c_M\right)\sum_{\mu_1=\bar{\mu_1}} a_{\mu}\frac{\partial}{\partial Q_{-\bar\mu_1+\frac12}}Q_{-\mu+\frac12}{\bf 1}. \end{eqnarray*} If $2\bar\mu_1-1\ne p$, then $Q_{{\bar\mu}_1-\frac12}u\ne 0$, which is a contradiction. It remains to consider the case $p=2\bar\mu_1-1$ (which is odd); in this case $p>2\bar\mu_2-1$. Acting with $Q_{\bar\mu_2-\frac12}$ on $u$, we get \begin{eqnarray*} 0=Q_{{\bar\mu}_2-\frac12}u=\left(2h_M+\frac{(2\bar \mu_2-1)^2-1}{12}c_M\right)\sum_{\mu_1=\bar{\mu_1}} a_{\mu}\frac{\partial}{\partial Q_{-\bar\mu_2+\frac12}}Q_{-\mu+\frac12}{\bf 1}+B\ne 0, \end{eqnarray*} where $\nu_1<\bar{\mu}_1$ for every summand $a_\nu Q_{-\nu+\frac12}$ in $B$ with $a_\nu\ne0$. This again yields a contradiction. \end{proof} Now we shall determine all singular vectors in $V(c_L,c_M,h_L,h_M)$ with ${\ell}_L(u)=0$. \begin{theo}\label{singular-W} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ with $c_M\ne 0$, ${\rm S}$ be defined in Proposition \ref{singular-S1} and $u\in V(c_L,c_M,h_L,h_M)_n$ for some $n\in \mathbb Z_+$ with ${\ell}_L(u)=0$.
Then $u$ is a singular vector if and only if $n=kp$ for some $k\in\mathbb Z_+$. In this case, $u={\rm S}^k{\bf 1}$ up to a scalar multiple. \end{theo} \proof Let $u\in V(c_L,c_M,h_L,h_M)_n$ be a singular vector. By Lemma \ref{l3.6} we can suppose that \begin{equation} u= (a_0{\rm S}^k+a_1{\rm S}^{k-1}+\cdots+a_{k-1}{\rm S}+a_k)\mathbf1,\label{E3.5} \end{equation} where $k\in\mathbb Z_+$ and each $a_i\in U(\mathcal M_-)$ does not involve $M_{-p}$ for any $i=0,1, \cdots, k$. We may assume that $a_0\ne0$. If ${\rm hm}\{a_0, a_1, \cdots, a_k\}\notin \mathbb C$, set ${\rm hm}\{a_0, a_1, \cdots, a_k\}=M_{-\lambda}$. Acting with $L_{\lambda}$ on (\ref{E3.5}), we get $L_{\lambda}u\ne0$, since no $a_i$ involves $M_{-p}$. So $a_0\in\mathbb C$ and $u=a_0{\rm S}^k{\bf 1}.$ The theorem follows. \qed \begin{lem}\label{l3.1} If $u$ is a singular vector in $V(c_L,c_M,h_L,h_M)_n$, then $\ell_{L}(u)=0$. \end{lem} \begin{proof} Suppose to the contrary that $\ell_{L}(u)\neq 0$. Write $u=y\1\in V(c_L,c_M,h_L,h_M)_{n}$, where $y\in U(\mathfrak{g}_-)$. Then $M_0 u=M_0y\1=[M_0,y]\1+h_Mu$. If $[M_0,y]\neq 0$, then $\ell_{L}([M_0,y])<\ell_{L}(y)$. This implies $[M_0,y]\neq ay $ for any $a\in\mathbb{C}^*$, showing that $u$ is not a singular vector of $V(c_L,c_M,h_L,h_M)$. So $[M_0,y]=0$. We write \begin{equation*} u=y\1= (a_0L_{-p}^k+a_1L_{-p}^{k-1}+a_2L_{-p}^{k-2}+\cdots+a_k)\1, \label{singularL} \end{equation*} where $k\in\mathbb Z_+, a_i\in U(\frak g_-), i=0, 1, \cdots, k, a_0\ne 0$, and no $a_i$ involves $L_{-p}$. We claim that ${\ell}_L(a_0)=0$. Otherwise, ${\rm hm}(a_0)=a'_0L_{-\nu}$ for some $a'_0\in U(\mathcal{M}_-+\mathcal{Q}_-)$ and $\emptyset\ne\nu\in\mathcal P$. Then $[M_{\nu}, y]\1=a'_0[M_{\nu}, L_{-\nu}]L_{-p}^k\1+a'_1L_{-p}^{k-1}\1+\cdots+a'_k\1\ne 0$, where $a'_i\in U(\frak g_-), i=1, \cdots, k$ with any $a'_i$ not involving $L_{-p}$.
This is a contradiction since $[M_{\nu}, L_{-\nu}]\ne 0$ by the assumption that $L_{-\nu}$ does not involve $L_{-p}$. The claim follows, and $a_0\in U(\mathcal M_-+\mathcal Q_-)$. Now we shall prove that $k=0$, which gives the lemma. Suppose to the contrary that $k\ge 1$. If $[M_0, a_1]=0$, we see that \begin{equation*} [M_0, y]= kpa_0M_{-p}L_{-p}^{k-1}+A=0, \label{M0-act}\end{equation*} where the degree of $L_{-p}$ in $A$ is no more than $k-2$. This is a contradiction. So $[M_0, a_1]\ne0$. If ${\ell}_L(a_1)\ge 2$, then \[ [M_0, y]= kpa_0M_{-p}L_{-p}^{k-1}+[M_0, a_1]L_{-p}^{k-1}+B=0, \] where the degree of $L_{-p}$ in $B$ is no more than $k-2$. We see that ${\ell}_L([M_0, a_0L_{-p}^{k}])=k-1$, ${\ell}_L([M_0, a_1]L_{-p}^{k-1})\ge k$, yielding that $[M_0, y]\ne0$, which is a contradiction. Now we obtain that ${\ell}_L(a_1)=1$ and set \begin{equation*}a_1=\sum_{i=1}^{s}b_iL_{-i}+b_0, \label{eqa1}\end{equation*} where $b_i\in U(\mathcal{M}_-+\mathcal Q_-)$ and $b_s\ne 0$. $$[M_s, y]=a_0[M_s, L_{-p}^k]+b_s[M_s, L_{-s}]L_{-p}^{k-1}+B',$$ where the degree of $L_{-p}$ in $B'$ is no more than $k-2$. If $s>p$, then ${\ell}_L(a_0[M_s, L_{-p}^k])\le k-2$ and ${\ell}_L([M_s, L_{-s}]L_{-p}^{k-1})=k-1$. In this case $[M_s, y]\1\ne 0$, a contradiction. So $s<p$, noting that $s\ne p$ since $a_1$ does not involve $L_{-p}$. Note that if $p=1$, then $s=0$, which means ${\ell}_L (a_1)=0$. This is a contradiction. So we can suppose that $p>1$. Acting with $L_i$ for $i\in\mathbb Z_+$ on $u$, we get $$L_iu= L_{-p}^k[L_i, a_0]\1+A=0, $$ where the degree of $L_{-p}$ in $A$ is no more than $k-1$. So $[L_i, a_0]\1=0$ for any $i\in\mathbb Z_+$. In this case, $a_0\1$ becomes a singular vector of $V(c_L,c_M,h_L,h_M)$ with ${\ell}_L(a_0\1)=0$. By Theorem \ref{singular-W}, we get $ a_0=d_0{\rm S}^l $ where $l\in\mathbb N, d_0 \in\mathbb C^*$. In this case, \begin{equation*}[M_0, y]\1=kpa_0M_{-p}L_{-p}^{k-1}\1+[M_0, a_1]L_{-p}^{k-1}\1+B\1=0,\label{eqMp}\end{equation*} where the degree of $L_{-p}$ in $B$ is no more than $k-2$.
So \begin{equation}kpd_0{\rm S}^lM_{-p}+[M_0, a_1]=0.\label{eqMp1}\end{equation} By considering the degree of ${\rm S}$ in \eqref{eqMp1}, we have $a_1=f_0{\rm S}^{l+1}+f_1{\rm S}^l+\cdots+f_{l+1}$, where the $f_i\in U(\frak g_-)$ do not involve $L_{-p}$ or $M_{-p}$. Comparing the coefficients of ${\rm S}^{l+1}$ in \eqref{eqMp1}, we get $$[M_0, f_0]=kpd_0\in\mathbb C^*,$$ a contradiction. \end{proof} \begin{theo} \label{main1} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ with $c_M\ne 0$. Let ${\rm S}\1$ be the singular vector in Proposition \ref{singular-S1}, $n\in\mathbb Z_+$. Then $V(c_L,c_M,h_L,h_M)_n$ possesses a singular vector $u$ if and only if $n=kp$ for some $k\in\mathbb Z_+$. In this case, $u={\rm S}^k{\bf 1}$ up to a scalar multiple. \end{theo} \begin{proof} It follows from Theorem \ref{singular-W} and Lemma \ref{l3.1}. \end{proof} \subsection{Singular vectors in $V(c_L,c_M,h_L,h_M)_{n-\frac12}$ for $ n\in\mathbb Z_+$} In this subsection, we shall determine all singular vectors in $V(c_L,c_M,h_L,h_M)_{n-\frac12}$ for $ n\in\mathbb Z_+$. \begin{lem}\label{singular-Q1} If $u$ is a singular vector in $V(c_L,c_M,h_L,h_M)_{n-\frac12}$ for some $n\in\mathbb Z_+$ with $\ell_{L}(u)=0$, then $p$ is odd and $\ell_Q(u)=1$. \end{lem} \proof It follows by arguments similar to those in the proof of Lemma \ref{l3.6}, together with the fact that $\ell_Q(u)\ge 1$ here. \qed \begin{pro} \label{singular-R1} Let $p\in 2\mathbb Z_+-1$. Then the Verma module $V(c_L,c_M,h_L,h_M)$ possesses a singular vector $u\in V(c_L,c_M,h_L,h_M)_{\frac{p}{2}}$ with $\ell_{L}(u)=0$. Up to a scalar multiple, it is unique and can be written as \begin{eqnarray} u={\rm R}\1=Q_{-\frac{p}{2}}\1+\sum_{i=1}^{\frac{p-1}{2}}f_{i}(M)Q_{-\frac{p}{2}+i}\1, \end{eqnarray} where $f_i(M)=\sum_{|\lambda|=i}c_{\lambda}M_{-\lambda}$ for some $c_{\lambda}\in \mathbb{C}$. \end{pro} \proof It suffices to prove the case $p>1$.
By Lemma \ref{singular-Q1}, we can suppose that \begin{eqnarray*} {\rm R}=f_0Q_{-\frac{p}{2}}+\sum_{i=1}^{\frac{p-1}{2}}f_i(M)Q_{-i+\frac{1}{2}}, \end{eqnarray*} where $ f_0\in \mathbb C, f_i(M)\in U(\mathcal M_-), i=1, 2, \cdots, \frac{p-1}{2}$. Here the summands of ${\rm R}$ are ordered according to the ordering $\succ$ defined in Section 2.2 as follows: \begin{eqnarray*} Q_{-\frac{p}{2}}, M_{-1}Q_{-\frac{p}{2}+1}, M_{-2}Q_{-\frac{p}{2}+2}, M_{-1}^{2}Q_{-\frac{p}{2}+2}, \cdots, M_{-1}^{\frac{p-1}{2}}Q_{-\frac{1}{2}}.\label{o2.10} \end{eqnarray*} Now we consider the following linear equations: \begin{eqnarray}\label{eee4.8} Q_{\frac{p}{2}}u=L_{1}Q_{\frac{p}{2}-1}u=L_{2}Q_{\frac{p}{2}-2}u=L_{1}^{2}Q_{\frac{p}{2}-2}u=\cdots=L_{1}^{\frac{p-1}{2}}Q_{\frac{1}{2}}u=0. \end{eqnarray} The number of these equations is exactly $\sum_{i=0}^{\frac{p-1}{2}}{\tt p}(i)$. By direct calculations we can see that the coefficient matrix $A_{p}$ of \eqref{eee4.8} is lower triangular and its first row is zero. All other diagonal entries are non-zero by our assumptions. So there exists a unique solution with a non-zero coefficient of $Q_{-\frac p2}$ up to a scalar multiple. The proposition follows. \qed In the following, we provide an explicit formula for ${\rm R}$. \begin{pro}\label{singular-R11} Let $p\in2\mathbb Z_+-1$. Then the singular vector ${\rm R}\1$ in Proposition \ref{singular-R1} can be determined as \begin{eqnarray}\label{R-exp} {\rm R}\1=Q_{-\frac{p}{2}}\1+\sum_{i=1}^{\frac{p-1}{2}}f_{i}(M)Q_{-\frac{p}{2}+i}\1, \end{eqnarray} where \begin{eqnarray} f_{1}(M)=c_1M_{-1}, f_{i}(M)=c_iM_{-i}+\sum_{j=1}^{i-1}c_if_j(M)M_{-(i-j)}, \end{eqnarray} and $c_i=\frac{6}{i(p-i)c_M}$ for $i=1,\cdots,\frac{p-1}{2}$. \end{pro} \begin{proof} Let ${\rm R}\1$ be as \eqref{R-exp}, a singular vector in $V(c_L,c_M,h_L,h_M)_{\frac p2}$, where $f_{i}(M)\in U(\mathcal{M}_-)$ with degree $i$, $i=1,\cdots,\frac{p-1}{2}$.
For $i=1, 2, \cdots,\frac{p-1}{2}$, using the action of $Q_{\frac{p}{2}-i}$ on \eqref{R-exp}, we deduce that \begin{eqnarray*} 0=Q_{\frac{p}{2}-i}{\rm R}\1&=&[Q_{\frac{p}{2}-i},Q_{-\frac{p}{2}}]\1+\sum_{j=1}^if_j(M)[Q_{\frac{p}{2}-i},Q_{-\frac{p}{2}+j}]\1 \\ &=&2M_{-i}\1+2f_1(M)M_{-i+1}\1+\cdots+f_{i}(M)\left(2M_0+\frac{(p-2i)^2-1}{12}{\bf c}_M\right)\1. \end{eqnarray*} Applying $h_M=-\frac{p^2-1}{24}c_M$, we deduce that $$2M_{-i}+2f_1(M)M_{-i+1}+\cdots-f_{i}(M) \frac{i(p-i)}{3}c_M =0,$$ and hence \begin{eqnarray*} f_{1}(M)=c_1M_{-1}, f_{i}(M)=c_iM_{-i}+\sum_{j=1}^{i-1}c_if_j(M)M_{-(i-j)}, \end{eqnarray*} and $c_i=\frac{6}{i(p-i)c_M}$ for $i=1,\cdots,\frac{p-1}{2}$. \end{proof} \begin{exa} \begin{eqnarray*} &(1)&p=1,h_M=0: {\rm R}=Q_{-\frac{1}{2}};\\ &(2)&p=3,h_M=-\frac{1}{3}c_M: {\rm R}=Q_{-\frac{3}{2}}+\frac{3}{c_M}M_{-1}Q_{-\frac{1}{2}};\\ &(3)&p=5,h_M=-c_M: {\rm R}=Q_{-\frac{5}{2}}+\frac{3}{2c_M}M_{-1}Q_{-\frac{3}{2}}+\frac{1}{c_M}M_{-2}Q_{-\frac{1}{2}}+\frac{3}{2c_M^2}M_{-1}^{2}Q_{-\frac{1}{2}}. \end{eqnarray*} \end{exa} By direct calculation, we have the following lemma. \begin{lem}\label{l3.15} For any $x\in \frak g_+$, we have \begin{eqnarray*} [x, {\rm R}]\in U(\mathcal{M}_-+\mathcal{Q}_-)\left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right)+U(\mathcal{M}_-+\mathcal{Q}_-)\frak g_+. \end{eqnarray*} \end{lem} \begin{proof} It follows from the fact that $[x, {\rm R}]\1=0$ in $V(c_L,c_M,h_L,h_M)$ and $[x, {\rm R}]\in U(\mathcal{M}+\mathcal Q+\mathbb C{\bf c}_M)$ for any $x\in\frak g_+$. \end{proof} \begin{pro}\label{t3.15} Let $p\in2\mathbb Z_+-1$. Then ${\rm R}^{2}={\rm S}$, and ${\rm R}^n\1$ is a singular vector for any $n\in 2\mathbb Z_+-1$. \end{pro} \proof It follows from Lemma \ref{l3.15} that ${\rm R}^{2}\1$ is a singular vector in $ V(c_L,c_M,h_L,h_M)$. By Theorem \ref{main1}, ${\rm R}^{2}={\rm S}$. Moreover, for any $n\in 2\mathbb Z_+-1$, ${\rm R}^n\1$ is also a singular vector by Lemma \ref{l3.15}.
\qed \begin{lem}\label{l3.1Q} If $u$ is a singular vector in $V(c_L,c_M,h_L,h_M)_{n-\frac12}$ for $n\ge 1$, then $\ell_{L}(u)=0$. \end{lem} \begin{proof} Write $u=y\1\in V(c_L,c_M,h_L,h_M)_{n-\frac12}$, where $y\in U(\mathfrak{g}_-)$. Then $M_0 u=M_0y\1=[M_0,y]\1+h_Mu$. By an argument similar to that in the proof of Lemma \ref{l3.1}, we have $M_0y\1=h_My\1$. For any $x\in \frak g_+$, $x{\rm R}y\1={\rm R}xy\1+[x, {\rm R}]y\1=0$ by Lemma \ref{l3.15}. Then ${\rm R}y\1$ is a singular vector in $V(c_L,c_M,h_L,h_M)_{n+\frac{p-1}2}$. So ${\ell}_L({\rm R}y)=0$ by Lemma \ref{l3.1} and then ${\ell}_L(y)=0$. \end{proof} Now we get the following result. \begin{theo} \label{main2} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ with $c_M\ne 0$. Then $V(c_L,c_M,h_L,h_M)_{n-\frac{1}{2}}$ for $n\in\mathbb Z_+$ has a singular vector $u$ if and only if $p\in 2\mathbb Z_+-1$ and there exists $k\in \mathbb Z_+$ such that $n-\frac12=\frac{p}{2}(2k-1)$. Moreover, all singular vectors of $V(c_L,c_M,h_L,h_M)_{kp-\frac{p}{2}}$, up to a scalar multiple, are ${\rm R}^{2k-1}{\bf 1}$ for $k\in \mathbb{Z}_+$. \end{theo} \proof By Lemmas \ref{singular-Q1}, \ref{l3.1Q} and Propositions \ref{singular-R1} and \ref{t3.15}, we can suppose that \begin{equation} u= (a_0{\rm R}^{2k-1}+a_1{\rm R}^{2k-3}+\cdots+a_{k-1}{\rm R}+a_k)\mathbf1,\label{singularSM} \end{equation} where $k\in\mathbb Z_+, a_i\in U(\mathcal{M}_-)$ not involving $M_{-p}$ for any $i=0, 1, \cdots, k-1$, and $a_k\in U(\mathcal{M}_-+\mathcal{Q}_-)$ not involving $M_{-p}, Q_{-\frac p2}$. Assume that $a_k\ne 0$; then $\ell_Q(a_k)=1$. Set ${\rm hm}(a_k)=M_{-\mu}Q_{-\frac q2}$, where $\mu\in\mathcal P, q\ne p$. Acting with $Q_{\frac q2}$ on $u$, we get a contradiction. So $a_k=0$. Suppose that some $a_i\notin\mathbb C$ and set ${\rm max}\{{\rm hm}(a_0), \cdots, {\rm hm}(a_{k-1})\}=M_{-\lambda}$. Acting with $L_{\lambda}$ on \eqref{singularSM}, we get $L_{\lambda}u\ne0$ since no $a_i\in U(\mathcal M_-)$ involves $M_{-p}$, a contradiction. So $a_i\in\mathbb C$ for any $i=0,1, \cdots, k-1$.
The theorem follows. \qed Combining Theorem \ref{main1} with Theorem \ref{main2}, we get the following result about all singular vectors of the Verma $\mathfrak{g}$-module $V(c_L,c_M,h_L,h_M)$. \begin{theo}\label{t3.19} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ with $c_M\ne 0$. \begin{itemize} \item[$(1)$] If $p$ is even, all singular vectors of the Verma module $V(c_L,c_M,h_L,h_M)$ are ${\rm S}^k{\bf 1}$ for $k\in \mathbb N$, up to a scalar multiple. \item[$(2)$] If $p$ is odd, all singular vectors of the Verma module $V(c_L,c_M,h_L,h_M)$ are ${\rm R}^{k}{\bf 1}$ for $k\in \mathbb N$, up to a scalar multiple. \end{itemize} \end{theo} Applying this theorem we can easily get the following consequence. \begin{cor} Let $(c_L,c_M,h_L,h_M)\ne (c_L',c_M',h_L',h_M')\in\bC^4$. Then $${\rm Hom}_{\frak g} (V(c_L,c_M,h_L,h_M), V(c_L',c_M',h_L',h_M'))\ne 0$$ if and only if $c_M=c_M', c_L=c_L', h_M=h_M'$, $2h'_M+\frac{p^2-1}{12}c'_M=0$ for some $p\in \mathbb Z_+$, and $h_L=h_L'+ip$ for some $i\in \mathbb N$ (when $p$ even) or $i\in \frac12\mathbb N$ (when $p$ odd). \end{cor} \begin{proof} We know that ${\rm Hom}_{\frak g} (V(c_L,c_M,h_L,h_M), V(c_L',c_M',h_L',h_M'))\ne 0$ if and only if there is a non-zero $\frak g$-module homomorphism $$\varphi: V(c_L,c_M,h_L,h_M)=\langle {\bf 1}\rangle\to V(c_L',c_M',h_L',h_M')=\langle {\bf1'}\rangle,$$ if and only if $\varphi({\bf 1})=u{\bf 1'}$ is a singular vector of $ V(c_L',c_M',h_L',h_M')$ for some $u\in U(\frak g_-)$, if and only if, by Theorem \ref{t3.19}, $u={\rm S}^k$ ($p$ even) or ${\rm R}^k$ ($p$ odd) for some $k\in\mathbb N$. So $c_M=c_M', c_L=c_L', h_M=h_M'$ and $h_L=h_L'+ip$ for some $i\in \mathbb N$ (when $p$ even) or $i\in \frac12\mathbb N$ (when $p$ odd). In this case ${\rm dim}\, {\rm Hom}_{\frak g}(V(c_L,c_M,h_L,h_M), V(c_L',c_M',h_L',h_M'))=1$.
\end{proof} \begin{cor} \label{main1-w22} With the notation as above, if $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ with $c_M\ne 0$, then any singular vector of the Verma module $V_{W(2,2)}(h_L, h_M, c_L, c_M)$ is ${\rm S}^k{\bf 1}$ for some $k\in \mathbb{N}$, up to a scalar multiple. \end{cor} \proof Consider the subspace $U({W(2,2)})\1$ in the Verma $\mathfrak{g}$-module $V(h_L, h_M, c_L, c_M)$ which is the Verma $W(2,2)$-module $V_{W(2,2)}(h_L, h_M, c_L, c_M)$. From Theorem \ref{t3.19} and simple computations we know that $u\in V_{W(2,2)}(h_L, h_M, c_L, c_M)$ is a singular vector if and only if it is a singular vector in the Verma $\mathfrak{g}$-module $V(c_L,c_M,h_L,h_M)$, if and only if it is ${\rm S}^k{\bf 1}$ for $k\in \mathbb{N}$, up to a scalar multiple. \qed \begin{rem} Corollary \ref{main1-w22} was originally proposed in \cite[Theorem 2.7]{JZ}. However, the proof presented in \cite[Lemma 2.4, Theorem 2.7]{JZ} contains certain gaps. The singular vector ${\rm S}\1$ for the Verma module $V_{W(2,2)}(h_L, h_M, c_L, c_M)$ was first introduced in \cite[Proposition 2.6]{JZ}, and later expressed in \cite[Theorem 7.5]{AR1} using a free-field realization of vertex algebras and Schur polynomials. \end{rem} \section{Classification of subsingular vectors of Verma modules} In this section, we continue considering reducible Verma modules $V(c_L,c_M,h_L,h_M)$ over $\mathfrak{g}$ for fixed $(c_L,c_M,h_L,h_M)\in\bC^4$. So we always assume that $\phi(p)=2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$. We will determine all subsingular vectors in the Verma module $V(h_L, h_M, c_L, c_M)$. Let $J'(c_L,c_M,h_L,h_M)$ be the submodule of $V(c_L,c_M,h_L,h_M)$ generated by all singular vectors. Set $$ L'(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)/J'(c_L,c_M,h_L,h_M). $$ By Theorem \ref{t3.19}, $J'(c_L,c_M,h_L,h_M)$ is generated by $u={\rm S}\1$ if $p\in 2\mathbb Z_+$, and by $u={\rm R}\1$ if $p\in 2\mathbb Z_+-1$, where ${\rm S}$ and ${\rm R}$ are defined in Section 3.
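For instance, by the examples in Section 3, in the two smallest odd cases the generator of $J'(c_L,c_M,h_L,h_M)$ reads \begin{eqnarray*} p=1:\ {\rm R}\1=Q_{-\frac{1}{2}}\1;\qquad p=3:\ {\rm R}\1=Q_{-\frac{3}{2}}\1+\frac{3}{c_M}M_{-1}Q_{-\frac{1}{2}}\1. \end{eqnarray*}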
For convenience, for $x\in V(c_L,c_M,h_L,h_M)$ we will abuse the notation that $ x \in L'(c_L,c_M,h_L,h_M)$ means $ x+J'(c_L,c_M,h_L,h_M)\in L'(c_L,c_M,h_L,h_M)$. \subsection{Necessary condition for the existence of subsingular vectors} From the construction of ${\rm R}$ and ${\rm S}$ we have the following results. \begin{lem}\label{ll4.1} (1) If $p\in 2\mathbb Z_+$, then the image of \begin{eqnarray}\label{e4.1} {\mathcal B}=\{M_{-\la}Q_{-\mu}L_{-\nu}{\bf 1}\mid \la,\nu\in \mathcal P, \mu\in\mathcal{SP}, \ \mbox{and}\ M_{-\la}\ \mbox{does not involve }\ M_{-p}\} \end{eqnarray} under the natural projection $$\pi: V(c_L,c_M,h_L,h_M)\rightarrow L'(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)/J'(c_L,c_M,h_L,h_M)$$ forms a PBW basis of $L'(c_L,c_M,h_L,h_M)$.\\ (2) If $p\in 2\mathbb Z_+-1$, then the image of \begin{equation}\label{e4.2} {\mathcal B}'=\{M_{-\la}Q_{-\mu}L_{-\nu}{\bf 1}\mid \la,\nu\in \mathcal P, \mu\in\mathcal{SP}, \ \mbox{and}\ \ Q_{-\mu},M_{-\la}\ \mbox{do not involve }\ Q_{-\frac{p}{2}},M_{-p} \ \mbox{respectively}\} \end{equation} under the natural projection $\pi$ forms a PBW basis of $L'(c_L,c_M,h_L,h_M)$. \end{lem} \begin{lem}\label{hmsubsingular} If $L'(c_L,c_M,h_L,h_M)$ is reducible and $u'$ is a singular vector not in $\mathbb C\1$, then ${\rm hm}(u')=L_{-p}^{r}{\bf 1}$ for some $r\in \mathbb Z_+$, and $\ell_{L}(u')=r$. \end{lem} \proof By Lemma \ref{ll4.1}, we may assume that any term of $u'$ does not involve $M_{-p}$ or $Q_{-\frac{p}{2}}$ (the latter does not appear if $p$ is even). If $\ell_{L}(u')=0$, using similar discussions in Section 3 (see the beginning part of the proof of Lemma \ref{l3.6}, and Theorem \ref{singular-W}), we can get $u'\in J'(c_L,c_M,h_L,h_M)$, a contradiction.
So $\ell_{L}(u')\ne0$, and suppose that \begin{equation*} u'= (g_0L_{-p}^r+g_1L_{-p}^{r-1}+g_2L_{-p}^{r-2}+\cdots+g_r)\1, \label{subsingularL} \end{equation*} where $r\in\mathbb Z_+, g_i\in U(\frak g_-), i=0, 1, \cdots, r, g_0\ne 0$ and any $g_i$ does not involve $L_{-p}, M_{-p}, Q_{-\frac p2}$. By the proof of Lemma \ref{l3.1} (ignoring the eigenvalue of $u'$), we can get ${\ell}_L(g_0)=0$. Using the proof of Lemma \ref{l3.6}, we have ${\ell}_Q(g_0)=0$. So $g_0\in U(\mathcal{M}_-)$. Now we need to show that $g_0\in \mathbb C$. (1) First we consider the case of $p=1$. Note that $h_M=0$, hence $[L_{-1},M_1]=0$. If $\ell_L(g_1)\ne 0$, set ${\rm hm}(g_1)=b(M, Q)L_{-\nu}$ for some $b(M, Q)\in U(\mathcal{M}_-+\mathcal Q_-)$. Then $\nu_1>1$. By the action of $M_{\nu_1}$ on $u'$, we can get a contradiction by comparing the coefficient of $L_{-1}^{r-1}$. So $\ell_L(g_1)=0$. Similarly, we have $\ell_L(g_2)=\cdots=\ell_L(g_{r})=0$ since $M_0L_{-1}^j\1=0, M_kL_{-1}^j\1=0$ for any $k, j\in\mathbb Z_+$ (Theorem \ref{cor3.3}). If $g_0\notin \mathbb C$, set ${\rm hm}\,(g_0)=M_{-\mu}$ not involving $M_{-1}$; then $$L_{\mu_1}u'=[L_{\mu_1}, g_0]L_{-1}^r+B, $$ where the degree of $L_{-1}$ in $B$ is no more than $r-1$ and $ [L_{\mu_1}, M_{-\mu}]\1\ne 0$. This is a contradiction. So $g_0\in\mathbb C^*$. Consequently, ${\rm hm}(u')=L_{-1}^{r}{\bf 1}$. In this case, $g_1=0$ since $g_1$ does not involve $M_{-1}, Q_{-\frac12}$. (2) Now we consider the case of $p>1$. As in Lemma \ref{l3.1} and Lemma \ref{l3.6} (using $M_1$ instead of $M_0$ in the arguments), we get \begin{equation*}\ell_L (g_1)=1\ {\rm and}\ g_1=\sum_{i=1}^{s}b_iL_{-i}+b_0,\label{g1}\end{equation*} where $b_i\in\mathbb C, i=1, \cdots, s$, $b_s\ne 0$ and $s<p$, $b_0\in \mathcal{MQ}$.
Moreover, we can get \begin{eqnarray*} \ell_L (g_i)=i \end{eqnarray*} for $i=1,\cdots,r$ by induction, and all $L_{-\nu}$ in $g_i, i\ge1$ must satisfy the condition that $\nu_1<p$ (see the proof of Lemma \ref{l3.1} using $M_1$ instead of $M_0$ in the arguments). In this case $\ell_{L}(u')=r$. Now we shall prove that $g_0\in \mathbb C^*$. Otherwise, set ${\rm hm}\,(g_0)=M_{-\mu}$ not involving $M_{-p}$; then $$L_{\mu_1}u'=[L_{\mu_1}, g_0]L_{-p}^r+B, $$ where the degree of $L_{-p}$ in $B$ is no more than $r-1$ and $ [L_{\mu_1}, M_{-\mu}]\1\ne 0$. This is a contradiction. The lemma follows. \qed Lemma \ref{hmsubsingular} tells us that, if there exists a singular vector $u'\in L'(c_L,c_M,h_L,h_M)$ of weight $pr$, then it is unique up to a scalar multiple. In the following, we provide the necessary conditions for the existence of subsingular vectors in the Verma module $V(h_L, h_M, c_L, c_M)$ over the N=1 BMS superalgebra $\mathfrak g$. \begin{theo}\label{necessity} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ and $c_M\neq 0$. Assume that there exists a singular vector $u'\in L'(c_L,c_M,h_L,h_M)$ such that ${\rm hm}(u')=L_{-p}^{r}\1$ for some $r\in \mathbb Z_+$. Then $h_L=h_{p, r}$ where \begin{eqnarray}\label{e3.37}\label{atypical} h_{p,r}=-\frac{p^2-1}{24}c_L+\frac{(41p+5)(p-1)}{48}+\frac{(1-r)p}{2}-\frac{1+(-1)^p}8p. \end{eqnarray} \end{theo} \proof {\bf Case 1}: $p=1$. From the proof of Lemma \ref{hmsubsingular}, we can suppose that $$u'=(L_{-1}^r+g_2L_{-1}^{r-2}+\cdots+g_{r-1}L_{-1}+g_{r})\1, $$ where $r\in\mathbb Z_+$, and each $g_i\in U(\mathcal{M}_-+\mathcal{Q}_-)$ does not involve $M_{-1}, Q_{-\frac 12}$ for $ i=2, \cdots, r$. Considering the coefficient of $L_{-1}^{r-1}$ in $L_1u'$ and using the formula $$L_1L_{-1}^r \1=L_{-1}^{r-1}\left(rL_0+\frac{r(r-1)}2\right)\1 $$ we can get $h_L=\frac{1-r}2$. {\bf Case 2}: $p>1$.
From Lemma \ref{hmsubsingular}, we can suppose that $$u'=(L_{-p}^r+g_1L_{-p}^{r-1}+\cdots+g_{r-1}L_{-p}+g_{r})\1, $$ where $r\in\mathbb Z_+$, $g_i\in U(\frak g_-), i=1,2, \cdots, r$ do not involve $L_{-p}, M_{-p}, Q_{-\frac p2}$. By Lemma \ref{hmsubsingular}, we can further assume that \begin{equation}g_1=\sum_{i=1}^{p-1}l_{i}M_{-i}L_{-(p-i)}+\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor}n_iQ_{-p+i-\frac{1}{2}}Q_{-i+\frac{1}{2}}+C,\label{g1-exp}\end{equation} where $l_i, n_i\in\mathbb C$ and \begin{equation}C=\sum_{\stackrel{i=1, 2, \cdots, p-1}{\ell(\la)\ge 2}}a_{\la, i}M_{-\la}L_{-i}+\sum_{\ell(\la)+\ell(\mu)\ge 3}b_{\la, \mu}M_{-\la}Q_{-\mu+\frac12}\label{g1-C}\end{equation} for some $a_{\la, i}, b_{\la, \mu}\in\mathbb C$. For any $k=1,2,\cdots, p-1$, \begin{eqnarray*}\label{Lkaction} L_ku'&=&[L_k, L_{-p}^r+g_1L_{-p}^{r-1}+\cdots+g_{r-1}L_{-p}+g_{r}]\mathbf 1\\ &=&([L_k, L_{-p}^r]+[L_k, g_1]L_{-p}^{r-1}+B)\1, \end{eqnarray*} where the degree of $L_{-p}$ in $B$ is at most $r-2$. The coefficient of $L_{-p}^{r-1}\1$ in $L_{k}u'$ should be zero. Comparing the coefficients of $L_{-p+k}L_{-p}^{r-1}$ in $L_{k}u'$, we can get $r(k+p)+l_k(2kh_M+\frac{k^3-k}{12}c_M)=0$, yielding that \begin{eqnarray*} l_k=-\frac{r(p^2-1)}{2h_Mk(p-k)} \end{eqnarray*} for $k=1,\ldots,p-1$. Note that here the degree of $L_{-p}$ of $[L_k, C]L_{-p}^{r-1}\1$ is $r-1$, or $r-2$. For the former, the length of any non-zero summand in $[L_k, C]$ is not less than $2$ with respect to $M$ (see \eqref{g1-C}). For any $k=1,2,\cdots, \lfloor \frac{p}{2}\rfloor$, comparing the coefficients of $Q_{-p+k-\frac 12}L_{-p}^{r-1}$ in $Q_{k-\frac 12}u'$, we obtain that $\frac{r(p+2k-1)}2-\left(2h_M-\frac{8(k^2-k)h_M}{p^2-1}\right)n_k=0$, yielding that \begin{eqnarray*} n_{k}=\frac{r(p^{2}-1)}{4h_M(p-2k+1)}. \end{eqnarray*} Note that \begin{eqnarray*} [L_{p}, L_{-p}^{r}]=rpL_{-p}^{r-1}\Big((r-1)p+2L_{0}+\frac{p^{2}-1}{12}c_L\Big).
\end{eqnarray*} The coefficient of $L_{-p}^{r-1}\1$ in $L_{p}u'$ is \begin{eqnarray}\label{e3.401} rp\Big((r-1)p+2h_L+\frac{p^{2}-1}{12}c_L\Big)&+&\sum_{i=1}^{p-1}2l_{i}h_Mi(2p-i)\frac{p^{2}-i^{2}}{p^{2}-1}\notag\\ &+&\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor}n_{i}h_M(3p-2i+1)\frac{p^{2}-(1-2i)^{2}}{p^{2}-1}=0, \end{eqnarray} i.e., \begin{eqnarray*} &&rp\Big((r-1)p+2h_L+\frac{p^{2}-1}{12}c_L\Big)-2rp^2(p-1)-\frac{rp(p^2-1)}6 +\frac14 r(p-1)(3p+1)\left\lfloor \frac{p}{2}\right\rfloor\notag\\ +&&\frac12 r(p+1)\left\lfloor \frac{p}{2}\right\rfloor(\left\lfloor \frac{p}{2}\right\rfloor+1)-\frac r6\left\lfloor \frac{p}{2}\right\rfloor(\left\lfloor \frac{p}{2}\right\rfloor+1)(2\left\lfloor \frac{p}{2}\right\rfloor+1)=0.\label{e3.40} \end{eqnarray*} This gives \eqref{atypical}. \qed In the above proof we did not use the actions of $M_k$ for $1\le k<p$ because they can be generated by $Q_{i-\frac12}$ (for example $M_1=Q_{\frac12}^2$). This tells us that, for $u'$, the summands $\sum_{i=1}^{p-1}l_{i}M_{-i}L_{-(p-i)}+\sum_{i=1}^{\left\lfloor \frac{p}{2}\right\rfloor}n_iQ_{-p+i-\frac{1}{2}}Q_{-i+\frac{1}{2}}$ in \eqref{g1-exp} are uniquely determined. We will particularly use this fact for $r=1$ later. Now we first determine singular vectors in $L'(c_L,c_M,h_L,h_M)_p$ under the condition $h_L=h_{p, 1}$. \begin{lem}\label{l4.4} If $u'$ is a singular vector in $L'(c_L,c_M,h_L,h_M)_p$ (implying that $h_L=h_{p, 1}$), then $u'$ can be written as follows. \begin{eqnarray}\label{subsingular2} u'={\rm T}\1=L_{-p}\1+\sum_{i=1}^{p-1}g_{p-i}(M)L_{-i}\1+u_p(M,Q)\1, \end{eqnarray} where $g_{i}(M)\in U(\mathcal{M}_-)$, $u_{p}(M,Q)\in U(\mathcal{M}_-+\mathcal{Q}_-)$ not involving $M_{-p}$ or $Q_{-\frac{p}{2}}$, and $\ell_Q(u_{p}(M,Q))= 2$. \end{lem} \proof The case for $p=1$ is clear since $h_{1,1}=0$ and $u'=L_{-1}\1$. So we need only to consider the case for $p>1$.
By Lemma \ref{hmsubsingular}, we may assume that $u'={\rm T}\1$ where \begin{eqnarray} {\rm T}=L_{-p}+\sum_{i=1}^{p-1}g_{p-i}(M,Q)L_{-i} +u_p(M,Q), \end{eqnarray} and $g_{i}(M,Q)\in U(\mathcal{M}_-+\mathcal{Q}_-)$ not involving $M_{-p},Q_{-\frac{p}{2}}$. Note that $M_0u'=ku'$ with $k\in \mathbb{C}$. On one hand, $[M_0,{\rm T}]=pM_{-p}+\sum_{i=1}^{p-1}ig_{p-i}(M,Q)M_{-i}$. Then $[M_0,{\rm T}]\neq k'{\rm T}$ for any $k'\in \mathbb{C}$. So $[M_0,{\rm T}]\1\in J'(c_L,c_M,h_L,h_M)_p$. This implies $[M_0,{\rm T}]=l{\rm S}$ for some $l\in \mathbb{C}^*$. On the other hand, ${\rm S}=M_{-p}+f_p(M)$. So $l=p$ and in $U(\mathfrak{g}_{-})$, we get \begin{equation} [M_0,{\rm T}]=p{\rm S}.\label{W0T} \end{equation} This implies $\sum_{i=1}^{p-1}ig_{p-i}(M,Q)M_{-i}=pf_p(M)\in U(\mathcal M_-)$. So $g_{p-i}(M,Q)\in U(\mathcal M_-)$. We denote it by $g_{p-i}(M)$ for any $i=1,2, \cdots, p-1$. For $1\le k\le p$, considering $$0=Q_{k-\frac12}u'=\left(\frac p2+k-\frac12\right)Q_{-p+k-\frac12}\1+\sum_{i=1}^{p-1}\left(\frac i2+k-\frac12\right)g_{p-i}(M)Q_{-i+k-\frac12}\1+[Q_{k-\frac12},u_p(M,Q)]\1, $$ we see that $\ell_Q(u_{p}(M,Q))=2$. This completes the proof. \qed \begin{rem} We found the element ${\rm T}\in U(\frak{g}_-)$ when $h_L=h_{p,1}$. From the above proof we know that (\ref{W0T}) holds whenever $\phi(p)=0$, with no need to assume that $h_L=h_{p, 1}$. \end{rem} \begin{theo}\label{subsingular} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ and $h_L=h_{p, 1}$ for some $p\in \mathbb{Z_+}$. Then there exists a unique subsingular vector $u'={\rm T}{\bf 1}$ in $L'(c_L,c_M,h_L,h_M)_p$ up to a scalar multiple, where ${\rm T}$ is defined in Lemma \ref{l4.4}. \end{theo} \begin{proof} By Lemma \ref{l4.4}, we can suppose that \begin{eqnarray}\label{subsingular2'} u'={\rm T}\1=L_{-p}\1+\sum_{i=1}^{p-1}g_{i}(M)L_{-p+i}\1+u_p(M,Q)\1, \end{eqnarray} where $g_{i}(M)\in U(\mathcal{M}_-)$, $u_{p}(M,Q)\in U(\mathcal{M}_-+\mathcal{Q}_-)$ not involving $M_{-p},Q_{-\frac{p}{2}}$, and $\ell_Q(u_p(M, Q))=2$.
We order all the possible summands of ${\rm T}$ in \eqref{subsingular2} by the ordering $\succ$ defined in Section 2.2: \begin{eqnarray} \nonumber &&L_{-p}, M_{-1}L_{-(p-1)}, M_{-2} L_{-(p-2)}, M_{-1}^2L_{-(p-2)}, \cdots, M_{-(p-1)}L_{-1}, \cdots, M_{-1}^{p-1}L_{-1},\\ \nonumber &&Q_{-p+\frac{1}{2}}Q_{-\frac{1}{2}}, Q_{-p+\frac{3}{2}}Q_{-\frac{3}{2}}, \cdots,Q_{-p+\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}Q_{-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}, \\ \nonumber &&M_{-1}Q_{-p+\frac{3}{2}}Q_{-\frac{1}{2}}, M_{-2}Q_{-p+\frac{5}{2}}Q_{-\frac{1}{2}}, M_{-1}^2Q_{-p+\frac{5}{2}}Q_{-\frac{1}{2}},\cdots, M_{-p+2}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}, \cdots, M_{-1}^{p-2}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}},\\ &&M_{-(p-1)}M_{-1}, M_{-(p-2)}M_{-2}, M_{-(p-2)}M_{-1}^2,\cdots, M_{-1}^{p}. \label{singu-order} \end{eqnarray} The coefficients of the above monomials in $u'$ are determined by some elements in $U(\mathfrak{g}_{+})_{-p}$ which act on $u'$ to give $0$. Namely, we need to consider the linear equations \begin{equation}xu'=0\label{singular-equation}\end{equation} for some particular $x\in U(\frak g_+)_{-p}$. We choose $x$ from \eqref{singu-order} by changing $L_{-p}$ to $L_p$, $L_{-i}$ to $M_i$, $M_{-i}$ to $L_i, i=1, \cdots, p-1$, $Q_{-r}$ to $Q_r$, and arrange them according to the original ordering as follows: $L_{p}$, $L_{1}M_{p-1}$, $L_{2}M_{p-2}$, $L_{1}^2M_{p-2}$, $\cdots$, $L_{p-1}M_{1}$, $\cdots$, $L_{1}^{p-1}M_{1}$, $Q_{\frac{1}{2}}Q_{p-\frac{1}{2}}$, $Q_{\frac{3}{2}}Q_{p-\frac{3}{2}}$, $\cdots$, $Q_{\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}Q_{p-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}$, $L_1Q_{\frac{1}{2}}Q_{p-\frac{3}{2}},\cdots, L_{1}^{p-2}Q_{\frac{1}{2}}Q_{\frac{3}{2}}$, $L_{1}L_{p-1}$, $L_{2}L_{p-2}$, $L_{1}^2L_{p-2}$, $\cdots$, $L_{1}^p$. We consider the following Table 1.
\setlength{\belowcaptionskip}{-10pt} \begin{table}[htbp]\label{table1} \centering\caption{The matrix $A_{p, 1}$} \begin{eqnarray*}\label{sub-table} \fontsize{4.98pt}{\baselineskip}\selectfont \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline &$L_{-p}{\bf 1}$ & $M_{-1} L_{-(p-1)}{\bf 1}$ & $\cdots$ & $M_{-1}^{p-1}L_{-1}{\bf 1}$ &$Q_{-p+\frac{1}{2}}Q_{-\frac{1}{2}}{\bf 1}$ & $\cdots$ &$Q_{-p+\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}Q_{-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}{\bf 1}$ & $M_{-1}Q_{-p+\frac{3}{2}}Q_{-\frac{1}{2}}{\bf 1}$ & $\cdots$ & $M_{-1}^{p-2}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}{\bf 1}$ & $M_{-(p-1)}M_{-1}{\bf 1}$ & $\cdots$ & $M_{-1}^p{\bf 1}$ \\ \hline $L_{p}$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$ & $0$ & $\cdots$ & $0$ & $0$ & $0$ & $0$ \\ \hline $L_{1}M_{p-1}$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}0$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}0$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ \hline $\vdots$ & $ \cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$ & $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ \hline $L_{1}^{p-1}M_{1}$ & $ \cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}0$& $\cellcolor{gray!50}\cdots$& $\cellcolor{gray!50}0$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ \hline $Q_{p-\frac{1}{2}}Q_{\frac{1}{2}}$ & $ \cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\cdots$& $\cellcolor{gray!50}\star$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ 
\hline $\vdots$ & $ \cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$ & $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$& $\cellcolor{gray!50}\vdots$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ \hline $Q_{p-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}Q_{\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}$ & $ \cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\cdots$& $\cellcolor{gray!50}\star$ & $0$& $\cdots$ & $0$& $0$& $0$& $0$ \\ \hline $L_1Q_{p-\frac{3}{2}}Q_{\frac{1}{2}}$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50} 0$ & $\cellcolor{gray!50} 0$ & $0$ & $0$ & $0$ \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\cellcolor{gray!50}\star$& $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50} 0$ & $0$ & $0$ & $0$ \\ \hline $L_1^{p-2}Q_{\frac{3}{2}}Q_{\frac{1}{2}}$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ \\ \hline $L_{p-1}L_1$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50} 0$& $\cellcolor{gray!50} 0$ \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\star$ & $\star$ & $\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50} 0$ \\ \hline $L_1^p$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ \\ \hline \end{tabular}, \end{eqnarray*} \end{table} The $(i, j)$-entry in Table 1 is the coefficient of ${\bf 1}$ produced by the $i$-th 
operator from Column $0$ acting on the monomial of the $j$-th element on Row $0$. Now we shall investigate the coefficient matrix $A_{p,1}$ of the linear equations \eqref{singular-equation} by using Table 1. This matrix $A_{p,1}$ is a lower triangular block matrix. Note that the lower two shaded submatrices in Table 1 are nonsingular lower triangular matrices (with nonzero diagonal entries). So we need only to consider the upper-left shaded submatrix which will be denoted by $A_p$. In addition, the operators $L_{p}$, $L_{1}M_{p-1}$, $\cdots$, $Q_{\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}Q_{p-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}$ from Column $0$ (that is, those not of the form $L_{\la}Q_{\mu-\frac12}$ with $\ell(\la)\ge 2$) act trivially on the monomials $M_{-\la}Q_{-\mu+\frac12}$ with $\ell(\la)\ge 2$. In order to calculate the rank of the matrix $A_p$, we need only consider the submatrix $B_p$ of $A_p$ given in Table 2. Actually, after row and column operations, $A_p$ can be arranged as a lower block-triangular matrix with $B_p$ as the upper-left block, so that corank$(A_p)=$corank$(B_p)$. It is clear that corank$(B_p)=0$ or $1$.
\setlength{\belowcaptionskip}{-10pt} \begin{table}[htbp] \centering\caption{The matrix $B_p$}\label{table 2} \begin{eqnarray*}\tiny\label{sub-table} \begin{tabular} {|c|c|c|c|c|c|c|c|c|}\hline & $L_{-p}{\bf 1}$ & $M_{-1} L_{-(p-1)}{\bf 1}$ & $M_{-2} L_{-(p-2)}{\bf 1}$ & $\cdots$ & $M_{-(p-1)} L_{-1}{\bf 1}$ & $Q_{-p+\frac{1}{2}}Q_{-\frac{1}{2}}{\bf 1}$ & $\cdots$ & $Q_{-p+\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}Q_{-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}{\bf 1}$ \\ \hline $L_{p}$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\cdots$ & $\cellcolor{gray!50}\star$ \\ \hline $L_{1}M_{p-1}$ & $\cellcolor{gray!50}\star$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline $L_{2}M_{p-2}$ & $\cellcolor{gray!50}\star$ & $0$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline $\vdots$ & $\cellcolor{gray!50}\vdots$ & $0$ & $0$ & $\cellcolor{gray!50}\ddots$ & $0$ & $0$ & $0$ & $0$ \\ \hline $L_{p-1}M_{1}$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ \\ \hline $Q_{p-\frac{1}{2}}Q_{\frac{1}{2}}$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ & $0$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ \\ \hline $\vdots$ & $\cellcolor{gray!50}\vdots$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\cellcolor{gray!50}\ddots$ & $0$ \\\hline $Q_{p-\lfloor \frac{p}{2}\rfloor +\frac{1}{2}}Q_{\lfloor \frac{p}{2}\rfloor -\frac{1}{2}}$ & $\cellcolor{gray!50}\star$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\cellcolor{gray!50}\star$ \\ \hline \end{tabular}. 
\end{eqnarray*} \end{table} From the proof of Theorem \ref{necessity} with $r=1$, we know that the matrix $ B_p$ is of corank $1$ if and only if $h_L=h_{p,1}$, that is, the matrix $A_{p,1}$ is of corank $1$ if and only if $h_L=h_{p,1}$, in which case there is only one singular vector $u'$ in $L'(c_L,c_M,h_L,h_M)_p$, up to a scalar multiple. \end{proof} From th proof of Theorem \ref{necessity} we see that that \begin{equation*}\label{T-exp'} {\rm T}=L_{-p}+\sum_{i=1}^{p-1} \frac{12}{i(p-i)c_M} M_{-p+i}L_{-i}-\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor }\frac{6}{(p-2k+1)c_M}Q_{i-p-\frac{1}{2}}Q_{-i+\frac{1}{2}}+\text{some other terms}. \end{equation*} We further have the following formula for {\rm T}. \begin{cor}\label{subsingular-T} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$, $c_M\neq 0$ and $h_L=h_{p, 1}$. Let $k_i=\frac{12}{i(p-i)c_M},\ i=1, 2,\cdots, p-1$. Then the subsingular vector ${\rm T}\1$ can be determined as follows: \begin{equation}\label{T-exp} {\rm T}=L_{-p}+\sum_{i=1}^{p-1}g_{p-i}(M)L_{-i}+u_p(M, Q), \end{equation} where \begin{eqnarray}\label{T-exp-ki} g_{1}(M)=k_1M_{-1}, g_{i}(M)=k_iM_{-i}+k_i\sum_{j=1}^{i-1}\left(1-\frac{j}{2p-i}\right)g_{j}(M)M_{-(i-j)}, i=2, \cdots, p-1. \end{eqnarray} and \begin{eqnarray*}\label{T-exp-u_p} u_p(M, Q)&=&\sum_{\nu\in\mathcal P(p), \ell(\mu)\ge 2} d_\mu M_{-\mu} +\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor }d_iQ_{i-p-\frac{1}{2}}Q_{-i+\frac{1}{2}} +\sum_{\stackrel{\frac p2\ne l_1>l_2\ge 1}{\mu\in\mathcal P(p-l_1-l_2+1)}}d_{\mu}^{l_1, l_2}Q_{-l_1+\frac{1}{2}}Q_{-l_2+\frac12}M_{-\mu} \end{eqnarray*} with unique coefficients $d_\mu, d_{\mu}^{l_1, l_2}, d_i\in\mathbb C$. 
\end{cor} \begin{proof} For $i=p-1, p-2, \cdots, 1$, using \eqref{subsingular2} we deduce that $$ 0= M_{p-i}{\rm T}\1=[M_{p-i},L_{-p}]\1+\sum_{j=1}^{p-1}g_{p-j}(M)[M_{p-i},L_{-j}]\1 =[M_{p-i},L_{-p}]\1+\sum_{j=p-i}^{p-1}g_{p-j}(M)[M_{p-i},L_{-j}]\1\\ $$ $$\aligned=&(2p-i)M_{-i}\1+(2p-i-1)g_1(M)M_{-i+1}\1+\cdots+ g_{i}(M)\left( 2(p-i)M_0+\frac{(p-i)^3-(p-i)}{12}c_M\right)\1. \endaligned$$ Applying $h_M=-\frac{p^2-1}{24}c_M$ we deduce that $$(2p-i)M_{-i} +(2p-i-1)g_1(M)M_{-i+1} +\cdots- g_{i}(M)\frac{i(p-i)(2p-i)}{12}c_M=0,$$ which gives \begin{eqnarray*}\label{giw} g_{1}(M)=k_1M_{-1}, g_{i}(M)=k_iM_{-i}+k_i\sum_{j=1}^{i-1}\left(1-\frac{j}{2p-i}\right)g_{j}(M)M_{-(i-j)}, i=2, \cdots, p-1. \end{eqnarray*} So \eqref{T-exp-ki} follows by induction. By actions of $Q_{i-\frac12}, i=p, p-1, \cdots, 1$ on \eqref{T-exp}, we can get all $d_i$ by induction. Meanwhile, by actions of $L_i, i=p-1, \cdots, 1$ on \eqref{T-exp}, we can get all $d_{\mu}^{l_1, l_2}, d_\mu$ by induction. \end{proof} \begin{exa} (1) $p=4, h_M=-\frac{5}{8}c_M, h_L=-\frac{5}{8}c_L+\frac{153}{16}: $ \begin{eqnarray*}{\rm T}&=&L_{-4}+\frac{4}{c_M}M_{-1}L_{-3}+\left(\frac{3}{c_M}M_{-2}+\frac{10}{c_M^{2}}M_{-1}^{2}\right)L_{-2} +\left(\frac{4}{c_M}M_{-3}+\frac{20}{c_M^2}M_{-2}M_{-1}+\frac{24}{c_M^3}M_{-1}^{3}\right)L_{-1}\\ &&-\frac{2}{c_M}Q_{-\frac{7}{2}}Q_{-\frac{1}{2}}-\frac{6}{c_M}Q_{-\frac{5}{2}}Q_{-\frac{3}{2}} -\frac{16}{c_M^2}M_{-1}Q_{-\frac{5}{2}}Q_{-\frac{1}{2}}+\frac{6}{c_M^2}M_{-2}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}} -\frac{12}{c_M^3}M_{-1}^2Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}\\ &&+\left(\frac{66}{c_M^2}-\frac{4c_L}{c_M^2}\right)M_{-3}M_{-1}+\left(\frac{51}{4c_M^2}-\frac{3c_L}{2c_M^2}\right)M_{-2}^2 +\left(\frac{342}{c_M^3}-\frac{20c_L}{c_M^3}\right)M_{-2}M_{-1}^2+\left(\frac{321}{c_M^4}-\frac{18c_L}{c_M^4}\right)M_{-1}^4.
\end{eqnarray*} {\small (2) $p=5, h_M=-c_M, h_L=-c_L+\frac{35}{2}$: \begin{eqnarray*} {\rm T}\hskip -7pt&=&\hskip -7pt L_{-5}+\frac{3}{c_M}M_{-1}L_{-4}+\left(\frac{2}{c_M}M_{-2}+\frac{21}{4c_M^{2}}M_{-1}^{2}\right)L_{-3} +\left(\frac{2}{c_M}M_{-3}+\frac{8}{c_M^2}M_{-2}M_{-1}+\frac{15}{2c_M^3}M_{-1}^3\right)L_{-2}\\ &&+\left(\frac{3}{c_M}M_{-4}+\frac{21}{2c_M^{2}}M_{-3}M_{-1}+\frac{4}{c_M^{2}}M_{-2}^2 +\frac{45}{2c_M^{3}}M_{-2}M_{-1}^2+\frac{45}{4c_M^{4}}M_{-1}^4\right)L_{-1}\\ &&-\frac{3}{2c_M}Q_{-\frac{9}{2}}Q_{-\frac{1}{2}} -\frac{3}{c_M}Q_{-\frac{7}{2}}Q_{-\frac{3}{2}}-\frac{27}{4c_M^{2}}M_{-1}Q_{-\frac{7}{2}}Q_{-\frac{1}{2}} +\frac{3}{2c_M^{2}}M_{-3}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}\\ &&-\frac{3}{c_M^{3}}M_{-2}M_{-1}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}+\frac{9}{4c_M^{4}}M_{-1}^3Q_{-\frac{3}{2}}Q_{-\frac{1}{2}} +\left(\frac{105}{2c_M^{2}}-\frac{3c_L}{c_M^{2}}\right)M_{-4}M_{-1}+\left(\frac{31}{c_M^{2}}-\frac{2c_L}{c_M^{2}}\right)M_{-3}M_{-2}\\ &&+\left(\frac{369}{2c_M^{3}}-\frac{21c_L}{2c_M^{3}}\right)M_{-3}M_{-1}^2 +\left(\frac{148}{c_M^{3}}-\frac{8c_L}{c_M^{3}}\right)M_{-2}^2M_{-1} +\left(\frac{1653}{4c_M^{4}}-\frac{45c_L}{2c_M^{4}}\right)M_{-2}M_{-1}^3 +\left(\frac{675}{4c_M^{5}}-\frac{9c_L}{c_M^{5}}\right)M_{-1}^5. \end{eqnarray*} } \end{exa} Note that ${\rm T}$ is a well-defined element of $U(\frak{g}_-)$; in what follows we will use ${\rm T}$ without assuming the condition that $h_L=h_{p, 1}$. Now we provide some key properties of the operators ${\rm S}, {\rm R}$ and ${\rm T}$ in $L'(c_L,c_M,h_L,h_M) $ without assuming that $h_L=h_{p, 1}$. \begin{lem}\label{ST} Let $p$ be even and ${\rm S}, {\rm T}$ be defined as above. In $L'(c_L,c_M,h_L,h_M) $, we have that $[{\rm S},{\rm T}]\1=0$, and consequently, ${\rm S}{\rm T}^i\1=0$ for any $i\in\mathbb Z_+$. \end{lem} \begin{proof} Note that $p>1$. We claim that if $[{\rm S}, {\rm T}]\1\ne 0$, then $[{\rm S}, {\rm T}]\1$ is a subsingular vector in $V(c_L,c_M, h_L,h_M)$ in the case of $h_L=h_{p, 1}$.
In fact, using $[M_0,[{\rm S}, {\rm T}]]\1=0$ and \eqref{W0T} it is easy to see that $[{\rm S},{\rm T}]\1$ is a $\mathfrak{g}_0$-eigenvector. For any $x\in\frak g_+$, $$x[{\rm S},{\rm T}]\1=x{\rm S}{\rm T}\1 =[x, {\rm S}]{\rm T}\1, \text{ in } L'(c_L,c_M, h_{p, 1},h_M).$$ By Lemma \ref{l3.15'}, we get $[x,{\rm S}]{\rm T}\1=0$. So the claim holds. However, $[{\rm S},{\rm T}]\1$ is not a subsingular vector in $V(c_L,c_M, h_{p, 1},h_M)_{2p}$, since ${\rm hm}([{\rm S},{\rm T}]\1)\neq L_{-p}^{2}{\bf 1}$, by Lemma \ref{hmsubsingular}. So $[{\rm S}, {\rm T}]\1=0$. This means that $[{\rm S}, {\rm T}]= y{\rm S}$ for some $y\in U(\frak g_-)$ since $p$ is even. So ${\rm S}{\rm T}\1=0$ in $L'(c_L,c_M,h_L,h_M)$ for arbitrary $h_L$. Moreover, $${\rm S}{\rm T}^2\1=[{\rm S},{\rm T}]{\rm T}\1+{\rm T}{\rm S}{\rm T}\1=y{\rm S}{\rm T}\1+{\rm T}{\rm S}{\rm T}\1=0.$$ By induction we can get ${\rm S}{\rm T}^i\1=0$ for any $i\in\mathbb Z_+$. \end{proof} \begin{lem}\label{RTcomm} If $p$ is odd, in $L'(c_L,c_M,h_L,h_M) $, we have $[{\rm R}, {\rm T}]\1=0$, and ${\rm R}{\rm T}^i\1=0$ for any $i\in\mathbb Z_+$. Consequently, ${\rm S}{\rm T}^i\1=0$ for any $i\in\mathbb Z_+$. \end{lem} \begin{proof} The proof is essentially the same as that of Lemma \ref{ST}; the only difference is that we shall use Lemma \ref{l3.15} here instead of Lemma \ref{l3.15'}. \end{proof} \subsection{Sufficient condition for the existence of subsingular vectors} For any $k\in\mathbb Z_+$, set \begin{equation}\mathcal M_-(p)={\rm span}_{\mathbb C}\{M_{-1}, M_{-2}, \cdots, M_{-p+1}\}\label{M_-(p)},\end{equation} \begin{equation} U^{(k)}:={\rm span}_{\mathbb C}\Big \{x_{i_1}x_{i_2}\cdots x_{i_{k}}\mid x_{i_1}, x_{i_2}, \cdots, x_{i_{k}}\in U(\mathcal M_-(p))\cup \{{\rm T}\} \Big\},\label{UTk}\end{equation} (each monomial contains at most $k$ factors equal to ${\rm T}$) and $U^{(0)}=\mathbb C$.
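Unpacking the definition \eqref{UTk}: since $U(\mathcal M_-(p))$ contains $1$ and is closed under multiplication, merging adjacent factors from $U(\mathcal M_-(p))$ shows that every monomial spanning $U^{(k)}$ can be written in the form \begin{eqnarray*} f_0{\rm T}f_1{\rm T}\cdots {\rm T}f_j,\qquad 0\le j\le k,\ f_0, f_1, \cdots, f_j\in U(\mathcal M_-(p)). \end{eqnarray*}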
Clearly, $$U^{(0)}\subset U^{(1)}\subset \cdots\subset U^{(k)}\subset\cdots.$$ We first establish, by direct calculation, the following lemmas on the existence of singular vectors in $L'(c_L,c_M,h_L,h_M)_{rp}$. \begin{lem} \label{g+T} {\rm (a)} For any $1\le i\le p$, we have \begin{eqnarray*} [L_i,{\rm T}]&=&a_0(M)\beta(L_0,{\bf c}_L)+b_0\left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right)+ c_0{\rm R}\\ &&+\sum_{j=1}^{i-1}a_j(M)L_{j}+\sum_{j=1}^{p-1}b_jM_{j}+\sum_{j=1}^{\lfloor \frac{p}{2}\rfloor}c_jQ_{-j+\frac12}, \end{eqnarray*} where $b_j\in U(\frak g_-)$, $c_j\in U(\mathcal M_-+\mathcal Q_-)$, $\beta(h_{p,1},c_L)=0$ and all $a_j(M)\in U(\mathcal M_-(p))$. Moreover, $[L_i,{\rm T}]\1\in U(\mathcal M_-(p))\1$ in $L'(c_L,c_M,h_L,h_M)$. {\rm (b)} For any $x\in \mathcal M_+$, we have \begin{eqnarray*} [x,{\rm T}]\subset {U({\mathfrak{g}}_{-})}\left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right)+{U({\mathcal M}_{-})}({\mathcal M_+}). \end{eqnarray*} Moreover, $[x, {\rm T}]\1=0$ in $L'(c_L,c_M,h_L,h_M)$. {\rm (c)} For any $x\in \mathcal Q_+$, we have \begin{eqnarray*} [x,{\rm T}]\subset {U({\mathfrak{g}}_{-})}\left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right)+ U(\mathcal{M}_-+\mathcal Q_-){\rm R}+{U({\mathfrak{g}}_{-})}({\mathcal M_++\mathcal Q_+}). \end{eqnarray*} Moreover, $[x, {\rm T}]\1=0$ in $L'(c_L,c_M,h_L,h_M)$. \end{lem} \proof (a) We know that $[L_i,{\rm T}]\1=L_i{\rm T}\1=0$ in $L'(c_L,c_M, h_L,h_M)$ in the case of $h_L=h_{p, 1}$. Then the formula follows. The proofs for (b) and (c) are similar to that of (a). \qed \begin{lem} \label{W0Tk} For any $k\in\mathbb Z_+$, in $L'(c_L,c_M,h_L,h_M)$ with $\phi(p)=0$ we have {\rm (a)} $M_{0}{\rm T}^{k}{\bf 1}=h_{M}{\rm T}^{k}{\bf 1}$, and $\left(M_{0}+\frac1{24}(p^2-1){\bf c}_M\right){\rm T}^{k}{\bf 1}=0$. {\rm (b)} For $y={\rm S}, {\rm R}, M_i$ or $Q_{i-\frac12}$ with $k, i\in\mathbb Z_+$, we have $yU^{(k)}\1=0$, where $U^{(k)}$ is defined in \eqref{UTk}.
\end{lem} \proof (a) By \eqref{W0T}, we know that $M_{0}{\rm T}{\bf 1}=h_M{\rm T}{\bf 1}$ in $L'(c_L,c_M,h_L,h_M)$. By induction on $k$, using Lemmas \ref{ST} and \ref{RTcomm}, we get $$M_{0}{\rm T}^{k}{\bf 1}=[M_{0},{\rm T}]{\rm T}^{k-1}{\bf 1}+{\rm T}M_{0}{\rm T}^{k-1}{\bf 1}=p{\rm S}{\rm T}^{k-1}{\bf 1}+h_M{\rm T}^{k}{\bf 1}=h_M{\rm T}^{k}{\bf 1}.$$ The rest of (a) is clear. (b) From the proofs of Lemmas \ref{ST} and \ref{RTcomm}, we see that ${\rm R}{\rm T}, {\rm S}{\rm T}\in U(\frak g_-){\rm R}+U(\frak g_-){\rm S}$. Using these facts we deduce that ${\rm R}U^{(k)}\1={\rm S}U^{(k)}\1=0$. By Lemma \ref{g+T} (b) and (c) we have $M_i{\rm T}\1=0$ and $Q_{i-\frac12} {\rm T}\1=0$ in $L'(c_L,c_M,h_L,h_M)$ (not assuming $h_L=h_{p,1}$). Consequently, $M_if_1{\rm T}f_2\1=Q_{i-\frac12}f_1{\rm T}f_2\1=0$ for any $f_1,f_2\in U(\mathcal{M}_-)$. The statements follow by induction on $k\in\mathbb{Z}_+$. \qed \begin{lem} \label{L0Tk} Let $k\in \mathbb N$. In $L'(c_L,c_M,h_L,h_M)$ with $\phi(p)=0$ we have {\rm (a)} $L_{0}{\rm T}^{k}{\bf 1}=(h_{L}+kp){\rm T}^{k}{\bf 1}$. {\rm (b)} For any $L_i, i\in\mathbb Z_+$, we have $L_i{\rm T}^{k+1}\1\in U^{(k)}\1.$ \end{lem} \begin{proof} (a) follows from the fact that $[L_0, {\rm T}]=p{\rm T}$. (b) follows from Lemma \ref{g+T} and induction on $k$. \end{proof} \begin{lem}\label{LpT} {\rm (a)} In $U(\frak g)$, we have $$ [L_{p},{\rm T}] =\alpha(L_0, M_0, {\bf c}_L, {\bf c}_M)+\sum_{i=1}^{p-1}a_i(M)L_{p-i} +\sum_{i>0}b_iM_i+\sum_{i>0}c_iQ_{i-\frac{1}{2}}, $$ where \begin{eqnarray*} \alpha(L_0, M_0, {\bf c}_L, {\bf c}_M)&=&2p\left(L_0+\frac{p^2-1}{24}{\bf c}_L\right)+\sum_{i=1}^{p-1}\frac{24(2p-i)}{c_M(p-i)}\left(M_0+\frac{i^2-1}{24}{\bf c}_M\right)\\ &&-\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor }\frac{12(\frac{3}{2}p+\frac{1}{2}-i)}{c_M(p-2i+1)}\left(M_0+\frac{i^2-i}{6}{\bf c}_M\right), \end{eqnarray*} and $a_i(M)\in U(\mathcal M_-(p))$, $b_i\in U(\frak g_-)$, $c_i\in U(\mathcal M_-+\mathcal Q_-)$.
{\rm (b)} For $k\in\mathbb Z_+$, $$L_p{\rm T}^k\1-2kp(h_L-h_{p, k}){\rm T}^{k-1}\1\in U^{(k-2)}\1.$$ {\rm (c)} Let $k\in\mathbb N$, then $$L_pU^{(k+1)}\1\subset U^{(k)}\1.$$ \end{lem} \begin{proof} (a) From (\ref{T-exp'}) we see that \begin{eqnarray*} [L_{p},{\rm T}]&=& \left[L_{p},L_{-p}+\sum_{i=1}^{p-1} \frac{12}{i(p-i)c_M} M_{-p+i}L_{-i}-\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor }\frac{6}{(p-2i+1)c_M}Q_{i-p-\frac{1}{2}}Q_{-i+\frac{1}{2}}\right]\\ &&+\sum_{i=1}^{p-1}a_i(M)L_{p-i}+\sum_{i>0}b_iM_i +\sum_{i>0}c_iQ_{i-\frac{1}{2}}\\ &=&2pL_0+\frac{p^3-p}{12}{\bf c}_L +\sum_{i=1}^{p-1}\frac{24(2p-i)}{c_M(p-i)}\left(M_0+\frac{i^2-1}{24}{\bf c}_M\right)\\ &&-\sum_{i=1}^{\lfloor \frac{p}{2}\rfloor }\frac{12(\frac{3}{2}p+\frac{1}{2}-i)}{c_M(p-2i+1)}\left(M_0+\frac{i^2-i}{6}{\bf c}_M\right)\\ &&+\sum_{i=1}^{p-1}a_i(M)L_{p-i}+\sum_{i>0}b_iM_i +\sum_{i>0}c_iQ_{i-\frac{1}{2}} \end{eqnarray*} for some $a_i(M)\in U(\mathcal M_-(p))$, $b_i\in U(\frak g_-), c_i\in U(\mathcal M_-+\mathcal Q_-)$. (b) Using (a), Lemma \ref{L0Tk} (b) and Lemma \ref{W0Tk} (b), we can prove (b) by induction on $k$, where $\alpha(L_0, M_0, {\bf c}_L, {\bf c}_M)\1$ is calculated as \eqref{e3.401} in the proof of Theorem \ref{necessity}. (c) follows from (a), (b) and direct calculation, using induction on $k$. \end{proof} By Lemma \ref{LpT} (c) and Lemma \ref{W0Tk} (b), we obtain the following corollary. \begin{cor}\label{LpUk} For any $n, k\in\mathbb N$ with $n>k\ge0$, we have $ L_p^nU^{(k)}\1=0$. \end{cor} \begin{lemma}\label{lprtr} For $k\in\mathbb Z_+$, $L_{p}^{k}{\rm T}^{k}{\bf 1}=(2p)^kk!\prod_{i=1}^{k}(h_L-h_{p,i}){\bf 1}$ in $L'(c_L,c_M,h_L,h_M)$. \end{lemma}\proof Using induction on $k$ we obtain this result by Lemma \ref{LpT} and Corollary \ref{LpUk}. \qed We now give the main theorem on subsingular vectors. \begin{theo}\label{main3} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ for some $p\in \mathbb Z_+$ with $c_M\ne0$.
Then there exists a singular vector in $L'(c_L,c_M,h_L,h_M)_n$ for $n\in\frac12\mathbb Z_+$ if and only if $n=rp\in\mathbb Z_+$ for some $r\in\mathbb Z_+$ and $h_L=h_{p,r}$. Up to a scalar multiple, the only singular vector ${\rm T}_{p, r}\1\in L'(c_L,c_M,h_L,h_M)_{rp}$ can be written as \begin{eqnarray}\label{u'pr} {\rm T}_{p, r}\1=\left({\rm T}^r+v_1{\rm T}^{r-1}+\cdots +v_{r-1}{\rm T}+v_r\right)\1, \end{eqnarray} where $v_i\in U(\mathcal M_-)_{ip}$ does not involve $M_{-p}$. \end{theo} \proof The uniqueness of the singular vector ${\rm T}_{p, r}\1\in L'(c_L,c_M,h_L,h_M)_{rp}$ is guaranteed by Theorem \ref{necessity}. We need only to show the existence of ${\rm T}_{p, r}\1$. The case of $r=1$ follows from Theorem \ref{subsingular}. Let $r>1$. Assume that \begin{eqnarray}\label{sub-gen} {\rm T}_{p,r}={\rm T}^r+v_1{\rm T}^{r-1}+v_2{\rm T}^{r-2}+\cdots +v_{r-1}{\rm T}+v_r, \end{eqnarray} where $v_i\in U(\mathcal M_-)_{ip}$. We order all the possible summands of ${\rm T}_{p,r}$: \begin{eqnarray}\label{sub-term} {\rm T}^r, M_{-(p-1)}M_{-1}{\rm T}^{r-1}, \cdots, M_{-1}^p{\rm T}^{r-1}, M_{-2p}{\rm T}^{r-2},\cdots,M_{-1}^{2p}{\rm T}^{r-2}, \cdots, M_{-rp},\cdots, M_{-1}^{rp}, \end{eqnarray} where the summands above do not involve $M_{-p}$ as a factor. Note that ${\rm T}_{p, r}\1$ is a linear combination of the terms in (\ref{sub-term}). We look for a solution for the coefficients of the above summands in ${\rm T}_{p, r}\1$. We only need to consider the action of ${\mathfrak{vir}}_+$. By the PBW theorem, we consider the corresponding operators \begin{eqnarray}\label{operators} L_p^r, L_{p-1}L_1L_p^{r-1}, L_1^pL_p^{r-1},L_{2p}L_p^{r-2}, L_1^{2p}L_p^{r-2}, \cdots, L_{rp},\cdots, L_1^{rp}. \end{eqnarray} We get the linear equations \begin{equation}\label{xTpr=0} x{\rm T}_{p, r}\1=0 \ \mbox{in}\ L'(c_L,c_M,h_L,h_M) \end{equation} for all $x$ in \eqref{operators}.
The coefficient matrix of the system of linear equations (\ref{xTpr=0}) is lower triangular, with $(1,1)$-entry $(2p)^rr!\prod_{i=1}^{r}(h_L-h_{p,i})$ and all other diagonal entries non-zero. By Lemma \ref{lprtr}, we deduce that ${\rm T}_{p, r}\1$ is the only singular vector up to a scalar multiple in $L'(c_L,c_M,h_L,h_M)$ if and only if $h_L=h_{p,r}$ for some $r\in\mathbb Z_+$. \qed \begin{exa}(cf. \cite{R}) Let $p=1,h_M=0$. Then \begin{eqnarray*} &(1)&h_L=-\frac{1}{2}: {\rm T}_{1,2}=L_{-1}^2+\frac{6}{c_M}M_{-2};\\ &(2)&h_L=-1: {\rm T}_{1,3}=L_{-1}^3+\frac{24}{c_M}M_{-2}L_{-1}+\frac{12}{c_M}M_{-3};\\ &(3)&h_L=-\frac{3}{2}: {\rm T}_{1,4}=L_{-1}^4+\frac{60}{c_M}M_{-2}L_{-1}^2+\frac{60}{c_M}M_{-3}L_{-1}+\frac{36}{c_M}M_{-4}+\frac{108}{c_M^2}M_{-2}^2. \end{eqnarray*} \end{exa} \begin{exa} $p=2,r=2, h_M=-\frac{1}{8}c_M, h_L=h_{2,2}=-\frac{1}{8}c_L+\frac{5}{16}:$ \small{ \begin{eqnarray*} {\rm T}_{2,2}&=&L_{-2}^2+\frac{12}{c_M}M_{-1}L_{-3}+\frac{24}{c_M}M_{-1}L_{-2}L_{-1}+\frac{144}{c_M^2}M_{-1}^2L_{-1}^2-\frac{12}{c_M}M_{-3}L_{-1}\\ &&-\frac{12}{c_M}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}L_{-2}+\left(\frac{174}{c_M^2}-\frac{12c_L}{c_M^2}\right)M_{-1}^2L_{-2}-\frac{144}{c_M^2}M_{-1}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}L_{-1}+\left(\frac{2088}{c_M^3}-\frac{144c_L}{c_M^3}\right)M_{-1}^3L_{-1}\\ &&-\frac{3}{c_M}Q_{-\frac{7}{2}}Q_{-\frac{1}{2}}-\frac{3}{c_M}Q_{-\frac{5}{2}}Q_{-\frac{3}{2}}-\frac{72}{c_M^2}M_{-1}Q_{-\frac{5}{2}}Q_{-\frac{1}{2}}+\left(\frac{72c_L}{c_M^3}-\frac{1476}{c_M^3}\right)M_{-1}^2Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}\\ &&+\frac{12}{c_M}M_{-4}+\left(\frac{12c_L}{c_M^2}+\frac{6}{c_M^2}\right)M_{-3}M_{-1}+\left(\frac{36c_L^2}{c_M^4}-\frac{1044c_L}{c_M^4}+\frac{2385}{c_M^4}\right)M_{-1}^4. \end{eqnarray*}} Note that for $p=2,r=1, h_M=-\frac{1}{8}c_M, h_L=h_{2,1}=-\frac{1}{8}c_L+\frac{21}{16}:$ \begin{eqnarray*} {\rm T}=L_{-2}+\frac{12}{c_M}M_{-1}L_{-1}+\left(\frac{87}{c_M^2}-\frac{6c_L}{c_M^2}\right)M_{-1}^2-\frac{6}{c_M}Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}.
\end{eqnarray*} By direct calculation, we get \begin{eqnarray*} {\rm T}_{2,2}={\rm T}^2+\frac{6}{c_M}M_{-4}+\frac{216}{c_M^2}M_{-3}M_{-1}-\frac{5184}{c_M^4}M_{-1}^4. \end{eqnarray*} \end{exa} In the above arguments, from Lemma \ref{ll4.1} through Theorem \ref{main3}, by deleting the parts (or terms) involving $\mathcal Q$ we derive the following results about the subalgebra $W(2, 2)$ of $\frak g$: \begin{cor} \label{w22-sub} Let $(c_L,c_M,h_L,h_M)\in\bC^4$. The Verma module $V_{W(2,2)}(h_L, h_M, c_L, c_M)$ over $W(2,2)$ has a subsingular vector if and only if $\phi(p)=0$ for some $p\in\mathbb Z_+$, and \begin{eqnarray*} h_L=h_{p, r}'=-\frac{p^2-1}{24}c_L+\frac{(13p+1)(p-1)}{12}+\frac{(1-r)p}{2}, \end{eqnarray*} for some $r\in\mathbb Z_+$. \end{cor} \begin{rem} The value \( h_{p,r}' \) is obtained by omitting the final summand in equation (\ref{e3.401}). This result was first conjectured in \cite{R} and further discussed in \cite{JZ} with some new ideas. \end{rem} \begin{cor}\label{main2-w22} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $\phi(p)=0$ for some $p\in\mathbb Z_+$, and $h_L=h_{p, r}'$ for some $r\in\mathbb Z_+$. Then $$u'_{p,r}=\left({\rm T}^r+v_1{\rm T}^{r-1}+\cdots +v_{r-1}{\rm T}+v_r\right)\1$$ with $v_i\in U({\mathcal{M}}_-)_{ip}$ is the unique subsingular vector of the Verma module $V_{W(2,2)}(h_L, h_M, c_L, c_M)$, up to a scalar multiple, where \begin{equation}\label{T-exp-W22} {\rm T}=L_{-p}+\sum_{i=1}^{p-1}g_{p-i}(M)L_{-i}+\sum_{\nu\in\mathcal P(p), \ell(\nu)\ge 2} d_\nu M_{-\nu}, \end{equation} the $g_{i}(M)$ are given in \eqref{T-exp-ki}, and the $d_\nu$ can be determined as in Corollary \ref{subsingular-T} by the actions of $L_i, i=p-1, p-2, \cdots, 1$. \end{cor} \section{Characters of irreducible highest weight modules and composition series } In this section, we provide the maximal submodules of $V(c_L,c_M,h_L,h_M)$ and the character formula for irreducible highest weight modules.
We also derive the composition series (of infinite length) of $V(c_L,c_M,h_L,h_M)$. Again we fix $(c_L,c_M,h_L,h_M)\in\bC^4$, and assume that $\phi(p)=2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$. Let us first define atypical and typical Verma modules $V(c_L,c_M,h_L,h_M)$. \begin{defi} For $c_L,c_M\in\mathbb C$, let $$ {\mathcal {AT} }(c_L,c_M)= \left\{ \left(h_{p,r}, \frac{1-p^2}{24}c_M\right) \mid p,r \in \mathbb{Z}_+ \right\},$$ where $h_{p,r}$ is defined in (\ref{e3.37}). We say that the Verma module $V(c_L,c_M,h_L,h_M)$ is \textit{atypical} if $(h_L,h_M)\in \mathcal {AT}(c_L, c_M)$, and \textit{typical} otherwise (see \cite{AR2}). \end{defi} \begin{lem} \label{R-S-lemma} Let ${\rm T}_{p,r}$ be as defined in Theorem \ref{main3}. Then in $V(c_L,c_M,h_L,h_M)$, we have \begin{eqnarray} M_{(r-1)p}{\rm T}_{p,r}\1=r!p^r{\rm S}\1+\delta_{r,1}h_M{\rm T}_{p,r}\1; \label{MS} \\ Q_{(r-1)p+\frac{p}{2}}{\rm T}_{p,r}\1=r!p^r{\rm R}\1, \text{ \rm if $p$ is odd}.\label{QR} \end{eqnarray} \end{lem} \begin{proof} Let us first prove \eqref{MS}. This is clear for $r=1$ since ${\rm T}_{p, 1}={\rm T}$ and $[M_0, {\rm T}]=p{\rm S}$. Now we assume that $r>1$. By \eqref{u'pr}, we have $M_{(r-1)p}{\rm T}_{p,r}\1=M_{(r-1)p}{\rm T}^r\1+v_1M_{(r-1)p}{\rm T}^{r-1}\1$. By Lemma \ref{g+T} (b) and by induction on $k\ge 1$ we see that $$M_{kp}{\rm T}^{k}\1\in U(\mathcal M)\left(M_0+\frac1{24}(p^2-1)c_M\right)\1=0.$$ By induction on $k\ge1$ and using Lemma \ref{g+T} (b), we can prove that $M_{(k-1)p+j}{\rm T}^k\1= 0$ for any $j\in\mathbb Z_+$. Now by induction on $k\ge2$ we will prove that $M_{(k-1)p}{\rm T}^k\1= k!p^k{\rm S}\1$. This is clear for $k=2$ by direct computations.
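Indeed, for $k=2$ the direct computation runs as follows (as in the general computation below, only the $L$-part of ${\rm T}$ contributes):
$$\aligned M_{p}{\rm T}^2\1&=[M_{p}, {\rm T}]{\rm T}\1+{\rm T}M_{p}{\rm T}\1 =[M_p, L_{-p}]{\rm T}\1+\sum_{i=1}^{p-1}g_i(M)[M_p, L_{-p+i}]{\rm T}\1\\ &=p\left(2M_0+\frac{p^2-1}{12}{\bf c}_M\right){\rm T}\1+\sum_{i=1}^{p-1}(2p-i)g_i(M)M_i{\rm T}\1\\ &=p\left(2p{\rm S}\1+\phi(p){\rm T}\1\right)=2p^2{\rm S}\1=2!p^2{\rm S}\1, \endaligned$$
where we used $M_p{\rm T}\1=M_i{\rm T}\1=0$, $M_0{\rm T}\1=p{\rm S}\1+h_M{\rm T}\1$ and $\phi(p)=0$.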
We compute that $$\aligned M_{(k-1)p}{\rm T}^k\1=&[M_{(k-1)p}, {\rm T}]{\rm T}^{k-1}\1+{\rm T}M_{(k-1)p}{\rm T}^{k-1}\1=[M_{(k-1)p}, {\rm T}]{\rm T}^{k-1}\1\\ =&[M_{(k-1)p}, L_{-p}]{\rm T}^{k-1}\1+\sum_{i=1}^{p-1}g_i(M)[M_{(k-1)p}, L_{-p+i}]{\rm T}^{k-1}\1\\ =& (kp)M_{(k-2)p}{\rm T}^{k-1}\1+\sum_{i=1}^{p-1}g_i(M)M_{(k-2)p+i}{\rm T}^{k-1}\1\\ =&(kp)M_{(k-2)p}{\rm T}^{k-1}\1\\ =&k!p^k{\rm S}\1, \,\,\, (\text{induction used}).\endaligned$$ So \eqref{MS} holds. Now we prove \eqref{QR}. By induction on $k\in\mathbb Z_+$ and using Lemma \ref{g+T} (c), we can prove that $Q_{(k-1)p+j+\frac{p}{2}}{\rm T}^k\1=0$ for any $j\in \mathbb Z_+$. Now by induction on $k\ge1$ we will prove that $Q_{(k-1)p+\frac{p}{2}}{\rm T}^k\1= k!p^k{\rm R}\1$. This is clear for $k=1$ by direct computations. We compute that $$\aligned Q_{(k-1)p+\frac{p}{2}}{\rm T}^k\1=&[Q_{(k-1)p+\frac{p}{2}}, {\rm T}]{\rm T}^{k-1}\1+{\rm T}Q_{(k-1)p+\frac{p}{2}}{\rm T}^{k-1}\1\\ =&[Q_{(k-1)p+\frac{p}{2}}, {\rm T}]{\rm T}^{k-1}\1\\ =&kpQ_{(k-2)p+\frac{p}{2}}{\rm T}^{k-1}\1\\ =& k!p^k{\rm R}\1, \,\,\, (\text{induction used}).\endaligned$$ Then $Q_{(r-1)p+\frac{p}{2}}{\rm T}_{p,r}\1 =Q_{(r-1)p+\frac{p}{2}}{\rm T}^r\1=r!p^r{\rm R}\1.$ \end{proof} \subsection{Maximal submodules and characters} Now we are ready to present a couple of other main theorems in this paper. \begin{theo}\label{irreducibility} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$ and $(h_L,h_M)\not\in \mathcal{AT}(c_L, c_M)$ (typical case). Then $J(c_L,c_M,h_L,h_M)$, the maximal submodule of $V(c_L,c_M,h_L,h_M)$, is generated by $ {\rm S}\1 $ if $ p\in 2\mathbb Z_+$, by $ {\rm R}\1 $ if $p\in 2\mathbb Z_+-1 $, and the simple quotient $L(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)/J(c_L,c_M,h_L,h_M)$ has a basis ${\mathcal B}$ in (\ref{e4.1}) if $p\in 2\mathbb Z_+$; or the basis ${\mathcal B}'$ in (\ref{e4.2}) if $p\in 2\mathbb Z_+-1$. 
Moreover, $$ {\rm char}\, L(c_L,c_M,h_L,h_M)= q^{h_L}(1-q^{\frac{p}2})\left(1+\frac12(1+(-1)^p)q^{\frac p2}\right)\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. $$ \end{theo} \begin{proof} It follows from Theorems \ref{necessity} and \ref{main2}. \end{proof} \begin{theo}\label{irreducibility1} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$ and $(h_L,h_M)\in \mathcal{AT}(c_L, c_M)$ (atypical case). Then $J(c_L,c_M,h_L,h_M)=\langle {\rm T}_{p,r}\1\rangle$ is generated by $ {\rm T}_{p,r}\1$ defined in Section 4, and $L(c_L,c_M,h_L,h_M)=V(c_L,c_M,h_L,h_M)/J(c_L,c_M,h_L,h_M)$ has a basis \begin{equation}\label{B''} {\mathcal B_1}=\{M_{-\la}Q_{-\mu+\frac{1}{2}}L_{-\nu}{\bf 1}\mid \la,\nu\in \mathcal P, \mu\in\mathcal{SP} \ \mbox{not involving}\ M_{-p}, L_{-p}^n (n\geq r) \} \end{equation} if $p\in 2\mathbb Z_+$; or \begin{equation}\label{B''odd} {\mathcal B}_1'=\{M_{-\la}Q_{-\mu+\frac{1}{2}}L_{-\nu}{\bf 1}\mid \la,\nu\in \mathcal P, \mu\in\mathcal{SP} \ \mbox{not involving}\ M_{-p}, Q_{-\frac{p}{2}}, L_{-p}^n (n\geq r)\} \end{equation} if $ p\in 2\mathbb Z_+-1$. Moreover, $$ {\rm char}\, L(c_L,c_M,h_L,h_M)= q^{h_{p,r}}(1-q^{\frac{p}2})\left(1+\frac12(1+(-1)^p)q^{\frac p2}\right)(1-q^{rp})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. $$ \end{theo} \proof We first consider the case that $p$ is even. By Theorem \ref{main3}, we know that ${\rm T}_{p,r}\1$ is a subsingular vector in $V(c_L,c_M,h_{p,r},h_M)$. By Lemma \ref{R-S-lemma} we have $\langle {\rm S}\1 \rangle\subset \langle {\rm T}_{p,r}\1\rangle$. Then \begin{eqnarray*} {\rm char}\, V(c_L,c_M,h_{p,r},h_M)/\langle {\rm T}_{p,r}\1\rangle =q^{h_{p,r}}(1-q^{p})(1-q^{rp})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. 
\end{eqnarray*} Note that ${\rm T}_{p,r}\1=(L_{-p}^r+g_1L_{-p}^{r-1}+\cdots+g_{r-1}L_{-p}+g_{r})\mathbf 1$, i.e., $\label{rtor-1} L_{-p}^r\1=-(g_1L_{-p}^{r-1}+\cdots+g_{r-1}L_{-p}+g_{r})\mathbf 1 $ in $V(c_L,c_M,h_{p,r},h_M)/\langle {\rm T}_{p,r}\1 \rangle$. Then we know that $\mathcal B_1$ is a basis for $V(c_L,c_M,h_L,h_M)/\langle {\rm T}_{p,r} \1\rangle$. Now we prove the irreducibility of the quotient module $V(c_L,c_M,h_{p,r},h_M)/\langle {\rm T}_{p,r}\1 \rangle$. Suppose that $x\in V(c_L,c_M,h_{p,r},h_M)$ is a weight vector with $h_L=h_{p,r}$ such that $U(\mathfrak{g}_+)x\subset \langle {\rm T}_{p,r}\1 \rangle$. As in the proof of Lemma \ref{hmsubsingular}, one can prove that ${\rm hm}(x)=L_{-p}^s\1$ for some $s<r$. By arguments similar to those in the proof of Theorem \ref{necessity}, one can show that $h_{p,r}=h_{p,s}$, which is a contradiction. So $J(c_L,c_M,h_L,h_M)=\langle {\rm T}_{p,r}\1\rangle$. Next, we consider the case that $p$ is odd. It is clear that \begin{eqnarray*} {\rm char}\, V(c_L,c_M,h_{p,r},h_M)/\langle {\rm T}_{p,r}\1\rangle=q^{h_{p,r}}(1-q^{\frac{p}{2}})(1-q^{rp})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}. \end{eqnarray*} As in the case of even $p$, one can establish the irreducibility of the quotient module $V(c_L,c_M,h_{p,r},h_M)/\langle {\rm T}_{p,r}\1 \rangle$. So $J(c_L,c_M,h_L,h_M)=\langle {\rm T}_{p,r}\1\rangle$ in this case as well. \qed \begin{rem} The above two theorems show that, unlike for the Virasoro algebra, any maximal submodule of the Verma module $V(c_L,c_M,h_{p,r},h_M)$ is generated by a single weight vector. \end{rem} \subsection {Composition series of submodules} In this subsection, we investigate the composition series of submodules of the Verma module $V(c_L,c_M,h_L,h_M)$. We have seen that the reducible Verma module $V(c_L,c_M,h_L,h_M)$ admits infinite chains of submodules. Now we consider both typical and atypical cases.
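Before proceeding, note that the prefactor in the character formulas of Theorems \ref{irreducibility} and \ref{irreducibility1} simplifies according to the parity of $p$:
$$\left(1-q^{\frac{p}2}\right)\left(1+\frac12(1+(-1)^p)q^{\frac p2}\right)=\begin{cases} 1-q^{p}, & p\in 2\mathbb Z_+,\\ 1-q^{\frac{p}2}, & p\in 2\mathbb Z_+-1, \end{cases}$$
and it is these two specializations that appear in the character computations below.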
\begin{theo}\label{main4-1} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$ and $(h_L,h_M)\notin \mathcal{AT}(c_L, c_M)$. $(1)$ If $p\in 2\mathbb Z_+$, then the Verma module $V(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules \begin{eqnarray}\label{filtration3} V(c_L,c_M,h_L,h_M)\supset\langle {\rm S}\1 \rangle \supset \langle {\rm S}^2\1 \rangle\supset\cdots\supset \langle {\rm S}^n\1 \rangle\supset \cdots \end{eqnarray} Moreover, $\langle {\rm S}^n\1 \rangle/\langle {\rm S}^{n+1}\1 \rangle$ is isomorphic to the irreducible highest weight module $L(c_L,c_M,h_L+np,h_M)$ for any $n\in \mathbb{N}$.\par $(2)$ If $p\in 2\mathbb Z_+-1$, then the Verma module $V(c_L,c_M,h_L,h_M)$ has the following composition series of submodules \begin{eqnarray}\label{filtration4} V(c_L,c_M,h_L,h_M)\supset \langle {\rm R}\1 \rangle\supset\langle {\rm R}^2\1 \rangle \supset\cdots \supset \langle {\rm R}^n\1 \rangle\supset\cdots \end{eqnarray} Moreover, $\langle {\rm R}^{2n}\1 \rangle/\langle {\rm R}^{2n+1}\1 \rangle$ is isomorphic to the irreducible highest weight module $L(c_L,c_M,h_L+np,h_M)$ and $\langle {\rm R}^{2n+1}\1 \rangle/\langle {\rm R}^{2n+2}\1 \rangle$ is isomorphic to the irreducible highest weight module $L(c_L,c_M,h_L+np+\frac{p}{2},h_M)$ for any $n\in \mathbb{N}$. \end{theo} \proof (1) We know that $\langle {\rm S}^n\1 \rangle$ is a submodule of $V(c_L,c_M,h_L,h_M)$ for any $n\in \mathbb{Z}_+$ and (\ref{filtration3}) follows from Theorem \ref{main1}. Note that $M_{0}{\rm S}^{n}\1=h_M {\rm S}^{n}\1$, $L_{0}{\rm S}^{n}\1=(h_L+np) {\rm S}^{n}\1$. Clearly, the submodule $\langle {\rm S}^n\1 \rangle$ is isomorphic to $V(c_L,c_M,h_L+np,h_M)$ with the highest weight $(c_L,c_M,h_L+np,h_M)$. 
Note that ${\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle$ generates $\langle {\rm S}^{n}\1 \rangle/\langle {\rm S}^{n+1}\1 \rangle$ and $M_{0}({\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle)= h_M ({\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle)$, $L_{0}({\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle)= (h_L+np) ({\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle)$. Then $\langle {\rm S}^{n}\1 \rangle/\langle {\rm S}^{n+1}\1 \rangle$ is isomorphic to a highest weight module with weight $(c_L,c_M,h_L+np,h_M)$ and highest weight vector ${\rm S}^{n}\1+\langle {\rm S}^{n+1}\1 \rangle$. By a character calculation, we have \begin{eqnarray*} {\rm char}\, \langle {\rm S}^{n}\1 \rangle/\langle {\rm S}^{n+1}\1 \rangle &=&q^{h_L+np}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}-q^{h_L+(n+1)p}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&q^{h_L+np}(1-q^p)\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&{\rm char}\, L(c_L,c_M,h_L+np,h_M). \end{eqnarray*} By Theorem \ref{irreducibility}, $\langle {\rm S}^{n+1}\1 \rangle$ is the maximal submodule of $\langle {\rm S}^n\1\rangle$ and the quotient module $\langle {\rm S}^n\1 \rangle/\langle {\rm S}^{n+1}\1 \rangle$ is isomorphic to $L(c_L,c_M,h_L+np,h_M)$ for $n\in \mathbb{N}$. For statement (2), ${\rm R}^n{\bf 1}$ is a singular vector in $V(c_L,c_M,h_L,h_M)$ for any $n\in\mathbb{Z}_+$ by Theorems \ref{main1} and \ref{main2}. The submodule $\langle {\rm R}^n\1 \rangle$ generated by ${\rm R}^n\1$ is isomorphic to $V(c_L,c_M,h_L+\frac{np}{2},h_M)$. Clearly, $(\ref{filtration4})$ holds. Since $h_L\neq h_{p,r}$ for all $p\in 2\mathbb Z_+-1, r\in \mathbb Z_+$, the quotient $V(c_L,c_M,h_L,h_M)/\langle {\rm R}\1 \rangle=L(c_L,c_M,h_L,h_M)$ is irreducible by Theorem \ref{irreducibility}. Note that ${\rm R}^{2n}\1+\langle {\rm R}^{2n+1}\1 \rangle$ generates $\langle {\rm R}^{2n}\1 \rangle/\langle {\rm R}^{2n+1}\1 \rangle$.
Moreover, $M_{0}({\rm R}^{2n}\1+\langle {\rm R}^{2n+1}\1 \rangle)= h_M {\rm R}^{2n}\1+\langle {\rm R}^{2n+1}\1 \rangle$ and $L_{0}({\rm R}^{2n}\1+\langle {\rm R}^{2n+1}\1 \rangle)= (h_L+np) {\rm R}^{2n}\1+\langle {\rm R}^{2n+1}\1 \rangle$. We have \begin{eqnarray*} {\rm char}\, \langle {\rm R}^{2n}\1 \rangle/\langle {\rm R}^{2n+1}\1 \rangle &=&q^{h_L+np}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}-q^{h_L+(n+\frac{1}{2})p}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&q^{h_L+np}(1-q^{\frac{p}{2}})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&{\rm char}\, L(c_L,c_M,h_L+np,h_M). \end{eqnarray*} Clearly, $h_L+np\neq h_{p,r}$ for all $p\in 2\mathbb Z_+-1, r\in \mathbb Z_+$, and so the quotient $\langle {\rm R}^{2n}\1 \rangle/\langle {\rm R}^{2n+1}\1 \rangle\cong L(c_L,c_M,h_L+np,h_M)$ is irreducible for any $n\in \mathbb{N}$ by Theorem \ref{irreducibility}. Note that ${\rm R}^{2n+1}\1+\langle {\rm R}^{2n+2}\1 \rangle$ generates $\langle {\rm R}^{2n+1}\1 \rangle/\langle {\rm R}^{2n+2}\1 \rangle$. Moreover, $M_{0}({\rm R}^{2n+1}\1+\langle {\rm R}^{2n+2}\1 \rangle)= h_M {\rm R}^{2n+1}\1+\langle {\rm R}^{2n+2}\1 \rangle$ and $L_{0}({\rm R}^{2n+1}\1+\langle {\rm R}^{2n+2}\1 \rangle)= \left(h_L+np+\frac{p}{2}\right) {\rm R}^{2n+1}\1+\langle {\rm R}^{2n+2}\1 \rangle$. Similarly, \begin{eqnarray*} {\rm char}\,\langle {\rm R}^{2n+1}\1 \rangle/\langle {\rm R}^{2n+2}\1 \rangle &=&q^{h_L+np+\frac{p}{2}}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}-q^{h_L+(n+1)p}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&q^{h_L+np+\frac{p}{2}}(1-q^{\frac{p}{2}})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&{\rm char}\, L(c_L,c_M,h_L+np+\frac{p}{2},h_M). \end{eqnarray*} So $\langle {\rm R}^{2n+1}\1 \rangle/\langle {\rm R}^{2n+2}\1 \rangle\cong L(c_L,c_M,h_L+np+\frac{p}{2},h_M)$. \qed We next consider the atypical case.
\begin{theo} \label{main4-2} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$ and $(h_L,h_M)\in \mathcal{AT}(c_L, c_M)$. $(1)$ If $p\in 2\mathbb Z_+$, then there exists $r\in\mathbb Z_+$ such that the Verma module $V(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules $$ \aligned\label{filtration-aS1} V(c_L,c_M,&h_L,h_M)=\langle {\rm S}^0\1 \rangle\supset\langle {\rm T}_{p, r}\1 \rangle \supset\langle {\rm S}\1 \rangle \supset \langle {\rm T}_{p, r-2}({\rm S}\1) \rangle\supset\cdots \\ &\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor}\1 \rangle \supset\langle {\rm T}_{p, r-2\lfloor \frac{r-1}{2}\rfloor}({\rm S}^{\lfloor \frac{r-1}{2}\rfloor}\1) \rangle\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor+1}\1 \rangle\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor+2}\1 \rangle\supset \cdots \endaligned$$ $(2)$ If $p\in 2\mathbb Z_+-1$, then there exists $r\in\mathbb Z_+$ such that the Verma module $V(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules $$ \aligned\label{filtration-aR1} V(c_L,c_M,&h_L,h_M)=\langle {\rm R}^0\1 \rangle\supset\langle {\rm T}_{p, r}\1 \rangle \supset\langle {\rm R}\1 \rangle \supset \langle {\rm T}_{p, r-1}{\rm R}\1 \rangle \supset \langle {\rm R}^2\1 \rangle \supset \langle {\rm T}_{p, r-2}{\rm R}^2\1 \rangle\supset\cdots\\ &\supset\langle {\rm R}^{r-1}\1 \rangle \supset\langle {\rm T}_{p, 1}{\rm R}^{r-1}\1 \rangle\supset\langle {\rm R}^{r}\1 \rangle\supset\langle {\rm R}^{r+1}\1 \rangle\supset \cdots \endaligned$$ \end{theo} \begin{proof} Clearly, $\langle {\rm S}^i\1\rangle$ and $\langle {\rm R}^i\1\rangle$ are Verma modules, with $\langle {\rm S}^i\1\rangle\cong V(c_L,c_M,h_{p,r}+ip,h_M)$ and $\langle {\rm R}^i\1\rangle\cong V(c_L,c_M,h_{p,r}+{\frac 12}ip,h_M)$ for $i\in\mathbb N$. (1) Let $p\in 2\mathbb Z_+$.
By Theorem \ref{main3}, ${\rm T}_{p,r-2i}({\rm S}^i\1)$ is the unique subsingular vector of $\langle {\rm S}^i\1\rangle\cong V(c_L,c_M,h_{p,r}+ip,h_M)$ (here ${\rm S}^i\1$ is the highest weight vector of $V(c_L,c_M,h_{p,r}+ip,h_M)$). By Theorem \ref{irreducibility1}, $\langle {\rm T}_{p,r-2i}({\rm S}^i\1)\rangle$ is the maximal submodule of $\langle {\rm S}^i\1\rangle$ for any $i=0, 1, \cdots, \lfloor \frac{r-1}{2}\rfloor$. Note that \begin{eqnarray}\label{hpr-relation} h_{p,r}+ip=h_{p,1}+\frac{(1-r)p}{2}+ip=h_{p,1}+\frac{(1-(r-2i))p}{2}=h_{p,r-2i} \end{eqnarray} for $i\in \mathbb{Z}_+$ and $r-2i>0$. This implies \begin{equation}\langle {\rm S}^i\1\rangle/\langle {\rm T}_{p,r-2i}({\rm S}^i\1) \rangle\cong L(c_L, c_M, h_{p, r-2i}, h_M), \ i=0,1,\cdots,\textstyle{\left\lfloor \frac{r-1}{2}\right\rfloor}.\label{subquotient00}\end{equation} By Lemma \ref{R-S-lemma} we have $\langle {\rm S}\1\rangle\subset \langle {\rm T}_{p, r}\1 \rangle$. Then \begin{eqnarray*} &&{\rm char}\, \langle {\rm T}_{p,r}\1\rangle /\langle {\rm S}\1\rangle\\ &=&{\rm char}\,V(c_L,c_M,h_{p,r},h_M)-{\rm char}\,L(c_L,c_M,h_{p,r},h_M)-{\rm char}\,\langle {\rm S}\1 \rangle\\ &=&q^{h_{p,r}}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}} -q^{h_{p,r}}(1-q^p)(1-q^{rp})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}-q^{h_{p,r}+p}\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}\\ &=&q^{h_{p,r}+rp}(1-q^{p})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}= {\rm char}\ L(c_L,c_M,h_{p,r}+rp,h_M), \end{eqnarray*} where we have used that $h_{p,r}+rp=h_{p,-r}$ and $(h_{p,-r},h_M)\notin \mathcal{AT}(c_L, c_M)$. So $\langle {\rm T}_{p,r}\1\rangle /\langle {\rm S}\1\rangle$ is isomorphic to the irreducible highest weight module $L(c_L,c_M,h_{p,r}+rp,h_M)$. By Lemma \ref{R-S-lemma} we have $\langle {\rm S}({\rm S}^i\1)\rangle\subset \langle {\rm T}_{p,r-2i}({\rm S}^i\1)\rangle$.
Replacing $\1$ by ${\rm S}^i\1$ as above, we get \begin{eqnarray}\langle {\rm T}_{p,r-2i}({\rm S}^i\1)\rangle/\langle {\rm S}^{i+1}\1\rangle\cong L(c_L, c_M, h_{p, r}+(r-2i)p, h_M), i=0, 1, \cdots, \textstyle{\left\lfloor \frac{r-1}{2}\right\rfloor}.\label{TquotientS}\end{eqnarray} So statement (1) follows from \eqref{subquotient00} and \eqref{TquotientS}. (2) Let $p\in 2\mathbb Z_+-1$. Note that \eqref{hpr-relation} also holds and \begin{eqnarray}\label{hpr-relation1} h_{p,r}+ip+\frac{p}{2}=h_{p,1}+\frac{(1-(r-2i-1))p}{2}=h_{p,r-2i-1} \end{eqnarray} for $i\in \mathbb{N}$ and $r-2i-1>0$. By Theorem \ref{irreducibility1}, $\langle {\rm T}_{p,r-i}{\rm R}^{i}\1\rangle$ is the maximal submodule of $\langle {\rm R}^{i}\1\rangle$. Combining this with \eqref{hpr-relation} and \eqref{hpr-relation1}, we get \begin{equation*}\langle {\rm R}^{i}\1\rangle/\langle {\rm T}_{p,r-i}{\rm R}^i\1 \rangle\cong L(c_L, c_M, h_{p, r-i}, h_M), \ i=0,1,\cdots, r-1.\label{subquotient1}\end{equation*} At the same time, by Lemma \ref{R-S-lemma}, $\langle {\rm R}\1\rangle$ is a submodule of $\langle {\rm T}_{p, r}\1\rangle$. By direct calculation, we get \begin{eqnarray*} {\rm char}\, \langle {\rm T}_{p,r}\1\rangle /\langle {\rm R}\1\rangle=q^{h_{p,r}+rp}(1-q^{\frac{p}{2}})\prod_{k=1}^{\infty}\frac{1+q^{k-\frac{1}{2}}}{(1-q^{k})^{2}}= {\rm char}\ L(c_L,c_M,h_{p,r}+rp,h_M), \end{eqnarray*} where we have used that $h_{p,r}+rp=h_{p,-r}$ and $(h_{p,-r},h_M)\notin \mathcal{AT}(c_L, c_M)$. So $\langle {\rm T}_{p,r}\1\rangle /\langle {\rm R}\1\rangle$ is isomorphic to the irreducible highest weight module $L(c_L,c_M,h_{p,r}+rp,h_M).$ Replacing $\1$ by ${\rm R}^i\1$, we get that $\langle {\rm T}_{p, r-i}{\rm R}^i\1\rangle/\langle {\rm R}^{i+1}\1\rangle \cong L(c_L, c_M, h_{p, r}+(r-i)p, h_M)$ is irreducible for all $i=0, 1, \cdots, r-1$. So statement (2) holds.
\end{proof} \begin{rem} Theorem \ref{main4-2} demonstrates that a submodule of $V(c_L,c_M,h_L,h_M)$ is not necessarily a highest weight module (or generated by singular vectors), which markedly contrasts with the case of the Virasoro algebra \cite{FF}. Indeed, the $\frak g$-module $\langle {\rm T}_{p, r}\1\rangle$ is an extension of the Verma module $\langle {\rm S}\1\rangle$ by the irreducible highest weight module $L(c_L, c_M, h_{p, r}+rp, h_M)$, and cannot be generated by singular vectors. \end{rem} In the proofs of Theorems \ref{main4-1} and \ref{main4-2}, by deleting the parts (or terms) involving $\mathcal Q$ we derive the following results about the subalgebra $W(2, 2)$ of $\frak g$. Set $ {\mathcal {AT}}_{W(2,2)}(c_L,c_M)= \left\{ \left(h_{p,r}', \frac{1-p^2}{24}c_M\right) :\, p,r \in \mathbb{Z}_+ \right\}$, where $h_{p,r}'$ is defined in Corollary \ref{w22-sub}. \begin{cor}[\cite{R}] \label{main3-w22} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$ and $(h_L,h_M)\not\in {\mathcal {AT}}_{W(2,2)}(c_L,c_M)$. Then the Verma module $V_{W(2,2)}(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules \begin{eqnarray*} V_{W(2,2)}(c_L,c_M,h_L,h_M)\supset\langle {\rm S}\1 \rangle \supset \langle {\rm S}^2\1 \rangle\supset\cdots\supset \langle {\rm S}^n\1 \rangle\supset \cdots \end{eqnarray*} \end{cor} \begin{cor} Let $(c_L,c_M,h_L,h_M)\in\bC^4$ such that $2h_M+\frac{p^2-1}{12}c_M=0$ for some $p\in \mathbb Z_+$ with $c_M\neq 0$, and $(h_L,h_M)=\left(h_{p,r}', \frac{1-p^2}{24}c_M\right)\in {\mathcal {AT}}_{W(2,2)}(c_L,c_M)$ for some $r\in \mathbb{Z}_+$.
Then the Verma module $V_{W(2,2)}(c_L,c_M,h_L,h_M)$ has the following infinite composition series of submodules $$ \aligned V_{W(2,2)}(c_L,c_M,&h_L,h_M)=\langle {\rm S}^0\1 \rangle\supset\langle {\rm T}_{p, r}\1 \rangle \supset\langle {\rm S}\1 \rangle \supset \langle {\rm T}_{p, r-2}({\rm S}\1) \rangle\supset\cdots\\ &\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor}\1 \rangle \supset\langle {\rm T}_{p, r-2\lfloor \frac{r-1}{2}\rfloor}({\rm S}^{\lfloor \frac{r-1}{2}\rfloor}\1) \rangle\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor+1}\1 \rangle\supset\langle {\rm S}^{\lfloor \frac{r-1}{2}\rfloor+2}\1 \rangle\supset \cdots \endaligned$$ \end{cor} \section*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (12071405) and NSERC (311907-2020). \bibliographystyle{amsalpha} \begin{thebibliography}{ACKP} \bibitem{AJR}D. Adamovic, B. Jandric and G. Radobolja, The $N=1$ super Heisenberg-Virasoro vertex algebra at level zero, J. Algebra Appl. {\bf 21} (2022), 2350003. \bibitem{AR1} D. Adamovic and G. Radobolja, Free field realization of the twisted Heisenberg-Virasoro algebra at level zero and its applications, J. Pure Appl. Algebra {\bf 219} (2015), 4322-4342. \bibitem{AR2} D. Adamovic and G. Radobolja, On free field realization of $W(2,2)$-modules, SIGMA {\bf 12} (2016), 1-13. \bibitem{ACKP} E. Arbarello, C. De Concini, V.G. Kac and C. Procesi, Moduli spaces of curves and representation theory, Commun. Math. Phys. {\bf 117} (1988), 1-36. \bibitem{As}A. Astashkevich, On the structure of Verma modules over Virasoro and Neveu-Schwarz algebras, Commun. Math. Phys. {\bf 186} (1997), 531-562. \bibitem{AF} A. Astashkevich and D. Fuchs, Asymptotics for singular vectors in Verma modules over the Virasoro algebra, Pacific J. Math. {\bf 177} (1997), 201-209. \bibitem{BDMT} G. Barnich, L. Donnay, J. Matulich and R. Troncoso, Asymptotic symmetries and dynamics of three dimensional flat supergravity, JHEP {\bf 8} (2014), 071.
\bibitem{BGMM} A. Bagchi, R. Gopakumar, I. Mandal and A. Miwa, GCA in 2d, JHEP {\bf 8} (2010), 004. \bibitem{Ba0} A. Bagchi, The BMS/GCA correspondence, Phys. Rev. Lett. {\bf 105} (2010), 171601. \bibitem{BSZ} A. Bagchi, A. Saha and Zodinmawia, BMS characters and modular invariance, JHEP {\bf 07} (2019), 138. \bibitem{BMRW} O. Blondeau-Fournier, P. Mathieu, D. Ridout and S. Wood, Superconformal minimal models and admissible Jack polynomials, Adv. Math. {\bf 314} (2017), 71-123. \bibitem{BT} G. Barnich and C. Troessaert, Aspects of the BMS/CFT correspondence, JHEP {\bf 1005} (2010), 062. \bibitem{BNSZ} A. Bagchi, P. Nandi, A. Saha and Zodinmawia, BMS modular diaries: Torus one-point function, JHEP {\bf 11} (2020), 065. \bibitem{BJMN} N. Banerjee, D. P. Jatkar, S. Mukhi and T. Neogi, Free-field realisations of the BMS$_3$ algebra and its extensions, JHEP {\bf 06} (2016), 024. \bibitem{Bi} Y. Billig, Representations of the twisted Heisenberg--Virasoro algebra at level zero, Can. Math. Bull. \textbf{46} (2003), 529-537. \bibitem{BM} H. Bondi, M.G.J. van der Burg and A.W.K. Metzner, Gravitational waves in general relativity. VII. Waves from axisymmetric isolated systems, Proc. Roy. Soc. Lond. A {\bf 269} (1962), 21-52. \bibitem{BH} J. D. Brown and M. Henneaux, Central charges in the canonical realization of asymptotic symmetries: an example from three-dimensional gravity, Commun. Math. Phys. {\bf 104} (1986), 207–226. \bibitem{DG} M. D\"{o}rrzapf and B. Gato-Rivera, Singular dimensions of the $N=2$ superconformal algebras I, Commun. Math. Phys. {\bf 206} (1999), 493–531. \bibitem{DGL} M. Dilxat, S. Gao and D. Liu, Whittaker modules over the $N=1$ super-BMS$_3$ algebra, J. Algebra Appl. {\bf 23} (2024), 2450088. \bibitem{FF} B. Feigin and D. Fuchs, Verma modules over the Virasoro algebra, Lecture Notes in Math. {\bf 1060} (1984), 230-245. \bibitem{FK} I. B. Frenkel and H. K. Kim, Three dimensional construction of the Virasoro-Bott group, arXiv:2107.11693 (2021).
\bibitem{HSSU} M. Henkel, R. Schott, S. Stoimenov and J. Unterberger, On the dynamical symmetric algebra of ageing: Lie structure, representations and Appell systems, Confluentes Math. {\bf 4} (2012), 1250006. \bibitem{IK0} K. Iohara and Y. Koga, Representation theory of Neveu-Schwarz and Ramond algebras I: Verma modules, Adv. Math. {\bf 177} (2003), 61–69. \bibitem{IK1} K. Iohara and Y. Koga, Representation theory of Neveu-Schwarz and Ramond algebras II: Fock modules, Ann. Inst. Fourier {\bf 53} (2003), 1755–1818. \bibitem{IK} K. Iohara and Y. Koga, Representation theory of the Virasoro algebra, Springer Monographs in Mathematics, Springer-Verlag London, Ltd., London, 2011. \bibitem{JP} W. Jiang and Y. Pei, On the structure of Verma modules over the W-algebra $W(2,2)$, J. Math. Phys. {\bf 51} (2010), 022303. \bibitem{JZ} W. Jiang and W. Zhang, Verma modules over the $W(2,2)$ algebras, J. Geom. Phys. {\bf 98} (2015), 118-127. \bibitem{JPZ} W. Jiang, Y. Pei and W. Zhang, Determinant formula and a realization for the Lie algebra W(2,2), Sci. China Math. {\bf 61} (2018), 685–694. \bibitem{LPXZ} D. Liu, Y. Pei, L. Xia and K. Zhao, Smooth modules over the $N=1$ Bondi-Metzner-Sachs superalgebra, Commun. Contemp. Math., doi:10.1142/S0219199724500214; arXiv:2307.14608. \bibitem{MY} K. Mimachi and Y. Yamada, Singular vectors of the Virasoro algebra in terms of Jack symmetric polynomials, Commun. Math. Phys. {\bf 174} (1995), 447-455. \bibitem{MR} A. Meurman and A. Rocha-Caridi, Highest weight representations of the Neveu-Schwarz and Ramond algebras, Commun. Math. Phys. {\bf 107} (1986), 263-294. \bibitem{O} B. Oblak, Characters of the BMS group in three dimensions, Commun. Math. Phys. {\bf 340} (2015), 413-432. \bibitem{R} G. Radobolja, Subsingular vectors in Verma modules, and tensor product modules over the twisted Heisenberg-Virasoro algebra and $W(2,2)$ algebra, J. Math. Phys. {\bf 54} (2013), 071701. \bibitem{Sa} R. Sachs, Gravitational waves in general relativity.
VIII. Waves in asymptotically flat space-times, Proc. Roy. Soc. Lond. A {\bf 270} (1962), 103–126. \bibitem{SZ} A. Strominger and A. Zhiboedov, Gravitational memory, BMS supertranslations and soft theorems, JHEP {\bf 01} (2016), 086. \bibitem{RW} A. Rocha-Caridi and N. R. Wallach, Characters of irreducible representations of the Lie algebra of vector fields on the circle, Invent. Math. {\bf 72} (1983), 57-76. \bibitem{Wi} B. Wilson, Highest-weight theory for truncated current Lie algebras, J. Algebra {\bf 336} (2011), 1-27. \bibitem{ZD} W. Zhang and C. Dong, {$W$-algebra $W(2,2)$ and the vertex operator algebra $L(\frac{1}{2},0)\otimes L(\frac{1}{2},0)$}, Commun. Math. Phys. {\bf 285} (2009), 991-1004. \end{thebibliography} \end{document}
\pdfoutput=1 \documentclass[microtype]{gtpart} \usepackage{graphicx} \usepackage[mathscr]{eucal} \usepackage{amssymb} \usepackage{xcolor} \definecolor{darkred}{RGB}{200, 60, 0} \definecolor{mildblue}{RGB}{0, 100, 250} \usepackage{enumerate, cite} \usepackage[margin=1.25in]{geometry} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=darkred, urlcolor=darkred, citecolor=mildblue, urlcolor=darkred, } \usepackage[nameinlink]{cleveref} \newcommand{\br}{\mathbb{R}} \newcommand{\bc}{\mathbb C} \newcommand{\bz}{\mathbb Z} \newcommand{\bn}{\mathbb N} \newcommand{\bq}{\mathbb Q} \newcommand{\bh}{\mathbb H} \newcommand{\bS}{\mathbb S} \newcommand{\bd}{\mathbb D} \newcommand{\cE}{\mathcal E} \newcommand{\cF}{\mathcal F} \newcommand{\cc}{\mathcal C} \newcommand{\cm}{\mathscr M} \newcommand{\cac}{\mathcal{AC}} \newcommand{\al}{\alpha} \newcommand{\be}{\beta} \newcommand{\si}{\sigma} \newcommand{\ep}{\epsilon} \newcommand{\ve}{\varepsilon} \newcommand{\vp}{\varphi} \newcommand{\Si}{\Sigma} \newcommand{\sig}{\sigma} \newcommand{\ssm}{\smallsetminus} \newcommand{\into}{\hookrightarrow} \newcommand{\mr}{\mathring} \newcommand{\sE}{\mathscr E} \newcommand{\Enp}{\s\varepsilon_{\mathrm{np}}} \newcommand{\Eno}{\s\varepsilon_{\mathrm{no}}} \newcommand{\Ep}{\varepsilon_{\mathrm{p}}} \newcommand{\Enpp}{\varepsilon_{\mathrm{npp}}} \newcommand{\sU}{\mathscr U} \newcommand{\sV}{\mathscr V} \newcommand{\sW}{\mathscr W} \newcommand{\sP}{\mathscr P} \DeclareMathOperator{\arcsinh}{arcsinh} \DeclareMathOperator{\mcg}{MCG} \DeclareMathOperator{\pmcg}{PMCG} \DeclareMathOperator{\mmcg}{MCG_{\mathscr M}} \DeclareMathOperator{\isom}{Isom} \DeclareMathOperator{\Ends}{\cE} \DeclareMathOperator{\Mod}{Mod} \DeclareMathOperator{\Homeo}{Homeo} \DeclareMathOperator{\PHomeo}{PHomeo} \DeclareMathOperator{\Diffeo}{Diffeo} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Fin}{Fin} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\PH}{PH} 
\DeclareMathOperator{\HH}H \DeclareMathOperator{\F}F \DeclareMathOperator{\Alt}{Alt} \renewcommand{\co}{\colon\thinspace} \newtheorem{Thm}{Theorem}[section] \newtheorem{Thm*}{Theorem} \newtheorem{Prop}[Thm]{Proposition} \newtheorem{Lem}[Thm]{Lemma} \newtheorem{Cor}[Thm]{Corollary} \newtheorem{Cor*}[Thm*]{Corollary} \newtheorem{Conj}[Thm*]{Conjecture} \newtheorem{Question}[Thm]{Question} \newtheorem*{MainThm1}{\Cref{thm:normal generators 1}} \newtheorem*{MainThm2}{\Cref{thm:mainthm2}} \newtheorem*{MainThm3}{\Cref{thm:main3}} \newtheorem*{MainThm4}{\Cref{thm:ses}} \newtheorem*{MainThm5}{\Cref{thm:normal generators F}} \newtheorem*{MainThm5'}{\Cref{thm:perfect F}} \newtheorem*{MainThm6}{\Cref{thm:abelianization}} \newtheorem*{MainThm7}{\Cref{thm:cardinality}} \theoremstyle{definition} \newtheorem{Def}[Thm]{Definition} \newtheorem{Ex}[Thm]{Example} \newtheorem*{Ex*}{Examples} \newtheorem{Rem}[Thm]{Remark} \newtheorem{Problem}[Thm]{Problem} \numberwithin{equation}{section} \newcommand{\nv}[1]{\color{Cerulean} \{#1\}\color{black}} \title{Algebraic and geometric properties of \\homeomorphism groups of ordinals} \author{Megha Bhat} \address{Department of Mathematics \\ CUNY Graduate Center \\ New York, NY 10016} \email{[email protected]} \author{Rongdao Chen} \address{Department of Mathematics \\ CUNY Queens College \\ Flushing, NY 11367} \email{[email protected]} \author{Adityo Mamun} \address{Department of Mathematics \\ CUNY Queens College \\ Flushing, NY 11367} \email{[email protected]} \author{Ariana Verbanac} \address{Department of Mathematics \\ CUNY Queens College \\ Flushing, NY 11367} \email{[email protected]} \author{Eric Vergo} \address{Department of Mathematics \\ CUNY Queens College \\ Flushing, NY 11367} \email{[email protected]} \author{Nicholas G. 
Vlamis} \address{Department of Mathematics \\ CUNY Graduate Center \\ New York, NY 10016, and \newline Department of Mathematics \\ CUNY Queens College \\ Flushing, NY 11367} \email{[email protected]} \begin{document} \begin{abstract} We study the homeomorphism groups of ordinals equipped with their order topology, focusing on successor ordinals whose limit capacity is also a successor. This is a rich family of groups that has connections to both permutation groups and homeomorphism groups of manifolds. For ordinals of Cantor--Bendixson degree one, we prove that the homeomorphism group is strongly distorted and uniformly perfect, and we classify its normal generators. As a corollary, we recover and provide a new proof of the classical result that the subgroup of finite permutations in the symmetric group on a countably infinite set is the maximal proper normal subgroup. For ordinals of higher Cantor--Bendixson degree, we establish a semi-direct product decomposition of the (pure) homeomorphism group. When the limit capacity is one, we further compute the abelianizations and determine normal generating sets of minimal cardinality for these groups. \end{abstract} \maketitle \vspace{-0.5in} \section{Introduction} An ordinal has a natural topology given by the order topology. Motivated by recent results in the study of homeomorphism groups of 2-manifolds, we investigate several algebraic and geometric properties of the homeomorphism groups of ordinals. We provide a discussion of this motivation from geometric topology after introducing our main results. For an introduction to ordinals and the terminology that follows, see \Cref{sec:ordinals}. Our results focus on successor ordinals, or equivalently, the compact ordinals.
Up to homeomorphism, a successor ordinal has the form \( \omega^\alpha\cdot d + 1 \), where \( \alpha \) is an ordinal (called the \emph{limit capacity}), \( \omega \) is the first infinite ordinal, and \( d \in \omega \) (called the \emph{coefficient}). We study the successor ordinals for which the limit capacity is a successor ordinal. Given an ordinal \( \alpha \) and \( d \in \omega \), let \( \HH_{\alpha,d} = \Homeo(\omega^{\alpha+1}\cdot d + 1) \), i.e., the group consisting of homeomorphisms \( \omega^{\alpha+1} \cdot d + 1 \to \omega^{\alpha +1}\cdot d + 1 \). Our first slate of results is for the case when the coefficient is one. In this case, \( \HH_{\alpha,1} \) is isomorphic to \( \Homeo(\omega^{\alpha+1}) \), and so we state our results for \( \Homeo(\omega^{\alpha+1}) \), as this is the view our proofs will take. This class of groups can be thought of as a generalization of \( \Sym(\bn) \), the symmetric group on the natural numbers, as \( \Sym(\bn) \) is isomorphic to \( \Homeo(\omega) \). An element \( g \) in a group \( G \) is a \emph{normal generator} of \( G \) if every element of \( G \) can be expressed as a product of conjugates of \( g \) and \( g^{-1} \); it is a \emph{uniform normal generator} of \( G \) if there exists \( k \in \bn \)\footnote{Throughout, we use the convention that \( 0 \notin \bn \), i.e., \( \bn = \omega \ssm\{0\} \).} such that every element of \( G \) can be expressed as a product of at most \( k \) conjugates of \( g \) and \( g^{-1} \), and the minimum such \( k \) is called the \( g \)-width of \( G \). Every element of an ordinal is itself an ordinal. From this, we say an element of \( \omega^{\alpha+1} \) is of \emph{maximal rank} if it is of the form \( \omega^\alpha \cdot k \) for some \( k \in \bn \). \begin{MainThm1}[Normal generators] Let \( \alpha \) be an ordinal.
For \( h \in \Homeo(\omega^{\alpha+1}) \), the following are equivalent: \begin{enumerate}[(i)] \item \( h \) normally generates \( \Homeo(\omega^{\alpha+1}) \). \item \( h \) uniformly normally generates \( \Homeo(\omega^{\alpha+1}) \). \item \( h \) induces an infinite permutation of the set of maximal rank elements of \( \omega^{\alpha+1} \). \end{enumerate} Moreover, if one---and hence all---of the above conditions are satisfied, then the \( h \)-width of \( \Homeo(\omega^{\alpha+1}) \) is at most twelve. \end{MainThm1} The proof of Theorem~\ref{thm:normal generators 1} relies on a uniform fragmentation result---a generalization of a lemma of Galvin \cite{GalvinGenerating} from permutation groups---and a technique from Anderson's proof \cite{AndersonAlgebraic} of the algebraic simplicity of several transformation groups, including the homeomorphism group of the Cantor set and the homeomorphism group of the 2-sphere. As an immediate consequence, \( \Homeo(\omega^{\alpha+1}) \) has a maximal proper normal subgroup. \begin{Cor} \label{cor:max subgroup} Let \( \alpha \) be an ordinal. The subgroup \( \mathrm{Fin}(\omega^{\alpha+1}) \) of \( \Homeo(\omega^{\alpha+1}) \) consisting of homeomorphisms that induce a finite permutation on the set of maximal rank elements in \( \omega^{\alpha+1} \) is the maximal proper normal subgroup of \( \Homeo(\omega^{\alpha+1}) \), that is, it contains every proper normal subgroup. \qed \end{Cor} As \( \Homeo(\omega) \) is isomorphic to \( \Sym(\bn) \), \Cref{cor:max subgroup} recovers the classical theorem of Schreier--Ulam \cite{Schreier1933} that the subgroup of finite permutations in \( \Sym(\bn) \) is its maximal normal subgroup. The proof presented here is arguably simpler than the standard proof given in classical texts such as in \cite{ScottGroup}, which relies on cycle decompositions. 
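For orientation, it may help to unwind \Cref{thm:normal generators 1} in the base case \( \alpha = 0 \); this reformulation is immediate from the definitions and is included for the reader's convenience. Here \( \omega^{\alpha+1} = \omega \) carries the discrete topology, so \( \Homeo(\omega) = \Sym(\omega) \cong \Sym(\bn) \), and the maximal rank elements of \( \omega \) are precisely the ordinals \( \omega^0 \cdot k = k \) with \( k \in \bn \). As \( \omega \) and its set of maximal rank elements differ by a single point, condition (iii) simply says that
\[
\#\{ k \in \bn : h(k) \neq k \} = \infty,
\]
i.e., that \( h \) is an infinite permutation.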
Similarly, \Cref{thm:normal generators 1} can be viewed as a generalization of a theorem of Bertram \cite{BertramTheorem} showing that all infinite permutations in \( \Sym(\bn) \) are uniform normal generators. Unlike Bertram's proof, our proof does not rely on cycle decompositions. Bertram proves that any permutation in \( \Sym(\bn) \) can be expressed as a product of four conjugates of any infinite permutation, raising the question of what the optimal constant in \Cref{thm:normal generators 1} is. A group \( G \) is \emph{perfect} if it is equal to its commutator subgroup \( [G,G] \), i.e., the subgroup generated by the set \( \{ [x,y] : x,y \in G \} \), where \( [x,y] = xyx^{-1}y^{-1} \). It is \emph{uniformly perfect} if there exists some \( k \in \bn \) such that every element of \( G \) can be expressed as a product of \( k \) commutators; the minimum such \( k \) is called the \emph{commutator width} of \( G \). It is not difficult to find a commutator in \( \Homeo(\omega^{\alpha+1}) \) that induces an infinite permutation on the set of maximal rank elements of \( \omega^{\alpha+1} \); \Cref{thm:normal generators 1} then implies that \( \Homeo(\omega^{\alpha+1}) \) is a uniformly perfect group of commutator width at most twelve. A direct proof of this fact, simpler than the proof of \Cref{thm:normal generators 1}, can be given that yields a better bound on the commutator width of \( \Homeo(\omega^{\alpha+1}) \). This is our second main result. \begin{MainThm2} If \( \alpha \) is an ordinal, then \( \Homeo(\omega^{\alpha+1}) \) is uniformly perfect, and the commutator width is at most three. \end{MainThm2} A theorem of Ore \cite{OreSome} says that every element of \( \Sym(\bn) \) can be expressed as a single commutator. It is natural to ask whether the same is true for \( \Homeo(\omega^{\alpha+1}) \). Continuing in analogy with \( \Sym(\bn) \), we turn to a geometric property.
In \cite{BergmanGenerating}, Bergman showed that any left-invariant metric on \( \Sym(\bn) \) has bounded diameter; in the literature, this property is referred to as \emph{strong boundedness} or the \emph{Bergman property}. There are several equivalent definitions of strong boundedness; the one of relevance here is that every action of the group on a metric space has bounded orbits. As a consequence, any homomorphism from a strongly bounded group to a countable group has finite image, a fact we use below. Adapting a technique appearing in several places in the literature (see \cite[Construction~2.3]{LeRouxStrong}), we establish a property stronger than strong boundedness for \( \Homeo(\omega^{\alpha+1}) \). A group \( G \) is \emph{strongly distorted} if there exist \( m \in \bn \) and \( \{w_n\}_{n\in\bn} \subset \bn \) such that for any sequence \( \{ g_n\}_{n\in\bn} \) in \( G \) there exists \( S \subset G \) of cardinality at most \( m \) satisfying \( g_n \in S^{w_n} \), where \( S^k = \{ s_1s_2\cdots s_k : s_1, \ldots, s_k \in S \} \). The notion of strong distortion was introduced by Calegari--Freedman in \cite{CalegariDistortion}, where they established strong distortion for the homeomorphism groups of spheres; in the appendix of the same paper, Cornulier showed that if \( G \) is strongly distorted, then \( G \) is strongly bounded. \begin{MainThm3} For every ordinal \( \alpha \), the group \( \Homeo(\omega^{\alpha+1}) \) is strongly distorted. \end{MainThm3} Strong distortion also implies several other properties, including uncountable cofinality and the Schreier property; we refer the reader to the introduction of \cite{LeRouxStrong} for a thorough discussion. As strong distortion implies strong boundedness, Theorem~\ref{thm:main3}, in the case \( \alpha = 0 \), establishes that \( \mathrm{Sym}(\bn) \) is strongly bounded, recovering Bergman's result \cite{BergmanGenerating} and providing a simpler proof.
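To illustrate how strong boundedness constrains homomorphisms, we spell out a standard special case of the fact quoted above, included for the reader's convenience. Suppose \( G \) is strongly bounded and \( \varphi \co G \to \bz \) is a homomorphism. Then \( G \) acts on the metric space \( (\bz, |\cdot|) \) by isometries via
\[
g \cdot x = x + \varphi(g), \qquad g \in G, \ x \in \bz,
\]
and the orbit of \( 0 \) is precisely the subgroup \( \varphi(G) \). Strong boundedness forces this orbit to be bounded, and the only bounded subgroup of \( \bz \) is the trivial one; hence \( \varphi \) is trivial. Running the same argument with a proper left-invariant metric on an arbitrary countable group shows that any homomorphism from \( G \) to a countable group has finite image.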
We finish our discussion of the degree one case by observing that we cannot drop the hypothesis that the limit capacity is a successor. It is a consequence of \cite[Theorem~3.9]{DowThin} that every countable group is a quotient of \( \Homeo(\omega_1) \), where \( \omega_1 \) is the first uncountable ordinal. As a consequence, \( \Homeo(\omega_1) \) is neither uniformly perfect, strongly bounded, nor strongly distorted. We now turn to the case where the coefficient, or equivalently the Cantor--Bendixson degree, is greater than one, investigating the algebraic structure of \( \HH_{\alpha,d} \) and establishing the failure of the above results when \( d > 1 \). There are several normal subgroups of \( \HH_{\alpha,d} \) that are essential to our results, which we now introduce. The first is \( \F_{\alpha,d} \), the subgroup consisting of the homeomorphisms of \( \omega^{\alpha+1}\cdot d+1 \) that induce a finite permutation on the subset \( N_{\alpha,d} := \{ \omega^{\alpha+1}\cdot k + \omega^\alpha \cdot \ell : k \in d, \ell \in \bn \} \), i.e., the elements of next-to-maximal rank. The second is obtained by equipping \( \HH_{\alpha,d} \) with the compact-open topology and taking the closure of \( \F_{\alpha,d} \), which we denote \( \overline \F_{\alpha,d} \). Lastly, let \( M_{\alpha,d} = \{ \omega^{\alpha+1}\cdot k : 1\leq k \leq d \} \subset \omega^{\alpha+1}\cdot d + 1 \), i.e., the set of maximal rank elements. Then \( M_{\alpha,d} \) is preserved by every element of \( \HH_{\alpha,d} \), yielding an action of this group on \( M_{\alpha,d} \)---which we call the \emph{canonical action}. We let \( \PH_{\alpha,d} \) denote the kernel of this action, i.e., the subgroup of \( \HH_{\alpha,d} \) fixing each element of \( M_{\alpha,d} \); we call this group the \emph{pure homeomorphism group} of \( \omega^{\alpha+1}\cdot d +1 \). Note that \( \HH_{\alpha,1} = \PH_{\alpha, 1} = \overline \F_{\alpha,1} \), where the last equality follows from \Cref{thm:normal generators 1}.
Recall that a short exact sequence \( 1 \to A \to B \to C \to 1 \) is \emph{right split} if the map \( B \to C \) admits a section. \begin{MainThm4} Let \( \alpha \) be an ordinal, and let \( d \in \bn \ssm\{1\} \). \begin{enumerate}[(1)] \item There exists a homomorphism \( \chi \co \PH_{\alpha,d} \to \bz^{d-1} \) such that \[ 1 \longrightarrow \overline \F_{\alpha,d} \lhook\joinrel\longrightarrow \PH_{\alpha,d} \overset{\chi}\longrightarrow \bz^{d-1} \longrightarrow 1 \] is a short exact sequence, and it is right split. \item If \( \Phi \co \HH_{\alpha,d} \to \Sym(M_{\alpha,d} ) \) is the canonical action, then the short exact sequence \[ 1 \longrightarrow \PH_{\alpha,d} \lhook\joinrel\longrightarrow \HH_{\alpha,d} \overset{\Phi}\longrightarrow \Sym(M_{\alpha,d}) \longrightarrow 1 \] is right split. \end{enumerate} \end{MainThm4} A short exact sequence is right split if and only if the middle group is a semi-direct product of the other two groups, yielding the following corollary. \begin{Cor}[Semi-direct product decomposition] Let \( \alpha \) be an ordinal, and let \( d \in \bn \). If \( d > 1 \), then \( \PH_{\alpha,d} \) is isomorphic to \( \overline \F_{\alpha,d} \rtimes \bz^{d-1} \) and \( \HH_{\alpha,d} \) is isomorphic to \( \PH_{\alpha,d} \rtimes \Sym(d) \). In particular, \( \HH_{\alpha,d} \) is isomorphic to \( (\overline \F_{\alpha,d} \rtimes \bz^{d-1}) \rtimes \Sym(d) \). \end{Cor} An explicit description of \( \chi \) and its section is given in \Cref{sec:ses}; the homomorphism comes from \( d-1 \) independent \emph{flux homomorphisms} counting the shifting of the elements of \( N_{\alpha,d} \). As a consequence, we see that \( \PH_{\alpha,d} \) and \( \HH_{\alpha,d} \) are neither perfect nor strongly bounded.
For the latter, note that \( \PH_{\alpha,d} \) has a countably infinite quotient and hence is not strongly bounded per the discussion above, and as \( \HH_{\alpha,d} \) has a finite-index subgroup that fails to be strongly bounded, it must also fail to be strongly bounded. A topological group is \emph{coarsely bounded} if every continuous left-invariant metric on the group has bounded diameter (see \cite{RosendalBook}). Equipped with the compact-open topology, \( \HH_{\alpha,d} \) is a topological group (see \Cref{sec:ordinals}). With this topology, it is readily verified that the homomorphism \( \chi \) is continuous, and hence it follows from \Cref{thm:ses} that \( \PH_{\alpha,d} \) (and hence \( \HH_{\alpha,d} \)) is not coarsely bounded. Together with \Cref{thm:main3}, this yields the following corollary. \begin{Cor} \label{cor:d=1} Let \( \alpha \) be an ordinal, and let \( d \in \bn \). The following are equivalent: \begin{enumerate}[(i)] \item \( \HH_{\alpha,d} \) is strongly distorted. \item \( \HH_{\alpha,d} \) is strongly bounded. \item \( \HH_{\alpha,d} \) is coarsely bounded. \item \( d = 1 \). \end{enumerate} \end{Cor} Recently, Branman--Domat--Hoganson--Lyman \cite[Theorem~A]{BranmanGraphical} showed that for a countable ordinal \( \alpha \), the group \( \Homeo(\omega^\alpha\cdot d +1) \) is coarsely bounded if and only if \( d = 1 \). \Cref{cor:d=1} is a strengthening of this statement under the additional hypothesis that \( \alpha \) is a successor ordinal. We do not know if \( \Homeo(\omega^\lambda) \) is strongly bounded when \( \lambda \) is a countable limit ordinal; however, we do know that it is coarsely bounded \cite{MannLarge-scale}. Before moving on, we stress that Corollary~\ref{cor:d=1} considers \( \HH_{\alpha,d} \) equipped with the compact-open topology.
Given any ordinal \( \alpha \), Gheysens \cite{GheysensDynamics,GheysensHomeomorphism} has proven that \( \Homeo(\alpha) \), equipped with the topology of pointwise convergence, is a Roelcke precompact topological group, implying that it is coarsely bounded. Ideally, we would like to use \Cref{thm:ses} to compute the abelianizations of \( \PH_{\alpha,d} \) and \( \HH_{\alpha,d} \), as well as to produce normal generating sets. The main obstacle is working with \( \F_{\alpha,d} \), which is a complicated group. If we quotient out by \( \F_{\alpha,d} \), then we do have the tools to proceed. The downside is that the isomorphism class of this quotient does not depend on \( \alpha \). Indeed, let \( \mathrm K_{\alpha,d} \) denote the kernel of the action of \( \HH_{\alpha,d} \) on the set \( N_{\alpha,d} \). It can be readily verified that \( \HH_{\alpha,d} / \mathrm K_{\alpha,d} \) is isomorphic to \( \HH_{0,d} \), leading us to the investigation of the groups \( \Homeo(\omega\cdot d +1) \). The difference in the case \( \alpha =0 \) is that \( \F_{0,d} \) is the group of finitely supported homeomorphisms; in particular, the elements of \( \F_{0,d} \) are supported away from the set \( M_{0,d} \). \begin{MainThm5} \label{thm:d} Let \( d \in \bn \). For \( f \in \overline \F_{0,d} \), the following are equivalent: \begin{enumerate}[(i)] \item \( f \) normally generates \( \overline \F_{0,d} \). \item \( f \) uniformly normally generates \( \overline \F_{0,d} \). \item \( f \) induces an infinite permutation of the set \( \{ \omega \cdot k + \ell : \ell \in \bn \} \) for each \( k \in d \). \end{enumerate} \end{MainThm5} \Cref{thm:normal generators F} stated below contains another condition---related to \Cref{thm:normal generators 2}---that we omit here, as we do not yet have the appropriate definitions at hand.
The proof of \Cref{thm:normal generators F} follows from a fragmentation lemma, \Cref{lem:factorization2}, that holds for all \( \alpha \) but that is strongest when \( \alpha = 0 \), again as \( \F_{0,d} \) consists only of finitely supported homeomorphisms. Fragmentation will allow us to view any element of \( \overline \F_{0,d} \) as a composition of \( d+1 \) homeomorphisms, each of which can be viewed as an element of \( \Homeo(\omega) \). This fragmentation also allows us to consider the abelianization of \( \overline \F_{0,d} \). \begin{MainThm5'} If \( d \in \bn \), then \( \overline \F_{0,d} \) is uniformly perfect and has commutator width at most four. \end{MainThm5'} As consequences of \Cref{thm:ses}, \Cref{thm:perfect F}, and \Cref{thm:normal generators F}, we can compute the abelianizations of \( \PH_{0,d} \) and \( \HH_{0,d} \) as well as the minimal cardinality of normal generating sets. A subset of a group is a \emph{normal generating set} if the elements of the set together with all their conjugates generate the group. \begin{MainThm6} If \( d \in \bn\ssm\{1\} \), then: \begin{enumerate}[(1)] \item The abelianization of \( \PH_{0,d} \) is isomorphic to \( \bz^{d-1} \). \item The abelianization of \( \HH_{0,d} \) is isomorphic to \( \bz/2\bz \times \bz/2\bz \). \end{enumerate} \end{MainThm6} The computation of the abelianization of \( \PH_{0,d} \) and of \( \HH_{0,d} \) implies that the minimal cardinality of a normal generating set is at least \( d-1 \) and at least two, respectively. Our final theorem says that this is in fact an equality, and the proofs construct explicit normal generating sets of the desired cardinalities. \begin{MainThm7} If \( d \in \bn \ssm \{1\} \), then the minimal cardinality of a normal generating set for \( \PH_{0,d} \) is \( d-1 \) and is two for \( \HH_{0,d} \). \end{MainThm7} We do not have a good sense of whether one should expect the above theorems to hold for \( \alpha > 0 \).
A resolution in either direction would be interesting. \subsection*{Motivation and context} The end space of a non-compact manifold is an encoding of the different ways of escaping to infinity in the manifold, and this set of directions naturally inherits a topology from the manifold. For example, \( \br \) is two-ended while \( \br^2 \) is one-ended, and if a compact totally disconnected subset \( E \) of the 2-sphere \( \mathbb S^2 \) is removed from \( \mathbb S^2 \), then the end space of the resulting manifold, \( \mathbb S^2 \ssm E \), is homeomorphic to \( E \). The end space is a topological invariant of the manifold, and hence any homeomorphism between manifolds induces a homeomorphism on the end spaces. Therefore, given a manifold \( M \) with end space \( E \), there is a canonical homomorphism \( \Homeo(M) \to \Homeo(E) \) given by the action of \( \Homeo(M) \) on \( E \). Now, if \( E \) is a second-countable Stone space (i.e., \( E \) is second countable, compact, zero-dimensional, and Hausdorff), then \( E \) embeds as a subset of \( \mathbb S^2 \). Identifying \( E \) with such an embedding, it follows from \cite{RichardsClassification} that the canonical homomorphism \( \Homeo(\mathbb S^2 \ssm E) \to \Homeo(E) \) is surjective. Moreover, as any two isotopic homeomorphisms induce the same action on \( E \), this homomorphism factors to give an epimorphism \( \mcg(\mathbb S^2 \ssm E) \to \Homeo(E) \), where \( \mcg(\mathbb S^2\ssm E) \) is the \emph{mapping class group} of \( \mathbb S^2 \ssm E \), i.e., the group of isotopy classes of homeomorphisms \( \mathbb S^2 \ssm E \to \mathbb S^2 \ssm E \). The group \( \mcg(\mathbb S^2 \ssm E) \) is an example of a \emph{big} mapping class group, a class of groups that has been intensely investigated in recent years. We therefore see that theorems about big mapping class groups have ramifications for theorems regarding homeomorphism groups of second-countable Stone spaces.
Homeomorphism groups of Stone spaces have a long history of being studied from the perspective of automorphism groups of Boolean algebras, and the mapping class group perspective has brought a renewed interest. Ordinals are Stone spaces, and the topological structure of an ordinal is simpler than that of a general Stone space. This has led to a recent interest in homeomorphism groups of ordinals, e.g., see \cite{HernandezAmple, BestvinaClassification, BranmanGraphical}. Many of the theorems here are motivated by work of the last author and his collaborators on big mapping class groups, see \cite{VlamisHomeomorphism,AramayonaFirst}. But with that said, we stress that our results do not follow from that setting. For one, only countable successor ordinals can be realized as the end space of a manifold (assuming the manifold is second countable; even dropping the second countability assumption on the manifold, the types of ordinals that can appear are quite limited, see \cite{FernandezEnds} for a discussion of ends in non-metrizable manifolds). Second, if \( E = \omega^{\alpha+1} + 1 \) for some countable ordinal \( \alpha \), then our first slate of results fails for \( \mcg(\mathbb S^2 \ssm E) \). Let us explain: using the work of Domat--Dickmann \cite{DomatBig}, Malestein--Tao \cite{MalesteinSelf} showed that \( \mcg( \mathbb S^2 \ssm (\omega+1) ) \) surjects onto the additive group of real numbers. Now, there is an epimorphism \( \mcg(\mathbb S^2 \ssm E) \to \mcg(\mathbb S^2 \ssm (\omega+1)) \) obtained by ``filling in'' the ends that are neither maximal nor next-to-maximal rank (this is called a \emph{forgetful homomorphism} in the literature). It follows that \( \mcg(\mathbb S^2 \ssm E) \) also surjects onto the reals; in particular, \( \mcg(\mathbb S^2 \ssm E) \) is neither uniformly perfect, finitely normally generated, strongly bounded, nor strongly distorted.
Therefore, our results, together with the work of Malestein--Tao, show that the failure of \( \Homeo(\mathbb S^2 \ssm E) \) to be strongly bounded, uniformly perfect, and finitely normally generated stems from a two-dimensional obstruction, i.e., it cannot be detected by the action on the end space. It is also interesting to note that the results only fail in the category of abstract groups, meaning their analogs in the category of topological groups hold; in particular, Mann--Rafi \cite{MannLarge-scale} showed that \( \mcg(\mathbb S^2 \ssm E) \) is coarsely bounded, and it follows from the work of the last author with Lanier \cite{LanierMapping} that \( \mcg(\mathbb S^2 \ssm E) \) is topologically normally generated by a single mapping class (Baik \cite{BaikTopological} also showed finite topological normal generation for \( \mcg(\mathbb S^2 \ssm (\omega^{\alpha+1}\cdot d +1)) \), where \( \alpha \) is a countable ordinal and \( d \in \bn \)). \subsection*{Acknowledgments} This paper stems from a vertically integrated summer research program at Queens College supported by NGV's NSF Award DMS-2212922. NGV was also supported in part by PSC-CUNY Awards 66435-00 54 and Award 67380-00 55. The authors thank Kathryn Mann for the reference to Galvin's work and Ferr\'an Valdez for a discussion of his forthcoming work with collaborators on the geometry of homeomorphism groups of ordinals. \section{Ordinals and their topology} \label{sec:ordinals} We provide an introduction to ordinals and their topology, aimed at non-experts. In the first three subsections we cover the basics of ordinals, following \cite[\S4.1--2]{CiesielskiSet} and \cite[Chapter~2]{JechSet}. \subsection{Defining the ordinals} We will use von Neumann's definition of the ordinals.
\begin{Def}[Ordinal] A set \( \alpha \) is an \emph{ordinal} if it satisfies the following: \begin{enumerate}[(i)] \item if \( \beta \in \alpha \), then \( \beta \subset \alpha \); \item if \( \beta, \gamma \in \alpha \), then \( \beta = \gamma \), \( \beta \in \gamma \), or \( \gamma \in \beta \); and \item if \( B \subset \alpha \) is nonempty, then there exists \( \gamma \in B \) such that \( \gamma \cap B = \varnothing \). \end{enumerate} \end{Def} Every element of an ordinal is itself an ordinal. An ordinal can be well-ordered by set inclusion, i.e., if \( \alpha \) is an ordinal and \( \beta, \gamma \in \alpha \), declaring \( \beta \leq \gamma \) if and only if \( \beta \subseteq \gamma \) yields a well-ordering relation on \( \alpha \). Note that if \( \beta, \gamma \in \alpha \), then \( \gamma < \beta \) if and only if \( \gamma \in \beta \). In particular, if \( \beta \in \alpha \), then \( \beta = \{ \eta \in \alpha : \eta < \beta \} \). If \( \alpha \) and \( \beta \) are order-isomorphic ordinals, then \( \alpha = \beta \); it now follows from basic properties of well-ordered sets that given two ordinals \( \alpha \) and \( \beta \), either \( \alpha = \beta \), \( \alpha \in \beta \), or \( \beta \in \alpha \). We can therefore compare any two distinct ordinals, writing \( \beta < \alpha \) if and only if \( \beta \in \alpha \) (here, we have slightly abused notation, using \( < \) both to denote a relation on a given ordinal and to compare two distinct ordinals; however, no confusion arises with this abuse). In other words, \begin{quote} \emph{The ordinal \( \alpha \) is the set of all ordinals less than \( \alpha \).} \end{quote} Given any ordinal \( \alpha \), the set \( \alpha \cup \{\alpha\} \) is an ordinal, called the \emph{successor of \( \alpha \)}.
An ordinal is a \emph{successor ordinal} if it is the successor to some ordinal; otherwise, it is a \emph{limit ordinal}\footnote{We follow \cite{CiesielskiSet} in considering 0 a limit ordinal.}. From the discussion above, any nonempty set of ordinals has a least element; moreover, there is a least ordinal, which is necessarily the empty set and is denoted by 0. Therefore, 0 is the first limit ordinal. We can now proceed to construct the natural numbers, identifying 1 with \( \{0\} \), or equivalently, \( \{\varnothing\} \), 2 with \( \{ 0, 1\} \), or equivalently, \( \{ \varnothing, \{\varnothing\} \} \), etc. Throughout, we will identify the natural numbers, denoted \( \bn \), with \( \bn = \{ 1, 2, \ldots \} \); in particular, \( 0 \notin \bn \). The natural numbers together with 0 constitute the set of finite ordinals, which itself is an ordinal, denoted \( \omega \); equivalently, \( \omega \) is the first infinite ordinal. Note that \( \omega \) is the first non-zero limit ordinal. Let us finish this subsection by recalling transfinite induction, which follows from the existence of minimal elements in well-ordered sets. \begin{Thm}[Transfinite induction] Let \( \alpha \) be an ordinal, and let \( S \subset \alpha \). Suppose \( S \) satisfies the following: if \( \lambda \in \alpha \) and \( \beta \in S \) for all \( \beta < \lambda \), then \( \lambda \in S \). Then \( S = \alpha \). \qed \end{Thm} The above statement is a bit awkward because we are avoiding the language of classes, allowing us to make a statement that is valid in ZFC. \subsection{Ordinal arithmetic} With the definition of an ordinal, it follows from the basic theory of well-ordered sets that every well-ordered set is order isomorphic to a unique ordinal, a fact we will use to define ordinal arithmetic. \begin{Def}[Ordinal addition] \label{def:addition} Let \( \alpha \) and \( \beta \) be ordinals.
We define the \emph{sum} of \( \alpha \) and \( \beta \), denoted \( \alpha + \beta \), to be the ordinal that is order isomorphic to the set \( \alpha \sqcup \beta \) equipped with the order \( \preceq \) given by \( \eta \preceq \gamma \) if and only if either \begin{itemize} \item \( \eta, \gamma \in \alpha \) and \( \eta \leq \gamma \), \item \( \eta, \gamma \in \beta \) and \( \eta \leq \gamma \), or \item \( \eta \in \alpha \) and \( \gamma \in \beta \). \end{itemize} \end{Def} Observe that \( \alpha +1 \) is the successor of \( \alpha \) for any ordinal \( \alpha \). Note that addition of ordinals is not commutative, e.g., \( 1 + \omega \neq \omega +1 \), as \( 1 + \omega \) is a limit ordinal (namely \( \omega \)) and \( \omega+1 \) is a successor ordinal. However, ordinal addition is associative. \begin{Def}[Ordinal multiplication] \label{def:multiplication} We define the \emph{product} of \( \alpha \) and \( \beta \), denoted \( \alpha\cdot\beta \), to be the ordinal that is order isomorphic to the set \( \beta \times \alpha \) equipped with the lexicographical ordering. \end{Def} As with addition, multiplication of ordinals is not commutative, e.g., \( 2 \cdot \omega \neq \omega \cdot 2 \), as the elements of \( 2 \cdot \omega \) are all finite ordinals (i.e., \( 2\cdot \omega = \omega \)) whereas \( \omega \in \omega \cdot 2 \). \begin{Def}[Ordinal exponentiation] We define the \emph{exponentiation} of \( \alpha \) by \( \beta \), denoted \( \alpha^\beta \), to be the ordinal that is order isomorphic to the set of functions \[ \{ f \co \beta \to \alpha : |\{\eta\in \beta : f(\eta) \neq 0 \}| < \infty\} \] equipped with the order \( \prec \) given by \( f \prec g \) if and only if there exists \( \eta \in \beta \) such that \( f(\eta) < g(\eta) \) and \( f(\gamma) = g(\gamma) \) for all \( \gamma > \eta \). \end{Def} All the above ordinal arithmetic operations can be defined using transfinite recursion.
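As a quick illustration of how these operations differ from their cardinal-arithmetic counterparts, note that \[ 2^{\omega} = \bigcup_{k < \omega} 2^{k} = \omega , \] where the first equality is an instance of the recursive description of exponentiation recalled momentarily. In particular, \( 2^{\omega} \) is a countable ordinal, in contrast with the uncountable cardinal \( 2^{\aleph_0} \); similarly, \( \omega^{2} = \omega \cdot \omega \) is countable, though strictly larger than \( \omega \cdot 2 \).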
It is worth noting how this works for exponentiation: we set \( \alpha^0 = 1 \), \( \alpha^{\beta+1} = \alpha^\beta \cdot \alpha \) for all \( \beta \), and \( \alpha^\lambda = \bigcup_{\beta<\lambda} \alpha^\beta \) for a limit ordinal \( \lambda \). We mention this as it is important for us to know that \begin{equation} \label{eq:exponent} \omega^{\alpha+1} = \omega^\alpha \cdot \omega . \end{equation} We can now provide a normal form for ordinals, known as Cantor normal form: \begin{Thm}[Cantor Normal Form] If \( \alpha \) is a non-zero ordinal, then \( \alpha \) can be uniquely represented in the form \begin{equation} \label{eq:cantor} \alpha = \omega^{\beta_1}\cdot k_1 + \cdots + \omega^{\beta_n}\cdot k_n, \end{equation} where \( n \in \bn \), \( \alpha \geq \beta_1 > \cdots > \beta_n \), and \( k_1, \ldots, k_n \in \bn \). \qed \end{Thm} Following \cite{KieftenbeldClassification}, we call \( \beta_1 \) in \eqref{eq:cantor} the \emph{limit capacity} of \( \alpha \) and \( k_1 \) the \emph{coefficient} of \( \alpha \). \subsection{Topology of ordinals} An ordinal has a canonical topology given by the order topology; for all that follows, whenever we reference the topology of an ordinal, we are referring to the order topology. Let \( \alpha \) be an ordinal. A subbasis for the order topology on \( \alpha \) is given by the sets of form \( [0, \beta) = \{ \eta \in \alpha : \eta < \beta \} \) and \( (\beta, \alpha) = \{ \eta \in\alpha : \beta < \eta \} \), where \( \beta \in \alpha \). Now, let \( \lambda \in \alpha \) be a limit ordinal. Note that if \( \beta < \lambda \) then \( \beta+1 < \lambda \), and observe that if \( \lambda \in \alpha \) is a limit ordinal, then \( [0, \lambda) = \bigcup_{\eta < \lambda} [0, \eta+1) \). It follows that the sets of the form \( [0, \beta+1) = [0,\beta] \) and \( (\beta, \alpha)=[\beta+1, \alpha) \) with \( \beta \in \alpha \) form a subbasis for the order topology. 
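To make this concrete, consider \( \omega + 1 \). Each finite ordinal is isolated, while the sets \( (k, \omega] = [k+1, \omega] \) with \( k < \omega \) form a neighborhood basis of \( \omega \); thus, \( \omega + 1 \) is the one-point compactification of the countable discrete space \( \omega \). Similarly, in \( \omega^{2} + 1 \), the isolated points are 0 and the successor ordinals, each point \( \omega \cdot k \) with \( k \in \bn \) is an accumulation point of the isolated points below it, and \( \omega^{2} \) is an accumulation point of \( \{ \omega\cdot k : k \in \bn \} \). Note that all of the basic sets just described are clopen.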
In particular, the order topology admits a basis consisting of clopen sets, i.e., it is zero-dimensional. Moreover, it is an exercise to check that sets of the form \( [\beta, \gamma] \) are compact and that if \( \lambda \) is a limit ordinal then \( [0, \lambda) \) fails to be compact. We summarize this in the following proposition. \begin{Prop} An ordinal is a zero-dimensional Hausdorff topological space. Moreover, a non-zero ordinal is compact if and only if it is a successor ordinal. \qed \end{Prop} For the remainder of the article, we will focus on successor ordinals, and hence, the compact ordinals. The next goal of this subsection is to classify the successor ordinals up to homeomorphism, which is accomplished in \Cref{thm:classification}. In the process, we establish several preliminary results and introduce the Cantor--Bendixson derivative. \begin{Rem} A topological classification of ordinals, both successor and limit, is given by Kieftenbeld--L{\"o}we in \cite{KieftenbeldClassification}, a report for the Institute for Logic, Language, and Computation. We reproduce a complete argument for the classification of the compact ordinals here as it is instructive and convenient for the reader. \end{Rem} \begin{Lem} \label{lem:commutative} If \( \alpha \) and \( \beta \) are successor ordinals, then \( \alpha + \beta \) is homeomorphic to the topological disjoint union of \( \alpha \) and \( \beta \). In particular, \( \alpha + \beta \) and \( \beta + \alpha \) are homeomorphic. \end{Lem} \begin{proof} By definition, \( \alpha + \beta \) is order isomorphic to the disjoint union of \( \alpha \) and \( \beta \) with the order \( \preceq \) given in Definition~\ref{def:addition}; call this well-ordered set \( X \) and equip it with its order topology, so that \( X \) is homeomorphic to \( \alpha + \beta \). The inclusions of \( \alpha \) and \( \beta \) into \( X \) are topological embeddings. 
The image of \( \beta \) in \( X \) is closed and the image of \( \alpha \) is open. As \( \alpha \) is compact, it must also be closed. Therefore, both \( \alpha \) and \( \beta \) are clopen subsets of \( X \), implying that \( X \), and hence \( \alpha + \beta \), is homeomorphic to the topological disjoint union of \( \alpha \) and \( \beta \). Similarly, \( \beta + \alpha \) is homeomorphic to the topological disjoint union of \( \alpha \) and \( \beta \), implying that \( \alpha + \beta \) and \( \beta + \alpha \) are homeomorphic. \end{proof} \begin{Lem} \label{lem:sum-equal} Let \( \alpha \) and \( \beta \) be ordinals. If the limit capacity of \( \beta \) is strictly less than that of \( \alpha \), then \( \beta + \alpha = \alpha \). \end{Lem} \begin{proof} As the limit capacity of \( \beta \) is strictly less than that of \( \alpha \), it follows that \( \beta \cdot \omega \leq \alpha \), as \( \beta \cdot k < \alpha \) for all \( k \in \omega \). Let \( \gamma \) be the ordinal order isomorphic to the complement of \( \beta \cdot \omega \) in \( \alpha \). Then \( \alpha = \beta \cdot \omega + \gamma \). So, \( \beta + \alpha = \beta + ( \beta \cdot \omega + \gamma ) = ( \beta + \beta \cdot \omega) + \gamma \). It is readily checked that \( \beta + \beta \cdot \omega = \beta \cdot (1 + \omega) = \beta \cdot \omega \), and hence \( \beta + \alpha = \alpha \). \end{proof} \begin{Prop} \label{prop:reverse} Let \( \alpha \) be a successor ordinal. If \( \beta \) is the limit capacity of \( \alpha \) and \( k \) is its coefficient, then \( \alpha \) is homeomorphic to \( \omega^\beta \cdot k + 1 \). \end{Prop} \begin{proof} As \( \alpha \) is a successor, the Cantor normal form of \( \alpha \) is given by \[ \alpha = \omega^{\beta_1}\cdot k_1 + \cdots + \omega^{\beta_n} \cdot k_n + 1, \] where \( \beta_1 = \beta \) and \( k_1 = k \). We can therefore write \[ \alpha = \left(\omega^{\beta}\cdot k + 1 \right) + \gamma, \] where \( \gamma \) is a successor ordinal.
By \Cref{lem:commutative}, \( \alpha \) is homeomorphic to \( \gamma + \left(\omega^{\beta} \cdot k + 1\right) \). As the limit capacity of \( \gamma \) is strictly less than that of \( \omega^{\beta} \cdot k + 1 \), \Cref{lem:sum-equal} implies that \( \gamma + \left(\omega^{\beta} \cdot k + 1\right) = \omega^{\beta} \cdot k + 1 \), and hence \( \alpha \) is homeomorphic to \( \omega^{\beta}\cdot k + 1 \). \end{proof} To classify the compact ordinals, we need to show that the limit capacity and the coefficient of a successor ordinal are topological invariants. For this, we need the Cantor--Bendixson derivative. \begin{Def}[Cantor--Bendixson derivative] Let \( X \) be a topological space. The \emph{derived set of \( X \)}, denoted \( X' \), is the subset of \( X \) consisting of all the accumulation points in \( X \). Given an ordinal \( \alpha \), the \( \alpha^{\text{th}} \)-Cantor--Bendixson derivative of \( X \), denoted \( X^{(\alpha)} \), is defined via transfinite recursion as follows: \begin{itemize} \item \( X^{(0)} = X \), \item \( X^{(\alpha+1)} = \left(X^{(\alpha)}\right)' \), and \item if \( \lambda \) is a limit ordinal, then \( X^{(\lambda)} = \bigcap_{\beta < \lambda} X^{(\beta)} \). \end{itemize} \end{Def} Next, we introduce the Cantor--Bendixson degree and rank. We will not make the most general definition, rather focusing on a definition that suits our purposes. \begin{Def}[Cantor--Bendixson rank and degree] Let \( X \) be a topological space. The \emph{Cantor--Bendixson rank of \( X \)} is the least ordinal \( \alpha \) such that \( X^{(\alpha)} \) is empty, if such an ordinal exists; otherwise, it is set to be \( \infty \). If \( X \) is compact and its Cantor--Bendixson rank is an ordinal \( \alpha \), then \( \alpha \) is necessarily a successor ordinal, allowing us to define the \emph{Cantor--Bendixson degree of \( X \)} to be the cardinality of \( X^{(\alpha-1)} \), which is finite.
\end{Def} Using the Cantor normal form and transfinite induction, it is an exercise to show the following: \begin{Prop} \label{prop:forward} The Cantor--Bendixson rank of a successor ordinal is equal to the successor of its limit capacity and its Cantor--Bendixson degree is equal to its coefficient. \qed \end{Prop} The classification of successor ordinals up to homeomorphism now follows immediately from \Cref{prop:reverse} and \Cref{prop:forward}. \begin{Thm}[Topological classification of successor ordinals] \label{thm:classification} Two successor ordinals are homeomorphic if and only if their limit capacities and their coefficients agree, or equivalently, their Cantor--Bendixson ranks and degrees agree. \qed \end{Thm} Next, we establish a stability property for neighborhoods of elements of ordinals. First, we need to extend the notion of Cantor--Bendixson rank to elements of ordinals. \begin{Def}[Rank of a point] Let \( X \) be a topological space of Cantor--Bendixson rank \( \alpha \), with \( \alpha \neq \infty \). For \( x \in X \), we define the \emph{rank of \( x \)} to be the least ordinal \( \beta \) such that \( x \notin X^{(\beta)} \). \end{Def} We can use the classification of successor ordinals to show that each element of an ordinal is \emph{stable} in the sense introduced by Mann--Rafi \cite{MannLarge-scale}. \begin{Def}[Stable point] Let \( X \) be a zero-dimensional topological space. A point \( x \in X \) is \emph{stable} if it admits a neighborhood basis consisting of pairwise homeomorphic clopen sets. \end{Def} \begin{Lem} \label{lem:stable} Let \( \alpha \) be an ordinal, let \( b \in \alpha +1 \), let \( \beta+1 \) be the rank of \( b \), and let \( U \) be a clopen neighborhood of \( b \) in \( \alpha + 1 \). If every element of \( U \ssm \{b\} \) has rank strictly less than \( \beta + 1 \), then \( U \) is homeomorphic to \( \omega^{\beta} + 1 \).
\end{Lem} \begin{proof} Observe that \( b \in U^{(\eta)} \) for every \( \eta \leq \beta \), while, as every other element of \( U \) has rank strictly less than \( \beta + 1 \), \( U^{(\beta)} = \{b\} \) and \( U^{(\beta+1)} = \varnothing \). It follows that, as \( U \) is a well-ordered set, \( U \) must be order isomorphic to an ordinal of Cantor--Bendixson rank \( \beta + 1 \) and Cantor--Bendixson degree 1. Therefore, by \Cref{thm:classification}, \( U \) is homeomorphic to \( \omega^\beta + 1 \). \end{proof} Note that all sufficiently small clopen neighborhoods of \( b \) in \( \alpha + 1 \) satisfy the hypothesis of \Cref{lem:stable}, allowing us to conclude that \( b \) is stable. \begin{Prop} Every element of an ordinal is stable. \qed \end{Prop} As already noted, \Cref{lem:stable} says that all sufficiently small clopen neighborhoods of \( b \) in \( \alpha + 1 \) are homeomorphic. It will be useful to have a name for such sets. \begin{Def}[Stable neighborhood] For an ordinal \( \alpha \), a clopen neighborhood \( U \) of an element \( b \) in \( \alpha + 1 \) is \emph{stable} if \( b \) is the unique highest rank element in \( U \). \end{Def} By \Cref{lem:stable}, a stable neighborhood of a rank \( \beta + 1 \) element of \( \alpha + 1 \) is homeomorphic to \( \omega^\beta + 1 \). Our theorems deal with the successor ordinals whose limit capacities are successor ordinals, that is, ordinals that are homeomorphic to \( \omega^{\alpha + 1}\cdot d + 1 \). For the remainder of this section, we focus on \( d =1 \), introducing a convenient coordinate system. For any ordinal \( \alpha \), the successor of \( \omega^\alpha \) is \( \omega^\alpha \cup \{ \omega^\alpha \} = \omega^\alpha + 1 \); topologically, this says that \( \omega^\alpha + 1 \) is the one-point compactification of \( \omega^\alpha \). Moreover, \( \omega^{\alpha} \) is the unique rank \( \alpha + 1 \) element of \( \omega^{\alpha} + 1 \), which means that every homeomorphism of \( \omega^\alpha +1 \) fixes \( \omega^\alpha \), yielding the following.
\begin{Lem} \label{lem:isomorphism} For any ordinal \( \alpha \), \( \Homeo(\omega^\alpha + 1) \) is isomorphic to \( \Homeo(\omega^\alpha) \). \qed \end{Lem} In the next section, where we focus on Cantor--Bendixson degree one ordinals, the prior lemma allows us to focus on \( \omega^\alpha \) rather than \( \omega^{\alpha} + 1 \). The advantage of this viewpoint is the following proposition. \begin{Prop} \label{prop:coordinates} For any ordinal \( \alpha \), the spaces \( \omega^{\alpha+1} \) and \( \bn \times (\omega^\alpha + 1) \) are homeomorphic. \end{Prop} \begin{proof} For each \( k \in \bn \), \( \omega^\alpha \cdot k +1 \in \omega^{\alpha +1} \). Now, for \( \eta \in \omega^{\alpha +1} \), there exists \( k \in \bn \) such that \( \eta \in \omega^\alpha\cdot k + 1 \). It follows that \begin{align*} \omega^{\alpha+1} &= \bigcup_{k\in\bn} (\omega^\alpha\cdot k +1 ) \\ &= [0, \omega^\alpha] \cup \bigcup_{k\in\bn} (\omega^\alpha\cdot k, \omega^\alpha \cdot (k+1)] \\ &= [0, \omega^\alpha] \cup \bigcup_{k\in\bn} [\omega^\alpha\cdot k + 1, \omega^\alpha \cdot (k+1)] \\ &= \bigcup_{k\in \omega} I_k, \end{align*} where we let \( I_0 = [0, \omega^\alpha] \) and \( I_k = [\omega^\alpha\cdot k + 1, \omega^\alpha \cdot (k+1)] \) for \( k \in \bn \). Now, each \( I_k \) is clopen, and \( I_j \cap I_k = \varnothing \) for distinct \( j,k \in \omega \). Therefore, \( \omega^{\alpha+1} \) is homeomorphic to the topological disjoint union of the \( I_k \). To finish, observe that each of the \( I_k \) is order isomorphic to, and hence homeomorphic to, \( \omega^\alpha + 1 \), implying that \( \omega^{\alpha+1} \) is homeomorphic to \( \omega \times (\omega^\alpha + 1) \), which itself is clearly homeomorphic to \( \bn \times (\omega^\alpha+1) \).
\end{proof} \subsection{Topology of homeomorphism groups} Though our focus is on homeomorphism groups as abstract groups, a key subgroup in what follows is defined by taking a closure in an appropriate topology on the homeomorphism group of an ordinal. \begin{Def}[Permutation topology] For an ordinal \( \alpha \), the \emph{permutation topology} on the group \( \Homeo(\alpha+1) \) is generated by the sets of the form \( U(A,B) = \{ h \in \Homeo(\alpha+1) : h(A) = B \} \), where \( A \) and \( B \) are clopen subsets of \( \alpha +1 \). \end{Def} The permutation topology is motivated by the model theory perspective and arises from the identification, via Stone duality, of \( \Homeo(\alpha+1) \) with the automorphism group of the Boolean algebra given by the clopen subsets of \( \alpha+1 \). It is an exercise to show that the permutation topology agrees with the compact-open topology, that is, the topology generated by sets of the form \( U(K,W) = \{h\in \Homeo(\alpha + 1) : h(K) \subset W \} \), where \( K \) is compact and \( W \) is open in \( \alpha + 1 \). A theorem of Arens \cite{ArensTopologies}---which the reader may treat as an exercise---tells us that the homeomorphism group of a compact Hausdorff space equipped with the compact-open topology is a topological group. From this discussion, we record the following. \begin{Prop} For any ordinal \( \alpha \), the group \( \Homeo(\alpha+1) \) equipped with the permutation topology is a topological group. \qed \end{Prop} \section{Cantor--Bendixson degree one} Throughout this section, we study \( \Homeo(\omega^{\alpha+1}) \) instead of \( \Homeo(\omega^{\alpha+1}+1) \), but recall they are isomorphic by \Cref{lem:isomorphism}. Before beginning, let us recall two standard definitions. Let \( X \) be a topological space.
A family of subsets \( \{ Y_n \}_{n\in\bn} \) of \( X \) is \emph{locally finite} if, given any compact set \( K \), the cardinality of the set \( \{ n\in \bn : K \cap Y_n \neq \varnothing \} \) is finite. The \emph{support} of a homeomorphism \( f \co X \to X \) is the closure of the set \( \{ x \in X : f(x) \neq x \} \). Given a sequence \( \{ f_n \}_{n\in\bn} \) of homeomorphisms \( X \to X \) whose supports form a locally finite family of pairwise-disjoint sets, the infinite product of the \( f_n \), denoted \( \prod_{n\in\bn} f_n \), is the homeomorphism that agrees with \( f_n \) on the support of \( f_n \) for each \( n \in \bn \) and restricts to the identity elsewhere. In the compact-open topology on \( \Homeo(X) \), the infinite product can be realized as the limit of the finite products, that is, \( \prod_{n\in\bn} f_n = \lim_{n \to \infty} \prod_{k=1}^n f_k \). \subsection{Topological moieties} A moiety in the natural numbers refers to a subset that has the same cardinality as its complement. We extend the definition to ordinals of the form \( \omega^{\alpha+1} \). \begin{Def} A subset \( A \) of \( \omega^{\alpha+1} \) is a \emph{topological moiety} if \( A \) is clopen and both \( A \) and \( \omega^{\alpha+1} \ssm A \) contain infinitely many rank \( \alpha +1 \) points. \end{Def} We will require several properties of moieties. \begin{Prop} \label{prop:moieties same} Let \( \alpha \) be an ordinal. Every topological moiety of \( \omega^{\alpha+1} \) is homeomorphic to \( \omega^{\alpha+1} \). \end{Prop} \begin{proof} Let \( A \) be a moiety of \( \omega^{\alpha+1} \). Then \( A \), equipped with the order inherited from \( \omega^{\alpha+1} \), is a well-ordered set, and hence order isomorphic to an ordinal. Let \( \{a_n\}_{n\in\bn} \) be an enumeration of the maximal rank elements of \( A \) such that \( a_n < a_m \) whenever \( n < m \). Let \( U_1 = [0, a_1] \cap A \), and let \( U_n = [a_{n-1}+1, a_n] \cap A \) for \( n > 1 \).
Then \( \{U_n\}_{n\in\bn} \) is a collection of pairwise-disjoint clopen subsets satisfying \( A = \bigcup_{n\in\bn} U_n \), implying that \( A \) is homeomorphic to the topological disjoint union of the \( U_n \). Now, by the classification of successor ordinals (\Cref{thm:classification}), each of the \( U_n \) is homeomorphic to \( \omega^\alpha + 1 \); therefore, \( A \) is homeomorphic to \( \bn \times (\omega^\alpha+1) \), which is in turn homeomorphic to \( \omega^{\alpha+1} \) by \Cref{prop:coordinates}. \end{proof} \Cref{prop:moieties same} yields the following ``change of coordinates principle''. \begin{Lem} \label{lem:change of coordinates} Let \( \alpha \) be an ordinal. If \( A \) and \( B \) are topological moieties, then there exists \( \sigma \in \Homeo(\omega^{\alpha+1}) \) such that \( \sigma(A) = B \). Moreover, if \( A \cap B = \varnothing \), then \( \sigma \) can be chosen to be an involution supported in \( A \cup B \). \end{Lem} \begin{proof} By \Cref{prop:moieties same}, there exists a homeomorphism \( f \co A \to B \). Let \( C \) and \( D \) be the complements of \( A \) and \( B \), respectively. As the complement of a topological moiety is a topological moiety, there exists a homeomorphism \( g\co C \to D \). Define \( \sigma \in \Homeo(\omega^{\alpha+1}) \) such that \( \sigma|_A = f \) and \( \sigma|_C = g \). Then \( \sigma(A) = B \). Now suppose that \( A \cap B = \varnothing \). Define \( \iota \in \Homeo(\omega^{\alpha+1}) \) by \[ \iota(x) = \left\{ \begin{array}{ll} f(x) & \text{if } x \in A \\ f^{-1}(x) & \text{if } x \in B \\ x & \text{otherwise} \end{array} \right. \] Then \( \iota \) is an involution supported in \( A\cup B \) mapping \( A \) onto \( B \). \end{proof} To finish this subsection, we need to establish the existence of a homeomorphism that will translate a topological moiety off itself.
\begin{Def} Given a topological moiety \( A \) in \( \omega^{\alpha+1} \), we call \( \vp \in \Homeo(\omega^{\alpha+1}) \) an \emph{\( A \)-translation} if \( \vp^n(A) \cap \vp^m(A) = \varnothing \) for all distinct \( n,m \in \bz \). If, in addition, \( \{ \vp^n(A) \}_{n\in\bz} \) is locally finite, then we say that \( \vp \) is a \emph{convergent \( A \)-translation}. \end{Def} \begin{Lem} \label{lem:translation} Let \( \alpha \) be an ordinal. If \( A \) is a topological moiety in \( \omega^{\alpha+1} \), then there exists a convergent \( A \)-translation \( \vp \in \Homeo(\omega^{\alpha+1}) \). Moreover, \( \vp \) can be chosen to be supported in a topological moiety. \end{Lem} \begin{proof} By \Cref{prop:coordinates}, \( \omega^{\alpha+1} \) is homeomorphic to \( \bn \times (\omega^\alpha +1) \), which in turn is homeomorphic to \( \bz \times \bn^2 \times (\omega^\alpha +1) \). Consider the subset \( A' \) of the latter space given by \( A' = \{ 0 \} \times \{ 1 \} \times \bn \times (\omega^\alpha+1) \). Define \( \tau' \) by \( \tau'(\ell, 1,n,x) = (\ell+1,1, n, x) \) and \( \tau'(\ell, m ,n , x) = ( \ell, m ,n , x) \) whenever \( m > 1\). Fix a homeomorphism \( \psi \co \bz \times \bn^2 \times (\omega^\alpha +1) \to \omega^{\alpha+1} \). Then \( \psi(A') \) is a topological moiety, and by \Cref{lem:change of coordinates}, we can assume that \( \psi(A') = A \). It follows that \( \vp = \psi \circ \tau'\circ \psi^{-1} \) is a convergent \( A \)-translation. Moreover, as \( \tau' \) is supported in \( \bz \times \{1 \} \times \bn \times (\omega^\alpha + 1) \), we have that \( \vp \) is supported in a topological moiety. \end{proof} \subsection{Galvin's lemma} The key behind all the proofs in the remainder of the section is a uniform fragmentation result, i.e., given two topological moieties whose union is a topological moiety, we provide a way of writing any homeomorphism as a composition of three homeomorphisms, each of which is supported in one of the two given moieties.
We call this uniform fragmentation result Galvin's lemma, as it is an extension of Galvin's \cite[Lemma~2.1]{GalvinGenerating} from the setting of permutation groups to \( \Homeo(\omega^{\alpha+1}) \). Given a topological moiety \( A \) of \( \omega^{\alpha+1} \), we let \[ F_A = \{ f \in \Homeo(\omega^{\alpha+1}) : f(a) = a \text{ for all } a \in A\}. \] \begin{Prop}[Galvin's lemma for ordinals] \label{prop:galvin} Let \( \alpha \) be an ordinal, and let \( A \) and \( B \) be disjoint topological moieties in \( \omega^{\alpha+1} \). If \( A \cup B \) is a topological moiety, then \[ \Homeo(\omega^{\alpha+1}) = F_AF_BF_A \cup F_BF_AF_B. \] \end{Prop} \begin{proof} Fix \( h \in \Homeo(\omega^{\alpha +1}) \). Let \( C \) denote the complement of \( A \cup B \). As \[ C = (C \ssm h(A)) \cup (C \ssm h(B)), \] at least one of \( C \ssm h(A) \) or \( C \ssm h(B) \) is a topological moiety. First, suppose that \( C \ssm h(A) \) is a topological moiety. We can then write \( C = M_1 \cup M_2 \), where \( M_1 \) and \( M_2 \) are disjoint topological moieties and \( h(A) \cap C \subset M_1 \). By \Cref{lem:change of coordinates}, it is possible to choose \( f_1 \in F_A \) such that \( f_1(B \cup M_1) = C \) and \( f_1(M_2) = B \). Observe that \( (f_1\circ h)(A) \) is contained in \( A \cup C \) and is disjoint from a moiety contained in \( C \). It follows that there exists \( f_2 \in F_B \) such that \( f_2(f_1(h(A))) = A \). Therefore, \( f_2 \circ f_1 \circ h = f_3 \circ f_4 \), where \( f_3 \) is supported in \( A \) and \( f_4 \) is supported in the complement of \( A \). So, \[ h = f_1^{-1} \circ (f_2^{-1} \circ f_3) \circ f_4, \] with \( f_1^{-1}, f_4 \in F_A \) and \( f_2^{-1}, f_3 \in F_B \), implying \( h \in F_A F_B F_A \). As noted above, if \( C \ssm h(A) \) is not a topological moiety, then \( C \ssm h(B) \) must be a topological moiety. In this case, a similar argument establishes that \( h \in F_B F_A F_B \). 
\end{proof} \subsection{Normal generation and uniform perfectness} We now turn to (1) showing that \( \Homeo(\omega^{\alpha+1}) \) is a uniformly perfect group and (2) classifying the normal generators of \( \Homeo(\omega^{\alpha+1}) \). Let us recall several definitions. A group \( G \) is \emph{perfect} if it is equal to its commutator subgroup, i.e., the subgroup generated by the set \( \{ [x,y] : x,y \in G \} \), where \( [x,y] = xyx^{-1}y^{-1} \); it is \emph{uniformly perfect} if there exists some \( k \in \bn \) such that every element of \( G \) can be expressed as a product of \( k \) commutators, and the minimum such \( k \) is called the \emph{commutator width} of \( G \). \begin{Lem} \label{lem:commutator} Let \( \alpha \) be an ordinal, and let \( h \in \Homeo(\omega^{\alpha+1}) \). If \( h \) is supported in a topological moiety, then \( h \) can be expressed as a single commutator. \end{Lem} \begin{proof} Let \( A \) be a topological moiety that contains the support of \( h \). By \Cref{lem:translation}, there exists a convergent \( A \)-translation \( \tau \in \Homeo(\omega^{\alpha+1}) \). We now apply a standard commutator trick. Setting \( \sigma = \prod_{n=0}^\infty (\tau^n \circ h \circ \tau^{-n}) \), it is readily checked that \( h = [\sigma, \tau] \): indeed, \( \tau \circ \sigma \circ \tau^{-1} = \prod_{n=1}^\infty (\tau^n \circ h \circ \tau^{-n}) \), and hence, as the factors of \( \sigma \) have pairwise-disjoint supports, \( [\sigma, \tau] = \sigma \circ \left( \tau \circ \sigma \circ \tau^{-1} \right)^{-1} = h \). \end{proof} \begin{Thm} \label{thm:mainthm2} If \( \alpha \) is an ordinal, then \( \Homeo(\omega^{\alpha+1}) \) is uniformly perfect, and the commutator width is at most three. \end{Thm} \begin{proof} Let \( h \in \Homeo(\omega^{\alpha+1}) \). By \Cref{prop:galvin}, \( h = h_1 h_2 h_3 \), with each \( h_i \) supported in a topological moiety. By \Cref{lem:commutator}, each \( h_i \) is a commutator, and hence, \( h \) can be expressed as a product of three commutators.
\end{proof} Recall that an element \( g \) in a group \( G \) is a \emph{normal generator} if every element of \( G \) can be expressed as a product of conjugates of \( g \) and \( g^{-1} \), or equivalently, \( G \) is the smallest normal subgroup containing \( g \). The element \( g \) is a \emph{uniform normal generator} if there exists \( k \in \bn \) such that every element of \( G \) can be expressed as a product of at most \( k \) conjugates of \( g \) and \( g^{-1} \); the \emph{\( g \)-width} of \( G \) is the minimum such \( k \). To classify normal generators, we use a technique that goes back to at least the work of Anderson \cite{AndersonAlgebraic}. As with uniform perfectness, we first investigate writing a homeomorphism supported in a moiety as a bounded-length product of conjugates of a given element and its inverse; we can then use Galvin's lemma to upgrade this to express any element in the same form. \begin{Prop}[Anderson's method for ordinals] \label{prop:anderson} Let \( \alpha \) be an ordinal, and let \( h \in \Homeo(\omega^{\alpha+1}) \). If there exists a topological moiety \( A \) such that \( h(A) \cap A = \varnothing \), then each element of \( \Homeo(\omega^{\alpha+1}) \) supported in a topological moiety can be expressed as a product of four conjugates of \( h \) and \( h^{-1} \). \end{Prop} \begin{proof} Let \( f \in \Homeo(\omega^{\alpha+1}) \) be supported in a moiety. Let \( A \) be a topological moiety such that \( h(A) \cap A = \varnothing \), and let \( B \) be a submoiety of \( A \), that is, a topological moiety contained in \( A \) whose complement in \( A \) also contains infinitely many rank \( \alpha + 1 \) points. By \Cref{lem:change of coordinates}, up to conjugating \( f \), we may assume that \( f \) is supported in \( B \). \Cref{prop:moieties same} and \Cref{lem:translation} imply the existence of a convergent \( B \)-translation \( \tau \) supported in \( A \).
Define \( \psi \in \Homeo(\omega^{\alpha+1}) \) by \[ \psi(x) = \left\{ \begin{array}{ll} h(x) & \text{if } x \in A \\ (\tau \circ h^{-1})(x) & \text{if } x \in h(A) \\ x & \text{otherwise} \end{array}\right. \] Let \( \sigma = \prod_{n=0}^\infty (\tau^n \circ f \circ \tau^{-n}) \), and let \( \eta = [\sigma, h] \). Then one can check that \[ f = \eta \circ \psi \circ \eta \circ \psi^{-1}. \] Informally, one sees this by noting that \( \sigma \) is ``performing \( f \) on all the forward iterates of \( B \) in \( A \)'', and hence so is \( \eta \), but \( \eta \) is also ``performing \( f^{-1} \) on all the forward iterates of \( h(B) \) in \( h(A) \)''. Now, \( \psi \circ \eta \circ \psi^{-1} \) restricts to the inverse of \( \eta \) on the complement of \( B \) and is the identity on \( B \) (this is on account of \( \psi|_{h(A)} = \tau \circ h^{-1} |_{h(A)} \)). Therefore, the composition \( \eta \circ (\psi \circ \eta \circ \psi^{-1}) \) is supported on \( B \), where it agrees with \( \sigma \) and hence with \( f \). Expanding out the above formulation of \( f \), we have: \begin{align*} f &= \eta \circ \psi \circ \eta \circ \psi^{-1} \\ &= \sigma \circ h \circ \sigma^{-1} \circ h^{-1} \circ \psi \circ \sigma \circ h \circ \sigma^{-1} \circ h^{-1} \circ \psi^{-1} \\ &= (\sigma \circ h \circ \sigma^{-1}) \circ (h^{-1}) \circ (\psi \circ \sigma \circ h \circ \sigma^{-1} \circ \psi^{-1}) \circ (\psi \circ h^{-1} \circ \psi^{-1}), \end{align*} implying that \( f \) can be expressed as a product of four conjugates of \( h \) and \( h^{-1} \). \end{proof} \begin{Cor} Let \( \alpha \) be an ordinal, and let \( h \in \Homeo(\omega^{\alpha+1}) \). If there exists a topological moiety \( A \) such that \( h(A) \cap A = \varnothing \), then \( h \) uniformly normally generates \( \Homeo(\omega^{\alpha+1}) \) and the \( h \)-width of \( \Homeo(\omega^{\alpha+1}) \) is at most twelve.
\end{Cor} \begin{proof} By \Cref{prop:galvin}, each element of \( \Homeo(\omega^{\alpha+1}) \) can be expressed as a composition of three homeomorphisms each of which is supported in a topological moiety. Therefore, by \Cref{prop:anderson}, each element of \( \Homeo(\omega^{\alpha+1}) \) can be expressed as a composition of twelve homeomorphisms each of which is a conjugate of \( h \) or \( h^{-1} \). \end{proof} We have established the existence of uniform normal generators for \( \Homeo(\omega^{\alpha+1}) \), but we have yet to classify all such homeomorphisms. To do so, we will show that if a homeomorphism induces an infinite permutation of the maximal rank elements (i.e., the rank \( \alpha+1 \) elements), then it must move some moiety off itself, implying it is a uniform normal generator. The elements inducing a finite permutation on the rank \( \alpha+1 \) elements form a proper normal subgroup, and hence this will yield a complete classification of the normal generators. \begin{Lem} \label{lem:displacement} Let \( \alpha \) be an ordinal, and let \( h \in \Homeo(\omega^{\alpha+1}) \). If \( h \) induces an infinite permutation on the set of rank \( \alpha + 1 \) elements of \( \omega^{\alpha+1} \), then there exists a moiety \( A \) such that \( h(A) \cap A = \varnothing \) and \( A \cup h(A) \) is a moiety. \end{Lem} \begin{proof} Choose a rank \( \alpha + 1 \) point \( a_1 \) such that \( h(a_1) \neq a_1 \). By continuity, there exists a clopen neighborhood \( A_1 \) of \( a_1 \) such that \( a_1 \) is the unique rank \( \alpha + 1 \) element of \( A_1 \) and such that \( h(A_1) \cap A_1 = \varnothing \). Next, choose \( a_2 \) in the complement of \( A_1 \cup h(A_1) \cup h^{-1}(A_1) \) such that \( h(a_2) \neq a_2 \), which is possible as \( h \) induces an infinite permutation on the set of rank \( \alpha + 1 \) points.
Again by continuity, we can choose a clopen neighborhood \( A_2 \) of \( a_2 \) in the complement of \( A_1 \cup h(A_1) \cup h^{-1}(A_1) \) such that \( h(A_2) \cap A_2 = \varnothing \). In particular, \( h(A_1) \cap A_2 = \varnothing \) and \( h(A_2) \cap A_1 = \varnothing \). Proceeding in this fashion, we construct a sequence of clopen sets \( \{ A_n \}_{n\in\bn} \), each containing a unique rank \( \alpha + 1 \) point, such that \( h(A_i) \cap A_j = \varnothing \) for all \( i,j \in \bn \). Let \( A = \bigcup_{n\in\bn} A_{2n} \). Then, by construction, \( A \) is a moiety, \( h(A) \cup A \) is a moiety, and \( h(A) \cap A = \varnothing \). \end{proof} \begin{Thm} \label{thm:normal generators 1} Let \( \alpha \) be an ordinal. For \( h \in \Homeo(\omega^{\alpha+1}) \), the following are equivalent: \begin{enumerate}[(i)] \item \( h \) normally generates \( \Homeo(\omega^{\alpha+1}) \). \item \( h \) uniformly normally generates \( \Homeo(\omega^{\alpha+1}) \). \item \( h \) induces an infinite permutation of the set of maximal rank elements of \( \omega^{\alpha+1} \). \end{enumerate} Moreover, if any of the above conditions is satisfied, then the \( h \)-width of \( \Homeo(\omega^{\alpha+1}) \) is at most twelve. \end{Thm} \begin{proof} It is clear that (ii) implies (i) and that (i) implies (iii). Now, we assume (iii), that is, \( h \) induces an infinite permutation of the set of rank \( \alpha + 1 \) points of \( \omega^{\alpha+1} \). In this case, \Cref{lem:displacement} implies there exists a moiety \( A \) such that \( h(A) \cap A = \varnothing \). By \Cref{prop:anderson}, every homeomorphism supported in a topological moiety is a product of four conjugates of \( h \) and \( h^{-1} \). We can then apply \Cref{prop:galvin} to conclude that every element of \( \Homeo(\omega^{\alpha+1}) \) is a product of twelve conjugates of \( h \) and \( h^{-1} \), establishing that (iii) implies (ii).
\end{proof} \subsection{Strong distortion} Recall that a group \( G \) is \emph{strongly distorted} if there exist \( m \in \bn \) and a sequence \( \{w_n\}_{n\in\bn} \subset \bn \) such that for any sequence \( \{ g_n\}_{n\in\bn} \subset G \) there exists \( S \subset G \) of cardinality at most \( m \) such that \( g_n \in S^{w_n} \) for every \( n \in \bn \). The notion of strong distortion was introduced by Calegari--Freedman \cite{CalegariDistortion}, and they provide a technique for establishing strong distortion that we employ here. This technique was described in a general form by Le Roux--Mann \cite[Construction~2.3]{LeRouxStrong}, which we rephrase here in the context of ordinals. Below, and throughout the rest of the article, given \( g, h \in G \), we let \( g^h = h g h^{-1} \). \begin{Prop} \label{prop:construction} Let \( \alpha \) be an ordinal, and let \( \{h_n\}_{n\in\bn} \) be a sequence in \( \Homeo(\omega^{\alpha+1}) \) such that each \( h_n \) is supported in a topological moiety \( A \). If \( \sigma \) is a convergent \( A \)-translation supported in a topological moiety \( B \) and if \( \tau \) is a convergent \( B \)-translation, then \[ \vp = \prod_{n,m \in \omega} h_n^{\tau^n\sigma^m} \] exists and is a homeomorphism, and \[ h_n = [\vp^{\tau^{-n}}, \sigma]. \] \qed \end{Prop} \begin{Thm} \label{thm:main3} For every ordinal \( \alpha \), the group \( \Homeo(\omega^{\alpha+1}) \) is strongly distorted. \end{Thm} \begin{proof} Let \( \{ h_n\}_{n\in\bn} \) be a sequence in \( \Homeo(\omega^{\alpha+1}) \). We must find \( m \in \bn \) and a sequence \( \{ w_n\}_{n\in\bn} \) in \( \bn \), independent of \( \{h_n\} \), such that each \( h_n \) is a word of length at most \( w_n \) in a subgroup generated by a subset \( S \) of \( \Homeo(\omega^{\alpha+1}) \) of cardinality at most \( m \), where \( S \) may depend on \( \{h_n\} \). Fix a topological moiety \( A \). Let us first assume that each of the \( h_n \) is supported in \( A \).
By \Cref{lem:translation}, there exists a convergent \( A \)-translation \( \sigma \) supported in a topological moiety \( B \). Using \Cref{lem:translation} again, there exists a convergent \( B \)-translation \( \tau \). Applying \Cref{prop:construction} yields a homeomorphism \( \vp \) such that \( h_n \) can be expressed as a word of length \( 4n+4 \) in the set \( \{ \vp, \sigma, \tau \} \) (indeed, \( h_n = [\vp^{\tau^{-n}}, \sigma] \), and \( \vp^{\tau^{-n}} = \tau^{-n} \circ \vp \circ \tau^{n} \) is a word of length \( 2n+1 \)). We now assume \( \{h_n\} \) is a general sequence, but we keep \( A \), \( B \), \( \sigma \), \( \tau \), and \( \vp \) as above. Fix a topological moiety \( C \) in \( \omega^{\alpha+1} \) such that \( A \cup C \) is a topological moiety, and using \Cref{lem:change of coordinates}, choose \( \theta \in \Homeo(\omega^{\alpha+1}) \) such that \( \theta(C) = A \). By \Cref{prop:galvin}, we can write \[ h_n = h_{n,1} \circ h_{n,2} \circ h_{n,3}, \] where either \( h_{n,i} \) or \( h_{n,i}^\theta \) is supported in \( A \). Therefore, by the above argument, we can write \( h_n \) as a word of length at most \( 12n+18 \) in the subgroup generated by the set \( S = \{ \sigma, \tau, \vp, \theta \} \) (each of the three factors contributes a word of length at most \( 4n+6 \), the possible two extra letters coming from conjugation by \( \theta \)). Setting \( m = 4 \) and \( w_n = 12n+18 \) establishes the result. \end{proof} \section{Cantor--Bendixson degree greater than one} We now turn our attention to ordinals of the form \( \omega^{\alpha+1}\cdot d +1 \), where \( d \in \bn \). Let us fix some notation.
Given an ordinal \( \alpha \) and \( d \in \bn \), we let: \begin{itemize} \item \( X_{\alpha,d} = \omega^{\alpha+1}\cdot d + 1 \), \item \( \mu_k = \omega^{\alpha+1}\cdot k \in X_{\alpha,d} \) for \( 1 \leq k \leq d \), \item \( M_{\alpha,d} = \{\mu_1, \ldots, \mu_d\} \) be the set of maximal rank points, i.e., the rank \( \alpha+2 \) points of \( X_{\alpha,d} \), \item \( N_{\alpha,d} = \{ \omega^{\alpha+1}\cdot k + \omega^\alpha \cdot \ell : k \in d, \ell \in \bn \} \) denote the subset of next-to-maximal rank points, i.e., the rank \( \alpha + 1 \) points of \( X_{\alpha,d} \), \item \( \HH_{\alpha,d} = \Homeo(X_{\alpha,d}) \), \item \( \F_{\alpha,d} \) denote the subgroup of \( \HH_{\alpha,d} \) consisting of homeomorphisms that induce a finite permutation on the set \( N_{\alpha,d} \), \item \( \overline \F_{\alpha,d} \) denote the closure of \( \F_{\alpha,d} \) in \( \HH_{\alpha,d} \), and \item \( \PH_{\alpha,d} \) denote the subgroup of \( \HH_{\alpha,d} \) fixing each of the maximal rank points. \end{itemize} \medskip We will often drop the subscripts when they are clear from context. \subsection{Right-split short exact sequences} \label{sec:ses} The goal of this subsection is to establish two right-split short exact sequences, one of the form \begin{equation} \label{eq:split1} 1 \longrightarrow \overline \F_{\alpha,d} \longrightarrow \PH_{\alpha,d} \longrightarrow \bz^{d-1} \longrightarrow 1 \end{equation} and the other of the form \begin{equation} \label{eq:split2} 1 \longrightarrow \PH_{\alpha,d} \longrightarrow \HH_{\alpha,d} \longrightarrow \Sym(d)\longrightarrow 1 \end{equation} where \( \Sym(d) \) is the symmetric group on \( d \) letters. Let us first describe the homomorphism \( \PH_{\alpha,d} \to \bz^{d-1} \). Choose pairwise-disjoint clopen neighborhoods \( \Omega_1, \ldots, \Omega_{d-1} \) of \( \mu_1, \ldots, \mu_{d-1} \), respectively.
Given \( h \in \PH \), let \[ O_k(h) = \{ x \in \Omega_k \cap N : h(x) \notin \Omega_k \} \] and let \[ I_k(h) = \{ x \in N \ssm \Omega_k : h(x) \in \Omega_k \}, \] that is, \( O_k(h) \) is the subset of \( N \) consisting of the points that \( h \) moves out of \( \Omega_k \), and \( I_k(h) \) is the subset consisting of the points of \( N \) that \( h \) moves into \( \Omega_k \). As \( h \in \PH \) and as \( N \) is a discrete set whose closure is \( N \cup M \), continuity implies that \( O_k(h) \) and \( I_k(h) \) are each finite. This allows us to define \( \chi_k \co \PH \to \bz \) by \[ \chi_k(h) = |O_k(h)| - |I_k(h)|. \] It is readily verified that \( \chi_k \) is a homomorphism. We will now focus our attention on the homomorphism \( \chi_{\alpha,d} \co \PH_{\alpha,d} \to \bz^{d-1} \) given by \[ \chi_{\alpha,d}(h) = \left(\chi_1(h), \ldots, \chi_{d-1}(h)\right). \] As with our other subscripts, we will simply write \( \chi \) when doing so does not cause confusion. A priori, the definition of \( \chi \) depends on our choice of \( \Omega_1, \ldots, \Omega_{d-1} \), but it turns out this is not the case; we will explain after proving Theorem~\ref{thm:ses}. \begin{Lem} \label{lem:section} Let \( \alpha \) be an ordinal, and let \( d \in \bn \ssm\{1\} \). The homomorphism \( \chi_{\alpha,d} \) admits a section. \end{Lem} \begin{proof} For \( i \in \{ 1, \ldots, d-1\} \), let \( Y_i = \omega^\alpha + 1 \), and let \( \overline Y_i = (Y_i \times \bz) \cup \{ \hat \mu_i, \hat \mu_{d,i} \} \) be the two-point compactification of \( Y_i \times \bz \) where \( (y, n) \to \hat \mu_{d,i} \) as \( n \to \infty \) and \( (y,n) \to \hat \mu_i \) as \( n \to -\infty \). Define the homeomorphism \( \hat s_i \co \overline Y_i \to \overline Y_i \) by \( \hat s_i(y,n) = (y, n+1) \), \( \hat s_i(\hat \mu_i) = \hat \mu_i \), and \( \hat s_i( \hat \mu_{d,i}) = \hat \mu_{d,i} \).
Let \( Y \) be the quotient space obtained from the disjoint union of the \( \overline Y_i \) by identifying \( \hat \mu_{d,i} \) and \( \hat \mu_{d,j} \) for \( i,j \in \{1, \ldots, d-1\} \), that is, \[ Y = \left. \left(\bigsqcup_{i=1}^{d-1} \overline Y_i \right) \middle/ \left\{ \hat \mu_{d,1}, \ldots, \hat \mu_{d,d-1} \right\} \right. \] Note that each of the \( \hat s_i \) descends to a homeomorphism \( s_i \co Y \to Y \). Now, using the classification of successor ordinals, it is an exercise to show that there exists a homeomorphism \( Y \to X \) mapping \( \hat \mu_i \) to \( \mu_i \) and the equivalence class of the \( \hat \mu_{d,i} \) to \( \mu_d \). Therefore, we may view the \( s_i \) as homeomorphisms of \( X \) satisfying \( s_i^n(x) \to \mu_d \) as \( n \to \infty \) and \( s_i^n(x) \to \mu_i \) as \( n \to -\infty \) for any \( x \in \supp(s_i) \ssm M \). By construction, the \( s_i \) pairwise commute, as the intersection of any two of their supports is a fixed point, namely \( \mu_d \). We claim that \( \chi_i(s_i) = 1 \) and \( \chi_j(s_i) = 0 \) if \( i \neq j \). Let us argue the first statement. Let \( U_i \)\footnote{We work with an arbitrary neighborhood rather than the \( \Omega_k \), as we want to later observe that \( \chi \) does not depend on the choice of the \( \Omega_k \).} be a clopen neighborhood of \( \mu_i \) in \( Y \) disjoint from \( \mu_j \) for \( j \neq i \). Note that if \( j\neq i \), then \( s_i \) restricts to the identity on \( U_i \cap \overline Y_j \), allowing us to focus on \( \widehat U_i = U_i \cap \overline Y_i \) and to work in the coordinates of \( \overline Y_i \). Let \( N_i = \{ n \in \bz : (\omega^\alpha, n) \in \widehat U_i \} \), and let \( a = \min (\bz \ssm N_i) - 1 \). Let \( b \in N_i \) such that \( \hat s_i(\omega^\alpha, b) \notin \widehat U_i \). If \( b > a \), then letting \( c = \max \{ m \in \bz \ssm N_i : m < b \} \), we have that \( \hat s_i(\omega^\alpha, c) \in \widehat U_i \). 
With the exception of \( (\omega^\alpha, a) \), we have paired each element of the form \( (\omega^\alpha, n) \in \widehat U_i \) that \( \hat s_i \) sends out of \( \widehat U_i \) with an element of the same form in the complement of \( \widehat U_i \) that \( \hat s_i \) maps into \( \widehat U_i \). As \( \hat s_i(\omega^\alpha, n) = (\omega^\alpha, n+1) \in \widehat U_i \) for \( n < a \) and \( \hat s_i(\omega^\alpha, a) \notin \widehat U_i \), we can conclude that \( \chi_i(s_i) = 1 \). A similar line of argument can be used to establish \( \chi_j(s_i) = 0 \) whenever \( i \neq j \). Letting \( \{e_1, \ldots, e_{d-1} \} \) be the standard free generating set for \( \bz^{d-1} \), it follows that the map \( \bz^{d-1} \to \PH \) given by \( e_i \mapsto s_i \) is a section of \( \chi \). \end{proof} To establish \eqref{eq:split1}, we now need to show that \( \overline \F \) is the kernel of \( \chi \). We first establish that \( \overline \F \) is contained in the kernel and then establish the converse; we split the proof into two lemmas. \begin{Lem} \label{lem:in kernel} Let \( \alpha \) be an ordinal, and let \( d \in \bn \). Then \( \overline \F_{\alpha,d} \) is contained in \( \ker \chi_{\alpha,d} \). \end{Lem} \begin{proof} It is an exercise to check that \( \F \) is contained in the kernel of \( \chi \). Now, let \( f \in \overline \F \). The set \[ \mathcal U := \bigcap_{k =1}^{d-1} \left\{ h \in \HH : h(\Omega_k) = f(\Omega_k) \right\} \] is an open neighborhood of \( f \) in \( \HH \). Therefore, as \( \F \) is dense in \( \overline \F \) by definition, there exists \( g \in \mathcal U \cap \F \). It follows that \( (g^{-1}\circ f)(\Omega_k) = \Omega_k \) for each \( k \), implying that \[ 0 = \chi(g^{-1}\circ f) = \chi(f) - \chi(g) = \chi(f), \] as \( g \in \ker \chi \). \end{proof} In what follows, we let \( \Omega_d \) denote the complement of \( \Omega_1 \cup \cdots \cup \Omega_{d-1} \) in \( X_{\alpha,d} \).
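Let us also record a sketch of the verification, asserted above, that each \( \chi_k \) is a homomorphism. For subsets \( A, B \subset N \) with \( A \ssm B \) and \( B \ssm A \) both finite, set \( [A:B] = |A \ssm B| - |B \ssm A| \). A direct count shows that \( [A:C] = [A:B] + [B:C] \) whenever the terms are defined and that \( [\sigma(A):\sigma(B)] = [A:B] \) for any bijection \( \sigma \) of \( N \). As homeomorphisms preserve rank, each \( h \in \PH \) preserves \( N \), and writing \( A_k = \Omega_k \cap N \), we have \( O_k(h) = A_k \ssm h^{-1}(A_k) \) and \( I_k(h) = h^{-1}(A_k) \ssm A_k \), so that \( \chi_k(h) = [A_k : h^{-1}(A_k)] \). Hence, for \( g, h \in \PH \), \[ \chi_k(g \circ h) = \left[A_k : h^{-1}(g^{-1}(A_k))\right] = \left[A_k : g^{-1}(A_k)\right] + \left[g^{-1}(A_k) : h^{-1}(g^{-1}(A_k))\right] = \chi_k(g) + \chi_k(h), \] where the final equality uses that \[ \left[g^{-1}(A_k) : h^{-1}(g^{-1}(A_k))\right] = \left[g^{-1}(A_k) : A_k\right] + \left[A_k : h^{-1}(A_k)\right] + \left[h^{-1}(A_k) : h^{-1}(g^{-1}(A_k))\right], \] in which the first and last terms cancel by the bijection invariance of the relative index.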
\begin{Lem} \label{lem:factorization} Let \( \alpha \) be an ordinal, and let \( d \in \bn \). If \( f \in \ker\chi_{\alpha,d} \), then there exist homeomorphisms \( f_0, f_1, \ldots, f_d \in \overline\F_{\alpha,d} \) satisfying \begin{itemize} \item \( f_0 \in \F_{\alpha,d} \), \item \( f_k \) is supported in \( \Omega_k \) for \( k \in \{1, \ldots, d\} \), and \item \( f = f_0 \circ f_1 \circ \cdots \circ f_d \). \end{itemize} \end{Lem} \begin{proof} Fix \( f \in \ker \chi \). As \( |O_k(f)| = |I_k(f)| \), we can choose \( g_k \in \F \) such that \( g_k(f(O_k(f))) = f(I_k(f)) \), \( g_k(f(I_k(f))) = f(O_k(f)) \), and \( g_k \) fixes every other element of \( N \). Let \( g = g_1 \circ \cdots \circ g_{d-1} \), so that \( (g\circ f) (\Omega_k \cap N) = \Omega_k \cap N \) for all \( k \). Choose \( f_k \in \PH \) supported in \( \Omega_k \) such that \( f_k \) induces the same permutation of \( \Omega_k \cap N \) as does \( g \circ f \). It follows that \[ h = g \circ f \circ f_d^{-1} \circ \cdots \circ f_{1}^{-1} \] induces a trivial permutation on \( N \); in particular, \( h \in \F \). Setting \( f_0 = g^{-1} \circ h \) yields the factorization \( f = f_0 \circ f_1 \circ \cdots \circ f_d \) with \( f_0 \in \F \) and \( f_k \) supported in \( \Omega_k \) for \( k > 0 \). It is left to verify that each of the \( f_k \) is in \( \overline \F \). Let \( G_k \) denote the subgroup of \( \PH \) consisting of elements supported in \( \Omega_k \). Note that \( f_k \in G_k \). We can identify \( G_k \) with \( \Homeo(\Omega_k) \), and as \( \Omega_k \) is homeomorphic to \( \omega^{\alpha+1} + 1 \) by \Cref{lem:stable}, we can apply \Cref{cor:max subgroup} to see that \( G_k \cap \overline \F = G_k \), allowing us to conclude that \( f_k \in \overline \F \), as desired. \end{proof} \begin{Thm} \label{thm:ses} Let \( \alpha \) be an ordinal, and let \( d \in \bn \ssm\{1\} \).
\begin{enumerate}[(1)] \item There exists a homomorphism \( \chi \co \PH_{\alpha,d} \to \bz^{d-1} \) such that \[ 1 \longrightarrow \overline \F_{\alpha,d} \lhook\joinrel\longrightarrow \PH_{\alpha,d} \overset{\chi}\longrightarrow \bz^{d-1} \longrightarrow 1 \] is a right-split short exact sequence. \item If \( \Phi \co \HH_{\alpha,d} \to \Sym(M_{\alpha,d}) \) is the canonical action, then the short exact sequence \[ 1 \longrightarrow \PH_{\alpha,d} \lhook\joinrel\longrightarrow \HH_{\alpha,d} \overset{\Phi}\longrightarrow \Sym(M_{\alpha,d}) \longrightarrow 1 \] is right split. \end{enumerate} \end{Thm} \begin{proof} For (1), we have already established a section of \( \chi \) in \Cref{lem:section}; in \Cref{lem:in kernel}, we showed that \( \overline \F \) is contained in \( \ker \chi \). It is left to show that \( \ker \chi \) is contained in \( \overline \F \). Let \( f \in \ker \chi \). Then \Cref{lem:factorization} implies \( f = f_0 \circ f_1 \circ \cdots \circ f_d \) with \( f_0 \in \F \) and each \( f_k \) supported in \( \Omega_k \). Let \( G_k \) denote the subgroup of \( \PH \) consisting of elements supported in \( \Omega_k \). We can identify \( G_k \) with \( \Homeo(\Omega_k) \), and as \( \Omega_k \) is homeomorphic to \( \omega^{\alpha+1} + 1 \) by \Cref{lem:stable}, we can apply \Cref{cor:max subgroup} to see that \( G_k \cap \overline \F = G_k \), allowing us to conclude that \( f_k \in \overline \F \), as desired. This establishes (1). We move to the second sequence, which is exact by definition; we must show that it is right split. Let \( J_1 = \{ x \in X_{\alpha,d}: x \leq \mu_1 \} \) and, for \( k \in \{2, \ldots, d\} \), let \( J_k = \{ x \in X_{\alpha,d} : \mu_{k-1} < x \leq \mu_k \} \). Given \( j,k \in \{ 1, \ldots, d\} \), there is a unique order-preserving bijection \( g_{j,k} \co J_j \to J_k \). Using our enumeration of \( M \), identify \( \Sym(M) \) with \( \Sym\left(\{1, \ldots, d\}\right) \).
Taking \( \sigma \in \Sym(M) \), define \( \Psi(\sigma) \in \HH \) to be the homeomorphism given by \[ \Psi(\sigma)(x) = g_{k,\sigma(k)}(x), \] whenever \( x \in J_k \). Then \( \Psi \) is a section of \( \Phi \), finishing the proof. \end{proof} Before continuing, let us return to the claim that the definition of \( \chi \) does not depend on the choice of the \( \Omega_k \). Suppose a different choice is made, resulting in a homomorphism \( \chi' \). A careful reading of the proof of Lemma~\ref{lem:section} shows that \( \chi \) and \( \chi' \) share a section. Moreover, Theorem~\ref{thm:ses} applies to \( \chi' \), so that \( \chi \) and \( \chi' \) have the same kernel. Together, these facts imply that \( \chi = \chi' \). \subsection{Properties of \( \overline \F_{0,d} \)} Given \Cref{thm:ses}, it is clear that we need to understand the structure of \( \overline \F \) in order to understand the structure of \( \PH \) and \( \HH \). Unfortunately, the subgroup \( \F \) gets in the way of our strategies. We have the tools to investigate the quotient \( \overline \F_{\alpha,d} / \F_{\alpha,d} \), but this quotient turns out to be independent of \( \alpha \); in particular, this quotient group is isomorphic to \( \overline \F_{0,d} \). Because of this, we now limit our focus to \( \overline \F_{0,d} \). The key idea is to use fragmentation to reduce statements about \( \overline \F_{0,d} \) to statements about \( \Homeo(\omega) \). Using \Cref{lem:in kernel} together with \Cref{lem:factorization} and the fact that the choices of the \( \Omega_k \) above were arbitrary, we obtain the following version of \Cref{lem:factorization} that is well suited to our purposes. \begin{Lem}[Fragmentation in \( \overline \F_{\alpha,d} \)] \label{lem:factorization2} Let \( \alpha \) be an ordinal, and let \( d \in \bn \).
If \( f \in \overline \F_{\alpha,d} \), then for any pairwise-disjoint clopen neighborhoods \( U_1, \ldots, U_d \) of \( \mu_1, \ldots, \mu_d \), respectively, there exist homeomorphisms \( f_0, f_1, \ldots, f_d \) such that \begin{itemize} \item \( f_0 \in \F_{\alpha,d} \), \item \( f_i \) is supported in \( U_i \) for \( i \in \{1, \ldots, d\} \), and \item \( f = f_0 \circ f_1 \circ \cdots \circ f_d \). \qed \end{itemize} \end{Lem} We stated the fragmentation lemma for general \( \alpha \), but it is particularly useful for \( \alpha = 0 \), as in this case \( \F_{0,d} \) is the subgroup of finitely supported homeomorphisms. \begin{Lem} \label{lem:local isom} Let \( \alpha \) be an ordinal, and let \( d \in \bn \). If \( U \) is a clopen neighborhood of \( \mu \in M \) such that \( U \cap M = \{\mu\} \), then the subgroup of \( \overline \F_{\alpha,d} \) consisting of homeomorphisms supported in \( U \) is isomorphic to \( \Homeo(\omega^{\alpha+1}) \). \end{Lem} \begin{proof} Let \( G \) denote the subgroup of \( \overline \F_{\alpha,d} \) consisting of elements supported in \( U \). We can then identify \( G \) with a subgroup of \( \Homeo(U) \). By \Cref{lem:stable}, \( U \) is homeomorphic to \( X_{\alpha,1} \), and hence, as \( G \) contains an element inducing an infinite permutation on the set \( N_{\alpha,d} \), Theorem~\ref{thm:normal generators 1} implies \( G \) is isomorphic to \( \Homeo(\omega^{\alpha+1}) \). \end{proof} \Cref{lem:factorization2} and \Cref{lem:local isom} imply that \( \overline \F_{\alpha,d} / \F_{\alpha,d} \) can be realized as a quotient of \( \Homeo(\omega^{\alpha+1})^d \), and hence this quotient is strongly distorted and uniformly perfect. In the case of uniform perfectness, the difficulty is then understanding whether we can express the elements of \( \F_{\alpha,d} \) as products of commutators of elements in \( \overline \F_{\alpha,d} \).
Unfortunately, we cannot answer this question in general, but we can do so for \( \alpha = 0 \). \begin{Thm} \label{thm:perfect F} If \( d \in \bn\ssm \{1\} \), then \( \overline \F_{0,d} \) is uniformly perfect of commutator width at most four. \end{Thm} \begin{proof} Fix \( f \in \overline \F_{0,d} \). Let \( U_1, \ldots, U_d \) be pairwise-disjoint clopen neighborhoods of \( \mu_1, \ldots, \mu_d \), respectively. By \Cref{lem:factorization2}, we can write \( f = f_0 \circ f_1 \circ \cdots \circ f_d \) with \( f_0 \in \F \) and \( f_i \) supported in \( U_i \) for \( i > 0 \). Combining \Cref{lem:local isom} and \Cref{thm:mainthm2}, we can write \( f_i = [g_{i,1}, h_{i,1}][g_{i,2},h_{i,2}][g_{i,3},h_{i,3}] \) with \( g_{i,j} \) and \( h_{i,j} \) supported in \( U_i \) for \( i > 0 \). Observe that if group elements \( a \), \( b \), \( c \), and \( d \) are such that \( a \) and \( b \) both commute with each of \( c \) and \( d \), then \( [a,b][c,d] = [ac,bd] \). It follows that \( f_1 \circ \cdots \circ f_d \) can be expressed as a product of three commutators. To finish, observe that as the support of \( f_0 \) is finite, we can conjugate \( f_0 \) to have support in \( U_1 \), allowing us to apply \Cref{lem:commutator} to express \( f_0 \) as a commutator. Therefore, \( f \) is a product of four commutators, as desired. \end{proof} We now turn to classifying the normal generators of \( \overline \F_{0,d} \). Given \( Z \subset M_{0,d} \), let \( \F_Z \) denote the subgroup of \( \overline \F_{0,d} \) consisting of homeomorphisms that restrict to the identity in an open neighborhood of each point of \( Z \).
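For instance, \( \F_\varnothing = \overline \F_{0,d} \), while \( \F_{M_{0,d}} \) consists of the homeomorphisms whose support avoids an open neighborhood of each maximal rank point; as \( X_{0,d} \) is compact and \( M_{0,d} \) is precisely its set of limit points, such a support must be finite, so that \( \F_{M_{0,d}} \) is the subgroup of finitely supported homeomorphisms, that is, \( \F_{M_{0,d}} = \F_{0,d} \).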
Given a subgroup \( \Gamma \) of \( \overline \F_{0,d} \), we let \( Z_\Gamma \) be the subset of \( M_{0,d} \) consisting of the \( \mu \in M_{0,d} \) such that for each \( g \in \Gamma \) there is an open neighborhood \( U \) of \( \mu \) that is fixed pointwise by \( g \); that is, \( Z_\Gamma \) is the largest subset of \( M_{0,d} \) satisfying \( \Gamma < \F_{Z_\Gamma} \). For \( f \in \overline \F_{0,d} \), we write \( Z_f \) to denote \( Z_{\langle f \rangle} \). \begin{Thm} \label{thm:normal generators 2} Let \( d \in \bn \ssm\{1\} \), and let \( \Gamma \) be a normal subgroup of \( \overline \F_{0,d} \). If \( \Gamma \) is not contained in \( \F_{0,d} \), then \( \F_{Z_\Gamma} = \Gamma \). Moreover, (1) there exists \( \gamma \in \Gamma \) such that \( Z_\gamma = Z_\Gamma \), and (2) if \( \gamma \in \Gamma \) is such that \( Z_\gamma = Z_\Gamma \), then \( \gamma \) uniformly normally generates \( \Gamma \). \end{Thm} \begin{proof} By assumption, \( \Gamma \) is a normal subgroup of \( \overline \F \) not contained in \( \F \), and therefore \( Z_\Gamma \neq M \). As the elements of \( N_{0,d} \) are isolated, it follows that \( \Gamma < \F_{Z_\Gamma} \), so we need only show that \( \F_{Z_\Gamma} < \Gamma \). First, we claim that if \( \mu \in M \ssm Z_\Gamma \), then there exists a clopen neighborhood \( U_\mu \) of \( \mu \) and an element \( g_\mu \) in \( \Gamma \) supported in \( U_\mu \) such that \( U_\mu \cap M = \{\mu\} \) and such that \( g_\mu \) induces an infinite permutation on \( N \cap U_\mu \). Fix \( \mu \in M \ssm Z_\Gamma \), and choose \( g \in \Gamma \) such that \( g \) induces a nontrivial permutation on \( U \cap N \) for every open neighborhood \( U \) of \( \mu \). By continuity, we can choose pairwise-disjoint clopen neighborhoods \( U_1, \ldots, U_d \) of \( \mu_1, \ldots, \mu_d \), respectively, such that \( g(U_i) \cap U_j = \varnothing \) whenever \( i \neq j \).
Applying \Cref{lem:factorization2} to \( g \) allows us to write \( g = g_0 \circ g_1 \circ \cdots \circ g_d \) with \( g_0 \in \F \) and with each \( g_i \in \overline \F \) supported in \( U_i \). Let \( k \in \{1, \ldots, d\} \) be such that \( \mu = \mu_k \). By \Cref{lem:displacement}, there exists an infinite subset \( A \) of \( N \cap U_k \) such that \( g_k(A) \cap A = \varnothing \). By shrinking \( A \), we may assume that \( A \) is disjoint from the support of \( g_0 \). Let \( \tau \) be the involution such that \( \tau|_A = g_k|_A \), \( \tau|_{g_k(A)} = g_k^{-1}|_{g_k(A)} \), and such that \( \tau \) is the identity on the complement of \( A \cup g_k(A) \). It follows that \( [\tau, g_0g_k] \) induces an infinite permutation on \( U_k \cap N \), and \begin{align*} [\tau, g] &= \tau \circ g \circ \tau^{-1} \circ g^{-1} \\ &= g_0^\tau \circ g_k^\tau \circ g_k^{-1} \circ g_0^{-1} \\ &= [\tau, g_0g_k]. \end{align*} The first equality implies that \( [\tau,g] \in \Gamma \), and the second equality is deduced from the fact that \( \tau \) commutes with each \( g_i \) for \( i > 0 \) and \( i \neq k \). As \( \tau \) is supported in \( U_k \), it follows from the second equality that \( [\tau,g] \) restricts to the identity on the complement of \( g_0(U_k) \), i.e., \( [\tau,g] \) is supported in \( g_0(U_k) \). Setting \( g_\mu = [\tau, g] \) and \( U_\mu = g_0(U_k) \) establishes the claim. Let \( G_\mu \) denote the subgroup of \( \overline \F \) consisting of elements supported in \( U_\mu \). \Cref{lem:local isom} implies that \( G_\mu \) is isomorphic to \( \Homeo(\omega) \). In particular, \Cref{thm:normal generators 1} implies that \( g_\mu \) uniformly normally generates \( G_\mu \). Moreover, as \( g_\mu \in \Gamma \) and \( \Gamma \) is normal, we have that \( G_\mu < \Gamma \). Now, fix \( f \in \F_{Z_\Gamma} \).
Again applying \Cref{lem:factorization2}, we can write \( f = f_0 \circ \prod_{\mu \in M \ssm Z_\Gamma} f_\mu \) with \( f_0 \in \F \) and each \( f_\mu \in \overline \F \) supported in \( U_\mu \). It follows that there exists \( w \in \overline \F_{0,d} \) such that \( f \circ w = f_0 \in \F \) and such that \( w \) can be expressed as a product of \( 12 |M \ssm Z_\Gamma| \) elements from the set \( \{ g_\mu, g_\mu^{-1} : \mu \in M \ssm Z_\Gamma \} \) and their conjugates (the number twelve comes from the proof of \Cref{thm:normal generators 1}). In particular, \( w \in \Gamma \). It is left to check that \( f_0 \in \Gamma \). As \( f_0 \in \F_{0,d} \) has finite support, there is a conjugate of \( f_0 \) supported in \( U_\mu \) for some \( \mu \in M \ssm Z_\Gamma \) (recall that \( Z_\Gamma \neq M \)). \Cref{prop:anderson} implies that \( f_0 \) can be written as a product of four conjugates of \( g_{\mu} \) and \( g_{\mu}^{-1} \). Therefore, \( \F_{Z_\Gamma} < \Gamma \), as desired. To finish, let \( \gamma \in \Gamma \) be such that \( Z_\gamma = Z_\Gamma \). First note that such a \( \gamma \) exists: the product of the \( g_\mu \) constructed above is such a \( \gamma \). We can now run the entirety of the argument above with \( g = \gamma \). Then, by construction, each of the \( g_\mu \) is a product of a conjugate of \( \gamma \) and \( \gamma^{-1} \); the result follows. \end{proof} We can now readily deduce a classification of the normal generators of \( \overline \F_{0,d} \). \begin{Thm}[Normal generators for \( \overline \F_{0,d} \)] \label{thm:normal generators F} Let \( d \in \bn \). For \( f \in \overline\F_{0,d} \), the following are equivalent: \begin{enumerate}[(i)] \item \( f \) normally generates \( \overline\F_{0,d} \). \item \( f \) uniformly normally generates \( \overline\F_{0,d} \). \item \( Z_f = \varnothing \). \item \( f \) induces an infinite permutation of the set \( \{ \omega\cdot k + \ell : \ell \in \bn \} \) for each \( k \in d \).
\end{enumerate} \end{Thm} \begin{proof} The implications (ii) to (i), (i) to (iv), and (iv) to (iii) are immediate. The implication (iii) to (ii) follows from \Cref{thm:normal generators 2} by setting \( \Gamma = \overline \F_{0,d} \) and \( \gamma = f \). \end{proof} \subsection{Properties of \( \PH_{0,d} \) and \( \HH_{0,d} \)} In light of the short exact sequences in \Cref{thm:ses}, we can build on the uniform perfectness of \( \overline \F_{0,d} \) and the classification of its normal generators to compute the abelianizations of, and construct minimal normal generating sets for, \( \PH_{0,d} \) and \( \HH_{0,d} \). \begin{Thm}[Abelianization] \label{thm:abelianization} If \( d \in \bn\ssm\{1\} \), then: \begin{enumerate}[(1)] \item The abelianization of \( \PH_{0,d} \) is isomorphic to \( \bz^{d-1} \). \item The abelianization of \( \HH_{0,d} \) is isomorphic to \( \bz/2\bz \times \bz/2\bz \). \end{enumerate} \end{Thm} \begin{proof} By \Cref{thm:ses}, we know \( \PH_{0,d} \cong \overline \F_{0,d} \rtimes \bz^{d-1} \), and, by \Cref{thm:perfect F}, that \( \overline\F_{0,d} \) is perfect, so that the abelianization of \( \PH_{0,d} \) is \( \bz^{d-1} \). For \( \HH_{0,d} \), again as \( \overline\F_{0,d} \) is perfect, the abelianization of \( \HH_{0,d} \) factors through the abelianization of \( \bz^{d-1} \rtimes \Sym(d) \), where we are using \Cref{thm:ses} for the semi-direct product decomposition. We will do the computation through a presentation. Let \( s_1, \ldots, s_{d-1} \) be the shift homeomorphisms from the proof of \Cref{lem:section}, and let \( e_i \) be the image of \( s_i \) under the quotient map \( \HH_{0,d} \to \bz^{d-1} \rtimes \Sym(d) \). It follows that \( \{e_1, \ldots, e_{d-1} \} \) is a free generating set for \( \bz^{d-1} \).
If we let \( \tau_i \in \Sym(d) \) denote the transposition of \( i \) and \( i +1 \), then building off a standard presentation for the symmetric group, a presentation for \( \bz^{d-1} \rtimes \Sym(d) \) is given by \[ \left\{ e_1, \ldots, e_{d-1}, \tau_1, \ldots, \tau_{d-1} : \begin{array}{ll} \tau_i^2 = 1 & \\ {[e_i,e_j]} = 1 & \\ {[\tau_i,\tau_j] = 1} & \text{if } |i-j| > 1 \\ \tau_i\tau_j\tau_i = \tau_j\tau_i\tau_j & \text{if } |i-j| = 1 \\ {[\tau_i, e_j] = 1} & \text{if } i \neq j, j+1 \\ \tau_i e_i \tau_i^{-1} = e_{i+1} & \text{if } i \neq d-1\\ \tau_i e_{i+1} \tau_i^{-1} = e_i & \text{if } i \neq d-1\\ \tau_{d-1} e_{d-1} \tau_{d-1} = -e_{d-1} & \end{array} \right\} \] To compute the abelianization, we add the commutator relation for each generator. Letting \( \pi \) denote the canonical homomorphism to the abelianization, we see that \( \pi(\tau_i) = \pi(\tau_j) \), \( \pi(e_i) = \pi(e_j) \), and \( \pi(e_i) = -\pi(e_i) \). It follows that the abelianization is generated by two commuting order two elements and is therefore isomorphic to \( \bz/2\bz \times \bz/2\bz \). \end{proof} \begin{Thm}[Minimal cardinality for normal generating set] \label{thm:cardinality} Let \( \alpha \) be an ordinal, and let \( d \in \bn \). The minimal cardinality for a normal generating set of \( \PH_{0,d} \) is \( d-1 \) and is two for \( \HH_{0,d} \). \end{Thm} \begin{proof} Let us first consider \( \PH_{0,d} \). First note that, by \Cref{thm:abelianization}, the cardinality of any normal generating set for \( \PH_{0,d} \) must be at least \( d-1 \), as it projects to a generating set for \( \bz^{d-1} \). So, we need only find a normal generating set of cardinality of \( d-1 \); to this end, let \( S = \{s_1, \ldots, s_{d-1} \} \), where the \( s_i \) are the shift homeomorphisms constructed in the proof of \Cref{lem:section}. 
We can then label the elements of \( N \) as \( \{ (n,i) : n \in \bn, 1 \leq i \leq d-1 \} \) in such a way that \( s_i(n,i) = (n+1,i) \) and \( s_i(n,j) = (n,j) \) whenever \( j \neq i \). Let \( s = s_1\circ s_2 \circ \cdots \circ s_{d-1} \). Choose \( t \in \PH_{0,d} \) such that \( t(2n,i) = (2n+1,i) \) and \( t(2n+1,i) = (2n,i) \). Then \( s^t(2n,i) = (2n+3,i) \) and \( s^t(2n+1,i) = (2n, i) \). In particular, \[ [t,s](2n+1,i) = (2n+3,i) \text{ and } [t,s](2n,i) = (2n-2,i). \] It follows that \( [t,s] \in \overline \F_{0,d} \) and \( Z_{[t,s]} = \varnothing \); therefore, \( [t,s] \) normally generates \( \overline\F_{0,d} \) by \Cref{thm:normal generators F}. By \Cref{thm:ses}, \( \PH_{0,d} \) is the semi-direct product of \( \overline \F_{0,d} \) and the subgroup generated by \( S \), and hence \( S \) is a normal generating set for \( \PH_{0,d} \). Turning to \( \HH_{0,d} \), \Cref{thm:abelianization} tells us that a normal generating set for \( \HH_{0,d} \) must have cardinality at least two. The group \( \Sym(d) \) is normally generated by a single element (e.g., if \( d\neq 4 \), then every odd permutation is a normal generator). Let \( h \in \HH_{0,d} \) be such that the permutation of \( M \) induced by \( h \) normally generates \( \Sym(M) \). Let \( s_1, \ldots, s_{d-1} \) and \( s \) be as above. Observe that \( s_i \) and \( s_j \) are conjugate in \( \HH_{0,d} \); in particular, \( s \) is a product of conjugates of \( s_1 \). Therefore, from above, we see that \( \PH_{0,d} \) is in the group normally generated by \( s_1 \) in \( \HH_{0,d} \). By \Cref{thm:ses}, \( \HH_{0,d} \) is isomorphic to \( \PH_{0,d} \rtimes \Sym(M) \), implying \( \{s_1, h\} \) is a normal generating set for \( \HH_{0,d} \). \end{proof} \bibliographystyle{amsplain} \bibliography{ordinals-bib} \end{document}
\documentclass[12pt,reqno]{amsart} \usepackage{cmap,mathtools} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{amsfonts,amssymb,amsmath} \usepackage{footnote,bigints} \usepackage{array} \usepackage{adjustbox} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{enumitem} \allowdisplaybreaks \usepackage[pagewise, mathlines]{lineno} \usepackage{epstopdf} \usepackage{a4wide} \setlength{\parskip}{0.4em} \usepackage{xcolor} \usepackage[colorlinks=true,linktocpage,pdfpagelabels, bookmarksnumbered,bookmarksopen]{hyperref} \definecolor{ForestGreen}{rgb}{0.1,0.6,0.05} \definecolor{EgyptBlue}{rgb}{0.063,0.1,0.6} \hypersetup{ colorlinks=true, linkcolor=EgyptBlue, citecolor=ForestGreen, urlcolor=olive } \renewcommand{\UrlFont}{\ttfamily\small} \usepackage[hyperpageref]{backref} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{remark}{Remark}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{Example}{Example}[section] \newtheorem{conjecture}{Conjecture}[section] \numberwithin{equation}{section} \numberwithin{theorem}{section} \DeclareMathOperator*{\esssup}{ess\,sup} \newcommand{\W}{W_0^{1,p}} \newcommand{\intO}{\int_\Omega} \newcommand{\C}{C^1_0(\overline{\Omega})} \newcommand{\E}{E_{\alpha,\beta}} \newcommand{\En}{E_{\alpha_n,\beta_n}} \newcommand{\J}{J_{\alpha,\beta}} \newcommand{\N}{\mathbb{N}} \newcommand{\diver}{\mathop{\rm div}} \newcommand{\sgn}{\operatorname{sgn}} \newcommand{\al} {\alpha} \newcommand{\be} {\beta} \newcommand{\ga}{\gamma} \newcommand{\de}{\delta} \newcommand{\Om}{\Omega} \newcommand{\la}{\lambda} \newcommand{\dr}{\text{d}r} \def \tr {\textcolor{red}} \def \tb {\textcolor{blue}} \DeclarePairedDelimiter\autobracket{(}{)} \newcommand{\br}[1]{\autobracket*{#1}}
\DeclarePairedDelimiter\autoset{\{}{\}} \newcommand{\set}[1]{\autoset*{#1}} \def\inpr#1{\left\langle #1\right\rangle} \DeclarePairedDelimiter\norm{\lVert}{\rVert} \usepackage{ulem} \def\L#1#2{\text{L}_{#1,#2}} \def\lnk#1{\text{link}{(#1)}} \subjclass[2010]{Primary 58J50; Secondary 35P15} \usepackage[foot]{amsaddr} \DeclareUnicodeCharacter{2212}{-} \title [Steklov eigenvalues on Symmetric domains]{Bounds for higher Steklov and mixed Steklov Neumann eigenvalues on domains with holes} \author{Sagar Basak$^1$ \and Sheela Verma$^*$} \address{$1$ Department of Mathematical Sciences, Indian Institute of Technology (BHU), Varanasi, India} \email{[email protected]} \address{$*$ Corresponding author, Department of Mathematical Sciences, Indian Institute of Technology (BHU), Varanasi, India} \email{[email protected]} \keywords{Steklov eigenvalues, Steklov Neumann eigenvalues, Doubly connected domains, Symmetries} \begin{document} \begin{abstract} In this article, we study Steklov eigenvalues and mixed Steklov Neumann eigenvalues on a smooth bounded domain in $\mathbb{R}^{n}$, $n \geq 2$, having a spherical hole. We focus on two main results related to Steklov eigenvalues. First, we obtain an explicit expression for the second nonzero Steklov eigenvalue on the concentric annular domain. Second, we derive a sharp upper bound for the first $n$ nonzero Steklov eigenvalues on a domain $\Omega \subset \mathbb{R}^{n}$ having symmetry of order $4$ and a ball removed from its center. This bound is given in terms of the corresponding Steklov eigenvalues on a concentric annular domain of the same volume as $\Omega$.
Next, we consider the mixed Steklov Neumann eigenvalue problem on $4^{\text{th}}$ order symmetric domains in $\mathbb{R}^{n}$ having a spherical hole and obtain upper bounds for the first $n$ nonzero eigenvalues. We also provide some examples to illustrate that the symmetry assumption in our results is crucial. Finally, we make some numerical observations about these eigenvalues using FreeFEM++ and state them as conjectures. \end{abstract} \maketitle \section{Introduction} Among all domains satisfying certain geometric constraints, finding a domain which optimizes an eigenvalue of a differential operator is a classical problem in spectral geometry; it is referred to as a shape optimization problem. In this work, we are interested in the shape optimization problem for eigenvalues of the Dirichlet-to-Neumann operator \begin{align*} \mathcal{D} : L^{2}(\mathcal{C}_1) & \longrightarrow L^{2}(\mathcal{C}_1), \\ f & \longmapsto \frac{\partial \tilde{f}}{\partial \nu}. \end{align*} Here $\Omega$ is a bounded connected domain having Lipschitz boundary $\partial \Omega = \mathcal{C}_1 \sqcup \mathcal{C}_2$, and $\nu$ is the unit outward normal to $\partial \Omega$. The function $\tilde{f}$ denotes the harmonic extension of $f$ to $\Omega$ satisfying certain conditions on $\mathcal{C}_2$. If $\mathcal{C}_2 = \emptyset$ and $\tilde{f}$ is the harmonic extension of $f$ to $\Omega$, then the eigenvalues of the operator $\mathcal{D}$ coincide with the eigenvalues of the following Steklov problem \begin{align} \label{eqn: SP} \begin{cases} \Delta u =0 \quad \text{in } \Omega, \\ \frac{\partial u}{\partial \nu} = \sigma u \quad \text{on } \partial \Omega (= \mathcal{C}_1).
\end{cases} \end{align} The eigenvalues of Problem \eqref{eqn: SP} are nonnegative and form an increasing sequence $0 = \sigma_{0}(\Omega) < \sigma_{1}(\Omega) \leq \sigma_{2}(\Omega) \leq \cdots \nearrow \infty.$ If the function $\tilde{f}$ is the harmonic extension of $f$ to $\Omega$ satisfying a Neumann boundary condition on $\mathcal{C}_2$, then the eigenvalues of the operator $\mathcal{D}$ coincide with the eigenvalues of the following mixed Steklov Neumann problem \begin{align} \label{eqn: SNP} \begin{cases} \Delta u =0 \quad \text{in } \Omega, \\ \frac{\partial u}{\partial \nu} = \mu u \quad \text{on } \mathcal{C}_1, \\ \frac{\partial u}{\partial \nu} = 0 \quad \text{on } \mathcal{C}_2. \end{cases} \end{align} The eigenvalues of Problem \eqref{eqn: SNP} are nonnegative and form an increasing sequence $0 = \mu_{0}(\Omega) < \mu_{1}(\Omega) \leq \mu_{2}(\Omega) \leq \cdots \nearrow \infty.$ In the literature, various sharp bounds have been proved for Steklov eigenvalues, but very few results are available for the mixed Steklov Neumann eigenvalues. We now state some results related to our study of Steklov and mixed Steklov Neumann eigenvalues. Weinstock \cite{weinstock1954inequalities} obtained the first sharp isoperimetric bound for the first nonzero Steklov eigenvalue: among all simply connected planar domains of fixed perimeter, the disk maximizes the first nonzero Steklov eigenvalue. This result resisted generalization to higher dimensions for around 60 years. Recently, in \cite{bucur2021weinstock}, it was generalized to convex domains in $\mathbb{R}^{n}$, $n \geq 2$, by Bucur et al. The authors proved that among all bounded convex sets $\Omega \subset \mathbb{R}^{n}$ of fixed perimeter $L$, the ball maximizes the first nonzero Steklov eigenvalue, i.e., \begin{align} \label{Bucur_convex} \sigma_1 (\Omega) \leq \sigma_1 (B), \end{align} where $B$ is a ball in $\mathbb{R}^{n}$ with perimeter $L$.
For general simply connected domains in higher dimensions, Fraser and Schoen \cite{fraser2019shape} showed that this result fails by exhibiting a smooth contractible domain having the same perimeter as the unit ball but a larger first nonzero Steklov eigenvalue. A quantitative version of inequality \eqref{Bucur_convex} has been proved in \cite{gavitone2020quantitative}. The theory of extremal metrics on a surface has been developed for Steklov eigenvalues in \cite{fraser2011first, fraser2013minimal, fraser2016sharp}. For stability results related to the first nonzero Steklov eigenvalue on planar domains, see \cite{bucur2021stability, girouard2017spectral}. Note that all the above isoperimetric bounds are proved for bounded smooth domains under perimeter constraints. Several other estimates have also been obtained for Steklov eigenvalues on bounded smooth domains under volume constraints. In \cite{brock2001isoperimetric}, F. Brock proved that among all bounded domains in $\mathbb{R}^{n}$ with Lipschitz boundary and fixed volume, the ball minimizes the harmonic mean of the first $n$ nonzero Steklov eigenvalues. This result has been generalized to the harmonic mean of the first $(n-1)$ nonzero Steklov eigenvalues on bounded domains contained in hyperbolic space \cite{verma2021isoperimetric} and in curved spaces \cite{chen2024upper}. A similar result has also been proved for the first nonzero Steklov eigenvalue on bounded domains contained in noncompact rank-$1$ symmetric spaces in \cite{BinoySanthanam2024}. The problem of finding bounds for the first Laplace eigenvalue on doubly connected domains with different boundary conditions is an interesting one, and many results for planar domains were proved in this direction in the $20^{\text{th}}$ century. To the best of our knowledge, the first such result is due to Makai \cite{makai1959principal}, where an upper bound for the first Dirichlet eigenvalue on a ring-shaped planar domain was obtained.
For the mixed Dirichlet Neumann problem, the first such result was proved by Payne and Weinberger \cite{payne1961some}, where the authors considered a bounded domain having several holes (a multiply connected domain) with a Dirichlet condition on the outer boundary and Neumann conditions on the other boundaries. Among such domains of given area and given perimeter of the outer boundary, it was proved in \cite{payne1961some} that the first Laplace eigenvalue is maximal for the concentric annular domain. In recent decades, this problem has been studied by many researchers for various boundary conditions, such as the mixed Dirichlet Neumann problem \cite{anoop2020reverse, anoop2021shape} and the mixed Steklov Dirichlet problem \cite{basak2023sharp, gavitone2023isoperimetric,verma2020eigenvalue}. For results related to the Dirichlet boundary condition, see \cite{anisa2005two, chorwadwala2013two, el2008extremal}. In \cite{ftouhi2022place}, the Steklov eigenvalue problem on an annular domain in $\mathbb{R}^{n}$, i.e., a ball in $\mathbb{R}^{n}$ with a spherical obstacle, was considered. The author proved that among all positions of the inner ball, the first nonzero Steklov eigenvalue is maximized when the outer and inner balls have the same center, i.e., the first nonzero Steklov eigenvalue is maximal on the concentric annular domain. Sharp bounds for higher eigenvalues of the Laplacian with various boundary conditions have also been obtained on doubly connected domains under certain symmetry restrictions. In \cite{anoop2022szego1}, the authors proved that among all multiply connected domains in $\mathbb{R}^{n}$ satisfying certain symmetry conditions, the concentric annular domain maximizes the first nonzero Neumann eigenvalue. This result was later extended to domains contained in simply connected space forms, see \cite{anoop2022szego}. Recently, similar bounds have been proved for the Robin-Neumann boundary condition \cite{anoop2023reverse} and for the mixed Steklov Dirichlet condition \cite{basak2023sharp}.
We now present results related to mixed Steklov Neumann eigenvalues. Certain inequalities between the mixed Steklov-Dirichlet and Steklov-Neumann eigenvalues of the Laplacian have been obtained in \cite{banuelos2010eigenvalue}. In \cite{hassannezhad2020eigenvalue}, bounds on the Riesz means of the mixed Steklov-Neumann and Steklov-Dirichlet eigenvalue problems have been studied on a bounded domain in $\mathbb{R}^{n}$. Using the link between mixed Steklov Neumann eigenvalues and mixed Steklov Dirichlet eigenvalues, a sharp isoperimetric bound for these eigenvalues was obtained in \cite{arias2024applications}. The authors also provided full asymptotics for these mixed Steklov problems on arbitrary surfaces. To the best of our knowledge, higher Steklov eigenvalues and mixed Steklov Neumann eigenvalues have not been explored much. With this motivation, in this article, we are interested in isoperimetric bounds for higher Steklov eigenvalues and higher Steklov Neumann eigenvalues on doubly connected domains. The rest of this article is organized as follows: In Section \ref{sec:annular domain}, we study the eigenvalues and eigenfunctions of the Steklov eigenvalue problem on the concentric annular domain and find an explicit expression for the second nonzero Steklov eigenvalue counted without multiplicity (Theorem \ref{thm:increasing}). Some auxiliary results about the behavior of Steklov eigenfunctions on the annular domain are proved in Section \ref{sec:int. ineq.}. Then, in Section \ref{sec: iso. bound}, we prove that the concentric annular domain is the optimal domain for the first $n$ nonzero Steklov eigenvalues among all multiply connected domains in $\mathbb{R}^{n}$ satisfying certain symmetry restrictions (Theorem \ref{thm:isoperimetric}). The mixed Steklov Neumann problem is studied in Section \ref{sec:SN problem}, and bounds similar to those in Theorem \ref{thm:isoperimetric} are obtained for the mixed Steklov Neumann eigenvalues (Theorem \ref{thm:isoperimetricSN}).
In Section \ref{Counter examples}, we provide examples of domains showing that the symmetry condition assumed in Theorems \ref{thm:isoperimetric} and \ref{thm:isoperimetricSN} cannot be dropped. In the last section, Section \ref{sec:numerical observation}, we record some interesting monotonicity behaviour of the first two nonzero Steklov eigenvalues and mixed Steklov Neumann eigenvalues on certain domains. For example, on any annular domain (a ball with a spherical hole) in $\mathbb{R}^{2}$, the first mixed Steklov Neumann eigenvalue has multiplicity 2. Further, it decreases as the distance between the centers of the two balls increases. \section{Steklov eigenvalue problem on concentric annular domain} \label{sec:annular domain} Let $B_L$ and $B_1$ be balls in $\mathbb{R}^n$ of radii $L \, (> 1)$ and $1$, respectively, centered at the origin. Consider the Steklov eigenvalue problem on the annular domain $\Omega_{0} = B_L \setminus \bar{B_1}$, \begin{align} \begin{cases} \Delta u =0 \quad \text{in } \Omega_0, \\ \frac{\partial u}{\partial \nu} = \sigma u \quad \text{on } \partial \Omega_0. \end{cases} \label{stk on ball-ball} \end{align} In this section, we first calculate all eigenvalues and eigenfunctions of \eqref{stk on ball-ball} and then identify among them the second nonzero Steklov eigenvalue on the annular domain counted without multiplicity (Theorem \ref{thm:increasing}). \subsection{Preliminaries} Due to the symmetry of the annular domain $\Omega_{0}$, any eigenfunction of \eqref{stk on ball-ball} is of the form $u(r,\omega)=f(r)g(\omega)$, where $f(r)$ is a radial function defined on $[1, L]$ and $g(\omega)$ is an eigenfunction of $\Delta_{S^{n-1}}$. Recall that the spectrum of $\Delta_{S^{n-1}}$ is $\{l(l+n-2) : l \in \mathbb{N} \cup \{0\}\}$. The multiplicity of the eigenvalue $l(l+n-2)$ is the same as the dimension of the space $H_l$ of homogeneous harmonic polynomials in $n$ variables of degree $l \in \mathbb{N} \cup \{0\}$.
It is well known that \begin{align*} \text{dim } H_{l} = \frac{2l+n-2}{l+n-2} {l+n-2 \choose {l}}. \end{align*} The eigenfunctions corresponding to the eigenvalue $l(l+n-2)$ are precisely the spherical harmonics of degree $l$, defined as the restrictions to $\mathbb{S}^{n-1}$ of homogeneous harmonic polynomials in $n$ variables of degree $l$. For more details about the spectrum and eigenfunctions of $\Delta_{S^{n-1}}$, see \cite{anoop2022szego}. The Laplacian of $u(r,\omega)$ takes the form \begin{align*} \Delta u(r, \omega) &= g(\omega)\left(-f''(r)- \frac{n-1}{r} f'(r) \right) + \frac{f(r)}{r^2} \Delta_{S ^{n-1}} g(\omega)\\ &= g(\omega) \left(-f''(r)- \frac{n-1}{r} f'(r) + \frac{f(r)}{r^2} l(l+n-2) \right). \end{align*} Since $u(r,\omega)$ is a solution of (\ref{stk on ball-ball}), the function $f(r)$ satisfies \begin{eqnarray} \begin{array}{ll} \label{ode} -f''(r)-\frac{n-1}{r}f'(r)+ \frac{l(l+n-2)}{r^2}f(r)=0 ~\mbox{ for } ~ r \in (1,L), \\ f'(1)=-\sigma f(1), \,\, f'(L)=\sigma f(L). \end{array} \end{eqnarray} Solving this ordinary differential equation for the function $f(r)$, we get \begin{equation*} f(r)= \begin{cases} C_1 \ln r + C_2 &\quad l=0, n=2, \\ C_1 r^{l}+C_2 \frac{1}{r^{l+n-2}} &\quad \text{ otherwise}, \end{cases} \end{equation*} for some constants $C_1$ and $C_2$. To determine $C_1$ and $C_2$, we use the boundary conditions of (\ref{ode}) and obtain a system of two linear equations. \textbf{Case I.} For $l=0, n=2$, \begin{equation} \label{eqn:linear system} \begin{cases} C_1 = -C_2 \sigma, \\ C_1\frac{1}{L}=\sigma (C_1 \ln L+C_2). \end{cases} \end{equation} This system has a nontrivial solution if $\sigma^2 L \, \ln L-\sigma (L+1)=0$. This equation has two solutions, say $\sigma_{0, 1} = 0$ and $\sigma_{0, 2} = \frac{1+L}{L \, \ln L}$.
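As a quick numerical sanity check of Case I (a sketch of ours, not part of the argument; the function name is illustrative), one can verify that $f(r) = 1 - \sigma \ln r$ with $\sigma = \frac{1+L}{L \ln L}$ satisfies both boundary conditions of \eqref{ode} when $n = 2$, $l = 0$:

```python
import math

def check_case_one(L):
    """Check that f(r) = 1 - sigma*ln(r) with sigma = (1+L)/(L ln L)
    satisfies f'(1) = -sigma f(1) and f'(L) = sigma f(L)  (n = 2, l = 0)."""
    sigma = (1 + L) / (L * math.log(L))
    f = lambda r: 1 - sigma * math.log(r)
    fp = lambda r: -sigma / r  # f'(r)
    assert abs(fp(1) + sigma * f(1)) < 1e-9  # boundary condition at r = 1
    assert abs(fp(L) - sigma * f(L)) < 1e-9  # boundary condition at r = L
    return sigma

for L in (1.5, 2.0, 5.0):
    check_case_one(L)
```

For instance, $L = 2$ gives $\sigma_{0,2} = \frac{3}{2\ln 2} \approx 2.164$.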
Thus $\sigma_{0, 1}$ and $\sigma_{0, 2}$ are eigenvalues of \eqref{ode} corresponding to the eigenfunctions $f_{0, 1}$ and $f_{0, 2}$, respectively, where $ f_{0,i}(r) = 1-\sigma_{0, i} \,\ln r$, $i = 1,2$, $r \in [1,L]$. \textbf{Case II.} For the remaining values of $l$ and $n$, \begin{equation} \begin{cases} C_1(l+\sigma)-C_2(l+n-2-\sigma)=0,\\ C_1(lL^{l-1}-L^l\sigma)-C_2(\frac{l+n-2}{L^{l+n-1}}+\frac{1}{L^{l+n-2}}\sigma)=0. \end{cases} \end{equation} This system has a nontrivial solution if $ \tilde{A} \sigma^2 + \tilde{B} \sigma + \tilde{C} = 0$, where \begin{align*} \begin{cases} \tilde{A}=L(L^{2l+n-2}-1), \\ \tilde{B}=-\{ lL^{2l+n-2}+L^{2l+n-1}(l+n-2)+lL+l+n-2 \}, \\ \tilde{C} = l(l+n-2)(L^{2l+n-2}-1). \end{cases} \end{align*} We compute the discriminant $\mathcal{D}$ of this quadratic equation and verify that $\mathcal{D} > 0$. Note that \begin{align*} \mathcal{D}&=\tilde{B}^2 -4\tilde{A} \tilde{C}\\ &= \{(l+n-2)L^{2l+n-1}+lL^{2l+n-2}+lL+l+n-2\}^2-4(l+n-2)lL(L^{2l+n-2}-1)^2 \\ & \geq \{(l+n-2)L^{2l+n-1}+lL^{2l+n-2}\}^2-4(l+n-2)lL(L^{2l+n-2}-1)^2 \\ &\geq (L^{2l+n-2}-1)^2\{(l+n-2)L+l\}^2-4(l+n-2)lL(L^{2l+n-2}-1)^2 \\ & =(L^{2l+n-2}-1)^2 \{(l+n-2)L-l\}^2 > 0. \end{align*} Thus the equation $ \tilde{A} \sigma^2 + \tilde{B} \sigma + \tilde{C} = 0$ has two distinct real solutions, say \begin{align*} \sigma_{l, 1} = \frac{-\tilde{B}-\sqrt{\mathcal{D}}}{2\tilde{A}} < \frac{-\tilde{B}+\sqrt{\mathcal{D}}}{2\tilde{A}} = \sigma_{l, 2}. \end{align*} The eigenfunction of \eqref{ode} corresponding to $\sigma_{l, i}$, $i=1, 2$, is \begin{align} \label{eqn: eigenfunction ODE} f_{l,i}(r)= r^l + \frac{(l+\sigma_{l,i})}{l+n-2-\sigma_{l,i}} \frac{1}{r^{l+n-2}}, \quad r \in [1,L].
\end{align} \begin{remark} \label{rmk:explicit expression} Note that the eigenfunctions of \eqref{stk on ball-ball} corresponding to the eigenvalue $\sigma_{l,i}$, $l\geq 0$, $i= 1,2$, are of the form $f_{l,i}(r) g(\omega)$, where $g(\omega)$ is an eigenfunction of $\Delta_{\mathbb{S}^{n-1}}$ corresponding to the eigenvalue $l(l+n-2)$. The following explicit expressions for $\sigma_{l,i}$ will be used in the next subsection to prove Theorem \ref{thm:increasing}: \begin{enumerate} \item $\sigma_{0, 1} = 0.$ \item The value of $ \sigma_{0, 2}$ depends on $n$: \begin{align*} \sigma_{0, 2} = \begin{cases} \frac{1+L}{L \, \ln L}, & \quad \ n=2, \\ \frac{(n-2)(1+L^{n-1})}{(L^{n-1}-L)}, & \quad \ n>2.\\ \end{cases} \end{align*} \item $\sigma_{1,2} = \frac{(L+n-1)+L^n(1+L(n-1))+\sqrt{\left((L+n-1)+L^n(1+L(n-1))\right)^2-4L(L^n-1)^2(n-1)}}{2L(L^{n}-1)}.$ \item $\sigma_{2,1} = \frac{L^{n+2}(2+Ln)+(n+2L)-\sqrt{\left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2}}{2L(L^{n+2}-1)}.$ \end{enumerate} \end{remark} \subsection{Second nonzero Steklov eigenvalue counted without multiplicity} It was proved in \cite{ftouhi2022place} that the first nonzero Steklov eigenvalue on the annular domain, $\sigma_{1}(\Omega_0) = \sigma_{1,1}$, has multiplicity $n$, i.e., $\sigma_{1}(\Omega_0) = \sigma_{2}(\Omega_0)= \cdots = \sigma_{n}(\Omega_0) = \sigma_{1,1}$. In this subsection, we find the next smallest Steklov eigenvalue $\sigma_{n+1}(\Omega_0)$ on the annular domain and provide its explicit expression. Precisely, we prove the following theorem: \begin{theorem} \label{thm:increasing} The second nonzero Steklov eigenvalue, counted without multiplicity, on the concentric annular domain $\Omega_0$ is \begin{align*} \sigma_{n+1}(\Omega_0) = \sigma_{2,1} = \frac{L^{n+2}(2+Ln)+(n+2L)-\sqrt{\left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2}}{2L(L^{n+2}-1)} \end{align*} with multiplicity $\frac{(n+2)(n-1)}{2}.$ \end{theorem} To prove Theorem \ref{thm:increasing}, we need the following lemmas.
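Incidentally, the explicit expressions in items (3) and (4) of Remark \ref{rmk:explicit expression} can be spot-checked numerically against the roots of the quadratic $\tilde{A}\sigma^2 + \tilde{B}\sigma + \tilde{C} = 0$ from Case II. The following sketch (ours; names are illustrative) does this for $n = 3$, $L = 2$:

```python
import math

def steklov_pair(n, L, l):
    """Roots sigma_{l,1} <= sigma_{l,2} of A s^2 + B s + C = 0 (Case II, l >= 1)."""
    p = 2 * l + n - 2
    A = L * (L ** p - 1)
    B = -(l * L ** p + L ** (p + 1) * (l + n - 2) + l * L + l + n - 2)
    C = l * (l + n - 2) * (L ** p - 1)
    D = B * B - 4 * A * C
    return (-B - math.sqrt(D)) / (2 * A), (-B + math.sqrt(D)) / (2 * A)

n, L = 3, 2.0
# item (3): explicit expression for sigma_{1,2}
u = (L + n - 1) + L ** n * (1 + L * (n - 1))
s12 = (u + math.sqrt(u ** 2 - 4 * L * (L ** n - 1) ** 2 * (n - 1))) / (2 * L * (L ** n - 1))
assert abs(steklov_pair(n, L, 1)[1] - s12) < 1e-9
# item (4): explicit expression for sigma_{2,1}
v = L ** (n + 2) * (2 + L * n) + (n + 2 * L)
s21 = (v - math.sqrt(v ** 2 - 8 * L * n * (L ** (n + 2) - 1) ** 2)) / (2 * L * (L ** (n + 2) - 1))
assert abs(steklov_pair(n, L, 2)[0] - s21) < 1e-9
```

Both assertions hold, since the explicit formulas are exactly the quadratic-formula roots for $l = 1$ and $l = 2$.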
\begin{lemma} \label{lem: eigenvalue ineq. for planar} Let $L$, $\sigma_{0,2}$, $\sigma_{1,2}$ and $\sigma_{2,1}$ be defined as above. Then, for $n=2$, \begin{enumerate} \item $ \sigma_{0,2} \leq \sigma_{1,2}.$ \item $ \sigma_{2,1} \leq \sigma_{0,2}.$ \end{enumerate} \end{lemma} \begin{proof} Note that for $n=2$, \begin{align*} \sigma_{0,2}= \frac{L+1}{L\, \ln L}, \quad \sigma_{1,2} = \frac{(L^2+1)+\sqrt{(L-1)^4+4L^2}}{2L(L-1)}, \\ \sigma_{2,1}=\frac{(1+L^4)-\sqrt{(1+L^4)^2-4L(1+L^2)^2(1-L)^2}}{L(L-1)(1+L^2)}. \end{align*} \begin{enumerate} \item The inequality $\sigma_{0,2} \leq \sigma_{1,2}$ is equivalent to \begin{align*} 2(L^2-1)\leq \left((L^2+1)+\sqrt{(L-1)^4+4L^2}\right)\ln L. \end{align*} This inequality holds if \begin{align*} 2(L^2-1)\leq (L+1)^2 \ln L. \end{align*} We take $\tilde{h}(t)=(t+1)^2 \ln t - 2(t^2-1)$, $1 \leq t \leq L$. Note that $\tilde{h}(1)= 0$, $\tilde{h}'(1)= 0$, $\tilde{h}''(1)= 0$ and $\tilde{h}'''(t) \geq 0$ for all $1 \leq t \leq L$. This gives that $\tilde{h}''(t)$ is an increasing function, so $\tilde{h}''(t) \geq \tilde{h}''(1) = 0$ for all $1 \leq t \leq L$. Using this argument repeatedly, we get $\tilde{h}(t) \geq 0$ for all $1 \leq t \leq L$. In particular, $\tilde{h}(L) \geq 0,$ which gives $\sigma_{0,2} \leq \sigma_{1,2}$. \item The inequality $\sigma_{2,1} \leq \sigma_{0,2}$ is equivalent to \begin{align} \label{inequality} \left((1+L^4)-\sqrt{(1+L^4)^2-4L(1+L^2)^2(1-L)^2} \right) \ln L \leq (L^4-1). \end{align} A simple calculation shows that \begin{align*} (1+L^4)-\sqrt{(1+L^4)^2-4L(1+L^2)^2(1-L)^2} \leq \frac{2(1+L^4)}{(L+1)}. \end{align*} So inequality (\ref{inequality}) holds if \begin{align*} \ln L \leq \frac{(L^4-1)(L+1)}{2(1+L^4)}. \end{align*} We introduce $w(t)=\frac{(t^4-1)(t+1)}{2(1+t^4)}- \ln t$, $1 \leq t \leq L$. The idea is to show that $w(t)$ is an increasing function of $t$ by proving that $w'(t) \geq 0$. This, combined with $w(1) = 0$, gives the desired result.
We have \begin{align*} w'(t)=\frac{2t^9-4t^8+16t^5+8t^4-2t-4}{4t(1+t^4)^2}. \end{align*} Consider the function $\tilde{w}(t)=2t^9-4t^8+16t^5+8t^4-2t-4.$ It is easy to check that the $k^{\text{th}}$ derivative of $\tilde{w}(t)$ at the point $t=1$ satisfies $\tilde{w}^{(k)}(1)\geq 0 $ for $k=1,2,3$, and that $\tilde{w}^{(4)}(t)\geq 0$ for all $1 \leq t \leq L$. Now, using the same arguments as in the first part of this lemma, we can prove that $\tilde{w}(t) \geq 0$. Since the denominator of $w'(t)$ is always positive, we get that $w'(t) \geq 0$. \end{enumerate} \end{proof} \begin{lemma} \label{positive lemma} For $n\geq 3$, consider \begin{align*} h(t) = (n-2)t^{2n+1}-(n+2)t^{2n}+(n-2)t^{n+3}+(3n-2)t^{n+2}-2n t^{n-1}+4t-2(n-2); \end{align*} then $h(L) \geq 0$ for each $L > 1$. \end{lemma} \begin{proof} For a fixed $L>1$, let $h^{(k)}(t)$ denote the $k$th derivative of the function $h$ at a point $t \in [1,L]$. We first prove that $h^{(k)}(1) \geq 0$ for all $0 \leq k \leq 7$ and $h^{(8)}(t) \geq 0$. Then, using the same idea as in Lemma \ref{lem: eigenvalue ineq. for planar}, the conclusion follows. We have \begin{align*} h(1) = 0, \quad h^{(1)}(1) = 2n^2-8 \geq 0, \quad h^{(2)}(1) = 2(n-2)(n+2)^2 \geq 0. \end{align*} For $3 \leq k \leq 2n+1,$ \begin{align*} h^{(k)}(t)= & (n-2)\frac{(2n+1)!}{(2n+1-k)!}t^{(2n+1-k)}-(n+2)\frac{(2n)!}{(2n-k)!}t^{(2n-k)}+(n-2)\frac{(n+3)!}{(n+3-k)!}t^{(n+3-k)}\\ &+(3n-2)\frac{(n+2)!}{(n+2-k)!}t^{(n+2-k)}-2n\frac{(n-1)!}{(n-1-k)!}t^{(n-1-k)}, \end{align*} with the convention that any term in which the power of $t$ is negative is taken to be zero. By substituting the values of $k$, we can check that $h^{(k)}(1) \geq 0 $ for $ k=3,4,\dots, 7$.
For $k=8$, \begin{align} \nonumber h^{(8)}(t)=&(n-2)\frac{(2n+1)!}{(2n+1-8)!} t^{(2n+1-8)}-(n+2)\frac{(2n)!}{(2n-8)!}t^{(2n-8)} \\ \nonumber & \quad +(n-2)\frac{(n+3)!}{(n+3-8)!}t^{(n+3-8)} +(3n-2)\frac{(n+2)!}{(n+2-8)!}t^{(n+2-8)}\\ \label{8th derivative} & \qquad -2n\frac{(n-1)!}{(n-1-8)!}t^{(n-1-8)}. \end{align} Since $\bigg((n-2)\frac{(2n+1)!}{(2n+1-8)!} - (n+2)\frac{(2n)!}{(2n-8)!}\bigg)$ and $\bigg((n-2)\frac{(n+3)!}{(n+3-8)!}+(3n-2)\frac{(n+2)!}{(n+2-8)!}-2n\frac{(n-1)!}{(n-1-8)!}\bigg)$ are positive, we get that $h^{(8)}(t) \geq 0$. \end{proof} \begin{lemma} \label{lem: inequality1for higher order} For $n \geq 3$ and $L>1$, let $\sigma_{0, 2}$ and $\sigma_{2,1}$ be defined as in Remark \ref{rmk:explicit expression}. Then \begin{equation} \label{long inequality} \sigma_{2,1} \leq \sigma_{0,2}. \end{equation} \end{lemma} \begin{proof} Inequality (\ref{long inequality}) is equivalent to \begin{align} \nonumber &\left( L^{n+2}(2+Ln)+(n+2L)-\sqrt{\left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2} \right)(L^{n-2}-1) \\ \label{ineq: first} &\leq 2(n-2)(L^{n-1}+1)(L^{n+2}-1). \end{align} Note that $2+Ln=2+L(n-2) +2L \geq 2+(n-2)+2L=n+2L$. Using this, we have \begin{align*} \left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2 & \geq \left( L^{n+2} + 1 \right)^{2} (n+2L)^{2} - 8 Ln \left( L^{n+2} - 1 \right)^{2} \\ & \geq \left( L^{n+2} - 1 \right)^{2} \left( (n+2L)^{2} - 8 Ln \right) \\ & = \left( L^{n+2} - 1 \right)^{2} (2L - n)^{2}. \end{align*} This gives \begin{align*} &\left( L^{n+2}(2+Ln)+(n+2L)-\sqrt{\left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2} \right)\\ & \leq (n+2)L^{n+2}+(n-2)L^{n+3}+4L. \end{align*} Now inequality \eqref{ineq: first} holds if \begin{align*} \left( (n+2)L^{n+2}+(n-2)L^{n+3}+4L\right)(L^{n-2}-1) \leq 2(n-2)(L^{n-1}+1)(L^{n+2}-1), \end{align*} or equivalently, if \begin{align*} (n-2)L^{2n+1}-(n+2)L^{2n}+(n-2)L^{n+3}+(3n-2)L^{n+2}-2nL^{n-1}+4L-2(n-2) \geq 0. \end{align*}
Now the conclusion follows from Lemma \ref{positive lemma}. \end{proof} \begin{lemma} \label{lem: inequality2for higher order} For $n \geq 3$ and $L>1$, let $\sigma_{1, 2}$ and $\sigma_{2,1}$ be defined as in Remark \ref{rmk:explicit expression}. Then \begin{align} \label{2nd inequality} \sigma_{2,1} \leq \sigma_{1,2}. \end{align} \end{lemma} \begin{proof} Since $\sqrt{\left((L+n-1)+L^n(1+L(n-1))\right)^2-4L(L^n-1)^2(n-1)} \geq 0$, inequality (\ref{2nd inequality}) holds if \begin{align*} & \left(L^{n+2}(2+Ln)+(n+2L)-\sqrt{\left( L^{n+2}(2+Ln)+(n+2L) \right)^2-8Ln(L^{n+2}-1)^2}\right)(L^n-1) \\ & \leq \left((L+n-1)+L^n(1+L(n-1))\right)(L^{n+2}-1), \end{align*} or equivalently, if \begin{align*} &\left( n^2L^{2n+6}-4nL^{2n+5}+4L^{2n+4}+(2n^2+16n+8)L^{n+3}+4nL^{n+2}+4L^2-4Ln+n^2\right) \\ & \times (L^n-1)^2 -\left(L^{2n+2}-(n+1)L^{n+2}+(n+1)L^n-1\right)^2(1+L)^2 \geq 0. \end{align*} Now consider \begin{align*} \tilde{h}(L)=&\left( n^2L^{2n+6}-4nL^{2n+5}+4L^{2n+4}+(2n^2+16n+8)L^{n+3}+4nL^{n+2}+4L^2-4Ln+n^2\right) \\ & \times (L^n-1)^2 - \left(L^{2n+2}-(n+1)L^{n+2}+(n+1)L^n-1\right)^2(1+L)^2. \end{align*} Applying a technique similar to that in the proof of Lemma \ref{positive lemma}, it can be shown that $\tilde{h}(L)\geq 0$. Hence the conclusion follows. \end{proof} \begin{lemma} \label{lem:Illias} For $l\geq 0$ and $i\in\{1, 2\}$, let $\sigma_{l,i}$ be the Steklov eigenvalues on the annular domain $\Omega_0$ defined above. Then \begin{enumerate} \item the sequence $\{\sigma_{l, 1}\}_{l\geq 0}$ is strictly increasing; \item $\sigma_{1, 1} \leq \sigma_{0, 2}$. In particular, the first nonzero Steklov eigenvalue of $\Omega_{0}$ is $\sigma_{1}(\Omega_{0}) = \sigma_{1, 1}$ with multiplicity $n$. \end{enumerate} \end{lemma} For a proof of this lemma, see \cite[Lemma 4.3]{ftouhi2022place}. \noindent \textbf{Proof of Theorem \ref{thm:increasing}:} For $n=2$, it follows from Lemma \ref{lem: eigenvalue ineq.
for planar} and \ref{lem:Illias} that \begin{align*} \sigma_{1, 1} \leq \sigma_{2,1} \leq \sigma_{0, 2} \leq \sigma_{1,2} \quad \text{and} \quad 0 = \sigma_{0, 1} \leq \sigma_{1, 1} \leq \sigma_{2, 1} \leq \sigma_{3, 1} \leq \cdots. \end{align*} Combining this with the relation $\sigma_{l,1} < \sigma_{l,2}$ for all $l \geq 0$, we conclude that $\sigma_{2,1}$ is the next smallest eigenvalue after $\sigma_{1,1}$, i.e., $\sigma_{n+1}(\Omega_{0}) = \sigma_{2,1}$. Similarly, for $n \geq 3$, $\sigma_{n+1}(\Omega_{0}) = \sigma_{2,1}$ follows from Lemma \ref{lem: inequality1for higher order}, Lemma \ref{lem: inequality2for higher order} and Lemma \ref{lem:Illias}. Note that the eigenfunctions of \eqref{stk on ball-ball} corresponding to the eigenvalue $\sigma_{2,1}$ are of the form $f_{2,1}(r) g_{2}(\omega)$, where $f_{2,1}(r)$ is defined in \eqref{eqn: eigenfunction ODE} and $g_{2}(\omega)$ is an eigenfunction of $\Delta_{\mathbb{S}^{n-1}}$ corresponding to the eigenvalue $2n$, i.e., it is a spherical harmonic of degree two. As mentioned in the last subsection, the multiplicity of $\sigma_{n+1}(\Omega_{0}) = \sigma_{2,1}$ is $\dim H_{2} = \frac{(n+2)(n-1)}{2}$. \section{Some integral inequalities} \label{sec:int. ineq.} In this section, we provide some results related to the function $f_{1,1}(r)$, which will be used in the next section to prove our second main result about higher Steklov eigenvalues, Theorem \ref{thm:isoperimetric}.
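As an illustrative aside (not used in any proof), the monotonicity properties of the auxiliary functions $F$ and $G$ established in Lemma \ref{decreasing lemma} below can be spot-checked numerically, treating the constant $\frac{1+\sigma_{1,1}}{n-1-\sigma_{1,1}}$ as a free positive parameter $c$; a short Python sketch (the function names are ours):

```python
# Numerical spot-check (not a proof): with f(r) = r + c / r^{n-1},
# F(r) = f'(r)^2 + (n-1) f(r)^2 / r^2 should be decreasing on [1, oo),
# and G(r) = 2 f(r) f'(r) + (n-1) f(r)^2 / r should be increasing.
# Here c stands in for (1 + sigma_{1,1}) / (n - 1 - sigma_{1,1}),
# treated as a free positive parameter for illustration.
def F(n, c, r):
    f = r + c / r**(n - 1)
    fp = 1 - c * (n - 1) / r**n
    return fp**2 + (n - 1) * f**2 / r**2

def G(n, c, r):
    f = r + c / r**(n - 1)
    fp = 1 - c * (n - 1) / r**n
    return 2 * f * fp + (n - 1) * f**2 / r

for n in [2, 3, 5]:
    for c in [0.1, 1.0, 3.0]:
        rs = [1 + 0.1 * k for k in range(100)]
        Fs = [F(n, c, r) for r in rs]
        Gs = [G(n, c, r) for r in rs]
        assert all(a >= b - 1e-12 for a, b in zip(Fs, Fs[1:]))  # F decreasing
        assert all(a <= b + 1e-12 for a, b in zip(Gs, Gs[1:]))  # G increasing
```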
\begin{definition} A domain $W\subset \mathbb{R}^n$ is said to be symmetric of order $s$ with respect to the origin, if $R_{i,j}^{\frac{2\pi}{s}}(W)=W$ for all $1\leq i <j \leq n,$ where $R_{i,j}^{\frac{2\pi}{s}}$ denotes anticlockwise rotation with respect to the origin by angle $\frac{2\pi}{s}$ in the coordinate plane $(x_i, x_j).$ \end{definition} Hereafter, we let $\Omega_{out} \subset \mathbb{R}^n$ be a connected open bounded smooth domain with symmetry of order 4 with respect to the origin and $B_1$ be a unit ball centered at the origin such that $ \Bar{ B_1 }\subset \Omega_{out}$. Denote $\Omega = \Omega_{out} \backslash \Bar{B_1}$. Consider $\Omega_0 = B_L \backslash \Bar{ B_1}$ where $B_L$ is a ball of radius $L$ centered at the origin such that $Vol(B_L)= Vol(\Omega_{out})$ i.e., $Vol(\Omega)= Vol(\Omega_0)$. Let $f_{1,1}:[1,L] \rightarrow \mathbb{R} $ be the function defined as in \eqref{eqn: eigenfunction ODE}. From now onwards, we consider $f_{1,1}:[1,\infty) \rightarrow \mathbb{R} $ by extending function $f_{1,1} (r)$ radially to $[1, \infty)$. \begin{lemma} \label{decreasing lemma} Define $F,G : [1,\infty) \rightarrow \mathbb{R}$ as \begin{align*} F(r) = \left((f'_{1,1}(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right) \end{align*} and \begin{align*} G(r) = \left( 2f_{1,1}(r)f_{1,1}'(r) + \frac{n-1}{r}f_{1,1}^2(r) \right). \end{align*} Then $F$ is a decreasing function of $r$, and $G$ is an increasing function of $r$. \end{lemma} \begin{proof} Recall that $f_{1,1}(r)=r + \frac{1+\sigma_{1,1}}{n-1-\sigma_{1,1}}\frac{1}{r^{n-1}}$, then $f_{1,1}'(r)=1-\frac{1+\sigma_{1,1}}{n-1-\sigma_{1,1}}\frac{n-1}{r^n}$. Substituting these values, we get \begin{align*} F(r)=& n+\frac{(1+\sigma_{1,1})^2(n-1)n}{(n-1-\sigma_{1,1})^2}\frac{1}{r^{2n}}, \qquad \text{ and } \\ G(r)=& (n+1)r+\frac{(1+\sigma_{1,1})}{(n-1-\sigma_{1,1})}\frac{2}{r^{(n-1)}}-\frac{(1+\sigma_{1,1})^2(n-1)}{(n-1-\sigma_{1,1})^2}\frac{1}{r^{(2n-1)}}. 
\end{align*} By differentiating these functions, we have for all $r \in [1, \infty)$, \begin{align*} F'(r)=&-\frac{2n^2(n-1)(1+\sigma_{1,1})^2}{(n-1-\sigma_{1,1})^2}\frac{1}{r^{(2n+1)}} \leq 0, \qquad \text{ and } \\ G'(r)=&2+(n-1)\Big (\frac{(1+\sigma_{1,1})}{(n-1-\sigma_{1,1})}\frac{1}{r^n}-1\Big )^2+\frac{2(1+\sigma_{1,1})^2(n-1)^2}{(n-1-\sigma_{1,1})^2}\frac{1}{r^{2n}} \geq 0. \end{align*} This proves our claim. \end{proof} \begin{lemma} \label{lem:integral2} Let $F$ be a decreasing radial function defined as in Lemma \ref{decreasing lemma} and $\Omega, \Omega_0$ be defined as above. Then the following inequality holds. \begin{equation} \int_\Omega F(r) dV \leq \int_{\Omega_0} F(r) dV. \end{equation} \end{lemma} For a proof, see \cite{basak2023sharp}. \begin{lemma}\label{integral6} Let $f_{1,1}(r), \Omega_{out}$ and $B_L$ be defined as above, and $\partial \Omega_{out}, \partial B_L$ are boundaries of $\Omega_{out}, B_L,$ respectively. Then the following inequality holds. \begin{equation} \label{inequality 5} \int_{\partial {\Omega_{out}}} f_{1,1}^2(r) dS \geq \int_{\partial {B_L}} f_{1,1}^2(r) dS. \end{equation} \end{lemma} \begin{proof} Let $B= \{ R_u u \,| \,\, u\in S^{n-1}\}$, where $S^{n-1}$ is the $(n-1)$-dimensional unit sphere and $R_u= \sup\{r \ | \ r u\in \partial \Omega \}$. Then \begin{align*} \int_{\partial {\Omega_{out}}} f_{1,1}^2(r) dS & \geq \int_{B} f_{1,1}^2(r) dS \\ &= \int_{u \in {S} ^{n-1}} f_{1,1}^2(R_u) \sec(\theta) R_u^{n-1} du \\ & \geq \int_{ u\in {S} ^{n-1}} f_{1,1}^2(R_u) R_u^{n-1} du \\ & = \int_{u \in S^{n-1}}\int_{1}^{R_u} \left ( 2 f_{1,1}(r) f'_{1,1}(r) r^{n-1} + f_{1,1}^2(r) (n-1) r^{n-2} \right) dr du +f_{1,1}^2(1)|S^{n-1}|\\ & = \int_{u\in {S}^{n-1}}\int_{1}^{R_u} \left( 2 f_{1,1}(r) f'_{1,1}(r)+ f_{1,1}^2(r) \frac{(n-1)}{r} \right)r^{n-1} dr du + f_{1,1}^2(1)|S^{n-1}|\\ & \geq \int_{\Omega} \left( 2 f_{1,1}(r) f'_{1,1}(r) + f_{1,1}^2(r) \frac{(n-1)}{r}\right) dV +f_{1,1}^2(1)|S^{n-1}| \\ &=\int_{\Omega}G(r)dV+f_{1,1}^2(1)|S^{n-1}|,
\end{align*} where $G(r) = \left( 2 f_{1,1}(r) f'_{1,1}(r) + f_{1,1}^2(r) \frac{(n-1)}{r} \right)$ is defined as in Lemma \ref{decreasing lemma}. Since $G$ is an increasing function of $r$, we have \begin{equation} \label{inequality 3} G(r) < G(L) \,\,\, \text{in}\,\, \Omega_0 \backslash (\Omega \cap \Omega_0), ~\mbox{ and } ~~ G(r) > G(L)\,\, \text{in} \,\,\, \Omega \backslash (\Omega \cap \Omega_0). \end{equation} Now \begin{align*} \int_ { \Omega} G(r) dV +f_{1,1}^2(1)|S^{n-1}|& = \int_{\Omega \cap \Omega_0} G(r) dV+ \int_{\Omega \backslash (\Omega \cap \Omega_0)} G(r) dV +f_{1,1}^2(1)|S^{n-1}|\\ & = \int_{ \Omega_0} G(r) dV - \int_{ \Omega_0 \backslash (\Omega \cap \Omega_0)} G(r) dV + \int_{ \Omega \backslash (\Omega \cap \Omega_0)} G(r) dV +f_{1,1}^2(1)|S^{n-1}|. \end{align*} Thus, from inequality (\ref{inequality 3}), we get \begin{align*} \int_{ \Omega} G(r) dV +f_{1,1}^2(1)|S^{n-1}| \geq \int_{\Omega_0} G(r) dV - \int_{\Omega_0 \backslash (\Omega \cap \Omega_0)} G(L) dV + \int_{\Omega \backslash (\Omega \cap \Omega_0)} G(L) dV +f_{1,1}^2(1)|S^{n-1}|. \end{align*} Since Vol$(\Omega_0 \backslash (\Omega \cap \Omega_0))$ = Vol$(\Omega \backslash (\Omega \cap \Omega_0))$, we get \begin{align*} \int_{\Omega} G(r) dV +f_{1,1}^2(1)|S^{n-1}| & \geq \int_{\Omega_0} G(r) dV +f_{1,1}^2(1)|S^{n-1}| \\ &= \int_{\Omega_0} \left( 2 f_{1,1}(r) f'_{1,1}(r) + f_{1,1}^2(r) \frac{(n-1)}{r}\right) dV +f_{1,1}^2(1)|S^{n-1}|\\ & = \int_{u \in S^{n-1}} \int_{r=1}^{L} \left( 2 f_{1,1}(r) f'_{1,1}(r) + f_{1,1}^2(r) \frac{(n-1)}{r} \right) r^{n-1} drdu+f_{1,1}^2(1)|S^{n-1}|\\ &= \int_{u\in S^{n-1}} \Big[f_{1,1}^2(r)r^{n-1} \Big]_{1}^L du + f_{1,1}^2(1)|S^{n-1}| \\ & = \int_{u\in S^{n-1}} \left(f_{1,1}^2(L) L^{n-1} - f_{1,1}^2(1)\right) du + f_{1,1}^2(1)|S^{n-1}| \\ & = \int_{\partial B_{L}} f_{1,1}^2(r) dS. \end{align*} Thus, $\displaystyle \int_{\partial {\Omega_{out}} }f_{1,1}^2(r) dS \geq \int_{\partial B_{L}} f_{1,1}^2(r) dS$.
\end{proof} \begin{cor} \label{cor: integral4} The following inequality holds for $ \Omega, \Omega_0 $ and $f_{1,1}$: \begin{equation} \displaystyle \int_{\partial {\Omega }}f_{1,1}^2(r) dS \geq \int_{\partial \Omega_0} f_{1,1}^2(r) dS \end{equation} \end{cor} We will use the following proposition to conclude some integral identities related to the function $f_{1,1}(r)$, see Corollary \ref{cor: f_{1,1}}. \begin{proposition} \label{prop:integral expression} Let $g : (0, \infty) \rightarrow \mathbb{R}$ be a smooth positive radial function of $r$. Let $W$ be a bounded smooth domain in $\mathbb{R}^n$ with smooth boundary $\partial W$. \begin{enumerate} \item If $W$ is symmetric of order 2 with respect to the origin, then for each $i=1,2,\ldots, n$, we have \begin{enumerate} \item $\displaystyle \int_{x \in W} g(r) x_i\, dV = 0$, \label{eqn:integral 1}~~~ \item $\displaystyle \int_{x \in \partial W} g(r) x_i\, dS = 0$. \label{eqn: integral 3} \end{enumerate} \item If $W$ is symmetric of order 4 with respect to the origin, then for each $i, j=1,2,\dots, n$, $i\neq j$, we have \begin{enumerate} \item $\displaystyle \int_{ x \in W} g(r) x_i x_j \, dV = 0$, \label{eqn:integral 2} \item $\displaystyle \int_{x \in \partial W} g(r) x_i x_j \,dS = 0$. \label{eqn: integral 4} \end{enumerate} \end{enumerate} Here $(x_1, x_2, \ldots, x_n)$ denotes the Cartesian coordinates of a point $x \in \mathbb{R}^{n}$ with respect to the origin. \end{proposition} \begin{cor} \label{cor: f_{1,1}} Let $\Omega$ and $f_{1,1}$ be defined as in the beginning of this section. Then for each $i,j=1,2,\dots, n$, $i\neq j$, we have \begin{enumerate} \item $\displaystyle \int_{x\in \partial \Omega}f_{1,1}(r) \frac{f_{1,1}(r)}{r}x_i\, dS = 0,$ \label{proof (i)}\\ \item $\displaystyle \int_{x \in \partial \Omega} \frac{f_{1,1}(r)}{r}x_i \cdot
\frac{f_{1,1}(r)}{r}x_j \,dS=0,$ \item $\displaystyle \int_{ x \in \Omega} \left< \nabla f_{1,1}(r), \nabla \left(\frac{f_{1,1}(r)}{r}x_i \right)\right> dV=0,$ \item $\displaystyle \int_{x \in \Omega} \left< \nabla \left(\frac{f_{1,1}(r)}{r} x_i \right) , \nabla \left(\frac{f_{1,1}(r)}{r}x_j \right)\right> dV=0$. \end{enumerate} \end{cor} \begin{lemma} \label{lem:integral1} Let $W \subset \mathbb{R}^n$ be an open bounded smooth domain having symmetry of order $4$ with respect to the origin. Let $\Phi(r)$ be a positive radial function on $\mathbb{R}^n$. Then there exists a constant $A>0$ such that \begin{equation} \int_{W} \Phi(r) x_i^2\,\, dV= A\,\, \text{for all}\, \,i\in \{1,2, \dots, n\}. \end{equation} \end{lemma} For a detailed proof of Proposition \ref{prop:integral expression}, Corollary \ref{cor: f_{1,1}} and Lemma \ref{lem:integral1}, see \cite{basak2023sharp}. \section{Steklov eigenvalue problem on symmetric doubly connected domain} \label{sec: iso. bound} Let $\Omega_{out} \subset \mathbb{R}^n$ be a connected open bounded smooth domain with symmetry of order 4 with respect to the origin. Let $B_1$ be the unit ball centered at the origin such that $ \Bar{ B_1 }\subset \Omega_{out}$. Consider the following Steklov eigenvalue problem on $\Omega = \Omega_{out} \backslash \Bar{B_1}$. \begin{align} \begin{cases} \Delta u =0 \quad \text{in} \quad \Omega, \\ \frac{\partial u}{\partial \nu}= \sigma u \quad \text{on} \quad \partial \Omega.
\end{cases} \label{problem on annular domain } \end{align} For $k \in \mathbb{N}$, $\sigma_{k}$ admits the following variational characterization \begin{align} \label{variational characterization} \sigma_{k}(\Omega)=\min_{E\in \mathcal{H}_{k+1}(\Omega)} \max_{u(\neq 0) \in E} R(u), \end{align} where $\mathcal{H}_{k+1}(\Omega)$ is the collection of all $(k+1)$-dimensional subspaces of the Sobolev space $H^1(\Omega)$ and $R(u):= \frac{\displaystyle \int_{\Omega} \| \nabla u \|^2 dV}{\displaystyle \int_{\partial{\Omega}} u^2 dS}$.\\ Now we prove the following theorem for the first $n$ nonzero Steklov eigenvalues. \begin{theorem} \label{thm:isoperimetric} Let $\Omega$ be a connected open bounded smooth domain defined as above, and let $\sigma_{k}$ be the $k$th eigenvalue of (\ref{problem on annular domain }) on $\Omega$. Then for $1 \leq k \leq n$, \begin{equation} \sigma_{k}(\Omega) \leq \sigma_{k}(\Omega_0) = \sigma_1 (\Omega_0), \end{equation} where $\Omega_0 = B_L \backslash \Bar{ B_1}$ and $B_L$ is a ball of radius $L$ centered at the origin such that $Vol(B_L)= Vol(\Omega_{out})$, i.e., $Vol(\Omega)= Vol(\Omega_0)$. \end{theorem} \begin{proof} Since $\sigma_k(\Omega_0)=\sigma_1(\Omega_0)$ for $1\leq k \leq n$, it is enough to prove that $\sigma_n(\Omega) \leq \sigma_n(\Omega_0)=\sigma_1(\Omega_0)$. Consider the following $(n+1)$-dimensional subspace of $H^1(\Omega)$, \begin{align*} E=span\{ f_{1,1}(r), \frac{f_{1,1}(r)}{r}x_1, \dots, \frac{f_{1,1}(r)}{r}x_n \}, \end{align*} where $f_{1,1}(r)$ is defined in (\ref{eqn: eigenfunction ODE}) with $l=1$ and $i=1$. For any $u \in E \backslash \{0\}$, there exist $c_0, c_1, \dots, c_n \in \mathbb{R}$ not simultaneously equal to zero, such that \begin{align*} u= c_0 f_{1,1}(r) + c_1 \frac{f_{1,1}(r)}{r}x_1 + \dots + c_n\frac{f_{1,1}(r)}{r}x_n.
\end{align*} Then by using Corollary \ref{cor: f_{1,1}}, we get \begin{align} \frac{\displaystyle \int_\Omega \| \nabla u \|^2 dV}{\displaystyle \int_{\partial {\Omega}} u^2 dS}= \frac{c_0^2 \displaystyle \int_\Omega \|\nabla f_{1,1}(r)\|^2 \, dV+ \displaystyle \sum_{i=1}^nc_i^2 \displaystyle \int_\Omega \bigg \| \nabla \left( \frac{f_{1,1}(r)}{r} x_i \right) \bigg \|^2 \, dV}{c_0^2 \displaystyle \int_{\partial {\Omega}}f_{1,1}^2(r) \, dS + \displaystyle \sum_{i=1}^n c_i^2 \displaystyle \int_{\partial {\Omega}} \frac{f_{1,1}^2(r)}{r^2}x_i^2 \, dS}. \label{equality} \end{align} According to Lemma \ref{lem:integral1} there are constants $A_1, A_2 > 0 $ such that for all natural numbers $ 1\leq i \leq n$, \begin{align*} \int_{\partial {\Omega}}\left( \frac{f_{1,1}(r)}{r}x_i \right)^2 dS & =\int_{\partial {\Omega}} \frac{f_{1,1}^2(r)}{r^2}x_i^2 dS = A_1,\\ \int_\Omega \bigg \| \nabla \left( \frac{f_{1,1}(r)}{r} x_i \right) \bigg \|^2 dV & = \int_\Omega \left( \frac{(f'_{1,1}(r))^2}{r^2} x_i^2 - \frac{f_{1,1}^2(r)}{r^4}x_i^2 + \frac{f_{1,1}^2(r)}{r^2} \right) dV = A_2. \end{align*} Therefore $$ n\,A_1 = \sum_{i=1}^n \int_{\partial {\Omega}} \left( \frac{f_{1,1}(r)}{r}x_i \right)^2 \, dS = \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS, $$ and $$n\,A_2 = \sum_{i=1}^n \int_\Omega \left(\frac{(f'_{1,1}(r))^2}{r^2} x_i^2-\frac{f_{1,1}^2(r)}{r^4}x_i^2 +\frac{f_{1,1}^2(r)}{r^2} \right) dV =\int_\Omega \left((f'_{1,1}(r))^2 + \frac{(n-1)}{r^2}f_{1,1}^2(r)\right)\, dV.$$ Thus for all natural numbers $ 1\leq i\leq n,$ we have \begin{equation} \label{sum 1} \displaystyle \int_{\partial {\Omega}} \left( \frac{f_{1,1}(r)}{r}x_i \right)^2 \, dS = A_1 = \frac{1}{n} \displaystyle \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS. \end{equation} \begin{equation} \label{sum 1.5} \displaystyle \int_\Omega \bigg \| \nabla \left( \frac{f_{1,1}(r) x_i}{r}\right) \bigg \|^2 \, dV = A_2 = \frac{1}{n} \displaystyle \int_\Omega \left( (f'_{1,1}(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right) \, dV. 
\end{equation} Now, from (\ref{equality}), \eqref{sum 1} and \eqref{sum 1.5}, we get \begin{align} \frac{\displaystyle \int_\Omega \| \nabla u \|^2 dV}{\displaystyle \int_{\partial {\Omega}} u^2 \, dS} = \frac{c_0^2 \displaystyle \int_\Omega \|\nabla f_{1,1}(r)\|^2 \, dV + A_2 \displaystyle \sum_{i=1}^n c_i^2}{c_0^2 \displaystyle \int_{\partial {\Omega}}f_{1,1}^2(r) \, dS + A_1 \displaystyle \sum_{i=1}^n c_i^2 } \leq \max \Bigg \{\frac{\displaystyle \int_\Omega \|\nabla f_{1,1}(r)\|^2 \, dV}{\displaystyle \int_{\partial {\Omega}}f_{1,1}^2(r) \, dS}, \frac{A_2}{A_1} \Bigg \}. \label{gradient inequality 1} \end{align} Now \begin{align*} \frac{A_2}{A_1} & = \frac{\displaystyle \int_\Omega \left( (f'_{1,1}(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right) dV}{\displaystyle \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS} \geq \frac{\displaystyle \int_\Omega (f'_{1,1}(r))^2 \, dV}{\displaystyle \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS} = \frac{\displaystyle \int_\Omega \|\nabla f_{1,1}(r)\|^2 \, dV}{\displaystyle \int_{\partial {\Omega}}f_{1,1}^2(r) \, dS}. \end{align*} Then from the inequality (\ref{gradient inequality 1}) we get \begin{align} \frac{\displaystyle \int_\Omega \| \nabla u \|^2 dV}{\displaystyle \int_{\partial {\Omega}} u^2 \, dS} \leq \frac{A_2}{A_1} =\frac{\displaystyle \int_\Omega \left( (f'_{1,1}(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right) \, dV }{\displaystyle \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS}. \label{gradient inequality} \end{align} Next, using Lemma \ref{lem:integral2} and Corollary \ref{cor: integral4}, we get \begin{align*} \frac{ \displaystyle \int_\Omega \left( (f_{1,1}'(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right) \, dV }{\displaystyle \int_{\partial {\Omega}} f_{1,1}^2(r) \, dS} \leq \frac{\displaystyle \int_{\Omega_{0}} \left( (f_{1,1}'(r))^2 + \frac{(n-1)}{r^2} f_{1,1}^2(r)\right ) \, dV }{\displaystyle \int_{\partial \Omega_0}f_{1,1}^2(r) \, dS} = \sigma_1(\Omega_0).
\end{align*} Therefore, from the variational characterization (\ref{variational characterization}) and inequality (\ref{gradient inequality}), we conclude that \begin{align*} \sigma_{n}(\Omega) \leq \max_{u(\neq 0) \in E} \frac{\displaystyle \int_\Omega \| \nabla u \|^2 \, dV}{\displaystyle \int_{\partial {\Omega}} u^2 \, dS} \leq \sigma_1(\Omega_0). \end{align*} \end{proof} \section{Mixed Steklov Neumann eigenvalue problem } \label{sec:SN problem} In this section, we study the mixed Steklov Neumann problem on doubly connected domains. Throughout this section, we use the following notation: let $\Omega_{out}$ denote a connected smooth bounded domain in $\mathbb{R}^n$ having symmetry of order $4$ with respect to the origin, and let $B_{R_1}$ be a ball of radius $R_1$ in $\mathbb{R}^n$ centered at the origin such that $\bar{B}_{R_1} \subset \Omega_{out}$. Take $\tilde{\Omega}_0$ as a concentric annular domain in $\mathbb{R}^{n}$. We consider the following Steklov Neumann eigenvalue problem \begin{align} \label{SNproblem1} \begin{cases} \Delta u = 0 \,\, \text{in}\,\,\, \tilde{\Omega}=\Omega_{out}\backslash \bar{B}_{R_1}, \\ \frac{\partial u}{\partial \nu} = 0 \,\, \text{on}\,\,\, \partial B_{R_1}, \\ \frac{\partial u}{\partial \nu} = \mu u \,\, \text{on}\,\,\, \partial \Omega_{out}. \end{cases} \end{align} Eigenvalues of problem (\ref{SNproblem1}) are nonnegative and form an increasing sequence \begin{align*} 0=\mu_0(\tilde{\Omega})<\mu_1(\tilde{\Omega}) \leq \mu_2(\tilde{\Omega}) \leq \cdots \nearrow \infty. \end{align*} Here eigenvalues are counted with multiplicity. \subsection{Steklov Neumann Eigenvalue problem on concentric annular domain} For $R_2 >R_1,$ let $B_{R_1}$ and $B_{R_2}$ be balls in $\mathbb{R}^n$ centered at the origin of radii $R_1$ and $R_2$, respectively.
Consider the Steklov Neumann eigenvalue problem on an annular domain $\tilde{\Omega}_0 = B_{R_2} \backslash \Bar{B}_{R_1},$ \begin{align} \label{SN problem} \begin{cases} \Delta u = 0 \,\, \text{in}\,\,\, \tilde{\Omega}_0, \\ \frac{\partial u}{\partial \nu} = 0 \,\, \text{on}\,\,\, \partial B_{R_1}, \\ \frac{\partial u}{\partial \nu} = \mu u \,\, \text{on}\,\,\, \partial B_{R_2}. \end{cases} \end{align} In a similar manner as described in Section \ref{sec:annular domain}, using the separation of variables technique, any solution of (\ref{SN problem}) is of the form $u=f(r)g(w)$. Here $g(w)$ is an eigenfunction of $\Delta_{\mathbb{S}^{n-1}}$ and $f(r)$ satisfies \begin{eqnarray} \begin{array}{ll} \label{SNode} -f''(r)-\frac{n-1}{r}f'(r)+ \frac{l(l+n-2)}{r^2}f(r)=0 ~\mbox{ for } ~ r \in (R_1,R_2), l\geq 0, \\ f'(R_1)=0, \,\, f'(R_2)=\mu f(R_2). \end{array} \end{eqnarray} By solving this ODE, we find that for each $l \geq 0$ there is exactly one eigenvalue $\mu_l$, given by \begin{align*} \mu_l = \frac{l(l+n-2) \Big((\frac{R_2}{R_1})^{2l+n-2}-1\Big)}{R_2 \Big( (l+n-2)(\frac{R_2}{R_1})^{2l+n-2}+l\Big)}, \quad l \geq 0, \end{align*} with corresponding eigenfunction \begin{align}\label{Neuman eigenfunction} \tilde{f}_l(r)=r^l+\frac{lR_1^{2l+n-2}}{(l+n-2)r^{l+n-2}}, \quad l \geq 0. \end{align} Hence for each $l\geq 0$, $\mu_l (\tilde{\Omega}_0) = \mu_l$ is an eigenvalue of \eqref{SN problem} with multiplicity equal to the dimension of $H_l$. Eigenfunctions corresponding to eigenvalue $\mu_l (\tilde{\Omega}_0)$ are $\tilde{f}_{l}(r)g_{l}(w)$, where $g_{l}(w)$ is an eigenfunction of $\Delta_{\mathbb{S}^{n-1}}$ corresponding to eigenvalue $l(l+n-2)$ and $\tilde{f}_{l}(r)$ is defined as in \eqref{Neuman eigenfunction}.
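As an illustrative sanity check (not part of the argument), one can verify numerically that $\tilde{f}_l$ satisfies the boundary conditions in \eqref{SNode} and that the quotient $\tilde{f}_l'(R_2)/\tilde{f}_l(R_2)$ reproduces the closed-form eigenvalue $\mu_l$ above; a short Python sketch (the function names are ours):

```python
# Spot-check of the closed-form Steklov-Neumann eigenvalues on the annulus:
# f~_l(r) = r^l + l R1^{2l+n-2} / ((l+n-2) r^{l+n-2}) should satisfy
# f~_l'(R1) = 0 (Neumann) and f~_l'(R2) = mu_l f~_l(R2) (Steklov).
def mu(n, l, R1, R2):
    q = (R2 / R1) ** (2 * l + n - 2)
    return l * (l + n - 2) * (q - 1) / (R2 * ((l + n - 2) * q + l))

def f(n, l, R1, r):
    return r**l + l * R1**(2 * l + n - 2) / ((l + n - 2) * r**(l + n - 2))

def fprime(n, l, R1, r):
    return l * r**(l - 1) - l * R1**(2 * l + n - 2) / r**(l + n - 1)

n, R1, R2 = 3, 1.0, 2.0
for l in [1, 2, 3]:
    assert abs(fprime(n, l, R1, R1)) < 1e-12          # Neumann condition at r = R1
    ratio = fprime(n, l, R1, R2) / f(n, l, R1, R2)    # Steklov condition at r = R2
    assert abs(ratio - mu(n, l, R1, R2)) < 1e-12
```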
Also \begin{align} \mu_1 (\tilde{\Omega}_0) = \mu_2 (\tilde{\Omega}_0) = \cdots = \mu_n (\tilde{\Omega}_0) = \frac{\displaystyle \int_{\tilde{\Omega}_{0}} \left( (\tilde{f}_{1}'(r))^2 + \frac{(n-1)}{r^2} \tilde{f}_{1}^2(r)\right ) \, dV }{\displaystyle \int_{\partial B_{R_2}}\tilde{f}_{1}^2(r) \, dS}. \end{align} \begin{lemma} \label{SN de(increasing)lemma} Let $\tilde{f}_1:[R_1, \infty) \to \mathbb{R}$ be the function defined in \eqref{Neuman eigenfunction} for $l=1$. Define $\tilde{F}, \tilde{G}:[R_1, \infty) \to \mathbb{R}$ as \begin{align*} \tilde{F}(r) &=\left((\tilde{f}'_{1}(r))^2 + \frac{(n-1)}{r^2} \tilde{f}_{1}^2(r)\right) \qquad \text{ and } \\ \tilde{G}(r) &= \left( 2\tilde{f}_{1}(r)\tilde{f}_{1}'(r) + \frac{n-1}{r}\tilde{f}_{1}^2(r) \right). \end{align*} Then $\tilde{F}$ is a decreasing function of $r$ and $\tilde{G}$ is an increasing function of $r$. \end{lemma} \begin{proof} We have $\tilde{f}_1(r)=r+\frac{R_1^n}{(n-1)r^{n-1}}$ and $\tilde{f}_1'(r)=1-\frac{R_1^n}{r^n}.$ Substituting these values, we get \begin{align*} \tilde{F}(r)&=n+\frac{n}{n-1}\frac{R_1^{2n}}{r^{2n}} \,\, \,\quad \text{and}\\ \tilde{G}(r)&=(n+1)r+\frac{2R_1^n}{(n-1)r^{n-1}}-\frac{R_1^{2n}}{(n-1)r^{2n-1}}. \end{align*} Then by differentiating these functions, we have for all $r\in(R_1, \infty)$, \begin{align*} \tilde{F}'(r)&=-\frac{2n^2}{n-1}\frac{R_1^{2n}}{r^{2n+1}}\leq 0, \quad \text{and}\\ \tilde{G}'(r)&=(n+1)- \frac{2R_1^n}{r^n}+\frac{2n-1}{n-1}\frac{R_1^{2n}}{r^{2n}} \geq \frac{(r^n-R_1^n)^2}{r^{2n}}\geq 0. \end{align*} Hence the conclusion follows. \end{proof} \subsection{Mixed Steklov Neumann problem on symmetric doubly connected domain} In this subsection, we prove a sharp upper bound for the first $n$ nonzero mixed Steklov Neumann eigenvalues on $\tilde{\Omega}=\Omega_{out}\backslash \bar{B}_{R_1}.$ Let $\tilde{\Omega}_0 = B_{R_2} \backslash \Bar{B}_{R_1}$, where $B_{R_2}$ is a ball in $\mathbb{R}^{n}$ of radius $R_2$ centered at the origin.
We choose $R_2 > R_1$ such that $Vol(B_{R_2})= Vol(\Omega_{out})$, i.e., $Vol(\tilde{\Omega})= Vol(\tilde{\Omega}_0)$. It is well known that for each $k\in \mathbb{N}$, $\mu_k(\tilde{\Omega})$ admits the following variational characterization \begin{align} \label{variational characterization (SN)} \mu_{k}(\tilde{\Omega})=\min_{E\in \mathcal{H}_{k+1}(\tilde{\Omega})} \max_{u(\neq 0) \in E} \tilde{R}(u), \end{align} where $\mathcal{H}_{k+1}(\tilde{\Omega})$ is the collection of all $(k+1)$-dimensional subspaces of the Sobolev space $H^1(\tilde{\Omega})$ and $\tilde{R}(u)= \frac{\displaystyle \int_{\tilde{\Omega}} \| \nabla u \|^2 dV}{\displaystyle \int_{\partial{\Omega_{out}}} u^2 dS}$. \begin{lemma} \label{SN integral 1} Let $\tilde{F}$ be a decreasing radial function defined as in Lemma \ref{SN de(increasing)lemma} and $\tilde{\Omega}, \tilde{\Omega}_0$ be defined as above. Then the following inequality holds. \begin{equation*} \int_{\tilde{\Omega}} \tilde{F}(r) dV \leq \int_{\tilde{\Omega}_0} \tilde{F}(r) dV. \end{equation*} \end{lemma} For a proof, see \cite{basak2023sharp}. \begin{lemma} \label{SN integral 2} Let $\tilde{f}_{1}(r), {\Omega}_{out} $ and $B_{R_2}$ be defined as above, and $\partial {\Omega}_{out}, \partial B_{R_2}$ are boundaries of $\Omega_{out}, B_{R_2},$ respectively. Then the following inequality holds. \begin{equation} \label{inequality 5 SN} \int_{\partial {\Omega_{out}}} \tilde{f}_{1}^2(r) dS \geq \int_{\partial {B_{R_2}}} \tilde{f}_{1}^2(r) dS. \end{equation} \end{lemma} The proof is the same as that of Lemma \ref{integral6}. Now we state the main result related to mixed Steklov Neumann eigenvalues. \begin{theorem} \label{thm:isoperimetricSN} Let $\tilde{\Omega}$ and $\tilde{\Omega}_0$ be defined as above, and let $\mu_k$ be the $k$th eigenvalue of (\ref{SNproblem1}) on $\tilde{\Omega}.$ Then for each $1\leq k \leq n,$ \begin{equation} \mu_k(\tilde{\Omega}) \leq \mu_k(\tilde{\Omega}_0)=\mu_1(\tilde{\Omega}_0).
\end{equation} \end{theorem} \begin{proof} Since $\mu_k(\tilde{\Omega}_0)=\mu_1(\tilde{\Omega}_0)$ for $1\leq k \leq n$, it is enough to prove that $\mu_n(\tilde{\Omega}) \leq \mu_n(\tilde{\Omega}_0)=\mu_1(\tilde{\Omega}_0)$. To find a bound for $\mu_n(\tilde{\Omega})$, we consider an $(n+1)$-dimensional subspace of $H^1(\tilde{\Omega})$ and use it in the variational characterization \eqref{variational characterization (SN)} of $\mu_n(\tilde{\Omega})$. Take \begin{align*} E = span \bigg\{\tilde{f}_{1}(r), \frac{\tilde{f}_{1}(r)}{r}x_1, \dots, \frac{\tilde{f}_{1}(r)}{r}x_n \bigg\}, \end{align*} where $\tilde{f}_{1}(r)$ is defined in (\ref{Neuman eigenfunction}) with $l=1$. For any $u \in E \backslash \{0\}$, there exist $d_0, d_1, \dots, d_n \in \mathbb{R}$ not simultaneously equal to zero, such that \begin{align*} u= d_0 \tilde{f}_{1}(r) + d_1 \frac{\tilde{f}_{1}(r)}{r}x_1 + \dots + d_n\frac{\tilde{f}_{1}(r)}{r}x_n. \end{align*} Then by using Proposition \ref{prop:integral expression}, we get \begin{align} \frac{\displaystyle \int_{\tilde{\Omega}} \| \nabla u \|^2 dV}{\displaystyle \int_{\partial {\Omega_{out}}} u^2 dS}= \frac{d_0^2 \displaystyle \int_{\tilde{\Omega}} \|\nabla \tilde{f}_{1}(r)\|^2 \, dV+ \displaystyle \sum_{i=1}^nd_i^2 \displaystyle \int_{\tilde{\Omega}} \bigg \| \nabla \left( \frac{\tilde{f}_{1}(r)}{r} x_i \right) \bigg \|^2 \, dV}{d_0^2 \displaystyle \int_{\partial {\Omega_{out}}}\tilde{f}_{1}^2(r) \, dS + \displaystyle \sum_{i=1}^n d_i^2 \displaystyle \int_{\partial {\Omega_{out}}} \frac{\tilde{f}_{1}^2(r)}{r^2}x_i^2 \, dS}.
\label{equality SN} \end{align} Then using Lemma \ref{lem:integral1} and a similar argument to that used in the proof of Theorem \ref{thm:isoperimetric}, we get \begin{align} \frac{\displaystyle \int_{\tilde{\Omega}} \| \nabla u \|^2 dV}{\displaystyle \int_{\partial {\Omega_{out}}} u^2 \, dS} \leq \frac{\displaystyle \int_{\tilde{\Omega}} \left( (\tilde{f}'_{1}(r))^2 + \frac{(n-1)}{r^2} \tilde{f}_{1}^2(r)\right) \, dV }{\displaystyle \int_{\partial {\Omega_{out}}} \tilde{f}_{1}^2(r) \, dS}. \label{gradient inequality SN} \end{align} Next, using Lemma \ref{SN integral 1} and Lemma \ref{SN integral 2} we get \begin{align*} \frac{ \displaystyle \int_{\tilde{\Omega}} \left( (\tilde{f}_{1}'(r))^2 + \frac{(n-1)}{r^2} \tilde{f}_{1}^2(r)\right) \, dV }{\displaystyle \int_{\partial {\Omega_{out}}} \tilde{f}_{1}^2(r) \, dS} \leq \frac{\displaystyle \int_{\tilde{\Omega}_{0}} \left( (\tilde{f}_{1}'(r))^2 + \frac{(n-1)}{r^2} \tilde{f}_{1}^2(r)\right ) \, dV }{\displaystyle \int_{\partial B_{R_2}}\tilde{f}_{1}^2(r) \, dS} = \mu_1(\tilde{\Omega}_0). \end{align*} This proves the theorem.
\end{proof} \begin{figure} \centering \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{ball-ball.png} \caption{Concentric Annular Domain} \label{fig:figure1} \end{minipage} \hspace{0.5 cm} \begin{minipage}{0.5\textwidth} \centering \vspace{1cm} \begin{tabular}{|c|c|c|c|} \hline Domain $(\Omega)$ &$ \Omega_1$& $\Omega_2$ &$\Omega_3 $\\ \hline $ \sigma_2 $ & $ 0.1783 $ &$ 0.2384$ & $ 0.20204$ \\ \hline $\mu_2$ & $0.18467 $ & $0.23222$ &$ 0.24484$ \\ \hline \end{tabular} \captionof{table}{Comparison of eigenvalues} \label{tab:mytable} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rectangle-ball.png} \caption{Rectangle with spherical hole} \label{fig:rectangle-ball} \end{minipage} \hfill \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=\textwidth, height = 8.5 cm ]{ellipse-ball.png} \caption{Ellipse with spherical hole} \label{fig:ellipse-ball} \end{minipage} \end{figure} \section{Counter examples} \label{Counter examples} In this section, we provide some examples to emphasize that the assumption of `symmetry of order $4$' on domains is crucial. In particular, we present some domains having symmetry of order $2$ (but not symmetry of order $4$) for which Theorems \ref{thm:isoperimetric} and \ref{thm:isoperimetricSN} fail. Consider the concentric annular domain $\Omega_1$ (Figure \ref{fig:figure1}) with outer and inner radii of 5 cm and 1 cm, respectively. \begin{enumerate} \item Take the domain $\Omega_2$, a rectangle (with sides $13.095$ cm and $6$ cm) having a spherical hole of radius $1$; see Figure \ref{fig:rectangle-ball}. Then using FreeFEM++ \cite{hecht2012new}, we find that $\sigma_{2}(\Omega_2) > \sigma_{2}(\Omega_1)$ and $\mu_{2}(\Omega_2) > \mu_{2}(\Omega_1)$ (see Table \ref{tab:mytable}). \item Our next example is the domain $\Omega_3$, an ellipse (with major axis $8.33$ cm and minor axis $3$ cm) having a spherical hole of radius $1$ (Figure \ref{fig:ellipse-ball}).
Then again using FreeFEM++ \cite{hecht2012new}, we obtain that $\sigma_{2}(\Omega_3) > \sigma_{2}(\Omega_1)$ and $\mu_{2}(\Omega_3) > \mu_{2}(\Omega_1)$ (see Table \ref{tab:mytable}). \end{enumerate} These examples show that the symmetry condition in Theorems \ref{thm:isoperimetric} and \ref{thm:isoperimetricSN} cannot be dropped. \section{Numerical Observations} \label{sec:numerical observation} Let $\Omega_{(t_1, t_2)} = \Omega_{out} \setminus B_1(t_1, t_2)$ be a doubly connected domain, where $B_1(t_1, t_2)$ is the ball of radius $1$ centered at $(t_1, t_2)$ and $B_1(t_1, t_2)$ is compactly contained in $\Omega_{out}$. In this section, we present numerical values of the first and second nonzero eigenvalues of the Steklov problem and the mixed Steklov Neumann problem on the following two types of doubly connected domains: \begin{enumerate} \item When the outer domain is a ball and $B_1(t_1, t_2)$ varies inside the ball. \item When the outer domain is an ellipse and $B_1(t_1, t_2)$ varies inside the ellipse. \end{enumerate} The eigenvalues have been calculated numerically using FreeFEM++ \cite{hecht2012new} and they are given below in tabular form. Based on these values, we make various observations and state them as conjectures. \subsection{When the outer domain is a ball:} We take the outer domain to be the ball $B_5$ in $\mathbb{R}^{2}$ of radius $5$ centered at the origin and remove a ball of radius $1$ from its interior. For this doubly connected domain, we provide the first two nonzero Steklov eigenvalues and Steklov Neumann eigenvalues in Table \ref{tab: ball}. Based on the observations from Table \ref{tab: ball}, we state the following conjecture: \begin{conjecture} Let $\Omega_{out}$ be the ball $B_{R_2}$ in $\mathbb{R}^{2}$ of radius $R_2$ centered at the origin. Then \begin{enumerate} \item The first and second nonzero Steklov eigenvalues, $\sigma_1(\Omega_{(t_1, t_2)})$ and $\sigma_2(\Omega_{(t_1, t_2)})$, decrease as the distance between the centers of the two balls increases.
In particular, among all positions of $B_1(t_1, t_2)$ inside $B_{R_2}$, the eigenvalues $\sigma_1(\Omega_{(t_1, t_2)})$ and $\sigma_2(\Omega_{(t_1, t_2)})$ attain their maximum when the two balls are concentric and their minimum when the inner ball touches the outer ball. \item The first nonzero mixed Steklov Neumann eigenvalue $\mu_1(\Omega_{(t_1, t_2)})$ (with Steklov condition on the outer boundary and Neumann condition on the inner boundary) has multiplicity $2$ and it is decreasing with respect to the distance between the centers of the two balls. \end{enumerate} \end{conjecture} \begin{table}[h] \centering \begin{tabular} {|c|c|c|c|c|c|c|c|} \hline $(t_1, t_2)$ & (0.5, 0) & (1, 0) & (1.5, 0) & (2, 0) & (2.5, 0) & (3, 0) & (3.5, 0) \\ \hline $\sigma_1(\Omega_{(t_1, t_2)})$ & 0.177575 & 0.175421 & 0.171908 & 0.167088 & 0.160911 & 0.152997 & 0.141689 \\ \hline $\sigma_2(\Omega_{(t_1, t_2)})$ & 0.178242 & 0.178052 & 0.177698 & 0.177101 & 0.176084 & 0.174166 & 0.169468 \\ \hline $\mu_1(\Omega_{(t_1, t_2)})$ & 0.184583 & 0.18448 & 0.184289 & 0.183969 & 0.183431 & 0.182441 & 0.180127 \\ \hline $\mu_2(\Omega_{(t_1, t_2)})$ & 0.184583 & 0.18448 & 0.184289 & 0.183969 & 0.183431 & 0.182441 & 0.180127 \\ \hline \end{tabular} \caption{\small{Eigenvalues when the outer domain $\Omega_{out}$ is the ball of radius 5 centered at the origin.}} \label{tab: ball} \end{table} \subsection{When the outer domain is an ellipse:} Now we take the ellipse $E_{3,8.33}$ in $\mathbb{R}^{2}$ centered at the origin with minor axis $3$ cm and major axis $8.33$ cm as the outer domain. In Tables \ref{tab: ellipse1}, \ref{tab: ellipse2} and \ref{tab: ellipse3}, we give the first and second nonzero Steklov and mixed Steklov Neumann eigenvalues for different positions of $B_1(t_1, t_2)$ inside $E_{3,8.33}$. Based on the observations from these tables, we state the following conjecture.
\begin{conjecture} Let $\Omega_{out}$ be the ellipse $E_{r,R}$ in $\mathbb{R}^{2}$ centered at the origin with minor axis $r$ and major axis $R$. Then \begin{enumerate} \item $\sigma_1(\Omega_{(t_1, t_2)})$ and $\mu_2(\Omega_{(t_1, t_2)})$ are decreasing with respect to the distance between the centers of $E_{r,R}$ and $B_1(t_1, t_2)$. \item $\mu_1(\Omega_{(t_1, t_2)})$ is decreasing with respect to the distance between the centers of $E_{r,R}$ and $B_1(t_1, t_2)$ when the center of $B_1(t_1, t_2)$ varies along the lines $y=0$ and $y=x$. However, it is increasing when the center of $B_1(t_1, t_2)$ varies along the line $x=0$. \end{enumerate} \end{conjecture} \begin{table}[h] \centering \begin{tabular} {|c|c|c|c|c|c|} \hline $ (t_1, t_2)$ &(0.4, 0) &(0.8, 0) &(1.2, 0) &(1.6, 0) &(1.9, 0) \\ \hline $\sigma_1(\Omega_{(t_1, t_2)})$ &0.0671757 & 0.0670031 & 0.0666325 & 0.0657731 & 0.0636588 \\ \hline $\sigma_2(\Omega_{(t_1, t_2)})$ & 0.200234 & 0.195403 & 0.186742 & 0.17198 & 0.1478 \\ \hline $\mu_1(\Omega_{(t_1, t_2)})$ & 0.0682314 & 0.0680922 & 0.0677969 & 0.0671274 & 0.0655633 \\ \hline $\mu_2(\Omega_{(t_1, t_2)})$ & 0.231544 & 0.230396 & 0.228218 & 0.22400 & 0.214635 \\ \hline \end{tabular} \caption{\small{Eigenvalues when the outer domain is $E_{3,8.33}$ and the center of the inner ball varies along the $x$-axis.}} \label{tab: ellipse1} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $(t_1, t_2)$ &(0, 0.5) & (0, 1.5)& (0, 2.5) & (0, 3.5) & (0, 4.5)& (0, 5.5)& (0, 6.5) \\ \hline $\sigma_1(\Omega_{(t_1, t_2)})$ & 0.0671408 & 0.0664638 & 0.0651789 & 0.0633908 & 0.0611841 & 0.0585319 & 0.0548775 \\ \hline $\sigma_2(\Omega_{(t_1, t_2)})$ & 0.202004 & 0.203392 & 0.204995 & 0.204736 & 0.200454 & 0.190421 &0.17005 \\ \hline $\mu_1(\Omega_{(t_1, t_2)})$ & 0.0682859 & 0.0683868 & 0.0685867 & 0.0688788 & 0.0692449 & 0.0696235 & 0.0697002 \\ \hline $\mu_2(\Omega_{(t_1, t_2)})$ & 0.231531 & 0.228691 & 0.223756 & 0.217687 & 0.211176 & 0.204242 & 0.19409 \\ \hline \end{tabular}
\caption{\small{Eigenvalues when the outer domain is $E_{3,8.33}$ and the center of the inner ball varies along the $y$-axis.}} \label{tab: ellipse2} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline $(t_1, t_2)$ & (0.5, 0.5) &(1, 1)&(1.5, 1.5) & (1.9, 1.9) \\ \hline $\sigma_1(\Omega_{(t_1, t_2)})$ & 0.0670582 & 0.0664918& 0.0651753 & 0.0600965 \\ \hline $\sigma_2(\Omega_{(t_1, t_2)})$ & 0.199557 & 0.192501 & 0.177751 & 0.128205 \\ \hline $\mu_1(\Omega_{(t_1, t_2)})$ & 0.0682192 & 0.0680132 & 0.0673899 & 0.0641569 \\ \hline $\mu_2(\Omega_{(t_1, t_2)})$ & 0.230959 & 0.227984 & 0.221902 & 0.198211 \\ \hline \end{tabular} \caption{\small{Eigenvalues when the outer domain is $E_{3,8.33}$ and the center of the inner ball varies along the line $y=x$.}} \label{tab: ellipse3} \end{table} \newpage \textbf{Acknowledgement:} S. Basak is supported by the University Grants Commission, India. The corresponding author, S. Verma, acknowledges the project grant provided by SERB-SRG, sanction order No. SRG/2022/002196. \bibliographystyle{plain} \bibliography{main} \end{document}
2412.17157v2
http://arxiv.org/abs/2412.17157v2
A new look at unitarity in quantization commutes with reduction for toric manifolds
\documentclass[12pt,english]{amsart} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsthm} \usepackage{xcolor} \usepackage{amsmath} \usepackage{hyperref} \makeatletter \tagsleft@false \makeatother \usepackage{comment} \newtheorem{lemma}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{exmp}{Example}[section] \newtheorem{thm}{Theorem}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{dfn}{Definition}[section] \newtheorem{rmk}{Remark}[section] \newcommand{\shP}{\mathcal{P}} \newcommand{\shL}{\mathcal{L}} \newcommand{\CC}{\mathbb{C}} \newcommand{\N}{{\mathbb N}} \newcommand{\C}{{\mathbb C}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\I}{{\mathbb I}} \newcommand{\wt}{{\widetilde}} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\ba}{\begin{eqnarray}} \newcommand{\ea}{\end{eqnarray}} \begin{document} \title[Unitarity in quantization for toric manifolds]{A new look at unitarity in quantization commutes with reduction for toric manifolds} {} \author[Mour\~ao]{José M. Mour\~ao} \email{[email protected]} \address{Department of Mathematics and Center for Mathematical Analysis, Geometry and Dynamical Systems, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal} \author[Nunes]{João P. 
Nunes} \email{[email protected]} \address{Department of Mathematics and Center for Mathematical Analysis, Geometry and Dynamical Systems, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal} \author[Pereira]{Augusto Pereira} \email{[email protected]} \address{Department of Mathematics, Instituto Superior de Economia e Gest\~ao, Universidade de Lisboa, 1200-781 Lisboa, Portugal} \author[Wang]{Dan Wang} \email{[email protected];[email protected]} \address{Department of Mathematics and Center for Mathematical Analysis, Geometry and Dynamical Systems, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Max Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany} \maketitle \begin{abstract} For a symplectic toric manifold we consider half-form quantization in mixed polarizations $\mathcal{P}_\infty$, associated to the action of a subtorus $T^p\subset T^n$. The real directions in these polarizations are generated by components of the $T^p$ moment map. Polarizations of this type can be obtained by starting at a toric K\"ahler polarization $\mathcal{P}_0$ and then following Mabuchi rays of toric K\"ahler polarizations generated by the norm square of the moment map of the torus subgroup. These geodesic rays are lifted to the quantum bundle via a generalized coherent state transform (gCST) and define equivariant isomorphisms between Hilbert spaces for the K\"ahler polarizations and the Hilbert space for the mixed polarization. The polarizations $\mathcal{P}_\infty$ give a new way of looking at the problem of unitarity in the quantization commutes with reduction with respect to the $T^p$-action, as follows. The prequantum operators for the components of the moment map of the $T^p$-action act diagonally with discrete spectrum corresponding to the integral points of the moment polytope. 
The Hilbert space for the quantization with respect to $\mathcal{P}_\infty$ then naturally decomposes as a direct sum of the Hilbert spaces for all its quantizable coisotropic reductions which, in fact, are the K\"ahler reductions of the initial K\"ahler polarization $\mathcal{P}_0$. This will be shown to imply that, for the polarization $\mathcal{P}_\infty$, quantization commutes unitarily with reduction. The problem of unitarity in quantization commutes with reduction for $\mathcal{P}_0$ is then equivalent to the question of whether quantization in the polarization $\mathcal{P}_0$ is unitarily equivalent to quantization in the polarization $\mathcal{P}_\infty$. In fact, this does not hold in general in the toric case. \end{abstract} {} \tableofcontents \section{Introduction and main results} Recently, several interesting relations between Hamiltonian flows in imaginary time and geometric quantization on a K\"ahler manifold, $M$, have been explored (see e.g. \cite{KMN1,KMN4,LW1,LW3,P,W1}). Namely, these ``flows'' describe geodesics for the Mabuchi metric on the space of K\"ahler metrics on $M$ which often converge, at infinite geodesic time, to interesting real or mixed polarizations. Besides leading to interesting polarizations, Mabuchi geodesics frequently have the very important property of admitting natural lifts to the quantum bundle via generalized coherent state transforms. These transforms can then be used to compare quantizations for different polarizations. A particularly nice class of K\"ahler manifolds to study with these methods is the class of toric K\"ahler manifolds $(M, \omega,J)$, with Hamiltonian $T^{n}$-action and the corresponding moment map $\mu:M \twoheadrightarrow P$. Here, the Mabuchi rays generated by a convex function of the moment map and starting at any initial toric K\"ahler polarization give, at infinite geodesic time, the toric real polarization.
By lifting the geodesics to the quantum bundle we get coherent state transforms which relate the spaces of holomorphic sections along the geodesic family and show the convergence of the monomial sections to the distributional sections of the prequantum line bundle which generate the Hilbert space of quantum states for the real toric polarization. This convergence was established for $L^1$-normalized sections in \cite{BFMN}, and in \cite{KMN1} it was shown for $L^2$-normalized sections when the half-form correction is included. In \cite{KMN4}, the convergence of half-form corrected holomorphic sections is described in terms of a generalized coherent state transform. In \cite{LW1,LW2,LW3}, the study of Mabuchi geodesic families was generalized from the toric case to the case of manifolds with a Hamiltonian torus action. There are many previous works on closely related problems (see e.g. \cite{An2,BFHMN,BHKMN,CLL,FMMN1,GS1,GS3,Hal,HK2,JW,LY1,MNP,MNR}). In the present paper, we include the half-form correction for toric mixed polarizations $\mathcal{P}_{\infty}$, attained at infinite geodesic time along Mabuchi rays which, starting at an initial toric K\"ahler structure defined by a symplectic potential $g$, are obtained by Hamiltonian flow in imaginary time generated by functions $H$ on the moment polytope that are convex only along $\mathrm{Im}\,\mu_p$, where $\mu_{p}: M\twoheadrightarrow \Delta$ denotes the moment map of the subtorus $T^{p}$-action. We first provide a local description of $\mathcal{P}_{\infty}$. Secondly, we identify a basis for the quantum space $\mathcal{H}_{\mathcal{P}_\infty}$ associated to the half-form corrected polarization $\mathcal{P}_{\infty}$, labelled by the integer points of the moment polytope $P$. Thirdly, using the generalized coherent state transform (gCST), we construct an isomorphism between the quantum spaces for the half-form corrected K\"ahler polarization and for the mixed polarization.
As described in Section \ref{newsubsec}, the quantization commutes with reduction correspondence is a very natural property of the quantization in the mixed polarization $\mathcal{P}_\infty$, since its coisotropic reductions occur at fixed values of the components of $\mu_p$ and since these global functions, being $\mathcal{P}_\infty$-polarized, act simply as multiplication operators on $\mathcal{H}_{\mathcal{P}_\infty}.$ The general properties of $\mathcal{P}_\infty$, namely having real directions corresponding to the components of $\mu_p$ and complex directions corresponding to the K\"ahler reductions of $\mathcal{P}_0$, also indicate, as a natural consequence, that the quantization commutes with reduction correspondence, for each level set of $\mu_p$, should be unitary up to a constant. As we will show, in our case unitarity is in fact obtained globally, for all (quantizable) levels of $\mu_p$ at once. Indeed, as described in Section \ref{subsecqcr}, the gCST provides a natural definition of a hermitian structure on $\mathcal{H}_{\mathcal{P}_\infty}$, so that there is a unitary isomorphism between $\mathcal{H}_{\mathcal{P}_\infty}$ and the direct sum of the Hilbert spaces of the (quantizable) K\"ahler reductions for the initial toric K\"ahler structure, defined by the symplectic potential $g$, which, in fact, correspond to the (quantizable) coisotropic reductions of $\mathcal{P}_\infty.$ Thus, for the initial K\"ahler polarization $\mathcal{P}_0$ and relative to the $T^p$-action, the question of unitarity in the quantization commutes with reduction correspondence is more naturally phrased as the question of unitary equivalence between the quantizations with respect to $\mathcal{P}_0$ and to $\mathcal{P}_\infty$. As described in Section \ref{newsubsec}, we believe that this perspective should be taken more generally, whenever a K\"ahler polarization is related to a ``Fourier'' polarization (as in Section 3 of \cite{BHKMN}).
(For a study of the problem of unitarity in the quantization commutes with reduction correspondence, with an emphasis on the semi-classical limit, see \cite{HK1}.) {} Let $(M,\omega, J)$ be a toric variety determined by the Delzant polytope $P$. Our main results are as follows: \begin{thm}[Theorem \ref{prop:evol-polarization}] For a choice of symplectic potential $g$ on $M$, let $\mathcal{P}_s$, $s\geq 0$, be the family of K\"ahler polarizations associated to the symplectic potentials $g+sH$, $s\geq 0$, obtained under the imaginary time flow of the Hamiltonian vector field $X_H$. Then the limiting polarization $\mathcal{P}_\infty$ exists, and $$\mathcal{P}_\infty:= \lim_{s\to +\infty} \mathcal{P}_s $$ is a (singular) mixed polarization. Moreover, over the open dense $(\mathbb{C}^*)^n$-orbit $\mathring M,$ $$ \mathcal{P}_\infty = \langle \partial/\partial\tilde \theta_{1}, \dots, \partial/\partial\tilde \theta_p, X_{\tilde z_{p+1}}, \dots, X_{\tilde z_{n}}\rangle_{\mathbb C} = \langle \partial/\partial\tilde \theta_{1}, \dots, \partial/\partial\tilde \theta_p, \frac{\partial}{\partial{\tilde z_{p+1}}}, \dots, \frac{\partial}{\partial{\tilde z_{n}}}\rangle_{\mathbb C}. $$ \end{thm} So the polarization $\mathcal{P}_{\infty}$ is no longer real but mixed, and the holomorphic sections converge to distributional sections, which have a holomorphic factor along the complex directions of the limit polarization. In particular, when $H$ is a strictly convex function on $P$, this result coincides with \cite[Theorem 1.2]{BFMN}.
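As a concrete illustration of this theorem (a product example of our own, with $n=2$ and $p=1$, not taken from the references), let $M = \mathbb{C}P^1\times\mathbb{C}P^1$ carry a product toric K\"ahler structure and let $T^1\subset T^2$ act by rotation of the first factor; over the open orbit the limit polarization is then

```latex
% Hypothetical simplest case: n = 2, p = 1, M = CP^1 x CP^1, over the open orbit:
\mathcal{P}_\infty
  \;=\; \langle\, \partial/\partial\tilde\theta_{1},\;
        \partial/\partial\tilde z_{2} \,\rangle_{\mathbb{C}},
```

so the first factor is polarized by the (real) fibers of its moment map, while the second factor keeps a K\"ahler polarization; the coisotropic reductions of $\mathcal{P}_\infty$ at the integral levels of the first moment map component are copies of the second $\mathbb{C}P^1$ factor, in line with the direct-sum decomposition of $\mathcal{H}_{\mathcal{P}_\infty}$ described in the abstract.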
\begin{dfn}[Definition \ref{correctedQS}] The quantum Hilbert space for the half-form corrected mixed polarization $\mathcal{P}_{\infty}$ is defined by $$ \mathcal{H}_{\mathcal{P}_{\infty}} =B_{\mathcal{P}_{\infty}} \otimes \sqrt{|dX_{1}^{p}\wedge dZ_{p+1}^{n}|}, $$ where $$B_{\mathcal{P}_{\infty}}=\{\sigma \in \Gamma(M, L^{-1})' \mid \nabla^{\infty}_{\xi} \sigma=0, \forall \xi \in \Gamma(M, \mathcal{P}_{\infty})\}.$$ \end{dfn} \begin{thm}[Theorem \ref{thm_polsectionsinside}] The distributional sections $\tilde{\sigma}^m_\infty$, $m\in P\cap \mathbb{Z}^n$, in (\ref{limitsections}) are in $\mathcal{H}_{\mathcal{P}_\infty}$. Moreover, any $\sigma \in \mathcal{H}_{\mathcal{P}_\infty}$ is a linear combination of the sections $\tilde{\sigma}^m_\infty$, $m\in P\cap \mathbb{Z}^n$. Therefore, the distributional sections $\{\tilde{\sigma}^m_\infty\}_{m\in P\cap \mathbb{Z}^n}$ form a basis of $\mathcal{H}_{\mathcal{P}_\infty}$. \end{thm} This implies that the dimension of the quantum Hilbert space $\mathcal{H}_{\mathcal{P}_\infty}$ for the half-form corrected mixed polarization coincides with those of the quantum spaces for the real and K\"ahler polarizations. When $p=n$, this result coincides with \cite[Theorem 4.7]{KMN1}. Let $H^{prQ}$ be the Kostant-Souriau prequantum operator associated to $H$. The gCST is then defined by the $T^n$-equivariant linear isomorphism \begin{align*} U_{s} = (e^{s\hat{H}^{prQ}} \otimes e^{is \shL_{X_H}}) \circ e^{-s\hat H^Q} : \mathcal{H}_{\mathcal{P}_{0}} \to \mathcal{H}_{\mathcal{P}_{s}}, \quad s \geq 0. \end{align*} \begin{thm}[Theorem \ref{convsections}] For every $m\in P\cap \mathbb{Z}^n$, \begin{align*} \lim_{s\to +\infty} U_{s}\sigma^m_{0} = \tilde{\sigma}^m_\infty. \end{align*} \end{thm} This result establishes that there is an isomorphism $U_{\infty}$ between the quantum spaces for the half-form corrected K\"ahler polarization and mixed polarization.
When $p=n$, this result coincides with \cite[Theorem 4.3]{KMN4}. A natural hermitian structure can be defined on $\mathcal{H}_{{\mathcal P}_\infty}$ from the following \begin{thm}[Theorem \ref{lemma-norms}] Let $\{\tilde{\sigma}^{m}_{s}:= U_{s}(\sigma_{0}^{m})\}_{m \in P \cap \mathbb{Z}^{n}}$ be the basis of $\mathcal{H}_{\mathcal{P}_{s}}$. Then we have: $$\lim_{s \to \infty}\Vert \tilde{\sigma}^m_s\Vert_{L^2}^2 = c_m \pi^{p/2},$$ where the constant $c_m$ is given by $$ c_m = \int_{P} \left(\prod_{j=1}^p\delta(x^j-m^j)\right) e^{-2((x-m)\cdot y -g_P)} (\det D)^\frac12 dx^1\cdots dx^n. $$ \end{thm} {} Along the Mabuchi rays of toric K\"ahler structures that we are considering, for any finite value of $s\geq 0$, the correspondence given by quantization commutes with reduction is not unitary. However, this correspondence is unitary at infinite geodesic time $s\to \infty$. Let $M_{\underline m}$ be the K\"ahler reduction corresponding to the level set $\mu_p^{-1}(\underline m)$, for $\underline m \in \Delta \cap \mathbb{Z}^p$, and let $\mathcal{H}_{M_{\underline m}}$ be the Hilbert space for its K\"ahler quantization, so that $$ \mathcal{H}_{M_{\underline m}}=\mathrm{span}_{\mathbb{C}}\left\{ \sigma^{\underline m, m'} : (\underline m, m')\in P\cap \mathbb{Z}^n\right\}. $$ Then, \begin{thm}[Theorem \ref{thmqcr}] The natural $T^n$-equivariant linear isomorphism \begin{eqnarray}\nonumber \mathcal{H}_{\mathcal{P}_\infty} &\to& \bigoplus_{\underline m\in \Delta\cap\mathbb{Z}^p}\mathcal{H}_{M_{\underline m}}\\ \nonumber \tilde \sigma^m_\infty & \mapsto & \sigma^{\underline m, m'}, \end{eqnarray} for $(m_1, \dots, m_p)=\underline m$ and $m=(\underline m, m')\in P\cap \mathbb{Z}^n$, is unitary up to the overall constant $\pi^{p/2}$.
\end{thm} As expected from the general discussion in Section \ref{newsubsec}, we thus obtain \begin{cor}[Corollary \ref{cor_qrunitary}] The quantization commutes with reduction correspondence for the toric mixed polarization $\mathcal{P}_\infty$ is unitary (up to an overall constant). \end{cor} Note, however, that the quantizations with respect to $\mathcal{P}_0$ and $\mathcal{P}_\infty$ are not unitarily equivalent, which is a restatement of the fact that the quantization commutes with reduction correspondence for the initial K\"ahler polarization $\mathcal{P}_0$ is not unitary. \section{Preliminaries}\label{section-prelim} \subsection{Geometric quantization}\label{section:geometric-quantization} In classical mechanics one works with real-valued functions on symplectic manifolds: the physical observables on phase space, in the terminology of physicists. Geometric quantization is concerned with devising a consistent method of going from a classical system to a quantum system. The physical observables in quantum mechanics are no longer functions on phase space, but rather operators on a Hilbert space of quantum states. \begin{dfn} A symplectic manifold $(M,\omega)$ is \emph{quantizable} if there exists a hermitian line bundle $(L,h) \to M$ with compatible connection $\nabla$ of curvature $F_\nabla = -i\omega$. The triple $(L,\nabla,h)$ is called \emph{prequantization data}. \end{dfn} One can show \cite{Wo} that, in the compact case, $M$ is quantizable whenever \begin{align}\nonumber \left[\frac{\omega}{2\pi}\right] \in H^2(M,\mathbb{Z}). \end{align} This is often referred to as the \emph{integrality condition}. The \emph{prequantum Hilbert space} $L^2(M,L)$ is the $L^2$-completion of the space of sections of the hermitian line bundle $(L,h) \to M$ with respect to the scalar product given by \begin{align}\nonumber \langle \psi, \phi \rangle := \int_M h(\psi,\phi)\,\Omega, \end{align} where $\Omega = (1/n!)\omega^n$ is the Liouville volume form.
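As a standard illustration of the integrality condition (our example; it is not discussed in the text above), take $M = S^2 \cong \mathbb{C}P^1$ with a rotation-invariant area form:

```latex
% S^2 with total area 2*pi*k satisfies the integrality condition iff k is an integer:
\left[\frac{\omega}{2\pi}\right] \in H^2(S^2,\mathbb{Z})
  \iff \int_{S^2}\omega \,=\, 2\pi k, \quad k\in\mathbb{Z},
```

in which case a prequantum line bundle is $L \cong \mathcal{O}(k)$, whose space of holomorphic sections $H^0(\mathbb{C}P^1,\mathcal{O}(k))$ has dimension $k+1$, matching the $k+1$ integral points of the moment polytope $[0,k]$.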
\begin{dfn}\label{dfn:kostant-souriau-operator} Let $f \in C^\infty(M,\mathbb{C})$. The \emph{Kostant-Souriau prequantum operator} $\hat f^{prQ}$ is the (unbounded) operator on the prequantum Hilbert space given by \begin{align}\nonumber \hat f^{prQ} = f + i\nabla_{X_f}, \end{align} where $f$ stands for multiplication by $f$ and $X_f$ is the Hamiltonian vector field associated to $f$. \end{dfn} The prequantum Hilbert space, however, is too big, in the sense that the assignment $f \mapsto \hat f^{prQ}$ does not give an irreducible representation of the Poisson algebra $C^\infty(M)$. To improve on this, one needs to choose a \emph{polarization} $\mathcal{P}$ of $(M,\omega)$, that is, an integrable Lagrangian distribution in $TM \otimes \mathbb{C}$. There are two special types of polarizations, namely, the \emph{real} polarizations (i.e. $\mathcal{P} = \bar{\mathcal{P}}$) and the \emph{complex} polarizations (i.e. $\mathcal{P} \cap \bar{\mathcal{P}} = \{0\}$). Moreover, if $(M,\omega)$ is equipped with a compatible integrable almost complex structure, then the holomorphic tangent bundle $T^{1,0}M$ itself is a complex polarization, called the \emph{Kähler polarization}. \begin{rmk} The quantum space associated with a real, or mixed type, polarization may consist of distributional (therefore non-smooth) sections when the polarization has non-simply-connected fibers, which brings the need of imposing Bohr--Sommerfeld conditions. \end{rmk} The \emph{quantum Hilbert space} (associated to $\mathcal{P}$) is then defined as the closure of the space of polarized sections in the prequantum Hilbert space which, moreover, are required to satisfy an appropriate $L^2$ condition.
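To make the trichotomy of polarizations concrete (a standard local model, stated in our own notation rather than taken from the references), consider the cylinder and the complex plane:

```latex
% Real:   on T^*S^1 \cong \R_x \times S^1_\theta with \omega = dx \wedge d\theta,
%         the fiber directions of x span a real polarization:
\mathcal{P}_{\mathbb{R}} = \langle \partial/\partial\theta \rangle_{\mathbb{C}},
    \qquad \mathcal{P}_{\mathbb{R}} = \bar{\mathcal{P}}_{\mathbb{R}};
% Kahler: on \C with its standard Kahler structure:
\mathcal{P}_{K} = T^{1,0}\mathbb{C} = \langle \partial/\partial z \rangle_{\mathbb{C}},
    \qquad \mathcal{P}_{K} \cap \bar{\mathcal{P}}_{K} = \{0\};
% Mixed:  on the product, \mathcal{P}_{\mathbb{R}} \oplus \mathcal{P}_{K} is a
%         mixed polarization, the local model for the polarizations P_infty
%         studied in this paper.
```

On the open orbit of a toric manifold, with action-angle coordinates $(x,\theta)$, this is precisely the local picture of $\mathcal{P}_\infty$: real directions along some angle variables and K\"ahler directions along the rest.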
Note that the quantum Hilbert space for a Kähler polarization would be given by the sections $s$ such that \begin{align}\nonumber \nabla_{\frac{\partial}{\partial \bar z_j}}s = 0, \end{align} for local holomorphic coordinates $(z_1, \dots, z_n)$ on $M$, which corresponds to the space of holomorphic sections $H^0(M,L)$. Usually, the above framework is improved by including the so-called \emph{half-form correction}. In the Kähler case, one introduces the canonical bundle \begin{align}\nonumber K_M = \bigwedge^n(T^*M)^{1,0}, \end{align} that is, the bundle whose sections are $(n,0)$-forms (and thus dependent on the complex structure). There is a natural metric on $K_M$ induced by the Kähler structure of $M$: if $\eta$ is an $(n,0)$-form, then \begin{align}\nonumber \lVert \eta \rVert^2_{K_M} = \frac{\eta \wedge \bar\eta}{(2i)^n(-1)^{n(n+1)/2}\omega^n/n!}. \end{align} The half-form corrected quantum Hilbert space associated to a given Kähler polarization $\mathcal{P}$ is then given by the polarized sections of $L \otimes \sqrt{K_M}$, where $\sqrt{K_M}$ is a square root of $K_M$, that is, $\sqrt{K_M} \otimes \sqrt{K_M} \cong K_M$, with an appropriate connection on $\sqrt{K_M}$ induced by that of $K_M$. However, since $c_1(M) = -c_1(K_M)$, such a square root exists if and only if $c_1(M)$ is even, which is not always the case. The line bundle whose sections comprise the Hilbert space would then satisfy \begin{align}\nonumber c_1(L\otimes\sqrt{K_M}) = \left[\frac{\omega}{2\pi}\right] - \frac{c_1(M)}{2}. \end{align} This motivates us to consider instead, for the quantization procedure, a line bundle $L \to M$ with a different integrality condition, namely \begin{align}\label{eq:half-form-integrality-condition} c_1(L) = \left[\frac{\omega}{2\pi}\right] - \frac{c_1(M)}{2} \in H^2(M,\mathbb{Z}). \end{align} \subsection{Toric K\"ahler geometry}\label{subsec_torickahlergeom} Let $P$ be a Delzant polytope and $(M,\omega_P)$ the associated symplectic toric manifold with moment map $\mu$.
The points of $M$ where the torus action is free make up a dense open subset $\mathring{M} = \mu^{-1}(\mathring{P})$, where $\mathring{P}$ is the interior of $P$. Moreover, $\mathring{M} \cong \mathring{P} \times T^n$ as symplectic manifolds, the latter with the symplectic structure inherited from $\mathbb{R}^n \times T^n$. One can describe toric Kähler structures with the aid of the so-called \emph{symplectic potential} \cite{Ab2,Gui2}. Denoting by $(x,\theta)$ the action-angle coordinates for $\mathring{P} \times T^n$, let $P$ be given by the linear inequalities \begin{align}\label{polytopeconditions} l_r(x) = \langle x, \nu_r\rangle +\lambda_r\geq 0, \quad r=1, \dots, d, \end{align} where $\nu_r$ is the primitive inward-pointing normal to the $r$th facet of $P$. The function \begin{align}\label{dfn:symp-potential} g_P(x) = \frac{1}{2}\sum_{r=1}^d l_r(x)\log l_r(x), \end{align} defined for $x \in \mathring{P}$, endows $M$ with a torus-invariant compatible complex structure $J_P$ given by the block matrix \begin{align}\nonumber J_P = \begin{bmatrix} 0 & -G_P^{-1} \\ G_P & 0 \end{bmatrix}, \end{align} where $G_P$ is the Hessian of $g_P$ \cite{Gui2}. This is not the only compatible toric complex structure, as shown by Abreu in \cite{Ab2}. More precisely, \begin{thm}[\cite{Ab2}] Let $(M,\omega)$ be the symplectic toric manifold associated to a given Delzant polytope $P$. If $J$ is a compatible toric complex structure, then \begin{align}\label{eq:cpx-structure-symp-potential} J = \begin{bmatrix} 0 & -G^{-1} \\ G & 0 \end{bmatrix} \end{align} where $G$ is the Hessian of \begin{align}\label{eq:symp-potential-param} g = g_P + \phi, \end{align} $g_P$ being defined as in \eqref{dfn:symp-potential}, $\phi \in C^\infty(P)$. Moreover, $G$ is positive-definite on $\mathring{P}$ and satisfies the regularity condition \begin{align}\nonumber \det G = \left(\delta(x)\prod_{r=1}^d l_r(x)\right)^{-1}, \end{align} where $\delta$ is a smooth and strictly positive function on $P$.
Conversely, given a function $g$ as in \eqref{eq:symp-potential-param} with the above hypotheses, then $J$ as defined in \eqref{eq:cpx-structure-symp-potential} defines a compatible toric complex structure on $(M,\omega)$. \end{thm} A function $g$ as above is called a \emph{symplectic potential}. (There is, in addition, a coordinate chart associated to each vertex $v$ of $P$, whose construction is detailed, for instance, in \cite{KMN1}.) Let $J$ be a compatible complex structure induced by a symplectic potential $g$ on $(M,\omega)$. Then $(\mathring{M},J) \cong \mathring{P} \times T^n$ as Kähler manifolds, the latter with the complex structure induced from $\mathbb{C}^n$, via the biholomorphism \begin{align}\label{dfn:biholomorphism} \mathring{P} \times T^n &\xrightarrow{\cong} (\mathbb{C}^*)^n \\ (x,\theta) &\mapsto w = (e^{y_1+i\theta_1}, \dots, e^{y_n+i\theta_n}), \end{align} where $y_j = \partial g/\partial x^j$. The assignment $x \mapsto y = \partial g/\partial x$ is an invertible Legendre transform, with inverse given by $x = \partial \kappa/\partial y$, where $$\kappa = x(y)\cdot y - g(x(y))$$ is a Kähler potential given in terms of $g$. Associated to $P$ there is the line bundle $L = \mathcal{O}(D) \to M$, where \begin{align}\nonumber D = \sum_{r=1}^d \lambda_rD_r, \end{align} with each $\lambda_r$ a non-negative integer and each $D_r$ being the divisor corresponding to the inverse image by the moment map of the facet of $P$ defined by $\ell_r = 0$. With this in mind, the meromorphic section $\sigma_D$ of $L$ whose divisor is given by $D$ trivializes $L$ over $\mathring{M}$. Moreover, the divisor of the meromorphic function, whose expression in the dense open subset is given by \begin{align}\label{eq:monomials} w^m = w_1^{m_1}\cdots w_n^{m_n}, \quad m = (m_1, \dots, m_n) \in \mathbb{Z}^n, \end{align} can be calculated to be \cite{CJH} \begin{align}\nonumber \mathrm{div}(w^m) = \sum_{r=1}^d \langle m, \nu_r \rangle D_r. 
\end{align} Now, since the holomorphic sections of $L$ are generated by the sections which, under the trivialization $\sigma_D$, are given by monomials as in \eqref{eq:monomials} with effective divisors, i.e. \begin{align}\label{eq:holomorphic-sections-integers} H^0(M,L) = \mathrm{span}_{\mathbb{C}}\{w^m \sigma_D : m \in \mathbb{Z}^n, \mathrm{div}(w^m\sigma_D) \geq 0\}, \end{align} and $\mathrm{div}(w^m\sigma_D) \geq 0$ precisely when $l_r(m) = \langle m, \nu_r \rangle + \lambda_r \geq 0$ for $r = 1, \dots, d$, it follows that there is a correspondence between the toric basis of $H^0(M,L)$ and the integral points of $P$. \subsection{Half-form corrected quantization in the toric case} \label{corrq} Let $(M,\omega, I)$ be a compact smooth toric K\"ahler manifold with toric complex structure $I$ and such that $\left[ \frac{\omega}{2\pi}\right] -\frac{1}{2}c_{1}(M)$ is an ample integral cohomology class. Let $K_{I}$ be the canonical line bundle on $M$. Let the moment polytope be \begin{align}\label{t11} P = \left\{x \in \R^n \ : \ \ell_r(x) = \nu_r \cdot x + \lambda_r \geq 0, \ \ r = 1, \dots, d\right\} , \end{align} where we use the freedom of translating the moment polytope to choose the $\{\lambda_j\}_{j=1,\dots,d}$ to be half-integral and defined as follows. We consider an equivariant complex line bundle $L\cong {\mathcal O}(\lambda^L_1 D_1+\cdots+\lambda^L_dD_d)$ as in \cite{KMN1} such that $c_{1}(L) = \left[\frac{\omega}{2\pi}\right] -\frac{1}{2}c_{1}(M)$, where the $\{\lambda^{L}_{j}\}_{j=1,\dots, d}$ define a polytope with integral vertices, $P_{L}$. The half-integral $\{\lambda_j\}_{j=1,\dots,d}$ in (\ref{t11}) are then defined by \begin{align}\label{halflambdas} \lambda_j := \lambda^{L}_{j}+\frac12, ~j=1,\dots, d, \end{align} in accordance with the fact that ${\rm div}\,(dZ)= - D_1 -\cdots -D_d$.
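The correspondence between the toric basis of $H^0(M,L)$ and the integral points of $P$ can be checked by brute-force enumeration in small examples. The following sketch (ours, for illustration only; the polytope data are those of $\mathbb{C}P^2$ with the line bundle $\mathcal{O}(\lambda)$) counts the lattice points $m$ satisfying $l_r(m)=\langle m,\nu_r\rangle+\lambda_r\geq 0$ for all facets:

```python
from itertools import product

def lattice_points(normals, lambdas, box):
    """Integral points m with l_r(m) = <m, nu_r> + lambda_r >= 0 for every facet r,
    searched inside the box [-box, box]^n (n inferred from the normals)."""
    pts = []
    for m in product(*(range(-box, box + 1) for _ in normals[0])):
        if all(sum(n_i * m_i for n_i, m_i in zip(nu, m)) + lam >= 0
               for nu, lam in zip(normals, lambdas)):
            pts.append(m)
    return pts

# CP^2 with polytope {x1 >= 0, x2 >= 0, lam - x1 - x2 >= 0} (illustrative data):
lam = 3
normals = [(1, 0), (0, 1), (-1, -1)]   # primitive inward-pointing normals
lambdas = [0, 0, lam]
pts = lattice_points(normals, lambdas, box=lam)
print(len(pts))  # 10 = dim H^0(CP^2, O(3)) = binom(3 + 2, 2)
```

For $\lambda = 3$ this returns $10 = \binom{\lambda+2}{2}$, in agreement with the dimension of $H^0(\mathbb{C}P^2,\mathcal{O}(3))$.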
Within the open orbit $U_{0}$, the holomorphic $(n,0)$-form $dZ$ is given by $dZ=dz^1\wedge \cdots \wedge dz^n = dW/w^{\mathbf{1}}$, with \begin{equation}\label{partialz} z=\,^{t}(z^{1},\dots,z^{n})=\partial g/\partial{x}+i{\theta} \end{equation} and $dW:=dw^{1}\wedge\cdots\wedge dw^{n}$, such that $dZ$ and $dW$ are trivializing sections of ${K_{M}}_{|_{U_{0}}}$. (Here, $\mathbf{1}=(1,\dots,1)$ so that $w^{\mathbf{1}}=w^1\cdots w^n$.) Note that $P_L$ is obtained from the moment polytope $P$ by shifts of $\frac12$ along each of the integral primitive inward pointing normals. We will call $P_{L}\subset P$ the \textit{corrected polytope}. We equip the line bundle $L$ with a $U(1)$ connection, $\nabla^{I}$, whose curvature is given by \begin{align} F_{\nabla^{I}}= -i\omega+\frac{i} {2}\rho_{I}. \end{align} From the analysis in the previous section, we recognize that $\sqrt{|K_I|}$ and $\mu_I$ are always well-defined, even when $\sqrt{K_I}$ is not. Based on this, the authors of \cite{KMN1} defined the quantum space for the half-form corrected K\"ahler quantization of $(M, \omega, L, I)$ as follows: \begin{dfn}\label{qhs}\cite[Definition 3.1]{KMN1} The quantum Hilbert space for the half-form corrected K\"ahler quantization of $(M,\omega,L,I)$ is defined by \[ {\mathcal{H}}^{Q}_{I} = \mathcal{B}_{I}^{Q} \otimes\mu_{I}, \] where \[ \mathcal{B}_{I}^{Q}=\{s\in\Gamma(M, L): \nabla^{I}_{\overline{\mathcal{P}}_{I}}\, s=0\}. \] The inner product is defined by \begin{align}\label{innerproduct} \langle \sigma\otimes \mu_I ,\sigma' \otimes \mu_I\rangle = \langle \sigma,\sigma'\rangle = \frac{1}{(2\pi)^n}\int_M h^L(\sigma,\sigma') \frac{\omega^n}{n!}. \end{align} \end{dfn} Now fix a choice of symplectic potential $g$ for the complex structure $I$ on $M$.
We define the connection $\nabla^{I}$ on $L$ by \begin{align}\label{conn1} \Theta_{v}&=-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}+\frac {i}{4}\left( \frac{\partial}{\partial x_{v}}\log\det G_{v}\right) \cdot G_{v}^{-1}d\theta_{v}\\ & =-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\mathrm{Im}\left( \partial\log\det G_{v}+\sum_{k=1}^{n}dz_{v}^{k}\right)=\frac{\nabla^{I}{\mathbf{1}}_{v}^{U(1)}}{{\mathbf{1}}_{v}^{U(1)}}. \nonumber \end{align} On the open orbit $\mathring{M}$, the connection forms are specified by \begin{align}\label{conn2} \Theta_{0} & :=-i\,{x}\cdot d{\theta}+\frac{i}{4}\left( \frac{\partial }{\partial x}\log\det G\right) \cdot G^{-1}d\theta\\ & =-i\,{x}\cdot d{\theta}+\frac{i}{2}\mathrm{Im}\,\partial\log\det G.\nonumber \end{align} One may check that $\Theta_{v}-\Theta_{v^{\prime}}=d\log\tilde{g}_{v^{\prime }v}^{L}$ and $\Theta_{v}-\Theta_{0}=d\log\tilde{g}_{0v}^{L}$, so that $\{\Theta_{0},\Theta_{v}: v\in V\}$ does indeed define a $U(1)$-connection on $L$. \begin{rmk} Despite the singularity of $d\theta_v^j$ as $x_v^j \to 0$, the connection form $\Theta_v$ defined in (\ref{conn1}) remains non-singular over the open set $U_v$. This non-singularity can be verified either by analyzing the behavior of the matrix $G_v$ or by using the alternative coordinates $\{a_v^j, b_v^j\}_{j=1,\dots,n}$ which regularize the expression. \end{rmk} \subsection{Complex-time dynamics and quantization}\label{section:cpx-time} We collect here the results of \cite{BFMN, MN, SZ} concerning the imaginary time flow formalism in the concrete case of symplectic toric manifolds.
Recall that if $(M,\omega_0,J_0)$ is a compact K\"ahler manifold with K\"ahler form $\omega_0$, the space of K\"ahler metrics on $M$ in the same K\"ahler class is given in terms of global relative K\"ahler potentials as \begin{align}\nonumber \mathcal{H}(\omega_0) = \{\rho \in C^{\infty}(M) \mid \omega_0 + i\partial\bar\partial\rho > 0\}/\mathbb{R}, \end{align} which has a natural infinite-dimensional smooth manifold structure, as it is an open subset of $C^\infty(M)$ (modulo constants), the latter considered with the topology of uniform convergence on compact sets of functions and all their derivatives. In particular, tangent vectors to this space are simply smooth functions on $M$. It is equipped with the so-called \emph{Mabuchi metric} \cite{M}, given by \begin{align}\nonumber \langle \delta_1 \rho, \delta_2 \rho \rangle = \int_M\frac{1}{n!}(\delta_1\rho\cdot\delta_2\rho) \omega_\rho^n, \end{align} where the functions $\delta_1 \rho, \delta_2 \rho\in C^\infty(M)$ are tangent vectors at $\rho \in \mathcal{H}(\omega_0)$ and $\omega_\rho = \omega_0 + i\partial\bar\partial\rho$. From Moser's theorem, there are diffeomorphisms defining equivalent K\"ahler structures $\varphi_\rho:(M,\omega_\rho,J_0)\to (M,\omega_0,J_\rho)$, so that the K\"ahler metrics can then be described in terms of a varying complex structure $J_\rho$ for fixed symplectic form $\omega_0$. Let now $(M,\omega)$ be the symplectic toric manifold defined by the Delzant polytope $P$ and let $h$ be a smooth uniformly convex function on $P$. It is known (see \cite{SZ}) that the family of symplectic potentials $$ g_s = g + sh, \, s \geq 0, $$ defines a Mabuchi geodesic ray of toric K\"ahler structures. Let $\mathcal{P}_s$ denote the K\"ahler polarization on $(M,\omega_P)$ defined by the complex structure $J_s$ associated to the symplectic potential $g_s$.
One has from \cite{BFMN}, pointwise in the Lagrangian Grassmannian along $\mathring{M}_P$, $$ \lim_{s\to +\infty} \mathcal{P}_s = \mathcal{P}_\mathbb{R}, $$ where $\mathcal{P}_\mathbb{R}$ is the toric real polarization defined by the fibers of the moment map $\mu.$ As described above, let now $L \to M$ be the smooth line bundle with first Chern class \begin{align}\nonumber c_1(L) = \frac{\omega}{2\pi} - \frac{c_1(M)}{2}. \end{align} From \cite{KMN1}, we know that the half-form corrected Hilbert space $\mathcal{H}_{\mathcal{P}_{s}}$ associated to the polarization $\mathcal{P}_{s}$ is given by \begin{align}\label{kahlerquants} \mathcal{H}_{\mathcal{P}_{s}} = \left\{\sigma_s^m = w_s^m e^{-k_s(x)}\mathbf{1}^{U(1)} \otimes \sqrt{dZ_s}\right\}, \text{ }m \in P \cap \mathbb{Z}^n, \end{align} where $$k_s(x) = x \cdot \frac{\partial g_s}{\partial x} - g_s(x)$$ is the K\"ahler potential, $w_s^j = e^{y^j_s + i\theta_j}$, $y_s^j = \frac{\partial g_s}{\partial x_j}$, and $\mathbf{1}^{U(1)}$ is a local $U(1)$ trivializing section, so that $\mathbf{1}^{U(1)} \otimes \sqrt{dZ_s}$ is a trivialization of $L$ over the open dense subset $\mathring M$. While $L$ is not necessarily the tensor product of a prequantum line bundle with a square root of the canonical bundle of $M$, we write local sections in this way as their product is well-defined (cf. \cite{Wo}, \S 10.4). Let $\hat h^{prQ}$ be the prequantum operator associated to $h$. From \cite{KMN1,KMN4}, we have a linear $T^n$-equivariant isomorphism $$ e^{s\hat h^{prQ}} \otimes e^{is\shL_{X_H}} : \mathcal{H}_{\mathcal{P}_{t}} \to \mathcal{H}_{\mathcal{P}_{t+s}}, \, t, s\geq 0, $$ with \begin{align}\label{eq:prq-evolution} e^{s\hat h^{prQ}} \otimes e^{is\shL_{X_H}} \sigma_t^m = \sigma_{t+s}^m, \text{ }m \in P \cap \mathbb{Z}^n.
\end{align} Following \cite{KMN1,KMN4}, we define the quantum operator $\hat h^Q:\mathcal{H}_{\mathcal{P}_{t}}\to \mathcal{H}_{\mathcal{P}_{t}}$ associated to $h$ by $$ \hat h^Q \sigma_t^m = h(m) \sigma_t^m, \, m\in P \cap \mathbb{Z}^n. $$ The generalized coherent state transform $U_s, s>0$, which lifts the imaginary time Hamiltonian flow along the Mabuchi geodesics generated by $h$ to the bundle of half-form corrected polarized Hilbert spaces, is defined as the linear isomorphism \begin{align}\label{defcst} U_{s} = (e^{s\hat{h}^{prQ}} \otimes e^{is\shL_{X_H}}) \circ e^{-s\hat h^Q} : \mathcal{H}_{\mathcal{P}_t} \to \mathcal{H}_{\mathcal{P}_{t+s}}. \end{align} We now recall \begin{thm}\label{thmkmn2}\cite[Theorem 4.3]{KMN4} Let $\left\{\delta^m\right\}_{m\in P \cap \mathbb{Z}^n}$ be the Bohr-Sommerfeld basis of distributional sections for the half-form corrected Hilbert space of $\mathcal{P}_\mathbb{R}$-polarized sections (see \cite{BFMN,KMN1}). Then, $$ \lim_{s\to +\infty} U_s \sigma_0^m = (2\pi)^{n/2} e^{g(m)} \delta^m. $$ \end{thm} In the following we will generalize this theorem to the case when the Hamiltonian function generating the Mabuchi geodesic ray of toric K\"ahler metrics is convex only along a subset of the moment map coordinates, so that the limit polarization becomes a mixed polarization. \section{``$[Q,R]=0$" for toric manifolds: a new perspective} \label{chapter:mixed-polarizations} Let $M$ be an $n$-dimensional toric manifold and $T^p$ be a fixed torus subgroup of the toric group, $T^n$. In this section we consider mixed polarizations, ${\mathcal P}_\infty$, associated with the Hamiltonian action of $T^p$ on $M$, with moment map $\mu_p$. These are toric polarizations with real directions given by the Hamiltonian vector fields of the components of $\mu_p$.
Thus, for such a polarization, the (prequantum operators corresponding to the) components of $\mu_p$ act diagonally and the level sets of $\mu_p$ correspond to symplectic reductions of $M$ with respect to the action of $T^p$. We will obtain such polarizations by looking at the evolution of the Kähler polarization of a toric Kähler manifold under the imaginary time flow of an appropriate Hamiltonian, namely a smooth convex function of the moment map for the Hamiltonian action of the subtorus $T^p\subset T^n.$ \subsection{``Quantization commutes with Reduction"} \label{newsubsec} In this section, we describe general features of appropriate toric mixed polarizations in which the quantization commutes with reduction correspondence is naturally framed. These are associated to the action of a subtorus $T^p\subset T^n$, with moment map $\mu_p$ and moment polytope $\Delta.$ We call such polarizations ``Fourier polarizations". (See Section 3 in \cite{BHKMN}.) In the next section, we will then describe explicitly mixed toric polarizations $\mathcal{P}_\infty$ which are Fourier polarizations. We have the diagram $$ M/T^p \stackrel{\pi}{\longleftarrow} M \stackrel{\mu_p}{\longrightarrow} \Delta , $$ where $\pi$ denotes the quotient map, and also the map $\alpha: M/T^p \to \Delta$ such that $\alpha \circ \pi = \mu_p.$ The fibers of $\mu_p$ are principal $T^p$-bundles in $M$ and the real directions in $\mathcal{P}_\infty$ are given by the $T^p$-orbits. The fibers of $\alpha$ then give the coisotropic reductions for $\mathcal{P}_\infty$ and they correspond to the K\"ahler reductions $$ M_c = \mu_p^{-1}(c)/T^p $$ with K\"ahler structure given by the reduction of the K\"ahler structure on $M$ defined by the symplectic potential $g$. (See Theorem \ref{prop:evol-polarization} where this is explicit.)
This is an example of the ``fibering polarizations" considered in Section 3 of \cite{BHKMN} (see also \cite{BFHMN}); more precisely, $\mathcal{P}_\infty$ is a ``Fourier polarization", as in Definition 3.18 in \cite{BHKMN}. As will be seen quite explicitly below in Theorem \ref{prop:evol-polarization}, the toric K\"ahler structures along the coisotropic reductions of $\mathcal{P}_\infty$ do not evolve along the Mabuchi ray, so that, at infinite geodesic time, one obtains that the quantizations of the coisotropic reductions of $\mathcal{P}_\infty$ coincide with the K\"ahler quantizations of the K\"ahler reductions of the initial K\"ahler structure. (See Section \ref{subsecronaldo} below for further details.) {} More generally, suppose that a symplectic manifold $M$ carries an effective Hamiltonian action of a torus $T^p$ with moment map $\mu_p= (\mu^1,\dots, \mu^p)$. Let $\mathcal{P}_F$ be a Fourier polarization on $M$, so that its real directions are given by the orbits of $T^p$, that is, they are generated by the Hamiltonian vector fields of the components of $\mu_p$, and the holomorphic directions give an ample K\"ahler structure on the coisotropic reductions of $\mathcal{P}_F.$ For this type of mixed polarization, with respect to the $T^p$-action, the quantization commutes with reduction correspondence becomes, almost, a tautology. Indeed, given what the real and holomorphic directions in $\mathcal{P}_F$ are, one expects that, generally, $\mathcal{P}_F$-polarized sections should have the form of sums of products of a factor given by a function of the components of $\mu_p$ times a factor holomorphic along the coisotropic reductions. Reducing after quantizing then just corresponds to fixing the values of the components of $\mu_p$ in the polarized sections, according to the chosen level set of $\mu_p$.
Moreover, since the components of $\mu_p$ are $\mathcal{P}_F$-polarized, their Kostant-Souriau prequantum operators will act on $\mathcal{P}_F$-polarized sections just as multiplication operators. Thus, after reduction they will act just by multiplication by the appropriate constant. This is a natural property to demand for states that correspond to states in the quantization of the symplectic reduction. On the other hand, for a given level set $\mu_p=c$, in the first-reduce-then-quantize approach one obtains just the states which are holomorphic with respect to the K\"ahler structure on the corresponding coisotropic reduction $M_c = \mu_p^{-1}(c)/T^p$. Thus, at the level of linear spaces, $$ \mathcal{H}_{\mathcal{P}_F} = \bigoplus_{c} \mathcal{H}_{M_c}, $$ where $\mathcal{H}_{M_c}$ is the K\"ahler quantization of $M_c$ and where the sum goes over quantizable coisotropic reductions of $\mathcal{P}_F$. Thus, quantization commutes with reduction is a natural consequence of the general properties of the Fourier polarization $\mathcal P_F$. Regarding the problem of unitarity in the quantization commutes with reduction correspondence, it follows from the above description that, for a fixed level set $\mu_p=c$, the quantization commutes with reduction correspondence is unitary, up to an overall constant factor. However, there is, a priori, the possibility that this overall factor actually depends on the level set. {} Typically, the quantization commutes with reduction correspondence is considered for a given $T^p$-invariant K\"ahler structure, $\mathcal{P}_0$, on the ambient symplectic manifold and for the induced structures on the corresponding K\"ahler quotients.
If for this K\"ahler polarization one can define an associated Fourier polarization $\mathcal{P}_F$, as above, with matching K\"ahler structures along the K\"ahler and coisotropic reductions, respectively, then the question of unitarity in quantization commutes with reduction for $\mathcal{P}_0$ can be more naturally formulated simply as the problem of whether or not the quantizations with respect to $\mathcal{P}_0$ and with respect to $\mathcal{P}_F$ are unitarily equivalent. In Section \ref{subsecqcr}, in the toric case, we will show that the polarization $\mathcal{P}_\infty$ obtained in Section \ref{subsec_geodraysconv} is a Fourier polarization for which the above description holds, and where the quantization commutes with reduction correspondence is unitary. In this case, moreover, the unitary correspondence is obtained for all coisotropic reductions at once, with no need for choosing level-dependent constants to achieve unitarity. The fact that for a given initial toric K\"ahler polarization $\mathcal{P}_0$ the quantization commutes with reduction correspondence is not unitary becomes, more naturally, the fact that the quantizations with respect to $\mathcal{P}_0$ and $\mathcal{P}_\infty$ are not unitarily equivalent. Note that unitarity in quantization commutes with reduction holds only in the case when $M=(\mathbb{C}^*)^n$ and the toric K\"ahler structure is flat. {} \subsection{Geodesic rays converging to mixed polarizations} \label{subsec_geodraysconv} Let $T^{p}$ denote a subtorus of $T^{n}=\mathbb{R}^n/\mathbb{Z}^n$. In this subsection, we describe the mixed polarization (which we denote by $\mathcal{P}_\infty$) associated with the action of the subtorus $T^{p}$ and a toric complex structure $J$ determined by a symplectic potential $g$. $\mathcal{P}_\infty$ is a Fourier polarization, in the sense described in Section \ref{newsubsec}.
Associated to the subtorus $T^p$ we have a primitive sublattice of $\mathbb{Z}^n$ such that, infinitesimally, the Lie algebra inclusion $\frak{t}^{p} \subset \frak{t}^{n}$ is represented by $p$ primitive vectors in $\mathbb{Z}^n$ generating the sublattice. Let $\hat B$ be the $(p\times n)$-matrix whose rows are those vectors. $\hat B$ is then the upper $(p\times n)$-block of a matrix $B \in SL(n,\mathbb{Z})$. (See Section 1 of \cite{AH} and Section 3 in Chapter 1 of \cite{GL}.) The action of $T^{p}$ on $M$ is a Hamiltonian action, denoted by $\rho_{p}: T^{p} \rightarrow \mathrm{Diff}(M,\omega,J)$, and the corresponding moment map $\mu_{p}$ for this action is given by $$\mu_p=\hat B \circ \mu.$$ According to \cite{LW1} or \cite{W1}, the mixed (singular) polarization $\mathcal{P}_\infty$ can be defined as follows: \begin{align} \mathcal{P}_\infty = (\mathcal{D}_{\mathbb{C}} \cap P_{J} ) \oplus \mathcal{I}_{\mathbb{C}}, \end{align} where $\mathcal{D}_{\mathbb{C}}=(\ker d \mu_{p} \otimes \CC)$ and $\mathcal{I}_{\mathbb{C}}=(\mathrm{Im}\, d \rho_{p} \otimes \CC)$. We denote the moment polytope for the Hamiltonian $T^p$ action by $\Delta = \mu_p(M).$ We will consider Mabuchi geodesic rays of toric K\"ahler structures generated by Hamiltonian functions which are smooth convex functions on $\Delta$. As we will describe, Theorem 1.2 in \cite{BFMN} generalizes and one obtains a mixed toric polarization at infinite Mabuchi geodesic time (refer to \cite{BFMN,LW1,W1}). Consider the affine change of action-angle coordinates on $\mathring M$, $$\tilde x = B\cdot x, \,\,\tilde \theta = {}^t{B^{-1}} \cdot \theta,$$ such that the moment polytope $P$ for the $T^n$-action is now described by the linear inequalities $$ \tilde l_j (\tilde x) = \langle \tilde x, \tilde \nu_j \rangle - \lambda_j \geq 0,\,\,\, j=1, \dots, r, $$ where $\tilde \nu _j = {}^tB^{-1} \nu_j$ gives the primitive normals to the facets of $P$ in the new affine coordinates.
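For $p=1$, $n=2$, both the completion of a primitive row to $B\in SL(2,\mathbb{Z})$ (via B\'ezout coefficients) and the fact that the change $\tilde x = B\cdot x$, $\tilde\theta={}^tB^{-1}\cdot\theta$ preserves the symplectic form $\sum_j dx^j\wedge d\theta_j$ can be checked concretely. The following sketch is our own illustration (the vector $(2,3)$ is an arbitrary primitive example):

```python
import numpy as np

def ext_gcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def complete_to_SL2(a1, a2):
    """Complete the primitive row (a1, a2) to a matrix B in SL(2, Z)."""
    g, s, t = ext_gcd(a1, a2)
    assert g == 1, "row must be primitive (coprime entries)"
    # a1*s + a2*t == 1, so the second row (-t, s) gives det B == 1.
    return np.array([[a1, a2], [-t, s]])

B = complete_to_SL2(2, 3)              # arbitrary primitive example vector
assert round(np.linalg.det(B)) == 1

# The change (x, theta) -> (B x, B^{-T} theta) acts by S = diag(B, B^{-T}).
n = 2
S = np.block([[B, np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(B).T]])

# Standard symplectic matrix for omega = sum_j dx^j ^ dtheta_j.
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# S is a symplectomorphism iff S^T J S == J.
assert np.allclose(S.T @ J @ S, J)
```

The same Bézout construction underlies the explicit example with $\tilde x_1 = a_1x_1 + a_2x_2$ worked out later in this section.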
Note that one has, for the holomorphic coordinates on $\mathring M$ associated to a symplectic potential $g$, $$ \tilde z_j = \frac{\partial g}{\partial \tilde x_j} + i\tilde \theta_j = \sum_{k=1}^n B^{-1}_{kj} z_k,\,\, j=1, \dots, n, $$ where $$z_k= \frac{\partial g}{\partial x_k} + i \theta_k,\, k=1, \dots, n.$$ Let \begin{align}\label{simplehamiltonian} H = \frac12 \sum_{j=1}^p \tilde x_j^2. \end{align} By the same argument as in the proof of Theorem 1.2 in \cite{BFMN}, Theorem 3.20 in \cite{LW1} and Theorem 3.5 in \cite{W1}, we obtain \begin{thm}\label{prop:evol-polarization} For a choice of symplectic potential $g$ on $M$, let $\mathcal{P}_s, s\geq 0$, be the family of K\"ahler polarizations associated to the symplectic potentials $g+sH, s\geq 0$, which is obtained under the imaginary time flow of the Hamiltonian vector field $X_H$. Then the limiting polarization $\mathcal{P}_\infty$ exists, and $$\mathcal{P}_\infty:= \lim_{s\to +\infty} \mathcal{P}_s $$ is a (singular) mixed polarization. Moreover, over the open dense $(\mathbb{C}^*)^n$-orbit $\mathring M,$ $$ \mathcal{P}_\infty = \langle \partial/\partial\tilde \theta_{1}, \dots, \partial/\partial\tilde \theta_p, X_{\tilde z_{p+1}}, \dots, X_{\tilde z_{n}}\rangle_{\mathbb C}= $$ $$ = \langle \partial/\partial\tilde \theta_{1}, \dots, \partial/\partial\tilde \theta_p, \frac{\partial}{\partial {\tilde z_{p+1}}}, \dots, \frac{\partial}{\partial{\tilde z_{n}}}\rangle_{\mathbb C}. $$ \end{thm} {} \begin{proof} Note that $\mathcal{P}_s, s\geq 0$, is generated on the open dense orbit by the Hamiltonian vector fields $$ X_{\bar{\tilde{z}}^j_s},\, j=1, \dots, n. $$ But we have $$ \tilde z^j_s = \tilde z^j_0 + s \tilde x^j,\, j=1, \dots, p, $$ and $$ \tilde z^j_s = \tilde z^j_0, \, j= p+1, \dots, n, $$ from which the result for the open dense orbit follows by taking $s\to \infty$.
The second expression for $\mathcal{P}_\infty$ follows directly from the formulation in \cite{LW1}, and the equivalence between the two expressions can also be directly checked by writing the Hamiltonian vector fields explicitly in action-angle coordinates and then taking the limit $s\to \infty.$ \end{proof} We will also describe the explicit expressions for $\mathcal{P}_\infty$ along $\mu^{-1}(\partial P)$ below in Proposition \ref{prop_limippolontheboundary}. Thus, the limit polarization $\mathcal{P}_\infty$ has $p$ real directions and $n-p$ holomorphic directions. (Note that when $p=n$ one obtains the real toric polarization on $\mathring M$.) Recall that around each vertex $v$ of $P$ one has a coordinate neighborhood $U_v$ and holomorphic coordinates $(w_v^1, \dots, w_v^n)$ which are rational functions of the holomorphic coordinates $(w^1, \dots , w^n)$ along $\mathring M \cap U_v.$ If $A_v\in GL_n(\mathbb{Z})$ is the matrix whose rows correspond to the primitive inner pointing normals to the facets of $P$ which are adjacent to $v$, recall that one has affine coordinates on $U_v$ $$ x_v = A_v x + \lambda_v, \,\,\, \theta_v = {}^tA_v^{-1}\theta, $$ where $\lambda_v$ is the $n\times 1$ matrix containing the $\lambda_j$'s corresponding to the facets adjacent to $v$, given by the conditions $l_j(x)=0$. One obtains $$ z_v = \frac{\partial g}{\partial x_v} + i \theta_v = {}^tA_v^{-1} z, $$ and $$ w_v^j = e^{z_v^j} = \prod_{k=1}^n (w^k)^{(A_v^{-1})_{kj}}, $$ which is simply written as $w = w_v^{A_v}$.
In terms of the new affine coordinates $(\tilde x, \tilde \theta)$ we obtain, correspondingly, $$ \tilde w = \tilde w_v^{\tilde A_v}, $$ where $\tilde A_v = A_v B^{-1}.$ For the symplectic potentials $g_s = g+s H, s\geq 0$, consider the Hessian, with respect to the new action coordinates $\tilde x$, $$ \tilde G_s = \tilde G + s T, $$ where $T$ is the $n\times n$ matrix whose top $p\times p$ diagonal block is the identity and whose remaining entries are zero, and where $\tilde G$ is the Hessian of $g$. Let $D$ denote the lower $(n-p)\times (n-p)$ diagonal block in $\tilde G$. As in Theorem 3.4 in \cite{BFMN}, in any of the charts $(U_v,\tilde w_{v})$, the limit polarization $\mathcal{P}_{\infty}$ can be described as follows: for any face $F$ in the coordinate neighbourhood, we write abusively $j \in F$ if $\tilde w_{v}^{j}=0$ along $F$. Over the boundary of the moment polytope one then obtains \begin{prop} \label{prop_limippolontheboundary} Over $\mu^{-1}(F) \cap U_v$, \begin{eqnarray}\nonumber && \mathcal{P}_\infty := \lim_{s\to +\infty} \left( \langle \frac{\partial}{\partial \tilde w_v^{j}}, j\in F \rangle_{\mathbb{C}} \oplus \langle \sum_{k\notin F} (\tilde G_s^v)^{-1}_{jk}\frac{\partial}{\partial \tilde x_v^k} -i\frac{\partial}{\partial \tilde \theta_v^j} , \, j\notin F \rangle_{\mathbb{C}}\right) \\ \nonumber && = \langle \frac{\partial}{\partial \tilde w_v^{j}}, j\in F \rangle_{\mathbb{C}} \oplus \langle \sum_{k\notin F, q,r=1}^{n} (\tilde{A}_{v})_{jq} (\tilde G^{-1}_\infty)_{qr}(\tilde{A}^{T}_{v})_{rk} \frac{\partial}{\partial \tilde x_v^k} -i\frac{\partial}{\partial \tilde \theta_v^j} , \, j\notin F \rangle_{\mathbb{C}} \\ \nonumber && = \langle \frac{\partial}{\partial \tilde w_v^{j}}, j\in F \rangle_{\mathbb{C}} \oplus \langle \sum_{q,l=p+1}^{n} (\tilde A_v)_{jq} D^{-1}_{(q-p)(l-p)} \frac{\partial}{\partial \tilde x^l}-i \frac{\partial}{\partial \tilde \theta_v^j},\, j\notin F\rangle_\mathbb{C}, \end{eqnarray} where $\tilde G_s^v$ is the Hessian of
$g_s$ with respect to the coordinates $\tilde x_v.$ \end{prop} \begin{proof} The result follows from Lemma 3.3 in \cite{BFMN} and from the form of the inverse of the Hessian of $g_s$ as $s\to \infty.$ Note that if $$ \tilde G = \left[ \begin{array}{cc} A_1 & A_2 \\ A_3 & D \end{array} \right], $$ where $A_1$ is the top diagonal $p\times p$ block in $\tilde G$, then $$ \tilde G_s = \left[ \begin{array}{cc} A_1 + sI_p & A_2 \\ A_3 & D \end{array} \right]. $$ (Note that $A_2^T = A_3$ and that $A_1, D$ are symmetric.) Define the Schur complement \( S = A_1 + sI_p - A_2D^{-1}A_3 \), which is invertible when \( s \) is sufficiently large. Using the block-inverse formula, we obtain \[ \tilde{G}_{s}^{-1}= \begin{bmatrix} S^{-1} & -S^{-1} A_{2}D^{-1} \\ -D^{-1} A_{3} S^{-1} & D^{-1} + D^{-1} A_{3} S^{-1} A_{2} D^{-1} \end{bmatrix}, \] so that $$ \tilde{G}_{\infty}^{-1}=\lim_{s\to \infty} \tilde G_s^{-1} = \left[ \begin{array}{cc} 0 & 0 \\ 0 & D^{-1} \end{array} \right], $$ from which the result follows. \end{proof} {} \begin{exmp} Let us first consider the case of a $4$-dimensional toric manifold $M$ with moment polytope $P$. Consider the following component of the moment map, given in polytope coordinates by \begin{align}\nonumber \tilde x_1(x_1,x_2) = a_1x_1 + a_2x_2, \end{align} where $a_1$ and $a_2$ are coprime integers and $(x_1,x_2)\in P$. The complex coordinates $w_j = e^{z_j}$, with $z_j = y_j+i\theta_j$ and $y_j = \partial g/\partial x_j,\, j=1,2$, define a (Kähler) polarization $\mathcal{P}$. Setting $H = (\tilde x_1)^2/2$, we then proceed to calculate the evolution $\mathcal{P}_{s}, s\geq 0$, of this polarization under the imaginary time flow of the Hamiltonian vector field $X_H$ in order to examine its behavior as $s \to \infty$. First, notice that $X_{w_j} = w_jX_{z_j}$, $j = 1, 2$, meaning that $\mathcal{P}$ is spanned by $X_{z_1}$ and $X_{z_2}$.
Then \begin{align}\nonumber e^{isX_H}\cdot z_j = y_j + i(\theta_j - isa_j\tilde x_1) = y_j + sa_j\tilde x_1 + i\theta_j, \end{align} and consequently, \begin{align}\nonumber e^{isX_H}\cdot X_{z_j} = X_{y_j} + sX_{a_j\tilde x_1} + iX_{\theta_j} = s\left(X_{a_j\tilde x_1}+\frac{1}{s}(X_{y_j}+iX_{\theta_j})\right). \end{align} {} We now introduce the new action coordinate \begin{align}\label{eq:change-of-coords}\nonumber \tilde x_2 &= b_1x_1 + b_2x_2, \end{align} where $b_1$ and $b_2$ are integers such that $a_1b_2 - a_2b_1 = 1$. Their existence is guaranteed by the fact that $a_1$ and $a_2$ are coprime. In other words, we have $\tilde x = A\cdot x$ with \begin{align}\nonumber A = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix}\in SL(2,\mathbb{Z}). \end{align} Note that $\tilde P = A(P)$ is an equivalent Delzant polytope describing the same toric manifold $M$. The corresponding angle coordinates are $\tilde \theta = {}^tA^{-1}\cdot \theta$. These new action-angle coordinates also give rise to new complex coordinates compatible with the Kähler structure on $M$, given by $\tilde z_j = \tilde y_j + i\tilde\theta_j$, where $\tilde y_j = \partial g/\partial\tilde x_j$. In these new coordinates, the evolution is much simpler: \begin{align}\nonumber e^{isX_H}\cdot \tilde z_2 &= \tilde z_2, \\ \nonumber e^{isX_H}\cdot \tilde z_1 &= \tilde z_1 + \tilde x_1 s, \end{align} and therefore \begin{align} \nonumber e^{isX_H}\cdot X_{\tilde z_2} &= X_{\tilde z_2} \\ \nonumber e^{isX_H}\cdot X_{\tilde z_1} &= X_{\tilde z_1} + X_{\tilde x_1}s = s\left(X_{\tilde x_1} + \frac{1}{s}X_{\tilde z_1}\right), \end{align} and so in the limit $s \to \infty$ the polarization is spanned by $X_{\tilde z_2}$ and $X_{\tilde x_1} = -\partial/\partial\tilde\theta_1$. \end{exmp} {} \subsection{Some properties of the K\"ahler reductions of $\mathcal{P}_\infty$} \label{newnewsub} Of course, the symplectic reductions discussed above in Section \ref{newsubsec} may be singular.
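The block-inverse computation in the proof of Proposition \ref{prop_limippolontheboundary}, namely $\tilde G_s^{-1}\to \mathrm{diag}(0, D^{-1})$ as $s\to\infty$, can also be checked numerically. The following sketch is our own illustration, with a random positive-definite matrix standing in for the Hessian $\tilde G$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2

# A random symmetric positive-definite stand-in for the Hessian G.
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)
D = G[p:, p:]                      # lower (n-p) x (n-p) diagonal block

# G_s = G + s*T, with T the projection onto the first p coordinates.
s = 1e8
G_s = G.copy()
G_s[:p, :p] += s * np.eye(p)

# Claimed limit: the inverse concentrates in the D-block.
limit = np.zeros((n, n))
limit[p:, p:] = np.linalg.inv(D)

assert np.allclose(np.linalg.inv(G_s), limit, atol=1e-6)
```

The deviation from the limit is of order $1/s$, consistent with the Schur complement formula in the proof.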
Let $V_c\subset {\mathbb R}^n$ denote the hyperplane defined by $\mu_p = (x^1,\dots, x^p) = c\in \mathbb{R}^{p}$ and let $M_c = \mu_p^{-1}(V_c)/T^p.$ Let us describe, in more detail, the singularities one obtains along the symplectic reductions above. From \cite{CDG} one has that the symplectic potentials for the reduced K\"ahler toric structures are given by the restriction of the symplectic potential on $P$ to the corresponding level sets of the moment map. Let then the Delzant polytope $P\subset \mathbb R^n$ be defined by $$ l_j(x) = \langle x, \nu_j\rangle +\lambda_j \geq 0, \, j=1, \dots, r, $$ where $\nu_j$ is the primitive inward pointing normal to the facet $j$ of $P$. The associated Guillemin symplectic potential is $$ g_P = \frac12 \sum_{j=1}^r l_j \log l_j. $$ Let us consider the symplectic reductions corresponding to the level sets $$ (x_1, \dots, x_k )=(c_1, \dots, c_k) =c\in \mathbb R^k, \,\, k< n, $$ which we assume have non-empty intersection with $P$. Let $\nu_j = (a_j, b_j), \, a_j\in \mathbb Z^k, b_j\in \mathbb Z^{n-k}$ and $y=(x_{k+1}, \dots, x_n)$. The restriction of the Guillemin potential for $P$ becomes $$ g_\mathrm{red} = \frac12 \sum_{j=1}^r \tilde l_j \log \tilde l_j, $$ where $$ \tilde l_j (y) = \langle y, b_j\rangle + \tilde \lambda_j, $$ with $\tilde \lambda_j = \langle c, a_j\rangle+\lambda_j$. The reduced polytope $P_\mathrm{red}^c$ is defined by the conditions $$ \tilde l_j (y) \geq 0, \, j=1, \dots, r. $$ Since the $b_j$ will in general not be primitive, we obtain immediately that, in general, the level $c$ reduction will have, at least, orbifold singularities. By construction, every term in $g_\mathrm{red}$ will be singular somewhere on the boundary of $P^c_\mathrm{red}$ for some value of $c$, since the level sets have non-empty intersection with $P$.
That is, for each $j=1, \dots, r$ there is a value of $c$ and $y\in P^c_\mathrm{red}$ such that $\tilde l_j(y)=0.$ Since $P$ is Delzant, at most $n$ linear functions $\tilde l_j$ are allowed to vanish simultaneously at a given $y\in P^c_\mathrm{red}$ but, in general, more than $n-k$ such terms may vanish, so that $P^c_\mathrm{red}$ will not, in general, be Delzant. Even when $P^c_\mathrm{red}$ is Delzant, $g_\mathrm{red}$ is not in general of the form $$ g_{P^c_\mathrm{red}} + \mathrm{smooth}, $$ so that the geometry induced by $g_\mathrm{red}$ will in general be singular and will include singularities not of orbifold type. As a simple example, we can consider the reductions of the form $$ x_3=\alpha_1 x_1 + \alpha_2 x_2 + c $$ in $\mathbb C^3$, giving the reduced symplectic potential, at $c=0$, $$ g_\mathrm{red}=\frac12 x_1 \log x_1 + \frac12 x_2 \log x_2 + \frac12 (\alpha_1 x_1 + \alpha_2 x_2) \log (\alpha_1 x_1 + \alpha_2 x_2), $$ which is manifestly singular at the origin $x_1=x_2=0$. In fact, for $\alpha_1=\alpha_2=\alpha$, one finds from Abreu's formula \cite{Ab2} that the scalar curvature reads $$ S = \frac{2\alpha}{(\alpha+1)(x_1+x_2)}, $$ which blows up at the origin. {} In the next sections we will study the quantization along the mixed toric polarizations described above. In the event that the hyperplane $V_c$ intersects the moment polytope at integral points, it is natural to define the quantization $\mathcal{H}_{M_c}$ of the Kähler reduction $M_c$ as being isomorphic to the space of distributional sections supported on the integral points of each level set, as in \eqref{eq:section-limit-cst} below. If there are no integral points at the intersection, the quantization is then the trivial space $\{0\}$. As remarked above, some of these reductions will have the structure of a toric orbifold, while some may have worse singularities, depending on how the hyperplane $V_c$ meets the facets of $P$.
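The scalar curvature value quoted above for the $\mathbb{C}^3$ example can be verified symbolically. The sketch below is our own check; we use Abreu's formula in the normalization $S=-\frac12\sum_{j,k}\partial^{2}G^{jk}/\partial x_{j}\partial x_{k}$ (an assumption on conventions, chosen so as to reproduce the value in the text), where $(G^{jk})$ is the inverse Hessian of the symplectic potential:

```python
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 alpha', positive=True)

# Reduced symplectic potential at c = 0 with alpha_1 = alpha_2 = alpha.
g = sp.Rational(1, 2) * (x1*sp.log(x1) + x2*sp.log(x2)
                         + a*(x1 + x2)*sp.log(a*(x1 + x2)))

X = [x1, x2]
G = sp.Matrix(2, 2, lambda i, j: sp.diff(g, X[i], X[j]))  # Hessian of g
Ginv = G.inv()

# Abreu's formula (with the 1/2 normalization -- an assumption here).
S = -sp.Rational(1, 2) * sum(sp.diff(Ginv[i, j], X[i], X[j])
                             for i in range(2) for j in range(2))

expected = 2*a / ((a + 1)*(x1 + x2))
assert sp.simplify(S - expected) == 0
```

The computation confirms that $S$ is independent of the ratio $x_1/x_2$ and diverges as $x_1+x_2\to 0$.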
It is natural to define the quantization of the level sets containing Bohr-Sommerfeld points as being given by the limit of the K\"ahler polarization, as below in Theorem \ref{convsections}, with other level sets having trivial quantization. We will thus obtain, for the quantization in the limit mixed toric polarization, \begin{align}\nonumber \mathcal{H}_{\mathcal{P}_\infty} \cong \bigoplus_{c \in \Delta\cap {\mathbb Z}^p} \mathcal{H}_{M_c}, \end{align} so that the quantum Hilbert space $\mathcal{H}_{\mathcal{P}_\infty}$ of the quantization of $M$ in the limit polarization decomposes as the direct sum of the quantization spaces of the symplectic reductions. In fact, this isomorphism is unitary for a natural hermitian structure on $\mathcal{H}_{\mathcal{P}_\infty}$, as described in Section \ref{subsecqcr} and Theorem \ref{thmqcr}. \section{Half-form corrected mixed quantization} In this Section, we will describe the quantization of the toric variety $M$ along the Mabuchi geodesic of toric K\"ahler structures in Theorem \ref{prop:evol-polarization} and also the quantization with respect to the limit polarization $\mathcal{P}_\infty$. Recall that, as described in Section \ref{chapter:mixed-polarizations}, we have used an $SL(n,\mathbb{Z})$ affine transformation so that the Hamiltonian $H$ generating the Mabuchi family takes the simple form (\ref{simplehamiltonian}). For simplicity of notation, we will drop the tilde from the coordinates $\tilde x, \tilde \theta, \tilde z, \tilde w$ and from the matrices $\tilde A_v$ of the previous Section, so that in this Section they will be written simply as $x,\theta, z, w$ and $A_v$. {} \subsection{Canonical line bundle associated to $\shP_{\infty}$} As $\shP_{\infty}$ is a complex Lagrangian distribution on $M$, it corresponds to a complex (singular) line bundle $K_{\infty}$ defined by \begin{align} K_{\infty,p}=\{\alpha \in \wedge^{n}T^{*}_{p}M_{\CC}\mid \iota_{\bar{\xi}} \alpha =0, \forall \xi \in \shP_{\infty,p}\}.
\end{align} Note that $K_{\infty}$ is a singular complex line bundle with mild singularities, that is, $K_{\infty}$ is smooth on the open dense orbit $\mathring M\subset M$. From Section \ref{chapter:mixed-polarizations}, we obtain that the fiber of $K_\infty$ over $p\in \mathring M$ is $$ (K_\infty)_p = \langle dX_1^p \wedge d Z_{p+1}^n \rangle_\mathbb{C}, $$ where $$ dX_1^p= d x^1 \wedge \cdots \wedge dx^p,\,\, dZ_{p+1}^n= d{{z}}_{p+1}\wedge \cdots \wedge d{{z}}_n. $$ The section $dX_1^p \wedge dZ_{p+1}^n$ is a trivializing section of $K_\infty$ over $\mathring M$ which has a divisor along $\partial P$. Note that the divisor of $dz_1 \wedge \cdots \wedge dz_n$ is $-D_1-\cdots -D_r$, while $dx^j$ goes to zero on the facet where $l_j=0.$ Moreover, from Proposition \ref{prop_limippolontheboundary}, we see that $K_\infty$ has no nontrivial smooth sections over $M\setminus \mathring M$. \subsection{Quantization and coherent state transforms} \label{subsecronaldo} {} In this Section, following \cite{KMN4} as recalled in Theorem \ref{thmkmn2}, we use a generalized coherent state transform to describe how polarized sections evolve along the Mabuchi ray of K\"ahler polarizations $\mathcal{P}_s, s\geq 0$, considered in Theorem \ref{prop:evol-polarization}. Recall the families of toric K\"ahler structures considered in Section \ref{chapter:mixed-polarizations}, which were defined by the symplectic potentials \begin{align} g_{s}:=g+sH, \,\, s>0, \end{align} where the Hamiltonian $H$ is defined in (\ref{simplehamiltonian}). (Recall that $\tilde x$ is denoted simply as $x$ in this Section.) Denote the corresponding $s$-dependent complex structure by $I_{s}$. As explained in \cite{KMN4}, and recalled in Theorem \ref{thmkmn2}, for the case when $p=n$ and $\mathcal{P}_\infty = \mathcal{P}_\mathbb{R}$, the evolution of polarized $I_s$-holomorphic sections along the Mabuchi geodesic is nicely described by a generalized coherent state transform $U_s$ in (\ref{defcst}).
The same approach applies successfully to the more general case that we are considering here with $p<n$, as we now describe. Recall that the quantization of $M$ with respect to the K\"ahler polarizations $\mathcal{P}_s, s>0$, is given by $$ \mathcal{H}_{\mathcal{P}_s} = \left\{\sigma^m_s, m\in P\cap \mathbb{Z}^n\right\}, $$ where $\sigma^m_s$ is the monomial section $\sigma^m$, as defined in Section \ref{section:cpx-time}, with respect to the toric complex structure $I_s$. We define the quantum operator corresponding to $H$, $\hat H^Q:\mathcal{H}_{{\mathcal P}_s}\to \mathcal{H}_{{\mathcal P}_s}$, to be given by $$ \hat H^Q \sigma^m_s = H(m) \sigma^m_s, \, m\in P\cap {\mathbb Z}^n,\, s\geq 0. $$ Note that $\hat{H}^Q$ is given by \begin{align}\nonumber \hat{H}^Q = \frac{1}{2}\sum_{j=1}^p (\hat{x}_{j}^{prQ})^2, \end{align} where $\hat{x}_j^{prQ}$ is the Kostant-Souriau prequantum operator associated to $x_j.$ The generalized coherent state transform (gCST) is then defined by the $T^n$-equivariant linear isomorphism \begin{align}\label{gcst} U_{s} = (e^{s\hat{H}^{prQ}} \otimes e^{is \shL_{X_H}}) \circ e^{-s\hat H^Q} : \mathcal{H}_{\mathcal{P}_{0}} \to \mathcal{H}_{\mathcal{P}_{s}}, s \geq 0, \end{align} where, as before, $\hat{H}^{prQ}$ is the Kostant-Souriau prequantum operator associated to $H$. It is immediate to verify the validity of the following analog of Theorem \ref{thmkmn2} for the case $p<n$: \begin{thm}\label{local-description}For $s\geq 0$, $$ U_{s} \sigma^m_{0} = e^{-s H(m)}\sigma^m_{s}, \, m\in P\cap \mathbb{Z}^n. $$ \end{thm} Indeed, $e^{-s\hat H^Q}\sigma^m_0 = e^{-sH(m)}\sigma^m_0$ and, by the analog of (\ref{eq:prq-evolution}) for the Hamiltonian $H$, $(e^{s\hat{H}^{prQ}} \otimes e^{is \shL_{X_H}})\sigma^m_0 = \sigma^m_s$. {} {} Also, as in the case $p=n, \mathcal{P}_\infty = \mathcal{P}_\mathbb{R},$ the gCST gives a well-defined limit for the quantization at infinite geodesic time along the Mabuchi ray.
While in that case the $I_s$-holomorphic sections converge, as $s\to\infty$, to Dirac delta distributional sections supported on Bohr-Sommerfeld fibers, which are the fibers with integral value of the moment map, in the more general present case of $p<n$ we have convergence to distributional sections where only $p$ of the moment map coordinates localize. Consider the following distributional sections of $L$ \begin{align} \tilde{\sigma}^m_\infty := \sqrt{2\pi}^p\delta^{m_{1}^p}W^m \sqrt{dX_{1}^p \wedge dZ_{p+1}^{n}}, \end{align} where $$\delta^{m^p_{1}} = \delta(x_{1}-m_{1})\cdots\delta(x_p-m_p)$$ is the Dirac delta distribution, and $$W^m = e^{-\kappa}w_{p+1}^{m_{p+1}}\cdots w_{n}^{m_{n}}e^{i\sum_{j=1}^p m_j \theta_j}e^{\sum_{j=1}^p m_j\frac{\partial g}{\partial x_{j} }}.$$ Let $\underline m = (m_1, \dots, m_p).$ Note that, since $\kappa= x\cdot y -g$, with $y=\frac{\partial g}{\partial x}$, on the support of the product of delta functions, $x_j=m_j, j=1, \dots, p$, and $$ -\kappa +\sum_{j=1}^p m_j\frac{\partial g}{\partial x_{j} } = -\kappa_{\mathrm{red}}, $$ where $\kappa_\mathrm{red}$ is the K\"ahler potential obtained on the K\"ahler quotient $M_{\underline m}=\mu_p^{-1}(m_1,\dots, m_p)/T^p$ and which is given by the Legendre transform of the restriction of $g$ to $\mu_p^{-1}(m_1, \dots, m_p)$.
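For the reader's convenience, this identity can be verified directly from the Legendre-transform relations. On the support of the delta functions ($x_j = m_j$ for $j=1,\dots,p$),
\begin{align*}
\kappa = \sum_{j=1}^{p} m_j \frac{\partial g}{\partial x_j} + \sum_{j=p+1}^{n} x_j \frac{\partial g}{\partial x_j} - g,
\end{align*}
so that
\begin{align*}
-\kappa + \sum_{j=1}^{p} m_j \frac{\partial g}{\partial x_j} = -\left(\sum_{j=p+1}^{n} x_j \frac{\partial g}{\partial x_j} - g\right) = -\kappa_{\mathrm{red}},
\end{align*}
the last expression being (minus) the Legendre transform of the restriction of $g$ to $\mu_p^{-1}(\underline m)$.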
Indeed, as recalled in Section \ref{newsubsec}, from \cite{CDG}, this restriction gives the symplectic potential for the reduced K\"ahler structure on the K\"ahler quotient $M_{\underline m}.$ We have therefore that \begin{align} \label{limitsections} \tilde{\sigma}^m_\infty = \sqrt{2\pi}^p\delta^{m_{1}^p}e^{i\sum_{j=1}^p m_j \theta_j}\sigma^{\underline m, m'} \sqrt{dX_{1}^p}, \end{align} where $m'=(m_{p+1}, \dots, m_n)$ and \begin{align} \label{reducedsections} \sigma^{\underline m, m'} = e^{-\kappa_\mathrm{red}} w_{p+1}^{m_{p+1}}\cdots w_{n}^{m_{n}} \sqrt{dZ_{p+1}^{n}} \end{align} are the monomial sections for the K\"ahler quantization of $M_{\underline m}$ with respect to the reduced K\"ahler structure for the toric K\"ahler structure defined by the symplectic potential $g$, which is the initial point of the Mabuchi ray of toric K\"ahler structures $g_s, s\geq0$. (See also Section \ref{newsubsec}.) Note further that, from Lemma 3.3 in \cite{KMN1}, $\tilde{\sigma}^m_\infty$ gives a well-defined distributional section on $M$. Indeed, $\tilde{\sigma}^m_\infty$ has no poles along $\mu^{-1}(\partial P)$ since the zeros of $w_{p+1}^{m_{p+1}}\cdots w_n^{m_n}$ cancel the poles of $\sqrt{dZ^{n}_{p+1}}$. Following the proof of Theorem 4.3 in \cite{KMN4}, we obtain, in the $s \to +\infty$ limit: \begin{thm}\label{convsections} \begin{align}\label{eq:section-limit-cst} \lim_{s\to +\infty} U_{s}\sigma^m_{0} = \tilde{\sigma}^m_\infty, \, m\in P\cap \mathbb{Z}^n. \end{align} \end{thm} \begin{proof} First we calculate the evolution of the half-form.
The operator $e^{is \mathcal{L}_{X_H}}$ acts on half-forms compatibly with its action on the algebra of smooth functions, that is, \begin{align*}\nonumber &e^{is\mathcal{L}_{X_H}}\sqrt{dZ_{1}^{n}} =\sqrt{ d\left(sx_{1} + z_{1}\right) \wedge \cdots \wedge d\left(sx_p +z_p \right)\wedge dZ_{p+1}^n }\\ &= \sqrt{s}^p\sqrt{ d\left(x_{1} + \frac{1}{s}z_{1}\right) \wedge \cdots \wedge d\left(x_p + \frac{1}{s}z_p \right)\wedge dZ_{p+1}^n}. \end{align*} Now, using the connection on $L$ given by \begin{align}\nonumber \nabla \mathbf{1}^{U(1)} = -i\sum x_j d\theta_j\mathbf{1}^{U(1)}, \end{align} a routine calculation gives \begin{align}\nonumber \hat{H}^{prQ} = -\frac{1}{2}\sum_{j=1}^p x_{j}^2 - i\sum_{j=1}^p x_{j}\frac{\partial}{\partial \theta^{j}}. \end{align} Furthermore, \begin{align}\nonumber \hat{H}^Q = - \frac{1}{2}\sum_{j=1}^p \frac{\partial^2}{(\partial \theta^{j})^2}. \end{align} Thus, \begin{align*} e^{s\hat{H}^{prQ}}\circ e^{-s{H}(m)} w^m = e^{-s\alpha}e^{\beta}e^{i\Theta}w_{p+1}^{m_{p+1}}\cdots w_{n}^{m_{n}}, \end{align*} where \begin{align*} &\alpha(x_{1},\dots,x_p) = \sum_{j=1}^p \frac{1}{2}(x_j-m_j)^2, \\ &\beta(x_{1},\dots,x_p) = \sum_{j=1}^p m_j\frac{\partial g_{0}}{\partial x_{j} }, \\ &\Theta(\theta_{1},\dots,\theta_p) = \sum_{j=1}^p m_j\theta_j. \end{align*} Recall that $$W^m = e^{-\kappa}w_{p+1}^{m_{p+1}}\cdots w_{n}^{m_{n}}e^{i\sum_{j=1}^p m_j \theta_j}e^{\sum_{j=1}^p m_j\frac{\partial g}{\partial x_{j} }}.$$ We then have: \begin{align*} & \lim_{s\to +\infty} U_{s}\sigma^m_{0}=\lim_{s\to +\infty} e^{-s\alpha}W^{m}\sqrt{s}^p\sqrt{ dX_{1}^{p}\wedge dZ_{p+1}^n +O(\frac{1}{s})}\\ &=\lim_{s\to +\infty}\left(\left(\frac{s}{2\pi}\right)^{p/2}e^{-s\sum_{j=1}^p \frac{1}{2}(x_j-m_j)^2}\right){\sqrt{2\pi}^p}W^{m}\sqrt{ dX_{1}^{p}\wedge dZ_{p+1}^n +O(\frac{1}{s})}\\ &=\sqrt{2\pi}^p~\delta^{m_{1}^p}~W^m \sqrt{dX_{1}^p \wedge dZ_{p+1}^{n}}= \tilde{\sigma}^m_\infty.
\end{align*} \end{proof} In the next sections, we will show that the distributional sections $\tilde{\sigma}^m_\infty$ are $\mathcal{P}_\infty$-polarized and that, for $m\in P\cap \mathbb{Z}^n$, they span the vector space of $\mathcal{P}_\infty$-polarized sections, so that the gCST provides a continuous interpolation between the quantizations along the Mabuchi ray from $s=0$ to $s=\infty$. In order to show that for the quantization with respect to $\mathcal{P}_\infty$ no new distributional polarized sections arise with support along the boundary $\partial \Delta$, we will now recall in detail the study of half-form polarized sections in \cite{KMN1}. \subsection{Quantum space $\mathcal{H}_{\mathcal{P}_\infty}$ for the half-form corrected $\mathcal{P}_\infty$} In this Section, we prove that the Hilbert space of half-form corrected sections polarized with respect to $\mathcal{P}_\infty$ consists of the linear span of the basis $\left\{\tilde{\sigma}^m_\infty\right\}_{m\in P\cap \mathbb{Z}^n}$. For the family of complex structures $I_s, s\geq 0$, described above, consider the family of connections $\Theta_v$ described in (\ref{conn1}) and (\ref{conn2}), with $G_s = G + s T$, where $T$ was defined above Proposition \ref{prop_limippolontheboundary}. To define the condition for a half-form corrected section to be $\mathcal{P}_\infty$-polarized, we follow \cite{KMN1} and define a ``limit connection" by considering the $s\to \infty$ limit of the operators of covariant differentiation along $\mathcal{P}_\infty$. Note that, along $\mathring{M}$, we have $$ X_{{z}^j_s} = -2i \sum_{k=1}^n (G_s)_{jk} \frac{\partial}{\partial \bar{z}^k_s} = -2i \left( \frac{\partial}{\partial x^j} + i \sum_{k=1}^n (G_{s})_{jk} \frac{\partial}{\partial \theta^k}\right) $$ and that $$ X_{{z}^j_s} = X_{{z}^j_0} -s \frac{\partial}{\partial \theta^j},\, j=1, \dots, p, $$ $$ X_{{z}^j_s} = X_{{z}^j_0}, \, j=p+1, \dots, n.$$ The behaviour of the connection forms as $s\to \infty$ is described as follows.
\begin{lemma}\label{lemma_limitconn} On the open orbit $\mathring M$, we obtain \begin{align} \Theta_{0}^{\infty} :=-i\,{x}\cdot d{\theta}+\frac{i}{4}\sum_{j,k=p+1}^{n}\left( \frac{\partial }{\partial x_{j}}\log\det D\right) \cdot D^{-1}_{j-p,k-p}d\theta_{k}, \end{align} and on the vertex chart $U_v$ \begin{align*} \Theta_{v}^{\infty} :=&-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}\\ &+\frac{i}{4}\sum_{i,j=1,q,l=p+1}^{n}\left( \frac{\partial }{\partial x_{v}^{j}}\log\det D\right) \cdot (A_{v})_{jq}D^{-1}_{q-p,l-p}(A_{v}^{T})_{l,i}d\theta_{v}^{i}. \end{align*} \end{lemma} \begin{proof} Fix a choice of symplectic potential $g$ for the complex structure $I$ on $M$. According to equations (\ref{conn1}) and (\ref{conn2}), we have the connection $\nabla^{I}$ on $L$ defined by \begin{align*} \Theta_{v} & :=\frac{\nabla^{I}{\mathbf{1}}_{v}^{U(1)}}{{\mathbf{1}}_{v}^{U(1)}} =-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}+\frac {i}{4}\left( \frac{\partial}{\partial x_{v}}\log\det G_{v}\right) \cdot G_{v}^{-1}d\theta_{v}\\ & =-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\mathrm{Im}\left(\partial\log\det G_{v}+\sum_{k=1}^{n}dz_{v}^{k}\right). \end{align*} For the symplectic potential $g_{s}$ for the complex structure $I_{s}$ on $M$, the connection forms on the open orbit $\mathring M$ are specified by \begin{align*} \Theta_{0}^{s} & :=-i\,{x}\cdot d{\theta}+\frac{i}{4}\left( \frac{\partial }{\partial x}\log\det G_{s}\right) \cdot G_{s}^{-1}d\theta\\ & =-i\,{x}\cdot d{\theta}+\frac{i}{2}\mathrm{Im}\partial\log\det G_{s}, \end{align*} where $$ G_s = \left[ \begin{array}{cc} A_1 + sI_p & A_2 \\ A_3 & D \end{array} \right], $$ with $A_1$ being the top left $p\times p$ block of $G=\mathrm{Hess}\,g$ and $D$ being the invertible lower right $(n-p) \times (n-p)$ block.
Moreover, $\det G_{s} = s^{p}\det D+O(s^{p-1})$, and $\lim_{s\rightarrow \infty}G_{s}^{-1}= \left[ \begin{array}{cc} 0 & 0 \\ 0 & D^{-1} \end{array} \right].$ We therefore obtain: \begin{align*} \Theta_{0}^{\infty} & :=-i\,{x}\cdot d{\theta}+\frac{i}{4}\sum_{j,k=p+1}^{n}\left( \frac{\partial }{\partial x_{j}}\log\det D\right) \cdot D^{-1}_{j-p,k-p}d\theta_{k}. \end{align*} Similarly, on the vertex chart, we have: \begin{align*} &\Theta_{v}^{\infty} :=-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}+\frac{i}{4}\left( \frac{\partial }{\partial x_{v}}\log\det D\right) \cdot A_{v}G_{\infty}^{-1}A_{v}^{T}d\theta_{v}\\ &=-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}\\ &+\frac{i}{4}\sum_{i,j=1,q,l=p+1}^{n}\left( \frac{\partial }{\partial x_{v}^{j}}\log\det D\right) \cdot (A_{v})_{jq}D^{-1}_{q-p,l-p}(A_{v}^{T})_{l,i}d\theta_{v}^{i}. \end{align*} \end{proof} We therefore define $B_{\mathcal{P}_{\infty}}$, the space of $\mathcal{P}_\infty$-polarized sections, to be given by the intersection of the kernels of the differential operators $$ \nabla^\infty_{\frac{\partial}{\partial \theta^j}} = \frac{\partial}{\partial \theta^j} + \Theta_0^\infty \left(\frac{\partial}{\partial \theta^j}\right), \, j=1, \dots, p, $$ and $$ \nabla^\infty_{X_{z^j_0}} = X_{z^j_0} + \Theta_0^\infty \left(X_{z^j_0}\right), \, j=p+1, \dots, n.
$$ \begin{dfn}\label{correctedQS} The quantum Hilbert space for the half-form corrected mixed polarization $\mathcal{P}_{\infty}$ is defined by $$ \mathcal{H}_{\mathcal{P}_{\infty}} =B_{\mathcal{P}_{\infty}} \otimes \sqrt{|dX_{1}^{p}\wedge dZ_{p+1}^{n}|}, $$ where $$B_{\mathcal{P}_{\infty}}=\{\sigma \in \Gamma(M, L^{-1})' \mid \nabla^{\infty}_{\xi} \sigma=0, \forall \xi \in \Gamma(M, \mathcal{P}_{\infty})\}.$$ \end{dfn} Note that, in terms of the vertex chart coordinates, $$ \frac{\partial}{\partial \theta_k} = \sum_{j=1}^n (A_v^{-1})_{jk} \frac{\partial}{\partial \theta_v^j} $$ and $$ X_{z^k_0} = \sum_{j=1}^n (A_v)_{jk} X_{z^j_{v,0}}. $$ Therefore, on the vertex chart $U_v$, the conditions of $\mathcal{P}_\infty$-polarizability are given by the intersection of the kernels of $$ \sum_{k=1}^n (A_v^{-1})_{kj} \left(\frac{\partial}{\partial \theta_v^k} + \Theta_v^\infty\left(\frac{\partial}{\partial \theta_v^k}\right)\right), \, j=1, \dots, p, $$ and $$ \sum_{k=1}^n (A_v)_{kj} \left(X_{z^k_{v,0}} + \Theta_v^\infty\left(X_{z^k_{v,0}} \right) \right), \, j= p+1, \dots, n.
$$ \begin{lemma} \label{outsidebdy} There are no nonzero solutions of the equations of covariant constancy along $\mathcal{P}_\infty$ with support contained along $\mu_{p}^{-1}(\partial \Delta).$ \end{lemma} \begin{proof} We have the limit connection as before: on the open orbit $\mathring M$, \begin{align*} \Theta_{0}^{\infty} :=-i\,{x}\cdot d{\theta}+\frac{i}{4}\sum_{j,k=p+1}^{n}\left( \frac{\partial }{\partial x_{j}}\log\det D\right) \cdot D^{-1}_{j-p,k-p}d\theta_{k}, \end{align*} and on the vertex chart $U_v$ \begin{align*} \Theta_{v}^{\infty} :=&-i\,{x}_{v}\cdot d{\theta}_{v}+\frac{i}{2}\sum_{k=1}^n d\theta_{v}^{k}\\ &+\frac{i}{4}\sum_{i,j=1,q,l=p+1}^{n}\left( \frac{\partial }{\partial x_{v}^{j}}\log\det D\right) \cdot (A_{v})_{jq}D^{-1}_{q-p,l-p}(A_{v}^{T})_{l,i}d\theta_{v}^{i}. \end{align*} From the expression for $\Theta_v^\infty$ we see that, for $k=1, \dots, p,$ $$ \Theta^\infty_v \left( \frac{\partial}{\partial \theta^k}\right) = -i \sum_{j=1}^n x_v^j (A_v^{-1})_{jk} + \frac{i}{2} \sum_{j=1}^n (A_v^{-1})_{jk}, $$ since the last term in the expression for $\Theta^\infty_v$ does not contribute. Assume $\delta$ is a distributional section with support on $\mu_{p}^{-1}(\partial \Delta)$. Let us consider a solution with support along $\mu_{p}^{-1}(x_{v}^{j}=0)$, for some fixed $j=1, \cdots, p$. Here $(x_{v}^{1}, \cdots, x_{v}^{p})$ are vertex coordinates for $\Delta$. Let $\check{u}=(u_{1},\cdots,u_{j-1},u_{j+1}, \cdots, u_{n})$ and $\check{v}=(v_{1},\cdots,v_{j-1},v_{j+1}, \cdots, v_{n})$. We take coordinates $(u_{j},v_{j},\check{u},\check{v})$ (see \cite{BFMN}), so that $x_{v}^{j}=0 \Leftrightarrow (u_{j},v_{j})=(0,0)$ and $$\nabla^{\infty}_{\frac{\partial}{\partial \theta_{v}^{j}}}=-i(-v_{j}\frac{\partial}{\partial u_{j}}+u_{j}\frac{\partial}{\partial v_{j}}) + \frac{i}{2}$$ in that neighbourhood. By the same argument as in the proof of \cite[Theorem 4.7]{BFMN}, $\nabla_{\frac{\partial}{\partial \theta_{v}^{j}}}^{\infty}\delta=0$ implies $\delta=0$.
Therefore, there are no nonzero solutions with support along $\mu_{p}^{-1}(\partial \Delta)$. \end{proof} \begin{thm}\label{thm_polsectionsinside} The distributional sections $\tilde{\sigma}^m_\infty, m\in P\cap \mathbb{Z}^n$, in (\ref{limitsections}), are in $\mathcal{H}_{\mathcal{P}_\infty}$. Moreover, any $\sigma \in \mathcal{H}_{\mathcal{P}_\infty}$ is a linear combination of the sections $\tilde{\sigma}^m_\infty, m\in P\cap \mathbb{Z}^n$. Therefore, the distributional sections $\{\tilde{\sigma}^m_\infty\}_{m\in P\cap \mathbb{Z}^n}$ form a basis of $\mathcal{H}_{\mathcal{P}_\infty}$. \end{thm} \begin{proof} Recall that $\tilde{\sigma}^m_\infty$ can be written on $\mathring M$ as \begin{align}\label{local} \tilde{\sigma}^m_\infty = \sqrt{2\pi}^p\delta^{m_{1}^p}W^m \sqrt{dX_{1}^p \wedge dZ_{p+1}^{n}}, \end{align} where $$\delta^{m^p_{1}} = \delta(x_{1}-m_{1})\cdots\delta(x_p-m_p)$$ is the Dirac delta distribution, and $$W^m = e^{-\kappa}w_{p+1}^{m_{p+1}}\cdots w_{n}^{m_{n}}e^{i\sum_{j=1}^p m_j \theta_j}e^{\sum_{j=1}^p m_j\frac{\partial g}{\partial x_{j} }}.$$ Since $L\otimes\sqrt{|K_{\infty}|} \cong l \otimes K_{\infty}$ over $\mathring M$, and since any $\xi \in \Gamma(\mathring M, \mathcal{P}_{\infty})$ can be written as a linear combination of $\frac{\partial}{\partial \theta^{j}}$ and $\frac{\partial}{\partial \bar{z}_{k}}$, for $j=1,\cdots, p$ and $k=p+1, \cdots, n$, we find, by applying Equation (\ref{local}) and Lemma \ref{lemma_limitconn}, that $\tilde{\sigma}^m_\infty$ belongs to $\mathcal{H}_{\mathcal{P}_\infty}$. By Lemma \ref{outsidebdy}, there are no solutions supported on $\mu_{p}^{-1}(\partial \Delta)$. It remains to show that any $\sigma \in \mathcal{H}_{\mathcal{P}_\infty}$ can be expressed as a linear combination of the sections $\tilde{\sigma}^m_\infty, m\in P\cap \mathbb{Z}^n$.
The polarization $\mathcal{P}_\infty$ is mixed and over $\mathring M$ its real fibers are given by tori of dimension $p$ generated by the vector fields $\partial/\partial {\theta_j}, j=1, \dots, p.$ Therefore, distributional sections which are $\mathcal{P}_\infty$-polarized will be supported on the partial Bohr-Sommerfeld locus given by the conditions $(x_1, \dots, x_p)=(m_1, \dots ,m_p)\in \Delta \cap \mathbb{Z}^p$, where $\Delta$ is the projection of $P$ on the first $p$ momentum coordinates. Arguments analogous to those in the proof of Theorem 4.7 in \cite{KMN1} or Proposition 3.1 in \cite{BFMN} then show that the dependence of the polarized sections on the variables $(x_1, \dots, x_p)$ is given in terms of Dirac delta distributions and that, on the other hand, derivatives of Dirac delta distributions are not covariantly constant. In turn, by Theorem \ref{prop:evol-polarization}, the remaining directions in $\mathcal{P}_\infty$ over $\mathring M$ are just given by the holomorphic vector fields $\frac{\partial}{\partial \bar{z}^j_0}, j=p+1, \dots, n$, so the dependence on the variables $(w_{p+1}, \dots , w_{n})$ is given by products of monomials times an exponential of minus the restriction of the K\"ahler potential to the partial Bohr-Sommerfeld fiber, in the form of the term $W^m$ above. Note also that the section $\sqrt{dX_{1}^p \wedge dZ_{p+1}^{n}}$ of the half-form bundle $K_{\mathcal{P}_\infty}$ is $\mathcal{P}_\infty$-polarized since the functions $x_1, \dots, x_p, z_{p+1}, \dots, z_{n}$ on $\mathring M$ are $\mathcal{P}_\infty$-polarized. Therefore, any $\sigma \in \mathcal{H}_{\mathcal{P}_\infty}$ is a linear combination of the sections $\tilde{\sigma}^m_\infty, m\in P\cap \mathbb{Z}^n$.
\end{proof} \begin{dfn}For any $\sigma_{0} \in \mathcal{H}_{\mathcal{P}_{0}}$, we define $$U_{\infty}(\sigma_{0})=\lim_{s \to \infty} U_{s}(\sigma_{0}).$$ \end{dfn} According to Theorem \ref{convsections} and Theorem \ref{thm_polsectionsinside}, we obtain the following result. \begin{cor}\label{corQSconnection} The map $U_{\infty}: \mathcal{H}_{\mathcal{P}_{0}} \rightarrow \mathcal{H}_{\mathcal{P}_\infty}$ is a $T^n$-equivariant isomorphism. In particular, $$U_{\infty}(\sigma_{0}^{m})=\lim_{s \to \infty} \tilde{\sigma}_{s}^{m}= \tilde{\sigma}^{m}_{\infty},$$ where $\{\sigma_{0}^{m}\}_{m\in P \cap \mathbb{Z}^{n}}$ is the canonical basis of $\mathcal{H}_{\mathcal{P}_{0}}$, and $\tilde{\sigma}_{s}^{m}=U_{s}(\sigma_{0}^{m})$. \end{cor} \subsection{Quantization commutes with reduction: asymptotic unitarity} \label{subsecqcr} Let us consider the half-form corrected holomorphic section $\sigma^m_s$ with respect to the complex structure $I_s$. (Note that, due to the integration along $T^n$, $\sigma^m_s$ is orthogonal to $\sigma^{m'}_s$ for $m\neq m'$.) From equation (4.7) in \cite{KMN1}, the norm squared of the half-form corrected section $\sigma^m_s$ is $$ \vert\vert \sigma^m_s\vert\vert^2_{L^2} = \int_P e^{-2s(\sum_{j=1}^p(x^j-m^j)^2-H)} e^{-2(\sum_{j=1}^n (x^j-m^j)y^j-g_P)} (\det G_s)^\frac12 dx^1\cdots dx^n, $$ where $y= \partial g / \partial x.$ Let $U_{s}:\mathcal{H}_{\mathcal{P}_{0}} \to \mathcal{H}_{\mathcal{P}_{s}}$ be the generalized coherent state transform (gCST) defined as before (see equation (\ref{gcst})). We denote $U_{s}(\sigma^{m}_{0})$ by $\tilde{\sigma}^{m}_{s}$. Then we have \begin{thm}\label{lemma-norms} Let $\{\tilde{\sigma}^{m}_{s}:= U_{s}(\sigma_{0}^{m})\}_{m \in P \cap \mathbb{Z}^{n}}$ be the basis of $\mathcal{H}_{\mathcal{P}_{s}}$.
Then we have: $$\lim_{s \to \infty}\vert\vert \tilde{\sigma}^m_s\vert\vert_{L^2}^2 = c_m \pi^{p/2},$$ where the constant $c_m$ is given by \begin{align}\label{coefficient} c_m = \int_{P} \left(\Pi_{j=1}^p\delta(x^j-m^j)\right) e^{-2((x-m)\cdot y -g)} (\det D)^\frac12dx^1\cdots dx^n.\hspace{0.5cm} \end{align} \end{thm} \begin{proof} By the proof of Lemma 4.12 in \cite{KMN1}, we have, as $s\to \infty$, $$ \det G_s \sim s^{p} \det D, $$ where $G_{s}=\mathrm{Hess}\,g_{s}= \left[ \begin{array}{cc} A_1 + sI_p & A_2 \\ A_3 & D \end{array} \right]$. By equation (4.7) in \cite{KMN1}, the norm squared of the half-form corrected section $\sigma^m_s$ is $$ \vert\vert \sigma^m_s\vert\vert^2_{L^2} = \int_P e^{-2s(\sum_{j=1}^p(x^j-m^j)^2-H)} e^{-2(\sum_{j=1}^n (x^j-m^j)y^j-g)} (\det G_s)^\frac12 dx^1\cdots dx^n, $$ where $y= \partial g/ \partial x.$ As $s\to \infty$, the Laplace approximation (see \cite[Lemma 4.1]{KMN1}) for the integration on the variables $x^1, \dots, x^p,$ gives, asymptotically, $$ \vert\vert \sigma^m_s\vert\vert_{L^2}^2 \sim \left(\frac{2\pi}{s}\right)^{p/2} \frac{e^{2sH(m)} s^{p/2} }{2^{p/2}} c_m = \pi^{p/2} c_m e^{2sH(m)}.
$$ By Theorem \ref{local-description}, we have $$\vert\vert \tilde{\sigma}^m_s\vert\vert^2_{L^2}=\vert\vert \sigma^m_s\vert\vert^2_{L^2}e^{-2sH(m)}.$$ Therefore, we conclude: $$\lim_{s \to \infty}\vert\vert \tilde{\sigma}^m_s\vert\vert_{L^2}^2 = c_m \pi^{p/2}.$$ \end{proof} In view of Theorem \ref{local-description}, Theorem \ref{convsections} and Theorem \ref{lemma-norms}, it is then natural to consider the following \begin{dfn}\label{def-hermitian} We define a hermitian structure $\vert\vert\cdot\vert\vert_{(\infty)}$ on the quantum space $\mathcal{H}_{\mathcal{P}_\infty}=\mathrm{span}\{ \tilde{\sigma}_{\infty}^{m} \}_{m\in P \cap \mathbb{Z}^{n}}$ such that the basis $\{ \tilde{\sigma}_{\infty}^{m} \}_{m\in P \cap \mathbb{Z}^{n}}$ is orthogonal and $$\vert\vert \tilde{\sigma}_{\infty}^{m} \vert\vert_{(\infty)}^{2}=c_m \pi^{p/2},$$ where the constant $c_m$ is given by (\ref{coefficient}). \end{dfn} Recall that, as mentioned in Section \ref{newsubsec}, from \cite{CDG}, the restriction of the symplectic potential to a level set of the moment map defines a toric symplectic potential on the corresponding symplectic reduction. Thus, for a given K\"ahler reduction $M_{\underline m}= \mu_p^{-1}(m_1,\dots, m_p)/T^p$, where $\underline m =(m_1, \dots , m_p)$, we have a natural toric K\"ahler structure defined by taking as symplectic potential the restriction of $g$ to the level set $\mu_p^{-1}(m_1, \dots, m_p)$. For such a symplectic reduction $M_{\underline m}$, equipped with this K\"ahler structure, consider the corresponding half-form corrected K\"ahler quantization, giving the collection of monomial sections $\left\{\sigma^{(\underline m, m')}\right\}_{(\underline m,m')\in P\cap \mathbb{Z}^n}$ and the corresponding Hilbert space $\mathcal{H}_{M_{\underline m}}$, given by the standard half-form corrected inner product, as described for symplectic toric manifolds in Section \ref{section-prelim}.
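The asymptotics $\det G_s \sim s^p \det D$ and $G_s^{-1} \to \mathrm{diag}(0, D^{-1})$, used in the proof of Theorem \ref{lemma-norms}, can also be observed numerically. The following sketch uses a randomly generated positive definite matrix as a stand-in for the Hessian $G$ (an arbitrary illustrative choice, not tied to any particular symplectic potential):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
B = rng.standard_normal((n, n))
G = B @ B.T + n * np.eye(n)      # positive definite stand-in for Hess g
D = G[p:, p:]                    # invertible lower right (n-p) x (n-p) block

s = 1e8
G_s = G.copy()
G_s[:p, :p] += s * np.eye(p)     # G_s = G + s * diag(I_p, 0)

# det G_s ~ s^p det D as s -> infinity
det_ratio = np.linalg.det(G_s) / (s**p * np.linalg.det(D))

# G_s^{-1} -> block-diag(0, D^{-1})
G_s_inv = np.linalg.inv(G_s)
block_err = np.linalg.norm(G_s_inv[p:, p:] - np.linalg.inv(D))
corner_err = np.linalg.norm(G_s_inv[:p, :])

print(det_ratio, block_err, corner_err)  # ratio near 1, errors near 0
```

For $s$ of order $10^8$ the deviations are of order $1/s$, matching the error term $O(s^{p-1})$ in the determinant expansion.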
Note that even in the case when $M_{\underline m}$ is singular, we are considering only the monomial sections corresponding to integral points in that level set and the (singular) toric reduced K\"ahler structure which is smooth on the open dense orbit in $M_{\underline m}$. As a consequence of Theorem \ref{lemma-norms} and Definition \ref{def-hermitian}, we then obtain that the quantization commutes with reduction correspondence, between the quantization in the limit mixed polarization and the K\"ahler reductions given by integral values of $\mu_p$, is, in fact, unitary. As described in Section \ref{newsubsec}, this is to be naturally expected given the properties of $\mathcal{P}_\infty$. \begin{thm}\label{thmqcr} The natural $T^n$-equivariant linear isomorphism \begin{eqnarray}\nonumber \mathcal{H}_{\mathcal{P}_\infty} &\to& \bigoplus_{\underline m\in \Delta\cap\mathbb{Z}^p}\mathcal{H}_{M_{\underline m}}\\ \nonumber \tilde \sigma^m_\infty & \mapsto & \sigma^{\underline m, m'}, \end{eqnarray} for $(m_1, \dots, m_p)=\underline m$ and $m=(\underline m, m')\in P\cap \mathbb{Z}^n$, is unitary up to the overall constant $\pi^{p/2}$. \end{thm} \begin{proof} Let $m=(\underline m, m')\in P\cap \mathbb{Z}^n$. The norm of $\tilde \sigma^m_\infty$ is given in Definition \ref{def-hermitian}. In the expression (\ref{coefficient}) for $c_m$, the integral over $dx^1\cdots dx^p$ localizes the integrand on $x^j=m^j, j=1, \dots, p$. In the exponent, then, only partial derivatives of $g_P$ with respect to $(x^{p+1}, \dots, x^{n})$ survive, corresponding to the term $(x-m)\cdot y$. Likewise, the matrix $D$ gives the matrix of second derivatives of $g_P$ with respect to $(x^{p+1}, \dots, x^n)$.
Thus, after integrating along $dx^1\cdots dx^p$, one obtains precisely the expression for the half-form corrected $L^2$ norm of $\sigma^{(\underline m, m')}$ on the reduction $M_{\underline m}$ for the symplectic potential given by the restriction of $g$ to $\mu_p^{-1}(\underline m)$, as in (\ref{reducedsections}). (Compare with the general expression for the $L^2$ norm of the monomial sections just before Theorem \ref{lemma-norms}.) \end{proof} Along the Mabuchi geodesic ray defined by the symplectic potentials $g_s = g +sH, s>0$, for finite $s$, the quantization commutes with reduction correspondence is not unitary, a fact that can be easily checked. However, in the infinite geodesic limit $s\to \infty$, one obtains a natural hermitian structure on the Hilbert space of distributional polarized sections with respect to $\mathcal{P}_\infty$ which gives, as just described, a natural $T^n$-equivariant unitary isomorphism $$ \mathcal{H}_{\mathcal{P}_\infty} \cong \bigoplus_{\underline m \in \Delta\cap \mathbb{Z}^p} \mathcal{H}_{M_{\underline m}}. $$ Note, moreover, that this unitary correspondence holds for all quantizable levels of $\mu_p$ simultaneously, without the need to adjust overall constants at each level in order to achieve unitarity. We thus obtain, in the spirit of Section \ref{newsubsec}, \begin{cor}\label{cor_qrunitary} The quantization commutes with reduction correspondence for the toric mixed polarization $\mathcal{P}_\infty$ is unitary (up to an overall constant). \end{cor} The fact that the quantization commutes with reduction correspondence, relative to the $T^p$ action, is not unitary for the initial K\"ahler polarization $\mathcal{P}_0$ can then be reformulated more naturally as the statement that the quantizations with respect to $\mathcal{P}_0$ and to $\mathcal{P}_\infty$ are not unitarily equivalent.
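Spelling this out at the level of norms, combining Definition \ref{def-hermitian} with the computation in the proof of Theorem \ref{thmqcr},
\begin{align*}
\big\Vert \tilde{\sigma}^m_\infty \big\Vert_{(\infty)}^2 = \pi^{p/2}\, c_m = \pi^{p/2}\, \big\Vert \sigma^{\underline m, m'} \big\Vert_{L^2(M_{\underline m})}^2, \qquad m=(\underline m, m')\in P\cap\mathbb{Z}^n,
\end{align*}
with the same overall factor $\pi^{p/2}$ for every $m$ and every level $\underline m$, which is precisely the content of Corollary \ref{cor_qrunitary}.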
\begin{rmk} Note that in the case when $M_{\underline m}$ is singular, as described in Section \ref{newnewsub}, we are considering that the K\"ahler quantization is obtained by the monomial sections corresponding to the integral points of $P$ in that $\mu_p$ level set. \end{rmk} \begin{rmk} Note that in this paper, for simplicity, we have described the family of toric K\"ahler structures along the Mabuchi geodesic generated by the Hamiltonian $H$, which is quadratic in $(x^1, \dots, x^p).$ In fact, the results generalize straightforwardly to Mabuchi geodesics generated by Hamiltonians which are strictly convex in $(x^1,\dots, x^p)$ but not necessarily quadratic like $H$. (See e.g.\ \cite{BFMN} and the proof of Lemma 4.12 in \cite{KMN1}.) For Mabuchi geodesics starting at the same symplectic potential, both the limit polarization $\mathcal{P}_\infty$ and the hermitian structure defined above are then natural, in the sense that $\mathcal{P}_\infty$ is also obtained in the limit of infinite geodesic time for this larger family of geodesics and that the result in Theorem \ref{lemma-norms} also holds, so that the hermitian structure naturally induced on $\mathcal{H}_{\mathcal{P}_\infty}$ is the same as above. Note that the corresponding gCST (see \cite{KMN4}) will then satisfy $$ \lim_{s\to \infty} \vert\vert U_s \left(\sigma^m_0\right)\vert\vert_{L^{2}}^{2} = \vert\vert\tilde{\sigma}^m_\infty\vert\vert_{(\infty)}^{2}. $$ \end{rmk} \section{Acknowledgements} The authors would like to thank Thomas Baier for many discussions on the topics of the paper. The authors were supported by the projects CAMGSD UIDB/04459/2020 and CAMGSD UIDP/04459/2020. A.P. held an FCT LisMath PhD fellowship PD/BD/135528/2018. \begin{thebibliography}{99} \bibitem{Ab2} M. Abreu, {\em K\"ahler geometry of toric manifolds in symplectic coordinates}, in "Symplectic and Contact Topology: Interactions and Perspectives" (eds. Y. Eliashberg, B. Khesin and F.
Lalonde), Fields Institute Communications 35, Amer. Math. Soc., 2003. \bibitem{AH} A. A'Campo-Neuen and J. Hausen, {\em Quotients of toric varieties by the action of a subtorus}, Tohoku Math. J. 51 (1999), 1-12. \bibitem{An2} J. E. Andersen, {\em Geometric quantization of symplectic manifolds with respect to reducible non-negative polarizations}, Comm. Math. Phys. 183 (2), (1997) 401-421. \bibitem{BFHMN} T. Baier, A. C. Ferreira, J. Hilgert, J. M. Mour\~{a}o and J. P. Nunes, {\em Fibering polarizations and Mabuchi rays on symmetric spaces of compact type}, Anal. Math. Phys. 15, 21 (2025). \bibitem{BFMN} T. Baier, C. Florentino, J. M. Mour\~{a}o and J. P. Nunes, {\em Toric K\"ahler metrics seen from infinity, quantization and compact tropical amoebas}, J. Diff. Geom. 89 (3), (2011) 411-454. \bibitem{BHKMN} T. Baier, J. Hilgert, O. Kaya, J. M. Mour\~ao and J. P. Nunes, {\em Quantization in fibering polarizations, Mabuchi rays and geometric Peter--Weyl theorem}, J. Geom. Phys. 207 (2025) 105355. \bibitem{CDG} D. Calderbank, L. David and P. Gauduchon, {\em The Guillemin formula and K\"ahler metrics on toric symplectic manifolds}, J. Symplectic Geom. 1 (4), (2003) 767-784. \bibitem{CLL} K. Chan, N. C. Leung and Q. Li, {\em Quantizable functions on K\"ahler manifolds and non-formal quantization}, Adv. Math. 433 (2023) 109293. \bibitem{CJH} D. A. Cox, J. B. Little and H. K. Schenck, {\em Toric varieties}, Amer. Math. Soc., 2011. \bibitem{FMMN1} C. Florentino, P. Matias, J. M. Mour\~{a}o and J. P. Nunes, {\em Geometric quantization, complex structures and the coherent state transform}, J. Funct. Anal. 221 (2005), 303-322. \bibitem{Gui2} V. Guillemin, {\em K\"ahler structures on toric varieties}, J. Diff. Geom. 40 (1994) 285-309. \bibitem{GL} P. M. Gruber and C. G. Lekkerkerker, {\em Geometry of numbers}, North-Holland, 1987. \bibitem{GMW} V. Guillemin, E. Miranda and J. Weitsman, {\em On geometric quantization of b-symplectic manifolds}, Adv. Math.
331 (2018), 941-951. \bibitem{GS1} V. Guillemin and S. Sternberg, {\em Geometric Quantization and Multiplicities of Group Representations}, Invent. Math. 67 (3) (1982), 515-538. \bibitem{GS3} V. Guillemin and S. Sternberg, {\em The Gelfand-Cetlin system and quantization of the complex flag manifolds}, J. Funct. Anal. 52 (1983) 106-128. \bibitem{Hal} B. C. Hall, {\em Geometric quantization and the generalized Segal-Bargmann transform for Lie groups of compact type}, Comm. Math. Phys. 226 (2002) 233-268. \bibitem{HK1} B. C. Hall and W. D. Kirwin, {\em Unitarity in ``Quantization Commutes with Reduction''}, Comm. Math. Phys. 275 (2007) 401-442. \bibitem{HK2} B. C. Hall and W. D. Kirwin, {\em Complex structures adapted to magnetic flows}, J. Geom. Phys. 90 (2015) 111-131. \bibitem{JW} L. Jeffrey and J. Weitsman, {\em Bohr-Sommerfeld orbits in the moduli space of flat connections and the Verlinde dimension formula}, Comm. Math. Phys. 150 (1992), 593-630. \bibitem{KMN1} W. D. Kirwin, J. M. Mour\~{a}o and J. P. Nunes, {\em Degeneration of K\"ahler structures and half-form quantization of toric varieties}, J. Symplectic Geom. 11 (2013), 603-643. \bibitem{KMN4} W. D. Kirwin, J. M. Mour\~{a}o and J. P. Nunes, {\em Complex symplectomorphisms and pseudo-K\"ahler islands in the quantization of toric manifolds}, Math. Ann. 364 (1-2) (2016), 1-28. \bibitem{KW} W. D. Kirwin and S. Wu, {\em Geometric quantization, parallel transport and the Fourier transform}, Comm. Math. Phys. 266 (2006) 577-594. \bibitem{LW1} N. C. Leung and D. Wang, {\em Geodesic rays in the space of K\"ahler metrics with T-symmetry}, Adv. Math. 450 (2024) 109756. \bibitem{LW2} N. C. Leung and D. Wang, {\em Geometric quantizations of mixed polarizations on K\"ahler manifolds with $T$-symmetry}, arXiv: 2301.01011. \bibitem{LW3} N. C. Leung and D. Wang, {\em Limit of geometric quantizations on K\"ahler manifolds with T-symmetry}, Proc. Lond. Math. Soc. 129 (4) (2024), e12642. \bibitem{LY1} N. C. Leung and Y.
Yau, {\em Deformation quantization via Toeplitz operators on geometric quantization in real polarizations}, Comm. Math. Phys. 397 (2023) 875-900. \bibitem{M} T. Mabuchi, {\em Some symplectic geometry on compact K\"ahler manifolds I}, Osaka J. Math. 24 (2), (1987) 227-252. \bibitem{MN} J. M. Mour\~{a}o and J. P. Nunes, {\em On complexified analytic Hamiltonian flows and geodesics on the space of K\"ahler metrics}, Int. Math. Res. Not. 2015 (20) (2015), 10624-10656. \bibitem{MNP} J. M. Mour\~{a}o, J. P. Nunes and M. B. Pereira, {\em Partial coherent state transforms, $G \times T$-invariant K\"ahler structures and geometric quantization of cotangent bundles of compact Lie groups}, Adv. Math. 368 (2020) 107139. \bibitem{MNR} J. M. Mour\~{a}o, J. P. Nunes and T. Reis, {\em A new approximation method for geodesics on the space of K\"ahler metrics}, Anal. Math. Phys. 9 (2019) 1927-1939. \bibitem{P} A. Pereira, {\em Applications of flows in imaginary time to quantization}, PhD thesis, Instituto Superior T\'ecnico, University of Lisbon, 2022. \bibitem{SZ} J. Song and S. Zelditch, {\em Bergman metrics and geodesics in the space of K\"ahler metrics on toric varieties}, Anal. PDE 3 (3) (2010), 295-358. \bibitem{W1} D. Wang, {\em Quantum spaces associated to mixed polarizations and their limiting behavior on toric varieties}, arXiv: 2410.17130. \bibitem{Wo} N. M. J. Woodhouse, {\em Geometric quantization}, Second Edition, Clarendon Press, Oxford, 1991. \end{thebibliography} \end{document}
\documentclass[12pt, reqno, english]{amsart} \usepackage{amsmath, amsthm, amssymb, color, xcolor} \usepackage[colorlinks=true,citecolor=red,linkcolor=blue,urlcolor=blue]{hyperref} \usepackage{graphicx} \usepackage{comment} \usepackage{caption} \usepackage{bold-extra} \usepackage{mathtools} \usepackage{enumerate} \usepackage{bm} \usepackage{rotating} \usepackage{mathrsfs} \usepackage{verbatim} \usepackage{tikz, tikz-cd, tikz-3dplot} \usepackage{amssymb} \usepackage{secdot} \usepackage{algorithm} \usepackage[noend]{algpseudocode} \usepackage{caption} \usepackage[normalem]{ulem} \usepackage{subcaption} \usepackage{multicol} \usepackage{makecell} \usepackage{array} \usepackage{enumitem} \usepackage{adjustbox} \usepackage{blkarray} \usepackage[top=25mm, bottom=25mm, left=25mm, right = 25mm]{geometry} \usepackage{cleveref} \usepackage{lineno} \usepackage{enumitem} \usepackage{titlesec} \usetikzlibrary{matrix} \usetikzlibrary{arrows} \usetikzlibrary{decorations.pathmorphing} \usetikzlibrary{patterns} \titleformat{\section} {\centering \fontsize{12}{17} \large \bf \scshape }{\thesection}{0mm}{. \hspace{0.00mm}} \titleformat{\subsection}[runin] {\fontsize{12}{17} \bf}{\thesubsection}{0mm}{. 
\hspace{0.00mm}}[.\\] \newtheorem{theorem}{Theorem}[section] \newtheorem{assumption}{Assumption} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{algo}[theorem]{Algorithm} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{problem}[theorem]{Problem} \newtheorem{remark}[theorem]{Remark} \newtheorem{cor}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem*{claim}{Claim} \newcommand{\Pf}{\mathrm{Pf}} \newcommand{\PP}{\mathbb{P}} \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\NN}{\mathbb{N}} \renewcommand{\SS}{\mathbb{S}} \newcommand{\Id}{\mathrm{Id}} \newcommand{\Gr}{\mathrm{Gr}} \newcommand{\OGr}{\mathrm{OGr}} \newcommand{\Ical}{\mathcal{I}} \newcommand{\Pcal}{\mathcal{P}} \newcommand{\Qcal}{\mathcal{Q}} \newcommand{\Lcal}{\mathcal{L}} \newcommand{\Rcal}{\mathcal{R}} \newcommand{\Span}{\mathrm{span}} \newcommand{\SO}{\mathrm{SO}} \newcommand{\Spin}{\mathrm{Spin}} \newcommand{\SL}{\mathrm{SL}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\LGP}{\mathrm{LGP}} \newcommand{\rowspan}{\mathrm{rowspan}} \renewcommand{\mod}{\mathrm{\ mod \ }} \newcommand{\jon}[1]{{\tt \textcolor{red}{Jon: #1}}} \newcommand{\veronica}[1]{{\tt \textcolor{blue}{Veronica: #1}}} \newcommand{\yassine}[1]{{\tt \textcolor{orange}{Yassine: #1}}} \definecolor{dgreen}{HTML}{026a10} \definecolor{dviolet}{HTML}{9109E3} \definecolor{dorange}{HTML}{e55700} \DeclareMathOperator{\sgn}{sgn} \renewcommand{\tilde}{\widetilde} \usepackage{nicematrix} \title{\bf Totally positive skew-symmetric matrices} \author[J. Boretsky]{Jonathan Boretsky} \address{Jonathan Boretsky (MPI MiS)} \email{[email protected]} \author[V. 
Calvo Cortes]{Veronica Calvo Cortes} \address{Veronica Calvo Cortes (MPI MiS)} \email{[email protected]} \author[Y. El Maazouz]{Yassine El Maazouz} \address{Yassine El Maazouz (Caltech)} \date{\today} \keywords{Orthogonal Grassmannian, Total positivity, Pfaffians, Skew-symmetric matrices, Spinors.} \subjclass{14M15, 15B48, 05E14.} \begin{document} \begin{abstract} A matrix is totally positive if all of its minors are positive. This notion of positivity coincides with the type A version of Lusztig's more general total positivity in reductive real-split algebraic groups. Since every nonzero skew-symmetric matrix has negative entries, skew-symmetric matrices are never totally positive in the classical sense. The space of skew-symmetric matrices is, however, an affine chart of the orthogonal Grassmannian \texorpdfstring{$\OGr(n,2n)$}{OGr(n,2n)}. Thus, we define a skew-symmetric matrix to be \emph{totally positive} if it lies in the \emph{totally positive orthogonal Grassmannian}. We provide a positivity criterion for these matrices in terms of a fixed collection of minors, and show that their Pfaffians have a remarkable sign pattern. The totally nonnegative orthogonal Grassmannian is a CW cell complex and is subdivided into \emph{Richardson cells}. We introduce a method to determine which cell a given point belongs to in terms of its associated matroid. \end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction} Let $n \geq 1$ be a positive integer and denote by $\SS_n = \SS_n(\RR)$ the $\binom{n}{2}$-dimensional real vector space of skew-symmetric $n \times n$ matrices with real entries. This article studies the semi-algebraic set $\SS_n^{> 0}\subset \SS_n$ of \emph{totally positive} skew-symmetric matrices. The latter are defined using total positivity of partial flag varieties in the sense of Lusztig \cite{Lusztig1}, as follows.
\smallskip Let $q$ be the non-degenerate symmetric bilinear form on $\RR^{2n}$ given by \begin{equation}\label{eq:quadForm} q(x,y) = \sum_{i=1}^{n} x_{i} y_{n+i} + \sum_{i=1}^{n} y_{i} x_{n+i}, \quad \text{for } x,y \in \RR^{2n}. \end{equation} In the standard basis $(e_{1}, \dots, e_{n}, f_{1}, \dots, f_{n})$ of $\RR^{2n}$, this bilinear form is given by the matrix \[ Q = \begin{bmatrix} 0 & \Id_n \\ \Id_n & 0 \end{bmatrix}, \] where $\Id_n$ is the $n \times n$ identity matrix. The \emph{orthogonal Grassmannian} is the variety of $n$-dimensional vector subspaces $V$ of $\RR^{2n}$ that are \emph{$q$-isotropic}, meaning that $q(v,w) = 0$ for any $v,w \in V$. Two distinguished points in this variety are the vector spaces \begin{equation}\label{eq:EFspaces} E := \Span( e_1, \dots, e_n) \quad \text{and} \quad F:= \Span(f_{1}, \dots, f_{n}). \end{equation} The orthogonal Grassmannian is a smooth algebraic variety embedded in $\mathbb{RP}^{\binom{2n}{n}-1}$ by Pl\"ucker coordinates. It has two isomorphic irreducible connected components of dimension $\binom{n}{2}$: \begin{align*} \OGr(n,2n) :=& \{ V \text{ $q$-isotropic} \colon \dim(V)=n,\; \dim(E \cap V) = n \mod 2\},\\ \OGr_{-}(n,2n) :=& \{ V \text{ $q$-isotropic} \colon \dim(V)=n,\; \dim(E \cap V) = n+1 \mod 2 \}. \end{align*} The Zariski open set in $\OGr(n,2n)$ where the Pl\"ucker coordinate $\Delta^{1, \dots, n }$ does not vanish is isomorphic to the affine space $\SS_n$. This isomorphism identifies $A \in \SS_n$ with the rowspan of the $n\times 2n$ matrix $\begin{bmatrix} \Id_n | A \end{bmatrix}$. We may also view $\OGr(n,2n)$ as the connected component of the identity in a parabolic quotient of the real special orthogonal group ${\rm SO}(2n)$. This is a connected reductive $\RR$-split algebraic group and therefore admits a \emph{totally positive part} in the sense of Lusztig \cite{Lusztig1}. \smallskip A key example of Lusztig positivity is the case of ${\rm SL}(n)$. 
A parabolic quotient of ${\rm SL}(n)$ is a \textit{flag variety} whose points are flags of linear subspaces. Such flags can be represented as row spans of matrices in ${\rm SL}(n)$. Lusztig's total positivity then matches the classical notion of total positivity: a flag is totally positive (resp. nonnegative) if it can be represented by a totally positive (resp. nonnegative) matrix, that is, one whose minors are all positive (resp. nonnegative). In general, the totally nonnegative part of a flag variety admits a nice topological structure that interplays well with matroid theory \cite{PositiveGeometries,GKL_Ball22,postnikov06}. Understanding these notions for other real reductive groups has become increasingly important, as positivity and positroid combinatorics gain relevance in the study of scattering amplitudes in quantum field theory \cite{ABCGPT, TheAmplituhedron,WilliamsICM}. \begin{definition} A skew-symmetric matrix $A \in \SS_n$ is \textit{totally nonnegative} (resp. \textit{totally positive}) if the rowspan of $\begin{bmatrix} \Id_n | A \end{bmatrix}$ is a point in the totally nonnegative region $\OGr^{\geq 0}(n,2n)$ (resp. the totally positive region $\OGr^{> 0}(n,2n)$) of $\OGr(n,2n)$. See \Cref{def:LusztigPositive} for more details. \end{definition} Given a skew-symmetric matrix, or more generally, a point in any partial flag variety, it is difficult to determine directly from the definition whether it is totally positive. Accordingly, positivity tests for certain partial flag varieties have been developed, for example in \cite{BFZIII, ChevalierPositivity}. However, these positivity criteria are sometimes not very explicit. Explicit tests for positivity have been described in type A \cite{BlochKarp,BossingerLi} and for certain flag varieties of types B and C \cite{BBEG24}.
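The classical criterion recalled above, positivity of all minors, is easy to check by brute force for small matrices. The following sketch is ours, not from the paper (the function name is our own); it confirms total positivity for a Vandermonde matrix with increasing positive nodes, a standard totally positive example, and shows why a skew-symmetric matrix always fails: its diagonal minors vanish.

```python
import itertools
import numpy as np

def all_minors_positive(M):
    """Classical total positivity: every minor of M is positive."""
    M = np.asarray(M, dtype=float)
    n, m = M.shape
    for k in range(1, min(n, m) + 1):
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(m), k):
                if np.linalg.det(M[np.ix_(rows, cols)]) <= 0:
                    return False
    return True

# Vandermonde matrix with nodes 1 < 2 < 3: a classical totally positive matrix.
V = np.array([[1, 1, 1], [1, 2, 4], [1, 3, 9]])
# A nonzero skew-symmetric matrix can never pass: its diagonal entries are 0.
S = np.array([[0, 1], [-1, 0]])
```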
In this article we give an explicit and minimal positivity test for a skew-symmetric matrix $A$ in terms of its minors, which mirrors the fact that total positivity on $\SL(n)$ is determined by the positivity of minors. \begin{definition}\label{def:SpecialMinorsPfaff} For any $n \times n$ matrix $A$ we denote by $\Delta_{I}^J(A)$ the determinant of the submatrix of $A$ in rows $I$ and columns $J$. We denote by $M_{j,k}(A)$ the signed minor: \begin{equation}\label{eq:SpecialMinors} M_{j,k}(A) = (-1)^{jk} \Delta_{\{1,\ldots,n-k-1,n-k+j, \ldots, n \}}^{\{1,2,\ldots, n-j\}}(A) \qquad\text{for any } 1 \leq j \leq k \leq n-1. \end{equation} Note that the minor $M_{j,k}$ is a polynomial of degree $n-j$ in the entries of $A$. It corresponds up to a sign to a left-justified minor where the rows are indexed by the complement of an interval, as illustrated by the shaded region in \Cref{fig:Minor}. \end{definition} \begin{figure}[ht] \centering \scalebox{0.881}{ \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (135,44.4) -- (135,181.6) ; \draw (270.6,44.8) -- (270.6,182) ; \draw (135,44.4) -- (270.6,44.8) ; \draw (135,181.6)-- (270.6,182) ; \draw (135.8,36) -- (243.4,36) ; \draw [shift={(243.4,36)}, rotate = 180] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ; \draw [shift={(135.8,36)}, rotate = 180] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ; \draw (126.6,80.8) -- (126.2,143.6) ; \draw [shift={(126.2,143.6)}, rotate = 270.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ; \draw [shift={(126.6,80.8)}, rotate = 270.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ; \path[pattern color=black, pattern=north west lines] (135,44.4) -- (243.4,44.4) -- (243.4,81.6) -- (135,81.6) -- cycle ; \path[pattern color=black, pattern=north west lines] (135,144.4)
-- (243.4,144.4) -- (243.4,181.6) -- (135,181.6) -- cycle ; \draw (228.4,16.6) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$n-j$}; \draw (131.2,17.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$1$}; \draw (80,74.6) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$n-k$}; \draw (38.4,136.2) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$n-k+j-1$}; \path[draw, ->, decorate, decoration ={snake, amplitude = 1.5}] (280,115) -- (320,115); \draw (325,105) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$( -1)^{jk} \ M_{j,k}(A)$}; \draw (196.8,102.2) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$A$}; \end{tikzpicture} } \caption{The shading indicates which minor of the matrix $A$ is used to compute $M_{j,k}$.} \label{fig:Minor} \end{figure} \begin{example}[$n=4$]\label{ex:n=4Minors} The minors $M_{j,k}(A)$ for $1 \leq j \leq k \leq 3$ for a $4 \times 4$ skew-symmetric matrix $A=(a_{ij})$ are the following: \begin{alignat*}{3} M_{1,1}(A) &= a_{12}a_{14}a_{23}-a_{12}a_{13}a_{24}+a_{12}^2a_{34}, & & \\ M_{1,2}(A) &= a_{13}^{2}a_{24}-a_{13}a_{14}a_{23}-a_{12}a_{13}a_{34}, \quad M_{2,2}(A) &=&a_{12}a_{14}, \quad \\ M_{1,3}(A) &= a_{14}a_{23}^2-a_{13}a_{23}a_{24}+a_{12}a_{23}a_{34}, \quad M_{2,3}(A) &=& a_{13}a_{24} - a_{14}a_{23}, \quad M_{3,3}(A) &= a_{14}. \end{alignat*} \end{example} We realize these minors via a graphical interpretation of the Marsh-Rietsch parametrization \cite{MR} of $\OGr^{>0}(n,2n)$, using the Lindstr\"om-Gessel-Viennot (LGV) lemma. Our first main result is a positivity test for $\OGr^{>0}(n,2n)$ using the signed minors in \Cref{def:SpecialMinorsPfaff}.
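The defining formula \eqref{eq:SpecialMinors} can be checked against \Cref{ex:n=4Minors} by exact arithmetic. The sketch below is our own (0-indexed; the helpers \texttt{det} and \texttt{M\_jk} are not from the paper) and evaluates both sides on a skew-symmetric matrix with arbitrary exact entries:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by Leibniz expansion (fine for small matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):          # count inversions to get the sign
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = Fraction(1)
        for r in range(n):
            prod *= M[r][perm[r]]
        total += sign * prod
    return total

def M_jk(j, k, A):
    """Signed minor M_{j,k} of eq. (1.2), with 0-indexed rows and columns."""
    n = len(A)
    rows = list(range(n - k - 1)) + list(range(n - k + j - 1, n))
    cols = list(range(n - j))
    sub = [[A[r][c] for c in cols] for r in rows]
    return (-1) ** (j * k) * det(sub)

# a 4x4 skew-symmetric matrix with arbitrary exact entries
a12, a13, a14, a23, a24, a34 = (Fraction(v) for v in (2, -3, 5, 7, -1, 4))
A = [[0, a12, a13, a14],
     [-a12, 0, a23, a24],
     [-a13, -a23, 0, a34],
     [-a14, -a24, -a34, 0]]
```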
\begin{figure}[H] \centering \scalebox{0.9}{\begin{tikzpicture} \coordinate (l6) at (0,4.5); \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (l1) at (0,0); \coordinate (r6) at (7,4.5); \coordinate (r7) at (7,4); \coordinate (r8) at (7,3.5); \coordinate (r9) at (7,3); \coordinate (r10) at (7,2.5); \coordinate (r5) at (7,2); \coordinate (r4) at (7,1.5); \coordinate (r3) at (7,1); \coordinate (r2) at (7,0.5); \coordinate (r1) at (7,0); \coordinate (v11) at (2.5,0); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (6,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (6,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (6,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); \coordinate (v97) at (6,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \coordinate (v61) at (2.5,4.5); \draw[black] (-0.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1) node [xscale = 0.8, 
yscale = 0.8] {$3$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (7.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (7.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (7.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (7.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (7.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (7.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (7.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (7.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (7.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (7.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[draw=dorange, line width=2pt] (l1) -- (v11); \draw[draw=black, line width=1pt] (v11) -- (r1); \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=dorange, line width=2pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (v33); \draw[draw=black, line width=1pt] (v33) -- (v34); \draw[draw=dorange, line width=2pt] (v34) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); \draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=dviolet, line width=2pt] (v44) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line width=1pt] (v51) -- 
(v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=black, line width=1pt] (v96) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=black, line width=1pt] (v84) -- (v85); \draw[draw=dgreen, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=black, line width=1pt] (v72) -- (v73); \draw[draw=blue, line width=2pt] (v73) -- (r7); \draw[draw=black, line width=1pt] (l6) -- (v61); \draw[draw=red, line width=2pt] (v61) -- (r6); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101); \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91); \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42); \draw[draw=red, line width=2pt, ->] (v92) -- (v81); \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32); \draw[draw=red, line width=2pt, ->] (v82) -- (v71); \draw[draw=dorange, line width=2pt, ->] (v11) -- (v22); \draw[draw=red, line width=2pt, ->] (v72) -- (v61); \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52); \draw[draw=blue, line width=2pt, ->] (v102) -- (v93); \draw[draw=dviolet, line width=2pt, ->] (v33) -- (v44); \draw[draw=blue, line width=2pt, ->] (v94) -- (v83); \draw[draw=dorange, line width=2pt, ->] (v23) -- (v34); \draw[draw=blue, line width=2pt, ->] (v84) -- (v73); \draw[line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103); \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. (v95); \draw[line width=2pt, ->] (v35) -- (v46); \draw[draw=dgreen, line width=2pt, ->] (v96) -- (v85) ; \draw[line width=2pt, ->] (v47) -- (v54); \draw[line width=2pt, ->] (v104) -- (v97); \end{tikzpicture}} \caption{ The collection of non-intersecting paths in the LGV diagram corresponding to the minor $M_{2,1}(A)$ for $n = 5$.} \label{fig:PathCollectionExample} \end{figure} \begin{theorem} \label{thm:Main} A skew-symmetric matrix $A \in \SS_n$ is totally positive if and only if \[ M_{j,k}(A) > 0 \quad \text{for any } 1 \leq j \leq k \leq n-1. \] This test is minimal in the sense that it uses the fewest possible number of inequalities. \end{theorem} The set $\SS_n^{\geq 0}$ of totally nonnegative skew-symmetric matrices is the Euclidean closure of $\SS_n^{>0}$. While the minors $M_{j,k}$ are non-negative on $\SS_n^{\geq 0}$, there exist skew-symmetric matrices $A \not \in \SS_n^{\geq 0}$ with $M_{j,k}(A) = 0$ for all $1 \leq j \leq k \leq n-1$. So the minors $M_{j,k}(A)$ are not enough to test for the nonnegativity of $A$. 
Nonetheless, together with the semigroup property of $\SO^{>0}(2n)$ we are able to give a nonnegativity test in the following~form. \begin{theorem}\label{thm:Nonnegative} Fix $X \in \OGr(n,2n)$. Then, for any smooth $1$-parameter family $Z(\epsilon)$ in $\SO^{>0}(2n)$ such that $Z(\epsilon) \xrightarrow[\epsilon \to 0] {} \Id_{2n} $, and with $X(\epsilon) \coloneqq X \cdot Z(\epsilon)$, the following are equivalent. \begin{enumerate}[wide=40pt, leftmargin = 58pt] \item \label{nonnegativeitem1} $X$ is totally nonnegative. \item \label{nonnegativeitem2} $X(\epsilon)$ is totally positive for all $\epsilon>0$ sufficiently small. \item \label{nonnegativeitem3} For all $1\leq j\leq k\leq n-1$, the leading coefficient in the Taylor expansion of $M_{j,k}(B(\epsilon))$ is positive, where $B(\epsilon)$ is defined by $X(\epsilon) = \rowspan \big([\Id_n|B(\epsilon)]\big)$. \end{enumerate} Moreover, the family $Z(\epsilon)$ can be chosen so that the $M_{j,k}(B(\epsilon))$ are polynomials in $\epsilon$. \end{theorem} As for flag varieties, the set $\SS_{n}^{\geq 0}$ decomposes into a disjoint union of semi-algebraic sets called \emph{positive Richardson cells} as follows: \begin{equation} \SS_n^{\geq 0} = \bigsqcup \mathcal{R}^{>0}_{v, w}, \end{equation} where the union is over all minimal coset representatives $w$ in the parabolic quotient $W^{[n-1]}$ of the Weyl group of $\SO(2n)$, and all $v \leq w$ in Bruhat order. See \Cref{subsec:RichardsonDeodhar} for more details. Our next result determines the Richardson cell that contains a given $A \in \SS_n^{\geq 0}$. A constructive version of this theorem is stated in \Cref{thm:RichardsonRestated}. \begin{theorem}\label{thm:Richardson} Let $A \in \SS^{\geq 0}_n$ and let $\mathscr{M}_A$ be the realizable rank $n$ matroid on $[2n]$ associated to $[\Id_n|A]$. Then, the Richardson cell containing $A$ can be determined from $\mathscr{M}_A$.
\end{theorem} Given a skew-symmetric matrix $A \in \SS_n$, its principal minors are perfect squares whose square roots are polynomials in the entries of $A$. These polynomials are called the \emph{Pfaffians} of $A$. As described here, there is a sign ambiguity for Pfaffians. However, in \Cref{sec:5}, we give a more intrinsic definition that fixes the sign. Given $I \subset [n]$ we denote by $\Pf_I(A)$ the Pfaffian of $A$ corresponding to the principal minor $\det(A_I^I)=\Pf_I(A)^2$. We take the convention that $\Pf_{\emptyset}(A)=1$ and $\Pf(A):=\Pf_{[n]}(A)$; also note that if $I$ has odd size, then $\Pf_I(A)=0$. In analogy with positive definite symmetric matrices, whose principal minors are positive, one could alternatively consider defining positive skew-symmetric matrices in terms of their Pfaffians. Remarkably, it turns out that the Pfaffians do have a fixed sign on $\SS_n^{>0}$: \begin{theorem}\label{thm:PfaffianSign} For any $A \in \SS_n^{>0}$ and $I \subset [n]$ of even size, we have \begin{equation}\label{eq:signPattern} \sgn(\Pf_I(A)) = (-1)^{\sum_{i \in I} i - \binom{|I|+1}{2}} =: \sgn(I, [n]). \end{equation} If $I=\{i_1<\cdots < i_{|I|}\}$ and $[n]\setminus I=\{j_1<\cdots < j_{n-|I|}\}$, this is the sign of the permutation $i_1, \dots, i_{|I|}, j_{1}, \dots, j_{n - |I|}$ in one-line notation. \end{theorem} \noindent However, we note that there also exist skew-symmetric matrices that are not totally positive, or even totally nonnegative, whose Pfaffians have this sign pattern. \subsection{Outline} We begin by collecting some necessary background and a few preliminary results in \Cref{sec:2}. In \Cref{sec:3}, we prove the positivity criterion for skew-symmetric matrices in \Cref{thm:Main}. We deal with the set of nonnegative skew-symmetric matrices $\SS_n^{\geq 0}$ and prove \Cref{thm:Nonnegative} and \Cref{thm:Richardson} in \Cref{sec:4}.
In \Cref{sec:5} we show that the Pfaffians of a skew-symmetric matrix $A \in \SS_n^{\geq 0}$ have a remarkable sign pattern and briefly discuss Pfaffian positivity. Finally, in \Cref{sec:6}, we discuss future directions and open questions. \smallskip Some computations in this article were carried out using the computer algebra system Macaulay2 \cite{M2}. The relevant Macaulay2 script we used is available at \begin{center} \href{https://mathrepo.mis.mpg.de/PositiveSkewMatrices}{\tt https://mathrepo.mis.mpg.de/PositiveSkewMatrices}. \end{center} \smallskip \subsection{Acknowledgment} We thank Bernd Sturmfels for suggesting this problem, Chris Eur for helpful Macaulay2 code, Steven Karp for pointing us to \cite[Proposition 8.17]{Lusztig1} and \cite[Theorem 3.4]{Lusztig2} used in the proof of \Cref{lem:semiGroup}, and Grant Barkley for helpful discussions around generalized minors. YEM was partially supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB-TRR 195 ``Symbolic Tools in Mathematics and their Application.'' \section{Preliminaries}\label{sec:2} \subsection{A pinch of Lie theory}\label{subsec:LieTheory} Let $G$ be a split reductive algebraic group over $\RR$. A \emph{pinning} for $G$ is a choice of a maximal torus $T$, a Borel subgroup $B$ with opposite Borel subgroup $B^{-}$, and a group homomorphism $\phi_i: \SL_2 \to G$ associated to each simple root. We will focus on the case $G=\SO(2n)$, the special orthogonal group with respect to the bilinear form $q$. Let $\mathfrak{so}_{2n}$ denote its Lie algebra, the vector space of $2n\times 2n$ matrices $X$ such that $X^T Q + QX=0$ with the usual Lie bracket. \smallskip We fix the maximal torus to be the subgroup of $\SO(2n)$ of diagonal matrices \begin{equation}\label{eq:maxTorus} T := \big\{\mathrm{diag}(t_1,\ldots,t_n,t^{-1}_1,\ldots,t^{-1}_n): \ t_1,\ldots, t_n \in \RR^* \big \}. \end{equation} We fix the Borel subgroup $B$ to be the upper triangular matrices in $\SO(2n)$.
The opposite Borel subgroup is the subgroup of lower triangular matrices in $\SO(2n)$. We take, as simple roots, $\{\varepsilon_1-\varepsilon_2, \varepsilon_2-\varepsilon_3, \ldots, \varepsilon_{n-1}-\varepsilon_{n}, \varepsilon_{n-1}+\varepsilon_n\}$ where $\{\varepsilon_i\}_{i=1}^{n}$ is the standard basis of $\RR^n$. The group homomorphisms $\big(\phi_i: \SL(2) \to \SO(2n)\big)_{i=1}^{n}$ for our pinning~are \begin{equation}\label{eq:pinning} \resizebox{0.85\textwidth}{!}{$ \begin{aligned} & \phi_i\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{blockarray}{cccccccccc} \begin{block}{c(ccccccccc)} & 1 & & & & & & & &\\ & & \ddots & & & & & & & \\ i& & & a & b & & & & & \\ i+1& & &c & d & & & & & \\ & & & & & \ddots & & & & \\ n+i&&&&&&d&-c&&\\ n+i+1&&&&&&-b&a&&\\ &&&&&&&&\ddots&\\ &&&&&&&&&1\\ \end{block} \end{blockarray}\hspace{5pt} \qquad \text{for } 1 \leq i \leq n-1,\\ &\text{and}\\ & \phi_n\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{blockarray}{cccccccccc} & & &n-1 & n & & & & 2n-1 &2n\\ \begin{block}{c(ccccccccc)} & 1 & & & & & & & &\\ & & \ddots & & & & & & &\\ n-1& & & a & & & & && b\\ n& & & & a & & & &-b &\\ &&&&&1&&&&\\ &&&&&&\ddots&&&\\ 2n-1&&&&-c&&&&d&\\ 2n&&&c&&&&&&d\\ \end{block} \end{blockarray}\hspace{5pt}. \end{aligned}$} \end{equation} The Weyl group of $\operatorname{SO}(2n)$ is the group $W$ of permutations $\sigma \in S_{2n}$ satisfying \begin{enumerate}[wide=60pt] \item \ $\quad \sigma(n+i)=n+\sigma(i)$ modulo $2n$ for all $i \in [n]$, \item \ $\quad \big| \{i \in [n]: \sigma(i)>n\} \big|$ is even. 
\end{enumerate} This is a Coxeter group; it is generated by \begin{equation}\label{eq:WeylGroupGens} \resizebox{0.90\textwidth}{!}{ $s_i=(i, i+1)(n+i, n+i+1) \quad \text{for } i=1,\ldots, n-1 \quad \text{and} \quad s_n=(n,2n-1)(n-1,2n) $}. \end{equation} We note that these generators satisfy the following braid relations \begin{align*} &s_i^2={\rm id}, \ s_{j}s_{j+1}s_j=s_{j+1}s_js_{j+1} \ \text{ and } \ s_{n-2}s_ns_{n-2}=s_ns_{n-2}s_n \quad \text{for any } i \in [n], j \in [n-2],\\ &s_is_j = s_js_i \quad \text{whenever } 1\leq i \neq j \leq n \text{ and } \{i,j\} \notin \big\{\{k,k+1\}_{k\in [n-2]}\big\} \cup \big\{ \{n-2,n\} \big\}. \end{align*} Given $w \in W$, the \emph{length of $w$} is $\ell(w) := \min\{k \in \NN: w=s_{i_1}\cdots s_{i_k} \text{ with } i_j \in [n]\}$. A \emph{reduced expression of $w$} is a particular choice of (ordered) simple transpositions $s_{i_j}$ such that $w=s_{i_1}\cdots s_{i_{\ell(w)}}$. We denote by $w_0$ the element of maximal length in $W$. \medskip In \cite[Theorem 11.3]{MR} Marsh and Rietsch give a parametrization of the totally positive part of the complete flag variety for any split reductive group $G$ and choice of pinning in terms of a reduced expression for the longest element in the Weyl group. Lusztig's characterization of positivity for flag varieties \cite{Lusztig2} guarantees that we can project this parametrization from the complete flag variety of $G$ down to any partial flag variety. \begin{remark}\label{rem:MRParamConvention} We use a different convention than Marsh and Rietsch by viewing flags as row spans of matrices rather than column spans. This allows us to present examples more easily. Concretely, a matrix $M$ represents a positive flag in our convention if and only if its transpose $M^T$ represents a positive flag in~\cite{MR}. \end{remark} \smallskip In this section, we rely on some basic facts about the Weyl group of $\SO(2n)$, which can be found in \cite{HumphreysCoxeter}.
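The relations above can be verified mechanically by realizing the generators as permutations of $\{0,\dots,2n-1\}$. The following 0-indexed sketch is ours, not from the paper (the helper names are our own); it also checks the two defining conditions of $W$ for each generator:

```python
def perm_swapping(n2, *pairs):
    """Permutation of {0,...,n2-1}, returned as a tuple, swapping each pair."""
    p = list(range(n2))
    for a, b in pairs:
        p[a], p[b] = p[b], p[a]
    return tuple(p)

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def generators(n):
    """0-indexed versions of the generators s_1,...,s_n displayed above."""
    s = {i: perm_swapping(2 * n, (i - 1, i), (n + i - 1, n + i))
         for i in range(1, n)}
    s[n] = perm_swapping(2 * n, (n - 1, 2 * n - 2), (n - 2, 2 * n - 1))
    return s

n = 5
s = generators(n)
e = tuple(range(2 * n))
# Defining conditions of W: sigma(n+i) = n + sigma(i) mod 2n, and an even
# number of i in [n] with sigma(i) > n.
for g in s.values():
    assert all(g[(x + n) % (2 * n)] == (g[x] + n) % (2 * n) for x in range(2 * n))
    assert sum(1 for x in range(n) if g[x] >= n) % 2 == 0
```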
For $t \in \RR$ and $1\leq i \leq n$ we denote by $x_i(t)$ the matrices \begin{equation}\label{eq:x_i(t)} x_i(t) := \phi_i\begin{pmatrix} 1&t\\ 0&1 \end{pmatrix}. \end{equation} Projecting the Marsh-Rietsch parametrization \cite[Section 11]{MR} for $G=\SO(2n)$, we obtain \begin{equation} \label{eq:parameterizationCompleteFlag} \OGr(n,2n)^{>0} = \Big\{ \rowspan \big(\pi_n \big(x_{j_1}(t_1)\cdots x_{j_{\hat{N}}}(t_{\hat{N}}) \big)\big): t_1,\ldots, t_{\hat{N}} \in \RR_{>0} \Big\}, \end{equation} where $\pi_n:\operatorname{SO}(2n) \to \OGr(n,2n)$ is the map that takes a $2n \times 2n$ orthogonal matrix to the row span of its first $n$ rows, and $w_0 = s_{j_1}s_{j_2} \cdots s_{j_{\hat{N}}}$ is the longest element of the Weyl group, and as such, $\hat{N} = 2 \binom{n}{2}$. We will find it useful to fix a reduced expression $\underline{w}_0$ for $w_0$, that is, a minimal sequence of generators which multiplied together yield $w_0$. For reasons that will be evident shortly, we also give a name to the final $\binom{n}{2}$ generators of $\underline{w}_0$, as follows: \begin{equation}\label{eq:w_0ReducedExpr} \resizebox{.915\hsize}{!}{$ \begin{aligned} \underline{w}_0^{[n-1]} &= s_n(s_{n-2}\cdots s_1)(s_{n-1}\cdots s_2)s_n(s_{n-2}\cdots s_3)(s_{n-1}\cdots s_4)\cdots s_n && \text{for $n$ even,}\\ \underline{w}_0^{[n-1]} &= s_n(s_{n-2}\cdots s_1)(s_{n-1}\cdots s_2)s_n(s_{n-2}\cdots s_3)(s_{n-1}\cdots s_4)\cdots s_n s_{n-2} s_{n-1}&& \text{for $n$ odd,} \end{aligned}$} \end{equation} \begin{equation}\label{eq:w_0LongReducedExpr} \underline{w}_0 = s_1(s_2s_1) \dots (s_{n-1}s_{n-2}\dots s_1) \ \underline{w}_0^{[n-1]}. \end{equation} Observe that, for $Z\in \SO(2n)$, $i\in [n-1]$, and $t\in \mathbb{R}$, we have $\pi_n(Z)=\pi_n(x_i(t)Z)$. Thus, instead of using an expression for $w_0$ in the Marsh-Rietsch parametrization of $\OGr(n,2n)^{>0}$, it suffices to use any representative of the right coset $w_0W_{[n-1]}$, where $W_{[n-1]}\coloneqq \langle s_1,\ldots,s_{n-1}\rangle$.
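As a sanity check (our own sketch, not from the paper), one can confirm numerically that each $x_i(t)$ preserves the form $Q$, i.e. $x_i(t)^T Q\, x_i(t) = Q$, so the parametrization indeed stays inside $\SO(2n)$. The helper below hard-codes the two nonzero off-diagonal entries of $x_i(t)$ read off from the pinning \eqref{eq:pinning}, translated to 0-indexing:

```python
import numpy as np

def x_gen(i, t, n):
    """x_i(t) = phi_i([[1, t], [0, 1]]) as a 2n x 2n matrix, 0-indexed.

    For i < n the only off-diagonal entries are t at (i-1, i) and -t at
    (n+i, n+i-1); for i = n they are t at (n-2, 2n-1) and -t at (n-1, 2n-2).
    These correspond to the 1-indexed entries of phi_i in the text.
    """
    M = np.eye(2 * n)
    if i < n:
        M[i - 1, i] = t
        M[n + i, n + i - 1] = -t
    else:
        M[n - 2, 2 * n - 1] = t
        M[n - 1, 2 * n - 2] = -t
    return M

n = 4
Q = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]])
```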
We will use a minimal length coset representative, which has length $N\coloneqq \binom{n}{2}$. Note that, by equation \eqref{eq:w_0LongReducedExpr}, $\underline{w}_0^{[n-1]}$ is an expression for a right coset representative of $w_0$ and, by equation \eqref{eq:w_0ReducedExpr}, it is of length $N$. We have \begin{equation}\label{eq:posorthogonalgrassmannian} \OGr(n,2n)^{>0} = \Big\{ \rowspan \big(\pi_n \big(x_{i_1}(t_1)\cdots x_{i_N}(t_{N}) \big)\big): t_1,\ldots, t_{N} \in \RR_{>0} \Big\}, \end{equation} where $\underline{w}_0^{[n-1]}=s_{i_1}s_{i_2}\cdots s_{i_N}$. We denote by $X = X(t_1, \dots, t_N) $ the $n \times 2n$ matrix formed by the first $n$ rows of $x_{i_1}(t_1)\cdots x_{i_N}(t_N)$. Since the matrices \eqref{eq:x_i(t)} are all unipotent, the leftmost $n\times n$ block of $X$ is unipotent. We may thus row reduce $X$ to the form $[\Id_n|A]$, where $A=A(t_1,\dots,t_N)$ is a skew-symmetric matrix with polynomial entries in the $t_i$. \begin{definition}\label{def:LusztigPositive} A real skew-symmetric $n \times n$ matrix is \textit{totally positive} if it is of the form $A(t_1, \dots, t_N)$ with $t_1,\dots, t_N \in \RR_{> 0}$. \end{definition} \begin{example}[$n=4$]\label{ex:n=4MRparam} In this case we have $ N = \binom{4}{2} = 6$ parameters. The expressions for $w_0$ and $w_0^{[3]}$ are $\underline{w}_0 =s_{1}(s_{2}s_{1})(s_{3}s_{2}s_{1})s_{4}(s_{2}s_{1})(s_{3}s_{2})s_{4}$ and $\underline{w}_0^{[3]} = s_{4}(s_{2}s_{1})(s_{3}s_{2})s_{4}$, meaning \[ i_1 = 4, \quad i_2 = 2, \quad i_3 = 1, \quad i_4 = 3, \quad i_5 = 2, \quad i_6 = 4.
\] The matrices $X$ and $A$ are given by \[ X = \begin{pmatrix} 1&t_{3}&t_{3}t_{5}&0&0&0&0&t_{3}t_{5}t_{6}\\ 0&1&t_{2}+t_{5}&t_{2}t_{4}&0&0&-t_{2}t_{4}t_{6}&t_{2}t_{6}+t_{5}t_{6}\\ 0&0&1&t_{4}&0&t_{1}t_{4}t_{5}&-t_{1}t_{4}-t_{4}t_{6}&t_{1}+t_{6}\\ 0&0&0&1&-t_{1}t_{2}t_{3}&t_{1}t_{2}+t_{1}t_{5}&-t_{1}-t_{6}&0 \end{pmatrix} \] and \[ A = \begin{pmatrix} 0&t_{1}t_{2}t_{3}t_{4}t_{5}&-t_{1}t_{2}t_{3}t_{4}&t_{1}t_{2}t_{3}\\ -t_{1}t_{2}t_{3}t_{4}t_{5}&0&t_{1}t_{2}t_{4}&-t_{1}t_{2}-t_{1}t_{5}\\ t_{1}t_{2}t_{3}t_{4}&-t_{1}t_{2}t_{4}&0&t_{1}+t_{6}\\ -t_{1}t_{2}t_{3}&t_{1}t_{2}+t_{1}t_{5}&-t_{1}-t_{6}&0 \end{pmatrix}. \] In this parametrization, the minors $M_{j,k}(A)$, which we computed in Example \ref{ex:n=4Minors}, are \begin{alignat*}{3} M_{1,1}(A) &= t_{1}^{2}t_{2}^{2}t_{3}^{2}t_{4}^{2}t_{5}^{2}t_{6}, & & & & \\ M_{1,2}(A) &= t_{1}^{2}t_{2}^{2}t_{3}^{2}t_{4}^{2}t_{5}t_{6}, & \quad M_{2,2}(A) &= t_{1}^{2}t_{2}^{2}t_{3}^{2}t_{4}t_{5}, & & \\ M_{1,3}(A) &= t_{1}^{2}t_{2}^{2}t_{3}t_{4}^{2}t_{5}t_{6}, & \quad M_{2,3}(A) &= t_{1}^{2}t_{2}t_{3}t_{4}t_{5}, & \quad M_{3,3}(A) &= t_{1}t_{2}t_{3}. \end{alignat*} Note that these minors are all monomials in the parameters $t_1, \ldots, t_6$ and that if all of these minors are positive, then all the $t_i$'s are positive. This essentially proves Theorem \ref{thm:Main} in the case $n=4$ and motivates our general approach. Moreover, since the exponent vectors of the $M_{j,k}$ form a $\ZZ$-basis for $\ZZ^6$, each $t_i$ is a Laurent monomial in the minors $M_{j,k}(A)$. \end{example} \subsection{The Lindstr\"om-Gessel-Viennot diagram} In this section, we introduce the Lindstr\"om-Gessel-Viennot (LGV) diagram, which will be useful in the remainder of this article. Loosely speaking, this diagram encodes the combinatorics that govern the multiplication of the matrices $x_{i_1}(t_1) \cdots x_{i_N}(t_N)$, and hence also of $X = X(t_1,\dots, t_N)$ and $A = A(t_1,\dots, t_N)$.
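The computation in Example \ref{ex:n=4MRparam} can be reproduced by machine. The sketch below (our illustration, assuming the one-parameter matrices $x_i(t)$ displayed in Example \ref{ex:n=4LGV}) multiplies out $x_{i_1}(t_1)\cdots x_{i_6}(t_6)$, row reduces the top $4\times 8$ block to $[\Id_4|A]$, and checks skew-symmetry together with a few entries of $A$ at generic positive parameter values:

```python
from fractions import Fraction as F

def x(i, t, n=4):
    # One-parameter matrices x_i(t) read off from the n = 4 example:
    # for i < n, entry (i, i+1) = t and (n+i+1, n+i) = -t (1-indexed);
    # for i = n, entry (n-1, 2n) = t and (n, 2n-1) = -t.
    M = [[F(int(r == c)) for c in range(2 * n)] for r in range(2 * n)]
    if i < n:
        M[i - 1][i], M[n + i][n + i - 1] = t, -t
    else:
        M[n - 2][2 * n - 1], M[n - 1][2 * n - 2] = t, -t
    return M

def mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

n, word = 4, [4, 2, 1, 3, 2, 4]             # (i_1, ..., i_6)
t = [F(v) for v in (2, 3, 5, 7, 11, 13)]    # generic positive values for t_1, ..., t_6
Z = x(word[0], t[0])
for i, ti in zip(word[1:], t[1:]):
    Z = mul(Z, x(i, ti))
X = [row[:] for row in Z[:n]]               # first n rows of the product

# Row reduce X = [B | C] to [Id_n | A]; B is upper unitriangular, so no pivoting needed.
for p in range(n - 1, -1, -1):
    for r in range(p):
        f = X[r][p]
        X[r] = [a - f * b for a, b in zip(X[r], X[p])]
A = [row[n:] for row in X]

assert all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))  # A is skew-symmetric
assert A[0][1] == t[0] * t[1] * t[2] * t[3] * t[4]   # entry t_1 t_2 t_3 t_4 t_5
assert A[0][3] == t[0] * t[1] * t[2]                 # entry t_1 t_2 t_3
assert A[2][3] == t[0] + t[5]                        # entry t_1 + t_6
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in these identities.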
This construction is similar to \cite[Section 3.2]{boretsky}, in which the matrices $x_i(t)$ are seen as adjacency matrices of a weighted directed graph. \begin{definition}\label{def:LGVdiagram} We define the \textit{LGV diagram} as a directed graph consisting of $2n$ horizontal edges called \textit{strands}, with $2\binom{n}{2}$ weighted vertical arrows connecting them in the following way. Let $i_1,\ldots, i_N$ be such that $\underline{w}_0^{[n-1]}=s_{i_1}s_{i_2}\cdots s_{i_N}$. \begin{enumerate} \item Strands are always directed rightwards. We call the ends of the strands \textit{vertices}. Label the vertices on both sides in order $1,\ldots, n, 2n,\ldots, n+1$ starting from the bottom. Vertices on the left are called \textit{source vertices} and those on the right are called \textit{sink vertices}. \smallskip \item For each $1\leq l\leq N$, we add arrows to the strands, with arrows corresponding to smaller values of $l$ always appearing to the left of those corresponding to larger values of $l$. To obtain the arrows corresponding to $l$, look at the off-diagonal entries of $x_{i_l}(t_l)$. For every non-zero entry in position $(j,k)$, we draw an arrow from strand $j$ to strand $k$ with weight given by the value of the entry, which is either $t_l$ or $-t_l$. \end{enumerate} We stress that the order in which the arrows from different matrices $x_{i_l}(t_l)$ are drawn matters, reflecting the noncommutativity of these matrices. However, the multiple arrows corresponding to a single matrix $x_{i_l}(t_l)$ can be drawn in any order.
\end{definition} \begin{example}[$n=4$]\label{ex:n=4LGV} In this case, $(i_1, i_2, i_3, i_4, i_5, i_6) = (4,2,1,3,2,4)$ and the matrices $x_1, x_2, x_3, x_4$ are, respectively, \[ \resizebox{1\textwidth}{!}{$ \begin{bmatrix} 1 & t &&&&&&\\ & 1 &&&&&&\\ &&1&&&&&\\ &&&1&&&&\\ &&&&1&&&\\ &&&&-t&1&&\\ &&&&&&1&\\ &&&&&&&1\\ \end{bmatrix},\quad \begin{bmatrix} 1 & &&&&&&\\ & 1 & t&&&&&\\ &&1&&&&&\\ &&&1&&&&\\ &&&&1&&&\\ &&&&&1&&\\ &&&&&-t&1&\\ &&&&&&&1\\ \end{bmatrix}, \quad \begin{bmatrix} 1&&&&&&&\\ &1&&&&&&\\ &&1&t&&&&\\ &&&1&&&&\\ &&&&1&&&\\ &&&&&1&&\\ &&&&&&1&\\ &&&&&&-t&1\\ \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 1 & &&&&&&\\ & 1 &&&&&&\\ &&1&&&&&t\\ &&&1&&&-t&\\ &&&&1&&&\\ &&&&&1&&\\ &&&&&&1&\\ &&&&&&&1\\ \end{bmatrix}.$} \] Since $\underline{w}_0^{[3]} = s_{4}(s_{2}s_{1})(s_{3}s_{2})s_{4}$, the LGV diagram for $n=4$ is as follows. As the horizontal strands are always directed rightwards, we do not explicitly indicate it in our drawings. \begin{center} \scalebox{1}{\begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (5.5,4); \coordinate (r8) at (5.5,3.5); \coordinate (r9) at (5.5,3); \coordinate (r10) at (5.5,2.5); \coordinate (r5) at (5.5,2); \coordinate (r4) at (5.5,1.5); \coordinate (r3) at (5.5,1); \coordinate (r2) at (5.5,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); 
\coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (6,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (6,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (6,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (6,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (6,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (6,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (6,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (6,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=black, line width=1pt] (l3) -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v31) -- (v42) node[below right] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (v92) -- (v81) node[below right] {\tiny $-t_{2}$}; \draw[draw=black, line width=1pt, ->] (v21) -- (v32) node[below right] {\tiny $t_{3}$}; \draw[draw=black, line width=1pt, ->] (v82) -- (v71) node[below right] {\tiny $-t_{3}$}; \draw[draw=black, line width=1pt, ->] (v43) -- (v52) node[below right] {\tiny $t_{4}$}; \draw[draw=black, line width=1pt, ->] (v102) -- (v93) node[below right] {\tiny $-t_{4}$}; \draw[draw=black, line width=1pt, ->] (v33) -- (v44) node[below right] {\tiny $t_{5}$}; \draw[draw=black, line width=1pt, ->] (v94) -- (v83) node[below right] {\tiny $-t_{5}$}; \draw[draw=black, line width=1pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\tiny $t_{6}$}; \draw[draw=black, line width=1pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. (v95) node[midway, below left] {\tiny $-t_{6}$}; \end{tikzpicture}} \end{center} \end{example} \begin{definition} A \textit{path} in the LGV diagram is a path in the directed graph from a source vertex to a sink vertex. We will identify a path $\mathcal{P}$ with the set of vertical arrows it uses. Observe that this set uniquely defines $\mathcal{P}$. \smallskip Given $I,J \subset [2n]$ of the same cardinality, we define a \textit{path collection} from $I$ to $J$ as a set of $|I|$ paths from the source vertices labeled by elements of $I$ to the sink vertices labeled by elements of $J$. We say a path collection is \textit{non-intersecting} if the paths do not intersect when drawn in the LGV diagram. We say that a non-intersecting path collection from $I$ to $J$ is \textit{unique} if it is the only non-intersecting path collection with source vertices $I$ and sink vertices $J$.
\smallskip A \textit{left greedy path} in the LGV diagram originating from $v$ is a path that starts from source vertex $v$ and uses each vertical arrow it encounters until it reaches a sink vertex. These are the paths that turn left whenever possible. We denote this path by ${\rm LGP}(v)$. A \textit{left greedy path collection} is a path collection in which all the paths are left greedy. \smallskip We say that a path $\mathcal{P}$ in a non-intersecting path collection $P$ is \textit{right greedy} if it never uses vertical arrows unless that is the only way to avoid intersecting paths originating below it in $P$. These are the paths that turn right whenever possible, subject to $P$ being a non-intersecting path collection. \end{definition} \begin{example}\label{example:n5pathcollection} Figure \ref{fig:examplePathCollection2} depicts a non-intersecting path collection in the LGV diagram for $n=5$ with $5$ paths from $I=\{1,2,3,4,5\}$ to $J=\{3,4,8,7,6\}$. The red, blue, and green paths, from $5$ to $6$, from $4$ to $7$, and from $3$ to $8$ respectively, are examples of left greedy paths. The orange and purple paths, from $1$ to $3$ and from $2$ to $4$ respectively, are right greedy in this path collection.
\end{example} \begin{figure}[H] \centering \scalebox{0.9}{\begin{tikzpicture} \coordinate (l6) at (0,4.5); \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (l1) at (0,0); \coordinate (r6) at (7,4.5); \coordinate (r7) at (7,4); \coordinate (r8) at (7,3.5); \coordinate (r9) at (7,3); \coordinate (r10) at (7,2.5); \coordinate (r5) at (7,2); \coordinate (r4) at (7,1.5); \coordinate (r3) at (7,1); \coordinate (r2) at (7,0.5); \coordinate (r1) at (7,0); \coordinate (v11) at (2.5,0); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (6,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (6,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (6,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); \coordinate (v97) at (6,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \coordinate (v61) at (2.5,4.5); \draw[black] (-0.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1) node 
[xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (7.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (7.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (7.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (7.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (7.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (7.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (7.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (7.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (7.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (7.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[draw=dorange, line width=2pt] (l1) -- (v11); \draw[draw=black, line width=1pt] (v11) -- (r1); \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=dorange, line width=2pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (v33); \draw[draw=black, line width=1pt] (v33) -- (v34); \draw[draw=dorange, line width=2pt] (v34) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); \draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=dviolet, line width=2pt] (v44) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line 
width=1pt] (v51) -- (v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=black, line width=1pt] (v96) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=black, line width=1pt] (v84) -- (v85); \draw[draw=dgreen, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=black, line width=1pt] (v72) -- (v73); \draw[draw=blue, line width=2pt] (v73) -- (r7); \draw[draw=black, line width=1pt] (l6) -- (v61); \draw[draw=red, line width=2pt] (v61) -- (r6); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\scriptsize \textcolor{blue}{$t_{1}$}}; \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\scriptsize \textcolor{red}{$-t_{1}$}}; \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42) node[midway, right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{2}$}}; \draw[draw=red, line width=2pt, ->] (v92) -- (v81) node[midway, right=0.5mm] {\scriptsize \textcolor{red}{$-t_{2}$}}; \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32) node[midway, right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{3}$}}; \draw[draw=red, line width=2pt, ->] (v82) -- (v71) node[midway, right=0.5mm] {\scriptsize \textcolor{red}{$-t_{3}$}}; \draw[draw=dorange, line width=2pt, ->] (v11) -- (v22) node[midway, right=0.5mm] {\scriptsize \textcolor{orange}{$t_{4}$}}; \draw[draw=red, line width=2pt, ->] (v72) -- (v61) node[midway, right=0.5mm] {\scriptsize \textcolor{red}{$-t_{4}$}}; \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52) node[midway, right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{5}$}}; \draw[draw=blue, line width=2pt, ->] (v102) -- (v93) node[midway, right=0.5mm] {\scriptsize \textcolor{blue}{$-t_{5}$}}; \draw[draw=dviolet, line width=2pt, ->] (v33) -- (v44) node[midway, right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{6}$}}; \draw[draw=blue, line width=2pt, ->] (v94) -- (v83) node[midway, right=0.5mm] {\scriptsize \textcolor{blue}{$-t_{6}$}}; \draw[draw=dorange, line width=2pt, ->] (v23) -- (v34) node[midway, right=0.5mm] {\scriptsize \textcolor{orange}{$t_{7}$}}; \draw[draw=blue, line width=2pt, ->] (v84) -- (v73) node[midway, right=0.5mm] {\scriptsize \textcolor{blue}{$-t_{7}$}}; \draw[line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\scriptsize {$t_{8}$}}; \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. 
(v95) node[midway, below left] {\scriptsize \textcolor{dgreen}{$-t_{8}$}}; \draw[line width=2pt, ->] (v35) -- (v46) node[midway, right] {\scriptsize {$t_{9}$}}; \draw[draw=dgreen, line width=2pt, ->] (v96) -- (v85) node[midway, right] {\scriptsize \textcolor{dgreen}{$-t_{9}$}}; \draw[line width=2pt, ->] (v47) -- (v54) node[midway, right] {\scriptsize {$t_{10}$}}; \draw[line width=2pt, ->] (v104) -- (v97) node[midway, right] {\scriptsize {$-t_{10}$}}; \end{tikzpicture}} \caption{The unique non-intersecting path collection from $I = \{1,2,3,4,5\}$ to $\{3,4,6,7,8\}$. } \label{fig:examplePathCollection2} \end{figure} \begin{proposition}\label{LGVlemma} Let $I \in \binom{[2n]}{n}$ and let $\mathcal{P}_I$ be the set of non-intersecting path collections from $\{1,\ldots,n\}$ to $I$ in the LGV diagram. Then the maximal minor of $X$ in the columns indexed by $I$ is given by \[ \Delta^I(X) :=\det\left(X_I\right)= \sum_{P=(P_1,\ldots,P_n) \in \mathcal{P}_I} \sgn(\pi_P)\prod_{i=1}^n \prod_{e \in P_i} w(e), \] where $\pi_P$ is the permutation that sends $i$ to the endpoint of $P_i$, and $w(e) \in \{\pm t_1, \dots, \pm t_N\}$ is the weight of the edge $e$. \end{proposition} \begin{proof} Recall that $X$ consists of the first $n$ rows of $Z:= x_{i_1}(t_1) \cdots x_{i_N}(t_N)$. Hence, the maximal minors of $X$ are the $n\times n$ minors of $Z$ in which we always pick the first $n$ rows. Note that we can view each matrix $Y\coloneqq x_{i_l}(t_l)$ as the adjacency matrix of a weighted directed graph $G_l$ with $2n$ source vertices and $2n$ sink vertices, in which source $j$ is connected to sink $k$ by a directed edge of weight $(Y)_{jk}$ whenever $(Y)_{jk}\neq 0$. A classical result from graph theory tells us that the entry $(Z)_{jk}$ is the sum, over all paths from source $j$ to sink $k$ in the concatenation of the graphs $G_l$, of the products of the edge weights along the path, where for $1\leq l\leq N-1$ we identify the sinks of $G_l$ with the corresponding sources of $G_{l+1}$.
Up to adding horizontal strands, which do not contribute to the weights of paths, this concatenation is precisely the LGV diagram. Thus, the result follows from a simple corollary \cite[Corollary 2.21]{boretsky} of the LGV lemma \cite{GV,Lindstrom}. \end{proof} In the LGV diagram, each matrix $x_{i_l}$ contributes two arrows. There are two different types of arrows, depending on whether or not $i_l = n$. If $i_l \neq n$, then the arrows corresponding to $x_{i_l}(t_l)$ are $i_l \to i_l +1$ and $n+i_l+1 \to n+i_l$. If $i_l = n$, the arrows are $n-1 \to 2n$ and $n \to 2n-1$, which jump over strands $n$ and $2n$ respectively. We now introduce notation to encode each arrow in the graph and its corresponding weight. \begin{itemize} \item We start with arrows that do not skip strands. These will be denoted $a_i^{(j)}$, with the lower index describing which strands of the LGV diagram the arrow connects and the upper index describing where it lies horizontally. For $i<n$, $a_i^{(j)}$ is the arrow $i \to i+1$ coming from the $s_i$ in the $j$th set of parentheses in expression \eqref{eq:w_0ReducedExpr}. For $i>n$, $a_i^{(j)}$ is the arrow $i \to i-1$ coming from the $s_{i-n-1}$ in the $j$th set of parentheses in expression \eqref{eq:w_0ReducedExpr}. \item All other arrows skip strands. These are denoted $b_i^{(j)}$, where the upper and lower indices again describe the horizontal and vertical position of the arrow in the LGV diagram, respectively. For $i=n$ or $i=n-1$, $b_i^{(j)}$ is the arrow $n \to 2n-1$ or $n-1\to 2n$, respectively, coming from the $j^{\text{th}}$ time $s_n$ appears in expression \eqref{eq:w_0ReducedExpr}. \end{itemize} Each of these arrows is weighted by $\pm t_l$ for some $l$. We define a function $\lambda$ which assigns to an arrow $a$ with weight $\pm t_l$ the index of its weight: $\lambda(a)=l$.
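Proposition \ref{LGVlemma} can be spot-checked by machine for $n=4$. The sketch below (our illustration, assuming the matrices $x_i(t)$ of Example \ref{ex:n=4LGV}) evaluates $\Delta^{3,5,6,7}(X)$ as an honest determinant at generic positive parameters and confirms that, up to sign, it equals the monomial $t_1^2t_2^2t_3^2t_4^2t_5^2t_6$ computed as $M_{1,1}(A)$ in Example \ref{ex:n=4MRparam}:

```python
from fractions import Fraction as F
from itertools import permutations

def x(i, t, n=4):
    # One-parameter matrices x_i(t) as in the n = 4 example (1-indexed positions):
    # i < n: (i, i+1) = t, (n+i+1, n+i) = -t;  i = n: (n-1, 2n) = t, (n, 2n-1) = -t.
    M = [[F(int(r == c)) for c in range(2 * n)] for r in range(2 * n)]
    if i < n:
        M[i - 1][i], M[n + i][n + i - 1] = t, -t
    else:
        M[n - 2][2 * n - 1], M[n - 1][2 * n - 2] = t, -t
    return M

def mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def det(M):
    # Leibniz expansion; exact since the entries are Fractions.
    m, total = len(M), F(0)
    for p in permutations(range(m)):
        inv = sum(p[a] > p[b] for a in range(m) for b in range(a + 1, m))
        term = F((-1) ** inv)
        for r in range(m):
            term *= M[r][p[r]]
        total += term
    return total

n, word = 4, [4, 2, 1, 3, 2, 4]              # (i_1, ..., i_6)
t = [F(v) for v in (2, 3, 5, 7, 11, 13)]     # generic positive values
Z = x(word[0], t[0])
for i, ti in zip(word[1:], t[1:]):
    Z = mul(Z, x(i, ti))
X = Z[:n]

I = (3, 5, 6, 7)                              # column set for M_{1,1} (j = k = 1)
D = det([[X[r][c - 1] for c in I] for r in range(n)])
assert abs(D) == (t[0] * t[1] * t[2] * t[3] * t[4]) ** 2 * t[5]   # t_1^2 ... t_5^2 t_6
```

A positive value of every $t_i$ thus forces this particular minor to be nonzero, as the monomial structure predicts.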
Note that for $i<n$ and any suitable $j$, $\lambda\left(a_i^{(j)}\right) = \lambda\left(a_{n+i+1}^{(j)}\right)$ and $\lambda\left(b_{n-1}^{(j)}\right) = \lambda\left(b_n^{(j)}\right)$. \begin{example} Paths in the LGV diagram can be explicitly described in terms of the arrows $a_{i}^{(j)}, b_{i}^{(j)}$. For example, the path collection in \Cref{example:n5pathcollection} is given by the five paths: \[ \textcolor{dorange}{a_1^{(1)}a_2^{(2)}}, \quad \textcolor{dviolet}{a_2^{(1)}a_3^{(2)}}, \quad \textcolor{dgreen}{a_3^{(1)}a_4^{(2)}b_5^{(2)}a_9^{(3)}}, \quad \textcolor{blue}{b_4^{(1)}a_{10}^{(2)}a_9^{(2)}a_8^{(2)}}, \quad \text{and} \quad \textcolor{red}{b_5^{(1)}a_{9}^{(1)}a_8^{(1)}a_7^{(1)}}. \] The weight indices of the arrows in this LGV diagram are as follows. \smallskip \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \textbf{Arrow $\mathbf{a}$} & $b_4^{(1)}, b_5^{(1)}$ & $a_3^{(1)}, a_9^{(1)}$ & $a_2^{(1)}, a_8^{(1)}$ & $a_1^{(1)}, a_7^{(1)}$ & $a_4^{(2)}, a_{10}^{(2)}$ & $a_3^{(2)}, a_9^{(2)}$ & $a_2^{(2)}, a_8^{(2)}$ & $b_4^{(2)}, b_5^{(2)}$ & $a_3^{(3)}, a_9^{(3)}$ & $a_4^{(4)}, a_{10}^{(4)}$\\ \hline $\lambda(\mathbf{a})$& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \end{tabular} \end{center} \end{example} \begin{remark} The LGV diagram can be explicitly described using expression \eqref{eq:w_0ReducedExpr}. The LGV diagram consists of $2n$ horizontal strands, with a sequence of arrows added from left to right, one pair corresponding to each $s_i$ in \eqref{eq:w_0ReducedExpr}. (We have been drawing pairs of arrows directly on top of one another, but nothing we have described changes if we draw the higher arrow slightly to the right of the lower arrow.)
For $n$ even the sequence of arrows is as follows, with parentheses matching those in \eqref{eq:w_0ReducedExpr}: \begin{equation}\label{eq:curlyW0} \mathcal{W}_0 \coloneqq \begin{array}{l} b_n^{(1)}b_{n-1}^{(1)} (a_{n-2}^{(1)}a_{2n-1}^{(1)} \ \cdots \ a_{1}^{(1)}a_{n+2}^{(1)})(a_{n-1}^{(2)}a_{2n}^{(2)} \ \cdots \ a_{2}^{(2)}a_{n+3}^{(2)})\\ b_n^{(2)}b_{n-1}^{(2)}(a_{n-2}^{(3)}a_{2n-1}^{(3)} \ \cdots \ a_{3}^{(3)}a_{n+4}^{(3)})(a_{n-1}^{(4)}a_{2n}^{(4)}\ \cdots \ a_{4}^{(4)}a_{n+5}^{(4)}) \\ \multicolumn{1}{c}{\vdots}\\ b_n^{(l)}b_{n-1}^{(l)} (a_{n-2}^{(2l-1)}a_{2n-1}^{(2l-1)} \ \cdots \ a_{2l-1}^{(2l-1)}a_{n+2l}^{(2l-1)}) (a_{n-1}^{(2l)}a_{2n}^{(2l)} \ \cdots \ a_{2l}^{(2l)}a_{n+2l+1}^{(2l)})\\ \multicolumn{1}{c}{\vdots}\\ b_n^{(n/2)}b_{n-1}^{(n/2)}. \end{array} \end{equation} For $n$ odd, we replace the last line with $b_n^{(\lfloor n/2\rfloor)}b_{n-1}^{(\lfloor n/2\rfloor)}(a_{n-2}^{(n-2)}a_{2n-1}^{(n-2)}) (a_{n-1}^{(n-1)} a_{2n}^{(n-1)})$. \medskip Note that $\mathcal{W}_0$ contains either $(n-2)$ or $(n-1)$ sets of parentheses, for $n$ even or odd respectively. Each set of parentheses defines a ``diagonal line'' in the LGV diagram. For instance, see the arrows $\textcolor{dgreen}{a_3^{(1)}}\textcolor{dviolet}{a_2^{(1)}}\textcolor{dorange}{a_1^{(1)}}$ and $\textcolor{red}{a_9^{(1)}}\textcolor{red}{a_8^{(1)}}\textcolor{red}{a_7^{(1)}}$ in \Cref{fig:examplePathCollection2}, which all appear in the first set of parentheses of $\mathcal{W}_0$. We will refer to the $m^{\text{th}}$ pair of parentheses in $\mathcal{W}_0$ as \textit{parentheses pair $m$}. Explicitly, parentheses pair $m$ contains all arrows $a_i^{(2m-1)}$ and $a_i^{(2m)}$. \end{remark} \section{Positivity criterion} \label{sec:3} \subsection{Monomial Minors} The minors in \Cref{def:SpecialMinorsPfaff} correspond to particular path collections in the LGV diagram. First, note that the maximal minors of $X = X(t_1,\ldots, t_N)$ are, up to sign, equal to the minors of $A = A(t_1,\ldots,t_N)$.
We defer the specific sign analysis of $M_{j,k}$ to \Cref{subsec:signAnalysis}, but we have the following: for $1\leq j\leq k\leq n-1$, \[ M_{j,k} := M_{j,k}(A) = \pm \ \Delta^{n-k,\ldots,n-k+j-1,n+1,n+2,\ldots,2n-j}(X). \] By virtue of \Cref{LGVlemma}, to compute $M_{j,k}$, we must look at non-intersecting path collections from $\{1,\ldots, n\}$ to $\{n-k,\ldots,n-k+j-1,n+1,n+2,\ldots,2n-j\}$. \begin{lemma}\label{lem:LGVdiagramLeftGreedy} The left greedy path collection from source vertices $[n]$ in the LGV diagram is a unique non-intersecting path collection that uses all arrows in the LGV diagram. Moreover, if ${\rm LGP}(i)$ terminates at sink vertex $j$, then no other path originating from source vertex $i$ terminates above strand $j$. \end{lemma} \begin{proof} We will prove these statements in more generality in \Cref{prop:pathcollectionproperties} and \Cref{lem:greedyisextremeuv}. \end{proof} \begin{proposition}\label{prop:UniquePathColl} For any $1 \leq j \leq k \leq n-1$, there is a unique non-intersecting path collection from the source vertices $\{1,\ldots, n\}$ to the sink vertices $\{n-k,\ldots,n-k+j-1,n+1,n+2,\ldots,2n-j\}$. Explicitly, it is given by the following $n$ paths: \[ 1 \to n-k, \quad \cdots, \quad j \to n-k+j-1, \quad j+1 \to 2n-j, \quad \cdots, \quad n \to n+1. \] In particular, the minors $M_{j,k}$ are monomials in the variables $t_i$. \end{proposition} \smallskip Before giving a proof, let us first give some intuition for the path collections in \Cref{prop:UniquePathColl}. For $1\leq j \leq k \leq n-1$, the corresponding path collection can be obtained as follows: \begin{itemize} \item Start with the path collection consisting of the path from $1$ to $n-k$ and the left greedy paths starting at $2,\ldots, n$. \item Sequentially modify the paths starting at $2,\ldots, j$ to make them right greedy. That is, make the path starting at $2$ right greedy, then the one starting at $3$, and so on.
\end{itemize} Then, the index $k$ indicates where the path starting at $1$ ends while the index $j$ tells us which paths are right greedy and which are left greedy. \begin{example}\label{ex:somePathsCollectionsInLGVDiagram} In the case of $n=4,5$ there are respectively $6$ and $10$ minors $M_{j,k}$. We illustrate some examples of how $M_{1,1}$, $M_{1,2}$ and $M_{2,2}$ arise from unique non-intersecting path collections. For $k=j=1$ we obtain the following path collections which corresponds to the minor $M_{1,1}=(t_1\cdots t_5)^2t_{6}$ for $n=4$ and $M_{1,1}=(t_1\cdots t_9)^2t_{10}$ for $n=5$. \begin{center} \begin{figure}[ht] \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (5.5,4); \coordinate (r8) at (5.5,3.5); \coordinate (r9) at (5.5,3); \coordinate (r10) at (5.5,2.5); \coordinate (r5) at (5.5,2); \coordinate (r4) at (5.5,1.5); \coordinate (r3) at (5.5,1); \coordinate (r2) at (5.5,0.5); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (5.5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (5.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (5.5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); 
\coordinate (v97) at (5.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (6,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (6,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (6,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (6,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (6,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (6,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (6,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (6,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=black, line width=1pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (v33); \draw[draw=black, line width=1pt] (v33) -- (v34); \draw[draw=black, line width=1pt] (v34) -- (v35); \draw[draw=black, line width=1pt] (v35) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); \draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=dviolet, line 
width=2pt] (v44) -- (v45); \draw[draw=dviolet, line width=2pt] (v45) -- (v46); \draw[draw=dviolet, line width=2pt] (v46) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line width=1pt] (v51) -- (v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (v103); \draw[draw=black, line width=1pt] (v103) -- (v104); \draw[draw=black, line width=1pt] (v104) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=dgreen, line width=2pt] (v96) -- (v97); \draw[draw=dgreen, line width=2pt] (v97) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=blue, line width=2pt] (v84) -- (v85); \draw[draw=blue, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=red, line width=2pt] (v72) -- (v73); \draw[draw=red, line width=2pt] (v73) -- (r7); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\scriptsize \textcolor{blue}{$t_{1}$}}; \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\scriptsize \textcolor{red}{$-t_{1}$}}; \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{2}$}}; \draw[draw=red, line width=2pt, ->] (v92) -- (v81) node[below right] {\scriptsize \textcolor{red}{$-t_{2}$}}; \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{3}$}}; \draw[draw=red, line width=2pt, ->] (v82) -- (v71) node[below right] {\scriptsize \textcolor{red}{$-t_{3}$}}; \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{4}$}}; \draw[draw=blue, line width=2pt, ->] (v102) -- (v93) node[below right] {\scriptsize \textcolor{blue}{$-t_{4}$}}; \draw[draw=dviolet, line width=2pt, ->] (v33) -- (v44) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_5$}}; \draw[draw=blue, line width=2pt, ->] (v94) -- (v83) node[below right] {\scriptsize \textcolor{blue}{$-t_{5}$}}; \draw[draw=black, line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\scriptsize $t_{6}$}; \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. 
(v95) node[midway, below left] {\scriptsize \textcolor{dgreen}{$-t_{6}$}}; \end{tikzpicture} \quad \begin{tikzpicture} \coordinate (l6) at (0,4.5); \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (l1) at (0,0); \coordinate (r6) at (7,4.5); \coordinate (r7) at (7,4); \coordinate (r8) at (7,3.5); \coordinate (r9) at (7,3); \coordinate (r10) at (7,2.5); \coordinate (r5) at (7,2); \coordinate (r4) at (7,1.5); \coordinate (r3) at (7,1); \coordinate (r2) at (7,0.5); \coordinate (r1) at (7,0); \coordinate (v11) at (2.5,0); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (6,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (6,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (6,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); \coordinate (v97) at (6,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \coordinate (v61) at (2.5,4.5); \draw[black] (-0.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale 
= 0.8] {$2$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (7.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (7.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (7.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (7.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (7.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (7.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (7.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (7.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (7.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (7.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[draw=dorange, line width=2pt] (l1) -- (v11); \draw[draw=black, line width=1pt] (v11) -- (r1); \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=dorange, line width=2pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (v33); \draw[draw=black, line width=1pt] (v33) -- (v34); \draw[draw=dorange, line width=2pt] (v34) -- (v35); \draw[draw=black, line width=1pt] (v35) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); \draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=dviolet, line width=2pt] 
(v44) -- (v45); \draw[draw=black, line width=1pt] (v45) -- (v46); \draw[draw=dorange, line width=2pt] (v46) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line width=1pt] (v51) -- (v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (v103); \draw[draw=dviolet, line width=2pt] (v103) -- (v104); \draw[draw=black, line width=1pt] (v104) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=black, line width=1pt] (v96) -- (v97); \draw[draw=dviolet, line width=2pt] (v97) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=black, line width=1pt] (v84) -- (v85); \draw[draw=dgreen, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=black, line width=1pt] (v72) -- (v73); \draw[draw=blue, line width=2pt] (v73) -- (r7); \draw[draw=black, line width=1pt] (l6) -- (v61); \draw[draw=red, line width=2pt] (v61) -- (r6); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\scriptsize \textcolor{blue}{$t_{1}$}}; \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\scriptsize \textcolor{red}{$-t_{1}$}}; \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{2}$}}; \draw[draw=red, line width=2pt, ->] (v92) -- (v81) node[below right] {\scriptsize \textcolor{red}{$-t_{2}$}}; \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{3}$}}; \draw[draw=red, line width=2pt, ->] (v82) -- (v71) node[below right] {\scriptsize \textcolor{red}{$-t_{3}$}}; \draw[draw=dorange, line width=2pt, ->] (v11) -- (v22) node[below right=0.5mm] {\scriptsize \textcolor{dorange}{$t_{4}$}}; \draw[draw=red, line width=2pt, ->] (v72) -- (v61) node[below right] {\scriptsize \textcolor{red}{$-t_{4}$}}; \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{5}$}}; \draw[draw=blue, line width=2pt, ->] (v102) -- (v93) node[below right] {\scriptsize \textcolor{blue}{$-t_{5}$}}; \draw[draw=dviolet, line width=2pt, ->] (v33) -- (v44) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{6}$}}; \draw[draw=blue, line width=2pt, ->] (v94) -- (v83) node[below right] {\scriptsize \textcolor{blue}{$-t_{6}$}}; \draw[draw=dorange, line width=2pt, ->] (v23) -- (v34) node[below right=0.5mm] {\scriptsize \textcolor{dorange}{$t_{7}$}}; \draw[draw=blue, line width=2pt, ->] (v84) -- (v73) node[below right] {\scriptsize \textcolor{blue}{$-t_{7}$}}; \draw[draw=dviolet, line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\scriptsize \textcolor{dviolet}{$t_{8}$}}; \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. 
(v95) node[midway, below left] {\scriptsize \textcolor{dgreen}{$-t_{8}$}}; \draw[draw=dorange, line width=2pt, ->] (v35) -- (v46) node[below right=0.5mm] {\scriptsize \textcolor{dorange}{$t_{9}$}}; \draw[draw=dgreen, line width=2pt, ->] (v96) -- (v85) node[below right] {\scriptsize \textcolor{dgreen}{$-t_{9}$}}; \draw[line width=2pt, ->] (v47) -- (v54)node[below right=0.5mm] {\scriptsize $t_{10}$}; \draw[draw=dviolet, line width=2pt, ->] (v104) -- (v97) node[below right] {\scriptsize \textcolor{dviolet}{$-t_{10}$}}; \end{tikzpicture} \caption{Illustration of the path collection corresponding to $M_{1,1}$.} \end{figure} \vspace{-1.5em} \end{center} For $k=2,j=1$ we modify the path starting at $1$ and obtain the following path collections corresponding to $M_{1,2}=(t_1t_2t_3t_4)^2t_5t_{6}$ for $n=4$ and $M_{1,2}=(t_1\cdots t_8)^2t_9t_{10}$ for $n=5$. \begin{center} \begin{figure}[ht] \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (5.5,4); \coordinate (r8) at (5.5,3.5); \coordinate (r9) at (5.5,3); \coordinate (r10) at (5.5,2.5); \coordinate (r5) at (5.5,2); \coordinate (r4) at (5.5,1.5); \coordinate (r3) at (5.5,1); \coordinate (r2) at (5.5,0.5); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (5.5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (5.5,2); \coordinate (v101) at (1,2.5); \coordinate 
(v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (5.5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); \coordinate (v97) at (5.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (6,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (6,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (6,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (6,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (6,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (6,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (6,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (6,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=black, line width=1pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); 
\draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=black, line width=1pt] (v44) -- (v45); \draw[draw=black, line width=1pt] (v45) -- (v46); \draw[line width=2pt] (v46) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line width=1pt] (v51) -- (v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (v103); \draw[draw=black, line width=1pt] (v103) -- (v104); \draw[draw=black, line width=1pt] (v104) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=dgreen, line width=2pt] (v96) -- (v97); \draw[draw=dgreen, line width=2pt] (v97) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=blue, line width=2pt] (v84) -- (v85); \draw[draw=blue, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=red, line width=2pt] (v72) -- (v73); \draw[draw=red, line width=2pt] (v73) -- (r7); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\scriptsize \textcolor{blue}{$t_{1}$}}; \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\scriptsize \textcolor{red}{$-t_{1}$}}; \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{2}$}}; \draw[draw=red, line width=2pt, ->] (v92) -- (v81) node[below right] {\scriptsize \textcolor{red}{$-t_{2}$}}; \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{3}$}}; \draw[draw=red, line width=2pt, ->] (v82) -- (v71) node[below right] {\scriptsize \textcolor{red}{$-t_{3}$}}; \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{4}$}}; \draw[draw=blue, line width=2pt, ->] (v102) -- (v93) node[below right] {\scriptsize \textcolor{blue}{$-t_{4}$}}; \draw[line width=2pt, ->] (v33) -- (v44) node[below right=0.5mm] {\scriptsize $t_5$}; \draw[draw=blue, line width=2pt, ->] (v94) -- (v83) node[below right] {\scriptsize \textcolor{blue}{$-t_{5}$}}; \draw[draw=black, line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\scriptsize $t_{6}$}; \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. 
(v95) node[midway, below left] {\scriptsize \textcolor{dgreen}{$-t_{6}$}}; \end{tikzpicture} \quad \begin{tikzpicture} \coordinate (l6) at (0,4.5); \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (l1) at (0,0); \coordinate (r6) at (7,4.5); \coordinate (r7) at (7,4); \coordinate (r8) at (7,3.5); \coordinate (r9) at (7,3); \coordinate (r10) at (7,2.5); \coordinate (r5) at (7,2); \coordinate (r4) at (7,1.5); \coordinate (r3) at (7,1); \coordinate (r2) at (7,0.5); \coordinate (r1) at (7,0); \coordinate (v11) at (2.5,0); \coordinate (v21) at (2,0.5); \coordinate (v22) at (2.5,0.5); \coordinate (v23) at (4,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v34) at (4,1); \coordinate (v35) at (5.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v46) at (5.5,1.5); \coordinate (v47) at (6,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v54) at (6,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v104) at (6,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v96) at (5.5,3); \coordinate (v97) at (6,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v85) at (5.5,3.5); \coordinate (v71) at (2,4); \coordinate (v72) at (2.5,4); \coordinate (v73) at (4,4); \coordinate (v61) at (2.5,4.5); \draw[black] (-0.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale 
= 0.8] {$2$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (7.5,0) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (7.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (7.5,1) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (7.5,1.5) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (7.5,2) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (7.5,2.5) node [xscale = 0.8, yscale = 0.8] {$10$}; \draw[black] (7.5,3) node [xscale = 0.8, yscale = 0.8] {$9$}; \draw[black] (7.5,3.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (7.5,4) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (7.5,4.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[draw=dorange, line width=2pt] (l1) -- (v11); \draw[draw=black, line width=1pt] (v11) -- (r1); \draw[draw=dviolet, line width=2pt] (l2) -- (v21); \draw[draw=black, line width=1pt] (v21) -- (v22); \draw[draw=dorange, line width=2pt] (v22) -- (v23); \draw[draw=black, line width=1pt] (v23) -- (r2); \draw[draw=dgreen, line width=2pt] (l3) -- (v31); \draw[draw=black, line width=1pt] (v31) -- (v32); \draw[draw=dviolet, line width=2pt] (v32) -- (v33); \draw[draw=black, line width=1pt] (v33) -- (v34); \draw[draw=dorange, line width=2pt] (v34) -- (r3); \draw[draw=blue, line width=2pt] (l4) -- (v41); \draw[draw=black, line width=1pt] (v41) -- (v42); \draw[draw=dgreen, line width=2pt] (v42) -- (v43); \draw[draw=black, line width=1pt] (v43) -- (v44); \draw[draw=dviolet, line width=2pt] (v44) -- (v45); \draw[draw=black, line width=1pt] 
(v45) -- (r4); \draw[draw=red, line width=2pt] (l5) -- (v51); \draw[draw=black, line width=1pt] (v51) -- (v52); \draw[draw=dgreen, line width=2pt] (v52) -- (v53); \draw[draw=black, line width=1pt] (v53) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (v101); \draw[draw=blue, line width=2pt] (v101) -- (v102); \draw[draw=black, line width=1pt] (v102) -- (v103); \draw[draw=dviolet, line width=2pt] (v103) -- (v104); \draw[draw=black, line width=1pt] (v104) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (v91); \draw[draw=red, line width=2pt] (v91) -- (v92); \draw[draw=black, line width=1pt] (v92) -- (v93); \draw[draw=blue, line width=2pt] (v93) -- (v94); \draw[draw=black, line width=1pt] (v94) -- (v95); \draw[draw=dgreen, line width=2pt] (v95) -- (v96); \draw[draw=black, line width=1pt] (v96) -- (v97); \draw[draw=dviolet, line width=2pt] (v97) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (v81); \draw[draw=red, line width=2pt] (v81) -- (v82); \draw[draw=black, line width=1pt] (v82) -- (v83); \draw[draw=blue, line width=2pt] (v83) -- (v84); \draw[draw=black, line width=1pt] (v84) -- (v85); \draw[draw=dgreen, line width=2pt] (v85) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (v72); \draw[draw=black, line width=1pt] (v72) -- (v73); \draw[draw=blue, line width=2pt] (v73) -- (r7); \draw[draw=black, line width=1pt] (l6) -- (v61); \draw[draw=red, line width=2pt] (v61) -- (r6); \draw[draw=blue, line width=2pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\scriptsize \textcolor{blue}{$t_{1}$}}; \draw[draw=red, line width=2pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\scriptsize \textcolor{red}{$-t_{1}$}}; \draw[draw=dgreen, line width=2pt, ->] (v31) -- (v42) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{2}$}}; \draw[draw=red, line width=2pt, ->] (v92) -- (v81) node[below right] {\scriptsize \textcolor{red}{$-t_{2}$}}; \draw[draw=dviolet, line width=2pt, ->] (v21) -- (v32) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_{3}$}}; \draw[draw=red, line width=2pt, ->] (v82) -- (v71) node[below right] {\scriptsize \textcolor{red}{$-t_{3}$}}; \draw[draw=dorange, line width=2pt, ->] (v11) -- (v22) node[below right=0.5mm] {\scriptsize \textcolor{dorange}{$t_{4}$}}; \draw[draw=red, line width=2pt, ->] (v72) -- (v61) node[below right] {\scriptsize \textcolor{red}{$-t_{4}$}}; \draw[draw=dgreen, line width=2pt, ->] (v43) -- (v52) node[below right=0.5mm] {\scriptsize \textcolor{dgreen}{$t_{5}$}}; \draw[draw=blue, line width=2pt, ->] (v102) -- (v93) node[below right] {\scriptsize \textcolor{blue}{$-t_{5}$}}; \draw[draw=dviolet, line width=2pt, ->] (v33) -- (v44) node[below right=0.5mm] {\scriptsize \textcolor{dviolet}{$t_6$}}; \draw[draw=blue, line width=2pt, ->] (v94) -- (v83) node[below right] {\scriptsize \textcolor{blue}{$-t_{6}$}}; \draw[draw=dorange, line width=2pt, ->] (v23) -- (v34) node[below right=0.5mm] {\scriptsize \textcolor{dorange}{$t_{7}$}}; \draw[draw=blue, line width=2pt, ->] (v84) -- (v73) node[below right] {\scriptsize \textcolor{blue}{$-t_{7}$}}; \draw[draw=dviolet, line width=2pt, ->] (v45) .. controls (5.25,1.75) and (5.25,2.25) .. (v103) node[midway, below left] {\scriptsize \textcolor{dviolet}{$t_{8}$}}; \draw[draw=dgreen, line width=2pt, ->] (v53) .. controls (4.75,2.25) and (4.75,2.75) .. 
(v95) node[midway, below left] {\scriptsize \textcolor{dgreen}{$-t_{8}$}}; \draw[line width=2pt, ->] (v35) -- (v46) node[below right=0.5mm] {\scriptsize $t_{9}$}; \draw[draw=dgreen, line width=2pt, ->] (v96) -- (v85) node[below right] {\scriptsize \textcolor{dgreen}{$-t_{9}$}}; \draw[line width=2pt, ->] (v47) -- (v54) node[below right=0.5mm] {\scriptsize $t_{10}$}; \draw[draw=dviolet, line width=2pt, ->] (v104) -- (v97) node[below right] {\scriptsize \textcolor{dviolet}{$-t_{10}$}}; \end{tikzpicture} \caption{Illustration of the path collection corresponding to $M_{1,2}$.} \end{figure} \vspace{-1.5em} \end{center} The path collection in \Cref{example:n5pathcollection} is obtained from the previous one by changing the path starting at $2$ from left greedy to right greedy, and it corresponds to $M_{2,2}=(t_1\cdots t_7)^2t_8t_9$ for $n=5$. \end{example} \begin{proof}[Proof of \Cref{prop:UniquePathColl}] For each $1\leq j \leq k \leq n-1$ we need to argue that the path collection consisting of a path from $1 \to n-k$, non-intersecting right greedy paths from sources $2,\ldots,j$, and left greedy paths from sources $j+1,\ldots,n$ is unique and has the claimed sink set. By \Cref{lem:LGVdiagramLeftGreedy} and \eqref{eq:curlyW0}, the left greedy path collection originating from $[n]$ is explicitly given by: \[ \begin{array}{ll} 1 \to n &\text{ if } n \text{ is odd}\\ 1 \to 2n &\text{ if } n \text{ is even} \end{array}, \quad 2 \to 2n-1, \quad 3 \to 2n-2, \quad \cdots, \quad n \to n+1. \] It also follows from \Cref{lem:LGVdiagramLeftGreedy} that the left greedy paths from $j+1,\ldots,n$ are unique. Moreover, combining \Cref{lem:LGVdiagramLeftGreedy} with the explicit description of the paths in the left greedy path collection above, we see that there is no path collection originating from any source set contained in $[n]$ other than $j+1,\ldots,n$ with the same sink set.
Then, we just need to check the result for the subcollection corresponding to the paths with source vertices $1,\ldots,j$. \smallskip We start by showing that there exists a unique path in the LGV diagram from $1$ to $n-k$. First, we describe the left greedy path ${\rm LGP}(1)$ starting at $1$. From $\mathcal{W}_0$ we see that the only arrow with a tail in strand $1$ is $a_1^{(1)}$, so it must be the first arrow in ${\rm LGP}(1)$. The unique arrow to the right of $a_1^{(1)}$ which starts on strand $2$ is $a_2^{(2)}$. Similarly, for $i\in[n-3]$, $a_{i+1}^{(i+1)}$ is the unique arrow originating on strand $i+1$ to the right of the head of $a_i^{(i)}$. Thus, ${\rm LGP}(1)$ begins with $a_1^{(1)}a_2^{(2)}a_3^{(3)}\cdots a_{n-2}^{(n-2)}$. There is then a unique arrow originating on strand $n-1$ to the right of the head of $a_{n-2}^{(n-2)}$. For $n$ odd, it is $a_{n-1}^{(n-1)}$, and for $n$ even, $b_{n-1}^{(n/2)}$. Accordingly, for $1\leq k \leq n-1$ the unique path from $1$ to $n-k$ follows ${\rm LGP}(1)$ until it reaches strand $n-k$, and then terminates horizontally. Explicitly, it is given by the arrows $a_1^{(1)}a_2^{(2)}\cdots a_{n-k-1}^{(n-k-1)}$. \smallskip When we change ${\rm LGP}(2)$ to the right greedy path originating at $2$, it only uses the arrows forced by the fact that the bottom $n-k$ strands are blocked at some point by the path $1 \to n-k$. That is, the path $2 \to n-k+1$ is given by $a_2^{(1)}a_3^{(2)}\cdots a_{n-k}^{(n-k-1)}$. Similarly, the path starting at $3$ will only use the arrows forced by the fact that the path $2\to n-k+1$ at some point blocks the bottom $n-k+1$ strands, and so on. Continuing this argument, the path $j \to n-k+j-1$ is given by $a_{j}^{(1)}a_{j+1}^{(2)}\cdots a_{n-k+j-2}^{(n-k-1)}$. In particular, these paths have the desired sinks and each arrow they use is forced. Moreover, note that since these paths all terminate at sinks in $[n]$, they do not use any arrows that skip over other arrows.
Thus, the path originating at $i$ must terminate below the path originating at $i+1$ for $1\leq i < j$. Hence, the path collection described is the unique non-intersecting path collection from $\{1,\ldots, j\}$ to $\{n-k, \ldots, n-k+j-1\}$. \end{proof} \begin{lemma}\label{lem:determiningtheparameters} Let $X=X(t_1,\ldots, t_N)$ be as in \Cref{subsec:LieTheory} for some parameters $t_i \in \mathbb{R}^*$. For $1 \leq j \leq k \leq n-1$, let us denote by $\#(j,k)$ the index \[ \#(j,k) := N - \left(\binom{k}{2} + (j-1)\right). \] Then, the signs of the minors $\big\{M_{j',k'} \colon (j',k')\leq (j,k) \textnormal{ in reverse lexicographic order}\big\}$ determine the sign of $t_{\#(j,k)}$. Moreover, the indeterminates $t_i$ can be written as Laurent monomials in the minors $M_{j,k}$. \end{lemma} \begin{proof} We will prove the result by reverse induction on $\#(j,k)$. Equivalently, we induct on $(j,k)$ in reverse lexicographic order. By \Cref{prop:UniquePathColl}, to determine $M_{j,k}$ we need to look at the weights appearing in the path collection given by \[ 1 \to n-k, \quad \cdots, \quad j \to n-k+j-1, \quad j+1 \to 2n-j, \quad \cdots, \quad n \to n+1. \] We will denote this path collection as $(n-k,\ldots,n-k+j-1,2n-j,\ldots, n+1)$. We know from \Cref{prop:UniquePathColl} that each $M_{j,k}$ is a monomial in the $t_i$ with coefficient $\pm1$. We write $M_{j,k}=\epsilon_{j,k}\mathcal{M}_{j,k}$, where $\epsilon_{j,k}\in \{1,-1\}$ and $\mathcal{M}_{j,k}$ is a monomial in the $t_i$ with coefficient $1$. We will show in \Cref{lem:Mjkpositive} that $\epsilon_{j,k}=1$, but for the purposes of this lemma, it suffices to work with $\mathcal{M}_{j,k}$ instead of $M_{j,k}$. We denote the minor corresponding to the left greedy path collection via \Cref{LGVlemma} as $M=\epsilon \mathcal{M}$, where $\mathcal{M}$ is a monomial in the $t_i$ with coefficient $1$.
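As $(j,k)$ ranges over $1\leq j \leq k \leq n-1$, the index $\#(j,k)$ takes each value in $\{1,\ldots,N\}$ exactly once (with $N=\binom{n}{2}$), which is what makes the reverse induction well founded. A minimal sketch checking this bijectivity numerically (the helper name \texttt{index\_jk} is ours, not the paper's):

```python
from math import comb

def index_jk(j, k, n):
    """#(j,k) = N - (binom(k,2) + (j-1)), where N = binom(n,2)."""
    return comb(n, 2) - (comb(k, 2) + (j - 1))

# For each n, the values #(j,k) over 1 <= j <= k <= n-1 tile {1, ..., N}.
for n in range(2, 16):
    N = comb(n, 2)
    values = sorted(index_jk(j, k, n)
                    for k in range(1, n) for j in range(1, k + 1))
    assert values == list(range(1, N + 1))
```

In particular, the largest index is $\#(1,1)=N$ and the smallest is $\#(n-1,n-1)=1$, matching the order in which the signs of the $t_i$ are determined below.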
By \Cref{lem:LGVdiagramLeftGreedy}, the left greedy path collection originating from $[n]$ is unique and uses all arrows in the LGV diagram, so $\mathcal{M}=(t_1\cdots t_N)^2$. For $k=1, j=1$, the unique path collection $(n-1,2n-1,\ldots, n+1)$ is almost identical to the left greedy path collection originating from $[n]$; only the path originating at $1$ is modified. Recall from the proof of \Cref{prop:UniquePathColl} that for $k=1$ the unique path $1 \to n-1$ uses all the arrows other than the last one in ${\rm LGP}(1)$. Thus, in the path collection corresponding to $M_{1,1}$, the only arrow which is not used is $a_{n-1}^{(n-1)}$ if $n$ is odd and $b_{n-1}^{(n/2)}$ if $n$ is even. Since $\lambda(a_{n-1}^{(n-1)})=N$ for $n$ odd and $\lambda(b_{n-1}^{(n/2)})=N$ for $n$ even, $\mathcal{M}_{1,1}=(\prod_{i=1}^{N-1}t_i^2)t_N$. Also, since $\mathcal{M}$ is a perfect square, $\mathcal{M}_{1,1}$ determines the sign of $t_N=\frac{\mathcal{M}}{\mathcal{M}_{1,1}}$. We use the following lemma to complete the proof for $k=1, j=1$. \begin{lemma} The minor $M$ can be written as a Laurent monomial in $\{M_{1,1},\ldots,M_{n-1,n-1}\}$. \end{lemma} \begin{proof} We continue to work with $\mathcal{M}$ and $\mathcal{M}_{j,k}$, dealing with the sign analysis later. Note that $\mathcal{M} = \Pf(A)^2$ if $n$ is even and $\mathcal{M}=\Pf_{[n-1]}(A)^2$ if $n$ is odd. Thus, it suffices to determine $|\Pf(A)|$ or $|\Pf_{[n-1]}(A)|$, for $n$ even or odd, respectively, as a Laurent monomial in $\{\mathcal{M}_{1,1},\ldots, \mathcal{M}_{n-1,n-1}\}$. Using \cite[Theorem 1]{Weyman}, we can write the minors of a skew-symmetric matrix in terms of its Pfaffians.
Applying this to the (unsigned) minors $\mathcal{M}_{j,k}$ of $A$ with $j=k$ yields \begin{align*} &\mathcal{M}_{k,k} = |\Pf_{[n-k-1]}(A) \ \Pf_{1,\ldots,n-k,n}(A)| &&\text{if } n-k \text{ is odd}\\ &\mathcal{M}_{k,k} = | \Pf_{[n-k]}(A) \ \Pf_{1,\ldots,n-k-1,n}(A)| &&\text{if } n-k \text{ is even.} \end{align*} Note that there is a common term appearing in both $\mathcal{M}_{k,k}$ and $\mathcal{M}_{k-1,k-1}$. Explicitly, \[ \frac{\mathcal{M}_{k-1,k-1}}{\mathcal{M}_{k,k}} =\begin{cases} \left|\frac{\Pf_{[n-k+1]}(A)}{\Pf_{[n-k-1]}(A)}\right|, &\text{if } n-k \text{ is odd}\\[8pt] \left|\frac{\Pf_{1,\ldots,n-k+1,n}(A)}{\Pf_{1,\ldots,n-k-1,n}(A)}\right|,&\text{if } n-k \text{ is even.} \end{cases} \] This implies that \[ \begin{cases} |\Pf_{[n-k+1]}(A)|=\frac{|\Pf_{[n-k-1]}(A)| \ \mathcal{M}_{k-1,k-1}}{\mathcal{M}_{k,k}},&\text{if } n-k \text{ is odd}\\[8pt] |\Pf_{1,\ldots, n-k+1,n}(A)|=\frac{|\Pf_{1,\ldots, n-k-1,n}(A)| \ \mathcal{M}_{k-1,k-1}}{\mathcal{M}_{k,k}}, &\text{if } n-k \text{ is even.} \end{cases} \] Observe that $\mathcal{M}_{n-1,n-1}=|\Pf_{1,n}(A)|$. By reverse induction on $k$, we can write $|\Pf_{1,\ldots,n-k,n}(A)|$ if $n-k$ is odd and $|\Pf_{[n-k]}(A)|$ if $n-k$ is even as a Laurent monomial in $\{\mathcal{M}_{k,k},\ldots, \mathcal{M}_{n-1,n-1}\}$. In particular, for $k=1$ we obtain expressions for $\Pf_{[n-1]}(A)$ for $n$ odd and $\Pf(A)$ for $n$ even as Laurent monomials in the $\mathcal{M}_{k,k}$, as desired. \end{proof} Assume we have determined the signs of $t_N,\ldots, t_{\#(j,k)+1}$ and we have written them as Laurent monomials in $\mathcal{M}_{j',k'}$ for $(j',k')\leq (j,k)$. \smallskip For $j=1$, the minor $M_{1,k}$ corresponds to the path collection using the unique path $1 \to n-k$ and the left greedy path collection originating from $2,\ldots, n$. This differs from the path collection corresponding to $M_{1,1}$ only in the path starting at $1$.
For an illustration of this fact, we refer the reader to \Cref{ex:somePathsCollectionsInLGVDiagram}. Hence, $\frac{\mathcal{M}_{1,1}}{\mathcal{M}_{1,k}}= t_{i_1}\cdots t_{i_{k-1}}$ where $i_1=\lambda(a_{n-k}^{(n-k)})<i_2=\lambda(a_{n-k+1}^{(n-k+1)})< \cdots < i_{k-1}=\lambda(a_{n-2}^{(n-2)})<i_{k}=N$ are the indices of the labels of the last $k$ arrows in ${\rm LGP}(1)$. In other words, since $\mathcal{M}_{1,1}=(\prod_{i=1}^{N-1}t_i^2)t_N$, all the variables $t_i, 1\leq i \leq N$ other than $t_{i_1},\ldots, t_{i_{k-1}}, t_{i_k}=t_N$ appear squared in $\mathcal{M}_{1,k}$. We now prove that $i_1=\#(1,k)$, which allows us to express $t_{\#(1,k)}$ as a Laurent monomial in $\mathcal{M}_{1,1},\mathcal{M}_{1,k}$ and $\{t_i\colon i> \#(1,k)\}$. By induction, this proves for $j=1$ that the variables can be written as Laurent monomials in the unsigned minors $\mathcal{M}_{j',k'}$. Recall the sequence $\mathcal{W}_0$ of arrows appearing in the LGV graph and that, up to sign, $t_i$ is the weight of each of a pair of consecutive arrows in $\mathcal{W}_0$. To compute $i_1$, observe that $a_{n-k}^{(n-k)}$ is one of the last two arrows in some parentheses pair. For $n-k$ even, $a_{n-k}^{(n-k)}$ lies in parentheses pair $\frac{n-k}{2}$. Each parentheses pair is preceded by a pair of $b$ arrows. As each set of parentheses in parentheses pair $l$ contains $n-2l$ pairs of arrows, we have \begin{align} \begin{split}\label{eq:lambdaofalpha} \lambda\left(a_{n-k}^{(n-k)}\right) &= \sum_{l=1}^{\frac{n-k}{2}} \left[ 1+2(n-2l)\right] \\ &= \frac{n-k}{2}(1+2n)-2\frac{n-k}{2}\left(\frac{n-k}{2}+1\right)\\ &=\left(\frac{n-k}{2}\right)(n+k-1)\\ &=\frac{1}{2}(n(n-1)-k(k-1))=N-\binom{k}{2}. \end{split} \end{align} A similar calculation, taking into account that there is one extra set of parentheses of length $k-1$ and an extra pair of $b$ arrows, gives the same result for $n-k$ odd. Thus, $i_1=N-\binom{k}{2}=\#(1,k)$.
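For small values of $n$, the closed form in \eqref{eq:lambdaofalpha} can be double-checked by machine. A short Python sketch (the helper name \texttt{lam} is our own, ad hoc):

```python
from math import comb

def lam(n, k):
    # label index of a_{n-k}^{(n-k)} for n-k even: each of the first (n-k)/2
    # parentheses pairs contributes 1 label for its preceding pair of b arrows
    # plus 2(n-2l) labels for the n-2l pairs of arrows inside pair l
    return sum(1 + 2 * (n - 2 * l) for l in range(1, (n - k) // 2 + 1))

# check the closed form N - binom(k,2) for all even n-k with n up to 14
for n in range(3, 15):
    N = comb(n, 2)
    for k in range(1, n):
        if (n - k) % 2 == 0:
            assert lam(n, k) == N - comb(k, 2)
```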
\smallskip For $1<j\leq k$, the minor $M_{j,k}$ corresponds to the path collection $(n-k,\ldots, n-k+j-1,2n-j,\ldots,n+1)$. This is given by taking the path collection corresponding to $M_{j-1,k}$ and modifying the path starting at $j$. Concretely, we are removing arrows from ${\rm LGP}(j)$ in order to make it right greedy. Hence, $\frac{\mathcal{M}_{j-1,k}}{\mathcal{M}_{j,k}}= t_{i_1}\cdots t_{i_l}$ where $i_1\leq \ldots \leq i_l$ are the indices of the labels of the arrows in ${\rm LGP}(j)$ but not in the path $j \to n-k+j-1$. \smallskip Moreover, we can explicitly identify the first arrow, $\alpha_j$, along the path ${\rm LGP}(j)$ that is not on the path $j \to n-k+j-1$. Observe that by definition of a left greedy path, $\alpha_j$ is the first arrow starting on strand $n-k+j-1$ to the right of the head of $a_{n-k+j-2}^{(n-k-1)}$. As we can identify where $a_{n-k+j-2}^{(n-k-1)}$ lies in $\mathcal{W}_0$, we can identify $\alpha_j$: \begin{itemize} \item If $j<k$, then $n-k+j-1<n-1$ and the only arrows originating on strand $n-k+j-1$ are of the form $a_{n-k+j-1}^{(l)}$ for some $l$. If we also ask for this arrow to be to the right of the head of $a_{n-k+j-2}^{(n-k-1)}$, we must have $l\geq n-k-1$. Since the parentheses in \eqref{eq:curlyW0} have decreasing lower indices, we obtain $\alpha_j=a_{n-k+j-1}^{(n-k)}$. \item If $j=k$, then $n-k+j-1=n-1$ and, in addition to the $a_{n-1}^{(l)}$ arrows originating on strand $n-1$, we also have $b_{n-1}^{(l')}$ arrows. We must consider two cases. \begin{itemize} \item For $n-k-1$ odd, the arrow $a_{n-2}^{(n-k-1)}$ is in the first set of parentheses in parentheses pair $\frac{n-k}{2}$ of $\mathcal{W}_0$. Then, the next available arrow originating on strand $n-1$ is $\alpha_j=a_{n-1}^{(n-k)}$, which is in the second set of parentheses in the same parentheses pair. \item For $n-k-1$ even, $a_{n-2}^{(n-k-1)}$ is in the second set of parentheses in parentheses pair $m=\frac{n-k-1}{2}$ in $\mathcal{W}_0$.
Then, there is a jump arrow on strand $n-1$ before the next parentheses pair, implying $\alpha_j = b_{n-1}^{(m+1)}$. \end{itemize} \end{itemize} Now, we want to compute $\lambda(\alpha_j)$. When $\alpha_j=a_{n-k+j-1}^{(n-k)}$ it follows from \eqref{eq:curlyW0} that $\lambda(\alpha_j)=\lambda(a_{n-k}^{(n-k)})-(j-1)$. When $\alpha_j=b_{n-1}^{(m+1)}$, we claim the same is true. For $j=k=1$, we must have $n$ even and this follows from our analysis of the $j=k=1$ case earlier in this proof. For $j=k>1$, note that $\alpha_{j-1}=a_{n-2}^{(n-k)}$ and we have $\lambda(\alpha_{j-1})=\lambda(a_{n-k}^{(n-k)})-(j-2)$. As $\alpha_{j-1}=a_{n-2}^{(n-k)}$ is the first arrow in parentheses pair $m+1=\frac{n-k+1}{2}$ of $\mathcal{W}_0$, it follows that $\lambda(\alpha_j)=\lambda(\alpha_{j-1})-1=\lambda(a_{n-k}^{(n-k)})-(j-1)$, as claimed. In conclusion, using \eqref{eq:lambdaofalpha}, $i_1=\lambda(\alpha_j)=N-\binom{k}{2}-(j-1)=\#(j,k)$, which completes the proof. \end{proof} \subsection{Sign analysis}\label{subsec:signAnalysis} We now show that the parameters $(t_i, \ 1 \leq i \leq N)$ are positive if and only if the minors $(M_{j,k}, \ 1 \leq j \leq k \leq n-1)$ are positive. We begin with the following lemma. \begin{lemma}\label{lem:signXtoA} For $1\leq j \leq k \leq n-1$, we have \[ \Delta^{n-k,\ldots,n-k+j-1,n+1,n+2,\ldots,2n-j}(X)=(-1)^{j(n-1-k)}\Delta_{\{1,\ldots,n-k-1,n-k+j, \ldots, n\}}^{\{1,2,\ldots, n-j\}}(A). \] \end{lemma} \begin{proof} This follows immediately from Laplace expansion. \end{proof} \begin{lemma}\label{lem:Mjkpositive} If $t_1,\ldots, t_N >0$ then the minors $M_{j,k}$ are positive for all $1\leq j \leq k \leq n-1$. \end{lemma} \begin{proof} Concretely, we prove that the $M_{j,k}$ are monomials in the $t_i$ with coefficient $1$. That is, using notation from \Cref{lem:determiningtheparameters}, for all $1\leq j\leq k\leq n-1$, $\epsilon_{j,k}=1$ where $M_{j,k}=\epsilon_{j,k}\mathcal{M}_{j,k}$.
\smallskip Note that we can divide the LGV diagram into two parts: from strand $1$ to $n$ (\textit{lower part}) and from strand $2n$ to $n+1$ (\textit{upper part}). These are connected by the ``jump arrows'' $b_i^{(j)}$. The lower part of the LGV diagram has all weights positive and the upper part has all weights negative. Therefore, to determine the sign of the minors $M_{j,k}$, we need to count the number of arrows in the upper part of the diagram and the number of jump arrows of negative weight used by the corresponding path collection; we also must take into account the sign from the LGV lemma, \Cref{LGVlemma}, the sign from \Cref{lem:signXtoA}, and the sign appearing in the definition of $M_{j,k}$. \smallskip The path collection corresponding to $M_{j,k}$ is $(n-k,\ldots, n-k+j-1,2n-j,\ldots,n+1)$. Since the paths starting at $1,\ldots, j$ remain in the lower part of the diagram, their weights are positive. The paths starting at $j+1,\ldots,n$ are left greedy paths which use some arrows in the lower part of the diagram then take one jump arrow and continue in the upper part. Explicitly, they use all the arrows in the upper part of the graph coming from the first $n-j$ sets of parentheses in \eqref{eq:curlyW0}. Each path uses all the arrows in the upper part of the graph coming from a single set of parentheses. For an illustration of this fact, we refer the reader to \Cref{ex:somePathsCollectionsInLGVDiagram}. If $n-j$ (the number of left greedy paths) is even then we can pair paths with consecutive sources: \[ \big\{\LGP(n), \LGP(n-1)\big\}, \big\{\LGP(n-2),\LGP(n-3)\big\},\ldots, \big\{\LGP(n-j+1),\LGP(n-j)\big\}. \] Each pair of paths then uses all the arrows in the upper part of the graph coming from a parentheses pair in \eqref{eq:curlyW0}. Each set of parentheses in a parentheses pair has the same number of arrows coming from the upper part of the graph. Thus, these arrows contribute an even number of factors of $-1$. 
Then, the sign given by the arrows depends on the parity of the number of jump arrows of negative weight used. Each parentheses pair is preceded by a pair of jump arrows, one each of positive and negative weight. So, there are exactly $\frac{n-j}{2}$ jump arrows of negative weight used by the path collection contributing a sign of $(-1)^{\frac{n-j}{2}}$. The permutation that sorts $(n-k,\ldots, n-k+j-1,2n-j,\ldots,n+1)$ can be written as the product of $\frac{n-j}{2}$ transpositions corresponding to exchanging $n+a$ and $2n-j-a+1$ for $1\leq a\leq \frac{n-j}{2}$. Thus, the sign contributed by \Cref{LGVlemma} is $(-1)^{\frac{n-j}{2}}$. If $n-j$ is odd, a similar argument shows that the sign contributed by arrows of negative weight is $(-1)^{\frac{n-j-1}{2}+n+1}$ and the sign contributed by \Cref{LGVlemma} is $(-1)^{\frac{n-j-1}{2}}$. In either case, the sign contributed by arrows of negative weight and by \Cref{LGVlemma} together is $(-1)^{j(n-1)}$. Multiplying this by the sign $(-1)^{j(n-1-k)}$ contributed by \Cref{lem:signXtoA}, we obtain a sign of $(-1)^{jk}$. This matches the sign that appears in \Cref{def:SpecialMinorsPfaff}, the definition of $M_{j,k}$. \end{proof} \begin{lemma}\label{lem:Mjkdeterminematrix} Let $A$ be a $n \times n$ skew-symmetric matrix such that $M_{j,k}(A) \neq 0$ for all $1\leq j \leq k\leq n-1$. Then, these minors fully determine the matrix $A$. \end{lemma} \begin{proof} Let $U$ be the set of matrices $A$ in $\SS_n$ such that $M_{j,k}(A) \neq 0$ for any $1 \leq j \leq k \leq n-1$. Similar to \Cref{def:LusztigPositive}, we denote by $V$ the following set \[ V := \Big\{ A(t_1, \dots, t_N) \colon t_1,\dots,t_N \in \RR^\ast \Big\}, \] where we recall that $A(t_1,\dots,t_N)$ is the $n\times n$ skew-symmetric matrix with polynomial entries in the parameters $t_i$ obtained after row-reducing in \eqref{eq:posorthogonalgrassmannian}. We note that the sets $U$ and $V$ are both Zariski dense in $\OGr(n,2n)$. 
By \Cref{lem:Mjkpositive}, we have $V \subset U$. Let $\varphi \colon U \to V$ be a rational map which takes a skew-symmetric matrix $A$ in $U$, determines the $t_i$'s in the Marsh-Rietsch parametrization, as described in the proof of \Cref{lem:determiningtheparameters}, and then, using these $t_i$'s, outputs an element in $V$ via the map \eqref{eq:posorthogonalgrassmannian}. Note that the function $\varphi$ extends to a rational map $\tilde{\varphi} \colon \OGr(n,2n) \dashrightarrow \OGr(n,2n)$. Restricting $\tilde{\varphi}$ to $V$ gives the identity and, since $V$ is Zariski dense in $\OGr(n,2n)$, we deduce that $\tilde{\varphi} = \mathrm{id}$. In particular, $\varphi$ as a function applied to $A\in U$ depends only on the minors $M_{j,k}(A)$ and outputs $A$. Thus, the minors $M_{j,k}(A)$ fully determine the matrix $A$. \end{proof} \begin{proof}[\bf Proof of \Cref{thm:Main}] By \Cref{lem:determiningtheparameters} and \Cref{lem:Mjkpositive}, for a point $A(t_1, \dots, t_N)$ with $t_i \in \RR^\ast$, the parameters $t_i$ are positive if and only if all the $M_{j,k}(A)$ are. The first statement of the result then follows from \Cref{lem:Mjkdeterminematrix}. For the second statement, observe that any positivity test of $\OGr^{>0}(n,2n)$ using regular functions $f_1, \dots, f_r$ can be turned into a positivity test for the positive orthant $(\RR_{>0})^{N}$ using the Marsh-Rietsch parametrization. The minimal number of inequalities that cut out the orthant $(\RR_{>0})^N$ in $\RR^N$ is $N$, so we deduce that $r \geq N$. The positivity test in \Cref{thm:Main} uses exactly $N = \binom{n}{2}$ inequalities, so it is indeed a minimal positivity test.
We now prove \Cref{thm:Nonnegative}, which provides a nonnegativity criterion for skew-symmetric matrices. To do so, we use \Cref{lem:semiGroup}, which exploits the semigroup property of $\SO^{>0}(2n)$. \smallskip Recall that $\SO(2n)$ acts on $\OGr(n,2n)$ via matrix multiplication, i.e., given $M \in \SO(2n)$ and $X=\rowspan (N) \in \OGr(n,2n)$ we have $X \cdot M = \rowspan (NM)$. \begin{lemma}[]\label{lem:semiGroup} For any $Z \in \SO^{ > 0}(2n)$ and $X\in \OGr^{\geq 0}(n,2n)$, we have $X \cdot Z \in \OGr^{>0}(n,2n)$. \end{lemma} \begin{proof} In Proposition 3.2(c) and Proposition 8.17 in \cite{Lusztig1} this statement is proven for the complete flag variety. By Theorem 3.4 in \cite{Lusztig2} the result follows for partial flags by projecting. \end{proof} \begin{proof}[\bf Proof of \Cref{thm:Nonnegative}] The equivalence between (\ref{nonnegativeitem2}) and (\ref{nonnegativeitem3}) follows from \Cref{thm:Main}. To show the equivalence of (\ref{nonnegativeitem1}) and (\ref{nonnegativeitem2}), let $Z(\epsilon) \in \SO^{>0}(2n)$ be a smooth $1$-parameter family that converges to $\Id_{2n}$ as $\epsilon \to 0$. Fix $X \in \OGr^{\geq 0}(n,2n)$. Then, by virtue of \Cref{lem:semiGroup}, $X(\epsilon) := X \cdot Z(\epsilon)$ is totally positive. By our positivity criterion, \Cref{thm:Main}, we deduce that the minors $M_{j,k}(B(\epsilon))$ are positive for any $\epsilon > 0$. Conversely, suppose that the leading term in the Taylor expansion of all $M_{j,k}(B(\epsilon))$ is positive for small enough $\epsilon$. Then, using \Cref{thm:Main}, we deduce that $X(\epsilon) \in \OGr^{>0}(n,2n)$ for $\epsilon$ small enough. Since $Z(\epsilon) \xrightarrow[]{\epsilon \to 0} \Id_{2n}$, we conclude that $X(\epsilon) \xrightarrow[]{\epsilon \to 0} X$. This means that $X \in \overline{\OGr^{>0}(n,2n)} = \OGr^{\geq 0}(n,2n)$. Thus, (\ref{nonnegativeitem1}) and (\ref{nonnegativeitem2}) are equivalent. \smallskip We now show that we can choose $Z(\epsilon)$ to have polynomial entries in $\epsilon$.
The parametrization of $\OGr^{>0}(n,2n)$ in equation \eqref{eq:parameterizationCompleteFlag} is a projection of the parametrization of the complete flag variety given in \cite{MR}. Thus, for $\epsilon >0$, consider the following matrix representing a point in the positive complete flag variety for $\SO(2n)$: \[ z(\epsilon) = x_{j_1}(\epsilon)\cdots x_{j_{\hat{N}}}(\epsilon). \] We then have $ Z(\epsilon) := z(\epsilon)^Tz(\epsilon) \in \SO(2n)^{>0}$. To see why, recall that the Marsh-Rietsch parametrization of the totally positive complete flag variety is obtained by representing a complete flag with a matrix in the totally positive unipotent radical $U_{>0}^{+}$ of $\SO(2n)$. Using \cite[Section 11]{MR}, we observe that $z(\epsilon) \in U_{>0}^{+}$ and $z(\epsilon)^T \in U_{>0}^{-}$. It follows from the definition of $\SO(2n)^{>0}$, see \cite[2.12]{Lusztig1}, that $Z(\epsilon)$ is totally positive. Finally, note that as $\epsilon \to 0$ each $x_i(\epsilon)$ converges to the identity, so $Z(\epsilon) \xrightarrow[\epsilon \to 0]{} \Id_{2n}$. Moreover, since the entries of each $x_{j_i}(\epsilon)$ are polynomial in $\epsilon$, the same is true of the entries and minors of $Z(\epsilon)$.
\end{proof} \begin{example}[$n=4$]\label{ex:NonNegn4} The matrix $Z(\epsilon)$ from the parametrization in \cite[Section 11]{MR} is \[ \resizebox{1\textwidth}{!}{$\begin{bmatrix} 1&4\,\epsilon&10\,\epsilon^{2}&7\,\epsilon^{3}&-\epsilon^{6}&5\,\epsilon^{5}&-11\,\epsilon^{4}&13\,\epsilon^{3}\\ 4\,\epsilon&16\,\epsilon^{2}+1&40\,\epsilon^{3}+4\,\epsilon&28\,\epsilon^{4}+4\,\epsilon^{2}&-4\,\epsilon^{7}-\epsilon^{5}&20\,\epsilon^{6}+4\,\epsilon^{4}&-44\,\epsilon^{5}-7\,\epsilon^{3}&52\,\epsilon^{4}+6\,\epsilon^{2}\\ 10\,\epsilon^{2}&40\,\epsilon^{3}+4\,\epsilon&100\,\epsilon^{4}+16\,\epsilon^{2}+1&70\,\epsilon^{5}+16\,\epsilon^{3}+2\,\epsilon&-10\,\epsilon^{8}-4\,\epsilon^{6}-\epsilon^{4}&50\,\epsilon^{7}+16\,\epsilon^{5}+3\,\epsilon^{3}&-110\,\epsilon^{6}-28\,\epsilon^{4}-4\,\epsilon^{2}&130\,\epsilon^{5}+24\,\epsilon^{3}+2\,\epsilon\\ 7\,\epsilon^{3}&28\,\epsilon^{4}+4\,\epsilon^{2}&70\,\epsilon^{5}+16\,\epsilon^{3}+2\,\epsilon&49\,\epsilon^{6}+16\,\epsilon^{4}+4\,\epsilon^{2}+1&-7\,\epsilon^{9}-4\,\epsilon^{7}-2\,\epsilon^{5}-\epsilon^{3}&35\,\epsilon^{8}+16\,\epsilon^{6}+6\,\epsilon^{4}+2\,\epsilon^{2}&-77\,\epsilon^{7}-28\,\epsilon^{5}-8\,\epsilon^{3}-2\,\epsilon&91\,\epsilon^{6}+24\,\epsilon^{4}+4\,\epsilon^{2}\\ -\epsilon^{6}&-4\,\epsilon^{7}-\epsilon^{5}&-10\,\epsilon^{8}-4\,\epsilon^{6}-\epsilon^{4}&-7\,\epsilon^{9}-4\,\epsilon^{7}-2\,\epsilon^{5}-\epsilon^{3}&\epsilon^{12}+\epsilon^{10}+\epsilon^{8}+10\,\epsilon^{6}+36\,\epsilon^{4}+16\,\epsilon^{2}+1&-5\,\epsilon^{11}-4\,\epsilon^{9}-3\,\epsilon^{7}-14\,\epsilon^{5}-24\,\epsilon^{3}-4\,\epsilon&11\,\epsilon^{10}+7\,\epsilon^{8}+4\,\epsilon^{6}+8\,\epsilon^{4}+6\,\epsilon^{2}&-13\,\epsilon^{9}-6\,\epsilon^{7}-2\,\epsilon^{5}-3\,\epsilon^{3}\\ 
5\,\epsilon^{5}&20\,\epsilon^{6}+4\,\epsilon^{4}&50\,\epsilon^{7}+16\,\epsilon^{5}+3\,\epsilon^{3}&35\,\epsilon^{8}+16\,\epsilon^{6}+6\,\epsilon^{4}+2\,\epsilon^{2}&-5\,\epsilon^{11}-4\,\epsilon^{9}-3\,\epsilon^{7}-14\,\epsilon^{5}-24\,\epsilon^{3}-4\,\epsilon&25\,\epsilon^{10}+16\,\epsilon^{8}+9\,\epsilon^{6}+20\,\epsilon^{4}+16\,\epsilon^{2}+1&-55\,\epsilon^{9}-28\,\epsilon^{7}-12\,\epsilon^{5}-12\,\epsilon^{3}-4\,\epsilon&65\,\epsilon^{8}+24\,\epsilon^{6}+6\,\epsilon^{4}+4\,\epsilon^{2}\\ -11\,\epsilon^{4}&-44\,\epsilon^{5}-7\,\epsilon^{3}&-110\,\epsilon^{6}-28\,\epsilon^{4}-4\,\epsilon^{2}&-77\,\epsilon^{7}-28\,\epsilon^{5}-8\,\epsilon^{3}-2\,\epsilon&11\,\epsilon^{10}+7\,\epsilon^{8}+4\,\epsilon^{6}+8\,\epsilon^{4}+6\,\epsilon^{2}&-55\,\epsilon^{9}-28\,\epsilon^{7}-12\,\epsilon^{5}-12\,\epsilon^{3}-4\,\epsilon&121\,\epsilon^{8}+49\,\epsilon^{6}+16\,\epsilon^{4}+8\,\epsilon^{2}+1&-143\,\epsilon^{7}-42\,\epsilon^{5}-8\,\epsilon^{3}-2\,\epsilon\\ 13\,\epsilon^{3}&52\,\epsilon^{4}+6\,\epsilon^{2}&130\,\epsilon^{5}+24\,\epsilon^{3}+2\,\epsilon&91\,\epsilon^{6}+24\,\epsilon^{4}+4\,\epsilon^{2}&-13\,\epsilon^{9}-6\,\epsilon^{7}-2\,\epsilon^{5}-3\,\epsilon^{3}&65\,\epsilon^{8}+24\,\epsilon^{6}+6\,\epsilon^{4}+4\,\epsilon^{2}&-143\,\epsilon^{7}-42\,\epsilon^{5}-8\,\epsilon^{3}-2\,\epsilon&169\,\epsilon^{6}+36\,\epsilon^{4}+4\,\epsilon^{2}+1 \end{bmatrix}$.} \] We use this $1$-parameter family and \Cref{thm:Nonnegative} to check if the following matrix is in $\SS_4^{\geq 0}$: \[ A=\begin{bmatrix} 0&0&0&2\\ 0&0&0&0\\ 0&0&0&-2\\ -2&0&2&0 \end{bmatrix}. \] All the minors $M_{j,k}(A)$ are nonnegative. 
However, \begin{alignat*}{3} & M_{1,1}(B(\epsilon))= 80\epsilon^{5} + o(\epsilon^{5}), \quad M_{1,2}(B(\epsilon)) = 40\epsilon^{4} + o(\epsilon^{4}) , \quad M_{1,3}(B(\epsilon))= 16\epsilon^{2} + o(\epsilon^{2}) , \\ & M_{2,2}(B(\epsilon)) = 80\epsilon^{5} + o(\epsilon^{5}) , \quad M_{2,3}(B(\epsilon)) =\mathbf{-16}\,\epsilon^{2} + o(\epsilon^{2}), \\ & M_{3,3}(B(\epsilon)) = 2-8\,\epsilon + o(\epsilon). \end{alignat*} In particular, the leading coefficient of $M_{2,3}(B(\epsilon))$ is $-16$, which implies $A \notin \SS_4^{\geq 0}$. We provide Macaulay2 \cite{M2} code to carry out this calculation in \cite{M2Code}. \end{example} \subsection{Expressions in the Weyl group} In this subsection we give more details about concepts already introduced in \Cref{subsec:LieTheory}. Let $G$ be a reductive algebraic group and $T$ any maximal torus. Let $W=N_G(T)/T$ be its Weyl group. We will assume basic familiarity with the definition of Weyl groups and we refer the reader to \cite{HumphreysCoxeter} for further details. Let $s_1, \dots, s_n$ be simple reflections that generate $W$. We will reserve underlined letters for expressions (i.e., words) in the simple reflections. Given an expression $\underline{w}=s_{i_1}\cdots s_{i_l}$, we denote by the non-underlined letter $w$ the element $s_{i_1}\cdots s_{i_l}\in W$. We say an expression $\underline{w}=s_{i_1}\cdots s_{i_l}$ for $w\in W$ has length $\ell(\underline{w})=l$. The expression is \emph{reduced} if it is an expression for $w$ of minimal length and we write $\ell(w)$ for the length of a reduced expression for $w$. As before, we denote by $w_0$ the element of largest length in $W$. \begin{definition} For any subset $J\subset[n]$, we define the \textit{parabolic subgroup} $W_J$ of $W$ by $W_J\coloneqq\langle s_i\mid i\in J\rangle$. We write $W^J \coloneqq W_J \backslash W$ for the set of right cosets of $W_J$.
For each coset $A\in W^J$, there is a unique $w\in W$ of minimal length with $A = W_J \cdot w$ called \textit{minimal coset representative}. Given $w\in W$, we denote its minimal coset representative by $w^J$. \end{definition} For a reduced expression $\underline{v}=s_{i_1}\cdots s_{i_p}$ of an element $v\in W$, a \emph{subexpression} $\underline{u}$ of $\underline{v}$ is a choice of either $1$ or $s_{i_j}$ for each $j\in[p]$. We will record this by writing $\underline{u}$ as a string whose $j^{\textnormal{th}}$ entry is either $1$ or $s_{i_j}$. We may interpret the subexpression $\underline{u}$ as an expression for some $u\in W$ by removing the $1$s. We will say $\underline{u}$ is reduced if this expression is reduced. Let $\underline{v}$ be an expression for $v\in W$ and $\underline{u}$ be a subexpression of $\underline{v}$. For $k\geq 0$, we will let $\underline{u}_{(k)}$ be the subexpression of $\underline{v}$ which is identical to $\underline{u}$ in its last $k$ entries and is all $1$s beforehand. \begin{example}\label{ex:expressions} Let $n=3$. We work in the Weyl group of type D generated by the simple transpositions in \eqref{eq:WeylGroupGens}. Consider $\underline{v}=s_1s_2s_3s_1s_2$, an expression for $v=462$ (in one-line notation for signed permutations). An example of a subexpression is $\underline{u}=s_11s_3s_11$, which is an expression for $u=624$. We then have \[ \underline{u}_{(0)}=\underline{u}_{(1)}=11111,\quad \underline{u}_{(2)}=111s_11,\quad \underline{u}_{(3)}=\underline{u}_{(4)}=11s_3s_11, \quad \text{and} \quad \underline{u}_{(5)}=\underline{u}. \] In terms of Weyl group elements we get \[ u_{(0)} = u_{(1)}=123, \quad u_{(2)} = 213, \quad u_{(3)} = u_{(4)}=615, \quad \text{and} \quad u_{(5)} = u. \] \end{example} The \emph{Bruhat order} $(W,<)$ is defined by $u<v$ if there is a subexpression for $u$ in some, equivalently any, expression for $v$. 
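Computing the elements $u_{(k)}$ from a subexpression is purely mechanical. As a hedged illustration, the following Python sketch works in the symmetric group with adjacent transpositions (type A) rather than in the type D Weyl group of \Cref{ex:expressions}; all helper names and conventions below are our own.

```python
def s(i, n):
    # adjacent transposition s_i = (i, i+1) in one-line notation (1-indexed i)
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def mult(p, q):
    # composition of one-line permutations: (p * q)(x) = p(q(x))
    return tuple(p[q[x] - 1] for x in range(len(p)))

def u_k(word, keep, k, n):
    # word: indices i_1, ..., i_p of the expression s_{i_1} ... s_{i_p};
    # keep[j] is True where the subexpression chooses s_{i_j} instead of 1.
    # u_(k) agrees with the subexpression in its last k entries, with 1s before.
    p = tuple(range(1, n + 1))
    m = len(word)
    for j in range(m - k, m):
        if keep[j]:
            p = mult(p, s(word[j], n))
    return p

# the subexpression s_1 1 s_1 of the expression s_1 s_2 s_1 in S_3
word, keep = [1, 2, 1], [True, False, True]
print([u_k(word, keep, k, 3) for k in range(4)])
# → [(1, 2, 3), (2, 1, 3), (2, 1, 3), (1, 2, 3)]
```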
\begin{definition} We say that a subexpression $\underline{u}$ of $\underline{v}$ is \textit{distinguished}, denoted $\underline{u} \preceq \underline{v}$, if $u_{(j)}\leq s_{i_{p-j+1}}u_{(j-1)}$ for all $j\in[p]$. That is, if multiplying $u_{(j-1)}$ on the left by $s_{i_{p-j+1}}$ decreases the length of $u_{(j-1)}$, then $\underline{u}$ must contain $s_{i_{p-j+1}}$. \end{definition} \begin{example} Continuing \Cref{ex:expressions}, we have $\underline{u}\prec\underline{v}$. However, $\underline{w}=111s_11\npreceq\underline{v}$ because $w_{(5)}=s_1\nleq s_1w_{(4)} = {\rm id}$. This example highlights that, colloquially, distinguished subexpressions are leftmost subexpressions. \end{example} \begin{theorem}{\cite[Lemma 3.5]{MR}} Let $\underline{v}$ be a reduced expression for $v\in W$ and let $u<v$. Then there exists a unique subexpression $\underline{u}$ for $u$ which is reduced and distinguished in $\underline{v}$. We call this subexpression the \textbf{reduced distinguished subexpression} for $u$ in $\underline{v}$. \end{theorem} \begin{remark} As in \cref{rem:MRParamConvention}, our conventions and notation for distinguished subexpressions differ from \cite{MR} since we use row spans. Accordingly, we work with right cosets of $W$ and our definition of distinguished subexpressions is mirrored from \cite{MR}. \end{remark} \subsection{Richardson and Deodhar decompositions}\label{subsec:RichardsonDeodhar} The boundary structure of the totally nonnegative part of partial flag varieties is combinatorially rich and closely related to the theory of positroids in type A. In this section we describe this boundary structure for arbitrary flag varieties and then specialize to the case of the orthogonal Grassmannian. Our presentation follows \cite{KodamaAndWilliams}. \smallskip Recall that $G$ is a split reductive algebraic group over $\mathbb{R}$. Fix a pinning for $G$.
The reader unfamiliar with the general theory of split reductive algebraic groups over $\RR$ can continue to work with $G=\SO(2n)$, with the pinning introduced in \Cref{sec:2}. The complete flag variety $G/B$ can be identified with the variety of Borel subgroups $\mathcal{B}$ via conjugation: $gB \in G/B \leftrightarrow g\cdot B:= gBg^{-1} \in \mathcal{B}$. We can also lift the Weyl group elements $w \in W=N_G(T)/T$ to $G$ using the pinning. For each simple reflection $s_i \in W$, define \begin{equation}\label{eq:SiDot} \dot{s}_i := \varphi_i \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \end{equation} For a reduced expression $\underline{w}=s_{i_1}\cdots s_{i_k}$, define $\dot{w}=\dot{s}_{i_1}\cdots \dot{s}_{i_k}$. Then, the Bruhat decomposition of $G$ descends to the complete flag variety. The \emph{Richardson cells} of $G/B$ are intersections of opposite Bruhat cells. Concretely, given $v,w \in W$ we define \[ \Rcal_{v,w}:= (B\dot{w} \cdot B) \cap (B^{-}\dot{v} \cdot B). \] For $\underline{v},\underline{w}$ expressions of $v,w \in W$, Deodhar \cite{Deodhar85} defined the \emph{Deodhar component} $\Rcal_{\underline{v},\underline{w}}$. These components refine the Richardson cells: explicitly, for a fixed reduced expression $\underline{w}$ we have \[ \Rcal_{v,w} = \bigsqcup_{\underline{v} \prec \underline{w}} \Rcal_{\underline{v},\underline{w}}, \] where the disjoint union is over all distinguished subexpressions $\underline{v}$ of $\underline{w}$. For each Richardson cell $\Rcal_{v,w}$ there is a unique Deodhar component of maximal dimension, namely when we take $\underline{v}$ to be the unique reduced distinguished subexpression for $v$ in $\underline{w}$, which we denote $\underline{v}^{+}$.
We denote by $\mathcal{R}^{>0}_{v,w}$ the positive part $\mathcal{R}_{v,w}\cap (G/B)^{\geq0}$ of the Richardson cell $\mathcal{R}_{v,w}$ and by $\mathcal{R}^{>0}_{\underline{v},\underline{w}}$ the positive part $\mathcal{R}_{\underline{v},\underline{w}}\cap (G/B)^{\geq0}$ of the Deodhar component $\mathcal{R}_{\underline{v},\underline{w}}$. In each Richardson cell, the only Deodhar component that intersects $(G/B)^{\geq 0}$ is the Deodhar component of maximal dimension; that is, by \cite[Theorem 11.3]{MR}, \[ \Rcal_{v,w}^{>0}=\Rcal^{>0}_{\underline{v}^{+},\underline{w}}. \] When $w = w_0$ and $v = {\rm id}$, we have $\Rcal_{{\rm id}, w_0}=(G/B)^{>0}$, which is dense in $(G/B)^{\geq 0}$. As explained in \cite[Section 3]{KodamaAndWilliams}, partial flag varieties also admit Richardson (resp.\ Deodhar) decompositions. Each Richardson cell (resp.\ Deodhar component) of a partial flag variety is the projection of a Richardson cell (resp.\ Deodhar component) of the complete flag variety. Moreover, restricting Richardson cells and Deodhar components to $(G/B)^{\geq 0}$, we obtain a cell decomposition of the nonnegative part of the flag variety into positive Richardson cells $\mathcal{R}_{v,w}^{>0}$. \smallskip We now focus on the concrete case of $\OGr(n,2n)$ and its corresponding parabolic quotient $W^{[n-1]}$. We have a Richardson cell for each pair $v, w^{[n-1]} \in W$, where $w^{[n-1]}$ is a minimal coset representative and $v \leq w^{[n-1]}$. For a fixed expression $\underline{w}^{[n-1]}$ for $w^{[n-1]}$, we have a Deodhar component for each distinguished subexpression $\underline{v}$ of $\underline{w}^{[n-1]}$. Slightly abusing notation, we will also denote by $\Rcal_{v,w^{[n-1]}}$, $\Rcal_{\underline{v},\underline{w}^{[n-1]}}$, and $\Rcal_{v,w^{[n-1]}}^{>0}=\Rcal_{\underline{v}^+,\underline{w}^{[n-1]}}^{>0}$ the Richardson cells, Deodhar components and positive Richardson cells, respectively, in $\OGr(n,2n)$.
\begin{definition} \label{defn:Jsets} For a subexpression $\underline{u}$ of a length $p$ expression $\underline{v}$, we define \begin{align*} J_{\underline{u}}^{+}&:=\{k\in[p]\mid u_{(k)}> u_{(k-1)}\},\\ J_{\underline{u}}^{\circ}&:=\{k\in[p]\mid u_{(k)}=u_{(k-1)}\},\\ J_{\underline{u}}^{-}&:=\{k\in[p]\mid u_{(k)} < u_{(k-1)}\}. \end{align*} \end{definition} As we did for the totally positive part in \Cref{sec:2}, we can use Marsh and Rietsch's work to parametrize any Deodhar component in terms of the corresponding pair of subexpressions \cite[Section 5]{MR}. Let $\underline{w}^{[n-1]}=s_{i_1}\cdots s_{i_p}$ be a reduced expression. Then, for any distinguished subexpression $\underline{v}\preceq \underline{w}^{[n-1]}$, we have \[ \Rcal_{\underline{v},\ \underline{w}^{[n-1]}} = \left\{ \pi_n(g_1\cdots g_p) : \quad \begin{array}{ll} g_k= x_{i_k}(t_k) \text{ with } t_k \in \RR^* &\text{if } k\in J_{\underline{v}}^{+}\\ g_k= s_{i_k}^T &\text{if } k\in J_{\underline{v}}^{\circ} \\ g_k= x_{i_k}(a_k)^Ts_{i_k} \text{ with } a_k \in \RR &\text{if } k\in J_{\underline{v}}^{-} \end{array}\right\}. \] The restriction to $(G/B)^{\geq 0}$ is empty unless $\underline{v}=\underline{v}^+$ is reduced distinguished, in which case \begin{equation}\label{eq:Deodharparam} \Rcal^{>0}_{\underline{v}^{+},\ \underline{w}^{[n-1]}} = \left\{ \pi_n(g_1\cdots g_p) : \quad \begin{array}{ll} g_k= x_{i_k}(t_k) \text{ with } t_k \in \RR_{>0} &\text{if } k\in J_{\underline{v}^{+}}^{+}\\ g_k= s_{i_k}^T &\text{if } k\in J_{\underline{v}^{+}}^{\circ} \end{array}\right\}. \end{equation} \subsection{LGV Diagrams for Boundary Components} We have worked extensively with the LGV diagram to view Pl\"ucker coordinates in terms of non-intersecting weighted path collections. The LGV diagram we constructed in \Cref{def:LGVdiagram} is tailored to the top cell of the orthogonal Grassmannian, $\mathcal{R}_{{\rm id}, w_0^{[n-1]}}=\OGr^{>0}(n,2n)$.
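The index sets of \Cref{defn:Jsets} are determined by comparing Coxeter lengths of consecutive elements $u_{(k)}$: since $u_{(k)}$ and $u_{(k-1)}$ differ by at most one simple reflection, the Bruhat comparison reduces to a length comparison. A self-contained, hedged type A sketch in Python (length realized as the inversion number; the helper names are our own):

```python
def transposition(i, n):
    # s_i = (i, i+1) in one-line notation (1-indexed)
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    # (p * q)(x) = p(q(x))
    return tuple(p[q[x] - 1] for x in range(len(p)))

def length(p):
    # Coxeter length in type A = number of inversions
    return sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def prefix_elt(word, keep, k, n):
    # u_(k): keep only the last k entries of the subexpression
    p = tuple(range(1, n + 1))
    for j in range(len(word) - k, len(word)):
        if keep[j]:
            p = compose(p, transposition(word[j], n))
    return p

def J_sets(word, keep, n):
    # classify each position k by whether u_(k) rises, stays, or drops
    plus, circ, minus = [], [], []
    for k in range(1, len(word) + 1):
        d = length(prefix_elt(word, keep, k, n)) \
            - length(prefix_elt(word, keep, k - 1, n))
        (plus if d > 0 else circ if d == 0 else minus).append(k)
    return plus, circ, minus

# the subexpression s_1 1 s_1 of s_1 s_2 s_1 in S_3
print(J_sets([1, 2, 1], [True, False, True], 3))  # → ([1], [2], [3])
```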
Here, we explain how to extend this construction to arbitrary Richardson cells in the boundary of $\OGr^{\geq 0}(n,2n)$. We begin by defining a helpful manipulation of certain directed graphs. \begin{definition}\label{def:signedaction} Let $G$ be a graph with $2n$ horizontal labeled strands directed rightwards, and possibly some vertical arrows between strands. Refer to the $k^{\text{th}}$ strand from the bottom as strand $k$. For each $i\in [n]$, we define the \textit{signed $s_i$ action} on $G$ to be the following sequential operations, mapping $G$ to $s_i\cdot G$. \begin{enumerate} \item\label{signedactionstep1} Let $s_i=(i_1, i_2)(i_3, i_4)$. Permute strands $i_1$ and $i_2$, and also strands $i_3$ and $i_4$. The arrows move with the strands: if an arrow originates on strand $i_1$ in $G$ then it should originate on strand $i_2$ in $s_i\cdot G$. \item\label{signedactionstep2} If $i\in [n-1]$, let $\alpha$ be the rightmost vertical arrow incident to strand $i+1$ in $G$. Add a weight of $-1$ to the portion of strand $i+1$ right of $\alpha$. Do the same for strand $n-i+1$. If $i=n$, do the same for strands $n$ and $n+1$. \end{enumerate} \end{definition} \begin{definition}\label{def:boundaryLGVdiagram} We define a weighted directed graph $G_{v,w}$ for $v\leq w$ and $w$ a minimal coset representative. This definition is technical and is most easily understood by an example (see \Cref{ex:generalLGVdiagram}). Let $\underline{w}^{[n-1]}_0 = s_{i_1}\cdots s_{i_N}$ be as in \eqref{eq:w_0ReducedExpr}, with $N=\binom{n}{2}$. Let $\underline{w}^{[n-1]}=\rho_{1}\cdots \rho_{N}$ be the reduced distinguished expression for $w^{[n-1]}$ in $\underline{w}_0^{[n-1]}$, where each $\rho_{j}$ is either $s_{i_j}$ or $1$. Let $\underline{v}=r_{1}\cdots r_{l}$ be the reduced distinguished subexpression for $v$ in $\underline{w}^{[n-1]}$, where each $r_{j}$ is either $s_{i_j}$ or $1$. In particular, for $j\in [N]$, $\rho_j=1$ implies $r_j=1$. We start from the LGV diagram. 
Recall that as in \Cref{def:LGVdiagram}, each simple reflection in $\underline{w}^{[n-1]}_0$ corresponds to a pair of arrows in the LGV diagram. \begin{enumerate} \item \label{LGVstep1}For each $j\in [N]$ such that $\rho_j=1$, delete the arrows corresponding to $s_{i_j}$. \smallskip \item \label{LGVstep2}For each $j\in [N]$ such that $r_j\neq 1$, in increasing order of $j$, cut the graph in half with a vertical split between the arrows corresponding to $s_{i_j}$ and the closest arrow to the right of them (if there are no more arrows, then anywhere to the right of them). In the left piece of the graph, remove the arrows corresponding to $s_{i_j}$. Viewing the strands as being labeled by the source vertex, perform a signed $s_{i_j}$ action. Then reattach the two sides of the graph. \end{enumerate} \end{definition} \begin{example}\label{ex:generalLGVdiagram} Let us explicitly construct $G_{v,w}$ for $w=s_4s_2s_1s_3,\, v=s_1$. Recall the LGV diagram for $n=4$ from \Cref{ex:n=4LGV}. Note that the reduced distinguished subexpression $\underline{w}^+ \prec \underline{w}_0^{[n-1]}$ is $s_4s_2s_1s_311$ and the reduced distinguished subexpression $\underline{v}^+\prec \underline{w}$ is $11s_1111$. Observe that $\rho_5=\rho_6=1$, so step \ref{LGVstep1} of the construction of $G_{v,w}$ is to remove the last four arrows in the LGV diagram.
\begin{center} \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (4,4); \coordinate (r8) at (4,3.5); \coordinate (r9) at (4,3); \coordinate (r10) at (4,2.5); \coordinate (r5) at (4,2); \coordinate (r4) at (4,1.5); \coordinate (r3) at (4,1); \coordinate (r2) at (4,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (4.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (4.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (4.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (4.5,2) node [xscale = 0.8, yscale = 0.8] 
{$4$}; \draw[black] (4.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (4.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (4.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (4.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=black, line width=1pt] (l3) -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. (v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v31) -- (v42) node[below right] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (v92) -- (v81) node[below right] {\tiny $-t_{2}$}; \draw[draw=black, line width=1pt, ->] (v21) -- (v32) node[below right] {\tiny $t_{3}$}; \draw[draw=black, line width=1pt, ->] (v82) -- (v71) node[below right] {\tiny $-t_{3}$}; \draw[draw=black, line width=1pt, ->] (v43) -- (v52) node[below right] {\tiny $t_{4}$}; \draw[draw=black, line width=1pt, ->] (v102) -- (v93) node[below right] {\tiny $-t_{4}$}; \end{tikzpicture} \end{center} We next apply Step \ref{LGVstep2}. The only $j\in[N]$ for which $r_j\neq 1$ is $j=3$. We begin by dividing the graph above into two pieces, between the third and fourth pairs of arrows. 
\begin{center} \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (3,4); \coordinate (r8) at (3,3.5); \coordinate (r9) at (3,3); \coordinate (r10) at (3,2.5); \coordinate (r5) at (3,2); \coordinate (r4) at (3,1.5); \coordinate (r3) at (3,1); \coordinate (r2) at (3,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=black, line width=1pt] (l3) -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); 
\draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. (v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v31) -- (v42) node[below right] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (v92) -- (v81) node[below right] {\tiny $-t_{2}$}; \draw[draw=black, line width=1pt, ->] (v21) -- (v32) node[below right] {\tiny $t_{3}$}; \draw[draw=black, line width=1pt, ->] (v82) -- (v71) node[below right] {\tiny $-t_{3}$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (2.5,4); \coordinate (r8) at (2.5,3.5); \coordinate (r9) at (2.5,3); \coordinate (r10) at (2.5,2.5); \coordinate (r5) at (2.5,2); \coordinate (r4) at (2.5,1.5); \coordinate (r3) at (2.5,1); \coordinate (r2) at (2.5,0.5); \coordinate (v1) at (1,0.5); \coordinate (v2) at (1,1); \coordinate (v3) at (1,3.5); \coordinate (v4) at (1,4); \draw[black] (3,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (3,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (3,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (3,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (3,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (3,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (3,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (3,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=black, line width=1pt] (l3) -- (r3); 
\draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (r7); \draw[draw=black, line width=1pt, ->] (v1) -- (v2) node[below right] {\tiny $t_{4}$}; \draw[draw=black, line width=1pt, ->] (v3) -- (v4) node[below right] {\tiny $-t_{4}$}; \end{tikzpicture} \end{center} We remove the arrows with weights $\pm t_3$ and then apply the signed $s_1$ action to the left piece, interchanging strands $1$ and $2$, as well as strands $5$ and $6$. Note that the arrow with weight $t_2$ originates from the second strand from the bottom before applying the signed $s_1$ action but, after permuting strands $1$ and $2$, it originates on the bottom strand. We use red to indicate the parts of strands which get a weight of $-1$. \begin{center} \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (3,4); \coordinate (r8) at (3,3.5); \coordinate (r9) at (3,3); \coordinate (r10) at (3,2.5); \coordinate (r5) at (3,2); \coordinate (r4) at (3,1.5); \coordinate (r3) at (3,1); \coordinate (r2) at (3,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); 
\coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \path[draw, ->, decorate, decoration ={snake, amplitude = 2}] (4,2) -- (5.6,2); \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=black, line width=1pt] (l3) -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. 
(v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v31) -- (v42) node[below right] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (v92) -- (v81) node[below right] {\tiny $-t_{2}$}; \end{tikzpicture} \qquad \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7) at (3,4); \coordinate (r8) at (3,3.5); \coordinate (r9) at (3,3); \coordinate (r10) at (3,2.5); \coordinate (r5) at (3,2); \coordinate (r4) at (3,1.5); \coordinate (r3) at (3,1); \coordinate (r2) at (3,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \coordinate (vmod1) at (2,1.5); \coordinate (vmod2) at (2,3); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] 
(-0.5,4) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[red] (2.75,0.8) node [xscale=0.65,yscale=0.65]{$-1$}; \draw[red] (2.75,3.8) node [xscale=0.65,yscale=0.65]{$-1$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=red, line width=2pt] (l3) -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. (v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v21) .. controls (2.25,0.75) and (2.25,1.25) .. (vmod1) node[midway, below left] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (vmod2) .. controls (2.25,3.25) and (2.25,3.75) .. (v71) node[midway, below left] {\tiny $-t_{2}$}; \end{tikzpicture} \end{center} Finally we reattach the two pieces of the graph to obtain $G_{v,w}$. 
\begin{center} \begin{tikzpicture} \coordinate (l7) at (0,4); \coordinate (l8) at (0,3.5); \coordinate (l9) at (0,3); \coordinate (l10) at (0,2.5); \coordinate (l5) at (0,2); \coordinate (l4) at (0,1.5); \coordinate (l3) at (0,1); \coordinate (l2) at (0,0.5); \coordinate (r7') at (3,4); \coordinate (r7) at (4,4); \coordinate (r8) at (4,3.5); \coordinate (r9) at (4,3); \coordinate (r10) at (4,2.5); \coordinate (r5) at (4,2); \coordinate (r4) at (4,1.5); \coordinate (r3') at (3,1); \coordinate (r3) at (4,1); \coordinate (r2) at (4,0.5); \coordinate (v21) at (2,0.5); \coordinate (v31) at (1.5,1); \coordinate (v32) at (2,1); \coordinate (v33) at (3.5,1); \coordinate (v41) at (1,1.5); \coordinate (v42) at (1.5,1.5); \coordinate (v43) at (3,1.5); \coordinate (v44) at (3.5,1.5); \coordinate (v45) at (5,1.5); \coordinate (v51) at (0.5,2); \coordinate (v52) at (3,2); \coordinate (v53) at (4.5,2); \coordinate (v101) at (1,2.5); \coordinate (v102) at (3,2.5); \coordinate (v103) at (5,2.5); \coordinate (v91) at (0.5,3); \coordinate (v92) at (1.5,3); \coordinate (v93) at (3,3); \coordinate (v94) at (3.5,3); \coordinate (v95) at (4.5,3); \coordinate (v81) at (1.5,3.5); \coordinate (v82) at (2,3.5); \coordinate (v83) at (3.5,3.5); \coordinate (v84) at (4,3.5); \coordinate (v71) at (2,4); \coordinate (vmod1) at (2,1.5); \coordinate (vmod2) at (2,3); \coordinate (vmod3) at (3.5,0.5); \coordinate (vmod4) at (3.5,1); \coordinate (vmod5) at (3.5,3.5); \coordinate (vmod6) at (3.5,4); \draw[black] (-0.5,0.5) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (-0.5,1) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (-0.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (-0.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (-0.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (-0.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (-0.5,3.5) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[black] (-0.5,4) node [xscale = 0.8, yscale = 0.8] {$6$}; 
\draw[black] (4.5,0.5) node [xscale = 0.8, yscale = 0.8] {$1$}; \draw[black] (4.5,1) node [xscale = 0.8, yscale = 0.8] {$2$}; \draw[black] (4.5,1.5) node [xscale = 0.8, yscale = 0.8] {$3$}; \draw[black] (4.5,2) node [xscale = 0.8, yscale = 0.8] {$4$}; \draw[black] (4.5,2.5) node [xscale = 0.8, yscale = 0.8] {$8$}; \draw[black] (4.5,3) node [xscale = 0.8, yscale = 0.8] {$7$}; \draw[black] (4.5,3.5) node [xscale = 0.8, yscale = 0.8] {$6$}; \draw[black] (4.5,4) node [xscale = 0.8, yscale = 0.8] {$5$}; \draw[red] (2.75,0.8) node [xscale=0.65,yscale=0.65]{$-1$}; \draw[red] (2.75,3.8) node [xscale=0.65,yscale=0.65]{$-1$}; \draw[draw=black, line width=1pt] (l2) -- (r2); \draw[draw=red, line width=2pt] (l3) -- (r3'); \draw[draw=black, line width=1pt] (r3') -- (r3); \draw[draw=black, line width=1pt] (l4) -- (r4); \draw[draw=black, line width=1pt] (l5) -- (r5); \draw[draw=black, line width=1pt] (l10) -- (r10); \draw[draw=black, line width=1pt] (l9) -- (r9); \draw[draw=black, line width=1pt] (l8) -- (r8); \draw[draw=black, line width=1pt] (l7) -- (v71); \draw[draw=red, line width=2pt] (v71) -- (r7'); \draw[draw=black, line width=1pt] (r7') -- (r7); \draw[draw=black, line width=1pt, ->] (v41) .. controls (1.25,1.75) and (1.25,2.25) .. (v101) node[midway, below left] {\tiny $t_{1}$}; \draw[draw=black, line width=1pt, ->] (v51) .. controls (0.75,2.25) and (0.75,2.75) .. (v91) node[midway, below left] {\tiny $-t_{1}$}; \draw[draw=black, line width=1pt, ->] (v21) .. controls (2.25,0.75) and (2.25,1.25) .. (vmod1) node[midway, below left] {\tiny $t_{2}$}; \draw[draw=black, line width=1pt, ->] (vmod2) .. controls (2.25,3.25) and (2.25,3.75) .. 
(v71) node[midway, below left] {\tiny $-t_{2}$}; \draw[draw=black, line width=1pt, ->] (vmod3) -- (vmod4) node[below right] {\tiny $t_{4}$}; \draw[draw=black, line width=1pt, ->] (vmod5) -- (vmod6) node[below right] {\tiny $-t_{4}$}; \end{tikzpicture} \end{center} \end{example} Each arrow in $G_{v,w}$ corresponds to some arrow in the original LGV diagram. We will continue to use the notations $a_i^{(j)}$ and $b_i^{(j)}$ to refer to the arrows in $G_{v,w}$. \begin{proposition}\label{LGVlemmavw} Let $I \in \binom{[2n]}{n}$ and $\mathcal{P}_I$ be the set of non-intersecting path collections from $\{1,\ldots,n\}$ to $I$ in $G_{v,w}$. Then, the maximal minor of $X \in \mathcal{R}_{v,w}^{>0}$ corresponding to the columns indexed by $I$ is given by \[ \Delta^I(X) :=\det\left(X_I\right)= \sum_{P=(P_1,\ldots,P_n) \in \mathcal{P}_I} \sgn(\pi_P)\prod_{i=1}^n \prod_{e \in P_i} w(e), \] where $\pi_P$ is the permutation that sends $i$ to the end point of $P_i$, and the weight of the edge $e$ is $w(e) \in \{\pm t_1, \ldots, \pm t_{\ell(w)-\ell(v)}\}$. \end{proposition} \begin{proof} The proof of this statement is identical to the proof of \Cref{LGVlemma}, except that we now look at the parametrization of $\mathcal{R}_{v,w}^{>0}$ given by the positive part of the Deodhar component $\mathcal{R}_{\underline{v}^+,\underline{w}}^{>0}$, as in \eqref{eq:Deodharparam}. Consequently, we now need to account for the matrices $\dot{s}_i$ in addition to the matrices $x_{i_l}(t_l)$, which can also be treated as adjacency matrices. \end{proof} \subsection{Identification of cells} Our goal in this section is to identify the Richardson cell $\mathcal{R}_{v,w}^{>0}$ containing a given point $X\in \OGr^{\geq 0}(n, 2n)$. Let $\mathscr{M}_X$ denote the matroid associated to $X$, that is, the rank $n$ matroid on $[2n]$ with bases \[ \left \{ I \in \binom{[2n]}{n} \colon \quad \Delta^{I}(X) \neq 0\right \}. \] In the nonnegative Grassmannian, Richardson cells are in bijection with \emph{positroids}. 
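To make the role of $\mathscr{M}_X$ concrete: its bases are exactly the column sets with nonvanishing maximal minor, so the matroid can be computed mechanically from a representing matrix. The following Python sketch (ours, for illustration only; the function names are not from the paper, and columns are $0$-indexed) does exactly this for a small matrix:

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for the small sizes used here.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def matroid_bases(X):
    """Bases of the matroid M_X of an n x m matrix X: the n-element
    column sets I with nonzero maximal minor Delta^I(X)."""
    n, m = len(X), len(X[0])
    return {I for I in combinations(range(m), n)
            if det([[row[c] for c in I] for row in X]) != 0}
```

For instance, for the $2\times 4$ matrix with rows $(1,0,1,0)$ and $(0,1,0,1)$, the bases are $\{1,2\}$, $\{1,4\}$, $\{2,3\}$, $\{3,4\}$ in $1$-indexed notation.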
As we shall see, the Richardson cell that contains a given point $X \in \OGr^{\geq 0}(n,2n)$ is also determined by its matroid $\mathscr{M}_X$. The following definition is motivated by \cite[Definition 3.26]{boretsky}. \begin{definition}\label{def:lowering} Let $I$ be a basis of a rank $k$ matroid $\mathscr{M}$ on a ground set $E$. Let $<$ be a total order on $E$. We inductively define a sequence of bases of $\mathscr{M}$ as follows. Let $I^{(0)}=I$. For $1\leq j\leq k$, suppose $I^{(j-1)}=\left\{i^{(j-1)}_1<\cdots <i^{(j-1)}_k\right\}$. Then, $I^{(j)}$ is obtained by replacing $i^{(j-1)}_{j}$ in $I^{(j-1)}$ by the element \[ \min_{<}\left\{ t \in E \colon \quad \left( I^{(j-1)}\setminus \{i^{(j-1)}_{j}\} \right) \cup \{t\} \text{ is a basis of } \mathscr{M} \right\}. \] In words, $I^{(j)}$ is the basis obtained from $I^{(j-1)}$ by decreasing the $j^{\text{th}}$ element of $I^{(j-1)}$ as much as possible with respect to $<$. For $0\leq j\leq k$, we say $I^{(j)}$ is a \textit{lowering} of $I$ with respect to~$<$. \end{definition} Recall that permutations $\sigma$ in the Weyl group $W$ of $\SO(2n)$ satisfy $\sigma(n+j)=n+\sigma(j)\text{ modulo } 2n$ for all $j\in [n]$. Thus, to determine $v$ and $w$, it suffices to specify $v(j)$ and $w(j)$ for all $j\in[n]$. The total order $\prec$ we shall use on $[2n]$ is the following: \[ 1 \prec 2 \prec \dots \prec n \prec 2n \prec 2n-1 \prec \dots \prec n+1. \] Recall that for a fixed total order $<$ of $E$, the \textit{Gale order} on $\binom{E}{k}$, also denoted $<$, is defined~by \begin{equation*} A < B \qquad \text{if} \qquad a_i < b_i \ \text{ for all } i \in [k], \end{equation*} where $A=\{a_1<a_2<\cdots < a_k\}$ and $B=\{b_1<b_2<\cdots < b_k\}$. We make use of the following definition of a matroid: a rank $k$ matroid on a ground set $E$ is a collection of subsets of $E$ of size $k$ that has a unique Gale-maximal element with respect to any total order on $E$ \cite[Theorem 4]{Gale}.
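Both the order $\prec$ and the lowering procedure of \Cref{def:lowering} are readily computable. The sketch below (ours, purely illustrative; the helper names are our own) encodes $\prec$ as a ranking of $[2n]$, finds the Gale-maximal basis, and computes the lowering sequence of a basis:

```python
from itertools import combinations

def prec_key(n):
    """Rank function of the total order 1 < ... < n < 2n < 2n-1 < ... < n+1."""
    order = list(range(1, n + 1)) + list(range(2 * n, n, -1))
    rank = {e: r for r, e in enumerate(order)}
    return lambda e: rank[e]

def gale_leq(A, B, key):
    # A <= B in the Gale order: compare the sorted bases coordinatewise.
    return all(key(a) <= key(b)
               for a, b in zip(sorted(A, key=key), sorted(B, key=key)))

def gale_max(bases, key):
    """The Gale-maximal basis (unique for a matroid, by Gale's theorem)."""
    for B in bases:
        if all(gale_leq(A, B, key) for A in bases):
            return B

def lowering(bases, I, key, ground):
    """The sequence I^(0), ..., I^(k): at step j, decrease the j-th element
    of the current basis as much as possible with respect to the order."""
    cur = sorted(I, key=key)
    seq = [tuple(cur)]
    for j in range(len(cur)):
        rest = cur[:j] + cur[j + 1:]
        cands = [t for t in ground if t not in rest
                 and tuple(sorted(rest + [t], key=key)) in bases]
        cur = sorted(rest + [min(cands, key=key)], key=key)
        seq.append(tuple(cur))
    return seq
```

For the uniform matroid of rank $2$ on $\{1,2,3,4\}$ (so $n=2$, order $1\prec 2\prec 4\prec 3$), the Gale-maximal basis is $\{3,4\}$ and its lowering sequence passes through $\{1,3\}$ and ends at $\{1,2\}$.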
We denote by $I_X$ the unique maximal element of the matroid $\mathscr{M}_X$ in the Gale order $\prec$. \begin{definition}\label{def:Iw} For $w\in W$, define $I_w=w^{-1} \big([2n]\setminus[n] \big)\cap [n]=\{i\in[n]\colon w(i)>n\}$. A defining property of the Weyl group $W$ of type D is that $|I_w|$ is even. \end{definition} \begin{remark}\label{rem:wAsSet} Viewing $w\in W$ as a permutation of $[2n]$ written in one-line notation, we note that multiplying $w$ on the left by $s_i$ for $i<n$ permutes separately the values in $[n]$ and those in $[2n]\setminus[n]$ in $w$. Thus, for $w\in W$, the coset $W_{[n-1]} w$ is determined by the (unordered) positions of $[2n]\setminus [n]$ in $w$. Since $w(i)$ determines $w(i+n)$ for $i\in [n]$, the coset $W_{[n-1]}w$ is determined by $I_w$. The minimal coset representative of $w$ will be the shortest permutation $w'$ satisfying $I_{w'} = I_w$. Specifically, this is the permutation which, in one-line notation, has $1,2,\ldots, n-|I_w|$ in order in positions $[n]\setminus I_w$ and $2n, 2n-1, \ldots, 2n-|I_w|+1$ in positions $I_w$. Thus, the minimal coset representatives in $W^{[n-1]}$ are in bijection with subsets of $[n]$ of even size, with the bijection given by $w^{[n-1]}\leftrightarrow I_{w^{[n-1]}}$. \end{remark} \begin{lemma}\label{lem:bijectionsetstoexpressions} Fix an element $w\in W$. A reduced subexpression $\underline{w}^{[n-1]}$ for the minimal coset representative $w^{[n-1]}$ can be extracted from the expression $\underline{w}_0^{[n-1]}$ in \eqref{eq:w_0ReducedExpr} as follows. \begin{enumerate} \item Locate the first $|I_w| / 2$ factors of $s_n$ appearing in \eqref{eq:w_0ReducedExpr}. These appear in~$\underline{w}^{[n-1]}$. \smallskip \item If $I_w = \{i_1<i_2<\cdots <i_{|I_w|}\}$, then for each $1\leq j\leq |I_w|$ with $j$ odd (resp. even), the reflections $s_{n-2}\cdots s_{i_j}$ (resp. $s_{n-1}\cdots s_{i_j}$) from the $j^\text{th}$ set of parentheses of \eqref{eq:w_0ReducedExpr} appear in $\underline{w}^{[n-1]}$.
\smallskip \item All the other reflections in \eqref{eq:w_0ReducedExpr} are replaced with $1$. \end{enumerate} \end{lemma} \begin{proof} Consider the subexpression $\underline{u}$ defined in the lemma statement. Our goal is to show that $u=w^{[n-1]}$. Ignoring factors of $1$, we can write it as $\underline{u} = \underline{w}_1 \cdots \underline{w}_{{|I_w|}/{2}}$, where $\underline{w}_j=s_n(s_{n-2}\cdots s_{i_{(2j-1)}})(s_{n-1}\cdots s_{i_{(2j)}})$. Recall that acting on the right, simple transpositions act on positions. Note that $I_{s_n}=\{n-1,n\}$. Applying the sequence of permutations $s_{n-2}\cdots s_{i_1}$ on the right moves the entry at position $n-1$ to position $i_1$. Thus, $I_{s_n(s_{n-2}\cdots s_{i_1})}=\{i_1,n\}$. Similarly, applying the sequence of permutations $s_{n-1}\cdots s_{i_2}$ on the right moves the entry at position $n$ to position $i_2$. Thus, $I_{w_1}=\{i_1,i_2\}$. Also, $w_1$ is the shortest element of $W$ with this property. To see this, observe that for any $u\in W$, $I_{us_n}$ equals the symmetric difference $I_u \ \triangle \{n-1,n\}$, while, for $i\in [n-1]$, $I_{us_i}$ equals $I_u$ with $i$ replaced by $i+1$ and vice versa. By our construction, each simple reflection in $\underline{w}_1$ acts non-trivially: the factors of $s_n$ act by adding $\{n-1,n\}$ whereas the other $s_i$ act by swapping $i+1$ for $i$. Thus, $\underline{w}_1$ uses the fewest possible simple transpositions to obtain the set $I_{w_1}=\{i_1, i_2\}$. \smallskip Using a similar argument, one can verify inductively that $I_{w_1\cdots w_j}=\{i_1,\ldots, i_{2j}\}$ and that $w_1\cdots w_j$ is the shortest element of $W$ with this property. Thus, we deduce $u=w^{[n-1]}$, and the given expression is reduced. \end{proof} \begin{example} Let $n=5$. Then, from \eqref{eq:w_0ReducedExpr}, $\underline{w}_0^{[n-1]}=s_5(s_3s_2s_1)(s_4s_3s_2)s_5(s_3)(s_4)$. Let $w=1\; 10\; 2\; 9\; 3$, which is a minimal coset representative. Then, $I_w=\{2,4\}$.
The subexpression $\underline{w}$ is thus $s_5(s_3s_21)(s_411)1(1)(1)$. \end{example} The following proposition establishes a number of helpful technical facts about $G_{v,w}$. While its claims may seem unrelated, they are all proven by the same induction and so are most easily proven together. \begin{proposition}\label{prop:pathcollectionproperties} Let $v\leq w$ in $W$ such that $w$ is a minimal coset representative. In $G_{v,w}$, there is a unique non-intersecting path collection from $[n]$ to $I_X$. Moreover, \begin{enumerate}[wide = 30pt, leftmargin = 50pt] \item \label{property1} $I_X$ depends only on $w$, \item \label{property2} In this path collection, the path originating from source vertex $i$ terminates below the path originating from source vertex $i+1$ for all $i\in[n-1]$, \item \label{property3} The path collection is left greedy, and \item \label{property4} The path collection uses every vertical edge of $G_{v,w}$. \end{enumerate} \end{proposition} \begin{proof} Fix $w \in W^{[n-1]}$ and $\underline{w}$ the expression given by \Cref{lem:bijectionsetstoexpressions}. We first show the result for $v = {\rm id}$ and proceed by induction on $\ell(v)$. \smallskip Let $v={\rm id}$. We give a graphical interpretation of $I_w$. Let $I'_w := w^{-1}([2n]\setminus [n])$. Note that $I_w$ determines $I'_w$ since for $w\in W$ and $i\in [n]$, $w(i+n)=w(i)+n\text{ modulo }2n$. Define $G'_{v,w}$ to be identical to $G_{v,w}$ but with unoriented vertical arrows. We consider path collections in $G'_{v,w}$ (not necessarily non-intersecting). We define the greedy path originating at $i\in[2n]$ in $G'_{v,w}$ to be the path which travels rightwards along strands and greedily turns onto every vertical edge it passes. The difference from a left greedy path in $G_{v,w}$ is that a greedy path in $G'_{v,w}$ can travel either way along vertical edges.
Given a set $S\subset [2n]$, we define the greedy path collection originating at $S$ in $G'_{v,w}$ to be given by the greedy paths originating at each $i\in S$. In general, this is not non-intersecting. In the proof of \Cref{lem:bijectionsetstoexpressions}, there is an action of $s_i$ on sets $I_w$. We give a graph-theoretic interpretation of this action. For each $i\in[n]$, observe that the greedy path collection originating from $S\subset[2n]$ in $G'_{{\rm id}, s_i}$ has sink set $s_i(S)$. Thus, the greedy path collection originating from $S$ in $G'_{{\rm id}, w}$ has sink set $w^{-1}(S)$. In particular, the sink set of the greedy path collection $P$ originating from $[2n]\setminus[n]$ in $G'_{{\rm id}, w}$ is $I'_w$. It follows from the proof of \Cref{lem:bijectionsetstoexpressions} that if $\underline{w}=s_{i_1}\cdots s_{i_l}$, then we have: \begin{align*} I'_{s_{i_1}\cdots s_{i_k}} &= \Big(I'_{s_{i_1}\cdots s_{i_{k-1}}}\setminus \{i_k+1,n+i_k+1\} \Big)\cup\{i_k,n+i_k\}, \quad \text{if }1\leq i_k\leq n-1, \\ I'_{s_{i_1}\cdots s_{i_{k}}} &= \Big( I'_{s_{i_1}\cdots s_{i_{k-1}}}\setminus \{2n-1,2n\}\Big)\cup\{n-1,n\}, \quad \text{if } i_k=n. \end{align*} In particular, $I'_{s_{i_1}\cdots s_{i_{k}}}\prec I'_{s_{i_1}\cdots s_{i_{k-1}}}$, and $s_{i_k}$ acts by replacing two elements of $I'_{s_{i_1}\cdots s_{i_{k-1}}}$. This reflects the movement of paths in $P$ downwards along the two arrows corresponding to $s_{i_k}$ in $G'_{{\rm id},w}$. Thus, paths in $P$ travel downwards along all vertical edges of $G'_{{\rm id}, w}$. Observe that the greedy path collection originating from $[2n]$ in $G'_{{\rm id}, w}$ uses each vertical edge twice, once going upwards and once going downwards. Thus, the greedy path collection $Q$ originating from $[n]$ in $G'_{{\rm id}, w}$ uses all the vertical edges in the upwards direction. \smallskip We claim that the paths in $Q$ do not cross. Suppose two paths $p_1$ and $p_2$ in $Q$ cross.
Immediately before they first meet, one of them, say $p_1$, must travel along a vertical edge $e$ (necessarily upwards) and the other, $p_2$, along a horizontal strand. But then, by greediness, $p_2$ would be forced to use the vertical edge $e$ going downwards, which is a contradiction. Since paths in $Q$ only use vertical edges in the upwards direction and do not cross one another, we may view $Q$ as a non-intersecting path collection in $G_{{\rm id}, w}$ instead of in $G'_{{\rm id}, w}$. By the definition of a greedy path collection in $G'_{{\rm id}, w}$, each individual path in $Q$ is left greedy in $G_{{\rm id}, w}$, proving (\ref{property3}). Since $Q$ uses all vertical edges of $G'_{{\rm id},w}$, (\ref{property4}) is satisfied as well. Each path is directed from $i$ to $w^{-1}(i)$ for some $i\in [n]$. Since $w\in W^{[n-1]}$, by \Cref{rem:wAsSet}, $w^{-1}(1)\prec\cdots \prec w^{-1}(n)$, proving (\ref{property2}). In this base case, (\ref{property1}) does not assert anything. Finally, the next lemma implies that $Q$ is the unique such path collection. \begin{lemma}\label{lem:usesalledges} Suppose $G_{v,w}$ has all vertical edges pointing upwards. Let $P$ be a non-intersecting path collection in $G_{v,w}$ using every vertical edge. Then $P$ is the unique non-intersecting path collection with its source and sink sets. \end{lemma} \begin{proof} For a vertical edge $e$ that skips over $k-1$ strands, define $h_e \coloneqq k$. For a path collection $Q$, define $h(Q)\coloneqq \sum_e h_e$ where the sum is over all vertical edges used by $Q$. Observe that \[ h(P) = \max \Big\{ h(Q) \colon Q\text{ is a non-intersecting path collection in }G_{v,w} \Big\}. \] We also claim that a non-intersecting path collection is uniquely determined by the vertical arrows it uses, which implies that $P$ is the unique non-intersecting path collection maximizing $h$. To prove the claim, let $V$ be the set of vertical edges of a path collection $Q$ in $G_{v,w}$. The path collection $Q$ is obtained from $V$ as follows.
Let $\nu_1$ be the rightmost arrow in $V$ (if there are multiple, choose the top one). The path $p_1$ in $Q$ which uses $\nu_1$ must terminate on the strand containing the head of $\nu_1$ and must approach $\nu_1$ along the strand $\sigma_{\nu_1}$ on which $\nu_1$ originates. If there are no other arrows in $V$ whose head lies on $\sigma_{\nu_1}$, $p_1$ must originate on strand $\sigma_{\nu_1}$. Otherwise, let $\nu_2$ be the rightmost arrow in $V$ terminating on strand $\sigma_{\nu_1}$. By the non-intersecting condition, $p_1$ must use $\nu_2$. Continuing in this way, we see that the path $p_1$ which uses $\nu_1$ is uniquely determined. Having determined $p_1$, remove the vertical arrows used by $p_1$ from the set $V$ and run the same argument again to determine another path $p_2$. Repeating this until $V$ is empty, we obtain a path collection. Observe that since we assume that $V$ is the set of arrows of a path collection, the algorithm will in fact terminate. Since every step in the construction was forced, the resulting path collection is unique. For $q$ a path in $Q$ whose sink lies $k'$ strands above its source, define $h_q\coloneqq k'$. Observe that $h(Q)=\sum _q h_q$ where the sum is over all paths of $Q$. Thus, $h(Q)$ is fully determined by the source and sink set of $Q$. Since $P$ uniquely maximizes $h$, there is a unique path collection with the same source and sink set as $P$, that is, $P$ is the unique path collection with its source and sink sets. \end{proof} This completes the proof for $G_{{\rm id}, w}$. We now prove the proposition for $G_{v,w}$ with $\ell(v)>0$ by induction. Suppose the result holds for all $G_{u,w}$ with $\ell(u)<\ell(v)$. Let $u$ be the product of the first $\ell(v)-1$ factors of the distinguished subexpression $\underline{v}^+ \prec \underline{w}$. The corresponding expression $\underline{u}$ is again distinguished in $\underline{w}$. Suppose $\underline{v}^+=\underline{u}s_i$.
By \Cref{def:boundaryLGVdiagram} we obtain $G_{v,w}$ from $G_{u,w}$ by cutting the graph to the right of some pair of arrows $(\alpha_1,\alpha_2)$ in $G_{u,w}$, deleting those arrows, and performing a signed $s_i$ action on the left part. As in \Cref{def:signedaction}, we will always use \textit{strand $k$} to refer to the $k$-th strand from the bottom. We claim $(\alpha_1,\alpha_2)$ is the leftmost pair of vertical arrows between strands $i$ and $i+1$ and between strands $n+i+1$ and $n+i$ if $i \in [n-1]$, or between strands $n-1$ and $2n$ and between strands $n$ and $2n-1$ if $i=n$. We first show the following. \begin{lemma}\label{lem:removingarrows} Let $\underline{u}$ be a reduced distinguished subexpression of $\underline{w}$. Suppose in $G_{u,w}$ there are arrows $\alpha_1,\ldots, \alpha_p$ from strand $i$ to strand $i+1$. Recall that every $\alpha_j$ corresponds to some simple reflection $s_{i_j}$ of $\underline{w}$ which is replaced by $1$ in $\underline{u}$. For $j\in [p]$, let $\underline{u}_j$ be the subexpression of $\underline{w}$ obtained by using $s_{i_j}$ instead of the corresponding $1$ in $\underline{u}$. Then, $u_1=\cdots = u_p$. An analogous result holds for arrows from strand $n-1$ to strand $n+1$. \end{lemma} \begin{proof} Since, in \Cref{def:boundaryLGVdiagram}, we swap the source vertices in $G_{u,w}$ for each simple transposition in some expression $\underline{u}$, we can read off $u$ from the source vertices of $G_{u,w}$. From bottom to top, they are $u^{-1}(1), u^{-1}(2), \ldots, u^{-1}(n), u^{-1}(2n), u^{-1}(2n-1),\ldots, u^{-1}(n+1)$. The effect of adding a transposition from $\underline{w}$ to $\underline{u}$ is to permute in $G_{u,w}$ the source labels of the strands connected by the corresponding arrows. \end{proof} By \Cref{lem:removingarrows}, the reduced distinguished expression for $v$ will be the one obtained using the leftmost possible transposition, and so $(\alpha_1,\alpha_2)$ is the leftmost pair of arrows between the appropriate strands.
Since, in step \ref{LGVstep2} of \Cref{def:boundaryLGVdiagram}, we only modify vertical arrows to the left of the arrows that we remove, this implies the following. \begin{cor}\label{cor:arrowspointup} All arrows in $G_{v,w}$ are pointing upwards. \end{cor} Suppose $(\alpha_1,\alpha_2)=(a_i^{(j)}, a_{n+i+1}^{(j)})$ for some $i\in [n-1]$ (the argument is identical for $(\alpha_1,\alpha_2)=(b_{n-1}^{(j)},b_{n}^{(j)})$). We compare the left greedy path collections $Q_u$ and $Q_v$ originating from $[n]$ in three different sections of the graphs $G_{u,w}$ and $G_{v,w}$, respectively. \begin{enumerate}[label=(\roman*)] \item \label{collection1} Truncate the graphs $G_{v,w}$ and $G_{u,w}$ immediately to the left of $a_i^{(j)}$. There is a bijection between path collections originating from $[n]$ in the truncated $G_{u,w}$ terminating at $J$ and those in the truncated $G_{v,w}$ terminating at $J'$, where $J'$ is obtained from $J$ by swapping $i \leftrightarrow i+1$ and $n+i\leftrightarrow n+i+1$. Moreover this bijection preserves left greediness: the truncation of $Q_u$ maps to the truncation of $Q_v$. \item \label{collection2} Near $a_{i}^{(j)}$ in $G_{u,w}$, a path in $Q_u$ enters on strand $i$ but not on $i+1$, uses the vertical arrow, and leaves on strand $i+1$ (any other configuration of paths entering on strands $i$ and $i+1$ is not possible since all vertical edges must be used in the left greedy path collection in $G_{u,w}$, by induction). Let $J$ and $J'$ be as in the previous item for the truncations of $Q_u$ and $Q_v$, respectively. Then, $i\in J\setminus J'$ and $i+1\in J'\setminus J$. Thus, near $a_{i}^{(j)}$, the collection $Q_v$ simply passes along strand $i+1$, and the left greediness of each path around $a_{i}^{(j)}$ is preserved. Similar considerations apply locally near $a_{n+i+1}^{(j)}$. It follows that if we truncate $G_{u,w}$ and $G_{v,w}$ just to the right of $a_i^{(j)}$, the corresponding truncations of $Q_u$ and $Q_v$ have the same sink set.
\item \label{collection3} Since $G_{v,w}$ is identical to $G_{u,w}$ to the right of $a_i^{(j)}$, there is a trivial bijection preserving the left greediness of paths between path collections in $G_{u,w}$ and those in $G_{v,w}$ in this part of the graphs. \end{enumerate} Taken together, we see that the sink sets of $Q_u$ and $Q_v$ are the same and $Q_v$ is non-intersecting and uses all vertical edges of $G_{v,w}$. This proves (\ref{property1}) and (\ref{property4}). The paths in this path collection are all left greedy, proving (\ref{property3}). We next show that for $k\in[n-1]$ the path originating from source vertex $k$ terminates below the path originating from source vertex $k+1$. The only place where this might break is if the left greedy paths in the truncation of $G_{u,w}$ in \ref{collection1} originating at $k$ and $k+1$ terminate in strands $i$ and $i+1$; this would imply that both $i,i+1$ belong to $J$, as defined in item \ref{collection1} above. However, this never happens for the left greedy path collection, as explained in item \ref{collection2}. This proves (\ref{property2}). Finally, uniqueness is guaranteed by \Cref{lem:usesalledges}. We have now completed the proof of \Cref{prop:pathcollectionproperties}. \end{proof} \begin{lemma}\label{lem:greedyisextremeuv} Let $p_i$ be the left greedy path originating from source vertex $i \in [n]$ in $G_{v,w}$. There is no path in $G_{v,w}$ originating from $i$ and achieving a $\prec$-larger sink than $p_i$. \end{lemma} \begin{proof} We start by proving the result for $G_{{\rm id}, w}$. From \Cref{lem:bijectionsetstoexpressions}, observe that in $G_{{\rm id}, w}$, whenever an arrow $a_{\lambda}^{(\kappa)}$ terminates on strand $s$ and an arrow $b_l^{(k)}$ skips a strand $s$ to the right of $a_{\lambda}^{(\kappa)}$, there is an arrow $\gamma$ originating on strand $s$ between $a_{\lambda}^{(\kappa)}$ and $b_{l}^{(k)}$.
Write $p \coloneqq p_i$ and suppose $p'$ is a path originating at $i$ and terminating above the sink of $p$. Then, at some point, $p'$ lies above $p$. Thus, $p'$ at some point uses an arrow which skips a strand $s$ used by $p$. Say this arrow is $b_l^{(k)}$ for some $l,k$. Since $b_l^{(k)}$ originates below strand $s$, source vertex $i$ does not lie on strand $s$. Say $p$ used an arrow $a_{\lambda}^{(\kappa)}$ to reach strand $s$. By our observation at the beginning of the proof, there is an arrow $\gamma$ originating on strand $s$ between $a_{\lambda}^{(\kappa)}$ and $b_l^{(k)}$. Note that the arrows $b_{l}^{(k)}$ in $G_{{\rm id}, w}$ only skip one strand. Since $p'$ gets above $p$ by skipping the single strand $s$, the path $p$ does not use $\gamma$, which contradicts left greediness. For $G_{v,w}$, we argue by induction. Let $\underline{w}^+\prec \underline{w}_0^{[n-1]}$ be reduced distinguished and let $\underline{v}^+\prec \underline{w}^+$ be reduced distinguished. Let $u< v$ be such that $\underline{v}^+=\underline{u}s_k$ for some simple transposition $s_k$. Observe that $\underline{u}\prec \underline{w}^+$ is reduced distinguished as well. Let $m_i$ be the sink of the left greedy path originating at $i$ in $G_{v,w}$ and suppose there is a path $p$ originating at $i$ and terminating at $j\succ m_i$, that is, on a strand above $m_i$. We make two observations. \begin{enumerate} \item Combining (\ref{property1}) and (\ref{property2}) of \Cref{prop:pathcollectionproperties}, it follows that the left greedy paths starting at $i$ in both $G_{u,w}$ and $G_{v,w}$ have the same sink vertex $m_i$. \item The existence of $p$ in $G_{v,w}$ implies that there is a path in $G_{u,w}$ from $i$ to $j'$ for some $j'\succeq j$. To see this, we work case by case, depending on how many of the strands that get permuted when forming $G_{v,w}$ from $G_{u,w}$ are used by $p$.
In each case, the observation follows from splitting $G_{u,w}$ and $G_{v,w}$ into three sections each and comparing them section by section, similarly to \ref{collection1}--\ref{collection3} of the proof of \Cref{prop:pathcollectionproperties}. Note that, compared to \ref{collection1}--\ref{collection3}, we are going ``backwards'', from $G_{v,w}$ to $G_{u,w}$ where $\ell(v)>\ell(u)$. \end{enumerate} Thus, we have a path originating from $i$ in $G_{u,w}$ which terminates at $j'\succ m_i$. Since $m_i$ is the sink of the left greedy path originating at $i$ in $G_{u,w}$, this contradicts our induction hypothesis. \end{proof} \begin{remark} \Cref{prop:pathcollectionproperties} and \Cref{lem:greedyisextremeuv} with $v={\rm id}$ and $w=w_0^{[n-1]}$ imply \Cref{lem:LGVdiagramLeftGreedy}. \end{remark} \begin{theorem} \label{thm:RichardsonRestated} Let $X\in \OGr^{\geq 0}(n, 2n)$ and $v, w \in W$ such that $X \in \mathcal{R}^{>0}_{v, w}$. Recall \[ t_j := \min_{\prec}\left\{t \colon \left(I_X^{(j-1)}\setminus \{i^{(j-1)}_{j}\}\right)\cup\{t\} \text{ is a basis of } \mathscr{M}_X\right\}, \] where $I_X^{(l)} := \{ i_1^{(l)} \prec \dots \prec i_{n}^{(l)} \}$ is the $l$-th lowering of $I_X=\{i_1\prec i_2\prec\cdots\prec i_n\}$ with respect to $\prec$ as defined in \Cref{def:lowering}. Then \begin{enumerate}[wide=20pt] \item $w \in W$ is given by $w^{-1}(j)=i_j$ for $j\in[n]$, and \item $v \in W$ is given by $v(j)=t_j$ for $j\in[n]$. \end{enumerate} In particular, the Richardson cell containing $X$ is fully determined by $\mathscr{M}_X$. \end{theorem} \begin{proof} By \Cref{prop:pathcollectionproperties}, it suffices to show the first statement for $\mathcal{R}_{{\rm id}, w}^{>0}$. We view the arrows of $G_{{\rm id}, w}$ as representing the transpositions in an expression for $w$.
The fact that each vertical edge is used by the left greedy path collection $P$ originating from $[n]$ in $G_{{\rm id}, w}$ means that the sink set of $P$ is obtained by acting on $[n]$ with each transposition in $\underline{w}$ from left to right. This yields precisely $w^{-1}([n])$. For the second statement, we consider the left greedy path collection $P$ originating from $[n]$ in $G_{v,w}$. We claim that $I^{(l)}_X$ is the sink set of the path collection $P^{(l)}$ originating from $[n]$ obtained from $P$ by replacing the paths originating from $[l]$ with horizontal paths. We show this by induction. Suppose the sink set of $P^{(l-1)}$ is $I^{(l-1)}_X$. Recall from \Cref{prop:pathcollectionproperties} that for $i,j\in [n]$ with $i<j$, the path originating from $i$ in $P$ terminates at a sink smaller than the sink of the path originating at $j$, in $\prec$ order. Thus, $i^{(l-1)}_l$ is the sink of the path originating at $l$ in $P^{(l-1)}$. To obtain $I^{(l)}_X$, we replace this element by the $\prec$-smallest possible element. Suppose we wish to construct a path collection with sink set $I^{(l)}_X$. Since each individual path in $P$ is left greedy by \Cref{prop:pathcollectionproperties}, and reaches the highest possible sink by \Cref{lem:greedyisextremeuv}, the subcollection of $P$ from $\{l+1,\ldots, n\}$ to $\{i^{(0)}_{l+1},\ldots, i^{(0)}_n\}=\{i^{(l)}_{l+1},\ldots, i^{(l)}_n\}$ is the unique path collection originating from a subset of $[n]$ with that sink set. Thus, in any path collection from $[n]$ to $I^{(l)}_X$, these paths appear. To obtain a path collection with sink set $I^{(l)}_X$, we have to lower the path originating at $l$, that is, modify it to have a lower sink. The lowest possible sinks for a collection of paths originating from $[l]$ are achieved by horizontal paths, since there are no vertical edges going downwards in $G_{v,w}$ by \Cref{cor:arrowspointup}. Thus, the sink set of $P^{(l)}$ is indeed $I^{(l)}_X$, as claimed.
It follows that $t_j$ is the sink of the horizontal path originating from source vertex $j$ in $G_{v,w}$. Since the labels of the source vertices in $G_{v,w}$ are, from bottom to top, \[ v^{-1}(1),\ldots, v^{-1}(n), v^{-1}(2n),\ldots, v^{-1}(n+1), \] this implies that $t_j=v(j)$. \end{proof} \begin{example} We use the same $1$-parameter family as in \Cref{ex:NonNegn4} to apply \Cref{thm:Nonnegative} and check that the following matrix is in $\SS_4^{\geq 0}$: \[ A=\begin{bmatrix} 0&0&0&2\\ 0&0&0&0\\ 0&0&0&2\\ -2&0&-2&0 \end{bmatrix}. \] Using our {\tt Macaulay2} implementation of \Cref{thm:RichardsonRestated} we computed that $\begin{bmatrix} \Id_n | A \end{bmatrix} \in \mathcal{R}^{>0}_{2134, 2385}$ where the permutations are written in one-line notation. \end{example} \begin{proposition}\label{prop:BoundSSMat} The following decomposition holds: \[ \SS_n^{ \geq 0} = \bigsqcup_{\substack{v\in W_{[n-1]},\ w \in W^{[n-1]} \\ v \leq w}} \mathcal{R}_{v,w}^{>0}. \] \end{proposition} \begin{proof} The set of nonnegative skew-symmetric matrices $\SS_n^{\geq 0}$ is in bijection with the set of points in $\OGr^{\geq 0}(n,2n)$ represented by matrices $X$ that can be row reduced to the form $\begin{bmatrix} \Id_n | A \end{bmatrix}$. For this, we must have that the minor of the first $n$ columns of $X$ is nonzero. That is, there must be a path collection from $[n]$ to $[n]$ in the graph $G_{v,w}$. By \Cref{cor:arrowspointup}, this is possible if and only if the source vertices on the bottommost $n$ strands are $[n]$. This happens when $v^{-1}([n])=[n]$, equivalently $v([n])=[n]$, which is exactly the condition that $v \in W_{[n-1]}$. \end{proof} \section{Pfaffian signs}\label{sec:5} We recall that the determinant $\det(A)$ of a skew-symmetric matrix $A = (a_{ij})_{i,j=1}^{2m}$ of size $2m \times 2m$ is a homogeneous polynomial of degree $2m$ in the entries of $A$.
Moreover, it is the square of a degree $m$ homogeneous polynomial $\Pf(A)$ known as the \emph{Pfaffian}, \begin{equation} \Pf(A) := \frac{1}{2^m m!} \sum_{\sigma \in S_{2m}} {\rm sgn}(\sigma) \prod_{i=1}^{m} a_{\sigma(2i-1),\sigma(2i)}. \end{equation} For any skew-symmetric matrix $A \in \SS_{n}$ and any set $I \subset [n]$ of even size we denote by $\Pf_I(A)$ the Pfaffian of the submatrix $A_{I}$ of $A$ whose row and column indices are in $I$. By convention, we set $\Pf_{\emptyset}(A) = 1$. We denote the semi-algebraic set of \emph{Pfaffian-positive} skew-symmetric matrices by \begin{equation} \SS_n^{\Pf > 0} \coloneqq \Big\{ A \in \SS_n : \sgn(I, [n]) \ \Pf_I(A) > 0 \quad \text{for any } I\subset [n] \text{ of even size}\Big\}, \end{equation} and denote by $\SS^{\Pf \geq 0}_n$ its Euclidean closure in $\RR^{n \times n}$. \smallskip \begin{remark} Let $g = {\rm diag}(1, -1, 1, \dots, (-1)^{n+1})$. We note that a skew-symmetric matrix $A$ is in $\SS_n^{\Pf > 0}$ if and only if $\Pf_I(A') > 0$ for any subset $I \subset [n]$ of even size where $A' = g A g^T$. So the set $\SS_n^{\Pf > 0}$ is conjugate to the set of matrices with positive Pfaffians. This justifies our~notation. \end{remark} A priori, there is no reason why the notion of total positivity in $\SS_n$ we have discussed so far should be compatible with $\SS_n^{\Pf > 0}$. In this section, we briefly discuss how these two notions of positivity are related, namely we prove \Cref{thm:PfaffianSign} and discuss a few consequences. To do so, we shall need some background on the half-spin representation. For more details on Lie theoretic background we refer the reader to \cite{Hall, Procesi}. \smallskip Recall the vector spaces $E$ and $F$ from \eqref{eq:EFspaces} and let $V = E \oplus F \cong \RR^{2n}$. The space $V$ is equipped with the quadratic form $q$ as in \eqref{eq:quadForm}.
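Returning briefly to the definition of $\SS_n^{\Pf > 0}$ above, the sign conditions are straightforward to check by machine. The following self-contained Python sketch (independent of our {\tt Macaulay2} code; all function names are ours and chosen only for illustration) computes $\Pf_I(A)$ by the standard recursive expansion along the largest index and evaluates the signed Pfaffians $\sgn(I,[n])\,\Pf_I(A)$ over all even subsets $I \subset [n]$:

```python
from itertools import combinations

def det(M):
    # determinant by cofactor expansion along the first row
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def pfaffian(A, rows=None):
    # Pfaffian of the principal submatrix of A on `rows` (0-indexed),
    # via the standard recursive expansion along the largest index
    if rows is None:
        rows = list(range(len(A)))
    if not rows:
        return 1
    if len(rows) % 2:
        return 0
    *head, last = rows
    return sum((-1) ** j * A[r][last] * pfaffian(A, [x for x in head if x != r])
               for j, r in enumerate(head))

def shuffle_sign(I, n):
    # sgn(I, [n]): sign of the shuffle permutation (I, [n] \ I),
    # computed as (-1)^(number of inversions between I and its complement)
    comp = [b for b in range(1, n + 1) if b not in I]
    inv = sum(1 for a in I for b in comp if a > b)
    return (-1) ** inv

def signed_pfaffians(A):
    # all signed Pfaffians sgn(I, [n]) * Pf_I(A) over even subsets I of [n]
    n = len(A)
    return {I: shuffle_sign(I, n) * pfaffian(A, [i - 1 for i in I])
            for k in range(0, n + 1, 2)
            for I in combinations(range(1, n + 1), k)}
```

Membership of $A$ in $\SS_n^{\Pf \geq 0}$ then amounts to all values returned by \texttt{signed\_pfaffians} being nonnegative, and the identity $\Pf(A)^2 = \det(A)$ can be verified numerically on any skew-symmetric example.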
The exterior algebra $\bigwedge^\bullet E$ decomposes into a direct sum of two vector spaces \(\bigwedge ^\bullet E = S \oplus S^- \), where $S$ (resp. $S^-$) is the subspace of elements in $\bigwedge^\bullet E$ of degree equal to $n$ (resp. $n+1$) modulo 2. For any $I=\{i_1<\ldots< i_\ell\} \subset [n]$ we denote $e_I := e_{i_1}\wedge \cdots \wedge e_{i_\ell}$. The $2^{n-1}$ vectors $e_{[n] \setminus I}$ where $I$ is a subset of $[n]$ of even size form a basis of $S$. The orthogonal Grassmannian $\OGr(n,2n)$ can be minimally embedded as the \emph{spinor variety} in $\PP(S)$. Here, we briefly explain how this embedding is obtained using Pfaffians. Following \cite[Section 2]{Manivel} and \cite[Section 11.7]{Procesi}, let $\mathscr{C}$ be the \emph{Clifford algebra} of the quadratic space $(V,q)$. This algebra is defined as the quotient of the tensor algebra $T^\bullet V$ modulo the relations \begin{equation} u \otimes v + v\otimes u = 2 q(u,v), \quad \text{for } u,v \in V. \end{equation} Let $f := f_1 \cdots f_n \in \mathscr{C}$ and note that the left-ideal $\mathscr{C} \cdot f$ is isomorphic (as a vector space) to $\wedge^\bullet E$ via $ e_I \cdot f \mapsto e_I$. Now, let $A \in \SS_n$ and $H=\rowspan \begin{bmatrix} \Id_n | A \end{bmatrix}$. Then, $H$ is spanned by \[ e_i(A) := e_i + \sum_{j=1}^{n} a_{ij} f_j, \quad \text{for } 1 \leq i \leq n. \] Let $h = e_1(A) \cdots e_{n}(A) \in \mathscr{C}$, then the \emph{pure spinor} of $A$, namely the point in $\PP(S)$ that represents $H$, is the one-dimensional linear space $h\mathscr{C} \cap \mathscr{C} f$ in $\mathscr{C} f \cong \wedge^\bullet E$, see \cite[III.1.1]{Chevalley}. The line $h\mathscr{C} \cap \mathscr{C} f$ in $S$ is spanned over $\CC$ by $h\cdot f \in \mathscr{C} \cdot f$. In other words, the pure spinor of $A$ is the element $u_H \in \bigwedge ^\bullet E$ such that $u_H\cdot f = h\cdot f$.
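As a small sanity check of this construction (included here only for illustration), take $n=2$ and $A = \left(\begin{smallmatrix} 0 & a \\ -a & 0 \end{smallmatrix}\right)$, so that $h = (e_1 + a f_2)(e_2 - a f_1)$ and $f = f_1 f_2$. Using the relations $f_1^2 = f_2^2 = 0$ and $f_2 e_2 = 2 - e_2 f_2$ in $\mathscr{C}$ (so that $q(e_i,f_i)=1$), we compute
\begin{align*}
h \cdot f &= \left(e_1 e_2 - a\, e_1 f_1 + a\, f_2 e_2 - a^2 f_2 f_1\right) f_1 f_2 = e_1 e_2 f_1 f_2 + a\,(2 - e_2 f_2)\, f_1 f_2 \\
&= e_1 e_2 f_1 f_2 + 2a\, f_1 f_2,
\end{align*}
since $f_1 f_1 f_2 = 0$ and $f_2 f_1 f_2 = -f_1 f_2 f_2 = 0$. Hence the pure spinor of $A$ is $e_1 \wedge e_2 + 2a = e_{[2]} + 2\,\Pf_{\{1,2\}}(A)\, e_{\emptyset}$, in agreement with the general formula proved next.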
\begin{lemma}\label{lem:ManivelFormCorrect} Given a generic point in $\OGr(n,2n)$ of the form $\begin{bmatrix} \Id_n | A \end{bmatrix}$ with $A \in \SS_n$, the corresponding pure spinor in $\PP(S)$ is \begin{equation}\label{eq:SpinorEmbedding} \sum_{I \subset [n] \text{ of even size}} \sgn(I,[n]) \ 2^{|I|/2} \ \Pf_{I}(A) \ e_{[n] \setminus I} \quad \in S. \end{equation} \end{lemma} \begin{proof} It is enough to compute $h \cdot f$ in $\mathscr{C}$. We proceed by induction on $n$. Suppose the result holds for all $A \in \SS_n$. Fix $A=(a_{i,j})_{i,j=1}^{n+1} \in \SS_{n+1}$ and let us write \[ u_i := e_{i} + \sum_{j=1}^{n} a_{ij} f_j \quad \text{for } 1 \leq i \leq n. \] Note that for any $1\leq i \leq n+1$ and $1 \leq j \leq n$ we have $f_i^2 = 0$ and $f_{n+1} u_j = - u_j f_{n+1}$. We then have \begin{align*} e_1(A) \cdots e_{n}(A) \ e_{n+1}(A) \ f &= e_1(A) \cdots e_{n}(A) \ e_{n+1} \ f \\ &= \left(u_1 + a_{1,n+1} f_{n+1}\right)\cdots \left(u_n + a_{n,n+1} f_{n+1}\right) \ e_{n+1} \ f \\ & = \left(u_1 \cdots u_n + \sum_{i=1}^{n} (-1)^{n-i} a_{i,n+1} u_1 \cdots \widehat{u_i} \cdots u_n \ f_{n+1} \right) \ e_{n+1} \ f\\ & = u_1 \cdots u_n \ e_{n+1} \ f + \sum_{i=1}^{n} (-1)^{n-i} a_{i,n+1} u_1 \cdots \widehat{u_i} \cdots u_n \ f_{n+1} \ e_{n+1} \ f. \end{align*} By the induction hypothesis, we have \[ u_1 \cdots u_n e_{n+1} f = \sum \limits_{K \subset [n] \text{ even}} {\rm sgn} (K,[n]) \ 2^{|K|/2} \ \Pf_K(A) e_{[n+1] \setminus K} \ f. \] Note that $f_{n+1} e_{n+1} \ f = ( - e_{n+1}f_{n+1} \ + \ 2 ) \ f = 2 f$. For each $i \in [n]$ we denote by $A'_i$ the submatrix of $A$ with rows and columns $i,n+1$ removed. Then, \[ u_1 \cdots \widehat{u_i} \cdots u_n f = \sum_{K \subset [n]\setminus i \text{ even}} {\rm sgn} (K, [n]\setminus i) \ 2^{|K|/2} \ \Pf_K(A'_i) e_{[n] \setminus (K \cup i)}f.
\] Using the recursive formula for Pfaffians, we obtain \begin{align*} &\sum_{i=1}^{n} (-1)^{n-i} a_{i,n+1} \ u_1 \cdots \widehat{u_i} \cdots u_n \ f_{n+1} \ e_{n+1} \ f \\ &=\sum_{i=1}^{n} (-1)^{n-i} \ 2 \ a_{i,n+1} \ u_1 \cdots \widehat{u_i} \cdots u_n f\\ &= \sum_{i=1}^{n} \sum_{K \subset [n]\setminus i \text{ even}}(-1)^{n-i} \ 2^{1 + |K|/2} \ a_{i,n+1}{\rm sgn} (K,[n]\setminus i) \ \Pf_K(A'_i) \ e_{[n] \setminus (K \cup i)} \ f\\ &= \sum_{\substack{K = \{i_1<\ldots< i_l\} \text{ even}\\ i_l=n+1 }} {\rm sgn} (K,[n+1]) 2^{|K|/2} \left( \sum_{j=1}^{l-1} (-1)^{j}a_{i_j,n+1} \ \Pf_{K\setminus \{i_j,n+1\}}(A) \right) \ e_{[n+1] \setminus K} \ f\\ &=\sum_{\substack{K \subset [n+1] \text{ even} \\ n+1 \in K}} {\rm sgn} (K,[n+1]) \ 2^{|K|/2} \ \Pf_{K}(A) \ e_{[n+1] \setminus K} \ f. \qedhere \end{align*} \end{proof} We note that \Cref{eq:SpinorEmbedding} appears in \cite[Section 2.3]{Manivel} up to a sign and constant inconsistency, which is not relevant for their purposes. \smallskip The vector space $S$ can be endowed with an action of the spin group $\Spin(2n)$. This is the half-spin representation. It is an irreducible representation with highest weight $\lambda = \frac{1}{2}(1,\dots, 1) \in \RR^{n}$. The set $\{e_{[n]\setminus I}: I \subset [n] \text{ of even size}\}$ is a basis of weight vectors of $S$ for this representation, and the weights of $S$ are $ \frac{1}{2} \left(\pm 1,\ldots, \pm 1\right)$ where the number of minus signs is even. Explicitly, the weight of $e_{[n]\setminus I}$ has $-$ in the positions indexed by $I$, and in particular the highest weight vector is $e_{[n]}$ with weight $\lambda$. \smallskip We now describe $S$ as a representation of the Lie algebra $\mathfrak{spin}(2n)\cong \mathfrak{so}(2n)=\bigwedge^2 V$. Following \cite[Chapter II]{Chevalley} and \cite[Section 5]{SpinGeometry} the action of $\mathfrak{so}(2n)$ on $S$ is given as follows.
For $v=v' \oplus v'' \in E \oplus F $ and $\omega=\omega_1\wedge \cdots \wedge \omega_k \in \bigwedge^{k} E$, we write \[ v\cdot \omega := v'\wedge \omega_1\wedge \cdots \wedge \omega_k+2\sum_{i=1}^k (-1)^{i-1} q(v'',\omega_i) \ \omega_1\wedge \cdots \omega_{i-1}\wedge\omega_{i+1}\wedge\cdots \wedge \omega_k. \] Note that $v \cdot \omega \in \bigwedge^{k-1} E \oplus \bigwedge^{k+1} E$. Then for $\omega \in S$ we have $v \cdot \omega \in S^{-}$. Acting again with another element $w \in V$ yields $w\cdot(v \cdot \omega) \in S$. Hence we obtain an action of $\mathfrak{so}(2n)$ on $S$: \[ (v_1\wedge v_2)\cdot \omega = \frac{1}{4} [v_1,v_2] \cdot \omega = \frac{1}{4}(v_1 \cdot (v_2 \cdot \omega) - v_2 \cdot (v_1 \cdot \omega) ) \quad \text{for any } v_1\wedge v_2 \in \mathfrak{so}(2n) \text{ and } \omega \in S. \] The factor of $1/4$ comes from the degree $2$ covering map $\Spin(2n) \to \SO(2n)$. \smallskip The $2^{n-1}$ Pfaffians of $A$ are related to certain regular functions on $\OGr(n,2n)$ known as \emph{generalized minors}. These functions are positive on $\OGr^{>0}(n,2n)$, see \cite[Lemma 11.4]{MR}. To define them we need to fix the root space decomposition of $\mathfrak{so}(2n)$. The maximal torus \eqref{eq:maxTorus} of $\SO(2n)$ corresponds to a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{so}(2n)$. We identify the dual $\mathfrak{h}^\ast$ of $\mathfrak{h}$ with the vector space $\RR^n = \Span_{\RR}(\varepsilon_1,\ldots, \varepsilon_n)$. The action of the Weyl group $W$ on $\RR^n$ is given through its generators \eqref{eq:WeylGroupGens} as follows \begin{equation}\label{eq:WeylGroupActionOnWeights} s_i\cdot (a_1,\ldots,a_n)=\begin{dcases} (a_1,\ldots,a_{i-1},a_{i+1},a_i,a_{i+2},\ldots,a_n) & \text{if } 1\leq i \leq n-1,\\ (a_1,\ldots, a_{n-2},-a_n,-a_{n-1}) &\text{if } i = n.
\end{dcases} \end{equation} We choose a set of simple roots $\Phi = \{\varepsilon_1-\varepsilon_2, \varepsilon_2-\varepsilon_3, \ldots, \varepsilon_{n-1}-\varepsilon_{n}, \varepsilon_{n-1}+\varepsilon_n\}$. The positive roots are then \[ \Phi^{+} = \{\varepsilon_i-\varepsilon_j : 1\leq i < j \leq n\} \cup \{\varepsilon_i+\varepsilon_j: 1\leq i < j \leq n\}. \] Finally, we note that the following roots and their root vectors correspond to the pinning in \Cref{sec:2} by taking the matrix exponential $ \exp: \mathfrak{so}(2n) \cong \wedge^2 \CC^{2n} \to \SO(2n)$: \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Roots & $\varepsilon_i-\varepsilon_j$ for $i \neq j$ & $\varepsilon_i+\varepsilon_j$ for $i < j$ & $ -\varepsilon_i-\varepsilon_j$ for $i < j$ \\ \hline Root vectors & $e_i\wedge f_j$ & $e_i \wedge e_j$ & $f_j \wedge f_i$\\ \hline \end{tabular} \caption{The roots of $\mathfrak{so}(2n)$ and their corresponding root vectors in the half spin representation $S$.} \label{tab:rootsAndrootVectors} \end{table} Recall the definition of $\dot{s}_i$ from \eqref{eq:SiDot}. \begin{lemma}\label{lem:logSi} Let $I$ be a subset of $[n]$ of even size.
Then for $1 \leq i \leq n-1$, \[ {\rm Log}(\dot{s}_i) \cdot e_{[n]\setminus I} = \begin{cases} - \frac{\pi}{2} e_{[n]\setminus ( (I \setminus i) \cup i+1 )} \quad & \text{if } i \in I \text{ and } i+1 \not \in I, \\ \frac{\pi}{2} e_{[n]\setminus ( (I \setminus i+1) \cup i )} \quad & \text{if } i \not \in I \text{ and } i+1 \in I, \\ 0 \quad & \text{otherwise,} \end{cases} \] and \[ {\rm Log}(\dot{s}_n) \cdot e_{[n]\setminus I} = \begin{cases} -\frac{\pi}{4}e_{[n]\setminus (I \setminus \{n-1, n\})} \quad & \text{if } n-1 \in I \text{ and } n \in I, \\ \pi e_{[n]\setminus (I \cup \{n-1, n\})} \quad & \text{if } n-1 \not \in I \text{ and } n \not \in I, \\ 0 \quad & \text{otherwise.} \end{cases} \] Here, ${\rm Log} \colon {\rm Spin}(2n) \to \mathfrak{spin}(2n)$ is the logarithm map between a simply connected Lie group and its Lie algebra. \end{lemma} \begin{proof} Using a matrix logarithm for the root subgroup corresponding to $s_i$ in ${\rm Spin}(2n)$ yields the following formula for ${\rm Log}(\dot{s}_i) \in \mathfrak{spin}(2n) \cong \wedge^2 (E \oplus F)$: \[ {\rm Log}(\dot{s}_i) = \begin{dcases} \frac{\pi}{2}(e_{i+1}\wedge f_i-e_i\wedge f_{i+1}) & \text{if } 1\leq i \leq n-1,\\ \frac{\pi}{2}(f_{n}\wedge f_{n-1}-e_{n-1}\wedge e_{n}) &\text{if } i = n. \end{dcases} \] Note that, for $I \subset [n]$ of even size, we have for any $i,j \in [n]$ \begin{align*} & e_i\cdot e_{[n]\setminus I} = \begin{dcases} (-1)^{\sgn(i,I\setminus i)} e_{[n]\setminus (I\setminus i)}, &\text{ if } i \in I,\\ 0, &\text{otherwise}, \end{dcases} \\ & f_j\cdot e_{[n]\setminus I} = 2 \begin{dcases} (-1)^{\sgn(j,I)} e_{[n]\setminus (I\cup j)}, &\text{ if } j\notin I,\\ 0, &\text{otherwise.} \end{dcases} \end{align*} The following computation gives the desired result for ${\rm Log}(\dot{s}_i) \cdot e_{[n]\setminus I}$. A similar computation yields the result for $\dot{s}_n$.
\begin{align*} &(e_{i+1}\wedge f_i-e_i\wedge f_{i+1})\cdot e_{[n]\setminus I} = \frac{1}{4}\left(e_{i+1}f_i-f_ie_{i+1}-e_if_{i+1}+f_{i+1}e_{i}\right)\cdot e_{[n]\setminus I}\\ &= \frac{1}{2}\left(e_{i+1}f_i-e_if_{i+1}\right)\cdot e_{[n]\setminus I}=\begin{dcases} -e_{[n]\setminus((I\setminus i)\cup i+1)} & \text{if } i \in I \text{ and } i+1 \not \in I, \\ e_{[n]\setminus ( (I \setminus i+1) \cup i )} \quad & \text{if } i \not \in I \text{ and } i+1 \in I, \\ 0 \quad & \text{otherwise.} \end{dcases} \end{align*} \end{proof} We are now ready to prove \Cref{thm:PfaffianSign}. \begin{proof}[\bf Proof of \Cref{thm:PfaffianSign}] Given a skew-symmetric matrix $A \in \SS_n$, the Pfaffians of $A$ are, up to fixed scalars, an instance of \emph{generalized minors} of the point $[\Id_n | A]$ in $\OGr(n,2n)$ for the half-spin representation $S$. Following \cite[Section III, 1.3-1.7]{Chevalley}, we know that if $g \in \Spin(2n)$ is a lift of an element $\begin{bmatrix} \Id_n & \ast\\ A & \ast \end{bmatrix}$ of $\SO(2n)$ where $A \in \SS_n$, then $g\cdot e_{[n]}$ is the pure spinor corresponding to the rowspan of $\begin{bmatrix} \Id_n | A \end{bmatrix}$. Thus, by \Cref{lem:ManivelFormCorrect}, \begin{equation}\label{eq:PfaffianExpansions} g \cdot e_{[n]} = \sum_{\substack{I \subset [n]\\ |I| \text{ even}}} \sgn(I, [n]) \ 2^{|I|/2} \ \Pf_I(A) \ e_{[n]\setminus I}. \end{equation} Note that the change from the row to the column convention is due to \Cref{rem:MRParamConvention}. The generalized minors corresponding to the half-spin representation $S$ are given by \[ m_w(g)=\langle g \cdot e_{[n]}, \dot{w}\cdot e_{[n]} \rangle, \] where $\dot{w} \in \Spin(2n)$ is a lift of $w \in W$ to the Spin group, and $\langle \cdot, \cdot \rangle$ is the standard inner product with respect to the basis $(e_{[n] \setminus I})$ of $S$.
We stress that this lifting depends on the choice of a Cartan subalgebra $\mathfrak{h}$, the simple roots, and root vectors made previously for $\mathfrak{so}(2n)$, that is, a pinning of $\Spin(2n)$ compatible with that of $\SO(2n)$ in \Cref{sec:2}. \smallskip Recall from \Cref{def:Iw} that $ I_w := \{i \in [n]: w(i)>n \}$. \begin{lemma} For any $w \in W$ we have \[ \dot{w} \cdot e_{[n]} = c_w \ e_{[n] \setminus I_{w^{-1}}}, \quad \text{for some nonzero scalar } c_w \in \RR. \] \end{lemma} \begin{proof} A classical result from the representation theory of Lie algebras \cite[Section 7.2 and Lemma 21.3]{HumphreysBook} guarantees that for any $w \in W$, the vector $\dot{w} \cdot e_{[n]}$ is a weight vector of weight $w \cdot \lambda$, where $\lambda$ is the highest weight. Hence, it is enough to show that $w \cdot \lambda$ has $-\frac{1}{2}$ in the positions indexed by the set $I_{w^{-1}}$. By equation \eqref{eq:WeylGroupActionOnWeights}, $w\in W$ acts on $\lambda$ by applying a signed permutation. Observe that when we apply $w$ to $\lambda=\frac{1}{2}(1, \ldots, 1)$, we have $w^{-1}(i)>n$ if and only if $(w\cdot \lambda)_{i}<0$. Since all the weight spaces of $S$ are one-dimensional, we obtain $\dot{w} \cdot e_{[n]} = c_w \ e_{[n]\setminus I_{w^{-1}}}$, where $c_w \in \RR$ is nonzero, as desired. \end{proof} Hence, from \eqref{eq:PfaffianExpansions}, we deduce that \[ m_{w}(g) = \langle g \cdot e_{[n]} , \dot{w}\cdot e_{[n]} \rangle = c_w \sgn(I_{w^{-1}},[n])\ 2^{|I_{w^{-1}}|/2} \ \Pf_{I_{w^{-1}}}(A). \] Then, by \cite[Lemma 11.4]{MR} (and, indirectly, by \cite[Theorem 3.4]{Lusztig2}), it is enough to prove that the scalars $c_w$ are positive. To do so we first need the following. \smallskip From \Cref{rem:wAsSet} there is a bijection $w W_{[n-1]} \mapsto W_{[n-1]}w^{-1} \mapsto I_{w^{-1}}$ between left cosets of $W_{[n-1]}$ and even subsets of $[n]$.
Hence, it is enough to consider the minimal representatives of left cosets in $W / W_{[n-1]}$ to obtain all Pfaffians as generalized minors. \smallskip We now prove inductively on the length of $w \in W^{[n-1]}$ that $c_w>0$. For $\ell(w)=0$, $w={\rm id}$ and $I_{w^{-1}} = \emptyset$. Then, we have the desired sign because $c_{\rm id}=1$. Assume that $c_{w'}>0$ and let $w= s_i w'$ for some $i \in [n]$. Since $w'$ is a minimal left coset representative, $w$ is as well. We have $\dot{w}\cdot e_{[n]} = \dot{s}_i \ \dot{w}'\cdot e_{[n]} = c_{w'} \ \dot{s}_i \cdot e_{[n] \setminus I_{(w')^{-1}}}$. As we are considering a representation of the simply connected group of type $D_n$, $\Spin(2n)$, we have $\dot{s}_i\cdot e_{[n]\setminus I_{(w')^{-1}}}=\exp({\rm Log}(\dot{s}_i))\cdot e_{[n]\setminus I_{(w')^{-1}}}$. \smallskip Since $\ell(w')=\ell(w)-1$, $(w')^{-1}(\alpha_i)$ is a positive root. If $1\leq i \leq n-1$, this means $(w')^{-1}\varepsilon_i-(w')^{-1}\varepsilon_{i+1}> 0$. However, since $w$ and $w'$ are minimal coset representatives, $I_{w^{-1}} \neq I_{(w')^{-1}}$ and so either $i \in I_{(w')^{-1}}$ or $i+1 \in I_{(w')^{-1}}$, but not both. Thus, $i+1 \in I_{(w')^{-1}}$ and $i \notin I_{(w')^{-1}}$, that is, $I_{w^{-1}}=(I_{(w')^{-1}}\setminus \{i+1\})\cup \{i\}$. If $i=n$, since $|I_{(w')^{-1}}|$ is even, $(w')^{-1}\alpha_n > 0$ implies $n-1,n \notin I_{(w')^{-1}}$. In this case, $I_{w^{-1}} = I_{(w')^{-1}} \cup \{n-1,n\}$. \smallskip By \Cref{lem:logSi}, $\Span_{\RR}(e_{[n]\setminus I_{(w')^{-1}}},e_{[n]\setminus I_{w^{-1}}})$ is invariant under the action of ${\rm Log}(\dot{s}_i)$. We can calculate the corresponding action of $\dot{s}_i$ via the matrix exponential, obtaining $c_w=c_{w'}$ if $i \in [n-1]$ and $c_w=2c_{w'}$ if $i=n$. \end{proof} \begin{example}\label{ex:NegPfaff} The matrix $A$ in \Cref{ex:NonNegn4} is manifestly not nonnegative as its Pfaffians do not satisfy the sign pattern in \Cref{thm:PfaffianSign}.
Indeed, the entry $(3,4)$ of $A$ is $-2<0$. \end{example} \begin{example} We have shown that $\SS_n^{\geq 0} \subset \SS_{n}^{\Pf \geq 0}$. This inclusion is strict. For example \[ A = \begin{bmatrix} 0&0&0&2\\ 0&0&1&0\\ 0&-1&0&2\\ -2&0&-2&0 \end{bmatrix} \] belongs to $\SS_{4}^{\Pf \geq 0}$. However, using \Cref{thm:Nonnegative} as in \Cref{ex:NonNegn4}, one can check that the minors $M_{j,k}(B(\epsilon))$ are \begin{align*} &8\,\epsilon+ o(\epsilon^{2}), \quad &&16 \, \epsilon + o( \epsilon^{2}),& \quad & 8\,\epsilon + o(\epsilon^{2}),\\ &2 + o(\epsilon^{2}), \quad &&\mathbf{-2} + o(\epsilon^{2}),& \quad & 2 + o(\epsilon). \end{align*} Thus, the matrix $A$ is \emph{not} nonnegative. \end{example} \section{Future directions}\label{sec:6} \subsection{Cluster structure} The connection between positivity and cluster algebras has arisen frequently since the seminal works of Berenstein, Fomin and Zelevinsky, see for example \cite{BFZIII}. Consequently, whenever there is a positivity test for a semi-algebraic region of a projective variety in terms of regular functions, it is natural to ask whether these functions form a cluster in some cluster algebra. The cluster algebra structure of partial flag varieties was described in general in \cite{GeissLeclercSchroer} using the language of cluster categories, but it is difficult to infer whether it relates to the minors $M_{j,k}$ in the case of the orthogonal Grassmannian. We briefly comment on how the relationship between positivity and cluster structures on the orthogonal Grassmannian $\OGr(n,2n)$ compares to related varieties. \smallskip Recent work of Bossinger and Li \cite{BossingerLi} explored cluster structures on partial flag varieties of type A, giving an explicit interpretation of the cluster structure in \cite{GeissLeclercSchroer}.
Bossinger and Li describe how to obtain the cluster structure on two specific families of partial flag varieties from the cluster structure on a Grassmannian and conjecture this can be done more generally. In particular, starting from an initial quiver for the Grassmannian, they give a sequence of mutations, freezings of mutable vertices, and deletions of vertices that transform it into a quiver of the partial flag variety cluster algebra. Since positivity of the variables in a single cluster guarantees positivity of all cluster variables, this work provides a cluster theoretic grounding for the positivity test described in \cite{boretsky} and can be interpreted as conjecturally giving an explicit positivity test for any (type A) partial flag variety. In this setting, all Pl\"ucker coordinates are cluster variables and thus are positive on the positive flag variety. \smallskip We now turn to $\OGr(n,2n)$. Unlike the type A setting, our notion of positivity asks that a set of Pl\"ucker coordinates have a fixed sign pattern, so that some coordinates are forced to be negative. Moreover, there exist Pl\"ucker coordinates whose sign is not fixed on the positive orthogonal Grassmannian. In \Cref{ex:n=4MRparam} we described the open Deodhar component in $\OGr(4,8)$; the Pl\"ucker coordinate $\Delta^{1458}(X)=t_1^2t_2^2t_3t_4-t_1t_2t_3t_4t_5t_6$ has indeterminate sign for $t_i \in \RR_{>0}$. We remark that there are similar behaviors in the BCFW tiles of the $(n,k,m=4)$ amplituhedron \cite{BCFWClusterStructure}. Using \textit{signed seeds}, the authors of \cite{BCFWClusterStructure} are still able to obtain the cluster structure on a standard BCFW tile from a seed of the cluster structure on $\Gr(4,n)$ by freezing certain variables, where the Pl\"ucker coordinates whose signs are not fixed on the tiles do not appear as cluster variables. This could offer a hint of how to proceed in the case of the orthogonal Grassmannian.
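The sign indeterminacy of $\Delta^{1458}$ can be seen directly by evaluating the polynomial above at two positive parameter choices; a minimal sketch (the function name is ours):

```python
# Delta^{1458}(X) = t1^2 t2^2 t3 t4 - t1 t2 t3 t4 t5 t6 from the
# parametrization of the open Deodhar component of OGr(4,8).
def delta_1458(t1, t2, t3, t4, t5, t6):
    return t1**2 * t2**2 * t3 * t4 - t1 * t2 * t3 * t4 * t5 * t6

# Both points have every t_i > 0, yet the coordinate changes sign.
assert delta_1458(2, 1, 1, 1, 1, 1) > 0  # value 2
assert delta_1458(1, 1, 1, 1, 2, 2) < 0  # value -3
```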
\subsection{Positroids} One could try to study \textit{orthogonal positroids} of rank $n$. Positroids are rich combinatorial objects, which can be concisely described in terms of many familiar structures, including (decorated) permutations, (matroid) polytopes, planar (bi-colored) graphs, and (Le-diagrams on) tableaux. Let $\mathbb{k}$ be a field and let $X\in \mathbb{k}^{\binom{n}{k}}$ have coordinates $\Delta^I(X)$ indexed by $\binom{[n]}{k}$. Define the \textit{support} of $X$ as ${\rm supp}(X)=\{I\colon \Delta^I(X)\neq 0\}$. Then, the rank $k$ positroids on $[n]$ are $\{{\rm supp}(X)\colon X\in \Gr^{\geq 0}(k,n)\}$. These are in bijection with cells in the Richardson decomposition of $\Gr(k,n)$. Points of the Grassmannian over the sign hyperfield $\mathscr{S}$, in the sense of \cite{BakerBowler}, are called oriented matroids. By \cite{ArdilaRinconWilliams,SpeyerWilliams21}, positively oriented matroids, that is, those with all coordinates equal to $1$ or $0$, are in bijection with positroids; if we extend the definition of support to allow $\mathbb{k}$ to be a hyperfield, then positroids are precisely the supports of positively oriented matroids. To generalize this story, we can analogously define orthogonal positroids of rank $n$ to be $\{{\rm supp}(X)\colon X\in \OGr^{\geq 0}(n,2n)\}$. We note that $\OGr(n,2n)$ is cut out by Pl\"ucker relations and by some additional relations $\mathcal{S}$ equating certain Pl\"ucker coordinates (possibly with a sign). We propose the following definitions. \begin{definition} The \textit{orthogonal Grassmannian $\OGr_{\mathscr{H}}(n,2n)$ over a hyperfield} $\mathscr{H}$ is the subset of the Grassmannian $\Gr(n,2n)$ over $\mathscr{H}$ consisting of points which satisfy $\mathcal{S}$. \end{definition} \begin{definition} We say a point of $\OGr_{\mathscr{H}}(n,2n)$ is a (rank $n$) \textit{orthogonal oriented matroid} on $2n$ elements.
\end{definition} As in our consideration of cluster algebra structures, we need to be careful since not all Pl\"ucker coordinates have a fixed sign on the nonnegative orthogonal Grassmannian. For each $\epsilon = (\epsilon_I)\in \{-1,1\}^{\binom{2n}{n}}$ indexed by $\binom{[2n]}{n}$, let \[ \OGr_{\mathscr{S}}^\epsilon(n,2n)=\left\{(\Delta^I)\in \OGr_{\mathscr{S}}(n,2n) \colon \epsilon_I \Delta^I\in \{0,1\} \text{ for each } I\in \binom{[2n]}{n}\right\}. \] \begin{definition} We say that an orthogonal oriented matroid in $\OGr_{\mathscr{S}}^\epsilon(n,2n)$ is an \textit{orthogonal $\epsilon$-oriented matroid.} \end{definition} It is straightforward to show that there exists $\epsilon$ such that each orthogonal positroid is the support of an orthogonal $\epsilon$-oriented matroid. However, it is not clear if there are choices of $\epsilon$ for which the converse holds. \smallskip Finally, while there are type D Le diagrams as constructed in \cite{LamWilliamsCominiscule}, there are many questions to explore surrounding the combinatorics of orthogonal positroids. Concretely, one may ask if there exist objects generalizing decorated permutations, matroid polytopes, plabic graphs, or other familiar positroid cryptomorphisms which admit nice combinatorial descriptions and are in bijection with the Richardson cells $\mathcal{R}_{v,w^{[n-1]}}$. \subsection{Related semi-algebraic sets} In the beginning of this article, we made a choice of the quadratic form $q$. However, besides connecting the orthogonal Grassmannian to skew-symmetric matrices, this choice was somewhat arbitrary and our result translates nicely to other quadratic forms of signature $(n,n)$. Explicitly, changing quadratic forms is the same as doing a change of basis, and if we also shift the pinning by the same change of basis, we obtain a shifted positivity test. We denote by $\OGr_{q}(n,2n)$ (resp. $\OGr^{\geq 0}_{q}(n,2n)$) the orthogonal Grassmannian (resp.
nonnegative orthogonal Grassmannian) for the quadratic form $q$. Another natural choice of quadratic form we may consider is \[ q':\RR^{2n} \xrightarrow[]{} \RR, \quad x \mapsto \sum_{i=1}^{2n} (-1)^{i+1} x_i^2. \] The standard component of $\OGr_{q'}(n,2n)$ is cut out by the Pl\"ucker relations and the additional linear relations \[ \Delta^I(X) = \Delta^{[2n]\setminus I} (X), \quad \text{for all } I \in \binom{[2n]}{n}. \] The Pl\"ucker non-negative semi-algebraic set in $\OGr_{q'}(n,2n)$ emerged as the geometry behind the ABJM amplitudes in \cite{ABJM2,ABJM1} and, in \cite{IsingModel}, it was connected to the Ising model in statistical mechanics. This semi-algebraic set is rather different from $\OGr^{\geq 0}_{q'}(n,2n)$. In particular, it turns out that some Pl\"ucker coordinates do not even have a fixed sign in $\OGr^{\geq 0}_{q'}(n,2n)$. \medskip More generally, the notion of total positivity introduced by Lusztig for flag varieties only coincides with the positivity of Pl\"ucker coordinates in special cases, see \cite{BBEG24}. In many cases, the geometry that is relevant to applications, for example in physics, is the semi-algebraic set where certain regular functions are positive. In the cases where the notion of positivity we are interested in does not coincide with the notion of total positivity, it generally becomes hard to find combinatorial structures which explain the boundary stratification of the semi-algebraic set of interest. An example of such a situation is $\SS_n^{\Pf \geq 0}$, which is interesting to study in its own right. As the structure stops being governed by the combinatorics of the Weyl group, other tools are necessary to study its cell decomposition. Also, as briefly explained in \Cref{sec:5}, this is closely related to considering a \emph{positive Spinor variety}. We leave these questions for future work.
\titleformat{\section}{\centering \fontsize{12}{17} \large \bf \scshape }{\thesection}{0mm}{ \hspace{0.00mm}} \bibliographystyle{acm} \bibliography{references} \end{document}
2412.17324v1
http://arxiv.org/abs/2412.17324v1
Character values at elements of order 2
\documentclass[12pt]{amsart} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \usepackage{amssymb,amsmath, amscd} \usepackage{times, verbatim} \usepackage{graphicx} \usepackage[english]{babel} \usepackage[usenames, dvipsnames]{color} \usepackage{amsmath,amssymb,amsfonts} \usepackage{enumerate} \usepackage{parskip} \usepackage[T1]{fontenc} \usepackage{tikz-cd} \thispagestyle{empty} \usetikzlibrary{calc, arrows, decorations.markings} \tikzset{>=latex} \usepackage{amsmath,amssymb,amsfonts} \usepackage{amscd, amssymb, latexsym, amsmath, amscd} \usepackage{csquotes} \usepackage[all]{xy} \usepackage{pb-diagram} \usepackage{enumerate} \usepackage{ulem} \usepackage{anysize} \marginsize{3cm}{3cm}{3cm}{3cm} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \usepackage{mathdots} \usepackage{amssymb,amsmath, amscd} \usepackage{times, verbatim} \usepackage{graphicx} \usepackage[english]{babel} \usepackage[usenames, dvipsnames]{color} \usepackage{amsmath,amssymb,amsfonts} \usepackage{mathtools} \usepackage{enumerate} \usepackage{cleveref} \usepackage[vcentermath]{youngtab} \usepackage{mathrsfs} \usepackage{tikz} \thispagestyle{empty} \usetikzlibrary{calc, arrows, decorations.markings} \tikzset{>=latex} \usepackage{amsmath,amssymb,amsfonts} \usepackage{amscd, amssymb, latexsym, amsmath, amscd} \usepackage[all]{xy} \usepackage{pb-diagram} \usepackage{enumerate} \usepackage{ulem} \usepackage{anysize} \marginsize{3cm}{3cm}{3cm}{3cm} \input xy \xyoption{all} \usepackage{pb-diagram} \usepackage[all]{xy} \usepackage{tabularx} \usepackage{ytableau} \usepackage{xcolor} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \input xy \xyoption{all} \theoremstyle{plain} \newtheorem{thm}{Theorem}\newtheorem{theorem}{Theorem}[section] \newtheorem{question}{Question} \newtheorem{lemma}{Lemma} \newtheorem{cor}{Corollary} 
\newtheorem{corollary}{Corollary} \newtheorem{cors}[thm]{Corollaries} \newtheorem{prop}{Proposition} \newtheorem{proposition}{Proposition} \newtheorem{crit}[thm]{Criterion} \newtheorem{conj}{Conjecture} \newtheorem{eqn}[theorem]{Equation} \newtheorem{asum}[thm]{Assumption} \theoremstyle{definition} \theoremstyle{definition} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \newtheorem{defn}{Definition} \newtheorem{condition}[thm]{Condition} \newtheorem{alg}[thm]{Algorithm} \newtheorem{prob}[thm]{Problem} \newtheorem{rem}[theorem]{Remark} \newtheorem{note}[theorem]{Note} \newtheorem{summ}[thm]{Summary} \renewcommand{\thesumm}{} \newtheorem{ack}{Acknowledgments} \renewcommand{\theack}{} \newtheorem{notation}{Notation} \renewcommand{\thenotation}{} \newtheorem{notationnum}[thm]{Notation} \theoremstyle{remark} \newtheorem*{claim}{Claim} \newtheorem{explanation}{Explanation} \newtheorem{topic}[thm]{} \newtheorem{case}{Case} \newtheorem{casediff}{} \newtheorem{subcase}{} \newtheorem{integral}{Integral} \newtheorem{step}{Step} \newtheorem{stepdiff}{Step} \newtheorem{approach}{Approach} \newtheorem{principle}{Principle} \newtheorem{fact}{Fact} \newtheorem{subsay}{} \newcommand{\GL}{{\rm GL}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\sgn}{\text{Sgn}} \newcommand{\perm}{\text{Perm}} \newcommand{\Dim}{\text{dim}} \newcommand{\Mod}{\text{mod}} \newcommand{\Char}{\text{char}} \newcommand{\F}{\mathscr{F}_{\mu}} \newcommand{\Diag}{\text {diag }} \newcommand{\M}{\underline{\mu}} \newcommand{\T}{\underline{t}} \newcommand{\Sp}{{\rm Sp}} \newcommand{\SO}{{\rm SO}} \title{Character values at elements of order 2} \begin{document} \author{Chayan Karmakar} \address{Indian Institute of Technology Bombay, Powai, Mumbai-400076, INDIA} \email{[email protected]} \keywords{Weyl Character Formula; Compact Lie Groups; Finite Dimensional Highest Weight Representations of Classical Lie Groups, Maximal Torus} \subjclass{Primary 20G05; Secondary 05E05, 20G20, 
22E46} \maketitle {\hfill \today} \begin{abstract} In this paper we compute the character values of highest weight representations for classical groups of types \( A_n \), \( B_n \), \( C_n \), \( D_n \) and the exceptional group $G_2$ at all conjugacy classes of order 2. We prove that these character values, if nonzero, can be expressed either as a product involving the dimensions of two highest weight representations of classical subgroups, together with an additional constant factor, or as an alternating sum of products of the dimensions of two highest weight representations of the classical subgroups, also accompanied by an extra constant factor. \end{abstract} \section{Introduction} In \cite{DP}, Dipendra Prasad discovered a factorization theorem for characters of ${\rm GL}(mn,\C)$ at certain special elements of the diagonal torus, those of the form \[ \T \cdot c_n := \begin{pmatrix} \T & & \\ & \omega_n\T & \\ & & \ddots \\ & & & \omega_n^{n-1}\T \end{pmatrix}, \] where $\T=(t_1,t_2,\cdots,t_m) \in (\C^{*})^{m}$ and $\omega_n$ is a primitive $n$th root of unity. He proved that the character of a finite dimensional highest weight representation $\pi_{\underline{\lambda}}$ of ${\rm GL}(mn,\C)$ of highest weight $\underline{\lambda}$ at such elements $\T \cdot c_n$ is the product of characters of certain highest weight representations of ${\rm GL}(m,\C)$ at the element $\T^{n}=(t_1^{n},t_2^{n},\cdots,t_m^{n})\in (\C^{*})^{m}$. This work of D. Prasad was recently generalized to all classical groups in \cite{AK}.
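The smallest instance of this factorization, $m=1$ and $n=2$, can be checked directly from the quotient-of-alternants form of the Weyl character formula: the character of ${\rm GL}(2,\C)$ with highest weight $(\lambda_1,\lambda_2)$ at $(t,-t)$ vanishes when $\lambda_1+\lambda_2$ is odd and equals $\pm t^{\lambda_1+\lambda_2}$, a power of $t^2$, otherwise. A sketch in exact rational arithmetic (the helper names \texttt{det} and \texttt{schur} are ours):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz formula; exact for Fraction entries.
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def schur(lam, xs):
    # Character of GL(n) at a regular diagonal element, as the quotient
    # of alternants det(x_j^{lam_i + n - i}) / det(x_j^{n - i}).
    n = len(xs)
    num = [[xs[j] ** (lam[i] + n - 1 - i) for j in range(n)] for i in range(n)]
    den = [[xs[j] ** (n - 1 - i) for j in range(n)] for i in range(n)]
    return det(num) / det(den)

t = Fraction(3, 2)
# |lambda| odd: the character vanishes at (t, -t).
assert schur((2, 1), [t, -t]) == 0
# |lambda| even: the value is, up to sign, a power of t^2.
assert schur((3, 1), [t, -t]) == -t**4
```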
A very special case of the works \cite{DP} and \cite{AK} calculates the character of highest weight representations of classical groups at conjugacy classes of elements of order $2$ of the form $(\underbrace{1,\cdots,1}_{k \ {\rm times}},\underbrace{-1,\cdots,-1}_{k\ {\rm times}})$ for ${\rm GL}(2k,\C)$, and for elements of the form \\ $(\underbrace{1,\cdots,1}_{k \ {\rm times} },\underbrace{-1,\cdots,-1}_{k \ {\rm times} },\underbrace{-1,\cdots,-1}_{ k \ {\rm times} },\underbrace{1,\cdots,1}_{k \ {\rm times}})$ for both the groups ${\rm Sp}(2k,\C)$ and ${\rm SO}(2k,\C)$, along with the element $(\underbrace{1,\cdots,1}_{k \ { \rm times} },\underbrace{-1,\cdots,-1}_{ k \ { \rm times} },1,\underbrace{-1,\cdots,-1}_{k \ {\rm times} },\underbrace{1,\cdots,1}_{k \ {\rm times} })$ of ${\rm SO}(2k+1,\C)$. In this work, we will calculate the character $\Theta_{\lambda}(x_0)$ at any element $x_{0}$ of order $2$ in $G$ when $G$ is a classical group or $G=G_{2}$, generalizing the work of \cite{DP}, \cite{AK} and \cite{N}. \section{Some Elementary Results for Square Matrices} In this section we discuss an elementary lemma regarding the determinant of a square matrix, generalizing the well-known expression of the determinant of an $n \times n$ matrix in terms of the first column. We will omit the proof of the following lemma, which is a direct consequence of expressing the determinant of a matrix in terms of the linear transformation on the highest exterior power of the underlying vector space. \begin{lemma}\label{lem 1} Let \( B = [b_{ij}] \) be an \( n \times n \) matrix, and let \( S=\{ s_1 < \cdots <s_k\} \) be a subset of $\{1,2,\cdots,n \}$ of $k$ elements. Let $C_{S}$ be the set of all $k \times k$ submatrices of $B$ taken from the columns indexed by $S$.
For each $k \times k$ matrix $X \in C_{S}$, let $X^{'}$ be the $(n-k) \times (n-k)$ matrix consisting of complementary rows and columns. Then: \[ \det(B)= \sum_{X \in C_{S}} \epsilon_{X} \det(X) \det(X^{'}), \] where $\epsilon_{X}$ is the sign of the \enquote{shuffle} permutation: if $X$ corresponds to rows $\{ r_{1} < \cdots < r_{k} \}$ and $X^{'}$ corresponds to rows $\{t_{1}<\cdots < t_{n-k}\}$, then \[ \epsilon_{X}(e_1 \wedge \cdots \wedge e_{n})=e_{r_1} \wedge \cdots \wedge e_{r_{k}} \wedge e_{t_{1}} \wedge \cdots \wedge e_{t_{n-k}}. \] \end{lemma} The following corollaries are special cases of Lemma \ref{lem 1}. \begin{corollary}\label{cor 1} Let $A$ be a square matrix of order $n$ and let $B$ be an $a \times b$ submatrix with $a+b>n$. If $B=0$, then $\det A =0$. \end{corollary} \begin{corollary}\label{cor 2} Let $A$ be a square matrix of order $n$ and let $B$ be an $a \times b$ submatrix with $a+b=n$. If $B=0$, then we have $$\det A=\pm \det A_1 \times \det A_2,$$ where $A_{1}$ and $A_{2}$ are complementary square submatrices of order $a$ and $b$ respectively. \end{corollary} \section{Definitions and Notations}\label{sec: defn} We begin with some general notation to be used throughout the paper. Let \( G \) be one of the classical groups: \(\GL(n, \mathbb{C})\), \(\Sp(2n, \mathbb{C})\), \(\SO(2n, \mathbb{C})\), or \(\SO(2n+1, \mathbb{C})\). We consider a highest weight representation of \( G \) characterized by the highest weight \(\underline{\lambda}= (\lambda_1, \lambda_2, \ldots, \lambda_n)\), where \( \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \).
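The expansion of Lemma \ref{lem 1} is straightforward to verify numerically. In the sketch below the sign is realized as $(-1)^{\sum_{r \in R} r + \sum_{s \in S} s}$ with $0$-indexed row set $R$ and column set $S$, which agrees with the shuffle sign $\epsilon_{X}$ up to the constant factor contributed by the fixed column set $S$ (helper names are ours):

```python
from itertools import combinations, permutations

def det(M):
    # Leibniz formula; exact for integer entries.
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def submatrix(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

def laplace_along_columns(M, S):
    # Generalized Laplace expansion of det(M) along the column set S:
    # det(M) = sum over row sets R of (-1)^(sum R + sum S)
    #          * det(M[R, S]) * det(M[R', S'])   (0-indexed sets)
    n, k = len(M), len(S)
    Sc = [j for j in range(n) if j not in S]
    total = 0
    for R in combinations(range(n), k):
        Rc = [i for i in range(n) if i not in R]
        sign = (-1) ** (sum(R) + sum(S))
        total += sign * det(submatrix(M, R, S)) * det(submatrix(M, Rc, Sc))
    return total

B = [[3, 1, 4, 1, 5],
     [9, 2, 6, 5, 3],
     [5, 8, 9, 7, 9],
     [3, 2, 3, 8, 4],
     [6, 2, 6, 4, 3]]
# The expansion agrees with the determinant for any column set S.
assert laplace_along_columns(B, [0, 2, 3]) == det(B)
assert laplace_along_columns(B, [1, 4]) == det(B)
```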
For $0 \leq i \leq 1$, we define the set \(\eta_i(\underline{\lambda})\) as follows: \[ \eta_i(\underline{\lambda}) = \{ a \in \underline{\lambda} + \rho \mid a \equiv i \mod 2 \}, \] where $a \in \underline{\lambda} + \rho$ means that $a$ appears as some coordinate of $\underline{\lambda}+\rho$. Here \(\rho\) denotes half the sum of the positive roots of \( G \), and is given by: \begin{align*} \rho=\begin{cases} \rho_{n}=(n-1,n-2,\cdots,0) & \text{if} \ G=\GL(n,\C), \\ \rho^{n}=(n,n-1,\cdots,1) & \text{if} \ G=\Sp(2n,\C), \\ \overline{\rho}_{2n}=(n-1,n-2,\cdots,0) & \text{if} \ G=\SO(2n,\C), \\ \overline{\rho}_{2n+1}=(n-1/2,n-3/2,\cdots,1/2) & \text{if} \ G=\SO(2n+1,\C). \\ \end{cases} \label{0.5} \end{align*} When \(0 \leq i \leq 1\), we define \(\eta_i(\underline{\lambda})/2 = \left\{ \frac{a}{2} \mid a \in \eta_i(\underline{\lambda}) \right\}\) and \([\eta_1(\underline{\lambda}) - 1]/2 = \left\{ \frac{b - 1}{2} \mid b \in \eta_1(\underline{\lambda}) \right\}\). For any positive integer $n$, let us define $[n]=\{1,2,\cdots,n\}$. For any $n$-tuple $a=(a_1,a_2,\cdots,a_{n}) \in (\C^{\ast})^{n}$, define $a^{t}=(a_{n},a_{n-1},\cdots,a_{1})$. \section{The General Linear Group ${\rm GL}(n,\C)$} \begin{theorem}\label{thm 1} Let $G={\rm GL}(n,\C)$. For each integer $0<k \leq n/2$, consider the diagonal element $$C_{n-k,k}=(\underbrace{1,1,\cdots,1}_{n-k \ {\rm times}},\underbrace{-1,-1,\cdots,-1}_{k \ {\rm times}}).$$ These are elements of order 2 in $G$. Let $S_{(\underline{\lambda})}\C^{n}$ be the highest weight representation of ${\rm GL}(n,\C)$ with highest weight $\underline{\lambda}=(\lambda_1,\lambda_2,\cdots,\lambda_n)$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{n}$. Assume after possibly multiplying $S_{(\underline{\lambda})}\C^{n}$ by the determinant character that $\# \eta_{0}(\underline{\lambda}) \geq \# \eta_{1}(\underline{\lambda})$. Then: \begin{itemize} \item [A.]
The character $\Theta_{\underline{\lambda}}(C_{n-k,k})=0$ if $\#\eta_{0}(\underline{\lambda})>n-k$. \\ \item [B.] For $\#\eta_{0}(\underline{\lambda})=n-k$, we have the following factorization: \[ \Theta_{\underline{\lambda}}(C_{n-k,k})= \pm 2^{c(k)} \cdot \dim(S_{(\lambda^{0})}\C^{n-k} \otimes S_{(\lambda^{1})}\mathbb{C}^{k}), \] where $c(k)=\binom{n-2k}{2}$ and the highest weights $\lambda^{0}$ and $\lambda^{1}$ are given by: \\ $ \begin{cases} \lambda^{0}+\rho_{n-k}=\eta_{0}(\underline{\lambda})/2, \\ \lambda^{1}+\rho_{k}=[\eta_{1}(\underline{\lambda})-1]/2, \end{cases} $ \\ \\ where $\rho_{k}=(k-1,k-2,\cdots,0).$ By convention, the elements of $\eta_{i}(\underline{\lambda})$ are arranged in decreasing order. \\ \item[C.] Let $\underline{\lambda}+\rho_{n}=(l_{1},l_{2},\cdots,l_{n})$. If $\# \eta_{0}(\underline{\lambda})<n-k$, we have the following alternating sum: \[ \Theta_{\underline{\lambda}}(C_{n-k,k})=\pm \frac{ \sum_{S \in I}\epsilon_{S}\cdot {\rm dim} (S_{(\lambda_{S})}\C^{k} \otimes S_{(\lambda_{S^{'}})}\C^{n-k}) }{2^{k(n-k-1)}}, \] where $S^{'}=[n] \backslash S$, $I=\{ \{i_1,i_2,\cdots,i_k\} \subset [n] \ \vert \ l_{i_{j}} \in \eta_{1}(\underline{\lambda}) \ \forall \ 1 \leq j \leq k \}$, $\epsilon_{S}$ is the sign of the \enquote{shuffle} permutation corresponding to the sets of rows indexed by $S$ and $S^{'}$. The highest weights $\lambda_{S}$ of $\GL(k,\C)$ and $\lambda_{S^{'}}$ of $\GL(n-k,\C)$ are given by: \small \begin{enumerate} \item[(i.)]$\lambda_{S}+\rho_k=(l_{s} \vert s \in S)$. \item[(ii.)]$\lambda_{S^{'}}+\rho_{n-k}=(l_{s} \vert s \in S^{'})$. \end{enumerate} \noindent There are a total of ${\# \eta_{1}(\underline{\lambda}) \choose k}$ terms in the above summation. \end{itemize} \begin{proof} The proof of this theorem will be a direct application of the Weyl character formula. Note that the elements $C_{n-k,k}$ are singular elements of ${\rm GL}(n,\C)$, hence the Weyl denominator is zero.
Therefore $\Theta_{\underline{\lambda}}(C_{n-k,k})$ is calculated by taking the limit of $\Theta_{\underline{\lambda}}(C_{n-k,k}(\epsilon))$, where $C_{n-k,k}(\epsilon)$ are elements of ${\rm GL}(n,\C)$ which converge to $C_{n-k,k}$ as $\epsilon \rightarrow 0$. There are many possible choices for $C_{n-k,k}(\epsilon)$, but we take $$C_{n-k,k}(\epsilon)=(x_{n-k}(\epsilon),x_{n-k-1}(\epsilon),\cdots,x_{1}(\epsilon),-x_{k}(\epsilon),-x_{k-1}(\epsilon),\cdots,-x_{1}(\epsilon)),$$ where \[ x_{j}(\epsilon)=1+j\epsilon, \quad 1 \leq j \leq n-k. \] These are regular elements for each $\epsilon >0$ and therefore the Weyl denominator is nonzero. We will use the Weyl character formula as the quotient of two determinants given by $$ \Theta_{\underline{\lambda}}(x_1,x_2,\cdots,x_n)=\frac{\det (x_j^{\lambda_{i}+n-i} )}{\Delta(x_1,x_2,\cdots,x_n)}, $$ where $\Delta$ denotes the Vandermonde determinant. We have $$\lim_{\epsilon \rightarrow 0}\frac{A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))}{A_{\rho_{n}}(C_{n-k,k}(\epsilon))}=\Theta_{\underline{\lambda}}(C_{n-k,k}).$$ So we will begin by calculating the Weyl denominator $A_{\rho_{n}}(C_{n-k,k}(\epsilon))$.
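This limiting procedure can be tested in a small instance of Part (B): for $G={\rm GL}(4,\C)$, $\underline{\lambda}=(1,1,1,0)$ and $k=1$, we have $\underline{\lambda}+\rho_{4}=(4,3,2,0)$, $\eta_{0}(\underline{\lambda})=\{4,2,0\}$, $c(1)=1$, $\lambda^{0}=(0,0,0)$ and $\lambda^{1}=(1)$, so the theorem predicts $|\Theta_{\underline{\lambda}}(C_{3,1})|=2^{1}\cdot 1 \cdot 1=2$. A sketch in exact rational arithmetic (the helper names \texttt{det} and \texttt{schur} are ours):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz formula; exact for Fraction entries.
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def schur(lam, xs):
    # Quotient of alternants: det(x_j^{lam_i + n - i}) / det(x_j^{n - i}),
    # valid at any regular (pairwise distinct) tuple xs.
    n = len(xs)
    num = [[xs[j] ** (lam[i] + n - 1 - i) for j in range(n)] for i in range(n)]
    den = [[xs[j] ** (n - 1 - i) for j in range(n)] for i in range(n)]
    return det(num) / det(den)

lam = (1, 1, 1, 0)  # lambda + rho_4 = (4, 3, 2, 0), eta_0 = {4, 2, 0}
vals = []
for d in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)):
    # regular elements converging to C_{3,1} = (1, 1, 1, -1)
    xs = [1 + d, 1 + 2 * d, 1 + 3 * d, -1 + d]
    vals.append(schur(lam, xs))
# The character values approach -2, matching |Theta| = 2^{c(1)} = 2.
assert abs(vals[-1] + 2) < Fraction(1, 100)
assert abs(vals[-1] + 2) < abs(vals[0] + 2)
```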
\\ \begin{center} \textbf{Calculation of the Weyl denominator at $C_{n-k,k}(\epsilon)$} \end{center} The Weyl denominator at $C_{n-k,k}(\epsilon)$ is given by: \begin{align} \small \nonumber A_{\rho_n}(C_{n-k,k}(\epsilon))&=\Delta(C_{n-k,k}(\epsilon)), \label{2.0} \\ &=\Delta(x_{n-k}(\epsilon),x_{n-k-1}(\epsilon),\cdots,x_{1}(\epsilon),-x_k(\epsilon),-x_{k-1}(\epsilon),\cdots,-x_{1}(\epsilon)), \\ &=(-1)^{\frac{k(k-1)}{2}}\Delta(C(\epsilon)) \times \Delta(D(\epsilon)) \times \prod^{n-k}_{s=1}\prod^{k}_{t=1}(x_{s}(\epsilon)+x_{t}(\epsilon)), \label{2.1}\\ &=(-1)^{\frac{k(k-1)}{2}}\frac{\Delta(C^{2}(\epsilon)) \times \Delta(D^{2}(\epsilon)) \times \prod^{k}_{s=1}2x_s(\epsilon)}{\prod_{\{ s > t \vert k+1 \leq s,t \leq n-k \}}(x_s(\epsilon)+x_{t}(\epsilon))}\label{2.2} , \\ &=(-1)^{\frac{k(k-1)}{2}}\frac{2^{k}\Delta(C^{2}(\epsilon)) \times \Delta(D^{2}(\epsilon)) \times \prod^{k}_{s=1}x_{s}(\epsilon)}{\prod_{\{ s > t \vert k+1 \leq s,t \leq n-k \}}(x_{s}(\epsilon)+x_{t}(\epsilon))}\label{2.3}, \end{align} where $$C(\epsilon)=(x_{n-k}(\epsilon),x_{n-k-1}(\epsilon),\cdots,x_{1}(\epsilon)) \in {\rm GL}(n-k,\C)$$ and $$D(\epsilon)=(x_{k}(\epsilon),x_{k-1}(\epsilon),\cdots,x_{1}(\epsilon)) \in {\rm GL}(k,\C);$$ and where $C^{2}(\epsilon)$ and $D^{2}(\epsilon)$ are the squares of the corresponding matrices. Now to show that $\Theta_{\underline{\lambda}}(C_{n-k,k})=0$, it is sufficient to consider the conditions under which the Weyl numerator $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))$ is identically zero which is what we do now. Let $\underline{\lambda}+\rho_n=(l_1,l_2,\cdots,l_n)$ with $l_i=\lambda_i+n-i$. Let $\#\eta_{0}(\underline{\lambda})=n-k+s$ with $ 1 \leq s \leq k$. Interchanging the rows of \(A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))\) changes the determinant only by a sign. Therefore, without loss of generality, we can assume that \(\eta_{0}(\underline{\lambda})=\{l_1, l_2, \ldots, l_{n-k+s}\}\). 
Now the Weyl numerator at $C_{n-k,k}(\epsilon)$ is given by \[ A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))= \det \begin{pmatrix} x_{n-k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} & x_{k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} \\ \\ x_{n-k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} & x_{k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_{n-k+s}} & \cdots & x_{1}(\epsilon)^{l_{n-k+s}} & x_{k}(\epsilon)^{l_{n-k+s}} & \cdots & x_{1}(\epsilon)^{l_{n-k+s}} \\ x_{n-k}(\epsilon)^{l^{'}_{s+1}} & \cdots & x_{1}(\epsilon)^{l^{'}_{s+1}} & -x_{k}(\epsilon)^{l^{'}_{s+1}} & \cdots & -x_{1}(\epsilon)^{l^{'}_{s+1}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l^{'}_{k}} & \cdots & x_{1}(\epsilon)^{l^{'}_{k}} & -x_{k}(\epsilon)^{l^{'}_{k}} & \cdots & -x_{1}(\epsilon)^{l^{'}_{k}} \\ \end{pmatrix}, \] where for $1 \leq i \leq k-s$, we define $l_{s+i}^{'}=l_{n-k+s+i}$. For each $1 \leq i \leq k$, we will apply the following set of successive column operations on the Weyl numerator $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))$: \begin{itemize} \item[(1)]\( C_{n-k+i} \rightarrow C_{n-k+i}-C_{n-2k+i}, \) \item[(2)]\(C_{n-k+i} \rightarrow -\frac{1}{2}C_{n-k+i}.
\) \end{itemize} By applying these successive operations, the matrix corresponding to the Weyl numerator $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))$ transforms into the following matrix of the form: $$ (-2)^{k} \times \begin{pmatrix} A_{(n-k+s) \times (n-k)}(\epsilon) & \textbf{0}_{(n-k+s) \times k}\\ C_{(k-s) \times (n-k)}(\epsilon) & D_{(k-s) \times k}(\epsilon) \end{pmatrix}, $$ where $$A_{(n-k+s) \times (n-k)}(\epsilon)=\begin{pmatrix} x_{n-k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} \\ \\ x_{n-k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} \\ \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_{n-k+s}} & \cdots & x_{1}(\epsilon)^{l_{n-k+s}} \\ \end{pmatrix}$$ and $$D_{(k-s) \times k}(\epsilon)=\begin{pmatrix} x_{k}(\epsilon)^{l^{'}_{s+1}} & \cdots & x_{1}(\epsilon)^{l^{'}_{s+1}} \\ \\ x_{k}(\epsilon)^{l^{'}_{s+2}} & \cdots & x_{1}(\epsilon)^{l^{'}_{s+2}} \\ \vdots & \ddots & \vdots \\ x_{k}(\epsilon)^{l^{'}_{k}} & \cdots & x_{1}(\epsilon)^{l^{'}_{k}} \\ \end{pmatrix}.$$ Note that for the zero matrix $\textbf{0}_{(n-k+s) \times k}$, we have $(n-k+s)+k=n+s>n$. Therefore by Corollary \ref{cor 1} we deduce that $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))=0$. Thus $\Theta_{\underline{\lambda}}(C_{n-k,k})=0$ whenever $\#\eta_{0}(\underline{\lambda})>n-k$. This completes the proof of Part (A) of Theorem \ref{thm 1}. Next we turn to the proof of Part (B) of the theorem; thus we assume that \( \#\eta_{0}(\underline{\lambda}) = n-k \). Without loss of generality, we can assume that \( \eta_{0}(\underline{\lambda})=\{ l_1, l_2, \ldots, l_{n-k} \}.
\) The Weyl numerator at $C_{n-k,k}(\epsilon)$ is given by \[ \det \begin{pmatrix} x_{n-k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} & x_{k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} \\ \\ x_{n-k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} & x_{k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_{n-k}} & \cdots & x_{1}(\epsilon)^{l_{n-k}} & x_{k}(\epsilon)^{l_{n-k}} & \cdots & x_{1}(\epsilon)^{l_{n-k}} \\ \\ x_{n-k}(\epsilon)^{l_{n-k+1}} & \cdots & x_{1}(\epsilon)^{l_{n-k+1}} & -x_{k}(\epsilon)^{l_{n-k+1}} & \cdots & -x_{1}(\epsilon)^{l_{n-k+1}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_{n}} & \cdots & x_{1}(\epsilon)^{l_{n}} & -x_{k}(\epsilon)^{l_{n}} & \cdots & -x_{1}(\epsilon)^{l_{n}} \\ \end{pmatrix}. \] For each $1 \leq i \leq k$, we will apply the following set of column operations on $A_{\underline{\lambda}+\rho_n}(C_{n-k,k}(\epsilon))$: \begin{itemize} \item[(i)]\( C_{n-k+i} \rightarrow C_{n-k+i}-C_{n-2k+i}, \) \item[(ii)]\(C_{n-k+i} \rightarrow -\frac{1}{2}C_{n-k+i}. \) \end{itemize} Applying these operations, $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))$ can be transformed into the block lower-triangular form given by \[ (-2)^{k} \times \det \begin{pmatrix} E_{(n-k) \times (n-k)}(\epsilon) & 0_{(n-k) \times k} \\ K_{k \times (n-k)}(\epsilon) & F_{k \times k}(\epsilon) \end{pmatrix}, \] where $E_{(n-k) \times (n-k)}(\epsilon)=(x_{n-k-j+1}(\epsilon)^{l_i})_{ij}$ and $F_{k \times k}(\epsilon)=(x_{k-j+1}(\epsilon)^{(l_{n-k+i})})_{ij}$. Let us consider the matrix \[\begin{pmatrix} E_{(n-k) \times (n-k)}(\epsilon) & 0_{(n-k) \times k} \\ K_{k \times (n-k)}(\epsilon) & F_{k \times k}(\epsilon) \end{pmatrix}. \] It has the zero submatrix $0_{(n-k) \times k}$ and also $(n-k)+k=n$.
Therefore by Corollary \ref{cor 2} we have \begin{align} \nonumber A_{\underline{\lambda}+\rho_n}(C_{n-k,k}(\epsilon))&=\pm 2^{k} \det (E_{n-k \times n-k}(\epsilon)) \times \det (F_{k \times k}(\epsilon)), \\ &=\pm 2^{k} \det (E_{n-k \times n-k}(\epsilon)) \times \det (F^{'}_{k \times k}(\epsilon)) \times \prod_{j=1}^{k}x_{j}(\epsilon),\label{6} \end{align} where $E_{(n-k) \times (n-k)}(\epsilon)$ is defined earlier and $F^{'}_{k \times k}(\epsilon)=(x_{k-j+1}(\epsilon)^{(l_{n-k+i}-1)})_{ij}$. Note that we have $l_{m}\equiv\begin{cases} 0 \mod 2 & \text{if} \ m=1, \cdots,n-k \\ 1 \mod 2 & \text{if} \ m=n-k+1,\cdots,n \end{cases}$ \\ Now from equations \eqref{2.3}, \eqref{6} and by an application of the Weyl character formula we have: \begin{align} \nonumber \Theta_{\underline{\lambda}}(C_{n-k,k}(\epsilon))&=\pm \prod_{\{ s > t \vert k+1 \leq s,t \leq n-k \}}(x_{s}(\epsilon)+x_{t}(\epsilon)) \times \frac{\det (E_{n-k \times n-k}(\epsilon))}{\Delta(C^{2}(\epsilon))} \times \frac{\det (F^{'}_{k \times k}(\epsilon))}{\Delta(D^{2}(\epsilon))}, \\ &=\pm \prod_{\{ s > t \vert k+1 \leq s,t \leq n-k \}}(x_{s}(\epsilon)+x_{t}(\epsilon)) \times \Theta_{\lambda^{0}}(C^{2}(\epsilon)) \times \Theta_{\lambda^{1}}(D^{2}(\epsilon)),\label{7} \end{align} where the highest weights $\lambda^{0}$ of $\GL(n-k,\C)$ and $\lambda^{1}$ of $\GL(k,\C)$ are defined by : \begin{itemize} \item [(i.)] $\lambda^{0}+\rho_{n-k}=\eta_{0}(\lambda)/2=(\frac{l_1}{2},\frac{l_2}{2},\cdots,\frac{l_{n-k}}{2})$.
\item [(ii.)] $\lambda^{1}+\rho_{k}=[\eta_{1}(\lambda)-1]/2=(\frac{l_{n-k+1}-1}{2},\frac{l_{n-k+2}-1}{2},\cdots,\frac{l_{n}-1}{2}).$ \end{itemize} Now from equation \eqref{7} we get \begin{align} \nonumber \Theta_{\underline{\lambda}}(C_{n-k,k})&=\lim_{\epsilon \rightarrow 0}\Theta_{\underline{\lambda}}(C_{n-k,k}(\epsilon)), \\ &=\pm \prod_{\{ s > t \vert k+1 \leq s,t \leq n-k \}}\lim_{\epsilon \rightarrow 0}(x_{s}(\epsilon)+x_{t}(\epsilon)) \times \lim_{\epsilon \rightarrow 0} \Theta_{\lambda^{0}}(C^{2}(\epsilon)) \times \lim_{\epsilon \rightarrow 0} \Theta_{\lambda^{1}}(D^{2}(\epsilon)), \\ &=\pm 2^{{n-2k \choose 2}} \times {\rm dim}(S_{(\lambda^{0})}\C^{n-k}) \times {\rm dim}(S_{(\lambda^{1})}\C^{k}), \\ &=\pm 2^{c(k)} \times {\rm dim}( S_{(\lambda^{0})}\C^{n-k} \otimes S_{(\lambda^{1})}\C^{k}), \end{align} where $c(k)={n-2k \choose 2}$. This completes the proof of Part (B) of Theorem \ref{thm 1}. \\ \\ Next we will prove Part (C) of this theorem. Let $\#\eta_{0}(\underline{\lambda})=m$ with $k+1 \leq m \leq n-k-1$. Without loss of generality let us assume that $\eta_0(\underline{\lambda})=\{l_1,l_2,\cdots,l_{m}\}$.
For each $1 \leq i \leq k$, we will apply the following set of successive column operations on the Weyl numerator $A_{\lambda+\rho_{n}}(C_{n-k,k}(\epsilon))$: \begin{itemize} \item[(1)]\( C_{n-k+i} \rightarrow C_{n-k+i}-C_{n-2k+i}, \) \item[(2)]\(C_{n-k+i} \rightarrow -\frac{1}{2}C_{n-k+i}, \) \end{itemize} By applying these successive operations, the Weyl numerator $A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))$ transforms into $$ (-2)^{k} \times \det(B(\epsilon)), $$ where $$B(\epsilon)=\begin{pmatrix} x_{n-k}(\epsilon)^{l_1} & \cdots & x_{1}(\epsilon)^{l_1} &0 & \cdots & 0 \\ \\ x_{n-k}(\epsilon)^{l_2} & \cdots & x_{1}(\epsilon)^{l_2} &0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_m} & \cdots & x_{1}(\epsilon)^{l_m} &0 & \cdots & 0 \\ \\ x_{n-k}(\epsilon)^{l_{m+1}} & \cdots & x_{1}(\epsilon)^{l_{m+1}} & x_k(\epsilon)^{l_{m+1}} & \cdots & x_1(\epsilon)^{l_{m+1}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ x_{n-k}(\epsilon)^{l_{n}} & \cdots & x_{1}(\epsilon)^{l_{n}} & x_k(\epsilon)^{l_{n}} & \cdots & x_1(\epsilon)^{l_{n}} \\ \end{pmatrix}.$$ Applying Lemma \ref{lem 1} to $B(\epsilon)$ with the subset $H=\{n-k+1,\cdots,n\}$, which corresponds to the last $k$ columns, we get \begin{align} \det(B(\epsilon)) &= \sum_{\substack{ S \subset H_m \\ \\ \# S = k }} \epsilon_{S} \cdot \det(M_{S,H}(\epsilon)) \det(M_{S',H'}(\epsilon)), \end{align} where \( H_m = \{ m+1, m+2, \dots, n \} \) (row subsets $S$ meeting $\{1,\cdots,m\}$ do not appear, since for such $S$ the minor $\det(M_{S,H}(\epsilon))$ contains a row of zeros), $\epsilon_{S}$ is the sign of the \enquote{shuffle} permutation corresponding to the sets of rows indexed by $S$ and $S^{'}$. \( M_{S,H}(\epsilon) \) denotes the \( k \times k \) submatrix of the matrix \( B(\epsilon) \) obtained by choosing rows with indices in \( S \) and columns with indices in \( H \), and \( M_{S',H'}(\epsilon) \) denotes the submatrix in \( B(\epsilon) \) having rows with indices in $S^{'}=[n] \setminus S$ and columns with indices in $H^{'}=[n] \setminus H$.
In particular $$ M_{S,H}(\epsilon)=(x^{l_{u}}_{j}(\epsilon))_{u \in S, j \in [k]}$$ and $$M_{S',H'}(\epsilon)=(x^{l_{u}}_{j}(\epsilon))_{u \in S', j \in [n-k]},$$ where for any integer $w$, we have defined $[w]$ in Section \ref{sec: defn}. Now the Weyl numerator is given by \begin{align} \nonumber A_{\underline{\lambda}+\rho_{n}}(C_{n-k,k}(\epsilon))&=(-2)^{k} \times B(\epsilon), \\ &=(-2)^{k} \times \sum_{\substack{ S \subset H_m \\ \\ \# S = k }} \epsilon_{S} \cdot \text{det}(M_{S,H}(\epsilon))\text{det}(M_{S^{'},H^{'}}(\epsilon)).\label{14.1} \end{align} Now from equations \eqref{2.1}, \eqref{14.1} and by applications of the Weyl character formula we get \small \begin{align} \Theta_{\underline{\lambda}}(C_{n-k,k}(\epsilon))&=\pm2^{k} \times \sum_{\substack{ S \subset H_m \\ \\ \# S = k }} \frac{\epsilon_{S}}{\prod^{n-k}_{s=1}\prod^{k}_{t=1}(x_s(\epsilon)+x_t(\epsilon))} \times \frac{\det(M_{S,H}(\epsilon))}{\Delta(C(\epsilon))} \times \frac{\det(M_{S^{'},H^{'}}(\epsilon))}{\Delta(D(\epsilon))},\label{12} \\ &=\pm 2^{k} \times \sum_{\substack{ S \subset H_m \\ \\ \# S = k }}\frac{\epsilon_{S}}{\prod^{n-k}_{s=1}\prod^{k}_{t=1}(x_s(\epsilon)+x_t(\epsilon))} \times \Theta_{\lambda_{S}}(C(\epsilon)) \times \Theta_{\lambda_{S^{'}}}(D(\epsilon)),\label{14} \end{align} where the highest weights $\lambda_{S}$ of $\GL(k,\C)$ and $\lambda_{S^{'}}$ of $\GL(n-k,\C)$ are defined by: \begin{itemize} \item [1.] $\lambda_{S}+\rho_{k}=(l_{u} \vert u \in S)$. \item [2.] $\lambda_{S^{'}}+\rho_{n-k}=(l_{u} \vert u \in S^{'})$.
\end{itemize} From equation \eqref{14} we have \small \begin{align*} \Theta_{\underline{\lambda}}(C_{n-k,k})&=\lim_{\epsilon \rightarrow 0}\Theta_{\underline{\lambda}}(C_{n-k,k}(\epsilon)), \\ &=\pm \sum_{\substack{ S \subset H_m \\ \\ \# S = k }}\frac{2^{k}\epsilon_{S}}{\prod^{n-k}_{s=1}\prod^{k}_{t=1}\lim_{\epsilon \rightarrow 0}(x_s(\epsilon)+x_t(\epsilon))} \times \lim_{\epsilon \rightarrow 0}\Theta_{\lambda_{S}}(C(\epsilon)) \times \lim_{\epsilon \rightarrow 0}\Theta_{\lambda_{S^{'}}}(D(\epsilon)), \\ &=\pm \sum_{\substack{ S \subset H_m \\ \\ \# S = k }}2^{k} \epsilon_{S} \cdot \frac{{\rm dim}(S_{(\lambda_{S})}\C^{k}){\rm dim}(S_{(\lambda_{S^{'}})}\C^{n-k})}{2^{k(n-k)}}, \\ &=\pm \frac{1}{2^{k(n-k-1)}} \sum_{S \in I}\epsilon_{S} \cdot {\rm dim}(S_{(\lambda_{S})}\C^{k} \otimes S_{(\lambda_{S^{'}})}\C^{n-k}), \end{align*} where $I=\{S \subset H_{m} \vert \#S=k \}$ and $H_{m}=\{j \mid l_{j} \in \eta_{1}(\underline{\lambda}) \}$. \\ \\ This completes the proof of Part (C) of Theorem \ref{thm 1}. \end{proof} \end{theorem} \begin{remark} Observe that $c(k)=0$ when $n=2k$ or $n=2k+1$. Therefore when $n=2k$ or $n=2k+1$, the character value at $C_{n-k,k}$ can be written up to a sign as the dimension of a highest weight representation of ${\rm GL}(n-k,\C) \times {\rm GL}(k,\C)$. Thus Theorem \ref{thm 1} generalizes Theorem 2 of \cite{DP} and Theorem 2.5 of \cite{AK} in the particular case of elements of order $2$. \end{remark} \section{The Symplectic Group ${\rm Sp}(2n,\C)$} \label{Sec:Symplectic} For any $n$-tuple $a=(a_1,a_2,\cdots,a_{n}) \in (\C^{\ast})^{n}$, let us recall the notation $a^{t}$ defined in Section \ref{sec: defn}. \begin{theorem}\label{thm 2} Let $G={\rm Sp}(2n,\C)$. For each integer $0<k \leq n/2$, consider $D_{n-k,k}=(C_{n-k,k},(C^{t}_{n-k,k})^{-1})$, where $C_{n-k,k}=(\underbrace{1,1,\cdots,1}_{n-k \ {\rm times}},\underbrace{-1,-1,\cdots,-1}_{k \ {\rm times}})$. These are elements of order 2.
Let $S_{\langle \underline{\lambda} \rangle}\C^{2n}$ be the highest weight representation of ${\rm Sp}(2n,\C)$ with highest weight $\underline{\lambda}=(\lambda_1,\lambda_2,\cdots,\lambda_n)$. Then: \begin{itemize} \item[A.] The character $\Theta_{\underline{\lambda}}(D_{n-k,k})=0$ if either $\#\eta_{0}(\underline{\lambda})>n-k$ or $\#\eta_{1}(\underline{\lambda})>n-k$. \\ \item[B.] For $\#\eta_{0}(\underline{\lambda})=n-k$ or $\#\eta_{0}(\underline{\lambda})=k$, we have the following factorization: \[ \Theta_{\underline{\lambda}}(D_{n-k,k})= \begin{cases} \pm 2^{d(k)} \cdot \dim(S_{\langle \lambda^{0} \rangle} \mathbb{C}^{2n-2k} \otimes S_{[\lambda^{1}]} \mathbb{C}^{2k+1}) , & \text{if } \#\eta_{0}(\underline{\lambda}) = n-k, \\[1ex] \pm 2^{d(k)} \cdot \dim(S_{\langle \lambda^{0} \rangle} \mathbb{C}^{2k} \otimes S_{[\lambda^{1}]} \mathbb{C}^{2n-2k+1}) , & \text{if } \#\eta_{0}(\underline{\lambda}) = k, \end{cases} \] where $d(k)={(n-2k)^{2}}$ and the highest weights $\lambda^{0}$ and $\lambda^{1}$ are defined by: \\ \\ $ \begin{cases} \begin{cases} \lambda^{0}+\rho^{n-k}=\eta_{0}(\underline{\lambda})/2, \\ \lambda^{1}+\overline{\rho}_{2k+1}=\eta_{1}(\underline{\lambda})/2. \end{cases} & \text{if} \ \#\eta_{0}(\underline{\lambda})=n-k, \\ \\ \begin{cases} \lambda^{0}+\rho^{k}=\eta_{0}(\underline{\lambda})/2, \\ \lambda^{1}+\overline{\rho}_{2n-2k+1}=\eta_{1}(\underline{\lambda})/2. \end{cases} & \text{if} \ \#\eta_{0}(\underline{\lambda})=k, \end{cases} $ \\ \\ where $\rho^{n}$ and $\overline{\rho}_{2k+1}$ are half the sums of the positive roots of ${\rm Sp}(2n,\C)$ and ${\rm SO}(2k+1,\C)$ respectively, for example $\rho^{n}=(n,n-1,\cdots,1)$. \item[C.] Let $\underline{\lambda}+\rho^{n}=(l_1,l_2,\cdots,l_n)$.
For $k< \#\eta_{0}(\underline{\lambda})<n-k$, we have the following alternating sum: \small \[ \Theta_{\underline{\lambda}}(D_{n-k,k})=\pm \frac{\sum_{S \in I}\epsilon_{S}\cdot {\rm dim}(S_{ \langle \lambda_{S} \rangle}\mathbb{C}^{2k} \otimes S_{ \langle \lambda_{S^{'}} \rangle}\mathbb{C}^{2n-2k})}{2^{k(2n-2k-1)}}, \] where $S^{'}=[n] \backslash S$, $I=\{ \{i_1,i_2,\cdots,i_k\} \subset [n] \vert \ l_{i_{j} } \in \eta_{1}(\underline{\lambda}) \ \forall \ 1 \leq j \leq k\}$, $\epsilon_{S}$ is the sign of the \enquote{shuffle} permutation corresponding to the sets of rows indexed by $S$ and $S^{'}$. The highest weights $\lambda_{S}$ of ${\rm Sp}(2k,\C)$ and $\lambda_{S^{'}}$ of ${\rm Sp}(2n-2k,\C)$ are given by: \small \begin{enumerate} \item[(i.)]$\lambda_{S}+\rho^{k}=(l_{s} \vert s \in S)$. \item[(ii.)]$\lambda_{S^{'}}+\rho^{n-k}=(l_{s} \vert s \in S^{'})$. \end{enumerate} \noindent There are a total of ${\# \eta_{1}(\underline{\lambda}) \choose k}$ terms in the above summation. \end{itemize} \begin{proof} The proof of this theorem is analogous to the proof of the corresponding theorem for $\GL(n,\C)$ in which we replace the Weyl numerator (and the Weyl denominator) $\det(x^{\lambda_i+n-i}_{j})$ by $\det(x^{\lambda_i+n+1-i}_{j}-x^{-\lambda_i-n-1+i}_{j})$. \end{proof} \end{theorem} \begin{remark} For $n=2k$, we have $d(k)=0$, so whenever $n=2k$, the character value at $D_{n-k,k}$ can be written up to a sign either as the dimension of a highest weight representation of ${\rm Sp}(2n-2k,\C) \times {\rm SO}(2k+1,\C)$ or as the dimension of a highest weight representation of ${\rm Sp}(2k,\C) \times {\rm SO}(2n-2k+1,\C)$. This shows that Theorem \ref{thm 2} generalizes Theorem 2.11 from \cite{AK} in the particular case of elements of order $2$. \end{remark} \section{The Even Orthogonal Group ${\rm SO}(2n,\C)$} For $a=(a_1,a_2,\cdots,a_{n}) \in (\C^{\ast})^{n}$, let us recall the notation $a^{t}$ defined in Section \ref{sec: defn}. \begin{theorem}\label{thm 3} Let $G={\rm SO}(2n,\C)$.
For each integer $0<k \leq n/2$, consider $E_{n-k,k}=(C_{n-k,k},(C^{t}_{n-k,k})^{-1})$, where $C_{n-k,k}=(\underbrace{1,1,\cdots,1}_{n-k \ {\rm times}},\underbrace{-1,-1,\cdots,-1}_{k \ {\rm times}})$. Let $S_{[\underline{\lambda}]}\C^{2n}$ be the highest weight representation of ${\rm SO}(2n,\C)$ with highest weight $\underline{\lambda}=(\lambda_1,\lambda_2,\cdots,\lambda_n)$. Then: \begin{itemize} \item[A.]The character $\Theta_{\underline{\lambda}}(E_{n-k,k})=0$ if either $\#\eta_{0}(\underline{\lambda})>n-k$ or $\#\eta_{1}(\underline{\lambda})>n-k$. \\ \item[B.]For $\#\eta_{0}(\underline{\lambda})=n-k$ or $\#\eta_{0}(\underline{\lambda})=k$, we have the following factorization: \[ \Theta_{\underline{\lambda}}(E_{n-k,k})= \begin{cases} \pm 2^{e(k)} \cdot {\rm dim}(S_{[\lambda^{0}]}\C^{2n-2k} \otimes S_{[\lambda^{1}]}\C^{2k}) , & \text{if } \#\eta_{0}(\lambda)=n-k, \\ \pm 2^{e(k)} \cdot {\rm dim}(S_{[\lambda^{0}]}\C^{2k} \otimes S_{[\lambda^{1}]}\C^{2n-2k}) , & \text{if } \#\eta_{0}(\lambda)=k, \end{cases} \] where $e(k)=2\binom{n-2k}{2}-k+1$ and the highest weights $\lambda^{0}$ and $\lambda^{1}$ are defined by: \\ \\ $ \begin{cases} \begin{cases} \lambda^{0}+\overline{\rho}_{2n-2k}=\eta_{0}(\underline{\lambda})/2, \\ \lambda^{1}+\overline{\rho}_{2k}=\eta_{1}(\underline{\lambda})/2. \end{cases} & \text{if} \ \#\eta_{0}(\lambda)=n-k, \\ \\ \begin{cases} \lambda^{0}+\overline{\rho}_{2k}=\eta_{0}(\underline{\lambda})/2, \\ \lambda^{1}+\overline{\rho}_{2n-2k}=\eta_{1}(\underline{\lambda})/2. \end{cases} & \text{if} \ \#\eta_{0}(\lambda)=k, \end{cases} $ \\ \\ where $\overline{\rho}_{2n}=(n-1,n-2,\cdots,0)$ be the half the sum of the positive roots of ${\rm SO}(2n,\C)$. The highest weight representations $S_{[\lambda^{1}]}\C^{2k}$ and $S_{[\lambda^{1}]}\C^{2n-2k}$ are Spin representations of $\SO(2k,\C)$ and $\SO(2n-2k,\C)$ respectively. \item [C.] Let $\underline{\lambda}+\overline{\rho}_{2n}=(l_1,l_2,\cdots,l_n)$ and $n-k>k$. 
If $k<\#\eta_{0}(\underline{\lambda})<n-k$, we have the following alternating sum: \small \[ \Theta_{\underline{\lambda}}(E_{n-k,k})=\pm \frac{\sum_{S \in I}\epsilon_{S}\cdot {\rm dim}(S_{ [ \lambda_{S} ]}\mathbb{C}^{2k} \otimes S_{ [ \lambda_{S^{'}} ]}\mathbb{C}^{2n-2k})}{2^{k(2n-2k-1)}}, \] where $S^{'}=[n] \backslash S$, $I=\{ \{i_1,i_2,\cdots,i_k\} \vert \ l_{i_{j} } \in \eta_{1}(\underline{\lambda}) \ \forall \ 1 \leq j \leq k\}$, $\epsilon_{S}$ is the sign of the \enquote{shuffle} permutation corresponding to the sets of rows indexed by $S$ and $S^{'}$. The highest weights $\lambda_{S}$ of ${\rm SO}(2k,\C)$ and $\lambda_{S^{'}}$ of ${\rm SO}(2n-2k,\C)$ are given by: \small \begin{enumerate} \item[(i.)]$\lambda_{S}+\overline{\rho}_{2k}=(l_{s} \vert s \in S)$. \item[(ii.)]$\lambda_{S^{'}}+\overline{\rho}_{2n-2k}=(l_{s} \vert s \in S^{'})$. \end{enumerate} \noindent There are a total of ${\# \eta_{1}(\underline{\lambda}) \choose k}$ terms in the above summation. \end{itemize} \begin{proof} The proof of this theorem is analogous to the proof of the corresponding theorem for $\GL(n,\C)$ in which we replace the Weyl numerator (and the Weyl denominator) $\det(x^{\lambda_i+n-i}_{j})$ by $\det(x^{\lambda_i+n-i}_{j}-x^{-\lambda_i-n+i}_{j})+\det(x^{\lambda_i+n-i}_{j}+x^{-\lambda_i-n+i}_{j})$. \end{proof} \end{theorem} In the next section we discuss the character theory of $\SO(2n+1,\C)$. \section{The Odd Orthogonal Group ${\rm SO}(2n+1,\C)$} For $a=(a_1,a_2,\cdots,a_{n}) \in (\C^{\ast})^{n}$, let us recall the notation $a^{t}$ defined in Section \ref{sec: defn}. \begin{theorem}\label{thm 4} Let $G={\rm SO}(2n+1,\C)$. For each $0<k<n$, consider $F_{n-k,k}=(C_{n-k,k},1,(C^{t}_{n-k,k})^{-1})$, where $C_{n-k,k}=(\underbrace{1,1,\cdots,1}_{n-k \ {\rm times}},\underbrace{-1,-1,\cdots,-1}_{k \ {\rm times}})$. These are elements of order 2.
Let $S_{[\underline{\lambda}]}\C^{2n+1}$ be the highest weight representation of ${\rm SO}(2n+1,\C)$ with highest weight $\underline{\lambda}=(\lambda_1,\lambda_2,\cdots,\lambda_n)$. Let \(\underline{\lambda} + \rho_n = (l_1, l_2, \ldots, l_n)\). Then: \begin{itemize} \item[(A)] When $n=2k$, we have: \[ \Theta_{\underline{\lambda}}(F_{k,k})=\pm {\rm dim}(S_{(\lambda^{0})}\C^{2k}), \] where the highest weight $\lambda^{0}$ of ${\rm GL}(2k,\C)$ is given by the recipe of Theorem 2.17 in \cite{AK}. \\ \item[(B)] When $n-k \neq k$, the character $\Theta_{\underline{\lambda}}(F_{n-k,k})$ is given by: \[ \Theta_{\underline{\lambda}}(F_{n-k,k})= \frac{\sum_{L\subset [n] \vert \#L=k}\epsilon_{L} \cdot {\rm dim}(S_{[\lambda_{L}]}\C^{2k}\otimes S_{[\lambda_{L^{'}}]}\C^{2n-2k+1})}{2^{2kn-2k^{2}+k-1}}, \] where $[n]=\{1,2,\cdots,n\}$, $L^{'}=[n] \setminus L$ and $\epsilon_{L}$ is the sign of the \enquote{shuffle} permutation corresponding to the sets of rows indexed by $L$ and $L^{'}$. The highest weights $\lambda_{L}$ of $\SO(2k,\C)$ and $\lambda_{L^{'}}$ of $\SO(2n-2k+1,\C)$ are defined by: \\ \\ \begin{enumerate} \item[(a)]$\lambda_{L}+\overline{\rho}_{2k}=\{l_{u} \vert u \in L \}+\underbrace{(1/2,1/2,\cdots,1/2)}_{k \ {\rm times}}$. \item[(b)]$\lambda_{L^{'}}+\overline{\rho}_{2n-2k+1}=\{l_{u} \vert u \in L^{'} \}+\underbrace{(1/2,1/2,\cdots,1/2)}_{n-k \ {\rm times}}$. \end{enumerate} \noindent For each $L \subset [n]$ with $\# L=k$, the representation $S_{[\lambda_{L}]}\C^{2k}$ is a Spin representation of $\SO(2k,\C)$. \end{itemize} \begin{proof} Part (A) is proved by Ayyer and Kumari in \cite{AK}, and the proof of Part (B) is similar to the proof of Part (C) of Theorem \ref{thm 1}. So we omit the proofs of both Part (A) and Part (B) here.
\end{proof} \end{theorem} \section{Comparisons between $A_{n}$, $B_{n}$, $C_{n}$ and $D_{n}$} Let us focus on the classical groups \( C_4 = \mathrm{Sp}(8, \mathbb{C}) \) and \( B_4 = \mathrm{SO}(9, \mathbb{C}) \). Let \( \underline{\lambda} = (\lambda_1, \lambda_2, \lambda_3, \lambda_4) \) be a highest weight. We analyze the highest weight representations \( S_{\langle \underline{\lambda} \rangle} \mathbb{C}^8 \) of \( C_4 \) and \( S_{[\underline{\lambda}]} \mathbb{C}^9 \) of \( B_4 \). Let us consider the element $D_{2,2}=(1,1,-1,-1,-1,-1,1,1)$ in $C_{4}$ and the element $F_{2,2}=(1,1,-1,-1,1,-1,-1,1,1)$ in $B_{4}$. Assume that \( \lambda_1 \) is an even integer and \( \lambda_2, \lambda_3, \lambda_4 \) are odd integers. Theorem \ref{thm 2} then shows that the character \( \Theta_{\underline{\lambda}}(D_{2,2}) \) vanishes, while the character \( \Theta_{\underline{\lambda}}(F_{2,2}) \) is non-zero. Specifically, the value of \( \Theta_{\underline{\lambda}}(F_{2,2}) \) can be expressed as: \[ \Theta_{\underline{\lambda}}(F_{2,2}) = \pm \dim(S_{(\lambda^{1})} \mathbb{C}^4), \] where the highest weight \( \lambda^{1} \) for a representation of $\GL(4,\C)$ is defined by: \[ \lambda^{1} + \rho_4 = \left( \frac{\lambda_1 + 4}{2}, \frac{\lambda_2 + 3}{2}, \frac{\lambda_4 + 1}{2}, \frac{-\lambda_3 - 1}{2} \right). \] This result follows either by a direct calculation or from \cite{AK}, and it underscores a crucial difference between the non-vanishing conditions for \( B_n \) and those for the classical groups \( A_n, C_n \) and \( D_n \): the vanishing conditions for $A_{n}$, $C_{n}$ and $D_{n}$ are the same, but those for $B_{n}$ are not.
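The vanishing of $\Theta_{\underline{\lambda}}(D_{2,2})$ for $\mathrm{Sp}(8,\C)$ in this parity situation can also be checked exactly in examples. The sketch below (illustrative only) uses the classical Weyl--Littlewood determinantal identity $sp_{\lambda} = \frac{1}{2}\det\big(h_{\lambda_i-i+j}+h_{\lambda_i-i-j+2}\big)_{1 \leq i,j \leq n}$, a standard formula expressing a symplectic character through complete homogeneous symmetric polynomials in $x_1^{\pm 1},\ldots,x_n^{\pm 1}$; since the eigenvalues of $D_{2,2}$ are $\pm 1$, everything can be evaluated in integer arithmetic with no limits.

```python
# Exact evaluation of Sp(8, C) characters at D_{2,2}, whose symplectic
# eigenvalues are (1, 1, -1, -1) together with their inverses (which, being
# +-1, coincide with themselves).  Uses the Weyl-Littlewood determinant
# formula for symplectic characters (a standard identity; see the lead-in).

def h(xs, m):
    """Complete homogeneous symmetric polynomial h_m evaluated at xs."""
    if m < 0:
        return 0
    coeffs = [1] + [0] * m
    for x in xs:
        coeffs = [sum(coeffs[j] * x ** (d - j) for j in range(d + 1))
                  for d in range(m + 1)]
    return coeffs[m]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def sp_char(lam, n, vals):
    """Character of Sp(2n, C) with highest weight lam, at an element whose
    2n eigenvalues are vals (0-indexed form of the determinant formula)."""
    lam = list(lam) + [0] * (n - len(lam))
    M = [[h(vals, lam[i] - i + j) + h(vals, lam[i] - i - j) for j in range(n)]
         for i in range(n)]
    return det(M) // 2          # integer division is exact here

eigs = [1, 1, -1, -1] * 2       # the 8 eigenvalues of D_{2,2}

# lambda = (2,1,1,1): lambda_1 even, lambda_2..4 odd, so #eta_0 = 3 > n-k = 2
# and the symplectic theorem predicts a vanishing character:
assert sp_char((2, 1, 1, 1), 4, eigs) == 0
# lambda = (2,2,1,1): #eta_0 = 2 = n-k, and Part B predicts
# |character| = dim(Sp(4)-rep) x dim(SO(5)-rep) = 4 * 10 = 40:
assert abs(sp_char((2, 2, 1, 1), 4, eigs)) == 40
```

The second assertion matches the factorization of Part B of the symplectic theorem with $d(2)=0$, $\lambda^{0}=(1,0)$ for $\mathrm{Sp}(4,\C)$ and $\lambda^{1}=(1,1)$ for $\mathrm{SO}(5,\C)$.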
\begin{remark} Unlike the characters of $\GL(n,\C)$, $\SO(2n,\C)$ and $\Sp(2n,\C)$, there is no vanishing theorem for the character of the highest weight representation $S_{[\underline{\lambda}]}\C^{2n+1}$ with highest weight $\underline{\lambda}=(\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0)$ at $F_{n-k,k}$. \end{remark} \section{Character Theory for $\mathrm{G}_{2}(\C)$} The following proposition (Proposition 24.48 in the book of Fulton and Harris \cite{FH}), which expresses the character of a highest weight representation of $\mathrm{G}_{2}(\mathbb{C})$ in terms of characters of highest weight representations of $\mathrm{SL}_{3}(\mathbb{C}) \subset \mathrm{G}_{2}(\mathbb{C})$, is our main tool for computing the characters of $\mathrm{G}_{2}(\mathbb{C})$. \begin{proposition}\label{prop 1} For the embedding of $\mathrm{SL}_{3}(\mathbb{C}) \subset \mathrm{G}_{2}(\mathbb{C})$, the character of the highest weight representation $\Pi_{k, l}$ of $\mathrm{G}_{2}(\mathbb{C})$ with highest weight $k \omega_{1}+l \omega_{2}$ is given by: $$ \Theta_{k, l}=\frac{\mathcal{S}_{(k+2 l+1, k+l+1,0)}-\mathcal{S}_{(k+2 l+1, l, 0)}}{\mathcal{S}_{(1,1,0)}-\mathcal{S}_{(1,0,0)}}, $$ where $\mathcal{S}_{(a, b, c)}$ denotes the character of the highest weight representation of $\mathrm{GL}(3, \mathbb{C})$ with highest weight $(a, b, c)$ (restricted to $\mathrm{SL}_{3}(\mathbb{C})$ which is contained in $\mathrm{G}_{2}(\mathbb{C})$).
Note that the highest weight of the representation of $\mathrm{SL}_{3}(\mathbb{C})$ with character $\mathcal{S}_{(k+2 l+1, k+l+1,0)}$ is the highest weight of the representation $\Pi_{k, l}$ of $\mathrm{G}_{2}(\mathbb{C})$ treated as a representation of the subgroup $\mathrm{SL}_{3}(\mathbb{C})$, and the character $\mathcal{S}_{(k+2 l+1, l, 0)}$ is that of the dual representation of $\mathrm{SL}_{3}(\mathbb{C})$; the denominator is the difference of the character of the standard 3-dimensional representation of $\mathrm{SL}_{3}(\mathbb{C})$ and its dual. \end{proposition} Proposition \ref{prop 1} can be used to calculate the character of a highest weight representation of $\mathrm{G}_{2}(\mathbb{C})$ at any conjugacy class inside $\mathrm{SL}_{3}(\mathbb{C})$, hence at any conjugacy class of $\mathrm{G}_{2}(\mathbb{C})$. Since a calculation with the Weyl character formula for the conjugacy class $\mathcal{C}_{2}$ in $\mathrm{G}_{2}$, even with a more convenient variant such as Proposition \ref{prop 1}, involves dealing with $0 / 0$, we give another result which makes the calculation at the singular conjugacy class $\mathcal{C}_{2}$ possible. The following theorem, when specialized to $x=1$, calculates the character of any irreducible representation of $\mathrm{G}_{2}(\mathbb{C})$ at $\mathcal{C}_{2}$. This theorem is, at the same time, of theoretical importance in giving a factorization theorem just as in \cite{DP}, \cite{AK}. \begin{theorem}\label{thm 5} Let $\Pi_{k, l}$ be the highest weight representation of the group $\mathrm{G}_{2}(\C)$ with highest weight $k \omega_{1}+l \omega_{2}$.
Then at the particular element $\left(x,-x,-x^{-2}\right) \in \mathrm{SL}_{3}(\mathbb{C}) \subset \mathrm{G}_{2}(\mathbb{C})$ (an element $(a, b, c)$ of $\mathrm{SL}_{3}(\mathbb{C})$, so that $a \cdot b \cdot c=1$), we have the following character relationship: \\ $\Theta_{k, l}\left(x,-x,-x^{-2}\right)=\left\{\begin{array}{lc}0, & \text { if } k, l \text { are odd, } \\ [1ex] -\Theta_{5}\left(x^{2}, x^{-2}\right) \times \Theta_{6}\left(x^{3}, x^{-3}\right), & \text { if } k \text { even }, l \text { odd, } \\ [1ex] -\Theta_{3}\left(x^{2}, x^{-2}\right) \times \Theta_{4}\left(x^{3}, x^{-3}\right), & \text { if } k \text { odd }, l \text { even, } \\ [1ex] \Theta_{1}\left(x^{2}, x^{-2}\right) \times \Theta_{2}\left(x^{3}, x^{-3}\right), & \text { if } k, l \text { are even, }\end{array}\right.$ \\ \\ here $\Theta_{1}, \Theta_{2}, \Theta_{3}, \Theta_{4}, \Theta_{5}$ and $\Theta_{6}$ are the characters of highest weight representations of $\mathrm{SL}_{2}(\mathbb{C})$ with highest weights $\frac{3 l+k}{4} L_{1}, \frac{k+l}{2} L_{1}, \frac{k-3}{4} L_{1}, \frac{k+2 l+1}{2} L_{1}, \frac{2 k+3 l+1}{4} L_{1}$, and $\frac{l-1}{2} L_{1}$, respectively. \begin{proof} We have \[ \Theta_{k,l} = \frac{\mathcal{S}(k+2l+1, k+l+1, 0) - \mathcal{S}(k+2l+1, l, 0)}{\mathcal{S}(1, 1, 0) - \mathcal{S}(1, 0, 0)}, \] where \( \mathcal{S}_{\lambda} \) denotes the Schur polynomial corresponding to the highest weight \( \lambda = (l_1, l_2, l_3) \) with integers \( l_1 \geq l_2 \geq l_3 \geq 0 \). The Weyl denominator is easy enough to calculate: \[ \mathcal{S}(1, 1, 0)(x, -x, -x^{-2}) - \mathcal{S}(1, 0, 0)(x, -x, -x^{-2}) = -x^{2} + x^{-2} = -\frac{x^{4} - 1}{x^{2}}. \] Now we will calculate the Weyl numerator for different cases. \textbf{Case I:} Suppose that \(k\) and \(l\) are both odd.
In this case, the Schur polynomial \(\mathcal{S}(k+2l+1, k+l+1, 0)(x, -x, -x^{-2})\) can be expressed as: \begin{align*} \mathcal{S}(k+2l+1, k+l+1, 0)(x, -x, -x^{-2}) &= \frac{ \begin{vmatrix} x^{k+2l+3} & (-x)^{k+2l+3} & (-x^{-2})^{k+2l+3} \\ x^{k+l+2} & (-x)^{k+l+2} & (-x^{-2})^{k+l+2} \\ 1 & 1 & 1 \end{vmatrix} }{\Delta(x, -x, -x^{-2})} \\ &= 0. \end{align*} Similarly, \begin{align*} \mathcal{S}(k+2l+1, l, 0)(x, -x, -x^{-2}) &= \frac{ \begin{vmatrix} x^{k+2l+3} & (-x)^{k+2l+3} & (-x^{-2})^{k+2l+3} \\ x^{l+1} & (-x)^{l+1} & (-x^{-2})^{l+1} \\ 1 & 1 & 1 \end{vmatrix} }{\Delta(x, -x, -x^{-2})} \\ &=0. \end{align*} \textbf{Case II:} Suppose that \(k\) is even and \(l\) is odd. In this case, for \(X = (x, -x, -x^{-2})\), we have: \begin{align*} \mathcal{S}_{(k+2l+1,k+l+1,0)}(X) &= \frac{\begin{vmatrix} x^{k+2l+3} & (-x)^{k+2l+3} & (-x^{-2})^{k+2l+3} \\ x^{k+l+2} & (-x)^{k+l+2} & (-x^{-2})^{k+l+2} \\ 1 & 1 & 1 \\ \end{vmatrix}}{\Delta(x, -x, -x^{-2})} = \frac{\begin{vmatrix} x^{k+2l+3} & -x^{k+2l+3} & -(x^{-2})^{k+2l+3} \\ x^{k+l+2} & -x^{k+l+2} & -x^{-2(k+l+2)} \\ 1 & 1 & 1 \\ \end{vmatrix}}{-2x \left( x + \frac{1}{x^2} \right) \left( x - \frac{1}{x^2} \right)} \\[2em] &= \frac{\begin{vmatrix} 0 & -x^{k+2l+3} & -(x^{-2})^{k+2l+3} \\ 0 & -x^{k+l+2} & -x^{-2(k+l+2)} \\ 2 & 1 & 1 \\ \end{vmatrix}}{-2x \left( x + \frac{1}{x^2} \right) \left( x - \frac{1}{x^2} \right)} = -\frac{\left( x^{-k-1} - x^{-k-3l-4} \right)}{\left( x^3 - \frac{1}{x^3} \right)}. 
\end{align*} \begin{align*} \mathcal{S}_{(k+2l+1, l, 0)}(X) &= \frac{\begin{vmatrix} x^{k+2l+3} & (-x)^{k+2l+3} & (-x^{-2})^{k+2l+3} \\ x^{l+1} & (-x)^{l+1} & (-x^{-2})^{l+1} \\ 1 & 1 & 1 \\ \end{vmatrix}}{\Delta(x, -x, -x^{-2})} = \frac{\begin{vmatrix} x^{k+2l+3} & -x^{k+2l+3} & -x^{-2(k+2l+3)} \\ x^{l+1} & x^{l+1} & x^{-2(l+1)} \\ 1 & 1 & 1 \\ \end{vmatrix}}{-2x \left( x + \frac{1}{x^2} \right) \left( x - \frac{1}{x^2} \right)} \\[2em] &= \frac{\begin{vmatrix} 2x^{k+2l+3} & -x^{k+2l+3} & -x^{-2(k+2l+3)} \\ 0 & x^{l+1} & x^{-2(l+1)} \\ 0 & 1 & 1 \\ \end{vmatrix}}{-2x \left( x + \frac{1}{x^2} \right) \left( x - \frac{1}{x^2} \right)} = -\frac{\left( x^{k+3l+4} - x^{k+1} \right)}{\left( x^3 - \frac{1}{x^3} \right)}. \end{align*} Therefore we have \begin{align*} \mathcal{S}_{(k+2l+1,k+l+1,0)}(X) - \mathcal{S}_{(k+2l+1,l,0)}(X) &= -\frac{\left( x^{-k-1} - x^{k+3l+4} \right) + \left( x^{k+1} - x^{-k-3l-4} \right)}{\left( x^3 - \frac{1}{x^3} \right)} \tag{5.2} \\[3em] &= \frac{\left( \left( x^3 \right)^{\frac{l+1}{2}} - \left( x^{-3} \right)^{\frac{l+1}{2}} \right) \times \left( \left( x^2 \right)^{\frac{2k+3l+5}{4}} - \left( x^{-2} \right)^{\frac{2k+3l+5}{4}} \right)}{\left( x^3 - \frac{1}{x^3} \right)}. \tag{5.3} \end{align*} So, dividing by the Weyl denominator $\mathcal{S}(1,1,0)(X)-\mathcal{S}(1,0,0)(X)=-x^{2}+x^{-2}$, we have \begin{align*} \Theta_{k,l}(x, -x, -x^{-2})&= -\frac{\left( (x^2)^{\frac{2k + 3l + 5}{4}} - (x^{-2})^{\frac{2k + 3l + 5}{4}} \right)} {(x^2 - x^{-2})} \times \frac{\left( (x^3)^{\frac{l + 1}{2}} - (x^{-3})^{\frac{l +1 }{2}} \right)} {(x^3 - x^{-3})} \\[10pt] &= -\Theta_5(x^2, x^{-2}) \times \Theta_6(x^3, x^{-3}).
\end{align*} where $\Theta_5$ and $\Theta_6$ are characters of the highest weight representations of $\mathrm{SL}_{2}(\mathbb{C})$ with highest weights $\frac{2k + 3l + 1}{4}$ and $\frac{l - 1}{2}$ respectively (note that these numbers may not themselves be integers when $k$ is even and $l$ odd, but their product is; this is a reflection of the fact that we are dealing not with $\mathrm{SL}_2(\mathbb{C}) \times \mathrm{SL}_2(\mathbb{C})$ as a subgroup of $\mathrm{G}_2(\mathbb{C})$ but with a quotient of it by $(-1, -1)$, which corresponds to $\mathrm{SL}_2(\alpha_2) \times \mathrm{SL}_2(\alpha_4)$ inside $\mathrm{G}_2(\C)$). \textbf{Cases III and IV:} Both $k$ and $l$ are even, or $k$ is odd and $l$ is even. We omit the calculations, which are very similar to the one above. \end{proof} \begin{remark} It would be nice to understand what is behind the factorization of characters in Theorem \ref{thm 5}. We are not sure what is so special about the element $(x,-x,-x^{-2})$ used there for the factorization. For example, at the element $(x,x,x^{-2})$ there is no factorization. \end{remark} The proof of the following proposition is a direct consequence of Theorem \ref{thm 5} evaluated at the element $(x, -x, -x^{-2})$ for $x = 1$. This proposition is also available in the work of Reeder \cite{R}. \begin{proposition}\label{prop 2} Let $\Pi_{k,l}$ be the highest weight representation of $\mathrm{G}_2(\mathbb{C})$ with highest weight $k\omega_1 + l\omega_2$, where $\omega_1, \omega_2$ are the fundamental weights of $\mathrm{G}_2(\C)$, with character $\Theta_{k,l}(\mathcal{C}_2)$ at the unique conjugacy class $\mathcal{C}_2$ of order 2.
Then we have \[ \Theta_{k,l}(\mathcal{C}_2) = \begin{cases} 0, & \text{if } k, l \text{ are odd}, \\[15pt] \frac{(k + l + 2) \times (3l + k + 4)}{8}, & \text{if } k, l \text{ are even}, \\[15pt] -\frac{(k + 1) \times (k + 2l + 3)}{8}, & \text{if } k \text{ odd, } l \text{ even}, \\[15pt] -\frac{(l + 1) \times (3l + 2k + 5)}{8}, & \text{if } k \text{ even, } l \text{ odd}. \end{cases} \] \end{proposition} \end{theorem} \begin{remark} From Proposition \ref{prop 2} it can be shown that when the character $\Theta_{k,l}(\mathcal{C}_2)$ is nonzero, up to a sign it can be written as half the dimension of a Spin representation of $\SO(4,\C)$. We have: \[ \Theta_{k,l}(\mathcal{C}_2) = \begin{cases} \frac{1}{2} \ {\rm dim}(S_{[(\frac{k+2l+1}{2},\frac{l+1}{2})]}\C^{4}), & \text{if } k, l \text{ are even}, \\[15pt] -\frac{1}{2} \ {\rm dim}(S_{[(\frac{k+l}{2},\frac{l+1}{2})]}\C^{4}), & \text{if } k \text{ odd, } l \text{ even}, \\[15pt] -\frac{1}{2} \ {\rm dim}(S_{[(\frac{k+2l+1}{2},\frac{k+l+2}{2})]}\C^{4}), & \text{if } k \text{ even, } l \text{ odd}. \end{cases} \] \end{remark} \noindent{\bf Acknowledgement:} This work is part of the author's thesis carried out at IIT Bombay under the guidance of Professor Dipendra Prasad, whom the author thanks profusely for continued support of this work. \begin{thebibliography}{} \bibitem[FH]{FH} W. Fulton and J. Harris, {\it Representation Theory: A First Course.} Springer-Verlag, New York, 1991. \bibitem[AK]{AK} A. Ayyer and N. Kumari, {\it Factorization of Classical Characters Twisted by Roots of Unity.} J. Algebra 609 (2022), 437–483. \bibitem[KO]{KO} B. Kostant, {\it On Macdonald's $\eta$-function formula, the Laplacian and generalized exponents.} Adv. Math. 20 (1976), no. 2, 179–212. \bibitem[R]{R} M. Reeder, {\it Weyl Group Characters Afforded by Zero Weight Spaces.} Transformation Groups 29 (2024), no. 3, 1161–1198. \bibitem[DP]{DP} D. Prasad, {\it A character relationship on $\text{GL}(n,\mathbb{C})$.} Israel J. Math.
211 (2016), no. 1, 257–270. \bibitem[N]{N} N. Kumari, {\it Factorization of Classical Characters Twisted by Roots of Unity: II.} J. Pure Appl. Algebra 228 (2024), no. 11, Paper No. 107714, 45 pp. \end{thebibliography} \end{document}
\documentclass[12pt]{article} \usepackage[left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm,a4paper]{geometry} \usepackage{graphicx} \usepackage{microtype} \usepackage{siunitx} \usepackage{booktabs} \usepackage{graphics} \usepackage{epsfig} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{cleveref} \usepackage{listings} \usepackage{paralist} \usepackage{sectsty} \usepackage{datetime} \usepackage{algorithmic,algorithm} \pagestyle{headings} \usepackage[X2,T1]{fontenc} \usepackage{authblk} \author[1]{WonTae Hwang} \author[2]{Kyunghwan Song\thanks{[email protected]}} \affil[1]{Department of Mathematics, Jeonbuk National University, Republic of Korea} \affil[2]{Department of Mathematics, Jeju National University, 102 Jejudaehakro Jeju, 63243, Republic of Korea} \date{} \DeclareTextSymbolDefault{\CYRABHDZE}{X2} \DeclareTextSymbolDefault{\cyrabhdze}{X2} \newcommand{\Ezh}{\CYRABHDZE} \newcommand{\ezh}{\cyrabhdze} \numberwithin{equation}{section} \subsectionfont{\normalfont} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{problem} \newtheorem{notation}[theorem]{Notation} \providecommand{\keywords}[1]{\textbf{\textit{Key words---}} #1} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bee}{\begin{equation*}} \newcommand{\eee}{\end{equation*}} \usepackage{color} \title{ The Frobenius problem for Binomial Coefficients } \begin{document} \maketitle \begin{abstract} The greatest integer that does not belong to a numerical semigroup $S$ is called the Frobenius number of $S$, and
finding the Frobenius number is called the Frobenius problem. In this paper, we solve the Frobenius problem for binomial coefficients. \end{abstract} \section{Introduction}\label{sec_Introduction} The greatest integer that does not belong to a numerical semigroup $S$ is called the {\em Frobenius number} of $S$ and is denoted by $F(S)$. In other words, the Frobenius number is the largest integer that cannot be expressed as a sum $\sum_{i=1}^n t_i a_i$, where $t_1, t_2, \ldots, t_n$ are nonnegative integers and $a_1, a_2, \ldots, a_n$ are given positive integers such that $\gcd(a_1, a_2, \ldots, a_n) = 1$. Finding the Frobenius number is called the Frobenius problem, the coin problem or the money changing problem. The Frobenius problem is not only interesting for pure mathematicians, but is also connected with graph theory \cite{Heap (1965), Hujter (1987)} and theoretical computer science \cite{Raczunas1996}, as introduced in \cite{Owens2003}. There are explicit formulas for calculating the Frobenius number when only two relatively prime numbers are present \cite{Sylvester1883}. For three relatively prime numbers, it was shown decades ago that there is a somewhat algorithmic method to obtain the Frobenius number using the Euclidean algorithm \cite{Rodseth1978}. More recently, a semi-explicit formula \cite{Robles2012} for the Frobenius number of three relatively prime numbers was presented. An improved semi-explicit formula was presented for this case in 2017 \cite{Tripathi (2017)}. F. Curtis proved in \cite{Curtis (1990)} that the Frobenius number for three or more relatively prime numbers cannot be given by a finite set of polynomials, and Ram\'irez-Alfons\'in proved in \cite{Ramirez1996} that the problem is NP-hard. Currently, only algorithmic methods of determining the general formula for the Frobenius number of a set of three or more relatively prime numbers exist \cite{Beihoffer (2005), Bocker (2007)}.
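For the two-generator case mentioned above, Sylvester's formula \cite{Sylvester1883} gives $F(\langle a,b\rangle)=ab-a-b$ for coprime $a,b$. The following brute-force cross-check is purely illustrative (the generator sets are arbitrary examples):

```python
from math import gcd
from functools import reduce

def frobenius(gens):
    """Frobenius number of <gens> (requires gcd(gens) = 1),
    by dynamic-programming reachability up to a safe bound."""
    assert reduce(gcd, gens) == 1
    bound = gens[0] * gens[1]            # F(<a,b,...>) <= F(<a,b>) = ab - a - b
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for g in gens:                       # unbounded-knapsack style sweep
        for s in range(g, bound + 1):
            if reachable[s - g]:
                reachable[s] = True
    return max(s for s in range(bound + 1) if not reachable[s])

# Sylvester's two-generator formula F(<a,b>) = a*b - a - b:
for a, b in [(3, 5), (5, 8), (7, 11)]:
    assert frobenius([a, b]) == a * b - a - b
# A three-generator example: the well-known F(<6, 9, 20>) = 43:
assert frobenius([6, 9, 20]) == 43
```

Such direct enumeration quickly becomes infeasible as the generators grow, which is one motivation for the Ap\'ery-set techniques recalled below.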
Some recent studies have reported that the running time of the fastest algorithm is $O(a_1)$, with the residue table in memory, in \cite{Brauer (1962)}, and $O(na_1)$ with no additional memory requirements in \cite{Bocker (2007)}. In addition, research on the limiting distribution and the lower bound of the Frobenius number was presented in \cite{Shchur2009} and \cite{Aliev (2007),Fel (2015)}, respectively. From an algebraic viewpoint, rather than finding the general formula for three or more relatively prime numbers, formulae for special cases were found, such as the Frobenius number of a set of integers in a geometric sequence in \cite{Ong2008}, Pythagorean triples in \cite{Gil (2015)} and three consecutive squares or cubes in \cite{Lepilov (2015)}. Recently, various methods of solving the Frobenius problem for numerical semigroups have been suggested in \cite{Aliev (2009)}, \cite{Rosales2000}, \cite{Rosales2009}, \cite{Rosales2004}, etc. In particular, a method of computing the Ap\'ery set and obtaining the Frobenius number from it is an efficient tool for solving the Frobenius problem of numerical semigroups, as reported in \cite{Marquez (2015),Rosales2009,Ramirez2009}. Furthermore, in recent articles presenting the Frobenius problems for Fibonacci numerical semigroups in \cite{Marin (2007)}, Mersenne numerical semigroups in \cite{Rosales2017}, Thabit numerical semigroups in \cite{Rosales2015} and repunit numerical semigroups in \cite{Rosales2016}, this method is used to obtain the Frobenius number. Let $m \leq n$, $m,n \in \mathbb{N}$, and let us consider the following set $$ B_n = \left\{\binom{n}{1},\binom{n}{2}, \ldots, \binom{n}{n-1}\right\}.
$$ Ram proved in \cite{Ram1909} that $\gcd(B_n) = \begin{cases} p & \text{when }n = p^m\text{ is a prime power} \\ 1 & \text{otherwise,} \end{cases}$ and therefore $$ S(B_n) = \left\langle\left\{\binom{n}{1},\binom{n}{2}, \ldots, \binom{n}{n-1}\right\}\right\rangle $$ is a numerical semigroup when $n$ is not a prime power, while if $n = p^m$ is a prime power then $$ S(B_n) = \left\langle\left\{\frac{1}{p}\binom{n}{1},\frac{1}{p}\binom{n}{2}, \ldots, \frac{1}{p}\binom{n}{n-1}\right\}\right\rangle $$ is a numerical semigroup. In this paper, we solve the Frobenius problem for $S(B_n)$. Since $\binom{n}{k} = \binom{n}{n-k}$, we have $$ S(B_n) = \left\langle\left\{\binom{n}{1},\binom{n}{2}, \ldots, \binom{n}{\lfloor n/2\rfloor}\right\}\right\rangle, $$ where $\lfloor \cdot \rfloor$ denotes the floor function. \section{Preliminaries} Let $\mathbb{N}$ be the set of nonnegative integers. To begin, we introduce numerical semigroups and the submonoid generated by a nonempty subset. \begin{definition} A {\em numerical semigroup} is a subset $S$ of $\mathbb{N}$ that is closed under addition, contains $0$, and is such that $\mathbb{N}\setminus S$ is finite. \end{definition} \begin{definition} \label{def_submonoid} Given a nonempty subset $A$ of $\mathbb{N}$, we denote by $\big<A\big>$ \textit{the submonoid of $(\mathbb{N},+)$ generated by $A$}, that is, \begin{displaymath} \big<A\big> = \{\lambda_1 a_1 + \cdots + \lambda_n a_n \mid n \in \mathbb{N}\setminus\{0\},\ a_i \in A,\ \lambda_i \in \mathbb{N} \textrm{ for all } i \in \{1,\ldots,n\}\}. \end{displaymath} \end{definition} In addition, we introduce several theorems and definitions directly related to the above definitions. \begin{theorem} (\cite{Rosales2015,Rosales2009}). Let $\big<A\big>$ be the submonoid of $(\mathbb{N},+)$ generated by a nonempty subset $A \subseteq \mathbb{N}$. Then $\big<A\big>$ is a numerical semigroup if and only if $\gcd(A) = 1$.
\end{theorem} \begin{definition} If $S$ is a numerical semigroup and $S = \big< A\big>$ then we say that $A$ is a {\em system of generators of} $S$. Moreover, if $S \neq \big< X\big>$ for all $X \subsetneq A$, we say that $A$ is a {\em minimal system of generators of} $S$. \end{definition} \begin{theorem} (\cite{Rosales2009}). Every numerical semigroup admits a finite and unique minimal system of generators. \end{theorem} \begin{definition} The cardinality of the minimal system of generators of $S$ is called the {\em embedding dimension} of $S$ and is denoted by $e(S)$. \end{definition} \begin{definition} For a numerical semigroup $S$, the cardinality of $\mathbb{N}\setminus S$ is called the {\em genus of} $S$ and is denoted by $g(S)$. \end{definition} Recall that the {\em Ap\'{e}ry set} of a numerical semigroup $S$ with respect to a nonzero $x\in S$ is $Ap(S,x)=\{s\in S \mid s-x\notin S\}$. The relation among the Frobenius number, genus, and Ap\'{e}ry set of a numerical semigroup is provided in the following lemma. \begin{lemma} \emph{(\cite{Rosales2009, Selmer1977})}. \label{lem_F_g} Let $S$ be a numerical semigroup and let $x \in S\setminus\{0\}$. Then, \begin{enumerate} \item $F(S) = \max(Ap(S,x)) - x$ \item $g(S) = \frac{1}{x} (\sum_{w \in Ap(S,x)} w) - \frac{x-1}{2}$ \end{enumerate} \end{lemma} \begin{definition} \label{def_pseudo} An integer $x$ is a {\em pseudo-Frobenius number} if $x \not\in S$ and $x + s \in S$ for all $s \in S\setminus\{0\}$. The set of pseudo-Frobenius numbers of $S$ is denoted by $PF(S)$. In addition, we call this set's cardinality the {\em type} of $S$ and denote it by $t(S)$. \end{definition} In this paper, the $p$-adic valuation is the function $v_p: \mathbb{N}\setminus\{0\} \rightarrow \mathbb{N}$ defined by $$ v_p(x) = \max\{v \in \mathbb{N}: p^v \mid x\}. $$ We will need the following lemma. \begin{lemma}\cite{Sun2011} Let $p$ be a prime and let $a, m, n \in \mathbb{N}$ with $m \geq n \geq 0$.
Then \begin{equation}\label{eq:modp} \frac{\binom{p^am}{p^an}}{\binom{m}{n}} \equiv 1 + [p=2]pn(m-n)\pmod{p^{2+v_p(n)}} \end{equation} where $[A] = \begin{cases} 1 &\text{ if }A\text{ is true},\\ 0 &\text{ if }A\text{ is false}. \end{cases}$ \end{lemma} We use this lemma to obtain the following result, which is used throughout this paper. \begin{lemma}\label{lem:modn} Let $n = p_1^{k_1}p_2^{k_2}\cdots p_t^{k_t}$ where the $p_i$ are distinct primes. Then we have $$ \binom{n}{p^k} \equiv \frac{n}{p^k}\pmod{n} $$ where $p = p_i$ and $k \leq k_i$ for some $i \in \{1,\ldots,t\}$. \end{lemma} \begin{proof} Substituting $p = 2$, $a = k \leq k_1$, $m = \frac{n}{2^k}$ and $n = 1$ (here $m$ and $n$ denote the variables of (\ref{eq:modp})) in (\ref{eq:modp}), we have $$ \binom{n}{2^k} = \binom{2^k\cdot \frac{n}{2^k}}{2^k} \equiv \frac{n}{2^k} + 2\cdot \frac{n}{2^k}\cdot \left(\frac{n}{2^k} - 1\right) \equiv \frac{n}{2^k}\pmod{2^{2+k_1}}. $$ If $p_i$ is an odd prime number, then substituting $p = p_i$, $a = k \leq k_i$, $m = n/p_i^{k}$ and $n = 1$ in (\ref{eq:modp}) we have $$ \binom{n}{p_i^k} = \binom{{p_i}^{k}\cdot \frac{n}{{p_i}^k}}{{p_i}^k} \equiv \frac{n}{{p_i}^k}\pmod{{p_i}^{2+k_i}}. $$ Therefore we have $$ \binom{n}{p_i^{k}} \equiv \frac{n}{{p_i}^k}\pmod{{p_i}^{k_i}} $$ for any prime number $p_i$ and $k \leq k_i$. Since $$ \binom{n}{p_i^{k}} \equiv 0\pmod{p_j^{k_j}} $$ for any $p_j \neq p_i$, we obtain the result by the Chinese Remainder Theorem. \end{proof} We introduce the following proposition, which is also used in the main result. \begin{proposition}\label{prop_bound}\cite{Gallier2014} Let $p_1, \ldots, p_k$ be $k \geq 2$ integers such that $2\leq p_1\leq \ldots \leq p_k$, with $\gcd(p_1,\ldots,p_k) = 1$. Then, for all $n \geq (p_1 - 1)(p_k -1)$, there exist $i_1, \ldots, i_k \in \mathbb{N}$ such that $$ n = i_1p_1 + \cdots + i_kp_k. $$ \end{proposition} \section{Main Result} \begin{theorem}\label{thm:general} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where the $k_i$ are nonnegative integers and the $p_i$ are distinct primes.
Then for any $m = p_1^{s_1}\cdots p_t^{s_t}$ with $m \leq \lfloor \frac{n}{2}\rfloor$, the binomial coefficient $\binom{n}{m}$ is a nonnegative integer combination of $\binom{n}{1}, \binom{n}{p_1^{s_1}}, \binom{n}{p_2^{s_2}}, \ldots, \binom{n}{p_t^{s_t}}$. \end{theorem} \begin{proof} Because $v_{p_i}\left(\binom{n}{m}\right) \geq v_{p_i}\left(\binom{n}{p_i^{s_i}}\right)$ for any $i$, we have $$ \gcd\left(\binom{n}{1}, \binom{n}{p_1^{s_1}},\binom{n}{p_2^{s_2}}, \ldots,\binom{n}{p_t^{s_t}}\right) \Big| \binom{n}{m}. $$ If we let $c = \gcd\left(\binom{n}{1}, \binom{n}{p_1^{s_1}},\binom{n}{p_2^{s_2}}, \ldots,\binom{n}{p_t^{s_t}}\right)$, $x = \min\left(\binom{n}{1}, \binom{n}{p_1^{s_1}},\binom{n}{p_2^{s_2}}, \ldots,\binom{n}{p_t^{s_t}}\right) = n$ and $y = \max\left(\binom{n}{1}, \binom{n}{p_1^{s_1}},\binom{n}{p_2^{s_2}}, \ldots,\binom{n}{p_t^{s_t}}\right)$, then $\gcd\left(\frac{1}{c}\binom{n}{1}, \frac{1}{c}\binom{n}{p_1^{s_1}},\frac{1}{c}\binom{n}{p_2^{s_2}}, \ldots,\frac{1}{c}\binom{n}{p_t^{s_t}}\right) = 1$, and using Proposition \ref{prop_bound} we have $$ F\left(\frac{1}{c}\binom{n}{1}, \frac{1}{c}\binom{n}{p_1^{s_1}},\frac{1}{c}\binom{n}{p_2^{s_2}}, \ldots,\frac{1}{c}\binom{n}{p_t^{s_t}}\right) < \left(\frac{n}{c} - 1\right)\left(\frac{y}{c} - 1\right) < \frac{1}{c}\binom{n}{m}. $$ Since $\binom{n}{m} = c\cdot \frac{1}{c}\binom{n}{m}$, this completes the proof. \end{proof} Now we are ready to solve the Frobenius problem when $n$ is not a prime power. \begin{theorem} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where $p_1< \cdots < p_t$ are primes, the $k_i$ are nonnegative integers, and $n$ is not a prime power, and consider $$ S(B_{n}) = \left\langle \left\{\binom{n}{1}, \ldots, \binom{n}{n-1}\right\}\right\rangle. $$ Then $S(B_{n}) = \langle\{\binom{n}{1}, \binom{n}{p_1},\binom{n}{p_1^2},\ldots,\binom{n}{p_1^{k_1}}, \ldots, \binom{n}{p_t},\binom{n}{p_t^2},\ldots,\binom{n}{p_t^{k_t}}\}\rangle$ and the embedding dimension of $S(B_{n})$ is $e(S(B_{n})) = 1 + \sum_{i=1}^{t} k_i$.
\end{theorem} \begin{proof} Note that $$ \left\langle\left\{\binom{n}{1},\binom{n}{p_1},\binom{n}{p_1^2},\ldots,\binom{n}{p_1^{k_1}}, \ldots, \binom{n}{p_t},\binom{n}{p_t^2},\ldots,\binom{n}{p_t^{k_t}}\right\}\right\rangle \subseteq S(B_n), $$ and that $\binom{n}{p^k} > \binom{n}{m}$ and $v_{p}\left(\binom{n}{p^k}\right) < v_{p}\left(\binom{n}{m}\right)$ for any $p \in \{p_1, \ldots, p_t\}$, $1\leq m\leq p^k-1$ and $k \leq k_i$ when $p = p_i$. If $p_i \nmid m$ for every $i$, then $n \mid \binom{n}{m}$ and therefore $\binom{n}{m}$ is a nonnegative integer multiple of $\binom{n}{1}$. If $m = p_1^{s_1}\cdots p_t^{s_t}$ with $m \leq \lfloor \frac{n}{2}\rfloor$, then $\binom{n}{m}$ is a nonnegative integer combination of $\binom{n}{1}, \binom{n}{p_1^{s_1}}, \binom{n}{p_2^{s_2}}, \ldots, \binom{n}{p_t^{s_t}}$ by Theorem \ref{thm:general}. If $\gcd(m,n) > 1$ and $m > \lfloor \frac{n}{2}\rfloor$, we apply the identity $\binom{n}{m} = \binom{n}{n-m}$, so $\binom{n}{m}$ is again a nonnegative integer combination of $\binom{n}{1}, \binom{n}{p_1^{s_1}}, \binom{n}{p_2^{s_2}}, \ldots, \binom{n}{p_t^{s_t}}$. Combining all the cases, we conclude that a minimal system of generators of $S(B_{n})$ is $\{\binom{n}{1}, \binom{n}{p_1},\binom{n}{p_1^2},\ldots,\binom{n}{p_1^{k_1}}, \ldots, \binom{n}{p_t},\binom{n}{p_t^2},\ldots,\binom{n}{p_t^{k_t}}\}$. \end{proof} By Lemma \ref{lem:modn} we have the following corollary. \begin{corollary}\label{cor:ap_n} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where $p_1< \cdots < p_t$ are primes, the $k_i$ are nonnegative integers, and $n$ is not a prime power. Then the Ap\'{e}ry set of $S(B_n)$ is $$ \text{Ap}(S(B_{n}),n) = \Big\{\sum_{i=1}^{t}\sum_{j=1}^{k_i} c_{i,j}\binom{n}{p_i^{j}} : c_{i,j} \in \mathbb{Z},\ 0\leq c_{i,j} \leq p_i-1\Big\}. $$ \end{corollary} \begin{corollary} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where $p_1< \cdots < p_t$ are primes, the $k_i$ are nonnegative integers, and $n$ is not a prime power.
Then the Frobenius number of $S(B_{n})$ is $$ F(S(B_{n})) = \sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j} - n. $$ \end{corollary} \begin{proof} By Lemma \ref{lem_F_g} and Corollary \ref{cor:ap_n}, $$ \max(\text{Ap}(S(B_{n}),n)) = \sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j} $$ and therefore $$ F(S(B_{n})) = \max(\text{Ap}(S(B_{n}),n)) - n = \sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j} - n. $$ \end{proof} \begin{corollary} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where $p_1< \cdots < p_t$ are primes, the $k_i$ are nonnegative integers, and $n$ is not a prime power. Then the genus of $S(B_{n})$ is $g(S(B_{n})) = \sum_{i=1}^{t}\sum_{j=1}^{k_i}\frac{p_i-1}{2}\binom{n}{p_i^j} - \frac{n-1}{2}$. \end{corollary} \begin{proof} By Corollary \ref{cor:ap_n}, for each pair $(i,j)$ the coefficient $c_{i,j}$ takes each value in $\{0,\ldots,p_i-1\}$ for exactly $n/p_i$ elements of $\text{Ap}(S(B_{n}),n)$. Hence, by Lemma \ref{lem_F_g}, \begin{align*} g(S(B_{n})) & = \frac{1}{n}\sum_{w \in \text{Ap}(S(B_{n}),n)} w - \frac{n-1}{2} \\ & = \frac{1}{n} \left( \sum_{i=1}^{t}\sum_{j=1}^{k_i}\frac{n}{p_i}\cdot\frac{p_i(p_i-1)}{2}\binom{n}{p_i^j} \right) - \frac{n-1}{2} \\ & = \sum_{i=1}^{t}\sum_{j=1}^{k_i}\frac{p_i-1}{2}\binom{n}{p_i^j} - \frac{n-1}{2}. \end{align*} \end{proof} \begin{corollary} Let $n = p_1^{k_1}\cdots p_t^{k_t}$, where $p_1< \cdots < p_t$ are primes, the $k_i$ are nonnegative integers, and $n$ is not a prime power. Then the set of pseudo-Frobenius numbers of $S(B_{n})$ is $PF(S(B_{n})) = \{\sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j} - n\}$ and $t(S(B_n)) = 1$. \end{corollary} \begin{proof} Since $\sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j} - w \in \text{Ap}(S(B_{n}),n)$ for any $w \in \text{Ap}(S(B_{n}),n)$, the unique maximal element of $\text{Ap}(S(B_{n}),n)$ with respect to the order induced by $S(B_{n})$ is $\sum_{i=1}^{t}\sum_{j=1}^{k_i} (p_i-1)\binom{n}{p_i^j}$, and we obtain the set of pseudo-Frobenius numbers. \end{proof} Similarly, we can obtain results for the case when $n = p^m$ is a prime power. In this case we let $$ S(B_n) = \left\langle\left\{\frac{1}{p}\binom{n}{1},\frac{1}{p}\binom{n}{2}, \ldots, \frac{1}{p}\binom{n}{n-1}\right\}\right\rangle. $$ Because of the similarity of the proofs, we omit them and just state the main theorem and corollaries.
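Before turning to the prime-power case, the results above can be spot-checked numerically. The Python sketch below (our addition, not part of the original text; the names `frobenius` and `factorization` are ours) takes $n = 12 = 2^2\cdot 3$, verifies the congruence $\binom{n}{p^k}\equiv n/p^k \pmod{n}$, and compares the closed-form Frobenius number $\sum_{i}\sum_{j}(p_i-1)\binom{n}{p_i^j}-n$ with a brute-force computation over the minimal generators:

```python
from math import comb, gcd
from functools import reduce

def frobenius(gens):
    # Brute-force Frobenius number of <gens>; requires gcd(gens) == 1.
    assert reduce(gcd, gens) == 1
    bound = (min(gens) - 1) * (max(gens) - 1)  # all larger integers are representable
    rep = [False] * (bound + 1)
    rep[0] = True
    for x in range(1, bound + 1):
        rep[x] = any(x >= g and rep[x - g] for g in gens)
    return max(x for x in range(bound + 1) if not rep[x])

n = 12                      # = 2^2 * 3, not a prime power
factorization = {2: 2, 3: 1}

# Congruence of the key lemma: binom(n, p^k) = n / p^k (mod n).
for p, e in factorization.items():
    for k in range(1, e + 1):
        assert comb(n, p**k) % n == (n // p**k) % n

# Minimal generators binom(n,1) and binom(n, p^j): here {12, 66, 495, 220}.
gens = [n] + [comb(n, p**j) for p, e in factorization.items() for j in range(1, e + 1)]

# Closed-form Frobenius number: sum_{i,j} (p_i - 1) * binom(n, p_i^j) - n.
closed_form = sum((p - 1) * comb(n, p**j)
                  for p, e in factorization.items()
                  for j in range(1, e + 1)) - n
assert frobenius(gens) == closed_form == 989
```

The same check also confirms the symmetry of $S(B_{12})$: since $t(S)=1$, the genus equals $(F+1)/2 = 495$.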
\begin{theorem} Let $n = p^m$ where $p$ is a prime and $m \in \mathbb{N}$, and consider $$ S(B_{n}) = \left\langle \left\{\frac{1}{p}\binom{n}{1}, \ldots, \frac{1}{p}\binom{n}{n-1}\right\}\right\rangle. $$ Then $S(B_{n}) = \langle\{\frac{1}{p}\binom{n}{1}, \frac{1}{p}\binom{n}{p},\frac{1}{p}\binom{n}{p^2},\ldots, \frac{1}{p}\binom{n}{p^{m-1}}\}\rangle$ and the embedding dimension of $S(B_{n})$ is $e(S(B_{n})) = m$. \end{theorem} \begin{corollary} Let $n = p^m$ where $p$ is a prime and $m \in \mathbb{N}$. Then the Ap\'{e}ry set of $S(B_n)$ with respect to $\frac{1}{p}\binom{n}{1} = p^{m-1}$ is $$ \text{Ap}(S(B_{n}),p^{m-1}) = \Big\{\frac{1}{p}\sum_{i=1}^{m-1} c_i\binom{n}{p^i} : c_{i} \in \mathbb{Z},\ 0\leq c_{i} \leq p-1\Big\}. $$ \end{corollary} \begin{corollary} Let $n = p^m$ where $p$ is a prime and $m \in \mathbb{N}$. Then the Frobenius number of $S(B_{n})$ is $$ F(S(B_{n})) = \frac{p-1}{p}\sum_{i=1}^{m-1} \binom{p^m}{p^i} - p^{m-1}. $$ \end{corollary} \begin{corollary} Let $n = p^m$ where $p$ is a prime and $m \in \mathbb{N}$. Then the genus of $S(B_{n})$ is $g(S(B_{n})) = \frac{p-1}{2p}\sum_{i=1}^{m-1}\binom{p^m}{p^i} - \frac{p^{m-1}-1}{2}$. \end{corollary} \begin{corollary} Let $n = p^m$ where $p$ is a prime and $m \in \mathbb{N}$. Then the set of pseudo-Frobenius numbers of $S(B_{n})$ is $PF(S(B_{n})) = \{\frac{p-1}{p}\sum_{i=1}^{m-1} \binom{p^m}{p^i} - p^{m-1}\}$ and $t(S(B_n)) = 1$. \end{corollary} \begin{thebibliography}{} \bibitem{Aliev (2007)} I. M. Aliev and P. M. Gruber, {An optimal lower bound for the Frobenius problem,} {\it J. Number Theory} \textbf{123} (2007), 71--79. \bibitem{Aliev (2009)} I. M. Aliev and P. M. Gruber, {Bounds on the number of numerical semigroups of a given genus,} {\it J. Pure Appl. Algebra} \textbf{213} (2009), 997--1001. \bibitem{Beihoffer (2005)} D. Beihoffer, J. Hendry, A. Nijenhuis, and S. Wagon, {Faster algorithms for Frobenius numbers,} {\it Electron. J. Comb.} \textbf{12} (2005). \bibitem{Bocker (2007)} S. B\"{o}cker and Z.
Lipt\'{a}k, {A fast and simple algorithm for the money changing problem,} {\it Algorithmica} \textbf{48} (2007), 413--432. \bibitem{Brauer (1962)} A. Brauer and J. E. Shockley, {On a problem of Frobenius,} {\it J. Reine Angew. Math.} \textbf{211} (1962), 215--220. \bibitem{Curtis (1990)} F. Curtis, {On formulas for the Frobenius number of a numerical semigroup,} {\it Math. Scand.} \textbf{67} (1990), 190--192. \bibitem{Fel (2015)} L. G. Fel, {On Frobenius numbers for symmetric (not complete intersection) semigroups generated by four elements,} {\it Semigroup Forum} (2015), 1--4. \bibitem{Gil (2015)} B. K. Gil, J. W. Han, T. H. Kim, R. H. Koo, B. W. Lee, J. H. Lee, K. S. Nam, H. W. Park and P. S. Park, {Frobenius numbers of Pythagorean triples,} {\it Int. J. Number Theory} \textbf{11} (2015), 613--619. \bibitem{Gu (2017)} Z. Gu and X. Tang, {The Frobenius problem for a class of numerical semigroups,} {\it Int. J. Number Theory} \textbf{13} (2017), 1335--1347. \bibitem{Heap (1965)} B. R. Heap and M. S. Lynn, {On a linear diophantine problem of Frobenius: an improved algorithm,} {\it Numer. Math.} \textbf{7} (1965), 226--231. \bibitem{Hujter (1987)} M. Hujter and B. Vizvari, {The exact solution to the Frobenius problem with three variables,} {\it J. Ramanujan Math. Soc.} \textbf{2} (1987), 117--143. \bibitem{Lepilov (2015)} M. Lepilov, J. O'Rourke, and I. Swanson, {Frobenius numbers of numerical semigroups generated by three consecutive squares or cubes,} {\it Semigroup Forum} \textbf{91} (2015), 238--259. \bibitem{Marin (2007)} J. M. Mar\'{i}n, J. L. Ram\'{i}rez Alfons\'{i}n, and M. P. Revuelta, {On the Frobenius number of Fibonacci numerical semigroups,} {\it Integers} \textbf{7} (2007), A14. \bibitem{Marquez (2015)} G. M\'{a}rquez-Campos, I. Ojeda, and J. M. Tornero, {On the computation of the Ap\'{e}ry set of numerical monoids and affine semigroups,} {\it Semigroup Forum} \textbf{91} (2015), 139--158. \bibitem{Ong2008} D. C.
Ong and V. Ponomarenko, {The Frobenius number of geometric sequences,} {\it Integers} \textbf{8} (2008), A33. \bibitem{Owens2003} R. W. Owens, {An algorithm to solve the Frobenius problem,} {\it Math. Mag.} \textbf{76} (2003), 264--275. \bibitem{Raczunas1996} M. Raczunas and P. Chrzastowski-Wachtel, {A diophantine problem of Frobenius in terms of the least common multiple,} {\it Discrete Math.} \textbf{150} (1996), 347--357. \bibitem{Ramirez1996} J. L. Ram\'{i}rez-Alfons\'{i}n, {Complexity of the Frobenius problem,} {\it Combinatorica} \textbf{16} (1996), 143--147. \bibitem{Ramirez2009} J. L. Ram\'{i}rez-Alfons\'{i}n and \O. J. R\o dseth, {Numerical semigroups: Ap\'{e}ry sets and Hilbert series,} {\it Semigroup Forum} \textbf{79} (2009), 323--340. \bibitem{Robles2012} A. M. Robles-P\'{e}rez and J. C. Rosales, {The Frobenius problem for numerical semigroups with embedding dimension equal to three,} {\it Math. Comput.} \textbf{81} (2012), 1609--1617. \bibitem{Rodseth1978} \O. J. R\o dseth, {On a linear diophantine problem of Frobenius,} {\it J. Reine Angew. Math.} \textbf{301} (1978), 171--178. \bibitem{Rosales2000} J. C. Rosales, {Numerical semigroups with Ap\'{e}ry sets of unique expression,} {\it J. Algebra} \textbf{226} (2000), 479--487. \bibitem{Rosales2015} J. C. Rosales, M. B. Branco, and D. Torr\~{a}o, {The Frobenius problem for Thabit numerical semigroups,} {\it J. Number Theory} \textbf{155} (2015), 85--99. \bibitem{Rosales2016} J. C. Rosales, M. B. Branco, and D. Torr\~{a}o, {The Frobenius problem for repunit numerical semigroups,} {\it Ramanujan J.} \textbf{40} (2016), 323--334. \bibitem{Rosales2017} J. C. Rosales, M. B. Branco, and D. Torr\~{a}o, {The Frobenius problem for Mersenne numerical semigroups,} {\it Math. Z.} \textbf{286} (2017), 1--9. \bibitem{Rosales2009} J. C. Rosales and P. A. Garc\'{i}a-S\'{a}nchez, {\it Numerical Semigroups}, Springer Science \& Business Media, New York, 2009. \bibitem{Rosales2004} J. C. Rosales, P. A.
Garc\'{i}a-S\'{a}nchez, J. I. Garc\'{i}a-Garc\'{i}a, and J. J. Madrid, {Fundamental gaps in numerical semigroups,} {\it J. Pure Appl. Algebra} \textbf{189} (2004), 301--313. \bibitem{Selmer1977} E. S. Selmer, {On the linear diophantine problem of Frobenius,} {\it J. Reine Angew. Math.} \textbf{293} (1977), 1--17. \bibitem{Sylvester1883} J. J. Sylvester, {Problem 7382,} {\it The Educational Times and Journal of the College of Preceptors, New Series} \textbf{36} (1883), 177. \bibitem{Shchur2009} V. Shchur, Y. Sinai, and A. Ustinov, {Limiting distribution of Frobenius numbers for $n= 3$,} {\it J. Number Theory} \textbf{129} (2009), 2778--2789. \bibitem{Tripathi (2017)} A. Tripathi, {Formulae for the Frobenius number in three variables,} {\it J. Number Theory} \textbf{170} (2017), 368--389. \bibitem{Gallier2014} J. Gallier, {The Frobenius coin problem: upper bounds on the Frobenius number,} preprint, 2014. \bibitem{Sun2011} Z. W. Sun and R. Tauraso, {On some new congruences for binomial coefficients,} {\it Int. J. Number Theory} \textbf{7} (2011), 645--662. \bibitem{Ram1909} B. Ram, {Common factors of $\frac{n!}{m!(n-m)!}$, $(m = 1,2,\ldots, n-1)$,} {\it J. Indian Math. Club} \textbf{1} (1909), 39--43. \end{thebibliography} \end{document}
2412.17311v1
http://arxiv.org/abs/2412.17311v1
Dualizing involutions on the $n$-fold metaplectic cover of $\GL(2)$
\documentclass[11pt, a4paper, reqno]{amsart} \pagestyle{plain} \usepackage[english]{babel} \usepackage{amsthm, fullpage} \usepackage{graphicx} \usepackage{amsfonts} \usepackage{eufrak} \usepackage{amsopn} \usepackage{amsmath, amsfonts, amssymb, amscd, amsthm} \usepackage{enumitem} \usepackage{setspace} \usepackage{mathtools} \usepackage[all]{xy} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{note}[theorem]{Note} \numberwithin{equation}{section} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\Ind}{Ind} \DeclareMathOperator{\ch}{char} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Image}{Im} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\ind}{ind} \DeclareMathOperator{\M}{M} \DeclareMathOperator{\Sp}{Sp} \DeclareMathOperator{\Gsp}{Gsp} \DeclareMathOperator{\SO}{SO} \DeclareMathOperator{\Id}{I} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\Ortho}{O} \DeclareMathOperator{\Int}{Int} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\tr}{Tr} \DeclareMathOperator{\reg}{reg} \DeclareMathOperator{\coh}{H} \usepackage[colorlinks]{hyperref} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \usepackage{comment} \newcommand{\C}{\mathbb{C}} \begin{document} \title{Dualizing involutions on the $n$-fold metaplectic cover of $\GL(2)$} \author{Kumar Balasubramanian} \thanks{Research of Kumar Balasubramanian is supported by the SERB grant: CRG/2023/000281.} \address{Kumar Balasubramanian\\ Department of Mathematics\\ IISER Bhopal\\ Bhopal, Madhya Pradesh
462066, India} \email{[email protected]} \author{Renu Joshi} \address{Renu Joshi\\ Department of Mathematics\\ IISER Bhopal\\ Bhopal, Madhya Pradesh 462066, India} \email{[email protected]} \author{Sanjeev Kumar Pandey} \address{Sanjeev Kumar Pandey\\ Department of Mathematics\\ IISER Bhopal\\ Bhopal, Madhya Pradesh 462066, India} \email{[email protected]} \author{Varsha Vasudevan} \address{Varsha Vasudevan\\ Department of Mathematics\\ IISER Bhopal\\ Bhopal, Madhya Pradesh 462066, India} \email{[email protected]} \keywords{Metaplectic groups, Dualizing Involutions, Genuine representations} \subjclass{Primary: 22E50} \maketitle \begin{abstract} Let $F$ be a non-Archimedean local field of characteristic zero and $G=\GL(2,F)$. Let $n\geq 2$ be a positive integer and $\widetilde{G}=\widetilde{\GL}(2,F)$ be the $n$-fold metaplectic cover of $G$. Let $\pi$ be an irreducible smooth representation of $G$ and $\pi^{\vee}$ be the contragredient of $\pi$. Let $\tau$ be an involutive anti-automorphism of $G$ satisfying $\pi^{\tau}\simeq \pi^{\vee}$. In this case, we say that $\tau$ is a dualizing involution. A well-known theorem of Gelfand and Kazhdan says that the standard involution $\tau$ on $G$ is a dualizing involution. In this paper, we show that any lift of the standard involution to $\widetilde{G}$ is a dualizing involution if and only if $n=2$. \end{abstract} \section{Introduction} Let $F$ be a non-Archimedean local field of characteristic $0$ and $G=\GL(n,F)$. For $g\in G$, we let $g^{\top}$ denote the transpose of the matrix $g$, and $w_{0}$ the matrix with anti-diagonal entries equal to one and all other entries zero. Let $\tau: G\rightarrow G$ be the map $\tau(g)=w_{0}g^{\top}w_{0}$. Clearly $\tau$ is an anti-automorphism of $G$ such that $\tau^{2}=1$. We call $\tau$ the \textit{standard involution} on $G$. Let $(\pi,V)$ be an irreducible smooth complex representation of $G$. We write $(\pi^{\vee}, V^{\vee})$ for the smooth dual or the contragredient of $(\pi,V)$.
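A useful elementary observation about the standard involution just defined: $\tau(g)=w_{0}g^{\top}w_{0}$ has the same characteristic polynomial as $g$, hence is conjugate to $g$ whenever $g$ is regular semisimple; at the level of characters this conjugacy is what ultimately forces $\pi^{\tau}\simeq \pi^{\vee}$. The following Python sketch (our illustration only, for $2\times 2$ integer matrices; this is of course not how the theorem is proved) checks the characteristic-polynomial identity:

```python
from itertools import product

def char_poly(g):
    # Characteristic polynomial of a 2x2 matrix, encoded as (trace, det).
    (a, b), (c, d) = g
    return (a + d, a * d - b * c)

def tau(g):
    # tau(g) = w0 * g^T * w0 with w0 = antidiag(1, 1); for 2x2 matrices this
    # swaps the two diagonal entries and fixes the off-diagonal ones.
    (a, b), (c, d) = g
    return ((d, b), (c, a))

# tau(g) and g share trace and determinant for every invertible g.
for a, b, c, d in product(range(-3, 4), repeat=4):
    if a * d - b * c != 0:  # g lies in GL(2)
        g = ((a, b), (c, d))
        assert char_poly(tau(g)) == char_poly(g)
```

Since regular semisimple elements are determined up to conjugacy by their characteristic polynomial, and the distribution character is a conjugation-invariant function, this identity is the geometric input behind the Gelfand--Kazhdan theorem stated below.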
For $\beta$ an anti-automorphism of $G$ such that $\beta^{2}=1$, we let $\pi^{\beta}$ be the twisted representation defined by \[\pi^{\beta}(g)=\pi(\beta(g^{-1})).\] The following is an old result of Gelfand and Kazhdan. \begin{theorem}[Gelfand-Kazhdan] Let $\tau$ be the standard involution on $G$. Then \[\pi^{\tau}\simeq \pi^{\vee}.\] \end{theorem} We refer the reader to Theorem 2 in \cite{GelKaj} for a proof of the above result. \\ If $\beta$ is any anti-automorphism of $G$ such that $\beta^{2}=1$ and $\pi^{\beta}\simeq \pi^{\vee}$, then we call $\beta$ a \textit{dualizing involution}. The above result of Gelfand and Kazhdan says that the standard involution $\tau$ on $G$ is a dualizing involution. \\ Let $r \geq 2$ be a positive integer. For $n\geq 2$, let $\widetilde{G}$ be the $r$-fold metaplectic cover of $G$ (see Chapter 0 in \cite{KazPat[2]} for the general definition). It is well known (see Proposition 3.1 in \cite{KazPat}) that the standard involution $\tau$ on $G$ has at least one lift to the metaplectic group. A natural and interesting question that arises is whether the lifts of the standard involution are themselves dualizing involutions. In the case when $\widetilde{G}$ is the $2$-fold metaplectic cover of $G = \GL(2, F)$, it is known that any lift of $\tau$ to $\widetilde{G}$ is dualizing (see \cite{KumAji}, \cite{KumEkt}).\\ In this paper, we address this question when $\widetilde{G}$ is the $n$-fold metaplectic cover of $G = \GL(2, F)$ (explained later) for $n\geq 2$. It follows from Proposition 1 in \cite{Kab} that there are many lifts of the standard involution to $\widetilde{G}$. In particular, for each $\alpha\in F^{\times}$, we have a lift $\sigma_{\alpha}$ (see Section~\ref{main theorem} for the definition) of $\tau$ to $\widetilde{G}$. We show that the involutions $\sigma_{\alpha}$ are dualizing if and only if $n=2$.
To be precise, we prove \\ \begin{theorem}[Main Theorem]\label{main-theorem} Let $\pi$ be any irreducible admissible genuine representation of $\widetilde{G}$. For $\alpha\in F^{\times}$, let $\sigma_{\alpha}$ be the lift of $\tau$ to $\widetilde{G}$. Then \[\pi^{\sigma_{\alpha}}\simeq \pi^{\vee} \] if and only if $n=2$. \end{theorem} The above result raises the question of whether a similar statement holds for the $r$-fold cover $\widetilde{G}$ of $G = \GL(n)$ for $r,n\geq 2$; to be precise, whether a lift of the standard involution $\tau$ of $G$ to $\widetilde{G}$ is a dualizing involution if and only if $r=2$. We hope to address some of these questions in the future.\\ The paper is organized as follows. In Section~\ref{preliminaries}, we recall the basic properties of the Hilbert symbol and use them to establish some further properties that we need. We give the definition of the metaplectic group that we work with and discuss its topology. We recall a result of Harish-Chandra about the character of an admissible representation and give references for analogous results in the setting of metaplectic groups. We prove some basic properties of the character that we need. We also mention a few results of Kable about the lifts of an automorphism. In Section~\ref{sigma and its properties}, we explicitly define lifts of the standard involution and discuss some of their properties. In Section~\ref{main theorem}, we prove the main result of this paper. \section{Preliminaries}\label{preliminaries} In this section, we set up some preliminaries and recall a few results which will be used throughout this paper. \subsection{$n$-th order Hilbert Symbol and its properties} Let $F$ be a non-Archimedean local field of characteristic zero and $F^{\times}$ be the group of non-zero elements of $F$. For $n\in \mathbb{N}$, let $\mu_{n}=\{x\in F^{\times} \mid x^n=1\}$ be the subset of $n$-th roots of unity in $F^{\times}$. Suppose also that $|\mu_{n}|=n$.
We let \[\langle \, , \, \rangle: F^{\times}\times F^{\times}\rightarrow \mu_{n}\] denote the $n$-th order Hilbert symbol. In this section, we recall some fundamental properties of the $n$-th order Hilbert symbol and use it to derive some simple consequences that we need. \\ The following basic properties (see \cite{KazPat[2]}) of the Hilbert symbol are well known. We record them in the proposition below. \begin{proposition}\label{fundamental properties of the Hilbert symbol} Let $a,b,c\in F^{\times}$. The Hilbert symbol satisfies \begin{enumerate} \item[\emph{1)}] $\langle ab,c\rangle = \langle a,c\rangle \langle b,c\rangle$. \vspace{0.1 cm} \item[\emph{2)}] $\langle a,b\rangle \langle b,a\rangle=1$. \vspace{0.1 cm} \item[\emph{3)}] $\langle a,1-a\rangle=1$, if $1-a\in F^{\times}$. \vspace{0.1 cm} \item[\emph{4)}] $\{a \in F^{\times} \mid \langle a,x\rangle = 1$ for all $x\in F^{\times}\}=(F^{\times})^n$, where $(F^{\times})^n=\{a^n \mid a \in F^{\times}\}$. \end{enumerate} \end{proposition} The following properties of the Hilbert symbol can be derived easily from properties $1)$ to $4)$ mentioned in Proposition~\ref{fundamental properties of the Hilbert symbol} above. \begin{proposition}\label{properties of the Hilbert symbol} For $a,b,c \in F^{\times}$, the Hilbert symbol satisfies \begin{enumerate}[label=\emph{(\roman*)}] \item $\langle a,bc\rangle = \langle a,b\rangle \langle a,c\rangle$. \vspace{0.1 cm} \item $\langle a^m,b\rangle =\langle a,b\rangle^m=\langle a,b^m\rangle$ for any $m\in \mathbb{N}$. \vspace{0.1 cm} \item $\langle 1-a,a\rangle=1$ if $a\neq 1$, and $\langle a, 1 \rangle=\langle 1, a\rangle=1$. \vspace{0.1 cm} \item $\langle b,a\rangle=\langle a,b\rangle^{-1}=\langle a^{-1},b\rangle=\langle a,b^{-1}\rangle$. \vspace{0.1 cm} \item $\langle -a,a\rangle=\langle a,-a\rangle=\langle a,a^2\rangle=1$. \vspace{0.1 cm} \item If $\langle a, x \rangle =1$ for all $x\in F^{\times},$ then $a\in (F^{\times})^{n}$. 
In other words, if $a\notin (F^{\times})^{n}$, there exists $x_0 \in F^{\times}$ such that $\langle a, x_0 \rangle \neq 1$. \end{enumerate} \end{proposition} \begin{proof} \hfill \\ \begin{enumerate}[label=(\roman*)] \item Using properties 1) and 2) in Proposition~\ref{fundamental properties of the Hilbert symbol}, we have \begin{align*} \langle a,bc\rangle &= \langle bc, a \rangle^{-1} \\ &=\langle b,a \rangle^{-1} \langle c, a \rangle^{-1} \\ &=\langle a,b\rangle \langle a,c\rangle. \end{align*} \item It follows from (i) and property 1). \\ \item We know that $\langle 1-a,a\rangle \langle a, 1-a\rangle=1$, so the result follows from property 3). By (i), we have $$\langle a,1\rangle=\langle a,1\cdot 1\rangle=\langle a,1\rangle \langle a,1\rangle.$$ Hence $\langle a,1\rangle=1$. Consequently, $\langle 1,a\rangle=\langle a,1\rangle^{-1}=1$. \\ \item Since $\langle a,b\rangle \langle b,a\rangle=1$, we obtain $\langle b,a\rangle=\langle a,b\rangle^{-1}$. Using (iii) and property 1), we have $$\langle a, b \rangle \langle a^{-1}, b \rangle=\langle aa^{-1}, b\rangle =\langle 1, a \rangle =1.$$ Similarly $$\langle a, b \rangle \langle a, b^{-1} \rangle=\langle a, bb^{-1}\rangle =\langle a, 1 \rangle =1.$$ \vspace{0.2 cm} \item If $a=1$, we are done using (iii). For $a\neq 1$, observe that $-a=\frac{1-a}{1-a^{-1}}$. Then \begin{align*} \langle a,-a \rangle &= \left< a, \frac{1-a}{1-a^{-1}}\right >\\ &=\langle a,1-a \rangle \langle a,(1-a^{-1})^{-1} \rangle & \text{(using (i))}\\ &=\langle a,1-a \rangle \langle a^{-1},1-a^{-1} \rangle & \text{(using (iv))}\\ &= 1& \text{(using property 3))}. \end{align*} Thus, we also have $\langle a,a^2\rangle=\langle a, -a\rangle \langle a, -a\rangle =1$. \\ \item Follows from property 4). \end{enumerate} \end{proof} \subsection{The Metaplectic Group} Let $F$ be a non-Archimedean local field of characteristic zero and $G=\GL(2,F)$. For $n\geq 2$, let $\mu_{n}\subset F^{\times}$ be the subset of $n$-th roots of unity in $F$. 
Suppose also that $|\mu_{n}|=n$. Throughout, we write $\widetilde{G}=\widetilde{\GL}(2,F)$ for the $n$-fold metaplectic cover of $G$. In this section, we recall the definition of $\widetilde{G}$ and a few basic facts about its topology. \\ Throughout, we write $\mathfrak{o}$ for the ring of integers in $F$, $\mathfrak{p}$ for the unique maximal ideal in $\mathfrak{o}$, $\varpi$ for a generator of $\mathfrak{p}$ and $k_{F}$ for the finite residue field. We write $\mathrm{ord}$ for the valuation on $F$.\\ The first explicit construction of the $n$-fold metaplectic cover of $G=\GL(2,F)$ was given by Kubota in \cite{Kub} by concretely describing a $2$-cocycle on $G$ taking values in $\mu_{n}$. For $g_{1}, g_{2}, m\in G$, the following simpler version of the Kubota cocycle $c: G\times G \rightarrow \mu_n$, defined as $$ c(g_1, g_2)= \left<\frac{X(g_1g_2)}{X(g_1)}, \frac{X(g_1g_2)}{X(g_2)\Delta(g_1)}\right>, $$ where $$ X(m)= \begin{cases} m_{21}& \text{if}\; m_{21}\neq 0\\m_{22} &\text{otherwise} \end{cases} \quad \textrm{ and } \quad \Delta(g_1)=\det(g_1), $$ was given by Kazhdan and Patterson in \cite{KazPat}. Throughout this paper, we take $\widetilde{G}=\widetilde{\GL}(2,F)$ to be the central extension of $G$ by $\mu_{n}$ determined by the $2$-cocycle $c$. Since $G$ and $\mu_{n}$ are locally compact groups, Mackey's theorem (see Theorem 2 in \cite{Mac}) implies that $\widetilde{G}$ is a locally compact topological group and defines a topological central extension of $G$ by $\mu_{n}$. We call the group $\widetilde{G}$ constructed above ``the'' metaplectic group. \\ It can be shown that the topology on $\widetilde{G}$ has a neighborhood base at the identity consisting of compact open subgroups (see Lemma 3 in \cite{Kab}). Before we give the construction of this basis, we recall a few preliminaries. \\ Let $p:\widetilde{G}\rightarrow G$ be the projection of $\widetilde{G}$ onto $G$. A map $\ell: G\rightarrow \widetilde{G}$ is called a \textit{section} if $p\circ \ell = 1_{G}$.
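For a concrete feel for the cocycle, one can take $F=\mathbb{Q}_p$ with $p$ odd and $n=2$, where the Hilbert symbol has the classical closed form $\langle a,b\rangle=(-1)^{\alpha\beta\varepsilon(p)}\left(\frac{u}{p}\right)^{\beta}\left(\frac{v}{p}\right)^{\alpha}$ for $a=p^{\alpha}u$, $b=p^{\beta}v$ with $u,v$ units and $\varepsilon(p)=\frac{p-1}{2}\bmod 2$. This formula is classical and is not stated in the text; all helper names below are ours. The Python sketch implements it on nonzero rationals, spot-checks the basic properties of the Hilbert symbol, and evaluates $c(g_1,g_2)$ on explicit matrices:

```python
from fractions import Fraction

P = 7  # an odd prime; we work in Q_P with n = 2

def legendre(u):
    # Legendre symbol (u/P) in {+1, -1}, for u prime to P.
    return 1 if pow(u % P, (P - 1) // 2, P) == 1 else -1

def hilbert(a, b):
    # Quadratic Hilbert symbol <a, b> on Q_P^x via the classical formula.
    def split(x):
        m = Fraction(x)
        m = m.numerator * m.denominator  # same square class as x
        v = 0
        while m % P == 0:
            m //= P
            v += 1
        return v, m  # x = P^v * (unit), up to squares
    alpha, u = split(a)
    beta, v = split(b)
    eps = ((P - 1) // 2) % 2
    return (-1) ** (alpha * beta * eps) * legendre(u) ** beta * legendre(v) ** alpha

# Spot-check bimultiplicativity, <a,b><b,a> = 1 and <a,1-a> = 1.
rng = [x for x in range(-9, 10) if x != 0]
for a in rng:
    for b in rng:
        assert hilbert(a, b) * hilbert(b, a) == 1
        for c in (2, 3, P):
            assert hilbert(a * b, c) == hilbert(a, c) * hilbert(b, c)
    if a != 1:
        assert hilbert(a, 1 - a) == 1

# The Kazhdan-Patterson version of the Kubota cocycle on GL(2, Q_P).
def X(g):
    (a, b), (c, d) = g
    return c if c != 0 else d

def det(g):
    (a, b), (c, d) = g
    return a * d - b * c

def mul(g, h):
    (a, b), (c, d) = g
    (e, f), (x, y) = h
    return ((a * e + b * x, a * f + b * y), (c * e + d * x, c * f + d * y))

def cocycle(g1, g2):
    g12 = mul(g1, g2)
    return hilbert(Fraction(X(g12), X(g1)), Fraction(X(g12), X(g2) * det(g1)))

value = cocycle(((1, 2), (3, 4)), ((0, 1), (1, 0)))
assert value in (1, -1)  # the cocycle takes values in mu_2
```

One can push this further and numerically test the $2$-cocycle identity $c(g_1,g_2)\,c(g_1g_2,g_3)=c(g_1,g_2g_3)\,c(g_2,g_3)$ on sample triples; we do not pursue that here.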
Given a subgroup $H$ of $G$, we say that $\widetilde{G}$ \textit{splits} over $H$ if there exists a homomorphism $h: H\rightarrow \widetilde{G}$ such that $p\circ h =1_{H}$.\\ Let $\ell: G\rightarrow \widetilde{G}$ be the map $\ell(g)=(g,1)$. Then $\ell$ is a section and is called the natural or preferred section. For $g=\begin{bmatrix} a & b \\ c & d \end{bmatrix}\in G$, define $s: G \rightarrow \mu_{n}$ as \begin{equation*}\label{definition of s(g)} s(g) = \begin{cases} \langle c, d\Delta(g)\rangle, & \textrm{ if } cd\neq 0 \textrm{ and $ord(c)$ is odd}\\ \hfil 1, & \textrm{ otherwise. } \end{cases} \end{equation*} Let $K_{0}=\GL(2, \mathfrak{o})$ be the maximal compact subgroup in $G$, and for $\lambda\geq 1$, let $K_{\lambda}= 1+\varpi^{\lambda}\M(2,\mathfrak{o})$. It is known that $\{K_{\lambda}\}_{\lambda\geq 0}$ is a neighborhood base at the identity element in $G$ consisting of compact open subgroups. We can use this base to define a neighborhood base at the identity in $\widetilde{G}$. For a suitable choice of $\lambda_{0}$, it can be shown that $\widetilde{G}$ splits over $K_{\lambda}$, where $\lambda\geq \lambda_{0}$ (see Proposition 2.8 in \cite{Gel}, Theorem 2 in \cite{Kub}). Define $\kappa: K_{\lambda_{0}}\rightarrow \widetilde{G}$ as $\kappa(k)=(k,s(k))$. Let $K_{\lambda_{0}}^{*}=\kappa(K_{\lambda_{0}})$ and for $\lambda > \lambda_{0}$, $K^{*}_{\lambda}= K_{\lambda_{0}}^{*}\cap p^{-1}(K_{\lambda})$. It can be shown that $\{K^{*}_{\lambda}\}_{\lambda\geq \lambda_{0}}$ is a neighborhood base at the identity in $\widetilde{G}$. \\ \subsection{Character of an admissible representation} Let $F$ be a non-Archimedean local field of characteristic $0$ and let $G=G(F)$ be a connected reductive algebraic group defined over $F$. We let $(\pi,V)$ be an irreducible smooth complex representation of $G$. It can be shown that such representations are always admissible. For an admissible representation $(\pi,V)$, we can define a suitable notion of a character.
Before we proceed further, we set up some notation and recall an important result of Harish-Chandra. \\ Throughout, we let $G=G(F)$ and let $(\pi,V)$ be an irreducible smooth representation of $G$. We let $C^{\infty}_{c}(G)$ be the space of all locally constant complex-valued functions on $G$ with compact support. For $f\in C^{\infty}_{c}(G)$, we let $\pi(f): V\rightarrow V$ denote the linear operator given by \[\pi(f)v= \int_{G}f(g)\pi(g)vdg, \quad v\in V,\] where the integral is with respect to a Haar measure $\mu_{G}$ on $G$ which we fix throughout. If $(\pi,V)$ is an admissible representation, it can be shown that the trace of the operator $\pi(f)$ is finite for all $f\in C^{\infty}_{c}(G)$. The resulting linear functional \[\Theta_{\pi}: C^{\infty}_{c}(G)\longrightarrow \mathbb{C}\] given by \[\Theta_{\pi}(f)=\tr(\pi(f))\] is called the \textit{distribution character} of $\pi$. It determines the irreducible representation $\pi$ up to equivalence, i.e., if $\Theta_{\pi_{1}}(f)=\Theta_{\pi_{2}}(f)$, $\forall f\in C^{\infty}_{c}(G)$, then $\pi_{1}\simeq \pi_{2}$. \\ We now state a theorem of Harish-Chandra which is used to define the character of the representation $\pi$. \\ Let $G_{\reg}$ be the subset of regular semi-simple elements in $G$. It can be shown that $G_{\reg}$ is an open dense subset of $G$ whose complement has measure zero. The following is a deep result of Harish-Chandra. \begin{theorem}[Harish-Chandra]\label{regularity theorem} There exists a locally integrable complex-valued function $\Theta_{\pi}$ on $G$ such that $\Theta_{\pi}|_{G_{\reg}}$ is a locally constant function on $G_{\reg}$ and satisfies \[\Theta_{\pi}(f)=\int_{G}f(g)\Theta_{\pi}(g)dg, \quad \forall f\in C^{\infty}_{c}(G).\] \vspace{0.2 cm}\\ Also, for $x\in G_{\reg}, y\in G$, we have $$\Theta_{\pi}(yxy^{-1})=\Theta_{\pi}(x).$$ \end{theorem} \vspace{0.2 cm} We refer the reader to \cite{Har} for a proof of the above result.
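When $G$ is a finite group, the Haar integral defining $\Theta_{\pi}(f)$ collapses to a finite sum, and the conclusions of the theorem are elementary. The following sketch is an illustration only: it uses the group $S_3$ realized as $3\times 3$ permutation matrices, with counting measure in place of the Haar measure; this representation is not irreducible, but the formalism $\Theta_{\pi}(f)=\sum_{g}f(g)\tr(\pi(g))$ and the conjugation invariance of $g\mapsto \tr(\pi(g))$ are the same, and all helper names (\texttt{char}, \texttt{theta}, \texttt{e\_K}) are our own.

```python
from fractions import Fraction
from itertools import permutations

# A finite-group analogue of the distribution character (illustration only):
# G = S_3 realized as the group of 3x3 permutation matrices, with counting
# measure playing the role of the Haar measure.

def perm_matrix(p):
    # matrix M with M[i][p[i]] = 1, so M acts on vectors by permuting entries
    return tuple(tuple(1 if p[i] == j else 0 for j in range(3)) for i in range(3))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def transpose(a):
    return tuple(zip(*a))  # the inverse of a permutation matrix

G = [perm_matrix(p) for p in permutations(range(3))]

def char(g):
    """The character g -> tr(pi(g)) (here: the number of fixed points)."""
    return sum(g[i][i] for i in range(3))

def theta(f):
    """Distribution character: Theta_pi(f) = sum over G of f(g) tr(pi(g))."""
    return sum(f(g) * char(g) for g in G)

# e_K for the two-element subgroup K generated by a transposition:
# theta(e_K) = tr(pi(e_K)) equals the dimension of the K-fixed subspace.
K = [perm_matrix(p) for p in [(0, 1, 2), (1, 0, 2)]]
e_K = lambda g: Fraction(1, len(K)) if g in K else 0
```

Here $\theta(e_K)=2$, the dimension of the subspace of $K$-fixed vectors; the analogous projection argument for $\widetilde{G}$ appears in the proof of Lemma~\ref{distribution character-non-zero} below.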
The locally constant function $\Theta_{\pi}$ on $G_{\reg}$ in the above theorem is called the \textit{character} of $\pi$. \\ Let $\widetilde{G}$ be a locally compact topological central extension of $G$ by $\mu_{n}$. Let $\xi: \mu_{n}\rightarrow \mathbb{C}^{\times}$ be any injective character of $\mu_{n}$. Let $(\pi,V)$ be a representation of $\widetilde{G}$. We say that $\pi$ is a \textit{genuine} representation if, for $\epsilon\in \mu_{n}$ and $g\in \widetilde{G}$, we have \[\pi(\epsilon g)=\xi(\epsilon)\pi(g).\] The above result of Harish-Chandra also holds in this setting. To be more precise, it can be shown that the distribution character of an irreducible admissible genuine representation $\pi$ of $\widetilde{G}$ is represented by a locally integrable function $\Theta_{\pi}$ on $\widetilde{G}$ which is locally constant on $\widetilde{G}_{\reg}=p^{-1}(G_{\reg})$ (here $p: \widetilde{G}\rightarrow G$ is the projection) and satisfies \[\Theta_{\pi}(y^{-1}xy)=\Theta_{\pi}(x), \textrm{ for } x\in \widetilde{G}_{\reg}, y\in \widetilde{G}.\] \vspace{0.1 cm} We refer the reader to Theorem 4.3.2 and Corollary 4.3.3 in \cite{Wen} where results about character theory for metaplectic groups are discussed. See also Theorem I.5.1 in \cite{KazPat[2]}, where the result is stated without proof. \\ Before we proceed further, we set up some notation and record a few properties of the character. Throughout, we write $\mu=\mu_{\widetilde{G}}$ for the Haar measure on $\widetilde{G}$.
For $K$ a compact open subgroup of $\widetilde{G}$, we let $e_K\in C_{c}^{\infty}(\widetilde{G})$ be the function \[e_K(g) = \begin{cases} \mu(K)^{-1}, & g \in K \\ 0, & g \notin K \end{cases}.\] For $g \in \widetilde{G}$ and $f \in C_{c}^{\infty}(\widetilde{G})$, the natural action of $\widetilde{G}$ on $C_{c}^{\infty}(\widetilde{G})$ is denoted by $g \cdot f$, where $$(g \cdot f)(x) := f(g^{-1}x).$$ \begin{lemma}\label{central-action} Let $(\pi, V)$ be an irreducible admissible genuine representation of $\widetilde{G}.$ Let $z$ be an element in the center of $\widetilde{G}$ and $\omega_\pi$ be the central character of $\pi$. Then, the following statements hold: \begin{enumerate} \item[\emph{1)}] $\pi(z \cdot f) = \omega_\pi(z) \pi(f),$ for all $f \in C_{c}^{\infty}(\widetilde{G}).$ \smallskip \item[\emph{2)}] $\Theta_{\pi}(z g) = \omega_\pi(z) \Theta_{\pi}( g),$ for all $g \in \widetilde{G}_{\reg}.$ \end{enumerate} \end{lemma} \begin{proof} Let $f \in C_{c}^{\infty}(\widetilde{G})$. We have \begin{equation}\label{action-z-g} \pi(z \cdot f) = \int_{\widetilde{G}} (z \cdot f)(g) \pi(g) dg = \int_{\widetilde{G}} f(z^{-1}g) \pi(g) dg. \end{equation} By applying the change of variable $g \mapsto z g$ in \eqref{action-z-g} and using $\pi(z) = \omega_\pi(z) 1_{V},$ we get $$\pi(z \cdot f) = \omega_\pi(z) \displaystyle\int_{\widetilde{G}} f(g) \pi(g) dg= \omega_\pi(z) \pi(f).$$ This proves 1). For $2)$, using the definition of the distribution character $\Theta_{\pi}$ we have \begin{equation*} \Theta_{\pi}(z \cdot f) = \tr(\pi(z \cdot f)) = \omega_\pi(z) \tr(\pi(f)) = \omega_\pi(z) \Theta_{\pi}(f). \end{equation*} \noindent Thus we get \begin{align*} \Theta_{\pi}(z \cdot f) &= \omega_\pi(z) \Theta_{\pi}( f) \\ &= \displaystyle\int_{\widetilde{G}} f(g) \omega_\pi(z) \Theta_{\pi}(g) dg \\ &= \int_{\widetilde{G}} (z \cdot f)(g) \Theta_{\pi}(g) dg \\ &= \int_{\widetilde{G}} f(z^{-1}g) \Theta_{\pi}(g) dg \\ &= \int_{\widetilde{G}} f(g) \Theta_{\pi}(zg) dg.
\end{align*} Since this holds for every $f \in C_{c}^{\infty}(\widetilde{G})$, it follows that $$\Theta_{\pi}(z g) = \omega_\pi(z) \Theta_{\pi}(g)$$ for all $g \in \widetilde{G}_{\reg}$. \end{proof} \begin{lemma}\label{distribution character-non-zero} Let $(\pi, V)$ be an irreducible admissible genuine representation of $\widetilde{G}.$ Then $$\Theta_{\pi}(f) \neq 0$$ for some $f\in C_{c}^{\infty}(\widetilde{G})$. \end{lemma} \begin{proof} Let $0\neq u \in V$. Since $\pi$ is a smooth representation, there exists a compact open subgroup $K$ of $\widetilde{G}$ such that $u\in V^{K}$, where $V^{K} = \{v \in V \mid \pi(k)v = v, \forall k \in K\}$ is the subspace of $K$-fixed vectors in $V$. Let $f=e_{K}\in C_{c}^{\infty}(\widetilde{G})$. We claim that $\pi(e_{K})\neq 0$ and that it defines a projection of $V$ onto $V^{K}$. Indeed, \[ \pi(e_K)u = \int_{\widetilde{G}} e_K(g) \pi(g)u dg = \mu(K)^{-1} \int_{K} \pi(g)u dg = \mu(K)^{-1} \int_{K} u dg = u. \] It follows that $\pi(e_{K})$ fixes $V^{K}$ pointwise; in particular, $\pi(e_K) \neq 0.$ Now let $v \in V$ be arbitrary and take $v_1 = \pi(e_K)v$. For $k \in K,$ we have \begin{align*} \pi(k) v_1 &= \int_{\widetilde{G}} e_K(g) \pi(kg)v dg \\ &= \mu(K)^{-1} \int_{K} \pi(kg)v dg \\ &= \mu(K)^{-1} \displaystyle\int_{K} \pi(g)v dg ~~~~~~~~~ \mbox{ ($g \mapsto k^{-1}g$)} \nonumber \\ & = \displaystyle\int_{\widetilde{G}} e_K(g) \pi(g)v dg \\ & = v_1. \end{align*} Hence $\pi(e_K)$ defines a projection of $V$ onto $V^K$. Since $\pi$ is admissible, $\dim_{\mathbb{C}}(V^K)$ is finite, and since $0 \neq u \in V^{K}$, we have $\dim_{\mathbb{C}}(V^K) \geq 1$. Thus $$\Theta_{\pi}(e_K) = \tr(\pi(e_K)) = \dim_{\mathbb{C}}(V^K)\neq 0,$$ and the result follows. \end{proof} \begin{lemma}\label{character-non-zero} Let $(\pi, V)$ be an irreducible admissible genuine representation of $\widetilde{G}.$ There exists $g_0 \in G_{\reg}$ such that $$\Theta_{\pi}((g_0, 1)) \neq 0.$$ \end{lemma} \begin{proof} Suppose, on the contrary, that $\Theta_{\pi}((g, 1)) = 0$ for all $g \in G_{\reg}$. For $x=(g,\eta)\in \widetilde{G}_{\reg},$ we have \[\Theta_{\pi}(x)=\Theta_{\pi}((g,\eta))= \omega_{\pi}(\eta)\Theta_{\pi}((g,1))=0.
\] Using Theorem~\ref{regularity theorem}, we have $\Theta_{\pi}(f)= 0$ for every $f \in C_{c}^{\infty}(\widetilde{G})$. This contradicts Lemma~\ref{distribution character-non-zero}. \end{proof} \subsection{Some results about lifts of an automorphism of $G$ to $\widetilde{G}$}\label{results we need} We recall a few results from \cite{Kab} which we need in proving our main result. Let $\widetilde{G}$ be a central extension of a group $G$ by an abelian group $A$. Let $p: \widetilde{G}\rightarrow G$ be the projection map, $s: G\rightarrow \widetilde{G}$ be a section of $p$ and $\tau$ be the 2-cocycle representing the class of this central extension in $\coh ^{2}(G,A)$ with respect to the section $s$. If $f: G\rightarrow G$ is an automorphism (resp. anti-automorphism) of $G$, then a lift of $f$ is an automorphism (resp. anti-automorphism) $\tilde{f}: \widetilde{G}\rightarrow \widetilde{G}$ such that \[p(\tilde{f}(g))=f(p(g)), \forall g\in \widetilde{G}.\] Let $\mathcal{L}(f)$ denote the set of all lifts of $f$. The group $\Aut(G)$ acts on $\coh ^{2}(G,A)$ by $f[\sigma]= [\sigma\circ (f^{-1}\times f^{-1})]$ for any 2-cocycle $\sigma$. \begin{proposition}\label{description of the set of liftings} The set $\mathcal{L}(f)$ is described in terms of this action as follows: \begin{enumerate} \item[1)] The set $\mathcal{L}(f)$ is non-empty if and only if $f[\tau]=[\tau]$. \item[2)] If $\mathcal{L}(f)$ is non-empty, then $\mathcal{L}(f)$ is a principal homogeneous space for the group $\Hom(G,A)$ under the action \[(\phi.\tilde{f})(g)= \phi(p(g))\tilde{f}(g).\] \end{enumerate} \end{proposition} We also need the following result (see Corollary 1 in \cite{Kab} for a proof), which discusses the continuity properties of the lift in the case when $G=\GL(m,F)$. We state it below for clarity. \begin{proposition}\label{continuity of lift} Let $F$ be a non-Archimedean local field and suppose that the group of $n^{th}$ roots of unity in $F$ has order $n$.
Let $\langle \, , \, \rangle$ be the $n^{th}$ order Hilbert symbol on $F$ and $\widetilde{\GL}(m)$ the corresponding metaplectic group. Then the lift of any topological automorphism of $\GL(m)$ to $\widetilde{\GL}(m)$ is also a topological automorphism. \end{proposition} \section{Lift of the standard involution}\label{sigma and its properties} Let $G=\GL(2,F)$ and $\widetilde{G}=\widetilde{\GL}(2,F)$ be the $n$-fold metaplectic cover of $G$. Let $\tau$ be the standard involution on $G$. In this section, we define a lift $\sigma$ of $\tau$ and show that it is an involutive anti-automorphism of $\widetilde{G}$. We also discuss an important property of this lift (see Theorem~\ref{conjugacy property of sigma}) which is crucial in proving the main result of this paper. \\ For $\lambda\in F^{\times}$ and $g=\begin{bmatrix} a & b \\ c & d\end{bmatrix}\in G$, we let $u(\lambda)=\begin{bmatrix*}[r] \lambda & 0 \\ 0 & -\lambda\end{bmatrix*}$ and $\Delta(g)=\det(g)$. It is easy to see that \[\tau(g)= w_{0}g^{\top}w_{0}= u(\Delta(g))g^{-1}u(1).\] For $\epsilon\in \mu_{n}$, we denote the element $(I_{2},\epsilon)\in \widetilde{G}$ by $\epsilon$. Let $\tilde{u}(\lambda)=(u(\lambda),1)$ and $\tilde{z}(\lambda)= (\lambda I_{2}, 1)$ where $I_{2}$ is the $2\times 2$ identity matrix. We extend $\Delta$ to $\widetilde{G}$ by $\Delta((g,\epsilon))=\Delta(g)$. For $h=(g,\epsilon)\in \widetilde{G}$ define \begin{equation}\label{definition of sigma}\sigma(h)=\tilde{u}(\Delta(h))h^{-1}\tilde{u}(1).\end{equation} Before discussing the properties of $\sigma$, we first prove a proposition that will be used repeatedly in what follows. \begin{proposition}\label{properties to check sigma is an involution} Let $\lambda, \lambda_1, \lambda_2 \in F^{\times}$, and $h\in \widetilde{G}$. Then \begin{enumerate}[label=\emph{(\roman*)}] \item $h\tilde{z}(\lambda)= \langle \Delta(h), \lambda\rangle\; \tilde{z}(\lambda)h$.
\vspace{0.1 cm} \item $\tilde{z}(\lambda_{1})\tilde{z}(\lambda_{2})= \langle \lambda_{1}, \lambda_{2}\rangle \tilde{z}(\lambda_{1}\lambda_{2})$. \vspace{0.1 cm} \item $\tilde{u}(\lambda_{1})\tilde{u}(\lambda_{2})= \langle \lambda_{1}, -\lambda_{2}\rangle \tilde{z}(\lambda_{1}\lambda_{2})$. \vspace{0.1 cm} \item $\tilde{u}(\lambda)^{-1}= \tilde{u}(\lambda^{-1})$. \vspace{0.1 cm} \item $\tilde{u}(\lambda_{1})\tilde{z}(\lambda_{2})= \langle\lambda_{1}, \lambda_{2}\rangle\tilde{u}(\lambda_{1}\lambda_{2})$. \end{enumerate} \end{proposition} \begin{proof} \hfill \\ \begin{enumerate}[label=(\roman*)] \item Let $h=\left(\begin{bmatrix*}[r] a & b\\ c & d\end{bmatrix*}, \epsilon\right)$. It is easy to see that \[\tilde{z}(\lambda)h=\Bigg(\begin{bmatrix*}[r] a\lambda & b\lambda \\ c\lambda & d\lambda\end{bmatrix*}, c\left (\begin{bmatrix*}[r] \lambda & 0 \\ 0 & \lambda\end{bmatrix*}, \begin{bmatrix} a & b \\ c & d\end{bmatrix} \right )\epsilon \Bigg),\] and \[h\tilde{z}(\lambda)=\Bigg(\begin{bmatrix*}[r] a\lambda & b\lambda \\ c\lambda & d\lambda\end{bmatrix*}, c\left ( \begin{bmatrix} a & b \\ c & d\end{bmatrix}, \begin{bmatrix*}[r] \lambda & 0 \\ 0 & \lambda\end{bmatrix*} \right )\epsilon \Bigg).\] Note that $c\left (\begin{bmatrix*}[r] \lambda & 0 \\ 0 & \lambda\end{bmatrix*}, \begin{bmatrix} a & b \\ c & d\end{bmatrix} \right )=\begin{cases} \langle \lambda, c\rangle , & c\neq 0 \\ \langle \lambda, d\rangle, & c=0.\end{cases}$\\ Therefore \begin{align*} \langle \Delta(h), \lambda\rangle c\left (\begin{bmatrix*}[r] \lambda & 0 \\ 0 & \lambda\end{bmatrix*}, \begin{bmatrix} a & b \\ c & d\end{bmatrix} \right )&=\begin{cases} \langle \Delta(h), \lambda\rangle \langle \lambda, c\rangle , & c\neq 0 \\ \langle \Delta(h), \lambda\rangle \langle \lambda, d\rangle, & c=0\end{cases}\\ &=\begin{cases} \langle \lambda, \Delta(h)^{-1}c\rangle, & c\neq 0\\ \langle \lambda, \Delta(h)^{-1}d\rangle, & c= 0 \end{cases}\\ &=c\left (\begin{bmatrix} a & b \\ c & d\end{bmatrix}, 
\begin{bmatrix*}[r] \lambda & 0 \\ 0 & \lambda\end{bmatrix*} \right ). \end{align*} The result follows. \\ \item Since $c(z(\lambda_1), z(\lambda_2))=c(\lambda_1 I_2, \lambda_2 I_2)=\langle \lambda_1, \lambda_2\rangle$, we have \[\tilde{z}(\lambda_{1})\tilde{z}(\lambda_{2})= (\lambda_1 \lambda_2 I_2,\langle \lambda_1, \lambda_2\rangle )= \langle \lambda_{1}, \lambda_{2}\rangle \tilde{z}(\lambda_{1}\lambda_{2}).\] \item Using \[c(u(\lambda_1),u(\lambda_2))=c\left (\begin{bmatrix*}[r] \lambda_1 & 0 \\ 0 & -\lambda_1\end{bmatrix*}, \begin{bmatrix} \lambda_2 & 0 \\ 0 & -\lambda_2 \end{bmatrix} \right )= \langle \lambda_1, -\lambda_2\rangle,\] we get \[\tilde{u}(\lambda_{1})\tilde{u}(\lambda_{2})= (\lambda_{1}\lambda_{2} I_2, \langle \lambda_1, -\lambda_2\rangle)= \langle \lambda_{1}, -\lambda_{2}\rangle \tilde{z}(\lambda_{1}\lambda_{2}).\] \item We have $\tilde{u}(\lambda)=(u(\lambda),1)$. Thus, \begin{align*} \tilde{u}(\lambda)^{-1}&=\big(u(\lambda)^{-1},c(u(\lambda),u(\lambda)^{-1})^{-1}\big)\\ &= \big(u(\lambda)^{-1}, c(u(\lambda),u(\lambda^{-1}))^{-1}\big)\\ &=\big(u(\lambda^{-1}),\langle \lambda, -\lambda^{-1} \rangle^{-1}\big)\\ &= \big(u(\lambda^{-1}),1\big)\\ &=\tilde{u}(\lambda^{-1}).\\ \end{align*} \item It is easy to see that \[c(u(\lambda_1),z(\lambda_2))=c\left (\begin{bmatrix*}[r] \lambda_1 & 0 \\ 0 & -\lambda_1\end{bmatrix*}, \begin{bmatrix} \lambda_2 & 0 \\ 0 & \lambda_2 \end{bmatrix} \right )= \langle \lambda_2, \lambda_1^{-1}\rangle=\langle \lambda_1, \lambda_2\rangle.\] Thus we have \[\tilde{u}(\lambda_{1})\tilde{z}(\lambda_{2})= \Bigg(\begin{bmatrix*}[r] \lambda_1\lambda_2 & 0 \\ 0 & -\lambda_1\lambda_2\end{bmatrix*}, \langle\lambda_{1}, \lambda_{2}\rangle\Bigg)=\langle\lambda_{1}, \lambda_{2}\rangle\tilde{u}(\lambda_{1}\lambda_{2}).\] \end{enumerate} \end{proof} Let $\sigma: \widetilde{G}\rightarrow \widetilde{G}$ be defined as in \eqref{definition of sigma}. We record some properties satisfied by $\sigma$ in the lemma below.
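For $n = 2$, the cocycle values computed above can be checked numerically. The following sketch is an illustration only: it assumes the classical valuation--Legendre-symbol formula for the quadratic Hilbert symbol on $\mathbb{Q}_p$ with $p$ an odd prime (stated here without proof, and with $p = 7$ fixed for concreteness), and implements the simplified Kubota cocycle $c$ recalled in the previous section; the helper names (\texttt{val\_unit}, \texttt{kubota}) are our own.

```python
from fractions import Fraction

P = 7  # an odd prime, fixed for the whole computation

def val_unit(x):
    """Write a nonzero rational x as P**v * u with u a p-adic unit."""
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % P == 0:
        n //= P
        v += 1
    while d % P == 0:
        d //= P
        v -= 1
    return v, Fraction(n, d)

def legendre(u):
    """Legendre symbol (u/P) of a p-adic unit u (a rational of valuation 0)."""
    r = (u.numerator * pow(u.denominator, -1, P)) % P
    return 1 if pow(r, (P - 1) // 2, P) == 1 else -1

def hilbert(a, b):
    """Quadratic Hilbert symbol <a,b> on Q_p for odd p, via the classical
    formula <p^al u, p^be v> = (-1)^(al*be*(p-1)/2) (u/p)^be (v/p)^al."""
    al, u = val_unit(a)
    be, v = val_unit(b)
    sign = -1 if (al * be * ((P - 1) // 2)) % 2 else 1
    return sign * legendre(u) ** (be % 2) * legendre(v) ** (al % 2)

def matmul(g, h):
    return [[sum(Fraction(g[i][k]) * h[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def det(m):
    return Fraction(m[0][0]) * m[1][1] - Fraction(m[0][1]) * m[1][0]

def X(m):
    # bottom-left entry if nonzero, otherwise bottom-right, as in the text
    return Fraction(m[1][0]) if m[1][0] != 0 else Fraction(m[1][1])

def kubota(g1, g2):
    """c(g1,g2) = < X(g1 g2)/X(g1), X(g1 g2)/(X(g2) Delta(g1)) >."""
    g12 = matmul(g1, g2)
    return hilbert(X(g12) / X(g1), X(g12) / (X(g2) * det(g1)))
```

Beyond the identities of the proposition (for instance $c(\lambda_1 I_2,\lambda_2 I_2)=\langle\lambda_1,\lambda_2\rangle$ and $c(u(\lambda_1),u(\lambda_2))=\langle\lambda_1,-\lambda_2\rangle$, which for $n=2$ can be compared directly), the same functions can be used to confirm the $2$-cocycle identity $c(g_1g_2,g_3)\,c(g_1,g_2)=c(g_1,g_2g_3)\,c(g_2,g_3)$ on sample matrices.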
\begin{lemma}\label{Properties of sigma} The map $\sigma$ satisfies the following properties: \begin{enumerate} \item[\emph{(1)}] $\sigma$ is an anti-automorphism. \vspace{0.1 cm} \item[\emph{(2)}] $\sigma(\epsilon)=\epsilon^{-1}$ for any $\epsilon\in \mu_{n}$. \vspace{0.1 cm} \item[\emph{(3)}] $\sigma$ is an involution. \vspace{0.1 cm} \item[\emph{(4)}] $\sigma(h^{-1})=\sigma(h)^{-1}$ for all $h\in \widetilde{G}$. \vspace{0.1 cm} \item[\emph{(5)}] $\sigma$ is a lift of $\tau$. \end{enumerate} \end{lemma} \begin{proof} \hfill \\ \begin{enumerate} \item We use properties (i), (iii), and (v) from Proposition \ref{properties to check sigma is an involution}. For $h_{1}, h_{2}\in \widetilde{G}$, we have \begin{align*} \sigma(h_{1}h_{2}) &= \tilde{u}(\Delta(h_{1}h_{2}))(h_{1}h_{2})^{-1}\tilde{u}(1)\\ &= \tilde{u}(\Delta(h_{2})\Delta(h_{1}))(h_{1}h_{2})^{-1}\tilde{u}(1)\\ &= \left<\Delta(h_{1}), \Delta(h_{2})\right>\tilde{u}(\Delta(h_{2}))\tilde{z}(\Delta(h_{1}))h_{2}^{-1}h_{1}^{-1}\tilde{u}(1)\\ &= \tilde{u}(\Delta(h_{2}))h_{2}^{-1}\tilde{z}(\Delta(h_{1}))h_{1}^{-1}\tilde{u}(1)\\ &= \tilde{u}(\Delta(h_{2}))h_{2}^{-1}\tilde{u}(1)\tilde{u}(\Delta(h_{1}))h_{1}^{-1}\tilde{u}(1)\\ &= \sigma(h_{2})\sigma(h_{1}). \end{align*} \item Let $\epsilon\in \mu_{n}$. Identifying $\epsilon$ with $(I_2,\epsilon)$, we obtain $\Delta(\epsilon)=1$. We have \begin{align*} \sigma(\epsilon) &= \tilde{u}(\Delta(\epsilon))\epsilon^{-1}\tilde{u}(1)\\ &=(u(\Delta(\epsilon)), 1)(I_2,\epsilon)^{-1}(u(1),1)\\ &= \left (u(1), 1 \right )\left (I_2, \epsilon^{-1} \right )\left (u(1), 1 \right )\\ &= \left (u(1), \epsilon^{-1} \right )\left (u(1), 1 \right) & \text{(since $c\left (u(1), I_2 \right )=1$)}\\ &= (I_2, \epsilon^{-1}) & \text{(since $c\left (u(1), u(1) \right )=1$)}\\ &= \epsilon^{-1}. \end{align*} \item For $h=(g, \epsilon)\in \widetilde{G}$, observe that $\Delta(\sigma(h))=\Delta(u(\Delta(g))g^{-1}u(1))= \Delta(h)$.
Using the properties (i)-(iv) mentioned in Proposition~\ref{properties to check sigma is an involution}, we obtain \begin{align*} \sigma(\sigma(h)) &= \sigma( \tilde{u}(\Delta(h))h^{-1}\tilde{u}(1))\\ &= \tilde{u}(\Delta(h))\tilde{u}(1)h\tilde{u}(\Delta(h)^{-1})\tilde{u}(1)\\ &= \langle \Delta(h), -1\rangle \tilde{z}(\Delta(h))h\langle \Delta(h)^{-1}, -1\rangle \tilde{z}(\Delta(h)^{-1})\\ &= \langle \Delta(h), \Delta(h)\rangle h \tilde{z}(\Delta(h))\tilde{z}(\Delta(h)^{-1})\\ &= \langle \Delta(h), \Delta(h)\rangle h\langle \Delta(h), \Delta(h)^{-1}\rangle \\ &= h. \end{align*} \item It follows directly from the fact that $\sigma$ is an anti-automorphism. \\ \item It is enough to consider the case when $h=(g,1)$, with $g\in G$. Indeed, if we have $\sigma(g,1)=(x,\xi)$, then $\sigma(g,\epsilon) = (x, \epsilon^{-1}\xi)$, using (1) and (2). This implies that $x= (p\circ \sigma)(g,\epsilon) = (p\circ\sigma)(g,1)$. To show that $\sigma$ is a lift of $\tau$, it is enough to show that $(p\circ \sigma)(h)= (\tau\circ p)(h)$ for $h=(g, 1) \in \widetilde{G}$. We now evaluate $\sigma$, depending on whether $c$ is non-zero or zero. If $c\neq 0$, we have \begin{align*} \sigma(h) &=\sigma(g,1)\\ &= (u(\Delta(g)), 1)(g^{-1}, 1)(u(1), 1)\\ &= (u(\Delta(g))g^{-1}, \left<\Delta(g), c\right>)(u(1), 1)\\ &= (u(\Delta(g))g^{-1}u(1), \left<\Delta(g), c\right>)\\ &= (\tau(g), \left<\Delta(g), c\right>), \end{align*} and if $c=0$, we have \begin{align*} \sigma(h) &=\sigma(g,1)\\ &=(u(ad), 1)(g^{-1}, \left<a, d\right>)(u(1), 1)\\ &= (u(ad)g^{-1}, \left<d, a\right>\left<d, d\right>\left<a, d\right>)(u(1), 1)\\ &= (u(ad)g^{-1}, \left<d, d\right>)(u(1), 1)\\ &= (u(ad)g^{-1}u(1), \left<d, d\right>\left<d, -1\right>)\\ &= (\tau(g), 1). \end{align*} In both cases, we see that $(p\circ \sigma)(h)= \tau(g)= (\tau\circ p)(h)$. Hence the result follows.
\end{enumerate} \end{proof} \begin{remark}\label{sigma-h-computation} We have used the following computations in part (5) of Lemma \ref{Properties of sigma}. Let $g = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\in G$ and $h = (g, 1)\in \widetilde{G}$. Then, \begin{enumerate} \item[(1)] $c(g, g^{-1}) = c(g^{-1}, g)= \begin{cases} 1, & \mbox{if $c \neq 0$} \\ \langle d, a \rangle, & \mbox{if $c =0$} \end{cases}$. \vspace{0.1 cm}\\ \item[(2)] $h ^{-1} = (g^{-1}, c(g, g^{-1})^{-1} ) = \begin{cases} (g^{-1}, 1), & \mbox{if $c \neq 0$} \\ (g^{-1}, \langle a, d \rangle), & \mbox{if $c =0$} \end{cases}$. \vspace{0.1 cm}\\ \item[(3)] $\sigma(h) = \begin{cases} (\tau(g), \langle \triangle(h), c \rangle ), & \mbox{if $c \neq 0$} \\ (\tau(g), 1 ), & \mbox{if $c=0$} \end{cases}$. \end{enumerate} \end{remark} The involution $\sigma$ satisfies a certain conjugation property which plays an important role in the proof of our main result. Before we proceed further, we record a few more lemmas that we need to establish this property of $\sigma$. \\ \begin{lemma}\label{properties-cocycles-used-proof-Gtilde-equal-S} For $x, g\in G$, let \[\lambda = c(xg, x^{-1}) c(x, g) c(x, x^{-1})^{-1},\] and \[\beta = c(x, x^{-1}) c(g^{-1} x ^{-1}, x) c(g, x^{-1})^{-1} c(x, g x^{-1})^{-1} c(x, g^{-1} x^{-1})^{-1}.\] Then, \begin{enumerate} \item[\emph{(1)}] $c(x g^{-1} x^{-1}, x g x ^{-1}) = c(g^{-1}, g) \beta.$ \smallskip \item[\emph{(2)}] $\lambda \beta = c(g^{-1} x^{-1}, x) c(x, g^{-1} x ^{-1})^{-1}.$ \end{enumerate} \end{lemma} \begin{proof} Using the cocycle property, we have for $g_{1}, g_{2}, g_{3}\in G$ \begin{equation}\label{associativity-cocycles} c(g_1 g_2, g_3) c(g_1, g_2) = c(g_1, g_2 g_3) c(g_2, g_3). 
\end{equation} For (1), taking $g_1 = x, g_2 = g^{-1} x^{-1}$ and $g_3 = xg x^{-1}$ in \eqref{associativity-cocycles}, we get \begin{align}\label{properties-cocycles-used-proof-Gtilde-equal-S-eq1} c(x g^{-1} x^{-1}, x g x ^{-1}) = & c(x, x^{-1}) c(g^{-1} x^{-1}, xg x^{-1}) c(x, g^{-1} x^{-1})^{-1}. \end{align} Applying \eqref{associativity-cocycles} again with $g_1 = g^{-1} x^{-1}, g_2 = x$ and $g_3 = g x^{-1}$ and then again with $g_1 = g^{-1}, g_2 = g$ and $g_3 = x^{-1}$ we get the following equalities: \begin{align}\label{properties-cocycles-used-proof-Gtilde-equal-S-eq2} c(g^{-1} x^{-1}, xg x^{-1}) = & c(g^{-1} , g x^{-1}) c(g^{-1} x^{-1}, x) c(x, g x^{-1})^{-1} \nonumber \vspace{0.1 cm}\\ = & [ c(g^{-1}, g) c( g, x^{-1})^{-1}] c(g^{-1} x^{-1}, x) c(x, g x^{-1})^{-1}. \end{align} Now, (1) follows by \eqref{properties-cocycles-used-proof-Gtilde-equal-S-eq1} and \eqref{properties-cocycles-used-proof-Gtilde-equal-S-eq2}. \\ Proof of (2) is a direct application of \eqref{associativity-cocycles} with $g_1 = x, g_2 = g$ and $g_3 = x^{-1}.$ \end{proof} \begin{lemma}\label{g-conjugate-g-2 or g-3} Let $g = \begin{bmatrix} \alpha & \beta \\ 0 & \delta \end{bmatrix} \in G.$ Then, $g$ is conjugate to either $\begin{bmatrix} \alpha & 0 \\ 0 & \delta \end{bmatrix}$ or $\begin{bmatrix} \alpha & 1 \\ 0 & \alpha \end{bmatrix}.$ \end{lemma} \begin{proof} If $\alpha \neq \delta,$ taking $x = \left[\begin{array}{cc} 1 & \beta/(\alpha- \delta) \\ 0 & -1 \end{array}\right]$ we get $x g x^{-1} = \begin{bmatrix} \alpha & 0 \\ 0 & \delta \end{bmatrix}.$ Suppose that $\alpha = \delta.$ If $\beta =0$, then we have $g = \alpha I_2.$ On the other hand, if $\beta \neq 0$, choosing $x = \begin{bmatrix} 1 & 0 \\ 0 & \beta \end{bmatrix}$ we have $x g x^{-1} = \begin{bmatrix} \alpha & 1 \\ 0 & \alpha \end{bmatrix}.$ \end{proof} Let $\sigma$ be defined as in \eqref{definition of sigma}. 
Consider the set \begin{equation}\label{Definition of the set S} S:= \{h = (g, \eta) \in \widetilde{G}: \sigma(h) = c(g^{-1}, g)^{-2} \eta^{-2} x h x^{-1}, \mbox{for some $x \in \widetilde{G}$}\}. \end{equation} \begin{lemma}\label{closed-under-epsilon} Let $h \in S$ and $\epsilon\in \mu_{n}$. Then $$\epsilon h \in S.$$ That is, $S$ is closed under multiplication by $\epsilon\in \mu_{n}$. \end{lemma} \begin{proof} Consider $e = (I_2, 1)\in \widetilde{G}$. Since $c(I_2, I_2) =1$, we have $e \in S$. Thus, $S$ is non-empty. Let $h = (g, \eta) \in S.$ Choose $z \in \widetilde{G}$ such that $\sigma(h) = c(g^{-1}, g)^{-2} \eta^{-2} z h z^{-1}.$ Then, we have \begin{align*} \sigma (\epsilon h) &= \epsilon^{-1} \sigma(h) \vspace{0.1 cm}\\ &= \epsilon^{-1} \left[ c(g^{-1}, g)^{-2} \eta^{-2} z h z^{-1}\right] \vspace{0.1 cm}\\ &= c(g^{-1}, g)^{-2} (\eta \epsilon)^{-2} z (\epsilon h) z^{-1}. \end{align*} Hence $\epsilon h \in S$. \end{proof} \begin{lemma}\label{h_i-arein-S} Consider the matrices $g_{1}, g_{2}, g_{3}$ in $G$ defined as follows: \\ \[g_1 = \begin{bmatrix} 0 & v \\ 1 & w \end{bmatrix} \ (v\in F^{\times},\, w\in F), \quad g_2 = \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \ (a, d\in F^{\times},\, a \neq d) \quad \textrm{ and } \quad g_3 = \begin{bmatrix} b & 1 \\ 0 & b \end{bmatrix} \ (b\in F^{\times}).\] For $\epsilon \in \mu_n$ and $i \in \{1,2,3\}$, we have $(g_i, \epsilon) \in S$. \end{lemma} \begin{proof} For $i \in \{1,2,3\},$ write $h_i = (g_i, 1).$ By Lemma \ref{closed-under-epsilon}, it is enough to show that $h_i \in S.$ Consider $h_{1}=(g_{1},1)$.
Let $y= \left[\begin{array}{cc} 1 & 0 \\ -w/v & 1 \end{array}\right]$ and $\widetilde{y} = (y, 1).$ It is easy to see that \[\tau(g_1) = \begin{bmatrix} w & v \\ 1 & 0 \end{bmatrix} = y g_1 y^{-1}.\] A simple calculation gives $c(y g_1, y^{-1})= c(y, g_1)= c(y, y^{-1})=1.$ Since $\langle \triangle(h_1), 1 \rangle = c(g_1^{-1}, g_1)= 1$, using (3) of Remark \ref{sigma-h-computation}, it follows that $$\widetilde{y} h_1 \widetilde{y}^{-1} = (\tau(g_1), 1)=\sigma(h_1)$$ and $h_1\in S$. \\ Consider $h_{2}=(g_{2},1)$. Let $z= \begin{bmatrix} 0 & d \\ 1 & 0 \end{bmatrix}$ and set $\widetilde{z} = (z, 1).$ Clearly we have \[z g_2 z^{-1} = \begin{bmatrix} d & 0 \\ 0 & a \end{bmatrix} = \tau(g_2).\] It is easy to see that $c(z g_2, z^{-1}) = c(z , z^{-1})=1$, $c( g_2^{-1}, g_2)= \langle d, a \rangle$ and $c(z, g_2) = \langle d, a \rangle ^{2}$. Using (3) of Remark \ref{sigma-h-computation}, we get $$\widetilde{z} h_2 \widetilde{z}^{-1} = (\tau(g_2), \langle d, a \rangle ^2 )= c( g_2^{-1}, g_2)^{2}(\tau(g_2), 1)= c( g_2^{-1}, g_2)^{2}\sigma(h_{2}).$$ Thus $h_{2}\in S$. \\ We now show that $h_3=(g_{3},1) \in S$. It is clear that $\tau(g_3) = g_3$. By (3) of Remark \ref{sigma-h-computation}, we have $\sigma(h_3) = (g_3, 1)$. Since $c( g_3^{-1}, g_3)= \langle b, b \rangle$ and $\langle b, b \rangle^2 =1,$ we get $\sigma(h_3) = c( g_3^{-1}, g_3)^{-2} h_3.$ \end{proof} \begin{lemma}\label{working-equation-prove-Gtilde-equaltoS} Let $g_1,$ $g_2$ and $g_3$ be as in Lemma~\ref{h_i-arein-S}. Let $g \in G$, and let $h = (g,1)\in \widetilde{G}$. Suppose that there exists $x\in G$ such that $x g x^{-1} = g_i$, for some $i \in \{1,2,3\}$. Let $A = c(x^{-1} g_i^{-1}, x) c(x, x^{-1}g_i^{-1})^{-1}$. Then, for $s \in F^{\times}$ there exists $z \in \widetilde{G}$ such that \[\sigma(h) = \langle s, \triangle(h) \rangle A^{-2} c(g^{-1}, g)^{-2} z h z^{-1}.\] \end{lemma} \begin{proof} Let $\widetilde{x} = (x, 1)$ and $\lambda = c(xg, x^{-1}) c(x, g) c(x, x^{-1})^{-1}$.
Since $xgx^{-1}=g_{i}$, we have \begin{equation}\label{Gtilde-equal-S-eq1} \widetilde{x} h \widetilde{x}^{-1} = (xg x^{-1}, \lambda) = (g_i, \lambda). \end{equation} Let $s \in F^{\times}.$ Using (i) of Proposition~\ref{properties to check sigma is an involution}, we have \begin{align}\label{Gtilde-equal-S-eq2} (g_i, \lambda) &= \widetilde{x} \widetilde{z}(s)^{-1} \widetilde{z}(s) h \widetilde{z}(s)^{-1} \widetilde{z}(s) \widetilde{x}^{-1} \nonumber \vspace{0.1 cm} \\ &= \langle s, \triangle(h) \rangle \widetilde{x} \widetilde{z}(s)^{-1} h \widetilde{z}(s) \widetilde{x}^{-1} \nonumber \vspace{0.1 cm}\\ &= \langle s, \triangle(h) \rangle u h u^{-1}; \mbox{ where $u = \widetilde{x} \widetilde{z}(s)^{-1}$.} \end{align} By Lemma \ref{h_i-arein-S}, $(g_i, \lambda) \in S.$ Choose $y \in \widetilde{G}$ such that \begin{equation}\label{Gtilde-equal-S-eq4} \sigma((g_i, \lambda)) = c(g_i^{-1}, g_i)^{-2} \lambda^{-2} y (g_i, \lambda) y^{-1}. \end{equation} Applying $\sigma$ on both sides to \eqref{Gtilde-equal-S-eq2}, and using Lemma \ref{Properties of sigma} and \eqref{Gtilde-equal-S-eq4}, we get \begin{equation}\label{Gtilde-equal-S-eq5} c(g_i^{-1}, g_i)^{-2} \lambda^{-2} y (g_i, \lambda) y^{-1} = \langle s, \triangle(h) \rangle^{-1} \sigma(u)^{-1} \sigma(h) \sigma(u). \end{equation} Simplifying \eqref{Gtilde-equal-S-eq5}, we have \begin{align}\label{Gtilde-equal-S-eq6} \sigma(h) = & \langle s, \triangle(h) \rangle c(g_i^{-1}, g_i)^{-2} \lambda^{-2} \sigma(u) y (g_i, \lambda) y^{-1} \sigma(u)^{-1} \nonumber \vspace{0.1 cm} \\ = & \langle s, \triangle(h) \rangle c(g_i^{-1}, g_i)^{-2} \lambda^{-2} \sigma(u) y \widetilde{x} \left[\widetilde{x}^{-1}(g_i, \lambda) \widetilde{x}\right] \widetilde{x}^{-1} y^{-1} \sigma(u)^{-1}. 
\end{align} Write $z = \sigma(u) y \widetilde{x}.$ By \eqref{Gtilde-equal-S-eq1}, $\widetilde{x}^{-1}(g_i, \lambda) \widetilde{x} = h.$ Thus, \eqref{Gtilde-equal-S-eq6} gives: \begin{equation*}\label{Gtilde-equal-S-eq7} \sigma(h) = \langle s, \triangle(h) \rangle c(g_i^{-1}, g_i)^{-2} \lambda^{-2} z h z^{-1}. \end{equation*} Applying Lemma~\ref{properties-cocycles-used-proof-Gtilde-equal-S}, the result follows. \end{proof} \begin{theorem}\label{conjugacy property of sigma} For $h = (g, \eta) \in \widetilde{G},$ we have $$\sigma(h) = c(g^{-1}, g)^{-2} \eta^{-2} z h z^{-1},$$ for some $z \in \widetilde{G}.$ \end{theorem} \begin{proof} Let $g \in G$ and set $h = (g,1).$ By Lemma \ref{closed-under-epsilon}, it is enough to show that $h \in S$. Suppose that $g = \begin{bmatrix} \alpha & 0 \\ 0 & \alpha \end{bmatrix}$, for some $\alpha \in F^{\times}.$ Take $u= \begin{bmatrix} 0 & \alpha \\ 1 & 0 \end{bmatrix}$, and write $\widetilde{u} = (u,1).$ Since $\tau(g) = g,$ by (3) of Remark \ref{sigma-h-computation}, we get $\sigma(h) = h.$ It is easy to see that $c(u g, u^{-1}) = 1= c(u , u^{-1})$ and $c(u, g) = \langle \alpha, \alpha \rangle^2 =1.$ Hence, $\widetilde{u} h \widetilde{u}^{-1} = h.$ Observe that $c(g^{-1} , g) = \langle \alpha, \alpha \rangle.$ Thus, $$\sigma(h) = c(g^{-1} , g)^{-2} \widetilde{u} h \widetilde{u}^{-1}.$$ Suppose that $g$ is not a scalar matrix. Let $g_1,$ $g_2$ and $g_3$ be as in Lemma \ref{h_i-arein-S}. Using the rational canonical form, it follows that $g$ is conjugate to a matrix of the type either $g_1,$ $g_2,$ or $g_3$. We discuss these cases separately. \textbf{Case (1)}: Assume that $g$ is conjugate to $g_1,$ where $g_1 = \begin{bmatrix} 0 & v \\ 1 & w \end{bmatrix}$. Suppose $g = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}$. Since $g$ is conjugate to $g_1,$ we have $$v = - \Delta(g) = -\det(g) \quad \text{and} \quad w = \tr(g) = \alpha + \delta.$$ Also, $\gamma \neq 0$ by Lemma \ref{g-conjugate-g-2 or g-3}. 
Consider $x = \begin{bmatrix} 0 & v \gamma^{-1} \\ 1 & \delta \gamma^{-1} \end{bmatrix}$. It is easy to verify that $x g x^{-1} = g_1.$ By Lemma \ref{working-equation-prove-Gtilde-equaltoS} (taking $s=1$), there exists $z \in \widetilde{G}$ such that \begin{equation}\label{working-equatioto-provethe-theorem-1} \sigma(h) = A^{-2} c(g^{-1}, g)^{-2} z h z^{-1}, \end{equation} where $A = c(x^{-1} g_1^{-1}, x) c(x, x^{-1}g_1^{-1})^{-1}.$ We now compute $c(x^{-1} g_1^{-1}, x)$ and $c(x, x^{-1}g_1^{-1}).$ These are as follows: \begin{equation*} c(x^{-1} g_1^{-1}, x) = \begin{cases} \langle 1, -\triangle(g) \rangle, & w=0 \\ \langle w^{-1}, -\triangle(g) \rangle, & w \neq 0 \end{cases} \quad \text{and} \quad c(x, x^{-1}g_1^{-1}) = \begin{cases} \langle -\triangle(g)^{-1}, \triangle(g)^{-1} \rangle, & w=0 \\ \langle -\triangle(g), w \rangle, & w \neq 0. \end{cases} \end{equation*} This implies that $c(x^{-1} g_1^{-1}, x) c(x, x^{-1}g_1^{-1})^{-1} = 1$ irrespective of whether $w$ is zero or non-zero. Hence, $A =1.$ Thus, \eqref{working-equatioto-provethe-theorem-1} reduces to $$ \sigma(h) = c(g^{-1}, g)^{-2} z h z^{-1}.$$ \textbf{Case (2)}: Assume that $g$ is conjugate to $g_2,$ where $g_2 = \left[\begin{array}{cc} a & 0\\ 0 & d \end{array}\right]$ with $a \neq d.$ Choose $x \in G$ such that $x g x^{-1} = g_2.$ We claim the following: \begin{equation}\label{g-conjugate-g_2} \mbox{There exists $y \in G$ such that $y g y^{-1} = g_2$ and $c(y^{-1} g_2^{-1}, y) c(y, y^{-1}g_2^{-1})^{-1} =1.$} \end{equation} Suppose that \eqref{g-conjugate-g_2} is true. Then the result in this case follows by applying Lemma \ref{working-equation-prove-Gtilde-equaltoS} with $y$ in place of $x$ and taking $s=1$. We now prove the claim \eqref{g-conjugate-g_2}.
Write $x = \begin{bmatrix} f & p \\ q & r \end{bmatrix}$, and consider the following cases:\\ (i) Suppose that $q =0.$ Take $y = tx,$ where $t= \begin{bmatrix} f^{-1} & 0\\ 0 & r^{-1} \end{bmatrix}$. Clearly, $y g y^{-1} = g_2.$ Also, we have $c(y^{-1} g_2^{-1}, y) = \langle 1, a \rangle$ and $c(y, y^{-1}g_2^{-1}) = \langle d, 1 \rangle.$ Thus, $$c(y^{-1} g_2^{-1}, y) c(y, y^{-1}g_2^{-1})^{-1} =1.$$ (ii) Suppose that $q \neq 0.$ Let $y = tx,$ where $t= \begin{bmatrix} p^{-1} & 0\\ 0 & q^{-1} d^{-1} \end{bmatrix}$ if $f =0$ and $t= \begin{bmatrix} \frac{d}{(d-a)f} & 0\\ 0 & q^{-1} d^{-1} \end{bmatrix}$ if $f \neq 0.$ It is clear that $y g y^{-1} = g_2.$ Also, $c(y^{-1} g_2^{-1}, y) = \langle 1, -d \rangle$ and $c(y, y^{-1}g_2^{-1}) = \langle 1, -a \rangle.$ Thus, $$c(y^{-1} g_2^{-1}, y) c(y, y^{-1}g_2^{-1})^{-1} =1.$$ \textbf{Case (3)}: Assume that $g$ is conjugate to $g_3,$ where $g_3 = \begin{bmatrix} b & 1\\ 0 & b \end{bmatrix}$. Let $x \in G$ such that $x g x^{-1} = g_3.$ Given $s \in F^{\times},$ there exists $z \in \widetilde{G}$ (see Lemma \ref{working-equation-prove-Gtilde-equaltoS}) such that \begin{equation}\label{working-equatioto-provethe-theorem-3} \sigma(h) = \langle s, \triangle(h) \rangle A^{-2} c(g^{-1}, g)^{-2} z h z^{-1}. \end{equation} Here, $A = c(x^{-1} g_3^{-1}, x) c(x, x^{-1}g_3^{-1})^{-1}.$ Write $x = \begin{bmatrix} f & p \\ q & r \end{bmatrix}$. Then, we have \begin{equation*} c(x^{-1} g_3^{-1}, x) = \begin{cases} \langle r, fb \rangle, & q=0 \\ \langle q, b \rangle, & q \neq 0 \end{cases} \quad \text{and} \quad c(x, x^{-1}g_3^{-1}) = \begin{cases} \langle br, f \rangle, & q=0 \\ \langle b, -q \rangle, & q \neq 0. \end{cases} \end{equation*} Thus, \begin{equation*} A = \begin{cases} \langle fr, b \rangle, & q=0 \\ \langle -q^2, b \rangle, & q \neq 0. 
\end{cases} \end{equation*} Note that $\triangle(h) = b^2.$ If $q=0,$ taking $s = fr$ we get $\langle s, \triangle(h) \rangle A^{-2} =1.$ Thus, $$\sigma(h) = c(g^{-1}, g)^{-2} z h z^{-1},$$ for some $z \in \widetilde{G}$ via \eqref{working-equatioto-provethe-theorem-3}. Also, $s = - q^2$ works when $q \neq 0.$ This completes the proof of the theorem. \end{proof} \begin{corollary}\label{G-tilde-equal-S} Let $S$ be defined as in~\eqref{Definition of the set S}. We have $$\widetilde{G}=S.$$ \end{corollary} \begin{proof} Clearly $S\subset \widetilde{G}$. Using Theorem~\ref{conjugacy property of sigma}, it follows that $\widetilde{G}\subset S$. Hence the result. \end{proof} In the following remark, we make some pertinent observations about the proof of $\widetilde{G} = S$ (Corollary \ref{G-tilde-equal-S}) by comparing it with the case when $\widetilde{G}$ is the $2$-fold metaplectic cover of $G$ considered in \cite{KumAji}. \begin{remark} \hfill \\ \begin{enumerate} \item For $n \geq 3,$ let $\epsilon \in \mu_n$ such that $\epsilon^2 \neq 1.$ Put $h = (g, \epsilon),$ where $g = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$. We will show that $\sigma(h)$ and $h$ are not conjugate in $\widetilde{G}.$ Since $\tau(g) = g,$ we get $\sigma(h) = (g, \epsilon^{-1})$ by Lemma \ref{Properties of sigma} and Remark \ref{sigma-h-computation}. Let $z = (x, \eta) \in \widetilde{G}.$ Then, $$z h z^{-1} = (xg x^{-1}, \lambda \epsilon),$$ where $\lambda = c(xg, x^{-1}) c(x, g) c(x, x^{-1})^{-1}$. For $\sigma(h) = z h z^{-1},$ we must have $g = x g x^{-1},$ i.e., $x \in C_{G}(g)$. Note that $ C_{G}(g) = \left\{ \begin{bmatrix} a & b \\ 0 & a \end{bmatrix}: a \in F^{\times}, b \in F \right\}.$ For any $x \in C_{G}(g),$ we have $c(xg, x^{-1}) = \langle a, a \rangle =c(x, x^{-1})$ and $ c(x, g) = 1$ yielding $\lambda =1.$ Since $\epsilon^2 \neq 1,$ we have $\epsilon^{-1} \neq \lambda \epsilon.$ Thus, $$\sigma(h) \neq z h z^{-1},$$ for any $z \in \widetilde{G}$. Hence, the set $S$ considered in \cite{KumAji} does not work for $n \geq 3,$ i.e., $\widetilde{G} \neq S$. 
\\ \item It is clear that the set $S$ defined in \eqref{Definition of the set S} and the set considered in \cite{KumAji} agree when $n=2$. Also, conjugacy invariance of $S$ was used to show that $\widetilde{G} = S$ in the case when $n=2$ (see \cite[Lemma 4.10 \& Theorem 4.11]{KumAji}). \\ \item In the current context, it is not immediate to see whether $S$ is conjugation invariant or not. However, once Corollary \ref{G-tilde-equal-S} is established, it is clear that $S$ is conjugation invariant. \\ \end{enumerate} \end{remark} \subsection{Other lifts of the standard involution and their properties} In this subsection, we define other lifts of the standard involution and show that they satisfy properties analogous to those of the lift $\sigma$ of $\tau$. \begin{lemma}\label{properties of sigms-alpha} For each $\alpha \in F^{\times}$ and $h\in \widetilde{G}$, let \begin{equation}\label{definiton of sigma-alpha}\sigma_{\alpha}(h)=\langle \alpha, \Delta(h)\rangle \sigma(h).\end{equation} Then \begin{enumerate} \item[\emph{(1)}] $\sigma_{\alpha}$ is an anti-automorphism. \vspace{0.1 cm} \item[\emph{(2)}] $\sigma_{\alpha}$ is an involution. \vspace{0.1 cm} \item[\emph{(3)}] $\sigma_{\alpha}$ is a lift of $\tau$. \end{enumerate} \end{lemma} \begin{proof} \hfill\\ \begin{enumerate} \item[(1)] For $g,h \in \widetilde{G}$, we have \begin{align*} \sigma_{\alpha}(gh) &= \langle \alpha, \Delta(gh)\rangle \sigma(gh)\\ &= \langle \alpha, \Delta(g)\Delta(h)\rangle \sigma(h)\sigma(g)\\ &= \langle \alpha, \Delta(g)\rangle \langle \alpha, \Delta(h)\rangle \sigma(h)\sigma(g)\\ &= \langle \alpha, \Delta(h) \rangle \sigma(h) \langle \alpha, \Delta(g) \rangle \sigma(g)\\ &= \sigma_{\alpha}(h)\sigma_{\alpha}(g). \\ \end{align*} \item[(2)] Let $\sigma_{\alpha}(h)=y$. 
We have \begin{align*} \sigma_{\alpha}(\sigma_{\alpha}(h)) &= \sigma_{\alpha}(y)\\ &= \langle \alpha, \Delta(y)\rangle \sigma(y)\\ &= \langle \alpha, \Delta(h) \rangle \sigma(\langle \alpha, \Delta(h) \rangle\sigma(h))\\ &= \langle \alpha, \Delta(h) \rangle \langle \alpha, \Delta(h) \rangle ^{-1}\sigma(\sigma(h))\\ &= h. \\ \end{align*} \item[(3)] Let $h=(g, \epsilon)\in \widetilde{G}$. Suppose that $\sigma((g,1))= (\tau(g), \xi)$. We have \[\tau(p(h))= \tau(p((g,\epsilon)))= \tau(g)\] and \[ p(\sigma(h))= p(\sigma((g,\epsilon)))= p((\tau(g), \epsilon^{-1} \xi))= \tau(g).\] Since $\sigma_{\alpha}(h)$ and $\sigma(h)$ differ only by an element of $\mu_n,$ we get $p(\sigma_{\alpha}(h)) = p(\sigma(h)) = \tau(p(h)).$ Thus $\sigma_{\alpha}$ is a lift of $\tau$. \end{enumerate} \end{proof} \begin{remark} We have used the following observation in part (2) of Lemma~\ref{properties of sigms-alpha}. Let $h=(x, \epsilon)$ and $y=\sigma_{\alpha}(h)$. Then we get $$y=( \tau(x),* ).$$ Using the extension of $\Delta$ to $\widetilde{G}$ as defined earlier, we see that $$\Delta(y)=\Delta(\tau(x))=\Delta(h).$$ \end{remark} It follows from Proposition~\ref{description of the set of liftings} that any lift of $\tau$ is of the form $\sigma_{\alpha}$ for $\alpha\in F^{\times}$. We now show that all the lifts $\sigma_{\alpha}$ also satisfy a conjugacy property similar to that of $\sigma$. To be precise, we have \begin{theorem}\label{sigma-alpha-has-similar-propertyas-sigma} Let $h=(g, \eta) \in \widetilde{G}$. We have \[\sigma_{\alpha}(h)= c(g^{-1},g)^{-2} \eta^{-2} zhz^{-1},\] for some $z \in \widetilde{G}$. \end{theorem} \begin{proof} Suppose $\Delta(h)\in (F^{\times})^{n}$. Then $\langle \alpha, \Delta(h)\rangle =1$ and $\sigma_{\alpha}(h)=\sigma(h)$. Thus, the theorem holds in this case. Suppose $\Delta(h)\not\in (F^{\times})^{n}$. 
Taking $\lambda=\alpha^{-1}$ in (i) of Proposition~\ref{properties to check sigma is an involution}, and $u=\tilde{z}(\alpha^{-1})$ we get \[\langle \alpha, \Delta(h)\rangle ^{-1} h = uhu^{-1}.\] Indeed, \[ hu = h\tilde{z}(\alpha^{-1}) = \langle \alpha^{-1}, \Delta(h)\rangle ^{-1} \tilde{z}(\alpha^{-1})h = \langle \alpha, \Delta(h)\rangle u h .\] Thus, \begin{align*}\sigma_{\alpha}(h) &= \langle \alpha, \Delta(h)\rangle \sigma(h) \\ &= \sigma(\langle \alpha, \Delta(h)\rangle^{-1}h) \\ &= \sigma(uhu^{-1}) \\ &= \sigma(u)^{-1}\sigma(h) \sigma(u). \end{align*} Since $\widetilde{G}=S$, we have $\sigma(h)= c(g^{-1},g)^{-2} \eta^{-2}vhv^{-1}$, for some $v \in \widetilde{G}$. \\\\ Thus, we get \[ \sigma_{\alpha}(h) =c(g^{-1},g)^{-2} \eta^{-2} \sigma(u)^{-1} v h v^{-1} \sigma(u). \] Now taking $z=\sigma(u)^{-1} v$, we get \begin{align*} \sigma_{\alpha}(h)=c(g^{-1},g)^{-2} \eta^{-2} z h z^{-1}. \end{align*} Hence the result. \end{proof} \section{Main Theorem}\label{main theorem} Throughout, we let $\mu_{\widetilde{G}}$ denote the Haar measure on $\widetilde{G}$. We write $\Aut_{c}(\widetilde{G})$ for the group of continuous automorphisms of $\widetilde{G}$ and $\mathbb{R}^{\times}_{> 0}$ for the multiplicative group of positive real numbers. \\ \begin{lemma}\label{results-on-haarmeasure} Let $\gamma\in \Aut_{c}(\widetilde{G})$. Then, the following statements hold: \begin{enumerate} \item[\emph{(1)}] There exists $c_{\gamma}>0$ such that $\mu_{\widetilde{G}}\circ \gamma = c_{\gamma}\mu_{\widetilde{G}}.$ \smallskip \item[\emph{(2)}] The map $\gamma\mapsto c_{\gamma}$ from $\Aut_{c}(\widetilde{G})$ to $\mathbb{R}^{\times}_{>0}$ is a homomorphism. \smallskip \item[\emph{(3)}] If $\gamma^{2}=1,$ then $\mu_{\widetilde{G}}\circ \gamma =\mu_{\widetilde{G}}.$ \end{enumerate} \end{lemma} \begin{proof} We refer the reader to Section 5 (Lemma 5.7, Lemma 5.8 and Lemma 5.9) in \cite{KumAji} for the proof. 
\end{proof} \begin{lemma}\label{restriction-mu-n-of-central-character-injective} Let $(\pi, V)$ be an irreducible smooth genuine representation of $\widetilde{G}.$ Let $\omega_\pi$ be the central character of $\pi.$ Then, the following statements hold: \begin{enumerate} \item[\emph{(1)}] The restriction ${\omega_\pi}\big|_{\mu_n}$ of $\omega_\pi$ to $\mu_n$ is injective. \smallskip \item[\emph{(2)}] $\omega_\pi(\eta^2) = 1,$ for every $\eta\in \mu_n$ if and only if $n=2.$ \end{enumerate} \end{lemma} \begin{proof} Let $\eta\in \mu_n$. Since $\pi$ is a genuine representation, we have $\pi(\eta) = \xi(\eta) 1_V.$ Also, we have that $\mu_n$ is contained in the center of $\widetilde{G}$. Thus applying Schur's lemma, we get $\pi(\eta) = \omega_\pi(\eta)1_V$. It follows that $ \omega_\pi(\eta) = \xi(\eta),$ and since $\xi$ is injective on $\mu_n,$ this proves (1). If $n=2$, clearly we have $\omega_\pi(\eta^2) = 1.$ Conversely, suppose that $\omega_\pi(\eta^2) = 1$ for every $\eta \in \mu_n.$ This implies that $\omega_\pi(\eta) = \pm 1.$ If $n \geq 3,$ there exist $\eta_1, \eta_2$ in $\mu_n$ with $\eta_1 \neq \eta_2$ such that $\omega_\pi(\eta_1) = \omega_\pi(\eta_2),$ which contradicts part (1). This completes the proof of (2). \end{proof} \begin{lemma}\label{character-contragredient} For $g\in \widetilde{G}_{\reg}$, we have $$\Theta_{\pi^{\vee}}(g)=\Theta_{\pi}(g^{-1}).$$ \end{lemma} \begin{proof} We refer the reader to Section 5 (Lemma 5.11) in \cite{KumAji} for the proof. \end{proof} \begin{lemma} Let $\rho \in \Aut_{c}(\widetilde{G})$ be a lift of the automorphism $h \mapsto \tau(h^{-1})$ of $G.$ Then, $$\rho(g) \text{ and } \eta^2 g^{-1} \text{ are in } \widetilde{G}_{\reg},$$ for any $g\in \widetilde{G}_{\reg}$ and $\eta \in \mu_n.$ \end{lemma} \begin{proof} Let $\eta \in \mu_n$ and $g = (x, \epsilon) \in \widetilde{G}_{\reg}.$ Since $x \in G_{\reg}$ and $p(\eta^2 g^{-1}) = x^{-1}$, it follows that $\eta^2 g^{-1} \in \widetilde{G}_{\reg}$. 
Since $\rho$ is a lift of the automorphism $h \mapsto \tau(h^{-1})$, clearly we have $p(\rho(g))=\tau(x^{-1}) \in G_{\reg}$. Thus $\rho(g) \in \widetilde{G}_{\reg}.$ \end{proof} Let $\pi$ be an irreducible admissible genuine representation of $\widetilde{G}$. For $f\in C_{c}^{\infty}(\widetilde{G})$, $\rho\in \Aut(\widetilde{G})$ we define \[f^{\rho}(g)=f(\rho(g)) \text{ and } \pi^{\rho}(g)=\pi(\rho(g)).\] \begin{proposition}\label{rho-dualizing-involution} Let $\rho \in \Aut_{c}(\widetilde{G})$ such that $\rho^2 =1$ and $\rho(g)$ is conjugate to $\eta^2g^{-1},$ for any $g = (x, \eta)\in \widetilde{G}$. Also, assume that $\rho$ is a lift of the automorphism $h \mapsto \tau(h^{-1})$ of $G.$ Let $\pi$ be an irreducible admissible genuine representation of $\widetilde{G}$ and $\omega_\pi$ be the central character of $\pi.$ Suppose that $\pi^{\rho}$ is isomorphic to $\pi^{\vee}.$ Then \[\omega_{\pi}(\eta^2) =1,\] for all $\eta \in \mu_n.$ \end{proposition} \begin{proof} Let $f\in C_{c}^{\infty}(\widetilde{G})$. Since $\pi^{\rho}\cong \pi^{\vee},$ it follows that $\Theta_{\pi^{\rho}}(f) = \Theta_{\pi^{\vee}}(f).$ It is straightforward to verify that $\Theta_{\pi^{\rho}}(f) = \Theta_{\pi}(f^{\rho}).$ We have, \begin{align}\label{pi-rho-isomorphic-pi-check-1} \Theta_{\pi}(f^{\rho}) &= \int_{\widetilde{G}} f^{\rho}(g) \Theta_{\pi}(g)dg \nonumber \\ &= \int_{\widetilde{G}} f(\rho(g)) \Theta_{\pi}(g)dg \nonumber \\ &= \int_{\widetilde{G}} f(g) \Theta_{\pi}(\rho(g))dg ~~~~~\mbox{($g \mapsto \rho(g)$ and using (3) of Lemma \ref{results-on-haarmeasure})}. \end{align} Let $g = (x, \eta) \in \widetilde{G}_{\reg}.$ Since $\rho(g)$ is conjugate to $\eta^2 g^{-1}$ and $\Theta_{\pi}$ is constant on (regular semi-simple) conjugacy classes, \eqref{pi-rho-isomorphic-pi-check-1} yields \begin{align}\label{pi-rho-isomorphic-pi-check-3} \Theta_{\pi}(f^{\rho}) &= \int_{\widetilde{G}} f(g) \Theta_{\pi}(\eta^2 g^{-1})dg. 
\end{align} Also, \begin{align}\label{pi-rho-isomorphic-pi-check-4} \Theta_{\pi^{\vee}}(f) &= \int_{\widetilde{G}} f(g) \Theta_{\pi^{\vee}}( g)dg \nonumber \\ &= \int_{\widetilde{G}} f(g) \Theta_{\pi}( g^{-1})dg ~~~~~~\mbox{ (by Lemma \ref{character-contragredient})}. \end{align} It follows from \eqref{pi-rho-isomorphic-pi-check-3} and \eqref{pi-rho-isomorphic-pi-check-4} that \begin{equation}\label{theta-pi-equalon-etasquareg-inverse-and-ginverse} \Theta_{\pi}(\eta^2 g^{-1}) = \Theta_{\pi}(g^{-1}). \end{equation} By Lemma \ref{character-non-zero}, $\Theta_{\pi}((g_0, 1)) \neq 0$ for some $g_0 \in G_{\reg}.$ For $ \eta \in \mu_n,$ we have $(g_0, \eta) \in \widetilde{G}_{\reg}.$ By \eqref{theta-pi-equalon-etasquareg-inverse-and-ginverse}, $\Theta_{\pi}(\eta^2 (g_0^{-1}, \eta)^{-1}) = \Theta_{\pi}((g_0^{-1}, \eta)^{-1}).$ Using (2) of Lemma \ref{central-action}, the proof follows. \end{proof} \subsection{Proof of the Main Theorem}\label{Proof-of-Main-Theorem} Let $g = (x, \eta) \in \widetilde{G}.$ For $\alpha\in F^{\times},$ let $\rho_{\alpha}(g)= \sigma_{\alpha}(g^{-1})$, where $\sigma_{\alpha}$ is given as in \eqref{definiton of sigma-alpha}. Clearly, $\rho_{\alpha}^2=1.$ Since $\sigma_{\alpha}$ is continuous (see Proposition~\ref{continuity of lift}), it follows that $\rho_{\alpha}\in \Aut_{c}(\widetilde{G})$ for all $\alpha\in F^{\times}$. Using Theorem \ref{sigma-alpha-has-similar-propertyas-sigma}, it follows that there exists $z\in \widetilde{G}$ such that $\rho_{\alpha}(g)$ is conjugate to $\eta^2 g^{-1}$. Indeed, we have \begin{align*} \rho_{\alpha}(g) &= \sigma_{\alpha}((x^{-1}, c(x, x^{-1})^{-1} \eta^{-1}))\\ &= c(x, x^{-1})^{-2} (c(x, x^{-1})^{-1} \eta^{-1})^{-2} z g^{-1} z^{-1} \\ &= z (\eta^2 g^{-1}) z^{-1}. \end{align*} By Proposition~\ref{rho-dualizing-involution} and part (2) of Lemma \ref{restriction-mu-n-of-central-character-injective}, we have $n=2$. 
Conversely, if $n=2$ then for each $\alpha\in F^{\times}$, $\sigma_{\alpha}$ is a dualizing involution of $\widetilde{G}$ (see \cite{KumAji} and \cite{KumEkt}). \bibliographystyle{amsplain} \bibliography{metaplectic} \end{document}
\documentclass[11pt,english]{article} \usepackage{amsfonts,amsmath,amsthm,amscd,amssymb,latexsym,cite,verbatim,texdraw,floatflt, caption2,pb-diagram} \usepackage{tikz} \usetikzlibrary{positioning,trees, decorations.markings} \textheight20cm \textwidth12cm \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{definition}{Definition}[section] \theoremstyle{definition} \newtheorem{example}{Example}[section] \newtheorem{remark}{Remark}[section] \newcommand{\keywords}{\textbf{Key words. }\medskip} \newcommand{\subjclass}{\textbf{MSC 2020. }\medskip} \renewcommand{\abstract}{\textbf{Abstract. }\medskip} \numberwithin{equation}{section} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Iso}{Iso} \DeclareMathOperator{\Fix}{Fix} \newcommand{\mfu}{\mathfrak{U}} \newcommand{\mct}{\mathfrak{T}} \newcommand{\RR}{\mathbb{R}} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\Sp}[1]{\operatorname{Sp}(#1)} \def\we{\mathrel{\stackrel{\rm w}=}} \begin{document} \title{Hereditary properties of finite ultrametric spaces} \author{Evgeniy A. Petrov} \maketitle \begin{abstract} A characterization of finite homogeneous ultrametric spaces and finite ultrametric spaces generated by unrooted labeled trees is found in terms of representing trees. A characterization of finite ultrametric spaces having perfect strictly $n$-ary trees is found in terms of special graphs connected with the space. Further, we give a detailed survey of some special classes of finite ultrametric spaces, which were considered in the last ten years, and study their hereditary properties. More precisely, we are interested in the following question. Let $X$ be an arbitrary finite ultrametric space from some given class. Does every subspace of $X$ also belong to this class? 
\end{abstract} \subjclass{Primary 54E35, 05C05;} \keywords{finite ultrametric space, representing tree, homogeneous ultrametric space, labeled tree, perfect strictly $n$-ary tree} \section{Introduction} In 2001, at the Workshop on General Algebra~\cite{WGA}, the attention of experts in lattice theory was drawn to the following problem of I.~M.~Gelfand: describe, up to isometry, all finite ultrametric spaces using graph theory. An appropriate representation of ultrametric spaces $X$ by monotone rooted trees $T_X$ was proposed in~\cite{GurVyal(2012)}, which can be considered in some sense as a solution of the above-mentioned problem. The question naturally arises about applications of this representation. One such application is the structural characterization of finite ultrametric spaces for which the Gomory-Hu inequality becomes an equality, see~\cite{PD(UMB)}. The ultrametric spaces whose representing trees are strictly binary were described in~\cite{DPT}. A characterization of finite ultrametric spaces which are as rigid as possible was also obtained, see~\cite{DPT(Howrigid)}. Extremal properties of finite ultrametric spaces and related properties of monotone rooted trees have been found in~\cite{DP20}. See also the papers~\cite{DP19, DP18, Do19, Do20BBMS, Do20, P18} for some other properties of ultrametric spaces based on the analysis of representing trees. The present paper is also a contribution to this line of studies. The paper is organized as follows. The first section of the paper contains the main definitions and the required technical results. In Section 2 and Section 4 finite homogeneous ultrametric spaces and finite ultrametric spaces defined by unrooted labeled trees are described in terms of representing trees. In Section 3 we give a characterization of finite ultrametric spaces having perfect strictly $n$-ary trees in terms of special graphs $G_{r,X}$ connected with the space $X$. 
In Section 5 we give a detailed survey of some special classes of finite ultrametric spaces which were considered in the last ten years. In Section 6, among the above-mentioned classes, we distinguish those classes for which every subspace of a space from the class also belongs to the class. Recall some definitions from the theory of metric spaces and from graph theory. An \textit{ultrametric} on a set $X$ is a function $d\colon X\times X\rightarrow \mathbb{R}^+$, $\mathbb R^+ = [0,\infty)$, such that for all $x,y,z \in X$: \begin{itemize} \item [\textup{(i)}] $d(x,y)=d(y,x)$, \item [\textup{(ii)}] $(d(x,y)=0)\Leftrightarrow (x=y)$, \item [\textup{(iii)}] $d(x,y)\leq \max \{d(x,z),d(z,y)\}$. \end{itemize} Inequality (iii) is often called the {\it strong triangle inequality}. The pair $(X,d)$ is called an \emph{ultrametric space.} The \emph{spectrum} of an ultrametric space $(X,d)$ is the set $$\operatorname{Sp}(X)=\{d(x,y)\colon x,y \in X\}.$$ For simplicity we will always assume that $X\cap \Sp{X}=\varnothing$. The spectrum $\operatorname{Spec}(X,x)$ of the space $X$ at the point $x$ is the set $$ \operatorname{Spec}(X,x)= \{d(x,y)\, | \, y\in X \}. $$ The quantity $$ \diam X=\sup\{d(x,y)\colon x,y\in X\} $$ is the \emph{diameter} of the space $(X,d)$. Recall that a \textit{graph} is a pair $(V,E)$ consisting of a nonempty set $V$ and a (possibly empty) set $E$ whose elements are unordered pairs of distinct points from $V$. For a graph $G=(V,E)$, the sets $V=V(G)$ and $E=E(G)$ are called \textit{the set of vertices} and \textit{the set of edges}, respectively. Recall that a \emph{path} is a nonempty graph $P=(V,E)$ of the form $$ V=\{x_0,x_1,...,x_k\}, \quad E=\{\{x_0,x_1\},...,\{x_{k-1},x_k\}\}, $$ where all $x_i$ are distinct. A connected graph without cycles is called a \emph{tree}. A tree $T$ may have a distinguished vertex called the \emph{root}; in this case $T$ is called a \emph{rooted tree}. 
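The ultrametric axioms (i)--(iii) and the notions of spectrum and diameter recalled above can be checked mechanically for a finite space. The following Python sketch is purely illustrative (the function names and the dictionary encoding of the metric are ours, not from the paper):

```python
from itertools import product

def is_ultrametric(points, d):
    """Check axioms (i)-(iii) for a finite space with distance function d."""
    for x, y in product(points, repeat=2):
        if d(x, y) != d(y, x):              # (i) symmetry
            return False
        if (d(x, y) == 0) != (x == y):      # (ii) d(x,y)=0 iff x=y
            return False
    for x, y, z in product(points, repeat=3):
        if d(x, y) > max(d(x, z), d(z, y)): # (iii) strong triangle inequality
            return False
    return True

def spectrum(points, d):
    """Sp(X) = {d(x,y) : x, y in X}."""
    return {d(x, y) for x, y in product(points, repeat=2)}

def diameter(points, d):
    """diam X = max of the spectrum (the space is finite)."""
    return max(spectrum(points, d))

# A 3-point example: pairwise distances 1, 2, 2; the two largest
# distances are equal, as the strong triangle inequality forces.
dist = {frozenset({'a', 'b'}): 1, frozenset({'a', 'c'}): 2, frozenset({'b', 'c'}): 2}
d = lambda x, y: 0 if x == y else dist[frozenset({x, y})]
pts = ['a', 'b', 'c']
print(is_ultrametric(pts, d))    # True
print(sorted(spectrum(pts, d)))  # [0, 1, 2]
print(diameter(pts, d))          # 2
```

Note that in any ultrametric space every triangle is isosceles with the two largest sides equal, which is exactly what axiom (iii) enforces in the example.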
An \emph{$n$-ary tree} is a rooted tree such that the degree of each of its vertices is at most $n+1$. A rooted tree is \emph{strictly $n$-ary} if every internal node has exactly $n$ children. In the case $n=2$ such a tree is called \emph{strictly binary}. Generally we follow the terminology used in~\cite{BM}. Let $k\geqslant 2$. A nonempty graph $G$ is called \emph{complete $k$-partite} if its vertices can be divided into $k$ disjoint nonempty sets $X_1,...,X_k$ so that there are no edges joining vertices of the same set $X_i$, and any two vertices from different sets $X_i$, $X_j$, $1\leqslant i,j \leqslant k$, are adjacent. In this case we write $G=G[X_1,...,X_k]$. We shall say that $G$ is a {\it complete multipartite graph} if there exists $k \geqslant 2$ such that $G$ is complete $k$-partite, cf. \cite{Di}. \begin{definition}[\!\!\cite{DDP(P-adic)}]\label{d2} Let $(X,d)$ be a finite ultrametric space. Define the graph $G_X^d$ as follows: $V(G_X^d)=X$ and $$ (\{u,v\}\in E(G_X^d))\Leftrightarrow(d(u,v)=\diam X). $$ We call $G_X^d$ the \emph{diametrical graph} of $X$. \end{definition} \begin{definition}\label{d14} Let $(X,d)$ be an ultrametric space with $|X|\geqslant 2$ and spectrum $\operatorname{Sp}(X)$, and let $r\in \operatorname{Sp}(X)$ be nonzero. Denote by $G_{r,X}$ the graph for which $V(G_{r,X})=X$ and $$ (\{u,v\}\in E(G_{r,X}))\Leftrightarrow (d(u,v)=r). $$ For $r=\operatorname{diam} X$ it is clear that $G_{r,X}$ is the diametrical graph of $X$. \end{definition} \begin{theorem}[\!\!\cite{DDP(P-adic)}]\label{t13} Let $(X,d)$ be a finite ultrametric space, $|X|\geqslant 2$. Then $G_X^d$ is complete multipartite. \end{theorem} With every finite ultrametric space $(X, d)$, we can associate a labeled rooted $n$-ary tree $T_X$ by the following rule (see~\cite{PD(UMB)}). If $X=\{x\}$ is a one-point set, then $T_X$ is the rooted tree consisting of one node $X$ labeled by $0$. Let $|X|\geqslant 2$. According to Theorem~\ref{t13} we have $G^d_X = G^d_X[X_1,...,X_k]$. 
In this case the root $X$ of the tree $T_X$ is labeled by $\diam X$ and, moreover, $T_X$ has $k$ nodes $X_1,...,X_k$ of the first level with the labels \begin{equation}\label{e2.7} l(X_i)= \diam X_i, \quad i = 1,...,k. \end{equation} The nodes of the first level indicated by $0$ are leaves, and those indicated by strictly positive numbers are internal nodes of the tree $T_X$. If the first level has no internal nodes, then the tree $T_X$ is constructed. Otherwise, by repeating the above-described procedure with $X_i$ corresponding to the internal nodes of the first level, we obtain the nodes of the second level, etc. Since $|X|$ is finite, all vertices on some level will be leaves and the construction of $T_X$ is completed. The above-constructed labeled tree $T_X$ is called the \emph{representing tree} of the space $(X, d)$. To underline that $l_X(v)$, $v\in V(T_X)$, is a labeling function of the representing tree $T_X$ we shall write $(T_X,l_X)$. The rooted tree $T_X$ without the labels we will denote by $\overline{T}_X$. Let $T$ be a rooted tree. For every vertex $v$ of $T$ we denote by $T_v$ the subtree of $T$ such that \(v\) is the root of \(T_v\), \begin{equation*}\label{e2.5} V(T_v) = \{u \in V(T) \colon u = v \text{ or \(u\) is a successor of } v \text{ in } T\}, \end{equation*} and satisfying \begin{equation*}\label{e2.6} (\{v_1, v_2\} \in E(T_v)) \Leftrightarrow (\{v_1, v_2\} \in E(T)) \end{equation*} for all \(v_1\), \(v_2 \in V(T_v)\). Denote by \(\overline{L}(T_v)\) the set of all leaves of \(T_v\). If \(T = T_X\) is the representing tree of a finite ultrametric space \((X, d)\), \(v \in V(T_X)\) and $\overline{L}(T_v) = \{\{x_1\}, \ldots, \{x_n\}\}$, then for simplicity we write $L(T_v) = \{x_1, \ldots, x_n\}$. Consequently, the equality $v = L(T_v)$ holds for every \(v \in V(T_X)\). Let $v$ be a node of the rooted tree $T$. 
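The recursive procedure for constructing $T_X$ described above can be summarized in Python. This is a minimal sketch (the function name `representing_tree` and the encoding of the output as nested tuples are ours, not from the paper); it uses the fact that, by Theorem~\ref{t13}, two points lie in the same part of the diametrical graph exactly when their distance is smaller than $\diam X$, which is an equivalence relation in an ultrametric space:

```python
def representing_tree(points, d):
    """Recursively build the representing tree T_X.

    Returns a nested tuple (ball, label, children): `ball` is the set
    L(T_v) of points below the node, `label` is its diameter, and
    `children` lists the subtrees built from the parts of the
    diametrical graph.
    """
    pts = list(points)
    if len(pts) == 1:
        return (frozenset(pts), 0, [])   # a leaf, labeled by 0
    diam = max(d(x, y) for x in pts for y in pts)
    # Parts of the diametrical graph: x and y lie in the same part
    # iff d(x, y) < diam X; one representative per part suffices,
    # since this relation is transitive by the strong triangle
    # inequality.
    parts = []
    for x in pts:
        for part in parts:
            if d(x, next(iter(part))) < diam:
                part.add(x)
                break
        else:
            parts.append({x})
    children = [representing_tree(part, d) for part in parts]
    return (frozenset(pts), diam, children)

# For the 3-point space with distances d(a,b)=1, d(a,c)=d(b,c)=2,
# the root is labeled diam X = 2 and has two children: the ball
# {a, b} (labeled 1) and the singleton {c} (labeled 0).
dist = {frozenset({'a', 'b'}): 1, frozenset({'a', 'c'}): 2, frozenset({'b', 'c'}): 2}
d = lambda x, y: 0 if x == y else dist[frozenset({x, y})]
root = representing_tree(['a', 'b', 'c'], d)
print(root[1], len(root[2]))  # 2 2
```

Since the labels strictly decrease along every root-to-leaf path, the recursion terminates for any finite space.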
Denote by $\delta^+(v)$ the \emph{out-degree} of $v$, i.e., $\delta^+(v)$ is the number of children of $v$, and by $\operatorname{lev}(v)$ denote the level of a node $v \in V(T)$. Note that the correspondence between ultrametric spaces and trees or tree-like structures is well known, cf.~\cite{GV, GNS00,GurVyal(2012),H04,Le03,Ho01,BS17}. \begin{definition}\label{d15} Let $G=(V,E)$ be a nonempty graph, and let $V_0$ be the set (possibly empty) of all isolated vertices of the graph $G$. Denote by $G'$ the subgraph of the graph $G$ induced by the set $V\backslash V_0$. \end{definition} \begin{lemma}[\!\!\cite{DP19}]\label{c25} Let $(X, d)$ be a finite ultrametric space with $|X|\geqslant 2$ and let $r \in \Sp{X}\setminus \{0\}$. Then the graph $G'_{r, X}$ is a union of $p$ complete multipartite graphs $G^1_{r}, \ldots, G^p_{r}$, where $p$ is the number of all distinct nodes \(x_1\), \(\ldots\), \(x_p\) of \(T_X\) labeled by \(r\) and this union is disjoint if \(p \geqslant 2\). Moreover, for every \(i \in \{1, \ldots, p\}\), the graph \(G_{r}^i\) is complete \(k\)-partite, \begin{equation}\label{eq25} G_{r}^i = G_{r}^i [L_{T_{x_{i1}}}, \ldots, L_{T_{x_{ik}}}], \end{equation} where \(k = \delta^{+}(x_i)\) and $x_{i1}, \ldots, x_{ik}$ are the direct successors of $x_i$. \end{lemma} Let $(X,d)$ be an ultrametric space. Recall that a \emph{ball} with radius $r \geqslant 0$ and center $c\in X$ is the set \[ B_r(c)=\{x\in X\colon d(x,c)\leqslant r\}. \] The \emph{ballean} $\mathbf{B}_X$ of the ultrametric space $(X,d)$ is the set of all balls of $(X,d)$. Every one-point subset of $X$ belongs to $\mathbf{B}_X$; such a ball is called a \emph{singular} ball in~$X$. The following proposition shows that the ballean of a finite ultrametric space $(X,d)$ is the vertex set of the representing tree $T_X$. \begin{proposition}[\!\!\cite{P(TIAMM)}]\label{lbpm} Let $(X,d)$ be a finite nonempty ultrametric space with the representing tree $T_X$. Then the following statements hold. 
\begin{itemize} \item [\textup{(i)}] $L_{T_v}$ belongs to $\mathbf{B}_X$ for every node $v\in V(T_X)$. \item [\textup{(ii)}] For every $B \in \mathbf{B}_X$ there exists the unique node $v$ such that $L_{T_v}=B$. \end{itemize} \end{proposition} \begin{theorem}[\!\!\cite{DP18}]\label{t2.9} Let \((X, d)\) be a finite ultrametric space and let \(x_1\) and \(x_2\) be two different points of \(X\). If \(P\) is the path joining the different leaves \(\{x_1\}\) and \(\{x_2\}\) in \((T_X,l_X)\), then we have \[ d(x_1, x_2) = \max_{v \in V(P)} l_X(v). \] \end{theorem} Recall that metric spaces $(X,d)$ and $(Y, \rho)$ are \emph{isometric} if there is a bijection $f\colon X\to Y$ such that the equality $$ d(x,y) = \rho(f(x),f(y)) $$ holds for all $x$, $y \in X$. \begin{definition} Let $T_1$ and $T_2$ be rooted trees with the roots $v_1$ and $v_2$, respectively. A bijective function $\Psi\colon V(T_1)\to V(T_2)$ is an isomorphism of $T_1$ and $T_2$ if $$ (\{x,y\}\in E(T_1))\Leftrightarrow(\{\Psi(x),\Psi(y)\}\in E(T_2)) $$ for all $x,y \in V(T_1)$ and $\Psi(v_1)=v_2$. If there exists an isomorphism of rooted trees $T_1$ and $T_2$, then we will write $T_1\simeq T_2$. \end{definition} We shall say that trees $(T_X,l_X)$ and $(T_Y,l_Y)$ are isomorphic as labeled rooted trees if $\overline{T}_X \simeq \overline{T}_Y$ with an isomorphism $\Psi$ and $l_X(x)=l_Y(\Psi(x))$, $x\in V(T_X)$. \begin{theorem}[\!\!\cite{DPT(Howrigid)}]\label{l3.3} Let $(X, d)$ and $(Y, \rho)$ be nonempty finite ultrametric spaces. Then the representing trees $T_X$ and $T_Y$ are isomorphic as labeled rooted trees if and only if $(X, d)$ and $(Y, \rho)$ are isometric. \end{theorem} \section{Finite homogeneous ultrametric spaces} A relational structure $\textbf R$ is homogeneous if every isomorphism between finite induced substructures of $\textbf R$ extends to an automorphism of the whole structure $\textbf R$ itself. 
Introduced by Fra\"{\i}ss\'{e}~\cite{Fr54} and J\'{o}nsson~\cite{Jo60}, homogeneous structures now play a fundamental role in model theory. According to the terminology of Fra\"{\i}ss\'{e}~\cite{Fr00}, a metric space $X$ is homogeneous if every isometry $f$ whose domain and range are finite subsets of $X$ extends to a surjective isometry of $X$ onto itself. Some results related to indivisible homogeneous ultrametric spaces can be found in~\cite{DLPS07, DLPS08}. A deeper investigation of homogeneous ultrametric spaces was carried out in~\cite{DLPS16}, where, in particular, the following result was obtained. \begin{theorem}[\!\!\cite{DLPS16}]\label{t31} An ultrametric space $X$ is homogeneous if and only if the following two conditions hold: \begin{itemize} \item [\textup{(1)}] $\operatorname{Spec}(X,x)= \operatorname{Spec}(X,x')$ for all $x, x' \in X$. \item [\textup{(2)}] Balls of $X$ of the same kind are isometric. \end{itemize} \end{theorem} Two balls are of the same kind if they have the same diameter, which is attained in both or in none. Clearly, for finite metric spaces this definition can be reduced to the condition that the balls have the same diameter, since in this case the diameter is always attained. The following theorem gives us a characterization of finite homogeneous ultrametric spaces in terms of representing trees, see Figure~\ref{fig3}. \begin{theorem}\label{t23} Let $X$ be a finite ultrametric space with the representing tree $(T_X,l_X)$. The space $X$ is homogeneous if and only if the following two conditions hold. \begin{itemize} \item [\textup{(i)}] For all different $x,y \in V(T_X)$ with $\operatorname{lev}(x)=\operatorname{lev}(y)$ the equality $l_X(x)=l_X(y)$ holds. \item [\textup{(ii)}] For all different $x,y \in V(T_X)$ with $\operatorname{lev}(x)=\operatorname{lev}(y)$ the equality $\delta^{+}(x)=\delta^{+}(y)$ holds. \end{itemize} \end{theorem} \begin{proof} Let $X$ be homogeneous. 
It is clear that for any $x\in X$ the set $\operatorname{Spec}(X,x)$ coincides with the set of all labels on the unique path from the root of $T_X$ to the leaf $\{x\}$. Hence, according to condition (1) of Theorem~\ref{t31} and to the construction of $T_X$ (all the labels on a path from the root of $T_X$ to any leaf strictly decrease) we immediately obtain condition (i) and the fact that all the leaves of $T_X$ are on one and the same level. Further, let $B_1$ and $B_2$ be any two balls with equal diameters. According to Proposition~\ref{lbpm} there exist two inner nodes $b_1$ and $b_2$ of the tree $T_X$ such that $L_{T_{b_1}}=B_1$ and $L_{T_{b_2}}=B_2$. According to condition (i) the nodes $b_1$ and $b_2$ are on one and the same level. By condition (2) of Theorem~\ref{t31} $B_1$ and $B_2$ are isometric. Hence, by Theorem~\ref{l3.3} the subtrees $T_{b_1}$ and $T_{b_2}$ are isomorphic, which means that $\delta^{+}(b_1)=\delta^{+}(b_2)$ and establishes condition (ii). The converse implication easily follows from the construction of representing trees and from Proposition~\ref{lbpm}. 
\end{proof} \begin{figure}[h] \begin{center} \begin{tikzpicture}\tikzstyle{level 1}=[level distance=15mm,sibling distance=30mm] \tikzstyle{level 2}=[level distance=15mm,sibling distance=15mm] \tikzstyle{level 3}=[level distance=15mm,sibling distance=4mm] \tikzset{ solid node/.style={circle,draw,inner sep=1.5,fill=black}, hollow node/.style={circle,draw,inner sep=1.5} } \node [label=left:{\(T_X\)}] at (-2,0) {}; \node (1) [solid node, label=above:{\(l_0\)}] at (0,0) {} child {node[solid node, label=above:{\(l_1\)}]{} child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } } child {node[solid node, label=above:{\(l_1\)}]{} child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } } child {node[solid node, label=above:{\(l_1\)}]{} child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } } child {node[solid node, label=above:{\(l_1\)}]{} child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, 
label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } child{node[solid node, label=left:{\(l_{2}\)}] {} child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } child{node[solid node, label=below:{\(0\)}] {} } } }; \end{tikzpicture} \end{center} \caption{An example of the labeled representing tree $T_X$ of a finite homogeneous ultrametric space.} \label{fig3} \end{figure} Analyzing the structural properties of the representing trees of homogeneous finite ultrametric spaces, one can also distinguish the following two classes of finite ultrametric spaces: the spaces $X$ for which all the leaves of $T_X$ are on the same level, and the spaces $X$ for which all the labels of the inner nodes of $T_X$ lying at the same level are equal. The proofs of the following two propositions follow directly from the fact that for every $x\in X$ the set $\operatorname{Spec}(X,x)$ coincides with the set of labels of the vertices lying on the unique path from the leaf $\{x\}$ to the root of $T_X$ and the fact that these labels monotonically increase along this path. \begin{proposition}\label{p210} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] All leaves of $T_X$ are on the same level. \item [\textup{(ii)}] $|\operatorname{Spec}(X,x)|= |\operatorname{Spec}(X,x')|$ for all $x, x' \in X$. \end{itemize} \end{proposition} \begin{proposition}\label{p211} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$ and let $\Sp{X}=\{0,s_1,...,s_n\}$ with $s_1<\cdots <s_n$. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] All labels of the inner nodes of $T_X$ lying at the same level are equal. \item [\textup{(ii)}] For every $x\in X$ we have $\operatorname{Spec}(X,x)= \{0,s_k,s_{k+1},...,s_n\}$ for some $k\in \{1,...,n\}$.
\end{itemize} \end{proposition} The previous two propositions immediately give the following. \begin{corollary}\label{c211} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] All labels of the inner nodes of $T_X$ lying at the same level are equal and all leaves of $T_X$ are on the same level. \item [\textup{(ii)}] $\operatorname{Spec}(X,x)= \operatorname{Spec}(X,x')$ for all $x, x' \in X$. \end{itemize} \end{corollary} In the following proposition we consider a slight modification of the representing trees of finite homogeneous ultrametric spaces. Namely, we preserve condition (i) of Theorem~\ref{t23} and, instead of condition (ii), we require only that all leaves of the representing tree lie at the same level. \begin{proposition}\label{t210} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$ and let all leaves of $T_X$ be on the same level. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] All labels of the inner nodes of $T_X$ lying at the same level are equal. \item [\textup{(ii)}] $V(G'_{r,X})=X$ for every nonzero $r\in \Sp{X}$. \end{itemize} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) Let $x_1,...,x_p$ be all the inner nodes lying at the same level, all labeled by $r$. It is easy to see that $L_{T_{x_1}}\cup \cdots \cup L_{T_{x_p}}=X$. Using this fact and Lemma~\ref{c25} we obtain the desired implication. (ii)$\Rightarrow$(i) In the case where the number of levels of $T_X$ is equal to 1 this implication is evident. Suppose the number of levels is greater than 1. Let condition (ii) hold and let $x$ and $y$ be inner nodes lying at the first level such that $l(x)\neq l(y)$. Without loss of generality suppose $l(x)>l(y)$. If $V(G'_{l(y),X})\neq X$ then we have a contradiction. Assume that $V(G'_{l(y),X})= X$. Consider the graph $G'_{l(x),X}$.
Since the nodes $x$ and $y$ are on the same level, it is easy to see that $L_{T_y}\not\subseteq V(G'_{l(x),X})$, which contradicts (ii). Hence $l(x)=l(y)$. Arguing as above for the nodes of the next levels we complete the proof. \end{proof} \begin{corollary} Let $(X,d)$ be a finite homogeneous ultrametric space. Then $V(G'_{r,X})=X$ for every $r\in \Sp{X}\setminus \{0\}$. \end{corollary} \section{Spaces for which the representing trees are perfect} Following~\cite{DADS} we call a strictly $n$-ary tree $T$ \emph{perfect} if all leaves of $T$ are on the same level. It is clear that in the general case the representing trees of finite homogeneous ultrametric spaces are not perfect. They are perfect if condition (ii) of Theorem~\ref{t23} is replaced by the following stricter condition: for every inner node $x$ of the representing tree $T_X$ we have $\delta^+(x)=n$. The aim of this section is to describe the spaces for which the representing trees are perfect in terms of the graphs $G'_{r,X}$. In the following proposition we give such a description in the special case when the internal labeling of $T_X$ is injective.
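As a purely computational aside, the perfectness condition introduced above is easy to test once a rooted tree is stored explicitly. The following sketch is our own illustration and not part of the cited results; the child-list representation and the function name are assumptions.

```python
# Minimal sketch: check whether a rooted tree, given as a dict
# mapping each node to its list of children, is a perfect strictly
# n-ary tree (every inner node has exactly n direct successors and
# all leaves lie on one and the same level).
def is_perfect_strictly_n_ary(children, root, n):
    leaf_levels = set()
    stack = [(root, 0)]
    while stack:
        node, level = stack.pop()
        kids = children.get(node, [])
        if not kids:                      # a leaf
            leaf_levels.add(level)
        else:
            if len(kids) != n:            # not strictly n-ary
                return False
            stack.extend((k, level + 1) for k in kids)
    return len(leaf_levels) == 1          # all leaves on one level

# Build a tree with the shape of Figure 4: the root and two further
# levels of inner nodes, each with three successors, 27 leaves.
tree = {"r": ["a", "b", "c"]}
for x in ["a", "b", "c"]:
    kids = [x + str(i) for i in range(3)]
    tree[x] = kids
    for y in kids:
        tree[y] = [y + "_" + str(i) for i in range(3)]

print(is_perfect_strictly_n_ary(tree, "r", 3))  # True
```

The sample tree reproduces the shape of the tree in Figure~\ref{fig4}; removing a single leaf, or changing $n$, makes the test fail.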
\begin{figure}[ht] \begin{center} \begin{tikzpicture}\tikzstyle{level 1}=[level distance=10mm,sibling distance=30mm] \tikzstyle{level 2}=[level distance=10mm,sibling distance=10mm] \tikzstyle{level 3}=[level distance=15mm,sibling distance=3mm] \tikzset{ solid node/.style={circle,draw,inner sep=1.5,fill=black}, hollow node/.style={circle,draw,inner sep=1.5} } \node [label=left:{\(T\)}] at (-2,0) {}; \node (1) [solid node, label=above:{}] at (0,0) {} child {node[solid node, label=left:{}]{} child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } } child {node[solid node, label=left:{}]{} child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } } child {node[solid node, label=left:{}]{} child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } child{node[solid 
node, label=left:{}] {} child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } child{node[solid node, label=below:{}] {} } } }; \end{tikzpicture} \caption{An example of a perfect strictly 3-ary tree $T$.} \label{fig4} \end{center} \end{figure} \begin{proposition}\label{t4} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$ and let $T_X$ be its representing tree such that the labels of different internal nodes of $T_X$ are different. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] $T_X$ is a perfect strictly $n$-ary tree. \item [\textup{(ii)}] $G_{r,X}'= G_{r,X}'[X_1,...,X_n]$ with $|X_1|=\cdots =|X_n|$ for every nonzero $r\in \Sp{X}$. \end{itemize} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) Let $T_X$ be a perfect strictly $n$-ary tree such that all labels of $T_X$ are different. According to Lemma~\ref{c25} we have $G'_{r,X}=G'_{r,X}[L_{T_{x_1}},...,L_{T_{x_n}}]$ for every $r\in \Sp{X}\setminus \{0\}$, where $x_1,...,x_n$ are the direct successors of the node $x$ labeled by $r$. Since $T_X$ is perfect, it is easy to see that the trees $T_{x_1},...,T_{x_n}$ are isomorphic as rooted trees. Hence $|L_{T_{x_1}}|=\cdots=|L_{T_{x_n}}|$, which is equivalent to condition (ii). (ii)$\Rightarrow$(i) Let us prove this implication by induction on the number of levels of the tree $T_X$. If the number of levels of $T_X$ is equal to 1, the implication is evident. Suppose that implication (ii)$\Rightarrow$(i) holds whenever the number of levels of $T_X$ is equal to $k$. Let now the number of levels of $T_X$ be equal to $k+1$ and let condition (ii) hold. Let $x$ be the root of $T_X$ and $x_1,...,x_n$ be its direct successors. Consider the subtrees $T_{x_1},...,T_{x_n}$ with the roots $x_1,...,x_n$ and let $X_1=L_{T_{x_1}},...,X_n=L_{T_{x_n}}$.
Since all the labels of $T_X$ are different and condition (ii) holds for every $r\in \Sp{X}\setminus\{0\}$, condition (ii) also holds for the subspaces $(X_1,d),...,(X_n,d)$. By the induction hypothesis all the trees $T_{X_1},...,T_{X_n}$ are perfect strictly $n$-ary, since the number of levels of the trees $T_{x_1},...,T_{x_n}$ is equal to $k$. Taking into consideration the construction of $T_X$ from the trees $T_{x_1},...,T_{x_n}$, it is easy to see that $T_X$ is also a perfect strictly $n$-ary tree. \end{proof} Omitting the condition ``the labels of different internal nodes of $T_X$ are different'' we can generalize Proposition~\ref{t4} as follows. \begin{theorem}\label{t5} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] $T_X$ is a perfect strictly $n$-ary tree. \item [\textup{(ii)}] For every nonzero $r\in \Sp{X}$ the graph $G_{r,X}'$ is a union of finitely many complete multipartite graphs having the same number of vertices in each part. \end{itemize} \end{theorem} \begin{proof} Implication (i)$\Rightarrow$(ii) easily follows from Lemma~\ref{c25}. (ii)$\Rightarrow$(i) Let condition (ii) hold, let $(T_X,l_X)$ be the representing tree of the space $(X,d)$ and let $(T_X,\tilde{l}_X)$ be the same tree with another labeling function $\tilde{l}_X$ having the following properties: the labels of different internal nodes are different and the labels monotonically decrease along every path from the root of $T_X$ to any leaf. Taking into consideration Lemma~\ref{c25}, it is easy to see that condition (ii) of Proposition~\ref{t4} holds for the new ultrametric space $\tilde{X}$ with the representing tree $(T_X,\tilde{l}_X)$. Since the structure of $T_X$ was not changed when changing the labeling function, Proposition~\ref{t4} implies that $T_X$ is a perfect strictly $n$-ary tree.
\end{proof} \section{Ultrametric spaces generated by unrooted labeled trees}\label{sec4} Throughout this section, \(T = T(l)\) denotes an unrooted labeled tree with a labeling function $l\colon V(T)\to \RR^+$. In~\cite{Do20} a mapping \(d_l \colon V(T) \times V(T) \to \RR^{+}\) was defined as \begin{equation}\label{e11.3} d_l(u, v) = \begin{cases} 0 & \text{if } u = v,\\ \max\limits_{v^{*} \in V(P)} l(v^{*}) & \text{if } u \neq v, \end{cases} \end{equation} where \(P\) is the path joining \(u\) and \(v\) in \(T(l)\). The following proposition was also proved there. \begin{proposition}[\cite{Do20}]\label{p11.9} The following statements are equivalent for every labeled tree \(T = T(l)\). \begin{itemize} \item [\textup{(i)}] The function $d_l$ is an ultrametric on $V(T)$. \item [\textup{(ii)}] The inequality \begin{equation}\label{t11e1} \max\{l(u_1), l(v_1)\} > 0 \end{equation} holds for every \(\{u_1, v_1\} \in E(T)\). \end{itemize} \end{proposition} It is possible to show that in the case when condition~(\ref{t11e1}) does not hold, the function $d_l$ is a pseudoultrametric on $V(T)$; see~\cite{DK21} for the details. In this section we show that not every finite ultrametric space $(X,d)$ can be represented by a labeled tree $T(l)$. Thus, labeled unrooted trees $T(l)$ with labeling functions $l$ satisfying~(\ref{t11e1}) generate a new class of finite ultrametric spaces. We need the following lemma for the proof of the main result of this section. \begin{lemma}\label{l1.3} Let $X$, $|X|\geqslant 2$, be a finite ultrametric space and let $u$ be an inner node of $T_X$. Then $u$ has at least one direct successor which is a leaf if and only if the ball $B=L_{T_u}$ contains a point $z$ such that $d(z,t)=\diam B$ for all $t \in B\setminus \{z\}$. \end{lemma} \begin{proof} The necessity follows directly from Theorem~\ref{t2.9} and Proposition~\ref{lbpm}. As the point $z$ one can choose any leaf of the tree $T_X$ which is a direct successor of the inner node $u$.
The sufficiency can easily be shown by contradiction. \end{proof} \begin{theorem}\label{t1.4} Let \(T = T(l)\), $|V(T)|\geqslant 2$, be a labeled tree such that \(X=(V(T), d_l)\) is an ultrametric space. Then in the representing tree $T_X$ every inner node has at least one direct successor which is a leaf. Conversely, let $X$, $|X|\geqslant 2$, be an ultrametric space such that in the representing tree $T_X$ every inner node has at least one direct successor which is a leaf. Then $X$ can be represented by a labeled unrooted tree \(T = T(l)\). \end{theorem} \begin{proof} Let $B=B(x,r)$ be any ball in $X$ with center $x$ and diameter $r$. Since $X$ is finite, the diameter is attained, i.e., there exists $y\in B$ such that $d_l(x,y)=r$. According to~(\ref{e11.3}) there exists a vertex $z\in V(P)$ such that $l(z)=r$ (possibly $z=x$ or $z=y$), where $P$ is the unique path connecting $x$ and $y$ in $T(l)$. Since $d_l(x,z)=r$ we have $z\in B$. Let us describe the set $B$ on the tree $T(l)$. According to the definition of $d_l$ we have $t\in B$, $t\neq x$, if $ \max\limits_{v^*\in V(P)}l(v^*)\leqslant r $, where $P$ is the path joining $x$ and $t$ in $T(l)$. Hence, using~(\ref{e11.3}) and the fact that $r=l(z)$ is the maximal label among all labels of vertices of $T$ belonging to $B$, we see that $d_l(t,z)=r$ for all $t\in B\setminus \{z\}$. Thus, it follows from Lemma~\ref{l1.3} and Proposition~\ref{lbpm} that every inner node of $T_X$ has at least one direct successor which is a leaf. Conversely, let us now describe a procedure for constructing the labeled tree $T(l)$ from the representing tree $(T_X,l_X)$ in which every inner node satisfies the above-mentioned property.
Let the root $r$ of $T_X$ have the label $l_X(r)$, let $r_1,...,r_n$ be the direct successors of $r$ which are leaves, $n\geqslant 1$, and let $s_1,...,s_k$ be the direct successors of $r$ which are inner nodes, $k\geqslant 1$, with the respective labels $l_X(s_1),...,l_X(s_k)$. Set $V(T(l)):=\{r_1,...,r_n\}$, $E(T(l)):=\{ \{r_1,r_2\},...,\{r_{n-1},r_n\} \}$, $l(r_1)=\cdots = l(r_n):=l_X(r)$. Further, let $t_{11},...,t_{1n_1}$ be the direct successors of $s_1$ which are leaves, $n_1\geqslant 1$, and $u_{11},...,u_{1p_1}$ be the direct successors of $s_1$ which are inner nodes, $p_1\geqslant 0$,\ldots, $t_{k1},...,t_{kn_k}$ be the direct successors of $s_k$ which are leaves, $n_k\geqslant 1$, and $u_{k1},...,u_{kp_k}$ be the direct successors of $s_k$ which are inner nodes, $p_k\geqslant 0$. Set $$V(T(l)):=V(T(l)) \cup\{t_{11},...,t_{1n_{1}}\} \cup \ldots \cup\{t_{k1},...,t_{kn_{k}}\},$$ $$E(T(l)):=E(T(l))\cup\{ \{r_n, t_{11}\},\{t_{11},t_{12}\},...,\{t_{1n_1-1},t_{1n_1}\} \},$$ $$ \ldots $$ $$E(T(l)):=E(T(l))\cup\{ \{r_n, t_{k1}\},\{t_{k1},t_{k2}\},...,\{t_{kn_k-1},t_{kn_k}\} \},$$ and $l(t_{11})=...=l(t_{1n_{1}}):=l_X(s_1)$, \ldots, $l(t_{k1})=...=l(t_{kn_{k}}):=l_X(s_k)$. Thus, the degree of $r_n$ is equal to $1+k$. After that we look at the successors of the inner nodes $u_{11},...,u_{1p_1}$,...,$u_{k1},...,u_{kp_k}$ and repeat the same procedure. Examples of a representing tree $(T_X,l_X)$ and the corresponding tree $T(l)$ are depicted in Figure~\ref{fig11} and in Figure~\ref{fig13}, respectively. In order to finish the proof it suffices to note that the ultrametric $d_l$ defined by~(\ref{e11.3}) indeed coincides with the ultrametric $d$.
\end{proof} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[ node distance=1cm, on grid=true, level 1/.style={level distance=1cm,sibling distance=2.5cm}, level 2/.style={level distance=1cm,sibling distance=1.5cm}, level 3/.style={level distance=1cm,sibling distance=.9cm}, solid node/.style={circle,draw,inner sep=1.5,fill=black}, hollow node/.style={circle,draw,inner sep=1.5}, scale = 0.85 ] \node [hollow node, label={[label distance=20pt] left:{}}] (A2) at (3,0) {\(r\)} child {node [hollow node]{\(s_1\)} child {node [hollow node]{\(t_{11}\)} child {node [hollow node]{\(x_{8}\)}} child {node [hollow node]{\(x_{9}\)}} } child {node [hollow node]{\(x_3\)}} child {node [hollow node]{\(t_{12}\)} child [sibling distance=1cm] {node [hollow node]{\(x_{10}\)}} child [sibling distance=1cm] {node [hollow node]{\(x_{11}\)}} } child {node [hollow node]{\(x_4\)}} child {node [hollow node]{\(t_{13}\)} child [sibling distance=1cm] {node [hollow node]{\(x_{12}\)}} child [sibling distance=1cm] {node [hollow node]{\(x_{13}\)}} } } child {node [hollow node]{\(x_1\)}} child {node [hollow node]{\(x_2\)}} child {node [hollow node]{\(s_2\)} child {node [hollow node]{\(x_5\)}} child {node [hollow node]{\(t_{21}\)} child [sibling distance=1cm] {node [hollow node]{\(x_{14}\)}} child [sibling distance=1cm] {node [hollow node]{\(x_{15}\)}} child [sibling distance=1cm] {node [hollow node]{\(x_{16}\)}} } child {node [hollow node]{\(x_6\)}} child {node [hollow node]{\(x_7\)}} }; \end{tikzpicture} \caption{The canonical representing tree $(T_X,l_X)$ of a finite ultrametric space $(X,d)$ with $X=\{x_1,...,x_{16}\}$.} \label{fig11} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[ node distance=1.5cm, on grid, hollow node/.style={circle,draw,inner sep=1.5} ] \tikzset{solid/.style={circle,draw,inner sep=1.5,fill=black}} \node [hollow node, label=above:\tiny{$l(r)$}] (x1) {$x_1$}; \node [hollow node, label=above:\tiny{$l(r)$}] (x2) [right=of x1] {$x_2$}; \node 
[hollow node, label=above:\tiny{$l(s_1)$}] (x3) [right=of x2, yshift=1.5cm] {$x_3$}; \node [hollow node, label=above:\tiny{$l(s_1)$}] (x4) [right=of x3] {$x_4$}; \node [hollow node, label=above:\tiny{$l(t_{11})$}] (x8) [right=of x4, yshift=1.4cm] {$x_8$}; \node [hollow node, label=above:\tiny{$l(t_{11})$}] (x9) [right=of x8] {$x_{9}$}; \node [hollow node, label=above:\tiny{$l(t_{12})$}] (x10) [right=of x4] {$x_{10}$}; \node [hollow node, label=above:\tiny{$l(t_{12})$}] (x11) [right=of x10] {$x_{11}$}; \node [hollow node, label=above:\tiny{$l(t_{13})$}] (x12) [right=of x4, yshift=-1.4cm] {$x_{12}$}; \node [hollow node, label=above:\tiny{$l(t_{13})$}] (x13) [right=of x12] {$x_{13}$}; \draw (x4)--(x8); \draw (x4)--(x10); \draw (x4)--(x12); \draw (x8)--(x9); \draw (x10)--(x11); \draw (x12)--(x13); \node [hollow node, label=above:\tiny{$l(s_2)$}] (x5) [right=of x2, yshift=-1.5cm] {$x_5$}; \node [hollow node, label=above:\tiny{$l(s_2)$}] (x6) [right=of x5] {$x_6$}; \node [hollow node, label=above:\tiny{$l(s_2)$}] (x7) [right=of x6] {$x_7$}; \node [hollow node, label=above:\tiny{$l(t_{21})$}] (x14) [right=of x7] {$x_{14}$}; \node [hollow node, label=above:\tiny{$l(t_{21})$}] (x15) [right=of x14] {$x_{15}$}; \node [hollow node, label=above:\tiny{$l(t_{21})$}] (x16) [right=of x15] {$x_{16}$}; \draw (x1)--(x2)--(x3)--(x4); \draw (x2)--(x5)--(x6)--(x7); \draw (x7)--(x14)--(x15)--(x16); \end{tikzpicture} \end{center} \caption{The labeled tree $T(l)$ of the space $(X,d)$.} \label{fig13} \end{figure} \section{Some classes of finite ultrametric spaces}\label{s5} In this section we give a detailed survey of some special classes of finite ultrametric spaces which have been considered during the last ten years. The structure of representing trees will be the main tool in studying hereditary properties of spaces from these classes. The images of representing trees of the spaces discussed in this section can be found in the corresponding references. \noindent\textbf{2.1.
Spaces extremal for the Gomory--Hu inequality.} In 1961 R.\,E.~Gomory and T.\,C.~Hu \cite{GomoryHu(1961)} proved the inequality \mbox{$|\operatorname{Sp}(X)| \leqslant |X|$} for an arbitrary finite ultrametric space $X$. Denote by $\mfu$ the class of finite ultrametric spaces $X$ such that $\abs{\Sp{X}} = \abs{X}$. In~\cite{PD(UMB)} two descriptions of the spaces $X \in \mfu$ were obtained: in terms of the graphs $G'_{r,X}$ (see Definitions~\ref{d14} and~\ref{d15}) and in terms of representing trees (see Theorem~\ref{t22} below). In~\cite{DP19} it was proved that \(X\in \mfu\) if and only if there are no equilateral triangles in \(X\) and the graph \(G'_{r, X}\) is connected for every nonzero \(r \in \Sp{X}\). One more criterion for $X \in \mfu$, in terms of weighted Hamiltonian cycles and weighted Hamiltonian paths, was proved in~\cite{DPT}. \begin{theorem}[\!\cite{PD(UMB)}]\label{t22} Let $(X, d)$ be a finite ultrametric space with $|X| \geqslant 2$. The following conditions are equivalent. \begin{enumerate} \item [\textup{(i)}] $(X, d) \in \mathfrak U$. \item [\textup{(ii)}] $G'_{r,X}$ is complete bipartite for every nonzero $r\in \operatorname{Sp}(X)$. \item [\textup{(iii)}] $T_X$ is strictly binary and the labels of different internal nodes are different. \end{enumerate} \end{theorem} The following corollary of Theorem~\ref{t22} in fact states that subspaces of a space from the class $\mfu$ inherit the class. \begin{corollary}[\!\!\cite{PD(UMB)}]\label{c22} Let $X \in \mfu$ and let $Y$ be a nonempty subspace of $X$. Then $Y \in \mfu$. \end{corollary} \noindent\textbf{2.2. Spaces for which the labels of different internal nodes of the representing trees are different.} Omitting in statement (iii) of Theorem~\ref{t22} the condition ``$T_X$ is strictly binary'' we obtain the class of finite ultrametric spaces $X$ for which the labels of different internal nodes of $T_X$ are different. In the following theorem we combine results obtained in~\cite{DP20} and~\cite{DP19}.
\begin{theorem}[\!\!\cite{DP20, DP19}]\label{t1} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$. The following conditions are equivalent. \begin{itemize} \item[\textup{(i)}] The labels of different internal nodes of $T_X$ are different. \item[\textup{(ii)}] The graph $G'_{r,X}$ is complete multipartite for every nonzero $r\in \Sp{X}$. \item[\textup{(iii)}] The graph \(G'_{r,X}\) is connected for every nonzero \(r \in \Sp{X}\). \item[\textup{(iv)}] The diameters of different nonsingular balls are different. \item[\textup{(v)}] The equality $$ |\Sp{X}| = |\mathbf{B}_X| - |X| + 1 $$ holds. \end{itemize} \end{theorem} \noindent\textbf{2.3. Spaces for which the representing trees are strictly binary.} Omitting in statement (iii) of Theorem~\ref{t22} the condition ``the labels of different internal nodes are different'' we obtain the class of finite ultrametric spaces having strictly binary representing trees. In what follows we identify a finite ultrametric space $(X, d)$ with the complete weighted graph $G_X = (G_X, w)$, \(w \colon E(G_X) \to \mathbb{R}^{+}\), such that $V(G_X) = X$ and $w(\{x,y\}) = d(x,y)$ for all distinct \(x\), \(y \in X\). \begin{proposition}[\!\!\cite{DPT}]\label{p10} Let $(X,d)$ be a finite nonempty ultrametric space. The following conditions are equivalent. \begin{itemize} \item[\textup{(i)}] $T_X$ is strictly binary. \item[\textup{(ii)}] If $Y\subseteq X$ and $|Y|\geqslant 3$, then there exists a Hamiltonian cycle $C \subseteq G_{Y}$ with exactly two edges of maximal weight. \item[\textup{(iii)}] There are no equilateral triangles in $(X,d)$. \end{itemize} \end{proposition} \noindent\textbf{2.4. Spaces for which the representing trees are strictly $n$-ary.} Let $(X,d)$ be a metric space. Recall that balls $B_1$, $\ldots$, $B_k$ in $(X,d)$ are \emph{equidistant} if there is $r>0$ such that $d(x_i, x_j)=r$ holds whenever $x_i \in B_i$ and $x_j \in B_j$ and $1\leqslant i< j \leqslant k$.
Every two disjoint balls in any ultrametric space are equidistant. \begin{theorem}[\!\cite{DP20}]\label{t27} Let $(X,d)$ be a finite ultrametric space with $|X| \geqslant 2$ and let $n\geqslant 2$ be an integer. The following conditions are equivalent. \begin{itemize} \item [\textup{(i)}] $T_X$ is strictly $n$-ary. \item [\textup{(ii)}] For every nonzero $t\in \Sp{X}$, the graph $G'_{t,X}$ is the union of $p$ complete $n$-partite graphs, where $p$ is the number of internal nodes of $T_X$ labeled by~$t$. \item [\textup{(iii)}] For every nonsingular ball $B \in \mathbf B_X$, there are equidistant disjoint balls $B_1,...,B_n \in \mathbf B_X$ such that $B=\bigcup\limits_{j=1}^n B_j$. \item [\textup{(iv)}] The equality \begin{equation*}(n-1)|\mathbf{B}_Y| +1 = n |Y| \end{equation*} holds for every ball $Y\in \mathbf B_X$. \end{itemize} \end{theorem} Write $$ \Delta^+(T) := \max_{v \in V(T)} \delta^+(v). $$ \begin{corollary}[\!~\cite{DP20}]\label{c2.10} The inequality \begin{equation*}|\mathbf{B}_X| \geqslant \frac{\Delta^+(T_X)|X|-1}{\Delta^+(T_X)-1} \end{equation*} holds for every finite nonempty ultrametric space $(X,d)$. This inequality becomes an equality if and only if $T_X$ is a strictly $n$-ary rooted tree with $n = \Delta^+(T_X)$. \end{corollary} \begin{proposition}[\!\!\cite{DP20}]\label{p2.11} Let $(X, d)$ be a finite ultrametric space with $|X|\geq 2$. Then the inequality \begin{equation*}2|\mathbf{B}_X| \geqslant |\operatorname{Sp}(X)| + \frac{2\Delta^+(T_X)|X|-\Delta^+(T_X)-|X|}{\Delta^+(T_X) - 1} \end{equation*} holds. This inequality becomes an equality if and only if $T_X$ is a strictly $n$-ary rooted tree with injective internal labeling and $n=\Delta^+(T_X)$. \end{proposition} \noindent\textbf{2.5. Ultrametric spaces which are as rigid as possible.} Let $(X, d)$ be a metric space and let $\operatorname{Iso}(X)$ be the group of isometries of $(X, d)$. We say that $(X, d)$ is \emph{rigid} if $|\operatorname{Iso}(X)|=1$.
It is clear that $(X, d)$ is rigid if and only if $g(x)=x$ for every $x \in X$ and every $g \in \Iso(X)$. For every self-map $f\colon X\to X$ we denote by $\Fix(f)$ the set of fixed points of $f$. Using this notation we obtain that a finite metric space $(X, d)$ is rigid if and only if $$ \min_{g \in \operatorname{Iso}(X)} |\Fix(g)|=|X|. $$ It is easy to show that finite ultrametric spaces $X$ with $|X| \geq 2$ are not rigid, since for every such $X$ there is a self-isometry having exactly $|X|-2$ fixed points, see Proposition 3.2 in~\cite{DPT(Howrigid)}. If a metric space $(X, d)$ is finite, nonempty and nonrigid, then the inequality \begin{equation*} \min_{g \in \operatorname{Iso}(X)} |\Fix(g)| \leq |X|-2 \end{equation*} holds, because the existence of $|X|-1$ fixed points of $g \in \operatorname{Iso}(X)$ implies that $g$ is the identity mapping. The quantity $\min_{g \in \operatorname{Iso}(X)} |\Fix(g)|$ can be considered as a measure of ``rigidity'' of finite metric spaces $(X, d)$. Thus the finite ultrametric spaces satisfying the equality \begin{equation*} \min_{g \in \operatorname{Iso}(X)} |\Fix(g)| = |X|-2 \end{equation*} are as rigid as possible. Let us denote by $\mathfrak{R}$ the class of all finite ultrametric spaces $(X, d)$ which satisfy this equality. The following theorem gives a characterization of the spaces from the class $\mathfrak{R}$ in terms of representing trees. \begin{theorem}[\!\!\cite{DPT(Howrigid)}] Let $(X, d)$ be a finite ultrametric space with $|X| \geq 2$. Then the following statements are equivalent. \begin{itemize} \item [\textup{(i)}] $(X, d) \in \mathfrak{R}$. \item [\textup{(ii)}] $|\Iso(X)|=2$. \item [\textup{(iii)}] $T_X$ is strictly binary with exactly one inner node at each level except the last level. \end{itemize} \end{theorem} \noindent\textbf{2.6.
Weak similarity generating spaces.} Denote by $\tilde{\mathfrak R}$ the class of finite ultrametric spaces $X$ for which $T_X$ has exactly one inner node at each level except the last level. It is clear that $\mathfrak R$ is a proper subclass of $\tilde{\mathfrak R}$. \begin{definition}\label{d4.5} Let $(X,d)$ and $(Y,\rho)$ be metric spaces. A bijective mapping $\Phi\colon X\to Y$ is a \emph{weak similarity} if there exists a strictly increasing bijection $f\colon \Sp{X}\to \Sp{Y}$ such that the equality \begin{equation*}f(d(x,y))=\rho(\Phi(x),\Phi(y)) \end{equation*} holds for all $x$, $y\in X$. The function $f$ is said to be a \emph{scaling function} of $\Phi$. If $\Phi\colon X\to Y$ is a weak similarity, we write $X \we Y$ and say that $X$ and $Y$ are \emph{weakly similar}. The pair $(f,\Phi)$ is called a \emph{realization} of $X\we Y$. \end{definition} In~\cite{DP2} the notion of weak similarity was introduced in a slightly different but equivalent form. The next theorem gives a description of the finite ultrametric spaces for which the isomorphism of representing trees implies the weak similarity of the spaces. \begin{theorem}[\!\!\cite{P18}] Let $X$ be a finite ultrametric space. Then the following statements are equivalent. \begin{itemize} \item [\textup{(i)}] The implication $(\overline{T}_X\simeq \overline{T}_Y) \Rightarrow (X\we Y)$ holds for every finite ultrametric space $Y$. \item [\textup{(ii)}] $X\in \tilde{\mathfrak{R}}$. \end{itemize} \end{theorem} The following lemma in fact states that subspaces of a space from the class $\tilde{\mathfrak{R}}$ inherit the class. \begin{lemma}[\cite{DP18}]\label{l5.11} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$ and let $T_X$ have exactly one internal node at each level except the last level. Then for every $Y\subseteq X$, $|Y|\geqslant 2$, the representing tree $T_Y$ of the space $(Y,d)$ also has exactly one internal node at each level except the last level.
\end{lemma} Denote by $\mathfrak D$ the class of all finite ultrametric spaces $X$ such that different internal nodes of $T_X$ have different labels. It is clear that $\mathfrak R$ and $\tilde{\mathfrak R}$ are subclasses of $\mathfrak D$. A question arises whether there exist finite ultrametric spaces $X$, $Y\in \mathfrak D$ which do not belong to the class $\tilde{\mathfrak R}$ and for which the isomorphism of $\overline{T}_X$ and $\overline{T}_Y$ implies $X\we Y$. Let us define a rooted tree $T$ with $n$ levels by the following two conditions: (A) There is only one inner node at the level $k$ of $T$ whenever $k<n-1$. (B) If $u$ and $v$ are different inner nodes at the level $n-1$, then the numbers of offspring of $u$ and $v$ are equal. Denote by $\mct$ the class of all finite ultrametric spaces $X$ for which $T_X$ satisfies conditions (A) and (B). \begin{theorem}[\!\!\cite{P18}] Let $X \in \mathfrak D$ be a finite ultrametric space. Then the following statements are equivalent. \begin{itemize} \item [\textup{(i)}] The implication $(\overline{T}_X\simeq \overline{T}_Y) \Rightarrow (X\we Y)$ holds for every finite ultrametric space $Y \in \mathfrak D$. \item [\textup{(ii)}] $X \in \mct$. \end{itemize} \end{theorem} \noindent\textbf{2.7. Isometry generating spaces.} Let us denote by $\mathcal{TSI}$ (tree-spectrum isometric) the class of all finite ultrametric spaces~$(X,d)$ which satisfy the following condition: if $(Y,\rho)$ is a finite ultrametric space such that $\overline{T}_X \simeq \overline{T}_Y$ and $\operatorname{Sp}(X) = \operatorname{Sp}(Y)$, then $(X,d)$ and $(Y,\rho)$ are isometric. Let $T$ be a rooted tree. The \emph{height} of $T$ is the number of edges on the longest path between the root and a leaf of $T$; it will be denoted by $h(T)$. Thus, \begin{equation*}h(T)= \max_{v \in L_T} \operatorname{lev}(v).
\end{equation*} \begin{theorem}[\!\!\cite{DP18}]\label{t3.7} Let $T$ be a rooted tree with $\delta^+(u)\geqslant 2$ for every internal node $u$. Then the following conditions are equivalent. \begin{itemize} \item[\textup{(i)}] The tree $T$ contains exactly one internal node at the levels $1$, $\ldots$, $h(T)-2$ and at most two internal nodes at the level $h(T)-1$. Moreover, if $u$ and $v$ are different internal nodes with $$ \operatorname{lev}(u)=\operatorname{lev}(v)=h(T)-1, $$ then $\delta^+(u)=\delta^+(v)$ holds. \item[\textup{(ii)}] The statement $\overline{T}_X \simeq T$ implies $(X,d) \in \mathcal{TSI}$ for every finite ultrametric space $(X,d)$. \end{itemize} \end{theorem} \begin{theorem}[\!\!\cite{DP18}]\label{t3.5} Let $(X,d)$ be a finite ultrametric space with $|X|\geqslant 2$. Suppose that the representing tree $T_X$ has an injective internal labeling. Then the following statements are equivalent. \begin{enumerate} \item [\textup{(i)}] The space $(X,d)$ belongs to $\mathcal{TSI}$. \item [\textup{(ii)}] The equality $\operatorname{lev}(v)=\operatorname{lev}(u)$ implies $$ \operatorname{lev}(v)=\operatorname{lev}(u)=h(T_X)-1 \text{ and } \delta^+(u)=\delta^+(v) $$ for all distinct internal nodes $u$, $v \in V(T_X)$. \end{enumerate} \end{theorem} \begin{corollary}\label{c5.3} Let $(X, d)$ be an ultrametric space with $|X| \leqslant 4$. Then $(X, d)$ belongs to $\mathcal{TSI}$. \end{corollary} \begin{remark} The structures of representing trees of spaces from the classes $\mathcal{TSI}$ and $\tilde{\mathfrak{R}}$ coincide. \end{remark} \noindent\textbf{2.8. Spaces admitting ball-preserving mappings.} Let $X$ and $Y$ be nonempty metric spaces. 
A mapping $F\colon X\to Y$ is called ball-preserving if the membership relations \begin{equation*}F(Z)\in \textbf{B}_Y \quad \text{and}\quad F^{-1}(W)\in \textbf{B}_X \end{equation*} hold for all balls $Z\in \textbf{B}_X$ and all balls $W\in \textbf{B}_Y$, where $F(Z)$ is the image of $Z$ under the mapping $F$ and $F^{-1}(W)$ is the preimage of $W$ under this mapping. For every finite nonempty ultrametric space $X$ denote by $\mathfrak{F}_1^*(X)$ the class of all finite nonempty ultrametric spaces $Y$ for which there are ball-preserving mappings $F \colon X \to Y$. Our next goal is to describe the finite ultrametric spaces \((X, d)\) which admit a ball-preserving mapping \(F \colon Y \to Z\) for every nonempty \(Y \subseteq X\) and each nonempty \(Z \subseteq Y\), i.e., \begin{equation*}(Z, d|_{Z \times Z}) \in \mathfrak{F}_1^{*} (Y) \end{equation*} holds for all nonempty \(Y \subseteq X\) and \(Z \subseteq Y\). \begin{theorem}[\!~\cite{DP19}]\label{th4.21} Let $(X, d)$ be a finite ultrametric space with $|X| \geqslant 2$. Then the following statements are equivalent. \begin{itemize} \item[\textup{(i)}] We have \((Z, d|_{Z \times Z}) \in \mathfrak{F}_1^*(Y)\) for every nonempty $Y \subseteq X$ and every nonempty \(Z \subseteq Y\). \item[\textup{(ii)}] $T_X$ is strictly binary and one of the following conditions is satisfied. \begin{itemize} \item[\textup{($\mathrm{i_1}$)}] For every inner node $v$ of $T_X$ there is a leaf $\{x\}$ of $T_X$ such that $\{x\}$ is a direct successor of $v$. \item[\textup{($\mathrm{i_2}$)}] $X$ is the unique node of $T_X$ for which both direct successors are inner nodes of $T_X$. \end{itemize} \end{itemize} \end{theorem} \section{Class preserving subspaces} The aim of this section is to distinguish classes of ultrametric spaces such that for every space $X$ from the given class every subspace $Y$ of $X$ also belongs to this class. In this case we shall say that the subspace $Y$ inherits the class.
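The set of balls $\mathbf{B}_X$ appearing in the theorem above can be enumerated mechanically from a distance matrix. The following is a minimal Python sketch, assuming the standard definition of a closed ball $B_r(c)=\{x\colon d(x,c)\leqslant r\}$; the function name \texttt{ballean} and the dictionary-based metric are our own illustrative choices. For a finite ultrametric space the balls obtained this way are exactly the vertex sets $L(T_v)$, $v\in V(T_X)$, of the representing tree.

```python
from itertools import product

def ballean(points, d):
    """All closed balls B_r(c) = {x : d(x, c) <= r} of a finite metric
    space, returned as a set of frozensets.  For an ultrametric space
    this collection is exactly the set of balls L(T_v) attached to the
    vertices of the representing tree T_X."""
    radii = {d(x, y) for x, y in product(points, repeat=2)}
    return {frozenset(x for x in points if d(x, c) <= r)
            for c, r in product(points, radii)}

# A 4-point ultrametric space: d(0,1)=1, d(0,2)=d(1,2)=2, d(.,3)=3.
dist = {frozenset({0, 1}): 1, frozenset({0, 2}): 2, frozenset({1, 2}): 2,
        frozenset({0, 3}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3}

def d(x, y):
    return 0 if x == y else dist[frozenset({x, y})]
```

Here the ballean consists of the four singletons together with $\{0,1\}$, $\{0,1,2\}$ and the whole space, in agreement with the fact that the balls of a finite ultrametric space are in one-to-one correspondence with the vertices of its representing tree.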
Let us first consider consecutively the classes of spaces considered in Section~\ref{s5}. Corollary~\ref{c22} states that every nonempty subspace of the space $X\in \mathfrak U$ also belongs to the class $\mathfrak U$. Consider the class of ultrametric spaces $X$ for which all labels of different internal nodes of $T_X$ are different. Let $x_1$ be a leaf of $T_X$ and $v$ be its direct predecessor. There are only two possibilities for the node $v$: \begin{itemize} \item [\textup{(i)}] $v$ has only two direct successors $x_1$ and $x_2$, \item [\textup{(ii)}] $v$ has more than two direct successors. \end{itemize} Consider a transformation of the representing tree $T_X$ to the representing tree $T_Y$, where $Y=X\setminus \{x_1\}$. In case (i) the removal of the leaf $x_1$ from the tree entails the removal of the edges $\{v,x_1\}$ and $\{v,x_2\}$. The node $x_2$ replaces $v$ and the label $l(v)$ disappears. In case (ii) the removal of the leaf $x_1$ entails only the removal of the edge $\{x_1,v\}$. In each case all labels of different internal nodes remain different. Let us consider the class of ultrametric spaces $X$ for which $T_X$ is strictly binary. According to condition (iii) of Proposition~\ref{p10} the equivalent condition is that there are no equilateral triangles in the space $X$. It is clear that there are also no equilateral triangles in any subspace $Y$ of $X$. Thus $Y$ inherits the class. It is clear that a removal of one leaf from a strictly $n$-ary tree, \mbox{$n\geqslant 3$}, makes it no longer strictly $n$-ary. Thus, this class does not have the desired property. According to Lemma~\ref{l5.11}, the weak similarity generating spaces $\mathfrak{R}$ and $\tilde{\mathfrak{R}}$ have the desired property since in each case the representing trees have exactly one inner node at each level except the last level. The class $\mct$ from subsection 2.6 as well as the class $\mathcal{TSI}$ from subsection 2.7 do not have the desired property.
The reason is that in the general case the representing trees contain two different internal nodes $u$ and $v$ at the level $h(T_X)-1$ such that the numbers of offspring of $u$ and $v$ are equal. A removal of one of the offspring violates the structure of the tree. It is easy to show that spaces admitting ball-preserving mappings have the desired property (Theorem~\ref{th4.21}). Case ($\mathrm{i_1}$) follows from Lemma~\ref{l5.11} and condition (iii) of Proposition~\ref{p10}. In case ($\mathrm{i_2}$) it is possible to apply this lemma and proposition to the subtrees $T_u$, $T_v$, where $u$ and $v$ are the direct successors of the root $X$ of the tree $T_X$. It is easy to see that finite homogeneous ultrametric spaces $X$ do not have the desired property. Clearly, a removal of one leaf from the representing tree $T_X$ violates condition (ii) of Theorem~\ref{t23}. It is easy to construct a representing tree $T_X$ with all leaves at the same level such that a removal of one leaf violates this property. For example, it suffices to consider a perfect strictly binary tree $T_X$ with $|X|\geqslant 4$. Let $X$ be an ultrametric space such that all inner nodes of $T_X$ at the same level have equal labels. A removal of only one leaf of $T_X$ leads to one of two possibilities: all labels remain at their places or one label disappears. In either case, the new tree preserves this property. Clearly, the spaces having perfect strictly $n$-ary representing trees, $n\geqslant 2$, do not have the desired property. According to Theorem~\ref{t1.4} an ultrametric space $X$ is generated by an unrooted labeled tree if and only if in the representing tree $T_X$ every inner node has at least one direct successor which is a leaf. In order to show that such spaces do not have the desired property it suffices to consider a space $X$ such that in the representing tree $T_X$ there exists at least one inner node with exactly one direct successor which is a leaf. Let us summarize the above considerations.
The classes of ultrametric spaces such that for every space $X$ from the given class every subspace of $X$ also belongs to this class are the following: \begin{itemize} \item The class $\mathfrak U$. \item The class of finite ultrametric spaces $X$ for which all labels of different internal nodes of $T_X$ are different. \item The class of finite ultrametric spaces $X$ for which $T_X$ is strictly binary. \item The class $\mathfrak R$. \item The class $\tilde{\mathfrak R}$. \item The class of finite ultrametric spaces admitting ball-preserving mappings. \item The class of finite ultrametric spaces $X$ for which all inner nodes of $T_X$ at the same level have equal labels. \end{itemize} The classes which do not fulfil this condition are the following: \begin{itemize} \item The class of finite ultrametric spaces $X$ for which $T_X$ is strictly $n$-ary, $n\geqslant 3$. \item The class $\mct$. \item The class $\mathcal{TSI}$. \item The class of finite homogeneous ultrametric spaces. \item The class of finite ultrametric spaces $X$ for which all leaves of $T_X$ are on the same level. \item The class of finite ultrametric spaces having perfect strictly $n$-ary representing trees. \item The class of finite ultrametric spaces generated by unrooted labeled trees. \end{itemize} \begin{thebibliography}{10} \bibitem{WGA} Lemin, A.~J. (2001). \newblock {On {G}elfand's problem concerning graphs, lattices, and ultrametric spaces}. \newblock In {\em AAA62 - 62nd Workshop on General Algebra}, pages 12--13, Linz, Austria, June 14-17 2001. \newblock (The abstract - http://at.yorku.ca/c/a/g/l/22.htm). \bibitem{GurVyal(2012)} Gurvich, V., Vyalyi, M. (2012). \newblock {Characterizing (quasi-)ultrametric finite spaces in terms of (directed) graphs}. \newblock {\em {Discrete Appl. Math.}, 160}(12), 1742--1756. \bibitem{PD(UMB)} Petrov, E., Dovgoshey, A. (2014). \newblock On the {G}omory-{H}u inequality. \newblock {\em J. Math. Sci., 198}(4), 392--411. \newblock Translation from \emph{Ukr. Mat. Visn., 10}(4), 469--496, 2013.
\bibitem{DPT} Dovgoshey, O., Petrov, E., Teichert, H.--M. (2015). \newblock On spaces extremal for the {G}omory-{H}u inequality. \newblock {\em p-Adic Numbers Ultrametric Anal. Appl., 7}(2), 133--142. \bibitem{DPT(Howrigid)} Dovgoshey, O., Petrov, E., Teichert, H.--M. (2017). \newblock {How rigid the finite ultrametric spaces can be?} \newblock {\em Fixed Point Theory Appl., 19}(2), 1083--1102. \bibitem{DP20} Dovgoshey, O., Petrov, E. (2020). \newblock On some extremal properties of finite ultrametric spaces. \newblock {\em p-Adic Numbers Ultrametric Anal. Appl., 12}(1), 1--11. \bibitem{DP19} Dovgoshey, O., Petrov, E. (2019). \newblock {Properties and morphisms of finite ultrametric spaces and their representing trees}. \newblock {\em {\(p\)-Adic Numbers Ultrametric Anal. Appl.}, 11}(1), 1--20. \bibitem{DP18} Dovgoshey, O., Petrov, E. (2018). \newblock {From isomorphic rooted trees to isometric ultrametric spaces}. \newblock {\em {\(p\)-Adic Numbers Ultrametric Anal. Appl.} 10}(4), 287--298. \bibitem{Do19} Dovgoshey, O. (2019). \newblock {Finite ultrametric balls}. \newblock {\em {\(p\)-Adic Numbers Ultrametric Anal. Appl.}, 11}(3), 177--191. \bibitem{Do20BBMS} Dovgoshey, O. (2020). \newblock {Combinatorial properties of ultrametrics and generalized ultrametrics}. \newblock {\em {Bull. Belg. Math. Soc. - Simon Stevin}, 27}(3), 379--417. \bibitem{Do20} Dovgoshey, O. (2020). \newblock {Isomorphism of trees and isometry of ultrametric spaces}. \newblock {\em {Theory Appl. Graphs}, 7}(2), \newblock Id/No 3, 1--39 \bibitem{P18} Petrov, E. (2018). \newblock Weak similarities of finite ultrametric and semimetric spaces. \newblock {\em p-Adic Numbers Ultrametric Anal. Appl., 10}(2), 108--117. \bibitem{BM} Bondy, J.~A., Murty, U.~S.~R. (2008). \newblock {\em Graph theory}. \newblock {Graduate Texts in Mathematics}, volume 244. \newblock Springer, New York. \bibitem{Di} Diestel, R. (2005). \newblock {\em {Graph Theory}}. \newblock {Graduate Texts in Mathematics}, volume 173. 
\newblock Springer, Berlin. \bibitem{DDP(P-adic)} Dordovskyi, D., Dovgoshey, O., Petrov, E. (2011). \newblock Diameter and diametrical pairs of points in ultrametric spaces. \newblock {\em p-Adic Numbers Ultrametric Anal. Appl., 3}(4), 253--262. \bibitem{GV} Gurvich, V., Vyalyi, M. (2012). \newblock Ultrametrics, trees, and bottleneck arcs. \newblock {\em Math. Ed., 3}(16), 75--88. \bibitem{GNS00} Grigorchuk, R.~I., Nekrashevich, V.~V., Sushanskyi, V.~I. (2000). \newblock {Automata, dynamical systems, and groups}. \newblock {\em Proc. Steklov Inst. Math., 231}, 128--203. \bibitem{H04} Hughes, B. (2004). \newblock Trees and ultrametric spaces: a categorical equivalence. \newblock {\em Adv. Math., 189}(1), 148--191. \bibitem{Le03} Lemin, A.~J. (2003). \newblock The category of ultrametric spaces is isomorphic to the category of complete, atomic, tree-like, and real graduated lattices {LAT*}. \newblock {\em Algebra Universalis, 50}(1), 35--49. \bibitem{Ho01} {Holly}, J.~E. (2001). \newblock {Pictures of ultrametric spaces, the \(p\)-adic numbers, and valued fields.} \newblock {\em {Am. Math. Mon.}, 108}(8), 721--728. \bibitem{BS17} Beyrer, J., Schroeder, V. (2017). \newblock {Trees and ultrametric M\"obius structures}. \newblock {\em {\(p\)-Adic Numbers Ultrametric Anal. Appl.}, 9}(4), 247--256. \bibitem{P(TIAMM)} Petrov, E.~A. (2013). \newblock Ball-preserving mappings of finite ultrametric spaces. \newblock {\em Tr. Inst. Prikl. Mat. Mekh., 26}, 150--158, \newblock (In Russian). \bibitem{Fr54} {Fra\"{\i}ss\'e}, R. (1954). \newblock {Sur l'extension aux relations de quelques propri\'et\'es des ordres}. \newblock {\em {Ann. Sci. \'Ec. Norm. Sup\'er. (3)}, 71}, 363--388. \bibitem{Jo60} {Jonsson}, B. (1960). \newblock {Homogeneous universal relational systems}. \newblock {\em {Math. Scand.}, 8}, 137--142. \bibitem{Fr00} {Fra\"{\i}ss\'e}, R. (2000). \newblock {\em Theory of relations.} \newblock Stud. Logic Found. Math., volume 145.
\newblock North-Holland, Amsterdam. \bibitem{DLPS07} {Delhomm\'e}, C., {Laflamme}, C., {Pouzet}, M., {Sauer}, N. (2007). \newblock {Divisibility of countable metric spaces}. \newblock {\em {Eur. J. Comb.}, 28}(6), 1746--1769. \bibitem{DLPS08} {Delhomm\'e}, C., {Laflamme}, C., {Pouzet}, M., {Sauer}, N. (2008). \newblock {Indivisible ultrametric spaces}. \newblock {\em {Topology Appl.}, 155}(14), 1462--1478. \bibitem{DLPS16} {Delhomm\'e}, C., {Laflamme}, C., {Pouzet}, M., {Sauer}, N. (2016). \newblock On homogeneous ultrametric spaces. \newblock https://arxiv.org/abs/1509.04346. \bibitem{DADS} Black, P.~E. (2020). \newblock Perfect $k$-ary tree. \newblock In {\em Dictionary of Algorithms and Data Structures}. \newblock https://xlinux.nist.gov/dads/HTML/perfectKaryTree.html. \bibitem{DK21} Dovgoshey, O., K\"{u}c\"{u}kaslan, M. (2022). \newblock Labeled trees generating complete, compact, and discrete ultrametric spaces. \newblock {\em Ann. Comb.}, \newblock https://doi.org/10.1007/s00026-022-00581-8. \bibitem{GomoryHu(1961)} Gomory, R.~E., Hu, T.~C. (1961). \newblock Multi-terminal network flows. \newblock {\em SIAM, 9}(4), 551--570. \bibitem{DP2} Dovgoshey, O., Petrov, E. (2013). \newblock Weak similarities of metric and semimetric spaces. \newblock {\em Acta Math. Hungar., 141}(4), 301--319. \end{thebibliography} \bigskip CONTACT INFORMATION \medskip Evgeniy Aleksandrovych Petrov \\ Institute of Applied Mathematics and Mechanics of the NAS of Ukraine, Slavyansk, Ukraine \\ E-Mail: [email protected] \end{document}
2412.17416v1
http://arxiv.org/abs/2412.17416v1
Minimum spanning paths and Hausdorff distance in finite ultrametric spaces
\documentclass[12pt]{amsart} \usepackage{longtable} \usepackage{amsfonts,amsmath,mathrsfs,amsthm,amscd,amssymb,latexsym,amsbsy,pb-diagram,cite,caption,url} \usepackage{tikz} \usetikzlibrary{positioning} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{procedure}[theorem]{Procedure} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \begin{document} \title[Minimum spanning paths and Hausdorff distance]{Minimum spanning paths and Hausdorff distance in finite ultrametric spaces} \author{Evgeniy Petrov} \address{Institute of Applied Mathematics and Mechanics of the NAS of Ukraine, Dobrovolskogo str. 1, Slovyansk 84100, Ukraine} \email{[email protected]} \begin{abstract} It is shown that a minimum weight spanning tree of a finite ultrametric space can always be found in the form of a path. Like a canonical representing tree, such a path uniquely defines the whole space and, moreover, it has a much simpler structure. Thus, minimum spanning paths are a convenient tool for studying finite ultrametric spaces. To demonstrate this we use them to characterize some known classes of ultrametric spaces. An explicit formula for the Hausdorff distance in finite ultrametric spaces is also found. Moreover, the possibility of using minimum spanning paths for finding this distance is shown. \end{abstract} \keywords{Finite ultrametric space, Hausdorff distance, minimum spanning tree, representing tree, strictly $n$-ary tree, injective internal labeling} \subjclass[2010]{54E35, 05C05} \maketitle \section{Introduction} The correspondence between ultrametric spaces and trees is well-known.
A convenient presentation of finite ultrametric spaces by monotone rooted trees was proposed in~\cite{GV12,GV}. Such trees are known as canonical trees. So-called Cartesian trees are an alternative approach, see~\cite{DLW09} and also~\cite[p.~1744]{GV12} for details. These two types of trees can be characterized as labeled trees since the distance between two points in an ultrametric space is defined by some label assigned to a vertex of the corresponding representing tree. It is also possible to distinguish another type of trees, the so-called weighted trees. The first example is equidistant trees, see e.g.,~\cite[p.~151]{SS03}. Here the distance between two points of a space is defined as the sum of the weights of the edges of the path connecting these points in the corresponding tree. The second example is the well-known minimum weight spanning trees. In this case the distance between two points is defined as the maximal weight of an edge of the path connecting these points. For connections between infinite ultrametric spaces and trees see, e.g.,~\cite{H04, H12, LB17, L03}. The problem of finding a minimum spanning tree of a weighted graph is well studied, see e.g., \cite{FT87, KKT95, CLRS09, PR02} for different algorithms. The weighted graph obtained from a finite metric space has a restriction on weights caused by the triangle inequality. The algorithm for this case was considered in~\cite{I99}. The algorithms for the construction of minimum spanning trees of finite ultrametric spaces can be found, e.g., in~\cite{GV12, WCT99, CC04, S20}. A simple algorithm which produces minimum spanning paths in the case of finite ultrametric spaces was proposed in~\cite{D82}. In Section~\ref{MSPA} we analyse the work of this algorithm on representing trees. The question of how to distinguish the set of balls of a finite ultrametric space using only an arbitrary minimum spanning path of this space is also considered.
In Section~\ref{MSTFSC}, using minimum spanning paths, we give a set of criteria which guarantee that a finite ultrametric space belongs to some special class. Moreover, we give a criterion which guarantees that every minimum spanning tree of the space is a path. Some questions related to the Hausdorff metric and the Gromov--Hausdorff metric (see, e.g.,~\cite{BBI01,G01} for the definitions of these metrics) in non-Archimedean spaces were considered in~\cite{Q09,Q14}. In particular, in~\cite[Lemma~2.1]{Q14} an explicit formula was found for the Hausdorff distance between any two distinct balls of a finite ultrametric space. See also Lemma 2.6 in~\cite{D19} for a variation of this formula. In Section~\ref{HDandMST} we generalize this result by giving an explicit formula for the Hausdorff distance between arbitrary subsets of a finite ultrametric space. We also give an example of using minimum spanning paths for calculating this distance. Note that the Hausdorff distance has various applications in both applied and abstract areas of mathematics including computer vision, computer graphics, nonsmooth analysis, optimization theory and calculus of variations, see, e.g.,~\cite{KMCM20, DFS17, AVZ16} and references therein. Let us introduce the main definitions and required technical results. \begin{definition}\label{d1.1} An \textit{ultrametric} on a set $X$ is a function $d\colon X\times X\rightarrow \mathbb{R}^+$, $\mathbb R^+ = [0,\infty)$, such that for all $x,y,z \in X$: \begin{itemize} \item [(i)] $d(x,y)=d(y,x)$, \item [(ii)] $(d(x,y)=0)\Leftrightarrow (x=y)$, \item [(iii)] $d(x,y)\leq \max \{d(x,z),d(z,y)\}$. \end{itemize} \end{definition} Inequality (iii) is often called the {\it strong triangle inequality}. The \emph{spectrum} of an ultrametric space $(X,d)$ is the set $$ \operatorname{Sp}(X) := \{d(x,y)\colon x,y \in X\}. $$ The quantity $$ \operatorname{diam} X := \sup\{d(x,y)\colon x,y\in X\} $$ is the diameter of $(X,d)$.
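The axioms of the definition above are straightforward to check by brute force on a finite set. The sketch below (the function names are our own illustrative choices) verifies them and computes $\operatorname{Sp}(X)$; the diameter is then $\max \operatorname{Sp}(X)$.

```python
def is_ultrametric(points, d):
    """Brute-force check of axioms (i)-(iii) of an ultrametric
    on a finite set of points."""
    for x in points:
        for y in points:
            if d(x, y) != d(y, x):                  # (i)  symmetry
                return False
            if (d(x, y) == 0) != (x == y):          # (ii) d(x,y)=0 iff x=y
                return False
            if any(d(x, y) > max(d(x, z), d(z, y))  # (iii) strong
                   for z in points):                #      triangle inequality
                return False
    return True

def spectrum(points, d):
    """Sp(X) = {d(x, y) : x, y in X}; diam X = max(Sp(X))."""
    return {d(x, y) for x in points for y in points}
```

For instance, three collinear points with the usual distance on $\mathbb{R}$ satisfy the ordinary triangle inequality but fail the strong one, which is exactly what separates ultrametrics from general metrics.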
A \textit{graph} is a pair $(V,E)$ consisting of a nonempty set $V$ and a set $E$ whose elements are unordered pairs of different points from $V$. For a graph $G=(V,E)$, the sets $V=V(G)$ and $E=E(G)$ are called \textit{the set of vertices (or nodes)} and \textit{the set of edges}, respectively. If $\{x,y\} \in E(G)$, then the vertices $x$ and $y$ are \emph{adjacent}. A graph $G=(V,E)$ together with a function $w\colon E\rightarrow \mathbb{R}^+$ is called a \textit{weighted} graph, and $w$ is called a \textit{weight}. Recall that a \emph{path} is a nonempty graph $P$ whose vertices can be numbered so that $$ V(P) = \{x_1,...,x_k\}\quad \text{and} \quad E(P) = \{\{x_1,x_2\},...,\{x_{k-1},x_k\}\}. $$ A finite graph $C$ is a \textit{cycle} if $|V(C)|\geq 3$ and there exists an enumeration $(v_1,v_2,...,v_n)$ of its vertices such that \begin{equation*} (\{v_i,v_j\}\in E(C))\Leftrightarrow (|i-j|=1\quad \mbox{or}\quad |i-j|=n-1). \end{equation*} A graph $H$ is a \emph{subgraph} of a graph $G$ if $$ V(H) \subseteq V(G) \quad \text{and} \quad E(H) \subseteq E(G). $$ We write $H\subseteq G$ if $H$ is a subgraph of $G$. A cycle $C$ is a \emph{Hamilton cycle} in a graph $G$ if $C\subseteq G$ and $V(C) =V(G)$. A connected graph without cycles is called a \emph{tree}. A vertex $v$ of a tree $T$ is a \emph{leaf} if the \emph{degree} of $v$ is less than two, i.e., $$ \delta(v) = |\{u \in V(T) \colon \{u, v\} \in E(T)\}| < 2. $$ If a vertex $v \in V(T)$ is not a leaf of $T$, then we say that $v$ is an \emph{internal node} of $T$. A tree $T$ may have a distinguished vertex $r$ called the \emph{root}; in this case $T$ is a \emph{rooted tree} and we write $T=T(r)$. A \emph{labeled rooted tree} $T=T(r,l)$ is a rooted tree $T(r)$ with a labeling $l\colon V(T)\to L$ where $L$ is a set of labels. Generally we follow terminology used in~\cite{BM}. 
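The degree-based distinction between leaves and internal nodes translates directly into code. Below is a small sketch (our own helper, not from the paper), assuming the tree is given as an edge list; isolated vertices cannot be represented this way, so a tree with at least one edge is assumed.

```python
from collections import Counter

def leaves_and_internal_nodes(edges):
    """Split the vertices of a tree, given as a list of unordered
    edges, into leaves (degree < 2) and internal nodes (the rest)."""
    degree = Counter(v for edge in edges for v in edge)
    leaves = {v for v, k in degree.items() if k < 2}
    return leaves, set(degree) - leaves
```

For a path the two endpoints are the leaves; for a star all vertices except the center are leaves.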
\begin{definition}\label{def3.1} A graph $G$, $E(G) \neq \varnothing$, is called \emph{complete $k$-partite} if its vertices can be divided into disjoint nonempty sets $X_1$, $\ldots$, $X_k$ so that $k \geqslant 2$, there are no edges joining vertices of the same set $X_i$, and any two vertices from different sets $X_i$ and $X_j$, $1\leqslant i,j \leqslant k$, are adjacent. In this case we write $G=G[X_1,\ldots, X_k]$. \end{definition} We shall say that $G$ is a {\it complete multipartite graph} if there exists $k$ such that $G$ is complete $k$-partite. \begin{definition}\label{d14} Let $(X,d)$ be a finite ultrametric space and let $t\in \operatorname{Sp}(X)$ be nonzero. Denote by $G_{t,X}$ a graph for which $V(G_{t,X})=X$ and $$ (\{u,v\}\in E(G_{t,X}))\Leftrightarrow (d(u,v)=t). $$ In particular, for $|X| \geqslant 2$ we write $G_{D, X} := G_{\operatorname{diam} X, X}$ for short and call $G_{D, X}$ the \emph{diametrical graph} of $X$. \end{definition} \begin{theorem}[\!\cite{DDP(P-adic)}]\label{t13} Let $(X,d)$ be a finite ultrametric space, $|X|\geqslant 2$. Then $G_{D,X}$ is complete multipartite. \end{theorem} In the following procedure with every nonempty finite ultrametric space $(X, d)$ we associate a labeled rooted tree $T_X=T_X(r,l)$ with $r=X$ and $l\colon V(T_{X})\to \mathbb{R}^{+}$. For these purposes we use an approach based on diametrical graphs which was first considered in~\cite{PD(UMB)}. The only difference between the obtained trees and the canonical trees is that we name the vertices of the tree $T_X$ by subsets of the set $X$, which in fact are parts of the corresponding diametrical graphs. \begin{procedure}\label{p1} If $X$ is a one-point set, then $T_X$ is the rooted tree consisting of the node $X$ with the label~$\operatorname{diam} X=0$. Note that for rooted trees consisting of only one node, we consider this node to be the root as well as a leaf. Let $|X|\geqslant 2$.
According to Theorem~\ref{t13} we have $G_{D,X} = G_{D,X}[X_1,...,X_k]$ and $k \geqslant 2$. In this case the root of the tree $T_X$ is labeled by $\operatorname{diam} X$ and $T_X$ has the nodes $X_1,...,X_k$ of the first level with the labels \begin{equation}\label{e2.7} l(X_i)= \operatorname{diam} X_i, \end{equation} $i = 1,\ldots,k$. The nodes of the first level with the label $0$ are leaves, and those with strictly positive labels are internal nodes of the tree $T_X$. If the first level has no internal nodes, then the tree $T_X$ is constructed. Otherwise, by repeating the above-described procedure with each $X_i$ satisfying $\operatorname{diam} X_i > 0$ instead of $X$, we obtain the nodes of the second level, etc. Since $X$ is finite, all vertices on some level will be leaves, and the construction of $T_X$ is completed. The above-constructed labeled rooted tree $T_X$ is called the \emph{representing tree} of the ultrametric space $(X, d)$. \end{procedure} \begin{definition}\label{d15} Let $G$ be a nonempty graph, and let $V_0(G)$ be the set (possibly empty) of all isolated vertices of $G$. Denote by $G'$ the subgraph of the graph $G$ induced by the set $V(G)\backslash V_0(G)$. \end{definition} Let $T$ be a rooted tree with the root $r$. We denote by $\overline{L}_T$ the set of the leaves of $T$, and, for every node $v$ of $T$, by $T_v=T(v)$ the rooted subtree of $T$ with the root $v$ and such that, for every $u\in V(T)$, \begin{equation}\label{e2.1} (u \in V(T_v)) \Leftrightarrow (u=v \text{ or $u$ is a successor of $v$ in $T$}). \end{equation} If $(X,d)$ is a finite ultrametric space and $T=T_X$ is the representing tree of $X$, then for every node $v\in V(T)$ there are $x_1$, $\ldots$, $x_k \in X$ such that $\overline{L}_{T_v}=\{\{x_1\}, \ldots, \{x_k\}\}$. Thus $\overline{L}_{T_v}$ is a set of one-point subsets of $X$. In what follows we will use the notation $L(T_v)$ for the set $\{x_1, \ldots, x_k\}$. Let $(X,d)$ be a metric space.
Recall that a \emph{closed ball} with radius $r \geqslant 0$ and center $c\in X$ is the set $$ B_r(c)=\{x\in X\colon d(x,c)\leqslant r\}. $$ The \emph{ballean} $\mathbf B_X$ of a metric space $(X,d)$ is the set of all closed balls in $(X,d)$. Note that the sets of all open and closed balls in a finite metric space coincide. The following two statements are fundamental facts from the theory of finite ultrametric spaces. \begin{proposition}\label{lbpm} Let $(X,d)$ be a finite nonempty ultrametric space with the representing tree $T_X$. Then the following statements hold. \begin{itemize} \item [(i)] $L(T_v)$ belongs to $\mathbf{B}_X$ for every node $v\in V(T_X)$. \item [(ii)] For every $B \in \mathbf{B}_X$ there exists the unique node $v$ such that $L(T_v)=B$. \end{itemize} \end{proposition} In fact, Proposition~\ref{lbpm} states that the ballean of a finite ultrametric space $(X,d)$ is the vertex set of the representing tree $T_X$. \begin{lemma}\label{l2} Let $(X, d)$ be a finite ultrametric space with the representing tree $T_X$ whose labeling $l\colon V(T_X)\to \operatorname{Sp}(X)$ is defined by~(\ref{e2.7}) and let $u_1 = \{x_1\}$ and $u_2 = \{x_2\}$ be two different leaves of the tree $T_X$. If $(u_1, v_1, \ldots, v_n, u_2)$ is the path joining the leaves $u_1$ and $u_2$ in $T_X$, then \begin{equation}\label{e112} d(x_1, x_2) = \max\limits_{1\leqslant i \leqslant n} l({v}_i). \end{equation} \end{lemma} The proofs of Proposition~\ref{lbpm} and Lemma~\ref{l2} can be found, for example, in~\cite{P(TIAMM)} and~\cite{PD(UMB)}, respectively. \section{Minimum weight spanning tree algorithms}\label{MSPA} Every finite metric space $(X,d)$ can be considered as a complete weighted graph $(G_X,w)$ if we define $$ V(G_X)=X \ \text{ and } \ w(\{x,y\})=d(x,y) $$ for every $x,y \in X$, $x\neq y$.
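The recursive procedure and the lemma above can be turned into a short program. The sketch below is illustrative (the data layout and function names are our own): it builds $T_X$ by recursively splitting $X$ into the parts of its diametrical graph, which are the classes of the relation $d(x,y)<\operatorname{diam} X$ (an equivalence relation by the strong triangle inequality), and then recovers distances as the label of the least common ancestor of two leaves, i.e., the maximal label on the path between them.

```python
def representing_tree(points, d):
    """Build the representing tree T_X of a finite ultrametric space.
    A node is a triple (label, point_list, children); leaves are
    one-point sets with label 0.  The parts X_1, ..., X_k of the
    diametrical graph G_{D,X} are the classes of the equivalence
    relation d(x, y) < diam X."""
    pts = list(points)
    diam = max(d(x, y) for x in pts for y in pts)
    if diam == 0:
        return (0, pts, [])
    parts = []
    for x in pts:
        for part in parts:
            if d(x, part[0]) < diam:  # same class as the representative
                part.append(x)
                break
        else:
            parts.append([x])
    return (diam, pts, [representing_tree(p, d) for p in parts])

def tree_distance(tree, x, y):
    """Recover d(x, y) from T_X: since labels strictly decrease from
    the root towards the leaves, the maximal label on the path between
    the leaves {x} and {y} is the label of their least common ancestor."""
    label, pts, children = tree
    if x == y:
        return 0
    for child in children:
        if x in child[1] and y in child[1]:
            return tree_distance(child, x, y)
    return label
```

On the 4-point example with distances $d(0,1)=1$, $d(0,2)=d(1,2)=2$ and $d(\cdot,3)=3$, the root carries the label $3$ and has the two parts $\{0,1,2\}$ and $\{3\}$ as children, and all six pairwise distances are recovered correctly.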
Recall that a minimum weight spanning tree or simply \emph{minimum spanning tree} $(T_{\min},w)$ of a finite metric space $(X,d)$ is a spanning tree of the graph $(G_X,w)$ such that the sum of the weights of all edges of $(T_{\min}, w)$ is the smallest among all spanning trees of $(G_X,w)$. Let us consider the following algorithm described in~\cite{GV12}. Below we adapt it to our terminology. \textbf{Algorithm 1.} Let $(X,d)$ be a finite ultrametric space and let $(G_X,w)$ be its corresponding weighted graph. \begin{itemize} \item[(i)] Let $G_{D,X}=$ $G_{D,X}[X_1,...,X_k]$ be the diametrical graph of the space $X$. Let us choose in $(G_X,w)$ any $k-1$ edges that would form a spanning tree in the factor-graph obtained from $(G_X,w)$ by contracting each of the $k$ parts $X_1,...,X_k$ to a vertex. \item[(ii)] Repeat the same procedure for each of the parts $X_1,...,X_k$ etc., until every obtained part becomes a vertex. \end{itemize} Clearly, every minimum weight spanning tree can be obtained by this procedure. \begin{remark}[\!\cite{GV12}]\label{r31} All minimum weight spanning trees of a given $(X,d)$ have one and the same weight distribution. Note that all $k-1$ edges chosen at step (i) are of weight $\operatorname{diam} X$ and the edges chosen while considering a part $X_i$ are of weight $\operatorname{diam} X_i$. Let $\operatorname{Sp}(X)=\{0,d_1,...,d_n\}$. Denote by $I(T_X)$ the set of all inner nodes of the tree $T_X$ and by $S(v)$ the set of all direct successors of the node $v \in V(T_X)$. It is easy to see that for every $i\in \{1,...,n\}$ the corresponding weight $d_i$ appears \begin{equation}\label{e4} \sum_{v\in I(T_X) | l(v)=d_i}(|S(v)|-1) \end{equation} times in any minimum weight spanning tree. The uniqueness of the weight distribution follows, for example, from Kruskal's greedy algorithm~\cite{K56}.
\end{remark} \begin{remark}[\!\cite{GV12}]\label{r} For every two points $x,y\in X$ the equality \begin{equation}\label{e23} d(x,y)=\max\{w(e) \, | \, e\in E(P_{x,y})\} \end{equation} holds, where $P_{x,y}$ is the path connecting the vertices $x$, $y$ in the minimum weight spanning tree $(T_{\min},w)$. \end{remark} The following simple algorithm was proposed in~\cite[p. 189]{D82}. \textbf{Algorithm 2.} Let $(X,d)$ be an ultrametric space with $|X|=n$. Choose arbitrarily $x_1\in X$. For $i:=1,\ldots, n-1$ choose as $x_{i+1}$ a point that minimizes the distance $d(x,x_{i})$ among the points $x\in X$ not belonging to $\{x_1,\ldots,x_{i}\}$. Below we consider how this algorithm works on representing trees $T_X$ of finite ultrametric spaces $X$. Moreover, we prove that it indeed produces minimum spanning paths. Recall that the \emph{node level} is the length of the path from the root to the node. \textbf{Algorithm 3.} Let $(X,d)$ be an ultrametric space with $|X|=n$. Set $i:=1$ and let $x_1$ be an arbitrary leaf of $T_X$. Define $P_1:=\{x_1\}$. Consider the following procedure for the tree $T_X$. \begin{itemize} \item [(i)] Choose a node $v_i\in V(T_X)$ such that $x_i\in L(T_{v_i})$, $L(T_{v_i})\setminus P_i\neq\varnothing$ and the level of the node $v_i$ is as large as possible. Set $i:=i+1$ and go to step (ii). \item [(ii)] Choose an arbitrary $x_i \in L(T_{v_{i-1}})\setminus P_{i-1}$. Set $P_{i}:=P_{i-1}\cup\{x_i\}$. If $i=n$, then the construction of the path $P$ is completed; otherwise go to step (i). \end{itemize} \begin{remark}\label{r23} Note that condition (i) means that we choose the smallest ball of the space $X$ that contains the vertex $x_i$ and contains at least one vertex which does not belong to $P_i$. According to the construction of $T_X$ and Lemma~\ref{l2} the distance $d(x_i,x_{i+1})$ is equal to the label $l(v_i)$, $i=1,\ldots,n-1$.
Using Lemma~\ref{l2} and the fact that labels of vertices of representing trees strictly decrease on the paths from the root to any leaf, we see that the minimum of the value $d(x,x_i)$, $x \not\in \{x_1,...,x_{i}\}$, is achieved at any $x\in L(T_{v_{i}})\setminus P_{i}$ (compare with Algorithm 2). It is easy to see that one and the same node can be chosen several times, i.e., the equality $v_i=v_j$, $i\neq j$, is possible. By step (ii) all elements of the set $P_n$ are different, i.e., $P_n=L(T_{X})=X$. The existence of a vertex $v_i$ such that the conditions $x_i\in L(T_{v_i})$ and $L(T_{v_i})\setminus P_i\neq\varnothing$ hold follows from the existence of the root of the tree $T_X$. \end{remark} \begin{example} The rooted representing tree $T_Z$ of an ultrametric space $Z$ and the corresponding minimum spanning path $P$ are depicted in Figures~\ref{fig2} and~\ref{fig3}, respectively. To demonstrate the work of Algorithm 3 it is sufficient to indicate the sequence of inner nodes chosen at step (i). Since all the inner nodes of $T_Z$ are marked by different labels we can represent the sequence in the following form: $l(v_1)=1$; $l(v_2)=1$; $l(v_3)=4$; $l(v_4)=4$; $l(v_5)=2$; $l(v_6)=9$; $l(v_7)=3$; $l(v_8)=5$; $l(v_9)=9$; $l(v_{10})=7$; $l(v_{11})=7$; $l(v_{12})=8$; $l(v_{13})=8$; $l(v_{14})=6$.
\end{example} \begin{figure}[htb] \begin{tikzpicture}[scale=1.4] \tikzstyle{level 1}=[level distance=10mm,sibling distance=2.4cm] \tikzstyle{level 2}=[level distance=10mm,sibling distance=0.8cm] \tikzstyle{level 3}=[level distance=10mm,sibling distance=.6cm] \tikzset{small node/.style={circle,draw,inner sep=0.7pt,fill=black}} \tikzset{solid node/.style={circle,draw,inner sep=1.5pt,fill=black}} \tikzset{white node/.style={circle,draw,inner sep=1.5pt,fill=white}} \tikzset{common node/.style={circle,draw,inner sep=1.5pt,fill=gray}} \node at (0,0) [small node, label=above: {\tiny $v_i$}] {} child{node [small node, label=left: {\tiny $v_{i_1}$}] {} child{node [small node, label=right: {\tiny $$}] {} } child{node [small node, label=below: {\scriptsize $$}] {}} child{node [small node, label=right: {\tiny $x_i$}] {} } } child{node [small node, label=right: {\tiny $v_{i_2}$}] {} child{node [small node, label=right: {\tiny $x_{i+1}$}] {} } child{node [small node, label=below: {\scriptsize $$}] {}} } child{node [small node, label=right: {\tiny $v_{i_k}$}] {} child{node [small node, label=right: {\tiny $$}] {} } child{node [small node, label=below: {\scriptsize $$}] {}} child{node [small node, label=right: {\scriptsize $x_m$}] {} } }; \end{tikzpicture} \caption{The subtree $T_{v_i}$ of the tree $T_X$.} \label{fig0} \end{figure} \begin{proposition}\label{p25} Every path produced by Algorithm 3 is a minimum spanning path. \end{proposition} \begin{proof} According to Remark~\ref{r31}, to show that the path $P_n$ is a minimum weight spanning path it suffices to show that $P_n$ has the weight distribution described by~(\ref{e4}). Indeed, suppose that the node $v_i$ was chosen for the first time at step (i), see Figure~\ref{fig0}. This means that some $x_i\in L(T_{v_i})$ was just chosen. Let $|S(v_i)|=k$ and let $v_{i_1},...,v_{i_k}$ be the direct successors of $v_i$. Without loss of generality assume that $x_i \in L(T_{v_{i_1}})$.
According to step (i) all elements of the ball $L(T_{v_{i_1}})$ have already been added to the path $P_i$; otherwise we would have to choose $v_{i_{1}}$ instead of $v_i$. Further, the next point $x_{i+1}$ of the path must be chosen from $L(T_{v_i})\setminus L(T_{v_{i_1}})$. Without loss of generality assume that $x_{i+1}\in L(T_{v_{i_2}})$, etc. Let $x_m$ be the last point chosen from the ball $L(T_{v_{i_k}})$. According to Lemma~\ref{l2}, in the part $(x_1,...,x_i,x_{i+1},...,x_m)$ of the path $P_n$ the weight $l(v_i)=d(x_i, x_{i+1})$ appears exactly $|S(v_i)|-1=k-1$ times, namely when the path crosses from one of the balls $L(T_{v_{i_1}}),...,L(T_{v_{i_k}})$ to another. Taking into consideration that other vertices with the label $l(v_i)$ may exist in the tree $T_X$, we see that distribution~(\ref{e4}) holds for $d_i=l(v_i)$. \end{proof} Let $(X,d)$ be an ultrametric space with $|X|=n$. Denote by $\mathfrak{P}_{min}(X)$ the set of all minimum spanning paths of the graph $(G_X,w)$. Let $P\in \mathfrak{P}_{min}(X)$ with $V(P)=\{x_1,...,x_n\}$. We call the finite sequence of numbers $\mathcal{PS}(P)=(s_1,...,s_{n-1})$, where $s_i=w(\{x_i, x_{i+1}\})=d(x_i, x_{i+1})$, $i=1,...,n-1$, the \emph{path spectrum} of $P$. \begin{remark}\label{r55} Using the representing tree $T_X$ of a finite ultrametric space $X$ it is easy to see which subsets of $X$ are balls in the space $X$. The situation with minimum spanning paths is similarly transparent. Let $P\in \mathfrak{P}_{min}(X)$ with $V(P)=\{x_1,...,x_n\}$ and $\mathcal{PS}(P)=(s_1,...,s_{n-1})$. Using Remark~\ref{r} one can easily establish that there is a one-to-one correspondence between nonsingular balls of the space $X$ and sets of consecutive points $x_i, x_{i+1},..., x_{i+k}$, $i,k\geqslant 1$, $i+k\leqslant n$, that satisfy the following property: $$ s_{l}<s_{i-1}\, \text{ and } s_l<s_{i+k} \, \text{ for every } \, l\in \{i,...,i+k-1\}.
$$ Naturally, in the case $i=1$ only the inequality $s_l<s_{i+k}$ is required, and similarly in the case $i+k=n$ only the inequality $s_l<s_{i-1}$. In the extremal case $i=1$ and $i+k=n$ we obtain $\{x_1,...,x_n\}=X \in \mathbf B_X$. In other words, the subsequence $(x_i,x_{i+1}, ... ,x_{i+k})$ forms a ball if and only if all the weights inside this part of the path $P$ are smaller than the surrounding weights $s_{i-1}$ and $s_{i+k}$. Clearly, the diameter of the ball $\{x_i, x_{i+1},..., x_{i+k}\}$ is the maximal value among $s_i,...,s_{i+k-1}$. \end{remark} \section{Minimum spanning paths for special classes of finite ultrametric spaces}\label{MSTFSC} It is possible to distinguish classes of finite ultrametric spaces by imposing restrictions on their representing trees. Finite ultrametric spaces $X$ whose representing trees $T_X$ are strictly binary, have injective internal labeling, are extremal for the Gomory-Hu inequality, or are as rigid as possible were considered in~\cite{DPT}, \cite{DP20}, \cite{PD(UMB)} and~\cite{DPT(Howrigid)}, respectively. The aim of this section is to give a characterization of the above mentioned classes in terms of minimum spanning paths of special type. \subsection{ Spaces for which $T_X$ is strictly binary.} Recall that a rooted tree is strictly binary if every internal node has exactly two children. \begin{proposition}[\!\cite{DPT}]\label{p10} Let $(X,d)$ be a finite nonempty ultrametric space. The following conditions are equivalent. \begin{itemize} \item[(i)] $T_X$ is strictly binary. \item[(ii)] If $Y\subseteq X$ and $|Y|\geqslant 3$, then there exists a Hamilton cycle $C \subseteq (G_{Y},w)$ with exactly two edges of maximal weight. \item[(iii)] There are no equilateral triangles in $(X,d)$. \end{itemize} \end{proposition} \begin{proposition}\label{p46} Let $(X,d)$ be a finite nonempty ultrametric space.
Then for any $P\in \mathfrak{P}_{min}(X)$ with $V(P)=\{x_1,...,x_n\}$ and $\mathcal{PS}(P)=(s_1,...,s_{n-1})$ the following conditions are equivalent. \begin{itemize} \item[(i)] $T_X$ is strictly binary. \item[(ii)] If $s_i=s_j$, $1\leqslant i<j\leqslant n-1$, then there exists $k$, $i<k<j$, such that $s_k>s_i=s_j$. \end{itemize} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) Let $(X,d)$ be a finite ultrametric space and let $T_X$ be strictly binary. Suppose that (ii) does not hold for some $P\in \mathfrak{P}_{min}(X)$. Then there exist $i,j$, $i<j$, such that $s_i=s_j$ and the inequality $s_k\leqslant s_i=s_j$ holds for all $k$ with $i<k<j$. Suppose first that $i<j-1$, so that such $k$ does exist. Consider the triple of points $(x_{i}, x_j, x_{j+1})$. By~(\ref{e23}) we have the equalities $d(x_{i}, x_j)=d(x_j, x_{j+1})=d(x_{i}, x_{j+1})=s_i$, which contradicts condition (iii) of Proposition~\ref{p10}. Let now $i=j-1$. Then for the triple $(x_i,x_j,x_{j+1})$ we obtain the same contradiction. (ii)$\Rightarrow$(i) Conversely, let condition (ii) hold and suppose that $T_X$ is not strictly binary. Then by condition (iii) of Proposition~\ref{p10} for every $(P,w)\in \mathfrak{P}_{min}(X)$ there exist three points $x_i, x_j, x_k \in V(P)$, $i<j<k$, such that $d(x_i,x_j)=d(x_j,x_k)=d(x_i,x_k)$. By~(\ref{e23}) there exists an edge $e_1\in E(P)$ of maximal weight among the edges of the subpath of the path $P$ with vertices $\{x_i,...,x_j\}$. Analogously, there exists an edge $e_2\in E(P)$ with the same property belonging to the subpath with vertices $\{x_j,...,x_k\}$. Obviously, there are only two possibilities: 1) $e_1$ and $e_2$ are adjacent; 2) the weight of every edge lying between $e_1$ and $e_2$ is less than or equal to $w(e_1)=w(e_2)$. In both cases condition (ii) is contradicted.
\end{proof} \subsection{Spaces for which $T_X$ has injective internal labeling.} We shall say that the internal labeling of a representing tree $T_X$ is injective if the labels of different internal nodes of $T_X$ are different. \begin{theorem}[\!~\cite{DP20}]\label{t1} Let $(X,d)$ be a finite nonempty ultrametric space. The following conditions are equivalent. \begin{itemize} \item [(i)] The diameters of different nonsingular balls are different. \item [(ii)] The internal labeling of $T_X$ is injective. \item [(iii)] $G'_{t,X}$ is a complete multipartite graph for every $t\in \operatorname{Sp}(X)\setminus \{0\}$. \item [(iv)] The equality $$ |\operatorname{Sp}(X)| = |\mathbf{B}_X| - |X| + 1 $$ holds. \end{itemize} \end{theorem} \begin{proposition}\label{p48} Let $(X,d)$ be a finite nonempty ultrametric space. Then for any $P\in \mathfrak{P}_{min}(X)$ with $V(P)=\{x_1,...,x_n\}$ and $\mathcal{PS}(P)=(s_1,...,s_{n-1})$ the following conditions are equivalent. \begin{itemize} \item[(i)] The labels of different internal nodes of $T_X$ are different. \item[(ii)] If $s_i=s_j$, $1\leqslant i < j \leqslant n-1$, then the inequality $s_k\leqslant s_i=s_j$ holds for every $k$ such that $i<k<j$. \end{itemize} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) Let $(X,d)$ be a finite ultrametric space whose representing tree $T_X$ has injective internal labeling. Suppose that (ii) does not hold for some $P\in \mathfrak{P}_{min}(X)$. Then there exist $i,j$, $i<j$, such that $s_i=s_j$ and at least one $k$ with $i<k<j$ for which the inequality $s_k>s_i=s_j$ holds. Let $k_1$ be the smallest integer such that $i<k_1$ and $s_i<s_{k_1}$. Analogously, let $k_2$ be the largest integer such that $k_2<j$ and $s_j<s_{k_2}$. (Possibly $k_1=k_2$.) Let also $l_1$ be the largest integer such that $l_1<i$ and $s_{l_1}>s_{i}$. Analogously, let $l_2$ be the smallest integer such that $j<l_2$ and $s_j<s_{l_2}$.
By Remark~\ref{r55} the sets $\{x_{l_1+1},...,x_i,...,x_{k_1}\}$ and $\{x_{k_2+1},...,x_j,...,x_{l_2}\}$ are different nonsingular balls in $X$ with equal diameters $s_i=s_j$, which contradicts condition (i) of Theorem~\ref{t1}. Note also that if there is no $l_1$ ($l_2$) with the properties described above, then the set $\{x_{1},...,x_i,...,x_{k_1}\}$ ($\{x_{k_2+1},...,x_j,...,x_{n}\}$) forms the respective desired ball. (ii)$\Rightarrow$(i) Let condition (ii) hold. Suppose that there are two different internal nodes $v_1$ and $v_2$ with $l(v_1)=l(v_2)$. Since labels on representing trees strictly decrease from the root to any vertex, $v_1$ is not a successor of $v_2$ and $v_2$ is not a successor of $v_1$. Let $P\in \mathfrak{P}_{min}(X)$ with $V(P)=\{x_1,...,x_n\}$ and $\mathcal{PS}(P)=(s_1,...,s_{n-1})$ be any path constructed using Algorithm 3. Hence there exist $i,j$, $i<j-1$, such that $d(x_i, x_{i+1})=d(x_j, x_{j+1})$ (i.e., $s_i=s_j$), where $x_i,x_{i+1}$ and $x_j, x_{j+1}$ are successors of $v_1$ and $v_2$, respectively. By Lemma~\ref{l2} we have $d(x_{i+1}, x_{j})>l(v_1)=l(v_2)$. According to Remark~\ref{r} there exists $k$, $i<k<j$, such that $s_k>s_i=s_j$, which contradicts condition (ii). \end{proof} \subsection{Spaces extremal for the Gomory-Hu inequality.} In 1961 R.\,E.~Gomory and T.\,C.~Hu \cite{GomoryHu(1961)} proved the inequality \mbox{$|\operatorname{Sp}(X)| \leqslant |X|$} for an arbitrary finite ultrametric space $X$. Denote by $\mathfrak{U}$ the class of finite ultrametric spaces $X$ with $|\operatorname{Sp}(X)| = |X|$. There exist descriptions of $X \in \mathfrak{U}$ in terms of graphs $G'_{r,X}$~\cite{PD(UMB), DP18}, in terms of representing trees~\cite{PD(UMB)} and in terms of weighted Hamiltonian cycles and weighted Hamiltonian paths~\cite{DPT}. Below we need the following criterion. \begin{theorem}[\!\cite{PD(UMB)}]\label{t22} Let $(X, d)$ be a finite ultrametric space with $|X| \geqslant 2$.
The following conditions are equivalent. \begin{itemize} \item [(i)] $(X, d) \in \mathfrak U$. \item [(ii)] $T_X$ is strictly binary and the labels of different internal nodes are different. \end{itemize} \end{theorem} \begin{proposition}\label{p44} Let $(X,d)$ be a finite nonempty ultrametric space. The following conditions are equivalent. \begin{itemize} \item[(i)] $X \in \mathfrak{U}$. \item[(ii)] All elements of $\mathcal{PS}(P)$ are different for every $P\in \mathfrak{P}_{min}(X)$. \end{itemize} \end{proposition} \begin{proof} According to Theorem~\ref{t22}, condition (i) of this proposition is equivalent to the conjunction of condition (i) of Proposition~\ref{p46} and condition (i) of Proposition~\ref{p48}. This in turn is equivalent to the conjunction of condition (ii) of Proposition~\ref{p46} and condition (ii) of Proposition~\ref{p48}. Together, the last two conditions are equivalent to condition (ii) of this proposition. This completes the proof. \end{proof} \subsection{Ultrametric spaces which are as rigid as possible.} Let $(X, d)$ be a metric space and let $\operatorname{Iso}(X)$ be the group of isometries of $(X, d)$. For every self-map $f\colon X\to X$ we denote by $\operatorname{Fix}(f)$ the set of fixed points of $f$. The finite ultrametric spaces satisfying the equality \begin{equation*} \min_{g \in \operatorname{Iso}(X)} |\operatorname{Fix}(g)| = |X|-2, \end{equation*} are called as rigid as possible, see~\cite{DPT(Howrigid)} for detailed definitions. Denote by $\mathfrak{R}$ this class of spaces. The following theorem gives a characterization of the spaces from the class $\mathfrak{R}$ in terms of representing trees. \begin{theorem}[\!~\cite{DPT(Howrigid)}]\label{t38} Let $(X, d)$ be a finite ultrametric space with $|X| \geqslant 2$. Then the following statements are equivalent. \begin{itemize} \item [(i)] $(X, d) \in \mathfrak{R}$. \item [(ii)] $T_X$ is strictly binary with exactly one inner node at each level except the last level.
\end{itemize} \end{theorem} \begin{proposition}\label{p410x} Let $(X,d)$ be a finite nonempty ultrametric space. The following conditions are equivalent. \begin{itemize} \item[(i)] $(X, d) \in \mathfrak{R}$. \item[(ii)] There exists a path $P\in \mathfrak{P}_{min}(X)$ such that the sequence $\mathcal{PS}(P)=(s_1,...,s_{n-1})$ is strictly monotone. \end{itemize} \end{proposition} \begin{proof} The proof easily follows from the structure of the representing tree $T_X$ described by condition (ii) of Theorem~\ref{t38}. \end{proof} \begin{proposition}\label{p41} Let $X$ be a finite ultrametric space. Every minimum spanning tree $T_{\min}$ of the space $X$ is a path if and only if the representing tree $T_X$ is isomorphic as a rooted tree to one of the rooted trees $T_1,..., T_5$ depicted in Figure~\ref{fig1}. \end{proposition} \begin{proof} The proof of this proposition follows from an analysis of Algorithm 1. Suppose that for the space $(X,d)$ the diametrical graph $G_{D,X}=G_{D,X}[X_1, X_2]$ is bipartite. It is easy to see that, in order to exclude the construction of a minimum spanning tree which is not a path, each of the parts $X_1$ and $X_2$ must contain at most two points. This observation describes the cases $T_1,...,T_3$. The case when $G_{D,X}$ has four or more parts is not admissible: choosing one point in each of four parts and connecting them by a star, one obtains a four-point minimum spanning tree which is not a path. The case $T_5$ is trivial and the case $T_4$ is left to the reader.
\end{proof} \begin{figure}[htb] \begin{tikzpicture}[scale=1] \tikzstyle{level 1}=[level distance=10mm,sibling distance=1.2cm] \tikzstyle{level 2}=[level distance=10mm,sibling distance=0.5cm] \tikzstyle{level 3}=[level distance=10mm,sibling distance=.5cm] \tikzset{solid node/.style={circle,draw,inner sep=1.5pt,fill=black}} \node at (0,0.5) [label=right:\(T_1\)] {}; \node at (0,0) [solid node] {} child{node [solid node] {} child{node [solid node] {}} child{node [solid node] {}} } child{node [solid node] {} child{node [solid node] {}} child{node [solid node] {}} }; \node at (2.5,0.5) [label=right:\(T_2\)] {}; \node at (2.5,0) [solid node] {} child{node [solid node] {} child{node [solid node] {}} child{node [solid node] {}} } child{node [solid node] {} }; \node at (5,0.5) [label=right:\(T_3\)] {}; \node at (5,0) [solid node] {} child{node [solid node] {} } child{node [solid node] {} }; \tikzstyle{level 1}=[level distance=10mm,sibling distance=0.6cm] \node at (7.5,0.5) [label=right:\(T_4\)] {}; \node at (7.5,0) [solid node] {} child{node [solid node] {} } child{node [solid node] {} } child{node [solid node] {} }; \node at (10,0.5) [label=right:\(T_5\)] {}; \node at (10,0) [solid node] {}; \end{tikzpicture} \caption{Rooted representing trees.} \label{fig1} \end{figure} \textbf{Open problem.} By Remark~\ref{r}, every finite ultrametric space $(X,d)$ is defined by any of its minimum spanning paths $P\in \mathfrak{P}_{min}(X)$. In other words, any finite ultrametric space $X$ with $|X|=n$ is defined by the sequence $(s_1,...,s_{n-1})$, $s_i\in \operatorname{Sp}(X)\setminus \{0\}$. Moreover, such a sequence is not unique. It is clear that it is not possible to reconstruct the whole space $X$ from its spectrum $\operatorname{Sp}(X)$ alone. This indeterminacy suggests the following question: what can we say about the space $X$ or its representing tree $T_X$ knowing only its spectrum $\operatorname{Sp}(X)$?
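The determination of a finite ultrametric space by the weight sequence of a minimum spanning path can be illustrated computationally. The following Python sketch (an informal illustration, not part of the formal exposition; the helper names are ours) reconstructs the ultrametric of the space $Z$ from Example~\ref{e5} below out of the path spectrum shown in Figure~\ref{fig3}, using the rule $d(x_i,x_j)=\max(s_i,\dots,s_{j-1})$ for $i<j$, checks the strong triangle inequality, and recomputes the Hausdorff distance of Example~\ref{e5} directly from the definition~(\ref{e41}).

```python
# Path spectrum s_1,...,s_14 of the minimum spanning path (x_1,...,x_15)
# of the space Z from the worked example (Figure with the path P).
s = [1, 1, 4, 4, 2, 9, 3, 5, 9, 7, 7, 8, 8, 6]
n = len(s) + 1  # number of points x_1,...,x_n

def d(i, j):
    """Distance d(x_i, x_j) recovered from the path spectrum:
    the maximal weight on the subpath between x_i and x_j."""
    if i == j:
        return 0
    i, j = min(i, j), max(i, j)
    return max(s[i - 1:j - 1])  # s_i,...,s_{j-1} in 1-based notation

# Sanity check: d is indeed an ultrametric (strong triangle inequality).
points = range(1, n + 1)
assert all(d(i, k) <= max(d(i, j), d(j, k))
           for i in points for j in points for k in points)

def hausdorff(X, Y):
    """Hausdorff distance between finite sets of indices,
    computed directly from the definition."""
    return max(max(min(d(x, y) for y in Y) for x in X),
               max(min(d(x, y) for x in X) for y in Y))

# The subsets X and Y of the worked example.
X = {3, 4, 6, 7, 10, 13, 15}
Y = {1, 3, 5, 7, 10, 12, 13, 14}
print(hausdorff(X, Y))  # prints 7
```

The printed value agrees with the value $d_H(X,Y)=7$ obtained in Example~\ref{e5} via the ball-based formula of Theorem~\ref{t44}.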
For example, by Theorem~\ref{t22} the extremal case $|\operatorname{Sp}(X)|=|X|$ allows us to establish some properties of $T_X$. A more restrictive variant: what can we say about the space $X$ knowing only its multispectrum? By the \emph{multispectrum} of $(X,d)$ we mean the set $\{(d_i, k_i) \, | \, d_i \in \operatorname{Sp}(X)\}$, where $k_i$ is the number of times $d_i$ appears among the values of the ultrametric $d$. This formulation of the problem is similar to one of the tasks of spectral graph theory: the study of the properties of a graph in relation to the eigenvalues of matrices associated with the graph. \section{Hausdorff distance and minimum spanning paths}\label{HDandMST} The main result of this section is Theorem~\ref{t44}, which gives an explicit formula for the Hausdorff distance between two subsets of a finite ultrametric space. The idea is the following. First, given two subsets $X$ and $Y$ of the finite ultrametric space $Z$ we define a set of balls $\mathfrak{B}_{XY} \subseteq \mathbf{B}_Z$ depending on these subsets, see Definition~\ref{d21}. Then in Theorem~\ref{t44} we show that the Hausdorff distance $d_H(X,Y)$ is equal to the maximal diameter of the balls from $\mathfrak{B}_{XY}$. Let us recall the necessary definitions. Let $(Z,d)$ be a finite ultrametric space. For any two subsets $X$ and $Y$ of $Z$ the distance between $X$ and $Y$ is $$ \operatorname{dist}(X, Y) = \min \{d(x, y)\colon x \in X , y \in Y\}, $$ in particular, for $x\in Z$, $\operatorname{dist}(x, X) = \operatorname{dist} (\{x\}, X)$. For a set $A \subset Z$ and $\varepsilon >0$, its $\varepsilon$-neighborhood is the set $$ U_{\varepsilon} (A) = \{ x \in Z\colon \operatorname{dist} (x, A) < \varepsilon \}.$$ The Hausdorff distance between $X$ and $Y$ is defined by \begin{equation}\label{e41} d_{H} (X, Y) = \inf \{\varepsilon > 0 \colon X \subset U_{\varepsilon} (Y) \ \text{ and } \ Y\subset U_{\varepsilon} (X) \}.
\end{equation} Recall that in an ultrametric space every ball is a union of disjoint balls. For finite ultrametric spaces $X$ this fact easily follows from the construction of the representing tree $T_X$ and Proposition~1.7. In particular, Proposition~1.7 and Lemma~1.8 imply that for every nonsingular ball $B \in \mathbf{B}_X$ the diametrical graph $G_{D,B}=G_{D,B}[B_1,...,B_n]$ is a complete $n$-partite graph with the parts $B_1,...,B_n$, where all the subsets $B_1,...,B_n$ are balls of the space $X$. In this case, clearly, $B=B_1\cup...\cup B_n$. \begin{definition}\label{d21} Let $(Z,d)$ be an ultrametric space and let $X,Y$, $X\neq Y$, be some subsets of $Z$. Denote by $\mathfrak B_{XY}\subseteq \mathbf B_Z$ the set of all balls $B$ in the space $Z$ having simultaneously the following two properties. \begin{itemize} \item [(i)] ($(X\setminus Y)\cap B\neq \varnothing \neq Y \cap B$) or ($(Y\setminus X)\cap B\neq \varnothing \neq X \cap B$). \item [(ii)] Let \begin{equation}\label{e1} G_{D,B} = G_{D,B}[B_1,...,B_n] \end{equation} be the diametrical graph of the subspace $B\subseteq Z$. Then there exists at least one ball $B_k$, $k\in \{1,...,n\}$, such that $$ (B_k\cap (X\setminus Y)\neq \varnothing \text{ and } B_k\cap Y=\varnothing) \text{ \emph{xor} } (B_k\cap (Y\setminus X)\neq \varnothing \text{ and } B_k\cap X=\varnothing). $$ \end{itemize} As usual, \emph{xor} here denotes exclusive disjunction. \end{definition} \begin{remark} It follows from condition (i) that if $|B|=1$, then $B\notin \mathfrak B_{XY}$. \end{remark} As usual, $X \Delta Y= (X\cup Y)\setminus(X \cap Y)$ is the symmetric difference of the sets $X$ and $Y$. \begin{lemma}\label{l1} Let $Z$ be a finite ultrametric space and let $X, Y$, $X\neq Y$, be some nonempty subsets of $Z$. Then for every $x\in X\Delta Y$ there exists a ball $B\in \mathfrak B_{XY}$ such that $x\in B$. \end{lemma} \begin{proof} Without loss of generality, assume that $x\in X\setminus Y$.
Let $\{x\}=\tilde{B}_1\subset \tilde{B}_2\subset \dots \subset \tilde{B}_{n-1}\subset \tilde{B}_n =Z$ be the sequence of all different balls containing $x$ and let $\tilde{B}_i \notin \mathfrak{B}_{XY}$ for all $i\in \{2,...,n\}$. Clearly, condition (i) of Definition~\ref{d21} holds for $B=\tilde{B}_n$. Hence, condition (ii) of the same definition does not hold for $B=\tilde{B}_n$. Consequently, for every ball $B_i$, $i\in \{1,...,n\}$, from~(\ref{e1}) exactly one of the following three possibilities holds: 1) $B_i\cap(X\cup Y)=\varnothing$; 2) $X\cap Y \cap B_i\neq \varnothing$ and $(X\Delta Y)\cap B_i=\varnothing$; 3) condition (i) of Definition~\ref{d21} holds with $B=B_i$. Since $x\in \tilde{B}_{n-1}$, $x\in X\setminus Y$, and $\tilde{B}_{n-1}=B_i$ for some $i\in \{1,...,n\}$, we see that condition (i) holds for $B_i=\tilde{B}_{n-1}$. Repeating these considerations for every $\tilde{B}_{i}$ we see that even if all $\tilde{B}_{i}\notin \mathfrak{B}_{XY}$, $i=3,...,n$, the ball $\tilde{B}_{2}$ must belong to $\mathfrak{B}_{XY}$: condition (i) holds for this ball by the above procedure, and condition (ii) holds because we can take $B_k=\{x\}$ in (ii). \end{proof} \begin{theorem}\label{t44} Let $(Z,d)$ be a finite ultrametric space. Then for nonempty subsets $X,Y\subseteq Z$ the Hausdorff distance can be calculated as follows: \begin{equation}\label{e45} d_H(X,Y)= \begin{cases} 0, &X=Y,\\ \max\limits_{B\in \mathfrak{B}_{XY}}\operatorname{diam} B, &\text{otherwise}. \end{cases} \end{equation} \end{theorem} \begin{proof} If $X=Y$, then the equality $d_H(X,Y)=0$ is evident. Suppose $X\neq Y$. Let us show that for $r= \max\limits_{B\in \mathfrak{B}_{XY}}\operatorname{diam} B$ the relations $X\subset U_{r}(Y)$ and $Y\subset U_{r}(X)$ hold. Let $y\in Y$. If $y\in X\cap Y$, then, clearly, $y\in U_r(X)=\bigcup\limits_{x\in X}B_r(x)$ for any $r>0$. Let $y\in Y\setminus X$. Then Lemma~\ref{l1} implies that $y$ belongs to some ball $B\in \mathfrak{B}_{XY}$.
According to condition (i) of Definition~\ref{d21} there exists $x\in X$ such that $x\in B$. Since $\operatorname{diam} B\leqslant r$, we have $y\in B_r(x)$ and, consequently, $y\in U_r(X)$. The inclusion $X\subset U_{r}(Y)$ can be shown analogously. Let now $r<\max\limits_{B\in \mathfrak{B}_{XY}}\operatorname{diam} B$. Then there exists $B\in \mathfrak{B}_{XY}$ such that $\operatorname{diam} B>r$. Let us show that at least one of the relations $X\subset U_{r}(Y)$ or $Y\subset U_{r}(X)$ does not hold. Without loss of generality, by virtue of condition (ii) of Definition~\ref{d21}, assume that in decomposition~(\ref{e1}) for the ball $B$ there exists a ball $B_k$ such that $B_k\cap (X\setminus Y)\neq \varnothing$ and \begin{equation}\label{e3} B_k\cap Y=\varnothing. \end{equation} Let $x\in B_k\cap (X\setminus Y)$. Using Lemma~\ref{l2} we see that, by~(\ref{e1}) and~(\ref{e3}), the equality $d(x,y) = \operatorname{diam} B > r$ holds for all $y\in B\cap Y$. Moreover, using the property that the labels of a representing tree strictly decrease on any path from the root to a leaf, we see that the inequality $d(x,y) > \operatorname{diam} B$ holds for all $y \in Y\setminus B$. Hence $x$ does not belong to $U_r(Y)$. Thus, the infimum in~(\ref{e41}) is attained at $\varepsilon = \max\limits_{B\in \mathfrak{B}_{XY}}\operatorname{diam} B$.
\end{proof} \begin{figure}[htb] \begin{tikzpicture}[scale=1.4] \tikzstyle{level 1}=[level distance=10mm,sibling distance=2.4cm] \tikzstyle{level 2}=[level distance=10mm,sibling distance=0.8cm] \tikzstyle{level 3}=[level distance=10mm,sibling distance=.6cm] \tikzset{small node/.style={circle,draw,inner sep=0.7pt,fill=black}} \tikzset{solid node/.style={circle,draw,inner sep=1.5pt,fill=black}} \tikzset{white node/.style={circle,draw,inner sep=1.5pt,fill=white}} \tikzset{common node/.style={circle,draw,inner sep=1.5pt,fill=gray}} \node at (0,0) [small node, label=above: {\tiny $9$}] {} child{node [small node, label=right: {\tiny $4$}] {} child{node [small node, label=right: {\tiny $1$}] {} child{node [common node, label=below: {\scriptsize $x_3$}]{}} child{node [small node, label=below: {\scriptsize $x_2$}] {}} child{node [solid node, label=below: {\scriptsize $x_1$}] {}} } child{node [white node, label=below: {\scriptsize $x_4$}] {}} child{node [small node, label=right: {\tiny $2$}] {} child{node [white node, label=below: {\scriptsize $x_6$}] {}} child{node [solid node, label=below: {\scriptsize $x_5$}] {}} } } child{node [small node, label=right: {\tiny $5$}] {} child{node [small node, label=right: {\tiny $3$}] {} child{node [common node, label=below: {\scriptsize $x_7$}] {}} child{node [small node, label=below: {\scriptsize $x_8$}] {}} } child{node [small node, label=below: {\scriptsize $x_9$}] {}} } child{node [small node, label=right: {\tiny $8$}] {} child{node [small node, label=right: {\tiny $7$}] {} child{node [common node, label=below: {\scriptsize $x_{10}$}] {}} child{node [small node, label=below: {\scriptsize $x_{11}$}] {}} child{node [solid node, label=below: {\scriptsize $x_{12}$}] {}} } child{node [common node, label=below: {\scriptsize $x_{13}$}] {}} child{node [small node, label=right: {\scriptsize $6$}] {} child{node [white node, label=below: {\scriptsize $x_{15}$}] {}} child{node [solid node, label=below: {\scriptsize $x_{14}$}] {}} } }; 
\end{tikzpicture} \caption{Rooted representing tree $T_Z$.} \label{fig2} \end{figure} \begin{figure}[htb] \begin{center} \begin{tikzpicture} \tikzset{small node/.style={circle,draw,inner sep=0.7pt,fill=black}} \tikzset{solid node/.style={circle,draw,inner sep=1.5pt,fill=black}} \tikzset{white node/.style={circle,draw,inner sep=1.5pt,fill=white}} \tikzset{common node/.style={circle,draw,inner sep=1.5pt,fill=gray}} \def\xx{0cm}; \def\yy{0cm}; \def\dx{0.8cm}; \foreach \i in {1}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {2}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=1pt]; } \foreach \i in {3}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = gray] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {4}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = white] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {5}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {6}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = white] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {7}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = gray] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {8}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=1pt]; } \foreach \i in {9}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=1pt]; } \foreach \i in {10}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = gray] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {11}{ \draw (\i*\dx, \yy) node (\i) [below] 
{$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=1pt]; } \foreach \i in {12}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {13}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = gray] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {14}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$} -- (\i*\dx+\dx,\yy); \draw [fill = black] (\i*\dx, \yy) circle [radius=2pt]; } \foreach \i in {15}{ \draw (\i*\dx, \yy) node (\i) [below] {$x_{\i}$}; \draw [fill = white] (\i*\dx, \yy) circle [radius=2pt]; } \def\myarr{{1,1,4,4,2,9,3,5,9,7,7,8,8,6}}; \foreach \i in {1,...,14}{ \draw (\i*\dx+\dx/2, \yy) node [above] { \scriptsize \pgfmathparse{\myarr[\i-1]}\pgfmathresult}; } \end{tikzpicture} \end{center} \caption{Minimum weight spanning path $P$.} \label{fig3} \end{figure} \begin{example}\label{e5} In this example we apply Theorem~\ref{t44} to calculate the Hausdorff distance between subsets of a finite ultrametric space. Let $Z$ be an ultrametric space with the representing tree $T_Z$ depicted in Figure~\ref{fig2} and let $$ X=\{x_3,x_4,x_6,x_7,x_{10},x_{13},x_{15}\}, \quad Y=\{x_1,x_3,x_5,x_7,x_{10},x_{12},x_{13},x_{14}\}. $$ The points of the subset $X\setminus Y$ are denoted by white circles, those of the subset $Y\setminus X$ by black circles, and those of the subset $X\cap Y$ by gray circles. Since the labeling of the tree $T_Z$ is injective, using Proposition~\ref{lbpm} we see that there is a one-to-one correspondence between the labels of inner nodes of $T_Z$ and the set of nonsingular balls of the space $Z$. Thus, denote by $B_i$ the ball with diameter $i$, $i\in \operatorname{Sp}(Z)\setminus \{0\}$. One can easily see that the following balls satisfy condition (i) of Definition~\ref{d21}: $B_1$, $B_2$, $B_4$, $B_6$, $B_7$, $B_8$, $B_9$.
Among them only the balls $B_1$, $B_2$, $B_4$, $B_6$ and $B_7$ satisfy condition (ii) of the same definition. Thus, $\mathfrak B_{XY}=\{B_1, B_2, B_4, B_6, B_7\}$. By~(\ref{e45}) we have $d_H(X,Y)=7$. \end{example} \begin{remark} One of the minimum spanning paths $P$ of the space $Z$ is depicted in Figure~\ref{fig3}. The reader can easily check the correctness of Remark~\ref{r55} and establish the set of balls of the space $Z$ by considering the path $P$. Verifying conditions (i) and (ii) of Definition~\ref{d21} using $P$ is even easier than using $T_Z$. This observation again shows that minimum spanning paths are a convenient tool for studying finite ultrametric spaces. \end{remark} \section{Funding} The research was partially supported by the National Academy of Sciences of Ukraine, Project 0117U002165 ``Development of mathematical models, numerically analytical methods and algorithms for solving modern medico-biological problems''. \begin{thebibliography}{10} \bibitem{AVZ16} A.~V. {Arutyunov}, S.~A. {Vartapetov} and S.~E. {Zhukovskiy}, \newblock ``Some properties and applications of the Hausdorff distance,'' \newblock {J. Optim. Theory Appl.} \textbf{171}(2), 527--535 (2016). \bibitem{BM} J.~A. Bondy and U.~S.~R. Murty, \newblock \emph{Graph theory}, Graduate Texts in Mathematics \textbf{244} (Springer, New York, 2008). \bibitem{BBI01} D.~Burago, Y.~Burago and S.~Ivanov, \newblock \emph{A course in metric geometry}, Graduate Studies in Mathematics~\textbf{33} (Amer. Math. Soc., Providence, RI, 2001). \bibitem{CC04} H.-F. Chen and M.-S. Chang, \newblock ``An efficient exact algorithm for the minimum ultrametric tree problem,'' \newblock in \emph{Algorithms and computation}, Lecture Notes in Comput. Sci.~\textbf{3341}, 282--293 (Springer, Berlin, 2004). \bibitem{CLRS09} T.~H. Cormen, C.~E. Leiserson, R.~L. Rivest and C.~Stein, \newblock \emph{Introduction to algorithms} (MIT Press, Cambridge, MA, 2009).
\bibitem{DFS17} J.~Dejun, H.~Fazhi, H.~Soonhung, et al., \newblock ``An efficient approach to directly compute the exact Hausdorff distance for 3d point sets,'' \newblock Integrated Computer-Aided Engineering \textbf{24}(3), 261--277 (2017). \bibitem{DLW09} E.~D. Demaine, G.~M. Landau and O.~Weimann, \newblock ``On {C}artesian trees and range minimum queries,'' \newblock in \emph{Automata, languages and programming. {P}art {I}}, Lecture Notes in Comput. Sci.~\textbf{5555}, 341--353 (Springer, Berlin, 2009). \bibitem{D82} E.~{Diday}, \newblock ``Crossings, orders and ultrametrics: Application to visualization of consensus for comparing classifications,'' \newblock in \emph{COMPSTAT 1982 5th Symposium held at Toulouse 1982, Part I: Proc. comput. stat.}, 186--191 (Physica, Heidelberg, 1982). \bibitem{DDP(P-adic)} D.~Dordovskyi, O.~Dovgoshey and E.~Petrov, \newblock ``Diameter and diametrical pairs of points in ultrametric spaces,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{3}(4), 253--262 (2011). \bibitem{D19} O.~Dovgoshey, \newblock ``Finite ultrametric balls,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{11}(3), 177--191 (2019). \bibitem{DP18} O.~Dovgoshey and E.~Petrov, \newblock ``From isomorphic rooted trees to isometric ultrametric spaces,'' \newblock p-Adic Numbers, Ultrametric Anal. Appl. \textbf{10}(4), 287--298 (2018). \bibitem{DP20} O.~Dovgoshey and E.~Petrov, \newblock ``On some extremal properties of finite ultrametric spaces,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{12}(1), 1--11 (2020). \bibitem{DPT} O.~Dovgoshey, E.~Petrov and H.-M. Teichert, \newblock ``On spaces extremal for the {G}omory-{H}u inequality,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{7}(2), 133--142 (2015). \bibitem{DPT(Howrigid)} O.~Dovgoshey, E.~Petrov and H.-M. Teichert, \newblock ``How rigid the finite ultrametric spaces can be?,'' \newblock Fixed Point Theory Appl. \textbf{19}(2), 1083--1102 (2017). \bibitem{FT87} M.~L. 
Fredman and R.~E. Tarjan, \newblock ``Fibonacci heaps and their uses in improved network optimization algorithms,'' \newblock J. Assoc. Comput. Mach. \textbf{34}(3), 596--615 (1987). \bibitem{GomoryHu(1961)} R.~E. Gomory and T.~C. Hu, \newblock ``Multi-terminal network flows,'' \newblock J. Soc. Indust. Appl. Math. \textbf{9}(4), 551--570 (1961). \bibitem{G01} M.~Gromov, \newblock \emph{Metric structures for {R}iemannian and non-{R}iemannian spaces}, \newblock Modern Birkh\"{a}user Classics (Birkh\"{a}user, Boston, 2007). \bibitem{GV12} V.~Gurvich and M.~Vyalyi, \newblock ``Characterizing (quasi-)ultrametric finite spaces in terms of (directed) graphs,'' \newblock {Discrete Appl. Math.} \textbf{160}(12), 1742--1756 (2012). \bibitem{GV} V.~Gurvich and M.~Vyalyi, \newblock ``Ultrametrics, trees, flows and bottleneck arcs,'' \newblock Mat. Pros., Ser.~3 \textbf{16}, 75--88 (2012) \newblock [In Russian]. \bibitem{H04} B.~Hughes, \newblock ``Trees and ultrametric spaces: a categorical equivalence,'' \newblock Adv. Math. \textbf{189}(1), 148--191 (2004). \bibitem{H12} B.~Hughes, \newblock ``Trees, ultrametrics, and noncommutative geometry,'' \newblock Pure Appl. Math. Q. \textbf{8}(1), 221--312 (2012). \bibitem{I99} P.~Indyk, \newblock ``Sublinear time algorithms for metric space problems,'' \newblock in \emph{Proceedings of the thirty-first annual ACM symposium on Theory of Computing}, 428--432 (ACM, New York, 1999). \bibitem{KKT95} D.~R. Karger, P.~N. Klein and R.~E. Tarjan, \newblock ``A randomized linear-time algorithm to find minimum spanning trees,'' \newblock J. Assoc. Comput. Mach. \textbf{42}(2), 321--328 (1995). \bibitem{K56} J.~B. Kruskal, Jr., \newblock ``On the shortest spanning subtree of a graph and the traveling salesman problem,'' \newblock Proc. Amer. Math. Soc. \textbf{7}, 48--50 (1956). \bibitem{KMCM20} K.~S.
Kumar, T.~Manigandan, D.~Chitra and L.~Murali, \newblock ``Object recognition using hausdorff distance for multimedia applications,'' \newblock Multimedia Tools and Appl. \textbf{79}, 4099--4114 (2020). \bibitem{LB17} A.~Lambert and G.~U. Bravo, \newblock ``The Comb Representation of Compact Ultrametric Spaces,'' \newblock p-Adic Numbers, Ultrametric Anal. Appl. \textbf{9}(1), 22--38 (2017). \bibitem{L03} A.~J. Lemin, \newblock ``The category of ultrametric spaces is isomorphic to the category of complete, atomic, tree-like, and real graduated lattices LAT*,'' \newblock Algebra Universalis \textbf{50}(1), 35--49 (2003). \bibitem{PD(UMB)} E.~Petrov and A.~Dovgoshey, \newblock ``On the {G}omory-{H}u inequality,'' \newblock J. Math. Sci. \textbf{198}(4), 392--411 (2014); \newblock translation from Ukr. Mat. Visn. 10(4), 469--496 (2013). \bibitem{P(TIAMM)} E.~A. Petrov, \newblock ``Ball-preserving mappings of finite ulrametric spaces,'' \newblock Tr. Inst. Prikl. Mat. Mekh.~\textbf{26}, 150--158 (2013) \newblock [In Russian]. \bibitem{PR02} S.~Pettie and V.~Ramachandran, \newblock ``An optimal minimum spanning tree algorithm,'' \newblock J. ACM \textbf{49}(1), 16--34 (2002). \bibitem{Q09} D.~Qiu, \newblock ``Geometry of non-{A}rchimedian {G}romov-{H}ausdorff distance,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{1}(4), 317--337 (2009). \bibitem{Q14} D.~Qiu, \newblock ``The structures of {H}ausdorff metric in non-{A}rchimedean spaces,'' \newblock p-Adic Numbers Ultrametric Anal. Appl. \textbf{6}(1), 33--53 (2014). \bibitem{S20} J.~Sch\"{a}fer, \newblock ``A note on ultrametric spaces, minimum spanning trees and the topological distance algorithm,'' \newblock Information \textbf{11}(9), 1--9 (2020). \bibitem{SS03} C.~Semple and M.~Steel, \newblock \emph{Phylogenetics}, Oxford Lecture Series in Mathematics and its Applications \textbf{24} (Oxford University Press, New York, 2003). \bibitem{WCT99} B.~Y. Wu, K.-M. Chao and C.~Y. 
Tang, \newblock ``Approximation and exact algorithms for constructing minimum ultrametric trees from distance matrices,'' \newblock J. Comb. Optim. \textbf{3}(2-3), 199--211 (1999). \end{thebibliography} \end{document}
2412.17437v1
http://arxiv.org/abs/2412.17437v1
Existence of solution to modified Gursky-Streets equation
\documentclass[psamsfonts]{amsart} \usepackage{amssymb} \usepackage{graphicx,color} \usepackage{amssymb,amsfonts,amsthm} \usepackage[all,arc]{xy} \usepackage{enumerate} \usepackage{mathrsfs} \newtheorem{thm}{Theorem}[section] \newtheorem*{theorem}{Theorem A} \newtheorem{lemma}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{exmp}[thm]{Example} \newtheorem{problem}[thm]{Problem} \newtheorem{quest}[thm]{Question} \newtheorem{exer}[thm]{Exercise} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \makeatletter \let\c@equation\c@thm \makeatother \numberwithin{equation}{section} \bibliographystyle{plain} \title[Modified Gursky-Streets equation]{Existence of solution to modified Gursky-Streets equation} \thanks{The author is supported by National Natural Science Foundation of China (No. 12001138).} \author{Zhenan Sui} \address{Institute for Advanced Study in Mathematics of HIT, Harbin Institute of Technology, Harbin, China} \email{[email protected]} \begin{document} \begin{abstract} We solve the modified Gursky-Streets equation, which is a fully nonlinear equation arising in conformal geometry, for all $1 \leq k \leq n$ with uniform $C^{1, 1}$ estimates. 
\end{abstract} \subjclass[2010]{Primary 53C21; Secondary 35J60} \maketitle \section {\large Introduction} \vspace{4mm} On a smooth compact Riemannian manifold $(M^n, g)$ of dimension $n \geq 3$, we are interested in solving the following class of conformal curvature equations \begin{equation} \label{eq1} u_{tt} \sigma_k \big( W[u] \big) - \sigma_{k}^{ij} \big( W[u] \big) u_{ti} u_{tj} = \psi(x, t) \quad \text{on} \quad M \times [0, 1] \end{equation} subject to the boundary conditions \begin{equation} \label{eq1-4} u(\cdot, 0) = u_0, \quad u(\cdot, 1) = u_1, \end{equation} where \begin{equation} \label{eq1-7} W[u] = g^{- 1} \bigg( \nabla^2 u + s d u \otimes d u + \Big( \gamma \Delta u - \frac{r}{2} |\nabla u|^2 \Big) g + A \bigg), \end{equation} $\gamma, s, r \in \mathbb{R}$, $\gamma \geq 0$, $A$ is a smooth symmetric tensor on $M$ with $\lambda ( g^{- 1} A ) \in \Gamma_k$, $\psi \geq 0$ is a given smooth function defined on $M \times [0, 1]$, and $u_0$, $u_1$ are given smooth functions on $M$ with $\lambda\big( W[u_0] \big) \in \Gamma_k$ and $\lambda\big( W[u_1] \big) \in \Gamma_k$. Also, $\nabla u$, $\nabla^{2} u$ and $\Delta u$ denote the gradient, the Hessian and the Laplacian of $u$ with respect to the background metric $g$, respectively. In order to make the notation and computation easier, we always choose a smooth local orthonormal frame field $e_1, \ldots, e_n$ on $M$ with respect to the metric $g$. Thus, $u_{ti} := \nabla_{e_i} u_t$, $u_{ij} := \nabla_{e_j e_i} u$, and higher order covariant derivatives can be similarly written in this manner.
Besides, \[ \sigma_k \big( W[u] \big) := \sigma_k \Big( \lambda \big( W[u] \big) \Big), \quad \sigma_{k}^{ij} \big( W[u] \big) := \frac{\partial \sigma_k \big( W[u] \big)}{\partial W_{ij}[u]}, \] where $\lambda \big( W[u] \big) = ( \lambda_1, \ldots, \lambda_n )$ are the eigenvalues of the matrix $W[u]$, and \[ \sigma_k (\lambda) = \sum\limits_{ 1 \leq i_1 < \cdots < i_k \leq n} \lambda_{i_1} \cdots \lambda_{i_k} \] is the $k$th elementary symmetric function defined on G\aa rding's cone \[\Gamma_k = \{ \lambda \in \mathbb{R}^n : \sigma_j ( \lambda ) > 0, \, j = 1, \ldots, k \}. \] If we set \[ E_u = u_{tt} W[u] - g^{- 1} d u_t \otimes d u_t, \] we arrive at the following relation \begin{equation} \label{eq4} u_{tt} \sigma_k \big( W[u] \big) - \sigma_{k}^{ij} \big( W[u] \big) u_{ti} u_{tj} = u_{tt}^{1 - k} \sigma_k ( E_u ) \end{equation} by Proposition \ref{prop4}. Thus equation \eqref{eq1} is equivalent to \begin{equation} \label{eq2} u_{tt}^{1 - k} \sigma_k (E_u) = \psi(x, t). \end{equation} According to \cite{HeXuZhang}, we call a $C^2$ function $u$ on $M \times [0, 1]$ admissible if \[ \lambda \big( W [u] \big) \in \Gamma_k, \quad u_{tt} > 0, \quad \sigma_k (E_u) > 0. \] Consequently, when $\psi > 0$ throughout $M \times [0, 1]$, equation \eqref{eq1} or \eqref{eq2} is elliptic for a $C^2$ solution $u$ with $\lambda \big( W [u] \big) (x, t) \in \Gamma_k$ for any $(x, t) \in M \times [0, 1]$ (see Proposition \ref{Prop7}). If $\psi \equiv 0$, $s = 1$, $r = 1$, $\gamma = 0$, and $A$ is the Schouten tensor of $g$, then we obtain the following degenerate conformal curvature equation: \begin{equation} \label{eq1-2} u_{tt} \sigma_k \big( A_u \big) - \sigma_{k}^{ij} \big( A_u \big) u_{ti} u_{tj} = 0, \end{equation} with \begin{equation} \label{eq1-3} A_u = \nabla^2 u + d u \otimes d u - \frac{1}{2} |\nabla u|^2 g + A.
\end{equation} Equation \eqref{eq1-2} was introduced by Gursky-Streets \cite{Gursky-Streets2018}, where the authors defined a Riemannian metric on a conformal class of metrics on four-manifolds, that is, when $n = 4$ and $k = 2$; accordingly, \eqref{eq1-2} is known as the Gursky-Streets equation, which turns out to be the geodesic equation under this metric. A remarkable application of the solvability of \eqref{eq1-2} subject to the boundary condition \eqref{eq1-4} is the proof of the uniqueness of the solution to the $\sigma_2$-Yamabe problem. Later, He \cite{He2021} generalized Gursky-Streets' results to any dimension $n \geq 4$. To be more precise, he was able to establish uniform $C^{1, 1}$ regularity for the Gursky-Streets equation with $k = 2$, which gives a more straightforward proof of the uniqueness of the solution to the $\sigma_2$-Yamabe problem. Then, He, Xu and Zhang \cite{HeXuZhang} extended He's results to $k \leq \frac{n}{2}$ by a crucial proof of the concavity of the operator associated to \eqref{eq1}, which holds for all $1 \leq k \leq n$ (see Proposition \ref{prop5}) and relies on G\aa rding's theory of hyperbolic polynomials and results on the real roots of interlacing polynomials. This concavity plays a substantial role in the a priori estimates. It is worth mentioning that the geometry of the Gursky-Streets metric on the space of conformal metrics parallels the geometry of the space of K\"ahler metrics, where the geodesic equation can be written as a homogeneous complex Monge-Amp\`ere equation (see \cite{Calabi1982, Mabuchi1987, Mabuchi1986, Semmes1992, Donaldson1999, ChenXX2000, Donaldson2004}). Motivated by the above work, we are especially interested in the existence and regularity of solutions to the generalized Gursky-Streets equation \eqref{eq1}.
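As a quick numerical sanity check (not part of the paper), the identity behind \eqref{eq4}, namely $\sigma_k(A - X \otimes X) = \sigma_k(A) - \sigma_k^{ij}(A) X_i X_j$ from Proposition \ref{prop4}, says that $t \mapsto \sigma_k(A + t\, X \otimes X)$ is affine for a rank-one perturbation. The NumPy sketch below reads $\sigma_k$ off the characteristic polynomial and verifies this on a random symmetric matrix; the helper name `sigma_k` is ours, not the paper's notation.

```python
import numpy as np

def sigma_k(M, k):
    # k-th elementary symmetric function of the eigenvalues of M:
    # np.poly(M) returns the coefficients of det(t*I - M), whose
    # t^(n-k) coefficient equals (-1)^k * sigma_k(eigenvalues).
    return (-1) ** k * np.poly(M)[k]

rng = np.random.default_rng(1)
n, k = 5, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # random symmetric test matrix
X = rng.standard_normal(n)
B = np.outer(X, X)                # rank-one perturbation X (x) X

# sigma_k^{ij}(A) X_i X_j is the (constant) slope of t -> sigma_k(A + t B)
slope = sigma_k(A + B, k) - sigma_k(A, k)
lhs = sigma_k(A - B, k)           # sigma_k(A - X (x) X)
rhs = sigma_k(A, k) - slope       # sigma_k(A) - sigma_k^{ij}(A) X_i X_j
assert np.isclose(lhs, rhs)
```

The check exploits the matrix determinant lemma: for rank-one $B$, $\det(sI + A + tB)$ is affine in $t$, hence so is each coefficient $\sigma_k(A + tB)$.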
Under the conformal deformation of metric $g_u = e^{2 u} g$, the Ricci tensor $\text{Ric}_u$ of $g_u$ and $\text{Ric}$ of $g$ are related by the formula \begin{equation} \label{eq1-5} - \frac{\text{Ric}_u}{n - 2} = \nabla^2 u - d u \otimes d u + \Big( \frac{\Delta u}{n - 2} + |\nabla u|^2 \Big) g - \frac{\text{Ric}}{n - 2}. \end{equation} Moreover, the Schouten tensor \[ A_u = \frac{1}{n - 2} \Big( \text{Ric}_u - \frac{S_u}{2 (n - 1)} g_u \Big) \] of $g_u$ and $A$ of $g$ are related by the formula \begin{equation} \label{eq1-6} - A_u = \nabla^2 u - d u \otimes d u + \frac{1}{2} |\nabla u|^2 g - A , \end{equation} where $S_u$ is the scalar curvature of $g_u$. If we change $u$ into $- u$ in \eqref{eq1-6}, we will obtain exactly \eqref{eq1-3}. The tensors in \eqref{eq1-3}, \eqref{eq1-6} and \eqref{eq1-5} all fall into the form \eqref{eq1-7}; they are known as the conformal Schouten tensor, the conformal negative Schouten tensor and the conformal negative Ricci tensor, respectively. A central problem in conformal geometry is the solvability of the fully nonlinear Yamabe type problem, which can be written in the general form \begin{equation} \label{eq1-8} \sigma_k \big( W [u] \big) = e^{- 2 k u} \psi(x), \end{equation} or \begin{equation} \label{eq1-9} \sigma_k \big( W [u] \big) = e^{2 k u} \psi(x), \end{equation} where $\psi(x)$ is a given smooth positive function defined on a smooth compact manifold with or without boundary. There is a vast amount of literature on the existence and regularity of solutions to \eqref{eq1-8} or \eqref{eq1-9} ever since the work of Viaclovsky \cite{Viaclovski2000}. In particular, we notice that the tensors \eqref{eq1-3}, \eqref{eq1-6} and \eqref{eq1-5} may bring very different existence and regularity issues to equations of the form \eqref{eq1-8} or \eqref{eq1-9}. The estimates in \cite{Viaclovsky2002} and \cite{GV03} reflect this difference (see also \cite{CGH22} for a more general equation).
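Admissibility for all of these equations is membership of the relevant eigenvalues in the G\aa rding cone $\Gamma_k$, and the cones are nested: $\Gamma_1 \supset \Gamma_2 \supset \cdots \supset \Gamma_n$. A minimal NumPy sketch of a membership test (illustrative only; `elem_sym` and `in_Gamma` are our names, not notation from the paper):

```python
import numpy as np

def elem_sym(lam):
    # e_1, ..., e_n of the entries of lam, read off the monic polynomial
    # prod (x - lam_i): its x^(n-j) coefficient is (-1)^j * e_j.
    c = np.poly(lam)
    return [(-1) ** j * c[j] for j in range(1, len(lam) + 1)]

def in_Gamma(lam, k):
    # Garding cone Gamma_k: sigma_j(lam) > 0 for every j = 1, ..., k
    return all(s > 0 for s in elem_sym(lam)[:k])

# The cones are nested: Gamma_1 contains Gamma_2 contains ... Gamma_n.
lam = [3.0, 1.0, -1.0]
assert in_Gamma(lam, 1)        # sigma_1 = 3 > 0
assert not in_Gamma(lam, 2)    # sigma_2 = 3 - 3 - 1 = -1 < 0
assert in_Gamma([1.0, 1.0, 1.0], 3)
```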
Moreover, when solving fully nonlinear Loewner-Nirenberg type problems, we observe that the negative Schouten tensor case admits Lipschitz continuous solutions in general, as proved by Gonz\'alez, Li and Nguyen \cite{Gonzalez-Li-Nguyen}, and there are counterexamples to $C^1$ regularity given in Li and Nguyen \cite{LN21} and in Li, Nguyen and Xiong \cite{LNX23}. In contrast, when $\gamma > 0$ (for example, \eqref{eq1-5}), Guan \cite{Guan08, Guan09} obtained smooth solutions to \eqref{eq1-9} on smooth compact manifolds with boundary, based on which a smooth solution to the fully nonlinear Loewner-Nirenberg problem for the negative Ricci tensor was obtained in \cite{Sui2024}. In view of these works, we are interested in whether the $\Delta u$ term can still yield a priori estimates up to second order for equation \eqref{eq1} when $\psi > 0$ throughout $M \times [0, 1]$. The following result gives an affirmative answer. \begin{thm} \label{Theorem1} Let $(M, g)$ be a compact Riemannian manifold with $\lambda(A) \in \Gamma_k$ and suppose that $\gamma > 0$. Given a smooth positive function $\psi(x, t)$ defined on $M \times [0, 1]$ and smooth functions $u_0$, $u_1$ defined on $M$ with $\lambda\big( W[u_0] \big) \in \Gamma_k$ and $\lambda\big( W[u_1] \big) \in \Gamma_k$, there exists a unique smooth solution $u$ of \eqref{eq1}--\eqref{eq1-4} on $M \times [0, 1]$ with $\lambda\big( W[ u ] \big) (x, t) \in \Gamma_k$ for any $(x, t) \in M \times [0, 1]$. Moreover, we have the following estimates which are independent of $\inf\psi$, \[ \max\limits_{M \times [0, 1]} \Big( |u| + |u_t| + |\nabla u| + u_{tt} + |\nabla^2 u| + |\nabla u_t| \Big) \leq C. \] \end{thm} Theorem \ref{Theorem1} was proved by He, Xu and Zhang \cite{HeXuZhang} for $W[u] = A_u$ assuming $k \leq \frac{n}{2}$. The condition $k \leq \frac{n}{2}$ plays a crucial role in their derivation of $C^2$ estimates due to the pattern $d u \otimes d u - \frac{1}{2} |\nabla u|^2 g$ in \eqref{eq1-3}.
If we assume $\gamma > 0$, the $\Delta u$ term can bring in good terms for $C^2$ estimates of \eqref{eq1}, regardless of the pattern of gradient terms. If we merely assume $\gamma \geq 0$ but $r \neq 0$, we can still obtain $C^1$ estimates for \eqref{eq1}: \[ \max\limits_{M \times [0, 1]} \Big( |u| + |u_t| + |\nabla u| \Big) \leq C. \] In order to apply the continuity method to prove the existence of an admissible solution to \eqref{eq1}--\eqref{eq1-4}, we need to find an admissible function which can serve as an initial solution of the continuity process. Different from \cite{HeXuZhang}, where $(1 - t) u_0 + t u_1 + a t (t - 1)$ with sufficiently large $a$ is admissible due to the pattern $d u \otimes d u - \frac{1}{2} |\nabla u|^2 g$, we establish the following existence and regularity result, which seems to be of independent interest. \begin{thm} \label{Thm4} Let $(M, g)$ be a compact Riemannian manifold with $\lambda(A) \in \Gamma_k$ and suppose that $\gamma > 0$. Given a smooth positive function $\psi(x, t)$ defined on $M \times [0, 1]$ and smooth functions $u_0$, $u_1$ defined on $M$ with $\lambda\big( W[u_0] \big) \in \Gamma_k$ and $\lambda\big( W[u_1] \big) \in \Gamma_k$, which also satisfy the compatibility conditions: \begin{equation} \label{eq5-2} e^{- 2 k u_0} \sigma_k \big( W[u_0] \big) = \psi(x, 0) \quad \text{and} \quad e^{- 2 k u_1} \sigma_k \big( W[u_1] \big) = \psi(x, 1), \end{equation} there exists a unique smooth solution $u(x, t)$ to \begin{equation} \label{eq5-1} \left\{ \begin{aligned} & e^{- 2 k u} \sigma_k \big( W[u] \big) = \psi(x, t) \quad \text{on} \quad M \times [0, 1], \\ & u(\cdot, 0) = u_0, \quad u(\cdot, 1) = u_1 \end{aligned} \right. \end{equation} with $\lambda\big( W[ u ] \big) (x, t) \in \Gamma_k$ for any $( x, t ) \in M \times [0, 1]$. Moreover, we have the estimates \[ \max\limits_{M \times [0, 1]} \Big( |u| + |u_t| + |\nabla u| + |\nabla^2 u| \Big) \leq C.
\] \end{thm} Since the estimates in Theorem \ref{Theorem1} are independent of $\inf \psi$, we obtain the existence of a $C^{1, 1}$ solution to the degenerate equation. \begin{thm} \label{Theorem2} Let $(M, g)$ be a compact Riemannian manifold with $\lambda(A) \in \Gamma_k$ and suppose that $\gamma > 0$. Given smooth functions $u_0$, $u_1$ defined on $M$ with $\lambda\big( W[u_0] \big) \in \Gamma_k$ and $\lambda\big( W[u_1] \big) \in \Gamma_k$, there exists a viscosity solution $u \in C^{1, 1} \big( M \times [0, 1] \big)$ of \begin{equation} \label{eq1-1} \left\{ \begin{aligned} & u_{tt} \sigma_k \big( W[u] \big) - \sigma_{k}^{ij} \big( W[u] \big) u_{ti} u_{tj} = 0, \\ & u(\cdot, 0) = u_0, \quad u(\cdot, 1) = u_1 \end{aligned} \right. \end{equation} with $\lambda\big( W[ u ] \big) (x, t) \in \overline{\Gamma}_k$ and $u_{tt} \geq 0$ almost everywhere in $M \times [0, 1]$. Moreover, we have the following estimates \[ \sup \limits_{M \times [0, 1]} \Big( |u| + |u_t| + |\nabla u| + u_{tt} + |\nabla^2 u| + |\nabla u_t| \Big) \leq C. \] \end{thm} Theorem \ref{Theorem2} was proved in \cite{HeXuZhang} for $W[u] = A_u$ assuming $k \leq \frac{n}{2}$, where a uniqueness result was also obtained because of the pattern $d u \otimes d u - \frac{1}{2} |\nabla u|^2 g$ in \eqref{eq1-3}, which can lead to an approximation result. This paper is organized as follows. The necessary preliminaries and the estimates for $u$ and $u_t$ are presented in Section 2. Section 3 is on the estimate of $|\nabla u|$, while Sections 4 and 5 are devoted to second order estimates. The existence is proved in Section 6. \medskip \noindent {\bf Acknowledgements} \quad The author would like to express deep thanks to Wei Sun for suggesting the idea of Theorem \ref{Thm4}. The author is supported by the National Natural Science Foundation of China (No. 12001138). \vspace{4mm} \section{Preliminary estimates} \vspace{4mm} In this section, we first provide some basic facts.
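One of these facts, the concavity of $F_k^{1/(k+1)}$ on the cone $\mathcal{S}$ (Proposition \ref{prop5} below), can be spot-checked numerically. The NumPy sketch below (not part of the proof) evaluates $F_k$ through the rank-one identity of Proposition \ref{prop4}, $\sigma_k^{ij}(r)\, y_i y_j = \sigma_k(r) - \sigma_k(r - y \otimes y)$, and tests midpoint concavity on random pairs drawn from a crude subset of $\mathcal{S}$; the sampler `sample_S` is our ad hoc construction.

```python
import numpy as np

def sigma_k(M, k):
    # k-th elementary symmetric function of the eigenvalues of M
    return (-1) ** k * np.poly(M)[k]

def F_k(R, k):
    # F_k(R) = r_00 sigma_k(r) - sigma_k^{ij}(r) r_{0i} r_{0j}; the second
    # term uses the rank-one identity sigma_k^{ij}(r) y_i y_j
    #   = sigma_k(r) - sigma_k(r - y (x) y).
    r00, y, r = R[0, 0], R[0, 1:], R[1:, 1:]
    return r00 * sigma_k(r, k) - (sigma_k(r, k) - sigma_k(r - np.outer(y, y), k))

def sample_S(rng, n):
    # crude sample of the cone S: r near 2I (so lambda(r) lies in Gamma_k),
    # r_00 large and r_{0i} small, which forces F_k(R) > 0
    R = np.zeros((n + 1, n + 1))
    B = 0.2 * rng.standard_normal((n, n))
    R[1:, 1:] = 2.0 * np.eye(n) + (B + B.T) / 2
    R[0, 1:] = R[1:, 0] = 0.1 * rng.standard_normal(n)
    R[0, 0] = 5.0
    return R

rng = np.random.default_rng(2)
n, k = 4, 2
midpoint_concave = True
for _ in range(200):
    R1, R2 = sample_S(rng, n), sample_S(rng, n)
    g1, g2 = F_k(R1, k) ** (1 / (k + 1)), F_k(R2, k) ** (1 / (k + 1))
    gm = F_k((R1 + R2) / 2, k) ** (1 / (k + 1))
    # midpoint concavity of F_k^{1/(k+1)} on the convex cone S
    midpoint_concave &= bool(gm >= (g1 + g2) / 2 - 1e-9)
assert midpoint_concave
```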
Let $M_{n + 1}$ be the set of real symmetric $(n + 1) \times (n + 1)$ matrices. For $R \in M_{n + 1}$, we write \[ R = (r_{IJ})_{0 \leq I, J \leq n}, \] and denote the submatrix $(r_{ij})_{1 \leq i, j \leq n}$ by $r$. We have the following fact on concavity, which was proved in He, Xu and Zhang \cite{HeXuZhang}. \begin{prop} \label{prop5} The set \[ \mathcal{S} = \{ R \in M_{n + 1} | \lambda(r) \in \Gamma_k, \, F_k (R):= r_{00} \sigma_k (r) - \sigma_k^{ij} (r) r_{0i} r_{0j} > 0 \} \] is a convex cone. Moreover, $F_k^{\frac{1}{k + 1}}(R)$, and hence $\ln F_k (R)$, are concave on $\mathcal{S}$ for $1 \leq k \leq n$. \end{prop} We also need the following fact which can be found in Gursky and Streets \cite{Gursky-Streets2018}. \begin{prop} \label{prop4} Given a symmetric $n \times n$ matrix $A$ and a vector $X = (X_1, \cdots, X_n)$, we have \[ \sigma_k^{ij} (A - X \otimes X) X_i X_j = \sigma_k^{ij} (A) X_i X_j , \] \[ \sigma_k (A - X \otimes X) = \sigma_k (A) - \sigma_k^{ij}(A) X_i X_j. \] \end{prop} The following proposition is proved in \cite{HeXuZhang}. \begin{prop} \label{prop6} Let $u$ be a $C^2$ function. If $\lambda \big( W [ u ] \big) \in \Gamma_k$, $u_{tt} > 0$ and $\sigma_k (E_u) > 0$, then $\lambda(E_u) \in \Gamma_k$. \end{prop} By direct computation, the linearized operator associated to equation \eqref{eq2} is \[ \begin{aligned} & \mathcal{L} (v) := (1 - k) u_{tt}^{- k} \sigma_k (E_u) v_{tt} \\ & + u_{tt}^{1 - k} \sigma_{k}^{ij} (E_u) \Big( v_{tt} W_{ij}[u] + u_{tt} \mathcal{M}_{ij} (v) - u_{t i} v_{tj} - v_{ti} u_{tj} \Big), \end{aligned} \] where \[ \mathcal{M} (v) := g^{-1} \Big( \nabla^2 v + s d u \otimes d v + s d v \otimes d u + \big( \gamma \Delta v - r \langle \nabla u, \nabla v \rangle \big) g \Big). \] \begin{prop} \label{Prop7} When $\psi > 0$ throughout $M \times [0, 1]$, equation \eqref{eq1} (or equivalently, \eqref{eq2}) is elliptic for a $C^2$ solution $u$ with $\lambda\big( W[u] \big) \in \Gamma_k$.
\end{prop} \begin{proof} It suffices to consider the operator \[ \begin{aligned} & \tilde{\mathcal{L}} (v) = (1 - k) u_{tt}^{- 1} \sigma_k (E_u) v_{tt} \\ & + \sigma_{k}^{ij} (E_u) \Big( v_{tt} W_{ij}[u] + u_{tt} \big( v_{ij} + \gamma \Delta v \delta_{ij} \big) - u_{t i} v_{tj} - v_{ti} u_{tj} \Big). \end{aligned} \] By direct computation, we have \[ \begin{aligned} \frac{\partial\tilde{\mathcal{L}}(v)}{\partial v_{tt}} = & (1 - k) u_{tt}^{- 1} \sigma_k (E_u) + \sigma_{k}^{ij} (E_u) W_{ij}[u], \\ \frac{\partial\tilde{\mathcal{L}}(v)}{\partial v_{ti}} = & - \sigma_{k}^{ij} (E_u) u_{tj}, \\ \frac{\partial\tilde{\mathcal{L}}(v)}{\partial v_{tj}} = & - \sigma_{k}^{ij} (E_u) u_{ti}, \\ \frac{\partial\tilde{\mathcal{L}}(v)}{\partial v_{ij}} = & \Big( \sigma_{k}^{ij} (E_u) + (n - k + 1) \sigma_{k - 1}(E_u) \gamma \delta_{ij} \Big) u_{tt}. \end{aligned} \] For any $(\xi_0, \xi_1, \cdots, \xi_n) \in \mathbb{R}^{n + 1}$, \begin{equation*} \begin{aligned} & \Big( (1 - k) u_{tt}^{- 1} \sigma_k (E_u) + \sigma_{k}^{ij} (E_u) W_{ij}[u] \Big) \xi_0^2 - \sigma_{k}^{ij} (E_u) u_{tj} \xi_0 \xi_i - \sigma_{k}^{ij} (E_u) u_{ti} \xi_0 \xi_j \\ & + \Big( \sigma_{k}^{ij} (E_u) + (n - k + 1) \sigma_{k - 1}(E_u) \gamma \delta_{ij} \Big) u_{tt} \xi_i \xi_j \\ = & \bigg( (1 - k) u_{tt}^{- 1} \sigma_k (E_u) + \sigma_{k}^{ij} (E_u) \frac{(E_u)_{ij} + u_{ti} u_{tj}}{u_{tt}} \bigg) \xi_0^2 - \sigma_{k}^{ij} (E_u) u_{tj} \xi_0 \xi_i \\ & - \sigma_{k}^{ij} (E_u) u_{ti} \xi_0 \xi_j + \Big( \sigma_{k}^{ij} (E_u) + (n - k + 1) \sigma_{k - 1}(E_u) \gamma \delta_{ij} \Big) u_{tt} \xi_i \xi_j \\ = & \Big( (1 - k) u_{tt}^{- 1} \sigma_k (E_u) + u_{tt}^{- 1} \sigma_{k}^{ij} (E_u) (E_u)_{ij} \Big) \xi_0^2 \\ & + \sigma_{k}^{ij} (E_u) u_{ti} u_{tj} u_{tt}^{- 1} \xi_0^2 - \sigma_{k}^{ij} (E_u) u_{tj} \xi_0 \xi_i - \sigma_{k}^{ij} (E_u) u_{ti} \xi_0 \xi_j + \sigma_{k}^{ij} (E_u) u_{tt} \xi_i \xi_j \\ & + (n - k + 1) \sigma_{k - 1}(E_u) \gamma u_{tt} \sum\limits_{i = 1}^n \xi_i^2 \\ = & u_{tt}^{- 1} \sigma_k 
(E_u) \xi_0^2 + (n - k + 1) \sigma_{k - 1}(E_u) \gamma u_{tt} \sum\limits_{i = 1}^n \xi_i^2 \\ & + \sigma_{k}^{ij} (E_u) \bigg( \frac{u_{ti}}{\sqrt{u_{tt}}} \xi_0 - \sqrt{u_{tt}} \xi_i \bigg) \bigg( \frac{u_{tj}}{\sqrt{u_{tt}}} \xi_0 - \sqrt{u_{tt}} \xi_j \bigg). \end{aligned} \end{equation*} By Proposition \ref{prop6}, we can see the ellipticity of \eqref{eq1} or \eqref{eq2}. \end{proof} Next we give some preliminary estimates for admissible solutions to \eqref{eq1}--\eqref{eq1-4}. \vspace{2mm} \subsection{$C^0$ estimate}~ \vspace{2mm} In this subsection, we derive the $C^0$ estimate, following the idea of Gursky-Streets \cite{Gursky-Streets2018}. \begin{prop} \label{Prop1} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}. Then \[ \max\limits_{M \times [0, 1]} |u| \leq C, \] where $C$ is a positive constant depending only on $|u_0|_{C^0(M)}$, $|u_1|_{C^0(M)}$ and the upper bound of $\psi$. \end{prop} \begin{proof} First, since $u_{tt} > 0$, we have \[ \frac{u(\cdot, t) - u(\cdot, 0)}{t - 0} < \frac{u(\cdot, 1) - u(\cdot, t)}{1 - t}, \quad \forall t \in (0, 1), \] which yields the upper bound \[ u(\cdot, t) < (1 - t) u(\cdot, 0) + t u(\cdot, 1) = (1 - t) u_0 + t u_1 \quad \forall t \in (0, 1). \] To find a lower bound of $u$, we consider \[ \Psi = u + a t (1 - t), \quad t \in [0, 1], \] where $a$ is a positive constant to be determined. Assume that $\Psi$ attains an interior minimum at $(x_0, t_0)$. Thus, at $(x_0, t_0)$, we have \[ \nabla u = 0, \quad \nabla^2 u \geq 0. 
\] In addition, \begin{equation} \label{eq2-3} \begin{aligned} \mathcal{L} (\Psi) = & (1 - k) u_{tt}^{- k} \sigma_k (E_u) \Psi_{tt} \\ & + u_{tt}^{1 - k} \sigma_{k}^{ij} (E_u) \Big( \Psi_{tt} W_{ij}[u] + u_{tt} \mathcal{M}_{ij} (\Psi) - u_{t i} \Psi_{tj} - \Psi_{ti} u_{tj} \Big) \\ = & (1 - k) u_{tt}^{- k} \sigma_k (E_u) (u_{tt} - 2 a) \\ & + u_{tt}^{1 - k} \sigma_{k}^{ij} (E_u) \Big( (u_{tt} - 2 a) W_{ij}[u] + u_{tt} \mathcal{M}_{ij} (u) - 2 u_{ti} u_{tj} \Big) \\ = & (1 + k) u_{tt}^{1 - k} \sigma_k (E_u) - 2 a u_{tt}^{- k} \sigma_k (E_u) - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) A_{ij} \\ & - 2 a u_{tt}^{- k} \sigma_k^{ij} (E_u) u_{t i} u_{t j}, \end{aligned} \end{equation} where \[ W_{ij}[u] = \nabla_{ij} u + \gamma \Delta u \delta_{ij} + A_{ij}, \quad (E_u)_{ij} = u_{tt} W_{ij}[u] - u_{t i} u_{t j}, \] \[ \mathcal{M}_{ij} (u) = \nabla_{ij} u + \gamma \Delta u \delta_{ij}. \] Since $\Psi$ attains an interior minimum at $(x_0, t_0)$, we know that at $(x_0, t_0)$, \[ \Psi_{tt} \nabla^2 \Psi - d \Psi_t \otimes d \Psi_t \geq 0. \] One may see \cite{Gursky-Streets2018}, pages 3528--3529, for a proof. The above inequality means that \[ (u_{tt} - 2 a ) \nabla^2 u - d u_t \otimes d u_t \geq 0. \] It follows that \[ u_{tt} \nabla^2 u - d u_t \otimes d u_t \geq 0. \] Consequently, \[ E_u = u_{tt} \nabla^2 u + u_{tt} \gamma \Delta u g + u_{tt} A - d u_t \otimes d u_t \geq u_{tt} A, \] if we assume that $\gamma \geq 0$. Since we have assumed that $\lambda(A) \in \Gamma_k$, by the concavity of $\sigma_k^{\frac{1}{k}}$ we have \begin{equation} \label{eq2-1} \sigma_k^{ij} (E_u) A_{ij} \geq k \sigma_k^{1 - \frac{1}{k}} (E_u) \sigma_k^{\frac{1}{k}} (A) > 0.
\end{equation} Also, by Proposition \ref{prop4}, we have \begin{equation} \label{eq2-2} \begin{aligned} & u_{tt}^{- k} \sigma_k (E_u) + u_{tt}^{- k} \sigma_k^{ij} (E_u) u_{t i} u_{t j} \\ = & u_{tt}^{- k} \sigma_k \big( u_{tt} W[u] - d u_t \otimes d u_t \big) + u_{tt}^{- k} \sigma_k^{ij} \big( u_{tt} W[u] - d u_t \otimes d u_t \big) u_{t i} u_{t j} \\ = & u_{tt}^{- k} \Big( \sigma_k \big( u_{tt} W[u] \big) - \sigma_{k}^{ij} \big( u_{tt} W[u] \big) u_{ti} u_{tj} \Big) + u_{tt}^{- k} \sigma_k^{ij} \big( u_{tt} W[u] \big) u_{t i} u_{t j} \\ = & \sigma_k \big( W[u] \big). \end{aligned} \end{equation} Substituting \eqref{eq2-1} and \eqref{eq2-2} into \eqref{eq2-3}, we can see that at $(x_0, t_0)$, \[ \begin{aligned} \mathcal{L} (\Psi) < & (1 + k) u_{tt}^{1 - k} \sigma_k ( E_u ) - 2 a \sigma_k \big( W[u] \big) \\ \leq & (1 + k) \psi (x, t) - 2 a \sigma_k ( A ). \end{aligned} \] By taking $a$ sufficiently large, depending on the upper bound of $\psi$, $u_0$ and $u_1$, we can make \[ \mathcal{L} (\Psi) < 0 \quad \text{ at } (x_0, t_0). \] But this is impossible. Hence $\Psi$ cannot have an interior minimum. We therefore obtain that \[ u \geq \min\Big\{ \min\limits_{M}{u_0}, \min\limits_M {u_1} \Big\} - \frac{a}{4}. \] \end{proof} \vspace{2mm} \subsection{Estimate for $u_t$}~ \vspace{2mm} We shall adopt the idea of Gursky-Streets \cite{Gursky-Streets2018} to derive the bound of $u_t$. \begin{prop} \label{Prop2} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}. Then we have \[ \max\limits_{M \times [0, 1]} |u_t| \leq C, \] where $C$ is a positive constant depending only on the upper bound of $\psi$, the bound of $u_0$, $u_1$, and the positive lower bound of $\sigma_k \big( W[u_0] \big)$, $\sigma_k \big( W[u_1] \big)$. \end{prop} \begin{proof} Since $u_{tt} > 0$, we have \[ u_t(\cdot, 0) \leq u_t (\cdot, t) \leq u_t (\cdot, 1), \quad \forall \, t \in [0, 1]. \] Hence it suffices to give a lower bound for $u_t(\cdot, 0)$ and an upper bound for $u_t (\cdot, 1)$.
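The G\aa rding-type inequality \eqref{eq2-1}, a consequence of the concavity of $\sigma_k^{1/k}$ on $\Gamma_k$, recurs in the estimates below. Here is a quick numerical spot check for $k = 2$ (NumPy; illustrative only, not part of the proof): since $\sigma_2(E + tA)$ is quadratic in $t$, the directional derivative $\sigma_2^{ij}(E) A_{ij}$ equals an exact central difference with step $1$.

```python
import numpy as np

def sigma_k(M, k):
    # k-th elementary symmetric function of the eigenvalues of M
    return (-1) ** k * np.poly(M)[k]

def rand_spd(rng, n):
    # random symmetric positive definite matrix, so its eigenvalues
    # lie in Gamma_n and hence in Gamma_k for every k
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + 0.1 * np.eye(n)

rng = np.random.default_rng(3)
n, k = 5, 2
for _ in range(100):
    E, A = rand_spd(rng, n), rand_spd(rng, n)
    # sigma_2(E + tA) is quadratic in t, so this central difference
    # recovers the linear coefficient sigma_2^{ij}(E) A_{ij} exactly
    dirdev = (sigma_k(E + A, k) - sigma_k(E - A, k)) / 2
    lower = k * sigma_k(E, k) ** (1 - 1 / k) * sigma_k(A, k) ** (1 / k)
    assert dirdev >= lower - 1e-8 * abs(lower)
```

With $E = A$ the two sides coincide ($\sigma_2^{ij}(E) E_{ij} = 2\sigma_2(E)$), so the inequality is sharp.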
First, we give a lower bound for $u_t(\cdot, 0)$. Consider the test function \[ \Phi (x, t) = u(x, t) - u_0 (x) - b t^2 + c t, \] where $b$ and $c$ are positive constants to be determined. Assume that $\Phi$ attains an interior minimum at $(x_0, t_0) \in M \times (0, 1)$. Then, at $(x_0, t_0)$, we have \[ \nabla u = \nabla u_0, \quad \nabla^2 u \geq \nabla^2 u_0, \] \begin{equation} \label{eq2-4} \begin{aligned} & \mathcal{L} (\Phi) = (1 - k) u_{tt}^{- k} \sigma_k (E_u) (u_{tt} - 2 b) \\ & + u_{tt}^{1 - k} \sigma_{k}^{ij} (E_u) \Big( (u_{tt} - 2 b) W_{ij}[u] + u_{tt} \big( W_{ij} [u] - W_{ij} [u_0] \big) - 2 u_{ti} u_{tj} \Big) \\ = & (1 + k) u_{tt}^{1 - k} \sigma_k (E_u) - 2 b (1 - k) u_{tt}^{- k} \sigma_k (E_u) - 2 b u_{tt}^{1 - k} \sigma_k^{ij} (E_u) W_{ij} [u] \\ & - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) W_{ij}[u_0] \\ = & (1 + k) u_{tt}^{1 - k} \sigma_k (E_u) - 2 b \sigma_k ( W[u] ) - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) W_{ij}[u_0] , \end{aligned} \end{equation} where \[ (E_u)_{ij} = u_{tt} W_{ij}[u] - u_{t i} u_{t j},\] and the last equality is derived in the same way as in the proof of Proposition \ref{Prop1}. Since $\Phi$ attains an interior minimum at $(x_0, t_0)$, we know that at $(x_0, t_0)$, \[ \Phi_{tt} \nabla^2 \Phi - d \Phi_t \otimes d \Phi_t \geq 0, \] or equivalently, \[ (u_{tt} - 2 b ) \nabla^2 (u - u_0) - d u_t \otimes d u_t \geq 0. \] It follows that \[ u_{tt} \nabla^2 ( u - u_0 ) - d u_t \otimes d u_t \geq 0. \] Consequently, \[ \begin{aligned} E_u = & u_{tt} \bigg( \nabla^2 u + s du \otimes du + \Big( \gamma \Delta u - \frac{r}{2} |\nabla u|^2 \Big) g + A \bigg) - d u_t \otimes d u_t \\ \geq & u_{tt} \bigg( \nabla^2 u_0 + s du_0 \otimes du_0 + \Big( \gamma \Delta u_0 - \frac{r}{2} |\nabla u_0|^2 \Big) g + A \bigg) \\ = & u_{tt} W[u_0].
\end{aligned} \] Since we have assumed that $\lambda\big( W[u_0] \big) \in \Gamma_k$, by the concavity of $\sigma_k^{\frac{1}{k}}$ we have \begin{equation*} \sigma_k^{ij} (E_u) W_{ij} [u_0] \geq k \sigma_k^{1 - \frac{1}{k}} (E_u) \sigma_k^{\frac{1}{k}} \big( W[u_0] \big) > 0. \end{equation*} Also, it is straightforward to see that at $(x_0, t_0)$, \[ W[u] \geq W[u_0]. \] Therefore, we conclude that at $(x_0, t_0)$, \[ \begin{aligned} \mathcal{L} (\Phi) < & (1 + k) \psi(x, t) - 2 b \sigma_k \big( W[u_0] \big). \end{aligned} \] By taking $b$ sufficiently large, depending on the upper bound of $\psi$, $u_0$, $u_1$ and the positive lower bound of $\sigma_k \big( W [u_0] \big)$, we can make \[ \mathcal{L} (\Phi) < 0 \quad \text{ at } (x_0, t_0). \] But this is impossible. Hence $\Phi$ cannot attain an interior minimum. Now we choose $c$ sufficiently large depending in addition on the lower bound of $u_1 - u_0$ such that $\Phi(\cdot, 1) \geq 0 = \Phi(\cdot, 0)$. Then we conclude that $\Phi_t (x, 0) \geq 0$ for all $x \in M$, for otherwise $\Phi$ would attain an interior minimum. Hence we obtain $u_t (\cdot, 0) \geq - c$. Next, we give an upper bound for $u_t(\cdot, 1)$. Consider the test function \[ \Theta (x, t) = u(x, t) - u_1 (x) - e t^2, \] where $e$ is a positive constant to be determined. Similarly to the previous argument, by taking $e$ sufficiently large, depending on the upper bound of $\psi$, $u_0$, $u_1$ and the positive lower bound of $\sigma_k \big( W [u_1] \big)$, we can prove that $\Theta$ cannot attain an interior minimum. Now we choose $e$ even larger, depending in addition on the upper bound of $u_1 - u_0$, such that $\Theta(x, 1) = - e \leq u_0(x) - u_1 (x) = \Theta(x, 0)$. Then we conclude that $\Theta_t (x, 1) \leq 0$ for all $x \in M$. Hence we obtain $u_t (\cdot, 1) \leq 2 e$. \end{proof} \vspace{4mm} \section{Gradient estimate}~ \vspace{4mm} \begin{thm} \label{Thm5} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}.
If $r \neq 0$ or $\gamma > 0$, then we have \[ \max\limits_{M \times [0, 1]} |\nabla u| \leq C, \] where $C$ is a positive constant depending only on $n$, $k$, $\sup\psi$, $\sup \big| \nabla ( \psi^{\frac{1}{k + 1}} ) \big|$, $g$, $|u_0|_{C^1}$, $|u_1|_{C^1}$. \end{thm} \begin{proof} Let $u \in C^3 \big( M \times (0, 1) \big) \cap C^1 \big( M \times [0, 1] \big)$ be an admissible solution of \eqref{eq1}. We consider the test function \[ \Phi_1 = |\nabla u|^2 + e^{- \lambda_2 u} + \lambda_3 t (t - 1), \] where $\lambda_2$ and $\lambda_3$ are constants to be determined. By direct calculation, we have \[ \begin{aligned} \mathcal{M}_{ij} \big( |\nabla u|^2 \big) = 2 u_l \mathcal{M}_{ij} (u_l) + 2 u_{li} u_{lj} + 2 \gamma |\nabla^2 u|^2 \delta_{ij}, \end{aligned} \] and thus \begin{equation} \label{eq2-12} \begin{aligned} \mathcal{L} \big( |\nabla u|^2 \big) = & 2 u_l \mathcal{L}(u_l) + 2 (1 - k) u_{tt}^{- k} \sigma_k (E_u) |\nabla u_t|^2 \\ & + 2 u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \Big( |\nabla u_t|^2 W_{ij}[u] + u_{tt} u_{l i} u_{l j} + \gamma u_{tt} |\nabla^2 u|^2 \delta_{ij} \\ & - u_{ti} u_{lj} u_{lt} - u_{li} u_{lt} u_{tj} - \frac{u_{ti} u_{tj} |\nabla u_t|^2}{u_{tt}} + \frac{u_{ti} u_{tj} |\nabla u_t|^2}{u_{tt}} \Big) \\ = & 2 u_l \mathcal{L}(u_l) + 2 u_{tt}^{- k} \sigma_k (E_u) |\nabla u_t|^2 \\ & + 2 u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \Big( u_{tt} u_{l i} u_{l j} + \gamma u_{tt} |\nabla^2 u|^2 \delta_{ij} \\ & - u_{ti} u_{lj} u_{lt} - u_{li} u_{lt} u_{tj} + \frac{u_{ti} u_{tj} |\nabla u_t|^2}{u_{tt}} \Big), \end{aligned} \end{equation} where $|\nabla^2 u|^2 = \sum\limits_{l m} u_{lm}^2$. Also, we compute \[\begin{aligned} (E_u)_{ij, l} = & u_{ttl} W_{ij}[u] + u_{tt} \Big( u_{ijl} + s u_{il} u_j + s u_i u_{jl} \\ & + \big( \gamma (\Delta u)_l - r u_m u_{ml} \big) \delta_{ij} + A_{ij, l} \Big) - u_{t il} u_{tj} - u_{ti} u_{tjl}. 
\end{aligned} \] Now we differentiate \eqref{eq2} to obtain \begin{equation} \label{eq2-13} \begin{aligned} & (1 - k) u_{tt}^{- k} u_{ttl} \sigma_k (E_u) + u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \bigg( u_{ttl} W_{ij} [u] \\ & + u_{tt} \Big( u_{ijl} + s u_{il} u_j + s u_i u_{jl} + \big( \gamma (\Delta u)_l - r u_m u_{ml} \big) \delta_{ij} + A_{ij, l} \Big) \\ & - u_{til} u_{tj} - u_{ti} u_{tjl} \bigg) = \psi_l. \end{aligned} \end{equation} Comparing with \[\begin{aligned} & \mathcal{L}(u_l) = (1 - k) u_{tt}^{- k} \sigma_k (E_u) u_{ltt} \\ & + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( u_{ltt} W_{ij}[u] + u_{tt} \mathcal{M}_{ij} (u_l) - u_{ti} u_{ltj} - u_{lti} u_{tj} \Big), \end{aligned} \] where \[ \mathcal{M}_{ij} (u_l) = u_{lij} + s u_i u_{lj} + s u_{li} u_j + \Big( \gamma \Delta (u_l) - r \langle \nabla u, \nabla(u_l) \rangle \Big) \delta_{ij}, \] and in view of \begin{equation} \label{eq2-28} \nabla_{ijl} u = \nabla_{lij} u + R_{lij}^m \nabla_m u, \end{equation} and \begin{equation} \label{eq2-29} \nabla_l \Delta u = \Delta \nabla_l u - \sum\limits_m R_{lmm}^s \nabla_s u, \end{equation} equation \eqref{eq2-13} becomes \begin{equation} \label{eq2-14} \mathcal{L}(u_l) = \psi_l + u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (R_{lji}^m u_m + \gamma R_{lmm}^s u_s \delta_{ij} - A_{ij, l}). \end{equation} Also, by the Cauchy-Schwarz inequality, the term in \eqref{eq2-12} can be estimated as \begin{equation} \label{eq2-23} \begin{aligned} & 2 u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \Big( u_{tt} u_{l i} u_{l j} - u_{ti} u_{lj} u_{lt} - u_{li} u_{lt} u_{tj} + \frac{u_{ti} u_{tj} |\nabla u_t|^2}{u_{tt}} \Big) \\ \geq & 2 u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \Big( u_{tt} u_{l i} u_{l j} + \frac{u_{ti} u_{tj} |\nabla u_t|^2}{u_{tt}} \Big) - 2 u_{tt}^{- k} \sigma_k^{ij}(E_u) u_{ti} u_{lt}^2 u_{tj} \\ & - 2 u_{tt}^{2 - k} \sigma_k^{ij}(E_u) u_{li} u_{lj} = 0.
\end{aligned} \end{equation} Substituting \eqref{eq2-23} and \eqref{eq2-14} into \eqref{eq2-12}, we obtain \begin{equation} \label{eq2-24} \begin{aligned} \mathcal{L} \big( |\nabla u|^2 \big) \geq & 2 u_l \mathcal{L}(u_l) + 2 u_{tt}^{- k} \sigma_k (E_u) |\nabla u_t|^2 \\ = & 2 u_l \psi_l + 2 u_l u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (R_{lji}^m u_m \\ & + \gamma R_{lmm}^s u_s \delta_{ij} - A_{ij, l}) + 2 u_{tt}^{- k} \sigma_k (E_u) |\nabla u_t|^2 . \end{aligned} \end{equation} Next, we compute \[ \begin{aligned} \mathcal{M}_{ij} (e^{- \lambda_2 u}) = & e^{- \lambda_2 u} \Big( - \lambda_2 u_{ij} + \lambda_2^2 u_i u_j - 2 \lambda_2 s u_i u_j \\ & + \big( - \gamma \lambda_2 \Delta u + \gamma \lambda_2^2 |\nabla u|^2 + r \lambda_2 |\nabla u|^2 \big) \delta_{ij} \Big). \end{aligned} \] It follows that \begin{equation} \label{eq2-16} \begin{aligned} \mathcal{L}(e^{- \lambda_2 u}) = & - \lambda_2 e^{- \lambda_2 u} \mathcal{L}(u) + (1 - k) u_{tt}^{- k} \sigma_k (E_u) e^{- \lambda_2 u} \lambda_2^2 u_t^2 \\ & + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) e^{- \lambda_2 u} \lambda_2^2 \Big( u_t^2 W_{ij}[u] - u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} + u_i u_j u_{tt} \\ & + \gamma |\nabla u|^2 u_{tt} \delta_{ij} - u_{ti} u_j u_t - u_i u_t u_{tj} + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ = & - \lambda_2 e^{- \lambda_2 u} \mathcal{L}(u) + u_{tt}^{- k} \sigma_k (E_u) e^{- \lambda_2 u} \lambda_2^2 u_t^2 \\ & + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) e^{- \lambda_2 u} \lambda_2^2 \Big( u_i u_j u_{tt} + \gamma |\nabla u|^2 u_{tt} \delta_{ij} \\ & - u_{ti} u_j u_t - u_i u_t u_{tj} + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big). \end{aligned} \end{equation} Also, we can compute \[ \mathcal{M}_{ij} (u) = u_{ij} + 2 s u_i u_j + \big( \gamma \Delta u - r |\nabla u|^2 \big) \delta_{ij}. 
\] It follows that \begin{equation} \label{eq2-18} \begin{aligned} \mathcal{L}(u) = & (1 - k) u_{tt}^{1 - k} \sigma_k (E_u) + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( u_{tt} W_{ij} [u] \\ & + u_{tt} \mathcal{M}_{ij}(u) - 2 u_{ti} u_{tj} \Big) \\ = & (1 - k) u_{tt}^{1 - k} \sigma_k (E_u) + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( 2 u_{tt} W_{ij} [u] - u_{tt} A_{ij} \\ & + s u_{tt} u_i u_j - \frac{r}{2} |\nabla u|^2 u_{tt} \delta_{ij} - 2 u_{ti} u_{tj} \Big) \\ = & (1 + k) \psi - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) A_{ij} + s u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j \\ & - \frac{r}{2} |\nabla u|^2 u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u). \end{aligned} \end{equation} By the Cauchy-Schwarz inequality, the term in \eqref{eq2-16} can be estimated as \begin{equation} \label{eq2-21} \begin{aligned} & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( u_i u_j u_{tt} - u_{ti} u_j u_t - u_i u_t u_{tj} + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ \geq & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( u_i u_j u_{tt} - \frac{1}{2} u_i u_j u_{tt} - 2 u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ = & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( \frac{1}{2} u_i u_j u_{tt} - u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ = & u_{tt}^{1 - k} \Big( \frac{1}{2} \sigma_k^{ij} (E_u) u_i u_j u_{tt} - u_{tt}^{- 1} \sigma_k^{ij} \big( u_{tt} W[u] - d u_t \otimes d u_t \big) u_t^2 u_{ti} u_{tj} \Big) \\ = & \frac{1}{2} u_{tt}^{1 - k} \sigma_k^{ij} (E_u) u_i u_j u_{tt} - u_{tt}^{- 1} \sigma_k^{ij} \big( W[u] \big) u_t^2 u_{ti} u_{tj} \\ = & \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \sigma_k \big( W[u] \big) u_t^2 + u_{tt}^{- k} u_t^2 \sigma_k (E_u). 
\end{aligned} \end{equation} Substituting \eqref{eq2-21} and \eqref{eq2-18} into \eqref{eq2-16} yields \begin{equation} \label{eq2-22} \begin{aligned} & \mathcal{L}(e^{- \lambda_2 u}) \\ \geq & - \lambda_2 e^{- \lambda_2 u} \mathcal{L}(u) + e^{- \lambda_2 u} \lambda_2^2 \gamma (n - k + 1) u_{tt}^{2 - k} |\nabla u|^2 \sigma_{k - 1} (E_u) \\ & + e^{- \lambda_2 u} \lambda_2^2 \Big( \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \sigma_k \big( W[u] \big) u_t^2 + 2 u_{tt}^{- k} u_t^2 \sigma_k (E_u) \Big) \\ = & - \lambda_2 e^{- \lambda_2 u} \Big( (1 + k) \psi - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) A_{ij} \\ & + s u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \frac{r}{2} |\nabla u|^2 u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) \Big) \\ & + e^{- \lambda_2 u} \lambda_2^2 \gamma (n - k + 1) u_{tt}^{2 - k} |\nabla u|^2 \sigma_{k - 1} (E_u) \\ & + e^{- \lambda_2 u} \lambda_2^2 \Big( \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \sigma_k \big( W[u] \big) u_t^2 + 2 u_{tt}^{- k} u_t^2 \sigma_k (E_u) \Big) . \end{aligned} \end{equation} Finally, we compute \begin{equation} \label{eq2-20} \begin{aligned} \mathcal{L} \Big( \lambda_3 t (t - 1) \Big) = & (1 - k) u_{tt}^{- k} \sigma_k (E_u) 2 \lambda_3 + u_{tt}^{1 - k} \sigma_k^{ij}(E_u) 2 \lambda_3 W_{ij}[u] \\ = & (1 - k) u_{tt}^{- k} \sigma_k (E_u) 2 \lambda_3 + u_{tt}^{- k} \sigma_k^{ij}(E_u) 2 \lambda_3 \Big( (E_u)_{ij} + u_{ti} u_{tj} \Big) \\ = & 2 \lambda_3 u_{tt}^{- k} \sigma_k (E_u) + 2 \lambda_3 u_{tt}^{- k} \sigma_k^{ij} (E_u) u_{ti} u_{tj} \\ = & 2 \lambda_3 u_{tt}^{- k} \sigma_k (E_u) + 2 \lambda_3 u_{tt}^{- k} \sigma_k^{ij} \big( u_{tt} W[u] \big) u_{ti} u_{tj} \\ = & 2 \lambda_3 \sigma_k \big( W[u] \big). 
\end{aligned} \end{equation} Combining \eqref{eq2-24}, \eqref{eq2-22} and \eqref{eq2-20} yields \begin{equation} \label{eq2-25} \begin{aligned} & \mathcal{L}(\Phi_1) \geq 2 u_l \psi_l + 2 u_l u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (R_{lji}^m u_m + \gamma R_{lmm}^s u_s \delta_{ij} - A_{ij, l}) \\ & + 2 u_{tt}^{- k} \sigma_k (E_u) |\nabla u_t|^2 - \lambda_2 e^{- \lambda_2 u} \Big( (1 + k) \psi - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) A_{ij} \\ & + s u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \frac{r}{2} |\nabla u|^2 u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) \Big) \\ & + e^{- \lambda_2 u} \lambda_2^2 \gamma (n - k + 1) u_{tt}^{2 - k} |\nabla u|^2 \sigma_{k - 1} (E_u) \\ & + e^{- \lambda_2 u} \lambda_2^2 \Big( \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j - \sigma_k \big( W[u] \big) u_t^2 + 2 u_{tt}^{- k} u_t^2 \sigma_k (E_u) \Big) \\ & + 2 \lambda_3 \sigma_k \big( W[u] \big) \\ \geq & 2 u_l \psi_l - \lambda_2 e^{- \lambda_2 u} (1 + k) \psi + 2 e^{- \lambda_2 u} \lambda_2^2 u_{tt}^{- 1} u_t^2 \psi \\ & + \Big( \lambda_2 e^{- \lambda_2 u} \frac{(n - k + 1) r}{2} |\nabla u|^2 + e^{- \lambda_2 u} \lambda_2^2 \gamma (n - k + 1) |\nabla u|^2 \\ & - C |\nabla u|^2 - C |\lambda_2| e^{- \lambda_2 u} \Big) u_{tt}^{2 - k} \sigma_{k - 1} (E_u) \\ & + e^{- \lambda_2 u} \Big( \frac{\lambda_2^2}{2} - \lambda_2 s \Big) u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j + \Big( 2 \lambda_3 - C e^{- \lambda_2 u} \lambda_2^2 \Big) \sigma_k \big( W[u] \big). \end{aligned} \end{equation} By the arithmetic-geometric mean inequality, the Newton-Maclaurin inequality and \eqref{eq2}, we have \begin{equation*} \begin{aligned} & k u_{tt}^{2 - k} \sigma_{k - 1} (E_u) |\nabla u|^2 + \psi u_{tt}^{- 1} u_t^2 \geq (k + 1) \Big( u_{tt}^{(2 - k) k - 1} \sigma_{k - 1}^k (E_u) |\nabla u|^{2 k} \psi u_t^2 \Big)^{\frac{1}{k + 1}} \\ \geq & C(n, k) \Big( \psi^{k - 1} |\nabla u|^{2 k} \psi u_t^2 \Big)^{\frac{1}{k + 1}} = C(n, k) \psi^{\frac{k}{k + 1}} |\nabla u|^{\frac{2 k}{k + 1}} |u_t|^{\frac{2}{k + 1}}. 
\end{aligned} \end{equation*} \vspace{2mm} {\bf The case when $r \neq 0$.} \vspace{2mm} When $r > 0$, we may subtract $c_1 t + c_2$ from $u$, where $c_1$ and $c_2$ are sufficiently large constants, to make $u < 0$ and $u_t \leq - 1$ on $M \times [0, 1]$. When $r < 0$, we may add $c_1 t + c_2$ to $u$, to make $u > 0$ and $u_t \geq 1$ on $M \times [0, 1]$. When $r > 0$, we choose $\lambda_2 > 0$ sufficiently large, while when $r < 0$, we choose $ - \lambda_2 > 0$ sufficiently large, and then choose $\lambda_3 > 0$ sufficiently large so that \eqref{eq2-25} reduces to \begin{equation} \label{eq2-27} \begin{aligned} & \mathcal{L}(\Phi_1) \geq - 2 |\nabla u| |\nabla \psi| - \lambda_2 e^{- \lambda_2 u} (1 + k) \psi \\ & + \min\Big\{ 2 |\lambda_2|, \frac{(n - k + 1) |r|}{4 k} \Big\} |\lambda_2| e^{- \lambda_2 u} C(n, k) \psi^{\frac{k}{k + 1}} |\nabla u|^{\frac{2 k}{k + 1}} |u_t|^{\frac{2}{k + 1}} \\ & + \Big( \frac{1}{2} \lambda_2 e^{- \lambda_2 u} \frac{(n - k + 1) r}{2} |\nabla u|^2 - C |\nabla u|^2 - C |\lambda_2| e^{- \lambda_2 u} \Big) u_{tt}^{2 - k} \sigma_{k - 1} (E_u) \\ & + e^{- \lambda_2 u} \Big( \frac{\lambda_2^2}{2} - \lambda_2 s \Big) u_{tt}^{2 - k} \sigma_k^{ij} (E_u) u_i u_j + \Big( 2 \lambda_3 - C e^{- \lambda_2 u} \lambda_2^2 \Big) \sigma_k \big( W[u] \big) \\ \geq & 2 |\nabla \psi| \Big( |\nabla u|^{\frac{2 k}{k + 1}} - |\nabla u| \Big) - \lambda_2 e^{- \lambda_2 u} (1 + k) \psi \\ & + \frac{1}{2} \min\Big\{ 2 |\lambda_2|, \frac{(n - k + 1) |r|}{4 k} \Big\} |\lambda_2| e^{- \lambda_2 u} C(n, k) \psi^{\frac{k}{k + 1}} |\nabla u|^{\frac{2 k}{k + 1}} \\ & + \Big( \lambda_2 e^{- \lambda_2 u} \frac{(n - k + 1) r}{8} |\nabla u|^2 - C |\lambda_2| e^{- \lambda_2 u} \Big) u_{tt}^{2 - k} \sigma_{k - 1} (E_u) . \end{aligned} \end{equation} Suppose that $\Phi_1$ attains its maximum at $(x_1, t_1) \in M \times (0, 1)$. 
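As an aside, the two elementary facts used in the display above are easy to check numerically: the weighted arithmetic-geometric mean inequality $k A + B \geq (k + 1) (A^k B)^{\frac{1}{k + 1}}$ for $A, B > 0$, and the Maclaurin-type consequence of the Newton inequalities, which gives $\sigma_{k - 1} \geq c(n, k) \sigma_k^{\frac{k - 1}{k}}$. The following sketch is purely illustrative and is not part of the proof; it tests both inequalities on random positive data (positive vectors being a special case of $\lambda \in \Gamma_k$):

```python
import itertools
import math
import random

def sigma(k, lam):
    """Elementary symmetric polynomial sigma_k of the entries of lam."""
    return sum(math.prod(c) for c in itertools.combinations(lam, k))

random.seed(0)
n = 5
for _ in range(200):
    k = random.randint(1, n)
    # Weighted AM-GM: k*A + B >= (k+1)*(A^k * B)^(1/(k+1)) for A, B > 0,
    # which is the first inequality in the display above.
    A, B = random.uniform(0.01, 10.0), random.uniform(0.01, 10.0)
    assert k * A + B >= (k + 1) * (A**k * B) ** (1 / (k + 1)) - 1e-9
    # Maclaurin inequality, tested here on positive vectors: for k >= 2,
    # (sigma_{k-1}/C(n,k-1))^(1/(k-1)) >= (sigma_k/C(n,k))^(1/k),
    # which is how sigma_{k-1}^k(E_u) is traded for psi^{k-1} above.
    lam = [random.uniform(0.01, 5.0) for _ in range(n)]
    if k >= 2:
        lhs = (sigma(k - 1, lam) / math.comb(n, k - 1)) ** (1 / (k - 1))
        rhs = (sigma(k, lam) / math.comb(n, k)) ** (1 / k)
        assert lhs >= rhs - 1e-9
print("AM-GM and Maclaurin checks passed")
```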
We may assume that $|\nabla u| (x_1, t_1)$ is sufficiently large, since otherwise we are done, so that \eqref{eq2-27} implies that \[ \mathcal{L}(\Phi_1) (x_1, t_1) > 0 . \] But this is impossible. Therefore, $\Phi_1$ attains its maximum on $M \times \{ 0, 1 \}$. We hence obtain a bound for $|\nabla u|$ on $M \times [0, 1]$. \vspace{2mm} {\bf The case when $\gamma > 0$.} In this case, we use the term \[ e^{- \lambda_2 u} \lambda_2^2 \gamma (n - k + 1) u_{tt}^{2 - k} |\nabla u|^2 \sigma_{k - 1} (E_u)\] instead of \[ e^{- \lambda_2 u} \lambda_2 \frac{(n - k + 1) r}{2} u_{tt}^{2 - k} |\nabla u|^2 \sigma_{k - 1} (E_u) \] to derive the estimate. We may subtract $c_1 t + c_2$ from $u$, where $c_1$ and $c_2$ are sufficiently large constants, to make $u < 0$ and $u_t \leq - 1$ on $M \times [0, 1]$. We choose $\lambda_2 > 0$ sufficiently large, and then choose $\lambda_3 > 0$ sufficiently large so that \eqref{eq2-25} reduces to \begin{equation} \label{eq2-26} \begin{aligned} \mathcal{L}(\Phi_1) \geq & 2 |\nabla \psi| \Big( |\nabla u|^{\frac{2 k}{k + 1}} - |\nabla u| \Big) - \lambda_2 e^{- \lambda_2 u} (1 + k) \psi \\ & + \frac{1}{2} \min\Big\{ 2, \frac{(n - k + 1) \gamma}{4 k} \Big\} \lambda_2^2 e^{- \lambda_2 u} C(n, k) \psi^{\frac{k}{k + 1}} |\nabla u|^{\frac{2 k}{k + 1}} \\ & + \Big( \lambda_2^2 e^{- \lambda_2 u} \frac{(n - k + 1) \gamma}{4} |\nabla u|^2 - C \lambda_2 e^{- \lambda_2 u} \Big) u_{tt}^{2 - k} \sigma_{k - 1} (E_u). \end{aligned} \end{equation} The rest of the proof follows along the same lines as in the previous case. \end{proof} \vspace{4mm} \section{Second order boundary estimate} \vspace{4mm} In this section, we derive the boundary estimate for second order derivatives. \begin{thm} \label{Thm3} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}. Suppose that $\gamma > 0$. Then we have the estimate \[ \max\limits_{M \times \{0, 1\}} \Big( u_{tt} + |\nabla u_t| + |\nabla^2 u| \Big) \leq C . 
\] \end{thm} \begin{proof} A bound for $|\nabla^2 u|$ on $t = 0$ and $t = 1$ is immediate. Next, we give a bound for $|\nabla u_t|$ on $t = 0$. Consider the test function \[ \Psi = \big| \nabla (u - u_0) \big| + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t , \] where $a$, $b$ and $c$ are positive constants to be chosen later. We shall prove that by choosing $a$, $b$ appropriately, $\Psi$ cannot attain an interior maximum. If not, suppose that $\Psi$ attains an interior maximum at $(x_1, t_1) \in M \times (0, 1)$. We choose a smooth local orthonormal frame field $e_1, \ldots, e_n$ around $x_1$ on $M$ such that \[ e_1 (x_1) = \frac{\nabla(u - u_0)}{\big| \nabla(u - u_0) \big|} (x_1, t_1) \quad \text{ if } \big| \nabla(u - u_0) \big| (x_1, t_1) \neq 0. \] If $\big| \nabla(u - u_0) \big| (x_1, t_1) = 0$, we may choose an arbitrary smooth local orthonormal frame field $e_1, \ldots, e_n$ around $x_1$ on $M$. Then we see that \[ \tilde{\Psi} = (u - u_0)_1 + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t \] attains its maximum at $(x_1, t_1)$. By \eqref{eq2-14}, \begin{equation} \label{eq4-1} \begin{aligned} \mathcal{L} \Big( (u - u_0)_1 \Big) = \mathcal{L} \big( u_1 \big) - \mathcal{L} \big( (u_0)_1 \big) \geq \psi_1 - C u_{tt}^{2 - k} \sigma_{k - 1} (E_u) . \end{aligned} \end{equation} Next, we compute \[ \begin{aligned} \mathcal{M}_{ij} \big( e^{- a (u - u_0) } \big) = & e^{- a (u - u_0) } \Big( - a \mathcal{M}_{ij} (u - u_0) + a^2 (u - u_0)_i (u - u_0)_j \\ & + \gamma a^2 \big| \nabla ( u - u_0 ) \big|^2 \delta_{ij} \Big). 
\end{aligned} \] It follows that \begin{equation} \label{eq4-3} \begin{aligned} & \mathcal{L} \big( e^{- a (u - u_0) } \big) \\ = & - a e^{- a (u - u_0)} \mathcal{L}(u - u_0) + u_{tt}^{- k} \sigma_k (E_u) e^{- a (u - u_0)} a^2 u_t^2 \\ & + a^2 u_{tt}^{1 - k} \sigma_k^{ij} (E_u) e^{- a (u - u_0)} \Big( (u - u_0)_i (u - u_0)_j u_{tt} \\ & - 2 u_{ti} (u - u_0)_j u_t + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ & + u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) e^{- a (u - u_0)} \gamma a^2 \big| \nabla (u - u_0) \big|^2. \end{aligned} \end{equation} By the Cauchy-Schwarz inequality, \begin{equation} \label{eq4-7} \begin{aligned} & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( (u - u_0)_i (u - u_0)_j u_{tt} - 2 u_{ti} (u - u_0)_j u_t + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ \geq & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( (u - u_0)_i (u - u_0)_j u_{tt} - \frac{1}{2} (u - u_0)_i (u - u_0)_j u_{tt} \\ & - 2 u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} + u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ = & u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( \frac{1}{2} (u - u_0)_i (u - u_0)_j u_{tt} - u_{tt}^{- 1} u_t^2 u_{ti} u_{tj} \Big) \\ = & \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (u - u_0)_i (u - u_0)_j - u_{tt}^{- 1} \sigma_k^{ij} \big( W[u] \big) u_t^2 u_{ti} u_{tj} \\ = & \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (u - u_0)_i (u - u_0)_j - \sigma_k \big( W[u] \big) u_t^2 + u_{tt}^{- k} u_t^2 \sigma_k (E_u). \end{aligned} \end{equation} Also, we notice that \[ \begin{aligned} \mathcal{L} (u_0) = u_{tt}^{2 - k} \sigma_k^{ij}(E_u) \Big( (u_0)_{ij} + s u_i (u_0)_j + s (u_0)_i u_j + \big( \gamma \Delta u_0 - r \langle \nabla u, \nabla u_0 \rangle \big) \delta_{ij} \Big). 
\end{aligned} \] Combining this with \eqref{eq2-18}, we arrive at \begin{equation} \label{eq4-8} \begin{aligned} & \mathcal{L} (u - u_0) = (1 + k) \psi + s u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (u - u_0)_i (u - u_0)_j \\ & - \frac{r}{2} u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) \big| \nabla (u - u_0) \big|^2 - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) W_{ij}[u_0] . \end{aligned} \end{equation} By \eqref{eq4-7} and \eqref{eq4-8}, \eqref{eq4-3} can be estimated as \begin{equation} \label{eq4-4} \begin{aligned} & \mathcal{L} \big( e^{- a (u - u_0) } \big) \\ \geq & - a e^{- a (u - u_0)} \Big( (1 + k) \psi + s u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (u - u_0)_i (u - u_0)_j \\ & - \frac{r}{2} u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) \big| \nabla (u - u_0) \big|^2 - u_{tt}^{2 - k} \sigma_k^{ij} (E_u) W_{ij}[u_0] \Big) \\ & + a^2 e^{- a (u - u_0)} \Big( \frac{1}{2} u_{tt}^{2 - k} \sigma_k^{ij} (E_u) (u - u_0)_i (u - u_0)_j - \sigma_k \big( W[u] \big) u_t^2 \\ & + 2 u_{tt}^{- k} u_t^2 \sigma_k (E_u) \Big) + u_{tt}^{2 - k} (n - k + 1) \sigma_{k - 1} (E_u) \gamma a^2 e^{- a (u - u_0) } \big| \nabla (u - u_0) \big|^2. \end{aligned} \end{equation} Also by \eqref{eq2-20}, \begin{equation} \label{eq4-5} \mathcal{L} \Big( b t (t - 1) \Big) = 2 b \sigma_k \big( W[u] \big). \end{equation} In addition, it is clear that \[ \mathcal{L} ( - 1 - c t ) = 0. \] We recall that $\lambda\big( W[u_0] \big) \in \Gamma_k$ and that $\Gamma_k$ is open. Thus there exists a small positive constant $c_0$ such that $\lambda\big( W[u_0] - c_0 I \big) \in \Gamma_k$. It follows that \[ \begin{aligned} \sigma_k^{ij} (E_u) W_{ij}[u_0] = & \sigma_k^{ij} (E_u) \big( W_{ij}[u_0] - c_0 \delta_{ij} \big) + c_0 (n - k + 1) \sigma_{k - 1} (E_u) \\ \geq & k \sigma_k^{\frac{1}{k}}\big( W[u_0] - c_0 I \big) \sigma_k^{1 - \frac{1}{k}}(E_u) + c_0 (n - k + 1) \sigma_{k - 1} (E_u) \\ > & c_0 (n - k + 1) \sigma_{k - 1} (E_u) . 
\end{aligned} \] We may subtract $c_1 t + c_2$ from $u$, where $c_1$ and $c_2$ are sufficiently large constants, to make $u - u_0 < 0$ and $u_t \leq - 1$ on $M \times [0, 1]$. Combining \eqref{eq4-1}, \eqref{eq4-4} and \eqref{eq4-5}, in view of the above fact and by choosing $a$ sufficiently large, we arrive at \begin{equation} \label{eq4-6} \begin{aligned} & \mathcal{L} ( \tilde{\Psi} ) \geq \psi_1 - a e^{- a (u - u_0)} (1 + k) \psi \\ & + \frac{a}{2} e^{- a (u - u_0)} c_0 (n - k + 1) \sigma_{k - 1}(E_u) u_{tt}^{2 - k} - a^2 e^{- a (u - u_0)} \sigma_k \big( W[u] \big) u_t^2 \\ & + 2 a^2 e^{- a (u - u_0)} u_{tt}^{- k} \sigma_k (E_u) + 2 b \sigma_k \big( W[u] \big). \end{aligned} \end{equation} By the Newton-Maclaurin inequality and the arithmetic-geometric mean inequality, we have \begin{equation*} \begin{aligned} & \frac{a}{2} e^{- a (u - u_0)} c_0 (n - k + 1) \sigma_{k - 1}(E_u) u_{tt}^{2 - k} + 2 a^2 e^{- a (u - u_0)} u_{tt}^{- k} \sigma_k (E_u) \\ \geq & e^{- a (u - u_0)} \Big( \frac{a}{2} c_0 (n - k + 1) \sigma_{k}^{\frac{k - 1}{k}}(E_u) u_{tt}^{2 - k} + 2 a^2 u_{tt}^{- 1} \psi \Big) \\ = & e^{- a (u - u_0)} \Big( \frac{a}{2} c_0 (n - k + 1) \psi^{\frac{k - 1}{k}} u_{tt}^{\frac{1}{k}} + 2 a^2 u_{tt}^{- 1} \psi \Big) \\ \geq & e^{- a (u - u_0)} (k + 1) 2^{- \frac{k - 1}{k + 1}} \Big( \frac{n - k + 1}{k} \Big)^{\frac{k}{k + 1}} a^{\frac{k + 2}{k + 1}} c_0^{\frac{k}{k + 1}} \psi^{\frac{k}{k + 1}} . \end{aligned} \end{equation*} Hence \eqref{eq4-6} reduces to \begin{equation} \label{eq4-10} \begin{aligned} & \mathcal{L} ( \tilde{\Psi} ) \geq \psi_1 - a e^{- a (u - u_0)} (1 + k) \psi \\ & + e^{- a (u - u_0)} (k + 1) 2^{- \frac{k - 1}{k + 1}} \Big( \frac{n - k + 1}{k} \Big)^{\frac{k}{k + 1}} a^{\frac{k + 2}{k + 1}} c_0^{\frac{k}{k + 1}} \psi^{\frac{k}{k + 1}} \\ & - a^2 e^{- a (u - u_0)} \sigma_k \big( W[u] \big) u_t^2 + 2 b \sigma_k \big( W[u] \big). 
\end{aligned} \end{equation} By choosing $a$ larger still, depending on $\sup\psi$ and $\sup \big| \nabla ( \psi^{\frac{1}{k + 1}} ) \big|$, and then choosing $b$ sufficiently large, we have \[ \mathcal{L} ( \tilde{\Psi} ) > 0 \quad \text{ in a neighborhood of } (x_1, t_1) . \] This means that $\tilde{\Psi}$ cannot have an interior maximum at $(x_1, t_1)$. Hence $\Psi$ cannot attain its maximum in $M \times (0, 1)$. That is, \[ \max\limits_{M \times [0, 1]} \Psi = \max\limits_{M \times \{0, 1\}} \Psi . \] Now we choose $c$ sufficiently large such that $\Psi (\cdot, 1) \leq 0$. Hence we have proved that \[ \Psi \leq \Psi(\cdot, 0) \equiv 0 \quad \text{ on } M \times [0, 1]. \] For any point $(x_0, 0) \in M \times \{ t = 0 \}$, we choose a smooth local orthonormal frame field around $x_0$ on $M$. Then in a neighborhood of $(x_0, 0)$, for any $1 \leq l \leq n$, \[ \begin{aligned} 0 \geq \Psi = & \big| \nabla (u - u_0) \big| + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t \\ \geq & \pm (u - u_0)_l + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t . \end{aligned} \] Since \[ \Big( \pm (u - u_0)_l + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t \Big) (x_0, 0) = 0, \] we obtain \[ \Big( \pm (u - u_0)_l + e^{- a (u - u_0)} - 1 + b t (t - 1) - c t \Big)_t (x_0, 0) \leq 0, \] which implies a bound for $|u_{lt}|(x_0, 0)$. Therefore, we have derived a bound for $|\nabla u_t|$ on $t = 0$. For a bound of $|\nabla u_t|$ on $t = 1$, we consider the test function \[ \Phi = \big| \nabla (u - u_1) \big| + e^{- a (u - u_1)} - 1 + b t (t - 1) + c (t - 1) , \] where $a$, $b$ and $c$ are positive constants to be chosen. The argument is the same as above. Finally, by \eqref{eq1}, we can directly see that on $t = 0$, \[ u_{tt} \sigma_k \big( W[u_0] \big) - \sigma_{k}^{ij} \big( W[u_0] \big) u_{ti} u_{tj} = \psi(x, 0). \] Since we have obtained a bound for $|\nabla u_t|$ on $t = 0$ and $\sigma_k \big( W[u_0] \big)$ has a positive lower bound, we obtain an upper bound for $u_{tt}$ on $t = 0$. 
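The way $u_{tt}$ is recovered from this boundary relation can be illustrated in the simplest nontrivial case $k = 2$, $n = 2$, where $\sigma_2(W) = \det W$ and $\sigma_2^{ij}(W)$ is the cofactor (adjugate) matrix. The matrix, the vector playing the role of $\nabla u_t$, and the value of $\psi$ below are arbitrary sample data, not quantities taken from the paper:

```python
import numpy as np

# On t = 0 the equation reads
#   u_tt * sigma_k(W[u_0]) - sigma_k^{ij}(W[u_0]) u_ti u_tj = psi,
# so u_tt is determined, and bounded, once |grad u_t| is bounded and
# sigma_k(W[u_0]) has a positive lower bound.  Sample data (k = 2, n = 2):
W = np.array([[2.0, 0.5], [0.5, 3.0]])                      # role of W[u_0]
adj = np.array([[W[1, 1], -W[0, 1]], [-W[1, 0], W[0, 0]]])  # sigma_2^{ij}(W)
v = np.array([1.0, -2.0])                                   # role of grad u_t
psi = 4.0

u_tt = (psi + v @ adj @ v) / np.linalg.det(W)               # solve for u_tt
residual = u_tt * np.linalg.det(W) - v @ adj @ v - psi
assert abs(residual) < 1e-12                                # relation holds

# crude bound of the kind used above: u_tt <= (psi + C |grad u_t|^2) / c_0,
# with c_0 the lower bound on sigma_k(W[u_0])
assert u_tt <= (psi + np.abs(adj).sum() * (v @ v)) / np.linalg.det(W)
print(f"u_tt = {u_tt:.4f}")
```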
An upper bound for $u_{tt}$ on $t = 1$ can be proved similarly. \end{proof} \vspace{4mm} \section{Global second order estimate}~ \vspace{4mm} In this section, we write equation \eqref{eq2} in the following form \begin{equation} \label{eq3} \begin{aligned} G(R):= \ln \Big( u_{tt}^{1 - k} \sigma_k (E_u) \Big) = & \ln \Big( u_{tt} \sigma_k (W[u]) - \sigma_{k}^{ij} (W[u]) u_{ti} u_{tj} \Big) = \ln \psi, \end{aligned} \end{equation} where \begin{equation} \label{eq4-11} R := R_u = (r_{IJ})_{0 \leq I, J \leq n} = \left( \begin{array}{cccc} u_{tt} & u_{t1} & \cdots & u_{tn} \\ u_{1t} & W_{11}[u] & \cdots & W_{1n}[u] \\ \vdots & \vdots & \ddots & \vdots \\ u_{nt} & W_{n1}[u] & \cdots & W_{nn}[u] \\ \end{array} \right). \end{equation} The linearized operator of $G(R)$ is given by \[ \begin{aligned} \mathbb{L}(v) := & G^{tt} (R) v_{tt} + 2 G^{ti}(R) v_{ti} + G^{ij}(R) \Big( v_{ij} + s u_i v_j + s u_j v_i \\ & + \big( \gamma \Delta v - r \langle \nabla u, \nabla v \rangle \big) \delta_{ij} \Big) \\ = & G^{tt} (R) v_{tt} + 2 G^{ti}(R) v_{ti} + G^{ij}(R) \mathcal{M}_{ij}(v), \end{aligned} \] where \begin{equation} \label{eq3-1} \begin{aligned} G^{tt} = & \frac{\partial G}{\partial r_{00}} = \frac{\partial G}{\partial u_{tt}} = \frac{\sigma_k \big( W[u] \big)}{\sigma_k (E_u)} u_{tt}^{k - 1}, \\ G^{ti} = & \frac{\partial G}{\partial r_{0i}} = \frac{\partial G}{\partial u_{ti}} = \frac{- \sigma_k^{ij} \big( W[u] \big) u_{tj}}{u_{tt}^{1 - k} \sigma_k (E_u)} = \frac{- \sigma_k^{ij}(E_u) u_{tj}}{\sigma_k (E_u)}, \quad 1 \leq i \leq n, \\ G^{ij} = & \frac{\partial G}{\partial r_{ij}} = \frac{\partial G}{\partial W_{ij}[u]} = \frac{ u_{tt} \sigma_k^{ij}(E_u)}{\sigma_k (E_u)}, \quad 1 \leq i, j \leq n. 
\end{aligned} \end{equation} Now we can compute \begin{equation} \label{eq3-2} \begin{aligned} \mathbb{L}(u_{tt}) = & G^{tt} (R) u_{tttt} + 2 G^{ti}(R) u_{ttti} + G^{ij}(R) \Big( u_{ttij} + s u_i u_{ttj} + s u_j u_{tti} \\ & + \big( \gamma \Delta u_{tt} - r \langle \nabla u, \nabla u_{tt} \rangle \big) \delta_{ij} \Big) \\ = & G^{tt} (R) u_{tttt} + 2 G^{ti}(R) u_{ttti} + G^{ij}(R) \mathcal{M}_{ij}(u_{tt}). \end{aligned} \end{equation} Differentiating \eqref{eq3} with respect to $t$ we obtain \begin{equation*} G^{IJ} (R) r_{IJt} = G^{tt} (R) u_{ttt} + 2 G^{ti} (R) u_{tit} + G^{ij} (R) \big( W_{ij}[u] \big)_t = \frac{\psi_t}{\psi}. \end{equation*} Differentiating again we obtain \begin{equation} \label{eq3-3} \begin{aligned} & G^{IJ, KL} r_{IJ t} r_{KL t} + G^{tt} (R) u_{tttt} + 2 G^{ti} u_{titt} + G^{ij} (R) \big( W_{ij} [u] \big)_{tt} \\ = & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2}, \end{aligned} \end{equation} where \[ \begin{aligned} \big( W_{ij} [u] \big)_t = & u_{ijt} + s u_{it} u_j + s u_i u_{jt} + \Big( \gamma \Delta u_t - r \langle \nabla u, \nabla u_t \rangle \Big) \delta_{ij}, \\ \big( W_{ij} [u] \big)_{tt} = & u_{ijtt} + s u_{itt} u_j + 2 s u_{it} u_{jt} + s u_i u_{jtt} \\ & + \Big( \gamma \Delta u_{tt} - r \langle \nabla u, \nabla u_{tt} \rangle - r |\nabla u_t|^2 \Big) \delta_{ij}. \end{aligned} \] By \eqref{eq3-3}, we can see that \eqref{eq3-2} can be expressed as \begin{equation} \label{eq3-4} \begin{aligned} \mathbb{L}(u_{tt}) = & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} - G^{IJ, KL} r_{IJ t} r_{KL t} \\ & - 2 s G^{ij}(R) u _{it} u_{jt} + r |\nabla u_t|^2 \sum G^{ii} (R) \\ = & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} - G^{IJ, KL} r_{IJ t} r_{KL t} \\ & - 2 s \frac{ u_{tt} \sigma_k^{ij}(E_u)}{\sigma_k (E_u)} u _{it} u_{jt} + r |\nabla u_t|^2 \frac{(n - k + 1) u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} . 
\end{aligned} \end{equation} In order to give an upper bound for $u_{tt}$, we also need to compute $\mathbb{L}(u_t^2)$, which can be obtained by first computing $\mathcal{L}(u_t^2)$. By direct calculation, \[ \begin{aligned} \mathcal{M}_{ij} ( u_t^2 ) = & (u_t^2)_{ij} + s u_i (u_t^2)_j + s (u_t^2)_i u_j + \big( \gamma \Delta (u_t^2) - r \langle \nabla u, \nabla(u_t^2) \rangle \big) \delta_{ij} \\ = & 2 u_t \mathcal{M}_{ij} (u_t) + 2 u_{ti} u_{tj} + 2 \gamma |\nabla u_t|^2 \delta_{ij}, \end{aligned} \] and thus \begin{equation} \label{eq2-11} \begin{aligned} \mathcal{L}(u_t^2) = & (1 - k) u_{tt}^{- k} \sigma_k (E_u) (2 u_t u_{ttt} + 2 u_{tt}^2) \\ & + u_{tt}^{1 - k} \sigma_k^{ij}(E_u) \Big( (2 u_t u_{ttt} + 2 u_{tt}^2) W_{ij}[u] + u_{tt} \big( 2 u_t \mathcal{M}_{ij}(u_t) + 2 u_{ti} u_{tj} \\ & + 2 \gamma |\nabla u_t|^2 \delta_{ij} \big) - u_{ti} (2 u_t u_{ttj} + 2 u_{tj} u_{tt}) - (2 u_t u_{tti} + 2 u_{ti} u_{tt}) u_{tj} \Big) \\ = & 2 u_t \mathcal{L}(u_t) + 2 u_{tt} \psi + 2 \gamma u_{tt}^{1 - k} |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u). \end{aligned} \end{equation} In addition, we can compute \[ \begin{aligned} & \mathcal{L}(u_t) = (1 - k) u_{tt}^{- k} \sigma_k (E_u) u_{ttt} \\ & + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) \Big( u_{ttt} W_{ij}[u] + u_{tt} \mathcal{M}_{ij}(u_t) - u_{ti} u_{ttj} - u_{tti} u_{tj} \Big), \end{aligned} \] where \[ \mathcal{M}_{ij}(u_t) = u_{tij} + s u_i u_{tj} + s u_{ti} u_j + \big( \gamma \Delta u_t - r \langle \nabla u, \nabla u_t \rangle \big) \delta_{ij}. \] Differentiating \eqref{eq2} with respect to $t$, we have \[ (1 - k) u_{tt}^{- k} u_{ttt} \sigma_k (E_u) + u_{tt}^{1 - k} \sigma_k^{ij} (E_u) (E_u)_{ijt} = \psi_t , \] where \[ \begin{aligned} & (E_u)_{ijt} = u_{ttt} W_{ij}[u] + u_{tt} \Big( u_{ijt} + s u_{it} u_j + s u_i u_{jt} \\ & + \big( \gamma (\Delta u)_t - r \langle \nabla u, \nabla u_t \rangle \big) \delta_{ij} - u_{tti} u_{tj} - u_{ti} u_{ttj} \Big). \end{aligned} \] We notice that \[ \mathcal{L} (u_t) = \psi_t . 
\] Hence \eqref{eq2-11} becomes \begin{equation} \label{eq2-17} \begin{aligned} \mathcal{L}(u_t^2) = & 2 u_t \psi_t + 2 u_{tt} \psi + 2 \gamma u_{tt}^{1 - k} |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u). \end{aligned} \end{equation} We note that the relation between $\mathcal{L}(v)$ and $\mathbb{L}(v)$ is \begin{equation} \label{eq3-8} \mathbb{L}(v) = \frac{\mathcal{L}(v)}{u_{tt}^{1 - k} \sigma_k (E_u)}. \end{equation} Therefore, we know that \begin{equation} \label{eq3-9} \begin{aligned} \mathbb{L}(u_t^2) = & \frac{2 u_t \psi_t}{\psi} + 2 u_{tt} + \frac{2 \gamma |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u)}{\sigma_k (E_u)}. \end{aligned} \end{equation} Also, we notice that for a function $\eta(v)$, we have \begin{equation} \label{eq3-20} \begin{aligned} \mathcal{L} \big( \eta (v) \big) = & \eta' \mathcal{L} (v) + \eta'' v_t^2 \sigma_k \big( W[u] \big) + \eta'' u_{tt}^{2 - k} \sigma_k^{ij} (E_u) v_i v_j \\ & + (n - k + 1) \gamma \eta'' u_{tt}^{2 - k} \sigma_{k - 1} (E_u) |\nabla v|^2 - 2 \eta'' u_{tt}^{1 - k} \sigma_k^{ij} (E_u) u_{ti} v_j v_t. \end{aligned} \end{equation} Therefore, \begin{equation} \label{eq3-11} \begin{aligned} \mathbb{L} \big( \eta(v) \big) = & \eta' \mathbb{L} (v) + \eta'' \frac{v_t^2 \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)} + \eta'' \frac{u_{tt} \sigma_k^{ij} (E_u) v_i v_j}{\sigma_k(E_u)} \\ & + \frac{(n - k + 1) \gamma \eta'' u_{tt} \sigma_{k - 1} (E_u) |\nabla v|^2}{\sigma_k (E_u)} - \frac{ 2 \eta'' \sigma_k^{ij} (E_u) u_{ti} v_j v_t}{\sigma_k (E_u)}. \end{aligned} \end{equation} \begin{thm} \label{Thm1} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}. Suppose that $\gamma > 0$. Then we have the estimate \[ \max\limits_{M \times [0, 1]} u_{tt} \leq C. \] \end{thm} \begin{proof} Let $u \in C^4 \big( M \times (0, 1) \big) \cap C^2 \big( M \times [0, 1] \big)$ be an admissible solution of \eqref{eq3}. 
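Before running the maximum-principle argument, the coefficient formulas \eqref{eq3-1} can be sanity-checked symbolically in the simplest case $k = 1$, $n = 2$, where $\sigma_1(W) = \operatorname{tr} W$ and $\sigma_1^{ij} = \delta_{ij}$; the factor $2$ in $\partial G / \partial u_{ti} = 2 G^{ti}$ reflects that $r_{0i} = r_{i0} = u_{ti}$ both vary. The following check is illustrative only (all symbols are placeholders, and SymPy is assumed available):

```python
import sympy as sp

# Sanity check of the coefficient formulas in the case k = 1, n = 2.
u_tt, p1, p2, w11, w22, w12 = sp.symbols('u_tt p1 p2 w11 w22 w12', positive=True)
trW = w11 + w22
sigma1_E = u_tt * trW - (p1**2 + p2**2)     # sigma_1(E_u) when k = 1
G = sp.log(u_tt * trW - (p1**2 + p2**2))    # G = ln(u_tt^{1-k} sigma_k(E_u))

# G^{tt} = sigma_k(W) u_tt^{k-1} / sigma_k(E_u) = trW / sigma_1(E_u)
assert sp.simplify(sp.diff(G, u_tt) - trW / sigma1_E) == 0
# dG/du_{ti} = 2 G^{ti}, with
# G^{ti} = -sigma_k^{ij}(E_u) u_{tj} / sigma_k(E_u) = -p_i / sigma_1(E_u)
assert sp.simplify(sp.diff(G, p1) - 2 * (-p1 / sigma1_E)) == 0
# G^{ij} = u_tt sigma_k^{ij}(E_u) / sigma_k(E_u) = u_tt delta_ij / sigma_1(E_u)
assert sp.simplify(sp.diff(G, w11) - u_tt / sigma1_E) == 0
assert sp.diff(G, w12) == 0                 # off-diagonal vanishes when k = 1
print("linearization coefficient checks passed (k = 1)")
```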
We may subtract $c_1 t + c_2$ from $u$, where $c_1$ and $c_2$ are sufficiently large constants, to make $u < 0$ and $u_t \leq - 1$ on $M \times [0, 1]$. We consider the test function $u_{tt} + \eta(u_t^2)$, where $\eta$ is a function to be chosen later. By \eqref{eq3-4}, \eqref{eq3-9}, \eqref{eq3-11} and the concavity of $G$ we have \begin{equation*} \begin{aligned} & \mathbb{L} \Big( u_{tt} + \eta( u_t^2 ) \Big) \\ \geq & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} - 2 s \frac{ u_{tt} \sigma_k^{ij}(E_u)}{\sigma_k (E_u)} u _{it} u_{jt} + r |\nabla u_t|^2 \frac{(n - k + 1) u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \\ + & \eta' \bigg( \frac{2 u_t \psi_t}{\psi} + 2 u_{tt} + \frac{2 \gamma |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \bigg) \\ & + \eta'' \frac{4 u_t^2 u_{tt}^2 \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)} + \eta'' \frac{4 u_t^2 u_{tt} \sigma_k^{ij} (E_u) u_{ti} u_{tj}}{\sigma_k(E_u)} \\ & + \frac{4 (n - k + 1) \gamma \eta'' u_t^2 u_{tt} \sigma_{k - 1} (E_u) |\nabla u_t|^2}{\sigma_k (E_u)} - \frac{ 8 \eta'' u_t^2 \sigma_k^{ij} (E_u) u_{ti} u_{tj} u_{tt}}{\sigma_k (E_u)} \\ = & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} - 2 s \frac{ u_{tt} \sigma_k^{ij}(E_u)}{\sigma_k (E_u)} u _{it} u_{jt} + r |\nabla u_t|^2 \frac{(n - k + 1) u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \\ + & \eta' \bigg( \frac{2 u_t \psi_t}{\psi} + 2 u_{tt} + \frac{2 \gamma |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \bigg) \\ & + 4 \eta'' u_t^2 u_{tt} + \frac{4 (n - k + 1) \gamma \eta'' u_t^2 u_{tt} \sigma_{k - 1} (E_u) |\nabla u_t|^2}{\sigma_k (E_u)}. \end{aligned} \end{equation*} {\bf The case when $\gamma > 0$.} We may choose $\eta(v) = \frac{\lambda}{2} v^2$, where $\lambda > 0$ is a constant to be chosen later. 
\begin{equation*} \begin{aligned} & \mathbb{L} \Big( u_{tt} + \eta( u_t^2 ) \Big) \\ \geq & \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} - \big( 2 |s| + |r| \big) \frac{ u_{tt} (n - k + 1) \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} |\nabla u_t|^2 \\ + & \lambda u_t^2 \bigg( \frac{2 u_t \psi_t}{\psi} + 2 u_{tt} + \frac{2 \gamma |\nabla u_t|^2 (n - k + 1) \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \bigg) \\ & + 4 \lambda u_t^2 u_{tt} + \frac{4 (n - k + 1) \gamma \lambda u_t^2 u_{tt} \sigma_{k - 1} (E_u) |\nabla u_t|^2}{\sigma_k (E_u)}. \end{aligned} \end{equation*} Now we may choose $\lambda > 0$ sufficiently large so that \begin{equation*} \begin{aligned} \mathbb{L} \Big( u_{tt} + \eta( u_t^2 ) \Big) \geq \frac{\psi_{tt}}{\psi} - \frac{\psi_t^2}{\psi^2} + \lambda u_t^2 \Big( \frac{2 u_t \psi_t}{\psi} + 2 u_{tt} \Big). \end{aligned} \end{equation*} Suppose that $u_{tt} + \eta (u_t^2)$ attains its maximum at $(x_2, t_2) \in M \times (0, 1)$. We may assume that $u_{tt} (x_2, t_2) > 0$ is sufficiently large (otherwise we are done) such that \begin{equation*} \mathbb{L} \Big( u_{tt} + \eta ( u_t^2 ) \Big) (x_2, t_2) > 0 . \end{equation*} But this is impossible. We thus obtain the upper bound for $u_{tt}$ on $M \times [0, 1]$. \end{proof} Next, we compute $\mathbb{L}( \Delta u )$. \begin{equation} \label{eq3-5} \begin{aligned} \mathbb{L}(\Delta u) = & G^{tt} (R) (\Delta u)_{tt} + 2 G^{ti}(R) (\Delta u)_{ti} + G^{ij}(R) \Big( (\Delta u)_{ij} \\ & + s u_i (\Delta u)_j + s u_j (\Delta u)_i + \big( \gamma \Delta (\Delta u) - r \langle \nabla u, \nabla (\Delta u) \rangle \big) \delta_{ij} \Big). \end{aligned} \end{equation} Taking covariant derivative of \eqref{eq3} in the $e_p$ direction we obtain \[ G^{IJ}(R) r_{IJ, p} = G^{tt}(R) u_{ttp} + 2 G^{ti} (R) u_{tip} + G^{ij} (R) \big( W_{ij} [u] \big)_p = \frac{\psi_p}{\psi}. 
\] Differentiating again we have \begin{equation} \label{eq3-6} \begin{aligned} & G^{IJ, KL} (R) r_{IJ, p} r_{KL, p} + G^{tt} (R) \Delta (u_{tt}) + 2 G^{ti} (R) \Delta (u_{ti}) \\ & + G^{ij} (R) \Delta \big( W_{ij}[u] \big) = \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2}, \end{aligned} \end{equation} where \[ \begin{aligned} \big( W_{ij}[u] \big)_p = & u_{ijp} + s u_{ip} u_j + s u_i u_{jp} + \Big( \gamma (\Delta u)_p - r u_k u_{k p} \Big) \delta_{ij} + A_{ij, p}, \\ \Delta \big( W_{ij}[u] \big) = & \Delta (u_{ij}) + s \Delta (u_i) u_j + 2 s u_{ip} u_{jp} + s u_i \Delta (u_j) \\ & + \Big( \gamma \Delta (\Delta u) - r u_k \Delta (u_k) - r |\nabla^2 u|^2 \Big) \delta_{ij} + \Delta (A_{ij}). \end{aligned} \] In view of \eqref{eq3-6}, \eqref{eq2-28}, \eqref{eq2-29} as well as \[ \begin{aligned} \nabla_{ijkl} u = & \nabla_{klij} u + R_{kjl}^m \nabla_{im} u + \nabla_i R_{kjl}^m \nabla_m u + R_{kil}^m \nabla_{mj} u \\ & + R_{kij}^m \nabla_{lm} u + R_{lij}^m \nabla_{km} u + \nabla_k R_{lij}^m \nabla_m u, \end{aligned} \] which implies that \[ \begin{aligned} \nabla_{ji} ( \Delta u ) = & \Delta ( \nabla_{ji} u ) + R_{lil}^m \nabla_{jm} u + \nabla_j R_{lil}^m \nabla_m u + R_{ljl}^m \nabla_{mi} u \\ & + 2 R_{lji}^m \nabla_{lm} u + \nabla_l R_{lji}^m \nabla_m u, \end{aligned} \] \eqref{eq3-5} can be expressed as \begin{equation} \label{eq3-7} \begin{aligned} \mathbb{L}(\Delta u) = & \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2} - G^{IJ, KL} (R) r_{IJ, m} r_{KL, m} \\ & - 2 G^{ti}(R) R_{ill}^m u_{t m} + G^{ij}(R) \Big( R_{ljl}^m u_{mi} + R_{lil}^m u_{mj} + 2 R_{lji}^m u_{ml} \\ & + R_{lil, j}^m u_m + R_{lji, l}^m u_m - s R_{ill}^m u_m u_j - s u_i R_{jll}^m u_m - 2 s u_{im} u_{jm} \\ & + \big( r u_k R_{kll}^m u_m + r |\nabla^2 u|^2 \big) \delta_{ij} - \Delta (A_{ij}) \Big). 
\end{aligned} \end{equation} Also, substituting \eqref{eq2-23} and \eqref{eq2-14} into \eqref{eq2-12} and using \eqref{eq3-8}, we have \begin{equation} \label{eq3-12} \begin{aligned} & \mathbb{L} \big( |\nabla u|^2 \big) \geq \frac{2 u_l \psi_l}{\psi} + \frac{2 (n - k + 1) \gamma u_{tt} \sigma_{k - 1} (E_u) |\nabla^2 u|^2}{\sigma_k (E_u)} \\ & + 2 u_l u_{tt} \frac{\sigma_k^{ij} (E_u) (R_{lji}^m u_m + \gamma R_{lmm}^s u_s \delta_{ij} - A_{ij, l}) }{\sigma_k (E_u)}+ 2 u_{tt}^{- 1} |\nabla u_t|^2 . \end{aligned} \end{equation} \begin{thm} \label{Thm2} Let $u$ be an admissible solution to \eqref{eq1}--\eqref{eq1-4}. Suppose that $\gamma > 0$. Then we have the estimate \[ \max\limits_{M \times [0, 1]} \Delta u \leq C. \] \end{thm} \begin{proof} Let $u \in C^4 \big( M \times (0, 1) \big) \cap C^2 \big( M \times [0, 1] \big)$ be an admissible solution of \eqref{eq3}. In view of \eqref{eq3-1} and the concavity of $G$, \eqref{eq3-7} can be estimated as \begin{equation} \label{eq3-14} \begin{aligned} \mathbb{L} (\Delta u) \geq & \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2} - C \frac{\sigma_{k - 1} (E_u)}{\sigma_k (E_u)} |\nabla u_t|^2 \\ & - C \frac{u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \Big( |\nabla^2 u| + 1 + |\nabla^2 u|^2 \Big). \end{aligned} \end{equation} Since $\lambda (E_u) \in \Gamma_k \subset \Gamma_1$, we know that \begin{equation*} u_{tt} \text{tr} W[u] - |\nabla u_t|^2 > 0. \end{equation*} Hence \begin{equation} \label{eq3-13} |\nabla u_t|^2 \leq u_{tt} \Big( (1 + \gamma n) \Delta u + C \Big) \leq C u_{tt} \Big( |\nabla^2 u| + 1 \Big) . \end{equation} Consequently, \eqref{eq3-14} reduces to \begin{equation} \label{eq3-15} \begin{aligned} \mathbb{L} (\Delta u) \geq & \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2} - C \frac{u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \Big( |\nabla^2 u| + 1 + |\nabla^2 u|^2 \Big).
\end{aligned} \end{equation} Also, \eqref{eq3-12} can be estimated as \begin{equation} \label{eq3-16} \begin{aligned} \mathbb{L} \big( |\nabla u|^2 \big) \geq \frac{2 u_l \psi_l}{\psi} + \frac{2 (n - k + 1) \gamma u_{tt} \sigma_{k - 1} (E_u) |\nabla^2 u|^2}{\sigma_k (E_u)} - C u_{tt} \frac{\sigma_{k - 1} (E_u)}{\sigma_k (E_u)} . \end{aligned} \end{equation} By \eqref{eq2-20} and \eqref{eq3-8}, \begin{equation} \label{eq3-17} \begin{aligned} \mathbb{L} \Big( t (t - 1) \Big) = \frac{ 2 \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)}. \end{aligned} \end{equation} Now we consider the test function $\Delta u + \lambda |\nabla u|^2 + \mu t (t - 1)$, where $\lambda$, $\mu$ are positive constants to be chosen later. By \eqref{eq3-15}, \eqref{eq3-16} and \eqref{eq3-17}, \begin{equation*} \begin{aligned} & \mathbb{L} \Big( \Delta u + \lambda |\nabla u|^2 + \mu t (t - 1) \Big) \\ & \geq \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2} - C \frac{u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} \Big( |\nabla^2 u| + 1 + |\nabla^2 u|^2 \Big) \\ & +\frac{2 \lambda u_l \psi_l}{\psi} + \frac{2 (n - k + 1) \lambda \gamma u_{tt} \sigma_{k - 1} (E_u) |\nabla^2 u|^2}{\sigma_k (E_u)} - \frac{C \lambda u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} + \frac{ 2 \mu \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)}. \end{aligned} \end{equation*} {\bf The case when $\gamma > 0$.} We may choose $\lambda$ sufficiently large so that \begin{equation} \label{eq3-18} \begin{aligned} & \mathbb{L} \Big( \Delta u + \lambda |\nabla u|^2 + \mu t (t - 1) \Big) \\ \geq & \frac{\Delta \psi}{\psi} - \frac{|\nabla \psi|^2}{\psi^2} + \lambda \frac{2 u_l \psi_l}{\psi} + \Big( (n - k + 1) \gamma \lambda |\nabla^2 u|^2 - C |\nabla^2 u| \\ & - C ( 1 + \lambda ) \Big) \frac{u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)} + \frac{ 2 \mu \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)}. 
\end{aligned} \end{equation} Since $\lambda \big( W[u] \big) \in \Gamma_k \subset \Gamma_1$, we know that $\Delta u \geq - C$. Also, by Theorem \ref{Thm1}, $u_{tt} \leq C$. Together with \eqref{eq4}, we know that \[ \frac{\sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)} = u_{tt}^{- 1} \frac{ u_{tt} \sigma_k \big( W[u] \big)}{u_{tt}^{1 - k} \sigma_k (E_u)} \geq C^{- 1}. \] Then we can choose $\mu$ sufficiently large so that \eqref{eq3-18} reduces to \begin{equation} \label{eq3-19} \begin{aligned} & \mathbb{L} \Big( \Delta u + \lambda |\nabla u|^2 + \mu t (t - 1) \Big) \\ \geq & \Big( (n - k + 1) \gamma \lambda |\nabla^2 u|^2 - C |\nabla^2 u| - C ( 1 + \lambda ) \Big) \frac{u_{tt} \sigma_{k - 1} (E_u)}{\sigma_k (E_u)}. \end{aligned} \end{equation} Suppose that $\Delta u + \lambda |\nabla u|^2 + \mu t (t - 1)$ attains its maximum at $(x_3, t_3) \in M \times (0, 1)$. We may assume that $|\nabla^2 u| (x_3, t_3)$ is sufficiently large (otherwise we are done) such that \begin{equation*} \mathbb{L} \Big( \Delta u + \lambda |\nabla u|^2 + \mu t (t - 1) \Big) (x_3, t_3) > 0 . \end{equation*} But this is impossible. We thus obtain an upper bound for $\Delta u$ on $M \times [0, 1]$. \end{proof} For $k \geq 2$, by the relation \[ \big| W[u] \big|^2 = \sigma_1^2 \big( W[u] \big) - 2 \sigma_2 \big( W[u] \big) \leq \sigma_1^2 \big( W[u] \big) , \] we obtain a bound for $|\nabla^2 u|$ on $M \times [0, 1]$. Finally, by the fact that $\lambda(E_u) \in \Gamma_k \subset \Gamma_1$, we have \[ u_{tt} \sigma_1 \big( W[u] \big) - |\nabla u_t|^2 > 0, \] and we therefore obtain a bound for $|\nabla u_t|$ on $M \times [0, 1]$. \vspace{4mm} \section{Existence} \vspace{4mm} We shall use the standard continuity method to prove Theorem \ref{Theorem1}. To start the continuity process, we need to construct an admissible function $w(x, t)$ which satisfies $w(x, 0) = u_0$ and $w(x, 1) = u_1$. For this, we shall first establish Theorem \ref{Thm4}.
{\bf Proof of Theorem \ref{Thm4}.} First, it is clear that for $t = 0$, $u_0$ is the unique solution to \eqref{eq5-1}. Also, the linearized operator associated to \eqref{eq5-1} is invertible, so that we can apply the implicit function theorem to prove the openness of the set of $t \in [0, 1]$ at which \eqref{eq5-1} has an admissible solution $u(\cdot, t)$. The closedness can be established once we are able to derive $C^2$ estimates with respect to the spatial variable $x$. Then the existence of a solution $u(x, t)$ to \eqref{eq5-1} for any $t \in [0, 1]$ can be obtained, which can be further proved to be smooth with respect to $x$ by Evans-Krylov theory \cite{Evans, Krylov} and classical Schauder theory. In addition, $u(\cdot, 1) = u_1$ by the uniqueness of the solution to \eqref{eq5-1} for any $t \in [0, 1]$. In order to prove that $u(x, t)$ is smooth with respect to $(x, t)$, we differentiate \eqref{eq5-1} with respect to $t$ to obtain \begin{equation} \label{eq5-3} \sigma_k^{ij} \big( W[u] \big) \mathcal{M}_{ij} (u_t) = e^{2 k u} (2 k \psi u_t + \psi_t), \end{equation} from which we can apply classical Schauder theory once we are able to give a uniform bound for $|u_t|$. Recall that the global estimates for $|\nabla u|$ and $|\nabla^2 u|$ have been derived in Guan \cite{Guan08}. Thus it remains to give a bound for $|u|$ and $|u_t|$. To give an upper bound for $u$, assume that $u$ attains an interior maximum at $(x_0, t_0) \in M \times (0, 1)$. Then at $(x_0, t_0)$, \[ \nabla u = 0, \quad \nabla^2 u \leq 0. \] It follows that at $(x_0, t_0)$, \[ \sigma_k \big( W[u] \big) \leq \sigma_k (A). \] Consequently, at $(x_0, t_0)$, \[ C^{- 1} \leq \psi(x_0, t_0) = e^{- 2 k u} \sigma_k \big( W[u] \big) \leq e^{- 2 k u} \sigma_k (A) \leq C e^{- 2 k u} . \] We thus obtain an upper bound for $u(x_0, t_0)$ and consequently for $u$. To give a lower bound for $u$, assume that $u$ attains an interior minimum at $(x_1, t_1) \in M \times (0, 1)$.
Then at $(x_1, t_1)$, \[ \nabla u = 0, \quad \nabla^2 u \geq 0. \] It follows that at $(x_1, t_1)$, \[ \sigma_k \big( W[u] \big) \geq \sigma_k (A). \] Consequently, at $(x_1, t_1)$, \[ C \geq \psi(x_1, t_1) = e^{- 2 k u} \sigma_k \big( W[u] \big) \geq e^{- 2 k u} \sigma_k (A) \geq C^{- 1} e^{- 2 k u} . \] We thus obtain a lower bound for $u(x_1, t_1)$ and consequently for $u$. Next, we derive the estimate for $|u_t|$ on $M \times \{ 0, 1 \}$. To give an upper bound for $u_t$ on $\{ t = 0 \}$, we consider the function \[ \Phi_1 (x, t) = u - u_0 - c_1 t \quad \text{on} \quad M \times [0, 1], \] where $c_1$ is a positive constant to be chosen. We observe that $\Phi_1(x, 0) = 0$. By choosing $c_1$ sufficiently large, we can guarantee that \[ \Phi_1 (x, 1) = u_1 - u_0 - c_1 \leq 0 \quad \text{for any } x \in M. \] Assume that $\Phi_1$ attains an interior maximum at $(x_2, t_2) \in M \times (0, 1)$. Then at $(x_2, t_2)$, \[ u \geq u_0 + c_1 t_2, \quad \nabla u = \nabla u_0, \quad \nabla^2 u \leq \nabla^2 u_0 . \] It follows that \[\begin{aligned} & \sigma_k \big( W[u_0] \big) (x_2) = e^{2 k u_0 (x_2)} \psi(x_2, 0) \\ \geq & \sigma_k \big( W[u] \big) (x_2, t_2) = e^{2 k u(x_2, t_2)} \psi(x_2, t_2) \\ \geq & e^{2 k \big( u_0 (x_2) + c_1 t_2 \big)} \psi(x_2, t_2). \end{aligned} \] Consequently, we arrive at \begin{equation} \label{eq5-4} \psi(x_2, 0) \geq e^{2 k c_1 t_2} \psi(x_2, t_2). \end{equation} We may further enlarge $c_1$ such that \[ c_1 > \frac{\sup( - \psi_t)}{2 k \inf \psi} , \] which means that \eqref{eq5-4} cannot hold. Thus, $\Phi_1$ cannot attain an interior maximum, and so $\Phi_1 \leq 0$ on $M \times [0, 1]$. Hence \[ (\Phi_1)_t (x, 0) \leq 0 \quad \text{for any } x \in M, \] which implies an upper bound for $u_t$ on $\{ t = 0 \}$.
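For clarity, let us note explicitly why \eqref{eq5-4} contradicts this choice of $c_1$. We have \[ \psi(x_2, 0) - \psi(x_2, t_2) = - \int_0^{t_2} \psi_t (x_2, s) \, ds \leq t_2 \sup ( - \psi_t ) \] and $e^{2 k c_1 t_2} \geq 1 + 2 k c_1 t_2$, so \eqref{eq5-4} would yield \[ t_2 \sup ( - \psi_t ) \geq \big( e^{2 k c_1 t_2} - 1 \big) \psi(x_2, t_2) \geq 2 k c_1 t_2 \inf \psi . \] Dividing by $t_2 > 0$ gives $c_1 \leq \sup( - \psi_t ) / ( 2 k \inf \psi )$, which is impossible.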
Similarly, to give a lower bound for $u_t$ on $\{ t = 0 \}$, we consider the test function \[ \Phi_2 (x, t) = u - u_0 + c_2 t \quad \text{on} \quad M \times [0, 1], \] where $c_2 > 0$ is a sufficiently large constant such that \[ c_2 \geq \sup( u_0 - u_1 ) \quad \text{and} \quad c_2 > \frac{\sup{\psi_t}}{2 k \inf{\psi}}. \] To give a lower bound for $u_t$ on $\{ t = 1 \}$, we consider the test function \[ \Phi_3 (x, t) = u - u_1 + c_3 ( t - 1 ) \quad \text{on} \quad M \times [0, 1], \] where $c_3 > 0$ is a sufficiently large constant such that \[ c_3 \geq \sup( u_0 - u_1 ) \quad \text{and} \quad c_3 > \frac{\sup{\psi_t}}{2 k \inf{\psi}}. \] To give an upper bound for $u_t$ on $\{ t = 1 \}$, we consider the function \[ \Phi_4 (x, t) = u - u_1 - c_4 (t - 1) \quad \text{on} \quad M \times [0, 1], \] where $c_4 > 0$ is a sufficiently large constant such that \[ c_4 \geq \sup (u_1 - u_0) \quad \text{and} \quad c_4 > \frac{\sup( - \psi_t)}{2 k \inf \psi} . \] In order to give a global bound for $|u_t|$, we consider the linearized operator associated to \eqref{eq5-1} \[ \mathcal{T}(v) = \sigma_k^{ij} \big( W[u] \big) \mathcal{M}_{ij} (v) - e^{2 k u} 2 k \psi v . \] To give an upper bound for $u_t$, we consider the test function \[ \Phi_5 (x, t) = u_t - c_5, \] where $c_5$ is a positive constant to be chosen. By \eqref{eq5-3}, we have \[ \mathcal{T} ( \Phi_5 ) = \sigma_k^{ij} \big( W[u] \big) \mathcal{M}_{ij} (u_t) - e^{2 k u} 2 k \psi (u_t - c_5) = e^{2 k u} (\psi_t + 2 k \psi c_5) . \] By choosing $c_5$ sufficiently large so that \[ c_5 \geq \frac{\sup ( - \psi_t )}{2 k \inf \psi}, \] we deduce that \[ \mathcal{T} ( \Phi_5 ) \geq 0 \quad \text{on} \quad M \times [0, 1], \] which implies that \[ \Phi_5 \leq \max\limits_{M \times \{0, 1\}} (\Phi_5)^+ \quad \text{on} \quad M \times [0, 1]. \] We thus obtain an upper bound for $u_t$ on $M \times [0, 1]$.
To give a lower bound for $u_t$ on $M \times [0, 1]$, we consider the test function \[ \Phi_6 (x, t) = u_t + c_6, \] where $c_6 > 0$ is a sufficiently large constant such that \[ c_6 \geq \frac{\sup ( \psi_t )}{2 k \inf \psi}. \] \hfill \qedsymbol {\bf Proof of Theorem \ref{Theorem1}.} By Theorem \ref{Thm4}, we know that there exists a smooth solution $v(x, t)$ to \begin{equation} \label{eq5-5} \left\{ \begin{aligned} e^{- 2 k u} \sigma_k \big( W[u] \big) = & (1 - t) e^{- 2 k u_0} \sigma_k \big( W[u_0] \big) + t e^{- 2 k u_1} \sigma_k \big( W[u_1] \big), \\ u(\cdot, 0) = & u_0, \quad u(\cdot, 1) = u_1 \end{aligned} \right. \end{equation} which satisfies $\lambda\big( W[ v ] \big)(x, t) \in \Gamma_k$ for any $(x, t) \in M \times [0, 1]$. Let \[ w(x, t) = v(x, t) + a t (t - 1). \] We may choose $a$ sufficiently large such that \[ w_{tt} > 0 \quad \text{on} \quad M \times [0, 1] \] and \[ w_{tt} \sigma_k \big( W [v] \big) - \sigma_k^{ij} \big( W[ v ] \big) v_{ti} v_{tj} > 0 \quad \text{on} \quad M \times [0, 1]. \] It follows that \[ \lambda \big( E_w \big) \in \Gamma_k \quad \text{on} \quad M \times [0, 1]. \] Now, we construct the continuity process for $\tau \in [0, 1]$, \begin{equation} \label{eq5-6} \left\{ \begin{aligned} & u_{tt} \sigma_k \big( W[u] \big) - \sigma_k^{ij} \big( W[ u ] \big) u_{ti} u_{tj} \\ = & (1 - \tau) \Big( w_{tt} \sigma_k \big( W[w] \big) - \sigma_k^{ij} \big( W[ w ] \big) w_{ti} w_{tj} \Big) + \tau \psi(x, t), \\ & u(\cdot, 0) = u_0, \quad u(\cdot, 1) = u_1. \end{aligned} \right. \end{equation} It is clear that when $\tau = 0$, $u^0 = w$ is an admissible solution to \eqref{eq5-6}. Since the linearized operator associated to \eqref{eq5-6} is invertible, we can apply the implicit function theorem to prove the openness of the set of $\tau \in [0, 1]$ at which \eqref{eq5-6} has an admissible solution $u^{\tau}(x, t)$ on $M \times [0, 1]$. The closedness can be proved by the a priori estimates which are established in the previous sections.
Then the existence of a solution $u^{\tau}(x, t)$ to \eqref{eq5-6} for any $\tau \in [0, 1]$ follows from the classical continuity method. The uniqueness follows from the maximum principle. \hfill \qedsymbol \vspace{2mm} Before we give the proof of Theorem \ref{Theorem2}, we provide the definition of a viscosity solution of \begin{equation} \label{eq5-8} u_{tt} \sigma_k \big( W[u] \big) - \sigma_{k}^{ij} \big( W[u] \big) u_{ti} u_{tj} = 0 \end{equation} according to Definition 1.1 in \cite{LiYanyan2009}. \begin{defn} \label{Def1} Let $\Omega$ be an open subset of $M \times [0, 1]$. A continuous function $u$ in $\Omega$ is a viscosity supersolution of \eqref{eq5-8} if for any $(x_0, t_0) \in \Omega$ and any $\varphi \in C^2(\Omega)$ such that $u - \varphi$ has a local minimum at $(x_0, t_0)$, we have $R_{\varphi} (x_0, t_0) \in M_{n + 1} \setminus \mathcal{S}$, where $R_{\varphi}$ is given in \eqref{eq4-11} and $\mathcal{S}$ is given in Proposition \ref{prop5}. A continuous function $u$ in $\Omega$ is a viscosity subsolution of \eqref{eq5-8} if for any $(x_0, t_0) \in \Omega$ and any $\varphi \in C^2(\Omega)$ such that $u - \varphi$ has a local maximum at $(x_0, t_0)$, we have $R_{\varphi} (x_0, t_0) \in \overline{\mathcal{S}}$. We say that $u$ is a viscosity solution of \eqref{eq5-8} if it is both a viscosity supersolution and a viscosity subsolution. \end{defn} \vspace{2mm} {\bf Proof of Theorem \ref{Theorem2}.} For any $\epsilon \in (0, 1]$, by Theorem \ref{Theorem1}, there exists a unique smooth admissible solution $u^{\epsilon} (x, t)$ to the Dirichlet problem \begin{equation} \label{eq5-7} \left\{ \begin{aligned} & u_{tt} \sigma_k \big( W[u] \big) - \sigma_k^{ij} \big( W[ u ] \big) u_{ti} u_{tj} = \epsilon \quad \text{on } M \times [0, 1], \\ & u(\cdot, 0) = u_0, \quad u(\cdot, 1) = u_1. \end{aligned} \right. \end{equation} By the estimates established in the previous sections, we know that the solutions $\{ u^{\epsilon} \}$ have a uniform $C^2$ bound which is independent of $\epsilon$.
Also, by the comparison principle, we know that $u^{\epsilon_1} \leq u^{\epsilon_2}$ if $\epsilon_1 \geq \epsilon_2$. Thus, as $\epsilon \rightarrow 0$, $u^{\epsilon}$ converges in $C^{1, \alpha}$ to a $C^{1, 1}$ solution $u$ of \eqref{eq1-1} for any $\alpha \in (0, 1)$. In the sense of Definition \ref{Def1}, $u$ is a viscosity solution of \eqref{eq1-1}. \hfill \qedsymbol \vspace{4mm} \medskip \begin{thebibliography}{9} \vspace{4mm} \bibitem{Calabi1982} E. Calabi, {\em Extremal K\"ahler Metrics}, Seminar on Differential Geometry, vol. 102, Princeton University Press, 1982, pp. 259--290. \bibitem{ChenXX2000} X. Chen, {\em The space of K\"ahler metrics}, J. Differential Geom. {\bf 56} (2000), 189--234. \bibitem{CGH22} L. Chen, X. Guo and Y. He, {\em A class of fully nonlinear equations arising in conformal geometry}, Int. Math. Res. Not. IMRN (2022), 3651--3676. \bibitem{Donaldson1999} S. Donaldson, {\em Symmetric spaces, K\"ahler geometry and Hamiltonian dynamics}, Northern California Symplectic Geometry Seminar, Amer. Math. Soc., Providence, RI, 1999, 13--33. \bibitem{Donaldson2004} S. Donaldson, {\em Conjectures in K\"ahler geometry. Strings and geometry}, Clay Math. Proc. vol. 3, Amer. Math. Soc., Providence, RI, 2004, 71--78. \bibitem{Evans} L. C. Evans, {\em Classical solutions of fully nonlinear, convex, second order elliptic equations}, Comm. Pure Appl. Math. {\bf 35} (1982), 333--363. \bibitem{Gonzalez-Li-Nguyen} M. Gonz\'alez, Y.Y. Li and L. Nguyen, {\em Existence and uniqueness to a fully nonlinear version of the Loewner-Nirenberg problem}, Commun. Math. Stat. {\bf 6} (2018), 269--288. \bibitem{Guan08} B. Guan, {\em Complete conformal metrics of negative Ricci curvature on compact manifolds with boundary}, Int. Math. Res. Not. IMRN 2008, Art. ID rnn105, 25 pp. \bibitem{Guan09} B. Guan, {\em Addendum to: Complete conformal metrics of negative Ricci curvature on compact manifolds with boundary}, Int. Math. Res. Not.
IMRN {\bf 22} (2009), 4354--4355. \bibitem{Gursky-Streets2018} M. Gursky and J. Streets, {\em A formal Riemannian structure on conformal classes and uniqueness for the $\sigma_2$-Yamabe problem}, Geom. Topol. {\bf 22} (2018), 3501--3573. \bibitem{GV03} M. Gursky and J. Viaclovsky, {\em Fully nonlinear equations on Riemannian manifolds with negative curvature}, Indiana Univ. Math. J. {\bf 52} (2003), 399--419. \bibitem{He2021} W. He, {\em The Gursky-Streets equations}, Math. Ann. {\bf 381} (2021), 1085--1135. \bibitem{HeXuZhang} W. He, L. Xu and M. Zhang, {\em A fully nonlinear partial differential equation and its application to the $\sigma_k$-Yamabe problem}, J. Funct. Anal. {\bf 281} (2021), Paper No. 109140, 40 pp. \bibitem{Hormander} L. H\"ormander, {\em Notions of Convexity}, Progress in Mathematics, vol. 127, Birkh\"auser Boston, Inc., Boston, MA, 1994. \bibitem{Krylov} N. V. Krylov, {\em Boundedly nonhomogeneous elliptic and parabolic equations in a domain}, Izv. Akad. Nauk SSSR Ser. Mat. {\bf 47} (1983), 75--108. \bibitem{LiYanyan2009} Y. Y. Li, {\em Local gradient estimates of solutions to some conformally invariant fully nonlinear equations}, Comm. Pure Appl. Math. {\bf 62} (2009), 1293--1326. \bibitem{LN21} Y. Y. Li and L. Nguyen, {\em Solutions to the $\sigma_k$-Loewner-Nirenberg problem on annuli are locally Lipschitz and not differentiable}, J. Math. Study {\bf 54} (2021), 123--141. \bibitem{LNX23} Y. Y. Li, L. Nguyen and J. Xiong, {\em Regularity of viscosity solutions of the $\sigma_k$-Loewner-Nirenberg problem}, Proc. Lond. Math. Soc. {\bf 127} (2023), 1--34. \bibitem{Mabuchi1986} T. Mabuchi, {\em K-energy maps integrating Futaki invariants}, Tohoku Math. J. {\bf 38} (1986), 575--593. \bibitem{Mabuchi1987} T. Mabuchi, {\em Some symplectic geometry on compact K\"ahler manifolds}, Osaka J. Math. {\bf 24} (1987), 227--252. \bibitem{Semmes1992} S. Semmes, {\em Complex Monge-Amp\`ere and symplectic manifolds}, Amer. J.
Math. {\bf 114} (1992), 495--550. \bibitem{Sui2024} Z. Sui, {\em On fully nonlinear Loewner-Nirenberg problem of Ricci curvature}, J. Funct. Anal. {\bf 286} (2024), Paper No. 110379, 53 pp. \bibitem{Viaclovski2000} J. Viaclovsky, {\em Conformal geometry, contact geometry, and the calculus of variations}, Duke Math. J. {\bf 101} (2000), 283--316. \bibitem{Viaclovsky2002} J. Viaclovsky, {\em Estimates and existence results for some fully nonlinear elliptic equations on Riemannian manifolds}, Comm. Anal. Geom. {\bf 10} (2002), 815--846. \end{thebibliography} \end{document}
\documentclass[11pt]{article} \usepackage{amsmath,amsfonts,amsthm,amssymb, mathtools} \usepackage{mathrsfs, graphicx, color, latexsym, tikz, calc} \usepackage[colorlinks,bookmarksopen,bookmarksnumbered,citecolor=blue, linkcolor=red, urlcolor=blue]{hyperref} \usepackage{enumitem} \usepackage{authblk} \usetikzlibrary{shadows} \usetikzlibrary{patterns,arrows,decorations.pathreplacing} \voffset -2cm \makeatletter \def\leftharpoonfill@{\arrowfill@\leftharpoonup\relbar\relbar} \def\rightharpoonfill@{\arrowfill@\relbar\relbar\rightharpoonup} \newcommand\rbjt{\mathpalette{\overarrow@\rightharpoonfill@}} \newcommand\lbjt{\mathpalette{\overarrow@\leftharpoonfill@}} \makeatother \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{problem}{Problem} \newtheorem{conjecture}{Conjecture} \newtheorem{corollary}{Corollary} \newtheorem{remark}{Remark} \newtheorem{prop}{Proposition} \newtheorem{claim}{Claim}[section] \renewcommand\proofname{\bf{Proof}} \marginparwidth 0pt \oddsidemargin 32pt \evensidemargin 0pt \topmargin 20pt \textheight 21.5 truecm \textwidth 14.5 truecm \def\baselinestretch{1.1} \begin{document} \title{\bf \Large Oriented Ramsey numbers of some oriented graphs with one cycle} \author{Junying Lu, Yaojun Chen\footnote{Corresponding author. [email protected].}} \affil{{\small {School of Mathematics, Nanjing University, Nanjing 210093, China}}} \date{ } \maketitle \begin{abstract} The oriented Ramsey number of an oriented graph $H$, denoted by $\rbjt{r}(H)$, is the smallest integer $N$ such that every tournament on $N$ vertices contains a copy of $H$. Rosenfeld (JCT-B, 1974) conjectured that $\rbjt{r}(C_n)=n$ if $C_n$ is a non-directed oriented cycle of sufficiently large order $n$, which was confirmed for $n\geq 9$ by Zein recently. In this paper, we determine the oriented Ramsey number of an oriented graph obtained by identifying a vertex of an antidirected cycle with one end of a directed path.
Some other oriented Ramsey numbers for oriented graphs with one cycle are also discussed. \vskip 2mm \noindent {\it AMS classification:} 05C20\\[1mm] \noindent {\it Keywords:} Oriented Ramsey number; Tournament; Cycle; Path \end{abstract} \baselineskip=0.202in \section{Introduction} A digraph $D$ is a pair $D=(V(D),E(D))$, where $V(D)$ is a set of vertices and $E(D)$ is the set of arcs of $D$ such that $E(D)\subseteq (V(D)\times V(D))\setminus \{(v,v): v \in V(D) \}$. An \emph{oriented graph} $D$ is a digraph where $(u, v) \in E(D)$ implies $(v, u) \notin E(D)$ for every $u, v \in V(D)$. For a digraph $D$, if $(u,v)$ is an arc, we say that $u$ \emph{dominates} $v$ and write $u\to v$. If $v_1\to v_2$ for any $v_1\in V_1$, then we write $V_1\to v_2$, and the notation $v_1\to V_2$ is defined similarly. If $v_1\to v_2$ for any $v_1\in V_1$ and $v_2\in V_2$, then we write $V_1\to V_2$ or $V_2\gets V_1$. For any $W\subseteq V(D)$, we denote by $D[W]$ the subdigraph induced by $W$ in $D$, and $D-W=D[V(D)\setminus W]$. The \emph{dual} digraph of $D$ is the digraph $-D$ on the same set of vertices such that $x\to y$ is an arc of $-D$ if and only if $y\to x$ is an arc of $D$. Let $v$ be a vertex of $D$. The \emph{out-neighbourhood} of $v$, denoted by $N_D^+(v)$, is the set of vertices $w$ such that $v\to w$. The \emph{in-neighbourhood} of $v$, denoted by $N_D^-(v)$, is the set of vertices $w$ such that $w\to v$. The \emph{out-degree} $d_D^+(v)$ (resp. the \emph{in-degree} $d_D^-(v)$) is $|N_D^+(v)|$ (resp. $|N_D^-(v)|$). Compared to the well-known directed paths (cycles), the \emph{antidirected paths (cycles)} are the oriented paths (cycles) in which every vertex has either in-degree $0$ or out-degree $0$ (in other words, any two consecutive edges are oriented in opposite ways). A tournament is an orientation of a complete graph. A tournament is regular if each vertex has the same out-degree.
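To keep these definitions concrete, the following small sketch (the helper functions and their names are ours, for illustration only and not part of the paper's argument) checks the antidirected property and regularity on two tiny examples:

```python
# Illustrative sketch (helper names ours): check the definitions of an
# antidirected cycle and of a regular tournament on small arc lists.

def out_deg(arcs, v):
    return sum(1 for (x, _) in arcs if x == v)

def in_deg(arcs, v):
    return sum(1 for (_, y) in arcs if y == v)

def is_antidirected(arcs):
    # Antidirected: every vertex has in-degree 0 or out-degree 0,
    # i.e. consecutive arcs are oriented in opposite ways.
    verts = {v for arc in arcs for v in arc}
    return all(out_deg(arcs, v) == 0 or in_deg(arcs, v) == 0 for v in verts)

def is_regular_tournament(arcs, n):
    # Regular: every vertex has the same out-degree (the arc list is
    # assumed to be a tournament on the vertex set {0, ..., n-1}).
    return len({out_deg(arcs, v) for v in range(n)}) == 1

ac4 = [(0, 1), (2, 1), (2, 3), (0, 3)]   # the antidirected cycle AC_4
c3 = [(0, 1), (1, 2), (2, 0)]            # the directed 3-cycle
print(is_antidirected(ac4))              # True
print(is_antidirected(c3))               # False
print(is_regular_tournament(c3, 3))      # True: every out-degree is 1
```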
The \emph{oriented Ramsey number} of an oriented graph $H$, denoted by $\rbjt{r}(H)$, is the smallest integer $N$ such that every tournament on $N$ vertices contains a copy of $H$. Because of transitive tournaments, which are acyclic, $\rbjt{r}(H)$ is finite if and only if $H$ is acyclic. Note that $\rbjt{r}(D)=\rbjt{r}(-D)$ for any acyclic oriented graph $D$. Indeed, if every tournament of order $n$ contains $D$, then for any tournament $T$ of order $n$, $-T$ contains $D$, and so $T$ contains $-D$. The oriented Ramsey numbers of oriented paths and non-directed cycles have been widely studied. It started with R\'{e}dei's theorem \cite{Redei}, which states that the oriented Ramsey number of $\vec{P}_n$, the directed path on $n$ vertices, is $n$. Later on, in 1971, Gr\"{u}nbaum \cite{Grunbaum} proved that the oriented Ramsey number of an antidirected path of order $n$ is $n$ unless $n=3$ (in which case it is not contained in the tournament which is a directed $3$-cycle) or $n=5$ (in which case it is not contained in the regular tournament of order $5$) or $n=7$ (in which case it is not contained in the Paley tournament of order $7$). In the same year, Rosenfeld \cite{Rosenfeld} gave an easier proof and conjectured that there is an integer $N>7$ such that $\rbjt{r}(P)=|P|$ for every oriented path of order at least $N$. The condition $N>7$ results from Gr\"{u}nbaum's counterexamples. Several papers gave partial answers to this conjecture \cite{Alspach, Forcade, Straight} until Rosenfeld's conjecture was verified by Thomason, who proved in \cite{Thomason} that $N$ exists and is less than $2^{128}$. Finally, Havet and Thomass\'{e} \cite{Havet} showed that $\rbjt{r}(P)=|P|$ for every oriented path $P$ except the antidirected paths of order $3$, $5$ and $7$. Concerning the oriented cycles, Gr\"{u}nbaum \cite{Grunbaum} conjectured that the oriented Ramsey number of the antidirected cycle on $n\ge 10$ vertices is $n$. Let $AC_{2k}$ denote the antidirected cycle on $2k$ vertices.
In 1974, Rosenfeld \cite{Rosenfeld2} proved the conjecture for large $n$ and obtained the following. \begin{theorem}[Rosenfeld, \cite{Rosenfeld2}]\label{thm-R} $\rbjt{r}(AC_{2k})=2k$ for $k\ge 14$. \end{theorem} Rosenfeld \cite{Rosenfeld2} also conjectured the existence of some integer $N$ such that every tournament on $n$ vertices, $n > N$, contains any oriented Hamiltonian cycle, except possibly the directed ones. Thomason \cite{Thomason} showed that any tournament $T$ of order $n\ge 2^{128}+1$ is pancyclic, that is, $T$ contains every non-directed oriented cycle $C$ with $3\le |C|\le n$. In 1999, Havet \cite{Havet1} proved that every tournament of order $n \ge 68$ contains any oriented Hamiltonian cycle, except possibly the directed ones. Recently, Zein \cite{Zein} showed that, with exactly $35$ exceptions, every tournament of order $n \ge 3$ is pancyclic. In particular, with $30$ exceptions, all of order less than $9$, every tournament contains each non-directed Hamiltonian cycle. For $3\le i\le n$, let $H(n,i)$ denote the oriented graph with vertex set $\{1,2,\ldots,n\}$ and arc set $\{(1,i),(j,j+1)\colon 1\le j\le n-1\}$. At the Sixth Yugoslav Seminar on Graph Theory in Zagreb (1986), S\'{o}s posed the following conjecture. \begin{conjecture}[S\'{o}s, 1986] $\rbjt{r}(H(n,i))=n$ for each $n$ and $i~(4\le i\le n-1)$. \end{conjecture} Petrovi\'{c} \cite{Petrovic} completely resolved this conjecture. He showed that if $3\le i\le n$, then any tournament $T_n$ contains a copy of $H(n,i)$ unless $i=3$ or $i=5$ and $T_n$ belongs to a certain class of exceptional tournaments. In fact, $H(n,i)$ can be obtained by identifying one end of a directed path $\vec{P}_{n-i+1}$ with a vertex of a specific non-directed cycle of length $i$. The \emph{blocks} of an oriented path (resp. cycle) are the maximal subdipaths of this path (resp. cycle). It is clear that the underlying graph of $H(n,i)$ is unicyclic and the length of the largest block of the cycle attains the maximum.
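The arc set of $H(n,i)$ can be generated directly from this definition; the short sketch below (our own illustration, with hypothetical helper names) builds it and recovers the unique cycle of the underlying graph, whose length is $i$:

```python
# Illustrative sketch (helper names ours): build H(n, i) and find the
# length of the unique cycle in its underlying graph.
from collections import defaultdict

def H(n, i):
    # H(n, i): vertex set {1, ..., n}, arc set {(1, i)} together with
    # the directed path arcs (j, j+1) for 1 <= j <= n-1.
    return [(1, i)] + [(j, j + 1) for j in range(1, n)]

def cycle_length(arcs):
    # The underlying graph is unicyclic; recover the unique cycle by
    # repeatedly deleting vertices of degree 1.
    adj = defaultdict(set)
    for u, v in arcs:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) == 1:
                (w,) = adj[v]
                adj[w].discard(v)
                del adj[v]
                changed = True
    return len(adj)

print(cycle_length(H(10, 4)))  # 4: the cycle 1 -> 2 -> 3 -> 4 closed by the arc (1, 4)
```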
An \emph{in-arborescence} (resp. \emph{out-arborescence}) is an oriented tree in which all arcs are oriented towards (resp. away from) a fixed vertex called the root. An \emph{arborescence} is either an in-arborescence or an out-arborescence. A directed path is a special arborescence. Motivated by S\'os' conjecture, we are interested in the oriented Ramsey numbers of other oriented graphs whose underlying graphs are unicyclic. In particular, we focus on the cases when the length of the largest block of the cycle attains the minimum (which is 1), that is, the antidirected cycle, or the cycle has exactly two blocks, that is, $C(p,q)$, which is obtained from a directed cycle of length $p+q$ by changing the orientation of $p$ consecutive edges. It should be noted that antidirected cycles are highly symmetric and difficult to deal with in studying whether a tournament contains a Hamiltonian cycle of such type. For $k,\ell\ge 1$, we denote the oriented graph, which is obtained by identifying an end $u$ of the directed path $\vec{P}_{\ell+1}$ with a vertex $v\in V(AC_{2k})$, by \[\begin{cases} Q^+(2k,+\ell) & \text{if } d_{\vec{P}_{\ell+1}}^+(u)=1 \text{ and } d_{AC_{2k}}^+(v)=2;\\ Q^-(2k,+\ell) & \text{if } d_{\vec{P}_{\ell+1}}^+(u)=1 \text{ and } d_{AC_{2k}}^-(v)=2;\\ Q^+(2k,-\ell) & \text{if } d_{\vec{P}_{\ell+1}}^-(u)=1 \text{ and } d_{AC_{2k}}^+(v)=2;\\ Q^-(2k,-\ell) & \text{if } d_{\vec{P}_{\ell+1}}^-(u)=1 \text{ and } d_{AC_{2k}}^-(v)=2. \end{cases}\] If the sign is omitted, it is assumed to be positive. Note that the dual of $Q^+(2k,-\ell)$ is $Q^-(2k,+\ell)$, and the dual of $Q^-(2k,-\ell)$ is $Q^+(2k,+\ell)$. The first main result of this paper is on $\rbjt{r}(Q^\pm(2k,\ell))$, as follows. \begin{theorem}\label{thm-1} For $k\ge 54$, $\rbjt{r}(Q^\pm(2k,\ell))=2k+\ell$. \end{theorem} Secondly, as a generalization, we consider the oriented graph obtained by adding a forward (resp. backward) arc from a vertex of an antidirected cycle to the root of an out-arborescence (resp.
in-arborescence). By duality, we treat only the case when the arborescence is an out-arborescence. The \emph{leaves} of an oriented tree are the vertices having exactly one neighbour. There are two kinds of leaves: \emph{in-leaves} which have out-degree 1 and in-degree 0, and \emph{out-leaves} which have out-degree 0 and in-degree 1. Let $A$ be an out-arborescence with $\ell$ vertices and $a$ out-leaves. Denote by $ACA^+(2k;\ell, a)$ (resp. $ACA^-(2k;\ell, a)$) the oriented graph obtained by adding a forward arc from a vertex $v$ of an antidirected cycle $AC_{2k}$ to the root of $A$, where $d_{AC_{2k}}^+(v)=2$ (resp. $d_{AC_{2k}}^-(v)=2$). It is not difficult to see that $ACA^\pm(2k;\ell,1)=Q^\pm(2k,\ell)$ for $\ell\ge 2$ and $ACA^\pm(2k;1,0)=Q^\pm(2k,1)$. \begin{theorem}\label{thm-4} For $k\ge 25$ and $\ell>a\ge 1$, $\rbjt{r}(ACA^\pm(2k;\ell,a))\le 2k+\ell+a$. \end{theorem} Finally, let $CP(p,q;\ell)$ be the oriented graph obtained by identifying the end $u$ of a directed path $\vec{P}_{\ell+1}$ with a vertex $v$ of $C(p,q)$ satisfying $d_{\vec{P}_{\ell+1}}^+(u)=1$ and $d_{C(p,q)}^+(v)=0$. Note that $CP(1,i-1;n-i)=H(n,i)$. It remains interesting to consider the case $p,q\ge 2$. In this paper, as a special case, we determine the value of $\rbjt{r}(CP(2,2;n-4))$ and obtain the following. \begin{theorem}\label{thm-5} For $n\ge 4$, $\rbjt{r}(CP(2,2;n-4))= n+1$. \end{theorem} \section{Preliminaries} In this section, we will introduce some results which will be used later. Firstly, as a preparation to prove Theorems \ref{thm-1} and \ref{thm-4}, we introduce some concepts about the median order and give several basic properties. Let $\sigma=(v_1,v_2,\ldots,v_n)$ be an ordering of the vertices of a digraph $D$. An arc $(v_i,v_j)$ is \emph{forward} (according to $\sigma$) if $i<j$ and \emph{backward} (according to $\sigma$) if $j<i$.
A \emph{median order} of $D$ is an ordering of the vertices of $D$ with the maximum number of forward arcs, or equivalently the minimum number of backward arcs. Some basic properties of median orders of tournaments are as follows. \begin{lemma}[Dross and Havet \cite{Dross}]\label{lem-1} Let $T$ be a tournament and $(v_1,v_2,\ldots,v_n)$ a median order of $T$. Then, for any two indices $i,j$ with $1\le i<j\le n$:\\ {\rm (P1)} $(v_i,v_{i+1},\ldots,v_j)$ is a median order of the induced tournament $T[\{v_i,v_{i+1},\ldots,v_j\}]$.\\ {\rm (P2)} $v_i$ dominates at least half of the vertices $v_{i+1},v_{i+2},\ldots,v_j$, and $v_j$ is dominated by at least half of the vertices $v_i,v_{i+1},\ldots,v_{j-1}$. In particular, each vertex $v_\ell$, $1\le \ell\le n-1$, dominates its successor $v_{\ell+1}$. \end{lemma} A \emph{local median order} is an ordering of the vertices of $D$ that satisfies property (P2). Now, we list some results on the oriented Ramsey numbers of oriented cycles and trees. \begin{lemma}[Dross and Havet \cite{Dross}]\label{lem-5} Let $A$ be an out-arborescence with $n$ vertices, $k$ out-leaves and root $r$, let $T$ be a tournament on $m=n+k-1$ vertices, and let $(v_1,v_2,\ldots,v_m)$ be a local median order of $T$. There is an embedding $\phi$ of $A$ into $T$ such that $\phi(r)=v_1$. \end{lemma} An ADH path (cycle) is an antidirected Hamiltonian path (cycle). If $v\to u\gets \cdots$ is an ADH path in $T_n$, then $v$ is called a \emph{starting vertex}, and if $v\gets u\to \cdots$ is an ADH path in $T_n$, then $v$ is a \emph{terminating vertex}. \begin{lemma}[Rosenfeld \cite{Rosenfeld}]\label{lem-2-6} (a) If $T$ is a tournament with an odd number of vertices and $T$ has an ADH path, then $T$ has a double vertex, i.e., a vertex $v$ such that $T$ has an ADH path with $v$ as starting vertex and an ADH path with $v$ as terminating vertex.
(b) If $n\ge 10$ is an even integer, then for every vertex $v\in V(T_n)$ there is an ADH path on $T_n$ having $v$ as an end-vertex. \end{lemma} Denote the transitive tournament of order $n$ by $TT_n$. Our proof depends heavily on the existence of large transitive tournaments and we shall use the following results. \begin{lemma}[Sanchez-Flores \cite{Sanchez}]\label{lem-2-7} Let $k$ and $n$ be positive integers with $k\ge 7$ and $n\ge 54\cdot2^{k-7}$. Every tournament $T_n$ contains a $TT_k$. \end{lemma} \begin{lemma}[Rosenfeld \cite{Rosenfeld2}]\label{lem-2-8} Let $TT_n$ be a transitive tournament on $\{1,2,\ldots,n\}$ with arc set $\{(i,j)\colon 1\leq i<j\leq n\}$. (a) If $n$ is even, then $TT_n$ has an ADH path starting at $i~(i\ne n)$ and terminating at $j$ except for the following cases: (i) $j=1$; (ii) $i=1,j=2~(n>2)$; (iii) $i=n-1,j=n~(n>2)$. (b) If $n$ is odd, then $TT_n$ has an ADH path with $i,j$ as starting vertices if $i,j\ne n$ and for $n>3$, $\{i,j\}\ne \{n-2,n-1\}$; as terminating vertices if $i,j\ne 1$ and for $n>3$, $\{i,j\}\ne \{2,3\}$. \end{lemma} \section{Proofs of Theorems \ref{thm-1} and \ref{thm-4}} The main task of this section is to give the proofs of Theorems \ref{thm-1} and \ref{thm-4}. In order to prove Theorem \ref{thm-1}, we need the following lemma. \begin{lemma} \label{lem-3-4} Let $T$ be a regular tournament on $2k+1$ vertices with $k\ge 9$. If $T$ has a $TT_m$ for $m\ge 8$, then $T$ contains $Q^\pm(2k,1)$. \end{lemma} To make the arguments easier to follow, we postpone the proof of Lemma \ref{lem-3-4} until the end of this section. Let $P=(x_1,\ldots,x_n)$ be a path. We say that $x_1$ is the \emph{origin} of $P$ and $x_n$ is the \emph{terminus} of $P$. If $x_1\to x_2$, $P$ is an \emph{outpath}, otherwise $P$ is an \emph{inpath}. The \emph{directed outpath} of order $n$ is the path $(x_1,\ldots,x_n)$ in which $x_i\to x_{i+1}$ for all $i$, $1\le i\le n-1$; the dual notion is \emph{directed inpath}. 
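For small even $n$, Lemma \ref{lem-2-8}(a) can be confirmed by exhaustive search. The following sketch is our own illustration, not part of the paper (the function names are ours): it encodes $TT_n$ on $\{1,\ldots,n\}$ with arcs $i\to j$ for $i<j$, enumerates the antidirected Hamiltonian paths whose first vertex emits its incident arc and whose last vertex receives its incident arc, and compares the achievable start/terminate pairs with the exceptions listed in the lemma.

```python
from itertools import permutations

def adh_pairs(n):
    """Pairs (i, j) such that TT_n (arcs i -> j for i < j) has an antidirected
    Hamiltonian path starting at i and terminating at j."""
    pairs = set()
    for p in permutations(range(1, n + 1)):
        # Arc directions must alternate along the path, with the first arc
        # leaving the start (p[0] -> p[1]) and, since n is even, the last
        # arc entering the end.
        if all((p[k] < p[k + 1]) == (k % 2 == 0) for k in range(n - 1)):
            pairs.add((p[0], p[-1]))
    return pairs

def lemma_pairs(n):
    """The pairs predicted by Lemma 2.8(a) for even n > 2."""
    return {(i, j) for i in range(1, n) for j in range(2, n + 1)
            if i != j and (i, j) not in {(1, 2), (n - 1, n)}}
```

For $n=4$ one can check by hand that both sets equal $\{(1,3),(1,4),(2,3),(2,4),(3,2)\}$.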
\begin{proof}[\bfseries{Proof of Theorem \ref{thm-1}}] Suppose $\ell=1$. If $T$ is a non-regular tournament of order $n$, let $v\in V(T)$ be a vertex with maximum in-degree. Since $k\ge 54$, $T-\{v\}$ contains a copy of $AC_{2k}$ by Theorem \ref{thm-R}. Since $d_{AC_{2k}}^-(v)=d_T^-(v)\ge k+1$, $T$ contains $Q^\pm(2k,1)$. If $T$ is regular, then $T$ contains a $TT_8$ by Lemma \ref{lem-2-7}, and hence $T$ contains $Q^\pm(2k,1)$ by Lemma \ref{lem-3-4}. Suppose $\ell\ge 2$. Let $\sigma=(v_1,v_2,\ldots,v_n)$ be a median order of $T$. By Lemma \ref{lem-1}, we can see that $P=(v_{2k+2},v_{2k+3},\ldots, v_n)$ is a directed outpath of length $\ell-2$. Set $A=\{v_1,\ldots,v_{2k+1}\}$. By Lemma \ref{lem-1}, we have $d_A^-(v_{2k+2})\ge k+1$. Let $X\subseteq A$ with $|X|=2k$ be such that $X$ contains as many vertices in $N_A^-(v_{2k+2})$ as possible. Since $k\ge 54$, by Theorem \ref{thm-R}, $T[X]$ contains an antidirected cycle $C$ of length $2k$. Since $d_C^-(v_{2k+2})\ge k+1$, there exist $x,y\in V(C)$ such that $d_C^+(x)=d_C^-(y)=2$ and $(x,v_{2k+2}),(y,v_{2k+2})\in E(T)$. Therefore, $T$ contains $Q^+(2k,\ell-1)$ and $Q^-(2k,\ell-1)$. Suppose to the contrary that $T$ contains no $Q^+(2k,\ell)$ or $Q^-(2k,\ell)$. Let $A\setminus X=\{v\}$. Obviously, $v\to v_n$. We claim that $v\to v_j$ for any $2k+2\le j\le n-1$. If there exists $j$ with $2k+2\le j\le n-1$ such that $v_j \to v$, denote by $j_0$ the largest such $j$; then $v\to v_{j_0+1}$, and thus we obtain a directed outpath $(v_{2k+2},\ldots,v_{j_0},v,v_{j_0+1},\ldots,v_{n})$. It follows that $T$ contains $Q^\pm(2k,\ell)$, a contradiction. In particular, $v\to v_{2k+2}$. By the choice of $X$, we have $X\to v_{2k+2}$ and thus $A\to v_{2k+2}$. Now we can choose $X$ to be any $2k$-subset of $A$, and hence we have $v_i\to \{v_{2k+2},\ldots,v_n\}$ for any $1\le i\le 2k+1$. It follows that $A\to V(T)\setminus A$.
Since $T[A]$ contains $Q^\pm(2k,1)$ by the argument above, we can see that $T$ contains $Q^\pm(2k,\ell)$, again a contradiction. Therefore, the result follows. \end{proof} \begin{proof}[\bfseries{Proof of Theorem \ref{thm-4}}] Let $n=2k+\ell+a$ and let $A$ be an out-arborescence with $\ell$ vertices, $a$ out-leaves and root $r$. Let $\sigma=(v_1,v_2,\ldots,v_n)$ be a median order of a tournament $T_n$. By Lemma \ref{lem-1}(P1), $(v_{2k+2},v_{2k+3},\ldots,v_n)$ is a median order of the induced subtournament $T[\{v_{2k+2},v_{2k+3},\ldots,v_n\}]$. Therefore, there is an embedding $\phi$ of $A$ in $T_n$ such that $\phi(r)=v_{2k+2}$ due to Lemma \ref{lem-5}. Set $B=\{v_1,\ldots,v_{2k+1}\}$. By Lemma \ref{lem-1}, we have $d_B^-(v_{2k+2})\ge k+1$. Let $X\subseteq B$ such that $|X|=2k$ and $X$ contains as many vertices in $N_B^-(v_{2k+2})$ as possible. By Theorem \ref{thm-R}, $T_n[X]$ contains an antidirected cycle $C$ of length $2k$. Since $d_C^-(v_{2k+2})\ge k+1$, there exist $x,y\in V(C)$ such that $d_C^+(x)=d_C^-(y)=2$ and $(x,v_{2k+2}),(y,v_{2k+2})\in E(T_n)$. Therefore, $T_n$ contains $ACA^+(2k;\ell,a)$ and $ACA^-(2k;\ell,a)$. \end{proof} We now give the proof of Lemma \ref{lem-3-4} in detail. \begin{proof}[\bfseries{Proof of Lemma \ref{lem-3-4}}] Let $TT_m$ be a maximum transitive subtournament with vertex set $\{1,2,\dots,m\}$ and arc set $\{(i,j):1\leq i<j\leq m\}$, and let $T^*=T-TT_m=T[\{a_1,a_2,\ldots,a_s\}]$. Since $T$ is regular, we have $m\le k+1$ and $s\ge k$. {\flushleft\bf Case 1.} $m$ is even. \vskip 2mm In this case, $s$ is odd and $s\ge 9$. By Lemma \ref{lem-2-6}(a), we may assume that $a_1$ is a double vertex of the tournament $T^*$. Since $m\ge 8$, we have $m-3\geq 5$, and hence there exist two vertices $x,y\in \{3,\ldots,m-3\}$ such that $\{x,y\}\to a_1$ or $\{x,y\}\gets a_1$. If $\{x,y\}\to a_1$, let $a_1\gets a_2\cdots a_{s-1}\to a_s$ be an ADH path in $T^*$. Since $m$ is maximum, there is some vertex $t\in V(TT_m)$ such that $t\to a_s$.
If $t\notin \{m-1,m\}$, let $z=\min\{\{x,y\}\setminus \{t\}\}$; then $z,t\not=m-1$ and $\{z,t\}\not=\{m-3,m-2\}$. By Lemma \ref{lem-2-8}(b), $TT_m-\{m\}$ has an ADH path with $z$ and $t$ as starting vertices, and hence $z\to a_1\gets a_2\cdots a_{s-1}\to a_s\gets t\to \cdots \gets z$ is an ADH cycle in $T-\{m\}$. Since $TT_m-\{m\} \to m$, $T$ contains $Q^\pm(2k,1)$. Therefore, we may assume that $a_s\to \{1,2,\ldots,m-2\}$. Again, by the maximality of $m$, there is some vertex $t_1\in V(TT_m)$ such that $a_{s-1}\to t_1$. By the symmetry of $x$ and $y$, assume that $t_1\not=x$. If $t_1\notin\{m-1,m\}$, then by Lemma \ref{lem-2-8}(a), $TT_m-\{m,t_1\}$ has an ADH path $x\to \cdots \to t_2$ for some $t_2\in V(TT_m)\setminus \{x,t_1,m-1,m\}$, and hence $x\to a_1\gets a_2\cdots a_{s-1}\to t_1\gets a_s\to t_2\gets \cdots \gets x$ is an ADH cycle in $T-\{m\}$. It follows that $T$ contains $Q^\pm(2k,1)$. Since $T'=T[\{a_s,1,\ldots,m-2\}]$ is a transitive subtournament of $T$, by Lemma \ref{lem-2-8}(b), $T'$ contains an ADH path $1\to \cdots\gets x$. If $t_1=m-1$, then $x\to a_1\gets a_2 \cdots a_{s-1}\to t_1\gets 1\to \cdots \gets x$ is an ADH cycle in $T-\{m\}$, and so $T$ contains $Q^\pm(2k,1)$. If $t_1=m$, then we may assume that $\{1,2,\ldots,m-1\}\to a_{s-1}$. It follows that $T[\{1,\ldots,m-1,a_{s-1},m\}]$ is a transitive subtournament on $m+1$ vertices, which contradicts the maximality of $m$. If $\{x,y\}\gets a_1$, let $a_1\to a_2 \cdots a_{s-1}\gets a_s$ be an ADH path in $T^*$. Since $m$ is maximum, there is some vertex $t\in V(TT_m)$ such that $t\gets a_s$. If $t\notin \{1,m\}$, let $z=\max\{\{x,y\}\setminus \{t\}\}$; then $z,t\not=1$ and $\{z,t\}\not=\{2,3\}$. By Lemma \ref{lem-2-8}(b), $TT_m-\{m\}$ has an ADH path with $z$ and $t$ as terminating vertices, and hence $z\gets a_1\to a_2 \cdots a_{s-1}\gets a_s\to t\gets \cdots \to z$ is an ADH cycle in $T-\{m\}$. Since $TT_m-\{m\} \to m$, $T$ contains $Q^\pm(2k,1)$.
Therefore, we may assume that $a_s\gets \{2,\ldots,m-2,m-1\}$. Again, by the maximality of $m$, there is some vertex $t_1\in V(TT_m)$ such that $a_{s-1}\gets t_1$. By the symmetry of $x$ and $y$, assume that $t_1\ne x$. If $t_1\notin\{1,m\}$, then by Lemma \ref{lem-2-8}(a), $TT_m-\{m,t_1\}$ has an ADH path $x\gets \cdots \gets t_2$ for some $t_2\in V(TT_m)\setminus \{1,x,t_1,m\}$, and hence $x\gets a_1\to a_2 \cdots a_{s-1}\gets t_1\to a_s\gets t_2\to \cdots \to x$ is an ADH cycle in $T-\{m\}$. It follows that $T$ contains $Q^\pm(2k,1)$. Since $T'=T[\{2,\ldots,m-1,a_s\}]$ is a transitive subtournament of $T$, by Lemma \ref{lem-2-8}(b), $T'$ contains an ADH path $m-1\gets \cdots\to x$. If $t_1=1$, then $x\gets a_1\to a_2 \cdots a_{s-1}\gets t_1\to m-1\gets \cdots \to x$ is an ADH cycle in $T-\{m\}$, and so $T$ contains $Q^\pm(2k,1)$. Now, we are left to consider $t_1=m$. If $a_s\to m$, then since $TT_m-\{m-1\}$ has an ADH path $m\gets \cdots \to x$ by Lemma \ref{lem-2-8}(b), $x\gets a_1\to a_2 \cdots a_s\to m\gets \cdots \to x$ is an ADH cycle in $T-\{m-1\}$. Since $\{1,\ldots,m-2\}\to m-1$, one can see that $T$ contains $Q^\pm(2k,1)$. Therefore, we may assume that $m\to a_s$. In such a case, by Lemma \ref{lem-2-8}(a), there is some vertex $z\in \{2,\ldots,m-3\}$ such that $TT_m-\{m-1,m\}$ has an ADH path $z\to \cdots \to x$. Thus, $x\gets a_1\to \cdots \to a_{s-1}\gets t_1\to a_s\gets z\to \cdots \to x$ is an ADH cycle in $T-\{m-1\}$. Hence $T$ contains $Q^\pm(2k,1)$. {\flushleft\bf Case 2.} $m$ is odd. \vskip 2mm In this case, $s$ is even and $s\ge 10$. Let $i_0=\frac{m-1}{2}$. To complete the proof of this case, we need the following claim. \begin{claim}\label{cla-3-1} If $TT_m-\{i_0\}\to v$ or $v\to TT_m-\{i_0\}$ for some $v\in V(T^*)$, then $T$ contains $Q^\pm(2k,1)$. \end{claim} \begin{proof} Set $T''=T[(V(TT_m)\cup \{v\})\setminus\{i_0\}]$. Since $m$ is maximum, $T''$ is a maximal transitive subtournament.
Relabel $V(T'')$ as $\{1',2',\ldots,m'\}$ such that $i'\to j'$ for $i<j$. Note that $\{2',\ldots,(i_0-1)'\}\to i_0$ and $i_0\to \{(i_0+1)',\ldots,(m-1)'\}$. Since $s$ is even and $s\ge 10$, by Lemma \ref{lem-2-6}(b), $T-T''$ has an ADH path with $i_0$ as an end vertex. Firstly, let the ADH path be \[i_0\gets b_2\to \cdots\gets b_s~(\{b_2,\ldots,b_s\}=V(T^*)\setminus\{v\}). \] Since $m$ is maximum, there is a vertex $t\in V(T'')$ such that $b_s\to t$. If $t\notin\{1',m'\}$, then by Lemma \ref{lem-2-8}(a), one can choose a vertex $x\in \{2',\ldots,(i_0-1)'\}$ with $x\ne t$ such that $T''-\{m'\}$ has an ADH path $x\to \cdots \to t$, and so $x\to i_0\gets b_2\cdots b_s\to t\gets \cdots\gets x$ is an ADH cycle in $T-\{m'\}$. This shows that $T$ contains $Q^\pm(2k,1)$. Thus, we may assume that $\{2',\ldots,(m-1)'\}\to b_s$. Let $t'\in V(T'')$ such that $t'\to b_{s-1}$. If $t'\notin\{1',m'\}$, then by Lemma \ref{lem-2-8}(b), there is an ADH path $x'\to \cdots \gets x$ in $T''-\{t',m'\}$ such that $x\in \{2',\ldots,(i_0-1)'\}$ and $x'\in \{2',\ldots,(m-2)'\}\setminus \{t',x\}$. Thus, $x\to i_0\gets b_2 \cdots b_{s-1}\gets t'\to b_s\gets x'\to \cdots \gets x$ is an ADH cycle in $T-\{m'\}$. It follows that $T$ contains $Q^\pm(2k,1)$. If $t'=1'$, then $y\to i_0\gets b_2 \cdots b_{s-1}\gets t'\to (m-1)'\gets (m-2)'\to b_s\gets y'\to \cdots \gets y$, where $y'\to \cdots\gets y$ is an ADH path in $T''-\{1',(m-2)',(m-1)',m'\}$, is an ADH cycle in $T-\{m'\}$. Therefore, $T$ contains $Q^\pm(2k,1)$. Now, we are left to consider $t'=m'$. If $b_s\to m'$, then $(i_0-1)'\to i_0\gets b_2 \cdots b_s\to m'\gets \cdots \gets (i_0-1)'$ is an ADH cycle in $T-\{(m-1)'\}$, where $m'\gets \cdots \gets (i_0-1)'$ is an ADH path in $T''-\{(m-1)'\}$. Thus, $T$ contains $Q^\pm(2k,1)$. Therefore, we have $m'\to b_s$. By Lemma \ref{lem-2-8}(b), there exists $z\in \{3',\ldots,(m-3)'\}$ such that $T''-\{(m-1)',m'\}$ has an ADH path $z\to \cdots \gets 2'$.
Thus, $2'\to i_0\gets b_2 \cdots b_{s-1}\gets t'\to b_s\gets z\to \cdots \gets 2'$ is an ADH cycle in $T-\{(m-1)'\}$. Hence $T$ contains $Q^\pm(2k,1)$. Next, let the ADH path be \[i_0\to b_2\gets \cdots\to b_s~(\{b_2,\ldots,b_s\}=V(T^*)\setminus\{v\}). \] Since $m$ is maximum, there is a vertex $t\in V(T'')$ such that $b_s\gets t$. If $t\notin\{(m-1)',m'\}$, then by Lemma \ref{lem-2-8}(a), one can choose a vertex $x\in \{(i_0+1)',\ldots,(m-2)'\}$ with $x\ne t$ such that $T''-\{m'\}$ has an ADH path $x\gets \cdots \gets t$, and thus $x\gets i_0\to b_2\cdots b_s\gets t\to \cdots\to x$ is an ADH cycle in $T-\{m'\}$. This shows that $T$ contains $Q^\pm(2k,1)$. Thus, we may assume that $\{1',\ldots,(m-2)'\}\gets b_s$. Let $t'\in V(T'')$ such that $t'\gets b_{s-1}$. If $t'\notin\{(m-1)',m'\}$, then there is an ADH path $x'\gets \cdots \to x$ in $T''-\{t',m'\}$ such that $x\in \{(i_0+1)',\ldots,(m-1)'\}$ and $x'\in \{1',\ldots,(m-2)'\}\setminus \{x,t'\}$. It follows that $x\gets i_0\to b_2 \cdots b_{s-1}\to t'\gets b_s\to x'\gets \cdots \to x$ is an ADH cycle in $T-\{m'\}$. Thus, $T$ contains $Q^\pm(2k,1)$. If $t'=(m-1)'$, then $y\gets i_0\to b_2 \cdots b_{s-1}\to (m-1)'\gets 1'\to 2'\gets b_s\to y'\gets \cdots \to y$, where $y'\gets \cdots\to y$ is an ADH path in $T''-\{1',2',(m-1)',m'\}$, is an ADH cycle in $T-\{m'\}$. Therefore, $T$ contains $Q^\pm(2k,1)$. If $t'=m'$, then we may assume that $\{1',2',\ldots,(m-1)'\}\to b_{s-1}$. It follows that $T[\{1',2',\ldots,(m-1)',b_{s-1},m'\}]$ is a transitive subtournament on $m+1$ vertices, which contradicts the maximality of $m$. \end{proof} By Lemma \ref{lem-2-6}(a), there is a double vertex in $T[\{a_1,\ldots,a_s,i_0\}]$. \vskip 2mm \noindent{\bf Subcase 2.1.} $i_0$ is not the double vertex. \vskip 2mm Assume that $a_1$ is a double vertex in $T[\{a_1,\ldots,a_s,i_0\}]$. Since $m\ge 9$, there exists $\{x,y\}\subseteq \{3,\ldots,i_0-1,i_0+1,\ldots,m-3\}$ such that $a_1\gets \{x,y\}$ or $a_1\to \{x,y\}$.
If $a_1\gets \{x,y\}$, then let $a_1\gets b_2\to \cdots \to b_{s+1}$ be an ADH path in $T[V(T^*)\cup\{i_0\}]$, where $\{b_2,\ldots,b_{s+1}\}=\{a_2,\ldots,a_s,i_0\}$. If $b_{s+1}=i_0$, then by Lemma \ref{lem-2-8}(a), there is an ADH path in $TT_m-\{m\}$ with $x$ as starting vertex and $i_0$ as terminating vertex. It follows that $x\to a_1\gets b_2\cdots b_{s+1}\gets \cdots \gets x$ is an ADH cycle in $T-\{m\}$, and so $T$ contains $Q^{\pm}(2k,1)$. Therefore, we may assume $b_{s+1}\ne i_0$. Since $m$ is maximum, there is a vertex $t\in V(TT_m)$ such that $b_{s+1}\gets t$. If $t\notin\{i_0,m-1,m\}$, we choose $x$ or $y$, say $x$, such that $x\ne t$ and $\{x,t\}\ne \{m-3,m-2\}$. By Lemma \ref{lem-2-8}(b), $TT_m-\{i_0,m\}$ has an ADH path with $x$ and $t$ as starting vertices, and thus $x\to a_1\gets b_2 \cdots b_{s+1}\gets t\to \cdots \gets x$ is an ADH cycle in $T-\{m\}$. This shows that $T$ contains $Q^\pm(2k,1)$, and so we may assume $b_{s+1}\to V(TT_m)\setminus \{i_0,m-1,m\}$. Consider the vertex $b_s$. If $b_s=i_0$, then $x\to a_1\gets b_2 \cdots b_s\to m-2\gets b_{s+1}\to z\gets \cdots \gets x$, where $z\gets \cdots \gets x$ is an ADH path in $TT_m-\{m,m-2,i_0\}$, is an ADH cycle in $T-\{m\}$. Hence $T$ contains $Q^\pm(2k,1)$, and so we may assume $b_s\ne i_0$. Since $m$ is maximum, there is a vertex $t_1\in V(TT_m)$ such that $b_s\to t_1$. By the symmetry of $x$ and $y$, assume that $t_1\ne x$. If $t_1\notin \{i_0,m-1,m\}$, then we can always find a vertex $t_2\in V(TT_m)\setminus\{t_1,x,i_0,m-1,m\}$ such that $TT_m-\{t_1,m,i_0\}$ has an ADH path $t_2\gets \cdots \gets x$ due to Lemma \ref{lem-2-8}(a). Hence $x\to a_1\gets b_2 \cdots b_s\to t_1\gets b_{s+1}\to t_2\gets \cdots \gets x$ is an ADH cycle in $T-\{m\}$, and so $T$ contains $Q^\pm(2k,1)$. Note that $T'=T[\{b_{s+1},1,\ldots,i_0-1,i_0+1,\ldots,m-2\}]$ is a transitive subtournament. If $t_1\in \{m-1,m\}$, then by Lemma \ref{lem-2-8}(a), $T'$ contains an ADH path $1\to \cdots \gets x$.
Hence $x\to a_1\gets b_2 \cdots b_s\to t_1\gets 1\to \cdots \gets x$ is an ADH cycle in $T-\{m\}$ or $T-\{m-1\}$, and so $T$ contains $Q^\pm(2k,1)$. Now, we may assume that $V(TT_m)\setminus \{i_0\}\to b_s$. By Claim \ref{cla-3-1}, the result follows. If $a_1\to \{x,y\}$, then let $a_1\to b_2\gets \cdots \gets b_{s+1}$ be an ADH path in $T[V(T^*)\cup\{i_0\}]$, where $\{b_2,\ldots,b_{s+1}\}=\{a_2,\ldots,a_s,i_0\}$. A similar argument yields that either $T$ contains $Q^\pm(2k,1)$ or there exists $v\in V(T^*)$ such that $v\to V(TT_m)\setminus \{i_0\}$. If the latter occurs, then by Claim \ref{cla-3-1}, $T$ also contains $Q^\pm(2k,1)$. \vskip 2mm \noindent{\bf Subcase 2.2.} $i_0$ is the double vertex. \vskip 2mm Let $i_0\gets a_1 \cdots a_{s-1}\to a_s$ be an ADH path in $T[V(T^*)\cup \{i_0\}]$. Since $m$ is maximum, let $t\in V(TT_m)$ be a vertex such that $t\to a_s$. If $t\notin \{i_0,m-1,m\}$, then by Lemma \ref{lem-2-8}(a), there exists an ADH path starting at $t$ and terminating at $i_0$ in $TT_m-\{m\}$. Hence $i_0\gets a_1 \cdots a_s\gets t \to \cdots \to i_0$ is an ADH cycle in $T-\{m\}$. It follows that $T$ contains $Q^\pm(2k,1)$. Thus we may assume $a_s\to V(TT_m)\setminus \{i_0,m-1,m\}$. Consider the vertex $a_{s-1}$. Let $t_1\in V(TT_m)$ such that $a_{s-1}\to t_1$. If $t_1\notin \{i_0,m-1,m\}$, then by Lemma \ref{lem-2-8}(b) and the choice of $i_0$, we can find a suitable vertex $t_2\in V(TT_m)\setminus \{t_1,i_0,m-1,m\}$ such that $t_2\gets \cdots \to i_0$ is an ADH path in $TT_m-\{t_1,m\}$. Hence $i_0\gets a_1 \cdots a_{s-1}\to t_1\gets a_s\to t_2\gets \cdots \to i_0$ is an ADH cycle in $T-\{m\}$. It follows that $T$ contains $Q^\pm(2k,1)$. Note that $T'=T[\{a_s,1,\ldots,i_0-1,i_0+1,\ldots,m-2\}]$ is a transitive subtournament on $m-2$ vertices. If $t_1\in \{m-1,m\}$, then there is a suitable vertex $z\in V(T')$ such that $1\to \cdots \gets z$ is an ADH path in $T'$ and $z\to i_0$.
Hence $i_0\gets a_1 \cdots a_{s-1}\to t_1\gets 1\to \cdots\gets z\to i_0$ is an ADH cycle in $T-\{m\}$ or $T-\{m-1\}$. Since $\{1,\ldots,m-2\}\to \{m-1,m\}\setminus \{t_1\}$, $T$ contains $Q^\pm(2k,1)$. Now, we may assume $V(TT_m)\setminus \{i_0\}\to a_{s-1}$. By Claim \ref{cla-3-1}, $T$ contains $Q^\pm(2k,1)$. \end{proof} \section{Proof of Theorem \ref{thm-5}} We first show that $\rbjt{r}(CP(2,2;n-4))\ge n+1$. Let $T=T_4^* \to T_{n-4}$, where $T_4^*=\vec{C}_3\to T_1$ or $T_1\to \vec{C}_3$. We claim that $T$ contains no $CP(2,2;n-4)$. Suppose to the contrary that $T$ contains a copy of $CP(2,2;n-4)$, denoted by $H$. Let $E(H)=\{(v_1,v_2),(v_1,v_3),(v_2,v_4),(v_3,v_4)\}\cup \{(v_i,v_{i+1}): 4\le i\le n-1\}$. Note that $d_H^+(v_i)=1$ for $2\le i\le n-1$. Let $i_0$ be the smallest index such that $v_j\in V(T_{n-4})$ for any $j\ge i_0$. Obviously, $i_0\ge 5$. By the construction of $T$, we have $\{v_1,v_2,\ldots,v_{i_0-1}\}\subseteq V(T_4^*)$. It implies that $i_0=5$. However, $T_4^*$ contains no $C(2,2)$. Therefore, $H\not \cong CP(2,2;n-4)$, a contradiction. Now, we proceed to prove $\rbjt{r}(CP(2,2;n-4))=n+1$ by induction on $n$. The following lemma establishes the case $n=4$ and serves as the induction basis. \begin{lemma}\label{lem-6} Every tournament $T_5$ contains $C(2,2)$. \end{lemma} \begin{proof} Suppose to the contrary that there exists a tournament $T_5$ containing no $C(2,2)$. Let $V(T_5)=\{u_1,\ldots,u_5\}$. Note that $T_5$ contains a copy of a transitive triangle, say $C$, with $E(C)=\{(u_1,u_2),(u_2,u_3),(u_1,u_3)\}$. Assume without loss of generality that $u_4\to u_5$. Suppose that $u_4\to u_1$. If $u_5\to u_2$ or $u_5\to u_3$, then $u_1u_2u_5u_4u_1$ or $u_1u_3u_5u_4u_1$ is a $C(2,2)$, and if $u_2\to u_5$ and $u_3\to u_5$, then $u_1u_2u_5u_3u_1$ is a $C(2,2)$, a contradiction. Thus we may assume that $u_1\to u_4$. If $u_4\to u_3$, then $u_1u_2u_3u_4u_1$ is a $C(2,2)$ and so we have $u_3\to u_4$.
If $u_2\to u_4$, then $u_1u_2u_4u_3u_1$ is a $C(2,2)$ and hence $u_4\to u_2$. Now, if $u_3\to u_5$, then $u_1u_3u_5u_4u_1$ is a $C(2,2)$, and if $u_5\to u_3$, then $u_2u_3u_5u_4u_2$ is a $C(2,2)$, a final contradiction. Therefore, the result follows. \end{proof} \begin{proof}[\bfseries{Proof of Theorem \ref{thm-5}}] For $n=4$, the assertion follows from Lemma \ref{lem-6}. Assume now $n\ge 5$ and that the assertion holds for $n-1$. By the induction hypothesis, for any tournament $T_{n+1}$, its subtournament $T_n$ contains a copy of $CP(2,2;n-5)$, say $H$. Let $V(H)=\{v_1,\ldots,v_{n-1}\}$, $E(H)=\{(v_1,v_2),(v_1,v_3),(v_2,v_4),(v_3,v_4)\}\cup \{(v_i,v_{i+1}): 4\le i\le n-2\}$ and $V(T_{n+1})\setminus V(H)=\{v_n,v_{n+1}\}$. Assume without loss of generality that $v_2\to v_3$ and $v_n\to v_{n+1}$. In the following proof, suppose to the contrary that $T_{n+1}$ contains no $CP(2,2;n-4)$. \begin{claim}\label{cla-1} For any $v\in \{v_n,v_{n+1}\}$, $v\to \{v_4,v_5,\ldots,v_{n-1}\}$. \end{claim} \begin{proof} It is easy to see that $v\to v_{n-1}$; otherwise $E(H)\cup \{(v_{n-1},v)\}$ produces a copy of $CP(2,2;n-4)$. If there exists some $i$ with $4\le i\le n-2$ such that $v_i\to v$, denote by $i_0$ the largest such $i$; then $v\to v_{i_0+1}$, and thus we obtain a directed outpath $(v_4,v_5,\ldots,v_{i_0},v,v_{i_0+1},\ldots,v_{n-1})$. It follows that $T_{n+1}[V(H)\cup \{v\}]$ contains $CP(2,2;n-4)$, a contradiction. Hence, $v\to \{v_4,v_5,\ldots,v_{n-1}\}$. \end{proof} \begin{claim}\label{cla-2} For any $v\in \{v_n,v_{n+1}\}$, if $v_1\to v$, then $v_3\to v$; and if $v\to v_1$, then $v_2\to v$. \end{claim} \begin{proof} If $v_1\to v$ and $v\to v_3$, then we can see that $v_1vv_3v_2v_1$ is a $C(2,2)$ in which $v_3$ has out-degree 0, and $(v_3,v_4,\ldots,v_{n-1})$ is a directed outpath, which implies that $T_{n+1}$ contains a copy of $CP(2,2;n-4)$, a contradiction.
If $v\to v_1$ and $v\to v_2$, then $vv_1v_3v_2v$ is a $C(2,2)$ in which $v_3$ has out-degree 0 and $(v_3,v_4,\ldots,v_{n-1})$ is a directed outpath, which implies that $T_{n+1}$ contains a copy of $CP(2,2;n-4)$, a contradiction. \end{proof} We consider the following three cases separately. {\flushleft\bf Case 1.} $v_1\to \{v_n,v_{n+1}\}$. \vskip 2mm In this case, we have $v_3\to \{v_n,v_{n+1}\}$ by Claim \ref{cla-2}. Thus, $v_1v_nv_{n+1}v_3v_1$ is a $C(2,2)$ in which $v_{n+1}$ has out-degree 0. By Claim \ref{cla-1}, $v_{n+1}\to v_4$ and so $(v_{n+1},v_4,v_5,\ldots,v_{n-1})$ is a directed outpath of length $n-4$. Therefore, $T_{n+1}$ contains a copy of $CP(2,2;n-4)$, a contradiction. {\flushleft\bf Case 2.} $v_1\to v_n$ and $v_{n+1}\to v_1$. \vskip 2mm In this case, we have $v_2\to v_{n+1}$ by Claim \ref{cla-2}. Thus, $v_1v_2v_{n+1}v_nv_1$ is a $C(2,2)$ in which $v_{n+1}$ has out-degree 0. By Claim \ref{cla-1}, $v_{n+1}\to v_4$ and so $(v_{n+1},v_4,v_5,\ldots,v_{n-1})$ is a directed outpath of length $n-4$. Therefore, $T_{n+1}$ contains a copy of $CP(2,2;n-4)$, a contradiction. {\flushleft\bf Case 3.} $v_n\to v_1$. \vskip 2mm By Claim \ref{cla-2}, we have $v_2\to v_n$. Thus, $v_2v_3v_{n+1}v_nv_2$ is a $C(2,2)$ in which $v_{n+1}$ has out-degree 0 if $v_3\to v_{n+1}$, and $v_1v_3v_{n+1}v_nv_1$ is a $C(2,2)$ in which $v_3$ has out-degree 0 if $v_{n+1}\to v_3$. By Claim \ref{cla-1}, $(v_{n+1},v_4,v_5,\ldots,v_{n-1})$ is a directed outpath of length $n-4$. Clearly, $(v_3,v_4,\ldots,v_{n-1})$ is also a directed outpath of length $n-4$. Therefore, $T_{n+1}$ always contains a copy of $CP(2,2;n-4)$, a contradiction. \end{proof} \section{Concluding remarks} In this paper, we show that $\rbjt{r}(Q^\pm(2k,\ell))=\rbjt{r}(AC_{2k})+\ell$ for $k\ge 54$ and $\rbjt{r}(CP(2,2;n-4))=\rbjt{r}(C(2,2))+n-4$. In light of S\'os' conjecture, we pose the following problem. \begin{problem} Is it true that $\rbjt{r}(CP(p,q;n-p-q))=\rbjt{r}(C(p,q))+n-p-q$?
More generally, does the oriented Ramsey number of the oriented graph obtained by identifying the origin of a directed path with a vertex of a non-directed cycle behave like that of the non-directed cycle? \end{problem} Additionally, we establish an upper bound for $\rbjt{r}(ACA^\pm(2k;\ell,a))$. However, we do not know its exact value. The following problem remains open. \begin{problem} Determine $\rbjt{r}(ACA^\pm(2k;\ell,a))$. \end{problem} \section*{Declarations} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \section*{\bf\Large Availability of Data and Materials} Not applicable. \section*{Acknowledgments} This research was supported by National Key R\&D Program of China under grant number 2024YFA1013900, NSFC under grant number 12471327, and Postgraduate Research \& Practice Innovation Program of Jiangsu Province under grant number KYCX24\_0124. {\small \begin{thebibliography}{xx} \small \setlength{\itemsep}{-.10mm} \bibitem{Alspach} B. Alspach, M. Rosenfeld, Realization of certain generalized paths in tournaments, Discrete Math. 34 (1981) 199--202. \bibitem{Dross} F. Dross, F. Havet, On the unavoidability of oriented trees, J. Combin. Theory Ser. B 151 (2021) 83--110. \bibitem{Forcade} R. Forcade, Parity of paths and circuits in tournaments, Discrete Math. 6 (1973) 115--118. \bibitem{Grunbaum} B. Grünbaum, Antidirected Hamiltonian paths in tournaments, J. Combin. Theory Ser. B 11 (1971) 249--257. \bibitem{Havet1} F. Havet, Oriented Hamiltonian cycles in tournaments, J. Combin. Theory Ser. B 80 (2000) 1--31. \bibitem{Havet} F. Havet, S. Thomassé, Oriented Hamiltonian paths in tournaments: a proof of Rosenfeld's conjecture, J. Combin. Theory Ser. B 78 (2000) 243--273. \bibitem{Petrovic} V. Petrović, Some unavoidable subdigraphs of tournaments, J. Graph Theory 12 (1988) 317--325. \bibitem{Redei} L.
Rédei, Ein kombinatorischer Satz, Acta Litt. Sci. Szeged 7 (1934) 39--43. \bibitem{Rosenfeld} M. Rosenfeld, Antidirected Hamiltonian paths in tournaments, J. Combin. Theory Ser. B 12 (1971) 93--99. \bibitem{Rosenfeld2} M. Rosenfeld, Antidirected Hamiltonian circuits in tournaments, J. Combin. Theory Ser. B 16 (1974) 234--242. \bibitem{Sanchez} A. Sanchez-Flores, On tournaments free of large transitive subtournaments, Graphs Combin. 14 (1998) 181--200. \bibitem{Straight} H. Straight, The existence of certain type of semi-walks in tournaments, Congr. Numer. 29 (1980) 901--908. \bibitem{Thomason} A. Thomason, Paths and cycles in tournaments, Trans. Amer. Math. Soc. 296 (1986) 167--180. \bibitem{Zein} A. E. Zein, Oriented Hamiltonian cycles in tournaments: a proof of Rosenfeld's conjecture, arXiv preprint, arXiv:2204.11211. \end{thebibliography} } \end{document}
2412.17480v1
http://arxiv.org/abs/2412.17480v1
Width bounds and Steinhaus property for unit groups of continuous rings
\pdfoutput=1 \documentclass[11pt,a4paper,reqno]{amsart} \renewcommand{\baselinestretch}{1.1} \usepackage[british]{babel} \usepackage[DIV=9,oneside,BCOR=0mm]{typearea} \usepackage{microtype} \usepackage[T1]{fontenc} \usepackage{setspace} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{enumitem} \usepackage{mathrsfs} \usepackage[colorlinks,citecolor=blue,urlcolor=black,linkcolor=blue,linktocpage]{hyperref} \usepackage{mathrsfs} \usepackage{mathtools} \usepackage{xcolor} \usepackage{xfrac} \usepackage{multicol} \usepackage{ifsym} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{problem}[thm]{Problem} \newtheorem*{claim}{Claim} \newtheorem*{fact}{Fact} \theoremstyle{definition} \newtheorem{exmpl}[thm]{Example} \newtheorem{definition}[thm]{Definition} \newtheorem{remark}[thm]{Remark} \let\uglyphi\phi \let\uglyepsilon\epsilon \renewcommand{\epsilon}{\varepsilon} \renewcommand{\phi}{\varphi} \newcommand{\defeq}{\mathrel{\mathop:}=} \newcommand{\eqdef}{\mathrel{\mathopen={\mathclose:}}} \renewcommand{\theenumi}{\textnormal{(\Alph{enumi})}} \renewcommand{\labelenumi}{\theenumi} \DeclareMathOperator{\free}{F} \DeclareMathOperator{\I}{I} \DeclareMathOperator{\lat}{L} \DeclareMathOperator{\latop}{L^{\!{}^{{}_\circlearrowleft}}\!} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\cent}{Z} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\im}{im} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\rAnn}{Ann_r} \DeclareMathOperator{\lAnn}{Ann_l} \DeclareMathOperator{\ZZ}{Z} \DeclareMathOperator{\C}{\mathbb{C}} \DeclareMathOperator{\R}{\mathbb{R}} \DeclareMathOperator{\Q}{\mathbb{Q}} \DeclareMathOperator{\Z}{\mathbb{Z}} \DeclareMathOperator{\N}{\mathbb{N}} \DeclareMathOperator{\T}{\mathbb{T}} \DeclareMathOperator{\Pfin}{\mathcal{P}_{fin}} \DeclareMathOperator{\rk}{rk} 
\DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\cha}{char} \DeclareMathOperator{\Nil}{Nil} \DeclareMathOperator{\M}{M} \DeclareMathOperator{\U}{U} \DeclareMathOperator{\Homeo}{Homeo} \makeatletter \def\moverlay{\mathpalette\mov@rlay} \def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen \ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}} \newcommand{\charfusion}[3][\mathord]{ \mathpalette\mov@rlay{#2\cr#3} } \def\smallunderbrace#1{\mathop{\vtop{\m@th\ialign{##\crcr $\hfil\displaystyle{#1}\hfil$\crcr \noalign{\kern3\p@\nointerlineskip} \tiny\upbracefill\crcr\noalign{\kern3\p@}}}}\limits} \makeatother \newcommand{\cupdot}{\charfusion[\mathbin]{\cup}{\cdot}} \newcommand{\bigcupdot}{\charfusion[\mathop]{\bigcup}{\boldsymbol{\cdot}}} \newcommand{\veedot}{\charfusion[\mathbin]{\vee}{\cdot}} \newcommand{\bigveedot}{\charfusion[\mathop]{\bigvee}{\boldsymbol{\cdot}}} \begin{document} \author{Josefin Bernard} \address{J.B., Institute of Discrete Mathematics and Algebra, TU Bergakademie Freiberg, 09596 Freiberg, Germany} \email{[email protected]} \author{Friedrich Martin Schneider} \address{F.M.S., Institute of Discrete Mathematics and Algebra, TU Bergakademie Freiberg, 09596 Freiberg, Germany} \email{[email protected]} \title[Width bounds and Steinhaus property]{Width bounds and Steinhaus property for\\ unit groups of continuous rings} \date{\today} \begin{abstract} We prove an algebraic decomposition theorem for the unit group $\GL(R)$ of an arbitrary non-discrete irreducible, continuous ring $R$ (in von Neumann's sense), which entails that every element of $\GL(R)$ is both a product of $7$ commutators and a product of $16$ involutions. Combining this with further insights into the geometry of involutions, we deduce that $\GL(R)$ has the so-called Steinhaus property with respect to the natural rank topology; thus, every homomorphism from $\GL(R)$ to a separable topological group is necessarily continuous.
Due to earlier work, this has further dynamical ramifications: for instance, for every action of $\GL(R)$ by homeomorphisms on a non-void metrizable compact space, every element of $\GL(R)$ admits a fixed point in the latter. In particular, our results answer two questions by Carderi and Thom, even in generalized form. \end{abstract} \subjclass[2020]{22A05, 37B02, 06C20, 16E50} \keywords{Continuous ring, involution width, commutator width, verbal width, perfect group, topological group, Steinhaus property, automatic continuity} \maketitle \tableofcontents \section{Introduction}\label{section:introduction} The topic of the present paper is the unit group $\GL(R)$, i.e., the group of invertible elements, of an arbitrary irreducible, continuous ring $R$. The study of such rings was initiated and profoundly advanced by John von Neumann in his fundamental work on \emph{continuous geometry}~\cite{VonNeumannBook}, a continuous-dimensional counterpart of finite-dimensional projective geometry. Inspired by conversations with Garrett Birkhoff on the lattice-theoretic formulation of projective geometry~\cite{BirkhoffBulletin}, von Neumann introduced \emph{continuous geometries} as complete, complemented, modular lattices possessing a certain continuity property, and he established a distinguished algebraic representation for these objects through his \emph{coordinatization theorem}~\cite[II.XIV, Theorem~14.1, p.~208]{VonNeumannBook}: for any complemented modular lattice $L$ of order at least $4$ there exists a regular ring $R$, unique up to isomorphism, such that $L$ is isomorphic to the lattice $\lat(R)$ of principal right ideals of~$R$. Many algebraic properties and constructions translate conveniently via the resulting correspondence. For instance, a regular ring~$R$ is (directly) irreducible if and only if the lattice $\lat(R)$ is. A \emph{continuous ring} is a regular ring $R$ whose associated lattice $\lat(R)$ is a continuous geometry.
Another deep result due to von Neumann~\cite{VonNeumannBook} asserts that every irreducible, continuous ring $R$ admits a unique \emph{rank function}---naturally corresponding to a unique \emph{dimension function} on the continuous geometry $\lat(R)$. This rank function gives rise to a complete metric on $R$, and the resulting topology, which we call the \emph{rank topology}, turns $R$ into a topological ring. While an irreducible, continuous ring is \emph{discrete} with respect to its rank topology if and only if it is isomorphic to a finite-dimensional matrix ring over some division ring (see Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete}), the class of \emph{non-discrete} irreducible, continuous rings appears ineffably large. In addition to the original example given by the ring of densely defined, closed, linear operators affiliated with an arbitrary $\mathrm{II}_{1}$ factor~\cite{MurrayVonNeumann}, such objects have emerged in the study of Kaplansky's direct finiteness conjecture~\cite{ElekSzabo,linnell} and the Atiyah conjecture~\cite{LinnellSchick,elek2013,elek}. As discovered already by von Neumann~\cite{NeumannExamples}, a rather elementary example may be constructed from any field $K$ via the following procedure. Consider the inductive limit of the matrix rings \begin{displaymath} K \, \cong \, \M_{2^{0}}(K) \, \lhook\joinrel\longrightarrow \, \ldots \, \lhook\joinrel\longrightarrow \, \M_{2^{n}}(K) \, \lhook\joinrel\longrightarrow \, \M_{2^{n+1}}(K) \, \lhook\joinrel\longrightarrow \, \ldots \end{displaymath} relative to the embeddings \begin{displaymath} \M_{2^n}(K)\,\lhook\joinrel\longrightarrow \, \M_{2^{n+1}}(K), \quad a\,\longmapsto\, \begin{pmatrix} a & 0\\ 0 & a \end{pmatrix} \qquad (n \in \N) .
\end{displaymath} Those embeddings are isometric with respect to the normalized rank metrics \begin{displaymath} \M_{2^{n}}(K) \times \M_{2^{n}}(K) \, \longrightarrow \, [0,1], \quad (a,b) \, \longmapsto \, \tfrac{\rank(a-b)}{2^{n}} \qquad (n \in \N) , \end{displaymath} whence the latter jointly extend to a metric on the inductive limit. The corresponding metric completion, which we denote by $\M_{\infty}(K)$, is a non-discrete irreducible, continuous ring~\cite{NeumannExamples,Halperin68}. The paper at hand explores algebraic and dynamical properties of the group $\GL(R)$ for an arbitrary non-discrete irreducible, continuous ring $R$. The center $\ZZ(R)$ of such a ring constitutes a field, and viewing $R$ as a unital $\ZZ(R)$-algebra naturally leads us to considering matricial subalgebras of $R$ and studying \emph{simply special} and \emph{locally special} elements of $\GL(R)$ (see Definition~\ref{definition:simply.special} and Definition~\ref{definition:locally.special} for details). Our first main result is the following uniform decomposition theorem. \begin{thm}[Theorem~\ref{theorem:decomposition}]\label{theorem:decomposition.introduction} Let $R$ be a non-discrete irreducible, continuous ring. Every element $a\in\GL(R)$ admits a decomposition \begin{displaymath} a \, = \, bu_{1}v_{1}v_{2}u_{2}v_{3}v_{4} \end{displaymath} where \begin{enumerate} \item[---\,] $b \in \GL(R)$ is simply special, \item[---\,] $u_{1},u_{2} \in \GL(R)$ are locally special, \item[---\,] $v_{1},v_{2},v_{3},v_{4} \in \GL(R)$ are locally special involutions. \end{enumerate} \end{thm} The proof of Theorem~\ref{theorem:decomposition.introduction} combines a decomposition argument inspired by Fathi's work on algebraic properties of the group $\Aut([0,1],\lambda)$~\cite{Fathi} with our Corollary~\ref{corollary:simply.special.dense}, which establishes density of the set of simply special elements in the unit group of any irreducible, continuous ring $R$. 
The latter result rests on two pillars: the work of von Neumann~\cite{VonNeumann37} and Halperin~\cite{Halperin62} about the density of the set of \emph{algebraic} elements in $R$, and our characterization of such elements as those contained in some matricial subalgebra of $R$ (Theorem~\ref{theorem:matrixrepresentation.case.algebraic}). Thanks to results of Gustafson, Halmos and Radjavi~\cite{GustafsonHalmosRadjavi76}, Thompson~\cite{Thompson61,Thompson62,ThompsonPortugaliae}, and Borel~\cite{Borel}, our Theorem~\ref{theorem:decomposition.introduction} has some notable consequences. For a group $G$ and a natural number $m$, let us recall that any map $g \in G^{m}$ naturally extends to a unique homomorphism $\free(m) \to G, \, w \mapsto w(g)$ from the free group $\free(m)$ on $m$ generators to~$G$, and let us define $w(G) \defeq \{ w(g) \mid g \in G^{m} \}$. A group $G$ is said to be \emph{verbally simple} if, for every $m \in \N$ and every $w \in \free(m)$, the subgroup of $G$ generated by~$w(G)$ is trivial or coincides with $G$. \begin{cor}[Theorem~\ref{theorem:width}]\label{corollary:width.introduction} Let $R$ be a non-discrete irreducible, continuous ring. \begin{enumerate} \item\label{corollary:width.introduction.involutions} Every element of $\GL(R)$ is a product of $16$ involutions. \item\label{corollary:width.introduction.commutators} Every element of $\GL(R)$ is a product of $7$ commutators. In particular, $\GL(R)$ is perfect. \item\label{corollary:width.introduction.word} Suppose that $\ZZ(R)$ is algebraically closed. For all $m \in \N$ and $w \in \free(m)\setminus \{ \epsilon \}$, \begin{displaymath} \qquad \GL(R) \, = \, w(\GL(R))^{14} . \end{displaymath} In particular, $\GL(R)$ is verbally simple. 
\end{enumerate} \end{cor} In the language of~\cite{Liebeck}, the corollary above establishes uniform finite upper bounds for \emph{involution width}, \emph{commutator width}, and \emph{$w$-width} for any non-trivial reduced word $w$ in finitely many letters and their inverses. For every non-discrete irreducible, continuous ring $R$, the commutator subgroup of $\GL(R)$ had already been known to be dense with respect to the rank topology, due to Smith's work~\cite[Theorems~1 and~2]{Smith57}. Our algebraic results have non-trivial dynamical ramifications. To be more precise, we recall that a topological group $G$ has~\emph{automatic continuity}~\cite{KechrisRosendal} if every homomorphism from $G$ to any separable topological group is continuous. Examples of topological groups with this property include the automorphism group of the ordered rational numbers endowed with the topology of pointwise convergence~\cite{RosendalSolecki}, the unitary group of an infinite-dimensional separable Hilbert space equipped with the strong operator topology~\cite{Tsankov}, and the full group of an ergodic measure-preserving countable equivalence relation furnished with the uniform topology~\cite{KittrellTsankov}. The reader is referred to~\cite{RosendalSuarez} for a recent survey on this subject. One route towards automatic continuity is provided by the Steinhaus property: given some $n \in \N$, a topological group $G$ is called \emph{$n$-Steinhaus}~\cite[Definition~1]{RosendalSolecki} if, for every symmetric and countably syndetic\footnote{A subset $W$ of a group $G$ is called \emph{countably syndetic} if there exists a countable subset $C\subseteq G$ such that $G=CW$.} subset $W\subseteq G$, the set $W^{n}$ is an identity neighborhood in $G$. Equipped with the associated rank topology, the unit group of any irreducible, continuous ring constitutes a topological group.
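Let us briefly recall from~\cite{RosendalSolecki} how the Steinhaus property entails automatic continuity. Suppose that $G$ is $n$-Steinhaus, let $\phi \colon G \to H$ be a homomorphism to a separable topological group $H$, and let $V \subseteq H$ be a symmetric identity neighborhood. Choosing a symmetric identity neighborhood $U \subseteq H$ with $UU \subseteq V$, a countable dense subset $D \subseteq H$, and, for each $d \in D$ with $dU \cap \phi(G) \neq \emptyset$, some $c_{d} \in G$ with $\phi(c_{d}) \in dU$, one readily checks that \begin{displaymath} G \, = \, \{ c_{d} \mid d \in D \} \cdot \phi^{-1}(V) , \end{displaymath} i.e., the symmetric set $W \defeq \phi^{-1}(V)$ is countably syndetic. Hence, $W^{n}$ is an identity neighborhood in $G$ contained in $\phi^{-1}(V^{n})$, and since the sets of the form $V^{n}$ constitute a neighborhood basis at the identity of $H$ as $V$ varies, $\phi$ is continuous.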
Using Corollary~\ref{corollary:width.introduction}\ref{corollary:width.introduction.involutions} along with further insights into the geometry of involutions, we establish the Steinhaus property for all of these topological groups in a uniform way. \begin{thm}[Theorem~\ref{theorem:194-Steinhaus}]\label{theorem:194-Steinhaus.introduction} Let $R$ be an irreducible, continuous ring. Then the topological group $\GL(R)$ is $194$-Steinhaus. In particular, $\GL(R)$ has automatic continuity. \end{thm} Since a Polish topological group with automatic continuity admits a unique Polish group topology, we obtain the following. \begin{cor}\label{corollary:unique.polish.topology} Let $R$ be an irreducible, continuous ring. If $R$ is separable with respect to the rank topology\footnote{equivalently, if $\GL(R)$ is separable with respect to the rank topology (Remark~\ref{remark:separable})}, then the latter restricts to the unique Polish group topology on $\GL(R)$. \end{cor} In~\cite{CarderiThom}, Carderi and Thom asked, given any finite field $K$, whether $\GL(\M_{\infty}(K))$ would have a unique Polish topology, or even automatic continuity. In particular, Theorem~\ref{theorem:194-Steinhaus.introduction} and Corollary~\ref{corollary:unique.polish.topology} provide affirmative answers to both of these questions, even in generalized form. Moreover, in combination with results of~\cite{SchneiderGAFA} and~\cite{CarderiThom}, our Theorem~\ref{theorem:194-Steinhaus.introduction} turns out to have non-trivial dynamical consequences for the underlying abstract groups, too. \begin{cor}\label{corollary:inert} Let $R$ be a non-discrete irreducible, continuous ring. For every action of $\GL(R)$ by homeomorphisms on a non-empty metrizable compact space $X$ and every $g\in\GL(R)$, there exists $x\in X$ with $gx=x$. \end{cor} \begin{cor}\label{corollary:fixpoint} Let $K$ be a finite field and let $R \defeq \M_{\infty}(K)$. 
\begin{enumerate} \item\label{corollary:fixpoint.exotic} Every unitary representation of $\GL(R)$ on a separable Hilbert space is trivial. \item\label{corollary:fixpoint.extremely.amenable} Every action of $\GL(R)$ by homeomorphisms on a non-empty metrizable compact space has a fixed point. \end{enumerate} \end{cor} The work of Carderi and Thom~\cite{CarderiThom} establishes that, for any finite field $K$, the topological group $\GL(\M_{\infty}(K))$ is \emph{exotic}, i.e., it does not admit any non-trivial strongly continuous unitary representation. Every strongly continuous unitary representation $\pi$ of a separable topological group (such as $\GL(\M_{\infty}(K))$ for any countable field $K$) has the property that every element of the underlying Hilbert space $H$ is contained in some separable closed $\pi$-invariant linear subspace of $H$. Therefore, Corollary~\ref{corollary:fixpoint}\ref{corollary:fixpoint.exotic} strengthens the exoticness result of~\cite{CarderiThom}. This article is organized as follows. The first three sections provide some necessary preliminaries concerning continuous geometries (Section~\ref{section:continuous.geometries}), regular rings and their principal ideal lattices (Section~\ref{section:regular.rings}), as well as continuous rings and their rank functions (Section~\ref{section:continuous.rings.and.rank.functions}). In Section~\ref{section:subgroups.of.the.form.Gamma_e(R)}, we introduce and describe a natural family of subgroups of the unit group of a unital ring induced by idempotent ring elements. The subsequent Section~\ref{section:geometry.of.involutions} is dedicated to the study of involutions in irreducible, continuous rings, their conjugacy classes, and further structural properties. In Section~\ref{section:dynamical.independence}, we revisit an argument due to von Neumann~\cite{NeumannExamples} and Halperin~\cite{Halperin62} and extract from it a notion of dynamical independence for ideals. 
The latter constitutes a key concept in our analysis of algebraic elements pursued in Section~\ref{section:algebraic.elements}. These insights are then put to use for proving density of simply matricial elements in any irreducible, continuous ring (Section~\ref{section:approximation.by.matrix.algebras}), establishing our main decomposition theorem for the unit group of such a ring (Section~\ref{section:decomposition.into.locally.special.elements}), and deducing the Steinhaus property for the associated rank topology (Section~\ref{section:steinhaus.property}). The resulting Corollaries~\ref{corollary:unique.polish.topology}, \ref{corollary:inert} and \ref{corollary:fixpoint} are proved in the final Section~\ref{section:proofs.of.corollaries}. \section{Continuous geometries}\label{section:continuous.geometries} In this section, we give a brief overview of the basic concepts related to von Neumann's continuous geometry~\cite{VonNeumannBook}. For a start, let us recall some order-theoretic terminology. A \emph{lattice} is a partially ordered set $L$ in which every pair of elements $x,y\in L$ admits both a (necessarily unique) supremum $x\vee y \in L$ and a (necessarily unique) infimum $x\wedge y \in L$. Equivalently, a lattice may be defined as an algebraic structure carrying two commutative, associative binary operations satisfying the two absorption laws, i.e., $x \vee (x \wedge y) = x$ and $x \wedge (x \vee y) = x$ for all elements $x$ and $y$. We will freely use aspects of both of these definitions.
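For instance, the power set of any set $X$, partially ordered by inclusion, constitutes a lattice with $A \vee B = A \cup B$ and $A \wedge B = A \cap B$ for all $A,B \subseteq X$. Conversely, given a lattice in the algebraic sense, the partial order may be recovered by declaring $x \leq y$ if and only if $x \wedge y = x$, or equivalently, $x \vee y = y$.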
Moreover, a lattice $L$ is called \begin{enumerate} \item[---\,] \emph{complete} if for every $S \subseteq L$ there is a (necessarily unique) supremum $\bigvee S \in L$, or equivalently, if for every subset $S \subseteq L$ there exists a (necessarily unique) infimum $\bigwedge S \in L$, \item[---\,] \emph{(directly) irreducible} if $\vert L \vert \geq 2$ and $L$ is not isomorphic to a direct product of two lattices of cardinality at least 2, \item[---\,] \emph{bounded} if $L$ has a (necessarily unique) least element $0\in L$ and a (necessarily unique) greatest element $1\in L$, \item[---\,] \emph{complemented} if $L$ is bounded and \begin{displaymath} \qquad \forall x\in L\ \exists y\in L\colon \quad x\wedge y=0,\ \, x\vee y=1, \end{displaymath} \item[---\,]\emph{modular} if \begin{displaymath} \qquad \forall x,y,z\in L\colon\quad x\leq y\ \Longrightarrow\ x\vee (y\wedge z)=y\wedge (x\vee z). \end{displaymath} \end{enumerate} A \emph{continuous geometry} is a complete, complemented, modular lattice $L$ such that, for every chain\footnote{A subset $C$ of a partially ordered set $P$ is called a \emph{chain} if the restriction of the order of $P$ to $C$ is a linear order on $C$.} $C\subseteq L$ and every element $x \in L$, \begin{displaymath} x\wedge \bigvee C \, = \, \bigvee\{x\wedge y\mid y\in C\},\qquad x\vee\bigwedge C \, = \, \bigwedge\{x\vee y\mid y\in C\}. \end{displaymath} One of the main results of von Neumann's work~\cite{VonNeumannBook} about irreducible continuous geometries asserts the existence of a unique dimension function on such (Theorem~\ref{theorem:dimension.function.lattice}). Before providing details on this, let us record two general facts about modular lattices. \begin{lem}\label{lemma:complement} Let $L$ be a complemented, modular lattice. If $x,y,z \in L$ satisfy $x \wedge y = 0$ and $x \vee y \leq z$, then there exists $x' \in L$ with $x \leq x'$, $x' \wedge y = 0$ and $x' \vee y = z$. 
\end{lem} \begin{proof} Let $x,y,z \in L$ with $x \wedge y = 0$ and $x \vee y \leq z$. Consider $y' \defeq x \vee y$ and note that $x \leq y' \leq z$. By~\cite[I.I, Theorem~1.3, p.~5]{VonNeumannBook} (see also \cite[VIII.1, Theorem~1, p.~114]{BirkhoffBook}), there exists $x' \in L$ such that $x' \wedge y' = x$ and $x' \vee y' = z$. Then $x = x' \wedge y' \leq x'$ and $x' \vee y = z$. Moreover, $x' \wedge y = x' \wedge (y' \wedge y) = (x' \wedge y') \wedge y = x \wedge y = 0$. \end{proof} \begin{remark}\label{remark:independence.equivalence.intersection} Let $L$ be a bounded lattice, let $n\in\N$ and $(x_{1},\dots,x_{n})\in L^{n}$. Then $(x_{1},\dots,x_{n})$ is said to be \emph{independent} in $L$ and we write $(x_{1},\dots,x_{n})\perp$ if \begin{displaymath} \forall I,J \subseteq \{ 1,\ldots,n\} \colon \quad I\cap J=\emptyset \ \Longrightarrow \ \left(\bigvee\nolimits_{i\in I}x_{i}\right)\!\wedge\! \left(\bigvee\nolimits_{j\in J}x_{j}\right)\! =0 . \end{displaymath} If $L$ is modular, then~\cite[I.II, Theorem~2.2, p.~9]{VonNeumannBook} (see also~\cite[I.1, Satz 1.8, p.~13]{MaedaBook}) asserts that \begin{displaymath} (x_{1},\ldots,x_{n}) \perp \quad \Longleftrightarrow \quad \forall i\in\{1,\ldots,n-1\}\colon \ (x_{1} \vee \ldots \vee x_{i}) \wedge x_{i+1} = 0. \end{displaymath} \end{remark} In order to clarify some terminology needed for Theorem~\ref{theorem:dimension.function.lattice}, let $L$ be a bounded lattice. A \emph{pseudo-dimension function} on $L$ is a map $\delta \colon L\to [0,1]$ such that \begin{enumerate} \item[---\,] $\delta(0)=0$ and $\delta(1)=1$, \item[---\,] $\delta$ is \emph{monotone}, i.e., \begin{displaymath} \qquad \forall x,y \in L \colon \quad x\leq y \ \Longrightarrow \ \delta(x)\leq \delta(y), \end{displaymath} \item[---\,] $\delta$ is \emph{modular}, i.e., \begin{displaymath} \qquad \forall x,y \in L \colon \quad \delta(x \vee y) + \delta(x \wedge y) = \delta(x)+\delta(y). 
\end{displaymath} \end{enumerate} A \emph{dimension function} on $L$ is a pseudo-dimension function $\delta \colon L \to [0,1]$ such that \begin{displaymath} \forall x,y \in L \colon \quad x < y \ \Longrightarrow \ \delta(x) < \delta(y). \end{displaymath} For any pseudo-dimension function $\delta$ on $L$, the map \begin{displaymath} d_{\delta}\colon\, L\times L\,\longrightarrow\,[0,1],\quad (x,y)\,\longmapsto\,\delta(x\vee y)-\delta(x\wedge y) \end{displaymath} constitutes a pseudo-metric on $L$~\cite[V.7, Theorem~9, p.~77]{BirkhoffBook}, and we see that $d_{\delta}$ is a metric if and only if $\delta$ is a dimension function on $L$. \begin{thm}[\cite{VonNeumannBook}]\label{theorem:dimension.function.lattice} Let $L$ be an irreducible, continuous geometry. Then there exists a unique dimension function $\delta_{L}$ on $L$. Moreover, the metric $d_{L} \defeq d_{\delta_{L}}$ is complete. \end{thm} \begin{proof} See~\cite[I.VI, Theorem~6.9, p.~52]{VonNeumannBook} for existence and~\cite[I.VII, p.~60, Corollary~1]{VonNeumannBook} for uniqueness. Completeness of $d_{L}$ is proved in~\cite[I.6, Satz~6.4, p.~48]{MaedaBook}. \end{proof} \begin{prop}[\cite{MaedaBook}]\label{proposition:dimension.function.continuous} Let $L$ be an irreducible, continuous geometry. If $C$ is a chain in $L$, then \begin{displaymath} \delta_{L}\!\left(\bigvee C\right)\! \, = \, \sup \delta_{L}(C), \qquad \delta_{L}\!\left(\bigwedge C\right)\! \, = \, \inf \delta_{L}(C) . \end{displaymath} \end{prop} \begin{proof} This is due to~\cite[V.2, Satz~2.1, p.~118]{MaedaBook} (see~\cite[V.1, Satz~1.8, p.~117]{MaedaBook} for a more general result). \end{proof} \section{Regular rings}\label{section:regular.rings} The purpose of this section is to recollect some background material about regular rings from~\cite{VonNeumannBook} (see also~\cite{GoodearlBook, MaedaBook}). We start off with some bits of notation that will be of central relevance to the entire manuscript. To this end, let $R$ be a unital ring.
Then we consider its \emph{center} \begin{displaymath} \ZZ(R) \, \defeq \, \{a\in R\mid \forall b\in R\colon\, ab=ba\} , \end{displaymath} which is a unital subring of $R$, its \emph{unit group} \begin{displaymath} \GL(R) \, \defeq \, \{a\in R\mid \exists b\in R\colon\, ab=ba=1\} , \end{displaymath} equipped with the multiplication inherited from $R$, and the sets \begin{displaymath} \lat(R) \, \defeq \, \{aR\mid a\in R\}, \qquad \latop(R) \, \defeq \, \{Ra\mid a\in R\} \end{displaymath} of \emph{principal right ideals} (resp., \emph{principal left ideals}) of $R$, partially ordered by inclusion. \begin{remark}\label{remark:quantum.logic} Let $R$ be a unital ring. The set \begin{displaymath} \E(R) \, \defeq \, \{ e \in R \mid ee = e \} \end{displaymath} of \emph{idempotent} elements of $R$ naturally carries a partial order defined by \begin{displaymath} e\leq f \quad :\Longleftrightarrow \quad ef=fe=e \qquad (e,f\in\E(R)). \end{displaymath} Two elements $e,f\in\E(R)$ are called \emph{orthogonal} and we write $e\perp f$ if $ef=fe=0$. Let $e,f \in \E(R)$. The following hold. \begin{enumerate} \item\label{remark:quantum.logic.1} If $f \leq e$, then $e-f\in\E(R)$ and $f\perp (e-f)\leq e$. \item\label{remark:quantum.logic.2} If $e\perp f$, then $e+f \in \E(R)$ satisfies $e \leq e+f$ and $f \leq e+f$. \end{enumerate} \end{remark} We now turn to the class of regular rings. A unital ring $R$ is called \emph{(von Neumann) regular} if, for every $a \in R$, there exists $b \in R$ such that $aba = a$. \begin{remark}[\cite{VonNeumannBook}, II.II, Theorem~2.2, p.~70]\label{remark:regular.idempotent.ideals} Let $R$ be a unital ring. Then the following are equivalent. \begin{enumerate} \item[---\,] $R$ is regular. \item[---\,] $\lat(R) = \{ eR \mid e \in \E(R) \}$. \item[---\,] $\latop(R) = \{ Re \mid e \in \E(R) \}$. \end{enumerate} \end{remark} Our next remark requires some additional notation. 
If $R$ is a unital ring, then, for any $S \subseteq R$ and $a \in R$, the corresponding \emph{right annihilators} in $R$ are defined as \begin{displaymath} \rAnn(S) \defeq \{x\in R\mid \forall s\in S\colon\, sx=0 \}, \qquad \rAnn(a) \defeq \rAnn(\{ a \}), \end{displaymath} and the corresponding \emph{left annihilators} in $R$ are defined as \begin{displaymath} \lAnn(S) \defeq \{x\in R\mid \forall s\in S\colon\, xs=0 \}, \qquad \lAnn(a) \defeq \lAnn(\{ a \}). \end{displaymath} \begin{remark}\label{remark:bijection.annihilator} Let $R$ be a regular ring. By~\cite[II.II, Corollary~2, p.~71 and II.II, Lemma~2.3, p.~72]{VonNeumannBook}, the maps \begin{align*} &(\lat(R),{\subseteq}) \, \longrightarrow \, (\latop(R),{\subseteq}), \quad I \, \longmapsto \, \lAnn(I) ,\\ &(\latop(R),{\subseteq}) \, \longrightarrow \, (\lat(R),{\subseteq}), \quad I \, \longmapsto \, \rAnn(I) \end{align*} are mutually inverse order-reversing bijections. Due to~\cite[II.II, Theorem~2.4, p.~72]{VonNeumannBook}, the partially ordered set $(\lat(R),{\subseteq})$ (resp., $(\latop(R),{\subseteq})$) is a complemented, modular lattice, in which \begin{displaymath} I\vee J\,= \, I+J, \qquad I\wedge J \, = \, I\cap J \end{displaymath} for all $I,J \in \lat(R)$ (resp., $I,J \in \latop(R)$). Moreover, by \cite[II.II, Lemma~2.2(i), p.~71]{VonNeumannBook}, \begin{align*} \forall e\in\E(R)\colon\qquad R(1-e)=\lAnn(eR),\quad (1-e)R=\rAnn(Re). \end{align*} \end{remark} \begin{lem}\label{lemma:partial.inverse} Let $R$ be a regular ring, $f\in\E(R)$ and $a\in R$ with $\rAnn(a) \subseteq (1-f)R$. Then there exists $b\in R$ such that $ba=f$. \end{lem} \begin{proof} We observe that \begin{displaymath} Ra \, \stackrel{\ref{remark:bijection.annihilator}}{=} \, \lAnn(\rAnn(Ra)) \, = \, \lAnn(\rAnn(a)) \, \supseteq \, \lAnn((1-f)R) \, \stackrel{\ref{remark:bijection.annihilator}}{=} \, Rf . \end{displaymath} In particular, there exists $b\in R$ such that $ba=f$. 
\end{proof} In the principal right ideal lattice of a regular ring, independence can be described by means of orthogonal idempotents as follows. \begin{remark}\label{remark:independence.ideals.idempotents} Let $R$ be a regular ring, let $n \in \N$ and let $I_{1},\dots,I_{n} \in \lat(R)$. It is straightforward to check that $(I_{1},\dots, I_{n})\perp$ if and only if \begin{displaymath} \forall (x_{1},\ldots,x_{n}) \in I_{1} \times \dots \times I_{n} \colon \quad x_{1}+\ldots+x_{n}=0 \ \Longrightarrow \ x_{1}=\ldots=x_{n}=0. \end{displaymath} As is customary, if $(I_{1},\dots, I_{n})\perp$, then we write \begin{displaymath} I_{1}\oplus \ldots \oplus I_{n} \, \defeq \, I_{1}+\ldots+I_{n} \, = \, \sum\nolimits_{j=1}^{n} I_{j} \, \eqdef \, \bigoplus\nolimits_{j=1}^{n} I_{j} . \end{displaymath} By~\cite[II.III, Lemma~3.2, p.~94 and its Corollary, p.~95]{VonNeumannBook}, the following are equivalent. \begin{enumerate} \item[---\,] $I_{1} \oplus \ldots \oplus I_{n} = R$. \item[---\,] There exist $e_{1},\dots,e_{n}\in\E(R)$ pairwise orthogonal such that $e_{1}+\dots+e_{n}=1$ and $I_{j}=e_{j}R$ for each $j \in \{1,\ldots,n\}$. \end{enumerate} \end{remark} A ring $R$ is called \emph{(directly) irreducible} if $R$ is non-zero and not isomorphic to the direct product of two non-zero rings. \begin{remark}[\cite{VonNeumannBook}, II.II, Theorem~2.7, p.~75 and II.II, Theorem~2.9, p.~76]\label{remark:irreducible.center.field} Let $R$ be a regular ring. The following are equivalent. \begin{enumerate} \item[---\,] $R$ is irreducible. \item[---\,] $\ZZ(R)$ is a field. \item[---\,] $\lat(R)$ is irreducible. \end{enumerate} \end{remark} A basic ingredient in later decomposition arguments will be the following standard construction of subrings induced by idempotent elements of a given ring. \begin{remark}\label{remark:corner.rings} Let $R$ be a unital ring, $e \in \E(R)$. Then $eRe$ is a subring of $R$, with multiplicative unit $e$. 
If $R$ is regular, then so is $eRe$~\cite[II.II, Theorem~2.11, p.~77]{VonNeumannBook}. \end{remark} \begin{lem}\label{lemma:right.multiplication} Let $R$ be a unital ring and let $e \in \E(R)$. The following hold. \begin{enumerate} \item\label{lemma:right.multiplication.1} If $I$ is a right ideal of $R$, then $Ie = I \cap Re$. \item\label{lemma:right.multiplication.2} If $I$ and $J$ are right ideals of $R$, then $(I \cap J)e = Ie \cap Je$. \item\label{lemma:annihilator.eRe} If $a \in R$ and $ae=ea$, then \begin{displaymath} \qquad e\rAnn(a) = \rAnn(ae) \cap eR, \qquad e\rAnn(a)e = \rAnn(ae) \cap eRe. \end{displaymath} \end{enumerate} \end{lem} \begin{proof} \ref{lemma:right.multiplication.1} As $x=xe \in Ie$ for every $x \in I \cap Re$, we see that $I \cap Re \subseteq Ie$. Conversely, as $I$ is a right ideal of $R$, we have $Ie \subseteq I$ and thus $Ie \subseteq I \cap Re$. \ref{lemma:right.multiplication.2} $(I \cap J)e \stackrel{\ref{lemma:right.multiplication.1}}{=} (I \cap J) \cap Re = (I \cap Re) \cap (J \cap Re) \stackrel{\ref{lemma:right.multiplication.1}}{=} Ie \cap Je$. \ref{lemma:annihilator.eRe} Let $a \in R$ with $ae=ea$. First let us show that $e\rAnn(a) = \rAnn(ae) \cap eR$. Evidently, if $x \in \rAnn(a)$, then $aeex = aex = eax = 0$, thus $ex \in \rAnn(ae) \cap eR$. Conversely, if $x \in \rAnn(ae) \cap eR$, then $ax = aex = 0$, hence $x = ex \in e\rAnn(a)$. This proves the first equality. We conclude that \begin{displaymath} e\rAnn(a)e \, \stackrel{\ref{lemma:right.multiplication.1}}{=} \, e\rAnn(a) \cap Re \, = \, \rAnn(ae) \cap eR \cap Re \, \stackrel{e \in \E(R)}{=} \, \rAnn(ae) \cap eRe . \qedhere \end{displaymath} \end{proof} \begin{lem}\label{lemma:right.multiplication.regular} Let $R$ be a regular ring and let $e \in \E(R)$. The following hold. \begin{enumerate} \item\label{lemma:right.multiplication.4} If $I \in \lat(R)$ and $I \subseteq eR$, then $Ie \in \lat(eRe)$. \item\label{lemma:right.multiplication.3} Let $n \in \N_{>0}$ and $I_{1},\ldots,I_{n} \in \lat(R)$. If $(I_{1},\ldots,I_{n})\perp$ and $I_{i} \subseteq eR$ for each $i \in \{ 1,\ldots,n \}$, then $(I_1e,\ldots,I_ne)\perp$. \end{enumerate} \end{lem} \begin{proof} \ref{lemma:right.multiplication.4} Let $I \in \lat(R)$ with $I \subseteq eR$. By~\cite[Lemma~7.5]{SchneiderGAFA}, there exists $f \in \E(R)$ with $f \leq e$ and $I = fR$, which implies that $Ie = fRe = (efe)eRe \in \lat(eRe)$. \ref{lemma:right.multiplication.3} If $(I_{1},\ldots,I_{n})\perp$ and $I_{i} \subseteq eR$ for each $i \in \{ 1,\ldots,n \}$, then \begin{displaymath} (I_{1}e + \ldots + I_{i}e) \cap I_{i+1}e \, \subseteq \, (I_{1} + \ldots + I_{i}) \cap I_{i+1} \, = \, \{ 0 \} \end{displaymath} for every $i \in \{ 1,\ldots,n-1 \}$, whence $(I_{1}e,\ldots,I_{n}e)\perp$ by Remark~\ref{remark:independence.equivalence.intersection}. \end{proof} Our later considerations, especially those in Section~\ref{section:algebraic.elements} and Section~\ref{section:approximation.by.matrix.algebras}, will crucially rely on a study of embeddings of matrix algebras into continuous rings. We conclude this section with a clarifying remark on this matter. To recollect some terminology from~\cite[II.III, Definition~3.5, p.~97]{VonNeumannBook}, let $R$ be a unital ring and let $n \in \N_{>0}$. Then $s = (s_{ij})_{i,j\in\{1,\ldots,n\}} \in R^{n\times n}$ is called a \emph{family of matrix units} for $R$ if \begin{displaymath} \forall i,j,k,\ell\in\{1,\ldots,n\}\colon\qquad s_{ij}s_{k\ell} \, = \, \begin{cases} \, s_{i\ell} & \text{if } j=k, \\ \, 0 & \text{otherwise} \end{cases} \end{displaymath} and $s_{11}+\ldots+s_{nn}=1$. \begin{remark}\label{remark:matrixunits.ring.homomorphism} Let $K$ be a field and let $n \in \N_{>0}$.
Since the matrix ring $\M_{n}(K)$ is simple (see, e.g.,~\cite[IX.1, Corollary~1.5, p.~361]{GrilletBook}), any unital ring homomorphism from $\M_{n}(K)$ to a non-zero unital ring is an embedding. Let $R$ be a unital $K$-algebra. If $s\in R^{n\times n}$ is any family of matrix units for $R$, then \begin{align*} \M_{n}(K)\,\longrightarrow\, R,\quad (a_{ij})_{i,j\in\{1,\ldots,n\}}\,\longmapsto\, \sum\nolimits_{i,j=1}^{n}a_{ij}s_{ij} \end{align*} constitutes a unital $K$-algebra homomorphism. Using the standard basis of the $K$-vector space $\M_{n}(K)$, it is easy to see that this construction induces a bijection between the set of families of matrix units for $R$ in $R^{n \times n}$ and the set of unital ring homomorphisms from $\M_{n}(K)$ to $R$. \end{remark} \section{Rank functions and continuous rings}\label{section:continuous.rings.and.rank.functions} This section provides some background on von Neumann's work on continuous rings, regarding in particular their description via rank functions. To begin with, we recall some convenient terminology from~\cite[Chapter~16]{GoodearlBook}. Let $R$ be a regular ring. A \emph{pseudo-rank function} on $R$ is a map $\rho \colon R \to [0,1]$ such that \begin{enumerate} \item[---\,] $\rho(1) = 1$, \item[---\,] $\rho(ab) \leq \min\{\rho(a),\rho(b)\}$ for all $a,b \in R$, and \item[---\,] $\rho(e+f) = \rho(e) + \rho(f)$ for any two orthogonal $e,f \in \E(R)$. \end{enumerate} Of course, the third condition entails that $\rho(0) = 0$ for any pseudo-rank function $\rho$ on $R$. A \emph{rank function} on $R$ is a pseudo-rank function $\rho$ on $R$ such that $\rho(a)>0$ for each $a \in R\setminus\{0\}$. A \emph{rank ring} is a pair consisting of a regular ring and a rank function on it. \begin{remark}[\cite{VonNeumannBook}]\label{remark:properties.pseudo.rank.function} Let $\rho$ be a pseudo-rank function on a regular ring $R$. 
\begin{enumerate} \item\label{remark:rank.difference.smaller.idempotent} If $e,f\in\E(R)$ and $e\leq f$, then $e \perp (f-e) \in \E(R)$ by Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1} and therefore $\rho(f) = \rho(e+(f-e)) = \rho(e)+\rho(f-e)$, thus $\rho(f-e) = \rho(f)-\rho(e)$. \item\label{remark:rank.order.isomorphism} If $\rho$ is a rank function and $E$ is a chain in $(\E(R),{\leq})$, then~\ref{remark:rank.difference.smaller.idempotent} entails that \begin{displaymath} \qquad \forall e,f \in E \colon \quad e \leq f \ \Longleftrightarrow \ \rho(e) \leq \rho(f) \end{displaymath} and \begin{displaymath} \qquad \forall e,f \in E \colon \quad \lvert \rho(e)-\rho(f) \rvert = \rho(e-f). \end{displaymath} \item\label{remark:rank.matrixunits} Let $n \in \N_{>0}$. If $s \in R^{n\times n}$ is a family of matrix units for $R$, then $\rho(s_{ij}) = \tfrac{1}{n}$ for all $i,j \in \{1,\ldots,n\}$. This follows by straightforward calculation. \item\label{remark:inequation.sum.pseudo.rank} For all $a,b \in R$, \begin{displaymath} \qquad \rho(a+b) \, \leq \, \rho(a) + \rho(b). \end{displaymath} (Proofs of this are contained in~\cite[II.XVIII, p.~231, Corollary~$(\overline{f})$]{VonNeumannBook}, as well as in~\cite[VI.5, Hilfssatz~5.1(3°), p.~153]{MaedaBook}, and~\cite[Proposition~16.1(d), p.~227]{GoodearlBook}.) \item\label{remark:rank.continuous} The map \begin{align*} \qquad d_{\rho} \colon \, R \times R \, \longrightarrow \, [0,1], \quad (a,b) \, \longmapsto \, \rho(a-b) \end{align*} is a pseudo-metric on $R$. Evidently, $d_{\rho}$ is a metric if and only if $\rho$ is a rank function. Moreover, the map $\rho \colon R \to [0,1]$ is $1$-Lipschitz, hence continuous, with respect to $d_{\rho}$. (For proofs, we refer to~\cite[II.XVIII, Lemma~18.1, p.~231]{VonNeumannBook}, \cite[VI.5, Satz~5.1, p.~154]{MaedaBook}, or~\cite[Proposition~19.1, p.~282]{GoodearlBook}.)
\item\label{remark:rank.group.topology} Equipped with the \emph{$\rho$-topology}, i.e., the topology generated by $d_{\rho}$, the ring $R$ constitutes a topological ring (see~\cite[Remark~7.8]{SchneiderGAFA}). It follows from~\ref{remark:rank.continuous} that the $\rho$-topology is Hausdorff if and only if $\rho$ is a rank function. Furthermore, $\GL(R)$ is a topological group with respect to the relative $\rho$-topology, as the latter is generated by the bi-invariant pseudo-metric ${d_{\rho}}\vert_{\GL(R)\times \GL(R)}$. \end{enumerate} \end{remark} The following remark recollects several general facts about rank functions and unit groups of the underlying rings from the literature. \begin{remark}\label{remark:properties.rank.function} Let $(R,\rho)$ be a rank ring. \begin{enumerate} \item\label{remark:directly.finite} $R$ is \emph{directly finite}, i.e., \begin{displaymath} \qquad \forall a,b \in R \colon \quad ab=1 \ \Longrightarrow \ ba=1 . \end{displaymath} (For proofs, we refer to~\cite[Corollary~6(1)]{Handelman76}, \cite[Proposition~16.11(b), p.~234]{GoodearlBook}, or~\cite[Lemma~7.13(2)]{SchneiderGAFA}.) \item\label{remark:invertible.rank} $\GL(R) = \{ a \in R \mid \rho(a) = 1 \}$. (For a proof, see~\cite[Lemma~7.13(3)]{SchneiderGAFA}.) \item\label{remark:GL(R).closed} $\GL(R)$ is closed in $(R,d_{\rho})$ by~\ref{remark:invertible.rank} and Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.continuous}. \end{enumerate} \end{remark} For later use, we record some additional basic observations. \begin{lem}[\cite{MaedaBook,SchneiderGAFA}]\label{lemma:pseudo.dimension.function} Let $\rho$ be a pseudo-rank function on a regular ring $R$. Then \begin{displaymath} \delta_{\rho} \colon \, \lat(R) \, \longrightarrow \, [0,1], \quad aR \, \longmapsto \, \rho(a) \end{displaymath} is a pseudo-dimension function. Moreover, the following hold. 
\begin{enumerate} \item\label{lemma:rank.dimension.function} $\rho$ rank function on $R$ $\ \Longleftrightarrow\ $ $\delta_{\rho}$ dimension function on $\lat(R)$. \item\label{lemma:rank.dimension.annihilator} $\rho(a)=1-\delta_{\rho}(\rAnn(a))$ for every $a \in R$. \item\label{lemma:multplication.Lipschitz} For every $a \in R$, the mapping $\lat(R) \to \lat(R),\, I \mapsto aI$ is $1$-Lipschitz, hence continuous, with respect to $d_{\delta_{\rho}}$. \end{enumerate} \end{lem} \begin{proof} The fact that $\delta \defeq \delta_{\rho}$ is a pseudo-dimension function and assertion~\ref{lemma:rank.dimension.function} follow by the proof of \cite[VI.5, Satz~5.2, p.~154]{MaedaBook}. \ref{lemma:rank.dimension.annihilator} Let $a \in R$. Since $R$ is regular, Remark~\ref{remark:regular.idempotent.ideals} asserts the existence of $e\in\E(R)$ such that $Ra=Re$. Hence, \begin{displaymath} \rAnn(a) \, = \, \rAnn(Ra) \, = \, \rAnn(Re) \, \stackrel{\ref{remark:bijection.annihilator}}{=} \, (1-e)R \end{displaymath} and therefore \begin{displaymath} \rho(a) \, = \, \rho(e) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\rho(1-e) \, = \, 1-\delta((1-e)R) \, = \, 1-\delta(\rAnn(a)) . \end{displaymath} \ref{lemma:multplication.Lipschitz} The argument follows the lines of~\cite[Proof of Lemma~9.10(2)]{SchneiderGAFA}. Let $a \in R$. If $I, J \in \lat(R)$ and $I\subseteq J$, then Remark~\ref{remark:bijection.annihilator} and Lemma~\ref{lemma:complement} together assert the existence of $I' \in \lat(R)$ such that $J = I\oplus I'$, whence \begin{align*} d_{\delta}(aI,aJ) \, &= \, \delta(aJ)-\delta(aI) \, \leq \, \delta(aI)+\delta(aI')-\delta(aI) \\ &\leq \, \delta(I') \, = \, \delta(J)-\delta(I) \, = \, d_{\delta}(I,J) . 
\end{align*} Consequently, for all $I,J \in \lat(R)$, \begin{align*} d_{\delta}(aI,aJ) \, &\leq \, d_{\delta}(aI,a(I+J)) + d_{\delta}(a(I+J),aJ) \\ &\leq \, d_{\delta}(I,I+J) + d_{\delta}(I+J,J) \, = \, 2\delta(I+J)-\delta(I)-\delta(J) \stackrel{(\ast)}{=} \, d_{\delta}(I,J), \end{align*} where $(\ast)$ follows by~\cite[Lemma~6.3(3)]{SchneiderGAFA}. \end{proof} We now turn to von Neumann's continuous rings. A \emph{continuous ring} is a regular ring $R$ such that the lattice $\lat(R)$ is a continuous geometry. \begin{thm}[\cite{VonNeumannBook}]\label{theorem:unique.rank.function} Let $R$ be an irreducible, continuous ring. Then \begin{displaymath} \rk_{R} \colon \, R \, \longrightarrow \, [0,1] , \quad a \, \longmapsto \, \delta_{\lat(R)}(aR) \end{displaymath} is the unique rank function on $R$. Moreover, the metric $d_{R} \defeq d_{\rk_{R}}$ is complete. \end{thm} \begin{proof} By~\cite[II.XVII, Theorem~17.1, p.~224]{VonNeumannBook} and~\cite[II.XVII, Theorem~17.2, p.~226]{VonNeumannBook}, the map $\rk_{R}$ is the unique rank function on $R$. The metric space $(R,d_{R})$ is complete according to~\cite[II.XVII, Theorem~17.4, p.~230]{VonNeumannBook}. (Alternatively, proofs may be found in~\cite[VII.2, pp.~162--165]{MaedaBook}.) \end{proof} A rank ring $(R,\rho)$ is called \emph{complete} if the metric space $(R,d_{\rho})$ is complete. If $R$ is an irreducible, continuous ring, then $(R,\rk_{R})$ is a complete rank ring by Theorem~\ref{theorem:unique.rank.function}. Conversely, if $(R,\rho)$ is a complete rank ring, then $R$ is a continuous ring according to~\cite[VI.5, Satz~5.3, p.~156]{MaedaBook} (see also~\cite[II.XVIII, Proof of Theorem~18.1, p.~237]{VonNeumannBook}). The remainder of this section records several useful facts about the rank function of an irreducible, continuous ring. \begin{remark}\label{remark:rank.function.general} Let $R$ be an irreducible, continuous ring.
\begin{enumerate} \item\label{remark:characterization.discrete} The work of von Neumann~\cite{VonNeumannBook} implies that the following are equivalent. \begin{enumerate} \item[---\,] $R$ is \emph{discrete}, i.e., the topology generated by~$d_{R}$ is discrete. \item[---\,] $R\cong \M_{n}(D)$ for some division ring $D$ and $n \in \N_{>0}$. \item[---\,] $\rk_{R}(R) \ne [0,1]$. \end{enumerate} For a proof of this, see~\cite[Remark~3.4]{SchneiderIMRN} and~\cite[Remark~3.6]{SchneiderIMRN}. \item\label{remark:uniqueness.rank.embedding} Suppose that $\rho$ is a pseudo-rank function on a regular ring $S$ and let $\phi \colon R \to S$ be a unital ring homomorphism. Then $\rho \circ \phi$ is a pseudo-rank function on~$R$, so $(\rho \circ \phi)^{-1}(\{ 0 \})$ is a proper two-sided ideal of $R$ by~\cite[Proposition~16.7(a), p.~231]{GoodearlBook}. Since the ring $R$ is simple according to~\cite[VII.3, Hilfssatz~3.1, p.~166]{MaedaBook} (see also~\cite[Corollary~13.26, p.~170]{GoodearlBook}), it follows that $(\rho \circ \phi)^{-1}(\{ 0\}) = \{ 0\}$, i.e., $\rho \circ \phi$ is a rank function on $R$. Thus, $\rho \circ \phi = \rk_{R}$ by Theorem~\ref{theorem:unique.rank.function}, whence \begin{displaymath} \qquad \forall a,b \in R \colon \quad d_{R}(a,b) \, = \, d_{\rho}(\phi(a),\phi(b)) . \end{displaymath} \item\label{remark:duality} By Remark~\ref{remark:bijection.annihilator} and Remark~\ref{remark:irreducible.center.field}, both $\lat(R)$ and $\latop(R)$ are irreducible continuous geometries. Moreover, by~\cite[II.XVII, Lemma~17.2, p.~223]{VonNeumannBook}, \begin{displaymath} \qquad \forall I \in \latop(R) \colon \quad \delta_{\latop(R)}(I) \, = \, 1-\delta_{\lat(R)}(\rAnn(I)) . 
\end{displaymath} \end{enumerate} \end{remark} \begin{lem}[\cite{VonNeumannBook}]\label{lemma:matrixunits.idempotents} Let $R$ be an irreducible, continuous ring, let $n \in \N_{>0}$, and let $e_{1},\ldots,e_{n} \in \E(R)$ be pairwise orthogonal with $e_{1} + \ldots + e_{n} = 1$ and $\rk_{R}(e_{1}) = \rk_{R}(e_{i})$ for each $i \in \{ 1,\ldots,n\}$. Then there exists a family of matrix units $s\in R^{n\times n}$ for $R$ such that $s_{ii} = e_{i}$ for each $i\in\{1,\ldots,n\}$. \end{lem} \begin{proof} Note that $R = e_{1}R \oplus \ldots \oplus e_{n}R$ by Remark~\ref{remark:independence.ideals.idempotents} and, for each $i \in \{ 1,\ldots, n\}$, \begin{displaymath} \delta_{\lat(R)}(e_{1}R) \, \stackrel{\ref{theorem:unique.rank.function}}{=} \, \rk_{R}(e_{1}) \, = \, \rk_{R}(e_{i}) \, \stackrel{\ref{theorem:unique.rank.function}}{=} \, \delta_{\lat(R)}(e_{i}R) . \end{displaymath} Thus, by~\cite[II.III, Lemma~3.6, p.~97]{VonNeumannBook} and~\cite[I.VI, Theorem~6.9(iii)'', p.~52]{VonNeumannBook}, there is a family of matrix units $s\in R^{n\times n}$ for $R$ such that $e_{i}R = s_{ii}R$ for each $i \in \{1,\ldots,n\}$. Using the argument from~\cite[II.II, Corollary~(iv) after Definition~2.1, p.~69]{VonNeumannBook}, we see that, for each $i \in \{1,\ldots,n\}$, \begin{align*} s_{ii}(1-e_{i}) \, &= \, s_{ii}(e_{1} + \ldots + e_{i-1} + e_{i+1} + \ldots + e_{n}) \\ &= \, s_{ii}e_{1} + \ldots + s_{ii}e_{i-1} + s_{ii}e_{i+1} + \ldots + s_{ii}e_{n} \\ &= \, s_{ii}s_{11}e_{1} + \ldots + s_{ii}s_{i-1,\,i-1}e_{i-1} + s_{ii}s_{i+1,\,i+1}e_{i+1} + \ldots + s_{ii}s_{nn}e_{n} \, = \, 0, \end{align*} i.e., $s_{ii} = s_{ii}e_{i} = e_{i}$, as desired. \end{proof} \begin{lem}\label{lemma:order} Let $R$ be an irreducible, continuous ring and let $e \in \E(R)$. \begin{enumerate} \item\label{lemma:order.1} For every $e' \in \E(R)$ with $e \leq e'$ and every $t \in [\rk_{R}(e),\rk_{R}(e')] \cap \rk_{R}(R)$, there exists $f \in \E(R)$ such that $\rk_{R}(f) = t$ and $e \leq f \leq e'$. 
\item\label{lemma:order.2} If $R$ is non-discrete and $\rk_{R}(e) = \tfrac{1}{2}$, then we find $(e_{n})_{n\in\N_{>0}} \in \E(R)^{\N_{>0}}$ pairwise orthogonal such that $e_{1} = e$ and $\rk_{R}(e_{n}) = 2^{-n}$ for each $n\in\N_{>0}$. \end{enumerate} \end{lem} \begin{proof} Let $e' \in \E(R)$ with $e \leq e'$. The Hausdorff maximal principle asserts the existence of a maximal chain $E$ in $(\E(R),{\leq})$ such that $\{ e,e' \} \subseteq E$. By~\cite[Corollary~7.19]{SchneiderGAFA}, $\rk_{R}(E) = \rk_{R}(R)$. We deduce from Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.order.isomorphism} that ${{\rk_{R}}\vert_{E}} \colon (E,{\leq}) \to (\rk_{R}(R),{\leq})$ is an isomorphism of linearly ordered sets. \ref{lemma:order.1} Now, let $t \in [\rk_{R}(e),\rk_{R}(e')] \cap \rk_{R}(R)$. We define $f \defeq ({{\rk_{R}}\vert_{E}})^{-1}(t)$. Then, from $\rk_{R}(e) \leq t \leq \rk_{R}(e')$, we infer that $e \leq f \leq e'$. \ref{lemma:order.2} Let $e' \defeq 1$. Suppose that $R$ is non-discrete and $\rk_{R}(e) = \tfrac{1}{2}$. Note that $\rk_{R}(R) = [0,1]$ by Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete}. Considering \begin{displaymath} f_{n} \, \defeq \, ({\rk_{R}}\vert_{E})^{-1}\!\left(\tfrac{2^{n}-1}{2^{n}}\right) \qquad (n\in\N), \end{displaymath} we conclude that $(f_{n})_{n \in \N}$ is a monotone sequence in $(\E(R),\leq)$ with $f_{0} = 0$ and~$f_{1} = e$. For each $n \in \N_{>0}$, let us define $e_{n} \defeq f_{n}-f_{n-1}$ and observe that \begin{displaymath} \rk_{R}(e_{n}) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rk_{R}(f_{n}) - \rk_{R}(f_{n-1}) \, = \, \tfrac{2^{n}-1}{2^{n}}-\tfrac{2^{n-1}-1}{2^{n-1}} \, = \, 2^{-n}. \end{displaymath} Evidently, $e_{1} = f_{1}-f_{0} = e$. Moreover, for any two $m,n\in\N_{>0}$ with $m>n$, \begin{displaymath} e_{m}e_{n} \, = \, f_{m}(1-f_{m-1})f_{n}(1-f_{n-1}) \, = \, f_{m}(f_{n}-f_{n})(1-f_{n-1}) \, = \, 0 . 
\qedhere \end{displaymath} \end{proof} Finally, we turn once more to the construction mentioned in Remark~\ref{remark:corner.rings}. \begin{remark}\label{remark:eRe.non-discrete.irreducible.continuous} Let $R$ be a continuous ring and let $e\in\E(R)$. Then $eRe$ is continuous due to~\cite[Proposition~13.7, p.~162]{GoodearlBook}. Suppose now that $R$ is irreducible and $e \ne 0$. Then $\ZZ(eRe) = \ZZ(R)e$ by~\cite[Lemma~2.1]{Halperin62}, and therefore $\ZZ(R) \cong \ZZ(eRe)$.\footnote{We usually identify $\ZZ(eRe)$ with $\ZZ(R)$.} In turn, $eRe$ is irreducible by Remark~\ref{remark:irreducible.center.field}. Moreover, using Theorem~\ref{theorem:unique.rank.function}, we see that \begin{displaymath} {\rk_{eRe}} \, = \, \tfrac{1}{\rk_{R}(e)}{{\rk_{R}}\vert_{eRe}} . \end{displaymath} In particular, if $R$ is non-discrete, then it follows by Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}\ref{lemma:order.1} that $eRe$ is non-discrete, too. \end{remark} \section{Subgroups induced by idempotents}\label{section:subgroups.of.the.form.Gamma_e(R)} In this section, we isolate and study a certain natural family of subgroups of the unit group of a unital ring. Our observations about these subgroups will be fundamental to both Section~\ref{section:decomposition.into.locally.special.elements} and Section~\ref{section:steinhaus.property}. The construction, detailed in Lemma~\ref{lemma:subgroup.unit.group}, is based on the following type of ring embeddings. \begin{lem}\label{lemma:sum.embedding} Let $R$ be a unital ring, let $n \in \N$, let $e_{1},\ldots,e_{n} \in \E(R)$ be pairwise orthogonal and $e \defeq e_{1} + \ldots + e_{n}$. Then \begin{displaymath} \prod\nolimits_{i=1}^{n} e_{i}Re_{i} \, \longrightarrow \, eRe, \quad (x_{1},\dots,x_{n}) \,\longmapsto \, x_{1}+\ldots+x_{n} \end{displaymath} is a unital $\ZZ(R)$-algebra embedding. Moreover, the following hold. 
\begin{enumerate} \item\label{lemma:sum.embedding.3} If $(a_{i})_{i=1}^{n} \in \prod\nolimits_{i=1}^{n} e_{i}Re_{i}$, then \begin{displaymath} \qquad \sum\nolimits_{i=1}^{n} a_{i} + 1 - \sum\nolimits_{i = 1}^{n} e_{i} \, = \, \prod\nolimits_{i=1}^{n} a_{i}+1-e_{i} , \end{displaymath} where the factors in the product commute. \item\label{lemma:sum.embedding.2} Suppose that $R$ is a regular ring and let $\rho$ be a pseudo-rank function on $R$. If $a_{1} \in e_{1}Re_{1},\dots,a_{n}\in e_{n}Re_{n}$, then $\rho(a_{1}+\dots+a_{n}) = \rho(a_{1})+\dots+\rho(a_{n})$. \end{enumerate} \end{lem} \begin{proof} It is straightforward to check that $\prod\nolimits_{i=1}^{n} e_{i}Re_{i} \to eRe, \, x \mapsto x_{1}+\ldots+x_{n}$ is a unital $\ZZ(R)$-algebra homomorphism. Furthermore, this mapping is injective: indeed, if $x \in \prod\nolimits_{i=1}^{n} e_{i}Re_{i}$ and $x_{1}+\ldots+x_{n}=0$, then it follows that $x_{i} = e_{i}(x_{1}+\ldots +x_n) = 0$ for every $i \in \{ 1,\ldots,n \}$, i.e., $x=0$. \ref{lemma:sum.embedding.3} Let $(a_{i})_{i=1}^{n} \in \prod\nolimits_{i=1}^{n} e_{i}Re_{i}$. For any two distinct $i,j \in \{ 1,\ldots,n\}$, from $e_{i} \perp e_{j}$ we infer that \begin{displaymath} (a_{i}+1-e_{i})(a_{j}+1-e_{j}) \, = \, a_{i}+a_{j}+1-e_{i}-e_{j} \, = \, (a_{j}+1-e_{j})(a_{i}+1-e_{i}) . \end{displaymath} Hence, the factors in the product commute. By induction, we show that \begin{displaymath} \forall j \in \{ 1,\ldots,n \} \colon \qquad \prod\nolimits_{i=1}^{j} a_{i}+1-e_{i} = \sum\nolimits_{i=1}^{j} a_{i} + 1 - \sum\nolimits_{i = 1}^{j} e_{i} . \end{displaymath} For $j=1$, the statement is trivial. Now, if $\prod\nolimits_{i=1}^{j} a_{i}+1-e_{i} = \sum\nolimits_{i=1}^{j} a_{i} + 1 - \sum\nolimits_{i = 1}^{j} e_{i}$ for some $j \in \{ 1,\ldots,n-1\}$, then from $\bigl(\sum_{i=1}^{j} e_{i}\bigr) \perp e_{j+1}$ we deduce that \begin{align*} \prod\nolimits_{i=1}^{j+1}a_{i}+1-e_{i} \, &= \, \!
\left(\sum\nolimits_{i=1}^{j}a_{i} +1 - \sum\nolimits_{i=1}^{j}e_{i}\right)\!(a_{j+1}+1-e_{j+1})\\ &=\, \sum\nolimits_{i=1}^{j+1}a_{i}+1-\sum\nolimits_{i=1}^{j+1}e_{i} . \end{align*} This completes the induction, which entails the desired statement for $j=n$. \ref{lemma:sum.embedding.2} Since $R$ is regular, for each $i \in \{1,\ldots,n\}$, we may choose an element $b_{i} \in R$ with $a_{i}b_{i}a_{i} = a_{i}$ and define $f_{i} \defeq a_{i}b_{i}e_{i}$. We observe that, for every $i \in \{ 1,\ldots,n \}$, \begin{displaymath} f_{i}f_{i} \, = \, a_{i}b_{i}e_{i}a_{i}b_{i}e_{i} \, = \, a_{i}b_{i}a_{i}b_{i}e_{i} \, = \, a_{i}b_{i}e_{i} \, = \, f_{i} \end{displaymath} and \begin{displaymath} a_{i}R \, = \, a_{i}b_{i}a_{i}R \, = \, a_{i}b_{i}e_{i}a_{i}R \, = \, f_{i}a_{i}R \, \subseteq \, f_{i}R \, = \, a_{i}b_{i}e_{i}R \, \subseteq \, a_{i}R, \end{displaymath} that is, $a_{i}R = f_{i}R$. Moreover, $f_{1},\dots,f_{n}$ are pairwise orthogonal: if $i,j \in \{ 1,\ldots,n \}$ are distinct, then \begin{displaymath} f_{i}f_{j} \, = \, a_{i}b_{i}e_{i}a_{j}b_{j}e_{j} \, = \, a_{i}b_{i}e_{i}e_{j}a_{j}b_{j}e_{j} \, = \, 0 . \end{displaymath} Also, \begin{align*} (a_{1} + \ldots + a_{n})(e_{1}b_{1}e_{1} + \ldots + e_{n}b_{n}e_{n}) \, &= \, a_{1}e_{1}b_{1}e_{1} + \ldots + a_{n}e_{n}b_{n}e_{n} \\ &= \, a_{1}b_{1}e_{1} + \ldots + a_{n}b_{n}e_{n} \, = \, f_{1} + \ldots + f_{n} . \end{align*} We conclude that \begin{align*} \rho(a_{1}) + \ldots + \rho(a_{n}) \, &= \, \rho(f_{1}) + \dots + \rho(f_{n}) \, = \, \rho(f_{1} + \ldots + f_{n}) \\ &= \, \rho((a_{1} + \ldots + a_{n})(e_{1}b_{1}e_{1} + \ldots + e_{n}b_{n}e_{n})) \\ &\leq \, \rho(a_{1} + \ldots + a_{n}) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:inequation.sum.pseudo.rank}}{\leq} \, \rho(a_{1}) + \ldots + \rho(a_{n}), \end{align*} thus $\rho(a_{1}+\ldots+a_{n}) = \rho(a_{1})+\ldots+\rho(a_{n})$. \end{proof} We arrive at the announced construction of subgroups.
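Before stating the construction in general, let us illustrate it in the simplest discrete case; this example is merely for orientation and will not be needed in the sequel. Let $K$ be a field, let $R \defeq \M_{2}(K)$, and let $e \defeq \left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right) \in \E(R)$. Then $eRe \cong K$ and
\begin{displaymath}
\GL(eRe) + 1-e \, = \, \left\{ \begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix} \middle\vert \ a \in K\setminus \{ 0 \} \right\}
\end{displaymath}
is a subgroup of $\GL(R)$ isomorphic to the multiplicative group of $K$.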
\begin{lem}\label{lemma:subgroup.unit.group} Let $R$ be a unital ring and let $e,f \in \E(R)$. Then \begin{displaymath} \Gamma_{R}(e) \, \defeq \, \GL(eRe) + 1-e \, = \, \GL(R) \cap (eRe + 1-e) \end{displaymath} is a subgroup of $\GL(R)$ and \begin{align*} &{\GL(eRe)} \, \longrightarrow \, \Gamma_{R}(e),\quad a \, \longmapsto \, a+1-e, \\ &{\Gamma_{R}(e)} \, \longrightarrow \, \GL(eRe), \quad a \, \longmapsto \, ae \end{align*} are mutually inverse group isomorphisms. Moreover, the following hold. \begin{enumerate} \item\label{lemma:subgroup.unit.group.order} If $e \leq f$, then $\Gamma_{R}(e)\leq\Gamma_{R}(f)$. \item\label{lemma:subgroup.unit.group.orthogonal} If $e \perp f$, then $ab=ba$ for all $a\in\Gamma_{R}(e)$ and $b\in\Gamma_{R}(f)$. \item\label{lemma:subgroup.unit.group.conjugation} If $a\in \GL(R)$, then $a\Gamma_{R}(e)a^{-1}=\Gamma_R(aea^{-1})$. \end{enumerate} \end{lem} \begin{proof} Note that $e\perp (1-e) \in \E(R)$ according to Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1}. Consider the unital ring $S \defeq eRe \times (1-e)R(1-e)$ and observe that \begin{displaymath} \GL(S) \, = \, \GL(eRe) \times \GL((1-e)R(1-e)) . \end{displaymath} In turn, $\psi \colon \GL(eRe) \to \GL(S), \, a \mapsto (a,1-e)$ is a well-defined group embedding. Moreover, it follows from Lemma~\ref{lemma:sum.embedding} that \begin{displaymath} \phi \colon \, \GL(S) \, \longrightarrow \, \GL(R), \quad (a,b) \, \longmapsto \, a+b \end{displaymath} is a well-defined embedding. Thus, $\phi \circ \psi \colon \GL(eRe) \to \GL(R)$ is an embedding with \begin{displaymath} \Gamma_{R}(e) \, = \, \im (\phi \circ \psi) \, \subseteq \, \GL(R) \cap (eRe + 1-e) . 
\end{displaymath} Now, if $a \in \GL(R) \cap (eRe + 1-e)$, then $ae=eae=ea$ and $a(1-e)=1-e=(1-e)a$, thus $a^{-1}e=ea^{-1}$ and $a^{-1}(1-e) = 1-e = (1-e)a^{-1}$, which implies that \begin{displaymath} ea^{-1}eeae \, = \, ea^{-1}ae \, = \, ee \, = \, e \, = \, ee \, = \, eaa^{-1}e \, = \, eaeea^{-1}e \end{displaymath} and therefore $eae \in \GL(eRe)$, whence $a = eae + 1-e \in \Gamma_{R}(e)$. This shows that indeed $\Gamma_{R}(e) = \GL(R) \cap (eRe + 1-e)$, as claimed. Since ${\GL(eRe)} \to \Gamma_{R}(e),\, a \mapsto a+1-e$ is an isomorphism, its inverse ${\Gamma_{R}(e)} \to \GL(eRe), \, a \mapsto ae$ is an isomorphism, too. \ref{lemma:subgroup.unit.group.order} If $e\leq f$, then \begin{displaymath} eRe+1-e \, = \, eRe+f-e+1-f \, = \, f(eRe+1-e)f+1-f \, \subseteq \, fRf+1-f \end{displaymath} and therefore \begin{displaymath} \Gamma_{R}(e) \, = \, \GL(R) \cap (eRe+1-e) \, \subseteq \, \GL(R) \cap (fRf+1-f) \, = \, \Gamma_{R}(f). \end{displaymath} \ref{lemma:subgroup.unit.group.orthogonal} This follows from Lemma~\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.3}. \ref{lemma:subgroup.unit.group.conjugation} If $a\in \GL(R)$, then \begin{align*} a\Gamma_{R}(e)a^{-1}\! \, &= \, a(\GL(R) \cap (eRe+1-e))a^{-1} \\ & = \, \GL(R) \cap {\left(aea^{-1}Raea^{-1}+1-aea^{-1}\right)} \, = \, \Gamma_{R}\!\left( aea^{-1} \right)\! .\qedhere \end{align*} \end{proof} We point out the following result for the proof of Lemma~\ref{lemma:invertible.rank.idempotent} and Lemma~\ref{lemma:GL(R).covered.by.c_nW}. \begin{lem}\label{lemma:Gamma.annihilator.right-ideal} Let $R$ be a unital ring, let $a \in R$ and $e, f \in \E(R)$ be such that $1-f \leq e$, $f \in \rAnn(1-a)$, and $(1-a)R \subseteq eR$. Then $a\in eRe+1-e$. Moreover, if $a\in \GL(R)$, then $a\in \Gamma_{R}(e)$. \end{lem} \begin{proof} Since $(1-a)R \subseteq eR$, we see that \begin{equation}\label{eq--20} a \, = \, ea+(1-e)a \, = \, ea-e+e(1-a)+a \, = \, ea-e+1-a+a \, = \, ea+1-e. 
\end{equation} Moreover, as $1-f\leq e$ (i.e., $1-e\leq f$) and $(1-a)f=0$ (i.e., $af=f$), \begin{equation}\label{eq--21} a \, = \, ae+a(1-e) \, = \, ae+af(1-e) \, = \, ae+1-e. \end{equation} Thus, we conclude that \begin{equation*} a \, \stackrel{\eqref{eq--20}}{=} \, ea+1-e \, \stackrel{\eqref{eq--21}}{=} \, e(ae+1-e)+1-e \, = \, eae+1-e. \end{equation*} Finally, if $a\in \GL(R)$, then $a\in \Gamma_{R}(e)$ by Lemma~\ref{lemma:subgroup.unit.group}. \end{proof} \begin{lem}\label{lemma:invertible.rank.idempotent} Let $\rho$ be a pseudo-rank function on a regular ring $R$, and let $e \in \E(R)$. For every $a\in \Gamma_{R}(e)$, there is $f \in \E(R)$ with $f \leq e$, $a \in \Gamma_{R}(f)$ and $\rho(f) \leq 2\rho(1-a)$. \end{lem} \begin{proof} Let $a\in \Gamma_{R}(e)$. Then $\rAnn(e-a) \subseteq eR$: indeed, if $x \in \rAnn(e-a)$, then \begin{displaymath} x \, = \, ex + (1-e)x \, \stackrel{a \in \Gamma_{R}(e)}{=} \, ex + (1-e)ax \, \stackrel{(e-a)x=0}{=} \, ex + (1-e)ex \, = \, ex \, \in \, eR . \end{displaymath} Thanks to Remark~\ref{remark:bijection.annihilator} and~\cite[Lemma~7.5]{SchneiderGAFA}, there exists $e'\in \E(R)$ such that $e' \leq e$ and $e'R=\rAnn(e-a)$. Note that $\E(R) \ni e-e' \leq e$ due to Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1}, in particular $(e-e')R \subseteq eR$. Furthermore, $1-a = e-eae \in eR$ and therefore $(1-a)R \subseteq eR$. Hence, $(e-e')R+(1-a)R \subseteq eR$. By Remark~\ref{remark:bijection.annihilator} and~\cite[Lemma~7.5]{SchneiderGAFA}, we thus find $f\in \E(R)$ with $e-e' \leq f \leq e$ and $fR=(e-e')R+(1-a)R$. 
We conclude that \begin{align*} \rho(e-e') \, &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rho(e)-\rho(e') \, \stackrel{\ref{lemma:pseudo.dimension.function}\ref{lemma:rank.dimension.annihilator}}{=} \, \rho(e)-1+\rho(e-a) \\ &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rho(e-a)-\rho(1-e) \, = \, \rho(e(1-a)e - (1-e))-\rho(1-e) \\ &\stackrel{\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.2}}{=} \, \rho(e(1-a)e) + \rho(-(1-e)) - \rho(1-e) \, = \, \rho(e(1-a)e) \, \leq \, \rho(1-a) \end{align*} and therefore \begin{displaymath} \rho(f) \, \leq \, \rho(e-e')+\rho(1-a) \, \leq \, 2\rho(1-a). \end{displaymath} Finally, we see that $f' \defeq e'+1-e \in \E(R)$ by Remark~\ref{remark:quantum.logic} and \begin{displaymath} (1-a)f' \, = \, (1-a)e'+(1-e)-a(1-e) \, = \, (1-e)-(1-e) \, = \, 0, \end{displaymath} thus $f' \in \rAnn(1-a)$. Moreover, $1-f' = e-e' \leq f$ and $(1-a)R \subseteq fR$. Therefore, $a\in \Gamma_{R}(f)$ thanks to Lemma~\ref{lemma:Gamma.annihilator.right-ideal}. \end{proof} We conclude this section with a discussion of some general matters of convergence in complete rank rings, which will turn out to be useful later in Sections~\ref{section:decomposition.into.locally.special.elements} and~\ref{section:steinhaus.property}. \begin{remark} Let $(R,\rho)$ be a rank ring. If $I$ is any set and $(e_{i})_{i \in I}$ is a family of pairwise orthogonal elements of $\E(R)$, then \begin{displaymath} \forall n \in \N_{>0} \colon \qquad \bigl\lvert \bigl\{ i \in I \bigm\vert \rho(e_{i}) \geq \tfrac{1}{n} \bigr\} \bigr\rvert \, \leq \, n , \end{displaymath} thus $\{ i \in I \mid e_{i} \ne 0 \} = \bigcup_{n=1}^{\infty} \bigl\{ i \in I \bigm\vert \rho(e_{i}) \geq \tfrac{1}{n} \bigr\}$ is countable.
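Indeed, given $n \in \N_{>0}$ and a finite subset $F \subseteq \{ i \in I \mid \rho(e_{i}) \geq \tfrac{1}{n} \}$, additivity of $\rho$ on sums of pairwise orthogonal idempotents yields
\begin{displaymath}
\tfrac{\lvert F \rvert}{n} \, \leq \, \sum\nolimits_{i \in F} \rho(e_{i}) \, = \, \rho\!\left( \sum\nolimits_{i \in F} e_{i} \right) \, \leq \, 1 ,
\end{displaymath}
that is, $\lvert F \rvert \leq n$.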
\end{remark} \begin{lem}\label{lemma:convergence.sequences} Let $(R,\rho)$ be a complete rank ring, let $I$ be a set, and let $(e_{i})_{i \in I}$ be a family of pairwise orthogonal elements of $\E(R)$. Then \begin{displaymath} \prod\nolimits_{i \in I} e_{i}Re_{i} \, \longrightarrow \, R, \quad (a_{i})_{i \in I} \, \longmapsto \, \sum\nolimits_{i \in I} a_{i} \defeq \lim\nolimits_{F \to \Pfin(I)} \sum\nolimits_{i \in F} a_{i} \end{displaymath} is a well-defined $\ZZ(R)$-algebra embedding and, for every $(a_{i})_{i \in I} \in \prod\nolimits_{i \in I} e_{i}Re_{i}$,\footnote{According to Lemma~\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.3}, there is no need to specify an order for the factors in the product.} \begin{displaymath} \sum\nolimits_{i \in I} a_{i} + 1 - \sum\nolimits_{i \in I} e_{i} \, = \, \lim\nolimits_{F \to \Pfin(I)} \prod\nolimits_{i \in F} a_{i}+1-e_{i} . \end{displaymath} In particular, the following hold. \begin{enumerate} \item\label{lemma:convergence.idempotent} $\sum\nolimits_{i \in I} e_{i} \in \E(R)$. \item\label{lemma:convergence.monotone} If $J \subseteq J' \subseteq I$, then $\sum\nolimits_{i \in J} e_{i} \leq \sum\nolimits_{i \in J'} e_{i}$. \item\label{lemma:convergence.orthogonal} If $J,J' \subseteq I$ and $J \cap J' = \emptyset$, then $\left( \sum\nolimits_{i \in J} e_{i}\right)\! \perp \! \left( \sum\nolimits_{i \in J'} e_{i}\right)$. \end{enumerate} \end{lem} \begin{proof} To prove that the map is well-defined, let $(a_{i})_{i \in I} \in \prod\nolimits_{i \in I} e_{i}Re_{i}$ and consider \begin{displaymath} s \, \defeq \, \sup\nolimits_{F \in \Pfin(I)} \rho\!\left(\sum\nolimits_{i \in F} a_{i}\right)\! \, \in \, [0,1] . \end{displaymath} We will prove that the net $(\sum_{i \in F} a_{i})_{F \in \Pfin(I)}$ is a Cauchy net in $(R,d_{\rho})$. For this purpose, let $\epsilon \in \R_{>0}$. Then there exists some $F_{0} \in \Pfin(I)$ such that $s \leq \rho(\sum_{i \in F_{0}} a_{i}) + \epsilon$.
Thus, if $F,F' \in \Pfin(I)$ and $F_{0} \subseteq F \cap F'$, then \begin{displaymath} \rho\!\left( \sum\nolimits_{i \in F \mathbin{\triangle} F'} a_{i}\right) \! + \rho\!\left( \sum\nolimits_{i \in F_{0}} a_{i}\right) \! \, \stackrel{\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.2}}{=} \, \rho\!\left(\sum\nolimits_{i \in (F \mathbin{\triangle} F') \cup F_{0}} a_{i}\right)\! \, \leq \, s \end{displaymath} and therefore \begin{align*} \rho\!\left( \sum\nolimits_{i \in F} a_{i} - \sum\nolimits_{i \in F'} a_{i}\right) \! \, &= \, \rho\!\left( \sum\nolimits_{i \in F\setminus F'} a_{i} + \sum\nolimits_{i \in F'\setminus F} -a_{i} \right) \\ &\stackrel{\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.2}}{=} \, \sum\nolimits_{i \in F \mathbin{\triangle} F'} \rho(a_{i}) \, \stackrel{\ref{lemma:sum.embedding}\ref{lemma:sum.embedding.2}}{=} \, \rho\!\left( \sum\nolimits_{i \in F \mathbin{\triangle} F'} a_{i} \right) \\ & \leq \, s - \rho\!\left( \sum\nolimits_{i \in F_{0}} a_{i}\right)\! \, \leq \, \epsilon . \end{align*} This shows that $(\sum_{i \in F} a_{i})_{F \in \Pfin(I)}$ is indeed a Cauchy net, hence converges in $(R,d_{\rho})$ due to completeness of the latter. In turn, $\phi \colon \prod\nolimits_{i \in I} e_{i}Re_{i} \to R, \, a \mapsto \sum\nolimits_{i \in I} a_{i}$ is well defined. Since $R$ is a Hausdorff topological ring with respect to the $\rho$-topology by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}, it follows from Lemma~\ref{lemma:sum.embedding} that $\phi$ is a $\ZZ(R)$-algebra homomorphism, which readily entails the assertions~\ref{lemma:convergence.idempotent}, \ref{lemma:convergence.monotone}, and~\ref{lemma:convergence.orthogonal}. 
Finally, we observe that $\phi$ is injective: if $a \in \prod\nolimits_{i \in I} e_{i}Re_{i}$ and $\phi(a)=0$, then \begin{displaymath} a_{i} \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}}{=} \, e_{i}\sum\nolimits_{j \in I} a_{j} \, = \, e_{i}\phi(a) \, = \, e_{i}0 \, = \, 0 \end{displaymath} for each $i \in I$, i.e., $a=0$. \end{proof} \begin{lem}\label{lemma:local.decomposition} Let $(R,\rho)$ be a complete rank ring, let $I$ be a set, let $(e_{i})_{i \in I}$ be a family of pairwise orthogonal elements of $\E(R)$, and let $e \defeq \sum\nolimits_{i \in I} e_{i}$. Then both \begin{displaymath} \prod\nolimits_{i \in I} \GL(e_{i}Re_{i}) \, \longrightarrow \, \GL(eRe), \quad (a_{i})_{i \in I} \, \longmapsto \, \sum\nolimits_{i \in I} a_{i} \end{displaymath} and \begin{displaymath} \prod\nolimits_{i \in I} \Gamma_{R}(e_{i}) \, \longrightarrow \, \Gamma_{R}(e), \quad (a_{i})_{i \in I} \, \longmapsto \, \prod\nolimits_{i \in I} a_{i} \defeq \lim\nolimits_{F \to \Pfin(I)} \prod\nolimits_{i \in F} a_{i} \end{displaymath} are group embeddings. \end{lem} \begin{proof} This is a consequence of Lemma~\ref{lemma:convergence.sequences} and Lemma~\ref{lemma:subgroup.unit.group}. \end{proof} \section{Geometry of involutions}\label{section:geometry.of.involutions} In this section we prove that two involutions of an irreducible, continuous ring are conjugate in the associated unit group if and only if they are at equal rank distance from the identity (Proposition~\ref{proposition:conjugation.involution.rank}). Moreover, we provide two geometric decomposition results for involutions in non-discrete irreducible, continuous rings (Lemma~\ref{lemma:involution.rank.distance.char.neq2} and Lemma~\ref{lemma:involution.rank.distance.char=2}), relevant for the proof of Lemma~\ref{lemma:Gamma_{R}(e).subset.W^192}.
The latter also builds on the fact that the unit group of every non-discrete irreducible, continuous ring contains a separable, uncountable, Boolean subgroup (Corollary~\ref{corollary:boolean.subgroup}). Let us first clarify some notation. The set of \emph{involutions} of a group $G$ is defined as \begin{displaymath} \I(G) \, \defeq \, \bigl\{ g\in G \bigm\vert \, g^{2} = 1 \bigr\} . \end{displaymath} For a unital ring $R$, we consider $\I(R) \defeq \I(\GL(R))$ as well as the set \begin{displaymath} \Nil_{2}(R) \, \defeq \, \bigl\{ a\in R \bigm\vert \, a^{2} = 0 \bigr\} \end{displaymath} of \emph{$2$-nilpotent} elements of $R$. We observe that, for a unital ring $R$, the action of $\GL(R)$ on $R$ by conjugation preserves the sets $\E(R)$, $\I(R)$ and $\Nil_{2}(R)$. Our considerations are based on the following correspondence between involutions on the one hand and idempotents (resp., 2-nilpotent elements) on the other. As is customary, the \emph{characteristic} of a unital ring $R$ will be denoted by $\cha(R)$. \begin{remark}\label{remark:ehrlich} Let $R$ be a unital ring. \begin{enumerate} \item\label{remark:bijection.idempotent.involution.char.neq2} If $\cha(R)\neq2$, then \begin{align*} \qquad &\E(R) \, \longrightarrow \, \I(R), \quad e \, \longmapsto \, 2e-1, \\ &\I(R) \, \longrightarrow \, \E(R), \quad u \, \longmapsto \, \tfrac{1}{2}(1+u) \end{align*} are mutually inverse bijections, $\GL(R)$-equivariant with respect to conjugation. This observation is due to~\cite[Lemma~1]{Ehr56}. \item\label{remark:bijection.2-nilpotent.involution.char=2} If $\cha(R)=2$, then \begin{align*} \qquad &\Nil_{2}(R) \, \longrightarrow \, \I(R), \quad x \, \longmapsto \, 1+x, \\ &\I(R) \, \longrightarrow \, \Nil_{2}(R), \quad u \, \longmapsto \, 1+u \end{align*} are mutually inverse bijections, $\GL(R)$-equivariant with respect to conjugation. This follows by straightforward calculation.
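For instance, if $\cha(R)=2$ and $x \in \Nil_{2}(R)$, then
\begin{displaymath}
\qquad (1+x)^{2} \, = \, 1 + 2x + x^{2} \, = \, 1 ,
\end{displaymath}
so that $1+x \in \I(R)$; the remaining verifications are equally direct.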
\end{enumerate} \end{remark} Our later considerations crucially rely on the following two insights about idempotent (resp., 2-nilpotent) elements and the resulting characterization of conjugacy of involutions (Proposition~\ref{proposition:conjugation.involution.rank}). \begin{lem}[\cite{Ehr56}, Lemma~9]\label{lemma:conjugation.idempotent} Let $R$ be an irreducible, continuous ring and $e,f \in \E(R)$. Then \begin{displaymath} \rk_{R}(e) = \rk_{R}(f) \ \ \Longleftrightarrow \ \ \exists g \in \GL(R)\colon \ geg^{-1} = f. \end{displaymath} \end{lem} \begin{proof} ($\Longleftarrow$) This follows from the $\GL(R)$-invariance of $\rk_{R}$. ($\Longrightarrow$) This is proved in~\cite[Lemma~9]{Ehr56}. (The standing assumption of~\cite{Ehr56} excluding characteristic 2 is not used in this particular argument.) \end{proof} \begin{lem}\label{lemma:conjugation.2-nilpotent} Let $R$ be an irreducible, continuous ring and $a,b \in\Nil_{2}(R)$. Then \begin{displaymath} \rk_{R}(a) = \rk_{R}(b) \ \ \Longleftrightarrow \ \ \exists g \in \GL(R) \colon \, gag^{-1} = b. \end{displaymath} \end{lem} \begin{proof} ($\Longleftarrow$) This follows from the $\GL(R)$-invariance of $\rk_{R}$. ($\Longrightarrow$) Suppose that $\rk_{R}(a) = \rk_{R}(b)$. Due to~\cite[II.XVII, Theorem~17.1(d), p.~224]{VonNeumannBook}, there exist $u,v \in \GL(R)$ such that $b=uav$. By Remark~\ref{remark:bijection.annihilator} and Remark~\ref{remark:regular.idempotent.ideals}, there exists $f \in \E(R)$ with $fR = \rAnn(a)$. Then \begin{displaymath} \tilde{f} \, \defeq \, v^{-1}fv \, \in \, \E(R) \end{displaymath} and \begin{displaymath} \rAnn(b) \, = \, \rAnn(uav) \, = \, \rAnn(av) \, = \, v^{-1}\rAnn(a) \, = \, v^{-1}fR \, = \, v^{-1}fvR \, = \, \tilde{f}R. \end{displaymath} Since $a,b\in\Nil_{2}(R)$, we see that \begin{displaymath} aR \subseteq \rAnn(a) = fR, \qquad bR \subseteq \rAnn(b) = \tilde{f}R. 
\end{displaymath} Thus, by~\cite[Lemma~7.5]{SchneiderGAFA}, there exist $e,\tilde{e} \in \E(R)$ such that \begin{displaymath} eR=aR, \quad e\leq f,\qquad \tilde{e}R=bR, \quad \tilde{e}\leq \tilde{f}. \end{displaymath} From $uaR=uavR=bR$, we infer that $ueR=uaR=bR=\tilde{e}R$. Since \begin{align*} \rk_{R}(f-e) \, &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rk_{R}(f)-\rk_{R}(e) \, = \, \rk_{R}(v^{-1}fv)-\rk_{R}(a) \\ & = \, \rk_{R}(\tilde{f})-\rk_{R}(\tilde{e}) \, = \, \rk_{R}(\tilde{f}-\tilde{e}), \end{align*} the work of von Neumann~\cite[II.XVII, Theorem~17.1(d), p.~224]{VonNeumannBook} asserts the existence of $w,\tilde{w}\in\GL(R)$ such that $w(f-e)\tilde{w}=(\tilde{f}-\tilde{e})$. Therefore, $w(f-e)R =(\tilde{f}-\tilde{e})R$. Moreover, we observe that \begin{displaymath} v^{-1}(1-f)R \, = \, v^{-1}(1-f)vR \, = \, (1-v^{-1}fv)R \, = \, (1-\tilde{f})R. \end{displaymath} Now, define \begin{displaymath} g\defeq v^{-1}(1-f)+w(f-e)+ue,\qquad h\defeq v(1-\tilde{f})+w^{-1}(\tilde{f}-\tilde{e})+u^{-1}\tilde{e}. \end{displaymath} Then \begin{align*} hg \, &= \, v(1-\tilde{f})v^{-1}(1-f)+v(1-\tilde{f})w(f-e)+v(1-\tilde{f})ue\\ &\qquad +w^{-1}(\tilde{f}-\tilde{e})v^{-1}(1-f)+w^{-1}(\tilde{f}-\tilde{e})w(f-e)+w^{-1}(\tilde{f}-\tilde{e})ue\\ &\qquad +u^{-1}\tilde{e}v^{-1}(1-f)+u^{-1}\tilde{e}w(f-e)+u^{-1}\tilde{e}ue\\ &= \,vv^{-1}(1-f)+v(1-\tilde{f})(\tilde{f}-\tilde{e})w(f-e)+v(1-\tilde{f})\tilde{e}ue\\ &\qquad +w^{-1}(\tilde{f}-\tilde{e})(1-\tilde{f})v^{-1}(1-f)+w^{-1}w(f-e)+w^{-1}(\tilde{f}-\tilde{e})\tilde{e}ue\\ &\qquad +u^{-1}\tilde{e}(1-\tilde{f})v^{-1}(1-f)+u^{-1}\tilde{e}(\tilde{f}-\tilde{e})w(f-e)+u^{-1}ue\\ &= \, (1-f)+(f-e)+e \, = \, 1 \end{align*} and hence $g \in \GL(R)$ by Remark~\ref{remark:properties.rank.function}\ref{remark:directly.finite}. 
Finally, \begin{align*} ga \, &= \, v^{-1}(1-f)a+w(f-e)a+uea \, = \, v^{-1}(1-f)ea+w(f-e)ea+ua \, = \, ua, \\ bg \, &= \, bv^{-1}(1-f)+bw(f-e)+b\!\!\smallunderbrace{ue}_{\in\,bR} \, = \, ua(1-f)+b(\tilde{f}-\tilde{e})w(f-e) \\ & = \, ua(1-f) \, = \, ua(1-f)+uaf\, = \, ua. \end{align*} Hence $ga = ua = bg$, that is, $b=gag^{-1}$, as desired. \end{proof} \begin{prop}\label{proposition:conjugation.involution.rank} Let $R$ be an irreducible, continuous ring and $a,b\in \I(R)$. Then \begin{displaymath} \rk_{R}(1-a) = \rk_{R}(1-b) \ \ \Longleftrightarrow \ \ \exists g \in \GL(R) \colon \, gag^{-1} = b. \end{displaymath} \end{prop} \begin{proof} ($\Longleftarrow$) This follows from the $\GL(R)$-invariance of $\rk_{R}$. ($\Longrightarrow$) We distinguish two cases, depending on the characteristic of $R$. First, let us suppose that $\cha(R)\neq 2$. Since $\ZZ(R)$ is a field due to Remark~\ref{remark:irreducible.center.field} and $\cha(\ZZ(R)) = \cha(R) \ne 2$, we see that $2 \in \ZZ(R)\setminus \{ 0 \} \subseteq \GL(R)$. Moreover, by Remark~\ref{remark:ehrlich}\ref{remark:bijection.idempotent.involution.char.neq2}, there exist $e,f \in \E(R)$ such that $a=2e-1$ and $b=2f-1$. In turn, \begin{align*} 1-\rk_{R}(e) \, &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rk_{R}(1-e) \, \stackrel{2 \in \GL(R)}{=} \, \rk_{R}(2(1-e)) \, = \, \rk_{R}(1-2e+1) \\ & = \, \rk_{R}(1-a) \, = \, \rk_{R}(1-b) \, = \, \rk_{R}(1-2f+1) \, = \, \rk_{R}(2(1-f)) \\ & \stackrel{2 \in \GL(R)}{=} \, \rk_{R}(1-f) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\rk_{R}(f), \end{align*} i.e., $\rk_{R}(e)=\rk_{R}(f)$. Hence, by Lemma~\ref{lemma:conjugation.idempotent}, there exists $g\in \GL(R)$ with $f=geg^{-1}$, which readily entails that $b=gag^{-1}$ by Remark~\ref{remark:ehrlich}\ref{remark:bijection.idempotent.involution.char.neq2}. Second, suppose that $\cha(R)=2$.
Then Remark~\ref{remark:ehrlich}\ref{remark:bijection.2-nilpotent.involution.char=2} asserts the existence of $x,y\in \Nil_2(R)$ such that $a=1+x$ and $b=1+y$. We observe that \begin{displaymath} \rk_{R}(x) \, = \, \rk_{R}(1-a) \, = \, \rk_{R}(1-b) \, = \, \rk_{R}(y). \end{displaymath} Thanks to Lemma~\ref{lemma:conjugation.2-nilpotent}, there thus exists $g\in \GL(R)$ such that $y=gxg^{-1}$, wherefore $b=gag^{-1}$ according to Remark~\ref{remark:ehrlich}\ref{remark:bijection.2-nilpotent.involution.char=2}. \end{proof} The following two decomposition results are pivotal to the proof of Lemma~\ref{lemma:Gamma_{R}(e).subset.W^192}. \begin{lem}\label{lemma:involution.rank.distance.char.neq2} Let $R$ be a non-discrete irreducible, continuous ring with $\cha(R)\neq 2$. Then \begin{displaymath} \I(R) \, \subseteq \, \left\{ gh \left\vert \, g,h\in\I(R), \, \rk_{R}(1-g) = \rk_{R}(1-h) = \tfrac{1}{2} \right\}. \right. \end{displaymath} \end{lem} \begin{proof} Let $a \in \I(R)$. By Remark~\ref{remark:ehrlich}\ref{remark:bijection.idempotent.involution.char.neq2}, there exists $e \in \E(R)$ such that $a = 2e-1$. Due to Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}\ref{lemma:order.1}, we find $f,f' \in \E(R)$ with $f \leq e$, $\rk_{R}(f) = \tfrac{1}{2}\rk_{R}(e)$, $f'\leq 1-e$, and $\rk_{R}(f') = \tfrac{1}{2}\rk_{R}(1-e)$. Note that \begin{displaymath} g \, \defeq \, 2(e-f+f')-1, \qquad h \, \defeq \, 2(1-f-f')-1 \end{displaymath} belong to $\I(R)$ by Remark~\ref{remark:quantum.logic} and Remark~\ref{remark:ehrlich}\ref{remark:bijection.idempotent.involution.char.neq2}. As $\ZZ(R)$ is a field by Remark~\ref{remark:irreducible.center.field} and $\cha(\ZZ(R)) = \cha(R) \ne 2$, we see that $2 \in \ZZ(R)\setminus \{ 0 \} \subseteq \GL(R)$.
Hence, \begin{align*} \rk_{R}(1-g) \, &= \, \rk_{R}((1-e)-f'+f) \, = \, \rk_{R}((1-e)-f') + \rk_{R}(f) \\ &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rk_{R}(1-e) - \rk_{R}(f') + \rk_{R}(f) \, = \, \tfrac{1}{2}\rk_{R}(1-e) + \tfrac{1}{2}\rk_{R}(e) \, = \, \tfrac{1}{2}, \\ \rk_{R}(1-h) \, &= \, \rk_{R}(f+f') \, = \, \rk_{R}(f) + \rk_{R}(f') \, = \, \tfrac{1}{2}\rk_{R}(e) + \tfrac{1}{2}\rk_{R}(1-e) \, = \, \tfrac{1}{2}. \end{align*} Finally, as desired, \begin{align*} gh \, &= \, (2(e-f+f')-1)(2(1-f-f')-1)\\ &= \, 4(e-f+f')(1-f-f')-2(e-f+f')-2(1-f-f')+1\\ &= \, 4e-4f-2e+4f-1 \, = \, 2e-1 \, = \, a. \qedhere \end{align*} \end{proof} \begin{lem}\label{lemma:involution.rank.distance.char=2} Let $R$ be a non-discrete irreducible, continuous ring with $\cha(R) = 2$. Then \begin{displaymath} \I(R) \, \subseteq \, \left\{ gh \left\vert \, g,h\in\I(R), \, \rk_{R}(1-g) = \rk_{R}(1-h) = \tfrac{1}{4} \right\}. \right. \end{displaymath} \end{lem} \begin{proof} Let $a \in \I(R)$. By Remark~\ref{remark:ehrlich}\ref{remark:bijection.2-nilpotent.involution.char=2}, there exists $b \in \Nil_{2}(R)$ such that $a=1+b$. According to Remark~\ref{remark:regular.idempotent.ideals}, we find $e \in \E(R)$ such that $Rb = Re$. By Remark~\ref{remark:bijection.annihilator}, it follows that $\rAnn(b)=\rAnn(Rb)=\rAnn(Re)=(1-e)R$. Since $b\in\Nil_{2}(R)$, we see that $bR\subseteq \rAnn(b)=(1-e)R$. Consequently, \cite[Lemma~7.5]{SchneiderGAFA} asserts the existence of some $f'\in \E(R)$ such that $bR=f'R$ and $f'\leq 1-e$. Using Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1}, we infer that $f \defeq 1-e-f' \in \E(R)$ and $f\leq 1-e$, i.e., $e \perp f$. Clearly, $bR=f'R=(1-e-f)R$.
Note that \begin{align*} \rk_{R}(b) \, &= \, \rk_{R}(1-e-f) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, \rk_{R}(1-e)-\rk_{R}(f)\\ &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\rk_{R}(e)-\rk_{R}(f) \, = \, 1-\rk_{R}(b)-\rk_{R}(f), \end{align*} wherefore \begin{equation}\label{eq5} \rk_{R}(f) \, = \, 1-2\rk_{R}(b). \end{equation} Furthermore, thanks to Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}\ref{lemma:order.1}, there exists $e_{0} \in \E(R)$ such that $e_{0} \leq e$ and $\rk_{R}(e_{0}) = \tfrac{1}{2}\rk_{R}(e)$. Then $e_{1} \defeq e-e_{0} \in \E(R)$ and $e_{1} \leq e$ due to Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1} and $\rk_{R}(e_{1}) = \rk_{R}(e)-\rk_{R}(e_{0}) = \tfrac{1}{2}\rk_{R}(e)$ by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}. In a similar manner, using Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete}, Lemma~\ref{lemma:order}\ref{lemma:order.1}, Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1} and Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}, we find $f_{0},f_{1} \in \E(R)$ such that \begin{align*} &f_{0} \leq f, \quad f_{1} \leq f, \quad f_{0} \perp f_{1}, \quad \rk_{R}(f_{0}) = \rk_{R}(f_{1}) = \tfrac{1}{4}\rk_{R}(f). \end{align*} By Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Lemma~\ref{lemma:conjugation.idempotent}, there exists $u\in\GL(fRf)$ with $uf_0=f_1u$. Define \begin{displaymath} x_{i} \, \defeq \, uf_{0} + be_{i} \qquad (i\in\{0,1\}). 
\end{displaymath} For any $i,j \in \{0,1\}$, we see that \begin{align*} x_{i}x_{j} \, &= \, (uf_{0}+be_{i})(uf_{0}+be_{j}) \, = \, uf_{0}uf_{0}+uf_{0}be_{j}+be_{i}uf_{0}+be_{i}be_{j} \\ &= \, u\!\smallunderbrace{f_{0}f_{1}}_{=\,0}\!u+uf_{0}\!\smallunderbrace{f(1-e-f)}_{=\,0}\!be_{j}+be_{i}\!\smallunderbrace{ef}_{=\,0}\!f_{1}u+be_{i}\!\smallunderbrace{e(1-e-f)}_{=\,0}\!be_{j} \, = \, 0. \end{align*} Hence, $x_{0},x_{1} \in \Nil_{2}(R)$ and $x_{1}x_{0} =x_{0}x_{1} = 0$. Since $e \in Rb$ and $(1-e-f)b = b$, we find $c \in R$ such that $cb = e$ and $c(1-e-f) = c$. Moreover, let us consider the inverse $u^{-1}\in\GL(fRf)\subseteq fRf$. Then, for each $i\in\{0,1\}$, \begin{align} \left(u^{-1}+c\right)\!(uf_{0}+be_{i}) \, &= \, u^{-1}uf_{0}+u^{-1}be_{i}+cuf_{0}+cbe_{i}\nonumber\\ &= \, f_{0}+u^{-1}\!\smallunderbrace{f(1-e-f)}_{=\,0}be_{i}+c\smallunderbrace{(1-e-f)f}_{=\,0}\!f_{1}u+ee_{i}\nonumber\\ &= \, f_{0}+e_{i}\label{eq4} \end{align} and therefore \begin{align*} \rk_{R}(f_{0})+\rk_{R}(e_{i}) \, &= \, \rk_{R}(f_{0}+e_{i}) \, \stackrel{\eqref{eq4}}{=} \, \rk_{R}\!\left(\left(u^{-1}+c\right)\!(uf_{0}+be_{i})\right)\\ &\leq \, \rk_{R}(uf_{0}+be_{i}) \, = \, \rk_{R}(x_{i}) \\ &\stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:inequation.sum.pseudo.rank}}{\leq} \, \rk_{R}(uf_{0})+\rk_{R}(be_{i}) \, \leq \, \rk_{R}(f_{0})+\rk_{R}(e_{i}) , \end{align*} which entails that \begin{displaymath} \rk_{R}(x_{i}) = \rk_{R}(f_{0})+\rk_{R}(e_{i}) = \tfrac{1}{4}\rk_{R}(f)+\tfrac{1}{2}\rk_{R}(e)\stackrel{\eqref{eq5}}{=} \tfrac{1}{4}(1-2\rk_{R}(b))+\tfrac{1}{2}\rk_{R}(b) = \tfrac{1}{4}. \end{displaymath} We conclude that, for each $i\in\{0,1\}$, \begin{displaymath} g_{i} \, \defeq \, 1+x_{i} \, \stackrel{\ref{remark:ehrlich}\ref{remark:bijection.2-nilpotent.involution.char=2}}{\in} \, \I(R) \end{displaymath} and $\rk_{R}(1-g_{i}) = \rk_{R}(x_{i}) = \tfrac{1}{4}$. 
Finally, as intended, \begin{align*} &g_{0}g_{1} \, = \, (1+x_{0})(1+x_{1}) \, = \, 1+x_{0}+x_{1}+x_{0}x_{1} \, = \, 1+x_{0}+x_{1} \\ &\ \ = \, 1+uf_{0}+be_{0}+uf_{0}+be_{1} \, \stackrel{\cha(R)=2}{=} \, 1+b(e_{0}+e_{1}) \, = \, 1+be \, = \, 1+b \, = \, a. \qedhere \end{align*} \end{proof} \enlargethispage{9mm} Turning to this section's final results, let us recall that a group $G$ is said to be \emph{Boolean} if $x^{2} = 1$ for every $x \in G$. \begin{remark} Let $S$ be a commutative unital ring. Then $(\E(S),\vee,\wedge,\neg,0,1)$, with the operations defined by \begin{displaymath} x\vee y \, \defeq \, x+y-xy,\qquad x\wedge y \, \defeq \, xy, \qquad \neg x \, \defeq \, 1-x \end{displaymath} for all $x,y \in \E(S)$, constitutes a Boolean algebra (see, e.g.,~\cite[Chapter~8, p.~83]{GoodearlBook}). In particular, $(\E(S),\mathbin{\triangle})$ with \begin{displaymath} x\mathbin{\triangle} y \, \defeq \, (x\vee y)\wedge \neg(x\wedge y) \, = \, x+y-2xy \qquad (x,y\in\E(S)) \end{displaymath} is a Boolean group. This Boolean group will be considered in Proposition~\ref{proposition:homomorphism.idempotent.eRe.GL(R)} and the proof of Corollary~\ref{corollary:boolean.subgroup}. \end{remark} \begin{prop}\label{proposition:homomorphism.idempotent.eRe.GL(R)} Let $R$ be a unital ring, $e \in \E(R)$, $a \in \GL(R)$ with $e \perp aea^{-1}$, and $S$ be a commutative unital subring of $eRe$. Then \begin{displaymath} \phi \colon \, \E(S) \, \longrightarrow\, \GL(R), \quad x \, \longmapsto \, ax+xa^{-1}+1-x-axa^{-1} \end{displaymath} is a well-defined group embedding. Furthermore, if $R$ is a regular ring and $\rho$ is a pseudo-rank function on $R$, then \begin{equation*} \phi \colon \, (\E(S),d_{\rho}) \, \longrightarrow \, (\GL(R),d_{\rho}) \end{equation*} is 4-Lipschitz, hence continuous. \end{prop} \begin{proof} Note that $axa^{-1} \leq aea^{-1}$ for all $x \in \E(eRe)$. 
As $e \perp aea^{-1}$, this entails that \begin{equation}\label{stern} \forall x,y \in \E(eRe) \colon \quad x \perp aya^{-1}. \end{equation} Therefore, if $x,y \in \E(S)$, then \begin{align*} \phi(x)\phi(y) \, &= \, \!\left(ax+xa^{-1}+1-x-axa^{-1}\right)\!\left(ay+ya^{-1}+1-y-aya^{-1}\right)\\ &= \, \smallunderbrace{axay}_{=\,axa\mathrlap{ya^{-1}a \, \stackrel{\eqref{stern}}{=}\,0}}+\:axya^{-1}+ax(1-y)-\smallunderbrace{axaya^{-1}}_{\stackrel{\eqref{stern}}{=}\,0}\\ &\qquad+xa^{-1}ay+\smallunderbrace{xa^{-1}ya^{-1}}_{=\,a^{-1}axa^{-1}y\mathrlap{a^{-1}\,\stackrel{\eqref{stern}}{=}\,0}}+\:xa^{-1}-\smallunderbrace{xa^{-1}y}_{=\,a^{-1}a\mathrlap{xa^{-1}y\,\stackrel{\eqref{stern}}{=}\,0}}-\:xa^{-1}aya^{-1}\\ &\qquad+\smallunderbrace{(1-x)ay}_{=\,(1-x)ay\mathrlap{a^{-1}a\,\stackrel{\eqref{stern}}{=}\,ay}}+\:(1-x)ya^{-1}+(1-x)(1-y)-\smallunderbrace{(1-x)aya^{-1}}_{\stackrel{\eqref{stern}}{=} \, aya^{-1}}\\ &\qquad-axa^{-1}ay-\smallunderbrace{axa^{-1}ya^{-1}}_{\stackrel{\eqref{stern}}{=}\,0}-\smallunderbrace{axa^{-1}(1-y)}_{\stackrel{\eqref{stern}}{=}\,axa^{-1}}+\:axa^{-1}aya^{-1}\\ &= \, axya^{-1}+ax-axy+xy+xa^{-1}-xya^{-1}+ay\\ &\qquad+(1-x)ya^{-1}+(1-x)(1-y)-aya^{-1}-axy-axa^{-1}+axya^{-1}\\ &= \, ax+ay-2axy+xa^{-1}+ya^{-1}-2xya^{-1}\\ &\qquad+1-x-y+2xy-axa^{-1}-aya^{-1}+2axya^{-1}\\ &= \, a(x\mathbin{\triangle} y)+(x\mathbin{\triangle} y)a^{-1}+1-(x\mathbin{\triangle} y)-a(x\mathbin{\triangle} y)a^{-1}\\ &= \, \phi(x\mathbin{\triangle} y). \end{align*} Since $\phi(0) = 1$, it follows that $\phi \colon \E(S) \to \GL(R)$ is a well-defined group homomorphism. From $e\perp aea^{-1}$, we deduce that $eR \cap aeR = eR \cap aea^{-1}R = \{ 0 \}$. 
Now, if $x \in \E(S)$ and $\phi(x) = 1$, then \begin{displaymath} eR \, \ni \, x \, = \, \phi(x)x \, = \, ax + \smallunderbrace{xa^{-1}x}_{=\,a^{-1}ax\mathrlap{a^{-1}x \, \stackrel{\eqref{stern}}{=} \, 0}} + \, \smallunderbrace{(1-x)x}_{=\,0} - \smallunderbrace{axa^{-1}x}_{=\,axa^{-1}\mathrlap{x \, \stackrel{\eqref{stern}}{=} \, 0}} \, = \, ax \, \in \, aeR \end{displaymath} and thus $x=0$. This shows that $\phi$ is an embedding. Finally, if $R$ is regular and $\rho$ is a pseudo-rank function on $R$, then \begin{align*} d_{\rho}(\phi(x),\phi(y)) \, &= \, \rho(\phi(x)-\phi(y))\\ & = \, \rho\!\left(a(x-y)+(x-y)a^{-1}-(x-y)-a(x-y)a^{-1}\right) \\ & \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:inequation.sum.pseudo.rank}}{\leq} \, \rho(a(x-y))+\rho\!\left((x-y)a^{-1}\right)\!+\rho(x-y) +\rho\!\left(a(x-y)a^{-1}\right) \\ & \stackrel{a \in \GL(R)}{=} \, 4\rho(x-y) \, = \, 4d_{\rho}(x,y) \end{align*} for all $x,y \in \E(S)$, i.e., $\phi \colon (\E(S),d_{\rho}) \to (\GL(R),d_{\rho})$ is indeed 4-Lipschitz. \end{proof} \begin{cor}\label{corollary:boolean.subgroup} Let $R$ be a non-discrete irreducible, continuous ring. Then $\GL(R)$ contains a separable, uncountable, Boolean subgroup. \end{cor} \begin{proof} According to the Hausdorff maximal principle, there exists a maximal chain~$E$ in $(\E(R),\leq)$. By~\cite[Corollary~7.20]{SchneiderGAFA}, the map ${{\rk_{R}}\vert_{E}}\colon (E,\leq)\to ([0,1],\leq)$ is an order isomorphism. Now, let $e\defeq({{\rk_{R}}\vert_{E}})^{-1}\!\left(\tfrac{1}{2}\right)$. Since $(E,\leq)$ is a chain, \begin{displaymath} E' \, \defeq \, \{ f \in E \mid f \leq e \} \, = \, \left\{ f \in E \left\vert \, \rk_{R}(f) \leq \tfrac{1}{2} \right\} \right. \end{displaymath} is a commutative submonoid of the multiplicative monoid of $eRe$. Hence, the unital subring $S$ of $eRe$ generated by $E'$ is commutative, too.
As ${{\rk_{R}}\vert_{E}}\colon (E,d_{R}) \to ([0,1],d)$ is isometric with respect to the standard metric $d$ on $[0,1]$ by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.order.isomorphism}, we see that $(E,d_{R})$ is separable, therefore $(E',d_{R})$ is separable. Due to Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}, we know that $eRe$ is a topological ring with regard to the topology generated by $d_{R}$, wherefore $(S,d_{R})$ is separable, thus $(\E(S),d_{R})$ is separable. From $E' \subseteq \E(S)$ and $\vert E'\vert =\left\lvert\left[0,\tfrac{1}{2}\right]\right\rvert$, we infer that $\E(S)$ is uncountable. Since \begin{displaymath} \rk_{R}(1-e) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\rk_{R}(e) \, = \, 1-\tfrac{1}{2} \, = \, \tfrac{1}{2} \, = \, \rk_{R}(e), \end{displaymath} Lemma~\ref{lemma:conjugation.idempotent} asserts the existence of $a \in \GL(R)$ with $aea^{-1} = 1-e$. By Proposition~\ref{proposition:homomorphism.idempotent.eRe.GL(R)}, there exists a continuous injective homomorphism from $\E(S)$ to $\GL(R)$, whose image necessarily constitutes a separable, uncountable, Boolean subgroup of $\GL(R)$. \end{proof} \section{Dynamical independence of ideals}\label{section:dynamical.independence} The focus of this section is on the following notion of dynamical independence of principal right ideals of a regular ring (Definition~\ref{definition:halperin.independence}), which turns out to be a key ingredient for the proof of our characterization of algebraic elements (Theorem~\ref{theorem:matrixrepresentation.case.algebraic}). \begin{definition}\label{definition:halperin.independence} Let $R$ be a regular ring, let $a\in R$, and let $m \in \N$. Then a pair $(I,J) \in \lat(R) \times \lat(R)$ will be called \emph{$(a,m)$-independent} if $(I,aI,\ldots,a^{m-1}I,J)\perp$. 
\end{definition} \begin{lem}\label{lemma:independence.inductive} Let $R$ be an irreducible, continuous ring, let $a\in R$, let $m \in \N$, let $J,J_{1} \in \lat(R)$, and let $P \defeq \{ I\in \lat(R) \mid \text{$(I,J)$ $(a,m)$-independent}, \, I \subseteq J_{1} \}$. Then the partially ordered set $(P,{\subseteq})$ is inductive. In particular, every element of $P$ is contained in a maximal element of $(P,{\subseteq})$. \end{lem} \begin{proof} First, we show that $P$ is closed in $(\lat(R),d_{\lat(R)})$. Note that $\delta_{\lat(R)} = \delta_{\rk_{R}}$ thanks to Theorem~\ref{theorem:unique.rank.function}, Lemma~\ref{lemma:pseudo.dimension.function} and Theorem~\ref{theorem:dimension.function.lattice}. Applying~\cite[V.7, p.~76, Lemma]{BirkhoffBook} (or~\cite[I.6, Satz 6.2(IV), p.~46]{MaedaBook}) and Lemma~\ref{lemma:pseudo.dimension.function}\ref{lemma:multplication.Lipschitz}, we see that \begin{displaymath} \Phi \colon \, (\lat(R),d_{\lat(R)}) \, \longrightarrow \, (\lat(R),d_{\lat(R)}), \quad I \, \longmapsto \, I+J_{1} \end{displaymath} is $1$-Lipschitz, \begin{displaymath} \Psi \colon \, (\lat(R),d_{\lat(R)}) \, \longrightarrow \, (\lat(R),d_{\lat(R)}), \quad I \, \longmapsto \, J \cap \left( \sum\nolimits_{i=0}^{m-1} a^{i}I\right) \end{displaymath} is $m$-Lipschitz, and for each $i\in\{0,\ldots,m-1\}$ the map \begin{displaymath} \Psi_{i} \colon \, (\lat(R),d_{\lat(R)}) \, \longrightarrow \, (\lat(R),d_{\lat(R)}), \quad I \, \longmapsto \, a^{i}I \cap \left( J + \sum\nolimits_{j=0,\,j\ne i}^{m-1} a^{j}I\right) \end{displaymath} is $m$-Lipschitz. The continuity of those maps implies that \begin{align*} P \, &= \, \{ I\in \lat(R) \mid (I,aI,\ldots,a^{m-1}I,J)\perp, \, I \subseteq J_{1} \} \\ &\stackrel{\ref{remark:independence.equivalence.intersection}}{=} \, {\Phi^{-1}(\{J_{1}\})} \cap {\Psi^{-1}(\{\{0\}\})} \cap {\bigcap\nolimits_{i=0}^{m-1}\Psi_i^{-1}(\{\{0\}\})} \end{align*} is closed in $(\lat(R),d_{\lat(R)})$, as claimed.
We now deduce that $(P,{\subseteq})$ is inductive. Since $\{ 0\} \in P$, we see that $P$ is non-void. Consider a non-empty chain $\mathcal{C}$ in $(P,\subseteq)$. As $P$ is closed in $(\lat(R),d_{{\lat(R)}})$, we conclude that $\overline{\mathcal{C}}\subseteq P$, where the closure is taken with respect to the topology induced by $d_{{\lat(R)}}$. For every $\epsilon\in\R_{>0}$, Proposition~\ref{proposition:dimension.function.continuous} asserts the existence of some $C\in\mathcal{C}$ such that \begin{displaymath} \delta_{\lat(R)}\!\left(\bigvee \mathcal{C}\right)\!-\delta_{\lat(R)}(C) \, \leq \, \epsilon , \end{displaymath} whence \begin{displaymath} d_{{\lat(R)}}\!\left(C,\bigvee\mathcal{C}\right)\! \, \stackrel{C\subseteq\bigvee\mathcal{C}}{=} \, \delta_{\lat(R)}\!\left(\bigvee \mathcal{C}\right)\! -\delta_{\lat(R)}(C) \, \leq \, \epsilon . \end{displaymath} Thus, $\bigvee \mathcal{C}\in \overline{\mathcal{C}} \subseteq P$. In turn, $\bigvee \mathcal{C}$ constitutes the desired upper bound for $\mathcal{C}$ in $(P,{\subseteq})$. This shows that $(P,\subseteq)$ is inductive. The final assertion follows by Zorn's lemma. \end{proof} \begin{lem}\label{lemma:independence.sum} Let $R$ be a regular ring, let $a\in R$, let $m \in \N$, let $I,I',J \in \lat(R)$, and let $J' \defeq \sum\nolimits_{i=0}^{m-1}a^{i}I + J$. If $(I,J)$ is $(a,m)$-independent and $(I',J')$ is $(a,m)$-independent, then $(I+I',J)$ is $(a,m)$-independent. \end{lem} \begin{proof} Suppose that $(I,J)$ is $(a,m)$-independent and $(I',J')$ is $(a,m)$-independent. Let $x_{0},\ldots,x_{m-1} \in I$, $y_{0},\ldots,y_{m-1} \in I'$ and $z \in J$ be such that \begin{displaymath} \sum\nolimits_{i=0}^{m-1} a^{i}x_{i} + \sum\nolimits_{i=0}^{m-1} a^{i}y_{i} + z \, = \, 0 . \end{displaymath} Since $(I',J')$ is $(a,m)$-independent and $\sum\nolimits_{i=0}^{m-1} a^{i}x_{i} + z \in J'$, it follows that $a^{i}y_{i} = 0$ for each $i \in \{ 0,\ldots,m-1\}$ and $\sum\nolimits_{i=0}^{m-1} a^{i}x_{i} + z = 0$.
As $(I,J)$ is $(a,m)$-independent, the latter necessitates that $a^{i}x_{i} = 0$ for each $i \in \{ 0,\ldots,m-1\}$ and $z = 0$. According to Remark~\ref{remark:independence.ideals.idempotents}, this shows that $(I+I',\ldots,a^{m-1}(I+I'),J)$ is independent, i.e., $(I+I',J)$ is $(a,m)$-independent. \end{proof} The subsequent Lemma~\ref{lemma:independence.halperin}, this section's main insight, refines and extends an argument from~\cite{Halperin62}. For its proof, the following characterization of the center of an irreducible, continuous ring will be needed. \begin{prop}[\cite{Halperin62}, Corollary and Remark to Lemma~2.2, p.~4]\label{proposition:center.halperin} If $R$ is an irreducible, continuous ring, then \begin{displaymath} \ZZ(R) \, = \, \{ a \in R \mid \forall e \in \E(R) \colon \, eae = ae \} . \end{displaymath} \end{prop} For the statement of the next result, let us clarify some notation and terminology. To this end, let $K$ be a field and let $R$ be a unital $K$-algebra. We denote by $K[X]$ the polynomial ring over $K$ and by $\deg \colon K[X]\setminus\{0\} \to \N$ the usual degree function. For any $a \in R$, we consider the induced evaluation map \begin{displaymath} K[X] \, \longrightarrow \, R, \quad p=\sum\nolimits_{i=0}^{m}c_{i}X^{i} \, \longmapsto \, p(a) \defeq p_{R}(a) \defeq \sum\nolimits_{i=0}^{m}c_{i}a^{i} , \end{displaymath} which is a unital $K$-algebra homomorphism. \begin{lem}[cf.~\cite{Halperin62}, Lemma~5.1]\label{lemma:independence.halperin} Let $R$ be a non-discrete irreducible, continuous ring, let $K \defeq \ZZ(R)$, let $a \in R$, let $J \in \lat(R)$ and let $J_{1} \in \lat(R)$ be such that $J \subseteq J_{1}$, $aJ \subseteq J$, $aJ_{1} \subseteq J_{1}$. Suppose that $m\in\N$ is such that \begin{displaymath} \forall p \in K[X] \setminus \{0\} \colon \quad \deg(p)<m \ \Longrightarrow \ \rk_{R}(p(a))=1 . \end{displaymath} Then the following hold.
\begin{enumerate} \item\label{lemma:independence.halperin.1} If $J \ne J_{1}$, then there exists $I \in \lat(R)$ such that $(I,J)$ is $(a,m)$-independent and $\{ 0 \} \ne I \subseteq J_{1}$. \item\label{lemma:independence.halperin.2} Suppose that $a^{m}x \in \sum\nolimits_{i=0}^{m-1} Ka^{i}x + J$ for every $x \in J_{1}$. If $I \in \lat(R)$ is maximal such that $(I,J)$ is $(a,m)$-independent and $I \subseteq J_{1}$, then $J_{1} = \bigoplus\nolimits_{i=0}^{m-1}a^{i} I \oplus J$. \end{enumerate} \end{lem} \begin{proof} \ref{lemma:independence.halperin.1} The argument, which follows the lines of~\cite[Proof of Lemma~5.1]{Halperin62}, proceeds by induction on $m \in \N$. If $m = 0$, then the statement is trivially satisfied for $I \defeq J_{1}$. For $m = 1$, thanks to Lemma~\ref{lemma:complement}, there exists $J' \in \lat(R)$ such that $J \oplus J' = J_{1}$, where $J' \neq \{0\}$ as $J \ne J_{1}$; hence $I \defeq J'$ has the desired properties. For the induction step, let us assume that the desired implication is true for some $m \in \N_{>0}$. Suppose now that \begin{displaymath} \forall p\in K[X]\setminus \{0\}\colon\quad \deg(p)<m+1 \ \Longrightarrow \ \rk_{R}(p(a))=1. \end{displaymath} As $X\in K[X]\setminus\{0\}$ and $\deg(X)=1<m+1$, we see that $\rk_{R}(a)=1$, so $a\in\GL(R)$ by Remark~\ref{remark:properties.rank.function}\ref{remark:invertible.rank}. Our induction hypothesis asserts the existence of some $I_{0} \in \lat(R)$ such that $\{ 0 \} \ne I_{0}\subseteq J_{1}$ and $(I_{0},\ldots,a^{m-1}I_{0},J)\perp$. We will deduce that \begin{equation}\label{eq3} \exists I \in \lat(R) \colon \quad I\subseteq I_{0}, \ \, a^{m}I \nsubseteq I+\ldots+a^{m-1}I+J. \end{equation} For a proof of~\eqref{eq3} by contradiction, suppose that \begin{displaymath} \forall I\in \lat(R), \, I\subseteq I_{0} \colon \quad a^{m}I \subseteq I+\ldots+a^{m-1}I+J.
\end{displaymath} From this we conclude that \begin{equation}\label{eqhalp} \forall e\in\E(R)\cap I_{0}\ \exists x\in J\ \exists u_{0},\ldots,u_{m-1}\in eRe\colon\quad a^{m}e = x+\sum\nolimits_{i=0}^{m-1}a^{i}u_{i}. \end{equation} Indeed, if $e\in\E(R)\cap I_{0}$, then $a^{m}e \in eI_{0}+\ldots+a^{m-1}eI_{0}+J$, thus \begin{displaymath} a^{m}e \, \in \, eI_{0}e+\ldots+a^{m-1}eI_{0}e+Je \, \subseteq \, eRe+\ldots+a^{m-1}eRe+J . \end{displaymath} Now, let us choose any $e_{0} \in \E(R)$ with $e_{0}R = I_{0}$. According to~\eqref{eqhalp}, there exist $x \in J$ and $u_{0},\ldots,u_{m-1} \in e_{0}Re_{0}$ such that $a^{m}e_{0} = x+\sum\nolimits_{i=0}^{m-1}a^{i}u_{i}$. We will show that $u_{0},\ldots,u_{m-1} \in \ZZ(e_{0}Re_{0})$. As $e_{0}Re_{0}$ is an irreducible, continuous ring by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, we may do so using Proposition~\ref{proposition:center.halperin}. To this end, let $e\in \E(e_{0}Re_{0})$. Then \begin{displaymath} a^{m}e \, = \, a^{m}e_{0}e \, = \, xe+\sum\nolimits_{i=0}^{m-1}a^{i}u_{i}e . \end{displaymath} Moreover, since $\E(e_{0}Re_{0}) \subseteq {\E(R)} \cap {I_{0}}$, assertion~\eqref{eqhalp} provides the existence of $y \in J$ and $v_{0},\ldots,v_{m-1} \in eRe$ such that $a^{m}e = y+\sum\nolimits_{i=0}^{m-1}a^{i}v_{i}$. Hence, \begin{displaymath} 0 \, = \, a^{m}e-a^{m}e \, = \, \smallunderbrace{(y-xe)}_{\in\, J} + \sum\nolimits_{i=0}^{m-1}\smallunderbrace{a^{i}(v_{i}-u_{i}e)}_{\in\, a^{i}I_{0}}. \end{displaymath} Since $(I_{0},\ldots,a^{m-1}I_{0},J)\perp$ and $a\in\GL(R)$, it follows that $v_{i}=u_{i}e$ for each $i\in\{0,\ldots,m-1\}$. Consequently, for every $i\in\{0,\ldots,m-1\}$, \begin{displaymath} eu_{i}e \, = \, ev_{i} \, = \, v_{i} \, = \, u_{i}e. \end{displaymath} Due to Proposition~\ref{proposition:center.halperin}, this shows that $u_{0},\ldots,u_{m-1}\in \ZZ(e_{0}Re_{0})$.
So, by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, for each $i \in \{0,\ldots,m-1\}$ we find $z_{i} \in K$ such that $u_{i} = z_{i}e_{0}$. In turn, \begin{displaymath} a^{m}e_{0} \, = \, x+\sum\nolimits_{i=0}^{m-1}a^{i}z_{i}e_{0} \end{displaymath} and therefore $x = \bigl(a^{m}-\sum\nolimits_{i=0}^{m-1}z_{i}a^{i}\bigr)e_{0}$. Concerning \begin{displaymath} p \, \defeq \, X^{m}-\sum\nolimits_{i=0}^{m-1}z_{i}X^{i} \, \stackrel{m > 0}{\in} \, K[X]\setminus\{0\}, \end{displaymath} we thus see that $p(a)e_{0} = x$. Furthermore, $\deg(p) < m+1$ and hence $\rk_{R}(p(a)) = 1$, so $p(a) \in \GL(R)$ by Remark~\ref{remark:properties.rank.function}\ref{remark:invertible.rank}. From $aJ \subseteq J$ we infer that $p(a)J \subseteq J$, so that $p(a)J = J$ by \cite[Lemma~9.4(B)]{SchneiderGAFA}, and therefore $p(a)^{-1}J = J$. We conclude that \begin{displaymath} e_{0} \, = \, p(a)^{-1}x \, \in \, p(a)^{-1}J \, = \, J \end{displaymath} and hence \begin{displaymath} e_{0} \, \in \, I_{0} \cap J \, \stackrel{(I_0,J)\perp}{=} \, \{0\}, \end{displaymath} thus $e_{0} = 0$ and so $I_{0} = e_{0}R = \{0\}$, which gives the intended contradiction. This proves~\eqref{eq3}. Now, by~\eqref{eq3}, we find $I \in \lat(R)$ with $I \subseteq I_{0}$ and $a^{m}I \nsubseteq I + \ldots + a^{m-1}I + J$. By Remark~\ref{remark:bijection.annihilator} and Lemma~\ref{lemma:complement}, there exists $I' \in \lat(R)$ such that \begin{displaymath} ((a^{m}I) \cap (I+\ldots+a^{m-1}I+J))\oplus I' \, = \, a^{m}I. \end{displaymath} Necessarily, $I' \neq \{0\}$. Consider $I'' \defeq a^{-m}I' \in \lat(R)\setminus\{\{0\}\}$ and observe that \begin{displaymath} I'' \, \subseteq \, a^{-m}a^{m}I \, = \, I \, \subseteq \, I_{0}. \end{displaymath} Hence, $(I'',\ldots,a^{m-1}I'', J)\perp$. Moreover, as $I' \subseteq a^{m}I$, \begin{align*} a^{m}I''\cap (I''+\ldots+a^{m-1}I''+J) \, &\subseteq \, I' \cap\,(I+\ldots+a^{m-1}I+J)\\ &= \, I'\cap((a^{m}I)\cap(I+\ldots+a^{m-1}I+J)) \, = \, \{0\}.
\end{align*} Thus, $(I'',\ldots,a^{m-1}I'',a^mI'',J)\perp$ by Remark~\ref{remark:independence.equivalence.intersection}. This completes the induction. \ref{lemma:independence.halperin.2} By Lemma~\ref{lemma:independence.inductive}, we find $I \in \lat(R)$ maximal such that $(I,J)$ is $(a,m)$-independent and $I \subseteq J_{1}$. Our assumption entails that \begin{displaymath} a^{m}I \, \subseteq \, \sum\nolimits_{i=0}^{m-1} Ka^{i}I + J \, = \, \sum\nolimits_{i=0}^{m-1} a^{i}I + J \, \eqdef \, J' . \end{displaymath} Since $aJ \subseteq J$, we conclude that $aJ' \subseteq J'$. Also, $J' \subseteq J_{1}$. We claim that $J' = J_{1}$. Suppose for contradiction that $J' \ne J_{1}$. Then, by~\ref{lemma:independence.halperin.1}, there exists $I'\in\lat(R)$ such that $(I',J')$ is $(a,m)$-independent and $\{ 0 \} \ne I' \subseteq J_{1}$. Thus, Lemma~\ref{lemma:independence.sum} implies that $(I+I',J)$ is $(a,m)$-independent. As $I \subsetneq I+I' \subseteq J_{1}$, this contradicts maximality of $I$. Therefore, $J' = J_{1}$ as claimed. Since $(I,J)$ is $(a,m)$-independent, we conclude that $J_{1} = J' = \bigoplus\nolimits_{i=0}^{m-1}a^{i} I \oplus J$. \end{proof} Finally, let us record a sufficient condition for the hypothesis of Lemma~\ref{lemma:independence.halperin}. \begin{lem}\label{lemma:sufficient.condition.halperin} Let $R$ be an irreducible, continuous ring, let $K\defeq\ZZ(R)$, let $p\in K[X]$ be irreducible with $m\defeq \deg(p)$, and let $a\in R$. If there exists some $n\in \N_{>0}$ such that $p^{n}(a) = 0$, then \begin{displaymath} \forall q \in K[X]\setminus\{0\} \colon \quad \deg(q)<m \ \Longrightarrow \ \rk_{R}(q(a)) = 1 . \end{displaymath} \end{lem} \begin{proof} Let $q \in K[X]\setminus\{0\}$ with $\deg(q)<m$. By Remark~\ref{remark:bijection.annihilator} and Remark~\ref{remark:regular.idempotent.ideals}, there is $e \in \E(R)$ such that $\rAnn(q(a)) = eR$. Consider the ideal $I \defeq \{ f \in K[X] \mid f(a)e = 0 \}$ in $K[X]$. Suppose that $I \neq K[X]$.
Since $K[X]$ is a principal ideal domain, we now find $f \in K[X]$ such that $I=K[X]f$. From $0 \neq q \in I = K[X]f$, we infer that $\deg(f) \leq \deg(q) < m$ and $f \ne 0$. Moreover, by hypothesis there exists $n \in \N_{>0}$ with $p^{n}(a) = 0$, whence $p^{n} \in I = K[X]f$. Since $K[X]$ is a unique factorization domain, $f$ is not a unit in $K[X]$ (as $I\neq K[X]$), and $p$ is irreducible, we conclude that $f\in K[X]p$ and therefore $m = \deg(p) \leq \deg(f) < m$, which is absurd. Consequently, $I=K[X]$ and so $1\in I$. This necessitates that $e=0$ and hence $\rAnn(q(a))=\{0\}$, so that \begin{equation*} \rk_{R}(q(a)) \, \stackrel{\ref{lemma:pseudo.dimension.function}\ref{lemma:rank.dimension.annihilator}}{=} \, 1-\delta_{\rk_{R}}(\rAnn(q(a))) \, = \, 1. \qedhere \end{equation*} \end{proof} \section{Algebraic elements}\label{section:algebraic.elements} Considering a field $K$ and a unital $K$-algebra $R$, an element $a\in R$ is said to be \emph{algebraic over $K$} if there exists $p\in K[X]\setminus\{0\}$ such that $p(a)=0$. The purpose of this section is to prove that the algebraic elements in a non-discrete irreducible, continuous ring are precisely those which are contained in some subalgebra isomorphic to a finite product of matrix algebras (Theorem~\ref{theorem:matrixrepresentation.case.algebraic}). In order to clarify some terminology, let $K$ be a field. Then a $K$-algebra $R$ is called \emph{matricial}~\cite[Chapter~15, p.~217, Definition]{GoodearlBook} if there exist $m\in\N_{>0}$ and $n_{1},\ldots,n_{m} \in \N_{>0}$ such that $R \cong_{K} \prod\nolimits_{i=1}^{m} \M_{n_{i}}(K)$, where (as in the following) the symbol $\cong_{K}$ indicates $K$-algebra isomorphism. As follows from a classical theorem (see, e.g.,~\cite[IX.1, Corollary~1.5, p.~361]{GrilletBook}), a $K$-algebra is both matricial and simple if and only if it is isomorphic to $\M_{n}(K)$ for some $n \in \N_{>0}$. \begin{remark}\label{remark:matricial} \begin{enumerate} \item\label{remark:matricial.decomposition} Let $K$ be a field.
A $K$-algebra $R$ is matricial if and only if there exist $m\in\N_{>0}$, $f_{1},\ldots,f_{m} \in \E(R)\setminus \{ 0 \}$ pairwise orthogonal with $1 = \sum_{i=1}^{m} f_{i}$, and simple, matricial unital $K$-subalgebras $R_{1}\leq f_{1}Rf_{1}, \, \ldots, \, R_{m}\leq f_{m}Rf_{m}$ such that $R = R_{1} + \ldots + R_{m}$. \item\label{remark:matricial.sum} Let $R$ be an irreducible, regular ring. For any $m\in\N_{>0}$, pairwise orthogonal elements $f_{1},\ldots,f_{m} \in \E(R)\setminus \{ 0 \}$ with $1 = \sum_{i=1}^{m} f_{i}$, and matricial unital $\ZZ(R)$-subalgebras $R_{1}\leq f_{1}Rf_{1}, \, \ldots, \, R_{m}\leq f_{m}Rf_{m}$, the set $R_{1} + \ldots + R_{m}$ is a matricial unital $\ZZ(R)$-subalgebra of $R$. This is a consequence of Lemma~\ref{lemma:sum.embedding}. \end{enumerate} \end{remark} Based on the terminology above, we give the following definition. \begin{definition}\label{definition:matricial} Let $R$ be an irreducible, regular ring. An element of $R$ will be called \emph{matricial} if it is contained in some matricial unital $\ZZ(R)$-subalgebra of $R$. An element of $R$ will be called \emph{simply matricial} if it is contained in some simple, matricial unital $\ZZ(R)$-subalgebra of $R$. \end{definition} We will show that, in a non-discrete irreducible, continuous ring, the set of algebraic elements coincides with the set of matricial elements. This result, Theorem~\ref{theorem:matrixrepresentation.case.algebraic}, which provides a central ingredient in the proof of Theorem~\ref{theorem:matricial.dense}, will be verified via four intermediate steps, detailed in Lemmata~\ref{lemma:matrixrepresentation.case.p=0}--\ref{lemma:matrixrepresentation.case.p^n=0}. The following facts are needed. \begin{remark}[\cite{SchneiderGAFA}, Lemma~8.4(2)]\label{remark:root.K[X]X+K.invertible} Let $K$ be a field, let $R$ be a unital $K$-algebra, and let $a\in R$. If $p(a) = 0$ for some $p\in K[X]\cdot X+(K\setminus\{0\})$, then $a\in\GL(R)$.
\end{remark} \begin{lem}\label{lemma:properties.polynomials} Let $K$ be a field, $R$ be a unital $K$-algebra, $a\in R$ and $p\in K[X]$. \begin{enumerate} \item\label{lemma:annihilator.invariant} $a\rAnn(p(a))\subseteq \rAnn(p(a))$. \item\label{lemma:evaluation.eRe} Let $e\in\E(R)$. If $eae = ae$, then $ep(a)e = p(a)e = p_{eRe}(ae)$. \item\label{lemma:bezout} Let $q\in K[X]$ be such that $p$ and $q$ are coprime. Then \begin{displaymath} \qquad \rAnn((pq)(a)) \, = \, \rAnn(p(a))\oplus\rAnn(q(a)). \end{displaymath} \end{enumerate} \end{lem} \begin{proof} \ref{lemma:annihilator.invariant} If $x\in \rAnn(p(a))$, then $p(a)ax=ap(a)x=0$, thus $ax\in \rAnn(p(a))$. \ref{lemma:evaluation.eRe} This is a consequence of~\cite[Proposition~9.3(2)]{SchneiderGAFA} and~\cite[Lemma~10.5(1)]{SchneiderGAFA}. \ref{lemma:bezout} Since $p$ and $q$ are coprime, there exist $s,t \in K[X]$ with $1=sp+tq$. So, \begin{equation}\label{eq--9} \forall x \in R \colon \quad x = s(a)p(a)x + t(a)q(a)x. \end{equation} This directly implies that $\rAnn(p(a)) \cap \rAnn(q(a))= \{ 0 \}$. Moreover, both $\rAnn(p(a))$ and $\rAnn(q(a))$ are contained in \begin{displaymath} J \, \defeq \, \rAnn((pq)(a)) \, = \, \rAnn((qp)(a)) , \end{displaymath} thus $\rAnn(p(a)) + \rAnn(q(a)) \subseteq J$. Conversely, if $x \in J$, then $s(a)p(a)x \in \rAnn(q(a))$ and $t(a)q(a)x \in \rAnn(p(a))$, thus \begin{displaymath} x \, \stackrel{\eqref{eq--9}}{=} \, s(a)p(a)x + t(a)q(a)x \, \in \, \rAnn(q(a)) + \rAnn(p(a)) . \end{displaymath} Hence, $J = \rAnn(p(a))\oplus\rAnn(q(a))$. \end{proof} \begin{lem}\label{lemma:matrixrepresentation.case.p=0} Let $R$ be a non-discrete irreducible, continuous ring, let $K \defeq \ZZ(R)$, let $p \in K[X]$ be irreducible, consider $m \defeq \deg(p) \in \N_{>0}$, and let $c_{0},\ldots,c_{m} \in K$ be such that $p = \sum\nolimits_{i=0}^{m} c_{i}X^{i}$. If $a\in R$ and $p(a)=0$, then $a$ is simply matricial in $R$. \end{lem} \begin{proof} Note that $m > 0$ due to irreducibility of $p$. 
Let $a\in R$ with $p(a)=0$. If $m=1$, then either $p=c_{1}X$ and thus $a=0 \in K$, or $p = c_{1}X + c_{0}$ with $c_{1} \ne 0$, whence $a = -c_{0}c_{1}^{-1} \in K$; in either case, the claim follows. Therefore, we may and will henceforth assume that $m > 1$, thus $p \in K[X]\cdot X + (K\setminus \{ 0 \})$ by irreducibility of $p$, and so $a \in \GL(R)$ by Remark~\ref{remark:root.K[X]X+K.invertible}. By Lemma~\ref{lemma:independence.inductive}, Lemma~\ref{lemma:sufficient.condition.halperin} and Lemma~\ref{lemma:independence.halperin}\ref{lemma:independence.halperin.2}, there exists $I\in\lat(R)$ such that $R = I\oplus aI \oplus \ldots \oplus a^{m-1}I$. By Remark~\ref{remark:independence.ideals.idempotents}, we find pairwise orthogonal $e_1,\ldots,e_{m}\in\E(R)$ with $e_1+\ldots+e_{m}=1$ and $e_{i}R=a^{i-1}I$ for each $i\in\{1,\ldots,m\}$. Now, let us define $s \in R^{m\times m}$ by setting \begin{displaymath} s_{ij} \, \defeq \, a^{i-j}e_{j} \qquad (i,j\in\{1,\ldots,m\}) . \end{displaymath} For all $i,j\in\{1,\ldots,m\}$, we see that $s_{ij}e_{j}R = a^{i-j}e_{j}R = a^{i-j}a^{j-1}I = a^{i-1}I = e_{i}R$, hence $s_{ij} = s_{ij}e_{j} = e_{i}s_{ij}e_{j}$, i.e., $s_{ij}\in e_{i}Re_{j}$. In particular, $s_{ij}s_{k\ell} = s_{ij}e_{j}e_{k}s_{k\ell} = 0$ whenever $i,j,k,\ell\in\{1,\ldots,m\}$ and $j\ne k$. Moreover, for all $i,j,k \in\{1,\ldots,m\}$, \begin{displaymath} s_{ij}s_{jk} \, = \, a^{i-j}e_{j}s_{jk} \, = \, a^{i-j}s_{jk} \, = \, a^{i-j}a^{j-k}e_{k} \, = \, a^{i-k}e_{k} \, = \, s_{ik} . \end{displaymath} Finally, since $s_{ii} = a^{0}e_{i} = e_{i}$ for every $i \in \{ 1,\ldots,m\}$, we readily conclude that $s_{11}+\ldots +s_{mm}=e_{1}+\ldots+e_{m}=1$. This shows that $s$ is a family of matrix units for $R$. Furthermore, \begin{align*} ae_{m}\, &= \, a^{m}a^{1-m}e_{m} \, = \, a^{m}s_{1m}\, \stackrel{p(a)=0}{=} \, \!\left(-c_{m-1}c_m^{-1}a^{m-1}-\ldots-c_{1}c_{m}^{-1}a-c_{0}c_{m}^{-1}\right)\!
s_{1m}\\ &=\, -c_{m-1}c_m^{-1}a^{m-1}s_{1m}-\ldots-c_1c_m^{-1}as_{1m}-c_0c_m^{-1}s_{1m}\\ &=\, -c_{m-1}c_m^{-1}s_{mm}-\ldots-c_{1}c_m^{-1}s_{2m}-c_{0}c_m^{-1}s_{1m}. \end{align*} Consequently, \begin{align*} a \, &= \, a(e_{1}+\ldots+e_{m}) \, = \, ae_{1} + \ldots + ae_{m-1} + ae_{m} \\ &= \, s_{21} + \ldots + s_{m,\, m-1} - c_{m-1}c_m^{-1}s_{mm}-\ldots-c_1c_m^{-1}s_{2m}-c_0c_m^{-1}s_{1m} . \end{align*} Thus, $a$ is simply matricial in $R$ by Remark~\ref{remark:matrixunits.ring.homomorphism}. \end{proof} \begin{lem}\label{lemma:matrixrepresentation.case.nilpotent} Let $R$ be an irreducible, regular ring, let $a\in R$, $n \in \N_{>0}$ and $I \in \lat(R)$ such that $R = \bigoplus\nolimits_{i=0}^{n-1} a^{i}I$ and $a^{n-1}I = \rAnn(a)$. Then $a$ is simply matricial in $R$. \end{lem} \begin{proof} Let $K \defeq \ZZ(R)$. By Remark~\ref{remark:independence.ideals.idempotents}, we find pairwise orthogonal $e_{1},\ldots,e_{n}\in\E(R)$ with $e_{1}+\ldots+e_{n}=1$ and $e_{i}R=a^{i-1}I$ for each $i\in\{1,\ldots,n\}$. Since \begin{displaymath} \rAnn(a) \, = \, a^{n-1}I \, = \, e_{n}R \, = \, (1-(e_{1}+\ldots +e_{n-1}))R , \end{displaymath} Lemma~\ref{lemma:partial.inverse} asserts the existence of some $b \in R$ such that $ba = e_{1}+\ldots +e_{n-1}$. By induction, we see that \begin{equation}\label{induction.partial.inverse} \forall j \in \{ 1,\ldots,n \} \colon \quad b^{j-1}a^{j-1}e_{1} \, = \, e_{1} . \end{equation} Indeed, $b^{0}a^{0}e_{1} = e_{1}$, and if $j \in \{ 2,\ldots,n\}$ is such that $b^{j-2}a^{j-2}e_{1} \, = \, e_{1}$, then \begin{align*} b^{j-1}a^{j-1}e_{1} \, &= \, b^{j-2}baa^{j-2}e_{1} \, = \, b^{j-2}bae_{j-1}a^{j-2}e_{1} \\ & = \, b^{j-2}(e_{1}+\ldots +e_{n-1})e_{j-1}a^{j-2}e_{1} \\ & = \, b^{j-2}e_{j-1}a^{j-2}e_{1} \, = \, b^{j-2}a^{j-2}e_{1} \, = \, e_{1} .
\end{align*} In turn, $b^{j-1}e_{j}R = b^{j-1}a^{j-1}I = b^{j-1}a^{j-1}e_{1}R \stackrel{\eqref{induction.partial.inverse}}{=} e_{1}R$ for every $j \in \{ 1,\ldots,n \}$, thus \begin{equation}\label{induction.partial.inverse.2} \forall j \in \{ 1,\ldots,n \} \colon \quad b^{j-1}e_{j} \, = \, e_{1}b^{j-1}e_{j} . \end{equation} Now, let us define $s \in R^{n \times n}$ by setting \begin{displaymath} s_{ij} \, \defeq \, a^{i-1}b^{j-1}e_{j} \qquad (i,j\in\{1,\ldots,n\}) . \end{displaymath} For all $i,j\in\{1,\ldots,n\}$, we observe that \begin{displaymath} s_{ij} \, = \, a^{i-1}b^{j-1}e_{j} \, \stackrel{\eqref{induction.partial.inverse.2}}{=} \, a^{i-1}e_{1}b^{j-1}e_{j} \, = \, e_{i}a^{i-1}e_{1}b^{j-1}e_{j} \, \in \, e_{i}Re_{j} . \end{displaymath} In particular, $s_{ij}s_{k\ell} = s_{ij}e_{j}e_{k}s_{k\ell} = 0$ whenever $i,j,k,\ell\in\{1,\ldots,n\}$ and $j\ne k$. Moreover, for all $i,j,k \in\{1,\ldots,n\}$, \begin{align*} s_{ij}s_{jk} \, &= \, a^{i-1}b^{j-1}e_{j}s_{jk} \, = \, a^{i-1}b^{j-1}s_{jk} \, = \, a^{i-1}b^{j-1}a^{j-1}b^{k-1}e_{k} \\ & = \, a^{i-1}b^{j-1}a^{j-1}e_{1}b^{k-1}e_{k} \, = \, a^{i-1}e_{1}b^{k-1}e_{k} \, = \, a^{i-1}b^{k-1}e_{k} \, = \, s_{ik} . \end{align*} Finally, for each $i \in \{ 1,\ldots,n \}$, we find $x \in R$ such that $e_{i} = a^{i-1}e_{1}x$, wherefore \begin{displaymath} s_{ii} \, = \, a^{i-1}b^{i-1}e_{i} \, = \, a^{i-1}b^{i-1}a^{i-1}e_{1}x \, \stackrel{\eqref{induction.partial.inverse}}{=} \, a^{i-1}e_{1}x \, = \, e_{i} . \end{displaymath} We conclude that $s_{11}+\dots +s_{nn}=e_{1}+\ldots+e_{n}=1$. Hence, $s$ is a family of matrix units for $R$. Moreover, $ae_{n} = 0$ as $e_{n} \in a^{n-1}I = \rAnn(a)$, and so \begin{displaymath} a \, = \, a(e_{1}+\ldots+e_{n}) \, = \, ae_{1} + \ldots + ae_{n-1} \, = \, s_{21} + \ldots + s_{n,\, n-1} . \end{displaymath} Thus, $a$ is simply matricial in $R$ by Remark~\ref{remark:matrixunits.ring.homomorphism}. 
\end{proof} \begin{lem}\label{lemma:matrixrepresentation.case.tower} Let $R$ be an irreducible, continuous ring, let $K \defeq \ZZ(R)$, let $p\in K[X]$ be irreducible with $m \defeq \deg(p)$, let $a\in R$, $n \in \N_{>0}$ and $I \in \lat(R)$ be such that \begin{displaymath} R = \bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1} a^{i}p(a)^{j}I , \qquad \rAnn (p(a)) = \bigoplus\nolimits_{i=0}^{m-1} a^{i}p(a)^{n-1}I . \end{displaymath} Then $a$ is simply matricial in $R$. \end{lem} \begin{proof} Let $c_{0},\ldots,c_{m} \in K$ be such that $p = \sum\nolimits_{i=0}^{m} c_{i}X^{i}$. By irreducibility of $p$, either $p = c_{1}X$ with $c_{1} \ne 0$, or $p\in K[X]\cdot X+(K\setminus\{0\})$. In the first case, $R = \bigoplus\nolimits_{j=0}^{n-1} a^{j}I$ and $\rAnn(a) = a^{n-1}I$, thus $a$ is simply matricial in $R$ by Lemma~\ref{lemma:matrixrepresentation.case.nilpotent}. Therefore, we will henceforth assume that $p\in K[X]\cdot X+(K\setminus\{0\})$. Then $p^{n}\in K[X]\cdot X+(K\setminus\{0\})$ and so $a\in \GL(R)$ by Remark~\ref{remark:root.K[X]X+K.invertible}. Consider the bijection \begin{displaymath} \{1,\ldots,m\}\times \{1,\ldots,n\} \, \longrightarrow\, \{1,\ldots,mn\}, \quad (i,j) \,\longmapsto\, m(j-1)+i . \end{displaymath} According to Remark~\ref{remark:independence.ideals.idempotents}, there exist pairwise orthogonal $e_{1},\ldots,e_{mn}\in \E(R)$ such that $1 = \sum\nolimits_{i=1}^{m}\sum\nolimits_{j=1}^{n} e_{m(j-1)+i}$ and \begin{equation}\label{eq-7} \forall i\in\{1,\ldots,m\} \ \forall j\in\{1,\ldots,n\}\colon \quad e_{m(j-1)+i}R \, = \, a^{i-1}p(a)^{j-1}I.
\end{equation} Consider $f \defeq \sum\nolimits_{i=1}^{m} \sum\nolimits_{j=1}^{n-1} e_{m(j-1)+i} = 1 - \sum\nolimits_{i=1}^{m} e_{m(n-1)+i} \in \E(R)$ and note that \begin{align*} (1-f)R \, &= \, \!\left(\sum\nolimits_{i=1}^{m} e_{m(n-1)+i}\right)\!R \\ & = \, \bigoplus\nolimits_{i=1}^{m} e_{m(n-1)+i}R \, \stackrel{\eqref{eq-7}}{=} \, \bigoplus\nolimits_{i=0}^{m-1} a^{i}p(a)^{n-1}I \, = \, \rAnn (p(a)) . \end{align*} Hence, by Lemma~\ref{lemma:partial.inverse}, there exists $b\in R$ with $bp(a) = f$. By induction, we see that \begin{equation}\label{eq6} \forall \ell\in\{1,\ldots,n\} \colon \quad b^{\ell-1}p^{\ell-1}(a)e_{1} \, = \, e_{1}. \end{equation} Indeed, $b^{0}p^{0}(a)e_{1} = e_{1}$, and if $j \in \{ 2,\ldots,n \}$ satisfies $b^{j-2}p(a)^{j-2}e_{1} = e_{1}$, then \begin{displaymath} b^{j-1}p(a)^{j-1}e_{1} \, = \, b^{j-2}f\smallunderbrace{p(a)^{j-2}e_{1}}_{\in fR} \, = \, b^{j-2}p(a)^{j-2}e_{1} \, = \, e_{1}. \end{displaymath} For any $i \in \{ 1,\ldots,m \}$ and $j \in \{ 1,\ldots,n\}$, from \begin{displaymath} b^{j-1}a^{-(i-1)}e_{m(j-1)+i} \, \stackrel{\eqref{eq-7}}{\in}\, b^{j-1}p(a)^{j-1}I \, \stackrel{\eqref{eq-7}}{=} \, b^{j-1}p(a)^{j-1}e_{1}R \, \stackrel{\eqref{eq6}}{=} \, e_{1}R \end{displaymath} we infer that \begin{equation}\label{eq-6} b^{j-1}a^{-(i-1)}e_{m(j-1)+i} \, = \, e_{1}b^{j-1}a^{-(i-1)}e_{m(j-1)+i} . \end{equation} Now, for any $i,i'\in\{1,\ldots,m\}$ and $j,j'\in\{1,\ldots,n\}$, we define \begin{displaymath} s_{m(j-1)+i,\,m(j'-1)+i'} \, \defeq \, a^{i-1}p(a)^{j-1}b^{j'-1}a^{-(i'-1)}e_{m(j'-1)+i'} \end{displaymath} and note that, by~\eqref{eq-7} and~\eqref{eq-6}, \begin{equation}\label{eq-range} s_{m(j-1)+i,\,m(j'-1)+i'} \, \in \, e_{m(j-1)+i}Re_{m(j'-1)+i'} . 
\end{equation} Evidently, for any choice of $i,i',i'',i'''\in\{1,\ldots,m\}$ and $j,j',j'',j'''\in\{1,\ldots,n\}$ with $(i',j')\neq(i'',j'')$, from $e_{m(j'-1)+i'}\perp e_{m(j''-1)+i''}$, it follows that \begin{displaymath} s_{m(j-1)+i,\,m(j'-1)+i'} \cdot s_{m(j''-1)+i'',\,m(j'''-1)+i'''} \, \stackrel{\eqref{eq-range}}{=} \, 0 . \end{displaymath} Moreover, for all $i,i',i''\in\{1,\ldots,m\}$ and $j,j',j''\in\{1,\ldots,n\}$, \begin{align*} &s_{m(j-1)+i,\,m(j'-1)+i'}\cdot s_{m(j'-1)+i',\, m(j''-1)+i''} \\ &\quad = \, a^{i-1}p(a)^{j-1}b^{j'-1}a^{-(i'-1)}e_{m(j'-1)+i'}a^{i'-1}p(a)^{j'-1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \\ &\quad \stackrel{\eqref{eq-6}}{=} \, a^{i-1}p(a)^{j-1}b^{j'-1}a^{-(i'-1)}e_{m(j'-1)+i'}a^{i'-1}p(a)^{j'-1}e_{1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \\ &\quad \stackrel{\eqref{eq-7}}{=} \, a^{i-1}p(a)^{j-1}b^{j'-1}a^{-(i'-1)}a^{i'-1}p(a)^{j'-1}e_{1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \\ &\quad = \, a^{i-1}p(a)^{j-1}b^{j'-1}p(a)^{j'-1}e_{1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \\ &\quad \stackrel{\eqref{eq6}}{=} \, a^{i-1}p(a)^{j-1}e_{1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \\ &\quad \stackrel{\eqref{eq-6}}{=} \, a^{i-1}p(a)^{j-1}b^{j''-1}a^{-(i''-1)}e_{m(j''-1)+i''} \, = \, s_{m(j-1)+i,\,m(j''-1)+i''}.
\end{align*} Also, for each pair $(i,j)\in\{1,\ldots,m\} \times \{1,\ldots,n\}$, thanks to~\eqref{eq-7} we find $x\in R$ such that $e_{m(j-1)+i} = a^{i-1}p(a)^{j-1}e_{1}x$, which entails that \begin{align}\label{eq9} s_{m(j-1)+i,\,m(j-1)+i} \, &= \, a^{i-1}p(a)^{j-1}b^{j-1}a^{-(i-1)}e_{m(j-1)+i} \nonumber \\ & = \, a^{i-1}p(a)^{j-1}b^{j-1}a^{-(i-1)} a^{i-1}p(a)^{j-1}e_{1}x \nonumber\\ &= a^{i-1}p(a)^{j-1}b^{j-1}p(a)^{j-1}e_{1}x \, \stackrel{\eqref{eq6}}{=} \, a^{i-1}p(a)^{j-1}e_{1}x \nonumber \\ & = \, e_{m(j-1)+i} \end{align} In turn, \begin{displaymath} \sum\nolimits_{i=1}^{m}\sum\nolimits_{j=1}^{n} s_{m(j-1)+i,\,m(j-1)+i} \, \stackrel{\eqref{eq9}}{=} \, \sum\nolimits_{i=1}^{m}\sum\nolimits_{j=1}^{n} e_{m(j-1)+i} \, = \, 1 , \end{displaymath} and so $\left(s_{m(j-1)+i,\,m(j'-1)+i'}\right)_{i,i'\in \{1,\ldots,m\},\, j,j'\in \{1,\ldots,n\}}$ constitutes a family of matrix units for $R$. Now, if $j\in\{1,\ldots,n\}$ and $i,k\in\{1,\ldots,m\}$ are such that $i+k \leq m$, then \begin{align}\label{eq10} a^{k}s_{m(j-1)+i,\,m(j-1)+i} \, &= \, a^{k+i-1}p(a)^{j-1}b^{j-1}a^{-(i-1)}e_{m(j-1)+i} \nonumber \\ &= \, s_{m(j-1)+k+i,\,m(j-1)+i}. \end{align} In particular, for all $j \in \{ 1,\ldots,n\}$ and $i \in \{ 1,\ldots,m-1\}$, \begin{align}\label{eq66} a e_{m(j-1)+i} \, \stackrel{\eqref{eq9}}{=} \, as_{m(j-1)+i,\,m(j-1)+i} \, \stackrel{\eqref{eq10}}{=} \, s_{m(j-1)+i+1,\,m(j-1)+i}. \end{align} Furthermore, for every $j\in\{1,\ldots,n-1\}$, \begin{align}\label{eq11} p(a)s_{m(j-1)+1,\,m(j-1)+m} \, &= \, p(a) p(a)^{j-1}b^{j-1}a^{-(m-1)}e_{m(j-1)+m}\nonumber\\ &= \, s_{mj+1,\,m(j-1)+m} \end{align} and hence \begin{align}\label{67} & ae_{m(j-1)+m} \, \stackrel{\eqref{eq9}}{=} \, as_{m(j-1)+m,\,m(j-1)+m} \, \stackrel{\eqref{eq10}}{=} \, a^{m}s_{m(j-1)+1,\,m(j-1)+m}\nonumber\\ &\qquad = \, \! 
\left(c_{m}^{-1}p(a)-\sum\nolimits_{i=0}^{m-1}c_{m}^{-1}c_{i}a^{i}\right)\!s_{m(j-1)+1,\,m(j-1)+m}\nonumber\\ &\qquad \stackrel{\eqref{eq10}+\eqref{eq11}}{=} \, c_{m}^{-1}s_{mj+1,\,m(j-1)+m}-\sum\nolimits_{i=0}^{m-1}c_{m}^{-1}c_{i}s_{m(j-1)+i+1,\,m(j-1)+m}. \end{align} Finally, since $s_{m(n-1)+1, \, m(n-1)+m} \stackrel{\eqref{eq-range}}{\in} e_{m(n-1)+1}R \stackrel{\eqref{eq-7}}{=} p(a)^{n-1}I \subseteq \rAnn(p(a))$, \begin{align}\label{68} ae_{m(n-1)+m} \, &\stackrel{\eqref{eq9}}{=} \, as_{m(n-1)+m, \, m(n-1)+m} \, \stackrel{\eqref{eq10}}{=} \, a^{m}s_{m(n-1)+1,\,m(n-1)+m} \nonumber\\ &= \, \left(c_{m}^{-1}p(a)-\sum\nolimits_{i=0}^{m-1}c_{m}^{-1}c_{i}a^{i}\right) \! s_{m(n-1)+1, \, m(n-1)+m} \nonumber \\ &\stackrel{\eqref{eq10}}{=} \, \sum\nolimits_{i=0}^{m-1}-c_{m}^{-1}c_{i}s_{m(n-1)+i+1,\,m(n-1)+m}. \end{align} Combining~\eqref{eq66}, \eqref{67} and~\eqref{68}, we conclude that \begin{align*} a \, &= \, a\sum\nolimits_{i=1}^{m}\sum\nolimits_{j=1}^{n} e_{m(j-1)+i} \, = \, \sum\nolimits_{i=1}^{m}\sum\nolimits_{j=1}^{n} ae_{m(j-1)+i} \\ &= \, \sum\nolimits_{i=1}^{m-1}\sum\nolimits_{j=1}^{n}s_{m(j-1)+i+1,\,m(j-1)+i}\\ &\qquad +\sum\nolimits_{j=1}^{n-1}\left(c_m^{-1}s_{mj+1,\,m(j-1)+m}-\sum\nolimits_{i=0}^{m-1}c_m^{-1}c_{i}s_{m(j-1)+i+1,\,m(j-1)+m}\right)\\ &\qquad +\sum\nolimits_{i=0}^{m-1}-c_{m}^{-1}c_{i}s_{m(n-1)+i+1,\,m(n-1)+m}. \end{align*} Consequently, $a$ is simply matricial in $R$ by Remark~\ref{remark:matrixunits.ring.homomorphism}. \end{proof} \begin{lem}\label{lemma:matrixrepresentation.case.p^n=0} Let $n \in \N_{>0}$. If $R$ is a non-discrete irreducible, continuous ring, $p\in \ZZ(R)[X]$ is irreducible, and $a\in R$ satisfies $p^{n}(a) = 0$, then $a$ is matricial in $R$. \end{lem} \begin{proof} We prove the claim by induction on $n \in \N_{>0}$. The induction base, at $n=1$, is provided by Lemma~\ref{lemma:matrixrepresentation.case.p=0}.
Now, let $n \in \N_{>0}$ and suppose that the claim has been established for every strictly smaller positive integer. Let $R$ be a non-discrete irreducible, continuous ring, let $K\defeq \ZZ(R)$, let $p\in K[X]$ be irreducible, and let $a\in R$ be such that $p^{n}(a) = 0$. Note that $m \defeq \deg(p) \in \N_{>0}$ by irreducibility of $p$. Let $c_{0},\ldots,c_{m}\in K$ be such that $p=\sum\nolimits_{i=0}^{m}c_{i}X^{i}$. By our induction hypothesis, we may and will assume that $n\in\N_{>0}$ is minimal such that $p^n(a)=0$. Moreover, let us note that \begin{equation}\label{equation0} \forall x \in R \colon \quad a^{m}x = \sum\nolimits_{i=0}^{m-1} -c_{i}c_{m}^{-1}a^{i}x + c_{m}^{-1}p(a)x . \end{equation} We now proceed in three preparatory steps. Our first step is to find $I_{1},\ldots,I_{n}\in \lat(R)$ such that \begin{equation}\label{first.step.1} \forall j \in \{ 1,\ldots,n \} \colon \quad \rAnn(p(a)^{j}) = I_{j} \oplus aI_{j}\oplus \ldots \oplus a^{m-1}I_{j}\oplus \rAnn(p(a)^{j-1}) \end{equation} and \begin{equation}\label{first.step.2} \forall j\in\{2,\ldots,n\} \colon \quad p(a)I_{j} \subseteq I_{j-1} . \end{equation} The argument proceeds by downward induction starting at $j=n$. By Lemma~\ref{lemma:independence.inductive}, there exists $I_{n} \in \lat(R)$ maximal such that $(I_{n},\rAnn(p(a)^{n-1}))$ is $(a,m)$-independent. For every $x \in R = \rAnn(p^{n}(a)) = \rAnn(p(a)^{n-1}p(a))$, \begin{displaymath} a^{m}x \, \stackrel{\eqref{equation0}}{=} \, \sum\nolimits_{i=0}^{m-1} -c_{i}c_{m}^{-1}a^{i}x + c_{m}^{-1}p(a)x \, \in \, \sum\nolimits_{i=0}^{m-1} Ka^{i}x + \rAnn(p(a)^{n-1}) . \end{displaymath} Also, \begin{displaymath} a\rAnn(p(a)^{n-1}) \, = \, a\rAnn(p^{n-1}(a)) \, \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:annihilator.invariant}}{\subseteq} \, \rAnn(p^{n-1}(a)) \, = \, \rAnn(p(a)^{n-1}) .
\end{displaymath} Using Lemma~\ref{lemma:sufficient.condition.halperin} and Lemma~\ref{lemma:independence.halperin}\ref{lemma:independence.halperin.2}, we deduce that \begin{displaymath} \rAnn(p(a)^{n}) \, = \, R \, = \, I_{n} \oplus \ldots \oplus a^{m-1}I_{n} \oplus \rAnn(p(a)^{n-1}). \end{displaymath} For the induction step, let $j\in\{2,\ldots,n\}$ and suppose that $I_{n},\ldots,I_{j}\in\lat(R)$ have been chosen with the desired properties. In particular, \begin{displaymath} \rAnn(p(a)^{j}) \, = \, I_{j} \oplus \ldots \oplus a^{m-1}I_{j} \oplus \rAnn(p(a)^{j-1}). \end{displaymath} This immediately entails that $p(a)I_{j} \subseteq p(a)\rAnn(p(a)^{j}) \subseteq \rAnn(p(a)^{j-1})$. Moreover, since $\rAnn(p(a)) \subseteq \rAnn(p(a)^{j-1})$, the additive group homomorphism \begin{displaymath} \sum\nolimits_{i=0}^{m-1} a^{i}I_{j} \, \longrightarrow \, R, \quad x \, \longmapsto \, p(a)x \end{displaymath} is injective: if $x\in \sum\nolimits_{i=0}^{m-1} a^{i}I_{j}$ and $p(a)x=0$, then \begin{displaymath} x \, \in \, \!\left(\sum\nolimits_{i=0}^{m-1} a^{i}I_{j}\right) \cap \rAnn(p(a)) \, = \, \{ 0 \} , \end{displaymath} i.e., $x = 0$. It follows that \begin{displaymath} (p(a)I_{j}, p(a)aI_{j},\ldots,p(a)a^{m-1}I_{j})\perp. \end{displaymath} Since $(I_{j}, \ldots,a^{m-1}I_{j},\rAnn(p(a)^{j-1}))\perp$, we also see that \begin{displaymath} \left(p(a)I_{j}\oplus \ldots\oplus p(a)a^{m-1}I_{j}\right) \cap \rAnn(p(a)^{j-2}) \, = \, \{0\}. \end{displaymath} Consequently, by Remark~\ref{remark:independence.equivalence.intersection}, \begin{displaymath} (p(a)I_{j},\smallunderbrace{p(a)aI_{j}}_{=\,ap(a)I_{j}},\ldots,\smallunderbrace{p(a)a^{m-1}I_{j}}_{=\,a^{m-1}p(a)I_{j}},\rAnn(p(a)^{j-2}))\perp, \end{displaymath} i.e., $\left(p(a)I_{j},\rAnn(p(a)^{j-2})\right)$ is $(a,m)$-independent.
Thanks to Lemma~\ref{lemma:independence.inductive}, there exists $I_{j-1} \in \lat(R)$ maximal such that $\left(I_{j-1},\rAnn(p(a)^{j-2})\right)$ is $(a,m)$-independent and $p(a)I_{j} \subseteq I_{j-1} \subseteq \rAnn(p(a)^{j-1})$. For every $x \in \rAnn(p(a)^{j}) = \rAnn(p(a)^{j-1}p(a))$, \begin{displaymath} a^{m}x \, \stackrel{\eqref{equation0}}{=} \, \sum\nolimits_{i=0}^{m-1} -c_{i}c_{m}^{-1}a^{i}x + c_{m}^{-1}p(a)x \, \in \, \sum\nolimits_{i=0}^{m-1} Ka^{i}x + \rAnn(p(a)^{j-1}) . \end{displaymath} Also, \begin{displaymath} a\rAnn(p(a)^{j-1}) \, = \, a\rAnn(p^{j-1}(a)) \, \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:annihilator.invariant}}{\subseteq} \, \rAnn(p^{j-1}(a)) \, = \, \rAnn(p(a)^{j-1}) . \end{displaymath} Applying Lemma~\ref{lemma:sufficient.condition.halperin} and Lemma~\ref{lemma:independence.halperin}\ref{lemma:independence.halperin.2}, we infer that \begin{displaymath} \rAnn(p(a)^{j-1}) \, = \, I_{j-1} \oplus \ldots \oplus a^{m-1}I_{j-1} \oplus \rAnn(p(a)^{j-2}) . \end{displaymath} This finishes the induction. Our second step is to find $J_{1},\ldots,J_{n}\in\lat(R)$ such that \begin{equation}\label{second.step.1} \forall j \in \{ 1,\ldots,n \} \colon \quad I_{j} = p(a)^{n-j}I_{n} \oplus J_{j} \end{equation} and \begin{equation}\label{second.step.2} \forall j \in \{ 2,\ldots,n \} \colon \quad p(a)J_{j} \subseteq J_{j-1} . \end{equation} Again, the argument proceeds by downward induction starting at $j=n$, for which we define $J_{n} \defeq \{0\}$. For the induction step, let $j\in\{2,\ldots,n\}$ and suppose that $J_{n},\ldots,J_{j}\in\lat(R)$ have been chosen with the desired properties. Then \begin{displaymath} p(a)^{n-(j-1)}I_{n} \, \subseteq \, p(a)^{n-j}I_{n-1} \, \subseteq \, \ldots \, \subseteq \, p(a)I_{j} \, \subseteq \, I_{j-1} \end{displaymath} and $I_{j} = p(a)^{n-j}I_{n} \oplus J_{j}$.
The additive group homomorphism $I_{j} \to R, \, x \mapsto p(a)x$ is injective: indeed, if $x\in I_{j}$ and $p(a)x=0$, then \begin{displaymath} x \, \in \, I_{j} \cap \rAnn(p(a)) \, \subseteq \, I_{j} \cap \rAnn(p(a)^{j-1}) \, \stackrel{\eqref{first.step.1}}{=} \, \{ 0 \} , \end{displaymath} i.e., $x = 0$. Since $I_{j} = p(a)^{n-j}I_{n} \oplus J_{j}$, we conclude that $p(a)^{n-(j-1)}I_{n} \cap p(a)J_{j} = \{0\}$. Also, \begin{displaymath} p(a)J_{j} \, \stackrel{\eqref{second.step.1}}{\subseteq} \, p(a)I_{j} \, \stackrel{\eqref{first.step.2}}{\subseteq} \, I_{j-1} . \end{displaymath} So, by Remark~\ref{remark:bijection.annihilator} and Lemma~\ref{lemma:complement}, there exists $J_{j-1}\in\lat(R)$ with $p(a)J_{j} \subseteq J_{j-1}$ and \begin{displaymath} I_{j-1} \, = \, p(a)^{n-(j-1)}I_{n} \oplus J_{j-1} , \end{displaymath} which finishes the induction. As our third step, we show that \begin{equation}\label{third.step} \forall i \in \{ 0,\ldots,m-1\} \ \forall j \in \{ 1,\ldots,n\} \colon \quad {a^{i}p(a)^{n-j}I_{n}} \cap {a^{i}J_{j}} = \{ 0 \} . \end{equation} For this purpose, we observe that irreducibility of $p$ implies that either $p = c_{1}X$ with $c_{1} \ne 0$, or $p\in K[X]\cdot X+(K\setminus\{0\})$. In the first case, $m=1$ and so~\eqref{third.step} follows directly from~\eqref{second.step.1}. In the second case, we see that $p^{n}\in K[X]\cdot X+(K\setminus\{0\})$, thus $a\in \GL(R)$ by Remark~\ref{remark:root.K[X]X+K.invertible} and so $a^{i} \in \GL(R)$, wherefore~\eqref{third.step} is an immediate consequence of~\eqref{second.step.1}, too. 
Due to the preparations above and the fact that $\rAnn(p(a)^{0}) = \rAnn(1) = \{ 0 \}$, \begin{align*} R \, = \, \rAnn(p(a)^{n}) \, &\stackrel{\eqref{first.step.1}}{=} \, \bigoplus\nolimits_{j=1}^{n} \bigoplus\nolimits_{i=0}^{m-1}a^{i}I_{j}\\ &\stackrel{\eqref{second.step.1}}{=} \, \bigoplus\nolimits_{j=1}^{n} \bigoplus\nolimits_{i=0}^{m-1} a^{i}\!\left(p(a)^{n-j}I_{n}\oplus J_{j}\right)\\ &\stackrel{\eqref{third.step}}{=} \, \bigoplus\nolimits_{j=1}^{n} \bigoplus\nolimits_{i=0}^{m-1} a^{i}p(a)^{n-j}I_{n}\oplus a^{i}J_{j} \\ &= \, \! \left(\bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{j}I_{n} \right) \! \oplus \! \left(\bigoplus\nolimits_{j=1}^{n} \bigoplus\nolimits_{i=0}^{m-1} a^{i}J_{j} \right) \! . \end{align*} That is, $R = I \oplus J$ where \begin{align*} I \, \defeq \, \bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{j}I_{n} , \qquad J \, \defeq \, \bigoplus\nolimits_{j=1}^{n} \bigoplus\nolimits_{i=0}^{m-1} a^{i}J_{j} . \end{align*} Thanks to Remark~\ref{remark:independence.ideals.idempotents}, there exists $e\in\E(R)$ such that $I=eR$ and $J=(1-e)R$. Since $p(a)^{n} = p^{n}(a) = 0$ and thus $p(a)^{n}I_{n} = \{ 0 \}$, \begin{align*} &aI = \sum\nolimits_{j=0}^{n-1} \sum\nolimits_{i=0}^{m-1} aa^{i}p(a)^{j}I_{n} = \left( \sum\nolimits_{j=0}^{n-1} \sum\nolimits_{i=1}^{m-1} a^{i}p(a)^{j}I_{n} \right)\! + \! \left( \sum\nolimits_{j=0}^{n-1} a^{m}p(a)^{j}I_{n} \right) \\ &\stackrel{\eqref{equation0}}{\subseteq} \left( \sum\nolimits_{j=0}^{n-1} \sum\nolimits_{i=0}^{m-1} a^{i}p(a)^{j}I_{n} \right)\! + \! \left( \sum\nolimits_{j=1}^{n} p(a)^{j}I_{n} \right) \subseteq \sum\nolimits_{j=0}^{n-1} \sum\nolimits_{i=0}^{m-1} a^{i}p(a)^{j}I_{n} = I . \end{align*} Furthermore, as $p(a)J_{1} \stackrel{\eqref{second.step.1}}{\subseteq} p(a)I_{1} \stackrel{\eqref{first.step.1}}{=} \{ 0 \}$, \begin{align*} aJ \, &= \, \sum\nolimits_{j=1}^{n} \sum\nolimits_{i=0}^{m-1} aa^{i}J_{j} \, = \, \! 
\left( \sum\nolimits_{j=1}^{n} \sum\nolimits_{i=1}^{m-1} a^{i}J_{j} \right)\! + \! \left( \sum\nolimits_{j=1}^{n} a^{m}J_{j} \right) \\ &\stackrel{\eqref{equation0}}{\subseteq} \, \! \left( \sum\nolimits_{j=1}^{n} \sum\nolimits_{i=0}^{m-1} a^{i}J_{j} \right)\! + \! \left( \sum\nolimits_{j=1}^{n} p(a)J_{j} \right) \\ & \stackrel{\eqref{second.step.2}}{\subseteq} \, \! \left( \sum\nolimits_{j=1}^{n} \sum\nolimits_{i=0}^{m-1} a^{i}J_{j} \right)\! + \! \left( \sum\nolimits_{j=1}^{n-1} J_{j} \right)\! + p(a)J_{1} \, = \, \sum\nolimits_{j=1}^{n} \sum\nolimits_{i=0}^{m-1} a^{i}J_{j} \, = \, J . \end{align*} Therefore, $aeR = aI\subseteq I = eR$ and $a(1-e)R = aJ\subseteq J = (1-e)R$. We conclude that $ae=eae$ and $a(1-e)=(1-e)a(1-e)$. Consequently, \begin{equation}\label{eq-4} a \, = \, ae + a(1-e) \, = \, eae + (1-e)a(1-e) \, \in \, eRe + (1-e)R(1-e). \end{equation} This means that $ea=ae$ and thus entails that \begin{equation}\label{equation:polynomial.commutes} ep(a) \, = \, p(a)e \, \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:evaluation.eRe}}{=} \, p_{eRe}(ae) \, = \, p_{eRe}(eae) . \end{equation} From~\eqref{first.step.1} and minimality of $n$, we infer that $I_{n} \ne \{ 0 \}$ and thus $I \ne \{0\}$, whence $e \ne 0$. Therefore, $eRe$ is a non-discrete irreducible, continuous ring by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}. 
We observe that $I_{n}e \in \lat(eRe)$ by Lemma~\ref{lemma:right.multiplication.regular}\ref{lemma:right.multiplication.4} and, moreover, \begin{align*} eRe \, = \, Ie \, &\stackrel{\ref{lemma:right.multiplication.regular}\ref{lemma:right.multiplication.3}}{=} \, \bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{j}I_{n}e \, = \, \bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{j}eI_{n}e \\ &\stackrel{\eqref{eq-4}+\ref{lemma:properties.polynomials}\ref{lemma:evaluation.eRe}}{=} \, \bigoplus\nolimits_{j=0}^{n-1} \bigoplus\nolimits_{i=0}^{m-1}(eae)^{i}p_{eRe}(eae)^{j}I_{n}e . \end{align*} Furthermore, \begin{align*} \rAnn(p(a)) \, &\stackrel{\eqref{first.step.1}}{=} \, \bigoplus\nolimits_{i=0}^{m-1}a^{i}I_{1}\, \stackrel{\eqref{second.step.1}}{=} \, \bigoplus\nolimits_{i=0}^{m-1} a^{i}\!\left(p(a)^{n-1}I_{n}\oplus J_{1}\right)\\ &\stackrel{\eqref{third.step}}{=} \, \bigoplus\nolimits_{i=0}^{m-1} a^{i}p(a)^{n-1}I_{n}\oplus a^{i}J_{1} \\ &= \, \! \left(\bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{n-1}I_{n} \right) \! \oplus \! \left( \bigoplus\nolimits_{i=0}^{m-1} a^{i}J_{1} \right) \end{align*} and thus $e\rAnn(p(a)) = \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{n-1}I_{n}$, wherefore \begin{align*} &\rAnn(p_{eRe}(eae)) \cap eRe \, \stackrel{\eqref{equation:polynomial.commutes}}{=} \, \rAnn(p(a)e) \cap eRe \, \stackrel{\eqref{equation:polynomial.commutes}+\ref{lemma:right.multiplication}\ref{lemma:annihilator.eRe}}{=} \, e\rAnn(p(a))e \\ & \qquad = \, \! \left(\bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{n-1}I_{n}\right)\! e \, \stackrel{\ref{lemma:right.multiplication.regular}\ref{lemma:right.multiplication.3}}{=} \, \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{n-1}I_{n}e \\ & \qquad = \, \bigoplus\nolimits_{i=0}^{m-1}a^{i}p(a)^{n-1}eI_{n}e \, \stackrel{\eqref{eq-4}+\ref{lemma:properties.polynomials}\ref{lemma:evaluation.eRe}}{=} \, \bigoplus\nolimits_{i=0}^{m-1}(eae)^{i}p_{eRe}(eae)^{n-1}I_{n}e . 
\end{align*} Hence, $eae$ is matricial in $eRe$ by Lemma~\ref{lemma:matrixrepresentation.case.tower}. Thus, if $1-e = 0$, then $a$ is matricial in $R$, as desired. Suppose now that $1-e \ne 0$, whence $(1-e)R(1-e)$ is a non-discrete irreducible, continuous ring by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}. Furthermore, since $J_{n} = \{0\}$ by~\eqref{second.step.1}, \begin{align*} J \, = \, \bigoplus\nolimits_{j=1}^{n-1} \bigoplus\nolimits_{i=0}^{m-1} a^{i}J_{j} \, &\stackrel{\eqref{second.step.1}+\eqref{first.step.1}}{\subseteq} \, \sum\nolimits_{j=1}^{n-1} \sum\nolimits_{i=0}^{m-1} a^{i}\rAnn(p(a)^{j}) \\ & \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:annihilator.invariant}}{\subseteq} \, \sum\nolimits_{j=1}^{n-1} \rAnn(p(a)^{j}) \, \subseteq \, \rAnn(p(a)^{n-1}) \end{align*} In particular, $1-e \in J \subseteq \rAnn(p^{n-1}(a))$ and hence \begin{displaymath} \left(p^{n-1}\right)_{(1-e)R(1-e)}\!((1-e)a(1-e)) \, \stackrel{\eqref{eq-4}+\ref{lemma:properties.polynomials}\ref{lemma:evaluation.eRe}}{=} \, p(a)^{n-1}(1-e)\, = \, 0. \end{displaymath} Our induction hypothesis now asserts that $(1-e)a(1-e)$ is matricial in $(1-e)R(1-e)$. According to~\eqref{eq-4} and Remark~\ref{remark:matricial}\ref{remark:matricial.sum}, it follows that $a$ is matricial in $R$, which completes the induction. \end{proof} Everything is prepared to deduce the desired characterization of algebraic elements of non-discrete irreducible, continuous rings. \begin{thm}\label{theorem:matrixrepresentation.case.algebraic} Let $R$ be a non-discrete irreducible, continuous ring, let $K\defeq \ZZ(R)$. An element of $R$ is algebraic over $K$ if and only if it is matricial in $R$. \end{thm} \begin{proof} ($\Longrightarrow$) Let $a \in R$ be algebraic over $K$. Then there exists $f\in K[X]\setminus \{0\}$ such that $f(a)=0$. 
Since $K[X]$ is a principal ideal domain, there exist $m\in\N_{>0}$, $n_{1},\ldots,n_{m}\in \N_{>0}$ and $p_{1},\ldots,p_{m}\in K[X]\setminus \{0\}$ irreducible and pairwise distinct such that $f=p_{1}^{n_{1}}\cdots p_{m}^{n_{m}}$. It follows that $p_{1}^{n_{1}},\ldots, p_{m}^{n_{m}}$ are pairwise coprime. Upon permuting the indices, we may and will assume without loss of generality that \begin{displaymath} \{ i \in \{1,\ldots,m\} \mid p_{i}^{n_{i}}(a) \ne 0\} \, = \, \{ 1,\ldots,\ell \} \end{displaymath} for some $\ell\in\{1,\ldots,m\}$. An $\ell$-fold application of Lemma~\ref{lemma:properties.polynomials}\ref{lemma:bezout} shows that \begin{align*} R \, &= \, \rAnn(f(a)) \, = \, \rAnn(p_{1}^{n_{1}}(a))\oplus\rAnn(p_{2}^{n_{2}}(a)\cdot\ldots\cdot p_{m}^{n_{m}}(a))\\ &= \, \ldots\\ &= \, \rAnn(p_{1}^{n_{1}}(a))\oplus \ldots\oplus \rAnn(p_{\ell}^{n_{\ell}}(a)) \oplus \rAnn(p_{\ell+1}^{n_{\ell+1}}(a)\cdot\ldots\cdot p_{m}^{n_{m}}(a)) \\ &= \, \rAnn(p_{1}^{n_{1}}(a))\oplus \ldots\oplus \rAnn(p_{\ell}^{n_{\ell}}(a)) . \end{align*} According to Remark~\ref{remark:independence.ideals.idempotents}, there exist pairwise orthogonal $e_{1},\ldots,e_{\ell}\in\E(R)\setminus \{0\}$ such that $e_{1}+\ldots+e_{\ell}=1$ and $e_{i}R=\rAnn(p_{i}^{n_{i}}(a))$ for each $i\in\{1,\ldots,\ell\}$. Moreover, for every $i\in\{1,\ldots,\ell\}$, \begin{displaymath} ae_{i}R \, = \, a\rAnn(p_{i}^{n_{i}}(a)) \, \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:annihilator.invariant}}{\subseteq} \, \rAnn(p_{i}^{n_{i}}(a)) \, = \, e_{i}R, \end{displaymath} thus $ae_{i} = e_{i}ae_{i}$. Consequently, as $e_{1},\ldots,e_{\ell}$ are pairwise orthogonal, \begin{displaymath} a \, = \, (e_{1}+\ldots+e_{\ell})a(e_{1}+\ldots+e_{\ell}) \, = \, e_{1}ae_{1}+\ldots+e_{\ell}ae_{\ell} \in e_{1}Re_{1} + \ldots + e_{\ell}Re_{\ell}. 
\end{displaymath} For each $i\in\{1,\ldots,\ell\}$, we see that \begin{displaymath} 0 \, = \, p^{n_{i}}_{i}(a)e_{i} \, \stackrel{\ref{lemma:properties.polynomials}\ref{lemma:evaluation.eRe}}{=} \, (p_{i}^{n_{i}})_{e_{i}Re_{i}}(e_{i}ae_{i}), \end{displaymath} whence $e_{i}ae_{i}$ is matricial in $e_{i}Re_{i}$ by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Lemma~\ref{lemma:matrixrepresentation.case.p^n=0}. In turn, $a$ is matricial in $R$ due to Remark~\ref{remark:matricial}\ref{remark:matricial.sum}. ($\Longleftarrow$) Any matricial unital $K$-algebra is finite-dimensional over $K$, and any element of a finite-dimensional $K$-algebra is $K$-algebraic. This implies the claim. \end{proof} \section{Approximation by matrix algebras}\label{section:approximation.by.matrix.algebras} The purpose of this section is to prove two density results for non-discrete irreducible, continuous rings, namely Theorem~\ref{theorem:matricial.dense} and Corollary~\ref{corollary:simply.special.dense}, which may be viewed as refinements of work of von Neumann~\cite{VonNeumann37} and Halperin~\cite{Halperin62}. The following terminology, inspired by Definition~\ref{definition:matricial}, will be convenient. \begin{definition}\label{definition:simply.special} Let $R$ be an irreducible, regular ring, and let $K \defeq \ZZ(R)$. An element $a\in R$ will be called \begin{enumerate}[label=---\,] \item \emph{special} if there exist $m\in\N_{>0}$, $n_{1},\ldots,n_{m}\in\N_{>0}$ and a unital $K$-algebra embedding $\phi\colon\prod\nolimits_{i=1}^m\M_{n_{i}}(K)\to R$ such that $a\in\phi(\prod\nolimits_{i=1}^{m} \SL_{n_{i}}(K))$, \item \emph{simply special} if there exist a positive integer $n\in\N_{>0}$ and a unital $K$-algebra embedding $\phi\colon \M_{n}(K)\to R$ such that $a\in \phi(\SL_{n}(K))$. \end{enumerate} \end{definition} Note that any special element of an irreducible, regular ring is invertible. 
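As a concrete illustration of Definition~\ref{definition:simply.special} (a sketch only, carried out in the discrete toy model $R = \M_{5}(\mathbb{Q})$ rather than in a non-discrete ring, with an ad hoc block-diagonal embedding $\phi$):

```python
import numpy as np

# Toy model (illustration only): R = M_5(Q), with the unital K-algebra
# embedding phi: M_2(K) x M_3(K) -> R given by block-diagonal matrices.
def phi(A, B):
    out = np.zeros((5, 5))
    out[:2, :2] = A
    out[2:, 2:] = B
    return out

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # A in SL_2(K)
B = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])     # B in SL_3(K)
a = phi(A, B)                        # a special element of this toy R

assert np.isclose(np.linalg.det(A), 1.0) and np.isclose(np.linalg.det(B), 1.0)
assert np.allclose(phi(np.eye(2), np.eye(3)), np.eye(5))   # phi is unital
assert np.isclose(np.linalg.det(a), 1.0)                   # a is invertible
```

The matrices $A$, $B$ and the embedding $\phi$ are hypothetical choices made purely for the check; any element of $\phi(\SL_{2}(K) \times \SL_{3}(K))$ is special and, as noted above, invertible.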
Moreover, we record the following observation for the proof of Theorem~\ref{theorem:decomposition}. \begin{remark}\label{remark:matricial.conjugation.invariant} Let $R$ be an irreducible, regular ring. The set of matricial (resp., simply matricial, special, simply special) elements of $R$ is invariant under the action of $\GL(R)$ on $R$ by conjugation. \end{remark} The subsequent observations concerning matricial subalgebras will turn out to be useful in the proofs of Theorem~\ref{theorem:matricial.dense} and Proposition~\ref{proposition:simply.special.dense}. \begin{lem}\label{lemma:matricial.algebra.blow.up} Let $R$ be a non-discrete irreducible, continuous ring, let $K \defeq \ZZ(R)$ and $m,n\in \N_{>0}$. Every unital $K$-subalgebra of $R$ isomorphic to $\M_{n}(K)$ is contained in some unital $K$-subalgebra of $R$ isomorphic to $\M_{mn}(K)$. \end{lem} \begin{proof} Let $S$ be a unital $K$-subalgebra of $R$ isomorphic to $\M_n(K)$. By Remark~\ref{remark:matrixunits.ring.homomorphism}, there exists a family of matrix units $s \in R^{n\times n}$ for $R$ such that $S=\sum\nolimits_{i,j=1}^{n}Ks_{ij}$. Since $R$ is non-discrete, using Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete}, Lemma~\ref{lemma:order}\ref{lemma:order.1} and Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.1}, we find pairwise orthogonal elements $e_{1},\ldots,e_{m}\in\E(R)$ such that $s_{11}=\sum\nolimits_{k=1}^{m}e_{k}$ and \begin{displaymath} \forall k \in \{1,\ldots,m\} \colon \qquad \rk_{R}(e_{k}) = \tfrac{\rk_{R}(s_{11})}{m}. \end{displaymath} The former entails that $e_{k} \in s_{11}Rs_{11}$ for each $k \in \{1,\ldots,m\}$. 
Since $s_{11}Rs_{11}$ is an irreducible, continuous ring by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and \begin{displaymath} \forall k \in \{1,\ldots,m\} \colon \qquad \rk_{s_{11}Rs_{11}}(e_{k}) \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \tfrac{1}{\rk_{R}(s_{11})}\rk_{R}(e_{k}) = \tfrac{1}{m}, \end{displaymath} thanks to Lemma~\ref{lemma:matrixunits.idempotents} there exists a family of matrix units $t \in (s_{11}Rs_{11})^{m \times m}$ for $s_{11}Rs_{11}$ such that $t_{kk} = e_{k}$ for each $k \in \{1,\ldots,m\}$. Note that \begin{displaymath} \{1,\ldots,n\}\times \{1,\ldots,m\} \, \longrightarrow \, \{1,\ldots,mn\}, \quad (i,k) \, \longmapsto \, m(i-1)+k \end{displaymath} is a bijection. Let us define $r\in R^{mn\times mn}$ by setting \begin{displaymath} r_{m(i-1)+k,\,m(j-1)+\ell} \, \defeq \, s_{i1}t_{k\ell}s_{1j} \end{displaymath} for all $i,j\in\{1,\ldots,n\}$ and $k,\ell\in\{1,\ldots,m\}$. We see that \begin{align*} r_{m(i-1)+k,\,m(j-1)+\ell} \cdot r&_{m(j-1)+\ell,\,m(i'-1)+k'} \, = \, s_{i1}t_{k\ell}s_{1j}s_{j1}t_{\ell k'}s_{1i'} \, = \, s_{i1}t_{k\ell}s_{11}t_{\ell k'}s_{1i'}\\ &=s_{i1}t_{k\ell}t_{\ell k'}s_{1i'} \, = \, s_{i1}t_{kk'}s_{1i'} \, = \, r_{m(i-1)+k,m(i'-1)+k'} \end{align*} for all $i,i',j\in\{1,\ldots,n\}$ and $k,k',\ell\in\{1,\ldots,m\}$. Also, if $i,i',j,j'\in\{1,\ldots,n\}$ and $k,k',\ell,\ell'\in\{1,\ldots,m\}$ are such that $m(j-1)+\ell\neq m(i'-1)+k'$, then either $j\neq i'$ and thus \begin{align*} r_{m(i-1)+k,\,m(j-1)+\ell} \cdot r_{m(i'-1)+k',\,m(j'-1)+\ell'} \, = \, s_{i1}t_{k\ell}\smallunderbrace{s_{1j}s_{i'1}}_{=\,0}t_{k'\ell'}s_{1j'} \, = \, 0, \end{align*} or $j=i'$ and therefore $\ell\neq k'$, whence \begin{align*} r_{m(i-1)+k,\, m(j-1)+\ell} \cdot r_{m(i'-1)+k',\, m(j'-1)+\ell'} \, = \, s_{i1}\smallunderbrace{t_{k\ell}t_{k'\ell'}}_{=\,0}s_{1j'} \, = \, 0. 
\end{align*} Moreover, for all $i,j\in\{1,\ldots,n\}$, \begin{align}\label{eqstar1} s_{ij} \, = \, s_{i1}s_{11}s_{1j} \, = \, s_{i1}\!\left(\sum\nolimits_{k=1}^{m}t_{kk}\right)\!s_{1j} \, &= \, \sum\nolimits_{k=1}^{m}s_{i1}t_{kk}s_{1j}\nonumber\\ &= \, \sum\nolimits_{k=1}^{m}r_{m(i-1)+k,\,m(j-1)+k}. \end{align} In particular, we conclude that \begin{displaymath} \sum\nolimits_{i=1}^{n}\sum\nolimits_{k=1}^{m} r_{m(i-1)+k,\,m(i-1)+k} \stackrel{\eqref{eqstar1}}{=} \, \sum\nolimits_{i=1}^{n}s_{ii} \, = \, 1. \end{displaymath} This shows that $r\in R^{mn\times mn}$ is a family of matrix units for $R$. In turn, by Remark~\ref{remark:matrixunits.ring.homomorphism}, \begin{displaymath} T \, \defeq \, \sum\nolimits_{i,j=1}^{n}\sum\nolimits_{k,\ell=1}^{m}K r_{m(i-1)+k,\,m(j-1)+\ell} \end{displaymath} is a unital $K$-subalgebra of $R$ with $T \cong_{K} \M_{mn}(K)$. Finally, \begin{align*} S \, &= \, \sum\nolimits_{i,j=1}^{n}Ks_{ij} \, \stackrel{\eqref{eqstar1}}{=} \, \sum\nolimits_{i,j=1}^{n}K\!\left(\sum\nolimits_{k=1}^{m} r_{m(i-1)+k,m(j-1)+k}\right)\\ &\subseteq \, \sum\nolimits_{i,j=1}^{n}\sum\nolimits_{k=1}^{m}K r_{m(i-1)+k,\,m(j-1)+k} \, \subseteq \, T. \qedhere \end{align*} \end{proof} \begin{lem}\label{lemma:sum.subalgebras.eRe.matricial} Let $m \in \N_{>0}$. Let $R$ be an irreducible, continuous ring, let $K\defeq \ZZ(R)$, $e_{1},\ldots,e_{m} \in \E(R)$ pairwise orthogonal with $1=\sum\nolimits_{i=1}^{m}e_{i}$, and $t,r_{1},\ldots,r_{m} \in \N_{>0}$~with \begin{displaymath} \forall i\in\{1,\ldots,m\}\colon\quad \rk_{R}(e_{i}) = \tfrac{r_{i}}{t}. \end{displaymath} For each $i \in \{ 1,\ldots,m \}$, let $S_{i}$ be a unital $K$-subalgebra of $e_{i}Re_{i}$ with $S_{i} \cong_{K} \M_{r_{i}}(K)$. Then there exists a unital $K$-subalgebra $S\leq R$ with $\sum\nolimits_{i=1}^{m} S_{i} \leq S \cong_{K} \M_{t}(K)$. \end{lem} \begin{proof} We first prove the claim for $m = 2$. 
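Before continuing, the doubling construction from the proof of the preceding lemma can be checked in a concrete model: taking $R = \M_{mn}(\mathbb{Q})$ with $s_{ij} = E_{ij} \otimes I_{m}$ and $t_{k\ell} = E_{11} \otimes E_{k\ell}$ (choices made purely for illustration), the family $r$ consists of Kronecker products and satisfies the matrix-unit relations:

```python
import numpy as np

n, m = 2, 3   # blow up M_n(K) to M_{mn}(K); toy model R = M_{mn}(Q)

def E(p, q, size):
    """Standard matrix unit E_{pq} (1-based indices) in M_size."""
    out = np.zeros((size, size))
    out[p - 1, q - 1] = 1.0
    return out

# s_{ij} = E_{ij} (x) I_m plays the role of the matrix units spanning S,
# t_{kl} = E_{11} (x) E_{kl} that of the matrix units for s_{11} R s_{11}.
s = {(i, j): np.kron(E(i, j, n), np.eye(m))
     for i in range(1, n + 1) for j in range(1, n + 1)}
t = {(k, l): np.kron(E(1, 1, n), E(k, l, m))
     for k in range(1, m + 1) for l in range(1, m + 1)}

# r_{m(i-1)+k, m(j-1)+l} := s_{i1} t_{kl} s_{1j}, indexed via the bijection
# (i, k) |-> m(i-1)+k from the proof.
r = {(m * (i - 1) + k, m * (j - 1) + l): s[i, 1] @ t[k, l] @ s[1, j]
     for i in range(1, n + 1) for j in range(1, n + 1)
     for k in range(1, m + 1) for l in range(1, m + 1)}

# Matrix-unit relations: r_{pq} r_{uw} = [q = u] r_{pw}, and the diagonal
# members sum to the identity.
for (p, q), rpq in r.items():
    for (u, w), ruw in r.items():
        expected = r[p, w] if q == u else np.zeros((m * n, m * n))
        assert np.allclose(rpq @ ruw, expected)
assert np.allclose(sum(r[p, p] for p in range(1, m * n + 1)), np.eye(m * n))

# Each s_{ij} decomposes along r as in the displayed identity above.
assert all(np.allclose(s[i, j], sum(r[m * (i - 1) + k, m * (j - 1) + k]
                                    for k in range(1, m + 1)))
           for i in range(1, n + 1) for j in range(1, n + 1))
```

In this model the computation reduces to $s_{i1} t_{k\ell} s_{1j} = E_{ij} \otimes E_{k\ell}$, which is exactly the standard matrix unit of $\M_{mn}(\mathbb{Q})$ at position $(m(i-1)+k,\, m(j-1)+\ell)$.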
For each $\ell \in \{1,2\}$, let us consider a unital $K$-subalgebra $S_{\ell} \leq e_{\ell}Re_{\ell}$ with $S_{\ell} \cong_{K} \M_{r_{\ell}}(K)$, so that Remark~\ref{remark:matrixunits.ring.homomorphism} asserts the existence of a family of matrix units $s^{(\ell)} \in (e_{\ell}Re_{\ell})^{r_{\ell}\times r_{\ell}}$ for $e_{\ell}Re_{\ell}$ with $S_{\ell} = \sum\nolimits_{i,j=1}^{r_{\ell}} Ks_{i,j}^{(\ell)}$. For every $\ell \in \{ 1,2 \}$, we observe that \begin{displaymath} \rk_{R}\!\left(s_{1,1}^{(\ell)}\right)\! \, \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}+\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.matrixunits}}{=} \, \tfrac{1}{r_{\ell}}\rk_{R}(e_{\ell}) \, = \, \tfrac{1}{r_{\ell}}\tfrac{r_{\ell}}{t} \, = \, \tfrac{1}{t} . \end{displaymath} Hence, by Lemma~$\ref{lemma:conjugation.idempotent}$, there exists $g\in \GL(R)$ such that $gs_{1,1}^{(1)}=s_{1,1}^{(2)}g$. Note that \begin{displaymath} 1 \, = \, \rk_{R}(e_{1}) + \rk_{R}(e_{2}) \, = \, \tfrac{r_{1}}{t} + \tfrac{r_{2}}{t} \, = \, \tfrac{r_{1}+r_{2}}{t}, \end{displaymath} thus $t = r_{1} + r_{2}$. Now, let us define $\tilde{s}\in R^{t\times t}$ by setting \begin{displaymath} \tilde{s}_{i,j} \, \defeq \, \begin{cases} \, s_{i,j}^{(1)} & \text{if } i,j \in \{1,\ldots,r_{1}\} , \\ \, s_{i-r_1,j-r_1}^{(2)} & \text{if } i,j\in\{r_{1}+1,\ldots,t\}, \\ \, s_{i,1}^{(1)}g^{-1}s_{1,j-r_1}^{(2)}& \text{if } i\in\{1,\ldots,r_{1}\}, \, j\in\{r_{1}+1,\ldots,t\}, \\ \, s_{i-r_1,1}^{(2)}gs_{1,j}^{(1)}& \text{if } i\in\{r_{1}+1,\ldots,t\}, \, j\in\{1,\ldots,r_1\} . \end{cases} \end{displaymath} Straightforward calculations show that $\tilde{s}$ is a family of matrix units for $R$. Therefore, by Remark~\ref{remark:matrixunits.ring.homomorphism}, $S \defeq \sum\nolimits_{i,j=1}^{t} K\tilde{s}_{i,j}$ is a unital $K$-subalgebra of $R$ with $S \cong_{K} \M_{t}(K)$. 
Finally, \begin{displaymath} S_{1} + S_{2} \, = \, \sum\nolimits_{i,j=1}^{r_{1}} Ks_{i,j}^{(1)} + \sum\nolimits_{i,j=1}^{r_{2}} Ks_{i,j}^{(2)} \, \subseteq \, \sum\nolimits_{i,j=1}^{t} K\tilde{s}_{i,j} \, = \, S. \end{displaymath} This completes the proof for $m=2$. The proof of the general statement proceeds by induction on $m \in \N_{>0}$. For $m=1$, the statement is trivial. For the induction step, suppose that the claim is true for some $m \in \N_{>0}$. Let $e_{1},\ldots,e_{m+1} \in\E(R)$ be pairwise orthogonal with $1=\sum\nolimits_{i=1}^{m+1}e_{i}$ and $t,r_{1},\ldots,r_{m+1} \in \N_{>0}$ be such that $\rk_{R}(e_{i}) = \tfrac{r_{i}}{t}$ for each $i \in \{1,\ldots,m+1\}$. Furthermore, for every $i \in \{ 1,\ldots,m+1 \}$, let $S_{i}$ be a unital $K$-subalgebra of $e_{i}Re_{i}$ with $S_{i} \cong_{K} \M_{r_{i}}(K)$. Considering \begin{displaymath} e \, \defeq \, \sum\nolimits_{i=1}^{m} e_{i} \, \stackrel{\ref{remark:quantum.logic}\ref{remark:quantum.logic.2}}{\in} \, \E(R) \end{displaymath} and $t' \defeq \sum_{i=1}^{m} r_{i} = t\sum_{i=1}^{m} \rk_{R}(e_{i}) = t\rk_{R}(e)$, we see that \begin{displaymath} \rk_{eRe}(e_{i}) \, \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, \tfrac{1}{\rk_{R}(e)}\rk_{R}(e_{i}) \, = \, \tfrac{t}{t'}\tfrac{r_{i}}{t} \, = \, \tfrac{r_{i}}{t'} \end{displaymath} for each $i \in \{ 1,\ldots,m \}$. Thus, by our induction hypothesis, there exists a unital $K$-subalgebra $S'$ of $eRe$ such that $\sum_{i=1}^{m} S_{i} \leq S' \cong_{K} \M_{t'}(K)$. Moreover, as \begin{displaymath} e \perp e_{m+1}, \qquad \rk_{R}(e) = \tfrac{t'}{t} , \qquad \rk_{R}(e_{m+1}) = \tfrac{r_{m+1}}{t} , \end{displaymath} the case $m=2$ established above yields a unital $K$-subalgebra $S$ of $R$ such that $\M_{t}(K) \cong_{K} S \geq S'+S_{m+1} \geq \sum\nolimits_{i=1}^{m+1} S_{i}$, as desired. 
\end{proof} We arrive at our main density theorem, which builds on~\cite{VonNeumann37,Halperin62}: the relevant result~\cite[Theorem~8.1]{Halperin62} had been announced in~\cite[\S7, (21)]{VonNeumann37} without proof. \begin{thm}\label{theorem:matricial.dense} Let $R$ be a non-discrete irreducible, continuous ring. Then the set of simply matricial elements of $R$ is dense in $(R,d_{R})$. \end{thm} \begin{proof} Define $K\defeq \ZZ(R)$. Let $a\in R$ and $\epsilon\in \R_{>0}$. Due to~\cite[Theorem~8.1]{Halperin62}, there is a $K$-algebraic element $b\in R$ with $d_R(a,b)\leq \tfrac{\epsilon}{2}$. By Theorem~\ref{theorem:matrixrepresentation.case.algebraic} and Remark~\ref{remark:matricial}\ref{remark:matricial.decomposition}, we find $m \in\N_{>0}$, $n_{1},\ldots,n_{m} \in \N_{>0}$, $f_{1},\ldots,f_{m} \in \E(R)\setminus \{ 0 \}$ pairwise orthogonal with $1 = \sum_{i=1}^{m} f_{i}$, and unital $K$-subalgebras $R_{1}\leq f_{1}Rf_{1}, \, \ldots, \, R_{m}\leq f_{m}Rf_{m}$ such that $b \in R_{1} + \ldots + R_{m}$ and $R_{i} \cong_{K} \M_{n_{i}}(K)$ for each $i \in \{ 1,\ldots,m\}$. It follows that $\sum\nolimits_{i=1}^{m}\rk_{R}(f_{i}) = 1$ and $\rk_{R}(f_{i}) > 0$ for each $i \in \{1,\ldots,m\}$. Since $\Q$ is dense in $\R$, there exist $q_{1},\ldots,q_{m} \in \Q_{>0}$ such that $1-\tfrac{\epsilon}{2}\leq \sum\nolimits_{i=1}^{m}q_{i} < 1$ and \begin{displaymath} \forall i \in \{1,\ldots,m\} \colon \qquad q_{i} \, \leq \, \rk_{R}(f_{i}). 
\end{displaymath} For each $i \in \{1,\ldots,m\}$, recall that $f_{i}Rf_{i}$ is a non-discrete irreducible, continuous ring with $\rk_{f_{i}Rf_{i}} = \tfrac{1}{\rk_R(f_{i})}{{\rk_{R}}\vert_{f_{i}Rf_{i}}}$ and $\ZZ(f_{i}Rf_{i}) = Kf_{i} \subseteq R_{i}$ by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and observe that \begin{displaymath} \dim_{\ZZ(f_{i}Rf_{i})}(R_{i}) \, = \, \dim_{K}(R_{i}) \, = \, \dim_{K}(\M_{n_{i}}(K)) \, = \, n_{i}^{2} \, < \, \infty, \end{displaymath} wherefore~\cite[Corollary~7.20(1)]{SchneiderGAFA} and~\cite[Corollary~9.12(1)]{SchneiderGAFA} assert the existence of some $e_{i} \in \E(f_{i}Rf_{i})$ such that $\rk_{f_{i}Rf_{i}}(e_i) = \tfrac{q_{i}}{\rk_R(f_{i})}$, i.e., $\rk_{R}(e_{i}) = q_{i}$, and \begin{equation}\label{star} \forall x \in R_{i} \colon \qquad e_{i}xe_{i} \, = \, xe_{i} . \end{equation} Thus, by~\cite[Lemma~10.5(1)]{SchneiderGAFA}, for each $i\in\{1,\ldots,m \}$, the map \begin{displaymath} R_{i} \, \longrightarrow \, e_{i}R_{i}e_{i}, \quad x \, \longmapsto \, xe_{i} \end{displaymath} constitutes a surjective unital non-zero $K$-algebra homomorphism, hence an isomorphism by simplicity of $R_{i} \cong_{K} \M_{n_{i}}(K)$ (see, e.g.,~\cite[IX.1, Corollary~1.5, p.~361]{GrilletBook}), so that $S_{i} \defeq e_{i}R_{i}e_{i} \cong_{K} R_{i} \cong_{K} \M_{n_{i}}(K)$. Since $e_{1} \in \E(f_{1}Rf_{1}), \, \ldots , \, e_{m} \in \E(f_{m}Rf_{m})$, we see that $e_{1},\ldots,e_{m}$ are pairwise orthogonal, too. Concerning \begin{displaymath} e_{m+1} \, \defeq \, 1-\sum\nolimits_{i=1}^{m} e_{i} \, \stackrel{\ref{remark:quantum.logic}}{\in} \, \E(R), \end{displaymath} we note that \begin{displaymath} q_{m+1} \, \defeq \, \rk_{R}(e_{m+1}) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\sum\nolimits_{i=1}^{m}\rk_{R}(e_{i}) \, = \, 1-\sum\nolimits_{i=1}^{m}q_{i} \, \in \, \!\left(0,\tfrac{\epsilon}{2}\right]\!\cap\Q. 
\end{displaymath} As $\rk_{R}(e_{m+1})>0$ and therefore $e_{m+1}\neq 0$, the unital $K$-subalgebra \begin{displaymath} S_{m+1} \, \defeq \, Ke_{m+1} \, \leq \, e_{m+1}Re_{m+1} \end{displaymath} is isomorphic to $K \cong_{K} \M_{n_{m+1}}(K)$, where $n_{m+1}\defeq 1$. Now, consider \begin{displaymath} c \, \defeq \, b(1-e_{m+1}) \, = \, \sum\nolimits_{i=1}^{m}be_{i} \, = \, \sum\nolimits_{i=1}^{m}\smallunderbrace{bf_{i}}_{\in R_{i}}\!e_{i} \, \stackrel{\eqref{star}}{=} \, \sum\nolimits_{i=1}^{m} e_{i}bf_{i}e_{i} \, \in \, \sum\nolimits_{i=1}^{m+1} S_{i} \end{displaymath} and observe that \begin{displaymath} d_{R}(b,c) \, = \, \rk_{R}(b-c) \, = \, \rk_{R}(be_{m+1}) \, \leq \, \rk_{R}(e_{m+1}) \, \leq \, \tfrac{\epsilon}{2}, \end{displaymath} thus \begin{displaymath} d_{R}(a,c) \, \leq \, d_{R}(a,b) + d_{R}(b,c) \, \leq \, \epsilon. \end{displaymath} We will show that $c$ is simply matricial, which will finish the proof. To this end, choose $t,r_{1},\ldots,r_{m+1}\in\N_{>0}$ such that \begin{displaymath} \forall i\in\{1,\ldots,m+1\}\colon\qquad q_{i} = \tfrac{r_{i}}{t},\quad n_{i}\vert r_{i}. \end{displaymath} For each $i \in \{1,\ldots,m+1\}$, since $e_{i}Re_{i}$ is a non-discrete irreducible, continuous ring by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, \begin{displaymath} \ZZ(e_{i}Re_{i}) \, \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, Ke_{i} \, \leq \, S_{i} \, \leq \, e_{i}Re_{i}, \end{displaymath} and $S_{i} \cong_{K} \M_{n_{i}}(K)$, we infer from Lemma~\ref{lemma:matricial.algebra.blow.up} the existence of a unital $K$-subalgebra $T_{i} \leq e_{i}Re_{i}$ with $S_{i} \subseteq T_{i} \cong_{K} \M_{r_{i}}(K)$. 
Moreover, as $e_{1},\ldots,e_{m+1}$ are pairwise orthogonal with $1=\sum\nolimits_{i=1}^{m+1}e_{i}$ and \begin{displaymath} \forall i\in\{1,\ldots,m+1\}\colon\qquad\rk_{R}(e_{i}) = q_{i} = \tfrac{r_{i}}{t}, \end{displaymath} by Lemma~$\ref{lemma:sum.subalgebras.eRe.matricial}$ there is a unital $K$-subalgebra $T \leq R$ with $\sum_{i=1}^{m+1}T_{i} \subseteq T \cong_{K} \M_{t}(K)$. We conclude that \begin{displaymath} c \, \in \, \sum\nolimits_{i=1}^{m+1} S_{i} \, \subseteq \, \sum\nolimits_{i=1}^{m+1} T_{i} \, \subseteq \, T \, \cong_{K} \, \M_{t}(K) , \end{displaymath} so $c$ is simply matricial. \end{proof} We deduce our second density result, which concerns unit groups of non-discrete irreducible, continuous rings. \begin{prop}\label{proposition:simply.special.dense} Let $R$ be a non-discrete irreducible, continuous ring, let $K \defeq \ZZ(R)$, let $a \in \GL(R)$ and $\epsilon \in \R_{>0}$. Then there exists $n \in \N_{>0}$ such that, for every $m \in \N_{>0}$ with $n \vert m$, there exist a unital $K$-algebra embedding $\phi \colon \M_{m}(K) \to R$ and an element $A \in \SL_{m}(K)$ such that $d_{R}(a,\phi(A)) < \epsilon$. \end{prop} \begin{proof} Let $a \in \GL(R)$ and $\epsilon \in \R_{>0}$. By Theorem~\ref{theorem:matricial.dense}, there exist $n_{0} \in \N_{>0}$ and a unital $K$-subalgebra $S \leq R$ isomorphic to $\M_{n_{0}}(K)$ with $\inf_{b \in S} d_{R}(a,b) < \tfrac{\epsilon}{3}$. Define \begin{displaymath} n \, \defeq \, n_{0} \cdot \left\lceil\tfrac{3}{\epsilon}\right\rceil \, \in \, \N_{>0}. \end{displaymath} Now, let $m \in \N_{>0}$ with $n \vert m$. Thanks to Lemma~\ref{lemma:matricial.algebra.blow.up}, there exists a unital $K$-algebra embedding $\phi \colon \M_{m}(K) \to R$ such that $S \leq \phi (\M_{m}(K))$. In particular, we find $B \in \M_{m}(K)$ such that $d_{R}(a,\phi(B)) < \tfrac{\epsilon}{3}$. 
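The determinant-correction step carried out next can be sketched numerically (a toy example over the reals, standing in for $K$): rescaling a single column of an invertible matrix $C$ by $\det(C)^{-1}$ lands in $\SL_{m}$ while moving a rank distance of at most $\tfrac{1}{m}$.

```python
import numpy as np

# Toy illustration of the determinant correction: given invertible C,
# divide one column by det(C) to obtain A in SL_m; then C - A is
# supported on a single column, so rk(C - A) <= 1 and the normalized
# rank distance satisfies d(C, A) = rk(C - A)/m <= 1/m.
m = 4
rng = np.random.default_rng(0)
C = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # generically invertible
assert abs(np.linalg.det(C)) > 1e-8

A = C.copy()
A[:, 0] /= np.linalg.det(C)          # rescale the first column

assert np.isclose(np.linalg.det(A), 1.0)            # A lies in SL_m
assert np.linalg.matrix_rank(C - A) <= 1            # only one column changed
assert np.linalg.matrix_rank(C - A) / m <= 1 / m    # the bound d(C, A) <= 1/m
```

The random matrix $C$ is of course only a stand-in; the argument itself works over an arbitrary field $K$.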
Since \begin{displaymath} \rk_{\M_{m}(K)}(B) \, \stackrel{\ref{remark:rank.function.general}\ref{remark:uniqueness.rank.embedding}}{=} \, \rk_{R}(\phi(B)) \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.continuous}}{>} \, \rk_{R}(a)-\tfrac{\epsilon}{3} \, = \, 1-\tfrac{\epsilon}{3}, \end{displaymath} an elementary argument using linear algebra shows the existence of some $C\in \GL_{m}(K)$ such that $d_{\M_{m}(K)}(B,C) < \tfrac{\epsilon}{3}$. Furthermore, upon multiplying one column (or one row) of $C$ by $\det(C)^{-1}$, we obtain $A\in \SL_{m}(K)$ such that \begin{displaymath} d_{\M_m(K)}(C,A) \, \leq \, \tfrac{1}{m} \, \leq \, \tfrac{1}{n} \, \leq \, \tfrac{\epsilon}{3}. \end{displaymath} Therefore, \begin{align*} d_{R}(a,\phi(A)) \, &\leq \, d_{R}(a,\phi(B)) + d_{R}(\phi(B),\phi(C)) + d_{R}(\phi(C),\phi(A)) \\ &\stackrel{\ref{remark:rank.function.general}\ref{remark:uniqueness.rank.embedding}}{=} \, d_{R}(a,\phi(B)) + d_{\M_m(K)}(B,C) + d_{\M_m(K)}(C,A) \, < \, \epsilon . \qedhere \end{align*} \end{proof} \begin{cor}\label{corollary:simply.special.dense} Let $R$ be a non-discrete irreducible, continuous ring. The set of simply special elements of $R$ is dense in $(\GL(R),d_{R})$. \end{cor} \begin{proof} This is an immediate consequence of Proposition~\ref{proposition:simply.special.dense}. \end{proof} \section{Decomposition into locally special elements}\label{section:decomposition.into.locally.special.elements} This section is devoted to proving our main decomposition result for unit groups of non-discrete irreducible, continuous rings (Theorem~\ref{theorem:decomposition}) and deducing some structural consequences (Theorem~\ref{theorem:width}). The following notion, which rests on Lemma~\ref{lemma:convergence.sequences} and Lemma~\ref{lemma:local.decomposition}, is fundamental to these results. \begin{definition}\label{definition:locally.special} Let $R$ be an irreducible, continuous ring. 
An element $a\in R$ will be called \emph{locally special} if there exist $(e_{n})_{n\in\N}\in \E(R)^{\N}$ pairwise orthogonal and $(a_{n})_{n \in \N} \in \prod_{n \in \N} e_{n}Re_{n}$ such that \begin{enumerate} \item[---\,] for each $n \in \N$, the element $a_n$ is simply special in $e_{n}Re_{n}$, \item[---\,] and $a = \prod\nolimits_{n\in \N} \left(a_{n}+1-e_{n}\right)$. \end{enumerate} \end{definition} As follows from Lemma~\ref{lemma:convergence.sequences} and Lemma~\ref{lemma:subgroup.unit.group}, any locally special element of an irreducible, continuous ring is necessarily invertible. For the construction of such elements in the proof of Theorem~\ref{theorem:decomposition}, we make a few preparations. \begin{lem}\label{lemma:simply.special.involution} Let $R$ be an irreducible, continuous ring and $e_1,e_2\in \E(R)$ with $e_1\perp e_2$ and $\rk_R(e_1)=\rk_R(e_2)=\tfrac{1}{3}$. Then there exists a simply special involution $u\in \I(R)$ such that $ue_1u=e_2$. \end{lem} \begin{proof} Let $K \defeq \ZZ(R)$ and $e_{3} \defeq 1-e_{1}-e_{2}$. Then $e_{3} \in \E(R)$, the elements $e_{1},e_{2},e_{3}$ are pairwise orthogonal, and $1 = e_{1} + e_{2} + e_{3}$. So, $\rk_{R}(e_{3})=\tfrac{1}{3}$. Thanks to Lemma~\ref{lemma:matrixunits.idempotents}, there exists a family of matrix units $s\in R^{3\times 3}$ for $R$ such that $e_{i} = s_{ii}$ for each $i\in\{1,2,3\}$. Let us define $B,C,U\in \M_{3}(K)$ by \begin{displaymath} B \defeq \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\!, \quad C\defeq \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\!, \quad U\defeq \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \\ \end{pmatrix}\!. \end{displaymath} Then $U\in\SL_{3}(K)$, $U=U^{-1}$ and $UBU=C$. 
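The stated properties of $U$ admit a quick numerical check (over the reals, standing in for an arbitrary $K$):

```python
import numpy as np

# Check that U is an involution in SL_3 conjugating B to C.
B = np.diag([1.0, 0.0, 0.0])
C = np.diag([0.0, 1.0, 0.0])
U = np.array([[0.0, 1.0,  0.0],
              [1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0]])

assert np.isclose(np.linalg.det(U), 1.0)   # U in SL_3
assert np.allclose(U @ U, np.eye(3))       # U = U^{-1}
assert np.allclose(U @ B @ U, C)           # U B U = C
```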
The unital $K$-algebra homomorphism \begin{displaymath} \phi \colon\, \M_{3}(K)\,\longrightarrow\, R, \quad (a_{ij})_{i,j\in\{1,2,3\}}\,\longmapsto\,\sum\nolimits_{i,j=1}^{3} a_{ij}s_{ij} \end{displaymath} satisfies $\phi(B)=s_{11}=e_{1}$ and $\phi(C)=s_{22}=e_{2}$. Consequently, $u\defeq\phi(U) \in \I(R)$ is simply special and \begin{displaymath} ue_{1}u \, = \, \phi(U)\phi(B)\phi(U) \, = \, \phi(UBU) \, = \, \phi(C) \, = \, e_{2} . \qedhere \end{displaymath} \end{proof} \begin{lem}\label{lemma:simply.special.involution.2} Let $R$ be an irreducible, continuous ring and $e,f \in \E(R)\setminus \{ 0 \}$ with $e \perp f $ and $\rk_{R}(e)=2\rk_{R}(f)$. Then there exists $v \in \I(\Gamma_{R}(e+f))$ with $v(e+f)$ simply special in $(e+f)R(e+f)$ such that $vfv \leq e$. \end{lem} \begin{proof} By Lemma~\ref{lemma:order}\ref{lemma:order.1}, there exists $e' \in \E(R)$ with $e' \leq e$ and $\rk_{R}(e') = \rk_{R}(f)$. Using Remark~\ref{remark:quantum.logic}\ref{remark:quantum.logic.2}, we see that $e+f \in \E(R)$, and both $e' \leq e \leq e+f$ and $f \leq e+f$. Our Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} asserts that $S \defeq (e+f)R(e+f)$ is a non-discrete irreducible, continuous ring. We calculate that \begin{align*} \rk_{R}(e+f) \, &= \, \rk_{R}(e) + \rk_{R}(f) \, = \, \tfrac{3}{2}\rk_{R}(e), \\ \rk_{S}(e') \, &\stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, \tfrac{1}{\rk_{R}(e+f)}\rk_{R}(e') \, = \, \tfrac{2\rk_{R}(f)}{3\rk_{R}(e)} \, = \, \tfrac{1}{3}, \\ \rk_{S}(f) \, &\stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, \tfrac{1}{\rk_{R}(e+f)}\rk_{R}(f) \, = \, \tfrac{2\rk_{R}(f)}{3\rk_{R}(e)} \, = \, \tfrac{1}{3} . \end{align*} Moreover, from $e \perp f$ and $e' \leq e$, we deduce that $e' \perp f$. Therefore, by Lemma~\ref{lemma:simply.special.involution}, there exists $u \in \I(S)$ simply special in $S$ such that $e' = ufu$. 
Inspecting \begin{displaymath} v \, \defeq \, u + 1-(e+f) \, \stackrel{\ref{lemma:subgroup.unit.group}}{\in} \, \I(\Gamma_{R}(e+f)) , \end{displaymath} we deduce that \begin{align*} vfv \, &= \, (u + 1 - (e+f))f(u + 1 - (e+f)) \\ & = \, ufu + uf - uf + fu + f - f - fu - f + f \, = \, ufu \, = \, e' \, \leq \, e . \qedhere \end{align*} \end{proof} \begin{lem}\label{lemma:partial.approximation} Let $R$ be a non-discrete irreducible, continuous ring, let $e \in \E(R)\setminus \{ 0 \}$, $t \in (0,\rk_{R}(e)]$, and $a \in \Gamma_{R}(e)$. Then there exist $b \in \Gamma_{R}(e)$ and $f \in \E(R)$ such that \begin{enumerate} \item[---\,] $be$ is simply special in $eRe$, \item[---\,] $f \leq e$ and $\rk_{R}(f) = t$, \item[---\,] $b^{-1}a \in \Gamma_{R}(f)$. \end{enumerate} \end{lem} \begin{proof} Note that $ae \in \GL(eRe)$ by Lemma~\ref{lemma:subgroup.unit.group}. By Corollary~\ref{corollary:simply.special.dense} and Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, there exists $c \in \GL(eRe)$ simply special in $eRe$ and such that \begin{displaymath} \rk_{eRe}(c-ae) \, \leq \, \tfrac{t}{2\rk_{R}(e)}. \end{displaymath} Defining $b \defeq c+1-e \in \Gamma_{R}(e)$, we see that \begin{displaymath} \rk_{R}(b-a) \, = \, \rk_{R}(c-ae) \, \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, \rk_{R}(e)\rk_{eRe}(c-ae) \, \leq \, \tfrac{t}{2} . \end{displaymath} By Lemma~\ref{lemma:invertible.rank.idempotent}, there exists $\bar{e} \in \E(R)$ such that $\bar{e} \leq e$, $b^{-1}a \in \Gamma_{R}({\bar{e}})$, and \begin{displaymath} \rk_{R}(\bar{e}) \, \leq \, 2\rk_{R}\!\left(1-b^{-1}a\right)\! \, = \, 2\rk_{R}(b-a) \, \leq \, t. \end{displaymath} Moreover, by Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}\ref{lemma:order.1}, there exists $f \in \E(R)$ such that $\bar{e} \leq f \leq e$ and $\rk_{R}(f) = t$. 
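The identity $\rk_{R}(1-b^{-1}a) = \rk_{R}(b-a)$ used above reflects the invariance of the rank under multiplication by units; in the matrix model this reads as follows (a toy check over the reals, illustration only):

```python
import numpy as np

# Toy check in M_5(R) of the rank identity used above: for a unit b,
# rk(b - a) = rk(1 - b^{-1} a), since multiplying on the left by the
# unit b^{-1} leaves the rank unchanged.
rng = np.random.default_rng(1)
n = 5
b = np.eye(n) + 0.2 * rng.standard_normal((n, n))   # generically invertible
a = b.copy()
a[:, :2] += rng.standard_normal((n, 2))             # perturb two columns

lhs = np.linalg.matrix_rank(b - a)
rhs = np.linalg.matrix_rank(np.eye(n) - np.linalg.inv(b) @ a)
assert lhs == rhs
assert lhs <= 2                # b - a is supported on two columns
```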
Finally, \begin{displaymath} b^{-1}a \, \in \, \Gamma_{R}(\bar{e}) \, \stackrel{\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.order}}{\subseteq} \, \Gamma_{R}(f) . \qedhere \end{displaymath} \end{proof} We are ready to embark on the proof of the announced decomposition theorem. Our argument proving Theorem~\ref{theorem:decomposition} is inspired by work of Fathi~\cite[Lemme~9]{Fathi} on corresponding properties of the group~$\Aut([0,1],\lambda)$. \begin{thm}\label{theorem:decomposition} Let $R$ be a non-discrete irreducible, continuous ring. Then every element $a\in\GL(R)$ admits a decomposition \begin{displaymath} a \, = \, bu_{1}v_{1}v_{2}u_{2}v_{3}v_{4} \end{displaymath} where \begin{enumerate} \item[---\,] $b \in \GL(R)$ is simply special, \item[---\,] $u_{1},u_{2} \in \GL(R)$ are locally special, \item[---\,] $v_{1},v_{2},v_{3},v_{4} \in \GL(R)$ are locally special involutions. \end{enumerate} \end{thm} \begin{proof} Let $a\in \GL(R)$. Define $a_{1} \defeq a$. By Lemma~\ref{lemma:partial.approximation}, there exist $f_{1} \in \E(R)$ with $\rk_{R}(f_{1}) = \tfrac{1}{8}$ and $b_{1} \in \GL(R)$ simply special with $b_{1}^{-1}a_{1} \in \Gamma_{R}(f_{1})$. By Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}\ref{lemma:order.1}, there exists $e_{1} \in \E(R)$ with $f_{1} \leq e_{1}$ and $\rk_{R}(e_{1}) = \tfrac{1}{2}$. By Lemma~\ref{lemma:order}\ref{lemma:order.2}, we find $(e_{n})_{n\in\N_{>0}}\in\E(R)^{\N_{>0}}$ pairwise orthogonal, starting with the idempotent $e_{1}$ chosen above, such that $\rk_{R}(e_{n}) = 2^{-n}$ for each $n \in \N_{>0}$. Now, by Lemma~\ref{lemma:simply.special.involution.2}, there exists $v_{1} \in \I(\Gamma_{R}(e_{2}+f_{1}))$ with $v_{1}(e_{2}+f_{1})$ simply special in $(e_{2}+f_{1})R(e_{2}+f_{1})$ such that $v_{1}f_{1}v_{1} \leq e_{2}$. 
Building on the above, we recursively construct sequences \begin{displaymath} (f_{i})_{i \in \N_{>1}} \in \E(R)^{\N_{>1}}, \ \ (a_{i})_{i\in\N_{>1}}, \, (b_{i})_{i\in\N_{>1}} \in \GL(R)^{\N_{>1}}, \ \ (v_{i})_{i\in\N_{>1}} \in \I(R)^{\N_{>1}} \end{displaymath} such that, for every $i \in \N_{>0}$, \begin{align} &f_{i}\leq e_{i}, \label{eq:nesting} \\ &a_{i+1},b_{i+1}\in\Gamma_{R}(e_{i+1}) , \ \ b_{i}^{-1}a_{i}\in \Gamma_{R}(f_{i}) , \label{eq:support} \\ &b_{i+1}e_{i+1} \textnormal{ simply special in } e_{i+1}Re_{i+1} , \label{eq:simply.special} \\ &v_{i} \in \Gamma_{R}(e_{i+1}+f_{i}) , \label{eq:support.involution} \\ &v_{i}(e_{i+1}+f_{i}) \textnormal{ simply special in } (e_{i+1}+f_{i})R(e_{i+1}+f_{i}) , \label{eq:simply.special.involution} \\ &v_{i}f_{i}v_{i} \leq e_{i+1} , \label{eq:similar.idempotents} \\ &a_{i+1} = v_{i}b_{i}^{-1}a_{i}v_{i} . \label{eq:connection} \end{align} For the recursive step, let $i\in \N_{>0}$. Consider \begin{displaymath} a_{i+1} \, \defeq \, v_{i}b_{i}^{-1}a_{i}v_{i} \, \stackrel{\eqref{eq:support}}{\in} \, v_{i}\Gamma_{R}(f_{i})v_{i} \, \stackrel{\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.conjugation}}{=} \, \Gamma_{R}(v_{i}f_{i}v_{i}) \, \stackrel{\eqref{eq:similar.idempotents}+\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.order}}{\subseteq} \, \Gamma_{R}(e_{i+1}) . \end{displaymath} By Lemma~\ref{lemma:partial.approximation}, there exist $b_{i+1} \in \Gamma_{R}(e_{i+1})$ with $b_{i+1}e_{i+1}$ simply special in $e_{i+1}Re_{i+1}$ and $f_{i+1} \in \E(R)$ with $f_{i+1} \leq e_{i+1}$ and $\rk_{R}(f_{i+1}) = \tfrac{1}{2}\rk_{R}(e_{i+2})$ such that \begin{displaymath} b_{i+1}^{-1}a_{i+1} \, \in \, \Gamma_{R}(f_{i+1}) . \end{displaymath} Thanks to Lemma~\ref{lemma:simply.special.involution.2}, there exists $v_{i+1} \in \I(\Gamma_{R}(e_{i+2}+f_{i+1}))$ with $v_{i+1}(e_{i+2}+f_{i+1})$ simply special in $(e_{i+2}+f_{i+1})R(e_{i+2}+f_{i+1})$ such that $v_{i+1}f_{i+1}v_{i+1} \leq e_{i+2}$. 
From~\eqref{eq:nesting} and pairwise orthogonality of the sequence $(e_{n})_{n \in \N_{>0}}$, we infer that both $(e_{2i}+f_{2i-1})_{i\in\N_{>0}}$ and $(e_{2i+1}+f_{2i})_{i\in\N_{>0}}$ are pairwise orthogonal. Using Lemma~\ref{lemma:local.decomposition}, we observe that \begin{displaymath} b \, \defeq \, \prod\nolimits_{i=1}^{\infty}b_{2i+1}, \qquad b' \, \defeq \, \prod\nolimits_{i=1}^{\infty}b_{2i} \end{displaymath} are locally special in $R$, and \begin{displaymath} v \, \defeq \, \prod\nolimits_{i=1}^{\infty}v_{2i-1}, \qquad v' \, \defeq \, \prod\nolimits_{i=1}^{\infty}v_{2i} \end{displaymath} are locally special involutions in $R$. Now, for each $i \in \N_{>0}$, since $v_{i} \in \I(\Gamma_{R}(e_{i+1}+f_{i}))$ by~\eqref{eq:support.involution} and \begin{displaymath} b_{i}^{-1}a_{i} \, \stackrel{\eqref{eq:support}}{\in} \, \Gamma_{R}(f_{i}) \, \stackrel{\ref{remark:quantum.logic}\ref{remark:quantum.logic.2}+\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.order}}{\subseteq} \, \Gamma_{R}(e_{i+1}+f_{i}) , \end{displaymath} we see that \begin{displaymath} u_{i} \, \defeq \, b_{i}^{-1}a_{i}v_{i}\!\left( b_{i}^{-1}a_{i}\right)^{-1}\! \, \in \, \I(\Gamma_{R}(e_{i+1}+f_{i})) \end{displaymath} and, moreover, \begin{displaymath} u_{i}(e_{i+1}+f_{i}) \, \stackrel{\ref{lemma:subgroup.unit.group}}{=} \, b_{i}^{-1}a_{i}(e_{i+1}+f_{i})v_{i}(e_{i+1}+f_{i})\!\left( b_{i}^{-1}a_{i}(e_{i+1}+f_{i})\right)^{-1} \end{displaymath} is simply special in $(e_{i+1}+f_{i})R(e_{i+1}+f_{i})$ by~\eqref{eq:simply.special.involution}, Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Remark~\ref{remark:matricial.conjugation.invariant}. In turn, \begin{displaymath} u \, \defeq \, \prod\nolimits_{i=1}^{\infty}u_{2i-1}, \qquad u' \, \defeq \, \prod\nolimits_{i=1}^{\infty}u_{2i} \end{displaymath} are locally special involutions in $R$. 
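The regroupings of infinite products performed next rest on the observation that units of the form $x + 1 - e$ and $y + 1 - f$ commute whenever $e \perp f$; a toy verification in $R = \M_{4}(\mathbb{Q})$ (illustration only):

```python
import numpy as np

# Toy check that units supported on orthogonal idempotents commute.
# Here e = diag(1,1,0,0) and f = 1 - e, with x = a + 1 - e for
# a in GL(eRe) and y = b + 1 - f for b in GL(fRf).
e = np.diag([1.0, 1.0, 0.0, 0.0])
f = np.eye(4) - e                     # in this toy model, 1 - e = f

a = np.zeros((4, 4))
a[:2, :2] = [[2.0, 1.0],
             [1.0, 1.0]]              # a in GL(eRe)
b = np.zeros((4, 4))
b[2:, 2:] = [[0.0, -1.0],
             [1.0,  0.0]]             # b in GL(fRf)

x = a + np.eye(4) - e
y = b + np.eye(4) - f

assert np.allclose(x @ y, y @ x)      # orthogonal supports commute
assert np.allclose(x @ y, a + b)      # the product acts blockwise
```

In block-matrix terms, $x$ and $y$ act on complementary blocks, so their product is block diagonal regardless of the order of multiplication; this is the finite shadow of the infinite regrouping below.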
Moreover, we observe that, for every $i \in \N_{>0}$, \begin{equation}\label{eq:better.connection} b_{i}u_{i}v_{i} \, = \, b_{i}b_{i}^{-1}a_{i}v_{i}\!\left(b_{i}^{-1}a_{i}\right)^{-1}\!v_{i} \, = \, a_{i}v_{i}\!\left(b_{i}^{-1}a_{i}\right)^{-1}\!v_{i} \, \stackrel{\eqref{eq:connection}}{=} \, a_{i}a_{i+1}^{-1} . \end{equation} Combining this with the pairwise orthogonality of $(e_{n})_{n \in \N_{>0}}$ and the resulting pairwise orthogonality of both $(e_{2i}+e_{2i-1})_{i\in\N_{>0}}$ and $(e_{2i+1}+e_{2i})_{i\in\N_{>0}}$, we conclude that \begin{align} b_{1}buv \, &= \, b_{1}\!\left(\prod\nolimits_{i=2}^{\infty} b_{2i-1}\right)\! \left(\prod\nolimits_{i=1}^{\infty}u_{2i-1} \right)\!\left(\prod\nolimits_{i=1}^{\infty}v_{2i-1}\right) \nonumber \\ &= \, b_{1}\!\left(\prod\nolimits_{i=2}^{\infty} b_{2i-1}\right)\! u_{1}\!\left(\prod\nolimits_{i=2}^{\infty}u_{2i-1} \right)\! v_{1}\!\left(\prod\nolimits_{i=2}^{\infty}v_{2i-1}\right) \nonumber \\ &\stackrel{\ref{lemma:convergence.sequences}\ref{lemma:convergence.orthogonal}+\ref{lemma:local.decomposition}+\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.orthogonal}}{=} \, b_{1}u_{1}v_{1}\!\left(\prod\nolimits_{i=2}^{\infty} b_{2i-1}\right)\!\left(\prod\nolimits_{i=2}^{\infty}u_{2i-1} \right)\! \left(\prod\nolimits_{i=2}^{\infty}v_{2i-1}\right) \nonumber \\ &\stackrel{\ref{lemma:local.decomposition}}{=} \, b_{1}u_{1}v_{1}\!\left(\prod\nolimits_{i=2}^{\infty} b_{2i-1}u_{2i-1}v_{2i-1}\right)\!
\, \stackrel{\eqref{eq:better.connection}}{=} \, a_{1}a_{2}^{-1}\!\left(\prod\nolimits_{i=2}^{\infty} a_{2i-1}a_{2i}^{-1}\right)\nonumber \\ &\stackrel{\ref{lemma:local.decomposition}}{=} \, a_{1}a_{2}^{-1}\!\left(\prod\nolimits_{i=2}^{\infty}a_{2i-1}\right)\!\left(\prod\nolimits_{i=2}^{\infty}a_{2i}^{-1}\right) \nonumber \\ & \stackrel{\ref{lemma:convergence.sequences}\ref{lemma:convergence.orthogonal}+\ref{lemma:local.decomposition}+\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.orthogonal}}{=} \, a_{1}\!\left(\prod\nolimits_{i=2}^{\infty}a_{2i-1}\right)\! a_{2}^{-1}\!\left(\prod\nolimits_{i=2}^{\infty}a_{2i}^{-1}\right) \nonumber \\ &= \, a_{1}\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}^{-1}\right)\!, \label{eq:partial.decomposition.1} \\ b'u'v' \, &= \, \left(\prod\nolimits_{i=1}^{\infty}b_{2i}\right)\!\left(\prod\nolimits_{i=1}^{\infty}u_{2i}\right) \! \left(\prod\nolimits_{i=1}^{\infty}v_{2i}\right) \! \, \stackrel{\ref{lemma:local.decomposition}}{=} \, \prod\nolimits_{i=1}^{\infty}b_{2i}u_{2i}v_{2i} \nonumber\\ & \stackrel{\eqref{eq:better.connection}}{=} \, \prod\nolimits_{i=1}^{\infty}a_{2i}a_{2i+1}^{-1} \, \stackrel{\ref{lemma:local.decomposition}}{=} \, \!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}^{-1}\right) \! 
, \label{eq:partial.decomposition.2} \end{align} so that finally \begin{align*} a \, &= \, a_{1} \, = \, a_{1}\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}a_{2i+1}^{-1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}^{-1}a_{2i}\right) \nonumber \\ &\stackrel{\ref{lemma:local.decomposition}}{=} \, a_{1}\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}^{-1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}^{-1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}\right) \\ &\stackrel{\ref{lemma:convergence.sequences}\ref{lemma:convergence.orthogonal}+\ref{lemma:local.decomposition}+\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.orthogonal}}{=} \, a_{1}\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}^{-1}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i}\right)\!\left(\prod\nolimits_{i=1}^{\infty}a_{2i+1}^{-1}\right) \\ &\stackrel{\eqref{eq:partial.decomposition.1}+\eqref{eq:partial.decomposition.2}}{=} \, b_{1}buvb'u'v'. \qedhere \end{align*} \end{proof} The result above has further implications for the associated unit groups, which are detailed in Theorem~\ref{theorem:width}. As usual, a \emph{commutator} in a group $G$ is any element of the form $[x,y] \defeq xyx^{-1}y^{-1}$ where $x,y \in G$. A group $G$ is said to be \emph{perfect} if its commutator subgroup, i.e., the subgroup of $G$ generated by the set $\{ [x,y] \mid x,y \in G \}$, coincides with $G$. \begin{remark}\label{remark:product.involutions.commutators} Let $\ell \in \N$, let $G$ be a group, let $(G_{i})_{i\in I}$ be a family of groups, and let $\phi \colon \prod\nolimits_{i\in I} G_{i} \to G$ be a homomorphism. Then \begin{displaymath} \phi\!\left( \prod\nolimits_{i \in I} \I(G_{i})^{\ell} \right)\! 
\, = \, \phi\!\left( {\I\!\left(\prod\nolimits_{i \in I} G_{i}\right)}^{\ell}\right) \, \subseteq \, \I(G)^{\ell} \end{displaymath} and, for all $m \in \N$ and $w \in \free(m)$, \begin{displaymath} \phi\!\left( \prod\nolimits_{i \in I} w(G_{i})^{\ell} \right)\! \, = \, \phi\!\left(\! {w\!\left(\prod\nolimits_{i \in I} G_{i}\right)}^{\ell}\right) \, \subseteq \, w(G)^{\ell} . \end{displaymath} \end{remark} \begin{lem}\label{lemma:special.decomposition} Let $R$ be a non-discrete irreducible, continuous ring. \begin{enumerate} \item\label{lemma:special.decomposition.commutator} Every simply special element of $R$ is a commutator in $\GL(R)$. \item\label{lemma:special.decomposition.word} Suppose that $\ZZ(R)$ is algebraically closed. If $m \in \N$ and $w \in \free (m)\setminus \{ \epsilon \}$, then every simply special element of $R$ belongs to $w(\GL(R))^{2}$. \end{enumerate} \end{lem} \begin{proof} Let $K \defeq \ZZ(R)$ and $a \in R$ be simply special in $R$. Then there exist $n \in \N_{>0}$ and a unital $K$-algebra embedding $\phi \colon \M_{n}(K) \to R$ such that $a \in \phi(\SL_{n}(K))$. \ref{lemma:special.decomposition.commutator} If $(n,\vert K\vert )\neq (2,2)$, then every element of $\SL_{n}(K)$ is a commutator in $\GL_{n}(K)$ by Thompson's results~\cite{Thompson61,Thompson62,ThompsonPortugaliae}, which implies that $a$ is a commutator in $\GL(R)$, as desired. Suppose now that $(n,\vert K\vert ) = (2,2)$. According to Lemma~\ref{lemma:matricial.algebra.blow.up}, there exists a unital $K$-algebra $S\leq R$ such that $\phi(\M_{2}(K)) \leq S \cong \M_{4}(K)$. In particular, we find a unital $K$-algebra embedding $\psi\colon \M_{4}(K)\to R$ such that $a \in \psi(\M_{4}(K))$. Consider any $B\in \M_{4}(K)$ such that $a = \psi(B)$. 
Then \begin{displaymath} \rk_{\M_{4}(K)}(B) \, \stackrel{\ref{remark:rank.function.general}\ref{remark:uniqueness.rank.embedding}}{=} \, \rk_{R}(\psi(B)) \, = \, \rk_{R}(a) \, \stackrel{\ref{remark:properties.rank.function}\ref{remark:invertible.rank}}{=} \, 1 \end{displaymath} and therefore \begin{displaymath} B \, \stackrel{\ref{remark:properties.rank.function}\ref{remark:invertible.rank}}{\in} \, \GL_{4}(K) \, \stackrel{\vert K \vert = 2}{=} \, \SL_{4}(K) . \end{displaymath} Thus, $B$ is a commutator in $\GL_{4}(K)$ by~\cite{Thompson61}, whence $a$ is a commutator in $\GL(R)$. This completes the argument. \ref{lemma:special.decomposition.word} Let $m \in \N$ and $w \in \free (m)\setminus \{ \epsilon \}$. The work of Borel~\cite[\S1, Theorem~1]{Borel} (see also Larsen's proof~\cite[Lemma~3]{Larsen}) asserts that $w(\SL_{n}(K))$ contains a dense open subset of $\SL_{n}(K)$ (with respect to the Zariski topology), whence $\SL_{n}(K) = w(\SL_{n}(K))^{2}$ thanks to~\cite[I, Proposition~1.3(a), p.~47]{BorelBook}, as argued in~\cite[\S1, p.~157, Remark~3]{Borel}. Consequently, $a \in \phi(\SL_{n}(K)) = \phi(w(\SL_{n}(K))^{2}) \subseteq w(\GL(R))^{2}$. \end{proof} \begin{lem}\label{lemma:locally.special.decomposition} Let $R$ be a non-discrete irreducible, continuous ring. \begin{enumerate} \item\label{lemma:locally.special.decomposition.involutions} Every locally special element of $R$ is a product of $4$ involutions in $\GL(R)$. \item\label{lemma:locally.special.decomposition.commutators} Every locally special element of $R$ is a commutator in $\GL(R)$. \item\label{lemma:locally.special.decomposition.word} Suppose that $\ZZ(R)$ is algebraically closed. If $m \in \N$ and $w \in \free (m)\setminus \{ \epsilon \}$, then every locally special element of $R$ belongs to $w(\GL(R))^{2}$. \end{enumerate} \end{lem} \begin{proof} Let $b \in \GL(R)$ be locally special.
Then we find $(e_{n})_{n\in\N} \in (\E(R)\setminus \{ 0 \})^{\N}$ pairwise orthogonal and $(b_{n})_{n\in\N} \in \prod\nolimits_{n \in \N} \GL(e_{n}Re_{n})$ such that the map \begin{align*} \phi\colon\, \prod\nolimits_{n \in \N} \GL(e_{n}Re_{n}) \, \longrightarrow \, \GL(R), \quad (a_{n})_{n\in\N} \, \longmapsto \, \prod\nolimits_{n \in \N} a_{n}+1-e_{n} \end{align*} satisfies $\phi((b_{n})_{n\in\N}) = b$ and, for each $n\in\N$, the element $b_{n}$ is simply special in $e_{n}Re_{n}$. We proceed by separate arguments for~\ref{lemma:locally.special.decomposition.involutions}, \ref{lemma:locally.special.decomposition.commutators}, and~\ref{lemma:locally.special.decomposition.word}. \ref{lemma:locally.special.decomposition.involutions} Let $K \defeq \ZZ(R)$. Then, for every $n\in\N$, there exist $m_{n} \in \N_{>0}$ and a unital $K$-algebra homomorphism $\psi_{n} \colon \M_{m_{n}}(K) \to e_{n}Re_{n}$ such that $b_{n} \in \psi_{n}(\SL_{m_{n}}(K))$. For each $n\in\N$, due to~\cite{GustafsonHalmosRadjavi76}, every element of $\SL_{m_{n}}(K)$ is a product of $4$ involutions in $\GL_{m_{n}}(K)$, thus $b_{n}$ is a product of $4$ involutions in $\GL(e_{n}Re_{n})$. Therefore, by Lemma~\ref{lemma:local.decomposition} and Remark~\ref{remark:product.involutions.commutators}, the element $b$ is a product of $4$ involutions in $\GL(R)$. \ref{lemma:locally.special.decomposition.commutators} For every $n \in \N$, Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Lemma~\ref{lemma:special.decomposition}\ref{lemma:special.decomposition.commutator} together assert that $b_{n}$ is a commutator in $\GL(e_{n}Re_{n})$. As Lemma~\ref{lemma:local.decomposition} asserts that $\phi$ is a group homomorphism, Remark~\ref{remark:product.involutions.commutators} implies that $b$ is a commutator in $\GL(R)$. \ref{lemma:locally.special.decomposition.word} Let $m \in \N$ and $w \in \free (m)\setminus \{ \epsilon \}$. 
Then \begin{displaymath} b \, = \, \phi((b_{n})_{n\in\N}) \, \stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}+\ref{lemma:special.decomposition}\ref{lemma:special.decomposition.word}}{\in} \, \phi\!\left( \prod\nolimits_{n \in \N} w(\GL(e_{n}Re_{n}))^{2} \right)\! \, \stackrel{\ref{lemma:local.decomposition}+\ref{remark:product.involutions.commutators}}{\subseteq} \, w(\GL(R))^{2} .\qedhere \end{displaymath} \end{proof} Everything is prepared to deduce the announced width bounds. \begin{thm}\label{theorem:width} Let $R$ be a non-discrete irreducible, continuous ring. \begin{enumerate} \item\label{theorem:width.involutions} Every element of $\GL(R)$ is a product of $16$ involutions. \item\label{theorem:width.commutators} Every element of $\GL(R)$ is a product of $7$ commutators. In particular, $\GL(R)$ is perfect. \item\label{theorem:width.word} Suppose that $\ZZ(R)$ is algebraically closed. For all $m \in \N$ and $w \in \free (m)\setminus \{ \epsilon \}$, \begin{displaymath} \qquad \GL(R) \, = \, w(\GL(R))^{14} . \end{displaymath} In particular, $\GL(R)$ is verbally simple. \end{enumerate} \end{thm} \begin{proof} \ref{theorem:width.involutions} This follows from Theorem~\ref{theorem:decomposition} and Lemma~\ref{lemma:locally.special.decomposition}\ref{lemma:locally.special.decomposition.involutions}. \ref{theorem:width.commutators} This is a direct consequence of Theorem~\ref{theorem:decomposition} and Lemma~\ref{lemma:locally.special.decomposition}\ref{lemma:locally.special.decomposition.commutators}. \ref{theorem:width.word} This is due to Theorem~\ref{theorem:decomposition} and Lemma~\ref{lemma:locally.special.decomposition}\ref{lemma:locally.special.decomposition.word}. \end{proof} \section{Steinhaus property}\label{section:steinhaus.property} This section is dedicated to the proof of Theorem~\ref{theorem:194-Steinhaus}. Among other things, we will make use of the following two basic facts about metric spaces.
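We also recall, for the reader's convenience, the relevant terminology in the sense of~\cite{RosendalSolecki}: a subset $W$ of a group $G$ is \emph{symmetric} if $W = W^{-1}$, and \emph{countably syndetic} if countably many left translates of $W$ cover $G$, i.e., $G = \bigcup_{n \in \N} c_{n}W$ for some sequence $(c_{n})_{n\in\N}$ in $G$. For $k \in \N$, a topological group $G$ is \emph{$k$-Steinhaus} if, for every symmetric, countably syndetic subset $W \subseteq G$, the set $W^{k}$ is an identity neighborhood in $G$.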
\begin{remark}\label{remark:metric.space} Let $X$ be a metric space. \begin{enumerate} \item\label{remark:uncountable.separable} If $X$ is separable, then every discrete subspace of $X$ is countable (see, e.g.,~\cite[4.1, Theorem~4.1.15, p.~255]{EngelkingBook}). \item \label{remark:neighborhood} A subset $U\subseteq X$ is a neighborhood of a point $x \in X$ if and only if, for every sequence $(x_{n})_{n \in \N}$ in $X$ converging to $x$, there is $m \in \N$ with~$x_{m} \in U$. While ($\Longrightarrow$) is trivial, the implication ($\Longleftarrow$) follows by contraposition, considering any sequence from the non-empty set $\prod_{n \in \N} \! \left\{y\in X\setminus U \,\middle\vert\, d(x,y)<\tfrac{1}{n+1}\right\}$. \end{enumerate} \end{remark} We now proceed to the proof of Theorem~\ref{theorem:194-Steinhaus}. The global strategy follows the ideas of \cite[Theorem~3.1]{KittrellTsankov} (see also~\cite[Proposition~A.1]{BenYaacovBerensteinMelleray}), but our setting requires a very careful analysis of several algebraic peculiarities. \begin{definition} Let $R$ be a unital ring and $e\in\E(R)$. A subset $W\subseteq \GL(R)$ is called \emph{full} for $e$ if, for every $t \in \GL(eRe)$, there exists some $s \in W$ such that $t = se$ and $s(1-e) = (1-e)s(1-e)$. \end{definition} \begin{lem}\label{lemma:full.c_mW} Let $(R,\rho)$ be a complete rank ring, $(e_{m})_{m \in \N} \in \E(R)^{\N}$ be pairwise orthogonal, and $(W_{m})_{m \in \N}$ be a sequence of subsets of $\GL(R)$ with $\GL(R) = \bigcup\nolimits_{m \in \N} W_{m}$. Then there exists $m \in \N$ such that $W_{m}$ is full for $e_m$.
\end{lem} \begin{proof} Suppose, for a contradiction, that for each $m\in\N$ the set $W_{m}$ is not full for~$e_{m}$, i.e., there exists a sequence $(t_{m})_{m \in \N} \in \prod_{m \in \N} \GL(e_{m}Re_{m})$ such that \begin{equation} \forall m \in \N \ \forall s\in W_{m} \colon\quad (t_{m} \neq se_{m})\,\vee\, (s(1-e_{m}) \neq (1-e_{m})s(1-e_{m})).\label{equation:full0} \end{equation} By Lemma~\ref{lemma:convergence.sequences} and Lemma~\ref{lemma:subgroup.unit.group}, \begin{displaymath} t \, \defeq \, \left(\sum\nolimits_{n \in \N} t_{n}\right) \! +\! \left(1-\sum\nolimits_{n \in \N} e_{n} \right)\! \, \in \, \GL(R). \end{displaymath} For every $m\in\N$, we see that \begin{align} &te_{m} \, = \, \! \left(\sum\nolimits_{n \in \N} t_{n} + 1 - \sum\nolimits_{n \in \N} e_{n} \right) \! e_{m} \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}}{=} \, t_{m}, \label{equation:full1} \\ &e_{m}t \, = \, e_{m}\!\left(\sum\nolimits_{n \in \N} t_{n} + 1 - \sum\nolimits_{n \in \N} e_{n}\right)\! \, \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}}{=} \, t_{m} \, = \, e_{m}t_{m} \, \stackrel{\eqref{equation:full1}}{=} \, e_{m}te_{m}, \label{equation:full2} \\ &t(1-e_{m}) \, = \, t-te_{m} \, \stackrel{\eqref{equation:full2}}{=} \, t-te_{m}-e_{m}t+e_{m}te_{m} \, = \, (1-e_{m})t(1-e_{m}) , \label{equation:full3} \end{align} and we observe that the conjunction of~\eqref{equation:full0}, \eqref{equation:full1} and~\eqref{equation:full3} implies that $t \notin W_{m}$. Hence, $t \notin \bigcup_{m\in \N} W_{m} = \GL(R)$, which is the desired contradiction. \end{proof} \begin{lem}\label{lemma:full.W^2} Let $(R,\rho)$ be a complete rank ring, let $W\subseteq \GL(R)$ be symmetric and countably syndetic, and let $(e_m)_{m\in \N}\in \E(R)^{\N}$ be pairwise orthogonal. Then there exists $m\in \N$ such that $W^2$ is full for $e_m$.
\end{lem} \begin{proof} Since $W$ is countably syndetic, there exists a sequence $(c_{m})_{m\in \N} \in \GL(R)^{\N}$ such that $\GL(R) = \bigcup_{m \in \N} c_{m}W$. By Lemma~\ref{lemma:full.c_mW}, there exists $m \in \N$ such that $c_{m}W$ is full for $e_{m} \eqdef e$. We will show that $W^{2}$ is full for $e$, too. To this end, let $t \in \GL(eRe)$. Since $c_{m}W$ is full for $e$, there exists $s\in c_{m}W$ such that \begin{align} &t \, = \, se,\label{eq--13}\\ &s(1-e) \, = \, (1-e)s(1-e).\label{eq--14} \end{align} Due to~\cite[Lemma~7.13(2)]{SchneiderGAFA} and~\cite[Lemma~9.4(B)]{SchneiderGAFA}, assertion~\eqref{eq--14} implies that \begin{align} s^{-1}(1-e) \, &= \, (1-e)s^{-1}(1-e).\label{eq--15} \end{align} Since $t \in \GL(eRe)$, we see that \begin{displaymath} st \, = \, set \, \stackrel{\eqref{eq--13}}{=} \, tet \, = \, t^{2} \, \in \, \GL(eRe) . \end{displaymath} Consequently, again by fullness of $c_{m}W$ for $e$, there exists $\tilde{s} \in c_{m}W$ such that $st = \tilde{s}e$ and $\tilde{s}(1-e) = (1-e)\tilde{s}(1-e)$. Hence, the element \begin{displaymath} s^{-1}\tilde{s} \, \in \, (c_{m}W)^{-1}(c_{m}W) \, = \, W^{-1}W \, = \, W^{2} \end{displaymath} satisfies $t = s^{-1}\tilde{s}e$ and \begin{displaymath} s^{-1}\tilde{s}(1-e) = s^{-1}(1-e)\tilde{s}(1-e) \overset{\eqref{eq--15}}{=} (1-e)s^{-1}(1-e)\tilde{s}(1-e) = (1-e)s^{-1}\tilde{s}(1-e), \end{displaymath} as desired. \end{proof} \begin{lem}\label{lemma:Gamma_{R}(e).subset.W^192} Let $R$ be a non-discrete irreducible, continuous ring, let $W\subseteq \GL(R)$ be symmetric and countably syndetic in $\GL(R)$. Then there exists $e\in\E(R)\setminus\{0\}$ such that $\Gamma_{R}(e)\subseteq W^{192}$. \end{lem} \begin{proof} By Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete} and Lemma~\ref{lemma:order}, there exists a sequence $(e_{m})_{m\in\N} \in \E(R)^{\N}$ of pairwise orthogonal non-zero elements with $\sum\nolimits_{m\in\N}e_{m} = 1$. 
Thanks to Theorem~\ref{theorem:unique.rank.function} and Lemma~\ref{lemma:full.W^2}, we find $m\in\N$ such that $W^{2}$ is full for $e_{m} \eqdef e$. Let \begin{displaymath} \theta \, \defeq \, \begin{cases} \, 4 & \text{if } \cha(R)=2 , \\ \, 2 & \text{otherwise}. \end{cases} \end{displaymath} We proceed in three intermediate steps, marked by the items~\eqref{claim1}, \eqref{claim2} and~\eqref{claim3}. First we prove the existence of an element $s\in W^2\cap \I(\Gamma_{R}(e))$ such that \begin{equation}\label{claim1} 0 \, < \, \rk_{R}(1-s) \, < \, \tfrac{\rk_{R}(e)}{\theta}. \end{equation} By Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, Corollary~\ref{corollary:boolean.subgroup} and Lemma~\ref{lemma:subgroup.unit.group}, $\Gamma_{R}(e)$ admits an uncountable, separable subgroup $\Gamma$ consisting entirely of involutions. As $W$ is countably syndetic in $\GL(R)$, there exists a countable subset $C \subseteq \GL(R)$ such that $\GL(R) = CW$. Since $\Gamma$ is uncountable, there exists $c \in C$ such that $\Gamma \cap cW$ is uncountable. As $\Gamma$ is separable, Remark~\ref{remark:metric.space}\ref{remark:uncountable.separable} implies that $\Gamma \cap cW$ is not discrete, thus there exist $s_{1},s_{2} \in \Gamma \cap cW$ such that \begin{displaymath} 0 \, < \, d_{R}(s_{1},s_{2}) \, < \, \tfrac{\rk_{R}(e)}{\theta} . \end{displaymath} Consider $s\defeq s_{1}s_{2} \in \Gamma \subseteq \I(\Gamma_{R}(e))$. Moreover, \begin{displaymath} s \, = \, s_{1}s_{2} \, = \, s_{1}^{-1}s_{2} \, \in \, (cW)^{-1}(cW) \, = \, W^{-1}W \, = \, W^{2} \end{displaymath} and \begin{displaymath} \rk_{R}(1-s) \, = \, \rk_{R}\!\left(1-s_{1}^{-1}s_{2}\right)\! \, = \, \rk_{R}(s_{1}-s_{2}) \, = \, d_{R}(s_{1},s_{2}), \end{displaymath} thus $0<\rk_{R}(1-s)<\tfrac{\rk_{R}(e)}{\theta}$, which proves~\eqref{claim1}. Second we show that \begin{equation}\label{claim2} \forall u \in \I(\Gamma_{R}(e)) \colon \quad \rk_{R}(1-u) = \rk_{R}(1-s) \ \Longrightarrow \ u \in W^6 . 
\end{equation} Let $u \in \I(\Gamma_{R}(e))$ with $\rk_{R}(1-u) = \rk_{R}(1-s)$. We recall that $\Gamma_{R}(e) \cong \GL(eRe)$ due to Lemma~\ref{lemma:subgroup.unit.group}. Hence, by Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and Proposition~\ref{proposition:conjugation.involution.rank}, there exists $v \in \Gamma_{R}(e)$ such that $vsv^{-1} = u$. Since $W^2$ is full for $e$, we find $\tilde{v}\in W^{2}$ such that $ve=\tilde{v}e$ and $\tilde{v}(1-e)=(1-e)\tilde{v}(1-e)$. By~\cite[Lemma~7.13(2)]{SchneiderGAFA} and~\cite[Lemma~9.4(B)]{SchneiderGAFA}, the latter implies that \begin{align} \tilde{v}^{-1}(1-e) \, = \, (1-e)\tilde{v}^{-1}(1-e). \label{eqstar} \end{align} We observe that \begin{align}\label{eqstarstar} \tilde{v}^{-1}e \, &= \, \tilde{v}^{-1}vv^{-1}e \, \stackrel{v^{-1}\in \Gamma_{R}(e)}{=} \, \tilde{v}^{-1}vev^{-1}e \nonumber \\ &\stackrel{ve=\tilde{v}e}{=} \, \tilde{v}^{-1}\tilde{v}ev^{-1}e \, = \, ev^{-1}e \, \stackrel{v^{-1}\in \Gamma_{R}(e)}{=} \, v^{-1}e. \end{align} Therefore, both \begin{align*} \tilde{v}s\tilde{v}^{-1}e \, &\stackrel{\eqref{eqstarstar}}{=} \, \tilde{v}sv^{-1}e \, \stackrel{sv^{-1}\in \Gamma_{R}(e)}{=} \, \tilde{v}e sv^{-1}e \, \stackrel{\tilde{v}e=ve}{=} \, vesv^{-1}e \, \stackrel{sv^{-1}\in \Gamma_{R}(e)}{=} \, vsv^{-1}e=ue \end{align*} and \begin{align*} \tilde{v}s\tilde{v}^{-1}(1-e) \, &\stackrel{\eqref{eqstar}}{=} \, \tilde{v}s(1-e)\tilde{v}^{-1}(1-e) \, \stackrel{s\in\Gamma_{R}(e)}{=} \, \tilde{v}(1-e)\tilde{v}^{-1}(1-e) \\ &\stackrel{\eqref{eqstar}}{=} \, \tilde{v}\tilde{v}^{-1}(1-e) \, \stackrel{u\in \Gamma_{R}(e)}{=} \, u(1-e) . \end{align*} Consequently, \begin{displaymath} \tilde{v}s\tilde{v}^{-1} \, = \, \tilde{v}s\tilde{v}^{-1}e + \tilde{v}s\tilde{v}^{-1}(1-e) \, = \, ue + u(1-e) \, = \, u \end{displaymath} and hence $u = \tilde{v}s\tilde{v}^{-1} \in W^{2}W^{2}W^{-2} = W^{6}$, as desired in~\eqref{claim2}. 
Next we prove the existence of an element $f \in \E(R)\setminus\{0\}$ such that \begin{equation}\label{claim3} \I(\Gamma_{R}(f)) \, \subseteq \, W^{12}. \end{equation} Note that \begin{displaymath} 2\rk_{R}(1-s) \, \leq \, \theta\rk_{R}(1-s) \, \stackrel{\eqref{claim1}}{<} \, \rk_{R}(e). \end{displaymath} Thus, by Remark~\ref{remark:rank.function.general}\ref{remark:characterization.discrete}, Lemma~\ref{lemma:order}\ref{lemma:order.1} and Lemma~\ref{lemma:invertible.rank.idempotent}, there exists $f\in\E(R)$ such that $f\leq e$, $s\in \Gamma_{R}(f)$, and $\rk_{R}(f)=\theta\rk_{R}(1-s)$. Note that $s\ne 1$ by~\eqref{claim1}, which necessitates that $f\neq0$. Thanks to Remark~\ref{remark:eRe.non-discrete.irreducible.continuous} and the conjunction of Lemma~\ref{lemma:involution.rank.distance.char.neq2} and Lemma~\ref{lemma:involution.rank.distance.char=2}, \begin{align*} \I(fRf) \, &\subseteq \, \left\{gh \,\middle\vert\, g,h\in\I(fRf),\, \rk_{fRf}(f-g)=\rk_{fRf}(f-h)=\tfrac{1}{\theta}\right\} \\ &\stackrel{\ref{remark:eRe.non-discrete.irreducible.continuous}}{=} \, \left\{gh \,\middle\vert\, g,h\in\I(fRf),\, \rk_{R}(f-g)=\rk_{R}(f-h)=\rk_{R}(1-s) \right\} . \end{align*} Applying the isomorphism $\GL(fRf) \to \Gamma_{R}(f), \, a \mapsto a + 1-f$ from Lemma~\ref{lemma:subgroup.unit.group}, we conclude that \begin{align*} \I(\Gamma_{R}(f)) \, &\subseteq \, \{gh\mid g,h\in\I(\Gamma_{R}(f)), \, \rk_{R}(1-g)=\rk_{R}(1-h)=\rk_{R}(1-s)\}\\ &\stackrel{\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.order}}{\subseteq} \, \{gh\mid g,h\in\I(\Gamma_{R}(e)), \, \rk_{R}(1-g)=\rk_{R}(1-h)=\rk_{R}(1-s)\}\\ &\stackrel{\text{\eqref{claim2}}}{\subseteq} \, W^{6}W^{6} \, = \, W^{12}, \end{align*} i.e., \eqref{claim3} holds.
Finally, by Theorem~\ref{theorem:width}\ref{theorem:width.involutions} and Remark~\ref{remark:eRe.non-discrete.irreducible.continuous}, every element of $\Gamma_{R}(f)\overset{\ref{lemma:subgroup.unit.group}}{\cong} \GL(fRf)$ is a product of $16$ involutions, whence \begin{displaymath} \Gamma_{R}(f) \, = \, \I(\Gamma_{R}(f))^{16} \, \overset{\eqref{claim3}}{\subseteq}\, W^{12\cdot 16} \, = \, W^{192}. \qedhere \end{displaymath} \end{proof} \begin{lem}\label{lemma:GL(R).covered.by.c_nW} Let $R$ be an irreducible, continuous ring, let $W \subseteq \GL(R)$ be symmetric and countably syndetic, and let $e \in \E(R) \setminus \{ 0 \}$ and $\ell \in \N$ be such that $\Gamma_{R}(e) \subseteq W^{\ell}$. Then $W^{\ell+2}$ is an identity neighborhood in $\GL(R)$. \end{lem} \begin{proof} By Remark~\ref{remark:metric.space}\ref{remark:neighborhood}, it suffices to check that, for every sequence $(t_{n})_{n \in \N}$ in $\GL(R)$ converging to $1$, there exists $m\in\N$ such that $t_{m} \in W^{\ell+2}$. To this end, let $(t_{n})_{n\in\N}\in\GL(R)^{\N}$ with $\lim_{n \to \infty} \rk_{R}(1-t_{n}) = 0$. Upon passing to a subsequence, we may and will assume that \begin{equation}\label{eq--18} \sum\nolimits_{n \in \N} \rk_{R}(1-t_{n}) \, < \, \tfrac{1}{2}\rk_{R}(e). \end{equation} Let us note that \begin{equation}\label{eq--28} \forall a \in R \colon \quad \delta_{\latop(R)}(Ra) \, \stackrel{\ref{remark:rank.function.general}\ref{remark:duality}}{=} \, 1-\delta_{\lat(R)}(\rAnn(Ra)) \, \stackrel{\ref{theorem:unique.rank.function}+\ref{lemma:pseudo.dimension.function}\ref{lemma:rank.dimension.annihilator}}{=} \, \rk_{R}(a) . \end{equation} Since $W$ is countably syndetic, there exists a sequence $(c_n)_{n\in \N}\in\GL(R)^{\N}$ such that $\GL(R)=\bigcup\nolimits_{n \in \N} c_nW$. 
We see that \begin{align} \delta_{\latop(R)}\!\left(\bigvee\nolimits_{n \in \N} R(1-c_{n}t_{n}c_{n}^{-1})\right) \, &\stackrel{\ref{proposition:dimension.function.continuous}}{=} \, \sup\nolimits_{m \in \N} \delta_{\latop(R)}\!\left(\bigvee\nolimits_{n=0}^{m}R(1-c_{n}t_{n}c_{n}^{-1})\right) \nonumber \\ &\leq \, \sup\nolimits_{m \in \N} \sum\nolimits_{n=0}^{m}\delta_{\latop(R)}(R(1-c_{n}t_{n}c_{n}^{-1}))\nonumber\\ &= \, \sum\nolimits_{n \in \N} \rk_{R}(1-c_{n}t_{n}c_{n}^{-1})\nonumber\\ &= \, \sum\nolimits_{n \in \N} \rk_{R}(1-t_{n}) \label{eq--4} \end{align} and therefore \begin{align}\label{eq--19} \delta_{\lat(R)}\left(\bigwedge\nolimits_{n \in \N} \rAnn(R(1-c_{n}t_{n}c_{n}^{-1}))\right) \, &\stackrel{\ref{remark:bijection.annihilator}}{=} \, \delta_{\lat(R)}\left(\rAnn\left(\bigvee\nolimits_{n \in \N} R(1-c_{n}t_{n}c_{n}^{-1})\right)\right)\nonumber\\ &\stackrel{\eqref{eq--28}}{=} \, 1-\delta_{\latop(R)}\left(\bigvee\nolimits_{n \in \N} R(1-c_{n}t_{n}c_{n}^{-1})\right)\nonumber\\ &\stackrel{\eqref{eq--4}}{\geq} \, 1-\sum\nolimits_{n \in \N} \rk_{R}(1-t_{n})\nonumber\\ &\stackrel{\eqref{eq--18}}{>} \, 1-\tfrac{1}{2}\rk_{R}(e). \end{align} Consider \begin{displaymath} J \, \defeq \, \bigwedge\nolimits_{n \in \N} \rAnn(1-c_{n}t_{n}c_{n}^{-1}) \, \in \, \lat(R) . \end{displaymath} Since $R$ is regular, there exists $\tilde{e}\in\E(R)$ such that $\tilde{e}R = J$. By \cite[Lemma~7.5]{SchneiderGAFA}, there exists $f\in\E(R)$ such that $1-\tilde{e}\leq f$ and \begin{displaymath} fR \, = \, (1-\tilde{e})R + \bigvee\nolimits_{n\in\N}(1-c_{n}t_{n}c_{n}^{-1})R . \end{displaymath} We observe that \begin{align*} \delta_{\lat(R)}((1-\tilde{e})R) \, &=\, \rk_{R}(1-\tilde{e}) \stackrel{\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.difference.smaller.idempotent}}{=} \, 1-\rk_{R}(\tilde{e}) \, = \, 1-\delta_{\lat(R)}(J) \\ & \stackrel{\eqref{eq--19}}{<} \, 1-\!\left(1-\tfrac{1}{2}\rk_{R}(e)\right)\!
\, = \, \tfrac{1}{2}\rk_{R}(e) \end{align*} and hence \begin{align*} \rk_{R}(f) \, &= \, \delta_{\lat(R)}(fR) \, \leq \, \delta_{\lat(R)}((1-\tilde{e})R) + \delta_{\lat(R)}\!\left(\bigvee\nolimits_{n \in \N} (1-c_{n}t_{n}c_{n}^{-1})R\right) \\ &< \, \tfrac{1}{2}\rk_R(e) + \delta_{\lat(R)}\!\left(\bigvee\nolimits_{n \in \N} (1-c_{n}t_{n}c_{n}^{-1})R\right) \\ &\stackrel{\eqref{eq--4}}{\leq} \, \tfrac{1}{2}\rk_R(e) + \sum\nolimits_{n \in \N} \rk_R(1-t_{n}) \, \stackrel{\eqref{eq--18}}{<} \, \rk_{R}(e) . \end{align*} By Lemma~\ref{lemma:order}\ref{lemma:order.1}, there exists $e_{0}\in\E(R)$ such that $e_{0} \leq e$ and $\rk_{R}(e_{0}) = \rk_{R}(f)$. By Lemma~\ref{lemma:conjugation.idempotent}, there exists $g\in\GL(R)$ such that $ge_{0}g^{-1} = f$. Since $\GL(R) = \bigcup\nolimits_{n=0}^{\infty} c_{n}W$, there is $m \in \N$ such that $g \in c_{m}W$. Consider $s \defeq c_{m}^{-1}g \in W$ and note that \begin{align} e_{0} \, = \, s^{-1}se_{0}s^{-1}s \, = \, s^{-1}c_{m}^{-1}ge_{0}g^{-1}c_{m}s \, = \, s^{-1}c_{m}^{-1}fc_{m}s .\label{something} \end{align} Furthermore, we see that \begin{enumerate} \item[---\,] $1-\tilde{e}\leq f$, \item[---\,] $\tilde{e}\in\bigwedge_{n \in \N} \rAnn(1-c_{n}t_{n}c_{n}^{-1})\subseteq \rAnn(1-c_{m}t_{m}c_{m}^{-1})$, \item[---\,] $(1-c_{m}t_{m}c_{m}^{-1})R\subseteq\bigvee_{n \in \N} (1-c_{n}t_{n}c_{n}^{-1})R\subseteq fR$. \end{enumerate} Consequently, Lemma~\ref{lemma:Gamma.annihilator.right-ideal} asserts that $c_{m}t_{m}c_{m}^{-1} \in \Gamma_{R}(f)$. In turn, \begin{displaymath} s^{-1}t_{m}s \, \stackrel{\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.conjugation}}{\in} \, \Gamma_{R}(s^{-1}c_{m}^{-1}fc_{m}s) \, \stackrel{\eqref{something}}{=} \, \Gamma_{R}(e_{0}) \, \stackrel{\ref{lemma:subgroup.unit.group}\ref{lemma:subgroup.unit.group.order}}{\subseteq} \, \Gamma_{R}(e) \, \subseteq \, W^{\ell} \end{displaymath} and thus $t_{m} = ss^{-1}t_{m}ss^{-1} \in sW^{\ell}s^{-1} \subseteq W^{\ell+2}$, as desired. 
\end{proof} We arrive at this section's main result. \begin{thm}\label{theorem:194-Steinhaus} Let $R$ be an irreducible, continuous ring. Then the topological group $\GL(R)$ is $194$-Steinhaus. In particular, $\GL(R)$ has automatic continuity. \end{thm} \begin{proof} Since any discrete group is even $0$-Steinhaus, the desired conclusion is trivial if $R$ is discrete. If $R$ is non-discrete, then the claim follows from Lemma~\ref{lemma:Gamma_{R}(e).subset.W^192} and Lemma~\ref{lemma:GL(R).covered.by.c_nW}. In turn, $\GL(R)$ has automatic continuity by~\cite[Proposition~2]{RosendalSolecki}. \end{proof} \section{Proofs of corollaries}\label{section:proofs.of.corollaries} \begin{proof}[Proof of Corollary~\ref{corollary:unique.polish.topology}] Note that $\GL(R)$ is closed in $(R,d_R)$ by Remark~\ref{remark:properties.rank.function}\ref{remark:GL(R).closed}. Since closed subspaces of complete metric spaces are complete, Theorem~\ref{theorem:unique.rank.function} entails that $(\GL(R),d_{R})$ is complete. Suppose now that $(R,d_{R})$ is separable. As subspaces of separable metric spaces are separable, we conclude that $(\GL(R),d_{R})$ is separable. Hence, the rank topology $\tau$ on $\GL(R)$, which is a group topology by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}, is also Polish. Let $\sigma$ be another Polish group topology on $\GL(R)$. Then Theorem~\ref{theorem:194-Steinhaus} asserts that the identity homomorphism $\id\colon (\GL(R),\tau)\to (\GL(R),\sigma)$ is continuous, i.e., $\sigma\subseteq \tau$. By the open mapping theorem for Polish groups (see, e.g.,~\cite[Theorem~3]{RosendalSuarez}), $\id \colon (\GL(R),\tau)\to (\GL(R),\sigma)$ is also open, that is, $\tau\subseteq \sigma$. Therefore, $\sigma=\tau$. \end{proof} \begin{remark}\label{remark:separable} Let $R$ be an irreducible, continuous ring. With respect to the rank topology, $R$ is separable if and only if $\GL(R)$ is separable. 
The implication ($\Longrightarrow$) is obvious, as argued in the proof of Corollary~\ref{corollary:unique.polish.topology}. In order to prove ($\Longleftarrow$), consider any maximal chain $E$ in $(\E(R),{\leq})$, the existence of which is guaranteed by the Hausdorff maximal principle. Note that $\rk_{R}(E) = \rk_{R}(R)$ due to~\cite[Corollary~7.19]{SchneiderGAFA}, therefore~\cite[II.XVII, Theorem~17.1(d), p.~224]{VonNeumannBook} asserts that \begin{displaymath} \GL(R) \times E \times \GL(R) \, \longrightarrow \, R, \quad (u,e,v) \, \longmapsto \, uev \end{displaymath} is surjective. Moreover, this map is continuous for the rank topology by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.group.topology}. Since ${{\rk_{R}}\vert_{E}}\colon (E,d_{R}) \to ([0,1],d)$ is isometric with regard to the standard metric $d$ on $[0,1]$ by Remark~\ref{remark:properties.pseudo.rank.function}\ref{remark:rank.order.isomorphism}, we see that $(E,d_{R})$ is separable. Consequently, if $\GL(R)$ is separable with respect to the rank topology, then so is $R$. \end{remark} \begin{proof}[Proof of Corollary~\ref{corollary:inert}] The homeomorphism group $\Homeo(X)$ of any metrizable compact space $X$, endowed with the topology of uniform convergence, constitutes a separable topological group (see, e.g.,~\cite[Example~9.B 8), p.~60]{KechrisBook}). Any action of $\GL(R)$ by homeomorphisms on a non-empty metrizable compact space $X$ gives rise to a homomorphism from $\GL(R)$ to $\Homeo(X)$, which then has to be continuous with respect to the rank topology due to Theorem~\ref{theorem:194-Steinhaus}, wherefore the action is (jointly) continuous. Hence, the claim follows by~\cite[Corollary~1.6]{SchneiderGAFA}. 
\end{proof} \begin{proof}[Proof of Corollary~\ref{corollary:fixpoint}] \ref{corollary:fixpoint.exotic} Consider a unitary representation $\pi$ of $\GL(R)$ on a separable Hilbert space $H$, i.e., a homomorphism $\pi \colon \GL(R) \to \U(H)$ to the corresponding unitary group. The latter, equipped with the strong operator topology, constitutes a topological group, which is separable due to separability of $H$ (see, e.g.,~\cite[I.9, Example~9.B~6), p.~59]{KechrisBook}). Hence, $\pi$ is continuous with respect to the rank topology by Theorem~\ref{theorem:194-Steinhaus}. As the topological group $\GL(R)$ is exotic~\cite{CarderiThom}, $\pi$ is trivial. \ref{corollary:fixpoint.extremely.amenable} As explained in the proof of Corollary~\ref{corollary:inert}, our Theorem~\ref{theorem:194-Steinhaus} implies that every action of $\GL(R)$ by homeomorphisms on a metrizable compact space is continuous with respect to the rank topology. Since the topological group $\GL(R)$ is \emph{extremely amenable}~\cite{CarderiThom}, this entails the claim. \end{proof} \begin{thebibliography}{50} \bibitem{BenYaacovBerensteinMelleray} Itaï Ben Yaacov, Alexander Berenstein, Julien Melleray, \emph{Polish topometric groups}. Trans.\ Amer.\ Math.\ Soc.~\textbf{365} (2013), no.~7, pp.~3877--3897. \bibitem{BirkhoffBook} Garrett Birkhoff, \emph{Lattice Theory}. Revised edition. American Mathematical Society Colloquium Publications, Vol.~25, American Mathematical Society, New York, N.Y., 1948. \bibitem{BirkhoffBulletin} Garrett Birkhoff, \emph{Von Neumann and lattice theory}. Bull.\ Amer.\ Math.\ Soc.~\textbf{64} (1958), pp.~50--56. \bibitem{Borel} Armand Borel, \emph{On free subgroups of semisimple groups}. Enseign.\ Math.\ (2)~\textbf{29} (1983), no.~1-2, pp.~151--164. \bibitem{BorelBook} Armand Borel, \emph{Linear algebraic groups}. Second edition. Graduate Texts in Mathematics, 126. Springer-Verlag, New York, 1991.
\bibitem{CarderiThom} Alessandro Carderi, Andreas Thom, \emph{An exotic group as limit of finite special linear groups}. Ann.\ Inst.\ Fourier (Grenoble) \textbf{68} (2018), no.~1, pp.~257--273. \bibitem{Ehr56} Gertrude Ehrlich, \emph{Characterization of a continuous geometry within the unit group}. Trans.\ Amer.\ Math.\ Soc.~\textbf{83} (1956), pp.~397--416. \bibitem{elek2013} Gábor Elek, \emph{Connes embeddings and von Neumann regular closures of amenable group algebras}. Trans.\ Amer.\ Math.\ Soc.~365 (2013), no.~6, pp.~3019--3039. \bibitem{elek} Gábor Elek, \emph{Lamplighter groups and von Neumann's continuous regular ring}. Proc.\ Amer.\ Math.\ Soc.~144 (2016), no.~7, pp.~2871--2883. \bibitem{ElekSzabo} Gábor Elek, Endre Szabó, \emph{Sofic groups and direct finiteness}. J. Algebra~\textbf{280} (2004), no.~2, pp.~426--434. \bibitem{EngelkingBook} Ryszard Engelking, \emph{General Topology}. Sigma Series in Pure Mathematics, 6. Heldermann, Berlin, 1989. \bibitem{Fathi} Albert Fathi, \emph{Le groupe des transformations de $[0, 1]$ qui pr\'{e}servent la mesure de Lebesgue est un groupe simple}. Israel J.\ Math.~\textbf{29} (1978), no.~2-3, pp.~302--308. \bibitem{GoodearlBook} Kenneth~R.\ Goodearl, \emph{von Neumann regular rings}. Monographs and Studies in Mathematics, 4. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1979. \bibitem{GrilletBook} Pierre A. Grillet, \emph{Abstract Algebra}. Graduate Texts in Mathematics, 242. Springer, New York, 2007. \bibitem{GustafsonHalmosRadjavi76} William~H.\ Gustafson, Paul~R.\ Halmos, Heydar Radjavi, \emph{Products of involutions}. Linear Algebra Appl.~\textbf{13} (1976), no.~1-2, pp.~157--162. \bibitem{Halperin62} Israel Halperin, \emph{Von Neumann's arithmetics of continuous rings}. Acta Sci.\ Math.~(Szeged)~\textbf{23} (1962), pp.~1--17. \bibitem{Halperin68} Israel Halperin, \emph{Von Neumann’s manuscript on inductive limits of regular rings}. Canad.\ J.\ Math.~\textbf{20} (1968), pp.~477--483. 
\bibitem{Handelman76} David Handelman, \emph{Simple regular rings with a unique rank function}. J.\ Algebra~\textbf{42} (1976), no.~1, pp.~60--80. \bibitem{KechrisBook} Alexander~S.\ Kechris, \emph{Classical descriptive set theory}. Graduate Texts in Mathematics, 156. Springer-Verlag, New York, 1995. \bibitem{KechrisRosendal} Alexander~S.\ Kechris, Christian Rosendal, \emph{Turbulence, amalgamation, and generic automorphisms of homogeneous structures}. Proc.\ Lond.\ Math.\ Soc.~(3)~\textbf{94} (2007), no.~2, pp.~302--350. \bibitem{KittrellTsankov} John Kittrell, Todor Tsankov, \emph{Topological properties of full groups}. Ergodic Theory Dynam.\ Systems~\textbf{30} (2010), no.~2, pp.~525--545. \bibitem{Larsen} Michael Larsen, \emph{Word maps have large image}. Israel J.\ Math.~\textbf{139} (2004), pp.~149--156. \bibitem{Liebeck} Martin~W.\ Liebeck, \emph{Width Questions for Finite Simple Groups}. Groups St Andrews 2013, pp.~51--72. London Math. Soc. Lecture Note Ser., 422. Cambridge University Press, Cambridge, 2015. \bibitem{linnell} Peter~A.\ Linnell, \emph{Embedding group algebras into finite von Neumann regular rings}. Modules and comodules, pp.~295--300, Trends Math., Birkhäuser Verlag, Basel, 2008. \bibitem{LinnellSchick} Peter~A.\ Linnell, T.~Schick, \emph{The Atiyah conjecture and Artinian rings}. Pure Appl.\ Math.\ Q.~8 (2012), no.~2, pp.~313--327. \bibitem{MaedaBook} Fumitomo Maeda, \emph{Kontinuierliche Geometrien}. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete, Band 95. Springer-Verlag, Berlin-Göttingen-Heidelberg, 1958. \bibitem{MurrayVonNeumann} Francis~J.\ Murray, John von Neumann, \emph{On rings of operators}. Ann.\ of Math.\ (2)~\textbf{37} (1936), no.~1, pp.~116--229. \bibitem{NeumannExamples} John von Neumann, \emph{Examples of continuous geometries}. Proc.\ Nat.\ Acad.\ Sci.\ U.S.A.~\textbf{22} (1936), pp.~101--108. 
\bibitem{VonNeumann37} John von Neumann, \emph{Continuous rings and their arithmetics}. Proc.\ Nat.\ Acad.\ Sci.\ U.S.A.~\textbf{23} (1937), pp.~341--349. \bibitem{VonNeumannBook} John von Neumann, \emph{Continuous geometry}. Foreword by Israel Halperin. Princeton Mathematical Series, No.~25, Princeton University Press, Princeton, N.J., 1960. \bibitem{RosendalSolecki} Christian Rosendal, S\l awomir Solecki, \emph{Automatic continuity of homomorphisms and fixed points on metric compacta}. Israel J.\ Math.~\textbf{162} (2007), pp.~349--371. \bibitem{RosendalSuarez} Christian Rosendal, Luis~C.\ Suarez, \emph{Aspects of automatic continuity}. arXiv: 2406.12143 [math.GR]. \bibitem{Smith57} Robert~J.\ Smith, \emph{A determinant in continuous rings}. Pacific J.\ Math.~\textbf{7} (1957), pp.~1701--1709. \bibitem{Thompson61} Robert~C.\ Thompson, \emph{Commutators in the special and general linear groups}. Trans.\ Amer.\ Math.\ Soc.~\textbf{101} (1961), pp.~16--33. \bibitem{Thompson62} Robert~C.\ Thompson, \emph{Commutators of matrices with coefficients from the field of two elements}. Duke Math.\ J.~\textbf{29} (1962), pp.~367--373. \bibitem{ThompsonPortugaliae} Robert~C.\ Thompson, \emph{On matrix commutators}. Portugal.\ Math.~\textbf{21} (1962), pp.~143--153. \bibitem{Tsankov} Todor Tsankov, \emph{Automatic continuity for the unitary group}. Proc.\ Amer.\ Math.\ Soc.~\textbf{141} (2013), no.~10, pp.~3673--3680. \bibitem{SchneiderGAFA} Friedrich~M.\ Schneider, \emph{Concentration of invariant means and dynamics of chain stabilizers in continuous geometries}. Geom.\ Funct.\ Anal.~\textbf{33} (2023), no.~6, pp.~1608--1681. \bibitem{SchneiderIMRN} Friedrich~M.\ Schneider, \emph{Group von Neumann algebras, inner amenability, and unit groups of continuous rings}. Int.\ Math.\ Res.\ Not.\ IMRN (2024), no.~8, pp.~6422--6446. \end{thebibliography} \end{document}
2412.17554v2
http://arxiv.org/abs/2412.17554v2
Growth-Optimal E-Variables and an extension to the multivariate Csiszár-Sanov-Chernoff Theorem
\documentclass[11pt]{article} \usepackage[table]{xcolor} \definecolor{light-gray}{gray}{0.90} \usepackage{amsmath} \usepackage{amsthm} \usepackage{enumerate} \usepackage{graphicx,psfrag,epsf} \usepackage{amssymb} \usepackage{hyperref} \usepackage{diagbox} \usepackage{geometry} \geometry{a4paper} \usepackage[utf8]{inputenc} \usepackage{siunitx} \usepackage{float} \usepackage{todonotes} \usepackage[title]{appendix} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage[none]{hyphenat} \usepackage{titlesec} \usepackage[official]{eurosym} \usepackage{mathtools} \usepackage{natbib} \usepackage{caption} \usepackage{subfigure} \usepackage{color} \usepackage{algorithm} \usepackage{algorithmicx} \usepackage{algpseudocode} \usepackage{color} \usepackage{extarrows} \usepackage{booktabs} \usepackage{bm} \usepackage{authblk} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \newcommand{\alt}{\ensuremath{{\Theta}_{\text{\sc alt}}}} \newcommand{\shtarkov}{\ensuremath{\textsc{shtarkov}}} \newcommand{\mmreg}{\ensuremath{\textsc{mmreg}}} \newcommand{\mreg}{\ensuremath{\textsc{mreg}}} \newcommand{\mmred}{\ensuremath{\textsc{mmred}}} \newcommand{\reg}{\ensuremath{\textsc{reg}}} \setuptodonotes{fancyline, color=blue!30} \numberwithin{equation}{section} \newcommand{\quotec}[1]{``#1"} \newcommand{\E}[0]{$\mathtt{E}$-} \newcommand{\hmc}[1]{$\mathcal{H}_#1$} \renewcommand*\contentsname{Outline} \newcolumntype{C}[1]{>{\centering\arraybackslash}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\arraybackslash}m{#1}} \definecolor{mygray}{gray}{0.9} \newtheorem{proposition}{Proposition} \newtheorem{conjecture}{Conjecture} \newtheorem{theorem}{Theorem} \newtheorem{condition}{Condition} \newtheorem{assump}{Assumption} \newtheorem{example}{Example} \newtheorem{proof_proposition}{Proof of Proposition} \theoremstyle{plain} \newtheorem{definition}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} 
\DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} \newcommand{\Hc}[0]{\mathcal{H}} \newcommand{\Nc}[0]{\mathcal{N}} \newcommand{\R}[0]{\mathbb{R}} \newcommand{\Bold}[0]{\bm{\theta}} \newcommand{\iFi}[3]{\,_1F_1\left(#1;#2;#3\right)} \newcommand{\iidSim}[0]{\overset{\text{i.i.d.}}{\sim}} \DeclareMathOperator*{\Gam}{Gam} \DeclareMathOperator*{\der}{d} \newcommand{\grow}{\ensuremath{\textsc{GROW}}} \newcommand{\bd}{\ensuremath{\textsc{bd}}} \newcommand{\nv}{\ensuremath{0}} \newcommand{\Sr}{\ensuremath{S_{\textsc{ripr}}}} \newcommand{\Sm}{\ensuremath{S_{\textsc{mix}}}} \newcommand{\Sc}{\ensuremath{S_{\textsc{cond}}}} \newcommand{\Sp}{\ensuremath{S_{\textsc{pseudo}}}} \newcommand{\Scond}{\ensuremath{{S}_{\textsc{cond}}}} \newcommand{\Sripr}{\ensuremath{{S}_{\textsc{RIPr}}}} \newcommand{\Smix}{\ensuremath{{S}_{\textsc{mix}}}} \newcommand{\Spseudo}{\ensuremath{{S}_{\textsc{pseudo}}}} \newcommand{\Sappr}{\ensuremath{{S}_{\textsc{appr}}}} \newcommand{\Sui}{\ensuremath{S_{\textsc{ui}}}} \newcommand{\Sgro}{\ensuremath{{S}_{\textsc{GRO}(\bm{\mu}^1)}}} \newcommand{\Sgrow}{\ensuremath{{S}_{\textsc{grow}}}} \newcommand{\Srel}{\ensuremath{{S}_{\textsc{rel}}}} \newcommand{\Sbound}{\ensuremath{{S}_{\textsc{bound}}}} \newcommand{\Sbayes}{\ensuremath{{S}_{\textsc{Bayes}}}} \newcommand{\sgn}{\ensuremath{\textsc{sgn}}} \newcommand{\lrp}[1]{\left(#1\right)} \newcommand{\lrb}[1]{\left[#1\right]} \newcommand{\lrsetb}[1]{\left\{#1\right\}} \newcommand{\conv}{\ensuremath{\textsc{conv}}} \renewcommand{\vec}[1]{\ensuremath{{\bm #1}}} \newcommand{\authcmt}[2]{\textcolor{#1}{#2}} \newcommand{\akshay}[1]{\authcmt{teal}{[AB: #1]}} \newcommand{\yishan}[1]{\authcmt{orange}{[YSW: #1]}} \newcommand{\peter}[1]{\authcmt{red}{[PG: #1]}} \newcommand{\yunda}[1]{\authcmt{blue}{[YH: #1]}} \newcommand{\cU}{\ensuremath{\mathcal{U}}} \newcommand{\cE}{\ensuremath{\mathcal{E}}} \newcommand{\cH}{\ensuremath{\mathcal{H}}} \newcommand{\cA}{\ensuremath{\mathcal{A}}} 
\newcommand{\cW}{\ensuremath{\mathcal{W}}} \newcommand{\cP}{\ensuremath{\mathcal{P}}} \newcommand{\cS}{\ensuremath{\mathcal{S}}} \newcommand{\cY}{\ensuremath{\mathcal{Y}}} \newcommand{\cX}{\ensuremath{\mathcal{X}}} \newcommand{\cV}{\ensuremath{\mathcal{V}}} \newcommand{\cR}{\ensuremath{\mathcal{R}}} \newcommand{\cQ}{\ensuremath{\mathcal{Q}}} \newcommand{\reals}{\ensuremath{\mathbb{R}}} \newcommand{\naturals}{\ensuremath{\mathbb{N}}} \newcommand{\meanspace}{\ensuremath{\text{\tt M}}} \newcommand{\pmu}{\ensuremath{p}} \newcommand{\Pmu}{\ensuremath{P}} \newcommand{\pmv}{\ensuremath{\bar{p}}} \newcommand{\Pmv}{\ensuremath{\bar{P}}} \newcommand{\comp}{\ensuremath{\textsc{c}}} \newcommand{\commentout}[1]{} \title{Growth-Optimal E-Variables and an extension to the multivariate Csisz\'ar-Sanov-Chernoff Theorem} \begin{document} \author[12]{Peter Grünwald} \author[1]{Yunda Hao} \author[3]{Akshay Balsubramani} \affil[1]{Centrum Wiskunde \& Informatica, Amsterdam, The Netherlands} \affil[2]{Leiden University, Leiden, The Netherlands} \affil[3]{Sanofi, Waltham, USA} \bibliographystyle{plainnat} \maketitle \begin{abstract} We consider growth-optimal e-variables with maximal e-power, both in an absolute and relative sense, for simple null hypotheses for a $d$-dimensional random vector, and multivariate composite alternatives represented as a set of $d$-dimensional means $\meanspace_1$. These include, among others, the set of all distributions with mean in $\meanspace_1$, and the exponential family generated by the null restricted to means in $\meanspace_1$. We show how these optimal e-variables are related to Csisz\'ar-Sanov-Chernoff bounds, first for the case that $\meanspace_1$ is convex (these results are not new; we merely reformulate them) and then for the case that $\meanspace_1$ `surrounds' the null hypothesis (these results are new). 
\end{abstract} \section{Introduction} $E$-variables present a compelling alternative to traditional $P$-values, particularly in hypothesis tests involving optional stopping and continuation \citep{GrunwaldHK19, VovkW21, Shafer21, ramdas2023savi, henzi2021valid, Grunwald23}. As is well-known, there is a close connection between optimal rejection regions of anytime-valid tests at fixed level $\alpha$ and optimal anytime-valid concentration inequalities \citep{howard2021time}. In this paper we consider a variation of this connection in the context of a simple multivariate null and several types of composite alternatives. We study absolute and relative {\em GROW\/} (`growth-rate optimal in the worst-case') e-variables as introduced by \cite{GrunwaldHK19}, and we show how such e-variables are related to a concentration inequality which we call {\em Csisz\'ar-Sanov-Chernoff\/} (CSC from now on). The 1-dimensional version of this inequality is well-known as a straightforward application of the Cram\'er-Chernoff method and is sometimes called the {\em generic\/} Chernoff bound. The multivariate version was apparently first derived by \cite{Csiszar84} as a (significant) strengthening of Sanov's classical theorem; we review this history in Section~\ref{sec:cscconvex} beneath Theorem~\ref{thm:old}. Given all this we decided to name the bound `Csisz\'ar-Sanov-Chernoff'. Formally, we consider a $d$-dimensional random vector $Y=(Y_1, \ldots, Y_d)$ supported on some (possibly finite or countable) subset $\cY$ of $\reals^d$. Whenever we speak of `a distribution for $Y$' we mean a distribution on $\cY$ equipped with its standard Borel $\sigma$-algebra. Then $\conv(\cY)$, the convex hull of $\cY$, is the set of means for such distributions; we invariably assume that the zero-vector $(0,\ldots, 0)^{\top}$ (which we abbreviate to $\nv$ whenever convenient) is contained in $\cY$. 
We then let the null hypothesis $P_0$ be a distribution for $Y$ with mean equal to the zero vector: ${\mathbb E}_{P_0}[Y] = \nv$. We fix a background measure $\rho$ and assume that $Y$ has a density $p_0$ relative to $\rho$ under $P_0$ (we may in fact take $\rho$ equal to $P_0$ itself without restricting the generality of our results). We assume that we are given a set of means $\meanspace_1 \subset \conv(\cY)$ and an alternative (i.e. a set of distributions on $\cY$) $\cH_1$ that is {\em compatible\/} with the given $\meanspace_1$, in the sense that, for all $\mu \in \conv(\cY)$: \begin{equation}\label{eq:compatible} \mu \in \meanspace_1 \Leftrightarrow \text{\ there exists $P \in \cH_1$ with ${\mathbb E}_P[Y]= \mu$}. \end{equation} We consider various such $\cH_1$. In general, $\cH_1$ is allowed to contain distributions that do not have densities, but whenever a $P_1 \in \cH_1$ does have a density, it is denoted by small letters, i.e. $p_1$. We further invariably assume that $P_0$ and $\cH_1$ are {\em separated\/} in terms of the mean, that is, $\inf_{\mu \in \meanspace_1} \| \mu \|_2 > 0$. Finally, we assume that $Y$ admits a moment generating function under $P_0$. This is a strong assumption, but it is the only strong assumption we impose. In Section~\ref{sec:convexm1} we consider the GROW (growth-rate-optimal in the worst-case over $\cH_1$) e-variable $S_{\grow}$ for this scenario, assuming either that $\cH_1$ is the set $\cP_1$ of {\em all\/} $P$ with mean in some given {\em convex\/} set $\meanspace_1$, or that $\cH_1$ is the set $\cE_1$ of all elements of the exponential family generated by $P_0$ with means in $\meanspace_1$, or any $\cH_1$ with $\cE_1 \subset \cH_1 \subset \cP_1$ --- it turns out that the GROW e-variables coincide for all such $\cH_1$. We show how this result can be derived using the celebrated Csisz\'ar-Tops{\o}e Pythagorean theorem for relative entropy and how it leads to the basic CSC concentration inequality.
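To make the convex-$\meanspace_1$ setting concrete, here is a minimal stdlib-only numerical sketch (ours, not part of the paper's results) for the one-dimensional Gaussian location family: with $P_0 = N(0,1)$ and $\meanspace_1 = [\delta, \infty)$, the information projection of $P_0$ onto the alternative is $N(\delta,1)$, the GROW e-variable is the likelihood ratio $\exp(\delta Y - \delta^2/2)$, its worst-case growth rate equals the KL divergence $\delta^2/2$, and Markov's inequality applied to it yields the one-dimensional CSC/Chernoff bound $P_0(Y \geq \delta) \leq e^{-\delta^2/2}$.

```python
import math

# Sketch: P0 = N(0,1), alternative means M1 = [delta, inf).
# Within the Gaussian location family, the information projection of P0
# onto the alternative is N(delta, 1), so the GROW e-variable is the
# likelihood ratio S(y) = exp(delta*y - delta^2/2).
delta = 0.7

def phi(y, mu=0.0):
    # normal density with mean mu and unit variance
    return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)

def s_grow(y):
    return math.exp(delta * y - delta ** 2 / 2)

# Crude Riemann sums over [-10, 10].
dy = 0.001
ys = [-10 + i * dy for i in range(20001)]
e_null = sum(s_grow(y) * phi(y) for y in ys) * dy            # E_{P0}[S]
growth = sum(math.log(s_grow(y)) * phi(y, delta) for y in ys) * dy
tail = sum(phi(y) for y in ys if y >= delta) * dy            # P0(Y >= delta)

assert abs(e_null - 1.0) < 1e-3            # S is an e-variable: E_{P0}[S] = 1
assert abs(growth - delta ** 2 / 2) < 1e-3  # GROW rate = D(N(delta,1)||N(0,1))
assert tail <= math.exp(-delta ** 2 / 2)    # CSC/Chernoff bound for d = 1
```

The same likelihood-ratio construction is what the Pythagorean machinery below delivers for general convex $\meanspace_1$.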
We do not claim novelty for this section, which mostly contains re-formulations of results that are well-known in the information-theoretic (though perhaps not in the e-value) community. The real novelty comes in subsequent sections: In Section~\ref{sec:surrounding} we move to the case that the {\em complement\/} of $\meanspace_1$ is a connected, bounded set containing $P_0$ --- a case that is more likely to arise in practical applications, is more closely related to the setting of the multivariate CLT, yet has, as far as we know, not been considered before when deriving CSC bounds, with the exception of \cite{kaufmann2021mixture} who consider a variation of this setting (we return to their results in the final Section~\ref{sec:discuss}). We call this the {\em surrounding\/} $\cH_1$ case, since $P_0$ is `surrounded' by $\cH_1$. We can extend the previous $S_{\grow}$ e-variable to this case in two ways. We may either look at the straightforward {\em absolute\/} extension of the GROW e-variable to the multivariate case, which we still denote by $\Sgrow$; or we can determine a {\em relatively\/} optimal GROW e-variable $\Srel$ that is as close as possible to the largest of the GROW e-variables that can be defined on convex subsets of $\cH_1$, where, in this paper, as in \cite{jang2023tighter}, we define relative optimality in a minimax-regret sense. We characterize $\Sgrow$ for the case that $d=1$ (leaving the complicated case $d > 1$ as an open question), and we characterize $\Srel$ for general dimension $d$. We then show that $\Srel$ leads again to a CSC bound, Theorem~\ref{thm:nml} --- and this CSC bound is new. The CSC bound arrived at in Theorem~\ref{thm:nml} contains a minimax regret term $\mmreg$, which may be hard to evaluate in practice. In typical applications, we will have $Y = n^{-1}\sum_{i=1}^n X_i$ with $X_i$ i.i.d., for some fixed sample size $n$.
Then, if the exponential family generated by $P_0$ is regular (as it will be in most cases), we know that $Y$ is equal to $\hat\mu_{|X_1, \ldots, X_n}$, the maximum likelihood estimator for the generated family, given in its mean-value parameterization. We can then think of the CSC bound as a concentration inequality that bounds the probability of the MLE falling in some set. Based on this instantiation of $Y$, and building on earlier work by \cite{ClarkeB94,takeuchi1997asymptotically}, we provide, in Section~\ref{sec:asymptotic_growth_rate}, asymptotic expressions for the minimax regret term $\mmreg_n$ as a function of $n$, and show that, under regularity conditions on the boundary of the set $\meanspace_1$, it increases as $$ \frac{d-1}{2} \log n + O(1). $$ It is no coincidence that this term is equal to the BIC/MDL model complexity of a $(d-1)$-dimensional statistical family: it turns out that the boundary of $\meanspace_1$ is the relevant quantity here, and it defines a $(d-1)$-dimensional exponential family embedded within the $d$-dimensional family generated by $P_0$. We show how this result gives us an asymptotic expression for the absolute GROW e-variable $\Sgrow$ after all, provided that the complement of $\meanspace_1$ is a Kullback-Leibler ball around $P_0$. This paper is still a work in progress. In the final section we provide additional discussion of the results, a comparison to the multivariate Central Limit Theorem, and we indicate the future work we would like to add to our current results. \subsection{Background on GROW e-variables} Since it will help provide the right context, in this --- and only in this --- subsection we allow composite null hypotheses $\cH_0$. Each $P \in \cH_0$ is then a distribution for $Y$. \begin{definition} A nonnegative statistic $S= s(Y)$ is called an $e$-variable relative to $\mathcal{H}_0$ if $$ \text{for all $P \in \mathcal{H}_0$: } \mathbb{E}_P[S] \leq 1.
$$ \end{definition} Let $\cS_0$ be the set of all e-variables that can be defined relative to $\cH_0$ and such that ${\mathbb E}_P[\log S]$ is well-defined as an extended real number for all $P \in \cH_1$, where we adopt the convention that $\log \infty = \infty$ and $\log 0 = -\infty$. `Well-defined' means that we may have ${\mathbb E}_P[(\log S) \vee 0] = \infty$ or ${\mathbb E}_P[(\log S) \wedge 0] = - \infty$ but not both. \cite{GrunwaldHK19} define the {\em worst-case optimal expected capital growth rate\/} ($\grow$) as \begin{equation} \label{eq:growmax} \grow := \sup\limits_{S: S \in \mathcal{S}_0} \inf\limits_{P \in \cH_1} \mathbb{E}_{P}[\log S], \end{equation} where $\mathbb{E}_{P}[\log S]$ is the so-called {\em growth rate\/} of $S$ under $P \in \cH_1$. The $\grow$ $E$-variable, denoted as $\Sgrow$, if it exists, is the e-variable achieving the supremum above. We refer to \cite{GrunwaldHK19,ramdas2023savi} for extensive discussion on why this is, in a particular sense, the {\em optimal\/} e-variable that can be defined for the given testing problem. As a special case of their main result, \citet[Theorem 2]{GrunwaldHK19} show the following: \begin{theorem}\label{thm:ghk}{\bf \cite[Theorem 2, Special Case]{GrunwaldHK19}} Suppose that (a) $D(P_1 \| P_0)< \infty$ for all $P_1 \in\cH_1$, $P_0 \in \cH_0$, and that (b) the minimum \begin{equation} \label{eq:growmin} \min\limits_{P_1 \in \conv(\cH_1)} \min_{P_0 \in \conv(\cH_0)} D(P_1 \| P_0) \end{equation} is achieved by some $P_1^*, P_0$. Then we have \begin{align}\label{eq:GHK} & \sup_{S \in \cS_0} \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log S \right] = D(P^*_1 \| P_0) = \grow = \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P}\left[ \log\frac{p^*_1(Y)}{p_0(Y)}\right], \\ \label{eq:grow} & \text{\ and $\Sgrow$, achieving (\ref{eq:growmax}), is therefore given by\ } \Sgrow= \frac{p_1^*(Y)}{p_0(Y)}. \end{align} Here $p_1^*$ is the density of $P_1^*$, which exists by the finite KL assumption.
\end{theorem} \subsection{Simple $\cH_0$ and the Pythagorean Property} \label{sec:pythagoras} Most recent work in e-variable theory has concentrated on the case of composite $\cH_0$ and simple $\cH_1$ \citep{ramdas2023savi}. Throughout this paper we consider the reverse case, simple $\cH_0 = \{P_0\}$ and composite $\cH_1$. Now the problem clearly simplifies and, in fact, much more has been known about this special case since the 1970s, albeit expressed in the different language of data compression: in a landmark paper, \citet[Theorem 9]{Topsoe79} proved a minimax result for relative entropy which (essentially) implies (\ref{eq:GHK}) for the simple $\cH_0$ case. In fact, his result even implies that a distribution $P^*_1$ such that (\ref{eq:GHK}) and (\ref{eq:grow}) hold exists under much weaker conditions; in particular, condition (b) above is not needed: $P_1^*$ exists even if the minimum in $\min_{P_1 \in \conv(\cH_1)} D(P_1 \| P_0)$ is not achieved. The key result that Tops{\o}e used to prove his version of (\ref{eq:GHK}) and (\ref{eq:grow}) is Theorem 8 of his paper, a version of the {\em Pythagorean theorem\/} for KL divergence originally due to Csisz\'ar \citep{Csiszar75,CoverT91,csiszar_information_2003}. We will re-state this result and explicitly use it to re-derive a version of (\ref{eq:GHK}) and (\ref{eq:grow}) that is slightly stronger than Tops{\o}e's and better suited to our needs (Tops{\o}e's Theorem 9 still assumes condition (a); our derivation weakens it).
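Before stating the Pythagorean theorem formally, a small stdlib-only numerical check (our own sketch, not from the paper) may help: for a null $P_0$ on three points and the convex alternative $\cH_1 = \{P : {\mathbb E}_P[Y] \geq c\}$, the information projection $P_1^*$ of $P_0$ on $\cH_1$ is the exponentially tilted version of $P_0$ with mean exactly $c$, and the inequality $D(P\|P_0) \geq D(P\|P_1^*) + D(P_1^*\|P_0)$ can be verified directly for members $P$ of $\cH_1$.

```python
import math

ys = [-1.0, 0.0, 1.0]
p0 = [0.25, 0.50, 0.25]      # null distribution, E_{P0}[Y] = 0
c = 0.4                      # alternative: all P with E_P[Y] >= c

def tilt(theta):
    # exponentially tilted null: p_theta(y) proportional to exp(theta*y)*p0(y)
    w = [pi * math.exp(theta * y) for pi, y in zip(p0, ys)]
    z = sum(w)
    return [x / z for x in w]

def mean(p):
    return sum(pi * y for pi, y in zip(p, ys))

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Information projection: the tilt with mean exactly c (found by bisection;
# the tilted mean is strictly increasing in theta).
lo, hi = 0.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean(tilt(mid)) < c else (lo, mid)
p1 = tilt(lo)
assert abs(mean(p1) - c) < 1e-6

# Pythagorean inequality for some members of H1 (means 0.6, 0.4, c >= c),
# with equality at P1* itself:
for p in ([0.05, 0.30, 0.65], [0.10, 0.40, 0.50], p1):
    assert mean(p) >= c - 1e-9
    assert kl(p, p0) >= kl(p, p1) + kl(p1, p0) - 1e-9
```

The inequality reduces here to $\theta\,({\mathbb E}_P[Y] - c) \geq 0$ with tilting parameter $\theta \geq 0$, which is exactly why mean-constraint sets and exponential tilting fit together so well in what follows.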
The Pythagorean theorem expresses that, in the following sense, the KL divergence behaves like a squared Euclidean distance: for arbitrary $P_0$ and $\cH_1$ as above, as long as $\cH_1$ is {\em convex} and $\inf_{P_1\in \cH_1} D(P_1 \| P_0) < \infty$, there exists a probability distribution $P^*_1$, called the {\em information projection of $P_0$ on $\cH_1$}, that satisfies: \begin{align} \label{eq:pythagoras} & \text{for all $P \in \cH_1$:}\ D(P \| P_0) \geq D(P \| P^*_1) + D(P^*_1 \| P_{0}) \\ \nonumber &\text{for every $Q_1, Q_2, \ldots \in \cH_1$ with $\lim_{j \rightarrow \infty} D(Q_j \| P_0) = \inf_{P \in \cH_1} D(P \| P_0) $, we have}: \\& \ \ \ \ \lim_{j \rightarrow \infty} D(Q_j \| P^*_1) = 0. \\ \label{eq:railjet} & D(P^*_1 \| P_0) \leq \inf_{P \in \cH_1} D(P \| P_0). \end{align} In standard cases, the final inequality will hold with equality; in particular, we have equality if $\min_{P_1 \in \cH_1} D(P_1 \| P_0)$ is achieved. We call (\ref{eq:pythagoras}) the {\em Pythagorean property}. Note that it is {\em implied\/} by convexity of $\cH_1$ and finiteness of $\inf_{P_1 \in \cH_1} D(P_1 \| P_0)$, but it may sometimes hold even if $\cH_1$ is not convex. We now show, slightly generalizing Tops{\o}e's result, how the Pythagorean property (\ref{eq:pythagoras}) implies a version of \cite{GrunwaldHK19}'s theorem for simple $\cH_0$ (in the reformulation as a minimax theorem for data compression, the Pythagorean property is in fact {\em equivalent\/} to the minimax statement, but we will not need that fact here; see \cite[Section 8]{GrunwaldD04} for an extended treatment of this equivalence). \begin{proposition}\label{prop:pythagorasminimax}{\bf [Pythagoras $\Rightarrow S_{\grow} = p_1^*/p_0$]} Suppose that $\cH_0 = \{P_0\}$ and let $\cH_1$ be any set of distributions (not necessarily convex!)
for $Y$ such that $\inf_{P \in \cH_1} D(P \| P_0) < \infty$, and suppose that $P_1^*$ is such that (\ref{eq:pythagoras})--(\ref{eq:railjet}) hold, so that $P_1^*$ has a density $p^*_1$. Further assume that, with `well-defined' defined as above (\ref{eq:growmax}), \begin{equation}\label{eq:THEcondition} {\mathbb E}_P[\log p^*_1(Y)/p_0(Y)] \text{\rm \ is well-defined for all $P \in \cH_1$}, \end{equation} so that $p^*_1(Y)/p_0(Y) \in \cS_0$. Then we have: \begin{equation}\label{eq:pythagorasminimax} \sup_{S \in \cS_0} \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log S \right] = D(P^*_1 \| P_0) = \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P}\left[ \log\frac{p^*_1(Y)}{p_0(Y)}\right] \end{equation} so that $S_{\textsc{grow}} = p^*_1(Y)/p_0(Y)$. \end{proposition} \begin{proof} Since we deal with a simple null, as shown by \cite{GrunwaldHK19}, any $S \in \cS_0$ must be of the form $q(Y)/p_0(Y)$ for some sub-probability density $q$ relative to the measure $\rho$, and any such ratio defines an e-variable: the notions are equivalent. Here `sub-probability' means that $\int q(y) \, d \rho(y)$ is allowed to be smaller than $1$. We thus have, with $\sup_q$ denoting the supremum over all sub-probability density functions $q$ for $Y$, \begin{align} \sup_{S \in \cS_0} \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log S \right] & = \sup_{q} \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log \frac{q(Y)} {p_0(Y)} \right] \nonumber \\ \label{eq:leftbound} & \leq \sup_q \mathbb{E}_{Y \sim P^*_1} \left[\log \frac{q(Y)} {p_0(Y)} \right] = D(P^*_1 \| P_0).
\end{align} At the same time, the Pythagorean inequality (\ref{eq:pythagoras}) gives, by simple re-arranging of the logarithmic terms, that for all $P \in \cH_1$ with $D(P \| P_0) < \infty$: \begin{align}\label{eq:obb} \mathbb{E}_{Y \sim P}\left[ \log \frac{p^*_1(Y)}{p_0(Y)} \right] \geq \mathbb{E}_{Y \sim P^*_1}\left[ \log \frac{p^*_1(Y)}{p_0(Y)} \right] = D(P^*_1 \| P_0), \end{align} whereas if $D(P \| P_0) = \infty$, then by assumption (\ref{eq:railjet}) the right-hand side of (\ref{eq:obb}) is finite. In that case, (\ref{eq:pythagoras}) implies $D(P \| P^*_1) = \infty$, and, because by assumption (\ref{eq:THEcondition}) the left-hand side of (\ref{eq:obb}) is well-defined, we can again re-arrange (\ref{eq:pythagoras}) to give (\ref{eq:obb}). Thus, we have shown that for all $P \in \cH_1$, (\ref{eq:obb}) holds. But then \begin{align}\label{eq:rightbound} \sup_{S \in \cS_0} \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log S \right] & \geq \inf_{P \in \cH_1} \mathbb{E}_{Y \sim P} \left[\log \frac{p^*_1(Y)} {p_0(Y)} \right] \geq D(P^*_1 \| P_0). \end{align} Together, (\ref{eq:leftbound}) and (\ref{eq:rightbound}) imply the result. \end{proof} While Condition (\ref{eq:THEcondition}) may look complicated, it is immediately verified to hold if $D(P_1 \| P_0) < \infty$ for all $P_1 \in \cH_1$, but also under Condition ALT-$\cH_1$ presented in the next section, which even allows $\cH_1$ to contain distributions $P_1$ with $P_1 \not \ll P_0$ (see Example~\ref{ex:welldefined} for an instance of this). \subsection{The R\^ole of Exponential Families} We shall from now on tacitly assume that the convex support of $P_0$ is $d$-dimensional (see \cite[Chapter 1]{Brown86} for the precise definition of `convex support').
This is without loss of generality: if $Y$ takes values in $\reals^d$ yet the convex support does not have dimension $d$, it must have dimension $d' < d$, and then we can replace $Y$ by $d'$-dimensional $Y'$ that is an affine function of $Y$ and work with $Y'$ instead. Combined with our earlier assumption that $Y$ has a moment generating function under $P_0$, it follows \citep[Chapter 1]{Brown86} that $Y$ and $P_0$ jointly {\em generate\/} a $d$-dimensional natural exponential family $\cE = \{P_{\theta}: \theta \in \Theta \}$: a set of distributions for $Y$ with parameter space $\Theta \subseteq {\mathbb R}^d$. Each distribution $P_{\theta}$ has density $p_{\theta}$ relative to $\rho$, given by: \begin{equation}\label{eq:expfam} p_{\theta}(Y) = \frac{1}{Z(\theta)} \exp\left({\theta}^\top Y \right) \cdot p_0(Y), \end{equation} where $Z(\theta)$ is the normalizing factor, $p_0$ is the density of the {\em generating distribution} $P_0$, and $\Theta = \{\theta: Z(\theta) < \infty\}$. From now on, we freely use standard properties, terminology and definitions concerning exponential families (such as `carrier density' and so on) that can be found in, for example, \cite{BarndorffNielsen78,efron_2022,Brown86}. We will only mention these works, and then specific sections therein, when we refer to results that are otherwise hard to find. Parameterization (\ref{eq:expfam}) is called the {\em canonical\/} or {\em natural\/} parameterization. As is well-known, exponential families can be re-parameterized in terms of the mean of $Y$. Thus, there is a 1-to-1 mapping $\mu: \Theta \rightarrow \meanspace$, mapping each $\theta \in \Theta$ to $\mu(\theta) := {\mathbb E}_{P_{\theta}}[Y]$, with $\meanspace$ being the {\em mean-value parameter space}. We let $\theta(\mu)$ be the inverse of this mapping and let $\Pmv_{\mu} := P_{\theta(\mu)}$ with density $\pmv_{\mu}(Y) := p_{\theta(\mu)}(Y)$.
Then we can equivalently write our exponential family as \begin{equation}\label{eq:expfammean} \cE = \{\Pmv_\mu: \mu \in \meanspace \}. \end{equation} Without loss of generality, we assume that $Y$ is defined such that $\nv \in \meanspace$ and the natural parameterization is such that $\theta(\nv) = \nv, \mu(\nv) = 0$. Clearly the null hypothesis $\cH_0 = \{P_0\}$ is given by the element of $\cE$ corresponding to the parameter vector $\theta = \nv$. The mean-value parameterization will be the most `natural' one (no pun intended) to use to define the alternative. As said in the introduction, we restrict ourselves to cases in which $Y$ has a moment generating function under $P_0$. Then $\meanspace$ contains an open set around $0$ and the exponential family (\ref{eq:expfammean}) exists. We define the alternative $\cH_1$ in terms of a given set of means $\meanspace_1 \subset \conv(\cY)$, invariably satisfying: \paragraph{Condition ALT-$\meanspace_1$:} (a) $\meanspace_1$ is closed, and $\inf_{\mu \in \meanspace_1} \| \mu \|_2 > 0$; and, (b), for all $\mu \in \meanspace_1$, there is a $\mu' \in \meanspace \cap \meanspace_1$ that lies on the straight line connecting $\nv$ and $\mu$. \\ \ \\ \noindent For the actual alternative hypothesis we then invariably further assume: \paragraph{Condition ALT-$\cH_1$:} (a) $\cH_1$ and $\meanspace_1$ are {\em compatible\/} in the sense of (\ref{eq:compatible}), and (b) $\cE_1 := \{ \Pmv_{\mu}: \mu \in \meanspace_1 \cap \meanspace\} \subset \cH_1$ and, (c), for all $P \in \cH_1$, ${\bf E}_{Y \sim P}[Y]$ is well-defined. \\ \ \\ \noindent To appreciate these conditions, consider first the case that $\meanspace= \conv(\cY)$, i.e. the mean-value parameter space of family $\cE$ contains every possible mean. 
Then Condition ALT-$\meanspace_1$ (a) says that $\meanspace_1$ is separated from $0$ and that it contains its boundary (note that it does not need to be bounded: for example, in the case that $\cE$ is the 1-dimensional normal location family, having $\meanspace_1= [1,\infty)$ is perfectly fine). Condition ALT-$\meanspace_1$ (b) holds automatically if $\meanspace= \conv(\cY)$ (e.g. in the Gaussian location case); the example below illustrates the case that $\meanspace$ is a strict subset of $\conv(\cY)$. Condition ALT-$\cH_1$ (b) simply says that for every mean in $\meanspace_1$, $\cH_1$ contains the element of the exponential family $\cE$ with that mean. \begin{example}\label{ex:ample} {\rm As a very simple example, suppose that $Z_1, Z_2, \ldots$ are i.i.d. Bernoulli$(p)$ for some $0 < p < 1$, and $X_i = 1/p$ if $Z_i=1$ whereas $X_i = - 1/(1-p)$ if $Z_i=0$. Let $Y=n^{-1} \sum_{i=1}^n X_i$. Then $\conv(\cY) = [-1/(1-p),1/p]$ and according to $P_0$, $Y$ is a linear transform of a $\textsc{bin}(n,p)$ random variable with ${\mathbb E}_{P_0}[Y]= 0$. Then Condition ALT-$\meanspace_1$ (a) expresses that $\meanspace_1$ must be bounded away from the mean of $P_0$ (i.e., from $\mu=0$), and (b) that it must not consist solely of the boundary points $\mu = 1/p$ and $\mu= -1/(1-p)$, which lie outside $\meanspace$. However, for example, $\meanspace_1=[1/p-\epsilon,1/p]$ for any $0 < \epsilon < 1/p$ satisfies Condition ALT-$\meanspace_1$. We see that the condition only very minimally restricts the set of $\meanspace_1$ that are allowed. It becomes more restrictive for the (very seldom encountered!) case that the generated exponential family $\cE$ is {\em irregular\/} \citep{BarndorffNielsen78}. By construction of $\cE$, this is equivalent to it being {\em not steep}. For example, let $Y$ be 1-dimensional and let $P_0$ have density $p_0(y) = {\bf 1}_{y > 1} \cdot (2/y^3)$ relative to Lebesgue measure.
Then we get $\cE = \{P_{\theta} \mid \theta \leq 0\}$ with $p_{\theta}(y) \propto \exp(\theta y)p_0(y)$. Then $\meanspace= (1,2]$ yet $\cY= (1,\infty)$ (from the fact that $\meanspace$ is not open we immediately see that $\cE$ is not regular). Condition ALT-$\meanspace_1$ now requires that $\meanspace_1$ contains a $\mu < 2$, even though $P_0(Y \geq b)> 0$ for any $b \geq 2$. } \end{example} \commentout{ \paragraph{Condition 1} We assume that: \begin{enumerate} \item $Y$ has a moment generating function under $P_0$, so $P_0$ and $Y$ generate an exponential family $\cE = \{\Pmv_{\mu}: \mu \in \meanspace\}$. \item We assume that this family is {\em regular}, so (by \cite[Theorem 9.2]{BarndorffNielsen78}), $\meanspace$ is convex and equal to the interior of the convex hull of the support of $Y$ under $P_0$. \item We assume that $\meanspace_1$ is contained in the interior of the convex hull of the support of $Y$ under $P_0$. \end{enumerate} The strong condition here is that a moment generating function exists, which implies that $Y$ has exponentially small tails. Once this is the case, in `most' cases the family generated by $P_0$ and $Y$ is steep, and therefore regular; see \cite{BarndorffNielsen78} who provides lots of examples and discussion. } \section{Convex $\meanspace_1$} \label{sec:convexm1} \paragraph{The Connection between Exponential Families and the Pythagorean Theorem} Although exponential families are usually employed as families that are reasonable in their own right, as is well-known \citep{Csiszar75,GrunwaldD04} they can also be arrived at as characterizing the information projection $P^*_1$ in the Pythagorean property above for certain $\cH_1$. We will heavily use this characterization below. 
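As a concrete warm-up, the information projection and the Pythagorean inequality can be computed numerically in a toy discrete model (all numbers below are illustrative choices of ours, not taken from the text): we project $P_0$ onto the convex set $\{P: {\mathbb E}_P[Y] \geq c\}$ by bisection on the mean-value map, and check $D(P \| P_0) \geq D(P \| P^*_1) + D(P^*_1 \| P_0)$ for one alternative $P$:

```python
import math

# Toy finite-support null P0 for Y (our own illustrative numbers).
ys = [-1.0, 0.0, 2.0]
p0 = [0.5, 0.25, 0.25]          # E_{P0}[Y] = 0

def tilt(theta):
    # Element P_theta of the exponential family generated by (P0, Y).
    z = sum(p * math.exp(theta * y) for p, y in zip(p0, ys))
    return [p * math.exp(theta * y) / z for p, y in zip(p0, ys)]

def mean(q):
    return sum(qi * y for qi, y in zip(q, ys))

def kl(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Information projection of P0 onto {P : E_P[Y] >= c}: the tilted element
# with mean exactly c, found by bisection on the increasing map theta -> mean.
c = 1.0
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(tilt(mid)) < c:
        lo = mid
    else:
        hi = mid
theta_star = (lo + hi) / 2
p_star = tilt(theta_star)

# Pythagorean inequality for a P in the convex alternative {E_P[Y] >= c}:
# D(P || P0) - D(P || P*) - D(P* || P0) = theta* * (E_P[Y] - c) >= 0.
p_alt = [0.1, 0.2, 0.7]          # has mean 1.3 >= c
lhs = kl(p_alt, p0)
rhs = kl(p_alt, p_star) + kl(p_star, p0)
```

The gap $D(P\|P_0) - D(P\|P^*_1) - D(P^*_1\|P_0)$ equals $\theta^*({\mathbb E}_P[Y] - c)$ in this sketch, which is nonnegative on the alternative; this is exactly the exponential-family structure of the projection exploited below.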
Variations of the following result (see Figure~\ref{fig:enter-label} for illustration) are well-known: \begin{proposition}\label{prop:exp}{\bf [GROW e-variable is $\bar{p}_{\mu^*}/p_0$, with $\bar{P}_{\mu^*}$ an element of exponential family $\cE$]} Let $\cH_0 =\{P_0\}$ and $\meanspace_1$ be such that Condition ALT-$\meanspace_1$ holds. Furthermore let $\meanspace_1$ be convex. Then there exists $\mu^* \in \meanspace \setminus \{0 \}$ uniquely achieving $\inf_{\mu \in \meanspace_1} D(\Pmv_{\mu} \| P_0)$, and we have: \begin{equation}\label{eq:rightmin} \min_{\mu \in \meanspace_1 \cap \meanspace} D(\Pmv_{\mu} \| P_0) = \min_{\mu \in \meanspace_1} D(\Pmv_{\mu} \| P_0) = D(\Pmv_{\mu^*} \| P_0) = \theta^{*\top} \mu^* - \log Z(\theta^*) \end{equation} with $\theta^* := \theta(\mu^*) \in \Theta$. Furthermore let $\cH_1$ be convex and such that Condition ALT-$\cH_1$ holds. Then the minimum in (\ref{eq:rightmin}) further satisfies: \begin{align}\label{eq:eu} \inf_{P \in \cH_1} D(P \| P_0) = D(\Pmv_{\mu^*} \| P_0) \end{align} where the minimum KL on the left is achieved uniquely by $\Pmv_{\mu^*}$. As a consequence, $\Sgrow = \pmv_{\mu^*}(Y)/\pmv_0(Y) \in \cS_0$ and $\grow = D(\Pmv_{\mu^*} \| P_0)$. \end{proposition} \begin{proof} The KL divergence $D(\bar{P}_{\mu} \| P_0)$ is continuous in $\mu$, has its overall minimum over $\conv(\cY)$ in the point $\mu =0$ and is strictly convex. This implies that, by Condition ALT-$\meanspace_1$(a), $\min_{\mu \in \meanspace_1} D(\bar{P}_{\mu} \| P_0)$ is uniquely achieved for some $\mu^*$ on the boundary of $\meanspace_1$. By Condition ALT-$\meanspace_1$(b), the boundary of $\meanspace_1$ is included in $\meanspace$, so $\mu^* \in \meanspace_1 \cap \meanspace$. This yields the first two equations in (\ref{eq:rightmin}). Writing out the densities in $D(\Pmv_{\mu^*} \| \Pmv_0)$ then gives the rightmost equality. It remains to prove (\ref{eq:eu}).
Condition ALT-$\cH_1$ implies that $\cH_1$ contains a $\bar{P}_{\mu} \in \cE$, hence $\meanspace_1$ contains a $\mu \in \meanspace$, and since $D(\bar{P}_{\mu} \|P_0) < \infty$ for all $\mu \in \meanspace$, we have $\inf_{P \in \cH_1} D(P \| P_0) < \infty$. Therefore, it suffices to show (\ref{eq:eu}) with the infimum taken over $\{P \in \cH_1: D(P \| P_0) < \infty \}$. In particular all $P$ in this set have a density $p$. Thus, fix any $P$ in the set $\{P \in \cH_1: D(P \| P_0) < \infty \}$ and let $\mu = \mathbb{E}_{P}[Y]$. We first consider the case that $\mu \in \meanspace$, so that $\Pmv_{\mu} \in \cE$ (in particular then also $\mu \in \meanspace \cap \meanspace_1$ and $\Pmv_{\mu} = P_{\theta}$ with $\theta= \theta(\mu)$; note though that we may have $P \neq \bar{P}_{\mu}$). Straightforward rewriting and linearity of expectation gives \begin{align}\label{eq:sport} & D(P \| P_0)= \mathbb{E}_{Y \sim P}\left[ \log \frac{p(Y)}{p_0(Y)} \right] \geq \mathbb{E}_{P}\left[ \log \frac{p_{\theta}(Y)}{p_0(Y)} \right] = \mathbb{E}_{P}\left[ \log \frac{1}{Z(\theta)} \cdot e^{\theta^{\top} Y } \right]= \nonumber \\ & \theta^{\top} \mu - \log Z(\theta) = D(P_{\theta} \| P_0)= D(\Pmv_{\mu} \| P_0) \geq \min_{\mu \in \meanspace_1 \cap \meanspace} D(\Pmv_{\mu} \| P_0), \end{align} the final inequality following because $\mu \in \meanspace \cap \meanspace_1$. Together with (\ref{eq:rightmin}) this shows (\ref{eq:eu}) for the case that $\mu \in \meanspace$. It remains to consider the case that $\mu \not \in \meanspace$. In that case, Condition ALT-$\meanspace_1$ and ALT-$\cH_1$ imply that there exists a $\mu' \in \meanspace \cap \meanspace_1$ and $\Pmv_{\mu'} \in \cH_1 \cap \cE$ such that $\mu' = \alpha \mu$ for some $0 < \alpha < 1$. 
Retracing the steps of (\ref{eq:sport}) with $\theta' = \theta(\mu')$ in the place of $\theta$, we find \begin{equation}\label{eq:doei} D(P \| P_0) \geq \theta^{'\top} \mu - \log Z(\theta') = f(1) \end{equation} where, for $\gamma \in [0,1]$, we set $f(\gamma) = {\mathbb E}_{P_{\gamma \mu}}\left[\log \frac{\pmv_{\mu'}(Y)}{p_0(Y)} \right]$. Since $f(0)$ is minus a KL divergence, $f(0) < 0$. Also, $f(\alpha) > 0$, since $f(\alpha)$ is a KL divergence. Since $f(\gamma)$ is linear in $\gamma$, it follows that $f(\gamma)$ is strictly increasing, so $f(1) > f(\alpha)$, and then (\ref{eq:doei}) gives that $D(P \|P_0) > D(\Pmv_{\mu'} \| P_0) \geq \min_{\mu \in \meanspace_1 \cap \meanspace} D(\Pmv_{\mu} \| P_0)$, which again implies the result. It remains to prove that $S_{\grow} = S$ with $S= \bar{p}_{\mu^*}(Y)/p_0(Y)$. For this, first note that for all $P \in \cH_1$, we have $P(p_0(Y) > 0)=1$ by definition (namely, $\cY$ is the support of $Y$ under $P_0$), from which it follows that $P(S > 0) =1$, so that $\log S= \theta^{*\top} Y - \log Z(\theta^*)$ whence ${\bf E}_{Y \sim P}[\log S]$ is well-defined; it follows that $S\in \cS_0$. By Tops{\o}e's result and the assumed convexity of $\cH_1$ and finiteness of $D(\bar{P}_{\mu^*} \| P_0)$, we may now apply Proposition~\ref{prop:pythagorasminimax}, and the result follows. \end{proof} \begin{example}{\rm \label{ex:welldefined} Since $\cE$ is an exponential family, we know that all elements $P \in \cE$ have the same support as $P_0 \in \cE$, and, by definition of $P_0$, this support is equal to $\cY$. This implies that, even if $P \in \cH_1$ puts positive mass on an outcome $y \in \cY$ that has mass $0$ under $P_0$, well-definedness (\ref{eq:THEcondition}) may still hold (because $y$ must be in $P_0$'s support). For example, consider the case that $\cY = \reals$ and $P(\{0 \}) = 1/2$ and $P \mid Y \neq 0 = N(0,1)$ is a standard normal, and $\cE$ is the normal location family so that $\bar{P}_{\mu} = N(\mu,1)$.
We get ${\mathbb E}_P[\log \bar{p}_{\mu}(Y)/p_0(Y)]= \mu \, {\mathbb E}_P[Y] - \mu^2/2 = -\mu^2/2$ (as ${\mathbb E}_P[Y] = 0$), i.e. it is well-defined. On the other hand, if we were to allow $P_0$ to be defined on a sample space $\cY$ with $Y$'s support under $P_0$ a strict subset of $\cY$, and we took $P \in \cH_1$ putting positive mass on an outcome that is not in the support of $P_0$, then $\bar{p}_{\mu}(Y)/p_0(Y)$ would evaluate to $0/0$ with positive $P$-probability, and $\Sgrow$ of the form above would be undefined. We avoid such issues by requiring $\cY$ to coincide with its support under $P_0$. We suspect that using the ideas of \cite{larsson2024numeraire}, we can even obtain well-defined growth expressions for this case, but will leave this for future work. } \end{example} \subsection{CSC (Chernoff-Sanov-Csisz\'ar) for convex $\meanspace_1$} \label{sec:cscconvex} Note that the only role $\cH_1$ plays in the theorem below is to make $\Sgrow$ well-defined; the bounds further do not depend on the specific choice of $\cH_1$ as long as $\Sgrow = \pmv_{\mu^*}(Y)/p_0(Y)$. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{figures/figure_convex.pdf} \caption{Convex $\mathtt{M}_1$. $\mathtt{M}_1$ is the mean parameter set that $\cH_1$ is compatible with, and $\mathtt{M}$ is the mean parameter space of the exponential family generated from $P_0$. The setting of this figure satisfies Condition ALT-$\meanspace_1$.} \label{fig:enter-label} \end{figure} \begin{theorem}\label{thm:old} Suppose $P_0$ and $\meanspace_1$ are such that $\meanspace_1$ is convex and Condition ALT-$\meanspace_1$ holds so that, by Proposition~\ref{prop:exp}, there exists $\mu^* \in \meanspace$ minimizing $D(\bar{P}_{\mu} \| P_0)$ over $\meanspace_1$ with $\Pmv_{\mu^*} \in \cE$. Let $\cH_1$ be any set of distributions such that Condition ALT-$\cH_1$ holds, so that, by Proposition~\ref{prop:exp}, $\Sgrow(Y) := \pmv_{\mu^*}(Y)/p_0(Y)$.
Define \begin{align} \underline{D}:= \inf_{\mu \in \meanspace_{1}} D(\Pmv_{\mu} \| P_0). \end{align} We have: \begin{equation}\label{eq:subset} y \in \meanspace_{1} \Rightarrow \Sgrow(y) \geq e^{\underline{D}} \end{equation} so that $$ {\mathbb E}_{Y \sim \Pmu_0}\left[ {\bf 1}_{Y \in \meanspace_{1}} \cdot {e^{\underline{D}} } \right] \leq {\mathbb E}_{Y \sim \Pmu_0}\left[ {\bf 1}_{Y \in \meanspace_{1}} \cdot {S_{\grow}} \right] \leq {\mathbb E}_{Y \sim \Pmu_0}\left[ {S_{\grow}} \right] \leq 1. $$ As a consequence, we have: \begin{align}\label{eq:basiccsc} P_0(Y \in \meanspace_{1}) = {\mathbb E}_{Y \sim \Pmu_0}\left[ {\bf 1}_{Y \in \meanspace_{1}} \cdot {e^{\underline{D}} } \right] \cdot e^{- \underline{D}} \leq e^{- \underline{D}}, \end{align} and we also have, for the one-dimensional case with $Y \in \reals$, for any $D>0$ for which there exists $\mu^* \in \meanspace$ such that $D(\Pmv_{\mu^*} \| P_0) = D$, with $s = \sgn(\mu^*)$, \begin{align} \label{eq:MLEbound} & P_0\left( \frac{\sup_{\mu \in \meanspace} \bar{p}_{\mu}(Y)}{p_0(Y)} \geq e^{D}, \textsc{sgn}(Y) = s \right) = P_0\left( \frac{\bar{p}_{\mu^*}(Y)}{p_0(Y)} \geq e^{D} \right) \leq e^{- D}. \end{align} \end{theorem} (\ref{eq:basiccsc}) is the bound first developed by \cite{Csiszar84}, who presented it as an extension of part of Sanov's theorem. In the one-dimensional case, it can also be seen as the `generic' Chernoff bound. 
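As a quick numerical sanity check of (\ref{eq:basiccsc}), consider a rescaled version of Example~\ref{ex:ample} with $p = 1/2$, so that the $X_i$ are fair coin flips in $\{-1,+1\}$, $Y = n^{-1}\sum_{i=1}^n X_i$, and $\meanspace_1 = [a,1]$ for some $a > 0$; then $\underline{D} = n \cdot d(\tfrac{1+a}{2} \| \tfrac12)$ with $d(\cdot\|\cdot)$ the Bernoulli KL divergence. The sketch below (the values of $n$ and $a$ are illustrative choices of ours) compares the exact tail probability with the bound $e^{-\underline{D}}$:

```python
import math
from math import comb, ceil

def bern_kl(q, p):
    # Binary KL divergence d(q || p) between Bernoulli(q) and Bernoulli(p).
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

# Fair-coin setting: X_i uniform on {-1,+1} under P_0, Y = n^{-1} sum X_i,
# alternative means M_1 = [a, 1]; n and a are illustrative choices.
n, a = 50, 0.2
q = (1 + a) / 2                        # success probability giving E[Y] = a
D_lower = n * bern_kl(q, 0.5)          # underline{D} = inf over M_1 of the KL
k_min = ceil(n * q - 1e-9)             # smallest head count with Y >= a
tail = sum(comb(n, k) for k in range(k_min, n + 1)) * 0.5 ** n
bound = math.exp(-D_lower)
```

The exact probability $P_0(Y \in \meanspace_1)$ is strictly smaller than the bound $e^{-\underline{D}}$, as (\ref{eq:basiccsc}) guarantees.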
This bound is usually formulated as $$ ``P_0(Y\geq \mu^*) \leq \inf_{\theta > 0} {\mathbb E}_{P_0}[e^{\theta^{\top} Y}]e^{- \theta^{\top} \mu^*}."$$ Evaluating the infimum shows that the right-hand side is equal to $\exp(-\underline{D})$ with $\underline{D}= D(\bar{P}_{\mu^*} \| P_0)= \inf_{\mu \geq \mu^*} D(\bar{P}_{\mu} \| P_0)$, making the bounds equivalent; hence our choice of the {\em CSC}-terminology (this is the {\em generic\/} Chernoff bound; when authors speak of {\em the\/} Chernoff bound they usually refer to a specific instance of it, with $Y= \sum_{i=1}^n X_i$ and $X_i$ binary). Links between Sanov's theorem, Csisz\'ar's extension thereof and Chernoff have been explored before; see e.g. Van Erven [\citeyear{VanErven12}]. We also note that, while versions of (\ref{eq:MLEbound}) have been known for a long time, it is sometimes considered surprising, because a direct, more naive application of Markov's inequality would give, with $ S' = \frac{\sup_{\mu \in \meanspace}\bar{p}_{\mu}(Y)}{p_0(Y)}$, $$ P_0( S' \geq e^{D}) \leq e^{-D} \cdot {\mathbb E}_{P_0}\left[S' \right] = e^{-D} \cdot \int p_0(y) \cdot \frac{\sup_{\mu \in \meanspace}\bar{p}_{\mu}(y)}{p_0(y)} d \rho(y) \gg e^{-D}, $$ which can be considerably weaker, since $S'$ is not an e-variable. The result (\ref{eq:MLEbound}) shows that we {\em can\/} establish an underlying e-variable, and it is given by $\bar{p}_{\mu^*}/p_0$. Even though Theorem~\ref{thm:old} is not new, we give its proof in full since its ingredients will be reused later on: \begin{proof}{\bf [of Theorem~\ref{thm:old}]} Let $\meanspace_1$ and $\cH_1$ be as in the theorem, so that ALT conditions hold. Note that we may choose $\cH_1$ to be convex. 
By the final part of Proposition~\ref{prop:exp} we then know that $\Sgrow = \frac{\pmv_{\mu^*}({Y})} {p_{0}({Y})}$ and that for all $P \in \cH_1$, with $\mu ={\mathbb E}_P[Y]$ and $\underline{D} = D(\Pmv_{\mu^*} \| P_0)$, we have \begin{equation}\label{eq:meanimp} \mu \in \meanspace_1 \Rightarrow {\mathbb E}_{P} \left[ \log \frac{\pmv_{\mu^*}({Y})} {p_{0}({Y})} \right] \geq \underline{D}, \end{equation} where the expectation is well-defined. Now note that the expectation on the left can be rewritten, with $\theta^* = \theta(\mu^*)$, as $\theta^{*\top} \mu - \log Z(\theta^*)$, so that (\ref{eq:meanimp}) becomes $$ \mu \in \meanspace_1 \Rightarrow \theta^{*\top} \mu - \log Z(\theta^*) \geq \underline{D}. $$ Since this is a statement about points in $\meanspace_1$ only, it can be rewritten as: $$ y \in \meanspace_1 \Rightarrow \theta^{*\top} y - \log Z(\theta^*) \geq \underline{D}, $$ or, again equivalently, $$ y \in \meanspace_1 \Rightarrow \log \pmv_{\mu^*}(y)/p_0(y)\geq \underline{D}. $$ The result (\ref{eq:subset}) follows after exponentiating. The subsequent inequality (\ref{eq:basiccsc}) now readily follows as well. For (\ref{eq:MLEbound}), we only consider the case $\mu^* > 0$, the case $\mu^* < 0$ being completely analogous. We have \begin{align}\label{eq:mannheim} & P_0\left(\log \sup_{\mu \in \meanspace} \frac{\bar{p}_{\mu}(Y)}{p_0(Y)} \geq {D}, Y \geq \mu^* \right)= P_0\left(Y \geq \mu^* \right) = \nonumber \\ & P_0\left( \frac{\bar{p}_{\mu^*}(Y)}{p_0(Y)} \geq e^{D}, Y \geq \mu^* \right)\leq P_0(S_{\grow} \geq e^{D}) \leq e^{-D}, \end{align} which follows because, by the {\em robustness property of exponential families\/} \citep[Chapter 19]{grunwald2007minimum} (but also easily verified directly by considering the natural parameterization), $\log \bar{p}_{\mu^*}(y)/p_0(y) = D(\bar{P}_{\mu^*} \| P_0)$ if $y= \mu^*$, and $\log \bar{p}_{\mu^*}(y)/p_0(y)$ is increasing in $y$ for $\mu^* > 0$, which implies that the events inside the left and the right probability are identical.
On the other hand, \begin{align}\label{eq:kaiserslautern} & P_0\left(\log \sup_{\mu \in \meanspace} \frac{\bar{p}_{\mu}(Y)}{p_0(Y)} \geq {D}, 0 \leq Y < \mu^* \right)= P_0\left(\log \frac{\bar{p}_Y(Y)}{p_0(Y)} \geq {D}, 0 \leq Y < \mu^* \right)= \nonumber \\ & P_0 (D(\bar{P}_Y \|P_0) \geq D(\bar{P}_{\mu^*} \|P_0), 0 \leq Y < \mu^*) = 0, \end{align} where the first equality follows because the $\mu$ maximizing the likelihood $\bar{p}_{\mu}(Y)$ is uniquely given by $Y$ if $Y \in \meanspace$, the second is again the robustness property of exponential families, and the third follows because KL divergence $D(\bar{P}_{\mu} \| P_0)$ is strictly increasing in $\mu$ if $\mu > 0$. Together, (\ref{eq:mannheim}) and (\ref{eq:kaiserslautern}) imply the result. \end{proof} \section{Surrounding $\meanspace_1$} \label{sec:surrounding} We now consider tests and concentration bounds in the often more relevant setting of `surrounding' $\meanspace_1$. Formally, we call $\meanspace_1$ {\em surrounding\/} if its complement, $\meanspace_1^{\comp} := \conv(\cY) \setminus \meanspace_1$, is an open, bounded, connected set containing $\nv$, and contained in $\reals^d$. We will call surrounding $\meanspace_1$ {\em nice\/} if (a) $\meanspace_1^{\comp}$ is contained in the interior of the mean-value space $\meanspace$ of exponential family $\cE$ generated by $P_0$ and also (b) $\meanspace_1^{\comp}$ is {\em $\nv$-star-shaped}, which means that for any straight line through $\nv$, its intersection with $\meanspace_1^{\comp}$ is an interval, so that any ray emanating from $\nv$ crosses the boundary $\bd(\meanspace_1^{\comp})$ only once. Note in particular that any convex $\meanspace_1^{\comp}$ is automatically star-shaped; see Figures~\ref{fig:starA} and~\ref{fig:starB} for two examples of star-shaped $\meanspace_1^{\comp}$. Note also that any {\em nice\/} $\meanspace_1$ automatically satisfies Condition ALT-$\meanspace_1$.
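The $\nv$-star-shape condition lends itself to a simple numerical check when a set is given by a membership oracle: every ray from $\nv$ (placed at the origin below) should meet the set in a single initial segment. The following sketch, with purely illustrative test sets of our own, scans rays on a grid:

```python
import math

def crosses_once(member, direction, radii):
    # Along the ray {r * direction : r > 0}, membership in an nu-star-shaped
    # set (nu at the origin) must form a single initial run of True values.
    flags = [member(r * direction[0], r * direction[1]) for r in radii]
    switches = sum(flags[i] != flags[i + 1] for i in range(len(flags) - 1))
    return flags[0] and switches <= 1

def is_star_shaped(member, n_dirs=90):
    radii = [i / 500.0 for i in range(1, 1001)]   # grid of radii in (0, 2]
    for k in range(n_dirs):
        phi = 2 * math.pi * k / n_dirs
        if not crosses_once(member, (math.cos(phi), math.sin(phi)), radii):
            return False
    return True

def disk(x, y):
    # Convex, hence nu-star-shaped around the origin.
    return x * x + y * y < 1.0

def ring_plus_core(x, y):
    # A small core plus a detached annulus: rays leave and re-enter the set.
    return x * x + y * y < 0.01 or 0.25 < x * x + y * y < 1.0
```

This is of course only a grid-based heuristic, not a proof of star-shapedness, but it matches the geometric picture of Figures~\ref{fig:starA} and~\ref{fig:starB}.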
\begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.2in]{figures/surrounding-partitioned.pdf} \caption{\label{fig:starA} Surrounding, nice, $\mathtt{M}_1$ with a finite nice partition into convex sets. This figure is obtained by taking $P_0$ a Gamma distribution on $X$ and defining $Y= (Y_1,Y_2) = (\log X,X-c)$ for a constant $c > 1$. Then $\cE$ is a translated Gamma family with sufficient statistic $Y$ and mean-value space $\meanspace= \{(y_1,y_2): y_1 \in \reals, y_2 = e^{y_1} - c \}$ (unlike in Figure~\ref{fig:enter-label}, we have $\meanspace_1 \subset \meanspace$ here).} \end{minipage}\ \ \ \ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.2in]{figures/surrounding-cannot_partitioned.pdf} \caption{\label{fig:starB} Surrounding, nice $\mathtt{M}_1$ that cannot be partitioned into a finite number of convex sets (again we show the translated Gamma family).} \end{minipage} \end{figure} \commentout{Our results will hold for surrounding $\Theta_1$ such that In practice it may be more likely that we want to bound the probability of a surrounding set $\meanspace_1$ in the mean-value space. Since $P_0(\mu \in \meanspace_1) = P_0(\theta \in \Theta_1)$ with $\Theta_1 = \theta(\meanspace_1)$, we may use the results to get such a bound. In principle it might happen that $\meanspace_1$, although itself $\nv$-star-shaped, whereas $\theta(\meanspace_1)$ is not. In that case, we can use the results below to get an upper bound on the $P_0$-probability of the $\nv$-star-shaped hull $\textsc{star}_\nv(\Theta_1)$, and uses this to further upper bound the $P_0$-probability of $\meanspace_1$, which cannot be larger.} We now develop optimal e-variables for surrounding and regular $\meanspace_1$. The GROW criterion is still meaningful in this setting, and we discuss it in Section~\ref{sec:grow} below. 
Yet alternative, {\em relative\/} growth criteria are sometimes more meaningful in hypothesis testing \citep{ramdas2023savi,GrunwaldHK19} and one of these, minimax regret, more directly leads to corresponding CSC-type bounds. We consider these in Sections~\ref{sec:regret} and~\ref{sec:inequality_theorem}. \subsection{GROW for $d=1$}\label{sec:grow} We may again consider the GROW criterion for general $\cH_1$ that could be any set of distributions compatible with a given nice surrounding $\meanspace_1$ and $d \geq 1$, but this turns out to be surprisingly complicated in general. We only managed to find a simple characterization of $\Sgrow$ for the case $d=1$, $\meanspace_1 \subset \meanspace$ and $\cH_1 = \cE_1= \{\bar{P}_{\mu}: \mu \in \meanspace_1\}$; that is, we are now testing $P_0$, a member of 1-dimensional exponential family $\cE$, against a subset $\cE_1$ of $\cE$ that is bounded away from $P_0$. In the sequel we denote by $\pmv_{W}(X^n) := \int \pmv_{\mu}(X^n) dW(\mu)$ the Bayes marginal density corresponding to prior measure $W$. \begin{theorem}\label{thm:main_new} Let $P_0$ be a distribution for 1-dimensional $Y$ with $\cY \subseteq \reals$, and suppose that $\meanspace_1$ is nice, i.e. $\meanspace_1^{\comp} = (\mu^{-}_1,\mu^+_1)$ is an open interval containing $\nv$ and contained in the mean-value parameter space $\meanspace$ for the 1-dimensional exponential family generated by $P_0$. Then, among all distributions $W$ on the boundary $\bd(\meanspace_1^{\comp})= \{ \mu^-_1,\mu^+_1\}$, the minimum of $D(\bar{P}_{W}|| P_{0})$ is achieved by a distribution $W^*$ that satisfies \begin{equation}\label{eq:jantje} D(\bar{P}_{W^*}|| \bar{P}_{\mu_0}) = {\mathbb E}_{\bar{P}_{\mu^-_1}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_{0}(Y)} \right] = {\mathbb E}_{\bar{P}_{\mu^+_1}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_{0}(Y)} \right].
\end{equation} The GROW e-variable relative to $\cE_1 =\{ \bar{P}_{\mu}: \mu \in \meanspace_1\cap \meanspace\}$, denoted $S_{\grow}$, and the GROW e-variable relative to $\cE_1^{\bd} := \{\bar{P}_{\mu}: \mu \in \bd(\meanspace_1^{\comp})\}$, denoted $S_{\grow}^{\bd}$, are {\em both\/} given by: \begin{equation}\label{eq:pietje} S_{\textsc{grow}} = S^{\bd}_{\textsc{grow}} = \frac{\bar{p}_{{W}^*}(Y)}{p_{0}(Y)}, \end{equation} i.e., the support of the prior ${W}^*$ on $\meanspace_1$ minimizing the KL divergence $D(\bar{P}_{{W}^*} \| P_0)$ is fully concentrated on the boundary of $\meanspace_1^{\comp}$. \end{theorem} \begin{proof}{\bf (of Theorem~\ref{thm:main_new})} Define, for $\mu \in \meanspace$ and $w^* \in [0,1]$, \begin{align} \label{eq:brenda} & f(\mu,w^*) = \mathbb{E}_{ \bar{P}_\mu} \left[\log \frac{(1-w^*) \bar{p}_{\mu^-_1}(Y)+ w^* \bar{p}_{\mu^+_1}(Y)}{p_{\mu_0}(Y)} \right] \end{align} and note that, for each $\mu$, it holds that $f(\mu,w^*)$ is continuous in $w^*$. Now consider $f(\mu^-_1,w^*) = - D(\bar{P}_{\mu^-_1} \| \bar{P}_{W^*}) + D(\bar{P}_{\mu^-_1} \| {P}_{0})$. The first term appears here with a minus sign; the divergence $D(\bar{P}_{\mu^-_1} \| \bar{P}_{W^*})$ itself is $0$ at $w^*=0$ and continuously monotone increasing in $w^*$, since KL divergence is nonnegative and strictly convex in its second argument. Therefore $f(\mu^-_1,w^*)$ is itself continuously monotone decreasing in $w^*$ with $f(\mu^-_1,0) = D(\bar{P}_{\mu^-_1} \| P_0) > 0$. Also, $f(\mu^-_1,1) = - D(\bar{P}_{\mu^-_1} \| \bar{P}_{\mu^+_1}) + D(\bar{P}_{\mu^-_1} \| P_{0}) < 0$, since KL divergence $D(\bar{P}_{\mu^-_1} \| \bar{P}_{\mu'})$ is strictly increasing in $\mu'$, for $\mu'> \mu^-_1$. Analogously $f(\mu^+_1,w^*)$ is continuously monotone increasing in $w^*$, $f(\mu^+_1,1) = D(\bar{P}_{\mu^+_1} \| P_0) > 0$ and $f(\mu^+_1,0) < 0$. This shows that there exists $0 < w^{\circ}< 1$ such that $f(\mu^-_1,w^{\circ}) = f(\mu^+_1,w^{\circ})$.
This implies that there exists a $W^*$ (with $W^*(\{\mu_1^+ \}) = w^{\circ}$) such that the rightmost equality in (\ref{eq:jantje}) holds. But this rightmost equality implies that for all $w' \in [0,1]$, $$ (1-w') {\mathbb E}_{\bar{P}_{\mu^-_1}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_{0}(Y)} \right] + w' {\mathbb E}_{\bar{P}_{\mu^+_1}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_{0}(Y)} \right] = {\mathbb E}_{\bar{P}_{\mu^-_1}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_{0}(Y)} \right]. $$ Plugging in $w' = w^{\circ}$, we get the left equality in (\ref{eq:jantje}). Now (\ref{eq:jantje}) in turn gives that \begin{equation}\label{eq:barcelona} \min_{\mu \in \bd(\meanspace_1^{\comp})} {\mathbb E}_{\bar{P}_{\mu}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_0(Y)} \right] = D(\bar{P}_{W^*}|| \bar{P}_{\mu_0}), \end{equation} whereas for any probability density $q$ for $\cY$, \begin{equation}\label{eq:finalwork} \min_{\mu \in \bd(\meanspace_1^{\comp})} {\mathbb E}_{\bar{P}_{\mu}} \left[ \log \frac{q(Y)}{p_0(Y)} \right] \leq {\mathbb E}_{\bar{P}_{W^*}} \left[ \log \frac{q(Y)}{p_0(Y)} \right] \leq {\mathbb E}_{\bar{P}_{W^*}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_0(Y)} \right]= D(\bar{P}_{W^*}|| \bar{P}_{\mu_0}), \end{equation} which together with (\ref{eq:barcelona}) shows that $S^{\bd}_{\grow} = \frac{\bar{p}_{{W}^*}(Y)}{p_{0}(Y)}$, since every well-defined e-variable can be written as $q(Y)/p_0(Y)$ for some probability density $q$. 
Also, similarly to (\ref{eq:finalwork}), we have \begin{equation*} \inf_{\mu \in \meanspace_1} {\mathbb E}_{\bar{P}_{\mu}} \left[ \log \frac{q(Y)}{p_0(Y)} \right] \leq {\mathbb E}_{\bar{P}_{W^*}} \left[ \log \frac{q(Y)}{p_0(Y)} \right] \leq D(\bar{P}_{W^*}|| \bar{P}_{\mu_0}), \end{equation*} so if we could show \begin{equation}\label{eq:realfinalwork} \inf_{\mu \in \meanspace_1} {\mathbb E}_{\bar{P}_{\mu}} \left[ \log \frac{\bar{p}_{W^*}(Y)}{p_0(Y)} \right]= D(\bar{P}_{W^*}|| \bar{P}_{\mu_0}), \end{equation} then the above two statements would together also imply that $S_{\grow} = \frac{\bar{p}_{{W}^*}(Y)}{p_{0}(Y)}$, and we would be done. But (\ref{eq:realfinalwork}) follows by (\ref{eq:jantje}) \commentout{ This in turn implies that (a) for all $w\in [0,1]$ with $w \neq w^{\circ}$, $\inf_{\mu \in \{\mu_1^-,\mu_1^+\}} f(\mu,w) < f(\mu_1^-,w^{\circ}) = f(\mu_1^+,w^{\circ})$, whereas (b) for both $\mu \in \{\mu_1^-,\mu_1^+\}$, $\sup_{w \in [0,1]} f(\mu,w) \geq f(\mu_1^-,w^{\circ}) = f(\mu_1^+,w^{\circ})$. (a) and (b) together show that $w^{\circ}$ achieves $$ \max_{w \in [0,1]} \inf_{\mu \in \{\mu_1^-,\mu_1^+\}} f(\mu,w) = \inf_{\mu \in \{\mu_1^-,\mu_1^+\}} \max_{w \in [0,1]} f(\mu,w) $$ which shows that the minimum $D(\bar{P}_{W} \| P_0)$ over all $W$ is achieved by $W^*$, verifying the claim above (\ref{eq:jantje}), and that the GROW e-variable relative to $\bd(\meanspace_1^{\comp})$ is given by (\ref{eq:pietje}). It only remains to show that \begin{align}\nonumber \min\limits_{\mu \in {\meanspace}_1} f(\mu,w^{\circ}) =\min\limits_{\mu \in \bd({\meanspace}_1^{\comp})} f(\mu,w^{\circ}). \end{align} But this is implied by } together with the following lemma, which thus completes the proof: \begin{lemma}\label{lem:brown} $f(\mu,w^{\circ})$ is increasing on $\{ \mu \in \meanspace_1: \mu \geq \mu^+_1 \}$ and decreasing on $\{ \mu \in \meanspace_1: \mu \leq \mu^-_1 \}$. \end{lemma} \end{proof} Lemma~\ref{lem:brown} is proved in Section~\ref{sec:proof}.
It follows from the fact that exponential families represent {\em variation reducing kernels} \citep{brown1981variation}, a notion in theoretical statistics that seems to have been largely forgotten, and that we recall in Section~\ref{sec:proof}. In that section we also explain why this result, even for $d=1$, is difficult to prove, which also explains why proving anything nonasymptotic for the case $d > 1$ is currently beyond our reach. \subsection{Alternative Optimality Criteria: minimax redundancy and regret}\label{sec:regret} $\Sgrow$ is difficult to characterize when $d > 1$ and $\cH_1$ is surrounding. While it seems intuitive that, at least under some additional regularity conditions, it is still given by a likelihood ratio with a Bayes mixture concentrated on the boundary of $\meanspace_1$, we did not manage to prove this (we indicate what the difficulty is towards the end of Section~\ref{sec:proof}). We can say more, though, for the special case that $\meanspace_1^{\comp}$ is a KL ball and $Y = n^{-1} \sum_{i=1}^n X_i$ as $n \rightarrow \infty$; see Section~\ref{sec:asymptotic_growth_rate}. However, in e-variable practice we often deal with $\cH_1$ that can be partitioned into a family of subsets $\{\cH_{1,r}: r \in \cR\}$ such that, ideally, we would like to use the GROW e-variable relative to the $\cH_{1,r}$ that actually contains the alternative. This was called the {\em relative GROW\/} criterion by \cite{GrunwaldHK19} and it was used by, for example, \cite{TurnerLG24}. We thus face a collection of e-variables $\cS= \{S_{r,\grow} : r \in \cR\}$ where each $S_{r,\grow}$ is GROW for $\cH_{1,r}$. If an oracle told us beforehand ``if the data come from $P \in \cH_1$, then in fact $P \in \cH_{1,r}$'' then we would want to use $S_{r,\grow}$. Not having access to such an oracle, we want to use an e-variable that loses the least e-power compared to $S_{r,\grow}$ for the `true' $r:= r(P)$ (i.e. such that $P \in \cH_{1,r}$), in the worst-case over all $r \in \cR$.
This is akin to what in the information theory literature is called a {\em minimax redundancy\/} approach \citep{ClarkeB94,grunwald2007minimum}. This approach is still hard to analyze in general but is amenable to the asymptotic analysis we provide in the next section. In a minor variation of this idea, used in an e-value context before by \cite{orabona2021tight,jang2023tighter}, we may consider the e-variable that loses the least e-power compared to $S_{\breve{r},\grow}$ for the $\breve{r}$ that is optimal with hindsight for the data at hand, i.e. achieving $\max_{r \in \cR} S_{r,\grow}$; thus $\breve{r}$ can be thought of as a maximum likelihood estimator. This is akin to what information theorists call {\em minimax individual-sequence regret\/} \citep{grunwald2007minimum}. This final approach {\em can\/} be analyzed nonasymptotically and leads to an analogue of the CSC theorem. We now formalize both the redundancy and regret approaches. Thus, suppose that $\meanspace_1$ is surrounding and {\em nice\/} as defined in the beginning of Section~\ref{sec:surrounding}, and let $\{ \meanspace_{1,r}: r \in \cR\}$ be a partition of $\meanspace_{1}$ (see Figure~\ref{fig:starA}). We will restrict attention to {\em nice\/} partitions, i.e. partitions such that \begin{equation}\label{eq:nicepart} \text{for all $r \in \cR$,\ $\meanspace_{1,r}$ is a closed convex subset of $\meanspace_1$ with $\meanspace_{1,r} \cap \bd(\meanspace_1^{\comp})\neq \emptyset$.} \end{equation} Let $\cH_{1,r} := \{P \in \cH_1: {\bf E}_{Y \sim P}[Y] \in \meanspace_{1,r}\}$. By strict convexity of $D(\Pmv_{\mu} \| P_0)$ in $\mu$ and the nice-ness condition, we have that $\min_{\mu \in \meanspace_{1,r} \cap \meanspace} D(\Pmv_{\mu} \| P_0)$ exists and is achieved uniquely for a point $\mu^*(r)$ on the boundary $\bd(\meanspace_{1,r}) \cap \bd(\meanspace_1^{\comp})$.
Every $r \in \cR$ is mapped to a point $f(r) \in \bd(\meanspace_1^{\comp})$ in this way, and the mapping $f: \cR \rightarrow \bd(\meanspace_1^{\comp})$ is injective since $\meanspace_{1,r} \cap \meanspace_{1,r'} = \emptyset$ for every $r \neq r'$ with $r, r' \in \cR$. Therefore we will simply {\em identify\/} $\cR$ with a subset of $\bd(\meanspace_1^{\comp})$, such that $f(\mu) = \mu$ for all $\mu \in \cR$. For $P \in \cH_1$ with ${\mathbb E}_P[Y] = \mu$ (so $\mu \in \meanspace_1$), we will now define $r(P)$ to be the $r \in \cR$ such that $P \in \cH_{1,r}$, i.e. such that $\mu \in \meanspace_{1,r}$. Note that we can think of $r(P)$ either as an index of sub-hypothesis $\cH_{1,r}$ or as a special boundary point of the space of mean-values $\meanspace_{1,r}$. If we were to test $\cH_0$ vs. $\cH_{1,r}$ for given $r$, then we would still like to use the GROW e-variable $S_{r,\grow} = \bar{p}_{r}/p_0$. In reality we do not know $r$, but we aim for an e-value that loses as little evidence as possible compared to $S_{r,\grow}$, in the worst-case over all $r$. Formally, we seek to find an e-variable $S= q(Y)/p_0(Y)$, where $q$ achieves \begin{align}\label{eq:redundancy} & \sup_q \inf_{P \in \cH_1}\ {\bf E}_{Y \sim P} \left[ \log \frac{q(Y)}{p_0(Y)} - \log \frac{\pmv_{r(P)}(Y)}{p_0(Y)} \ \right]= \sup_q \ \inf_{P \in \cH_1}\ {\bf E}_{Y \sim P} \left[ \log \frac{q(Y)}{\pmv_{r(P)}(Y)} \ \right] \nonumber \\ & = - \mmred(\cH_1) \text{\ where\ } \mmred(\cH_1) := \inf_q \sup_{P \in \cH_1} \left( D(P \|Q) - D(P \| \bar{P}_{r(P)}) \right), \end{align} where the supremum is over all probability densities on $Y$, $Q$ denotes the distribution with density $q$, and $r(P)$ is again the unique $r \in \cR \subset \meanspace_1$ such that $P \in \cH_{1,r}$. $\mmred(\cH)$ is easily shown to be nonnegative for any $\cH$, and both equalities in (\ref{eq:redundancy}) are immediate.
From the rightmost expression, information theorists will recognize the $q$ as minimizing the maximum {\em redundancy\/} \citep{CoverT91,ClarkeB94,takeuchi2013asymptotically}: the worst-case additional mean number of bits needed to encode the data by an encoder who only knows that $P \in \cH_1$ compared to an encoder with the additional knowledge that $P \in \cH_{1,r}$. As said, it is easier to analyze a slight variation of this approach which makes at least as much sense: rather than comparing ourselves to the inherently unknowable $r(P)$, we may consider the actually observed data $Y=y$ and compare ourselves to (i.e. try to obtain as much evidence against $P_0$ as possible compared to) the $\breve{r}(y)$ we would like to have used with hindsight, after seeing and in light of data $y$; and rather than optimizing an expectation under an imagined distribution which we will never fully identify anyway, we will optimize in the worst-case over all data. The setup works for general functions $\breve{r}$, indicating what $r\in \cR$ we would have liked to use with hindsight; further below we discuss intuitive choices. We note that $\breve{r}$ maps data $y \in \cY$ to a point in $\cR \subset \bd(\meanspace_1^{\comp}) \subset \meanspace_1$, so we can think of $\breve{r}$ as an {\em estimator\/} of the parameter $\mu$; however, the estimator is restricted to a small subset of $\meanspace$, namely the set $\cR$.
Thus, we now seek to find an e-variable $\Srel= q(Y)/p_0(Y)$, where $q$ now achieves \begin{align}\label{eq:shtarkov} & \sup_q \inf_{y \in \cY}\ \left( \log \frac{q(y)}{p_0(y)} - \log \frac{\pmv_{\breve{r}(y)}(y)}{p_0(y)} \ \right)= \sup_q \inf_{y \in \cY}\ \left( \log \frac{q(y)}{\pmv_{\breve{r}(y)}(y)} \ \right) \nonumber\\ & = - \mmreg(\breve{r}) \text{\ where\ } \mmreg(\breve{r}) := \inf_q \sup_{y \in \cY}\ \left( - \log \frac{q(y)}{\pmv_{\breve{r}(y)}(y)} \ \right), \end{align} where again the supremum is over all probability densities that can be defined on $Y$, the quantity $\mmreg(\breve{r})$ is easily seen to be nonnegative regardless of how $\breve{r}$ is defined, and both equalities in (\ref{eq:shtarkov}) are immediate. From the rightmost expression, information theorists will recognize the optimizing $q$ as the $q$ minimizing {\em individual sequence regret\/}: it minimizes the `regret' in terms of the number of bits needed to encode the data, in the worst-case over all sequences, compared to somebody who has seen $\breve{r}(y)$ in advance; the word `regret' is also meaningful in our setting --- the aim is to minimize regret in the sense of losing as little evidence as possible compared to the largest attainable e-value (evidence) with hindsight. In information theory, neither the minimax redundancy nor the minimax individual sequence regret is considered inherently superior or more natural, and (as we shall also see in our context, in the next section) both quantities often behave similarly.
Indeed, (\ref{eq:shtarkov}) being a variation of a standard problem within information theory and sequential prediction with the logarithmic loss, it is well-known \citep{BarronRY98,CesaBianchiL06,grunwald2007minimum,GrunwaldM19} that the solution for $q$ is uniquely achieved by the following variation of the {\em Shtarkov distribution}, a notion going back to \cite{Shtarkov87}: \begin{equation}\label{eq:shtarkovb} q_{\shtarkov,\breve{r}}(y) = \frac{\pmv_{\breve{r}(y)}(y)} {\int_y \pmv_{\breve{r}(y)}(y)\rho(dy)} \ \text{so}\ \Srel = \frac{q_{\shtarkov,\breve{r}}(Y)}{p_0(Y)}. \end{equation} We then get that $ - \log \frac{q_{\shtarkov,\breve{r}}(y)}{\pmv_{\breve{r}(y)}(y)} = \log \int_y \pmv_{\breve{r}(y)}(y)\rho(dy) = \mmreg(\breve{r}) $ independently of $y$, where $\mmreg(\breve{r})$ is the {\em maximin regret\/} (called `minimax regret' originally, since in data compression the rightmost expression, without the minus sign, is the relevant one). The most straightforward choice is to take $\breve{r}(y):= \hat{r}(y)$, the maximum likelihood estimator within $\cR$, achieving \begin{equation}\label{eq:mle} \max_{r \in \cR} \bar{p}_{r}(y) \end{equation} for the given $y$, since then $q_{\shtarkov,\hat{r}}$ has minimal overhead compared to the $S_{r,\grow}$ that is largest with hindsight, i.e. that provides the most evidence with hindsight --- thus providing additional justification for the terminology `regret' in this special case, which was also Shtarkov's (\citeyear{Shtarkov87}) original focus. If $\breve{r}$ is set to $\hat{r}$, then $Q_{\shtarkov,\hat{r}}$ is also known as the {\em normalized maximum likelihood (NML)\/} distribution relative to the set $\cR \subset \bd(\meanspace_1^{\comp})$ (not relative to the full exponential family $\cE$!). In the ensuing results we will mostly be interested in the case that $\cR$ is either finite (as in Figure~\ref{fig:starA}), or that it is in 1-to-1 correspondence with $\bd(\meanspace^{\comp}_1)$ (as in Figure~\ref{fig:starB}).
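For a finite $\cR$, the NML construction (\ref{eq:shtarkovb}) can be computed exactly. The following sketch is our own toy illustration, not part of the formal development; the binomial family, $n=10$ and the two-point $\cR = \{0.2, 0.8\}$ are arbitrary choices. It builds $q_{\shtarkov,\hat{r}}$ and checks that the per-outcome regret is constant and nonnegative:

```python
from math import comb, log

# NML / Shtarkov construction for a finite R, with breve-r = r-hat the MLE
# within R. All concrete choices below are toy illustrations.
n = 10
R = [0.2, 0.8]

def binom_pmf(p, y):
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

# p-bar_{r-hat(y)}(y): for each outcome, keep only the best-fitting r in R
best = [max(binom_pmf(r, y) for r in R) for y in range(n + 1)]
Z = sum(best)                  # Shtarkov normalizer
q_nml = [b / Z for b in best]  # the NML distribution
mmreg = log(Z)                 # maximin regret

assert abs(sum(q_nml) - 1) < 1e-12   # q_nml is a probability distribution
assert mmreg >= 0                    # the regret is nonnegative
# the regret log(p-bar_{r-hat(y)}(y) / q(y)) is the same for every outcome y:
regrets = [log(b / q) for b, q in zip(best, q_nml)]
assert max(regrets) - min(regrets) < 1e-9
```

The normalizer $Z$ exceeds $1$ precisely because for each $y$ only the best-fitting $r$ is kept, so $\log Z \geq 0$.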
In both cases, the maximum in (\ref{eq:mle}) is achieved, although it may be achieved for more than one $r$; in that case, we set $\hat{r}$ to be the largest $r$ achieving (\ref{eq:mle}) in lexicographical ordering, making $\hat{r}$ well-defined in all cases. While the upcoming analogue of the CSC theorem will mention the MLE $\hat{r}$, it turns out that in the proof, and in the detailed theorem statement, we also need to refer to an estimator $\breve{r}$ that may differ from the MLE. The reason is that, intriguingly, in general for some $r \in \cR$ and $y \in \meanspace_{1,r}$ we may have that $\hat{r}(y) \neq r$, which complicates the picture. To this end, we formulated the minimax regret approach for general $\breve{r}: \conv(\cY) \rightarrow {\cal R}$, and not just the MLE, as was done earlier e.g. by \cite{GrunwaldM19}. \begin{example}{\bf [Gaussian Example]} {\rm We use the simple 2-dimensional Gaussian case to gain intuition. Thus, we let $\cY = \conv(\cY) = \reals^2$, and let $P_0$ be a 2-dimensional Gaussian distribution on $Y$ with mean $0$ (i.e., $(0,0)$) and positive definite $2 \times 2$ covariance matrix $\Sigma$. Then $\meanspace = \reals^2$ and $\cE$ is a 2-dimensional Gaussian location family. For now, take $\Sigma$ to be the identity matrix. Then $D(P_0 \| P_{\mu}) = (1/2) \| \mu \|_2^2$ is simply half the squared Euclidean norm of $\mu$, facilitating the reasoning. A simple case of a convex $\meanspace_1$ is the translated half-space $\meanspace_1 = \{ (y_1,y_2): y_1 \geq a\}$ for a constant $a > 0$. The point $\mu \in \meanspace_1$ minimizing KL divergence to $P_0$ must then clearly be $(a,0)$. Therefore, if $\cH_1$ is any set of distributions with means in $\meanspace_1$ and containing $\cE_1^{\bd} = \{ \bar{P}_{\mu}: \mu \in \bd(\meanspace_1^{\comp})\}$, we have by Proposition~\ref{prop:exp} that $S_{\grow} = \bar{p}_{(a,0)}(Y)/\bar{p}_{(0,0)}(Y)$.
We see that even if we have a convex $\cH_1$, the minimax individual sequence regret approach leads to a different e-variable if we carve up $\cH_1$ into $\{\cH_{1,r}: r \in \cR\}$ for $\cR$ with more than one element. For example, we can take $\cR= \{(a,\mu_2): \mu_2 \in \reals\}$ to be a vertical line and let $\meanspace_{1,(a,\mu_2)}$ be the subset of $\meanspace_1$ on the line connecting $(0,0)$ with $(a,\mu_2)$. Then, with $\hat{r}$ the MLE as in (\ref{eq:mle}), we get that $\hat{r}(y)$ is the point where the line $\cR$ intersects with the line connecting $(0,0)$ and $y$. Since now $\hat{r}(y)$, and hence the sub-hypothesis $\cH_{1,\hat{r}(y)}$ we want to be almost GROW against, changes with $y$, we get a solution $\Srel$ in (\ref{eq:shtarkovb}) that differs from $S_{\grow}$. In the case of convex $\cH_1$, whether to use the absolute or relative GROW e-variable may depend on the situation (see \cite{GrunwaldHK19} for a motivation of when absolute or relative is more appropriate). In the case of nonconvex $\cH_1$ with $d > 1$, we simply do not know how to characterize the absolute GROW and we have to resort to the relative GROW. } \end{example} \subsection{CSC Theorem for surrounding $\cH_1$}\label{sec:inequality_theorem} For given estimator $\breve{r}$ and probability density $q$ on $Y$, define the {\em regret\/} $\reg(q,\breve{r},y)$ and its supremum $\mreg(q,\breve{r})$ as \begin{equation}\label{eq:regret} \reg(q,\breve{r},y) := \log \left( \frac{\pmv_{\breve{r}(y)}(y)}{q(y)} \right) \ ; \ \mreg(q,\breve{r}) := \sup_{y: y \in \meanspace_1} \log \left( \frac{\pmv_{\breve{r}(y)}(y)}{q(y)} \right). \end{equation} Whereas above we discussed the MLE choice $\breve{r}=\hat{r}$, we now only require $\breve{r}$ to be self-consistent, i.e. we set it to be any function of $y$ such that for all $y \in \meanspace_1$, we have $y \in \meanspace_{1,\breve{r}(y)}$. The value of $\breve{r}(y)$ for $y \in \conv(\cY) \setminus \meanspace_1$ will not affect the result below.
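In the Gaussian example above, the MLE $\hat{r}$ within the vertical line $\cR$ has a closed form and is automatically self-consistent. A minimal sketch (our own illustration; the value $a=1$ and the test points are arbitrary choices):

```python
# Self-consistency of the MLE r-hat for the ray partition in the Gaussian
# example: meanspace_1 = {y : y1 >= a}, R = {(a, m2) : m2 real}, and
# M_{1,(a,m2)} is the part of meanspace_1 on the ray from the origin
# through (a, m2).  (Toy illustration.)
a = 1.0

def r_hat(y):
    """Intersection of the ray from 0 through y with the vertical line y1 = a."""
    y1, y2 = y
    return (a, a * y2 / y1)  # well-defined since y1 >= a > 0 on meanspace_1

for y in [(1.5, 0.3), (2.0, -1.0), (1.0, 4.0)]:
    r = r_hat(y)
    alpha = y[0] / r[0]                     # multiplier along the ray
    assert r[0] == a                        # r lies on the line R
    assert alpha >= 1                       # y is at least as far out as r
    assert abs(alpha * r[1] - y[1]) < 1e-9  # same ray, so y is in M_{1, r_hat(y)}
```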
\begin{theorem}{\rm \bf [CSC Theorem for surrounding $\cH_1$]} \label{thm:nml} Suppose that $\meanspace_1$ is nice and let $ \{ \meanspace_{1,r}: r \in \cR \}$ be any nice partition of $\meanspace_1$ as in (\ref{eq:nicepart}), with $\breve{r}$ any self-consistent estimator as above. Let $q$ be an arbitrary probability density function. Then: $$ {\mathbb E}_{P_{0}}\left[ {{\bf 1}_{Y \in \meanspace_1}}\cdot e^{D(\Pmv_{\breve{r}(Y)} \| P_0) - \reg(q,\breve{r},Y)}\right] \leq 1, $$ so that in particular, with $\underline{D}:= \inf_{\mu \in \bd(\meanspace_1^{\comp})} D(P_{\mu}\| P_{0})$, \begin{equation}\label{eq:mlesharpbound} P_{0}\left( Y\in \meanspace_1 \right)\leq e^{\mreg(q,\breve{r}) - \underline{D}} \overset{(*)}{=} e^{\mmreg(\breve{r})- \underline{D}} \overset{(**)}{\leq} e^{\mmreg(\hat{r})- \underline{D}} , \end{equation} where $(*)$ holds if we take $q= q_{\shtarkov,\breve{r}}$, and $(**)$ holds if the MLE estimator $\hat{r}$ is well-defined. \end{theorem} \begin{proof} Following precisely analogous steps as in the proof of (\ref{eq:subset}) as based on (\ref{eq:meanimp}) within the proof of Theorem~\ref{thm:old}, we obtain, for all $y \in \conv(\cY)$, \begin{equation}\label{eq:subsetb} y \in \meanspace_1 \Rightarrow \frac{\pmv_{\breve{r}(y)}(y)}{p_0(y)} \geq e^{D(\Pmv_{\breve{r}(y)} \| P_0)}. \end{equation} Then (\ref{eq:subsetb}) gives, using definition (\ref{eq:regret}): \begin{align*} & {\mathbb E}_{P_{0}}\left[ {{\bf 1}_{Y \in \meanspace_1}}\cdot {e^{D(\Pmv_{\breve{r}(Y)} \| P_0) - \reg(q,\breve{r},Y)}}\right] \leq {\mathbb E}_{P_{0}}\left[ {{\bf 1}_{Y \in \meanspace_1}}\cdot \frac{\pmv_{\breve{r}(Y)}(Y)}{p_0(Y)} \cdot {e^{- \reg(q,\breve{r},Y)}}\right] = \\ & {\mathbb E}_{P_{0}}\left[ {{\bf 1}_{Y \in \meanspace_1}}\cdot \frac{q(Y)}{p_0(Y)} \cdot e^{ \reg(q,\breve{r},Y)} \cdot {e^{- \reg(q,\breve{r},Y)}}\right] \leq {\mathbb E}_{P_{0}}\left[ \frac{q(Y)}{p_0(Y)} \right]= 1, \end{align*} which proves the first statement in the theorem.
The second statement is then immediate. \commentout{Combining the two displays, we have: \begin{align*} P_{\theta_0}\left( \hat\theta \in \Theta_1 \right) &\leq P_{\theta_0}\left( {\bf E}_{\hat\theta} \left[ \log \frac{p_{g(\hat\theta)}(Y)}{p_{\theta_0}(Y)} \right] \geq D(P_{g(\hat\theta)} \| P_{\theta_0}) \right) \\ &\leq P_{\theta_0}\left( {\bf E}_{\hat\theta} \left[ \log \frac{p_{g(\hat\theta)}(Y)}{p_{\theta_0}(Y)} \right] \geq \underline{D} \right) \\ &\stackrel{(a)}{=} P_{\theta_0} \left( \log \frac{p_{g(\hat\theta)}({Y})}{p_{\theta_0}({Y})} \geq \underline{D} \right) \end{align*} where $(a)$ holds by (\ref{eq:robustnessb}). Then proceeding from above, \begin{align*} P_{\theta_0}\left( \hat\theta \in \Theta_1 \right) &\leq P_{\theta_0}\left( \log \frac{q({Y})}{p_{\theta_0}({Y})} + \reg(q) \geq \underline{D} \right) \\ &= P_{\theta_0}\left( \frac{q({Y})}{p_{\theta_0}({Y})} \geq e^{-\reg(q) + \underline{D}} \right) \\ &\stackrel{(b)}{\leq} {\bf E}_{\theta_0} \left[ \frac{q({Y})}{p_{\theta_0}({Y})}\right] \cdot e^{\reg(q) -\underline{D}}= e^{\reg(q) - \underline{D}} \end{align*} where $(b)$ holds by Markov's inequality. We have thus shown, in analogy to Theorem~\ref{thm:old}, Part 2: } \end{proof} \paragraph{Analyzing the CSC Result --- Different Partitions $\{\meanspace_{1,r}: r \in \cR\}$} The next question is how to cleverly partition any given nonconvex $\meanspace_1$ so as to get a good bound when applying Theorem~\ref{thm:nml}. We first note that, for any given $\meanspace_1$, the final bound (\ref{eq:mlesharpbound}) does not worsen if we enlarge $\meanspace_1$, as long as $\underline{D}= \inf_{\mu \in \bd(\meanspace_1^{\comp})} D(P_{\mu}\| P_{0})$ stays the same. Thus, we still get the same bound if we shrink the complement $\meanspace_1^{\comp}$ to the $\underline{D}$-KL ball $\{ \mu: D(\Pmv_{\mu} \|P_0) < \underline{D} \}$, without making the bound looser. 
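Before discussing how to partition $\meanspace_1$, we note that the first inequality of Theorem~\ref{thm:nml} can be checked numerically. The following sketch is our own illustration, not taken from the text: it uses a one-dimensional standard-normal null, $\meanspace_1 = \{y: |y| \geq a\}$, the two-point $\cR = \{-a,a\}$ with self-consistent $\breve{r}(y) = \sgn(y)\,a$, and $q$ an equal mixture of $N(-a,1)$ and $N(a,1)$; all numerical values are arbitrary choices.

```python
from math import exp, log, pi, sqrt

# Numerical check of E_{P0}[ 1{Y in M1} * exp(D(P_{r(Y)}||P0) - reg(q,r,Y)) ] <= 1
# in a 1-dimensional toy case: P0 = N(0,1), M1 = {y : |y| >= a},
# R = {-a, +a}, r(y) = sign(y)*a, q = equal mixture of N(-a,1) and N(a,1).
a = 0.8
D = a * a / 2  # D(N(mu,1) || N(0,1)) = mu^2 / 2 for mu = +-a

def phi(y, m=0.0):
    """Density of N(m, 1) at y."""
    return exp(-(y - m) ** 2 / 2) / sqrt(2 * pi)

def integrand(y):
    r = a if y >= 0 else -a                  # self-consistent estimator
    q = 0.5 * (phi(y, -a) + phi(y, a))       # mixture density
    reg = log(phi(y, r) / q)                 # reg(q, r, y)
    return exp(D - reg) * phi(y)             # weighted by the null density

# Riemann sum over {|y| >= a}, truncated at |y| = 10
h = 1e-3
total = sum(integrand(i * h) * h for i in range(-10000, 10001) if abs(i * h) >= a)
assert total <= 1.0
```

The sum comes out well below $1$, as the proof predicts: inside $\meanspace_1$ the integrand is pointwise at most $q(y)$ by (\ref{eq:subsetb}).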
We will therefore, from now on, simply assume that we are in the situation in which $\meanspace_1^{\comp}$ is the $\underline{D}$-KL ball. We know that such a KL ball is convex \citep{Brown86} --- with such a convex $\meanspace_1^{\comp}$ we are thus in the `dual case', as it were, to the case of convex $\meanspace_1$ which we discussed in Section~\ref{sec:convexm1}. There are now basically two approaches to apply the CSC Theorem~\ref{thm:nml} that suggest themselves. In the first approach, we first determine a larger $\meanspace'_1$ (hence smaller $\meanspace^{'\comp}_1$) that contains $\meanspace_1$, such that $\meanspace'_1$ can be partitioned into a finite number $|\cR'|$ of convex subsets, $\{ \meanspace'_{1,r}: r \in \cR' \}$, and then we apply Theorem~\ref{thm:nml} to $\meanspace'_1$ and $\cR'$. We could, for example, take $\meanspace_1^{'\comp}$ to be a convex polytope with a finite number of corners, all touching $\bd(\meanspace_1^{\comp})$. Such a situation is depicted in Figure~\ref{fig:starA}, if we interpret the dashed curve as the boundary of a KL ball and the $\meanspace^{\comp}_1= \meanspace \setminus \meanspace_1$ in the Figure as the polytope $\meanspace_1^{'\comp}$. \commentout{ \begin{example}{\bf [Gaussian Example, Continued]} {\rm Consider the 2-dimensional Gaussian case, with identity covariance matrix. Then the $\underline{D}$-KL ball coincides with the Euclidean ball of radius $\underline{D}^{1/2}$ around $0$ and $\bd(\meanspace_1^{\comp})$ is a circle. In the first approach, we could take any $\meanspace'_1$ such that $\meanspace_1^{'\comp}$ is a regular polygon with $k \geq 3$ corners that is circumscribed by the circle, i.e. it touches the circle at its corners. Then we would take an $\cR$ with $k$ elements, one for each edge.
We set each $r \in \cR$ equal to a midpoint of the corresponding edge, set $\meanspace_{r}$ equal to the cone from $0$ with sides going through the endpoints of this edge, and $\meanspace_{1,r}:= \meanspace_1 \cap \meanspace_r$. Then, for any $P \in \cH_1$ with ${\mathbb E}_P[Y]= \mu$, we have $r(P)$ is equal to the point in the cone containing $\mu$ that minimizes both Euclidean distance and KL divergence to $0$. } \end{example}} In the second approach, we set $\cR= \bd(\meanspace_1^{\comp})$, making it a manifold in $\reals^d$, and set $$ \meanspace_{1,r} := \{ \mu \in \meanspace_1: \alpha \mu = r \text{\ for some\ } \alpha > 0 \},$$ i.e. the set of points in $\meanspace_1$ on the ray starting at $0$ and going through $r$. Then we have for all $y \in \bd(\meanspace_1^{\comp})$ that $r(y) = y$. We may think of this second approach as a limiting case of the first one, when we let the number of corners of the polytope go to infinity. In the next section we show that, if we apply the CSC Theorem~\ref{thm:nml} in our main case of interest, with $Y=n^{-1} \sum_{i=1}^n X_i$ and $\meanspace_1^{\comp}$ a KL ball, then for large $n$, the second, `continuous' approach always leads to better bounds than the first. \section{Asymptotic expression of growth rate and regret}\label{sec:asymptotic_growth_rate} While the exact sizes of $\mmred(\cH_1)$ and $\mmreg(\breve{r})$ are hard to determine, for the case of nice $\meanspace_1^{\comp}$ and our central case of interest, with $Y= n^{-1} \sum X_i$, we can use existing results to obtain relatively sharp asymptotic (in $n$) approximations of $\mmred(\cH_1)$ and $\mmreg(\breve{r})$'s upper bound $\mmreg(\hat{r})$. We now derive these approximations and show how they, in turn, lead to an approximation to $\grow$ if moreover $\meanspace_1^{\comp}$ is a KL ball, which as explained above, is also the case of central interest. Thus, we now assume that $Y := n^{-1} \sum X_i$, with $X, X_1, X_2, \ldots$ i.i.d.
$\sim P'_0$, where $P'_0$ is a distribution on $X$, inducing distribution $P_0 \equiv P_0^{[Y]}$ for $Y$. Like $P_0$, we have that $P'_0$ also generates an exponential family, now with sufficient statistic $X$ and densities \begin{equation}\label{eq:trump} p'_{\theta}(x) \propto e^{\theta^{\top}x } p'_0(x) \end{equation} extended to $n$ outcomes by assuming independence. Now ${\mathbb E}_{P'_{\theta}} [Y]= n^{-1} {\mathbb E}_{P'_\theta} [\sum_{i=1}^n X_i] = {\mathbb E}_{P'_{\theta}} [X] $ so that the mapping $\mu(\theta)$ from canonical to mean-value parameter is the same for both the family (\ref{eq:trump}) and the original family $\{P_{\theta}(Y): \theta \in \Theta \}$, and the likelihood ratio of any member $P'_{\theta}$ of the family (\ref{eq:trump}) to $P'_0$ on $n$ outcomes is given by, with $\mu = \mu(\theta)$, $$ \prod_{i=1}^n \frac{\bar{p}'_{\mu}(X_i)}{p'_0(X_i)} = \frac{p'_{\theta}(X^n)}{p'_0(X^n)} = \frac{p_{\theta} (Y)}{p_0(Y)} = \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}, $$ which in turn implies that $D(\bar{P}_{\mu} \| P_0) = n D(\bar{P}'_{\mu} \| P'_0 )$, where $ D(\bar{P}'_{\mu} \| P'_0 )$ is an expression that does not change with $n$. This means that if we keep $\meanspace_1$ constant, the $\underline{D}= \inf_{\mu \in \meanspace_1} D(\bar{P}_{\mu} \| P_0)$ in (\ref{eq:mlesharpbound}) increases linearly in $n$. To avoid confusion here, it is useful to make explicit the dependency of $\mmreg$ and $\underline{D}$ on $n$ in Theorem~\ref{thm:nml}, by writing it as $\mmreg_n$ and $\underline{D}_n$: we can then restate the bound (\ref{eq:mlesharpbound}) in the theorem as \begin{equation}\label{eq:mlesharpboundB} P_{0}\left( Y\in \meanspace_1 \right)\leq e^{\mmreg_n(\hat{r})- \underline{D}_n}= e^{\mmreg_n(\hat{r})- n \underline{D}_1}. \end{equation} Thus, as $n$ increases, if we keep $\meanspace_1$ fixed, then the quantity $\underline{D}_n$ in the bound increases linearly in $n$. 
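The identity $D(\bar{P}_{\mu} \| P_0) = n D(\bar{P}'_{\mu} \| P'_0)$, and hence the linear growth of $\underline{D}_n$ in $n$, can be verified directly in the Bernoulli case, where the distribution of $n Y = \sum X_i$ is binomial. A small sketch (our own illustration; the parameter values are arbitrary):

```python
from math import comb, log

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def kl_binom(n, p, q):
    """KL divergence between Binomial(n,p) and Binomial(n,q), i.e. between
    the induced distributions of Y = n^{-1} * sum_{i=1}^n X_i."""
    total = 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p ** k * (1 - p) ** (n - k)
        qk = comb(n, k) * q ** k * (1 - q) ** (n - k)
        total += pk * log(pk / qk)
    return total

p1, p0 = 0.7, 0.4
for n in (1, 5, 20):
    # the KL divergence on n outcomes is exactly n times the per-outcome one
    assert abs(kl_binom(n, p1, p0) - n * kl_bern(p1, p0)) < 1e-9
```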
On the other hand, we will now show that, for sufficiently smooth boundaries of $\meanspace_1$, the quantity $\mmreg_n(\hat{r})$ only increases logarithmically in $n$, so that the strength of bound (\ref{eq:mlesharpbound}) still grows exponentially in $n$. The result is a direct corollary of a result by \cite{takeuchi2013asymptotically}. \begin{theorem}\label{thm:TakeuchiB}{\bf [Corollary of Theorem 2 of \cite{takeuchi2013asymptotically}; see also \cite{takeuchi1997asymptotically}]} Consider the setting of surrounding $\meanspace_1$ and suppose that $\meanspace_1$ is {\em nice}, as defined in the beginning of Section~\ref{sec:surrounding}, and that there exists a bijective function $\phi: \cU \rightarrow \bd(\meanspace_1^{\comp})$, where $\cU$ is a subset of $\reals^{d-1}$ with open interior and $\phi$ has at least four derivatives, all bounded on $\cU$. Then $\hat{r}$ is well-defined and there exists $C > 0$ such that for all $n$, $$\mmreg_n(\hat{r}) \leq \frac{d-1}{2} \log n + C.$$ \end{theorem} We also have a bound on the minimax redundancy for the case that $\cH_1$ contains $\cE_1^{\bd}$, the subset of the exponential family $\cE$ restricted to the boundary $\bd(\meanspace_1^{\comp})$: \begin{theorem}\label{thm:ClarkeB}{\bf [Corollary of Theorem 1 of \cite{ClarkeB94}]} Consider the setting and conditions of the previous theorem. Let $\cE_1^{\bd} := \{ \bar{P}_{\mu}: \mu \in \bd(\meanspace_1^{\comp}) \cap \meanspace \}$. Then there exists $C' < 0$ such that for all $n$, $$\mmred_n(\cE_1^{\bd}) = \inf_q \sup_{ \mu \in \bd(\meanspace_1^{\comp})} D(\bar{P}_{\mu} \|Q) \geq \frac{d-1}{2} \log n + C' .$$ \end{theorem} \begin{quote} To see how this result follows from Clarke and Barron's, note that we need to verify the regularity Conditions 1--3 in Section 2 of their paper. The assumption of niceness implies that $\bd(\meanspace_1^{\comp})$ is contained in a compact subset $\meanspace'$ of the interior of $\meanspace$.
Then also the corresponding natural parameters are contained in a compact subset $\Theta'$ of the interior of $\Theta$. Condition 1 of their paper is immediately verified for parameters in the natural parameterization $\Theta$. Since the functions $\theta: \meanspace \rightarrow \Theta$ and $\phi: \cU \rightarrow \meanspace$ and their first and second derivatives are themselves bounded on $\cU$ and $\meanspace'$, Condition 1 of their paper is verified in terms of the relevant parameterization $\bar{P}_{\phi(u)}$ as well. Moreover, for all $k \in \naturals$, all partial derivatives of the form $\partial^k/(\partial u_{j_1} \ldots \partial u_{j_k}) \int \bar{p}_{\phi(u)}(y) d \rho(y)$, with $j_1, \ldots, j_k \in \{1, \ldots, d-1\}$, can be calculated by exchanging differentiation and integration (this follows from \cite[Theorem 2.2]{Brown86}). Since we already established \cite{ClarkeB94}'s Condition 1, this implies that their Condition 2 also holds, as they explain underneath that condition. Their Condition 3 is immediate. \end{quote} Now, for any $\cH_1$ that contains $\cE_1^{\bd}$, Theorem~\ref{thm:ClarkeB} implies that \begin{equation}\label{eq:eleven} \mmred_n(\cH_1) \geq \mmred_n(\cE_1^{\bd}) \geq \frac{d-1}{2} \log n + C', \end{equation} and it is also immediate, by definition of $\hat{r}$, that \begin{equation}\label{eq:twelve} \mmreg_n(\hat{r}) \geq \mmred_n(\cH_1). \end{equation} Together with (\ref{eq:eleven}) and (\ref{eq:twelve}), Theorem~\ref{thm:TakeuchiB} above now gives that, under the assumptions of both theorems, \begin{align}\label{eq:dminusone} \mmred_n(\cH_1) = \mmreg_n(\hat{r}) + O(1) = \frac{d-1}{2} \log n + O(1). \end{align} \paragraph{\bf How to Partition $\meanspace_1$, Continued} We now restrict to the case that $\meanspace_1^{\comp}$ is a $\underline{D}_1$-KL ball. At the end of the previous section we explained why this is the major case of interest. As above, we want to keep $\meanspace_1^{\comp}$ fixed as $n$ increases, i.e.
the set of mean-value parameters in $\meanspace_1$ does not change with $n$. This means that, when viewed as a KL ball of distributions on $X$, the radius of the ball remains constant with $n$, but when viewed as a KL ball of distributions on $Y$, the radius of the ball must scale linearly with $n$, i.e. we set: \begin{equation}\label{eq:klball} \meanspace_1^{\comp} = \{ \mu: D(\bar{P}'_{\mu} \| P'_0) < \underline{D}_1 \} = \{ \mu: D(\bar{P}_{\mu} \| P_0) < n \underline{D}_1 \}.\end{equation} We now return to the two approaches to applying the CSC Theorem~\ref{thm:nml} which we discussed at the end of the previous section: one based on a finite $\cR$ defining a polytope, one based on a `continuous' $\cR = \bd(\meanspace_1^{\comp})$. It turns out that if $\meanspace_1$ is defined in terms of a KL ball (\ref{eq:klball}), then, for large $n$, it is always better to take the continuous $\cR$ approach. To see this, suppose that, in the polytope approach, we take a polytope $\meanspace_1^{'\comp}$ with $k$ corners (e.g., $k=5$ in Figure~\ref{fig:starA}); let $\breve{r}_{k}$ be the corresponding estimator, let $\hat{r}_{k}$ be the corresponding MLE in $\cR$, and let $\underline{D}'_{1,k} < \underline{D}_1$ (Figure~\ref{fig:starA} indicates why the inequality holds) be the minimum KL divergence we then obtain in (\ref{eq:mlesharpbound}) when replacing $\meanspace_1^{\comp}$ by $\meanspace_1^{'\comp}$, applied for $n=1$; similarly we let $\breve{r}_{\infty}$ be the corresponding estimator for the second, continuous approach, $\hat{r}_{\infty}$ be the corresponding MLE in $\cR = \bd(\meanspace_1^{\comp})$, and $\underline{D}'_{1,\infty} = \underline{D}_1$ be the corresponding minimum KL divergence appearing in the bound.
Then the rightmost bounds in (\ref{eq:mlesharpbound}) will, respectively, look like $$ e^{\mmreg_n(\breve{r}_k) - n \underline{D}'_{1,k}} \leq e^{\mmreg_n(\hat{r}_k) - n \underline{D}'_{1,k}} \text{\ vs.\ } e^{\mmreg_n(\breve{r}_{\infty}) - n \underline{D}_{1}} \leq e^{\mmreg_n(\hat{r}_{\infty}) - n \underline{D}_1}. $$ Since $\mmreg_n(\breve{r}_k)$ is always nonnegative, no matter the definition of $\breve{r}$ and the values of $k$ and $n$, while $\mmreg_n(\hat{r}_{\infty})$ grows only logarithmically in $n$ and $\underline{D}'_{1,k} < \underline{D}_1$ for all finite $k$, the continuous approach based on $k=\infty$ provides bounds that are eventually exponentially better in $n$ than the bound based on any finite $k$. \paragraph{An Asymptotic GROW Result} In the KL ball setting, we can also say something about the asymptotic size of the worst-case optimal growth rate: \begin{proposition}\label{prop:scaling} In the KL ball setting above, let $\cH_1$ be any set of distributions that contains $\cE_1^{\bd}$ and with means contained in $\meanspace_1$, i.e. $\{{\mathbb E}_{P}[Y]: P \in \cH_1 \} \subset \meanspace_1$. Then the growth rate $\grow_n$ at sample size $n$ is given by $$ \grow_n = n \underline{D}_1 - \frac{d-1}{2} \log n + O(1).
$$ \end{proposition} \begin{proof} We have, with the supremum being taken over all probability densities $q$ for $Y$, \begin{align*} & \sup_q \inf_{P \in \cH_1} {\mathbb E}_{P}\left[ \log \frac{q(Y)}{p_0(Y)}\right] \leq \sup_q \inf_{\mu \in \bd(\meanspace_1^{\comp})} {\mathbb E}_{\bar{P}_{\mu}}\left[ \log \frac{q(Y)}{p_0(Y)}\right] = \\ & \sup_q \inf_{\mu \in \bd(\meanspace_1^{\comp})} \left( \; {\mathbb E}_{\bar{P}_{\mu}}\left[ \log \frac{q(Y)}{p_0(Y)} - \log \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}\right] + {\mathbb E}_{\bar{P}_{\mu}}\left[ \log \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}\right] \; \right) = \\ & \sup_q \inf_{\mu \in \bd(\meanspace_1^{\comp})} {\mathbb E}_{\bar{P}_{\mu}}\left[ \log \frac{q(Y)}{p_0(Y)} - \log \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}\right] + n \underline{D}_1 = - \frac{d-1}{2} \log n + O(1) + n \underline{D}_1, \end{align*} where the last equation follows from (\ref{eq:dminusone}). On the other hand, with $q_{\shtarkov,\hat{r}}$ defined as in (\ref{eq:shtarkovb}), we also have: \begin{align*} & \sup_q \inf_{P \in \cH_1} {\mathbb E}_{P}\left[ \log \frac{q(Y)}{p_0(Y)}\right] \geq \\ & \inf_{P \in \cH_1} {\mathbb E}_{P}\left[ \log \frac{q_{\shtarkov,\hat{r}}(Y)}{p_0(Y)}\right] = \inf_{\mu \in \meanspace_1} \inf_{P \in \cH_1: {\mathbb E}_P[Y] = \mu} {\mathbb E}_{P}\left[ \log \frac{\bar{p}_{\hat{r}(Y)}(Y)}{p_0(Y)}\right] - \mmreg_n(\hat{r}) \geq \\ & \inf_{\mu \in \meanspace_1} \inf_{P \in \cH_1: {\mathbb E}_P[Y] = \mu} {\mathbb E}_{P}\left[ \log \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}\right] - \mmreg_n(\hat{r}) \overset{*}{=} \\ & \inf_{\mu \in \meanspace_1} {\mathbb E}_{\bar{P}_{\mu}}\left[ \log \frac{\bar{p}_{\mu}(Y)}{p_0(Y)}\right] - \mmreg_n(\hat{r}) = \inf_{\mu \in \meanspace_1} D(\bar{P}_{\mu} \| P_0) - \mmreg_n(\hat{r}) = \\ & n \underline{D}_1 - \frac{d-1}{2} \log n + O(1), \end{align*} where $(*)$ follows because, for an exponential family, $\log (\bar{p}_{\mu}(y)/p_0(y))$ is affine in $y$, so that its expectation under $P$ depends on $P$ only through ${\mathbb E}_P[Y] = \mu$ and may thus be evaluated under $\bar{P}_{\mu}$.
Combining the two displays above, the result follows. \end{proof} \section{Proof of Lemma~\ref{lem:brown} and further discussion of Theorem~\ref{thm:main_new}} \label{sec:proof} To prove Lemma~\ref{lem:brown}, we first provide some background on {\em variation diminishing transformations\/} \citep{brown1981variation}. \begin{definition}\label{def:SignFunction} Let $\cX = \{x_1, \ldots, x_n \} $ be a finite subset of $\reals$ with $x_1 < x_2< \ldots < x_n$ and let $g: \cX \rightarrow \reals$ be a function, so that $(g(x_1), \ldots, g(x_n)) \in \mathbb{R}^n$. We let $S^-(g)$ denote the number of sign changes of the sequence $g(x_1), \ldots, g(x_n)$ where we ignore zeros; if $g$ is identically $0$ then we set $S^-(g)$ to $0$. \end{definition} \begin{example} If $g'(x_1,x_2,x_3) = (-1, 0, 1 )$, $g''(x_1,x_2,x_3) = (-2, 1, 4 )$ and $g'''(x_1,x_2,x_3,x_4) = (-2, 0, 0, 3 )$, then $S^-(g') = S^-(g'')= S^-(g''') =1$. \end{example} \begin{definition} Now consider arbitrary $\cX \subset \reals$ and let $g: \cX \rightarrow \reals$. For finite $\cV \subset \cX$, say $\cV= \{x_1, \ldots, x_n\}$ with $x_1 < \ldots < x_n$, we let $g_{\cV}= (g(x_1), \ldots, g(x_n))$. We let $S^-(g)$ be the supremum of $S^-(g_{\cV})$ over all finite subsets $\cV$ of $\cX$. \end{definition} Intuitively, $S^-(g)$ is the number of times that the function $g(x) $ changes sign as $x \in \cX \subset \reals$ increases. \begin{lemma}\label{lemma:Exp_SVR}{\bf \citep[Example 3.1, Proposition 3.1]{brown1981variation}} Let $P_0$ and $Y$ be as above (\ref{eq:expfam}), where $Y= Y_1$ is 1-dimensional, so $\cY \subseteq \reals$, and consider the 1-dimensional exponential family generated by $P_0$ as in (\ref{eq:expfam}). In the terminology of \cite{brown1981variation}, this family is {\em SVR}$_{n}(\reals,\Theta)$ and hence {\em SVR}$_{n}(\cY,\Theta)$ for all $n$.
Rather than giving the precise definition of SVR (`strict variation reducing'), we just state the implication of this fact that we need: for any function $g: \cY \rightarrow \reals$ with $\int |g| d \rho > 0$ and $\gamma: \Theta \rightarrow \reals$ with $\gamma(\theta) := \int p_{\theta}(y)g(y) \rho(dy)$, we have: $S^{-}(\gamma) \leq S^{-}(g)$. \end{lemma} In words, for any function $g$ as above, the number of sign changes of ${\bf E}_{P_{\theta}}[g(Y)]$ as we vary $\theta$ is bounded by the number of sign changes of $g$ itself on $y\in \cY \subset \reals$. Since, in one-dimensional full exponential families, $\mu(\theta)$ is a continuous, strictly increasing function of $\theta$, this also implies that, for any function $g$, the expectation $\gamma(\mu) := {\bf E}_{\bar{P}_{\mu}}[g(Y)]$ also satisfies $S^{-}(\gamma) \leq S^{-}(g)$. Now for any constant $c \in \reals$ and any $w^{\circ} \in [0,1]$, we set $g_c(y) = c+ \log \frac{ (1- w^{\circ}) \bar{p}_{\mu_1^-}(y)+ w^{\circ} \bar{p}_{\mu_1^+}(y)}{p_0(y)}$. A little calculation of the derivatives shows that $g_c(y)$ is strictly convex on $\conv(\cY)$ and not monotonic. Therefore, $g_c(y)$ has exactly one minimum, achieved at some point $y^* \in \conv(\cY)$, and is strictly monotonic on both sides of $y^*$. Thus, $g_c(y)$ as a function of $y \in \conv(\cY)$ changes sign at most twice; $g_c(y)$'s domain being restricted to $\cY$ (which is not the same as $\conv(\cY)$ in the discrete case), it also changes sign at most twice there. Thus, for all $c \in \reals$, we have $S^-(g_c) \leq 2$. Lemma \ref{lemma:Exp_SVR} thus implies that $S^-(\gamma_c) \leq 2$, where $\gamma_c(\mu) := f(\mu,w^{\circ})+c$ and $f(\mu,w)$ is defined as in (\ref{eq:brenda}). Therefore, we know that $\gamma_0(\mu) = f(\mu,w^{\circ})$ as a function of $\mu$ can have at most one minimum, achieved at some $\mu^*$, and if it has such a minimum, it must be strictly monotonic on both sides of $\mu^*$.
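The sign-change count $S^-$ used in this argument is elementary to compute for finite sequences. A small sketch (our own illustration; the final convex sequence is a hypothetical stand-in for a discretized $g_c$):

```python
def sign_changes(values):
    """S^-(g): number of sign changes along a finite sequence, ignoring zeros."""
    signs = [1 if v > 0 else -1 for v in values if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# the three sequences of the example earlier in this section:
assert sign_changes([-1, 0, 1]) == 1
assert sign_changes([-2, 1, 4]) == 1
assert sign_changes([-2, 0, 0, 3]) == 1
# a strictly convex, non-monotonic sequence (a stand-in for g_c on a grid)
# changes sign at most twice:
g_c = [(x / 10) ** 2 - 1.5 for x in range(-30, 31)]
assert sign_changes(g_c) == 2
```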
Now $f(0,w^{\circ}) = {\bf E}_{P_0}[g_0(Y)] = - D(\bar{P}_0 \|P_{W_1^{\circ}}) < 0$; but by (\ref{eq:jantje}), which we already showed, $f(\mu^+_1,w^{\circ}) = f(\mu^-_1,w^{\circ}) > 0$. It follows that a $\mu^*$ as mentioned above must exist, and that it lies in between $\mu^-_1$ and $\mu^+_1$; the result follows. \paragraph{Why the case $d > 1$ is complicated} We only managed to prove a general GROW result for surrounding $\cH_1$ for $d=1$. To give the reader an idea of where the difficulties lie, we first discuss the case $d=1$ a little more. One may wonder why even there, we had to resort, via Lemma~\ref{lem:brown}, to the rather sophisticated theory of variation diminishing transformations. It would seem much simpler to directly calculate the derivative $(d/ d\mu) f(\mu,w^{\circ})$ and show that, for an appropriate choice of $w^{\circ}$, the derivative is $0$ at some $\mu^*$ within the interval $(\mu^-_1,\mu^+_1)$, and negative to the left and positive to the right of $\mu^*$; this would lead to the same conclusion as just stated. Yet the derivative is given by \begin{equation}\label{eq:derivative} \frac{d}{d\mu } f(\mu,w^{\circ}) = \sigma^{-2}_\mu \cdot \left( {\bf E}_{\bar{P}_{\mu}} \left[ Y \cdot g_0(Y) \right] - {\bf E}_{\bar{P}_{\mu}} \left[ Y \right] \cdot {\bf E}_{\bar{P}_{\mu}} \left[ g_0(Y) \right] \right), \end{equation} where $\sigma^2_\mu = {\bf E}_{\bar{P}_{\mu}}[Y^2]-({\bf E}_{\bar{P}_{\mu}}[Y])^2$ is the variance of $\bar{P}_{\mu}$. While (\ref{eq:derivative}) looks `clean', it is not easy to analyze --- for example, it is not a priori clear whether the derivative can be $0$ in only one point. Taking further derivatives does not help either in this respect; for example, the second derivative is not necessarily always positive. Another `straightforward' route to show the result via differentiation might be the following.
We fix any prior with finite support, with positive probability $(1-\alpha) w(\mu^+_1)> 0$ on $\{\mu^+_1\}$ and $(1-\alpha) w(\mu^-_1)> 0 $ on $\{\mu^-_1\}$ and, for $j=1, \ldots, k$, prior mass $\alpha w_j$ on $\mu'_j \in \meanspace_1 \setminus \bd(\meanspace_1^{\comp})$, i.e. $\mu'_j$ is not on the boundary of $\meanspace_1$, so that $\sum_{j=1}^k w_j = 1$ and $w(\mu^+_1) +w(\mu^-_1)=1$, with $0 \leq \alpha \leq 1$. We let $$ q_{\alpha}(Y) := \alpha \sum_{j=1}^k w_j \bar{p}_{\mu'_j}(Y) + (1-\alpha) \left( w(\mu^+_1) \bar{p}_{\mu^+_1}(Y) +w(\mu^-_1) \bar{p}_{\mu^-_1}(Y) \right). $$ Then, if we could show that the KL divergence \begin{align}\label{eq:deragain} {\bf E}_{Q_{\alpha}} \left[ \log \frac{q_{\alpha}(Y)}{p_0(Y)} \right] \end{align} were minimized by setting $\alpha =0$, it would follow, by applying Theorem~\ref{thm:ghk}, that $S_{\grow}$ is given by $p^*(Y)/p_0(Y)$ where $p^*(Y)$ must be of the form $w(\mu^-_1) \bar{p}_{\mu^-_1} + w(\mu^+_1) \bar{p}_{\mu^+_1}$. Yet, if we try to show this by differentiating (\ref{eq:deragain}) with respect to $\alpha$, we end up with an expression that is just as hard to analyze as (\ref{eq:derivative}), and it is again not clear how to proceed. These difficulties with showing the result in a straightforward way, by differentiation, only get exacerbated if $d> 1$. So, instead, we might try to extend the above lemma based on variation diminishing transformations to the case $d > 1$. But, literally quoting \cite[Chapter 2]{Brown86}, `results concerning sign changes for multidimensional families appear very weak by comparison to their univariate cousins', and indeed we have not found any existing result in the literature that allows us to extend the above lemma to $d > 1$. \section{Discussion, Conclusion, Future Work}\label{sec:discuss} We have shown how GROW e-variables relative to alternative $\cH_1$ defined in terms of a set of means $\meanspace_1$ relate to a CSC probability bound on an event defined by the same $\meanspace_1$.
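For the record, the derivative of an expectation in the mean-value parameterization is easy to check numerically in the simplest case: since $d\mu/d\theta = \sigma^2_\mu$, we have $(d/d\mu)\,{\bf E}_{\bar P_\mu}[g(Y)] = \sigma_\mu^{-2}\,\mathrm{Cov}_{\bar P_\mu}(Y, g(Y))$. A brute-force Bernoulli sketch of ours (the test function $g$ is an arbitrary illustrative choice):

```python
import math

# Check (d/dmu) E_mu[g(Y)] = Cov_mu(Y, g(Y)) / var_mu for the Bernoulli
# family in its mean-value parameterization, against a central difference.

def bern_expect(mu, g):
    return (1 - mu) * g(0) + mu * g(1)

def deriv_formula(mu, g):
    var = mu * (1 - mu)
    cov = bern_expect(mu, lambda y: y * g(y)) - mu * bern_expect(mu, g)
    return cov / var

g = lambda y: math.log(1.5 + y)  # arbitrary test function
mu, eps = 0.4, 1e-6
numeric = (bern_expect(mu + eps, g) - bern_expect(mu - eps, g)) / (2 * eps)
```

For Bernoulli both sides reduce to $g(1)-g(0)$, so the numerical and analytic derivatives agree up to floating-point error.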
We first considered the case of convex $\meanspace_1$; here our work consisted mostly of reformulating and re-interpreting existing results. We then considered nonconvex, surrounding $\meanspace_1$. We showed how GROW and the individual-sequence-regret type of {\em relative\/} GROW again relate to a version of the CSC theorem, and we established some additional results for the case that $\meanspace_1^{\comp}$ is a fixed-radius KL ball for sample size $1$, whereas we let the actual sample size $n$ grow. As far as we are aware, our CSC bounds for surrounding $\meanspace_1$ whose complement is a KL ball are optimal for this setting. It is of some interest though to consider the alternative setting in which, at sample size $n$, we consider a KL ball that has a fixed, or very slowly growing, radius when considering distributions on $Y= n^{-1} \sum_{i=1}^n X_i$ rather than on $X$. Thus, instead of (\ref{eq:klball}) we now set, at sample size $n$, \begin{equation}\label{eq:klballb} \meanspace_1^{\comp} = \{ \mu: D(\bar{P}_{\mu} \| P_0) < \underline{D}_n \} = \{ \mu: D(\bar{P}'_{\mu} \| P'_0) < \underline{D}_n/n \},\end{equation} where either $\underline{D}_n = \underline{D}$ is constant, or very slowly growing in $n$. First consider the case that it is constant. Then, in terms of a single outcome, the corresponding ball in Euclidean parameter space shrinks at rate $1/{\sqrt{n}}$, the familiar scaling in classical parametric testing. Since the boundary $\bd(\meanspace_1^{\comp})$ now changes with $n$, the asymptotics (\ref{eq:dminusone}) we established above are not valid anymore. Therefore, while the CSC Theorem~\ref{thm:nml} is still valid, it may be hard to evaluate the bound (\ref{eq:mlesharpbound}) that it provides.
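The rescaling in (\ref{eq:klballb}) rests on the additivity of KL divergence over i.i.d.\ samples, $D(P^{\otimes n}\|Q^{\otimes n}) = n\,D(P\|Q)$. A brute-force numerical check for Bernoulli distributions (our own small illustration, not code from the paper):

```python
import math
from itertools import product

# Additivity of KL divergence for product measures: D(P^n || Q^n) = n D(P || Q),
# verified by brute force for Bernoulli P, Q over the sample space {0,1}^n.

def kl_bern(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_product(p, q, n):
    total = 0.0
    for xs in product([0, 1], repeat=n):
        pp = math.prod(p if x else 1 - p for x in xs)
        qq = math.prod(q if x else 1 - q for x in xs)
        total += pp * math.log(pp / qq)
    return total
```

This is exactly why a fixed radius for the distribution of $Y$ corresponds to a radius shrinking as $\underline{D}_n/n$ at the level of a single outcome $X$.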
Now, for the setting (\ref{eq:klballb}), we may also heuristically apply the multivariate Central Limit Theorem (CLT): a second order Taylor approximation of $D(\bar{P}_{\mu} \| P_0)$ in a neighborhood of $\mu=0$ gives that, up to leading order, $D(\bar{P}_{\mu} \| P_0)= \frac{1}{2} \mu^{\top} J(0) \mu$, with $J(\mu)$ the Fisher information matrix of $\cE$ in terms of the mean-value parameterization, which is equal to the inverse of the covariance matrix. The multivariate CLT then immediately gives that, as $n \rightarrow \infty$, we have that $P_0(Y \in \meanspace_1^{\comp}) \rightarrow A$, where $A$ is the probability that a normally distributed random vector $V \sim N(0,I)$, with $I$ the $d \times d$ identity matrix, falls in a Euclidean ball of radius $\sqrt{2\underline{D}}$. This implies that the bound (\ref{eq:mlesharpbound}) would only remain relevant if, for all large $n$, its right-hand side evaluates to a constant smaller than $1$. We currently do not know if this is the case; it is an interesting question for future work. Now let us consider the scaling (\ref{eq:klballb}) for the case that $\underline{D}_n$ is growing at the very slow rate $a \log(b+ c\log n)$ for suitable $a, b$ and $c$. \citep{kaufmann2021mixture} give an {\em anytime-valid bound} for this case, in which the right-hand side is also a nontrivial constant (i.e. $< 1$), for all large enough $n$. Again, we do not know if we can replicate such bounds with our analyses --- it is left for future work to determine this, and to further analyze the relation between anytime-valid bounds and the bounds we derived here, which are related to e-values and hence indirectly related to anytime-validity, but are not anytime-valid themselves. \bibliography{savi,references,peter} \end{document}
\documentclass[12pt,a4paper]{amsart} \usepackage{amsmath,amsfonts,amssymb,amsthm,amscd} \usepackage{dsfont,mathrsfs,enumerate} \usepackage{curves,epic,eepic} \setlength{\topmargin}{0cm} \setlength{\textwidth}{15cm} \setlength{\oddsidemargin}{0.5cm} \setlength{\evensidemargin}{0.5cm} \setlength{\unitlength}{1mm} \usepackage[colorlinks=true, linkcolor=red, citecolor=blue, pagebackref]{hyperref} \renewcommand\backrefxxx[3]{ \hyperlink{page.#1}{$\uparrow$#1}} \usepackage{url} \usepackage{cancel} \usepackage{pdflscape} \usepackage[pdftex]{graphicx} \usepackage{psfrag} \usepackage{epic} \usepackage{eepic} \usepackage[matrix,arrow,curve]{xy} \input{epsf} \usepackage{tikz} \usetikzlibrary{calc} \usetikzlibrary{intersections} \usetikzlibrary{through} \usetikzlibrary{backgrounds} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{coro}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{ex}[thm]{Example} \newtheorem{con}[thm]{Construction} \def\Z{\mathds Z} \def\Q{\mathds Q} \def\R{\mathds R} \def\C{\mathds C} \def\phi{\varphi} \def\<{{\langle}} \def\>{{\rangle}} \newcommand{\area}[1]{\mathrm{area}(#1)} \newcommand{\vol}[1]{\mathrm{vol}(#1)} \newcommand{\lw}[1]{\mathrm{lw}(#1)} \newcommand{\lwd}[2]{\mathrm{lwd}_{#1}(#2)} \newcommand{\interior}[1]{\mathrm{int}(#1)} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\Min}[2]{\mathrm{Min}_{#1}(#2)} \newcommand{\conv}[1]{\mathrm{conv}\left(#1\right)} \newcommand{\normalfan}[1]{\Sigma_{#1}} \renewcommand{\vec}[1]{\overrightarrow{#1}} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \definecolor{ududff}{rgb}{0.3,0.3,1} \definecolor{aabbcc}{rgb}{1,0,0} \begin{document} \title[Lattice width $2$ and corresponding toric hypersurfaces]{Lattice $3$-polytopes of lattice width $2$ and corresponding toric hypersurfaces} 
\author[Martin Bohnert]{Martin Bohnert} \address{Mathematisches Institut, Universit\"at T\"ubingen, Auf der Morgenstelle 10, 72076 T\"ubingen, Germany} \email{[email protected]} \begin{abstract} The Kodaira dimension of a nondegenerate toric hypersurface can be computed from the dimension of the Fine interior of its Newton polytope according to recent work of Victor Batyrev, where the Fine interior of the Newton polytope is the subpolytope consisting of all points which have an integral distance of at least $1$ to all integral supporting hyperplanes. In particular, if we have a Fine interior of codimension $1$, then the hypersurface is of general type and the Newton polytope has lattice width $2$. In this article we study this situation for lattice $3$-polytopes and the corresponding surfaces of general type. In particular, we classify all $2$-dimensional Fine interiors of those lattice $3$-polytopes which have at most $40$ interior lattice points, thus obtaining many examples of surfaces of general type and genus at most $40$. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} Let $M\cong \Z^d$ be a lattice of rank $d\in \Z_{\geq 1}$ and $N=\mathrm{Hom}(M,\Z)\cong \Z^d$ the dual lattice. We have the natural pairing $\left<\cdot,\cdot\right>\colon M \times N \to \Z$, which extends to the real vector spaces $M_\R= M\otimes \R$ and $N_\R= N\otimes \R$. For a rational $d$-polytope $P$ in $M_\R$, i.e. $P\subseteq M_\R$ is the convex hull of a finite number of points of $M_\Q=M \otimes \Q$ and $P$ is of full dimension, we have the \textit{Fine interior} of $P$ defined as \begin{align*} F(P) := \{x \in M_\R \mid \left<x,n\right> \geq \Min{P}{n} + 1 \ \forall n \in N\setminus \{0\} \} \subseteq P \end{align*} with \begin{align*} \Min{P}{n}:=\min \{\left<x,n\right> \mid x \in P\}.
\end{align*} The Fine interior is itself a rational polytope, and if $P$ is the Newton polytope of a nondegenerate toric hypersurface $Z$ and has a non-empty Fine interior, then recent work by Victor Batyrev \cite{Bat23} shows that the Fine interior and additional combinatorial data can be used to construct a minimal model $\hat{Z}$ of the hypersurface. Moreover, we can compute some important invariants of the minimal model directly from the Fine interior. The Kodaira dimension is given by \cite[9.2]{Bat23} as \begin{align*} \kappa(\hat{Z})=\min \{\dim F(P), d-1 \} \end{align*} and, for example, for the top intersection number in the case $\dim F(P)=d-1$ we have the formula (\cite[5.1]{Gie22}, \cite[9.4]{Bat23}) \begin{align*} (K_{\hat{Z}})^{d-1}= 2 \mathrm{Vol}_{d-1}(F(P)), \end{align*} where $\mathrm{Vol}_{d-1}$ means that we are looking at the normalized volume in the affine subspace generated by $F(P)$, where normalized means that we have volume $1$ for a simplex whose vertices form an affine lattice basis of the sublattice in the subspace. We focus in this article on the case $d=3$ with $\dim F(P)=2$, so our nondegenerate toric hypersurface becomes a surface of general type. This implies that our Newton polytope $P\subseteq M_\R$ is a lattice $3$-polytope with lattice width $2$, where we should remember that the \textit{lattice width} $\lw{P}$ is given by \begin{align*} \lw{P} := \min\{ -\Min{P}{-n} - \Min{P}{n} \mid n \in N \setminus \{0\} \} \end{align*} and we call $n_{lw}\in N$ a \textit{lattice width direction} if $\lw{P}=-\Min{P}{-n_{lw}} - \Min{P}{n_{lw}}$. Why is the combinatorics in this situation easy enough to handle? As a first step, we will see in Theorem \ref{Fine_int_by_middle_polytope} in section $2$ that the Fine interior of a lattice $d$-polytope with lattice width $2$ and lattice width direction $n_{lw}$ is completely determined by its half-integral \textit{middle polytope}, i.e.
the rational polytope of dimension $d-1$ which we get by intersecting $P$ with the hyperplane \begin{align*} \{x \in M_\R \mid \left<x,n_{lw}\right> = \Min{P}{n_{lw}}+1\}. \end{align*} In particular, for lattice $3$-polytopes we can work with a \textit{middle polygon} and from there on only have to do combinatorics in dimension $2$. As a second step, we will see in section $3$ that in this case not only the middle polygon but also the Fine interior is a half-integral polygon (Theorem \ref{Fine_int_half_integral}). Moreover, we will see there that the Fine interior can always be described as the union of a lattice polygon and some very special triangles that can occur on those edges of the lattice polygon having two lattice points (see Theorem \ref{hats}). It turns out that this description of the Fine interior is restrictive enough that we end up with an efficient classification algorithm for these Fine interiors in section $4$. We will use this algorithm there to obtain our main classification result in \ref{classification}: up to affine unimodular equivalence there are exactly $24\,324\,158$ different $2$-dimensional Fine interiors of those lattice $3$-polytopes which have at most $40$ interior lattice points. In the last section we use this classification to classify, as a corollary, the Chern numbers of nondegenerate toric hypersurfaces of genus at most $40$ which have as Newton polytope a lattice $3$-polytope with $2$-dimensional Fine interior. Having the Chern numbers of these surfaces, we can place them in the geography of surfaces of general type.
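The lattice width defined above can be computed by brute force for small examples. The following sketch of ours scans dual vectors only in a small box, which suffices for these tiny examples but is not a complete algorithm in general.

```python
from itertools import product

# Brute-force lattice width: lw(P) = min over nonzero dual vectors n of
# (max <v,n> - min <v,n>) over the vertices v of P, with n restricted
# to a small search box.

def lattice_width(vertices, bound=3):
    dim = len(vertices[0])
    best = None
    for n in product(range(-bound, bound + 1), repeat=dim):
        if all(c == 0 for c in n):
            continue
        vals = [sum(v[i] * n[i] for i in range(dim)) for v in vertices]
        width = max(vals) - min(vals)
        best = width if best is None else min(best, width)
    return best

cube = list(product([0, 1], repeat=3))                   # unit cube
simplex2 = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]  # 2 * standard simplex
```

The unit cube has lattice width $1$ (hence empty Fine interior by the lemma below), while $2\Delta_3$ has lattice width $2$.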
\section{The Fine interior of lattice polytopes of lattice width 2} In this section we will see that the Fine interior of a lattice $d$-polytope of lattice width $2$ for arbitrary dimension $d$ is determined by its half-integral middle polytope, which allows us to do our Fine interior computations in codimension $1$; in other words, without loss of generality we can restrict ourselves for Fine interior computations to the case of pyramids of height $2$ which have the same middle polytope. We begin with the following lemma, which shows why the case of lattice width $2$ is important for Fine interior computations. \begin{lemma} Let $P\subseteq M_\R$ be a rational $d$-polytope. If $\lw{P}<2$, then $F(P)=\emptyset$. If $\lw{P}=2$, then $\dim F(P)<\dim P$, and if $\dim F(P)=\dim(P)-1$, then $\lw{P}=2$. \end{lemma} \begin{proof} If $n_{lw}\in N$ is a lattice width direction, then we have \begin{align*} F(P) \subseteq \{x\in M_\R \mid \left<x,n_{lw}\right> \geq \Min{P}{n_{lw}}+1, \left<x,-n_{lw}\right> \geq \Min{P}{-n_{lw}}+1 \} \end{align*} and the set on the right side is empty for $\lw{P}<2$ and of dimension $d-1$ for $\lw{P}=2$. If we have $\dim F(P)=\dim(P)-1$, then we must have a normal vector $n\in N$ for $F(P)$ such that $-n$ is also a normal vector and thus $-\Min{P}{-n}-\Min{P}{n}=2$. This implies $\lw{P}\leq 2$, and we have $\lw{P}\geq 2$ since this is always the case for a non-empty Fine interior by the first part of the lemma. \end{proof} \begin{rem} There are many examples of lattice polytopes $P\subseteq M_\R$ with lattice width greater than $2$ and $\dim F(P) < \dim P-1$. Since $\dim F((d+1)\Delta_d)=0$ for the standard simplex $\Delta_d$, i.e. the convex hull of an affine lattice basis, and $\lw{(d+1)\Delta_d}=d+1$, we have for $d, k, w\in \Z$, $d\geq 2$, $0\leq k<d-1$, $3\leq w\leq d-k+1$ the lattice $d$-polytope $(d-k+1)\Delta_{d-k} \times [0,w]^{k}$ with lattice width $w>2$ and $\dim(F(P))=k$.
\end{rem} \begin{rem} We can see that $\dim(F(P))=d-1$ implies $\lw{P}\leq 2$ also from the general theory about lattice projections along the Fine interior. By \cite[9.1]{Bat23} we have a lattice projection along the Fine interior on a lattice polytope with a Fine interior of dimension $0$. So from a Fine interior of dimension $d-1$ we get a projection onto a lattice polytope of dimension $1$ with a Fine interior of dimension $0$. But up to lattice translation there is only $[-1,1]$ with this property, and a lattice projection onto $[-1,1]$ is obviously equivalent to $\lw{P}\leq 2$. \end{rem} We now rearrange our coordinates for a good working setup with the middle polytope and split the dual lattice $N$ with respect to the middle polytope. \begin{defi} Let $P\subseteq \R\times M_\R$ be a lattice polytope with $\lw{P}=2$ and $P \subseteq [-1,1] \times M_\R$. Then we have the half-integral middle polytope $P_0\subseteq M_\R$ of $P$ defined by \begin{align*} \{0\} \times P_0 = P \cap (\{0\} \times M_\R). \end{align*} Using $P_0$, we define a partition $N\setminus \{0\}=N_1(P_0)\cup N_2(P_0)$ of the non-zero dual lattice vectors by \begin{align*} N_1(P_0) := \{n \in N\setminus \{0\} \mid \Min{P_0}{n} \in \Z\}, N_2(P_0) := \{n \in N\setminus \{0\} \mid \Min{P_0}{n} \notin \Z\}. \end{align*} \end{defi} Now we are ready to show how the middle polytope determines the Fine interior of a lattice polytope of lattice width $2$. \begin{thm}\label{Fine_int_by_middle_polytope} Let $P\subseteq \R \times M_\R$ be a lattice polytope of lattice width $2$, $P \subseteq [-1,1] \times M_\R $ and $P_0\subseteq M_\R$ the middle polytope of $P$. Then the Fine interior $F(P)$ of $P$ is \begin{align*} F(P) = F_1(P)\cap F_2(P) \end{align*} with \begin{align*} F_1(P):=&\{0\} \times \{x \in M_\R \mid \left<x,n\right> \geq \Min{P_0}{n}+1 \ \forall n \in N_1(P_0)\} \\ F_2(P):=&\{0\} \times \{x \in M_\R \mid \left<x,2n\right> \geq \Min{P_0}{2n}+1 \ \forall n \in N_2(P_0)\} . 
\end{align*} In particular, $F(P)$ is completely determined by $P_0$. \end{thm} \begin{proof} Since $P\subseteq [-1,1] \times M_\R$ and $\lw{P}=2$, we have $F(P)=F(P)\cap (\{0\} \times M_\R)$ and \begin{align*} F(P)=\{0\} \times \{x \in M_\R \mid \left<(0,x),(n_0,n)\right> \geq \Min{P}{(n_0,n)}+1 \ \forall n_0 \in \Z, n \in N\setminus \{0\}\}. \end{align*} So we have $F(P)\subseteq \{0\} \times P_0$, and since $P$ is a lattice polytope and $P_0$ is half-integral, it is sufficient to look at those pairs $(n_0,n)$ with \begin{align*} \Min{P}{(n_0,n)}+\frac{1}{2}\geq \Min{\{0\} \times P_0}{(n_0,n)}. \end{align*} With \begin{align*} \tilde{N_1} :=& \{(n_0,n) \in \Z \times (N\setminus \{0\}) \mid \Min{P}{(n_0,n)}= \Min{\{0\} \times P_0}{(n_0,n)}\}, \\\tilde{N_2} :=& \{(n_0,n) \in \Z \times (N\setminus \{0\}) \mid \Min{P}{(n_0,n)}+\frac{1}{2}= \Min{\{0\} \times P_0}{(n_0,n)}\} \end{align*} we get \begin{align*} F(P)=& (\{0\} \times \{x \in M_\R \mid \left<(0,x),(n_0,n)\right> \geq \Min{\{0\} \times P_0}{(n_0,n)}+1 \ \forall (n_0,n) \in \tilde{N_1}\}) \ \cap \\ & (\{0\} \times \{x \in M_\R \mid \left<(0,x),(n_0,n)\right> \geq \Min{\{0\} \times P_0}{(n_0,n)}+\frac{1}{2} \ \forall (n_0,n) \in \tilde{N_2}\})\\ =& (\{0\} \times \{x \in M_\R \mid \left<x,n\right> \geq \Min{P_0}{n}+1 \ \forall (n_0,n) \in \tilde{N_1}\} ) \ \cap \\ & (\{0\} \times \{x \in M_\R \mid \left<x,2n\right> \geq \Min{P_0}{2n}+1 \ \forall (n_0,n) \in \tilde{N_2}\} ). \end{align*} We have for $(n_0,n)\in \tilde{N_1}$ that $\Min{P_0}{n}=\Min{P}{(n_0,n)}\in \Z$ and so $n\in N_1(P_0)$. With the same argument we get $n\in N_2(P_0)$ for all $(n_0,n)\in \tilde{N_2}$. It remains to show that for all $n\in N_1(P_0)$ there is an $n_0\in \Z$ with $(n_0,n)\in \tilde{N_1}$, and analogously for $N_2(P_0)$ and $\tilde{N_2}$. Let $n\in N_1(P_0)$, i.e. $\Min{P_0}{n}\in \Z$. Then there is a vertex $e$ of $P_0$ with $\Min{P_0}{n}=\left<e,n\right>$.
We choose a vertex $f=(1,f')$ of $P$ such that $n_0:=\left<e-f',n\right>\in \Z$ is maximal. We have \begin{align*} \left<(1,f'),(n_0,n)\right>=\left<f',n\right> + n_0=\left<e,n\right>=\Min{P_0}{n} \end{align*} and since $\left<(0,e),(n_0,n)\right>=\left<e,n\right>=\Min{P_0}{n}$ we get that the dual lattice vector $(n_0,n)$ defines a hyperplane containing $(0,e)$ and $(1,f')$, which is also a supporting hyperplane for $P$ since we have chosen $n_0$ maximal. So we get \begin{align*} \Min{P}{(n_0,n)}=\left<e,n\right>=\Min{\{0\} \times P_0}{(n_0,n)} \end{align*} and so we have $(n_0,n)\in \tilde{N_1}$. Let $n\in N_2(P_0)$, i.e. $2\Min{P_0}{n}\in \Z\setminus 2\Z$. Then there is a vertex $e$ of $P_0$ with $\Min{P_0}{n}=\left<e,n\right>$. We now choose a vertex $f=(1,f')$ of $P$ such that $n_0:=\left<e-f',n\right>-\frac{1}{2}\in \Z$ is maximal. Then we have \begin{align*} \left<(1,f'),(n_0,n)\right>=\left<f',n\right> + n_0=\left<e,n\right>-\frac{1}{2}=\Min{P_0}{n}-\frac{1}{2} \end{align*} and so the dual lattice vector $(n_0,n)$ defines a hyperplane containing $(1,f')$, and this is also a supporting hyperplane for $P$ since we chose $n_0$ maximal. So we get \begin{align*} \Min{P}{(n_0,n)}+\frac{1}{2}=\left<e,n\right>=\Min{\{0\} \times P_0}{(n_0,n)} \end{align*} and so we have $(n_0,n)\in \tilde{N_2}$. \end{proof} If the middle polytope is a lattice polytope, then the situation becomes easier. This was already seen in \cite[4.3]{Boh24b}. We now recover this situation as a corollary. \begin{coro} Let $P\subseteq \R \times M_\R$ be a lattice polytope of lattice width $2$ with a lattice polytope $P_0\subseteq M_\R$ as middle polytope of $P$. Then $F(P)=\{0\} \times F(P_0)$. \end{coro} \begin{proof} Since $P_0$ is a lattice polytope, we have $N_2(P_0)=\emptyset$, $N_1(P_0)=N\setminus \{0\}$ and therefore $F(P)=F_1(P)=\{0\} \times F(P_0)$. \end{proof} We now introduce some new notation to work directly in $M_\R$.
\begin{defi} Let $P_0\subseteq M_\R$ be a half-integral polytope. Then we set \begin{align*} \bar{F}(P_0):=\bar{F}_1(P_0)\cap \bar{F}_2(P_0) \end{align*} with \begin{align*} \bar{F}_1(P_0):=&\{x \in M_\R \mid \left<x,n\right> \geq \Min{P_0}{n}+1 \ \forall n \in N_1(P_0)\}\\ \bar{F}_2(P_0):=&\{x \in M_\R \mid \left<x,2n\right> \geq \Min{P_0}{2n}+1 \ \forall n \in N_2(P_0)\}. \end{align*} \end{defi} With this notation we now have the following short version of Theorem \ref{Fine_int_by_middle_polytope}. \begin{coro} Let $P\subseteq \R \times M_\R$ be a lattice polytope of lattice width $2$, $P \subseteq [-1,1] \times M_\R $ and $P_0\subseteq M_\R$ the middle polytope of $P$. Then we have \begin{align*} F(P)=\{0\} \times \bar{F}(P_0). \end{align*} \end{coro} \begin{rem} Note that $\bar{F}(P_0)$ is not in general the Fine interior of the half-integral polytope $P_0\subseteq M_\R$. We have that $\bar{F}(P_0)$ is the Fine interior of $P_0$ if and only if the primitive normal vectors to $F(P_0)$ are all in $N_1(P_0)$. In particular, $\bar{F}(P_0)=F(P_0)$ if $P_0$ is a lattice polytope. We also have the following inclusions: \begin{align*} F(F(2P_0))\subseteq 2F(P_0)\subseteq 2 \bar{F}(P_0)\subseteq F(2P_0). \end{align*} \end{rem} Since the middle polytope completely determines the Fine interior, we can focus on some special polytopes with this middle polytope. For a rational polytope $P_0 \subseteq M_\R$ we have the pyramid over $P_0$ defined by $\mathrm{Pyr}(P_0):= \conv{(1,0), \{0\}\times P_0}\subseteq \R \times M_\R$, and if we translate $2\mathrm{Pyr}(P_0)$ by the lattice vector $(-1,0)$ to a subpolytope of $[-1,1] \times M_\R$, then we get $P_0$ as the middle polytope. So we have the following corollary.
\begin{coro}\label{FineInt_by_middle_polytop} In the situation from \ref{Fine_int_by_middle_polytope} we have \begin{align*} F(P)=\{0\} \times \bar{F}(P_0) \cong F(2\cdot \mathrm{Pyr}(P_0)). \end{align*} Moreover, if $P_0$ is even a lattice polytope, then we also have \begin{align*} F(P)=F([-1,1] \times P_0). \end{align*} \end{coro} \section{The Fine interior of a lattice $3$-polytope of lattice width $2$} \begin{rem} Let $P$ be as in Theorem \ref{Fine_int_by_middle_polytope}, with a lattice polygon $P_0\subseteq M_\R$ as middle polytope of $P$.\\ Then $F(P)=\{0\} \times F(P_0)=\conv{\mathrm{int}(P)\cap M}$ is a lattice polygon. \end{rem} \subsection{The Fine interior of a lattice $3$-polytope of lattice width $2$ is a half-integral polygon} \begin{thm}\label{Fine_int_half_integral} Let $M\cong \Z^2$ be a lattice of rank $2$, $P\subseteq \R \times M_\R$ a lattice $3$-polytope of lattice width $2$, $P\subseteq [-1,1] \times M_\R$, and $\dim(F(P))=2$. Then $F(P)\subseteq \{0\} \times M_\R$ is a half-integral polygon. \end{thm} \begin{proof} Let $P_0\subseteq M_\R$ be the middle polygon of $P$. By \ref{Fine_int_by_middle_polytope} it is sufficient to show that $\bar{F}(P_0)$ is a half-integral polygon. To see this, we will prove that \begin{align*} \bar{F}(P_0) \subseteq \conv{x\in \bar{F}(P_0) \mid 2x \in M}. \end{align*} We can assume that we have at least two lattice points in $\bar{F}(P_0)$, since lattice $3$-polytopes with at most $1$ interior lattice point cannot have a Fine interior of dimension $2$ by \cite{BKS22}. So $\conv{x\in \bar{F}(P_0) \mid 2x \in M}$ has at least dimension $1$ and we can look at any edge $e\preceq \conv{x\in \bar{F}(P_0) \mid 2x \in M}$. Now it is enough to see that $e$ is also an edge of $\bar{F}(P_0)$. We have two cases: either the affine hull of $e$ contains lattice points or it does not.
If the affine hull of $e$ contains lattice points, then after a suitable affine unimodular transformation we can assume that the line segment $\conv{(0,0), (1/2,0)}$ is a subset of $e$ and that we have no points of $\conv{x\in \bar{F}(P_0) \mid 2x \in M}$ with positive second coordinate. If we can now show that there are no points of $P_0$ with second coordinate greater than $1$, we get that there are no points of $\bar{F}(P_0)$ with positive second coordinate and so we are done. Let $x_m=(x_{m1}, x_{m2})\in P_0$ be a half-integral point with maximal second coordinate. Then the triangle $\conv{(0,0), (1/2,0), x_m}$ contains a half-integral point with second coordinate $1/2$ by Pick's theorem. With a suitable shearing we can assume that this point is $(1/2,1/2)$ and so we get that $1/2 \leq x_{m1} \leq x_{m2}$ by convexity of the triangle. If $x_{m2}>1$, then we have for every $n\in N\setminus \{0\}$ at least one of the following inequalities: \begin{align*} \left<(0,0),n\right>\leq& \left<(1/2,1/2),n\right> \ \text{or}\\ \left<(1/2,0),n\right>\leq& \left<(1/2,1/2),n\right>\ \text{or}\\ \left<x_m,n\right>< \left<(1,1),n\right> <& \left<(1/2,1/2),n\right>\ \text{or}\\ \left<x_m,n\right>< \left<(1/2,1),n\right> <& \left<(1/2,1/2),n\right>. \end{align*} These inequalities imply $\left<(1/2,1/2),n\right> \geq \Min{\bar{F}(P_0)}{n}$ and so we get $(1/2,1/2)\in \conv{x\in \bar{F}(P_0) \mid 2x \in M}$, which contradicts our assumption that this set contains no points with positive second coordinate. So we must have $x_{m2}\leq 1$ and so we are done. If the affine hull of $e$ contains no lattice points, then we start our argument in a similar way. After a suitable affine unimodular transformation we can assume that the line segment $\conv{(0,-1/2), (1/2,-1/2)}$ is a subset of $e$ and we have no points of $\conv{x\in \bar{F}(P_0) \mid 2x \in M}$ with second coordinate greater than $-1/2$. If we can now prove that there are no points of $P_0$ with second coordinate greater than $0$, we are done.
Let $x_m=(x_{m1}, x_{m2})\in P_0$ again be a half-integral point with maximal second coordinate. Then, by Pick's theorem, the triangle $\conv{(0,-1/2), (1/2,-1/2), x_m}$ contains a half-integral point with second coordinate $0$. With another affine unimodular transformation we can assume that this point is $(1/2,0)$ and so get that $1/2 \leq x_{m1} \leq x_{m2}+1/2$ from the convexity of the triangle. With analogous inequalities as in the first case, we now get that $x_{m2}\leq 1/2$. So if $x_{m2}>0$, then we have $x_m=(1/2,1/2)$ or $x_m=(1,1/2)$. In the first case we get $\Min{P_0}{(1,-1)}=0$, because otherwise we get $(0,0)\in \interior{P_0}$ and therefore the contradiction $(0,0)\in \bar{F}(P_0)$. But $\Min{P_0}{(1,-1)}=0$ also leads to a contradiction, since $(0,-1/2)\in \bar{F}(P_0)$. Similarly we get contradictions if $x_m=(1,1/2)$. All in all we see that $x_{m2}\leq 0$ and so we are done. \end{proof} The theorem also gives us a way to compute $\bar{F}(P_0)$, since we only have to check which half-integral points it contains. If we know the support of the Fine interior of the lattice polygon $2P_0$, i.e. all the dual lattice vectors $n\in N$ with $\Min{2P_0}{n}+1=\Min{F(2P_0)}{n}$, we can use the following corollary. Note that the Fine interior here is just the convex hull of the interior lattice points, since $2P_0$ is a lattice polygon (\cite[2.9]{Bat17}). \begin{coro} Let $P$ be as in the theorem and $P_0$ its half-integral middle polygon. Then we get $2\bar{F}(P_0)$ as the convex hull of the following finite sets of lattice points: \begin{align*} &\{x \in \partial F(2P_0)\cap M \mid \left<x,n\right> \neq \Min{2P_0}{n}+1 \ \forall n \in S_F(2P_0)\cap N_1(P_0) \},\\ &F(F(2P_0))\cap M. \end{align*} \end{coro} From the proof of the theorem we get another corollary. We have shown there that an edge of $\bar{F}(P_0)$ without a lattice point has lattice distance only $1/2$ to a supporting line, which therefore contains lattice points.
But this means that such an edge of $\bar{F}(P_0)$ cannot exist, and we get the following corollary. \begin{coro}\label{lpoint_on_edge} Let $P$ be as in the theorem with $\dim F(P)=2$. Then we have for all edges $E\preceq F(P)$ that $E\cap M\neq \emptyset$. \end{coro} \subsection{The Fine interior vs. the convex hull of the interior lattice points} For a lattice polygon, we know from \cite[2.9]{Bat17} that the Fine interior is just the convex hull of the interior lattice points. For $\bar{F}$ of a half-integral polygon the situation is more complicated, but there are also some connections between $\bar{F}$ and the convex hull of the interior lattice points that we want to study in this subsection. \begin{prop}\label{edges_of_Fineint} Let $P_0\subseteq M_\R$ be a half-integral polygon with at least two interior lattice points, $e \preceq \conv{\interior{P_0} \cap M}$ an edge and $v \preceq \conv{\interior{P_0}\cap M}$ a vertex. Then $e$ is an edge of $\bar{F}(P_0)$ or $|e\cap M|=2$, and $v$ is a boundary lattice point of $\bar{F}(P_0)$. \end{prop} \begin{proof} First, we show that an edge $e \preceq \conv{\interior{P_0} \cap M}$ with $|e\cap M|>2$ is an edge of $\bar{F}(P_0)$. After a suitable affine unimodular transformation we can assume that $\conv{(0,0), (2,0)}\subseteq e$ and that the second coordinate of all interior lattice points of $P_0$ is at most $0$. Suppose now that $e$ is not an edge of $\bar{F}(P_0)$, i.e. we have a half-integral point with positive second coordinate in $\bar{F}(P_0)$, and we can choose $S=(s_1,s_2)\in \bar{F}(P_0)$ as a half-integral point with maximal possible second coordinate $s_2$. By a suitable shearing we can assume that $s_2 \geq s_1>0$. We conclude directly that $s_2<\frac{3}{2}$, since otherwise $\conv{(0,0), (2,0), (s_1,s_2)}\subseteq \interior{P_0}$ contains the lattice point $(1,1)\in M$, which contradicts our choice of coordinates. It remains to consider the cases $S=(1/2,1)$ and $S=(1/2,1/2)$.
Since $s_2$ was chosen maximal and every edge of $\bar{F}(P_0)$ contains a lattice point by \ref{lpoint_on_edge}, $S$ must be a vertex of $\bar{F}(P_0)$. Let $e_{S,l}, e_{S,r} \preceq \bar{F}(P_0)$ be the edges of $\bar{F}(P_0)$ containing $S$, where the indices $l$ and $r$ indicate that the first coordinates of the points of $e_{S,l}$ are not greater than those of $e_{S,r}$, so $e_{S,l}$ is on the left. Take the parallel lines to the affine hull of $e_{S,l}$ through $(0,1)$ and to the affine hull of $e_{S,r}$ through $(1,1)$. Then $P_0$ must lie below both of these parallel lines. This implies that all points of $P_0$ have second coordinate at most $1$, which contradicts the fact that $S\in \bar{F}(P_0)$. We now show that every vertex of $\conv{\interior{P_0}\cap M}$ is a boundary lattice point of $\bar{F}(P_0)$. From the first part of the proof we already know that this is true for vertices on edges of $\bar{F}(P_0)$ with at least $3$ lattice points. So we only need to consider edges of $\conv{\interior{P_0}\cap M}$ with exactly two lattice points. After a suitable affine unimodular transformation we can assume that this edge is $\conv{(0,0), (1,0)}$ and that the second coordinate of all interior lattice points of $P_0$ is at most $0$. Furthermore, we can still assume that $S=(s_1,s_2)\in \bar{F}(P_0)$ is a half-integral point with the maximal possible second coordinate $s_2$ and $s_2 \geq s_1>0$. We now get $s_1=\frac{1}{2}$, since otherwise $\conv{(0,0), (1,0), (s_1,s_2)}\subseteq \interior{P_0}$ contains the lattice point $(1,1)\in M$, which contradicts our choice of coordinates. By symmetry it is now sufficient to show that the line segment $\conv{(0,0), S}$ is a subset of an edge of $\bar{F}(P_0)$. As in the first part of the proof, we consider parallel lines to the affine hull of $e_{S,l}$ through $(0,1)$ and to the affine hull of $e_{S,r}$ through $(1,1)$.
If $\conv{(0,0), S}$ is not a subset of an edge of $\bar{F}(P_0)$, then we get that all points of $P_0$ have a second coordinate of at most $s_2+\frac{1}{2}$. If $s_2\in \frac{1}{2}\Z \setminus \Z$, then this contradicts the fact that $S\in \bar{F}(P_0)$. If $s_2\in \Z$, then the affine hull of $(0,1)$ and $(1/2,s_2+1/2)$ must be a supporting line of $P_0$. But this also contradicts $S\in \bar{F}(P_0)$ and so we are done. \end{proof} Since we have seen that edges of $\conv{\interior{P_0} \cap M}$ with $3$ or more lattice points are always edges of $\bar{F}(P_0)$, we get the following corollary. \begin{coro} Let $P_0\subseteq M_\R$ be a half-integral polygon with at least three interior lattice points, which are collinear. Then $\dim \bar{F}(P_0)=1$ and in particular $P_0$ is affine unimodular equivalent to a polygon in $\R \times [-1,1]$. \end{coro} We now focus on the edges of $\conv{\interior{P_0} \cap M}$ which are not edges of $\bar{F}(P_0)$. For them we have the following proposition. \begin{prop} Let $P_0\subseteq M_\R$ be a half-integral polygon with at least two interior lattice points and let $e \preceq \conv{\interior{P_0} \cap M}$ be an edge which is not an edge of $\bar{F}(P_0)$. Then there is exactly one vertex $v$ of $\bar{F}(P_0)$ which lies on the other side of $e$ than $\interior{P_0} \cap M$. Moreover, if the lattice distance of $v$ from $e$ is $h$, then $P_0$ has at least $2(h-1)$ interior lattice points, and the triangle $\conv{e,v}$ is unimodular affine equivalent to the half-integral triangle $\conv{(0,0),(1,0),(1/2,h)}$. \end{prop} \begin{proof} We have already seen in the proof of the second part of \ref{edges_of_Fineint} that there is exactly one vertex $v$ of $\bar{F}(P_0)$ which lies on the other side of $e$ than $\interior{P_0} \cap M$ and that the triangle $\conv{e,v}$ is unimodular affine equivalent to the half-integral triangle $\conv{(0,0),(1,0),(1/2,h)}$.
So we can assume that the triangle is $\conv{(0,0),(1,0),(1/2,h)}$, and we can shift the affine hulls of $(0,0), (1/2,h)$ and of $(1,0), (1/2,h)$ outward by one integral step; the region bounded by these shifted lines contains $P_0$. In particular, since $(0,0), (1,0)$ are interior lattice points of $P_0$, there must be a point $(x_1,x_2)\in P_0$ with $x_1\leq -1/2, x_2\leq -h+1$ as well as a point with $x_1\geq 3/2, x_2\leq -h+1$. Therefore all the lattice points in \begin{align*} \conv{(0,0), (1,0), (0,-h+2), (1,-h+2)} \end{align*} are interior lattice points of $P_0$, so we have at least $2(h-1)$ interior lattice points of $P_0$. \end{proof} \begin{coro} The lattice height $h$ of the triangles from the last proposition over edges of $\conv{\interior{P_0} \cap M}$ with two lattice points is bounded by $\frac{1}{2}(|P_0\cap M|+2)$. \end{coro} Let us end this subsection by summarizing some properties of $\bar{F}(P_0)$ we have proven so far. \begin{thm}\label{hats} Let $P_0\subseteq M_\R$ be a half-integral polygon with $\dim(\bar{F}(P_0))=2$. Then $\bar{F}(P_0)$ is a half-integral polygon, every edge of $\bar{F}(P_0)$ contains a lattice point, and we get $\bar{F}(P_0)$ as the union of $\conv{\interior{P_0} \cap M}$ with some triangles, where the base of each of these triangles is an edge of $\conv{\interior{P_0} \cap M}$ with two lattice points and the third vertex of the triangle has a lattice height of at most $\frac{1}{2}(|P_0\cap M|+2)$ over this edge. Moreover, every vertex of $\conv{\interior{P_0} \cap M}$ is a boundary lattice point of $\bar{F}(P_0)$. \end{thm} \subsection{The maximal half-integral polygon to a given Fine interior} In this subsection we want to describe the unique inclusion-maximal lattice polytope for a given Fine interior and the unique inclusion-maximal half-integral polygon for given $\bar{F}$. This also leads to a test of whether a given polytope can occur as a Fine interior. We start by moving out the facets of a polytope by lattice distance $1$.
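The moving-out operation defined next is easy to carry out for polygons. As a small illustration (not part of the paper; the function name and the vertex representation are our own choices), the following Python sketch shifts every supporting edge line of a polygon, given by its vertices in counterclockwise order, outward by lattice distance $1$ along the primitive inner edge normal and intersects consecutive shifted lines, using exact rational arithmetic:

```python
from math import gcd
from fractions import Fraction

def move_out(vertices):
    """Sketch of P^(-1) for a polygon: move every edge out by
    lattice distance 1.  `vertices` must be listed counterclockwise."""
    n = len(vertices)
    lines = []  # (a, b, c) encodes the half-plane a*x + b*y >= c
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        a, b = -dy, dx                  # inner normal for ccw orientation
        g = gcd(abs(a), abs(b))
        a, b = a // g, b // g           # make the normal primitive
        c = a * x1 + b * y1             # value on the edge = Min_P(n)
        lines.append((a, b, c - 1))     # moved out by lattice distance 1
    # intersect consecutive shifted lines to obtain the new vertices
    new_vertices = []
    for i in range(n):
        a1, b1, c1 = lines[i - 1]
        a2, b2, c2 = lines[i]
        det = a1 * b2 - a2 * b1
        x = Fraction(c1 * b2 - c2 * b1, det)
        y = Fraction(a1 * c2 - a2 * c1, det)
        new_vertices.append((x, y))
    return new_vertices
```

For the unit triangle $Q$ with vertices $(0,0),(1,0),(0,1)$ this yields the polygon with vertices $(-1,-1),(3,-1),(-1,3)$; since this is a lattice polygon, $Q$ occurs as the Fine interior of a lattice polygon by the criterion from \cite{HS09} discussed below.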
\begin{defi} Let $P\subseteq M_\R$ be a rational polytope. Then we define \begin{align*} P^{(-1)}:=\{x \in M_\R \mid \left<x,n\right> \geq \Min{P}{n}-1 \ \forall n \in \normalfan{P}[1]\}. \end{align*} \end{defi} \begin{rem} Moving out the facets is partially an inverse operation of computing the Fine interior. We get directly from the definition the following relations: \begin{align*} F(\conv{P^{(-1)}\cap M})\subseteq F(P^{(-1)})\subseteq P. \end{align*} The opposite inclusions are not correct in general, but they hold for Fine interiors, as we see in the next lemma. \end{rem} \begin{lemma} A rational $d$-polytope $Q\subseteq M_\R$ is a Fine interior of a rational polytope if and only if $Q\subseteq F(Q^{(-1)})$. Moreover, $Q$ is a Fine interior of a lattice polytope if and only if $Q\subseteq F(\conv{Q^{(-1)}\cap M})$. \end{lemma} \begin{proof} If the inclusions hold, then $Q$ is a Fine interior since the opposite inclusions hold in general. If $Q$ is a Fine interior, then there is a $d$-polytope $Q'\subseteq M_\R$ with $F(Q')=Q$. We get $Q'\subseteq Q^{(-1)}$ and so by the monotonicity of the Fine interior $Q=F(Q')\subseteq F(Q^{(-1)})$. \end{proof} The situation becomes especially easy for lattice polygons. We have by \cite[Lemma 9]{HS09} the following proposition. \begin{prop} A lattice polygon $Q$ is a Fine interior of a lattice polygon if and only if $Q^{(-1)}$ is a lattice polygon. \end{prop} \begin{rem} All lattice polygons with up to 112 lattice points which are the Fine interior of a lattice polygon were classified for calculations in \cite{BS24a}. The data are available on Zenodo \cite{BS24b}. \end{rem} We now want results for $\bar{F}$ similar to those for the Fine interior. \begin{defi} Let $P_0\subseteq M_\R$ be a half-integral polygon with $\dim \bar{F}(P_0)=2$. Then we call $P_0$ a maximal half-integral polygon for given $\bar{F}$ if and only if \begin{align*} P_0=\conv{(\bar{F}(P_0))^{(-1)}\cap M/2}.
\end{align*} \end{defi} \begin{rem} The maximal half-integral polygon for given $\bar{F}$ is unique by definition and contains all other half-integral polygons with the same $\bar{F}$. If a half-integral polygon is maximal with respect to inclusion among all half-integral polygons with the same number of interior lattice points, it is in particular maximal for given $\bar{F}$. Thus we get all inclusion-maximal half-integral polygons as a subset of the polygons maximal for given $\bar{F}$. This was used to classify half-integral polygons in \cite{BS24a}. \end{rem} Analogously to the above, we now get a test for two-dimensional Fine interiors of lattice $3$-polytopes. \begin{coro}\label{FineInt_test} A half-integral polygon $Q\subseteq M_\R$ is unimodular equivalent to the Fine interior of a lattice $3$-polytope if and only if \begin{align*} Q=\bar{F}(\conv{Q^{(-1)}\cap M/2}). \end{align*} \end{coro} \section{Classification of Fine interiors of small lattice width} The previous section gives us the following algorithm to classify the two-dimensional Fine interiors of lattice $3$-polytopes with a given number of lattice points. \begin{enumerate} \item Take all lattice polygons with the given number of lattice points, e.g.\ from the dataset \cite{BS24b}. \item For each such lattice polygon consider all possibilities to put triangles on the edges with $2$ lattice points which are permitted by \ref{hats}. We get a finite number of polygons with hats, since the height of the hats is bounded and a polygon with hats needs to be convex. \item Check with \ref{FineInt_test} for each polygon with hats from the last step whether it is indeed a Fine interior, and discard the polygons which are not. \item Eliminate affine unimodular equivalent Fine interiors, e.g.\ by use of a normal form as described in \cite[Section 2.1]{BS24a}.
\end{enumerate} This algorithm was used for calculations with up to $40$ lattice points and we got the following result. \begin{thm}\label{classification} There are exactly $24\,324\,158$ half-integral polygons with at most $40$ lattice points which are Fine interiors of a lattice $3$-polytope. The data for the vertices of these polygons are available in \cite{Boh24a}. The examples with $2$ or $3$ lattice points can be seen in Figures \ref{2points} and \ref{3points}. \end{thm} \begin{figure} \begin{tikzpicture}[x=0.6cm,y=0.6cm] \draw[step=2.0,black,thin,xshift=0cm,yshift=0cm] (-11,12.2) grid (11,-2.2); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-10,8) -- (-9,9) -- (-8,8) -- cycle; \draw [line width=1pt,color=black] (-9,9)-- (-10,8); \draw [line width=1pt,color=black] (-9,9)-- (-8,8); \draw [line width=1pt,color=black] (-8,8)-- (-10,8); \draw [fill=black] (-9,9) circle (2.5pt); \draw [fill=red] (-10,8) circle (2.5pt); \draw [fill=red] (-8,8) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-4,8) -- (-3,10) -- (-2,8) -- cycle; \draw [line width=1pt,color=black] (-3,10)-- (-4,8); \draw [line width=1pt,color=black] (-3,10)-- (-2,8); \draw [line width=1pt,color=black] (-2,8)-- (-4,8); \draw [fill=black] (-3,10) circle (2.5pt); \draw [fill=red] (-4,8) circle (2.5pt); \draw [fill=red] (-2,8) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (2,8) -- (3,11) -- (4,8) -- cycle; \draw [line width=1pt,color=black] (3,11)-- (2,8); \draw [line width=1pt,color=black] (3,11)-- (4,8); \draw [line width=1pt,color=black] (4,8)-- (2,8); \draw [fill=black] (3,11) circle (2.5pt); \draw [fill=red] (2,8) circle (2.5pt); \draw [fill=red] (4,8) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (8,8) -- (9,12) -- (10,8) -- cycle; \draw [line width=1pt,color=black] (9,12)-- (8,8); \draw [line width=1pt,color=black] (9,12)-- (10,8); \draw [line width=1pt,color=black] (10,8)-- (8,8); \draw [fill=black] (9,12)
circle (2.5pt); \draw [fill=red] (8,8) circle (2.5pt); \draw [fill=red] (10,8) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-10,4) -- (-9,5) -- (-8,4) -- (-9,3) -- cycle; \draw [line width=1pt,color=black] (-10,4)-- (-9,5); \draw [line width=1pt,color=black] (-9,5)-- (-8,4); \draw [line width=1pt,color=black] (-8,4)-- (-9,3); \draw [line width=1pt,color=black] (-10,4)-- (-9,3); \draw [fill=black] (-9,5) circle (2.5pt); \draw [fill=black] (-9,3) circle (2.5pt); \draw [fill=red] (-10,4) circle (2.5pt); \draw [fill=red] (-8,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-4,4) -- (-3,6) -- (-2,4) -- (-3,3) -- cycle; \draw [line width=1pt,color=black] (-4,4)-- (-3,6); \draw [line width=1pt,color=black] (-3,6)-- (-2,4); \draw [line width=1pt,color=black] (-2,4)-- (-3,3); \draw [line width=1pt,color=black] (-3,3)-- (-4,4); \draw [fill=black] (-3,6) circle (2.5pt); \draw [fill=black] (-3,3) circle (2.5pt); \draw [fill=red] (-4,4) circle (2.5pt); \draw [fill=red] (-2,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (2,4) -- (3,7) -- (4,4) -- (3,3) -- cycle; \draw [line width=1pt,color=black] (2,4)-- (3,7); \draw [line width=1pt,color=black] (3,7)-- (4,4); \draw [line width=1pt,color=black] (4,4)-- (3,3); \draw [line width=1pt,color=black] (3,3)-- (2,4); \draw [fill=black] (3,7) circle (2.5pt); \draw [fill=black] (3,3) circle (2.5pt); \draw [fill=red] (2,4) circle (2.5pt); \draw [fill=red] (4,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (8,4) -- (9,6) -- (10,4) -- (9,2) -- cycle; \draw [line width=1pt,color=black] (8,4)-- (9,6); \draw [line width=1pt,color=black] (9,6)-- (10,4); \draw [line width=1pt,color=black] (10,4)-- (9,2); \draw [line width=1pt,color=black] (8,4)-- (9,2); \draw [fill=black] (9,6) circle (2.5pt); \draw [fill=black] (9,2) circle (2.5pt); \draw [fill=red] (8,4) circle (2.5pt); \draw [fill=red] (10,4) circle (2.5pt); \fill[line
width=2pt,color=black,fill=black,fill opacity=0.3] (-10,0) -- (-10,1) -- (-8,0) -- (-10,-1) -- cycle; \draw [line width=1pt,color=black] (-10,0)-- (-10,1); \draw [line width=1pt,color=black] (-10,1)-- (-8,0); \draw [line width=1pt,color=black] (-8,0)-- (-10,-1); \draw [line width=1pt,color=black] (-10,-1)-- (-10,0); \draw [fill=black] (-10,1) circle (2.5pt); \draw [fill=black] (-10,-1) circle (2.5pt); \draw [fill=red] (-10,0) circle (2.5pt); \draw [fill=red] (-8,0) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-4,0) -- (-4,1) -- (-2,0) -- (-3,-1) -- cycle; \draw [line width=1pt,color=black] (-4,0)-- (-4,1); \draw [line width=1pt,color=black] (-4,1)-- (-2,0); \draw [line width=1pt,color=black] (-2,0)-- (-3,-1); \draw [line width=1pt,color=black] (-3,-1)-- (-4,0); \draw [fill=black] (-4,1) circle (2.5pt); \draw [fill=black] (-3,-1) circle (2.5pt); \draw [fill=red] (-4,0) circle (2.5pt); \draw [fill=red] (-2,0) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (2,0) -- (2,1) -- (4,0) -- (3,-2) -- cycle; \draw [line width=1pt,color=black] (2,0)-- (2,1); \draw [line width=1pt,color=black] (2,1)-- (4,0); \draw [line width=1pt,color=black] (4,0)-- (3,-2); \draw [line width=1pt,color=black] (3,-2)-- (2,0); \draw [fill=black] (2,1) circle (2.5pt); \draw [fill=black] (3,-2) circle (2.5pt); \draw [fill=red] (2,0) circle (2.5pt); \draw [fill=red] (4,0) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (8,0) -- (9,-2) -- (10,0) -- (7,2) -- cycle; \draw [line width=1pt,color=black] (8,0)-- (9,-2); \draw [line width=1pt,color=black] (9,-2)-- (10,0); \draw [line width=1pt,color=black] (10,0)-- (7,2); \draw [line width=1pt,color=black] (7,2)-- (8,0); \draw [fill=black] (7,2) circle (2.5pt); \draw [fill=black] (9,-2) circle (2.5pt); \draw [fill=red] (8,0) circle (2.5pt); \draw [fill=red] (10,0) circle (2.5pt); \end{tikzpicture} \caption{All Fine interiors of dimension $2$ for lattice $3$-polytopes with $2$
interior lattice points}\label{2points} \end{figure} \begin{figure} \begin{tikzpicture}[x=0.6cm,y=0.6cm] \draw[step=2.0,black,thin,xshift=0.6cm,yshift=0cm] (-13.9,10.2) grid (11.9,-7.2); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-11,8) -- (-9,8) -- (-11,10) -- cycle; \draw [line width=1pt,color=black] (-11,8)-- (-9,8); \draw [line width=1pt,color=black] (-9,8)-- (-11,10); \draw [line width=1pt,color=black] (-11,10)-- (-11,8); \draw [fill=red] (-11,8) circle (2.5pt); \draw [fill=red] (-9,8) circle (2.5pt); \draw [fill=red] (-11,10) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-7,7) -- (-5,8) -- (-7,10) -- cycle; \draw [line width=1pt,color=black] (-7,7)-- (-5,8); \draw [line width=1pt,color=black] (-5,8)-- (-7,10); \draw [line width=1pt,color=black] (-7,10)-- (-7,7); \draw [fill=black] (-7,7) circle (2.5pt); \draw [fill=red] (-7,8) circle (2.5pt); \draw [fill=red] (-5,8) circle (2.5pt); \draw [fill=red] (-7,10) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-3,8) -- (-2,7) -- (-1,8) -- (-3,10) -- cycle; \draw [line width=1pt,color=black] (-3,8)-- (-2,7); \draw [line width=1pt,color=black] (-2,7)-- (-1,8); \draw [line width=1pt,color=black] (-1,8)-- (-3,10); \draw [line width=1pt,color=black] (-3,10)-- (-3,8); \draw [fill=black] (-2,7) circle (2.5pt); \draw [fill=red] (-3,8) circle (2.5pt); \draw [fill=red] (-1,8) circle (2.5pt); \draw [fill=red] (-3,10) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (1,8) -- (2,6) -- (3,8) -- (1,10) -- cycle; \draw [line width=1pt,color=black] (1,8)-- (2,6); \draw [line width=1pt,color=black] (2,6)-- (3,8); \draw [line width=1pt,color=black] (3,8)-- (1,10); \draw [line width=1pt,color=black] (1,10)-- (1,8); \draw [fill=black] (2,6) circle (2.5pt); \draw [fill=red] (1,8) circle (2.5pt); \draw [fill=red] (3,8) circle (2.5pt); \draw [fill=red] (1,10) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (5,8)
-- (6,5) -- (7,8) -- (5,10) -- cycle; \draw [line width=1pt,color=black] (5,8)-- (6,5); \draw [line width=1pt,color=black] (6,5)-- (7,8); \draw [line width=1pt,color=black] (7,8)-- (5,10); \draw [line width=1pt,color=black] (5,8)-- (5,10); \draw [fill=black] (6,5) circle (2.5pt); \draw [fill=red] (5,8) circle (2.5pt); \draw [fill=red] (7,8) circle (2.5pt); \draw [fill=red] (5,10) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-10,1) -- (-9,2) -- (-11,4) -- (-12,3) -- cycle; \draw [line width=1pt,color=black] (-12,3)-- (-10,1); \draw [line width=1pt,color=black] (-10,1)-- (-9,2); \draw [line width=1pt,color=black] (-9,2)-- (-11,4); \draw [line width=1pt,color=black] (-11,4)-- (-12,3); \draw [fill=black] (-10,1) circle (2.5pt); \draw [fill=black] (-12,3) circle (2.5pt); \draw [fill=red] (-11,2) circle (2.5pt); \draw [fill=red] (-9,2) circle (2.5pt); \draw [fill=red] (-11,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-8,4) -- (-7,2) -- (-5,1) -- (-5,2) -- (-7,4) -- cycle; \draw [line width=1pt,color=black] (-8,4)-- (-7,2); \draw [line width=1pt,color=black] (-7,2)-- (-5,1); \draw [line width=1pt,color=black] (-5,1)-- (-5,2); \draw [line width=1pt,color=black] (-5,2)-- (-7,4); \draw [line width=1pt,color=black] (-7,4)-- (-8,4); \draw [fill=black] (-8,4) circle (2.5pt); \draw [fill=black] (-5,1) circle (2.5pt); \draw [fill=red] (-7,2) circle (2.5pt); \draw [fill=red] (-5,2) circle (2.5pt); \draw [fill=red] (-7,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-4,4) -- (-3,2) -- (-2,1) -- (-1,2) -- (-3,4) -- cycle; \draw [line width=1pt,color=black] (-4,4)-- (-3,2); \draw [line width=1pt,color=black] (-3,2)-- (-2,1); \draw [line width=1pt,color=black] (-2,1)-- (-1,2); \draw [line width=1pt,color=black] (-1,2)-- (-3,4); \draw [line width=1pt,color=black] (-3,4)-- (-4,4); \draw [fill=black] (-4,4) circle (2.5pt); \draw [fill=black] (-2,1) circle (2.5pt); \draw [fill=red] (-3,2)
circle (2.5pt); \draw [fill=red] (-1,2) circle (2.5pt); \draw [fill=red] (-3,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (3,2) -- (0,5) -- (1,2) -- (2,1) -- cycle; \draw [line width=1pt,color=black] (3,2)-- (0,5); \draw [line width=1pt,color=black] (0,5)-- (1,2); \draw [line width=1pt,color=black] (1,2)-- (2,1); \draw [line width=1pt,color=black] (2,1)-- (3,2); \draw [fill=black] (2,1) circle (2.5pt); \draw [fill=black] (0,5) circle (2.5pt); \draw [fill=red] (1,2) circle (2.5pt); \draw [fill=red] (3,2) circle (2.5pt); \draw [fill=red] (1,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (4,5) -- (5,2) -- (7,1) -- (7,2) -- cycle; \draw [line width=1pt,color=black] (4,5)-- (5,2); \draw [line width=1pt,color=black] (5,2)-- (7,1); \draw [line width=1pt,color=black] (7,1)-- (7,2); \draw [line width=1pt,color=black] (7,2)-- (4,5); \draw [fill=black] (4,5) circle (2.5pt); \draw [fill=black] (7,1) circle (2.5pt); \draw [fill=red] (5,2) circle (2.5pt); \draw [fill=red] (7,2) circle (2.5pt); \draw [fill=red] (5,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (8,4) -- (9,2) -- (12,0) -- (11,2) -- (9,4) --cycle; \draw [line width=1pt,color=black] (8,4)-- (9,2); \draw [line width=1pt,color=black] (9,2)-- (12,0); \draw [line width=1pt,color=black] (12,0)-- (11,2); \draw [line width=1pt,color=black] (11,2)-- (9,4); \draw [line width=1pt,color=black] (9,4)-- (8,4); \draw [fill=black] (8,4) circle (2.5pt); \draw [fill=black] (12,0) circle (2.5pt); \draw [fill=red] (9,2) circle (2.5pt); \draw [fill=red] (11,2) circle (2.5pt); \draw [fill=red] (9,4) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-12,-2) -- (-10,-6) -- (-9,-4) -- (-11,-2) -- cycle; \draw [line width=1pt,color=black] (-12,-2)-- (-10,-6); \draw [line width=1pt,color=black] (-10,-6)-- (-9,-4); \draw [line width=1pt,color=black] (-9,-4)-- (-11,-2); \draw [line width=1pt,color=black] (-11,-2)-- (-12,-2);
\draw [fill=black] (-12,-2) circle (2.5pt); \draw [fill=black] (-10,-6) circle (2.5pt); \draw [fill=red] (-11,-4) circle (2.5pt); \draw [fill=red] (-9,-4) circle (2.5pt); \draw [fill=red] (-11,-2) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-8,-1) -- (-7,-4) -- (-6,-6) -- (-5,-4) -- cycle; \draw [line width=1pt,color=black] (-8,-1)-- (-7,-4); \draw [line width=1pt,color=black] (-7,-4)-- (-6,-6); \draw [line width=1pt,color=black] (-6,-6)-- (-5,-4); \draw [line width=1pt,color=black] (-5,-4)-- (-8,-1); \draw [fill=black] (-8,-1) circle (2.5pt); \draw [fill=black] (-6,-6) circle (2.5pt); \draw [fill=red] (-7,-4) circle (2.5pt); \draw [fill=red] (-5,-4) circle (2.5pt); \draw [fill=red] (-7,-2) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (-4,-1) -- (-2,-7) -- (-1,-4) -- cycle; \draw [line width=1pt,color=black] (-4,-1)-- (-2,-7); \draw [line width=1pt,color=black] (-2,-7)-- (-1,-4); \draw [line width=1pt,color=black] (-1,-4)-- (-4,-1); \draw [fill=black] (-4,-1) circle (2.5pt); \draw [fill=black] (-2,-7) circle (2.5pt); \draw [fill=red] (-3,-4) circle (2.5pt); \draw [fill=red] (-1,-4) circle (2.5pt); \draw [fill=red] (-3,-2) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (0,-3) -- (2,-5) -- (3,-4) -- (3,-3) -- (1,-2) -- cycle; \draw [line width=1pt,color=black] (0,-3)-- (2,-5); \draw [line width=1pt,color=black] (2,-5)-- (3,-4); \draw [line width=1pt,color=black] (3,-4)-- (3,-3); \draw [line width=1pt,color=black] (3,-3)-- (1,-2); \draw [line width=1pt,color=black] (1,-2)-- (0,-3); \draw [fill=black] (3,-3) circle (2.5pt); \draw [fill=black] (0,-3) circle (2.5pt); \draw [fill=black] (2,-5) circle (2.5pt); \draw [fill=red] (1,-4) circle (2.5pt); \draw [fill=red] (3,-4) circle (2.5pt); \draw [fill=red] (1,-2) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (4,-3) -- (5,-4) -- (7,-5) -- (7,-4) -- (6,-2) -- (5,-2) -- cycle; \draw [line
width=1pt,color=black] (4,-3)-- (5,-4); \draw [line width=1pt,color=black] (5,-4)-- (7,-5); \draw [line width=1pt,color=black] (7,-5)-- (7,-4); \draw [line width=1pt,color=black] (7,-4)-- (6,-2); \draw [line width=1pt,color=black] (6,-2)-- (5,-2); \draw [line width=1pt,color=black] (5,-2)-- (4,-3); \draw [fill=black] (6,-2) circle (2.5pt); \draw [fill=black] (4,-3) circle (2.5pt); \draw [fill=black] (7,-5) circle (2.5pt); \draw [fill=red] (5,-4) circle (2.5pt); \draw [fill=red] (7,-4) circle (2.5pt); \draw [fill=red] (5,-2) circle (2.5pt); \fill[line width=2pt,color=black,fill=black,fill opacity=0.3] (7,-3) -- (11,-5) -- (11,-3) -- (9,-2) --cycle; \draw [line width=1pt,color=black] (7,-3)-- (11,-5); \draw [line width=1pt,color=black] (11,-5)-- (11,-3); \draw [line width=1pt,color=black] (11,-3)-- (9,-2); \draw [line width=1pt,color=black] (9,-2)-- (7,-3); \draw [fill=black] (11,-3) circle (2.5pt); \draw [fill=black] (11,-5) circle (2.5pt); \draw [fill=black] (7,-3) circle (2.5pt); \draw [fill=red] (9,-4) circle (2.5pt); \draw [fill=red] (11,-4) circle (2.5pt); \draw [fill=red] (9,-2) circle (2.5pt); \end{tikzpicture} \caption{All Fine interiors of dimension $2$ for lattice $3$-polytopes with $3$ interior lattice points}\label{3points} \end{figure} \section{Geography of minimal surfaces of general type arising from lattice $3$-polytopes of lattice width $2$} We can also visualize invariants of the minimal surfaces which correspond to the lattice $3$-polytopes with Fine interior of dimension $2$. \begin{thm}(\cite[9.4]{Bat23}, \cite[5.1]{Gie22}) Let $\hat{Z}$ be a minimal surface coming from a nondegenerate toric hypersurface with Newton polytope $P$ of dimension $d$ with $\dim(F(P))=d-1$. Then \begin{align*} (K_{\hat{Z}})^{d-1}=2\mathrm{Vol}_{d-1}(F(P)). \end{align*} \end{thm} \begin{coro} If $\dim (F(P))\geq d-1$, then $2 \mathrm{Vol}_{\dim F(P)}(F(P))\in \Z$. \end{coro} \begin{proof} If $\dim (F(P))=d-1$, then $2\mathrm{Vol}_{d-1}(F(P))=(K_{\hat{Z}})^{d-1}\in \Z$.
If $\dim (F(P))=d$, then we have for $Q:=P \times [-1,1]$ that $F(Q)=F(P)$ and $2\mathrm{Vol}_{d}(F(P))=2\mathrm{Vol}_{d}(F(Q))=(K_{\hat{Z}})^{d}\in \Z$. \end{proof} If $d=3$, then we get the following corollary for the Chern numbers. \begin{coro}\cite[10.1]{Bat23} \begin{align*} c_1^2(\hat{Z})=&2\mathrm{Vol}_{2}(F(P))\\ c_2(\hat{Z})=&12(1+\chi)-c_1^2=12(1+|F(P)\cap M|)-c_1^2 \end{align*} \end{coro} Recall that the Chern numbers of surfaces of general type are restricted by general theory. We have the following theorem. \begin{thm}\cite[VII. (1.1)]{BHPV04} $c_1^2, c_2\in \Z_{\geq 0}$, $c_1^2+c_2=12\chi \equiv 0 \mod 12$ and \begin{align*} c_1^2\leq& 3c_2 &&\quad (\text{Bogomolov–Miyaoka–Yau inequality})\\ c_1^2\geq 2\chi-4=& \frac{c_2-36}{5} &&\quad (\text{Noether inequality}) \end{align*} \end{thm} For surfaces on the Noether line we have the following corollary. \begin{coro} $P$ defines a surface with Chern numbers on the Noether line, i.e.\ with $c_1^2=2\chi-4$, if and only if $F(P)$ is a hollow polygon. Moreover, for every point on the Noether line there exists a suitable hollow $F(P)$. \end{coro} \begin{proof} The inequality $c_1^2\geq 2\chi-4$ gives for the Fine interior $2\mathrm{Vol}_2(F(P))\geq 2|F(P)\cap M|-4$. By Pick's formula $2\mathrm{Vol}_2(F(P))\geq 4|F(P)\cap M|-2|\partial F(P) \cap M|-4\geq 2| F(P) \cap M|-4$ with equality if and only if $F(P)$ is a hollow polygon. \end{proof} If the Fine interior is a lattice polygon, we get $c_1^2, c_2 \in 2\Z_{\geq 0}$ and the following restrictions from the theory of lattice polygons. \begin{thm}[\cite{Sco76}] $|\partial P \cap M|\leq 2|\mathrm{int}(P)\cap M|+6$ or $P\cong 3\Delta_2$. \end{thm} \begin{coro} \begin{align*} c_1^2=2\mathrm{Vol}_2(F(P))\geq \frac{8}{3}|F(P)\cap M|-8, \quad \text{equivalently}\quad 7c_1^2\geq 2c_2-96. \end{align*} \end{coro} \begin{proof} We have $3|\partial P \cap M|\leq 2|P\cap M|+6$ or $P\cong 3\Delta_2$ from Scott's inequality.
So we get by Pick's theorem \begin{align*} c_1^2=2\mathrm{Vol}_2(F(P))= 4|F(P)\cap M|-2|\partial F(P) \cap M|-4\geq \frac{8}{3}|F(P)\cap M|-8, \end{align*} which together with $c_2=12(1+|F(P)\cap M|)-c_1^2$ is equivalent to $7c_1^2\geq 2c_2-96$. \end{proof} \begin{prop} If $F(P)$ is a lattice polygon with $l:=|F(P)\cap M|$ lattice points, then $\mathrm{vol}(F(P))\leq l-\frac{5}{2}$ and $c_1^2\leq \frac{1}{2}(c_2-42)$. \end{prop} \begin{proof} We have at least $3$ boundary lattice points, so by Pick's theorem $\mathrm{vol}(F(P))\leq l-\frac{5}{2}$. Now we have \begin{align*} c_1^2=2\mathrm{Vol}_2(F(P))\leq 4 |F(P)\cap M|-10=\frac{c_2+c_1^2}{3}-14, \end{align*} which is equivalent to $c_1^2\leq \frac{1}{2}(c_2-42)$. \end{proof} Our classification of Fine interiors in \ref{classification} can be translated to the following theorem; the result is visualized in Figure \ref{Chern_num_drawing}. \begin{thm} A pair $(c_1^2,c_2)$ is a pair of Chern numbers of a non-degenerate toric hypersurface corresponding to a lattice $3$-polytope of lattice width $2$ with up to $40$ interior lattice points and $2$-dimensional Fine interior if and only if it corresponds to a pair $(\chi,c_1^2)$ in the following table \begin{center} \begin{tabular}{c|c|c} $\chi$ & number of Fine interiors & $c_1^2$ is an element of \\\hline $2$&$12$&$[1,4]\cap \Z$\\ $3$&$17$&$[2,6]\cap \Z$\\ $4$&$48$&$[4,10]\cap\Z$\\ $5$&$86$&$[6,12]\cap \Z$\\ $6$&$177$&$[8,17]\cap \Z$\\ $7$&$279$&$[10,19]\cap \Z$\\ $8$&$504$&$[12,24]\cap \Z$\\ $9$&$768$&$[14,26]\cap \Z$\\ $10$&$1222$&$[16,31]\cap \Z$\\ $11$&$1850$&$[18,34]\cap \Z$\\ $12$&$2881$&$[20,39]\cap \Z$\\ $13$&$4160$&$[22,41]\cap \Z$\\ $14$&$6150$&$[24,47]\cap \Z$\\ $15$&$8480$&$[26,49]\cap \Z$\\ $16$&$12066$&$[28,54]\cap \Z$\\ $17$&$16746$&$[30,56]\cap \Z$\\ $18$&$23462$&$[32,62]\cap \Z$\\ $19$&$31601$&$[34,64]\cap \Z$\\ $20$&$42914$&$[36,69]\cap \Z$\\ $21$&$56675$&$[38,72]\cap \Z$\\ $22$&$75457$&$[40,77]\cap \Z$\\ $23$&$98713$&$[42,79]\cap \Z$\\ $24$&$129468$&$[44,85]\cap \Z$\\ $25$&$167366$&$[46,87]\cap \Z$\\ $26$&$216764$&$[48,92]\cap \Z$\\ $27$&$276569$&$[50,95]\cap \Z$\\ $28$&$352907$&$[52,100]\cap \Z$\\ $29$&$446184$&$[54,103]\cap \Z$\\
$30$&$564041$&$[56,108]\cap \Z$\\ $31$&$706531$&$[58,110]\cap \Z$\\ $32$&$884749$&$[60,116]\cap \Z$\\ $33$&$1097809$&$[62,118]\cap \Z$\\ $34$&$1362551$&$[64,124]\cap \Z$\\ $35$&$1681298$&$[66,127]\cap \Z$\\ $36$&$2071958$&$[68,131]\cap \Z$\\ $37$&$2535238$&$[70,134]\cap \Z$\\ $38$&$3099580$&$[72,139]\cap \Z$\\ $39$&$3768629$&$[74,142]\cap \Z$\\ $40$&$4578248$&$[76,147]\cap \Z$\\ \end{tabular} \end{center} \end{thm} \begin{landscape} \begin{figure} \begin{center} \begin{tikzpicture}[x=0.046cm,y=0.092cm] \draw[color=black] (140,140) node { $c_1^2\leq 3c_2$ (Bogomolov-Miyaoka-Yau inequality) }; \draw[color=black] (295,35) node { $5c_1^2\geq c_2-36$ (Noether inequality) }; \draw[color=red] (165,17) node { $7c_1^2\geq 2c_2-96$}; \draw[color=red] (165,12) node { (Scott inequality) }; \draw[color=red] (200,100) node { $2c_1^2\leq c_2-42$ }; \draw [line width=1pt,color=black] (-5,0)-- (460,0); \draw [line width=1pt,color=black] (0,-3)-- (0,158); \draw [line width=1pt,color=red] (54,6)-- (358,158); \draw [line width=1pt,color=red] (104,16)-- (454,116); \draw [line width=1pt,color=red] (54,6)-- (104,16); \draw [line width=1pt,color=black] (0,0)-- (51,153); \draw [line width=1pt,color=black] (36,0)-- (456,84); \foreach \n in {1,...,4}{\draw [fill=black] (36-\n,\n) circle (0.5pt);} \foreach \n in {2,...,6}{\draw [fill=black] (48-\n,\n) circle (0.5pt);} \foreach \n in {4,...,10}{\draw [fill=black] (60-\n,\n) circle (0.5pt);} \foreach \n in {6,...,12}{\draw [fill=black] (72-\n,\n) circle (0.5pt);} \foreach \n in {8,...,17}{\draw [fill=black] (84-\n,\n) circle (0.5pt);} \foreach \n in {10,...,19}{\draw [fill=black] (96-\n,\n) circle (0.5pt);} \foreach \n in {12,...,24}{\draw [fill=black] (108-\n,\n) circle (0.5pt);} \foreach \n in {14,...,26}{\draw [fill=black] (120-\n,\n) circle (0.5pt);} \foreach \n in {16,...,31}{\draw [fill=black] (132-\n,\n) circle (0.5pt);} \foreach \n in {18,...,34}{\draw [fill=black] (144-\n,\n) circle (0.5pt);} \foreach \n in {20,...,39}{\draw 
[fill=black] (156-\n,\n) circle (0.5pt);} \foreach \n in {22,...,41}{\draw [fill=black] (168-\n,\n) circle (0.5pt);} \foreach \n in {24,...,47}{\draw [fill=black] (180-\n,\n) circle (0.5pt);} \foreach \n in {26,...,49}{\draw [fill=black] (192-\n,\n) circle (0.5pt);} \foreach \n in {28,...,54}{\draw [fill=black] (204-\n,\n) circle (0.5pt);} \foreach \n in {30,...,56}{\draw [fill=black] (216-\n,\n) circle (0.5pt);} \foreach \n in {32,...,62}{\draw [fill=black] (228-\n,\n) circle (0.5pt);} \foreach \n in {34,...,64}{\draw [fill=black] (240-\n,\n) circle (0.5pt);} \foreach \n in {36,...,69}{\draw [fill=black] (252-\n,\n) circle (0.5pt);} \foreach \n in {38,...,72}{\draw [fill=black] (264-\n,\n) circle (0.5pt);} \foreach \n in {40,...,77}{\draw [fill=black] (276-\n,\n) circle (0.5pt);} \foreach \n in {42,...,79}{\draw [fill=black] (288-\n,\n) circle (0.5pt);} \foreach \n in {44,...,85}{\draw [fill=black] (300-\n,\n) circle (0.5pt);} \foreach \n in {46,...,87}{\draw [fill=black] (312-\n,\n) circle (0.5pt);} \foreach \n in {48,...,92}{\draw [fill=black] (324-\n,\n) circle (0.5pt);} \foreach \n in {50,...,95}{\draw [fill=black] (336-\n,\n) circle (0.5pt);} \foreach \n in {52,...,100}{\draw [fill=black] (348-\n,\n) circle (0.5pt);} \foreach \n in {54,...,103}{\draw [fill=black] (360-\n,\n) circle (0.5pt);} \foreach \n in {56,...,108}{\draw [fill=black] (372-\n,\n) circle (0.5pt);} \foreach \n in {58,...,110}{\draw [fill=black] (384-\n,\n) circle (0.5pt);} \foreach \n in {60,...,116}{\draw [fill=black] (396-\n,\n) circle (0.5pt);} \foreach \n in {62,...,118}{\draw [fill=black] (408-\n,\n) circle (0.5pt);} \foreach \n in {64,...,124}{\draw [fill=black] (420-\n,\n) circle (0.5pt);} \foreach \n in {66,...,127}{\draw [fill=black] (432-\n,\n) circle (0.5pt);} \foreach \n in {68,...,131}{\draw [fill=black] (444-\n,\n) circle (0.5pt);} \foreach \n in {70,...,134}{\draw [fill=black] (456-\n,\n) circle (0.5pt);} \foreach \n in {72,...,139}{\draw [fill=black] (468-\n,\n) circle 
(0.5pt);} \foreach \n in {74,...,142}{\draw [fill=black] (480-\n,\n) circle (0.5pt);} \foreach \n in {76,...,147}{\draw [fill=black] (492-\n,\n) circle (0.5pt);} \end{tikzpicture} \caption{The Chern numbers of the classified surfaces of general type}\label{Chern_num_drawing} \end{center} \end{figure} \end{landscape} \end{document}
2412.17642v1
http://arxiv.org/abs/2412.17642v1
Characterization of Double-Arborescences and their Minimum-Word-Representants
\documentclass{llncs} \usepackage{amssymb,stmaryrd,mathrsfs,dsfont,mathtools,enumitem} \usepackage[ruled,linesnumbered]{algorithm2e} \usepackage{algpseudocode} \newtheorem{notation}[theorem]{Notation} \input xy \xyoption{all} \usepackage{tikz} \tikzstyle{vertex}=[circle, draw, inner sep=0pt, minimum size=2.7pt] \newcommand{\vertex}{\node[vertex]} \newcommand{\srec}{\!\;_{m}{\ostar}_{m'}\!\;} \begin{document} \frontmatter \pagestyle{headings} \mainmatter \title{Characterization of Double-Arborescences and their Minimum-Word-Representants} \titlerunning{Double-Arborescences and their Minimum-Word-Representants} \author{Tithi Dwary \and K. V. Krishna} \authorrunning{Tithi Dwary \and K. V. Krishna} \institute{Indian Institute of Technology Guwahati\\ \email{[email protected]};\;\;\; \email{[email protected]}} \maketitle \begin{abstract} A double-arborescence is a treelike comparability graph with an all-adjacent vertex. In this paper, we first give a forbidden induced subgraph characterization of double-arborescences, where we prove that double-arborescences are precisely $P_4$-free treelike comparability graphs. Then, we characterize a more general class consisting of $P_4$-free distance-hereditary graphs using split-decomposition trees. Consequently, using split-decomposition trees, we characterize double-arborescences and one of their subclasses, viz., arborescences; a double-arborescence is an arborescence if its all-adjacent vertex is a source or a sink. In the context of word-representable graphs, it is an open problem to find the classes of word-representable graphs whose minimum-word-representants are of length $2n - k$, where $n$ is the number of vertices of the graph and $k$ is its clique number. Contributing to the open problem, we devise an algorithmic procedure and show that the class of double-arborescences is one such class. To the best of our knowledge, the class of double-arborescences is the first example satisfying the criteria given in the open problem for an arbitrary $k$. 
\end{abstract} \keywords{Distance-hereditary graphs; treelike comparability graphs; split decomposition; arborescences; minimum-word-representants.} \section{Introduction} This work aims at characterizing a special class of comparability graphs, viz., double-arborescences, and also finding their minimum-word-representants. The graphs in this work are simple and connected. In this section and Section \ref{split-dec-trees}, we present the requisite background material and fix the notation. A graph $G = (V, E)$ is called a comparability graph if it admits a transitive orientation, i.e., an assignment of directions to the edges of $G$ such that if $\overrightarrow{ab} \in E$ and $\overrightarrow{bc} \in E$, then $\overrightarrow{ac} \in E$. Based on a given transitive orientation, every comparability graph $G$ induces a partially ordered set (in short, poset), denoted by $P_G$. If $G$ admits a transitive orientation such that the Hasse diagram of the poset $P_G$ is a tree, then $G$ is called a treelike comparability graph and the corresponding orientation is called a treelike orientation. It was shown in \cite{cornelsen2009treelike} that a treelike orientation of a comparability graph, if it exists, is unique up to isomorphism and reversing the whole orientation. A treelike comparability graph $G$ with an all-adjacent vertex is called a double-arborescence, i.e., there is a vertex $r$ such that $V = \{r\} \cup N_G(r)$, where $N_G(r)$ is the neighborhood of $r$ in $G$. We consider $r$ as the root of $G$. In addition, under the treelike orientation, if the root $r$ is a source (or a sink), i.e., the indegree (respectively, outdegree) of $r$ is zero, then $G$ is called an arborescence. The treelike orientation of a (double-)arborescence is called the (double-)arborescence orientation. Note that every arborescence is a double-arborescence but not conversely. We call a double-arborescence a strict-double-arborescence if it is not an arborescence. 
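The transitivity condition above is easy to test mechanically. The following Python sketch (our own illustration, with hypothetical helper names, not part of the paper's machinery) checks whether a set of directed arcs is transitive, and lists the all-adjacent vertices of an undirected graph, i.e., the candidate roots $r$ with $V = \{r\} \cup N_G(r)$:

```python
from collections import defaultdict

def is_transitive(arcs):
    """True iff for all arcs a->b and b->c, the arc a->c is also present."""
    arcset = set(arcs)
    return all((a, c) in arcset
               for (a, b) in arcset
               for (b2, c) in arcset
               if b == b2 and a != c)

def all_adjacent_vertices(n, edges):
    """Vertices of {0, ..., n-1} adjacent to every other vertex."""
    nbrs = defaultdict(set)
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return [v for v in range(n) if len(nbrs[v]) == n - 1]
```

For instance, the arc set $\{0\to 1,\, 1\to 3,\, 0\to 3,\, 0\to 2\}$ is a transitive orientation of a graph in which vertex $0$ is all-adjacent.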
A treelike comparability graph $G$ is called a path of $k$ double-arborescences (or simply, a path of double-arborescences) if $G$ consists precisely of $k$ double-arborescences together with a path connecting their roots. The smallest possible $k$ such that $G$ is a path of $k$ double-arborescences can be determined in linear time \cite{cornelsen2009treelike}. These graphs are conveniently depicted and understood via their Hasse diagrams or the digraphs of their transitive reductions\footnote{The transitive reduction of a transitive orientation is obtained by deleting the transitive edges.}. For example, refer to Fig. \ref{fig_4} for graphs of these types given by their transitive reductions, in which (b) is a strict-double-arborescence. For details on arborescences and related graphs, one may refer to \cite{golumbic2022they} and the references thereof. \begin{figure}[t] \centering \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=0.8] \vertex (a_1) at (0,0) [fill=black] {}; \vertex (a_2) at (-0.7,0.5) [fill=black] {}; \vertex (a_3) at (0.7,0.5) [fill=black] {}; \vertex (a_4) at (-1,1) [fill=black] {}; \vertex (a_5) at (-0.4,1) [fill=black] {}; \vertex (a_6) at (0.4,1) [fill=black] {}; \vertex (a_7) at (1,1) [fill=black] {}; \vertex (a_8) at (0,-0.5) [fill=black,label=below:$r$] {}; \vertex (a_9) at (0,0.5) [fill=black] {}; \path[->] (a_8) edge (a_1) (a_1) edge (a_2) (a_1) edge (a_3) (a_2) edge (a_4) (a_2) edge (a_5) (a_3) edge (a_6) (a_3) edge (a_7) (a_1) edge (a_9); \vertex (a'_1) at (3,1) [fill=black,label=above:$r$] {}; \vertex (a'_2) at (2.4,0.5) [fill=black] {}; \vertex (a'_3) at (3.6,0.5) [fill=black] {}; \vertex (a'_4) at (3,0.5) [fill=black] {}; \vertex (a'_5) at (3,0) [fill=black] {}; \vertex (a'_6) at (3.6,0) [fill=black] {}; \vertex (a'_7) at (3,-0.5) [fill=black] {}; \vertex (a'_8) at (2.6,-0.5) [fill=black] {}; \path[<-] (a'_1) edge (a'_2) (a'_1) edge (a'_3) (a'_1) edge (a'_4) (a'_4) edge (a'_5) (a'_3) edge (a'_6) (a'_5) edge (a'_7) (a'_5) 
edge (a'_8); \end{tikzpicture}\] (a) Arborescences \end{minipage} \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=0.8] \vertex (a_1) at (0,1) [fill=black] {}; \vertex (a_2) at (-0.7,1.5) [fill=black] {}; \vertex (a_3) at (0.7,1.5) [fill=black] {}; \vertex (a_6) at (0.8,2) [fill=black] {}; \vertex (a_8) at (0,0.5) [fill=black,label=left:$r$] {}; \vertex (a_9) at (0,1.5) [fill=black] {}; \vertex (a_{10}) at (-0.2,2) [fill=black] {}; \vertex (a_{11}) at (0.2,2) [fill=black] {}; \vertex (a'_2) at (-0.5,0) [fill=black] {}; \vertex (a'_3) at (0.5,0) [fill=black] {}; \vertex (a'_5) at (-1,-0.5) [fill=black] {}; \vertex (a'_6) at (0.2,-0.5) [fill=black] {}; \vertex (a'_7) at (0.8,-0.5) [fill=black] {}; \path[->] (a_8) edge (a_1) (a_1) edge (a_2) (a_1) edge (a_3) (a_1) edge (a_9) (a_9) edge (a_{10}) (a_9) edge (a_{11}) (a_3) edge (a_6); \path[->] (a'_2) edge (a_8) (a'_3) edge (a_8) (a'_5) edge (a'_2) (a'_6) edge (a'_3) (a'_7) edge (a'_3); \end{tikzpicture}\] (b) Double-arborescence \end{minipage} \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=0.8] \vertex (a_1) at (0,1) [fill=black] {}; \vertex (a_2) at (-0.7,1.5) [fill=black] {}; \vertex (a_3) at (0.7,1.5) [fill=black] {}; \vertex (a_8) at (0,0.5) [fill=black,label=left:$r_1$] {}; \vertex (a_9) at (0,1.5) [fill=black] {}; \vertex (a_{10}) at (-0.2,2) [fill=black] {}; \vertex (a_{11}) at (0.2,2) [fill=black] {}; \vertex (a'_2) at (-0.5,0) [fill=black] {}; \vertex (a'_3) at (0.5,0) [fill=black] {}; \vertex (a'_5) at (-1,-0.5) [fill=black] {}; \vertex (a'_6) at (0.2,-0.5) [fill=black] {}; \vertex (a'_7) at (0.8,-0.5) [fill=black] {}; \vertex (1) at (1.7,0.5) [fill=black,label=below:$r_2$] {}; \vertex (2) at (1.3,1) [fill=black] {}; \vertex (3) at (2.1,1) [fill=black] {}; \vertex (4) at (2.1,1.5) [fill=black] {}; \vertex (5) at (3.4,0.5) [fill=black,label=above:$r_3$] {}; \vertex (6) at (3,0) [fill=black] {}; \vertex (7) at (3.8,0) [fill=black] {}; \vertex (8) at (3.4,-0.5) 
[fill=black] {}; \vertex (9) at (4.2,-0.5) [fill=black] {}; \vertex (10) at (2.5,-0.5) [fill=black] {}; \vertex (11) at (5.1,0.5) [fill=black,label=right:$r_4$] {}; \vertex (12) at (4.7,1) [fill=black] {}; \vertex (13) at (5.5,1) [fill=black] {}; \vertex (14) at (5.1,1.5) [fill=black] {}; \vertex (15) at (5.9,1.5) [fill=black] {}; \vertex (16) at (5.5,0) [fill=black] {}; \vertex (17) at (5.1,-0.5) [fill=black] {}; \vertex (18) at (5.9,-0.5) [fill=black] {}; \vertex (19) at (5.8,2) [fill=black] {}; \path[->] (a_8) edge (a_1) (a_1) edge (a_2) (a_1) edge (a_3) (a_1) edge (a_9) (a_9) edge (a_{10}) (a_9) edge (a_{11}) (1) edge (2) (1) edge (3) (3) edge (4) (6) edge (5) (7) edge (5) (8) edge (7) (9) edge (7) (10) edge (6) (11) edge (12) (11) edge (13) (13) edge (14) (13) edge (15) (16) edge (11) (17) edge (16) (18) edge (16) (15) edge (19); \path[->] (1) edge (a_8) (1) edge (5) (5) edge (11); \path[->] (a'_2) edge (a_8) (a'_3) edge (a_8) (a'_5) edge (a'_2) (a'_6) edge (a'_3) (a'_7) edge (a'_3); \end{tikzpicture}\] (c) Path of double-arborescences \end{minipage} \caption{Examples of treelike comparability graphs in terms of transitive reductions} \label{fig_4} \end{figure} The arborescences were first studied by Wolk \cite{wolk1962comparability,wolk1965note}, who characterized them as $(C_4, P_4)$-free graphs, i.e., the graphs in which neither $P_4$ nor $C_4$ is present as an induced subgraph, where $C_n$ denotes a cycle on $n$ vertices and $P_n$ a path on $n$ vertices. Further, Golumbic characterized the arborescences as trivially perfect graphs, i.e., the graphs in which for every induced subgraph the size of a largest independent set equals the number of maximal cliques \cite{golumbic1978trivially}. Based on the aforementioned characterizations, linear-time algorithms for recognizing the arborescences are presented in the literature (see \cite{yan1996quasi,chu2008simple}). The arborescences were further generalized and studied in \cite{jung_1978}. 
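The $(C_4, P_4)$-freeness in Wolk's characterization can be verified by brute force on small graphs. Below is a short Python sketch (illustrative only; all names are ours) that searches a graph, given as an adjacency dictionary, for an induced copy of a fixed four-vertex pattern:

```python
from itertools import combinations, permutations

# pattern graphs on vertex positions 0..3
P4 = [(0, 1), (1, 2), (2, 3)]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]

def induces(adj, verts, pattern):
    """Does the ordered tuple `verts` induce exactly the pattern's edges?"""
    idx = {v: i for i, v in enumerate(verts)}
    pat = {frozenset(e) for e in pattern}
    return all((b in adj[a]) == (frozenset((idx[a], idx[b])) in pat)
               for a, b in combinations(verts, 2))

def has_induced(adj, pattern, k=4):
    """Brute-force search for an induced copy of a k-vertex pattern."""
    return any(induces(adj, p, pattern)
               for c in combinations(adj, k)
               for p in permutations(c))
```

For example, the path on four vertices contains an induced $P_4$ but no $C_4$, while the star $K_{1,3}$, a small arborescence, contains neither.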
In \cite{cornelsen2009treelike}, Cornelsen and Di Stefano proved that a graph is a path of double-arborescences if and only if it is a treelike permutation graph. However, no characterization is available for double-arborescences in the literature. A word over a finite set of letters is a finite sequence written by juxtaposing the letters of the sequence. A subword $u$ of a word $w$ is a subsequence of the sequence $w$, denoted by $u \ll w$. For instance, $abcabb \ll abbcacbba$. Let $w$ be a word over a set $A$, and $B$ be a subset of $A$. We write $w_B$ to denote the subword of $w$ that precisely consists of all occurrences of the letters of $B$. For example, if $w=abbcacbba$, then $w_{\{b, c\}} = bbccbb$. We say that the letters $a$ and $b$ alternate in $w$ if $w_{\{a, b\}}$ is of the form either $ababa\cdots$ or $babab\cdots$, which can be of even or odd length. A word $w$ is called $k$-uniform if every letter occurs exactly $k$ times in $w$. A graph $G = (V, E)$ is called a word-representable graph if there is a word $w$ with the symbols of $V$ such that, for all $a, b \in V$, $a$ and $b$ are adjacent in $G$ if and only if $a$ and $b$ alternate in $w$; such a word $w$ is called a word-representant of $G$. If a graph is word-representable, then it has infinitely many word-representants \cite{MR2467435}. A word-representable graph $G$ is said to be $k$-word-representable if a $k$-uniform word represents it. It is known that every word-representable graph is $k$-word-representable, for some $k$ \cite{MR2467435}. For a comprehensive introduction to the topic of word-representable graphs, one may refer to the monograph \cite{words&graphs} by Kitaev and Lozin. Further, a minimum-word-representant of a word-representable graph $G$ is a shortest (in terms of its length) word-representant of $G$. The length of a minimum-word-representant of $G$ is denoted by $\ell(G)$. Note that a minimum-word-representant of a word-representable graph need not be uniform. 
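The alternation condition above is straightforward to check by projecting a word onto two letters. The following Python sketch (our own illustration, not the authors' code) tests alternation and whether a word is a word-representant of a given graph:

```python
from itertools import combinations

def alternate(w, a, b):
    """Do the letters a and b alternate in the word w?"""
    sub = [x for x in w if x in (a, b)]  # the subword w_{a,b}
    return all(x != y for x, y in zip(sub, sub[1:]))

def represents(w, vertices, edges):
    """Is w a word-representant of the graph (vertices, edges)?"""
    E = {frozenset(e) for e in edges}
    return all(alternate(w, a, b) == (frozenset((a, b)) in E)
               for a, b in combinations(vertices, 2))
```

For the word $w = abbcacbba$ from the text, $b$ and $c$ do not alternate (indeed $w_{\{b,c\}} = bbccbb$), whereas the $2$-uniform word $abcabc$ represents the triangle $K_3$.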
Let $G$ be a word-representable graph on $n$ vertices. It is evident that $\ell(G) \ge n$. It is known that the class of circle graphs\footnote{A circle graph is an intersection graph of chords of a circle.} characterizes the 2-word-representable graphs \cite{MR2914710}. Thus, an obvious upper bound for $\ell(G)$ of a circle graph is $2n$. In the seminal work \cite{Marisa_2020}, Gaetz and Ji considered the subclasses, viz., cycles and trees, of circle graphs and provided explicit formulae for both the length and the number of minimum-word-representants. In \cite{Eshwar_2024}, Srinivasan and Hariharasubramanian proved that there is no circle graph $G$ with $\ell(G) = 2n$ and an edgeless graph $G$ is the only circle graph having $\ell(G) = 2n-1$. Moreover, they showed that $\ell(G) = 2n-2$ for a triangle-free circle graph $G$ containing at least one edge. However, they established through an example (see \cite[Example 2.15]{Eshwar_2024}) that $\ell(G)$ need not be $2n-k$ for a word-representable graph $G$ with clique number\footnote{A clique is a complete subgraph. The size of a maximum clique in a graph is its clique number.} $k$. In this connection, they posed an open problem to find classes of word-representable graphs $G$ with clique number $k$ such that $\ell(G) = 2n-k$. So far, no examples of such graph classes are available in the literature for an arbitrary $k$. In this work, we employ the notion of split-decomposition trees and characterize double-arborescences as well as arborescences. Further, we find the minimum-word-representants of double-arborescences and show that this class of graphs serves as an example for the above-mentioned open problem from \cite{Eshwar_2024}. In Section 2, we recall the notion of split-decomposition trees and present relevant results from the literature. 
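For very small graphs, $\ell(G)$ can be computed by exhaustive search over words; the sketch below (purely illustrative and exponential in the word length, with names of our choosing) finds the length of a minimum-word-representant:

```python
from itertools import combinations, product

def alternate(w, a, b):
    """Do the letters a and b alternate in the word w?"""
    sub = [x for x in w if x in (a, b)]
    return all(x != y for x, y in zip(sub, sub[1:]))

def min_representant_length(vertices, edges, max_len=8):
    """Smallest length of a word-representant, by exhaustive search."""
    E = {frozenset(e) for e in edges}
    for length in range(len(vertices), max_len + 1):
        for w in product(vertices, repeat=length):
            if set(w) == set(vertices) and all(
                    alternate(w, a, b) == (frozenset((a, b)) in E)
                    for a, b in combinations(vertices, 2)):
                return length
    return None
```

For the triangle $K_3$ ($n = 3$, clique number $k = 3$) this returns $3 = 2n - k$, and for the path $P_3$ (triangle-free with at least one edge) it returns $4 = 2n - 2$, matching the results quoted above.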
In Section 3, we first provide a forbidden induced subgraph characterization of double-arborescences, where we prove that double-arborescences are precisely $P_4$-free treelike comparability graphs. Also, using split decomposition we obtain an alternative proof for the well-known characterization of arborescences given in \cite[Theorem 3]{wolk1965note} (also see \cite{wolk1962comparability}). Next, we consider a class more general than treelike comparability graphs, viz., distance-hereditary graphs. We introduce a notion called s-leaf-path in the minimal split-decomposition tree of a distance-hereditary graph, and using this notion we characterize the class of $P_4$-free distance-hereditary graphs. Consequently, we obtain characterizations of arborescences and double-arborescences with respect to their minimal split-decomposition trees. Finally, in Section 4, we note that arborescences and double-arborescences are word-representable graphs and devise an algorithm based on breadth-first search to construct minimum-word-representants of arborescences. Moreover, using the algorithm, we also obtain minimum-word-representants of double-arborescences. We prove that if $G$ is a double-arborescence on $n$ vertices with clique number $k$, then $\ell(G) = 2n - k$. \section{Split-Decomposition Trees} \label{split-dec-trees} In this section, we recall the concepts of split decomposition of a connected graph (from \cite{circlegraph3}) and graph-labelled trees (from \cite{graph-labelled_2012}), and present them in a unified framework for fixing the notation of split-decomposition trees. The concept of split decomposition was introduced by Cunningham in \cite{cunningham_2,cunningham_1} and it was used to recognize certain classes of graphs such as circle graphs \cite{circlegraph3}, parity graphs \cite{cicerone1999extension}, and distance-hereditary graphs \cite{DHgraph1}. Recently, split decomposition has also been used in the context of word-representable graphs \cite{tithi}. 
Let $G = (V, E)$ be a connected graph. A split of $G$ is a partition $\{V_1, V_2\}$ of $V$ such that each of $V_1$ and $V_2$ contains at least two vertices, and every vertex in $N_G(V_1)$ is adjacent to every vertex in $N_G(V_2)$, where, for $A \subseteq V$, $N_G(A) = \bigcup_{a \in A} N_G(a) \setminus A$, called the neighborhood of $A$. A prime graph is a graph without any split. A split decomposition of a graph $G = (V, E)$ with split $\{V_1, V_2\}$ is represented as a disjoint union of the induced subgraphs $G[V_1]$ and $G[V_2]$ along with an edge $e = \overline{v_1v_2}$, where $v_1$ and $v_2$ are two new vertices such that $v_1$ and $v_2$ are adjacent to each vertex of $N_G(V_2)$ and $N_G(V_1)$, respectively. We call $v_1$ and $v_2$ marked vertices and $e$ a marked edge. By deleting the edge $e$, we obtain two components with vertex sets $V_1 \cup \{v_1\}$ and $V_2 \cup \{v_2\}$, called the split components. The two components are then decomposed recursively to obtain a split decomposition of $G$. A minimal split decomposition of a graph is a split decomposition whose split components are cliques, stars\footnote{A star is a tree on $n$ vertices with one vertex, called the center, of degree $n-1$.}, or prime graphs, and whose number of split components is minimized. While there can be multiple split decompositions of a graph, a minimal split decomposition of a graph is unique \cite{cunningham_2,cunningham_1}. 
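The split condition can be checked directly from the definition. A minimal Python sketch (with helper names of our own choosing) for testing whether a partition $\{V_1, V_2\}$ is a split:

```python
def neighborhood(adj, A):
    """N_G(A): vertices outside A having a neighbor in A."""
    A = set(A)
    return set().union(*(adj[a] for a in A)) - A

def is_split(adj, V1, V2):
    """Is {V1, V2} a split: |Vi| >= 2 and every vertex of N(V1)
    is adjacent to every vertex of N(V2)?"""
    if len(V1) < 2 or len(V2) < 2:
        return False
    # note: N(V1) lies inside V2 and N(V2) inside V1
    n1, n2 = neighborhood(adj, V1), neighborhood(adj, V2)
    return all(v in adj[u] for u in n1 for v in n2)
```

For the path $0$-$1$-$2$-$3$, the partition $\{\{0,1\},\{2,3\}\}$ is a split; for the cycle $C_4$ that partition is not a split, but $\{\{0,2\},\{1,3\}\}$ is.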
\begin{figure}[t] \centering \[\begin{tikzpicture}[scale=0.8] \vertex (a_1) at (-1, 1) [fill=black,label=above:$1'$] {}; \vertex (a_2) at (-1, -1) [fill=black,label=below:$2'$] {}; \vertex (a_3) at (0,0) [fill=black,label=above:${\alpha_1}$] {}; \vertex (a_4) at (1,0) [fill=black,label=below:${\alpha_2}$] {}; \vertex (a_5) at (1,1) [fill=black,label=above:$3'$] {}; \vertex (a_6) at (2,0) [fill=black,label=above:${\alpha_3}$] {}; \vertex (a_7) at (2,-1) [fill=black,label=below:$4'$] {}; \vertex (a_8) at (3,0) [fill=black,label=below:${\alpha_4}$] {}; \vertex (a_9) at (4,1) [fill=black,label=above:$5'$] {}; \vertex (a_{10}) at (4,-1) [fill=black,label=below:$6'$] {}; \path (a_1) edge (a_3) (a_2) edge (a_3) (a_3) edge (a_4) (a_4) edge (a_5) (a_4) edge (a_6) (a_6) edge (a_7) (a_6) edge (a_8) (a_8) edge (a_9) (a_8) edge (a_{10}); \end{tikzpicture}\] \caption{A tree $T$} \label{Tree} \end{figure} We now recall the concept of graph-labelled tree introduced by Gioan and Paul \cite{graph-labelled_2012} and its relation with a split decomposition of a graph. Let $T$ be a tree and $\mathcal{F}$ be a family of vertex disjoint graphs. A graph-labelled tree, denoted by $T_{\mathcal{F}}$, is the graph obtained from $T$ by labelling (in fact, by inserting) every internal vertex $\alpha$ of degree $k$ ($\ge 2$) by a graph $G_{\alpha} \in \mathcal{F}$ on $k$ vertices such that there is a bijection from the edges of $T$ incident to $\alpha$ to the vertices of $G_{\alpha}$ (and by replacing the end point $\alpha$ of a tree edge with the corresponding vertex of $G_\alpha$). Note that the set of pendant (degree one) vertices of $T_{\mathcal{F}}$ is precisely the set of leaves of $T$. Given a graph-labelled tree $T_{\mathcal{F}}$ and the family $\mathcal{F}$, we can determine the underlying tree structure $T$. For clarity, we encircle the graphs $G_\alpha$ in $T_{\mathcal{F}}$ that are replacing the internal vertices $\alpha$ of $T$. For instance, for the tree given in Fig. 
\ref{Tree}, a graph-labelled tree is depicted in Fig. \ref{fig_3}(b). The edges of $T_\mathcal{F}$ in the encircled portions are called $\mathcal{F}$-edges and the remaining edges are called $T$-edges, as they correspond to the tree edges of $T$. A path in $T_{\mathcal{F}}$ is called an alternated path if it alternates between $T$-edges and $\mathcal{F}$-edges. A maximal alternated path is an alternated path that cannot be extended by adding more edges while maintaining the alternatedness. The accessibility graph of a graph-labelled tree $T_{\mathcal{F}}$ is the graph, denoted by $T^A_{\mathcal{F}}$, in which the vertex set is the set of pendant vertices in $T_{\mathcal{F}}$ and any two vertices $a$ and $b$ in $T^A_{\mathcal{F}}$ are adjacent if and only if there is an alternated path between $a$ and $b$ in $T_{\mathcal{F}}$. Let $\mathcal{H}$ be a split decomposition of a graph $G$. Extend $\mathcal{H}$ by adding one new pendant vertex $a'$ for each non-marked vertex $a$ of $\mathcal{H}$ such that $a$ and $a'$ are adjacent. The graph thus extended can be viewed as a graph-labelled tree $T_{\mathcal{F}}$, called a split-decomposition tree of $G$, where $\mathcal{F}$ consists of the split components of $\mathcal{H}$. If the split components of a split-decomposition tree are cliques (called clique components) and stars (called star components), then it is called a clique-star tree. We rewrite the uniqueness result by Cunningham in terms of graph-labelled trees in the following theorem and call such a graph-labelled tree a minimal split-decomposition tree. For example, refer to Fig. \ref{fig_3} for a graph and its minimal split-decomposition tree. 
\begin{theorem}[\cite{MR3907778,cunningham_2,graph-labelled_2012}]\label{reduced_cs_tree} For every connected graph $G$, there exists a unique split-decomposition tree $T_{\mathcal{F}}$ of $G$ such that \begin{enumerate}[label=\rm (\roman*)] \item\label{acc_iso} the accessibility graph $T^A_{\mathcal{F}}$ is isomorphic to the graph $G$, \item every split component is a clique, a star, or a prime graph, \item\label{point3} every split component has at least three vertices, \item there is no marked edge with end points in two clique components, and \item there is no marked edge between the center of a star component and a leaf of another star component. \end{enumerate} \end{theorem} Treelike comparability graphs were characterized using split decomposition in \cite[Theorem 4]{cornelsen2009treelike}. In view of the algorithm provided in \cite[Theorem 5]{cornelsen2009treelike}, we rewrite the characterization of treelike comparability graphs in the setting of graph-labelled trees. \begin{theorem}[\cite{cornelsen2009treelike}]\label{Treelike_ch} Let $G$ be a graph and $T_{\mathcal{F}}$ be the minimal split-decomposition tree of $G$. Then $G$ is a treelike comparability graph if and only if \begin{enumerate}[label=\rm (\roman*)] \item $T_{\mathcal{F}}$ is a clique-star tree, \item each clique component has at most two marked vertices, and \item there is no marked edge between the centers of two star components. 
\end{enumerate} \end{theorem} \begin{figure}[t] \centering \begin{minipage}{.3\textwidth} \centering \[\begin{tikzpicture} \vertex (a_1) at (-1, 0) [fill=black,label=left:$3$] {}; \vertex (a_2) at (-0.5, 0.5) [fill=black,label=above:$1$] {}; \vertex (a_3) at (-0.5,-0.5) [fill=black,label=below:$2$] {}; \vertex (a_4) at (0.5,0.5) [fill=black,label=above:$5$] {}; \vertex (a_5) at (0.5,-0.5) [fill=black,label=below:$6$] {}; \vertex (a_6) at (1,0) [fill=black,label=right:$4$] {}; \path (a_1) edge (a_2) (a_1) edge (a_3) (a_2) edge (a_3) (a_2) edge (a_4) (a_2) edge (a_5) (a_3) edge (a_5) (a_3) edge (a_4) (a_4) edge (a_5) (a_4) edge (a_6) (a_5) edge (a_6); \end{tikzpicture}\] (a) \end{minipage} \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=1] \vertex (a_1) at (-1, 0.5) [fill=black, label=above:$3$] {}; \vertex (a_2) at (0.8, 0.5) [fill=black, label=above:$1$] {}; \vertex (a_3) at (0.8, -0.5) [fill=black, label=below:$2$] {}; \vertex (a_4) at (-2.55, -2) [fill=black, label=below:$5$] {}; \vertex (a_5) at (-1.8, -2) [fill=black, label=below:$6$] {}; \vertex (a_6) at (-2.55, -0.5) [fill=black, label=above:$4$] {}; \vertex (a_7) at (-2.175, -1) [label=left:$ $] {}; \vertex (a_8) at (-2.175, -1.5) [label=left:$ $] {}; \vertex (a_9) at (-1.8, -0.5) [label=above:$ $] {}; \vertex (a_{10}) at (-1, -0.5) [label=below:$ $] {}; \vertex (a_{11}) at (-0.5, 0) [label=left:$ $] {}; \vertex (a_{12}) at (0.3,0) [label=below:$ $] {}; \vertex (b_1) at (-1.3, 1) [fill=black,label=above:$3'$] {}; \vertex (b_2) at (1.1, 1) [fill=black,label=above:$1'$] {}; \vertex (b_3) at (1.1, -1) [fill=black,label=below:$2'$] {}; \vertex (b_6) at (-3.05, -0.5) [fill=black,label=left:$4'$] {}; \vertex (b_4) at (-3.05, -2.5) [fill=black,label=left:$5'$] {}; \vertex (b_5) at (-1.3, -2.5) [fill=black,label=right:$6'$] {}; \path (a_1) edge (a_{11}) (a_2) edge (a_3) (a_2) edge (a_{12}) (a_3) edge (a_{12}) (a_4) edge (a_5) (a_4) edge (a_8) (a_5) edge (a_8) (a_6) edge (a_7) (a_1) edge (b_1) 
(a_7) edge (a_9) (a_2) edge (b_2) (a_{10}) edge (a_{11}) (a_3) edge (b_3) (a_6) edge (b_6) (a_4) edge (b_4) (a_5) edge (b_5) (a_{12}) edge (a_{11}) (a_{10}) edge (a_9) (a_7) edge (a_8); \draw[] (0.7,0) circle[radius=0.6cm]; \draw[] (-1,0) circle[radius=0.6cm]; \draw[] (-2.175,-0.5) circle[radius=0.6cm]; \draw[] (-2.175,-2) circle[radius=0.6cm]; \end{tikzpicture}\] (b) \end{minipage} \caption{(a) A graph, and (b) its minimal split-decomposition tree} \label{fig_3} \end{figure} A graph $G$ is called a distance-hereditary graph if the distance between any two vertices in any connected induced subgraph of $G$ is the same as the distance in $G$. The class of distance-hereditary graphs is more general and includes treelike comparability graphs \cite{cornelsen2009treelike}. In \cite{DH_graph}, multiple characterizations of distance-hereditary graphs were obtained based on various parameters. The distance-hereditary graphs were characterized as the graphs whose minimal split-decomposition trees are clique-star trees \cite{Bouchet_1}. Further, certain subclasses of distance-hereditary graphs were characterized in terms of their minimal split-decomposition trees in \cite{MR3907778}. In particular, $C_4$-free distance-hereditary graphs were characterized as per the following result. \begin{theorem}[\cite{MR3907778}] \label{lemma_C4} Let $G$ be a distance-hereditary graph with the minimal split-decomposition tree $T_{\mathcal{F}}$. Then $G$ is $C_4$-free if and only if $T_{\mathcal{F}}$ does not have any center-center path (i.e., an alternated path whose endpoints are the centers of star components and which does not contain any edge of either star component). \end{theorem} In this work, we characterize the class of $P_4$-free distance-hereditary graphs in terms of their minimal split-decomposition trees (see Theorem \ref{lemma_P4}). In this connection, we need the following properties of maximal alternated paths in split-decomposition trees. 
\begin{lemma}[\cite{MR3907778}] \label{lemma_1} Let $T_{\mathcal{F}}$ be the minimal split-decomposition tree of a distance-hereditary graph $G$ and $G_{\alpha}$ be a split component of $T_{\mathcal{F}}$. We have the following properties of maximal alternated paths in $T_{\mathcal{F}}$. \begin{enumerate}[label=\rm (\roman*)] \item\label{maximal_alter_path} From any vertex of $G_{\alpha}$, there exists a maximal alternated path that does not contain any edge of $G_{\alpha}$. \item Any maximal alternated path starting from a vertex of $G_{\alpha}$ ends in a pendant vertex of $T_{\mathcal{F}}$. \item\label{point_3} Let $P$ and $Q$ be two maximal alternated paths from distinct vertices of $G_{\alpha}$. If $P$ and $Q$ do not contain any edge of $G_{\alpha}$, then they end at distinct pendant vertices of $T_{\mathcal{F}}$. \end{enumerate} \end{lemma} \section{Characterizations} In this section, we aim to characterize double-arborescences and arborescences in terms of their split-decomposition trees. For these subclasses, we first provide forbidden induced subgraph characterizations. Further, we obtain a necessary and sufficient condition to identify a more general class, viz., $P_4$-free distance-hereditary graphs. Consequently, we give one more characterization each for double-arborescences and arborescences. \begin{lemma}\label{path_double_arb} Suppose $G$ is a path of $k$ double-arborescences for some $k \ge 2$ such that $k$ is the smallest possible. Then $G$ contains $P_4$ as an induced subgraph. \end{lemma} \begin{proof} Let $D$ be the digraph corresponding to the treelike orientation of $G$. Further, let $D'$ be the transitive reduction of $D$, i.e., the spanning subgraph of $D$ obtained by deleting the transitive edges. Let $G_i$ with root $r_i$, $1 \le i \le k$, be the $k$ double-arborescences in $G$ such that $\langle r_1, r_2, \ldots, r_k \rangle$ is the root-path of $D'$. Suppose the edge $\overline{r_1r_2}$ is oriented as $\overrightarrow{r_1r_2}$ in $D'$. 
We claim that there exists a vertex $a_1$ in $G_1$ such that $\overrightarrow{r_1a_1}$ is in $D'$. On the contrary, suppose $\overrightarrow{ar_1}$ is in $D$ for all vertices $a$ in $G_1$. Hence, $\overrightarrow{ar_2}$ is in $D$ for all vertices $a$ in $G_1$. Then, $G$ is a path of $k - 1$ double-arborescences, viz., $G_1 \cup G_2, G_3, \ldots, G_k$, with the root-path $\langle r_2, r_3, \ldots, r_k \rangle$ in $D'$; a contradiction to the minimality of $k$. Further, there exists a vertex $a_2$ in $\displaystyle\bigcup_{2 \le i \le k} G_i$ such that $\overrightarrow{a_2r_t}$ is in $D'$, for some $t \ge 2$, as shown in the following cases: \begin{itemize} \item Case-1: $\overrightarrow{r_{i+1}r_i}$ is in $D'$ for some $i \ge 2$. Let $t$ be the least such $i$, so that $\overrightarrow{r_{t+1}r_t}$ is in $D'$. Then we choose $a_2$ to be $r_{t+1}$. \item Case-2: $\overrightarrow{r_{i}r_{i+1}}$ is in $D'$ for all $i < k$. For each $i \ge 2$, if $\overrightarrow{r_ia}$ is in $D$ for all vertices $a$ in $G_i$, then $\overrightarrow{r_1a}$ is in $D$ for all vertices $a$ in $\displaystyle\bigcup_{2 \le i \le k} G_i$. In that case, $G$ is a double-arborescence with the root $r_1$; a contradiction to the minimality of $k$. Hence, there exists $a_2$ in $G_t$, for some $t \ge 2$, such that $\overrightarrow{a_2r_t}$ is in $D'$. \end{itemize} The path $\langle a_1, r_1, r_t, a_2 \rangle$ is an induced $P_4$ in $G$. Indeed, other than these three edges, there will not be any more edges between $a_1, r_1, r_t$ and $a_2$ in $G$, as there is no directed path between $a_1$ and $r_t$; $a_1$ and $a_2$; or $r_1$ and $a_2$ in $D'$. Similarly, one can observe that if $\overline{r_1r_2}$ is oriented as $\overrightarrow{r_2r_1}$ in $D'$, then there exist vertices $a_1$ in $G_1$ and $a_2$ in $\displaystyle\bigcup_{2 \le i \le k} G_i$ such that $\overrightarrow{a_1r_1}$ and $\overrightarrow{r_ta_2}$ are in $D'$ for some $t \ge 2$, so that $\{a_1, r_1, r_t, a_2\}$ induces a $P_4$ in $G$. 
\qed \end{proof} We now give a forbidden induced subgraph characterization for \break double-arborescences within treelike comparability graphs. \begin{theorem}\label{P_4-free} A graph $G$ is a double-arborescence if and only if $G$ is a $P_4$-free treelike comparability graph. \end{theorem} \begin{proof} Suppose $G$ is a double-arborescence with a root $r$. Since $G$ is a treelike comparability graph, by Theorem \ref{Treelike_ch}, $G$ is distance-hereditary. If $G$ contains a $P_4$ induced by, say, $\{a_1, a_2, a_3, a_4\}$, then clearly $r \neq a_i$, for all $1 \leq i \leq 4$, as $r$ is an all-adjacent vertex of $G$. Note that the graph induced by $\{r, a_1, a_2, a_3, a_4\}$ is the complement of $K_1 \cup P_4$ (see Gem in Fig. \ref{fig_8}). But, by \cite[Theorem 3.1]{Stefano_2012}, it is a forbidden induced subgraph for a distance-hereditary comparability graph. Hence, $G$ is $P_4$-free. Conversely, suppose a treelike comparability graph $G$ is $P_4$-free. Note that $P_4$-free graphs are permutation graphs (cf. \cite{Bose_1998}). Thus, by \cite[Theorem 6]{cornelsen2009treelike}, $G$ is a path of $k$ double-arborescences such that $k$ is the smallest number of double-arborescences in $G$. If $k \ge 2$, then by Lemma \ref{path_double_arb}, $G$ contains an induced $P_4$, which is not possible. Hence, $k = 1$ so that $G$ is a double-arborescence. \qed \end{proof} \begin{lemma}\label{prop_arbor} Suppose $G$ is a strict-double-arborescence with a root $r$. There exist two non-adjacent vertices $a_1$ and $a_2$ in $G$ such that $\overrightarrow{ra_1}$ and $\overrightarrow{ra_2}$ are in the treelike orientation of $G$. Similarly, there exist two non-adjacent vertices $a_3$ and $a_4$ in $G$ such that $\overrightarrow{a_3r}$ and $\overrightarrow{a_4r}$ are in the treelike orientation. \end{lemma} \begin{proof} Suppose $G = (V, E)$ is a strict-double-arborescence with a root $r$. Let $D$ be the digraph corresponding to the treelike orientation of $G$ and $D'$ be the transitive reduction of $D$. 
Let $A_1 = \{a \in V \mid \overrightarrow{ra} \ \text{in} \ D\}$ and $A_2 = \{a \in V \mid \overrightarrow{ar} \ \text{in} \ D\}$. Note that $A_1 \neq \varnothing$, $A_2 \neq \varnothing$, and $V = \{r\} \cup A_1 \cup A_2$. Moreover, if $a, b \in A_1$ are adjacent in $G$, then they are on a directed path from $r$ in $D'$. Accordingly, if the vertices of $A_1$ form a clique in $G$, then the vertices of $A_1$ are on a directed path from $r$ in $D'$. In that case, $G$ can be seen as an arborescence whose root would be the end point, say $a'$, of the above-mentioned directed path, i.e., $V = \{a'\} \cup \{a \in V \mid \overrightarrow{aa'} \ \text{in} \ D\}$; contradicting that $G$ is a strict-double-arborescence. Hence, there exist two vertices $a_1, a_2 \in A_1$ such that $a_1$ and $a_2$ are not adjacent in $G$. Similarly, there exist two non-adjacent vertices in $A_2$. \qed \end{proof} \begin{theorem}\label{C_4-free} Let $G$ be a treelike comparability graph. Then $G$ is $C_4$-free if and only if $G$ does not contain a strict-double-arborescence as an induced subgraph. \end{theorem} \begin{proof} Let $D$ be the digraph corresponding to the treelike orientation of $G$. Suppose $G$ is $C_4$-free and contains a strict-double-arborescence, say $H$ with root $r$, as an induced subgraph. Then, by Lemma \ref{prop_arbor}, there exist two pairs of non-adjacent vertices $a_1$, $b_1$ and $a_2$, $b_2$ in $H$ such that $\overrightarrow{ra_1}$, $\overrightarrow{rb_1}$, $\overrightarrow{a_2r}$ and $\overrightarrow{b_2r}$ exist in $D$. Then $\{a_1, b_1, a_2, b_2\}$ induces a $C_4$, viz., $\langle a_1, a_2, b_1, b_2, a_1 \rangle$, in $H$ and hence in $G$; a contradiction. Conversely, suppose $G$ does not contain a strict-double-arborescence as an induced subgraph. If $G$ contains a $C_4$ induced by $\{a_1, b_1, a_2, b_2\}$, then by \cite[Proposition 4.2]{dobson2004treelike}, $N_G(a_1) \cap N_G(b_1) \cap N_G(a_2) \cap N_G(b_2)$ induces a complete subgraph of $G$.
Let $a$ be a vertex in $N_G(a_1) \cap N_G(b_1) \cap N_G(a_2) \cap N_G(b_2)$; then observe that the graph induced by $\{a, a_1, b_1, a_2, b_2\}$ is a strict-double-arborescence, as shown in Fig. \ref{Double-arborescence}; a contradiction. \qed \end{proof} \begin{figure}[t] \centering \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=0.7] \vertex (a_1) at (0,0) [fill=black,label=above:$a$] {}; \vertex (a_2) at (1,1) [fill=black,label=above:$b_1$] {}; \vertex (a_3) at (1,-1) [fill=black,label=below:$b_2$] {}; \vertex (a_4) at (-1,1) [fill=black,label=above:$a_1$] {}; \vertex (a_5) at (-1,-1) [fill=black,label=below:$a_2$] {}; \path (a_1) edge (a_2) (a_1) edge (a_3) (a_1) edge (a_4) (a_1) edge (a_5) (a_2) edge (a_4) (a_4) edge (a_5) (a_3) edge (a_5) (a_2) edge (a_3); \end{tikzpicture}\] \end{minipage} \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture}[scale=0.7] \vertex (a_1) at (0,0) [fill=black,label=above:$a$] {}; \vertex (a_2) at (1,1) [fill=black,label=above:$b_2$] {}; \vertex (a_3) at (1,-1) [fill=black,label=below:$a_2$] {}; \vertex (a_4) at (-1,1) [fill=black,label=above:$a_1$] {}; \vertex (a_5) at (-1,-1) [fill=black,label=below:$b_1$] {}; \path[->] (a_3) edge (a_1) (a_1) edge (a_4); \path[->] (a_1) edge (a_2) (a_5) edge (a_1); \end{tikzpicture}\] \end{minipage} \caption{Strict-double-arborescence.} \label{Double-arborescence} \end{figure} The following corollary is evident from Theorem \ref{P_4-free} and Theorem \ref{C_4-free}. \begin{corollary} \label{ch_arborescence} A graph $G$ is an arborescence if and only if $G$ is a $(C_4, P_4)$-free treelike comparability graph. \end{corollary} We now characterize the class of $P_4$-free distance-hereditary graphs through the notion of s-leaf-paths in their minimal split-decomposition trees. Let $G$ be a distance-hereditary graph with the minimal split-decomposition tree $T_{\mathcal{F}}$. We call a leaf of a star component in $T_{\mathcal{F}}$ an s-leaf.
An s-leaf-path in $T_{\mathcal{F}}$ is an alternated path $P$ such that the endpoints of $P$ are s-leaves of two star components and $P$ does not contain any edge from either of the star components (see, e.g., Fig. \ref{fig_5}). \begin{figure}[t] \centering \begin{minipage}{.65\textwidth} \centering \[\begin{tikzpicture} \vertex (a_1) at (0,0) [fill=black,label=above:$a'$] {}; \node (a_2) at (0.5,0) [] {}; \node (a_3) at (1,0) [] {}; \vertex (a_4) at (1.5, 0) [label=below:$a_1$] {}; \vertex (a_5) at (2, 0.5) [label=left:$c_\alpha$] {}; \vertex (a_6) at (2.5, 0) [label=below:$c_1$] {}; \node (a_7) at (3, 0) [] {}; \node (a_8) at (3.5, 0) [] {}; \vertex (a_9) at (4, 0) [label=below:$b_1$] {}; \vertex (a_{10}) at (4.5, 0.5) [label=right:$c_\beta$] {}; \vertex (a_{11}) at (5, 0) [label=below:$d_1$] {}; \node (a_{12}) at (5.5, 0) [] {}; \node (a_{13}) at (6, 0) [] {}; \vertex (a_{14}) at (6.5, 0) [fill=black,label=above:$d'$] {}; \node (a_{15}) at (2, 1) [] {}; \node (a_{16}) at (2, 1.5) [] {}; \vertex (a_{17}) at (2, 2) [fill=black,label=above:$b'$] {}; \node (a_{18}) at (4.5, 1) [] {}; \node (a_{19}) at (4.5, 1.5) [] {}; \vertex (a_{20}) at (4.5,2) [fill=black,label=above:$c'$] {}; \vertex (a_{21}) at (2, -0.2) [] {}; \node (a_{22}) at (2, -1.5) [] {}; \vertex (a_{23}) at (4.5, -0.2) [] {}; \node (a_{24}) at (4.5, -1.5) [] {}; \node (a_{25}) at (1.2, -1.1) [label=above:$G_\alpha$] {}; \node (a_{26}) at (5.2, -1.2) [label=above:$G_\beta$] {}; \node (a_{27}) at (0.75,0) [label=above:$P_{a'}$] {}; \node (a_{28}) at (3.25,0) [label=below:$P$] {}; \node (a_{29}) at (5.75,0) [label=above:$P_{d'}$] {}; \node (a_{30}) at (2,1.3) [label=left:$P_{b'}$] {}; \node (a_{31}) at (4.5,1.3) [label=right:$P_{c'}$] {}; \path (a_1) edge (a_2) (a_3) edge (a_4) (a_5) edge (a_4) (a_5) edge (a_6) (a_5) edge (a_{15}) (a_6) edge (a_7) (a_8) edge (a_9) (a_{10}) edge (a_9) (a_{10}) edge (a_{11}) (a_{10}) edge (a_{18}) (a_{11}) edge (a_{12}) (a_{13}) edge (a_{14}) (a_{16}) edge (a_{17}) (a_{19})
edge (a_{20}) (a_{21}) edge (a_5) (a_{21}) edge (a_{22}) (a_{10}) edge (a_{23}) (a_{23}) edge (a_{24}) ; \path [dashed] (a_2) edge (a_3) (a_7) edge (a_8) (a_{12}) edge (a_{13}) (a_{15}) edge (a_{16}) (a_{18}) edge (a_{19}); \draw[] (2,0) circle[radius=0.8cm]; \draw[] (4.5,0) circle[radius=0.8cm]; \end{tikzpicture}\] \end{minipage} \begin{minipage}{.3\textwidth} \centering \[\begin{tikzpicture}[scale=0.7] \vertex (a_1) at (0,0) [fill=black,label=below:$a'$] {}; \vertex (a_2) at (0.5,1) [fill=black,label=above:$b'$] {}; \vertex (a_3) at (2,1) [fill=black,label=above:$c'$] {}; \vertex (a_4) at (2.5,0) [fill=black,label=below:$d'$] {}; \path (a_1) edge (a_2) (a_2) edge (a_3) (a_4) edge (a_3); \end{tikzpicture}\] \end{minipage} \caption{A portion of some $T_{\mathcal{F}}$ with an s-leaf-path $P$ and its accessibility graph $P_4$} \label{fig_5} \end{figure} \begin{theorem} \label{lemma_P4} Let $G$ be a distance-hereditary graph and $T_{\mathcal{F}}$ be the minimal split-decomposition tree of $G$. Then $G$ contains an induced $P_4$ if and only if there exists an s-leaf-path in $T_{\mathcal{F}}$. \end{theorem} \begin{proof} Since $G$ is isomorphic to the accessibility graph $T_{\mathcal{F}}^A$ (see Theorem \ref{reduced_cs_tree}\ref{acc_iso}), we prove that $T_\mathcal{F}^A$ contains an induced $P_4$ if and only if there exists an s-leaf-path in $T_{\mathcal{F}}$. Let $P$ be an s-leaf-path between two star components $G_\alpha$ and $G_\beta$ in $T_{\mathcal{F}}$. Let $c_1 \in G_\alpha$, $b_1 \in G_\beta$ be the endpoints of $P$. Since $T_{\mathcal{F}}$ is the minimal split-decomposition tree, both $G_\alpha$ and $G_\beta$ have at least three vertices each (see Theorem \ref{reduced_cs_tree}\ref{point3}). Thus, there are at least two s-leaves in each of the star components $G_\alpha$ and $G_\beta$. 
Then, by Lemma \ref{lemma_1}\ref{maximal_alter_path}, there are at least two maximal alternated paths from $G_\alpha$ that do not use any edge of $G_\alpha$ and, by Lemma \ref{lemma_1}\ref{point_3}, these maximal alternated paths end at distinct pendant vertices of $T_{\mathcal{F}}$. Of these paths, suppose one path, say $P_{a'}$, is from an s-leaf $a_1$ in $G_\alpha$ to a pendant vertex $a'$ of $T_{\mathcal{F}}$, and the other path, say $P_{b'}$, is from the center $c_\alpha$ of $G_\alpha$ to a pendant vertex $b'$ of $T_{\mathcal{F}}$. Similarly, there are at least two maximal alternated paths from $G_\beta$: one, say $P_{c'}$, from its center $c_\beta$ to a pendant vertex $c'$ of $T_{\mathcal{F}}$, and the other, say $P_{d'}$, from an s-leaf, say $d_1$, in $G_\beta$ to a pendant vertex $d'$ of $T_{\mathcal{F}}$, as shown in Fig. \ref{fig_5}. We show that $\langle a', b', c', d' \rangle$ is an induced path in $T_\mathcal{F}^A$. As shown in Fig. \ref{fig_5}, note that the path consisting of $P_{a'}$ followed by the edge $\overline{a_1c_\alpha}$ and then $P_{b'}$ is an alternated path from $a'$ to $b'$ in $T_{\mathcal{F}}$ so that $a'$ and $b'$ are adjacent in $T_\mathcal{F}^A$. Further, the path consisting of $P_{b'}$ followed by the edge $\overline{c_\alpha c_1}$, the path $P$, the edge $\overline{b_1c_\beta}$, and then the path $P_{c'}$ is an alternated path from $b'$ to $c'$ in $T_{\mathcal{F}}$. Thus, $b'$ and $c'$ are adjacent in $T_\mathcal{F}^A$. Similarly, the vertices $c'$ and $d'$ are adjacent in $T_\mathcal{F}^A$. Consider the underlying tree $T$ of $T_{\mathcal{F}}$ and note that the unique path between $a'$ and $c'$ in $T$ must pass through the vertices $\alpha$ and $\beta$ of $T$. Thus, any path between $a'$ and $c'$ in $T_{\mathcal{F}}$ must pass through $G_\alpha$ and $G_\beta$, and by construction, such a path must use two edges of $G_\alpha$ so that it cannot be an alternated path.
This shows that $a'$ and $c'$ are not adjacent in $T_\mathcal{F}^A$. Similarly, one can show that $\overline{b'd'}$ and $\overline{a'd'}$ are not in $T_\mathcal{F}^A$. Hence, $T_\mathcal{F}^A$ contains an induced $P_4$. Conversely, suppose $T_\mathcal{F}^A$ has an induced $P_4$, say $\langle a', b', c', d'\rangle$. Since $\overline{b'a'}$, $\overline{b'c'}$ are edges of $T_\mathcal{F}^A$, there exist alternated paths, say $P_{b', a'}$ and $P_{b', c'}$, which begin at the pendant vertex $b'$ of $T_{\mathcal{F}}$ and end at the pendant vertices $a'$ and $c'$ of $T_{\mathcal{F}}$, respectively. Similarly, there exists an alternated path $P_{c', d'}$ between the pendant vertices $c'$ and $d'$ of $T_{\mathcal{F}}$. Let $G_\alpha$ be the split component of $T_{\mathcal{F}}$ up to which the paths $P_{b', a'}$ and $P_{b', c'}$ share a common segment, after which they split. Let $c_\alpha \in G_\alpha$ be the vertex at which the common segment ends, and $a_1$ and $c_1$ be the vertices of $G_\alpha$ at which the paths $P_{b', a'}$ and $P_{b', c'}$ exit $G_\alpha$, respectively. We now show that $a_1$ and $c_1$ are not adjacent in $G_\alpha$ so that $G_\alpha$ is a star component. Otherwise, we would have an alternated path between $a'$ and $c'$, viz., $P_{a', a_1}$ followed by the edge $\overline{a_1c_1}$ of $G_\alpha$ and then $P_{c_1, c'}$, where $P_{a', a_1}$ is the segment of $P_{b', a'}$ between $a_1$ and $a'$, and $P_{c_1, c'}$ is the segment of $P_{b', c'}$ between $c_1$ and $c'$. Thus, $a'$ and $c'$ would be adjacent in $T_\mathcal{F}^A$; a contradiction. Hence, $G_\alpha$ must be a star component. It is evident that $c_\alpha$ is its center and $a_1$, $c_1$ are its s-leaves. Let $G_\beta$ be the split component of $T_{\mathcal{F}}$ up to which the paths $P_{c', d'}$ and $P_{b', c'}$ share a common segment, after which they split.
Let $c_\beta \in G_\beta$ be the vertex at which the common segment ends, and $b_1$ and $d_1$ be the vertices of $G_\beta$ at which the paths $P_{b', c'}$ and $P_{c', d'}$ exit $G_\beta$, respectively. As shown above, observe that $G_\beta$ is a star component, in which $c_\beta$ is the center and $b_1, d_1$ are s-leaves. Note that the alternated path $P_{b', c'}$ passes through the split components $G_\alpha$ and $G_\beta$. We observe that the pendant vertex $b'$ is nearer to $G_\alpha$ than to $G_\beta$ on the path $P_{b', c'}$. On the contrary, suppose $G_\beta$ is nearer to $b'$. Then, clearly, $G_\alpha$ would be nearer to $c'$. Since the path $P_{b', c'}$ exits $G_\alpha$ at $c_1$ towards the pendant vertex $c'$, the vertex $c_1$ is the vertex of $G_\alpha$ closest to $c'$. Similarly, $b_1$ is the vertex of $G_\beta$ closest to $b'$. Accordingly, the vertices $b', c', c_1, b_1, c_\alpha, c_\beta$ will appear in the following sequence on the path $P_{b', c'}$: $$b', b_1, c_\beta, c_\alpha, c_1, c'.$$ Thus, the segment of $P_{b', c'}$ between $c_\alpha$ and $c_\beta$ is a center-center path in $T_{\mathcal{F}}$, as shown in Fig. \ref{fig_6}. Then, by Theorem \ref{lemma_C4}, $\langle a', b', c', d', a' \rangle$ forms a $C_4$ in $T_\mathcal{F}^A$; a contradiction to the $P_4$ induced by these vertices. Hence, the pendant vertex $b'$ is nearer to $G_\alpha$ than to $G_\beta$ on the path $P_{b', c'}$ so that the vertices $b', c', c_1, b_1, c_\alpha, c_\beta$ will appear in the following sequence on the path $P_{b', c'}$: $$b', c_\alpha, c_1, b_1, c_\beta, c'.$$ Evidently, the segment of $P_{b', c'}$ between $c_1$ and $b_1$ is an s-leaf-path between $G_\alpha$ and $G_\beta$ in $T_{\mathcal{F}}$.
\qed \end{proof} \begin{figure}[t] \centering \begin{minipage}{.5\textwidth} \centering \[\begin{tikzpicture} \vertex (a_1) at (-1,1.5) [fill=black,label=above:$c'$] {}; \node (a_2) at (-0.5,1.5) [] {}; \node (a_3) at (0,1) [] {}; \vertex (a_4) at (0, 0.5) [label=left:$c_1$] {}; \vertex (a_5) at (0.5, 0) [label=below:$c_\alpha$] {}; \vertex (a_6) at (0, -0.5) [label=left:$a_1$] {}; \node (a_7) at (0, -1) [] {}; \node (a_8) at (-0.5,-1.5) [] {}; \vertex (a_9) at (-1,-1.5) [fill=black,label=below:$a'$] {}; \node (a_{10}) at (1, 0) [] {}; \node (a_{11}) at (1.5, 0) [] {}; \vertex (a_{12}) at (2, 0) [label=below:$c_\beta$] {}; \vertex (a_{13}) at (2.5, 0.5) [label=right:$b_1$] {}; \node (a_{14}) at (2.5, 1) [] {}; \node (a_{15}) at (3, 1.5) [] {}; \vertex (a_{16}) at (3.5, 1.5) [fill=black,label=above:$b'$] {}; \vertex (a_{17}) at (2.5, -0.5) [label=right:$d_1$] {}; \node (a_{18}) at (2.5, -1) [] {}; \node (a_{19}) at (3, -1.5) [] {}; \vertex (a_{20}) at (3.5,-1.5) [fill=black,label=below:$d'$] {}; \vertex (a_{21}) at (-0.3,0) [] {}; \node (a_{22}) at (-1.5, 0) [] {}; \vertex (a_{23}) at (2.8,0) [] {}; \node (a_{24}) at (4, 0) [] {}; \node (a_{25}) at (0.5, -0.7) [label=below:$G_\alpha$] {}; \node (a_{26}) at (2, -0.7) [label=below:$G_\beta$] {}; \path (a_1) edge (a_2) (a_3) edge (a_4) (a_5) edge (a_4) (a_5) edge (a_6) (a_5) edge (a_{10}) (a_5) edge (a_{21}) (a_6) edge (a_7) (a_8) edge (a_9) (a_{10}) edge (a_{11}) (a_{23}) edge (a_{12}) (a_{11}) edge (a_{12}) (a_{13}) edge (a_{12}) (a_{17}) edge (a_{12}) (a_{13}) edge (a_{14}) (a_{16}) edge (a_{15}) (a_{18}) edge (a_{17}) (a_{23}) edge (a_{24}) (a_{19}) edge (a_{20}) (a_{22}) edge (a_{21}) ; \path [dashed] (a_2) edge (a_3) (a_7) edge (a_8) (a_{10}) edge (a_{11}) (a_{15}) edge (a_{14}) (a_{18}) edge (a_{19}); \draw[] (0,0) circle[radius=0.8]; \draw[] (2.5,0) circle[radius=0.8cm]; \end{tikzpicture}\] \end{minipage} \caption{A center-center path.} \label{fig_6} \end{figure} Using the split decomposition, we now present an 
alternative proof for the well-known characterization of arborescences given in \cite[Theorem 3]{wolk1965note} (also see \cite{wolk1962comparability}). \begin{theorem}\label{alter_proof} A graph $G$ is an arborescence if and only if $G$ is $(C_4, P_4)$-free. \end{theorem} \begin{proof} Suppose $G$ is an arborescence. Then, by Corollary \ref{ch_arborescence}, $G$ is a $(C_4, P_4)$-free graph. Conversely, suppose $G$ is a $(C_4, P_4)$-free graph. Using Theorem \ref{Treelike_ch}, we show that $G$ is a treelike comparability graph so that it is an arborescence (again by Corollary \ref{ch_arborescence}). Note that $G$ does not contain any of the graphs given in Fig. \ref{fig_8}, as each of them has a $C_4$ or a $P_4$ as an induced subgraph. Thus, by \cite[Theorem 3.1]{Stefano_2012}, $G$ is a distance-hereditary comparability graph so that the minimal split-decomposition tree $T_{\mathcal{F}}$ of $G$ is a clique-star tree. As $G$ is $(C_4, P_4)$-free, note that $T_{\mathcal{F}}$ has neither a center-center path (by Theorem \ref{lemma_C4}) nor an s-leaf-path (by Theorem \ref{lemma_P4}). In particular, there is no marked edge between the centers of two star components; otherwise, we would have a center-center path in $T_{\mathcal{F}}$. Finally, we claim that each clique component of $T_{\mathcal{F}}$ has at most two marked vertices. On the contrary, suppose there is a clique component in $T_{\mathcal{F}}$ with three marked vertices, say $a_1$, $a_2$, and $a_3$. For $1 \le i \le 3$, let $e_i$ be the marked edge in $T_{\mathcal{F}}$ with one endpoint $a_i$ and the other endpoint, say, $b_i$. Since $T_{\mathcal{F}}$ is the minimal split-decomposition tree of $G$, by statement (iv) of Theorem \ref{reduced_cs_tree}, each $b_i$ ($1 \le i \le 3$) is a vertex of a star component. Note that not all $b_i$'s can be centers of the respective star components; otherwise, $T_\mathcal{F}$ would have a center-center path, e.g., $\langle b_1, a_1, a_2, b_2\rangle$.
Also, not all $b_i$'s can be s-leaves of the respective star components; otherwise, $T_\mathcal{F}$ would have an s-leaf-path, e.g., $\langle b_1, a_1, a_2, b_2\rangle$. Hence, one of the $b_i$'s, say $b_1$ without loss of generality, will be the center (or an s-leaf), and the other two will be s-leaves (or the centers, respectively) of the respective star components. In any case, $\langle b_2, a_2, a_3, b_3\rangle$ is an s-leaf-path or a center-center path in $T_\mathcal{F}$, which is not possible. Thus, there will be at most two marked vertices in any clique component of $T_\mathcal{F}$. Hence, by Theorem \ref{Treelike_ch}, $G$ is a treelike comparability graph. \qed \end{proof} \begin{figure}[t] \centering \begin{minipage}{.2\textwidth} \centering \[\begin{tikzpicture}[scale=0.6] \vertex (1) at (0,0) [fill=black] {}; \vertex (2) at (0,1.1) [fill=black] {}; \vertex (3) at (1.1,0) [fill=black] {}; \vertex (4) at (1.1,1.1) [fill=black] {}; \vertex (5) at (0.55,1.9) [fill=black] {}; \path (1) edge (2) (1) edge (3) (2) edge (4) (3) edge (4) (2) edge (5) (4) edge (5); \end{tikzpicture}\] House \end{minipage} \begin{minipage}{.2\textwidth} \centering \[\begin{tikzpicture}[scale=0.6] \vertex (1) at (0,0) [fill=black] {}; \vertex (2) at (0.7,0) [fill=black] {}; \vertex (3) at (1.4,-0.2) [fill=black] {}; \vertex (4) at (-0.7,-0.2) [fill=black] {}; \vertex (5) at (0.35,-1.6) [fill=black] {}; \path (4) edge (1) (1) edge (2) (2) edge (3) (5) edge (1) (5) edge (2) (5) edge (3) (5) edge (4); \end{tikzpicture}\] Gem \end{minipage} \begin{minipage}{.2\textwidth} \centering \[\begin{tikzpicture}[scale=0.6] \vertex (a_1) at (0,-0.5) [fill=black] {}; \vertex (a_2) at (-1,-0.5) [fill=black] {}; \vertex (a_3) at (1,-0.5) [fill=black] {}; \vertex (a_4) at (0,0.5) [fill=black] {}; \vertex (a_5) at (-1,0.5) [fill=black] {}; \vertex (a_6) at (1,0.5) [fill=black] {}; \path (a_1) edge (a_2) (a_1) edge (a_3) (a_1) edge (a_4) (a_2) edge (a_5) (a_3) edge (a_6) (a_4)
edge (a_5) (a_4) edge (a_6) ; \end{tikzpicture}\] Domino \end{minipage} \begin{minipage}{.2\textwidth} \centering \[\begin{tikzpicture}[scale=0.6] \vertex (a_1) at (0.7,0) [fill=black] {}; \vertex (a_2) at (1.3,1) [fill=black] {}; \vertex (a_3) at (0,2) [fill=black] {}; \vertex (a_4) at (-1.2,1) [fill=black] {}; \vertex (a_5) at (-0.7,0) [fill=black] {}; \path (a_1) edge (a_2) (a_2) edge (a_3) (a_3) edge (a_4) (a_4) edge (a_5); \path [dashed] (a_1) edge (a_5); \end{tikzpicture}\] $C_n$,$n \geq 5$ \end{minipage} \begin{minipage}{.2\textwidth} \centering \[\begin{tikzpicture}[scale=0.6] \vertex (1) at (-1,0) [fill=black] {}; \vertex (2) at (0,0) [fill=black] {}; \vertex (3) at (-0.5,1) [fill=black] {}; \vertex (4) at (-0.5,1.7) [fill=black] {}; \vertex (5) at (-1.5,-0.5) [fill=black] {}; \vertex (6) at (0.5,-0.5) [fill=black] {}; \path (1) edge (2) (2) edge (3) (3) edge (1) (3) edge (4) (1) edge (5) (2) edge (6); \end{tikzpicture}\] Net \end{minipage} \caption{Forbidden induced subgraphs for distance-hereditary comparability graphs.} \label{fig_8} \end{figure} We consolidate the following characterizations of double-arborescences and arborescences in terms of their minimal split-decomposition trees, as a consequence of the work presented so far. \begin{corollary}\label{ch_double_arb} Let $G$ be a graph and $T_{\mathcal{F}}$ be its minimal split-decomposition tree. Then $G$ is a double-arborescence if and only if the following statements hold. \begin{enumerate}[label=\rm (\roman*)] \item $T_{\mathcal{F}}$ is a clique-star tree. \item Each clique component has at most two marked vertices. \item There is no marked edge between the centers of two star components. \item $T_{\mathcal{F}}$ does not have any s-leaf-path. \end{enumerate} \end{corollary} \begin{proof} Suppose $G$ is a double-arborescence. Then, by Theorem \ref{P_4-free}, we have $G$ is a $P_4$-free treelike comparability graph. Thus, we have the statements (i), (ii) and (iii), using Theorem \ref{Treelike_ch}. 
Further, since $T_{\mathcal{F}}$ is a clique-star tree (statement (i)), it is evident that $G$ is a distance-hereditary graph. Hence, $G$ is a $P_4$-free distance-hereditary graph so that the statement (iv) holds by Theorem \ref{lemma_P4}. Conversely, using the first three statements, $G$ is a treelike comparability graph, by Theorem \ref{Treelike_ch}. Further, in view of Theorem \ref{lemma_P4}, $G$ is a $P_4$-free distance-hereditary graph using the statements (i) and (iv). Hence, $G$ is a $P_4$-free treelike comparability graph, i.e., $G$ is a double-arborescence by Theorem \ref{P_4-free}. \qed \end{proof} \begin{corollary}\label{main_th} Let $G$ be a graph and $T_{\mathcal{F}}$ be its minimal split-decomposition tree. Then $G$ is an arborescence if and only if the following statements hold. \begin{enumerate}[label=\rm (\roman*)] \item $T_{\mathcal{F}}$ is a clique-star tree. \item $T_{\mathcal{F}}$ does not have any center-center path. \item $T_{\mathcal{F}}$ does not have any s-leaf-path. \end{enumerate} \end{corollary} \begin{proof} Suppose $G$ is an arborescence. Then, by Theorem \ref{alter_proof}, $G$ is a $(C_4, P_4)$-free graph. Since $G$ is a treelike comparability graph, the statement (i) holds (by Theorem \ref{Treelike_ch}). Further, using the fact that every treelike comparability graph is a distance-hereditary graph, $G$ is a $(C_4, P_4)$-free distance-hereditary graph so that the statements (ii) and (iii) hold by Theorem \ref{lemma_C4} and Theorem \ref{lemma_P4}, respectively. For the converse, first note that the statement (i) implies that $G$ is distance-hereditary. Thus, using statements (i) and (ii), $G$ is $C_4$-free (by Theorem \ref{lemma_C4}) and, using statements (i) and (iii), $G$ is $P_4$-free (by Theorem \ref{lemma_P4}). Hence, $G$ is a $(C_4, P_4)$-free graph so that $G$ is an arborescence (by Theorem \ref{alter_proof}).
\qed \end{proof} \section{Minimum-Word-Representants} In \cite{Bouchet_1}, it was established that distance-hereditary graphs are circle graphs. Thus, being a subclass of distance-hereditary graphs, double-arborescences are circle graphs and hence 2-word-representable. Hence, if $G$ is a double-arborescence on $n$ vertices, then the length $\ell(G)$ of its minimum-word-representants satisfies $\ell(G) \le 2n$. In this section, we devise an algorithm to find minimum-word-representants of arborescences and extend it to double-arborescences. Moreover, contributing a class to the open problem in \cite{Eshwar_2024}, we prove that for a double-arborescence $G$ on $n$ vertices with clique number $k$, $\ell(G) = 2n - k$. We refer to \cite{Trotterbook} for the concepts and notation related to posets used in this section. \subsection{Word Construction} Let $G = (V, E)$ be an arborescence on $n$ vertices with clique number $k$. Note that a maximum size clique in a comparability graph can be found in quadratic time \cite{Even_1972}. Suppose $C_{\text{long}}$ is a maximum clique in $G$. As $G$ is a treelike comparability graph, the treelike orientation of $G$ as well as the transitive reduction (hence, the Hasse diagram) can be found in linear time \cite[Theorem 5]{cornelsen2009treelike}. Recall that the Hasse diagram of the induced poset $P_G$ with respect to the arborescence orientation of $G$ is a rooted tree and the root $r$ is the least element or the greatest element of $P_G$. Without loss of generality, we assume that $r$ is the greatest element. Note that the elements of $C_{\text{long}}$ form a longest chain in $P_G$ and $r \in C_{\text{long}}$. In what follows, let $\prec$ be the partial order on the poset $P_G$ and $\prec:$ be the corresponding covering relation, i.e., $a \prec: b$ means $a \prec b$ and there is no element between $a$ and $b$. In Algorithm \ref{algo-1}, we give a procedure based on breadth-first search (BFS) of $P_G$ to construct a word of minimum length representing $G$.
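To make the construction concrete, the procedure of Algorithm \ref{algo-1} can be sketched as follows; the encoding of the Hasse diagram by a `children` dictionary, the function names, and the small example tree are our own illustrative choices, not from the paper.

```python
from collections import deque

def min_word(children, root, chain):
    """A sketch of Algorithm 1: build a candidate minimum-word-representant
    of an arborescence from its Hasse diagram, a tree rooted at `root`.

    children[a] lists the children of a, with the child on the longest
    chain (if any) listed first; `chain` is a longest root-to-leaf chain,
    given as a set.  These data structures are illustrative assumptions.
    """
    w = [root]
    q = deque([root])          # the BFS queue Q
    while q:
        a = q.popleft()
        ch = children.get(a, [])
        if not ch:
            continue           # a is a leaf of the Hasse diagram
        q.extend(ch)
        if a in chain:
            # a occurs once in w (as its first letter): replace w by
            # a_1 a_2 ... a_t w a_t ... a_2, omitting the chain child a_1
            # from the appended reversed run.
            w = ch + w + ch[:0:-1]
        else:
            # a occurs twice: expand the first occurrence to a_1...a_t a
            # and the second occurrence to a_t...a_1 a.
            i, j = [k for k, x in enumerate(w) if x == a]
            w = w[:i] + ch + [a] + w[i + 1:j] + ch[::-1] + [a] + w[j + 1:]
    return w

def alternate(w, a, b):
    """a and b alternate in w iff deleting all other letters leaves a word
    with no two equal consecutive letters."""
    s = [x for x in w if x in (a, b)]
    return all(s[k] != s[k + 1] for k in range(len(s) - 1))
```

For instance, for the Hasse diagram in which $r$ covers $a$ and $b$, $a$ covers $c$, and $b$ covers $d$ (so $C_{\text{long}} = \{r, a, c\}$, $n = 5$, $k = 3$), the sketch produces the word $cadbrdb$ of length $2 \cdot 5 - 3 = 7$, and alternation in it matches adjacency in the corresponding arborescence.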
For BFS, one may refer to \cite{cormen}. In what follows, $w$ refers to the output of Algorithm \ref{algo-1}. We have the following remarks on $w$. \begin{algorithm}[!htb] \caption{Constructing a minimum-word-representant of an arborescence.} \label{algo-1} \KwIn{The Hasse diagram $P_G$ of an arborescence $G$ and a longest chain $C_{\text{long}}$ of $P_G$.} \KwOut{A word $w$ representing $G$.} Initialize $w$ with the root $r$.\\ Append $r$ to $Q$ (where $Q$ is the queue used in BFS)\\ \While{$Q$ is not empty}{ Remove the first element of $Q$, say $a$. \\ \If{$a$ is not a leaf}{ Let $a_1, a_2, \ldots, a_t$ be the children of $a$ and append them to $Q$.\\ \If{$a \in C_{\text{long}}$}{Without loss of generality, let $a_1 \in C_{\text{long}}$, and replace $w$ by $a_1a_2 \cdots a_twa_ta_{t-1} \cdots a_2$.} \Else{Replace the first occurrence of $a$ in $w$ by $a_1a_2 \cdots a_ta$ and the second occurrence of $a$ in $w$ by $a_ta_{t-1} \cdots a_1a$.} } } \Return $w$. \end{algorithm} \begin{remark}\label{len_w} If $G$ is an arborescence on $n$ vertices with clique number $k$, then the length of the word $w$ is $2n - k$, as each element of $C_{\text{long}}$ appears exactly once and the remaining elements of $V$ appear twice in $w$. \end{remark} \begin{remark}\label{structure_w} The word $w$ is of the form $w_1rw_2$ such that the elements of $C_{\text{long}}$ appear exactly once in $w_1r$ and each element not in $C_{\text{long}}$ has one occurrence in $w_1$ and the other occurrence in $w_2$. \end{remark} We now prove that the word $w$ represents the arborescence $G$ through the following lemmas. \begin{lemma}\label{cor_1} For $a, b \in V$, if $a$ and $b$ are adjacent in $G$, then $a$ and $b$ alternate in $w$. \end{lemma} \begin{proof} Without loss of generality, suppose $b \prec a$ in $P_G$. Let $b = a_{s} \prec: a_{s-1} \prec: \cdots \prec: a_0 = a$ be the path from $a$ to $b$ in $P_G$.
As the root $r$ is the greatest element of $P_G$, according to the construction of $w$, $a$ is visited first, then $a_1, a_2, \ldots, a_{s-1}$, and lastly $b$ is visited. \begin{itemize} \item Case-1: Suppose $a \notin C_{\text{long}}$. Then $a_i \notin C_{\text{long}}$ for all $0 \le i \le s$ and consequently all of them occur twice in $w$. For $1 \le t \le s$, using induction on $t$, we show that $w_{\{a, a_t\}} = a_taa_ta$. Hence, in particular for $t = s$, $w_{\{a, b\}} = baba$ so that $a$ and $b$ alternate in $w$. \qquad For $t = 1$, since $a_1$ is a child of $a$, as per Step 11 of Algorithm \ref{algo-1}, we have $a_1aa_1a \ll w$ and both $a$ and $a_1$ appear exactly twice in $w$. Hence, $w_{\{a, a_1\}} = a_1aa_1a$. For $t = p$, suppose $w_{\{a, a_{p}\}} = a_{p}aa_{p}a$. For $t = p+1$, since $a_{p+1}$ is a child of $a_p$, both the occurrences of $a_p$ in $w$ are replaced by a word containing $a_{p+1}a_p$ as a subword. Hence, $a_{p+1}a_{p}aa_{p+1}a_{p}a \ll w$ so that $w_{\{a, a_{p+1}\}} = a_{p+1}aa_{p+1}a$. \item Case-2: Suppose $a \in C_{\text{long}}$. If $b$ is also in $C_{\text{long}}$, then both $a$ and $b$ occur exactly once in $w$ so that they alternate in $w$. If $b \notin C_{\text{long}}$, then let $m$ be the smallest possible index with $1 \le m \le s$ such that $a_m \notin C_{\text{long}}$. For $m \le t \le s$, using induction on $t$, we show that $w_{\{a, a_t\}} = a_taa_t$. Hence, for $t = s$, we have that $a$ and $b$ alternate in $w$. \qquad Since $a_{m-1} \in C_{\text{long}}$, note that $a_{m-1}$ appears exactly once in $w$. If $m = 1$, clearly $a_maa_m \ll w$. For $m > 1$, note that $w_{\{a, a_{m-1}\}} = a_{m-1}a$. Since $a_{m}$ is a child of $a_{m-1}$, in view of Step 7 of Algorithm \ref{algo-1}, $a_m$ appears twice in $w$ such that $a_ma_{m-1}aa_m \ll w$. Thus, for $t = m$, $w_{\{a, a_m\}} = a_maa_m$. For $t = m+p$, suppose $w_{\{a, a_{m+p}\}} = a_{m+p}aa_{m+p}$.
For $t = m+p+1$, since $a_{m+p+1}$ is a child of $a_{m+p}$, both the occurrences of $a_{m+p}$ in $w$ are replaced by a word containing $a_{m+p+1}a_{m+p}$ as a subword. Hence, $a_{m+p+1}a_{m+p}aa_{m+p+1}a_{m+p} \ll w$ so that $w_{\{a, a_{m+p+1}\}} = a_{m+p+1}aa_{m+p+1}$. \end{itemize} Hence, in any case, $a$ and $b$ alternate in $w$. \qed \end{proof} \begin{lemma}\label{cor_2} For $a, b \in V$, if $a$ and $b$ are not adjacent in $G$, then they do not alternate in $w$. \end{lemma} \begin{proof} Note that $a$ and $b$ are incomparable elements in $P_G$. As $P_G$ is a rooted tree with $r$ as the greatest element, there exists $c$ in $P_G$ such that $c$ is the least upper bound of $\{a, b\}$. Let $a = a_s \prec: a_{s-1} \prec: \cdots \prec: a_{1} \prec: c$ be the path from $c$ to $a$, and $b = b_{s'} \prec: b_{s'-1} \prec: \cdots \prec: b_{1} \prec: c$ be the path from $c$ to $b$ in $P_G$. Hence, $c$ is visited first and then $a_i$, $b_j$ are visited in Algorithm \ref{algo-1}. Note that $a_1, b_1$ are children of $c$; assume that $a_1$ is visited before $b_1$ in the algorithm. \begin{itemize} \item Case-1: One of $a$ and $b$ is in $C_{\text{long}}$ but the other is not; say, $a \in C_{\text{long}}$ and $b \notin C_{\text{long}}$. Then $c \in C_{\text{long}}$, $a_i \in C_{\text{long}}$, for all $1 \le i \le s$, and $b_j \notin C_{\text{long}}$, for all $1 \le j \le s'$. For $1 \le t \le s$, using induction on $t$, we first show that $w_{\{a_t, b_1\}} = a_tb_1b_1$. \qquad For $t = 1$, since $a_1$, $b_1$ are children of $c$ and $a_1 \in C_{\text{long}}$, as per the algorithm, we have $a_1b_1cb_1 \ll w$. Hence, $w_{\{a_1, b_1\}} = a_1b_1b_1$. For $t = p$, suppose $w_{\{a_{p}, b_1\}} = a_{p}b_1b_1$. For $t = p+1$, since $a_{p+1}$ is a child of $a_{p}$ and both $a_{p}$ and $a_{p+1} \in C_{\text{long}}$, as per the algorithm, we have $a_{p+1}a_{p}b_1b_1 \ll w$ so that $w_{\{a_{p+1}, b_1\}} = a_{p+1}b_1b_1$. Hence, in particular for $t = s$, we have $w_{\{a, b_1\}} = ab_1b_1$.
\qquad Now for $1 \le t \le s'$, using induction on $t$, we show that $w_{\{a, b_t\}} = ab_tb_t$. For $t = 1$, we have $w_{\{a, b_1\}} = ab_1b_1$. For $t = p$, suppose $w_{\{a, b_{p}\}} = ab_{p}b_{p}$. For $t = p+1$, since $b_{p+1}$ is a child of $b_{p}$, both the occurrences of $b_{p}$ in $w$ are replaced by a word containing $b_{p+1}b_{p}$ as a subword. Thus, we have $ab_{p+1}b_{p}b_{p+1}b_{p} \ll w$ so that $w_{\{a, b_{p+1}\}} = ab_{p+1}b_{p+1}$. Hence, in particular for $t = s'$, we have $w_{\{a, b\}} = abb$ so that $a$ and $b$ do not alternate in $w$. \item Case-2: Neither $a$ nor $b$ is in $C_{\text{long}}$. Then we have the following cases. \item Case-2.1: Suppose $a_1 \notin C_{\text{long}}$ and $b_1 \notin C_{\text{long}}$. Then note that $a_i \notin C_{\text{long}}$ for all $1 \le i \le s$ and $b_j \notin C_{\text{long}}$ for all $1 \le j \le s'$. \qquad For $1 \le t \le s$, using induction on $t$, we first show that $w_{\{a_t, b_1\}} = a_tb_1b_1a_t$. Note that $a_1$, $b_1$ are children of $c$ and neither $a_1$ nor $b_1$ is in $C_{\text{long}}$. Hence, if $c \in C_{\text{long}}$, as per the algorithm, we have $a_1b_1cb_1a_1 \ll w$, and if $c \notin C_{\text{long}}$, as per the algorithm, we have $a_1b_1cb_1a_1c \ll w$. Hence, in any case, for $t=1$, we have $w_{\{a_1, b_1\}} = a_1b_1b_1a_1$. For $t = p$, suppose $w_{\{a_{p}, b_1\}} = a_{p}b_1b_1a_{p}$. For $t = p+1$, since $a_{p+1}$ is a child of $a_{p}$, both the occurrences of $a_{p}$ in $w$ are replaced by a word containing $a_{p+1}a_{p}$ as a subword. Thus, we have $a_{p+1}a_{p}b_1b_1a_{p+1}a_{p} \ll w$ so that $w_{\{a_{p+1}, b_1\}} = a_{p+1}b_1b_1a_{p+1}$. Hence, in particular for $t = s$, we have $w_{\{a, b_1\}} = ab_1b_1a$. \qquad Now for $1 \le t \le s'$, using induction on $t$, we show that $w_{\{a, b_t\}} = ab_tb_ta$. For $t = 1$, we have $w_{\{a, b_1\}} = ab_1b_1a$. For $t = p$, suppose $w_{\{a, b_{p}\}} = ab_{p}b_{p}a$.
For $t = p+1$, since $b_{p+1}$ is a child of $b_{p}$ and both the occurrences of $b_{p}$ in $w$ are replaced by a word containing $b_{p+1}b_{p}$ as a subword, we have $ab_{p+1}b_{p}b_{p+1}b_{p}a \ll w$ so that $w_{\{a, b_{p+1}\}} = ab_{p+1}b_{p+1}a$. Hence, in particular for $t = s'$, we have $w_{\{a, b\}} = abba$ so that $a$ and $b$ do not alternate in $w$. \item Case-2.2: One of $a_1$ and $b_1$ is in $C_{\text{long}}$ but not the other; say, $a_1 \in C_{\text{long}}$ and $b_1 \notin C_{\text{long}}$. Then $c \in C_{\text{long}}$ and $b_j \notin C_{\text{long}}$ for all $1 \le j \le s'$. Let $m (>1)$ be the smallest positive integer such that $a_m \notin C_{\text{long}}$. Therefore $a_i \notin C_{\text{long}}$ for all $m \le i \le s$. \qquad For $m \le t \le s$, using induction on $t$, we show that $w_{\{a_t, b\}} = a_tbba_t$. As shown in Case-1, we have $w_{\{a_{m-1}, b\}} = a_{m-1}bb$. For $t = m$, since $a_m$ is a child of $a_{m-1}$ but $a_m \notin C_{\text{long}}$, as per the algorithm, we have $a_ma_{m-1}bba_m \ll w$. Thus $w_{\{a_{m}, b\}} = a_{m}bba_m$. For $t = m+p$, suppose $w_{\{a_{m+p}, b\}} = a_{m+p}bba_{m+p}$. For $t = m+p+1$, as $a_{m+p+1}$ is a child of $a_{m+p}$ and both the occurrences of $a_{m+p}$ in $w$ are replaced by a word containing $a_{m+p+1}a_{m+p}$ as a subword, we have $a_{m+p+1}a_{m+p}bba_{m+p+1}a_{m+p} \ll w$. Thus $w_{\{a_{m+p+1}, b\}} = a_{m+p+1}bba_{m+p+1}$. Hence, in particular for $t = s$, we have $w_{\{a, b\}} = abba$. \end{itemize} Hence, in any case, $a$ and $b$ do not alternate in $w$. \qed \end{proof} We now conclude in the following theorem that the word $w$ produced by Algorithm \ref{algo-1} on an arborescence $G$ is a minimum-word-representant of $G$. \begin{theorem} If $G$ is an arborescence on $n$ vertices with clique number $k$, then $\ell(G) = 2n - k$.
\end{theorem} \begin{proof} Let $w$ be the word obtained by Algorithm \ref{algo-1} on an arborescence $G$. From Lemmas \ref{cor_1} and \ref{cor_2}, it is evident that $w$ represents $G$. In view of Remark \ref{len_w}, the length of the word $w$ is $2n - k$ so that $\ell(G) \le 2n - k$. Further, as the size of a maximum clique in $G$ is $k$, we have $2n - k \le \ell(G)$ (cf. \cite[Theorem 2.9]{Eshwar_2024}). Hence, $w$ is a minimum-word-representant of $G$ and $\ell(G) = 2n - k$. \qed \end{proof} We now prove a more general theorem for the minimum-word-representants of double-arborescences. \begin{theorem}\label{min_word_doub_arbore} Suppose $G$ is a double-arborescence on $n$ vertices with clique number $k$. Then $\ell(G) = 2n - k$. \end{theorem} \begin{proof} Let $r$ be a root of a double-arborescence $G = (V, E)$ and let $P_G$ be the induced poset of $G$ with respect to its double-arborescence orientation. If $C_{\text{long}}$ is a longest chain in $P_G$, then note that the size of $C_{\text{long}}$ is $k$. Let $A = \{a \in V \mid a \preceq r \ \text{in} \ P_G\}$ and $B = \{a \in V \mid r \preceq a \ \text{in} \ P_G\}$ so that $A \cap B = \{ r \}$ and $A \cup B = V$. Let $P_A$ and $P_B$ be the subposets of $P_G$ on the vertex sets $A$ and $B$, respectively. Then the Hasse diagrams of both $P_A$ and $P_B$ are rooted trees with root $r$. Note that $r$ is the greatest element of $P_A$ and the least element of $P_B$. Let $G_A$ and $G_B$ be the induced subgraphs of $G$ on the vertex sets $A$ and $B$, respectively. It can be observed that both $G_A$ and $G_B$ are arborescences. Let $C^A_{\text{long}} = C_{\text{long}} \cap A$ and $C^B_{\text{long}} = C_{\text{long}} \cap B$ so that $C^A_{\text{long}}$ and $C^B_{\text{long}}$ are longest chains in $P_A$ and $P_B$, respectively. We run Algorithm \ref{algo-1} separately on $P_A$ and $P_B$ by considering the respective longest chains $C^A_{\text{long}}$ and $C^B_{\text{long}}$.
Let $u$ and $v$ be minimum-word-representants of $G_A$ and $G_B$, respectively, obtained from Algorithm \ref{algo-1}. Suppose that $u = u_1ru_2$ and $v = v_1rv_2$. Note that the reversal of $u$, namely $u^R = u^R_2ru^R_1$, also represents $G_A$. We show that the word $w = u^R_2v_1ru^R_1v_2$ represents the graph $G$. First note that $w_{A} = u^R_2ru^R_1$. Hence, any two vertices $a$ and $a'$ of $G_A$ are adjacent in $G$ if and only if $a$ and $a'$ alternate in $w$. Also, $w_B = v_1rv_2$ so that any two vertices $b$ and $b'$ of $G_B$ are adjacent in $G$ if and only if they alternate in $w$. We now show that $r$ alternates with any $a \in V \setminus \{r\}$ in $w$ in the following cases: \begin{itemize} \item $a \in A \setminus \{r\}$ such that $a \in C^A_{\text{long}}$: Then $a$ occurs exactly once in $u^R$ as well as in $w$. Moreover, since $r$ occurs exactly once in $w$, it follows that $a$ and $r$ alternate in $w$. Similarly, if $a \in B \setminus \{r\}$ such that $a \in C^B_{\text{long}}$, then both $a$ and $r$ occur exactly once in $w$ so that they alternate in $w$. \item $a \in A \setminus \{r\}$ such that $a \notin C^A_{\text{long}}$: Then $a$ occurs exactly twice in $u^R$ as well as in $w$. Further, in view of Remark \ref{structure_w}, we have $w_{\{a, r\}} = ara$. Similarly, when $a \in B \setminus \{r\}$ but $a \notin C^B_{\text{long}}$, then $w_{\{a, r\}} = ara$. \end{itemize} Finally, let $a \in A \setminus \{r\}$ and $b \in B \setminus \{r\}$. In view of Remark \ref{structure_w}, the positions of $a$'s in $u_1$ and $u_2$ and the positions of $b$'s in $v_1$ and $v_2$ are evident, as shown in the following cases, so that $a$ and $b$ alternate in $w$. \begin{itemize} \item If $a \in C^A_{\text{long}}$ and $b \in C^B_{\text{long}}$, then $w_{\{a, b\}} = ba$. \item If $a \in C^A_{\text{long}}$ and $b \notin C^B_{\text{long}}$, then $w_{\{a, b\}} = bab$. \item If $a \notin C^A_{\text{long}}$ and $b \in C^B_{\text{long}}$, then $w_{\{a, b\}} = aba$.
\item If $a \notin C^A_{\text{long}}$ and $b \notin C^B_{\text{long}}$, then $w_{\{a, b\}} = abab$. \end{itemize} Hence, the word $w$ represents the graph $G$. Note that the elements of $C_{\text{long}}$ appear exactly once in $w$ while the rest of the elements of $V$ appear twice in $w$ so that the length of the word $w$ is $2n-k$. Further, since the size of a maximum clique in $G$ is $k$, we have $\ell(G) = 2n - k$. \qed \end{proof} \section{Conclusion} The characterizations of (double-)arborescences given in this work may be useful in addressing various enumeration-related problems of double-arborescences. In \cite{Enumeration_arborescenes}, arborescences were enumerated both exactly and asymptotically. However, no such results are available for double-arborescences. Considering the characterizations based on split decomposition for certain subclasses of distance-hereditary graphs, some grammars enumerating those subclasses were produced in \cite{MR3907778}. Since our characterizations of double-arborescences and arborescences given in Corollary \ref{ch_double_arb} and Corollary \ref{main_th} rely on split-decomposition trees, taking a cue from the work in \cite{MR3907778}, one may produce grammars to enumerate the (double-)arborescences. Moreover, the recognition algorithm for treelike comparability graphs given in \cite[Theorem 5]{cornelsen2009treelike} may be customized to both double-arborescences and arborescences, using the characterizations given in this work. Considering the open problem given in \cite{Eshwar_2024}, we established that the length of a minimum-word-representant of a double-arborescence on $n$ vertices with clique number $k$ is $2n-k$. Based on our observations, we conjecture that the length of a minimum-word-representant of a path of double-arborescences is also $2n-k$. Note that a minimum-word-representant of a double-arborescence or an arborescence need not be unique.
In fact, the minimum-word-representant produced by Algorithm \ref{algo-1} on a double-arborescence may differ depending on the maximum size clique chosen. It is also interesting to enumerate all minimum-word-representants of double-arborescences and arborescences, in line with the work in \cite{Marisa_2020}. \begin{thebibliography}{10} \bibitem{MR3907778} M.~Bahrani and J.~Lumbroso. \newblock Enumerations, forbidden subgraph characterizations, and the split-decomposition. \newblock {\em Electron. J. Combin.}, 25(4):Paper No. 4.47, 45, 2018. \bibitem{DH_graph} H.-J. Bandelt and H.~M. Mulder. \newblock Distance-hereditary graphs. \newblock {\em J. Combin. Theory Ser. B}, 41(2):182--208, 1986. \bibitem{Bose_1998} P.~Bose, J.~F. Buss, and A.~Lubiw. \newblock Pattern matching for permutations. \newblock {\em Inform. Process. Lett.}, 65(5):277--283, 1998. \bibitem{circlegraph3} A.~Bouchet. \newblock Reducing prime graphs and recognizing circle graphs. \newblock {\em Combinatorica}, 7(3):243--254, 1987. \bibitem{Bouchet_1} A.~Bouchet. \newblock Transforming trees by successive local complementations. \newblock {\em J. Graph Theory}, 12(2):195--207, 1988. \bibitem{Enumeration_arborescenes} R.~Castelo and N.~Wormald. \newblock Enumeration of {$P_4$}-free chordal graphs. \newblock {\em Graphs Combin.}, 19(4):467--474, 2003. \bibitem{chu2008simple} F.~P.~M. Chu. \newblock A simple linear time certifying {LBFS}-based algorithm for recognizing trivially perfect graphs and their complements. \newblock {\em Inform. Process. Lett.}, 107(1):7--12, 2008. \bibitem{cicerone1999extension} S.~Cicerone and G.~Di~Stefano. \newblock On the extension of bipartite to parity graphs. \newblock {\em Discrete Appl. Math.}, 95(1-3):181--195, 1999. \bibitem{cormen} T.~H. Cormen, C.~E. Leiserson, R.~L. Rivest, and C.~Stein. \newblock {\em Introduction to algorithms}. \newblock MIT Press, Cambridge, MA, third edition, 2009. \bibitem{cornelsen2009treelike} S.~Cornelsen and G.~Di~Stefano. 
\newblock Treelike comparability graphs. \newblock {\em Discrete Appl. Math.}, 157(8):1711--1722, 2009. \bibitem{cunningham_2} W.~H. Cunningham. \newblock Decomposition of directed graphs. \newblock {\em SIAM J. Algebraic Discrete Methods}, 3(2):214--228, 1982. \bibitem{cunningham_1} W.~H. Cunningham and J.~Edmonds. \newblock A combinatorial decomposition theory. \newblock {\em Canadian J. Math.}, 32(3):734--765, 1980. \bibitem{Stefano_2012} G.~Di~Stefano. \newblock Distance-hereditary comparability graphs. \newblock {\em Discrete Appl. Math.}, 160(18):2669--2680, 2012. \bibitem{dobson2004treelike} M.~P. Dobson, M.~Gutierrez, and J.~L. Szwarcfiter. \newblock Treelike comparability graphs. \newblock In {\em Latin-{A}merican {C}onference on {C}ombinatorics, {G}raphs and {A}pplications}, volume~18 of {\em Electron. Notes Discrete Math.}, pages 97--102. Elsevier Sci. B. V., Amsterdam, 2004. \bibitem{tithi} T.~Dwary and K.~V. Krishna. \newblock Word-representability of graphs with respect to split recomposition. \newblock {\em Discrete Applied Mathematics}, 357:310--321, 2024. \bibitem{Even_1972} S.~Even, A.~Pnueli, and A.~Lempel. \newblock Permutation graphs and transitive graphs. \newblock {\em J. Assoc. Comput. Mach.}, 19:400--410, 1972. \bibitem{Marisa_2020} M.~Gaetz and C.~Ji. \newblock Enumeration and extensions of word-representants. \newblock {\em Discrete Appl. Math.}, 284:423--433, 2020. \bibitem{DHgraph1} C.~Gavoille and C.~Paul. \newblock Distance labeling scheme and split decomposition. \newblock {\em Discrete Math.}, 273(1-3):115--130, 2003. \bibitem{graph-labelled_2012} E.~Gioan and C.~Paul. \newblock Split decomposition and graph-labelled trees: characterizations and fully dynamic algorithms for totally decomposable graphs. \newblock {\em Discrete Appl. Math.}, 160(6):708--733, 2012. \bibitem{golumbic1978trivially} M.~C. Golumbic. \newblock Trivially perfect graphs. \newblock {\em Discrete Math.}, 24(1):105--107, 1978. \bibitem{golumbic2022they} M.~C. 
Golumbic. \newblock Why are they called trivially perfect graphs? \newblock {\em Cadernos do IME-S{\'e}rie Inform{\'a}tica}, 47:40--45, 2022. \bibitem{MR2914710} M.~M. Halld\'{o}rsson, S.~Kitaev, and A.~Pyatkin. \newblock Alternation graphs. \newblock In {\em Graph-theoretic concepts in computer science}, volume 6986 of {\em Lecture Notes in Comput. Sci.}, pages 191--202. Springer, Heidelberg, 2011. \bibitem{jung_1978} H.~A. Jung. \newblock On a class of posets and the corresponding comparability graphs. \newblock {\em J. Combinatorial Theory Ser. B}, 24(2):125--133, 1978. \bibitem{words&graphs} S.~Kitaev and V.~Lozin. \newblock {\em Words and graphs}. \newblock Monographs in Theoretical Computer Science. An EATCS Series. Springer, Cham, 2015. \bibitem{MR2467435} S.~Kitaev and A.~Pyatkin. \newblock On representable graphs. \newblock {\em J. Autom. Lang. Comb.}, 13(1):45--54, 2008. \bibitem{Eshwar_2024} E.~Srinivasan and R.~Hariharasubramanian. \newblock Minimum length word-representants of word-representable graphs. \newblock {\em Discrete Appl. Math.}, 343:149--158, 2024. \bibitem{Trotterbook} W.~T. Trotter. \newblock {\em Combinatorics and partially ordered sets: Dimension theory}. \newblock Johns Hopkins Series in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD, 1992. \bibitem{wolk1962comparability} E.~S. Wolk. \newblock The comparability graph of a tree. \newblock {\em Proc. Amer. Math. Soc.}, 13:789--795, 1962. \bibitem{wolk1965note} E.~S. Wolk. \newblock A note on ``{T}he comparability graph of a tree''. \newblock {\em Proc. Amer. Math. Soc.}, 16:17--20, 1965. \bibitem{yan1996quasi} J.-H. Yan, J.-J. Chen, and G.~J. Chang. \newblock Quasi-threshold graphs. \newblock {\em Discrete Appl. Math.}, 69(3):247--255, 1996. \end{thebibliography} \end{document}
2412.17668v1
http://arxiv.org/abs/2412.17668v1
The Graph Coloring Game on $4\times n$-Grids
\documentclass{article} \usepackage[english]{babel} \usepackage{amsthm,amssymb} \usepackage[letterpaper, top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage{authblk} \usepackage{caption} \usepackage{subcaption} \usepackage{tikz} \usepackage{comment} \usepackage{ulem} \newtheorem{theorem}{Theorem} \newtheorem{observation}{Observation} \newtheorem{conjecture}{Conjecture} \newtheorem{claim}{Claim} \newtheorem{question}{Question} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{definition}{Definition} \newcommand{\MyBox}{\hbox{\vrule width 6pt height 6pt }} \newcommand{\qedclaim}{\hfill $\diamond$ \medskip} \newenvironment{proofclaim}{\noindent{\it Proof of Claim.}}{\qedclaim} \newenvironment{sketch}{\noindent{\it Sketch of proof.}}{\qedclaim} \newcommand{\x}{\square} \newcommand{\red}[1]{{\textcolor{red}{#1}}} \newcommand{\blue}[1]{{\textcolor{blue}{#1}}} \newcommand{\purple}[1]{{\textcolor{purple}{#1}}} \newcommand{\martins}[1]{{\textcolor{orange}{\textbf{Nicolas: }#1}}} \title{The Graph Coloring Game on $4\times n$-Grids\thanks{Research supported by the CAPES-Cofecub project Ma 1004/23, by the Inria Associated Team CANOE, by the research grants ANR P-GASE, ANR DiGraphs and by the French government, through the EUR DS4H Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-17-EURE-0004.}} \author[1,4]{Caroline Brosse} \author[2]{Nicolas Martins} \author[1]{Nicolas Nisse} \author[3]{Rudini Sampaio} \affil[1]{Universit\'{e} C\^{o}te d’Azur, Inria, CNRS, I3S, France.} \affil[2]{Inst. de Eng. e Desenvolvimento Sustent\'{a}vel, UNILAB, Brazil.} \affil[3]{Dept. 
Computação, Universidade Federal do Cear\'{a}, Brazil.} \affil[4]{Université d’Orléans, INSA Centre Val de Loire, LIFO EA 4022, Orléans, France} \date{} \begin{document} \maketitle \begin{abstract} The graph coloring game is a famous two-player game (re)introduced by Bodlaender in $1991$. Given a graph $G$ and $k \in \mathbb{N}$, Alice and Bob alternately (starting with Alice) color an uncolored vertex with some color in $\{1,\cdots,k\}$ such that no two adjacent vertices receive the same color. If eventually all vertices are colored, then Alice wins; otherwise, Bob wins. The game chromatic number $\chi_g(G)$ is the smallest integer $k$ such that Alice has a winning strategy with $k$ colors in $G$. It has been recently (2020) shown that, given a graph $G$ and $k\in \mathbb{N}$, deciding whether $\chi_g(G)\leq k$ is PSPACE-complete. Surprisingly, this parameter is not well understood even in ``simple'' graph classes. Let $P_n$ denote the path with $n\geq 1$ vertices. For instance, in the case of Cartesian grids, it is easy to show that $\chi_g(P_m \x P_n) \leq 5$ since $\chi_g(G)\leq \Delta+1$ for any graph $G$ with maximum degree $\Delta$. However, the exact value is only known for small values of $m$, namely $\chi_g(P_1\x P_n)=3$, $\chi_g(P_2\x P_n)=4$ and $\chi_g(P_3\x P_n) =4$ for $n\geq 4$ [Raspaud, Wu, 2009]. Here, we prove that, for every $n\geq 18$, $\chi_g(P_4\x P_n) =4$. \end{abstract} \section{Introduction} Given a graph $G=(V,E)$ and $k \in \mathbb{N}$, the {\it $k$-coloring game} involves two players, Alice and Bob, who alternately (starting with Alice) choose an uncolored vertex $v \in V$ and assign to it a color $c(v) \in \{1,\cdots,k\}$ such that, for every colored neighbor $u$ of $v$, $c(u)\neq c(v)$ ({\it i.e.}, at every turn, the partial coloring must be {\it proper}).
Alice wins if, eventually, all vertices of $G$ are colored; Bob wins otherwise, {\it i.e.}, if at some turn, there exists an uncolored vertex that has, for each $1 \leq i \leq k$, a neighbor already colored with color $i$. Let $\chi_g(G)$ be the smallest integer $k$ such that Alice has a winning strategy in the $k$-coloring game, {\it i.e.}, Alice can win regardless of Bob's strategy. Note that, for any graph $G$ with maximum degree $\Delta(G)$, $\chi_g(G) \leq \Delta(G)+1$ since, for every vertex $v \in V$, Alice can always choose a color $c(v)\leq \Delta(G)+1$ for $v$ that does not appear in its neighborhood. The coloring game was introduced by Brams in the context of map coloring and was described by Gardner in 1981 in his ``Mathematical Games'' column of Scientific American~\cite{gardner81}. It remained unnoticed until Bodlaender \cite{bodlaender91} reinvented it in 1991 as the ``Coloring Construction Game''. It was only recently proved that deciding whether $\chi_g(G)\leq k$ is PSPACE-complete~\cite{costa20}. Thereafter, many games based on classical variants of the coloring problem were proved PSPACE-complete, such as the greedy coloring game \cite{lima22} and the connected coloring game \cite{lima23}. From the above complexity results, it is natural to investigate the coloring game in more restricted graph classes. This has been done in trees~\cite{faigle1993game,FRS23,DBLP:journals/entcs/FurtadoDFG19,DBLP:journals/rairo/FurtadoPDF23}. In particular, it has been proved that $\chi_g(T)\leq 4$ for every tree $T$~\cite{faigle1993game}, and sufficient conditions for trees $T$ ensuring that $\chi_g(T) = 4$ are provided in~\cite{FRS23}. In other graph classes, it was proved that $\chi_g(G)\leq 7$ in outerplanar graphs \cite{kierstad94}, $\chi_g(G)\leq 3k+2$ in partial $k$-trees~\cite{zhu00} and $\chi_g(G)\leq 5$ in cacti~\cite{game-cactus07}.
Moreover, $\chi_g(G)\leq 17$~\cite{zhu08} in planar graphs, $\chi_g(G)\leq 13$ in planar graphs with girth at least 4~\cite{sekiguchi14} and $\chi_g(G)\leq 5$ in planar graphs with girth at least 7~\cite{nakprasit18}. Finally, tori and grids with at most $3$ rows have been considered in~\cite{DBLP:journals/ipl/RaspaudW09}. Here, we consider the grids with four rows and prove that $\chi_g(P_4 \x P_n)\leq 4$, for every $n\in \mathbb{N}$. \section{Preliminaries} Given two graphs $G$ and $H$, $G'=G \x H$ denotes the Cartesian product of $G$ and $H$, that is, the graph with vertex set $V(G) \times V(H)$ such that, for every $a,b \in V(G)$ and $x,y \in V(H)$, there is an edge $\{(a,x),(b,y)\}$ in $G'$ if and only if either $a=b$ and $\{x,y\} \in E(H)$, or $x=y$ and $\{a,b\} \in E(G)$. For any $m$ and $n$, the grid of dimension $m \times n$ is isomorphic to the Cartesian product of the two paths $P_m$ and $P_n$. In what follows, we will then use the notation $P_m \x P_n$ to denote the $m\times n$ grid. The same holds for the cylinder $P_m\x C_n$, which is the Cartesian product of a path $P_m$ and a cycle $C_n$. \begin{claim} $\chi_g(P_4\x P_n)\leq 5$ and $\chi_g(P_4\x C_n) \leq 5$. \end{claim} \begin{proof} This is direct since the maximum degree is at most $4$: if five colors can be used, then at least one color is available for any uncolored vertex. \end{proof} The notation $v_{i,j}$ is used for the vertex in row $i$ and column $j$ of a grid. We consider that the rows are numbered from bottom to top and the columns are numbered from left to right. Moreover, in this paper we consider only proper colorings with $4$ colors $\{1,2,3,4\}$. For any vertex $v$, we denote the color of the vertex $v$ by $c(v) \in \{0,1,2,3,4\}$, the value $c(v) = 0$ standing for an uncolored vertex. \begin{claim}\label{claim:lower} Let $n \geq 18$. Then, $\chi_g(P_4\x P_n) \geq 4$ and $ \chi_g(P_4\x C_n) \geq 4$. \end{claim} \begin{proof} Let us consider a game with three available colors.
Consider the first turn of Bob and let $j$ be a column such that no vertex has been colored yet in columns $k \in \{j-4,\cdots,j+4\}$ (such $j$ exists since $n\geq 18$). Bob colors $v_{2,j}$ with color $1$. W.l.o.g., Alice's next move does not color any vertex in columns $k \in \{j+1,j+2,j+3,j+4\}$. Then, Bob colors $v_{2,j+2}$ with color $c=2$ if Alice has colored neither $v_{1,j}$ nor $v_{3,j}$; and if Alice has colored $v_{1,j}$ or $v_{3,j}$ with some color $c\neq 1$ (necessarily $c \in \{2,3\}$), then Bob colors $v_{2,j+2}$ with that same color $c$. If, during her next turn, Alice does not color $v_{2,j+1}$, then Bob can color at least one of $v_{1,j+1}$ and $v_{3,j+1}$ with the color in $\{2,3\} \setminus \{c\}$, and then $v_{2,j+1}$ has three neighbors with distinct colors. Hence, Alice must color $v_{2,j+1}$, necessarily with the color $c'$ in $\{2,3\} \setminus \{c\}$. If $v_{1,j}$ (resp. $v_{3,j}$) is colored with $c$, then Bob colors $v_{1,j+2}$ (resp. $v_{4,j+1}$) with color $1$, and then $v_{1,j+1}$ (resp. $v_{3,j+1}$) cannot be colored anymore. Otherwise, $v_{1,j}$ and $v_{3,j}$ are not colored and $c=2$ and $c'=3$. Bob colors $v_{4,j+2}$ with color $3$. If Alice does not color $v_{3,j+2}$, then Bob colors $v_{3,j+1}$ or $v_{3,j+3}$ with $1$ and $v_{3,j+2}$ cannot be colored anymore. Otherwise, Alice colors $v_{3,j+2}$ with $1$; then Bob colors $v_{4,j+1}$ with $2$ and $v_{3,j+1}$ cannot be colored anymore. \end{proof} Let $\chi^{\cal B}_g(G)$ be the smallest number of colors ensuring that Alice wins when Bob starts. \begin{lemma}\label{lem:easy} Let $n \geq 9$. Then, $\chi^{\cal B}_g(P_4\x P_n) = \chi^{\cal B}_g(P_4\x C_n) = 4$. \end{lemma} \begin{proof} The lower bound is similar to the previous one (except that we only need $n \geq 9$ since Bob starts). A winning strategy for Alice proceeds as follows. Each time Bob colors a vertex $v_{a,j}$ with color $c$, Alice colors the (unique) vertex $v_{b,j}$ on column $j$ at distance $2$ from $v_{a,j}$ (i.e., $|a-b|=2$) with the color $c$.
It is clear that Alice can always follow this strategy since $c(v_{a,j-1})=c(v_{b,j-1})$, $c(v_{a,j+1})=c(v_{b,j+1})$ and both neighbors of $v_{b,j}$ in column $j$ have the same color. Hence, whenever a vertex $v$ is being colored, either its two neighbors on the same column are uncolored, or they are colored with the same color, or $v$ has at most three neighbors. In all cases, at most four colors are sufficient. \end{proof} Unfortunately, the simple strategy described above does not work when Alice starts. Indeed, let $n\geq 6$ and assume that, after her first move, Alice follows the strategy above. Let $i\leq n$ be the column of the first vertex colored by Alice, and let $j\leq n$ be such that $i \notin \{j-1,j,j+1\}$. Bob colors $v_{1,j-1}$ and $v_{2,j+1}$ with color $1$, and $v_{2,j-1}$ and $v_{1,j+1}$ with color $2$. Then, since Alice has started, Bob can ensure that she is the first to color some vertex of column $j$ (unless the game has stopped before). W.l.o.g., she colors the vertex $v_{a,j}$ with color $3$; then Bob colors the vertex $v_{b,j}$ such that $|a-b|=2$ with color $4$. The common neighbor of $v_{a,j}$ and $v_{b,j}$ will then require a fifth color. \section{Graph coloring game in \texorpdfstring{$P_4 \x P_n$}{P4xPn}} Let us consider a partial proper $4$-coloring of the grid $G=P_4 \x P_n$. The \emph{columns} of $G$ are the copies of $P_4$ in $P_4 \x P_n$. A column is \emph{empty} if all its vertices are uncolored. A \emph{block} is any maximal subgrid with at least one colored vertex in each of its columns. As an exception\footnote{This exception is for technical reasons, just to avoid extra cases in the proof below.}, if the last column is empty and the penultimate column is not, we include the last column in the block of the penultimate column. Note that a block is a subgrid $B$ of the form $P_4 \x P_m$, with $m\leq n$. The \emph{right border} (resp. \emph{left border}) of a block $B$ is its rightmost (resp. leftmost) column.
Note that, if the right border (resp. left border) of a block is the $j^{th}$ column of $G$, then no vertex of the $(j+1)^{th}$ column (resp. of the $(j-1)^{th}$ column) of $G$ is colored. When a block consists of a single column (except for the first column), we consider it as a left border. That is, when we mention the right border of a block, this implies that this block has at least two columns (this will be important in the design of the algorithm and in its proof). A vertex is \emph{inside} a block $B$ if it is a vertex of $B$ but not of its borders. A right (resp. left) border of a block at column $1<j<n$ is \emph{free} if columns $j+1$ and $j+2$ (resp. $j-1$ and $j-2$) are empty. \paragraph{Safe and sound vertices.} Let us now define certain statuses of vertices that will be important in what follows. Intuitively, a vertex $v$ of a block is called {\it safe} if, regardless of the strategies of Alice and Bob (with $4$ colors) starting from this state ({\it i.e.} from the current partial $4$-coloring), the vertex $v$ will eventually be colored at the end of the game. More precisely, a vertex is \emph{safe} if (a) it is already colored, or (b) it has at most three neighbors, or (c) at least two of its neighbors are already colored with the same color, or (d) it has three neighbors colored with distinct colors $c,c',c''$ and its last neighbor is uncolored and adjacent to a vertex colored with the fourth color (which makes it always available for $v$). In other words, for any uncolored safe vertex, at least one of the four colors is available to color this vertex at any moment of the game. An unsafe (not safe) vertex $v$ in a block $B$ is \emph{sound} if it is associated with two uncolored safe neighbors $x$ and $y$ in $B$, called the \emph{doctors} of $v$, in such a way that every safe vertex is the doctor of at most one (unsafe) vertex called its \emph{patient}.
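Conditions (a)--(d) above are local to a vertex and its neighbors, so safety can be tested mechanically on any partial coloring. The following is a minimal sketch of such a test for a $4\times n$ grid; the dictionary encoding of the partial coloring and the function names are ours, not from the paper.

```python
# Sketch of the "safe" test (conditions (a)-(d)) on a partial 4-coloring
# of the grid P4 x Pn.  The coloring is a dict c: (row, col) -> color in
# {1,2,3,4}; absent keys (or value 0) mean "uncolored".  Rows are numbered
# 1..4 from bottom to top and columns 1..n from left to right, as in the
# paper.  Encoding and names are illustrative assumptions only.

COLORS = {1, 2, 3, 4}

def neighbors(v, rows=4, cols=10):
    """Grid neighbors of v = (row, col)."""
    i, j = v
    for u in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if 1 <= u[0] <= rows and 1 <= u[1] <= cols:
            yield u

def is_safe(v, c, rows=4, cols=10):
    color = lambda u: c.get(u, 0)
    if color(v) != 0:                          # (a) v is already colored
        return True
    nbrs = list(neighbors(v, rows, cols))
    if len(nbrs) <= 3:                         # (b) at most three neighbors
        return True
    seen = [color(u) for u in nbrs if color(u) != 0]
    if len(seen) > len(set(seen)):             # (c) two neighbors share a color
        return True
    if len(seen) == 3:                         # (d) three distinct colors; the lone
        (missing,) = COLORS - set(seen)        #     uncolored neighbor "blocks" the 4th
        u = next(x for x in nbrs if color(x) == 0)
        if any(color(y) == missing for y in neighbors(u, rows, cols)):
            return True
    return False
```

For instance, a border vertex is safe by (b), and a degree-$4$ vertex whose two column neighbors share a color is safe by (c).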
The intuition for soundness is that, if Bob colors a doctor $x$ or $y$ of a patient $v$, then, in the next turn, Alice can always use an available color for $v$ and make $v$ safe. As an example, consider the configuration of Figure \ref{fig-firstex}, showing the middle of a block. Recall that the rows are numbered $1$ to $4$ from bottom to top. Notice that every vertex in columns $j$ to $j+2$ is safe or sound: indeed, every such vertex is safe except $v_{2,j+1}$ and $v_{3,j+2}$, which are sound, since we can associate the doctors $v_{2,j}$ and $v_{1,j+1}$ to the patient $v_{2,j+1}$, and the doctors $v_{2,j+2}$ and $v_{4,j+2}$ to the patient $v_{3,j+2}$. In this case, the safe uncolored vertex $v_{2,j+2}$ has two unsafe neighbors $v_{2,j+1}$ and $v_{3,j+2}$, but this is not a problem since $v_{2,j+2}$ is the doctor of only one patient ($v_{3,j+2}$). \begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0) --(3,0)--(4,0)--(5,0) ; \path[draw, very thick] (0,4) --(1,4)--(2,4) --(3,4)--(4,4)--(5,4) ; \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,0.5) node {$c$}; \draw (0.5,1.5) node {$c'$}; \draw (1.5,2.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,3.5) node {$0$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$c$}; \end{scope} \end{tikzpicture} \caption{Illustration of the notions of safeness and soundness in the middle of a block. Note that a vertex is depicted by a square.
The bold lines mark the boundary of the block. A $0$ in a cell means that the corresponding vertex is not colored; $c$ and $c'$ denote arbitrary colors (not $0$). It can be deduced from the figure that $c'\neq c$ since there are two adjacent vertices colored $c$ and $c'$, respectively (and the coloring is proper). A white cell indicates that the status of the corresponding vertex is not constrained (it may be colored or not). The $j$ above denotes the index of the corresponding column.} \label{fig-firstex} \end{figure} The intuition for the proof of the main theorem of this section is the following: during the game, after Alice's move, every vertex of a block is safe or sound. Actually, this is not completely sufficient, and we will need to consider a few exceptions defined later. \paragraph{Borders' configurations.} Let us describe some particular configurations for the border of a block. Let $j$ be the index of the border column of the block. \begin{figure}[hb]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4); \draw (2.5,4.5) node {$j$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config.
$\alpha$}; \end{scope} \begin{scope}[xshift=12cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,0.5) node {}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,3.5) node {}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config. $\beta$}; \end{scope} \begin{scope}[xshift=22cm, yshift=0cm, scale=2.5] \foreach \i in {3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(3,\j); \path[draw, densely dotted] (4,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(4,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4) ; \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.2) node {\tiny safe}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,-0.75) node {Config. $\gamma$}; \end{scope} \begin{scope}[xshift=36cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config. 
$\delta$}; \end{scope} \begin{scope}[xshift=50cm, yshift=0cm, scale=2.5] \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3}{ \path[draw, densely dotted] (0,\j)--(3,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4); \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {$c$}; \draw (0.5,2.5) node {$c'$}; \draw (0.5,1.5) node {$c''$}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.8,0.5) node {}; \draw (2.5,1.5) node {$c'$}; \draw (2.5,3.5) node {}; \draw (1.5,-0.75) node {Config. $\pi$}; \end{scope} \end{tikzpicture} \caption{Illustration of the different configurations in the case of a right border (column $j$). The colors $c,c',c''\ne 0$ are pairwise distinct. Each configuration has its symmetrical counterpart (according to the horizontal symmetry axis of the grid). The configurations for the left borders are defined symmetrically (according to the vertical symmetry axis of the grid).} \label{fig:configs} \end{figure} \begin{description} \item[Configuration $\alpha$.] A right/left border (at column $j$) is in configuration $\alpha$ if $c(v_{1,j})=c(v_{3,j}) \neq 0$ and, moreover, none of $v_{2,j}$ and $v_{4,j}$ are doctors, and if they are colored, then $c(v_{2,j})\neq c(v_{4,j})$ (or, symmetrically, if $c(v_{2,j})=c(v_{4,j}) \neq 0$ and moreover, if none of $v_{1,j}$ and $v_{3,j}$ are doctors and if they are colored, then $c(v_{1,j})\neq c(v_{3,j})$). Recall that, if a block consists of a single column (except for the first column) in configuration $\alpha$, we consider that it is a left border to avoid ambiguity. \item[Configuration $\beta$.] 
A right border (at column $j$) is in configuration $\beta$ if $c(v_{2,j-1})=c(v_{3,j}) \neq 0$ and $c(v_{2,j})=0$ (or, symmetrically, if $c(v_{3,j-1})=c(v_{2,j}) \neq 0$ and $c(v_{3,j})=0$). The $\beta$ configuration for a left border is defined symmetrically (according to the vertical symmetry axis of the grid). \item[Configuration $\gamma$.] A right border (at column $j$) is in configuration $\gamma$ if $c(v_{1,j})=c(v_{4,j})=c(v_{3,j-1})=c\neq 0$, $c(v_{2,j-1})=c(v_{2,j})=c(v_{3,j})=0$ and $v_{2,j-1}$ is safe. The mirror image with respect to the horizontal axis of the grid is also a configuration $\gamma$. Analogously for a left border. Observe that $v_{2,j-1}$ being safe implies that either $c(v_{2,j-2})=c$ or $c(v_{2,j-2})=c(v_{1,j-1})$. Therefore $v_{2,j-1}$ is not the doctor of a vertex inside its block, and a block with a border in configuration $\gamma$ contains at least $3$ columns. \item[Configuration $\delta$.] A right border (at column $j$) is in configuration $\delta$ if $c(v_{3,j-1}) = c(v_{4,j})=c$ and $c(v_{2,j})=c'\neq c$. The mirror image with respect to the horizontal axis of the grid is also a configuration $\delta$, and the configuration $\delta$ for a left border is defined symmetrically. \item[Configuration $\pi$.] A right border (at column $j$) is in configuration $\pi$ if $c(v_{1,j})=c(v_{4,j})=c$, $c(v_{3,j})=c(v_{2,j+2})=c'$, $c(v_{2,j})=c''$ and $c,c',c''\ne 0$ are pairwise distinct. The mirror image with respect to the horizontal axis of the grid is also a configuration $\pi$, and the configuration $\pi$ for a left border is defined symmetrically. \end{description} The five configurations are depicted in Figure~\ref{fig:configs} (in the case of a right border). In the proof of the next theorem, the notation is the same as in Figure~\ref{fig:configs}, assuming the orientation of the configurations with right border at column $j$, except when it is explicitly mentioned that a left border is considered.
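As a concrete illustration, the test for configuration $\beta$ at a right border can be sketched mechanically. The Python encoding below (a partial coloring as a dictionary from $(\text{row},\text{column})$ pairs to colors, with $0$ meaning uncolored, and the function names) is ours, chosen only for illustration; it is not part of the paper.

```python
# Sketch (our encoding): test whether a right border at column j is in
# configuration beta.  Rows are 1..4; color 0 means "uncolored".

def color(grid, row, col):
    """Color of vertex v_{row,col}; 0 if uncolored or absent."""
    return grid.get((row, col), 0)

def is_beta_right_border(grid, j):
    """Configuration beta at a right border (column j):
    c(v_{2,j-1}) = c(v_{3,j}) != 0 and c(v_{2,j}) = 0, or symmetrically
    c(v_{3,j-1}) = c(v_{2,j}) != 0 and c(v_{3,j}) = 0."""
    direct = (color(grid, 2, j - 1) == color(grid, 3, j) != 0
              and color(grid, 2, j) == 0)
    mirror = (color(grid, 3, j - 1) == color(grid, 2, j) != 0
              and color(grid, 3, j) == 0)
    return direct or mirror

# Example: v_{2,3} and v_{3,4} share color 1, v_{2,4} uncolored.
assert is_beta_right_border({(2, 3): 1, (3, 4): 1}, 4)
```

The symmetric tests for left borders follow by mirroring the column indices.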
In order to avoid ambiguity, if a border is in both configurations $\alpha$ and $\beta$, then we say that it is in configuration $\beta$. Note that all vertices in the border of a block in configuration $\alpha$, $\beta$, $\delta$ or $\pi$ are safe. Also note that every vertex of configuration $\gamma$ shown in Figure \ref{fig:configs} is safe, except $v_{2,j}$, which is sound since we can associate the doctors $v_{2,j-1}$ and $v_{3,j}$ with $v_{2,j}$ (their only patient). \begin{observation} Every vertex of a border of a block in configuration $\alpha$, $\beta$, $\gamma$, $\delta$ or $\pi$ is safe or sound. \end{observation} Moreover, notice that, in configurations $\beta$, $\delta$ and $\pi$, there is no doctor in the border. In configuration $\gamma$, there is one doctor and its patient in the border. In configuration $\alpha$, there may be a doctor in the border and its possible patient would be inside the block, but, actually, we have added the constraint that no vertex of a border in configuration $\alpha$ can be a doctor. \paragraph{Particular configurations.} Let us define seven particular configurations: $\Delta$, $\Delta'$, $\Delta'_2$, $\Lambda$, $\Lambda'$, $\Lambda_2$ and $\Lambda_2'$, five of them depicted in Figure~\ref{fig-delta}. Configurations $\Lambda_2$ and $\Lambda_2'$ are obtained from Configurations $\Lambda$ and $\Lambda'$, respectively, by replacing exactly one 0 (zero) of column $j-2$ by a color different from $c$, with the additional constraint that column $j-3$ cannot be empty. All seven particular configurations have their symmetrical counterpart according to the horizontal axis. However, only configurations $\Delta'$ and $\Delta'_2$ have their symmetrical counterpart according to the vertical axis. That is, configurations $\Lambda, \Lambda', \Lambda_2$ and $\Lambda'_2$ are only defined for column $j$ being their left border, and, in configuration $\Delta$, column $j+2$ will always be its right border. 
\begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[red!20] (1,2) rectangle (2,3); \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(3,0)--(3,4)--(0,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.5,1.5) node {}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c'$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw(1.4,-0.7) node {Configuration $\Delta$}; \end{scope} \begin{scope}[xshift=17cm, yshift=0cm, scale=2.5] \fill[red!20] (4,2) rectangle (5,3); \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(5,\j);} \path[draw, very thick] (0,0)--(6,0); \path[draw, very thick] (0,4)--(6,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$c$}; \draw (5.5,1.5) node {$c$}; \draw (3.0,-0.7) node {Configuration $\Delta'$}; \end{scope} \begin{scope}[xshift=37cm, yshift=0cm, scale=2.5] \fill[red!20] (3,1) rectangle (4,2); \foreach
\i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(5,0); \path[draw, very thick] (0,4)--(5,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,1.5) node {$c'$}; \draw (4.5,0.5) node {$c$}; \draw (2.5,-0.7) node {Configuration $\Delta'_2$}; \end{scope} \begin{scope}[xshift=5cm, yshift=-20cm, scale=2.5] \fill[red!20] (2,2) rectangle (3,3); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4); \path[draw, very thick] (5,0)--(3,0)--(3,4)--(5,4); \draw (3.5,4.5) node {$j$}; \draw (4.5,1.5) node {$c''$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Configuration $\Lambda$}; \end{scope} \begin{scope}[xshift=30cm, yshift=-20cm, scale=2.5] \fill[red!20] (2,2) rectangle (3,3); \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4); \path[draw, very thick] (6,0)--(3,0)--(3,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw
(4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$c$}; \draw (5.6,2.5) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (3.3,-0.75) node {Configuration $\Lambda'$}; \end{scope} \end{tikzpicture} \caption{Configurations $\Delta$, $\Delta'$, $\Delta'_2$, $\Lambda$ and $\Lambda'$, where $c'\ne c$ and $c''\ne c'$. The vertex marked ``safe" in configurations $\Delta'$ and $\Delta'_2$ is not a doctor of some vertex in column $j-2$ (indeed, for it to be safe, its neighbor in column $j-2$ must be colored). Moreover, the vertex marked ``safe" in configuration $\Lambda'$ guarantees that $v_{2,j+1}$ is sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$). Configurations $\Lambda, \Lambda', \Lambda_2, \Lambda'_2$ and $\Delta$ have no symmetrical counterpart according to the vertical axis. Recall that column $j-3$ cannot be empty in configurations $\Lambda_2$ and $\Lambda'_2$. The red vertices are called the sick vertices.} \label{fig-delta} \end{figure} Note that in configuration $\Delta$, the vertex $v_{3,j+1}$ is neither safe nor sound, because $v_{3,j+2}$ (belonging to a border in configuration $\alpha$) cannot be its doctor. Moreover, in configuration $\Delta'$ of Figure \ref{fig-delta}, there are three uncolored unsafe vertices which are potential patients: $v_{2,j}$, $v_{2,j+1}$, and $v_{3,j+2}$. The doctors $v_{2,j-1}$ and $v_{3,j}$ are associated with the patient $v_{2,j}$ (which is then sound).
However, there is no way to associate doctors to the patients $v_{2,j+1}$ and $v_{3,j+2}$, since there are only three possible doctors for both of them (notice that $v_{3,j+3}$ can be colored or belong to a border in configuration $\alpha$). In what follows, we will associate doctors $v_{1,j+1}$ and $v_{2,j+2}$ to the patient $v_{2,j+1}$ (making it sound) and leave the vertex $v_{3,j+2}$ neither safe nor sound. Similarly, in configuration $\Delta'_2$, the vertex $v_{2,j}$ is sound (with doctors $v_{3,j}$ and $v_{2,j-1}$), but $v_{2,j+1}$ is neither safe nor sound. Configurations $\Lambda$, $\Lambda'$, $\Lambda_2$, and $\Lambda'_2$ are defined to deal with an empty column between two blocks that contains exactly one unsafe vertex that we cannot make sound. We say that a vertex is \emph{sick} if it is either the vertex $v_{3,j+1}$ of configuration $\Delta$, or the vertex $v_{3,j+2}$ of configuration $\Delta'$, or the vertex $v_{2,j+1}$ of configuration $\Delta'_2$, or the vertex $v_{3,j-1}$ of configurations $\Lambda$, $\Lambda'$, $\Lambda_2$ or $\Lambda_2'$ in Figure \ref{fig-delta} (indicated in red) or their symmetrical equivalents considering horizontal symmetry (for all the seven $\Delta$ and $\Lambda$ configurations) or vertical symmetry (only for configurations $\Delta'$ and $\Delta'_2$). Note that a sick vertex has two uncolored neighbors and so there is always an available color to be used for it, even after Bob colors one of its neighbors. Also note that the sick vertices of configurations $\Lambda$, $\Lambda'$, $\Lambda_2$ and $\Lambda_2'$ are not inside a block. \paragraph{Intuition of the algorithm.} Roughly, to prove our main theorem, we describe a strategy for Alice in $P_4 \x P_n$ such that, after each move of Alice, every border of a block is in configuration $\alpha$, $\beta$, $\gamma$, $\delta$ or $\pi$ (or is column $1$ or $n$). Moreover, every vertex of a block will be safe or sound or sick. 
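The availability argument for a sick vertex can be checked mechanically: since a sick vertex keeps two uncolored neighbors, even after Bob colors one of them, at most three of the four colors appear in its neighborhood. A minimal Python sketch (the encoding is ours, not the paper's):

```python
# Our sketch: a grid vertex has at most 4 neighbours.  If 2 of them are
# uncolored and Bob then colors one of the two, at most 3 distinct
# colors occur among the neighbours, so one of the 4 colors is free.
from itertools import product

COLORS = {1, 2, 3, 4}

def available(neighbor_colors):
    """Colors still usable at a vertex whose neighbours carry the
    given colors (0 = uncolored)."""
    return COLORS - {c for c in neighbor_colors if c != 0}

# Worst case for a sick vertex: two colored neighbours, two uncolored,
# then Bob colors one of the uncolored ones with an arbitrary color.
for c1, c2, bob in product(COLORS, repeat=3):
    assert available([c1, c2, bob, 0])  # some color always remains
```

This is exactly why a sick vertex can always be colored at the moment Alice chooses to do so.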
Then, for every possible move of Bob, we specify a valid move for Alice that ensures that the invariants still hold. In what follows, we say that two blocks $B$ and $B'$ (w.l.o.g., say $B$ is on the left of $B'$) are \emph{merged} if, before Bob's move, $B$ and $B'$ are separated by one or two empty columns and, after the next move of Bob and then of Alice, at least one vertex of each empty column separating $B$ and $B'$ is colored. In this case, the right border of $B'$ (resp. the left border of $B$) becomes the right (resp. left) border of the resulting new block. We are now ready to prove the main result. \begin{theorem} For any $n \geq 1$, $\chi_g(P_4\x P_n) \leq 4$. Moreover, $\chi_g(P_4\x P_n) = 4$ for any $n \geq 18$. \end{theorem} \begin{proof} The lower bound when $n\geq 18$ comes from Claim~\ref{claim:lower}. Now, let us describe a winning strategy for Alice with four colors. \begin{description} \item[First move of Alice.] First, Alice colors $v_{3,1}$ with color $1$. Here, we consider that column $1$ is in configuration $\beta$ assuming a virtual vertex $v_{2,0}$ in column $0$ (which actually does not exist) also colored with $1$ (indeed, vertex $v_{2,1}$ is safe because it has only 3 neighbors). \end{description} Now, by induction on the number of Alice's moves, let us assume that, after Alice's last move (and before Bob's next move), the reached partial proper coloring satisfies the following properties\footnote{Note that Properties $(3)$ and $(4)$ are redundant since they are included in the definition of configuration $\alpha$. We included them here to emphasize them.}: \begin{enumerate} \item Every vertex of a block is safe or sound or sick. \item Every border of a block is in configuration $\alpha$, $\beta$, $\gamma$, $\delta$ or $\pi$. \item No border has its $4$ vertices colored with only two colors. \item No vertex of a border in configuration $\alpha$ is a doctor. 
\item A left border in configuration $\alpha$ has two uncolored vertices, except column $j$ of configurations $\Lambda$, $\Lambda'$, $\Lambda_2$ and $\Lambda'_2$ (following the orientation of Figure \ref{fig-delta}). \end{enumerate} The induction hypothesis clearly holds after the first move of Alice. So, let us assume it is Bob's turn. Note that Bob can color any uncolored vertex using one of the four colors (this clearly holds for vertices not belonging to any block, since they are safe or have at least two uncolored neighbors, and it holds for vertices in a block by Property $(1)$). Hence, if we prove that, whatever Bob's next move is, Alice can color a vertex in such a way that the induction hypothesis still holds, then this will describe a winning strategy for Alice ({\it i.e.} all vertices will eventually be colored with at most $4$ colors). There are many cases, depending on which vertex Bob colors next. The cases must be considered in the order they are described below. That is to say, when the last move of Bob fits into several cases, the one described first has priority. In the strategy for Alice as described below, it should be clear that, after each of Alice's moves, the properties of the induction hypothesis still hold. The main difficulty is to check that all cases are considered. We have organized Bob's possible moves in such a way that it should be easy to check that no case has been forgotten. Before presenting all cases, we show in Figure \ref{fig:example} an example in $P_4\x P_n$ for $n\geq 10$. After Alice colors $v_{3,1}$ with $1$ in her first move, suppose that Bob colors $v_{2,3}$, $v_{3,5}$, $v_{2,7}$ and $v_{3,9}$ with color $1$ in his following four moves. We will see in case {\bf $3$-new} that Alice's responses are the ones in Figure \ref{fig:example}(a), that is, $v_{4,3}$, $v_{1,5}$, $v_{4,7}$ and $v_{1,9}$, where Alice's (resp. Bob's) moves are represented in red (resp. blue).
Notice that, at this point, there are five blocks, each one with exactly one column. Column $1$ is in configuration $\beta$ (as explained before in the first move of Alice) and columns $3$, $5$, $7$, and $9$ are in configuration $\alpha$, considered as left borders. From this, suppose that Bob colors $v_{1,7}$ with $2$. Then according to the strategy as explained in case $2\alpha'6$, Alice responds in $v_{3,8}$ with color $3$ or $4$ (say $3$), obtaining a configuration $\Lambda$ in columns $5$ to $7$ and a configuration $\Delta$ in columns $7$ to $9$ (see Figure \ref{fig:example}(b)). Next, suppose that Bob colors $v_{4,5}$ with $2$. Then, following case $2\alpha'6$ again, Alice responds in $v_{2,6}$ with color $3$ or $4$ (say $3$), killing the configuration $\Lambda$ of columns $5$ to $7$, maintaining the configuration $\Delta$ in columns $7$ to $9$ and obtaining a new configuration $\Lambda$ in columns $3$ to $5$ (see Figure \ref{fig:example}(c)). Finally, suppose that Bob colors $v_{1,3}$ with $2$. Then, according to the strategy as explained in case $2\alpha'1$, Alice responds in $v_{1,2}$ with color $1$, replacing the configuration $\Lambda$ of columns $3$ to $5$ with a configuration $\Lambda_2$ (see Figure \ref{fig:example}(d)). 
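As a sanity check of the walkthrough above, one can verify mechanically that the partial coloring reached in Figure \ref{fig:example}(d) is a proper partial coloring. The Python encoding below (vertices as $(\text{row},\text{column})$ pairs, absent entries uncolored) is ours, purely for illustration:

```python
# Our sketch: the partial coloring of Figure example(d), move by move.
coloring = {
    (3, 1): 1,                                   # Alice's first move
    (2, 3): 1, (3, 5): 1, (2, 7): 1, (3, 9): 1,  # Bob's moves (color 1)
    (4, 3): 1, (1, 5): 1, (4, 7): 1, (1, 9): 1,  # Alice's answers (case 3-new)
    (1, 7): 2, (3, 8): 3,                        # Bob, then Alice (case 2alpha'6)
    (4, 5): 2, (2, 6): 3,                        # Bob, then Alice (case 2alpha'6)
    (1, 3): 2, (1, 2): 1,                        # Bob, then Alice (case 2alpha'1)
}

def is_proper(coloring):
    """No two adjacent colored grid vertices share a color."""
    for (r, c), col in coloring.items():
        for nbr in ((r + 1, c), (r, c + 1)):     # each edge checked once
            if coloring.get(nbr) == col:
                return False
    return True

assert is_proper(coloring)
```

The same check can be run after each intermediate panel (a)--(c) of the figure.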
\begin{figure}[ht!]\centering \scalebox{1.0}{ \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (9,\j)--(10,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(9,\j);} \draw (-.5,0.5) node {$1$}; \draw (-.5,1.5) node {$2$}; \draw (-.5,2.5) node {$3$}; \draw (-.5,3.5) node {$4$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$\red{1}$}; \draw (0.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$\blue{1}$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$\red{1}$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$\red{1}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {\blue{1}}; \draw (4.5,3.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (6.5,0.5) node {$0$}; \draw (6.5,1.5) node {$\blue{1}$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,3.5) node {$\red{1}$}; \draw (7.5,0.5) node {$0$}; \draw (7.5,1.5) node {$0$}; \draw (7.5,2.5) node {$0$}; \draw (7.5,3.5) node {$0$}; \draw (8.5,0.5) node {$\red{1}$}; \draw (8.5,1.5) node {$0$}; \draw (8.5,2.5) node {$\blue{1}$}; \draw (8.5,3.5) node {$0$}; \draw (9.5,0.5) node {$0$}; \draw (9.5,1.5) node {$0$}; \draw (9.5,2.5) node {$0$}; \draw (9.5,3.5) node {$0$}; \draw (4,-1) node {(a)}; \foreach \i in {1,...,9}{ \draw (\i-.5,4.5) node {\i};} \end{scope} \begin{scope}[xshift=30cm, yshift=0cm, scale=2.5] \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (9,\j)--(10,\j);} 
\foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(9,\j);} \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$1$}; \draw (0.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$1$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$1$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$1$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$1$}; \draw (4.5,3.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (6.5,0.5) node {$\blue{2}$}; \draw (6.5,1.5) node {$1$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,3.5) node {$1$}; \draw (7.5,0.5) node {$0$}; \draw (7.5,1.5) node {$0$}; \draw (7.5,2.5) node {$\red{3}$}; \draw (7.5,3.5) node {$0$}; \draw (8.5,0.5) node {$1$}; \draw (8.5,1.5) node {$0$}; \draw (8.5,2.5) node {$1$}; \draw (8.5,3.5) node {$0$}; \draw (9.5,0.5) node {$0$}; \draw (9.5,1.5) node {$0$}; \draw (9.5,2.5) node {$0$}; \draw (9.5,3.5) node {$0$}; \draw (4,-1) node {(b)}; \foreach \i in {1,...,9}{ \draw (\i-0.5,4.5) node {\i};} \end{scope} \begin{scope}[xshift=0cm, yshift=-20cm, scale=2.5] \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (9,\j)--(10,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(9,\j);} \draw (-.5,0.5) node {$1$}; \draw (-.5,1.5) node {$2$}; \draw (-.5,2.5) node {$3$}; \draw (-.5,3.5) node {$4$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$1$}; \draw (0.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$1$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$1$}; 
\draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$1$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$1$}; \draw (4.5,3.5) node {$\blue{2}$}; \draw (5.5,0.5) node {$0$}; \draw (5.5,1.5) node {$\red{3}$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (6.5,0.5) node {$2$}; \draw (6.5,1.5) node {$1$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,3.5) node {$1$}; \draw (7.5,0.5) node {$0$}; \draw (7.5,1.5) node {$0$}; \draw (7.5,2.5) node {$3$}; \draw (7.5,3.5) node {$0$}; \draw (8.5,0.5) node {$1$}; \draw (8.5,1.5) node {$0$}; \draw (8.5,2.5) node {$1$}; \draw (8.5,3.5) node {$0$}; \draw (9.5,0.5) node {$0$}; \draw (9.5,1.5) node {$0$}; \draw (9.5,2.5) node {$0$}; \draw (9.5,3.5) node {$0$}; \draw (4,-1) node {(c)}; \foreach \i in {1,...,9}{ \draw (\i-0.5,4.5) node {\i};} \end{scope} \begin{scope}[xshift=30cm, yshift=-20cm, scale=2.5] \foreach \i in {0,1,2,3,4,5,6,7,8,9}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (9,\j)--(10,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(9,\j);} \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$1$}; \draw (0.5,3.5) node {$0$}; \draw (1.5,0.5) node {$\red{1}$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$\blue{2}$}; \draw (2.5,1.5) node {$1$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$1$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$1$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$1$}; \draw (4.5,3.5) node {$2$}; \draw (5.5,0.5) node {$0$}; \draw (5.5,1.5) node {$3$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (6.5,0.5) node {$2$}; \draw (6.5,1.5) node {$1$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,3.5) node {$1$}; \draw (7.5,0.5) node {$0$}; \draw (7.5,1.5) node 
{$0$}; \draw (7.5,2.5) node {$3$}; \draw (7.5,3.5) node {$0$}; \draw (8.5,0.5) node {$1$}; \draw (8.5,1.5) node {$0$}; \draw (8.5,2.5) node {$1$}; \draw (8.5,3.5) node {$0$}; \draw (9.5,0.5) node {$0$}; \draw (9.5,1.5) node {$0$}; \draw (9.5,2.5) node {$0$}; \draw (9.5,3.5) node {$0$}; \draw (4,-1) node {(d)}; \foreach \i in {1,...,9}{ \draw (\i-0.5,4.5) node {\i};} \end{scope} \end{tikzpicture}} \caption{Example of a sequence of moves in the game from (a) to (d), where Alice's (resp. Bob's) moves are represented in red (resp. blue).\label{fig:example}} \end{figure} In the following, we present all possible cases. \begin{description} \item[Case 1.] {\bf When Bob colors a vertex: \begin{itemize} \item[(a)] inside a block, or \item[(b)] in the border of configuration $\Delta$, or \item[(c)] in column $j$ or $j-1$ of configurations $\{\Lambda,\Lambda_2,\Lambda',\Lambda'_2\}$, or \item[(d)] in column $j-2$ of configurations $\{\Lambda,\Lambda'\}$, or \item[(e)] in column $j-2$ of configurations $\{\Lambda_2,\Lambda'_2\}$ if column $j-3$ is not empty. \end{itemize}} In other words, Bob does not color a vertex of a border -- except in configurations $\Delta$, $\Lambda$ and $\Lambda'$ -- nor a vertex in an empty column -- except possibly in column $n$ if it is empty and column $n-1$ is not\footnote{This corresponds to the exception in the definition of blocks.}. \begin{description} \item[Case 1$\Delta$.] When Bob colors a vertex of column $j+1$ or $j+2$ of a configuration $\Delta$ in Figure~\ref{fig-delta} (recall that configuration $\Delta$ is only defined for column $j+2$ being the right border). If Bob has colored the sick vertex, then Alice colors any uncolored vertex of column $j+1$. Otherwise, Alice colors the sick vertex with any available color making safe all vertices of column $j+1$. Note that, if Bob has colored a vertex in column $j+2$, this remains a border in configuration $\alpha$. 
In all cases, the sick vertex is colored and so this configuration $\Delta$ disappears. \item[Case 1$\Delta'$.] When Bob colors a vertex of configuration $\Delta'$ or $\Delta'_2$. The notations are the same as in Figure~\ref{fig-delta}; the symmetric cases (with respect to the vertical or horizontal symmetry) can be dealt with similarly. First, let us consider the configuration $\Delta'$. If Bob colors $v_{2,j-1}$ or $v_{3,j}$, Alice colors $v_{2,j+1}$ with the same color, making $v_{2,j}$ and $v_{2,j+1}$ safe, and $v_{3,j+2}$ sound (with doctors $v_{2,j+2}$ and $v_{4,j+2}$). If Bob colors $v_{2,j}$, Alice colors $v_{1,j+1}$ with the same color, making $v_{2,j+1}$ safe and $v_{3,j+2}$ sound. From now on, $v_{2,j}$ is always sound (with doctors $v_{2,j-1}$ and $v_{3,j}$). If Bob colors $v_{2,j+1}$ (resp. $v_{1,j+1}$), Alice colors $v_{1,j+1}$ (resp. $v_{2,j+1}$) with any available color, making $v_{3,j+2}$ sound. If Bob colors $v_{3,j+2}$, Alice colors $v_{2,j+1}$, making it safe. If Bob colors $v_{4,j+2}$ (resp., $v_{4,j+1}$), Alice colors $v_{3,j+2}$ with any color, making $v_{3,j+2}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{2,j+2}$). Finally, if Bob colors $v_{2,j+2}$ (with $c'\ne c$), Alice colors $v_{3,j+2}$, making it safe and obtaining the configuration $\Delta'_2$. Now let us assume Bob colors a vertex in a configuration $\Delta'_2$. If, after Bob's move, Alice can color $v_{2,j}$ with $c'$, she does so, making all vertices of columns $j$ and $j+1$ safe. Otherwise, Alice colors $v_{1,j+1}$ with $c'$. There are two cases: either Bob's previous move colored $v_{2,j}$, and then all vertices in columns $j$ and $j+1$ are safe, or Bob's previous move colored $z \in \{v_{2,j-1},v_{3,j}\}$ with color $c'$, in which case all vertices are safe except $v_{2,j}$, which is sound with doctors $v_{2,j+1}$ and $z'\in\{v_{2,j-1},v_{3,j}\}\setminus\{z\}$. \item[Case 1$\Lambda$.]
When Bob colors a vertex of column $j-2$, $j-1$ or $j$ of configuration $\Lambda$ or $\Lambda_2$. We follow the orientation of Figure \ref{fig-delta}. If Bob colors $v_{4,j-1}$, Alice colors $v_{2,j-1}$ with the same color. If Bob colors $v_{2,j-1}$ (resp. $v_{3,j-1}$), Alice colors $v_{3,j-1}$ (resp. $v_{2,j-1}$) with any available color. If Bob colors $v_{2,j}$, Alice colors $v_{3,j-1}$ with the same color if available, otherwise we are in configuration $\Lambda_2$ (with $v_{3,j-2}$ colored and $v_{1,j-2}$ uncolored) and then Alice plays the same color as Bob on $v_{1,j-1}$, making $v_{3,j-1}$ sound with doctors $v_{2,j-1}$ and $v_{4,j-1}$. If Bob colors $v_{1,j-1}$ with color $x$, Alice colors $v_{3,j-1}$ with $x$ if possible (in which case, all vertices of column $j-1$ are safe), otherwise we are in configuration $\Lambda_2$ with $v_{3,j-2}$ colored $x$. If $x\ne c''$, Alice colors $v_{2,j}$ with $x$ (making $v_{3,j-1}$ sound with doctors $v_{4,j-1}$ and $v_{2,j-1}$). Otherwise, she colors $v_{4,j-1}$ with $c''$, making $v_{3,j-1}$ safe and $v_{2,j-1}$ sound with doctors $v_{3,j-1}$ and $v_{2,j}$. Finally, assume that Bob colors $v_{1,j-2}$ or $v_{3,j-2}$. First suppose that column $j-3$ is empty. Then we were in configuration $\Lambda$ before this move of Bob and column $j-2$ is a left border. In that case, Alice's answer will be treated later in case $2\alpha\beta\gamma F$ (if the border is free to the left) or case $2\alpha'$ (otherwise). Looking ahead, in cases $2\alpha\beta\gamma F$, $2\alpha' 1$ and $2\alpha' 2$, Alice answers in column $j-3$, obtaining a configuration $\Lambda_2$. In cases $2\alpha'3$, $2\alpha'4$ and $2\alpha'5$, Alice merges two blocks, making the sick vertex safe or sound, and creating a new configuration $\Lambda$ or $\Lambda'$. Case $2\alpha'6$ cannot occur since $c(v_{1,j+2})\neq c$. So, suppose that column $j-3$ is not empty. Note that $v_{3,j-2}$ cannot be a doctor since it lies on a border in configuration $\alpha$.
Then Alice colors the sick vertex $v_{3,j-1}$, removing the configuration $\Lambda$ or $\Lambda_2$. \item[Case 1$\Lambda'$.] When Bob colors a vertex of a column in $\{j-2,\ldots,j+1\}$ of configuration $\Lambda'$ or $\Lambda'_2$. Notice that $v_{3,j+1}$ is safe and $v_{2,j+1}$ is sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$). If Bob colors $v_{1,j+1}$ or $v_{3,j+1}$ of configuration $\Lambda'$ or $\Lambda'_2$, Alice colors $v_{2,j+1}$ with $c''\ne c'$, obtaining the configuration $\Lambda$ or $\Lambda_2$, respectively. If Bob colors $v_{2,j+1}$ of configuration $\Lambda'$ or $\Lambda'_2$, Alice colors the sick vertex $v_{3,j-1}$ (merging the two blocks and leaving $v_{2,j-1}$ sound with doctors $v_{1,j-1}$ and $v_{2,j}$). Finally, assume that Bob colors $v_{1,j-2}$ or $v_{3,j-2}$. First suppose that column $j-3$ is empty. Then we were in configuration $\Lambda'$ before this move of Bob and column $j-2$ is a left border. Exactly as in case $1\Lambda$, Alice's answer will be treated in case $2\alpha\beta\gamma F$ or $2\alpha'$, obtaining a configuration $\Lambda_2'$ (instead of $\Lambda_2$) or a new configuration $\Lambda$ or $\Lambda'$, and making the sick vertex safe or sound. So, assume that column $j-3$ is not empty. As in case $1\Lambda$, $v_{3,j-2}$ cannot be a doctor since it lies on a border in configuration $\alpha$. Then Alice colors the sick vertex $v_{3,j-1}$, removing the configuration $\Lambda'$ or $\Lambda'_2$. All other possibilities are handled as in Case $1\Lambda$. \item[Case 1-doc.] If Bob colors a doctor of a sound vertex $v$ and $v$ is not in a border in configuration $\gamma$, then Alice colors $v$ with any available color, making it safe. \item[Case $1\gamma$-doc.]
If Bob colors the doctor (not in the border) of the sound vertex $v$ in a border (column $j$) in configuration $\gamma$, then Alice colors the second doctor $w$ of $v$ ($w$ is the uncolored neighbor of $v$ in the border in configuration $\gamma$) with the same color, making $v$ safe and obtaining a border (column $j$) in configuration $\beta$ (see Figure~\ref{fig-gamma-penultimate}). \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0)--(3,0) --(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(1,4)-- (0,4) ; \draw (2.5,4.5) node {$j$}; \draw (0.5,1.5) node {}; \draw (1.5,1.5) node {$\blue{c'}$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$\red{c'}$}; \draw (2.5,3.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $1\gamma$-doc: $\gamma\to\beta$}; \end{scope} \end{tikzpicture} \caption{Case $1\gamma$-doc: Bob just colored the vertex in blue of the penultimate column of a border in configuration $\gamma$. Alice's answer is in red. The bold line figures the surrounding of the updated block (before Alice's move).} \label{fig-gamma-penultimate} \end{figure} \item[Case 1-safe.] 
If Bob colors a safe or sound vertex, then: \begin{itemize} \item if there is a sick vertex, Alice colors it, removing the configuration $\Delta$, $\Delta'$, $\Delta'_2$, $\Lambda$, $\Lambda_2$, $\Lambda'$ or $\Lambda'_2$ it belongs to; \item if there is a sound vertex, Alice colors it, making it safe; \item if there is an uncolored safe vertex not in a border, Alice colors it (Note that it cannot be a doctor since, in that case, there are no sound vertices anymore); \end{itemize} Otherwise, every vertex inside a block is already colored and no configuration $\Lambda,\Lambda',\Lambda_2$ or $\Lambda'_2$ exists. \begin{itemize} \item {\bf Case $1\delta$.} When there is a border in configuration $\delta$ (notation of Figure~\ref{fig-1safes}, symmetrical configurations must be dealt with accordingly). If $v_{3,j+2}$ is not colored $c'$ (Case $1\delta 1$), Alice colors $v_{3,j+1}$ with $c'$ obtaining a border in configuration $\beta$ or merging two blocks. Otherwise (Case $1\delta 2$), she colors $v_{4,j+1}$ with $c'$, merging the two blocks and making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{3,j+1}$ and $v_{1,j+1}$). 
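To make the branching in Case $1\delta$ explicit, it can be phrased as a two-way rule. The following Python fragment is an illustration only: the map color_of from cells $(i,j)$ to colors (an absent key meaning "uncolored") and the returned (cell, color) pair are bookkeeping conventions introduced here, not part of the proof.

```python
# Illustrative sketch of Alice's reply in Case 1-delta (not the paper's code).
# Cells are keyed as (row, column); `color_of` maps colored cells to their
# color and an absent key means the cell is uncolored.

def case_1_delta(color_of, c_prime, j):
    """Return Alice's move as a (cell, color) pair at a delta border in
    column j whose colored middle cell carries the color c_prime."""
    if color_of.get((3, j + 2)) != c_prime:
        # Case 1delta1: v_{3,j+2} is not colored c', so playing c' on
        # v_{3,j+1} yields a beta border or merges two blocks.
        return ((3, j + 1), c_prime)
    # Case 1delta2: otherwise play c' on v_{4,j+1}, merging the two blocks
    # and leaving v_{3,j+1} safe and v_{2,j+1} sound.
    return ((4, j + 1), c_prime)
```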
\begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \fill[blue!20] (2,0) rectangle (3,1); \fill[blue!20] (2,2) rectangle (3,3); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$\red{c'}$}; \draw (3.5,3.5) node {$0$}; \draw (4.7,2.5) node {$\ne c'$}; \draw (2.5,-0.75) node {Case $1\delta 1$: $\delta\to\beta$/merge}; \end{scope} \begin{scope}[xshift=25cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \path[draw, very thick] (5,0)--(4,0)--(4,4)--(5,4) ; \fill[blue!20] (2,0) rectangle (3,1); \fill[blue!20] (2,2) rectangle (3,3); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$\red{c'}$}; \draw (4.5,2.5) node {$c'$}; \draw (2.5,-0.75) node {Case $1\delta 2$: $\delta\to$ merge}; \end{scope} \end{tikzpicture} \caption{Case $1\delta$: Bob has played a safe or sound vertex and Alice plays in a border in configuration $\delta$, where $c,c'$ are distinct colors. In case $1\delta 1$, $c(v_{3,j+2})\neq c'$.
The notation ``$\delta\to\beta$/merge'' (in Case $1\delta 1$) means that the former block of column $j$ (before Alice's move) is extended by one column to the right and its new right border is in configuration $\beta$, or it is merged with the former block of column $j+2$.} \label{fig-1safes} \end{figure} \item {\bf Case $1\pi$.} If there is a border in configuration $\pi$ (notation of Figure~\ref{fig:configs}), Alice colors $v_{1,j+1}$ with color $c'$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound, and merging the two blocks. \item {\bf Case $1\gamma$.} If there is a border in configuration $\gamma$, Alice plays in the border (vertex $v_{2,j}$ in Case $1\gamma$ of Figure \ref{fig-1safeFreeAB}) obtaining a border in configuration $\delta$. \item {\bf Case $1\alpha\beta F$.} If there is a free border in configuration $\alpha$ or $\beta$, Alice plays as described in Figure~\ref{fig-1safeFreeAB}. \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(4,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4) ; \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {\red{$c'$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,-.75) node {Case $1\gamma$: $\gamma\to \delta$}; \end{scope} \begin{scope}[xshift=22cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin]
(1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0)--(3,0) --(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(1,4)-- (0,4) ; \draw (2.5,4.5) node {$j$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {\red{$c$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Case $1\alpha\beta F1$: $\alpha\to\beta$}; \end{scope} \begin{scope}[xshift=45cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0)--(3,0) --(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(1,4)-- (0,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {\red{$c$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $1\alpha\beta F2$: $\beta\to\beta$}; \end{scope} \end{tikzpicture} \caption{Cases $1\alpha\beta F$ and $1\gamma$: Bob has played a safe or sound vertex and Alice plays in a free border in configuration $\alpha$ or $\beta$, or in a border in configuration $\gamma$.} \label{fig-1safeFreeAB} \end{figure} \end{itemize} Therefore, we can assume that every vertex inside a block is already colored and that there are no free borders nor borders in configuration $\gamma, \delta$ or $\pi$. Unless all vertices are already colored, there must be an empty column between two borders: the different possibilities are described in Figure~\ref{fig:Case8}. 
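Summarizing, Alice's replies in Case 1-safe are tried in a fixed priority order: eliminate a sick vertex, then secure a sound one, then fill an uncolored safe vertex, and only then treat the borders ($\delta$, $\pi$, $\gamma$, free $\alpha$/$\beta$, and finally $\beta$ or two facing $\alpha$ borders). A minimal sketch of this cascade, with invented rule names:

```python
# Invented names summarizing the order in which Alice's rules for
# Case 1-safe are tried (illustration only, not the paper's notation).

CASE_1_SAFE_PRIORITY = [
    "color a sick vertex",             # removes a Delta/Lambda configuration
    "color a sound vertex",            # makes it safe
    "color an uncolored safe vertex",  # not in a border
    "border in delta",                 # Case 1delta
    "border in pi",                    # Case 1pi
    "border in gamma",                 # Case 1gamma
    "free alpha or beta border",       # Case 1alpha-beta-F
    "beta border",                     # Case 1beta
    "two facing alpha borders",        # Cases 1alpha1-1alpha3
]

def first_applicable(rules):
    """Mirror the 'otherwise...' cascade of Case 1-safe: `rules` is a
    list of (name, applies) pairs tried in priority order."""
    for name, applies in rules:
        if applies:
            return name
    return None
```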
Recall that, by Property $(5)$, a left border in configuration $\alpha$ must have exactly two uncolored vertices (this is important for cases $1\alpha$ below). \begin{itemize} \item {\bf Case $1\beta$:} There exists a border in configuration $\beta$. Alice colors $v_{3,j+1}$ with any available color $c'\ne c$, making it safe and $v_{2,j+1}$ sound (with doctors $v_{2,j}$ and $v_{1,j+1}$), and merging the two blocks. \end{itemize} The only remaining cases occur when the empty column lies between two borders in Configuration $\alpha$. \begin{itemize} \item {\bf Case $1\alpha 1$:} If $v_{3,j+2}$ is not colored $c$, Alice colors $v_{3,j+1}$ with color $c$, making it and $v_{2,j+1}$ safe. \item {\bf Case $1\alpha 2$:} Else, if column $j+3$ is empty, Alice colors $v_{3,j+1}$ with color $c'$ obtaining the configuration $\Delta$. \item {\bf Case $1\alpha 3$:} Finally, if column $j+3$ is not empty, then, by Property $(4)$, $v_{2,j+2}$ is not a doctor. Alice colors $v_{3,j+1}$ with any available color $c'$, making it safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{2,j+2}$). 
\end{itemize} \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {\red{$c'$}}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (1.8,-.75) node {Case $1\beta$}; \draw (1.8,-1.5) node {$\beta\to$ merge}; \end{scope} \begin{scope}[xshift=20cm, yshift=0cm, scale=2.5] \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(2,\j)--(3,\j);} \path[draw, very thick] (0,0) --(1,0)--(1,1) --(1,2)--(1,3)--(1,4)--(0,4) ; \path[draw, very thick] (3,0) --(2,0)--(2,1) --(2,2)--(2,3)--(2,4)--(3,4) ; \draw (0.5,4.5) node {$j$}; \draw (0.5,0.5) node {}; \draw (0.5,1.5) node {$c$}; \draw (0.5,2.5) node {}; \draw (0.5,3.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {\red{$c$}}; \draw (1.5,3.5) node {$0$}; \draw (2.8,2.5) node {$\ne c$}; \draw (1.5,-.75) node {Case $1\alpha 1$}; \draw (1.5,-1.5) node {$\alpha$+$\alpha\to$ merge}; \end{scope} \begin{scope}[xshift=40cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(4,\j);} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4); \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4)--(3,0); \draw (0.5,4.5) node {$j$}; \draw (0.5,0.5) node {}; \draw (0.5,1.5) node {$c$}; \draw (0.5,2.5) node {}; \draw (0.5,3.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw 
(1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$\red{c'}$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,3.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.0,-.75) node {Case $1\alpha 2$}; \draw (2.0,-1.5) node {$\alpha$+$\alpha\to\Delta$}; \end{scope} \begin{scope}[xshift=60cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (0,\j)--(3,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4); \path[draw, very thick] (4,0)--(2,0)--(2,4)--(4,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,0.5) node {}; \draw (0.5,1.5) node {$c$}; \draw (0.5,2.5) node {}; \draw (0.5,3.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$\red{c'}$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,3.5) node {$0$}; \draw (3.8,4.5) node {$\ne 0$}; \draw (2.0,-0.75) node {Case $1\alpha 3$}; \draw (2.0,-1.5) node {$\alpha$+$\alpha\to$ merge}; \end{scope} \end{tikzpicture} \caption{Cases $1\beta$ and $1\alpha$: Bob has colored a safe or sound vertex inside a block and there is no uncolored vertex inside a block. Moreover, there are no free borders nor borders in configuration $\gamma, \delta$ or $\pi$. The red cell indicates Alice's move. All symmetrical configurations with respect to the vertical/horizontal symmetry axes (column $j$ / middle line) are handled similarly.} \label{fig:Case8} \end{figure} \end{description} It can be checked that, in all subcases of Case $1$, after Alice's move, all invariants of the induction hypothesis still hold. \item[Case 2.
When Bob colors a vertex of a border (not in configuration $\Delta$, $\Lambda$ or $\Lambda'$).]~\\ Note that Bob cannot color a vertex of a border in configuration $\pi$ since all its vertices are already colored. Moreover, a border in configuration $\pi$ cannot be free. \begin{itemize} \item {\bf Case $2\delta$.} Bob colored $v_{1,j}$ or $v_{3,j}$ in a border in configuration $\delta$. If $v_{3,j+2}$ is not colored $c'$ (Case $2\delta 1$), Alice colors $v_{3,j+1}$ with $c'$ obtaining a border in configuration $\beta$ or merging two blocks. Otherwise (Case $2\delta 2$), she colors $v_{4,j+1}$ with $c'$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{3,j+1}$ and $v_{1,j+1}$) (see Figure~\ref{fig:2S}). \begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (2,0) rectangle (3,1); \fill[blue!20] (2,2) rectangle (3,3); \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$\red{c'}$}; \draw (3.5,3.5) node {$0$}; \draw (4.7,2.5) node {$\ne c'$}; \draw (3.0,-.75) node {Case $2\delta 1$}; \draw (3.0,-1.5) node {$\delta\to\beta$/merge}; \end{scope} \begin{scope}[xshift=20cm, yshift=0cm, scale=2.5] \fill[blue!20] (2,0) rectangle (3,1); \fill[blue!20] (2,2) rectangle (3,3); \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ;
\path[draw, very thick] (5,0)--(4,0)--(4,4)--(5,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$\red{c'}$}; \draw (4.5,2.5) node {$c'$}; \draw (3.0,-.75) node {Case $2\delta 2$}; \draw (3.0,-1.5) node {$\delta\to$ merge}; \end{scope} \end{tikzpicture} \caption{Case $2\delta$: Bob just colored a vertex (in blue) of a border in configuration $\delta$.} \label{fig:2S} \end{figure} \item {\bf Case $2\alpha\beta\gamma F$ (Free border in Configuration $\alpha$, $\beta$ or $\gamma$):} Bob colors a vertex $v$ of a free border of a block $B$ in configuration $\alpha$, $\beta$ or $\gamma$. Then Alice can extend this block $B$ with a new border in configuration $\beta$ as described in Figure~\ref{fig:nextMoveAlice1} (in case of a right border). In all subcases, Bob just colored a vertex of column $j$: if we need to specify which vertex, it is depicted in blue, otherwise it is any of the white cells of column $j$. The answer of Alice is in red. Also recall that, if $B$ is just one column $j$ (in Configuration $\alpha$), then it is a left border and therefore it is free if columns $j-2$ and $j-1$ are empty. That is, if $B$ is a single column $j$ and column $j-2$ is not empty, it must not be considered in this case (even if columns $j+1$ and $j+2$ are empty) but in the following Case $2\alpha$ (indeed, applying Rule $2\alpha F$ ``to the right'' in the case of a single column would not guarantee Property $(5)$).
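The freeness test for a single-column left border described above can be phrased as a simple predicate. The grid encoding below (a map from a column index to its set of colored rows) is an assumption made for illustration, not the paper's notation.

```python
# Illustration of the freeness test for a single-column left border.
# Assumed encoding: `grid` maps a column index to the set of its colored
# rows; a missing column is entirely uncolored.

def column_empty(grid, j):
    return not grid.get(j)

def single_column_left_border_is_free(grid, j):
    """A block consisting of the single column j is a left border; it is
    free exactly when columns j-2 and j-1 are both empty (otherwise the
    block falls under Case 2alpha, not Case 2alpha-beta-gamma-F)."""
    return column_empty(grid, j - 2) and column_empty(grid, j - 1)
```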
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (2,1) rectangle (3,2); \fill[blue!20] (2,3) rectangle (3,4); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0)--(3,0) --(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(1,4)-- (0,4) ; \draw (2.5,4.5) node {$j$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {\red{$c$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Case $2\alpha F$: $\alpha\to\beta$}; \end{scope} \begin{scope}[xshift=18cm, yshift=0cm, scale=2.5] \fill[blue!20] (2,0) rectangle (3,2); \fill[blue!20] (2,3) rectangle (3,4); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0) --(1,0)--(2,0)--(3,0) --(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(1,4)-- (0,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {\red{$c$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $2\beta F$: $\beta\to\beta$}; \end{scope} \begin{scope}[xshift=36cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);}
\foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,1.5) node {$\blue{c'}$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$\red{c'}$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.8,-.75) node {Case $2\gamma F1$: $\gamma\to\beta$}; \end{scope} \begin{scope}[xshift=54cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$\blue{c'}$}; \draw (2.5,3.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$\red{c'}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (2.8,-.75) node {Case $2\gamma F2$: $\gamma\to\beta$}; \end{scope} \end{tikzpicture} \caption{Case $2\alpha\beta\gamma F$: Bob just colored a vertex (in blue) of a free border in configuration $\alpha$, $\beta$ or $\gamma$. Alice's answer is in red. The bold line figures the surrounding of the updated block (before Alice's move). 
The block increases by one column.} \label{fig:nextMoveAlice1} \end{figure} Following the orientation of Figure \ref{fig:nextMoveAlice1}: \begin{itemize} \item {\bf Case $2\alpha F$:} If Bob colors a vertex of a border in configuration $\alpha$, Alice colors $v_{2,j+1}$ with $c$, obtaining a border in configuration $\beta$. \item {\bf Case $2\beta F$:} If Bob colors a vertex of a border in configuration $\beta$, Alice colors $v_{2,j+1}$ with $c$, obtaining a border in configuration $\beta$. \item {\bf Case $2\gamma F1$:} If Bob plays $c'\ne c$ in the vertex $v_{2,j}$ of a border in configuration $\gamma$, Alice colors $v_{3,j+1}$ with $c'$, obtaining a border in configuration $\beta$. \item {\bf Case $2\gamma F2$:} If Bob plays $c'\ne c$ in the vertex $v_{3,j}$ of a border in configuration $\gamma$, Alice colors $v_{2,j+1}$ with $c'$, obtaining a border in configuration $\beta$. \end{itemize} \item {\bf Case $2\beta$ (Non-Free border in Configuration $\beta$):} Bob colors a vertex $v$ of a non-free border of a block $B$ in configuration $\beta$. See Figure \ref{fig:nextMoveAliceXb} (the symmetric cases can be dealt with accordingly). The blue squares are the possible vertices that Bob colored. If Bob colored a vertex of column $j$ other than $v_{2,j}$ (Case $2\beta 1$), Alice colors $v_{3,j+1}$ with any available color $c'$ and we are done, since $v_{2,j+1}$ is sound (with doctors $v_{2,j}$ and $v_{1,j+1}$). Thus assume Bob just colored $v_{2,j}$ with $c'$. If $v_{3,j+2}$ is not colored $c'$ (Case $2\beta 2$), Alice colors $v_{3,j+1}$ with $c'$ making $v_{2,j+1}$ and $v_{3,j+1}$ safe. Otherwise, if $v_{4,j}$ is not colored $c'$ (Case $2\beta 3$), Alice colors $v_{4,j+1}$ with $c'$ making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$). 
Finally, if $v_{4,j}$ is colored $c'$ (Case $2\beta 4$), then this is equivalent to the situation in which $v_{2,j}$ and $v_{4,j}$ were colored $c'$ (configuration $\alpha$) and Bob just colored $v_{3,j}$ with $c$, which is treated in the next case $2\alpha2$ regarding the configuration $\alpha$. \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (1,0) rectangle (2,1) ; \fill[blue!20] (1,3) rectangle (2,4); \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,2.5) node {\red{$c'$}}; \draw (1.5,1.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (2.0,-.75) node {Case $2\beta 1$}; \draw (2.0,-1.5) node {merge}; \end{scope} \begin{scope}[xshift=17cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {\blue{$c'$}}; \draw (3.75,2.5) node {$\neq c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,2.5) node {\red{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.0,-.75) node {Case $2\beta 2$}; \draw (2.0,-1.5) node {merge}; \end{scope} \begin{scope}[xshift=34cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \path[draw,
densely dotted] (1,0)--(1,4); \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.25,3.5) node {$\neq c'$}; \draw (1.5,1.5) node {\blue{$c'$}}; \draw (3.5,2.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {\red{$c'$}}; \draw (2.5,1.5) node {$0$}; \draw (2.0,-.75) node {Case $2\beta 3$}; \draw (2.0,-1.5) node {merge}; \end{scope} \begin{scope}[xshift=51cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \path[draw, densely dotted] (1,0)--(1,4); \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,3.5) node {$c'$}; \draw (1.5,1.5) node {\blue{$c'$}}; \draw (3.5,2.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.0,-.75) node {Case $2\beta 4$:}; \draw (2.0,-1.5) node {treated as Case $2\alpha2$}; \end{scope} \end{tikzpicture} \caption{Case $2\beta$: Bob colored a vertex of a non-free border in configuration $\beta$ at column $j$. Alice merges the two blocks in her next move.} \label{fig:nextMoveAliceXb} \end{figure} \item {\bf Case $2\gamma$ (Non-Free border in Configuration $\gamma$):} Bob colors a vertex $v$ of a non-free border of a block $B$ in configuration $\gamma$ at column $j$. See Figure \ref{fig:nextMoveAliceXc} (the symmetric cases can be dealt with accordingly). Alice plays, merging the two blocks (with all vertices safe or sound) or obtaining the configuration $\pi$.
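Alice's replies in the three subcases of Case $2\gamma$ detailed below can be sketched as a dispatch on Bob's move. The cell encoding, the map color_of and the helper fresh (which stands for "some available color" outside a forbidden set) are illustrative assumptions, not the paper's notation.

```python
# Illustrative dispatch for Case 2-gamma (invented helpers): Bob has just
# colored `bob_cell` of a non-free gamma border at column j with color
# c_bob; c is the border's repeated color.

def case_2_gamma(color_of, j, bob_cell, c_bob, c, fresh):
    if bob_cell == (2, j):
        # Case 2gamma1: answer on v_{2,j+1} with any available color.
        return ((2, j + 1), fresh(set()))
    assert bob_cell == (3, j)
    if color_of.get((2, j + 2)) != c_bob:
        # Case 2gamma2: v_{2,j+2} is not colored c', play c' on v_{2,j+1}.
        return ((2, j + 1), c_bob)
    # Case 2gamma3: color v_{2,j} with some c'' distinct from c and c',
    # obtaining a border in configuration pi.
    return ((2, j), fresh({c, c_bob}))
```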
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$\blue{c'}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{c''}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,-0.75) node {Case $2\gamma 1$: merge}; \end{scope} \begin{scope}[xshift=25cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$\blue{c'}$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{c'}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (5.8,1.5) node {$\ne c'$}; \draw (3.5,-0.75) node {Case $2\gamma 2$: merge}; \end{scope} \begin{scope}[xshift=50cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] 
(2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$\red{c''}$}; \draw (3.5,2.5) node {$\blue{c'}$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (5.5,1.5) node {$c'$}; \draw (3.5,-0.75) node {Case $2\gamma 3$: $\gamma\to\pi$}; \end{scope} \end{tikzpicture} \caption{Case $2\gamma$: Bob colors a vertex of a border in configuration $\gamma$ at column $j$.} \label{fig:nextMoveAliceXc} \end{figure} \begin{itemize} \item {\bf Case $2\gamma 1$:} If Bob colors $v_{2,j}$, then Alice colors $v_{2,j+1}$ with any available color $c''$, making $v_{3,j+1}$ either safe or sound (with doctors $v_{3,j}$ and $v_{4,j+1}$). \item {\bf Case $2\gamma 2$:} If Bob colors $v_{3,j}$ with $c'$ and $v_{2,j+2}$ is not colored $c'$, Alice colors $v_{2,j+1}$ with $c'$, making $v_{2,j}$ and $v_{3,j+1}$ safe. \item {\bf Case $2\gamma 3$:} Otherwise, Bob colors $v_{3,j}$ with $c'$ and $v_{2,j+2}$ is colored $c'$. Then Alice colors $v_{2,j}$ with $c''$ such that $c''\ne c$ and $c''\ne c'$, obtaining a border in configuration $\pi$. \end{itemize} \item {\bf Case $2\alpha$ (Non-Free border in Configuration $\alpha$ of a block with at least 2 columns):} Bob colors a vertex $v$ of a non-free border in configuration $\alpha$ of a block $B$ with at least 2 columns. The different possibilities are depicted in Figure~\ref{fig:nextMoveAliceX} (all symmetric cases can be dealt with accordingly). If we need to specify which vertex Bob colored, it is depicted in blue. Alice's answer is in red in some subcases. The bold line figures the surrounding of the updated block (before Alice's move). In each case, two blocks are merged into one.
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (1,1) rectangle (2,2); \fill[blue!20] (1,3) rectangle (2,4); \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,0.5) node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {\red{$c$}}; \draw (3.75,1.5) node {$\neq c$}; \draw (2,-0.75) node {Case $2\alpha 1$: merge}; \end{scope} \begin{scope}[xshift=20cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,3.5) node {\blue{$c'$}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\red{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (3.5,1.5) node {$c$}; \draw (3.75,2.5) node {$\neq c'$}; \draw (2.5,-0.75) node {Case $2\alpha 2.1$: merge}; \end{scope} \begin{scope}[xshift=40cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick]
(4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,3.5) node {\blue{$c'$}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {\red{$c'$}}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (3.5,1.5) node {$c$}; \draw (3.5,2.5) node {$c'$}; \draw (2.5,-0.75) node {Case $2\alpha 2.2$: merge}; \end{scope} \begin{scope}[xshift=60cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$\blue{c'}$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (3.5,0.5) node {}; \draw (3.5,1.5) node {$c$}; \draw (3.5,2.5) node {}; \draw (3.5,3.5) node {}; \draw (2,-0.75) node {Case $2\alpha 3$: merge}; \end{scope} \begin{scope}[xshift=10cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$c'$}; \draw (1.5,3.5) node {$c''$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$\red{c}$}; \draw (3.5,1.5) node {$c$}; \draw (3.75,3.5) node {$\neq c$}; \draw (2.5,-0.75) node {Case $2\alpha 
4$: merge}; \end{scope} \begin{scope}[xshift=30cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$c'$}; \draw (1.5,3.5) node {$c''$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$\red{c'}$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (3.8,0.5) node {$\ne c'$}; \draw (3.5,1.5) node {$c$}; \draw (3.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Case $2\alpha 5$: merge}; \end{scope} \begin{scope}[xshift=50cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (0.2,4.5) node {$\ne 0$}; \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,1.5) node {$c'$}; \draw (1.5,3.5) node {$c''$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$\red{c'}$}; \draw (2.5,3.5) node {$0$}; \draw (3.5,0.5) node {$c'$}; \draw (3.5,1.5) node {$c$}; \draw (3.8,2.5) node {$\ne c'$}; \draw (3.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Case $2\alpha 6$: merge}; \end{scope} \end{tikzpicture} \caption{Case $2\alpha$: Bob colors a vertex of a border in configuration $\alpha$ at column $j$ with a color distinct from $c$, in a block containing at least two columns. 
In cases $2\alpha 4$, $2\alpha 5$ and $2\alpha 6$, $c''$ is any color distinct from $c$ (in particular, $c''$ might be equal to $c'$).} \label{fig:nextMoveAliceX} \end{figure}
\begin{itemize}
\item {\bf Case $2\alpha 1$:} If $v_{2,j+2}$ is not colored $c$, Alice colors $v_{2,j+1}$ with $c$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\end{itemize}
From now on, let us assume that $c(v_{2,j+2})=c$.
\begin{itemize}
\item {\bf Case $2\alpha 2$:} If Bob colors $v_{4,j}$ with $c'$ and $v_{2,j}$ is not colored yet, then Alice colors $v_{3,j+1}$ with $c'$ if available (case $2\alpha 2.1$), making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{2,j}$ and $v_{1,j+1}$). If $c'$ is unavailable (case $2\alpha 2.2$), meaning that $v_{3,j+2}$ is colored $c'$, then Alice can color $v_{2,j+1}$ with color $c'$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\item {\bf Case $2\alpha 3$:} Assume that Bob colored $v_{2,j}$ with color $c'$ and $v_{4,j}$ is not colored yet. If $v_{3,j+2}$ is not colored $c'$, Alice colors $v_{3,j+1}$ with $c'$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. Otherwise, Alice colors $v_{4,j+1}$ with $c'$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$).
\end{itemize}
In the next three subcases, assume that after Bob's move, all four vertices of column $j$ are colored.
\begin{itemize}
\item {\bf Case $2\alpha 4$:} If $v_{4,j+2}$ is not colored $c$, Alice colors $v_{4,j+1}$ with $c$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$).
\item {\bf Case $2\alpha 5$:} If $v_{1,j+2}$ is not colored $c'$, Alice colors $v_{1,j+1}$ with $c'$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{2,j+1}$ and $v_{4,j+1}$).
\item {\bf Case $2\alpha 6$:} By Property $(3)$ of the induction hypothesis, $v_{3,j+2}$ is not colored $c'$. Then Alice colors $v_{3,j+1}$ with $c'$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\end{itemize}
\item {\bf Case $2\alpha'$ (Non-Free border in Configuration $\alpha$ of a 1-column block):} Bob colors a vertex $v$ of a non-free border of a 1-column block $B$ (column $j$) in configuration $\alpha$. Again, $B$ is considered as a left border. Note that, by Property $(3)$, before Bob's move, column $j$ has exactly two colored vertices. The different possibilities are depicted in Figure~\ref{fig:nextMoveAliceX2} (all symmetric cases, according to the horizontal symmetry axis of the grid, can be dealt with accordingly). If we need to specify which vertex Bob colored, it is depicted in blue. Alice's answer is shown in red in some subcases. The bold line indicates the boundary of the updated block (before Alice's move). So, in all cases except one subcase of $2\alpha'3$, two blocks are merged into one.
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, thin] (1,0)--(1,2); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \draw (3.5,4.5) node {$j$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {\blue{$c'$}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {}; \draw (1.5,1.5) node {$c$}; \draw (0.8,2.5) node {or $\ne 0$}; \draw (1.25,3.5) node {$\ne c$}; \draw (2.5,-0.75) node {Case $2\alpha' 1$: merge}; \end{scope} \begin{scope}[xshift=17cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin]
(\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \draw (0.2,4.5) node {$\ne 0$}; \draw (3.5,4.5) node {$j$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {\blue{$c'$}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {\red{$c'$}}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Case $2\alpha' 2$: merge}; \end{scope} \begin{scope}[xshift=34cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(5,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4)--(1,0); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \draw (3.5,4.5) node {$j$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$0$}; \draw (0.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{c}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (5.7,1.5) node {$\ne c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {{\blue{$c'$}}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Case $2\alpha' 3$: $\Lambda+\beta$}; \draw (2.5,-1.5) node {or $\Lambda$+merge}; \end{scope} \begin{scope}[xshift=53cm, 
yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(5,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4)--(1,0); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$0$}; \draw (0.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (5.6,2.5) node {\tiny{safe}}; \draw (4.5,3.5) node {$\red{c}$}; \draw (5.7,3.5) node {$\ne c$}; \draw (5.5,2.5) node {}; \draw (5.5,1.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {{\blue{$c'$}}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (2.5,-0.75) node {Case $2\alpha' 4$: $\Lambda'$+merge}; \end{scope} \begin{scope}[xshift=10cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(7,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(5,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4)--(1,0); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \path[draw, very thick] (7,0)--(5,0)--(5,4)--(7,4); \draw (6.7,4.5) node {$\ne 0$}; \draw (3.5,4.5) node {$j$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$0$}; \draw (0.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{c''}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (5.5,3.5) node {$c$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$c$}; \draw 
(3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {{\blue{$c'$}}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (3.5,-0.75) node {Case $2\alpha' 5$:}; \draw (3.5,-1.5) node {$\Lambda$+merge}; \end{scope} \begin{scope}[xshift=35cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(7,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(6,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4)--(1,0); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4)--(4,0); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4)--(6,0); \draw (3.5,4.5) node {$j$}; \draw (6.5,0.5) node {$0$}; \draw (6.5,1.5) node {$0$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,3.5) node {$0$}; \draw (0.5,0.5) node {$0$}; \draw (0.5,1.5) node {$0$}; \draw (0.5,2.5) node {$0$}; \draw (0.5,3.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{c''}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (5.5,3.5) node {$c$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$c$}; \draw (5.5,0.5) node {$0$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {{\blue{$c'$}}}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (3.5,-0.75) node {Case $2\alpha' 6$:}; \draw (3.5,-1.5) node {$\Lambda$+$\Delta$}; \end{scope} \end{tikzpicture} \caption{Case $2\alpha'$: Bob colors a vertex of border in configuration $\alpha$ at column $j$ with a color distinct from $c$ of a 1-column block.\label{fig:nextMoveAliceX2}} 
\end{figure}
Regardless of what Bob played on column $j$, if $v_{2,j-2}$ is not colored $c$, then Alice can color $v_{2,j-1}$ with $c$, merging the two blocks. So, in the next cases, we assume that $v_{2,j-2}$ is colored with color $c$. Moreover, if Bob colors $v_{2,j}$, Alice can respond as in case $2\alpha 3$ (up to the symmetry about the vertical axis). Therefore, we may assume that Bob colors $v_{4,j}$ with $c'$.
\begin{itemize}
\item {\bf Case $2\alpha' 1$:} If $v_{4,j-2}$ is not colored $c$, Alice colors $v_{4,j-1}$ with $c$, making $v_{3,j-1}$ safe and $v_{2,j-1}$ sound (with doctors $v_{1,j-1}$ and $v_{3,j-1}$). So, assume that $v_{4,j-2}$ is colored $c$. If $v_{3,j-2}$ is colored some color $c''$ (possibly $c'' = c'$), Alice colors $v_{2,j-1}$ with $c'$, making $v_{2,j-1}$ and $v_{3,j-1}$ safe. Otherwise, $v_{3,j-2}$ is uncolored: the next cases deal with this situation.
\end{itemize}
From now on, suppose that $v_{4,j-2}$ is colored $c$ and $v_{3,j-2}$ is uncolored.
\begin{itemize}
\item {\bf Case $2\alpha' 2$:} If column $j-3$ is not empty, then Alice colors $v_{2,j-1}$ with $c'$, making $v_{2,j-1}$ safe and $v_{3,j-1}$ sound (with doctors $v_{3,j-2}$ and $v_{4,j-1}$). Note that after Alice's move, column $j-2$ is not a border anymore, so $v_{3,j-2}$ (which was not a doctor before, by Property $(4)$) can be used as a doctor.
\end{itemize}
So, assume that column $j-3$ is empty and consequently, by Property $(5)$, $v_{1,j-2}$ is not colored.
\begin{itemize}
\item {\bf Case $2\alpha' 3$:} If $v_{2,j+2}$ is not colored $c$, Alice colors $v_{2,j+1}$ with $c$, obtaining a configuration $\Lambda$ on the left and merging the blocks on the right (or obtaining a border at column $j+1$ in configuration $\beta$ if column $j+2$ was empty), with $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\end{itemize}
So assume that $v_{2,j+2}$ is colored $c$. Note that column $j+2$ cannot be in configuration $\gamma$ and so $v_{3,j+2}$ must be safe.
\begin{itemize}
\item {\bf Case $2\alpha' 4$:} If $v_{4,j+2}$ is not colored $c$, then Alice colors $v_{4,j+1}$ with $c$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$), merging the blocks on the right and also obtaining a configuration $\Lambda'$ on the left.
\end{itemize}
Finally, assume that $v_{4,j+2}$ is colored $c$. Since column $j+2$ is a left border in configuration $\alpha$, $v_{3,j+2}$ is not colored, by Property $(5)$.
\begin{itemize}
\item {\bf Case $2\alpha' 5$:} If column $j+3$ is not empty, then Alice colors $v_{2,j+1}$ with $c''\ne c'$, obtaining a configuration $\Lambda$ on the left and merging the blocks on the right, with $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{4,j+1}$ and $v_{3,j+2}$, which was not a doctor before, by Property $(4)$).
\item {\bf Case $2\alpha' 6$:} Otherwise, column $j+3$ is empty and consequently (by Property $(5)$) $v_{1,j+2}$ is not colored. Then Alice colors $v_{2,j+1}$ with $c''\ne c'$, obtaining a configuration $\Lambda$ on the left and a configuration $\Delta$ on the right. It is important to notice that while the two configurations $\Delta$ and $\Lambda$ share a column, there is no ambiguity in Alice's response to Bob's further moves in these columns. If Bob plays on column $j-2$, $j-1$, or $j$, his move is handled as in Case $1\Lambda$. On the other hand, if Bob plays on column $j+1$ or $j+2$, then Alice answers as described in Case $1\Delta$.
\end{itemize}
\end{itemize}
Again, in all subcases of Case $2$, after Alice's move, all properties of the induction hypothesis hold.
\medskip
\item[Case 3.] {\bf When Bob colors a vertex of an empty column (not in column $j-1$ of a configuration $\Lambda,\Lambda_2,\Lambda'$ or $\Lambda'_2$).}~\\
\begin{itemize}
\item {\bf Case $3$-new (New block):} Bob colors a vertex $v_{a,j}$ with color $c$ such that no vertices in columns $j-1,j,j+1$ are colored (or $j=n$ and column $j-1$ is empty).
Then, Alice colors $v_{b,j}$ (with $|a-b|=2$) with color $c$. This creates a new block (restricted to column $j$) with a border in configuration $\alpha$. Then the induction hypothesis still holds.
\item \noindent {\bf Case $3\pi$.} Bob colors a vertex in the empty column of configuration $\pi$ (see Figure~\ref{fig:configs}). Notice that, in Alice's responses below, she always colors a vertex of column $j+1$. If Bob colors $v_{2,j+1}$ (resp. $v_{3,j+1}$), Alice colors $v_{3,j+1}$ (resp. $v_{2,j+1}$), and we are done. If Bob colors $v_{1,j+1}$ with $c'$ or $c''$, Alice colors $v_{3,j+1}$ with any available color, and we are done (all vertices of column $j+1$ are safe) since $v_{2,j+1}$ is safe. Thus, suppose that Bob colors $v_{1,j+1}$ with $w \notin \{c,c',c''\}$. If $v_{3,j+2}$ is not colored $w$, Alice colors $v_{3,j+1}$ with $w$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. If $v_{3,j+2}$ is colored $w$, Alice colors $v_{3,j+1}$ with $c''$, making $v_{3,j+1}$ and $v_{2,j+1}$ safe. Finally, suppose that Bob colors $v_{4,j+1}$. If the color of $v_{4,j+1}$ is $c'$, then $v_{3,j+1}$ is safe and Alice colors $v_{2,j+1}$ with any available color. If the color of $v_{4,j+1}$ is $w \notin \{c,c',c''\}$, Alice colors $v_{2,j+1}$ with $w$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. Thus assume that Bob colored $v_{4,j+1}$ with $c''$. If $v_{3,j+2}$ is colored $c''$, then $v_{3,j+1}$ is safe and Alice colors $v_{2,j+1}$ with any available color. If $v_{3,j+2}$ is colored either $c$ or $w$, Alice colors $v_{2,j+1}$ with the same color, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. Thus, also assume that $v_{3,j+2}$ is not colored. Then Alice colors $v_{3,j+1}$ with $w$. Since $v_{1,j+1}$ cannot receive the color $c$ during the game and the other neighbors of $v_{2,j+1}$ are colored $w$, $c'$ and $c''$, it will always be possible to color $v_{2,j+1}$ in the game. In other words, $v_{2,j+1}$ is safe and the induction hypothesis still holds.
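The counting argument that closes Case $3\pi$ can also be checked mechanically. The sketch below is ours, not the paper's; it assumes a palette of exactly four colors, as the four distinct colors $c$, $c'$, $c''$, $w$ in the case analysis suggest (the names \texttt{c1} and \texttt{c2} stand for $c'$ and $c''$).

```python
# Hedged sanity check (ours) of the count closing Case 3-pi: after Alice
# colors v_{3,j+1} with w, the colored neighbors of v_{2,j+1} are w, c', c'',
# and v_{1,j+1} can never receive c.  Assuming a palette of exactly four
# colors {c, c', c'', w}, some color always remains for v_{2,j+1}.
PALETTE = {"c", "c1", "c2", "w"}   # c1, c2 stand for c', c''
fixed = {"w", "c1", "c2"}          # colors already around v_{2,j+1}

# v_{1,j+1} may stay uncolored (None) or take any color except c.
for last in [None] + sorted(PALETTE - {"c"}):
    used = fixed | ({last} if last is not None else set())
    assert PALETTE - used, "v_{2,j+1} would be blocked"   # never fires
print("v_{2,j+1} always has an available color")
```

In every scenario the neighbors of $v_{2,j+1}$ use at most the three colors $w$, $c'$, $c''$, so the color $c$ itself is always left over.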
\item {\bf Case $3\delta$:} Bob colors in the column adjacent to a border (column $j$) in configuration $\delta$ (Bob plays in column $j+1$ in case of a right border). Alice will color a vertex of the column $j$ or $j+1$, obtaining a border in configuration $\alpha$, $\beta$, $\gamma$ or $\delta$, or merging two blocks, maintaining the induction hypothesis. Note that all vertices of column $j$ are safe by induction. \begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,0.5) node {$\blue{y}$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$\red{y}$}; \draw (4.7,2.5) node {$\ne y$}; \draw (2.5,-0.75) node {Case $3\delta 1a$}; \draw (2.5,-1.75) node {$\delta \to \alpha/$merge}; \end{scope} \begin{scope}[xshift=15cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \path[draw, very thick] (5,0)--(4,0)--(4,4)--(5,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$\red{y}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$\blue{y}$}; \draw (4.5,2.5) node {$y$}; \draw (2.5,-0.75) node {Case $3\delta 1b$}; \draw (2.5,-1.75) node {$\delta \to$ merge}; 
\end{scope} \begin{scope}[xshift=30cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4.2,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$0$}; \draw (3.7,2.5) node {$\red{c/y}$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$\blue{c}$}; \draw (2.5,-0.75) node {Case $3\delta 2$}; \draw (2.5,-1.75) node {$\delta \to \alpha/\beta/$merge}; \end{scope} \begin{scope}[xshift=43cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(5,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$\blue{c}$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\red{c}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.7,2.5) node {$\ne c$}; \draw (3.5,-0.75) node {Case $3\delta 3$a}; \draw (3.5,-1.75) node {$\delta \to \beta/$merge}; \end{scope} \begin{scope}[xshift=60cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(5,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node 
{$y$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$\blue{c}$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,3.5) node {$\red{c}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,2.5) node {$c$}; \draw (3.5,-0.75) node {Case $3\delta 3$b}; \draw (3.5,-1.75) node {$\delta \to$ merge}; \end{scope} \begin{scope}[xshift=0cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$\blue{c}$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$\red{y}$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (3,-0.75) node {Case $3\delta 4$}; \draw (3,-1.75) node {$\delta \to \delta/\alpha$}; \end{scope} \begin{scope}[xshift=15cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$\blue{c}$}; \draw (3.5,2.5) node {$\red{y}$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.7,2.5) node {$\ne y$}; \draw (3,-0.75) node {Case $3\delta 5a$}; \draw (3,-1.75) node {$\delta \to \beta/$merge}; \end{scope} \begin{scope}[xshift=30cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ 
\path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \path[draw, very thick] (5,0)--(4,0)--(4,4)--(5,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$\red{c}$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$\blue{c}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,2.5) node {$y$}; \draw (3,-0.75) node {Case $3\delta 5b$}; \draw (3,-1.75) node {$\delta \to$ merge}; \end{scope} \begin{scope}[xshift=45cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$\blue{y}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$\red{y}$}; \draw (4.7,0.5) node {$\ne y$}; \draw (3.0,-0.75) node {Case $3\delta 5c$}; \draw (3.0,-1.75) node {$\delta \to \gamma/$merge}; \end{scope} \begin{scope}[xshift=61cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3,4}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(4,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \path[draw, very thick] (5,0)--(4,0)--(4,4)--(5,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$x$}; \draw (2.5,3.5) node {$x$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$y$}; \draw (3.5,3.5) node {$\blue{y}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node 
{$y$}; \draw (3.0,-.75) node {Case $3\delta 5d$:}; \draw (3.0,-1.5) node {not possible}; \end{scope} \end{tikzpicture}
\caption{Case $3\delta$: Bob plays in the empty column adjacent to a border in configuration $\delta$, where $x,y$ are distinct colors. Bob's move is depicted in blue and Alice's answer in red.} \label{fig-s-rudini} \end{figure}
\begin{itemize}
\item {\bf Case $3\delta 1$:} Suppose that Bob colored $v_{1,j+1}$ with color $y$. If $v_{3,j+2}$ is not colored $y$ (Case $3\delta 1a$), Alice colors $v_{3,j+1}$ with $y$, obtaining a border in configuration $\alpha$ or merging two blocks. Otherwise (Case $3\delta 1b$), she colors $v_{4,j+1}$ with $y$, merging two blocks and making $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\item {\bf Case $3\delta 2$:} Suppose that Bob colored $v_{1,j+1}$ with color $c\ne y$. Then Alice can color $v_{3,j+1}$ with $c$ or $y$, obtaining a border in configuration $\alpha$ or $\beta$, or merging two blocks.
\item {\bf Case $3\delta 3$:} Suppose that Bob colored $v_{2,j+1}$ with $c\ne y$. If column $j+2$ is not empty, Alice colors $v_{3,j+1}$ with any available color, merging the two blocks. So, assume that column $j+2$ is empty. If $v_{3,j+3}$ is not colored $c$ (Case $3\delta 3$a), then Alice colors $v_{3,j+2}$ with color $c$, obtaining a border in configuration $\beta$ or merging two blocks. If $v_{3,j+3}$ is colored $c$ (Case $3\delta 3$b), then she colors $v_{4,j+2}$ with color $c$, making $v_{3,j+2}$ safe, $v_{2,j+2}$ sound (with doctors $v_{3,j+2}$ and $v_{1,j+2}$) and $v_{3,j+1}$ sound (with doctors $v_{3,j}$ and $v_{4,j+1}$).
\item {\bf Case $3\delta 4$:} Suppose that Bob colored $v_{3,j+1}$ with $c$. If column $j+2$ is not empty, Alice colors $v_{2,j+1}$ with any available color, merging two blocks. So, assume that column $j+2$ is empty. Then Alice colors $v_{1,j+1}$ with $y$, obtaining a border in configuration $\alpha$ (if $c=y$) or in configuration $\delta$ (if $c\ne y$).
\item {\bf Case $3\delta 5$:} Suppose that Bob colored $v_{4,j+1}$ with $c\ne x$. If $c\ne y$ and $v_{3,j+2}$ is not colored $y$ (Case $3\delta 5a$), Alice colors $v_{3,j+1}$ with color $y$, obtaining a border in configuration $\beta$ or merging two blocks. If $c\ne y$ and $v_{3,j+2}$ is colored $y$ (Case $3\delta 5b$), Alice colors $v_{3,j}$ with $c$, merging two blocks, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{3,j+1}$ and $v_{1,j+1}$). So, suppose that $c=y$. If $v_{1,j+2}$ is not colored $y$ (Case $3\delta 5c$), Alice colors $v_{1,j+1}$ with $y$, obtaining a border in configuration $\gamma$ or merging two blocks (with $v_{3,j+1}$ sound). So, assume that $c=y$ and $v_{1,j+2}$ is colored $y$. If $v_{2,j+2}$ is colored $c'\neq 0$, Alice colors $v_{3,j+1}$ with $c'$, merging two blocks and making $v_{2,j+1}$ and $v_{3,j+1}$ safe. So also assume that $v_{2,j+2}$ is not colored. If $v_{3,j+2}$ is colored $c'$, Alice colors $v_{2,j+1}$ with $c'$ if $c'\ne y$ or any other color otherwise, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. So also assume that $v_{3,j+2}$ is not colored (Case $3\delta 5d$). Then column $j+2$ must be a border in configuration $\gamma$ and $v_{4,j+2}$ is colored $y$, a contradiction since Bob just colored $v_{4,j+1}$ with $y$.
\end{itemize}
\end{itemize}
From now on, we assume that Bob just played in an empty column adjacent to some border (column $j$) but not adjacent to a border in configuration $\pi$ or $\delta$. The next cases first assume that the border at column $j$ is free.
\begin{itemize}
\item {\bf Case $3\alpha F$. Empty column adjacent to a free configuration $\alpha$.} This case occurs when Bob colors a vertex in an empty column (column $j+1$ in the illustrations) that is adjacent to a free border in configuration $\alpha$. The different subcases are described in Figure~\ref{fig:nextMoveAlice2}.
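The case analysis above only ever uses two ingredients: the adjacencies of the $4\times n$ grid and the set of colors still available at a vertex. A minimal model of both (the helper names are ours, purely for illustration) is:

```python
# Minimal model (ours) of the 4 x n grid used throughout: vertex v_{i,j}
# has row i in 1..4 and column j in 1..n, and is adjacent to its orthogonal
# neighbors in the grid.
def neighbors(i, j, n):
    """Orthogonal neighbors of v_{i,j} inside the 4-row, n-column grid."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for (a, b) in cand if 1 <= a <= 4 and 1 <= b <= n]

def available_colors(coloring, i, j, n, palette):
    """Palette colors not used on any neighbor of v_{i,j}.

    `coloring` maps (row, col) -> color; missing keys mean uncolored.
    """
    used = {coloring.get(v) for v in neighbors(i, j, n)}
    return sorted(palette - used)
```

For instance, a corner vertex such as $v_{1,1}$ has only two neighbors, while an interior vertex has four; a vertex whose neighbors can never exhaust the palette is always colorable, which is exactly the kind of count the safety arguments above rely on.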
\begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (2,0) rectangle (3,1); \fill[blue!20] (2,3) rectangle (3,4); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,1.5) node {$\red{c}$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,-.75) node {Case $3\alpha F1$:}; \draw (2.5,-1.5) node {$\alpha\to\beta$}; \end{scope} \begin{scope}[xshift=20cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {$\red{c'}$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\blue{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\alpha F2$:}; \draw (2.5,-1.5) node {$\alpha\to\alpha$}; \end{scope} \begin{scope}[xshift=40cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted](0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j)--(3,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,0.5) node {$c$}; \draw (1.5,2.5)
node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$\blue{c'}$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$\red{c}$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\alpha F3$:}; \draw (2.5,-1.5) node {$\alpha\to\alpha/\delta$}; \end{scope} \end{tikzpicture} \caption{Case $3\alpha F$: Bob colors a vertex in an empty column adjacent to a free border in configuration $\alpha$. The blue cell is Bob's move and the red one is Alice's answer. The bold lines indicate the boundary of the block(s) before Alice's move.} \label{fig:nextMoveAlice2} \end{figure} \begin{itemize} \item {\bf Case $3\alpha F 1$:} If Bob plays on vertex $v_{1,j+1}$ or $v_{4,j+1}$, then Alice colors $v_{2,j+1}$ with color $c$, obtaining a new border in configuration $\beta$. \item {\bf Case $3\alpha F 2$:} If Bob colors $v_{3,j+1}$ with $c'$ (note that $c'\neq c$), Alice plays $c'$ on $v_{1,j+1}$, obtaining a new border in configuration $\alpha$. Notice that $v_{2,j+1}$ is not a doctor, satisfying Property $(4)$. \item {\bf Case $3\alpha F 3$:} If Bob plays $c'$ on $v_{2,j+1}$, Alice colors $v_{4,j+1}$ with $c$, obtaining a border in configuration $\alpha$ (if $c'=c$) or in configuration $\delta$ (if $c'\ne c$). \end{itemize} \item {\bf Case $3\beta F$. Empty column adjacent to a free configuration $\beta$.} The next case occurs if Bob colors a vertex in an empty column adjacent to a free border in configuration $\beta$. The different subcases are described in Figure~\ref{fig:nextMoveAlice3}.
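Alice's answers in Case $3\alpha F$ are purely local: they depend only on the row Bob played in column $j+1$ and on whether his color equals the border color $c$. As a sanity check, the response table can be encoded in a short Python sketch; the function name and the bottom-up row numbering $1$--$4$ are our own conventions, not the paper's.

```python
# Hypothetical sketch (ours, not the paper's): Alice's response table for
# Case 3alphaF.  Bob plays in the empty column j+1 next to a free border
# in configuration alpha whose border column carries color c.
# Rows are numbered 1..4; colors are arbitrary hashable values.

def alice_response_3alphaF(bob_row, bob_color, c):
    """Return (alice_row, alice_color, new_border_configuration)."""
    if bob_row in (1, 4):                 # Case 3alphaF1
        return 2, c, "beta"
    if bob_row == 3:                      # Case 3alphaF2 (here bob_color != c)
        return 1, bob_color, "alpha"
    if bob_row == 2:                      # Case 3alphaF3
        return 4, c, ("alpha" if bob_color == c else "delta")
    raise ValueError("row must be in 1..4")
```

In every branch Alice answers in the same column $j+1$, which is why this sub-case needs no look-ahead at columns $j+2$ or $j+3$.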
\begin{figure}[ht]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {\blue{$c'$}}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\red{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\beta F1$:}; \draw (2.5,-1.5) node {$\beta\to\alpha$}; \end{scope} \begin{scope}[xshift=17cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {\blue{$c$}}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {\red{$c$}}; \draw (2.5,-.75) node {Case $3\beta F2$:}; \draw (2.5,-1.5) node {$\beta\to\gamma$}; \end{scope} \begin{scope}[xshift=34cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (0.5,1.5) node 
{$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {\blue{$c'$}}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {\red{$c$}}; \draw (2.5,-.75) node {Case $3\beta F3$:}; \draw (2.5,-1.5) node {$\beta\to\alpha$ or $\delta$}; \end{scope} \begin{scope}[xshift=53cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {\red{$c$}}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {\blue{$c'$}}; \draw (2.5,-.75) node {Case $3\beta F4$:}; \draw (2.5,-1.5) node {$\beta\to\alpha$ or $\beta$}; \end{scope} \begin{scope}[xshift=0cm, yshift=-20cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \path[draw, densely dotted] (1,0)--(1,4); \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (1.25,0.5) node {$\neq c'$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {\red{$c'$}}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\blue{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\beta F5$:}; \draw (2.5,-1.5) node {$\beta\to\alpha$}; \end{scope} \begin{scope}[xshift=18cm, yshift=-20cm, 
scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,0.5) node {$c'$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {\red{$c'$}}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\blue{$c'$}}; \draw (4.75,1.5) node {$\neq c'$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\beta F6$:}; \draw (2.5,-1.5) node {$\beta\to\beta$ or merge}; \end{scope} \begin{scope}[xshift=36cm, yshift=-20cm, scale=2.5] \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (5,0) --(4,0)--(4,1) --(4,2)--(4,3)--(4,4)--(5,4) ; \draw (1.5,4.5) node {$j$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,0.5) node {$c'$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (3.5,0.5) node {\red{$c'$}}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {\blue{$c'$}}; \draw (4.5,1.5) node {$c'$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,-.75) node {Case $3\beta F7$}; \draw (2.5,-1.5) node {$\beta\to$ merge}; \end{scope} \end{tikzpicture} \caption{Case $3\beta F$: Bob colors a vertex in an empty column adjacent to a free border in configuration $\beta$. The blue cell is Bob's move and the red one is Alice's answer. 
The bold lines indicate the boundary of the block(s) before Alice's move.} \label{fig:nextMoveAlice3} \end{figure} First assume that Bob colored $v_{1,j+1}$ with $c'$. \begin{itemize} \item {\bf Case $3\beta F 1$:} If $c'\ne c$, Alice plays $c'$ on $v_{3,j+1}$, obtaining a border in configuration $\alpha$ (notice that $v_{2,j+1}$ is not a doctor). \item {\bf Case $3\beta F 2$:} If $c'=c$, Alice colors $v_{4,j+1}$ with $c$, obtaining a border in configuration $\gamma$. \end{itemize} Now assume that Bob colored $v_{2,j+1}$ with $c'$, which may be equal to $c$. \begin{itemize} \item {\bf Case $3\beta F 3$:} Alice colors $v_{4,j+1}$ with $c$, obtaining a border in configuration $\alpha$ (if $c'=c$) or in configuration $\delta$ (if $c'\ne c$). \end{itemize} Now assume that Bob colored $v_{4,j+1}$ with $c'$, which may be equal to $c$. \begin{itemize} \item {\bf Case $3\beta F 4$:} Alice plays $c$ on $v_{2,j+1}$, obtaining a border in configuration $\alpha$ (if $c'=c$) or $\beta$ (if $c'\ne c$). \end{itemize} Finally assume that Bob colored $v_{3,j+1}$ with $c'\ne c$. \begin{itemize} \item {\bf Case $3\beta F 5$:} If $v_{1,j}$ is not colored $c'$, Alice plays $c'$ on $v_{1,j+1}$, obtaining a border in configuration $\alpha$ (notice that $v_{2,j+1}$ is not a doctor). \item {\bf Case $3\beta F 6$:} So, suppose that $v_{1,j}$ is colored $c'$. If $v_{2,j+3}$ is not colored $c'$, Alice colors $v_{2,j+2}$ with $c'$, obtaining a border in configuration $\beta$ or merging two blocks. \item {\bf Case $3\beta F 7$:} So, also suppose that $v_{2,j+3}$ is colored $c'$. Then Alice colors $v_{1,j+2}$ with $c'$, merging two blocks and making $v_{2,j+1}$ sound (with doctors $v_{2,j}$ and $v_{1,j+1}$) and $v_{3,j+2}$ sound (with doctors $v_{2,j+2}$ and $v_{4,j+2}$). \end{itemize} \item {\bf Case $3\gamma F$. Empty column adjacent to a free configuration $\gamma$.} The next case occurs if Bob colors a vertex in an empty column adjacent to a free border in configuration $\gamma$.
The different subcases are described in Figure~\ref{fig-6gamma}. The move of Bob is in blue and the answer of Alice is in red. \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$\blue{c'}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$\red{c'}$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.0,-.75) node {Case $3\gamma F1$:}; \draw (4.0,-1.5) node {$\gamma \to \alpha$}; \end{scope} \begin{scope}[xshift=17cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\red{c'}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$\blue{c'}$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.0,-.75) node {Case $3\gamma F2$:}; \draw (4.0,-1.5) node {$\gamma \to \alpha$}; 
\end{scope} \begin{scope}[xshift=34cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,2.5) node {$\red{c'}$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,1.5) node {$\blue{c'}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.0,-.75) node {Case $3\gamma F3$:}; \draw (4.0,-1.5) node {$\gamma \to \beta$}; \end{scope} \begin{scope}[xshift=51cm, yshift=0cm, scale=2.5] \foreach \i in {3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(7,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(6,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,1.5) node {$\blue{c}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$\red{c}$}; \draw (6.75,2.5) node {$\neq c$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.5,-.75) node {Case $3\gamma F4$:}; \draw (4.5,-1.5) node {$\gamma\to\beta$ or merge}; \end{scope} \begin{scope}[xshift=0cm, yshift=-20cm, scale=2.5] \foreach \i in {3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(7,\j);} \foreach \j in 
{0,1,2,3,4}{ \path[draw, thin] (3,\j)--(6,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (7,0)--(6,0)--(6,4)--(7,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,1.5) node {$\blue{c}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$\red{c}$}; \draw (5.5,2.5) node {$0$}; \draw (6.5,2.5) node {$c$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.2,-.75) node {Case $3\gamma F5$:}; \draw (4.2,-1.5) node {$\gamma\to$ merge}; \end{scope} \begin{scope}[xshift=17cm, yshift=-20cm, scale=2.5] \foreach \i in {3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(5,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$\red{c'}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\blue{c'}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$0$}; \draw (4.0,-.75) node {Case $3\gamma F6$:}; \draw (4.0,-1.5) node {$\gamma\to\beta$}; \end{scope} \begin{scope}[xshift=34cm, yshift=-20cm, scale=2.5] \foreach \i in {3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(7,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(6,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node 
{\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\blue{c}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$\red{c}$}; \draw (5.5,0.5) node {$0$}; \draw (6.8,1.5) node {$\ne c$}; \draw (4.5,-.75) node {Case $3\gamma F7$:}; \draw (4.5,-1.5) node {$\gamma\to\beta$ or merge}; \end{scope} \begin{scope}[xshift=51cm, yshift=-20cm, scale=2.5] \fill[red!20] (5,2) rectangle (6,3); \foreach \i in {3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(7,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(6,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (7,0)--(6,0)--(6,4)--(7,4); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\blue{c}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$\red{c}$}; \draw (6.5,1.5) node {$c$}; \draw (4.5,-.75) node {Case $3\gamma F 8$:}; \draw (4.5,-1.5) node {$\gamma\to\Delta'$}; \end{scope} \begin{scope}[xshift=68cm, yshift=-20cm, scale=2.5] \fill[red!20] (5,2) rectangle (6,3); \foreach \i in {3,4,5,6}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(8,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(4,\j); \path[draw, thin] (6,\j)--(7,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4); \path[draw, very thick] (7,0)--(6,0)--(6,4)--(7,4)--(7,0); \draw (3.5,4.5) node {$j$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.25) node {\tiny{safe}}; \draw
(2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$\blue{c}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$0$}; \draw (5.5,3.5) node {$0$}; \draw (5.5,2.5) node {$0$}; \draw (5.5,1.5) node {$0$}; \draw (5.5,0.5) node {$\red{c}$}; \draw (6.5,3.5) node {$c$}; \draw (6.5,2.5) node {$0$}; \draw (6.5,1.5) node {$c$}; \draw (6.5,0.5) node {$0$}; \draw (7.5,3.5) node {$0$}; \draw (7.5,2.5) node {$0$}; \draw (7.5,1.5) node {$0$}; \draw (7.5,0.5) node {$0$}; \draw (4.5,-0.75) node {$(3\gamma F 8)$: $\to$ Config $\Delta'$}; \end{scope} \end{tikzpicture} \caption{Case $3\gamma F$: Bob colors a vertex in an empty column adjacent to a free border in configuration $\gamma$ at column $j$.} \label{fig-6gamma} \end{figure} \begin{itemize} \item {\bf Case $3\gamma F 1$:} If Bob colors $v_{4,j+1}$ with $c'$, then Alice colors $v_{2,j+1}$ with $c'$ obtaining a border in configuration $\alpha$. \item {\bf Case $3\gamma F 2$:} If Bob colors $v_{1,j+1}$ with $c'$, then Alice colors $v_{3,j+1}$ with $c'$ obtaining a border in configuration $\alpha$. \item {\bf Case $3\gamma F 3$:} If Bob colors $v_{2,j+1}$ with $c' \neq c$, then Alice colors $v_{3,j}$ with $c'$ obtaining a border in configuration $\beta$. \item {\bf Case $3\gamma F 4$:} If Bob colors $v_{2,j+1}$ with $c$ and $c(v_{3,j+3})\neq c$, then Alice colors $v_{3,j}$ with $c$ obtaining a border in configuration $\beta$ or merging two blocks. \item {\bf Case $3\gamma F 5$:} If Bob colors $v_{2,j+1}$ with $c$ and $c(v_{3,j+3})= c$ and $c(v_{1,j+3})\neq c$ then Alice colors $v_{4,j+2}$ with $c$ making $v_{3,j+2}$ safe, $v_{2,j+2}$ sound (with doctors $v_{3,j+2}$ and $v_{1,j+2}$) and $v_{3,j+1}$ sound (with doctors $v_{3,j}$ and $v_{4,j+1}$). \item {\bf Case $3\gamma F 6$:} If Bob colors $v_{3,j+1}$ with $c'\ne c$, then Alice colors $v_{2,j}$ with $c'$ obtaining a border in configuration $\beta$. 
\end{itemize} So, assume that Bob colors $v_{3,j+1}$ with $c$. \begin{itemize} \item {\bf Case $3\gamma F 7$:} If $v_{2,j+3}$ is not colored $c$, then Alice colors $v_{2,j+2}$ with $c$ obtaining a border in configuration $\beta$. \item {\bf Case $3\gamma F8$:} So, also assume that $v_{2,j+3}$ is colored with $c$. Then we are in configuration $\Delta'$, with all vertices safe or sound, except $v_{3,j+2}$, which is sick. \end{itemize} \end{itemize} In what follows, we assume that Bob just played in an empty column separating two borders (not in configuration $\delta$ nor $\pi$). First, let us assume that at least one of these borders is in configuration $\gamma$. \begin{itemize} \item {\bf Case $3\gamma$.} Bob colors a vertex in an empty column that is separating two blocks with one of them in configuration $\gamma$ and the other in configuration $\alpha$, $\beta$ or $\gamma$. The cases are depicted in Figure~\ref{fig-case7c} and all of them merge the two blocks. Let $j$ be the column of the border of the block in configuration $\gamma$. Up to symmetry, we may assume that column $j$ is the right border of its block, that is, $j+1$ is the empty column before Bob's move. If Bob colors $v_{2,j+1}$ (resp. $v_{3,j+1}$), Alice can color $v_{3,j+1}$ (resp. $v_{2,j+1}$) and we are done. So assume that Bob colored either $v_{1,j+1}$ or $v_{4,j+1}$.
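The answers in Cases $3\gamma F1$--$3\gamma F3$ and $3\gamma F6$ above need no look-ahead, while Cases $3\gamma F4$, $F5$, $F7$ and $F8$ also inspect columns $j+2$ and $j+3$. The look-ahead-free part can be tabulated in a short Python sketch; the function name, the row numbering $1$--$4$, and the column-offset encoding are our own conventions, not the paper's.

```python
# Hypothetical sketch (ours, not the paper's): the look-ahead-free part of
# Alice's response table for Case 3gammaF, where Bob plays in the empty
# column j+1 next to a free border in configuration gamma with border
# color c.  Cases 3gammaF4/F5/F7/F8 also depend on columns j+2 and j+3
# and are deliberately omitted here.

def alice_response_3gammaF(bob_row, bob_color, c):
    """Return ((alice_row, alice_col_offset), alice_color, new_config),
    where alice_col_offset is relative to the border column j
    (0 = column j, 1 = column j+1)."""
    if bob_row == 4:                          # Case 3gammaF1
        return (2, 1), bob_color, "alpha"
    if bob_row == 1:                          # Case 3gammaF2
        return (3, 1), bob_color, "alpha"
    if bob_row == 2 and bob_color != c:       # Case 3gammaF3
        return (3, 0), bob_color, "beta"
    if bob_row == 3 and bob_color != c:       # Case 3gammaF6
        return (2, 0), bob_color, "beta"
    raise NotImplementedError("cases 3gammaF4/F5/F7/F8 need look-ahead")
```

Note that when Bob plays the border color $c$ in row $2$ or $3$, Alice's answer genuinely depends on the colors already present two columns ahead, which is why those branches cannot be filled in from local data alone.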
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(5,\j);} \path[draw, very thick] (1,0)--(4,0)--(4,4)--(1,4); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$\red{x}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$\blue{x}$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,-.75) node {Case $3\gamma 1$}; \end{scope} \begin{scope}[xshift=18cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(5,\j);} \path[draw, very thick] (1,0)--(4,0)--(4,4)--(1,4); \path[draw, very thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$\red{x}$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$\blue{x}$}; \draw (5.8,1.5) node {$\ne x$}; \draw (3.5,-.75) node {Case $3\gamma 2$}; \end{scope} \begin{scope}[xshift=36cm, yshift=0cm, scale=2.5] \foreach \i in {2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(5,\j);} \path[draw, very thick] (1,0)--(4,0)--(4,4)--(1,4); \path[draw, very 
thick] (6,0)--(5,0)--(5,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$\red{x}$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$\blue{x}$}; \draw (5.5,1.5) node {$x$}; \draw (3.5,-.75) node {Case $3\gamma 3$}; \end{scope} \end{tikzpicture} \caption{Case $3\gamma$: Bob colors a vertex in an empty column that is separating two blocks with one of them in configuration $\gamma$ and the other in configuration $\alpha$, $\beta$ or $\gamma$. In all cases, the two blocks are merged after Alice's move.} \label{fig-case7c} \end{figure} \begin{itemize} \item {\bf Case $3\gamma 1$:} If Bob colors $v_{1,j+1}$ with $x\ne c$, Alice colors $v_{2,j}$ with $x$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{3,j}$ and $v_{4,j+1}$). \end{itemize} So, assume Bob colors $v_{4,j+1}$ with $x\ne c$. \begin{itemize} \item {\bf Case $3\gamma 2$:} If $v_{2,j+2}$ is not colored $x$, Alice colors $v_{2,j+1}$ with $x$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. \end{itemize} So, also assume that $v_{2,j+2}$ is colored $x$. \begin{itemize} \item {\bf Case $3\gamma 3$:} Since $x\ne c$, Alice can color $v_{2,j}$ with $x$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{2,j+1}$ and $v_{3,j}$). \end{itemize} Next, we assume that Bob just played in an empty column separating two borders (not in configuration $\delta$ nor $\pi$ nor $\gamma$). We further assume that one of these borders is in configuration $\beta$. \item {\bf Case $3\beta$.} Bob colors a vertex in an empty column that is separating two blocks with one of them in configuration $\beta$ and the other in configuration $\alpha$ or $\beta$. The cases are depicted in Figure~\ref{fig-case7b} and all of them merge the two blocks. 
Let $j$ be the column of the border of the block in configuration $\beta$. Up to symmetry, we may assume that column $j$ is the right border of its block, that is, $j+1$ is the empty column before Bob's move. If Bob colors $v_{2,j+1}$ (resp. $v_{3,j+1}$), Alice can color $v_{3,j+1}$ (resp. $v_{2,j+1}$) and we are done. So assume that Bob colored either $v_{1,j+1}$ or $v_{4,j+1}$. \begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,1.5) node {$0$}; \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$\blue{c'}$}; \draw (2.5,2.5) node {$\red{c''}$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2,-0.75) node {Case $3\beta 1$}; \end{scope} \begin{scope}[xshift=13cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {\red{$c'$}}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$\blue{c'}$}; \draw (2,-0.75) node {Case $3\beta 2$}; \end{scope} \begin{scope}[xshift=26cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in
{0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (2.5,3.5) node {$\red{c}$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$\blue{c}$}; \draw (3.8,3.5) node {$\ne c$}; \draw (2,-0.75) node {Case $3\beta 3$}; \end{scope} \begin{scope}[xshift=39cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {$\red{c'}$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$\blue{c}$}; \draw (3.5,3.5) node {$c$}; \draw (3.5,1.5) node {$c$}; \draw (2,-0.75) node {Case $3\beta 4$}; \end{scope} \begin{scope}[xshift=52cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {$\red{x}$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$\blue{c}$}; \draw (3.5,3.5) node {$c$}; \draw (3.5,1.5) node {$x$}; \draw (2,-0.75) node {Case $3\beta 5$}; \end{scope} \begin{scope}[xshift=65cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ 
\path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j);} \path[draw, very thick] (0,0)--(2,0)--(2,4)--(0,4); \path[draw, very thick] (4,0)--(3,0)--(3,4)--(4,4); \draw (1.5,4.5) node {$j$}; \draw (0.5,1.5) node {$c$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$\red{x}$}; \draw (2.5,0.5) node {$\blue{c}$}; \draw (3.5,3.5) node {$c$}; \draw (3.5,2.5) node {$x$}; \draw (3.5,1.5) node {$0$}; \draw (2,-0.75) node {Case $3\beta 6$}; \end{scope} \end{tikzpicture} \caption{Case $3\beta$: Bob colors a vertex in an empty column that is separating two blocks with one of them in configuration $\beta$ and the other in configuration $\alpha$ or $\beta$. In each case, Alice's move merges the two blocks.} \label{fig-case7b} \end{figure} \begin{itemize} \item {\bf Case $3\beta 1$:} If Bob colors $v_{4,j+1}$ with any color, Alice colors $v_{3,j+1}$ with any other color, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{2,j}$). \end{itemize} So, assume Bob colors $v_{1,j+1}$ with $c'$, which may be $c$. \begin{itemize} \item {\bf Case $3\beta 2$:} If $c'\neq c$, Alice colors $v_{2,j}$ with $c'$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{2,j+1}$ and $v_{4,j+1}$). \end{itemize} So, in fact, assume Bob colors $v_{1,j+1}$ with $c$. \begin{itemize} \item {\bf Case $3\beta 3$:} If $v_{4,j+2}$ is not colored $c$, Alice colors $v_{4,j+1}$ with $c$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{2,j}$ and $v_{3,j+1}$). \end{itemize} So, also assume $v_{4,j+2}$ is colored $c$. \begin{itemize} \item {\bf Case $3\beta 4$:} If $v_{2,j+2}$ is colored $c$ (then $v_{2,j+1}$ is already safe), Alice colors $v_{3,j+1}$ with any color, making it safe. \item {\bf Case $3\beta 5$:} If $v_{2,j+2}$ is colored $x\ne c$, Alice colors $v_{3,j+1}$ with $x$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe.
\end{itemize} So, also assume $v_{2,j+2}$ is not colored. \begin{itemize} \item {\bf Case $3\beta 6$:} If $v_{3,j+2}$ is colored $x$, Alice colors $v_{2,j+1}$ with $x$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. \end{itemize} So, also assume $v_{3,j+2}$ is not colored. Then, by induction, column $j+2$ must be in configuration $\gamma$, a contradiction since this case considers an empty column between a border in configuration $\beta$ and a border in configuration $\alpha$ or $\beta$. \end{itemize} Finally, we assume that Bob just played in an empty column separating two borders both in configuration $\alpha$, except in configurations $\Lambda$ and $\Lambda'$, which are treated in cases $1\Lambda$ and $1\Lambda'$. \begin{itemize} \item {\bf Case $3\alpha$.} Bob colors a vertex in an empty column that is separating two blocks in configuration $\alpha$. The cases are depicted in Figure~\ref{fig-case7a} and all of them merge the two blocks. Let $j$ be the column of the right border of the block in the left, that is, $j+1$ is the empty column before Bob's move. If Bob colors $v_{2,j+1}$ (resp. $v_{3,j+1}$), Alice can color $v_{3,j+1}$ (resp. $v_{2,j+1}$) and we are done. So assume that Bob colored either $v_{1,j+1}$ or $v_{4,j+1}$. Also notice that the border at column $j+2$ contains two non-colored vertices from Property $(5)$. 
\begin{figure}[ht!]\centering \begin{tikzpicture}[scale=0.195] \tikzstyle{vertex}=[draw,circle,fill=black,inner sep=0pt, minimum size=1pt] \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[blue!20] (1,3) rectangle (2,4); \fill[blue!20] (1,0) rectangle (2,1); \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3}{ \path[draw, densely dotted] (0,\j)--(3,\j);} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {$c'$}; \draw (0.5,2.5) node {}; \draw (0.5,1.5) node {$c'$}; \draw (0.5,0.5) node {}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$\red{c'}$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (1.5,-0.75) node {Case $3\alpha 1$}; \end{scope} \begin{scope}[xshift=11cm, yshift=0cm, scale=2.5] \fill[blue!20] (1,3) rectangle (2,4); \fill[blue!20] (1,0) rectangle (2,1); \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(3,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c'$}; \draw (0.5,1.5) node {}; \draw (0.5,0.5) node {$c'$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$\red{c'}$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (1.5,-0.75) node {Case $3\alpha 2$}; \end{scope} \begin{scope}[xshift=22cm, yshift=0cm, scale=2.5] \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(3,\j);}
\foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.5,1.5) node {}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$\blue{x}$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$\red{x}$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (1.5,-0.75) node {Case $3\alpha 3$}; \end{scope} \begin{scope}[xshift=33cm, yshift=0cm, scale=2.5] \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(3,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.2,1.5) node {$\ne x$}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$\red{x}$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$\blue{x}$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (1.5,-0.75) node {Case $3\alpha 4$}; \end{scope} \begin{scope}[xshift=44cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, thin] (3,4)--(4,4) ; \path[draw, thin] (3,0)--(4,0) ; \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4) -- (3,0); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.5,1.5) node {$x$}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; 
\draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$\blue{x}$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$\red{x}$}; \draw (2.5,3.5) node {$c$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (1.5,-0.75) node {Case $3\alpha 5$}; \end{scope} \begin{scope}[xshift=60cm, yshift=0cm, scale=2.5] \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4) ; \path[draw, very thick] (4,0)--(2,0)--(2,4)--(4,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.5,1.5) node {$x$}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$\red{x}$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$\blue{x}$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (3.8,4.5) node {$\ne 0$}; \draw (1.5,-0.75) node {Case $3\alpha 6$}; \end{scope} \end{tikzpicture} \caption{Case $3\alpha$: Bob colors a vertex in an empty column that is separating two blocks in configuration $\alpha$. In each case, Alice's move merges the two blocks.} \label{fig-case7a} \end{figure} The subcases depend on the symmetry between the two borders in configuration $\alpha$ separated by the empty column. \begin{itemize} \item {\bf Case $3\alpha 1$:} If Bob colors $v_{1,j+1}$ or $v_{4,j+1}$, Alice colors $v_{3,j+1}$ with $c'$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. \item {\bf Case $3\alpha 2$:} If $c'\ne c$, Alice colors $v_{2,j+1}$ with $c'$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe, regardless of whether Bob colored $v_{1,j+1}$ or $v_{4,j+1}$. \end{itemize} So, assume that $v_{1,j}$, $v_{3,j}$, $v_{2,j+2}$ and $v_{4,j+2}$ have the same color $c$.
\begin{itemize} \item {\bf Case $3\alpha 3$:} If Bob colors $v_{1,j+1}$ with $x$, Alice colors $v_{3,j+1}$ with $x$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. \end{itemize} So, also assume that Bob colored $v_{4,j+1}$ with $x\ne c$. \begin{itemize} \item {\bf Case $3\alpha 4$:} If $v_{2,j}$ is not colored $x$, Alice colors $v_{2,j+1}$ with $x$, making $v_{2,j+1}$ and $v_{3,j+1}$ safe. \end{itemize} So, also assume that $v_{2,j}$ is colored $x$. \begin{itemize} \item {\bf Case $3\alpha 5$:} If column $j+3$ is empty, Alice colors $v_{3,j+2}$ with $x$, making $v_{3,j+1}$ safe and $v_{2,j+1}$ sound (with doctors $v_{1,j+1}$ and $v_{3,j+1}$). \item {\bf Case $3\alpha 6$:} So, assume that column $j+3$ is not empty. Then, by Property $(4)$, $v_{3,j+2}$ is not a doctor and Alice colors $v_{1,j+1}$ with $x$, making $v_{2,j+1}$ safe and $v_{3,j+1}$ sound (with doctors $v_{2,j+1}$ and $v_{3,j+2}$). \end{itemize} \end{itemize} Notice that, in each of the subcases of Case $3$ above, Alice's answer is never in a column that was a border in configuration $\alpha$ that could be in configuration $\Delta$. So if the empty column played by Bob was the column $j+3$ of configuration $\Delta$ according to the orientation of Figure \ref{fig-delta}, then this configuration disappears and the sick vertex becomes sound, because its empty neighbor in the border in configuration $\alpha$ can now become its missing doctor. The same holds for configurations $\Lambda$ and $\Lambda'$ if the empty column played by Bob is the column $j-3$ of these configurations according to the orientation of Figure \ref{fig-delta} and if Alice's answer is not in the column $j-2$. If Alice's answer is in the column $j-2$, then the configurations $\Lambda$ and $\Lambda'$ become $\Lambda_2$ and $\Lambda_2'$, respectively. In every subcase of Case $3$, after Alice's move, the properties of the induction hypothesis still hold.
\end{description} This concludes the description of Alice's algorithm, the proof of its correctness, and hence the proof of the theorem. \end{proof} \section{Further work} The natural open problem is to decide whether $\chi_g(P_m\x P_n)\leq 4$ for every grid when $m,n\geq5$. In our opinion, obtaining such results for $m,n\geq 5$ should require the use of an essentially different technique. Indeed, there can be many more configurations already fitting in a $5\times 5$ square that may lead Alice to be inevitably defeated with only four colors; the question is whether (and when) Bob can always force a fifth color by reaching such a configuration. The question we addressed could also be extended to the cylinder $P_4\x C_n$ (and more generally to $P_m\x C_n$): does the absence of a first and last column have an impact on whether or not Alice has a winning strategy with four colors when she begins? Finally, it would also be interesting to consider the coloring game in other graph classes. In particular, the case of trees is not settled yet. \normalem \nocite{*} \bibliographystyle{plain} \input{game-color-4grid.bbl} \newpage \thispagestyle{empty} \section*{Visual summary of the configurations to keep in mind} \vspace{2cm} \begin{center} \begin{tikzpicture}[scale=0.2] \begin{scope} \begin{scope}[xshift=30cm, yshift = 15cm] \draw (0,0) node {\large Border configurations}; \end{scope} \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \foreach \i in {2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4); \draw (2.5,4.5) node {$j$}; \draw (2.5,0.5) node {$c$}; \draw (2.5,2.5) node {$c$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config.
$\alpha$}; \end{scope} \begin{scope}[xshift=12cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,1.5) node {$c$}; \draw (2.5,0.5) node {}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,3.5) node {}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config. $\beta$}; \end{scope} \begin{scope}[xshift=22cm, yshift=0cm, scale=2.5] \foreach \i in {3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (2,\j)--(3,\j); \path[draw, densely dotted] (4,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (3,\j)--(4,\j);} \path[draw, very thick] (2,0)--(4,0)--(4,4)--(2,4) ; \draw (3.5,4.5) node {$j$}; \draw (2.5,2.5) node {$c$}; \draw (2.5,1.65) node {$0$}; \draw (2.5,1.2) node {\tiny safe}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$c$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (3.5,-0.75) node {Config. $\gamma$}; \end{scope} \begin{scope}[xshift=36cm, yshift=0cm, scale=2.5] \foreach \i in {2,3}{ \path[draw, thin] (\i,0)--(\i,4) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (1,\j)--(4,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (2,\j)--(3,\j);} \path[draw, very thick] (1,0)--(3,0)--(3,4)--(1,4) ; \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c'$}; \draw (3.5,0.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (2.5,-0.75) node {Config. 
$\delta$}; \end{scope} \begin{scope}[xshift=50cm, yshift=0cm, scale=2.5] \foreach \i in {1,2}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3}{ \path[draw, densely dotted] (0,\j)--(3,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(2,\j) ;} \path[draw, very thick] (0,0)--(1,0)--(1,4)--(0,4); \path[draw, very thick] (3,0)--(2,0)--(2,4)--(3,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {$c$}; \draw (0.5,2.5) node {$c'$}; \draw (0.5,1.5) node {$c''$}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$0$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.8,0.5) node {}; \draw (2.5,1.5) node {$c'$}; \draw (2.5,3.5) node {}; \draw (1.5,-0.75) node {Config. $\pi$}; \end{scope} \end{scope} \begin{scope}[xshift=5cm, yshift = -30cm] \begin{scope}[xshift = 25cm, yshift=15cm] \draw (0,0) node {\large Special configurations (sick vertices)}; \end{scope} \begin{scope}[xshift=0cm, yshift=0cm, scale=2.5] \fill[red!20] (1,2) rectangle (2,3); \foreach \i in {1,2,3}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(4,\j) ;} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(3,\j) ;} \path[draw, very thick] (0,0)--(3,0)--(3,4)--(0,4); \draw (0.5,4.5) node {$j$}; \draw (0.5,3.5) node {}; \draw (0.5,2.5) node {$c$}; \draw (0.5,1.5) node {}; \draw (0.5,0.5) node {$c$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c'$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$0$}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$c$}; \draw (2.5,0.5) node {$0$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$0$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw(1.4,-0.7) node {Configuration $\Delta$}; \end{scope} \begin{scope}[xshift=16cm, yshift=0cm, scale=2.5] \fill[red!20] (4,2) rectangle (5,3); \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted]
(0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(5,\j);} \path[draw, very thick] (0,0)--(6,0); \path[draw, very thick] (0,4)--(6,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,3.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,0.5) node {$c$}; \draw (5.5,1.5) node {$c$}; \draw (3.5,-0.7) node {Configuration $\Delta'$}; \end{scope} \begin{scope}[xshift=37cm, yshift=0cm, scale=2.5] \fill[red!20] (3,1) rectangle (4,2); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (0,0)--(5,0); \path[draw, very thick] (0,4)--(5,4); \draw (2.5,4.5) node {$j$}; \draw (1.5,2.5) node {$c$}; \draw (1.5,1.65) node {$0$}; \draw (1.5,1.25) node {\tiny{safe}}; \draw (2.5,3.5) node {$c$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,0.5) node {$c$}; \draw (3.5,3.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,0.5) node {$0$}; \draw (4.5,1.5) node {$c'$}; \draw (4.5,0.5) node {$c$}; \draw (3.1,-0.7) node {Configuration $\Delta'_2$}; \end{scope} \begin{scope}[xshift=5cm, yshift=-17cm, scale=2.5] \fill[red!20] (2,2) rectangle (3,3); \foreach \i in {1,2,3,4}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(5,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4); \path[draw, very thick] (5,0)--(3,0)--(3,4)--(5,4); \draw (3.5,4.5) node {$j$}; \draw (4.5,1.5) node {$c''$}; \draw (3.5,0.5) node {$c$};
\draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (3,-0.75) node {Configuration $\Lambda$}; \end{scope} \begin{scope}[xshift=30cm, yshift=-17cm, scale=2.5] \fill[red!20] (2,2) rectangle (3,3); \foreach \i in {1,2,3,4,5}{ \path[draw, thin] (\i,0)--(\i,4);} \foreach \j in {0,1,2,3,4}{ \path[draw, densely dotted] (0,\j)--(6,\j);} \foreach \j in {0,1,2,3,4}{ \path[draw, thin] (1,\j)--(4,\j);} \path[draw, very thick] (1,0)--(2,0)--(2,4)--(1,4); \path[draw, very thick] (6,0)--(3,0)--(3,4)--(6,4); \draw (3.5,4.5) node {$j$}; \draw (4.5,0.5) node {$0$}; \draw (4.5,1.5) node {$0$}; \draw (4.5,2.5) node {$0$}; \draw (4.5,3.5) node {$c$}; \draw (5.6,2.5) node {\tiny{safe}}; \draw (3.5,0.5) node {$c$}; \draw (3.5,1.5) node {$0$}; \draw (3.5,2.5) node {$c$}; \draw (3.5,3.5) node {$c'$}; \draw (2.5,0.5) node {$0$}; \draw (2.5,1.5) node {$0$}; \draw (2.5,2.5) node {$0$}; \draw (2.5,3.5) node {$0$}; \draw (1.5,0.5) node {$0$}; \draw (1.5,1.5) node {$c$}; \draw (1.5,2.5) node {$0$}; \draw (1.5,3.5) node {$c$}; \draw (3.5,-0.75) node {Configuration $\Lambda'$}; \end{scope} \end{scope} \end{tikzpicture} \end{center} \vspace{1cm} Configurations $\Lambda_2$ and $\Lambda_2'$ are obtained from Configurations $\Lambda$ and $\Lambda'$, respectively, by replacing exactly one 0 (zero) of column $j-2$ by a color different from $c$, with the additional constraint that column $j-3$ cannot be empty. Every configuration has a symmetric counterpart obtained by reflection across the horizontal axis. However, configurations $\Lambda, \Lambda', \Lambda_2$ and $\Lambda'_2$ are only defined with column $j$ as their left border, and, in configuration $\Delta$, column $j+2$ will always be its right border. \end{document}
2412.17735v1
http://arxiv.org/abs/2412.17735v1
Colouring t-perfect graphs
\documentclass[a4paper, 12pt]{article} \pdfoutput=1 \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{authblk} \usepackage{amsfonts,amsmath,amssymb,mathtools,dsfont,microtype} \usepackage{xcolor} \usepackage[noBBpl]{mathpazo} \DeclareFontFamily{U} {cmr}{} \DeclareFontShape{U}{cmr}{m}{n}{ <-6> cmr5 <6-7> cmr6 <7-8> cmr7 <8-9> cmr8 <9-10> cmr9 <10-12> cmr10 <12-> cmr12}{} \DeclareSymbolFont{Xcmr} {U} {cmr}{m}{n} \usepackage[labelsep = period, labelfont = bf]{caption} \usepackage{float,graphicx,subcaption} \usepackage{booktabs,makecell} \usepackage{enumitem} \usepackage{tkz-euclide} \tkzSetUpPoint[fill=black, size = 3pt] \tkzSetUpLine[color=black, line width=0.6pt] \tkzSetUpCircle[color=black, line width=0.6pt] \usetikzlibrary{shapes.symbols} \usepackage[left = 2.5cm, right = 2.5cm, top = 2.5cm, bottom = 2.5cm, headsep = 12pt, headheight = 15pt]{geometry} \definecolor{lightseagreen}{rgb}{0.13, 0.7, 0.67} \definecolor{secondcolor}{HTML}{5BFF00} \newcommand{\secondopacity}{.3} \definecolor{bordeaux}{RGB}{100,0,50} \definecolor{darkblue}{RGB}{25, 25, 112} \newcommand{\say}[1]{``#1''} \newcommand\dd{\hbox{-}} \usepackage[hyphens]{url} \usepackage[linktoc = all, hidelinks, colorlinks, unicode=true, pagebackref=true]{hyperref} \usepackage[capitalise, compress, nameinlink, noabbrev]{cleveref} \hypersetup{linkcolor={{blue!70!black}}, citecolor={black}, urlcolor={blue!70!black}} \hypersetup{linkcolor=bordeaux, citecolor = darkblue, urlcolor = darkblue} \usepackage{amsthm,thmtools,thm-restate} \declaretheorem[name = Theorem, numberwithin = section, style = plain]{theorem} \declaretheorem[name = Claim, numberwithin = theorem, style = plain]{claim} \declaretheorem[name = Corollary, numberlike = theorem, style = plain]{corollary} \declaretheorem[name = Conjecture, numberlike = theorem, style = plain]{conjecture} \declaretheorem[name = Definition, numberlike = theorem, style = definition]{definition} \declaretheorem[name = Lemma, numberlike = theorem, style = 
plain]{lemma} \declaretheorem[name = Observation, numberlike = theorem, style = plain]{observation} \declaretheorem[name = Proposition, numberlike = theorem, style = plain]{proposition} \declaretheorem[name = Problem, numberlike = theorem, style = plain]{problem} \declaretheorem[name = Remark, numberlike = theorem, style = definition]{remark} \crefname{observation}{Observation}{Observations} \crefname{conjecture}{Conjecture}{Conjectures} \crefname{claim}{Claim}{Claims} \crefname{problem}{Problem}{Problems} \DeclareFontFamily{U}{matha}{\hyphenchar\font45} \DeclareFontShape{U}{matha}{m}{n}{ <5> <6> <7> <8> <9> <10> gen * matha <10.95> matha10 <12> <14.4> <17.28> <20.74> <24.88> matha12 }{} \DeclareSymbolFont{matha}{U}{matha}{m}{n} \DeclareMathSymbol{\specialuparrow}{\mathrel}{matha}{"D2} \DeclareMathSymbol{\specialrightarrow}{\mathrel}{matha}{"D1} \renewcommand*{\backrefsep}{, } \renewcommand*{\backreftwosep}{, } \renewcommand*{\backreflastsep}{, } \renewcommand*{\backref}[1]{} \renewcommand*{\backrefalt}[4]{ \ifcase #1 Not cited. 
\or $\specialuparrow$#2 \else $\specialuparrow$#2 \fi } \renewcommand{\epsilon}{\varepsilon} \renewcommand{\ge}{\geqslant} \renewcommand{\le}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \renewcommand{\emptyset}{\varnothing} \renewcommand{\subset}{\subseteq} \renewcommand{\supset}{\supseteq} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \DeclarePairedDelimiter{\set}{\{}{\}} \newcommand{\defn}[1]{\textcolor{PakistanGreen}{\emph{#1}}} \newcommand{\bE}{\mathbb{E}} \newcommand{\bN}{\mathbb{N}} \newcommand{\bP}{\mathbb{P}} \newcommand{\bR}{\mathbb{R}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cR}{\mathcal{R}} \DeclareMathOperator{\stablesetpolytope}{SSP} \newcommand{\ssp}[1]{\stablesetpolytope(#1)} \DeclareMathOperator{\tperfectpolytope}{TSTAB} \newcommand{\tstab}[1]{\tperfectpolytope(#1)} \DeclareMathOperator{\hperfectpolytope}{HSTAB} \newcommand{\hstab}[1]{\hperfectpolytope(#1)} \DeclareMathOperator{\qperfectpolytope}{QSTAB} \newcommand{\qstab}[1]{\qperfectpolytope(#1)} \newcommand{\vasek}{Chv\'{a}tal } \newcommand\reals{\mathbb{R}} \DeclareMathOperator{\forb}{\mathbf{Forb}} \title{Colouring t-perfect graphs} \date{} \begin{document} \author{Maria Chudnovsky\thanks {Supported by NSF Grant DMS-2348219 and by AFOSR grant FA9550-22-1-0083.}} \affil{\small Department of Mathematics, Princeton University, Princeton, USA} \author{Linda Cook\thanks{Supported by the Gravitation programme NETWORKS (NWO grant no. 024.002.003) of the Dutch Ministry of Education, Culture and Science (OCW) and a Marie Skłodowska-Curie Action of the European Commission (COFUND grant no.
945045) }} \affil{\small Korteweg-de Vries Institute for Mathematics, University of Amsterdam, The~Netherlands} \author{James Davies} \affil{\small Trinity Hall, University of Cambridge, United Kingdom} \author{Sang-il Oum\thanks{Supported by the Institute for Basic Science (IBS-R029-C1).}} \affil{\small Discrete Mathematics Group, Institute for Basic Science (IBS), Daejeon, South Korea} \author{Jane Tan} \affil{\small All Souls College, University of Oxford, United Kingdom} \affil[ ]{Email: \textsf{\href{mailto:[email protected]}{[email protected]}}, \textsf{\href{mailto:[email protected]}{[email protected]}}, \textsf{\href{mailto:[email protected]}{[email protected]}}, \textsf{\href{mailto:[email protected]}{[email protected]}}, \textsf{\href{mailto:[email protected]}{[email protected]}}} \maketitle \begin{abstract} Perfect graphs can be described as the graphs whose stable set polytopes are defined by their non-negativity and clique inequalities (including edge inequalities). In 1975, Chv\'{a}tal defined an analogous class of t-perfect graphs, which are the graphs whose stable set polytopes are defined by their non-negativity, edge inequalities, and odd circuit inequalities. We show that t-perfect graphs are $199053$-colourable. This is the first finite bound on the chromatic number of t-perfect graphs and answers a question of Shepherd from 1995. Our proof also shows that every h-perfect graph with clique number $\omega$ is $(\omega + 199050)$-colourable. \end{abstract} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction}\label{sec:intro} Let $G=(V,E)$ be a graph. A \emph{stable} set of $G$ is a set of pairwise non-adjacent vertices. We write $\chi(G)$ for the \emph{chromatic number} of $G$, that is, the minimum number of colours needed to colour the vertices of $G$ so that no two adjacent vertices have the same colour.
Equivalently, $\chi(G)$ is the minimum number of stable sets needed to cover the vertex set of $G$. A \emph{clique} is a set of vertices that are pairwise adjacent. We write $\omega(G)$ for the size of the largest clique in a graph $G$, called the \emph{clique number} of $G$. A graph is \emph{triangle-free} if it does not have a clique of size three. The stable set problem is the problem of finding the maximum size of a stable set in a graph. Lov\'asz~\cite{Lovasz1994} described it as one of the simplest and most fundamental problems concerning graphs. Since the stable set problem is NP-hard in general, it is natural to restrict to input graphs on which the stable set problem can be solved efficiently. In particular, one approach is to consider the polytope generated by stable sets and use techniques from linear programming. For a subset $S$ of $V$, we write $\chi^S\in \mathbb{R}^V$ to denote its \emph{incidence vector}, that is, the $0$-$1$ vector such that $\chi^S(v)=1$ if $v\in S$ and $\chi^S(v)=0$ otherwise. The \emph{stable set polytope} of a graph $G=(V,E)$ is defined as the convex hull of the incidence vectors of the stable sets of $G$. We denote it by $\ssp{G}$. Since the stable set polytope is the convex hull of a finite set of points, it can be described by a finite set of linear inequalities. If we can efficiently identify, for a given point $x^*$ outside the polytope, a linear inequality certifying that $x^*$ is outside the polytope, then by using the ellipsoid method, one can solve the maximum weight stable set problem in polynomial time~\cite{GLS1988}. This problem of identifying such a linear inequality for a polytope is called the \emph{separation problem}. So if the separation problem for the stable set polytope of a graph can be solved efficiently, then the stable set problem can be solved efficiently for the graph.
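For small graphs, these definitions can be made concrete by brute force. The following Python sketch is our own illustration (the helper names are not from the paper, and enumerating all subsets is feasible only for tiny graphs): it lists the stable sets of a graph and forms their incidence vectors, so that $\operatorname{SSP}(G)$ is the convex hull of the resulting $0$-$1$ vectors.

```python
from itertools import combinations

def stable_sets(n, edges):
    """All stable sets of a graph on vertices 0..n-1, by brute force."""
    result = []
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # S is stable iff no edge has both endpoints in S.
            if not any(u in S and v in S for u, v in edges):
                result.append(set(S))
    return result

def incidence_vector(S, n):
    """chi^S: the 0-1 vector with chi^S(v) = 1 exactly when v is in S."""
    return tuple(1 if v in S else 0 for v in range(n))

# The 5-cycle C_5: its stable sets are the empty set, the 5 singletons,
# and the 5 pairs of non-adjacent vertices -- 11 sets in total.
edges_c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
vecs = [incidence_vector(S, 5) for S in stable_sets(5, edges_c5)]
# SSP(C_5) is the convex hull of these 11 vectors in R^5.
```

For $C_5$ this yields eleven incidence vectors, the largest of which covers two vertices, matching the independence number of the $5$-cycle.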
Here are some easy inequalities that are satisfied by every point $x\in \mathbb{R}^{V(G)}$ of the stable set polytope of a graph $G$: \begin{enumerate}[label=(\alph*)] \item\label{eq:nonnegative} \emph{(Nonnegativity)} $x_v\ge 0$ for every vertex $v$ of~$G$. \item\label{eq:edge} \emph{(Edge inequality)} $x_u+x_v\le 1$ for every edge $uv$ of $G$. \item \label{eq:clique} \emph{(Clique inequality)} $\sum_{v\in K} x_v\le 1$ for every clique $K$ of $G$. \item\label{eq:oddcycle}\emph{(Odd cycle inequality)} $\sum_{v\in V(C)} x_v\le \frac{\abs{V(C)}-1}{2}$ for every odd cycle $C$ of $G$. \end{enumerate} Let $\qstab{G}$ be the set of all vectors $x\in \mathbb R^{V(G)}$ satisfying \ref{eq:nonnegative} and \ref{eq:clique}. Remarkably, important theorems by Lov\'asz~\cite{lovasz1972normal} and Fulkerson~\cite{perfectgraphpolytope-fulkerson} on perfect graphs, as stated in \vasek~\cite{chvatal-introduces-t-perfect-1975}, imply that \[ \ssp{G}=\qstab{G} \text{ if and only if $G$ is perfect.}\] Perfect graphs were introduced in the 1960s by Berge in terms of chromatic numbers and clique numbers. A graph $H$ is an \emph{induced} subgraph of a graph $G$ if $H$ can be obtained from $G$ by deleting vertices and all edges incident to them. A graph~$G$ is called \emph{perfect} if $\chi(H)=\omega(H)$ for every induced subgraph~$H$ of $G$. In what has since become known as the Strong Perfect Graph Theorem, Chudnovsky, Robertson, Seymour, and Thomas~\cite{CRST2006} proved a characterisation of the forbidden induced subgraphs for the class of perfect graphs. Motivated by perfect graphs, \vasek~\cite{chvatal-introduces-t-perfect-1975} initiated the study of t-perfect graphs. Let $\tstab{G}$ be the set of all vectors $x\in \mathbb R^{V(G)}$ satisfying \ref{eq:nonnegative}, \ref{eq:edge}, and \ref{eq:oddcycle}. We say that a graph $G$ is \emph{t-perfect}\footnote{Here `t' stands for `trou', the French word for `hole'.} if $\ssp{G}=\tstab{G}$.
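A small check (our own, with illustrative function names) shows why the odd cycle inequalities are not implied by the others: in the $5$-cycle every clique is an edge, so the all-$\frac12$ vector satisfies (a)--(c), yet it violates (d) because $\frac52 > \frac{5-1}{2} = 2$.

```python
from fractions import Fraction

def satisfies_edge_inequalities(x, edges):
    # (a) nonnegativity and (b) x_u + x_v <= 1 for every edge uv.
    return all(xv >= 0 for xv in x) and all(x[u] + x[v] <= 1 for u, v in edges)

def satisfies_odd_cycle_inequality(x, cycle):
    # (d) sum_{v in C} x_v <= (|C| - 1) / 2 for an odd cycle C.
    assert len(cycle) % 2 == 1
    return sum(x[v] for v in cycle) <= Fraction(len(cycle) - 1, 2)

edges_c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
half = [Fraction(1, 2)] * 5
print(satisfies_edge_inequalities(half, edges_c5))      # True
print(satisfies_odd_cycle_inequality(half, range(5)))   # False: 5/2 > 2
```

So the all-$\frac12$ vector lies in the edge relaxation of $C_5$ but outside its stable set polytope; the odd cycle inequality is exactly the cut that separates it.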
\begin{figure} \begin{center} \begin{tikzpicture}[scale=1, every node/.style={circle, draw, fill=black, inner sep=1pt, minimum size=2pt}] \foreach \i in {0,1,2} { \node (v\i) at (0,\i) {}; \node (u\i) at (2,\i) {}; \node (w\i) at (3,\i+1){}; \draw (v\i)--(u\i)--(w\i)--(v\i); } \draw (v1)--(u2)--(v0); \draw (v1)--(u0)--(v2); \draw (v0)--(u1)--(v2); \draw (w0)--(w1)--(w2); \draw (w0) [bend right] to (w2); \end{tikzpicture}\quad \begin{tikzpicture}[scale=1, every node/.style={circle, draw, fill=black, inner sep=1pt, minimum size=2pt}] \foreach \i in {0,1,2,3,4} { \node (v\i) at (90+72*\i:.6) {}; \node (w\i) at (90+72*\i:1.5) {}; \draw (v\i)--(w\i); } \draw (v0)--(v2)--(v4)--(v1)--(v3)--(v0); \draw (v0)--(w1)--(v2)--(w3)--(v4)--(w0)--(v1)--(w2)--(v3)--(w4)--(v0); \end{tikzpicture} \end{center} \caption{The only known $4$-critical t-perfect graphs in the literature. On the left is the complement of the line graph of the complement of $C_6$ found by Laurent and Seymour~\cite[p. 1207]{schrijver2003combinatorial}, and on the right is the complement of the line graph of the $5$-wheel found by Benchetrit~\cite{benchetrit-PhD,benchetrit20164critical}.} \label{fig:laurent-seymour} \end{figure} Let us write $\mathbf{1}\in \mathbb{R}^{V(G)}$ for the vector with all entries equal to $1$. It is easy to verify that $\frac{1}{3}\mathbf{1}\in \tstab{G}$ for every graph $G$; this implies both that $K_4$ is not t-perfect and that the fractional chromatic number of every t-perfect graph is at most three. In 1992, Shepherd~\cite[8.14]{jensen-toft} asked whether for each t-perfect graph $G$, the polytope $\ssp{G}$ has the \emph{integer decomposition property}, that is, for every positive integer $k$, every integral vector in $k \ssp{G}$ can be written as a sum of $k$ vertices of $\ssp{G}$.
If $G$ is t-perfect and $\ssp{G}$ has the integer decomposition property, then $\frac13\mathbf{1}\in \tstab{G}=\ssp{G}$ would be expressible as a sum of three incidence vectors of stable sets, implying that $G$ is $3$-colourable. However, two counterexamples, depicted in~\cref{fig:laurent-seymour}, were found by Laurent and Seymour~\cite[p. 1207]{schrijver2003combinatorial} in 1994 and Benchetrit~\cite{benchetrit-PhD,benchetrit20164critical} in 2015, respectively, answering Shepherd's question in the negative. On the other hand, Seb\H{o} conjectured that triangle-free t-perfect graphs are $3$-colourable~(see \cite{clawfreetperfect-maya}), and this is wide open. More generally, is every t-perfect graph $4$-colourable? This very natural question appears in the problem book of Jensen and Toft~\cite[8.14]{jensen-toft}, and is attributed to Shepherd from 1994. Reiterating this, Shepherd wrote in the conclusion of his 1995 paper \cite{shepherd-lehman}: \begin{quote} \itshape For every $k\ge 4$, it is not known whether each t-perfect graph is $k$-colourable. \end{quote} Our main result is the first positive answer to this question. We remark that we have optimised the proof for simplicity rather than for the best possible bound. \begin{theorem}\label{thm:main}\mbox{} Every t-perfect graph is $199053$-colourable. \end{theorem} Let $\hstab{G}$ be the set of all vectors $x\in \mathbb R^{V(G)}$ satisfying \ref{eq:nonnegative}, \ref{eq:clique}, and \ref{eq:oddcycle}. A graph $G$ is \emph{h-perfect} if $\ssp{G}=\hstab{G}$. By their definitions, we have the relationships $\ssp{G}\subseteq \hstab{G}\subseteq \tstab{G}$ and $\hstab{G}\subseteq \qstab{G}$, and t-perfect graphs are precisely h-perfect graphs without $K_4$ subgraphs. Obviously, every perfect graph is h-perfect and every t-perfect graph is h-perfect.
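The earlier remark that $K_4$ is not t-perfect can be verified directly: $\frac13\mathbf{1}$ satisfies every edge inequality of $K_4$ and every odd cycle inequality (the only odd cycles of $K_4$ are its four triangles, where the inequality holds with equality), yet every stable set of $K_4$ has at most one vertex, so every point of $\operatorname{SSP}(K_4)$ has coordinate sum at most $1$, while $\frac13\mathbf{1}$ sums to $\frac43$. A sanity check of this arithmetic (our own sketch, not from the paper):

```python
from fractions import Fraction
from itertools import combinations

x = [Fraction(1, 3)] * 4                    # the point (1/3, 1/3, 1/3, 1/3)
edges_k4 = list(combinations(range(4), 2))  # K_4: all six pairs are edges

# (b) edge inequalities: 1/3 + 1/3 <= 1 for every edge.
assert all(x[u] + x[v] <= 1 for u, v in edges_k4)

# (d) odd cycle inequalities: the only odd cycles of K_4 are its four
# triangles, and 1/3 + 1/3 + 1/3 = 1 <= (3 - 1)/2 = 1 holds with equality.
assert all(sum(x[v] for v in t) <= Fraction(3 - 1, 2)
           for t in combinations(range(4), 3))

# But SSP(K_4) = {x >= 0 : sum(x) <= 1}, since the stable sets of K_4 are
# the empty set and the singletons, and 1/3 * 1 lies outside it:
print(sum(x))  # 4/3
```

Hence $\frac13\mathbf{1}\in\operatorname{TSTAB}(K_4)\setminus\operatorname{SSP}(K_4)$, so the two polytopes differ.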
The study of h-perfect graphs was initiated by Sbihi and Uhry in 1984, who conjectured that every h-perfect graph $G$ with $\omega(G)\ge 3$ is $\omega(G)$-colourable~\cite[Conjecture 5.4]{SU1984}. This conjecture is false because of the graphs in \cref{fig:laurent-seymour}. However, we prove that it is true up to an additive constant. \begin{theorem}\label{thm:hperfect} Every h-perfect graph~$G$ is $(\omega(G)+ 199050)$-colourable. \end{theorem} We remark that \cref{thm:hperfect} is the first known result bounding the chromatic number of h-perfect graphs in terms of the clique number, establishing that the class of h-perfect graphs is a \say{$\chi$-bounded} class. For further background on t-perfect graphs, we refer to the chapter on t-perfect graphs in the book of Schrijver~\cite[Chapter 68]{schrijver2003combinatorial}. \subsection{Prior Work} Several subclasses of t-perfect graphs are known to be $3$-colourable. In 1982, Fonlupt and Uhry~\cite{fonlupt-uhry-1982} showed that if a graph $G$ has a vertex whose deletion results in a bipartite graph, then it is t-perfect. Obviously, such a graph is $3$-colourable. The \emph{claw} is the bipartite graph $K_{1,3}$, $P_5$ is the path on five vertices, and the \emph{fork} is the five-vertex graph obtained from $K_{1,3}$ by subdividing one edge. We say that a graph $G$ is \emph{$H$-free} if $G$ has no induced subgraph isomorphic to~$H$. Bruhn and Stein~\cite{clawfreetperfect-maya}, Bruhn and Fuchs~\cite{P5free-BruhnFuchs}, and Cao and Wang~\cite{cao2022forkfree} showed respectively that the classes of claw-free t-perfect graphs, $P_5$-free t-perfect graphs, and fork-free t-perfect graphs are all $3$-colourable. Additionally, classes of graphs forbidding certain subdivisions of $K_4$ have been proven to be t-perfect and $3$-colourable. Series-parallel graphs are graphs that do not contain a $K_4$-subdivision as a subgraph and they are well known to be $3$-colourable because they always have a vertex of degree at most two.
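The degree-based $3$-colouring of series-parallel graphs just mentioned can be sketched as follows. This is our own illustrative implementation; the theta graph used as the example (two vertices joined by three internally disjoint paths) is a small series-parallel instance we chose for the demonstration:

```python
def degeneracy_colouring(adj, d):
    """Properly colour, with d + 1 colours, a graph in which every
    subgraph has a vertex of degree at most d (for series-parallel
    graphs, d = 2 works)."""
    remaining = {u: set(vs) for u, vs in adj.items()}
    order = []
    while remaining:
        # A vertex of degree at most d always exists by assumption.
        u = next(v for v in remaining if len(remaining[v]) <= d)
        order.append(u)
        for w in remaining[u]:
            remaining[w].discard(u)
        del remaining[u]
    colour = {}
    for u in reversed(order):
        # At most d neighbours are already coloured, so a colour is free.
        used = {colour[w] for w in adj[u] if w in colour}
        colour[u] = next(c for c in range(d + 1) if c not in used)
    return colour

# A theta graph: vertices 0 and 1 joined by the paths 0-2-1, 0-3-1
# and 0-4-5-1.  It contains no subdivision of K_4.
theta = {0: {2, 3, 4}, 1: {2, 3, 5}, 2: {0, 1},
         3: {0, 1}, 4: {0, 5}, 5: {1, 4}}
colour = degeneracy_colouring(theta, 2)
assert all(colour[u] != colour[v] for u in theta for v in theta[u])
assert set(colour.values()) <= {0, 1, 2}
```

Peeling off low-degree vertices and colouring greedily in reverse is the standard degeneracy argument; it uses at most $d+1$ colours whenever every subgraph has a vertex of degree at most $d$.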
Series-parallel graphs are t-perfect, as conjectured by Chvat\'al~\cite{chvatal-introduces-t-perfect-1975} and proved by Boulala and Uhry~\cite{Boulala-Uhry-1979-series-parallele}. Subsequently, much stronger versions of this statement were also established. We call a graph an \emph{odd $K_4$} if it is a subdivision of $K_4$ such that every triangle of the $K_4$ maps to an odd cycle. In 1986, Gerards and Schrijver~\cite{GerardsSchrijver-NoOddK4istperfect-1986} showed that graphs containing no odd $K_4$ as a subgraph are t-perfect. Catlin~\cite{Catlin1979} showed that graphs having no odd $K_4$ as a subgraph are $3$-colourable. As a further strengthening, we say that a \emph{bad $K_4$} is a subdivision of $K_4$ that is not t-perfect.\footnote{Every bad $K_4$ is an odd $K_4$, but the converse is false. Barahona and Mahjoub proved a characterisation of bad $K_4$'s \cite{BM1994b, schrijver2003combinatorial}. } In 1998, Gerards and Shepherd~\cite{GS1998} showed that graphs having no bad $K_4$ as a subgraph are t-perfect. In addition, they showed that graphs containing no bad $K_4$ as a subgraph are $3$-colourable\footnote{In \cite{GS1998}, Gerards and Shepherd stated this but omitted the proof. However, it can be easily deduced from their structural description.}, extending the result of Catlin~\cite{Catlin1979}. \paragraph{Connection with $\chi$-boundedness} Motivated by the theory of perfect graphs, Gy\'arf\'as~\cite{Gyarfas1987} introduced the concept of $\chi$-boundedness. A class $\mathcal{C}$ of graphs is said to be \emph{$\chi$-bounded} if there exists a function $f$ such that every induced subgraph $H$ of a graph $G \in \mathcal{C}$ satisfies $\chi(H) \leq f(\omega(H))$. Such a function~$f$ is called a \emph{$\chi$-bounding function} for $\mathcal{C}$. If a class of graphs has a polynomial $\chi$-bounding function, then it is called \emph{polynomially $\chi$-bounded}. If it has a linear $\chi$-bounding function, then it is called \emph{linearly $\chi$-bounded}.
Trivially, the class of perfect graphs is linearly $\chi$-bounded. In general, hereditary $\chi$-bounded classes of graphs can have arbitrarily fast-growing $\chi$-bounding functions \cite{brianski2024separating}. Classical constructions of Zykov, Mycielski, and Tutte show that the class of all graphs is not $\chi$-bounded \cite{descartes1947three, descartes1954solution, Mycielski1955, zykov1949}. See a survey by Scott and Seymour~\cite{SS2018} for more on $\chi$-boundedness. \cref{thm:hperfect} implies that the class of h-perfect graphs is linearly $\chi$-bounded, and it is the first result showing that this class is $\chi$-bounded by any function. Since t-perfect graphs are precisely $K_4$-free h-perfect graphs, the statement that the class of h-perfect graphs is $\chi$-bounded implies that there is a constant $k$ so that every t-perfect graph is $k$-colourable. Because \cref{thm:hperfect} is a corollary of \cref{thm:main}, this is not an alternative proof of a finite bound on the chromatic number of t-perfect graphs. \subsection{Our approach} Let us now sketch how we prove \cref{thm:main}. Our first step is to reduce the problem to the case where there are no \say{short} odd cycles. For that, we will use a nice lemma, which originated from Sbihi and Uhry~\cite{SU1984} and was explicitly stated in the Ph.D.~thesis of Marcus~\cite{Marcus1996}. It allows us to find a single stable set intersecting all shortest odd cycles of a t-perfect graph. Removing such a stable set increases the length of a shortest odd cycle by at least two and decreases the chromatic number by at most one. By repeatedly applying this argument, we reduce our problem to proving a bound on the chromatic number of an arbitrary t-perfect graph $G$ with no short odd cycles. In our case, we only need that $G$ has no odd cycles of length less than $11$. After this reduction step, we will use techniques developed for proving $\chi$-bounded\-ness for graph classes.
We assume for a contradiction that $G$ has \say{large} chromatic number and no short odd cycles. By a standard inductive argument, we may assume that $G$ is connected. The first tool we use is to partition the vertex set into levels $L_0$, $L_1$, $L_2$, $\ldots$ where $L_i$ is the set of vertices of distance exactly $i$ from a fixed vertex $v$. By a standard and simple method for proving $\chi$-boundedness, there is an integer~$i$ so that $\chi(G[L_i])\ge \frac12 \chi(G)$. Since there are no short odd cycles, $i$ has to be larger than some constant. Since $G[L_i]$ has large chromatic number, we can prove that it must contain various structures as induced subgraphs. We will use this fact to show that $G$ must contain an induced subgraph that is not t-perfect, a contradiction. An \emph{odd wheel} is a graph consisting of an odd cycle and a vertex, called its \emph{center}, that is adjacent to every vertex on the cycle. It is an easy consequence of the definition of t-perfection that odd wheels are not t-perfect. Graphs with high chromatic number need not contain any wheels as an induced subgraph \cite{davies2023triangle,pournajafi2023burling}, but fortunately for us it is enough to show that $G$ contains an odd wheel as a \say{t-minor} \cite{GS1998}. A \emph{t-minor} of a graph~$G$ is a graph obtained from~$G$ by a sequence of vertex deletions and t-contractions, where a \emph{t-contraction} is the operation of contracting all edges incident with a fixed vertex~$w$ whose neighbourhood is a stable set. Gerards and Shepherd~\cite{GS1998} showed that every t-minor of a t-perfect graph is t-perfect. The odd cycles of a graph are critical in determining whether it is t-perfect. The parity of cycles is preserved under taking t-minors, and so we need to be careful with parity when we hunt for an odd wheel t-minor in $G$.
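The parity preservation just mentioned can be illustrated with a minimal sketch of our own (assuming a dict-of-sets adjacency representation of a simple graph): t-contracting at a vertex of the odd cycle $C_7$, whose two neighbours are non-adjacent, yields the odd cycle $C_5$, so the cycle stays odd.

```python
def t_contraction(adj, v):
    """Contract all edges incident with v into a single vertex (kept
    under the name v); valid only when N(v) is a stable set."""
    nbrs = adj[v]
    assert all(w not in adj[u] for u in nbrs for w in nbrs), "N(v) must be stable"
    merged = nbrs | {v}
    outside = set(adj) - merged
    new_adj = {u: (adj[u] - merged) | ({v} if adj[u] & merged else set())
               for u in outside}
    new_adj[v] = {u for u in outside if adj[u] & merged}
    return new_adj

# C_7, an odd cycle; the neighbours of vertex 0 are 1 and 6, which are
# non-adjacent, so a t-contraction at 0 is allowed.
c7 = {i: {(i - 1) % 7, (i + 1) % 7} for i in range(7)}
c5 = t_contraction(c7, 0)
# The result is again an odd cycle, now of length 5.
assert len(c5) == 5
assert all(len(ns) == 2 for ns in c5.values())
```

Each t-contraction replaces a path of length two through the contracted star by a single vertex, which shortens every cycle through it by an even amount.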
We overcome this obstacle by introducing a structure we call an \say{arithmetic rope}, which we find to be of independent interest. By using the assumption that $G[L_i]$ has large chromatic number, we will show that $G[L_i]$ contains a 5-arithmetic rope. Our proof method is similar to that of Chudnovsky, Scott, Seymour, and Spirkl~\cite{CSSS20}, who showed that the class of graphs without long odd induced cycles is $\chi$-bounded. Informally, our $5$-arithmetic rope consists of $5$ vertices $q_1$, $q_2$, $q_3$, $q_4$, $q_5$ (with $q_6=q_1$) inside $L_i$ pairwise far apart in~$G$ such that for each $j\in\{1,2,\ldots,5\}$, $G[L_i]$ has both an odd-length induced path and an even-length induced path from $q_j$ to $q_{j+1}$ so that any choice of one of the two paths for each $j$ gives an induced cycle containing all $5$ vertices. Once we have shown that $G[L_i]$ contains a 5-arithmetic rope $\mathcal{Q}$, to conclude that $G$ contains an odd wheel t-minor it suffices to show that there is a way of extending some choice of an odd cycle of our 5-arithmetic rope into an odd wheel t-minor. This will follow from the fact that $G$ has no short odd cycles, the definition of the levelling, and the property that the vertices $q_1, q_2, q_3, q_4, q_5$ are pairwise far apart. These arguments, along with the idea of arithmetic ropes, comprise the bulk of what is novel in our paper. We provide a more detailed sketch below. \paragraph{Extracting an odd wheel t-minor} For each~$j\in\{1,2,\ldots,5\}$, we choose one vertex $x_j$ in $L_{i-1}$ that is adjacent to~$q_j$ and a vertex $y_j$ in $L_{i-2}$ that is adjacent to~$x_j$. As $q_1$, $q_2$, $q_3$, $q_4$, $q_5$ are pairwise far apart, these vertices $x_1$, $x_2$, $x_3$, $x_4$, $x_5$, $y_1$, $y_2$, $y_3$, $y_4$, $y_5$ are all distinct and each of $x_1$, $x_2$, $x_3$, $x_4$, $x_5$ has degree $1$ in the subgraph of $G$ induced by these $10$ vertices. 
A simple lemma will show that $G[L_0\cup L_1\cup \cdots \cup L_{i-3}\cup \{x_1,x_2,x_3,x_4,x_5,y_1,y_2,y_3, y_4,y_5\}]$ contains a connected bipartite induced subgraph~$H$ containing $\{x_1,x_2,x_3,x_4,x_5\}$. Three out of these five vertices, say $x_a$, $x_b$, and $x_c$ with $a<b<c$, will be on the same side of the bipartite subgraph by the pigeonhole principle. We will delete the other two vertices in $\{x_1,x_2,x_3,x_4,x_5\}\setminus \{x_a,x_b,x_c\}$ from $H$ to obtain a connected bipartite induced subgraph~$H'$. Now, we take three odd-length induced paths from $q_a$ to $q_b$, from~$q_b$ to $q_c$, and from~$q_c$ to $q_a$ to obtain an odd induced cycle $C$ in $G[L_i]$ such that $q_a$, $q_b$, and~$q_c$ split $C$ into three odd-length induced paths. We then consider the subgraph $G'$ of $G$ induced by $V(H')\cup V(C)$. We will then show that this structure gives us an odd wheel as a t-minor of~$G'$. By applying t-contractions to every vertex of $H'$ on the side of the bipartite subgraph $H'$ not containing $\{x_a, x_b,x_c\}$, we will obtain a t-minor $G''$ where all vertices of $H'$ are identified as a single vertex, say~$w$. So, $G''$ is a graph consisting of an odd cycle~$C$ with an extra vertex $w$ such that $w$ is adjacent to $q_a$, $q_b$, and~$q_c$ and possibly more. Let $G'''$ be the graph obtained from $G''$ by repeatedly applying t-contractions to degree-$2$ vertices. Since $q_a$, $q_b$, and $q_c$ split~$C$ into odd-length paths and each t-contraction preserves the parity of the length of these paths in $C$, we deduce that $G'''$ is an odd wheel with at least three vertices on the rim. This implies that an odd wheel is a t-minor of $G''$, and hence of $G$, contradicting the assumption that $G$ is t-perfect.
We remark that this is the reason why we want to construct a $5$-arithmetic rope: if we do not enforce any parity conditions on the length of subpaths of~$C$ split by neighbours of~$w$, then we may end up identifying two of $q_a$, $q_b$, and $q_c$, giving a t-perfect~$G'''$, failing to provide a contradiction. \subsection{Organisation} This paper is organised as follows. In \cref{sec:preliminaries} we will review definitions and key properties of t-perfect graphs, including t-minors. In \cref{sec:overview} we will give an overview of our proof of \cref{thm:main} and explain how we reduce the problem to graphs with no short odd cycles and how we deduce \cref{thm:hperfect} from \cref{thm:main}. In \cref{sec:ropes}, we will introduce the concept of arithmetic ropes, which will give us a cyclic structure with the correct parities of path lengths inside a graph of large chromatic number. In \cref{sec:oddwheel}, we will complete the proof by finding an odd wheel as a t-minor in a graph of large chromatic number without short odd cycles. In \cref{sec:conclusion}, we will conclude the paper with discussions and open problems. \section{Preliminaries}\label{sec:preliminaries} \label{sec:prelim} All graphs in this paper are simple, having no loops and no parallel edges. For two graphs $H$ and $G$, we say $G$ is \emph{$H$-free} if $G$ has no induced subgraph isomorphic to $H$. A \emph{stable} set of a graph is a set of pairwise non-adjacent vertices. A graph is \emph{$k$-colourable} if its vertex set is a union of $k$ stable sets. The \emph{chromatic number} $\chi(G)$ of a graph $G$ is the minimum non-negative integer $k$ such that $G$ is $k$-colourable. For $X\subseteq V(G)$, we often use $\chi(X)$ to denote $\chi(G[X])$, where $G[X]$ is the \emph{induced subgraph} of $G$ on vertex set $X$. A graph is called \emph{$k$-critical} if its chromatic number is $k$ and removing any vertex decreases the chromatic number.
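As a concrete illustration of the last two definitions (a brute-force sketch of ours, not used in the paper), one can check that the odd cycle $C_5$ has chromatic number $3$ and is $3$-critical:

```python
from itertools import product

def chromatic_number(adj):
    """Smallest k admitting a proper k-colouring, by exhaustive search."""
    vs = sorted(adj)
    for k in range(1, len(vs) + 1):
        for col in product(range(k), repeat=len(vs)):
            c = dict(zip(vs, col))
            if all(c[u] != c[v] for u in vs for v in adj[u] if u < v):
                return k

def is_k_critical(adj, k):
    """chi(G) = k, and deleting any single vertex lowers it."""
    if chromatic_number(adj) != k:
        return False
    for v in adj:
        sub = {u: adj[u] - {v} for u in adj if u != v}
        if chromatic_number(sub) >= k:
            return False
    return True

# C_5: an odd cycle, hence chromatic number 3; deleting any vertex
# leaves a path, which is 2-colourable, so C_5 is 3-critical.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert chromatic_number(c5) == 3
assert is_k_critical(c5, 3)
```

Exhaustive search is of course only feasible for tiny graphs, but it suffices to make the definitions concrete.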
A \emph{hole} is an induced cycle of length at least $4$. An \emph{anti-hole} is the complement of a hole. We write $N_G(v)$ to denote the set of all neighbours of a vertex $v$ in $G$ and let $N_G[v]=\{v\}\cup N_G(v)$. For a non-negative integer $r$, we write $N_G^r(v)$ to denote the set of all vertices of distance exactly $r$ from $v$ in $G$ and $N_G^r[v]$ to denote the set of all vertices of distance at most $r$ from $v$ in $G$. We may omit the subscript $G$ if it is clear from the context. A vertex set $B$ \emph{covers} a vertex set $C$ if $B$ and $C$ are disjoint and every vertex in $C$ has a neighbour in~$B$. For a graph $G$, the \emph{$G$-distance} between two vertices $x$ and $y$ is the length of a shortest path from $x$ to $y$. A vertex set $X$ is \emph{anticomplete} to a vertex set $Y$ if every vertex in $X$ is non-adjacent to all vertices in $Y$. A \emph{t-contraction} is an operation to contract all edges in $G[N_G[v]]$ for some vertex~$v$ whose set of neighbours is stable. A graph $H$ is a \emph{t-minor} of a graph $G$ if $H$ can be obtained from~$G$ by a sequence of vertex deletions and t-contractions. Gerards and Shepherd~\cite{GS1998} showed the following. \begin{lemma}[Gerards and Shepherd~\cite{GS1998}]\label{lem:tminor} Every t-minor of a t-perfect graph is t-perfect. \end{lemma} For completeness, we include an expanded version of the proof in Gerards and Shepherd~\cite[(14)]{GS1998}. \begin{proof} It is an easy and well-known fact that an induced subgraph of a t-perfect graph is t-perfect; see Eisenbrand et al.~\cite[Lemma 4.1]{combinatorial-t-perfect}. Thus, it is enough to prove that if $G$ is a t-perfect graph and $u \in V(G)$ has degree at least two and $N_G(u)$ is a stable set, then the graph $\tilde{G}$ obtained by contracting all edges incident with $u$ is also t-perfect. Let us write $\tilde u$ to denote the vertex of~$\tilde G$ corresponding to $u$ of $G$. Let $x$ be a vertex of $\tstab{\tilde G}$.
We define $y\in \mathbb R^{V(G)}$ by \[ y_v:= \begin{cases} x_v & \text{if $v\notin N_G[u]$},\\ x_{\tilde u} & \text{if $v$ is adjacent to $u$},\\ 1-x_{\tilde u} & \text{if $v=u$}. \end{cases} \] Since $x$ is a vertex, there is a set of tight inequalities from the definition of $\tstab{\tilde G}$ that determines~$x$. Then it is easy to observe that $y$ is determined by the corresponding inequalities for $\tstab{G}$ together with the additional constraints $y_u+y_{v}=1$ for every $v\in N_G(u)$. Thus, $y$ is also a vertex of $\tstab{G}$. Since $G$ is t-perfect, $y$ is the incidence vector of some stable set $S$ of $G$. Thus, $S\cap N_G[u]$ is either $\{u\}$ or $N_G(u)$. Let $\tilde S:=S\setminus \{u\}$ if $u\in S$ and $\tilde S:=(S\setminus N_G(u))\cup \{\tilde u\}$ otherwise. Then $\tilde S$ is a stable set of $\tilde G$ and $x$ is the incidence vector of $\tilde S$. Thus $x\in \ssp{\tilde G}$. Therefore $\tilde G$ is t-perfect. \end{proof} For positive integers $k\ge 3$, the \emph{$k$-wheel} $W_k$ is the graph consisting of a cycle of length~$k$ and a single additional vertex adjacent to every vertex of the cycle. An \emph{odd wheel} is any $k$-wheel with $k$ odd. We will need the simple fact that odd wheels are not t-perfect. \begin{lemma}[Folklore; see~{\cite[Proposition 3.6.5]{benchetrit-PhD}}]\label{lem:oddwheelforbid} No odd wheel is t-perfect. \end{lemma} \begin{proof} Let $G$ be an odd wheel with a vertex $v$ such that $G-v$ is a cycle. Clearly $\frac13\mathbf{1} \in \tstab{G}$. Suppose that $\tstab{G}=\ssp{G}$. Then $\frac13\mathbf{1} =\sum_{i=1}^k \lambda_i \mathbf{1}_{S_i}$ for some stable sets $S_1,\ldots,S_k$ and positive reals $\lambda_1,\ldots,\lambda_k$ such that $\sum_{i=1}^k \lambda_i=1$. We may assume that $\lambda_1=\frac13$ and $S_1=\{v\}$ because for every stable set $S$ of~$G$, either $v\notin S$ or $S=\{v\}$.
This implies that $S_2$, $S_3$, $\ldots$, $S_k$ are stable sets in $G-v$, and therefore $\abs{S_i}< \frac{\abs{V(G)}-1}{2}$ for all $i\in \{2,3,\ldots,k\}$. Then, $\frac{\abs{V(G)}}{3}=\mathbf{1} \cdot \frac13\mathbf{1} = \sum_{i=1}^k \lambda_i \abs{S_i} < \frac{1}{3}+\frac23\cdot\frac{\abs{V(G)}-1}{2}=\frac{\abs{V(G)}}{3}$, which is a contradiction. \end{proof} Observe that \cref{lem:oddwheelforbid} immediately implies that no t-perfect graph contains a clique of size four. The \emph{odd girth} of a graph is the length of a shortest odd cycle. (If it has no odd cycle, then it is $\infty$.) We shall freely use the following basic observation throughout the paper. \begin{observation}\label{lem:ball} Let $r$ be a non-negative integer. If the odd girth of a graph~$G$ is larger than $2r+1$, then $\chi(G[N_G^r[v]])\le 2$ for every vertex~$v$. \qed \end{observation} \section{Reducing to the case where there are no short odd cycles}\label{sec:overview} Our main technical result, which implies \cref{thm:main} and in turn \cref{thm:hperfect}, is as follows: \begin{theorem}\label{thm:technical-main} Every graph $G$ with odd girth at least $11$ that contains no odd wheel as a t-minor is $199049$-colourable. \end{theorem} \paragraph{Remarks on \cref{thm:technical-main}} With more technical arguments we are able to prove that triangle-free graphs containing no odd wheel as a t-minor have bounded chromatic number, but we do not know how to do this for graphs containing triangles. It was recently shown by Carbonero, Hompe, Moore, and Spirkl \cite{carbonero2023counterexample} that there are $K_4$-free graphs with arbitrarily large chromatic number that contain no triangle-free induced subgraph with chromatic number more than four. In general, it is not true that graphs without a t-minor~$H$ are $\chi$-bounded.
This is because if a graph $G$ contains a graph $H$ as a t-minor, then $G$ also contains $H$ as an induced minor, and Pawlik, Kozik, Krawczyk, Laso{\'n}, Micek, Trotter, and Walczak~\cite{pawlik2014triangle} showed that there are graphs $H$ such that the class of graphs not containing $H$ as an induced minor is not $\chi$-bounded. There are also other similar notions of wheels, such as a graph consisting of an induced cycle of length at least four and an extra vertex with at least three neighbours on the cycle, whose exclusion as an induced subgraph does not imply $\chi$-boundedness \cite{davies2023triangle,pournajafi2023burling}. \\ The remainder of this section is dedicated to deriving \cref{thm:main} and then \cref{thm:hperfect} assuming \cref{thm:technical-main}. We will use a lemma in the Ph.D.~thesis of Marcus~\cite[Proposition 3.6]{Marcus1996} to reduce the problem to the case where there are no short odd cycles. Such an argument also appeared earlier in Sbihi and Uhry~\cite[Proposition 5.3]{SU1984}. \begin{lemma}\label{color-reduction} If every t-perfect graph with no odd cycle of length at most $2k+1$ is $c$-colourable, then every t-perfect graph is $(c+k)$-colourable. \end{lemma} We write $\chi^*(G)$ to denote the \emph{fractional chromatic number} of a graph $G$, which is defined as the minimum of $\sum_{S\in\mathcal S}f(S)$ over all functions $f$, where $\mathcal S$ is the set of all stable sets of~$G$, subject to the conditions that \begin{enumerate}[label=(\roman*)] \item $\sum_{S\in \mathcal S: v\in S} f(S)\ge 1$ for every vertex $v$ of $G$, and \item $f(S)\ge 0$ for all $S\in \mathcal S$. \end{enumerate} The following lemma is due to Sbihi and Uhry~\cite[Theorem 5.2]{SU1984}; see Schrijver~\cite[(68.81)]{schrijver2003combinatorial}. \begin{lemma}\label{lem:fractionalcolor} Let $G$ be a t-perfect graph. Let $\ell$ be a positive integer. If $G$ has no odd cycle of length less than $2\ell+1$, then $\chi^*(G)\le 2+\frac{1}{\ell}$.
The equality holds if and only if $G$ has an odd cycle of length $2\ell+1$. \end{lemma} We remark that Bruhn and Fuchs~\cite{P5free-BruhnFuchs} conjectured that a graph $G$ is t-perfect if and only if $\chi^*(H)=2+\frac{2}{\operatorname{oddgirth}(H)-1}$ for every non-bipartite t-minor~$H$ of~$G$. The above lemma is the easy direction of this. For completeness, we include its proof. \begin{proof} Let $G=(V,E)$ be a t-perfect graph with no odd cycle of length less than $2\ell+1$. Then $\frac{1}{2+(1/\ell)}\mathbf{1}$ is in $\tstab{G}$ because every odd cycle of $G$ has length at least $2\ell+1$. Since $G$ is t-perfect, $\frac{1}{2+({1}/{\ell})}\mathbf{1}$ is a convex combination of the incidence vectors of stable sets of~$G$. This means that $\mathbf{1}$ is a non-negative linear combination of the incidence vectors of stable sets of~$G$ where the sum of the coefficients is $2+\frac{1}{\ell}$ and therefore $\chi^*(G)\le 2+\frac{1}{\ell}$. If $G$ has an odd cycle of length $2\ell+1$, then, since the cycle of length $2\ell+1$ has fractional chromatic number $2+\frac{1}{\ell}$, we have $\chi^*(G)\ge 2+\frac{1}{\ell}$ and the equality holds. Conversely, if $G$ has no odd cycle of length $2\ell+1$, then $G$ has no odd cycle of length less than $2(\ell+1)+1$, so the argument above gives $\chi^*(G)\le 2+\frac{1}{\ell+1}<2+\frac{1}{\ell}$. \end{proof} In the next lemma, we show that there is a stable set hitting all shortest odd cycles of a t-perfect graph, because the fractional chromatic number of a t-perfect graph is attained on its shortest odd cycle. This idea was used by Sbihi and Uhry~\cite[Proposition 5.3]{SU1984} to show that if all odd induced cycles of a t-perfect graph have the same length, then it is $3$-colourable. Later, Marcus~\cite[Proposition 3.30]{Marcus1996} used this idea repeatedly to show that the chromatic number of an h-perfect graph~$G$ with at most $t$ distinct lengths of odd induced cycles is at most $\omega(G)+t$. \begin{lemma}[Marcus~{\cite[Proposition 3.6]{Marcus1996}}]\label{lem:reduce} Let $G$ be a t-perfect graph of odd girth at least $2\ell+1$. Then $G$ contains a stable set~$S$ such that $\abs{S}\ge \frac{\ell}{2\ell+1}\abs{V(G)}$ and $G-S$ has odd girth at least $2\ell+3$.
\end{lemma} It is possible to deduce from \cref{lem:reduce} that $\chi(G)\le O(\log \abs{V(G)})$ for t-perfect graphs~$G$, as explained in~\cite[Proposition 3.36]{Marcus1996}. As the thesis of Marcus~{\cite[Proposition 3.6]{Marcus1996}} is written in French, we include a proof of \cref{lem:reduce} for completeness. \begin{proof} We may assume that $G$ has an odd cycle of length $2\ell+1$. By~\cref{lem:fractionalcolor}, $\chi^*(G)=2+\frac{1}{\ell}$. So there is a list of stable sets $S_1,\ldots,S_k$ of $G$ and a list of positive weights $w_1,w_2,\ldots,w_k$ such that $w_1+w_2+\cdots+w_k=2+\frac{1}{\ell}$ and for each vertex $v$ of $G$, $\sum_{i: v\in S_i}w_i\ge 1$. Then $\abs{V(G)}\le \sum_{i=1}^k w_i \abs{S_i}\le (\sum_{i=1}^k w_i)\max_{j} \abs{S_j}$ and therefore there is $j$ such that $\abs{S_j}\ge \frac{\ell}{2\ell+1}\abs{V(G)}$. By symmetry, we may assume that $j=1$. Let $H=G-S_1$. Then $H$ is a t-perfect graph such that $\chi^*(H)\le 2+\frac{1}{\ell}-w_1<2+\frac{1}{\ell}$ and therefore, by \cref{lem:fractionalcolor}, $H$ has no odd cycle of length $2\ell+1$. Thus $S_1$ is a stable set that intersects every odd cycle of length $2\ell+1$, and $G-S_1$ has odd girth at least $2\ell+3$. \end{proof} \cref{color-reduction} is a direct consequence of \cref{lem:reduce}. Assuming \cref{thm:technical-main}, we can now prove \cref{thm:main}. \begin{proof}[Proof of \cref{thm:main} assuming \cref{thm:technical-main}.] By \cref{lem:tminor,lem:oddwheelforbid}, no t-perfect graph contains an odd wheel as a t-minor. Hence, by \cref{thm:technical-main}, every t-perfect graph with no odd cycle of length at most $9$ is $199049$-colourable, and so by \cref{color-reduction} applied with $k=4$, every t-perfect graph is $199053$-colourable. \end{proof} The same proof allows us to deduce the following theorem from \cref{thm:technical-main}. \begin{theorem}\label{thm:trianglefree} Every triangle-free t-perfect graph is $199052$-colourable. \qed \end{theorem} \begin{remark} \cref{thm:technical-main} is strictly stronger than \cref{thm:main}.
There are t-imperfect graphs, such as sufficiently large even M\"{o}bius ladders \cite{shepherd-lehman}, which have arbitrarily large odd girth and do not contain an odd wheel as a t-minor. \end{remark} Next, we show that \cref{thm:trianglefree} implies \cref{thm:hperfect}. To see why this is the case, we use the following well-known fact about h-perfect graphs, due to Seb\H{o} in~\cite[Lemma 26]{clawfreetperfect-maya}, also in~\cite[Fact 3.2]{Marcus1996}. \begin{lemma}[{Seb\H{o} in~\cite[Lemma 26]{clawfreetperfect-maya}}, {\cite[Fact 3.2]{Marcus1996}}]\label{lem:reduceomega} Let $G$ be an h-perfect graph with $\omega(G)\ge3$. Then $G$ contains a stable set $S$ such that $\omega(G-S)<\omega(G)$. \end{lemma} This follows from the observation that $\frac{1}{\max(\omega(G),3)}\mathbf{1}\in \hstab{G}$ and the following fact, see Sbihi and Uhry~\cite[Theorem 5.2]{SU1984} or Bruhn and Stein~\cite[(14)]{clawfreetperfect-maya}: \[ \chi^*(G)=\omega(G) \quad\text{if $G$ is h-perfect and $\omega(G)\ge 3$.} \] Since every induced subgraph of an h-perfect graph is h-perfect, the same argument as in the proof of \cref{lem:reduce} yields the following proof. \begin{proof}[Proof of \cref{lem:reduceomega}] There is a stable set~$S$ of~$G$ such that $\chi^*(G-S)<\chi^*(G)$, as in the proof of \cref{lem:reduce}. We may assume that $\omega(G-S)\ge 3$, because otherwise the conclusion is trivial. Since both $G$ and $G-S$ are h-perfect, we have $\chi^*(G-S)=\omega(G-S)<\chi^*(G)=\omega(G)$. \end{proof} \begin{proof}[Proof of \cref{thm:hperfect} assuming \cref{thm:trianglefree}] Let $G$ be an h-perfect graph. We may assume that $\omega(G)\ge 2$. By repeatedly applying \cref{lem:reduceomega}, we find $\omega(G)-2$ stable sets in~$G$ whose deletion leaves a triangle-free h-perfect induced subgraph~$H$. Such a graph $H$ is t-perfect. By \cref{thm:trianglefree}, $H$ is $199052$-colourable. Therefore, $G$ is $(\omega(G)+ 199050)$-colourable, as desired.
\end{proof} The proof of \cref{thm:technical-main} is split into two parts. In the next section we show how to find so-called arithmetic ropes as induced subgraphs within graphs of large chromatic number and odd girth at least $11$. Then, in Section~\ref{sec:oddwheel}, we shall construct odd wheel t-minors in graphs of large chromatic number by using arithmetic ropes as a useful gadget. \section{Arithmetic ropes} \label{sec:ropes} An \emph{$r$-arithmetic rope} is a graph consisting of $2r$ paths $Q_{1,1}, Q_{1,2}, \ldots , Q_{r,1}, Q_{r,2}$ with ends contained in a vertex set $\{q_1,\ldots , q_r\}$ so that (taking indices modulo $r$) \begin{itemize} \item for every $1\le i \le r$, we have that $Q_{i,1}$ and $Q_{i,2}$ are $q_iq_{i+1}$-paths with odd and even length, respectively, \item for every $(h_1, \ldots , h_r) \in \{1,2\}^r$, the graph $Q_{1,h_1} \cup Q_{2,h_2} \cup \cdots \cup Q_{r,h_r}$ is an induced cycle, and \item the vertices $q_1,\ldots , q_r$ are pairwise at $G$-distance at least $5$. \end{itemize} We denote such an arithmetic rope by $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$. An example of such a graph is shown in \cref{fig:5arope}. The key feature of an $r$-arithmetic rope is that by choosing suitable $(h_1, \ldots , h_r) \in \{1,2\}^r$, one can control the parity of paths between $q_i$ and $q_j$ in an induced cycle containing $q_1, \ldots , q_r$. \begin{figure} \begin{center} \begin{tikzpicture}[scale=1, every node/.style={circle, draw, fill=black, inner sep=1pt, minimum size=2pt}] \node [label=$q_1$] at (0,3) (q1) {}; \node [label=$q_2$] at (2,3.2) (q2) {}; \node [label=$q_3$] at (4,2) (q3) {}; \node [label=$q_4$] at (3,0) (q4) {}; \node [label=$q_5$] at (1,0) (q5) {}; \draw[thick,dotted] (q1) .. controls (0.5,3.5) and (1.5,2.5) .. (q2); \draw[thick,dotted] (q2) .. controls (3,4) and (3.5,1.5) .. (q3); \draw[thick,dotted] (q3) .. controls (4.5,1) and (3.3,-0.3) .. (q4); \draw[thick,dotted] (q4) .. controls (2,1) and (2,-.5) .. (q5); \draw[thick,dotted] (q5) ..
controls (-0.5,.5) and (1.5,2.5) .. (q1); \draw[thick,dashed] (q1) .. controls (0.8,3) and (1.2,4) .. (q2); \draw[thick,dashed] (q2) .. controls (2.8,2) and (3,3) .. (q3); \draw[thick,dashed] (q3) .. controls (3.5,.5) and (5,-0.3) .. (q4); \draw[thick,dashed] (q4) .. controls (2,0) and (2,.5) .. (q5); \draw[thick,dashed] (q5) .. controls (-2,1.5) and (3.5,2) .. (q1); \end{tikzpicture} \end{center} \caption{A $5$-arithmetic rope. Dotted lines represent odd-length paths and dashed lines represent even-length paths.} \label{fig:5arope} \end{figure} The aim of this section is to prove the following. \begin{theorem} \label{thm:ropefinding} For any $r \in \mathbb{N}$, every graph $G$ with odd girth at least $11$ and $X\subseteq V(G)$ with $\chi(X) \ge 6^{r+1}+\frac{34}{5}(6^r-1)-1$ contains an $r$-arithmetic rope in $X$. \end{theorem} Chudnovsky, Scott, Seymour, and Spirkl~\cite{CSSS20} proved that the class of graphs without an induced long odd cycle is $\chi$-bounded. In one part of the proof, they find three paths $P$, $Q_1$, $Q_2$ where $P$ is long, $Q_1$ has odd length, $Q_2$ has even length, and both $P\cup Q_1$ and $P\cup Q_2$ are (long) holes. By definition, one of $P \cup Q_1$, $P\cup Q_2$ is odd and the other is even. Roughly, the idea of the proof of \cref{thm:ropefinding} is to carefully extend these ideas in order to iterate and find many such pairs of paths $Q_{i,1}$, $Q_{i,2}$ attached in order. Although doing this adds some difficulty, we are also able to simplify parts significantly due to the stronger odd girth assumption. We remark that the idea from \cite{CSSS20} of finding paths $P$, $Q_1$, $Q_2$ is itself a variant of an older idea: \cite{recognizing-berge-graphs} introduces a precursor in which the restriction on the length of $P$ is omitted. Let $G$ be a graph. We define a stable grading to be an ordered partition of $V(G)$ into stable sets $(W_1, \dots, W_n)$.
Formally, a \emph{stable grading} is a sequence $(W_1, \ldots , W_n)$ of pairwise disjoint stable sets of $V(G)$ such that their union is $V(G)$. We say that $u \in V(G)$ is \emph{earlier} than $v \in V(G)$ with respect to a stable grading $(W_1, \ldots , W_n)$ if $u \in W_i$ and $v \in W_j$ for some $i < j$. We require a couple of preliminary lemmas regarding stable gradings before we can start \say{building} part of an arithmetic rope. \begin{lemma}\label{lem:earlier1} Let $c \ge 1$ and let $(W_1, \ldots, W_n)$ be a stable grading of a graph $G$ with $\chi(G) \ge c + 2$. Then there exists a subset $X$ of $V(G)$ and an edge $uv$ of $G$ with the following properties: \begin{itemize} \item $G[X]$ is connected, \item $\chi(X) \ge c$, \item $u$ and $v$ are earlier than every vertex in $X$, and \item at least one of $u$ or $v$ has a neighbour in $X$. \end{itemize} \end{lemma} \begin{proof} We say that a vertex $w\in V(G)$ is \emph{left active} if there exists an edge $uv$ of $G$ such that $u$ and $v$ are earlier than $w$ and $w$ is adjacent to at least one of $u$ or $v$. Let $A$ be the set of vertices of $G$ that are left active, and let $B=V(G)\setminus A$. Suppose first that $\chi(A)\ge c$. Let $C$ be a connected component of $G[A]$ with $\chi(C)\ge \chi(A) \ge c$. Let $i$ be the minimum integer such that $1 \le i \le n$ and $W_i \cap V(C)$ is non-empty and let $w$ be a vertex of $W_i \cap V(C)$. Since $w$ is left active, there exists an edge $uv$ of $G$ such that $u$ and $v$ are earlier than $w$ (and thus earlier than every vertex of $C$), and $w$ is adjacent to at least one of $u$ or $v$. Setting $X=V(C)$, the lemma follows. So, we may assume that $\chi(A)\le c-1$, and thus $\chi(B)\ge 3$. Let $D$ be a connected component of $G[B]$ with $\chi(D)\ge \chi(B) \ge 3$. Let~$k$ be the minimum integer such that $1\le k \le n$ and $G\left[\left( \bigcup_{i=1}^k W_i \right) \cap V(D) \right]$ contains an odd cycle. Such an integer $k$ exists since $\chi(D)\ge 3$.
By the minimality of~$k$, this odd cycle contains a vertex of $W_k$, and since $W_k$ is a stable set and the cycle is odd, some vertex of $W_k$ on the cycle is followed along the cycle by two consecutive vertices outside $W_k$. That is, there exists some~$w\in W_k \cap V(D)$ and an edge~$uv$ of~$G$ with $u,v \in \left( \bigcup_{i=1}^{k-1} W_i \right) \cap V(D)$ such that $w$ is adjacent to at least one of $u$ or $v$. But then, $w$ is left active, contradicting the fact that $w\in B$. \end{proof} We extend this slightly in the triangle-free case (with a slightly worse bound) in order to say that only $v$ and not $u$ has a neighbour in $X$. \begin{lemma}\label{lem:earlier2} Let $c \ge 1$ and let $(W_1, \ldots, W_n)$ be a stable grading of a triangle-free graph $G$ with $\chi(G) \ge c + 3$. Then there exists a subset $X$ of $V(G)$ and an edge $uv$ of $G$ with the following properties: \begin{itemize} \item $G[X]$ is connected, \item $\chi(X) \ge c$, \item $u$ and $v$ are earlier than every vertex in $X$, \item $u$ has no neighbour in $X$, and \item $v$ has a neighbour in $X$. \end{itemize} \end{lemma} \begin{proof} By \cref{lem:earlier1}, there exists a subset $X'$ of $V(G)$ and an edge $u'v'$ of $G$ with the following properties: \begin{itemize} \item $G[X']$ is connected, \item $\chi(X') \ge c+1$, \item $u'$ and $v'$ are earlier than every vertex in $X'$, and \item $v'$ has a neighbour in $X'$. \end{itemize} Let $C$ be a connected component of $G[X'\setminus N(v')]$ with maximum chromatic number. Since $G$ is triangle-free, we have that $\chi(C)\ge \chi(X')-1\ge c$. If $u'$ has a neighbour in $V(C)$, then the lemma follows by setting $u=v'$, $v=u'$, and $X=V(C)$. So, we may assume now that $u'$ has no neighbour in $V(C)$. Let $w$ be a neighbour of $v'$ in $X'$ that has a neighbour in $V(C)$. Note that $w$ is non-adjacent to $u'$ since $G$ is triangle-free. Then, the lemma follows by setting $u=u'$, $v=v'$, and $X=V(C)\cup \{w\}$. \end{proof} The following lemma extracts the key induction step that we will use to prove \cref{thm:ropefinding}.
\begin{lemma}\label{lem:ropeinduction} Let $c \ge 1$ and let $G$ be a graph with odd girth at least $11$, let $B,C\subseteq V(G)$ be disjoint vertex subsets with $B$ covering $C$, let $q\in V(G)\setminus C$ be a vertex such that $G[C\cup \{q\}]$ is connected and $\chi(C) \ge 6c+17$. Then there exists $B'\subseteq B$, $C'\subseteq C$, $q'\in C\setminus C'$ and two induced paths $Q_0$, $Q_1$ between $q$ and $q'$ in $G[(C \setminus C') \cup (B \setminus B') \cup \{q\}]$ such that \begin{itemize} \item $G[C'\cup \{q'\}]$ is connected, \item $\chi(C') \ge c$, \item $B'$ covers $C'$, \item $B'$ is anticomplete to $(V(Q_0)\cup V(Q_1))\setminus N^{2}[q']$, \item $V(Q_0)\cup V(Q_1)$ is contained in $C \cup \{q\} \cup ((B\cap N^2(q'))\setminus N^3(q))$, \item $C'$ is anticomplete to $(V(Q_0) \cup V(Q_1))\setminus \{q'\}$, \item the $G$-distance between $q$ and $C'\cup \{q'\}$ is at least $5$, and \item $Q_0$ has even length and $Q_1$ has odd length. \end{itemize} \end{lemma} \begin{figure} \begin{center} \begin{tikzpicture} \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (0.5,4) to (3.5,6); \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (2.5,4) to (4,6); \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (4.5,4) to (4.5,6); \draw [draw=secondcolor,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=\secondopacity] (1.5,4) to (5.5,6); \draw [draw=secondcolor,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=\secondopacity] (3.5,4) to (6,6); \draw [draw=none,fill=blue,opacity=.2] (5,4) [out=170,in=-60] to (1.4,6)--(6.6,6)--(11.6,4)--cycle; \tikzstyle{v}=[circle, fill, inner sep=0pt, minimum size=3pt] \draw [fill=white,rounded corners=10pt] (1,6) rectangle (7,7); \node at (1,7) [label=below left:$B$]{}; \draw (3,6)--(3,7); \draw (5,6)--(5,7); \node at (2,6.5) {$B_0=B'$}; \node at (4,6.5) {$B_1$}; \node at (6,6.5) {$B_2$}; 
\node [label=$q$,v] at (-1,2) (q) {}; \draw [fill=white,rounded corners=10pt] (0,0) rectangle (12,4); \node at (0,4) [label=below left:$C$]{}; \foreach \i in {1,2,3,4,5,6,10,11} { \draw (\i,0)--(\i,4); } \foreach \y in {1,2,3,4,5,6,7}{ \pgfmathparse{cos(\y*25-50)} \let\xcoord\pgfmathresult \draw (q) -- (.4+.3*\xcoord,0.5*\y) node (v\y)[v]{}; } \foreach \y in {3,4,5,6}{ \node at (5.8,0.5*\y) (m\y)[v] {}; } \foreach \y in {1,2}{ \node at (5.8,0.5*\y) (m\y)[v,label=left:$m_\y$] {}; } \node at (5.8,0.5*7) (m7)[v,label=left:$m_n$] {}; \node at (5.5,3) {$\vdots$}; \node at (11.5,2) {$\cdots$}; \node at (7,0)[label=below:$M_{t+1}$] {}; \node at (-1,0)[label=below:$M_0$] {}; \node at (0.5,0)[label=below:$M_1$] {}; \node at (1.5,0)[label=below:$M_2$] {}; \node at (2.5,0)[label=below:$M_3$] {}; \node at (3.5,0)[label=below:$\cdots$] {}; \node at (4.5,0)[label=below:$M_{t-1}$] {}; \node at (5.5,0)[label=below:$M_{t}$] {}; \draw [decorate,decoration={brace},line width=1pt] (5,-1)--(-1.5,-1) node[midway,label=below:$M$]{}; \draw [line width=10pt, out=120,shorten <=-5pt,in=-90,opacity=.2] (7,3.7) to (2.5,6); \draw [line width=10pt, out=105,shorten <=-5pt,in=-90,opacity=.2] (8.2,3.7) to (4.5,6); \draw [line width=10pt, out=90,in=-90,opacity=.2] (9,3.7) to (6,6); \draw [rounded corners=10pt,fill=white] (6.5,.7) rectangle (9.5,3.7); \draw (9.5-.8,.7)--(9.5-.8,3.7); \draw (9.5-.8*2,.7)--(9.5-.8*2,3.7); \foreach \i in {0,1,2} { \node at (6.5+0.8*\i+1,.4) {$C_\i$}; } \node at (9.7,3.7) {$C^*$}; \foreach \i in {1,2,3,4,5,6} { \draw (6.5, .7+3*\i/7) --(9.5-.8*2,.7+3*\i/7); } \foreach \i in {1,2} { \node at (7.7, .48+3*\i/7) {\tiny $W_\i$}; } \node at (7.6, .54+3*3/7) {$\vdots$}; \node at (7.7, .48+3) {\tiny $W_n$}; \node at (6.7,.9+3/7) [v,label=right:$u'$] (u') {}; \node at (6.7,.9+3*3/7) [v,label=right:$q'$] (q'){}; \draw [draw=red] (q)--(v2)--(1.5,1) node[v]{} -- (2.6,1.1) node[v]{}--(3.5,1.6)node[label=below:$Q_{u'}$]{} --(4.5,1.7) node[v]{}-- (m2)--(u'); \draw 
[draw=blue](q)--(v3)--(1.6,2.1) node[v]{} -- (2.3,2.6) node[v]{}--(3.4,3.6)node[label=below:$Q_{q'}$]{} --(4.5,3.4) node [v]{} -- (m4)--(q'); \draw (u')--(q'); \node[draw, fill=white, cloud, cloud puffs=10, minimum width=4pt, minimum height=4pt] at (7.1,3)(c') {$C'$}; \draw (q')--(c'); \end{tikzpicture} \end{center} \caption{The situation in the proof of \cref{lem:ropeinduction} when $\chi(C_0)\ge c+3$.}\label{fig:part1} \end{figure} \begin{proof} For $i \ge 0$ let $M_i$ be the set of vertices in $C \cup \{q\}$ with $G[C \cup \{q\}]$-distance $i$ from $q$. Choose $t$ such that $\chi(M_{t+1}) \ge \lceil \chi(C)/2\rceil \ge 3c+9$. Since $3c+9 > 2$ and $\chi(M_0\cup M_1\cup M_2\cup M_3\cup M_4)\le 2$, it follows that $t \ge 4$. It follows that there exists some $C^*\subseteq M_{t+1}$ such that $G[C^*]$ is connected, the $G$-distance between $q$ and $C^*$ is at least $5$, and $\chi(C^*)\ge \chi(M_{t+1}) - \chi(N^{4}_G[q])\ge \chi(M_{t+1}) -2 \ge 3c+7$. Let $M=M_0 \cup \cdots \cup M_{t-1}$. Then, $G[M]$ is connected and anticomplete to $C^*$. Next we partition $B$ into three sets. Let $B_0$ be the set of vertices in $B$ with no neighbour in $M$. Let $B_1$ be the set of vertices $v$ in $B$ with a neighbour in $M$ such that the $G[M \cup \{v\}]$-distance between $q$ and $v$ is odd, and let $B_2$ be the set where this distance is even. For $i = 0,1,2$, let $C_i$ be the set of vertices in $C^*$ with a neighbour in $B_i$. Then, $C^*=C_0\cup C_1 \cup C_2$ and note that $B_i$ covers $C_i$ for each $i=0,1,2$. We must consider the cases that either $G[C_0]$ or $G[C_1]$ or $G[C_2]$ has large chromatic number. First we handle when $G[C_0]$ has large chromatic number. Suppose that $\chi(C_0)\ge c+3$. See \cref{fig:part1} for an illustration of the situation. Let $B'=B_0$. Take an ordering $m_1, \ldots , m_n$ of $M_t$, and for $1 \le i \le n$, let $W_i$ be the set of vertices $v \in C_0$ such that $v$ is adjacent to $m_i$ and nonadjacent to $m_1, \ldots, m_{i-1}$. 
Then $(W_1, \ldots, W_n)$ is a stable grading of $C_0$ since $G$ is triangle-free. By \cref{lem:earlier2}, there exists a subset $C'$ of $C_0$ and an edge $u'q'$ of $G[C_0]$ with the following properties: \begin{itemize} \item $G[C'\cup \{q'\}]$ is connected, \item $\chi(C') \ge c$, \item $u'$ and $q'$ are earlier than every vertex in $C'$, and \item $u'$ has no neighbour in $C'$. \end{itemize} Let $Q_{u'}$ be an induced path in $G[M\cup M_{t} \cup \{u'\}]$ between~$u'$ and~$q$ of length~$t+1$ and similarly let $Q_{q'}$ be an induced path in $G[M\cup M_{t} \cup \{q'\}]$ between $q'$ and~$q$ of length~$t+1$. Observe that the path $Q_{u'}q'$ obtained from $Q_{u'}$ by appending the edge $u'q'$ is an induced path of length $t+2$ between~$q'$ and~$q$ since $G$ is triangle-free and there are no edges between $M$ and $M_{t+1}$. Of the two paths $Q_{q'}$ and $Q_{u'}q'$ between~$q$ and~$q'$, let $Q_0$ be the one of even length and let $Q_1$ be the one of odd length. This now provides the desired subsets $B'\subseteq B$, $C'\subseteq C$, vertex $q'\in C \setminus C'$, and paths $Q_0$, $Q_1$, so we may assume that $\chi(C_0) < c+3$.
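Before proceeding, let us record the chromatic bookkeeping behind the remaining cases. Since $C^*=C_0\cup C_1\cup C_2$ and $\chi(C^*)\ge 3c+7$, the assumption $\chi(C_0)\le c+2$ yields
\[
\chi(C_1)+\chi(C_2) \;\ge\; \chi(C^*)-\chi(C_0) \;\ge\; (3c+7)-(c+2) \;=\; 2c+5,
\]
and so at least one of $\chi(C_1)$ and $\chi(C_2)$ is at least $c+3$.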
\begin{figure} \begin{center} \begin{tikzpicture} \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (0.5,4) to (3.5,6); \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (2.5,4) to (4,6); \draw [draw=red,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=.2] (4.5,4) to (4.5,6); \draw [draw=secondcolor,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=\secondopacity] (1.5,4) to (5.5,6); \draw [draw=secondcolor,line width=20pt,shorten >=-5pt,shorten <=-5pt, out=90,in=-90,opacity=\secondopacity] (3.5,4) to (6,6); \draw [draw=none,fill=blue,opacity=.2] (5,4) [out=170,in=-60] to (1.4,6)--(6.6,6)--(11.6,4)--cycle; \tikzstyle{v}=[circle, fill, inner sep=0pt, minimum size=3pt] \draw [fill=white,rounded corners=10pt] (1,6) rectangle (7,7); \node at (1,7) [label=below left:$B$]{}; \draw (3,6)--(3,7); \draw (5,6)--(5,7); \node at (2,6.5) {$B_0$}; \node at (4,6.5) {$B_1$}; \node at (6,6.5) {$B_2$}; \node [label=left:{$q=m_1$},v] at (-1,2) (q) {}; \draw [fill=white,rounded corners=10pt] (0,0) rectangle (12,4); \node at (0,4) [label=below left:$C$]{}; \foreach \i in {1,2,3,4,5,6,10,11} { \draw (\i,0)--(\i,4); } \foreach \y in {1,2,3,4,5,6,7}{ \draw (q) -- (.3,0.5*\y) node (v\y)[v]{}; \node at (1.5,0.5*\y) (w\y)[v]{}; \node at (2.3,0.5*\y) (ww\y)[v]{}; \node at (3.5,0.5*\y) [v]{}; \node at (4.5,0.5*\y) (z\y)[v]{}; } \node at (v1) [label=below:\tiny $m_2$,label distance=0pt,inner sep=0pt]{}; \node at (v2) [label=below:\tiny $m_3$,label distance=0pt,inner sep=0pt]{}; \node at (w1) [label=below:\tiny $m_{\abs{M_1}+2}$,label distance=0pt,inner sep=0pt]{}; \node at (z7) [label=above:\tiny $m_{n}$,label distance=0pt,inner sep=0pt]{}; \node at (11.5,2) {$\cdots$}; \node at (7,0)[label=below:$M_{t+1}$] {}; \node at (-1,0)[label=below:$M_0$] {}; \node at (0.5,0)[label=below:$M_1$] {}; \node at (1.5,0)[label=below:$M_2$] {}; \node at (2.5,0)[label=below:$M_3$] {}; \node at 
(3.5,0)[label=below:$\cdots$] {}; \node at (4.5,0)[label=below:$M_{t-1}$] {}; \node at (5.5,0)[label=below:$M_{t}$] {}; \draw [decorate,decoration={brace},line width=1pt] (5,-1)--(-1.5,-1) node[midway,label=below:$M$]{}; \draw [line width=10pt, out=120,shorten <=-5pt,in=-90,opacity=.2] (7,3.7) to (2.5,6); \draw [line width=10pt, out=105,shorten <=-5pt,in=-90,opacity=.2] (8.2,3.7) to (4.5,6); \draw [line width=10pt, out=90,in=-90,opacity=.2] (9,3.7) to (6,6); \draw [rounded corners=10pt,fill=white] (6.5,.7) rectangle (9.5,3.7); \draw (9.5-.8,.7)--(9.5-.8,3.7); \draw (6.5+.8,.7)--(6.5+.8,3.7); \node at (6.5+.4,.4) {$C_0$}; \node at (8,.4) {$C_1$}; \node at (6.5+0.8*2+1,.4) {$C_2$}; \node at (9.7,3.7) {$C^*$}; \foreach \i in {1,2,3,4,5,6} { \draw (6.5+0.8, .7+3*\i/7) --(9.5-.8*2+0.8,.7+3*\i/7); } \foreach \i in {1,2} { \node at (7.5, .48+3*\i/7) {\tiny $W_\i$}; } \node at (7.6, .54+3*3/7) {$\vdots$}; \node at (7.5, .48+3) {\tiny $W_n$}; \node at (6.7+1.4,.9+3/7) [v,label=right:$u'$] (u') {}; \node at (6.7+1.4,.9+3*3/7) [v,label=right:$q'$] (q'){}; \draw [draw=red] (u')--(q'); \node[draw, fill=white, cloud, cloud puffs=10, minimum width=4pt, minimum height=4pt] at (7.1+1,3)(c') {$C'$}; \draw (q')--(c'); \draw [draw=red](u') [out=140,in=-60] to node[pos=.45,sloped,above] {\tiny $Q_{2-h}=Q_1$} (3.3,6.2) node[v,label=$b_{u'}$]{}--(v1) node [label=right:{\tiny $m_{i_{u'}}$},label distance=0pt,inner sep=0pt]{}--(q); \draw [draw=blue] (q') [out=140,in=-70] to node[pos=.6,sloped,above] {\tiny $Q_{h-1}=Q_0$}(4.5,6.2) node[v,label=$b_{q'}$]{}--(ww3) node [label=right:{\tiny $m_{i_{q'}}$},label distance=0pt,inner sep=0pt]{}--(w4)--(v6)--(q) ; \end{tikzpicture} \end{center} \caption{The situation in the proof of \cref{lem:ropeinduction} when $\chi(C_0)<c+3$ and $\chi(C_1)\ge c+3$.}\label{fig:part2} \end{figure} Since $\chi(C_0) < c + 3$, it follows that for some $h\in \{1,2\}$, we have that $\chi(C_h) \ge c+3$. See \cref{fig:part2} for an illustration of the situation. 
Take an ordering $m_1, \ldots , m_n$ of $M$ so that for $1\le i < j\le n$, we have that the $G[M]$-distance between $q$ and $m_i$ is at most the $G[M]$-distance between $q$ and $m_j$. For $1 \le i \le n$, let $W_i$ be the set of vertices $v \in C_h$ such that $v$ has a neighbour in $B_h$ adjacent to $m_i$ and no neighbour in $B_h$ adjacent to one of $m_1, \ldots , m_{i-1}$. Then $(W_1, \ldots , W_n)$ is a stable grading of $C_h$ since $G$ has odd girth at least~$7$ and $W_i\subseteq N_G^2(m_i)$ for each $1\le i \le n$. By \cref{lem:earlier2}, there exists a subset $C'$ of $C_h$ and an edge $u'q'$ of $G[C_h]$ with the following properties: \begin{itemize} \item $G[C'\cup \{q'\}]$ is connected, \item $\chi(C') \ge c$, \item $u'$ and $q'$ are earlier than every vertex in $C'$, and \item $u'$ has no neighbour in $C'$. \end{itemize} Let $i_{u'}$ be such that $u'\in W_{i_{u'}}$ and similarly, let $i_{q'}$ be such that $q'\in W_{i_{q'}}$. Let $k=\max\{i_{u'}, i_{q'}\}$. Let $B'=B_h\setminus N(\{m_1, \ldots , m_k\})$. Since $u'$ and $q'$ are earlier than each vertex of $C'$, it follows that $B'$ covers $C'$. Let $b_{u'}$ be a vertex of $B_h\setminus B'$ adjacent to both $m_{i_{u'}}$ and $u'$, and similarly, let $b_{q'}$ be a vertex of $B_h\setminus B'$ adjacent to both $m_{i_{q'}}$ and $q'$. Since $G$ is triangle-free, $b_{u'}$ is non-adjacent to $q'$. Since the $G$-distance between $q$ and $C^*$ is at least $5$, neither $b_{u'}$ nor $b_{q'}$ is in $N^3(q)$. Let $Q_{2-h}$ be an induced path in $G[M\cup \{b_{u'}, u', q'\}]$ between $q$ and $q'$ obtained by extending a shortest path from $q$ to $b_{u'}$ in $G[M\cup\{b_{u'}\}]$ by two edges $b_{u'}u'$ and $u'q'$. Then, $Q_{2-h}$ has even length if $2-h=0$ and odd length if $2-h=1$. Similarly, let $Q_{h-1}$ be an induced path in $G[M\cup \{b_{q'}, q'\}]$ between $q$ and~$q'$ obtained by extending a shortest path from $q$ to $b_{q'}$ in $G[M\cup\{b_{q'}\}]$ by an edge $b_{q'}q'$.
Then, $Q_{h-1}$ has even length if $h-1=0$ and odd length if $h-1=1$. This now provides the desired subsets $B'\subseteq B$, $C'\subseteq C$, a vertex $q'\in C \setminus C'$, and induced paths $Q_0$, $Q_1$. \end{proof} A \emph{broken $r$-arithmetic rope} in a graph $G$ consists of paths $Q_{1,1}, Q_{1,2}, \ldots , Q_{r,1}, Q_{r,2}$ with ends contained in a vertex set $\{q_1,\ldots , q_{r+1}\}$ so that \begin{itemize} \item for every $1\le i \le r$, $Q_{i,1}$ is an odd-length path between $q_i$ and $q_{i+1}$, \item for every $1\le i \le r$, $Q_{i,2}$ is an even-length path between $q_i$ and $q_{i+1}$, \item for every $(h_1, \ldots , h_r) \in \{1,2\}^r$, $Q_{1,h_1} \cup Q_{2,h_2} \cup \cdots \cup Q_{r,h_r}$ is an induced path, and \item the vertices $q_1,\ldots , q_{r+1}$ are pairwise at $G$-distance at least $5$. \end{itemize} We denote such a broken $r$-arithmetic rope by $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$ and say that it has \emph{end}~$q_{r+1}$. By applying \cref{lem:ropeinduction} a total of $r$ times, we can find a broken $r$-arithmetic rope. \begin{lemma}\label{lem:broken} Let $c\ge 1$, let $r\ge 1$, let $G$ be a graph with odd girth at least $11$, and let $B,C\subseteq V(G)$ be disjoint vertex subsets with $B$ covering $C$. Let $q_1\in V(G)\setminus C$ be a vertex such that $G[C\cup \{q_1\}]$ is connected and $\chi(G[C]) \ge 6^rc+\frac{17}{5}(6^r-1)$.
Then there exists $B'\subseteq B$, $C'\subseteq C$, $q_2,q_3,\ldots,q_{r+1}\in C\setminus C'$ and a broken $r$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$ with end $q_{r+1}$ such that \begin{itemize} \item $G[C'\cup \{q_{r+1}\}]$ is connected, \item $\chi(C') \ge c$, \item $B'$ covers $C'$, \item $B'$ is anticomplete to $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]$ for all $i=1,2,\ldots,r$, \item $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]\subseteq C\cup\{q_1\}$ for all $i=1,2,\ldots,r$, \item $C'$ is anticomplete to $\left(\bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) ) \right) \setminus \{q_{r+1}\}$, and \item the $G$-distance between $\{q_1, \ldots , q_r\}$ and $C'\cup \{q_{r+1}\}$ is at least $5$. \end{itemize} \end{lemma} \begin{proof} We proceed by induction on $r$. The base case $r=1$ follows from \cref{lem:ropeinduction}. So, let us assume that $r>1$ and that the lemma holds for $r-1$. Thus there exists $B_{r-1}\subseteq B$, $C_{r-1}\subseteq C$, $q_2,q_3,\ldots,q_{r}\in C\setminus C_{r-1}$, and a broken $(r-1)$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^{r-1}$ satisfying the above properties with $c':=6c+17$ in place of $c$. We remark that $\chi(C_{r-1})\ge 6c+17$. By applying \cref{lem:ropeinduction} with $B_{r-1}$, $C_{r-1}$, and $q_{r}$, we deduce that there exists $B':=B_{r}\subseteq B_{r-1}$, $C':=C_{r}\subseteq C_{r-1}$, $q_{r+1}\in C_{r-1}\setminus C_{r}$, and two induced paths $Q_{r,1}$, $Q_{r,2}$ between $q_{r}$ and $q_{r+1}$ in $G[(C_{r-1} \setminus C_{r}) \cup (B_{r-1} \setminus B_{r}) \cup \{q_{r}\}]$ satisfying the properties of \cref{lem:ropeinduction}. It is easy to check that the above seven properties hold. It remains to check that $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$ is indeed a broken $r$-arithmetic rope. The first, second, and last conditions are immediate. To see the third condition, suppose that there is $(h_1, \ldots , h_r) \in \{1,2\}^r$ such that $Q_{1,h_1} \cup Q_{2,h_2} \cup \cdots \cup Q_{r,h_r}$ is not an induced path.
By the induction hypothesis, $Q_{1,h_1} \cup Q_{2,h_2} \cup \cdots \cup Q_{r-1,h_{r-1}}$ is an induced path. Thus, there is an edge joining a vertex $x\in V(Q_{i,h_i}) \setminus\{q_{r}\}$ and a vertex $y\in V(Q_{r,h_r}) \setminus \{q_{r}\}$ for some $i\in\{1,2,\ldots,r-1\}$. Since $C_{r-1}$ is anticomplete to $(V(Q_{i,1})\cup V(Q_{i,2})) \setminus\{q_r\}$, it follows that $y\in (B_{r-1}\cap N^2(q_{r+1}))\setminus N^3(q_r)$. Since $B_{r-1}$ is anticomplete to $(V(Q_{i,1})\cup V(Q_{i,2})) \setminus N^2[q_{i+1}]$, it follows that $x\in N^2[q_{i+1}]$. Then we find a path of length at most $3$ from $q_{i+1}$ to $y$. Note that $y$ has a neighbour in $C_{r-1}$ on $Q_{r,h_r}$ because $V(Q_{r,h_r})\cap B_{r-1}\subseteq N^2(q_{r+1})$. This implies that if $i<r-1$, then we obtain a path of length~$4$ from $q_{i+1}$ to $C_{r-1}$, contradicting the induction hypothesis that the $G$-distance between $\{q_1, \ldots , q_{r-1}\}$ and $C_{r-1}$ is at least~$5$. Thus, $i=r-1$ and the $G$-distance from $q_{r}$ to $y$ is at most $3$, a contradiction. \end{proof} To prove \cref{thm:ropefinding} we shall now find a certain broken $r$-arithmetic rope and create an $r$-arithmetic rope from it by closing it with an additional path between $q_{r+1}$ and~$q_1$ (and extending $Q_{r,1}$, $Q_{r,2}$ along this path). \begin{proof}[Proof of \cref{thm:ropefinding}] We may assume that $G[X]$ is connected since we can always restrict to a connected component with maximum chromatic number. Choose some vertex in $X$, and for $i \ge 0$, let $L_i$ be the set of vertices with $G[X]$-distance $i$ from that vertex. There exists $s$ such that $\chi(L_{s+1}) \ge \lceil \chi(X)/2\rceil \ge 6^r \cdot 3 + \frac{17}{5}(6^r-1)$. Since $G$ has odd girth at least $11$, $\chi(L_0 \cup \cdots \cup L_{4}) \le 2$, so it follows that $s \ge 4$. Let $C$ be the vertex set of a connected component of $G[L_{s+1}]$ with maximum chromatic number.
Let $q_1$ be a vertex of $L_s$ with a neighbour in $C$, and let $B=L_s$. By \cref{lem:broken}, there exists $B'\subseteq B$, $C'\subseteq C$, $q_2,q_3,\ldots,q_{r+1}\in C\setminus C'$ and a broken $r$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$ with end $q_{r+1}$ such that \begin{itemize} \item $G[C'\cup \{q_{r+1}\}]$ is connected, \item $\chi(G[C']) \ge 3$, \item $B'$ covers $C'$, \item $B'$ is anticomplete to $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]$ for all $i=1,2,\ldots,r$, \item $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]\subseteq C\cup\{q_1\}$ for all $i=1,2,\ldots,r$, \item $C'$ is anticomplete to $\left(\bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) ) \right) \setminus \{q_{r+1}\}$, and \item the $G$-distance between $\{q_1, \ldots , q_r\}$ and $C'\cup \{q_{r+1}\}$ is at least $5$. \end{itemize} Since $\chi(G[C']) \ge 3 > \chi(N^4[q_{r+1}])$, there exists a vertex $x\in C'$ at $G$-distance at least $5$ from $q_{r+1}$; moreover, $x$ is at $G$-distance at least $5$ from each of $q_1, \ldots , q_r$ since $x\in C'$. Let $b\in B'$ be a vertex adjacent to $x$. Let $P_1$ be a shortest induced path between $q_{r+1}$ and $b$ in $G[\{q_{r+1},b\}\cup C']$. Since the $G$-distance between~$q_{r+1}$ and $x$ is at least $5$, the length of $P_1$ is at least $4$. Let $a_1\in L_{s-1}$ be a vertex adjacent to $q_1$ and let $a_2\in L_{s-1}$ be a vertex adjacent to~$b$. Observe that the vertices $q_1, \ldots , q_{r+1}, x$ are pairwise at $G$-distance at least $5$, $B'$ is anticomplete to $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]$ for every $i=1,2,\ldots,r$, and $(V(Q_{i,1}) \cup V(Q_{i,2}) ) \setminus N^2[q_{i+1}]\subseteq C\cup\{q_1\}$ for all $i=1,2,\ldots,r$. Therefore the only neighbour of~$a_1$ in $V(P_1)\cup \{a_2\} \cup \bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) )$ is $q_1$.
Furthermore, since the $G$-distance between $x$ and $q_i$ is at least $5$ for every $i=1,2,\ldots,{r+1}$ and $C'$ is anticomplete to $\left(\bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) ) \right) \setminus \{q_{r+1}\}$, it follows that $P_1ba_2$ is an induced path and $(V(P_1)\cup \{a_2\})\setminus\{q_{r+1}\}$ is anticomplete to $\left(\bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) )\right) \setminus \{q_{r+1}\}$. Let $P_2$ be an induced path between $a_1$ and $a_2$ in $G[\{a_1,a_2\} \cup \bigcup_{i=0}^{s-2} L_i]$. Clearly $V(P_2)\setminus \{a_1,a_2\}$ is anticomplete to $V(P_1)\cup \bigcup_{i=1}^r (V(Q_{i,1}) \cup V(Q_{i,2}) )$. Let $P$ be the induced path $P_1ba_2P_2a_1q_1$ between $q_{r+1}$ and $q_1$. Then we form an $r$-arithmetic rope from the broken $r$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$ with end $q_{r+1}$ by extending $Q_{r,1}$, $Q_{r,2}$ along~$P$, possibly swapping their labels depending on the parity of the length of $P$. \end{proof} We actually only need \cref{thm:ropefinding} in the case $r=5$, which gives the following bound. \begin{theorem} \label{thm:5ropefinding} Every graph $G$ with odd girth at least $11$ and $X\subseteq V(G)$ with $\chi(X) \ge 99525$ contains a $5$-arithmetic rope in $X$. \end{theorem} Indeed, setting $r=5$ in \cref{thm:ropefinding} gives $6^{6}+\frac{34}{5}(6^{5}-1)-1=46656+52870-1=99525$. Let us remark that with more technical arguments, one can replace the odd girth condition in \cref{thm:ropefinding} with the conditions that $\omega(G)\le \omega$, every induced subgraph~$H$ of $G$ with $\omega(H)<\omega$ has chromatic number at most~$\tau$, and $\chi^{4}(G)=\max_{v\in V(G)}\{\chi(N^{4}[v])\}$ is bounded, giving a much worse bound for the chromatic number that now depends on $\omega$, $\tau$, $\chi^{4}(G)$, as well as $r$. One can even alter the definition of arithmetic ropes to say that the lengths of the paths $Q_{i,1}, Q_{i,2}$ differ by $1 \pmod{m}$ rather than just $1 \pmod{2}$. In particular, such arithmetic ropes can then be made to contain an induced cycle of length $\ell \pmod{m}$.
This yields another proof of the key step in the proof of a theorem of Scott and Seymour~\cite{scott2019induced} that the class of graphs without an induced cycle of length $\ell \pmod{m}$ is $\chi$-bounded. \section{Odd wheel t-minors} \label{sec:oddwheel} This section is dedicated to proving \Cref{thm:technical-main} (which then implies \cref{thm:main} and thus \cref{thm:hperfect}). Our main tool is \cref{thm:5ropefinding}, which was proved in the previous section. We begin with some lemmas showing that certain graph structures contain odd wheels as t-minors. \begin{lemma}\label{lem:oddwheel} Let $G$ be a graph consisting of an induced odd cycle $C$ and a single additional vertex $v$ adjacent to at least three vertices $a$, $b$, $c$ that appear on $C$ and that partition $C$ into three odd-length paths. Then $G$ contains an odd wheel as a t-minor. \end{lemma} \begin{proof} If $\abs{V(C)}=3$, then $G=K_4$, and thus the lemma holds. So, we may assume that $\abs{V(C)}>3$ and we proceed inductively. If $v$ is adjacent to every vertex of $C$, then $G$ is an odd wheel, so we may assume that some vertex $u$ of $C$ is not adjacent to~$v$. By the assumption, $u$ is adjacent to at most one of $a$, $b$, and $c$ (if $u$ were adjacent to two of them, they would be its two neighbours on $C$, and the path between them through $u$ would have even length $2$). Then we apply a t-contraction at~$u$ to obtain a smaller graph $G'$ that still consists of an odd induced cycle $C'$ and a single additional vertex $v$ adjacent to at least three vertices $a$, $b$, $c$ that appear on~$C'$ and partition $C'$ into three odd-length paths. By the inductive hypothesis, $G'$ (and hence $G$) contains an odd wheel as a t-minor, as desired. \end{proof} The following lemma extends \cref{lem:oddwheel}. \begin{lemma}\label{lem:oddwheel3} Let $G$ be a graph and let $X$ be a subset of $V(G)$ such that $G[X]$ contains an $r$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^r$.
If $G- X$ is a connected bipartite graph with a bipartition $(A,B)$ such that no vertex in $A$ has neighbours in $X$ and at least three vertices of $\{q_1,q_2,\ldots,q_r\}$ have neighbours in $B$, then $G$ contains an odd wheel as a t-minor. \end{lemma} \begin{proof} We proceed by induction on $\abs{A}$. Observe that $G[X]$ has an odd induced cycle~$C$ containing three vertices of $\{q_1,q_2,\ldots,q_r\}\cap N(B)$ that partition $C$ into three odd-length paths (such a cycle can be obtained by choosing the parities of the paths $Q_{1,h_1},\ldots,Q_{r,h_r}$ appropriately). If $A=\emptyset$, then by applying \cref{lem:oddwheel} to $C$ with the unique vertex in $B$, we deduce that $G$ contains an odd wheel as a t-minor. If $A$ contains a vertex~$v$, then we perform a t-contraction at $v$ to obtain a t-minor $G'$. Observe that $G'$ is the graph obtained from $G$ by contracting all the edges incident with $v$ and so still satisfies the assumptions of the lemma. By the induction hypothesis, $G'$ contains an odd wheel as a t-minor and so does~$G$. \end{proof} We require one more simple lemma on finding, in a graph with large odd girth, a connected bipartite induced subgraph that contains a distinguished set $S$ of vertices. We remark that there are similar Ramsey theorems on finding induced trees containing a number of vertices of a distinguished set $S$ that increases with $\abs{S}$ \cite{abrishami2024induced,davies2022vertex}. Such Ramsey results can be used in part to strengthen \cref{thm:technical-main} to triangle-free graphs (at the cost of much worse bounds). \begin{lemma}\label{lem:bipartite} Let $G$ be a connected graph with odd girth at least $2g+1$ and let $S$ be a stable set of~$G$ with $\abs{S}\le 2g$. Then $G$ contains a connected bipartite induced subgraph $H$ with $S\subseteq V(H)$. \end{lemma} \begin{proof} Let $H$ be a connected induced subgraph of $G$ with $S\subseteq V(H)$, and with $V(H)$ minimal subject to this. By minimality, for every $v\in V(H) \setminus S$, $H-v$ is disconnected and each connected component contains a vertex of $S$.
If $H$ is a tree, then the lemma follows, so we may assume otherwise. Consider a cycle $C$ of $H$. For each $v\in V(C) \setminus S$, there is a connected component $H_v$ of $H-v$ containing a vertex of $S$ and not containing a vertex of $C$. Then, the induced subgraphs $(H_v : v\in V(C) \setminus S)$ are vertex-disjoint, and each contains a vertex of~$S$. Therefore, $\abs{V(C)}\le \abs{S}\le 2g$. So, $C$ is not an odd cycle since $G$ has odd girth at least $2g+1$. Hence $H$ is bipartite, as desired. \end{proof} We are now ready to prove \Cref{thm:technical-main} (which then implies \cref{thm:main} and thus \cref{thm:hperfect}). \begin{proof}[Proof of \Cref{thm:technical-main}.] Let $G$ be a graph with odd girth at least $11$ and chromatic number at least $199049$. Let $v$ be a vertex of $G$ in a connected component with maximum chromatic number. Let $L_0=\{v\}$ and for each positive integer $i$, let $L_i=N^i(v)$. Then there exists a positive integer $t$ such that $\chi(L_t) \ge \lceil 199049/2\rceil = 99525$. Since $\chi(N^3[v]) \le 2 < 99525$, we have that $t\ge 4$. Since $\chi(L_t) \ge 99525$, by \cref{thm:5ropefinding} there is a $5$-arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^5$ of~$G$ contained in~$L_t$. For each $1\le i \le 5$, let $x_i$ be a vertex of $L_{t-1}$ adjacent to $q_i$, and let $y_i$ be a vertex of $L_{t-2}$ adjacent to $x_i$. Since the vertices $q_1, \ldots , q_5$ are pairwise at distance at least~$5$ in $G$, the vertices $x_1, \ldots , x_5 , y_1, \ldots , y_5$ are all distinct and each of $x_1, x_2, x_3, x_4, x_5$ has degree $1$ in $G[\{x_1, \ldots , x_5 , y_1, \ldots , y_5\}]$. Now, let $H^*$ be the induced subgraph of $G$ on vertex set $\{x_1, \ldots, x_5\} \cup \{y_1, \ldots , y_5\} \cup \bigcup_{i=0}^{t-3} L_i$. Then, $H^*$ is connected and each of the vertices $x_1 , \ldots , x_5$ has degree one in $H^*$. 
Since $H^*$ has odd girth at least~$7$, by \cref{lem:bipartite}, $H^*$ contains a connected bipartite induced subgraph~$H'$ with $\{x_1, \ldots , x_5\} \subseteq V(H')$. Let $(A,B)$ be the bipartition of~$H'$. By the pigeonhole principle, we may assume without loss of generality that $B$ contains at least three vertices $a^*,b^*,c^*$ of $\{x_1, \ldots , x_5\}$; these are pairwise at even distance in $H'$. Since the vertices $x_1, \ldots , x_5$ all have degree 1 in $H'$, it follows that $H=H'- (\{x_1, \ldots , x_5\} \setminus \{a^*, b^*, c^*\})$ is connected. No vertex of $V(H)\setminus \{a^*, b^*, c^*\}$ has a neighbour in the arithmetic rope $(q_i,Q_{i,1},Q_{i,2})_{i=1}^5$ since $V(H)\setminus \{a^*, b^*, c^*\} \subseteq \bigcup_{i=0}^{t-2} L_i$ and $(q_i,Q_{i,1},Q_{i,2})_{i=1}^5$ is contained in $L_t$. Hence $G$ contains an odd wheel as a t-minor by \cref{lem:oddwheel3}. \end{proof} \section{Concluding remarks and open problems} \label{sec:conclusion} As remarked in \cref{sec:intro}, we optimised the proof of \cref{thm:main} for simplicity, rather than for the best bound. With more delicate and technical arguments, it is possible to improve the bound significantly. This would still leave a large gap between our bound and the lower bound of 4 for the chromatic number of t-perfect graphs. A good milestone for narrowing the gap would be to improve our upper bound to at most 1000. The only forbidden t-minors we used were odd wheels, so considering further forbidden t-minors such as even M\"{o}bius ladders \cite{shepherd-lehman} or working directly with the polytope definition of t-perfection may allow for better bounds. Determining the complete list of forbidden t-minors for t-perfection remains a major open problem. While not all t-perfect graphs are $3$-colourable~\cite{schrijver2003combinatorial,benchetrit-PhD,benchetrit20164critical}, the graphs in \cref{fig:laurent-seymour} are the only known minimal counterexamples.
Both counterexamples are relatively \say{uncomplicated} graphs: each has at most $10$ vertices, and each is the complement of the line graph of a well-known graph. In \cite{benchetrit20164critical}, Benchetrit showed that the only 4-critical $P_6$-free t-perfect graphs are the graphs in \cref{fig:laurent-seymour}, but there may well be other minimal counterexamples that include an induced path on six vertices. It is natural to ask if these are the only counterexamples, or if there are infinitely many counterexamples. \begin{problem} Are there infinitely many $4$-critical t-perfect graphs? \end{problem} We remark that if there are only finitely many $4$-critical t-perfect graphs, then this would yield a polynomial-time algorithm for determining whether a t-perfect graph is $3$-colourable. In general, it is NP-complete to decide whether a graph is $3$-colourable. Lov{\'a}sz \cite{lovasz1972normal} proved that the complement of every perfect graph is a perfect graph. This is not the case for h-perfect graphs; however, the class of \emph{$\overline{h}$-perfect} graphs, that is, complements of h-perfect graphs, is still a natural class of graphs that is equivalently defined by a polytope. Observe that a graph~$G$ is \emph{$\overline{h}$-perfect} if its clique set polytope is equal to \begin{align*} \overline{\hstab{G}}=\{ x\in \mathbb{R}^{V(G)}\colon & x(S)\le 1 \text{ for every stable set $S$,}\\ & x(C)\le \lfloor \abs{V(C)}/2\rfloor \text{ for every odd anti-hole $C$,}\\ & x(v)\ge 0 \text{ for every vertex $v$} \}. \end{align*} As with the class of h-perfect graphs, we prove that $\overline{h}$-perfect graphs are $\chi$-bounded; in fact, we prove that they are quadratically $\chi$-bounded. We remark that a (doubly exponential) $\chi$-bound also follows from \cref{thm:hperfect} and a theorem of Scott and Seymour~\cite{complementation-conjecture}. \begin{lemma}\label{lem:antihole} For every integer $k\ge 3$, $\overline{C_{2k+1}}$ is not h-perfect.
\end{lemma} \begin{proof} First, observe that $\frac1k \mathbf{1}\in \hstab{\overline{C_{2k+1}}}$. So if $\overline{C_{2k+1}}$ were h-perfect, then $\frac1k \mathbf{1}\in \ssp{\overline{C_{2k+1}}}$ and therefore there exist stable sets $S_1$, $S_2$, $\ldots$, $S_m$ with positive weights $\lambda_1$, $\lambda_2$, $\ldots$, $\lambda_m$ such that $\sum_{i=1}^m \lambda_i=1$ and $\sum_{i=1}^m \lambda_i \mathbf{1}_{S_i} = \frac1k \mathbf{1}$. Since each stable set has size at most $2$, we deduce that $\frac{2k+1}{k} = \sum_{i=1}^m \lambda_i \abs{S_i} \le 2 \sum_{i=1}^m\lambda_i = 2$, a contradiction. \end{proof} \begin{theorem}\label{thm:hcomplement} Every $\overline{h}$-perfect graph~$G$ is $\binom{\omega(G)+1}{2}$-colourable. \end{theorem} \begin{proof} We proceed by induction on $\abs{V(G)}$. Let $\omega=\omega(G)$. We may assume that $\omega>1$. Let $v$ be a vertex of $G$. By \cref{lem:antihole}, $G$ contains no odd hole of length at least~$7$. Observe that $G-N[v]$ contains no odd anti-hole since odd wheels are not h-perfect by \cref{lem:oddwheelforbid}. So, by the strong perfect graph theorem \cite{CRST2006}, $G-N[v]$ is perfect and therefore $\chi(G-N(v)) = \chi(G- N[v])= \omega(G- N[v]) \le \omega$. Clearly $\omega(G[N(v)]) < \omega$, so by the induction hypothesis, $\chi(G)\le \chi(G- N(v)) + \chi(G[N(v)])\le \omega+\binom{\omega}{2}=\binom{\omega+1}{2} $. \end{proof} \cref{thm:hcomplement} is tight for triangle-free graphs since $C_5$ is $\overline{h}$-perfect. More generally, pairwise complete copies of $C_5$ are $\overline{h}$-perfect and this shows that for each positive integer~$\omega$, there is an $\overline{h}$-perfect graph with clique number~$\omega$ and chromatic number $\lfloor \frac{3}{2}\omega \rfloor$. So, a $\chi$-bounding function of the form $\omega(G) + c$ as in \cref{thm:hperfect} for h-perfect graphs is not possible for $\overline{h}$-perfect graphs. We conjecture that the quadratic bound in \cref{thm:hcomplement} can be improved to a linear one.
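The tightness example above can be checked by brute force for small cases: $k$ pairwise complete copies of $C_5$ (the join of $k$ five-cycles) have clique number $2k$ and chromatic number $3k$. The following sketch is only an illustration, not part of the proof; the graph encoding and all function names are our own.

```python
from itertools import combinations

def join_of_c5_copies(k):
    """Adjacency sets for k pairwise complete copies of the 5-cycle C_5."""
    n = 5 * k
    adj = {v: set() for v in range(n)}
    for c in range(k):
        for i in range(5):  # C_5 edges inside copy c
            u, v = 5 * c + i, 5 * c + (i + 1) % 5
            adj[u].add(v); adj[v].add(u)
    for c1 in range(k):     # all edges between distinct copies
        for c2 in range(c1 + 1, k):
            for u in range(5 * c1, 5 * c1 + 5):
                for v in range(5 * c2, 5 * c2 + 5):
                    adj[u].add(v); adj[v].add(u)
    return adj

def colorable(adj, colors, assignment=None, order=None):
    """Backtracking test for a proper vertex colouring with `colors` colours."""
    if order is None:
        order = sorted(adj)
        assignment = {}
    if len(assignment) == len(order):
        return True
    v = order[len(assignment)]
    for c in range(colors):
        if all(assignment.get(w) != c for w in adj[v]):
            assignment[v] = c
            if colorable(adj, colors, assignment, order):
                return True
            del assignment[v]
    return False

def chromatic_number(adj):
    k = 1
    while not colorable(adj, k):
        k += 1
    return k

def clique_number(adj):
    best = 1
    for r in range(2, len(adj) + 1):
        if any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(sorted(adj), r)):
            best = r
    return best
```

For $k=2$ this yields $\omega = 4$ and $\chi = 6 = \lfloor \frac{3}{2}\cdot 4\rfloor$, matching the discussion above.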
\begin{conjecture} The class of $\overline{h}$-perfect graphs is linearly $\chi$-bounded. \end{conjecture} \paragraph{Acknowledgements.} We thank Dabeen Lee for contributing to discussions in the early stages of this project, Andr\'as Seb\H{o} for helpful feedback, and Sebastian Wiederrecht for sharing this problem with us at the \href{https://www.math.sinica.edu.tw/www/file_upload/conference/202402Rim/index.html}{2024 Pacific Rim Graph Theory Group Workshop}. Thanks also go to Bruce Reed for organising and Academia Sinica for hosting this workshop. Most of this research was conducted during visits to Princeton University and the IBS. We are grateful to both institutions for their hospitality. { \fontsize{11pt}{12pt} \selectfont \bibliographystyle{Illingworth} \bibliography{main_refs} } \end{document}
2412.17775v1
http://arxiv.org/abs/2412.17775v1
The Calderón problem for the logarithmic Schrödinger equation
\documentclass[a4paper, 10pt, twoside, notitlepage]{amsart} \usepackage{amsmath,amscd} \usepackage{amssymb} \usepackage{amsthm} \usepackage{comment} \usepackage{graphicx, xcolor} \usepackage{mathrsfs} \usepackage[ocgcolorlinks, linkcolor=blue]{hyperref} \usepackage{bm} \usepackage{bbm} \usepackage{url} \usepackage[utf8]{inputenc} \usepackage{mathtools,amssymb} \usepackage{esint} \usepackage{tikz} \usepackage{dsfont} \usepackage{relsize} \usepackage{url} \urlstyle{same} \usepackage{xcolor} \usepackage{graphicx} \usepackage{mathrsfs} \usepackage[shortlabels]{enumitem} \usepackage{lineno} \usepackage{amsmath} \usepackage{enumitem} \usepackage{amsthm} \usepackage{verbatim} \usepackage{dsfont} \numberwithin{equation}{section} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \renewcommand{\H}{{\mathbb H}} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \allowdisplaybreaks \newcommand{\para}[1]{\vspace{3mm} \noindent\textbf{#1.}} \graphicspath{{images/}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{question}{Question} \newtheorem{assumption}{Assumption} \newtheorem{remark}[theorem]{Remark} \title[Calder\'on's problem for logarithmic Laplacian]{The Calder\'on problem for the logarithmic Schr\"odinger equation} \author[B. Harrach]{Bastian Harrach} \address{Institute for Mathematics, Goethe University Frankfurt, Germany} \email{[email protected]} \author[Y.-H. Lin]{Yi-Hsuan Lin} \address{Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan \& Fakult\"at f\"ur Mathematik, University of Duisburg-Essen, Essen, Germany} \curraddr{} \email{[email protected]} \author[T. 
Weth]{Tobias Weth} \address{Institute for Mathematics, Goethe University Frankfurt, Germany} \email{[email protected]} \keywords{Logarithmic Laplacian, Calder\'on's problem, unique continuation, Runge approximation, localized potentials, monotonicity method.} \subjclass[2020]{35R30; 26A33; 35J70} \newcommand{\loglap}{L_{\text{\tiny $\!\Delta \,$}}\!} \newcommand{\todo}[1]{\footnote{TODO: #1}} \newcommand{\C}{{\mathbb C}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\N}{{\mathbb N}} \newcommand{\Q}{{\mathbb Q}} \newcommand{\A}{{\mathcal A}} \newcommand{\Order}{{\mathcal O}} \newcommand{\order}{o} \newcommand{\vareps}{\varepsilon} \newcommand{\eps}{\epsilon} \newcommand{\der}{{\mathrm d}} \newcommand{\id}{\mathrm{Id}} \newcommand {\p} {\partial} \newcommand{\LC}{\left(} \newcommand{\RC}{\right)} \newcommand{\wt}{\widetilde} \newcommand{\Kelvin}{K}\newcommand{\riesz}{I_{\alpha}}\newcommand{\xrt}{X}\newcommand{\dplane}{R_d} \newcommand{\no}{N}\newcommand{\nod}{N_d} \newcommand{\change}{\marginpar{\colorbox{red}{\color{black}{\bf !}}} } \newcommand{\schwartz}{\mathscr{S}} \newcommand{\cschwartz}{\mathscr{S}_0} \newcommand{\tempered}{\mathscr{S}^{\prime}} \newcommand{\rapidly}{\mathscr{O}_C^{\prime}} \newcommand{\slowly}{\mathscr{O}_M} \newcommand{\fraclaplace}{(-\Delta)^s} \newcommand{\fourier}{\mathcal{F}} \newcommand{\ifourier}{\mathcal{F}^{-1}} \newcommand{\vev}[1]{\left\langle#1\right\rangle} \newcommand{\pol}{\mathcal{O}_M} \newcommand{\borel}{\mathcal{M}} \newcommand{\Hcirc}{\overset{\hspace{-0.08cm}\circ}{H^s}} \newcommand{\test}{\mathscr{D}}\newcommand{\smooth}{\mathscr{E}}\newcommand{\cdistr}{\mathscr{E}'}\newcommand{\distr}{\mathscr{D}^{\prime}}\newcommand{\dimens}{n}\newcommand{\kernel}{h_{\alpha}} \newcommand{\norm}[1]{\lVert #1 \rVert} \newcommand{\abs}[1]{\left\lvert #1 \right\rvert}\newcommand{\aabs}[1]{\left\lVert #1 \right\rVert}\newcommand{\ip}[2]{\left\langle #1,#2 
\right\rangle}\DeclareMathOperator{\spt}{spt}\DeclareMathOperator{\ch}{ch}\DeclareMathOperator{\Div}{div} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\loc}{loc} \newcommand{\radon}{\mathscr{M}}\newcommand{\weak}{\rightharpoonup}\newcommand{\weakstar}{\overset{\ast}{\rightharpoonup}} \begin{document} \maketitle \begin{abstract} We study the Calder\'on problem for a logarithmic Schr\"odinger type operator of the form $\loglap +q$, where $\loglap$ denotes the logarithmic Laplacian, which arises as the formal derivative $\frac{d}{ds} \big|_{s=0}(-\Delta)^s$ of the family of fractional Laplacians. This operator enjoys remarkable nonlocal properties, such as the unique continuation property and the Runge approximation property. Based on these tools, we uniquely determine bounded potentials from the Dirichlet-to-Neumann map. Additionally, we establish a constructive uniqueness result by utilizing the monotonicity method. Our results hold in any space dimension. \medskip \end{abstract} \tableofcontents \section{Introduction}\label{sec: introduction} Calder\'on's pioneering work \cite{Calderon_80} investigated the inverse conductivity problem, which asks whether the conductivity can be determined from boundary voltage and current measurements. A suitable reduction scheme shows that a closely related problem is the Calder\'on problem for the classical Schr\"odinger equation, which can be stated as follows. Let $\Omega \subset \R^n$ be a bounded domain with Lipschitz boundary $\p \Omega$, and $q\in L^\infty(\Omega)$. Consider the Dirichlet boundary value problem \begin{equation}\label{eq: local Schro} \begin{cases} \LC-\Delta +q \RC u=0 &\text{ in }\Omega, \\ u=f &\text{ on }\p \Omega.
\end{cases} \end{equation} Assume that $0$ is not a Dirichlet eigenvalue of the Schrödinger operator $-\Delta+q$ in $\Omega$, which implies that there exists a unique solution $u\in H^1(\Omega)$ to \eqref{eq: local Schro}, for any given Dirichlet boundary data $f\in H^{1/2}(\p \Omega)$. The boundary measurement is then encoded by the (full) \emph{Dirichlet-to-Neumann} (DN) map, which is well-defined by \begin{equation}\label{local DN} \Lambda_q^{(1)} : H^{1/2}(\p \Omega)\to H^{-1/2}(\p \Omega), \quad f\mapsto \left.\p_\nu u_f \right|_{\p \Omega}, \end{equation} where $u_f \in H^{1}(\Omega)$ is the solution to \eqref{eq: local Schro}. The Calder\'on problem for the Schr\"odinger equation \eqref{eq: local Schro} asks whether the DN map $\Lambda_q^{(1)}$ determines the potential $q\in L^\infty(\Omega)$. The answer is positive for a more regular potential $q$ with the full boundary DN map, and we refer readers to the articles \cite{SU87} for $n\geq 3$ and \cite{Buk08} for $n=2$. The article \cite{uhlmann2009electrical} provides a comprehensive introduction and review in this direction. After several decades of developments on Calder\'on's problems, this type of problem has been generalized to the nonlocal scenario. The Calder\'on problem for the \emph{fractional Schr\"odinger equation} was first studied in \cite{GSU20}. The underlying problem was proposed as an exterior value problem due to its natural nonlocality: Given $s\in (0,1)$, let $\Omega \subset\R^n$ be a bounded open set with Lipschitz boundary for $n\in \N$, and let $q\in L^\infty(\Omega)$. Consider the Dirichlet exterior value problem \begin{equation}\label{eq: fractional Schro} \begin{cases} \LC (-\Delta)^s +q \RC u=0 &\text{ in }\Omega, \\ u=f &\text{ in }\Omega_e, \end{cases} \end{equation} where \[ \Omega_e \vcentcolon =\R^n\setminus \overline{\Omega}. \] As in the case $s=1$, let us assume that $0$ is not a Dirichlet eigenvalue of $(-\Delta)^s+q$ in $\Omega$.
Then there exists a unique solution $u\in H^s(\R^n)$ to \eqref{eq: fractional Schro}, for any given Dirichlet exterior data $f$ in a suitable function space, such as $C^\infty_c(\Omega_e)$. The exterior (partial) DN map can be formally defined by \begin{equation}\label{nonlocal DN} \Lambda_q^{(s)} : C^\infty_c(W_1)\ni f\mapsto \left. (-\Delta)^s u_f \right|_{W_2}, \end{equation} where $W_1, W_2\Subset \Omega_e$ are arbitrary nonempty open subsets. In the foundational work \cite{GSU20}, the authors showed that the DN map \eqref{nonlocal DN} determines the bounded potential $q$ in $\Omega$ uniquely, which holds for any spatial dimension $n\in \N$. Moreover, inverse problems for fractional equations have more profound properties than their local counterparts, such as the \emph{unique continuation property} (UCP) and the \emph{Runge approximation property}. Roughly speaking, the UCP for $(-\Delta)^s$ ($0<s<1$) states that given a nonempty open subset $\mathcal{O}\subset \R^n$, $$ u=(-\Delta)^su =0 \text{ in }\mathcal{O}\quad \text{ implies that }\quad u\equiv 0\text{ in } \R^n, $$ where $u$ belongs to an appropriate function space (e.g., some fractional Sobolev space $H^r(\R^n)$ for some $r\in \R$). Moreover, the Runge approximation property states that any $L^2$ function can be approximated by solutions of the fractional Schr\"odinger equation. Based on the aforementioned properties, fractional-type inverse problems have received rapidly growing attention in recent years. In the work \cite{cekic2020calderon}, the authors demonstrated that both the drift and the potential can be determined uniquely, which is not possible in the local case in general. Moreover, the simultaneous recovery of both an obstacle and the surrounding medium has been studied in \cite{CLL2017simultaneously}, and the determination of bounded potentials for the anisotropic nonlocal Schr\"odinger equation was investigated in \cite{GLX}.
These two problems remain open for the case $s=1$ and $n\geq 3$. Therefore, one could expect that the nonlocality is beneficial in the study of related inverse problems. For more related work on both linear and nonlinear nonlocal inverse problems, we refer readers to \cite{harrach2017nonlocal-monotonicity,harrach2020monotonicity,LL2022inverse,LL2020inverse,GRSU20,CMRU20,RS20,ruland2018exponential,LLR2019calder,lin2020monotonicity,LZ2023unique,GU2021calder,LLU2022para} and references therein. In particular, one can also determine interior coefficients by using either a reduction based on the Caffarelli-Silvestre extension (see e.g. \cite{CGRU2023reduction,ruland2023revisiting,LLU2023calder,LZ2024approximation}) or the heat semigroup on closed Riemannian manifolds (see e.g. \cite{feizmohammadi2021fractional,Fei24_TAMS,FKU2024calder,lin2024fractional}), where the UCP may not be a necessary tool in some cases. Very recently, an entanglement principle for nonlocal operators has also been investigated in \cite{FKU2024calder,FL24}, and it may significantly influence the study of new nonlocal inverse problems. Loosely speaking, most of the existing studies of inverse problems are related to the classical Laplacian $-\Delta$ or the fractional Laplacian $(-\Delta)^s$, $s \in (0,1)$. We want to point out that the key tools for solving classical and fractional inverse problems are the following: \begin{itemize} \item \textbf{Classical case.} Suitable integral identities and complex geometrical optics (CGO) solutions ($(-\Delta)^s$ for $s =1$). \item \textbf{Fractional case.} Suitable integral identities and the Runge approximation property ($(-\Delta)^s$ for $0<s<1$). \end{itemize} The above-mentioned tools are usually used to recover lower-order coefficients in related mathematical problems.
Furthermore, a recent work \cite{CdHS24} investigates geometric optics solutions for the fractional Helmholtz equation, which combines both local and nonlocal features.\\ While the nonlocality distinguishes the fractional case from the classical one, it is important to note that the fractional Laplacian $(-\Delta)^s$ is still an elliptic operator of positive order $2s$ and therefore admits a highly useful elliptic regularity theory. In the present paper, we wish to study an inverse problem beyond the framework of operators of positive order. To motivate our problem, recall that \begin{equation}\label{eq: limit of fra-Lap} \lim_{s\to 1^-}(-\Delta)^s u = -\Delta u \quad \text{and}\quad \lim_{s\to 0^+} (-\Delta)^s u =u \qquad \text{pointwise in $\R^n$} \end{equation} for any $u\in C^2_c(\R^n)$. For the second limit, one may identify a well-defined first-order correction term given as the formal derivative $\log (-\Delta)u := \frac{d}{ds} \big|_{s=0}(-\Delta)^su$. The associated {\em logarithmic Laplacian operator} $$ \loglap := \log (-\Delta) $$ was introduced in \cite{CW2019dirichlet} in the context of Dirichlet problems, and it has attracted growing interest in recent years due to its usefulness in the analysis of order-dependent families of linear and nonlinear fractional Dirichlet problems and their asymptotic limits as $s \to 0^+$, see e.g. \cite{CW2019dirichlet,JSnW20,feulefack-jarohs-weth,hernandez-santamaria-saldana,laptev-weth}. It also appears in the context of the $0$-fractional perimeter, see \cite{de-luca-novaga-ponsiglione}. We note that the operator $\loglap$ has the Fourier symbol $2\log \abs{\xi}$, which can be seen by differentiating the symbol $\abs{\xi}^{2s}$ with respect to $s$ and evaluating at $s=0$.
More precisely, by \cite[Theorem 1.1]{CW2019dirichlet}, there holds \begin{equation}\label{eq: def log-Lap Fourier} \log (-\Delta) u \vcentcolon = \mathcal{F}^{-1} \LC 2\log \abs{\xi}\widehat{u}(\xi)\RC , \quad \text{for} \quad u \in C^\alpha_c (\R^n), \end{equation} for some $\alpha>0$, where $\widehat{u}$ is the Fourier transform of $u$, and $\mathcal{F}^{-1}$ is the inverse Fourier transform. Due to the weak growth of the logarithmic symbol, $\loglap$ is sometimes called a (near) zero-order operator. Very recently, an extension problem for the logarithmic Laplacian has been found in \cite{CHW2023extension}, and the authors used it to prove the UCP for $\loglap$. The UCP will play a major role in the proof of our main results.\\ \noindent \textbf{Mathematical formulations and main results.} Let $\Omega\subset \R^n$ be a bounded domain with Lipschitz boundary $\p \Omega$, and $q\in L^\infty(\Omega)$. Consider \begin{equation}\label{eq: main equation} \begin{cases} \LC \loglap + q \RC u=0 &\text{ in }\Omega,\\ u=f &\text{ in }\Omega_e . \end{cases} \end{equation} To obtain the well-posedness of \eqref{eq: main equation}, let us recall the eigenvalue problem of the logarithmic Laplacian. By \cite[Theorem 1.2]{CW2019dirichlet}, it is known that $\loglap$ in a bounded domain $\Omega$ has Dirichlet eigenvalues \begin{equation}\label{eigenvalues of the log Laplacian} \lambda_1(\Omega) <\lambda_2(\Omega) \leq \ldots \leq \lambda_k(\Omega) \leq \ldots \nearrow \infty . \end{equation} Any of these eigenvalues $\lambda_k(\Omega)$ may be positive, zero, or negative, depending on the size and shape of $\Omega$. In order to have a well-posed forward problem \eqref{eq: main equation}, let us impose the condition \begin{equation}\label{eigenvalue} \lambda_1(\Omega)+q(x)\geq \lambda_0>0, \text{ for all }x\in \Omega, \end{equation} for some positive constant $\lambda_0$, where $\lambda_1(\Omega)$ is the first Dirichlet eigenvalue of $\loglap$ in $\Omega$.
Lower bounds for $\lambda_1(\Omega)$ in terms of $|\Omega|$, the measure of $\Omega$, are given in \cite{CW2019dirichlet,laptev-weth}, see in particular \cite[Corollary 4.3 and Theorem 4.4]{laptev-weth}\footnote{Note that $\frac{1}{2}\loglap$ is considered in place of $\loglap$ in \cite{laptev-weth}, so the bounds there apply to $\frac{\lambda_1(\Omega)}{2}$ in place of $\lambda_1(\Omega)$.}. With the eigenvalue condition \eqref{eigenvalue} at hand, we can prove that \eqref{eq: main equation} is well-posed for any $f$ contained in the associated {\em trace space} $\H_T(\Omega_e)$, which is defined in Section \ref{sec: forward problem} below. Hence, the DN map of \eqref{eq: main equation} can be defined as a map $$ \Lambda_{q} \colon \H_T(\Omega_e) \to \H_T(\Omega_e)^*. $$ In particular, if $W_1,W_2 \Subset \Omega_e$ are open bounded subsets, then for every $f \in C_c^\infty(W_1)$ and $g \in C_c^\infty(W_2)$ we have $$ \left\langle \Lambda_q f, g \right\rangle = \int_{W_2} \bigl(\loglap u_f\bigr) g\,dx = 2 \int_{\R^n}(\log |\xi|)\widehat{u_f}(\xi)\overline{\widehat g(\xi)}\,d\xi, $$ where $u_f$ is the unique (weak) solution to \eqref{eq: main equation}, and $\langle \cdot ,\cdot \rangle $ denotes the natural duality pairing between $\mathbb{H}_T(\Omega_e)$ and $\mathbb{H}_T(\Omega_e)^\ast$, see Section \ref{sec: forward problem} below. The key question studied in this paper is the following. \begin{enumerate}[\textbf{(IP)}] \item \label{IP}\textbf{Inverse Problem.} Can one determine $q$ via the DN map $\Lambda_q$? \end{enumerate} The following main result gives an affirmative answer to \ref{IP}. \begin{theorem}[Global uniqueness]\label{thm: uniqueness} Let $\Omega\subset \R^n$ be a bounded Lipschitz domain for $n\in \N$, and $W_1,W_2\Subset \Omega_e$ be nonempty bounded open sets. Assume that $q_j \in L^\infty(\Omega)$ satisfies \eqref{eigenvalue} for $j=1,2$.
Let $\Lambda_{q_j}$ be the DN map of \begin{equation}\label{eq: main equation j=12} \begin{cases} \LC \log (-\Delta) +q_j \RC u_j =0 &\text{ in }\Omega, \\ u_j =f &\text{ in }\Omega_e, \end{cases} \end{equation} for $j=1,2$. Suppose that \begin{equation}\label{eq: same DN map in thm 1} \langle \Lambda_{q_1} f, g \rangle = \langle \Lambda_{q_2} f, g \rangle \quad \text{for any $f\in C_c^{\infty}(W_1)$, $g \in C_c^\infty(W_2)$.} \end{equation} Then there holds $q_1=q_2$ in $\Omega$. \end{theorem} The proof of the preceding theorem is based on the UCP and Runge approximation for the logarithmic Schr\"odinger equation \eqref{eq: main equation}.\\ On top of that, we also have a constructive uniqueness result for $q$ using a \emph{monotonicity test}. Indeed, similarly to the case of the fractional problem considered in \cite{harrach2017nonlocal-monotonicity}, one can derive an if-and-only-if monotonicity relation between bounded potentials and the associated DN maps. More precisely, we have the following. \begin{theorem}[If-and-only-if monotonicity] \label{thm: equivalent monotonicity} Let $\Omega\subset \R^n$ be a bounded Lipschitz domain, and assume that $q_j \in L^\infty(\Omega)$ satisfies \eqref{eigenvalue} for $j=1,2$. Then the following are equivalent: \begin{enumerate}[(i)] \item $q_2 \ge q_1$ a.e. in $\Omega$; \item $\left\langle (\Lambda_{q_2}-\Lambda_{q_1})f,f \right\rangle \ge 0$ for all $f \in \H_T(\Omega_e)$; \item There exists a nonempty bounded open set $W\Subset \Omega_e$ with the property that \begin{equation}\label{quadratic sense} \left\langle \LC \Lambda_{q_2}-\Lambda_{q_1} \RC f, f \right\rangle \geq 0, \qquad \text{for all $f\in C^\infty_c(W).$} \end{equation} \end{enumerate} \end{theorem} In the following, if $W\Subset \Omega_e$ is a nonempty bounded open set, we say that \begin{equation} \label{eq:definition-quadratic-sense} \text{$\Lambda_{q_1}\leq\Lambda_{q_2}\;$ in $\;W$} \end{equation} if (\ref{quadratic sense}) holds.
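A finite-dimensional analogue may help build intuition for the monotonicity relation between potentials and DN maps. For a discrete Schr\"odinger operator $L+\operatorname{diag}(q)$ on a graph, the role of the DN map is played by the Schur complement onto a set of boundary nodes, and its variational characterisation ($u_b^{\top} S u_b$ is the minimum of the full quadratic form over interior values) makes it monotone in $q$. This is only a loose toy analogue, not the operator studied in this paper; the discretisation (a path graph, our choice of boundary nodes) is our own.

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of a path with n nodes (positive semidefinite)."""
    L = 2.0 * np.eye(n)
    L[0, 0] = L[-1, -1] = 1.0
    for i in range(n - 1):
        L[i, i + 1] = L[i + 1, i] = -1.0
    return L

def dn_matrix(q, boundary):
    """Discrete 'DN map': Schur complement of L + diag(q) onto boundary nodes.

    Since u_b^T S u_b = min over interior values of the full quadratic form,
    and the form increases pointwise with q, S is monotone in q."""
    n = len(q)
    A = path_laplacian(n) + np.diag(q)
    b = list(boundary)
    i = [j for j in range(n) if j not in boundary]
    Abb, Abi = A[np.ix_(b, b)], A[np.ix_(b, i)]
    Aii, Aib = A[np.ix_(i, i)], A[np.ix_(i, b)]
    return Abb - Abi @ np.linalg.solve(Aii, Aib)

rng = np.random.default_rng(0)
n, boundary = 12, [0, 11]
q1 = rng.uniform(0.1, 1.0, n)
q2 = q1 + rng.uniform(0.0, 1.0, n)   # q2 >= q1 pointwise
gap = dn_matrix(q2, boundary) - dn_matrix(q1, boundary)
min_eig = np.linalg.eigvalsh(gap).min()  # nonnegative up to round-off
```

The sign convention matches the theorem: a larger potential yields a larger DN quadratic form.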
\begin{remark} In the special case $W_1 = W_2=W$, Theorem \ref{thm: uniqueness} can be viewed as a corollary of Theorem~\ref{thm: equivalent monotonicity} since in this case condition (\ref{eq: same DN map in thm 1}) implies that both $\Lambda_{q_1}\leq \Lambda_{q_2}$ and $\Lambda_{q_2}\leq \Lambda_{q_1}$ hold in $W$ and therefore $q_1 =q_2$ in $\Omega$ by Theorem~\ref{thm: equivalent monotonicity}. \end{remark} As indicated above, the if-and-only-if monotonicity relation given by Theorem~\ref{thm: equivalent monotonicity} yields a constructive global uniqueness result in the case where $\lambda_1(\Omega) >0$ and $q \in L^\infty(\Omega)$ is nonnegative. For this purpose, let us recall that a point $x\in \R^n$ is a point of (Lebesgue) \emph{density one} for a measurable set $E$ if \begin{align} \lim_{r\to 0}\dfrac{\left|B_r(x)\cap E\right|}{\left|B_r(x)\right|}=1, \end{align} where $B_r(x)$ denotes the ball of radius $r$ centered at $x$. The space of \emph{density one simple functions} can be defined by \begin{align*} \Sigma := \bigg\{ \varphi=\sum_{j=1}^m a_j \chi_{E_j}:\, a_j\in \R,\ \text{$E_j\subseteq \Omega$ is a density one set} \bigg\}, \end{align*} where we call a subset $E\subseteq \Omega$ a \emph{density one set} provided that $E$ is nonempty, measurable, and every point of $E$ is a point of density one for $E$. It is not hard to see that density one sets have positive measure, and that finite intersections of density one sets are again density one sets. Let, moreover, $\Sigma_{+,0}\subseteq \Sigma$ denote the subset of {\em nonnegative} density one simple functions. \begin{theorem}[Constructive uniqueness]\label{thm: const. uniqueness} Let $\Omega\subset \R^n$ be a bounded Lipschitz domain with $\lambda_1(\Omega)>0$, and let $q \in L^\infty(\Omega)$ be nonnegative. Moreover, let $W\Subset \Omega_e$ be an arbitrary nonempty bounded open set.
Then the potential $q$ can be determined uniquely via the formula \begin{equation} q(x)=\sup \left\{ \varphi(x)\:: \: \text{$\varphi \in \Sigma_{+,0}$ and $\Lambda_\varphi \leq \Lambda_q$ in $W$} \right\}, \text{ for a.e. }x\in \Omega, \end{equation} where the relation $\Lambda_\varphi \leq \Lambda_q$ in $W$ is defined as above. \end{theorem} Note that the proofs of both Theorems \ref{thm: uniqueness} and \ref{thm: const. uniqueness} rely on the Runge approximation and its applications for the logarithmic Schr\"odinger equation.\\ \noindent \textbf{Organization of this article.} In Section \ref{sec: preliminaries}, we introduce several basic function spaces and a rigorous definition of the logarithmic Laplacian. In Section \ref{sec: forward problem}, we demonstrate the well-posedness of the exterior value problem \eqref{eq: main equation} in suitable function spaces. The existence of the DN map is guaranteed by the well-posedness of \eqref{eq: main equation}. We prove Theorem \ref{thm: uniqueness} in Section \ref{sec: pf of global unique} by using a suitable integral identity and the Runge approximation. Finally, in Section \ref{sec: mono}, we derive the monotonicity relations between the DN maps and nonnegative potentials as given in Theorem~\ref{thm: equivalent monotonicity}, which will be applied to show the constructive uniqueness stated in Theorem \ref{thm: const. uniqueness}. \section{Preliminaries}\label{sec: preliminaries} In this section, let us introduce and recall several notions that are useful for our study. Our notation for the Fourier transform is \begin{equation} \widehat{f}(\xi) =\mathcal{F}f(\xi)=\int_{\R^n} e^{-\mathrm{i}x\cdot \xi} f(x)\, dx, \end{equation} where $\mathrm{i}=\sqrt{-1}$ denotes the imaginary unit. In what follows, let us always consider $u:\R^n\to \R$ as a (Lebesgue) measurable function, and let $\Omega \subset \R^n$ be a bounded domain with Lipschitz boundary.
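As a quick sanity check of this convention: with $\widehat{f}(\xi)=\int_{\R}e^{-\mathrm{i}x\xi}f(x)\,dx$ in one dimension, the Gaussian $f(x)=e^{-x^2/2}$ has $\widehat{f}(\xi)=\sqrt{2\pi}\,e^{-\xi^2/2}$, so the $2\pi$ factor sits entirely on the inverse transform. The following numerical quadrature (grid parameters are our own choices) is an illustration only.

```python
import numpy as np

# Sample f(x) = exp(-x^2/2) on a wide grid; the tails are negligible by x = +/-20.
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

def fourier_of_gaussian(xi):
    """hat f(xi) = integral of e^{-i x xi} f(x) dx, by an equispaced sum.

    For smooth, rapidly decaying integrands this is the trapezoidal rule up to
    exponentially small endpoint terms, hence extremely accurate here."""
    return dx * np.sum(np.exp(-1j * x * xi) * f)
```

With this convention, $\widehat{f}(0)=\int f = \sqrt{2\pi}$, which the quadrature reproduces to high precision.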
Given $s\in (0,1)$, the fractional Sobolev space $H^s(\R^n)$ is the standard $L^2$-based Sobolev space with the norm $\norm{u}_{H^s(\R^n)}=\big\| \LC 1+\abs{\xi}^2\RC^{s/2} \widehat{u} \big\|_{L^2(\R^n)}$. \subsection{The logarithmic Laplacian and associated function spaces} Given $s\in (0,1)$, recall that the fractional Laplacian $(-\Delta)^s$ can be defined via the Fourier transform \begin{equation} (-\Delta)^s u \vcentcolon = \mathcal{F}^{-1} \big( \abs{\xi}^{2s}\widehat{u}(\xi)\big), \quad \text{for} \quad u \in \mathcal{S}(\R^n), \end{equation} where $\mathcal{S}(\R^n)$ stands for the Schwartz space. Alternatively, the fractional Laplacian can be written as a hypersingular integral operator of the form \begin{equation}\label{eq: integral representation of fractional Laplacian} (-\Delta)^s u (x) =\mathrm{P.V.}\int_{\R^n}\LC u(x)-u(z)\RC \mathcal{K}_s(x-z)\, dz \end{equation} with the symmetric kernel \begin{equation}\label{eq: kernel for fractional Laplacian} \mathcal{K}_s(z)\vcentcolon = \frac{ C_{n,s} }{\abs{z}^{n+2s}}, \end{equation} and \begin{equation}\label{constant C_n,s} C_{n,s}\vcentcolon= \frac{4^s\Gamma\LC \frac{n}{2}+s\RC }{\pi^{n/2}\abs{\Gamma(-s)}}. \end{equation} The logarithmic Laplacian $\loglap$ appears in the study of $(-\Delta)^s$ in the limit $s \to 0$. Given $u\in C^\alpha_c(\R^n)$ for some $\alpha>0$, $\loglap u(x)$ can be uniquely defined by the asymptotic expansion\footnote{Here $o(s)$ denotes the little-$o$ notation, i.e., $\frac{o(s)}{s}\to 0$ as $s\to 0^+$.} \begin{equation} (-\Delta)^s u(x)= u(x)+s \loglap u (x) +o(s), \quad \text{as} \quad s\to 0^+, \end{equation} so $\loglap u$ appears as a linear correction term in the second limit of \eqref{eq: limit of fra-Lap}. Formally, we thus have \begin{equation} \frac{d}{ds} \Big|_{s=0} (-\Delta)^s u(x)=\loglap u(x), \end{equation} and $\loglap u$ has the symbol $2\log\abs{\xi}$ given by \eqref{eq: def log-Lap Fourier}.
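At the symbol level, the expansion above reads $\abs{\xi}^{2s}=e^{2s\log\abs{\xi}}=1+2s\log\abs{\xi}+o(s)$, so the difference quotient $(\abs{\xi}^{2s}-1)/s$ converges to $2\log\abs{\xi}$ as $s\to 0^+$. This is easy to check numerically; the frequency grid and the values of $s$ below are our own choices, used purely as an illustration.

```python
import numpy as np

# Frequencies bounded away from xi = 0, where the logarithmic symbol blows up.
xi = np.geomspace(0.1, 10.0, 200)
target = 2.0 * np.log(xi)  # Fourier symbol of the logarithmic Laplacian

def symbol_quotient(s):
    """Difference quotient (|xi|^{2s} - 1) / s of the fractional symbol at s = 0."""
    return (xi ** (2 * s) - 1.0) / s

def max_error(s):
    """Maximum deviation of the quotient from 2 log|xi| on the grid."""
    return np.max(np.abs(symbol_quotient(s) - target))
```

The deviation shrinks linearly in $s$, consistent with the Taylor remainder $s\,(2\log\abs{\xi})^2/2$.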
Moreover, from \cite[Theorem 1.1]{CW2019dirichlet} again, it is known that the logarithmic Laplacian admits an integral representation \begin{equation}\label{eq: integral representation} \begin{split} \loglap u(x)= c_n \mathrm{P.V.}\int_{B_1(x)}\frac{u(x)-u(z)}{\abs{x-z}^n}\, dz - c_n \int_{\R^n \setminus B_1(x)} \frac{u(z)}{\abs{x-z}^n} \, dz +\rho_n u(x), \end{split} \end{equation} for $x\in \R^n$ and $u\in C^\alpha_c(\R^n)$, for some $\alpha>0$, where $B_r(x)$ is the ball centered at $x$ with radius $r>0$, and \begin{equation}\label{eq: constant c_n and rho_n} c_n \vcentcolon = \pi^{-n/2}\Gamma\LC \frac{n}{2} \RC =\frac{2}{\left| \mathbb S^{n-1}\right|} , \quad \rho_n\vcentcolon = 2\log 2 +\psi\LC \frac{n}{2}\RC -\gamma. \end{equation} Here $\left| \mathbb S^{n-1}\right|$, $\gamma \vcentcolon= -\Gamma'(1)$ and $\psi =\frac{\Gamma'}{\Gamma}$ are the $(n-1)$-dimensional volume of the unit sphere in $\R^n$, the Euler--Mascheroni constant, and the digamma function, respectively. In the distributional sense, the logarithmic Laplacian is defined on the function space \begin{equation}\label{L^1_0(R^n)} L^1_0(\R^n) \vcentcolon = \left\{ u\in L^1_{\mathrm{loc}}(\R^n) \vcentcolon \, \int_{\R^n}\frac{\abs{u(x)}}{(1+\abs{x})^n }\, dx <\infty\right\}. \end{equation} More precisely, for a function $u \in L^1_0(\R^n)$, the (distributional) logarithmic Laplacian is well-defined by setting $$ \LC \loglap u\RC (\phi) = \int_{\R^n} u\LC \loglap\phi\RC dx, \quad \text{for $\phi \in C^\infty_c(\R^n)$.} $$ On $\R^n$, the natural energy space associated with the logarithmic Laplacian is defined by $$ \mathbb{H}(\R^n)= \left\{ u \in L^2(\R^n)\::\:\int_{\R^n}\bigl|\log|\xi|\bigr| |\widehat u(\xi)|^2\,d\xi < \infty\right\}.
$$ It is a Hilbert space with the scalar product \begin{equation}\label{eq: inner product R n} (v,w) \mapsto \langle v,w \rangle_{\mathbb{H}(\R^n)} \vcentcolon= \langle v,w \rangle_{L^2(\R^n)} + \int_{\R^n} \bigl|\log|\xi|\bigr| \widehat v(\xi) \overline{\widehat w(\xi)} \, d \xi \end{equation} and induced norm $\norm{v}_{\mathbb{H}(\R^n)}=\sqrt{\langle v,v \rangle _{\mathbb{H}(\R^n)}}$. Next, the bilinear form associated with the logarithmic Laplacian is given by \begin{equation}\label{eq:def-bilinear-form-loglap} \begin{split} B_0 &: \mathbb{H}(\R^n) \times \mathbb{H}(\R^n) \to \R, \\ B_0(v,w)&:= 2\int_{\R^n} \log |\xi| \widehat v(\xi) \overline{\widehat w(\xi)}d\xi. \end{split} \end{equation} Indeed, this bilinear form is well-defined and continuous on $\H(\R^n)$ since, by the Cauchy--Schwarz inequality, \begin{equation}\label{B-0-upper-bound} \begin{split} \int_{\R^n} \bigl|\log |\xi| \bigr| |\widehat v(\xi)| |\overline{\widehat w(\xi)}|\,d\xi &\le \Bigl(\int_{\R^n} \bigl|\log |\xi|\bigr||\widehat{v}(\xi)|^2\,d\xi\Bigr)^{1/2}\Bigl(\int_{\R^n} \bigl|\log |\xi|\bigr||\widehat{w}(\xi)|^2\,d\xi\Bigr)^{1/2} \\ &\le \|v\|_{\H(\R^n)}\|w\|_{\H(\R^n)} \quad \text{for $v,w \in \H(\R^n)$.} \end{split} \end{equation} The bilinear form $B_0$ allows us to define $\loglap$ in the weak sense as an operator $$ \loglap: \mathbb{H}(\R^n) \to \mathbb{H}(\R^n)^\ast $$ by $$ \left \langle \loglap v, w \right\rangle = B_0 (v,w) $$ for $v,w \in \mathbb{H}(\R^n)$, where $\left\langle \cdot, \cdot \right\rangle $ denotes the duality pairing between $\mathbb{H} (\R^n)^\ast$ and $\mathbb{H}(\R^n)$.
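Returning briefly to the constants in \eqref{eq: constant c_n and rho_n}: the identity $c_n=\pi^{-n/2}\Gamma\left(\frac n2\right)=2/\abs{\mathbb S^{n-1}}$ follows from $\abs{\mathbb S^{n-1}}=2\pi^{n/2}/\Gamma\left(\frac n2\right)$, and is easy to confirm numerically. This sketch is an illustration only; the function names are our own.

```python
import math

def sphere_area(n):
    """(n-1)-dimensional volume of the unit sphere S^{n-1} in R^n."""
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

def c_n(n):
    """Constant c_n = pi^{-n/2} * Gamma(n/2) from the integral representation
    of the logarithmic Laplacian; equals 2 / |S^{n-1}|."""
    return math.pi ** (-n / 2) * math.gamma(n / 2)
```

For instance, in dimension $n=2$ one gets $\abs{\mathbb S^1}=2\pi$ and hence $c_2=1/\pi$.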
By the definition \eqref{eq:def-bilinear-form-loglap} of $B_0$, we have the symmetry property \begin{equation}\label{eq: symmetry log} B_0(v,w)=B_0(w,v), \quad \text{for $v,w \in \mathbb{H}(\R^n)$.} \end{equation} Finally, for an open subset $\Omega \subset \R^n$, we consider the closed subspace \begin{equation}\label{eq: H(Omega) space} \begin{split} \mathbb{H}_0(\Omega) \vcentcolon = \{ u \in \mathbb{H}(\R^n): \, u=0 \text{ in }\Omega_e \} \end{split} \end{equation} of $\mathbb{H}(\R^n)$, which is again a Hilbert space. We note the following observation. \begin{lemma} \label{equivalent-norms} Let $\Omega \subset \R^n$ be an open set of finite measure. Then the inner product \begin{equation}\label{eq: inner product} (v,w) \mapsto \langle v,w \rangle _{\mathbb{H}_0(\Omega)} \vcentcolon= \int_{\abs{x-z}\leq 1} \frac{\LC v(x)-v(z)\RC \LC w(x)-w(z)\RC }{\abs{x-z}^n}\, dxdz \end{equation} induces an equivalent norm $$v \mapsto \norm{v}_{\mathbb{H}_0(\Omega)}=\sqrt{\langle v,v \rangle _{\mathbb{H}_0(\Omega)} } \text{ on } \mathbb{H}_0(\Omega). $$ Moreover, $\mathbb{H}_0(\Omega)$ is compactly embedded into $L^2(\Omega)$. \end{lemma} \begin{proof} In the following, the letter $C>0$ stands for various positive constants. We shall use the symbol $\xi \mapsto \log (1 + |\xi|^2)$, associated with the logarithmic Schrödinger operator $\log(-\Delta + 1)$, as a comparison function. The operator $\log(-\Delta + 1)$ is a singular integral operator with kernel function $$ z \mapsto \ell(z)= C_n K_{n/2}(|z|)\abs{z}^{-n/2}, $$ for some constant $C_n$ depending only on $n\in \N$, where $K_{\nu}$ denotes the modified Bessel function of the second kind with index $\nu$ (see e.g. \cite[Section 1]{feulefack}).
As a consequence, for every $\phi \in L^2(\R^n)$ we have \begin{equation} \label{eq:log-schroedinger-comparison} \int_{\R^n}\log(1+|\xi|^2)|\widehat \phi(\xi)|^2\,d\xi = \int_{\R^n \times \R^n} \ell(|x-z|)\LC \phi(x)-\phi(z)\RC^2 \, dxdz, \end{equation} where the finiteness of one side of this equality implies the finiteness of the other. We also note that, by the positivity of $K_{\nu}$ and its asymptotics at $z=0$ (see e.g. \cite[Section 1]{feulefack} again), we have $$ \frac{\ell(z)}{C_0} \le |z|^{-n} \le C_0 \ell(z), \quad \text{for $z \in B_1(0) \setminus \{0\}$ with a constant $C_0>0$.} $$ Therefore, for every $\phi \in \mathbb{H}_0(\Omega) \subset \mathbb{H}(\R^n)$, we now have the estimate \begin{equation} \begin{split} \int_{\abs{x-z}\leq 1} \frac{\LC \phi(x)-\phi(z) \RC^2}{\abs{x-z}^n}\, dxdz &\le C \int_{\R^n \times \R^n} \ell(|x-z|) \LC \phi(x)-\phi(z)\RC^2\, dxdz\\ &= C \int_{\R^n}\log(1+|\xi|^2)|\widehat \phi(\xi)|^2\,d\xi\\ &\le C \int_{\R^n} (1+\bigl|\log|\xi|\bigr|)|\widehat \phi(\xi)|^2\,d\xi\\ &= C \|\phi\|_{\mathbb{H}(\R^n)}^2. 
\end{split} \end{equation} Moreover, by (\ref{eq:log-schroedinger-comparison}) we have \begin{equation*} \begin{split} \|\phi\|_{\mathbb{H}(\R^n)}^2 &= \int_{\R^n} (1+ \abs{\log|\xi|})|\widehat \phi(\xi)|^2\,d\xi \\ &\le \int_{\R^n}\bigl(1 - 1_{B_1(0)}(\xi)\log |\xi| + \log(1+|\xi|^2)\bigr)|\widehat \phi(\xi)|^2\,d\xi\\ &\le \|\widehat \phi\|_{L^2(\R^n)}^2+ \|\widehat \phi\|_{L^\infty(B_1(0))}^2 \int_{B_1(0)}(-\log |\xi|)d\xi\\ &\quad \, +C \int_{\R^n \times \R^n} \ell(|x-z|) \LC \phi(x)-\phi(z) \RC^2\, dxdz\\ &\le \|\phi\|_{L^2(\Omega)}^2+ C \|\phi\|_{L^1(\Omega)}^2 +C \int_{\abs{x-z}\leq 1} \frac{\LC \phi(x)-\phi(z)\RC^2}{|x-z|^n}\, dxdz\\ &\quad \, + C \int_{\abs{x-z}> 1}\ell(|x-z|) \LC \phi^2(x)+\phi^2(z)\RC \, dxdz\\ &\le C \Bigl(\|\phi\|_{L^2(\Omega)}^2 + \|\phi\|_{\mathbb{H}_0(\Omega)}^2 + \|\phi\|_{L^2(\R^n)}^2\int_{\R^n \setminus B_1(0)} \ell(y)\,dy \Bigr) \\ &\le C \bigl(\|\phi\|_{L^2(\Omega)}^2 + \|\phi\|_{\mathbb{H}_0(\Omega)}^2\bigr), \end{split} \end{equation*} where $1_{B_1(0)}(\xi)=\begin{cases} 1 &\text{ for }\xi \in B_1(0)\\ 0 &\text{ otherwise} \end{cases}$ is the characteristic function. Here we used the assumption that $\Omega$ has finite measure $\abs{\Omega}\in (0,\infty)$, so that the H\"older inequality $$ \|\phi\|_{L^1(\Omega)} \le |\Omega|^{1/2} \|\phi\|_{L^2(\Omega)} $$ holds. Combining the above estimates with the obvious bound $$ \|\phi\|_{L^2(\Omega)} = \|\phi\|_{L^2(\R^n)}= \|{\widehat \phi}\,\|_{L^2(\R^n)} \le \|\phi\|_{\mathbb{H}(\R^n)} \quad \text{for $\phi \in \mathbb{H}_0(\Omega)$}, $$ we deduce that $$ \phi \mapsto \|\phi\|_* := \bigl(\|\phi\|_{L^2(\Omega)}^2 + \|\phi\|_{\mathbb{H}_0(\Omega)}^2\bigr)^{\frac{1}{2}} $$ is an equivalent norm to $\|\cdot \|_{\mathbb{H}(\R^n)}$ on $\mathbb{H}_0(\Omega)$. Moreover, by \cite[Theorem 1.2]{jarohs-weth}, the embedding $(\mathbb{H}_0(\Omega),\|\cdot\|_*) \hookrightarrow L^2(\Omega)$ is compact.
From this compactness and the fact that $\|\phi\|_{\mathbb{H}_0(\Omega)}>0$ for all $\phi \in \mathbb{H}_0(\Omega) \setminus \{0\}$, it follows by a standard argument that $\|\phi \|_{L^2(\Omega)} \le C \|\phi\|_{\mathbb{H}_0(\Omega)}$ for all $\phi \in \mathbb{H}_0(\Omega)$, so the norms $\|\cdot\|_*$ and $\|\cdot\|_{\mathbb{H}_0(\Omega)}$ are equivalent on $\mathbb{H}_0(\Omega)$. This proves the result. \end{proof} We also have the following useful estimate. \begin{lemma} \label{first-useful-est} For $\alpha>0$, we have $C^\alpha_c(\R^n) \subset \H(\R^n)$. Moreover, for every nonempty bounded open set $W \subset \R^n$, there exists a constant $C=C(W,\alpha)>0$ with \begin{equation}\label{eq: estimate log f} \norm{f}_{\mathbb{H}(\R^n)} \leq C \norm{f}_{C^{\alpha}(W)} \qquad \text{for all $f \in C^\alpha_c(W)$.} \end{equation} \end{lemma} \begin{proof} It clearly suffices to prove (\ref{eq: estimate log f}). Since $W \subset B_R(0)$ for $R$ chosen sufficiently large, we may assume without loss of generality that $W= B_R(0)$ for some $R>1$. Since $\norm{\cdot}_{\mathbb{H}(\R^n)}$ is equivalent to $\norm{\cdot}_{\mathbb{H}_0(W)}$ on $\mathbb{H}_0(W)$, it suffices, by approximation, to prove the estimate with $\norm{\cdot}_{\mathbb{H}_0(W)}$ in place of $\norm{\cdot}_{\mathbb{H}(\R^n)}$. Let $f \in C^\alpha_c(W)$. Then, similarly to \cite[Proposition 3.2]{CW2019dirichlet}, we can write \begin{equation}\label{first-useful-est-proof-0} \begin{split} \norm{f}_{\mathbb{H}_0(W)}^2 = \int_{\stackrel{x,z \in W}{\abs{x-z}<1}} \frac{(f(x)-f(z))^2}{|x-z|^n}\,dx dz +2 \int_{W}|f(x)|^2 \kappa_{W}(x)\,dx \end{split} \end{equation} with \begin{equation} \label{eq:kappa-omega} \kappa_{W}(x)= \int_{(\R^n\setminus \overline{W}) \cap B_1(x)}|y-x|^{-n}\, dy.
\end{equation} Since every $y \in \R^n\setminus \overline{W}$ satisfies $|y-x| \ge \dist(x,\partial W)$, integration in polar coordinates gives $$ \kappa_{W}(x) \le \left|\mathbb{S}^{n-1}\right| \Bigl[\log \frac{1}{\dist(x,\partial W)}\Bigr]_+ = \left|\mathbb{S}^{n-1}\right| \Bigl[\log \frac{1}{R-|x|}\Bigr]_+, $$ for $x \in W= B_R(0)$, which implies that \begin{equation}\label{first-useful-est-proof-1} \begin{split} \int_{W}|f(x)|^2 \kappa_{W}(x)\,dx & \le |\mathbb S^{n-1}|\|f\|_{L^\infty(W)}^2\int_{B_R(0)} \Bigl[\log \frac{1}{R-|x|}\Bigr]_+dx \\ &\le |\mathbb S^{n-1}|^2 \|f\|_{C^\alpha(W)}^2\int_{R-1}^{R} r^{n-1}\log \frac{1}{R-r}\, dr \\ &\le C\|f\|_{C^\alpha(W)}^2, \end{split} \end{equation} with a constant $C=C(W)>0$. Moreover, we have \begin{equation}\label{first-useful-est-proof-2} \begin{split} \int_{\stackrel{x,z \in W}{\abs{x-z}<1}} \frac{(f(x)-f(z))^2}{|x-z|^n}\, dxdz &\le \|f\|_{C^\alpha(W)}^2 \int_{\stackrel{x,z \in W}{\abs{x-z}<1}}|x-z|^{\alpha-n} \, dxdz \\ &\le |W| \|f\|_{C^\alpha(W)}^2 \int_{B_1(0)}|y|^{\alpha-n}\, dy \\ &\le \frac{|W||\mathbb{S}^{n-1}|}{\alpha} \|f\|_{C^\alpha(W)}^2. \end{split} \end{equation} Combining~\eqref{first-useful-est-proof-0}, \eqref{first-useful-est-proof-1} and \eqref{first-useful-est-proof-2}, we obtain \eqref{eq: estimate log f} with a constant $C=C(W,\alpha)>0$. This proves the assertion. \end{proof} Throughout the remainder of this paper, let us denote the $L^2$-inner product by \begin{equation} \LC \phi, \psi\RC_{L^2(A)}\vcentcolon = \int_{A} \phi \psi \, dx, \end{equation} for any $\phi,\psi\in L^2(A)$ and any measurable set $A\subset \R^n$. \subsection{The energy form of the logarithmic Schrödinger type operator} Let $\Omega \subset \R^n$ be a bounded open set with Lipschitz boundary and $q \in L^\infty(\Omega)$.
We consider the bilinear form associated with the operator $\loglap +q$ given by \begin{equation} \label{eq:def-bilinear-form-loglap-q} \begin{split} B_q & : \mathbb{H}(\R^n) \times \mathbb{H}(\R^n) \to \R, \\ B_q(v,w)&:=B_0(v,w)+ \LC q v , w \RC_{L^2(\Omega)}, \end{split} \end{equation} where $B_0$ is defined in (\ref{eq:def-bilinear-form-loglap}). We recall that the variational characterization of $\lambda_1(\Omega)$, the first Dirichlet eigenvalue of $\loglap$ on $\Omega$, is given by \begin{equation} \label{eq:var-char-lambda-1} \lambda_1(\Omega) = \inf_{\stackrel{u \in \H_0(\Omega)}{u \not = 0}}\frac{B_0(u,u)}{\|u\|_{L^2(\Omega)}^2}. \end{equation} Hence the eigenvalue condition \eqref{eigenvalue}, if satisfied, implies that \begin{equation} \label{eq:eigenvalue-reformulated} B_q(u,u) \ge \lambda_0 \|u\|_{L^2(\Omega)}^2 \qquad \text{for all $u \in \H_0(\Omega)$.} \end{equation} We also note the following useful estimates. \begin{lemma} \label{B-q-estimates} Let $B_q(\cdot, \cdot)$ be the bilinear form given by \eqref{eq:def-bilinear-form-loglap-q}. Then we have \begin{equation} \label{eq:B-q-est-1} |B_q(u,v)| \le (2+\|q\|_{L^\infty(\Omega)}) \|u\|_{\H(\R^n)}\|v\|_{\H(\R^n)} \quad \text{for $u,v \in \H(\R^n)$,} \end{equation} and \begin{equation} \label{eq:B-q-est-2} B_q(u,u) \ge 2 \|u\|_{\H(\R^n)}^2 - C \|u\|_{L^2(\Omega)}^2 \qquad \text{for $u \in \H_0(\Omega)$} \end{equation} with a constant $C>0$. If moreover (\ref{eigenvalue}) holds, then \begin{equation} \label{eq:B-q-est-3} B_q(u,u) \ge C \|u\|_{\H(\R^n)}^2 \qquad \text{for $u \in \H_0(\Omega)$} \end{equation} with a constant $C>0$.
\end{lemma} \begin{proof} For $u,v \in \H(\R^n)$ we have, by (\ref{B-0-upper-bound}), \begin{align*} |B_q(u,v)| &\le |B_0(u,v)| + |\LC q u , v \RC_{L^2(\Omega)}| \\ &\le 2\int_{\R^n}|\log |\xi|| |\widehat{u}(\xi)| |\widehat{ v}(\xi)|d\xi + \|q\|_{L^\infty(\Omega)}\|u\|_{L^2(\Omega)}\|v\|_{L^2(\Omega)}\\ &\le 2 \|u\|_{\H(\R^n)}\|v\|_{\H(\R^n)} + \|q\|_{L^\infty(\Omega)}\|u\|_{L^2(\R^n)}\|v\|_{L^2(\R^n)}\\ &\le (2 + \|q\|_{L^\infty(\Omega)})\|u\|_{\H(\R^n)}\|v\|_{\H(\R^n)}, \end{align*} as claimed in (\ref{eq:B-q-est-1}). Moreover, we have \begin{align*} B_0(u,u) &= 2\int_{\R^n}\log|\xi| |\widehat{u}(\xi)|^2\,d\xi \\ &= 2 \int_{\R^n}(1+ \bigl| \log|\xi|\bigr|) |\widehat{u}(\xi)|^2\,d\xi -2 \int_{\R^n}|\widehat{u}(\xi)|^2\,d\xi \\ &\quad \, + 4 \int_{B_1(0)}(\log|\xi|)|\widehat{u}(\xi)|^2\,d\xi\\ &\geq 2\|u\|_{\H(\R^n)}^2 -2\|u\|_{L^2(\Omega)}^2 +4\|\widehat{u}\|_{L^\infty(\R^n)}^2 \int_{B_1(0)} \log|\xi| \, d\xi\\ &\ge 2\|u\|_{\H(\R^n)}^2 -2\|u\|_{L^2(\Omega)}^2 -C \|u\|_{L^1(\Omega)}^2\\ &\ge 2\|u\|_{\H(\R^n)}^2 -C \|u\|_{L^2(\Omega)}^2, \quad \text{for $u \in \H_0(\Omega)$.} \end{align*} Here we used again the fact that $\|u\|_{L^1(\Omega)} \le C \|u\|_{L^2(\Omega)}$, since $\Omega$ has finite measure. We conclude that $$ B_q(u,u) \ge B_0(u,u) -\|q\|_{L^\infty(\Omega)}\|u\|_{L^2(\Omega)}^2 \ge 2 \|u\|_{\H(\R^n)}^2 - C \|u\|_{L^2(\Omega)}^2, $$ for $u \in \H_0(\Omega)$, as claimed in (\ref{eq:B-q-est-2}). Finally, if (\ref{eigenvalue}) holds, then by (\ref{eq:eigenvalue-reformulated}) and (\ref{eq:B-q-est-2}) we have, for $u \in \H_0(\Omega)$ and $\eps \in (0,1)$, \begin{align*} B_q(u,u) &= (1-\eps)B_q(u,u) + \eps B_q(u,u)\\ &\ge (1-\eps)\lambda_0\|u\|_{L^2(\Omega)}^2 + \eps \big(2\|u\|_{\H(\R^n)}^2 - C \|u\|_{L^2(\Omega)}^2\big)\\ &= 2 \eps \|u\|_{\H(\R^n)}^2 + \big((1-\eps)\lambda_0 - C\eps\big)\|u\|_{L^2(\Omega)}^2. \end{align*} Since $\lambda_0>0$, we can then choose $\eps \in (0,1)$ such that $(1-\eps)\lambda_0 - C\eps \ge 0$.
Then (\ref{eq:B-q-est-3}) holds with $C = 2 \eps$. This concludes the proof. \end{proof} \section{The forward problem}\label{sec: forward problem} Let $\Omega \subset \R^n$ be a bounded open set with Lipschitz boundary, and let $q \in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}. In this section, we study the forward Dirichlet problem for the logarithmic Schr\"odinger type equation $\LC \loglap + q \RC u = F$ in $\Omega$. For this, we need some preparations. We define the trace space $$ \H_T(\Omega_e):= \big\{f\big|_{\Omega_e} \::\: f \in \H(\R^n)\big\}. $$ To define a suitable norm on $\H_T(\Omega_e)$, we note the following result. \begin{lemma} \label{minimal extension} For every function $f \in \H_T(\Omega_e)$, the infimum \begin{equation} \label{eq:infimum-energy-extension} c_f:= \inf \{\|g\|_{\H(\R^n)} \::\: g \in \H(\R^n), \: g\big|_{\Omega_e} = f\} \end{equation} admits a minimizer $\tilde f \in \H(\R^n)$. Moreover, $\tilde f$ is uniquely determined by the property that \begin{equation} \label{eq:unique-determination} \big\langle \tilde f, h \big\rangle_{\H(\R^n)} = 0 \qquad \text{for all $h \in \H_0(\Omega)$,} \end{equation} and the map $f \mapsto \tilde f$ is linear. \end{lemma} \begin{proof} Let $f \in \H_T(\Omega_e)$. By definition of $\H_T(\Omega_e)$, the set $$ M_f:= \big\{ g \in \H(\R^n)\,: \, g\big|_{\Omega_e} = f \big\} $$ is nonempty. Let $\LC f_k \RC_{k\in \N}$ be a minimizing sequence in $M_f$ for the infimum in (\ref{eq:infimum-energy-extension}). Since $\LC f_k \RC_{k\in \N}$ is bounded in $\H(\R^n)$, we may pass to a subsequence such that $f_k \rightharpoonup \tilde f \in \H(\R^n)$ and therefore, by the weak lower semicontinuity of the norm, $$ \|\tilde f\|_{\H(\R^n)} \le \liminf_{k \to \infty} \|f_k\|_{\H(\R^n)} = c_f. $$ Moreover, we have $h_k:= f_k-f_1 \in \H_0(\Omega)$ for all $k \in \N$, and $h_k \rightharpoonup \tilde f-f_1$.
Since $\H_0(\Omega)\subset \H(\R^n)$ is a closed subspace, it follows that $\tilde f-f_1 \in \H_0(\Omega)$, hence $\tilde f \in M_f$ and $\tilde f$ is a minimizer for (\ref{eq:infimum-energy-extension}). It then follows that for any $h \in \H_0(\Omega)$ we have $$ 0 \le \lim_{t \to 0^+} \frac{1}{t}\bigl(\|\tilde f \pm t h \|_{\H(\R^n)}^2 - \|\tilde f\|_{\H(\R^n)}^2\bigr) = \pm 2 \langle \tilde f, h \rangle_{\H(\R^n)}, $$ and therefore (\ref{eq:unique-determination}) follows. Moreover, if $\tilde f_* \in M_f$ is another function satisfying (\ref{eq:unique-determination}), then $\tilde f - \tilde f_* \in \H_0(\Omega) \cap \bigl(\H_0(\Omega)\bigr)^\perp = \{0\}$ and therefore $\tilde f = \tilde f_*$. This shows that $\tilde f$ is uniquely determined by (\ref{eq:unique-determination}), and from this, the linearity of the map $f \mapsto \tilde f$ follows. This proves the assertion. \end{proof} Lemma~\ref{minimal extension} shows that a norm $\|\cdot\|_{\H_T(\Omega_e)}$ is well-defined by setting \begin{equation} \label{eq:def-trace-norm} \begin{split} \|f\|_{\H_T(\Omega_e)}: = \|\tilde f\|_{\H(\R^n)}= \inf \{\|g\|_{\H(\R^n)} \::\: g \in \H(\R^n), \: g\big|_{\Omega_e} = f\}, \end{split} \end{equation} for $f \in \H_T(\Omega_e)$. \subsection{The Dirichlet problem} For $f \in \H_T(\Omega_e)$ and $F \in L^2(\Omega)$, we now consider the Dirichlet problem \begin{equation}\label{eq: well-posedness} \begin{cases} \LC \loglap +q \RC u =F &\text{ in }\Omega, \\ u=f &\text{ in }\Omega_e. \end{cases} \end{equation} To define the notion of weak solutions, we use the bilinear form $B_q$ defined in (\ref{eq:def-bilinear-form-loglap-q}). \begin{definition}[Weak solutions] Let $\Omega \subset \R^n$ be a bounded open set with Lipschitz boundary.
Given $f \in \mathbb{H}_T(\Omega_e)$ and $F \in L^2(\Omega)$, a function $u\in \mathbb{H}(\R^n)$ is called a weak solution to \eqref{eq: well-posedness} if $u \equiv f$ in $\Omega_e$ and \begin{equation} B_q (u,\varphi) = \LC F, \varphi \RC_{L^2(\Omega)} ,\quad \text{for any $\varphi \in \mathbb{H}_0(\Omega)$.} \end{equation} \end{definition} We then have the following well-posedness result. \begin{lemma}[Well-posedness]\label{lem: well-posedness} Let $\Omega \subset \R^n$ be a bounded open set with Lipschitz boundary, and $ q\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}. Then, for any $f \in \H_T(\Omega_e)$ and $F \in L^2(\Omega)$, there exists a unique weak solution $u\in \mathbb{H}(\R^n)$ to \eqref{eq: well-posedness}. In addition, there holds \begin{equation}\label{eq: well-posed estimate} \norm{u}_{\mathbb{H}(\R^n)}\leq C \LC \norm{F}_{L^2(\Omega)} + \|f\|_{\H_T(\Omega_e)}\RC, \end{equation} for some constant $C>0$ independent of $u$, $F$, $f$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem: well-posedness}] Let $\tilde f$ be the $\|\cdot\|_{\H(\R^n)}$-minimizing extension of $f$ given by Lemma~\ref{minimal extension}, which satisfies $\|\tilde f\|_{\H(\R^n)} = \|f\|_{\H_T(\Omega_e)}$. Then $u\in \mathbb{H}(\R^n)$ is a weak solution to \eqref{eq: well-posedness} if and only if $v= u-\tilde f \in \mathbb{H}_0(\Omega)$, and $v$ satisfies \begin{equation} \label{weak-sol-transformed} B_q (v,\varphi) = \LC F, \varphi \RC_{L^2(\Omega)}-B_q(\tilde f,\varphi) ,\quad \text{for any $\varphi \in \mathbb{H}_0(\Omega)$.} \end{equation} The existence of a unique $v \in \mathbb{H}_0(\Omega)$ with this property follows from the Lax-Milgram theorem, since, by Lemma~\ref{B-q-estimates}, $B_q$ is a continuous bilinear form on $\mathbb{H}_0(\Omega)$ satisfying (\ref{eq:B-q-est-3}). This shows the existence of a unique weak solution $u\in \mathbb{H}(\R^n)$ to \eqref{eq: well-posedness}, and $u = v + \tilde f$ with $v \in \mathbb{H}_0(\Omega)$ as above. 
Moreover, by (\ref{eq:B-q-est-3}) and (\ref{eq:B-q-est-1}) we have, with a constant $C>0$, \begin{align*} \|v\|_{\H(\R^n)}^2 &\le C B_q(v,v)\\ &= C\bigl( \LC F, v \RC_{L^2(\Omega)}-B_q(\tilde f,v)\bigr)\\ &\le C \bigl(\|v\|_{L^2(\Omega)} \|F\|_{L^2(\Omega)} + \|\tilde f\|_{\H(\R^n)}\|v\|_{\H(\R^n)}\bigr)\\ &\le C \|v\|_{\H(\R^n)} \bigl( \|F\|_{L^2(\Omega)} + \|\tilde f\|_{\H(\R^n)}\bigr) \end{align*} and therefore \begin{equation} \label{eq:v-estimate} \|v\|_{\H(\R^n)} \le C \bigl(\|F\|_{L^2(\Omega)} + \|\tilde f\|_{\H(\R^n)}\bigr). \end{equation} Consequently, \begin{align*} \|u\|_{\H(\R^n)} & \le \|v\|_{\H(\R^n)} + \|\tilde f\|_{\H(\R^n)} \\ &\le C \bigl(\|F\|_{L^2(\Omega)} +\|\tilde f\|_{\H(\R^n)}\bigr)\\ &= C \bigl(\|F\|_{L^2(\Omega)} +\|f\|_{\H_T(\Omega_e)}\bigr), \end{align*} as claimed. \end{proof} \begin{corollary} \label{cor: well-posedness} Let $\Omega \subset \R^n$ be a bounded open set with Lipschitz boundary, let $ q\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}, and let $W \subset \Omega_e$ be a nonempty bounded open set. Then for every $f \in C^\alpha_c(W)$, there exists a unique weak solution $u\in \mathbb{H}(\R^n)$ to \eqref{eq: well-posedness}. In addition, there holds \begin{equation}\label{eq: well-posed estimate-corollary} \norm{u}_{\mathbb{H}(\R^n)}\leq C \LC \norm{F}_{L^2(\Omega)} + \|f\|_{C^\alpha(W)}\RC, \end{equation} for some constant $C>0$ independent of $u$, $F$, $f$. \end{corollary} \begin{proof} The result follows by combining Lemma~\ref{first-useful-est} and Lemma~\ref{lem: well-posedness}, since $f \in C^\alpha_c(W) \subset \H(\R^n)$ and $$ \bigl\| f\big|_{\Omega_e} \bigr\|_{\H_T(\Omega_e)} \le \|f\|_{\H(\R^n)} \le C \|f\|_{C^\alpha(W)} $$ by definition of $\|\cdot\|_{\H_T(\Omega_e)}$. \end{proof} \subsection{The DN map} With Lemma \ref{lem: well-posedness} at hand, the DN map of \eqref{eq: main equation} can be defined rigorously via the bilinear form $B_q$ defined in (\ref{eq:def-bilinear-form-loglap-q}).
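Before turning to the rigorous definition, let us record the formal interpretation of the DN map. If all functions involved are sufficiently regular, then for exterior data $f, g$ and the solution $u_f$ of \eqref{eq: main equation}, testing against any $v_g \in \H(\R^n)$ with $\left. v_g \right|_{\Omega_e} = g$ yields, formally,
$$
B_q(u_f, v_g) = \int_{\R^n} v_g \, \loglap u_f \, dx + \int_{\Omega} q\, u_f v_g \, dx = \int_{\Omega_e} g \, \loglap u_f \, dx,
$$
since $\LC \loglap + q \RC u_f = 0$ in $\Omega$ and $v_g = g$ in $\Omega_e$. In this (purely formal) sense, the DN map acts as $f \mapsto \left. \loglap u_f \right|_{\Omega_e}$, in analogy with the fractional Calder\'on problem.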
\begin{lemma}[The DN map] Let $\Omega \subset \R^n$ be a bounded Lipschitz domain, and let $q\in L^\infty(\Omega)$ fulfill \eqref{eigenvalue}. Define \begin{equation}\label{eq: defi DN map} \begin{split} \left\langle \Lambda_{q}f, g \right\rangle \vcentcolon = B_q(u_f,v_g), \end{split} \end{equation} for any $f, g \in \H_T(\Omega_e)$, where $u_f\in \mathbb{H}(\R^n)$ is the solution to \eqref{eq: main equation}, and $v_g\in \mathbb{H}(\R^n)$ can be any representative function with $\left. v_g\right|_{\Omega_e}=g$. Then \begin{equation}\label{eq: mapping DN map} \Lambda_q \vcentcolon \H_T(\Omega_e)\to \H_T(\Omega_e)^* \end{equation} is a bounded linear operator, which satisfies the symmetry property \begin{equation}\label{eq: self-adjoint} \left\langle \Lambda_{q}f, g \right\rangle = \left\langle \Lambda_{q}g, f \right\rangle \qquad \text{for any $f, g \in \H_T(\Omega_e)$.} \end{equation} \end{lemma} \begin{proof} First, if there are two functions $v_g, \wt v_g \in \mathbb{H}(\R^n)$ such that $\left. v_g\right|_{\Omega_e}= \left.\wt v_g \right|_{\Omega_e}$, then $v_g-\wt v_ g \in \mathbb{H}_0(\Omega)$. Therefore, by the linearity of $B_q(u_f ,\cdot)$, one has \begin{equation} B_q (u_f, \wt v_{g}) = B_q (u_f,v_g) + \underbrace{B_q (u_f , \wt v_{g}-v_g)}_{=\,0, \text{ since } u_f \text{ solves } \eqref{eq: main equation} \text{ and } \wt v_{g}-v_g \in \mathbb{H}_0(\Omega)} = B_q (u_f,v_g), \end{equation} which shows that \eqref{eq: defi DN map} is well-defined. To show the boundedness of $\Lambda_q$, we let $\tilde g \in \H(\R^n)$ be the $\|\cdot\|_{\H(\R^n)}$-minimizing extension of $g \in \H_T(\Omega_e)$ given by Lemma~\ref{minimal extension}, so that $\|\tilde g\|_{\H(\R^n)}= \|g\|_{\H_T(\Omega_e)}$.
Then, by Lemma~\ref{B-q-estimates} and \eqref{eq: well-posed estimate} applied with $F=0$, we have $$ \left| \left\langle \Lambda_q f, g \right\rangle \right| =\left| B_q (u_f,\tilde g)\right| \leq C\|u_f\|_{\H(\R^n)}\|\tilde g\|_{\H(\R^n)} \le C \|f\|_{\H_T(\Omega_e)}\|g\|_{\H_T(\Omega_e)}. $$ This shows that $\Lambda_q \vcentcolon \H_T(\Omega_e)\to \H_T(\Omega_e)^*$ is bounded. Finally, the symmetry of the DN map follows from the symmetry of the bilinear form $B_q$. This proves the assertion. \end{proof} We can also derive the following integral identity. \begin{lemma}[Integral identity]\label{lem: integral id} Let $\Omega \subset \R^n$ be a bounded Lipschitz domain, and let $q_j\in L^\infty(\Omega)$ satisfy (\ref{eigenvalue}) for $j=1,2$. For any $f_1,f_2 \in \H_T(\Omega_e)$, we then have \begin{equation}\label{eq: integral id} \left\langle \LC \Lambda_{q_1}-\Lambda_{q_2} \RC f_1, f_2 \right\rangle =\int_{\Omega} \LC q_1 -q_2 \RC u_{f_1}u_{f_2}\, dx, \end{equation} where $u_{f_j}$ is the unique weak solution to \begin{equation} \begin{cases} \LC \loglap + q_j \RC u_{f_j} =0 &\text{ in }\Omega, \\ u_{f_j}=f_j &\text{ in }\Omega_e, \end{cases} \end{equation} for $j=1,2$. \end{lemma} \begin{proof} Via \eqref{eq: self-adjoint}, one can obtain \begin{equation} \begin{split} \left\langle \LC \Lambda_{q_1}-\Lambda_{q_2} \RC f_1, f_2 \right\rangle &= \left\langle \Lambda_{q_1}(f_1), f_2 \right\rangle - \left\langle f_1, \Lambda_{q_2}(f_2) \right\rangle \\ &= B_{q_1}(u_{f_1},u_{f_2}) -B_{q_2}(u_{f_1},u_{f_2})\\ &=\int_{\Omega} \LC q_1 -q_2 \RC u_{f_1}u_{f_2}\, dx, \end{split} \end{equation} where we used \eqref{eq: symmetry log} in the second equality, and the last equality follows from the definition \eqref{eq:def-bilinear-form-loglap-q} of $B_q$. \end{proof} \section{Proof of Theorem \ref{thm: uniqueness}}\label{sec: pf of global unique} As we discussed before, in solving nonlocal inverse problems, one usually needs the Runge approximation for nonlocal operators.
The proof of the (qualitative) Runge approximation for $\loglap$ is based on the following unique continuation property (UCP) for the logarithmic Laplacian. Recall the definition of the space $L^1_0(\R^n)$ given in \eqref{L^1_0(R^n)}. \begin{proposition}[\text{\cite[Theorem 5.1]{CHW2023extension}}]\label{prop: UCP} Let $D\subset \R^n$ be a nonempty open set. If $u\in L^1_0(\R^n)$ satisfies \begin{equation} u=\loglap u =0 \text{ in $D$ in the distributional sense,} \end{equation} then $u \equiv 0 $ in $\R^n$. \end{proposition} Let $\Omega\subset \R^n$ be a bounded nonempty Lipschitz domain, and let $q \in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}. Then the solution operator associated with \eqref{eq: main equation} is defined as \begin{equation}\label{eq: solution op.} P_q \vcentcolon \H_T(\Omega_e) \to \mathbb{H}(\R^n), \quad f\mapsto u_f, \end{equation} where $u_f \in \mathbb{H}(\R^n)$ is the solution to \eqref{eq: main equation}, i.e., the solution of (\ref{eq: well-posedness}) with $F=0$. With Proposition \ref{prop: UCP} at hand, we can show the Runge approximation. \begin{proposition}[Runge approximation]\label{prop: Runge} Let $\Omega \subset \R^n$ be a bounded Lipschitz domain, let $ q\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}, and consider the solution operator $P_q$ given by \eqref{eq: solution op.}. Then for every nonempty open set $W\Subset \Omega_e$, the set \begin{equation} \mathcal{R}\vcentcolon = \left\{ P_q f\big|_{\Omega} : \, f\in C^\infty_c(W)\right\} \end{equation} is dense in $L^2(\Omega)$. \end{proposition} \begin{proof} By the Hahn-Banach theorem, it suffices to show that any $w\in L^2(\Omega)$ which satisfies \begin{equation}\label{eq: runge 1} \LC P_qf , w\RC_{L^2(\Omega)}=0 \quad \text{for any} \quad f\in C^\infty_c(W) \end{equation} must vanish identically.
Let $\varphi \in \mathbb{H}_0(\Omega)$ be the solution to \begin{equation}\label{eq: runge 2} \begin{cases} \LC \loglap +q \RC \varphi = w &\text{ in }\Omega, \\ \varphi =0 &\text{ in }\Omega_e. \end{cases} \end{equation} The well-posedness of the above equation was established in Section \ref{sec: forward problem}. Hence, we have \begin{equation} \begin{split} B_q (\varphi,f)=B_q(\varphi, f-P_qf) = \LC w , f-P_qf \RC_{L^2(\Omega)} =-\LC w, P_q f \RC_{L^2(\Omega)}=0, \end{split} \end{equation} for any $f\in C^\infty_c(W)$. Here, the first identity holds since $P_q f$ solves \eqref{eq: main equation} and $\varphi \in \mathbb{H}_0(\Omega)$, the third identity holds since $f \equiv 0$ in $\Omega$, and the last identity is \eqref{eq: runge 1}. Hence for any $f\in C^\infty_c(W)$ we have, since $f \equiv 0$ in $\Omega$, \begin{align*} \int_{\R^n} \varphi \loglap f\,dx &= 2 \int_{\R^n}(\log |\xi|) \widehat \varphi(\xi)\overline{\widehat f(\xi)}\,d\xi\\ &= B_q(\varphi,f) - \int_{\Omega}q(x) \varphi(x) f(x) \, dx \\ &= 0. \end{align*} This yields that $\loglap\varphi\equiv 0$ in $W$ in the distributional sense, while also $\varphi \equiv 0$ in $W$. Note that $\varphi\in L^1_0(\R^n)$ since $\mathbb{H}_0(\Omega) \subset L^1_0(\R^n)$, and this allows us to apply the UCP of Proposition \ref{prop: UCP}. Therefore, Proposition \ref{prop: UCP} implies that $\varphi\equiv 0$ in $\R^n$ and therefore $w\equiv 0$, as desired. This concludes the proof. \end{proof} \begin{remark} As we have mentioned earlier in Section \ref{sec: introduction}, the logarithmic Laplacian is a near zero-order operator, so we do not expect that the above-mentioned Runge approximation possesses a higher regularity approximation property. \end{remark} We are now ready to prove the global uniqueness result by using the Runge approximation.
\begin{proof}[Proof of Theorem \ref{thm: uniqueness}] With the condition \eqref{eq: same DN map in thm 1}, the symmetry of the operators $\Lambda_{q_j}$ and the integral identity \eqref{eq: integral id} imply \begin{align}\label{eq: integral id in proof 1} \int_{\Omega} \LC q_1 -q_2 \RC u_{f}u_{g}\, dx=0 \qquad \text{for all $f \in C^\infty_c(W_1)$, $g \in C^\infty_c(W_2)$,} \end{align} where $u_{f} = P_{q_1}f$ and $u_{g} = P_{q_2} g \in \mathbb{H}(\R^n)$ are the solutions to \begin{equation}\label{eq: equation in proof j=1} \begin{cases} \LC \loglap + q_1 \RC u_{f} =0 &\text{ in }\Omega, \\ u_{f}=f &\text{ in }\Omega_e, \end{cases} \end{equation} and \begin{equation}\label{eq: equation in proof j=2} \begin{cases} \LC \loglap + q_2 \RC u_{g} =0 &\text{ in }\Omega, \\ u_{g}=g &\text{ in }\Omega_e, \end{cases} \end{equation} respectively. By the Runge approximation in Proposition \ref{prop: Runge}, given any $h \in L^2(\Omega)$, there exist sequences of functions $f_k \in C^\infty_c(W_1)$, $g_k \in C^\infty_c(W_2)$ with $P_{q_1} f_k \to h$ and $P_{q_2}g_k \to 1$ in $L^2(\Omega)$ as $k\to \infty$. By (\ref{eq: integral id in proof 1}), we then conclude that \begin{equation} \int_{\Omega} \LC q_1 -q_2 \RC h \, dx=\lim_{k \to \infty}\int_{\Omega} \LC q_1 -q_2 \RC (P_{q_1}f_k)(P_{q_2}g_k) \, dx = 0. \end{equation} Since $h\in L^2(\Omega)$ is arbitrary, it follows that $q_1=q_2$ a.e. in $\Omega$. This proves the assertion. \end{proof} \begin{remark} It would be interesting to ask whether one can prove Theorem \ref{thm: uniqueness} using a single measurement. It is clear that if the potential $q$ is regular enough (e.g. for continuous potentials), one may directly apply the existing UCP of Proposition \ref{prop: UCP} for $\loglap$. However, for the rough potential case, one needs to study a measurable UCP result for the logarithmic Laplacian, i.e., a UCP for $\loglap$ that holds on sets of positive measure.
\end{remark} \section{Constructive uniqueness}\label{sec: mono} To prove Theorem \ref{thm: const. uniqueness}, we will utilize the monotonicity method combined with localized potentials for \eqref{eq: main equation}. The method rests on the fact that increasing the potential $q$ increases the DN map $\Lambda_q$ in the sense of quadratic forms, and vice versa. To carry out the complete argument, we need the following localized potentials. \subsection{Localized potentials} With the Runge approximation at hand, we can immediately derive the existence of localized potentials. \begin{lemma}[Localized potentials]\label{lem:localized_potentials} Let $\Omega\subset \R^{n}$ be a bounded Lipschitz domain, let $q\in L^{\infty}(\Omega)$ satisfy \eqref{eigenvalue}, and let $W\Subset\Omega_{e}$ be a nonempty open set. For every measurable set $M\subset\Omega$ with $\abs{M}>0$, there exists a sequence $f^{k}\in C_{c}^{\infty}(W)$, such that the corresponding solutions $u^{k}\in \mathbb H(\R^{n})$ of \begin{equation}\label{eq:lem_loc_pot_uk} \begin{cases} \LC \loglap +q \RC u^{k}=0 & \text{ in }\Omega,\\ u^{k}=f^k & \text{ in }\Omega_e, \end{cases} \end{equation} for all $k\in \N$, satisfy \[ \int_{M} \left|u^{k} \right|^{2}\, dx \to\infty\quad\text{and}\quad\int_{\Omega\setminus M}\left|u^{k}\right|^{2}\, dx \to0 \quad \mbox{as}\quad k\to \infty. \] \end{lemma} \begin{proof} Applying the Runge approximation of Proposition \ref{prop: Runge}, we find a sequence of functions $\widetilde{f}^{k} \in C_{c}^{\infty}(W)$ so that the corresponding solutions $\left. \widetilde{u}^{k}\right|_{\Omega}$ converge to $\frac{\chi_{M}}{\sqrt{\int_{M}1\, dx}}$ in $L^{2}(\Omega)$, and hence \[ \left\|\widetilde{u}^{k} \right\|_{L^{2}(M)}^2=\int_{M}\left|\widetilde{u}^{k}\right|^{2}\, dx\to 1, \quad\text{and}\quad \left\|\widetilde{u}^{k}\right\|_{L^{2}(\Omega\setminus M)}^2=\int_{\Omega\setminus M}\left|\widetilde{u}^{k} \right|^{2}\,dx \to 0 \] as $k\to \infty$.
We may assume without loss of generality that $\widetilde{u}^{k}\not\equiv0$ for all $k\in \mathbb{N}$, so that $\left\|\widetilde{u}^{k}\right\|_{L^{2}(\Omega \setminus M)}>0$ follows from the UCP of Proposition \ref{prop: UCP} for $\loglap$. Normalizing, we set \[ f^{k}:=\frac{\widetilde{f}^{k}}{\left\|\widetilde{u}^{k}\right\|_{L^{2}(\Omega\setminus M)}^{1/2}} , \] so that the sequence of corresponding solutions $u^{k}\in \mathbb H(\R^{n})$ of \eqref{eq:lem_loc_pot_uk} possesses the desired property \[ \big\|u^{k}\big\|_{L^{2}(M)}^2 = \frac{\left\|\widetilde{u}^{k}\right\|_{L^{2}(M)}^2}{ \left\|\widetilde{u}^{k}\right\|_{L^{2}(\Omega\setminus M)} } \to \infty, \quad\text{and}\quad \big\|u^{k}\big\|_{L^{2}(\Omega\setminus M)}^2=\big\|\widetilde{u}^{k}\big\|_{L^{2}(\Omega\setminus M)}\to 0, \] as $k\to \infty$. This completes the proof. \end{proof} \subsection{Monotonicity relations} Here we complete the proofs of Theorems~\ref{thm: equivalent monotonicity} and \ref{thm: const. uniqueness}. \begin{lemma}[Monotonicity relations] \label{Lemma for monotonicity} Let $\Omega\subset \R^{n}$ be a bounded Lipschitz domain and $f \in \H_T(\Omega_e)$. Moreover, for $j=1,2$, let $q_{j}\in L^\infty(\Omega)$ satisfy \eqref{eigenvalue}, and let $u_{j}\in \mathbb H(\R^{n})$ be the unique solutions of \begin{equation}\label{eq: log-equation for monotonicity} \begin{cases} \LC \loglap+q_{j} \RC u_{j}=0 & \mbox{ in }\Omega,\\ u_{j}=f &\text{ in }\Omega_e. \end{cases} \end{equation} Then we have the monotonicity relations \begin{equation}\label{monotone relation 1} \left\langle \left(\Lambda_{q_{2}}-\Lambda_{q_{1}} \right) f,f\right\rangle \leq\int_{\Omega}(q_{2}-q_{1})\left|u_{1}\right|^{2}\, dx, \end{equation} and \begin{equation}\label{monotone relation 2} \begin{split} \left\langle \left(\Lambda_{q_2}-\Lambda_{q_1}\right) f, f \right\rangle \geq\int_{\Omega}\left(q_{2}-q_{1}\right)\left|u_{2}\right|^{2} \, dx.
\end{split} \end{equation} \end{lemma} \begin{proof} The definition of the DN map \eqref{eq: defi DN map} implies that $$ \left\langle\Lambda_{q_1}f,f \right\rangle =B_{q_{1}}(u_{1},u_{1})\qquad \text{and}\qquad \left\langle\Lambda_{q_2}f,f \right\rangle =B_{q_{2}}(u_{2},u_{2})=B_{q_{2}}(u_{2},u_{1}), $$ where in the last identity we used that $u_1$ is an admissible representative of the exterior data $f$. By \eqref{eq:B-q-est-3}, applied with $q_2$, and the symmetry property of the bilinear form, we thus have \begin{equation} \begin{split} 0&\leq B_{q_{2}}(u_{2}-u_{1},u_{2}-u_{1})\\ &=B_{q_{2}}(u_{2},u_{2})-2B_{q_{2}}(u_{2},u_{1})+B_{q_{2}}(u_{1},u_{1})\\ &= -\left\langle\Lambda_{q_2}f,f \right\rangle+B_{q_{2}}(u_{1},u_{1})\\ &=\left\langle \LC \Lambda_{q_1}-\Lambda_{q_2}\RC f,f \right\rangle+B_{q_{2}}(u_{1},u_{1})-B_{q_{1}}(u_{1},u_{1})\\ &= \left\langle\LC \Lambda_{q_1}-\Lambda_{q_2}\RC f,f \right\rangle+\int_{\Omega}(q_{2}-q_{1}) \left| u_{1}\right|^{2} \, dx, \end{split} \end{equation} which proves \eqref{monotone relation 1}. Interchanging $q_{1}$ and $q_{2}$ in the above computations yields \eqref{monotone relation 2}. This proves the assertion. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: equivalent monotonicity}] We have to show the equivalence of Properties (i)--(iii). Suppose first that (i) holds, i.e., we have $q_{1}\leq q_{2}$ a.e. in $\Omega$. Then (\ref{monotone relation 2}) readily implies that $\left\langle\LC \Lambda_{q_2}-\Lambda_{q_1}\RC f,f \right\rangle \ge 0$ for every $f \in \H_T(\Omega_e)$, so (ii) holds. From (ii) it directly follows that (\ref{quadratic sense}) holds for any nonempty bounded open set $W \Subset \Omega_e$, since $C^\infty_c(W) \subset \H_T(\Omega_e)$. Therefore (ii) implies (iii). Finally, let us show the implication (iii) $\Longrightarrow$ (i). So we assume that (\ref{quadratic sense}) holds for some nonempty bounded open set $W \Subset \Omega_e$, and we need to show that $q_{1}\leq q_{2}$ a.e.\ in $\Omega$. Arguing by contradiction, let us assume that $q_{1}\leq q_{2}$ does not hold a.e.\ in $\Omega$.
Then there exist $\delta>0$ and a measurable set $M\subset\Omega$ of positive measure such that $q_{1}-q_{2}\geq\delta>0$ on $M$. Using the sequence of localized potentials from Lemma \ref{lem:localized_potentials} for the coefficient $q_{1}$, and the monotonicity inequality from Lemma \ref{Lemma for monotonicity}, we obtain \begin{equation} \begin{split} \left\langle \LC \Lambda_{q_2}-\Lambda_{q_1}\RC f^{k},f^{k}\right\rangle & \leq\int_{\Omega}(q_{2}-q_{1})\left|u_{1}^{k}\right|^{2}\, dx\\ & \leq \left\|q_{2}-q_{1}\right\|_{L^{\infty}(\Omega\setminus M)}\big\|u_{1}^{k}\big\|_{L^{2}(\Omega\setminus M)}^{2}-\delta \big\|u_{1}^{k}\big\|_{L^{2}(M)}^2\\ &\to-\infty,\text{ as }k\to \infty, \end{split} \end{equation} which contradicts (\ref{quadratic sense}). Therefore, the assertion is proved. \end{proof} Finally, let us prove Theorem \ref{thm: const. uniqueness}. \begin{proof}[Proof of Theorem \ref{thm: const. uniqueness}] By \cite[Lemma 4.4]{harrach2017nonlocal-monotonicity}, for any nonnegative $q\in L^\infty(\Omega)$ one has \begin{equation}\label{eq: simple function sup} q(x)=\sup\left\{ \varphi(x):\, \varphi \in \Sigma_{+,0} , \ \varphi \leq q \right\} \text{ for a.e. }x\in \Omega. \end{equation} Moreover, since we assume that $\lambda_1(\Omega)>0$, the relation (\ref{eigenvalue}) is satisfied, and it is also satisfied for $\varphi$ in place of $q$ if $\varphi \in \Sigma_{+,0}$. We may therefore combine \eqref{eq: simple function sup} with Theorem~\ref{thm: equivalent monotonicity}, applied to $q_1= \varphi \in \Sigma_{+,0}$ and $q_2 = q$, to complete the proof. \end{proof} \begin{remark} The if-and-only-if monotonicity relations open up further applications to inverse problems. For instance, one can also study inverse obstacle problems, in which one recovers an unknown inclusion via the monotonicity test (see e.g. \cite{harrach2017nonlocal-monotonicity,HPSmonotonicity}). Stability results would also be of interest (see e.g.
\cite{harrach2020monotonicity}). \end{remark} \medskip \noindent\textbf{Acknowledgments.} Y.-H. Lin is partially supported by the National Science and Technology Council (NSTC) Taiwan, under the project 113-2628-M-A49-003. Y.-H. Lin is also a Humboldt research fellow. \bibliography{refs} \bibliographystyle{alpha} \end{document}
2412.17916v1
http://arxiv.org/abs/2412.17916v1
Data-Driven Priors in the Maximum Entropy on the Mean Method for Linear Inverse Problems
\documentclass[onefignum,onetabnum]{siamonline220329} \usepackage[utf8]{inputenc} \usepackage[mathscr]{euscript} \usepackage{graphicx,bm,stmaryrd,mathtools} \usepackage{amsmath,amsfonts,amssymb,listings,bbm} \usepackage{caption} \usepackage{subcaption,color,gensymb} \usepackage{tcolorbox} \usepackage{afterpage} \usepackage{cleveref} \usepackage{float} \usepackage[shortlabels]{enumitem} \newsiamthm{prop}{Proposition} \newsiamremark{remark}{Remark} \newsiamthm{assum}{Assumption} \DeclareMathOperator*{\argmax}{\arg\!\max} \DeclareMathOperator*{\argmin}{\arg\!\min} \DeclareMathOperator*{\arginf}{\arg\!\inf} \DeclareMathOperator*{\argsup}{\arg\!\sup} \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\R}{\mathbb{R}} \DeclareMathOperator{\Prob}{\mathbb{P}} \DeclareMathOperator{\C}{\mathcal{C}} \DeclareMathOperator{\B}{\mathcal{B}} \DeclareMathOperator{\F}{\mathcal{F}} \DeclareMathOperator{\Lcal}{\mathcal{L}} \DeclareMathOperator{\toe}{\xrightarrow[n \to \infty]{e}} \DeclareMathOperator{\topoint}{\xrightarrow[n \to \infty]{p}} \DeclareMathOperator{\exc}{\mathrm{exc}} \DeclareMathOperator{\epi}{\mathrm{epi}} \DeclareMathOperator{\dom}{\mathrm{dom}} \DeclareMathOperator{\inti}{\mathrm{int}} \usepackage[margin=1in]{geometry} \newcommand{\mattcomment}[1]{{\color{red}#1}} \title{Data-Driven Priors in the Maximum Entropy on the Mean Method for Linear Inverse Problems} \author{Matthew King-Roskamp, Rustum Choksi, \and Tim Hoheisel} \date{\today} \begin{document} \maketitle \begin{abstract} We establish the theoretical framework for implementing the maximum entropy on the mean (MEM) method for linear inverse problems in the setting of approximate (data-driven) priors. We prove a.s. convergence for empirical means and further develop general estimates for the difference between the MEM solutions with different priors $\mu$ and $\nu$ based upon the epigraphical distance between their respective log-moment generating functions.
These estimates allow us to establish a rate of convergence in expectation for empirical means. We illustrate our results with denoising on MNIST and Fashion-MNIST data sets. \end{abstract} \section{Introduction} Linear inverse problems are pervasive in data science. A canonical example (and our motivation here) is denoising and deblurring in image processing. Machine learning algorithms, particularly neural networks trained on large data sets, have proven to be a game changer in solving these problems. However, most machine learning algorithms suffer from the lack of a foundational framework upon which to rigorously assess their performance. Thus there is a need for mathematical models which are, on the one hand, data driven and, on the other, open to rigorous evaluation. In this article, we address one such model: the {\it Maximum Entropy on the Mean} (MEM). In addition to providing the theoretical framework, we provide several numerical examples for denoising images from {\it MNIST} \cite{deng2012mnist} and {\it Fashion-MNIST} \cite{xiao2017fashion} data sets. Emerging from ideas of E.T. Jaynes in 1957 \cite{jaynes1957information1,jaynes1957information2}, various forms and interpretations of MEM have appeared in the literature and found applications in different disciplines (see \cite{le1999new, vaisbourd2022maximum} and the references therein). The MEM method has recently been demonstrated to be a powerful tool for the blind deblurring of images possessing some form of symbology (e.g., QR barcodes) \cite{8758192,rioux2020maximum}. Let us briefly summarize the MEM method for linear inverse problems, with full details provided in the next section. Our canonical inverse problem takes the following form \begin{equation}\label{lin-inverse-p} b = C\overline{x} + \eta.
\end{equation} The unknown solution $\overline{x}$ is a vector in $\R^{d}$; the observed data is $b \in \R^{m}$; $C \in \R^{m \times d}$, and $\eta \sim \mathcal{Z}$ is a random noise vector in $\R^{m}$ drawn from noise distribution $\mathcal{Z}$. In the setting of image processing, $\overline{x}$ denotes the ground truth image with $d$ pixels, $C$ is a blurring matrix with typically $d = m$, and the observed noisy (and blurred) image is $b$. For known $C$, we seek to recover the ground truth $\overline{x}$ from $b$. In certain classes of images, the case where $C$ is also unknown (blind deblurring) can also be solved with the MEM framework (cf. \cite{8758192,rioux2020maximum}) but we will not focus on this here. In fact, our numerical experiments will later focus purely on denoising, i.e., $C = I$. The power of MEM is to exploit the fact that there exists a prior distribution $\mu$ for the space of admissible ground truths. The basis of the method is the {\it MEM function} $\kappa_{\mu} :\R^{d} \to \R \cup \{ + \infty\}$ defined as \begin{equation*} \kappa_{\mu}(x) := \inf \left\{ \mathrm{KL}(Q \Vert \mu) \: : \: Q \in \mathcal{P}(\mathcal{X}), \E_{Q} = x \right\}, \end{equation*} where $\mathrm{KL}(Q \Vert \mu)$ denotes the Kullback-Leibler (KL) divergence between the probability distributions $Q$ and $\mu$ (see \Cref{sec:MEMProblem} for the definition). With $\kappa_{\mu}$ in hand, our proposed solution to \cref{lin-inverse-p} is \begin{equation}\label{MEM-sol} \overline{x}_{\mu} = \argmin_{x \in \R^{d}} \left\{ \alpha g_b(Cx) \, +\, \kappa_{\mu}(x) \right\}, \end{equation} where $g_{b}$ is any (closed, proper) convex function that measures {\it fidelity} of $Cx$ to $b$. The function $g_{b}$ depends on $b$ and can in principle be adapted to the noise distribution $\mathcal Z$.
For example, as was highlighted in \cite{vaisbourd2022maximum}, one can take the {\it MEM estimator} (an alternative to the well-known {\it maximum likelihood estimator}) based upon a family of distributions (for instance, if the noise is Gaussian, then the MEM estimator is the familiar $g_b (\cdot) = \frac{1}{2}\Vert (\cdot) - b\Vert_{2}^{2}$). Finally, $\alpha >0$ is a fidelity parameter. The variational problem \cref{MEM-sol} is solved via its Fenchel dual. As we explain in \Cref{sec:MEMProblem}, we exploit the well-known connection in the large deviations literature that, under appropriate assumptions, the MEM function $\kappa_{\mu}$ is simply the Cram\'er rate function defined as the Fenchel conjugate of the log-moment generating function (LMGF) \begin{equation*} L_{\mu}(y) := \log \int_{\mathcal{X}} \exp\langle y, \cdot \rangle d\mu. \end{equation*} Under certain assumptions on $g_b$ (cf. \Cref{sec:MEMProblem}) we obtain strong duality \begin{equation}\label{dual-primal} \min_{x \in \R^{d}} \alpha g_b(Cx) + \kappa_{\mu}(x) = - \min_{z \in \R^{m}} \alpha g^{*}(-z/\alpha) + L_{\mu}(C^{T}z), \end{equation} and, more importantly, a primal-dual recovery is readily available: If $\overline z_{\mu}$ is a solution to the dual problem (the argmin of the right-hand-side of \cref{dual-primal}) then \[ \overline{x}_{\mu} := \nabla L_{\mu}(C^{T}\overline{z}_{\mu}) \] is the unique solution of the primal problem. This is the MEM method in a nutshell. In this article, we address the following question: Given an approximating sequence $\mu_n \to \mu$ (for example, one generated by a sample empirical mean of $\mu$), does the approximate MEM solution $\overline x_{\mu_n}$ converge to the solution $\overline x_\mu$, and if so, at which rate? A key feature of the MEM approach is that one does not have to learn the full distribution $\mu$ from samples, but rather only approximate the LMGF $L_{\mu}$.
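For the Gaussian fidelity $g_b = \frac{1}{2}\Vert (\cdot) - b\Vert_{2}^{2}$ mentioned above, the dual objective can be written out explicitly; the following conjugate computation is standard and is recorded here only for the reader's convenience: \begin{equation*} g_b^{*}(y) = \sup_{w \in \R^{m}} \left\{ \langle y, w \rangle - \tfrac{1}{2}\Vert w - b \Vert_{2}^{2} \right\} = \langle y, b \rangle + \tfrac{1}{2}\Vert y \Vert_{2}^{2}, \qquad\text{so}\qquad \alpha g_b^{*}(-z/\alpha) = \frac{1}{2\alpha}\Vert z \Vert_{2}^{2} - \langle z, b \rangle, \end{equation*} and the right-hand side of \cref{dual-primal} becomes the unconstrained smooth convex problem $\min_{z \in \R^{m}} \frac{1}{2\alpha}\Vert z \Vert_{2}^{2} - \langle z, b \rangle + L_{\mu}(C^{T}z)$.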
Hence, our analysis is based on the {\it closeness} of $L_{\mu_n}$ to $L_{\mu}$ resulting in the {\it closeness} of the dual solutions $\overline z_n$ and in turn the primal solutions $\overline x_{\mu_n}$. Here, we leverage the fundamental work of Wets et al. on {\it epigraphical distances, epigraphical convergence, and epi-consistency} (\cite{rockafellar2009variational},\cite{royset2022optimization},\cite{king1991epi}). Our results are presented in four sections. In \Cref{sec:epi-convergence}, we work with a general $g_b$ satisfying standard assumptions. We consider the simplest way of approximating $\mu$ via empirical means of $n$ i.i.d. samples from $\mu$. In \cref{Thm:convergence_of_primal}, we prove that the associated MEM solutions $\overline{x}_{\mu_n}$ converge almost surely to the solution $\overline{x}_{\mu}$ with full prior. In fact, we prove a slightly stronger result pertaining to $\varepsilon_n$-solutions as $\varepsilon_n\downarrow 0$. This result opens the door to two natural questions: (i) At which rate do the solutions converge? (ii) Empirical means is perhaps the simplest way of approximating $\mu$ and will inevitably yield a rate dictated by the law of large numbers. Given that the MEM method rests entirely on the LMGF of the prior, it is natural to ask how the rate depends on an approximation to the LMGF. So, if we used a different way of approximating $\mu$, what would the rate look like? We address these questions for the case $g_b = \frac{1}{2}\Vert (\cdot) - b\Vert_{2}^{2}$. In \Cref{sec:rates} we provide insight into the second question first via a deterministic estimate which controls the difference in the respective solutions associated with two priors $\nu$ and $\mu$ based upon the epigraphical distance between their respective LMGFs. We again prove a general result for $\varepsilon$-solutions associated with prior $\mu$ (cf. \cref{thm:epsdeltaprimalbound_full}). 
In \Cref{sec:rates_n_empirical}, we apply this bound to the particular case of the empirical means approximation, proving a $\frac{1}{n^{1/4}}$ convergence rate (cf. \Cref{thm:final_rate_n}) in expectation. Finally, in \Cref{sec:numerics}, we present several numerical experiments for denoising based upon a finite MNIST data set. These serve not to compete with any state-of-the-art machine learning-based denoising algorithms, but rather to highlight the effectiveness of our data-driven mathematical model which is fully supported by theory. \begin{remark}[Working at the higher level of the probability distribution of the solution] \label{remark:measure_valued} {\rm As in \cite{8758192,rioux2020maximum}, an equivalent formulation of the MEM problem is to work not at the level of $x$, but rather at the level of the probability distribution of the ground truth, i.e., we seek to solve \[ \overline{Q} \, = \, { \argmin}_{Q \in \mathcal{P}(\mathcal{X})} \, \, \left\{ \alpha g_b(C \mathbb{E}_Q) \, + \, \mathrm{KL}(Q \Vert \mu) \right\}, \] where one can recover the previous image-level solution as $\overline x_\mu = \mathbb{E}_{\overline Q}$. As shown in \cite{rioux2020maximum}, under appropriate assumptions this reformulated problem has exactly the same dual formulation as in the right-hand-side of \cref{dual-primal}. Because of this one has full access to the entire probability distribution of the solution, not just its expectation. This proves useful in our MNIST experiments where the optimal $\overline{Q}$ is simply a weighted sum of images uniformly sampled from the MNIST set. For example, one can do thresholding (or masking) at the level of the optimal $\overline{Q}$ (cf. the examples in \Cref{sec:numerics}). } \end{remark} \noindent {\it Notation:} $\overline{\R} := \R \cup \{\pm \infty \}$ is the extended real line. The standard inner product on $\R^n$ is $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ is the Euclidean norm.
For $C \in \R^{m \times d}$, $\Vert C \Vert = \sqrt{\lambda_{\max}(C^{T}C)}$ is its spectral norm, and analogously $\sigma_{\min}(C) = \sqrt{\lambda_{\min}(C^{T}C)}$ is the smallest singular value of $C$. The trace of $C$ is denoted $\text{Tr}(C)$. For smooth $f : \R^{d} \to \R$, we denote its gradient and Hessian by $\nabla f$ and $\nabla^{2} f$, respectively. \section{Tools from convex analysis and the MEM method for solving the problem \cref{lin-inverse-p} } \subsection{Convex analysis} \label{sec:convexAnalysisPrelim} We present here the tools from convex analysis essential to our study. We refer the reader to the standard texts by Bauschke and Combettes \cite{bauschke2019convex} or Chapters 2 and 11 of Rockafellar and Wets \cite{rockafellar2009variational} for further details. Let $f:\R^d \to\overline{\R}$. The domain of $f$ is $\text{dom}(f):=\{ x \in \R^{d} \: \vert \: f(x) < + \infty \}$. We call $f$ proper if $\dom(f)$ is nonempty and $f(x) > - \infty$ for all $x$. We say that $f$ is lower semicontinuous (lsc) if $f^{-1}([-\infty, a])$ is closed (possibly empty) for all $a \in \R$. We define the (Fenchel) conjugate $f^{*} :\R^{d} \to \overline{\R}$ of $f$ as $f^{*}(x^{*}) := \sup_{x \in \R^{d}} \{ \langle x, x^{*} \rangle - f(x) \}.$ A proper $f$ is said to be convex if $f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y)$ for every $x,y \in \text{dom}(f)$ and all $\lambda \in (0,1)$. If the former inequality is strict for all $x \neq y$, then $f$ is said to be strictly convex. Finally, if $f$ is proper and there is a $c>0$ such that $f-\frac{c}{2}\|\cdot\|^2$ is convex we say $f$ is $c$-strongly convex. If $f$ is (continuously) differentiable on $\R^{d}$, then $f$ is $c$-strongly convex if and only if \begin{equation} f(y) - f(x) \geq \nabla f(x)^{T}(y-x) + \frac{c}{2} \Vert y-x \Vert_{2}^{2}\quad\forall x,y\in \R^d.
\label{eqn:alternate_strongconvexity} \end{equation} The subdifferential of $f :\R^{d} \to \overline \R$ at $\overline{x}$ is the set $\partial f(\overline{x}) := \{ x^{*} \in \R^{d} \: \vert \: \langle x-\overline{x},x^{*}\rangle \leq f(x) - f(\overline{x}), \: \forall x \in \R^{d} \}.$ A function $f : \R^{d} \to \overline{\R}$ is said to be level-bounded if for every $\alpha \in \R$, the set $f^{-1}([-\infty, \alpha])$ is bounded (possibly empty). $f$ is (level) coercive if it is bounded below on bounded sets and satisfies \begin{equation*} \liminf_{\Vert x \Vert \to \infty} \frac{f(x)}{\Vert x \Vert} > 0. \end{equation*} In the case $f$ is proper, lsc, and convex, level-boundedness is equivalent to level-coerciveness \cite[Corollary 3.27]{rockafellar2009variational}. $f$ is said to be supercoercive if $\liminf_{\Vert x \Vert \to \infty} \frac{f(x)}{\Vert x \Vert} =+\infty$. \\ Given $\varepsilon >0$, a point $\overline{x}$ is said to be an $\varepsilon$-minimizer of $f$ if $f(\overline{x}) \leq \inf_{x \in \R^{d}} f(x) + \varepsilon$. We denote the set of all such points as $S_{\varepsilon}(f)$. Correspondingly, the solution set of $f$ is denoted as $\argmin(f) = S_{0}(f) =: S(f).$ The epigraph of a function $f : \R^{d} \to \overline{\R}$ is the set $\text{epi}(f) := \{ (x,\alpha) \in \R^{d} \times \overline{\R} \: \vert \: \alpha \geq f(x) \}$. A sequence of functions $f_{n} : \R^{d} \to \overline{\R} $ epigraphically converges (epi-converges)\footnote{This is one of many equivalent conditions that characterize epi-convergence, see e.g. \cite[Proposition 7.2]{rockafellar2009variational}.} to $f$, written $f_{n} \toe f$, if and only if \[ (i)\: \forall z, \forall z_{n} \to z: \: \liminf f_{n}(z_{n}) \geq f(z), \quad (ii)\: \forall z \;\exists z_{n} \to z: \limsup f_{n}(z_{n})\leq f(z).
\] \subsection{Maximum Entropy on the Mean Problem} \label{sec:MEMProblem} For basic concepts of measure and probability, we follow most closely the standard text of Billingsley \cite[Chapter 2]{billingsley2017probability}. Globally in this work, $\mu$ will be a Borel probability measure defined on compact $\mathcal{X} \subset \R^{d}$\footnote{Equivalently, we could work with a Borel measure $\mu$ on $\R^d$ with support contained in $\mathcal X$.}. Precisely, we work on the probability space $(\mathcal{X},\mathcal{B}_{\mathcal{X}}, \mu)$, where $\mathcal{X} \subset \R^{d}$ is compact and $\mathcal{B}_{\mathcal{X}} = \{ B \cap \mathcal{X} \: : \: B \in \mathbb B_d \}$, where $\mathbb B_d$ is the $\sigma$-algebra generated by the open sets in $\R^d$. We will denote the set of all probability measures on the measurable space $(\mathcal{X},\mathcal{B}_{\mathcal{X}})$ as $\mathcal{P}(\mathcal{X})$, and refer to elements of $\mathcal{P}(\mathcal{X})$ as probability measures on $\mathcal{X}$, with the implicit understanding that these are always Borel measures. For $Q,\mu\in \mathcal P(\mathcal X)$, we say $Q$ is absolutely continuous with respect to $\mu$ (and write $Q \ll \mu$) if $Q(A) = 0$ for all $A \in \mathcal{B}_{\mathcal{X}}$ with $\mu(A) = 0$ \cite[p.~422]{billingsley2017probability}. For $Q \ll \mu$, the Radon-Nikodym derivative of $Q$ with respect to $\mu$ is defined as the (a.e.) unique function $\frac{dQ}{d\mu}$ with the property $Q(A) = \int_{A} \frac{dQ}{d\mu} d\mu$ for $A\in \mathcal B_{\mathcal X}$ \cite[Theorem 32.3]{billingsley2017probability}.
The Kullback-Leibler (KL) divergence \cite{kullback1951information} of $Q \in \mathcal{P}(\mathcal{X})$ with respect to $\mu \in \mathcal{P}(\mathcal{X})$ is defined as \begin{equation} \text{KL}(Q\Vert \mu) := \begin{cases} \int_{\mathcal{X}} \log\left(\frac{dQ}{d\mu}\right) d Q, & Q \ll \mu, \\ + \infty, & \text{ otherwise.} \end{cases} \label{def-KL} \end{equation} For $\mu \in \mathcal{P}(\mathcal{X})$, the expected value $\E_{\mu} \in \R^{d}$ and moment generating function $M_{\mu}: \R^{d} \to \R$ of $\mu$ are defined as \cite[Ch.21]{billingsley2017probability} \begin{equation*} \E_{\mu} := \int_{\mathcal{X}}x d\mu(x) \in \R^{d},\qquad M_{\mu}(y) := \int_{\mathcal{X}} \exp\langle y, \cdot \rangle d\mu, \end{equation*} respectively. The log-moment generating function of $\mu$ is defined as \begin{equation*} L_{\mu}(y):= \log M_{\mu}(y) = \log \int_{\mathcal{X}} \exp\langle y, \cdot \rangle d\mu . \end{equation*} As $\mathcal{X}$ is bounded, $M_{\mu}$ is finite-valued everywhere. By standard properties of moment generating functions (see e.g. \cite[Theorem 4.8]{severini2005elements}) it is then analytic everywhere, and in turn so is $L_{\mu}$. Given $\mu \in \mathcal{P}(\mathcal{X})$, the Maximum Entropy on the Mean (MEM) function \cite{vaisbourd2022maximum} $\kappa_{\mu} :\R^{d} \to \overline{\R}$ is \begin{equation*} \kappa_{\mu}(y) := \inf\{ \mathrm{KL}(Q \: \Vert \: \mu) : \E_{Q} = y , Q \in \mathcal{P}(\mathcal{X}) \} . \end{equation*} The functions $\kappa_{\mu}$ and $L_{\mu}$ are paired in duality in a way that is fundamental to this work. We will flesh out this connection, as well as give additional properties of $\kappa_{\mu}$ for our setting: a Borel probability measure $\mu$ on compact $\mathcal{X}$. A detailed discussion of this connection under more general assumptions is the subject of \cite{vaisbourd2022maximum}.
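As an aside not in the original text: for discrete measures on a shared finite support, $\mathrm{KL}$ and $L_{\mu}$ reduce to finite sums, and the duality fact used below (that the conjugate $L_{\mu}^{*}$ vanishes at $\E_{\mu}$) can be checked numerically. A minimal sketch, where all support points, weights, and names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Illustrative discrete measures on a shared finite support in R^2,
# so that Q << mu holds automatically.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mu = np.array([0.5, 0.25, 0.25])
Q = np.array([0.2, 0.5, 0.3])

# KL(Q || mu) = integral of log(dQ/dmu) dQ = sum_i Q_i log(Q_i / mu_i) >= 0.
kl = float(np.sum(Q * np.log(Q / mu)))

mean_mu = mu @ pts  # E_mu, the expected value of mu

def L_mu(y):
    # L_mu(y) = log sum_i mu_i exp(<y, x_i>), via weighted logsumexp.
    return logsumexp(pts @ y, b=mu)

def kappa_mu(x):
    # Conjugate L_mu^*(x) = sup_y <y, x> - L_mu(y), evaluated numerically.
    res = minimize(lambda y: L_mu(y) - y @ x, np.zeros(2))
    return -res.fun
```

By Jensen's inequality $\langle y, \E_{\mu}\rangle \le L_{\mu}(y)$ with equality at $y=0$, so `kappa_mu(mean_mu)` evaluates to (numerically) zero, consistent with the rate function vanishing at the mean.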
For any $\mu \in \mathcal{P}(\mathcal{X})$ we have a vacuous tail-decay condition of the following form: for any $\sigma >0$, \begin{equation*} \int_{\mathcal{X}} e^{\sigma \Vert x \Vert} d\mu(x) \leq \max_{x \in \mathcal{X} } e^{\sigma \Vert x \Vert} < + \infty. \end{equation*} Consequently, by \cite[Theorem 5.2 (iv)]{donsker1976asymptotic3} \footnote{ A technical remark on the application of \cite[Theorem 5.2 (iv)]{donsker1976asymptotic3}, which applies only over Banach spaces. When applying this result, we identify our probability measure $\mu$ on compact $\mathcal{X} \subset \R^{d}$ with its extension $\hat{\mu}$ on $\R^{d}$ defined by $\hat{\mu}(A) = \mu(A \cap \mathcal{X})$ for any Borel set $A$. Hence, we may apply \cite[Theorem 5.2 (iv)]{donsker1976asymptotic3} to find that $\kappa_{\hat{\mu}} =L^{*}_{\hat{\mu}}$. As integration with respect to $\mu$ or its extension $\hat{\mu}$ is identical, see, e.g., \cite[Example 16.4]{billingsley2017probability}, it follows that $L_{\mu} = L_{\hat{\mu}}$, and in turn, with some minor proof details omitted, $\kappa_{\hat{\mu}}= \kappa_{\mu}$.\hfill$\diamond$} we have that \begin{equation*} \kappa_{\mu}(x) = \sup_{y \in \R^{d}} \left[ \langle y,x \rangle - \log \int_{\mathcal{X}} \exp\langle y, \cdot \rangle d\mu \right](= L_{\mu}^{*}(x)). \end{equation*} Note that the conjugate $L_\mu^*$ is known in the large deviations literature as the (Cram{\'e}r) rate function. In summary, with our standing assumption that $\mathcal{X}$ is compact, $\kappa_{\mu} = L^{*}_{\mu}$. This directly implies the following properties of $\kappa_{\mu}$: (i) As $L_{\mu}$ is proper, lsc, and convex, so is its conjugate $L^{*}_{\mu} = \kappa_{\mu}$. (ii) Reiterating that $L_{\mu}$ is proper, lsc, convex, we may assert $(L_{\mu}^{*})^{*}= L_{\mu}$ via Fenchel-Moreau (\cite[Theorem 5.23]{royset2022optimization}), and hence $\kappa_{\mu}^{*} = L_{\mu}$.
(iii) As $\dom(L_{\mu}) = \R^{d}$ we have that $\kappa_{\mu}$ is supercoercive \cite[Theorem 11.8 (d)]{rockafellar2009variational}. (iv) Recalling that $L_{\mu}$ is everywhere differentiable, $\kappa_{\mu}$ is strictly convex on every convex subset of $\dom (\partial \kappa_{\mu})$, which is also referred to as essentially strictly convex \cite[p.~253]{rockafellar1997convex}. With these preliminary notions, we can (re-)state the problem of interest in full detail. We work with images represented as vectors in $\R^{d}$, where $d$ is the number of pixels. Given observed image $b \in \R^{m}$ which may be blurred and noisy, and known matrix $C \in \R^{m \times d}$, we wish to recover the ground truth $\hat{x}$ from the linear inverse problem $ b = C\hat{x} + \eta,$ where $\eta \sim \mathcal{Z}$ is an unknown noise vector in $\R^{m}$ drawn from noise distribution $\mathcal{Z}$. We remark that, in practice, it is usually the case that $m=d$ and $C$ is invertible, but this is not necessary from a theoretical perspective. We assume the ground truth $\hat{x}$ is the expectation of an underlying image distribution, a Borel probability measure $\mu$, on a compact set $\mathcal{X} \subset \R^{d}$. Our best guess of $\hat{x}$ is then obtained by solving \begin{equation*} \overline{x}_{\mu} = \argmin_{x \in \R^{d}} \alpha g(Cx) + \kappa_{\mu}(x),\tag{P} \end{equation*} where $g = g_{b}$ is a proper, lsc, convex function which may depend on $b$ and serves as a fidelity term, and $\alpha >0$ is a parameter. For example, if $g = \frac{1}{2}\Vert b - (\cdot) \Vert_{2}^{2}$ one recovers the so-called reformulated MEM problem, first seen in \cite{le1999new}. \begin{lemma} \label{lemma:soln_exist} For any lsc, proper, convex $g$, the primal problem (P) always has a solution. \end{lemma} \begin{proof} By the global assumption of compactness of $\mathcal{X}$, we have that $\kappa_{\mu}$ is proper, lsc, convex, and supercoercive, following the discussion above.
As $g \circ C$ and $\kappa_{\mu}$ are convex, so is $g \circ C +\kappa_{\mu}$. Further as both $g \circ C$ and $\kappa_{\mu}$ are proper and lsc, and $\kappa_{\mu}$ is supercoercive, the summation $g \circ C +\kappa_{\mu}$ is supercoercive, \cite[Exercise 3.29, Lemma 3.27]{rockafellar2009variational}. A supercoercive function is, in particular, level-bounded, so by \cite[Theorem 1.9]{rockafellar2009variational} the solution set $\argmin( g \circ C +\kappa_{\mu}) $ is nonempty. \end{proof} We make one restriction on the choice of $g$, which will hold globally in this work: \begin{assum} $0\in \inti(\dom(g) - C\dom(\kappa_{\mu}))$. \label{assum:domain} \end{assum} We remark that this property holds vacuously whenever $g$ is finite-valued, e.g., $g = \frac{1}{2}\Vert b - ( \cdot) \Vert^{2}_{2}$. Instead of solving (P) directly, we use a dual approach. As $\kappa_\mu^*=L_\mu$ (by compactness of $\mathcal X$), the primal problem (P) has Fenchel dual (e.g., \cite[Definition 15.19]{bauschke2019convex}) given by \begin{equation} ({\rm arg})\!\!\min_{z \in \R^{m}} \alpha g^{*}(-z/\alpha) + L_{\mu}(C^{T}z). \label{dual} \tag{D} \end{equation} We will hereafter denote the dual objective associated with $\mu \in \mathcal{P}(\mathcal{X})$ as \begin{equation} \phi_{\mu}(z) := \alpha g^{*}(-z/\alpha) + L_{\mu}(C^{T}z). \end{equation} We record the following result which highlights the significance of \Cref{assum:domain} to our study. \begin{theorem} \label{thm:level-coercive} The following are equivalent: \begin{center} (i) \Cref{assum:domain} holds; \quad (ii) $\argmin \phi_\mu$ is nonempty and compact; \quad (iii) $\phi_\mu$ is level-coercive. \end{center} \noindent In particular, under \Cref{assum:domain}, the primal problem (P) has a unique solution given by \begin{equation} \overline{x}_{\mu} = \nabla L_{\mu}(C^{T}\overline{z}), \label{eqn:primal_dual_optimality} \end{equation} where $\overline{z} \in \argmin \phi_{\mu}$ is any solution of the dual problem (D). 
\end{theorem} \begin{proof} The equivalences follow from Proposition 3.1.3 and Theorem 5.2.1 in \cite{auslender2006interior}, respectively. The latter\footnote{Note that there is a sign error in equation (5.3) in the reference.} also yields the primal-dual recovery in \eqref{eqn:primal_dual_optimality} while using the differentiability of $L_\mu$. \end{proof} \subsection{Approximate and Empirical Priors, Random Functions, and Epi-consistency} \label{sec:approximation} If one has access to the true underlying image distribution $\mu$, then the solution recipe is complete: solve (D) and use the primal-dual recovery formula \cref{eqn:primal_dual_optimality} to find a solution to (P). But in practical situations, such as the imaging problems of interest here, it is unreasonable to assume full knowledge of $\mu$. Instead, one specifies a prior $\nu \in \mathcal{P}(\mathcal{X})$ with $\nu \approx \mu$, and solves the approximate dual problem \begin{equation} \min_{z \in \R^{m}} \phi_{\nu}(z). \label{Dual_nu} \end{equation} Given $\varepsilon> 0$ and any $\varepsilon$-solution to \cref{Dual_nu}, i.e., given any $z_{\nu, \varepsilon} \in S_{\varepsilon} (\phi_{\nu})$, we define \begin{equation} \overline{x}_{\nu, \varepsilon} := \nabla L_{\nu}(C^{T}z_{\nu, \varepsilon}), \label{defn:x_nu} \end{equation} with the hope, inspired by the recovery formula \cref{eqn:primal_dual_optimality}, that with a ``reasonable'' choice of $\nu \approx \mu$, and small $\varepsilon$, then also $\overline{x}_{\nu, \varepsilon} \approx \overline{x}_{\mu}$. The remainder of this work is dedicated to formalizing how well $\overline{x}_{\nu, \varepsilon}$ approximates $\overline{x}_{\mu}$ under various assumptions on $g$ and $\nu$. A natural first approach is to construct $\nu$ from sample data. Let $(\Omega,\mathcal{F}, \Prob)$ be a probability space. We model image samples as i.i.d. $\mathcal{X}$-valued random variables $\{X_{1} , \ldots, X_{n}, \ldots \}$ with shared law $\mu := \Prob X_1^{-1}$.
That is, each $X_{i} : \Omega \to \mathcal{X}$ is an $(\Omega, \mathcal{F}) \to (\mathcal{X}, \mathcal{B}_{\mathcal{X}})$ measurable function with the property that $\mu(B) = \Prob(\omega \in \Omega \: : \: X_1(\omega) \in B)$, for any $B \in \mathcal{B}_{\mathcal{X}}$. In particular, the law $\mu$ is by construction a Borel probability measure on $\mathcal{X}$. Intuitively, a random sample of $n$ images is a given sequence of realizations $\{ X_{1}(\omega), \ldots, X_{n}(\omega), \ldots \}$, from which we take only the first $n$ vectors. We then approximate $\mu$ via the empirical measure \begin{equation*} \mu_{n}^{(\omega)} :=\frac{1}{n} \sum_{i=1}^{n} \delta_{X_{i}(\omega)}. \end{equation*} With this choice of $\nu = \mu_{n}^{(\omega)}$, we have the approximate dual problem \begin{equation} \min_{z \in \R^{m}} \phi_{\mu_{n}^{(\omega)}}(z) \quad {\rm with}\quad \phi_{\mu_{n}^{(\omega)}}(z)=\alpha g^{*}\left(\frac{-z}{\alpha}\right) + \log \frac{1}{n} \sum_{i=1}^{n} e^{\langle C^{T}z, X_{i}(\omega) \rangle }. \label{eqn:approx_dual} \end{equation} Exactly analogous to \eqref{defn:x_nu}, given an $\varepsilon$-solution $\overline{z}_{n,\varepsilon}(\omega)$ of \cref{eqn:approx_dual}, we define \begin{equation} \overline{x}_{n,\varepsilon}(\omega) := \nabla L_{\mu_{n}^{(\omega)}}(C^{T}\overline{z}_{n,\varepsilon}(\omega)) = \nabla_{y} \left[ \log \frac{1}{n} \sum_{i=1}^{n} e^{\langle y, X_{i}(\omega) \rangle } \right]_{y = C^{T}\overline{z}_{n,\varepsilon}(\omega)}. \label{defn:x_n} \end{equation} Clearly, while the measure $\mu_{n}^{(\omega)}$ is well-defined and Borel for any given $\omega$, the convergence properties of $\overline{z}_{n, \varepsilon}(\omega)$ and $\overline{x}_{n, \varepsilon}(\omega)$ should be studied in a stochastic sense over $\Omega$. To this end, we leverage a probabilistic version of epi-convergence for random functions known as epi-consistency \cite{king1991epi}. Let $(T, \mathcal{A})$ be a measurable space.
A function $f : \R^{m} \times T \to \overline{\R}$ is called a random\footnote{The inclusion of the word `random' in this definition need not imply a priori any relation to a random process; we simply require measurability properties of $f$. Random lsc functions are also known as normal integrands in the literature, see \cite[Chapter 14]{rockafellar2009variational}.} lsc function (with respect to $(T,\mathcal{A})$) \cite[Definition 8.50]{royset2022optimization} if the (set-valued) map $S_{f}: T \rightrightarrows \R^{m+1}, \;S_{f}(t) = \epi f(\cdot, t)$ is closed-valued and measurable in the sense $S_{f}^{-1}(O) = \{ t \in T \: : \: S_{f}(t) \cap O \neq \emptyset \} \in \mathcal{A}$ for every open set $O \subseteq \R^{m+1}$. Our study is fundamentally interested in random lsc functions on $(\Omega, \mathcal{F})$, in service of proving convergence results for $\overline{x}_{n, \varepsilon}(\omega)$. But we emphasize that random lsc functions with respect to $(\Omega,\mathcal{F})$ are tightly linked with random lsc functions on $(\mathcal{X}, \mathcal{B}_{\mathcal{X}})$. Specifically, if $X: \Omega \to \mathcal{X}$ is a random variable and $f: \R^{m} \times \mathcal{X} \to \overline{\R}$ is a random lsc function with respect to $(\mathcal{X}, \mathcal{B}_{\mathcal{X}})$, then the composition $f(\cdot, X(\cdot)): \R^{m} \times \Omega \to \R$ is a random lsc function with respect to the measurable space $(\Omega, \mathcal{F})$, see e.g. \cite[Proposition 14.45 (c)]{rockafellar2009variational} or the discussion of \cite[Section 5]{romisch2007stability}. This link will prove computationally convenient in the next section. While the definition of a random lsc function is unwieldy to work with directly, it is implied by a host of easy-to-verify conditions \cite[Example 8.51]{royset2022optimization}. We will foremost use the following one: Let $(T, \mathcal{A})$ be a measurable space.
If a function $f:\R^{m} \times T \to \overline{\R}$ is finite valued, with $f(\cdot, t)$ continuous for all $t$, and $f(z, \cdot)$ measurable for all $z$, we say $f$ is a Carath{\'e}odory function. Any Carath{\'e}odory function is random lsc \cite[Example 14.26]{rockafellar2006variational}. Immediately, we can assert that $\phi_{\mu_{n}^{(\cdot)}}$ is a random lsc function from $\R^{m} \times \Omega$ to $\overline{\R}$, as it is Carath{\'e}odory. In particular, by \cite[Theorem 14.37]{rockafellar2009variational} or \cite[Section 5]{romisch2007stability}, the $\varepsilon$-solution mappings \begin{equation*} \omega \mapsto \left\{ z \: : \: \phi_{\mu_{n}^{(\omega)}}(z) \leq \inf \phi_{\mu_{n}^{(\omega)}} + \varepsilon \right\} \end{equation*} are measurable (in the set-valued sense defined above), and it is always possible to find a $\overline{z}(\omega) \in \argmin \phi_{\mu_{n}^{(\omega)}}$ such that the function $\omega \mapsto \overline{z}(\omega)$ is $\Prob$-measurable in the usual sense. We conclude with the definition of epi-consistency as given in \cite[p.~86]{king1991epi}: a sequence of random lsc functions $h_{n}: \R^{m} \times \Omega \to \overline{\R} $ is said to be epi-consistent with limit function $h: \R^{m} \to \overline{\R}$ if \begin{equation} \Prob\left(\left\{ \omega \in \Omega \: \vert \: h_{n}(\cdot,\omega) \toe h \right\}\right) =1. \label{def:epi-consistent} \end{equation} \section{Epigraphical convergence and convergence of minimizers} \label{sec:epi-convergence} The goal of this section is to prove convergence of minimizers in the empirical case, i.e., that $\overline{x}_{n,\varepsilon}(\omega)$ as defined in \eqref{defn:x_n} converges to $\overline{x}_{\mu}$, the solution of (P), for $\Prob$-almost every $\omega\in \Omega$ as $n \to \infty$ and $\varepsilon \downarrow 0$.
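Before turning to the formal argument, the following self-contained sketch mimics the pipeline numerically for the quadratic fidelity $g = \frac{1}{2}\Vert b - (\cdot) \Vert_{2}^{2}$ studied in \Cref{sec:rates}: approximately minimize the empirical dual objective \eqref{eqn:approx_dual} by gradient descent, then read off the approximate primal point as the gradient of the empirical log-moment generating function, a softmax-weighted sample mean. All data, dimensions, and step sizes below are illustrative choices and not part of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (all values hypothetical): samples X_i live in a
# compact subset of R^d, C is an m x d measurement matrix (so C^T z lies
# in R^d), and g = 0.5*||b - .||^2 gives
# alpha*g*(-z/alpha) = ||z||^2/(2*alpha) - <b, z>.
d, m, n, alpha = 3, 2, 500, 1.0
C = 0.5 * rng.standard_normal((m, d))
X = rng.uniform(-1.0, 1.0, size=(n, d))   # i.i.d. draws X_1(w), ..., X_n(w)
b = C @ np.full(d, 0.25)                  # synthetic measurement vector

def grad_phi_n(z):
    """Gradient of the empirical dual objective phi_{mu_n}(z)."""
    s = X @ (C.T @ z)
    w = np.exp(s - s.max())
    w /= w.sum()                           # softmax weights on the samples
    return z / alpha - b + C @ (w @ X)     # C @ grad L_{mu_n}(C^T z)

# Plain gradient descent produces an approximate dual minimizer z_n.
z = np.zeros(m)
for _ in range(20000):
    z -= 0.02 * grad_phi_n(z)

# Approximate primal point: the gradient of the empirical log-MGF at
# C^T z_n, i.e. the softmax-weighted sample mean.
s = X @ (C.T @ z)
w = np.exp(s - s.max())
w /= w.sum()
x_n = w @ X
```

Note that the computed point is automatically a convex combination of the samples, reflecting the fact that $\nabla L_{\mu_{n}^{(\omega)}}$ maps into the convex hull of $\{X_{1}(\omega), \ldots, X_{n}(\omega)\}$.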
To do so, we prove that empirical approximations of the moment generating function are epi-consistent with $M_{\mu}$, and parlay this into a proof of the epi-consistency of $\phi_{\mu_{n}^{(\omega)}}$ with limit $\phi_{\mu}$. Via classic convex analysis techniques, this guarantees the desired convergence of minimizers with probability one. \subsection{Epi-consistency of the empirical moment generating functions} Given $\{X_{1}, \ldots, X_{n}, \ldots\}$ i.i.d. with shared law $\mu = \Prob X_1^{-1} \in\mathcal P(\mathcal{X})$, we denote the moment generating function of $\mu_{n}^{(\omega)}$ as $M_{n}(y, \omega) := \frac{1}{n} \sum_{i=1}^{n} e^{\langle y, X_{i}(\omega) \rangle}.$ Define $f: \R^{m} \times \R^{d} \to \R$ as $f(z, x) = e^{\langle C^{T}z, x\rangle}$. Then \begin{align*} M_{\mu}(C^{T}z) &= \int_{\mathcal{X}} e^{\langle C^{T}z, \cdot \rangle} d \mu = \int_{\mathcal{X}} f(z, \cdot) d\mu, \\ M_{n}(C^{T}z, \omega) & = \frac{1}{n} \sum_{i=1}^{n} e^{\langle C^{T}z, X_{i}(\omega) \rangle} = \frac{1}{n} \sum_{i=1}^{n} f(z, X_{i}(\omega)). \end{align*} \noindent This explicit decomposition is useful for applying a specialized version of the main theorem of King and Wets \cite[Theorem 2]{king1991epi}, which we restate without proof. \begin{prop} \label{thm:epicon} Let $f : \R^{m} \times \mathcal{X} \to \overline{\R}$ be a random lsc function such that $f(\cdot, x)$ is convex and differentiable for all $x$. Let $X_{1}, \ldots, X_{n}$ be i.i.d. $\mathcal{X}$-valued random variables on $(\Omega, \mathcal{F}, \Prob)$ with shared law $\mu \in \mathcal{P}(\mathcal{X})$.
If there exists $\overline{z} \in \R^{m}$ such that \begin{equation*} \int_{\mathcal{X}} f(\overline{z},\cdot) d\mu < +\infty, \qquad \text { and } \qquad \int_{\mathcal{X}} \Vert \nabla_{z}f(\overline{z}, \cdot) \Vert d\mu < + \infty, \end{equation*} then the sequence of (random lsc) functions $S_{n}: \mathbb{R}^{m} \times \Omega \to \overline{\R}$ given by \begin{equation*} S_{n}(z, \omega) := \frac{1}{n} \sum_{i=1}^{n}f(z, X_{i}(\omega)) \end{equation*} is epi-consistent with limit $S_{\mu}:z\mapsto\int_{\mathcal{X}} f(z, \cdot) d\mu$, which is proper, convex, and lsc. \end{prop} \noindent Via a direct application of the above we have the following. \begin{corollary} \label{thm:epicon_mgf} The sequence $M_{n}(C^{T}(\cdot), \cdot)$ is epi-consistent with limit $M_{\mu} \circ C^{T}$. \end{corollary} \begin{proof} Define $f(z,x) = e^{\langle C^{T}z, x\rangle}$. For any $x$, $\langle C^{T} (\cdot),x \rangle$ is a linear function and $e^{(\cdot)}$ is convex; hence the composition $f(\cdot, x)$ is convex. As $f$ is differentiable (hence continuous) in $z$ for fixed $x$ and vice-versa, it is Carath{\'e}odory and thus a random lsc function (with respect to $(\mathcal{X},\mathcal{B}_{\mathcal{X}})$). Next we claim $\overline{z} = 0$ satisfies the conditions of the proposition. First, by direct computation \begin{equation*} \int_{\mathcal{X}} e^{ \langle 0,x \rangle } d\mu(x) = \int_{\mathcal{X}} d\mu(x) = 1 < + \infty \end{equation*} as $\mu$ is a probability measure on $\mathcal{X}$. As $f(\cdot, x)$ is differentiable, we can compute $\nabla_{z}f(\overline{z},x) = Cxe^{\langle C^{T}z,x\rangle} \vert_{z = 0} =Cx$. Hence \begin{equation*} \int_{\mathcal{X}} \Vert \nabla_{z}f(\overline{z},x) \Vert d\mu(x) = \int_{\mathcal{X}} \Vert C x\Vert d\mu(x) \leq \Vert C \Vert \max_{x \in \mathcal{X}} \Vert x \Vert < + \infty, \end{equation*} where we have used the compactness of $\mathcal{X}$, and once again that $\mu$ is a probability measure.
Thus we satisfy the assumptions of \cref{thm:epicon}, and can conclude that the sequence of random lsc functions $S_{n}$ given by $S_{n}(z,\omega) = \frac{1}{n}\sum_{i=1}^{n} f(z, X_{i}(\omega))$ is epi-consistent with limit $S_{\mu} : z \mapsto \int_{\mathcal{X}} f(z , \cdot) d\mu$. But, \begin{equation*} S_{n}(z, \omega) = \frac{1}{n} \sum_{i=1}^{n} e^{\langle C^{T}z, X_{i}(\omega) \rangle} = M_{n}(C^{T} z, \omega) \qquad \text{ and } \qquad S_{\mu}(z) = \int_{\mathcal{X}} e^{\langle C^{T}z, \cdot \rangle} d\mu = M_{\mu}(C^{T}z), \end{equation*} and so we have shown that the sequence $M_{n}(C^{T}(\cdot), \cdot)$ is epi-consistent with limit $M_{\mu} \circ C^{T}$. \end{proof} \begin{corollary} \label{cor:Log_MGF_epiconverges} The sequence $L_{\mu_{n}^{(\omega)}} \circ C^{T}$ is epi-consistent with limit $L_{\mu} \circ C^{T}$. \end{corollary} \begin{proof} Let \begin{equation*} \Omega_{e} = \left\{ \omega \in \Omega \: \vert \: M_{n}(C^{T}(\cdot),\omega) \toe M_{\mu} \circ C^{T}(\cdot) \right\}, \end{equation*} which has $\Prob(\Omega_{e})=1$ by \cref{thm:epicon_mgf}, and let $\omega \in \Omega_{e}$. Both $M_{n}$ and $M_{\mu}$ are finite valued and strictly positive, and furthermore the function $\log: \R_{++} \to \R$ is continuous and increasing. Hence, by a simple extension of \cite[Exercise 7.8(c)]{rockafellar1997convex}, it follows, for all $\omega \in \Omega_{e}$, that \[ L_{\mu_{n}^{(\omega)}}\circ C^{T} = \log M_{n}(C^{T}(\cdot),\omega) \toe \log M_{\mu} \circ C^{T} = L_{\mu} \circ C^{T}. \] \end{proof} \subsection{Epi-consistency of the dual objective functions} We now use the previous lemma to obtain an epi-consistency result for the entire empirical dual objective function.
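At any fixed $z$, the convergence underlying \cref{thm:epicon_mgf} is simply the strong law of large numbers applied to the i.i.d. variables $e^{\langle C^{T}z, X_{i}(\omega)\rangle}$. A quick Monte Carlo illustration (with an assumed two-point law on $\mathcal{X}$, so that $M_{\mu}$ is available in closed form; all numerical values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: X = {x0, x1} with mu({x0}) = mu({x1}) = 1/2,
# so M_mu(y) = (exp(<y, x0>) + exp(<y, x1>)) / 2 in closed form.
x0, x1 = np.array([0.0, 0.0]), np.array([1.0, -1.0])
C = np.array([[1.0, 0.0], [0.5, 1.0]])       # illustrative 2x2 matrix
z = np.array([0.3, -0.2])
y = C.T @ z

M_exact = 0.5 * (np.exp(y @ x0) + np.exp(y @ x1))

# Empirical MGF M_n(C^T z, omega) along one sample path omega.
samples = rng.integers(0, 2, size=100_000)   # i.i.d. fair coin flips
Xs = np.where(samples[:, None] == 0, x0, x1)
M_n = np.mean(np.exp(Xs @ y))

err = abs(M_n - M_exact)  # small by the strong law of large numbers
```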
Such a result is not immediately clear, as epi-convergence is not generally preserved by even simple operations such as addition; see, e.g., the discussion in \cite[p.~276]{rockafellar2009variational} and the note \cite{BuH15}, which alludes to subtle difficulties when dealing with extended real-valued arithmetic in this context. \\ We recall the following pointwise convergence result for compact $\mathcal{X}$, which is classical in the statistics literature. \begin{lemma}\label{lemma:MGF_pointwise} If $\mu \in \mathcal{P}(\mathcal{X})$, then for almost every $\omega \in \Omega$ and all $z \in \R^{m}$, \begin{equation*} M_{n}(C^{T}z, \omega) \to M_{\mu} \circ C^{T}(z), \end{equation*} that is, we have pointwise convergence in $z$. \end{lemma} We remark that the literature contains stronger uniform convergence results, observed first by Cs{\"o}rg{\"o} \cite{csorgo1982empirical} without proof, and later proven in \cite{feuerverger1989empirical} and \cite[Proposition 1]{csorgHo1983kernel}. Noting that $M_{n}(z, \omega)$ and $M_{\mu}(z)$ are strictly positive for all $z \in \R^{m}$, and that the logarithm is continuous on the strictly positive real line, we have an immediate corollary: \begin{corollary} \label{cor:Logmgf_pointwise} For almost every $\omega \in \Omega$ and all $z \in \R^{m}$, \begin{equation*} L_{\mu_{n}^{(\omega)}}(C^{T}z) = \log M_{n}(C^{T}z, \omega) \to \log M_{\mu}(C^{T}z) = L_{\mu}( C^{T}z ). \end{equation*} \end{corollary} Using this we prove the first main result: \begin{theorem} \label{thm:epicon_dual_obj} For any lsc, proper, convex function $g$, the empirical dual objective function $\phi_{\mu_{n}^{(\omega)}}$ is epi-consistent with limit $\phi_{\mu}$. \end{theorem} \begin{proof} Define \begin{equation*} \Omega_{e} = \left\{ \omega \in \Omega \: \vert \: L_{\mu_{n}^{(\omega)}}\circ C^{T}(\cdot) \toe L_{\mu} \circ C^{T}(\cdot)\right\}. \end{equation*} By \cref{cor:Log_MGF_epiconverges}, $\Prob(\Omega_{e})=1$.
Similarly denote \begin{equation*} \Omega_{p} = \left\{ \omega \in \Omega \: \vert \: L_{\mu_{n}^{(\omega)}}\circ C^{T}(\cdot) \to L_{\mu} \circ C^{T}(\cdot) \text{ pointwise} \right\}. \end{equation*} By \cref{cor:Logmgf_pointwise}, we also have $\Prob(\Omega_{p})=1$. In particular we observe that $\Prob(\Omega_{e} \cap \Omega_{p})=1$. On the other hand, the constant sequence of convex, proper, lsc functions $\alpha g^{*}\circ (-\text{Id}/\alpha)$ trivially converges to $\alpha g^{*}\circ ( - \text{Id}/\alpha)$ both epigraphically and pointwise. \\ Thus for any fixed $\omega \in \Omega_{p} \cap \Omega_{e}$ we have constructed two sequences, namely $g_{n} \equiv \alpha g^{*} \circ (-\text{Id}/\alpha)$ and $L_{n} = L_{\mu_{n}^{(\omega)}}\circ C^{T}$, which both converge epigraphically and pointwise. Therefore, by \cite[Theorem 7.46(a)]{rockafellar2009variational}, for all $\omega \in \Omega_{e} \cap \Omega_{p}$ \begin{equation*} \alpha g^{*}\circ (- \text{Id}/\alpha) + L_{\mu_{n}^{(\omega)}}\circ C^{T} \toe \alpha g^{*}\circ (- \text{Id}/\alpha) + L_{\mu} \circ C^{T} . \end{equation*} As $\Prob(\Omega_{e} \cap \Omega_{p}) =1$, this proves the result. \end{proof} \subsection{Convergence of minimizers} We now parlay epi-consistency into convergence of minimizers. At the dual level this can be summarized in the following lemma, essentially \cite[Proposition 2.2]{king1991epi}, which was stated therein without proof.\footnote{ We remark that (as observed in \cite{king1991epi}) epigraphical convergence of a (multi-)function depending on a parameter (such as $\omega$) guarantees convergence of minimizers in much broader contexts, see e.g. \cite[Theorem 1.10]{attouch1984variational} or \cite[Theorem 3.22]{rockafellar2006variational}.
Here we include a first-principles proof.} \begin{lemma} \label{lemma:min} There exists a subset $\Xi \subset \Omega$ of measure one such that for any $\omega \in \Xi$ we have the following: Let $\{ \varepsilon_{n} \} \searrow 0$ and let $z_{n}(\omega)$ be such that \begin{equation*} \phi_{\mu_{n}^{(\omega)}}(z_{n}(\omega)) \leq \inf_{z} \phi_{\mu_{n}^{(\omega)}}(z) + \varepsilon_{n}. \end{equation*} Let $\{ z_{n_{k}}(\omega) \}$ be any convergent subsequence of $\{ z_{n}(\omega) \} $. Then $\lim_{k \to \infty}z_{n_{k}}(\omega)$ is a minimizer of $\phi_{\mu}$. If $\phi_{\mu}$ admits a unique minimizer $\overline{z}_{\mu}$, then $z_{n}(\omega) \to \overline{z}_{\mu}$. \end{lemma} \begin{proof} Denote \begin{equation*} \Xi = \left\{ \omega \in \Omega \: \vert \: \phi_{\mu_{n}^{(\omega)}} \toe \phi_{\mu} \right\}. \end{equation*} By \cref{thm:epicon_dual_obj}, $\Prob(\Xi) = 1$. Fix any $\omega \in \Xi$. By \Cref{thm:level-coercive}, the global \cref{assum:domain} holds if and only if $\phi_{\mu}$ is level-bounded. As $\omega \in \Xi$, the sequence of convex functions $\phi_{\mu_{n}^{(\omega)}}$ epi-converges to the level-bounded function $\phi_{\mu}$, and therefore by \cite[Theorem 7.32 (c)]{rockafellar2009variational}, the sequence $\phi_{\mu_{n}^{(\omega)}}$ is eventually level-bounded.\footnote{A sequence of functions $f_{n}: \R^{d} \to \overline{\R}$ is eventually level-bounded if for each $\alpha$, the sequence of sets $\{ f_{n}^{-1}([-\infty, \alpha])\}$ is eventually bounded, see \cite[p.~266]{rockafellar2009variational}.} In particular this means the sequence of lsc, proper, eventually level-bounded functions $\phi_{\mu_{n}^{(\omega)}}$ epi-converges to $\phi_{\mu}$, which is also lsc and proper. Hence by \cite[Theorem 7.33]{rockafellar2009variational} any sequence of approximate minimizers $\{ z_{n}(\omega) \}$ is bounded, with all cluster points belonging to $\argmin \phi_{\mu} $.
Namely, any convergent subsequence $\{ z_{n_{k}}(\omega) \}$ has the property that its limit $\lim_{k \to \infty} z_{n_{k}}(\omega) \in \argmin \phi_{\mu} $. Lastly, if we also have $\argmin \phi_{\mu} = \{ \overline{z}_{\mu} \}$, then from the same result \cite[Theorem 7.33]{rockafellar2009variational}, necessarily $z_{n}(\omega) \to \overline{z}_{\mu}$. \end{proof} We now push this convergence to the primal level by using, in essence, Attouch's Theorem \cite{attouch1977convergence}, \cite[Theorem 3.66]{attouch1984variational}, in the form of a corollary of Rockafellar and Wets \cite[Theorem 12.40]{rockafellar2009variational}. \begin{lemma} \label{lemma:gradient_converge} Let $\hat{z} \in \R^{m}$, and let $z_{n} \to \hat{z}$ be any sequence converging to $\hat{z}$. Then for almost every $\omega$, \begin{equation*} \lim_{n \to \infty} \nabla L_{\mu_{n}^{(\omega)}}(C^{T}z)\vert_{z = z_{n}} = \nabla L_{\mu}(C^{T}\hat{z}). \end{equation*} \end{lemma} \begin{proof} We first observe that $\dom (L_{\mu} \circ C^{T}) = \R^{m}$ so that $\hat{z} \in \text{int}(\text{dom}(L_{\mu} \circ C^{T} )).$ Also, as $M_{\mu}$ is everywhere finite-valued, $L_{\mu}(C^{T}\hat{z}) = \log M_{\mu}(C^{T}\hat{z}) < + \infty$. Furthermore, for all $n$, the function $L_{\mu_{n}^{(\omega)}}\circ C^{T}$ is proper, convex, and differentiable. Finally, we have shown in \cref{cor:Log_MGF_epiconverges} that for almost every $\omega \in \Omega$, we have $ L_{\mu_{n}^{(\omega)}}\circ C^{T} \toe L_{\mu} \circ C^{T}$. \\ These conditions together are exactly the assumptions of \cite[Theorem 12.40 (b)]{rockafellar2009variational}. Hence we have the convergence $\lim_{n \to \infty} \nabla L_{\mu_{n}^{(\omega)}}(C^{T}z)\vert_{z = z_{n}} = \nabla L_{\mu}(C^{T}\hat{z})$ for almost every $\omega \in \Omega$. \end{proof} We now prove the main result.
\begin{theorem} \label{Thm:convergence_of_primal} There exists a set $\Xi \subseteq \Omega$ of probability one such that for each $\omega \in \Xi$ the following holds: Given $\varepsilon_{n} \searrow 0$ and $z_{n}(\omega)$ such that $\phi_{\mu_{n}^{(\omega)}}(z_{n}(\omega)) \leq \inf_{z} \phi_{\mu_{n}^{(\omega)}}(z) + \varepsilon_{n}$, define \begin{equation*} x_{n}(\omega) := \nabla L_{\mu_{n}^{(\omega)}}(C^{T}z)\vert_{z = z_{n}(\omega)}. \end{equation*} If $z_{n_{k}}(\omega)$ is any convergent subsequence of $z_{n}(\omega)$ then $\lim_{k \to \infty} x_{n_{k}}(\omega) = \overline{x}_{\mu} $, where $\overline{x}_{\mu}$ is the unique solution of $(P)$. If $\phi_{\mu}$ admits a unique minimizer $\overline{z}_{\mu}$, then in fact $x_{n}(\omega) \to \overline{x}_{\mu}.$ \end{theorem} \begin{proof} Let \begin{equation*} \Xi = \{ \omega \in \Omega \: \vert \: \phi_{\mu_{n}^{(\omega)}} \toe \phi_{\mu} \}, \end{equation*} recalling that by \cref{thm:epicon_dual_obj}, $\Prob(\Xi)=1$. Fix $\omega \in \Xi.$ By \cref{lemma:min}, for any convergent subsequence $z_{n_{k}}(\omega)$ with limit $\overline{z}(\omega)$, we have that $\overline{z}(\omega) \in \argmin \phi_{\mu}$. Furthermore, by \cref{lemma:gradient_converge}, \begin{equation*} \lim_{k \to \infty} x_{n_{k}}(\omega) =\lim_{k \to \infty }\nabla L_{\mu_{n_{k}}^{(\omega)}}(C^{T}z)\vert_{z = z_{n_{k}}(\omega)} = \nabla L_{\mu}(C^{T}\overline{z}(\omega)). \end{equation*} Using the primal-dual optimality conditions \cref{eqn:primal_dual_optimality}, we have that $\nabla L_{\mu}(C^{T}\overline{z}(\omega))$ solves the primal problem (P). As (P) admits a unique solution $\overline{x}_{\mu}$, necessarily $\lim_{k \to \infty} x_{n_{k}}(\omega) = \overline{x}_{\mu}$. If additionally $\argmin \phi_{\mu} = \{\overline{z}_{\mu} \}$, then necessarily $z_{n}(\omega) \to \overline{z}_{\mu}$ via \cref{lemma:min}, and the result follows from an identical application of \cref{lemma:gradient_converge} and \cref{eqn:primal_dual_optimality}.
\end{proof} \section{Convergence rates for quadratic fidelity} \label{sec:rates} When proving rates of convergence, we restrict ourselves to the case where $g = \frac{1}{2}\Vert b - (\cdot) \Vert_{2}^{2}$. In this case $g^{*}(y) = \frac{1}{2}\Vert y \Vert^{2} + \langle b, y \rangle$, so that $\alpha g^{*}(-z/\alpha) = \frac{1}{2\alpha}\Vert z \Vert^{2} - \langle b, z \rangle$, and the dual objective function reads \begin{equation}\label{eq:Dual_2norm} \phi_{\mu}(z) = \frac{1}{2\alpha} \Vert z \Vert^{2} - \langle b, z \rangle + L_{\mu}(C^{T}z). \end{equation} Clearly, $\phi_{\mu}$ is finite valued and ($1/\alpha$-)strongly convex, hence admits a unique minimizer $\overline{z}_{\mu}$. Recalling what was laid out in \Cref{sec:MEMProblem}, as the global \cref{assum:domain} holds vacuously with $g = \frac{1}{2}\Vert b - ( \cdot ) \Vert_{2}^{2}$, the unique solution to the MEM primal problem (P) is given by $\overline{x}_{\mu} = \nabla L_{\mu}(C^{T}\overline{z}_{\mu})$. Further, by our global compactness assumption on $\mathcal{X}$, $\phi_\mu$ is infinitely differentiable. \subsection{Epigraphical distances} Our main workhorse for proving convergence rates is the epigraphical distance. We mainly follow the presentation in Royset and Wets \cite[Chapter 6.J]{royset2022optimization}, but one may find a similar treatment in Rockafellar and Wets \cite[Chapter 7]{rockafellar2009variational}. For any norm $\Vert \cdot \Vert_{*}$ on $\R^{d}$, the distance (in said norm) between a point $c$ and a set $D$ is defined as $ d_{D}(c) = \inf_{d \in D} \Vert c - d \Vert_{*} $. For $C,D$ subsets of $\R^{d}$ we define the excess of $C$ over $D$ \cite[p.~399]{royset2022optimization} as \begin{equation*} \exc(C,D) := \begin{cases} \sup_{c \in C} d_{D}(c) &\text{ if } C,D \neq \emptyset, \\ \infty &\text{ if } C \neq \emptyset, D = \emptyset, \\ 0 &\text{ otherwise.} \end{cases} \end{equation*} We note that this excess explicitly depends on the choice of norm used to define $d_{D}$.
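For finite point sets, the excess is straightforward to compute; the following small sketch (with illustrative sets written $A$ and $B$ to avoid a clash with the matrix $C$) highlights its asymmetry:

```python
import numpy as np

def excess(A, B):
    """exc(A, B) = sup_{a in A} dist(a, B) for nonempty finite point arrays."""
    # Pairwise 2-norm distances: rows index points of A, columns points of B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return D.min(axis=1).max()

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])

# Every point of A also lies in B, but B contains a point far from A:
# exc(A, B) = 0 while exc(B, A) = 4, so the excess is not symmetric.
```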
For the specific case of the 2-norm, we denote the projection of a point $a \in \R^{d}$ onto a closed, convex set $B \subset \R^{d}$ as the unique point $\text{proj}_{B}(a) \in B$ which achieves the minimum $\Vert \text{proj}_{B}(a) - a \Vert = \min_{b \in B} \Vert b - a \Vert.$ \\ The truncated $\rho$-Hausdorff distance \cite[p.~399]{royset2022optimization} between two sets $C,D \subset \R^{d}$ is defined as \begin{equation*} \hat{d}_{\rho}(C,D) := \max \left\{ \exc(C \cap B_{\rho} ,D),\exc(D\cap B_{\rho} ,C)\right\}, \end{equation*} where $B_{\rho} = \{ x \in \R^{d} \: : \: \Vert x \Vert_{*} \leq \rho \}$ is the closed ball of radius $\rho$ in $\R^{d}$. When discussing distances on $\R^{d}$, we will consistently make the choice of $\Vert \cdot \Vert_{*} = \Vert \cdot \Vert$, the $2$-norm. We note we can recover the usual Pompeiu-Hausdorff distance by taking $\rho \to \infty$. However, to extend the truncated $\rho$-distance to epigraphs of functions, which here are subsets of $\R^{d+1}$, we equip $\R^{d+1}$ with a very particular norm. For any $z \in \R^{d+1},$ write $z = (x,a)$ for $x \in \R^{d},a\in \R$. Then for any $z_{1},z_{2} \in \R^{d+1}$ we define the norm \begin{equation*} \Vert z_{1} - z_{2} \Vert_{*,d+1} = \Vert (x_{1},a_{1}) - (x_{2},a_{2}) \Vert_{*,d+1} := \max \{ \Vert x_{1}-x_{2} \Vert_{2}, \vert a_{1}-a_{2} \vert \}. \end{equation*} With this norm, we can define an epi-distance as in \cite[Equation 6.36]{royset2022optimization}: for $f,h: \R^{d} \to \overline{\R}$ not identically $+\infty$, and $\rho > 0$, we define \begin{equation} \hat{d}_{\rho}(f,h) := \hat{d}_{\rho}( \epi f ,\epi h), \label{defn:epidistance} \end{equation} where $\R^{d+1}$ has been equipped with the norm $\Vert \cdot \Vert_{*,d+1}$.
This epi-distance quantifies epigraphical convergence in the following sense \cite[Theorem 7.58]{rockafellar2009variational}\footnote{We remark that while at first glance the definition of epi-distance seen in \cite[Theorem 7.58]{rockafellar2009variational} differs from ours (which agrees with \cite{royset2022optimization}), it is equivalent up to multiplication by a constant and rescaling in $\rho$. See \cite[Proposition 6.58]{royset2022optimization} and \cite[Proposition 7.61]{rockafellar2009variational} (the Kenmochi conditions) for details.}: if $f$ is a proper function and $f_{n}$ a sequence of proper functions, then for any constant $\rho_{0} >0$: \begin{equation*} f_{n} \toe f \text{ if and only if } \hat{d}_{\rho}(f_{n},f) \to 0 \text{ for all $\rho> \rho_{0}$}. \end{equation*} \subsection{Convergence Rates} We begin by proving rates of convergence for an arbitrary prior $\nu$, and later specialize to the empirical case. To this end, we construct a key global constant $\rho_{0}$ induced by $C,b,\alpha$ in \cref{eq:Dual_2norm}: We define \begin{equation} \rho_{0} := \max \left\{ \hat{\rho}, \frac{\hat{\rho}^{2}}{2\alpha} + \Vert b \Vert \hat{\rho} + \hat{\rho}\Vert C \Vert \vert \mathcal{X} \vert \right\}, \label{eqn:rho_0_defn} \end{equation} where $\hat{\rho} = 2\alpha( \Vert b \Vert + \Vert C \Vert \vert \mathcal{X} \vert )$ and $\vert \mathcal{X} \vert := \max_{x \in \mathcal{X}} \Vert x \Vert.$ We emphasize that our running compactness assumption on $\mathcal X$ is essential for the finiteness of $\rho_{0}$. The main feature of this constant is the following. \begin{lemma} \label{lemma:rho_0_conditions} For any $\nu \in \mathcal{P}(\mathcal{X})$, let $\phi_{\nu}$ be the corresponding dual objective function as defined in \cref{eq:Dual_2norm}, which has a unique minimizer $\overline{z}_{\nu}$.
Then $\rho_{0}$ has the following two properties: \begin{equation*} (a) \quad \phi_{\nu}(\overline{z}_{\nu}) \in [-\rho_{0}, \rho_{0}], \qquad (b) \quad \Vert \overline{z}_{\nu} \Vert \leq \rho_{0}. \end{equation*} \end{lemma} \begin{proof} We first claim that $\Vert \overline{z}_{\nu} \Vert \leq \hat{\rho}$. Let $z \in \R^{m}$ be such that $\Vert z \Vert > \hat{\rho}$. Then, \begin{align*} \phi_{\nu}(z) &\geq \frac{\Vert z \Vert^{2}}{2 \alpha} - \Vert b \Vert \Vert z \Vert + \log \int_{\mathcal{X}} \exp\left( -\Vert C \Vert \Vert x \Vert \Vert z \Vert\right) d\nu(x) \geq \Vert z \Vert \left(\frac{\Vert z \Vert}{2 \alpha} - \Vert b \Vert -\Vert C \Vert \vert \mathcal{X} \vert \right). \end{align*} Here, in the first inequality we applied the Cauchy-Schwarz inequality (twice), and in the second we used that $\Vert x \Vert \leq \vert \mathcal{X} \vert$ and $\nu(\mathcal{X})=1$; hence the integral term can be bounded below by $-\Vert z \Vert \Vert C \Vert \vert \mathcal{X} \vert$. From this estimate it is clear that $\Vert z \Vert >\hat{\rho}$ implies $\phi_{\nu}(z) > 0 $. But observing that $\phi_{\nu}(0)= 0,$ such $z$ cannot be a minimizer. Hence necessarily $\Vert \overline{z}_{\nu} \Vert \leq \hat{\rho} \leq \rho_{0}$. Once more via Cauchy-Schwarz, and using $\nu(\mathcal{X})=1$ and $\Vert \overline{z}_{\nu} \Vert \leq \hat{\rho} $, we compute \begin{align*} \vert \phi_{\nu}(\overline{z}_{\nu}) \vert = \left\vert \frac{\Vert \overline{z}_{\nu} \Vert^{2}}{2 \alpha} - \langle b,\overline{z}_{\nu} \rangle + L_{\nu}(C^{T}\overline{z}_{\nu})\right\vert &\leq \frac{ \hat{\rho}^{2}}{2 \alpha} + \hat{\rho} \Vert b \Vert + \log \int_{\mathcal{X}} \exp( \Vert C \Vert \vert \mathcal{X} \vert \hat{\rho} )d\nu(x)\\ &= \frac{\hat{\rho}^{2}}{2\alpha} + \hat{\rho} \Vert b \Vert + \hat{\rho}\Vert C \Vert \vert \mathcal{X} \vert. \end{align*} \end{proof} \begin{lemma} \label{cor:epidistanceBoundedBySupnorm} Let $\rho_{0}$ be given by \cref{eqn:rho_0_defn}.
Then for all $\rho> \rho_{0}$ and all $\mu, \nu \in \mathcal{P}(\mathcal{X})$, we have \begin{equation*} \hat{d}_{\rho}(\phi_{\mu},\phi_{\nu}) \leq \max_{z \in B_{\rho}} \vert L_{\nu}(C^{T}z) - L_{\mu}(C^{T}z) \vert. \end{equation*} \end{lemma} \begin{proof} \cref{lemma:rho_0_conditions} guarantees that for both measures $\mu,\nu \in \mathcal{P}(\mathcal{X})$, we have \begin{equation} \phi_{\nu}(\overline{z}_{\nu}),\phi_{\mu}(\overline{z}_{\mu}) \in [-\rho_{0}, \rho_{0}] \qquad \text{ and } \qquad \Vert \overline{z}_{\nu} \Vert,\Vert \overline{z}_{\mu} \Vert \leq \rho_{0}. \label{eqn:rho_0_conditions_both} \end{equation} These conditions imply, for any $\rho > \rho_{0}$, that the set $ C_{\rho} := (\{ z : \phi_{\mu}(z) \leq \rho \} \cup \{ z : \phi_{\nu}(z) \leq \rho \} )\cap B_{\rho}$ is nonempty. This follows from \cref{eqn:rho_0_conditions_both}, as for any $\rho > \rho_{0}$ the nonempty set $\{ \overline{z}_{\mu},\overline{z}_{\nu}\} \cap B_{\rho_{0}}$ is contained in $ C_{\rho_{0}} \subset C_{\rho}.$ As $C_{\rho}$ is nonempty we may apply \cite[Theorem 6.59]{royset2022optimization} with $f = \phi_{\mu}$ and $g= \phi_{\nu}$ to obtain $\hat{d}_{\rho}(\phi_{\mu},\phi_{\nu}) \leq \sup_{ z\in C_{\rho }} \vert \phi_{\nu}(z) - \phi_{\mu}(z) \vert.$ Then from the definition of $\phi_{\mu}$ and $\phi_{\nu}$, we have \begin{align*} \sup_{ z\in C_{\rho }} \vert \phi_{\nu}(z) - \phi_{\mu}(z) \vert &= \sup_{z \in C_{\rho}} \vert L_{\nu}(C^{T}z) - L_{\mu}(C^{T}z) \vert \\ &\leq \sup_{z \in B_{\rho}} \vert L_{\nu}(C^{T}z) - L_{\mu}(C^{T}z) \vert \\ &= \max_{z \in B_{\rho}} \vert L_{\nu}(C^{T}z) - L_{\mu}(C^{T}z) \vert, \end{align*} where the penultimate line uses that $C_{\rho} \subseteq B_{\rho}$, and the final equality follows as the continuous function $L_{\mu}\circ C^{T} -L_{\nu} \circ C^{T}$ achieves a maximum over the compact set $B_{\rho}$.
\end{proof} For notational convenience, we will hereafter denote \begin{equation*} D_{\rho}(\nu,\mu) := \max_{z \in B_{\rho}} \vert L_{\mu}(C^{T}z) - L_{\nu}(C^{T}z) \vert. \end{equation*} \noindent We also recall from \Cref{sec:convexAnalysisPrelim} that $S_{\varepsilon}(\nu)$ denotes the set of $\varepsilon$-minimizers of $\phi_{\nu}$. \begin{lemma} \label{cor:infdistRoyset} Let $\rho_{0}$ be given by \cref{eqn:rho_0_defn}. Then, for all $\mu, \nu \in \mathcal{P}(\mathcal{X})$, all $\rho > \rho_{0}$, and all $\varepsilon \in [0, \rho-\rho_{0}]$, the following holds: If \begin{equation*} \delta > \varepsilon + 2 D_{\rho}(\nu,\mu) , \end{equation*} then \begin{align*} \vert \phi_{\nu}(\overline{z}_{\nu}) - \phi_{\mu}(\overline{z}_{\mu}) \vert &\leq D_{\rho}(\nu,\mu) \qquad \text{and} \qquad \exc(S_{\varepsilon}( \nu)\cap B_{\rho}, S_{\delta}(\mu) ) \leq D_{\rho}(\nu,\mu). \end{align*} \end{lemma} \begin{proof} Let $\rho > \rho_{0}$ and $\varepsilon \in [0,\rho-\rho_{0}]$. By this choice, we have $\varepsilon < 2\rho$. By \cref{lemma:rho_0_conditions}(a) we have $\phi_{\nu}(\overline{z}_{\nu}), \phi_{\mu}(\overline{z}_{\mu}) \in [ -\rho_{0}, \rho_{0}],$ and in turn by the choice of $\rho, \varepsilon$ we have $ [ -\rho_{0}, \rho_{0}] \subseteq [ -\rho, \rho_{0}] \subseteq [ -\rho, \rho - \varepsilon]$. Also, as $\rho > \rho_{0}$, by \cref{lemma:rho_0_conditions}(b) we have $\{ \overline{z}_{\nu} \} = \argmin \phi_{\nu} \cap B_{\rho}$ and $\{ \overline{z}_{\mu} \} = \argmin \phi_{\mu} \cap B_{\rho}.$ These properties of $\rho, \varepsilon$ are exactly the assumptions of \cite[Theorem 6.56]{royset2022optimization} for $f= \phi_{\mu}$ and $g= \phi_{\nu}$.
This result yields that, if $\delta > \varepsilon +2 \hat{d}_{\rho}(\phi_{\mu}, \phi_{\nu} )$, then $\vert \phi_{\nu}(\overline{z}_{\nu}) - \phi_{\mu}(\overline{z}_{\mu}) \vert \leq \hat{d}_{\rho}( \phi_{\nu}, \phi_{\mu} )$ and $\exc(S_{\varepsilon}( \nu)\cap B_{\rho}, S_{\delta}(\mu) )\leq \hat{d}_{\rho}( \phi_{\nu}, \phi_{\mu} )$.\\ However, as $ \rho > \rho_{0}$, we may apply \cref{cor:epidistanceBoundedBySupnorm} to assert $\hat{d}_{\rho}(\phi_{\mu},\phi_{\nu}) \leq D_{\rho}(\nu,\mu)$. Hence, for any $\delta > \varepsilon +2D_{\rho}(\nu,\mu) \geq \varepsilon +2 \hat{d}_{\rho}(\phi_{\mu}, \phi_{\nu} )$ we obtain \begin{align*} \vert \phi_{\nu}(\overline{z}_{\nu}) - \phi_{\mu}(\overline{z}_{\mu}) \vert & \leq D_{\rho}(\nu,\mu),\\ \exc(S_{\varepsilon}( \nu)\cap B_{\rho}, S_{\delta}(\mu) )& \leq D_{\rho}(\nu,\mu). \end{align*} \end{proof} For the main results, \cref{thm:epsdeltaprimalbound_full} and \cref{thm:final_rate_n}, we require additional auxiliary results. \begin{lemma} \label{lemma:MGF_bounded} Let $\rho >0$ and $\nu \in \mathcal{P}(\mathcal{X})$. Then, for all $z \in B_{\rho}$ we have \begin{equation*} M_{\nu}(C^{T}z) = \int_{\mathcal{X}} e^{\langle C^{T}z, \cdot \rangle} d\nu \in \left[\exp \left( -\rho \Vert C \Vert \vert \mathcal{X} \vert \right), \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right) \right]. \end{equation*} \end{lemma} \begin{proof} For all $x \in \mathcal{X}$, $z \in B_{\rho}$, we have, via Cauchy-Schwarz, that \begin{equation*} \exp \left( - \rho \Vert C \Vert \vert \mathcal{X} \vert \right) \leq \exp \left( - \Vert z \Vert \Vert Cx \Vert \right) \leq \exp \langle C^{T}z ,x \rangle.
\end{equation*} In particular, $ \exp \left( - \rho \Vert C \Vert \vert \mathcal{X} \vert \right) \leq \min_{x \in \mathcal{X}} \exp \langle C^{T}z ,x \rangle.$ On the other hand, we find that \begin{equation*} \exp \langle C^{T}z ,x \rangle \leq \exp \left( \Vert z \Vert \Vert Cx \Vert \right) \leq \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right). \end{equation*} Thus, $\max_{x \in \mathcal{X}} \exp \langle C^{T}z ,x \rangle \leq \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right).$ Hence for any $\nu \in \mathcal{P}(\mathcal{X})$ we find \[ 1 \cdot\exp \left( - \rho \Vert C \Vert \vert \mathcal{X} \vert \right)\leq \nu(\mathcal{X}) \min_{x \in \mathcal{X}} e^{\langle C^{T}z, x \rangle } \leq \int_{\mathcal{X}} e^{\langle C^{T}z, \cdot \rangle } d\nu \leq \nu(\mathcal{X}) \max_{x \in \mathcal{X}} e^{\langle C^{T}z, x \rangle } \leq 1 \cdot \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right). \] \end{proof} \noindent With some additional computation, we can infer the following Lipschitz bound on $\nabla L_{\nu}$. \begin{corollary} \label{cor:global_K_bound} Let $\hat{\rho} > 0$ and $\nu \in \mathcal{P}(\mathcal{X})$. Then for all $x,y \in B_{\hat{\rho}} \subset \R^{d}$, we have that \begin{equation} \Vert \nabla L_{\nu}(x) - \nabla L_{\nu}(y) \Vert \leq K \Vert x-y \Vert \end{equation} for an explicit constant $K>0$ which depends on $\hat{\rho}, d, \vert \mathcal{X} \vert$, but not on $\nu$. \end{corollary} \begin{proof} As discussed in \Cref{sec:MEMProblem}, $L_{\nu}$ is twice continuously differentiable. 
Hence, using the fundamental theorem of calculus, we have $\nabla L_{\nu}(y) - \nabla L_{\nu}(x) = \int_{0}^{1} \nabla^{2} L_{\nu}(x +t(y-x)) \cdot (y-x) dt.$ Thus, as $x +t(y-x) \in B_{\hat{\rho}}$ for all $t\in [0,1]$, we have \begin{align*} \Vert \nabla L_{\nu}(x) - \nabla L_{\nu}(y) \Vert &\leq \int_{0}^{1} \Vert \nabla^{2} L_{\nu}(x +t(y-x)) \Vert \Vert y-x \Vert dt \\ &\leq \int_{0}^{1} \max_{z \in B_{\hat{\rho}}} \Vert \nabla^{2}L_{\nu}(z) \Vert \Vert y-x \Vert dt \\ &= \max_{z \in B_{\hat{\rho}}} \Vert \nabla^{2}L_{\nu}(z) \Vert \Vert y-x \Vert. \end{align*} By convexity of $L_\nu$, we observe that $\nabla^2 L_\nu(z)$ is (symmetric) positive semidefinite (for any $z$). Hence, $\max_{z \in B_{\hat{\rho}}} \Vert \nabla^{2}L_{\nu}(z) \Vert \leq \max_{z \in B_{\hat{\rho}}} \mathrm{Tr}(\nabla^{2} L_{\nu}(z) ).$ Now, observe that \begin{align*} \frac{\partial^{2}}{\partial z_{i}^{2}} L_{\nu}(z) = \frac{\partial^{2}}{\partial z_{i}^{2}} \log M_{\nu}(z) &= \frac{-1}{(M_{\nu}(z))^{2}} \left[ \int_{\mathcal{X}} x_{i} \exp \langle z, x \rangle d\nu(x) \right]^{2} + \frac{1}{M_{\nu}(z)} \left[ \int_{\mathcal{X}} x_{i}^{2} \exp \langle z, x \rangle d\nu(x) \right], \end{align*} where the interchange of the derivative and integral is permitted by the Leibniz rule for finite measures, see e.g. \cite[Theorem 2.27]{folland1999real} or \cite[Theorem 6.28]{klenke2013probability}. Taking the absolute value in the last identity, we may bound $\vert x_{i} \vert \leq \Vert x \Vert_{2} \leq \vert \mathcal{X} \vert$, $\Vert z \Vert \leq \hat{\rho}$, and apply \cref{lemma:MGF_bounded} (with $C$ taken to be the identity) to bound $M_\nu(z)$.
This eventually yields \begin{align*} \left\vert \frac{\partial^{2}}{\partial z_{i}^{2}} L_{\nu}(z) \right\vert \leq \frac{ \vert \mathcal{X} \vert^{2} }{\exp(-\hat{\rho} \vert \mathcal{X} \vert)^{2}} \exp( \hat{\rho} \vert \mathcal{X} \vert )^{2} + \frac{\vert \mathcal{X} \vert^{2} }{\exp(-\hat{\rho} \vert \mathcal{X} \vert)} \exp( \hat{\rho} \vert \mathcal{X} \vert )=:\hat{K}, \end{align*} where $\hat{K}>0$ depends on $\hat \rho$ and $\vert \mathcal{X} \vert$. As this uniformly bounds every term in the trace, $K := d \cdot\hat{K}$ is the desired constant. \end{proof} The key feature of the constant $K$ is that it does not depend on the choice of measure $\nu$. Hence we can apply this bound uniformly over a family of measures, the most pertinent example being $\left\{ \mu_{n}^{(\omega)} \right\}$. We remark that, as can be observed numerically, our upper bound on $K$ is a vast overestimate in practical examples. Finally, we require a simple technical lemma, whose proof is omitted for brevity. \begin{lemma} \label{lemma:excessDistanceBound} Let $A,B\subset \R^{d}$ be nonempty and let $B$ be closed and convex. Then for $\overline{a} \in A$ and $\overline{b} = \text{proj}_{B}(\overline{a})$ we have \begin{equation*} \Vert \overline{a} - \overline{b} \Vert \leq \exc(A;B). \end{equation*} \end{lemma} \noindent We have now developed all the necessary tools to state and prove the main result for the case of $g = \frac{1}{2} \Vert (\cdot) -b \Vert$. \begin{theorem} \label{thm:epsdeltaprimalbound_full} Let $\rho_{0}$ be given by \cref{eqn:rho_0_defn}, and suppose $\mathrm{rank}(C)=d$.
Then for all $\mu, \nu \in \mathcal{P}(\mathcal{X})$, all $\rho > \rho_{0}$ and all $\varepsilon \in [0, \rho -\rho_{0}]$, we have the following: if $\overline{z}_{\nu,\varepsilon}$ is an $\varepsilon$-minimizer of $\phi_{\nu}$ as defined in \cref{eq:Dual_2norm}, then \begin{equation*} \overline{x}_{\nu, \varepsilon} := \nabla L_{\nu}(C^{T}\overline{z}_{\nu,\varepsilon}) \end{equation*} satisfies the error bound \begin{align*} \left\Vert \overline{x}_{\nu,\varepsilon} -\overline{x}_{\mu} \right\Vert \leq \frac{1}{\alpha \sigma_{\min}(C)} D_{\rho}(\nu,\mu) + \frac{2\sqrt{2}}{ \sqrt{\alpha} \sigma_{\min}(C)} \sqrt{ D_{\rho}(\nu,\mu) } + \left( K \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{ \sqrt{\alpha} \sigma_{\min}(C)} \right) \sqrt{\varepsilon}, \end{align*} where $\overline{x}_{\mu}$ is the unique solution to the MEM primal problem $(P)$ for $\mu$ and $K>0$ is a constant which does not depend on $\mu, \nu$. \end{theorem} \begin{proof} Let $\rho > \rho_{0}$, $\nu, \mu \in \mathcal{P}(\mathcal{X})$ and $\varepsilon \in [0,\rho - \rho_{0}]$. Let $\overline{z}_{\nu,\varepsilon}$ be an $\varepsilon$-minimizer of $\phi_{\nu}$, and denote the unique minimizers of $\phi_{\nu}$ and $\phi_{\mu}$ as $\overline{z}_{\nu}$ and $\overline{z}_{\mu}$, respectively.
Then \begin{align} \left\Vert \overline{x}_{\nu, \varepsilon} -\overline{x}_{\mu} \right\Vert &= \left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu,\varepsilon})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu}) \right\Vert \nonumber \\ &=\left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu,\varepsilon}) - \nabla L_{\nu}(C^{T}\overline{z}_{\nu}) + \nabla L_{\nu}(C^{T}\overline{z}_{\nu})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu}) \right\Vert, \nonumber \end{align} and so \begin{align} \left\Vert \overline{x}_{\nu, \varepsilon} -\overline{x}_{\mu} \right\Vert &\leq \left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu,\varepsilon}) - \nabla L_{\nu}(C^{T}\overline{z}_{\nu}) \right\Vert + \left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu}) \right\Vert \label{eq:term1+2}. \end{align} To estimate the first term on the right-hand side of \cref{eq:term1+2}, we require an auxiliary bound. Observe that, as $\phi_{\nu}$ is $\frac{1}{\alpha}$-strongly convex with $\nabla \phi_{\nu}(\overline{z}_{\nu}) =0$, we have \begin{align} \Vert \overline{z}_{\nu} - \overline{z}_{\nu, \varepsilon}\Vert \leq \sqrt{2\alpha } \vert \phi_{\nu}(\overline{z}_{\nu}) - \phi_{\nu}(\overline{z}_{\nu, \varepsilon}) \vert^{1/2} \leq \sqrt{2 \alpha \varepsilon}. \label{eqn:epsilondistApproxMin} \end{align} Here the first inequality uses \cref{eqn:alternate_strongconvexity}, while the second follows from the definitions of $\overline{z}_{\nu}$ and $\overline{z}_{\nu, \varepsilon}$, as $\vert \phi_{\nu}(\overline{z}_{\nu}) - \phi_{\nu}(\overline{z}_{\nu, \varepsilon}) \vert = \phi_{\nu}(\overline{z}_{\nu, \varepsilon}) - \phi_{\nu}(\overline{z}_{\nu}) \leq \varepsilon.$ From \cref{lemma:rho_0_conditions}(b), we find that $ \Vert \overline{z}_{\nu} \Vert \leq \rho_{0}$. Thus, \cref{eqn:epsilondistApproxMin} yields $\Vert \overline{z}_{\nu,\varepsilon} \Vert \leq \rho_{0} +\sqrt{2\alpha\varepsilon}$.
This implies $\Vert C^{T}\overline{z}_{\nu} \Vert, \Vert C^{T}\overline{z}_{\nu, \varepsilon} \Vert \leq \Vert C \Vert (\rho_{0} +\sqrt{2\alpha \varepsilon})$. Hence, \cref{cor:global_K_bound} with $\hat{\rho} =\Vert C \Vert (\rho_{0} +\sqrt{2\alpha \varepsilon})$ yields \begin{equation*} \left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu,\varepsilon}) - \nabla L_{\nu}(C^{T}\overline{z}_{\nu}) \right\Vert \leq K \Vert C^{T} \overline{z}_{\nu} - C^{T}\overline{z}_{\nu,\varepsilon} \Vert, \end{equation*} where $K$ depends on $\hat{\rho}, \vert \mathcal{X} \vert, d$ and therefore on $\vert \mathcal{X} \vert, \Vert C \Vert,b, \varepsilon, \alpha,d$. The right-hand side in the last inequality can be further estimated with \cref{eqn:epsilondistApproxMin} to find \begin{equation} \Vert C^{T} \overline{z}_{\nu} - C^{T}\overline{z}_{\nu,\varepsilon} \Vert \leq \Vert C \Vert \Vert \overline{z}_{\nu} - \overline{z}_{\nu,\varepsilon} \Vert \leq \Vert C \Vert \sqrt{2\alpha \varepsilon}. \label{eq:final_left} \end{equation} We now turn to the second term on the right-hand side of \cref{eq:term1+2}. First order optimality conditions give \begin{align*} 0 = -\frac{\overline{z}_{\nu}}{\alpha} + b +C \nabla L_{\nu}(C^{T}\overline{z}_{\nu}) , \qquad 0 = -\frac{\overline{z}_{\mu}}{\alpha} + b + C \nabla L_{\mu}(C^{T}\overline{z}_{\mu}), \end{align*} and therefore $\left\Vert C( \nabla L_{\nu}(C^{T}\overline{z}_{\nu})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu})) \right\Vert = \frac{1}{\alpha} \Vert \overline{z}_{\nu} - \overline{z}_{\mu} \Vert$. Furthermore, as $\text{rank}(C)=d$ we have $\sigma_{\min}(C)>0$. 
We also have, for any $x \in \R^{d}$, that $ \Vert Cx \Vert \geq \sigma_{\min}(C) \Vert x \Vert$, and hence \begin{equation*} \left\Vert \nabla L_{\nu}(C^{T}\overline{z}_{\nu})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu}) \right\Vert \leq \frac{1}{\sigma_{\min}(C)} \left\Vert C(\nabla L_{\nu}(C^{T}\overline{z}_{\nu})- \nabla L_{\mu}(C^{T}\overline{z}_{\mu})) \right\Vert = \frac{1}{\alpha \sigma_{\min}(C)} \Vert \overline{z}_{\nu} - \overline{z}_{\mu} \Vert. \end{equation*} \noindent In order to bound $\Vert \overline{z}_{\nu} - \overline{z}_{\mu} \Vert$ from above, we define $\delta := 2 (\varepsilon + 2 D_{\rho}(\nu,\mu) ).$ Denoting, as usual, by $S_{\delta}(\mu)$ the set of $\delta$-minimizers of $\phi_{\mu}$, which is closed and convex by the continuity and convexity of $\phi_{\mu}$, respectively, we define $y = \text{proj}_{S_{\delta}(\mu)}(\overline{z}_{\nu}).$ The triangle inequality gives \begin{equation} \Vert \overline{z}_{\nu} - \overline{z}_{\mu} \Vert \leq \Vert \overline{z}_{\nu} - y \Vert + \Vert y - \overline{z}_{\mu} \Vert. \label{eqn:projectionSplit} \end{equation} By the choice of $\rho > \rho_{0}$, we have by \cref{lemma:rho_0_conditions}(b) that $\overline{z}_{\nu}\in S_{\varepsilon}(\nu) \cap B_{\rho}$. Therefore, applying \cref{lemma:excessDistanceBound} with $A = S_{\varepsilon}(\nu) \cap B_{\rho}$, $B = S_{\delta}(\mu)$, we can bound the first term on the right-hand side of \cref{eqn:projectionSplit} as \begin{equation} \Vert \overline{z}_{\nu} - y \Vert \leq \text{exc}(S_{\varepsilon}(\nu) \cap B_{\rho} ; S_{\delta}(\mu)). \label{eq:finalright_1} \end{equation} For the remaining term on the right-hand side of \cref{eqn:projectionSplit}, we use the characterization \cref{eqn:alternate_strongconvexity} of $\frac{1}{\alpha}$-strong convexity in the differentiable case for $\phi_{\mu}$, noting $\nabla \phi_{\mu}(\overline{z}_{\mu}) =0$.
Hence \begin{equation} \Vert y - \overline{z}_{\mu} \Vert \leq \sqrt{2\alpha} \vert \phi_{\mu}(y) - \phi_{\mu}(\overline{z}_{\mu}) \vert^{1/2} \leq \sqrt{2\alpha \delta}, \label{eq:finalright_2} \end{equation} where the second inequality uses $y \in S_{\delta}(\mu)$. Combining \cref{eq:final_left,eqn:projectionSplit,eq:finalright_1,eq:finalright_2} with \cref{eq:term1+2}, we find that \begin{equation*} \left\Vert \overline{x}_{\nu, \varepsilon} -\overline{x}_{\mu} \right\Vert \leq K \Vert C \Vert \sqrt{2 \alpha \varepsilon} + \frac{1}{\alpha \sigma_{\min}(C)} \text{exc}(S_{\varepsilon}(\nu) \cap B_{\rho}; S_{\delta}(\mu)) + \frac{1}{\alpha \sigma_{\min}(C)} \sqrt{2\alpha \delta}. \end{equation*} By the choice of $\delta = 2 (\varepsilon + 2 D_{\rho}(\nu,\mu) )$, \cref{cor:infdistRoyset} asserts $\text{exc}(S_{\varepsilon}(\nu) \cap B_{\rho}; S_{\delta}(\mu)) \leq D_{\rho}(\nu,\mu) .$ Therefore \begin{align*} \left\Vert \overline{x}_{\nu, \varepsilon} -\overline{x}_{\mu} \right\Vert &\leq \frac{1}{\alpha \sigma_{\min}(C)} \text{exc}(S_{\varepsilon}(\nu) \cap B_{\rho}; S_{\delta}(\mu)) + \frac{1}{\alpha \sigma_{\min}(C)} \sqrt{2\alpha \delta} + K \Vert C \Vert \sqrt{2 \alpha \varepsilon}\\ &\leq \frac{1}{\alpha \sigma_{\min}(C)} D_{\rho}(\nu,\mu) + \frac{1}{\sigma_{\min}(C)}\sqrt{\frac{ 4 \varepsilon}{\alpha}} + \frac{1}{\sigma_{\min}(C)}\sqrt{ \frac{ 8 D_{\rho}(\nu,\mu) }{\alpha}} + K \Vert C \Vert \sqrt{2 \alpha \varepsilon} \\ &= \frac{1}{\alpha \sigma_{\min}(C)} D_{\rho}(\nu,\mu) + \frac{2 \sqrt{2}}{ \sqrt{\alpha} \sigma_{\min}(C)} \sqrt{ D_{\rho}(\nu,\mu) } + \left( K \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{ \sqrt{\alpha} \sigma_{\min}(C)} \right) \sqrt{\varepsilon}, \end{align*} where in the second line we have used the definition of $\delta$ and the subadditivity of the square root, $\sqrt{x+y} \leq \sqrt{x} + \sqrt{y}$. \end{proof} \noindent Note that we may set $\varepsilon =0$ to obtain a corollary on exact minimizers.
However, the error bound still has the same scaling in terms of $D_{\rho}(\nu,\mu)$. \section[A statistical dependence on n]{A statistical dependence on $\bm{n}$} \label{sec:rates_n_empirical} This section is devoted to making the dependence on $n$ explicit in \cref{thm:epsdeltaprimalbound_full} for the special case $\nu = \mu_{n}^{(\omega)}$. We briefly recall the empirical setting developed in \Cref{sec:approximation}. Given i.i.d. random vectors $\{ X_{1}, X_{2}, \ldots , X_{n}, \ldots\} $ on $(\Omega,\mathcal{F}, \Prob)$ with shared law $\mu = \Prob X_{1}^{-1}$, we define $\mu_{n}^{(\omega)} = \frac{1}{n}\sum_{i=1}^{n} \delta_{X_{i}(\omega)}$. For this measure, the dual objective reads \begin{align*} \phi_{\mu_{n}^{(\omega)}}(z) &= \frac{1}{2\alpha} \Vert z \Vert^{2} - \langle b, z \rangle + \log \frac{1}{n} \sum_{i=1}^{n} e^{\langle C^{T}z, X_{i}(\omega)\rangle}. \end{align*} Given $\overline{z}_{n, \varepsilon}(\omega)$, an $\varepsilon$-minimizer of $\phi_{\mu_{n}^{(\omega)}}$, define \begin{equation*} \overline{x}_{n,\varepsilon}(\omega): = \nabla_{z} \left[ \log \frac{1}{n} \sum_{i=1}^{n} e^{\langle C^{T}z, X_{i}(\omega) \rangle } \right]_{z =\overline{z}_{n, \varepsilon}(\omega)}. \end{equation*} \noindent We begin with a simplifying lemma, recalling the notation developed in \Cref{sec:rates} of the moment generating function $M_{\mu}$ of $\mu$ and the empirical moment generating function $M_{n}(\cdot, \omega)$ of $\mu_{n}^{(\omega)}$. \begin{lemma} \label{lemma:LogmgfboundedbyMGF} Let $\rho > 0$, $n \in \mathbb{N}$ and $\omega \in \Omega$ and set $K:= \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right)$. Then \begin{equation*} D_{\rho}(\mu,\mu_{n}^{(\omega)}) \leq K \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert.
\end{equation*} \end{lemma} \begin{proof} Applying \cref{lemma:MGF_bounded} to the particular probability measures $\mu$ and $\mu_{n}^{(\omega)}$ gives \begin{equation} M_{\mu}(C^{T}z), M_{n}(C^{T}z,\omega) \in \left[\exp \left( -\rho \Vert C \Vert \vert \mathcal{X} \vert \right), \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right) \right] =: [c_{1},c_{2}] \label{eqn:upper_lowerbound_MGF} \end{equation} where $0< c_{1}< c_{2}.$ Furthermore, for any $s,t \in [c_{1},c_{2}]$ we have $\vert \log(s) - \log(t) \vert \leq \frac{1}{c_{1}} \vert s - t \vert$, and hence \[ D_{\rho}(\mu,\mu_{n}^{(\omega)}) = \max_{z \in B_{\rho}} \vert L_{\mu_{n}^{(\omega)}}(C^{T}z) - L_{\mu}(C^{T}z) \vert \leq \exp \left( \rho \Vert C \Vert \vert \mathcal{X} \vert \right) \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert. \] \end{proof} \begin{lemma} \label{lemma:primal_bound_n_global_constants} Let $\rho_{0}$ be as defined in \cref{eqn:rho_0_defn} and suppose $\mathrm{rank}(C)=d$. Then, for all $\rho > \rho_{0}$, all $\varepsilon \in [0, \rho-\rho_{0}]$, and all $n \in \mathbb{N}$ we have: if $\overline{z}_{n,\varepsilon}(\omega) \in S_{\varepsilon}(\mu_{n}^{(\omega)})$, then $\overline{x}_{n, \varepsilon}(\omega) = \nabla L_{\mu_{n}^{(\omega)}}(C^{T}\overline{z}_{n,\varepsilon})$ satisfies \begin{align*} \left\Vert \overline{x}_{n,\varepsilon}(\omega) -\overline{x}_{\mu} \right\Vert &\leq \frac{K_{1}}{\alpha \sigma_{\min}(C)} \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert \\ &+ \frac{2 \sqrt{2 K_{1}}}{\sqrt{\alpha} \sigma_{\min}(C) } \sqrt{ \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert } + \left( K_{2} \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{\sqrt{\alpha}\sigma_{\min}(C)} \right) \sqrt{\varepsilon} \end{align*} where $K_{1}$ is a constant which depends on $\rho, \vert \mathcal{X} \vert, \Vert C \Vert$, and $K_{2}$ on $\vert \mathcal{X} \vert, \Vert C \Vert,b, \varepsilon, d, \alpha$.
\end{lemma} \begin{proof} As $\mu_{n}^{(\omega)} \in \mathcal{P}(\mathcal{X})$, \cref{thm:epsdeltaprimalbound_full} yields the following: for all $n$, for $\rho_{0}$ as defined in \cref{eqn:rho_0_defn}, for all $\rho> \rho_{0}$, and all $\varepsilon \in [0, \rho-\rho_{0}]$, if $\overline{z}_{n,\varepsilon}(\omega) \in S_{\varepsilon}(\mu_{n}^{(\omega)})$, then $\overline{x}_{n, \varepsilon}(\omega) = \nabla L_{\mu_{n}^{(\omega)}}(C^{T}\overline{z}_{n,\varepsilon}(\omega))$ satisfies \begin{align*} \left\Vert \overline{x}_{n,\varepsilon}(\omega) -\overline{x}_{\mu} \right\Vert &\leq \frac{1}{\alpha \sigma_{\min}(C)} D_{\rho}(\mu,\mu_{n}^{(\omega)}) + \frac{2 \sqrt{2}}{\sqrt{\alpha} \sigma_{\min}(C)} \sqrt{ D_{\rho}(\mu,\mu_{n}^{(\omega)}) } \\ &+ \left( K_{2} \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{\sqrt{\alpha} \sigma_{\min}(C)} \right) \sqrt{\varepsilon}, \end{align*} where we stress that the constant $K_{2}$ depends on $\vert \mathcal{X} \vert, \Vert C \Vert,b, \varepsilon, \alpha,d$, but does not depend on $n$. Applying \cref{lemma:LogmgfboundedbyMGF} to bound $D_{\rho}(\mu,\mu_{n}^{(\omega)}) \leq K_{1} \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert$ gives the result. \end{proof} \noindent In order to construct a final bound which depends explicitly on $n$, it remains to estimate the term $\max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert$. This fits into the language of empirical process theory, where this type of convergence is well studied. The main reference of interest here is van der Vaart \cite{van1996new}. For compact $\mathcal{X} \subset \R^{d}$, let $f: \mathcal{X} \to \R$ be a function and $\beta = (\beta_{1}, \ldots, \beta_{d})$ be a multi-index, i.e., a vector of $d$ nonnegative integers.
We call $\vert \beta \vert = \sum_{i} \beta_{i}$ the order of $\beta$, and define the differential operator $ D^{\beta} = \frac{\partial^{\beta_{1}}}{\partial x_{1}^{\beta_{1}}} \cdots \frac{\partial^{\beta_{d}}}{\partial x_{d}^{\beta_{d}}}.$ For integer $k$, we denote by $\mathcal C^{k}(\mathcal X)$ the space of $k$-smooth (also known as $k$-H\"older continuous) functions on $\mathcal{X}$, namely, those $f$ which satisfy \cite[p.~2131]{van1996new} \[ \|f\|_{\mathcal C^{k}(\mathcal X)} := \max_{|\beta|\leq k}\sup_{x\in\text{int}(\mathcal X)}\|D^{\beta}f(x)\|+\max_{|\beta|=k}\sup_{\substack{x,y\in \text{int} (\mathcal X)\\x\neq y}}\left|\frac{D^{\beta}f(x)-D^{\beta}f(y)}{\|x-y\|}\right|<\infty. \] \noindent Moreover, let $\mathcal C^{k}_R(\mathcal X)$ denote the ball of radius $R$ in $\mathcal C^{k}(\mathcal X)$. With this notation developed, we can state a classical result of van der Vaart \cite{van1996new}. In the notation therein, we apply the machinery of Sections 1 and 2 to the measure space $(\mathcal{X}_{1}, \mathcal{A}_{1}) = (\mathcal{X}, \mathcal{B}_{\mathcal{X}})$, equipped with the probability measure $\mu$. Taking $\mathbb{G}_{n} = \sqrt{n}(\mu_{n}^{(\omega)} - \mu)$ and $\mathcal{F}_{1} = \mathcal{F} = \mathcal C^{k}_{R}( \mathcal{X})$, this induces the norm $\Vert \mathbb{G}_{n} \Vert_{\mathcal{F}} = \sup_{f \in \mathcal C^{k}_R(\mathcal X)} \left\vert \int_\mathcal{X} f \,d\mathbb{G}_{n} \right\vert$, and hence the results of \cite[p.~2131]{van1996new} give \begin{theorem} \label{thm:donsker} Let $\mu \in \mathcal{P}(\mathcal{X})$. If $k > d/2$, then for any $R>0$, \begin{equation*} \E_{\Prob}^{*} \left[ \sup_{f \in \mathcal C^{k}_{R}(\mathcal X) } \sqrt{n} \left\vert \ \int_{\mathcal{X}} fd\mu_{n}^{(\cdot)} - \int_{\mathcal{X}}fd\mu \right\vert\right] \leq D \end{equation*} where $D$ is a constant depending (polynomially) on $k,d,\vert \mathcal{X} \vert$, and $\E_{\Prob}^{*}$ is the outer expectation, used to avoid concerns of measurability (see e.g.
\cite[Section 1.2]{van1996weak}). \end{theorem} We remark that the outer expectation is also known as the outer integral \cite[Chapter 14.F]{rockafellar2009variational}, and coincides with the usual expectation for measurable functions, which are the only functions of interest in this work. A self-contained proof of the above result is non-trivial, requiring the development of entropy and bracketing numbers of function spaces, which is beyond the scope of this article.\footnote{Bounds of this ``Donsker'' type have previously been applied to empirical approximations of stochastic optimization problems, to derive large deviation-style results for specific problems, see \cite[Section 5.5]{romisch2007stability} for a detailed exposition and discussion, in particular \cite[Theorem 5.2]{romisch2007stability}. In principle this machinery could be used here to derive similar large deviation results. } We simply take this result as given. However, we show the following corollary. \begin{corollary} \label{cor:BoundedMGF} Let $\rho > 0$. Then for all $n \in \mathbb{N}$, we have \begin{equation*} \E_{\Prob}\left[ \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\cdot) \right\vert \right] \leq \frac{D}{\sqrt{n}}, \end{equation*} where $D$ is a constant depending on $d,\vert \mathcal{X} \vert$. \end{corollary} \begin{proof} Observe that for each $z$, the function $f_{z}(x) = \exp \langle C^{T}z, x \rangle$ is infinitely differentiable on the compact set $\mathcal{X}$, and thus has bounded derivatives of all orders; in particular, of order $k=d > d/2$. Hence, for $\rho > 0$, the set of functions $f_{z}$ parameterized by $z\in B_{\rho}$ satisfies \[ \left\{ f_{z}(x) = \exp\langle C^{\top}z,x\rangle:z\in B_{\rho} \right\}\subset \mathcal C^{d}_{R_{d}}(\mathcal X), \] where $R_{d}$ is a constant which depends on $d,\rho,\|C\|$, and $\vert \mathcal{X} \vert$.
Furthermore, since $z \mapsto \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\cdot) \right\vert$ is continuous and $\mathbb{Q}^{m} \cap B_{\rho}$ is a countable dense subset of $B_{\rho}$, the quantity $\max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\cdot) \right\vert = \sup_{z \in \mathbb{Q}^{m} \cap B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\cdot) \right\vert$ is a supremum of countably many $\Prob$-measurable functions and is hence $\Prob$-measurable. In particular, the usual expectation agrees with the outer expectation. Hence, applying \cref{thm:donsker} we may assert \begin{align*} \E_{\Prob}\left[ \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\cdot) \right\vert \right] &= \E_{\Prob}^{*} \left[ \sup_{z \in B_{\rho}} \left\vert \ \int_{\mathcal{X}} f_{z}d\mu_{n}^{(\cdot)} - \int_{\mathcal{X}}f_{z}\,d\mu \right\vert\right] \\ &\leq \E_{\Prob}^{*} \left[ \sup_{f \in \mathcal C^{d}_{R_{d}}(\mathcal X) } \left\vert \ \int_{\mathcal{X}} fd\mu_{n}^{(\cdot)} - \int_{\mathcal{X}}f\,d\mu \right\vert\right] \\ &\leq \frac{D}{\sqrt{n}}, \end{align*} for a constant $D$ which depends on $d, \vert \mathcal{X} \vert$. We remark that the choice of $k=d$ above was aesthetic, so that the constant $D$ depends only on $d, \vert \mathcal{X} \vert.$ \end{proof} The final result now follows as a simple consequence of \cref{cor:BoundedMGF} and \cref{lemma:primal_bound_n_global_constants}: \begin{theorem} \label{thm:final_rate_n} Suppose $\mathrm{rank}(C)=d$.
For all $n \in \mathbb{N}$, all $\varepsilon \in [0, \rho_{0}]$, and all $\overline{z}_{n,\varepsilon}(\omega) \in S_{\varepsilon}(\mu_{n}^{(\omega)})$, the associated $\overline{x}_{n, \varepsilon}(\omega) = \nabla L_{\mu_{n}^{(\omega)}}(C^{T}\overline{z}_{n,\varepsilon})$ satisfies \begin{align*} \E_{\Prob} \left\Vert \overline{x}_{n,\varepsilon}(\cdot) -\overline{x}_{\mu} \right\Vert &\leq \frac{D K_{1}}{\alpha \sigma_{\min}(C)} \frac{1}{\sqrt{n}} + \frac{2D \sqrt{2 K_{1}}}{\sqrt{\alpha} \sigma_{\min}(C) } \sqrt{ \frac{1}{\sqrt{n}} } + \left( K_{2} \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{\sqrt{\alpha} \sigma_{\min}(C)} \right) \sqrt{\varepsilon} \\ &= O\left( \frac{1}{n^{1/4}} + \sqrt{\varepsilon} \right), \end{align*} where the leading constants $K_{1},K_{2},D$ depend on $\vert \mathcal{X} \vert, \Vert C \Vert,b, \varepsilon, \alpha,d$. \end{theorem} \begin{proof} Take $\rho = 2 \rho_{0}$ in \cref{lemma:primal_bound_n_global_constants}. This choice of $\rho$ gives, for all $n$ and all $\overline{z}_{n,\varepsilon}( \omega) \in S_{\varepsilon}(\mu_{n}^{(\omega)})$, the error bound \begin{align*} \left\Vert \overline{x}_{n,\varepsilon}(\omega) -\overline{x}_{\mu} \right\Vert& \leq \frac{K_{1}}{\alpha \sigma_{\min}(C)} \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert\\ & \hspace{-5mm}+ \frac{2 \sqrt{2} \sqrt{K_{1}}}{\sqrt{\alpha} \sigma_{\min}(C) } \sqrt{ \max_{z \in B_{\rho}} \left\vert M_{\mu}(C^{T}z) - M_{n}(C^{T}z,\omega) \right\vert } + \left( K_{2} \Vert C \Vert \sqrt{2 \alpha } +\frac{2}{\sqrt{\alpha} \sigma_{\min}(C)} \right) \sqrt{\varepsilon}, \end{align*} with constants $K_{1}, K_{2}$ that depend only on $\vert \mathcal{X} \vert, \Vert C \Vert,b, \varepsilon, \alpha,d$.
As $\omega \mapsto \overline{x}_{n, \varepsilon}(\omega)$ is the composition of the continuous function $\nabla L_{\mu_{n}^{(\omega)}}$ and the measurable function $C^{T}\overline{z}_{n,\varepsilon}(\omega)$ (see the discussion at the end of \Cref{sec:approximation}), the left-hand side is $\Prob$-measurable. Hence, taking the $\Prob$-expectation on both sides and applying \cref{cor:BoundedMGF} gives the result. \end{proof} \section{Numerical experiments} \label{sec:numerics} We now shift to a numerical examination of the convergence $\overline{x}_{n,\varepsilon} \to \overline{x}_{\mu}$. We focus entirely on the most recent setting of \Cref{sec:rates_n_empirical}, the MEM problem with an empirical prior $\mu_{n}^{(\omega)}$ and 2-norm fidelity. As discussed throughout, the empirical dual objective function $\phi_{\mu_{n}^{(\omega)}}$ is smooth and strongly convex, with easily computable derivatives\footnote{The derivatives are easily available analytically, but a na{\"i}ve implementation may run into stability issues. This can be easily addressed, see e.g. \cite{Kantas2015Particle}.}. Hence we may solve this problem with any off-the-shelf optimization package, and here we choose to use limited-memory BFGS \cite{byrd1994representations}. As a companion to this paper, we include a Jupyter notebook, \href{https://github.com/mattkingros/MEM-Denoising-and-Deblurring.git}{https://github.com/mattkingros/MEM-Denoising-and-Deblurring.git}, which can be used to reproduce and extend all computations performed hereafter. For our numerical experiments we will focus purely on denoising, namely the case $C=I$. We will use two datasets and two distributions of noise as proof of concept. For noise, we use additive Gaussian noise and ``salt-and-pepper'' corruption noise, where each pixel has an independent probability of being set to purely black or white.
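To make the dual solve concrete, the following is a minimal self-contained sketch of minimizing $\phi_{\mu_{n}^{(\omega)}}$ with limited-memory BFGS via SciPy; it is not the companion notebook's code, and the synthetic data, toy sizes, and variable names are our own. The log-sum-exp shift implements the stability fix mentioned in the footnote.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n, d = 200, 10                    # toy sizes; the experiments use n up to 60,000 and d = 784
X = rng.random((n, d))            # rows X_i, a stand-in for the sampled datapoints
C = np.eye(d)                     # denoising case C = I
alpha = 1.0
b = rng.random(d)                 # stand-in for the noisy observation

def phi(z):
    # empirical dual: ||z||^2/(2 alpha) - <b, z> + log (1/n) sum_i exp<C^T z, X_i>
    s = X @ (C.T @ z)
    return z @ z / (2 * alpha) - b @ z + logsumexp(s) - np.log(n)

def grad_phi(z):
    s = X @ (C.T @ z)
    w = np.exp(s - logsumexp(s))  # softmax weights; subtracting logsumexp avoids overflow
    return z / alpha - b + C @ (w @ X)

res = minimize(phi, np.zeros(d), jac=grad_phi, method="L-BFGS-B")
z_bar = res.x                     # (approximate) dual minimizer
s_bar = X @ (C.T @ z_bar)
w_bar = np.exp(s_bar - logsumexp(s_bar))
x_bar = w_bar @ X                 # primal recovery: a weighted mean of the datapoints
```

Since the recovered $x$ is a convex combination of the datapoints, it automatically lies in their convex hull, which is the mechanism behind the postprocessing discussed below.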
For datasets, the first is the MNIST digits dataset \cite{deng2012mnist}, which contains $60{,}000$ grayscale images of $28\times 28$ pixels depicting hand-drawn digits 0--9, and the second is the more expressive Fashion-MNIST dataset \cite{xiao2017fashion}, which again consists of $60{,}000$ grayscale $28\times 28$ images, but of various garments such as shoes, sneakers, bags, pants, and so on. In all experiments we \textit{always} include enough noise so that the nearest neighbour to $b$ in the dataset belongs to a different class than the ground truth, ensuring that recovery is non-trivial. We include this in all figures, captioned ``closest point''. Furthermore, we take the ground truth to be a new datapoint hand drawn by the authors; in particular, it is {\it not present} in either of the original MNIST or Fashion-MNIST datasets. We begin with a baseline examination of the error in $n$ for the MNIST digits dataset, for various choices of $b$. To generate $\mu_{n}^{(\omega)}$ practically, we sample $n$ datapoints uniformly at random without replacement. As a remark on this methodology, the target best possible approximation is the one which uses all available data, i.e., $\mu_{D} := \mu_{60,000}^{(\omega)}$. Hence, for error plots we compare to $\overline{x}_{\mu_{D}}$, as we do not have access to the full image distribution needed to construct $\overline{x}_{\mu}$. Given a noisy image $b$, the experimental setup is as follows: for 20 values of $n$, spaced linearly between $10{,}000$ and $60{,}000$, we perform $15$ random samples of size $n$. For each random sample, we compute an approximation $\overline{x}_{n, \varepsilon}$ and the relative approximation error $\frac{\Vert \overline{x}_{n, \varepsilon} - \overline{x}_{\mu_{D}} \Vert}{\Vert \overline{x}_{\mu_{D}} \Vert}$, which is then averaged over the trials. Superimposed is the theoretical upper bound $Kn^{-1/4}$ on the convergence rate, for a moderate constant $K$ which changes between figures.
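The statistical driver of the $n^{-1/4}$ rate is the sup-MGF gap appearing in \cref{cor:BoundedMGF}, whose $O(n^{-1/2})$ decay is cheap to probe directly. Below is a hedged Monte Carlo sketch mirroring the sampling-without-replacement methodology, with a synthetic full sample playing the role of $\mu_{D}$; all sizes, the finite $z$-grid in $B_{\rho}$, and the seed are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 20_000, 3                  # toy full-dataset size and dimension
X_full = rng.random((N, d))       # full sample, playing the role of mu_D
rho = 1.0
Z = rng.standard_normal((64, d))  # a fixed grid of dual points z, projected into B_rho
Z *= rho / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), rho)

M_full = np.exp(X_full @ Z.T).mean(axis=0)  # empirical MGF of the full sample (C = I)

def sup_mgf_gap(n):
    # subsample n points without replacement and take the max gap over the z-grid
    idx = rng.choice(N, size=n, replace=False)
    M_n = np.exp(X_full[idx] @ Z.T).mean(axis=0)
    return np.abs(M_full - M_n).max()

ns = [50, 400, 3200]
avg_gap = [np.mean([sup_mgf_gap(n) for _ in range(30)]) for n in ns]
# the averaged gaps should shrink roughly like 1/sqrt(n) as n grows
```

The grid over $B_{\rho}$ is a discretization of the maximum over the ball; in this low-dimensional toy setting it suffices to illustrate the decay.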
We also visually exhibit several reconstructions, the nearest neighbour, and postprocessed images for comparison. This methodology is used to create \cref{fig:Gaus_noise_8,fig:SP_noise_8,fig:Gaus_noise_shirt,fig:SP_noise_shirt,fig:gaus_noise_heel,fig:SP_noise_heel}. A remark on postprocessing: as alluded to in the introduction (see \cref{remark:measure_valued}), an advantage of the dual approach is the ability to reconstruct the optimal measure $Q_{n}$ which solves the measure-valued primal problem, as seen in \cite{rioux2020maximum}. In particular, as $Q_{n} \ll \mu_{n}^{(\omega)}$, the solution $\overline{x}_{n} = \E_{Q_{n}}[X]$ is a particular weighted linear combination of the input data. This allows for two types of natural postprocessing: at the level of the linear combination, i.e., setting all weights below some given threshold to zero, or at the level of the pixels, i.e., setting all pixels above $1-\gamma$ to $1$ and all pixels below $\gamma$ to $0$. Hence our figures also include the final measure $Q_{n}$ with all entries below $0.01$ set to zero, the corresponding linear combination of the remaining datapoints (bottom right image), and a further masking at the pixel level (top right image). With this methodology developed, we present the results. For the first figures, the ground truth is a hand-drawn $8$, which is approximated by sampling the MNIST digits dataset. For \cref{fig:Gaus_noise_8} we use additive Gaussian noise with standard deviation $\sigma = 0.10\Vert x\Vert $. For \cref{fig:SP_noise_8} we instead use salt-and-pepper corruptions with an equal probability of $0.2$ of any given pixel being set to $1$ or $0$.
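The two postprocessing steps above can be sketched as follows. The weights $Q$ below are a random stand-in for an actual optimal measure $Q_{n}$ (we do not recompute a dual solution here), the $0.01$ weight threshold matches the figures, and the mask parameter $\gamma = 0.2$ is our illustrative choice rather than a value from the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 784                       # toy stand-ins for the sample size and 28x28 pixel count
X = rng.random((n, d))                # datapoints as rows, pixel values in [0, 1]
Q = rng.dirichlet(0.05 * np.ones(n))  # sparse random stand-in for the optimal measure Q_n

# Postprocessing 1: zero all weights of Q_n below the 0.01 threshold and renormalize.
w = np.where(Q >= 0.01, Q, 0.0)
w = w / w.sum()
x_thresh = w @ X                      # linear combination of the surviving datapoints

# Postprocessing 2: mask at the pixel level with parameter gamma.
gamma = 0.2
x_mask = x_thresh.copy()
x_mask[x_thresh >= 1 - gamma] = 1.0   # pixels above 1 - gamma become pure white
x_mask[x_thresh <= gamma] = 0.0       # pixels below gamma become pure black
```

Since the thresholded reconstruction remains a convex combination of pixel values in $[0,1]$, the final mask only sharpens near-saturated pixels and cannot leave the valid intensity range.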
\begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_Gaus_8/ground_truth.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_Gaus_8/Observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_Gaus_8/nearest_neightbour_soln.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_Gaus_8/dual_cutoff_mask.png} \\ \includegraphics[width = 0.2\textwidth]{Updated_Numerics/MNIST_Gaus_8/n500.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/MNIST_Gaus_8/n20k.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/MNIST_Gaus_8/n60k.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_Gaus_8/Dual_cutoff_image.png} \end{tabular} \caption{Recovery of an 8 with additive Gaussian noise} \label{fig:Gaus_noise_8} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/MNIST_Gaus_8/error_n.png} & \includegraphics[width = 0.4\textwidth]{Updated_Numerics/MNIST_Gaus_8/dual_measure.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:Gaus_noise_8} } \label{fig:Gaus_noise_8_rates} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_SP_8/ground_truth.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_SP_8/observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_SP_8/Nearest_Neighbour_solution.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/MNIST_SP_8/masked_cutoff.png} \\ \includegraphics[width = 0.20\textwidth]{Updated_Numerics/MNIST_SP_8/n500.png} & \includegraphics[width = 0.20\textwidth]{Updated_Numerics/MNIST_SP_8/n20k.png} & \includegraphics[width = 0.20\textwidth]{Updated_Numerics/MNIST_SP_8/n60k.png} & \includegraphics[width = 
0.17\textwidth]{Updated_Numerics/MNIST_SP_8/Dual_cutoff_image.png} \end{tabular} \caption{Recovery of an 8 with salt-and-pepper corruption noise} \label{fig:SP_noise_8} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/MNIST_SP_8/error_n.png} & \includegraphics[width = 0.4\textwidth]{Updated_Numerics/MNIST_SP_8/Dual_cutoff.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:SP_noise_8} } \label{fig:sp_noise_8_rates} \end{figure} Examining \cref{fig:Gaus_noise_8,fig:SP_noise_8}, we can make several observations which persist across all numerics. As seen in \cref{fig:Gaus_noise_8_rates,fig:sp_noise_8_rates}, the theoretical rate $n^{-1/4}$ agrees with practical results, and appears to be tight for moderate $n$, on the order of $n < 30{,}000$. While the leading constants given by theory are quite large, growing polynomially with $d$ and $\rho$, in practice they are much more moderate, here $O(1)$. We also note that the linear combination forming the final solution is quite compressible: here only 157 datapoints have dual measure above $0.01$, and after thresholding the result is visually indistinguishable from the full solution. Furthermore, the \textit{visual} convergence is fast, in the sense that there is no appreciable visual difference between the solutions given 20,000 and 60,000 datapoints. As they are formed as linear combinations of observed data, all solutions suffer from artefacting in the form of blurred edges, which can be remedied by a final mask at the pixel level. Before shifting away from the MNIST dataset, we also include a cautionary experiment in which, for small random samples, the method is visually ``confidently incorrect'', as seen in \Cref{fig:vis_switches}. In the previous examples \cref{fig:SP_noise_8,fig:Gaus_noise_8}, for small $n$ we simply obtain blurry images.
This type of failure is generally representative of how the method performs when $n$ is too small. However, there is another failure case which distinctly highlights the risk of taking $n$ too small: a biased sample which does not approximate $\mu$ well, leading to correspondingly biased solutions. In \cref{fig:vis_switches} we see that the recovered image after masking is clearly a $3$ for one random sample of size $n=1000$, but a $5$ for other random samples of sizes $n=800, 2000$. \begin{figure}[h] \begin{tabular}{ccccc} \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/ground_truth.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/nearest_neighbour.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n800.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n1k.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n2k.png} \\ & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/observed_b.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n800_masked.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n1k_masked.png} & \includegraphics[width=0.15\textwidth]{Updated_Numerics/MNIST_5_Small_n/n2k_masked.png} \end{tabular} \caption{A specific type of failure case for small $n$.} \label{fig:vis_switches} \end{figure} We now move to the more expressive Fashion-MNIST dataset. We once again use a hand-drawn target which is not originally found in the dataset. Compared to handwritten digits, the fashion dataset is immediately more challenging, especially with regard to fine details. To illustrate, there are few examples of garments with spots, stripes, or other detailed patterns, and such features are therefore much more difficult to learn from uniform random samples.
Similarly, if the target ground truth image contains details or patterns which are not present in the fashion dataset, there is little hope of constructing a reasonable linear combination to approximate the ground truth. On the other hand, some classes, such as heels or sandals, are extremely easy to learn: they have many near-identical examples in the dataset and are visually quite distinct from other classes. Apart from the change in dataset, our methodology remains the same as in \cref{fig:Gaus_noise_8}. \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/ground_truth.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/nearest_neighbour.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/cutoff_mask.png} \\ \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/n500.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/n20k.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/n60k.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/cutoff_img.png} \end{tabular} \caption{Recovery of a hand drawn shirt with additive Gaussian noise} \label{fig:Gaus_noise_shirt} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/error_n.png} & \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_Shirt_Gaus/dual_coeff.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:Gaus_noise_shirt}} \label{fig:Gaus_noise_shirt_rates} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_SP/ground_truth.png} &
\includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_SP/observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_SP/nearest_neighbour.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_SP/cutoff_mask.png}\\ \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_SP/n500.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_SP/n20k.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_Shirt_SP/n60k.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_Shirt_SP/dual_cutoff_img.png} \end{tabular} \caption{Recovery of a hand drawn shirt with salt-and-pepper corruption noise.} \label{fig:SP_noise_shirt} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_Shirt_SP/error_n.png} & \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_Shirt_SP/dual_measure.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:SP_noise_shirt}} \label{fig:SP_noise_shirt_rates} \end{figure} For the experiment with salt-and-pepper noise \cref{fig:SP_noise_shirt}, while MEM denoising clearly recovers a shirt, many of the finer details are lost or washed out. In contrast, with Gaussian noise \cref{fig:Gaus_noise_shirt}, some remnant of the shirt's pattern is visible in the final reconstruction. In both cases the nearest neighbor is a bag or purse, and is not visually close to the ground truth. While the leading constant is larger, once again we are firmly below the theoretically expected convergence rate of $n^{1/4}$, and solutions are compressible with respect to the optimal measure $Q_{n}$.
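The compressibility just noted is easy to mimic in a few lines. The following Python sketch is purely illustrative: the arrays {\tt X} and {\tt q} are random stand-ins for the observed datapoints and the optimal dual measure $Q_n$ (not our data or solver output), and the relative threshold is a hypothetical choice standing in for the $0.01$ cutoff used above.

```python
import numpy as np

# Illustrative sketch: the reconstruction is a weighted combination of
# observed datapoints, with weights given by a dual measure; thresholding
# small weights compresses the solution with little change to the output.
rng = np.random.default_rng(0)
X = rng.random((1000, 28 * 28))      # stand-in for n observed images
q = rng.dirichlet(np.ones(1000))     # stand-in for the dual measure Q_n

full = q @ X                         # full reconstruction

keep = q > 0.01 * q.max()            # drop datapoints with negligible weight
q_thr = np.where(keep, q, 0.0)
q_thr /= q_thr.sum()                 # renormalize the retained weights
compressed = q_thr @ X               # compressed reconstruction

print(keep.sum(), "of", len(q), "datapoints retained")
```

Thresholding and renormalizing in this way keeps only the datapoints carrying non-negligible dual mass, analogous to the 157 retained datapoints in the MNIST experiment above.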
Finally, we conclude with an experiment on MNIST fashion with a hand drawn target of a heel, where we once again observe recovery well within the expected convergence rate and, after postprocessing, a visually reasonable approximation to the ground truth. \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_heel_gaus/ground_truth.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_heel_gaus/observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_heel_gaus/nearest_neighbour.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_heel_gaus/masked_cutoff.png} \\ \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_heel_gaus/n500.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_heel_gaus/n20k.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/Fashion_heel_gaus/n60k.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/Fashion_heel_gaus/cutoff_image.png} \end{tabular} \caption{Recovery of a hand drawn heel with additive Gaussian noise.} \label{fig:gaus_noise_heel} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_heel_gaus/error_n.png}& \includegraphics[width = 0.4\textwidth]{Updated_Numerics/Fashion_heel_gaus/dual_measure.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:gaus_noise_heel}} \label{fig:gaus_noise_heel_rates} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width = 0.17\textwidth]{Updated_Numerics/fashion_heel_SP/ground_truth.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/fashion_heel_SP/observed_b.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/fashion_heel_SP/nearest_neighbour.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/fashion_heel_SP/masked_cutoff.png} \\
\includegraphics[width = 0.2\textwidth]{Updated_Numerics/fashion_heel_SP/n500.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/fashion_heel_SP/n20k.png} & \includegraphics[width = 0.2\textwidth]{Updated_Numerics/fashion_heel_SP/n60k.png} & \includegraphics[width = 0.17\textwidth]{Updated_Numerics/fashion_heel_SP/cutoff_img.png} \end{tabular} \caption{Recovery of a hand drawn heel with salt-and-pepper corruption noise.} \label{fig:SP_noise_heel} \end{figure} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width = 0.4\textwidth]{Updated_Numerics/fashion_heel_SP/output_rate_heel.png}& \includegraphics[width = 0.4\textwidth]{Updated_Numerics/fashion_heel_SP/dual_coeff.png} \end{tabular} \caption{Rates and thresholding of optimal measure $Q_{n}$, for \cref{fig:SP_noise_heel}} \label{fig:SP_noise_heel_rates} \end{figure} \clearpage \bibliographystyle{siam} \bibliography{references} \end{document}
% arXiv:2412.17930v2, http://arxiv.org/abs/2412.17930v2: Runs in Paperfolding Sequences
\documentclass[12pt,reqno]{article} \usepackage[usenames]{color} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{amscd} \usepackage{graphicx} \usepackage[colorlinks=true, linkcolor=webgreen, filecolor=webbrown, citecolor=webgreen]{hyperref} \definecolor{webgreen}{rgb}{0,.5,0} \definecolor{webbrown}{rgb}{.6,0,0} \usepackage{color} \usepackage{fullpage} \usepackage{float} \usepackage{graphics} \usepackage{latexsym} \usepackage{epsf} \usepackage{breakurl} \setlength{\textwidth}{6.5in} \setlength{\oddsidemargin}{.1in} \setlength{\evensidemargin}{.1in} \setlength{\topmargin}{-.1in} \setlength{\textheight}{8.4in} \newcommand{\seqnum}[1]{\href{https://oeis.org/#1}{\rm \underline{#1}}} \begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \title{Runs in Paperfolding Sequences} \author{Jeffrey Shallit\footnote{Research supported by a grant from NSERC, 2024-03725.}\\ School of Computer Science\\ University of Waterloo\\ Waterloo, ON N2L 3G1 \\ Canada\\ \href{mailto:[email protected]}{\tt [email protected]}} \maketitle \begin{abstract} The paperfolding sequences form an uncountable class of infinite sequences over the alphabet $\{ -1, 1 \}$ that describe the sequence of folds arising from iterated folding of a piece of paper, followed by unfolding. In this note we observe that the sequence of run lengths in such a sequence, as well as the starting and ending positions of the $n$'th run, is $2$-synchronized and hence computable by a finite automaton. 
As a specific consequence, we obtain the recent results of Bunder, Bates, and Arnold, in much more generality, via a different approach. We also prove results about the critical exponent and subword complexity of these run-length sequences. \end{abstract} \section{Introduction} Paperfolding sequences are sequences over the alphabet $\{ -1, 1\}$ that arise from the iterated folding of a piece of paper, introducing a hill ($+1$) or valley ($-1$) at each fold. They are admirably discussed, for example, in \cite{Davis&Knuth:1970,Dekking&MendesFrance&vanderPoorten:1982}. The formal definition of a paperfolding sequence is based on a (finite or infinite) sequence of {\it unfolding instructions} $\bf f$. For finite sequences $\bf f$ we define \begin{align} P_\epsilon &= \epsilon \nonumber\\ P_{{\bf f} a} &= (P_{\bf f}) \ a \ ({-P_{{\bf f}}^R}) \label{fund} \end{align} for $a \in \{ -1, 1\}$ and ${\bf f} \in \{-1, 1\}^*$. Here $\epsilon$ denotes the empty sequence of length $0$, $-x$ changes the sign of each element of a sequence $x$, and $x^R$ reverses the order of symbols in a sequence $x$. An easy induction now shows that $|P_{\bf f}| = 2^{|{\bf f}|} - 1$, where $|x|$ means the length, or number of symbols, of a sequence $x$. Now let ${\bf f} = f_0 f_1 f_2 \cdots$ be an infinite sequence in $\{-1, 1\}^\omega$. It is easy to see that $P_{f_0 f_1 \cdots f_n}$ is a prefix of $P_{f_0 f_1 \cdots f_{n+1}}$ for all $n \geq 0$, so there is a unique infinite sequence of which all the $P_{f_0 f_1 \cdots f_n}$ are prefixes; we call this infinite sequence $P_{\bf f}$. As in the previous paragraph, we always index the unfolding instructions starting at $0$: ${\bf f} = f_0 f_1 f_2 \cdots$. Also by convention the paperfolding sequence itself is indexed starting at $1$: $P_{\bf f} = p_1 p_2 p_3 \cdots$. With these conventions we immediately see that $P_{\bf f} [2^n] = p_{2^n} = f_n$ for $n \geq 0$. 
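Eq.~\eqref{fund} is straightforward to implement directly. The following Python sketch (illustrative only, and no part of the formal development or the {\tt Walnut} verification below) builds $P_{\bf f}$ for a finite $\bf f$ and checks the two facts just noted:

```python
def fold(instructions):
    """Finite paperfolding sequence P_f, via P_{f a} = P_f, a, -(P_f reversed)."""
    p = []
    for a in instructions:
        p = p + [a] + [-x for x in reversed(p)]
    return p

f = [1, 1, 1, 1]                  # instructions for a prefix of the regular sequence
p = fold(f)
assert len(p) == 2 ** len(f) - 1  # |P_f| = 2^{|f|} - 1
for n, fn in enumerate(f):        # with 1-based indexing, P_f[2^n] = f_n
    assert p[2 ** n - 1] == fn
print(p)  # [1, 1, -1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1]
```

Since each $P_{{\bf f}a}$ has $P_{\bf f}$ as a prefix, running {\tt fold} on ever longer instruction lists produces ever longer prefixes of the corresponding infinite paperfolding sequence.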
Since there are two choices, $-1$ or $1$, for each of the countably many unfolding instructions, there are uncountably many infinite paperfolding sequences. As an example let us consider the most famous such sequence, the {\it regular paperfolding sequence}, where the sequence of unfolding instructions is $1^\omega = 111\cdots$. Here we have, for example, \begin{align*} P_1 &= 1 \\ P_{11} &= 1 \, 1 \, (-1) \\ P_{111} &= 1 \, 1 \, (-1) \, 1 \, 1 \, (-1) \, (-1) . \end{align*} The first few values of the limiting infinite paperfolding sequence $P_{1^\omega} [n]$ are given in Table~\ref{tab1}. \begin{table}[htb] \begin{center} \begin{tabular}{c|ccccccccccccccccc} $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & $\cdots$\\ \hline $P_{1^\omega} [n]$ & 1& 1&$-1$& 1& 1&$-1$&$-1$& 1& 1& 1&$-1$&$-1$& 1&$-1$&$-1$ & 1& $\cdots$ \end{tabular} \end{center} \caption{The regular paperfolding sequence.} \label{tab1} \end{table} The paperfolding sequences have a number of interesting properties that have been explored in many papers. In addition to the papers \cite{Davis&Knuth:1970,Dekking&MendesFrance&vanderPoorten:1982} already cited, the reader can also see Allouche \cite{Allouche:1992}, Allouche and Bousquet-M\'elou \cite{Allouche&Bousquet-Melou:1994a,Allouche&Bousquet-Melou:1994b}, and Go\v{c} et al.~\cite{Goc&Mousavi&Schaeffer&Shallit:2015}, to name just a few. Recently Bunder et al.~\cite{Bunder&Bates&Arnold:2024} explored the sequence of lengths of runs of the regular paperfolding sequence, and proved some theorems about them. Here by a ``run'' we mean a maximal block of consecutive identical values. Runs and run-length encodings are a long-studied feature of sequences; see, for example, \cite{Golomb:1966}. The run lengths $R_{1111}$ for the finite paperfolding sequence $P_{1111}$, as well as the starting positions $S_{1111}$ and ending positions $E_{1111}$ of the $n$'th run, are given in Table~\ref{tab2}.
\begin{table}[htb] \begin{center} \begin{tabular}{c|ccccccccccccccc} $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline $P_{1111} [n] $ & 1& 1&$-1$& 1& 1&$-1$&$-1$& 1& 1& 1&$-1$&$-1$& 1&$-1$&$-1$ \\ $R_{1111} [n] $ & 2&1&2&2&3&2&1&2& & & & & & & \\ $S_{1111} [n] $ & 1& 3& 4& 6& 8&11&13&14& & & & & & & \\ $E_{1111} [n] $ & 2& 3& 5& 7&10&12&13&15& & & & & & & \\ \end{tabular} \end{center} \caption{Run lengths of the regular paperfolding sequence.} \label{tab2} \end{table} As it turns out, however, {\it much\/} more general results, applicable to {\it all\/} paperfolding sequences, can be proven rather simply, in some cases making use of the {\tt Walnut} theorem-prover \cite{Mousavi:2016}. As shown in \cite{Shallit:2023}, to use {\tt Walnut} it suffices to state a claim in first-order logic, and then the prover can rigorously determine its truth or falsity. In order to use {\tt Walnut} to study the run-length sequences, these sequences must be computable by a finite automaton (``automatic''). Although the paperfolding sequences themselves have this property (as shown, for example, in \cite{Goc&Mousavi&Schaeffer&Shallit:2015}), there is no reason, a priori, to expect that the sequence of run lengths will also have the property. For example, the sequence of runs of the Thue-Morse sequence ${\bf t} = 0110100110010110\cdots$ is $12112221121\cdots$, fixed point of the morphism $1 \rightarrow 121$, $2 \rightarrow 12221$ \cite{Allouche&Arnold&Berstel&Brlek&Jockusch&Plouffe&Sagan:1995}, and is known to {\it not\/} be automatic \cite{Allouche&Allouche&Shallit:2006}. The starting and ending positions of the $n$'th run are integer sequences. In order to use {\tt Walnut} to study these, we would need these sequences to be {\it synchronized\/} (see \cite{Shallit:2021}); that is, there would need to be an automaton that reads the integers $n$ and $x$ in parallel and accepts if $x$ is the starting (resp., ending) position of the $n$'th run. 
But there is no reason, a priori, that the starting and ending positions of the $n$'th run of an arbitrary automatic sequence should be synchronized. Indeed, if this were the case, and the length of runs were bounded, then the sequence of run lengths would always be automatic, which as we have just seen is not the case for the Thue-Morse sequence. However, as we will see, there is a single finite automaton that can compute the run sequence $R_{\bf f}$ for {\it all\/} paperfolding sequences simultaneously, and the same thing applies to the sequences $S_{\bf f}$ and $E_{\bf f}$ of starting and ending positions respectively. In this paper we use these ideas to study the run-length sequences of paperfolding sequences, explore their critical exponent and subword complexity, and generalize the results of Bunder et al.~\cite{Bunder&Bates&Arnold:2024} on the continued fraction of a specific real number to uncountably many real numbers. \section{Automata for the starting and ending positions of runs} We start with a basic result with a simple induction proof. \begin{proposition} Let $\bf f$ be a finite sequence of unfolding instructions of length $n$. Then the corresponding run-length sequence $R_{\bf f}$, as well as $S_{\bf f}$ and $E_{\bf f}$, has length $2^{n-1}$. \end{proposition} \begin{proof} The result is clearly true for $n=1$. Now suppose ${\bf f}$ has length $n+1$ and write ${\bf f} = {\bf g} a$ for $a \in \{ -1,1 \}$. For the induction step, we use Eq.~\eqref{fund}. From it, we see that there are $2^{n-1}$ runs in $P_{\bf g}$ and in $-P_{\bf g}^R$. Since the last symbol of $P_{\bf g}$ is the negative of the first symbol of $-P_{\bf g}^R$, introducing $a$ between them extends the length of one run, and does not affect the other. Thus we do not introduce a new run, nor combine two existing runs into one. Hence the number of runs in $P_{\bf f}$ is $2^n$, as desired.
\end{proof} \begin{remark} Bunder et al.~\cite{Bunder&Bates&Arnold:2024} proved the same result for the specific case of the regular paperfolding sequence. \end{remark} Next, we find automata for the starting and ending positions of the runs. Let us start with the starting positions. The desired automaton $\tt sp$ takes three inputs in parallel. The first input is a finite sequence $\bf f$ of unfolding instructions, the second is the number $n$ written in base $2$, and the third is some number $x$, also expressed in base $2$. The automaton accepts if and only if $x = S_{\bf f} [n]$. Normally we think of the unfolding instructions as over the alphabet $\{ -1, 1 \}$, but it is useful to be more flexible and also allow $0$'s, but only at the end; these $0$'s are essentially disregarded. We need this because the parallel reading of inputs requires that all three inputs be of the same length. Thus, for example, the sequences $-1, 1, 1, 0$ and $-1, 1, 1$ are considered to specify the same paperfolding sequence, while $-1, 0, 1, 1$ is not considered a valid specification. Because we choose to let $f_0$ be the first symbol of the unfolding instructions, it is also useful to require that the inputs $n$ and $x$ mentioned above be represented with the {\it least-significant digit first}. In this representation, we allow an unlimited number of trailing zeros. Finally, although we assume that $S_{\bf f}$ is indexed starting at position $1$, it is useful to define $S_{\bf f}[0] = 0$ for all finite unfolding instruction sequences $\bf f$. To find the automaton computing the starting positions of runs, we use a guessing procedure described in \cite{Shallit:2023}, based on a variant of the Myhill-Nerode theorem. Once a candidate automaton is guessed, we can rigorously verify its correctness with {\tt Walnut}. We will need one {\tt Walnut} automaton already introduced in \cite{Shallit:2023}: {\tt FOLD}, and another one that we can define via a regular expression. 
\begin{itemize} \item {\tt FOLD} takes two inputs, $\bf f$ and $n$. If $n$ is in the range $1 \leq n < 2^{|{\bf f}|}$, then it returns the $n$'th term of the paperfolding sequence specified by $f$. \item {\tt lnk} takes two inputs, $f$ and $x$. It accepts if $f$ is the valid code of a paperfolding sequence (that is, no $0$'s except at the end) and $x$ is $2^t-1$, where $t$ is the length of $f$ (not counting $0$'s at the end). It can be created using the {\tt Walnut} command \begin{verbatim} reg lnk {-1,0,1} {0,1} "([-1,1]|[1,1])*[0,0]*": \end{verbatim} \end{itemize} Our guessed automaton {\tt sp} has $17$ states. We must now verify that it is correct. To do so we need to verify the following things: \begin{enumerate} \item The candidate automaton {\tt sp} computes a partial function. More precisely, for a given $\bf f$ and $n$, at most one input of the form $({\bf f},n,x)$ is accepted. \item {\tt sp} accepts $({\bf f},0,0)$. \item {\tt sp} accepts $({\bf f},1,1)$ provided $|{\bf f}| \geq 1$. \item There is an $x$ such that {\tt sp} accepts $({\bf f},2^{|{\bf f}|-1},x)$. \item {\tt sp} accepts no input of the form $({\bf f},n,x)$ if $n > 2^{|{\bf f}|-1}$. \item If {\tt sp} accepts $({\bf f},2^{|{\bf f}|-1},x)$ then the symbols $P_{\bf f}[t]$ for $x \leq t < 2^{|{\bf f}|}$ are all the same. \item Runs are nonempty: if {\tt sp} accepts $({\bf f},n-1,y)$ and $({\bf f},n,z)$ then $y<z$. \item And finally, we check that if ${\tt sp}$ accepts $({\bf f},n,x)$, then $x$ is truly the starting position of the $n$'th run. This means that all the symbols from the starting position of the $(n-1)$'th run to $x-1$ are the same, and different from $P_{\bf f}[x]$. \end{enumerate} We use the following {\tt Walnut} code to check each of these. 
A brief review of {\tt Walnut} syntax may be useful: \begin{itemize} \item {\tt ?lsd\_2} specifies that all numbers are represented with the least-significant digit first, and in base $2$; \item {\tt A} is the universal quantifier $\forall$ and {\tt E} is the existential quantifier $\exists$; \item {\tt \&} is logical {\tt AND}, {\tt |} is logical {\tt OR}, {\tt \char'127} is logical {\tt NOT}, {\tt =>} is logical implication, {\tt <=>} is logical IFF, and {\tt !=} is inequality; \item {\tt eval} expects a quoted string representing a first-order assertion with no free (unbound) variables, and returns {\tt TRUE} or {\tt FALSE}; \item {\tt def} expects a quoted string representing a first-order assertion $\varphi$ that may have free (unbound) variables, and computes an automaton accepting the representations of those tuples of variables that make $\varphi$ true, which can be used later. \end{itemize} \begin{verbatim} eval tmp1 "?lsd_2 Af,n ~Ex,y x!=y & $sp(f,n,x) & $sp(f,n,y)": # check that it is a partial function eval tmp2 "?lsd_2 Af,x $lnk(f,x) => $sp(f,0,0)": # check that 0th run is at position 0; the lnk makes sure that # the format of f is correct (doesn't have 0's in the middle of it.) 
eval tmp3 "?lsd_2 Af,x ($lnk(f,x) & x>=1) => $sp(f,1,1)": # check if code specifies nonempty string then first run is at position 1 eval tmp4 "?lsd_2 Af,n,z ($lnk(f,z) & z+1=2*n) => Ex $sp(f,n,x)": # check it accepts n = 2^{|f|-1} eval tmp5 "?lsd_2 Af,n,z ($lnk(f,z) & z+1<2*n) => ~Ex $sp(f,n,x)": # check that it accepts no n past 2^{|f|-1} eval tmp6 "?lsd_2 Af,n,z,x ($lnk(f,z) & 2*n=z+1 & $sp(f,n,x)) => At (t>=x & t<z) => FOLD[f][x]=FOLD[f][t]": # check last run is right and goes to the end of the finite # paperfolding sequence specified by f eval tmp7 "?lsd_2 Af,n,x,y,z ($lnk(f,z) & $sp(f,n-1,x) & $sp(f,n,y) & 1<=n & 2*n<=z+1) => x<y": # check that starting positions form an increasing sequence eval tmp8 "?lsd_2 Af,n,x,y,z,t ($lnk(f,z) & n>=2 & $sp(f,n-1,y) & $sp(f,n,x) & x<=z & y<=t & t<x) => FOLD[f][x]!=FOLD[f][t]": # check that starting position code is actually right \end{verbatim} {\tt Walnut} returns {\tt TRUE} for all of these, which gives us a proof by induction on $n$ that indeed $x_n = S_{\bf f}[n]$. From the automaton for starting positions of runs, we can obtain the automaton for ending positions of runs, {\tt ep}, using the following {\tt Walnut} code: \begin{verbatim} def ep "?lsd_2 Ex $lnk(f,x) & ((2*n<=x-1 & $sp(f,n+1,z+1)) | (2*n-1=x & z=x))": \end{verbatim} Thus we have proved the following result. \begin{theorem} There is a synchronized automaton of $17$ states {\tt sp} computing $S_{\bf f}[n]$ and one of $13$ states {\tt ep} computing $E_{\bf f}[n]$, for all paperfolding sequences simultaneously. \end{theorem} Using the automaton {\tt ep}, we are now able to prove the following new theorem. Roughly speaking, it says that the ending position of the $n$'th run for the unfolding instructions $\bf f$ is $2n - \epsilon_n$, where $\epsilon_n \in \{0, 1 \}$, and we can compute $\epsilon_n$ by looking at a sequence of unfolding instructions closely related to $\bf f$. 
\begin{theorem} Let $\bf f$ be a finite sequence of unfolding instructions, of length at least $2$. Define a new sequence $\bf g$ of unfolding instructions as follows: \begin{equation} {\bf g} := \begin{cases} 1 \ (-x), & \text{if ${\bf f} = 11x$;} \\ (-1) \ (-x), & \text{if ${\bf f} = 1 (-1) x$;} \\ (-1) \ x, & \text{if ${\bf f} = (-1) 1 x $; } \\ 1 \ x, & \text{if ${\bf f} = (-1) (-1) x$}. \end{cases} \label{eq1} \end{equation} Then \begin{equation} E_{\bf f}[n] + \epsilon_n = 2n \label{2n} \end{equation} for $1 \leq n < 2^{|{\bf f}|-1}$, where $$\epsilon_n = \begin{cases} 0, & \text{if $P_{\bf g}[n] = 1$;} \\ 1, & \text{if $P_{\bf g}[n]=-1$.} \end{cases} $$ Furthermore, if $\bf f$ is an infinite sequence of unfolding instructions, then Eq.~\eqref{2n} holds for all $n \geq 1$. \end{theorem} \begin{proof} We prove this using {\tt Walnut}. First, we need an automaton {\tt assoc} that takes two inputs $\bf f$ and $\bf g$ in parallel, and accepts if $\bf g$ is defined as in Eq.~\eqref{eq1}. This automaton is depicted in Figure~\ref{fig3}, and correctness is left to the reader. Now we use the following {\tt Walnut} code. \begin{verbatim} eval thm3 "?lsd_2 Af,g,y,n,t ($lnk(g,y) & $assoc(f,g) & y>=1 & n<=y & n>=1 & $ep(f,n,t)) => ((FOLD[g][n]=@-1 & t+1=2*n)|(FOLD[g][n]=@1 & t=2*n))": \end{verbatim} And {\tt Walnut} returns {\tt TRUE}. \begin{figure}[htb] \begin{center} \includegraphics[width=5.5in]{assoc.pdf} \end{center} \caption{The automaton {\tt assoc}.} \label{fig3} \end{figure} \end{proof} \section{Automaton for the sequence of run lengths} Next we turn to the sequence of run lengths itself. We can compute these from the automata for {\tt ep} and {\tt sp}. \begin{verbatim} def rl "?lsd_2 Ex,y $sp(f,n,x) & $ep(f,n,y) & z=1+(y-x)": \end{verbatim} \begin{proposition} For all finite and infinite sequences of paperfolding instructions, the only run lengths are $1,2,$ or $3$.
\label{prop4} \end{proposition} \begin{proof} It suffices to prove this for the finite paperfolding sequences. \begin{verbatim} def prop4 "?lsd_2 Af,n,x,z ($lnk(f,x) & 1<=n & 2*n<=x+1 & $rl(f,n,z)) => (z=1|z=2|z=3)": \end{verbatim} And {\tt Walnut} returns {\tt TRUE}. \end{proof} \begin{remark} Proposition~\ref{prop4} was proved by Bunder et al.~\cite{Bunder&Bates&Arnold:2024} for the specific case of the regular paperfolding sequence. \end{remark} We now use another feature of {\tt Walnut}, which is that we can turn a synchronized automaton computing a function of finite range into an automaton returning the value of the function. The following code \begin{verbatim} def rl1 "?lsd_2 $rl(f,n,1)": def rl2 "?lsd_2 $rl(f,n,2)": def rl3 "?lsd_2 $rl(f,n,3)": combine RL rl1=1 rl2=2 rl3=3: \end{verbatim} computes an automaton {\tt RL} that takes two inputs $\bf f$ and $n$ and returns the value of the run-length sequence at index $n$ (either $1$, $2$, or $3$) for the unfolding instructions $\bf f$. This automaton has $31$ states. We now turn to examining the factors of the run-length sequences of paperfolding sequences. Recall that a factor is a contiguous block sitting inside a larger sequence. We start with overlaps. Recall that an {\it overlap} is a string of the form $axaxa$, where $a$ is a single letter, and $x$ is a possibly empty string. For example, the word {\tt entente} is an overlap from French. We now prove that the sequence of run lengths in a paperfolding sequence contains no overlaps. \begin{theorem} The sequence of run lengths corresponding to every finite or infinite paperfolding sequence is overlap-free. \end{theorem} \begin{proof} It suffices to prove the result for every finite paperfolding sequence. We can do this as follows: \begin{verbatim} def chk_over "?lsd_2 ~Ef,i,n,x $lnk(f,x) & x>=1 & i>=1 & n>=1 & i+2*n<=(x+1)/2 & At (t<=n) => RL[f][i+t]=RL[f][i+n+t]": # asserts no overlaps \end{verbatim} And {\tt Walnut} returns {\tt TRUE}.
\end{proof} We now consider {\it squares\/}, that is, blocks of the form $zz$, where $z$ is a nonempty sequence. \begin{theorem} The only possible squares occurring in the run lengths of a paperfolding sequence are $22$, $123123$, and $321321$. \end{theorem} \begin{proof} We start by showing that the only squares are of order $1$ or $3$. \begin{verbatim} def chk_sq1 "?lsd_2 Af,i,n,x ($lnk(f,x) & x>=1 & i>=1 & n>=1 & i+2*n-1<=(x+1)/2 & At (t<n) => RL[f][i+t]=RL[f][i+n+t]) => (n=1|n=3)": \end{verbatim} Next we check that the only square of order $1$ is $22$. \begin{verbatim} def chk_sq2 "?lsd_2 Af,x,i ($lnk(f,x) & x>=1 & i>=1 & i+1<=(x+1)/2 & RL[f][i]=RL[f][i+1]) => RL[f][i]=@2": \end{verbatim} Finally, we check that the only squares of order $3$ are $123123$ and $321321$. \begin{verbatim} def chk_sq3 "?lsd_2 Af,x,i ($lnk(f,x) & x>=1 & i>=1 & i+5<=(x+1)/2 & RL[f][i]=RL[f][i+3] & RL[f][i+1]=RL[f][i+4] & RL[f][i+2]=RL[f][i+5]) => ((RL[f][i]=@1 & RL[f][i+1]=@2 & RL[f][i+2]=@3)|(RL[f][i]=@3 & RL[f][i+1]=@2 & RL[f][i+2]=@1))": \end{verbatim} \end{proof} \begin{proposition} In every finite paperfolding sequence formed by $7$ or more unfolding instructions, the squares $22$, $123123$, and $321321$ are all present in the run-length sequence. \end{proposition} We now turn to palindromes. \begin{theorem} The only palindromes that can occur in the run-length sequence of a paperfolding sequence are $1,2,3, 22, 212, 232, 12321, $ and $32123$. \end{theorem} \begin{proof} It suffices to check the factors of the run-length sequences of length at most $7$. These correspond to factors of length at most $2+3\cdot 7 = 23$, and by the bounds on the ``appearance'' function given in \cite[Thm.~12.2.2]{Shallit:2023}, to guarantee we have seen all of these factors, it suffices to look at prefixes of paperfolding sequences of length at most $13 \cdot 23 = 299$. (Also see \cite{Burns:2022}.)
Hence it suffices to look at all $2^9$ finite paperfolding sequences of length $2^9 - 1 = 511$ specified by instructions of length $9$. When we do this, the only palindromes we find are those in the statement of the theorem. \end{proof} Recall that the {\it subword complexity} of an infinite sequence is the function that counts, for each $n \geq 0$, the number of distinct factors of length $n$ appearing in it. The subword complexity of the paperfolding sequences was determined by Allouche \cite{Allouche:1992}. \begin{theorem} The subword complexity of the run-length sequence of an infinite paperfolding sequence is $4n+4$ for $n \geq 6$. \end{theorem} \begin{proof} First we prove that if $x$ is a factor of a run-length sequence, and $|x| \geq 2$, then $xa$ is a factor of the same sequence for at most two different $a$. \begin{verbatim} def faceq "?lsd_2 At (t<n) => RL[f][i+t]=RL[f][j+t]": eval three "?lsd_2 Ef,i,j,k,n n>=2 & i>=1 & RL[f][i+n]=@1 & RL[f][j+n]=@2 & RL[f][k+n]=@3 & $faceq(f,i,j,n) & $faceq(f,j,k,n)": \end{verbatim} Next we prove that for each length $n \geq 6$, exactly four factors of a run-length sequence of that length are right-special (have right extensions by two different letters). \begin{verbatim} def rtspec "?lsd_2 Ej,x $lnk(f,x) & i+n<=x & i>=1 & $faceq(f,i,j,n) & RL[f][i+n]!=RL[f][j+n]": eval nofive "?lsd_2 ~Ef,i,j,k,l,m,n n>=5 & i<j & j<k & k<l & l<m & $rtspec(f,i,n) & $rtspec(f,j,n) & $rtspec(f,k,n) & $rtspec(f,l,n) & $rtspec(f,m,n)": eval four "?lsd_2 Af,n,x ($lnk(f,x) & x>=127 & n>=6 & 13*n<=x) => Ei,j,k,l i>=1 & i<j & j<k & k<l & $rtspec(f,i,n) & $rtspec(f,j,n) & $rtspec(f,k,n) & $rtspec(f,l,n)": \end{verbatim} Here {\tt nofive} shows that no length $5$ or larger admits five or more right-special factors of that length, and {\tt four} shows that every length $6$ or larger admits at least four; together these give exactly four right-special factors of each length $n \geq 6$. Here we have used \cite[Thm.~12.2.2]{Shallit:2023}, which guarantees that every factor of length $n$ of a paperfolding sequence can be found in a prefix of length $13n$.
Thus we see that if there are $t$ factors of length $n \geq 6$ then there are $t+4$ factors of length $n+1$: the $t$ arising from those that can be extended in exactly one way to the right, and the $4$ additional from those that have two extensions. Since there are $28$ factors of every run-length sequence of length $6$ (which we can check just by enumerating them, again using \cite[Thm.~12.2.2]{Shallit:2023}), the result now follows by a trivial induction. \end{proof} \section{The regular paperfolding sequence} In this section we specialize everything we have done so far to the case of a single infinite paperfolding sequence, the so-called regular paperfolding sequence, where the folding instructions are $1^\omega = 111\cdots$. In \cite{Bunder&Bates&Arnold:2024}, the sequence $2122321231232212\cdots$ of run lengths for the regular paperfolding sequence was called $g(n)$, and the sequence $2, 3, 5, 7, 10, 12, 13, 15, 18, 19, 21,\ldots$ of ending positions of runs was called $h(n)$. We adopt their notation. Note that $g(n)$ forms sequence \seqnum{A088431} in the On-Line Encyclopedia of Integer Sequences (OEIS) \cite{oeis}, while the sequence of starting positions of runs $1, 3, 4, 6, 8, 11, 13, 14, 16,\ldots$ is \seqnum{A371594}. In this case we can construct an automaton computing the $n$'th term of the run-length sequence $g(n)$ as follows: \begin{verbatim} reg rps {-1,0,1} {0,1} "[1,1]*[0,0]*": def runlr1 "?lsd_2 Ef,x $rps(f,x) & n>=1 & n<=x/2 & RL[f][n]=@1": def runlr2 "?lsd_2 Ef,x $rps(f,x) & n>=1 & n<=x/2 & RL[f][n]=@2": def runlr3 "?lsd_2 Ef,x $rps(f,x) & n>=1 & n<=x/2 & RL[f][n]=@3": combine RLR runlr1=1 runlr2=2 runlr3=3: \end{verbatim} The resulting automaton is depicted in Figure~\ref{fig4}.
\begin{figure}[htb] \begin{center} \includegraphics[width=5.5in]{RLR.pdf} \end{center} \caption{The lsd-first automaton {\tt RLR}.} \label{fig4} \end{figure} Casual inspection of this automaton immediately proves many of the results of \cite{Bunder&Bates&Arnold:2024}, such as their multi-part Theorems 2.1 and 2.2. To name just one example, the sequence $g(n)$ takes the value $1$ iff $n \equiv 2, 7$ (mod $8$). For their other results, we can use {\tt Walnut} to prove them. We can also specialize {\tt sp} and {\tt ep} to the case of the regular paperfolding sequence, as follows: \begin{verbatim} reg rps {-1,0,1} {0,1} "[1,1]*[0,0]*": def sp_reg "?lsd_2 (n=0&z=0) | Ef,x $rps(f,x) & n>=1 & n<=x/2 & $sp(f,n,z)": def ep_reg "?lsd_2 (n=0&z=0) | Ef,x $rps(f,x) & n>=1 & n<=x/2 & $ep(f,n,z)": \end{verbatim} These automata are depicted in Figures~\ref{fig7} and \ref{fig8}. \begin{figure}[htb] \centering \begin{minipage}{0.47\textwidth} \centering \includegraphics[height=1.7in]{sp_reg.pdf} \caption{Synchronized automaton {\tt sp\_reg} for starting positions of runs of the regular paperfolding sequence.} \label{fig7} \end{minipage} \quad \begin{minipage}{0.47\textwidth} \centering \includegraphics[height=1.5in]{ep_reg.pdf} \caption{Synchronized automaton {\tt ep\_reg} for ending positions of runs of the regular paperfolding sequence.} \label{fig8} \end{minipage} \end{figure} Once we have these automata, we can easily recover many of the results of \cite{Bunder&Bates&Arnold:2024}, such as their Theorem 3.2. For example they proved that if $n \equiv 1$ (mod $4$), then $h(n) = 2n$. We can prove this as follows with {\tt Walnut}: \begin{verbatim} eval test32a "?lsd_2 An (n=4*(n/4)+1) => $ep_reg(n,2*n)": \end{verbatim} The reader may enjoy constructing {\tt Walnut} expressions to check the other results of \cite{Bunder&Bates&Arnold:2024}. Slightly more challenging to prove is the sum property, conjectured by Hendriks, and given in \cite[Thm.~4.1]{Bunder&Bates&Arnold:2024}. 
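Facts of this kind are easy to test numerically as well. The following self-contained Python sketch (our own code, independent of the {\tt Walnut} proofs) checks, on an initial segment, both the characterization of $g(n)=1$ noted above and the identity $h(n)=2n$ for $n\equiv 1 \pmod 4$:

```python
from itertools import accumulate, groupby

def fold(n):
    # n-th term of the regular paperfolding sequence, n >= 1
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

pf = [fold(n) for n in range(1, 1 << 14)]
runs = [len(list(r)) for _, r in groupby(pf)][:-1]  # g(1), g(2), ...
ends = list(accumulate(runs))                       # h(1), h(2), ...

# g(n) = 1 if and only if n = 2, 7 (mod 8)
assert all((runs[n - 1] == 1) == (n % 8 in (2, 7)) for n in range(1, 5000))

# h(n) = 2n whenever n = 1 (mod 4)
assert all(ends[n - 1] == 2 * n for n in range(1, 5000, 4))
```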
We state it as follows:
\begin{theorem} Arrange the set of positive integers not in $H := \{ h(n)+1 \, : \, n \geq 0 \}$ in increasing order, and let $t(n)$ be the $n$'th such integer, for $n \geq 1$. Then
\begin{itemize}
\item[(a)] $g(h(i)+1) = 2$ for $i \geq 0$;
\item[(b)] $g(t(2i)) = 3$ for $i \geq 1$;
\item[(c)] $g(t(2i-1)) = 1$ for $i \geq 1$.
\end{itemize}
\end{theorem}
\begin{proof} The first step is to create an automaton {\tt tt} computing $t(n)$. Once again, we guess the automaton from data and then verify its correctness. It is depicted in Figure~\ref{fig9}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=5.5in]{tt.pdf}
\end{center}
\caption{The automaton {\tt tt} computing $t(n)$.}
\label{fig9}
\end{figure}
In order to verify its correctness, we need to verify that {\tt tt} indeed computes an increasing function $t(n)$, and further that the set $\{ t(n) \, : \, n \geq 1 \} = \{ 1, 2, \ldots \} \setminus H$. We can do this as follows:
\begin{verbatim}
eval tt1 "?lsd_2 An (n>=1) => Ex $tt(n,x)":
# takes a value for all n
eval tt2 "?lsd_2 ~En,x,y n>=1 & x!=y & $tt(n,x) & $tt(n,y)":
# does not take two different values for the same n
eval tt3 "?lsd_2 An,y,z (n>=1 & $tt(n,y) & $tt(n+1,z)) => y<z":
# is an increasing function
eval tt4 "?lsd_2 Ax (x>=1) => ((En n>=1 & $tt(n,x)) <=>
   (~Em,y $ep_reg(m,y) & x=y+1))":
# takes all values not in H
\end{verbatim}
Now we can verify parts (a)--(c) as follows:
\begin{verbatim}
eval parta "?lsd_2 Ai,x (i>=1 & $ep_reg(i,x)) => RLR[x+1]=@2":
eval partb "?lsd_2 Ai,x (i>=1 & $tt(2*i,x)) => RLR[x]=@3":
eval partc "?lsd_2 Ai,x (i>=1 & $tt(2*i-1,x)) => RLR[x]=@1":
\end{verbatim}
And {\tt Walnut} returns {\tt TRUE} for all of these. This completes the proof.
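The same three statements can also be cross-checked numerically on an initial segment (a sanity check, not a substitute for the {\tt Walnut} proof). The Python sketch below (our own code) uses the convention $h(0)=0$, matching the clause {\tt (n=0\&z=0)} in the definition of {\tt ep\_reg}:

```python
from itertools import accumulate, groupby

def fold(n):
    # n-th term of the regular paperfolding sequence, n >= 1
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

pf = [fold(n) for n in range(1, 1 << 14)]
runs = [len(list(r)) for _, r in groupby(pf)][:-1]
g = lambda n: runs[n - 1]                 # g(1), g(2), ...
h = [0] + list(accumulate(runs))          # h(0), h(1), ...

H = {x + 1 for x in h}
t = [m for m in range(1, 4000) if m not in H]   # t(1), t(2), ...

assert all(g(h[i] + 1) == 2 for i in range(1000))        # part (a)
assert all(g(t[2 * i - 1]) == 3 for i in range(1, 500))  # part (b): t(2i)
assert all(g(t[2 * i - 2]) == 1 for i in range(1, 500))  # part (c): t(2i-1)
```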
\end{proof}

\section{Connection with continued fractions}

Dimitri Hendriks observed, and Bunder et al.~\cite{Bunder&Bates&Arnold:2024} proved, a relationship between the sequence of runs for the regular paperfolding sequence and the continued fraction for the real number $\sum_{i \geq 0} 2^{-2^i}$. As it turns out, however, a {\it much\/} more general result holds; it links the continued fractions of uncountably many irrational numbers to runs in the paperfolding sequences.

\begin{theorem} Let $n\geq 2$ and $\epsilon_i \in \{ -1, 1\}$ for $2 \leq i \leq n$. Define $$\alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n) := {1\over 2} + {1 \over 4} + \sum_{2\leq i\leq n} {\epsilon_i}2^{-2^i} .$$ Then the continued fraction for $\alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n)$ is given by $[0, 1, (2R_{1, \epsilon_2, \epsilon_3, \ldots, \epsilon_n})']$, where the prime indicates that the last term is increased by $1$. As a consequence, we get that the numbers $\alpha(\epsilon_2, \epsilon_3,\ldots)$ have continued fraction given by $[0, 1, 2R_{1, \epsilon_2, \epsilon_3, \ldots}]$. \label{bba} \end{theorem}

\begin{remark} The numbers $\alpha(\epsilon_2, \epsilon_3,\ldots)$ were proved transcendental by Kempner \cite{Kempner:1916}. They are sometimes erroneously called Fredholm numbers, even though Fredholm never studied them. \end{remark}

As an example, suppose $(\epsilon_2,\epsilon_3,\epsilon_4,\epsilon_5) = (1,-1,-1,1)$. Then $$\alpha(1,-1,-1,1) = 3472818177/2^{32} = [0, 1, 4, 4, 2, 6, 4, 2, 4, 4, 6, 4, 2, 4, 6, 2, 4, 5],$$ while $R_{1,1,-1,-1, 1} = 2213212232123122$. To prove Theorem~\ref{bba}, we need the ``folding lemma'':

\begin{lemma} Suppose $p/q = [0, a_1, a_2,\ldots, a_t]$, $t$ is odd, and $\epsilon \in \{-1, 1\}$.
Then $$p/q + \epsilon/q^2 = [0, a_1, a_2, \ldots, a_{t-1}, a_t - \epsilon, a_t + \epsilon, a_{t-1}, \ldots, a_2, a_1].$$ \label{lem1} \end{lemma}

\begin{proof} See \cite[p.~177]{Dekking&MendesFrance&vanderPoorten:1982}, although the general ideas can also be found in \cite{Shallit:1979,Shallit:1982b}. \end{proof}

We can now prove Theorem~\ref{bba} by induction.

\begin{proof} From Lemma~\ref{lem1} we see that if $\alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n) = [0, 1, a_2, \ldots, a_t]$ then $$\alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n, \epsilon_{n+1}) = [0, 1, a_2, \ldots, a_{t-1}, a_t - \epsilon_{n+1}, a_t + \epsilon_{n+1}, a_{t-1}, a_{t-2}, \ldots, a_3, a_2+1] .$$ Now $F_{1, \epsilon_2, \epsilon_3, \ldots, \epsilon_n}$ always ends in $-1$. Write $R_{1, \epsilon_2, \epsilon_3, \ldots, \epsilon_n} = b_1 b_2 \cdots b_s$. Then $$R_{1, \epsilon_2, \ldots, \epsilon_n, \epsilon_{n+1}} = b_1 \cdots b_{s-1}, b_s +1, b_{s-1}, \ldots, b_1$$ if $\epsilon_{n+1} = -1$ (because we extend the last run with one more $-1$) and $$R_{1, \epsilon_2, \ldots, \epsilon_n, \epsilon_{n+1}} = b_1 \cdots b_{s-1}, b_s, b_s+1, b_{s-1}, \ldots, b_1$$ if $\epsilon_{n+1} =1$. Now suppose, as induction hypothesis, that \begin{align*} \alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n) &= [0, 1, (2R_{1, \epsilon_2, \epsilon_3, \ldots, \epsilon_n})'] \\ &= [0, 1, a_2, \ldots, a_t ], \end{align*} and write $R_{1, \epsilon_2, \ldots, \epsilon_n} = b_1 b_2 \cdots b_{t-1}$. Then \begin{align*} \alpha(\epsilon_2, \epsilon_3, \ldots, \epsilon_n, \epsilon_{n+1} ) &= [0, 1, a_2, \ldots, a_{t-1}, a_t - \epsilon_{n+1}, a_t + \epsilon_{n+1}, a_{t-1}, \ldots, a_3, a_2+1] \\ &= [0, 1, 2b_1, \ldots, 2b_{t-2}, 2b_{t-1} + 1 - \epsilon_{n+1}, 2b_{t-1} + 1 + \epsilon_{n+1}, 2b_{t-2}, \ldots, 2b_2, 2b_1,1] \\ &= [0, 1, 2b_1, \ldots, 2b_{t-2}, 2b_{t-1} + 1 - \epsilon_{n+1}, 2b_{t-1} + 1 + \epsilon_{n+1}, 2b_{t-2}, \ldots, 2b_2, 2b_1 + 1] \\ &= [0, 1, (2R_{1, \epsilon_2, \ldots, \epsilon_{n+1}})'] , \end{align*} as desired.
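The worked example following Theorem~\ref{bba} can be checked mechanically. The Python sketch below (our own illustration) builds the finite paperfolding sequence by appending each new instruction followed by the negated reversal of what came before, a convention that reproduces the run sequence $R_{1,1,-1,-1,1}$ quoted earlier, and computes the continued fraction by the Euclidean algorithm:

```python
from fractions import Fraction
from itertools import groupby

def paperfold(instructions):
    # finite paperfolding sequence over {1, -1}: each new instruction a
    # contributes f -> f, a, -reverse(f)
    f = []
    for a in instructions:
        f = f + [a] + [-x for x in reversed(f)]
    return f

def contfrac(x):
    # continued fraction of a positive rational via the Euclidean algorithm
    cf = []
    while True:
        a = x.numerator // x.denominator
        cf.append(a)
        x -= a
        if x == 0:
            return cf
        x = 1 / x

F = paperfold([1, 1, -1, -1, 1])
R = [len(list(r)) for _, r in groupby(F)]   # run lengths
assert ''.join(map(str, R)) == '2213212232123122'

alpha = Fraction(3472818177, 2 ** 32)
# [0, 1, (2R)']: double every run length, then add 1 to the last term
assert contfrac(alpha) == [0, 1] + [2 * r for r in R[:-1]] + [2 * R[-1] + 1]
```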
\end{proof} \section{Acknowledgments} I thank Jean-Paul Allouche and Narad Rampersad for their helpful comments. \begin{thebibliography}{99} \bibitem{Allouche:1992} J.-P. Allouche. \newblock The number of factors in a paperfolding sequence. \newblock {\it Bull. Aust. Math. Soc.} {\bf 46} (1992), 23--32. \bibitem{Allouche&Allouche&Shallit:2006} G. Allouche, J.-P. Allouche, and J. Shallit. \newblock Kolams indiens, dessins sur le sable aux \^{\i}les Vanuatu, courbe de Sierpinski, et morphismes de mono\"{\i}de. \newblock {\it Annales Inst. Fourier} {\bf 56} (2006), 2115--2130. \bibitem{Allouche&Arnold&Berstel&Brlek&Jockusch&Plouffe&Sagan:1995} J.-P. Allouche, A. Arnold, J. Berstel, S. Brlek, W. Jockusch, S. Plouffe, and B. E. Sagan. \newblock A relative of the Thue-Morse sequence. \newblock {\it Discrete Math.} {\bf 139} (1995), 455--461. \bibitem{Allouche&Bousquet-Melou:1994a} J.-P. Allouche and M.~Bousquet-M\'elou. \newblock Canonical positions for the factors in the paperfolding sequences. \newblock {\em Theoret. Comput. Sci.} {\bf 129} (1994), 263--278. \bibitem{Allouche&Bousquet-Melou:1994b} J.-P. Allouche and M.~Bousquet-M\'elou. \newblock Facteurs des suites de {Rudin-Shapiro} g\'en\'eralis\'ees. \newblock {\em Bull. Belgian Math. Soc.} {\bf 1} (1994), 145--164. \bibitem{Bunder&Bates&Arnold:2024} M. Bunder, B. Bates, and S. Arnold. \newblock The summed paperfolding sequence. \newblock {\it Bull. Aust. Math. Soc.} {\bf 110} (2024), 189--198. \bibitem{Burns:2022} R. Burns. \newblock The appearance function for paper-folding words. \newblock ArXiv preprint arXiv:2210.14719 [math.NT], October 22 2022. \newblock Available at \url{https://arxiv.org/abs/2210.14719}. \bibitem{Davis&Knuth:1970} C.~Davis and D.~E. Knuth. \newblock Number representations and dragon curves--{I, II}. \newblock {\em J. Recreational Math.} {\bf 3} (1970), 66--81, 133--149. \bibitem{Dekking&MendesFrance&vanderPoorten:1982} F.~M. Dekking, M.~{Mend\`es}~France, and A.~J. {van der Poorten}. 
\newblock Folds! \newblock {\em Math. Intelligencer} {\bf 4} (1982), 130--138, 173--181, 190--195. \newblock Erratum, {\bf 5} (1983), 5. \bibitem{Goc&Mousavi&Schaeffer&Shallit:2015} D. Go\v{c}, H. Mousavi, L. Schaeffer, and J. Shallit. \newblock A new approach to the paperfolding sequences. \newblock In A. Beckmann et al., eds., {\it CiE 2015}, Lecture Notes in Comput. Sci., Vol.~9136, Springer, 2015, pp.~34--43. \bibitem{Golomb:1966} S. W. Golomb. \newblock Run-length encodings. \newblock {\it IEEE Trans. Info. Theory} {\bf IT-12} (1966), 399--401. \bibitem{Kempner:1916} A. J. Kempner. \newblock On transcendental numbers. \newblock {\it Trans. Amer. Math. Soc.} {\bf 17} (1916), 476--482. \bibitem{Mousavi:2016} H.~Mousavi. \newblock Automatic theorem proving in {{\tt Walnut}}. \newblock Arxiv preprint arXiv:1603.06017 [cs.FL], available at \url{http://arxiv.org/abs/1603.06017}, 2016. \bibitem{Shallit:1979} J.~O. Shallit. \newblock Simple continued fractions for some irrational numbers. \newblock {\em J. Number Theory} {\bf 11} (1979), 209--217. \bibitem{Shallit:1982b} J.~O. Shallit. \newblock Explicit descriptions of some continued fractions. \newblock {\em Fibonacci Quart.} {\bf 20} (1982), 77--81. \bibitem{Shallit:2021} J. Shallit. \newblock Synchronized sequences. \newblock In T. Lecroq and S. Puzynina, eds., {\it WORDS 2021}, Lecture Notes in Comp. Sci., Vol.~12847, Springer, 2021, pp.~1--19. \bibitem{Shallit:2023} J.~Shallit. \newblock {\em The Logical Approach To Automatic Sequences: Exploring Combinatorics on Words with {\tt Walnut}}, Vol. 482 of {\em London Math. Soc. Lecture Note Series}. \newblock Cambridge University Press, 2023. \bibitem{oeis} N. J. A. Sloane et al. \newblock The On-Line Encyclopedia of Integer Sequences. \newblock Available at \url{https://oeis.org}. \end{thebibliography} \end{document}
2412.18075v1
http://arxiv.org/abs/2412.18075v1
How many crossing changes or Delta-moves does it take to get to a homotopy trivial link?
\documentclass[12pt,letterpaper]{amsart} \usepackage{epsfig,amsmath,amsthm,amssymb,graphicx} \usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry} \usepackage{color} \usepackage{thmtools} \usepackage{thm-restate} \usepackage{hyperref} \usepackage{cleveref} \declaretheorem[name=Theorem,numberwithin=section]{thm} \usepackage{xfrac} \usepackage[table]{xcolor} \usepackage{comment} \usepackage{array, multirow} \usepackage{makecell} \usepackage{import} \usepackage{enumerate} \usepackage{caption} \usepackage{subcaption} \captionsetup[subfigure]{labelfont=rm} \usepackage{faktor} \usepackage{tikz} \usepackage{tikz-cd} \usetikzlibrary{matrix, arrows, cd} \usepackage{pb-diagram} \usepackage{cancel} \definecolor{myOlive}{rgb}{.29,.28,.16} \definecolor{myRed}{rgb}{.78,0,0} \newtheorem{proposition}{Proposition}[section] \newtheorem{theorem}[proposition]{Theorem} \newtheorem{corollary}[proposition]{Corollary} \newtheorem{lemma}[proposition]{Lemma} \newtheorem*{theorem*}{Theorem} \newtheorem*{proposition*}{Proposition} \newtheorem*{lemma*}{Lemma} \newtheorem*{corollary*}{Corollary} \newtheorem*{examples}{Examples} \theoremstyle{definition} \newtheorem{definition}[proposition]{Definition} \newtheorem{question}[proposition]{Question} \newtheorem{problem}[proposition]{Problem} \newtheorem{conjecture}[proposition]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[proposition]{Remark} \newtheorem{example}[proposition]{Example} \newcommand{\lineseg}[2]{\overline{#1#2}} \newcommand{\tb}{\text{tb}} \newcommand{\rot}{\text{rot}} \newcommand{\s}{\text{s}\,} \newcommand{\g}{\text{g}} \newcommand{\bdry}{\partial} \newcommand{\smooth}{\text{sm}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\F}{\mathcal{F}} \newcommand{\G}{\mathcal{G}} \newcommand{\C}{\mathcal{C}} \newcommand{\W}{\mathcal{W}} \newcommand{\Chat}{\widehat{\C}} \newcommand{\CC}{\mathbb{C}} \renewcommand{\H}{\mathcal{H}} \newcommand{\R}{\mathbb{R}} 
\renewcommand{\r}{\mathcal{r}} \newcommand{\Bl}{\operatorname{Bl}} \newcommand{\Ab}{\operatorname{Ab}} \newcommand{\Tsn}{\operatorname{Tsn}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\lk}{\operatorname{lk}} \renewcommand{\inf}{\text{Inf}} \renewcommand{\int}[1]{\operatorname{int}(#1)} \renewcommand{\r}{\mathfrak{r}} \newcommand{\str}{\text{str}} \renewcommand{\P}{\mathcal{P}} \newcommand{\FOS}{\mathcal{FOS}} \newcommand{\inv}{^{-1}} \newcommand{\Inv}[1]{#1^{-1}} \newcommand{\GCD}{\operatorname{GCD}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\K}{\mathcal{K}} \newcommand{\LH}{\mathcal{LH}} \newcommand{\SLH}{\mathcal{SLH}} \renewcommand{\L}{\mathcal{L}} \newcommand{\SL}{\mathcal{SL}} \newcommand{\RR}{\mathbb{R}} \newcommand{\Sum}{\displaystyle \sum } \newcommand{\Prod}{\displaystyle \prod } \renewcommand{\Cup}{\displaystyle \bigcup } \newcommand{\ot}{\leftarrow} \newcommand{\onto}{\twoheadrightarrow} \newcommand{\into}{\hookrightarrow} \newcommand{\immerse}{\looparrowright} \newcommand{\pref}[1]{(\ref{#1})} \newcommand{\nsubg}{\unlhd} \newcommand{\wt}{\operatorname{wt}} \newcommand{\chris}[1]{\textcolor{magenta}{[Chris] #1}} \newcommand{\anthony}[1]{\textcolor{blue}{[Anthony] #1}} \newcommand{\cotto}[1]{\textcolor{violet}{[cotto] #1}} \newcommand{\katherine}[1]{\textcolor{teal}{[Katherine] #1}} \newcommand{\taylor}[1]{\textcolor{green}{[taylor] #1}} \frenchspacing \begin{document} \title[Crossing changes and link homotopy]{How many crossing changes or Delta-moves does it take to get to a homotopy trivial link?} \author[A.~Bosman]{Anthony Bosman} \address{Department of Mathematics, Andrews University} \email{[email protected]} \urladdr{andrews.edu/cas/math/faculty/bosman-anthony.html} \author[C.~W.~Davis]{Christopher W.\ Davis} \address{Department of Mathematics, University of Wisconsin--Eau Claire} \email{[email protected]} \urladdr{people.uwec.edu/daviscw} \author[T.~Martin]{Taylor Martin} \address{Department of Mathematics and 
Statistics, Sam Houston State University} \email{[email protected]} \urladdr{shsu.edu/academics/mathematics-and-statistics/faculty/martin.html} \author[C.~Otto]{Carolyn Otto} \address{Department of Mathematics, University of Wisconsin--Eau Claire} \email{[email protected]} \urladdr{uwec.edu/profiles/ottoa} \author[K.~Vance]{Katherine Vance} \address{Department of Mathematics and Computer Science, Simpson College} \email{[email protected]} \urladdr{simpson.edu/faculty-staff/katherine-vance/} \date{\today} \subjclass[2020]{57K10} \begin{abstract} The homotopy trivializing number, $n_h(L)$, and the Delta homotopy trivializing number, $n_\Delta(L)$, are invariants of the link homotopy class of $L$ which count how many crossing changes or Delta moves are needed to reduce that link to a homotopy trivial link. In 2022, Davis, Orson, and Park proved that the homotopy trivializing number of $L$ is bounded above by the sum of the absolute values of the pairwise linking numbers and some quantity $C_n$ which depends only on $n$, the number of components. In this paper we improve on this result by using the classification of link homotopy due to Habegger-Lin to give a quadratic upper bound on $C_n$. We employ ideas from extremal graph theory to demonstrate that this bound is close to sharp, by exhibiting links with vanishing pairwise linking numbers whose homotopy trivializing numbers grow quadratically. In the process, we determine the homotopy trivializing number of every 4-component link. We also prove a cubic upper bound on the difference between the Delta homotopy trivializing number of $L$ and the sum of the absolute values of the triple linking numbers of $L$. \end{abstract} \maketitle \section{Introduction and statement of results} Any link $L$ in $S^3$ can be reduced to the unlink by some sequence of crossing changes.
If this can be done by changing only crossings where a component of $L$ crosses over itself, often called a \emph{self-crossing change}, then we say that $L$ is \emph{homotopy trivial}. If links $L$ and $J$ can be transformed into each other by self-crossing changes, then we call $L$ and $J$ \emph{link homotopic}. Unlike the question of when two links are isotopic, which is famously difficult, link homotopy is classified by Habegger-Lin \cite{HL1}, building on work of Milnor \cite{M1}. The number of crossing changes needed to transform a link to the unlink is called its unlinking number. This invariant has been the target of intense study; see for example \cite{KM86, Kohn91, Kohn93, Likorish82, Yasutaka81}. In \cite[Section 6]{DOP22}, the second author, along with Orson and Park, combines the unlinking number with the notion of link homotopy and introduces the \emph{homotopy trivializing number}, $n_h(L)$, the minimal number of crossing changes needed to reduce $L$ to a homotopy trivial link. In that paper they show that $n_h(L)$ is controlled by the pairwise linking numbers of $L$ together with the number of components of $L$.

\begin{theorem*}[\cite{DOP22}, Theorem 1.7] For any $n\in \N$ there is some $C_n\in \N$ so that for every $n$-component link $L$, $$\Lambda(L)\le n_h(L)\le \Lambda(L)+C_n,$$ where $\Lambda(L)=\Sum_{i<j}|\lk(L_i, L_j)|$ is the sum of the absolute values of the pairwise linking numbers. \end{theorem*}

Such a bound is surprising, since linking numbers form only the first of a family of higher order Milnor invariants which classify link homotopy \cite{HL1, M1}. This result indicates that these higher order invariants have only a bounded impact on the number of crossing changes needed to get to a homotopy trivial link. While one could parse out a precise value of the constant $C_n$ produced by the techniques of \cite{DOP22}, actually doing so would require a detailed combinatorial analysis and would result in a very large bound.
We pose the following problem, on which we make significant progress.

\begin{problem}\label{prob: main} For any $n\in \N$ compute $$C_n:=\max\{ n_h(L)-\Lambda(L) \mid L\text{ is an }n\text{-component link}\}.$$ \end{problem}

Our first main result follows a different approach from that of \cite{DOP22} and finds quadratic upper and lower bounds on $C_n$.

\begin{theorem}\label{thm: main} For all $n\ge 3$, $$ 2\left\lceil\frac{1}{3}n(n-2)\right\rceil\le C_n\le (n-1)(n-2) .$$ In particular, $C_3=2$ and $C_4=6$. \end{theorem}

The upper bound we produce on $C_n$ comes from the following result.

\begin{restatable*}{theorem}{UpperBoundTheorem} \label{upper bound theorem main} If $L$ is an $n$-component link and \[Q(L) = \#\{(i,j)\mid 2\leq i+1<j\leq n\text{ and } \lk(L_i,L_j)= 0\}\] then $n_h(L)\le \Lambda(L)+2Q(L)$. \end{restatable*}

To see why this is surprising, note that when $L$ has vanishing pairwise linking numbers, this theorem gives a very concrete upper bound on the number of crossing changes needed to reduce $L$ to a homotopy trivial link.

\begin{corollary} If an $n$-component link $L$ has vanishing pairwise linking numbers, then \[n_h(L)\le (n-1)(n-2).\] \end{corollary}

When enough pairwise linking numbers are non-zero, the invariant $Q(L)$ of Theorem~\ref{upper bound theorem main} vanishes, so that the homotopy trivializing number is determined by the pairwise linking numbers.

\begin{corollary}\label{cor: nonzero linking} Let $L$ be an $n$-component link. If $\lk(L_i,L_j)\neq 0$ for all $i,j$ with $|i-j|>1$, then $\Lambda(L) = n_h(L)$. \end{corollary}

Our strategy for computing the homotopy trivializing number also reveals a linear bound on $n_h(L)$ for Brunnian links. Recall that a link is called \emph{Brunnian} if every proper sublink of it is trivial.
\begin{restatable*}{corollary}{nhlForBruunian}\label{cor: nhl for brunnian} If $n\ge 3$ and $L$ is an $n$-component Brunnian link, then $n_h(L)\le 2(n-2).$ \end{restatable*}

In \cite{DOP22} the homotopy trivializing number of any 3-component link $L$ is determined in terms of $\Lambda(L)$ along with Milnor's triple linking number, $\mu_{123}(L)$: $$n_h(L)=\begin{cases}\Lambda(L) &\text{ if }\Lambda(L)\neq 0,\\ 2&\text{ if }\Lambda(L)=0\text{ and }\mu_{123}(L)\neq 0, \\0&\text{otherwise}.\end{cases}$$ Our proof that $C_4=6$ passes through an argument that determines the homotopy trivializing number of every 4-component link. The precise statement (Theorems~\ref{thm:nhl for linking number zero}, \ref{thm:linking greater one}, and \ref{thm:nonvanishing linking}) is too long to state here. Instead we present some elements of this classification. Here $\overline\mu_I(L)$ is the Milnor number of $L$ associated with the multi-index $I$. It is only well defined modulo the greatest common divisor (\(\GCD\)) of those $\overline\mu_J(L)$ with $J$ the result of deleting some terms from $I$. As is conventional, the first nonvanishing Milnor invariant is denoted $\mu_I(L)$, since it is well defined as an integer.

\begin{theorem}[See Theorems~\ref{thm:nhl for linking number zero}, \ref{thm:linking greater one}, \ref{thm:nonvanishing linking}]\label{thm: 4-component sampler} Let $L=L_1\cup L_2\cup L_3\cup L_4$ be a 4-component link. \begin{itemize} \item $n_h(L)-\Lambda(L)=6$ if and only if $\Lambda(L)=0$, none of $\mu_{123}(L)$, $\mu_{124}(L)$, $\mu_{134}(L)$, and $\mu_{234}(L)$ are equal to zero, and none of $\overline\mu_{1234}(L)$, $\overline\mu_{1324}(L)$, and $\overline\mu_{1234}(L)+\overline\mu_{1324}(L)$ are multiples of $\GCD(\mu_{123}(L), \mu_{124}(L), \mu_{134}(L), \mu_{234}(L))$.
\item If $|\lk(L_1, L_2)|\ge 2$ and $\lk(L_3, L_4)\neq 0$, then $n_h(L)=\Lambda(L)$.
\item If $\lk(L_1,L_2)$, $\lk(L_2, L_3)$, and $\lk(L_3, L_4)$ are all nonzero, then $n_h(L)=\Lambda(L)$.
\item If any four linking numbers of $L$ fail to vanish, then $n_h(L)=\Lambda(L)$.
\end{itemize}
\end{theorem}

\begin{figure} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=.2\textwidth]{DeltaMoveBefore.pdf}}; \node at (4,0) {$\longrightarrow$}; \node at (8,0){\includegraphics[width=.2\textwidth]{DeltaMoveAfter.pdf}}; \end{tikzpicture} \caption{The $\Delta$-move.} \label{fig: Delta Move} \end{figure}

Another unknotting operation is the Delta-move (henceforth, $\Delta$-move) as pictured in Figure~\ref{fig: Delta Move}. By \cite[Theorem 1.1]{MN89}, any link with vanishing pairwise linking numbers can be undone by a sequence of $\Delta$-moves. If a $\Delta$-move involves strands of $L_i$, $L_j$, and $L_k$ with $i,j,k$ all distinct, then it changes the triple linking number $\mu_{ijk}$ by exactly $\pm 1$. As a consequence, the number of $\Delta$-moves needed to reduce a link with vanishing linking numbers to a homotopy trivial link is bounded below by the sum of the absolute values of the triple linking numbers. Similarly to Theorem~\ref{thm: main}, we demonstrate an upper bound. Let $n_{\Delta}(L)$ be the minimal number of $\Delta$-moves needed to transform a link $L$ to a homotopy trivial link and $\Lambda_3(L) = \Sum_{i<j<k} |\mu_{ijk}(L)|$ be the sum of the absolute values of the triple linking numbers of $L$.
We show the following:
\begin{restatable*}{theorem}{thmMainDelta} \label{thm: main Delta} For any $n$-component link $L$ with vanishing pairwise linking numbers, $$\Lambda_3(L)\le n_\Delta(L)\le \Lambda_3(L)+\frac{2}{3} (n^3 - 3n^2 + 2n - 6).$$ \end{restatable*}

\begin{corollary}\label{cor: vanishing Delta} For any $n$-component link $L$ with $\Lambda(L)=\Lambda_3(L)=0$, $$n_\Delta(L)\le \frac{2}{3} (n^3 - 3n^2 + 2n - 6).$$ \end{corollary}

In order to prove that $C_n\ge 2\left\lceil\frac{1}{3}n(n-2)\right\rceil$, we need to exhibit links $L$ with $n_h(L)\ge 2\left\lceil\frac{1}{3}n(n-2)\right\rceil$. We do so in Theorem~\ref{thm: large nhl using 4-component sublinks} by studying a link $L$ with vanishing pairwise linking numbers, each of whose 4-component sublinks has homotopy trivializing number 6. In order to compute the homotopy trivializing number of this link, we study any sequence of crossing changes transforming $L$ to a homotopy trivial link. We associate to this sequence a weighted graph $G$ with vertices $\{v_1,\dots v_n\}$; the edge from $v_i$ to $v_j$ is weighted by half of the number of crossing changes done between $L_i$ and $L_j$. Note that by our choice of $L$, the subgraph spanned by any four vertices of $G$ must have weight at least 3. We prove the following theorem, which we think will be of independent interest to a graph theorist.

\begin{theorem}\label{thm:min_weight_phi_n} Let $G$ be a graph on $n$ vertices with non-negative integer weights on its edges. If the total weight of the subgraph of $G$ spanned by any four vertices is at least 3, then the total weight of $G$ is at least $\left\lceil\frac{1}{3}n(n-2)\right\rceil$. \end{theorem}

The fact that $2\left\lceil\frac{1}{3}n(n-2)\right\rceil\le C_n$ will follow. Forgetting the link theory context, we pose the following graph theoretic problem motivated by the above result.
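Before stating it, we note that the bound of Theorem~\ref{thm:min_weight_phi_n} can be confirmed exhaustively for very small $n$. The Python sketch below (our own illustration, not part of the proof) places a given total weight on the edges of the complete graph $K_n$ in every possible way and finds the smallest total satisfying the four-vertex constraint; for $n=4,5$ it agrees with $\left\lceil\frac{1}{3}n(n-2)\right\rceil$, so the bound is attained in those cases:

```python
from itertools import combinations, combinations_with_replacement

def min_total_weight(n, k=4, w=3):
    # smallest total weight of a weighting of K_n in which every
    # k-vertex induced subgraph has total weight >= w (small n only)
    edges = list(combinations(range(n), 2))
    index = {e: i for i, e in enumerate(edges)}
    subs = [[index[e] for e in combinations(S, 2)]
            for S in combinations(range(n), k)]
    s = 0
    while True:
        # every multiset of s edge slots is one way to place s unit weights
        for placement in combinations_with_replacement(range(len(edges)), s):
            wt = [0] * len(edges)
            for i in placement:
                wt[i] += 1
            if all(sum(wt[i] for i in sub) >= w for sub in subs):
                return s
        s += 1

for n in (4, 5):
    assert min_total_weight(n) == -(-(n * (n - 2)) // 3)  # ceil(n(n-2)/3)
```

For $n=5$ the minimum $5$ is realized by a $5$-cycle with unit edge weights: any four of its vertices span a path of total weight $3$.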
\begin{problem} Fix any integers $n$, $k$, and $w$. Define $\Phi(n,k,w)$ to be the set of all graphs $G$ on $n$ vertices with non-negative integer weights on their edges which satisfy that the subgraph spanned by any $k$ vertices of $G$ has total weight at least $w$. Let $\phi(n,k,w)$ be the minimal total weight among all $G\in \Phi(n,k,w)$. Determine $\phi(n,k,w)$. \end{problem}

When $w=1$, this is essentially determined by a classical theorem of extremal graph theory called Tur\'an's theorem, which determines the graph on $n$ vertices having the maximal number of edges but not containing a $k$-vertex clique. See \cite{Turan1941} or, for a more modern treatment, \cite[Theorem 12.2]{GraphsAndDigraphs}.

\subsection{Outline of the paper} Habiro \cite{Hab1} gives a family of moves called clasper surgery which generalizes both crossing changes and the $\Delta$-move. In Section~\ref{sect: clasper} we recall this language and use it to verify the intuitive fact that $n_h(L)$ and $n_\Delta(L)$ are invariants of the link homotopy class of $L$. As a consequence we can take advantage of the classification of link homotopy due to Habegger-Lin, in terms of the group $\mathcal{H}(n)$ of string links up to link homotopy. In Section~\ref{sect: link homotopy classification} we recall elements of this classification and study how $n_{h}$ and $n_{\Delta}$ interact with the structure of this group. $\H(n)$ decomposes as a semi-direct product of a sequence of nilpotent groups, called reduced free groups. By working over this decomposition, in Section~\ref{sect:bounding homotopy trivialing number} we prove half of Theorem~\ref{thm: main}, that $C_n\le (n-1)(n-2)$. In Section~\ref{sect: Delta change} we use a similar logic applied to the $\Delta$-move to prove Theorem~\ref{thm: main Delta}. When $n=4$, $\H(4)$ is small enough that we can check the homotopy trivializing numbers of every element of the group.
We do so in Section~\ref{sect: 4 component}, proving much more than is stated as Theorem~\ref{thm: 4-component sampler}. Finally, in Section~\ref{sect: links with large n_h} we prove the graph theoretic result, Theorem~\ref{thm:min_weight_phi_n}, and use it to complete the proof of Theorem~\ref{thm: main}.

\section{Clasper surgery}\label{sect: clasper}

In \cite{Hab1}, Habiro introduces the notion of clasper surgery. These moves provide a useful language for crossing changes and $\Delta$-moves. We use the following definition from \cite[Definition 2.1]{KiMi2023}.

\begin{definition} An embedded disk $\tau$ in $S^3$ is called a \emph{simple tree clasper} for a link $L$ if $\tau$ decomposes as a union of bands and disks satisfying the following:
\begin{enumerate}
\item Each band connects two distinct disks and each disk is attached to either one or three bands. A disk attached to only one band is called a \emph{leaf}.
\item $L$ intersects $\tau$ transversely and each point of $L\cap \tau$ is interior to a leaf.
\item Each leaf intersects $L$ transversely in exactly one point.
\end{enumerate}
See Figure~\ref{fig:ClasperDisk} for a generic picture. A $C_k$ tree is a simple tree clasper with exactly $k+1$ leaves. Notice that a $C_k$ tree can be reconstructed from its disks together with a single framed arc along each band. Thus, we will record a clasper as a union of disks and (framed) arcs in between, as in Figure~\ref{fig:ClasperArc}. When no framing is specified, we impose the blackboard framing.
\end{definition} \begin{figure} \centering \begin{subfigure}[b]{0.25\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[height=.45\textwidth]{GenericClasperDiskrevised.pdf}}; \end{tikzpicture} \caption{A simple tree clasper.} \label{fig:ClasperDisk} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[height=.3\textwidth]{GenericClasperArcRevised.pdf}}; \end{tikzpicture} \caption{A clasper as framed arcs and disks.} \label{fig:ClasperArc} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[height=.45\textwidth]{GenericClasperSurgeryRevised.pdf}}; \end{tikzpicture} \caption{Clasper surgery.} \label{fig:ClasperSurgery} \end{subfigure} \caption{} \label{fig: Claspers} \end{figure} Given a $C_k$ tree $\tau$ for a link $L$, the result of clasper surgery along $\tau$ is given in Figure~\ref{fig:ClasperSurgery}. A crossing change can be expressed as clasper surgery along a $C_1$ tree and a $\Delta$-move as clasper surgery along a $C_2$ tree. We can use the language of claspers to define the \emph{homotopy trivializing number}. A link $L$ can be reduced to a homotopy trivial link in $k$ crossing changes if there is a collection of $k$ disjoint $C_1$-claspers for $L$ so that the result of surgery along these claspers is homotopy trivial. Then, $n_h(L)$ is the minimal such value of $k$. Similarly, $n_\Delta(L)$ is defined using $C_2$-claspers. The $\Delta$-move is done by a single $C_2$-clasper surgery and conversely a $C_2$-clasper surgery can be done by a $\Delta$-move. Figure~\ref{fig: Delta by Clasper surgery} reveals how to perform the $\Delta$-move via a $C_2$-clasper, and Figure~\ref{fig: Clasper surgery by Delta} shows how $C_2$ clasper surgery can be undone by a $\Delta$-move. See also \cite[Section 7.1]{Hab1}. 
Thus, we define $n_{\Delta}(L)$ to be the minimal number of $C_2$-clasper surgeries needed to transform $L$ to a homotopy trivial link. It follows that $n_h(L)$ and $n_\Delta(L)$ are invariant under link homotopy. \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=.9\textwidth]{DeltaMoveClasper}}; \end{tikzpicture} \caption{} \label{fig: Clasper for Delta} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.9\textwidth]{DeltaMoveClasperSurgery}}; \end{tikzpicture} \caption{} \label{fig: Clasper for Delta surgery} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.9\textwidth]{DeltaMoveAfter}}; \end{tikzpicture} \caption{} \label{fig: Clasper for Delta done} \end{subfigure} \caption{Left to right: \pref{fig: Clasper for Delta} A $C_2$-clasper which realizes the $\Delta$-move. \pref{fig: Clasper for Delta surgery} Performing Clasper surgery. \pref{fig: Clasper for Delta done} After an isotopy we get the result of the $\Delta$-move. 
} \label{fig: Delta by Clasper surgery} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.22\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=.9\textwidth]{C2Surgery}}; \end{tikzpicture} \caption{} \label{fig: C2Surgery} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.9\textwidth]{C2SurgeryDeltaLocus}}; \end{tikzpicture} \caption{} \label{fig: C2Surgery Delta Locus} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.9\textwidth]{C2SurgeryDeltaDone}}; \end{tikzpicture} \caption{} \label{fig: C2Surgery Delta Done} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.9\textwidth]{C2SurgeryDeltaDoneIsotope}}; \end{tikzpicture} \caption{} \label{fig: C2Surgery Delta undone} \end{subfigure} \caption{Left to right: \pref{fig: C2Surgery} The result of $C_2$ clasper surgery. \pref{fig: C2Surgery Delta Locus} After an isotopy we see a place to perform a $\Delta$-move. \pref{fig: C2Surgery Delta Done} Performing the $\Delta$-move. \pref{fig: C2Surgery Delta undone} An isotopy reduces this to the trivial tangle. } \label{fig: Clasper surgery by Delta} \end{figure} \begin{theorem}\label{thm: homotopy invariance} If $L$ and $J$ are link homotopic, then $n_h(L) = n_h(J)$ and $n_\Delta(L) = n_\Delta (J)$. \end{theorem} \begin{proof} If $L$ and $J$ are link homotopic, there is a collection of $C_1$ claspers $\tau$ for $J$, each of which intersects only one component of $J$, so that changing $J$ by surgery along $\tau$ results in $L$. As a positive crossing change can be undone by a negative crossing change, there is a collection of $C_1$ claspers $\overline{\tau}$ for $L$ so that surgery along $\overline{\tau}$ results in $J$. Suppose that $n_h(L)=k$. 
Then there is a collection of $k$ many $C_1$ claspers, $\tau'$, for $L$ so that performing surgery along $\tau'$ changes $L$ to a homotopy trivial link, $L'$. We may now isotope ${\tau}'$ so that it is disjoint from $\overline{\tau}$. By performing surgery along $\overline{\tau}$, we may now think of $\tau'$ as a sequence of crossing changes for $J$. For the sake of clarity call this new collection $\tau'_J$, and the link resulting from surgery $J'$. Summarizing, we now have a collection of claspers $\tau\cup\tau'_J$ for $J$ so that surgery along this collection results in the homotopy trivial link $L'$. Since the order in which we perform surgery does not affect the result, we may first perform surgery along $\tau'_J$ to get a new link $J'$ and then change $J'$ by surgery along $\tau$. As each component of $\tau$ intersects only one component of $J'$ it follows that $J'$ is link homotopic to $L'$, and so is itself homotopy trivial. We have now produced a collection of $k$ many claspers $\tau'_J$ for $J$ so that surgery along $\tau'_J$ results in a homotopy trivial link. Thus, $n_h(J)\le k=n_h(L)$. The reverse inequality follows by the same argument, as does the proof that $n_\Delta(L)=n_{\Delta}(J)$. \end{proof} \section{String links and Habegger-Lin's classification of link homotopy}\label{sect: link homotopy classification} Let $\mathcal{LH}_n$ be the set of $n$-component links up to link homotopy. By Theorem~\ref{thm: homotopy invariance}, $n_h(L)$ depends only on the equivalence class of $L$ in $\mathcal{LH}_n$. See also \cite[Remark 6.3]{DOP22}. As a consequence, we can appeal to the classification of links up to link homotopy due to Habegger-Lin \cite{HL1} as well as earlier work of Goldsmith \cite{Goldsmith73} in order to organize our argument. In this section we recall some elements of this classification and explain the strategy we will follow. \begin{definition} Let $p_1,\dots, p_n$ be distinct points interior to the unit disk $D^2$.
An $n$-component \emph{string link} $T$ is an isotopy class of disjoint embedded arcs $T_1\cup\dots\cup T_n$ in $D^2\times[0,1]$ with $T_i$ running from $p_i\times\{0\}$ to $p_i\times\{1\}$. Two string links are called link homotopic if one can be transformed to the other by a sequence of self-crossing changes. $\SL_n$ denotes the set of $n$-component string links and $\H(n)$ the set of $n$-component string links up to link homotopy. A string link is homotopy trivial if it is link homotopic to the trivial string link. \end{definition} The notions of clasper surgery, crossing change and $\Delta$-moves all extend to string links, and so the definitions of $n_h$ and $n_{\Delta}$ extend in the obvious way to string links, where they depend only on the class of a string link in $\H(n)$. \begin{figure}[h] \subcaptionbox{$A*B$: The result of stacking string links $A$ and $B$.\label{fig: stacking}}[0.3\linewidth]{ \begin{tikzpicture}[scale=1.2] \draw[thick, black] (0.2,-0.2) -- (0.2, 2.2); \draw[thick, black] (0.5,-0.2) -- (0.5, 2.2); \draw[thick, black] (0.8,-0.2) -- (0.8, 2.2); \draw[thick, black] (1.8,-0.2) -- (1.8, 2.2); \draw[thick, black] (2.1,-0.2) -- (2.1, 2.2); \draw[thick, black, fill=white] (0,0) rectangle (2.3,0.8); \draw[thick, black, fill=white] (0,1.2) rectangle (2.3,2); \node[inner sep=0pt] at (1.17,0.4) {$B$}; \node[inner sep=0pt] at (1.17,1.63) {$A$}; \node[inner sep=0pt] at (1.34,-0.13) {\scalebox{1}{$\dots$}}; \node[inner sep=0pt] at (1.34,1) {\scalebox{1}{$\dots$}}; \node[inner sep=0pt] at (1.34,2.13) {\scalebox{1}{$\dots$}}; \node[above] at (1, -0.3) {$\,$}; \end{tikzpicture} } \hfill \subcaptionbox{{The closure $\widehat T$ of a string link $T$ together with a $d$-base, $D$.\label{fig: closure}}}[0.3\linewidth]{ \begin{tikzpicture}[scale=0.6] \draw (-.2,-.5)--(2.2,-.5) arc(-90:90:.2)--(-.2,-.1)arc(90:270:.2); \draw[fill=gray, opacity=.5] (-.2,-.5)--(2.2,-.5) arc(-90:90:.2)--(-.2,-.1)arc(90:270:.2); \node[left] at(-.2,-.3) {$D$}; \draw[fill=black](.2,-.3) circle
(.04); \draw[fill=black](.5,-.3) circle (.04); \draw[fill=black](.8,-.3) circle (.04); \draw[fill=black](1.8,-.3) circle (.04); \draw[fill=black](2.1,-.3) circle (.04); \draw[thick, black] (0.2,-0.3) -- (0.2, 1); \draw[thick, black] (0.2,-0.5) -- (0.2, -.7); \draw[thick, black] (0.5,-0.3) -- (0.5, 1); \draw[thick, black] (0.5,-0.5) -- (0.5, -.7); \draw[thick, black] (0.8,-0.3) -- (0.8, 1); \draw[thick, black] (0.8,-0.5) -- (0.8, -.7); \draw[thick, black] (1.8,-0.3) -- (1.8, 1); \draw[thick, black] (1.8,-0.5) -- (1.8, -.7); \draw[thick, black] (2.1,-0.3) -- (2.1, 1); \draw[thick, black] (2.1,-0.7) -- (2.1, -.5); \draw[thick, black, fill=white] (0,0) rectangle (2.3,0.8); \node[inner sep=0pt] at (1.2,0.4) {$T$}; \node[inner sep=0pt] at (2.4,-1.5) {\scalebox{1}{$\vdots$}}; \node[inner sep=0pt] at (2.4, 2.2) {\scalebox{1}{$\vdots$}}; \draw[thick, black] (2.1, 1) arc (-180:-360:.2); \draw[thick, black] (1.8, 1) arc (-180:-360:{.2+.3}); \draw[thick, black] (.8, 1) arc (-180:-360:{.2+.3+1}); \draw[thick, black] (.5, 1) arc (-180:-360:{.2+.3+1+.3}); \draw[thick, black] (.2, 1) arc (-180:-360:{.2+.3+1+.3+.3}); \draw[thick, black] (2.1, -0.7) arc (180:360:.2); \draw[thick, black] (1.8, -0.7) arc (180:360:{.2+.3}); \draw[thick, black] (.8, -0.7) arc (180:360:{.2+.3+1}); \draw[thick, black] (.5, -0.7) arc (180:360:{.2+.3+1+.3}); \draw[thick, black] (.2, -0.7) arc (180:360:{.2+.3+1+.3+.3}); \draw[thick, black] ({2*2.3-0.2},-0.7) -- ({2*2.3-0.2}, 1); \draw[thick, black] ({2*2.3-0.5},-0.7) -- ({2*2.3-0.5}, 1); \draw[thick, black] ({2*2.3-0.8},-0.7) -- ({2*2.3-0.8}, 1); \draw[thick, black] ({2*2.3-1.8},-0.7) -- ({2*2.3-1.8}, 1); \draw[thick, black] ({2*2.3-2.1},-0.7) -- ({2*2.3-2.1}, 1); \end{tikzpicture} } \hfill \subcaptionbox{{$\phi\colon RF(n-1)\to \mathcal{H}(n)$ sends $x_i$ to the string link $x_{in}$ above. 
\label{fig: psi(x_i)}}}[0.3\linewidth]{ \begin{tikzpicture}[scale=1] \node[above] at (0.2, 2) {$T_1$}; \draw[thick, black] (0.2,-0.2) -- (0.2, 2); \node[inner sep=0pt] at (.7,2.3) {\scalebox{1}{$\dots$}}; \node[above] at (1.2, 2) {$T_i$}; \draw[thick, black] (1.2, 2)--(1.2,1.3); \draw[thick, black] (1.2, 1.1)--(1.2,-0.2); \node[inner sep=0pt] at (1.7,2.3) {\scalebox{1}{$\dots$}}; \node[above] at (2.4, 2) {$T_{n-1}$}; \draw[thick, black] (2.4, 2)--(2.4,1.3); \draw[thick, black] (2.4, 1.1)--(2.4,.5); \draw[thick, black](2.4,.3)--(2.4,-0.2); \node[above] at (3.2, 2) {$T_{n}$}; \draw[thick, black] (3.2, 2)--(3.2,1.6) arc (0:-90:.4)--(1.1,1.2) arc(90:270:.4); \draw[thick, black] (1.3,.4)--(2.8,.4) arc(90:0:.4)--(3.2,-.2); \node[above] at (1, -0.4) {$\,$}; \end{tikzpicture} } \caption{} \label{fig: stacking and closure} \end{figure} Since any link is the closure of some string link, the maps $\SL_n\to \L_n$ and $\mathcal{H}(n) \to \mathcal{LH}_n$ sending a string link $T$ to its closure $\widehat{T}$ are surjective. See Figure~\ref{fig: closure}. The disk $D$ also appearing in Figure~\ref{fig: closure} is called a \emph{$d$-base} for $L$. It is clear that $n_h(\widehat{T})\le n_h(T)$; indeed, a sequence of crossing changes reducing $T$ to a homotopy trivial string link immediately gives rise to a sequence of crossing changes reducing $\widehat{T}$ to a homotopy trivial link. More surprisingly, the reverse inequality holds, so that nothing is lost by studying the homotopy trivializing number over string links instead of links. \begin{proposition} For any $T\in \H(n)$, $n_h(T)=n_h(\widehat{T})$ and $n_{\Delta}(T) = n_{\Delta}(\widehat{T})$. \end{proposition} \begin{proof} Let $T$ be a string link, $L=\widehat T$, and $D$ be the associated $d$-base. If $n_h(L)=k$ then there exists a collection of $k$ disjoint $C_1$-trees, $\tau$, for $L$ so that surgery along $\tau$ transforms $L$ to a homotopy trivial link $L'$. First isotope $\tau$ so that each of its leaves is disjoint from $D$.
As in Figure~\ref{fig: Crossing change disk avoids dbase} we may now perform a further isotopy to arrange that all of $\tau$ is disjoint from $D$. As a consequence we can view $\tau$ as a collection of $C_1$-trees for $T$ in $S^3\setminus \nu(D)\cong D^2\times[0,1]$. After changing $T$ by surgery along $\tau$ one arrives at a new string link $T'$ whose closure $\widehat{T'}=L'$ is homotopy trivial. By \cite[Corollary 2.7]{HL1}, $T'$ is then itself homotopy trivial. As a consequence $n_h(T)\le k= n_h(L)$. The same argument, with $C_2$-trees in place of $C_1$-trees, shows that $n_{\Delta}(T)\le n_{\Delta}(L)$. \begin{figure} \centering \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=.8\textwidth]{DbaseIsectsCrossingDisk.pdf}}; \end{tikzpicture} \caption{A $C_k$ tree for $\widehat T$ intersecting a $d$-base in an arc.} \label{fig:dbaseIntersectsCrossingDisk} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=.8\textwidth]{DbaseDoesntIsectCrossingDisk.pdf}}; \end{tikzpicture} \caption{After an isotopy we remove this intersection.} \label{fig:dbaseMissesCrossingDisk} \end{subfigure} \hfill \caption{} \label{fig: Crossing change disk avoids dbase} \end{figure} \end{proof} The advantage of working with string links rather than links up to link homotopy is that string links up to link homotopy form a group under the stacking operation of Figure~\ref{fig: stacking}. The inverse $\overline{T}$ of $T$ is given by reflecting $T$ over $D^2\times\{1/2\}$ and then reversing the orientations. A key step in Habegger-Lin's classification of links up to link homotopy \cite{HL1} is the following split short exact sequence. \begin{equation}\label{exact sequence} \begin{tikzcd} 0 \arrow{r} & RF(n-1)\arrow{r}{\phi} & \mathcal{H}(n)\arrow{r}{p} & \arrow[dashed, bend left=33]{l}{s}\mathcal{H}(n-1)\arrow{r} & 0.
\end{tikzcd} \end{equation} Recall that $RF(n-1)$ is the \emph{reduced free group}, that is, the quotient of the free group $F(n-1) = F(x_1,\dots, x_{n-1})$ given by killing the commutator of each $x_i$ with any conjugate of itself. (Thus, in $RF(n-1)$, $x_i$ commutes with $\gamma x_i \gamma^{-1}$ for each $i$ and any $\gamma\in RF(n-1)$.) The map $\phi$ is given by sending the generator $x_i$ to the string link $x_{i,n}$ of Figure~\ref{fig: psi(x_i)}. When it will not result in confusion, we drop the comma and write $x_{in}$. The map $p:\H(n)\to \H(n-1)$ is given by deleting the $n$'th component of a string link, and the splitting $s:\H(n-1)\to \H(n)$ is given by introducing a new unknotted component unlinked from the rest. Recall that for any group $G$ and any $g,h\in G$ the commutator of $g$ with $h$ is defined by $[g,h] = {g}^{-1}{h}^{-1} gh$ (so that $gh = hg[g,h]$). The following results will turn out to be central to the proof of Theorem~\ref{thm: main}. \begin{proposition}\label{prop: basics of nhl} Let $n\in \N$, $i\neq j\in \{1,\dots, n\}$, $T,S\in \H(n)$, and $r\in RF(n-1)$. \begin{enumerate} \item\label{item:nh(x_i)} $n_h(x_{ij})=1$ and $n_{\Delta}(x_{ij})=\infty$. \item \label{item:nh(product)} $n_h(T*S) \le n_h(T)+n_h(S)$, and $n_\Delta(T*S) \le n_\Delta(T)+n_\Delta(S)$. \item \label{item:nh(commutator)}$n_h([T,S])\le 2\cdot \min(n_h(T), n_h(S))$ and $n_\Delta([T,S])\le 2\cdot \min(n_\Delta(T), n_\Delta(S))$. \item \label{item:nh([x,w])} $n_h([T,x_{ij}])\le 2$. \item \label{item:ndelta(commutator)} $n_{\Delta}(\phi([rx_i r^{-1}, x_j]))\le 1$. \end{enumerate} \end{proposition} \begin{proof} To see the first conclusion, observe that $x_{ij}$ is transformed to the trivial string link by changing a single crossing. Thus, $n_h(x_{ij})\le1$. Since linking number is a link homotopy obstruction, and $x_{ij}$ is not homotopy trivial, it follows that $n_h(x_{ij})=1$.
The $\Delta$-move preserves linking number, so $x_{ij}$ cannot be unlinked by $\Delta$-moves. Thus, $n_\Delta(x_{ij})=\infty$. Next, suppose $n_h(T)=k$ and $n_h(S)=\ell$. Then $T*S$ can be transformed to $I*S=S$ by $k$ crossing changes. Here $I$ is (link homotopic to) the $n$-component trivial string link. An additional $\ell$ crossing changes transform this to the trivial element of $\H(n)$. To see the third result, notice that by changing $k$ crossings, $[T,S] = {T}^{-1}{S}^{-1}TS$ is transformed to ${T}^{-1}{S}^{-1}S = T^{-1}$. Another $k$ crossing changes transform it to a homotopy trivial string link. Thus, $n_h([T,S])\le 2n_h(T)$. By a similar analysis, $n_h([T,S])\le 2n_h(S)$. The fourth result is an immediate corollary of the first and third. Finally, let $r\in RF(n-1)$, and $S=\phi([rx_i r^{-1}, x_j])$. In Figure~\ref{fig: 3-clasper for Delta} we see a $C_2$-clasper on the trivial string link. In Figure~\ref{fig: 3-clasper surgery for delta} we see the result of clasper surgery, call it $T$. As the leftmost $n-1$ components of each of $T$ and $S$ are unlinked, $S,T\in \H(n)$ depend only on the class of their $n$'th component in the fundamental group of the complement of the first $n-1$ components, which is the free group on the meridians $m_1,\dots, m_{n-1}$. Using the Wirtinger presentation we write the homotopy classes of $T_n$ and $S_n$ as words in these meridians. In each case we get that $[T_n]=[S_n] = [m_i,\psi(r)m_j\psi(r)^{-1}]$. Here $\psi$ is the map given by replacing each $x_k$ by the corresponding meridian $m_k$. Thus, $S$ and $T$ are link homotopic, and since $T$ is obtained from the trivial string link by a single $C_2$-clasper surgery, $n_\Delta(S)=n_\Delta(T)\le 1$. Claim~(\ref{item:ndelta(commutator)}) follows.
\begin{figure}[h] \subcaptionbox{ A $C_2$-clasper on the trivial string link $T$.\label{fig: 3-clasper for Delta}}[0.4\linewidth]{ \begin{tikzpicture} \node at (0,0) { \includegraphics[height=.2\textheight]{commutatorClasper}}; \node at (0.3,2.5) {$T_j$}; \node at (-0.4,2.5) {$T_i$}; \node at (1.3,2.5) {$T_m$}; \node at (-.2,-.2) {$\phi(r)$}; \end{tikzpicture} } \hspace{.1\textwidth} \subcaptionbox{Performing clasper surgery. \label{fig: 3-clasper surgery for delta} }[0.35\linewidth]{ \begin{tikzpicture} \node at (0,0) {\includegraphics[height=.2\textheight]{CommutatorSurgery}}; \node at (-.2,.-.22) {$\phi(r)$}; \node at (0.3,2.5) {$T_j$}; \node at (-0.4,2.5) {$T_i$}; \node at (1.3,2.5) {$T_m$}; \end{tikzpicture} } \caption{ } \label{fig: unknotting phi([rxrbar,y])} \end{figure} \end{proof} \section{Bounding the homotopy trivializing number}\label{sect:bounding homotopy trivialing number} In this section, we prove Theorem~\ref{upper bound theorem main} which we use in the introduction to conclude that $C_n\le (n-1)(n-2)$. The bulk of our work will be in proving the following theorem which allows us to realize elements of $RF(m)$ as a product of a minimal number of powers of the preferred generators along with a short list of commutators. \begin{theorem}\label{hackey idea} Any element of $RF(m)$ can be written in the form $$ \prod_{k=0}^{m-1} x_{m-k}^{\alpha_{m-k}} \prod_{k=1}^{m-1}[\omega_k, x_k] $$ with $\alpha_1,\dots, \alpha_m\in \Z$ and $\omega_1,\dots, \omega_{m-1}\in RF(m)$. \end{theorem} Before proving Theorem~\ref{hackey idea}, we will use it to prove Theorem~\ref{upper bound theorem main}. We start with the proof in the special case of a string link in the image of $\phi:RF(n-1)\to \H(n)$. \begin{corollary}\label{nhL for RF} Suppose that $T\in \H(n)$ is in the image of $\phi:RF(n-1)\to \H(n)$. Let $Q(T)=\#\{1\leq k<n-1\mid \lk(T_n,T_k)=0\}$. Then $n_h(T)\le \Lambda(T)+2Q(T)$. 
\end{corollary} \begin{proof} Recall that $\phi:RF(n-1)\to \H(n)$ is given by $\phi(x_i)=x_{i,n}$. We appeal to Theorem~\ref{hackey idea}, apply $\phi$, and emphasize the terms of each product involving $x_{1,n}$, $$ T= \prod_{k=0}^{n-2} x_{n-1-k, n}^{\alpha_{n-1-k}} \prod_{k=1}^{n-2}[\omega_k, x_{k,n}]= \left(\prod_{k=0}^{n-3} x_{n-1-k,n}^{\alpha_{n-1-k}}\right)\cdot x_{1,n}^{\alpha_{1}}\cdot [\omega_1, x_{1,n}] \left(\prod_{k=2}^{n-2}[\omega_k, x_{k,n}]\right). $$ For notational ease, we have conflated $\omega_k$ with $\phi(\omega_k)$. If $\alpha_{1}>0$ then we can undo the center-most terms, $x_{1,n}^{\alpha_{1}}\cdot [\omega_1, x_{1,n}]$, in $\alpha_{1}$ crossing changes. Indeed, $$x_{1,n}^{\alpha_{1}}\cdot[\omega_1, x_{1,n}] = x_{1,n}^{\alpha_{1}}\cdot[x_{1,n},\omega_1^{-1}] = x_{1,n}^{(\alpha_{1}-1)}\omega_1 x_{1,n} \omega_1^{-1}.$$ After $\alpha_{1}$ crossing changes, this is transformed to $\omega_1\omega_1^{-1}=1$. Similarly, if $\alpha_{1}<0$ then $$x_{1,n}^{\alpha_{1}}\cdot[\omega_1, x_{1,n}] = x_{1,n}^{\alpha_{1}}\cdot[x_{1,n}^{-1}, \omega_1] = x_{1,n}^{(\alpha_{1}+1)}\omega_1^{-1} x_{1,n}^{-1} \omega_1$$ can be undone in $|\alpha_{1}|$ crossing changes. If $\alpha_{1}=0$, then it is undone in 2 crossing changes. Thus, if we set $q_{1} = \begin{cases}0&\text{ if }\alpha_{1}\neq 0\\2&\text{ if }\alpha_{1}= 0\end{cases},$ then after $|\alpha_{1}|+q_{1}$ crossing changes, $T$ is transformed into $$ \prod_{k=0}^{n-3} x_{n-1-k, n}^{\alpha_{n-1-k}}\prod_{k=2}^{n-2}[\omega_k, x_{k,n}]. $$ A direct induction now reveals that $$n_h(T)\le \sum_{k=1}^{n-1} \left(|\alpha_{k}|+q_{k}\right) $$ where $q_{k} = \begin{cases}0&\text{ if }|\alpha_{k}|\neq 0\text{ or }k=n-1\\2&\text{otherwise}\end{cases}$. The observation that $\alpha_{k}=\lk(T_n, T_k)$ and that $\sum_{k=1}^{n-1} q_{k}=2 \cdot Q(T)$ completes the proof. \end{proof} Now suppose that $L=L_1\cup\dots\cup L_n$ is a Brunnian link.
It follows then that $L_1\cup\dots\cup L_{n-1}$ is the unlink, and if we realize $L$ as $\widehat{T}$ for some $T\in \H(n)$ then we may take $T_1\cup\dots\cup T_{n-1}$ to be the trivial string link, and thus $T$ is in the image of $\phi:RF(n-1)\to \H(n)$. If $L$ is Brunnian and has at least 3 components, then all of the pairwise linking numbers vanish, so $\Lambda(T)=0$ and $Q(T)=n-2$. The corollary below follows. \nhlForBruunian Induction and the decomposition $\H(n)\cong \H(n-1)\ltimes RF(n-1)$ now let us control the homotopy trivializing number over all $n$-component links. \UpperBoundTheorem \begin{proof} We proceed inductively on the number of components. Realize $L$ as $L=\widehat T$ for some $T\in \H(n)$. As a consequence of the split exact sequence of \pref{exact sequence}, $T=\phi(S)s(T')$ with $S\in RF(n-1)$ and $T'\in \H(n-1)$. By Corollary~\ref{nhL for RF}, $n_h(\phi(S))\le \Lambda(\phi(S))+2Q(\phi(S))$. Appealing to induction, $ n_h(s(T'))=n_h(T')\le \Lambda(T')+2Q(T'). $ Putting this together, $$ n_h(L)\le n_h(\phi(S))+n_h(T')\le \Lambda(\phi(S))+\Lambda(T')+2Q(\phi(S))+2Q(T') = \Lambda(L)+2Q(L). $$ This completes the proof. \end{proof} \subsection{Representing elements of $RF(m)$ as products without too many commutators.} In this subsection we prove Theorem~\ref{hackey idea}. We begin with a recollection of basic properties of commutators and the lower central series. A standard reference is the work of Magnus-Karrass-Solitar \cite[Chapter 5]{MKS76}. We first describe the weight of a commutator, and then apply these facts to the reduced free group. \begin{definition} Let $G$ be a group and $x_1,\dots, x_m$ be a generating set for $G$. Then we call $x_1,\dots, x_m$ weight 1 commutators. If $c_1$ and $c_2$ are commutators of \emph{weight} $w_1$ and $w_2$ respectively, then $[c_1, c_2]$ is a commutator of weight $w_1+w_2$.
\end{definition} \begin{definition} If $H$ and $J$ are subgroups of $G$ then $[H,J]\le G$ is the subgroup generated by elements of the form $[h,j]$ with $h\in H$ and $j\in J$. \end{definition} \begin{definition} The \emph{lower central series} of a group $G$ is defined recursively by the rule $G_1=G$ and $G_{k+1}=[G_k, G]$. \end{definition} We give several well known properties of commutators and their behaviour modulo lower central series quotients. Many of these are grouped together as the Hall-Witt identities. \begin{proposition}\label{prop: commutators} Let $G$ be a group with generators $x_1,\dots, x_m$. Let $a,b,c\in G$. \begin{enumerate} \item \cite[Theorem 5.3 (8)]{MKS76} $[G_k, G_\ell] \subseteq G_{k+\ell}$. \item\label{lower central normal} $G_k\unlhd G$ is a normal subgroup. \item \label{LCS Abelian}\cite[Theorem 5.4]{MKS76} $G_k/G_{k+1}$ is an Abelian group generated by the set of all weight $k$ commutators. \item \label{item Commutator product 1} \cite[Theorem 5.1 (9), (10)]{MKS76} $[a, bc]=[a,c][a,b][[a,b],c]$ and $[bc,a] = [b,a][[b,a],c][c,a]$. \item \cite[Theorem 5.3 (5), (6)]{MKS76} \label{item Commutator product 2} If $a\in G_u$, $b\in G_v$ and $c\in G_w$ then in $G/G_{u+v+w}$, $[a, bc]=[a,b][a,c]$ and $[bc,a] = [b,a][c,a]$. \item \cite[Theorem 5.1 (8)]{MKS76} $[a,b]^{-1} = [b,a]$. \item \label{commutator inverse} $[a, b^{-1}]=[a,b]^{-1}[b, [a,b^{-1}]]$ and $[a^{-1},b]=[a,b]^{-1}[a, [a^{-1},b]]$. \item \label{commutator inverse 2} If $a\in G_u$ and $b\in G_v$ then in $G/G_{u+2v}$, $[a,b^{-1}] = [a,b]^{-1}$. In $G/G_{2u+v}$, $[a^{-1},b]=[a,b]^{-1}$. \item \label{commutation well defined LCS} If $a=b$ in $G/G_u$ and $c\in G_v$ then $[a,c]=[b,c]$ in $G/G_{u+v}$. \end{enumerate} \end{proposition} \begin{proof} We prove only those results which do not explicitly appear in \cite{MKS76}. If $A$ and $B$ are normal in $G$, then $[A,B]$ is also normal (see for example \cite[Lemma 5.1]{MKS76}). Together with induction, \pref{lower central normal} follows. 
If $a,b \in G$ then by \pref{item Commutator product 1} $$ \begin{array}{l} 1=[a,b^{-1}\cdot b] =[a,b][a,b^{-1}][[a,b^{-1}],b], \\ 1=[a^{-1}\cdot a, b] = [a^{-1},b][[a^{-1}, b],a][a,b].\end{array} $$ Claim \pref{commutator inverse} follows. If $a\in G_u$ and $b\in G_v$, then $[b, [a,b^{-1}]]\in [[G_u, G_v],G_v]\subseteq G_{u+2v}$, proving \pref{commutator inverse 2}. Finally, if $a=b$ in $G/G_u$, then $b=a q$ with $q\in G_u$, and $$[b,c] = [aq,c]=[a,c][[a,c],q][q,c].$$ If $c\in G_v$ then $[[a,c],q]$ and $[q,c]$ are each in $G_{u+v}$, proving \pref{commutation well defined LCS}. \end{proof} Recall that the reduced free group $RF(m)$ on letters $x_1,\dots, x_m$ is the quotient of the free group on $x_1,\dots, x_m$ given by requiring each conjugate of $x_i$ to commute with each other conjugate of $x_i$ for all $i$. This results in some commutativity relations among commutators. First we explain recursively the fairly intuitive notion of what it means for a generator to be ``\emph{in}'' a commutator. \begin{definition} Let $x_1,\dots, x_m$ be generators of a group $G$. We say that $x_i$ is in $x_j$ with multiplicity 1 if $i=j$ and otherwise $x_i$ is in $x_j$ with multiplicity $0$. If $a$ and $b$ are commutators, $x_i$ is in $a$ with multiplicity $p$, and $x_i$ is in $b$ with multiplicity $q$, then $x_i$ is in $[a,b]$ with multiplicity $p+q$. Whenever $x_i$ is in $a$ with multiplicity greater than 0 we will simply say that $x_i$ is in $a$. \end{definition} \begin{proposition}\label{prop: RF commute} If $a$ and $b$ are commutators in $RF(m)$ and $x_i$ is in each of $a$ and $b$, then for any $\gamma,\delta\in RF(m)$ and any $k,\ell\in \Z$, $[\gamma a^k\gamma^{-1},\delta b^\ell \delta^{-1}]=1$ in $RF(m)$. \end{proposition} \begin{proof} The definition of the reduced free group immediately implies that the normal subgroup generated by $x_i$ is Abelian.
We proceed by demonstrating via induction on $\wt(a)$ that if $x_i$ is in $a$ then $a$ is in the normal subgroup generated by $x_i$. When $\wt(a)=1$, $a=x_i$ and we are done. If $\wt(a)>1$ then $a=[u,v]$ and $x_i$ is in at least one of $u$ and $v$. Without loss of generality assume it is in $u$, so that we may inductively assume that $u$ is in the normal subgroup generated by $x_i$. Thus, $u^{-1}$ and $v^{-1} u v$ are each in the normal subgroup generated by $x_i$. As a consequence $a=[u,v]=u^{-1} (v^{-1} u v)$ is in the normal subgroup generated by $x_i$. Thus, each of $\gamma a^k \gamma^{-1}$ and $\delta b^\ell \delta^{-1}$ is in the normal subgroup generated by $x_i$, which is Abelian by the definition of $RF(m)$. We conclude that $[\gamma a^k\gamma^{-1},\delta b^\ell \delta^{-1}]=1$ as we claimed. \end{proof} \begin{proposition}\label{prop inverses in RF} For any commutators $a$ and $b$ in $RF(m)$ and $k\in \Z$, $[a,b]^{k}=[a^{k},b]=[a, b^{k}]$. \end{proposition} \begin{proof} When $k>1$ we proceed by induction. By Proposition~\ref{prop: commutators}, \pref{item Commutator product 1} $$ [a^k,b] = [a^{k-1},b][[a^{k-1},b],a][a,b]. $$ Let $x_i$ be any of the preferred generators of $RF(m)$ which is in $a$. Then $[a^{k-1},b]$ and $a$ each sit in the normal subgroup generated by $x_i$, which is Abelian. Thus, $[[a^{k-1},b],a]=1$. Appealing to the inductive assumption completes the argument when $k\ge 0$. It suffices now to verify the claim when $k=-1$. By Proposition~\ref{prop: commutators}, \pref{commutator inverse}, $[a^{-1},b] = [a,b]^{-1}[a,[a^{-1},b]]$. As $a$ and $[a^{-1},b]$ each sit in the normal subgroup generated by some $x_i$, $[a,[a^{-1},b]]=1$. Since $[a, b^k] = [b^k, a]^{-1}$ the final claimed identity follows. \end{proof} \begin{proposition} \label{prop:rearrage} For any commutators $a$, $b$ and $c$, \[[[a,b],c] = [[c,b],a][[a,c],b]\] in the reduced free group.
\end{proposition} \begin{proof} There is an approach which relies heavily on \cite[Theorem 5.1 (12)]{MKS76}. We prefer to give a direct argument: using the relation $xy = yx[x,y]$, we follow the collection algorithm that realizes the Hall basis theorem \cite[Theorem 5.13 A]{MKS76} to gather terms together. Expand the commutator \begin{eqnarray*}[[a,b],c] &=& {[a,b]}^{-1}{c}^{-1} [a,b] c \\&=&{b^{-1}}{a^{-1}} b a ~{c^{-1}}~ {a^{-1}}{b^{-1}}ab~c. \end{eqnarray*} Use $xc=cx[x,c]$ to gather the $c$ and $ c^{-1}$ terms and cancel them: \begin{eqnarray*}[[a,b],c] &=& \Inv{b}\Inv{a} b a ~\Inv{c}~c~ \Inv{a}[\Inv{a},c]\Inv{b}[\Inv b,c]a[a,c]b[b,c] \\&=& \Inv{b}\Inv{a} b [\Inv{a},c]\Inv{b}[\Inv b,c]a[a,c]b[b,c]. \end{eqnarray*} Similarly gather together the first $b$ and $\Inv{b}$ and cancel: \begin{eqnarray*}[[a,b],c] &=& \Inv{b}~b~\Inv{a}[\Inv{a},b]~ [\Inv{a},c]\Inv{b}[\Inv b,c]a[a,c]b[b,c] \\&=& \Inv{a}[\Inv{a},b]~ [\Inv{a},c]\Inv{b}[\Inv b,c]a[a,c]b[b,c]. \end{eqnarray*} Gather together the two remaining $b$'s. This uses that by Proposition~\ref{prop: RF commute} $b$ commutes with $[\Inv{b},c]$ in the reduced free group: \begin{eqnarray*}[[a,b],c] &=& \Inv{a}[\Inv{a},b]~ [\Inv{a},c] \Inv{b}~b~[\Inv b,c]a[a,b][a,c][[a,c],b] [b,c] \\&=&\Inv{a}[\Inv{a},b]~ [\Inv{a},c] [\Inv b,c]a[a,b][a,c][[a,c],b] [b,c]. \end{eqnarray*} Finally, use that $a$ commutes with $[\Inv{a},c]$ and $[\Inv{a},b]$ to gather together the $a$ and $\Inv a$ terms: \begin{eqnarray*}[[a,b],c] &=&\Inv{a} a [\Inv{a},b]~ [\Inv{a},c] [\Inv b,c][[\Inv{b},c],a][a,b][a,c][[a,c],b][b,c] \\&=&[\Inv{a},b]~ [\Inv{a},c] [\Inv b,c][[\Inv{b},c],a][a,b][a,c][[a,c],b][b,c]. \end{eqnarray*} Any two of the commutators in this product have at least one of $a$, $b$, and $c$ in common, and so by Proposition~\ref{prop: RF commute} they commute in $RF(m)$.
Additionally, as a consequence of Proposition~\ref{prop: commutators} \pref{commutator inverse} and Proposition~\ref{prop: RF commute}, in the reduced free group $[\Inv{x},y] = \Inv{[x,y]}$ whenever $x$ and $y$ are commutators. As a consequence, most of the terms in this product cancel and we are left with \begin{eqnarray*}[[a,b],c] &=&[[\Inv{b},c],a][[a,c],b]. \end{eqnarray*} Finally, $[[\Inv{b},c],a] = [\Inv{[b,c]},a]=[[c,b],a]$ since $a$, $b$ and $c$ are commutators in the reduced free group. This completes the proof. \end{proof} We are now ready to progress in earnest to the proof of Theorem~\ref{hackey idea}. \begin{lemma}\label{lem: commutator as nice product} If $c$ is a commutator of weight $\wt(c)=w\ge 2$ in $RF(m)$ then there is a sequence $c_1,\dots, c_n$, each a commutator of weight $\wt(c)-1$ or the inverse of one, and indices $i_1,\dots, i_n<m$ so that $$ c=\left(\Prod_{j=1}^n [c_j, x_{i_j}]\right) c' $$ with $c'\in RF(m)_{w+1}$. \end{lemma} \begin{remark} Without the condition that $i_1,\dots, i_n<m$ this lemma has no content. The key point is that any such $c$ can be realized without any factors of the form $[c_j,x_m]$ ever appearing. We encourage the reader to run the proof below on the example of $[[x_1, x_2],x_m]$. \end{remark} \begin{proof} Let $c$ be a commutator of weight $w\ge 2$. Then $c=[a,b]$ for some commutators $a$ and $b$ with $\wt(a)+\wt(b)=w$. If $x_m$ is in both $a$ and $b$ then $[a,b]=1$ by Proposition~\ref{prop: RF commute}. Without loss of generality we may assume that $x_m$ is not in $a$, for if $x_m$ were in $a$ then we may use that by Proposition~\ref{prop inverses in RF} $c=[a,b]=[b,a]^{-1}=[b^{-1},a]$ in $RF(m)$. We now proceed by induction on the weight of $a$. As a base case, if $a$ has weight 1 then $a=x_i$ for some $i< m$ and $c=[x_i,b]=[b^{-1}, x_i]$ so we are done. We now assume that $\wt(a)>1$ so that $a=[\alpha,\beta]$. As $x_m$ is not in $a$ it is in neither $\alpha$ nor $\beta$.
Finally, since $\wt(a) = \wt(\alpha)+\wt(\beta)$, $\wt(\alpha)<\wt(a)$. We now appeal to our inductive assumption to conclude that $a = \Prod_{j=1}^n [a_j, x_{i_j}] a'$ where $i_j<m$, $a_j$ is a commutator (or its inverse) with $\wt(a_j)=\wt(a)-1$, and $a'\in RF(m)_{\wt(a)+1}$. Thus, $$ c=[a,b] = \left[\Prod_{j=1}^n [a_j, x_{i_j}] a', b\right]. $$ Notice that $\Prod_{j=1}^n[a_j, x_{i_j}]\in RF(m)_{\wt(a_j)+1} = RF(m)_{\wt(a)}$, $a'\in RF(m)_{\wt(a)+1}$, and $b\in RF(m)_{\wt(b)}$. Appealing to Proposition~\ref{prop: commutators} \pref{item Commutator product 2}, it follows that modulo $RF(m)_{2\wt(a)+1+\wt(b)}\subseteq RF(m)_{\wt(c)+1}$, $$ c \equiv \left[\Prod_{j=1}^n [a_j, x_{i_j}], b\right]\cdot \left[ a', b\right]. $$ Since $\left[ a', b\right]\in RF(m)_{\wt(a)+1+\wt(b)} = RF(m)_{\wt(c)+1}$, we have that modulo $RF(m)_{\wt(c)+1}$, $$c\equiv\left[\Prod_{j=1}^n [a_j, x_{i_j}], b\right].$$ Since each $[a_j, x_{i_j}]\in RF(m)_{\wt(a)}$ and $b \in RF(m)_{\wt(b)}$, we may iteratively apply Claim~\pref{item Commutator product 2} of Proposition~\ref{prop: commutators} to see that modulo $RF(m)_{2\cdot \wt(a)+\wt(b)}\subseteq RF(m)_{\wt(c)+1}$, $$c\equiv\Prod_{j=1}^n \left[ [a_j, x_{i_j}], b\right]. $$ Next apply Proposition~\ref{prop:rearrage}. $$c\equiv\Prod_{j=1}^n \left[ [b, x_{i_j}], a_j\right]\left[[a_j, b], x_{i_j}\right]\mod RF(m)_{\wt(c)+1}. $$ Recall that $x_m$ is not in $a_j$ and $\wt(a_j)=\wt(a)-1$. Thus, we may again apply the inductive assumption and conclude that for each $j$, $$\left[ [b, x_{i_j}], a_j\right]= \left(\Prod_{k=1}^{n_j} [c_{j,k}, x_{i_{j,k}}]\right)c_{j}'$$ with $\wt(c_{j,k}) = \wt(c)-1$, $c_{j}'\in RF(m)_{\wt(c)+1}$ and $i_{j,k}<m$. Putting this together, $$ c\equiv\Prod_{j=1}^n \left(\left(\Prod_{k=1}^{n_j} [c_{j,k}, x_{i_{j,k}}]\right)\left[[a_j, b], x_{i_j}\right]\right) \mod RF(m)_{\wt(c)+1}, $$ which completes the proof. \end{proof} We now have everything we need to prove Theorem~\ref{hackey idea}.
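Before the proof, an aside: the free-group commutator identities from Proposition~\ref{prop: commutators} do most of the bookkeeping in this section, and they can be checked mechanically. The following is a small illustrative sketch (not part of the argument), using SymPy's free-group implementation and this paper's convention $[g,h]=g^{-1}h^{-1}gh$:

```python
# Mechanically verify, in the free group on a, b, c, several commutator
# identities quoted from Magnus-Karrass-Solitar and used in this section.
# Convention matches the paper: [g, h] = g^{-1} h^{-1} g h, so gh = hg[g, h].
from sympy.combinatorics.free_groups import free_group

F, a, b, c = free_group("a, b, c")

def comm(g, h):
    """The commutator [g, h] = g**-1 * h**-1 * g * h."""
    return g**-1 * h**-1 * g * h

# gh = hg[g, h]
assert a * b == b * a * comm(a, b)

# Antisymmetry: [a, b]^{-1} = [b, a]
assert comm(a, b)**-1 == comm(b, a)

# Expansion identities (Theorem 5.1 (9), (10) of MKS):
assert comm(a, b * c) == comm(a, c) * comm(a, b) * comm(comm(a, b), c)
assert comm(b * c, a) == comm(b, a) * comm(comm(b, a), c) * comm(c, a)

# Inverse identities (item (7) of the proposition):
assert comm(a, b**-1) == comm(a, b)**-1 * comm(b, comm(a, b**-1))
assert comm(a**-1, b) == comm(a, b)**-1 * comm(a, comm(a**-1, b))

print("free-group commutator identities verified")
```

Relations special to the reduced free group, such as Proposition~\ref{prop: RF commute} and Proposition~\ref{prop:rearrage}, are of course invisible at this level, since SymPy computes in the honest free group.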
\begin{proof}[Proof of Theorem~\ref{hackey idea}] Let $z\in RF(m)$. We will inductively show that in $RF(m)/RF(m)_p$ $$z = \prod_{k=0}^{m-1} x_{m-k}^{\alpha_{m-k}} \prod_{k=1}^{m-1}[\omega_k, x_k].$$ When $p=2$, $RF(m)/RF(m)_p=\Z^m$ is the free Abelian group on $x_1,\dots, x_m$, so $$z = x_m^{\alpha_m}\cdot x_{m-1}^{\alpha_{m-1}}\cdot\dots \cdot x_1^{\alpha_1}\cdot z' = \prod_{k=0}^{m-1} x_{m-k}^{\alpha_{m-k}}\cdot z' $$ with $z'\in RF(m)_2$. This establishes the base case $p=2$. For convenience, we set $z_0=\Prod_{k=0}^{m-1} x_{m-k}^{\alpha_{m-k}}$. We now inductively assume that $$z = z_0 \prod_{k=1}^{m-1}[\omega_k, x_k]\cdot z'$$ with $z'\in RF(m)_{p}$. Thus, $z'$ is a product of weight at least $p$ commutators and so we can express $z'=\prod_{q=1}^r c_q\cdot z''$ where each $c_q$ is a weight $p$ commutator and $z''\in RF(m)_{p+1}$. Appealing to Lemma~\ref{lem: commutator as nice product}, modulo $RF(m)_{p+1}$ each $c_q$ can be rewritten as a product in the form $c_q\equiv\Prod_r[d_{q,r},x_{i_{q,r}}]$ with $i_{q,r}<m$ and $\wt(d_{q,r})=p-1$. Still working modulo $RF(m)_{p+1}$, these factors commute by Proposition~\ref{prop: commutators} \pref{LCS Abelian}, so we can relabel and reorder this product so as to sort by $x_{i_{q,r}}$'s. $$z'\equiv\prod_{i=1}^{m-1}\prod_q [d_{q,i}, x_{i}] \mod RF(m)_{p+1}.$$ By Proposition~\ref{prop: commutators} \pref{item Commutator product 2} $[d, x_i][e,x_i] \equiv[de,x_i]$ modulo $RF(m)_{p+1}$. By repeatedly applying this, $$z'\equiv\prod_{i=1}^{m-1} [D_{i}, x_{i}]\mod RF(m)_{p+1},$$ with $D_i=\prod_{q}d_{q,i}\in RF(m)_{p-1}$. Thus, \begin{eqnarray*} z&=&z_0 \prod_{k=1}^{m-1}[\omega_k, x_k]\cdot z' \equiv z_0 \prod_{k=1}^{m-1}[\omega_k, x_k]\cdot \prod_{k=1}^{m-1} [D_{k}, x_{k}]\, \mod RF(m)_{p+1}. \end{eqnarray*} As $[D_{k}, x_{k}]\in RF(m)_{p}$ is central in $RF(m)/RF(m)_{p+1}$ it follows that $$ z\equiv z_0 \prod_{k=1}^{m-1}[\omega_k, x_k] [D_{k}, x_{k}] \mod RF(m)_{p+1}.
$$ Appealing to Proposition~\ref{prop: commutators} \pref{item Commutator product 2}, $$ z\equiv z_0 \prod_{k=1}^{m-1}[\omega_kD_k, x_k] \mod RF(m)_{p+1}. $$ This completes the inductive step, and so we conclude that for every $p\in\N$, $$ z\equiv \prod_{k=0}^{m-1} x_{m-k}^{\alpha_{m-k}} \prod_{k=1}^{m-1}[\omega_k, x_k] \mod RF(m)_{p}. $$ Since $RF(m)_{m+1}=1$, taking $p=m+1$ completes the proof. \end{proof} \section{Link homotopy and $\Delta$-moves: The proof of Theorem \ref{thm: main Delta}}\label{sect: Delta change} A very similar proof to that of Theorem~\ref{hackey idea} produces the following result. \begin{theorem}\label{DeltaMoveVersion} Any element of $RF(m)$ can be written in the form $$ \prod_{i=1}^m x_i^{\alpha_i}\prod_{1\le i\le j\le m} [x_j, x_i]^{\beta_{i,j}} \prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}, x_i],x_j] $$ with $\alpha_i, \beta_{i,j}\in \Z$ and $\omega_{i,j}\in RF(m)$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{DeltaMoveVersion}] Let $z\in RF(m)$. We will inductively show that in $RF(m)/RF(m)_p$ $$[z] = \prod_{i=1}^mx_i^{\alpha_i}\prod_{1\le i\le j\le m} [x_j, x_i]^{\beta_{i,j}} \prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}, x_i],x_j].$$ When $p\le 3$, the result follows from Hall's basis theorem, \cite[Theorem 5.13]{MKS76}. We now take $p\ge 3$ and inductively assume that $$z = \prod_{i=1}^m x_i^{\alpha_i}\prod_{1\le i\le j\le m} [x_j, x_i]^{\beta_{i,j}} \prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}, x_i],x_j]\cdot z'$$ with $z'\in RF(m)_{p}$. Thus, $z'$ is a product of commutators of weight at least $p$, meaning $z'=\prod_{q=1}^r c_q$ where each $\wt(c_q)\ge p$. As in the proof of Theorem~\ref{hackey idea} we apply Lemma~\ref{lem: commutator as nice product} to each $c_q$, to see that modulo $RF(m)_{p+1}$, $z'$ can be rewritten as a product in the form $z'\equiv \Prod_r[d_{r},x_{i_{r}}]$ where $i_{r}<m$ and $d_r$ is a weight $p-1$ commutator not containing $x_{i_r}$.
This product commutes modulo $RF(m)_{p+1}$ so that we may sort it by $i_r$. Next, using claim \pref{item Commutator product 2} of Proposition \ref{prop: commutators}, we see that $$z'\equiv \Prod_{i=1}^{m-1}[D_i,x_i]\mod RF(m)_{p+1}$$ where $D_i\in RF(m-1)_{p-1}$ sits in the $(p-1)$st term of the lower central series of the copy of $RF(m-1)$ generated by $x_1, \dots, x_{i-1}, x_{i+1},\dots, x_m$. Applying Lemma~\ref{lem: commutator as nice product} to each $D_i$, $$z'\equiv \Prod_{i=1}^{m-1}\left[\Prod_{q=1}^n[e_{iq},x_{i_q}],x_i\right]\mod RF(m)_{p+1}$$ where $i_q<m$ and $e_{iq}$ is a commutator of weight $p-2$. Again using commutativity and claim \pref{item Commutator product 2} of Proposition \ref{prop: commutators} similarly to before, and relabeling indices, $$z'\equiv \Prod_{i=1}^{m-1}\Prod_{j=1}^{m-1}\left[[E_{i,j},x_{i}],x_j\right]\mod RF(m)_{p+1},$$ where $E_{i,j}\in RF(m)_{p-2}$. Since $RF(m)_p$ is central in $RF(m)/RF(m)_{p+1}$, $$ \begin{array}{rcl}z &\equiv& \displaystyle\prod_{i=1}^m x_i^{\alpha_i}\prod_{1\le i\le j\le m} [x_j, x_i]^{\beta_{i,j}} \prod_{i=1}^{m-1}\prod_{j=1}^{m-1}\left([[\omega_{i,j}, x_i],x_j][[E_{i,j}, x_i],x_j]\right) \\ &\equiv&\displaystyle \prod_{i=1}^m x_i^{\alpha_i}\prod_{1\le i\le j\le m} [x_j, x_i]^{\beta_{i,j}} \prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}E_{i,j}, x_i],x_j].\end{array}$$ We now inductively conclude that the claim holds modulo $RF(m)_p$ for any $p$. Since $RF(m)_{m+1}=1$, we can set $p=m+1$ to complete the proof. \end{proof} \begin{corollary} If $T$ has vanishing pairwise linking numbers and is in the image of $RF(n-1)\to \H(n)$ then $$n_{\Delta}(T)\le \left(\Sum_{1\le i<j< n} |\mu_{ijn}(T)|\right)+2(n-1)(n-2).$$ \end{corollary} \begin{proof} First note that since $[[\omega_{i,j}, x_i],x_j]=1$ whenever $i=j$, the product $\prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}, x_i],x_j]$ has at most $(m-1)(m-2)$ terms.
Each of these terms reduces to $$\alpha_{ij}:=[[\omega_{i,j}, x_i],x_j] = x_i^{-1} [ \omega_{ij}^{-1} x_i^{-1}\omega_{ij}, x_j] x_j^{-1} x_i x_j.$$ By Proposition~\ref{prop: basics of nhl} claim \pref{item:ndelta(commutator)}, $n_{\Delta}(\phi([ \omega_{ij}^{-1} x_i^{-1}\omega_{ij}, x_j]))\le 1$, so that after one $\Delta$-move, $\phi(\alpha_{ij})$ is reduced to $\phi([x_i,x_j])$, which in turn has $n_{\Delta}=1$. Thus $n_{\Delta}(\prod_{i=1}^{m-1}\prod_{j=1}^{m-1}[[\omega_{i,j}, x_i],x_j])\le 2(n-1)(n-2)$. The proof is completed by noting that the integer $\beta_{ij}$ of Theorem~\ref{DeltaMoveVersion} is equal to $\mu_{ijn}(T)$. \end{proof} \thmMainDelta \begin{proof} Our proof follows an identical induction to the proof of Theorem~\ref{upper bound theorem main}. Realize $L$ as $L=\widehat T$ for some $T\in \H(n)$. As a consequence of the split exact sequence of \pref{exact sequence}, $T=\phi(t)s(T')$ with $t\in RF(n-1)$ and $T'\in \H(n-1)$. By Corollary~\ref{nhL for RF}, $$n_\Delta (\phi(t))\leq \Sum_{1\le i<j<n }|\mu_{ijn}(L)|+2(n-1)(n-2).$$ Appealing to induction, $$ n_\Delta(s(T'))\le \Lambda(T')+\Sum_{k=3}^{n-2} 2k(k-1). $$ Adding these together, $$ n_\Delta(L)\le n_\Delta(\phi(t))+n_\Delta(s(T'))\le \Lambda_3(L)+\Sum_{k=3}^{n-1} 2k(k-1), $$ completing the proof. \end{proof} \section{Computing the homotopy unlinking number of 4-component links.}\label{sect: 4 component} In \cite{DOP22}, the second author, along with Orson and Park, determine the homotopy trivializing number of any 3-component string link: \begin{proposition*}[Theorem 1.7 of \cite{DOP22}] Let $L$ be a 3-component string link. Then $$n_{h}(L) = \begin{cases} \Lambda(L) &\text{if }\Lambda(L)\neq 0,\\ 2&\text{ if }\Lambda(L)=0 \text{ and }\mu_{123}(L)\neq 0,\\ 0&\text{otherwise}. \end{cases}$$ \end{proposition*} In this section we turn our attention to an analogous computation for all 4-component links. As should not be surprising, this classification is significantly more involved than for 3-component links.
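Before turning to the 4-component case, we note a quick sanity check of the proposition above. If $B$ is a 3-component string link whose closure $\widehat{B}$ is the Borromean rings, then all pairwise linking numbers vanish, so $\Lambda(\widehat{B})=0$, while $\mu_{123}(\widehat{B})=\pm 1\neq 0$. The proposition therefore gives $n_h(\widehat{B})=2$: no single crossing change reduces the Borromean rings to the unlink, but some pair of crossing changes does.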
We begin by recalling some elements of the classification of 4-component links up to link homotopy. The classification of string links up to link homotopy is provided in \cite{HL1} and is made explicit in \cite{Yasuhara09}. We state this classification restricted to 4-component links: Any $T\in \H(4)$ can be expressed uniquely as $T=A_1A_2A_3$ where \begin{equation}\label{eqn:4-component classification} \begin{array}{l} A_1=x_{12}^{a_{12}}x_{13}^{a_{13}}x_{14}^{a_{14}}x_{23}^{a_{23}}x_{24}^{a_{24}}x_{34}^{a_{34}}, \\ A_2=[x_{13},x_{12}]^{a_{123}}[x_{14},x_{12}]^{a_{124}}[x_{14},x_{13}]^{a_{134}}[x_{24},x_{23}]^{a_{234}}, \\ A_3 = [[x_{14}, x_{13}],x_{12}]^{a_{1234}}[[x_{14}, x_{12}],x_{13}]^{a_{1324}}, \end{array} \end{equation} and each $a_{I}$ above is an integer. For convenience, we will set $x_{ijk} = [x_{ik},x_{ij}]$ and $x_{1jk4}=[[x_{14},x_{1k}],x_{1j}]$. If $L=\widehat{T}$ is the closure of $T$, then each exponent $a_I$ recovers the corresponding Milnor invariant $\bar{\mu}_I(L)$ up to sign. In particular, $a_I=\bar{\mu}_I(L)$ when $I$ has length 2 and $a_I=-\bar{\mu}_I(L)$ when $I$ has length 3 or 4. In the following theorems, which are summarized in the introduction as Theorem \ref{thm: 4-component sampler}, $L$ is a 4-component link and $T=A_1A_2A_3$ as in \pref{eqn:4-component classification} is a 4-component string link with $\widehat{T}=L$.
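To illustrate how noncommutativity is absorbed into the normal form \pref{eqn:4-component classification}, consider the product $x_{13}x_{12}$, whose factors appear out of order. Using the commutator convention $[A,B]=A^{-1}B^{-1}AB$, so that $AB=BA[A,B]$, together with $x_{123}=[x_{13},x_{12}]$, $$x_{13}x_{12} = x_{12}x_{13}[x_{13},x_{12}] = x_{12}x_{13}x_{123}.$$ Thus the string link $x_{13}x_{12}$ has $a_{12}=a_{13}=a_{123}=1$ and all other $a_I=0$: reordering the clasps leaves the linking numbers unchanged, but the exchange is recorded by a higher order invariant.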
\begin{theorem}\label{thm:nhl for linking number zero} If $L$ has vanishing pairwise linking numbers, then $n_h(L)$ is given by the following table: \noindent \begin{tabular}{|l|}\hline \cellcolor{black!05}$n_h(L)=0$ if and only if $a_I=0$ for every choice of $I$\\ \hline \cellcolor{black!05}{\begin{minipage}{.85\textwidth}$n_h(L)=2$ if and only if $L$ is not homotopy trivial and at least one condition (below) is met\end{minipage}} \\ \makecell{\begin{minipage}{.98\linewidth} \begin{itemize} \item $a_{1324} \in (a_{123}, a_{124}) $ and $a_{134}=a_{234}=0$, \item $a_{1234} \in (a_{123},a_{134})$ and $a_{124}=a_{234}=0$, \item $a_{1234}+a_{1324} \in (a_{124},a_{134})$ and $a_{123}=a_{234}=0$, \item $a_{1234}+a_{1324}\in (a_{123},a_{234})$ and $a_{124}=a_{134}=0$, \item $a_{1234} \in (a_{124},a_{234})$ and $a_{123}=a_{134}=0$, \item $a_{1324} \in (a_{134},a_{234})$ and $a_{123}=a_{124}=0$. \end{itemize}\end{minipage}} \\\hline \cellcolor{black!05}$n_h(L)=4$ if and only if $n_h(L)\not\in\{0,2\}$ and at least one condition (below) is met \\ \makecell{ \begin{minipage}{.98\linewidth} \begin{itemize} \item $a_{jk\ell}=0$, for some $1\le j<k<\ell\le 4$ \item $a_{1324}\in (a_{123},a_{124}, a_{134},a_{234})$, \item $a_{1234}\in (a_{123},a_{124}, a_{134},a_{234})$, \item $a_{1234}+a_{1324}\in (a_{123},a_{124}, a_{134},a_{234})$. \end{itemize}\end{minipage}} \\\hline \cellcolor{black!05} $n_h(L)=6$ if and only if $n_h(L)\notin\{0,2,4\}$. \\ \hline \end{tabular} \end{theorem} Assume now that $L$ is a link with at least one non-vanishing pairwise linking number. It is demonstrated in \cite{DOP22} that if $L$ is a 3-component link, then $n_h(L)=\Lambda(L)$. The same is not quite true for 4-component links, but the reader should note that once the pairwise linking numbers get sufficiently complicated, they determine the homotopy trivializing number. 
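Returning briefly to Theorem~\ref{thm:nhl for linking number zero}, we record a small sanity check. If $T=x_{1234}^{k}$ with $k\neq 0$, then every $a_I$ vanishes except $a_{1234}=k$, so $L=\widehat{T}$ is not homotopy trivial, while the first condition in the $n_h(L)=2$ row holds: $a_{134}=a_{234}=0$ and $a_{1324}=0\in (a_{123},a_{124})$. Hence $n_h(L)=2$; as the proof in Subsection~\ref{subsect: linking equals zero} makes explicit, the exponent of $x_{1234}$ is unconstrained when undoing a pair of opposite crossing changes between $T_1$ and $T_2$.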
In order to cut down on the number of cases needed we will assume (at the cost of permuting the components of $L$ and possibly changing the orientation of some components) that $\lk(L_1,L_2)\ge |\lk(L_i,L_j)|$ for all $i\neq j$. With that convention established, Theorems~\ref{thm:linking greater one} and \ref{thm:nonvanishing linking} complete our classification of the homotopy trivializing number. \begin{theorem}\label{thm:linking greater one} If $\lk(L_1,L_2)>0$ and all other pairwise linking numbers vanish, then $n_h(L)$ is determined by the following table: \noindent \begin{tabular}{|l|}\hline \centerline{ Suppose $\lk(T_1,T_2)=1$.}\\ \hline \cellcolor{black!05}{$n_h(L)=1$ if and only if $a_{134}=a_{234}=0$ and $a_{1324}=-a_{123}a_{124}$.}\\ \hline \cellcolor{black!05}$n_h(L)=3$ if and only if $n_h(L)\not=1$ and at least one condition (below) is met \\ \begin{minipage}{.98\linewidth} \begin{itemize} \item $a_{134}=0 $ or $a_{234}=0$, \item $a_{1324} + a_{123}a_{124} \in (a_{134}, a_{234})$. \end{itemize}\end{minipage} \\\hline \cellcolor{black!05}$n_h(L)=5$ if and only if none of the previous conditions (above) are met. \\ \begin{minipage}{.98\linewidth} In other words, $n_h(L)=5$ if and only if $a_{134}\cdot a_{234}\not=0$ and $a_{1324}+a_{123}\cdot a_{124} \not\in (a_{134},a_{234})$. \end{minipage} \\\hline \hline \centerline{Suppose $\lk(T_1,T_2)=2$.}\\ \hline \cellcolor{black!05}$n_h(L)=2$ if and only if $a_{134}=a_{234}=0$ and at least one condition (below) is met \\ \begin{minipage}{.98\linewidth} \begin{itemize} \item at least one of $a_{123}$ or $a_{124}$ is odd, \item $a_{1324}$ is even. \end{itemize}\end{minipage} \\\hline \cellcolor{black!05}$n_h(L)=4$ if and only if $n_h(L)\not=2$ and at least one condition (below) is met \\ {\begin{minipage}{.98\linewidth} \begin{itemize} \item $a_{134}=0$ or $a_{234}=0$, \item at least one of $a_{123},a_{134},a_{124}, a_{234}$ is odd or $a_{1324}$ is even.
\end{itemize}\end{minipage}} \\\hline \cellcolor{black!05}$n_h(L)=6$ if and only if none of the previous conditions (above) are met. \\ \hline \hline \centerline{Suppose $\lk(T_1,T_2)\geq 3$.}\\ \hline \cellcolor{black!05} $n_h(L)\in \{\Lambda(L), \Lambda(L)+2\}$, and $n_h(L)=\Lambda(L)$ if and only if $a_{134}=a_{234}=0$. \\ \hline \end{tabular} \end{theorem} The reader will notice that as $\lk(L_1,L_2)$ grows, the effect of the higher order Milnor invariants shrinks. This phenomenon persists for links with multiple nonvanishing pairwise linking numbers. \begin{theorem}\label{thm:nonvanishing linking} If $L$ has at least two nonvanishing pairwise linking numbers, $\lk(L_1,L_2)>0$, and $\lk(L_1,L_2)\ge |\lk(L_i,L_j)|$ for all $i,j$, then $n_h(L)$ is given by the following table: \noindent \begin{tabular}{|l|}\hline \centerline{ Suppose $\lk(T_1,T_2)=1$, $\lk(T_3,T_4)=1$, and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{$n_h(L)\in \{2,4\}$, and $n_h(L)=2$ if and only if $a_{1324}=-a_{123}a_{124}-a_{134}a_{234}$. 
} \\ \hline \hline \centerline{Suppose $\lk(T_1,T_2)=2$, $\lk(T_3,T_4)=1$, and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{\begin{minipage}{.85\textwidth}$n_h(L)\in \{3,5\}$, and furthermore $n_h(L)=3$ if and only if at least one condition (below) is met\end{minipage}}\\ \begin{minipage}{.9\linewidth} \begin{itemize} \item $a_{1324}+a_{134}a_{234}$ is even, \item either of $a_{123}$ or $a_{124}$ is odd \end{itemize} \end{minipage}\\ \hline \hline \centerline{Suppose $\lk(T_1,T_2)=2$, $\lk(T_3,T_4)=2$, and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{$n_h(L)\in \{\Lambda(L),\Lambda(L)+2\}$ and $n_h(L)=\Lambda(L)$ if and only if at least one condition (below) is met}\\ \begin{minipage}{.9\linewidth} \begin{itemize} \item at least one of $a_{123},a_{124},a_{134}, a_{234}$ is odd, \item $a_{1324}$ is even \end{itemize} \end{minipage}\\ \hline \hline \centerline{Suppose $\lk(T_1,T_2)\geq 3$, $\lk(T_3,T_4)\geq1$, and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{$n_h(L)=\Lambda(L)$.}\\ \hline \hline \centerline{ Suppose $\lk(T_1,T_2)\not=0$, $\lk(T_1,T_3)\not=0$, and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{$n_h(L)\in \{\Lambda(L),\Lambda(L)+2\}$, and $n_h(L)=\Lambda(L)$ if and only if $a_{234}=0$. } \\ \hline \hline \centerline{Suppose $\lk(T_1,T_2)\lk(T_2,T_3)\lk(T_3,T_4)\not=0$ or $\lk(T_1,T_2)\lk(T_1,T_3)\lk(T_2,T_3)\not=0$.}\\ \hline \cellcolor{black!05}{$n_h(L)= \Lambda(L)$.}\\ \hline \hline \centerline{Suppose $\lk(T_1,T_2), \lk(T_1,T_3), \lk(T_1,T_4)$ are nonzero and $\lk(T_i,T_j)=0$ for all other $i,j$.}\\ \hline \cellcolor{black!05}{$n_h(L) \in\{ \Lambda(L),\Lambda(L)+2\}$ and $n_h(L)= \Lambda(L)$ if and only if $a_{234}=0$.}\\ \hline \end{tabular} \end{theorem} The remainder of the section is organized as follows. In Subsection~\ref{subsect: nhL=1} we express the homotopy trivializing number in terms of a word length problem in $\H(n)$.
We close this subsection by determining which elements of $\H(n)$ have $n_h(T)=1$. In Subsection~\ref{subsect: linking equals zero} we compute the homotopy trivializing numbers when pairwise linking numbers vanish, proving Theorem~\ref{thm:nhl for linking number zero}. In Subsections~\ref{subsect: one nonvanishing linking} and \ref{subsect: many nonvanishing linking} respectively we complete the section with the proofs of Theorem \ref{thm:linking greater one} and Theorem \ref{thm:nonvanishing linking}. \subsection{The homotopy trivializing number as a word length.}\label{subsect: nhL=1} The braids $x_{ij}$ with $1\le i<j\le n$ generate $\H(n)$. Thus, they also normally generate. The following proposition reveals that $n_h(T)$ is given by counting how many conjugates of the $x_{ij}$ must be multiplied together to get $T$. \begin{proposition}\label{prop: algegraic translation} Consider any $T\in \H(n)$. $T$ can be undone by a sequence of crossing changes consisting of changing $p_{ij}$ positive crossings and $n_{ij}$ negative crossings between $T_i$ and $T_j$ for each $1\le i < j\le n$ if and only if $$T=\prod_{k=1}^m W_k^{-1} x_{i_k, j_k}^{\epsilon_{k}} W_k$$ where $m=\Sum_{i,j}p_{ij}+n_{ij}$, each $W_k\in \H(n)$, $\epsilon_{k}\in \{\pm 1\}$, and for each $i$ and $j$ there are a total of $p_{ij}$ (or $n_{ij}$ resp.) values of $k$ with $i_k=i$, $j_k=j$ and $\epsilon_{k}=+1$ ($\epsilon_k=-1$ resp.). \end{proposition} \begin{proof} Sufficiency is obvious. In order to see the converse when $m=1$, in Figure~\ref{fig: Crossing change to conjugate A} we see a string link produced by changing the trivial string link by a single crossing change, and in Figure~\ref{fig: Crossing change to conjugate C} we see that, after a link homotopy, it is a conjugate of $x_{ij}$. Thus, a string link is reduced to the trivial string link by a single crossing change if and only if it is a conjugate of $x_{ij}^{\pm 1}$ for some $i,j$. Now proceed inductively.
If $T$ can be reduced to the trivial string link by $m+1$ crossing changes, then by performing one of these crossing changes and appealing to induction, we get a new string link $S=\prod_{k=1}^m W_k^{-1} x_{i_k, j_k}^{\epsilon_{k}} W_k$. As $T$ and $S$ differ by a single crossing change, $S^{-1}T$ can be undone by a single crossing change, so that $S^{-1}T = W_{m+1}^{-1}x_{i_{m+1}, j_{m+1}}^{\epsilon_{m+1}}W_{m+1}$. The result follows. \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node[rotate=180] at (0,0){\includegraphics[height=.1\textheight]{GenericCrossChangeClasper1.pdf}}; \end{tikzpicture} \caption{} \label{fig: Crossing change to conjugate A} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node[rotate=180] at (0,0){\includegraphics[height=.1\textheight]{GenericCrossChangeClasper2.pdf}}; \end{tikzpicture} \caption{} \label{fig: Crossing change to conjugate B} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture} \node[rotate=180] at (0,0){\includegraphics[height=.1\textheight]{GenericCrossChangeClasper3.pdf}}; \end{tikzpicture} \caption{} \label{fig: Crossing change to conjugate C} \end{subfigure} \caption{\pref{fig: Crossing change to conjugate A} A clasper that changes the trivial string link by a single positive crossing change. \pref{fig: Crossing change to conjugate B} The same diagram after an isotopy. \pref{fig: Crossing change to conjugate C} A link homotopy reducing this diagram to a conjugate of $x_{13}$. } \label{fig: Crossing change to conjugate} \end{figure} \end{proof} The next lemma reveals that the terms in the product in Proposition~\ref{prop: algegraic translation} commute at a cost of changing the conjugating elements $W_k$. The proof amounts to expanding out both sides.
\begin{lemma}\label{lem: reorder cross change} For any group $G$ and any $W,V,x, y\in G$, $$W^{-1}x W V^{-1}yV = V^{-1}y V (W')^{-1}x (W'),$$ where $W'=W V^{-1}yV$. \end{lemma} Thus, in order to compute the homotopy trivializing number of every 4-component string link $T$, we need only determine the minimal number of conjugates of the preferred generators $x_{ij}$ that must be multiplied together to get $T$. As a first step we see exactly which string links are conjugates of these generators. \begin{lemma}\label{cor: nhL=1} Let $T\in \H(4)$. \begin{itemize} \item $T$ is a conjugate of $x_{12}$ if and only if $T=x_{12}x_{123}^\alpha x_{124}^\beta x_{1234}^\gamma x_{1324}^{-\alpha\beta}$ for some $\alpha,\beta, \gamma\in \Z$. \item $T$ is a conjugate of $x_{13}$ if and only if $T=x_{13}x_{123}^\alpha x_{134}^\beta x_{1234}^{\alpha\beta} x_{1324}^{\gamma}$ for some $\alpha,\beta, \gamma\in \Z$. \item $T$ is a conjugate of $x_{14}$ if and only if $T=x_{14}x_{124}^\alpha x_{134}^\beta x_{1234}^{\gamma} x_{1324}^{\delta}$ for some $\alpha,\beta, \gamma,\delta\in \Z$ with $\gamma+\delta =\alpha\beta$. \item $T$ is a conjugate of $x_{23}$ if and only if $T=x_{23}x_{123}^\alpha x_{234}^\beta x_{1234}^{\gamma} x_{1324}^{\delta}$ for some $\alpha,\beta, \gamma,\delta\in \Z$ with $\gamma+\delta =\alpha\beta$. \item $T$ is a conjugate of $x_{24}$ if and only if $T=x_{24}x_{124}^\alpha x_{234}^\beta x_{1234}^{\alpha\beta} x_{1324}^{\gamma}$ for some $\alpha,\beta, \gamma\in \Z$. \item $T$ is a conjugate of $x_{34}$ if and only if $T=x_{34}x_{134}^\alpha x_{234}^\beta x_{1234}^{\gamma} x_{1324}^{-\alpha\beta}$ for some $\alpha,\beta, \gamma\in \Z$. \end{itemize} \end{lemma} During the proof of Lemma~\ref{cor: nhL=1} we will make use of Table~\ref{commutator table} describing the commutator of $x_{ij}$ with each of the basis elements in \pref{eqn:4-component classification}.
Each entry in this table follows from an application of Proposition~\ref{prop:rearrage} and the fact that $[x_{ij},x_{ik}]$ is link homotopic to $[x_{ik},x_{jk}]$. (This can be seen by using the Wirtinger presentation to express the $k$'th component of $[x_{ij},x_{ik}]$ in terms of the preferred meridians of $T_i$ and $T_j$ followed by an appeal to the homomorphism $\phi$ of \pref{exact sequence}.) For the sake of clarity we justify the entry corresponding to $[x_{123}, x_{14}]$. $$ \begin{array}{rcll} [x_{123},x_{14}] &=& [[x_{13},x_{12}],x_{14}] = [[x_{14},x_{12}],x_{13}][[x_{13},x_{14}],x_{12}] \\&=& [[x_{14},x_{12}],x_{13}][[x_{14},x_{13}],x_{12}]^{-1} = x_{1324}x_{1234}^{-1} = x_{1234}^{-1}x_{1324}. \end{array} $$ Note that we have used that $x_{1324}$ and $x_{1234}$ commute. \begin{table}[h!] \begin{tabular}{|r||l|l|l|l|l|l|} \hline &$x_{12}$&$x_{13}$&$x_{14}$&$x_{23}$&$x_{24}$&$x_{34}$ \\\hline\hline $x_{12}$&1&$x_{123}^{-1}$&$x_{124}^{-1}$&$x_{123}$&$x_{124}$&$1$ \\\hline $x_{13}$&$x_{123}$&1&$x_{134}^{-1}$&$x_{123}^{-1}$&$x_{1324}$&$x_{134}$ \\\hline $x_{14}$&$x_{124}$&$x_{134}$&$1$&$1$&$x_{124}^{-1}$&$x_{134}^{-1}$ \\\hline $x_{23}$&$x_{123}^{-1}$&$x_{123}$&$1$&1&$x_{234}^{-1}$&$x_{234}$ \\\hline $x_{24}$&$x_{124}^{-1}$&$x_{1324}^{-1}$&$x_{124}$&$x_{234}$&$1$&$x_{234}^{-1}$ \\\hline $x_{34}$&$1$&$x_{134}^{-1}$&$x_{134}$&$x_{234}^{-1}$&$x_{234}$&$1$ \\\hline $x_{123}$&1&1&$x_{1234}^{-1}\cdot x_{1324}$ &1& $x_{1324}^{-1}$ &$x_{1234}$ \\\hline $x_{124}$&1&$x_{1324}$&1&$x_{1234}x_{1324}^{-1}$&1&$x_{1234}^{-1}$ \\\hline $x_{134}$&$x_{1234}$&1&1&$x_{1234}^{-1}\cdot x_{1324}$&$x_{1324}^{-1}$&1 \\\hline $x_{234}$&$x_{1234}^{-1}$&$x_{1324}$&$x_{1234}x_{1324}^{-1}$&1&1&1 \\\hline \end{tabular}\caption{A multiplication table for the operation $[A,x_{ij}]$. $A$ takes values in the terms in the leftmost column while $x_{ij}$ takes those of the first row.}\label{commutator table} \end{table} We are ready to prove Lemma~\ref{cor: nhL=1}. 
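Before doing so, we record one more sample computation, as a sketch of how the reader may verify further entries of Table~\ref{commutator table}. For the entry $[x_{124},x_{23}]$, Proposition~\ref{prop:rearrage} gives $$[x_{124},x_{23}] = [[x_{14},x_{12}],x_{23}] = [[x_{23},x_{12}],x_{14}]\,[[x_{14},x_{23}],x_{12}] = [x_{123}^{-1},x_{14}]\cdot 1,$$ using the entries $[x_{23},x_{12}]=x_{123}^{-1}$ and $[x_{14},x_{23}]=1$. Since the correction terms for inverting the first entry lie in $\H(4)_4=1$, we have $[x_{123}^{-1},x_{14}]=[x_{123},x_{14}]^{-1}=(x_{1234}^{-1}x_{1324})^{-1}=x_{1234}x_{1324}^{-1}$, again using that $x_{1234}$ and $x_{1324}$ commute; this is the value recorded in the table.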
\begin{proof}[Proof of Lemma~\ref{cor: nhL=1}] The proof of each of the claims amounts to an identical computation. We will focus on the case that $(ij)=(12)$. $T$ is a conjugate of $x_{12}$ if and only if $T=S^{-1}x_{12}S$ for some $S$. Let $S=A_1A_2A_3$, where $A_1$, $A_2$ and $A_3$ are as in \pref{eqn:4-component classification}. We shall show that $$ S^{-1}x_{12} S = x_{12}x_{123}^\alpha x_{124}^\beta x_{1234}^\gamma x_{1324}^{-\alpha\beta} $$ for $\alpha=a_{23}-a_{13}$, $\beta=a_{24}-a_{14}$, $\gamma = z-a_{134}+a_{234}$, where $z$ depends only on $A_1$. The value of $z$ will be revealed in equation \pref{[X12,A1]} at the end of the proof, but it is not relevant to our analysis. Once we do this, the claimed result will follow. Proceeding, $$S^{-1}x_{12}S = x_{12}[x_{12},S] = x_{12}[x_{12},A_1A_2] .$$ In the second equality above, we have used that $A_3\in \H(4)_3$ is central. By Proposition~\ref{prop: commutators}, claim~\pref{item Commutator product 1}, then \begin{equation}\label{S^-1x_12S}S^{-1}x_{12}S = x_{12}[x_{12},A_2][x_{12},A_1][[x_{12},A_1],A_2] =x_{12}[x_{12},A_2][x_{12},A_1]. \end{equation} The second equality above relies on the fact that $[[x_{12},A_1],A_2]\in \H(4)_4=1$. We now compute $[x_{12},A_2]$ by using Proposition~\ref{prop: commutators}, claim~\pref{item Commutator product 1} again, along with the fact that $x_{12}$ commutes with $x_{123}$ and $x_{124}$ and that $\H(4)_2$ is Abelian, $$\begin{array}{rcl}[x_{12},A_2] = [x_{12}, x_{134}^{a_{134}} x_{234}^{a_{234}}] = [x_{12},x_{134}]^{a_{134}} [x_{12},x_{234}]^{a_{234}}. \end{array}$$ Finally we compute each of these commutators using Table~\ref{commutator table}. \begin{equation}\label{[X12,A2]}\begin{array}{rcl}[x_{12},A_2] = x_{1234}^{-a_{134}+a_{234}}. \end{array}\end{equation} Next, we compute $[x_{12},A_1]$ via an iterated appeal to Proposition~\ref{prop: commutators}, claim~\pref{item Commutator product 1}.
$$\begin{array}{rcl}[x_{12},A_1] = \Prod_{(pq)}[x_{12}, x_{pq}]^{a_{pq}} \Prod_{(pq)<(rs)}[[x_{12},x_{pq}], x_{rs}]^{a_{pq}a_{rs}} \end{array}.$$ Here we use the lexicographical ordering $(12)<(13)<(14)<(23)<(24)<(34)$. We compute this product by again referencing Table~\ref{commutator table}, \begin{equation}\label{[X12,A1]}[x_{12},A_1] = x_{123}^{a_{23}-a_{13}}x_{124}^{a_{24}-a_{14}} \cdot x_{1234}^{a_{13}a_{14}-a_{13}a_{34}-a_{14}a_{23}+a_{14}a_{34}+a_{23}a_{34}-a_{24}a_{34}}x_{1324}^{-(a_{13}-a_{23})(a_{14}-a_{24})}. \end{equation} If we let $z$ be the exponent of $x_{1234}$ in the preceding line, then we may combine equations \pref{S^-1x_12S}, \pref{[X12,A2]}, and \pref{[X12,A1]} and recall our choices of $\alpha$, $\beta$, and $\gamma$ to complete the proof in the case that $(ij)=(12)$. Identical computations complete the proof in the remaining cases. \end{proof} \subsection{Four component links with vanishing pairwise linking numbers: the proof of Theorem~\ref{thm:nhl for linking number zero}}\label{subsect: linking equals zero} \begin{proof}[Proof of Theorem~\ref{thm:nhl for linking number zero}] Let $L$ be a 4-component link with all pairwise linking numbers zero. Suppose $T\in \H(4)$ satisfies $\widehat{T}=L$. Then $T$ can be written as $x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. By Proposition~\ref{prop: algegraic translation}, $T\in \H(4)$ can be undone in two crossing changes with opposite signs between components $T_i$ and $T_j$ if and only if $T = V^{-1}x_{ij}^{-1}V\cdot W^{-1}x_{ij}W$ for some $V,W\in\H(4)$. Each of these factors has its form determined by Lemma~\ref{cor: nhL=1}. The proof amounts to expanding these products and simplifying.
We begin by focusing on $(ij)=(12)$, $$ \begin{array}{rcl} V^{-1}x_{12}^{-1}V\cdot W^{-1}x_{12}W &=& (x_{12}x_{123}^\alpha x_{124}^\beta x_{1234}^\gamma x_{1324}^{-\alpha\beta})^{-1} \cdot x_{12}x_{123}^{\alpha'} x_{124}^{\beta'} x_{1234}^{\gamma'} x_{1324}^{-\alpha'\beta'}\\ &=&x_{123}^{\alpha'-\alpha} x_{124}^{\beta'-\beta} x_{1234}^{\gamma'-\gamma} x_{1324}^{\alpha\beta-\alpha'\beta'}. \end{array} $$ Thus, $T$ factors as above if and only if the system of equations below has a solution, $$ \begin{array}{c}a_{134}=a_{234}=0, \alpha'=a_{123}+\alpha, \beta'=a_{124}+\beta,\text{ and } \\a_{1324} = \alpha\beta - \alpha'\beta' = -a_{123}a_{124}-\alpha a_{124}-\beta a_{123}. \end{array}$$ As $-\alpha a_{124}-\beta a_{123}$ is a generic element of the ideal $(a_{123},a_{124})$, we see that this system of equations has a solution if and only if $a_{134}=a_{234}=0$ and $a_{1324} \in (a_{123},a_{124})$. Thus, the first bullet point for the case $n_h(L)=2$ of the theorem classifies precisely when a string link, $T$, can be undone by two crossing changes between $T_1$ and $T_2$. The remaining bullet points determine when $T$ can be undone by any other pair of crossing changes of opposite sign between the same two components. For the next claim, note that $n_h(L)=4$ if and only if for some $(ij)$ and $(k\ell)$, $T$ can be realized as \begin{equation}\label{eqn:Lambda=0,nhl=4} T=V^{-1}x_{ij}V\cdot (W^{-1}x_{ij}W)^{-1}\cdot X^{-1}x_{k\ell}X\cdot (Y^{-1}x_{k\ell}Y)^{-1}.
\end{equation} When $(ij)=(k\ell)=(12)$, Lemma~\ref{cor: nhL=1} transforms \pref{eqn:Lambda=0,nhl=4} into \begin{align*} T =& V^{-1}x_{12}V\cdot (W^{-1}x_{12}W)^{-1}\cdot X^{-1}x_{12}X\cdot (Y^{-1}x_{12}Y)^{-1} \\ =& x_{12}x_{123}^{\alpha}x_{124}^{\beta}x_{1234}^{\gamma}x_{1324}^{-\alpha\beta}\cdot\left(x_{12}x_{123}^{\alpha'}x_{124}^{\beta'}x_{1234}^{\gamma'}x_{1324}^{-\alpha'\beta'}\right)^{-1}\cdot x_{12}x_{123}^{a}x_{124}^{b}x_{1234}^{c}x_{1324}^{-ab} \\ &\cdot\left(x_{12}x_{123}^{a'}x_{124}^{b'}x_{1234}^{c'}x_{1324}^{-a'b'}\right)^{-1} \\ =&x_{123}^{\alpha - \alpha' + a - a'}x_{124}^{\beta - \beta' + b - b'}x_{1234}^{\gamma - \gamma' + c - c'}x_{1324}^{\alpha'\beta'-\alpha\beta + a'b' - ab}. \end{align*} Notice any $x_{123}^{a_{123}}x_{124}^{a_{124}}x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$ can be achieved by setting \[\alpha'=1,\;a=a_{123}+1,\;\beta=a_{124}+a_{1324},\;\beta'=a_{1324},\;\gamma=a_{1234},\;\alpha=a'=b=b'=\gamma'=c=c'=0.\] Therefore $L$ can be undone by four crossing changes between $L_1$ and $L_2$ if and only if $a_{134}=a_{234}=0$. An analogous result follows if $L$ can be undone by four crossing changes all between $L_i$ and $L_j$ for any $i<j$. When $(ij)=(12)$ and $(k\ell)=(13)$, \pref{eqn:Lambda=0,nhl=4} becomes \begin{align*} T =& V^{-1}x_{12}V\cdot (W^{-1}x_{12}W)^{-1}\cdot X^{-1}x_{13}X\cdot (Y^{-1}x_{13}Y)^{-1} \\ =& x_{12}x_{123}^{\alpha}x_{124}^{\beta}x_{1234}^{\gamma}x_{1324}^{-\alpha\beta} \cdot \left(x_{12}x_{123}^{\alpha'}x_{124}^{\beta'}x_{1234}^{\gamma'}x_{1324}^{-\alpha'\beta'}\right)^{-1}\cdot x_{13}x_{123}^{a}x_{134}^{b}x_{1234}^{ab}x_{1324}^{c}\\&\cdot\left(x_{13}x_{123}^{a'}x_{134}^{b'}x_{1234}^{a'b'}x_{1324}^{c'}\right)^{-1} \\ =& x_{123}^{\alpha - \alpha' + a - a'}x_{124}^{\beta - \beta'}x_{134}^{b - b'}x_{1234}^{\gamma - \gamma' + ab - a'b'}x_{1324}^{\alpha'\beta'-\alpha\beta + c - c'}.
\end{align*} Therefore $L$ can be undone by two crossing changes between $L_1$ and $L_2$ and two more between $L_1$ and $L_3$ if and only if $a_{234}=0$. Repeating this for every other pair of the form $(ij)$ and $(ik)$ concludes that if $\{i,j,k,\ell\}=\{1,2,3,4\}$ then $a_{jk\ell}=0$ if and only if $L$ can be undone by two crossing changes between $L_i$ and $L_j$ followed by two crossing changes between $L_i$ and $L_k$. When $(ij)=(12)$ and $(k\ell)=(34)$, \pref{eqn:Lambda=0,nhl=4} becomes \begin{align*} T =& V^{-1}x_{12}V\cdot (W^{-1}x_{12}W)^{-1}\cdot X^{-1}x_{34}X\cdot (Y^{-1}x_{34}Y)^{-1} \\ =& x_{12}x_{123}^{\alpha}x_{124}^{\beta}x_{1234}^{\gamma}x_{1324}^{-\alpha\beta} \cdot\left(x_{12}x_{123}^{\alpha'}x_{124}^{\beta'}x_{1234}^{\gamma'}x_{1324}^{-\alpha'\beta'}\right)^{-1}\cdot x_{34}x_{134}^{a}x_{234}^{b}x_{1234}^{c}x_{1324}^{-ab} \\&\cdot\left(x_{34}x_{134}^{a'}x_{234}^{b'}x_{1234}^{c'}x_{1324}^{-a'b'}\right)^{-1} \\ &= x_{123}^{\alpha - \alpha'}x_{124}^{\beta - \beta'}x_{134}^{a - a'}x_{234}^{b - b'}x_{1234}^{\gamma - \gamma' + c-c'}x_{1324}^{\alpha'\beta'-\alpha\beta + a'b' - ab}. \end{align*} Therefore $L$ can be undone by two crossing changes between $L_1$ and $L_2$ and two more between $L_3$ and $L_4$ if and only if $a_{1324}\in (a_{123},a_{124},a_{134},a_{234})$. Repeating this for every other pair of $(ij)$ and $(k\ell)$ with $\{i,j,k,\ell\}=\{1,2,3,4\}$ completes the classification of links with linking number zero with $n_h(L)=4$. The final conclusion, that any 4-component link with vanishing pairwise linking numbers can be undone in six crossing changes, is an immediate consequence of Theorem~\ref{upper bound theorem main}. \end{proof} \subsection{Links with one nonvanishing linking number.}\label{subsect: one nonvanishing linking} Theorem~\ref{thm:linking greater one} classifies the homotopy trivializing number of 4-component links with precisely one non-vanishing pairwise linking number. 
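For example (a quick check against the first table of Theorem~\ref{thm:linking greater one}): if $T=x_{12}x_{134}$, then $a_{134}=1\neq 0$ rules out $n_h(L)=1$, while $a_{234}=0$ satisfies the first condition in the $n_h(L)=3$ row, so $n_h(L)=3$. On the other hand, if $T=x_{12}x_{134}^{2}x_{234}^{2}x_{1324}$, then $a_{134}\cdot a_{234}=4\neq 0$ and $a_{1324}+a_{123}a_{124}=1\notin (a_{134},a_{234})=2\Z$, so $n_h(L)=5$.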
In order to control the number of cases, we permute components and change some orientations if needed to arrange that $\lk(L_1,L_2)>0$ and that all other pairwise linking numbers vanish. We will further break our proof into cases depending on $\lk(L_1,L_2)$. \begin{proof}[Proof of Theorem~\ref{thm:linking greater one} when $\lk(L_1,L_2)=1$] Let $L$ be a link with $\lk(L_1,L_2)=1$ and every other linking number vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then $T=x_{12}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. Notice first that the only way that $n_h(L)$ could be equal to 1 is if $T$ can be undone by a single crossing change. Since $\lk(L_1,L_2)=1$, this crossing change must be between $T_1$ and $T_2$. Thus, by Theorem~\ref{thm:nonvanishing linking} and Lemma~\ref{cor: nhL=1}, $T=V^{-1}x_{12}V = x_{12}x_{123}^\alpha x_{124}^\beta x_{1234}^\gamma x_{1324}^{-\alpha\beta}$ for some $\alpha,\beta, \gamma\in \Z$. The first result of the theorem follows. Similarly, $L$ can be undone in 3 crossing changes if and only if there are some $V, W, X\in \H(4)$ so that \begin{equation}\label{eqn three conjugates}T = V^{-1}x_{12}VW^{-1}x_{ij}W(X^{-1}x_{ij}X)^{-1}.\end{equation} The subcases in Theorem~\ref{thm:linking greater one} for a homotopy trivializing number of 3 are now proven by evaluating this expression for the six choices of $(ij)$. If $(ij)=(12)$, then by Lemma~\ref{cor: nhL=1}, \pref{eqn three conjugates} becomes $$T=x_{12}x_{123}^{\alpha+\alpha'-\alpha''} x_{124}^{\beta+\beta'-\beta''} x_{1234}^{\gamma+\gamma'-\gamma''} x_{1324}^{-\alpha\beta-\alpha'\beta'+\alpha''\beta''}.$$ We claim that $T$ can be realized as such a product if and only if $a_{134}=a_{234}=0$. The necessity of this condition is clear.
For sufficiency, note that if $a_{134}=a_{234}=0$, then $a_{123} = \alpha+\alpha'-\alpha''$, $a_{124} = \beta+\beta'-\beta''$, $a_{1234} = \gamma+\gamma'-\gamma''$ and $a_{1324} = -\alpha\beta-\alpha'\beta'+\alpha''\beta''$ is satisfied by setting $$\alpha = a_{123}+1, \alpha'=0, \alpha''=1, \beta=0,\beta'=a_{124}+a_{1324}, \beta''=a_{1324}, \gamma=a_{1234}, \text{and } \gamma'=\gamma''=0.$$ The cases of $(ij)$ being $(13)$, $(14)$, $(23)$, or $(24)$ are all highly similar. Each results in one of $a_{134}$ or $a_{234}$ vanishing. We address the case $(ij)=(13)$, in particular. In this case, Lemma~\ref{cor: nhL=1} transforms \pref{eqn three conjugates} into $$ T=x_{12}x_{123}^{\alpha+\alpha'-\alpha''}x_{124}^\beta x_{134}^{\beta'-\beta''}x_{1234}^{\gamma+\alpha'\beta'-\alpha''\beta''}x_{1324}^{-\alpha\beta+\gamma'-\gamma''}. $$ The necessity that $a_{234}=0$ is now automatic. To see its sufficiency, notice that setting $a_{123}=\alpha+\alpha'-\alpha''$, $a_{124}=\beta$, $a_{134}=\beta'-\beta''$, $a_{1234}=\gamma+\alpha'\beta'-\alpha''\beta''$ and $a_{1324} = -\alpha\beta+\gamma'-\gamma''$ can now be solved for $\alpha$, $\beta$, $\beta'$, $\gamma$ and $\gamma'$. The proof in the cases that $(ij)$ is one of $(14)$, $(23)$, or $(24)$ is identical. Finally in the case that $(ij)=(34)$, Lemma~\ref{cor: nhL=1} transforms \pref{eqn three conjugates} into $$ T=x_{12}x_{123}^{\alpha}x_{124}^\beta x_{134}^{\alpha'-\alpha''}x_{234}^{\beta'-\beta''}x_{1234}^{\gamma+\gamma'-\gamma''}x_{1324}^{-\alpha\beta-\alpha'\beta'+\alpha''\beta''}. $$ If we set $a_{123}=\alpha$, $a_{124}=\beta$, $a_{134}=\alpha'-\alpha''$, $a_{234}=\beta'-\beta''$, $a_{1234} = \gamma+\gamma'-\gamma''$, then $a_{1324}=-\alpha\beta-\alpha'\beta'+\alpha''\beta''$ becomes $a_{1324} + a_{123}a_{124} = a_{134}a_{234}-\beta'a_{134}-\alpha'a_{234}$, which admits a solution if and only if $a_{1324} + a_{123}a_{124} \in (a_{134}, a_{234})$.
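The sufficiency arguments above all reduce to solving small systems of exponent equations, which can be checked mechanically. The following sketch is our own sanity check (not part of the proof): it verifies one explicit solution of the system arising in the $(ij)=(12)$ case, where the helper \texttt{solve\_case\_12} and its particular parameter values are our own choices.

```python
import random

def solve_case_12(a123, a124, a1234, a1324):
    # One explicit solution of the exponent system for (ij) = (12):
    #   alpha + alpha' - alpha''  = a123
    #   beta  + beta'  - beta''   = a124
    #   gamma + gamma' - gamma''  = a1234
    #   -alpha*beta - alpha'*beta' + alpha''*beta'' = a1324
    # The particular values below are our own choice of parameters.
    alpha, alpha1, alpha2 = a123 + 1, 0, 1
    beta, beta1, beta2 = 0, a124 + a1324, a1324
    gamma, gamma1, gamma2 = a1234, 0, 0
    return alpha, alpha1, alpha2, beta, beta1, beta2, gamma, gamma1, gamma2

random.seed(0)
for _ in range(1000):
    a123, a124, a1234, a1324 = (random.randint(-50, 50) for _ in range(4))
    al, al1, al2, be, be1, be2, ga, ga1, ga2 = solve_case_12(a123, a124, a1234, a1324)
    assert al + al1 - al2 == a123
    assert be + be1 - be2 == a124
    assert ga + ga1 - ga2 == a1234
    assert -al * be - al1 * be1 + al2 * be2 == a1324
print("exponent system solved in all sampled cases")
```

The other subcases can be checked the same way by substituting the corresponding exponent equations.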
This completes the classification of 4-component links when $\lk(L_1,L_2)=1$, all other linking numbers vanishing, and $n_h(L)=3$. It remains only to show that any link with $\lk(L_1,L_2)=1$ and all other linking numbers vanishing can be undone in at most five crossing changes. By reordering the components, we may instead arrange that $\lk(L_1,L_3)=1$. We now appeal to Theorem~\ref{upper bound theorem main}. Since $Q(L)=2$ and $\Lambda(L)=1$, $n_h(L)\le 5$ as claimed. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:linking greater one} when $\lk(L_1,L_2)=2$] Next we address the case that $\lk(L_1,L_2)=2$ and all other pairwise linking numbers vanish. Thus, if $T$ is a string link with $\widehat{T}=L$ then \begin{equation}\label{eqn SL lk=2}T = x_{12}^2x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}.\end{equation} The only way that $L$ can be undone in exactly two crossing changes is if $$T=V^{-1}x_{12}VW^{-1}x_{12}W.$$ Applying Lemma~\ref{cor: nhL=1}, this is equivalent to $T$ having the form $$T=x_{12}^2x_{123}^{\alpha+\alpha'}x_{124}^{\beta+\beta'}x_{1234}^{\gamma+\gamma'}x_{1324}^{-\alpha\beta-\alpha'\beta'}.$$ Setting the exponents in these two expressions for $T$ equal to each other, we see $\alpha'=a_{123}-\alpha$, $\beta'=a_{124}-\beta$, $a_{134}=a_{234}=0$, $\gamma'=a_{1234}-\gamma$, and \begin{equation}\label{linking=2 nhl=2}a_{1324}+a_{123}a_{124}=-2\alpha\beta+a_{123}\beta+a_{124}\alpha.\end{equation} Thus, we need only see what choices of $a_{123}, a_{124},a_{1324}$ result in \pref{linking=2 nhl=2} having a solution. Note that if $a_{123}$ and $a_{124}$ are both even and $a_{1324}$ is odd, then we get a contradiction, thus the necessity of the condition that $a_{123}$ or $a_{124}$ is odd or $a_{1324}$ is even. To see the converse, notice that \pref{linking=2 nhl=2} is equivalent to $$2a_{1324}+a_{123}a_{124}=-(2\alpha-a_{124})(2\beta-a_{123}).
$$ If $a_{123}$ is odd then we may choose $\alpha$ and $\beta$ so that $2\beta-a_{123}=-1$ and $2\alpha-a_{124}=2a_{1324}+a_{123}a_{124}$. We may do similarly if $a_{124}$ is odd. If $a_{1324}$, $a_{123}$ and $a_{124}$ are all even then, dividing both sides by four, $$\frac{a_{1324}}{2}+\frac{a_{123}}{2}\frac{a_{124}}{2}=-\left(\alpha-\frac{a_{124}}{2}\right)\left(\beta-\frac{a_{123}}{2}\right). $$ We may again choose $\alpha$ and $\beta$ so that $\beta-\frac{a_{123}}{2}=-1$ and $\alpha-\frac{a_{124}}{2}=\frac{a_{1324}}{2}+\frac{a_{123}}{2}\frac{a_{124}}{2}$. This determines which links with $\lk(L_1,L_2)=2$ and no other nonvanishing linking numbers have $n_h(L)=2$. We now determine when a link with $\lk(L_1,L_2)=2$ and no other nonvanishing linking numbers can be undone in 4 crossing changes. This is the case if and only if $T$ factors as $$T=(Vx_{12}V^{-1})( Wx_{12}W^{-1}X x_{ij}X^{-1}(Yx_{ij}Y^{-1})^{-1}).$$ Each of these factors has $\lk(T_1,T_2)=1$. The first can be undone in a single crossing change and the second can be undone in three. We have already classified homotopy trivializing numbers for such links. Taking advantage of this classification, we factor $T$ as $$T=(x_{12}x_{123}^{\alpha}x_{124}^{\beta}x_{1234}^{a_{1234}}x_{1324}^{-\alpha\beta})(x_{12}x_{123}^{a_{123}-\alpha}x_{124}^{a_{124}-\beta}x_{134}^{a_{134}}x_{234}^{a_{234}}x_{1324}^{a_{1324}+\alpha\beta}).$$ The first of these terms is a conjugate of $x_{12}$. The second can be undone in three crossing changes if and only if one of the following holds: \begin{itemize} \item $a_{134}=0$, \item $a_{234}=0$, or \item $ a_{1324}+\alpha\beta+(a_{123}-\alpha)(a_{124}-\beta) \in (a_{134}, a_{234})$. \end{itemize} Each of the first two bullet points agrees with one of the conditions claimed by the theorem. We must determine when there is a choice of $\alpha,\beta\in \Z$ satisfying the third.
Expanding out, we see that this means that \begin{equation}\label{eqn: ln=2 nhl=4} a_{1324}+2\alpha\beta+a_{123}a_{124}-\alpha a_{124}-\beta a_{123} = xa_{134}+ya_{234} \end{equation} for some $\alpha, \beta, x, y\in \Z$. It immediately follows that if $a_{123}$, $a_{124}$, $a_{134}$, and $a_{234}$ are all even then so must $a_{1324}$ be. Thus, it remains only to show that if $a_{ijk}$ is odd for some $(ijk)$ or $a_{1324}$ is even then \pref{eqn: ln=2 nhl=4} is satisfied for some $\alpha$ and $\beta$. Some factoring reduces \pref{eqn: ln=2 nhl=4} to \begin{equation}\label{eqn: more on lk=2, nhl=4} a_{1324}+\frac{a_{123}a_{124}}{2}+(2\alpha-a_{123})\left(\beta-\frac{a_{124}}{2}\right) = xa_{134}+ya_{234}. \end{equation} If $a_{123}$ is odd and $a_{124}$ is even, then we may select $\alpha$ so that $2\alpha-a_{123}=1$ and $\beta$ so that $\beta-\frac{1}{2}a_{124} = xa_{134}+ya_{234}-a_{1324}-\frac{a_{123}a_{124}}{2}$. A similar analysis applies if $a_{123}$ is even and $a_{124}$ is odd. If both of $a_{123}$ and $a_{124}$ are odd, then multiplication by 2 yields $$ 2a_{1324}+{a_{123}a_{124}}+(2\alpha-a_{123})\left(2\beta-a_{124}\right) = 2xa_{134}+2ya_{234}. $$ Again, we may select $\alpha$ and $\beta$ so that $2\alpha-a_{123}=1$ and $2\beta-a_{124}=2xa_{134}+2ya_{234}-2a_{1324}-{a_{123}a_{124}}.$ If $a_{1324}$, $a_{123}$, $a_{124}$, $a_{134}$, and $a_{234}$ are all even then dividing \pref{eqn: more on lk=2, nhl=4} by two results in $$ \frac{a_{1324}}{2}+\frac{a_{123}}{2}\frac{a_{124}}{2}+\left(\alpha-\frac{a_{123}}{2}\right)\left(\beta-\frac{a_{124}}{2}\right) = \frac{1}{2}(xa_{134}+ya_{234}). $$ Again, we may select $\alpha$ so that $\left(\alpha-\frac{1}{2}a_{123}\right)=1$ and $\left(\beta-\frac{1}{2}a_{124}\right) = \frac{1}{2}(xa_{134}+ya_{234})-\frac{a_{1324}}{2}-\frac{a_{123}}{2}\frac{a_{124}}{2}.$ Finally, if either of $a_{134}$ or $a_{234}$ is odd then 2 is a unit in $\Z/(a_{134}, a_{234})$ so it has an inverse $\overline{2}$.
To solve \pref{eqn: more on lk=2, nhl=4} it suffices to find some $\alpha, \beta\in\Z/(a_{134}, a_{234})$ satisfying $$ -a_{1324}-\overline{2}\cdot a_{123}a_{124} \equiv (2\alpha-a_{123})(\beta-\overline{2}a_{124}) \mod (a_{134}, a_{234}). $$ This is satisfied by selecting $\alpha$ and $\beta$ so that $(2\alpha-a_{123})\equiv 1$ and $(\beta-a_{124}\overline{2})\equiv -a_{1324}-a_{123}a_{124}\cdot \overline{2}$. The proof that $n_h(L)\le 6= \Lambda(L)+2Q(L)$ now follows from the exact same application of Theorem~\ref{upper bound theorem main} as used in the proof when $\lk(L_1,L_2)=1$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:linking greater one} when $\lk(L_1,L_2)\ge 3$] We close by considering any link with $\lk(L_1,L_2)\ge 3$ and all other pairwise linking numbers equal to zero. Note that $n_h(L)=\Lambda(L)$ if and only if $$ T=\prod_{i=1}^{\lk(L_1,L_2)} W_i^{-1}x_{12}W_i. $$ Appealing to Lemma~\ref{cor: nhL=1}, any such $T$ will have the form \begin{equation}\label{lk>=3, Lambda=lk}T=x_{12}^{a_{12}}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{1234}^{a_{1234}}x_{1324}^{a_{1324}},\end{equation} as the theorem claims. Conversely, if $L$ has such a form, then let $a_{123}'\in\{0,1\}$ be the result of reducing $a_{123}$ mod 2. Then $$T=(x_{12}^{a_{12}-3})(x_{12}^{2}x_{123}^{a_{123}-a_{123}'-1}x_{124}^{a_{124}}x_{1234}^{a_{1234}}x_{1324}^{a_{1324}})(x_{12}x_{123}^{a_{123}'+1}).$$ The first of these factors is undone in $a_{12}-3$ crossing changes. Since $a_{123}-a_{123}'-1$ is odd, we have seen that the second can be undone in two crossing changes. Finally, the last is undone in one crossing change. It remains only to show that any link with $\lk(L_1,L_2)\ge 3$ can be undone in $\lk(L_1,L_2)+2$ crossing changes.
To do so, use the factorization $$T=(x_{12}^{a_{12}}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{1234}^{a_{1234}}x_{1324}^{a_{1324}})(x_{134}^{a_{134}}x_{234}^{a_{234}}).$$ We have just verified that the first of these factors is undone in $a_{12}$ crossing changes. The second is a string link with vanishing pairwise linking numbers which is undone in two crossing changes by Theorem~\ref{thm:nhl for linking number zero}. \end{proof} \subsection{Links with multiple nonvanishing linking numbers}\label{subsect: many nonvanishing linking} Theorem~\ref{thm:nonvanishing linking} classifies homotopy trivializing numbers of 4-component links with at least two nonvanishing pairwise linking numbers. Recall that we reorder and reorient the components as needed to ensure that $\lk(L_1,L_2)\ge |\lk(L_i,L_j)|$ for all $i,j$ and so that as many pairwise linking numbers as possible are positive. Similarly to Subsection~\ref{subsect: one nonvanishing linking}, we proceed by cases, sorted by the complexity of the pairwise linking numbers, starting with the case that $\lk(L_1,L_2)$ and $\lk(L_3, L_4)$ are the only nonvanishing linking numbers. \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when $\lk(L_1,L_2)=\lk(L_3,L_4)=1$.] Let $L$ be a 4-component link with $\lk(L_1,L_2)=\lk(L_3,L_4)=1$ and all other pairwise linking numbers vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then $T=x_{12}x_{34}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. In order for $L$ to be undone in precisely two crossing changes, it must be that $T$ factors as $T=(V^{-1}x_{12}V)(W^{-1}x_{34}W)$.
Lemma~\ref{cor: nhL=1} together with Table~\ref{commutator table} allows us to conclude that $$ \begin{array}{rcl}T&=& (x_{12}x_{123}^\alpha x_{124}^\beta x_{1234}^\gamma x_{1324}^{-\alpha\beta})(x_{34}x_{134}^{\alpha'}x_{234}^{\beta'} x_{1234}^{\gamma'} x_{1324}^{-\alpha'\beta'}) \\&=& x_{12}x_{34}x_{123}^\alpha x_{124}^\beta x_{134}^{\alpha'}x_{234}^{\beta'}x_{1234}^{\gamma+\gamma'+\alpha-\beta}x_{1324}^{-\alpha\beta-\alpha'\beta'}. \end{array}$$ The fact that $L$ can be undone in two crossing changes if and only if $a_{1324}=-a_{123}a_{124}-a_{134}a_{234}$ follows immediately. In order to see that any link with $\lk(L_1,L_2)=\lk(L_3,L_4)=1$ can be undone in four crossing changes, we appeal to Theorem~\ref{upper bound theorem main}, after permuting the components to minimize $Q(L)$, to see that $n_h(L)\le \Lambda(L)+2Q(L)=2+2$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when $\lk(L_1,L_2) =2$, $\lk(L_3, L_4)=1$.] Let $L$ be a 4-component link with $\lk(L_1,L_2)=2$, $\lk(L_3,L_4)=1$ and all other pairwise linking numbers vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then $T=x_{12}^2x_{34}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. $L$ can be undone in three crossing changes precisely when $T$ factors as $T=RS$ where $R$ has $\lk(R_1,R_2)=2$ and $n_h(R)=2$ and $S$ is a conjugate of $x_{34}$. Appealing to Theorem~\ref{thm:linking greater one} and Lemma~\ref{cor: nhL=1}, this happens if and only if $$ T=(x_{12}^2x_{123}^{\alpha} x_{124}^{\beta}x_{1234}^\gamma x_{1324}^{\delta})(x_{34}x_{134}^{\alpha'}x_{234}^{\beta'}x_{1234}^{\gamma'} x_{1324}^{-\alpha'\beta'}), $$ where either $\alpha$ or $\beta$ is odd or $\delta$ is even.
Appealing to Table~\ref{commutator table}, $$ T=x_{12}^2 x_{34} x_{123}^{\alpha} x_{124}^{\beta} x_{134}^{\alpha'}x_{234}^{\beta'} x_{1234}^{\gamma+\gamma'+\alpha-\beta} x_{1324}^{\delta-\alpha'\beta'}. $$ Thus $T$ can be put in such a form if and only if either $a_{123}=\alpha$ is odd, $a_{124}=\beta$ is odd, or $a_{1324}+a_{134}a_{234} = \delta$ is even. The fact that any such $L$ can be undone in five crossing changes follows from the same appeal to Theorem~\ref{upper bound theorem main} as in the previous argument. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when $\lk(L_1,L_2) =2$ and $\lk(L_3, L_4)=2$] Let $L$ be a 4-component link with $\lk(L_1,L_2)=2$, $\lk(L_3,L_4)=2$ and all other pairwise linking numbers vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then $T=x_{12}^2x_{34}^2x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. $L$ can be undone in four crossing changes precisely when $T$ factors as $T=RS$ where $R$ has $\lk(R_1,R_2)=2$, $\lk(R_3,R_4)=1$ and $n_h(R)=3$ and $S$ is a conjugate of $x_{34}$. Appealing to the case of Theorem~\ref{thm:nonvanishing linking} which we have already proven and to Lemma~\ref{cor: nhL=1}, this happens if and only if $T$ factors as $$ \begin{array}{rcl}T&=&(x_{12}^2x_{34}x_{123}^{\alpha} x_{124}^{\beta}x_{134}^\gamma x_{234}^\delta x_{1234}^\epsilon x_{1324}^{\zeta})(x_{34}x_{134}^{\gamma'}x_{234}^{\delta'}x_{1234}^{\epsilon'} x_{1324}^{-\gamma'\delta'}) \\&=& x_{12}^2x_{34}^2x_{123}^{\alpha}x_{124}^{\beta}x_{134}^{\gamma+\gamma'}x_{234}^{\delta+\delta'}x_{1234}^{\epsilon+\epsilon'+\alpha-\beta}x_{1324}^{\zeta-\gamma'\delta'} \end{array} $$ where \begin{itemize} \item $a_{123} = \alpha$ is odd or $a_{124} = \beta$ is odd, or \item $a_{1324}+a_{134}a_{234}+2\gamma\delta-\delta a_{134}-\gamma a_{234}$ is even.
\end{itemize} The latter bullet point is satisfied for some choice of $\gamma$ and $\delta$ in $\Z$ if and only if at least one of $a_{134}$ or $a_{234}$ is odd or $a_{1324}$ is even. We now close with the same appeal to Theorem~\ref{upper bound theorem main} to conclude $n_h(L)\le \Lambda(L)+2=6$. \end{proof} When $\lk(L_1,L_2)\ge 3$ and $\lk(L_3, L_4)\ge 1$ we make no assumptions about the vanishing of any other pairwise linking number. \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when $\lk(L_1,L_2) \ge 3$ and $\lk(L_3, L_4)\ge 1$] Let $L$ be a 4-component link with $\lk(L_1,L_2)\ge 3$, $\lk(L_3,L_4)\ge1$. We make no assumptions about any other linking numbers. After changing $\Lambda(L)-4$ crossings we can replace $L$ with a new link $L'$ with $\lk(L_1',L_2')=3$, $\lk(L_3',L_4')=1$ and all other linking numbers vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L'$. Then $$T=x_{12}^{3}x_{34}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}.$$ We need only factor $T$ as $T=RS$ where $\lk(R_1,R_2)=n_h(R)=3$ and $\lk(S_3,S_4)=n_h(S)=1$. String links satisfying these conditions are classified in Theorem~\ref{thm:linking greater one} and Lemma~\ref{cor: nhL=1} respectively. Motivated by these, we use Table~\ref{commutator table} to factor $T$ as $$T=(x_{12}^{3}x_{123}^{a_{123}}x_{124}^{a_{124}} x_{1234}^{a_{1234}-a_{134}+a_{234}}x_{1324}^{a_{1324}+a_{134}a_{234}}) (x_{34}x_{134}^{a_{134}}x_{234}^{a_{234}}x_{1324}^{-a_{134}a_{234}}),$$ completing the computation of the homotopy trivializing number for all 4-component links with $\lk(L_1,L_2)$ and $\lk(L_3, L_4)$ as their only nonvanishing linking numbers. \end{proof} This completes the analysis when $\lk(L_1,L_2)$ and $\lk(L_3, L_4)$ are the only nonvanishing pairwise linking numbers.
If $L$ has exactly two non-vanishing linking numbers and they both involve a shared component $L_i$, then up to reordering and reorienting, we assume that $\lk(L_1,L_2)\ge 1$, $\lk(L_1, L_3)\ge 1$ and that all other pairwise linking numbers vanish. \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when $\lk(L_1,L_2) \ge 1$ and $\lk(L_1, L_3)\ge 1$] Let $L$ be a 4-component link with $\lk(L_1,L_2)\ge 1$, $\lk(L_1,L_3)\ge 1$ and all other pairwise linking numbers vanishing. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then $T=x_{12}^{a_{12}}x_{13}^{a_{13}}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}$. Now $\Lambda(L)=n_h(L)$ if and only if $T$ can be written as a product of conjugates of $x_{12}$ and $x_{13}$. By Lemma~\ref{cor: nhL=1} and Table~\ref{commutator table}, it is clear that this will imply that $a_{234}=0$. Conversely, suppose that $a_{234}=0$. We begin by making $\Lambda(L)-2$ crossing changes so that $a_{12}=a_{13}=1$. Using Table~\ref{commutator table}, it follows that $T$ factors as $$T=(x_{12}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{1234}^{a_{1234}}x_{1324}^{-a_{123}a_{124}})(x_{13}x_{134}^{a_{134}}x_{1324}^{a_{1324}+a_{123}a_{124}-a_{124}}). $$ Lemma~\ref{cor: nhL=1} allows us to reduce this to a homotopy trivial link by two crossing changes. Finally, to see that $T$ can be undone in $\Lambda(L)+2$ crossing changes, notice that by reordering components we arrange that $\lk(L_1,L_4)\ge 1$ and $\lk(L_1,L_3)\ge 1$. Thus $Q(L)= 2$ and Theorem~\ref{upper bound theorem main} concludes that $n_h(L)\le \Lambda(L)+2$. \end{proof} It remains only to cover the case that at least three linking numbers of $L$ are nonzero. There are three relevant cases to consider (up to reordering). First, all of the linking numbers involving $L_1$ may be non-zero. Secondly, it may be that $\lk(L_1,L_2)$, $\lk(L_2, L_3)$, and $\lk(L_1, L_3)$ are nonzero.
Finally, $\lk(L_1,L_2)$, $\lk(L_2, L_3)$, and $\lk(L_3, L_4)$ might be nonzero. \begin{proof}[Proof of Theorem~\ref{thm:nonvanishing linking} when at least three linking numbers are nonzero] Let $L$ be a 4-component link for which $\lk(L_1,L_2)>0$, $\lk(L_1, L_3)>0$, $\lk(L_1,L_4)>0$, and all other pairwise linking numbers vanish. Let $T\in \H(4)$ satisfy $\widehat{T}=L$. Then \[T=x_{12}^{a_{12}}x_{13}^{a_{13}}x_{14}^{a_{14}}x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}} x_{234}^{a_{234}} x_{1234}^{a_{1234}}x_{1324}^{a_{1324}}.\] If $n_h(L)=\Lambda(L)$ then $T$ must be a product of $a_{ij}$ conjugates of $x_{ij}$ for $(ij)=(12)$, $(13)$, or $(14)$. A glance at Lemma~\ref{cor: nhL=1} reveals that any such product will have $a_{234}=0$. Conversely, if $a_{234}=0$ then we note that after $a_{14}$ crossing changes we can arrange that $a_{14}=0$. Theorem~\ref{thm:nonvanishing linking} now concludes that such a link can be undone in $a_{12}+a_{13}$ crossing changes. On the other hand, if $a_{234}\neq 0$ then, since $x_{234}^{a_{234}}=[x_{23},x_{34}^{a_{234}}]$, this factor can be undone in two crossing changes. After making these two crossing changes, we proceed as above for a total of $\Lambda(L)+2$ crossing changes. Now let $L$ be a link for which $\lk(L_1,L_2)$, $\lk(L_2,L_3)$, and $\lk(L_3,L_4)$ are all nonzero. Permute the components of $L$ by the permutation $(1,3,4,2)$. One now sees that $Q(L)=0$, so that Theorem~\ref{upper bound theorem main} completes the proof. Finally, let $L$ be a link for which $\lk(L_1,L_2)$, $\lk(L_1,L_3)$, $\lk(L_2,L_3)$ are all nonzero. Up to reversing orientations of some components, we may assume that with the possible exception of $\lk(L_2,L_3)$, these are all positive. First we change $\Lambda(L)-3$ crossings in order to arrange that $\lk(L_1,L_2)=\lk(L_1,L_3)=|\lk(L_2,L_3)|=1$ and that all other linking numbers vanish. Let $T$ be a string link with $\widehat{T}=L$.
Then $$ T=x_{12}x_{13}x_{23}^\epsilon x_{123}^{a_{123}}x_{124}^{a_{124}}x_{134}^{a_{134}}x_{234}^{a_{234}}x_{1234}^{a_{1234}}x_{1324}^{a_{1324}} $$ with $\epsilon=\pm1$. We use Table~\ref{commutator table} to verify the following factorization, $$ \begin{array}{l} T=(x_{12}x_{124}^{a_{124}}x_{1234}^{u})(x_{13}x_{134}^{a_{134}}x_{1324}^{v})(x_{23}x_{123}^{a_{123}\epsilon}x_{234}^{a_{234}\epsilon}x_{1324}^{a_{123}a_{124}})^\epsilon \end{array} $$ where $u$ and $v$ are chosen so that $a_{1234}=\epsilon a_{124}-\epsilon a_{134}+u$ and $a_{1324}=a_{124}(1-\epsilon)+a_{134}\epsilon +\epsilon a_{123}a_{124}+v$. Each of these factors is undone by one crossing change thanks to Lemma~\ref{cor: nhL=1}. \end{proof} \section{Links with large homotopy trivializing number}\label{sect: links with large n_h} We have shown that any $n$-component link $L$ with vanishing linking numbers can be reduced to a homotopy trivial link in $(n-1)(n-2)$ crossing changes. What needs further investigation is the sharpness of this bound. More precisely, for $n>4$ we do not know whether there exists an $n$-component link $L$ with vanishing pairwise linking numbers and $n_h(L)=(n-1)(n-2)$. In this section we make partial progress on this problem by exhibiting a sequence of links whose homotopy trivializing numbers grow quadratically in the number of components. Since the proof technique is different from what we have done so far in this paper, we begin by proving the following proposition. While it is a weaker result than our main result (Theorem~\ref{thm: large nhl using 4-component sublinks}), its proof is easier while similar in spirit, and it results in links whose homotopy trivializing numbers grow quadratically in $n$. \begin{proposition}\label{prop: lower bound warmup} Let $n\ge 3$ and $L$ be an $n$-component link with vanishing pairwise linking numbers and such that no $3$-component sublink of $L$ is homotopy trivial.
Then \[n_h(L)\geq 2\left\lfloor\frac{(n-1)^2}{4}\right\rfloor.\] \end{proposition} \begin{proof} Let $L$ be an $n$-component link with vanishing pairwise linking numbers but whose every 3-component sublink is not homotopy trivial. Let $S$ be any sequence of crossing changes reducing $L$ to a homotopy trivial link which realizes $n_h(L)$. We construct a graph $\Gamma$ which records the crossing changes in $S$. Let the vertex set of $\Gamma$ be $\{v_1,\dots, v_n\}$ and include the edge $e_{ij}$ from $v_i$ to $v_j$ in $\Gamma$ if $S$ includes a crossing change between the $i$'th and $j$'th components. Given this graph $\Gamma$, we create its complement, $\Gamma^c$, by keeping the vertex set $\{v_1,\dots, v_n\}$ and letting the edge set of $\Gamma^c$ be the complement of the edge set of $\Gamma$. Since every 3-component sublink $L_i\cup L_j\cup L_k$ is not homotopy trivial, it follows that at least one of $e_{ij}, e_{ik}, e_{jk}$ must be in $\Gamma$, and so cannot be in $\Gamma^c$. That is, $\Gamma^c$ contains no cycle of length 3. A classical theorem due to Mantel from extremal graph theory (see \cite{Mantel1907} or, for a more modern reference, \cite[Theorem 1.9]{GraphsAndDigraphs}) says that any graph with $n$ vertices and more than $\lfloor n^2/4 \rfloor$ edges contains a cycle of length 3. Thus, $\Gamma^c$ has at most $\lfloor n^2/4 \rfloor$ edges. As a consequence, $\Gamma$ must include at least ${n\choose 2} - \lfloor n^2/4 \rfloor$ edges. As $L$ has vanishing pairwise linking numbers, if $e_{ij}$ is an edge in $\Gamma$ then $S$ includes at least two crossing changes between $L_i$ and $L_j$. Thus, $n_h(L)\ge 2\left({n\choose 2} - \lfloor n^2/4 \rfloor\right)=2\left\lfloor {(n-1)^2}/{4} \right\rfloor$, where the equality follows from a direct case-wise analysis based on the parity of $n$. \end{proof} In brief, the proof above argues that since each 3-component sublink of $L$ is not homotopy trivial, it follows that the graph $\Gamma$ must contain certain edges.
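The counting identity invoked at the end of the proof, $2\left({n\choose 2}-\lfloor n^2/4\rfloor\right)=2\lfloor(n-1)^2/4\rfloor$, is the case-wise parity computation mentioned above; as a sanity check it can also be confirmed directly:

```python
from math import comb

# Check: 2*(C(n,2) - floor(n^2/4)) == 2*floor((n-1)^2/4) for all n >= 3.
# Python's // is floor division, so n*n//4 is floor(n^2/4).
for n in range(3, 500):
    assert 2 * (comb(n, 2) - n * n // 4) == 2 * ((n - 1) ** 2 // 4), n
print("identity holds for 3 <= n < 500")
```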
Our goal now is to strengthen this lower bound to a new bound whose proof instead considers 4-component sublinks. \begin{theorem} \label{thm: large nhl using 4-component sublinks} For any $n\ge 4$ there is a link $L$ with \(n_h(L) = 2\left\lceil\frac{1}{3}n(n-2)\right\rceil.\) \end{theorem} We put off the proof until the end of the section once we have built a bit more technology. In order to produce the needed examples we start by proving the existence of an $n$-component link $L$ whose every 4-component sublink $J$ has $n_h(J) = C_4=6$. We begin with the choice of $J$. \begin{example} \label{ex: large nhl 4-component} Consider the string link \[T=x^3_{123} x^3_{124} x^3_{134} x^3_{234} x_{1234}x_{1324},\] which is depicted in Figure \ref{fig:4_comp_J}. Note that the string link $T$ has vanishing pairwise linking numbers and $a_{123}=3$, $a_{124}=3$, $a_{134}=3$, $a_{234}=3$, $a_{1234}=1$, and $a_{1324}=1$. Therefore, by Theorem \ref{thm:nhl for linking number zero}, if $J=\widehat{T}$ then $n_h(J)=6.$ \begin{figure} \centering \begin{tikzpicture} \draw (0, 0) node[inner sep=0] {\includegraphics[width=0.75\linewidth]{Large_nhL_4component.png}}; \draw (-4.8, -0.35) node {$x_{123}^3$}; \draw (-2.85, -0.35) node {$x_{124}^3$}; \draw (-0.7, -0.35) node {$x_{134}^3$}; \draw (1.55, 0.35) node {$x_{234}^3$}; \draw (3.2, 0) node {$x_{1234}$}; \draw (4.85, 0) node {$x_{1324}$}; \draw (-6.5, -1.2) node {$J_1$}; \draw (-6.5, -0.4) node {$J_2$}; \draw (-6.5, 0.4) node {$J_3$}; \draw (-6.5, 1.2) node {$J_4$}; \end{tikzpicture} \caption{A string link whose closure $J$ has $n_h(J) = 6$.} \label{fig:4_comp_J} \end{figure} \end{example} \begin{proposition}\label{prop: example with good 4-component sublinks} For any $n\ge4$ there is an $n$-component link $L$ with pairwise linking number zero and whose every 4-component sublink $J$ has $n_h(J)=6$.
\end{proposition} \begin{proof} We require an $n$-component string link $T$ whose every $4$-component sublink is the string link of Example \ref{ex: large nhl 4-component} above. To be precise, let \[T = \prod_{1\leq i<j<k\leq n} x^3_{ijk} \prod_{1\leq i<j<k<l\leq n} x_{ijkl}\prod_{1\leq i<j<k<l\leq n} x_{ikjl}.\] Then the closure of every 4-component sublink of $T$ is the link $J$ of Example~\ref{ex: large nhl 4-component}. Therefore $\widehat{T}$ is the desired link. \end{proof} Consider now the $n$-component link $L$ of Proposition~\ref{prop: example with good 4-component sublinks}. Let $S$ be any sequence of crossing changes reducing $L$ to a homotopy trivial link and realizing $n_h(L)$. Now form a weighted graph $\Gamma$ with vertices $v_1, \dots, v_n$. We assign the edge from $v_i$ to $v_j$ a weight $\wt(v_i v_j)$ equal to half of the number of crossing changes between the components $L_i$ and $L_j$ in $S$. Recall that since $L$ has vanishing pairwise linking numbers, this number of crossing changes must be even. Each 4-component sublink of $L$ has homotopy trivializing number $n_h(J)=6$. Thus the subgraph of $\Gamma$ spanned by the four vertices corresponding to any 4-component sublink has total weight at least $3$. The following extremal graph theory result, which is slightly stronger than that stated in the introduction as Theorem~\ref{thm:min_weight_phi_n}, will now imply Theorem~\ref{thm: large nhl using 4-component sublinks}. \begin{theorem}\label{thm: Graph theorem} Define $\Phi_n$ to be the set of all graphs on $n$ vertices with non-negative integer weights on their edges such that each subgraph spanned by at least 4 vertices has total weight at least 3. Let $\phi_n$ denote the minimum total weight among all graphs in $\Phi_n$. For $n\geq 4$, $$\phi_n =\left\lceil\frac{1}{3}n(n-2)\right\rceil.$$ \end{theorem} We now gather together what we have to prove Theorem~\ref{thm: large nhl using 4-component sublinks}.
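Before turning to the proofs, the two-class construction that realizes this minimum (weight $2$ between vertices of one class, weight $1$ within the other, weight $0$ across classes, with class sizes $\lceil (n-1)/3\rceil$ and $n-\lceil (n-1)/3\rceil$) is easy to check numerically. The following sketch, with our own helper \texttt{build\_weights}, confirms both the 4-vertex condition and the total weight $\lceil n(n-2)/3\rceil$ for small $n$.

```python
from itertools import combinations

def build_weights(n):
    # Two-class construction: k = ceil((n-1)/3) vertices of type "a",
    # the remaining n - k of type "b"; weight 2 between two a's,
    # weight 1 between two b's, and weight 0 across the classes.
    k = -(-(n - 1) // 3)  # integer ceil((n-1)/3)
    kind = ["a"] * k + ["b"] * (n - k)
    return {
        (i, j): (2 if kind[i] == "a" else 1) if kind[i] == kind[j] else 0
        for i, j in combinations(range(n), 2)
    }

for n in range(4, 25):
    wt = build_weights(n)
    # every subgraph spanned by 4 vertices has total weight at least 3 ...
    assert all(
        sum(wt[e] for e in combinations(q, 2)) >= 3
        for q in combinations(range(n), 4)
    )
    # ... and the total weight equals ceil(n*(n-2)/3)
    assert sum(wt.values()) == -(-n * (n - 2) // 3)
print("construction verified for 4 <= n < 25")
```

This only verifies the upper-bound construction; the matching lower bound is what the inductive proof below establishes.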
\begin{proof}[Proof of Theorem~\ref{thm: large nhl using 4-component sublinks}] Let $L$ be the $n$-component link of Proposition \ref{prop: example with good 4-component sublinks}, and $S$ be any sequence of crossing changes transforming $L$ to a homotopy trivial link. Let $\Gamma$ be the weighted graph on vertices $v_1,\dots, v_n$ with weights given by setting $\wt(v_iv_j)$ equal to half the number of crossing changes in $S$ between $L_i$ and $L_j$. Then $\Gamma\in \Phi_n$ and so $\wt(\Gamma) \ge \left\lceil\frac{1}{3}n(n-2)\right\rceil$. The total weight of $\Gamma$ is equal to half the number of crossing changes in $S$. Thus $n_h(L)\ge 2\left\lceil\frac{1}{3}n(n-2)\right\rceil$, as we claimed. \end{proof} Before giving an inductive proof of Theorem~\ref{thm: Graph theorem}, similar to the proof of Mantel's theorem in \cite{Mantel1907}, we first introduce the following lemma, which is key to our inductive step. For any vertex $v$ in a weighted graph $G$, $d(v) = \sum_{u\neq v} \wt(uv)$ is the sum of the weights of the edges incident to $v$. \begin{lemma}\label{lem:vertex_degree_5comp} Let $n\ge 5$ and $G\in \Phi_n$. Then there is a vertex $v$ for which $d(v)\ge \frac{2}{3}n-\frac{4}{3}$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:vertex_degree_5comp}] We open with a special case. Suppose that there are three vertices $p,q,r$ with $\wt(pq)=\wt(pr)=\wt(qr)=0$. It follows then that for any $s\in V(G)\setminus\{p,q,r\}$, $\wt(ps)+\wt(qs)+\wt(rs)\ge 3$, and so $$d(p)+d(q)+d(r) = \sum_{s\notin \{p,q,r\}}\left(\wt(ps)+\wt(qs)+\wt(rs)\right)\ge 3(n-3). $$ In particular, then, the average of $d(p), d(q), d(r)$ is at least $n-3$, which is at least as large as $\frac{2}{3}n-\frac{4}{3}$ as long as $n\ge 5$. Thus, we may assume that no such triple of weight 0 edges exists. Suppose for the sake of contradiction that $d(v)< \frac{2}{3}n-\frac{4}{3}$ for every vertex $v$.
For any vertex $v$, set $$N_v=\{x\in V(G)\mid \wt(xv)=0\}.$$ Since $d(v)< \frac{2}{3}n-\frac{4}{3}$, it follows that $|N_v|\ge (n-1)-d(v)> \frac{1}{3}n+\frac{1}{3}$. Finally, note that if $x,y\in N_v$ and $\wt(xy)=0$, then $v,x,y$ span a triangle whose every edge has weight 0, putting us in the situation addressed at the start of the proof. Thus, $\wt(xy)\ge 1$ for every $x,y\in N_v$. We claim that there must exist an edge between two vertices of $N_v$ of weight exactly 1. Indeed, suppose $\wt(xy)\ge 2$ for every $x,y\in N_v$. For any $x\in N_v$, if we sum up only the weights of edges between $x$ and elements of $N_v$ we get $d(x)\geq\sum_{y\in N_v\setminus \{x\}} \wt(xy)\ge 2(|N_v|-1)>\frac{2}{3}n-\frac{4}{3}$, contradicting the assumption that $d(x)<\frac{2}{3}n-\frac{4}{3}$ for every vertex $x$. Thus, there exists some $p,q\in N_v$ such that $\wt(pq)=1$. Notice $v\in N_p\cap N_q$. For any $u\in N_p\cap N_q\setminus \{v\}$, consider the graph spanned by $p,q,u,v$ to see that $$3\le \wt(pq)+\wt(vu)+\wt(vp)+\wt(up)+\wt(vq)+\wt(uq)=1+\wt(uv)$$ and hence $\wt(uv)\ge 2$. In Figure \ref{fig:graph_problem}, we summarize what we have shown above. In particular, fix some vertex $v$. There are vertices $p,q\in N_v$ with $\wt(pq)=1$. For any $u\in N_p\cap N_q\setminus \{v\}$, $\wt(uv)\geq 2$. Additionally, by the same argument we used for $N_v$, for any $w\in (N_p\cup N_q)\setminus\{v\}$, $\wt(wv)\ge 1$. Thus, $$d(v)\ge |N_p\cup N_q\setminus (N_p\cap N_q)|+2|N_p\cap N_q\setminus \{v\}|=|N_p|+|N_q|-2.$$ Moreover, by the same argument we applied to $|N_v|$, we also have $|N_p|>\frac{1}{3}n+\frac{1}{3}$ and $|N_q|>\frac{1}{3}n+\frac{1}{3}$, and hence we conclude that $$d(v)\ge |N_p|+|N_q|-2> \frac{2}{3}n-\frac{4}{3}.$$ This contradicts the assumption that $d(v)< \frac{2}{3}n-\frac{4}{3}$ for every vertex $v$, completing the proof.
\end{proof} \begin{figure} \centering \begin{tikzpicture} \node (Npq) at (0, 2) {$N_p\cap N_q$}; \node (Nq) at (-1.5, 0.5) {$N_q$}; \node (Np) at (1.5, 0.5) {$N_p$}; \node (u) at (0, 1.5) {$u$}; \node (v) at (0, 0) {$v$}; \draw [dashed, gray, right] (u) -- (v) node [midway] {$\geq 2$}; \node (w1) at (-1.8, -0.5) {$w_q$}; \node (w2) at (1.8, -0.5) {$w_p$}; \draw [dashed, gray, below] (w1) -- (v) node [midway] {$\geq 1$}; \draw [dashed, gray, below] (w2) -- (v) node [midway] {$\geq 1$}; \node (p) at (-1.2, -2) {$p$}; \node (q) at (1.2, -2) {$q$}; \draw [dashed, gray, above] (p) -- (q) node [midway] {$1$}; \node (Nv) at (0, -2.5) {$N_v$}; \draw (2.6,-2.8) arc[start angle=30, end angle=150,radius=3cm]; \draw (0.6,2.5) arc[start angle=30, end angle=-120,radius=2.5cm]; \draw (-0.6,2.5) arc[start angle=150, end angle=330,radius=2.5cm]; \end{tikzpicture} \caption{An edge $(p,q)$ of weight 1 in $N_v$ and minimum weights between vertices in $N_p$ and $N_q$ with a vertex $v\in N_p\cap N_q$.} \label{fig:graph_problem} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:min_weight_phi_n}] We construct a graph on $n$ vertices $a_1,\dots, a_{k}, b_1,\dots, b_{\ell}$ where $k=\left\lceil\frac{n-1}{3}\right\rceil$ and $\ell = n-k=\left\lfloor \frac{2n+1}{3}\right\rfloor$. Set $\wt(a_i,a_j)=2$, $\wt(b_i,b_j)=1$ and $\wt(a_i,b_j)=0$ for all relevant $i,j$. 
\begin{figure} \centering \begin{tikzpicture} \node (a1) at (-5, 0.5) {$a_1$}; \node (a2) at (-3, -0.5) {$a_2$}; \node (a3) at (-3, 1.5) {$a_3$}; \draw [gray, below] (a1) -- (a2) node [midway] {$2$}; \draw [gray, above] (a1) -- (a3) node [midway] {$2$}; \draw [gray, right] (a2) -- (a3) node [midway] {$2$}; \node (b1) at (-0.6, 1) {$b_1$}; \node (b2) at (0, -1) {$b_2$}; \node (b3) at (2, -1) {$b_3$}; \node (b4) at (2.6, 1) {$b_4$}; \node (b5) at (1, 2) {$b_5$}; \draw [gray] (b1) -- (b2); \draw [gray] (b1) -- (b3); \draw [gray] (b1) -- (b4); \draw [gray] (b2) -- (b3); \draw [gray] (b2) -- (b4); \draw [gray] (b3) -- (b4); \draw [gray] (b1) -- (b5); \draw [gray] (b2) -- (b5); \draw [gray] (b3) -- (b5); \draw [gray] (b4) -- (b5); \end{tikzpicture} \caption{A graph with $n=8$ vertices with $k=3$, $\ell=5$, and total weight $\phi_8=2{3 \choose 2}+{5\choose 2}=16$.} \end{figure} We can now compute the total weight of any 4-vertex subgraph: $$ \begin{array}{ccc} \wt(\langle a_p,a_q,a_r,a_s\rangle)=12,& \wt(\langle a_p,a_q,a_r,b_s\rangle)=6,& \wt(\langle a_p,a_q,b_r,b_s\rangle)=3, \\ \wt(\langle a_p,b_q,b_r,b_s\rangle)=3,& \wt(\langle b_p,b_q,b_r,b_s\rangle)=6. \end{array} $$ Notice that each is at least $3$, so this graph lies in $\Phi_n$. Next note that $\wt(G) = 2\cdot {\left\lceil\frac{n-1}{3}\right\rceil\choose 2}+ {\left\lfloor \frac{2n+1}{3}\right\rfloor\choose 2}$. Since $\phi_n$ is the minimum total weight amongst all such graphs, $\phi_n\le 2\cdot {\left\lceil\frac{n-1}{3}\right\rceil\choose 2}+ {\left\lfloor \frac{2n+1}{3}\right\rfloor\choose 2}$. This upper bound is equal to $\left\lceil\frac{1}{3}n(n-2)\right\rceil$ by a straightforward case analysis on the residue class of $n$ modulo $3$. We prove the reverse inequality by induction. It is obvious that $\phi_4=3$, since the total weight of a 4-vertex graph is the same as the weight of its only 4-vertex subgraph. Hence the theorem holds for $n=4$. Now fix some $n\ge 5$.
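As a sanity check (independent of the proof), the construction above, the fact that its every 4-vertex subgraph has weight at least $3$, the closed form for its total weight, and the ceiling identity used in the inductive step can all be verified by brute force for small $n$; the helper names below are ours:

```python
from itertools import combinations

def ceil_div(a, b):
    # ceiling of a/b for positive integers
    return -(-a // b)

def extremal_weights(n):
    """Edge weights of the construction: k = ceil((n-1)/3) vertices of
    type 'a' (pairwise weight 2), n - k of type 'b' (pairwise weight 1),
    and weight 0 between the two types."""
    k = ceil_div(n - 1, 3)
    wt = {}
    for i, j in combinations(range(n), 2):
        if j < k:
            wt[(i, j)] = 2          # a-a edge
        elif i >= k:
            wt[(i, j)] = 1          # b-b edge
        else:
            wt[(i, j)] = 0          # a-b edge
    return wt

def subgraph_weight(wt, verts):
    return sum(wt[pair] for pair in combinations(sorted(verts), 2))

for n in range(4, 13):
    wt = extremal_weights(n)
    # every 4-vertex subgraph has weight >= 3, so the graph lies in Phi_n
    assert all(subgraph_weight(wt, s) >= 3
               for s in combinations(range(n), 4))
    # total weight matches ceil(n(n-2)/3)
    assert sum(wt.values()) == ceil_div(n * (n - 2), 3)
    # ceiling identity used in the inductive step
    assert (ceil_div((n - 1) * (n - 3), 3) + ceil_div(2 * n - 4, 3)
            == ceil_div(n * (n - 2), 3))
```

For $n=8$ this reproduces the graph of the figure, with total weight $16$.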
As $\phi_n$ is defined to be a minimum, there is some graph $G$ on $n$ vertices, whose every 4-vertex subgraph has weight at least 3, and for which $\wt(G)=\phi_n$. By Lemma \ref{lem:vertex_degree_5comp}, there is some vertex $v$ with $d(v)\ge \frac{2}{3}n-\frac{4}{3}$; since all edge weights are integers, so is $d(v)$, and hence $d(v)\ge \left\lceil\frac{2n-4}{3}\right\rceil$. Set $G'$ to be the $n-1$ vertex subgraph spanned by $V(G)\setminus \{v\}$. We may inductively assume that $\wt(G')\ge \phi_{n-1}=\left\lceil\frac{1}{3}(n-1)(n-3)\right\rceil $. Thus, $$ \phi_n=\wt(G) = \wt(G')+d(v)\ge\phi_{n-1}+d(v)\ge \left\lceil\frac{1}{3}(n-1)(n-3)\right\rceil + \left\lceil\frac{2n-4}{3}\right\rceil.$$ That the rightmost term in the above inequality is precisely equal to $\left\lceil\frac{1}{3}n(n-2)\right\rceil$ follows from a casewise argument depending on the residue class of $n$ modulo $3$. Indeed, suppose $n\equiv 0\pmod{3}$. Then $n=3k$ for some integer $k$ and we may evaluate directly, \[ \left\lceil\frac{1}{3}(n-1)(n-3)\right\rceil + \left\lceil\frac{2n-4}{3}\right\rceil =\left\lceil\frac{1}{3}(3k-1)(3k-3)\right\rceil + \left\lceil\frac{6k-4}{3}\right\rceil = 3k^2-2k \] yet also \[ \left\lceil\frac{1}{3}n(n-2)\right\rceil =\left\lceil\frac{1}{3}3k(3k-2)\right\rceil = 3k^2-2k. \] The argument that $\left\lceil\frac{1}{3}(n-1)(n-3)\right\rceil + \left\lceil\frac{2n-4}{3}\right\rceil = \left\lceil\frac{1}{3}n(n-2)\right\rceil$ is entirely analogous in the cases $n\equiv 1,2\pmod{3}$. Therefore $\phi_n\ge \left\lceil\frac{1}{3}n(n-2)\right\rceil$ for all $n$; combined with the matching upper bound established above, this completes the proof. \end{proof} \bibliographystyle{plain} \bibliography{biblio} \end{document}
2412.18361v1
http://arxiv.org/abs/2412.18361v1
On a generalized Monge-Ampère equation on closed almost Kähler surfaces
\documentclass{amsart} \input{symbols} \usepackage{enumitem} \usepackage[dvipsnames]{xcolor} \usepackage[colorlinks]{hyperref} \hypersetup{ linkcolor=BrickRed, citecolor=Green, filecolor=Mulberry, urlcolor=NavyBlue, menucolor=BrickRed, runcolor=Mulberry } \usepackage[ noabbrev, capitalise, nameinlink, ] {cleveref} \crefname{lem}{Lemma}{Lemma} \crefname{prop}{Proposition}{Proposition} \usepackage{todonotes} \newcommand{\Zhang}[1]{\todo[color=yellow!10, linecolor=black!50!yellow]{\textbf{Z:} #1}} \newcommand{\TODO}[1]{\todo[color=green!10, linecolor=black!50!green, inline]{\textbf{TODO:} #1}} \newcommand{\TOC}[2]{\todo[color=red!10, linecolor=black!50!red, inline]{\textbf{TOCHANGE #2:} #1}} \newtheoremstyle{bfnoteonly}{}{}{\itshape}{}{\bfseries}{.}{ }{\thmnote{#3}} \theoremstyle{bfnoteonly} \newtheorem*{extthm}{} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem*{thm*}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{rem}[thm]{Remark} \newtheorem{conj}[thm]{Conjecture} \newtheorem*{conj*}{Conjecture} \newtheorem*{rem*}{Remark} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \numberwithin{equation}{section} \usepackage{diffcoeff}[=v4] \diffdef {} { op-order-sep = 0 mu } \DeclareMathOperator{\dist}{dist} \newcommand\Ttilde[1]{\stackrel{\sim}{\smash{\Theta^{#1}}\rule{0pt}{1.2ex}}} \begin{document} \title[Generalized Monge-Amp\`{e}re equation]{On a generalized Monge-Amp\`{e}re equation on closed almost K\"{a}hler surfaces} \author{Ken Wang} \thanks{Supported by NSFC Grants 1197112} \address{School of Mathematical Sciences, Fudan University, Shanghai 100433, China} \email{[email protected]} \author{Zuyi Zhang} \thanks{} \address{Beijing International Center for Mathematical Research, China} \email{[email protected]} \author{Tao Zheng} \thanks{Supported by NSFC Grants 12371078}
\address{School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China} \email{[email protected]} \author{Peng Zhu} \thanks{Supported by NSFC Grants 12171417} \address{School of Mathematics and Physics, Jiangsu University of Technology, Changzhou, Jiangsu 213001, China} \email{[email protected]} \keywords{almost K\"ahler form, $\mc{D}^+_J$ operator, generalized Monge-Amp\`ere equation} \subjclass{53D35; 53C56; 53C65; 32Q60} \begin{abstract} We show the existence and uniqueness of solutions to a generalized Monge-Amp\`{e}re equation on closed almost K\"{a}hler surfaces, where the equation depends only on the underlying almost K\"{a}hler structure. As an application, we prove Donaldson's conjecture for tamed almost complex 4-manifolds. \end{abstract} \maketitle \section{Introduction} Yau's Theorem \cite{Yau78} for the Calabi conjecture \cite{Calabi57}, proved more than forty years ago, occupies a central place in the theory of K\"ahler manifolds and has wide-ranging applications in geometry and mathematical physics \cite{FuYau08,Yau77}. \par The theorem is equivalent to finding a K\"ahler metric within a given K\"ahler class that has a prescribed Ricci form. In other words, this involves solving the complex Monge-Amp\`ere equation for K\"ahler manifolds: \begin{equation} (\omega + \sqrt{-1}\partial_J \bar{\partial}_J \varphi)^n = e^{f}\omega^n \end{equation} for a smooth real function $\varphi$ satisfying $\omega + \sqrt{-1}\partial_J \bar{\partial}_J \varphi > 0,$ and $\sup_M \varphi= 0$, where $n$ is the complex dimension of $M$ and $f$ is any smooth real function with \begin{equation*} \int_M e^f \omega^n = \int_M \omega^n. \end{equation*} \par There has been significant interest in extending Yau's Theorem to non-K\"ahler settings. One extension of Yau's Theorem, initiated by Cherrier \cite{Cherrier87} in the 1980s, involves removing the closedness condition $d\omega = 0$. See also Tosatti-Weinkove \cite{TosattiW10}, and Fu-Yau \cite{FuYau08}.
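To fix ideas, in the lowest-dimensional integrable case ($n=1$, a flat torus) the complex Monge-Amp\`ere equation is linear: $\sqrt{-1}\partial_J\bar{\partial}_J\varphi$ is a constant multiple of $(\Delta\varphi)\,\omega$, so the equation reduces to a Poisson equation that a Fourier method solves directly. The sketch below uses our own sign and normalization conventions and is for illustration only; it also shows why the volume normalization on $e^f$ is necessary (the right-hand side must have mean zero):

```python
import numpy as np

# Toy model: on C/Z^2 with the flat form omega = dx ^ dy, we take
# sqrt(-1) del delbar phi = (1/2) (Lap phi) dx ^ dy (a sign convention
# fixed here for illustration), so the n = 1 Monge-Ampere equation
# omega + sqrt(-1) del delbar phi = e^f omega is the LINEAR Poisson
# equation Lap(phi) = 2 (e^f - 1).
N = 64
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")

ef = np.exp(0.3 * np.cos(2 * np.pi * X) * np.sin(4 * np.pi * Y))
ef /= ef.mean()                 # volume normalization: int e^f = int 1
rhs = 2.0 * (ef - 1.0)          # mean-zero right-hand side

k = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
symbol = -(KX**2 + KY**2)       # Fourier symbol of the Laplacian
symbol[0, 0] = 1.0              # avoid dividing by zero on the mean mode

phi_hat = np.fft.fft2(rhs) / symbol
phi_hat[0, 0] = 0.0             # normalize phi to have mean zero
phi = np.real(np.fft.ifft2(phi_hat))

# the equation holds to near machine precision on the grid
lap_phi = np.real(np.fft.ifft2(symbol * np.fft.fft2(phi)))
assert np.max(np.abs(lap_phi - rhs)) < 1e-10
```

Without the line `ef /= ef.mean()` the mean mode of the right-hand side would be nonzero and no periodic solution could exist, mirroring the compatibility condition in the theorem above.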
The Monge-Amp\`ere equation on almost Hermitian manifolds was studied by Chu-Tosatti-Weinkove \cite{Chu20,ChuTW19}. A different extension on symplectic manifolds was explored by Weinkove \cite{Weinkove07} and Tosatti-Weinkove-Yau \cite{TosattiWY08}, who studied the Calabi-Yau equation for 1-forms. Delano\"e \cite{Delanoe96} and Wang-Zhu \cite{WangZ10} considered a Gromov type Calabi-Yau equation. \par This paper focuses on a generalized Monge-Amp\`ere equation on almost K\"ahler surfaces and establishes a uniqueness and existence theorem for it. Here is the main theorem: \begin{thm} \label{thm:1} Suppose that $(M,\omega,J,g)$ is a closed almost K\"ahler surface. Then there exists a unique solution $\varphi \in C^{\infty}(M,J)_0$ of the generalized Monge-Amp\`ere equation \begin{equation} \label{eq:1.2} (\omega + \mathcal{D}_J^+(\varphi))^2 = e^{f}\omega^2, \end{equation} for $\varphi$ satisfying $\omega + \mc{D}_J^+(\varphi) > 0$, where $f$ is any smooth real function with \begin{equation*} \int_M \omega^2 = \int_M e^{f}\omega^2, \end{equation*} and there is an \emph{a priori} $C^{\infty}$ bound on $\varphi$ depending only on $\omega$, $J$, and $f$. \end{thm} We explain some of the notation used in the main theorem. The operator $\mathcal{D}_J^+$, introduced by Tan-Wang-Zhou-Zhu \cite{TanWZZ22}, generalizes $\del_J\bar{\del}_J$: if $J$ is integrable, then $\mathcal{D}_J^+$ reduces to $2\sqrt{-1}\del_J\bar{\del}_J$. Using the operator $\mathcal{D}_J^+$, Tan-Wang-Zhou-Zhu \cite{TanWZZ22} resolved the Donaldson tameness question. Moreover, Wang-Wang-Zhu \cite{WangWZ23} derived a Nakai-Moishezon criterion for almost complex 4-manifolds. Recall that for a closed almost K\"ahler surface $(M,\omega,J,g)$, the inequality $0 \leq h_J^- \leq b^+ - 1$ holds (\cite{TanWZZ15, TanWZZ22}).
Observe that the intersection of $H_J^+$ and $H_g^+$ is spanned by $\{\omega, f_i \omega + d_J^-(\nu_i + \bar{\nu}_i)\}$, where $\nu_i \in \Omega_J^{0,1}(M)$ and \begin{equation*} \int_M f_i \omega^2 = 0 \end{equation*} for $1 \leq i \leq b^+ - h_J^- - 1$. Note that the kernel of $\mathcal{W}_J$ is spanned by $\Set{1, f_i,\ 1 \leq i \leq b^+- h_J^- -1}$. Let $ C^{\infty}(M,J)_0 := C^{\infty}(M)_0 \setminus \mathrm{Span}\ \{f_i,\ 1 \leq i \leq b^+- h_J^- -1\},$ where \begin{equation*} C^{\infty}(M)_0 := \Set{f \in C^{\infty}(M) \given \int_M f\omega^2 = 0}. \end{equation*} Thus, $C^{\infty}(M,J)_0 \subset C^{\infty}(M)_0$; they are equal if $h_J^- = b^+-1$. Donaldson posed the following conjecture (see Donaldson \cite[Conjecture~1]{Donaldson06} or Tosatti-Weinkove-Yau \cite[Conjecture~1.1]{TosattiWY08}): \begin{conj} Let $M$ be a compact 4-manifold equipped with an almost complex structure $J$ and a taming symplectic form $\Omega$. Let $\sigma$ be a smooth volume form on $M$ with \begin{equation*} \int_M \sigma = \int_M \Omega^2. \end{equation*} Then if $\Tilde{\omega}$ is an almost K\"ahler form with $[\Tilde{\omega}] = [\Omega]$ solving the Calabi-Yau equation \begin{equation} \Tilde{\omega}^2 = \sigma, \end{equation} there are $C^{\infty}$ \emph{a priori} bounds on $\Tilde{\omega}$ depending only on $\Omega$, $J$, and $M$. \end{conj} Now let $\sigma = e^{f} \Omega^2$ and $\Omega = F + d_J^- (v + \bar{v})$, where $v \in \Omega_J^{0,1}(M)$. If $h_J^- = b^+ - 1$, then $\omega := \Omega - d(v+\bar{v}) = F - d_J^+(v + \bar{v})$ is an almost K\"ahler form on $M$ (cf. Tan-Wang-Zhou-Zhu \cite[Theorem~1.1]{TanWZZ22} and Wang-Wang-Zhu \cite[Theorem~4.3]{WangWZ23}).
We define \begin{equation*} \log \frac{\Omega^2}{\omega^2} = f_0, \end{equation*} so that $\sigma = e^{f}\Omega^2 = e^{f + f_0} \omega^2$ and $$\int_M \omega^2 =\int_M \Omega^2.$$ By \cref{thm:1}, there exists a $\varphi \in C^{\infty}(M)_0$ solving the generalized Monge-Amp\`ere equation \begin{equation*} e^{f + f_0} \omega^2 = \Tilde{\omega}^2 = (\omega + \mathcal{D}_J^+(\varphi))^2, \end{equation*} and there is a $C^{\infty}$ bound on $\varphi$ depending only on $\Omega$, $J$, and $f$. \par Hence, the following corollary of Theorem \ref{thm:1} gives a positive answer to Donaldson's conjecture in the case $h_J^- = b^+ - 1$: \begin{cor} Suppose that $(M,J)$ is a closed almost complex $4$-manifold with $h_J^- = b^+ - 1$, where $J$ is tamed by a symplectic form $\Omega = F + d_J^-(v + \bar{v})$, $F$ is a positive $J$-$(1,1)$ form, and $v \in \Omega_J^{0,1}(M)$. Then $\omega = \Omega - d(v + \bar{v}) = F - d_J^+(v+\bar{v})$ is an almost K\"ahler form on $M$. If \begin{equation*} \int_M e^{f} \Omega^2 = \int_M \Omega^2, \end{equation*} then there exists $\varphi \in C^{\infty}(M)_0$ such that $\Tilde{\omega} = \omega + \mathcal{D}_J^+(\varphi)$ solves the equation \begin{equation*} \Tilde{\omega}^2 = e^{f+f_0} \omega^2 = e^{f} \Omega^2, \end{equation*} where \begin{equation*} \int_M \Tilde{\omega}^2 = \int_M e^{f} \Omega^2, \end{equation*} and there is a $C^{\infty}$ \emph{a priori} bound on $\varphi$ depending only on $\Omega$, $J$, $f$, and $M$.
\end{cor} \begin{rem} It is natural to consider a generalized $\del_J \overline{\del}_J$ operator \begin{equation*} \mc{D}_J^+: C^{\infty}(M^{2n}) \to \Omega_J^+(M^{2n}) \end{equation*} on an almost K\"ahler manifold $(M^{2n},\omega,J)$ of complex dimension $n \geq 3$, and study the generalized Monge-Amp\`ere equation: \begin{equation} (\omega + \mc{D}_J^+(\varphi))^n = e^f \omega^n, \end{equation} where $\varphi,f \in C^{\infty}(M^{2n})$ satisfy \begin{equation} \int_{M^{2n}} \omega^n = \int_{M^{2n}} e^f \omega^n. \end{equation} \end{rem} Section 2 introduces the notation for almost K\"ahler manifolds and defines the operator $\mathcal{D}_J^+$. Additionally, a local theory for the generalized Calabi-Yau equation is presented. In Section 3, the uniqueness part of the main theorem is proved. Finally, Section 4 provides a proof for the existence part of the main theorem. \section{Preliminaries} Let $(M, J)$ be an almost complex manifold of dimension $2n$. A Riemannian metric $g$ on $M$ is said to be compatible with the almost complex structure $J$ if \begin{equation*} g(JX, JY) = g(X, Y) \end{equation*} for all tangent vectors $X,Y \in TM$. In this case, $(M, J, g)$ is called an almost Hermitian manifold. \par The almost complex structure $J$ induces a decomposition of the complexified tangent space $T^{\C}M$: \begin{equation*} T^{\C}M = T'M \oplus T''M, \end{equation*} where $T'M$ and $T''M$ are the eigenspaces of $J$ corresponding to the eigenvalues $\sqrt{-1}$ and $-\sqrt{-1}$, respectively. \par A local unitary frame $\{e_1, \ldots, e_n\}$ can be chosen for $T'M$, with the dual coframe denoted by $\{\theta^1, \ldots, \theta^n\}$. Using this coframe, the metric $g$ can be expressed (summing over repeated indices) as \begin{equation*} g = \theta^i \otimes \overline{\theta^i} + \overline{\theta^i} \otimes \theta^i.
\end{equation*} The fundamental form $\omega$ is defined by $\omega(\cdot, \cdot) = g(J\cdot,\cdot)$ and can be written as \begin{equation*} \omega = \sqrt{-1}\theta^i \wedge \overline{\theta^i}. \end{equation*} The manifold $(M, \omega, J, g)$ is called almost K\"ahler if $d\omega = 0$. \par The almost complex structure $J$ also acts as an involution on the bundle of real two-forms via \begin{equation*} \mc{J}:\alpha(\cdot,\cdot) \mapsto \alpha(J\cdot,J\cdot). \end{equation*} This action induces a splitting of $\Lambda^2$ into $J$-invariant and $J$-anti-invariant two-forms: \begin{equation*} \Lambda^2 = \Lambda_J^+ \oplus \Lambda_J^-. \end{equation*} Let $\Omega_J^+$ and $\Omega_J^-$ denote the spaces of the $J$-invariant and $J$-anti-invariant forms, respectively. We use $\mathcal{Z}$ to denote the space of closed 2-forms and $\mathcal{Z}_J^{\pm} := \mathcal{Z} \cap \Omega_J^{\pm}$ for the corresponding subspaces of closed forms. \par The following operators are defined as: \begin{align*} d_J^+ := P_J^+d: \Omega_{\R}^1 \to \Omega_J^+, \\ d_J^- := P_J^-d: \Omega_{\R}^1 \to \Omega_J^-, \end{align*} where $P_J^{\pm} = \frac{1}{2}(1\pm \mc{J})$ are the algebraic projections on $\Omega_{\R}^2(M)$. \\ \begin{prop} Let $(M,J,F,g)$ be a closed Hermitian 4-manifold. Then \begin{equation*} d_J^+: \Lambda_{\R}^1 \otimes L_1^2(M) \to \Lambda_J^{1,1} \otimes L^2(M) \end{equation*} has closed range. \end{prop} Li and Zhang \cite{LiZhang09} introduced the $J$-invariant and $J$-anti-invariant cohomology subgroups $H_J^{\pm}$ of $H^2(M;\R)$ as follows: \begin{defn} The $J$-invariant, respectively, $J$-anti-invariant cohomology subgroups $H_J^{\pm}$ are defined by $$ H_J^{\pm} := \Set{\mathfrak{a} \in H^2(M,\R) \given \exists \alpha \in \mathcal{Z}_J^{\pm} \text{ such that } [\alpha] = \mathfrak{a}}. $$ An almost complex structure $J$ is said to be $C^{\infty}$-pure if $H_J^+ \cap H_J^- = \{ 0 \}$, respectively $C^{\infty}$-full if $H_J^+ + H_J^- = H^2(M;\R)$.
\end{defn} In the case of (real) dimension 4, this gives a decomposition of $H^2(M)$: \begin{prop}[\cite{DLZ10}] If $M$ is a closed almost complex 4-manifold, then any almost complex structure $J$ on $M$ is $C^{\infty}$-pure and full, i.e., $$ H^2(M;\R) = H_J^+ \oplus H_J^-. $$ Let $h^+_J$ and $h_J^-$ denote the dimensions of $H^+_J$ and $H_J^-$, respectively. Then $b^2 = h^+_J + h_J^-$ , where $b^2$ is the second Betti number. \end{prop} It is well-known that the self-dual and anti-self-dual decomposition of 2-forms is induced by the Hodge operator $\hodge_g$ of a Riemannian metric $g$ on a 4-dimensional manifold $M$: \begin{equation*} \Lambda^2 = \Lambda_g^+ \oplus \Lambda_g^-. \end{equation*} Let $\Omega^\pm_g$ denote the spaces of smooth sections of $\Lambda^{\pm}_g$. The Hodge-de Rham Laplacian, $$ \Delta_g=dd^*+d^*d:\Omega^2(M)\rightarrow\Omega^2(M), $$ where $d^*=-\hodge_g d \hodge_g$ is the codifferential operator with respect to the metric $g$, commutes with Hodge star operator $\hodge_g$. Consequently, the decomposition also holds for the space $\mathcal{H}_g$ of harmonic 2-forms. By Hodge theory, this induces a cohomology decomposition determined by the metric $g$: $$ \mathcal{H}_g=\mathcal{H}_g^+\oplus\mathcal{H}_g^-. $$ We can further define the operators \begin{equation*} d^\pm_g := P_g^{\pm}d:\Omega^1_\R \to \Omega^\pm_g, \end{equation*} where $d$ is the exterior derivative $d : \Omega^1_\R \to \Omega^2_\R$, and $P^\pm_g := \frac{1}{2}(1 \pm \hodge_g)$ are the algebraic projections. The following Hodge decompositions hold: \begin{equation*} \Omega^+_g = \mathcal{H}^+_g \oplus d^+_g(\Omega_{\R}^1), \quad \Omega^-_g=\mathcal{H}^-_g \oplus d^-_g(\Omega_{\R}^1). \end{equation*} These decompositions are related by \cite{TanWZZ22} \begin{align*} \Lambda_J^+ &= \R \omega \oplus \Lambda_g^-,\\ \Lambda_g^+ &= \R \omega \oplus \Lambda_J^-. \end{align*} In particular, any $J$-anti-invariant 2-form in 4 dimensions is self-dual. 
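The two pointwise relations above are elementary linear algebra on $\R^4$ and are easy to verify directly. The sketch below encodes $\hodge_g$ and the involution $\mc{J}$ as matrices on the basis $\{e^{12},e^{13},e^{14},e^{23},e^{24},e^{34}\}$ of $\Lambda^2(\R^4)$, for the standard metric and orientation and the standard $J$ with $Je_1=e_2$, $Je_3=e_4$ (our own encoding, for illustration only):

```python
import numpy as np

# Basis of Lambda^2(R^4): e12, e13, e14, e23, e24, e34 (orthonormal).
# Hodge star: *e12 = e34, *e13 = -e24, *e14 = e23 (and symmetrically).
star = np.zeros((6, 6))
for i, j, s in [(0, 5, 1.0), (1, 4, -1.0), (2, 3, 1.0)]:
    star[i, j] = star[j, i] = s

# Involution alpha -> alpha(J., J.) for J e1 = e2, J e3 = e4:
# e12 and e34 are fixed, e13 <-> e24, e14 <-> -e23.
J2 = np.zeros((6, 6))
J2[0, 0] = J2[5, 5] = 1.0
J2[4, 1] = J2[1, 4] = 1.0
J2[3, 2] = J2[2, 3] = -1.0

I = np.eye(6)
omega = np.array([1.0, 0, 0, 0, 0, 1.0])   # fundamental form e12 + e34

def col_span(*mats):
    return np.linalg.matrix_rank(np.column_stack(mats))

assert np.allclose(star @ star, I) and np.allclose(J2 @ J2, I)
# dimensions: dim Lambda_g^+ = 3, dim Lambda_J^+ = 4 in real dimension 4
assert round(np.trace((I + star) / 2)) == 3
assert round(np.trace((I + J2) / 2)) == 4
# Lambda_J^+ = R omega (+) Lambda_g^-  (the two spans coincide)
LJp, Lgm = (I + J2) / 2, (I - star) / 2
assert col_span(LJp) == 4 == col_span(omega.reshape(6, 1), Lgm)
assert col_span(LJp, omega.reshape(6, 1), Lgm) == 4
# Lambda_g^+ = R omega (+) Lambda_J^-
Lgp, LJm = (I + star) / 2, (I - J2) / 2
assert col_span(Lgp) == 3 == col_span(omega.reshape(6, 1), LJm)
assert col_span(Lgp, omega.reshape(6, 1), LJm) == 3
```

In particular, the $-1$ eigenspace of $\mc{J}$ is contained in the $+1$ eigenspace of $\hodge_g$, which is the pointwise version of the statement that $J$-anti-invariant 2-forms are self-dual.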
Therefore, any closed $J$-anti-invariant 2-form is harmonic and self-dual. This identifies the space $H_J^-$ with $\mathcal{Z}_J^-$ and, further, with the set $\mathcal{H}_g^{+,\omega^\perp}$ of harmonic self-dual forms that are pointwise orthogonal to $\omega$. \\ Lejmi \cite{Lejmi10E,Lejmi10S} first recognized $\mathcal{Z}^-_J$ as the kernel of an elliptic operator on $\Omega^-_J$: \begin{prop}[\cite{Lejmi10E}] \label{prop:3} Let $(M,\omega,J,g)$ be a closed almost Hermitian 4-manifold. Define the following operator \begin{align*} P: \Omega^-_J &\to \Omega^-_J\\ \psi&\mapsto P^-_J(dd^*\psi). \end{align*} Then $P$ is a self-adjoint, strongly elliptic linear operator with a kernel consisting of $g$-self-dual-harmonic, $J$-anti-invariant 2-forms. Hence, \begin{equation*} \Omega^-_J=\ker P \oplus d^-_J\Omega^1_\R. \end{equation*} \end{prop} By using the operator $P$ defined in \cref{prop:3}, Tan-Wang-Zhou-Zhu \cite{TanWZZ22} introduced the $\mc{D}^+_J$ operator: \begin{defn} Let $(M,\omega,J,g)$ be a closed almost Hermitian 4-manifold. Set $$ L_2^2(M)_0:=\{f\in L_2^2(M)\mid \int_M f\,d\mu_g=0\}. $$ Define \begin{align*} \mathcal{W}_J: L^2_2(M)_0 &\longrightarrow \Lambda^1_\mathbb{R}\otimes L^2_1(M), \\ f &\longmapsto Jdf+d^*(\eta^1_f+\overline{\eta}^1_f), \end{align*} where $\eta^1_f \in \Lambda^{0,2}_J \otimes L^2_2(M)$ satisfies $$ d^-_J\mathcal{W}_J(f)=0. $$ Define \begin{align*} \mathcal{D}^+_J: L^2_2(M)_0 &\longrightarrow \Lambda^{1,1}_J\otimes L^2(M), \\ f &\longmapsto d\mathcal{W}_J(f). \end{align*} When $(M,\omega,J,g)$ is an almost K\"ahler surface, such a function $f$ is called the almost K\"ahler potential with respect to the almost K\"ahler metric $g$. \end{defn} The operator $\mc{D}_J^+$ has closed range as well (cf. \cite{TanWZZ22}): \begin{prop} Suppose $(M,\omega,J,g)$ is a closed almost K\"ahler surface. Then $\mc{D}_J^+: L_2^2(M)_0 \to \Lambda_J^+ \otimes L^2(M)$ has closed range.
\end{prop} The remainder of this section is devoted to a local theory of a generalized Calabi-Yau equation suggested by Gromov \cite{Delanoe96,WangZ10}. \par Observe that the generalized Monge-Amp\`ere equation \labelcref{eq:1.2} is equivalent to the following Calabi-Yau equation: \begin{equation} \label{eq:3.3} (\omega + du)^2 = e^f \omega^2 \end{equation} for $u \in \Omega_\R^1(M)$ with $d_J^- u = 0$, by letting $u = \mc{W}_J(\varphi)$. \begin{defn} Suppose $(M,\omega,J,g)$ is a closed almost K\"ahler surface. The sets $A,B,A_+$ and $B_+$ are defined as follows: \begin{alignat*}{2} &A & &:=\ \Set{u \in \Omega_\R^1(M) \given d_J^- u = 0, \quad d^* u = 0}, \\ &B & &:=\ \Set{f \in C^{\infty}(M) \given \int_M f\omega^2 = \int_M \omega^2}, \\ &A_+ & &:=\ \Set{u \in A \given \omega + du > 0}, \\[0.5em] &B_+ & &:=\ \Set{f \in B \given f > 0 \text{ on } M}. \end{alignat*} \end{defn} Let $\omega(\phi) = \omega + d\phi$. Since $$ \int_M(\omega(\phi))^2=\int_M\omega^2 $$ for $\phi\in A$, the operator $\mc{F}$, defined by \begin{equation*} \mc{F}(\phi)\omega^2 = (\omega(\phi))^2, \end{equation*} sends $A$ into $B$. \par By a direct calculation, for any $u \in A_+$, the tangent space $T_u A_+$ at $u$ is $A$. Given any $\phi \in A$, we define \begin{equation} L(u)(\phi) = \diff*{\mc{F}(u+t\phi)}{t}[t=0]. \end{equation} According to \cite{Delanoe96,WangZ10}, $L(u)$ is a linear elliptic system on $A$. Hence, we get the following lemma (cf. \cite[Proposition~1]{Delanoe96}, \cite[Lemma~2.5]{WangZ10}): \begin{lem} Suppose that $(M,\omega,J,g)$ is a closed almost K\"ahler surface. Then the restricted operator \begin{equation*} \mc{F}|_{A_+}: A_+ \to B_+ \end{equation*} is of elliptic type on $A_+$. \end{lem} Obviously, $A_+ \subset A$ is an open convex set. As done in \cite{WangZ10}, \begin{equation*} \mc{F}|_{A_+}: A_+ \to B_+ \end{equation*} is injective. \\ In summary, by nonlinear analysis \cite{Aubin98}, the following result (cf.
Delano\"e \cite[Theorem~2]{Delanoe97} or Wang-Zhu \cite[Proposition~2.6]{WangZ10}) is true: \begin{prop} \label{prop:4} The restricted operator \begin{equation*} \mc{F}|_{A_+}: A_+ \to \mc{F}(A_+) \subset B_+ \end{equation*} is a diffeomorphism. \end{prop} \begin{rem} In fact, $\mc{F}(A_+) = B_+$ is equivalent to the existence theorem for the generalized Monge-Amp\`ere equation \labelcref{eq:1.2} on closed almost K\"ahler surfaces (cf. Delano\"e \cite{Delanoe96}, Wang-Zhu \cite{WangZ10}). \end{rem} \section{Uniqueness Theorem for the generalized Monge-Amp\`ere equation} This section establishes the uniqueness part of the main theorem. Throughout this section, we assume that $(M, \omega, J, g)$ is a closed almost K\"ahler surface. If $\omega_1 = \omega + \mc{D}_J^+(\varphi) > 0$ for some $\varphi$, the metric $g_1(\cdot, \cdot)$ is defined by $g_1(\cdot, \cdot) = \omega_1(\cdot, J \cdot)$. Let $\hodge_g$ and $\hodge_{g_1}$ denote the Hodge star operators corresponding to the metrics $g$ and $g_1$, respectively. \par Suppose $\varphi_0\in C^\infty(M,J)_0$ satisfies the following equation \begin{equation*} \omega_1 \wedge \mc{D}_J^+(\varphi_0) = (\omega + \mc{D}_J^+(\varphi)) \wedge \mc{D}_J^+(\varphi_0) = 0. \end{equation*} This implies that $P_{g_1}^+ \mc{D}_J^+(\varphi_0) = 0$ since $$ \Lambda_J^+ = \R \omega_1 \oplus d_{g_1}^-(\Omega^1(M)). $$ Thus \begin{equation*} \mc{D}_J^+(\varphi_0) = d\mc{W}_J(\varphi_0) = d_{g_1}^-\mc{W}_J(\varphi_0). \end{equation*} But, since $\mc{D}_J^+(\varphi_0)=d\mc{W}_J(\varphi_0)$ is exact, Stokes' theorem gives \begin{align*} 0 = \int_M \mc{D}_J^+(\varphi_0) \wedge \mc{D}_J^+(\varphi_0) &= \int_M d_{g_1}^-\mc{W}_J(\varphi_0) \wedge d_{g_1}^-\mc{W}_J(\varphi_0) \\ &= - \norm{d_{g_1}^-\mc{W}_J(\varphi_0)}_{g_1}^2, \end{align*} hence \begin{equation} \mc{D}_J^+(\varphi_0) = d\mc{W}_J(\varphi_0) = 0. \end{equation} Since \begin{equation} d^* \mc{W}_J(\varphi_0) = 0, \end{equation} we have $\mc{W}_J(\varphi_0) = 0$. Hence $\varphi_0$ is a constant.
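The sign in the computation above, $\int_M \beta\wedge\beta = -\norm{\beta}^2$ for an anti-self-dual form $\beta$, is the pointwise identity $\alpha\wedge\alpha = \big(\abs{\alpha^+}^2 - \abs{\alpha^-}^2\big)\dl\vol_g$ on an oriented Riemannian 4-manifold. A quick check in the standard basis of $\Lambda^2(\R^4)$ (our own matrix encoding, for illustration only):

```python
import numpy as np

# Basis e12, e13, e14, e23, e24, e34 of Lambda^2(R^4), orthonormal.
# The same symmetric matrix encodes both the Hodge star and the wedge
# pairing Q(a, b) = coefficient of e1234 in a ^ b, since a ^ b = <a, *b> vol.
star = np.zeros((6, 6))
for i, j, s in [(0, 5, 1.0), (1, 4, -1.0), (2, 3, 1.0)]:
    star[i, j] = star[j, i] = s

I = np.eye(6)
rng = np.random.default_rng(0)
for _ in range(100):
    a = rng.normal(size=6)
    a_plus = (I + star) / 2 @ a      # self-dual part
    a_minus = (I - star) / 2 @ a     # anti-self-dual part
    # a ^ a = (|a^+|^2 - |a^-|^2) e1234
    assert np.isclose(a @ star @ a, a_plus @ a_plus - a_minus @ a_minus)
    # an anti-self-dual form wedges with itself non-positively
    assert a_minus @ star @ a_minus <= 1e-12
```

Integrating the pointwise identity over $M$ gives exactly the norm computation used in the uniqueness argument.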
\par Now suppose that there are two solutions $\varphi_1$ and $\varphi_2$, i.e., \begin{equation*} (\omega + \mathcal{D}_J^+(\varphi_1))^2 = (\omega + \mathcal{D}_J^+(\varphi_2))^2 = e^{f}\omega^2 . \end{equation*} For each $t \in [0,1]$, set $\varphi_t = t \varphi_1 + (1-t) \varphi_2$. Since the integrand below has an antiderivative taking the same value $e^f\omega^2$ at $t=0$ and $t=1$, \begin{equation*} \int_0^1 \diff*{(\omega + \mathcal{D}_J^+(\varphi_t))^2}{t} \dl3t = 0. \end{equation*} A direct calculation of $\diff*{(\omega + \mathcal{D}_J^+(\varphi_t))^2}{t}$ shows \begin{equation*} 0=\int_0^1 \diff*{(\omega + \mathcal{D}_J^+(\varphi_t))^2}{t} \dl3t=(2\omega + \mathcal{D}_J^+(\varphi_1+\varphi_2)) \wedge \mc{D}_J^+(\varphi_1 - \varphi_2). \end{equation*} Since $\omega + \frac{1}{2}\mathcal{D}_J^+(\varphi_1+\varphi_2)$ is the average of the two positive forms $\omega + \mathcal{D}_J^+(\varphi_1)$ and $\omega + \mathcal{D}_J^+(\varphi_2)$, the argument at the beginning of this section implies that $\varphi_1 - \varphi_2$ is a constant. Thus, as in the K\"ahler case \cite{Calabi57}, we obtain a uniqueness theorem for the generalized Monge-Amp\`ere equation: \begin{thm}\label{thm:3.1} The generalized Monge-Amp\`ere equation \labelcref{eq:1.2} on a closed almost K\"ahler surface has at most one solution up to a constant. \end{thm} \section{Existence Theorem for the generalized Monge-Amp\`ere equation} In this section, we first establish estimates for the solution $\varphi$; the existence part of the main theorem is proved at the end of the section. Consider a closed almost K\"ahler surface $(M, \omega, J, g)$. Recall that the Calabi-Yau equation \labelcref{eq:3.3} is equivalent to the generalized Monge-Amp\`ere equation \begin{equation*} (\omega + d\mathcal{W}_J(\varphi))^2 = e^{f}\omega^2, \end{equation*} where \begin{equation*} \int_M \omega^2 = \int_M e^{f}\omega^2, \end{equation*} $d\mathcal{W}_J(\varphi) = \mathcal{D}_J^+(\varphi)$ and $d^{\hodge} \mathcal{W}_J(\varphi) = 0$. Assume $$ \omega_1^2=e^f\omega^2.
$$ We now define a function $\varphi \in C^{\infty}(M)_0$ as follows \begin{equation*} -\frac{1}{2}\Delta_g \varphi = \frac{\omega \wedge (\omega_1 - \omega)}{\omega^2}, \end{equation*} where $\Delta_g$ denotes the Laplacian associated with the Levi-Civita connection with respect to the almost K\"ahler metric $g$. The existence of $\varphi$ follows from elementary Hodge theory; it is uniquely determined up to the addition of a constant. Since $$ - \omega \wedge dJd\varphi = \frac{1}{2}\Delta_g \varphi \omega^2 $$ for any almost K\"ahler form $\omega$ associated with $g$ and any function $\varphi$, it follows by Lejmi's Theorem (\cref{prop:3}) that there exists $\sigma(\varphi) \in \Omega_J^-$ satisfying the following system: \begin{equation} \label{eq:4.21} \begin{dcases} d_J^- J d\varphi + d_J^- d^* \sigma(\varphi) = 0 \\ \omega \wedge dd^* \sigma(\varphi) = 0. \end{dcases} \end{equation} Hence, \begin{equation*} \begin{aligned} \omega_1 - \omega = \mc{D}_J^+(\varphi) &= dJ d\varphi + dd^* \sigma(\varphi) \\ & = dd^* (\varphi \omega) + dd^* \sigma(\varphi). \end{aligned} \end{equation*} Therefore $\mc{W}_J(\varphi)$ can be rewritten as $d^* (\varphi \omega) + d^* \sigma(\varphi)$. Then \begin{equation*} \begin{dcases} d \mc{W}_J(\varphi) = \omega_1 - \omega \\ d^* \mc{W}_J(\varphi) = 0. \end{dcases} \end{equation*} Let $\omega_1 = \omega + \mc{D}_J^+(\varphi)$, where both $\omega_1$ and $\omega$ are symplectic forms with $[\omega_1] = [\omega]$. 
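The pointwise simultaneous diagonalization invoked in the next step is simply the spectral theorem for the positive Hermitian matrix representing $g_1$ in a $g$-unitary frame; a minimal numerical sketch (hypothetical data, for illustration only):

```python
import numpy as np

# At a point p, g is normalized to the identity and g1 is represented by
# a positive Hermitian 2x2 matrix; diagonalizing g1 by a g-unitary matrix
# produces the eigenvalues a1 <= a2 of the pointwise normal form.
rng = np.random.default_rng(1)

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
G1 = A @ A.conj().T + 0.1 * np.eye(2)     # a positive Hermitian matrix

a, U = np.linalg.eigh(G1)                 # a = (a1, a2), U unitary
assert np.all(a > 0) and a[0] <= a[1]
assert np.allclose(U.conj().T @ U, np.eye(2))          # frame stays g-unitary
assert np.allclose(U.conj().T @ G1 @ U, np.diag(a))    # g1 = diag(a1, a2)
# determinant and trace are frame-independent: e.g. det gives a1 * a2
assert np.isclose(np.linalg.det(G1).real, a[0] * a[1])
```

The frame-independence of the determinant and trace is what makes the eigenvalue computations of the next lemma coordinate-free.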
For any $p \in M$, by the Darboux Theorem, we may assume, without loss of generality, that on a Darboux coordinate neighborhood $U_p$: \begin{equation} \label{eq:4.1} \begin{aligned} \omega(p) &= \sqrt{-1}(\theta^1 \wedge \overline{\theta^1} + \theta^2 \wedge \overline{\theta^2}), \\ g(p) &= 2(|\theta^1|^2 + |\theta^2|^2), \end{aligned} \quad \begin{aligned} \omega_1(p) &= \sqrt{-1}(a_1 \theta^1 \wedge \overline{\theta^1} + a_2 \theta^2 \wedge \overline{\theta^2}), \\ g_1(p) &= 2(a_1 |\theta^1|^2 + a_2 |\theta^2|^2), \end{aligned} \end{equation} where $0 < a_1 \leq a_2$ (using simultaneous diagonalization). \begin{lem} \label{lem:1} For any point $p$ in an almost K\"ahler surface $M$, using the coordinates in \cref{eq:4.1}, we have \begin{equation*} e^{f(p)} = a_1 a_2 \leq \abs{g_1(p)}_g^2, \quad \abs{d \mc{W}_J(\varphi)(p)}_g^2 = 2 [(a_1-1)^2 + (a_2-1)^2], \end{equation*} and \begin{equation*} {\Delta_g \varphi}(p) = 2 - (a_1 + a_2) \leq 2(1 - e^{f(p)/2}) < 2. \end{equation*} \end{lem} \begin{proof} Since \begin{equation*} \begin{aligned} \dl \vol_{g_1}|_p = \frac{\omega_1^2(p)}{2!} &= -a_1 a_2 \theta^1 \wedge \overline{\theta^1} \wedge \theta^2 \wedge \overline{\theta^2} \\ &= -e^{f(p)} \theta^1 \wedge \overline{\theta^1} \wedge \theta^2 \wedge \overline{\theta^2}\ (\text{by } \labelcref{eq:1.2}), \end{aligned} \end{equation*} then {$e^{f(p)}=a_1 a_2\le2(a_1^2+a_2^2)=\abs{g_1(p)}_g^2$}. The others can be obtained by direct calculations using \labelcref{eq:4.1}; the bound on $\Delta_g\varphi(p)$ follows from the AM--GM inequality $a_1 + a_2 \geq 2\sqrt{a_1 a_2} = 2e^{f(p)/2}$. \end{proof} Consider a family of symplectic forms on the almost K\"ahler surface $(M,\omega,J,g)$ \begin{equation*} \omega_s = (1-s)\omega + s\omega_1, \quad s \in [0,1]. \end{equation*} Then, $\omega_0 = \omega$, $\omega_{\frac{1}{2}} = \frac{1}{2}(\omega+\omega_1)$. Moreover, since $2\omega_{\frac{1}{2}} \pm (\omega_1 - \omega)$ equals $2\omega_1$ or $2\omega$, we have \begin{equation} \label{eq:4.2} -2\omega_{\frac{1}{2}} < \omega_1 - \omega < 2\omega_{\frac{1}{2}}. \end{equation} Let $g_s(\cdot,\cdot) = \omega_s(\cdot,J\cdot)$ and $d^{\hodge_s} = - \hodge_{g_s} d \hodge_{g_s}$.
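In the diagonalizing coordinates, the inequalities of \cref{lem:1} (with the bound on $\Delta_g\varphi$ coming from the AM--GM inequality $a_1+a_2\ge 2\sqrt{a_1a_2}$) and the eigenvalue form of \labelcref{eq:4.2} reduce to elementary facts about two positive numbers; a randomized check, not part of the proof:

```python
import random

random.seed(0)
for _ in range(10_000):
    a1 = random.uniform(1e-3, 10.0)
    a2 = random.uniform(a1, 10.0)            # 0 < a1 <= a2
    ef = a1 * a2                             # e^{f(p)} = a1 * a2
    assert ef <= 2 * (a1**2 + a2**2)         # e^{f(p)} <= |g1(p)|_g^2
    # AM-GM: a1 + a2 >= 2 sqrt(a1 a2), so
    # Delta_g phi(p) = 2 - (a1 + a2) <= 2 (1 - e^{f(p)/2}) < 2
    assert 2 - (a1 + a2) <= 2 * (1 - ef**0.5) + 1e-12
    assert 2 * (1 - ef**0.5) < 2
    # eigenvalue form of (4.2): |a_i - 1| < 1 + a_i since a_i > 0
    assert abs(a1 - 1) < 1 + a1 and abs(a2 - 1) < 1 + a2
```

Equality in the AM--GM bound occurs exactly when $a_1=a_2$, i.e. when $\omega_1(p)$ is a multiple of $\omega(p)$.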
Define the almost K\"ahler potentials $\varphi_s$ \cite{Weinkove07} by \begin{equation} \label{eq:4.3} -\frac{1}{2} \Delta_{g_s} \varphi_s = \frac{\omega_s \wedge (\omega_1 -\omega)}{\omega_s^2}. \end{equation} Notice that $\varphi_0 = \varphi$. By \cref{prop:3}, since $\omega_1 - \omega \in \Omega_J^+ \cap d(\Omega^1)$, it is easy to see that \begin{equation*} \omega_1 - \omega = \mc{D}_J^+(\varphi_s) = dJd\varphi_s + dd^{\hodge_s}\sigma(\varphi_s),\ s \in [0,1], \end{equation*} where \begin{equation*} d_J^-Jd\varphi_s + d_J^-d^{\hodge_s}\sigma(\varphi_s) = 0. \end{equation*} Combining \labelcref{eq:4.2} and \labelcref{eq:4.3} gives \begin{equation} \label{eq:4.4} -4 < \Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}} < 4. \end{equation} By the product rule, \begin{equation*} \Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}}^2 = 2\varphi_{\frac{1}{2}} \Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}} + 2\abs{\nabla_{g_\frac{1}{2}} \varphi_{\frac{1}{2}}}^2, \end{equation*} where $\nabla_{g_{\frac{1}{2}}}$ is the Levi-Civita connection with respect to the metric $g_{\frac{1}{2}}$. Then \begin{equation*} 2\abs{\varphi_{\frac{1}{2}}} \Delta_{g_\frac{1}{2}} \abs{\varphi_{\frac{1}{2}}} + 2\abs{\nabla_{g_\frac{1}{2}} \abs{\varphi_{\frac{1}{2}}}}^2=\Delta_{g_\frac{1}{2}} \abs{\varphi_{\frac{1}{2}}}^2=\Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}}^2=2\varphi_{\frac{1}{2}} \Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}} + 2\abs{\nabla_{g_\frac{1}{2}} \varphi_{\frac{1}{2}}}^2. \end{equation*} Hence, by the Kato inequality $\abs{\nabla_{g_\frac{1}{2}} \varphi_{\frac{1}{2}}} \geq \abs{\nabla_{g_\frac{1}{2}} \abs{\varphi_{\frac{1}{2}}}}$, we get \begin{equation*} \Delta_{g_\frac{1}{2}} \abs{\varphi_{\frac{1}{2}}} \geq \frac{\varphi_{\frac{1}{2}}}{\abs{\varphi_{\frac{1}{2}}}} \Delta_{g_\frac{1}{2}} \varphi_{\frac{1}{2}} > -4. \end{equation*} \begin{lem} \label{lem:3} Let $p > 1$ be a real number.
Then \begin{equation*} \int_M \abs[\Big]{d\abs{\varphi_{\frac{1}{2}}}^{\frac{p}{2}}}_g^2 \dl \vol_g \leq \frac{8p^2}{p-1} \max_{x \in M} \frac{\omega_{\frac{1}{2}}^2}{\omega^2} \int_M \abs{\varphi_{\frac{1}{2}}}^{p-1} \dl \vol_g. \end{equation*} \end{lem} \begin{proof} A direct calculation shows that \begin{equation} \label{eq:4.5} \abs[\Big]{d\abs{\varphi_{\frac12}}^{\frac{p}{2}}}_{g_{\frac{1}{2}}}^2 \dl \vol_{g_{\frac{1}{2}}} = -d\abs{\varphi_{\frac12}}^{\frac p2} \wedge Jd\abs{\varphi_{\frac 12}}^{\frac p2} \wedge \omega_{\frac12}. \end{equation} By substituting \labelcref{eq:4.5} and the equation $d\abs{\varphi_{\frac12}}^{\frac p2} = \frac{p}{2}\abs{\varphi_{\frac12}}^{\frac p2 - 1}d\abs{\varphi_{\frac12}}$, we find \begin{equation*} \int_M \abs[\Big]{d\abs{\varphi_{\frac12}}^{\frac{p}{2}}}_{g_{\frac{1}{2}}}^2 \dl \vol_{g_{\frac{1}{2}}} = -\frac{p^2}{p-1} \int_M d\abs{\varphi_{\frac12}}^{p-1} \wedge Jd\abs{\varphi_{\frac12}} \wedge \omega_{\frac12}. \end{equation*} Applying Stokes' theorem gives \begin{equation*} \int_M d\abs{\varphi_{\frac12}}^{p-1} \wedge Jd\abs{\varphi_{\frac12}} \wedge \omega_{\frac12} = -\int_M \abs{\varphi_{\frac12}}^{p-1} dJd\abs{\varphi_{\frac12}} \wedge \omega_{\frac12}. \end{equation*} Combining these with the equation $\omega_{\frac12} \wedge dJd\abs{\varphi_{\frac12}} = -\frac{1}{2} \Delta_{g_{\frac12}} \abs{\varphi_{\frac12}} \omega_{\frac12}^{2}$, inequality \labelcref{eq:4.4}, and the inequality $\abs{d\varphi_{\frac{1}{2}}}_g^2 \leq 2\abs{d\varphi_{\frac{1}{2}}}_{g_{\frac{1}{2}}}^2$, we obtain the result. \end{proof} We now give a zero-order estimate for $\varphi_{\frac{1}{2}}$. \begin{prop} \label{prop:6} There is a constant $C$ depending only on $M$, $\omega$, $J$, and $f$ such that \begin{equation*} \norm{\varphi_{\frac{1}{2}}}_{C^0(g)} \leq C(M,\omega,J,f). \end{equation*} \end{prop} \begin{proof} Recall that $\varphi_{\frac12} \in C^{\infty}(M)_0$.
Then for $p=2$, by applying \cref{lem:3}, the Sobolev embedding, and the Poincar\'e inequality, we obtain \begin{equation*} \norm{\varphi_{\frac12}}_{L^{2}(g)} \leq C_1(M,\omega,J,f) \max_{x \in M} \frac{\omega_{\frac{1}{2}}^2}{\omega^2} \norm{\varphi_{\frac12}}_{L^{1}(g)}. \end{equation*} The Moser iteration gives \begin{equation*} \norm{\varphi_{\frac12}}_{C^0(g)} \leq C_2(M,\omega,J,f) \max_{x \in M} \frac{\omega_{\frac{1}{2}}^2}{\omega^2} \norm{\varphi_{\frac12}}_{L^{1}(g)}. \end{equation*} Since $\omega_1 = \omega + \mc{D}_J^+(\varphi)$ is a solution of the Calabi-Yau equation \labelcref{eq:3.3}, $\omega_1^2 = e^f \omega^2$ and $e^f \in \mc{F}(A_+)$. By \cref{prop:4}, there exists a unique element $\phi\in A_+$ such that $\mathcal{F}(\phi)=e^f$. Because \[ \omega(\mc{W}_J(\varphi))^2=(\omega+d\mc{W}_J(\varphi))^2=e^f\omega^2=\mc{F}(\phi)\omega^2, \] one gets $\mc{W}_J(\varphi)=\phi$. As a result, \begin{equation} \label{eq:4.28} \omega_1=\omega+d\mc{F}^{-1}(e^f). \end{equation} Therefore we have the following bound: \begin{equation*} \max_{x \in M} \frac{\omega_{\frac{1}{2}}^2}{\omega^2} =\max_{x \in M} \frac{(\omega+\frac12 d\mc{F}^{-1}(e^f))^2}{\omega^2} \leq C_3(M,\omega,J,e^f). \end{equation*} It remains to estimate $\|\varphi_\frac12\|_{L^1(g)}$. According to \cite[Theorem~4.13]{aubin2012nonlinear} or \cite{Delanoe97}, there is a Green function $G(x,y)$ of the Laplacian operator $\Delta_{g_\frac12}$ such that \[ \varphi_\frac12(x)=(\vol_{g_\frac12}(M))^{-1}\int_M\varphi_\frac12d\vol_{g_\frac12}+\int_MG(x,y)\Delta_{g_\frac12}\varphi_{\frac12}(y)d\vol_{g_\frac12}(y). \] We can take $\varphi_\frac12$ such that $\int_M\varphi_\frac12d\vol_{g_\frac12}=0$. Therefore, by taking the $L^1(g)$ norm of the above equation, and noting that $|\Delta_{g_\frac12}\varphi_{\frac12}|$ is bounded, \[ \|\varphi_\frac12\|_{L^1(g)}= \|\int_MG(x,y)\Delta_{g_\frac12}\varphi_{\frac12}(y)d\vol_{g_\frac12}(y)\|_{L^1(g)}\leq C_4(M,\omega,J,e^f).
\] Hence, \begin{equation*} \norm{\varphi_{\frac12}}_{C^0(g)} \leq C(M,\omega,J,e^f). \end{equation*} \end{proof} \begin{rem} Here, we prove the $C^0$-estimate for $\varphi_{\frac12}$ using the method of Moser iteration (cf. \cite{Yau78,Delanoe97}). Chu-Tosatti-Weinkove \cite[Proposition~3.1]{ChuTW19}, Tosatti and Weinkove \cite{TosattiW18}, and Sz\'ekelyhidi \cite{Szekelyhidi18} obtained the $C^0$-estimate for $\varphi$ by using the Alexandroff-Bakelman-Pucci maximum principle in the case of the complex Monge-Amp\`ere equation. \end{rem} As done in \cite[Theorem~3.1]{TosattiWY08} and \cite[Theorem~3.1]{Weinkove07}, we have the following proposition. \begin{prop} \label{prop:7} Let $g_1$ be an almost K\"ahler metric solving the Calabi-Yau equation \labelcref{eq:3.3} on a closed almost K\"ahler surface $(M,\omega,g,J)$, where $g_1 = \omega_1(\cdot,J\cdot)$. Then there exist constants $C$ and $A$ depending only on $J$, $R$, $\sup_M \abs{f}$, and a lower bound of $\Delta_g f$ such that \begin{equation*} \tr_g g_{\frac{1}{2}} \leq C e^{A(\varphi_{\frac12} - \inf_M \varphi_{\frac12})} \leq C(M,\omega,J,f). \end{equation*} \end{prop} We will prove this proposition later. For now, assume $g$ and $g_1$ take the form of \cref{eq:4.1} at $p\in M$. Since $\tr_g g_{\frac12} = \tr_g (\frac{1}{2}g + \frac{1}{2}g_1) = \frac{1}{2}\tr_g g_1 + 1$, it follows that $g_1(p) \leq 2C g(p)$ for some constant $C$ by \cref{prop:7}. Therefore, there exists a constant $C$, depending only on $M$, $\omega$, $J$, and $f$, such that the following holds (the constant $C$ may vary from line to line): \begin{equation}\label{eq:boundmet} g_1 \leq C(M,\omega,J,f)g, \quad \omega_1 \leq C(M,\omega,J,f)\omega. \end{equation} A combination of \cref{prop:7} and \cref{lem:1} yields \begin{equation*} 2e^{f/2} \leq \tr_g g_1 \leq C(M,\omega,J,f).
\end{equation*} By the definition of $\varphi$, we have \begin{equation*} -1 \leq -\frac12 \Delta_g \varphi \leq C(M,\omega,J,f). \end{equation*} Recall that the condition required in \cref{prop:6} is the boundedness of $\abs{\Delta_{g_\frac{1}{2}}\varphi_\frac12}$. Because $\abs{\Delta_g \varphi}$ is bounded, the same argument as in the proof of \cref{prop:6} shows \begin{equation*} \norm{\varphi}_{C^0(g)} \leq C(M,\omega,J,f). \end{equation*} Schauder's estimate \cite[Theorem~6.6]{GilbargTrudinger77} implies \begin{equation*} \norm{\varphi}_{C^{k+2,\alpha}(g)} \leq C(M,\omega,J, \norm{f}_{C^{k,\alpha}(g)}), \end{equation*} for any nonnegative integer $k$ and $\alpha \in (0,1)$.\par By \cref{lem:1} and \cref{prop:7}, we have the following proposition: \begin{prop} \label{prop:8} Suppose that $g_1$ is a solution of the generalized Monge-Amp\`ere equation \labelcref{eq:1.2}. Then \begin{alignat*}{2} \norm{2(e^{\frac{f}{2}}-1)}_{C^0(g)} &\leq \norm{d\mc{W}_J(\varphi)}_{C^0(g)} \leq C_1, \\ \norm{2e^{\frac{f}{2}}}_{C^0(g)} &\leq \norm{g_1}_{C^0(g)} \leq C_2, \end{alignat*} and \begin{equation*} \norm{2e^{-\frac{f}{2}}}_{C^0(g)} \leq \norm{g_1^{-1}}_{C^0(g)} \leq C_3, \end{equation*} where $C_1$, $C_2$, and $C_3$ are constants depending on $M$, $\omega$, $J$, and $f$. \end{prop} \begin{rem} Note that $\norm{g_1}_{C^0(g)} \leq C(M,\omega,J,f)$ can be regarded as the generalized second derivative estimate of the almost K\"ahler potential $\varphi$ \cite{Yau78}. \end{rem} The proof of \cref{prop:7} involves some calculations of curvature identities, which we present here. Let $(M^{2n}, J)$ be an almost complex manifold of complex dimension $n \geq 2$ with almost K\"ahler metrics $g$ and $\tilde{g}$. Let $\theta^i$ and $\tilde{\theta}^i$ denote local unitary coframes for $g$ and $\tilde{g}$, respectively. Denote by $\nabla_g^1$ and $\nabla_{\tilde{g}}^1$ the associated second canonical connections. We use $\Theta$ (resp.
curvature) of $\nabla_g^1$, and $\tilde{\Theta}$ (resp. $\widetilde{\Psi}$) to denote the torsion (resp. curvature) of $\nabla_{\tilde{g}}^1$. Define local matrices $(a_j^i)$ and $(b_j^i)$ by \begin{equation} \label{eq:4.6} \tilde{\theta}^i = a_j^i \theta^j, \quad \theta^j = b_i^j \tilde{\theta}^i. \end{equation} Therefore $a_j^i b_i^k = \delta_j^k$. First, differentiating \labelcref{eq:4.6} and applying the first structure equation, we obtain \begin{equation*} - \tilde{\theta}^i_k \wedge \tilde{\theta}^k + \tilde{\Theta}^i = da_j^i \wedge \theta^j - a_j^i \theta_k^j \wedge \theta^k + a_j^i \Theta^j. \end{equation*} This is equivalent to \begin{equation} \label{eq:4.7} (b_k^j da_j^i - a_j^i b_k^l \theta_l^j + \tilde{\theta}^i_k)\wedge\widetilde{\theta}^k = \tilde{\Theta}^i - a_j^i \Theta^j. \end{equation} Taking the $(0,2)$ part of the equation, we obtain \begin{equation} \label{eq:4.8} \widetilde{N}_{\bar{j}\bar{k}}^i = \overline{b_j^r}\overline{b_k^s} a_t^i N_{\bar{r}\bar{s}}^t, \end{equation} which shows that the $(0,2)$ part of the torsion is independent of the choice of the metric (cf. the proof of Lemma 2.3 in \cite{TosattiWY08}). By the definition of the second canonical connection, the right-hand side of \labelcref{eq:4.7} has no $(1,1)$ part. Hence there exist functions $a_{kl}^i$ with $a_{kl}^i = a_{lk}^i$ satisfying \begin{equation*} b_k^j da_j^i - a_j^i b_k^l \theta_l^j + \tilde{\theta}_k^i = a_{kl}^i \tilde{\theta}^l. \end{equation*} This equation can be rewritten as \begin{equation} \label{eq:4.9} da_m^i - a_j^i \theta_m^j + a_m^k \tilde{\theta}^i_k = a_{kl}^i a_m^k \tilde{\theta}^l. \end{equation} We define the canonical Laplacian of a function $f$ on $M$ by \begin{equation*} \Delta_g^1 f = \sum_i \left( \left( \nabla_g^1 \nabla_g^1 f \right) \left( e_i, \overline{e_i} \right) +\left( \nabla_g^1 \nabla_g^1 f \right) \left( \overline{e_i}, e_i \right) \right).
\end{equation*} Define the function $u$ by \begin{equation*} u = a_j^i\overline{a_j^i} = \frac{1}{2}\tr_g \tilde{g}; \end{equation*} similarly, \begin{equation*} b_i^j \overline{b_i^j} = \frac{1}{2} \tr_{\tilde{g}}g. \end{equation*} \begin{lem}[{\cite[Lemma~3.3]{TosattiWY08}}] \label{lem:4} For $g$ and $\tilde{g}$ almost K\"ahler metrics and $a_j^i, a_{kl}^i, b_j^i$ as defined above, we have \begin{equation*} \frac{1}{2} \Delta_{\tilde{g}}^1 u = a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} - \overline{a_j^i} a_j^k \widetilde{R}_{kl\bar{l}}^i + \overline{a_j^i} a_r^i b_l^q \overline{b_l^s} R_{jq\bar{s}}^r, \end{equation*} where the curvatures of the second canonical connection of $g$ and $\widetilde{g}$ are \begin{align*} (\Psi_i^j)^{(1,1)} &= R_{ik\bar{l}}^j\ \theta^k \wedge \overline{\theta^l}, \\ (\widetilde{\Psi}_i^j)^{(1,1)} &= \widetilde{R}_{ik\bar{l}}^j\ \tilde{\theta}^k \wedge \overline{\tilde{\theta}^l}. \end{align*} \end{lem} \begin{proof} By \cref{eq:4.9}, using the first and second structure equations, we have \begin{equation*} \begin{split} -a_j^i \Psi_m^j + a_{jl}^k a_m^j \tilde{\theta}^l \wedge \tilde{\theta}_k^i + a_m^k \widetilde{\Psi}_k^i =& a_m^k da_{kl}^j \wedge \tilde{\theta}^l - a_{kl}^i a_m^j \tilde{\theta}_j^k \wedge \tilde{\theta}^l + a_{kl}^i a_{jp}^k \tilde{\theta}^p \wedge \tilde{\theta}^l \\ &- a_{kl}^i a_m^k \tilde{\theta}_j^l \wedge \tilde{\theta}^j + a_{kl}^i a_m^k \tilde{\Theta}^l. \end{split} \end{equation*} Multiplying by $b^m_r$ and rearranging, we obtain \begin{equation} \label{eq:4.10} \left( da_{rl}^i + a_{kl}^i a_{rj}^k \tilde{\theta}^j + a_{rl}^k \tilde{\theta}_k^i - a_{kl}^i \tilde{\theta}_r^k - a_{rj}^i \tilde{\theta}_l^j \right) \wedge \tilde{\theta}^l = -b_r^m \Psi_m^j a_j^i + \widetilde{\Psi}_r^i - a_{rl}^i \tilde{\Theta}^l.
\end{equation} Define $a_{rlp}^i$ and $a_{rl\bar{p}}^i$ by \begin{equation} \label{eq:4.11} da_{rl}^i + a_{kl}^i a_{rj}^k \tilde{\theta}^j + a_{rl}^k \tilde{\theta}_k^i - a_{kl}^i \tilde{\theta}_r^k -a_{rj}^i \tilde{\theta}_l^j = a_{rlp}^i \tilde{\theta}^p + a_{rl\bar{p}}^i \overline{\tilde{\theta}^p}. \end{equation} Then taking the $(1,1)$ part of \cref{eq:4.10}, we see that \begin{equation} \label{eq:4.12} a_{rl\bar{p}}^i \overline{\tilde{\theta}^p} \wedge \tilde{\theta}^l = \left(-\widetilde{R}_{rl\bar{p}}^i + a_j^i b_r^m b_l^q \overline{b_p^s} R_{mq\bar{s}}^j \right) \overline{\tilde{\theta}^p} \wedge \tilde{\theta}^l, \end{equation} where we recall the definition \begin{align*} (\Psi_i^j)^{(1,1)} &= R_{ik\bar{l}}^j\ \theta^k \wedge \overline{\theta^l}, \\ (\widetilde{\Psi}_i^j)^{(1,1)} &= \widetilde{R}_{ik\bar{l}}^j\ \tilde{\theta}^k \wedge \overline{\tilde{\theta}^l}. \end{align*} Note that \begin{equation} \label{eq:4.13} du = \overline{a_j^i} da_j^i + a_j^i d\overline{a_j^i}. \end{equation} From \cref{eq:4.9}, we see that \begin{equation} \label{eq:4.14} \begin{split} du &= \overline{a_j^i} \left( a_{kl}^i a_j^k \tilde{\theta}^l + a_m^i \theta_j^m - a_j^k \tilde{\theta}_k^i \right) + a_j^i \left( \overline{a_{kl}^i a_j^k \tilde{\theta}^l} + \overline{a_m^i \theta_j^m} - \overline{a_j^k \tilde{\theta}_k^i} \right) \\ &= \overline{a_j^i} a_{kl}^i a_j^k \tilde{\theta}^l + a_j^i \overline{a_{kl}^i a_j^k \tilde{\theta}^l}. \end{split} \end{equation} Hence $\partial u = \overline{a_j^i} a_{kl}^i a_j^k \tilde{\theta}^l$. Applying the exterior derivative to this and substituting from \cref{eq:4.9,eq:4.11,eq:4.12}, we have \begin{equation*} (d\partial u)^{(1,1)} = a_{kl}^i a_j^k \overline{a_{pq}^i a_j^p \tilde{\theta}^q} \wedge \tilde{\theta}^l - \overline{a_j^i} a_j^k \widetilde{R}_{kl\bar{p}}^i \overline{\tilde{\theta}^p} \wedge \tilde{\theta}^l + \overline{a_j^i} a_r^i b_l^q \overline{b_p^s} R_{jq\bar{s}}^r\overline{\tilde{\theta}^p} \wedge \tilde{\theta}^l.
\end{equation*} Hence, from the definition of the canonical Laplacian \cite{TosattiWY08}, we prove the lemma. \end{proof} Now let $\nu := \det (a_i^j)$ and set $v := \abs{\nu}^2 = \nu \overline{\nu}$, which is the ratio of the volume forms of $\tilde{g}$ and $g$. It is easy to see that $v = \tilde{\omega}^n / \omega^n$, where $\tilde{\omega}(\cdot,\cdot) = \tilde{g}(\cdot,J\cdot)$ and $\omega(\cdot,\cdot) = g(\cdot,J\cdot)$. Now we have the following lemma. \begin{lem}[{\cite[Lemma~3.4]{TosattiWY08}}] \label{lem:5} For $g$ and $\tilde{g}$ almost K\"ahler metrics and $v$ as above, the following identities hold: \begin{enumerate}[label=(\arabic*)] \item $(d\del \log v)^{(1,1)} = - R_{k\bar{l}}\ \theta^k \wedge \overline{\theta^l} + \widetilde{R}_{k\bar{l}} a_i^k \overline{a_j^l} \theta^i \wedge \overline{\theta^j}$; \item $\Delta_g^1 \log v = 2R - 2\widetilde{R}_{k\bar{l}} a_i^k \overline{a_i^l}$. \end{enumerate} where $R$ is the scalar curvature, and $R_{k\bar{l}}$ and $\widetilde{R}_{k\bar{l}}$ are the $(1,1)$ parts of the Ricci curvature forms with respect to the Hermitian connections, that is, the second canonical connections of the metrics $g$ and $\tilde{g}$, respectively. \end{lem} \begin{proof} By direct calculation, we have \begin{equation*} d\nu = \nu_j^i da_j^i, \end{equation*} where $\nu_j^i$ stands for the $(i,j)$-th cofactor of the matrix $(a_i^j)$, such that $\nu_j^i = \nu b_j^i$. From \cref{eq:4.9}, we have \begin{equation*} da_m^i - a_j^i \theta_m^j + a_m^k \tilde{\theta}_k^i = a_{kl}^i a_m^k a_r^l \theta^r. \end{equation*} Hence \begin{equation} \label{eq:4.15} \begin{split} d\nu &= \nu_j^i \left( a_{pq}^i a_i^p a_k^q \theta^k + a_k^j \theta_i^k - a_i^k \tilde{\theta}_k^j \right) \\ &= \nu_k \theta^k + \nu \left( \theta_i^i - \tilde{\theta}_i^i \right), \end{split} \end{equation} for $\nu_k = \nu_j^i a_{pq}^j a_i^p a_k^q$.
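The identity $d\nu = \nu_j^i\, da_j^i$ with $\nu_j^i = \nu b_j^i$ used above is precisely Jacobi's formula for the derivative of a determinant: in matrix notation, writing $A = (a_i^j)$, \begin{equation*} d \det A = \det A \cdot \tr\!\left(A^{-1}\, dA\right), \end{equation*} since the cofactor matrix of $A$ equals $\det A$ times the inverse matrix $(b_j^i)$.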
Now \begin{equation*} \begin{split} dv &= \bar{\nu}d\nu + \nu d\bar{\nu} \\ &= \bar{\nu} \left( \nu_k \theta^k + \nu (\theta_i^i - \tilde{\theta}_i^i) \right) + \nu \left( \overline{\nu_k} \overline{\theta^k} + \bar{\nu}(\overline{\theta_i^i} - \overline{\tilde{\theta}_i^i})\right) \\ &= \bar{\nu} \nu_k \theta^k + \nu \overline{\nu_k}\overline{\theta^k}. \end{split} \end{equation*} Therefore $\partial v = \bar{\nu} \nu_k \theta^k$. Define $v_k$ and $v_{\bar{k}}$ by $dv = v_k \theta^k + v_{\bar{k}} \overline{\theta^k}$. This implies that $v_k = \bar{\nu} \nu_k$. Applying the exterior derivative to \cref{eq:4.15} and using the second structure equation, we have \begin{equation*} \begin{split} 0 &= d \left(\nu_k \theta^k \right) + d\nu \wedge \left(\theta_i^i - \tilde{\theta}_i^i \right) + \nu d \left(\theta_i^i - \tilde{\theta}_i^i \right) \\ &= d\left(\nu_k \theta^k \right) + \nu_k \theta^k \wedge \left(\theta_i^i - \tilde{\theta}_i^i \right) + \nu \left(\Psi_i^i - \widetilde{\Psi}_i^i \right). \end{split} \end{equation*} Multiplying by $\bar{\nu}$ and using \cref{eq:4.15} again, we have \begin{equation*} \begin{split} 0 &= \bar{\nu} d \left(\nu_k \theta^k \right) + \nu_k \theta^k \wedge \left(\overline{\nu_l} \overline{\theta^l} - d\bar{\nu} \right) + v \left( \Psi_i^i - \widetilde{\Psi}_i^i \right) \\ &= d \left( \bar{\nu} \nu_k \theta^k \right) + \nu_k \overline{\nu_l} \theta^k \wedge \overline{\theta^l} + v \left( \Psi_i^i - \widetilde{\Psi}_i^i \right). \end{split} \end{equation*} Taking the $(1,1)$ part, we obtain \begin{equation} \label{eq:4.18} \begin{split} (d\del v)^{(1,1)} &= -\nu_k \overline{\nu_l} \theta^k \wedge \overline{\theta^l} - v\left( \Psi_i^i - \widetilde{\Psi}_i^i \right)^{(1,1)} \\ &= -\frac{v_k \overline{v_l}}{v} \theta^k \wedge \overline{\theta^l} - vR_{k\bar{l}} \theta^k \wedge \overline{\theta^l} + v \widetilde{R}_{k\bar{l}} a_i^k \overline{a_j^l} \theta^i \wedge \overline{\theta^j}.
\end{split} \end{equation} We also have \begin{equation*} d\del \log v = \frac{d\del v}{v} + \frac{\del v \wedge \overline{\del}v} {v^2}, \end{equation*} which combines with \cref{eq:4.18} to give (1). Part (2) follows from the definition of the canonical Laplacian. \end{proof} Let $(M,J)$ be an almost complex surface with the almost K\"ahler metrics $g$ and $g_{\frac{1}{2}}$, where $g_{\frac12} = \frac{1}{2}(g + g_1)$ with $g_1$ a solution of \labelcref{eq:1.2}. By \Cref{lem:4,lem:5}, we obtain the following key lemma, which is similar to Lemma~3.2 in \cite{TosattiWY08}. \begin{lem} \label{lem:6} Let $g$ and $g_{\frac{1}{2}}$ be defined as above. Then \begin{equation*} \Delta_{g_\frac12} \log u \geq \frac{1}{u}(C - 2R + 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p}a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar{j}k\bar{l}}), \end{equation*} where $\mc{R}_{i\bar{j}k\bar{l}} = R_{ik\bar{l}}^j + 4N_{\bar{l}\bar{j}}^r \overline{N_{\bar{r}\bar{k}}^i}$, and $C$ is some constant depending on $M, \omega, J$ and $\Delta_g f$. \end{lem} \begin{proof} Let $\tilde{g}=g_{\frac12}$. By applying \cref{lem:4}, \begin{equation*} \frac{1}{2} \Delta_{g_{\frac12}} u = a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} - \overline{a_j^i} a_j^k \widetilde{R}_{kl\bar{l}}^i + \overline{a_j^i} a_r^i b_l^q \overline{b_l^s} R_{jq\bar{s}}^r, \end{equation*} where $a_{kl}^i$, $a_j^k$, $\widetilde{R}_{kl\bar{l}}^i$, and $R_{jq\bar{s}}^r$ are defined with respect to $g$ and $\tilde{g}$ as above. Using the same calculation as in the proof of \cref{lem:4} and \cref{lem:5} (cf. \cite[Lemma~3.3, Lemma~3.4]{TosattiWY08}), one has \begin{equation*} \Delta_g \log v = 2R - 2\widetilde{R}_{k\bar{l}} {a}_i^k \overline{{a}_i^l}.
\end{equation*} Recall the following equation \cite[(2.21)]{TosattiWY08}: \begin{equation} \label{eq:4.16} R_{k\bar{l}} = R_{ik\bar{l}}^i = R_{ki\bar{i}}^l + 4N_{\bar{p}\bar{l}}^i \overline{N_{\bar{i}\bar{k}}^p} + 4N_{\bar{i}\bar{l}}^p \overline{N_{\bar{p}\bar{i}}^k}. \end{equation} Notice that for almost K\"ahler metrics, the Laplacian with respect to the Levi-Civita connection is the same as the complex Laplacian \cite{TosattiWY08}. Combining \cref{lem:4,lem:5} with \labelcref{eq:4.16}, one gets \begin{equation*} \Delta_{g_\frac12} u = 2a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} + 2\overline{a_j^i} a_r^i b_l^q \overline{b_l^s} R_{jq\bar s}^r + \Delta_g \log v - 2R + 8 \overline{a_j^i} a_j^k (\widetilde{N}_{\bar{p}\bar{i}}^l \overline{\widetilde{N}_{\bar{l}\bar{k}}^p} + \widetilde{N}_{\bar{l}\bar{i}}^p \overline{\widetilde{N}_{\bar{p}\bar{l}}^k}). \end{equation*} Using \labelcref{eq:4.8}, we have \begin{equation*} \overline{a_j^i} a_j^k (\widetilde{N}_{\bar{p}\bar{i}}^l \overline{\widetilde{N}_{\bar{l}\bar{k}}^p} + \widetilde{N}_{\bar{l}\bar{i}}^p \overline{\widetilde{N}_{\bar{p}\bar{l}}^k}) = N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + \overline{a_s^k} a_j^k \overline{b_l^t} b_l^r N_{\bar{t}\bar{j}}^p \overline{N_{\bar{p}\bar{r}}^s}. \end{equation*} Hence \begin{equation} \label{eq:4.17} \Delta_{g_\frac12} u = 2a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} + \Delta_g \log v - 2R + 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p} a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar jk\bar{l}}. \end{equation} By \labelcref{eq:4.17}, \begin{align*} \Delta_{g_\frac12} \log u &= \frac{1}{u} (\Delta_{g_{\frac12}} u - \abs{du}_{g_{\frac12}}^2 / u) \\ &= \frac{1}{u} (2a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} + 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p} a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar{j}k\bar{l}} + \Delta_g \log v - 2R - \abs{du}_{g_{\frac12}}^2 / u).
\end{align*} From (3.14) in \cite{TosattiWY08}, we have \begin{equation*} \abs{du}_{g_\frac12}^2 = 2u_l \overline{u_l}, \end{equation*} where $u_l = \overline{a_j^i} a_{kl}^i a_j^k = \overline{a_j^i} B_{lj}^i$ and $B_{lj}^i = a_{kl}^i a_j^k$. Then the Cauchy-Schwarz inequality implies \cite{TosattiWY08} \begin{equation*} u_l \overline{u_l} \leq u a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p}. \end{equation*} It follows that \begin{equation} \abs{du}_{g_{\frac12}}^2 \leq 2u a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p}. \end{equation} Moreover, using $v = \omega_{\frac12}^2 / \omega^2$ and \cref{eq:4.28}, we find that \begin{equation*} \Delta_g \log v \geq C, \end{equation*} where $C$ is some constant depending on $M, \omega, J$, and $\Delta_g f$. Therefore \begin{align*} \Delta_{g_\frac12} \log u &= \frac{1}{u} ( 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p} a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar{j}k\bar{l}} + \Delta_g \log v - 2R + (2a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p}-\abs{du}_{g_{\frac12}}^2 / u))\\ &\geq \frac{1}{u} ( 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p} a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar{j}k\bar{l}} + C - 2R). \end{align*} This completes the proof of \cref{lem:6}. \end{proof} Now we are ready to prove \cref{prop:7} by \cref{lem:6}. \begin{proof}[Proof of \cref{prop:7}] Since $u = \frac12 \tr_g g_{\frac12} = \frac{1}{4} (\tr_g g_1 +2)$, by the Calabi-Yau equation and the arithmetic-geometric mean inequality, $u$ is bounded below away from zero by a positive constant depending only on $\inf_M f$. Hence there exists a constant $C'$ depending only on $M, \omega, J, \inf_M f, \Delta_g f$, and $R$ such that \begin{equation} \label{eq:4.19} \abs{\frac{1}{u} (C - 2R + 8N_{\bar{p}\bar{i}}^l \overline{N_{\bar{l}\bar{i}}^p})} \leq C'. \end{equation} Choose $A'$ sufficiently large such that \begin{equation*} \mc{R}_{i\bar{j}k\bar{l}} + A' \delta_{ij} \delta_{kl} \geq 0.
\end{equation*} Then \begin{equation} \label{eq:4.20} 2 \overline{a_i^p} a_j^p b_q^k \overline{b_q^l}\mc{R}_{i\bar{j}k\bar{l}} \geq - 2A' \overline{a_i^p} a_i^p b_q^k \overline{b_q^k} = -A' \tr_{g_{\frac12}} g. \end{equation} Combining \labelcref{eq:4.19,eq:4.20} with \cref{lem:6}, we obtain \begin{equation*} \Delta_{g_{\frac12}} \log u \geq -C' - A' \tr_{g_{\frac12}} g. \end{equation*} We apply the maximum principle to the above inequality. Suppose that the maximum of $\log u - 2A' \varphi_{\frac12}$ is achieved at a point $x_0$. Then \begin{equation*} 0 \geq \Delta_{g_{\frac12}} (\log u - 2A' \varphi_{\frac12})(x_0) \geq (-C' + 3A'\tr_{g_{\frac12}} g - 8A')(x_0), \end{equation*} since $\Delta_{g_{\frac12}} \varphi_{\frac12} = 4 - 2\tr_{g_{\frac12}} g$. Hence \begin{equation*} (\tr_{g_{\frac12}} g)(x_0) \leq \frac{8A'+ C'}{3A'}. \end{equation*} Note that at $x_0$, \begin{equation*} g_{\frac12}(x_0) = (a_1+1) |\theta^1|^2 + (a_2+1) |\theta^2|^2,\ 0 < a_1 \leq a_2. \end{equation*} Using the equation \begin{equation*} \frac{\frac12 (a_1+1) + \frac12 (a_2+1)}{[\frac12 (a_1+1)]\cdot[\frac12 (a_2+1)]} = \frac{1}{\frac12 (a_1+1)} + \frac{1}{\frac12 (a_2+1)}, \end{equation*} we see that \begin{equation*} \frac{\tr_g g_{\frac12} }{2} \sqrt{\frac{\det g}{\det g_{\frac12}}} = \frac{1}{2} \left(\frac{\tr_{g_{\frac12}} g}{2}\right). \end{equation*} Hence, using \cref{eq:1.2} again, $u(x_0)$ can be bounded from above by a constant $C''$ in terms of $\tr_{g_{\frac12}} g$ and $\sup_M f$. It follows that for any $x \in M$, \begin{equation*} \log u(x) - 2 A'\varphi_{\frac12}(x) \leq \log C'' - 2A' \inf_M \varphi_{\frac12}. \end{equation*} After exponentiation and applying \cref{prop:6}, this proves \cref{prop:7}. \end{proof} As in the K\"ahler case \cite{Aubin98,Yau78}, we can provide an estimate for the first derivative of $g_1$, which is regarded as the generalized third-order estimate for the almost K\"ahler potential $\varphi$.
For the Hermitian or almost Hermitian cases, see Tosatti-Wang-Weinkove-Yang \cite{TWWY15}, Tosatti-Weinkove \cite{TosattiW10}, and Chu-Tosatti-Weinkove \cite{ChuTW19}. Now we have the same result as Theorem 4.1 in \cite{TosattiWY08}. \begin{prop} \label{prop:9} Let $g_1$ be a solution of \labelcref{eq:1.2}. Then \begin{equation*} \sup_M (\tr_g g_1) \leq C(M,\omega,J,f). \end{equation*} Thus there exists a constant $C_0$ depending on $M, \omega, J, f$ such that \begin{equation*} \abs{\nabla_g g_1}_{g_1} \leq C_0, \end{equation*} where $\nabla_g$ is the second canonical connection associated with $g$ and $J$. \end{prop} \begin{proof} The boundedness of $\tr_g g_1$ follows directly from the boundedness of $\tr_g g_\frac12$.\par For the second part, instead of proving the boundedness of $\abs{\nabla_g g_1}_{g_1}$, we show that $S:=\frac14\abs{\nabla_g g_1}_{g_1}^2$ is bounded. Let the $\tilde{g}$ above \cref{eq:4.6} be $g_1$ here. Denote by $\tilde{\theta}^i$, $\widetilde{R}^i_{jk\bar{l}}$, $\widetilde{N}^p_{\bar{q}\bar{i}}$, and $\widetilde{R}_{k\bar{l}}$ the local unitary coframe, curvature tensor, Nijenhuis tensor, and Ricci tensor of $\tilde{g}$, respectively. Moreover, the local matrices $({a}^i_j)$ and $({b}^j_i)$ are given by \[ \tilde{\theta}^i={a}^i_j\theta^j,\quad \theta^j=b^j_i\tilde{\theta}^i. \] Because \[ \sup_M(\tr_g g_1)\leq C(M,\omega,J,f), \] as argued in \cref{eq:boundmet}, $({a}^i_j)$ and $(b^j_i)$ are bounded.
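Indeed, since here $u = a^i_j\overline{a^i_j} = \frac{1}{2}\tr_g g_1$, the bound on $\tr_g g_1$ controls each entry $\abs{a^i_j}$, while the Calabi-Yau equation gives \begin{equation*} \abs{\det(a^i_j)}^2 = \frac{\omega_1^2}{\omega^2} = e^f \geq e^{\inf_M f} > 0, \end{equation*} so the entries of the inverse matrix $(b^j_i)$, which are cofactors of $(a^i_j)$ divided by $\det(a^i_j)$, are bounded as well.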
By applying the same calculations in Tosatti-Weinkove-Yau \cite{TosattiWY08} (Lemma 4.2, 4.3, and 4.4), the following equations are true: \begin{equation*} \left\{ \begin{aligned} S=&{a}^i_{kl}\overline{{a}^i_{kl}},\\ \frac12\Delta_{\widetilde{g}}S=&|{a}^i_{klp}-{a}^i_{rl}{a}^r_{kp}|_{{\tilde{g}}}^2+|{a}^i_{kl\bar{p}}|_{{\tilde{g}}}^2+\overline{{a}^i_{kl}}{a}^i_{rl}\widetilde{R}^r_{kp\bar{p}}+\overline{{a}^i_{kl}}{a}^i_{kj}\widetilde{R}^j_{lp\bar{p}}-\overline{a^i_{kl}}{a}^r_{kl}\widetilde{R}^i_{rp\bar{p}}\\ &+2\mathrm{Re}(\overline{{a}^i_{kl}}({b}^m_k{b}^q_l\overline{{b}^s_p}R^j_{mq\bar{s}}{a}^i_{rp}{a}^r_j-{a}^i_j{b}^q_l\overline{{b}^s_p}R^j_{mq\bar{s}}{a}^r_{kp}{b}^m_r-{a}^i_j{b}^m_k\overline{{b}^s_p}R^j_{mq\bar{s}}{a}^r_{lp}{b}^q_r\\ &+{a}^i_j{b}^m_k{b}^q_l\overline{{b}^s_p}{b}^u_pR^j_{mq\bar{s},u}-\widetilde{R}_{k\bar{i},l}+4\widetilde{N}^p_{\bar{q}\bar{i},l}\overline{\widetilde{N}^q_{\bar{p}\bar{k}}}+4\widetilde{N}^p_{\bar{q}\bar{i}}\overline{\widetilde{N}^q_{\bar{p}\bar{k},\bar{l}}}+4\widetilde{N}^p_{\bar{q}\bar{i},l}\overline{\widetilde{N}^k_{\bar{p}\bar{q}}}\\ &+4\widetilde{N}^p_{\bar{q}\bar{i},l}\overline{\widetilde{N}^k_{\bar{p}\bar{q},\bar{l}}}+4\widetilde{N}^i_{\bar{p}\bar{q},k}\overline{\widetilde{N}^q_{\bar{p}\bar{l}}}+2\overline{\widetilde{N}^k_{\bar{l}\bar{p},ip}})),\\ \widetilde{N}^i_{\bar{j}\bar{k},m}=&\overline{{b}^r_j{b}^s_k}{b}^l_ma^i_tN^t_{\bar{r}\bar{s},l}+\overline{{b}^r_j{b}^s_k}{a}^l_tN^t_{\bar{r}\bar{s}}{a}^i_{lm},\\ \widetilde{N}^i_{\bar{j}\bar{k},\bar{m}}=&\overline{{b}^r_j{b}^s_k{b}^l_m}{a}^i_tN^t_{\bar{r}\bar{s},\bar{l}}-\overline{{b}^r_j{b}^s_k}{a}^i_tN^t_{\bar{r}\bar{s}}\overline{{a}^l_{jm}}-\overline{{b}^r_j{b}^s_l}{a}^i_tN^t_{\bar{r}\bar{s}}\overline{{a}^l_{km}},\\ |{a}^i_{kl}\widetilde{N}^k_{\bar{l}\bar{p},ip}|_{{\tilde{g}}}\leq& C(S+1)+\frac12|{a}^i_{klp}-{a}^i_{rl}{a}^r_{kp}|^2_{{\tilde{g}}},\\ \Delta_{\tilde{g}} u=&2a_{kl}^i \overline{a_{pl}^i} a_j^k \overline{a_j^p} + \Delta_g f - 2R + 8N_{\bar{p}\bar{i}}^l 
\overline{N_{\bar{l}\bar{i}}^p} + 2\overline{a_i^p} a_j^p b_q^k \overline{b_q^l} \mc{R}_{i\bar jk\bar{l}}, \end{aligned} \right. \end{equation*} where ${a}^i_{kl}$ is defined by $d{a}^i_m-{a}^i_j\theta^j_m+{a}^k_m\tilde{\theta}^i_k={a}^i_{kl}{a}^k_m\tilde{\theta}^l$ and $u=\frac{1}{2}\tr_g\tilde{g}$. According to the definition of $a^i_{kl}$, $|a^i_{kl}|$ is bounded. Therefore \[ |\Delta_{\widetilde{g}}u|\leq C(M,\omega,J,f). \] Denote the Laplacian operator of $\tilde{g} = g_1$ by $\Delta_{g_1}$. The same argument as in the proof of Lemma 4.5 from \cite{TosattiWY08} gives ($\log v$ and $F$ in \cite{TosattiWY08} correspond to $|\det({a}^i_j)|^2$ and $f$ here, respectively) \[ \Delta_{g_1}S\geq-CS-C, \] for some positive constant $C$. Moreover, the proof of Theorem 4.1 from \cite{TosattiWY08} shows \[ \Delta_{g_1}u\geq CS-C, \] for some positive constant $C$. The above two inequalities yield \[ \Delta_{g_1}(S+C'u)\geq S-C, \] for some large enough $C'$. Let $x$ be a point where $S|_x=\max S$, so $\Delta_{g_1}S|_x\leq0$. Combining this with the fact that $\Delta_{g_1}u$ is bounded and evaluating the above inequality at $x$, one has \[ C''\geq\Delta_{g_1}(S+C'u)|_x\geq\max S-C, \] for some large enough constant $C''$. This proves the boundedness of $S$, and the proposition follows. \end{proof} Now the same result as Theorem~1.3 in \cite{TosattiWY08} holds on a closed almost K\"ahler surface, without requiring Tian's $\alpha$-integral \cite{Tian87} \begin{equation*} I_{\alpha}(\varphi') := \int_M e^{-\alpha \varphi'} \omega^2, \end{equation*} where $\varphi'$ is defined by \begin{equation*} \frac14 \Delta_{g_1} \varphi' = 1 - \frac{\omega_1 \wedge \omega}{\omega^2}, \quad \sup_M \varphi' = 0. \end{equation*} \begin{thm} \label{thm:3} Let $(M,\omega,g,J)$ be a closed almost K\"ahler surface. If $(\omega_1, J)$ is an almost K\"ahler structure with $[\omega_1] = [\omega]$ solving the Calabi-Yau equation \begin{equation*} \omega_1^2 = e^f \omega^2.
\end{equation*} Then there are $C^{\infty}$ \emph{a priori} bounds on $\omega_1$ depending only on $M$, $\omega$, $J$, and $f$. \end{thm} \begin{proof} The argument after \cref{prop:7} shows that $\|g_1\|_{C^0}$ is bounded. Combining this with the previous proposition, one has \[ \norm{g_1}_{C^1(g_1)} \leq C, \] for some positive constant $C$ depending only on $M, \omega, J, f$. It remains to prove the higher-order estimates. Our approach is along the lines used by Weinkove to prove Theorem~1 in \cite{Weinkove07}.\par Recall that given a function $\varphi_1$, there exists $\sigma(\varphi_1) \in \Omega_J^-$ satisfying \begin{equation} \label{eq:4.25} \begin{dcases} d_J^- J d\varphi_1 + d_J^- d^{*_1} \sigma(\varphi_1) = 0, \\ \omega_1 \wedge dd^{*_1} \sigma(\varphi_1) = 0. \end{dcases} \end{equation} The system is elliptic due to \cref{prop:3}. Fix any $0 < \alpha < 1$. Since $g_1$ is uniformly bounded in $C^{\alpha}$, we can apply the elliptic Schauder estimates \cite{GilbargTrudinger77} to \cref{eq:4.3} for $s=1$ to get a bound $\norm{\varphi_1}_{C^{2+\alpha}} \leq C(M,\omega,J,f)$. Hence $\sigma(\varphi_1)$ is bounded in $C^{2+\alpha}$, and the coefficients of the above system have a $C^{\alpha}$ bound. Differentiating the generalized Monge-Amp\`ere equation (real version) \begin{equation*} \log \det g_1 = \log \det g + 2f, \end{equation*} we see that \begin{equation} \label{eq:4.26} g_1^{ij} \del_i \del_j (\del_k \varphi_1) + \Set{\text{lower order terms}} = g^{ij} \del_k g_{ij} + 2\del_k f, \end{equation} where the lower-order terms may involve derivatives of $\varphi_1$ or $\sigma(\varphi_1)$. Since the coefficients of this elliptic equation are bounded in $C^{\alpha}$, we can apply the Schauder estimates again to get $\norm{\varphi_1}_{C^{3+\alpha}} \leq C(M,\omega,J,f)$. Using \eqref{eq:4.25} implies $\norm{\sigma(\varphi_1)}_{C^{3+\alpha}} \leq C(M,\omega,J,f)$. Now a bootstrapping argument using \eqref{eq:4.25} and \eqref{eq:4.26} gives the required higher-order estimates.
\end{proof} \noindent We are now ready to finish the proof of \cref{thm:1}. \begin{proof}[Proof of \cref{thm:1}] The uniqueness of \cref{eq:1.2} is proved in \cref{thm:3.1}. It remains to show the existence of a solution of \cref{eq:1.2}. This follows from the continuity method. Define $S\subset[0,1]$ as the set of all numbers $t$ such that the equation \[ (\omega+\mc{D}_J^+(\varphi))^2=e^{tf}\omega^2 \] has a solution. Notice that $0\in S$, so $S$ is non-empty. By \cref{prop:4}, $S$ is open in $[0,1]$. If $S$ is also closed, then $S=[0,1]$, and it follows that \cref{eq:1.2} has a solution when $t=1$.\par To show that $S$ is closed, let $\{\varphi_i\}$ and $\{t_i\}\subset S$ be sequences such that \[ (\omega+\mc{D}_J^+(\varphi_i))^2=e^{t_if}\omega^2 \] and $\lim_it_i=t_0\in[0,1]$. The $a\ priori$ estimate from the previous theorem shows that \[ \|\varphi_i\|_{C^2}\leq C(M,\omega,J,t_if) \] for all $i$. Because $0\leq t_i\leq1$, there is a constant $C(M,\omega,J,f)$ such that \[ C(M,\omega,J,t_if)\leq C(M,\omega,J,f),\ \ \forall t_i. \] By the Arzel\`a--Ascoli theorem, there is a subsequence of $\{\varphi_i\}$ (still denoted by $\{\varphi_i\}$) that converges uniformly to a function $\varphi_0$. Therefore, letting $i\rightarrow\infty$, \[ (\omega+\mc{D}_J^+(\varphi_0))^2=e^{t_0f}\omega^2. \] By \cref{thm:3}, there are $C^\infty$ $a\ priori$ bounds on $\varphi_0$. As a result, $t_0\in S$, so $S$ is a closed set. Because $S$ is both open and closed, $S$ must be $[0,1]$. This completes the proof of \cref{thm:1}. \end{proof} \section*{Acknowledgements} The first named author is very grateful to his advisor Z. L\"u for his support; the authors thank Hongyu Wang for suggesting this problem and for many subsequent helpful, insightful, and encouraging discussions, and Qiang Tan and Haisheng Liu for some helpful discussions.
\bibliographystyle{amsplain} \bibliography{ref} \end{document}
2412.18436v2
http://arxiv.org/abs/2412.18436v2
Fundamental solutions for parabolic equations and systems: universal existence, uniqueness, representation
\documentclass[11pt, letterpaper]{amsart} \usepackage[left=1in,right=1in,bottom=0.5in,top=0.8in]{geometry} \usepackage{amsfonts} \usepackage{amsmath, amssymb} \usepackage{graphicx} \usepackage[font=small,labelfont=bf]{caption} \usepackage{epstopdf} \usepackage[pdfpagelabels,hyperindex]{hyperref} \usepackage{xcolor} \usepackage{amsthm} \usepackage{float} \usepackage{pgfplots} \usepackage{listings} \usepackage{longtable} \usepackage{mathrsfs} \usepackage{bbold} \usepackage{comment} \usepackage{esint} \usepackage{mathtools} \usepackage{scalerel}[2014/03/10] \usepackage[usestackEOL]{stackengine} \DeclarePairedDelimiterX\set[1]\lbrace\rbrace{\def\given{\;\delimsize\vert\;}#1} \hypersetup{ pdftitle={An abstract theory for parabolic equations and applications}, pdfsubject={Mathematics, PDE, Analysis}, pdfauthor={Pascal Auscher and Khalid Baadi}, pdfkeywords={Parabolic equations, Cauchy problems, variational methods, Fundamental solution, Energy equality.} } \newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu} \DeclareRobustCommand{\stirling}{\genfrac\{\}{0pt}{}} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem{quest}[thm]{Question} \newtheorem{ppty}[thm]{Property} \newtheorem{ppties}[thm]{Properties} \newtheorem{claim}[thm]{Claim} \newtheorem{Assumption}[thm]{Assumption} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{defns}[thm]{Definitions} \newtheorem{con}[thm]{Construction} \newtheorem{exmp}[thm]{Example} \newtheorem{exmps}[thm]{Examples} \newtheorem{notn}[thm]{Notation} \newtheorem{notns}[thm]{Notations} \newtheorem{addm}[thm]{Addendum} \newtheorem{exer}[thm]{Exercise} \newtheorem{limit}[thm]{Limitation} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem{rems}[thm]{Remarks} \newtheorem{warn}[thm]{Warning} \newtheorem{sch}[thm]{Scholium} 
\definecolor{energy}{RGB}{114,0,172} \definecolor{freq}{RGB}{45,177,93} \definecolor{spin}{RGB}{251,0,29} \definecolor{signal}{RGB}{203,23,206} \definecolor{circle}{RGB}{217,86,16} \definecolor{average}{RGB}{203,23,206} \definecolor{kb}{rgb} {.6, 0, 0} \definecolor{pa}{rgb} {.4, .4, 0} \newcommand{\KB}{\color{kb}} \newcommand{\PA}{\color{pa}} \newcommand{\llangle}{\langle\!\langle} \newcommand{\rrangle}{\rangle\!\rangle} \newcommand{\K}{\operatornamewithlimits{K}} \newcommand{\bigslant}[2]{{\raisebox{.2em}{$#1$}\left/\raisebox{-.2em}{$#2$}\right.}} \colorlet{shadecolor}{gray!20} \pgfplotsset{compat=1.9} \def\N{10} \def\M{4} \usepgflibrary{fpu} \makeatletter \let\c@equation\c@thm \raggedbottom \makeatother \numberwithin{equation}{section} \DeclareMathOperator*{\supess}{ess\,sup} \author{Pascal Auscher} \address{Universit{\'e} Paris-Saclay, CNRS, Laboratoire de Math\'{e}matiques d'Orsay, 91405 Orsay, France} \email{[email protected]} \author{Khalid Baadi} \address{Universit{\'e} Paris-Saclay, CNRS, Laboratoire de Math\'{e}matiques d'Orsay, 91405 Orsay, France} \email{[email protected]} \date{December 24, 2024} \title[Fundamental solutions for parabolic systems]{Fundamental solutions for parabolic equations and systems: universal existence, uniqueness, representation} \keywords{Abstract parabolic equations, Cauchy problems, Integral identities, Variational methods, Fundamental solution, Green operators.} \subjclass[2020]{Primary: 35K90, 35A08 Secondary: 35K45, 35K46, 35K65, 47G20, 47B15} \thanks{The authors want to thank Moritz Egert for taking the time to look at a first version of this article and making valuable suggestions. 
A CC-BY 4.0 \url{https://creativecommons.org/licenses/by/4.0/} public copyright license has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission.} \begin{document} \begin{abstract} In this paper, we develop a universal, conceptually simple and systematic method to prove well-posedness of Cauchy problems for weak solutions of parabolic equations with non-smooth, time-dependent, elliptic part having a variational definition. Our classes of weak solutions are taken with minimal assumptions. We prove the existence and uniqueness of a fundamental solution, which seems new in this generality: it is shown to always coincide with the associated evolution family for the initial value problem with zero source, and it yields a representation of all weak solutions. Our strategy is a variational approach avoiding density arguments, a priori regularity of weak solutions or regularization by smooth operators. Among our main tools are embedding results which yield time continuity of our weak solutions, going beyond the celebrated Lions regularity theorem and addressing a variety of source terms. We illustrate our results with three concrete applications: second order uniformly elliptic part with Dirichlet boundary condition on domains, integro-differential elliptic part, and second order degenerate elliptic part. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{Section 1} Linear parabolic problems have a long history. The standard method usually begins by solving the Cauchy problem, with or without source terms, and representing the solutions through what is called the fundamental solution.
Abstractly, starting from an initial data $u_{0}$, the evolution takes the form $ \partial_t u + \mathcal{B}u =f$, where time ranges over an interval $(0,T)$, the sought solution $u$ and the source $f$ are valued in some (possibly different) vector spaces, and $\mathcal{B}$ is an operator that could also depend on time. Modeled after the theory of ordinary scalar differential equations, some kind of dissipativity is assumed for $-\mathcal{B}$. One issue toward extension to non-linear equations is to use not regularity, but only measurability, of the coefficients. We stick here to linear equations. The amount of literature is too vast to be mentioned and we shall isolate only a few representative works for the sake of motivation. In the abstract case, when $\mathcal{B}$ does not depend on time, that is the autonomous situation, the approach from semi-groups of operators and interpolation has been fruitful, starting from the earlier works (see Kato's book \cite{kato2013perturbation}) up to the criterion for maximal regularity in UMD Banach spaces (see L. Weis \cite{weis2001operator} or the review by Kunstmann-Weis \cite{kunstmann2004maximal}). In the non-autonomous case, results can be obtained by perturbation of this theory, assuming some time-regularity; see, for example, Kato's paper \cite{kato1961abstract}. A specific class of such problems arises when $\mathcal{B}$ originates from a sesquilinear form with coercivity assumptions. Lions developed an approach in \cite{lions1957problemes} which he systematized in \cite{lions2013equations}. The nice feature of this approach is that there is no need for regularity with respect to time; its drawback is that it is restricted to Hilbert initial spaces. In parallel, there has been a systematic study of concrete parabolic Cauchy problems of differential type, starting when the coefficients are regular. In this case, several methods exist for constructing the fundamental solution.
The most effective technique involves a parametrix in combination with the method of freezing the coefficients \cite{friedman2008partial}. This approach reduces the problem to one where the coefficients are independent of space, leading to explicit solutions represented by kernels $ \Gamma(t, x, s, y) $ with Gaussian decay. When the coefficients are measurable (and possibly unbounded for the lower order terms), the theory of weak solutions, developed in the 1950s and 1960s, applies. This theory culminated in the book of Ladyzenskaja, Solonnikov and Ural'ceva \cite{ladyzhenskaia1968linear}. Although we shall not consider it here, the specific situation of second order parabolic equations with real, measurable, time-dependent coefficients was systematically treated by Aronson \cite{ aronson1967bounds, aronson1968non}: his construction of fundamental solutions and the proof of lower and upper bounds relied on the regularity properties of local weak solutions due to Nash \cite{nash1958continuity} and their extensions, and on taking limits from operators with regular coefficients. A recent result \cite{ataei2024fundamental} follows this approach for non-autonomous degenerate parabolic problems in the sense of $A_{2}$-weights. In contrast, not using regularity theory, the article \cite{auscher2023universal} developed a framework for a Laplacian (and its integral powers), extending the Lions embedding theorem (see below) as a first step to obtain new results on fundamental solutions for equations with unbounded coefficients. A natural question is whether one can develop a framework, going beyond the one of Lions, that provides us with \begin{enumerate} \item An optimal embedding theorem with integral identities, \item The largest classes of weak solutions for which one obtains existence and uniqueness, \item Definition, existence and uniqueness of a fundamental solution. \end{enumerate} The answer is yes.
The arguments can be developed in a very abstract manner, bearing on functional calculus of positive self-adjoint operators. We next present a summary of our results as a road map. Clearly, when it comes to concrete applications, our results do not distinguish equations from systems, the nature of the elliptic part (local or non-local) and its order, boundary conditions, etc. We give three examples at the end as an illustration. The first example deals with second-order uniformly elliptic parts under Dirichlet boundary conditions on domains. This situation may not seem original but we still give the main consequences for readers to be able to compare with the literature. Our second example is parabolic equations with integro-differential elliptic part. The typical example is the fractional Laplacian and such equations arise in many fields from PDE to probability \cite{lions1969quelques, caffarelli2007extension, bogdan2012estimates}. The usual theory for the fractional Laplacians yields the fundamental solution as a density of a probability measure. Fundamental solutions for general integro-differential parabolic operators with kernels have been considered in the literature, assuming a positivity condition and pointwise bounds. We refer to the introduction of \cite{kassmann2023upper} and the references therein. In that article, a proof of the pointwise upper bound is presented. Here, we do not assume any kind of positivity and we show the existence of a fundamental solution as an evolution family of operators. At our level of generality, these operators may not have kernels with pointwise bounds. Still, this family can be used to represent weak solutions without further assumptions. In any case, it gives a universal existence result. For example, the fundamental solution used in \cite{kassmann2023upper} must be the kernel of our fundamental solution operator.
The third example is for degenerate operators as in \cite{ataei2024fundamental}, without assuming the coefficients to be real. It directly gives existence of a fundamental solution without using local properties of solutions. In a forthcoming article, the second author will use this to give a new proof of the pointwise estimates. \section{Summary of our results}\label{sec:summary} \subsection{Embeddings and integral identities} Lions' embedding theorem \cite{lions1957problemes} asserts that if $V$ and $H$ are two Hilbert spaces such that $V$ is densely embedded in $H$, itself densely embedded in $V^\star$, the dual of $V$ with respect to the inner product of $H$, then we have the continuous embedding $$ L^2((0, \mathfrak{T}); V) \cap H^1((0, \mathfrak{T}); V^\star) \hookrightarrow C([0, \mathfrak{T}]; H),$$ and absolute continuity of the map $t\mapsto\|u(t)\|_{H}^2$, which yields integral identities. The triple $(V, H, V^\star)$ is said to be a Gelfand triple. One of our main results is the following improvement. \begin{thm}\label{thm:energyboundedinterval} Let $S$ be a positive, self-adjoint operator on a separable Hilbert space $H$. Let $I=(0,\mathfrak{T})$ be a bounded, open interval of $\mathbb{R}$. Let $u \in L^1(I;H)$ be such that $Su\in L^2(I;H)$. Assume that $\partial_t u = Sf+S^\beta g$ with $f \in L^2(I;H)$ and $g \in L^{\rho'}(I;H)$, where ${\beta}={2}/{\rho} \in [0,1)$ and $\rho'$ is the conjugate H\"older exponent to $\rho$. Then $ u \in C(\Bar{I},H)$ and $ t \mapsto \left \| u(t) \right \|^2_H$ is absolutely continuous on $\Bar{I}$ with, for all $ \sigma, \tau \in \Bar{I}$ such that $ \sigma < \tau$, the integral identity \begin{align}\label{eq:integralidentityintro} \left \| u(\tau) \right \|^2_H-\left \| u(\sigma) \right \|^2_H = 2\mathrm{Re}\int_{\sigma}^{\tau} \langle f(t),Su(t)\rangle_{H} + \langle g(t),S^\beta u(t)\rangle_{H} \ \mathrm d t.
\end{align} As a consequence, $u\in L^r((0,\mathfrak{T});D(S^\alpha))$ for all $r\in (2,\infty]$ such that $\alpha=2/r$, with a constant depending only on $\beta$, \begin{align}\label{eq:Lr} \| S^\alpha u \|_{L^r((0,\mathfrak{T});H)} \lesssim \|Su \|_{L^2((0,\mathfrak{T});H)}+ \|f \|_{L^2((0,\mathfrak{T});H)}+\|g \|_{L^{\rho'}((0,\mathfrak{T});H)} + \inf_{\tau \in [0,\mathfrak{T}]}\|u(\mathfrak{\tau})\|_{H}. \end{align} \end{thm} Let us comment on this result. Its core is the continuity and the integral identity, proved in Corollary \ref{corenergybounded} when $S$ is injective and Proposition \ref{prop:energyboundedintervalinhomo} in the general case. If $u$ had belonged to $L^2(I;H)$ then $u\in L^2(I;V)$ with $V$ the domain of $S$. Here we only assume $u\in L^1(I;V)$, which is used qualitatively, and $\partial_{t}u$ is taken in the sense of distributions on $I$ valued in $V^*$. We allow an extra term $S^\beta g$ in the expression of the time derivative of $u$. In fact, $S^\beta g$ belongs to $SL^2(I;H)+ L^1(I;H)\subset L^1(I; V^*)$ (see Proposition \ref{prop:embed r>0} when $S$ is injective; the proof is the same otherwise). Note that it could be a finite combination of such terms and the integral identity would be modified accordingly. The $L^r$ estimate \eqref{eq:Lr} follows \textit{a posteriori}: one first proves \eqref{eq:Lr} with $r=\infty$ using \eqref{eq:integralidentityintro}, together with the interpolation inequality $$ \| S^\alpha u \|_{L^r((0,\mathfrak{T});H)} \le \|Su \|_{L^2((0,\mathfrak{T});H)}^\alpha \|u \|_{L^\infty((0,\mathfrak{T});H)}^{1-\alpha} $$ for $\alpha=\beta$ when $0<\beta<1$. Then we reuse this inequality for different $\alpha\in[0,1)$. See Proposition \ref{prop:embed r>0} when $S$ is injective, together with the remark that follows it; the proof applies \textit{verbatim} when $S$ is not injective. Theorem \ref{thm:energyboundedinterval} admits versions on unbounded intervals.
On the half-line $(0, \infty)$ (resp.~on $\mathbb{R}$), it holds assuming that $u \in L^1((0, \mathfrak{T}); H)$ for all $\mathfrak{T} < \infty$ (resp.~$u \in L^1_{\mathrm{loc}}(\mathbb{R}; H)$) when $S$ is not injective. In this case, $u$ is bounded in $H$ on $(0, \infty)$ (resp. $\mathbb{R}$), {see Proposition \ref{prop:energyboundedintervalinhomo}}. When $S$ is injective, the local integrability condition of $u$ in $H$ can be dropped using completions of $V$ and $V^*$ for the homogeneous norms $\|Su\|_{H}$ and $\|S^{-1}u\|_{H}$. In this case, not only is $u$ bounded, but it also tends to zero at infinity (resp.~at $\pm \infty$) in $H$, as shown in Proposition \ref{energy lemma} and Corollary \ref{corenergy}. As a consequence, we can eliminate the last term in \eqref{eq:Lr}. Our strategy is to prove the embedding first when $I = \mathbb{R}$, and then to proceed by restriction to $(0,\infty)$. When it comes to bounded intervals, although the condition $u\in L^1(I;H)$ does not appear in \eqref{eq:Lr}, the condition $Su\in L^2(I;H)$ alone does not suffice, as an example shows. We added strong integrability in $H$ but in fact, as the proof shows, it suffices that $u$ exists as a distribution on $I$ valued in $V^\star$ and that there exists one $t$ for which $u(t)\in H$. But for applications to Cauchy problems, it is more natural to have the integrability condition and we stick to that. Our proof of the embedding already has a PDE flavor, using $\partial_t u+S^2u=S^2u+Sf+S^\beta g$. This leads us to a thorough study of the abstract heat operator $\partial_t +S^2$ (hence the notation $\partial_tu$ rather than $u'$), which has some interest in its own right. This is done in Sections \ref{Section 2}, \ref{Section 3}, \ref{Section 4}.
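A quick finite-dimensional sanity check may help orient the reader here. The pointwise inequality behind the interpolation step above is $\|S^\alpha x\|_H \le \|Sx\|_H^{\alpha}\,\|x\|_H^{1-\alpha}$, obtained from H\"older's inequality applied to the spectral measure; integrating it in time then gives the $L^r$ bound. The following Python sketch is our own illustration (not part of the paper): it models $H=\mathbb{R}^n$ with $S$ a positive diagonal matrix and checks the inequality on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional model: H = R^n, S a positive diagonal operator.
n = 50
lam = rng.uniform(0.1, 10.0, n)  # the (positive) spectrum of S

def S_pow(alpha, x):
    """Apply S^alpha to x via the spectral (here diagonal) representation."""
    return lam**alpha * x

# ||S^a x|| <= ||Sx||^a ||x||^(1-a): Hoelder with exponents 1/a and 1/(1-a)
# applied to sum(lam^{2a} x^2) = sum((lam^2 x^2)^a (x^2)^(1-a)).
for _ in range(100):
    x = rng.normal(size=n)
    for alpha in (0.25, 0.5, 0.75):
        lhs = np.linalg.norm(S_pow(alpha, x))
        rhs = np.linalg.norm(S_pow(1.0, x))**alpha * np.linalg.norm(x)**(1 - alpha)
        assert lhs <= rhs + 1e-12
```

In the theorem, the same inequality is applied at each time $t$ and combined with H\"older's inequality in $t$ to pass from the case $r=\infty$ to general $r$.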
\subsection{Weak solutions and Cauchy problems} The embedding and its variants allow us to consider the largest possible class of weak solutions to abstract parabolic operators $\partial_{t}+\mathcal{B}$ with a time-dependent elliptic part $\mathcal{B}$ associated to a family of bounded and sesquilinear forms on the domain of $S$. We do not assume any time-regularity on $\mathcal{B}$ apart from its weak measurability. This can be done either with estimates being homogeneous in $S$ if we decide to work on infinite intervals (see Section \ref{sec:homogenoussetup}), or inhomogeneous (see Section \ref{sec:inhomogenousCP}; we call the elliptic operator $\Tilde{\mathcal{B}}$ there). Let us state the final result in the latter case, that is, with inhomogeneous $\Tilde{\mathcal{B}}$ as in Section \ref{sec:inhomogenousCP}, combining Theorems \ref{ThmCauchy inhomog} and \ref{thm: passage au concret} on a finite interval $(0,\mathfrak{T})$. We fix $\rho \in (2,\infty)$ and set $\beta={2}/{\rho}$. Given an initial condition $a\in H$ and source terms $f \in L^2((0,\mathfrak{T});H)$, $g \in L^{\rho'}((0,\mathfrak{T});H)$ and $h\in L^{1}((0,\mathfrak{T});H)$, we wish to solve the Cauchy problem \begin{align}\label{eq:Cauchy inhomogeneintro} \left\{ \begin{array}{ll} \partial_t u +\Tilde{\mathcal{B}}u = S{f}+ S^\beta {g} + h \ \ \mathrm{in} \ \mathcal{D}'((0,\mathfrak{T}); \mathrm{D}), \\ u(0)=a \ \ \mathrm{weakly \ in} \ \mathrm{D}, \end{array} \right. \end{align} where $ \mathrm{D}$ is a Hausdorff topological dense subspace of the domain of $S$, equipped with the graph norm, that is, a core of $D(S)$. The first equation is thus understood in the weak sense against test functions in $\mathcal{D}((0,\mathfrak{T}); \mathrm{D})$. The second equation means that $\langle u(t), \Tilde{a}\rangle_{H}\rightarrow \langle a, \Tilde{a}\rangle_{H}$ for all $\Tilde{a}\in \mathrm{D}$, as $t$ tends to $0$ along a sequence.
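To make the structure of such Cauchy problems concrete, here is a toy finite-dimensional discretization (entirely our own illustration, with our own choices: $H=\mathbb{R}^n$, an elliptic part $B=SMS$ whose form is coercive, only the $h$ source kept, and a backward Euler scheme standing in for the variational solution). It checks the discrete analogue of the a priori bound $\sup_t\|u(t)\|_H \le \|a\|_H + \|h\|_{L^1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, steps = 8, 1.0, 400
dt = T / steps

# Toy elliptic part: B = S M S with M = I + (skew part), so that
# Re<Mx,x> = |x|^2 and hence Re<Bu,u> = |Su|^2 >= 0 (coercive, non-self-adjoint).
S = np.diag(rng.uniform(0.5, 3.0, n))
K = rng.normal(size=(n, n))
M = np.eye(n) + (K - K.T)
B = S @ M @ S

a = rng.normal(size=n)                     # initial data in H
h = lambda t: np.cos(5 * t) * np.ones(n)   # source term h in L^1((0,T);H)

# Backward Euler: (I + dt*B) u_{k+1} = u_k + dt*h(t_k); the nonnegative
# real part of B makes each step a contraction up to the source term.
R = np.linalg.inv(np.eye(n) + dt * B)
u, sup_norm, h_mass = a.copy(), np.linalg.norm(a), 0.0
for k in range(steps):
    u = R @ (u + dt * h(k * dt))
    h_mass += dt * np.linalg.norm(h(k * dt))
    sup_norm = max(sup_norm, np.linalg.norm(u))

# Discrete analogue of the a priori bound sup_t |u(t)| <= |a| + |h|_{L^1}.
assert sup_norm <= np.linalg.norm(a) + h_mass + 1e-9
```

The step-by-step bound follows by pairing $(I+\mathrm{d}t\,B)u_{k+1}=u_k+\mathrm{d}t\,h_k$ with $u_{k+1}$ and dropping the nonnegative term $\mathrm{d}t\,\|Su_{k+1}\|^2$, mirroring the role of the energy identity in the continuous setting.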
\begin{thm}\label{Thm:Cauchyinhomogintro} There exists a unique weak solution $u\in L^1((0,\mathfrak{T}); H)$ with $Su\in L^2((0,\mathfrak{T}); H)$ to the problem \eqref{eq:Cauchy inhomogeneintro}. Moreover, $u \in C([0,\mathfrak{T}];H) \cap L^r((0,\mathfrak{T});D(S^\alpha))$ for all $r\in [2,\infty)$ with $\alpha=2/r$, and we have the estimate \begin{align*} \sup_{t\in [0,\mathfrak{T}]} \| u(t) \|_{H}+\| S^\alpha u \|_{L^r((0,\mathfrak{T});H)} \leq C ( \left \| f \right \|_{L^2((0,\mathfrak{T});H)} + \left \| g \right \|_{L^{\rho'}((0,\mathfrak{T});H)}+ \left \| h \right \|_{L^{1}((0,\mathfrak{T});H)}+ \left \| a \right \|_H ), \end{align*} where $C$ is a constant independent of $f,g,h$ and $a$. In addition, we can write the energy equality corresponding to the absolute continuity of $t\mapsto \|u(t)\|^2_{H}$. \end{thm} Theorem \ref{ThmCauchy inhomog} also contains a variant on the interval $(0,\infty)$ (case (b) there) where we replace $S$ by the operator $\Tilde{S}=(S^2+1)^{1/2}$ and we also obtain decay of the solution at $\infty$ while the class of uniqueness is $u\in L^1((0,\mathfrak{T}); H)$ for all $\mathfrak{T}<\infty$ with $Su\in L^2((0,\infty); H)$. We note that we consider classes of solutions in $L^1((0,\mathfrak{T}); H)$ rather than $L^\infty((0,\mathfrak{T}); H)$ as is customary. This condition suffices to obtain a priori continuity in time by Theorem \ref{thm:energyboundedinterval}. There is a similar theorem for the backward parabolic adjoint operator $ -\partial_s +\Tilde{\mathcal{B}}^*$ with final condition $\Tilde{a}$ at $\mathfrak{T}$. The estimates and the energy equality are a consequence of Theorem \ref{thm:energyboundedinterval}. Uniqueness relies on the energy equality. Existence is obtained by restriction from constructions first on $\mathbb{R}$ and then on $(0,\infty)$. The role of the core $\mathrm{D}$ of $D(S)$ is in fact irrelevant here and is only for the purpose of having a weak formulation with a small space of test functions. 
It can be equivalently replaced by $D(S)$ itself (see Theorem \ref{thm: passage au concret} and its proof). However, for concrete partial differential equations where $\mathrm{D}$ can be taken as a space of smooth and compactly supported functions of the variable $x$, we can work with smooth and compactly supported functions of the variables $(t,x)$. Homogeneous variants on $(0,\infty)$ and $\mathbb{R}$ can be found in the text (see Section \ref{Section 5}), where we had to develop an appropriate theoretical functional framework in Sections \ref{Section 2}, \ref{Section 3}, \ref{Section 4}. Actually, we start the proof by implementing a result of Kaplan (see Lemma \ref{lemme Hidden Coerc}) proving the invertibility of a parabolic operator on a sort of variational space involving the half-order time derivative. This result has been central to other recent developments in the field of parabolic problems (see \textit{e.g.}, \cite{nystrom2017l2, auscher20202, auscher2023universal}), while it was more of a consequence of the construction of weak solutions in earlier works of the literature, including Kaplan's work \cite{kaplan1966abstract}. As this result can only be formulated when time ranges over $\mathbb{R}$, this explains why we proceed by restriction from this case. \subsection{Fundamental solution} We come to the notion of fundamental solution and evolution family (or propagators, or Green operators, as suggested by Lions) used to represent weak solutions. Although it seems well known that they are the same, we feel it is essential to clarify the two different definitions. This distinction eventually leads to easy arguments, even in this very general context.
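Before the formal definitions, the autonomous, finite-dimensional case already shows what to expect. There the evolution family is the matrix exponential $G(t,s)=e^{-(t-s)B}$, and the properties identified below (causality, the Chapman-Kolmogorov identity, the adjoint relation with the backward family) can be checked directly. The Python sketch below is purely illustrative and not from the paper; the matrix model and the eigendecomposition-based exponential are our own choices.

```python
import numpy as np

def matexp(A):
    """Matrix exponential via eigendecomposition (A assumed diagonalizable)."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

rng = np.random.default_rng(2)
n = 6
B = np.eye(n) + 0.2 * rng.normal(size=(n, n))  # autonomous generator

def G(t, s):
    """Green operator (propagator) of d/dt u + B u = 0, with G(t,s)=0 for s>t."""
    return matexp(-(t - s) * B) if t >= s else np.zeros((n, n))

def G_back(s, t):
    """Green operator of the backward adjoint equation -d/ds v + B* v = 0."""
    return matexp(-(t - s) * B.conj().T) if s <= t else np.zeros((n, n))

s, r, t = 0.2, 0.5, 0.9
# Chapman-Kolmogorov identity: G(t,s) = G(t,r) G(r,s) for s < r < t.
assert np.allclose(G(t, s), G(t, r) @ G(r, s))
# Adjoint relation: the backward Green operator is the adjoint of the forward one.
assert np.allclose(G_back(s, t), G(t, s).conj().T)
# Causality: the family vanishes when s > t.
assert not G(s, t).any()
```

In the non-autonomous, infinite-dimensional setting of this paper, none of these identities can be taken for granted; they are exactly the content of the identification theorem stated below.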
We assume that $\partial_t + \mathcal{B}$ is a parabolic operator as above for which one can prove existence and uniqueness of weak solutions of the Cauchy problems on $I$, with test functions valued in the core $\mathrm{D}$, together with the absolute continuity of $t\mapsto \|u(t)\|_{H}^2$ as in Theorem \ref{Thm:Cauchyinhomogintro}, and similarly for its backward adjoint. \begin{defn}[Fundamental solution for $\partial_t + \mathcal{B}$ on $I$] \label{FSintro} A fundamental solution for $\partial_t + \mathcal{B}$ is a family $\Gamma=(\Gamma(t,s))_{t,s \in I}$ of bounded operators on $H$ such that: \begin{enumerate} \item (Uniform boundedness on $H$) $\sup_{t,s \in I} \left\|\Gamma(t,s) \right\|_{\mathcal{L}(H)} < +\infty.$ \item (Causality) $\Gamma(t,s)=0$ if $s>t$. \item (Measurability) For all $a,\Tilde{a}\in \mathrm{D}$, the function $(t,s) \mapsto \langle \Gamma(t,s)a, \Tilde{a} \rangle_H$ is Borel measurable on $I^2$. \item (Representation) For all $\phi \in \mathcal{D}(I)$ and $a \in \mathrm{D}$, the weak solution of the equation $\partial_t u + \mathcal{B}u = \phi \otimes a $ in $\mathcal{D}'(I; \mathrm{D})$ satisfies for all $\Tilde{a} \in \mathrm{D}$, $\langle u(t), \Tilde{a} \rangle_H = \int_{-\infty}^{t} \phi(s) \langle \Gamma (t,s)a, \Tilde{a} \rangle_H \ \mathrm{d}s,$ for almost every $t\in I$. \end{enumerate} One defines a fundamental solution $\Tilde{\Gamma}=(\Tilde{\Gamma}(s,t))_{s,t \in I}$ to the backward operator $-\partial_s + \mathcal{B}^\star$ analogously, where (2) is replaced by $\Tilde{\Gamma}(s,t)=0$ if $s>t$. \end{defn} Such an object must be unique (see Lemma \ref{Unicité sol fonda} in the case where $I=\mathbb{R}$, whose proof applies \textit{verbatim}). \begin{defn}[Green operators]\label{def:Greenintro} Let $t,s \in \overline{I}$ and $a,\Tilde{a} \in H$. \begin{enumerate} \item For $t \ge s$, $G(t,s)a$ is defined as the value at time $t$ of the weak solution to the equation $\partial_t u + \mathcal{B}u = 0$ with initial data $a$ at time $s$.
\item For $s \le t$, $\Tilde{G}(s,t)\Tilde{a}$ is defined as the value at time $s$ of the weak solution $\Tilde{u} $ of the equation $-\partial_s \Tilde{u} + \mathcal{B^\star}\Tilde{u} = 0$ with final data $ \Tilde{a}$ at time $t$. \end{enumerate} We set $G(t,s)=0=\Tilde{G}(s,t)$ if $s>t$. The operators $G(t,s)$ and $\Tilde{G}(s,t)$ are called the Green operators for the parabolic operator $\partial_t +\mathcal{B}$ and the backward parabolic operator $ -\partial_s +\mathcal{B^\star}$, respectively. \end{defn} Uniqueness and the integral identities allow us to obtain the identification of the two objects as follows, with proofs being \textit{verbatim} those of Proposition \ref{Prop Green} and Theorem \ref{thm:identification}. \begin{thm}\label{thm:Green-FSintro} The following statements hold. \begin{enumerate} \item (Adjoint relation) For all $s <t $, $G(t,s)$ and $\Tilde{G}(s,t)$ are adjoint operators. \item (Chapman-Kolmogorov identity) For any $s < r <t $, we have $G(t,s)=G(t,r)G(r,s)$. \item (Existence) The family of Green operators $(G(t,s))_{s,t\in \overline{I}}$ is a fundamental solution. \end{enumerate} \end{thm} With this in hand, we may first combine the estimates obtained for both families. See Corollary \ref{Cor Green} for the ones for the Green operators (again, transposed \textit{verbatim}). Next, we obtain a full representation of the weak solutions in Theorem \ref{Thm:Cauchyinhomogintro} (again, combining Theorems \ref{ThmCauchy inhomog} and \ref{thm: passage au concret}). \begin{thm}\label{Thm:FSCauchyintro} Consider the fundamental solution $(\Gamma_{\Tilde{\mathcal{B}}}(t,s))_{0\le s\le t\le \mathfrak{T}}$ of $\partial_t +\Tilde{\mathcal{B}}$ as in Theorem \ref{Thm:Cauchyinhomogintro}.
For all $t \in [0,\mathfrak{T}]$, we have the following representation of the weak solution $u$ of \eqref{eq:Cauchy inhomogeneintro} : \begin{align*} u(t)=\Gamma_{\Tilde{\mathcal{B}}}(t,0)a&+\int_{0}^{t}\Gamma_{\Tilde{\mathcal{B}}}(t,s)Sf(s)\mathrm ds +\int_{0}^{t}\Gamma_{\Tilde{\mathcal{B}}}(t,s)S^\beta g(s) \ \mathrm ds+\int_{0}^{t}\Gamma_{\Tilde{\mathcal{B}}}(t,s)h(s) \ \mathrm ds, \end{align*} where the two integrals containing ${f}$ and ${g}$ are weakly defined in $H$, while the one involving $h$ converges strongly (i.e., in the Bochner sense). More precisely, for all $\Tilde{a} \in H$ and $t \in [0,\mathfrak{T}]$, we have the equality with absolutely converging integrals \begin{align*} \langle u(t) , \Tilde{a}\rangle_H & = \langle \Gamma_{\Tilde{\mathcal{B}}}(t,0)a , \Tilde{a}\rangle_H + \int_{0}^{t} \langle {f}(s) , S \Tilde{\Gamma}_{\Tilde{\mathcal{B}}}(s,t)\Tilde{a}\rangle_H \ \mathrm ds \\ & \qquad +\int_{0}^{t} \langle {g}(s) , S^\beta \Tilde{\Gamma}_{\Tilde{\mathcal{B}}}(s,t)\Tilde{a}\rangle_H \ \mathrm ds+\int_{0}^{t} \langle \Gamma_{\Tilde{\mathcal{B}}}(t,s) h(s) , \Tilde{a}\rangle_H \ \mathrm ds. \end{align*} \end{thm} As before, this is obtained from the variants on $(0,\infty)$ and $\mathbb{R}$ which in fact come first. We refer the reader to the text for details. \section{The abstract homogeneous framework}\label{Section 2} Throughout this article, we are working in a separable complex Hilbert space $H$ whose norm is denoted by $\left \| \cdot \right \|_H$ and its inner product by $\langle \cdot, \cdot \rangle_H$, and $S$ is a positive and self-adjoint operator on $H$. \textbf{From Sections \ref{Section 2} to \ref{Section 4}, we assume that $S$ is injective and shall not repeat this in statements.} The general case when $S$ might not be injective will be considered in Section~\ref{sec:Inhomogenous}. We do not assume that $0 \in \rho(S)$, that is, $S$ is not necessarily invertible. The spectrum of $S$ is contained in $\mathbb{R}_+=[0,\infty)$. 
To make our approach accessible, it is useful to present facts from functional calculus and give the construction of spaces of test functions and distributions in an abstract context given that 0 might be in the spectrum of $S$. \subsection{A review of the Borel functional calculus}\label{Section: calcul bor} For general background on self-adjoint operators and the spectral theorem, we refer to \cite{reed1980methods} and \cite{davies1995spectral}. By the spectral theorem for self-adjoint operators, there is a unique map $f\mapsto f(S)$ from the space of all locally bounded Borel functions on $(0,\infty)$, which we denote by $\mathcal{L}^\infty_{\mathrm{loc}}((0,\infty))$, into the space of closed linear maps on $H$, which sends $1$ to the identity and $(t\mapsto(z-t)^{-1})$ to $(z-S)^{-1}$ for all $z \in \mathbb{C}\setminus \mathbb{R}_+$, and whose restriction to the space of all bounded Borel functions on $(0,\infty)$, denoted $\mathcal{L}^\infty((0,\infty))$, is a $\star$-algebra homomorphism into $\mathcal{L}(H)$, the space of bounded linear maps on $H$. More precisely, we have $$\forall f \in \mathcal{L}^\infty((0,\infty)): \ \left\|f(S) \right\|\leq \left\|f \right\|_{\infty}.$$ Moreover, for all $f,g \in \mathcal{L}^\infty_{\mathrm{loc}}((0,\infty))$, we have $f(S)g(S)\subset (fg)(S)$ with equality if $g(S) \in \mathcal{L}(H)$. We also recall that $f(S)^*=f^\star (S)$ with $f^\star= \bar f$.
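In a finite-dimensional toy model where $S=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ with $\lambda_i>0$, the Borel functional calculus reduces to $f(S)=\operatorname{diag}(f(\lambda_1),\dots,f(\lambda_n))$, and the properties just recalled can be checked numerically. The following sketch (our own illustration, with names of our choosing; not part of the abstract framework) verifies the norm bound, the $\star$-homomorphism property, the adjoint relation and the resolvent identity:

```python
import numpy as np

# Toy model: S is a positive self-adjoint operator on a finite-dimensional H,
# diagonalised as S = diag(lam) with lam > 0.  The Borel functional calculus
# then reduces to f(S) = diag(f(lam)).  (Illustrative sketch only.)
lam = np.array([0.5, 1.0, 2.0, 4.0])

def calc(f):
    """f(S) for the diagonal model: apply f on the spectrum."""
    return np.diag(f(lam))

f = lambda t: 1.0 / (1.0 + t)          # a bounded Borel function on (0, inf)
g = lambda t: np.exp(-t)

fS, gS = calc(f), calc(g)

# *-homomorphism property: f(S) g(S) = (fg)(S)
assert np.allclose(fS @ gS, calc(lambda t: f(t) * g(t)))

# Norm bound: ||f(S)|| <= sup |f|  (here, the sup over the spectrum)
assert np.linalg.norm(fS, 2) <= np.max(np.abs(f(lam))) + 1e-12

# Adjoint relation: f(S)^* = (conj f)(S); here f is real, so f(S) = f(S)^*
assert np.allclose(fS.conj().T, calc(lambda t: np.conj(f(t))))

# Resolvent: (t -> (z - t)^{-1}) is sent to (z - S)^{-1}
z = 1.0 + 2.0j
assert np.allclose(calc(lambda t: 1.0 / (z - t)),
                   np.linalg.inv(z * np.eye(4) - np.diag(lam)))
print("functional calculus checks passed")
```

In infinite dimensions the same identities hold, but $f(S)$ is only closed (not bounded) when $f$ is merely locally bounded.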
We shall use that if $\varphi : (0,\infty) \rightarrow \mathbb{C}$ is a Borel function such that \begin{equation}\label{Psi+} \left| \varphi(t) \right| \leq C \min(\left|t \right|^s,\left|t \right|^{-s}), \end{equation} for some constants $C,s >0$ and for all $t>0$, then the operators $\int_{\varepsilon}^{{1}/{\varepsilon}} \varphi(aS) \ \frac{\mathrm{d}a}{a}$ are uniformly bounded for $0<\varepsilon<1$ and converge strongly in $\mathcal{L}(H)$, namely for all $v \in H$, \begin{equation}\label{eq:Calderon} \lim_{\varepsilon \to 0} \int_{\varepsilon}^{{1}/{\varepsilon}} \varphi(aS)v \ \frac{\mathrm{d}a}{a} = \left ( \int_{0}^{+\infty} \varphi(t) \frac{\mathrm d t}{t} \right ) v, \end{equation} where the limit is in $H$. This is the so-called Calder\'on reproducing formula. In this entire section, we fix a function $\Phi \in \mathcal{D}((0,\infty))$ such that $\int_{0}^{+\infty} \Phi (t) \frac{\mathrm{d} t}{t}=1$. Remark that for all $\alpha \in \mathbb{R}$, $t\mapsto t^\alpha \Phi(t) \in \mathcal{D} ((0,\infty) )$ and in particular verifies \eqref{Psi+} for some constants $\Tilde{C}, \Tilde{s}>0$ and for all $t>0$. For $\alpha \in \mathbb{R}$, let $S^{\alpha}$ denote the closed operator $\mathbf{t^{\alpha}}(S)$, which is also injective, positive and self-adjoint. We recall that for all $\alpha, \beta \in \mathbb{R}$, we have $$S^{\alpha+\beta}=S^\alpha S^\beta.$$ Denote by $D(S^{\alpha})$ the domain of $S^{\alpha}$. For any element $u\in D(S^\alpha)$, we set \begin{equation*} \left\| u \right\|_{S,\alpha}:=\left\|S^{\alpha}u \right\|_H. \end{equation*} We insist on the fact that $\left\| \cdot \right\|_{S,\alpha}$ denotes the homogeneous norm on the domain of $S^\alpha$ and the (Hilbertian) graph norm is $(\| \cdot \|_{S,\alpha}^2+ \|\cdot \|_H^2)^{1/2}$. The operator \begin{equation}\label{Salpha} S^{\alpha}: (D(S^{\alpha}), \left\| \cdot \right\|_{S,\alpha} ) \rightarrow \left ( H, \left\| \cdot \right\|_H \right ) \end{equation} is isometric with dense range. 
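As a concrete check (our own illustration, not from the text), take $\varphi(t)=t e^{-t}$, which satisfies \eqref{Psi+} and $\int_{0}^{+\infty} \varphi(t)\,\frac{\mathrm{d}t}{t}=1$, together with the diagonal model $S=\operatorname{diag}(\lambda_i)$; the substitution $u=a\lambda_i$ gives $\int_{\varepsilon}^{1/\varepsilon} \varphi(a\lambda_i)\,\frac{\mathrm{d}a}{a}=e^{-\varepsilon \lambda_i}-e^{-\lambda_i/\varepsilon}\to 1$, so the truncated Calder\'on integrals converge to the identity. A numerical sketch:

```python
import numpy as np

# Calderon reproducing formula in the diagonal model S = diag(lam):
# phi(t) = t exp(-t) satisfies |phi(t)| <= C min(t, 1/t) and integrates
# to 1 against dt/t, so int_eps^{1/eps} phi(aS) v da/a -> v as eps -> 0.
lam = np.array([0.3, 1.0, 5.0])
v = np.array([1.0, -2.0, 0.5])
phi = lambda t: t * np.exp(-t)

def calderon(eps, n=20000):
    """Trapezoid rule for int_eps^{1/eps} phi(a S) v da/a on a log grid."""
    x = np.linspace(np.log(eps), -np.log(eps), n)   # x = log a
    a = np.exp(x)
    y = phi(np.outer(a, lam)) * v                   # rows: phi(a S) v
    h = x[1] - x[0]
    return h * (y.sum(axis=0) - 0.5 * (y[0] + y[-1]))

approx = calderon(1e-4)
assert np.allclose(approx, v, atol=1e-3)            # close to v for small eps
print("Calderon check passed")
```

The integration is performed in the variable $\log a$ because $\mathrm{d}a/a$ is the invariant measure of the multiplicative group, which keeps the quadrature uniform over many scales.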
\subsection{An ambient space } We construct an ambient space in which we can perform all calculations. Consider the vector space \begin{align*} E_{-\infty}:= \bigcap_{\alpha \in \mathbb{R}} D(S^{\alpha }), \end{align*} endowed with the topology defined by the family of norms $(\left\| \cdot \right\|_{S,\alpha} )_{\alpha \in \mathbb{R}}$. We recall the following moment inequality \begin{equation*} \left \| S^\gamma u \right \|_H\leq \| S^\alpha u \|_H^\theta \| S^\beta u \|_H^{1-\theta } \ (u \in E_{-\infty}), \end{equation*} for all $\theta \in [0,1]$ and $\gamma=\theta \alpha +(1-\theta)\beta$, where $\alpha$ and $\beta$ have the same sign \cite[Proposition 6.6.4]{haase2006functional}. Using the moment inequality with the closedness of the powers $S^\alpha$, one can see that $E_{-\infty}$ endowed with the countable family of norms $(\left\| \cdot \right\|_{S,\alpha} )_{\alpha \in \mathbb{Z}}$ is in fact a Fr\'echet space. Notice that for all $\alpha \in \mathbb{R}$, $S^{\alpha} : E_{-\infty} \rightarrow E_{-\infty}$ is an isomorphism. The space $E_{-\infty}$ will serve as the test space, as evidenced by the following lemma. \begin{lem}\label{density E_infty} $E_{-\infty}$ is dense in $ (D(S^{\alpha}), \left\| \cdot \right\|_{S,\alpha} )$ for all $\alpha \in \mathbb{R}.$ \end{lem} \begin{proof} Let $v \in D(S^\alpha)$. We regularise $v$ by setting for all $\varepsilon \in (0,1)$, \begin{align*} v_\varepsilon := \int_{\varepsilon }^{1/\varepsilon } \Phi (aS)v \ \frac{\mathrm{d} a}{a}. \end{align*} We show that $v_\varepsilon \in E_{-\infty}$ and $S^\alpha v_\varepsilon \to S^\alpha v$ in $H$ as $\varepsilon\to 0$. First, for all $\beta \in \mathbb{R}$, since {$t \mapsto t^\beta \Phi(at) \in \mathcal{L}^\infty((0,\infty))$}, we have $v_\varepsilon \in D(S^\beta)$ with $S^\beta v_\varepsilon = \int_{\varepsilon }^{1/\varepsilon } S^\beta \Phi (aS)v \ \frac{\mathrm{d} a}{a} $. Hence, $v_\varepsilon \in E_{-\infty}$.
Furthermore, as $v \in D(S^\alpha)$, we have $S^\alpha \Phi (aS)v= \Phi (aS)S^\alpha v$, so that $S^\alpha v_\varepsilon$ converges to $S^\alpha v$ by the Calder\'on reproducing formula. \end{proof} \begin{rem} The approximation is universal, in the sense that if $v\in D(S^\alpha) \cap D(S^\beta) $, then the approximation occurs simultaneously in both semi-norms. {In particular, $E_{-\infty}$ is dense in $D(S^\alpha)$ for the graph norm. } \end{rem} Let $E_{\infty}$ denote the topological anti-dual space of $E_{-\infty}$. The reason we are interested in such a space is that it provides an ambient space containing a copy of a completion of all spaces $ ( D(S^\alpha), \left\| \cdot \right\|_{S,\alpha}) $. To clarify this claim, we define \begin{align*} \left \| \varphi \right \|_{E_\alpha} := \sup_{v \in E_{-\infty}\setminus \left \{ 0 \right \}}\frac {| \varphi (v)|} { \| v \|_{S,-\alpha}} \end{align*} and the vector space \begin{align*} E_{\alpha}:= \{\varphi \in E_\infty : \left \| \varphi \right \|_{E_\alpha}<\infty\}. \end{align*} The space $\left ( E_{\alpha},\left \| \cdot \right \|_{E_\alpha} \right )$ is a Banach space. We set \begin{equation*} j : H \rightarrow E_\infty, \ \ v \mapsto j(v):=\langle v ,\cdot \rangle_{ H }. \end{equation*} The map $j$ is injective by the density of $E_{-\infty}$ in $H$ from Lemma \ref{density E_infty}. \begin{lem}\label{j identification} For all $\alpha \in \mathbb{R}$, $ j_{\scriptscriptstyle{\vert D(S^\alpha)}}: (D(S^{\alpha}), \left\| \cdot \right\|_{S,\alpha} ) \rightarrow \left ( E_{\alpha},\left \| \cdot \right \|_{E_\alpha} \right )$ is isometric with dense range. \end{lem} \begin{proof} If $v \in D(S^\alpha)$ then we can write for all $\Tilde{v} \in E_{-\infty}$, $j(v)(\Tilde{v})=\langle v, \Tilde{v} \rangle_H = \langle S^\alpha v, S^{-\alpha} \Tilde{v} \rangle_H$.
This implies that $j(v) \in E_{\alpha}$ and $\left \| j(v) \right \|_{E_{\alpha }}=\left \| S^\alpha v \right \|_H=\left \| v \right \|_{S,\alpha}$. Now, if $\varphi \in E_\alpha$, then $\varphi \circ S^\alpha$ has a bounded extension on $H$. Using the Riesz representation theorem, there exists $v \in H$ such that $\varphi \circ S^\alpha = \langle v, \cdot \rangle_H$ or equivalently $\varphi = \langle v, S^{-\alpha}\cdot \rangle_H$. Moreover, we have $\left \| \varphi \right \|_{E_\alpha}= \left \| v \right \|_H$. Since the range of $S^\alpha$ is dense in $H$, there exists a sequence $(v_n)_{n \in \mathbb{N}} \in D(S^\alpha)^{\mathbb{N}}$ such that $S^\alpha v_n \to v$ in $H$. Now, we have for all $\Tilde{v}\in E_{-\infty}$, \begin{align*} j(v_n)(\Tilde{v})-\varphi (\Tilde{v})= \langle v_n, \Tilde{v} \rangle_H -\langle v, S^{-\alpha}\Tilde{v} \rangle_H = \langle S^\alpha v_n-v, S^{-\alpha}\Tilde{v} \rangle_H. \end{align*} Therefore, $\left \| j(v_n)-\varphi \right \|_{E_\alpha}=\left \| S^\alpha v_n -v \right \|_H \to 0$. \end{proof} To make clear the identification we will adopt in the next paragraph, we temporarily define the operator $T$ on the Hilbert space $j(H)=H^\star$ by setting \begin{equation*} D(T):=j(D(S)) \ , \ \ T:=j \circ S \circ j^{-1}. \end{equation*} Since $j: H \rightarrow j(H)$ is a unitary operator by Lemma \ref{j identification} when $\alpha=0$, $T$ is unitarily equivalent to $S$. It follows that $T$ has the same properties as $S$. 
More precisely, $T$ is injective, positive and self-adjoint and we have for all $\alpha \in \mathbb{R}$ $$D(T^\alpha):= j(D(S^\alpha))\ , \ \ T^\alpha= j \circ S^\alpha \circ j^{-1}, \ \ \left \| T^\alpha (j(v)) \right \|_{j(H)}=\left \| S^\alpha v \right \|_{H}.$$ Using Lemma \ref{j identification}, we have for all $\alpha \in \mathbb{R}$, $D(T^\alpha) \subset E_\alpha$ with a dense and isometric inclusion for the homogeneous norm of $T^\alpha$, which means that $E_\alpha$ is a completion of $D(T^\alpha)$ for the homogeneous norm. Moreover, it follows from \eqref{Salpha} that $T^\alpha: D(T^\alpha) \rightarrow j(H)$ is isometric with dense range. Notice that by combining Lemma \ref{density E_infty} and Lemma \ref{j identification}, $j(E_{-\infty})=\bigcap_{\alpha \in \mathbb{R}} D(T^{\alpha })$ is dense in all the spaces $E_\alpha$. {Furthermore, we have for all $\alpha \in \mathbb{R}$, $E_\alpha \cap j(H)= D(T^\alpha)$}. The advantage is that all the spaces mentioned here are contained in $E_\infty$, an anti-dual space of a Fréchet space, which is in particular a Hausdorff topological vector space. From now on, by making the identification of $H$ with $j(H)$ and $S$ with $j\circ S \circ j^{-1}$ as above, we assume that $H$ is contained in a Hausdorff topological vector space $E_\infty$ that contains a completion of all the domains of $S^\alpha$ for the homogeneous norms and we denote by $D_{S,\alpha}$ this completion of $D(S^\alpha)$ in $E_\infty$. Moreover, we have that for all $\alpha \in \mathbb{R}$, $D_{S,\alpha}\cap H = D(S^\alpha)$. By Lemma \ref{density E_infty}, the space $E_{-\infty} = \bigcap_{\alpha \in \mathbb{R}} D(S^\alpha )$ is dense in all these completions. Moreover, there is a sesquilinear continuous duality form $\langle w, v\rangle$ on $E_\infty\times E_{-\infty}$ which extends the inner product on $H$.
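In the diagonal toy model $S=\operatorname{diag}(\lambda_i)$ (a finite-dimensional illustration of ours, where all the spaces coincide with $H$ and only the norms differ), one can check numerically that the pairing $\langle S^\alpha u, S^{-\alpha} v\rangle_H$ does not depend on $\alpha$, and that the supremum defining the dual norm of $\|\cdot\|_{S,-\alpha}$ is attained at $w=S^{2\alpha}u$:

```python
import numpy as np

# Finite-dimensional illustration of the duality pairing
# <u, v>_{H,alpha} = <S^alpha u, S^{-alpha} v>_H and of the isometry
# sup_w |<u, w>| / ||w||_{S,-alpha} = ||u||_{S,alpha}.  (Sketch only.)
rng = np.random.default_rng(0)
lam = np.array([0.2, 1.0, 3.0, 7.0])          # spectrum of S, all > 0
Spow = lambda alpha: np.diag(lam ** alpha)     # S^alpha in the diagonal model

u, v = rng.standard_normal(4), rng.standard_normal(4)

# The pairing is independent of alpha (here trivially, S^alpha S^{-alpha} = I):
pair = lambda alpha: (Spow(alpha) @ u) @ (Spow(-alpha) @ v)
assert np.allclose(pair(0.0), pair(1.0))
assert np.allclose(pair(0.0), pair(-2.5))
assert np.allclose(pair(0.0), u @ v)

# The sup in the duality is attained at w = S^{2 alpha} u, giving
# |<u, w>| / ||w||_{S,-alpha} = ||S^alpha u||_H = ||u||_{S,alpha}.
alpha = 1.5
w = Spow(2 * alpha) @ u
ratio = abs(u @ w) / np.linalg.norm(Spow(-alpha) @ w)
assert np.allclose(ratio, np.linalg.norm(Spow(alpha) @ u))
print("duality pairing checks passed")
```

In infinite dimensions the content of the lemma below is precisely that this common value survives the completion procedure, where $D_{S,\alpha}$ and $D_{S,-\alpha}$ are genuinely different spaces.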
The functional calculus of $S$ extends to $E_\infty$ by $\langle f(S)w, v\rangle= \langle w, f^*(S)v\rangle$ whenever {$f\in \mathcal{L}^\infty_{\mathrm{loc}}((0,\infty))$ and $f(t)=O(t^a)$ at $0$ and $f(t)=O(t^b)$ at $\infty$} as $f^*(S):E_{-\infty}\to E_{-\infty}$ is bounded. In particular, $S^\alpha: E_\infty \to E_\infty$ is an automorphism and we have $D_{S,\alpha}=\left\{u \in E_\infty : S^\alpha u \in H \right\}$. The restriction of $S^\alpha$ to $D_{S,\alpha}$ agrees with the unique extension of $S^{\alpha}: (D(S^{\alpha}), \left\| \cdot \right\|_{S,\alpha} ) \rightarrow \left ( H, \left\| \cdot \right\|_H \right )$ (see \eqref{Salpha}). The norm on $D_{S,\alpha}$ is $\left \| S^\alpha \cdot \right \|_H$ and we keep denoting it by $\left \| \cdot \right \|_{S,\alpha}$ (and it makes it a Hilbert space). We record the following lemma. \begin{lem}\label{density} Let $\alpha \in \mathbb{R}$. Then, there are dense inclusions \begin{equation*} E_{-\infty} \hookrightarrow D(S^\alpha) \hookrightarrow D_{S,\alpha} \hookrightarrow E_\infty. \end{equation*} Moreover, the family $\left ( D_{S,\alpha} \right )_{\alpha \in \mathbb{R}}$ is a complex (and real) interpolation family. \end{lem} \begin{proof} For the first statement, the first two inclusions are already known to be dense. We show it for the last one. Indeed, if $w\in E_\infty$, then let $\mathcal{F}_w$ be a finite subset of $\mathcal{F}$ such that we have $|\langle w, v\rangle| \le C_w \sup_{\gamma \in \mathcal{F}_w} \|S^\gamma v\|_H$ for all $v\in E_{-\infty}$. The approximation procedure using the duality form and using that $\Phi^\star=\Phi$ shows that $w_\varepsilon$ converges to $w$ in $E_\infty$. We claim that $ w_\varepsilon \in D_{S,\alpha}$ for all $\alpha$ (hence, $w_\varepsilon\in E_{-\infty}$). Indeed, if we pick $v\in E_{-\infty}$, then $S^{\alpha} v_\varepsilon \in E_{-\infty}$ and $S^{\alpha+\gamma} v_\varepsilon \in H$ with norm controlled by $C_{\varepsilon,\alpha+\gamma} \|v\|_H$. 
Thus, we have $$ |\langle S^\alpha w_\varepsilon, v\rangle| = |\langle w, S^{\alpha} v_\varepsilon \rangle| \le C_w \sup_{\gamma \in \mathcal{F}_w} \|S^{\alpha+\gamma} v_\varepsilon\|_H \le C_w \sup_{\gamma \in \mathcal{F}_w} C_{\varepsilon,\alpha+\gamma}\, \|v\|_H,$$ from which the claim follows. Finally, the fact that the family $\left ( D_{S,\alpha} \right )_{\alpha \in \mathbb{R}}$ is a complex (and real) interpolation family is proved in \cite{auscher1997holomorphic}. \end{proof} For any real $\alpha$, the sesquilinear form $(u,v) \mapsto \langle S^\alpha u, S^{-\alpha} v \rangle_H$ defines a canonical duality pairing between $D_{S,\alpha}$ and $D_{S,-\alpha}$ which is simply the inner product $\langle \cdot, \cdot \rangle_H$ extended from $E_{-\infty}\times E_{-\infty}$ to $D_{S,\alpha} \times D_{S,-\alpha}$. In fact, we have for all $(u,v) \in E_{-\infty} \times E_{-\infty}$, \begin{equation*} \sup_{w \in E_{-\infty}\setminus\left\{0 \right\}} \frac{\left|\langle u, w \rangle_H \right|}{\left\|w \right\|_{S,-\alpha}}=\left\|u \right\|_{S,\alpha} \ \ , \ \ \ \sup_{w \in E_{-\infty}\setminus\left\{0 \right\}} \frac{\left|\langle w, v \rangle_H \right|}{\left\|w \right\|_{S,\alpha}}=\left\|v \right\|_{S,-\alpha}. \end{equation*} For $(u,v) \in D_{S,\alpha} \times D_{S,-\alpha}$, we denote $\langle u, v \rangle_{H,\alpha}:= \langle S^\alpha u, S^{-\alpha} v \rangle_H$. It also coincides with the sesquilinear duality $\langle u, v \rangle$ on $E_\infty \times E_{-\infty}$ when $u\in D_{S,\alpha}$ and $ v\in E_{-\infty}$. We have the following lemma. \begin{lem}\label{Interpolation Jalpha bien définie} Let $\alpha, \beta \in \mathbb{R}$. If $u \in D_{S,\alpha}\cap D_{S,\beta}$ and $v \in D_{S,-\alpha} \cap D_{S,-\beta}$, then \begin{equation*} \langle u, v \rangle_{H,\alpha}= \langle u, v \rangle_{H,\beta}.
\end{equation*} \end{lem} \begin{proof} The approximations $u_\varepsilon$ and $v_\varepsilon$ belong to $E_{-\infty}$ so $\langle u_\varepsilon, v_\varepsilon \rangle_{H,\alpha}=\langle u_\varepsilon, v_\varepsilon \rangle_H= \langle u_\varepsilon, v_\varepsilon \rangle_{H,\beta}$ for all $\varepsilon>0$. The result follows when $\varepsilon$ tends to $0$ as $u_\varepsilon$ converges to $u$ in both spaces $D_{S,\alpha}$ and $D_{S,\beta}$ and $v_\varepsilon$ converges to $v$ in both spaces $D_{S,-\alpha}$ and $D_{S,-\beta}$. \end{proof} \subsection{Spaces of test functions and distributions}\label{S2S1} For $I$ an open interval of $\mathbb{R}$, we denote by $\mathcal{D}(I;E_{-\infty})$ the space of $E_{-\infty}$-valued $C^\infty$ functions on $I$ with compact support. The space $\mathcal{D}(I;E_{-\infty})$ is endowed with the usual inductive limit topology and contains $\mathrm{span}(\mathcal{D}(I)\otimes E_{-\infty})$ as a dense subspace. We refer to \cite{hytonen2016analysis} for Banach-valued $L^p(I;B)$ spaces. The density lemma below explains why it is relevant to take $\mathcal{D}(I;E_{-\infty})$ as the space of test functions. \begin{lem}\label{Lemma density} $\mathcal{D}(I;E_{-\infty})$ is dense in $L^p(I; D_{S,\alpha})$ for all $\alpha \in \mathbb{R}$ and $p \in [1,\infty)$. \end{lem} \begin{proof} It is enough to consider the case $\alpha=0$, that is to prove the density of $\mathcal{D}(I;E_{-\infty})$ in $L^p(I, H)$ since $S^{-\alpha} : E_{-\infty} \rightarrow E_{-\infty}$ is an isomorphism and therefore $S^{-\alpha}: \mathcal{D}(I;E_{-\infty}) \rightarrow \mathcal{D}(I;E_{-\infty})$, where we set $(S^\alpha f)(t)=S^\alpha(f(t))$ for all $t\in I$ and $f\in \mathcal{D}(I;E_{-\infty})$. It is enough also to prove that $\mathcal{D}(I;E_{-\infty})$ is dense in $\mathcal{D}(I;H)$ for the $L^p$ norm since the latter is dense in $L^p(I; H)$. 
To do so, we fix $f \in \mathcal{D}(I;H)$ and regularize it by setting for all $\varepsilon>0$, and $t\in I$ \begin{align*} f_\varepsilon (t):= \int_{\varepsilon}^{1/\varepsilon} \Phi (aS)f(t)\ \frac{\mathrm d a}{a} = \left ( \int_{\varepsilon}^{1/\varepsilon} \Phi (aS)\ \frac{\mathrm d a}{a} \right ) f(t). \end{align*} It is obvious that $f_\varepsilon \in \mathcal{D}( I ; E_{-\infty})$ and by the Calder\'on reproducing formula $f_\varepsilon(t) \underset{\varepsilon \to 0}{\rightarrow} f(t) $ pointwise in $H$. The uniform boundedness in $H$ of the approximate Calder\'on operators yields the domination needed to apply the dominated convergence theorem. \end{proof} We can now define a space of distributions in which $S$ plays the role of a differential operator. Specifically, we denote by $\mathcal{D}'( I ; E_\infty)$ the space of all bounded anti-linear maps $u : \mathcal{D}( I ; E_{-\infty})\to \mathbb{C}$. Bounded means that for any compact set $K \subset I $, there exist two constants $C_K >0$ and $m_K \in \mathbb{N}$, and a finite set $\mathcal{A}_K \subset \mathbb{Z}$ such that \begin{align*} \forall \varphi \in \mathcal{D}( I , E_{-\infty}), \ \mathrm{supp}(\varphi) \subset K \Rightarrow \left|\llangle u ,\varphi \rrangle_{\mathcal{D}',\mathcal{D}} \right|\leq C_K \sup_{\left|j \right|\leq m_K,\, \alpha \in \mathcal{A}_K} \sup_{t\in K} \|\partial_t^j S^\alpha \varphi(t) \|_H . \end{align*} For convenience, we use the notation with partial derivative in $t$ for the derivative in $t$, with future applications to concrete PDEs in mind. We also use a double bracket notation for dualities between vector-valued functions and distributions.
For $\alpha \in \mathbb{R}$, we can embed $L^1_{\mathrm{loc}}(I; D_{S,\alpha})$ in $\mathcal{D}'\left ( I ; E_\infty \right )$ through the classical identification: \begin{align*} J_\alpha \colon L^1_{\mathrm{loc}}(I; D_{S,\alpha}) & \rightarrow \mathcal{D}'\left ( I ; E_\infty \right )\\ f&\mapsto J_\alpha f : \varphi \mapsto \int_{I} \langle f(t) ,\varphi(t) \rangle_{H,\alpha} \ \mathrm{d} t = \int_{I } \langle S^\alpha f(t) ,S^{-\alpha }\varphi(t) \rangle_H \ \mathrm{d} t. \end{align*} {This is well defined by the following lemma. \begin{lem}\label{lem: Jalpha bien définie} For all $\alpha, \beta \in \mathbb{R}$, $J_\alpha = J_\beta $ on $L^1_{\mathrm{loc}}(I; D_{S,\alpha}) \cap L^1_{\mathrm{loc}}(I; D_{S,\beta})$. \end{lem} \begin{proof} Straightforward corollary of Lemma \ref{Interpolation Jalpha bien définie}. \end{proof} Finally, the identification is achieved thanks to the following lemma.} \begin{lem} $J_\alpha$ is injective for all $\alpha \in \mathbb{R}$. \end{lem} \begin{proof} Testing with $\varphi = \theta \otimes a$, with $\theta\in \mathcal{D}(I)$ real-valued and $a\in E_{-\infty}$, we easily conclude that $J_\alpha(f)=0$ implies $\langle S^\alpha f, S^{-\alpha} a\rangle_H=0$ a.e. on $I$. This implies that $S^\alpha f=0$ in $H$ a.e. on $I$, hence $f=0$ in $D_{S,\alpha}$ a.e. on $I$. \end{proof} Consequently, the space $L^p(I; D_{S,\alpha})$ can be identified with a subspace of $\mathcal{D}'( I; E_\infty)$. We would like to apply powers of $S$ to a distribution just as we apply them to functions valued in $E_{-\infty}$. The definition below covers this. \begin{defn} For $\alpha \in \mathbb{R}$ and $u \in \mathcal{D}'( I ; E_\infty)$, we define the distribution $ S^\alpha u $ by setting \begin{align*} \llangle S^\alpha u , \varphi \rrangle_{\mathcal{D}',\mathcal{D}} := \llangle u , S^\alpha\varphi \rrangle_{\mathcal{D}',\mathcal{D}}, \ \forall \varphi \in \mathcal{D}(I; E_{-\infty}).
\end{align*} \end{defn} \begin{rem} For $u \in \mathcal{D}'( I ; E_\infty)$, $S^\alpha u \in L^p(I; H)$ is equivalent to $ u \in L^p(I; D_{S,\alpha})$. Furthermore, powers of $S$ commute with derivatives in $t$ : for all $ k \in \mathbb{N}$ and $ \alpha \in \mathbb{R}$, $\partial_t^k S^\alpha = S^\alpha \partial_t^k $. \end{rem} When $I=\mathbb{R}$, we can use the space of tempered distributions adapted to $S$. Let us start by the Schwartz class $\mathcal{S}(\mathbb{R};E_{-\infty})$ defined by \begin{align*} \mathcal{S}(\mathbb{R};E_{-\infty}):= \left \{ \varphi \in C^\infty(\mathbb{R};E_{-\infty}) \ : \ \forall k, \ell \in \mathbb{N}, \ t^k \partial_t^\ell \varphi(t) \underset{\left | t \right |\to \infty}{\rightarrow}0 \ \mathrm{in} \ E_{-\infty} \right \}, \end{align*} which is a Fréchet space for a suitable countable family of norms. Moreover, $\mathcal{D}(\mathbb{R}; E_{-\infty})$ is dense in $\mathcal{S}(\mathbb{R};E_{-\infty})$ by the same argument as for the usual distributions. We denote by $\mathcal{S'}(\mathbb{R};E_\infty)$ the topological dual space of $\mathcal{S}(\mathbb{R};E_{-\infty})$. It is a subspace of $\mathcal{D}'(\mathbb{R}; E_{\infty})$ containing $L^p(\mathbb{R}; D_{S,\alpha})$ for all $\alpha \in \mathbb{R}$ and $p \in [1,\infty]$. For the proofs of this and the theorems below, we refer to \cite{zuily2002elements} for the classical distributions and the same proofs work here. It can be proven that $\mathcal{S}(\mathbb{R};E_{-\infty})$ is dense in $\mathcal{S'}(\mathbb{R};E_\infty)$, and more generally, that $\mathcal{D}(I;E_{-\infty})$ is dense in $\mathcal{D'}(I;E_\infty)$ for any open set $I \subset \mathbb{R}$. However, this is not important for the discussion that follows, so we leave the verification to the interested reader. As in the classical case, we will first define $\mathcal{F}$ on $L^1(\mathbb{R};H)$, then on $\mathcal{S}(\mathbb{R};E_{-\infty})$ and finally on $\mathcal{S'}(\mathbb{R};E_\infty)$ by duality. 
\begin{defn} The Fourier transform is defined on $L^1(\mathbb{R};H)$ by setting for all $f \in L^1(\mathbb{R};H)$ and $\tau \in \mathbb{R}$ \begin{align*} \mathcal{F}(f)(\tau):=\hat{f}(\tau):= \int_{\mathbb{R}} e^{-\textit{i}\tau t} f(t) \ \mathrm{d}t. \end{align*} We define $\overline{\mathcal{F}}$ by changing $-i\tau t $ to $i\tau t$ in the integral. \end{defn} The Fourier transform on $\mathcal{S}(\mathbb{R};E_{-\infty})$ enjoys many properties as we recall below. \begin{prop} \label{TFS} The Fourier transform $\mathcal{F}$ enjoys the following properties: \begin{enumerate} \item $\mathcal{F}: \mathcal{S}(\mathbb{R};E_{-\infty}) \rightarrow \mathcal{S}(\mathbb{R};E_{-\infty})$ is an automorphism satisfying for all $\varphi \in \mathcal{S}(\mathbb{R};E_{-\infty})$, $k \in \mathbb{N}$ and $\alpha \in \mathbb{R}$ $$\mathcal{F}(S^\alpha \varphi )=S^\alpha \mathcal{F}(\varphi ), \ \partial_\tau^k \mathcal{F}(\varphi )= \mathcal{F}((-\textit{i}t)^k\varphi), \ \mathcal{F}(\partial_t^k\varphi )= (\textit{i}\tau )^k \mathcal{F}(\varphi ).$$ \item For all $\alpha \in \mathbb{R}$, $\mathcal{F}$ extends to an isomorphism on $ L^2(\mathbb{R};D_{S,\alpha})$ which satisfies a Plancherel identity. \end{enumerate} \end{prop} We can now transport the Fourier transform $\mathcal{F}$ to $\mathcal{S'}(\mathbb{R};E_\infty)$ by sesquilinear duality. \begin{defn} We define the Fourier transform $\mathcal{F}$ on $\mathcal{S'}(\mathbb{R};E_\infty)$ by setting \begin{equation*} \llangle \mathcal{F}u,\varphi \rrangle_{\mathcal{S'},\mathcal{S}}:= \llangle \hat{u},\varphi \rrangle_{\mathcal{S'},\mathcal{S}} := \llangle u,\overline{\mathcal{F}}(\varphi) \rrangle_{\mathcal{S'},\mathcal{S}},\ u \in \mathcal{S'}(\mathbb{R};E_\infty), \ \varphi \in \mathcal{S}(\mathbb{R};E_{-\infty}). \end{equation*} \end{defn} From Proposition \ref{TFS}, we deduce the following proposition regarding the Fourier transform on $\mathcal{S'}(\mathbb{R};E_\infty)$.
\begin{prop}\label{TFS'} $\mathcal{F}: \mathcal{S'}(\mathbb{R};E_\infty) \rightarrow \mathcal{S'}(\mathbb{R};E_\infty)$ is an automorphism which satisfies property (1) of Proposition \ref{TFS}, and its restriction to $ L^2(\mathbb{R};D_{S,\alpha})$ agrees with the operator in (2) of that proposition. \end{prop} For $\alpha \in \mathbb{R}$, we denote by $D_t^\alpha$ the time-derivative of order $\alpha$. More precisely, if $u \in \mathcal{S'}(\mathbb{R};E_\infty)$ is such that $\left | \tau \right |^{\alpha} \mathcal{F}u \in \mathcal{S'}(\mathbb{R};E_\infty)$, we set \begin{equation*} D_t^\alpha u := \mathcal{F}^{-1} \left ( \left | \tau \right |^{\alpha} \mathcal{F}u \right ) . \end{equation*} \section{Abstract heat equations }\label{Section 3} In this section, we study the well-posedness of the abstract heat equation $\partial_t u + S^2u =f$, where the role of the Laplacian is played by the square of the self-adjoint operator $S$. These well-posedness results will imply embeddings and energy inequalities in the spirit of Lions that will be described in the next section. The abstract heat operator in $\mathbb{R}$ is $\partial_t +S^2$. The backward operator $-\partial_t +S^2$ corresponds to reversing time; the results are exactly the same and are often proved and used simultaneously. \subsection{Solving the abstract heat equation using the Fourier method} Working on the real line makes the Fourier transform in time available; it is a key tool to obtain homogeneous estimates in a simple way. \subsubsection{Uniqueness in homogeneous energy space} We begin with a uniqueness result which is key to our discussion. \begin{prop}[Uniqueness in homogeneous energy space]\label{Unicité} Let $u \in \mathcal{D}'(\mathbb{R}; E_\infty)$ be a solution of $\partial_t u + S^2 u =0$ in $\mathcal{D}'(\mathbb{R};E_\infty)$. If $u \in L^2(\mathbb{R};D_{S,\alpha})$ for some $\alpha \in \mathbb{R}$, then $u=0$.
\end{prop} \begin{proof} As $S^\alpha$ is an isomorphism on $E_\infty$ which commutes with time derivatives, $v=S^\alpha u$ satisfies the same equation, hence we may assume $\alpha=0$ and $u\in L^2(\mathbb{R};H)$. As this is a subset of $ \mathcal{S}'(\mathbb{R};E_\infty)$, by applying the Fourier transform to this equation, we have for all $\varphi \in \mathcal{S}(\mathbb{R};E_{-\infty})$ \begin{equation}\label{v=0} \int_\mathbb{R} \langle \hat{u}(\tau ), (- i \tau+S^2 ) \varphi(\tau) \rangle_H \ \mathrm{d}\tau =0. \end{equation} Take a sequence $(\varphi_k)_{k \in \mathbb{N}} \in \mathcal{D}(\mathbb{R};E_{-\infty})^\mathbb{N}$ such that $\varphi_k \rightarrow \hat{u}$ in $L^2(\mathbb{R};H)$ and $0 \notin \mathrm{supp}(\varphi_k)$, for all $k \in \mathbb{N}$. Taking $\tau \mapsto (-{i}\tau+S^2)^{-1}\varphi_k(\tau)$ as a test function in \eqref{v=0} and letting $k \to +\infty$, we have \begin{equation*} \int_\mathbb{R}\left \| \hat{u}(\tau) \right \|_H^2 \ \mathrm{d}\tau =0 . \end{equation*} By Plancherel, we then have $u=0$. \end{proof} \begin{cor}[Invertibility on abstract Schwartz functions and tempered distributions]\label{cor:iso} The operator $\partial_t + S^2$ is an isomorphism on $\mathcal{S}(\mathbb{R};E_{-\infty})$ and on $\mathcal{S}'(\mathbb{R};E_{\infty})$. \end{cor} \begin{proof} We begin with the result on $\mathcal{S}(\mathbb{R};E_{-\infty})$. The boundedness is clear. The injectivity follows from the above proposition. The surjectivity is as follows. By Fourier transform, it suffices to show the surjectivity for $i\tau+S^2$. If $\hat f\in \mathcal{S}(\mathbb{R};E_{-\infty}) $, then $S^{-2}\hat f \in L^2(\mathbb{R};H)$ and by the uniform boundedness of $(i\tau+S^2)^{-1} S^2$, $g(\tau)=(i\tau+S^2)^{-1} S^2 (S^{-2}\hat f(\tau)) \in L^2(\mathbb{R};H)$ with $i\tau g(\tau)+ S^2 g (\tau)= \hat f(\tau)$. Shifting with powers of $S$, we have $g\in L^2(\mathbb{R}; E_{-\infty})$.
Setting $\hat u=g$, we see that $\partial_t u=f-S^2u$, so $\partial_t u\in L^2(\mathbb{R}; E_{-\infty})$ and by iteration, we have $u\in C^\infty(\mathbb{R}; E_{-\infty})$. The decay is easily checked following the argument and using $\tau$-derivatives of the resolvent $(i\tau+S^2)^{-1}$. The same applies to the backward operator $-\partial_t+S^2$. Hence, by duality, we obtain the result on $\mathcal{S}'(\mathbb{R};E_{\infty})$. \end{proof} \begin{rem} In the sequel, we shall focus on $\alpha=1$ in Proposition \ref{Unicité} to make $H$ the pivotal space, but clearly, one can shift to this case by applying powers of $S$. \end{rem} \subsubsection{Solution and source spaces} We begin by recalling the following result of Lions for the sake of completeness, but we shall not use it, as we prove a stronger one. \begin{prop}[Solving the abstract heat equation à la Lions]\label{lem: Lions} If $f\in L^2(\mathbb{R}; D_{S,-1})$, then there exists $u\in L^2(\mathbb{R}; D_{S,1})$ such that $\partial_t u + S^2 u =f$ in $\mathcal{D}'(\mathbb{R};E_\infty)$. \end{prop} \begin{proof} It is a straightforward application of the Lions representation theorem \cite[Théorème 1.1]{lions2013equations} in the Hilbert space $L^2(\mathbb{R}; D_{S,1})$. \end{proof} As announced, we now argue keeping in mind that $H$, or rather $L^2(\mathbb{R}; H)$, is the pivotal Hilbert space. Define \begin{align*} V_1&:= L^2(\mathbb{R};D_{S,1}),\\ V_{-1}&:=\left\{ u \in L^2(\mathbb{R};D_{S,1}) \ : \ \partial_t u \in L^2(\mathbb{R};D_{S,-1}) \right\}. \end{align*} $V_{1}$ is the uniqueness space and $V_{-1}$ is the space to which the solution belongs when the source is taken in $L^2(\mathbb{R}; D_{S,-1})$ according to Lions' result. However, for the heat equation, Fourier methods are particularly handy for proving this and also allow for more general source spaces. We introduce a hierarchy of intermediate solution and source spaces.
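Before introducing them, we illustrate the Fourier method in the scalar toy model $H=\mathbb{C}$, $S=\lambda>0$ (a numerical sketch of ours, discretized on a large periodic window): the solution is $\hat u(\tau)=(i\tau+\lambda^2)^{-1}\hat f(\tau)$, and the multiplier bound $\sup_\tau |\lambda^2(i\tau+\lambda^2)^{-1}|\le 1$ yields the homogeneous energy estimate $\|Su\|_{L^2}\le \|S^{-1}f\|_{L^2}$.

```python
import numpy as np

# Scalar toy model of the Fourier-method solution of d_t u + S^2 u = f:
# with S = lam > 0 acting on H = C, the Fourier transform in t gives
# u^(tau) = (i tau + lam^2)^{-1} f^(tau).  We check the equation spectrally
# and the energy bound ||S u||_2 <= ||S^{-1} f||_2.  (Sketch only.)
lam = 1.7
n, L = 4096, 200.0
t = np.linspace(-L / 2, L / 2, n, endpoint=False)
tau = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # discrete dual variable

f = np.exp(-t**2)                               # a Schwartz-class source
fhat = np.fft.fft(f)
uhat = fhat / (1j * tau + lam**2)               # multiplier (i tau + lam^2)^{-1}
u = np.fft.ifft(uhat)

# Residual of d_t u + lam^2 u - f, with d_t computed spectrally
res = np.fft.ifft(1j * tau * uhat) + lam**2 * u - f
assert np.max(np.abs(res)) < 1e-8

# Homogeneous energy estimate ||S u||_2 <= ||S^{-1} f||_2, since
# |lam / (i tau + lam^2)| <= 1 / lam for all tau
h = L / n
norm = lambda g: np.sqrt(h * np.sum(np.abs(g) ** 2))
assert norm(lam * u) <= norm(f / lam) + 1e-12
print("heat equation Fourier checks passed")
```

The same multiplier bounds, applied with $S$ in place of $\lambda$, are what drive the estimates for the spaces $V_\alpha$ and $W_\alpha$ defined next.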
For $ \alpha \in [-1,1]$, define the following respective solution and source spaces \begin{align*} V_\alpha &:=\left \{ u \in L^2(\mathbb{R};D_{S,1}) : D_t^{\frac{1-\alpha}{2}}u \in L^2(\mathbb{R};D_{S,\alpha})\right \},\\ W_\alpha &:= \left \{ D_t^{\frac{1+\alpha}{2}}g \ : \ g \in L^2(\mathbb{R};D_{S,\alpha}) \right \}, \end{align*} endowed with the norms \begin{align*} \left \| u \right \|_{V_\alpha}&:=\left ( \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})}^2+ \| D_t^{\frac{1-\alpha}{2}}u \|_{L^2(\mathbb{R}; D_{S,\alpha})}^2\right )^{1/2}, \\ \| f\|_{W_\alpha}&:=\| D_t^{-\frac{1+\alpha}{2}}f\|_{L^2(\mathbb{R};D_{S,\alpha})}. \end{align*} We can think of $V_\alpha = L^2(\mathbb{R};D_{S,1}) \cap \dot{H}^{\frac{1-\alpha}{2}}(\mathbb{R};D_{S,\alpha})$ using homogeneous Sobolev spaces on the real line, but this presentation avoids having to define these spaces. In the same manner, we think of $W_\alpha=\dot{H}^{-\frac{1+\alpha}{2}}(\mathbb{R};D_{S,\alpha})$. Remark that $W_{-1}=L^2(\mathbb{R}; D_{S,-1}).$ The following lemma summarizes some properties of the spaces $V_\alpha$ and $W_\alpha$ and their relation. \begin{lem}[Properties of intermediate spaces]\label{Spaces V_alpha} Fix $-1 \leq \alpha \leq \alpha' \leq 1$. We have the following assertions. \begin{enumerate} \item $V_\alpha $ is a well-defined subspace of $\mathcal{S'}(\mathbb{R};E_\infty)$, $\left ( V_\alpha, \left \| \cdot \right \|_{V_\alpha} \right )$ is a Hilbert space, and we have \begin{equation*} V_\alpha = \left \{ u \in \mathcal{S'}(\mathbb{R};E_\infty) : S^\alpha (S+\left | \tau \right |^{1/2})^{1-\alpha} \hat{u} \in L^2(\mathbb{R};H)\right \}, \ \left \| u \right \|_{V_\alpha} \sim \| S^\alpha (S+\left | \tau \right |^{1/2})^{1-\alpha}\hat{u} \|_{L^2(\mathbb{R};H)}. \end{equation*} \item We have the following chain of continuous and dense inclusions: \begin{align*} \mathcal{S}(\mathbb{R};E_{-\infty}) \hookrightarrow V_\alpha \hookrightarrow V_{\alpha'} \hookrightarrow \mathcal{S'}(\mathbb{R};E_{\infty}).
\end{align*} \item { $W_\alpha$ is a subspace of $\mathcal{S'}(\mathbb{R}; E_\infty)$, and $\left( W_\alpha, \left\| \cdot \right\|_{W_\alpha} \right)$ is a Hilbert space. We have a dense inclusion $\mathcal{S}_0(\mathbb{R}; E_{-\infty}) \hookrightarrow W_\alpha$, where $\mathcal{S}_0(\mathbb{R}; E_{-\infty}) := \{ f \in \mathcal{S}(\mathbb{R}; E_{-\infty}) \mid \hat{f}(0) = 0 \}. $} \item Let $V_\alpha^\star$ denote the anti-dual space of $V_\alpha$ with respect to $\langle \cdot , \cdot \rangle_{L^2(\mathbb{R};H)}$. It is a subspace of $\mathcal{S}'(\mathbb{R};E_\infty)$ and $V_\alpha^\star = L^2(\mathbb{R};D_{S,-1}) + W_{-\alpha}$ with the following estimate: \begin{align*} \left \| \omega \right \|_{V_\alpha^\star} \sim \inf \left\{ \left \| f \right \|_{L^2(\mathbb{R};H)}+\left \| g \right \|_{L^2(\mathbb{R};D_{S,-\alpha})} \ : \ {\omega=Sf+D_t^{\frac{1-\alpha}{2}}g}\right\}. \end{align*} \end{enumerate} \end{lem} \begin{proof} Let us first prove (1): $V_\alpha $ is a well-defined subspace of $\mathcal{S'}(\mathbb{R};E_\infty)$. In fact, if $u \in L^2(\mathbb{R};D_{S,1})$, then, by Proposition \ref{TFS'}, it follows that $ \left | \tau \right |^{\frac{1-\alpha}{2}} S \hat{u} \in L^1_{\mathrm{loc}}(\mathbb{R};H)$. Furthermore, for $\varphi \in \mathcal{S}(\mathbb{R};E_{-\infty})$, we have, using the Cauchy--Schwarz inequality, \begin{align*} \int_{\mathbb{R}} |\langle \left | \tau \right |^{\frac{1-\alpha}{2}} S\hat{u}, S^{-1 }\varphi \rangle_H |\, \mathrm d \tau \leq \left \| S\hat{u} \right \|_{L^2(\mathbb{R};H)} \||\tau|^{\frac{(1-\alpha)}{2}} S^{-1 }\varphi \|_{L^2(\mathbb{R};H)} , \end{align*} and one can define $\left | \tau \right |^{{\frac{1-\alpha}{2}}}\hat u \in \mathcal{S'}(\mathbb{R};E_\infty)$ by \begin{align*} \llangle \left | \tau \right |^{\frac{1-\alpha}{2}}\hat u, \varphi \rrangle_{\mathcal{S'},\mathcal{S}} = \int_{\mathbb{R}} \langle \left | \tau \right |^{\frac{1-\alpha}{2}} S\hat{u}, S^{-1 }\varphi \rangle_H\, \mathrm d \tau .
\end{align*} Finally, $ S^\alpha\left | \tau \right |^{{\frac{1-\alpha}{2}}}\hat u$ exists in $\mathcal{S'}(\mathbb{R};E_\infty)$ and agrees with $ \left | \tau \right |^{{\frac{1-\alpha}{2}}}S^\alpha\hat u$. The Hilbert space property (in particular, the completeness) is easy. Next, the proof of the set equality and the norm equivalence in (1) is easy using the boundedness of the operators $S^{1-\alpha}(S+\left | \tau \right |^{1/2})^{-(1-\alpha)}$ and $|\tau|^{\frac{1-\alpha }2}(S+\left | \tau \right |^{1/2})^{-(1-\alpha)}$ on $L^2(\mathbb{R};H)$. For point (2), the inclusion of $\mathcal{S}(\mathbb{R};E_{-\infty})$ in $V_\alpha$ follows easily from (1). To check $V_\alpha \hookrightarrow V_{\alpha'}$ if $\alpha<\alpha'$, write \begin{equation*} S^{\alpha'} (S+\left | \tau \right |^{1/2})^{1-\alpha'}=S^{\alpha'-\alpha}(S+\left | \tau \right|^{1/2})^{\alpha-\alpha'} S^\alpha (S+\left | \tau \right |^{1/2})^{1-\alpha}, \end{equation*} and use (1) together with the boundedness of $S^{\alpha'-\alpha}(S+\left | \tau \right|^{1/2})^{\alpha-\alpha'}$ on $ L^2(\mathbb{R};H)$. The density of $\mathcal{S}(\mathbb{R};E_{-\infty})$ in $V_\alpha$ can be deduced using Lemma \ref{Lemma density}, and that of $V_\alpha$ in $V_{\alpha'}$ follows. Finally, although we do not need this later on, the density of $V_\alpha$ in $\mathcal{S'}(\mathbb{R};E_{\infty})$ holds, as $\mathcal{S}(\mathbb{R};E_{-\infty})$ is dense in $\mathcal{S'}(\mathbb{R};E_\infty)$. For point (3), since $-1\le \alpha\le 1$, the inclusions are clear together with the Hilbert space property. The density is as follows. Let $f\in W_\alpha$. By definition, there exists $g \in L^2(\mathbb{R};D_{S,\alpha})$ such that $\hat{f}=\left| \tau \right|^{\frac{1+\alpha}{2}} \hat{g}$ in $\mathcal{S}'(\mathbb{R}; E_{\infty})$. Note that the right-hand side also belongs to $L^1_{\mathrm{loc}}(\mathbb{R}; D_{S,\alpha})$.
Take a sequence $(\varphi_k)_{k \in \mathbb{N}} \in \mathcal{S}(\mathbb{R}; E_{-\infty})^{\mathbb{N}}$ such that $\hat{\varphi_k } \to \hat{g}$ in $L^2(\mathbb{R};D_{S,\alpha})$ and $0 \notin \mathrm{supp}(\hat{\varphi_k}).$ Set $f_k:= \mathcal{F}^{-1}(\left| \tau \right|^{(1+\alpha) /2}\hat{\varphi_k})$; then $f_k \in \mathcal{S}_0(\mathbb{R}; E_{-\infty})$ and $f_k {\rightarrow} f$ in $W_\alpha$. The proof of (4) is standard, using Fourier transform and that powers of $S$ commute with multiplication by powers of $|\tau|$, and the computations are carried out using the duality between $\mathcal{S'}(\mathbb{R};E_\infty)$ and $\mathcal{S}(\mathbb{R};E_{-\infty})$. \end{proof} \subsubsection{The main Theorem} Now, we come to the main result of this subsection. \begin{thm}[Invertibility on intermediate spaces]\label{ISOMORPHISM} For all $\alpha\in [-1,1]$, the operator $ \partial_t + S^2 $, defined on $\mathcal{S}(\mathbb{R};E_{-\infty})$, extends to a bounded and invertible operator $ A_\alpha : V_\alpha \rightarrow V_{-\alpha}^\star$, which agrees with the restriction of $\partial_t + S^2$ acting on $\mathcal{S}'(\mathbb{R};E_\infty)$. \end{thm} \begin{proof} From Fourier transform, $u\in V_\alpha$ if and only if $S\hat u \in L^2(\mathbb{R}; H)$ and $|\tau|^{\frac{1-\alpha}{2}} S^\alpha \hat u \in L^2(\mathbb{R}; H)$. Hence, $f=\partial_t u + S^2u$ is easily seen to belong to $ L^2(\mathbb{R};D_{S,-1}) + W_{\alpha}=V_{-\alpha}^\star$. The density of $\mathcal{S}(\mathbb{R};E_{-\infty})$ in $V_\alpha$ yields the bounded extension $A_{\alpha}$, and we note that $A_{\alpha}=\partial_t + S^2$ on $V_\alpha$. To show the invertibility, it suffices to prove that the restriction of the inverse of $\partial_t + S^2$ on $\mathcal{S}'(\mathbb{R};E_\infty)$ to $V_{-\alpha}^\star$ is bounded into $V_\alpha$. Let $w\in V_{-\alpha}^\star$ and again, by Fourier transform, write $\hat w= S \hat f+ |\tau|^{\frac{1+\alpha}{2}}S^{-\alpha} \hat g$, with $f,g\in L^2(\mathbb{R};H)$.
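The uniform bounds invoked below all reduce, through the functional calculus for the self-adjoint operator $S$, to elementary scalar inequalities: for every $\lambda>0$ and $\tau\in\mathbb{R}$,
\begin{equation*}
\frac{\lambda^4}{\tau^2+\lambda^4}\le 1, \qquad \frac{\tau^2}{\tau^2+\lambda^4}\le 1, \qquad |\tau|^{\frac{1\pm\alpha}{2}}\,\lambda^{1\mp\alpha}=(\tau^2)^{\frac{1\pm\alpha}{4}}(\lambda^{4})^{\frac{1\mp\alpha}{4}}\le (\tau^2+\lambda^4)^{1/2},
\end{equation*}
the last one because the exponents on the right-hand side add up to $1/2$. Since $|(i\tau+\lambda^2)^{-1}|=(\tau^2+\lambda^4)^{-1/2}$, these give the boundedness on $H$, uniformly in $\tau$, of $S^2(i\tau+S^2)^{-1}$, $|\tau|(i\tau+S^2)^{-1}$, $|\tau|^{\frac{1+\alpha}{2}}S^{1-\alpha}(i\tau+S^2)^{-1}$ and $|\tau|^{\frac{1-\alpha}{2}}S^{1+\alpha}(i\tau+S^2)^{-1}$.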
Define $u\in \mathcal{S}'(\mathbb{R};E_\infty) $ by $\hat u(\tau)= \left ( i\tau+S^2 \right )^{-1} \hat{w}(\tau)$. That $ S\hat u \in L^2(\mathbb{R};H) $ follows from the uniform boundedness (with respect to $\tau$) of $S^2(i\tau + S^2)^{-1}$ and $|\tau|^{\frac{1+\alpha}{2}}S^{1-\alpha}(i\tau + S^2)^{-1}$ and that $|\tau|^{\frac{1-\alpha}{2}}S^{\alpha} \hat u \in L^2(\mathbb{R};H) $ from that of $|\tau|(i\tau + S^2)^{-1}$ and $|\tau|^{\frac{1-\alpha}{2}}S^{1+\alpha}(i\tau + S^2)^{-1}$. Hence $u\in V_\alpha$ and the estimate follows by taking the infimum over all choices of $f$ and $g$. \end{proof} \begin{rem} When $\alpha=-1$, we recover Proposition \ref{lem: Lions} (Lions'~result): existence in $V_{-1}$, and uniqueness follows from Proposition \ref{Unicité}. Note however that uniqueness already holds when $u\in V_1$, which is the largest possible space in that scale. \end{rem} \begin{rem} \label{rem:counterexample}The Fourier method is rather elementary once the setup has been designed, but does not furnish time continuity: we mostly used that $\partial_t$ and $S$ commute. Something specific to time derivatives is the classical embedding theorem of Lions \cite{lions1957problemes} mentioned earlier. This embedding no longer holds when $V=D(S)$ and $V^\star=D(S^{-1})$ are replaced with their completions $D_{S,1}$ and $D_{S,-1}$ if $I$ is bounded. Indeed, as the embedding $D_{S,1} \hookrightarrow H $ fails, pick $v \in D_{S,1}\setminus H$ and define the function $u(t)=v$, $0\leq t \leq 1$. We have $u \in L^2((0,1);D_{S,1})$ and $\partial_t u =0$ but $u \notin C([0,1];H)$. However, this counterexample is ruled out if $I=\mathbb{R}$, or more generally if $I$ is unbounded, and in fact the continuity holds. This can be obtained when $\alpha=-1$ by approximation from Lions' result, but we present a different approach, which has the advantage of allowing any $\alpha<0$, from which regularity follows.
Note, however, that when $\alpha=0$, continuity cannot hold for all sources in $V_0^\star$ by the isomorphism property. Otherwise, any $u\in V_0$ would be continuous with values in $H$, which is not the case. \end{rem} \subsection{Solving the abstract heat equation using the Duhamel method} Since $-S^2$ generates a $C_0$ contraction semigroup on $H$, the Duhamel formula \begin{equation}\label{Duhamel} Tf(t):=\int_{-\infty}^{t} e^{-(t-s)S^2}f(s) \ \mathrm{d}s \end{equation} is a way of constructing solutions to $\partial_t u +S^2 u =f$ in $\mathcal{D}'(\mathbb{R}; E_\infty)$. Remark that the adjoint Duhamel formula \begin{equation}\label{backwardDuhamel} \tilde T\tilde f(s):=\int_{s}^{\infty} e^{-(t-s)S^2}\tilde f(t) \ \mathrm{d}t \end{equation} is a way of constructing solutions to the backward equation $-\partial_s \tilde u +S^2 \tilde u =\tilde f$ in $\mathcal{D}'(\mathbb{R}; E_\infty)$. All that we shall prove for the (forward) heat equation applies to the backward one. We leave the verification to the reader. For the moment, we assume $f$ to be a test function. \begin{lem}[A priori properties for the Duhamel solution] \label{lem:solution} If $f\in \mathcal{S}(\mathbb{R}; E_{-\infty}) $, then $Tf $ defined by \eqref{Duhamel} belongs to $ \mathcal{S}'(\mathbb{R}; E_{\infty})$ and is a solution of $\partial_t u +S^2 u =f$ in $\mathcal{S}'(\mathbb{R}; E_{\infty})$. \end{lem} \begin{proof} First, using the regularity and contractivity of the semigroup, we have \begin{equation*} \iint_{\mathbb{R}^2} 1_{t>s}|\langle e^{-(t-s)S^2}f(s), \varphi(t)\rangle| \ \ \mathrm{d} s \mathrm d t \le \|f\|_{L^1(\mathbb{R};H)} \|\varphi\|_{L^1(\mathbb{R};H)} \end{equation*} for any $f,\varphi \in \mathcal{S}(\mathbb{R};E_{-\infty})$. In particular, $u=Tf$ is defined for all $t$ by a Bochner integral and belongs to $ L^\infty(\mathbb{R}; H)$.
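Indeed, by contractivity of the semigroup, the integrand is dominated pointwise: for $t>s$,
\begin{equation*}
|\langle e^{-(t-s)S^2}f(s), \varphi(t)\rangle| \le \|e^{-(t-s)S^2}f(s)\|_H\,\|\varphi(t)\|_H \le \|f(s)\|_H\,\|\varphi(t)\|_H,
\end{equation*}
and integrating in $s$ and $t$ gives the displayed bound, while directly $\|u(t)\|_H\le \int_{-\infty}^{t}\|f(s)\|_H\,\mathrm{d}s\le \|f\|_{L^1(\mathbb{R};H)}$ for every $t$.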
Hence we may apply Fubini's theorem freely, exchanging integrals and inner products in the calculation below: \begin{align*} -\int_{\mathbb{R}} \langle u(t) , \partial_t\varphi(t)\rangle_H \ \mathrm{d} t & = - \int_{\mathbb{R}} \int_{-\infty}^{t} \langle f(s) , e^{-(t-s)S^2}\partial_t\varphi(t)\rangle_H \ \mathrm{d} s \mathrm d t \\ & = - \int_{\mathbb{R}} \int_{s}^{+\infty} \langle f(s) , e^{-(t-s)S^2}\partial_t\varphi(t)\rangle_H \ \mathrm{d} t \mathrm d s \\ & =- \int_{\mathbb{R}} \Big\langle f(s) , \int_{s}^{+\infty} e^{-(t-s)S^2}\partial_t\varphi(t) \ \mathrm{d} t \Big\rangle_H \ \mathrm d s \\ & = - \int_{\mathbb{R}} \Big\langle f(s) , -\varphi (s)+ \int_{s}^{+\infty} S^2e^{-(t-s)S^2} \varphi(t) \ \mathrm{d} t \Big\rangle_H \ \mathrm{d} s \\ & = \int_{\mathbb{R}} \langle f(s) , \varphi (s) \rangle_H \ \mathrm{d} s - \int_{\mathbb{R}} \int_{s}^{+\infty} \langle e^{-(t-s)S^2}f(s), S^2 \varphi(t) \rangle_H \ \mathrm{d} t \mathrm d s. \end{align*} Using Fubini once more, this shows that $ -\llangle u, \partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}= \llangle f, \varphi \rrangle_{\mathcal{S}',\mathcal{S}} - \llangle u, S^2\varphi \rrangle_{\mathcal{S}',\mathcal{S}},$ which means $\partial_t u +S^2 u =f$ in $\mathcal{S}'(\mathbb{R};E_\infty)$. \end{proof} We now gather a number of a priori estimates which are related to solving the heat equation within $L^2(\mathbb{R};D_{S,1})$. \begin{lem}[A priori estimates for the Duhamel operator] \label{lem:aprioriestimates} Let $ f \in \mathcal{S}(\mathbb{R}; E_{-\infty}) $, and define $ u = Tf $. For the inequalities involving $ \|f\|_{W_\alpha} $, we additionally assume that $ f \in \mathcal{S}_0(\mathbb{R}; E_{-\infty})$.
\begin{enumerate} \item $u\in C_0(\mathbb{R}; H)$ and one has the following uniform bounds \begin{align*}\sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H &\le \|f\|_{L^1(\mathbb{R};H)}\\ \sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H &\le \frac{1}{\sqrt{2}} \|f \|_{L^2(\mathbb{R}; D_{S,-1})}\\ \sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H &\le C(\alpha) \|f\|_{W_\alpha}, \ \alpha\in [-1,0). \end{align*} \item $u \in L^2(\mathbb{R};D_{S,1})$ and one has the following energy inequalities \begin{align*}\left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} &\le \frac{1}{\sqrt{2}} \|f\|_{L^1(\mathbb{R};H)}\\ \left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} &\le \|f \|_{L^2(\mathbb{R}; D_{S,-1})}\\ \left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} &\le C'(\alpha) \|f\|_{W_\alpha}, \ \alpha\in [-1,1]. \end{align*} \item $u \in V_\alpha$ for all $\alpha\in [-1,1]$ and one has the following bound \begin{align*}\|D_t^{\frac{1-\alpha}{2}}S^\alpha u\|_{L^2(\mathbb{R};H)} \le \|f\|_{W_\alpha}. \end{align*} \end{enumerate} \end{lem} \begin{proof} That $u$ belongs to $L^\infty(\mathbb{R};H)$ with $\| u\|_{L^\infty(\mathbb{R};H)} \le \|f\|_{L^1(\mathbb{R};H)}$ has already been observed above. Note that $\partial_t u= f-T(S^2f)\in L^\infty(\mathbb{R};H)$. Thus $u$ is Lipschitz, hence continuous. The limit $0$ at $-\infty$ is clear from the fact that $\|f(s)\|_H$ has rapid decay and the contraction property of the semigroup. As for the limit at $+\infty$, we write for fixed and large $A$ and $t>A$, $$ u(t)= e^{-(t-A)S^2}u(A)+ \int_A^t e^{-(t-s)S^2}f(s) \ \mathrm{d}s. $$ The first term tends to 0 in $H$ by properties of the semigroup and, for the second term, one again uses the contraction property and rapid decay of $\|f(s)\|_H$. We are left with proving the remaining estimates.
\ \paragraph{\textit{Step 1: $\left\| u(t)\right\|_H \leq \frac{1}{\sqrt{2}} \left\|f \right\|_{L^2(\mathbb{R}; D_{S,-1})}$ for all $t \in \mathbb{R}$}} Using the Cauchy--Schwarz inequality, we have for all $t \in \mathbb{R}$ and $a \in H$, \begin{equation*} \int_{-\infty}^{t} | \langle S^{-1}f(s) ,Se^{-(t-s)S^2} a \rangle_H | \ \mathrm d s \leq \frac{1}{\sqrt{2}}\left ( \int_{-\infty}^{t} \|S^{-1}f(s) \|^2_H \ \mathrm{d}s \right )^{1/2} \left\| a\right\|_H, \end{equation*} where we have used the quadratic equality \begin{equation*} \int_{0}^{\infty} \|Se^{-sS^2}a \|^2_H \mathrm d s =\frac{1}{2}\left\| a\right\|_H^2. \end{equation*} As \begin{equation*} \langle u(t) , a \rangle_H = \int_{-\infty}^{t} \langle S^{-1}f(s) , Se^{-(t-s)S^2}a \rangle_H \ \mathrm d s, \end{equation*} we obtain the desired bound for $\|u(t)\|_H$. \ \paragraph{\textit{Step 2: $\left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} \le \frac{1}{\sqrt{2}} \|f\|_{L^1(\mathbb{R};H)}$}} We observe that by Fubini's theorem, we have $\langle Su, \tilde f\rangle= \langle u, S\tilde f\rangle=\langle f, \tilde u\rangle$, where $\tilde u=\tilde T(S\tilde f)$. Thus \begin{equation*} |\langle Su, \tilde f\rangle| \le \left \| f \right \|_{L^1(\mathbb{R};H)} \| \tilde T(S\tilde f) \|_{L^\infty(\mathbb{R};H)} \le \frac{1}{\sqrt{2}} \left \| f \right \|_{L^1(\mathbb{R};H)} \| \tilde f \|_{L^2(\mathbb{R};H)} \end{equation*} using Step 1 for $\tilde T$. \ \paragraph{\textit{Step 3: $\left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} \le \|f\|_{L^2(\mathbb{R};D_{S,-1})}$}} We already know from Step 2 that $\left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})}$ is finite.
To obtain the desired bound, we use again Fubini's theorem several times and obtain \begin{align*} \left\|Su \right\|_{L^2(\mathbb{R};H)}^2& =\int_{\mathbb{R}} \langle Su(t) , Su(t) \rangle_H \ \mathrm{d} t \\ & = \int_{\mathbb{R}} \int_{-\infty}^{t}\int_{-\infty}^{t} \langle S^2 e^{-(t-s)S^2}S^{-1}f(s) , S^2 e^{-(t-s')S^2}S^{-1}f(s')\rangle_H \ \mathrm{d} s \mathrm{d} s' \mathrm{d} t \\ & = \int_{\mathbb{R}} \int_{\mathbb{R}}\int_{\max(s,s')}^{+\infty} \langle S^4 e^{-(2t-(s+s'))S^2}S^{-1}f(s) , S^{-1}f(s')\rangle_H \ \mathrm{d} t \mathrm{d} s \mathrm{d} s'\\ & = \frac{1}{2} \int_{\mathbb{R}} \int_{\mathbb{R}} \langle S^2 e^{-(2\max(s,s')-(s+s'))S^2}S^{-1}f(s) , S^{-1}f(s')\rangle_H \ \mathrm{d} s \mathrm{d} s'\\ &= \mathrm{Re}\int_{\mathbb{R}} \int_{s\leq s'} \langle S^2 e^{-(s'-s)S^2}S^{-1}f(s) , S^{-1}f(s')\rangle_H \ \mathrm{d} s \mathrm{d} s'\\ &= \mathrm{Re}\int_{\mathbb{R}} \langle Su(s') , S^{-1}f(s')\rangle_H \ \mathrm{d} s'. \end{align*} Using the Cauchy--Schwarz inequality, we deduce that \begin{align*} \left\|Su \right\|_{L^2(\mathbb{R};H)}^2 \leq \left\|Su \right\|_{L^2(\mathbb{R};H)} \left\|f \right\|_{L^2(\mathbb{R};D_{S,-1})}. \end{align*} Therefore \begin{align*} \left\|u \right\|_{L^2(\mathbb{R};D_{S,1})} \leq \left\|f \right\|_{L^2(\mathbb{R};D_{S,-1})}. \end{align*} \ \paragraph{\textit{Step 4: $\left\| u(t)\right\|_H \le C(\alpha) \|f\|_{W_\alpha}$, $ \alpha\in [-1,0)$}} For all $a\in E_{-\infty}$, we define \begin{align*} \forall t \in \mathbb{R};\ \ \varphi_a(t):= \mathbb{1}_{(-\infty,0]}(t) e^{tS^2}a. \end{align*} Remark that when $a\in H$, $\varphi_a\in L^2(\mathbb{R}; D_{S,1})$; applying this to $S^ka$ for all $k$ shows that $\varphi_a \in L^2(\mathbb{R}; E_{-\infty})$ when $a\in E_{-\infty}$. For $t \in \mathbb{R}$, \begin{align*} \langle u(t), a \rangle_H &= \int_{-\infty}^{0} \langle e ^{sS^2}f(s+t) , a \rangle_H \ \mathrm d s \\ &= \int_{\mathbb{R}} \langle f(s+t), \varphi_a (s) \rangle_H \ \mathrm d s \\ & = \langle \tau_{-t} g, D_t^{\frac{1+\alpha}{2}}S^{-\alpha}\varphi_a \rangle_{L^2(\mathbb{R};H),L^2(\mathbb{R};H)}.
\end{align*} In the calculation, we used $\hat f(0)=0$ (see Lemma \ref{Spaces V_alpha}, point (3)) and wrote $f=D_t^{\frac{1+\alpha}{2}}S^{-\alpha}g$ with $g\in L^2(\mathbb{R}; H)$, defined $\tau_tg(s)=g(s-t)$ and used that translations commute with $D_t$. If we show that \begin{align*}\label{Hinfini estimation} \| D_t^{\frac{1+\alpha}{2}} S^{-\alpha}\varphi_a \|_{L^2(\mathbb{R}; H)} = C(\alpha) \left \| a \right \|_H, \end{align*} then \begin{align*} \left | \langle u(t), a \rangle_H \right |\leq C(\alpha) \left \| g \right \|_{L^2(\mathbb{R}; H)}\left \| a \right \|_H, \end{align*} and we may conclude using the density of $E_{-\infty}$ in $H$. To see this, applying Fourier transform to $\varphi_a$, we get \begin{align*} \forall \tau \in \mathbb{R}; \ \ \hat{\varphi_a}(\tau)= (-i \tau+S^2)^{-1}a, \end{align*} so that \begin{equation*} \left | \tau \right |^{1+\alpha} \left \| S^{-\alpha}(-{i} \tau+S^2 )^{-1}a \right \|^2_H= |\tau|^{-1} \langle \psi(|\tau|^{-1/2}S) a, a \rangle_H \end{equation*} with $\psi(t)= t^{-2\alpha}(1+t^4)^{-1}$. Using simple computations and Calder\'on's identity $$\int_{-\infty} ^\infty \psi(|\tau|^{-1/2}S)a \ \frac{\mathrm{d}\tau}{|\tau|}= \int_0^\infty \frac{ t ^{-2\alpha }}{1+t^4} \frac{\mathrm d t}{ t }\ a,$$ we obtain \begin{align*} \int_{\mathbb{R}} \left | \tau \right |^{1+\alpha} \left \|S^{-\alpha} \hat{\varphi_a }(\tau) \right \|^2_{H} \ \mathrm{d} \tau = \int_0^\infty \frac{ t ^{-2\alpha }}{1+t^4} \frac{\mathrm d t}{ t }\ \left \| a \right \|^2_H, \end{align*} and conclude using Plancherel identity that $C(\alpha)^2= \frac{1}{2\pi}\int_0^\infty \frac{ t ^{-2\alpha }}{1+t^4} \frac{\mathrm d t}{ t }$. 
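Let us record where the restriction $\alpha\in[-1,0)$ enters: the integrand defining $C(\alpha)^2$ satisfies
\begin{equation*}
\frac{t^{-2\alpha}}{1+t^{4}}\,\frac{1}{t}\sim t^{-2\alpha-1} \ \ (t\to 0^+), \qquad \frac{t^{-2\alpha}}{1+t^{4}}\,\frac{1}{t}\sim t^{-2\alpha-5} \ \ (t\to \infty),
\end{equation*}
so the integral converges exactly when $-2<\alpha<0$; in particular, $C(\alpha)<\infty$ for all $\alpha\in[-1,0)$, as required in the statement.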
\ \paragraph{\textit{Step 5: $\left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})}\le C'(\alpha) \|f\|_{W_{\alpha}}$, $ \alpha\in [-1,1]$}} Since $f\in \mathcal{S}(\mathbb{R}; E_{-\infty})$, we know a priori that $u\in L^2(\mathbb{R}; D_{S,1})$ from Step 2 and $u$ agrees with the solution given via Fourier transform in Theorem \ref{ISOMORPHISM}. Hence, we can use Fourier transform to compute. We have \begin{align*} \hat{u}(\tau) = ({i}\tau +S^2)^{-1} \left | \tau \right |^{\frac{1+\alpha}{2} }S^{-\alpha}\hat{g}(\tau), \end{align*} where $f=D_t^{\frac{1+\alpha}{2}}S^{-\alpha}g$ with $g\in L^2(\mathbb{R}; H)$. Hence \begin{align*} \|S\hat u(\tau)\|^2_H &= \langle (\tau^2+S^4)^{-1} |\tau|^{1+\alpha}S^{2-2\alpha} \hat g(\tau), \hat g(\tau) \rangle_H \\ &= \|(|\tau|^{-1/2}S)^{1-\alpha}(1+(|\tau|^{-1/2}S)^4)^{-1/2}\hat g(\tau)\|_H^2 \\ & \le C'(\alpha)^2 \|\hat g(\tau)\|_H ^2 \end{align*} with $C'(\alpha)= \sup_{t>0} t^{1-\alpha}(1+t^4)^{-1/2}<\infty$ when $-1\le \alpha \le 1$. \ \paragraph{\textit{Step 6: $\|D_t^{\frac{1-\alpha}{2}}S^\alpha u\|_{L^2(\mathbb{R};H)} \le \|f\|_{W_{\alpha}}$, $ \alpha\in [-1,1]$}} We proceed as in Step 5, and compute \begin{align*} \||\tau|^{\frac{1-\alpha}{2}}S^\alpha \hat u(\tau)\|^2_H = \langle (\tau^2+S^4)^{-1} |\tau|^{2} \hat g(\tau), \hat g(\tau) \rangle_H \le \|\hat g(\tau)\|_H ^2. \end{align*} The conclusion follows. \end{proof} \begin{rem} As noted in the proof, we can identify the Duhamel solution with the Fourier solution. So this gives an indirect proof that the Duhamel solution belongs to $\mathcal{S}(\mathbb{R}; E_{-\infty})$ for $f\in \mathcal{S}(\mathbb{R}; E_{-\infty}).$ \end{rem} \subsection{Regularity of solutions} We can now deduce existence and uniqueness results together with regularity. We begin with the simplest case.
\begin{thm}[Regularity for source in $L^1$]\label{Régula Cas L1} Let $f \in L^1(\mathbb{R};H).$ Then there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution of the equation $\partial_t u +S^2 u =f$ in $\mathcal{S'}(\mathbb{R}; E_\infty)$. Moreover $u \in C_0(\mathbb{R}; H)$ with \begin{equation*} \sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H \leq \left\|f \right\|_{L^1(\mathbb{R};H) } \qquad \mathrm{and} \qquad \left\|u \right\|_{L^2(\mathbb{R}; D_{S,1})} \leq \frac{1}{\sqrt{2}} \left\|f \right\|_{L^1(\mathbb{R};H) } . \end{equation*} \end{thm} \begin{proof} Uniqueness in $L^2(\mathbb{R};D_{S,1})$ is provided by Proposition \ref{Unicité}. The existence of such a regular solution with the estimates follows from Lemmas \ref{lem:solution} and \ref{lem:aprioriestimates}, when $f\in \mathcal{S}(\mathbb{R}; E_{-\infty}) $. Density of $\mathcal{S}(\mathbb{R}; E_{-\infty})$ in $L^1(\mathbb{R};H)$ allows us to pass to the limit both in the weak formulation of the equation and in the estimates. That the limit stays in $C_0(\mathbb{R}; H)$ follows from the closedness of this space with respect to the sup norm. \end{proof} We turn to the second result extending Proposition \ref{lem: Lions} (the case $\beta=1$). \begin{thm}[Regularity for source in $W_{-\beta}$]\label{cas Etheta} Let $\beta \in (0,1]$ and fix $f \in W_{-\beta}$. Then there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution of $\partial_t u +S^2 u =f$ in $\mathcal{S'}(\mathbb{R}; E_\infty)$. Moreover $u \in V_{-\beta} \cap C_0(\mathbb{R}; H)$ and there exists a constant $C=C(\beta)>0$ independent of $f$ such that \begin{equation*} \sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H + \left\|u \right\|_{V_{-\beta}} \leq C \left\|f \right\|_{ W_{-\beta}}. \end{equation*} \end{thm} \begin{proof} The proof is a repetition of that of Theorem \ref{Régula Cas L1}, gathering the uniqueness of Proposition \ref{Unicité}, the estimates of Lemma \ref{lem:aprioriestimates} with $\beta=-\alpha$, and the density from Lemma \ref{Spaces V_alpha}.
\end{proof} \begin{rem} For $\beta\le 0$, there is a solution in $V_{-\beta}$ by Theorem \ref{ISOMORPHISM}, but it does not belong to $C_0(\mathbb{R}; H)$ in general. \end{rem} \section{Embeddings and integral identities}\label{Section 4} The study of the abstract heat equation leads to embeddings for function spaces in the spirit of Lions and then to integral identities expressing absolute continuity. \subsection{Embeddings} \begin{cor}[Extended Lions' embedding] For $\alpha \in [-1,0)$, we have $V_\alpha \hookrightarrow C_0(\mathbb{R};H)$. \end{cor} \begin{proof} Fix $\alpha \in [-1,0)$ and let $u \in V_\alpha$. We have $\partial_t u \in W_\alpha$ and $S^2u \in L^2(\mathbb{R};D_{S,-1})=W_{-1}$, hence $f=\partial_t u+ S^2u \in W_{-1}+W_{\alpha}= V_{-\alpha}^\star$. As $V_\alpha\subset L^2(\mathbb{R};D_{S,1})$, by Proposition \ref{Unicité}, $u$ is the unique solution in $L^2(\mathbb{R};D_{S,1})$ of the equation \begin{equation*} \partial_t \Tilde{u} +S^2 \Tilde{u} = f \ \ \mathrm{in} \ \mathcal{S}'(\mathbb{R};E_\infty). \end{equation*} Using linearity and Theorem \ref{cas Etheta} for $\beta=-\alpha$ and $\beta=1$, we deduce that $u \in C_0(\mathbb{R};H)$ and we have \begin{align*} \sup_{t \in \mathbb{R}} \left\| u(t)\right\|_H \leq C(\alpha) \left\|\partial_t u \right\|_{ W_\alpha}+\left ( 1+\frac{1}{\sqrt{2}} \right ) \left \| S^2u \right \|_{L^2(\mathbb{R};D_{S,-1})} \leq \Tilde{C}(\alpha) \left \| u \right \|_{V_\alpha}. \end{align*} \end{proof} { \begin{rem} The case $\alpha=-1$ is the homogeneous version of Lions' result mentioned before. For $\alpha \in [0,1]$, there is no hope of an embedding $V_\alpha \hookrightarrow C_0(\mathbb{R};H)$. In fact, the embedding $\dot{H}^{1/2}(\mathbb{R};H) \hookrightarrow L^\infty(\mathbb{R};H)$ fails (case $\alpha=0$), as the scalar embedding ${H}^{1/2}(\mathbb{R}) \hookrightarrow L^\infty(\mathbb{R})$ for the classical inhomogeneous Sobolev space of order 1/2 already fails.
\end{rem} We complete the embeddings by exploring further the cases $\alpha \in [0,1]$ although this does not require the heat operator $\partial_t + S^2$. \begin{lem}[Hardy-Littlewood-Sobolev embedding]\label{Hardy-Littlewood-Sobolev} Let $\alpha \in (0,1]$ and let $r={2}/{\alpha} \in [2,\infty)$. Then, we have $V_{\alpha} \hookrightarrow L^{r}(\mathbb{R};D_{S,\alpha}) $ and there is a constant $C=C(r)>0$ such that for all $u \in V_{\alpha}$, \begin{equation*} \left\| u \right\|_{L^r(\mathbb{R};D_{S,\alpha})} \leq C(r) \|D_t^{\frac{1-\alpha}{2}}u \|_{L^2(\mathbb{R};D_{S,\alpha})}.\end{equation*} Consequently, we have $ L^{r'}(\mathbb{R};D_{S,-\alpha}) \hookrightarrow W_{-\alpha}$, where $r'$ is the H\"older conjugate of $r$. \end{lem} \begin{proof} The inequality holds for $u \in \mathcal{S}(\mathbb{R};E_{-\infty})$ using the Sobolev embedding in $\mathbb{R}$ extended to $D_{S,\alpha}$-valued functions as the inverse of $D_t^{\frac{1-\alpha}{2}}$ is the Riesz potential with exponent $\frac{1-\alpha}{2}$. We conclude by density and a duality argument. \end{proof} The next result shows that $V_0$ and $V_1\cap L^{\infty}(\mathbb{R}; H)$ share similar embeddings. \begin{prop}[Mixed norm embeddings]\label{prop:embed r>0} For $r\in (2,\infty)$ and $\alpha= 2/ r$, we have $V_1\cap L^{\infty}(\mathbb{R}; H)\hookrightarrow L^{r}(\mathbb{R};D_{S,\alpha})$ and $V_{0} \hookrightarrow L^{r}(\mathbb{R};D_{S,\alpha}) $, with $$ \left\| u \right\|_{L^r(\mathbb{R};D_{S,\alpha})} \leq \|u \|_{L^2(\mathbb{R};D_{S,1})}^\alpha \|u \|_{L^\infty(\mathbb{R};H)}^{1-\alpha}$$ and $$ \hspace{1cm} \left\| u \right\|_{L^r(\mathbb{R};D_{S,\alpha})} \leq \|u \|_{L^2(\mathbb{R};D_{S,1})}^\alpha \|D_t^{{1}/{2}}u \|_{L^2(\mathbb{R};H)}^{1-\alpha}.$$ Consequently, $L^{r'}(\mathbb{R};D_{S,-\alpha}) \hookrightarrow L^2(\mathbb{R}; D_{S,-1}) + L^1(\mathbb{R};H)$ and $L^{r'}(\mathbb{R};D_{S,-\alpha}) \hookrightarrow L^2(\mathbb{R}; D_{S,-1}) + W_0$. 
\end{prop} \begin{proof} For the first inequality, use the moment inequality $$\|S^\alpha u(t)\|_H\le \|Su(t)\|_H^\alpha \|u(t)\|_H^{1-\alpha}\le \|Su(t)\|_H^\alpha \|u \|_{L^\infty(\mathbb{R};H)}^{1-\alpha} $$ and integrate its $r$-th power. For the second inequality, start with the moment inequality expressed on the Fourier side when $u\in \mathcal{S}(\mathbb{R};E_{-\infty}),$ for fixed $\tau$, $$\||\tau|^{\frac{1-\alpha}{2}}S^\alpha \hat u(\tau)\|_H\le \|S \hat u(\tau)\|_H^\alpha \||\tau|^{{1}/{2}}\hat u(\tau)\|_H^{1-\alpha}.$$ Next, take its square, integrate in $\tau$, use the H\"older inequality, the Plancherel identity and density to conclude. The consequences are standard by density and duality and we skip the details. \end{proof} \begin{rem} Note that the first inequality and its dual version in the statement hold whenever $\mathbb{R}$ is replaced by any interval. However, the second one and its dual version have a meaning only on $\mathbb{R}$. \end{rem} \begin{rem} Let $\alpha \in (0,1]$. Let $S^2=-\Delta_x$, more precisely $S=(-\Delta_x)^{1/2}$, where $\Delta_x$ is the usual Laplace operator defined as a self-adjoint operator on $L^2(\mathbb{R}^n)$. When $2\alpha < n$, the Sobolev embedding in $\mathbb{R}^n$ gives us $$D(S^\alpha)\subset L^q(\mathbb{R}^n), \ \mathrm{with}\ \|v\|_{L^q} \le C \|S^\alpha v\|_{L^2}, \ q=\frac{2n}{n-2\alpha}.$$ This is true for $0\le \alpha\le 1$ if $n \geq 3$, for $\alpha \in (0,\frac{1}{2})$ if $n=1$, and for $\alpha<1$ if $n=2$. Thus $D_{S,\alpha} \hookrightarrow L^q(\mathbb{R}^n)$. When $r=2/\alpha$, we then have $ V_\alpha \hookrightarrow L^r(\mathbb{R};L^q(\mathbb{R}^n))$.
The constraints are equivalent to $$ \frac{1}{r}+\frac{n}{2q}=\frac{n}{4} \ \ \mathrm{and} \ \ \ 2 \leq r,q < \infty.$$ Thus, we recover the mixed space $L^r_t L^q_x$ that appears in the classical theory \cite[chp.~3]{ladyzhenskaia1968linear} and deduce for these spaces the classical embedding $\|u\|_{L^r_tL^q_x}\le C \|\nabla_x u\|_{L^2_tL^2_x}^{{2}/{r}} \|u\|_{L^\infty_tL^2_x}^{1-{2}/{r}} $ from the first inequality in Proposition \ref{prop:embed r>0}. This argument is inspired by the one in \cite{auscher2023universal}. \end{rem} \subsection{Integral identities}\label{S2S4} Lions' embedding, using the domains of $S$ and $S^{-1}$, comes with integral identities. We now prove that they hold using the completions of the domains of $S$ and $S^{-1}$, allowing more general right-hand sides. \begin{prop}[Integral identities: the real line case]\label{energy lemma} Let $u \in L^2(\mathbb{R};D_{S,1})$ and let $\rho \in (2,\infty]$. Assume that $\partial_t u = f+g$ with $f \in L^2(\mathbb{R};D_{S,-1})$ and $g \in L^{\rho'}(\mathbb{R};D_{S,-{\beta}})$, where ${\beta}={2}/{\rho} \in [0,1)$ and $\rho'$ is the H\"older conjugate of $\rho$. Then $ u \in C_0(\mathbb{R};H)$, $ t \mapsto \left \| u(t) \right \|^2_H$ is absolutely continuous on $\mathbb{R}$ and for all $ \sigma < \tau $, \begin{align}\label{eq:integralidentity} \left \| u(\tau) \right \|^2_H-\left \| u(\sigma) \right \|^2_H = 2\mathrm{Re}\int_{\sigma}^{\tau} \langle f(t),u(t)\rangle_{H,-1} + \langle g(t),u(t)\rangle_{H, -\beta}\ \mathrm d t. \end{align} {In particular, if $\rho=\infty$, then we infer that \begin{align}\label{important existence cas L1} \sup_{t\in \mathbb{R}}\|u(t) \|_{H} \leq \sqrt{2 \left\| u \right\|_{L^2(\mathbb{R};D_{S,1})} \left\| f \right\|_{L^2(\mathbb{R};D_{S,-1})} }+(1+\sqrt{2})\left\| g \right\|_{L^1(\mathbb{R};H)}. \end{align}} \end{prop} Remark that with our notation, $\mathrm{Re} \langle f(t),u(t)\rangle_{H,-1}= \mathrm{Re} \langle u(t),f(t)\rangle_{H,1}$.
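For clarity, we recall that the pairings in \eqref{eq:integralidentity} are those induced by powers of $S$, namely
\begin{equation*}
\langle f(t),u(t)\rangle_{H,-1}=\langle S^{-1}f(t),Su(t)\rangle_{H}, \qquad \langle g(t),u(t)\rangle_{H,-\beta}=\langle S^{-\beta}g(t),S^{\beta}u(t)\rangle_{H},
\end{equation*}
in accordance with the duality used in the proofs below.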
\begin{proof} The assumption $u \in L^2(\mathbb{R};D_{S,1})$ is equivalent to $S^2u \in L^2(\mathbb{R}; D_{S,-1})$, hence $u$ verifies the equation \begin{align*} \partial_t u+S^2u =S^2u + f+g=: h. \end{align*} Using Theorem \ref{cas Etheta} when $\rho<\infty$ and Theorem \ref{Régula Cas L1} when $\rho=\infty$, we know that $u\in L^2(\mathbb{R};D_{S,1})\cap C_0(\mathbb{R};H)$. It remains to prove the identity. Let $f_k,g_k \in \mathcal{S}(\mathbb{R};E_{-\infty})$ with $f_k \to S^2u +f$ in $L^2(\mathbb{R};D_{S,-1})$ and $g_k \to g$ in $ L^{\rho'}(\mathbb{R};D_{S,-{\beta}})$ and set $h_k=f_k+g_k$. Let $u_k \in L^2(\mathbb{R};D_{S,1})$ be the unique solution of the equation $\partial_t u_k + S^2 u_k = h_k$ given by Corollary \ref{cor:iso}. We have $u_k \in \mathcal{S}(\mathbb{R};E_{-\infty})$. The regularity of $u_k$ allows us to write for all $ \sigma < \tau $, \begin{equation*} \left \| u_k (\tau) \right \|^2_H-\left \| u_k (\sigma) \right \|^2_H = 2 \mathrm{Re}\int_{\sigma}^{\tau} \langle \partial_t u_k (t),u_k (t)\rangle_H \ \mathrm d t. \end{equation*} Since $ \partial_t u_k=-S^2u_k+h_k$ by the equation, we have for all $ \sigma < \tau $, \begin{equation*} \left \| u_k (\tau) \right \|^2_H-\left \| u_k (\sigma) \right \|^2_H = 2 \mathrm{Re}\int_{\sigma}^{\tau} \langle f_k (t)-S^2u_k (t),u_k (t)\rangle_H + \langle g_k(t), u_k(t) \rangle_H \ \mathrm{d}t. \end{equation*} To pass to the limit when $k\to \infty$, we observe that $u_k\to u$ in $L^2(\mathbb{R};D_{S,1})$ and in $C_0(\mathbb{R};H)$ in all cases, and also in $L^\rho(\mathbb{R};D_{S,\beta})$ when $\rho<\infty$. In particular $f_k-S^2u_k \to f$ in $L^2(\mathbb{R};D_{S,-1})$. We obtain \eqref{eq:integralidentity} at the limit. 
In the case $\rho=\infty$, letting $\sigma\to -\infty$ and taking $\tau$ at which $\|u(\tau) \|_H=\sup_{t\in \mathbb{R}}\|u(t)\|_{H}=:X$ (the supremum is attained since $u\in C_0(\mathbb{R};H)$), we obtain \begin{align*} X^2= \|u(\tau) \|_H^2 \le 2\int_{-\infty}^{\infty} \|S^{-1}f(t)\|_H\|Su(t)\|_{H} \ \mathrm d t+ 2X\int_{-\infty}^{\infty} \| g(t)\|_H\ \mathrm d t . \end{align*} By the Cauchy--Schwarz inequality, the first integral is at most $\left\| u \right\|_{L^2(\mathbb{R};D_{S,1})} \left\| f \right\|_{L^2(\mathbb{R};D_{S,-1})}$. Solving the resulting quadratic inequality for $X$ yields $X \le \left\| g \right\|_{L^1(\mathbb{R};H)}+ \big( \left\| g \right\|_{L^1(\mathbb{R};H)}^2+2\left\| u \right\|_{L^2(\mathbb{R};D_{S,1})} \left\| f \right\|_{L^2(\mathbb{R};D_{S,-1})} \big)^{1/2}$, and \eqref{important existence cas L1} follows from $\sqrt{x+y}\le \sqrt{x}+\sqrt{y}$. \end{proof} We stress that the above result is false on bounded intervals as evidenced by the counter-example in Remark \ref{rem:counterexample}. But it remains valid on half-lines. On $(0,\infty)$, say, it can be shown using either the backward heat equation or an extension method. We describe the second method below. \begin{cor}[Integral identities: the half-line case]\label{corenergy} Let $I$ be an open half-line of $\mathbb{R}$. Let $u \in L^2(I;D_{S,1})$ and let $\rho \in (2,\infty]$. Assume that $\partial_t u = f+g$ with $f \in L^2(I;D_{S,-1})$ and $g \in L^{\rho'}(I;D_{S,-{\beta}})$, where ${\beta}={2}/{\rho} \in [0,1)$. Then $ u \in C_0(\Bar{I},H)$, $ t \mapsto \left \| u(t) \right \|^2_H$ is absolutely continuous on $\Bar{I}$ and \eqref{eq:integralidentity} holds for all $ \sigma, \tau \in \Bar{I}$ such that $ \sigma < \tau$. \end{cor} \begin{proof} We assume that $I=(0,\infty)$, since one can always reduce to this case. We will construct an even extension $u_e$ of $u$ and odd extensions $g_o,f_o$ of $g, f$ to $\mathbb{R}$. These extensions belong to the same spaces as $u,f,g$ but on $\mathbb{R}$, and $\partial_t u_e = f_o+g_o$. Thus, Proposition \ref{energy lemma} applies to ${u_e}$. We obtain the conclusion by restricting to $\Bar I$.
We start by defining for all $a \in E_{-\infty}$ the distribution $\langle u, a \rangle $ on $(0,\infty)$ by setting \begin{align*} \forall \phi \in \mathcal{D}((0,\infty);\mathbb{C}), \langle \langle u, a \rangle, \phi \rangle_{\mathcal{D}',\mathcal{D}}:= \llangle u, \phi \otimes a \rrangle_{\mathcal{D}',\mathcal{D}}= \int_0^\infty \langle Su(t), S^{-1}a \rangle_H \Bar{\phi}(t) \ \mathrm{d}t. \end{align*} Hence $\langle u, a \rangle$ is locally integrable and agrees with $\langle u, a \rangle(t)=\langle Su(t), S^{-1}a \rangle_H$ almost everywhere. We have \begin{align*} \frac{\mathrm{d} }{\mathrm{d} t} \langle u,a \rangle = \langle g,a \rangle_{H,-\beta}+ \langle f,a \rangle_{H,-1} \ \ \text{in} \ \mathcal{D}'((0,\infty);\mathbb{C}). \end{align*} The assumptions on $u, f,g$ imply that $ \langle u,a \rangle \in W^{1,1}(0,T)$ for any $T>0$. It follows that $\langle u,a \rangle$ can be identified with an absolutely continuous function on $[0,\infty)$. We define ${u_e} \in \mathcal{D}'(\mathbb{R};E_\infty)$ by \begin{align*} \llangle {u_e},\phi \otimes a \rrangle_{\mathcal{D}',\mathcal{D}} := \int_{0}^{\infty} \langle u,a \rangle (t) (\Bar{\phi}(t)+\Bar{\phi}(-t)) \ \mathrm{d}t, \end{align*} using the fact that distributions in $\mathcal{D}'(\mathbb{R};E_{\infty})$ are uniquely determined on tensor products $\phi \otimes a$ with $\phi \in \mathcal{D}(\mathbb{R})$ and $a \in E_{-\infty}$.
We have ${u_e}=u$ in $\mathcal{D}'((0,\infty);E_\infty)$ by taking $\phi$ supported in $(0,\infty).$ Next, integration by parts shows that \begin{align*} \llangle {u_e},\frac{\mathrm{d}}{\mathrm{d}t}(\phi \otimes a) \rrangle_{\mathcal{D}',\mathcal{D}}&= -\int_{0}^{\infty} (\langle S^{-\beta}g(t),S^{\beta}a \rangle_H + \langle S^{-1}f(t),Sa \rangle_H)(\Bar{\phi}(t)+\Bar{\phi}(-t)) \ \mathrm{d}t \\ & = - \int_{\mathbb{R}} (\langle g_o(t),a \rangle_{H,-\beta} + \langle f_o(t),a \rangle_{H,-1}) \Bar{\phi}(t) \ \mathrm{d}t, \end{align*} where $g_o$ and $f_o$ are the odd extensions of $g$ and $f$, respectively. Hence $\partial_t {u_e} = g_o+f_o$ in $\mathcal{D}'(\mathbb{R};E_\infty).$ Lastly, \begin{align*} \llangle S{u_e},\phi \otimes a \rrangle_{\mathcal{D}',\mathcal{D}}= \llangle {u_e},S(\phi \otimes a) \rrangle_{\mathcal{D}',\mathcal{D}} &= \int_{0}^{\infty} \langle Su(t),a \rangle_H (\Bar{\phi}(t)+\Bar{\phi}(-t)) \ \mathrm{d}t \\ & = \int_{\mathbb{R}} \langle (Su)_e(t),a \rangle_H \Bar{\phi}(t) \ \mathrm d t, \end{align*} where $(Su)_e$ is the even extension of $Su$, so that $S{u_e}=(Su)_e$ in $\mathcal{D}'(\mathbb{R};E_\infty)$. \end{proof} The conclusion of Corollary \ref{corenergy} can be polarized, given two functions $u$, $\Tilde{u}$ that satisfy the assumptions of Corollary \ref{corenergy} with the same exponent $\rho \in (2,\infty]$ and $\beta = 2/\rho$. Thanks to the extendability seen in the previous proof, the same also works with open, half-infinite intervals and the conclusion is as follows. \begin{cor}[Polarized integral identities]\label{EnergyPol} Assume that $u$, $\Tilde{u}$ satisfy the same assumptions as in Corollary \ref{corenergy} on two open infinite intervals $I$ and $J$ with non-empty intersection.
Then $t \mapsto \langle u(t), \Tilde{u}(t) \rangle_H$ is absolutely continuous on $\Bar{I}\cap \Bar{J}$ and we have for all $ \sigma, \tau \in \Bar{I}\cap \Bar{J}$ such that $ \sigma < \tau$ \begin{align*} \langle u(\tau), \Tilde{u}(\tau) \rangle_H - \langle u(\sigma), \Tilde{u}(\sigma) \rangle_H =\int_{\sigma}^{\tau} &\langle f(t),\Tilde{u}(t)\rangle_{H,-1} + \langle g(t),\Tilde{u}(t)\rangle_{H,-{\beta}}\ \mathrm d t \\&+ \int_{\sigma}^{\tau} \langle u(t),\Tilde{f}(t)\rangle_{H,1} + \langle u(t), \Tilde{g}(t)\rangle_{H,{\beta}}\ \mathrm d t. \end{align*} \end{cor} \begin{rem} We note that by linearity, the above identities hold with $g$ replaced by a sum of several terms in $L^{\rho'}(\mathbb{R};D_{S,-{\beta}})$ for different pairs $(\rho,\beta)$ and similarly for the polarized version. However, the inequality \eqref{important existence cas L1} should be modified accordingly. \end{rem} On a bounded interval there is a statement with an extra $L^1(I;H)$ hypothesis on $u$. \begin{cor}[Integral identities: the bounded case]\label{corenergybounded} Let $I$ be a bounded, open interval of $\mathbb{R}$. Let $u \in L^2(I;D_{S,1})\cap L^1(I;H)$ and let $\rho \in (2,\infty]$. Assume that $\partial_t u = f+g$ with $f \in L^2(I;D_{S,-1})$ and $g \in L^{\rho'}(I;D_{S,-{\beta}})$, where ${\beta}={2}/{\rho} \in [0,1)$. Then $ u \in C(\Bar{I},H)$, $ t \mapsto \left \| u(t) \right \|^2_H$ is absolutely continuous on $\Bar{I}$ and \eqref{eq:integralidentity} holds for all $ \sigma, \tau \in \Bar{I}$ such that $ \sigma < \tau$. \end{cor} \begin{proof} Assume that $I=(0,\mathfrak{T})$. 
If we take $\chi$ a smooth real-valued function that is equal to 1 near 0 and 0 near $\mathfrak{T}$ and set $v=\chi u$ on $(0,\infty)$, then we can see that $v \in L^2((0,\infty); D_{{S},1})$ and $\partial_tv \in L^2((0,\infty); D_{{S},-1})+ L^{\rho'}((0,\infty);D_{{S},-\beta})+ L^1((0,\infty);H).$ We may apply Corollary \ref{corenergy} together with the above remark, and by restriction we have the conclusion on any subinterval $[0,\mathfrak{T}']$ with $\mathfrak{T}'<\mathfrak{T}$. If we now do this with a smooth real-valued function that is equal to 0 near 0 and 1 near $\mathfrak{T}$, and apply Corollary \ref{corenergy} {on $(-\infty,\mathfrak{T})$}, we have by restriction the conclusion for $u$ on any subinterval $[\mathfrak{T}',\mathfrak{T}]$ with $0<\mathfrak{T}'$. We conclude on $[0,\mathfrak{T}]$ by gluing. \end{proof} \section{Abstract parabolic equations}\label{Section 5} In this section, we study parabolic equations of type $$ \partial_t u + \mathcal{B}u=f, $$ where $\partial_t + \mathcal{B}$ is a parabolic operator with a time-dependent elliptic part $\mathcal{B}$ under ``divergence structure". Here, we do not assume any time-regularity on $\mathcal{B}$ apart from its weak measurability. We provide a complete framework to prove well-posedness and to construct propagators and fundamental solution operators, avoiding density arguments based on parabolic operators with time-regular elliptic part. We also avoid time regularizations such as Steklov approximations. Uniqueness implies that our construction agrees with others under common hypotheses. \subsection{Setup}\label{sec:homogenoussetup} Throughout this section, we fix an operator $$T: D(T)\subset H \rightarrow K$$ which is injective, closed and densely defined from $ D(T)\subset H $ to another complex separable Hilbert space $K$. The operator $T^\star T$ is an injective, positive self-adjoint operator on $H$, and so is $S:=(T^\star T)^{1/2}$.
Moreover, by Kato's second representation theorem \cite{kato2013perturbation}, we have \begin{equation*} D(S)=D(T) \ \ \mathrm{and} \ \ \forall u,v \in D(T), \ \langle Su, Sv \rangle_H = \langle T^\star Tu, v\rangle_H= \langle Tu, Tv\rangle_K. \end{equation*} As a result, $D_{S,1}$ is the completion of $D(T)$ for the norm $\left\|T\cdot \right\|_K$. Next, $(B_t)_{t\in \mathbb{R}}$ is a fixed family of bounded and coercive sesquilinear forms on $D(T)\times D(T)$ {with respect to the homogeneous norm on $D(T)$} and with uniform bounds (independent of $t$). To be precise, $B_t : D(T) \times D(T) \rightarrow \mathbb{C}$ is a sesquilinear form verifying \begin{align}\label{EllipticityAbstract} \left | B_t(u,v) \right |\leq M \left \| u \right \|_{S,1}\left \| v \right \|_{S,1}\ ,\ \ \nu \left \| u \right \|_{S,1}^2 \leq \mathrm{Re} (B_t(u,u)), \end{align} for some $ M, \nu >0$ and for all $t \in \mathbb{R}$ and $u,v \in D(S)$. This is equivalent to saying that for all $t \in \mathbb{R}$, there exists a bounded and strictly accretive linear map $A(t)$ on $ \overline{\mathrm{ran}(T)}$ such that \begin{align}\label{eq:TstarAT} \forall u,v \in D(T),\ B_t(u,v)=\langle A(t) Tu, Tv \rangle_K. \end{align} We assume in addition that the family $(B_t)_{t\in \mathbb{R}}$ is weakly measurable, \textit{i.e.}, $t \mapsto B_t(u,v)$ is a measurable function on $\mathbb{R}$, for all $u,v \in D(T)$. We keep denoting by $B_t$ the unique extension of $B_t$ to $D_{S,1}\times D_{S,1}$. Remark that the extended family $(B_t)_{t\in \mathbb{R}}$ is automatically weakly measurable, since for all $u,v \in D_{S,1}$ the function $t \mapsto B_t(u,v)$ is a pointwise limit of a sequence of measurable functions. Note that the adjoint forms $B_t^\star$ defined by $B_t^\star(u,v)=\overline{B_t(v,u)}$ have the same properties and are associated to $A(t)^*$.
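For orientation, a standard concrete instance of this abstract setup (this example is only illustrative and is not used in the sequel) is the divergence-form case: take $H=L^2(\mathbb{R}^n)$, $K=L^2(\mathbb{R}^n;\mathbb{C}^n)$ and $T=\nabla$ with domain $H^1(\mathbb{R}^n)$, so that $S=(-\Delta)^{1/2}$, and, given a weakly measurable family of matrix-valued coefficients $A(t,\cdot)\in L^\infty(\mathbb{R}^n;\mathbb{C}^{n\times n})$ that is uniformly bounded and elliptic, set \begin{align*} B_t(u,v)=\int_{\mathbb{R}^n} A(t,x)\nabla u(x)\cdot \overline{\nabla v(x)} \ \mathrm{d}x, \end{align*} so that $\partial_t+\mathcal{B}$ is formally the divergence-form parabolic operator $\partial_t - \mathrm{div}(A(t,\cdot)\nabla)$.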
As \begin{equation*} \int_\mathbb{R}\left | B_t(u(t),v(t)) \right |\mathrm dt \leq M \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})} \left \| v \right \|_{L^2(\mathbb{R};D_{S,1})}, \end{equation*} the operator $\mathcal{B}$ defined by $$\llangle \mathcal{B}u, v \rrangle = \int_{\mathbb{R}} B_t(u(t),v(t)) \ \mathrm{d}t \ \ \mathrm{when} \ u,v \in L^2(\mathbb{R};D_{S,1})$$ is a bounded operator from $L^2(\mathbb{R};D_{S,1})$ to $L^2(\mathbb{R};D_{S,-1})$ with \begin{equation*} \left \| \mathcal{B}u \right \|_{L^2(\mathbb{R};D_{S,-1})} \leq M \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})}. \end{equation*} Next, the partial derivative is a well-defined tempered distribution given by $$ \llangle \partial_t u , \varphi \rrangle_{\mathcal{S}',\mathcal{S}}= -\llangle u, \partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}. $$ When $u\in L^2(\mathbb{R};D_{S,1})$, one can compute the right hand side as $$\llangle u, \partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}= \llangle Su, S^{-1}\partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}= \int_{\mathbb{R}} \langle Su(t), S^{-1}\partial_t \varphi(t) \rangle_{H} \ \mathrm{d}t. $$ \begin{defn}[The forward parabolic operator associated to the family $(B_t)_{t \in \mathbb{R}}$]\label{Définition opérateur B} The operator $$\partial_t + \mathcal{B}: L^2(\mathbb{R};D_{S,1}) \rightarrow \mathcal{S}'(\mathbb{R};E_{\infty})$$ defined using the weak formulation \begin{equation*} \forall u \in L^2(\mathbb{R};D_{S,1}), \forall \varphi \in \mathcal{S}(\mathbb{R};E_{-\infty}), \ \llangle \partial_t u + \mathcal{B}u , \varphi \rrangle_{\mathcal{S}',\mathcal{S}}:= -\llangle u, \partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}+ \int_{\mathbb{R}} B_t (u(t),\varphi(t)) \ \mathrm{d}t \end{equation*} is called the parabolic operator associated to the family $(B_t)_{t \in \mathbb{R}}$. 
The definition is the same as above when $\mathbb{R}$ is substituted by an open interval $I \subset \mathbb{R}$, replacing $\mathcal{S}'(\mathbb{R};E_{\infty})$ by $\mathcal{D}'(I;E_{\infty})$ and $\mathcal{S}(\mathbb{R};E_{-\infty})$ by $\mathcal{D}(I;E_{-\infty})$. In both cases, we formally write $\partial_t + \mathcal{B} = \partial_t + T^\star A(t) T $. \end{defn} Remark that this definition requires no assumption $u\in L^1_{\mathrm{loc}}(\mathbb{R}; H)$, but when it holds one can also use $\llangle u, \partial_t\varphi \rrangle_{\mathcal{S}',\mathcal{S}}=\int_{\mathbb{R}} \langle u(t), \partial_t \varphi(t) \rangle_{H} \ \mathrm{d}t$ (see Section \ref{subsection 4.9}). For $u,v \in \mathcal{S}(\mathbb{R};E_{-\infty})$, an integration by parts then yields $$ \llangle \partial_t u + \mathcal{B}u , v \rrangle_{\mathcal{S}',\mathcal{S}} = \overline{\llangle -\partial_t v +\mathcal{B}^\star v , u \rrangle}_{\mathcal{S}',\mathcal{S}}, $$ where $-\partial_t + \mathcal{B}^\star$ is the backward parabolic operator associated to the adjoint family of forms $(B_t^\star)_{t \in \mathbb{R}}$ defined similarly. We wish to find (weak) solutions $u\in L^2(\mathbb{R};D_{S,1})$ to $\partial_tu+\mathcal{B}u= f $ for appropriate source terms. The challenge here is that we can no longer use the Fourier transform, nor a semigroup. We could start with the Lions representation theorem, but we choose a different route, introducing a variational parabolic operator. We denote by $H_t$ the Hilbert transform with symbol $i \tau/ \left | \tau \right |$. More precisely, if $u \in \mathcal{S'}(\mathbb{R};E_\infty)$ is such that we have $i \tau/ \left | \tau \right | \mathcal{F}u \in \mathcal{S'}(\mathbb{R};E_\infty)$, then we set \begin{equation*} H_t u := \mathcal{F}^{-1}\left ( i \frac{\tau}{\left | \tau \right |} \mathcal{F}u \right ).
\end{equation*} We define a bounded sesquilinear form $B_{V_0}: V_0 \times V_0 \rightarrow \mathbb{C}$ by \begin{equation*}\label{B_{V_0}} \forall u,v \in V_0, \ B_{V_0}(u,v):=\int_\mathbb{R}\langle H_t D_t^{1/2}u(t), D_t^{1/2}v(t) \rangle_H + B_t(u(t),v(t))\ \mathrm dt. \end{equation*} By the Riesz representation theorem, there exists a unique $\mathcal{H} \in \mathcal{L}(V_0,V_0^\star)$ such that \begin{equation*}\label{eq:mathcal{H}}\llangle \mathcal{H}u, v \rrangle_{V_0^\star, V_0}:= B_{V_0}(u,v), \ u,v \in V_0. \end{equation*} We have \begin{equation*}\label{H et part + B} \left ( \partial_t+\mathcal{B} \right )_{\scriptscriptstyle{\vert V_0}} = \mathcal{H} \ , \ \ \left ( -\partial_t+\mathcal{B}^\star \right )_{\scriptscriptstyle{\vert V_0}} = \mathcal{H}^\star, \end{equation*} where $\mathcal{H}^\star : V_0 \rightarrow V_0^\star$ is the adjoint of $\mathcal{H}$. Indeed, when $u,v \in \mathcal{S}(\mathbb{R};E_{-\infty})$, the Plancherel theorem gives $$\int_\mathbb{R}\langle H_t D_t^{1/2}u(t), D_t^{1/2}v(t) \rangle_H \ \mathrm dt=-\int_\mathbb{R}\langle u(t), \partial_t v(t) \rangle_{H}\ \mathrm dt=-\int_\mathbb{R}\langle Su(t), S^{-1}\partial_t v(t) \rangle_{H}\ \mathrm dt, $$ so that $\mathcal{H}$ and $\partial_t+\mathcal{B} $ agree on $\mathcal{S}(\mathbb{R};E_{-\infty})$, and we conclude by density in $V_0$. Thus, we may call $\mathcal{H}$ the variational parabolic operator associated to $\mathcal{B}$, as it comes from the sesquilinear form $B_{V_0}$ and $V_0$ plays the role of a variational space. \subsection{Existence and uniqueness results}\label{S2S5} We now prove our main results. \subsubsection{Source term in $V_0^\star$ : Kaplan's method} The following lemma is essentially due to Kaplan \cite{kaplan1966abstract}. It expresses the hidden coercivity of the variational parabolic operator $\mathcal{H}$. We reproduce the argument for completeness.
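Before stating the lemma, the mechanism behind it can be seen on the Fourier symbol in the model case $B_t(u,v)=\langle Su,Sv \rangle_H$ (this computation is only a heuristic and is not used in the proof): $\partial_t+S^2$ corresponds to multiplication by $i\tau+\lambda$, with $\lambda \geq 0$ a spectral parameter for $S^2$, while $1+\delta H_t$ corresponds to $1+i\delta \sgn \tau$, and \begin{align*} \RE\left( (i\tau+\lambda)\overline{(1+i\delta \sgn \tau)} \right)= \RE\left( (i\tau+\lambda)(1-i\delta \sgn \tau) \right)= \lambda+\delta \left| \tau \right|, \end{align*} so that testing the equation against $(1+\delta H_t)u$ instead of $u$ makes both the elliptic part and the half-order time derivative coercive; this is the coercivity hidden in the skew-adjoint part $\partial_t$.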
\begin{lem}[Kaplan's lemma: invertibility on the pivotal variational space]\label{lemme Hidden Coerc} For each $f \in V_0^\star$, there exists a unique $u \in V_0$ such that $\mathcal{H}u=f$. Moreover, \begin{align*}\label{!} \left \| u \right \|_{V_0} \leq C(M,\nu) \left \| f \right \|_{V_0^\star}. \end{align*} \end{lem} \begin{proof} By the Plancherel theorem and the fact that the Hilbert transform $H_t$ commutes with $D_t^{1/2}$ and $S$, $H_t$ is a bijective isometry on $V_0$. As it is skew-adjoint, for all $\delta \in \mathbb{R}$, $1+\delta H_t$ is an isomorphism on $V_0$ and $\|(1+\delta H_t)u\|_{V_0}^2= (1+\delta^2)\|u\|^2_{V_0}$. The same equality holds on $V_0^\star$. Let $\delta>0$ to be chosen later. The modified sesquilinear form $B_{V_0}(\cdot,(1+\delta H_t)\cdot)$ is bounded on $V_0\times V_0$ and for all $u \in V_0$ \begin{align*} \mathrm{Re}\hspace{0.1cm} B_{V_0}(u, (1+\delta H_t)u) &= \mathrm{Re}\int_\mathbb{R}\langle H_t D_t^{1/2}u, D_t^{1/2}(1+\delta H_t)u \rangle_H + B_t(u(t),(1+\delta H_t)u(t))\ \mathrm dt \\ & = \mathrm{Re}\int_\mathbb{R} \delta \langle H_t D_t^{1/2}u(t), H_t D_t^{1/2}u(t) \rangle_H + B_t(u(t),u(t))+\delta B_t(u(t),H_t u(t)) \ \mathrm dt, \end{align*} where we have used that $H_t$ is skew-adjoint, hence \begin{align*} \mathrm{Re}\int_\mathbb{R}\langle H_t D_t^{1/2}u(t), D_t^{1/2}u(t) \rangle_H \ \mathrm dt =0. \end{align*} We obtain \begin{align*} \mathrm{Re} (B_{V_0}(u, (1+\delta H_t)u)) \geq \delta \| D_t^{1/2}u \|^2_{L^2(\mathbb{R};H)}+(\nu -\delta M)\left \| u \right \|_{L^2(\mathbb{R};D_{S,1})}^2. \end{align*} Choosing $\delta = \frac{\nu}{1+M}$, so that $\delta=\nu-\delta M= \frac{\nu}{1+M}$, it becomes \begin{align*} \mathrm{Re} (B_{V_0}(u, (1+\delta H_t)u)) \geq \frac{\nu}{1+M} \left \| u \right \|_{V_0}^2, \ \forall u \in V_0. \end{align*} Fix $f \in V_0^\star$.
The Lax-Milgram lemma implies that there exists a unique $u \in V_0$ such that $$B_{V_0}(u, (1+\delta H_t)\cdot)= (1+\delta H_t)^\star \circ f.$$ Furthermore, we have the estimate \begin{align*} \left \| u \right \|_{V_0} \leq \frac{1+M}{\nu}\left \| (1+\delta H_t)^\star \circ f \right \|_{V_0^\star}. \end{align*} Using the fact that $(1+\delta H_t)^\star$ is an isomorphism on $V_0^\star$ with operator norm equal to $\sqrt{1+\delta^2}$, we have that for each $f \in V_0^\star$ there exists a unique $u \in V_0$ such that $B_{V_0}(u,\cdot)=f$ with \begin{align*} \left \| u \right \|_{V_0} \leq \frac{1+M}{\nu }\times \sqrt{ 1+\left ( \frac{\nu}{1+M} \right )^2 } \left \| f \right \|_{V_0^\star}. \end{align*} \end{proof} \noindent Now, we come to the uniqueness result below. \begin{prop}[Uniqueness in energy space] \label{prop:uniqueness} Let $I$ be an interval which is a neighbourhood of $-\infty$. If $u \in L^2(I;D_{S,1})$ is a solution of $\partial_t u + \mathcal{B}u=0$ in $\mathcal{D}'(I;E_\infty)$, then $u=0$. \end{prop} \begin{proof} We have $u \in L^2(I;D_{S,1})$ and $\partial_t u = - \mathcal{B}u \in L^2(I;D_{S,-1})$. Using Corollary \ref{corenergy}, we have $u \in C_0(\overline{I};H)$ and $u$ satisfies, for $ \sigma, \tau \in \Bar{I}$ such that $ \sigma < \tau$, \begin{align*} \left \| u(\tau) \right \|^2_H-\left \| u(\sigma) \right \|^2_H = - 2\mathrm{Re}\int_{\sigma}^{\tau} B_t (u(t), u(t)) \ \mathrm d t \leq 0 \ . \end{align*} Letting $\sigma \to -\infty$ and using $\left \| u(\sigma) \right \|_H \to 0$, we deduce that $u(\tau)=0$, for all $\tau \in \overline{I}$. \end{proof} \begin{rem} When $I=\mathbb{R}$, one can directly prove Proposition \ref{prop:uniqueness} using uniqueness in Lemma \ref{lemme Hidden Coerc}. In fact, if $u \in L^2(\mathbb{R};D_{S,1})$ is a solution of $\partial_t u + \mathcal{B}u=0$ in $\mathcal{D}'(\mathbb{R};E_\infty)$ then $\partial_t u=-\mathcal{B}u \in L^2(\mathbb{R};D_{S,-1})$, so $u \in V_{-1} \subset V_0$ and $\mathcal{H}u=0$, therefore $u=0$ by Lemma \ref{lemme Hidden Coerc}.
\end{rem} \subsubsection{Source term in $W_{-\beta}$, $\beta \in (0,1]$} Let us start with the following proposition. \begin{prop}[Existence and uniqueness for $W_{-\beta}$ source] \label{prop:W-beta} Let $\beta \in (0,1]$ and let $f \in W_{-\beta}$. Then, there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution to $\partial_t u +\mathcal{B} u =f$ in $\mathcal{D}'(\mathbb{R};E_\infty)$. Moreover, $u \in C_0(\mathbb{R}; H)\cap V_{-\beta}$ and there exists $C=C(M,\nu,\beta)>0$ such that \begin{equation*}\label{,} \sup_{t\in \mathbb{R}} \| u(t) \|_{H}+ \left \| u \right \|_{V_{-\beta}} \leq C\left \| f \right \|_{W_{-\beta}}. \end{equation*} \end{prop} \begin{proof} Since $W_{-\beta} \hookrightarrow V_{\beta}^\star \hookrightarrow V_0^\star$, Lemma \ref{lemme Hidden Coerc} provides us with a solution $u=\mathcal{H}^{-1}f \in V_0$ with the estimate $\left \| u \right \|_{V_0} \leq C(M,\nu) \left \| f \right \|_{V_0^\star}$, so in particular, $\left \| u \right \|_{L^2(\mathbb{R};D_{S,1})} \leq C(M,\nu, \beta) \left \| f \right \|_{W_{-\beta}}.$ Uniqueness in $L^2(\mathbb{R};D_{S,1})$ is provided by Proposition \ref{prop:uniqueness}. Writing the equation as $$\partial_t u +S^2 u= S^2u -\mathcal{B}u+f \ \ \mathrm{in} \ \mathcal{D}'(\mathbb{R};E_\infty),$$ we may combine Theorems \ref{Régula Cas L1} and \ref{cas Etheta} with $\mathcal{B}u \in L^2(\mathbb{R};D_{S,-1})$, to see that $u \in C_0(\mathbb{R};H)\cap V_{-\beta}$ with \begin{align*} \sup_{t\in \mathbb{R}} \| u(t) \|_{H}+\left \| u \right \|_{V_{-\beta}}&\leq C(M,\nu, \beta) \left ( \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})} + \left \| f \right \|_{W_{-\beta}} \right ). \end{align*} Therefore, $$\sup_{t\in \mathbb{R}} \| u(t) \|_{H}+ \left \| u \right \|_{V_{-\beta}} \leq C(M,\nu,\beta)\left \| f \right \|_{W_{-\beta}}.$$ \end{proof} \begin{cor}[Boundedness properties of $\mathcal{H}^{-1} $]\label{Cas L^2 pour L^r} Fix $\rho \in [2,\infty)$ and set $\beta={2}/{\rho}\in (0,1]$.
Then, $$\mathcal{H}^{-1} : L^{\rho'}(\mathbb{R};D_{S,-\beta}) \rightarrow V_{-\beta} \cap C_0(\mathbb{R};H) \ \text{is bounded}.$$ The same holds for $(\mathcal{H}^\star)^{-1}$. \end{cor} \begin{proof} Combine Proposition \ref{prop:W-beta}, Lemma \ref{Hardy-Littlewood-Sobolev} and Proposition \ref{prop:embed r>0}. \end{proof} \begin{rem} For fixed $f\in L^{\rho'}(\mathbb{R};D_{S,-\beta})$, we have $\mathcal{H}^{-1}f \in V_{-\beta} \subset V_0$. In particular, using Proposition \ref{prop:embed r>0}, we have $\mathcal{H}^{-1}f \in L^r(\mathbb{R};D_{S,\alpha})$ for any $ r\in (2,\infty)$ where $\alpha={2}/{r} $ and there exists a constant $C=C(M,\nu,\beta)>0$ such that $$ \left \| \mathcal{H}^{-1}f \right \|_{L^r(\mathbb{R};D_{S,\alpha})}\leq C\left \| f \right \|_{L^{\rho'}(\mathbb{R};D_{S,-\beta})}.$$ The same is true for $(\mathcal{H}^\star)^{-1}$. \end{rem} \subsubsection{Source term in $L^1(\mathbb{R};H)$} The previous results rely on Lemma \ref{lemme Hidden Coerc} to prove the existence, so they do not apply anymore when $f \in L^1(\mathbb{R};H)$ since $L^1(\mathbb{R};H) \nsubseteq V_0^\star$. Yet, we can solve with such source terms using a duality scheme. \begin{prop}[Existence and uniqueness for source in $L^1$] \label{prop:L1} Let $f \in L^1(\mathbb{R};H)$. Then there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution to $ \partial_t u +\mathcal{B} u =f$ in $\mathcal{D'}(\mathbb{R}; E_\infty).$ Moreover, $u \in C_0(\mathbb{R}; H)\cap L^2(\mathbb{R};D_{S,1})$ and there exists a constant $C=C(M,\nu)>0$ such that \begin{equation}\label{asked estimate} \sup_{t\in \mathbb{R}} \| u(t) \|_{H}+\left \| u \right \|_{L^2(\mathbb{R};D_{S,1})} \leq C\left \| f \right \|_{L^1(\mathbb{R};H)}. \end{equation} \end{prop} \begin{proof} Uniqueness is provided by Proposition \ref{prop:uniqueness}.
To prove the existence, we remark that Corollary \ref{Cas L^2 pour L^r} for the backward operator $\mathcal{H}^\star$ in the case $\rho=2$ implies that $(\mathcal{H}^\star)^{-1}$ is bounded from $L^2(\mathbb{R};D_{S,-1})$ into $C_0(\mathbb{R}; H)$. We define \begin{align*} \mathcal{T}: L^1(\mathbb{R};H) \rightarrow \mathcal{D'}(\mathbb{R};E_\infty), \ \llangle \mathcal{T}f, \varphi \rrangle_{\mathcal{D}',\mathcal{D}}:= \llangle f, (\mathcal{H}^\star)^{-1}\varphi \rrangle_{L^1(\mathbb{R};H),L^\infty(\mathbb{R};H)}, \end{align*} and we have \begin{equation}\label{1} \left \| \mathcal{T}f \right \|_{L^2(\mathbb{R};D_{S,1})} \leq C(M,\nu) \left \| f \right \|_{L^1(\mathbb{R};H)}. \end{equation} Next, let $f$ in $\mathcal{D}(\mathbb{R};E_{-\infty})$. We write for all $\varphi \in \mathcal{D}(\mathbb{R};E_{-\infty})$, observing that $\mathcal{D}(\mathbb{R};E_{-\infty})\subset V_0^\star$, \begin{align*} \llangle \mathcal{T}f, \varphi \rrangle_{\mathcal{D}',\mathcal{D}} = \llangle \mathcal{H}\mathcal{H}^{-1}f, (\mathcal{H}^\star)^{-1}\varphi \rrangle_{V_0^\star, V_0}= \llangle \mathcal{H}^{-1}f, \varphi \rrangle_{\mathcal{D}',\mathcal{D}}. \end{align*} Hence, $\mathcal{T}f=\mathcal{H}^{-1}f \in C_0(\mathbb{R}; H)$ and \begin{equation}\label{Eq} \forall \varphi \in \mathcal{D}(\mathbb{R};E_{-\infty}), \ \ -\llangle \mathcal{T}f, \partial_t \varphi \rrangle_{\mathcal{D}',\mathcal{D}} + \int_{\mathbb{R}} B_t(\mathcal{T}f(t),\varphi(t))\ \mathrm d t = \int_{\mathbb{R}} \langle f(t), \varphi(t) \rangle_H \ \mathrm d t. \end{equation} Using \eqref{important existence cas L1}, we obtain \begin{equation}\label{2} \sup_{t\in \mathbb{R}} \| \mathcal{T}f(t) \|_{H} \leq \sqrt{2M} \left\| \mathcal{T}f \right\|_{L^2(\mathbb{R};D_{S,1})}+(1+\sqrt{2})\left\| f \right\|_{L^1(\mathbb{R};H)} \leq C(M,\nu) \left\| f \right\|_{L^1(\mathbb{R};H)}. 
\end{equation} Now, let us pick $f \in L^1(\mathbb{R};H).$ Let $(f_k)_{k \in \mathbb{N}} \in \mathcal{D}(\mathbb{R};E_{-\infty})^{\mathbb{N}}$ such that $ f_k \to f$ in $L^1(\mathbb{R};H)$. By \eqref{1} and \eqref{2}, we have $\mathcal{T}f_k \to \mathcal{T}f$ in $L^2(\mathbb{R};D_{S,1}) \cap C_0(\mathbb{R}; H)$. Using \eqref{Eq} with $\mathcal{T}f_k$ for a fixed $\varphi \in \mathcal{D}(\mathbb{R};E_{-\infty})$ and letting $k \to \infty$ implies that $\partial_t (\mathcal{T}f) + \mathcal{B}(\mathcal{T}f) = f$ in $\mathcal{D}'(\mathbb{R};E_{\infty}).$ \end{proof} \subsubsection{Source term is a bounded measure on $H$} First, we define the space of bounded $H$-valued measures on $\mathbb{R}$, denoted $\mathcal{M}(\mathbb{R};H)$, as the topological anti-dual space of $C_0(\mathbb{R};H)$ with respect to the sup-norm. We denote by $\llangle \cdot, \cdot \rrangle_{\mathcal{M},C_0}$ the anti-duality bracket. We equip the space $\mathcal{M}(\mathbb{R};H)$ with the operator norm, that is \begin{equation*} \left \|\mu \right \|_{\mathcal{M}}:= \sup_{\varphi \in C_0(\mathbb{R};H)\setminus\left \{ 0 \right \}} \frac{\left | \llangle \mu, \varphi \rrangle_{\mathcal{M},C_0} \right |}{\ \ \ \ \left \|\varphi \right \|_{L^\infty(\mathbb{R};H)}}. \end{equation*} It is a Banach space containing a subspace isometric to $L^1(\mathbb{R};H)$. For $\mu \in \mathcal{M}(\mathbb{R};H)$, $\mathrm{supp}(\mu)$ is the complement of the largest open set of $\mathbb{R}$ on which $\mu$ is equal to $0$. More precisely, we say that $\mu$ equals $0$ on an open set $\Omega \subset \mathbb{R}$ if for all $\phi \in C_0(\mathbb{R},H)$ with support contained in $\Omega$, $\llangle \mu, \phi \rrangle_{\mathcal{M},C_0}=0$. Let $\mathcal{N}$ denote the set of all such open sets. We have \begin{align*} \mathrm{supp}(\mu ):= \mathbb{R}\setminus \bigcup_{\Omega \in \mathcal{N}} \Omega. \end{align*} An important example is given by Dirac measures.
For any $s \in \mathbb{R}$ and $a \in H$, we denote by $\delta_s \otimes a$ the Dirac measure at $s$ carried by $a$, which is defined by \begin{equation*} \llangle \delta_s \otimes a, \varphi \rrangle_{\mathcal{M},C_0} = \langle a, \varphi(s) \rangle_H, \ \varphi \in C_0(\mathbb{R};H). \end{equation*} We first state the classical lemma below for later use. \begin{lem}\label{Radon} Let $\mu \in \mathcal{M}(\mathbb{R};H).$ Then there exists a family $(f_\varepsilon)_{\varepsilon>0}$ in $L^1(\mathbb{R};H)$ such that $f_\varepsilon \rightharpoonup \mu$ (weak-$\star$ convergence as $\varepsilon \to 0$) and $\sup_{\varepsilon >0} \left \| f_\varepsilon \right \|_{L^1(\mathbb{R};H)} \leq \left \| \mu \right \|_{\mathcal{M}}.$ \end{lem} \begin{proof} We obtain the family $(f_\varepsilon)_{\varepsilon >0}$ by convolving $\mu$ with a scalar mollifying family $(\varphi_\varepsilon)_{\varepsilon >0}$ and we easily check that it has all the required properties. \end{proof} \begin{prop}[Existence and uniqueness for bounded measure source] \label{CasRadon} Let $\mu \in \mathcal{M}(\mathbb{R};H)$. Then there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution to $ \partial_t u +\mathcal{B} u =\mu$ in $\mathcal{D'}(\mathbb{R}; E_\infty).$ Moreover, $u \in L^\infty(\mathbb{R};H)$ and there is a constant $C=C(M,\nu)>0$ such that \begin{equation}\label{ESTT} \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})}+\left \| u \right \|_{L^\infty(\mathbb{R};H)} \leq C(M,\nu) \left \| \mu \right \|_{\mathcal{M}}. \end{equation} If $I \subset \mathbb{R}\setminus \mathrm{supp}(\mu)$ is an unbounded open interval, then $u \in C_0(\overline{I},H)$ and $t \mapsto \left \| u(t) \right \|_H^2$ is absolutely continuous on $\overline{I}$. Moreover, if $I$ is a neighbourhood of $-\infty$, then $u=0$ on $\overline{I}$. \end{prop} \begin{proof} Uniqueness is provided by Proposition \ref{prop:uniqueness}.
To prove the existence, we use Lemma \ref{Radon} to pick $\left ( f_n \right )_{n \in \mathbb{N}}\in L^1(\mathbb{R};H)^\mathbb{N}$ such that $f_n \rightharpoonup \mu$ and $\sup_{n\in \mathbb{N}} \left \| f_n \right \|_{L^1(\mathbb{R};H)} \leq \left \| \mu \right \|_{\mathcal{M}}.$ By Proposition \ref{prop:L1}, for all $n \in \mathbb{N}$, there is a unique $u_n \in L^2(\mathbb{R};D_{S,1})$ solution of the equation $\partial_t u_n+\mathcal{B}u_n= f_n$ in $\mathcal{D}'(\mathbb{R};E_\infty)$ and we have \eqref{asked estimate}, implying \begin{equation}\label{EST} \left \| u_n \right \|_{L^2(\mathbb{R};D_{S,1})}+\left \| u_n \right \|_{L^\infty(\mathbb{R};H)} \leq C(M,\nu) \left \| \mu \right \|_{\mathcal{M}}. \end{equation} Using the Banach-Alaoglu theorem, there exists $u \in L^2(\mathbb{R};D_{S,1}) \cap L^\infty(\mathbb{R};H)$ such that, up to extracting a subsequence, $u_n \rightharpoonup u$ weakly in $L^2(\mathbb{R};D_{S,1})$ and weakly-$\star$ in $L^\infty(\mathbb{R};H)$ for the duality pairing $L^\infty$-$L^1$. We have \eqref{ESTT} and easily pass to the limit in the equation to obtain the desired solution. Now, if $I \subset \mathbb{R}$ is an unbounded open interval such that $ I \cap \mathrm{supp}(\mu)= \varnothing$, then by Corollary \ref{corenergy}, $u \in C_0(\overline{I},H)$ and $t \mapsto \left \| u(t) \right \|_H^2$ is absolutely continuous on $\overline{I}$. If $I$ is a neighbourhood of $-\infty$, then $u=0$ on $I$ by Proposition \ref{prop:uniqueness}. \end{proof} The next corollary is crucial to construct the fundamental solution and the Green operators. \begin{cor}[Existence and uniqueness for Dirac measure source]\label{CorRadon} Let $s \in \mathbb{R}$ and $a \in H$.
Then there exists a unique $u \in L^2(\mathbb{R};D_{S,1})$ solution to $ \partial_t u + \mathcal{B}u = \delta_s \otimes a \ \ \text{in} \ \mathcal{D}'(\mathbb{R};E_{\infty}).$ Moreover, $u\in C_0(\mathbb{R}\setminus \{ s \}; H)$, equals $0$ on $(-\infty,s)$ and $\lim_{t \to s^+} u(t)=a$ in $H$, and there is a constant $C=C(M,\nu)>0$ such that \begin{equation*} \sup_{t\in \mathbb{R}\setminus\{s\}}\left \|u(t) \right \|_H + \left \| u \right \|_{L^2(\mathbb{R};D_{S,1})} \leq C \left \| a \right \|_H. \end{equation*} Furthermore, {$u_{\scriptscriptstyle{\vert (s,\infty)}}$ is the restriction of an element in $V_{-1}$}. \end{cor} \begin{proof} Applying Proposition \ref{CasRadon}, we obtain existence and uniqueness of $u$ with the estimates; moreover, $u \in C_0(\mathbb{R}\setminus \{ s \};H)$, vanishes on $(-\infty,s)$ and has a limit $u(s^{+})$ as $t \to s^+$. That $u_{\scriptscriptstyle{\vert (s,\infty)}}$ is a restriction to $(s,\infty)$ of an element in $V_{-1}$ follows from the fact that $\partial_t u=f$ {in $\mathcal{D}'((s,\infty);E_\infty)$} with $f= -\mathcal{B}u$ on $(s,\infty)$ and the method of Corollary \ref{corenergy}, by taking the even extension of $u$ and the odd extension of $f$ with respect to $s$. It remains to show $u(s^{+})=a$. Let $\Tilde{a}\in E_{-\infty}$ and $\theta\in \mathcal{D}(\mathbb{R})$ with $\theta(s)=1$ and set $\varphi=\theta\otimes \Tilde{a}$.
Using the absolute continuity of $t\mapsto \langle u(t), \varphi(t)\rangle_H$ on both $[s,\infty)$ and $(-\infty,s]$, which follows from Corollary \ref{EnergyPol}, we have \begin{align*} - \langle u(s^+), \Tilde{a}\rangle_H &= \int_s^\infty - B_t(u(t),\varphi(t)) + \langle u(t), \partial_t\varphi(t)\rangle_H \ \mathrm d t, \\ + \langle u(s^-), \Tilde{a}\rangle_H &= \int_{-\infty}^s - B_t(u(t),\varphi(t)) + \langle u(t), \partial_t\varphi(t)\rangle_H \ \mathrm d t. \end{align*} By the equation for $u$ on $\mathbb{R}$, and since $u\in L^1_{\mathrm{loc}}(\mathbb{R}; H)$, we obtain \begin{equation*} \int_{-\infty}^\infty B_t(u(t),\varphi(t)) - \langle u(t), \partial_t\varphi(t)\rangle_H \ \mathrm d t = \llangle \delta_s \otimes a, \varphi \rrangle_{\mathcal{M},C_0}= \langle a, \Tilde{a}\rangle_H. \end{equation*} Summing up, $- \langle u(s^+), \Tilde{a}\rangle_H+ \langle u(s^-), \Tilde{a}\rangle_H= - \langle a, \Tilde{a}\rangle_H$. As $u(s^-)=0$, this yields $\langle u(s^+), \Tilde{a}\rangle_H= \langle a, \Tilde{a}\rangle_H$ and we conclude by density of $E_{-\infty}$ in $H$. \end{proof} \subsection{Green operators}\label{S2S6} The notion of Green operators below was first introduced by J.-L. Lions \cite{lions2013equations}. \begin{defn}[Green operators] Let $t,s \in \mathbb{R}$ and $a,\Tilde{a} \in H$. \begin{enumerate} \item For $t \neq s$, $G(t,s)a$ is defined as the value at time $t$ of the solution $u \in L^2(\mathbb{R};D_{S,1})$ of the equation $\partial_t u + \mathcal{B}u = \delta_s \otimes a$ in Corollary \ref{CorRadon}. \item For $s \neq t$, $\Tilde{G}(s,t)\Tilde{a}$ is defined as the value at time $s$ of the solution $u \in L^2(\mathbb{R};D_{S,1})$ of the equation $-\partial_s u + \mathcal{B^\star}u = \delta_t \otimes \Tilde{a}$ in Corollary \ref{CorRadon}.
\end{enumerate} The operators $G(t,s)$ and $\Tilde{G}(s,t)$ are called the Green operators for the parabolic operator $\partial_t +\mathcal{B}$ and the backward parabolic operator $ -\partial_t +\mathcal{B^\star}$, respectively. \end{defn} The properties discussed in the last section can be summarized in the following corollary. \begin{cor}[Estimates for Green operators]\label{Cor Green} There is a constant $C=C(M,\nu)>0$ such that one has the following statements. \begin{enumerate} \item For all $t<s \in \mathbb{R}$, $G(t,s)=0$ and for all $a \in H$, $t \mapsto G(t,s)a \in C_0([s,\infty);H)$ with $ G(s,s)a =a $ {and it is a restriction to $(s,\infty)$ of an element in $V_{-1}$}, and for any $r \in [2,\infty)$, $a \in H$ and $s\in \mathbb{R}$, we have $G(\cdot,s)a \in L^r((s,\infty);D_{S,\alpha})$ where $\alpha={2}/{r}$ with \begin{equation*} \sup_{t \geq s} \left \| G(t,s) \right \|_{\mathcal{L}(H)} \leq C \ \ \mathrm{and} \ \ \int_{s}^{+\infty} \left \| G(t,s)a \right \|^r_{S,\alpha}\mathrm d t \leq C^r \left \| a \right \|_H^r. \end{equation*} \item For all $s>t \in \mathbb{R}$, $\Tilde{G}(s,t)=0$ and for all $\Tilde{a} \in H$, $s \mapsto \Tilde{G}(s,t)\Tilde{a} \in C_0((-\infty,t];H)$ with $\Tilde{G}(t,t)\Tilde{a} =\Tilde{a}$ {and it is a restriction to $(-\infty,t)$ of an element in $V_{-1}$}, and for any $r \in [2,\infty)$, $\Tilde{a} \in H$ and $t\in \mathbb{R}$, we have $\Tilde{G}(\cdot,t)\Tilde{a} \in L^r((-\infty,t);D_{S,\alpha})$ where $\alpha={2}/{r}$ with \begin{equation*} \sup_{s \leq t} \| \Tilde{G}(s,t) \|_{\mathcal{L}(H)} \leq C \ \ \mathrm{and} \ \ \int_{-\infty}^{t} \| \Tilde{G}(s,t)\Tilde{a} \|^r_{S,\alpha}\mathrm d s \leq C^r \| \Tilde{a} \|_H^r. \end{equation*} \end{enumerate} \end{cor} \begin{proof} The properties follow from the construction in Corollary \ref{CorRadon} and the interpolation inequalities in Proposition \ref{prop:embed r>0} for $2<r<\infty$. \end{proof} Moreover, the expected adjointness and Chapman-Kolmogorov relations hold.
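At a formal level, both identities rest on the following computation, which Corollary \ref{EnergyPol} makes rigorous: if $u$ solves the forward equation and $v$ the backward one on a common interval, then \begin{equation*} \frac{\mathrm d}{\mathrm d t}\langle u(t), v(t)\rangle_H = \langle \partial_t u(t), v(t)\rangle_H + \langle u(t), \partial_t v(t)\rangle_H = -B_t(u(t),v(t)) + B_t(u(t),v(t)) = 0, \end{equation*} so that $t \mapsto \langle u(t), v(t)\rangle_H$ is constant there.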
\begin{prop}[Adjointness and Chapman-Kolmogorov identities]\label{Prop Green} The following statements hold. \begin{enumerate} \item For all $s <t $, $G(t,s)$ and $\Tilde{G}(s,t)$ are adjoint operators. \item For any $s < r <t $, we have $G(t,s)=G(t,r)G(r,s)$. \end{enumerate} \end{prop} \begin{proof} We first prove point (1). We fix $t,s \in \mathbb{R}$ such that $s<t$. For $a,\Tilde{a} \in H$, we can apply the integral identity of Corollary \ref{EnergyPol} to $u:= G(\cdot,s)a$ and $v = \Tilde{G}(\cdot,t)\Tilde{a}$ between $s$ and $t$. The integrand vanishes almost everywhere since, by duality, the contributions of $\mathcal{B}$ and $\mathcal{B^\star}$ cancel; hence \begin{equation*} \langle G(t,s)a, \Tilde{a}\rangle_H= \langle G(t,s)a, \Tilde{G}(t,t)\Tilde{a}\rangle_H = \langle G(s,s)a, \Tilde{G}(s,t)\Tilde{a}\rangle_H= \langle a, \Tilde{G}(s,t)\Tilde{a}\rangle_H. \end{equation*} The adjointness property follows. For point (2), we apply the same equality between $r$ and $t$ and use that the adjoint of $G(t,r)$ is $\Tilde{G}(r,t)$ from point (1), to obtain \begin{equation*} \langle G(t,s)a, \Tilde{a}\rangle_H= \langle G(t,s)a, \Tilde{G}(t,t)\Tilde{a}\rangle_H = \langle G(r,s)a, \Tilde{G}(r,t)\Tilde{a}\rangle_H= \langle G(t,r)G(r,s) a, \Tilde{a}\rangle_H. \end{equation*} \end{proof} \subsection{Fundamental solution}\label{sec:FS} We define the fundamental solution as representing the inverse of $\partial_t + \mathcal{B}$. \begin{defn}[Fundamental solution for $\partial_t + \mathcal{B}$] \label{FS} A fundamental solution for $\partial_t + \mathcal{B}$ is a family $\Gamma=(\Gamma(t,s))_{t,s \in \mathbb{R}}$ such that: \begin{enumerate} \item $\sup_{t,s \in \mathbb{R}} \left\|\Gamma(t,s) \right\|_{\mathcal{L}(H)} < +\infty.$ \item $\Gamma(t,s)=0$ if $s>t$. \item For all $a,\Tilde{a}\in E_{-\infty}$, the function $(t,s) \mapsto \langle \Gamma(t,s)a, \Tilde{a} \rangle_H$ is Borel measurable on $\mathbb{R}^2$.
\item For all $\phi \in \mathcal{D}(\mathbb{R})$ and $a \in E_{-\infty}$, the solution $u \in L^2(\mathbb{R};D_{S,1})$ of the equation $\partial_t u + \mathcal{B}u = \phi \otimes a $ in $\mathcal{D}'(\mathbb{R};E_\infty)$ satisfies for all $\Tilde{a} \in E_{-\infty}$, $\langle u(t), \Tilde{a} \rangle_H = \int_{-\infty}^{t} \phi(s) \langle \Gamma (t,s)a, \Tilde{a} \rangle_H \ \mathrm{d}s,$ for almost every $t\in \mathbb{R}$. \end{enumerate} One defines a fundamental solution $(\Tilde{\Gamma}(s,t))_{s,t \in \mathbb{R}}$ to the backward operator $-\partial_s + \mathcal{B}^\star$ analogously, with (2) replaced by $\Tilde{\Gamma}(s,t)=0$ if $s>t$. \end{defn} \begin{lem}[Uniqueness of fundamental solutions]\label{Unicité sol fonda} There is at most one fundamental solution to $\partial_t + \mathcal{B}$ in the sense of Definition \ref{FS}. \end{lem} \begin{proof} Assume $\Gamma_1$, $\Gamma_2$ are two fundamental solutions to $\partial_t + \mathcal{B}$. Fix $a$ and $\Tilde{a}$ in $E_{-\infty}$. The function $(t,s) \mapsto \langle \Gamma_k(t,s)a, \Tilde{a} \rangle_H$ is bounded and measurable for $k \in \left\{1,2 \right\}$ by (1) and (3), hence Fubini's Theorem with (2) and (4) yields for all $\phi, \Tilde{\phi} \in \mathcal{D}(\mathbb{R})$, \begin{equation*} \iint_{\mathbb{R}^2} \Tilde{\phi}(s) {\phi}(t) \langle \Gamma_1 (t,s)a, \Tilde{a} \rangle_H \ \mathrm{d}s\,\mathrm{d}t = \iint_{\mathbb{R}^2} \Tilde{\phi}(s) \phi(t) \langle \Gamma_2 (t,s)a, \Tilde{a} \rangle_H \ \mathrm{d}s\,\mathrm{d}t. \end{equation*} Therefore, we obtain $\langle \Gamma_1 (t,s)a, \Tilde{a} \rangle_H = \langle \Gamma_2 (t,s)a, \Tilde{a} \rangle_H$ for almost every $(t,s) \in \mathbb{R}^2$. At this stage, the almost everywhere equality can depend on $a$ and $\Tilde{a}$.
Applying this to test elements $a,\Tilde{a}$ ranging over a countable subset of $E_{-\infty}$ that is dense in $H$ and using that the operators $\Gamma_1(t,s)$, $\Gamma_2(t,s)$ are bounded on $H$ by (1), one deduces that $\Gamma_1$ and $\Gamma_2$ agree almost everywhere. \end{proof} \subsection{The Green operators are {the} fundamental solution operators} The two notions are well defined and we show that they lead to the same families. We partially borrow ideas from \cite{auscher2023universal}. \begin{thm}[Green operators and fundamental solution operators agree]\label{thm:identification} The family of Green operators is the fundamental solution (up to almost everywhere equality) and (4) holds for all $t\in \mathbb{R}$. \end{thm} \begin{proof} As there is at most one fundamental solution, it suffices to show that the family of Green operators satisfies the requirements (1)-(4) in Definition \ref{FS}. The Green operators verify (1) and (2) of Definition \ref{FS} by Corollary \ref{Cor Green}. For the measurability issue (3), remark that for all $a,\Tilde{a} \in E_{-\infty}$, the function $(t,s)\mapsto \langle G(t,s)a, \Tilde{a} \rangle_H$ is separately continuous on $\mathbb{R}^2\setminus \left \{ (t,t) : t \in \mathbb{R} \right \}$, hence Borel measurable on $\mathbb{R}^2$. We only have to prove (4), namely that for any $\phi \in \mathcal{D}(\mathbb{R})$ and $a,\Tilde{a}\in E_{-\infty},$ if $u$ is the weak solution for the source term $\phi\otimes a$, we have for all (not just almost all) $t \in \mathbb{R}$, \begin{equation}\label{GreenLemma} \left\langle u(t), \Tilde{a} \right \rangle_H = \int_{-\infty}^{t}\phi(s) \left\langle G(t,s)a, \Tilde{a} \right \rangle_H \ \mathrm{d}s. \end{equation} Fix $t\in \mathbb{R}$. Introduce $\Tilde{u}=\Tilde{G}(\cdot, t)\Tilde{a}$.
Using the absolute continuity of $s\mapsto \langle u(s),\Tilde{u}(s)\rangle_H$ on $(-\infty, t]$ with its zero limit at $-\infty$, and seeing that $\phi\otimes a \in L^1(\mathbb{R};H)$, we have by Corollary \ref{EnergyPol} \begin{equation*} \langle u(t),\Tilde{a}\rangle_H= \langle u(t),\Tilde{u}(t)\rangle_H= \int_{-\infty}^t \langle \phi(s) a, \Tilde{u}(s)\rangle_H \ \mathrm ds. \end{equation*} For $s\le t$, using Proposition \ref{Prop Green} in the last equality below \begin{equation*} \langle \phi(s) a, \Tilde{u}(s)\rangle_H = \phi(s)\langle a, \Tilde{G}(s,t)\Tilde{a}\rangle_H= \phi(s)\langle G(t,s)a, \Tilde{a}\rangle_H \end{equation*} and we are done. \end{proof} \subsection{Representation with the fundamental solution operators} Having identified the Green operators with the fundamental solution operators, the latter inherit the properties of the former. From now on, we use the more traditional notation $\Gamma(t,s)$. We can now state a complete representation theorem for all the distributional solutions seen in the last subsections, with specified convergence issues. \begin{thm}\label{ThmGreenReprese} Let $s\in \mathbb{R}$, $a \in H$, $g \in L^{\rho'}(\mathbb{R};H)$, where $\rho \in [2,\infty]$ and $\beta={2}/{\rho}$. Then the unique solution $u \in L^2(\mathbb{R};D_{S,1})$ of the equation \begin{equation*} \partial_t u +\mathcal{B}u= \delta_s \otimes a + S^\beta g \ \ \mathrm{in} \ \mathcal{D}'(\mathbb{R};E_\infty) \end{equation*} obtained by combining Propositions \ref{prop:W-beta}, \ref{prop:L1} and Corollary \ref{CorRadon} can be represented pointwise by the equation \begin{equation*} u(t) = \Gamma(t,s)a + \int_{-\infty}^{t} \Gamma(t,\tau) S^\beta g(\tau)\ \mathrm d \tau, \end{equation*} where the integral is weakly defined in $H$ when $\rho<\infty$ and strongly defined when $\rho=\infty$ (i.e., in the Bochner sense).
More precisely, for all $\Tilde{a}\in H$, we have the equality with absolutely converging integral \begin{equation}\label{WeaksensGreen} \langle u(t), \Tilde{a} \rangle_H = \langle \Gamma(t,s)a, \Tilde{a} \rangle_H + \int_{-\infty}^{t} \langle g(\tau),S^\beta\Tilde{\Gamma}(\tau,t) \Tilde{a} \rangle_{H} \ \mathrm d \tau. \end{equation} \end{thm} \begin{rem} Remark that by Proposition \ref{prop:embed r>0}, one could even reduce to proving the result when $\rho=2$ and $\rho=\infty$. \end{rem} \begin{proof} It is enough to prove \eqref{WeaksensGreen}. By uniqueness and linearity, we start by writing $u=u_1+u_2$, where $u_k$ is the solution of the equation with only the $k^{\mathrm{th}}$ term on the right-hand side. Recall that we have identification of $G$ and $\Gamma$. Fix $t \in \mathbb{R}.$ The first term involving $a$ is $u_1(t)=\Gamma(t,s)a$ by construction and identification. The argument for $u_2$ is as follows. According to \eqref{GreenLemma} in the proof of Theorem \ref{thm:identification}, the weak solution $v$ obtained from source $\phi\otimes c$, where $\phi\in \mathcal{D}(\mathbb{R})$ and $c \in E_{-\infty}$, satisfies for any $\Tilde{a} \in E_{-\infty}$ and $t\in \mathbb{R}$, \begin{align*} \langle v(t), \Tilde{a} \rangle_H &= \int_{-\infty}^{t}\phi(\tau) \langle \Gamma(t,\tau)c, \Tilde{a} \rangle_H \ \mathrm{d}\tau \\&= \int_{-\infty}^{t} \langle (\phi\otimes c)(\tau), \Tilde{\Gamma}(\tau,t)\Tilde{a} \rangle_H \ \mathrm{d}\tau. \end{align*} For any $\rho\in [2,\infty]$, we have $$ \int_{-\infty}^{t} |\langle h(\tau),\Tilde{\Gamma}(\tau,t) \Tilde{a} \rangle_{H,-\beta}| \ \mathrm{d}\tau \le C \|h\|_{L^{\rho'}(\mathbb{R};D_{S,-\beta})} \|\Tilde{a}\|_H$$ by using the Cauchy-Schwarz inequality in $H$ and the H\"older inequality, invoking the estimates for $\Tilde{G}$ in Corollary \ref{Cor Green}.
Writing $\langle (\phi\otimes c)(\tau), \Tilde{\Gamma}(\tau,t)\Tilde{a} \rangle_H$ as $\langle (\phi\otimes c)(\tau), \Tilde{\Gamma}(\tau,t)\Tilde{a} \rangle_{H,-\beta}$, we can conclude for $u_2$ by density of the span of tensor products $\phi\otimes c$ in $L^{\rho'}(\mathbb{R};D_{S,-\beta})$, and density of $E_{-\infty}$ in $H$. For $\rho=\infty$, we may also verify the strong convergence. \end{proof} We record the following operator-valued Schwartz kernel result. \begin{prop} Let $\phi,\Tilde{\phi}\in \mathcal{D}(\mathbb{R})$ and $a,\Tilde{a}\in H$. Then, \begin{equation*} \llangle \mathcal{H}^{-1}(\phi \otimes a), \Tilde{\phi}\otimes \Tilde{a} \rrangle_{\mathcal{D}', \mathcal{D}} = \iint \phi(s) \langle \Gamma(t,s)a, \Tilde{a} \rangle_H \overline{\Tilde{\phi}}(t) \ \mathrm d s\mathrm d t. \end{equation*} In other words, one can see $\langle \Gamma(t,s)a, \Tilde{a} \rangle_H$ as the Schwartz kernel of the sesquilinear map $(\phi,\Tilde{\phi})\mapsto \langle \mathcal{H}^{-1}(\phi \otimes a), \Tilde{\phi}\otimes \Tilde{a} \rangle_{\mathcal{D}', \mathcal{D}}$ on $\mathcal{D}(\mathbb{R})\times \mathcal{D}(\mathbb{R}).$ \end{prop} \begin{proof} By density of $E_{-\infty}$ in $H$ and boundedness of the Green operators, we may use \eqref{GreenLemma} for $a,\Tilde{a}\in H$ and we obtain, \begin{align*} \llangle \mathcal{H}^{-1}(\phi \otimes a), \Tilde{\phi}\otimes \Tilde{a} \rrangle_{\mathcal{D}', \mathcal{D}} &= \int_{\mathbb{R}} \langle \mathcal{H}^{-1}(\phi \otimes a)(t), \Tilde{a} \rangle_H \overline{\Tilde{\phi}}(t)\ \mathrm{d}t \\& = \int_{\mathbb{R}} \left ( \int_{-\infty}^{t} \phi(s) \langle \Gamma(t,s)a,\Tilde{a}\rangle_H \ \mathrm ds \right ) \overline{\Tilde{\phi}}(t)\ \mathrm{d}t \\& = \iint \phi(s) \langle \Gamma(t,s)a, \Tilde{a} \rangle_H \overline{\Tilde{\phi}}(t) \ \mathrm d s\mathrm d t, \end{align*} where we have used Fubini's theorem and $\Gamma(t,s)=0$ for $s>t$ in the last line.
\end{proof} \subsection{The Cauchy problem and the fundamental solution} In this section, we consider the Cauchy problem on the interval $(0,\infty)$. The coefficients $B_t$ are defined on $(0,\infty)$ and satisfy \eqref{EllipticityAbstract} there. We fix $\rho \in [2,\infty]$ and set $\beta={2}/{\rho}$. The Cauchy problem with initial condition $a\in H$ and $g\in L^{\rho'}((0,\infty);H)$ consists in finding a weak solution to \begin{align}\label{Pb Cauchy homogène} \left\{ \begin{array}{ll} \partial_t u +\mathcal{B}u = S^\beta g \ \mathrm{in} \ \mathcal{D}'((0,\infty);E_\infty), \\ u(0)=a \ \mathrm{in} \ E_\infty. \end{array} \right. \end{align} \begin{rem} Note that when $f \in L^2((0,\infty);K)$, there exists $g\in L^2((0,\infty); H)$ such that $T^*f=Sg$, hence the $\beta=1$ case covers the classical Lions equation. \end{rem} \begin{defn} A weak solution to \eqref{Pb Cauchy homogène} is a function $ u \in L^2((0,\infty); D_{S,1}), $ with \begin{enumerate} \item [(i)] $u$ solves the first equation in $\mathcal{D}'((0,\infty);E_\infty)$, that is, for all $\varphi \in \mathcal{D}((0,\infty);E_{-\infty})$ \begin{align*} \int_{0}^{\infty} -\langle u(t), \partial_t{\varphi }(t)\rangle_{H,1} + B_t(u(t),\varphi(t))\ \mathrm d t = \int_{0}^{\infty} \langle g(t), S^\beta \varphi(t)\rangle_{H} \ \mathrm d t. \end{align*} \item [(ii)] $ \forall \Tilde{a} \in E_{-\infty}, \ \langle u(t),\Tilde{a} \rangle_{H} \rightarrow \langle a, \Tilde{a}\rangle_{H}$, along a sequence tending to $0$. \end{enumerate} \end{defn} A weaker formulation testing against functions $\varphi \in \mathcal{D}([0,\infty);E_{-\infty})$ with right-hand side containing the additional term $\langle a, \varphi(0) \rangle_{H}$ is often encountered. In the end it amounts to the same solutions thanks to \textit{a priori} continuity in $H$, which only uses the upper bound on $B_t$.
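Concerning the remark above that $T^*f$ can be rewritten as $Sg$, here is a brief justification (a standard polar decomposition argument, recalled for convenience). Writing $T=US$ with $U$ a partial isometry mapping $\overline{\mathrm{ran}(S)}$ onto $\overline{\mathrm{ran}(T)}$, we have $T^*=SU^*$, hence \begin{equation*} T^*f = S(U^*f)=Sg \quad \text{with } g:=U^*f \ \text{ and } \ \|g\|_{L^2((0,\infty);H)} \le \|f\|_{L^2((0,\infty);K)}. \end{equation*}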
\begin{prop}\label{prop:weaksolCP} Any weak solution to (i) belongs to $C_0([0,\infty); H)$ and $t\mapsto\|u(t)\|_H^2$ satisfies the energy equality for any $ \sigma, \tau \in [0,\infty]$ such that $ \sigma < \tau$, \begin{align*} \left \| u(\tau) \right \|^2_H + 2\mathrm{Re}\int_{\sigma}^{\tau} B_t(u(t),u(t))\ \mathrm d t = \left \| u(\sigma) \right \|^2_H + 2\mathrm{Re}\int_{\sigma}^{\tau} \langle g(t),S^\beta u(t)\rangle_{H} \ \mathrm d t. \end{align*} \end{prop} \begin{proof} We assume $u \in L^2((0,\infty); D_{S,1})$ and the equation implies that $\partial_tu \in L^2((0,\infty); D_{S,-1})+ L^{\rho'}((0,\infty);D_{S,-\beta}).$ We may apply Corollary \ref{corenergy}. \end{proof} The main result of this section is the following theorem which puts together all the theory developed so far. \begin{thm}\label{ThmCauchy homog} Consider the above assumptions on $B_t$, $\rho, \beta$ and $g, a$. \begin{enumerate} \item There exists a unique weak solution $u$ to the problem \eqref{Pb Cauchy homogène}. Moreover, $u \in C_0([0,\infty);H)\cap L^r((0,\infty);D_{S,\alpha})$ for any $r \in (2,\infty)$ with $\alpha={2}/{r}$ $($if $\rho<\infty$, then $u$ is also the restriction to $(0,\infty)$ of an element in $V_{-\beta})$ and $$ \sup_{t\in [0,\infty)} \| u(t) \|_{H}+\left \| u \right \|_{L^2((0,\infty);D_{S,1})}+\left \| u \right \|_{L^r((0,\infty);D_{S,\alpha})} \leq C \left ( \left \| g \right \|_{L^{\rho'}((0,\infty);H)}+ \left \| a \right \|_H \right ),$$ where $C=C(M,\nu, \rho)>0$ is a constant independent of $g, a$. \item There exists a unique fundamental solution $\Gamma=\{\Gamma(t,s)\}_{0\leq s \leq t <\infty}$ for $\partial_t+\mathcal{B}$ in the sense of Definition \ref{FS} in $(0,\infty)$.
In particular, for all $t \in [0,\infty)$, we have the following representation of $u$: \begin{align}\label{eq:repCauchy0infty} u(t)=\Gamma(t,0)a+ \int_{0}^{t} \Gamma(t,s)S^\beta g(s)\ \mathrm ds, \end{align} where the integral is weakly defined in $H$ when $\rho<\infty$ and strongly defined in $H$ when $\rho=\infty$ (i.e., in the Bochner sense). More precisely, for all $\Tilde{a} \in H$ and $t \in [0,\infty)$, we have equality with absolutely converging integral \begin{align}\label{faible} \langle u(t) , \Tilde{a}\rangle_H = \langle \Gamma(t,0)a , \Tilde{a}\rangle_H + \int_{0}^{t} \langle g(s) , S^\beta \Tilde{\Gamma}(s,t)\Tilde{a}\rangle_{H}\ \mathrm ds. \end{align} \end{enumerate} \end{thm} \begin{proof} We start with the existence of such a solution. We extend $g$ by $0$ on $(-\infty,0]$ and keep the same notation for the extensions. We also extend the family $({B}_t)_t$ to $\mathbb{R}$ by setting ${B}_t=\nu\langle S\cdot, S\cdot \rangle_H$ on $\mathbb{R}\setminus(0,\infty)$ and we keep calling ${\mathcal{B}}$ the operator associated to this family. Using Proposition \ref{prop:W-beta} when $\rho<\infty$ or Proposition \ref{prop:L1} when $\rho=\infty$ and Corollary \ref{CorRadon}, there exists a unique solution $ \Tilde{u} \in L^2(\mathbb{R};D_{S,1})$ of the equation \begin{equation*} \partial_t \Tilde{u} + \mathcal{B}\Tilde{u} = \delta_0 \otimes a + S^\beta g \ \ \mathrm{in} \ \mathcal{D}'(\mathbb{R};E_\infty). \end{equation*} Moreover, $\Tilde{u} \in C_0(\mathbb{R}\setminus\left \{ 0 \right \};H)$, $\Tilde{u}=0$ on $(-\infty,0)$ with $\lim_{t \to 0^+}\Tilde{u}(t)=a$ in $H$ and, if $\rho <\infty$, then $\Tilde{u}_{\scriptscriptstyle{\vert (0,\infty)}}$ is the restriction to $(0,\infty)$ of an element in $V_{-\beta}$.
Furthermore, we have $\Tilde{u} \in L^r(\mathbb{R};D_{S,\alpha})$ for any $r \in (2,\infty)$ with $\alpha={2}/{r}$ by Proposition \ref{prop:embed r>0} and we have the estimate $$ \left \| \Tilde{u} \right \|_{L^\infty(\mathbb{R};H)}+\left \| \Tilde{u} \right \|_{L^2(\mathbb{R};D_{S,1})}+\left \| \Tilde{u} \right \|_{L^r(\mathbb{R};D_{S,\alpha})} \leq C(M,\nu,\rho) \left ( \left \| g \right \|_{L^{\rho'}(\mathbb{R};H)}+ \left \| a \right \|_H \right ).$$ In addition, \eqref{WeaksensGreen} in Theorem \ref{ThmGreenReprese} implies \eqref{faible} and \eqref{eq:repCauchy0infty} for $\Tilde{u}(t)$ for all $t\in \mathbb{R}$ with the fundamental solution defined on $\mathbb{R}$. The candidate $u:= \Tilde{u}_{\scriptscriptstyle{\vert (0,\infty)}}$ satisfies all the required properties of the theorem, proving existence and representation. Next, we check uniqueness in the space $L^2((0,\infty);D_{S,1})$. We assume that $u$ is a solution to \eqref{Pb Cauchy homogène} with $a=0$ and $g=0$. By Proposition \ref{prop:weaksolCP}, \begin{align*} 2\mathrm{Re}\int_{0}^{\infty} B_t(u(t),u(t)) \ \mathrm d t = 0. \end{align*} Using the coercivity of $B_t$, we deduce that $u=0$ on $(0,\infty)$. Definition, existence and uniqueness of the fundamental solution in $(0,\infty)$ are obtained \textit{verbatim} as in Section \ref{sec:FS}. \end{proof} \begin{rem}\label{Cauchy Pb homo} The uniqueness argument in the previous proof does not work on a bounded interval $(0,\mathfrak{T})$ because Corollary \ref{corenergy} fails in this case. \end{rem} \begin{rem} Of course, by linearity, we can replace $S^\beta g$ by a linear combination of terms in $L^{\rho'}((0,\infty);D_{S,-\beta})$ for different $\rho$. \end{rem} \section{Inhomogeneous version}\label{sec:Inhomogenous} One would like to treat parabolic operators with elliptic part being ${\mathcal{B}}$ plus lower order terms, allowing $T$ to be non-injective (e.g., differential operators with Neumann boundary conditions).
Here is a setup for doing this effortlessly given the earlier developments. \subsubsection{Setup for the inhomogeneous theory}\label{sec:inhomogenouessetup} As before, consider $T$ and $S=({T}^*{T})^{1/2}$ without assuming that $T$ is injective. One can still define a Borel functional calculus associated to $S$ as in Subsection \ref{Section: calcul bor} by replacing $(0,\infty)$ by $[0,\infty)$. In the right hand side of the Calder\'on reproducing formula \eqref{eq:Calderon}, $v$ is replaced by its orthogonal projection onto $\overline{\mathrm{ran}(S)}$. The most important fact is that for any $\alpha \ge 0$, we can still define $S^\alpha$ as the closed operator $\mathbf{t^{\alpha}}(S)$, which is also positive and self-adjoint but not necessarily injective, having the same null space as $S$. Let $\Tilde{T}: D(\Tilde{T})= D(T) \rightarrow H\oplus K$ be the operator defined by $\Tilde{T}u:=(\lambda u,Tu)$, where $\lambda \in \mathbb{R}_+$. Assume $\lambda>0$. Then $\Tilde{T}$ is injective and $\Tilde{S}_\lambda=(\Tilde{T}^*\Tilde{T})^{1/2}=(\lambda^2+S^2)^{{1}/{2}}$ is a self-adjoint, positive and invertible operator on $H$, with domain $D(\Tilde{S}_\lambda)=D(\Tilde{T})=D(T)=D(S)$. Since $\Tilde{S}_\lambda =(\lambda^2+S^2)^{{1}/{2}}$, for $\lambda> 0$ and $\alpha \ge 0$ we have \begin{equation*} D_{\Tilde{S}_\lambda,\alpha}= D(S^\alpha) \ , \ \ \| \cdot \|_{\Tilde{S}_\lambda,\alpha} \simeq \| \cdot \|_{S,\alpha} + \| \cdot \|_H. \end{equation*} For $\alpha<0$, we know that the sesquilinear form $(u,v) \mapsto \langle \Tilde{S}_\lambda^\alpha u, \Tilde{S}_\lambda^{-\alpha} v \rangle_H$ defines a canonical duality pairing between $D_{\Tilde{S}_\lambda,\alpha}$ and $D_{\Tilde{S}_\lambda,-\alpha}$.
Therefore, for any $u \in D_{\Tilde{S}_\lambda,\alpha}$, there exists $(w,\Tilde{w}) \in H^2$ such that $$\langle \Tilde{S}_\lambda^\alpha u, \Tilde{S}_\lambda^{-\alpha} v \rangle_H = \langle w, S^{-\alpha} v \rangle_H+ \langle \Tilde{w}, v \rangle_H,$$ for all $v \in D(S^{-\alpha})$. In this sense, we write $D_{\Tilde{S}_\lambda,\alpha}=S^{-\alpha}H+H$ with norm equivalent to the quotient norm $$\| u \|_{\Tilde{S}_\lambda,\alpha} \simeq \inf_{u=S^{-\alpha}w+\Tilde{w}}(\| w \|_H+\| \Tilde{w} \|_H).$$ From now on and as before, we set $\Tilde{S}:=\Tilde{S}_1=(1+S^2)^{1/2}$. In conclusion, the ``inhomogeneous'' fractional spaces for $S$ become the ``homogeneous'' fractional spaces for $\Tilde{S}$, so that applying the above theory with $\Tilde{S}$ leads to the inhomogeneous theory for $S$ (even if $S$ is non-injective). Finally, we set $$ \Tilde{E}_{-\infty}:= \bigcap_{\alpha \in \mathbb{R}} D(\Tilde{S}^{\alpha }) .$$ \subsubsection{Embeddings and Integral identities} We begin by noting that Proposition \ref{prop:embed r>0} holds \textit{verbatim} with the same proof, even if $S$ is not necessarily injective. As for continuity and integral identities, we have to modify the statement as follows. \begin{prop}\label{prop:energyboundedintervalinhomo} Let $\mathfrak{T} \in (0, \infty]$. Let $u \in L^1((0, \mathfrak{T}); D(S))$ with $ \int_0^{\mathfrak{T}} \| S u(t) \|_H^2 \, \mathrm{d}t < \infty$ if $\mathfrak{T} < \infty$, or $u \in L^1((0, \mathfrak{T}'); D(S))$ for all $\mathfrak{T}' < \infty$, with $\int_0^{\infty} \| S u(t) \|_H^2 \, \mathrm{d}t < \infty$ if $\mathfrak{T} = \infty$. Assume that $ \partial_t u = S f + S^\beta g $ in $\mathcal{D}'((0, \mathfrak{T}); \Tilde{E}_{\infty}) $, where $f \in L^2((0, \mathfrak{T}); H)$ and $g \in L^{\rho'}((0, \mathfrak{T}); H)$, with ${\beta} = {2}/{\rho} \in [0, 1)$. When $\mathfrak{T} < \infty$, then $u \in C([0, \mathfrak{T}]; H)$, and the function $t \mapsto \|u(t)\|_H^2$ is absolutely continuous on $[0, \mathfrak{T}]$.
For all $\sigma, \tau \in [0, \mathfrak{T}]$ such that $\sigma < \tau$, the following integral identity holds: \begin{align*} \| u(\tau) \|_H^2 - \| u(\sigma) \|_H^2 = 2 \, \mathrm{Re} \int_{\sigma}^{\tau} \langle f(t), S u(t) \rangle_H \, \mathrm{d}t + 2 \, \mathrm{Re} \int_{\sigma}^{\tau} \langle g(t), S^\beta u(t) \rangle_H \, \mathrm{d}t. \end{align*} If $\mathfrak{T} = \infty$, then the same conclusion holds on any bounded interval, and $u$ is bounded in $H$. \end{prop} \begin{proof} Using Proposition \ref{prop:embed r>0}, we can express $S f + S^\beta g = S \Tilde{f} + h$, with $\Tilde{f} \in L^2((0, \mathfrak{T}); H)$ and $h \in L^1((0, \mathfrak{T}); H)$. We start with the case $\mathfrak{T} < \infty$. Consider the orthogonal decomposition $H = \overline{\mathrm{ran}(S)} \oplus \mathrm{nul}(S)$, and write $u = u_1 + u_2$, where $u_1 \in L^1((0, \mathfrak{T}); \overline{\mathrm{ran}(S)})$ satisfies $\int_0^{\mathfrak{T}} \| S u_1(t) \|_H^2 \, \mathrm{d}t < \infty,$ and $u_2 \in L^1((0, \mathfrak{T}); \mathrm{nul}(S))$. Similarly, we decompose $\Tilde{f} = \Tilde{f}_1 + \Tilde{f}_2$ and $h = h_1 + h_2$. We have $\partial_t u_1 = S \Tilde{f}_1 + h_1$ and $ \partial_t u_2 = h_2$ where both equalities hold in $ \mathcal{D}'((0, \mathfrak{T}); \tilde{E}_{\infty})$. We obtain that $\partial_t u_2 \in L^1((0, \mathfrak{T}); \mathrm{nul}(S))$, hence $u_2 \in W^{1,1}((0, \mathfrak{T}); \mathrm{nul}(S)) \hookrightarrow C([0, \mathfrak{T}]; \mathrm{nul}(S))$. Using Corollary \ref{corenergybounded} with $S_{\vert \overline{\mathrm{ran}(S)}}$ which is injective, we conclude that $u_1 \in C([0, \mathfrak{T}]; \overline{\mathrm{ran}(S)})$. Finally, we obtain the energy equality using orthogonality. When $\mathfrak{T} = \infty$, the previous case already gives the conclusion on any bounded interval. To see the behavior at $\infty$, we can use the same decomposition and $u_{1}\in C_{0}([0, \infty); \overline{\mathrm{ran}(S)})$ from Corollary \ref{corenergy}.
As for $u_{2}$, we have by direct integration that for all $t\ge 0$, $\|u_{2}(t)\|_{H} \le \inf_{\tau\ge 0}\|u_{2}(\tau)\|_{H}+ \| h_{2} \|_{L^1((0, \infty); \mathrm{nul}(S))}$. \end{proof} \subsubsection{The Cauchy problem}\label{sec:inhomogenousCP} In this section, we are interested in the Cauchy problem on segments and half-lines, in an inhomogeneous setting. Recall that $\Tilde{S}_\lambda=(\lambda^2+S^2)^{{1}/{2}}$ with $D(\Tilde{S}_\lambda)=D({T})$ and we assume $\lambda\ge 0$ for the moment. Let $0<\mathfrak{T}\le \infty$. First, let us consider $(\Tilde{B}_t)_{t \in (0,\mathfrak{T})}$ a weakly measurable family of bounded sesquilinear forms on $D({T})\times D({T})$. More precisely, we assume that \begin{equation}\label{eq:unifbdd} |\Tilde{B}_t(u,v) |\leq M \| \Tilde{S}_\lambda u \|_H \| \Tilde{S}_\lambda v\|_H, \end{equation} for some $M>0$ and for all $u,v \in D({T})$ and $t \in (0,\mathfrak{T})$. In addition, we assume that the family $(\Tilde{B}_t+\kappa)_{t \in (0,\mathfrak{T})}$ is uniformly coercive for some $\kappa \in \mathbb{R}_+$, \textit{i.e.}, \begin{equation}\label{eq:unifcoercive} \mathrm{Re}(\Tilde{B}_t(v,v))+\kappa \left\|v \right\|_H^2 \geq \nu \|\Tilde{S}_\lambda v \|_H^2, \end{equation} for some $\nu>0$ and for all $v \in D(T)$ and $t \in (0,\mathfrak{T})$. Notice that $\Tilde{B}_t+\kappa$ satisfies the lower bound in \eqref{EllipticityAbstract} with $\Tilde{S}$ replacing $S$ and the upper bound with $M+\kappa$ on $(0,\mathfrak{T})$. We denote by $\Tilde{\mathcal{B}}$ the operator associated to the family $(\Tilde{B}_t)_{t \in (0,\mathfrak{T})}$. One may represent $\Tilde{B}_t$ as $ \Tilde{T}^*\Tilde{A}(t)\Tilde{T}$, where $\Tilde{A}(t)$ is bounded on $H\oplus \overline{\mathrm{ran}(T)}$. If we represent $\Tilde{A}(t)$ in $2\times 2$ matrix form, then $\Tilde{\mathcal{B}}$ can be written as ${\mathcal{B}}$ plus lower order terms with bounded operator-valued coefficients.
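To illustrate the last point, here is a sketch in which the block entries $a(t)$, $b(t)$, $c(t)$, $A(t)$ are named only for this computation: writing \begin{equation*} \Tilde{A}(t)=\begin{pmatrix} a(t) & b(t) \\ c(t) & A(t) \end{pmatrix} \end{equation*} with respect to the decomposition $H\oplus \overline{\mathrm{ran}(T)}$ and recalling that $\Tilde{T}u=(\lambda u, Tu)$, we formally get \begin{equation*} \Tilde{B}_t(u,v)=\langle \Tilde{A}(t)\Tilde{T}u, \Tilde{T}v\rangle_{H\oplus K} = \langle A(t)Tu, Tv\rangle_{K} + \lambda\langle c(t)u, Tv\rangle_{K} + \lambda\langle b(t)Tu, v\rangle_{H} + \lambda^2\langle a(t)u, v\rangle_{H}, \end{equation*} where the first term is of the form defining $B_t(u,v)$ and the remaining ones are lower order.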
{On segments, say $(0, \mathfrak{T})$, we can consider the Cauchy problem for all possible values of $\lambda \geq 0$ and $\kappa \geq 0$. On half-lines, say $(0, \infty)$, we restrict the range of the parameters. This leads to the following cases (we will not attempt to track $\lambda$ and $\kappa$ quantitatively).} \begin{enumerate} \item [(a)] $\mathfrak{T}<\infty$. \item[(b)] $\mathfrak{T}=\infty$, $\lambda>0$ and $\kappa=0$. \end{enumerate} See Remark \ref{rem : cas dégénéré} and Remark \ref{rem:tinfty,k>0} for more when $\mathfrak{T}=\infty$. We fix $\rho \in [2,\infty]$ and set $\beta={2}/{\rho}$. Given an initial condition $a\in H$ and $g\in L^{\rho'}((0,\mathfrak{T});H)$, we wish to solve the Cauchy problem \begin{align}\label{Pb Cauchy inhomogène} \left\{ \begin{array}{ll} \partial_t u +\Tilde{\mathcal{B}}u = \Tilde{S}^\beta {g} \ \ \mathrm{in} \ \mathcal{D}'((0,\mathfrak{T});\Tilde{E}_{\infty}), \\ u(0)=a \ \ \mathrm{in} \ \Tilde{E}_{\infty}. \end{array} \right. \end{align} Recall that $\Tilde{S}^\beta {g}$ can be written as ${S}^\beta {g_1}+ {g_2}$, with $g_i\in L^{\rho'}((0,\mathfrak{T});H)$, $i=1,2$. \begin{defn} A weak solution to \eqref{Pb Cauchy inhomogène} is a function $u \in L^1((0,\mathfrak{T});D(S))$ with $\int_{0}^{\mathfrak{T}} \left\|Su(t) \right\|_H^2 \mathrm d t <\infty$ if $\mathfrak{T}<\infty$ or $u \in L^1((0,\mathfrak{T}'); D(S))$ for all $\mathfrak{T'}<\infty$ with $\int_{0}^{\infty} \left\|Su(t) \right\|_H^2 \ \mathrm d t <\infty$ if $\mathfrak{T}=\infty$ and such that \begin{enumerate} \item [(i)] $u$ solves the first equation in $\mathcal{D}'((0,\mathfrak{T});\Tilde{E}_{\infty})$, that is, for all $\varphi \in \mathcal{D}((0,\mathfrak{T});\Tilde{E}_{-\infty})$ \begin{align*} \int_{0}^{\mathfrak{T}} -\langle u(t), \partial_t{\varphi }(t)\rangle_H + \Tilde{B}_t(u(t),\varphi(t))\ \mathrm d t = \int_{0}^{\mathfrak{T}} \langle {g}(t), \Tilde{S}^\beta \varphi(t)\rangle_{H} \ \mathrm d t. 
\end{align*} \item[(ii)] $ \forall \Tilde{a} \in \Tilde{E}_{-\infty}, \ \langle u(t),\Tilde{a} \rangle_H \rightarrow \langle a, \Tilde{a}\rangle_H $ along a sequence tending to 0. \end{enumerate} \end{defn} The difference with the homogeneous situation is the global or local $L^1(H)$ condition. Again, a weaker formulation testing against functions $\varphi \in \mathcal{D}([0,\mathfrak{T});\Tilde{E}_{-\infty})$ with right-hand side containing the additional term $\langle a, \varphi(0) \rangle_{H}$ can be considered. In the end it amounts to the same solutions thanks to \textit{a priori} continuity in $H$, which only uses the upper bound on $\Tilde{B}_t$. \begin{lem}\label{lem:weaksolCPlocal} In case (a), any weak solution to (i) belongs to $C([0,\mathfrak{T}]; H)$, and $t\mapsto\|u(t)\|_H^2$ satisfies the energy equality for any $ \sigma, \tau \in [0,\mathfrak{T}]$ such that $ \sigma < \tau$, \begin{align*} \left \| u(\tau) \right \|^2_H + 2\mathrm{Re}\int_{\sigma}^{\tau} \Tilde{B}_t(u(t),u(t))\ \mathrm d t = \left \| u(\sigma) \right \|^2_H + 2\mathrm{Re}\int_{\sigma}^{\tau} \langle {g}(t), \Tilde{S}^\beta u(t)\rangle_{H}\ \mathrm d t. \end{align*} In case (b), we have the same conclusion on any bounded interval. \end{lem} \begin{proof} Case (b) follows directly from case (a). To prove the latter, using Proposition \ref{prop:embed r>0} with $\Tilde{S}$ replacing $S$, we can write $\Tilde{S}^\beta {g}=\Tilde{S}\Tilde{f}+\Tilde{h}$ with $\Tilde{f} \in L^2((0, \mathfrak{T}); H)$ and $\Tilde{h} \in L^1((0, \mathfrak{T}); H)$. This can be expressed as ${S}{f}+{h}$ with ${f} \in L^2((0, \mathfrak{T}); H)$ and ${h} \in L^1((0, \mathfrak{T}); H)$. We conclude using Proposition \ref{prop:energyboundedintervalinhomo}. \end{proof} The main result of this section is the following theorem which puts together the inhomogeneous version of all the theory developed so far.
\begin{thm}\label{ThmCauchy inhomog} Consider the above assumptions on $\Tilde{B}_t$, $\lambda,\kappa$, $g$ and $a$. \begin{enumerate} \item There exists a unique weak solution $u$ to the problem \eqref{Pb Cauchy inhomogène}. Moreover, $u \in C([0,\mathfrak{T}];H) \cap L^r((0,\mathfrak{T});D(\Tilde{S}^\alpha))$ for all $r\in [2,\infty]$ with $\alpha=2/r$, with $u(\infty)=0$ in case (b) where $\mathfrak{T}=\infty$, and we have the estimate \begin{align*} \sup_{t\in [0,\mathfrak{T}]} \| u(t) \|_{H}&+\| \Tilde{S}^\alpha u \|_{L^r((0,\mathfrak{T});H)} \leq C ( \left \| g \right \|_{L^{\rho'}((0,\mathfrak{T});H)}+ \left \| a \right \|_H ), \end{align*} where $C=C(M,\nu,\rho,\kappa,\mathfrak{T})>0$ is a constant independent of $g$ and $a$. \item There exists a unique fundamental solution $\Gamma_{\Tilde{\mathcal{B}}}=(\Gamma_{\Tilde{\mathcal{B}}}(t,s))_{0\leq s \leq t \leq \mathfrak{T}}$ for $\partial_t+\Tilde{\mathcal{B}}$ in the sense of Definition \ref{FS} in $(0,\mathfrak{T})$ (by convention, set $\Gamma_{\Tilde{\mathcal{B}}}(\infty,s)=0$ if $\mathfrak{T}=\infty$). In particular, for all $t \in [0,\mathfrak{T}]$, we have the following representation of $u$: \begin{align*} u(t)=\Gamma_{\Tilde{\mathcal{B}}}(t,0)a+\int_{0}^{t}\Gamma_{\Tilde{\mathcal{B}}}(t,s)\Tilde{S}^\beta {g}(s) \ \mathrm ds , \end{align*} where the integral is weakly defined in $H$ when $\rho<\infty$ and strongly defined when $\rho=\infty$ (i.e., in the Bochner sense). For all $\Tilde{a} \in H$ and $t \in [0,\mathfrak{T}]$, \begin{align*}\label{eq:repCauchy0 T} \langle u(t) , \Tilde{a}\rangle_H &= \langle \Gamma_{\Tilde{\mathcal{B}}}(t,0)a , \Tilde{a}\rangle_H +\int_{0}^{t} \langle {g}(s) , \Tilde{S}^\beta \Tilde{\Gamma}_{\Tilde{\mathcal{B}}}(s,t)\Tilde{a}\rangle_H \ \mathrm ds. \end{align*} \end{enumerate} \end{thm} \begin{proof} We begin with existence.
\paragraph{\textit{Existence in case (b) }} Apply Theorem \ref{ThmCauchy homog} with $\Tilde{S}$ replacing $S$, noting that the right hand side belongs to $L^{\rho'}((0,\infty);D_{\Tilde{S},-\beta})$. This shows the existence of a weak solution $v$ in $L^2((0,\infty); D(\Tilde{S}))$, which also belongs to $C_0([0,\infty); H)$ and $L^r((0,\infty); D(\Tilde{S}^\alpha))$. \paragraph{\textit{Existence in case (a) }} Extend $g$ by $0$ and $\Tilde{B}_t$ by $\nu\langle \Tilde{S}\cdot, \Tilde{S}\cdot \rangle_H$ on $(\mathfrak{T}, \infty)$ if $\mathfrak{T}<\infty$ and use the same notation. Let $\kappa'>\kappa$. Apply Theorem \ref{ThmCauchy homog} with $\Tilde{S}$ replacing $S$ and with right hand side in $L^{\rho'}((0,\infty);D_{\Tilde{S},-\beta})$ to the auxiliary Cauchy problem \begin{align*} \left\{ \begin{array}{ll} \partial_t v + (\Tilde{\mathcal{B}}+\kappa')v = \Tilde{S}^\beta (e^{-\kappa' t} {g}) \ \ \mathrm{in} \ \mathcal{D}'((0,\infty); \Tilde{E}_\infty), \\ v(0)=a \ \mathrm{in} \ \Tilde{E}_\infty, \end{array} \right. \end{align*} and obtain a weak solution $v$ in $L^2((0,\infty); D(\Tilde{S}))$. The function $u:=e^{\kappa' t} v$ restricted to $[0,\mathfrak{T}]$ gives us a weak solution with the desired properties. Next, we prove uniqueness. Assume $u$ is a weak solution to \eqref{Pb Cauchy inhomogène} with $a=0$ and $g=0$. \paragraph{\textit{Uniqueness in case (b) }} We have $u \in L^1((0,\mathfrak{T}'); D(S))$ for all $\mathfrak{T'}<\infty$ and $\int_{0}^{\infty} \left\|Su(t) \right\|_H^2 \ \mathrm d t <\infty$. Applying Lemma \ref{lem:weaksolCPlocal}, we have $u \in C([0,\mathfrak{T}']; H)$ for all $\mathfrak{T'}>0$, $u(0)=0$ and \begin{equation*} \left \| u(\mathfrak{T}') \right \|^2_H + 2\mathrm{Re}\int_{0}^{\mathfrak{T}'} \Tilde{B}_t(u(t),u(t)) \ \mathrm dt= 0. \end{equation*} Using the coercivity of $\Tilde{B}_t $, we deduce that $u=0$ on $(0,\mathfrak{T}')$ and therefore $u=0$ on $[0,\infty)$.
\paragraph{\textit{Uniqueness in case (a) }} We have $u \in L^1((0,\mathfrak{T});D(S))$ with $\int_{0}^{\mathfrak{T}} \left\|Su(t) \right\|_H^2 \mathrm d t <\infty$. Let $\kappa'>\kappa$. Set $v = e^{-\kappa' t}u$ on $(0,\mathfrak{T})$ so that $v \in L^1((0,\mathfrak{T});D(S))$ with $\int_{0}^{\mathfrak{T}} \left\|Sv(t) \right\|_H^2 \mathrm d t <\infty$ and $\partial_t v + (\Tilde{\mathcal{B}}+\kappa') v =0$ in $\mathcal{D}'((0,\mathfrak{T});\Tilde{E}_\infty)$. Applying Lemma \ref{lem:weaksolCPlocal}, we have $v \in C([0,\mathfrak{T}]; H)$, $v(0)=0$ and \begin{equation*} \left \| v(\mathfrak{T}) \right \|^2_H + 2\mathrm{Re}\int_{0}^{\mathfrak{T}} \Tilde{B}_t(v(t),v(t)) + \kappa' \left \| v(t) \right \|^2_H \ \mathrm dt= 0. \end{equation*} Using the coercivity of $\Tilde{B}_t+\kappa'$ resulting from \eqref{eq:unifcoercive}, we deduce that $v=0$ and therefore $u=0$ on $[0,\mathfrak{T}]$. Finally, the definition, existence, and uniqueness of the fundamental solution $\Gamma_{\Tilde{\mathcal{B}}}$ can be obtained easily by proceeding as in Section \ref{sec:FS}. \end{proof} \begin{rem}\label{rem:tinfty,k>0} If $\mathfrak{T}=\infty$ with $\kappa> 0$, then we can construct a weak solution, but it does not satisfy $\int_{0}^{\infty} \|Su(t) \|_H^2 \ \mathrm d t <\infty$. \end{rem} \begin{rem}\label{rem : cas dégénéré} For $\mathfrak{T}=\infty$, $\lambda=0$, $\kappa=0$ and $\Tilde{S}^\beta g$ replaced by $S^\beta g$, we can apply Theorem \ref{ThmCauchy homog} provided that $T$ is injective. However, when $T$ (hence $S$) is not injective, the proof of Theorem \ref{ThmCauchy inhomog} provides us with a global solution but not with limit 0 at $\infty$. In fact, the zero limit at $\infty$ fails. Take $u_0 \in \mathrm{nul}(S) \setminus \{0\}$ and set $u(t)=u_0$ for all $t \ge 0$. We have $u \in L^2((0,\mathfrak{T}');H)$ for all $\mathfrak{T'}<\infty$ with $\int_{0}^{\infty} \left\|Su(t) \right\|_H^2 \ \mathrm d t =0 <\infty$.
Moreover, $u$ is a weak solution to the abstract heat equation \begin{align*} \left\{ \begin{array}{ll} \partial_t u + S^2u = 0 \ \ \mathrm{in}\ \mathcal{D}'((0,\infty); \Tilde{E}_\infty), \\ u(0)=u_0 \ \mathrm{in} \ \Tilde{E}_\infty, \end{array} \right. \end{align*} with $\lim_{t\rightarrow\infty}u(t)=u_0$. \end{rem} \begin{rem} Consider the special case $\Tilde{\mathcal{B}}=\mathcal{B}+ \omega$, $\omega>0$, on $(0,\infty)$, keeping the condition \eqref{EllipticityAbstract} for $\mathcal{B}$ with $T$ (and $S$) injective. We have (b) with $\lambda=1$, constant $\sup(M,\omega)$ in \eqref{eq:unifbdd} and constant $\inf(\nu, \omega)$ in \eqref{eq:unifcoercive}. The theorem above applies and gives us fundamental solution operators $\Gamma_{\mathcal{B}+ \omega}(t,s)$, defined for $0\le s\le t<\infty$. Call $\Gamma_{\mathcal{B}}(t,s)$ the one obtained in the previous section. Uniqueness for the Cauchy problem for $\partial_t + \mathcal{B}+ \omega$ holds in $L^2((0,\mathfrak{T}), D(S))$ for all $\mathfrak{T}<\infty$, and this shows that $ \Gamma_{\mathcal{B}+ \omega}(t,s)=e^{-\omega(t-s)}\Gamma_{\mathcal{B}}(t,s)$. Working on $\mathbb{R}$, we obtain the equality for $-\infty< s\le t<\infty$. \end{rem} \section{The final step towards concrete situations}\label{subsection 4.9} The reader might wonder how to apply our theory in concrete situations, where the abstract spaces of test functions $\mathcal{D}(I;E_{-\infty})$ or $\mathcal{D}(I;\Tilde{E}_{-\infty})$ might not be related to usual spaces of test functions. The following result gives a sufficient condition under which one can replace $E_{-\infty}$ or $\Tilde{E}_{-\infty}$ by an arbitrary dense set in the domain of $S$, sometimes called a core of $D(S)=D(T)$. \begin{thm}\label{thm: passage au concret} Let $\mathrm{D}$ be a Hausdorff topological vector space with continuous and dense inclusion $\mathrm{D} \hookrightarrow D(S)$, where $D(S)$ is equipped with the graph norm.
Assume a priori that weak solutions belong to $L^1_{\mathrm{loc}}(I;H)$, and replace the test function space by $\mathcal{D}(I;\mathrm{D})$ in their definition, where, in the latter case, $\partial_t u$ is computed via $$\forall \varphi \in \mathcal{D}(I;\mathrm{D}), \ \llangle \partial_t u, \varphi \rrangle = -\int_I \langle u(t),\partial_t\varphi(t)\rangle_{H} \ \mathrm{d}t.$$ Then our well-posedness results are the same: existence with estimates, uniqueness (requiring additionally $u\in L^1_{\mathrm{loc}}(I;H)$ in the uniqueness class), and the energy equalities hold. \end{thm} The proof relies on the following density lemma. Denote by $\| u \|_{D(S)}=(\|u\|_H^2+\| Su\|_{H}^2)^{1/2}$ the Hilbertian graph norm and $ \langle \cdot,\cdot\rangle_{_{D(S)}} $ the corresponding inner product. \begin{lem}\label{lem: densité D} Let $\mathrm{D}$ be as in the above theorem. For every open interval $I \subset \mathbb{R}$, $\mathcal{D}(I;\mathrm{D})$ is a dense subspace of $\mathcal{D}(I;D(S))$ in the following sense: for all $\varphi \in \mathcal{D}(I; D(S))$, there exists a sequence $(\varphi_k)_{k \geq 0} \in \mathrm{span}(\mathcal{D}(I)\otimes \mathrm{D})^{\mathbb{N}} $ such that \begin{enumerate} \item For all $k \geq 0$, $\mathrm{supp}(\varphi_k) \subset \mathrm{supp}(\varphi)$. \item For all $k \geq 0$ and $t \in I$, $$ \left\|\varphi_k(t) \right\|_{D(S)} \leq 3 \left\| \varphi(t) \right\|_{D(S)}, \ \ \|\partial_t \varphi_k(t) \|_{D(S)} \leq 3 \left\| \partial_t \varphi(t) \right\|_{D(S)} .$$ \item For all $t \in I$, $\left\|\varphi_k(t)-\varphi(t) \right\|_{D(S)}+\left\|\partial_t \varphi_k(t) - \partial_t \varphi(t) \right\|_{D(S)}\to 0$ as $k\to \infty$. \item For all $\beta \in [0,1]$, $t\in I$ and $k\geq 0$, $\| S^\beta \varphi_k(t)\|_H \le 3\ \| \varphi(t) \|_{D(S)} $ and $\| S^\beta( \varphi_k(t)- \varphi(t) ) \|_H \to 0$ as $k\to \infty$.
\end{enumerate} \end{lem} \begin{proof} The space $(D(S),\left\| \cdot \right\|_{D(S)})$ is separable, as it is isometric to a subspace of the separable space $H\times H$. Let $(w_j)_{j\in \mathbb{N}} \in D(S)^\mathbb{N}$ be a Hilbertian basis of $ \left( D(S),\left\| \cdot \right\|_{D(S)} \right)$. As $\mathrm{D}$ is dense in $D(S)$, for all $j\ge 0$ one can find a sequence $(v_j^k)_{k\in \mathbb{N}} \in \mathrm{D}^\mathbb{N}$ such that for all $k \ge 0$, $\| w_j-v_j^k \|_{D(S)} \leq \frac{1}{2^{j+k}} $. For $h=\sum_{j\ge 0} \alpha_j w_j\in D(S)$, one can see using the Cauchy--Schwarz inequality and Plancherel's theorem that $\|\sum_{j=0}^k \alpha_j (v_j^k-w_j)\|_{D(S)} \le \sum_{j=0}^k \frac{|\alpha_j|}{2^{j+k}} \le \frac{4}{3\cdot 2^k}\|h\|_{D(S)}$, so that $h_k=\sum_{j=0}^k \alpha_j v_j^k$ satisfies $\|h_k-h\|_{D(S)} \le \frac{4}{3\cdot 2^k}\|h\|_{D(S)} + \|\sum_{j\ge k+1} \alpha_j w_j\|_{D(S)}$ and $\|h_k\|_{D(S)} \le (\frac{4}{3}+1)\|h\|_{D(S)}$. Now, fix $\varphi \in \mathcal{D}(I;D(S))$ and set for all $k \ge 0$ and $t \in I$, $$ \varphi_k(t):= \sum_{j=0}^{k} \langle \varphi(t),w_j\rangle_{_{D(S)}} v_j^k \ . $$ Clearly, the sequence $ (\varphi_k)_{k \in \mathbb{N}}$ belongs to $\mathrm{span}(\mathcal{D}(I)\otimes \mathrm{D})^{\mathbb{N}}$ and satisfies (1), while (2) and (3) follow from the above estimates. Finally, (4) follows from the moments inequality combined with (2) and (3). \end{proof} \begin{proof}[Proof of Theorem \ref{thm: passage au concret}] The case using $\mathcal{D}(I;\Tilde{E}_{-\infty})$ being similar, it suffices to show that, with the a priori requirement that weak solutions also belong to $L^1_{\mathrm{loc}}(I;H)$, the formulations of the equations against test functions in $ \mathcal{D}(I ; E_{-\infty})$ and in $\mathcal{D}(I;\mathrm{D})$ are equivalent, because then they have the same solutions. In fact, they are equivalent to a formulation against test functions in $\mathcal{D}(I ; D(S))$.
Indeed, if $u \in L^2(I;D_{S,1})\cap L^1_{\mathrm{loc}}(I;H)$ then by Lemma \ref{lem: Jalpha bien définie}, we have for all $ \varphi \in \mathcal{D}(I ; E_{-\infty})$, $$\llangle u , \partial_t \varphi \rrangle_{\mathcal{D}',\mathcal{D}} =\int_{I} \langle u (t), \partial_t \varphi(t) \rangle_{H,1} \ \mathrm{d}t= \int_{I} \langle S u(t) , S^{-1} \partial_t \varphi(t) \rangle_H \ \mathrm{d}t = \int_{I} \langle u (t), \partial_t \varphi(t) \rangle_H \ \mathrm{d}t.$$ Using that $ \mathcal{D}(I ; E_{-\infty})$ is dense in $\mathcal{D}(I ; D(S))$ as in Lemma \ref{lem: densité D}, together with dominated convergence, we see that the weak formulation holds for all $\varphi \in \mathcal{D}(I ; D(S))$. Conversely, we can of course restrict to test functions in $ \mathcal{D}(I ; E_{-\infty})$, showing that the formulations testing with $\varphi \in \mathcal{D}(I ; E_{-\infty})$ or $\varphi \in \mathcal{D}(I ; D(S))$ are equivalent. The same argument applies starting from another dense set $\mathrm{D}$. Finally, the initial data property in the Cauchy problems testing against elements in $E_{-\infty}$ is equivalent to testing against arbitrary elements in $H$ by density, as $u(t)$ belongs to $H$ for almost every $t$. This would be the same replacing $E_{-\infty}$ by another dense set $\mathrm{D}$ in $D(S)$, as it would also be dense in $H$. \end{proof} \section{Three applications}\label{Section 6} \subsection{Parabolic Cauchy problems on domains with Dirichlet boundary condition} Let $n\ge 1$ and let $\Omega \subset \mathbb{R}^n$ be an open set. We denote by $L^2(\Omega)$ the Hilbert space of square integrable functions on $\Omega$ with respect to the Lebesgue measure $\mathrm{d}x$, with norm denoted by ${\lVert \cdot \rVert}_{2}$ and inner product by $\langle \cdot ,\cdot \rangle$. As usual, we denote by $\mathcal{D}(\Omega)$ the class of smooth and compactly supported functions on $\Omega$.
We set $H^1(\Omega):=\left\{ u \in L^2(\Omega) : \nabla_x u \in L^2(\Omega) \right\}$ and it is a Hilbert space for the norm $\left\| u \right\|_{H^1(\Omega)}:=( \left\| u \right\|_{2}^2+\left\| \nabla_x u \right\|_{2}^2 )^{1/2}$. Finally, $H^1_0(\Omega)$ is defined as the closure of $\mathcal{D}(\Omega)$ in $(H^1(\Omega),\left\| \cdot \right\|_{H^1(\Omega)})$. We denote by $-\Delta_D $ the unbounded operator on $L^2(\Omega)$ associated to the positive symmetric sesquilinear form on $H^1_0(\Omega) \times H^1_0(\Omega)$ defined by \begin{equation*} (u,v) \mapsto \int_{\Omega} \nabla_x u(x) \cdot \overline{\nabla_x v(x)} \ \mathrm dx. \end{equation*} Let $A: I\times \Omega \rightarrow M_n(\mathbb{C})$ be a matrix-valued function with complex measurable entries and such that \begin{equation}\label{ellipticité A} \left | A(t,x)\xi \cdot \zeta \right |\leq M \left | \xi \right |\left | \zeta \right |, \ \ \ \ \nu \left | \xi \right |^2\leq \mathrm{Re}(A(t,x)\xi\cdot \overline{\xi}) \end{equation} for some $M,\nu>0$ and for all $\zeta, \xi \in \mathbb{C}^n$ and $(t,x) \in I\times \Omega$. We let $-\mathrm{div}_x$ be the adjoint of $\nabla_x:H^1_0(\Omega) \to L^2(\Omega)^n $ and use the customary notation $\mathcal{B}= -\mathrm{div}_x A(t,\cdot)\nabla_x.$ Fix $\mathfrak{T}>0$, $\rho \in (2,\infty)$, set $\beta={2}/{\rho} \in (0,1)$ and let $\rho'$ be the H\"older conjugate of $\rho$. For $ {f} \in L^2((0,\mathfrak{T});L^2(\Omega)^n)$, $g\in L^{\rho'}((0,\mathfrak{T});L^2(\Omega))$, $h \in L^1((0,\mathfrak{T});L^2(\Omega))$ and $\psi \in L^2(\Omega)$, consider the following Cauchy problem \begin{align}\label{Pb Cauchy Dirichlet} \left\{ \begin{array}{ll} \partial_t u - \mathrm{div}_x(A(t,\cdot)\nabla_x u) = -\mathrm{div}_x {f} +(-\Delta_D)^{\beta/2}{g}+h \ \mathrm{in} \ \mathcal{D}'((0,\mathfrak{T})\times \Omega), \\ u(t) \rightarrow \psi \ \mathrm{ in } \ \mathcal{D'}(\Omega) \ \mathrm{as} \ t \rightarrow 0^+. \end{array} \right. 
\end{align} The first equation is interpreted in the weak sense according to the following definition. \begin{defn}\label{defn:Dirichlet} A weak solution to the first equation in \eqref{Pb Cauchy Dirichlet} is a (complex-valued) {function $u \in L^1((0,\mathfrak{T});H^1_0(\Omega))$ with $\int_0^\mathfrak{T}\|\nabla_x u(t)\|^2_2\, \mathrm{d}t<\infty$} such that for all $\varphi \in \mathcal{D}((0,\mathfrak{T})\times \Omega)$, \begin{align*} \int_{0}^{\mathfrak{T}}\int_\Omega &- u(t,x) \partial_t \varphi(t,x)+A(t,x)\nabla_x u(t,x) \cdot \nabla_x \varphi(t,x) \ \mathrm{d}x \mathrm{d}t \\& = \int_{0}^{\mathfrak{T}}\int_\Omega {f}(t,x) \cdot \nabla_x \varphi(t,x)+ {g}(t,x) (-\Delta_D)^{\beta/2} \varphi(t,x) +h(t,x)\varphi(t,x)\ \mathrm{d}x\mathrm{d}t. \end{align*} \end{defn} The consequence of our theory is \begin{thm}[Cauchy problem on $(0,\mathfrak{T})$]\label{thm: Pb Cauchy Dirichlet} Let $f,g,h, \psi$ be as above. \begin{enumerate} \item There exists a unique weak solution to the Cauchy problem \eqref{Pb Cauchy Dirichlet} as defined above. Moreover, $u \in C([0,\mathfrak{T}];L^2(\Omega))$ with $u(0)=\psi$, the map $t \mapsto \| u(t) \|^2_{2}$ is absolutely continuous on $[0,\mathfrak{T}]$ and we can write the energy equalities. Furthermore, $u \in L^r((0,\mathfrak{T});D((-\Delta_D)^{\alpha/2}))$ for any $\alpha \in (0,1]$ with $r={2}/{\alpha} \in [2,\infty)$ and we have \begin{align*} \sup_{t\in [0,\mathfrak{T}]} &\| u(t) \|_{2}+ \| (-\Delta_D)^{\alpha/2}u \|_{L^r((0,\mathfrak{T});L^2(\Omega))} \\ &\leq C ( \left \| f \right \|_{L^2((0,\mathfrak{T});L^2(\Omega)^n)} + \left \| g \right \|_{L^{\rho'}((0,\mathfrak{T});L^2(\Omega))} + \left \| h \right \|_{L^{1}((0,\mathfrak{T});L^2(\Omega))}+ \| \psi \|_{2} ), \end{align*} where $C=C(M,\nu,\rho,\mathfrak{T})>0$ is a constant independent of the data $f,g,h$ and $\psi$.
\item There exists a unique fundamental solution $\Gamma=(\Gamma(t,s))_{0\leq s \leq t \leq \mathfrak{T}}$ for $\partial_t-\mathrm{div}_x A(t,\cdot)\nabla_x$. In particular, for all $t \in [0,\mathfrak{T}]$, we have the following representation of $u(t)$ : \begin{align*} u(t) = \Gamma(t,0)\psi + \int_{0}^{t} \Gamma(t,\tau)( -\mathrm{div}_x{f})(\tau)\ \mathrm d \tau + \int_{0}^{t} \Gamma(t,\tau) (-\Delta_D)^{\beta/2}{g}(\tau)\ \mathrm d \tau +\int_{0}^{t} \Gamma(t,\tau)h(\tau)\ \mathrm d \tau, \end{align*} where the two integrals with ${f}$ and ${g}$ are weakly defined in $L^2(\Omega)$ while the other one converges strongly (i.e., in the Bochner sense). More precisely, we have for all $\Tilde{\psi} \in L^2(\Omega)$ and $t \in [0,\mathfrak{T}]$, \begin{align*} \langle u(t) , \Tilde{\psi} \rangle = \langle \Gamma(t,0)\psi , & \Tilde{\psi} \rangle + \int_{0}^{t} \langle {f}(s) , \nabla_x \Tilde{\Gamma}(s,t)\Tilde{\psi}\rangle \, \mathrm ds \\& +\int_{0}^{t} \langle {g}(s) , (-\Delta_D)^{\beta/2} \Tilde{\Gamma}(s,t) \Tilde{\psi} \rangle \, \mathrm ds +\int_{0}^{t} \langle \Gamma(t,s) h(s) , \Tilde{\psi} \rangle \, \mathrm ds. \end{align*} \end{enumerate} \end{thm} \begin{proof} {As $\mathcal{D}(\Omega)$ is dense in $H^1_0(\Omega)$ with respect to the graph norm of the injective self-adjoint operator $S=(-\Delta_D)^{1/2}$ by definition, we are in the context of Theorem \ref{thm: passage au concret} in Section \ref{subsection 4.9}, which corresponds to Theorem \ref{ThmCauchy inhomog} for each $f,g,h$ by linearity and using that $-\mathrm{div}_x{f} =(-\Delta_D)^{1/2} \Tilde{f}$ with $\Tilde{f}\in L^2((0,\mathfrak{T}); L^2(\Omega))$, with $B_t : H^1_0(\Omega) \times H^1_0(\Omega) \rightarrow \mathbb{C}$ being the sesquilinear form defined via \begin{equation*} \forall u,v \in H^1_0(\Omega) : \ B_t(u,v) := \int_{\Omega} A(t,x)\nabla_x u(x) \cdot \overline{\nabla_x v(x)} \, \mathrm{d} x. 
\end{equation*} } \end{proof} \begin{rems} \begin{enumerate} \item With the modification in the definition of weak solutions, the statement applies to the Cauchy problem on $(0,\infty)$, and $u$ has limit 0 at $\infty$. (Use Theorems \ref{thm: passage au concret} and \ref{ThmCauchy homog}). \item Note that the theory applies to complex coefficients. In particular, we do not assume any local regularity for weak solutions, and fundamental solutions are merely bounded operators. Bounds on their kernels need additional assumptions. \item If $\Omega$ is bounded (or only bounded in one direction), then the Poincar\'e inequality holds on $H^1_0(\Omega)$ \cite[Proposition 3.25]{egert2024harmonic}, and it follows that $D(S)=D_{S,1}=H^1_0(\Omega)$ with equivalent norms. In particular, the inhomogeneous and homogeneous theories developed in Section \ref{Section 5} are the same for this concrete case. \item We may want to replace the spaces $L^r((0,\mathfrak{T});D((-\Delta_D)^{\alpha/2}))$ by mixed Lebesgue spaces $L^r((0,\mathfrak{T});L^q(\Omega))$. The embeddings of the domains of the fractional powers $(-\Delta_D)^{\alpha/2}$ into Lebesgue spaces $L^q(\Omega)$ depend on the geometry of the domain. See the discussion in \cite{auscher2023universal}. \end{enumerate} \end{rems} \subsection{Parabolic integro-differential operators} The second application is to integro-differential parabolic operators $\partial_t+ \mathcal{B}$, where $\mathcal{B}$ is associated with a sesquilinear form $B_t$ satisfying \eqref{EllipticityAbstract} for $t\in I$ ($I$ an open interval) with $T=S=(-\Delta)^{\gamma/2}$ for some $\gamma>0$.
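For orientation, recall (a standard fact, stated here only as a reader's aid) that for $\gamma\in(0,1)$ the operator $(-\Delta)^{\gamma/2}$ and the Gagliardo double integral used below are linked on the Fourier side by

\begin{align*}
\|(-\Delta)^{\gamma/2} u\|_2^2
  &= \int_{\mathbb{R}^n} |\xi|^{2\gamma}\,|\widehat{u}(\xi)|^2 \,\mathrm{d}\xi, \\
\iint_{\mathbb{R}^n \times \mathbb{R}^n} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2\gamma}} \,\mathrm{d}x\,\mathrm{d}y
  &= C(n,\gamma) \int_{\mathbb{R}^n} |\xi|^{2\gamma}\,|\widehat{u}(\xi)|^2 \,\mathrm{d}\xi,
\end{align*}

with $C(n,\gamma)$ an explicit positive constant; in particular, the two quantities are not merely comparable but proportional.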
The most notable example from the references mentioned in the introduction is that of $\mathcal{B}$ arising from the family of forms \begin{align*} B_t(u,v) := \iint_{\mathbb{R}^n \times \mathbb{R}^n} K(t,x,y) \frac{(u(x) - u(y)) \overline{(v(x)-v(y))}}{|x-y|^{n+2\gamma}} \, \mathrm{d} x \, \mathrm{d} y, \end{align*} for some $\gamma \in (0,1)$ and $u,v \in W^{\gamma,2}(\mathbb{R}^n)$. We assume here that $K: I\times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ is a measurable kernel satisfying the accretivity condition for some $\lambda>0$, \begin{align} \label{eq:ellip} 0<\lambda \leq \mathrm{Re}\, K(t,x,y) \leq |K(t,x,y)| \leq \lambda^{-1} \qquad (\text{a.e. }(t,x,y) \in I \times \mathbb{R}^n \times \mathbb{R}^n). \end{align} The Sobolev space $W^{\gamma,2}(\mathbb{R}^n)$ is the space of measurable functions $u$ on $\mathbb{R}^n$ with norm $\|u\|_\gamma$ given by \begin{align*} \|u\|_\gamma^2= \int_{\mathbb{R}^n} |u(x)|^2\, \mathrm{d}x + \iint_{\mathbb{R}^n \times \mathbb{R}^n} \frac{|u(x) - u(y)|^2}{|x-y|^{n+2\gamma}} \, \mathrm{d} x \, \mathrm{d} y, \end{align*} and it is well known that $W^{\gamma,2}(\mathbb{R}^n)$ agrees with the domain of $(-\Delta)^{\gamma/2}$ and that the last term in the expression above is comparable to $\|(-\Delta)^{\gamma/2}u\|_2^2$. Using this observation, \eqref{eq:ellip} and the Cauchy--Schwarz inequality, we can check \eqref{EllipticityAbstract} with $T=S=(-\Delta)^{\gamma/2}$. From now on, we can apply the theory developed so far and obtain well-posedness results on $I\times \mathbb{R}^n$, but we shall not repeat the statements and leave that to the reader. Perhaps the most notable outcome is that there always exists a unique fundamental solution, which seems new at this level of generality. \begin{thm} Let $\gamma>0$. The integro-differential parabolic operator $\partial_t+ \mathcal{B}$ on $I\times \mathbb{R}^n$ has a unique fundamental solution.
\end{thm} \subsection{Degenerate parabolic operators} The third application concerns degenerate parabolic operators on $\mathbb{R}^n$. We fix a weight $\omega$ in the Muckenhoupt class $A_2(\mathbb{R}^n, \mathrm{d}x)$, meaning that $\omega: \mathbb{R}^n \to \mathbb{R}$ is a measurable and positive function satisfying \begin{equation*} [\omega]_{A_2}:=\sup_{Q}\left ( \frac{1}{|Q|}\int_Q \omega(x)\,\mathrm{d} x \right ) \left ( \frac{1}{|Q|}\int_Q \omega^{-1}(x)\,\mathrm{d} x \right ) < \infty, \end{equation*} where the supremum is taken over all cubes $Q \subset \mathbb{R}^n$. For background on Muckenhoupt weights and related results, we refer to \cite[Ch. V]{Stein1993_HA}. We denote by $L^2_\omega(\mathbb{R}^n) := L^2(\mathbb{R}^n, \mathrm{d}\omega)$ the Hilbert space of square-integrable functions with respect to $\mathrm{d}\omega=\omega(x)\,\mathrm{d}x$, with norm denoted by ${\lVert \cdot \rVert}_{2,\omega}$ and inner product $\langle \cdot , \cdot \rangle_{2,\omega}$. It is known that $$\mathcal{D}(\mathbb{R}^n) \subset L^2_\omega(\mathbb{R}^n) \subset L^1_{\text{loc}}(\mathbb{R}^n, \mathrm{d}x) \subset \mathcal{D}'(\mathbb{R}^n)$$ and the first inclusion is dense. We define the weighted Sobolev space $H^1_{\omega}(\mathbb{R}^n)$ (or $W^{1,2}_\omega(\mathbb{R}^n)$) as the space of functions $f \in L^2_\omega(\mathbb{R}^n)$ for which the distributional gradient $\nabla_x f$ belongs to $L^2_\omega(\mathbb{R}^n)^n$, and equip this space with the norm $\left\| f \right\|_{H^1_\omega} := ( \left\| f \right\|_{2,\omega}^2 + \left\| \nabla_x f \right\|_{2,\omega}^2 )^{1/2}$ making it a Hilbert space. It is also known that $\mathcal{D}(\mathbb{R}^n)$ is dense in $H^1_{\omega}(\mathbb{R}^n)$ (see \cite[Thm. 2.5]{kilpelainen1994weighted}). Let $I \subset \mathbb{R}$ be an open interval.
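As a concrete illustration of the $A_2$ condition (our own aside, not from the text): on the real line, the power weight $\omega(x)=|x|^a$ belongs to $A_2$ exactly when $-1<a<1$. The sketch below evaluates the $A_2$ product on intervals $[\mathrm{lo},\mathrm{hi}]\subset[0,\infty)$ using exact antiderivatives; the particular exponents and endpoints are arbitrary choices for the demonstration.

```python
def a2_product(a, lo, hi):
    """A_2 product for the power weight w(x) = |x|^a on [lo, hi],
    0 <= lo < hi: (average of w) * (average of 1/w).
    Exact antiderivatives of x^p (valid for p > -1) handle the
    singularity at the origin, so -1 < a < 1 is required."""
    def avg(p):
        return (hi ** (p + 1) - lo ** (p + 1)) / ((p + 1) * (hi - lo))
    return avg(a) * avg(-a)

# Intervals touching the origin give a scale-invariant product (= 4/3 here):
print(a2_product(0.5, 0.0, 1.0))
print(a2_product(0.5, 0.0, 1e-6))
# Away from the origin the weight is nearly constant and the product is ~1
# (Cauchy-Schwarz forces it to be >= 1 on every interval):
print(a2_product(0.5, 10.0, 11.0))
# As a -> 1 the supremum blows up, consistent with |x|^a leaving A_2:
print(a2_product(0.99, 0.0, 1.0))
```

Taking the supremum over all such intervals (and the analogous ones crossing the origin) yields a finite $[\omega]_{A_2}$ precisely in the range $-1<a<1$.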
Let $A: I \times \mathbb{R}^n \to M_n(\mathbb{C})$ be a matrix-valued function with complex measurable coefficients such that $$ \left| A(t,x) \xi \cdot \zeta \right| \leq M \omega(x) \left| \xi \right| \left| \zeta \right|, \quad \nu \left| \xi \right|^2 \omega(x) \leq \mathrm{Re}(A(t,x) \xi \cdot \overline{\xi}), $$ for some constants $M, \nu > 0$ and for all $\xi, \zeta \in \mathbb{C}^n$ and $(t,x) \in I \times \mathbb{R}^n$. For each $t \in I$, we define the sesquilinear form $B_t : H^1_\omega(\mathbb{R}^n) \times H^1_\omega(\mathbb{R}^n) \to \mathbb{C}$ by $$ B_t(u,v) := \int_{\mathbb{R}^n} A(t,x) \nabla_x u(x) \cdot \overline{\nabla_x v(x)} \, \mathrm{d}x, $$ for all $u, v \in H^1_\omega(\mathbb{R}^n)$. The assumptions on $A$ yield $$ |B_t(u,v)| \leq M \left\| \nabla_x u \right\|_{2,\omega} \left\| \nabla_x v \right\|_{2,\omega}, \quad \nu \left\| \nabla_x u \right\|_{2,\omega}^2 \leq \mathrm{Re}(B_t(u,u)). $$ This is \eqref{EllipticityAbstract} with $T=\nabla_x : H^1_\omega(\mathbb{R}^n) \to L^2_\omega(\mathbb{R}^n)^n$. We note that $T$ is injective since $\mathrm{d}\omega$ has infinite mass, being a doubling measure on $\mathbb{R}^n$. We denote by $\partial_t - \omega^{-1}(x) \mathrm{div}_x A(t,x) \nabla_x$ the degenerate parabolic operator associated with the family $(B_t)_{t \in I}$. At this point, we can apply the theory developed above to obtain well-posedness results on $I \times \mathbb{R}^n$, for the Cauchy problems with test functions in $\mathcal{D}(I\times \mathbb{R}^n)$ using Theorem \ref{thm: passage au concret}, assuming \textit{a priori} that weak solutions belong to $L^1_{\mathrm{loc}}(I; L^2_\omega(\mathbb{R}^n))$ if $I$ is unbounded. \begin{thm} The operator $\partial_t - \omega^{-1}(x) \mathrm{div}_x A(t,x) \nabla_x$ on $I \times \mathbb{R}^n$ has a unique fundamental solution. \end{thm} \bibliographystyle{alpha} \bibliography{references} \end{document}
2412.18427v1
http://arxiv.org/abs/2412.18427v1
Global Bifurcation Curve for Fourth-Order MEMS/NEMS Models with Clamped Boundary Conditions
\documentclass[reqno]{amsart} \usepackage{amsaddr} \usepackage{graphicx} \usepackage{amsmath,amsthm,amssymb,amsfonts} \usepackage{hyperref} \usepackage[noadjust]{cite} \newtheorem*{example}{Examples} \renewcommand\baselinestretch{1.3} \begin{document} \title[\hfilneg 2023/09\hfil Fourth-order MEMS/NEMS models] {Global bifurcation curve for fourth-order MEMS/NEMS models with clamped boundary conditions} \author[M. Lin \& H. Pan\hfil 2023/09\hfilneg]{Manting Lin and Hongjing Pan$^*$} \address{School of Mathematical Sciences, South China Normal University,\\ Guangzhou 510631, P. R. China} \email{[email protected] \& [email protected]} \thanks{${}^*$Corresponding author.} \subjclass[2020]{34B18, 34C23, 74G35, 74K10, 74H60} \keywords{Global bifurcation, Continuation method, Exact multiplicity, A priori estimate, MEMS/NEMS, Doubly clamped boundary conditions, Singularity, Turning point} \begin{abstract} Global solution curve and exact multiplicity of positive solutions for a class of fourth-order equations with doubly clamped boundary conditions are established. The results extend a theorem of P. Korman (2004) by allowing the presence of a singularity in the nonlinearity. The paper also provides a global a priori bound for $C^{3}$-norm of positive solutions, which is optimal in terms of regularity. Examples arising in MEMS/NEMS models are presented to illustrate applications of the main results. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{remark}[theorem]{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \section{Introduction}\label{sec:1} Consider the following fourth-order equation with the doubly clamped (Dirichlet) boundary conditions \begin{equation}\label{eq:4order} \left\{\begin{array}{l} u^{\prime \prime \prime \prime}(x)=\lambda f(u(x)), \quad x \in(0,1), \\ u(0)=u(1)=u^{\prime}(0)=u^{\prime}(1)=0, \end{array}\right. 
\end{equation} where $\lambda$ is a positive parameter and $f(u)$ is a continuous function. Problem \eqref{eq:4order} arises in many physical models describing the deformation of elastic objects clamped at the endpoints. The nonlinearity $f$ represents a nonlinear external force. In this paper, we are concerned with the global structure of the solution set of \eqref{eq:4order}, i.e., the structure of the global solution curve \begin{equation*} \mathcal{S}=\left\{(\lambda,u)\mid\lambda>0 \text{ and } u \text{ is a solution of } \eqref{eq:4order}_\lambda\right\}. \end{equation*} By a \emph{solution} we mean that $u\in C^4[0,1]$ satisfies \eqref{eq:4order}. For each given $\lambda>0$, we denote the solutions of \eqref{eq:4order} by $u_{\lambda}(x)$, or $u(x)$ for short. Specifically, we focus on cases where the nonlinear term $f$ exhibits a singularity. Such problems are practically significant, especially in models of Micro/Nano-Electro-Mechanical Systems (MEMS/NEMS). For instance, $f(u)=\frac{1}{(1-u)^2}$ models the Coulomb force in 2-D parallel-plate capacitors, which follows the inverse-square law. The recent monograph \cite{Koochi2020} presents various 1-D beam-type MEMS/NEMS models that conform to the equation in \eqref{eq:4order}. Notably, the singularity of the function $f$ is a critical characteristic, as demonstrated in the examples following Theorem \ref{th2:u bounded} below. These models have significantly stimulated our research interest. In the past two decades, fourth-order MEMS/NEMS models have attracted significant attention, and various numerical and theoretical results have been established (cf. \cite{Esposito2010,Koochi2020,Laurencot2017,Lin2007,pelesko2002modeling,Cassani2009,Laurencot2014,Liu2022,Lindsay2014,Cowan2010,Guo2014,Guo2008,Guo2009}).
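To make the problem concrete, the minimal positive solution of \eqref{eq:4order} with the MEMS nonlinearity $f(u)=1/(1-u)^2$ can be approximated numerically. The sketch below is our own illustration, not part of the paper's analysis; the grid size, tolerance, and ghost-point treatment of $u'(0)=u'(1)=0$ are standard but arbitrary choices.

```python
import numpy as np

def fourth_order_mems(lam, N=100, max_iter=50, tol=1e-9):
    """Newton iteration for the clamped MEMS beam
        u'''' = lam/(1-u)^2 on (0,1), u(0)=u(1)=u'(0)=u'(1)=0,
    discretized with the standard 5-point stencil on a uniform grid
    (N subintervals; the clamped ends are imposed through the ghost
    points u_{-1}=u_1 and u_{N+1}=u_{N-1}).  Returns the interior
    nodal values, or None if the iteration leaves the physical range
    u < 1 or fails to converge (e.g. for lam past the fold)."""
    h = 1.0 / N
    n = N - 1
    A = np.zeros((n, n))
    for i in range(n):
        for off, c in zip(range(-2, 3), [1.0, -4.0, 6.0, -4.0, 1.0]):
            if 0 <= i + off < n:
                A[i, i + off] += c
    A[0, 0] += 1.0       # ghost point u_{-1} = u_1
    A[-1, -1] += 1.0     # ghost point u_{N+1} = u_{N-1}
    A /= h**4
    u = np.zeros(n)
    for _ in range(max_iter):
        F = A @ u - lam / (1.0 - u)**2            # residual
        J = A - np.diag(2.0 * lam / (1.0 - u)**3)  # Jacobian
        du = np.linalg.solve(J, -F)
        u += du
        if np.max(np.abs(u)) >= 1.0:
            return None
        if np.max(np.abs(du)) < tol:
            return u
    return None
```

For small $\lambda$ the computed deflection is close to the linear clamped-beam profile $\lambda x^2(1-x)^2/24$ (maximal value $\lambda/384$ at $x=\tfrac12$), symmetric about the midpoint, as in Theorem \ref{th2:u bounded}; for $\lambda$ past the fold, Newton's method from the zero guess is expected to fail, consistent with non-existence.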
However, to the best of the authors' knowledge, the presently known findings are still some distance away from characterizing the \emph{complete} structure of the solution curve of problem \eqref{eq:4order} when $f(u)$ exhibits a singularity at a certain point $r >0$. Significant progress in this direction was made in \cite{Laurencot2014} for the following problem \begin{equation}\label{eq:biharmonic} \Delta^{2} u-T \Delta u=\frac{\lambda}{(1-u)^{2}} \quad \text { in } B_1, \quad u=\partial_n u=0 \quad \text {on } \partial B_1, \end{equation} where $T \geq 0$ and $B_1 \subset {\mathbb R}^{d}\ (d=1,2)$ is the unit ball. Specifically, a continuous global bifurcation curve has been derived in \cite[Theorem 1.1]{Laurencot2014} by the bifurcation theory of \cite{Buffoni2000} for real analytic functions. Furthermore, the behaviors at the ends of the curve have been confirmed in \cite{Laurencot2014}. However, the shape of the middle part as well as the exact multiplicity of solutions remains to be explored. In the present paper, we focus on problem \eqref{eq:4order} --- the 1-D case, $T=0$, but with more general $f(u)$, which includes a variety of examples coming from \cite{Koochi2020}. This paper can be considered a sequel to the work of Liu and the second author in \cite{Liu2022}, where they derived the complete global solution curve for the same equation in \eqref{eq:4order} with the doubly pinned (Navier) boundary conditions $$u(0)= u(1) = u^{\prime\prime}(0) = u^{\prime\prime}(1)=0. $$ However, the current problem is more challenging because it cannot be decomposed into a `well' system of second-order equations, to which the maximum principle can directly apply, and because the concavity of positive solutions varies over the interval $(0,1)$.
Rynne \cite{Rynne04} studied the $2m$-th order Dirichlet boundary value problem under various convexity or concavity type assumptions on $f$, showed that the problem has a smooth solution curve $\mathcal{S}_0$ emanating from $(\lambda, u)=(0, 0)$, and described the possible shapes and asymptotics of the curve. For problem \eqref{eq:4order} with convex increasing nonlinearity $f$ defined on $[0,\infty)$, Korman \cite[Theorem 1.1]{Korman2004} first proved the complete structure of the global solution curve, using a bifurcation approach. \begin{theorem}[\cite{Korman2004}]\label{thm:korman} Assume that $ f(u) \in C^{2}(0,\infty) \cap C^1[0,\infty) $ satisfies $f(u)>0$ for $u \geq 0$, $f^{\prime}(0) \geq 0$, $f^{\prime \prime}(u) >0$ for $u>0$, and \begin{align} \label{eq:supper} & \lim _{u \rightarrow \infty} \frac{f(u)}{u}=\infty. \end{align} Then all positive solutions of \eqref{eq:4order} lie on a unique smooth curve of solutions. This curve starts at $(\lambda, u)=(0,0) $, continues for $\lambda> 0$ until a critical $\lambda_{0}$, where it bends back, and then continues for decreasing $\lambda$ without any more turns, tending to infinity as $\lambda\downarrow 0$. In other words, we have exactly two, one, or no solutions, depending on whether $0<\lambda<\lambda_{0}$, $\lambda=\lambda_{0}$, or $\lambda>\lambda_{0}$. Moreover, all solutions are symmetric with respect to the midpoint $x=\frac{1}{2}$, and the maximum value of the solution, $ u(\frac{1}{2}) $, is strictly monotone on the curve. \end{theorem} The bifurcation diagram is depicted in Figure \ref{fig:fig1}(i). This result extends a theorem for the second-order problem in \cite[Example 4.1 \& Theorem 4.8]{Crandall1973} (also contained in \cite[Theorem 3.2]{Laetsch1970}) to the fourth-order one. Typical examples of $f$ satisfying the assumptions of Theorem \ref{thm:korman} are the Gelfand nonlinearity $f(u)=e^u$ and the power nonlinearity $f(u)=(1+u)^p\ (p>1)$.
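The fold described in Theorem \ref{thm:korman} can be observed numerically. Since $u(\frac12)$ is strictly monotone along the curve, one can parametrize solutions by the midpoint height $s=u(\frac12)$ and solve for the pair $(u,\lambda)$. The sketch below is our own illustration for the Gelfand nonlinearity $f(u)=e^u$ (discretization, step sizes, and tolerances are arbitrary choices), using an augmented Newton iteration seeded by the previous point on the curve.

```python
import numpy as np

def clamped_biharmonic(N):
    """5-point finite-difference matrix for u'''' on (0,1) with clamped
    ends u(0)=u(1)=u'(0)=u'(1)=0 (ghost-point treatment), interior nodes."""
    h = 1.0 / N
    n = N - 1
    A = np.zeros((n, n))
    for i in range(n):
        for off, c in zip(range(-2, 3), [1.0, -4.0, 6.0, -4.0, 1.0]):
            if 0 <= i + off < n:
                A[i, i + off] += c
    A[0, 0] += 1.0       # ghost point u_{-1} = u_1
    A[-1, -1] += 1.0     # ghost point u_{N+1} = u_{N-1}
    return A / h**4

def continue_in_height(s_values, N=100):
    """Trace the solution curve of u'''' = lam*exp(u) (clamped ends) by
    prescribing the midpoint height s = u(1/2) and solving the augmented
    system for (u, lam) by Newton's method.  Returns lam along the curve."""
    A = clamped_biharmonic(N)
    n = N - 1
    mid = n // 2                       # node at x = 1/2 (N even)
    u, lam = np.zeros(n), 0.0
    lams = []
    for s in s_values:
        for _ in range(60):
            eu = np.exp(u)
            F = np.concatenate([A @ u - lam * eu, [u[mid] - s]])
            J = np.zeros((n + 1, n + 1))
            J[:n, :n] = A - lam * np.diag(eu)  # d/du of the residual
            J[:n, n] = -eu                     # d/dlam of the residual
            J[n, mid] = 1.0                    # midpoint-height constraint
            step = np.linalg.solve(J, -F)
            u, lam = u + step[:n], lam + step[n]
            if np.max(np.abs(step)) < 1e-6 * (1.0 + abs(lam)):
                break
        lams.append(lam)
    return np.array(lams)
```

Along increasing $s$, the computed $\lambda$ first increases to a maximal value and then decreases toward $0$, qualitatively reproducing the fold of Figure \ref{fig:fig1}(i).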
However, if $f$ is singular at some $r>0$, for example, $f(u)=\frac{1}{(r-u)^{p}} \ (p>0)$, then Theorem \ref{thm:korman} is no longer applicable. Our goal in this paper is to establish a global bifurcation result similar to Theorem \ref{thm:korman} but dealing with problems with a singular nonlinearity. To this end, we replace the superlinear condition \eqref{eq:supper} at infinity with a growth condition near the singularity; see \eqref{eq:supper2} below. The main result of this paper is as follows. \begin{theorem}\label{th2:u bounded} Let $r\in (0,\infty)$. Assume that $ f(u) \in C^{2}(0, r) \cap C[0, r)$ satisfies \begin{align}\label{eq:f positive} & f(u)>0 \quad \text{for } 0 \leq u<r, \\\label{eq:f increasing} & f^{\prime}(u) > 0 \quad \text{for } 0<u<r, \\\label{eq:convex} & f^{\prime \prime}(u) >0 \quad \text{for } 0<u<r, \\\label{eq:supper2} & 0<\liminf_{u \rightarrow r^{-}} \,(r-u)f(u) \leq \infty. \end{align} Then all conclusions of Theorem \ref{thm:korman} still hold, except that the solution curve finally tends to a singular solution $w$ instead of infinity as $\lambda \downarrow 0$. Here, $w$ is an explicitly given function, symmetric with respect to $x=\frac{1}{2}$, which is its unique maximum point, with $w( \frac{1}{2}) = r$ and $w\in ( {C}^{2+\alpha}[0,1] \setminus {C}^{3}[0,1])\cap C^{4}( [0,1]\setminus \{ \frac{1}{2}\} )$ for any $\alpha\in(0,1)$. \end{theorem} The bifurcation diagram is depicted in Figure \ref{fig:fig1}(ii). The explicit expression of the singular solution $w$ is presented in Lemma 2.3 below. The method adopted in our study is the bifurcation approach to fourth-order Dirichlet problems, originally formulated by Korman \cite{Korman2004}. However, the singularity of $f$ poses new challenges due to the potential emergence of singular solutions. To address the substantial difficulties in applying Korman's method, we have established the crucial a priori bound $\|u\|_\infty<c<r$ for the solutions of \eqref{eq:4order}.
Consequently, the arguments put forward by Korman in \cite{Korman2004} retain their validity for the present problem. An additional challenge concerns the characterization of the singular solutions. To overcome this, we adopt an idea from Lauren\c{c}ot and Walker \cite[Theorem 2.20]{Laurencot2014}, where a singular solution for problem \eqref{eq:biharmonic} is completely described. Combining this idea with our global a priori bound $\|u\|_{C^3}<C$, we also obtain an explicit singular solution of \eqref{eq:4order} with more general $f$, not limited to $f(u) = \frac{1}{(1-u)^{2}}$. The primary contribution of this paper lies in the derivation of the a priori estimates and the subsequent applications to some novel models arising from \cite{Koochi2020}. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{figure1c.pdf} \caption{Global bifurcation diagrams provided by Theorems \ref{thm:korman} and \ref{th2:u bounded}. (i) $r=+\infty$ and $ \lim _{u \rightarrow +\infty} \frac{f(u)}{u}=+\infty$. (ii) $r<+\infty$ and $ \liminf_{u \rightarrow r^{-}} \,(r-u)f(u) >0 $.} \label{fig:fig1} \end{figure} \begin{example} Theorem \ref{th2:u bounded} applies to many doubly-supported beam-type MEMS/NEMS models arising from the recent monograph \cite[Chapter 2]{Koochi2020} when the boundary conditions are of the clamped-clamped type (cf. \cite[(2.126)]{Koochi2020}). Some typical governing equations are presented as follows. \begin{enumerate} \item Carbon-nanotube actuator in NEMS (cf. \cite[(2.147)]{Koochi2020}) $$ u''''=\frac{\beta_{n}}{(1-u)^{n}}. $$ \item Size-dependent double-sided nanobridge with single nanowire (cf. \cite[(2.202)]{Koochi2020}) $$ u''''=\frac{\beta_{vdW}}{k(1+\delta) (1-u)^{4}}+\frac{\alpha}{(1+\delta)(1-u)\ln^2[2k(1-u)]}.$$ \item Size-dependent double-sided nanobridge with two nanowires (cf.
\cite[(2.269)]{Koochi2020}) $$u''''=\frac{\beta_{vdW}}{ 2(1+\delta)(1-2u)^{5/2}}+\frac{\alpha}{2(1+\delta)(1-2u)\ln^2[k(1-2u)]}.$$ \item Size-dependent nanoactuator (cf. \cite[(2.103)]{Koochi2020}) $$ u''''=\frac{\beta_{Cas}}{(1+\delta)(1-u)^{4}}+\frac{\alpha}{(1+\delta)(1-u)^{2}}+\frac{\alpha \gamma}{(1+\delta)(1-u)}. $$ \end{enumerate} Here, $\beta_{n}$ and $\beta_{vdW}$ are the dimensionless parameters of the van der Waals force, $\delta$ incorporates the size effect, $\alpha$ is associated with the external voltage, and $\beta_{Cas}$ is the dimensionless parameter of the Casimir force; $k$ indicates the gap to nanowire radius ratio; $\gamma$ is related to the gap-to-width ratio associated with the fringing field effect. Since the verification of these conditions for the examples above is the same as in \cite{Liu2022} for the hinged-hinged boundary conditions, we omit the details and refer the reader to \cite[Section 2]{Liu2022}. \end{example} The subsequent sections of this paper are organized as follows. In Section 2, we enumerate some established facts and prove several pivotal lemmas, including the crucial a priori bounds. Section 3 offers the proof of the main theorem. Lastly, in Section 4, we provide concluding remarks and propose some open problems for further research. \section{Lemmas}\label{sec:lemmas} In this section, we prove a priori estimates which play crucial roles in the proof of Theorem \ref{th2:u bounded}. First, we list several facts about problem \eqref{eq:4order} that have been proven in \cite{Korman2004} for the case $u\in(0,+\infty)$, but clearly hold after modifying the range from $(0,+\infty)$ to $(0,r)$.
Specifically, assuming $f(u) \in C^1(0, r) \cap C[0, r)$ satisfies \eqref{eq:f positive} and \eqref{eq:f increasing}, we derive the following facts: \begin{enumerate} \item[({A})] (Linearization) According to Korman \cite[Corollary 2.2]{Korman2004}, the solution space of the linearized problem \begin{equation}\label{eq:liner2} \left\{\begin{array}{l} w^{\prime \prime\prime \prime}(x)=\lambda f^{\prime}(u)w, \quad x \in(0, 1), \\ w(0)=w^{\prime}(0)=w(1)=w^{\prime}(1)=0, \end{array}\right. \end{equation} is either trivial or one-dimensional. Furthermore, according to Korman \cite[Theorem 2.13]{Korman2004}, $w(x)$ cannot vanish inside $(0,1)$, i.e., the sign of any non-trivial solution of \eqref{eq:liner2} does not change. \item[({B})] (Convexity and inflection points) According to Korman \cite[Lemma 2.3]{Korman2004}, any positive solution of \eqref{eq:4order} satisfies \begin{equation}\label{eq:u2>0} u^{\prime\prime}(0)>0\quad\text{and} \quad u^{\prime\prime}(1)>0. \end{equation} Furthermore, according to \cite[Lemma 2.7]{Korman2004}, $u(x)$ has exactly one local maximum and exactly two inflection points. \item[({C})] (Symmetry) According to Korman \cite[Lemma 2.9]{Korman2004}, any positive solution of \eqref{eq:4order} is symmetric with respect to $x=\frac{1}{2}$. Moreover, $u^{\prime}(x)>0$ on $(0, \frac{1}{2})$. \item[({D})] (Global parameterization) According to Korman \cite[Lemma 2.10]{Korman2004}, all positive solutions of \eqref{eq:4order} are globally parameterized by their maximum values $ u_{\lambda}(\frac{1}{2})$. Precisely, for each $p> 0$, there is at most one $\lambda> 0$ and at most one solution $u_{\lambda}(x)$ of problem \eqref{eq:4order} such that $u_{\lambda}(\frac{1}{2})=p$. \end{enumerate} Next, we establish a priori estimates, which play key roles in the proof of the main theorem.
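Before turning to the estimates, it may help to illustrate hypothesis \eqref{eq:supper2} on the model singular nonlinearity $f(u)=(r-u)^{-p}$ with $p>0$; the following side computation (not needed in the proofs) shows for which $p$ the hypothesis holds:

```latex
(r-u)f(u)=(r-u)^{1-p}
\xrightarrow[u\to r^{-}]{}
\begin{cases}
\infty, & p>1,\\
1, & p=1,\\
0, & 0<p<1,
\end{cases}
\qquad\text{so \eqref{eq:supper2} holds precisely when } p\geq 1.
```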
\begin{lemma}\label{lem:u bounded} Assume that $f(u) \in C^1(0, r) \cap C[0, r)$ satisfies \eqref{eq:f positive}, \eqref{eq:f increasing} and \eqref{eq:supper2}. If $I \subset [0,\infty)$ is a bounded interval, then there exists a positive constant $C$ such that any positive solution $u_{\lambda}$ of \eqref{eq:4order} with $\lambda\in I$ satisfies \begin{equation}\label{eq:u bound0} \left \|u_\lambda(x)\right \|_{C^{3}[0,1]} \le C. \end{equation} If further $I \subset (0,\infty)$ is a compact interval, then there exist two positive constants $c$ and $C_1$ such that any positive solution $u_{\lambda}$ of \eqref{eq:4order} with $\lambda\in I$ satisfies \begin{equation}\label{eq:u bound} \left \|u_\lambda(x)\right \|_{C[0,1]} \leq c <r\quad\text{and}\quad \left \|u_\lambda(x)\right \|_{C^{4}[0,1]} \le C_1. \end{equation} \end{lemma} \begin{remark} For problem \eqref{eq:biharmonic}, an a priori bound for the $C^{\frac{3}{2}}$-norm of solutions has been established in \cite[Lemma 2.11]{Laurencot2014}. In terms of regularity, the a priori bound \eqref{eq:u bound0} is optimal in the sense that for any $\alpha\in (0,1)$, there exist a sequence $\lambda \rightarrow 0$ and a function $w\in C^{2+\alpha}[0,1]\setminus C^{3}[0,1]$ such that $u_\lambda\rightarrow w $ in $ C^{2+\alpha}[0,1]$; see Lemma \ref{lem:endpoint} below for details. \end{remark} \begin{proof}[\textbf{Proof}] For each given $\lambda>0$, denote by $u_\lambda$ a positive solution of \eqref{eq:4order}, if one exists. We claim that \begin{equation}\label{eq:solutionbounded} u_\lambda(x)<r \quad \text{for all } x \in (0,1). \end{equation} Otherwise, there is an $x_{0} \in(0,1)$ such that $ u_{\lambda}(x_{0})=r$ (take $x_0$ to be the first such number from the left). Then, it follows from \eqref{eq:supper2} that $ \lim_{x \to x_{0}^-} f(u_\lambda(x)) = \infty$. This contradicts the fact that $u_\lambda\in C^{4}(0,1)$ satisfies the equation in \eqref{eq:4order}. From now on, we omit the subscript of $u_\lambda$ for simplicity.
As mentioned in ({C}) above, since \eqref{eq:f positive} and \eqref{eq:f increasing} hold, any positive solution $u(x)$ of \eqref{eq:4order} is symmetric with respect to $x =\frac{1}{2}$ and $u^{\prime}(x)>0\text{ on } (0,\frac{1}{2})$. Then $u(x)$ takes its maximum at $x=\frac{1}{2}$ and $u^{\prime}(\frac{1}{2})= 0=u^{\prime\prime\prime}(\frac{1}{2})$. Since $u^{\prime\prime\prime\prime}=\lambda f(u)>0$ by \eqref{eq:f positive}, $u^{\prime\prime\prime}$ is increasing on $[0,1]$, and hence $u^{\prime\prime\prime}(x)< 0$ on $[0,\frac{1}{2})$. In what follows, it suffices to consider the problem on the interval $[0,\frac{1}{2}]$. We next divide the proof into three steps. \textbf{Step 1.} We claim that for any given $x\in (0,\frac{1}{2})$, $u^{\prime\prime}(x) $ is bounded for all $ \lambda \in I$. Without loss of generality, let $x=\frac{1}{4}$. We next prove that $u^{\prime\prime}(\frac{1}{4}) $ is uniformly bounded for $ \lambda \in I$. Assume, on the contrary, that $u^{\prime\prime}(\frac{1}{4})$ is unbounded along some sequence $\lambda_l\in I$. On the one hand, if $u^{\prime\prime}(\frac{1}{4}) $ is unbounded from above, we may assume that $u^{\prime\prime}(\frac{1}{4}) $ is positive. Since $ u^{\prime\prime\prime}(x) < 0$ on $(0,\frac{1}{2})$, the function $u^{\prime\prime}$ is decreasing there, and hence $$ u^{\prime}(x) =\int_{0}^{x}u^{\prime\prime}(t)\,\mathrm{d}t+u^{\prime}(0) =\int_{0}^{x}u^{\prime\prime}(t)\,\mathrm{d}t >u^{\prime\prime}(\frac{1}{4})x\quad \text{for } x\in (0,\frac{1}{4}).$$ Furthermore, since $u^{\prime}(x)> 0$ on $(0,\frac{1}{2})$, it follows that $$ u(x) =\int_{0}^{x}u^{\prime}(t)\,\mathrm{d}t+u(0) =\int_{0}^{x}u^{\prime}(t)\,\mathrm{d}t >\frac{1}{2}u^{\prime\prime}(\frac{1}{4})x^2\quad \text{for } x \in(0,\frac{1}{4}), $$ which implies that $u(x)$ is positive and unbounded on $(\frac{1}{8},\frac{1}{4})$, contradicting the bound \eqref{eq:solutionbounded}. So $ u^{\prime\prime}(\frac{1}{4}) $ is bounded from above.
On the other hand, if $u^{\prime\prime}(\frac{1}{4}) $ is unbounded from below, we may assume that $u^{\prime\prime}(\frac{1}{4}) $ is negative. Arguing in the same way as above, we obtain that $u(x)$ is positive and unbounded on $ (\frac{3}{8},\frac{1}{2}) $, contradicting the bound \eqref{eq:solutionbounded}. So $ u^{\prime\prime}(\frac{1}{4}) $ is also bounded from below. The claim is true. \textbf{Step 2.} We prove the a priori bound \eqref{eq:u bound0}. We claim that $u^{\prime \prime \prime}(0)$ is bounded for all $\lambda\in I$. Since $ u^{\prime \prime\prime}(0) <0$, it suffices to prove that $u^{\prime \prime\prime}(0)$ is bounded from below. Assume, on the contrary, that $u^{\prime\prime\prime}(0)$ is unbounded along some sequence $\lambda_l\in I$. Since $$ (u^{\prime\prime \prime})''(x)=(u'''')'(x)=\lambda f^{\prime}(u(x))u^{\prime}(x)\quad \text{ on } (0,\frac{1}{2}),$$ it follows from \eqref{eq:f increasing} and ({C}) that $u^{\prime\prime \prime}(x)$ is convex on $(0,\frac{1}{2})$ and hence $$u^{\prime \prime \prime}\Big(\theta\cdot 0+(1-\theta)\frac{1}{2}\Big) \leq \theta u^{\prime \prime \prime}(0)+(1-\theta) u^{\prime \prime \prime}\Big(\frac{1}{2}\Big)=\theta u^{\prime \prime \prime}(0)<0 \quad \text{for any } \theta \in(0, 1). $$ That is, for any given $ \gamma \in(0, \frac{1}{2})$, $u^{\prime \prime\prime}(\gamma)$ is negative and unbounded from below. Since $u^{\prime \prime\prime}(x)$ is increasing on $(0,\frac{1}{2})$, it follows that $u^{\prime\prime\prime}(x)$ is unbounded from below on any proper subinterval $(\beta,\gamma)$ of $\left(0, \frac{1}{2}\right)$. Therefore, \begin{equation}\label{eq:Integral} \int_{\beta}^{\gamma} u^{\prime \prime \prime}(t) \,\mathrm{d}t<(\gamma-\beta)u^{\prime \prime\prime}(\gamma)<0 \;\text{ is unbounded from below}.
\end{equation} However, the boundedness of $ u^{\prime\prime}(\beta) $ and $u^{\prime\prime}(\gamma)$ for any $\lambda \in I$, as shown in Step 1, implies that \begin{equation}\label{eq:boundintegral} \int_{\beta}^{\gamma} u^{\prime \prime \prime}(t) \,\mathrm{d}t=u^{\prime\prime}(\gamma)-u^{\prime\prime}(\beta) \;\text{ is bounded}, \end{equation} contradicting \eqref{eq:Integral}. So the claim is true. Due to the symmetry of $u$, $u^{\prime \prime \prime}(1)$ is also bounded for all $\lambda\in I$. Then the monotonicity of $u^{\prime\prime\prime}$ on $[0,1]$ yields the boundedness of $\|u^{\prime\prime\prime}\|_{C^0}$ for all $\lambda \in I$. Together with the bound \eqref{eq:solutionbounded} and the boundary conditions, it follows that the bound \eqref{eq:u bound0} holds. \textbf{Step 3.} We prove the a priori bounds in \eqref{eq:u bound} provided that $I \subset (0,\infty)$ is compact. Since $f$ satisfies conditions \eqref{eq:f positive} and \eqref{eq:supper2}, there exists a constant $a>0$ such that \begin{equation}\label{eq:f>a} f(u)\geq \frac{a}{r-u} \qquad \text{for all } u \in (0, r). \end{equation} Indeed, set $a_1:=\liminf_{u \rightarrow r^{-}} \,(r-u)f(u)$. Then \eqref{eq:supper2} implies that $0<a_1\leq\infty$. If $a_1<\infty$, then for any given $\varepsilon\in (0, a_1)$, there exists a number $\delta\in(0,r)$ such that $(r-u)f(u)>a_1-\varepsilon$ on $(r-\delta,r)$. Since $(r-u)f(u)$ is continuous in $[0,r-\delta]$, we define $a_2:=\min_{[0,r-\delta]}(r-u)f(u)$ and $a:=\min\{a_1-\varepsilon,a_2\}$. Then \eqref{eq:f positive} implies that $a_2>0$ and hence $a>0$. The case $a_1=\infty$ is handled similarly, replacing $a_1-\varepsilon$ by an arbitrary $M>0$. So \eqref{eq:f>a} holds.
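For the model MEMS nonlinearity the constant in \eqref{eq:f>a} can be made explicit; the following side computation (ours, for illustration only) shows that for $f(u)=\frac{1}{(r-u)^{2}}$ one may take $a=\frac{1}{r}$:

```latex
(r-u)f(u)=\frac{1}{r-u}\geq\frac{1}{r}\quad\text{for } u\in[0,r),
\qquad\text{hence}\qquad
f(u)=\frac{1}{(r-u)^{2}}\geq\frac{1/r}{r-u}\quad\text{on } (0,r).
```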
Multiplying the equation in \eqref{eq:4order} by $u^{\prime} $ and integrating over $(0,x)$ by parts, we derive the energy identity: $${u}^{\prime} {u}^{\prime \prime \prime}-\frac{1}{2} {u}^{\prime \prime 2}-\lambda {F}({u})=-\frac{1}{2} {u}^{\prime \prime}(0)^2\qquad \text{ for all }x \in [0,1],$$ where $ {F}({u})= \int_{0}^{u} f(t) \,\mathrm{d}t. $ By Step 2, we have the a priori bound $\|u_\lambda(x) \|_{C^{3}[0,1]} \le C$. Combining this with the energy identity, we get that $$ \lambda F(u)=u^{\prime} u^{\prime \prime \prime}-\frac{1}{2} u^{\prime \prime 2}+\frac{1}{2} u^{\prime \prime}(0)^2 \leq M, $$ where $M$ is a positive constant. Furthermore, it follows from \eqref{eq:f>a} that $$ M \geq \lambda F(u)=\lambda \int_0^u f(t) \,\mathrm{d}t \geq \lambda \int_0^u \frac{a}{r-t} \,\mathrm{d}t=- \lambda a \ln (r-t)\Big|_0 ^u = \lambda a \ln \frac{r}{r-u}, $$ which implies that $$ u \leq r(1-e^{-\frac{M}{a \lambda}}) \leq r(1-e^{-\frac{M}{a \lambda_*}}).$$ Here, $\lambda_*$ is the positive lower bound of the compact interval $I$. Letting ${c}=r(1-e^{-\frac{M}{a \lambda_*}})$, we obtain that $u(x)\le c<r$ on $[0,1]$ uniformly for $\lambda\in I$. Now, since $f(u)$ is a continuous function on the interval $[0,c]$ and $u''''=\lambda f(u)$, it follows that $u''''$ is uniformly bounded on $ [0,1]$ for all $\lambda\in I$. Combining the boundedness of $u$ and $u''''$ with the boundary conditions, we obtain that $\left \| u \right \| _{C^{4}} $ is bounded. \end{proof} We next give an upper bound of $\lambda$ for the existence of positive solutions of \eqref{eq:4order} with a singular nonlinearity. \begin{lemma}\label{lem:l2} Assume that $f(u)\in C[0,r) $ satisfies conditions \eqref{eq:f positive} and \eqref{eq:supper2}. Then there exists $\lambda_0 >0$ such that problem \eqref{eq:4order} has no positive solution for $\lambda>\lambda_0$.
\end{lemma} \begin{proof} Since the line $ y=\frac{4}{r^{2}}x $ through the origin is tangent to the curve $ y=\frac{1}{r-x} $ from below, it follows from \eqref{eq:f>a} that \begin{equation}\label{eq:f>2} f(u)\geq \frac{a}{r-u} \geq \frac{4a}{r^{2}}u \qquad \text{for all } u \in (0, r). \end{equation} With condition \eqref{eq:f>2} in place, the argument that follows is standard. Let $\mu_1>0$ and $\varphi_1(x)>0$ on $(0,1)$ be the principal eigenpair of the problem $$\left\{\begin{array}{l} \varphi^{\prime \prime \prime \prime}=\mu \varphi \quad \text { in }(0,1); \\ \varphi(0)=\varphi^{\prime}(0)=0=\varphi(1)=\varphi^{\prime}(1). \end{array}\right. $$ Let $u$ be a positive solution of \eqref{eq:4order} with some $\lambda$. Multiplying the equation in \eqref{eq:4order} by $\varphi_1$, integrating over $(0,1)$ by parts, and using the boundary conditions, we obtain from \eqref{eq:f>2} that $$ \mu_1 \int_0^1 u \varphi_1 \,\mathrm{d} x= \int_0^1 u \varphi''''_1 \,\mathrm{d} x = \int_0^1 u'''' \varphi_1 \,\mathrm{d} x =\lambda \int_0^1 f(u)\varphi_1 \,\mathrm{d} x \geq \lambda \frac{4a}{r^{2}} \int_0^1 u \varphi_1 \,\mathrm{d} x. $$ It follows that $$ \lambda \leq \frac{r^{2}}{4a}\mu_1.$$ Therefore, the set of $\lambda$ for which problem \eqref{eq:4order} admits a positive solution is bounded. Taking $\lambda_0$ to be its supremum completes the proof. \end{proof} The following result gives the existence and uniqueness of a singular solution at $\lambda=0$ as well as its explicit expression. \begin{lemma}\label{lem:endpoint} Assume that $f(u)\in C^1(0,r)\cap C[0,r) $ satisfies conditions \eqref{eq:f positive}, \eqref{eq:f increasing} and \eqref{eq:supper2}. Let $\alpha \in (0, 1)$ and let $\{(\lambda_n,u_n)\}$ be a sequence of solutions of problem \eqref{eq:4order} with $\lambda_n\rightarrow 0$. Then there exist a subsequence of $\{u_n\}$ (still denoted by $u_n$) and a function $w(x)$ such that \begin{equation}\label{eq:endpoints} \lim _{n \rightarrow \infty}\|u_n-w\|_{C^{2+\alpha}[0,1]}=0.
\end{equation} Moreover, either $w \equiv 0$ or $\max _{x \in[0,1]} w(x)=r$. In the latter case, $w\in ( {C}^{2+\alpha}[0,1]\setminus{C}^{3}[0,1])\cap C^{4}( [0,1]\setminus \{ \frac{1}{2}\} )$ for any $\alpha\in(0,1)$, and $w$ satisfies \begin{equation}\label{eq:singulareq} \left\{\begin{array}{l} w^{\prime \prime \prime \prime}(x)=0, \quad x \in[0, \frac{1}{2}) \cup(\frac{1}{2}, 1]; \\ w(0)=w^{\prime}(0)=0=w(1)=w^{\prime}(1); \\ w(\frac{1}{2})-r=0=w^{\prime}(\frac{1}{2}), \end{array}\right. \end{equation} which is solved uniquely by \begin{equation}\label{eq:explicit} w(x)= \begin{cases}-16 r x^3+12 r x^2, & x \in[0,\frac{1}{2}]; \\ -16 r (1-x)^3+12 r (1-x)^2, & x \in[\frac{1}{2}, 1]. \end{cases} \end{equation} \end{lemma} \begin{proof} The existence of $w \in C^{2+\alpha}[0,1]$ and the convergence relation \eqref{eq:endpoints} follow directly from the a priori bound \eqref{eq:u bound0}, due to the compact embedding $C^{3}[0,1]\hookrightarrow C^{2+\alpha}[0,1]$ for any $\alpha \in [0, 1)$. As a limit in $C^{2+\alpha}[0,1]$, $w$ satisfies \begin{equation}\label{eq:wbvc} w(0)=w^{\prime}(0)=w(1)=w^{\prime}(1)=0=w^{\prime}(\tfrac{1}{2}), \end{equation} since every solution satisfies the boundary conditions and the symmetry property. In view of \eqref{eq:solutionbounded}, we know $ w(x)\leq r$. We now distinguish two cases. \textbf{Case 1.} If $\max _{x \in[0,1]} w(x)<r$, we claim that $w \equiv 0$ for all $x \in[0,1]$. Since in this case $w(x)<r$ for all $x \in[0,1]$, it follows from \eqref{eq:endpoints} that $f(u_n)$ is bounded on $[0,1]$ and $\lambda_n f(u_n) \rightarrow 0$ as $\lambda_n \rightarrow 0$. Every solution $(\lambda_n,u_n)$ of \eqref{eq:4order} admits the integral form \begin{equation}\label{eq:integral} {u}_{n}^{\prime \prime}(x) = {u}_{n}^{\prime \prime }(0) + {u}_{n}^{\prime \prime \prime }(0)x+ {\int}_{0}^{x}{\lambda }_{n}f( {{u}_{n}( \xi ) })(x-\xi) \,\mathrm{d}\xi, \quad x \in[0,1].
\end{equation} Passing to the limit as $\lambda_n\rightarrow 0$, we conclude from \eqref{eq:u bound0} and \eqref{eq:endpoints} that $$ w^{\prime\prime}(x)=w^{\prime \prime}(0) +\vartheta x,\quad x \in[0,1], $$ where $\vartheta$ is a constant. Clearly, $w(x) \in C^4[0,1]$ and $w^{\prime \prime \prime \prime}(x)\equiv 0$ on $[0,1]$. It follows from \eqref{eq:wbvc} that the claim is true. \textbf{Case 2.} If $ \max_{x \in [0,1]} w(x) = r $, we next prove that $x=\frac{1}{2}$ is the unique maximum point of $w$. By the symmetry and the monotonicity of $u_n$ as given in ({C}), it is clear that $w(x) $ is symmetric on $[0,1]$, $ w(\tfrac{1}{2}) = r $, and $w(x) $ is non-decreasing on $ (0,\tfrac{1}{2})$ and non-increasing on $( \tfrac{1}{2},1) $. So there exists a number $ a \in (0,\tfrac{1}{2}]$ such that $w(x) = r $ for all $x\in [a, 1-a]$ and $w(x) < r $ for all $x\in [0,a)\cup(1-a,1]$. Moreover, $ w(a) = r= w(1-a)$ and $ w'(a)= 0= w'(1-a)$. For any $ \rho \in (0,a) $, since $w(x)<r$ for all $x \in[0,\rho]$, it follows from \eqref{eq:endpoints} that $f(u_n)$ is bounded on $[0,\rho]$. As in Case 1 above, we have that \begin{equation}\label{eq:twoseg} w(x) \in C^4 ([0,a)\cup(1-a,1]) \;\text{ and }\; w^{\prime\prime\prime\prime}(x) = 0 \; \text{ for all } x \in [0,a)\cup(1-a,1]. \end{equation} We claim that $ w(x) \in C^3([0,\frac{1}{2})\cup (\frac{1}{2},1]) $. In fact, since $ \|u_{n}\|_{C^3[0,1]} $ is bounded and $u^{\prime\prime\prime\prime}_{n}(x)$ is positive and increasing on $(0,\frac{1}{2})$, using the arguments as in \eqref{eq:Integral} and \eqref{eq:boundintegral} (replacing $u^{\prime\prime\prime}$ with $u^{\prime\prime\prime\prime}$), we deduce that for any given $x\in[0,\frac{1}{2})$, the sequence $u_n^{\prime\prime\prime\prime}(x)$ is bounded, and further that for any closed subinterval $[\beta,\gamma] \subset [0,\frac{1}{2})$, $\|u^{\prime\prime\prime\prime}_n\|_{C^0[\beta,\gamma]}$ is bounded.
Together with \eqref{eq:solutionbounded}, it follows that $\|u_{n}\|_{C^4[\beta,\gamma]}$ is bounded. This implies that $w(x) \in C^3[\beta,\gamma]$ by the compact embedding $C^4[\beta,\gamma]\hookrightarrow C^3[\beta,\gamma]$. Due to the arbitrariness of $[\beta,\gamma] \subset [0,\frac{1}{2})$, we conclude that $w(x) \in C^3[0, \frac{1}{2})$. Similarly, we have that $w(x) \in C^3(\frac{1}{2},1]$ and the claim is true. Using an idea from Lauren\c{c}ot and Walker \cite[(2.46)]{Laurencot2014}, we next show that $a=\frac{1}{2}$. Suppose on the contrary that $ a<\frac{1}{2}$. By the claim above, we have \begin{equation}\label{eq:derivatives} 0=w(a)-r=w^{\prime}(a)=w^{\prime\prime}(a)=w^{\prime\prime\prime}(a). \end{equation} Multiplying the equation $w''''=0$ in \eqref{eq:twoseg} by $ w $, integrating over $ (0,a)$ by parts twice, and using \eqref{eq:wbvc} and \eqref{eq:derivatives}, we obtain that $$0 = \int_0^{a} w \cdot w'''' \,\mathrm{d}x = \int_0^{a} (w'')^2 \,\mathrm{d}x,$$ which implies that $ w''(x) \equiv 0 $ on $[0,a] $ and hence $ w(x) = k_1 x + k_2 $ on $[0,a]$ for some constants $k_1, k_2$. It follows from \eqref{eq:wbvc} that $ w(x) \equiv 0 $ for $ x \in [0,a] $, contradicting \eqref{eq:derivatives}. So $ a = \frac{1}{2} $. Consequently, from \eqref{eq:twoseg} we obtain that $w\in C^{4}( [0,1]\setminus \{ \frac{1}{2}\} )$. Direct integration shows that \eqref{eq:singulareq} is uniquely solved by the explicit function $w(x)$ in \eqref{eq:explicit}. Notably, $w^{\prime\prime\prime}$ jumps from $-96r$ to $96r$ across $x=\frac{1}{2}$, so $w'''(\frac{1}{2})$ is undefined and hence $w\notin C^3[0,1]$. \end{proof} \section{Proof of Theorem \ref{th2:u bounded}} In this section, we prove Theorem \ref{th2:u bounded}. Let us recall a well-known local bifurcation theorem due to Crandall and Rabinowitz \cite[Theorem 3.2]{Crandall1973}. \begin{theorem}[\cite{Crandall1973}] \label{th:C-R} Let $X$ and $Y$ be Banach spaces. Let $ (\bar{ \lambda} ,\bar{x} )\in \mathbb{R} \times X $ and $F$ be a continuously differentiable mapping of an open neighborhood of $ (\bar{ \lambda}, \bar{x})$ into $Y$.
Let the null-space $N(F_{x} (\bar{ \lambda},\bar{x})) = \mathrm{span} \{ x_{0} \} $ be one-dimensional and $\mathrm{codim}\, R(F_{x} (\bar{ \lambda} ,\bar{x} ))=1$. Let $F_{\lambda} ( \bar{\lambda },\bar{x} ) \notin R( F_{x} ( \bar{\lambda },\bar{x} ) )$. If $Z$ is the complement of $\mathrm{span} \{ x_{0} \} $ in $X$, then the solutions of $F ( \lambda,x )=F ( \bar{\lambda },\bar{x} )$ near $ ( \bar{\lambda },\bar{x} )$ form a curve $ ( \lambda(s),x(s) )= ( \bar{\lambda}+\tau (s),\bar{x}+sx_{0}+z(s) )$, where $s\to ( \tau (s),z(s) ) \in \mathbb{R}\times X $ is a function that is continuously differentiable near $s=0$ and $ \tau (0) = \tau ^{\prime}(0) = 0, z(0)=z^{\prime}(0)=0.$ Moreover, if $F$ is $k$-times continuously differentiable, so are $\tau(s)$, $z(s)$. \end{theorem} \begin{proof}[\textbf{Proof of Theorem \ref{th2:u bounded}}] Based on the facts ({A})--({D}) and the lemmas in Section \ref{sec:lemmas}, the proof closely follows Korman's original proof of Theorem \ref{thm:korman}, utilizing the Implicit Function Theorem and the Crandall--Rabinowitz Theorem above. We do not repeat the argument and refer readers to \cite{Korman2004} for details. The new result on the singular solution $w$ is derived from Lemma \ref{lem:endpoint}. For the convenience of the readers, we briefly outline the main ideas and key points of the proof here. Consider the Banach spaces $X=\{u\in C^4[0,1]\mid u(0)=u(1)=u^{\prime}(0)=u^{\prime}(1)=0\}$ and $Y=C[0,1]$. Define $F: \mathbb{R}\times X \rightarrow Y$ by $F(\lambda, u)=u''''-\lambda f(u).$ Starting from the trivial solution $(\lambda, u) = (0,0)$, we derive the desired solution curve $\mathcal{S}$ by the continuation approach, relying on the Implicit Function Theorem (at regular points) and the Crandall--Rabinowitz theorem (at possible singular or turning points) to smoothly ``continue'' the curve.
While Lemma \ref{lem:l2} indicates that the solution curve cannot continue indefinitely in the direction of increasing $\lambda$, the a priori bounds \eqref{eq:u bound} in Lemma \ref{lem:u bounded} imply that this curve cannot stop at any $\lambda>0$, nor can it have a vertical asymptote at any $\lambda>0$. Furthermore, by the formula for the bifurcation direction at singular points, this curve must turn left at each possible singular point provided that $f$ is convex. Therefore, the solution curve continues globally and admits exactly one turn, at some critical point $(\lambda_0,u_0)$. After turning back at $(\lambda_0,u_0)$, Lemma \ref{lem:endpoint} states that when $\lambda \downarrow 0$, there are two possible behaviors for the solution curve: it converges to either $(0,0)$ or $(0,w)$. However, the uniqueness of solutions near the origin $(0,0)$ excludes the convergence to $(0,0)$, according to the Implicit Function Theorem. Here, $w(x)$ is explicitly given by \eqref{eq:explicit} and its maximum value is $r$. According to (D), all positive solutions are globally parameterized by $p:=u(\frac{1}{2})=\left \| u \right \|_\infty$. From the solution curve $\mathcal{S}$, we immediately obtain a smooth global bifurcation curve, i.e., $$\mathcal{C}=\{(\lambda,\left \| u \right \|_\infty) \mid \lambda \text{ and } u \text{ satisfy } \eqref{eq:4order} \},$$ along which the parameter $p$ monotonically increases; see Figure \ref{fig:fig1}(ii). Moreover, the curve exhausts all solutions as $p$ varies from $0$ to $r$. \end{proof} \section{Concluding Remarks} In this paper, we have established a global bifurcation result for the fourth-order equation with doubly clamped boundary conditions, assuming the nonlinearity $f$ is increasing and convex. We have derived the complete structure of the solution set, revealing the exact multiplicity of positive solutions. The corresponding bifurcation diagram is depicted in Figure \ref{fig:fig1}(ii).
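For the reader's convenience, we also record the direct integration behind the explicit singular solution \eqref{eq:explicit} (a routine computation): on $[0,\frac{1}{2}]$, $w''''=0$ forces $w$ to be a cubic, and the four conditions in \eqref{eq:singulareq} determine its coefficients uniquely:

```latex
w(x)=c_{3}x^{3}+c_{2}x^{2}+c_{1}x+c_{0},\qquad
w(0)=w'(0)=0 \;\Rightarrow\; c_{0}=c_{1}=0,
\\[2pt]
w\big(\tfrac{1}{2}\big)=\tfrac{c_{3}}{8}+\tfrac{c_{2}}{4}=r,\qquad
w'\big(\tfrac{1}{2}\big)=\tfrac{3c_{3}}{4}+c_{2}=0
\;\Longrightarrow\; c_{3}=-16r,\; c_{2}=12r,
```

and the expression on $[\frac{1}{2},1]$ follows by symmetry.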
Examples of fourth-order MEMS models arising from the recent monograph \cite{Koochi2020} have been presented to illustrate applications of the main theorem. Additionally, we have established the a priori estimate $\|u\|_{C^3}<C$, which is optimal in terms of regularity. Based on this crucial estimate, we have demonstrated that the regular solutions converge to an explicit singular solution in $ {C}^{2+\alpha}[0,1]$ as $\lambda\rightarrow 0$ along the upper branch of the solution curve. We list some interesting topics for future research. \begin{enumerate} \item Consider a more general MEMS model than \eqref{eq:4order}: $$ \left\{\begin{array}{l} u^{\prime\prime\prime\prime}(x)-Tu^{\prime\prime}(x)=\lambda f(u(x)), \quad x\in (0,1),\ T\geq 0; \\ u(0)=u^{\prime}(0)=0=u(1)=u^{\prime}(1). \end{array}\right. $$ Some interesting issues remain to be addressed in establishing results analogous to Theorem \ref{th2:u bounded} for the cases when $T> 0$ (cf. problem \eqref{eq:biharmonic}) and when the hypothesis $f'(u)>0$ is removed. Major difficulties include establishing suitable a priori bounds and achieving a global parametrization of solutions. \item Consider the fourth-order regularized MEMS model arising in \cite[(3.13b)]{Lindsay2014}: $$ \left\{\begin{array}{l} u^{\prime \prime \prime \prime}(x)= \frac{\lambda}{(1-u)^{2}}-\frac{\lambda\varepsilon^{m-2} }{(1-u)^{m}}, \quad x \in(0,1); \\ u(0)=u(1)=0=u^{\prime}(0)=u^{\prime}(1), \end{array}\right. $$ where $\varepsilon>0$ and $m>2$. In contrast to the increasing and convex nonlinearities considered above, $f(u)=\frac{1}{(1-u)^{2}}-\frac{\varepsilon^{m-2} }{(1-u)^{m}}$ here is a non-monotonic, convex-concave function. The study of global bifurcation curves becomes more challenging.
In contrast to the $\supset$-shaped curve when $\varepsilon=0$, numerically obtained bifurcation diagrams in \cite[Figure 4]{Lindsay2014} for $m=4$ exhibit $S$-shaped curves for small positive values of $\varepsilon$, but a rigorous proof remains to be provided. \end{enumerate} \section*{Acknowledgment} The second author is partially supported by Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022A1515011867), which is gratefully acknowledged. \begin{thebibliography}{10} \bibitem{Buffoni2000} {B.~Buffoni, E.~N. Dancer, and J.~F. Toland}, {\em The regularity and local bifurcation of steady periodic water waves}, Arch. Ration. Mech. Anal., Vol.152 (2000), 207--240. \bibitem{Cassani2009} {D.~Cassani, J.~M. do~{\'O}, and N.~Ghoussoub}, {\em On a fourth order elliptic problem with a singular nonlinearity}, Advanced Nonlinear Studies, Vol.9 (2009), 177--197. \bibitem{Cowan2010} {C.~Cowan, P.~Esposito, N.~Ghoussoub, and A.~Moradifam}, {\em The critical dimension for a fourth-order elliptic problem with singular nonlinearity}, Arch. Ration. Mech. Anal., Vol.198 (2010), 763--787. \bibitem{Crandall1973} {M.~G. Crandall and P.~H. Rabinowitz}, {\em Bifurcation, perturbation of simple eigenvalues, and linearized stability}, Arch. Ration. Mech. Anal., Vol.52 (1973), 161--180. \bibitem{Esposito2010} {P.~Esposito, N.~Ghoussoub, and Y.~Guo}, {\em Mathematical analysis of partial differential equations modeling electrostatic {MEMS}}, Amer. Math. Soc., Providence, RI; Courant Inst. Math. Sci., NY., 2010. \bibitem{Guo2014} {Z.~Guo, B.~Lai, and D.~Ye}, {\em Revisiting the biharmonic equation modelling electrostatic actuation in lower dimensions}, Proc. Amer. Math. Soc., Vol.142 (2014), 2027--2034. \bibitem{Guo2008} {Z.~Guo and J.~Wei}, {\em Entire solutions and global bifurcations for a biharmonic equation with singular non-linearity in {$\Bbb R^3$}}, Adv. Differential Equations, Vol.13 (2008), 753--780.
\bibitem{Guo2009} {Z.~Guo and J.~Wei}, {\em On a fourth order nonlinear elliptic equation with negative exponent}, SIAM J. Math. Anal., Vol.40 (2009), 2034--2054. \bibitem{Koochi2020} {A.~Koochi and M.~Abadyan}, {\em Nonlinear Differential Equations in Micro/Nano Mechanics: Application in Micro/nano Structures and Electromechanical Systems}, Elsevier, 2020. \bibitem{Korman2004} {P.~Korman}, {\em Uniqueness and exact multiplicity of solutions for a class of fourth-order semilinear problems}, Proc. Roy. Soc. Edinburgh Sect. A, Vol.134 (2004), 179--190. \bibitem{Laetsch1970} {T.~Laetsch}, {\em The number of solutions of a nonlinear two point boundary value problem}, Indiana Univ. Math. J., Vol.20 (1970), 1--13. \bibitem{Laurencot2014} {P.~Lauren\c{c}ot and C.~Walker}, {\em A fourth-order model for {MEMS} with clamped boundary conditions}, Proc. Lond. Math. Soc. (3), Vol.109 (2014), 1435--1464. \bibitem{Laurencot2017} {P.~Lauren\c{c}ot and C.~Walker}, {\em Some singular equations modeling {MEMS}}, Bull. Amer. Math. Soc. (N.S.), Vol.54 (2017), 437--479. \bibitem{Lin2007} {F.~Lin and Y.~Yang}, {\em Nonlinear non-local elliptic equation modelling electrostatic actuation}, Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci., Vol.463 (2007), 1323--1337. \bibitem{Lindsay2014} {A.~E. Lindsay, J.~Lega, and K.~B. Glasner}, {\em Regularized model of post-touchdown configurations in electrostatic {MEMS}: equilibrium analysis}, Physica D, Vols.280-281 (2014), 95--108. \bibitem{Liu2022} {T.~Liu and H.~Pan}, {\em Global bifurcation curve for fourth-order {MEMS}/{NEMS} models}, Differ. Integral Equ., Vol.35 (2022), 437--450. \bibitem{pelesko2002modeling} {J.~A. Pelesko and D.~H. Bernstein}, {\em Modeling {MEMS} and {NEMS}}, CRC Press, 2002. \bibitem{Rynne04} {B.~P. Rynne}, {\em Solution curves of 2m-th order boundary-value problems}, Electron. J. Differential Equations, vol.2004 (2004), no.32, 1--16. \end{thebibliography} \end{document}
2412.18425v1
http://arxiv.org/abs/2412.18425v1
Computing the k-binomial complexity of generalized Thue--Morse words
\documentclass[a4paper]{article} \usepackage{a4wide} \usepackage{authblk} \providecommand{\keywords}[1]{\smallskip\noindent\textbf{\textit{Keywords:}} #1} \usepackage{graphicx} \usepackage{tikz} \usetikzlibrary{fit,patterns,decorations.pathmorphing,decorations.pathreplacing,calc,arrows} \usepackage{amsmath,amssymb,amsthm} \usepackage[inline]{enumitem} \usepackage{thmtools} \usepackage{hyperref} \usepackage[capitalize,nameinlink,sort]{cleveref} \usepackage{etoolbox} \usepackage{mleftright} \mleftright \usepackage[T1]{fontenc} \usepackage{newpxtext} \usepackage{euler} \setlength {\marginparwidth }{2cm}\usepackage[textsize=scriptsize]{todonotes} \newcommand{\mw}[2][]{\todo[backgroundcolor=blue!50!white!55!black,textcolor=white!30!orange,linecolor=blue!70!white!70!black,#1]{MW: #2}} \DeclareMathOperator{\Fac}{Fac} \DeclareMathOperator{\N}{\mathbb{N}} \DeclareMathOperator{\Z}{\mathbb{Z}} \DeclareMathOperator{\Q}{\mathbb{Q}} \newcommand{\infw}[1]{\mathbf{#1}} \newcommand{\fc}[1]{\mathsf{p}_{#1}} \newcommand{\bc}[2]{ \expandafter\ifstrequal\expandafter{#2}{1}{ \mathsf{a}_{#1} }{ \mathsf{b}_{#1}^{(#2)} } } \DeclareMathOperator{\pref}{pref} \DeclareMathOperator{\suff}{suff} \DeclareMathOperator{\am}{\mathcal{A}_m} \DeclareMathOperator{\ams}{\mathcal{A}^*_m} \DeclareMathOperator{\sm}{\sigma_{m}} \newcommand{\smn}[1]{\sigma_m^{#1}} \newcommand{\mi}[1]{\overline{#1}} \declaretheorem[numberwithin=section]{theorem} \declaretheorem[sibling=theorem]{lemma,corollary,proposition,conjecture} \declaretheorem[sibling=theorem,style=definition]{example,definition,remark} \declaretheorem[sibling=theorem,style=definition,refname={fact,facts},Refname={Fact,Facts}]{fact} \declaretheorem[refname={Claim,Claims},Refname={Claim,Claims}]{claim} \declaretheorem[name=Question,refname={Question,Questions},style=definition]{questions} \declaretheoremstyle[ headfont=\normalfont\itshape, bodyfont = \normalfont, qed=$\blacksquare$, headpunct={:}]{claimproofstyle} \declaretheorem[name={Proof of claim}, 
style=claimproofstyle, unnumbered]{claimproof} \renewcommand*{\thequestions}{\Alph{questions}} \crefname{equation}{}{} \title{Computing the $k$-binomial complexity of generalized Thue--Morse words} \author[1]{M. Golafshan} \author[1]{M. Rigo\thanks{The first two authors are supported by the FNRS Research grant T.196.23 (PDR)}} \author[2]{M. A. Whiteland\thanks{Part of the work was performed while affiliated with the University of Li\`ege and supported by the FNRS Research grant 1.B.466.21F}} \affil[1]{Department of Mathematics, University of Li\`ege, Li\`ege, Belgium\\\texttt{\{mgolafshan,m.rigo\}@uliege.be}} \affil[2]{Department of Computer Science, Loughborough University, Epinal Way, LE11 3TU Loughborough, Leicestershire, United Kingdom\\\texttt{[email protected]}} \date{} \begin{document} \maketitle \begin{abstract} Two finite words are $k$-binomially equivalent if each subword (i.e., subsequence) of length at most $k$ occurs the same number of times in both words. The $k$-binomial complexity of an infinite word is a function that maps the integer $n\geqslant 0$ to the number of $k$-binomial equivalence classes represented by its factors of length $n$. The Thue--Morse (TM) word and its generalization to larger alphabets are ubiquitous in mathematics due to their rich combinatorial properties. This work addresses the $k$-binomial complexities of generalized TM words. Prior research by Lejeune, Leroy, and Rigo determined the $k$-binomial complexities of the $2$-letter TM word. For larger alphabets, work by Lü, Chen, Wen, and Wu determined the $2$-binomial complexity for $m$-letter TM words, for arbitrary $m$, but the exact behavior for $k\geqslant 3$ remained unresolved. They conjectured that the $k$-binomial complexity function of the $m$-letter TM word is eventually periodic with period $m^k$. We resolve the conjecture positively by deriving explicit formulae for the $k$-binomial complexity functions of any generalized TM word.
We do this by characterizing $k$-binomial equivalence among factors of generalized TM words. This comprehensive analysis not only solves the open conjecture, but also develops tools such as abelian Rauzy graphs. \end{abstract} \tableofcontents \section{Introduction} The Thue--Morse infinite word (or sequence) $\infw{t}_2=011010011001\cdots$ is the fixed point of the morphism $\sigma_2:0\mapsto 01, 1\mapsto 10$ starting with $0$. It was originally constructed by A.~Thue in the context of avoidable patterns. It does not contain any overlap of the form $auaua$ where $a\in\{0,1\}$ and $u\in\{0,1\}^*$. This word was later rediscovered by M.~Morse while studying differential geometry and geodesics on surfaces of negative curvature \cite{Morse1921}. The study of non-repetitive structures is fundamental in combinatorics. See references \cite{MR4046776,MR3032928} for further details. The Thue--Morse word has found applications across a wide range of fields including mathematics, physics, economics, and computer science \cite{JTNB_2015__27_2_375_0,ubiquitous}. In number theory, the word is linked to the Prouhet--Tarry--Escott problem \cite{MR104622}. Additionally, L.~Mérai and A.~Winterhof have analyzed its pseudo-random characteristics; see e.g., \cite{Merai}. The Thue--Morse word also emerges in physics as an example of an aperiodic structure that exhibits a singular continuous contribution to the diffraction pattern \cite{WOLNY2000313,PhysRevB.43.1034}. This property is significant in the study of quasi-crystals and materials with non-periodic atomic arrangements \cite{SAHEL20171} or fractal geometry \cite{Kennard}. In economics or game theory, the Thue--Morse word has been proposed to ensure fairness in sequential tournament competitions between two agents \cite{Palacios}. The Thue--Morse word arises in a wide range of unexpected contexts due to its remarkable combinatorial properties. 
For instance, consider the study of the arithmetic complexity of an infinite word $\infw{w}=w_0w_1w_2\cdots$. This function maps $n$ to the number of subwords of size $n$ that appear in~$\infw{w}$ in an arithmetic progression, i.e., \[n\mapsto \#\{ w_tw_{t+r}\cdots w_{t+(n-1)r}\mid \, t\geqslant 0, r\geqslant 1\}.\] Let $m\geqslant 2$ be an integer and $\am=\{0,\ldots,m-1\}$ be the alphabet identified with the additive group $\Z/(m\Z)$. Hereafter, all operations on letters are considered modulo~$m$, and the notation $\pmod{m}$ will be omitted. Avgustinovich et al.~showed that, under some mild assumptions, the fixed point of a {\em symmetric morphism} over $\am$ achieves the maximal arithmetic complexity $m^n$. Such a symmetric morphism $\varphi:\ams\to\ams$ is defined as follows. If $\varphi(0)$ is the finite word $x_0\cdots x_\ell$ over~$\am$, then for $i>0$, $\varphi(i)=(x_0+i)\cdots (x_\ell+i)$, with all sums taken modulo~$m$. This article deals with a natural generalization of the Thue--Morse word over an alphabet of size $m\geqslant 2$. Our primary goal is to identify and count its subwords, a task directly related to the notion of binomial complexity. We consider the symmetric morphism $\sm:\ams\to\ams$, defined by \begin{align*} \sm:i\mapsto i (i+1) \cdots (i+m-1). \end{align*} Following the convention adopted throughout the paper, integers outside the range $\{0,\ldots,m-1\}$ are reduced modulo~$m$. The images $\sm(i)$ correspond to cyclic shifts of the word $012\cdots (m-1)$. For instance, $\sigma_2$ is the classical Thue--Morse morphism. Our focus is on the infinite words $\infw{t}_m:=\lim_{j\to\infty}\smn{j}(0)$. For example, we have \begin{align*} \infw{t}_3=012 120 201 120 201 012 201 012 120\cdots. \end{align*} Throughout this paper, infinite words are denoted using boldface symbols. The Thue--Morse word~$\infw{t}_2$ and its generalizations~$\infw{t}_m$ play a prominent role in combinatorics on words~\cite{ubiquitous}.
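For the reader who wishes to experiment, a prefix of $\infw{t}_m$ is easily generated by iterating $\sm$. The following Python sketch (ours, purely illustrative and not part of the development) reproduces the prefix of $\infw{t}_3$ displayed above.

```python
def sigma_m(word, m):
    """Apply the symmetric morphism sigma_m : i -> i (i+1) ... (i+m-1) (mod m)."""
    return [(a + t) % m for a in word for t in range(m)]

def tm_prefix(m, j):
    """The prefix sigma_m^j(0) of the generalized Thue--Morse word t_m."""
    word = [0]
    for _ in range(j):
        word = sigma_m(word, m)
    return word

# tm_prefix(3, 3) yields the 27-letter prefix 012120201120201012201012120
```

By construction, the letter at position $mj+r$ equals the letter at position $j$ shifted by $r$ modulo $m$; this self-similarity, as well as the sum-of-digits characterization of $\infw{t}_m$, provides an independent check of the sketch.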
The word $\infw{t}_m$ is an example of an $m$-automatic sequence: each letter is mapped by the morphism $\sm$ to an image of uniform length~$m$, and $\sm$ is therefore said to be {\em $m$-uniform}. The $j^{\text{th}}$ term of $\infw{t}_m$ is equal to the $m$-ary sum-of-digits of $j\geqslant 0$, reduced modulo~$m$. Further results on subwords of $\infw{t}_m$ in arithmetic progressions can be found in \cite{Parshina}. In this paper, we distinguish between a {\em factor} and a {\em subword} of a word $w=a_1a_2\cdots a_\ell$. A factor consists of consecutive symbols $a_ia_{i+1}\cdots a_{i+n-1}$, whereas a subword is a subsequence $a_{j_1}\cdots a_{j_n}$, with $1\leqslant j_1<\cdots <j_n\leqslant \ell$. Every factor is a subword, but the converse does not always hold. The set of factors of an infinite word $\infw{w}$ (respectively, factors of length~$n$) is denoted by $\Fac(\infw{w})$ (respectively, $\Fac_n(\infw{w})$). We denote the length of a finite word~$x$ by~$|x|$, and the number of occurrences of a letter~\(a\) in~\(x\) by~$|x|_a$. For general references on binomial coefficients of words and binomial equivalence, see~\cite{Lothaire,RigoBook,RigoRelations,RigoSalimov}. \begin{definition} Let $u$ and $w$ be words over a finite alphabet~$\mathcal{A}$. The \emph{binomial coefficient} \(\binom{u}{w}\) is the number of occurrences of $w$ as a subword of $u$. Writing $u = a_1\cdots a_n$, where $a_i \in \mathcal{A}$ for all $i$, it is defined as \[ \binom{u}{w}=\#\left\{ i_1<i_2<\cdots < i_{|w|} \mid \, a_{i_1}a_{i_2}\cdots a_{i_{|w|}}=w\right\}. \] \end{definition} Note that the same notation is used for the binomial coefficients of words and integers, as the context prevents any ambiguity (the binomial coefficient of unary words naturally coincides with the integer version: $\binom{a^n}{a^k} = \binom{n}{k}$).
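Numerically, $\binom{u}{w}$ can be computed with a standard one-pass dynamic program over $u$ (an illustrative sketch; the function name is ours):

```python
from math import comb

def word_binom(u, w):
    """Number of occurrences of w as a subword (i.e., subsequence) of u."""
    dp = [1] + [0] * len(w)                  # dp[j] = matchings of the prefix w[:j]
    for a in u:
        for j in range(len(w) - 1, -1, -1):  # backwards, so `a` is used at most once
            if w[j] == a:
                dp[j + 1] += dp[j]
    return dp[-1]
```

On unary words this reduces to the integer binomial coefficient, e.g. `word_binom("a" * 5, "a" * 2) == comb(5, 2)`.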
\begin{definition}[\cite{RigoSalimov}] Two words $u, v\in \mathcal{A}^*$ are said to be \emph{$k$-binomially equivalent}, and we write $u \sim_k v$, if \[ \binom{u}{x} = \binom{v}{x}, \quad \forall\, x\in \mathcal{A}^{\leqslant k}. \] If \(u\) and \(v\) are not $k$-binomially equivalent, we write $u\not\sim_k v$. \end{definition} A word $u$ is a permutation of the letters in $v$ if and only if $u \sim_1 v$. This relation is known as the \emph{abelian equivalence}. \begin{definition} Let $k\geqslant 1$ be an integer. The \emph{$k$-binomial complexity function} $\bc{\infw{w}}{k} \colon \N \to \N$ for an infinite word $\infw{w}$ is defined as \[ \bc{\infw{w}}{k}: n \mapsto \#\left(\Fac_n(\infw{w})/{\sim_k}\right). \] \end{definition} For \(k =1\), the $k$-binomial complexity is nothing else but the {\em abelian complexity function}, denoted by $\bc{\infw{w}}{1}(n)$. For instance, M. Andrieu and L. Vivion have recently shown that the $k$-binomial complexity function is well-suited for studying hypercubic billiard words \cite{Andrieu}. These words encode the sequence of faces successively hit by a billiard ball in a $d$-dimensional unit cube. The ball moves in straight lines until it encounters a face, then bounces elastically according to the law of reflection. A notable property is that removing a symbol from a $d$-dimensional billiard word results in a $(d-1)$-dimensional billiard word. Consequently, the projected factors of the $(d-1)$-dimensional word are subwords of the $d$-dimensional word. The connections between binomial complexity and Parikh-collinear morphisms are studied in~\cite{RSW}. \begin{definition}\label{def:Parikh-collinear} Let $\Psi:\mathcal{B}^*\to\N^{\# \mathcal{B}}$, defined as $w\mapsto \left(|w|_{b_1},\ldots,|w|_{b_m}\right)$ be the Parikh map for a totally ordered alphabet~$\mathcal{B}=\{b_1<\cdots<b_m\}$. 
A morphism $\varphi\colon \mathcal{A}^* \to \mathcal{B}^*$ is said to be \emph{Parikh-collinear} if, for all letters $a,b \in \mathcal{A}$, there exist constants $r_{a,b}, s_{a,b} \in\mathbb{N}$ such that $r_{a,b} \Psi\left(\varphi(b)\right) = s_{a,b} \Psi\left(\varphi(a)\right)$. If $r_{a,b}=s_{a,b}$ for all $a,b\in \mathcal{A}$, the morphism is called \emph{Parikh-constant}. \end{definition} \begin{proposition}[{\cite[Cor.~3.6]{RSW}}]\label{bkbound} Let $\infw{w}$ denote a fixed point of a Parikh-collinear morphism. For any $k \geqslant 1$, there exists a constant $C_k \in \N$ satisfying \(\bc{\infw{w}}{k}(n)\leqslant C_{k}\) for all \(n \in \N\). \end{proposition} It is worth noting that the above proposition was previously stated for Parikh-constant fixed points in \cite{RigoSalimov}. \subsection{Previously known results on generalized Thue--Morse words} It is well-known that the factor complexity of any automatic word, including the generalized Thue--Morse words, is in $\mathcal{O}(n)$. The usual factor complexity function of $\infw{t}_m$ is known exactly via results of Starosta \cite{Starosta}: \begin{theorem}\label{thm:starosta} For any $m \geq 1$, we have $\fc{\infw{t}_m}(0)=1$, $\fc{\infw{t}_m}(1) = m$, and \[ \fc{\infw{t}_m}(n) = \begin{cases} m^2(n - 1) - m(n - 2) & \text{if }2\leqslant n \leqslant m;\\ m^2(n - 1) - m^{k+1} + m^k & \text{if } m^{k} + 1 \leqslant n \leqslant 2m^{k}-m^{k-1},\ k\geq 1;\\ m^2(n-1)-m^{k+1}+m^k + m\ell & \text{if } n = 2m^k-m^{k-1} +1 + \ell,\\ & \text{ with } 0 \leqslant \ell < m^{k+1} - 2m^k + m^{k-1},\ k\geq 1. \end{cases} \] \end{theorem} The abelian complexity of $\infw{t}_m$ is known to be ultimately periodic with period $m$, as established by Chen and Wen \cite{ChenWen2019}. For example, $\left(\bc{\infw{t}_2}{1}(n)\right)_{n\geqslant 0}=(1,2,3,2,3,\ldots)$ and $\left(\bc{\infw{t}_3}{1}(n)\right)_{n\geqslant 0}=(1,3,6,7,6,6,7,6,\ldots)$. 
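These initial abelian complexity values are easy to reproduce by brute force (a sketch of ours; it assumes the chosen prefix $\smn{j}(0)$ contains all factors of the inspected lengths, which holds here since generalized Thue--Morse words are uniformly recurrent):

```python
from collections import Counter

def tm_prefix(m, j):
    """The prefix sigma_m^j(0) of t_m."""
    word = [0]
    for _ in range(j):
        word = [(a + t) % m for a in word for t in range(m)]
    return word

def abelian_complexity(m, n, j=7):
    """Brute-force abelian complexity: abelian classes among length-n factors of t_m."""
    w = tm_prefix(m, j)
    factors = {tuple(w[i:i + n]) for i in range(len(w) - n + 1)}
    return len({tuple(sorted(Counter(f).items())) for f in factors})
```

Evaluating `abelian_complexity(2, n)` for $n=0,\ldots,4$ and `abelian_complexity(3, n)` for $n=0,\ldots,7$ reproduces the two sequences displayed above.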
Moreover, the period takes either two or three distinct values, depending on the parity of $m$, as described in the following result. \begin{theorem}[{\cite{ChenWen2019}}]\label{thm:abelian_complexity} Let $m \geqslant 2$ and $n\geqslant m$. Let $\nu=n\pmod{m}$. \begin{itemize} \item If $m$ is odd, then we have \[ \bc{\infw{t}_m}{1}(n)=\#\left(\Fac_{n}(\infw{t}_m)/\!\sim_1\right) =\begin{cases} \frac14m(m^2-1)+1, & \text{ if }\nu=0;\\ \frac14m(m-1)^2+m, & \text{ otherwise.} \end{cases} \] \item If $m$ is even, then we have \[ \bc{\infw{t}_m}{1}(n) = \begin{cases} \frac14m^3+1, & \text{ if } \nu=0;\\ \frac14m(m-1)^2+\frac54m, & \text{ if } \nu\neq 0 \text{ is even};\\ \frac14m^2(m-2)+m, & \text{ if }\nu\text{ is odd}.\\ \end{cases} \] \end{itemize} \end{theorem} It is important to note that the abelian complexity function of a word generated by a Parikh-collinear morphism is not always eventually periodic \cite{RigoSW2023automaticity}. Furthermore, \cite{RigoSW2024automatic} shows that the abelian complexity function of such a word is automatic in the sense defined by Allouche and Shallit \cite{AS}. According to \cref{bkbound}, the $k$-binomial complexity of $\infw{t}_m$ is bounded by a constant (that depends on~$k$). Explicit expressions of the functions $\bc{\infw{t}_2}{k}$ have been established: \begin{theorem}[{\cite[Thm.~6]{LLR}}]\label{thm:kbin2} Let $k\geqslant 1$. For every length $n\geqslant 2^k$, the $k$-binomial complexity \( \bc{\infw{t}_2}{k}(n) \) is given by \[ \bc{\infw{t}_2}{k}(n)=3\cdot 2^k+\left\{\begin{array}{ll} -3, & \text{ if }n\equiv 0 \pmod{2^k};\\ -4, & \text{ otherwise}.\\ \end{array}\right. \] If $n<2^k$, the $k$-binomial complexity $\bc{\infw{t}_2}{k}(n)$ is equal to the factor complexity $\mathrm{p}_{_{\infw{t}_2}}(n)$.
\end{theorem} Let us also mention that infinite recurrent words (i.e., words in which every factor appears infinitely often) sharing the same $j$-binomial complexity as the Thue--Morse word $\infw{t}_2$ for all \( j\leqslant k \) have been characterized in~\cite{RSW}. The authors of~\cite{LLR} conclude that ``\ldots the expression of a formula describing the $k$-binomial complexity of $\infw{t}_m$ ($m>2$) seems to be more intricate. Therefore, a sharp description of the constants related to a given Parikh-constant morphism appears to be challenging''. Indeed, the difficulty in obtaining such an expression already becomes apparent with the $2$-binomial complexity. In~\cite{ChenWen2024}, Lü, Chen, Wen, and Wu derived a closed formula for the $2$-binomial complexity of $\infw{t}_m$. \begin{theorem}[{\cite[Thm.~2]{ChenWen2024}}]\label{thm:2bin_complexity} For every length $n\geqslant m^2$ and alphabet size $m\geqslant 3$, the $2$-binomial complexity \( \bc{\infw{t}_m}{2}(n) \) is given by \[ \bc{\infw{t}_m}{2}(n) =\left\{\begin{array}{ll} \bc{\infw{t}_m}{1}(n/m)+m(m-1)(m(m-1)+1), & \text{ if }n\equiv 0\pmod{m};\\ \rule{0pt}{2.5ex} m^4-2m^3+2m^2,& \text{ otherwise}.\\ \end{array}\right. \] \end{theorem} The authors of~\cite{ChenWen2024} conjecture that, for all $k\geqslant 3$, the $k$-binomial complexity of the generalized Thue--Morse word $\infw{t}_m$ is ultimately periodic. Precisely, \begin{conjecture}[{\cite[Conj.~1]{ChenWen2024}}]\label{conj:1} For every $k\geqslant 3$, the $k$-binomial complexity $\bc{\infw{t}_m}{k}$ of the generalized Thue--Morse word is ultimately periodic with period $m^k$. \end{conjecture} In this paper, we confirm this conjecture by deriving the exact expression for the $k$-binomial complexity of $\infw{t}_m$ for alphabets of any size~$m$. \subsection{Main results} Let $k\geqslant 2$ and $m\geqslant 2$. The behavior of $\bc{\infw{t}_m}{k}(n)$ depends on the length~$n$ of the factors and is fully characterized by the following three results.
\begin{restatable}{theorem}{shortlengths}\label{thm:main1short} The shortest pair of distinct $k$-binomially equivalent factors has length $2m^{k-1}$. In particular, for any length~$n<2m^{k-1}$, the $k$-binomial complexity $\bc{\infw{t}_m}{k}(n)$ coincides with the factor complexity $\mathrm{p}_{_{\infw{t}_m}}(n)$. \end{restatable} Recall \cref{thm:starosta} for an explicit expression for $\mathrm{p}_{_{\infw{t}_m}}(n)$. \begin{theorem}\label{thm:inter} Let~$n\in [2m^{k-1},2m^k)$. \begin{enumerate} \item If \( n=\nu\, m^{k-1} \) for some $\nu\in\{2,\ldots,2m-1\}$, then \[\bc{\infw{t}_m}{k}(\nu\, m^{k-1})=(m^{k-1}-1) \#E_{m}(\nu)+\bc{\infw{t}_m}{1}(\nu).\] \item If \(n= \nu\, m^{k-1}+\mu \) for some $\nu\in\{2,\ldots,2m-1\}$ and $0<\mu<m^{k-1}$, then \begin{align*} \bc{\infw{t}_m}{k}(\nu\, m^{k-1}+\mu) = (\mu-1)\#E_{m}(\nu+1) + (m^{k-1}-\mu-1)\# E_{m}(\nu) + \# Y_m(\nu), \end{align*} \end{enumerate} where \[ \#E_{m}(\nu)=\begin{cases} m(1+\nu m-\nu),&\text{ if }\nu<m;\\ m^3-m^2+m,&\text{ otherwise}\\ \end{cases}\] and \[\#Y_{m}(\nu)=\begin{cases} 2m(1+\nu m-\nu)-m\nu(\nu-1),&\text{ if }\nu<m;\\ m^3-m^2+2m,&\text{ otherwise.}\\ \end{cases} \] \end{theorem} \begin{theorem}\label{thm:main} For every length~$n\geqslant 2m^k$, if $\lambda=n \pmod{m^k}$ and $\lambda=\nu m^{k-1}+\mu$ with $\nu<m$ and $\mu<m^{k-1}$, we have \[ \bc{\infw{t}_m}{k}(n)=(m^{k-1}-1)(m^3-m^2+ m)+ \begin{cases} \bc{\infw{t}_m}{1}(m+\nu), & \text{ if }\mu =0;\\ \rule{0pt}{2.5ex} m, & \text{ otherwise}.\\ \end{cases} \] In particular, $\left(\bc{\infw{t}_m}{k}(n)\right)_{n\geqslant 2m^k}$ is periodic with period $m^k$. \end{theorem} Combining the above two theorems, we conclude that the periodic part of \( \bc{\infw{t}_m}{k}(n) \) begins at~$m^k$, thereby answering \cref{conj:1} in the positive. \begin{corollary} The sequence $\left(\bc{\infw{t}_m}{k}(n)\right)_{n\geqslant m^k}$ is periodic with period $m^k$.
\end{corollary} \begin{example} \cref{fig:complexm3} illustrates the $2$- and $3$-binomial complexities of $\infw{t}_3$. For short lengths, as described by \cref{thm:main1short}, the factor complexity is shown using a black dashed line, while values from \cref{thm:inter} are depicted in yellow. For larger lengths, values given by \cref{thm:main} are shown in purple and blue, with one period over $[2m^k,3m^k)$ highlighted in purple. \begin{figure}[h!t] \centering \includegraphics[width=11cm]{complexm3k23.pdf} \caption{The first few values of the factor complexity (dashed), $2$-, and $3$-binomial complexities of $\infw{t}_3$.} \label{fig:complexm3} \end{figure} For $m=3$ and $k=2, \ldots,6$, \cref{tabt3} provides the period of the $k$-binomial complexity of $\infw{t}_3$, where exponents denote repetitions. \begin{table}[h!tbp] \small{ \[ \begin{array}{l} (49,45^{2},48,45^{2},48,45^{2}); \ (175,171^{8},174,171^{8},174,171^{8});\ (553,549^{26},552,549^{26},552,549^{26}); \\ (1687,1683^{80},1686,1683^{80},1686,1683^{80}); \ (5089,5085^{242},5088,5085^{242},5088,5085^{242}) \end{array} \] } \caption{The period of $\bc{\infw{t}_3}{k}$ for $k=2,\ldots,6$.}\label{tabt3} \end{table} \end{example} Let us highlight that \cref{thm:main} simultaneously generalizes the results from \cite{LLR} and \cite{ChenWen2024}. Furthermore, for $k=2$, our formula reduces to \cref{thm:2bin_complexity}. We also compute the values of $\bc{\infw{t}_m}{2}(n)$ for the short lengths $n<m^2$. For $m=2$, \cref{thm:main} provides the following result. For every length $n\geqslant 2^k$, we have: \[ \bc{\infw{t}_2}{k}(n)=3\cdot 2^k+\left\{\begin{array}{ll} -6 + \bc{\infw{t}_2}{1}(2), & \text{ if }n\equiv 0 \pmod{2^k};\\ -6 + \bc{\infw{t}_2}{1}(3), & \text{ otherwise}. \end{array}\right. \] This result corresponds to~\cref{thm:kbin2}, with the shortest factors being handled by~\cref{thm:main1short}.
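Both the $m=2$ instance just displayed and the first tuple of \cref{tabt3} can be confirmed by brute force for small parameters. The sketch below is our own check, not part of the paper; it assumes that the chosen prefix $\smn{j}(0)$ contains every factor of the inspected lengths, which the uniform recurrence of $\infw{t}_m$ guarantees here.

```python
from itertools import product

def tm_prefix(m, j):
    """The prefix sigma_m^j(0) of t_m."""
    word = [0]
    for _ in range(j):
        word = [(a + t) % m for a in word for t in range(m)]
    return word

def profile(u, k, m):
    """The vector (binom(u, x)) over all x in A_m of length <= k;
    two factors are k-binomially equivalent iff their vectors agree."""
    out = []
    for length in range(1, k + 1):
        for x in product(range(m), repeat=length):
            dp = [1] + [0] * length
            for a in u:
                for j in range(length - 1, -1, -1):
                    if x[j] == a:
                        dp[j + 1] += dp[j]
            out.append(dp[-1])
    return tuple(out)

def k_binomial_complexity(m, k, n, j=12):
    """Brute-force k-binomial complexity of t_m at length n."""
    w = tm_prefix(m, j)
    factors = {tuple(w[i:i + n]) for i in range(len(w) - n + 1)}
    return len({profile(f, k, m) for f in factors})
```

For $m=2$, $k=2$, the values at $n=4,\ldots,12$ come out as $9,8,8,8,9,8,8,8,9$, matching the displayed formula; for $m=3$, $k=2$, the values at $n=9,\ldots,17$ reproduce the first tuple $(49,45^{2},48,45^{2},48,45^{2})$ of \cref{tabt3}.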
\section{Key Points of Our Proof Strategy}\label{sec:strategy} The developments presented are relatively intricate. Therefore, we found it useful to schematically outline the main steps of the proof. We hope this provides the reader with a general understanding of the structure of the paper, allowing each section to be read almost independently of the others. This, we believe, makes the paper easier to follow. \begin{definition}\label{def:factorization} Let $j\geqslant 1$ and $U$ be a factor of $\infw{t}_m$. A factorization of the form $U = x\smn{j}(u)y$ is referred to as a \emph{$\smn{j}$-factorization} if there exists a factor $aub$ of $\infw{t}_m$, where $a,b \in \am \cup \{\varepsilon\}$. In this factorization, $x$ (respectively, $y$) must be a proper suffix (respectively, prefix) of $\smn{j}(a)$ (respectively, $\smn{j}(b)$). Here, $\varepsilon$ is regarded as both a proper prefix and a proper suffix of itself. \end{definition} In the literature, the terms {\em interpretation in} $\infw{t}_m$ and {\em ancestor} are also used. See, for instance, \cite{Frid1998}. \cref{thm:main} addresses sufficiently long factors. As discussed in~\cref{sec:rec}, any factor~$U\in\Fac(\infw{t}_m)$ of length $\geqslant 2m^k$ has a unique $\smn{k}$-factorization of the form $p_{_{U}}\smn{k}(u)s_{_{U}}$. In particular, notice that \(|p_{_{U}}|, |s_{_{U}}| <m^k\). Thus, we can associate each such factor~$U$ with a unique pair $(p_{_{U}},s_{_{U}})$, leading to the following definition. \begin{definition}\label{def:equivk} The equivalence relation on~$\mathcal{A}_m^{<m^k}\times \mathcal{A}_m^{<m^k}$ is defined by $(p_1,s_1)\equiv_k (p_2,s_2)$ if there exist $x,y,p,q,r,t\in \ams$ satisfying $|x|,|y|<m^{k-1}$ and \begin{eqnarray*} (p_1,s_1)&=&\left(x \smn{k-1}(p),\smn{k-1}(q) y\right),\\ (p_2,s_2)&=&\left(x \smn{k-1}(r),\smn{k-1}(t) y\right), \end{eqnarray*} and one of the following conditions holds \begin{itemize} \item $pq\sim_1 rt$, \item $pq\sim_1 rt\sm(0)$, \item $pq\sm(0)\sim_1 rt$.
\end{itemize} \end{definition} We will show the following result in~\cref{sec:discern}. \begin{restatable}{proposition}{bothdir}\label{pro:both_dir} Let $x,y\in\ams$ and $k\geqslant 1$. Then, $x\sim_1 y$ holds if and only if $\smn{k}(x)\sim_{k+1}\smn{k}(y)$. \end{restatable} To achieve this result, a key challenge was identifying a suitable subword $z$ of length~$k+1$ such that $x\not\sim_1 y$ implies $\binom{x}{z}\neq\binom{y}{z}$. \cref{sec:discern} focuses on providing the necessary computations to distinguish non-equivalent factors. It can easily be shown that if $U,V\in\Fac(\infw{t}_m)$ are factors of length at least $2m^k$ and $(p_{_{U}},s_{_{U}})\equiv_k (p_{_{V}},s_{_{V}})$, then $U\sim_k V$. See~\cref{pro:imply}. Moreover, the converse of this property is also valid. However, further developments, as outlined below, are necessary to prove this result. Assuming, for now, that $(p_{_{U}},s_{_{U}})\equiv_k (p_{_{V}},s_{_{V}})$ if and only if $U\sim_k V$, proving \cref{thm:main} reduces to counting the number \[\#\, \left\{(p_{_{U}},s_{_{U}})\mid \, U\in \Fac_n(\infw{t}_m)\right\}/\!\!\equiv_k\] of such equivalence classes for $n\geqslant 2m^k$. This forms the core of~\cref{sec:4} and is given by~\cref{thm:main_equiv}, whose statement is similar to \cref{thm:main}. To prove that $U\sim_k V$ implies $(p_{_{U}},s_{_{U}})\equiv_k (p_{_{V}},s_{_{V}})$, we first obtain a generalization of \cite[Thm.~2]{ChenWen2024}, originally stated for $2$-binomial equivalence; this result is then extended to all $k\geqslant 2$. \begin{restatable}{proposition}{conclusionfinalgeneralization}\label{prop:conclusion-final-generalization} Let $k\geqslant 2$. For any two factors $U$ and $V$ of $\infw{t}_m$, the relation $U \sim_k V$ holds if and only if there exist $\smn{k-1}$-factorizations $U = p_{_{U}}\smn{k-1}(u) s_{_{U}}$ and $V = p_{_{V}} \smn{k-1}(v) s_{_{V}}$, such that $p_{_{U}} = p_{_{V}}$, $s_{_{U}} = s_{_{V}}$, and $u \sim_1 v$.
\end{restatable} We proceed by induction on $k$. The base case for $k=2$ is essentially \cite[Thm.~2]{ChenWen2024}. However, our result slightly improves upon that of Chen et al. by not requiring any assumptions about the lengths of $U$ and $V$ in the factorizations. Using \cref{prop:conclusion-final-generalization}, we can easily deduce the following result, thereby concluding this part. \begin{restatable}{proposition}{propconverse}\label{prop:converse} Let $k\geqslant 2$. Let $U$ and $V$ be factors of $\infw{t}_m$ with the same length~$\geqslant 2m^k$ such that \begin{align*} U=p_{_{U}}\smn{k-1}\left(\alpha_{{u}}\sm(u)\beta_{{u}}\right)s_{_{U}}, \quad \text{and} \quad V=p_{_{V}}\smn{k-1}\left(\alpha_{{v}}\sm(v)\beta_{{v}}\right)s_{_{V}}, \end{align*} where $|p_{_{U}}|,|s_{_{U}}|,|p_{_{V}}|,|s_{_{V}}|<m^{k-1}$ and $|\alpha_{{u}}|,|\beta_{{u}}|,|\alpha_{{v}}|,|\beta_{{v}}|<m$. If \( U\sim_{k} V \), then \[ \left(p_{_{U}}\smn{k-1}(\alpha_{{u}}),\smn{k-1}(\beta_{{u}})s_{_{U}}\right)\equiv_{k}\left(p_{_{V}}\smn{k-1}(\alpha_{{v}}),\smn{k-1}(\beta_{{v}})s_{_{V}}\right). \] \end{restatable} We now focus on factors of length $n\in[2m^{k-1},2m^k)$. The proof of \cref{thm:inter} relies on analyzing the so-called abelian Rauzy graphs. \begin{definition}\label{def:abr} For an infinite word, the {\em abelian Rauzy graph} of order $\ell\geqslant 1$ is defined with vertices corresponding to the abelian equivalence classes of factors of length~$\ell$ (or equivalently, to their Parikh vectors). The edges of the graph are defined as follows. Let $a,b$ be letters. If $aUb$ is a factor of length $\ell+1$, there exists a directed edge from $\Psi(aU)$ to $\Psi(Ub)$ labeled $(a,b)$. \end{definition} We denote the abelian Rauzy graph of order $\ell$ of $\infw{t}_m$ by $G_{m,\ell}$. The number of vertices in $G_{m,\ell}$ is clearly $\bc{\infw{t}_{m}}{1}(\ell)$. 
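To make the definition concrete, the following sketch (ours) builds the abelian Rauzy graph $G_{3,2}$ from a prefix of $\infw{t}_3$, together with the set $Y_m(\ell)$ of one-sided extensions introduced in the next paragraph. The counts it produces, $6$ vertices, $15$ edges, and $24$ extension pairs, agree with $\bc{\infw{t}_3}{1}(2)$ and with the quantities $\#E_3(2)$ and $\#Y_3(2)$ of \cref{thm:inter}. It assumes the prefix $\sigma_3^{8}(0)$ contains all factors of length at most $3$, which it does.

```python
def tm_prefix(m, j):
    """The prefix sigma_m^j(0) of t_m."""
    word = [0]
    for _ in range(j):
        word = [(a + t) % m for a in word for t in range(m)]
    return word

def parikh(u, m):
    """Parikh vector of u: the number of occurrences of each letter."""
    return tuple(u.count(a) for a in range(m))

def abelian_rauzy(m, ell, j=8):
    """Vertices and labeled edges of G_{m,ell}, plus the set Y of one-sided
    extensions (pairs (Psi(U), a) and (a, Psi(U)))."""
    w = tm_prefix(m, j)
    vertices = {parikh(w[i:i + ell], m) for i in range(len(w) - ell + 1)}
    edges, y = set(), set()
    for i in range(len(w) - ell):
        window = w[i:i + ell + 1]       # a factor aUb of length ell + 1
        a, b = window[0], window[-1]
        # edge from Psi(aU) to Psi(Ub), labeled (a, b)
        edges.add((parikh(window[:-1], m), parikh(window[1:], m), (a, b)))
        y.add((parikh(window[:-1], m), b))   # right extension: (Psi(U'), b)
        y.add((a, parikh(window[1:], m)))    # left extension:  (a, Psi(U'))
    return vertices, edges, y
```

Since right extensions are pairs (vector, letter) and left extensions are pairs (letter, vector), the two subsets never collide in the union, mirroring the disjoint union $Y_{m,R}\cup Y_{m,L}$.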
For all $\ell\geqslant 1$, we define the following sets: \begin{eqnarray*} Y_{m,R}(\ell)&:=&\left\{\left(\Psi(U),a\right)\mid\, a\in\am, \, Ua\in\Fac_{\ell+1}(\infw{t}_m)\right\},\\ Y_{m,L}(\ell)&:=&\left\{\left(a,\Psi(U)\right)\mid\, a\in\am, \, aU\in\Fac_{\ell+1}(\infw{t}_m)\right\},\\ Y_m(\ell)&:=&Y_{m,R}(\ell)\cup Y_{m,L}(\ell). \end{eqnarray*} Since $\infw{t}_m=\smn{k-1}(\infw{t}_m)$, it is quite straightforward to adapt \cite[Prop.~5.5]{RSW}. The idea behind the following formula is that to get $\bc{\infw{t}_m}{k}(j\, m^{k-1}+r)$, one has to count the distinct $\smn{k-1}$-factorizations up to the equivalence relation given by \cref{prop:conclusion-final-generalization}. \begin{proposition}\label{pro:5.5} Let $k\geqslant 2$. We let $E_{m}(j)$ denote the set of edges in the abelian Rauzy graph $G_{m,j}$. For all $j\geqslant 2$ and $0<r<m^{k-1}$, the following holds \[\bc{\infw{t}_m}{k}\left(j\, m^{k-1}\right)=\left(m^{k-1}-1\right) \,\#E_{m}(j)+\bc{\infw{t}_m}{1}(j),\] and \[\bc{\infw{t}_m}{k}\left(j\, m^{k-1}+r\right)=(r-1)\, \#E_{m}(j+1) +(m^{k-1}-r-1)\,\#E_{m}(j)+\#Y_m(j).\] \end{proposition} The reader may notice that the formula leading to \cref{thm:inter} requires the values of the abelian complexity for short factors. However, \cref{thm:abelian_complexity} provides these values only for $j\geqslant m$, leaving the case $j<m$ unaddressed. Therefore, in~\cref{sec:abco}, we describe the missing values of $\bc{\infw{t}_m}{1}(j)$ for $j<m$. In \cref{sec:arg}, we proceed to a detailed analysis of the structure of the abelian Rauzy graph of order~$j$. We are thus able to determine explicit expressions for $\#E_{m}(j)$ and $\#Y_m(j)$. \section{ Compilation of Preliminary Results }\label{sec:collecting} For the sake of completeness, we recall some basic properties of binomial coefficients~\cite{Lothaire,RigoSalimov}, which are implicitly applied throughout this paper. \begin{lemma}\label{lem:binomial2} Let $x,y,z$ be three words over the alphabet $\mathcal{A}$. 
The following relation holds \[ \binom{xy}{z}=\sum_{\substack{u,v\in \mathcal{A}^*\\ uv=z}} \binom{x}{u}\binom{y}{v}. \] More generally, let $x_1,\ldots,x_\ell$, $z \in \mathcal{A}^*$ and $\ell \geqslant 1$. Then, the following relation holds \[ \binom{x_1 \cdots x_{\ell}}{z}=\sum_{\substack{e_1,\ldots,e_{\ell}\in \mathcal{A}^*\\ e_1\cdots e_\ell=z}} \, \prod_{i=1}^{\ell}\binom{x_i}{e_i}. \] \end{lemma} \begin{lemma}[Cancellation property]\label{lem:cancel} Let $u,v,w$ be three words. The following equivalences hold \begin{itemize} \item $v \sim_k w$ if and only if $u v \sim_k u w$; \item $v \sim_k w$ if and only if $v u \sim_k w u$. \end{itemize} \end{lemma} We present a few straightforward observations regarding generalized Thue--Morse words. See, for instance, \cite{Seebold}. \begin{proposition}[{\cite[Thm.~1]{AlloucheS2000sums}}]\label{prop:GTM-overlap-free} For any $m\geqslant 2$, the word $\infw{t}_m$ is overlap-free. \end{proposition} \begin{lemma}\label{lem:subwords} Let $i,j\in\am$. If $i<j$ (respectively, $i>j$), the word $ij$ appears exactly once as a subword in $m-j+i$ (respectively, $i-j$) of the images $\sm(0), \sm(1), \ldots,\sm(m-1)$. Furthermore, the word $ii$ does not occur as a subword in any of these images. Conversely, the $\binom{m}{2}$ distinct $2$-subwords appearing in $\sm(j)$ are given by $(j+t) (j+t+r)$, for $t=0,\ldots,m-2$ and $r=1,\ldots,m-t-1$. \end{lemma} Let $\tau_m\colon \ams \to \ams$ be the cyclic morphism where each letter $a \in \am$ is mapped to $a+1$. Because the compositions $\sm \circ \tau_m$ and $\tau_m \circ \sm$ are equal, the following lemma holds. \begin{lemma}[Folklore]\label{lem:permut} For all $n\geqslant 1$, the set $\Fac_n(\infw{t}_m)$ is closed under $\tau_m$. \end{lemma} The following result, proven in ~\cite[Lem.~2]{ChenWen2019}, uses the concept of boundary sequence introduced in~\cite{GuoWen2022}. 
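Before proceeding, both the factorization identity of \cref{lem:binomial2} and the counts of \cref{lem:subwords} lend themselves to quick numerical checks (an illustrative sketch of ours):

```python
def word_binom(u, w):
    """Occurrences of w as a subword of u (dynamic programming)."""
    dp = [1] + [0] * len(w)
    for a in u:
        for j in range(len(w) - 1, -1, -1):
            if w[j] == a:
                dp[j + 1] += dp[j]
    return dp[-1]

def splitting_identity_holds(x, y, z):
    """binom(xy, z) == sum over all splittings z = uv of binom(x, u) * binom(y, v)."""
    rhs = sum(word_binom(x, z[:i]) * word_binom(y, z[i:]) for i in range(len(z) + 1))
    return word_binom(x + y, z) == rhs

def sigma_image(t, m):
    """The image sigma_m(t) = t (t+1) ... (t+m-1) mod m."""
    return [(t + s) % m for s in range(m)]
```

For instance, with $m=4$ and $i=1<j=3$, the subword $13$ occurs (once each) in exactly $m-j+i=2$ of the images $\sigma_4(0),\ldots,\sigma_4(3)$, while a square $ii$ occurs in none of them, as \cref{lem:subwords} predicts.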
\begin{lemma}\label{lem:boundaryseq} For all letters $a,b\in\am$ and every integer $n\geqslant 0$, there exists a factor of $\infw{t}_m$ of the form $awb$, where $|w|=n$. In particular, $\Fac_2(\infw{t}_m)=\mathcal{A}_m^2$. \end{lemma} Since $\sm$ is Parikh-constant, the following result holds. \begin{proposition}\label{prop:phik} Assume $k\geqslant 1$. For all $u,v\in \ams$, the following hold \begin{itemize} \item[(i)] If $u\sim_k v$, then $\sm(u)\sim_{k+1}\sm(v)$. \item[(ii)] If $u\sim_1 v$, then $\smn{k}(u)\sim_{k+1}\smn{k}(v)$. \item[(iii)] If $|u|=|v|$, then $ \smn{k}(u)\sim_{k}\smn{k}(v)$. \end{itemize} \end{proposition} \begin{proof} The first two statements are direct consequences of~\cite[Prop.~3.9]{RSW}, which applies to any Parikh-collinear morphism. For all letters $i,j\in \am$, it holds that $\sm(i)\sim_1\sm(j)$. Hence, if two words $u$ and $v$ have the same length, then $\sm(u)\sim_1\sm(v)$. So statement (iii) follows directly from statement (ii). Therefore, (iii) holds true for any Parikh-constant morphism. \end{proof} \section{Ability to Discern \texorpdfstring{$k$-Binomially Non-Equivalent Factors}{k-Binomially Non-Equivalent Factors}} \label{sec:discern} The purpose of this section is to express differences of the form $\binom{\smn{k}(u)}{x}-\binom{\smn{k}(v)}{x}$ for suitable subwords $x$. We additionally compute $\binom{\smn{k}(u)}{x}-\binom{\smn{k}(u)}{y}$ for an appropriate choice of $x$ and $y$. Recall the convention that $\am=\Z/(m\Z)$, meaning any $i\in\Z$ is replaced with $(i\bmod{m})$. For example, a letter like $(-1)$ is identified with $m-1$. For convenience, if $a\in\mathbb{N}$, we let $\overline{a}$ denote $-a$. As an example, with $m=4$, the expression $2(-3)4(-1)=2\overline{3}0\overline{1}$ is indeed $2103$. In particular, the word $0 \mi{1} \cdots \mi{k}$, which has length~$k+1$, is a prefix of the periodic word $(0 \mi{1}\, \mi{2} \cdots 1)^\omega$. In the following statement, the letter~$0$ does not have any particular role.
By \cref{lem:permut}, one can instead consider $\smn{k}(i)$ and the subword $i (i-1)\cdots (i-k)$. This kind of result is particularly useful for proving that two factors are not $(k+1)$-binomially equivalent. \begin{proposition}\label{prop:-1} Let $m\geqslant 2$ and $k\geqslant 1$. Then for all $j\in\mathcal{A}_m \setminus \{0\}$, the following holds \[ \binom{\smn{k}(0)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(j)}{0 \mi{1} \cdots \mi{k}}=m^{\binom{k}{2}}. \] In particular, the coefficients $\binom{\smn{k}(j)}{0 \mi{1} \cdots \mi{k}}$ are identical for all $j\neq 0$. \end{proposition} As an example, for the classical Thue--Morse morphism, where $m=2$, we have $\mi{1}=1$, and thus: \[ \binom{\sigma_2^{2n}(0)}{(01)^n0}-\binom{\sigma_2^{2n}(1)}{(01)^n0}=2^{n(2n-1)} \] and \[ \binom{\sigma_2^{2n+1}(0)}{(01)^{n+1}}-\binom{\sigma_2^{2n+1}(1)}{(01)^{n+1}}=2^{n(2n+1)}. \] \begin{proof} We proceed by induction on $k$. For the base case $k=1$, \cref{lem:subwords} shows that the subword $0\mi{1}$ occurs exactly once in $\sm(0)$ and does not appear in any other $\sm(j)$ for $j\neq 0$. Assume that the statement holds for some $k\geqslant 1$. We now prove it for $k+1$. The word $u=\smn{k+1}(0)$ can be factorized into $m$ consecutive words, each of length~$m^k$ (referred to as \emph{$m^k$-blocks}), as follows: $u=\smn{k}(0)\smn{k}(1)\cdots \smn{k}(\mi{1})$. Similarly, the word $v=\smn{k+1}(j)$ is a cyclic permutation of the $m^k$-blocks of $u$, given by \[ v=\smn{k}(j) \cdots \smn{k}(\mi{1})\smn{k}(0)\cdots \smn{k}(j-1). \] Our task is to count (or at least compare, as we are only interested in the difference) the occurrences of the subword $w=0 \mi{1}\cdots \mi{k}\,\mi{k+1}$ of length $k+2$ in $u$ and $v$. First, the number of occurrences fully contained within a single $m^k$-block is identical in $u$ and $v$ because they have the same $m^k$-blocks. Next, we count the occurrences of $w$ that are split across more than one $m^k$-block.
These occurrences can be categorized into two cases: \begin{itemize} \item[I)] $w$ is split across at least two blocks, with no more than $k$ letters of $w$ appearing in each $m^k$-block. \cref{prop:phik} ensures that $\smn{k}(i)\sim_{k}\smn{k}(i')$ for all letters $i$ and $i'$. So $u$ and $v$ contain the same number of such occurrences. \item[II)] $w$ is split across at least two blocks, with $k+1$ letters of $w$ appearing within a single $m^k$-block. \end{itemize} A difference arises only when $k+1$ letters of $w$ appear within a single $m^k$-block, while its first or last letter belongs to a different $m^k$-block. By induction hypothesis, $\binom{\smn{k}(i)}{0 \mi{1} \cdots \mi{k}}=\binom{\smn{k}(i')}{0 \mi{1} \cdots \mi{k}}$ for any $i,i'\neq 0$. Similarly, $\binom{\smn{k}(i)}{\mi{1} \cdots \mi{k+1}}=\binom{\smn{k}(i')}{\mi{1} \cdots \mi{k+1}}$ for $i,i'\neq \mi{1}$. So, to get different contributions, we need only focus on where the blocks $\smn{k}(0)$ and $\smn{k}(\mi{1})$ occur in $u$ and $v$. Let us first consider $\smn{k}(0)$. It appears at the beginning of $u$ and contains the subword $0 \mi{1}\cdots \mi{k}$ exactly $\binom{\smn{k}(0)}{0 \mi{1} \cdots \mi{k}}$ times. Moreover, $\mi{k+1}$ occurs exactly once in each of the subsequent $(m-1)m^{k-1}$ blocks of length~$m$ within $\smn{k}(1)\cdots \smn{k}(\mi{1})$. However, the first $m^k$-block in $v$ is $\smn{k}(j)$, where the subword $0 \mi{1}\cdots \mi{k}$ appears only $\binom{\smn{k}(j)}{0 \mi{1} \cdots \mi{k}}$ times. By induction hypothesis, the resulting difference is \[ m^{\binom{k}{2}} (m-1) m^{k-1}. \] A similar reasoning applies to $\smn{k}(\mi{1})$, which appears as the suffix of $u$ and contains the subword $\mi{1}\, \mi{2}\cdots \mi{k+1}$ exactly $\binom{\smn{k}(\mi{1})}{\mi{1}\, \mi{2} \cdots \mi{k+1}}$ times. Moreover, $0$ occurs exactly once in each of the preceding $(m-1)m^{k-1}$ blocks of length~$m$ within $\smn{k}(0)\cdots \smn{k}(\mi{2})$.
Using \cref{lem:permut} and the induction hypothesis, the resulting difference is once again $m^{\binom{k}{2}} (m-1) m^{k-1}$. We still have to take into account the contributions of $\smn{k}(0)$ and $\smn{k}(\mi{1})$ within $v$. The word $v$ begins with $m-1-j$ blocks of length $m^k$ followed by $\smn{k}(\mi{1})\smn{k}(0)$, and ends with $j-1$ blocks of length $m^k$. We have to count the number of $0$'s appearing before $\smn{k}(\mi{1})$ and the $\mi{k+1}$'s appearing after $\smn{k}(0)$. There are $(m-1-j)m^{k-1}$ such $0$'s and $(j-1)m^{k-1}$ such $\mi{k+1}$'s. By comparing with the blocks occurring in the corresponding position in $u$, we obtain the following difference \[ \left(\binom{\smn{k}(\mi{j+1})}{\mi{1}\, \mi{2}\cdots \mi{k+1}}-\binom{\smn{k}(\mi{1})}{\mi{1}\, \mi{2} \cdots \mi{k+1}}\right) (m-1-j)m^{k-1} + \left(\binom{\smn{k}(\mi{j})}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(0)}{0 \mi{1} \cdots \mi{k}}\right) (j-1)m^{k-1}. \] By induction hypothesis, we find that both terms in parentheses are equal to $-m^{\binom{k}{2}}$. Therefore, the difference is $-m^{\binom{k}{2}}(m-2)m^{k-1}$. Combining the results from the three preceding discussions, we get a total difference of \[ 2m^{\binom{k}{2}} (m-1) m^{k-1}-m^{\binom{k}{2}}(m-2)m^{k-1}=m^{\binom{k+1}{2}}, \] matching the expected result. \end{proof} \begin{corollary}\label{cor:notequiv} Let $u,v\in\ams$ be words of the same length. Then, \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v)}{0 \mi{1} \cdots \mi{k}}=\left(|u|_0-|v|_0\right)\, m^{\binom{k}{2}}. \] In particular, if $u\not\sim_1v$, then $\smn{k}(u)\not\sim_{k+1} \smn{k}(v)$. \end{corollary} \begin{proof} There exist words $p,u'$, and $v'$ such that $u\sim_1 p u'$ and $v\sim_1 p v'$, where $u'$ and $v'$ share no common letters, and $|u'|=|v'|$. Let $\Psi(u)=(s_1,\ldots,s_m)$ and $\Psi(v)=(t_1,\ldots,t_m)$. Then, $p$ is a word such that $\Psi(p)=\left(\min\{s_1,t_1\},\ldots,\min\{s_m,t_m\}\right)$.
By \cref{prop:phik}, $\smn{k}(u)\sim_{k+1} \smn{k}(pu')$. Therefore, \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}=\binom{\smn{k}(p u')}{0 \mi{1} \cdots \mi{k}} =\sum_{\substack{x,y\in\ams \\ xy=0 \mi{1} \cdots \mi{k}}} \binom{\smn{k}(p)}{x} \binom{\smn{k}(u')}{y}. \] Thus, \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v)}{0 \mi{1} \cdots \mi{k}} =\sum_{\substack{x,y\in\ams \\ xy=0 \mi{1} \cdots \mi{k}}} \binom{\smn{k}(p)}{x} \left(\binom{\smn{k}(u')}{y}-\binom{\smn{k}(v')}{y}\right). \] Using \cref{prop:phik} again, $\smn{k}(u')\sim_k\smn{k}(v')$. Therefore, if $|y|\leqslant k$, we have \[ \binom{\smn{k}(u')}{y}-\binom{\smn{k}(v')}{y}=0. \] Hence, we conclude \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v)}{0 \mi{1} \cdots \mi{k}}= \binom{\smn{k}(u')}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v')}{0 \mi{1} \cdots \mi{k}}. \] As shown in the proof of \cref{prop:-1}, since $\smn{k}(i)\sim_k\smn{k}(j)$ for all $i,j\in\am$, a non-zero difference arises only if a subword $0 \mi{1} \cdots \mi{k}$ appears entirely within an $m^k$-block. More precisely, if $u'=a_1\cdots a_r$ and $v'=b_1\cdots b_r$, where the $a_i$'s and $b_i$'s are letters, the difference can be expressed as \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v)}{0 \mi{1} \cdots \mi{k}}=\sum_{i=1}^r \binom{\smn{k}(a_i)}{0 \mi{1} \cdots \mi{k}}-\sum_{i=1}^r \binom{\smn{k}(b_i)}{0 \mi{1} \cdots \mi{k}}. \] Using \cref{prop:-1}, it follows that \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{k}}-\binom{\smn{k}(v)}{0 \mi{1} \cdots \mi{k}}=(|u'|_0-|v'|_0)\, m^{\binom{k}{2}}. \] In the particular case where $u$ and $v$ are not abelian equivalent, the words $u'$ and $v'$ must be non-empty. W.l.o.g.\ (applying a power of $\tau_m$ to $u$, $v$, and the subword, if necessary), we may assume that $0$ appears in $u'$ (and does not appear in $v'$). The conclusion then follows. \end{proof} By combining \cref{prop:phik,cor:notequiv}, we obtain \cref{pro:both_dir}, which is restated below.
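The identities of \cref{prop:-1} and \cref{cor:notequiv} lend themselves to a direct computational sanity check. The following Python sketch is purely illustrative and not part of the formal development; the helper names \texttt{wbin} (binomial coefficient of words, computed by the standard subsequence-counting dynamic program) and \texttt{image} are ours. It verifies both statements for $m=3$ and $k=2$.

```python
def wbin(w, x):
    # Binomial coefficient of words: number of occurrences of x
    # as a scattered subword of w (subsequence-counting DP).
    dp = [1] + [0] * len(x)
    for c in w:
        for i in range(len(x) - 1, -1, -1):
            if x[i] == c:
                dp[i + 1] += dp[i]
    return dp[-1]

def sigma(w, m):
    # Generalized Thue--Morse morphism: a -> a (a+1) ... (a+m-1) (mod m).
    return [(a + i) % m for a in w for i in range(m)]

m, k = 3, 2
target = [(-i) % m for i in range(k + 1)]  # the subword 0 (-1) ... (-k)

def image(u):
    # sigma_m^k applied to the word u
    for _ in range(k):
        u = sigma(u, m)
    return u

# Proposition: wbin(sigma^k(0), target) - wbin(sigma^k(j), target) = m^C(k,2)
for j in range(1, m):
    assert wbin(image([0]), target) - wbin(image([j]), target) == m ** (k * (k - 1) // 2)

# Corollary: for |u| = |v|, the difference equals (|u|_0 - |v|_0) * m^C(k,2)
for u, v in [([0, 0, 1], [1, 2, 2]), ([0, 1, 2], [2, 2, 2]), ([0, 2], [1, 1])]:
    diff = wbin(image(u), target) - wbin(image(v), target)
    assert diff == (u.count(0) - v.count(0)) * m ** (k * (k - 1) // 2)
```

For instance, with $m=3$ and $k=2$ one finds $\binom{\sigma_3^2(0)}{0\mi{1}\,\mi{2}}=5$ while $\binom{\sigma_3^2(j)}{0\mi{1}\,\mi{2}}=2$ for $j\neq 0$, in accordance with the difference $m^{\binom{k}{2}}=3$.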
\bothdir* \cref{prop:-1} dealt with subwords of length~$k+1$ occurring in $m^k$-blocks. The next statement focuses on subwords of length at most $k$ that appear in the image of a word under $\smn{k}$. This result will play a key role in the proof of \cref{lem:bigdiff}. \begin{lemma}\label{lem:smart} Let $\ell\leqslant k$ and $u\in\ams$. For all $j\in\am$, the following holds: \[ \binom{\smn{k}(u)}{0 \mi{1} \cdots \mi{\ell-1}} =\binom{\smn{k}(u)}{\mi{j} \cdots \mi{j+\ell-1}}. \] \end{lemma} \begin{proof} Let $u=a_1\cdots a_t$, where $a_i\in \am$. First of all, we note that trivially \[ \binom{\smn{k}(a_1\cdots a_t)}{\mi{j} \cdots \mi{j+\ell-1}} = \binom{\tau_m^{j}(\smn{k}(a_1 \cdots a_t))}{\tau_m^{j}(\mi{j} \cdots \mi{j+\ell-1})}, \] as the subwords occur at the same positions in the respective words. Furthermore, we have $\tau_m^{j}(\mi{j} \cdots \mi{j+\ell-1}) = 0 \mi{1} \cdots \mi{\ell-1}$. Finally, since $\sm \circ \tau_m = \tau_m \circ \sm$, it follows that \[ \binom{\smn{k}(a_1\cdots a_t)}{\mi{j} \cdots \mi{j+\ell-1}} = \binom{\tau_m^{j}(\smn{k}(a_1 \cdots a_t))}{0 \mi{1} \cdots \mi{\ell-1}} = \binom{\smn{k}(\tau_m^{j}(a_1 \cdots a_t))}{0 \mi{1} \cdots \mi{\ell-1}} = \binom{\smn{k}((a_1+j)\cdots (a_t+j))}{0 \mi{1} \cdots \mi{\ell-1}}. \] Furthermore, by \cref{prop:phik}(iii), we know that $ \smn{k}(a_1\cdots a_t) \sim_k \smn{k}\left((a_1+j)\cdots (a_t+j)\right). $ Since $\ell\leqslant k$, both words then have the same binomial coefficient with respect to $0 \mi{1} \cdots \mi{\ell-1}$, and the desired result follows. \end{proof} The next lemma is presented in its full generality. For the sake of presentation, the proof is given in \cref{sec:appbigfiff}. \begin{lemma}\label{lem:bigdiff} Let $k\geqslant 2$. Suppose $u,u',\gamma,\gamma',\delta,\delta'\in\ams$ are words such that $\gamma\delta\sim_1\gamma'\delta'$ and $|u|=|u'|$.
Then, the difference \[ \binom{\smn{k-1}(\gamma \sm(u) \delta)}{0 \mi{1}\cdots \mi{k}}- \binom{\smn{k-1}(\gamma' \sm(u') \delta')}{0 \mi{1}\cdots \mi{k}} \] is given by \[ \begin{aligned} &m^{\binom{k}{2}} \biggl[ |u|_0 - |u'|_0 + |u| \, \left(|\gamma|_0 - |\gamma'|_0 + |\delta|_{\mi{1}} - |\delta'|_{\mi{1}}\right) \biggr] \\ &\quad + m^{\binom{k}{2}-1} \sum_{b \in \am} \left( \binom{\gamma\delta}{b\mi{1}} - \binom{\gamma'\delta'}{b\mi{1}} + \binom{\gamma\delta}{0b} - \binom{\gamma'\delta'}{0b} \right). \end{aligned} \] \end{lemma} \section{Recognizability and Structure of Factors}\label{sec:rec} First, we recall a recognizability property stating that any long enough factor~$U\in\Fac(\infw{t}_m)$ has a unique $\smn{k}$-factorization of the form $p_{_{U}}\smn{k}(u)s_{_{U}}$, where $p_{_{U}}$ and $s_{_{U}}$ are words of length less than $m^k$. Next, we examine the structure of those pairs $\left(p_{_{U}}, s_{_{U}}\right)$ in detail and show that they are subject to strong constraints. This will allow us to carry out precise counting in \cref{sec:4}. We summarize some well-known concepts and results (see, for instance, \cite{Balkova2012,Frid1998}). A morphism~$\varphi$ is called {\em marked} if, for every pair of distinct letters, their images under~$\varphi$ differ in both the first and last letters. A morphism~$\varphi\colon \mathcal{A}^*\to \mathcal{A}^*$ is said to be {\em primitive} if there exists an integer~$n$ such that, for all $a\in \mathcal{A}$, the word $\varphi^n(a)$ contains all letters of $\mathcal{A}$. \begin{remark} Let $\varphi:\mathcal{A}^*\to \mathcal{A}^*$ be a morphism, and let $n\geqslant 1$ be an integer. If $\varphi$ is marked (respectively, primitive, $\ell$-uniform), then $\varphi^n$ is marked (respectively, primitive, $\ell^n$-uniform).
\end{remark} Note that, for all $k\geqslant 1$, the $k^{\text{th}}$ power of our morphism of interest $\sm$ is such that $\smn{k}(i)$ begins with $i$ and ends with $i-k$. Therefore, the morphism~$\smn{k}$ is marked. Let $\infw{x}$ be a fixed point of a morphism $\varphi$ over $\mathcal{A}$. A factor $w$ of $\infw{x}$ is said to contain a {\em synchronization point} $(w_1,w_2)$ if $w=w_1w_2$ and, for all $v_1,v_2\in \mathcal{A}^*$, $s\in\Fac(\infw{x})$ such that $\varphi(s)=v_1w_1w_2v_2$, there exist $s_1,s_2\in\Fac(\infw{x})$ such that $s=s_1s_2$, $\varphi(s_1)=v_1w_1$, and $\varphi(s_2)=w_2v_2$. A factor $w$ that contains a synchronization point is said to be {\em circular}. \begin{proposition}\label{pro:gen-circular} Let $\varphi$ be an $\ell$-uniform, primitive, marked morphism with $\infw{x}$ as one of its fixed points. If $u$ is a circular factor of $\infw{x}$, then $u$ has a unique $\varphi$-factorization (in the sense of \cref{def:factorization}). \end{proposition} \begin{proposition}\label{pro:circular} For all $k\geqslant 1$, the morphism~$\smn{k}$ is an $m^k$-uniform, primitive, marked morphism. Moreover, every factor of its fixed point $\infw{t}_m$ that has length at least $2m^k$ is circular. \end{proposition} \begin{example} The factor $\sm(0)^2$ of $\infw{t}_m$ admits the $m$ candidate factorizations $\sm(00)$ and \[ \suff_{j}\left(\sm(j)\right) \cdot \sm(j) \cdot \pref_{m-j}\left(\sm(j)\right), \qquad j=1,\ldots,m-1. \] However, only one of these is a valid $\sm$-factorization, namely $\sm(00)$. Indeed, $j^3$ does not occur in $\infw{t}_m$ for any $j$ (cf.~\cref{prop:GTM-overlap-free}), implying that none of the other factorizations are valid $\sm$-factorizations. The factor $\sm(0)01\cdots (m-2)$, which has length~$2m-1$, has two possible $\sm$-factorizations: \[ \sm(0)\cdot \pref_{m-1}\left(\sm(0)\right) \quad \text{and} \quad \suff_{m-1}\left(\sm(m-1)\right) \cdot \sm(m-1).
\] Recall from \cref{lem:boundaryseq} that $00$ and $(m-1)(m-1)$ are indeed factors of $\infw{t}_m$. \end{example} \begin{remark}\label{rem:all-factors-smk-factorization} For any $k\geqslant 1$, it is obvious that all factors of length at least~$m^k-1$ in $\infw{t}_m$ have a $\smn{k}$-factorization, since the image of a letter has length~$m^k$. To simplify the arguments in \cref{sec:characterizing}, we extend this observation to all factors. Namely, for any $k\geqslant 1$, any factor $U$ of $\infw{t}_m$ has a $\smn{k}$-factorization. We prove this by induction on $k$. For $k=1$, the only case to consider is when a factor $U$ appears properly within the image of a letter, i.e., $U = \ell \cdots \left(\ell + |U|-1\right)$ for some $\ell \in \am$ with $|U| \leqslant m-2$. Notice that \[ \pref_j(U) = \suff_j\left(\sm(\ell + j)\right) \quad \text{and} \quad \suff_{|U|-j}(U) = \pref_{|U| - j}\left(\sm(\ell+j)\right). \] Since all squares $a^2$, where $a \in \am$, appear in $\infw{t}_m$, it follows that, for each $j$ with $0 \leqslant j \leqslant |U|$, the word $U$ has the $\sm$-factorization \[ \suff_{j}\left(\sm(\ell+j)\right) \cdot \sm(\varepsilon) \cdot \pref_{|U|-j}\left(\sm(\ell + j)\right), \] yielding $|U|+1$ distinct $\sm$-factorizations. Now, assume that $U$ has a $\smn{k}$-factorization of the form $x \smn{k}(u)y$, where $x$ is a proper suffix of $\smn{k}(a)$ and $y$ is a proper prefix of $\smn{k}(b)$, and $aub$ is a factor of $\infw{t}_m$. If $u = \varepsilon$, then we have the $\smn{k+1}$-factorization $x\cdot \smn{k+1}(\varepsilon)\cdot y$. This is valid since $(a+1)b$ is a factor of $\infw{t}_m$, $\smn{k}(a)$ is a suffix of $\smn{k+1}(a+1)$, and $\smn{k}(b)$ is a prefix of $\smn{k+1}(b)$. Now, assume $|u|\geqslant 1$, implying $|U| \geqslant m^k$. If $U$ does not appear properly within the $\smn{k+1}$-image of a letter, there is nothing to prove.
Thus, consider the case where $U$ appears, w.l.o.g., properly within $\smn{k+1}(0) = \smn{k}(0\cdots(m-1))$, which implies $|U| \leqslant m^{k+1}-2$. We can express $U$ as $U = x'\smn{k}(u')y'$, where $u' = \ell(\ell+1)\cdots (\ell + t)$ for some $\ell \geqslant 1$ and $t < m-1 - \ell$, with $x'$ being a proper suffix of $\smn{k}(\ell-1)$, and $y'$ a proper prefix of $\smn{k}(\ell + t + 1)$. Here, we allow $t=-1$ to indicate that $u'$ is empty. If $x'\neq\varepsilon$, we obtain the $\smn{k+1}$-factorization $x' \cdot \smn{k+1}(\varepsilon) \cdot \smn{k}(u')y'$, where $x'$, being a suffix of $\smn{k}(\ell-1)$, is a proper suffix of $\smn{k+1}(\ell)$, and $\smn{k}(u')y'$ is a proper prefix of $\smn{k+1}(\ell)$. As $\ell\ell$ is a factor of $\infw{t}_m$, the conclusion holds. If $x' = \varepsilon$, then we obtain the $\smn{k+1}$-factorization $\varepsilon \cdot \smn{k+1}(\varepsilon) \cdot \smn{k}(u')y'$. This concludes the proof. \end{remark} \begin{corollary}\label{cor:unique-factorization-bound} For all factors $U\in\Fac(\infw{t}_m)$ of length $|U|\geqslant 2m^k$, there exists a unique $\smn{k}$-factorization: \[ U=p_{_{U}} \smn{k}(u) s_{_{U}}. \] In particular, the words $p_{_{U}}$, $s_{_{U}}$, and $u$ are unique. \end{corollary} \begin{proof} This result follows directly from \cref{pro:gen-circular,pro:circular}. \end{proof} \begin{example} Let $m=3$ and $k=2$. The word \[ U=1200121202011202010122010121, \] which has length~$28$, is a factor of $\infw{t}_3$. It can be factorized as: \[ \sigma_3(1)\sigma_3^2(01)\sigma_3(20)1 \] where $p_{_{U}}=\sigma_3(1)$ and $s_{_{U}}=\sigma_3(20)1$. \end{example} Since the word $s_{_{U}}$ is a proper prefix of some $\smn{k}(j)$, it has a specific structure. Since $|s_{_{U}}|<m^k$, this length can be uniquely expressed using a base-$m$ expansion as \[ |s_{_{U}}|=\sum_{i=0}^{k-1} c_{k-i}\, m^i,\quad c_1,\ldots,c_k\in\{0,\ldots,m-1\}.
\] By applying a similar greedy procedure to the word $s_{_{U}}$ (refer to~\cite{DumontThomas} for details on Dumont--Thomas numeration systems associated with a morphism, or~\cite{RigoBook}), we obtain the following unique decomposition \begin{equation} \label{eq:decompsu} s_{_{U}}=\prod_{i=1}^k \smn{k-i}\left(v_{i}\right) \end{equation} where the words $v_i$ are defined as follows \[ v_i=(j+\sum_{r=1}^{i-1}c_r) \, (j+\sum_{r=1}^{i-1}c_r+1) \cdots (j+\sum_{r=1}^{i}c_r-1). \] Notice that $|v_i|=c_i$, and $v_1\cdots v_k$ is a prefix of $\left(j(j+1)\cdots (j+m-1)\right)^\omega$. \begin{example} The base-$4$ expansion of $226$ is $3\cdot 4^3+2\cdot 4^2+2$. The prefix of $\sigma_4^4(0)$ of length~$226$ is given by \[ \sigma_4^3(012)\sigma_4^2(30)12 \] where $v_1=012$, $v_2=30$, $v_3=\varepsilon$, and $v_4=12$. Thus, $v_1\cdots v_4=0123012$. For instance, $\sigma_4^3(\underline{02})\sigma_4^2(30)12$ is not the prefix of any $\sigma_4^4(a)$, as it involves applying $\sigma_4^3$ to a block composed of non-consecutive letters. \end{example} \begin{remark}\label{rem:uniquesu} Knowing the value of $j$ and the length $|s_{_{U}}|$ uniquely determines the decomposition given in~\eqref{eq:decompsu}. Equivalently, for all $1\leqslant n<m^k$ and any letter~$a$, there exists a unique factor of the form $s_{_{U}}$, of length~$n$, that starts (respectively, ends) with the letter~$a$. \end{remark} \begin{corollary}\label{cor:delpu} We collect the following facts. \begin{itemize} \item[(i)] With the above notation, let $q$ (respectively, $r$) be the least (respectively, largest) integer such that $c_q$ (respectively, $c_r$) is non-zero. Let $v_q=xy$ and $v_r=zh$, such that $v_1\cdots v_k=xy v_{q+1}\cdots v_{r-1}zh$. Then, \[ \smn{k-q}(y)\prod_{i=q+1}^{r-1} \smn{k-i}\left(v_{i}\right)\smn{k-r}(z) \] is a proper prefix of the image of a letter under $\smn{k}$.
\item[(ii)] If $c_1>0$ and at least one of $c_2,\ldots,c_k$ is non-zero, the only admissible deletion of letters from $v_1$, leading to a proper prefix of some $\smn{k}(a)$, is to suppress a prefix of $v_1$. Removing a proper suffix of $v_1$ or any ``internal'' factor would violate the constraint that $v_1\cdots v_k$ must be a prefix of the sequence $(j(j+1)\cdots (j+m-1))^\omega$. \item[(iii)] If $c_1$ is the only non-zero coefficient, the only permissible deletion of letters from $v_1$, resulting in a proper prefix of some $\smn{k}(a)$, is to suppress either a prefix or a suffix of $v_1$. \end{itemize} \end{corollary} A similar observation applies to $p_{_{U}}$, which is a proper suffix of some $\smn{k}(j+1)$. The only difference lies in the fact that $\sm(j+1)$ ends with $j$. Since $|p_{_{U}}|<m^k$, this length can be uniquely expressed using a base-$m$ expansion as: \[ |p_{_{U}}|=\sum_{i=0}^{k-1} c_{i+1}\, m^i,\quad c_k,\ldots,c_1\in\{0,\ldots,m-1\}. \] By applying a similar greedy procedure to the word $p_{_{U}}$, we obtain the following decomposition: \[ p_{_{U}}=\prod_{i=1}^{k} \smn{i-1}\left(v_{i}\right) \] where the words $v_i$ are defined as follows \[ v_i = \left(j-k+i-\sum_{r=i}^{k}c_r+1\right) \cdots \left(j-k+i-\sum_{r=i+1}^{k}c_r-1\right)\left(j-k+i-\sum_{r=i+1}^{k}c_r\right). \] Notice that $|v_i|=c_i$. \begin{example} The base-$4$ representation of $226$ is $3\cdot 4^3+2\cdot 4^2+2$. Here, the suffix of $\sigma_4^4(0)$ of length~$226$ is given by \[ 23\sigma_4^2(23)\sigma_4^3(123) \] where $v_4=123$, $v_3=23$, $v_2=\varepsilon$, and $v_1=23$. \end{example} \begin{remark}\label{rem:uniquepu} Similar to the previous case, knowing the value of $j$ and the length $|p_{_{U}}|$ uniquely determines the decomposition. Equivalently, for all $1\leqslant n<m^k$ and any letter~$a$, there exists a unique factor of the form $p_{_{U}}$, of length~$n$, that starts (respectively, ends) with the letter~$a$.
\end{remark} \begin{corollary} We collect the following facts. \begin{itemize} \item[(i)] If $c_k>0$ and at least one of $c_1,\ldots,c_{k-1}$ is non-zero, the only admissible deletion of letters from $v_k$ resulting in a proper suffix of some $\smn{k}(a)$ is to suppress a suffix of $v_k$. Deleting a proper prefix of $v_k$ or some ``internal'' factor would not yield a valid suffix. \item[(ii)] If $c_k$ is the only non-zero coefficient, the only admissible deletion of letters from $v_k$ leading to a proper suffix of some $\smn{k}(a)$ is to suppress either a prefix or a suffix of $v_k$. \end{itemize} \end{corollary} \section{Counting Classes of a New Equivalence Relation}\label{sec:4} Since $\sm$ is Parikh-constant, the $k$-binomial equivalence of two factors depends primarily on their short prefixes and suffixes, rather than on their central part composed of $m^k$-blocks. It is thus meaningful to focus on these prefixes and suffixes for our analysis. This section presents the core of our counting methods. For the sake of presentation, let us recall \cref{def:equivk}. Let $(p_1,s_1),(p_2,s_2)\in\mathcal{A}_m^{<m^k}\times \mathcal{A}_m^{<m^k}$. We have $(p_1,s_1)\equiv_k (p_2,s_2)$ whenever there exist $x,y,p,q,r,t\in \ams$ with $|x|,|y|<m^{k-1}$ such that \begin{eqnarray*} (p_1,s_1)&=&\left(x \smn{k-1}(p),\smn{k-1}(q) y\right),\\ (p_2,s_2)&=&\left(x \smn{k-1}(r),\smn{k-1}(t) y\right), \end{eqnarray*} and one of the following conditions holds: \begin{itemize} \item $pq\sim_1 rt$, \item $pq\sim_1 rt\sm(0)$, \item $pq\sm(0)\sim_1 rt$. \end{itemize} Notice that if $(p_1,s_1)\equiv_k (p_2,s_2)$, then \[ |p_1s_1|=|p_2s_2| \quad \text{or} \quad \left|\, |p_1s_1|-|p_2s_2|\, \right|=m^k. \] \begin{proposition}\label{pro:imply} Let $k\geqslant 2$, and $U,V\in \Fac(\infw{t}_m)$ of length at least $2m^k$. If $(p_{_{U}},s_{_{U}})\equiv_k (p_{_{V}},s_{_{V}})$, then $U\sim_k V$. \end{proposition} \begin{proof} Suppose first that $|p_{_{U}}s_{_{U}}|=|p_{_{V}}s_{_{V}}|$.
By definition, there exist $x,y,p,q,r,t,u,v\in \ams$ such that: \begin{eqnarray*} U=p_{_{U}}\smn{k}(u)s_{_{U}}&=x \smn{k-1}(p)\smn{k}(u)\smn{k-1}(q) y&=x \smn{k-1}\left(p\sm(u)q\right)y\\ V=p_{_{V}}\smn{k}(v)s_{_{V}}&=x \smn{k-1}(r)\smn{k}(v)\smn{k-1}(t) y&=x \smn{k-1}\left(r\sm(v)t\right)y \end{eqnarray*} and $pq\sim_1 rt$. Since $|U|=|V|$, it follows that $|u|=|v|$ and $\sm(u)\sim_1\sm(v)$. Thus, $p\sm(u)q\sim_1 r\sm(v)t$. By \cref{prop:phik}, we have \[ \smn{k-1}\left(p\sm(u)q\right)\sim_k \smn{k-1}\left(r\sm(v)t\right). \] For the second case, suppose that $|p_{_{U}}s_{_{U}}|=|p_{_{V}}s_{_{V}}|+m^k$. Using the same notation as above, we have $pq\sim_1 rt\sm(0)$ and $|v|=|u|+1$. Therefore \[ r\sm(v)t\sim_1 r\sm(u)\sm(0)t\sim_1 p\sm(u)q \] and we reach the same conclusion. \end{proof} We have an immediate lower bound for the $k$-binomial complexity of the generalized Thue--Morse word~$\infw{t}_m$. Using \cref{thm:main_equiv}, we will get the value of \( \#\left( \left\{(p_{_{U}},s_{_{U}})\mid \, U\in \Fac_n(\infw{t}_m)\right\}/\equiv_k\right) \). \begin{corollary} For all $n\geqslant 2m^k$, the $k$-binomial complexity $\bc{\infw{t}_m}{k}(n)$ satisfies the inequality \[ \bc{\infw{t}_m}{k}(n) \geqslant \#\left( \{(p_{_{U}},s_{_{U}})\mid \, U\in \Fac_n(\infw{t}_m)\}/\equiv_k\right). \] \end{corollary} Let $n\geqslant 2 m^k$ and let $\lambda=n\bmod m^k$. Write $\lambda=\mu+\nu m^{k-1}$, with $\mu<m^{k-1}$ and $\nu<m$. We begin by defining a partition of the set of pairs. \begin{definition} Let $\ell\in\{0,\ldots,m^{k-1}-1\}$. Let \[ P_\ell^{(n)}:=\left\{(p_{_{U}},s_{_{U}}) \mid \, U\in \Fac_n(\infw{t}_m), \, |p_{_{U}}|\equiv \ell\pmod{m^{k-1}}\right\} \] and, similarly, \[ S_{\ell'}^{(n)}:=\left\{(p_{_{U}},s_{_{U}}) \mid \, U\in \Fac_n(\infw{t}_m), \, |s_{_{U}}|\equiv \ell'\pmod{m^{k-1}}\right\}. \] \end{definition} Note that \[ \bigcup_{\ell=0}^{m^{k-1}-1}P_\ell^{(n)}=\left\{(p_{_{U}},s_{_{U}}) \mid \, U\in \Fac_n(\infw{t}_m)\right\}=\bigcup_{\ell'=0}^{m^{k-1}-1}S_{\ell'}^{(n)}.
\] Let $(p,s)\in P_\ell^{(n)}$. By Euclidean division, since $|p|,|s|<m^k$, we have \[ |p|=\ell +\alpha\, m^{k-1} \quad\text{and}\quad |s|=\ell' +\alpha'\, m^{k-1}, \] for some $\alpha,\alpha'<m$ and $\ell'<m^{k-1}$. We show that $n,\ell,\alpha$ completely determine $\ell'$ and $\alpha'$. In particular, for each $\ell$, there exists a unique $\ell'$ such that $P_\ell^{(n)}=S_{\ell'}^{(n)}$. Since \[ \ell +\ell'+(\alpha+\alpha')\, m^{k-1}=|ps|\equiv n \pmod{m^k}, \] we have \[ \ell +\ell'\equiv \mu\pmod{m^{k-1}}. \] Thus, either \begin{enumerate} \item $\ell\leqslant\mu$ and $\ell'=\mu-\ell$, or \item $\ell>\mu$ and $\ell'=m^{k-1}+\mu-\ell$. \end{enumerate} If $\ell\leqslant\mu$, then $\ell +\ell'=\mu$ and \[ |ps|=\mu+(\alpha+\alpha')\, m^{k-1}\equiv \mu+\nu m^{k-1}\pmod{m^k}. \] If $\alpha\leqslant\nu$, then $\alpha'=\nu-\alpha$. Otherwise, $\alpha>\nu$ and $\alpha'=\nu+m-\alpha$. In the second case ($\ell>\mu$), we have \[ \ell +\ell'=\mu+m^{k-1} \quad \text{and} \quad |ps|=\mu+(\alpha+\alpha'+1)\, m^{k-1}\equiv \mu+\nu m^{k-1}\pmod{m^k}. \] If $\alpha\leqslant\nu-1$, then $\alpha'=\nu-\alpha-1$. Otherwise, $\alpha>\nu-1$ and $\alpha'=\nu+m-\alpha-1$. These observations are recorded in \cref{tab:summary}. \begin{table}[h!tb] \centering \[ \begin{array}{ll|ll} &\ell\leqslant\mu& &\ell>\mu\\ \hline \alpha\leqslant\nu: & \ell'=\mu-\ell& \alpha\leqslant\nu-1: & \ell'=m^{k-1}+\mu-\ell\\ & \alpha'=\nu-\alpha & & \alpha'=\nu-\alpha-1\\ \text{i.e.,} & \alpha+\alpha'=\nu & \text{i.e.,} & \alpha+\alpha'=\nu-1 \\ \hline \alpha>\nu: & \ell'=\mu-\ell & \alpha>\nu-1: & \ell'=m^{k-1}+\mu-\ell\\ & \alpha'=\nu+m-\alpha & & \alpha'=\nu+m-\alpha-1 \\ \text{i.e.,} & \alpha+\alpha'=\nu+m & \text{i.e.,} & \alpha+\alpha'=\nu+m-1 \\ \hline \end{array} \] \caption{Summary of $(\ell',\alpha')$ for fixed $\mu,\nu$ and varying $\alpha$.} \label{tab:summary} \end{table} \begin{example} Let $m=3$ and $k=2$. If $n\equiv 4\pmod{9}$, then $\mu=1$ and $\nu=1$.
The set $P^{(n)}_0$ contains pairs $(p,s)$ such that $|p|=0,3,6$, that is, $|p|=0+\alpha\, 3$ for $\alpha=0,1,2$. Since $\ell=0\leqslant 1=\mu$, we have $\ell'=1$. For $\alpha=0$ or $1$, which is less than or equal to $\nu$, the corresponding values of $\alpha'$ are $1$ and $0$, respectively. For $\alpha=2$, which is greater than $\nu$, we get $\alpha'=\nu+3-\alpha=2$. Thus, the lengths of $s$ corresponding to $|p|=0,3,6$ are $4,1,7$, respectively. Therefore, $P^{(n)}_0=S^{(n)}_1$. The set $P^{(n)}_1$ contains pairs $(p,s)$ such that $|p|=1,4,7$, that is, $|p|=1+\alpha\, 3$ for $\alpha=0,1,2$. Since $\ell=1\leqslant 1=\mu$, we have $\ell'=0$. For $\alpha=0$ or $1$, which is less than or equal to $\nu$, the corresponding values of $\alpha'$ are $1$ and $0$, respectively. For $\alpha=2$, which is greater than $\nu$, we get $\alpha'=\nu+3-\alpha=2$. Thus, the lengths of $s$ corresponding to $|p| =1,4,7$ are $3,0,6$, respectively. Therefore, $P^{(n)}_1=S^{(n)}_0$. The set $P^{(n)}_2$ contains pairs $(p,s)$ such that $|p|=2,5,8$, that is, $|p|=2+\alpha\, 3$ for $\alpha=0,1,2$. Since $\ell=2$ is greater than $\mu=1$, we have $\ell'=3+\mu-2=2$. For $\alpha=0$, the corresponding value of $\alpha'$ is $\nu-\alpha-1=0$. For $\alpha=1$ and $\alpha=2$, both greater than $\nu-1$, the corresponding values of $\alpha'$ are $2$ and $1$, respectively. Thus, the lengths of $s$ corresponding to $|p|=2,5,8$ are $2,8,5$, respectively. Finally, $P^{(n)}_2=S^{(n)}_2$. \end{example} Note that if $\mu=0$, then $P^{(n)}_0=S^{(n)}_0$. If $\mu\neq 0$, then for $\ell=0$, we have $\ell'=\mu\neq 0$. In that case, $P^{(n)}_0=S^{(n)}_\mu\neq S^{(n)}_0=P^{(n)}_\mu$. This observation gives an initial hint as to why the statement of \cref{thm:main_equiv} contains two cases. Recall that the abelian complexity of $\infw{t}_m$ is well known (see \cref{thm:abelian_complexity}). \begin{theorem}\label{thm:main_equiv} Let $n\geqslant 2 m^k$.
If $\lambda=n \bmod{m^k}$ and $\lambda=\nu m^{k-1}+\mu$, where $\nu<m$ and $\mu<m^{k-1}$, then the value of \[\#\left(\left\{(p_{_{U}},s_{_{U}}) \mid \, U\in \Fac_n(\infw{t}_m)\right\}/\equiv_k\right)\] is given by \[ (m^{k-1}-1)(m^3-m^2+ m)+\left\{\begin{array}{ll} \bc{\infw{t}_m}{1}(m+\nu), & \text{ if }\mu =0;\\ m, & \text{ otherwise.}\\ \end{array}\right. \] \end{theorem} \begin{remark} Note that for $k=2$, which was the case studied in~\cite{ChenWen2024}, this expression matches the $2$-binomial complexity of $\infw{t}_m$. Thus, we obtain the converse of \cref{pro:imply}: Let $U$ and $V$ be two factors of $\infw{t}_m$ of length at least $2m^2$. Then, $U\sim_2 V$ if and only if $(p_{_{U}},s_{_{U}})\equiv_2 (p_{_{V}},s_{_{V}})$. \end{remark} \begin{proof} $\bullet$ {\bf Case 1.a)} Let us consider $\mu\neq 0$ and $\ell\notin\{0,\mu\}$. Assume that $\ell\leqslant\mu$. Referring to the first column of \cref{tab:summary}, the elements of $P^{(n)}_\ell$ have the form given in \cref{tab:pnl}, where $x^j$ and $y^j$ are words and $r_i^j$, $t_i^j$ are letters.
\begin{table}[h!tbp] \[ \begin{array}{c|r|l|c} \alpha& p_{_{U}} & s_{_{U}} & \alpha'\\ \hline 0&x^0 \smn{k-1} (\varepsilon) & \smn{k-1} (r^0_1\cdots r^0_{\nu-1} r^0_\nu) y^0 &\nu\\ 1&x^1 \smn{k-1} (t_1^1) & \smn{k-1} (r^1_1\cdots r^1_{\nu-1}) y^1 & \nu-1 \\ \vdots&\vdots & \vdots & \vdots \\ \nu-1&x^{\nu-1} \smn{k-1} (t^{\nu-1}_{\nu-1}\cdots t^{\nu-1}_1) & \smn{k-1} (r_1^{\nu-1}) y^{\nu-1} & 1 \\ \nu&x^{\nu} \smn{k-1} (t^{\nu}_\nu\, t_{\nu-1}^{\nu}\cdots t^{\nu}_1) & \smn{k-1} (\varepsilon) y^{\nu} & 0 \\ \nu+1&x^{\nu+1} \smn{k-1} (t^{\nu+1}_{\nu+1} t_{\nu}^{\nu+1}\cdots t^{\nu+1}_1) & \smn{k-1} (r^{\nu+1}_1\cdots r^{\nu+1}_{\nu+1}\cdots r^{\nu+1}_{m-1}) y^{\nu+1} & m-1 \\ \vdots & \vdots & \vdots & \vdots \\ m-1&x^{m-1} \smn{k-1} (t^{m-1}_{m-1}\cdots t_{\nu}^{m-1}\cdots t^{m-1}_1) & \smn{k-1} (r^{m-1}_1\cdots r^{m-1}_{\nu+1}) y^{m-1} & \nu+1 \\ \end{array} \] \caption{Words in $P^{(n)}_\ell$.}\label{tab:pnl} \end{table} Since we are dealing with proper suffixes or prefixes of the image of a letter under $\smn{k}$, we also have \[ \forall j<m: \quad t^j_{i+1}=t^j_i-1 \text{ and } r^j_{i+1}=r^j_i+1. \] Since $\ell\neq 0$ (respectively, $\ell\neq \mu$), the words $x^j$ (respectively, $y^j$) are non-empty of length~$\ell$ (respectively, $\mu-\ell$). Thanks to \cref{rem:uniquesu,rem:uniquepu}, there are at most $m^2$ pairs on each row of \cref{tab:pnl}: a prefix (respectively, suffix) of any given length is determined by its last (respectively, first) letter. Thanks to \cref{lem:boundaryseq}, there are exactly $m^2$ pairs on each row. We now consider the quotient by $\equiv_k$. Since the words $r^0_1\cdots r^0_{\nu-1} r^0_\nu$ have length less than $m$ and are made of consecutive letters, if two such words have distinct first letters, then they are not abelian equivalent. Hence, the $m^2$ pairs on this row are pairwise non-equivalent. The same argument applies to the second row.
Nevertheless, if $t_1^1=r_1^1-1$, then \[ (x^1 \smn{k-1} (t_1^1), \smn{k-1} (r^1_1\cdots r^1_{\nu-1}) y^1)\equiv_k (x^1 \smn{k-1} (\varepsilon), \smn{k-1} (t_1^1 r^1_1\cdots r^1_{\nu-1} ) y^1). \] If $t_1^1\neq r_1^1-1$, we cannot make such a move and keep equivalent pairs (we know from \eqref{eq:decompsu} that we must have consecutive letters in $t_1^1 r^1_1\cdots r^1_{\nu-1}$). So we find $m(m-1)$ new classes. We have a similar counting in the first $\nu+1$ rows (we proceed downwards, comparing elements on a row with elements on previous rows). Take a word of the form \[ x^{j} \smn{k-1} (t^{j}_j \cdots t_s^{j}\cdots t^{j}_1) \] on the row $j\leqslant \nu$. Thanks to \cref{cor:delpu}(ii), we can only delete a suffix of $t^{j}_s \cdots t^{j}_1$ to keep a valid suffix of some $\smn{k}(a)$. If $t_1^j=r_1^j-1$, then, since the suffix is made of consecutive letters, \[ \begin{aligned} (x^{j} \smn{k-1} (t^{j}_j \cdots t^{j}_s \cdots t^{j}_1), \smn{k-1} (r^j_1 \cdots r^j_{\nu-j}) y^j) \\ \equiv_k (x^{j} \smn{k-1} (t^{j}_j \cdots t^{j}_{s+1}), \smn{k-1} (t^{j}_{s} \cdots t^{j}_1 r^j_1 \cdots r^j_{\nu-j}) y^j) \end{aligned} \] for any $1\leqslant s\leqslant j$. We again find $m(m-1)$ new classes. For the second part of the table, take row $j\geqslant \nu+1$. The reasoning is again the same but this time, when $t_1^j=r_1^j-1$, take $s\geqslant j-\nu$; then $t^{j}_{s} \cdots t^{j}_1\, r^j_1\cdots r^j_{m+\nu-j}$ has length $m+\nu-j+s\geqslant m$. So it has a prefix which is a cyclic permutation of $0,1,\ldots,m-1$. Hence, we find the equivalent pair \[ (x^{j} \smn{k-1} (t^{j}_j \cdots t^{j}_{s+1} ), \smn{k-1} (r^j_{m-s+1}\cdots r^j_{m+\nu-j}) y^j) \] in the first part of the table. The case $\ell>\mu$ is treated similarly. As a conclusion, we have $m^2$ classes for the first row and $m(m-1)$ classes for each of the $m-1$ other rows, for a total of $m^2+m(m-1)^2$ classes. We have considered so far $m^{k-1}-2$ sets $P^{(n)}_\ell$, each containing $m^2+m(m-1)^2$ classes.
$\bullet$ {\bf Case 1.b)} Let us consider $\mu\neq 0$ and focus on $P^{(n)}_0$ (a similar discussion applies to $P^{(n)}_\mu$). The only difference in \cref{tab:pnl} is that there is no word $x^j$ (it is empty because $\ell=0$). The word $y^j$ remains non-empty (because $\mu\neq 0$). In the first row, we have $(\varepsilon,\smn{k-1} (r^0_1\cdots r^0_{\nu-1} r^0_\nu) y^0)$, so the number of classes is given by the number~$m$ of choices for $r^0_1$. Now comes the extra discussion for $1\leqslant j\leqslant m-1$ due to the absence of~$x^j$. To get equivalent pairs from $\smn{k-1} (t^{j}_j\cdots \ t^{j}_1\ )$, we can, as above, move a suffix $t_s^{j}\cdots \ t^{j}_1$ to the second component whenever $t^j_1=r^j_1-1$, but we can also move a prefix $t^{j}_j \cdots t^j_{j-s+1}$ whenever $t^j_{j-s+1}=r^j_1-1$. Consequently, the word $t^{j}_j\cdots \ t^{j}_1$ should not contain $r^j_1-1$, which is equivalent to $t^{j}_j\in\{r^j_1,r^j_1+1,\ldots,r^j_1+m-j-1\}$ using the fact that the word is made of consecutive letters. So we have $m(m-j)$ choices. The total is thus given by \[ m+\sum_{j=1}^{m-1} m(m-j)=\frac{1}{2} \left(m^3-m^2+2 m\right), \] and this contribution is doubled to account for the symmetric case of $P^{(n)}_\mu$. As a conclusion, when $\mu\neq 0$, i.e., if $n\not\equiv 0\pmod{m^k}$, then \begin{eqnarray*} \#\left\{\left(p_{_{U}},s_{_{U}}\right) \mid \, U\in \Fac_n(\infw{t}_m)\right\}/\equiv_k&=& (m^{k-1}-2)(m^2+m(m-1)^2)+m^3-m^2+2 m \\ &=& (m^{k-1}-1)(m^3-m^2+ m)+ m. \end{eqnarray*} $\bullet$ {\bf Case 2)} Let $\mu=0$. If $\ell\neq 0$, then from \cref{tab:summary} we get $\ell'=m^{k-1}-\ell\neq 0$. Then, we have the same discussion as in our first case. The $m^{k-1}-1$ sets $P_\ell^{(n)}$ for $\ell=1,\ldots,m^{k-1}-1$ contain $m^2+m(m-1)^2$ classes (we get the same main term in the expression). If $\ell=0$, then $\ell'=0$. Here, the particularity of the single set $P_0^{(n)}$ is that in \cref{tab:pnl} the words $x^j$ and $y^j$ are both empty.
So we only consider pairs $(p_{_{U}},s_{_{U}})$ of the form $(\smn{k-1}(p'),\smn{k-1}(s'))$ with $|p'|,|s'|<m$ and $|p's'|=\nu$ or $m+\nu$. We will show that \[ \#(P_0^{(\nu\, m^{k-1})}/\equiv_k)= \#(\Fac_{2m+\nu}(\infw{t}_m)/\sim_1). \] Thanks to \cref{pro:circular}, any factor $x$ of length $2m+\nu$ has a unique factorization of the form \[ x=p_x\sm(w)s_x \text{ with } |p_x|,|s_x|<m \text{ and } |w|\in\{1,2\}. \] Thanks to \cref{lem:boundaryseq}, a pair $(p_{_{U}},s_{_{U}})=(\smn{k-1}(p'),\smn{k-1}(s'))$ belongs to $P_0^{(\nu\, m^{k-1})}$ if and only if $(p',s')$ is of the form $(p_x,s_x)$ for some $x$ in $\Fac_{2m+\nu}(\infw{t}_m)$. Let $x,y\in \Fac_{2m+\nu}(\infw{t}_m)$, with corresponding factorizations $x=p_x\sm(w)s_x$ and $y=p_y\sm(w')s_y$. If $x\sim_1 y$ and $|p_xs_x|=|p_ys_y|$, then $|w|=|w'|$ and thus $\sm(w)\sim_1\sm(w')$. So $p_xs_x\sim_1 p_ys_y$ and we get $$(\smn{k-1}(p_x),\smn{k-1}(s_x))\equiv_k (\smn{k-1}(p_y),\smn{k-1}(s_y)).$$ If $x\sim_1 y$ but $|p_xs_x|\neq |p_ys_y|$, then the difference of their lengths is $m$. We may assume that $|p_xs_x|=|p_ys_y|+m$, so $|w|=1$ and $|w'|=2$. Since $\sm(a)$ is a circular permutation of $01\cdots (m-1)$, we deduce that $p_xs_x\sim_1 p_ys_y\sm(0)$ and the same conclusion follows. The converse also holds: if $x,y\in \Fac_{2m+\nu}(\infw{t}_m)$ and $(\smn{k-1}(p_x),\smn{k-1}(s_x))\equiv_k (\smn{k-1}(p_y),\smn{k-1}(s_y))$, then considering both situations, one concludes that $x\sim_1 y$. It is known that the abelian complexity function of $\infw{t}_m$ is periodic with period $m$ for lengths at least $m$, see~\cite{ChenWen2019}. Hence, \[ \#\left(\Fac_{2m+\nu}(\infw{t}_m)/\sim_1\right)=\#\left(\Fac_{m+\nu}(\infw{t}_m)/\sim_1\right). \] \end{proof} \section{Characterizing Binomial Equivalence in \texorpdfstring{$\infw{t}_m$}{tm}}\label{sec:characterizing} In this section, we focus on characterizing $k$-binomial equivalence among factors of $\infw{t}_m$ through their $\smn{k-1}$-factorizations.
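To fix ideas, here is a small illustration of such a factorization for $m=2$ and $k=2$, where $\sm$ is the Thue--Morse morphism $0\mapsto 01$, $1\mapsto 10$ (one checks that $1011$ is indeed a factor of $\infw{t}_2$):

```latex
% For m = 2, sigma_2 is the Thue-Morse morphism 0 -> 01, 1 -> 10.
% The factor 1011 of t_2 admits the sigma_2-factorization 1011 = 1 . sigma_2(0) . 1,
% where p_U = 1 is a proper suffix of sigma_2(0) = 01 and
% s_U = 1 is a proper prefix of sigma_2(1) = 10.
\[
  1011 \;=\; \underbrace{1}_{p_{_{U}}}\,\underbrace{01}_{\sm(0)}\,\underbrace{1}_{s_{_{U}}}.
\]
```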
We recall the main result: \conclusionfinalgeneralization* We observe that this proposition extends \cite[Thm.~2]{ChenWen2024} by removing the additional assumption $|u|,|v| \geqslant 3$ and extending the result to all $k\geqslant 2$. To prove the main characterization, we first establish the following restricted version. \begin{lemma}\label{lem:k-1-factorisation-first-last} Let $k\geqslant 2$ and $U$ and $V$ be factors of~$\infw{t}_m$ for some $m\geqslant 2$. Assume further that $U$ and $V$ begin and end with distinct letters. Then $U \sim_k V$ if and only if there exist $\smn{k-1}$-factorizations $U = \smn{k-1}(u)$ and $V = \smn{k-1}(v)$ such that $u \sim_1 v$. \end{lemma} Before diving into the proof of \cref{lem:k-1-factorisation-first-last}, let us observe how \cref{prop:conclusion-final-generalization} follows from it. First, we obtain \cref{thm:main1short} as an immediate corollary of \cref{lem:k-1-factorisation-first-last}. \shortlengths* \begin{proof} The factors in a shortest pair of distinct $k$-binomially equivalent factors necessarily begin and end with different letters, due to $k$-binomial equivalence being cancellative (cf.~\cref{lem:cancel}). \cref{lem:k-1-factorisation-first-last} thus shows that the pair of factors can be written in the form $\smn{k-1}(u)$ and $\smn{k-1}(v)$ with $u \sim_1 v$. Therefore, $|u| = |v| \geqslant 2$ (since they must begin and end with different letters), giving the lower bound. The pair $\smn{k-1}(01)$ and $\smn{k-1}(10)$, for example, gives the desired pair of length $2m^{k-1}$. \end{proof} We can now prove \cref{prop:conclusion-final-generalization}. \begin{proof}[Proof of \cref{prop:conclusion-final-generalization}] Let $k \geqslant 2$ be arbitrary. If $U$ and $V$ have the $\smn{k-1}$-factorizations $U = p_{_{U}} \smn{k-1}(u) s_{_{U}}$ and $V = p_{_{V}} \smn{k-1}(v)s_{_{V}}$, where $p_{_{U}} = p_{_{V}}$, $s_{_{U}} = s_{_{V}}$, and $u\sim_1 v$, then $U \sim_k V$ follows by \cref{pro:both_dir} and the fact that $\sim_k$ is a congruence.
For the converse, assume $U \sim_k V$. There is nothing to prove if $U = V$, as all factors have a $\smn{k-1}$-factorization by \cref{rem:all-factors-smk-factorization}. So assume $U \neq V$. Write $U = pU's$ and $V = pV's$, where $U'$ and $V'$ begin and end with distinct letters. By cancellativity (\cref{lem:cancel}), we have $U' \sim_k V'$. By \cref{lem:k-1-factorisation-first-last}, there exist $\smn{k-1}$-factorizations $U' = \smn{k-1}(u')$ and $V' = \smn{k-1}(v')$, where $u' \sim_1 v'$. Note that \cref{thm:main1short} implies $|U'|, |V'| \geqslant 2m^{k-1}$. By \cref{cor:unique-factorization-bound}, these $\smn{k-1}$-factorizations are unique. It follows that $U$ and $V$ have the desired (unique) $\smn{k-1}$-factorizations $U = p\smn{k-1}(u')s$ and $V = p\smn{k-1}(v')s$, where $u' \sim_1 v'$. \end{proof} The proof of \cref{lem:k-1-factorisation-first-last} proceeds by induction on $k$. We divide the remainder of the section into two subsections: the base case $k=2$, handled in the first subsection, and the induction step, covered in the second. We observe that the base case $k=2$ is almost handled by \cite[Thm.~2]{ChenWen2024}, except that the additional assumption $|u|$, $|v| \geqslant 3$ appearing there needs to be removed. Although the cases where $|u|$, $|v| \leqslant 3$ could be treated separately, we provide a complete and independent (though similar) proof of the case $k=2$, as it reveals our strategy for tackling the induction step. \subsection{The base case} We shall state the induction base case as a separate lemma: \begin{lemma}\label{lem:generalised-k=2-distinct-extremal-letters} Let $U$ and $V$ be factors of $\infw{t}_m$ that begin and end with distinct letters. Then $U \sim_2 V$ if and only if there exist $\sm$-factorizations $U = \sm(u)$ and $V = \sm(v)$, such that $u \sim_1 v$. \end{lemma} \begin{proof} If such $\sm$-factorizations exist for $U$ and $V$, then the two words are $2$-binomially equivalent by \cref{pro:both_dir}.
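For instance, with $m=2$ (so that $\sm$ is the Thue--Morse morphism), taking $u = 01$ and $v = 10$ gives $U = \sm(01) = 0110$ and $V = \sm(10) = 1001$; these begin and end with distinct letters, and a direct computation confirms $U \sim_2 V$:

```latex
% Binomial coefficients of U = 0110 and V = 1001 over all words e of length <= 2;
% the two rows of values coincide, so 0110 ~_2 1001.
\[
  \begin{array}{c|cccccc}
    e & 0 & 1 & 00 & 01 & 10 & 11\\
    \hline
    \binom{0110}{e} & 2 & 2 & 1 & 2 & 2 & 1\\[2pt]
    \binom{1001}{e} & 2 & 2 & 1 & 2 & 2 & 1
  \end{array}
\]
```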
Assume that $U$ and $V$ are $2$-binomially equivalent factors, beginning and ending with distinct letters. Let $U$ and $V$ have the $\sm$-factorizations $p_{_{U}} \sm(u) s_{_{U}}$ and $p_{_{V}} \sm(v) s_{_{V}}$, respectively (such factorizations exist by \cref{rem:all-factors-smk-factorization}). Notice that $\left||u|-|v|\right| \leqslant 1$ due to length constraints. W.l.o.g., we assume that $|u| \leqslant |v|$. \textbf{First, assume that $|u| = |v|$.} If both $s_{_{U}}$ and $s_{_{V}}$ are empty, it follows that $p_{_{U}}\sm(u) \sim_1 p_{_{V}} \sm(v)$. Since $\sm(u) \sim_1 \sm(v)$, we conclude that $p_{_{U}} \sim_1 p_{_{V}}$. This further implies $p_{_{U}} = \varepsilon = p_{_{V}}$, as $U$ and $V$ start with distinct letters, and $p_{_{U}}$ and $p_{_{V}}$ are proper suffixes of images of letters. By \cref{pro:both_dir}, it follows that $u \sim_1 v$, thereby establishing the claimed factorizations. Thus, we proceed under the assumption that at least one of the words $s_{_{U}}$ and $s_{_{V}}$ is non-empty, aiming at a contradiction. W.l.o.g., we assume that $s_{_{U}}$ is non-empty. Now, let $\alpha-1$ denote the last letter of $s_{_{U}}$. By assumption, we have $\binom{U}{\alpha (\alpha-1)} = \binom{V}{\alpha (\alpha-1)}$; applying \cref{lem:binomial2} twice, we obtain \[ \begin{aligned} \binom{p_{_{U}} s_{_{U}}}{\alpha (\alpha-1)} +& \binom{\sm(u)}{\alpha (\alpha-1)} + |u|\left(|p_{_{U}}|_{\alpha} + |s_{_{U}}|_{\alpha-1}\right) \\ = &\binom{p_{_{V}} s_{_{V}}}{\alpha (\alpha-1)} + \binom{\sm(v)}{\alpha (\alpha-1)} + |v|\left(|p_{_{V}}|_{\alpha} + |s_{_{V}}|_{\alpha-1}\right). \end{aligned} \] Observe that $|p_{_{W}}|_{\alpha} = |p_{_{W}}s_{_{W}}|_{\alpha} - |s_{_{W}}|_{\alpha}$, where $W$ is either $U$ or $V$. Moreover, we have $|s_{_{U}}|_{\alpha-1} = 1$ and $|s_{_{U}}|_{\alpha} = 0$.
Substituting these values into the previous equation yields \begin{multline*} \binom{p_{_{U}} s_{_{U}}}{\alpha (\alpha-1)} + \binom{\sm(u)}{\alpha (\alpha-1)} + |u|\left(|p_{_{U}}s_{_{U}}|_{\alpha} + 1\right)\\ = \binom{p_{_{V}} s_{_{V}}}{\alpha (\alpha-1)} + \binom{\sm(v)}{\alpha (\alpha-1)} + |v|\left(\left|p_{_{V}} s_{_{V}}\right|_{\alpha} - |s_{_{V}}|_{\alpha} + |s_{_{V}}|_{\alpha-1}\right). \end{multline*} The terms $|u||p_{_{U}} s_{_{U}}|_{\alpha}$ and $|v||p_{_{V}} s_{_{V}}|_{\alpha}$ cancel because $|u| = |v|$ and the equivalence $U\sim_2 V$ implies $p_{_{U}} s_{_{U}} \sim_1 p_{_{V}} s_{_{V}}$. By \cref{lem:subwords}, the subword $\alpha(\alpha-1)$ appears exclusively in $\sm(\alpha)$, implying that $\binom{\sm(u)}{\alpha (\alpha-1)}=|u|_\alpha$. Rearranging this equation yields the following equality \begin{equation}\label{eq:equal-length-2bin} |u|_{\alpha} + |v|\left(|s_{_{V}}|_{\alpha}-|s_{_{V}}|_{\alpha - 1}\right) = |v|_{\alpha} - |u| + \binom{p_{_{V}} s_{_{V}}}{\alpha(\alpha-1)} - \binom{p_{_{U}} s_{_{U}}}{\alpha(\alpha-1)}. \end{equation} \begin{claim} \begin{enumerate} \item[1)] The left-hand side of \eqref{eq:equal-length-2bin} is non-negative. Furthermore, it is equal to $0$ if and only if either $u = v = \varepsilon$, or $|u|_{\alpha} = 0$ and $|s_{_{V}}|_{\alpha} = |s_{_{V}}|_{\alpha-1}$. \item[2)] The right-hand side of \eqref{eq:equal-length-2bin} is non-positive. Moreover, it equals $0$ if and only if $|v|_{\alpha} = |v|$ and $\alpha$ does not appear in $p_{_{U}}s_{_{U}}$. \end{enumerate} \end{claim} \begin{claimproof} Consider the first claim. Note that the left-hand side can only be negative if $|s_{_{V}}|_{\alpha-1} > |s_{_{V}}|_{\alpha}$. However, this situation cannot occur: if $\alpha-1$ appears in $s_{_{V}}$, then, since $s_{_{V}}$ does not end with $\alpha-1$, this occurrence must be followed by $\alpha$. Consequently, the coefficient of $|v|$ is non-negative, showing the non-negativity of the left-hand side.
To attain a value of $0$, we must have either $u = v = \varepsilon$, or $|u|_{\alpha} = 0$ and $|s_{_{V}}|_{\alpha} = |s_{_{V}}|_{\alpha-1}$. Let us consider the second claim. If $\alpha$ does not appear in $p_{_U}s_{_{U}}$, then \[ \binom{p_{_V}s_{_{V}}}{\alpha(\alpha-1)} = 0 = \binom{p_{_U}s_{_{U}}}{\alpha(\alpha-1)}. \] Consequently, the right-hand side is equal to $|v|_{\alpha} - |v|$, which is clearly non-positive, and it is equal to $0$ if and only if $|v|_{\alpha} = |v|$. If $\alpha$ appears in $p_{_U}s_{_{U}}$, it must occur in $p_{_U}$ and does so precisely once. Since $\alpha-1$ does not appear in $p_{_U}$ after $\alpha$, we have $\binom{p_{_U} s_{_U}}{\alpha(\alpha-1)} = 1$. Next, consider the occurrences of $\alpha-1$ and $\alpha$ in $p_{_V} s_{_V}$. Note that $\alpha$ cannot precede $\alpha-1$ in $p_{_V}$ or $s_{_V}$. If $\alpha-1$ appears in $s_{_V}$ then, because $s_{_V}$ does not end with $\alpha-1$, it must be followed by $\alpha$ in $s_{_V}$. Thus, we conclude that $\binom{p_{_V} s_{_V}}{\alpha(\alpha-1)} = 0$. Hence, the right-hand side equals $|v|_{\alpha} - |v| - 1$, which is strictly negative. The desired conclusion thereby follows. \end{claimproof} The above claim shows that \eqref{eq:equal-length-2bin} can only be satisfied when both the left-hand side and the right-hand side are equal to zero. In other words, $\alpha$ must not appear in $p_{_U}s_{_U}$ (and consequently not in $p_{_V}s_{_V}$) and either: (a) $u=v=\varepsilon$; or (b) $|u|_{\alpha} = 0$, $|s_{_V}|_{\alpha-1} = |s_{_V}|_{\alpha} = 0$, and $|v|_{\alpha} = |v|$. Note that $p_{_V}$ must contain $\alpha-1$, which corresponds to the occurrence of $\alpha-1$ as the last letter of $s_{_U}$, and thus $p_{_V}$ must end with $\alpha-1$; otherwise, it would contain $\alpha$ immediately following $\alpha-1$. This situation is illustrated in \cref{fig:facsm1}. Since $|u|_\alpha=0$, the image of each letter of $u$ under $\sm$ contains the factor $(\alpha-1)\alpha$.
Since $v=\alpha^{|v|}$, the image of each letter of $v$ under $\sm$ begins with $\alpha$ and ends with $\alpha-1$. \begin{figure}[h!t] \centering \begin{tikzpicture} \draw[yscale=.4] (0,0) -- (12,0) -- (12,1) -- (0,1) -- (0,0); \draw[yscale=.4] (1,1) -- (1,0); \draw[yscale=.4] (3.5,1) -- (3.5,0); \draw[yscale=.4] (8,1) -- (8,0); \draw[yscale=.4] (10.5,1) -- (10.5,0); \draw[yscale=.4] (12,1) -- (12,0); \node at (-.5,.2) {$U$}; \node at (.5,.7) {$p_{_U}$}; \node at (2,.2) {\small{$(\alpha-1)\alpha$}}; \draw[decoration={brace},decorate] (1,.5) -- (10.5,.5) node[above, midway] {$\sm(u)$}; \node at (5.75,.2) {$\cdots$}; \node at (9,.2) {\small{$(\alpha-1)\alpha$}}; \node at (11,.7) {$s_{_{U}}$}; \node at (11.45,.2) {\small{$(\alpha-1)$}}; \begin{scope}[yshift = -25] \draw[yscale=.4] (0,0) -- (12,0) -- (12,1) -- (0,1) -- (0,0); \draw[yscale=.4] (1.5,1) -- (1.5,0); \draw[yscale=.4] (4,1) -- (4,0); \draw[yscale=.4] (8.5,1) -- (8.5,0); \draw[yscale=.4] (11,1) -- (11,0); \draw[yscale=.4] (12,1) -- (12,0); \node at (-.5,.2) {$V$}; \node at (.5,-.3) {$p_{_{_V}}$}; \node at (1.10,.2) {\small{$(\alpha-1)\alpha$}}; \node at (3.45,.2) {\small{$(\alpha-1)$}}; \draw[decoration={brace,mirror},decorate] (1.5,-.1) -- (11,-.1) node[below, midway] {$\sm(\alpha^{|v|})$}; \node at (6.25,.2) {$\cdots$}; \node at (8.62,.2) {\small{$\alpha$}}; \node at (10.45,.2) {\small{$(\alpha-1)$}}; \node at (11.5,-.3) {$s_{_{V}}$}; \end{scope} \end{tikzpicture} \caption{Illustrating the situation $|u|=|v|$ and $s_{_{_U}}$ or $s_{_{V}}$ non-empty.}\label{fig:facsm1} \end{figure} Consider the sum \[ \sum_{x \in \am} \binom{U}{(\alpha-1) x} + \binom{U}{x \alpha} - \binom{V}{(\alpha-1) x} - \binom{V}{x \alpha}, \] which equals zero, based on the assumption that $U \sim_2 V$. Observe that \(\sum_{x \in \am}\binom{U}{(\alpha-1)x}\) counts, for each occurrence of $(\alpha-1)$ in $U$, the number of letters to its right. 
Similarly, \( \sum_{x \in \am}\binom{U}{x\alpha} \) counts, for each occurrence of $\alpha$ in $U$, the number of letters to its left. With this interpretation, the “positive” part of the sum is equal to $|u|\cdot |U|$. Each of the $|u|$ occurrences of the factor $(\alpha-1)\alpha$ contributes $|U|$ to the positive count, while the last occurrence of $\alpha-1$ contributes zero. Similarly, the negative part of the sum is equal to $-|v|\cdot |V| - |s_{_{V}}|$. Each of the $|v|$ occurrences of the factor $(\alpha-1)\alpha$ contributes $-|V|$ to the negative count, while the last occurrence of $\alpha-1$ contributes $-|s_{_{V}}|$. Since the sum must equal zero, we conclude that $s_{_{V}} = \varepsilon$. However, now $V$ ends with $\alpha-1$: if $v \neq \varepsilon$, then $\sm(v)$ ends with $\alpha-1$, and if $v = \varepsilon$, then $p_{_{V}} = V$ ends with $\alpha - 1$. This contradicts the assumption that $U$ and $V$ end with distinct letters, so we have reached a contradiction when $|u| = |v|$ and at least one of the words $s_{_{U}}$ and $s_{_{V}}$ is non-empty. \textbf{Second, assume that $|u|+1 = |v|$.} We will show that this case is impossible as it leads to a contradiction. In this situation, $s_{_{U}}$ must be non-empty (as must $p_{_{U}}$), since $|p_{_{U}}s_{_{U}}| = |p_{_{V}}s_{_{V}}| + m$ and $|p_{_{U}}|$, $|s_{_{U}}| < m$. Let $\alpha-1$ be the last letter of $s_{_{U}}$. Let $\beta$ be the first letter of $v = \beta v'$, where $|v'| = |u|$, and let $p_{_{V}}' = p_{_{V}}\sm(\beta)$. Note that $p_{_{U}}s_{_{U}}\sim_1 p_{_{V}}'s_{_{V}}$. As before, we have \[ \binom{U}{\alpha (\alpha-1)} = \binom{V}{\alpha (\alpha-1)}. \] Using similar techniques as in the previous case, the equality can be expressed equivalently as \[ |u|_{\alpha} + |v'|\left(|s_{_{V}}|_{\alpha} - |s_{_{V}}|_{\alpha-1}\right) = |v'|_{\alpha} - |u| + \binom{p_{_{V}}' s_{_{V}}}{\alpha (\alpha-1)} - \binom{p_{_{U}} s_{_{U}}}{\alpha (\alpha-1)}.
\] We may proceed similarly as in the previous case. It is clear that the left-hand side is non-negative, and it equals zero if and only if either $u = v' = \varepsilon$ or $|s_{_{V}}|_{\alpha} = |s_{_{V}}|_{\alpha-1}$ and $|u|_{\alpha} = 0$. \begin{claim} The right-hand side is non-positive and, moreover, equals zero if and only if $v' = \alpha^i$ and $\beta = \alpha$. \end{claim} \begin{claimproof} To begin, we show that $\binom{p_{_{U}} s_{_{U}}}{\alpha(\alpha - 1)} = 1$. Since $\alpha$ appears in $p_{_{V}}'s_{_{V}}$ (in $\sm(\beta)$), it must also appear in $p_{_{U}}s_{_{U}}$; since it does not appear in $s_{_{U}}$, it appears in $p_{_{U}}$. Furthermore, there is exactly one occurrence of $\alpha$ in $p_{_{U}} s_{_{U}}$. It should be noted that in $p_{_{U}}$, the letter $\alpha - 1$ (if it appears at all) can only occur immediately before $\alpha$, since $|p_{_{U}}|<m$ and $p_{_{U}}$ is made of consecutive letters. Hence, there is only one occurrence of the subword $\alpha (\alpha-1)$, as desired. Next, we consider $\binom{p_{_{V}}'s_{_{V}}}{\alpha (\alpha-1)}$. Observe that $s_{_{V}}$ does not contain $\alpha-1$; if it did, then it would be followed by a second occurrence of $\alpha$ in $p_{_{V}}'s_{_{V}}$ since it cannot end with $\alpha-1$, resulting in a contradiction. Since $\alpha$ appears in $\sm(\beta)$ within $p_{_{V}}'s_{_{V}}$ (and only once), we conclude that $\binom{p_{_{V}}'s_{_{V}}}{\alpha (\alpha-1)} = 1$ if and only if $\beta = \alpha$. Otherwise, $\binom{p_{_{V}}'s_{_{V}}}{\alpha(\alpha-1)} = 0$. Consequently, the right-hand side is non-positive and equals $0$ if and only if $|v'| = |v'|_{\alpha}$ and $\beta = \alpha$. \end{claimproof} For the equation above to be satisfied, we must have $|u|_{\alpha} = 0$, $v' = \alpha^i$ for some $i \geqslant 0$, and $\beta = \alpha$. Additionally, we have established that $|s_{_{V}}|_{\alpha} = |s_{_{V}}|_{\alpha -1} = 0$, regardless of whether $u=\varepsilon$ or not.
It should be noted that if $\alpha-1$ appears for a second time in $p_{_{U}}s_{_{U}}$, it must occur just before $\alpha$ in $p_{_{U}}$ and as the last letter of $p_{_{V}}$; otherwise $p_{_{V}}'s_{_{V}}$ would contain a second occurrence of $\alpha$. If $\alpha-1$ appears only once in $p_{_{U}}s_{_{U}}$, then $p_{_{U}}$ begins with $\alpha$. \cref{fig:facsm2} illustrates the situation (the possible occurrences of $\alpha-1$ in $p_{_{U}}$ and $p_{_{V}}$ are not shown). \begin{figure}[h!t] \centering \begin{tikzpicture} \draw[yscale=.4] (0,0) -- (12,0) -- (12,1) -- (0,1) -- (0,0); \draw[yscale=.4] (2,1) -- (2,0); \draw[yscale=.4] (4.5,1) -- (4.5,0); \draw[yscale=.4] (7.5,1) -- (7.5,0); \draw[yscale=.4] (10,1) -- (10,0); \draw[yscale=.4] (12,1) -- (12,0); \node at (-.5,.2) {$U$}; \node at (.5,.7) {$p_{_{U}}$}; \node at (1,.2) {\small{$\alpha$}}; \node at (3,.2) {\small{$(\alpha-1)\alpha$}}; \draw[decoration={brace},decorate] (2,.5) -- (10,.5) node[above, midway] {$\sm(u)$}; \node at (6,.2) {$\cdots$}; \node at (8.7,.2) {\small{$(\alpha-1)\alpha$}}; \node at (11,.7) {$s_{_{U}}$}; \node at (11.45,.2) {\small{$(\alpha-1)$}}; \begin{scope}[yshift = -35] \draw[yscale=.4] (0,0) -- (12,0) -- (12,1) -- (0,1) -- (0,0); \draw[yscale=.4] (1,1) -- (1,0); \draw[yscale=.4] (3.5,1) -- (3.5,0); \draw[yscale=.4] (9,1) -- (9,0); \draw[yscale=.4] (11.5,1) -- (11.5,0); \draw[yscale=.4] (12,1) -- (12,0); \node at (-.5,.2) {$V$}; \node at (.5,-.3) {$p_{_{V}}$}; \node at (1.14,.2) {\small{$\alpha$}}; \node at (2.97,.2) {\small{$(\alpha-1)$}}; \node at (3.65,.2) {\small{$\alpha$}}; \draw[decoration={brace,mirror},decorate] (1,-.1) -- (3.5,-.1) node[below, midway] {$\sm(\beta)$}; \draw[decoration={brace,mirror},decorate] (3.5,-.1) -- (11.5,-.1) node[below, midway] {$\sm(v')$}; \draw[decoration={brace},decorate] (0,.5) -- (3.5,.5) node[above, midway] {$p_v'$}; \node at (7,.2) {$\cdots$}; \node at (9.12,.2) {\small{$\alpha$}}; \node at (11,.2) {\small{$(\alpha-1)$}}; \node at (11.8,-.3) 
{$s_{_{V}}$}; \end{scope} \end{tikzpicture} \caption{Illustrating the situation $|u|+1=|v|$.}\label{fig:facsm2} \end{figure} Consider now the sum \[ \sum_{x \in \am} \binom{U}{(\alpha-1) x} + \binom{U}{x \alpha} - \binom{V}{(\alpha-1) x} - \binom{V}{x \alpha}, \] which is equal to $0$ due to the assumption that $U \sim_2 V$. If $\alpha-1$ does not appear in $p_{_{U}}$ (and thus not in $p_{_{V}}$), then the positive side equals $|u||U|$; recall that $p_{_{U}}$ begins with $\alpha$ in this case. The negative side equals $-|v'||V|-|p_{_{V}}s_{_{V}}|$. This implies that $p_{_{V}}s_{_{V}} = \varepsilon$. But then $V = \sm(\alpha^{i+1})$ ends with $\alpha-1$, a contradiction. If $\alpha-1$ does appear in $p_{_{U}}$ and $p_{_{V}}$, then the positive side is equal to $\left(|u|+1\right)|U|$ whereas the negative side is equal to $-\left(|v'|+1\right)|V| - |s_{_{V}}|$. Hence, $s_{_{V}} = \varepsilon$, and again $V$ ends with $\alpha-1$. This shows that the case where $|u|+1 = |v|$ is impossible. We have shown that the only possible way for $U \sim_2 V$ to hold is by having the claimed $\sm$-factorizations, thus completing the proof. \end{proof} \subsection{The induction step} \begin{proof}[Proof of \cref{lem:k-1-factorisation-first-last}] Suppose the two factors $U$ and $V$ possess the $\smn{k-1}$-factorizations $U = \smn{k-1}(u)$ and $V = \smn{k-1}(v)$, where $u\sim_1 v$. In that case, they are $k$-binomially equivalent, as stated in \cref{prop:phik}. We consider the converse claim by induction on $k$, starting with the base case $k=2$, which is addressed by \cref{lem:generalised-k=2-distinct-extremal-letters}. Assume that the claim holds for some $k \geqslant 2$, and consider $U \sim_{k+1} V$ with $U \neq V$, beginning and ending with distinct letters.
Suppose $U$ and $V$ have $\smn{k}$-factorizations of the form $p_{_{U}} \smn{k}(u) s_{_{U}}$ and $p_{_{V}} \smn{k}(v) s_{_{V}}$, respectively, where $|u|$, $|v| \geqslant 0$ (note that such factorizations are guaranteed by \cref{rem:all-factors-smk-factorization}). By factoring out full $\smn{k-1}$-images from $p_{_{U}}$, $s_{_{U}}$, $p_{_{V}}$, and $s_{_{V}}$, we obtain the corresponding $\smn{k-1}$-factorizations of the form \[ U = p_{_{U}}' \smn{k-1}\left(\gamma_u\sm(u)\delta_u\right) s_{_{U}}' \quad \text{and} \quad V = p_{_{V}}' \smn{k-1}\left(\gamma_v\sm(v)\delta_v\right) s_{_{V}}', \] where $p_w = p'_w\smn{k-1}(\gamma_w)$ and $s_w = \smn{k-1}(\delta_w)s'_w$ for $w \in \{u,v\}$. Since $U \sim_{k+1} V$ implies $U \sim_k V$, the induction hypothesis yields $p_w's_w' = \varepsilon$ for $w \in \{u,v\}$. Furthermore, $\gamma_u \sm(u) \delta_u \sim_1 \gamma_v \sm(v) \delta_v$, and the words $U$ and $V$ begin and end with distinct letters. \textbf{First, assume that $|u| = |v|$.} Then $\gamma_u \delta_u \sim_1 \gamma_v \delta_v$. If both $\delta_u$ and $\delta_v$ are empty, it follows that $\gamma_u \sim_1 \gamma_v$. Since $\gamma_u$ and $\gamma_v$ are suffixes of $\sm$-images of letters, they must be equal. Moreover, since $U$ and $V$ begin with distinct letters, this implies that $\gamma_u = \gamma_v = \varepsilon$. Thus, we have $U = \smn{k}(u)$ and $V = \smn{k}(v)$, confirming the claimed $\smn{k}$-factorizations by \cref{pro:both_dir}. We now proceed to the case where either $\delta_u$ or $\delta_v$ is non-empty. W.l.o.g., we may assume that $\delta_u \neq \varepsilon$, and let $\alpha-1$ denote its final letter. In particular, $\alpha$ does not occur in $\delta_u$. We can apply \cref{lem:bigdiff} to $\smn{k-1}(\gamma_u \sm(u)\delta_u)$ and $\smn{k-1}(\gamma_v \sm(v) \delta_v)$, using $\alpha(\alpha-1)\cdots$ in place of $0\overline{1} \cdots$.
Since these two words are assumed to be $(k+1)$-binomially equivalent, we obtain, after dividing by $m^{\binom{k}{2}-1}$, \begin{multline*} 0 = m\biggl[ |u|_\alpha-|v|_\alpha + |u|\, (|\gamma_u|_\alpha-|\gamma_v|_\alpha +|\delta_u|_{\alpha-1} - |\delta_v|_{\alpha-1} )\biggr] \\+ \sum_{b\in\am}\left( \binom{\gamma_u\delta_u}{b(\alpha-1)}-\binom{\gamma_v\delta_v}{b(\alpha-1)}+ \binom{\gamma_u\delta_u}{\alpha b}-\binom{\gamma_v\delta_v}{\alpha b}\right). \end{multline*} By observing that $|\gamma_w|_\alpha = |\gamma_w\delta_w|_\alpha - |\delta_w|_\alpha$, where $w \in \{u,v\}$, and recalling that $|\delta_u|_{\alpha-1}=1$, $|\delta_u|_\alpha=0$, and $|u|=|v|$, we can simplify the first term as follows \[ \begin{aligned} m \bigl[ |u|_{\alpha} - |v|_{\alpha} + |u|(|\delta_{u}|_{\alpha - 1} - |\delta_u|_{\alpha} -|\delta_v|_{\alpha-1} + |\delta_v|_{\alpha})\bigr] \\ = m\bigl[|u|_{\alpha} + |u|(|\delta_v|_{\alpha} - |\delta_v|_{\alpha-1})\bigr] + m(|v|-|v|_{\alpha}). \end{aligned} \] Let us define $\Delta_{\alpha}$ as \[\Delta_{\alpha}:= \sum_{b\in\am}\left( \binom{\gamma_v\delta_v}{b(\alpha-1)}+\binom{\gamma_v\delta_v}{\alpha b}-\binom{\gamma_u\delta_u}{b(\alpha-1)}- \binom{\gamma_u\delta_u}{\alpha b}\right).\] Rearranging the previous equation, we obtain \begin{equation}\label{eq:simplified-bigdiff-equal-length} m\bigl[|u|_{\alpha} + |u|\left(|\delta_v|_{\alpha} - |\delta_v|_{\alpha-1}\right)\bigr] = m(|v|_{\alpha}-|v|) + \Delta_{\alpha}. \end{equation} Notice that the left-hand side is non-negative: the only way it could be negative is if $\alpha-1$ appeared in $\delta_v$; however, since $\delta_v$ does not end with $\alpha-1$ (as $\delta_u$ ends with it), any such occurrence of $\alpha-1$ must be followed by $\alpha$. Furthermore, the left-hand side is equal to zero if and only if $|u|_{\alpha} = 0$ and $|\delta_v|_{\alpha} = |\delta_v|_{\alpha-1}$. Next, we show that the right-hand side is non-positive.
Indeed, since $m\left(|v|_\alpha - |v|\right)$ is non-positive, it is sufficient to show that the sum $\Delta_{\alpha}$ is also non-positive. \begin{claim} The value of $\Delta_{\alpha}$ is $-|\delta_v|$ if and only if $|\gamma_u\delta_u|_{\alpha-1} = |\gamma_u\delta_u|_{\alpha} + 1$. Otherwise, $\Delta_\alpha = -|\gamma_u\delta_u|$. Moreover, in the former case, $\alpha - 1$ is the last letter of $\gamma_v$. \end{claim} \begin{claimproof} We first observe that \[ \sum_{b \in \am} \binom{x}{\alpha b} \] counts, for each occurrence of the letter $\alpha$ in the word $x$, the number of letters that occur to its right. Similarly, \[ \sum_{b \in \am} \binom{x}{b(\alpha-1)} \] counts, for each occurrence of $\alpha-1$, the number of letters occurring to its left. We then consider the occurrences of $\alpha$ and $\alpha-1$ in the two words $\gamma_u\delta_u$ and $\gamma_v\delta_v$, as well as their contributions to the sum $\Delta_\alpha$. Notice that since $\delta_u$ does not contain $\alpha$, there is at most one occurrence of $\alpha$ in $\gamma_u\delta_u$. Furthermore, there can be at most two occurrences of $\alpha-1$. The contribution from $\alpha-1$, as the last letter of $\delta_u$, to the term $-\binom{\gamma_u\delta_u}{b(\alpha-1)}$ results in a value of $-|\gamma_u\delta_u|+1$ to $\Delta_\alpha$. \begin{itemize} \item First, assume that $|\gamma_u\delta_u|_{\alpha-1} = 1$. We proceed by dividing this into additional subcases, considering whether $\alpha-1$ appears in $\delta_v$ or not. \begin{itemize} \item If $\delta_v$ contains $\alpha-1$, then this occurrence must be followed by $\alpha$. These two occurrences provide $|\gamma_v\delta_v|-2$ towards $\Delta_\alpha$. Now, $\alpha$ must appear in $\gamma_u$, whereas $\alpha-1$ should not. This situation occurs only if $\gamma_u$ starts with $\alpha$, as it is a suffix of the image of a letter (as depicted in \cref{fig:facsit1}).
\begin{figure}[h!t] \centering \begin{tikzpicture} \draw[yscale=.4] (0,0) -- (6,0) -- (6,1) -- (0,1) -- (0,0); \draw[yscale=.4] (3.5,1) -- (3.5,0); \node at (1.25,.7) {$\gamma_u$}; \node at (0.2,.2) {\small{$\alpha$}}; \node at (5.45,.2) {\small{$(\alpha-1)$}}; \node at (4.5,.7) {$\delta_u$}; \begin{scope}[yshift = -25] \draw[yscale=.4] (0,0) -- (6,0) -- (6,1) -- (0,1) -- (0,0); \draw[yscale=.4] (2.5,1) -- (2.5,0); \node at (1.25,-.4) {$\gamma_v$}; \node at (4,.2) {\small{$\alpha(\alpha-1)$}}; \node at (3.75,-.4) {$\delta_v$}; \end{scope} \end{tikzpicture} \caption{Illustrating the situation $|\gamma_u\delta_u|_{\alpha-1}=|\gamma_v|_{\alpha-1}=1$.}\label{fig:facsit1} \end{figure} This occurrence contributes $-|\gamma_u\delta_u|+1$ towards $\Delta_\alpha$. Consequently, in this case, we have: \[ \Delta_\alpha = -|\gamma_u\delta_u|+1 + |\gamma_v\delta_v|-2 -|\gamma_u\delta_u|+1 = -|\gamma_u\delta_u|. \] Moreover, in this situation, we also have \[ |\gamma_u\delta_u|_{\alpha} = |\gamma_u\delta_u|_{\alpha-1}. \] \item If $\delta_v$ does not contain $\alpha-1$, then $\gamma_v$ contains $\alpha-1$. We then further split this case based on whether $\alpha$ appears in $\gamma_u\delta_u$ or not. \begin{itemize} \item Assume that $\alpha$ appears in $\gamma_v\delta_v$. In this case, either $\gamma_v$ contains $\alpha$ as the letter directly following $\alpha-1$ with $|\delta_v|_\alpha = 0$, or $\alpha-1$ is the last letter of $\gamma_v$ and $\delta_v$ begins with $\alpha$ (because, in this case, $\alpha-1$ does not appear in $\delta_v$). In both cases, we have $\alpha-1$ followed by $\alpha$ in $\gamma_v\delta_v$, resulting in a contribution of $|\gamma_v\delta_v|-2$. Now, $\alpha$ appears in $\gamma_u$, while $\alpha-1$ does not. This is possible only when $\alpha$ is the first letter of $\gamma_u$, thus contributing $-|\gamma_u\delta_u|+1$ towards $\Delta_\alpha$. 
Hence, we find \[ \Delta_\alpha = -|\gamma_u\delta_u|+1 + |\gamma_v\delta_v|-2-|\gamma_u\delta_u|+1 = -|\gamma_u\delta_u|. \] Note also that in this case \[ |\gamma_u\delta_u|_{\alpha} = |\gamma_u\delta_u|_{\alpha-1}. \] \item Assume that $\alpha$ does not occur in $\gamma_v\delta_v$. Consequently, the occurrence of $\alpha-1$ in $\gamma_v$ must be its last letter. Therefore, in this case, we have \[ \Delta_\alpha = -|\gamma_u\delta_u|+1 + |\gamma_v|-1 = -|\delta_v|. \] Moreover, in this case, we have \[ |\gamma_u\delta_u|_{\alpha} + 1 = |\gamma_u\delta_u|_{\alpha-1} \] and $\alpha-1$ is the last letter of $\gamma_v$. \end{itemize} \end{itemize} \item Assume secondly that $|\gamma_u\delta_u|_{\alpha-1} = 2$. Therefore, $\delta_v$ must contain $\alpha-1$, and this occurrence is followed by $\alpha$ since $\delta_v$ cannot end with $\alpha-1$. These occurrences contribute $|\gamma_v \delta_v|-2$ to $\Delta_\alpha$. Now, $\gamma_u$ also contains $\alpha-1$. Since $\alpha$ appears in $\delta_v$, it must also occur in $\gamma_u$, causing the two letters to appear consecutively. These occurrences contribute $-|\gamma_u\delta_u|+2$ to $\Delta_\alpha$. Finally, we consider the contribution of $\alpha-1$ in $\gamma_v$. Since $\alpha$ is already present in $\delta_v$, it cannot occur in $\gamma_v$; thus $\gamma_v$ ends with $\alpha-1$. This provides $\Delta_\alpha$ with $|\gamma_v|-1$. \cref{fig:facsit2} illustrates this situation.
\begin{figure}[h!t] \centering \begin{tikzpicture} \draw[yscale=.4] (0,0) -- (6,0) -- (6,1) -- (0,1) -- (0,0); \draw[yscale=.4] (3.5,1) -- (3.5,0); \node at (1.25,.7) {$\gamma_u$}; \node at (1.4,.2) {\small{$(\alpha-1)\alpha$}}; \node at (5.45,.2) {\small{$(\alpha-1)$}}; \node at (4.5,.7) {$\delta_u$}; \begin{scope}[yshift = -25] \draw[yscale=.4] (0,0) -- (6,0) -- (6,1) -- (0,1) -- (0,0); \draw[yscale=.4] (2.5,1) -- (2.5,0); \node at (1.25,-.4) {$\gamma_v$}; \node at (2,.2) {\small{$(\alpha-1)$}}; \node at (4,.2) {\small{$(\alpha-1)\alpha$}}; \node at (3.75,-.4) {$\delta_v$}; \end{scope} \end{tikzpicture} \caption{Illustrating the situation $|\gamma_u\delta_u|_{\alpha-1}=2$.}\label{fig:facsit2} \end{figure} Thus, in this case, we have \[ \begin{aligned} \Delta_\alpha &= -|\gamma_u \delta_u| + 1 + |\gamma_v \delta_v| - 2 - |\gamma_u \delta_u| + 2 + |\gamma_v| - 1 \\ &= -|\gamma_u \delta_u| + |\gamma_v| = -|\delta_v|. \end{aligned} \] Observe once more that \[ |\gamma_u\delta_u|_{\alpha} + 1 = |\gamma_u\delta_u|_{\alpha-1}, \] and furthermore, $\alpha-1$ is the last letter of $\gamma_v$. \end{itemize} All cases have been considered, and each one leads to the desired conclusion. \end{claimproof} The preceding claim indicates that $\Delta_\alpha$ in \eqref{eq:simplified-bigdiff-equal-length} is non-positive. For \eqref{eq:simplified-bigdiff-equal-length} to hold true, it must be the case that $\delta_v = \varepsilon$ and $|v|_{\alpha} = |v|$. Moreover, $\gamma_v$ ends with $\alpha-1$ as stated in the above claim. Consequently, $\gamma_v\sm(v)$ ends with $\alpha-1$: either $\sm(v)$ ends with $\alpha-1$ when $v\neq \varepsilon$ (as $|v|_{\alpha} = |v|$ implies that the last letter of $\sm(v)$ is $\alpha-1$), or $\gamma_v$ ends with $\alpha-1$ when $v = \varepsilon$. This conclusion contradicts the initial assumption that the words end with distinct letters. Therefore, we have shown that the case $|u| = |v|$ is impossible if either of the words $s_{_{U}}$ or $s_{_{V}}$ is empty.
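Each contribution tallied in the preceding case analysis is a difference of word binomial coefficients $\binom{u}{e}$, counting the occurrences of $e$ as a (scattered) subword of $u$. Such tallies can be double-checked on small examples with the following minimal dynamic-programming sketch (the function name is ours and purely illustrative):

```python
def binom_word(u, e):
    """Number of occurrences of e as a (scattered) subword of u."""
    # counts[i] = number of occurrences of the prefix e[:i] seen so far
    counts = [1] + [0] * len(e)
    for c in u:
        # traverse backwards so that each letter of u extends
        # every partial occurrence at most once
        for i in range(len(e), 0, -1):
            if e[i - 1] == c:
                counts[i] += counts[i - 1]
    return counts[len(e)]
```

For instance, $\binom{abab}{ab}=3$, since the three position pairs $(1,2)$, $(1,4)$, and $(3,4)$ carry the subword $ab$.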
\textbf{Second, assume that $|u| \neq |v|$.} Due to the length constraints, it follows that $\left||u|-|v|\right| = 1$. W.l.o.g., let us assume that $|v| = |u|+1$ and express $v$ in the form $v = \beta v'$, where $\beta \in \am$. Consequently, we have $\gamma_u\delta_u \sim_1 \gamma_v\sm(\beta)\delta_v$, implying that both $\gamma_u$ and $\delta_u$ are non-empty. Let $\alpha-1$ denote the last letter of $\delta_u$. We may now apply \cref{lem:bigdiff}, since we have $|u| = |v'|$ and $\gamma_u\delta_u \sim_1 \gamma_v\sm(\beta)\delta_v$ (with $\alpha$ in place of $0$). Writing $\gamma_v'$ for $\gamma_v\sm(\beta)$, we obtain (after dividing both sides by $m^{\binom{k}{2}-1}$) \begin{multline*} 0= m \biggl[ |u|_{\alpha}-|v'|_\alpha + |u|\, (|\gamma_u|_\alpha - |\gamma_v'|_\alpha +|\delta_u|_{\alpha-1} - |\delta_v|_{\alpha-1} )\biggr]\\ +\sum_{b\in\am}\left( \binom{\gamma_u\delta_u}{b(\alpha-1)}-\binom{\gamma_v'\delta_v}{b(\alpha-1)}+ \binom{\gamma_u\delta_u}{\alpha b}-\binom{\gamma_v'\delta_v}{\alpha b}\right). \end{multline*} Write again \[ |\gamma_u|_\alpha -|\gamma_v'|_\alpha +|\delta_u|_{\alpha-1} - |\delta_v|_{\alpha-1} = |\delta_u|_{\alpha-1} - |\delta_u|_\alpha - |\delta_v|_{\alpha-1} + |\delta_v|_\alpha. \] Furthermore, defining \[ \Delta_\alpha = \sum_{b\in\am}\left( \binom{\gamma_v'\delta_v}{b(\alpha-1)} + \binom{\gamma_v'\delta_v}{\alpha b} - \binom{\gamma_u\delta_u}{b(\alpha-1)}- \binom{\gamma_u\delta_u}{\alpha b}\right), \] and recalling that $|\delta_u|_{\alpha-1} = 1$ and $|\delta_u|_{\alpha} = 0$, the preceding equation simplifies to \begin{equation}\label{eq:S_h-equation-diff-lengths} m \left(|u|_\alpha + |u|(|\delta_v|_{\alpha} - |\delta_v|_{\alpha-1})\right) = m\left(|v'|_{\alpha} - |v'|\right) + \Delta_\alpha. \end{equation} Using arguments analogous to those in \textbf{Case 1}, the left-hand side is shown to be non-negative. Moreover, it equals zero if and only if $|u|_{\alpha} = 0$ and $|\delta_v|_{\alpha} = |\delta_v|_{\alpha-1}$.
Additionally, we compute the right-hand side in an analogous manner, showing that it is non-positive. \begin{claim}\label{cl:sign-Sh-diff} If $\beta = \alpha$, then $\Delta_{\alpha} = -|\delta_v|$ or $\Delta_{\alpha} = -|\gamma_v\delta_v|$. In all other cases, $\Delta_{\alpha} = -|\gamma_u\delta_u|$ or $\Delta_{\alpha} = -m -|\delta_v|$. \end{claim} \begin{claimproof} We once again consider the occurrences of $\alpha$ and $\alpha-1$ in the two words $\gamma_u\delta_u$ and $\gamma_v'\delta_v$, and examine their contributions to the sum $\Delta_\alpha$. Recall that $\alpha-1$ is the last letter of $\delta_u$. Therefore, $\alpha$ can appear at most once in $\gamma_u \delta_u$. Since $\gamma_u\delta_u \sim_1 \gamma_v'\delta_v = \gamma_v\sm(\beta)\delta_v$, and $\alpha$ appears in $\sm(\beta)$, we conclude that $\alpha$ appears precisely once in $\gamma_u\delta_u$, and therefore must appear in $\gamma_u$. $\bullet$ \textbf{Occurrences in $\delta_u$ and $\sm(\beta)$:} The occurrence of $\alpha-1$ as the last letter of $\delta_u$ contributes $-|\gamma_u\delta_u|+1$ to $\Delta_\alpha$. Since $\sm(\beta)$ contains both $\alpha-1$ and $\alpha$, there are two possible cases: \begin{enumerate} \item[1)] if $\alpha-1$ is the last letter of $\sm(\beta)$ (which is equivalent to $\beta=\alpha$), the contribution is \( |\gamma_v'|-1 + |\delta_v| + m-1 = |\gamma_v'\delta_v| + m-2. \) \item[2)] Otherwise, the two letters appear consecutively in $\sm(\beta)$, and the contribution is $|\gamma_v'\delta_v|-2$. \end{enumerate} $\bullet$ \textbf{Other occurrences:} We consider two cases based on the number of occurrences of $\alpha-1$. \begin{itemize} \item Suppose first that $\alpha-1$ appears exactly once in $\gamma_u\delta_u$. Consequently, $\alpha$ must be the first letter of $\gamma_u$, contributing $-|\gamma_u\delta_u| + 1$ to $\Delta_\alpha$.
Thus, in this case, $\Delta_\alpha = m-|\gamma_u\delta_u| = -|\gamma_v\delta_v|$ if $\beta = \alpha$, and $\Delta_\alpha = -|\gamma_u\delta_u|$ otherwise. \item Now, assume that $\alpha-1$ occurs for a second time in $\gamma_u\delta_u$. Since $\alpha$ must appear in $\gamma_u$ together with this second occurrence of $\alpha-1$, the two letters must appear consecutively, with $\alpha-1$ preceding $\alpha$. These occurrences give the contribution $-|\gamma_u\delta_u|+2$. It remains to consider the second occurrence of $\alpha-1$ in $\gamma_v'\delta_v$. Notice that $\alpha-1$ cannot appear in $\delta_v$: since it cannot be the last letter of $\delta_v$, it would have to be followed by $\alpha$, yielding a second occurrence of $\alpha$. Thus $\alpha-1$ appears in $\gamma_v$. Since $\gamma_v$ does not contain $\alpha$, we must have that $\alpha-1$ is the last letter of $\gamma_v$. This gives the contribution $|\gamma_v|-1 = |\gamma_v'\delta_v| - |\delta_v| - m - 1$. In total, we have \begin{align*} \Delta_\alpha &= |\gamma_v'\delta_v|-2 -|\gamma_u\delta_u|+1 -|\gamma_u\delta_u|+2 + |\gamma_v'\delta_v| - |\delta_v| - m - 1 = -|\delta_v|-m \end{align*} if $\beta \neq \alpha$, and $\Delta_\alpha = -|\delta_v|$ if $\beta = \alpha$.\qedhere \end{itemize} \end{claimproof} We are now ready to conclude the proof. The claim above, combined with the sign analysis of the left-hand side, shows that the only way \eqref{eq:S_h-equation-diff-lengths} holds is if both sides are equal to zero. In particular, this implies that $|v'|_{\alpha} = |v'|$, $\beta = \alpha$, and $\delta_v = \varepsilon$. Consequently, the last letter of $\sm(v')$ is $\alpha-1$, leading us to conclude that the words $U = \smn{k-1}\left(\gamma_u \sm(u) \delta_u\right)$ and $V = \smn{k-1}(\gamma_v\sm(\beta v'))$ both end with the last letter of $\smn{k-1}(\alpha-1)$. This is a contradiction in the case where $|u| \neq |v|$. Thus, we conclude that the only possible way for $U \sim_{k+1} V$ is when $\delta_u = \delta_v = \gamma_u = \gamma_v = \varepsilon$ and $u \sim_1 v$. Hence, the proof is complete.
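As a brute-force sanity check of this discernment property (illustrative only, not part of the argument), one can verify on a small instance that empty contexts with $u\sim_1 v$ are indeed necessary: for $m=3$, we have $\smn{2}(0)\sim_2\smn{2}(1)$, in accordance with \cref{prop:phik}, but $\smn{2}(0)\not\sim_3\smn{2}(1)$, since $0\not\sim_1 1$. All names in the sketch below are ours, assuming the morphism $\sm\colon i\mapsto i(i+1)\cdots(i+m-1)$ computed modulo $m$.

```python
from itertools import product

def sigma(w, m):
    # generalized Thue-Morse morphism: i -> i (i+1) ... (i+m-1) mod m
    return [(a + j) % m for a in w for j in range(m)]

def binom_word(u, e):
    # number of occurrences of e as a (scattered) subword of u
    counts = [1] + [0] * len(e)
    for c in u:
        for i in range(len(e), 0, -1):
            if e[i - 1] == c:
                counts[i] += counts[i - 1]
    return counts[-1]

def k_binomially_equivalent(u, v, k, m):
    # u ~_k v iff binom(u, e) == binom(v, e) for all e with 1 <= |e| <= k
    return all(binom_word(u, list(e)) == binom_word(v, list(e))
               for l in range(1, k + 1)
               for e in product(range(m), repeat=l))

m = 3
u = sigma(sigma([0], m), m)   # sigma^2(0) = 012120201
v = sigma(sigma([1], m), m)   # sigma^2(1) = 120201012
print(k_binomially_equivalent(u, v, 2, m))  # True
print(k_binomially_equivalent(u, v, 3, m))  # False
```

For instance, the subword $001$ occurs $3$ times in $\smn{2}(0)$ but $4$ times in $\smn{2}(1)$, which already separates the two words at order $3$.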
\end{proof} \section{Abelian Complexity for Short Factors}\label{sec:abco} The initial values of the abelian complexity $\bc{\infw{t}_m}{1}(\ell)$ of $\mathbf{t}_m$, for $1\leqslant\ell<m$ are presented in \cref{tab:aml}. For lengths $\ell\geqslant m$, the function is periodic with period~$m$, i.e., $\bc{\infw{t}_m}{1}(\ell+m)=\bc{\infw{t}_m}{1}(\ell)$, and its behavior is fully described by \cref{thm:abelian_complexity} from \cite{ChenWen2019}. Thus, the following proposition complements the findings of Chen et al. \begin{table}[h!t] {\small \[ \begin{array}{ccccccccccccc} 2 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 3 & 6 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 4 & 10 & 12 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 5 & 15 & 20 & 25 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 6 & 21 & 30 & 39 & 42 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\ 7 & 28 & 42 & 56 & 63 & 70 & \text{} & \text{} & \text{} & \text{} & \text{} \\ 8 & 36 & 56 & 76 & 88 & 100 & 104 & \text{} & \text{} & \text{} & \text{} \\ 9 & 45 & 72 & 99 & 117 & 135 & 144 & 153 & \text{} & \text{} & \text{} \\ 10 & 55 & 90 & 125 & 150 & 175 & 190 & 205 & 210 & \text{} & \text{} \\ 11 & 66 & 110 & 154 & 187 & 220 & 242 & 264 & 275 & 286 & \text{} \\ 12 & 78 & 132 & 186 & 228 & 270 & 300 & 330 & 348 & 366 & 372 \\ 13 & 91 & 156 & 221 & 273 & 325 & 364 & 403 & 429 & 455 & 468 & 481 & \text{} \\ 14 & 105 & 182 & 259 & 322 & 385 & 434 & 483 & 518 & 553 & 574 & 595 & 602 \\ \end{array} \] } \caption{Values of \texorpdfstring{$\bc{\infw{t}_m}{1}(\ell)$}{bc(t_m,1)(ell)} for $1 \leqslant \ell < m \leqslant 14$.} \label{tab:aml} \end{table} \begin{proposition}\label{pro:abco_small} The initial values of the abelian complexity $\bc{\infw{t}_m}{1}(\ell)$ of the generalized Thue--Morse word over $m$ letters are given as 
follows. \begin{itemize} \item For odd $\ell<m$, say $\ell=2\ell'+1$, where $\ell'\ge 0$, we have \[\bc{\infw{t}_m}{1}(\ell)=m \left(1 - \ell' - \ell'^2 + \ell' m\right).\] \item For even $\ell<m$, we have \[\bc{\infw{t}_m}{1}(\ell) = \frac{m}{4} \left(6 - \ell^2 - 2 m + 2 \ell m\right).\] \end{itemize} \end{proposition} \begin{proof} By \cref{lem:boundaryseq}, every pair $(i,j)\in\mathcal{A}_m^2$ appears in $\mathbf{t}_m$. Thus, any factor $w$ of length $\ell<m$ can be written as $w=ps$, where $p$ is a suffix of some $\sm(i)$ and $s$ is a prefix of some $\sm(j)$. Our aim is to count the possible Parikh vectors for such $w$. Since we are dealing with abelian equivalence, and the images of a letter under $\sm$ are cyclic permutations of $01\cdots (m-1)$, we can limit ourselves to $|p|=\ell,\ell-1,\ldots, \lceil\ell/2\rceil$. When $p$ is shorter than $s$, we obtain exactly the same Parikh vectors. If $|p|=\ell$, then $p$ is of the form $t\, (t+1) \cdots (t+\ell-1)$, which is a factor of some $\sm(i)$. This corresponds to the $m$ cyclic permutations of the Parikh vector $1^\ell 0^{m-\ell}$ (expressed as a word of length~$m$). If $|p|=\ell-1$ and $|s|=1$, there are $m$ possible suffixes of $\sm(i)$ of the form $t\, (t+1)\cdots (t+\ell-2)$, where $t\in\am$. We need to determine which $j\in\am$ provides Parikh vectors that have not already been listed. Here, $s$ is the first letter of $\sm(j)$, which is $j$. If $j=t-1$ or $j=t+\ell-1$, then we get a Parikh vector from the first case. Thus, for $j$, we can choose any element in $\am$ except these two, resulting in $m-2$ possibilities and a total of $m(m-2)$ new Parikh vectors. Note that we obtain Parikh vectors (along with their cyclic permutations) of the form $1^{\ell-1}0^r10^s$ with some isolated $1$, where $r,s>0$ and $r+s=m-\ell$, or of the form $1^r 2 1^{\ell-r-2} 0^{m-\ell+1}$, with one $2$ in any position within the block of size $\ell-1$. If $|p|=\ell-2$ and $|s|=2$, this case is similar.
We have $m$ possible suffixes of the form $t\, (t+1)\cdots (t+\ell-3)$, where $t\in\am$. We need to determine which $j\in\am$ provides new Parikh vectors. Here, $s$ is the first two letters of $\sm(j)$, which are $j (j+1)$. If $j\in\{t-2,t-1,t+\ell-3,t+\ell-2\}$, the Parikh vectors are already described in the first two cases. Otherwise, we obtain new vectors either with a block $1^r 2^2 1^{\ell-r-4}$, or with two isolated blocks $1^2$ and $1^{\ell-2}$. This results in $m(m-4)$ new Parikh vectors. In general, if $|p|=\ell-u$ and $|s|=u$ with $\ell/2>u$, then $p$ is of the form $t\, (t+1)\cdots (t+\ell-u-1)$ and $s=j(j+1)\cdots (j+u-1)$. To obtain new Parikh vectors, either with a block $1^r 2^u 1^{\ell-r-2u}$ or with two isolated blocks $1^u$ and $1^{\ell-u}$, the letter $j$ cannot be in $\{t-u,\ldots,t-1,t+\ell-2u,\ldots,t+\ell-u-1\}$. Therefore, $j$ can take $m-2u$ values. In conclusion, if $\ell$ is odd of the form $\ell=2\ell'+1$, we obtain a total of \[\bc{\infw{t}_m}{1}(\ell)=m+\sum_{u=1}^{\ell'}m(m-2u) =m \left(1 - \ell' - \ell'^2 + \ell' m\right).\] Now, if $\ell$ is even, we still have to consider the situation where $|p|=|s|=\ell/2$. In this case, $p$ and $s$ have symmetric roles, and we should avoid double counting. We need to select two elements $i,j\in\am$ that are at distance greater than $\ell/2$ from each other (over $\mathbb{Z}/(m\mathbb{Z})$) in order to obtain Parikh vectors that are a cyclic permutation of $1^{\ell/2}0^r1^{\ell/2}0^s$, where $r,s>0$. The number of such pairs $\{i,j\}$, where $j\not\in\{i-\ell/2,\ldots,i-1,i,i+1,\ldots,i+\ell/2\}$, is given by $m (m-\ell-1)/2$. There are also $m$ permutations of $2^{\ell/2}0^{m-\ell/2}$ when $p=s$.
Hence, for even $\ell$, we obtain \[\bc{\infw{t}_m}{1}(\ell)=m+\sum_{u=1}^{\frac{\ell}{2}-1}m(m-2u)+\frac{m (m-\ell-1)}{2}+m = \frac{m}{4} \left(6 - \ell^2 - 2 m + 2 \ell m\right).\] \end{proof} \begin{remark} Interestingly, the infinite triangular array whose initial elements are given in \cref{tab:aml}, exhibits several intriguing combinatorial properties and identities. \begin{itemize} \item Regarding the rows of the triangle, the following relation holds for $1\leqslant \ell <m-4$ \[ \bc{\infw{t}_{m}}{1}(\ell+4)= 2\bc{\infw{t}_{m}}{1}(\ell+3)-2 \bc{\infw{t}_{m}}{1}(\ell+1)+ \bc{\infw{t}_{m}}{1}(\ell).\] This relation can be easily deduced from the previous proposition. For $m\geqslant 5$, the initial conditions are given by \[ \left(\bc{\infw{t}_{m}}{1}(1),\ldots,\bc{\infw{t}_{m}}{1}(4)\right)=\left(m, m(m+1)/2,m(m-1),m(3m-5)/2\right).\] \item Similarly, for each column, the following holds for all $m\geqslant 2$ and all $\ell<m$, \[ \bc{\infw{t}_{m+3}}{1}(\ell)= 3\bc{\infw{t}_{m+2}}{1}(\ell)-3 \bc{\infw{t}_{m+1}}{1}(\ell)+ \bc{\infw{t}_{m}}{1}(\ell).\] \item Furthermore, the diagonal and parallels to the diagonal $\left(\bc{\infw{t}_{\ell+2+i}}{1}(\ell+1)\right)_{\ell\geqslant 0}$ for all $i\geqslant 0$ satisfy the same recurrence relation of order $6$ \[ x_{n+6}=2x_{n+5}+x_{n+4}-4x_{n+3}+x_{n+2}+2x_{n+1}-x_n. \] \item The sequence \( \left(\bc{\infw{t}_{m}}{1}(m-2)\right)_{m\geqslant 3}=3, 10, 20, 39, 63, 100, 144,\ldots \) appears in several entries of the OEIS, as {\tt A005997} (number of paraffins) and {\tt A272764} (number of positive roots in reflection group~$E_n$), among others. \item The sequence \( \left(\bc{\infw{t}_{2m+1}}{1}(2m)\right)_{m\geqslant 1}\) is given by \( 2m^3+m^2+2m+1\). \item The sequence \( \left((\bc{\infw{t}_{2m}}{1}(2m-1))/2\right)_{m\geqslant 1}=1, 6, 21, 52, 105, 186, 301,\ldots\) is the sequence of $q$-factorial numbers $([3]!_q)_{q\geqslant 0}$ where \[ [3]!_q= \frac{(1-q)(1-q^2)(1-q^3)}{(1-q)^3}= (1 + q) (1 + q + q^2). 
\] It appears as {\tt A069778} in the OEIS. \end{itemize} \end{remark} \section{Description of the Abelian Rauzy Graphs}\label{sec:arg} The abelian Rauzy graph is defined in \cref{def:abr}. Refer to \cref{sec:strategy} for the definitions of the sets $Y_{m,L},Y_{m,R}$, and $Y_m$. The aim of this section is to count the number of edges in the abelian Rauzy graph $G_{m,\ell}$ of order $\ell$ for $\infw{t}_m$, where $\ell<2m$, as well as determine the size of the corresponding set $Y_m(\ell)$. These expressions, together with \cref{pro:5.5}, lead to \cref{thm:inter}. The structure of these graphs depends on the value of the parameter $\ell$. Specifically, the behavior varies significantly depending on whether $\ell<m$ or $\ell\geqslant m$. \begin{example}\label{exa:abr64} \cref{fig:abr64} depicts the graph~$G_{6,4}$. To keep clarity in the figure, we have omitted the edge labels. The color of each edge is determined by the second component of its label. Thus, two edges originating from the same vertex and sharing the same color correspond to the same element of $Y_{m,R}$. The vertices are labeled with Parikh vectors. According to \cref{pro:abco_small}, $\bc{\infw{t}_{6}}{1}(4)=39$, which implies that the graph~$G_{6,4}$ has $39$ vertices. The symmetry of the graph results from \cref{lem:permut}. \begin{figure}[h!t] \centering \includegraphics[width=11cm]{abr64.pdf} \caption{Abelian Rauzy graph $G_{6,4}$ of order $4$ for $\infw{t}_6$.} \label{fig:abr64} \end{figure} \end{example} \begin{example}\label{exa:abr54} \cref{fig:abr54} depicts the graph~$G_{5,4}$, which has $\bc{\infw{t}_{5}}{1}(4)=25$ vertices. This example may help the reader follow the developments presented in the proof below, where the case of odd $m$ and even $\ell$ is discussed. Providing two distinct examples is insightful. \cref{fig:abr54} exhibits a $5$-fold symmetry in the graph. 
However, \cref{fig:abr64} shows that the three central vertices exhibit a different behavior, specifically a $3$-fold symmetry, instead of the $6$-fold symmetry present in the rest of the graph. \begin{figure}[h!t] \centering \includegraphics[width=11cm]{abr54.pdf} \caption{Abelian Rauzy graph $G_{5,4}$ of order $4$ for $\infw{t}_5$.} \label{fig:abr54} \end{figure} \end{example} \subsection{When \texorpdfstring{$\ell < m$}{ell < m}} \begin{proposition}\label{pro:edgml1} For $1\leqslant \ell<m$, the number of edges in the abelian Rauzy graph $G_{m,\ell}$ is given by \[m(1+\ell m-\ell).\] \end{proposition} \begin{proof} For $\ell=1$, all length-$2$ factors of the form $ab$ appear in $\infw{t}_m$. Thus, $G_{m,1}$ is a complete directed graph with $m^2$ edges. Now, assume $\ell\geqslant 2$. As a first case, let $\ell$ be even, of the form $2\ell'$ with $\ell'>0$, and let $m$ be odd (as in \cref{exa:abr54}). \cref{tab:different_vertices} lists the possible Parikh vectors $v$ and their corresponding out-degree $d^+(v)$. Note that we must also consider the cyclic permutations of these vectors, which correspond to other vertices in the graph. \begin{table}[h!t] \[ \begin{array}{c||l|lr|l} \text{type} & \Psi(u) & d^+ & \text{choices} & \text{total when }\ell\text{ even}\\ \hline a)&2^{\ell'}0^{m-\ell'} & 1 & & 1\\ b)&1^\ell0^{m-\ell} & \ell+m-1 & & 1\\ c)&1^i2^j1^{\ell-i-2j}0^{m-\ell+j} & 4& i,j,\ell-i-2j>0 & (\ell'-1)^2\\ d)&1^{\ell-2i}2^i0^{m-\ell+i} & 2 & i,\ell-2i>0 & \ell'-1\\ e)&2^i1^{\ell-2i}0^{m-\ell+i}&2 & i,\ell-2i>0 & \ell'-1 \\ f)&1^i0^j1^{\ell-i}0^{m-\ell-j} & 2 & i,j,\ell-i,m-\ell-j>0 & (\ell'-\frac12)(m-\ell-1)\\ \end{array} \] \caption{The different types of vertices (not counting permutations).} \label{tab:different_vertices} \end{table} We proceed similarly to the proof of \cref{pro:abco_small}, describing the Parikh vectors represented succinctly as words.
\begin{enumerate} \item[(a)] The factor $0 1 \cdots (\ell'-1)\, 0 1 \cdots (\ell'-1)$ has a unique successor in $\infw{t}_m$, which is $1 \cdots (\ell'-1)\, 0 1 \cdots (\ell'-1)\, \ell'$. Thus, there is an edge $2^{\ell'}0^{m-\ell'} \to 1 2^{\ell'-1}10^{m-\ell'-1}$. The reader may refer to \cref{exa:abr54} to observe the different types of vertices described in this proof. For the first type, these vertices are located on the outermost part of \cref{fig:abr54}. \item[(b)] The Parikh vector $1^\ell0^{m-\ell}$ can be associated with the factor $0\cdots (\ell-1)$. Since all pairs of letters occur in $\infw{t}_m$, the factor $0\cdots (\ell-1) a$ occurs in $\infw{t}_m$ for all $a\in\am$. Thus, there are $m$ edges with the label $(0,a)$; in particular, one of them is a loop with label $(0,0)$. This Parikh vector is also associated with a factor of the form $vu$, where $u=0\cdots (i-1)$ and $v=i\cdots (\ell-1)$, with $i=1,\ldots,\ell-1$. Thus, there are $\ell-1$ loops labeled $(i,i)$. For the second type, these vertices are located on the innermost part of \cref{fig:abr54}. \item[(c)] The Parikh vector $1^i2^j1^{\ell-i-2j}0^{m-\ell+j}$ is associated with a factor of the form $uv$ or $vu$, where $u=0\cdots (i+j-1)$ and $v=i\cdots (\ell-j-1)$. It can also be associated with a factor $uv$ or $vu$, where $u=0\cdots (\ell-j-1)$ and $v=i\cdots (i+j-1)$. This results in four edges towards the following vertices: \begin{eqnarray*} &01^{i-1}2^j1^{\ell-i-2j+1}0^{m-\ell+j-1}, &1^{i+1}2^{j}1^{\ell-i-2j-1}0^{m-\ell+j},\\ &01^{i-1}2^{j+1}1^{\ell-i-2j-1}0^{m-\ell+j}, &1^{i+1}2^{j-1}1^{\ell-i-2j+1}0^{m-\ell+j-1}. \end{eqnarray*} \item[(d) $\&$ (e)] These cases are similar. The Parikh vector $2^i1^{\ell-2i}0^{m-\ell+i}$ is associated with a factor of the form $uv$ or $vu$, where $u=0\cdots (i-1)$ and $v=0\cdots (\ell-i-1)$. This results in two edges labeled $(0,\ell-i)$ and $(0,i)$, which are distinct because $\ell-2i>0$.
\item[(f)] We have factors of the form $uv$ or $vu$, where $u=0\cdots (i-1)$ and $v=(i+j)\cdots (\ell+j-1)$. This results in two edges labeled $(0,\ell+j)$ and $(i+j,i)$. \end{enumerate} Next, we count the total number of edges. To do so, we need to determine the number of vertices of each type. There are $m$ pairwise distinct cyclic permutations of the vector of type (a). The same observation applies for type (b). This results in $m+m(\ell+m-1)=m(\ell+m)$ edges in $G_{m,\ell}$. For a vector of type (c), for each valid $j\leqslant\ell'-1$, there are $\ell-2j-1$ ways to arrange $\ell-2j$ ones on both sides of $2^j$. This results in \begin{equation} \label{eq:totl'} \sum_{j=1}^{\ell'-1} (\ell-2j-1)=(\ell'-1)^2. \end{equation} Taking into account the cyclic permutations, we obtain $4m(\ell'-1)^2$ edges. For a vector of type (d) or (e), there are $\ell'-1$ choices for $i$. This results in a total of $4m(\ell'-1)$ edges. The type (f) requires extra caution: since $\ell$ is even, not all cyclic permutations are distinct, so we must avoid double counting. We have to limit ourselves to $i\leqslant\ell'$. Indeed, the $m$~cyclic permutations of $1^i0^j1^{\ell-i}0^{m-\ell-j}$ and those of $1^{\ell-i}0^{m-\ell-j}1^i0^j$ are identical. For each $i<\ell'$, there are $m-\ell-1$ choices for~$j$. This results in $2m(\ell'-1)(m-\ell-1)$ edges. When $i=\ell'$, there are two blocks of ones of the same size, giving only $(m-\ell-1)/2$ choices for $j$. This is the only place where the fact that $m$ is odd plays a role. This provides $2m(m-\ell-1)/2=m(m-\ell-1)$ edges. Summing up all contributions yields the expected value \[ \begin{aligned} m (\ell + m) + 4 m (\ell' - 1)^2 &+ 4 m (\ell' - 1) + 2 m (\ell' - 1) (m - \ell - 1)\\ &+ m (m - \ell - 1) = m(1+\ell m-\ell). \end{aligned} \] If $m$ is even and $i=\ell'$, we must consider $j<(m-\ell)/2$ and $j=(m-\ell)/2$ separately because in the latter case, there are also two blocks of zeroes of the same size.
Thus, we must again avoid double counting. This results in $2m\left(({m-\ell}/{2})-1\right)+m$ edges. The last term corresponds to the $m/2$ distinct cyclic permutations of $1^{\ell'}0^{(m-\ell)/2}1^{\ell'}0^{(m-\ell)/2}$, each of out-degree~$2$, which can be observed in \cref{fig:abr64} with the three innermost vertices. The summation yields the same expression. The case where $\ell$ is odd is treated similarly. Note that there are no Parikh vectors of type (a). \end{proof} \begin{remark} For $1\leqslant\ell<m$, the graph $G_{m,\ell}$ is an Eulerian graph. The previous proof can be reproduced by focusing on the in-degree of the vertices and showing that $d^+(v)=d^-(v)$ for all vertices $v$. Since $\infw{t}_m$ is recurrent, the graph $G_{m,\ell}$ is strongly connected. This suffices to conclude. \end{remark} \begin{proposition}\label{pro:Ym1} For $1\leqslant \ell<m$, the following holds \[\# Y_{m,R}(\ell)=\# Y_{m,L}(\ell)=m(1+\ell m-\ell)-\frac{m}{2}\ell(\ell-1).\] In particular, the value of $\# Y_{m}(\ell)$ is given by \[ \# Y_{m}(\ell)=2m(1+\ell m-\ell)-m\ell(\ell-1). \] \end{proposition} \begin{proof} Assume $\ell$ is even, of the form $2\ell'$. To compute $\# Y_{m,R}(\ell)$, we must identify the edges in $G_{m,\ell}$ that are outgoing from a vertex with labels sharing the same second component. If such edges exist, they are counted once in $\#Y_{m,R}(\ell)$. Our strategy is to subtract, from the total number of edges given by \cref{pro:edgml1}, those that do not contribute a new element to the set $Y_{m,R}(\ell)$. In \cref{exa:abr64}, to compute $\# Y_{6,R}(4)$, one must sum, for each vertex, the number of outgoing edges, counting only one edge per distinct color. Using the same notation as in the proof of \cref{pro:edgml1}, only vertices of type (b), (c), or (d) will contribute. We now identify the edges whose labels share the same second component. The vertex $1^\ell0^{m-\ell}$ has $\ell-1$ outgoing edges labeled $(0,j)$ and $\ell-1$ loops labeled $(j,j)$, for $j=1,\ldots,\ell-1$.
(Refer to \cref{fig:abr54,fig:abr64} to observe the vertices having loops.) Considering the cyclic permutations of the Parikh vector, we must subtract $m(\ell-1)$ from the total number of edges. A vertex of type (c) has two outgoing edges with a second component of $\ell-j$, and two outgoing edges with a second component of $i+j$. (Refer to \cref{fig:abr54,fig:abr64} to observe the vertices with an out-degree of $4$.) Moreover, $\ell-j\neq i+j$ since $\ell-i-2j>0$. From \eqref{eq:totl'}, we must subtract $2m(\ell'-1)^2$. Finally, a vertex of type (d) has two outgoing edges with a second component of $\ell-i$. Hence, we subtract $m(\ell'-1)$. The total amount to subtract is: \[m \left[ \ell-1+2(\ell'-1)^2+\ell'-1\right]= \frac{m \ell (\ell-1)}{2}. \] The remaining cases are treated similarly. To determine $\# Y_{m,L}(\ell)$, we need to identify the edges in $G_{m,\ell}$ that are incoming to a vertex with labels sharing the same first component. If such edges exist, they are counted once in $Y_{m,L}(\ell)$. Only vertices of type (b), (c), or (e) contribute. Refer to \cref{exa:abr54b} for further clarification. The reasoning is similar in this case. \end{proof} \begin{example}\label{exa:abr54b} \cref{fig:abr54first} depicts the graph $G_{5,4}$. Compared to \cref{exa:abr54,exa:abr64}, the color of each edge is determined by the first component of its label. Vertices are labeled with their corresponding Parikh vectors. \begin{figure}[h!t] \centering \includegraphics[width=11cm]{abr54first.pdf} \caption{Abelian Rauzy graph $G_{5,4}$ of order $4$ for $\infw{t}_5$; edges colored by the first component of the label.} \label{fig:abr54first} \end{figure} \end{example} \subsection{When \texorpdfstring{$\ell \geqslant m$}{ell >= m}} \begin{proposition}\label{pro:abrl2} For $m\leqslant \ell<2m$, the number of edges in the abelian Rauzy graph $G_{m,\ell}$ is given by \[m(m^2-m+1).\] \end{proposition} \begin{proof} Let $b\in\am$.
Due to the symmetry of $\sm$, we count the number of edges labeled $(0,b)$ and then multiply the result by $m$. So, we focus on factors of length~$\ell+1$ that start with $0$ and end with $b$. These factors can be of one of the following two forms \begin{itemize} \item $uvb$, where $u$ starts with $0$, $|u|=t\leqslant m$, and $|v|=\ell-t<m$, i.e., $\ell-m<t\leqslant m$; or \item $u\sm(a)vb$, for some letter $a$, and where $u$ starts with $0$, $|u|=t\leqslant \ell-m$, and $|v|=\ell-m-t$, i.e., $1\leqslant t\leqslant\ell-m$. \end{itemize} In both cases, $u$ (respectively, $vb$) is a suffix (respectively, prefix) of the image of a letter under $\sm$. In particular, all letters of $u$ are determined by the first letter~$0$, and all letters of $v$ are determined by $b$. Note that the first letter of $v$ is congruent to $b-\ell+t$ modulo~$m$. Consider first the case where $\ell-b=m$. There is a single edge labeled $(0,b)$ from $2^b1^{m-b}$ to $12^b1^{m-b-1}$. Since $|u|=t$, the last letter of $u$ is $t-1$. Under the assumption $\ell-b=m$, the first letter of $v$ is $t$. Therefore, all the previously described factors have the same Parikh vector. Next, assume that $\ell-b\neq m$. We will prove that there are $m$ pairwise distinct Parikh vectors, each with an outgoing edge labeled $(0,b)$. Since there are $m-1$ possible values for $b$, we obtain the expected value of $m(m-1)=m^2-m$. In this case, the last letter of $u$ is $t-1$, and the first letter of $v$ is $b-\ell+t$, which is not congruent to $t$ modulo~$m$. First, assume that we have two factors $uvb$ and $u'v'b$ of the first form, where $|u'|=t'<|u|=t\leqslant m$. Then, $\Psi(uvb)-\Psi(u'v'b)$ contains $1$'s in positions corresponding to $t',t'+1,\ldots,t-1$ and contains $-1$'s in positions corresponding to $b-\ell+t',\ldots,b-\ell+t-1$ (modulo~$m$). Since $\ell-b\neq m$, the two intervals of length $t-t'$ made of these positions are not equal over $\mathbb{Z}/(m\mathbb{Z})$.
Therefore, $\Psi(uvb)-\Psi(u'v'b)\neq 0$. A similar reasoning applies to the two factors $u\sm(a)vb$ and $u'\sm(a')v'b$ of the second form. Finally, we compare a factor $x=uvb$ of the first form with a factor $y=u'\sm(a)v'b$ of the second form. Let $t=|u|$ and $t'=|u'|$, with $\ell-m<t\leqslant m$ and $0<t'\leqslant \ell-m$. Then, $x$ and $y$ have the same prefix (respectively, suffix) of length $t'$ (respectively, $\ell-m-t'$). Thus, \[\Psi(x)-\Psi(y)=\Psi\left(t' (t'+1)\cdots (t-1) (b-\ell+t)\cdots (b-t') \right)-\Psi\left(\sm(a)\right).\] Since $\ell-b\neq m$, the length-$m$ word \[t' (t'+1)\cdots (t-1) (b-\ell+t)\cdots (b-t')\] contains at least one repeated letter; consequently, this difference is non-zero. \end{proof} \begin{example}\label{exa:abr45b} In \cref{fig:abr45first}, we have depicted the graph $G_{4,5}$. The color of each edge is determined by the first component of its label, as the next proof focuses on the set $Y_{m,L}$. The vertices are labeled with their corresponding Parikh vectors. \begin{figure}[h!t] \centering \includegraphics[width=9cm]{abr45first.pdf} \caption{Abelian Rauzy graph $G_{4,5}$ of order $5$ for $\infw{t}_4$; edges colored by the first component of the label.} \label{fig:abr45first} \end{figure} \end{example} \begin{proposition} For $m\leqslant \ell<2m$, the following holds \[\# Y_{m,R}(\ell)= \# Y_{m,L}(\ell)= \frac{m + m^2 (m - 1)}{2}. \] In particular, $\# Y_{m}(\ell)$ is given by \[\# Y_{m}(\ell)=2m + m^2 (m - 1).\] \end{proposition} \begin{proof} We focus on $Y_{m,L}$, using the same notation as in the proof of \cref{pro:abrl2}. The strategy is similar to that used in \cref{pro:Ym1}: subtracting, from the total number of edges given by \cref{pro:abrl2}, those that do not contribute a new element to the set $Y_{m,L}(\ell)$. If $\ell-b=m$, there are $m$ incoming edges labeled as $(0,i)$ for all $i\in\am$, directed to $x=12^b1^{m-b-1}$. The initial vertices $\Psi(0)+x-\Psi(i)$ are pairwise distinct.
So we have to subtract $m-1$ from the total number of edges in $G_{m,\ell}$. For example, observe the four yellow vertices leading to vertex $1211$ in \cref{fig:abr45first}. For distinct $b,b'\neq\ell-m$, there exists a unique Parikh vector $x_{\{b,b'\}}$ with two incoming edges labeled as $(0,b)$ and $(0,b')$. For two distinct pairs $\{b,b'\}$ and $\{c,c'\}$, the corresponding vertices satisfy $x_{\{b,b'\}}\neq x_{\{c,c'\}}$. Note that the number of these pairs is $\binom{m-1}{2}$. In \cref{fig:abr45first}, three vertices, namely $0221$, $1121$, and $1112$, each have two yellow incoming edges. So we also have to subtract $(m-1)(m-2)/2$. Thus, \[\# Y_{m,L}=m(m^2-m+1)-m\left[m-1+ \frac{(m-1)(m-2)}{2}\right]=m\left(\frac{(m^2-m)}{2}+1\right).\] To obtain the result for $Y_{m,R}$, the reasoning remains identical; however, one has to consider edges labeled as $(b,0)$. \end{proof} \begin{remark} For all $j\geqslant 1$ and $m\leqslant \ell<2m$, the abelian Rauzy graph $G_{m,\ell+j\cdot m}$ is isomorphic to $G_{m,\ell}$. Refer to the proof of \cref{pro:abrl2}. We may have factors of length~$\ell$ in one of the following two forms \begin{itemize} \item $uv$, where $|u|=t\leqslant m$ and $|v|=\ell-t<m$, i.e., $\ell-m<t\leqslant m$; or \item $u'\sm(c)v'$ for some letter $c$, where $|u'|=t\leqslant \ell-m$ and $|v'|=\ell-m-t$, i.e., $1\leqslant t\leqslant\ell-m$. \end{itemize} In both cases, $u,u'$ (respectively, $v,v'$) is a suffix of $\sm(a)$ for some letter $a$ (respectively, prefix of $\sm(b)$ for some letter $b$). By \cref{lem:boundaryseq}, there exists a factor $x$ (respectively, $y$) of length $j$ (respectively, $j+1$) such that $u\sm(x)v$ and $u'\sm(y)v'$ are factors of $\infw{t}_m$. Note that \[ \Psi\left(u\sm(x)v\right)=\Psi(uv)+j\cdot (1,\ldots,1) \] and \[ \Psi\left(u'\sm(y)v'\right)=\Psi\left(u'\sm(c)v'\right)+j\cdot (1,\ldots,1).
\] These two observations show that $G_{m,\ell+j\cdot m}$ and $G_{m,\ell}$ are the same graph up to a renaming of the vertices. \end{remark} The careful reader may observe that this remark provides an alternative proof of our main result, \cref{thm:main}. Once the structure of the abelian Rauzy graphs is well understood, the formula given by \cref{pro:5.5} also provides a characterization of the $k$-binomial complexity. The two approaches developed in this paper are, in our view, complementary. Each approach provides its own set of combinatorial perspectives. With this article, we have reconciled several approaches. First, we simplified Lejeune's arguments in \cite{LLR} and considered the same type of equivalence relation for larger alphabets. Next, we applied abelian Rauzy graphs in a different context from that in \cite{RSW}. \section{Proof of \texorpdfstring{\cref{lem:bigdiff}}{Lemma bigdiff}}\label{sec:appbigfiff} Recall from \cref{sec:discern} that $\overline{a}$ denotes $-a$ for $a\in\mathbb{Z}$. \cref{lem:bigdiff} is crucial for proving \cref{lem:k-1-factorisation-first-last}. \begin{proof} Let $e=0 \mi{1}\cdots \mi{k}$. The subword $e$ may appear entirely in $\smn{k}(u)$, entirely in $\smn{k-1}(\gamma\delta)$, or intersect both parts. So we have \[ \begin{aligned} \binom{\smn{k-1}(\gamma \sm(u) \delta)}{e} &= \binom{\smn{k}(u)}{e}+ \binom{\smn{k-1}(\gamma \delta)}{e} \\ &\quad + \sum_{\substack{e=xyz\\ 0<|y|<k+1}} \binom{\smn{k-1}(\gamma)}{x}\binom{\smn{k}(u)}{y} \binom{\smn{k-1}(\delta)}{z}. 
\end{aligned} \] Since $|u|=|u'|$, by \cref{prop:phik}, $\smn{k}(u)\sim_{k}\smn{k}(u')$ and \begin{eqnarray} &&\binom{\smn{k-1}(\gamma \sm(u) \delta)}{e}- \binom{\smn{k-1}(\gamma' \sm(u') \delta')}{e} \label{bigsumeq1} \\ &=& \binom{\smn{k}(u)}{e}-\binom{\smn{k}(u')}{e}+ \binom{\smn{k-1}(\gamma \delta)}{e}-\binom{\smn{k-1}(\gamma' \delta')}{e}\nonumber \\ &&+ \sum_{\substack{e=xyz\\ 0<|y|<k+1}}\binom{\smn{k}(u)}{y} \left[ \binom{\smn{k-1}(\gamma)}{x} \binom{\smn{k-1}(\delta)}{z} -\binom{\smn{k-1}(\gamma')}{x} \binom{\smn{k-1}(\delta')}{z}\right]. \label{bigsumeq3} \end{eqnarray} Observing that the factors $x$, $y$, and $z$ in the above sum are respectively of the form $x=0\mi{1}\cdots \mi{j-1}$; $y=\mi{j} \cdots \mi{j+\ell-1}$; $z=\mi{j+\ell}\cdots \mi{k}$ for $1\leqslant \ell\leqslant k$, let us rewrite term~\eqref{bigsumeq3} of the latter expression as \[ \sum_{\ell=1}^k \sum_{j=0}^{k-\ell+1}\binom{\smn{k}(u)}{\mi{j} \cdots \mi{j+\ell-1}}\left[ \binom{\smn{k-1}(\gamma)}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta)}{\mi{j+\ell}\cdots \mi{k}} -\binom{\smn{k-1}(\gamma')}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta')}{\mi{j+\ell}\cdots \mi{k}}\right]. \] By \cref{lem:smart}, the coefficient $\binom{\smn{k}(u)}{\mi{j} \cdots \mi{j+\ell-1}}$ equals $\binom{\smn{k}(u)}{0\mi{1} \cdots \mi{\ell-1}}$ for each $j$ since $\ell \leqslant k$; thus, the sum simplifies to \[ \sum_{\ell=1}^k \binom{\smn{k}(u)}{0\mi{1} \cdots \mi{\ell-1}} \sum_{j=0}^{k-\ell+1}\left[ \binom{\smn{k-1}(\gamma)}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta)}{\mi{j+\ell}\cdots \mi{k}} -\binom{\smn{k-1}(\gamma')}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta')}{\mi{j+\ell}\cdots \mi{k}}\right]. 
\] By \cref{lem:smart} again, we may replace $\binom{\smn{k-1}(\delta)}{\mi{j+\ell}\cdots \mi{k}}$ with $\binom{\smn{k-1}(\delta)}{\mi{j}\cdots \mi{k-\ell}}$ and $\binom{\smn{k-1}(\delta')}{\mi{j+\ell}\cdots \mi{k}}$ with $\binom{\smn{k-1}(\delta')}{\mi{j}\cdots \mi{k-\ell}}$, as long as $|\mi{j}\cdots \mi{k-\ell}| < k$, i.e., when $\ell \geqslant 2$ or $\ell = 1$ and $j\geqslant 1$. We decompose the sum accordingly (for convenience, we also add and subtract the same extra term) \begin{align*} &\sum_{\ell=2}^k \binom{\smn{k}(u)}{0\mi{1} \cdots \mi{\ell-1}} \sum_{j=0}^{k-\ell+1}\left[ \binom{\smn{k-1}(\gamma)}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta)}{\mi{j}\cdots \mi{k-\ell}} -\binom{\smn{k-1}(\gamma')}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta')}{\mi{j}\cdots \mi{k-\ell}}\right]\\ &+ \binom{\smn{k}(u)}{0} \biggl( \sum_{j=1}^{k}\left[ \binom{\smn{k-1}(\gamma)}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta)}{\mi{j}\cdots \mi{k-1}} -\binom{\smn{k-1}(\gamma')}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(\delta')}{\mi{j}\cdots \mi{k-1}}\right]\\ &+ \binom{\smn{k-1}(\delta)}{0\mi{1}\cdots \mi{k-1}} -\binom{\smn{k-1}(\delta')}{0\mi{1}\cdots \mi{k-1}}\\ &+ \binom{\smn{k-1}(\delta)}{\mi{1}\cdots \mi{k}} - \binom{\smn{k-1}(\delta')}{\mi{1}\cdots \mi{k}} - \left[ \binom{\smn{k-1}(\delta)}{0\mi{1}\cdots \mi{k-1}} -\binom{\smn{k-1}(\delta')}{0\mi{1}\cdots \mi{k-1}} \right] \biggr). 
\end{align*} Since \[ \sum_{j=0}^{k-\ell+1}\binom{\smn{k-1}(x)}{0\mi{1}\cdots \mi{j-1}} \binom{\smn{k-1}(y)}{\mi{j}\cdots \mi{k-\ell}} = \binom{\smn{k-1}(xy)}{0\mi{1}\cdots \mi{k-\ell}} \] for any words $x$, $y$, we further simplify to \begin{multline} \sum_{\ell=1}^k\binom{\smn{k}(u)}{0\mi{1} \cdots \mi{\ell-1}} \left[ \binom{\smn{k-1}(\gamma\delta)}{0\mi{1}\cdots \mi{k-\ell}} - \binom{\smn{k-1}(\gamma'\delta')}{0\mi{1}\cdots \mi{k-\ell}}\right]\\ + m^{k-1}|u|\left( \binom{\smn{k-1}(\delta)}{\mi{1}\cdots\mi{k}} - \binom{\smn{k-1}(\delta)}{0\cdots\mi{k-1}} - \binom{\smn{k-1}(\delta')}{\mi{1}\cdots\mi{k}} + \binom{\smn{k-1}(\delta')}{0\cdots\mi{k-1}} \right). \label{eq:almost_simplified} \end{multline} Now $\binom{\smn{k-1}(\delta)}{\mi{1}\cdots\mi{k}} = \binom{\smn{k-1}\tau_m(\delta)}{0\cdots\mi{k-1}}$, where we recall that $\tau_m$ is the morphism defined by $\tau_m(i) = i+1$. Thus, by \cref{cor:notequiv}, the second term in \eqref{eq:almost_simplified} simplifies to: \begin{equation}\label{eq:missing} \begin{aligned} m^{k-1}|u|\left(m^{\binom{k-1}{2}}\left(|\delta+1|_0 - |\delta|_0 - |\delta'+1|_0 + |\delta'|_0 \right)\right) \\ = m^{\binom{k}{2}}|u|\left(|\delta|_{\mi{1}} - |\delta|_0 - |\delta'|_{\mi{1}} + |\delta'|_0 \right). \end{aligned} \end{equation} Consider the sum appearing in \eqref{eq:almost_simplified}. Since $|\delta\gamma|=|\delta'\gamma'|$, by \cref{prop:phik}, $\smn{k-1}(\gamma\delta)\sim_{k-1}\smn{k-1}(\gamma'\delta')$, and the sum reduces to a single term (corresponding to $\ell=1$) \[ \binom{\smn{k}(u)}{0} \left[ \binom{\smn{k-1}(\gamma\delta)}{0\mi{1}\cdots \mi{k-1}} - \binom{\smn{k-1}(\gamma'\delta')}{0\mi{1}\cdots \mi{k-1}} \right] = m^{k-1} |u|\, \left(|\gamma\delta|_0-|\gamma'\delta'|_0\right) m^{\binom{k-1}{2}} \] (where we have used \cref{cor:notequiv}) and is equal to \[ |u|\, m^{\binom{k}{2}} \left(|\gamma\delta|_0-|\gamma'\delta'|_0\right). \] We can now return to the initial difference~\eqref{bigsumeq1} of interest. 
By applying \cref{cor:notequiv} again, we get that \eqref{bigsumeq1} is equal to \[ \begin{aligned} &\binom{\smn{k-1}(\gamma \delta)}{e}-\binom{\smn{k-1}(\gamma' \delta')}{e} + \\ &m^{\binom{k}{2}} \biggl[ |u|_0 - |u'|_0 + |u| \, \bigl( |\gamma\delta|_0 - |\gamma'\delta'|_0 + |\delta|_{\mi{1}} - |\delta|_0 - |\delta'|_{\mi{1}} + |\delta'|_0 \bigr) \biggr]. \end{aligned} \] To conclude the proof, we develop the difference between the first two terms. Let $\gamma\delta=x_1\cdots x_t$ and $\gamma'\delta'=x_1'\cdots x_t'$. We use the same argument as in the proof of \cref{prop:-1}. We need to count occurrences of the subword $e$. If an occurrence is split across multiple $m^{k-1}$-blocks and at most $k-1$ letters appear in any block, then these occurrences will cancel because $\smn{k-1}(x_i)\sim_{k-1}\smn{k-1}(x_i')$. We only have to consider occurrences where at least $k$ letters (out of $k+1$) appear in the same $m^{k-1}$-block. First, we look at $e$ occurring entirely within one $m^{k-1}$-block, given by the following expression \[\sum_{i=1}^t \left( \binom{\smn{k-1}(x_i)}{e} - \binom{\smn{k-1}(x_i')}{e} \right)\] and this sum vanishes because $\gamma\delta\sim_1\gamma'\delta'$. Alternatively, if $e$ is split with $k$ letters in one $m^{k-1}$-block and one (the first or the last) in another $m^{k-1}$-block, we obtain \begin{eqnarray*} && \sum_{i=1}^{t-1}\sum_{j=i+1}^t \left( \binom{\smn{k-1}(x_i)}{0} \binom{\smn{k-1}(x_j)}{\mi{1}\cdots \mi{k}}- \binom{\smn{k-1}(x_i')}{0} \binom{\smn{k-1}(x_j')}{\mi{1}\cdots \mi{k}}\right)\\ &+& \sum_{i=1}^{t-1}\sum_{j=i+1}^t \left( \binom{\smn{k-1}(x_i)}{0\,\mi{1}\cdots \mi{k-1}} \binom{\smn{k-1}(x_j)}{\mi{k}}- \binom{\smn{k-1}(x_i')}{0\,\mi{1}\cdots \mi{k-1}} \binom{\smn{k-1}(x_j')}{\mi{k}}\right). 
\end{eqnarray*} We get \begin{eqnarray*} && \sum_{i=1}^{t-1}\sum_{j=i+1}^t m^{k-2} \left( \binom{\smn{k-1}(x_j+1)}{0\mi{1}\cdots \mi{k-1}}- \binom{\smn{k-1}(x_j'+1)}{0\mi{1}\cdots \mi{k-1}}\right)\\ &+& \sum_{i=1}^{t-1}\sum_{j=i+1}^t m^{k-2} \left( \binom{\smn{k-1}(x_i)}{0\,\mi{1}\cdots \mi{k-1}}- \binom{\smn{k-1}(x_i')}{0\,\mi{1}\cdots \mi{k-1}} \right). \end{eqnarray*} By \cref{cor:notequiv}, it is equal to \begin{eqnarray*} && \sum_{i=1}^{t-1}\sum_{j=i+1}^t m^{k-2} m^{\binom{k-1}{2}}(|x_j|_{\mi{1}}-|x_j'|_{\mi{1}}) + \sum_{i=1}^{t-1}\sum_{j=i+1}^t m^{k-2} m^{\binom{k-1}{2}}(|x_i|_0-|x_i'|_0) \end{eqnarray*} which can be rewritten as \[ m^{k-2} m^{\binom{k-1}{2}} \sum_{j=2}^t (j-1) \left(|x_j|_{\mi{1}}-|x_j'|_{\mi{1}}\right) + m^{k-2} m^{\binom{k-1}{2}} \sum_{i=1}^{t-1} (t-i) \left(|x_i|_0-|x_i'|_0\right). \] If $x_j=\mi{1}$, the factor $j-1$ represents the number of letters to the left of $x_j$ and if $x_i=0$, the factor $t-i$ represents the number of letters to the right of $x_i$. Therefore, we can write \[ m^{k-2} m^{\binom{k-1}{2}} \sum_{b\in\am}\left( \binom{\gamma\delta}{b\mi{1}}-\binom{\gamma'\delta'}{b\mi{1}}+ \binom{\gamma\delta}{0b}-\binom{\gamma'\delta'}{0b}\right). \] \end{proof} \bibliographystyle{plainurl} \bibliography{./bibliography} \end{document}
2412.18480v1
http://arxiv.org/abs/2412.18480v1
Diameter bounds for distance-regular graphs via long-scale Ollivier Ricci curvature
\documentclass[a4paper,11pt]{article} \usepackage[top=3.0cm, bottom=3.0cm, inner=3.0cm, outer=3.0cm, includefoot]{geometry} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{authblk} \usepackage{verbatim} \usepackage{multirow} \usepackage{multicol} \usepackage{hyperref} \usepackage{amssymb} \usepackage{amsmath} \usepackage{mathtools,bbm} \usepackage{graphicx,tikz} \usepackage{amsthm} \usepackage{caption,subcaption} \usepackage{float} \usepackage{color} \usepackage{enumerate} \usepackage{cancel} \setlength{\parindent}{0mm} \setlength{\parskip}{2mm } \newcommand{\red}{\color{red}{}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\ddiv}{\operatorname{div}} \newcommand{\Hess}{\operatorname{Hess}} \newcommand{\Ric}{\operatorname{Ric}} \newcommand{\grad}{\operatorname{grad}} \newcommand{\trace}{\operatorname{tr}} \newcommand{\IR}{{\mathbb{R}}} \newcommand{\IC}{{\mathbb{C}}} \newcommand{\IN}{{\mathbb{N}}} \newcommand{\IZ}{{\mathbb{Z}}} \newcommand{\IP}{{\mathbb{P}}} \newcommand{\IQ}{{\mathbb{Q}}} \newcommand{\IE}{{\mathbb{E}}} \newcommand{\Q}{{\mathcal{Q}}} \newcommand{\K}{{\mathcal{K}}} \newcommand{\M}{{\mathcal{M}}} \newcommand{\N}{{\mathcal{N}}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \renewcommand*{\v}{\mathbf{v}} \newcommand{\e}{\mathbf{e}} \newcommand{\w}{\mathbf{w}} \newcommand{\x}{\mathbf{x}} \newcommand{\posR}{\mathbb{R}^+} \newcommand{\nnegR}{\mathbb{R}^+_0} \newcommand{\sol}{u \in C^1( V \times \nnegR)} \newcommand{\downto}{{\searrow}} \newcommand{\eChar}{\begin{enumerate}[(i)]} \newcommand{\eCharR}{\begin{enumerate}[(a)]} \newcommand{\eBr}{\begin{enumerate}[(1)]} \newcommand{\ii}{j(i)} \newcommand{\II}{\posR} \newcommand{\ddx}{\frac{d}{dx}} \newcommand{\sgn}{\operatorname{sgn}} \newcommand{\Deg}{\operatorname{Deg}} \newcommand{\vol}{\operatorname{vol}} \newcommand{\diam}{\operatorname{diam}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\rank}{\operatorname{rank}} 
\DeclareMathOperator*{\argmax}{arg\,max} \newcommand\at[2]{\left.#1\right|_{#2}} \newcommand{\rb}[1]{(\ref{#1})} \theoremstyle{plain} \newtheorem{lemma}{Lemma}[section] \newtheorem{theorem}[lemma]{Theorem} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{corollary}[lemma]{Corollary} \theoremstyle{definition} \newtheorem{acknowledgement}[lemma]{Acknowledgement} \newtheorem{algorithm}[lemma]{Algorithm} \newtheorem{axiom}[lemma]{Axiom} \newtheorem{case}[lemma]{Case} \newtheorem{claim}[lemma]{Claim} \newtheorem{conclusion}[lemma]{Conclusion} \newtheorem{condition}[lemma]{Condition} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{definition}[lemma]{Definition} \newtheorem{remark}[lemma]{Remark} \newtheorem{remarks}[lemma]{Remarks} \newtheorem{question}[lemma]{Question} \newtheorem{criterion}[lemma]{Criterion} \newtheorem{example}[lemma]{Example} \newtheorem{examples}[lemma]{Examples} \newtheorem{exercise}[lemma]{Exercise} \newtheorem{problem}[lemma]{Problem} \newtheorem{solution}[lemma]{Solution} \newtheorem{summary}[lemma]{Summary} \newtheorem{rem}[lemma]{Remark} \newtheorem{defn}[lemma]{Definition} \numberwithin{equation}{section} \title { Diameter bounds for distance-regular graphs via long-scale Ollivier Ricci curvature } \author[1]{Kaizhe Chen\thanks{Email: [email protected]}} \author[2]{Shiping Liu\thanks{Email: [email protected]}} \affil[1]{School of Gifted Young, University of Science and Technology of China, Hefei} \affil[2]{School of Mathematical Sciences, University of Science and Technology of China, Hefei} \date{ } \begin{document} \maketitle \thispagestyle{plain} \begin{abstract} In this paper, we derive new sharp diameter bounds for distance-regular graphs, which better answer a problem raised by Neumaier and Penji\'c in many cases. Our proof is built upon a relation between the diameter and long-scale Ollivier Ricci curvature of a graph, which can be considered as an improvement of the discrete Bonnet-Myers theorem. 
Our method further leads to significant improvement of existing diameter bounds for amply regular graphs and $(s,c,a,k)$-graphs. \end{abstract} \section{Introduction} Distance-regular graphs play an important role in algebraic combinatorics due to their close and deep relation to design theory, coding theory, finite and Euclidean geometry, and group theory \cite{BCN89, DKT}. Bounding the diameter of a distance-regular graph in terms of its intersection numbers is a very important problem which has attracted lots of attention \cite{BDKM,BHK,HLX24,I83,Mulder79,NP22JCTB,NP22,Smith74,T82,T83}. In \cite[Problem 1.1]{NP22}, Neumaier and Penji\'c raised a question asking for diameter bounds in terms of a small initial part of the intersection array. \begin{problem}[{\cite{NP22}}]\label{prob:NP} Let $G$ denote a distance-regular graph, and assume that we only know the first $q+2$ elements $b_i$ and $c_i$ of the intersection array \begin{align}\label{intersection array} \{ b_0,b_1,...,b_q,b_{q+1},...; c_1,c_2,...,c_{q+1},c_{q+2},...\}, \end{align} i.e., assume that we do not know the intersection numbers $b_{q+2},..., b_{d-1}$ and $c_{q+3},..., c_{d}$. Use the numbers given in \eqref{intersection array} to give an upper bound for the diameter of $G$. \end{problem} For a distance-regular graph $G$ of diameter $d$ and valency $k$, we denote its intersection array by $\{b_0,b_1,\ldots, b_{d-1}; c_1,c_2,\ldots, c_d\}$ (see Section \ref{Distance-regular graph} for definitions). We further denote $a_i:=k-b_i-c_i$, $0\le i\le d$, where we use the notation $c_0=b_d=0$. In \cite{NP22}, Neumaier and Penji\'c give the following upper bound for the diameter of $G$. \begin{theorem}[\cite{NP22}]\label{NP} Let $G$ denote a distance-regular graph of diameter $d$, valency $k\ge 3$ and let $q$ be an integer with $2 \le q \le d-1$. If $c_{q+1}> c_q$ and $a_q \le c_{q+1}-c_q$, then \begin{align}\label{eqNP} d\le \left(\left\lfloor\frac{k-c_{q+1}-1}{c_q}\right\rfloor +2\right)q+1. 
\end{align} \end{theorem} Note that Theorem \ref{NP} does not use all the numbers in \eqref{intersection array}. Moreover, the inequality \eqref{eqNP} can hold with equality only when $d-1$ is a multiple of $q$. In this paper, we derive the following diameter bound for distance-regular graphs using all the numbers in \eqref{intersection array}, which can hold with equality even if $d-1$ is not a multiple of $q$. \begin{theorem}\label{diameter} Let $G$ be a distance-regular graph of diameter $d$ and valency $k$. Let $q$ be an integer with $1\le q\le d-1$ such that $a_{q-1}=0$, $c_{q+1}>c_q$ and $c_{q+1}\ge a_{q}$. Then, for any $0\le p\le d$, we have \begin{align}\label{eq:diameter} d\le \max_{0\le r\le q-1} \left\{\left\lfloor\frac{b_p-c_p+b_r-c_r}{2c_q+M}\right\rfloor q +p+r\right\}, \end{align} where $$M=\left\lceil \frac{a_q(c_{q+1}-a_{q})}{c_{q+1}-c_q} \right\rceil.$$ \end{theorem} \begin{remark} Our Theorem \ref{diameter} better answers Problem \ref{prob:NP} in many cases. We provide three examples below. \begin{itemize} \item[(i)] If we know that $\{22, 21, 20, 3,...; 1, 2, 3, 20,...\}$ are the first $8$ numbers of the intersection array of a distance-regular graph $G$, then $b_4\le k-c_4=2$. Applying Theorem \ref{diameter} with $q=3$ and $p=4$ shows that the diameter $d$ of $G$ is at most $6$, which is sharp for the coset graph of the shortened binary Golay code with intersection array $\{22, 21, 20, 3,2,1; 1, 2, 3, 20,21,22\}$. Note that Theorem \ref{NP} only gives $d\le 7$. \item[(ii)] If we know that $\{5, 4, 1,...; 1,1,4,...\}$ are the first $6$ numbers of the intersection array of a distance-regular graph $G$, then $b_3\le k-c_3=1$. Applying Theorem \ref{diameter} with $q=2$ and $p=3$ shows that the diameter $d$ of $G$ is at most $4$, which is sharp for the Wells graph with intersection array $\{5, 4, 1,1; 1,1,4,5\}$. Note that Theorem \ref{NP} only gives $d\le 5$. 
\item[(iii)] Let $\{21, 20, 16, 6,2,...; 1, 2, 6, 16,...\}$ be the first $9$ numbers of the intersection array of a distance-regular graph $G$. Applying Theorem \ref{diameter} with $q=2$ and $p=4$ shows that the diameter $d$ of $G$ is at most $6$, which is sharp for the coset graph of the once shortened and once truncated binary Golay code with intersection array $\{21, 20, 16, 6, 2, 1; 1, 2, 6, 16, 20, 21\}$. Note that Theorem \ref{NP} only gives $d\le 7$. \end{itemize} \end{remark} If we only know the values of $a_{q-1},a_q,c_q$ and $c_{q+1}$, we still get a diameter bound as follows. \begin{theorem}\label{diameter2} Let $G$ be a distance-regular graph of diameter $d$ and valency $k$. Let $q$ be an integer with $1\le q\le d-1$ such that $a_{q-1}=0$, $c_{q+1}>c_q$ and $c_{q+1}\ge a_{q}$. Then, for any $0\le p\le d$, we have \begin{align}\label{eq:diameter2} d\le 2p-1+ \max \left\{0, \left(\left\lfloor\frac{2(b_p-c_p)}{2c_q+M}\right\rfloor+1\right)q \right\}, \end{align} where $$M=\left\lceil \frac{a_q(c_{q+1}-a_{q})}{c_{q+1}-c_q} \right\rceil.$$ \end{theorem} We prove Theorems \ref{diameter} and \ref{diameter2} via establishing a relation between the diameter and long-scale Ollivier Ricci curvature \cite{CK19, O09} of a graph (see Theorem \ref{Wasserstein}), which can be considered as an improvement of the discrete Bonnet-Myers theorem via Ollivier/Lin-Lu-Yau curvature \cite{LLY11,O09}. Then our proofs are built upon estimating the long-scale Ollivier Ricci curvature of distance-regular graphs (see Theorems \ref{WassersteinJump} and \ref{WassersteinDRG}). Our method further leads to the following diameter bounds for amply regular graphs and $(s,c,a,k)$-graphs (see Definitions \ref{ARGdefinition} and \ref{scakdefinition}), which significantly improve the corresponding previous results. The $(s,c,a,k)$-graphs were introduced by Terwilliger \cite{T83} as a generalization of distance-regular graphs. In case $s=2$, an $(s,c,a,k)$-graph is amply regular. 
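Since the right-hand side of \eqref{eq:diameter} depends only on finitely many intersection numbers, the bound of Theorem \ref{diameter} can be evaluated mechanically. The following Python sketch is our own illustration, not part of the paper: the function name is ours, and for the examples where $b_p$ is unknown we substitute its maximal admissible value $k-c_p$, which is harmless because the bound is nondecreasing in $b_p$.

```python
def diameter_bound(b, c, q, p):
    """Evaluate the right-hand side of the bound of Theorem `diameter'.

    b, c : dicts mapping an index i to the intersection numbers b_i, c_i
    (with c_0 = 0); only the entries the formula touches are needed.
    The valency is k = b_0 and a_i = k - b_i - c_i.
    """
    k = b[0]
    a = lambda i: k - b[i] - c[i]
    # hypotheses of the theorem: a_{q-1} = 0, c_{q+1} > c_q, c_{q+1} >= a_q
    assert a(q - 1) == 0 and c[q + 1] > c[q] and c[q + 1] >= a(q)
    # exact integer ceiling of a_q (c_{q+1} - a_q) / (c_{q+1} - c_q)
    M = -(-a(q) * (c[q + 1] - a(q)) // (c[q + 1] - c[q]))
    # Python's // floors (also for negative numerators), as in the theorem
    return max((b[p] - c[p] + b[r] - c[r]) // (2 * c[q] + M) * q + p + r
               for r in range(q))

# Example (i): first 8 numbers {22, 21, 20, 3, ...; 1, 2, 3, 20, ...};
# the unknown b_4 is replaced by its maximal value k - c_4 = 2.
print(diameter_bound({0: 22, 1: 21, 2: 20, 3: 3, 4: 2},
                     {0: 0, 1: 1, 2: 2, 3: 3, 4: 20}, q=3, p=4))  # prints 6
```

With the data of examples (ii) and (iii), the same function returns $4$ and $6$, respectively, matching the remark above.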
The following Corollaries \ref{ARG} and \ref{scak} can be considered as extensions of Theorem \ref{diameter2}. \begin{corollary}\label{ARG} Let $G$ be a connected amply regular graph of diameter $d\ge 4$ with parameters $(v,k,\lambda,\mu)$, where $1\ne \mu\ge\lambda$. Then \begin{align}\label{eq:ARGdiameter} d\le \left\lfloor\frac{2(k-2\mu)}{2+\left\lceil \frac{\lambda(\mu-\lambda)}{\mu-1} \right\rceil}\right\rfloor+4. \end{align} \end{corollary} \begin{remark} Let $G$ be a connected amply regular graph of diameter $d\ge 4$ with parameters $(v,k,\lambda,\mu)$, where $1\ne \mu>\lambda$. Brouwer, Cohen and Neumaier \cite[Corollary 1.9.2]{BCN89} prove that \begin{equation}\label{eq:BCN} \mathrm{diam}(G)\leq k-2\mu+4. \end{equation} Huang, Liu and Xia \cite{HLX24} prove that \begin{equation}\label{eq:HLX24} \mathrm{diam}(G)\leq \left\lfloor\frac{2k}{3}\right\rfloor. \end{equation} Note that Corollary \ref{ARG} significantly improves both \eqref{eq:BCN} and \eqref{eq:HLX24}. \end{remark} \begin{corollary}\label{scak} Let $G$ be an $(s,c,a,k)$-graph with $a\le c$ and diameter $d$. Then \begin{align}\label{eq:scakdiameter} d\le \max \left\{2s, (s-1)\left( \left\lfloor\frac{2(\delta-2c)}{2+M}\right\rfloor +3 \right)+2 \right\}, \end{align} where $\delta$ is the minimum valency of $G$ and $M=\left\lceil \dfrac{a(c-a)}{c-1} \right\rceil$. \end{corollary} \begin{remark} Let $G$ be an $(s,c,a,k)$-graph with $a\le c$, $c>2$ and diameter $d$. Terwilliger \cite[Theorem 2]{T83} proves that \begin{equation}\label{eq:T83} d\le \max \left\{3s-1, (s-1)\left( \frac{2ck-2c}{3c-2} -2c+5 \right)+2 \right\}. \end{equation} Note that we use the minimum valency $\delta$ instead of the maximum valency $k$ in \eqref{eq:scakdiameter}. 
In addition, the coefficient of $\delta$ in \eqref{eq:scakdiameter} is at most $\frac{2}{3}$ (when $M\ne 0$), while the coefficient of $k$ in \eqref{eq:T83} is $\frac{2c}{3c-2}>\frac{2}{3}$, which implies that \eqref{eq:scakdiameter} improves \eqref{eq:T83} for large $k$. \end{remark} The rest of the paper is organized as follows. In Section \ref{Preliminaries}, we recall definitions and properties of distance-regular graphs and perfect matchings. In Section \ref{Wasserstein distance}, we recall the concept of Wasserstein distance and establish a relation between the diameter and the long-scale Ollivier Ricci curvature of a graph. In Section 4, we prove Theorems \ref{diameter} and \ref{diameter2}. In Section 5, we prove Corollaries \ref{ARG} and \ref{scak}. \section{Preliminaries}\label{Preliminaries} \subsection{Distance-regular graph}\label{Distance-regular graph} Let $G=(V,E)$ be a graph with vertex set $V$ and edge set $E$. For any $x\in V$, let $d_x$ be the valency of $x$. For any two vertices $x$ and $y$ in $V$, we denote by $d(x,y)$ the distance between them. We write $x\sim y$ if $x$ and $y$ are adjacent. Let $G=(V,E)$ be a graph with diameter $d$. For a vertex $x\in V$ and any non-negative integer $h\le d$, let $S_h(x)$ denote the subset of vertices in $V$ that are at distance $h$ from $x$. We use the convention that $S_{-1}(x) = S_{d+1}(x) := \emptyset$. For any two vertices $x$ and $y$ in $V$ at distance $h$, let $$A_h(x,y):=S_{h}(x)\cap S_{1}(y),\ B_h(x,y):=S_{h+1}(x)\cap S_{1}(y),\ C_h(x,y):=S_{h-1}(x)\cap S_{1}(y).$$ We say $G=(V,E)$ is \textit{regular with valency $k$} if each vertex in $G$ has exactly $k$ neighbors. A graph $G$ is called \textit{distance-regular} if there are integers $b_i$, $c_i$, $0 \le i \le d$, which satisfy $b_i = |B_i(x, y)|$ and $c_i = |C_i(x, y)|$ for any two vertices $x$ and $y$ in $V$ at distance $i$. 
Clearly, such a graph is regular of valency $k := b_0$; moreover, $b_d = c_0 = 0$, $c_1 = 1$ and $$a_i:= |A_i(x, y)| = k-b_i-c_i,\ 0 \le i \le d.$$ The array $\{b_0,b_1,...,b_{d-1}; c_1,c_2,...,c_d\}$ is called the \textit{intersection array} of $G$. The following properties of intersection arrays are well-known. \begin{lemma}[{\cite[Proposition 4.1.6]{BCN89}}] Let $G$ be a distance-regular graph of diameter $d\ge2$, valency $k$ and intersection numbers $c_i, a_i, b_i, 0 \le i \le d$. The following hold. \begin{itemize} \item [\rm{(i)}] $k=b_0>b_1\ge b_2\ge\dots\ge b_d = 0$, \item [\rm{(ii)}] $c_0< 1=c_1\le c_2\le\dots \le c_d\le k$. \end{itemize} \end{lemma} For more information about distance-regular graphs, we refer the reader to \cite{BCN89}. \subsection{Perfect matching} Let us recall the definition of a \emph{perfect matching}. \begin{definition} Let $G$ be a graph. A set $\mathcal M$ of pairwise non-adjacent edges is called a \emph{matching}. Each vertex incident to an edge of $\mathcal M$ is said to be \emph{covered} by $\mathcal M$. A matching $\mathcal M$ is called a \emph{perfect matching} if it covers every vertex of the graph. \end{definition} The following theorem of K\"onig \cite{K16} is a key tool for estimating the (long-scale) Ollivier Ricci curvature. \begin{theorem}[König's theorem]\label{konig}A bipartite graph $G$ can be decomposed into $d$ edge-disjoint perfect matchings if and only if $G$ is $d$-regular.\end{theorem} Notice that in Theorem \ref{konig}, the bipartite graph is allowed to have multiple edges. \section{Wasserstein distance and diameter}\label{Wasserstein distance} In this section, we prove Theorem \ref{Wasserstein}, which relates various Wasserstein distances to diameter bounds of graphs. This provides the basic philosophy of our method. We first recall the definition of the Wasserstein distance. \begin{definition}[Wasserstein distance] Let $G=(V,E)$ be a graph, and let $\mu_1$ and $\mu_2$ be two probability measures on $V$. 
The Wasserstein distance $W_1(\mu_1, \mu_2)$ between $\mu_1$ and $\mu_2$ is defined as \[W_1(\mu_1,\mu_2)=\inf_{\pi}\sum_{y\in V}\sum_{x\in V}d(x,y)\pi(x,y),\] where the infimum is taken over all maps $\pi: V\times V\to [0,1]$ satisfying \[\mu_1(x)=\sum_{y\in V}\pi(x,y),\,\,\mu_2(y)=\sum_{x\in V}\pi(x,y).\] Such a map is called a transport plan. \end{definition} For any $\varepsilon\in [0,1]$, let $\mu_x^\varepsilon$ be the probability measure defined as follows: \[\mu_x^\varepsilon(y)=\left\{ \begin{array}{ll} \varepsilon, & \hbox{if $y=x$;} \\ \frac{1-\varepsilon}{d_x}, & \hbox{if $y\sim x$;} \\ 0, & \hbox{otherwise.} \end{array}\right. \] We prove the following diameter estimate using Wasserstein distance, which is an improvement of the discrete Bonnet-Myers theorem via Ollivier/Lin-Lu-Yau curvature \cite{LLY11,O09}. \begin{theorem}\label{Wasserstein} Let $G$ be a connected graph and $0\le\varepsilon <1$ be a constant. Let $q> 0$ and $p\ge 0$ be two integers. Let $C_1>0$ and $C_2$ be two constants such that \begin{itemize} \item[\rm (1)] $W_1(\mu_x^\varepsilon, \mu_y^\varepsilon) \le q-C_1$ for any two vertices $x,y$ with $d(x,y)=q$, \item[\rm (2)] $W_1(\mu_x^1, \mu_y^\varepsilon)\le p+C_2$ for any two vertices $x,y$ with $d(x,y)=p$. \end{itemize} Then $G$ is finite with diameter $d$ satisfying \begin{align}\label{eq:Wasserstein} d\le 2p-1+ \max\left\{0,\left(\left\lfloor\frac{2C_2}{C_1} \right\rfloor +1\right)q\right\}. \end{align} \end{theorem} \begin{proof} If \eqref{eq:Wasserstein} does not hold, there exist two vertices $x$ and $y$ with $d(x,y)=D$ such that $D=2p+lq$, where $l$ is an integer satisfying \begin{align}\label{l} l\ge 0\ {\rm and}\ l\ge \left\lfloor\frac{2C_2}{C_1} \right\rfloor +1. \end{align} Let $L$ be a path of length $D$ connecting $x$ and $y$. On the path $L$, there is a sequence of vertices $x_0,x_1,...,x_{l}$ such that $d(x,x_0)=d(x_{l} ,y)=p$ and $d(x_{i-1},x_i)=q$ for $1\le i\le l$. Note that $W_1(\mu_x^1,\mu_y^1)=D$. 
It follows by the triangle inequality that \begin{align}\notag D=W_1\left(\mu_x^1,\mu_y^1\right) &\le W_1\left(\mu_x^1,\mu_{x_0}^{\varepsilon}\right) + \sum_{i=1}^l W_1\left(\mu_{x_{i-1}}^\varepsilon,\mu_{x_{i}}^\varepsilon\right) + W_1\left(\mu_{x_l}^\varepsilon, \mu_{y}^1\right) \\ \notag &\le 2(p+C_2)+l(q-C_1). \end{align} That is, $lC_1\le 2C_2$, which contradicts \eqref{l}. \end{proof} \begin{remark} The Wasserstein distance and the Ollivier Ricci curvature are directly related. Let $G$ be a connected graph. For $p\in [0, 1]$, the $p$-Ollivier Ricci curvature of two vertices $x, y$ in $G$ is defined as $$\kappa_p(x,y)=1-\frac{W_1(\mu_x^p,\mu_y^p)}{d(x,y)}.$$ In particular, we call the curvature $\kappa_p(x,y)$ \emph{long-scale} when $d(x, y)\ge 2$. The concept of Ollivier Ricci curvature was introduced by Ollivier in \cite{O09}, and the long-scale Ollivier Ricci curvature was further studied in \cite{CK19}. \end{remark} \section{Proofs of Theorems \ref{diameter} and \ref{diameter2}} In this section, we prove Theorems \ref{diameter} and \ref{diameter2} via (the philosophy of) Theorem \ref{Wasserstein}. For that purpose, we first show two Wasserstein distance estimates, stated as Theorems \ref{WassersteinJump} and \ref{WassersteinDRG} below. \begin{theorem}\label{WassersteinJump} Let $G$ be a connected graph and let $0\le\varepsilon <1$ be a constant. Let $x$ and $y$ be two vertices in $G$ at distance $p$. Then $$W_1\left(\mu_x^{1}, \mu_y^{\varepsilon}\right)\le p+\frac{(1-\varepsilon)(|B_p(x,y)|-|C_p(x,y)|)}{d_y}.$$ \end{theorem} \begin{proof} We consider the following particular transport plan $\pi_0$ from $\mu_x^{1}$ to $\mu_y^{\varepsilon}$: \begin{center} $\pi_0(v,u)=\begin{cases} \varepsilon, &{\rm if}\ v=x, u=y;\\ \frac{1-\varepsilon}{d_y}, &{\rm if}\ v=x, u\sim y;\\ 0, &{\rm otherwise}. 
\end{cases}$ \end{center} In $S_1(y)$, there are $|A_p(x,y)|$ vertices at distance $p$ from $x$, $|B_p(x,y)|$ vertices at distance $p+1$ from $x$ and $|C_p(x,y)|$ vertices at distance $p-1$ from $x$. Thus, \begin{align}\notag W_1\left(\mu_x^{1}, \mu_y^{\varepsilon}\right) &\le \sum_{v\in V}\sum_{u\in V}d(v,u)\pi_0(v,u)\\ \notag &\le \varepsilon p + \frac{1-\varepsilon}{d_y}\left(|A_p(x,y)|p+|B_p(x,y)|(p+1)+|C_p(x,y)|(p-1)\right) \\ \notag &=p+\frac{(1-\varepsilon)(|B_p(x,y)|-|C_p(x,y)|)}{d_y}, \end{align} completing the proof. \end{proof} \begin{theorem}\label{WassersteinDRG} Let $G$ be a distance-regular graph of diameter $d$ and valency $k$. Let $q$ be an integer with $1\le q\le d-1$ such that $a_{q-1}=0$, $c_{q+1}>c_q$ and $c_{q+1}\ge a_{q}$. Let $x$ and $y$ be two vertices in $G$ with $d(x,y)=q$. Then $$W_1\left(\mu_x^{\frac{1}{k+1}}, \mu_y^{\frac{1}{k+1}}\right)\le q-\frac{2c_q + M}{k+1},$$ where $$M=\left\lceil \frac{a_q(c_{q+1}-a_{q})}{c_{q+1}-c_q} \right\rceil.$$ \end{theorem} By the definition of a distance-regular graph, we have $$|A_q(y,x)|=|A_q(x,y)|=a_q,\ |B_q(y,x)|=|B_q(x,y)|=b_q\ {\rm and}\ |C_q(y,x)|= |C_q(x,y)|=c_q.$$ We first prove the following two lemmas. \begin{lemma} If $q\ge 2$, then there exists a bijection $\phi$ from $C_q(y,x)$ to $C_q(x,y)$ such that $d(v,\phi(v))=q-2$ for every $v\in C_q(y,x)$. \end{lemma} \begin{proof} For any $v\in C_q(y,x)$, we have $d(v,y)=q-1$. We claim that $C_{q-1}(v,y)\subset C_q(x,y)$. Indeed, for any $u\in C_{q-1}(v,y)$, we have $d(v,u)=q-2$, and hence $d(x,u)\le q-1$. It follows that $u\in C_q(x,y)$. Therefore, there are exactly $c_{q-1}$ vertices in $C_q(x,y)$ at distance $q-2$ from $v$. By symmetry, for any $u\in C_q(x,y)$, there are exactly $c_{q-1}$ vertices in $C_q(y,x)$ at distance $q-2$ from $u$. Construct a bipartite graph $H_1$ with bipartition $\{ C_q(y,x), C_q(x,y)\}$. For $v\in C_q(y,x)$ and $u\in C_q(x,y)$, $v$ and $u$ are adjacent if $d(v,u)=q-2$. Then, $H_1$ is $c_{q-1}$-regular. 
Theorem \ref{konig} yields the desired bijection. \end{proof} \begin{lemma} There is a bijection $\varphi$ from $A_q(y,x)$ to $A_q(x,y)$ such that $d(v,\varphi(v))=q-1$ for every $v\in A_q(y,x)$. \end{lemma} \begin{proof} For any $v\in A_q(y,x)$, we have $d(v,y)=q$. We claim that $C_{q}(v,y)\subset A_q(x,y)$. Indeed, for any $u\in C_{q}(v,y)$, we have $d(v,u)=q-1$, and hence $d(x,u)\le q$. It follows that $u\notin B_q(x,y)$. If $u\in C_q(x,y)$, then $v\in A_{q-1}(u,x)$, which contradicts $|A_{q-1}(u,x)|=a_{q-1}=0$. Thus we have $u\in A_q(x,y)$ and the claim is proved. Therefore, there are exactly $c_{q}$ vertices in $A_q(x,y)$ at distance $q-1$ from $v$. By symmetry, for any $u\in A_q(x,y)$, there are exactly $c_{q}$ vertices in $A_q(y,x)$ at distance $q-1$ from $u$. Similarly, we construct a bipartite graph $H_2$ with bipartition $\{ A_q(y,x), A_q(x,y)\}$. For $v\in A_q(y,x)$ and $u\in A_q(x,y)$, $v$ and $u$ are adjacent if $d(v,u)=q-1$. Then, $H_2$ is $c_{q}$-regular. Theorem \ref{konig} yields the desired bijection. \end{proof} \begin{proof}[Proof of Theorem \ref{WassersteinDRG}] If $q\ge 2$, we construct a bipartite multigraph $H_3$ with bipartition $$\{ A_q(y,x)\cup B_q(y,x), A_q(x,y)\cup B_q(x,y)\}.$$ The edge set of $H_3$ is given by $E_H=E_1\cup E_2$, where \begin{align}\notag &E_1=\{vu|v\in A_q(y,x)\cup B_q(y,x), u\in A_q(x,y)\cup B_q(x,y), d(v,u)=q\},\\ \notag &E_2=\{e_v^j|e_v^j=v\varphi(v), v\in A_q(y,x), 1\le j\le c_{q+1}-a_q\}. \end{align} Here $E_2$ contains $c_{q+1}-a_q$ parallel edges between $v$ and $\varphi(v)$ for each $v\in A_q(y,x)$; in particular, $E_2=\emptyset$ when $c_{q+1}= a_q$. We claim that $H_3$ is $(c_{q+1}-c_q)$-regular. For any $v\in B_q(y,x)$, we have $d(v,y)=q+1$. There are exactly $c_{q+1}$ vertices in $S_1(y)$ at distance $q$ from $v$. For any $u\in C_q(x,y)$, since $d(x,u)=q-1$, we have $d(v,u)\le q$. Since $d(v,y)=q+1$, we have $d(v,u)\ge q$. It follows that $d(v,u)= q$. 
Thus, there are exactly $c_{q+1}-c_q$ vertices in $A_q(x,y)\cup B_q(x,y)$ at distance $q$ from $v$. That is, the valency of $v$ in $H_3$ is $c_{q+1}-c_q$. For any $v\in A_q(y,x)$, we have $d(v,y)=q$. There are exactly $a_q$ vertices in $S_1(y)$ at distance $q$ from $v$. For any $u\in C_q(x,y)$, since $d(x,u)=q-1$, we have $d(v,u)\le q$. Since $d(v,y)=q$, we have $d(v,u)\ge q-1$. If $d(v,u)= q-1$, then $v\in A_{q-1}(u,x)$, which contradicts $|A_{q-1}(u,x)|=a_{q-1}=0$. Thus, $d(v,u)= q$. It follows that there are exactly $a_q-c_q$ vertices in $A_q(x,y)\cup B_q(x,y)$ at distance $q$ from $v$. Together with the $c_{q+1}-a_q$ parallel edges in $E_2$, the valency of $v$ in $H_3$ is $c_{q+1}-c_q$. By symmetry, the valency of each vertex in $A_q(x,y)\cup B_q(x,y)$ is also $c_{q+1}-c_q$. Thus, $H_3$ is $(c_{q+1}-c_q)$-regular, as claimed. By Theorem \ref{konig}, $E_H$ can be decomposed into $(c_{q+1}-c_q)$ edge-disjoint perfect matchings. Since $|E_2|=a_q(c_{q+1}-a_{q})$, there is a perfect matching $\mathcal M$ such that \begin{equation*}\label{ME} |\mathcal M\cap E_2|\ge M:= \left\lceil \frac{a_q(c_{q+1}-a_{q})}{c_{q+1}-c_q} \right\rceil. \end{equation*} We consider the following particular transport plan $\pi_0$ from $\mu_x^{\frac{1}{k+1}}$ to $\mu_y^{\frac{1}{k+1}}$: \begin{center} $\pi_0(v,u)=\begin{cases} \frac{1}{k+1}, &{\rm if}\ v=x, u=y;\\ \frac{1}{k+1}, &{\rm if}\ v\in C_q(y,x), u=\phi(v);\\ \frac{1}{k+1}, &{\rm if}\ v\in A_q(y,x)\cup B_q(y,x), u\in A_q(x,y)\cup B_q(x,y), vu\in {\mathcal M};\\ 0, &{\rm otherwise}. \end{cases}$ \end{center} It is straightforward to check that $\pi_0$ is indeed a transport plan. By the definition of $\phi$, we have $d(v,\phi(v))=q-2$ for any $v\in C_q(y,x)$. For any $vu\in {\mathcal M}$, it follows by the definition of $E_1$ and $E_2$ that $d(v,u)$ equals $q$ if $vu\in E_1$ and $q-1$ if $vu\in E_2$.
Therefore, we have \begin{align}\notag W\left(\mu_x^{\frac{1}{k+1}},\mu_y^{\frac{1}{k+1}}\right) &\le \sum_{v\in V}\sum_{u\in V}d(v,u)\pi_0(v,u)\\ \notag &= \frac{1}{k+1}\left( q+c_q(q-2)+ |{\mathcal M}\cap E_2|(q-1) +(|{\mathcal M}|-|{\mathcal M}\cap E_2|)q \right)\\ \notag &=\frac{1}{k+1}\left( q+c_q(q-2) -|{\mathcal M}\cap E_2|+ |{\mathcal M}|q \right)\\ \notag &\le \frac{1}{k+1}\left( q+c_q(q-2) - M +(a_q+b_q)q \right)\\ \notag &=q-\frac{2c_q + M}{k+1}. \end{align} If $q=1$, then $A_q(y,x)=A_q(x,y)$. This case has been discussed in \cite[Proof of Theorem 3.1]{CHLZ24}. For readers' convenience, we present the argument here. Let us denote the $a_1$ vertices in $A_1(y,x)$ by $z_1, \cdots, z_{a_1}$. We construct a bipartite multigraph $H_4$ with bipartition $$\{A_1(y,x)\cup B_1(y,x), A'_1(x,y)\cup B_1(x,y)\}.$$ Here $A'_1(x,y):=\{z'_1, \cdots, z'_{a_1}\}$ is a newly added set of ${a_1}$ vertices, regarded as a copy of $A_1(y,x)$. The edge set of $H_4$ is given by $E_H:=\cup_{i=1}^5E_i$, where \begin{align}\notag &E_1=\{vu|v\in B_1(y,x), u\in B_1(x,y), v\sim u\},\\ \notag &E_2=\{vz'_i|v\in B_1(y,x), z'_i\in A'_1(x,y), v\sim z_i\},\\ \notag &E_3=\{z_iu| z_i\in A_1(y,x),u\in B_1(x,y), z_i\sim u\},\\ \notag &E_4=\{z_iz'_j| z_i\sim z_j,1\le i\le a_1,1\le j\le a_1 \},\\ \notag &E_5=\{e_i^j|e_i^j=z_iz'_i, 1\le i\le a_1, 1\le j\le c_2-a_1\}. \end{align} Similarly, we can prove that $H_4$ is $(c_2-c_1)$-regular. By Theorem \ref{konig}, $E_H$ can be decomposed into $c_2-c_1$ edge-disjoint perfect matchings. Since $|E_5|=a_1(c_{2}-a_{1})$, there is a perfect matching $\mathcal M$ such that \begin{equation*} |\mathcal M\cap E_5|\ge M:= \left\lceil \frac{a_1(c_{2}-a_{1})}{c_{2}-c_1} \right\rceil.
\end{equation*} We consider the following particular transport plan $\pi_0$ from $\mu_x^{\frac{1}{k+1}}$ to $\mu_y^{\frac{1}{k+1}}$: \begin{center} $\pi_0(v,u)=\begin{cases} \frac{1}{k+1}, &{\rm if}\ v\in B_1(y,x)\cup A_1(y,x), u\in B_1(x,y)\ {\rm and}\ vu\in {\mathcal M};\\ \frac{1}{k+1}, &{\rm if}\ v\in B_1(y,x)\cup A_1(y,x),u\in A_1(x,y)\ {\rm and}\ vu'\in {\mathcal M};\\ 0, &{\rm otherwise}. \end{cases}$ \end{center} It is straightforward to check that $\pi_0$ is indeed a transport plan. There are $|{\mathcal M}|$ pairs of $(v,u)$ such that $\pi_0(v,u)\ne 0$. Among them, there are $|{\mathcal M}\cap E_5|$ pairs with $d(v,u)=0$ and $|{\mathcal M}|-|{\mathcal M}\cap E_5|$ pairs with $d(v,u)=1$. Therefore, we have \begin{align}\notag W\left(\mu_x^{\frac{1}{k+1}},\mu_y^{\frac{1}{k+1}}\right)&\le \sum_{v\in V}\sum_{u\in V}d(v,u)\pi_0(v,u)\\ \notag &= \frac{1}{k+1}(|{\mathcal M}|-|{\mathcal M}\cap E_5|)\\ \notag &\le \frac{1}{k+1}\left(a_1+b_1-M\right)\\ \notag &= 1-\frac{2+M}{k+1}. \end{align} This completes the proof. \end{proof} Now, we are prepared to prove Theorem \ref{diameter2} and Theorem \ref{diameter}. \begin{proof}[Proof of Theorem \ref{diameter2}] For any two vertices $x,y$ with $d(x,y)=p$, Theorem \ref{WassersteinJump} shows that \begin{align}\label{JumpDRG} W\left(\mu_x^{1}, \mu_y^{\frac{1}{k+1}}\right)\le p+\frac{b_p-c_p}{k+1}. \end{align} The result then follows by Theorem \ref{Wasserstein} and Theorem \ref{WassersteinDRG}. \end{proof} \begin{proof}[Proof of Theorem \ref{diameter}] There exist two integers $l$ and $r$ with $l\ge 0$ and $0\le r\le q-1$ such that $d-p=lq+r$. Let $x$ and $y$ be two vertices with $d(x,y)=d$. Let $L$ be a path of length $d$ connecting $x$ and $y$. On the path $L$, there is a sequence of vertices $x_0,x_1,\cdots,x_{l}$ such that $d(x,x_0)=p$, $d(x_{i-1},x_i)=q$ for $1\le i\le l$, and $d(x_{l},y)=r$.
It follows by the triangle inequality that $$W_1\left(\mu_x^1,\mu_y^1\right) \le W_1\left(\mu_x^1,\mu_{x_0}^{\frac{1}{k+1}}\right) + \sum_{i=1}^l W_1\left(\mu_{x_{i-1}}^\frac{1}{k+1},\mu_{x_{i}}^\frac{1}{k+1}\right) + W_1\left(\mu_{x_l}^\frac{1}{k+1}, \mu_{y}^1\right).$$ Note that $W_1\left(\mu_x^1,\mu_y^1\right)=d$. The inequality \eqref{JumpDRG} and Theorem \ref{WassersteinDRG} imply that $$d\le \left( p+\frac{b_p-c_p }{k+1} \right)+ l\left(q-\frac{2c_q + M}{k+1}\right)+\left( r+\frac{b_r-c_r }{k+1} \right).$$ That is, $$l\le \left\lfloor \frac{b_p-c_p+b_r-c_r}{2c_q+M} \right\rfloor,$$ completing the proof. \end{proof} \section{Further applications} Our method applies not only to distance-regular graphs, but also to more general settings. In this section, we take amply regular graphs and $(s,c,a,k)$-graphs as examples. \begin{definition}[Amply regular graph \cite{BCN89}] \label{ARGdefinition} Let $G$ be a $k$-regular graph with $v$ vertices. Then $G$ is called an amply regular graph with parameters $(v,k,\lambda,\mu)$ if any two adjacent vertices have $\lambda$ common neighbors, and any two vertices at distance $2$ have $\mu$ common neighbors. \end{definition} \begin{proof}[Proof of Theorem \ref{ARG}] For any two vertices $x,y$ with $d(x,y)=2$, Theorem \ref{WassersteinJump} shows that $$W\left(\mu_x^{1}, \mu_y^{\frac{1}{k+1}}\right)\le 2+\frac{|B_2(x,y)|-\mu}{k+1}\le 2+\frac{k-2\mu}{k+1}.$$ For any two adjacent vertices $x$ and $y$, the same proof as that of Theorem \ref{WassersteinDRG} with $q=1$ shows that $$W\left(\mu_x^{\frac{1}{k+1}}, \mu_y^{\frac{1}{k+1}}\right)\le 1-\frac{2 + \left\lceil \frac{\lambda(\mu-\lambda)}{\mu-1} \right\rceil}{k+1}.$$ The desired result then follows by Theorem \ref{Wasserstein}. \end{proof} \begin{definition}[$(s,c,a,k)$-graph \cite{T83}] \label{scakdefinition} Let $s,c,a$ and $k$ be integers with $s,c,a+2,k\ge 2$.
An $(s,c,a,k)$-graph is a graph of maximum valency $k$ and girth $2s-1$ or $2s$ such that \begin{itemize} \item[\rm (1)] $|C_s(x,y)|=c$ for any two vertices $x,y$ with $d(x,y)=s$, \item[\rm (2)] $|A_{s-1}(x,y)|=a$ for any two vertices $x,y$ with $d(x,y)=s-1$. \end{itemize} \end{definition} \begin{lemma}[{\cite[Lemma 3.2]{T83}}]\label{scak83} An $(s, c, a, k)$-graph is either regular or bipartite, with all vertices in each partition having the same valency. In addition, $d_u=d_v$ for any two vertices $u,v$ with $d(u,v)=s-1$. \end{lemma} \begin{proof}[Proof of Theorem \ref{scak}] For any two vertices $x,y$ with $d(x,y)=s$ and $d_y=\delta$, Theorem \ref{WassersteinJump} shows that $$W\left(\mu_x^{1}, \mu_y^{\frac{1}{\delta+1}}\right)\le s+\frac{|B_s(x,y)|-c}{\delta+1}\le s+\frac{\delta-2c}{\delta+1}.$$ For any two vertices $x,y$ with $d(x,y)=s-1$ and $d_x=d_y=\delta$, the same proof as that of Theorem \ref{WassersteinDRG} with $q=s-1$ shows that $$W\left(\mu_x^{\frac{1}{\delta+1}}, \mu_y^{\frac{1}{\delta+1}}\right)\le s-1-\frac{2 + M}{\delta+1},\ {\rm where}\ M=\left\lceil \frac{a(c-a)}{c-1} \right\rceil.$$ If $G$ is regular, the result follows by Theorem \ref{Wasserstein}. Otherwise, by Lemma \ref{scak83}, we suppose that $G$ is bipartite with bipartition $\{A,B\}$ such that each vertex in $A$ has valency $\delta$ and each vertex in $B$ has valency $k$. In addition, the fact that $d_u=d_v$ for any two vertices $u,v$ with $d(u,v)=s-1$ implies that $s$ is odd. If \eqref{eq:scakdiameter} does not hold, there exist two vertices $x$ and $y$ with $d(x,y)=D$ and $x\in B$ such that $D=2s+l(s-1)$, where $l$ is an integer satisfying \begin{align}\label{l2} l\ge 0\ {\rm and}\ l\ge \left\lfloor\frac{2(\delta-2c)}{2+M} \right\rfloor +1. \end{align} Let $L$ be a path of length $D$ connecting $x$ and $y$. On the path $L$, there is a sequence of vertices $x_0,x_1,\cdots,x_{l}$ such that $d(x,x_0)=d(x_{l},y)=s$ and $d(x_{i-1},x_i)=s-1$ for $1\le i\le l$. Then, $x_i\in A$ for $0\le i\le l$.
Note that $W_1(\mu_x^1,\mu_y^1)=D$. It follows by the triangle inequality that \begin{align}\notag D=W_1\left(\mu_x^1,\mu_y^1\right) &\le W_1\left(\mu_x^1,\mu_{x_0}^{\frac{1}{\delta+1}}\right) + \sum_{i=1}^l W_1\left(\mu_{x_{i-1}}^\frac{1}{\delta+1},\mu_{x_{i}}^\frac{1}{\delta+1}\right) + W_1\left(\mu_{x_l}^\frac{1}{\delta+1}, \mu_{y}^1\right) \\ \notag &\le 2\left(s+\frac{\delta-2c}{\delta+1}\right)+l\left(s-1-\frac{2 + M}{\delta+1}\right). \end{align} That is, $l(2+M)\le 2(\delta-2c)$, which contradicts \eqref{l2}. \end{proof} \section{Acknowledgement} This work is supported by the National Key R \& D Program of China 2023YFA1010200 and the National Natural Science Foundation of China No. 12031017 and No. 12431004. \begin{thebibliography}{99} \bibitem{BDKM} S. Bang, A. Dubickas, J. H. Koolen and V. Moulton, There are only finitely many distance-regular graphs of fixed valency greater than two, Adv. Math. 269 (2015), 1–55. \bibitem{BHK} S. Bang, A. Hiraki and J. H. Koolen, Improving diameter bounds for distance-regular graphs, European J. Combin. 27 (2006), 79–89. \bibitem{BI} E. Bannai and T. Ito, Algebraic combinatorics I: Association schemes, The Benjamin/Cummings Publishing Co., Inc., Menlo Park, CA, 1984. \bibitem{BCN89} A. E. Brouwer, A. M. Cohen and A. Neumaier, Distance-regular graphs, Springer-Verlag, 1989. \bibitem{CHLZ24} K. Chen, C. Hu, S. Liu and H. Zhang, Ricci curvature, diameter and eigenvalues of amply regular graphs, arXiv: 2410.21055, 2024. \bibitem{CK19} D. Cushing and S. Kamtue, Long-scale Ollivier Ricci curvature of graphs, Anal. Geom. Metr. Spaces 7 (2019), no. 1, 22–44. \bibitem{DKT} E. van Dam, J. H. Koolen and H. Tanaka, Distance-regular graphs, Dynamic Surveys, Electron. J. Combin., 2016, \\ http://www.combinatorics.org/ojs/index.php/eljc/article/view/DS22/pdf. \bibitem{HLX24} X. Huang, S. Liu and Q. Xia, Bounding the diameter and eigenvalues of amply regular graphs via Lin--Lu--Yau curvature, Combinatorica 44 (2024), no.
6, 1177–1192. \bibitem{I83} A. A. Ivanov, Bounding the diameter of a distance-regular graph, Dokl. Akad. Nauk SSSR 271 (1983), 789–792. \bibitem{K16} D. König, Über Graphen und ihre Anwendung auf Determinantentheorie und Mengenlehre, Math. Ann. 77 (1916), 453–465. \bibitem{LLY11} Y. Lin, L. Lu and S.-T. Yau, Ricci curvature of graphs, Tohoku Math. J. 63 (2011), no. 4, 605–627. \bibitem{Mulder79} M. Mulder, $(0,\lambda)$-graphs and $n$-cubes, Discrete Math. 28 (1979), 179–188. \bibitem{NP22JCTB} A. Neumaier and S. Penji\'c, A unified view of inequalities for distance-regular graphs, part I, J. Comb. Theory Ser. B 154 (2022), 392–439. \bibitem{NP22} A. Neumaier and S. Penji\'c, On bounding the diameter of a distance-regular graph, Combinatorica 42 (2022), no. 2, 237–251. \bibitem{O09} Y. Ollivier, Ricci curvature of Markov chains on metric spaces, J. Funct. Anal. 256 (2009), no. 3, 810–864. \bibitem{Smith74} D. H. Smith, Bounding the diameter of a distance-transitive graph, J. Comb. Theory Ser. B 16 (1974), 139–144. \bibitem{T82} P. Terwilliger, The diameter of bipartite distance-regular graphs, J. Comb. Theory Ser. B 32 (1982), 182–188. \bibitem{T83} P. Terwilliger, Distance-regular graphs and $(s,c,a,k)$-graphs, J. Comb. Theory Ser. B 34 (1983), 151–164. \end{thebibliography} \end{document}
2412.18555v1
http://arxiv.org/abs/2412.18555v1
Analysis of non-overlapping models with a weighted infinite delay
\documentclass{ws-m3as} \usepackage{pgfkeys} \usepackage{bbold} \usepackage{bbm} \usepackage{dsfont} \usepackage[a4paper, total={6in, 8in}]{geometry} \usepackage{hyperref} \usepackage[toc]{appendix} \usepackage{pgfplots} \pgfplotsset{compat=1.18} \usepackage{pgfplotstable} \newcommand{\ep}{\varepsilon} \newcommand{\eps}[1]{{#1}_{\varepsilon}} \newcommand{\bo}{\boldsymbol} \newtheorem{Def}{Definition} \newtheorem{Theo}{Theorem} \newtheorem{Prop}{Proposition} \newtheorem{Lemma}{Lemma} \newtheorem{Corollary}{Corollary} \newtheorem{Ass}{Assumption} \newtheorem{Rmk}{Remark} \newtheorem{EX}{Example} \usepackage{tikz} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\alert}[1]{{\color{red}#1}} \newcommand{\cb}[1]{{\color{blue}#1}} \newcommand{\RR}{{\mathbb{R}}} \newcommand{\NN}{{\mathbb{N}}} \begin{document} \markboth{Thierno Mamadou Baldé and Vuk Milisic}{Analysis of non-overlapping models with a weighted infinite delay} \author{Thierno Mamadou Baldé } \address{Univ Brest, CNRS UMR 6205, Laboratoire de Mathématiques de Bretagne Atlantique 6, \\Avenue Victor Le Gorgeu, 29200 Brest, France} \author{Vuk Milisic} \address{Univ Brest, CNRS UMR 6205, Laboratoire de Mathématiques de Bretagne Atlantique 6, \\Avenue Victor Le Gorgeu, 29200 Brest, France} \title{Analysis of non-overlapping models with a weighted infinite delay} \maketitle \begin{abstract} The framework of this article is cell motility modeling. Approximating cells as rigid spheres, we take into account both non-penetration and adhesion forces. Adhesions are modeled as memory-like microscopic elastic forces. This leads to a delayed and constrained vector-valued system of equations. We prove that the solution of these equations converges, when $\varepsilon$, the linkages' turnover parameter, tends to zero, to a constrained model with friction. We discretize the problem and penalize the constraints to get an unconstrained minimization problem.
The well-posedness of the constrained problem is obtained by letting the penalty parameter tend to zero. Energy estimates \emph{à la} De Giorgi are derived accounting for delay. Thanks to these estimates and the convexity of the constraints, we obtain compactness uniformly with respect to the discretisation step and $\varepsilon$; this is the mathematically involved part of the article. Considering that the characteristic bonds' lifetime goes to zero, we recover a friction model comparable to [Venel {\em et al}, ESAIM, 2011] but under more realistic assumptions on the external load, this part being also one of the challenging aspects of the work. \end{abstract} \keywords{Adhesions, contact models, Volterra equations, optimal conditions, friction.} \ccode{Mathematics Subject Classification: xxx, xxx} \section{Introduction} Cell migration is driven by various extracellular guidance cues which are of chemical or mechanical type. The first kind of response is due to gradients of diffusible cues that are either attractive or repulsive; this mechanism is called \textit{chemotaxis}. Examples include bacteria migrating toward nutrients \cite{jen906}, or lymphocytes responding to chemokine gradients in order to locate sites of immune response \cite{thom90}. In \cite{xue02}, the authors prove that fibroblast growth factors of types 4 and 8 respectively control the attractive and repulsive chemotaxis during chicken gastrulation. In recent years \textit{durotaxis} (migration driven by the mechanical compliance of the substrate) has been investigated in many papers. In \cite{jai2022}, the elastic properties of the migratory substrate are shown to bias single and collective cell migration. The authors proved as well that cells exert higher traction and increase their spread areas when exposed to stiffer surfaces or stiffness gradients, and may alter their contractility to withstand the mechanical properties of the migratory substrate.
Furthermore the authors of \cite{jai2022} prove that human cancer cells have stronger phenotypes when exposed to stiffer substrates, and collective epithelial cells undergo durotaxis even if the cells taken individually do not necessarily do so. These mechanisms, chemotaxis and durotaxis, are both investigated in \cite{carole22}. There the authors underline the similarity but also the remarkable diversity of cells' responses to their local environment. In order to account for this locality, we model contacts between neighboring cells. When considering the literature related to this field, sweeping processes are the starting point. In his seminal paper \cite{mor77}, Moreau considers a point $q(t)$ in a moving closed and convex set $C(t)$ of a Hilbert space $H$ without external perturbation. The particle stays at rest as long as it happens to lie in the interior of $C$; and once caught up by the boundary $\partial C(t)$, it can only move in the inward normal direction: it always belongs to $C(t)$. Since then, many other authors have attempted either to weaken the hypotheses or to add some external perturbation to Moreau's system. For instance in \cite{cast93}, in finite dimension, the authors considered the set-valued function $C$ as the complement of a convex set. Moreover, the authors introduced a bounded, closed and convex valued multifunction. In \cite{cast95}, the perturbation is supposed to be upper semi-continuous with \textit{linear compact growth}, and $C$ is Hausdorff continuous and satisfies the so-called \textit{interior ball condition}. To weaken the convexity of $C(t)$, Colombo et al. introduced prox-regular sets. A prox-regular set (defined below in a more formal way) can be of any shape (non-convex for instance) but it is possible to project points on it if these are close enough. The authors deal first with an unperturbed problem before adding external perturbations.
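To fix ideas on this projection property, consider the unit circle in the plane: it is non-convex, yet every point other than its center has a unique nearest point on it. The following minimal numerical sketch is our own illustration (not taken from the cited works):

```python
import math

def project_to_unit_circle(x, y):
    """Nearest-point projection onto the unit circle S = {(x,y): x^2 + y^2 = 1}.
    S is non-convex, yet the projection is single-valued at every point
    except the origin, where all of S is equally close."""
    r = math.hypot(x, y)
    if r == 0.0:
        raise ValueError("projection is not unique at the origin")
    return (x / r, y / r)

# An interior and an exterior point on the same ray project to the same point:
p_in = project_to_unit_circle(0.3, 0.4)   # distance 0.5 from the origin
p_out = project_to_unit_circle(3.0, 4.0)  # distance 5 from the origin
print(p_in, p_out)                        # both close to (0.6, 0.8)
```

A convex set would give uniqueness of the projection everywhere; prox-regularity only guarantees it in a neighborhood of the set, which is exactly what the circle exhibits.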
More recently, Juliette Venel uses similar arguments to deal with non-penetration models in the case of human crowd motion and emergency exits \cite{venel08}. Pedestrians are idealized as rigid disks whose radii and centers are respectively $r_{i} > 0$ and $q_{i} \in \mathbb{R}^{2}$, and the individuals' centers are collected in a single vector called the global configuration. Venel models crowd dynamics where individuals do not overlap. She perturbs the model by adding an individualistic (or idealized) velocity (the velocity that individuals aim at in the absence of others) represented by a bounded Lipschitz function. The actual velocity is then the admissible velocity closest to the idealized one. Here we model adhesions using a microscopic description of bonds as a continuous deterministic death and birth process. This approach was used in the pioneering work of Oelz and Schmeiser \cite{OelzSch10}. The model is based on the microscopic description of the dynamics and interactions of individual filaments, called the Filament-Based Lamellipodium Model. The adhesion forces inside this model rely on a microscopic description of proteic linkages. The authors in \cite{OelzSch10} derived a formal limit (when the rate of linkages turnover $\varepsilon$ is small enough). They end up with a gradient flow model with classical friction terms for adhesion of actin filaments to the substrate and cross-links. Using \textbf{minimizing movements} {\em à la} De Giorgi, they prove that the semi-discretisation in time of the problem converges and provides existence and uniqueness of the limit problem. Since then various attempts were made to make this formal computation rigorous \cite{MiOelz11}, \cite{MiOelz16}, \cite{MiOelz18}, \cite{Mi20}. To simplify the problem, a single adhesion point was considered. Its position is the first unknown of the problem and a population of bonds related to this point is the second one.
The equation for the position is a Volterra equation accounting for the force balance between the elastic forces of the linkages and an external load. The population density solves an age-structured problem with a non-local birth term modelling saturation of bonds. This equation depends on $\varepsilon$ as well. In \cite{MiOelz16}, the authors considered the fully-coupled case (the death-rate of linkages depends on the unknown position). They proved that if the balance between the on-rate of the linkages and the external force is violated then the velocity of the particles blows up as the density vanishes. This blow-up mimics detachment of the binding site from the substrate. In a further step, space-dependence was taken into account as well (see \cite{MiOelz18}, \cite{Mi20}). In \cite{Mi20}, a delayed harmonic map is considered on the sphere. A complete asymptotic study of a scalar fourth order penalized and delayed problem was achieved recently \cite{MiSou}, where the authors considered limits with respect to $\varepsilon$ and for large times. In the present work, we model the time-dependent positions of several cells. These minimize an energy functional under nonlinear non-overlapping constraints. The energy contains two parts: a delay term representing the adhesive energy and a coercive and strictly convex function representing the energy of the external load. The adhesive terms in the total energy rely on the same memory models presented above. Their presence neither allows straightforward proofs of existence nor provides compactness. This is why we discretize the problem with respect to time and age. This approach leads to delayed minimizing movements in the spirit of \cite{Mi20}. We extend energy estimates provided by classical {\em minimizing movements} \cite{OelzSch10} to the case with memory. The crucial property enabling this step is the monotonicity of the binding kernels.
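For intuition, a delayed minimizing-movement step can be sketched numerically as follows. This is a one-particle toy version without constraints, with a quadratic external load and an exponentially decaying kernel chosen purely for illustration (none of these choices are the paper's actual data):

```python
import math

# Delayed minimizing-movement sketch for one particle on the line.
# Each step minimizes the discrete adhesive energy plus an external load,
#   q  |->  (1/(2*eps)) * sum_l da*rho_l*(q - past[l])**2 + F(q),
# with the illustrative quadratic load F(q) = 0.5*(q - c)**2.

def delayed_step(past, weights, eps, c):
    """past[l] = position l steps back; returns the new minimizer."""
    mu0 = sum(weights)                                   # discrete 0th moment
    zbar = sum(w * z for w, z in zip(weights, past)) / mu0
    # Euler-Lagrange equation: (mu0/eps)*(q - zbar) + (q - c) = 0
    return (mu0 * zbar + eps * c) / (mu0 + eps)

da, eps, c, L = 0.1, 0.5, 0.0, 50
weights = [da * math.exp(-da * l) for l in range(L)]     # toy decaying kernel
past = [2.0] * L                                         # constant past data
for _ in range(1000):
    past = [delayed_step(past, weights, eps, c)] + past[:-1]
print(past[0])
```

Each step solves its minimization exactly through the Euler-Lagrange equation, and the iteration relaxes toward the minimizer of the external load, mirroring the friction-type limit behavior described above.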
These estimates and convexity assumptions on the source term (the position-dependent {\emph{external load}}) are used in order to prove compactness. Precisely, we prove that the time derivative of the solution is bounded in $L^{2}(0,T)$ for any $T>0$. We prove that the discrete minimization scheme is equivalent to a variational inclusion and show that the discrete approximation of the solution converges toward the solution of the continuous problem. We show as well that when $\varepsilon$, the instantaneous turnover parameter of our model, tends to zero, the limit function solves the model investigated in \cite{venel08} weighted by friction coefficients. Nevertheless, as we only assume coercivity and convexity of the external load, we cannot apply the same techniques as in \cite{venel08}: while the Lipschitz assumption made on the external load allows for the use of Uzawa's method in \cite{venel08}, this assumption is not made here and we propose a new alternative approach. Indeed, in \cite{venel08} the Lipschitz hypothesis fails even for the simplest quadratic potentials. Instead, here, at each time step, we penalize the discrete constraint and let the penalty parameter tend to zero. This establishes the well-posedness of our discrete constrained problem and applies to \cite{venel08} as well. Moreover in \cite{venel08}, the Lipschitz feature of the external load guarantees the boundedness of the discrete time derivative of the solution. Here, since we weakened this hypothesis, the arguments of \cite{venel08} do not apply in the asymptotics with respect to $\varepsilon$ (the delay operator is not uniformly bounded with respect to $\varepsilon$). In order to overcome this difficulty, we test the Euler-Lagrange equations against a regular enough test function and transpose the delay operator onto it \cite{Mi20}. The paper is organized as follows: in Section 2, we set the framework of the problem.
We first recall the notion of non-overlapping introduced in \cite{venel08}, then define the contact adhesion model and lastly set some assumptions on the data. Section 3 is devoted to the results of this paper. In this section, we first prove the well-posedness of the discrete solution, then establish a compactness criterion which we use to prove the convergence of our model toward a weighted differential inclusion. All the results are extended to the torus as well. We end Section 3 with some numerical simulations. \section{Definition of the model} \subsection{Preliminaries} Consider $N_{p}$ particles which we idealize as rigid disks whose centers (in $(x,y)$-coordinates) and radii are $q_{i} := (q_{i}^{x}, q_{i}^{y})$ and $r_{i}>0, \; i =1,\cdots,N_{p}$, respectively. We identify the $i$th particle with the pair $(q_{i},r_{i})$. The global configuration of all particles is given by \begin{equation} \boldsymbol{q}:= \left(q_{1},q_{2},\cdots,q_{N_{p}} \right) \in \mathbb{R}^{2N_{p}}. \end{equation} For $i < j$, we define $D_{ij}(\boldsymbol{q})$, the signed distance between $(q_{i},r_{i})$ and $(q_{j},r_{j})$, by \begin{equation}\label{signed_distance} D_{ij}(\boldsymbol{q}):= |q_{j}-q_{i}|-(r_{i}+r_{j}), \end{equation} see Figure \ref{distance}. Here $|\cdot|$ denotes the Euclidean norm.
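As a quick illustration of the definitions above, here is a small sketch (with the toy radii $r_i=1$, $r_j=1.5$ of Figure \ref{distance}) computing the signed distance and the unit vector $e_{ij}$ entering its gradient:

```python
import math

def signed_distance(qi, qj, ri, rj):
    """D_ij(q) = |q_j - q_i| - (r_i + r_j): negative exactly when the disks overlap."""
    return math.hypot(qj[0] - qi[0], qj[1] - qi[1]) - (ri + rj)

def e_ij(qi, qj):
    """Unit vector from q_i towards q_j appearing in the gradient of D_ij."""
    dx, dy = qj[0] - qi[0], qj[1] - qi[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

# Two disks of radii 1 and 1.5 whose centers are 5 apart, as in the figure:
D = signed_distance((0.0, 0.0), (5.0, 0.0), 1.0, 1.5)
print(D)                              # 2.5 > 0: the disks do not overlap
print(e_ij((0.0, 0.0), (5.0, 0.0)))   # (1.0, 0.0)
```

The non-overlapping constraint set $\boldsymbol{Q}_{0}$ defined next simply requires this quantity to stay nonnegative for every pair of particles.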
\begin{figure}[!ht] \centering \begin{tikzpicture} \draw (0,0) circle (1); \draw[ball color=black](0,0) circle(0.04) node[pos=0.5, below]{$q_{i}$} ; \draw (5,0) circle (1.5); \draw[ball color=black](5,0) circle(0.05) node[below]{$q_{j}$}; \draw (0,0) -- (-0.707, 0.707) node[pos=0.5, left, above, sloped]{$r_{i}$}; \draw (5,0) -- (5,1.5) node[pos=0.5, left, above, left]{$r_{j}$}; \draw [<->] (1.05,0) -- (3.45,0) node[pos=0.5,above] {$D_{ij}(\boldsymbol{q})$}; \draw [thick,->] (-0.1,0) -- (-2.5,0) node[pos=0.8,above] {$-e_{ij}(\boldsymbol{q})$}; \draw [thick,->] (5.1,0) -- (7.5,0) node[pos=0.9,above] {$e_{ij}(\boldsymbol{q})$}; \end{tikzpicture} \caption{The signed distance} \label{distance} \end{figure} Therefore the gradient vector of $D_{ij}$ naturally involves the oriented vector $e_{ij}(\bo{q})$ in Figure \ref{distance} and reads \begin{equation*} \boldsymbol{G}_{ij}(\boldsymbol{q}) := \nabla D_{ij}(\bo{q}) = \left(0,\cdots 0, \underset{i}{-e_{i,j}(\bo{q})}, 0\cdots 0, \underset{j}{e_{i,j}(\bo{q})}, 0, \cdots,0\right), \quad e_{ij}(\bo{q}):= \dfrac{q_{j}-q_{i}}{|q_{j}-q_{i}|}, \quad \forall i<j. \end{equation*} The particles should not overlap, so that we define $\boldsymbol{Q}_{0}$, the set of global configurations for which $D_{ij}$ is nonnegative for any pair of distinct particles. Precisely \begin{equation}\label{Q0} \boldsymbol{Q}_{0} := \left\{ \boldsymbol{q} \in \mathbb{R}^{2N_{p}}, \, D_{ij}(\boldsymbol{q}) \geq 0, \, \forall i<j \right\}. \end{equation} $\boldsymbol{Q}_{0}$ is called the set of feasible configurations. \subsection{Definition of the adhesion contact model} Let $T>0$ be any time value and $\varepsilon$ be a nonnegative parameter.
In this article the positions of $N_{p}$ particles in $\mathbb{R}^{2}$ at time $t$ are represented by $\bo{z}_{\varepsilon}(t)\in \mathbb{R}^{2N_{p}}$ and solve the minimization problem: \begin{equation}\label{Eq1} \begin{cases} \displaystyle{\bo{z}_{\varepsilon}(t) = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{Q}_{0}} E^{\varepsilon}_{t}(\boldsymbol{q}), \quad t \in (0,T]}, \vspace{0.5em} \\ \boldsymbol{z}_{\varepsilon}(t) = \boldsymbol{z}_{p}(t), \quad \forall t \leq 0, \end{cases} \end{equation} where the energy functional reads \begin{equation*} E^{\varepsilon}_{t}(\boldsymbol{q}) := \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{\mathbb{R}_{+}} \left|q_{i} - z_{\varepsilon,i}(t-\varepsilon a) \right|^{2}\rho_{i}(a)da + F(\boldsymbol{q}), \end{equation*} $\boldsymbol{z}_{p}$ represents the positions for negative times and $F:\mathbb{R}^{2N_{p}}\to \mathbb{R}$ is the energy associated with the external load. The parameter $\varepsilon$ represents the maximal lifetime of the linkages (a dimensionless parameter representing the ratio of a characteristic time to a characteristic age of the bonds) and its inverse is assumed to be proportional to the linkages' stiffness.\\ Furthermore we assume that the linkages' density is independent of time and $\varepsilon$ and solves an age-structured equation.
Precisely for any particle, $\rho_{i}$ solves the following equation \begin{equation}\label{contRho} \begin{cases} \partial_{a}\rho_{i}(a) + (\zeta_{i}\rho_{i})(a) = 0, \quad a > 0, \vspace{0.75em} \\ \displaystyle{\rho_{i}(0) = \beta_{i}\left(1-\int_{0}^{\infty}\rho_{i}(a)da \right)}, \end{cases} \end{equation} where the linkages' off-rate $\zeta_{i}: \mathbb{R}_{+}\to \mathbb{R}_{+}$ and the on-rates $\beta_{i} \in \mathbb{R}_{+}$ are given constants.\\ We mention that the non-local term between the parentheses in \eqref{contRho} is a saturation term: if the integral is close enough to $0$, more births occur while if it is large enough then $\rho_{i}(0)$ is small. We define the vector density of linkages $\boldsymbol{\rho} \in (\mathbb{R}_{+})^{N_{p}}$, as well as the vector on-rates $\boldsymbol{\beta}$ and off-rates $\boldsymbol{\zeta}$. \subsection{Main objective} We aim in this paper at proving that the global configuration $\boldsymbol{z}_{\varepsilon}$ satisfies \begin{equation}\label{goal1} \begin{cases} \boldsymbol{\mathcal{L}}_{\varepsilon}[\boldsymbol{z}_{\varepsilon}] +\nabla F(\boldsymbol{z}_{\varepsilon}) \in -N\left( \boldsymbol{K}(\boldsymbol{z}_{\varepsilon}),\boldsymbol{z}_{\varepsilon} \right), \quad \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ \boldsymbol{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad \forall t \leq 0, \end{cases} \end{equation} where the delay operator reads \begin{equation}\label{cont-delay-operator} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}](t):= \dfrac{1}{\varepsilon} \int_{0}^{\infty}\left(z_{\varepsilon,i}(t) - z_{\varepsilon,i}(t-\varepsilon a)\right)\rho_{i}(a)da, \quad \forall i. 
\end{equation} Moreover we prove that $\boldsymbol{z}_{\varepsilon} \longrightarrow \boldsymbol{z}_{0}$ in $C\left([0,T]; \mathbb{R}^{2N_{p}}\right)$ as $\varepsilon \to 0$, where the limit function $\boldsymbol{z}_{0}$ solves \begin{equation}\label{eq.friction}\left\{ \begin{aligned} &\boldsymbol{\mu}_{1}\partial_{t}\boldsymbol{z}_{0} + \nabla F(\boldsymbol{z}_{0}) \in -N\left(\boldsymbol{K}(\boldsymbol{z}_{0}),\boldsymbol{z}_{0} \right), \quad \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ &\boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{aligned} \right. \end{equation} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\boldsymbol{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i} := \int_{0}^{\infty} \tilde{a} \rho_{i}(\tilde{a})d\tilde{a} \in \mathbb{R}, \quad \forall i. \end{equation*} We mention that $\bo{K}(\bo{z}_{\varepsilon})$ (respectively $\bo{K}(\bo{z}_{0})$) is the interior convex approximation of $\bo{Q}_{0}$ at $\bo{z}_{\varepsilon}$ (respectively at $\bo{z}_{0}$) and $N(\bo{K}(\bo{z}_{\varepsilon}),\bo{z}_{\varepsilon})$ (respectively $N(\bo{K}(\bo{z}_{0}),\bo{z}_{0})$) is the proximal-normal cone of $\bo{K}(\bo{z}_{\varepsilon})$ (respectively $\bo{K}(\bo{z}_{0})$) at $\bo{z}_{\varepsilon}$ (respectively at $\bo{z}_{0}$). \\ We recall that for any closed and nonempty set $S$ of a Hilbert space $H$ and $x \in S$, the proximal-normal cone of $S$ at $x$ (represented in Figure \ref{cone-normal}) is defined as \begin{equation}\label{proximal-normal} N(S,x) := \left\{ v \in H; \; \exists \alpha > 0 \text{ s.t. } x \in P_{S}(x + \alpha v) \right\}.
\end{equation} \begin{figure}[!ht] \centering \begin{tikzpicture} \fill[orange!30] plot[smooth cycle] coordinates {(0,0) (4,-0.5) (4.5,-2.5) (2,-3.5) (1.25,-2)}; \node at (3,-2) {$S$}; \filldraw[green!50!black] (1.5,-1) circle (2pt) node[below] {$z \in \mathring{S}$}; \node[green!50!black] at (1.5,-0.5) {$N(S,z) = \{0\}$}; \node[red] at (8,-4.5) {$N(S,a) = \emptyset$}; \filldraw[red] (8,-4) circle (2pt) node[above] {$a \notin S$}; \filldraw[blue] (4.4,-1) circle (2pt) node[below, rotate = 300] {$x \in \partial S$}; \draw[->, thick, blue] (4.4,-1) -- (6.5, -0.15); \filldraw[blue](6.575, -0.1) circle (2pt) node[right] {$x+v$}; \draw[blue](5.5, -2.5) circle(0) node[left, rotate=300]{$P_S(x+v)$}; \draw[blue] (-1,-4.45) node[right] {$N(S,y)$}; \draw[->, thick, blue] (2,-3.5) -- (0.9,-6.5); \filldraw(0.85,-6.605) circle (2pt) node[below] {$y+w$}; \draw[blue](4.05,-3.72) circle(0) node[left]{$P_S(y+w)$}; \filldraw[blue] (2,-3.5) circle (2pt) node[above] {$y \in \partial S$}; \shade[ball color=blue, opacity=0.15] (2,-3.5) -- (2.75,-7) arc[start angle=-25, end angle=-200, radius=2] -- cycle; \end{tikzpicture} \caption{The proximal-normal cone of $S$ at $z \in \mathring{S}$, $x,y \in \partial S$ and $a \notin S$.} \label{cone-normal} \end{figure} To reach this main objective we proceed as follows: we consider the discrete version of our problem and prove that it converges to \eqref{goal1} by letting the discretization step go to $0$ for fixed $\varepsilon$; the resulting solution in turn converges when $\varepsilon$ goes to $0$. \subsection{Notations and assumptions on the data} \subsubsection{Notations} For any $T>0$, we denote the following spaces: $\bo{\mathcal{C}} := \mathcal{C}([0,T]; \mathbb{R}^{2N_{p}})$, $\bo{H}^{1} := H^{1}([0,T]; \mathbb{R}^{2N_{p}}), \bo{L}^{2}:= L^{2}([0,T];\mathbb{R}^{2N_{p}}), \bo{L}^{\infty} := L^{\infty}([0,T];\mathbb{R}^{2N_{p}})$. \subsubsection{Assumptions}\label{Assump} \begin{itemize} \item [(i)] \textit{The off-rate} is assumed to be Lipschitz, i.e.
there exists a constant $L_{\bo{\zeta}} > 0$ such that \begin{equation*} |\bo{\zeta}(a) - \bo{\zeta}(b)| \leq L_{\bo{\zeta}}\left|a- b\right|, \quad \forall a, b \in \mathbb{R}_{+}. \end{equation*} Moreover for any particle there exist $\underline{\zeta_{i}}$ and $\overline{\zeta_{i}}$ such that $\displaystyle{0 < \underline{\zeta_{i}} < \zeta_{i}(a) < \overline{\zeta_{i}}}$. We define $\displaystyle{\underline{\zeta}:= \min_{i}\underline{\zeta_{i}}}$ (respectively $\displaystyle{\overline{\zeta}:= \max_{i}\overline{\zeta_{i}}}$) as well. \item[(ii)] \textit{The source term} $F$ is coercive (\textit{cf.} Definition \ref{annexeA}.\ref{coercive}), strictly convex and continuous. \item[(iii)] \textit{The past configurations} satisfy $\boldsymbol{z}_{p} \in Lip\left(\mathbb{R}_{-}; \boldsymbol{Q}_{0}\right)$: $\boldsymbol{z}_{p}(t) \in \boldsymbol{Q}_{0}$ for all $t \leq 0$, and there exists $C_{\bo{z}_{p}}> 0$ such that \begin{equation*} \big|\bo{z}_{p}(t_{2}) - \bo{z}_{p}(t_{1})\big| \leq C_{\bo{z}_{p}}\big|t_{2} - t_{1}\big|, \quad \forall t_{1}, t_{2} \leq 0. \end{equation*} \end{itemize} Note as well that in this particular case the linkages density admits a closed form, namely \begin{equation}\label{expr_rho} \rho_{i}(a) = \dfrac{\beta_{i}}{1+\beta_{i} \int_{0}^{\infty} e^{-\int_{0}^{\sigma}\zeta_{i}(\tilde{a})d\tilde{a}}d\sigma} e^{-\int_{0}^{a}\zeta_{i}(\tilde{a})d\tilde{a}}, \quad i=1,\cdots,N_{p}. \end{equation} By assumption \ref{Assump} (i), the moments $\mu_{k,i}:= \int_{0}^{\infty}a^{k}\rho_{i}(a)da$, $k \in \mathbb{N}$, are well defined. In particular, for any particle there exist $\underline{\mu_{k,i}}, \overline{\mu_{k,i}}$ such that \begin{equation*} 0 < \underline{\mu_{k,i}} \leq \mu_{k,i} \leq \overline{\mu_{k,i}}.
\end{equation*} \subsection{Time and age discretization and numerical approximations} The age interval $\mathbb{R}_{+}$ is divided with constant discretization step $\Delta a$ such that \begin{equation*} \mathbb{R}_{+}= \bigcup_{l=0}^{\infty}\big[l\Delta a, (l+1)\Delta a\big), \end{equation*} and the time interval is discretized likewise, with a grid satisfying $\Delta t = \varepsilon \Delta a$ and $N := \left\lfloor \dfrac{T}{\Delta t} \right\rfloor$, so that \begin{equation*} [0,T) = \bigcup_{n=0}^{N-1}\big[n\Delta t, (n+1)\Delta t\big). \end{equation*} We set $t^{n} :=n\Delta t$ and $a_{l}:= l\Delta a$ for $(n,l) \in \{0,1,\cdots,N\}\times \mathbb{N}$.\\ We discretize \eqref{contRho} using an implicit Euler scheme. This provides $R_{l,i}$ as a function of $R_{l-1,i}$ and reads: \begin{equation}\label{discreteRho} R_{l,i} = R_{l-1,i}/\big(1+\Delta a \zeta_{l,i}\big), \quad (l,i) \in \mathbb{N}^{\ast} \times \{1,2,\cdots,N_{p}\}, \end{equation} while on the boundary \begin{equation}\label{rhoinitial} R_{0,i} = \dfrac{R_{b,i}}{1+\frac{\Delta t}{\varepsilon}\zeta_{0,i}}, \quad \forall i \in \{1,2,\cdots,N_{p}\}. \end{equation} For any particle $i$, the non-local condition relates $R_{b,i}$ to the mean of the density $\mu_{0,\Delta,i}$ as \begin{equation}\label{rhobound} R_{b,i} = \beta_{i}\big(1-\Delta a \sum_{l=0}^{\infty}R_{l,i}\big) =: \beta_{i}(1-\mu_{0,\Delta,i}).
\end{equation} By induction over $l$ in \eqref{discreteRho} we have \begin{equation*} R_{l,i} = \left( \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\right) R_{0,i}, \quad \forall i \in \{1,2,\cdots,N_{p}\}, \end{equation*} so that the following system of two equations with two unknowns ($R_{b,i}$ and $R_{0,i}$) can be set: \begin{equation*} \begin{cases} R_{b,i} - \left( 1 + \Delta a \zeta_{0,i}\right)R_{0,i} = 0\vspace{0.5em} \\ \displaystyle{R_{b,i} + \Delta a \beta_{i} \left( 1+\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a\zeta_{r,i}} \right)R_{0,i}} = \beta_{i}, \end{cases} \end{equation*} which can be solved explicitly, giving: \begin{equation}\label{rho_0} \left\{ \begin{aligned} R_{0,i} & = \beta_{i}\left(1+\Delta a\left(\beta_{i} +\zeta_{0,i} + \beta_{i}\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\right) \right)^{-1}, \\ R_{b,i} & = \dfrac{\beta_{i}(1+\Delta a \zeta_{0,i})}{1 +\Delta a\Big(\beta_{i} +\zeta_{0,i} + \beta_{i}\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\Big)}. \end{aligned} \right. \end{equation} The discrete version of the minimization process \eqref{Eq1} reads \begin{equation}\label{Eq1_discret} \begin{cases} \displaystyle{\boldsymbol{Z}^{n}_{\varepsilon} = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{Q}_{0}} \left\{ E_{n,\varepsilon}(\boldsymbol{q}):= \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} |q_{i} - Z^{n-l}_{\varepsilon,i}|^{2} R_{l,i} + F(\boldsymbol{q}) \right\}}, \quad n = 1,2,\cdots,N \vspace{0.5em} \\ \boldsymbol{Z}^{n}_{\varepsilon} = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0, \end{cases} \end{equation} where the discrete average of positions for negative times is \begin{equation*} \bo{Z}^{n}_{p} = \dfrac{1}{\Delta t} \int_{n\Delta t}^{(n+1)\Delta t} \bo{z}_{p}(s)ds, \quad \forall n \in \mathbb{Z}_{-}.
\end{equation*} We define as well \begin{itemize} \item the piecewise constant approximation functions \begin{equation}\label{Eq2} \bo{z}_{\varepsilon,\Delta}(t):= \displaystyle{\sum_{n=1}^{N} \bo{Z}_{\varepsilon}^{n} \mathbbm{1}_{(t^{n-1}, t^{n}]}}(t),\, \displaystyle{\bo{z}_{p,\Delta}(t):= \sum_{n = -\infty}^{n=0}\bo{Z}_{p}^{-n}\mathbbm{1}_{(t^{n-1}, t^{n}]}(t)}, \end{equation} \item the piecewise linear interpolation \begin{equation}\label{eq.linear.interp} \bo{\tilde{z}}_{\varepsilon,\Delta}(t) := \sum_{n=1}^{N}\left\{Z^{n-1}_{\varepsilon} + \frac{t-t^{n-1}}{\Delta t} (\bo{Z}^{n}_{\varepsilon} - \bo{Z}^{n-1}_{\varepsilon}) \right\} \mathbbm{1}_{(t^{n-1}, t^{n}]}(t), \end{equation} \item the piecewise constant approximation of the linkages density \begin{equation}\label{rho_delta} \bo{\rho}_{\Delta}(a) := \sum_{l=0}^{\infty} \bo{R}_{l}\mathbbm{1}_{(l\Delta a,(l+1)\Delta a)}(a). \end{equation} \end{itemize} \section{Results} We first prove that the piecewise constant approximation of the linkages density converges towards $\bo{\rho}$ when the age stepsize $\Delta a$ is small enough. \begin{Prop} Under the CFL conditions, for any particle, the solution $R_{l,i}$ of \eqref{discreteRho} is nonnegative. \end{Prop} \begin{proof} We perform the proof by induction over $l \in \mathbb{N}$. Indeed \begin{itemize} \item For $l=0$: since the birth-rate and death-rate are nonnegative, we have that $R_{b,i} \geq 0$ and $R_{0,i} \geq 0$ for any particle (see \eqref{rho_0}). \item Assume that the claim holds up to $l-1$. \item Let us prove that the claim is valid for $l$. We use the induction hypothesis ($R_{l-1,i} \geq 0$) and the fact that $\zeta_{l,i}$ is nonnegative in the definition \eqref{discreteRho}. \end{itemize} \end{proof} \begin{Lemma} Under the CFL condition $\Delta t = \varepsilon \Delta a$, if the linkages density is defined as in \eqref{discreteRho}, then $$ R_{l,i} \geq 0 \Leftrightarrow \mu_{0,\Delta,i} \leq 1, \quad \forall i \in \{1,\dots,N_p\}.
$$ \end{Lemma} \begin{proof} The claim follows from the definition of the zeroth order moment and the fact that the on-rate and the off-rate are nonnegative. Indeed,\\ $ \Rightarrow)$ assume that $R_{l,i} \geq 0, \quad \forall (l,i) \in \mathbb{N} \times \{1,2,\cdots,N_{p}\}$. By \eqref{rhoinitial} and \eqref{rhobound}, we have that \begin{equation*} R_{0,i} = \frac{R_{b,i}}{1+\Delta a \zeta_{0,i}} \geq 0 \implies R_{b,i} = \beta_{i}(1-\mu_{0,\Delta,i}) \geq 0, \quad \forall i. \end{equation*} We have used the fact that $\zeta_{0,i} \geq 0$ in the latter denominator. The latter inequality gives the needed result. \\ $\Leftarrow )$ Assume that $\mu_{0,\Delta,i} \leq 1$. Since $\beta_{i} \geq 0$ for all $i$, by \eqref{rhobound} we have that \begin{equation*} R_{b,i} = \beta_{i}(1-\mu_{0,\Delta,i}) \geq 0, \quad \forall i, \end{equation*} so that $R_{b,i} \geq 0$ for all particles. This in turn, by \eqref{rhoinitial} and the fact that the death rate $\zeta_{0,i}$ is nonnegative, gives that the initial linkages density satisfies $R_{0,i}\geq 0$ for all $i$. By induction over $l \in \mathbb{N}$ in equation \eqref{discreteRho}, this gives the nonnegativity of the discrete linkages density. Furthermore note in this case that $\mu_{0,\Delta,i} \geq 0$ for all the particles. \end{proof} Define \begin{equation*} \overline{\bo{\rho}}_{\Delta}(a) := \sum_{l=0}^{\infty}\bo{\overline{R}}_{l}\mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a) \text{ where } \bo{\overline{R}}_{l} = \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(a)da \end{equation*} where $\bo{\rho}$ solves \eqref{contRho}, as well as $\bo{\overline{\mu}}_{0,\Delta} = \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \bo{\mu}_{0}(a)da $.
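As a numerical sanity check of the scheme \eqref{discreteRho} with the closure \eqref{rho_0}, here is a minimal sketch for a single particle (the infinite sums are truncated at the age $L\Delta a$; the function name and the truncation level are our choices, not from the text):

```python
import numpy as np

def discrete_linkage_density(beta, zeta, da, L):
    """Scheme (discreteRho) with the closure (rho_0), one particle.
    zeta[l] is the off-rate on the l-th age cell; sums truncated at age L*da."""
    # prods[l-1] = prod_{r=1}^{l} 1/(1 + da*zeta_r), for l = 1..L
    prods = np.cumprod(1.0 / (1.0 + da * zeta[1:L + 1]))
    R = np.empty(L + 1)
    R[0] = beta / (1.0 + da * (beta + zeta[0] + beta * prods.sum()))
    for l in range(1, L + 1):
        R[l] = R[l - 1] / (1.0 + da * zeta[l])  # implicit Euler step
    mu0 = da * R.sum()                          # discrete zeroth moment
    Rb = beta * (1.0 - mu0)                     # boundary value (rhobound)
    return R, mu0, Rb
```

For a constant off-rate $\zeta$, the closed form \eqref{expr_rho} gives $\rho_{i}(0) = \beta_{i}\zeta/(\beta_{i}+\zeta)$, which the discrete $R_{0,i}$ approaches at rate $O(\Delta a)$, while the boundary relation $R_{b,i} = (1+\Delta a\,\zeta_{0,i})R_{0,i}$ from \eqref{rhoinitial} holds exactly under this consistent truncation.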
We have \begin{Lemma} Under the same hypotheses as above, if $\bo{\rho}$ solves \eqref{contRho}, then \begin{equation*} \left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|_{L^{1}_{a}} \leq O(\Delta a) \text{ and } \left| \bo{\overline{\rho}}_{\Delta} - \bo{\rho}\right|_{L^{1}_{a}} \leq O(\Delta a), \end{equation*} where $L^{1}_{a}:= L^{1}\left(\mathbb{R}_{+}, \mathbb{R}^{N_{p}}\right)$ and $\bo{\rho}_{\Delta}$ is defined in \eqref{rho_delta}. \end{Lemma} \begin{proof} Indeed due to the consistency of the scheme \eqref{discreteRho}, we have that \begin{eqnarray*} \delta \overline{R}_{l,i} + \Delta a \zeta_{l,i} \overline{R}_{l,i} &=& \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a}(1+\zeta_{l,i} \Delta a) e^{-\int_{0}^{\Delta a}\zeta_{i}(s)ds}\rho_{i}(a)da - \dfrac{1}{\Delta a}\int_{l\Delta a}^{(l+1)\Delta a}\rho_{i}(a)da\\ & = & \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \left( \Delta a(\zeta_{l,i} - \zeta_{i}(a)) + O(\Delta a^{2})\right)\rho_{i}(a)da \lesssim L_{\bo{\zeta}}\, \Delta a^{2}\,\overline{R}_{l,i}. \end{eqnarray*} We have used the fact that \begin{equation*} |\zeta_{l,i} - \zeta_{i}(a)| \leq \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \left| \zeta_{i}(\sigma) - \zeta_{i}(a) \right| d\sigma, \quad \forall a \in \left(l\Delta a, (l+1)\Delta a\right), \forall i =1,\cdots,N_{p}, \end{equation*} so that for any particle \begin{eqnarray*} |\zeta_{l,i} - \zeta_{i}(a)| & \leq & \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} |a-\sigma| \left|\dfrac{ \zeta_{i}(\sigma) - \zeta_{i}(a) }{\sigma - a} \right|d\sigma \\ & \leq & \dfrac{L_{\bo{\zeta}}}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \left|a - \sigma\right| d\sigma \leq L_{\bo{\zeta}}\, \Delta a.
\end{eqnarray*} On the other hand, setting $E_{i} := \Delta a \sum_{l=0}^{\infty}(R_{l+1,i} - \overline{R}_{l+1,i})$ for any particle, we have that \begin{eqnarray*} |E_{i}| &\leq& \Delta a\sum_{l=0}^{\infty}\left| \dfrac{R_{l,i}}{1+\Delta a \zeta_{l+1,i}} - \overline{R}_{l+1,i} \right| \leq \dfrac{\Delta a}{1+\Delta a \underline{\zeta}_{i}} \left(|E_{i}| + \sum_{l=0}^{\infty}\left|(1+\Delta a\zeta_{l+1,i})\overline{R}_{l+1,i} - \overline{R}_{l,i}\right|\right)\\ & \leq & \dfrac{\Delta a |E_{i}|}{1+\Delta a\underline{\zeta}_{i}} + \dfrac{C}{1+\Delta a \underline{\zeta}_{i}} \Delta a^{2}, \quad \forall i, \end{eqnarray*} which gives $ |E_{i}| \leq C \Delta a, \; \forall i \in \{1,2,\cdots,N_{p}\}$, implying that $|\bo{E}| \lesssim \Delta a$. It follows that \begin{equation*} \int_{0}^{\infty} \left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|(a)da \leq \int_{0}^{\infty} \sum_{l=0}^{\infty} |\bo{R}_{l} - \bo{\overline{R}}_{l}| \mathbbm{1}_{\left(l\Delta a,(l+1)\Delta a\right)}(a)da \leq C\Delta a, \end{equation*} so that $\left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|_{L^{1}_{a}} \leq O(\Delta a)$, which is the first claim. Next \begin{eqnarray*} \int_{0}^{\infty} \left| \bo{\overline{\rho}_{\Delta}}(a) - \bo{\rho}(a) \right|da & = & \int_{0}^{\infty} \Big| \bo{\rho}(a) - \dfrac{1}{\Delta a} \sum_{l=0}^{\infty} \Big( \int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(\sigma)d\sigma \Big) \mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a) \Big|da \\ & \leq & \sum_{l=0}^{\infty} \int_{0}^{\infty} \Big| \bo{\rho}(a) - \dfrac{1}{\Delta a}\int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(\sigma)d\sigma \Big|\mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a)da. \end{eqnarray*} Define the space $\displaystyle{U := \left\{ f \in L^{1}_{a} \text{ s.t.
} \limsup_{\sigma \to 0} \int_{0}^{\infty} \big|\dfrac{f(a+\sigma) - f(a)}{\sigma}\big| da < \infty \right\}}$ endowed with the norm \begin{equation*} ||f||_{U} := ||f||_{L^{1}_{a}} + \limsup_{\sigma \to 0} \int_{0}^{\infty} \left|\dfrac{f(a+\sigma) - f(a)}{\sigma}\right|da. \end{equation*} Then, by Lemma B.2, p.~36 of \cite{Mi20}, we have \begin{equation*} \int_{0}^{\infty} \left| \bo{\overline{\rho}_{\Delta}}(a) - \bo{\rho}(a) \right|da \leq \Delta a\left|\bo{\rho}\right|_{U}. \end{equation*} Thus taking $\Delta a$ small enough gives the second claim. \end{proof} \subsection{Existence and uniqueness of solution of the constrained problem} Since $\boldsymbol{Q}_{0}$ is nonconvex (see Figure \ref{lack_convexity} below), we consider its interior convex approximation $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ defined as follows \begin{equation}\label{constSet} \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) := \left\{ \boldsymbol{q} \in \mathbb{R}^{2N_{p}}:\, \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0, \; \forall \, i < j \right\}, \end{equation} where for any $n$ and $\varepsilon$ fixed, the constraint functions $\varphi^{n,\varepsilon}_{ij}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}$ are affine and read \begin{equation}\label{functions} \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}):=-D_{ij}(\bo{Z}^{n-1}_{\varepsilon}) - \boldsymbol{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\boldsymbol{q}- \bo{Z}^{n-1}_{\varepsilon}), \quad i <j. \end{equation} The minimization problem over this convex set reads: find $\boldsymbol{Z}^n_{\varepsilon} \in \RR^{2N_p}$ s.t. \begin{equation}\label{contranint} \left\{ \begin{aligned} \boldsymbol{Z}^{n}_{\varepsilon}& = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) } E_{n,\varepsilon}(\boldsymbol{q}) , \quad n \geq 1, \vspace{0.75em} \\ \boldsymbol{Z}^{n}_{\varepsilon} & = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0. \end{aligned}\right.
\end{equation} Due to Lemma \ref{equality} below we have that \eqref{Eq1_discret} is equivalent to \eqref{contranint}, so that instead of \eqref{Eq1_discret}, we may deal with \eqref{contranint} in the following investigations. \begin{Theo}\label{thm1} Let us fix an integer $n \geq 1$ and assume that $\boldsymbol{Z}^{n-1}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$. Moreover suppose that assumptions \ref{Assump} (i)-(iii) hold and consider the penalised problem: find $\boldsymbol{Z}^{n}_{\varepsilon,\delta}$ such that \begin{equation}\label{penalise} \begin{cases} \displaystyle{\boldsymbol{Z}^{n}_{\varepsilon,\delta} = \argmin_{\boldsymbol{q}\, \in \, \mathbb{R}^{2N_{p}}} \left\{ E^{\delta}_{n,\varepsilon}(\boldsymbol{q}):= E_{n,\varepsilon}(\boldsymbol{q}) + \dfrac{1}{2\delta} \sum_{i<j} \max\left(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}),0\right)^{2} \right\}}, \\ \boldsymbol{Z}^{n}_{\varepsilon,\delta} = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0. \end{cases} \end{equation} Then there exists a unique $\boldsymbol{Z}^{n}_{\varepsilon, \delta} \in \RR^{2 N_p}$ solving the above problem. Moreover, when letting the penalty parameter $\delta$ go to $0$, $\boldsymbol{Z}^{n}_{\varepsilon, \delta}$ converges to $\boldsymbol{Z}^{n}_{\varepsilon}$ solving \eqref{contranint}. Again, one has that $\boldsymbol{Z}^{n}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n}_{\varepsilon})$. The result is then true for any $n \in \NN^{*}$. \end{Theo} \begin{proof} Thanks to assumption \ref{Assump}.(iii), one has that $\boldsymbol{Z}^0_\varepsilon \equiv \boldsymbol{z}_p(0)$ is such that $\boldsymbol{Z}^0_\varepsilon \in \boldsymbol{K}(\boldsymbol{Z}^0_\varepsilon)$, which is thus non-empty. We check hereafter the hypotheses of Theorem \ref{annexeA}.\ref{ciarl}. Indeed \begin{enumerate} \item for $\varepsilon >0$ and $n \in \mathbb{N}^{\ast}$ fixed, $\boldsymbol{q} \mapsto E_{n,\varepsilon}(\boldsymbol{q})$ is continuous, coercive and strictly convex.
Indeed, this is by definition since the sum of continuous (respectively coercive, strictly convex) functions is continuous (respectively coercive, strictly convex). Let us mention that this ensures the existence and uniqueness of $\boldsymbol{Z}^{n}_{\varepsilon,\delta}$, solution of \eqref{penalise}. \item {Let us define $\boldsymbol{K}(\boldsymbol{p}):=\{\boldsymbol{q} \in \RR^{2N_p}\; : \; \varphi_{ij}(\boldsymbol{p},\boldsymbol{q})\leq 0,\; i<j\}$, where $\varphi_{ij}(\boldsymbol{p},\boldsymbol{q}):=-D_{ij}(\boldsymbol{p})-\boldsymbol{G}_{ij}(\boldsymbol{p})\cdot(\boldsymbol{q}-\boldsymbol{p})$. Assume that $\boldsymbol{p}\in\RR^{2N_p}$ is such that $D_{ij}(\boldsymbol{p})\geq 0$ for all $i<j$. Then we claim that $\boldsymbol{K}(\boldsymbol{p})$ is a closed, convex and non-empty set. Indeed, $\boldsymbol{p} \in \boldsymbol{K}(\boldsymbol{p})$, which implies that it is non-empty. Since the maps $\bo{q} \mapsto \varphi_{ij}(\bo{p},\bo{q})$ are affine, $\bo{K}(\bo{p})$ is convex as a finite intersection of half-spaces. It is closed as a finite intersection of closed sets: indeed \begin{equation*} \boldsymbol{K}(\boldsymbol{p}) = \bigcap_{i<j} (\varphi_{ij}(\boldsymbol{p},\cdot))^{-1}((-\infty, 0]), \end{equation*} so that, since the maps $\boldsymbol{q} \mapsto \varphi_{ij}(\boldsymbol{p},\boldsymbol{q})$ are continuous and $(-\infty, 0]$ is a closed interval, $\boldsymbol{K}(\boldsymbol{p})$ is closed as an intersection of preimages of closed subsets by continuous functions.
Thus, $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is a closed, convex and non-empty set since $\boldsymbol{Z}^{n-1}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon} )$.} \item The map $\psi^{n,\varepsilon}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}$ defined by \begin{equation*} \psi^{n,\varepsilon}(\boldsymbol{q}): = \dfrac{1}{2}\sum_{i<j} \max\left( \varphi^{n, \varepsilon}_{ij}(\boldsymbol{q}),0 \right)^{2}, \end{equation*} satisfies \eqref{eq.equiv.U.Phi}: it is continuous, convex and satisfies \begin{equation*} \psi^{n,\varepsilon}(\boldsymbol{q}) \geq 0 \text{ for every } \boldsymbol{q} \in \mathbb{R}^{2N_{p}} \text{ and } \psi^{n,\varepsilon}(\boldsymbol{q}) = 0 \iff \boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} We prove first the continuity. Indeed for any $n \in \mathbb{N}$ and $\varepsilon > 0$ fixed, the maps $f^{n,\varepsilon}_{ij} := \max(\cdot, 0)^{2} \circ \varphi^{n,\varepsilon}_{ij}, \; i <j$, are continuous as compositions of continuous functions, so that $\psi^{n,\varepsilon} := \sum_{i<j}f^{n,\varepsilon}_{ij}$ is continuous. For the convexity, note that each $f^{n,\varepsilon}_{ij}$ is convex as the composition of the convex nondecreasing map $x \mapsto \max(x,0)^{2}$ with the affine map $\varphi^{n,\varepsilon}_{ij}$, so that $\psi^{n,\varepsilon}$ is convex as a sum of convex functions. Furthermore, by definition $\psi^{n,\varepsilon}(\boldsymbol{q}) \geq 0, \forall \bo{q} \in \mathbb{R}^{2N_{p}}$ and $\psi^{n,\varepsilon}(\boldsymbol{q}) = 0 \iff \bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. Indeed \begin{equation*} \sum_{i<j}f^{n,\varepsilon}_{ij}(\boldsymbol{q}) = 0 \implies \max\left(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}), 0\right) = 0, \; \forall i < j \implies \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0,\quad \forall i<j.
\end{equation*} Conversely let $\boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, we have \begin{equation*} \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0, \; \forall i<j \implies \max(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}), 0)^{2} = 0 , \; \forall i<j \implies \sum_{i<j} f^{n,\varepsilon}_{ij}(\bo{q}) = 0. \end{equation*} This shows the claim. \end{enumerate} Now having fulfilled all hypotheses of Theorem \ref{annexeA}.\ref{ciarl}, we have that the solution $\boldsymbol{Z}^{n}_{\varepsilon}$ of \eqref{contranint} exists as limit of $\boldsymbol{Z}^{n}_{\varepsilon, \delta}$, the unique solution of \eqref{penalise}, when $\delta$ goes to $0$. Since $\boldsymbol{Z}^n_{\varepsilon}$ satisfies the constraint, i.e. $\boldsymbol{Z}^n_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, the proof extends to every $n \in \NN^{*}$ by induction. \end{proof} \subsection{The constrained problem in terms of a primal-dual problem} We aim at proving that there exists a (in general not unique) dual variable, called the Lagrange multiplier, such that the \textit{primal} problem \eqref{contranint} (whose variable $\boldsymbol{Z}^{n}_{\varepsilon}$ is called the primal variable) is equivalent to a problem involving both primal and dual variables: the \textit{primal-dual} problem. \begin{Def}(Feasible direction) Let $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ be a feasible configuration and let $\bo{w} \in \mathbb{R}^{2N_{p}}$. We say that $\bo{w}$ is a feasible direction if and only if there exists $\eta > 0$ such that for any $0 < s \leq \eta$ we have $\bo{q} + s\bo{w} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$.\\ In other words, $\bo{w}$ is a feasible direction if, starting from $\bo{q}$, one can move by at least $\eta$ in the direction $\bo{w}$ while staying in $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$.
In Figure \ref{direction_memoire} we show the possible directions for $\boldsymbol{q}$ strictly interior to the domain on the one hand, and for $\boldsymbol{q}$ on the boundary of the domain on the other hand. \end{Def} Let $\bo{q}$, $\tilde{\bo{q}} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ such that $\bo{q} \neq \tilde{\bo{q}}$. Since $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is convex, we have $[\bo{q},\tilde{\bo{q}}] \subset \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ and $\bo{w} = \tilde{\bo{q}} - \bo{q}$ is a feasible direction. \begin{figure}[!ht] \centering \begin{tikzpicture}[scale=0.75,x=1mm,y=1mm] \path[draw,fill=white] (8,8) circle (28); \path[draw,fill=lightgray](8,8)circle(17); \draw [dashed] (13,15) circle (7); \draw [red] [thick,->] (13,15) -- (17.25,20.25) node[pos = 0.5, above, sloped]{$\boldsymbol{w}$}; \draw (13,15) circle(0.4) node[left]{$\boldsymbol{q}$}; \draw [thick,->] (-20,-17) -- (-0,-2) node[pos=-0.4, left, above]{$\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$}; \draw (-13,21) node[above, right, rotate=30]{$\varphi^{n,\varepsilon}_{ij} > 0$}; \end{tikzpicture} \hfill \vline \hfill \begin{tikzpicture}[scale=0.75,x=1mm,y=1mm] \path[draw,fill=white] (8,8)circle(28); \path[draw,fill=lightgray](8,8)circle(17); \draw [red] [thick,->] (19.8,19.8) -- (21,13) node[pos = 1.1, below, below]{$\boldsymbol{w}$}; \draw [blue] [thick,->] (19.8,19.8) -- (5,5) node[pos=0.65, left, above, sloped]{$-\nabla \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q})$}; \draw (19.8,19.8) circle(0.5) node[left]{$\boldsymbol{q}$}; \draw (-13,21) node[above, right, rotate=30]{$\varphi^{n,\varepsilon}_{ij} > 0$}; \draw [thick,->] (38,-15) -- (18,-1) node[pos=-0.4, left, above]{$\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$}; \end{tikzpicture} \caption{Feasible directions for $\boldsymbol{q}$ strictly interior to $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ (left) vs.
$\bo{q}$ on the boundary (right).} \label{direction_memoire} \end{figure} \begin{Def}\cite{Allairel05}\label{feasible_directions_memoire} Let $\boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, for any fixed $\varepsilon > 0$ we define the cone of feasible directions at $\boldsymbol{q}$ by \begin{equation*} \boldsymbol{C}(\boldsymbol{q}) = \left\{ \boldsymbol{w}\in \mathbb{R}^{2N_{p}}, \, \exists \boldsymbol{q}^{r} \in \left(\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})\right)^{\mathbb{N}}, \exists \, \delta^{r} \in (\mathbb{R}_{+}^{\ast})^{\mathbb{N}}, \boldsymbol{q}^{r} \to \boldsymbol{q},\, \delta^{r} \to 0 \text{ and } \lim_{r \to \infty} \dfrac{\boldsymbol{q}^{r} - \boldsymbol{q}}{\delta^{r}} = \boldsymbol{w} \right\}. \end{equation*} \end{Def} \begin{Rmk}\label{rmks-cone} $\boldsymbol{C}(\boldsymbol{q})$ is a cone in the sense that $\boldsymbol{0} \in \boldsymbol{C}(\boldsymbol{q})$ (take $\boldsymbol{q}^{r} = \boldsymbol{q}$ for any $r$) and if $\boldsymbol{w} \in \boldsymbol{C}(\boldsymbol{q})$ we have that $\lambda \boldsymbol{w} \in \boldsymbol{C}(\boldsymbol{q})$ for any $\lambda > 0$. Moreover we have the following: \begin{itemize} \item If $\boldsymbol{q}$ is strictly interior to the domain $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, we have that $\boldsymbol{C}(\boldsymbol{q})= \mathbb{R}^{2N_{p}}$. It suffices to take $\boldsymbol{q}^{r} = \boldsymbol{q} + \dfrac{1}{r}\boldsymbol{w}$ for all $\boldsymbol{w} \in \mathbb{R}^{2N_{p}}$ and $r$ large enough (see the left hand side of Figure \ref{direction_memoire}). \item Since $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is convex, $\boldsymbol{C}(\boldsymbol{q}) = \left\{\boldsymbol{w} - \boldsymbol{q} : \boldsymbol{w} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \right\}$. It suffices to take $\boldsymbol{q}^{r} = \boldsymbol{q} + \dfrac{1}{r}(\boldsymbol{w} - \boldsymbol{q})$ for all $r$.
\end{itemize} \end{Rmk} For any $\boldsymbol{q} \in \boldsymbol{K} (\boldsymbol{Z}^{n-1}_{\varepsilon})$, the cone $\bo{C}(\bo{q})$ in Definition \ref{feasible_directions_memoire} can be seen as the set of all vectors which are tangent at $\boldsymbol{q}$ to a curve lying in $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ and passing through $\boldsymbol{q}$. More precisely $\bo{C}(\bo{q})$ is the set of all possible directions of variation from $\bo{q}$ which guarantee that one stays in $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. The main issue here is that a closed form of $\boldsymbol{C}(\boldsymbol{q})$ is not always available. Nevertheless, in some specific cases, when the so-called \textit{qualification conditions} hold, one may obtain an explicit form of $\boldsymbol{C}(\boldsymbol{q})$.\\ For any $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$, we have that: \begin{itemize} \item if $\varphi_{ij}^{n,\varepsilon}(\boldsymbol{q}) < 0$, for any direction $\boldsymbol{w} \in \mathbb{R}^{2N_{p}}$ and $\eta > 0$ small enough, we have that $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \boldsymbol{w}) \leq 0$ (see Figure \ref{direction_memoire} on the left hand side). We say that the constraint $ij$ is \textit{nonactive}. \item If $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q})=0$ we want the direction $\boldsymbol{w}$ to satisfy the condition $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \boldsymbol{w}) \leq 0$ for $i<j$, in order to ensure that all the constraints are satisfied for $\boldsymbol{q} + \eta \boldsymbol{w}$ (see Figure \ref{direction_memoire} on the right hand side).
Such conditions are called \textit{qualification conditions}.\\ But since the functions $\varphi^{n,\varepsilon}_{ij}$ are affine, for any $\bo{w} \in \mathbb{R}^{2N_{p}}$ and $\eta > 0$ we have \begin{equation*} \varphi^{n,\varepsilon}_{ij}(\bo{q}) = 0 \implies \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \bo{w}) = - \eta \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot \bo{w}, \quad \forall i<j. \end{equation*} Hence, if there exists a direction $\overline{\bo{w}} \in \mathbb{R}^{2N_{p}}$ such that $\varphi^{n,\varepsilon}_{ij}(\bo{q} + \eta \overline{\boldsymbol{w}}) \leq 0$, we necessarily have $\boldsymbol{G}_{ij}(\boldsymbol{Z}^{n-1}_{\varepsilon})\cdot \overline{\bo{w}} \geq 0$. Such a direction exists: it suffices to take $\overline{\bo{w}} = \bo{0}$. We say that the constraints \eqref{constSet} are qualified at $\bo{q}$. \end{itemize} \begin{Rmk} Note that $\bo{q}$ above is chosen arbitrarily. Moreover $\boldsymbol{Z}^{n}_{\varepsilon}$ belongs to $ \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ for any time step, so that the constraints \eqref{constSet} are qualified at $\boldsymbol{Z}^{n}_{\varepsilon}$. \end{Rmk} \begin{Def}\cite{Allairel05}\label{qualified_memoire} Let $ \bo{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, we define the set of active constraints by \begin{equation*} Ind(\bo{q}) := \left\{1\leq i<j \leq N_{p} : \varphi^{n,\varepsilon}_{ij}(\bo{q})=0 \right\}. \end{equation*} $Ind(\boldsymbol{q})$ is also called the set of saturated constraints. \end{Def} \begin{Rmk} Let $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. We have that \begin{equation}\label{cone_dir_adm_memoire} \boldsymbol{C}(\boldsymbol{q}) = \left\{ \boldsymbol{w} \in \mathbb{R}^{2N_{p}}: \, \boldsymbol{G}_{ij}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \cdot \boldsymbol{w} \geq 0, \; \forall (i,j) \in Ind(\boldsymbol{q}) \right\}.
\end{equation} \end{Rmk} \begin{Def}\cite{Ciarlet89} Let $V$ and $M$ be two sets and consider $L: V \times M \longrightarrow \mathbb{R}$.\\ The couple of points $(u,\lambda) \in V\times M$ is called a saddle point of $L$ if $u$ is the minimum of $L(\cdot, \lambda): v \in V \longmapsto L(v,\lambda) \in \mathbb{R}$ and $\lambda$ is the maximum of $L(u,\cdot): \mu \in M \longmapsto L(u,\mu) \in \mathbb{R}$. In other words, $(u, \lambda)$ is a saddle point of $L$ if it satisfies \begin{equation*} \sup_{\mu\, \in \, M} L(u,\mu) = L(u,\lambda) = \inf_{v \, \in \, V} L(v,\lambda). \end{equation*} \end{Def} From now on $V:=\mathbb{R}^{2N_{p}}$ and $M:=(\mathbb{R}_{+})^{N_{c}}$, where $N_{c} := N_{p}(N_{p} - 1)/2$ is the maximal number of contacts. We introduce the Euler-Lagrange equations associated with \eqref{contranint} and investigate the existence of optimal points. To this end, for $\boldsymbol{\mu} = (\mu_{ij})_{i<j}$, we define the Lagrangian $L: \mathbb{R}^{2N_{p}}\times \mathbb{R}^{N_{c}}_{+} \longrightarrow \mathbb{R}$ by \begin{equation}\label{Lag-op_memoire} L(\boldsymbol{q}, \boldsymbol{\mu}) = \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} \left| q_{i}-Z^{n-l}_{\varepsilon,i}\right|^{2} R_{l,i} + F(\boldsymbol{q}) +\sum_{i<j}\mu_{ij}\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}). \end{equation} Since for all $n$, the mappings $E_{n,\varepsilon}$ and $\varphi^{n,\varepsilon}_{ij}$, $i<j$, are convex, continuous in $\mathbb{R}^{2N_{p}}$ and differentiable in $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, and the constraints are qualified at $\boldsymbol{Z}^{n}_{\varepsilon}$, the KKT theorem (cf.
Theorem \ref{annexeA}.\ref{kkt_cond}) guarantees that \eqref{contranint} is equivalent to the existence of $\boldsymbol{\lambda}^{n}_{\varepsilon} = (\lambda^{n,\varepsilon}_{ij})_{i<j} \in \left( \mathbb{R}_{+}\right)^{N_{c}} $ such that $(\boldsymbol{Z}^{n}_{\varepsilon}, \boldsymbol{\lambda}_{\varepsilon}^{n})$ is a saddle point of the Lagrangian \eqref{Lag-op_memoire} in $\mathbb{R}^{2N_{p}}\times \mathbb{R}^{N_{c}}_{+}$. This can be rephrased as follows: $\boldsymbol{Z}^{n}_{\varepsilon}$ is a solution of \eqref{contranint} if and only if there exists $\boldsymbol{\lambda}^{n}_{\varepsilon} = \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon})$ such that \begin{equation}\label{KKTconditions_memoire} \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \boldsymbol{0},\; \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) \geq \boldsymbol{0}, \; \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon})\cdot \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) = 0; \, E^{\prime}_{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) + \sum_{i<j} \lambda^{n,\varepsilon}_{ij}(\boldsymbol{Z}^{n}_{\varepsilon}) (\varphi^{n,\varepsilon}_{ij})^{\prime}(\boldsymbol{Z}^{n}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where $\boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{q}) := \left( \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \right)_{i<j}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}^{N_{c}}$ is the vectorized form of the constraint functions.
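To make the role of the penalty parameter and of the multiplier concrete, here is a hedged toy sketch (this is not the model's energy: we take a quadratic $E(\boldsymbol{q}) = \frac{1}{2}|\boldsymbol{q} - \boldsymbol{t}|^{2}$ and a single affine constraint $\varphi(\boldsymbol{q}) = \boldsymbol{g}\cdot\boldsymbol{q} + b \leq 0$; all names are our choices). It illustrates that the penalised minimiser violates the constraint by at most $O(\delta)$ and that $\max(\varphi,0)/\delta$ approximates the KKT multiplier:

```python
import numpy as np

def penalised_minimiser(target, g, b, delta, steps=20000):
    """Gradient descent on E_delta(q) = 0.5*|q - target|^2
    + (1/(2*delta)) * max(g.q + b, 0)^2 (quadratic penalty of the constraint)."""
    lr = 1.0 / (1.0 + np.dot(g, g) / delta)  # step below the gradient's Lipschitz bound
    q = np.zeros_like(target)
    for _ in range(steps):
        viol = max(np.dot(g, q) + b, 0.0)    # max(phi(q), 0)
        q = q - lr * ((q - target) + (viol / delta) * g)
    lam = max(np.dot(g, q) + b, 0.0) / delta  # approximate Lagrange multiplier
    return q, lam
```

With $\boldsymbol{t}=(1,1)$, $\boldsymbol{g}=(1,0)$ and $b=0$ (constraint $q_{1}\leq 0$), the constrained minimiser is $(0,1)$ with multiplier $\lambda = 1$; the penalised one returns $q_{1} = O(\delta)$ and a multiplier estimate tending to $1$ as $\delta \to 0$.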
\subsection{Energy estimates and compactness criterion} \begin{Prop}\label{estimation_energie} Under assumptions \ref{Assump}, if $(\bo{R}_{l})_{l \in \mathbb{N}}$ and $(\bo{Z}^{n}_{\varepsilon})_{n=1,2,\cdots,N}$ are defined as above, there exists a constant $K_{0}$ independent of both $\varepsilon$ and $\Delta a$ such that \begin{equation}\label{energy-estimate-memoire} \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|Z^{n}_{\varepsilon,i} -Z^{n-l}_{\varepsilon,i}\right|^{2}R_{l,i} + \Delta t\sum_{m=1}^{n} D^{m}_{\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq K_{0} + F(\boldsymbol{Z}^{0}_{p}), \end{equation} where the dissipation term reads \begin{equation*} D^{n}_{\varepsilon} := \dfrac{\Delta a}{2} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} |U^{n-1}_{l,\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i}, \text{ and } U^{n}_{l,\varepsilon,i} :=\dfrac{1}{\varepsilon}( Z^{n}_{\varepsilon,i}-Z^{n-l}_{\varepsilon,i}), \quad \forall i=1,\cdots,N_{p},\; l \in \mathbb{N}^{\ast}. \end{equation*} \end{Prop} \begin{proof} By definition of the minimization process \begin{eqnarray*} E_{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) & \leq & E_{n,\varepsilon}(\boldsymbol{Z}^{n-1}_{\varepsilon}) = \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=2}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{eqnarray*} so that by a change of index, \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-1-l}_{\varepsilon,i}|^{2}R_{l+1,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{equation*} where we have set \begin{equation*} I_{n,\varepsilon} := \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i}.
\end{equation*} Since $R_{l,i}$ solves \eqref{contRho}, we have that \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) + \dfrac{\Delta a}{2\varepsilon} \dfrac{\Delta t}{\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-1-l}_{\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \leq I_{n-1,\varepsilon} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{equation*} so that by induction over $n$, \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) + \dfrac{\Delta a}{2\varepsilon} \dfrac{\Delta t}{\varepsilon} \sum_{m=1}^{n} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|Z^{m-1}_{\varepsilon,i} - Z^{m-1-l}_{\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \leq I_{0,p} + F(\boldsymbol{Z}^{0}_{p}). \end{equation*} Now we need an upper bound for $I_{0,p}$. Indeed, for any fixed $i \in \{1,2,\cdots,N_{p}\}$, \begin{equation*} \left|Z^{0}_{\varepsilon,i} - Z^{-l}_{\varepsilon,i}\right| \leq \varepsilon \Delta a C_{z_{p,i}} l, \end{equation*} so that \begin{equation*} I_{0,p} := \dfrac{\Delta a}{2\varepsilon}\sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}\left| Z^{0}_{\varepsilon,i} - Z^{-l}_{\varepsilon,i} \right|^{2}R_{l,i} \leq \dfrac{\varepsilon}{2} \sum_{i=1}^{N_{p}}C_{z_{p,i}}^{2} \mu_{2,i}. \end{equation*} It then follows that \begin{equation*} I_{n,\varepsilon} + \Delta t\sum_{m=1}^{n}D^{m}_{\varepsilon } + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \underbrace{ \dfrac{\varepsilon}{2}\sum_{i=1}^{N_{p}}C^{2}_{z_{p,i}}\mu_{2,i}}_{:=K_{0}} + F(\boldsymbol{Z}^{0}_{p}), \end{equation*} which is the claim. \end{proof} \begin{Lemma}\label{boundness} Under the same hypotheses as in Proposition \ref{estimation_energie}, the sequence $(\bo{Z}^{n}_{\varepsilon})_{n \in \mathbb{N}}$ is bounded. \end{Lemma} \begin{proof} Assume that there exists a subsequence $(\bo{Z}^{n_{k}}_{\varepsilon})_{k \in \mathbb{N}}$ such that $|\bo{Z}^{n_{k}}_{\varepsilon}| \underset{k \to \infty}{\longrightarrow} \infty$.
Since $F$ is coercive, for all $M > 0$ there exists $k_{0} \in \mathbb{N}$ such that for all $k > k_{0}$, $F(\bo{Z}^{n_{k}}_{\varepsilon}) > M$, which contradicts the bound $F(\bo{Z}^{n}_{\varepsilon}) \leq K_{0} + F(\bo{Z}^{0}_{p})$ provided by \eqref{energy-estimate-memoire}. This proves that any subsequence $(\bo{Z}^{n_{k}}_{\varepsilon})_{k}$ is bounded, so that $(\bo{Z}^{n}_{\varepsilon})_{n}$ is bounded. \end{proof} \begin{Theo}$($Compactness$)$ \label{theo_compactness} Under assumptions \ref{Assump} (i)--(iii), there exists a constant $C > 0$, depending only on $\overline{\mu}_{2}, \underline{\mu_{0}}, \overline{\mu_{0}}, \overline{\zeta}$, such that \begin{equation}\label{compactness} \Delta t \sum_{n=1}^{N}\sum_{i=1}^{N_{p}} \left| \dfrac{Z^{n}_{\varepsilon,i}-Z^{n-1}_{\varepsilon,i}}{\Delta t} \right|^{2} \leq C. \end{equation} \end{Theo} \noindent Before performing the proof, we introduce the following notation: $\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}:= \boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{n-1}_{\varepsilon}, \quad \delta \boldsymbol{\mathcal{L}}^{n-\frac{1}{2}}_{\varepsilon}:= \boldsymbol{\mathcal{L}}^{n}_{\varepsilon} - \boldsymbol{\mathcal{L}}^{n-1}_{\varepsilon}$, where the discrete delay operator is $\boldsymbol{\mathcal{L}}^{n}_{\varepsilon} = (\mathcal{L}^{n}_{\varepsilon,i})_{i} \text{ with } \mathcal{L}^{n}_{\varepsilon,i} = \dfrac{\Delta a}{\varepsilon} \sum_{l=1}^{\infty} (Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i})R_{l,i}, \quad \forall i \in \{1,\dots,N_p\}. $ \begin{proof} First we easily check that the global elongation variable solves \begin{equation*} \varepsilon \dfrac{\textbf{U}^{n}_{\varepsilon,l} - \textbf{U}^{n-1}_{\varepsilon,l}}{\Delta t} + \dfrac{\textbf{U}^{n-1}_{\varepsilon,l} - \textbf{U}^{n-1}_{\varepsilon,l-1} }{\Delta a} = \dfrac{\textbf{Z}^{n}_{\varepsilon} -\textbf{Z}^{n-1}_{\varepsilon}}{\Delta t}.
\end{equation*} Multiplying this equation componentwise by $R_{l,i}$ and summing over the index $l \in \mathbb{N}^{\ast}$, we obtain \begin{equation}\label{T} \dfrac{\varepsilon}{\Delta t} \delta \mathcal{L}^{n-\frac{1}{2}}_{\varepsilon,i} + \sum_{l=1}^{\infty} \big({U}^{n-1}_{\varepsilon,l,i}-{U}^{n-1}_{\varepsilon,l-1,i}\big) R_{l,i} = \dfrac{1}{\Delta t}\underbrace{\left(\Delta a \sum_{l=1}^{\infty} R_{l,i} \right)}_{=:\theta_{\Delta,i} } \delta{Z}^{n-\frac{1}{2}}_{\varepsilon,i}, \quad i=1,\cdots, N_{p}. \end{equation} Moreover, since $R_{l,i}$ solves \eqref{discreteRho}, we have that \begin{eqnarray*} \sum_{l= 1}^{\infty} \big({U}^{n-1}_{\varepsilon,l,i} - {U}^{n-1}_{\varepsilon,l-1,i}\big) R_{l,i} & = & \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l,i}-\sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l-1,i}R_{l,i} = \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l,i} - \sum_{l=0}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l+1,i} \\ & = & \Delta a \sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l,i} \zeta_{l+1,i} R_{l+1,i}, \quad i=1,\cdots,N_{p}, \end{eqnarray*} which, plugged into \eqref{T}, gives \begin{equation*} \dfrac{\varepsilon}{\Delta t} \delta \mathcal{L}^{n-\frac{1}{2}}_{\varepsilon,i} + \Delta a \sum_{l=1}^{\infty}{U}^{n-1}_{\varepsilon,l,i}\zeta_{l+1,i}R_{l+1,i} = \theta_{\Delta,i}\dfrac{\delta Z^{n-\frac{1}{2}}_{\varepsilon,i}}{\Delta t}, \quad i =1,\cdots,N_{p}.
\end{equation*} On the other hand, denoting by \begin{equation*} H^{n}_{\varepsilon,i}:= \sum_{k<j}\lambda^{n,\varepsilon}_{kj}(\varphi^{n,\varepsilon}_{kj})_{i}^{'}(\bo{Z}^{n}_{\varepsilon}) \end{equation*} the $i$th component of the non-penetration velocity, we have by the optimality conditions \eqref{KKTconditions_memoire} that \begin{equation}\label{Africa} \theta_{\Delta,i}\dfrac{\delta Z^{n-\frac{1}{2}}_{\varepsilon,i}}{\Delta t} + \dfrac{\varepsilon}{\Delta t} (H^{n}_{\varepsilon,i}-H^{n-1}_{\varepsilon, i})= \Delta a \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon, l,i}\zeta_{l+1,i}R_{l+1,i}- \dfrac{\varepsilon}{\Delta t}\left[F_{i}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) - F_{i}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon})\right],\quad \forall i. \end{equation} Since the mappings $\left( \varphi^{n,\varepsilon}_{kj}\right)_{k<j}$ are convex and differentiable, using \cite[Proposition 10.1.4]{Allairel05} we have \begin{equation*} (\varphi^{n,\varepsilon}_{kj})^{'}(\bo{Z}^{n-1}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon} \leq \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n}_{\varepsilon}) - \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n-1}_{\varepsilon}) \leq (\varphi^{n,\varepsilon}_{kj})^{'}(\bo{Z}^{n}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon}. \end{equation*} Moreover, since at each time step $\sum_{k<j} \lambda^{n,\varepsilon}_{kj}\varphi^{n,\varepsilon}_{kj}(\boldsymbol{Z}^{n}_{\varepsilon})=0$ with $ \varphi^{n,\varepsilon}_{kj}(\boldsymbol{Z}^{n}_{\varepsilon}) \leq 0$ and $\lambda^{n,\varepsilon}_{kj}\geq 0$ for any $k < j$, \begin{equation*} 0 \leq - \sum_{k<j}\left\{\lambda^{n,\varepsilon}_{kj} \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n-1}_{\varepsilon}) + \lambda^{n-1,\varepsilon}_{kj} \varphi^{n-1,\varepsilon}_{kj}(\bo{Z}^{n}_{\varepsilon}) \right\} \leq (\bo{H}^{n}_{\varepsilon} - \bo{H}^{n-1}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon}.
\end{equation*} We multiply \eqref{Africa} by $\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}$ in order to obtain \begin{equation}\label{cp} \underline{\theta} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \left( \boldsymbol{S}^{n}_{\varepsilon} - \dfrac{\varepsilon}{\Delta t}(\boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon})-\boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon}))\right) \cdot \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}, \end{equation} where $\underline{\theta}:= \min_{i}\theta_{\Delta,i}$ and $ S^{n}_{\varepsilon, i}:= \Delta a \sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l,i}\zeta_{l+1,i}R_{l+1,i},$ for all $i$. Since $F$ is strictly convex, we have $\left(\boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) - \boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \right)\cdot (\boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{n-1}_{\varepsilon}) > 0$, so that \begin{equation*} \underline{\theta} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \boldsymbol{S}^{n}_{\varepsilon}\cdot \delta \boldsymbol{Z}^{n-\frac{1} {2}}_{\varepsilon} \leq \dfrac{\Delta t}{\gamma} \left|\boldsymbol{S}^{n}_{\varepsilon}\right|^{2} + \dfrac{\gamma}{\Delta t} \left|\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}, \quad \forall \gamma > 0, \end{equation*} where we have used Young's inequality. It follows that \begin{equation*} (\underline{\theta} - \gamma)\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{\Delta t}{\gamma} \left|\boldsymbol{S}^{n}_{\varepsilon}\right|^{2}, \quad \forall \gamma > 0.
\end{equation*} Moreover \begin{equation*} |\boldsymbol{S}^{n}_{\varepsilon}|^{2} = \sum_{i=1}^{N_{p}} \Delta a^{2}\left|\sum_{l=1}^{\infty} U^{n-1}_{l,\varepsilon,i} R_{l+1,i} \zeta_{l+1,i}\right|^{2} \\ \leq \underbrace{2 \Delta a \overline{\zeta}\, \overline{R}}_{:=K_{1}} \left( \dfrac{\Delta a}{2} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|U^{n-1}_{l,\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \right) \leq K_{1}D^{n}_{\varepsilon}, \end{equation*} where the first inequality follows from Jensen's inequality. It follows that \begin{equation*} (\underline{\theta} - \gamma)\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma} \Delta t D^{n}_{\varepsilon}, \quad \forall n=1,2,\cdots,N. \end{equation*} Summing the latter inequality over $n$ gives \begin{equation*} (\underline{\theta} -\gamma)\sum_{n=1}^{N} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma } \left(\Delta t \sum_{n=1}^{N} D^{n}_{\varepsilon}\right), \quad \forall \gamma > 0, \end{equation*} which by the energy estimate \eqref{energy-estimate-memoire} gives \begin{equation*}\label{L2} (\underline{\theta} - \gamma)\sum_{n=1}^{N}\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma}K_{0} + \dfrac{K_{1}}{\gamma}\left( F(\boldsymbol{Z}^{0}_{p}) - F(\boldsymbol{Z}^{N}_{\varepsilon}) \right), \quad \forall \gamma > 0. \end{equation*} By Lemma \ref{boundness}, there exist two constants $K_{2}$ and $K_{3}$, independent of $\varepsilon$ and $\Delta t$, such that \begin{equation*} K_{2} := \dfrac{K_{1}}{\gamma}K_{0} \; \text{ and } K_{3} \geq \dfrac{K_{1}}{\gamma}\left( F(\boldsymbol{Z}^{0}_{p}) - F(\boldsymbol{Z}^{N}_{\varepsilon})\right), \end{equation*} so that \begin{equation*} (\underline{\theta} - \gamma)\sum_{n=1}^{N}\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq K_{2} + K_{3}, \quad \forall \gamma > 0.
\end{equation*} Hence, choosing $\gamma \in (0, \underline{\theta})$, the constant $C := \frac{K_{2} + K_{3}}{\underline{\theta} - \gamma}$ is such that \eqref{compactness} holds. This gives a bound on the discrete time derivative of $\boldsymbol{\tilde{z}}_{\varepsilon,\Delta}$ in $L^{2}((0,T))$ and ends the proof. \end{proof} \subsection{Convergences toward variational inclusions} This part is devoted to the convergence of the discrete model's solution toward the solution of the continuous variational inclusion when $\Delta a$ goes to $0$ while $\varepsilon > 0$ is kept fixed. We then let $\varepsilon$ go to $0$ and prove that the resulting limit $\bo{z}_{0}$ solves a weighted differential inclusion. To this end, we prove that the constrained minimization problem is equivalent to a variational inclusion (through projections onto closed, nonempty and convex sets), in order to deal with the convergence of the discrete problem to the continuous one when $\Delta a$ is small enough.\\ We mention that the set of admissible configurations is not convex (see Figure \ref{lack_convexity}), so that the projection onto $\boldsymbol{Q}_{0}$ is not well defined in general. Nevertheless, as shown in \cite[Proposition 3.12 p.51]{venel08}, there exists $\eta > 0$ such that $P_{\boldsymbol{Q}_{0}}\boldsymbol{q}$ is well defined for $\boldsymbol{q} \in \mathbb{R}^{2N_{p}}$ satisfying $\mathrm{dist}(\boldsymbol{Q}_{0},\boldsymbol{q}) < \eta$. We say that $\boldsymbol{Q}_{0}$ is $\eta$-\textit{prox-regular} or uniformly \textit{prox-regular}; see Appendix \ref{annexeA} or \cite{venel08} for more details.
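The lack of convexity invoked above can be checked numerically on the configurations of Figure \ref{lack_convexity}. In the Python sketch below (an illustration only; the helper \texttt{min\_gap} and the coordinates, read off the figure with disks of radius $0.5$, are ours), both endpoint configurations are admissible while their midpoint produces overlapping disks:

```python
import numpy as np

# Numerical check of the non-convexity of the admissible set Q_0 for two
# disks of radius 0.5.  `min_gap` is an illustrative helper, not notation
# from the paper: a configuration is admissible iff the two centers are
# at least 2R apart, i.e. min_gap >= 0.
R = 0.5

def min_gap(q):
    """Distance between the two centers minus the sum of the radii."""
    return float(np.linalg.norm(q[1] - q[0]) - 2*R)

q       = np.array([[0.0, 0.0], [0.0, 1.0]])   # disks stacked vertically
q_tilde = np.array([[0.0, 0.0], [1.0, 0.0]])   # disks side by side
q_bar   = 0.5 * (q + q_tilde)                  # midpoint configuration

print(min_gap(q), min_gap(q_tilde), min_gap(q_bar))
# both endpoints are admissible (gap 0), but the midpoint overlaps
# (gap = sqrt(1/2) - 1 < 0), so Q_0 is not convex along this segment
```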
\begin{figure}[ht] \begin{center}\scalebox{.85}{ \begin{tikzpicture} \draw[thick,->] (-1.,0) -- (1.5,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw (0,0) circle (0.5); \draw (0,1) circle (0.5); \draw[ball color=black](-0.5,-0.5) node[below]{$q_{1}$}; \draw[ball color=black](0.75,1) node[below]{$q_{2}$}; \draw[ball color=black](0,-2) node[below]{$\boldsymbol{q}=(q_{1},q_{2})$}; \end{tikzpicture} \quad \begin{tikzpicture} \draw[thick,->] (-1,0) -- (2,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw[ball color=black](-0.5,1) node[below]{$\tilde{q}_{1}$}; \draw[ball color=black](1,1.2) node[below]{$\tilde{q}_{2}$}; \draw (0,0) circle (0.5); \draw (1,0) circle (0.5); \draw[ball color=black](0,-2) node[below]{$\boldsymbol{\tilde{q}} = (\tilde{q}_{1},\tilde{q}_{2} )$}; \end{tikzpicture} \quad \begin{tikzpicture} \draw[thick,->] (-1,0) -- (1.5,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw (0,0) circle (0.5); \draw (0.5,0.5) circle (0.5); \draw[ball color=black](-0.6,1) node[below]{$\overline{q}_{1}$}; \draw[ball color=black](0.7,0.8) node[below]{$\overline{q}_{2}$}; \draw[ball color=black](0.5,-2) node[below]{$\boldsymbol{\overline{q}}= \frac{1}{2}(\boldsymbol{q}+\boldsymbol{\tilde{q}})$}; \end{tikzpicture}} \end{center} \caption{Lack of convexity of $\boldsymbol{Q}_{0}$.} \label{lack_convexity} \end{figure} \subsubsection{Expression of the contact model as a variational inclusion} We use the fact that $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is convex to write the constrained minimization problem as a projection onto a convex set. \begin{Prop}\label{prop.projection} Suppose that assumption \ref{Assump} (iii) holds.
For any $\varepsilon > 0$, the solution of \eqref{Eq1_discret} also satisfies: \begin{equation}\label{projection} \bo{Z}^{n}_{\varepsilon} = P_{\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon})}\left(\bo{Z}^{n}_{\varepsilon} - \Delta t\boldsymbol{\mathcal{L}}^{n}_{\varepsilon} - \Delta t \boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \right), \quad n=0,\cdots, N-1. \end{equation} \end{Prop} \begin{proof} Since $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is nonempty, closed and convex and the map $\boldsymbol{q} \mapsto E_{n,\varepsilon}(\boldsymbol{q})$ is differentiable at $\bo{Z}^{n}_{\varepsilon}$, by the Euler inequality (see \cite[Theorem 10.2.1 p. 307]{Allairel05}) we have that \begin{equation*} \langle (\boldsymbol{E}_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon}), \boldsymbol{q}- \boldsymbol{Z}^{n}_{\varepsilon} \rangle \geq 0, \quad \forall \boldsymbol{q} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} Since $\Delta t > 0$, this is equivalent to \begin{equation*} \langle \big(\boldsymbol{Z}^{n}_{\varepsilon}-\Delta t (\boldsymbol{E}_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon})\big) - \boldsymbol{Z}^{n}_{\varepsilon}, \boldsymbol{q} -\boldsymbol{Z}^{n}_{\varepsilon} \rangle \leq 0, \quad \forall\boldsymbol{q} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} The latter inequality is nothing but the characterization of the projection onto $\bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ \cite[Theorem 5.2 p.132]{Haim11}, i.e. \begin{equation*} \boldsymbol{Z}^{n}_{\varepsilon} = P_{\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})} \left( \boldsymbol{Z}^{n}_{\varepsilon} - \Delta t (\boldsymbol{E}_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \right), \end{equation*} which gives the claim.
\end{proof} By definition of the proximal-normal cone (see \eqref{proximal-normal}) for convex sets, \eqref{projection} is equivalent to \begin{equation}\label{normalCone} \boldsymbol{\mathcal{L}}_{\varepsilon}^{n} + \bo{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \in -N\left(\bo{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\right). \end{equation} \begin{Prop}\label{prop4} Assume that assumption \ref{Assump} (iii) holds. Then the discrete inclusion \eqref{normalCone} has a unique solution $\boldsymbol{Z}^{n}_{\varepsilon}$. \end{Prop} \begin{proof} The existence and uniqueness of solutions of \eqref{Eq1_discret} is given in Theorem \ref{thm1}; by Proposition \ref{prop.projection}, this solution also satisfies \eqref{projection}, which ends the proof. \end{proof} \subsubsection{Convergence for a fixed $\varepsilon > 0$ when $\Delta a $ goes to 0} Let $\varepsilon > 0$ be fixed. We need to check that the above inclusion is satisfied by the piecewise linear function $\boldsymbol{z}_{\varepsilon,\Delta}$, and then take the limit as $\Delta a$ goes to $0$. Consider the piecewise constant functions in time \begin{equation*} \psi_{\Delta}|_{(t^{n-1},t^{n}]}: = t^{n-1}, \; \theta_{\Delta}|_{(t^{n-1},t^{n}]} := t^{n}, \text{ and } \psi_{\Delta}(0) = 0,\; \theta_{\Delta}(0) = 0. \end{equation*} \begin{Lemma} Under the same conditions as in Proposition \ref{prop4}, given the sequence $(\boldsymbol{Z}^{n}_{\varepsilon})_{n=0,\cdots,N}$, the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ defined in \eqref{eq.linear.interp} satisfies the following inclusion \begin{equation}\label{discre_incl_diff} \boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}(t)+ \textbf{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta}(t)) \in -N\Big(\boldsymbol{K}\left( \bo{\tilde{z}}_{\varepsilon,\Delta}(\psi_{\Delta}(t))\right), \bo{\tilde{z}}_{\varepsilon,\Delta}(\theta_{\Delta}(t))\Big) \text{ a.e.
} t \in [0,T], \end{equation} where $\boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}$ is the linear interpolation of $\boldsymbol{\mathcal{L}}^{n}_{\varepsilon}$. \end{Lemma} \begin{proof} Indeed we have that \begin{equation*} \boldsymbol{\mathcal{L}}^{n}_{\varepsilon} + \boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \in -N\left(\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon}),\bo{Z}^{n}_{\varepsilon}\right), \, \forall \, n < N. \end{equation*} On the other hand, evaluating the latter inclusion at the two time steps $t^{n}$ and $t^{n-1}$ and using the definitions of $\bo{z}_{\varepsilon,\Delta}$ and $\bo{\mathcal{L}}_{\varepsilon,\Delta}$, we have that \begin{equation*} \bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}(t) + \bo{A}_{\varepsilon,\Delta}(t) \in - \dfrac{t-t^{n-1}}{\Delta t} N\left(\bo{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\right) - \big(1 - \dfrac{t-t^{n-1}}{\Delta t} \big) N\left(\bo{K}(\bo{Z}^{n-2}_{\varepsilon}), \bo{Z}^{n-1}_{\varepsilon}\right), \; t \in (t^{n-1},t^{n}), \end{equation*} where $\bo{A}_{\varepsilon,\Delta}(t):= \dfrac{t-t^{n-1}}{\Delta t} \bo{F}^{'}(\bo{Z}^{n}_{\varepsilon}) + \dfrac{t^{n}-t}{\Delta t} \bo{F}^{'}(\bo{Z}^{n-1}_{\varepsilon})$. \end{proof} Let $\varepsilon > 0$ be fixed. We prove that the piecewise constant function \eqref{Eq2} uniformly converges toward the solution of our continuous problem as the subdivision step $\Delta a$ goes to $0$. Moreover the limit function satisfies a variational inclusion. \begin{Lemma}\label{equality}\cite{venel08} Let $\boldsymbol{q} \in \boldsymbol{Q}_{0}$; we have equality between the cones \begin{equation}\label{equal_cones} N(\bo{Q}_{0}, \boldsymbol{q}) = N(\bo{ K}(\boldsymbol{q}), \boldsymbol{q}). \end{equation} Hence we shall consider $N\left(\bo{Q}_{0}, \bo{Z}^{n}_{\varepsilon} \right)$ instead of $N\big(\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\big)$ in what follows.
\end{Lemma} \begin{Theo}\label{thm_conv} Let $\varepsilon >0$ be fixed and $T> 0$. If the assumptions \ref{Assump} (i)-(iii) hold, then the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ uniformly converges in $\mathcal{C}\left([0,T];\boldsymbol{Q}_{0} \right)$ when $\Delta a \to 0$. Moreover the limit function denoted by $\textbf{z}_{\varepsilon}$ satisfies \begin{equation}\label{conDiff} \begin{cases} \displaystyle{ \boldsymbol{\mathcal{L}}_ {\varepsilon}[\textbf{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \in -N(\boldsymbol{Q}_{0}, \textbf{z}_{\varepsilon}(t)), \, t > 0}, \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \; t \leq 0, \end{cases} \end{equation} where $\boldsymbol{\mathcal{L}}_{\varepsilon}(t)=\left(\mathcal{L}_{\varepsilon,1}(t),\cdots, \mathcal{L}_{\varepsilon,N_{p}}(t) \right)$ and for any particle $\mathcal{L}_{\varepsilon,i}$ is defined in \eqref{cont-delay-operator}. \end{Theo} \begin{proof} In this proof, we aim at applying the Arzelà-Ascoli theorem. To this purpose, we use compactness arguments as in \cite{venel08}. We have the following: \begin{itemize} \item By definition the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ is equicontinuous on $[0,T]$. \item Moreover by Lemma \ref{boundness}, $\bo{Z}^{n}_{\varepsilon}$ is bounded uniformly with respect to the discretization step $\Delta a$ for any time $t^{n} = n\Delta t$. This implies that $\bo{\tilde{z}}_{\varepsilon,\Delta}$ admits an $L^{\infty}$-bound uniformly with respect to $\Delta a$. \end{itemize} Let $(\Delta_{m})_{m \in \mathbb{N}}$ be a sequence of discretization steps decreasing to $0$.
Thanks to the Arzelà-Ascoli theorem, there exists a subsequence, still denoted by $\left(\bo{\tilde{z}}_{\varepsilon, \Delta_{m}}\right)_{m \in \mathbb{N}}$, which uniformly converges to $\bo{z}_{\varepsilon}\in \bo{\mathcal{C}}$.\\ {We prove first that the limit function belongs to $\bo{Q}_{0}$ for all $t \in [0,T]$.} Indeed since \begin{equation*} \bo{\tilde{z}}_{\varepsilon,\Delta}|_{(t^{n-1}, t^{n})} = \left(\frac{t-t^{n-1}}{\Delta t} \right)\bo{Z}^{n}_{\varepsilon} + \left(1 - \frac{t - t^{n-1}}{\Delta t}\right) \bo{Z}^{n-1}_{\varepsilon}, \end{equation*} and $\bo{Z}^{n}_{\varepsilon}, \bo{Z}^{n-1}_{\varepsilon} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ which is convex, we have that $\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon}) \subset \bo{Q}_{0}$ for all $n = 1,2,\cdots,N$. On the other hand, since $\bo{Q}_{0}$ is closed for the $\mathcal{C}$-topology we have that \begin{equation*} \bo{z}_{\varepsilon}(t) = \lim_{m \to \infty}\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(t) \in \boldsymbol{Q}_{0}, \quad \forall\, t \in [0,T]. \end{equation*} Combining this with the fact that $\bo{z}_{\varepsilon} \in \bo{\mathcal{C}}$, we conclude that $\bo{z}_{\varepsilon} \in \mathcal{C}([0,T], \boldsymbol{Q}_{0})$.\\ We now prove that $\bo{\pi}_{\varepsilon}:= \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N \left(\boldsymbol{Q}_{0},\bo{z}_{\varepsilon}\right)$. In fact, thanks to \eqref{equal_cones}, it suffices to prove that $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon}\right), \quad \forall t \in [0,T]$. \begin{itemize} \item \textbf{Convergence: }First, we prove that the linear interpolation of the delay operator converges to the continuous limit with respect to the norm $||\cdot ||_{\bo{\mathcal{C}}}$.
\\ Indeed for any $i=1,2,\cdots,N_{p}$, we have that \begin{multline*} \tilde{\mathcal{L}}_{\varepsilon,\Delta,i} = \dfrac{\mu_{\Delta,i}}{\varepsilon} \sum_{n=1}^{N} \left\{ \left(Z^{n-1}_{\varepsilon,i} + \dfrac{t - t^{n-1}}{\Delta t}(Z^{n}_{\varepsilon,i} - Z^{n-1}_{\varepsilon,i}) \right) \right\}\mathbbm{1}_{J_{n}}(t) \\ - \dfrac{\Delta a}{\varepsilon} \sum_{n=1}^{N} \left\{\sum_{l=0}^{\infty}\left(Z^{n-l-1}_{\varepsilon,i} + \dfrac{t - t^{n-1}}{\Delta t}(Z^{n-l}_{\varepsilon,i} - Z^{n-l-1}_{\varepsilon,i}) \right)R_{l,i}\right\}\mathbbm{1}_{J_{n}}(t)=: I^{1}_{\Delta,i} - I^{2}_{\Delta,i}, \end{multline*} where we have set $J_{n} := \big((n-1)\Delta t, n\Delta t\big)$. To deal with the convergence of $I_{\Delta,i}^{1}$, we use the fact that $\left|\bo{\rho}_{\Delta} - \bo{\rho}\right|_{L^{1}_{a}}\underset{\Delta \to 0}{\longrightarrow}0$, which for any particle gives \begin{equation*} I_{\Delta,i}^{1} = \dfrac{1}{\varepsilon} \tilde{z}_{\varepsilon, \Delta,i}(t) \int_{\mathbb{R}_{+}}\rho_{\Delta,i}(a)da \underset{\Delta \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \dfrac{1}{\varepsilon} z_{\varepsilon,i}(t) \int_{0}^{\infty}\rho_{i}(a)da, \text{ in } \bo{\mathcal{C}}. \end{equation*} On the other hand, we split the second term as follows \begin{eqnarray*} I^{2}_{\Delta,i} & = & \dfrac{1}{\varepsilon} \sum_{n=1}^{N} \left\{\Delta a \sum_{l=0}^{\infty} Z^{n-l-1}_{\varepsilon,i}R_{l,i} + \dfrac{t-t^{n-1}}{\Delta t} \Delta a \sum_{l=0}^{\infty}(Z^{n-l}_{\varepsilon,i} - Z^{n-l-1}_{\varepsilon,i})R_{l,i} \right\} \mathbbm{1}_{J_{n}}(t) \\ & = & \dfrac{1}{\varepsilon} \sum_{n=1}^{N}\left(\dfrac{t-t^{n-1}}{\Delta t} \int_{\mathbb{R}_{+}}\left(z_{\Delta,i}(n\Delta t - \varepsilon a) - z_{\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a) \right)\rho_{\Delta,i}(a)da \right) \mathbbm{1}_{J_{n}}(t)\\ & & \qquad + \dfrac{1}{\varepsilon} \sum_{n=1}^{N} \left( \int_{\mathbb{R}_{+}}z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon
a)\rho_{\Delta,i}(a)da \right) \mathbbm{1}_{J_{n}}(t) =: \dfrac{1}{\varepsilon} I^{2,1}_{\Delta,i} + \dfrac{1}{\varepsilon} I^{2,2}_{\Delta,i}. \end{eqnarray*} Let us now estimate $|\bo{I}^{2}_{\Delta} - \bo{\tilde{I}}_{\Delta}|$, where for any particle \begin{equation*} \tilde{I}_{\Delta,i} := \dfrac{1}{\varepsilon} \int_{\mathbb{R}_{+}} \tilde{z}_{\varepsilon,i}(t-\varepsilon\Delta a - \varepsilon a)\rho_{\Delta,i}(a)da. \end{equation*} We prove that $\bo{I}^{2}_{\Delta}, \bo{\tilde{I}}_{\Delta} \in \bo{L}^{2}$. Indeed \begin{eqnarray*} \int_{0}^{T} |I^{2,2}_{\Delta,i}(t)|^{2}dt & \leq & \sum_{n=1}^{N}\int_{J_{n}} \left|\int_{\mathbb{R}_{+}}z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a)\rho_{\Delta,i}(a)da \right|^{2} dt \\ & \leq & \sum_{n=1}^{N} \int_{J_{n}} \int_{\mathbb{R}_{+}} \rho_{\Delta,i}(\sigma)d\sigma \int_{\mathbb{R}_{+}} \left|z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a)\right|^{2}\rho_{\Delta,i}(a)dadt, \quad \forall i, \end{eqnarray*} where we have used Jensen's inequality in the last step. Furthermore, since \begin{equation*} \int_{\mathbb{R}_{+}} \rho_{\Delta,i}(a)da = \mu_{0, \Delta,i} < \infty, \quad \forall i, \end{equation*} we have that \begin{equation*} \int_{0}^{T} |I_{\Delta,i}^{2,2}(t)|^{2} dt \leq \mu_{0,\Delta,i}\Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i}\right|^{2}R_{l,i}, \end{equation*} which can be bounded uniformly with respect to $\varepsilon$ since \begin{equation*}\label{jo} \Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i}\right|^{2}R_{l,i} \leq T\left( |z_{\varepsilon, \Delta, i}|^{2}_{L^{\infty}_{t}} + C_{z_{p,i}}^{2} + |z^{-1}_{p,i}|^{2} \right) \int_{\mathbb{R}_{+}}(1+a)^{2}\rho_{\Delta,i}(a)da, \quad \forall i = 1,\cdots,N_{p}. \end{equation*} In the latter inequality, we split the sum over the ages into $l \in \left\{0,1,\cdots,n-1 \right\}$ and $l \in \{n,n+1,\cdots \}$.
In the first part we inserted the past data and used the bound provided by \eqref{compactness}, while in the second part we used the Lipschitz condition on the past data. The same arguments guarantee that $\bo{I}^{2,1}_{\Delta}$ and $\bo{\tilde{I}}_{\Delta}$ belong to $\bo{L}^{2}$.\\ Furthermore, since the past data are Lipschitz and we have the bound \eqref{compactness}, it follows that \begin{equation*} \displaystyle{\int_{0}^{T}\left| \bo{I}^{2}_{\Delta}(t) - \bo{\tilde{I}}_{\Delta}(t)\right|}dt \lesssim \Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i} - Z^{n-l-2}_{\varepsilon,i}\right|^{2}R_{l,i} = O(\Delta a). \end{equation*} Thus $|| \bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} - \bo{\mathcal{L}}_{\varepsilon}||_{\bo{\mathcal{C}}} \longrightarrow 0$ as $m$ grows to infinity.\\ Furthermore, using the fact that $F$ is continuously differentiable and $\bo{\tilde{z}}_{\varepsilon,\Delta_{m}} \to \bo{z}_{\varepsilon}$, we have that \begin{equation*} \bo{\tilde{\pi}}_{\varepsilon,\Delta_{m}} :=\boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} + \boldsymbol{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}) \underset{m \to \infty}{\xrightarrow{\hspace{1.25cm}}} \boldsymbol{\pi}_{\varepsilon} := \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\bo{z}_{\varepsilon}), \quad \forall t \in [0,T] \text{ and } \forall \varepsilon > 0, \end{equation*} which gives the convergence. \item \textbf{Inclusion:} Here we use the same arguments as in \cite{venel08}.\\ We need to prove that \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t) \right), \quad \text{ a.e. } t \in [0,T].
\end{equation*} By Lemma \ref{annexeA}.\ref{equivalences}, \eqref{discre_incl_diff} is equivalent to \begin{eqnarray*} \langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Replacing $\boldsymbol{\xi}$ by $-\boldsymbol{\xi}$ in the above inequality, we have that \begin{eqnarray*} \langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(- \boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Let us now prove that $|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}|$ is bounded uniformly with respect to $\Delta a$. On one hand, since $\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}$ is bounded and $F$ is continuously differentiable, there exists a constant $K_{F}$, independent of $\varepsilon$ and $\Delta a$, such that $\big|\bo{F}^{'}(\boldsymbol{\tilde{z}}_{\varepsilon,\Delta_{m}})\big| \leq K_{F}$.
On the other hand, using the energy estimates and Jensen's inequality, we have \begin{equation}\label{nouniformity} |\bo{\mathcal{L}}^{n}_{\varepsilon}|^{2} \leq \frac{2 C_{0}}{\varepsilon} \sum_{i=1}^{N_{p}} \dfrac{\Delta a}{2\varepsilon} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} \leq \frac{2C_{0}}{\varepsilon}\left|K_{0} + F(\boldsymbol{Z}^{0}_{p}) - F(\bo{Z}^{n}_{\varepsilon})\right|, \end{equation} so that $|\bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\sqrt{\varepsilon}}$, where $K> 0$ is independent of $\Delta a$ and $\varepsilon$; moreover \begin{eqnarray} |\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}| & \leq & \left| \boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} \right| + \left|\bo{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}})\right| \leq \dfrac{K}{\sqrt{\varepsilon}} + K_{F}. \end{eqnarray} Combining the two latter inequalities with this bound yields \begin{equation}\label{last} \big|\langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle \big| \leq \left(\dfrac{K}{\sqrt{\varepsilon}} + K_{F}\right)d_{\bo{K}( \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( - \boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big), \quad \forall \varepsilon > 0.
\end{equation} Using the fact that the distance to a nonempty, closed and convex set is $1$-Lipschitz, and setting \begin{equation*} \tilde{I}_{\varepsilon,\Delta_{m}}(t):= \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(-\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big)\big|, \end{equation*} we have that \begin{eqnarray*} \tilde{I}_{\varepsilon,\Delta_{m}} & \leq & \big| d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( -\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ \\ & & \hspace{8.5em} + \big| d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ \\ & \leq & \big| \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + \underbrace{\big| d_{\bo{K}( \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big|}_{\tilde{J}_{\varepsilon, \Delta_{m}}(t)}.
\end{eqnarray*} \end{itemize} Moreover by Proposition \ref{annexeA}.\ref{convergenceofprojection}, there exists $\nu > 0$ such that for all $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}|\leq \nu$, $\tilde{J}_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0$.\\ Thus for any $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ with $|\boldsymbol{\xi}| \leq \nu$, we have \begin{equation*} 0 \leq \tilde{I}_{\varepsilon,\Delta_{m}} \leq \big| \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| \underset{m \to \infty}{\longrightarrow} 0, \end{equation*} i.e. \begin{equation*} d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon, \Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) \underset{ m \to \infty}{\longrightarrow} d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big). \end{equation*} Since $\varepsilon > 0$ is fixed, equation \eqref{last} finally gives \begin{equation*} \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \; |\boldsymbol{\xi}| \leq \nu, \quad |\langle \boldsymbol{\pi}_{\varepsilon}(t), \boldsymbol{\xi} \rangle| \leq \left(\frac{K}{\sqrt{\varepsilon}} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon}(t))} \big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big), \end{equation*} which by Lemma \ref{annexeA}.\ref{equivalences} is equivalent to \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t)), \quad \forall \varepsilon >0, \end{equation*} ending the proof once we prove that $\tilde{J}_{\varepsilon, \Delta_{m}}$ vanishes in the limit; but this is a consequence of Proposition \ref{annexeA}.\ref{convergenceofprojection}. \end{proof} \subsubsection{Uniqueness of solutions of the continuous problem} \begin{Theo}\label{thm-exist-uniq} Let $\varepsilon > 0$ and $T>0$ be fixed.
Under assumptions \ref{Assump} (i)-(iii), the variational inclusion \eqref{conDiff} has a unique solution $\boldsymbol{z}_{\varepsilon}$ in $\bo{\mathcal{C}}$. \end{Theo} \begin{proof} The existence of the limit $\bo{z}_{\varepsilon}$ is due to compactness. Indeed $\displaystyle{\bo{z}_{\varepsilon}(t) = \lim_{m \to \infty} \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}}(t)$ in $\bo{\mathcal{C}}$.\\ For the uniqueness, we use the fact that $\bo{z}_{\varepsilon} \in \bo{Q}_{0}$. Indeed since $\bo{z}_{\varepsilon} \in \boldsymbol{Q}_{0}$ and solves \eqref{conDiff}, the same arguments as above give \begin{equation*} \begin{cases} \displaystyle{(E^{\varepsilon}_{t})^{'}(\bo{z}_{\varepsilon})} \in - N\left( \bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon} \right), \quad t >0 \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad t \leq 0, \end{cases} \iff \begin{cases} \displaystyle{\bo{z}_{\varepsilon}(t) = \argmin_{\bo{q}\, \in \, \bo{K}(\bo{z}_{\varepsilon})} E^{\varepsilon}_{t}(\boldsymbol{q})}, \quad \forall t > 0 \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad t \leq 0. \end{cases} \end{equation*} For the same reasons as in \eqref{KKTconditions_memoire}, the latter equation is in turn equivalent to the existence of a saddle point $\left( \bo{\lambda}_{\varepsilon}, \boldsymbol{z}_{\varepsilon}\right)$ such that \begin{equation}\label{KKTconditions_memoireCont} \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\boldsymbol{z}_{\varepsilon}) + \sum_{i<j} \lambda^{\varepsilon}_{ij} (\bo{\varphi}^{\varepsilon}_{ij})^{'}(\boldsymbol{z}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where the functions $\varphi^{\varepsilon}_{ij}$ define the interior convex approximation set $\bo{K}(\bo{z}_{\varepsilon})$.\\ Consider two solutions $\bo{z}^{1}_{\varepsilon}, \bo{z}^{2}_{\varepsilon}$ of \eqref{KKTconditions_memoireCont} sharing the same positions $\bo{z}_{p}$ for negative times and the same linkage density $\bo{\rho}$.
We have \begin{equation*} \langle \bo{\mathcal{\hat{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \langle \bo{F}^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \boldsymbol{\hat{z}}_{\varepsilon} \rangle + \left \langle \sum_{i<j} \left[ \lambda^{\varepsilon,2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\boldsymbol{z}^{1}_{\varepsilon})\right], \bo{\hat{z}}_{\varepsilon} \right\rangle = 0, \end{equation*} where $\bo{\hat{z}}_{\varepsilon}:= \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}$ and $\boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}:= \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}^{2}] - \bo{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}]$. Notice once again that since $F$ is convex, we have that \begin{equation*} \langle \bo{F}^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \boldsymbol{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \geq 0, \end{equation*} so that \begin{equation}\label{en_attendant} \langle \bo{\mathcal{\hat{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \left \langle \sum_{i<j} \left[ \lambda^{\varepsilon,2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\boldsymbol{z}^{1}_{\varepsilon})\right], \bo{\hat{z}}_{\varepsilon} \right \rangle \leq 0. \end{equation} Let us consider the second term on the left-hand side of \eqref{en_attendant}.
Since $\varphi^{\varepsilon,1}_{ij}$ and $\varphi^{\varepsilon,2}_{ij}$ are convex, by the same arguments as in the proof of Theorem \ref{theo_compactness}, we have that \begin{equation*} \langle (\bo{\varphi}^{\varepsilon,k}_{ij})^{'}(\bo{z}^{1}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon} \rangle \leq \bo{\varphi}^{\varepsilon,k}_{ij}(\bo{z}^{2}_{\varepsilon}) - \bo{\varphi}^{\varepsilon,k}_{ij}(\bo{z}^{1}_{\varepsilon}) \leq \langle (\bo{\varphi}^{\varepsilon,k}_{ij})^{'}(\bo{z}^{2}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon} \rangle, \quad k \in \{1,2\} \text{ and } i < j, \end{equation*} so that, since the Lagrange multipliers satisfy $\lambda^{\varepsilon,k}_{ij}(t) \geq 0$ for all $i <j$ and $t \in [0,T]$, and $\displaystyle{\sum_{i<j}\lambda^{\varepsilon,k}_{ij} \varphi_{ij}^{\varepsilon,k}(\bo{z}^{k}_{\varepsilon}) = 0}, \; k \in \{1,2\}$, we have that \begin{equation*} 0 \leq \sum_{i<j} \left[ \langle \lambda^{\varepsilon, 2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\bo{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\bo{z}^{1}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon}\rangle \right]. \end{equation*} By \eqref{en_attendant}, this means that \begin{equation}\label{I} \langle \bo{\hat{\mathcal{L}}}_{\varepsilon} , \bo{\hat{z}}_{\varepsilon} \rangle \leq 0.
\end{equation} Then using the elementary inequality\footnote{$\langle a-b,a \rangle \geq \dfrac{1}{2}\big(|a|^{2} - |b|^{2}\big)$ for any $a,b\in \mathbb{R}^{2N_{p}}$.}, we have that \begin{eqnarray*} \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty}\big| \hat{z}_{\varepsilon,i}(t)\big|^{2} \rho_{i}(a)da - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da & \leq & \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \end{eqnarray*} so that by definition of $\bo{\rho}$, \begin{equation}\label{II} \dfrac{\mu_{0,m}}{2\varepsilon}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \quad \forall \varepsilon > 0 \text{ fixed. } \end{equation} Combining \eqref{I} and \eqref{II}, we have \begin{equation*} \left|\bo{\hat{z}}_{\varepsilon}(t)\right|^{2} \leq \dfrac{\overline{\rho}}{\mu_{0,m}} \int_{0}^{t} |\bo{\hat{z}}_{\varepsilon}(s)|^{2}ds, \quad \forall t \in [0,T], \end{equation*} which thanks to Gronwall's lemma gives $|\bo{\hat{z}}_{\varepsilon}| \equiv 0$, i.e. $\bo{z}^{1}_{\varepsilon}(t) = \bo{z}^{2}_{\varepsilon}(t)$ for a.e. $t\in [0,T]$. \end{proof} \subsubsection{Convergence when $\varepsilon$ is small enough} In this section we are interested in the asymptotics as the linkage remodelling rate becomes large. We prove the convergence of $\bo{z}_{\varepsilon}$ in $\bo{\mathcal{C}}$. Nevertheless we cannot use the same arguments as in \cite[Theorem 4.5, p.72]{venel08}, because the delay operator is not uniformly bounded with respect to $\varepsilon$ (see \eqref{nouniformity}). \begin{Theo}\label{delay-friction} Let $T>0$ be fixed.
Under assumptions \ref{Assump} (i)-(iii), when $\varepsilon$ tends to 0 we have that \begin{equation}\label{z_0} \int_{0}^{T}\langle \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}], \bo{\psi}(t)\rangle dt \longrightarrow \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle - \int_{0}^{T} \langle\bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}. \end{equation} \end{Theo} \begin{proof} Let $\bo{z}_{\varepsilon}$ be the unique solution of \eqref{conDiff}. By the energy estimates there exists a constant $C$ independent of $\varepsilon$ such that \begin{equation*} \sum_{i=1}^{N_{p}}\int_{0}^{T} \int_{0}^{\infty} \rho_{i}|u_{\varepsilon,i}|^{2}\zeta_{i}(a)dadt \leq C < \infty. \end{equation*} On the other hand, since the death rate $\zeta$ has a positive lower bound, we have that \begin{equation*} \int_{0}^{T}\left|\bo{\mathcal{L}}_{\varepsilon}\right|^{2}dt = \sum_{i=1}^{N_{p}}\int_{0}^{T}\left| \int_{0}^{\infty}\dfrac{\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a) }{\zeta_{i}(a)}da \right|^{2}dt \leq \dfrac{1}{\underline{\zeta}^{2}} \sum_{i=1}^{N_{p}} \int_{0}^{T} \left|\int_{0}^{\infty}\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a)da\right|^{2}dt, \end{equation*} so that by Jensen's inequality \begin{equation*} \dfrac{1}{\underline{\zeta}^{2}} \sum_{i=1}^{N_{p}} \int_{0}^{T} \left|\int_{0}^{\infty}\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a)da\right|^{2}dt \leq \dfrac{\overline{\zeta}}{\underline{\zeta}^{2}} \mu_{0,M} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty}\rho_{i}|u_{\varepsilon,i}|^{2}\zeta_{i}(a)dadt. \end{equation*} This shows that the delay operator is bounded in $\bo{L}^{2}$ uniformly with respect to $\varepsilon$.
Then, up to a subsequence, there exists $\bo{\mathcal{L}}_{0} \in \bo{L}^{2}$ such that $\bo{\mathcal{L}}_{\varepsilon}$ weakly converges to $\bo{\mathcal{L}}_{0}$ in $\bo{L}^{2}$ when $\varepsilon$ tends to $0$, implying that \begin{equation*} \int_{0}^{T} \langle \bo{\mathcal{L}}_{\varepsilon},\bo{\psi}(t)\rangle dt \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1cm}}} \int_{0}^{T} \langle \bo{\mathcal{L}}_{0},\bo{\psi}(t) \rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} As it stands, we have \begin{itemize} \item[i)] $\partial_{t}\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{L}^{2}$, \item[ii)] $\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{\mathcal{C}}$ and \item[iii)] $|| \bo{\tilde{z}}_{\varepsilon,\Delta} - \bo{z}_{\varepsilon}||_{\bo{\mathcal{C}}} \underset{\Delta \longrightarrow 0}{\xrightarrow{\hspace{1cm}}}0$. \end{itemize} Setting $I[\bo{\rho},\bo{z}_{\varepsilon},\bo{\psi}]:= \int_{0}^{T}\langle \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}],\bo{\psi} \rangle dt$, we split the integral as follows \begin{multline}\label{star} I[\bo{\rho}, \bo{z}_{\varepsilon}, \bo{\psi}] = \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty} \int_{0}^{T}\langle z_{\varepsilon,i}(t), \psi_{i}(t) - \psi_{i}(t+\varepsilon a) \rangle\rho_{i}(a)dadt \, \\ + \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{T} \int_{0}^{\infty} \left\{ \langle z_{\varepsilon,i}(t),\psi_{i}(t+\varepsilon a)\rangle - \langle z_{\varepsilon,i}(t-\varepsilon a), \psi_{i}(t)\rangle \right\} \rho_{i}(a) dadt =: I^{\varepsilon}_{1} + I^{\varepsilon}_{2}. \end{multline} By Lebesgue's dominated convergence theorem, we have that \begin{equation*} I^{\varepsilon}_{1} \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1.5cm}}} -\int_{0}^{T} \langle \bo{\mu}_{1} \boldsymbol{z}_{0}, \partial_{t}\bo{\psi}\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}.
\end{equation*} Splitting $I_{2}^{\varepsilon}$ as $I^{\varepsilon}_{2,1} - I^{\varepsilon}_{2,2}$ and using the same arguments as in \cite[Theorem 5.6 p.29]{Mi20} we have that \begin{equation*} I^{\varepsilon}_{2,1} := \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty}\langle z_{\varepsilon,i}(t),\psi_{i}(t+\varepsilon a)\rangle \rho_{i}(a)da dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \langle \bo{\mu}_{1} \bo{z}_{0}(T),\bo{\psi}(T)\rangle, \end{equation*} and \begin{equation*} I^{\varepsilon}_{2,2} := \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty} \langle z_{\varepsilon,i}(t-\varepsilon a),\psi_{i}(t)\rangle \rho_{i}(a)da dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \langle \bo{\mu}_{1} \bo{z}_{0}(0),\bo{\psi}(0)\rangle. \end{equation*} We gather the above convergences to obtain \begin{equation*} I[\bo{\rho}, \bo{z}_{\varepsilon},\bo{\psi}] \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle -\int_{0}^{T}\langle \bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}. \end{equation*} On the other hand since $\partial_{t}\bo{z}_{0} \in \bo{L}^{2}$ and $\bo{z}_{0} \in \bo{L}^{\infty}$ we have that $\bo{z}_{0} \in \bo{\mathcal{C}}$, so that the integration by parts is well-defined in $\bo{H}^{1}$ and we have \begin{equation*} \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle -\int_{0}^{T}\langle \bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}\rangle dt = \int_{0}^{T}\langle \bo{\mu}_{1}\partial_{t}\bo{z}_{0},\bo{\psi} \rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}.
\end{equation*} This gives that \begin{equation*} \int_{0}^{T} \langle \bo{\mathcal{L}}_{0} - \bo{\mu}_{1}\partial_{t}\bo{z}_{0},\bo{\psi} \rangle dt = 0, \quad \forall \bo{\psi} \in \bo{H}^{1} \iff \bo{\mathcal{L}}_{0} = \bo{\mu}_{1}\partial_{t}\bo{z}_{0} \text{ a.e. } t \in [0,T], \end{equation*} which ends the proof. \end{proof} \begin{Theo}\label{lamda0} Let $\bo{z}_{\varepsilon}$ be the unique solution of \eqref{KKTconditions_memoireCont}. Under hypotheses \ref{Assump} (i)--(iii) there exists $\bo{\lambda}_{0} = \left(\lambda_{ij}^{0} \right)_{i<j} \in L^{2}\left([0,T];(\mathbb{R}_{+})^{N_{c}} \right)$ depending only on time such that \begin{equation*} \sum_{i<j} \int_{0}^{T} \lambda^{\varepsilon}_{ij}(t)\langle \bo{G}_{ij}(\bo{z}_{\varepsilon}), \bo{\psi}(t)\rangle dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \sum_{i<j} \int_{0}^{T}\lambda^{0}_{ij}(t)\langle \bo{G}_{ij}(\bo{z}_{0}),\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} \end{Theo} \begin{proof} Let $\bo{U} := \bo{\mathcal{L}}_{\varepsilon} - \bo{F}^{'}(\bo{z}_{\varepsilon})$ and \begin{equation*} \bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} := \left\{ \bo{\lambda}_{\varepsilon} \in \mathbb{R}^{N_{c}}, \; \sum_{i<j} \lambda^{\varepsilon}_{ij} \bo{G}_{ij}(\bo{z}_{\varepsilon}) = \bo{U}, \; \lambda^{\varepsilon}_{ij} \geq 0 \text{ and } \lambda^{\varepsilon}_{ij} = 0 \text{ if } D_{ij}(\bo{z}_{\varepsilon}) > 0 \right\}.
\end{equation*} If $\bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} \neq \emptyset$, the same arguments as in \cite[Proposition 4.21 p.86]{venel08} guarantee that \begin{equation}\label{riz} \forall \bo{\lambda}_{\varepsilon} \in \bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} \text{ and } \forall i<j, \text{ we have } \lambda^{\varepsilon}_{ij} \leq |\bo{U}|b^{N_{p}}, \text{ where } b := \dfrac{2\sqrt{n_{v}}}{\min\left( \sin\left(\dfrac{\pi}{n_{v} +1}\right), \sin\left(\dfrac{\pi}{N}\right) \right)}, \end{equation} and $n_{v}$ is the maximal number of neighbours that a particle may have.\\ It follows that \begin{equation*} \int_{0}^{T}|\lambda^{\varepsilon}_{ij}|^{2} dt \leq 2 b^{2N_{p}}\int_{0}^{T} \left( |\bo{\mathcal{L}}_{\varepsilon}|^{2} + \big|\boldsymbol{F}^{'}(\bo{z}_{\varepsilon})\big|^{2} \right) dt \lesssim 2b^{2N_{p}}\left( \dfrac{\overline{\zeta}\mu_{0,M}}{\underline{\zeta}} + K^{2} T \right), \quad \forall i<j, \end{equation*} where we used the fact that $\bo{\mathcal{L}}_{\varepsilon} \in \bo{L}^{2}$ on one hand and $|\bo{F}^{'}(\bo{z}_{\varepsilon})| < \infty$ (since $\bo{F}^{'}$ is continuous and $\bo{z}_{\varepsilon}$ is bounded) on the other.\\ Furthermore, since $\bo{Q}_{0}$ is closed and $\bo{z}_{\varepsilon} \in \bo{Q}_{0}$, we have that $\displaystyle{\bo{z}_{0} := \lim_{\varepsilon \to 0}\bo{z}_{\varepsilon} \in \bo{Q}_{0}}$. On the other hand, since $\bo{G}_{ij}$ is defined and continuous in $\bo{Q}_{0}$, we have that \begin{equation*} \bo{G}_{ij}(\bo{z}_{\varepsilon}) \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{\mathcal{C}}, \quad \forall i < j.
\end{equation*} For any $i < j$, up to a subsequence, we have that \begin{equation*} \begin{cases} \bo{\lambda}_{\varepsilon} \rightharpoonup \bo{\lambda}^{0} \text{ in } L^{2}\left([0,T]; (\mathbb{R}_{+})^{N_{c}} \right) \vspace{0.5em} \\ \bo{G}_{ij}(\bo{z}_{\varepsilon}) \longrightarrow \bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{\mathcal{C}}, \end{cases} \end{equation*} so that \begin{equation*} \lambda^{\varepsilon}_{ij} \bo{G}_{ij}(\bo{z}_{\varepsilon}) \rightharpoonup \lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{L}^{2}, \end{equation*} implying that \begin{equation*} \sum_{i<j} \int_{0}^{T} \lambda^{\varepsilon}_{ij}(t) \langle \bo{G}_{ij}(\bo{z}_{\varepsilon})(t),\bo{\psi}(t) \rangle dt \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \sum_{i<j} \int_{0}^{T} \lambda^{0}_{ij}(t) \langle \bo{G}_{ij}(\bo{z}_{0}(t)),\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} This ends the proof. \end{proof} \begin{Theo} Under hypotheses \ref{Assump} (i)-(iii), the unique solution of \eqref{conDiff} converges toward $\bo{z}_{0} \in \bo{\mathcal{C}}$ which in turn solves \begin{equation*} \begin{cases} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}+\bo{F}^{'}(\boldsymbol{z}_{0}) \in -N\left(\bo{K}(\bo{z}_{0}),\bo{z}_{0}\right) \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ \boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{cases} \end{equation*} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i}:= \int_{0}^{\infty}a\rho_{i}(a)da \in \mathbb{R}, \quad \forall i. \end{equation*} Moreover the limit function $\bo{z}_{0}$ is unique.
\end{Theo} \begin{proof} \textit{The primal-dual problem:} as in the proof of Theorem \ref{thm-exist-uniq}, it suffices to prove that there exists $\bo{\lambda}_{0} \in L^{2}\left([0,T];(\mathbb{R}_{+})^{N_{c}}\right)$ depending only on time such that \begin{equation}\label{weak0} \bo{\mu}_{1}\partial_{t}\bo{z}_{0} + \bo{F}^{'}(\bo{z}_{0}) - \sum_{i<j} \lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}) = \bo{0}. \end{equation} The existence of $\bo{z}_{0}$ is due to compactness, since $\displaystyle{\bo{z}_{0} := \lim_{\varepsilon \to 0}\bo{z}_{\varepsilon}}$ where $\bo{z}_{\varepsilon}$ is the unique solution of \eqref{conDiff}. Furthermore, from Theorems \ref{delay-friction} and \ref{lamda0} on one hand and the fact that $F$ is continuously differentiable on the other, we have that $\bo{z}_{0}$ solves \begin{equation*} \int_{0}^{t}\langle \bo{\mu}_{1}\partial_{t}\bo{z}_{0} + \bo{F}^{'}(\bo{z}_{0}) -\sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}), \bo{\psi}\rangle ds = 0, \quad \forall \bo{\psi} \in \bo{H}^{1} \text{ and } \forall t \in [0,T]. \end{equation*} \textit{Uniqueness:} Let $\bo{z}^{1}_{0}$ and $\bo{z}^{2}_{0}$ be two solutions of \eqref{weak0} sharing the same initial positions, i.e. $\bo{z}^{1}_{0}(0) = \bo{z}^{2}_{0}(0)$. We have \begin{equation*} \int_{0}^{t}\langle \bo{\mu}_{1}\partial_{t}\bo{\hat{z}}_{0} + \widehat{\bo{F}^{'}(\bo{z}_{0})}-\sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}^{2}_{0}) + \sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}^{1}_{0}), \bo{\psi}\rangle ds = 0, \; \forall \bo{\psi} \in \bo{\mathcal{C}}\cap \bo{H}^{1}, \end{equation*} where $\bo{\hat{z}}_{0} := \bo{z}^{2}_{0} - \bo{z}^{1}_{0}$ and $\widehat{\bo{F}^{'}(\bo{z}_{0})} := \bo{F}^{'}(\bo{z}^{2}_{0}) - \bo{F}^{'}(\bo{z}^{1}_{0})$. Let us choose $\bo{\psi} = \bo{\hat{z}}_{0}$ in the latter equation.
Since the source term and the constraint functions are convex, by the same arguments as in the proof of Theorem \ref{thm-exist-uniq}, we have that \begin{equation*} \mu_{1,m}\int_{0}^{t}\langle \partial_{s}\bo{\hat{z}}_{0},\bo{\hat{z}}_{0} \rangle ds \leq 0 \implies |\bo{\hat{z}}_{0}(t)|^{2} \leq 0, \quad \forall t \in [0,T], \end{equation*} which proves that $|\bo{\hat{z}}_{0}| \equiv 0$, meaning that $\bo{z}^{1}_{0} = \bo{z}^{2}_{0}$ for almost every $t \in [0,T]$. \end{proof} \subsection{The periodic contact model} \subsubsection{Definition of the periodic signed distance} \begin{Prop}\label{2D} For any $x = (x_{1},x_{2}) \in \mathbb{R}^{2}$ set $\displaystyle{\overline{x}_{1} := x_{1} - \left \lfloor \dfrac{x_{1}}{L} \right \rfloor L}$ and $\overline{x}_{2} := x_{2} - \left \lfloor \dfrac{x_{2}}{H} \right \rfloor H$. We have the following statements: \begin{itemize} \item $(\overline{x}_{1}, \overline{x}_{2}) \in [0,L)\times [0,H)$, \item moreover \begin{equation*} \min_{h,k \in \mathbb{Z}}|x - hLe_{1} - kHe_{2}| = \min_{h,k \in \{0,1\}} |\overline{x} - hLe_{1} - kHe_{2}|, \end{equation*} where $e_{1}$ and $e_{2}$ denote the first and second vectors of the canonical basis of $\mathbb{R}^{2}$. \end{itemize} \end{Prop} \begin{proof} For the sake of simplicity, we first perform the proof in one dimension, i.e. $\mathcal{D} = [0,L]$; the 2D case is obtained componentwise.\\ Let $x \in \mathbb{R}$. Since $\mathbb{R}$ is Archimedean, there exists $n:=\left\lfloor \dfrac{x}{L}\right\rfloor$ such that \begin{equation*} n \leq \dfrac{x}{L} < n+1, \end{equation*} which implies that \begin{equation}\label{niayla} nL \leq x < nL+L \implies 0 \leq \overline{x} < L, \end{equation} which proves the first claim.\\ For the second claim, we notice that \begin{equation*} \min_{k\in \mathbb{Z}}|x-kL| = \min_{k\in \mathbb{Z}}|\overline{x}+nL-kL|= \min_{k\in \mathbb{Z}}|\overline{x}-kL|.
\end{equation*} On the other hand, since there exists $k \in \mathbb{Z}$ such that $|\overline{x} - kL| < L$ (take $k=0$ for instance), the map $A: k \mapsto |\overline{x} - kL|$ attains its minimum at indices $k_{0}$ satisfying $A(k_{0}) < L$. But thanks to the first claim, \begin{equation*} |\overline{x}-kL| < L \implies (k-1)L < \overline{x} < (k+1)L, \end{equation*} and by \eqref{niayla} we conclude that $-1<k<2$, or equivalently \begin{equation*} \min_{k\in \mathbb{Z}}|\overline{x}-kL| = \min_{k \in \{0,1\}} |\overline{x}-kL|. \end{equation*} We conclude that \begin{equation*} \min_{k \in \mathbb{Z}}|x-kL| = \min_{k\in \{0,1\}}|\overline{x}-kL| = \min\left(\overline{x},L-\overline{x} \right) = \begin{cases} \overline{x} & \text{ if } \overline{x} \leq L/2, \\ L-\overline{x} & \text{ if } \overline{x} > L/2. \end{cases} \end{equation*} This ends the proof. \end{proof} \subsubsection{The framework for the continuous periodic model} Consider the following minimization process \begin{equation}\label{cont_periodic} \tilde{\boldsymbol{z}}_{\varepsilon}(t) = \argmin_{ \bo{q} \, \in \, \tilde{\mathcal{U}}} \mathcal{E}_{t}(\boldsymbol{q}):= \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{\infty}\big|q_{i}-\tilde{z}_{\varepsilon,i}(t-\varepsilon a)\big|^{2}\rho_{i}(a)da+F(\boldsymbol{q}), \end{equation} where the set of constraints $\tilde{\mathcal{U}}$ reads \begin{equation*} \tilde{\mathcal{U}}:= \left\{ \bo{q} \in \mathbb{R}^{2N_{p}}\, \text{ s.t. } \phi^{\varepsilon}_{ij}(\boldsymbol{q}):= -d_{ij}(\tilde{\bo{z}}_{\varepsilon})-\nabla d_{ij}(\tilde{\bo{z}}_{\varepsilon})\cdot\left(\bo{q}- \tilde{\bo{z}}_{\varepsilon}\right) \leq 0, \, \forall \, i<j \right\}, \end{equation*} and the periodic distance \begin{equation}\label{period_distance} d_{ij}(\bo{q}) := \underset{(h,k)\in \mathbb{Z}^{2}}{\min}\big|q_{j}-q_{i}-hLe_{1} - kHe_{2}\big|-(r_{i}+r_{j}).
\end{equation} We denote by $\overline{\boldsymbol{q}} := (\overline{q_{1}}, \cdots, \overline{q_{N_{p}}})$ the projection of the particles' positions onto the 2D torus. For any particle we denote $\overline{q_{i}}:=(\overline{q^{x}_{i}}, \overline{q^{y}_{i}})$, where $\overline{q^{x}_{i}}$ (resp. $\overline{q^{y}_{i}}$) is the projection onto $[0,L)$ (resp. onto $[0,H)$) of $q^{x}_{i}$ (resp. $q^{y}_{i}$) as in Proposition \ref{2D}. When accounting for adhesions, the corresponding energy involves past positions in the 2D plane, whereas contact forces act on the torus. This is because we take into account adhesions whose length may exceed the periodicity dimensions $L$ and $H$; see Figure \ref{Fig1}. \begin{figure}[!ht] \centering \begin{tikzpicture} \definecolor{gray}{rgb}{0.5, 0.5, 0.5} \definecolor{green}{rgb}{0, 0.5, 0} \draw[<->,>=latex, line width=0.25mm] (-6, 0) -- (4, 0) node[right] {}; \fill[gray] (-1.5, 1.5) circle (0.3); \draw[->,gray, >=latex, line width=0.5mm] (-1.5, 1.2) -- (-1.5, 0) node[below] {}; \draw[->,gray, >=latex, line width=0.5mm] (-1.6, 1.22) -- (-2, 0) node[below] {}; \draw[->, gray,>=latex, line width=0.5mm] (-1.65, 1.23) -- (-3.5, 0) node[right] {}; \draw[->, gray,>=latex, line width=0.5mm] (-1.7, 1.3) -- (-5, 0) node[right] {}; \draw[->, gray, >=latex, line width=2mm] (-1.2, 1.5) -- (0.1, 1.5) node[above] {$\boldsymbol{U}(\bo{z}^{\ast}_{\varepsilon})$}; \fill[black] (1, 1.5) circle (0.3); \draw[->,black,>=latex,line width=0.5mm] (1, 1.2) -- (1, 0) node[below] {$\bo{z}(t)$}; \draw[->,black,>=latex,line width=0.5mm] (0.9, 1.22) -- (0.4, 0) node[below] {}; \draw[->,black,>=latex,line width=0.5mm] (0.85, 1.23) -- (-0.5, 0) node[below] {}; \draw[->, black, >=latex, line width=0.5mm] (0.85, 1.25) -- (-1.5, 0) node[below] {$\boldsymbol{z}(t-\varepsilon a_{1})$}; \draw[->, black, >=latex, line width=0.5mm] (0.8, 1.3) -- (-3.5, 0) node[below] {$\bo{z}(t- \varepsilon a_{2})$}; \draw[->, black, >=latex, line width=2mm] (1.3, 1.5) -- (2.5, 1.5) node[above]
{$\boldsymbol{U}(\bo{z}_{\varepsilon}(t))$}; \fill[red] (1.75, 0) circle (0.05) node[below] {$L$}; \fill[red] (-2.5, 0) circle (0.05) node[below] {$0$}; \end{tikzpicture} \caption{Linkages associated to some past positions in the domain $[0,L]$ where $\bo{z}^{\ast}_{\varepsilon}:= \bo{z}_{\varepsilon}(t-\varepsilon a_{1})$.} \label{Fig1} \end{figure} By Proposition \ref{2D}, we have that \begin{eqnarray*} d_{ij}(\bo{q}) & = & \underset{(h,k) \in \{0,1\}^{2}}{\min}\big| \overline{q_{j}}- \overline{q_{i}} -hLe_{1}-kHe_{2}\big|-(r_{i}+r_{j}). \end{eqnarray*} Since this distance is well defined, i.e. there exist $\underline{h},\underline{k} \in \{ 0,1\}$ such that \begin{equation*} d_{ij}(\bo{q}) = \big| \overline{q_{j}}- \overline{q_{i}} - \underline{h}Le_{1}- \underline{k}He_{2}\big|-(r_{i}+r_{j}), \end{equation*} we define the gradient vector of $d_{ij}$ in $\bo{\tilde{Q}}_{0}$ as \begin{equation*} \boldsymbol{\tilde{G}}_{ij} := \nabla d_{ij}(\bo{q}) = \Big(0,\cdots 0, \underset{i}{-\tilde{e}_{ij}(\bo{q})}, 0\cdots 0, \underset{j}{\tilde{e}_{ij}(\bo{q})}, 0, \cdots,0\Big), \quad i<j, \end{equation*} where $\tilde{e}_{ij}(\bo{q})$ is the action of the copy of particle $i$ on particle $j$, oriented towards $j$. It is a unit vector and reads \begin{equation*} \tilde{e}_{ij}(\bo{q}):= \dfrac{q_{j}-q_{i}- (n^{x}_{j}-n^{x}_{i}+\underline{h})Le_{1} - (n^{y}_{j}-n^{y}_{i}+\underline{k})He_{2} }{\left| q_{j}-q_{i}- (n^{x}_{j}-n^{x}_{i}+\underline{h})Le_{1} - (n^{y}_{j}-n^{y}_{i}+\underline{k})He_{2} \right|}, \quad i < j, \end{equation*} where $n_{k}^{x} := \lfloor q_{k}^{x}/L \rfloor$ and $n_{k}^{y} := \lfloor q_{k}^{y}/H \rfloor$ for any particle $k$.
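To make the reduction of Proposition \ref{2D} concrete, here is a minimal numerical sketch (in Python; the function name is ours, not from the paper) of the periodic signed distance \eqref{period_distance}: the difference vector $q_{j}-q_{i}$ is first projected onto the fundamental domain $[0,L)\times[0,H)$, after which, by the proposition, the minimum over all integer shifts reduces to $h,k \in \{0,1\}$.

```python
import math

def periodic_signed_distance(qi, qj, ri, rj, L, H):
    """Signed distance d_ij between two disks on the [0,L) x [0,H) torus.

    Following the proposition above, the difference vector is projected onto
    the fundamental domain, so that the minimum over all integer shifts
    (h, k) in Z^2 is attained for h, k in {0, 1}.
    """
    # project the difference vector onto [0, L) x [0, H)
    dx = (qj[0] - qi[0]) - math.floor((qj[0] - qi[0]) / L) * L
    dy = (qj[1] - qi[1]) - math.floor((qj[1] - qi[1]) / H) * H
    # reduced minimum over the four remaining shifts
    gap = min(math.hypot(dx - h * L, dy - k * H)
              for h in (0, 1) for k in (0, 1))
    return gap - (ri + rj)
```

One checks on examples that this reduced minimum coincides with a brute-force minimum over a large range of shifts $(h,k)$.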
\subsubsection{The discrete periodic problem} The same arguments as earlier in this paper lead to the discrete minimization problem, \begin{equation}\label{discret_periodic} \boldsymbol{\tilde{Z}}^{n}_{\varepsilon} = \argmin_{\bo{q} \, \in \, \bo{\tilde{K}}\left(\boldsymbol{\tilde{\boldsymbol{Z}}}^{n-1}_{\varepsilon}\right)}\, \left\{ \mathcal{E}_{n,\varepsilon}(\bo{q}):= \frac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|q_{i}- \tilde{Z}^{n-l}_{\varepsilon,i} \right|^{2}R_{l,i} + F(\boldsymbol{q}) \right\}, \end{equation} where the discrete constraints set reads \begin{equation*} \bo{\tilde{K}}\left(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon}\right):= \left\{ \bo{q} \in \mathbb{R}^{2N_{p}}\, \text{ s.t. } \phi^{n,\varepsilon }_{ij}(\bo{q}):= -d_{ij}(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon})-\nabla d_{ij}(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon})\cdot\bigg(\bo{q}- {\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon}}\bigg) \leq 0, \quad i<j \right\}, \end{equation*} and \begin{equation*} \nabla \phi^{n,\varepsilon}_{ij}(\boldsymbol{q})= -\nabla d_{ij}(\tilde{\boldsymbol{Z}}^{n-1}_{\varepsilon}), \quad \forall \boldsymbol{q} \in \mathbb{R}^{2N_{p}}. \end{equation*} The same arguments as in Theorem \ref{thm1} still hold and guarantee the existence and uniqueness of the solution $\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}$ to \eqref{discret_periodic}. We define in the same way the sets of feasible configurations and of active constraints by $\tilde{\boldsymbol{Q}}_{0}$ and $\tilde{I}_{q}$ as in \eqref{Q0} and Definition \ref{qualified_memoire} respectively. We only mention that, in the present case the periodic distance $d_{ij}$ is considered instead of $D_{ij}$. 
The Lagrangian $\tilde{L}$, defined from $\mathbb{R}^{2N_{p}} \times (\mathbb{R}_{+})^{N_{c}}$ into $\mathbb{R}$, reads \begin{equation}\label{lagrangian_periodic} \tilde{L}(\bo{q},\boldsymbol{\mu})= \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|q_{i}-\tilde{Z}^{n-l}_{\varepsilon,i}\right|^{2}R_{l,i} + F\left(\boldsymbol{q}\right) + \sum_{i<j}\mu_{ij}\phi^{n,\varepsilon}_{ij}(\bo{q}). \end{equation} All hypotheses of Theorem \ref{kkt_cond} hold and guarantee that \eqref{discret_periodic} is equivalent to the existence of a saddle point $(\bo{\tilde{Z}}^{n}_{\varepsilon}, \boldsymbol{\tilde{\lambda}}^{n}_{\varepsilon})$ satisfying \begin{equation}\label{Euler-Lagrange_periodic} \boldsymbol{\tilde{ \lambda}}^{n}_{\varepsilon} \geq \boldsymbol{0}, \; \bo{\phi}^{n,\varepsilon}(\tilde{\boldsymbol{Z}}^{n}_{\varepsilon}) \leq \boldsymbol{0}, \; \boldsymbol{\tilde{\lambda}}^{n}_{\varepsilon} \cdot \boldsymbol{\phi}^{n,\varepsilon}(\boldsymbol{\tilde{Z}^{n}_{\varepsilon}})=0 \text{ and } (\boldsymbol{\mathcal{E}}_{n,\varepsilon})^{'}(\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}) + \sum_{i<j}\tilde{ \lambda}^{n,\varepsilon}_{ij}(\phi^{n,\varepsilon}_{ij})^{'}(\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where \begin{equation*} \boldsymbol{\phi}^{n,\varepsilon}(\boldsymbol{q}) := \left(\phi^{n,\varepsilon}_{ij}(\boldsymbol{q})\right)_{i<j} : \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}^{N_{c}}. \end{equation*} Note that the periodic distance locally coincides with the one defined in \eqref{signed_distance}, in the sense that $d_{ij} = D_{ij}$ in $\mathcal{D}$, so the two distances share the same properties. Hence the results obtained above with the usual distance (energy estimates, compactness criterion, variational inclusion, etc.) carry over to the present case.
\subsection{Numerical approximation and simulations} \subsubsection{Uzawa's algorithm} Note that, due to the assumptions on $F$ (see \ref{Assump}), the last equation in \eqref{KKTconditions_memoire} is nonlinear with respect to $\bo{Z}^{n}_{\varepsilon}$ at each time step $n$. This calls for a nonlinear solver, such as Newton's method, in order to obtain the position from \eqref{KKTconditions_memoire} at time $t^n=n\Delta t$. In order to avoid the numerical cost of such an implementation, we transform the external load into a source term depending on the solution at the previous time step, so that the problem becomes linear with respect to the unknown position at each time step. More precisely, consider the following problem \begin{equation}\label{dicret_energy_uzawa} \boldsymbol{Z}^{n}_{\varepsilon} = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})}\left\{ \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} |q_{i} - Z^{n-l}_{\varepsilon,i}|^{2} R_{l,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}) + \boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon})\cdot(\bo{q} - \bo{Z}^{n-1}_{\varepsilon}) \right\}. \end{equation} We are tempted to use the projected-gradient descent algorithm to numerically approximate the solution of \eqref{dicret_energy_uzawa}, but no closed form of the projection onto $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is at hand here. To tackle this we pass to the dual problem, project the Lagrange multiplier onto $(\mathbb{R}_{+})^{N_{c}}$ by simple truncation, and then iterate the process in the spirit of Uzawa's algorithm.
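This dual strategy can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation) for a generic uniformly convex quadratic energy $\frac{1}{2}\bo{q}^{T}A\bo{q} - \bo{b}\cdot\bo{q}$ with linearized constraints $C\bo{q} \leq \bo{d}$; the names \texttt{A}, \texttt{b}, \texttt{C}, \texttt{d} are hypothetical stand-ins for the discrete energy and constraint data.

```python
import numpy as np

def uzawa(A, b, C, d, eta, iters=500):
    """Uzawa iteration for: minimize 1/2 q^T A q - b.q  subject to  C q <= d.

    Primal step: the unconstrained minimizer of the Lagrangian solves
    A q = b - C^T lam.  Dual step: projected gradient ascent, i.e. the
    multipliers are truncated onto the nonnegative orthant.
    """
    lam = np.zeros(C.shape[0])
    q = np.zeros(A.shape[0])
    for _ in range(iters):
        q = np.linalg.solve(A, b - C.T @ lam)           # primal update
        lam = np.maximum(lam + eta * (C @ q - d), 0.0)  # dual truncation
    return q, lam
```

For instance, minimizing $|\bo{q}-(1,1)|^{2}$ under $q_{1}+q_{2}\leq 1$ with a step $\eta$ below the admissible threshold returns the projection $(1/2,1/2)$ and the multiplier $\lambda = 1$.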
Precisely, we build a sequence of primal variables $\left(\bo{Z}^{n,r}_{\varepsilon}\right)_{r} \in \left(\mathbb{R}^{2N_{p}} \right)^{\mathbb{N}}$ and dual variables $(\bo{\lambda}^{n,r}_{\varepsilon})_{r} \in \left((\mathbb{R}_{+})^{N_{c}} \right)^{\mathbb{N}}$ as follows: \begin{equation*} \begin{cases} \bo{\lambda}^{n,0}_{\varepsilon} \in \left(\mathbb{R}_{+}\right)^{N_{c}} \vspace{0.5em} \\ \displaystyle{ L(\bo{Z}^{n,r}_{\varepsilon}, \bo{\lambda}^{n,r}_{\varepsilon}) = \inf_{\bo{q} \, \in \, \mathbb{R}^{2N_{p}}} L\left(\boldsymbol{q}, \boldsymbol{\lambda}^{n,r}_{\varepsilon}\right)} \vspace{0.5em} \\ \boldsymbol{\lambda}^{n,r+1}_{\varepsilon} = \max\Big( \boldsymbol{\lambda}^{n,r}_{\varepsilon} + \eta \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n,r}_{\varepsilon}), \boldsymbol{0} \Big), \end{cases} \end{equation*} where $L$ is the Lagrangian associated with \eqref{dicret_energy_uzawa}. The following result ensures that this iteration indeed approximates the solution $\bo{Z}^{n}_{\varepsilon}$ of \eqref{dicret_energy_uzawa}. \begin{Prop}\label{convergence_uzawa} If $0 < \eta < 2\alpha/\varepsilon C^{2}$ with $ \alpha:= \mu_{\Delta}$ and $C:=\sqrt{2N_{c}}$, Uzawa's algorithm converges. More precisely, for any initial Lagrange multiplier $\bo{\lambda}^{n,0}_{\varepsilon}$, the sequence $\left(\bo{Z}^{n,r}_{\varepsilon} \right)_{r}$ converges to the solution of \eqref{dicret_energy_uzawa} as $r$ tends to infinity. \end{Prop} \begin{proof} Let us check the hypotheses of \cite[Theorem 10.5.9 p.343]{Allairel05}.
To do so \begin{itemize} \item $E_{n,\varepsilon}$ is twice differentiable as the sum of a quadratic function and an affine function, and we have \begin{equation*} E^{''}_{n,\varepsilon}(\boldsymbol{q}) = \mathrm{diag}\left( \alpha_{1},\alpha_{1},\cdots, \alpha_{N_{p}},\alpha_{N_{p}}\right), \quad \forall \bo{q} \in \mathbb{R}^{2N_{p}}, \end{equation*} where $\alpha_{i} = \dfrac{\mu_{\Delta,i}}{\varepsilon}, \forall i$, so that $E_{n,\varepsilon}$ is uniformly convex with constant $\alpha_{m}:= \min_{i}\alpha_{i}$. \item $\boldsymbol{\varphi}^{n,\varepsilon}$ is convex and Lipschitz from $\mathbb{R}^{2N_{p}} $ into $\mathbb{R}^{N_{c}}$. The convexity is obvious.\\ To prove the Lipschitz property, consider $\boldsymbol{\overline{q}} , \boldsymbol{\tilde{q}} \in \mathbb{R}^{2N_{p}}$. We have \begin{eqnarray*} \big| \varphi^{n,\varepsilon}_{ij}(\bo{\tilde{q}}) - \varphi^{n,\varepsilon}_{ij}(\bo{\overline{q}})\big| & = & \big| \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\bo{\tilde{q}}-\bo{Z}^{n-1}_{\varepsilon})- \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\bo{\overline{q}} - \bo{Z}^{n-1}_{\varepsilon}) \big| \\ & = & \big| \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot (\bo{\overline{q}} - \bo{\tilde{q}}) \big|\\ & \leq & \sqrt{2}\big| \bo{\tilde{q}} - \bo{\overline{q}} \big|, \quad \forall i<j, \end{eqnarray*} where we used the fact that $\left|\boldsymbol{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\right| = \sqrt{2}$ for all $i<j$ in the last line. Thus \begin{equation*} |\boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{\tilde{q}}) - \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{\overline{q}})|= \sqrt{\sum_{1\leq i<j \leq N_{p}} \big| \varphi^{n,\varepsilon}_{ij}(\boldsymbol{\tilde{q}}) - \varphi^{n,\varepsilon}_{ij}(\boldsymbol{\overline{q}})\big|^{2}} \leq \sqrt{2N_{c}}\big|\tilde{\bo{q}} - \overline{\bo{q}}\big|. \end{equation*} \end{itemize} Hence $\boldsymbol{\varphi}^{n,\varepsilon}$ is $C$-Lipschitz with $C=\sqrt{2N_{c}}$, which ends the proof.
\end{proof} \subsubsection{Numerical simulations} We consider here the quadratic case, namely, the external load reads $F(\bo{q}) = \frac{1}{2} |\bo{q}|^{2}$. We expect the cells to follow the gradient descent and eventually cluster at the origin. Uzawa's algorithm is performed to get the position at each time step. We start from a given deterministic initial configuration $\bo{z}_{p}$. We estimate the MSD \footnote{\label{msd-estim} $MSD(t) = \langle |\bo{z}(t) - \bo{z}_{ref}|^{2}\rangle= \dfrac{1}{N_{p}}\sum_{i=1}^{N_{p}}|z_{i}(t) - z_{ref,i}|^{2}$} (Mean Squared Displacement), which measures the deviation of the particles' positions from a reference position $\bo{z}_{ref}=\bo{0}$. We compare the MSDs computed with and without contact forces, together with the theoretical MSD, in the same figure. To do so we first consider the deterministic case without contact forces, whose explicit solution is at hand; we then perturb it by adding a Gaussian white noise to the system. \begin{itemize} \item Consider the following contact-free model whose dynamics read \begin{equation}\label{EDO} \begin{cases} \boldsymbol{\dot z} = -\nu \bo{z}, \quad t > 0\\ \bo{z}(0) = \bo{z}_{p}(0), \end{cases} \end{equation} where $\nu>0$ can be seen as the inverse of the viscosity coefficient in our friction model. Figure \ref{fig-msd-deterministic} displays the curves of the global deviation from the origin with and without contacts (top) and the curve of the average activation of the Lagrange multipliers (bottom); see \eqref{activation} below.
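The Uzawa iteration used here can be sketched on a toy instance. This is a minimal sketch, not the paper's actual solver: we assume a unit quadratic energy $\frac{1}{2}|\bo{q}-\bo{z}|^{2}$ and one linearized constraint $G\bo{q} \leq h$, so the primal step has a closed form; the data $z$, $G$, $h$ and the step size are illustrative:

```python
import numpy as np

def uzawa(z, G, h, eta, iters=200):
    """Minimize 0.5*|q - z|^2 subject to G @ q <= h by Uzawa's iteration:
    the primal step has the closed form q = z - G.T @ lam, and the dual
    step projects onto the nonnegative orthant by simple truncation."""
    lam = np.zeros(G.shape[0])            # initial Lagrange multiplier
    for _ in range(iters):
        q = z - G.T @ lam                 # argmin_q L(q, lam)
        lam = np.maximum(lam + eta * (G @ q - h), 0.0)
    return q, lam

# Toy instance: pull the point (2, 1) toward the half-plane q_1 <= 1.
z = np.array([2.0, 1.0])
G = np.array([[1.0, 0.0]])
h = np.array([1.0])
q, lam = uzawa(z, G, h, eta=0.5)
print(q, lam)   # close to [1. 1.] and [1.]
```

For this toy energy the step-size condition of Proposition \ref{convergence_uzawa} reduces to $0<\eta<2/\|G\|^{2}$, so $\eta=0.5$ converges geometrically.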
\begin{figure}[!ht] \centering \begin{tikzpicture} \begin{axis}[ width=12cm, height=6cm, ylabel={msd ($m^{2}$)}, grid=major, legend style={at={(0.98,0.98)},anchor=north east}, title={Analytical and estimated MSDs}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[orange, thick, line width=2pt] table[x index=0, y index=1] {points_deterministic.txt}; \addlegendentry{With contacts (simulation)} \addplot[blue, thick, line width=2pt] table[x index=0, y index=2] {points_deterministic.txt}; \addlegendentry{No contacts (simulation)} \addplot[red, dotted, thick, line width=2pt] table[x index=0, y index=3] {points_deterministic.txt}; \addlegendentry{$t \mapsto |\bo{z}_{p}|^{2} e^{-2t}$} \end{axis} \begin{axis}[ at={(0,-1.05cm)}, anchor=north west, width=12cm, height=6cm, xlabel={Time (s)}, ylabel={Activ$(t)$}, grid=major, title={Average activation}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[magenta, dotted, thick, line width=2pt] table[x index=0, y index=4] {points_deterministic.txt}; \end{axis} \end{tikzpicture} \caption{Deterministic MSDs with respect to $\bo{0}$ (top) and the average activation of multipliers (bottom).} \label{fig-msd-deterministic} \end{figure} In the top figure, the global deviation starts from $16\,m^{2}$ at $t=0$ and decreases before settling on horizontal asymptotes ($H=0$ for the red and blue curves and $H\simeq 3$ for the orange one). This is what we expected in the absence of contact forces. Indeed, without contacts (blue curve), each particle may be represented by its center alone, a simple dot without radius, which allows particles to overlap. Due to the source term, the particles are attracted toward the origin. The orange curve in Figure \ref{fig-msd-deterministic} represents the estimated MSD in the case of contacts between spheres. We remark that from $t=0\,s$ to $t \simeq 0.35\,s$ the curve coincides with the one without contact forces.
Indeed, as long as the particles are not in contact, the Lagrange multipliers are not activated, so that the particles' trajectories are driven only by the external load. Once the signed distance vanishes (from $t\simeq 0.35\,s$ to $t=5\,s$), the contact forces are active (see \eqref{KKTconditions_memoireCont}) and the trajectories are no longer governed by the external load alone. The contacts induce a decrease of the velocity, and the decay of the mean squared displacement is no longer the same. This is illustrated by the change in the shape of the orange curve around $t = 0.35\,s$. The particles rearrange and move toward the origin, increasing the number of contacts. As high congestion occurs, the particles move only slightly and eventually become motionless around the origin; this jamming leads to a complete steady state. The bottom pink curve in Figure \ref{fig-msd-deterministic} represents the average activation of the Lagrange multipliers over time, defined as \begin{equation}\label{activation} Activ(t):= \dfrac{2}{N_{p}(N_{p}-1)} \sum_{1 \leq i < j \leq N_{p}}\mathbb{1}_{\{\lambda^{\varepsilon}_{ij}(t)\neq 0\}}. \end{equation} By definition, the quantity in \eqref{activation} is the cardinality of the set of active constraints $Ind(\bo{z}_{\varepsilon})$ defined above (see Definition \ref{qualified_memoire}) divided by the maximal number of constraints, i.e. the ratio of the number of active constraints to the maximal number of constraints. Moreover, by definition of the Lagrange multipliers we have \begin{equation*} supp(\lambda^{\varepsilon}_{ij}) \subset \left\{ t \geq 0 \text{ s.t. } D_{ij}(\bo{z}_{\varepsilon}(t)) = 0 \right\}, \quad \forall i< j, \end{equation*} so that the multipliers are activated once the signed distance vanishes. Here (bottom curve of Figure \ref{fig-msd-deterministic}), the Lagrange multipliers are inactive for small times; indeed there are no contacts at the beginning.
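Computing the average activation \eqref{activation} amounts to counting the nonzero multipliers among the $N_{p}(N_{p}-1)/2$ pairwise constraints. A minimal sketch (the multiplier array below is illustrative data, not simulation output):

```python
import numpy as np

def activation(lam, n_particles):
    """Average activation: fraction of nonzero Lagrange multipliers
    among the N_p * (N_p - 1) / 2 pairwise constraints."""
    n_pairs = n_particles * (n_particles - 1) // 2
    return np.count_nonzero(lam) / n_pairs

# 4 particles -> 6 pairwise constraints; 3 of them are active here.
lam = np.array([0.7, 0.0, 1.2, 0.0, 0.0, 0.3])
print(activation(lam, 4))  # 0.5
```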
The jump at $t \simeq 0.35\,s$ is due to the fact that some particles $i$ and $j$ come into contact; the corresponding Lagrange multipliers become positive. After this first jump, the average activation of the multipliers remains constant, equal to $0.15$, for less than one second, while the particles in contact rearrange until they reach a steady state. \item Consider now the stochastic model where a random perturbation is added to the previous model. We obtain the Ornstein-Uhlenbeck process \begin{equation}\label{EDS} \begin{cases} \bo{\dot z} = -\nu \bo{z} + \sigma \bo{\eta}_{t}, \quad t > 0\\ \bo{z}_{0} = \bo{z}_{p}(0), \end{cases} \end{equation} where $(\bo{\eta}_{t})_{t\geq 0}$ denotes the $\mathbb{R}^{2N_{p}}$-valued Gaussian white noise. The explicit solution of \eqref{EDS} as well as its second order moment are given in Appendix \ref{annexeC}. We compare the approximated MSD computed using the solutions of our algorithm and the theoretical value at each time step in Figure \ref{fig-msd}. \begin{figure}[!ht] \centering \begin{tikzpicture} \begin{axis}[ width=12cm, height=6cm, ylabel={msd ($m^{2}$)}, grid=major, legend style={at={(0.98,0.98)},anchor=north east}, title={Analytical and estimated MSDs}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[orange, thick, line width=2pt] table[x index=0, y index=1] {points_stochastic.txt}; \addlegendentry{With contacts (simulation)} \addplot[blue, thick, line width=2pt] table[x index=0, y index=2] {points_stochastic.txt}; \addlegendentry{No contacts (simulation)} \addplot[red, dotted, thick, line width=2pt] table[x index=0, y index=3] {points_stochastic.txt}; \addlegendentry{$t \mapsto |\bo{z}_{p}|^{2} e^{-2t} + \frac{1}{2}(1-e^{-2t})$} \end{axis} \begin{axis}[ at={(0,-1.05cm)}, anchor=north west, width=12cm, height=6cm, xlabel={Time (s)}, ylabel={Activ$(t)$}, grid=major, title={Average activation}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[magenta, dotted, thick, line width=1.5pt] table[x index=0, y index=4]
{points_stochastic.txt}; \end{axis} \end{tikzpicture} \caption{Stochastic MSDs with respect to $\bo{0}$ (top), the average activation of the multipliers (bottom).} \label{fig-msd} \end{figure} We observe trends similar to the deterministic case: the deviation decreases exponentially from $16\,m^{2}$ before settling on horizontal asymptotes ($H=1/2$ for the red and blue curves and $H \simeq 4$ for the orange curve) for large times. Indeed, by Appendix \ref{annexeC}, we have \begin{equation*} \lim_{t \to 0}\mathbb{E}|\bo{z}_{t}|^{2} = |\bo{z}_{p}(0)|^{2} \text{ and } \lim_{t \to \infty} \mathbb{E}|\bo{z}_{t}|^{2} = \dfrac{1}{2}, \end{equation*} so that, unlike in the deterministic case, the particles never cluster at the origin, even without contacts. The red curve represents the second order moment \eqref{analytic-msd} of the trajectory when particles do not interact. \end{itemize} \section{Conclusions} In this paper we dealt with non-penetration models with adhesion forces. The position of the cells at each time minimizes a convex energy functional under non-penetration constraints. The energy contains two terms: a delay part and the external load. By penalizing the constraints and letting the penalty parameter go to zero, we prove that the solution of the constrained problem exists and is unique. We obtain energy estimates and use the convexity of the constraints and of the external load to obtain compactness. We then apply the Arzel\`a-Ascoli theorem to obtain existence for the continuous problem for fixed $\epsilon>0$. Finally, we prove that, as the characteristic lifetime of the bonds tends to zero, our model converges to the model investigated in \cite{venel08}, weighted by friction coefficients related to the microscopic adhesions. The same results are obtained on the torus; the periodic distance is considered instead of the signed one defined by the Euclidean norm. Numerical simulations validate the mathematical analysis.
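The analytic second moment of the contact-free Ornstein-Uhlenbeck dynamics can be checked numerically. The following is a minimal one-dimensional Euler-Maruyama sketch with $\nu=\sigma=1$ and initial condition chosen so that $|\bo{z}_{p}(0)|^{2}=16$; the sample size and time step are illustrative, not the paper's actual simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dt, t_end = 50_000, 0.005, 6.0
z = np.full(n_samples, 4.0)           # all samples start at z_p(0) = 4
for _ in range(int(t_end / dt)):
    # Euler-Maruyama step for dz = -z dt + dW
    z += -z * dt + np.sqrt(dt) * rng.standard_normal(n_samples)

msd = np.mean(z**2)                   # empirical E|z_t|^2 at t = 6
exact = 16.0 * np.exp(-2 * t_end) + 0.5 * (1 - np.exp(-2 * t_end))
print(msd, exact)                     # both close to the stationary value 1/2
```

Both values approach the large-time limit $1/2$, consistent with the formula derived in Appendix \ref{annexeC}.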
\begin{appendices} \section{Theoretical background of contact modelling}\label{annexeA} \begin{Def}\label{coercive} Let $n \in \mathbb{N}^{\ast}$ and $J: \mathbb{R}^{n} \to \mathbb{R}$ be a real valued function. $J$ is called coercive if $ J(v) \longrightarrow \infty$ as $|v| \to \infty. $ \end{Def} \begin{Theo}\cite{Ciarlet89}\label{ciarl} Let $J : \mathbb{R}^{n} \to \mathbb{R}$ be a continuous, coercive and strictly convex function, $U$ a nonempty, convex and closed subset of $\mathbb{R}^{n}$ and $\psi: \mathbb{R}^{n} \to \mathbb{R}$ a continuous, convex function satisfying \begin{equation}\label{eq.equiv.U.Phi} \psi(v) \geq 0 \text{ for every } v \text{ in } \mathbb{R}^{n} \text{ and } \psi(v) = 0 \iff v \in U. \end{equation} Then for every $\delta > 0$, there exists a unique element $u_{\delta}$ satisfying \begin{equation*} u_{\delta} \in \mathbb{R}^{n} \text{ and } J_{\delta}(u_{\delta}) = \inf_{v \in \mathbb{R}^{n}} J_{\delta}(v), \quad J_{\delta}(v) := J(v) + \dfrac{1}{\delta} \psi(v). \end{equation*} Moreover, $u_{\delta} \to u$ as $\delta$ goes to $0$, where $u$ is the unique solution of \begin{equation*} \begin{cases} \text{ find } u \text{ such that }\\ \displaystyle{u \in U,\quad J(u) = \inf_{v \in U}J(v)}. \end{cases} \end{equation*} \end{Theo} \begin{Theo}\cite{Allairel05}\label{kkt_cond} Let $V$ be a Hilbert space, $M \in \mathbb{N}^{\ast}$, and let $K$ be defined by \begin{equation*} K = \left\{ v \in V: F_{i}(v) \leq 0, \; \forall 1\leq i \leq M \right\}. \end{equation*} Assume that $J$ and $F_{1},\cdots, F_{M}$ are convex and continuous on $V$ and differentiable on $K$, and define the associated Lagrangian \begin{equation*} \mathcal{L}(v,q) = J(v) + q\cdot F(v), \quad \forall (v,q) \in V \times (\mathbb{R}_{+})^{M}. \end{equation*} Let $u$ be a point at which the constraints are qualified.
Then $u$ is a global minimum of $J$ if and only if there exists $p \in (\mathbb{R}_{+})^{M}$ such that $(u,p)$ is a saddle point of $\mathcal{L}$ on $V \times (\mathbb{R}_{+})^{M}$ or equivalently, such that \begin{equation*} F(u)\leq 0, \; p \geq 0,\; p\cdot F(u) = 0, \; J^{'}(u) + \sum_{i=1}^{M} p_{i}F_{i}^{'}(u) = 0, \end{equation*} where $F = (F_{1},\cdots,F_{M})$. \end{Theo} \begin{Def}\cite{venel08} Let $H$ be a Hilbert space, $S \subset H$ a closed nonempty subset and $x \in H$. We define: \begin{itemize} \item the set of nearest points of $x$ in $S$ \begin{equation*} P_{S}(x) := \left\lbrace v \in S: \, d_{S}(x) = |x-v| \right\rbrace, \quad d_{S}(x) := \inf_{u \in S}\left|x-u\right|, \end{equation*} \item the proximal normal cone to $S$ at $x$ \begin{equation*} N^{P}(S,x) := \left\{ v \in H: \exists \alpha > 0, x \in P_{S}(x + \alpha v) \right\}, \end{equation*} \item the limiting normal cone to $S$ at $x$ \begin{equation*} N^{L}(S,x) := \left\{ v \in H: \exists (x_{n}) \in S^{\mathbb{N}}, \exists (v_{n}), v_{n} \in N^{P}(S,x_{n}) \text{ s.t. } x_{n} \to x, v_{n} \rightharpoonup v \right\}. \end{equation*} \end{itemize} \end{Def} Note that if $S$ is convex, the proximal normal cone coincides with the outward normal cone $N(S,x)$ to $S$ at $x$, in which case $x = P_{S}(x+\alpha v)$ holds for every $\alpha > 0$ in the definition of $N^{P}(S,x)$. \begin{Def}\cite{JeanFenel06} Let $S \subset H$ be a nonempty closed subset of a Hilbert space $H$. For a fixed $\eta>0$, we say that $S$ is $\eta$-prox-regular (or uniformly prox-regular) if for any $x \in S$ and any $v \in N^{L}(S,x)$ such that $|v| < 1$, we have $x \in P_{S}(x+\eta v)$. \end{Def} \begin{Prop}\label{prox-reg-char} \cite{JeanFenel06} Let $S$ be a closed nonempty subset of a Hilbert space $H$.
$S$ is $\eta$-prox-regular if and only if every nonzero proximal normal $v \in N^{L}(S,x)$ can be realized by an $\eta$-ball, that is, for all $x \in S$ and $v \in N(S,x)\setminus \{ 0\}$, $$S\cap B\left(x+\frac{\eta}{|v|}v, \eta \right) = \emptyset.$$ In other words, for any $x \in S$ and $v \in N(S,x)$, \begin{equation*} \langle v, y-x \rangle \leq \dfrac{|v|}{2\eta} \left|y-x\right|^{2}, \quad \forall y \in S. \end{equation*} Furthermore, $S$ is convex if and only if it is $\infty$-prox-regular. \end{Prop} \begin{Theo}\cite{venel08} \label{constant-prox-reg} The set of admissible constraints $\boldsymbol{Q}_{0}$ is $\eta$-prox-regular, where \begin{equation} \eta = \dfrac{1}{N_{p}n_{n}}\left( \dfrac{\min\left(\sin\left(\dfrac{\pi}{n_{n}+1}\right), \sin\left(\dfrac{2\pi}{N_{p}}\right)\right)}{2\sqrt{n_{n}}} \right)^{N_{p}}\min_{i,j}(r_{i}+r_{j}), \end{equation} and $n_{n}$ is the maximal number of neighbors that a particle can have. \end{Theo} \begin{Lemma}(page 90 in \cite{venel08}) \label{equivalences} Let $S$ be a convex and closed subset of a Hilbert space $H$.
Let $x \in S$ and $\omega \in H$. We have the following equivalences: \begin{eqnarray} \omega \in N(S,x) & \overset{ \text{ def } }{\Leftrightarrow} & x = P_{S}(x+\omega) \\ & \Leftrightarrow & \forall y \in S, \quad \langle \omega, y-x \rangle \leq 0 \\ & \Leftrightarrow & \forall y \in H, \quad \langle \omega, y-x \rangle \leq |\omega| d_{S}(y) \\ & \Leftrightarrow & \forall \xi \in H, \quad \langle \omega, \xi \rangle \leq |\omega| d_{S}(\xi+x) \\ & \Leftrightarrow & \exists \eta > 0, \forall v \in H, |v| < \eta \quad \langle \omega, v \rangle \leq |\omega| d_{S}(v+x) \\ & \Leftrightarrow & \exists k > 0, \exists \eta > 0, \forall v \in H, |v| < \eta \quad \langle \omega, v \rangle \leq k d_{S}(v+x) \end{eqnarray} \end{Lemma} \begin{Prop}[page 76 in \cite{venel08}]\label{convergenceofprojection} Let $\boldsymbol{q}\in \boldsymbol{Q}_{0}$ and let $(\boldsymbol{q}_{\Delta_{m}})$ be a sequence in $\boldsymbol{Q}_{0}$ satisfying $\textbf{q}_{\Delta_{m}} \to \textbf{q}$. For any $\textbf{z} \in \mathbb{R}^{2N_{p}}$, denote $\boldsymbol{p}=P_{K(\boldsymbol{q})}(\textbf{z})$ and $\boldsymbol{p}_{\Delta_{m}}=P_{K(\boldsymbol{q}_{\Delta_{m}})}(\textbf{z})$. There exists $\nu>0$ such that for any $\textbf{z} \in \boldsymbol{B}( \textbf{q},\nu)$ we have $\boldsymbol{p}_{\Delta_{m}} \to \boldsymbol{p}$. In particular, $d_{K(\boldsymbol{q}_{\Delta_{m}})}(\textbf{z}) \longrightarrow d_{K(\boldsymbol{q})}(\textbf{z})$ as $m$ goes to infinity. \end{Prop} \begin{Def}\cite{Haim11} Let $(E, ||\cdot||_{E})$ be a normed vector space and $A \subset E$ a subset of $E$. The convex hull of $A$, denoted here by $conv(A)$, is the intersection of all convex subsets of $E$ containing $A$. \end{Def} \begin{Theo}\label{Mazur} \cite{Haim11}. Let $(E, ||\cdot||_{E})$ be a normed vector space and $(x_{n})_{n}$ a sequence that weakly converges to $x$ in $E$.
There exists a sequence $(y_{n})_{n}$ taking values in the convex hull of the values of the sequence $\left(x_{n}\right)_{n}$, namely \begin{equation*} y_{n} = \sum_{k=n}^{N(n)}\lambda_{n,k}x_{k} \text{ with } \sum_{k=n}^{N(n)} \lambda_{n,k} = 1 \text{ and } \lambda_{n,k} \in \mathbb{R}_{+}, \quad \forall k \in \{n,\cdots,N(n) \}, \end{equation*} where $N:\mathbb{N}\to \mathbb{N}$, which converges strongly to $x$, i.e. \begin{equation*} ||y_{n} - x ||_{E} \to 0. \end{equation*} \end{Theo} \section{Mean Squared Displacement and Ornstein-Uhlenbeck process}\label{annexeC} Here we recall some properties of the Ornstein-Uhlenbeck process (in the absence of contact forces), for which we compute the explicit formula of the MSD. To this end consider equation \eqref{EDS} (with $\nu = \sigma = 1$). By the variation of constants method we have \begin{equation*} \bo{z}_{t} = \bo{z}_{p}(0)e^{-t} + \int_{0}^{t}e^{-(t-r)}\eta_{r}dr. \end{equation*} Since the white noise has zero mean, we have \begin{equation*} \mathbb{E}(\bo{z}_{t}) = \bo{z}_{p}(0)e^{-t}. \end{equation*} On the other hand, for any $t,s \geq 0$, setting $t \wedge s := \min(t,s)$, we have \begin{eqnarray*} \mathbb{E}[\bo{z}_{t}\cdot \bo{z}_{s}] & = & |\bo{z}_{p}(0)|^{2}e^{-(t+s)} + \mathbb{E}\left[ \left(\int_{0}^{t} e^{-(t-r_{1})}\eta_{r_{1}}dr_{1}\right)\cdot \left(\int_{0}^{s} e^{-(s-r_{2})}\eta_{r_{2}}dr_{2}\right) \right] \\ & = & |\bo{z}_{p}(0)|^{2}e^{-(t+s)} + \int_{0}^{t \wedge s } e^{-(t+s-2r)}dr, \end{eqnarray*} where we used It\^o's isometry in the second equality. Thus for $s=t$ we have \begin{eqnarray*} \mathbb{E}|\bo{z}_{t}|^{2} & = & |\bo{z}_{p}(0)|^{2}e^{-2 t} + e^{-2 t} \int_{0}^{t}e^{2r}dr, \end{eqnarray*} which gives the explicit form \begin{eqnarray}\label{analytic-msd} \mathbb{E}|\bo{z}_{t}|^{2} & = & |\bo{z}_{p}(0)|^{2}e^{-2t} + \dfrac{1}{2}\left( 1 - e^{-2t}\right).
\end{eqnarray} \end{appendices} \section*{Use of AI tools declaration} The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article. \bibliography{biblio} \bibliographystyle{alpha} \end{document} \begin{Theo} Assume that hypotheses \ref{Assump} (i)-(iii) hold. There exists a unique function $\textbf{z}_{0} \in C((0,T],\mathbb{R}^{2N_{p}})$ such that $\left(\textbf{z}_{\varepsilon_{m}}\right)_{m}$ uniformly converges to $\textbf{z}_{0}$ as $m \to \infty$. Moreover the limit function $\textbf{z}_{0}$ satisfies the differential inclusion \begin{equation}\label{inclusion_limite} \begin{cases} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}+\bo{F}^{'}(\boldsymbol{z}_{0}) \in -N\left(\bo{K}(\bo{z}_{0}),\bo{z}_{0}\right) \text{ a.e. } t \in (0,T] \vspace{0.5em} \\ \boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{cases} \end{equation} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i}:= \int_{0}^{\infty}a\rho_{i}(a)da \in \mathbb{R}, \quad \forall i. \end{equation*} \end{Theo} \begin{proof} We use the same extraction-convergence method as in the proof of Theorem \ref{thm_conv}. \subsubsection*{Extraction of a subsequence} Since $\bo{z}_{\varepsilon}$ is bounded and equicontinuous (as the limit of the bounded and equicontinuous functions $\bo{z}_{\varepsilon,\Delta}$), we can apply the Arzel\`a-Ascoli theorem to the sequence $\left(\bo{z}_{\varepsilon_{m}}\right)_{m}$: there exists a subsequence, still denoted by $\left( \bo{z}_{\varepsilon_{m}} \right)_{m}$, such that $\bo{z}_{\varepsilon_{m}}$ converges uniformly on $(0,T]$. Let us denote by $\bo{z}_{0}$ the limit function, i.e.
$\bo{z}_{0}(t) = \lim_{m\to \infty }\bo{z}_{\varepsilon_{m}}(t), \quad t \in (0,T]$. \subsection*{The convergence of $\boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\textbf{z}_{\varepsilon_{m}}]+F^{'}(\textbf{z}_{\varepsilon_{m}})$.} Since the delay operator $\boldsymbol{\mathcal{L}}_{\varepsilon}$ is bounded in $L^{\infty}((0,T],\mathbb{R}^{2N_{p}})$ (see the proof of Theorem \ref{thm-exist-uniq}), there exists a subsequence, denoted by $\left( \mathcal{L}_{\varepsilon_{m}} \right)_{m}$, that converges weakly-$\ast$, i.e. \begin{equation*} \dfrac{1}{\varepsilon_{m}}\int_{0}^{\infty}\left(z_{\varepsilon_{m},i}(t) - z_{\varepsilon_{m},i}(t-\varepsilon_{m} a)\right)\rho_{i}(a)da \overset{*}{\rightharpoonup}\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da \text{ in } L^{\infty}((0,T], \mathbb{R}^{2}), \quad \forall i, \end{equation*} which by duality with $L^{1}((0,T], \mathbb{R}^{2})$ and the boundedness of $(0,T]$ gives \begin{equation*} \dfrac{1}{\varepsilon_{m}}\int_{0}^{\infty}\left(z_{\varepsilon_{m},i}(t) - z_{\varepsilon_{m},i}(t-\varepsilon_{m} a)\right)\rho_{i}(a)da \overset{}{\rightharpoonup}\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da,\quad \text{ a.e. } t \in (0,T]. \end{equation*} Moreover, since $\bo{z}_{\varepsilon_{m}} \in L^{\infty}((0,T])$ and $(0,T]$ is bounded for any $T>0$, $\boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}})$ is bounded. Therefore \begin{equation}\label{weak_convergence} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}](t) + F_{i}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \overset{}{\rightharpoonup}\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da + F_{i}^{'}(\bo{z}_{0}(t)) \text{ in } L^{1}((0,T],\mathbb{R}^{2}), \quad i=1,\cdots,N_{p}, \end{equation} which guarantees that the full operator converges as well, i.e.
\begin{equation*} \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) {\rightharpoonup}\, \boldsymbol{\mu}_{1} \partial_{t}\bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t))=:\boldsymbol{\pi}_{0}(t). \end{equation*} \subsection*{The weak limit satisfies the differential inclusion $\boldsymbol{\pi}_{0} \in -N\left( \bo{K}(\bo{z}_{0}),\bo{z}_{0} \right)$.} Consider the vector space $L^{1}([0,T],\mathbb{R}^{2N_{p}})$ endowed with its natural norm. We have $\boldsymbol{\pi}_{\varepsilon_{m}}(t)\rightharpoonup \boldsymbol{\pi}_{0}(t)$, where we recall that $\boldsymbol{\pi}_{\varepsilon}:= \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon})$. Since the sequence $\left(\boldsymbol{\pi}_{\varepsilon_{m}} \right)_{m}$ weakly converges, there exists, by Mazur's lemma (Theorem \ref{Mazur}), a sequence $(\boldsymbol{y}_{\varepsilon_{m}})_{m}$ such that \begin{equation*} \boldsymbol{y}_{\varepsilon_{m}} \in Conv \left( \boldsymbol{\pi}_{\varepsilon_{m'}} ,\, \forall \, m'\geq m \right) \text{ and } ||\boldsymbol{y}_{\varepsilon_{m}} - \boldsymbol{\pi}_{0}||_{L^{1}((0,T], \mathbb{R}^{2N_{p}})} \underset{m \to \infty}{\longrightarrow} 0; \end{equation*} in particular, for almost every $t\in (0,T]$, \begin{equation*} \boldsymbol{y}_{\varepsilon_{m}} \in Conv \left( \boldsymbol{\pi}_{\varepsilon_{m'}}, \forall\, m'\geq m \right) \text{ and } \boldsymbol{y}_{\varepsilon_{m}}(t) \underset{m \to \infty}{\longrightarrow} \boldsymbol{\mu}_{1} \partial_{t}\boldsymbol{z}_{0}(t) + \bo{F}^{'}(\boldsymbol{z}_{0}(t)),\, \text{ in } L^{1}((0,T], \mathbb{R}^{2N_{p}}). \end{equation*} It remains to prove that \begin{equation*} \boldsymbol{\pi}_{0} \in -N\left(\boldsymbol{K}(\bo{z}_{0}), \bo{z}_{0}\right), \text{ a.e. } t \in (0,T].
\end{equation*} Indeed, since for any $m > 0$, \begin{equation}\label{legacy} \displaystyle{\boldsymbol{\pi}_{\varepsilon_{m}} \in - N\left( \boldsymbol{K}(\bo{z}_{\varepsilon_{m}}),\bo{z}_{\varepsilon_{m}} \right)}, \, \text{ a.e. in } (0,T], \end{equation} we have \begin{equation*} \langle \boldsymbol{y}_{\varepsilon_{m}}, \boldsymbol{\xi} \rangle \leq \sup_{m' \geq m} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m'}} [\bo{z}_{\varepsilon_{m'}}] + \bo{F}^{'}(\bo{z}_{\varepsilon_{m'}}), \boldsymbol{\xi} \rangle, \quad \forall \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{equation*} Taking the limit $m \to \infty$, since $\boldsymbol{y}_{\varepsilon_{m}}(t) \longrightarrow \bo{\pi}_{0}(t)$, this gives \begin{equation*} \langle \boldsymbol{\pi}_{0}, \boldsymbol{\xi} \rangle \leq \limsup_{m} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}} \left[\bo{z}_{\varepsilon_{m}}\right] + \bo{F}^{'}(\bo{z}_{\varepsilon_{m}}), \boldsymbol{\xi} \rangle, \quad \forall \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}.
\end{equation*} On the other hand, by Lemma \ref{equivalences}, equation \eqref{conDiff} is equivalent to \begin{equation*} \forall \, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \, \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \bo{F}^{'}(\boldsymbol{z}_{\varepsilon_{m}}(t)), \boldsymbol{\xi} \rangle \leq \left| \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) +\bo{ F}^{'}(\boldsymbol{z}_{\varepsilon_{m}}(t)) \right| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}(\boldsymbol{\xi} + \boldsymbol{z}_{\varepsilon_{m}}(t)). \end{equation*} Replacing $\boldsymbol{\xi}$ by $- \boldsymbol{\xi}$ in the latter inequality, \begin{equation*} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \bo{F}^{'}(\bo{z}_{\varepsilon_{m}}(t)),- \boldsymbol{\xi} \rangle \leq \left| \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) +\bo{F} ^{'}(\bo{z}_{\varepsilon_{m}}(t)) \right| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}(- \boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t)). \end{equation*} By the same arguments as earlier on the delay operator and the source term, there exists a constant $M$ independent of $\varepsilon_{m}$ such that \begin{equation*} |\boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}}(t))| \leq M, \end{equation*} so that \begin{equation*} \forall \bo{\xi} \in \mathbb{R}^{2N_{p}}, \quad \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}}(t)), \boldsymbol{\xi} \rangle \leq M d_{\bo{K}(\bo{z}_{\varepsilon_{m}}(t))}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t) \right).
\end{equation*} Let us check that \begin{equation*} d_{\varepsilon_{m}}(t) := \left| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t) \right) - d_{\bo{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \longrightarrow 0, \quad \forall t \in (0,T]. \end{equation*} Indeed, using the fact that $\boldsymbol{q} \mapsto d_{\boldsymbol{K}(\bo{z}_{0})}(\boldsymbol{q})$ is $1$-Lipschitz, \begin{eqnarray*} d_{\varepsilon_{m}}(t) & \leq & \left| d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t)\right) - d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| + \left| d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right) - d_{\boldsymbol{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \\ \\ & \leq & \left| \bo{z}_{\varepsilon_{m}}(t) - \bo{z}_{0}(t) \right| + \left| d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right) - d_{\boldsymbol{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right|. \end{eqnarray*} In the same way as in Proposition \ref{convergenceofprojection}, there exists $\nu >0$ such that for $|\boldsymbol{\xi}| \leq \nu$, \begin{equation*} \left| d_{\textbf{K}(\bo{z}_{\varepsilon_{m}})}\left( -\bo{\xi} + \bo{z}_{0}(t) \right) - d_{\bo{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \longrightarrow 0.
\end{equation*} Taking the limit as $m \to \infty$, we obtain \begin{equation*} \forall\, \boldsymbol{\xi}\in \mathbb{R}^{2N_{p}}, \, |\boldsymbol{\xi}| \leq \nu, \, \langle \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t)), \boldsymbol{\xi} \rangle \leq M d_{\bo{K}(\bo{z}_{0}(t))}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right), \end{equation*} which, by the fifth equivalence in Lemma \ref{equivalences}, is equivalent to \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t} \bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t)) \in - N\left( \boldsymbol{K}(\bo{z}_{0}), \bo{z}_{0}(t) \right), \quad \forall \, t \in (0,T], \end{equation*} which ends the proof. \end{proof} \begin{Theo}\label{thm_conv} Let $\varepsilon >0$ be fixed. If assumptions \ref{Assump} (i)-(iii) hold, then the piecewise constant function $\bo{z}_{\varepsilon,\Delta}$ converges uniformly in $L^{\infty}\left([0,T];\boldsymbol{Q}_{0} \right)$ as $\Delta a \to 0$. Moreover the limit function, denoted by $\textbf{z}_{\varepsilon}$, satisfies \begin{equation}\label{conDiff} \begin{cases} \displaystyle{ \boldsymbol{\mathcal{L}}_ {\varepsilon}[\textbf{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \in -N(\boldsymbol{Q}_{0}, \textbf{z}_{\varepsilon}(t)), \, t > 0}, \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \; t \leq 0, \end{cases} \end{equation} where $\boldsymbol{\mathcal{L}}_{\varepsilon}(t)=\left(\mathcal{L}_{\varepsilon,1}(t),\cdots, \mathcal{L}_{\varepsilon,N_{p}}(t) \right)$ and for any particle \begin{equation*} \mathcal{L}_{\varepsilon,i}\left[\textbf{z}_{\varepsilon}\right](t):= \displaystyle{\dfrac{1}{\varepsilon}\int_{0}^{\infty}\left(z_{\varepsilon,i}(t) - z_{\varepsilon,i}(t-\varepsilon a) \right)\rho_{i}(a)da}.
\end{equation*} \end{Theo} \begin{proof} We use the same arguments as in \cite{JeanFenel06}: first extraction and convergence, then inclusion.\\ For the extraction and convergence we rely on the Arzelà-Ascoli theorem. \begin{itemize} \item The piecewise constant function $\bo{z}_{\varepsilon,\Delta}$ is equicontinuous on $[0,T]$. \item $\bo{z}_{\varepsilon,\Delta}$ admits an $L^{\infty}$-bound. Indeed by \eqref{compactness} and the Cauchy-Schwarz inequality, we have \begin{eqnarray*} \left|\boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{0}_{\varepsilon}\right|^{2} = \left|\sum_{m=1}^{n} \sum_{i=1}^{N_{p}}(Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}) \right|^{2} & \leq & \left(\sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t\right) \sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t \left| \dfrac{Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}}{\Delta t} \right|^{2} \\ & \leq & tN_{p} \sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t \left| \dfrac{Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}}{\Delta t} \right|^{2}, \quad \forall t > 0, \end{eqnarray*} i.e. by \eqref{compactness} \begin{equation}\label{Linfty} \left|\boldsymbol{Z}^{n}_{\varepsilon}\right|^{2} \leq 2TN_{p}C + 2\left| \boldsymbol{Z}^{0}_{p}\right|^{2} < \infty. \end{equation} Therefore \begin{equation*} || \bo{z}_{\varepsilon,\Delta}||_{L^{\infty}\left( (0,T),\mathbb{R}^{2N_{p}}\right)} := \underset{0 < t < T}{\sup} \left( \sum_{i=1}^{N_{p}}|z_{\varepsilon,\Delta, i}(t)|^{2} \right)^{1/2} = \underset{0 < t < T}{\sup} \left( \sum_{i=1}^{N_{p}}\sum_{n=0}^{N-1}|Z^{n}_{\varepsilon,i}|^{2}\mathbbm{1}_{(t^{n},t^{n+1}]}(t) \right)^{1/2} < \infty. \end{equation*} \end{itemize} For any sequence $(\Delta_{m})_{m>0}$ of discretization steps decreasing to $0$, thanks to the Arzelà-Ascoli theorem, there exists a subsequence, still denoted by $\left(\bo{z}_{\varepsilon, \Delta_{m}}\right)_{m}$, which converges uniformly to $\bo{z}_{\varepsilon}\in L^{\infty}\left((0,T),\mathbb{R}^{2N_{p}}\right)$.
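As a quick sanity check of the discrete Cauchy-Schwarz step above, the bound $|\boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{0}_{\varepsilon}|^{2} \leq tN_{p}\sum_{m,i}\Delta t\,|(Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i})/\Delta t|^{2}$ with $t = n\Delta t$ can be tested numerically on arbitrary synthetic increments; the sizes and scales in this sketch are illustrative, not the scheme's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
Np, n, dt = 5, 40, 0.05                     # particles, time steps, step size

# synthetic increments Z^m_i - Z^{m-1}_i in R^2, one per step m and particle i
dZ = rng.normal(scale=0.1, size=(n, Np, 2))

# |Z^n - Z^0|^2: telescope the increments, then take the squared Euclidean norm
lhs = np.sum(dZ.sum(axis=0) ** 2)

# t * Np * sum_{m,i} dt * |(Z^m_i - Z^{m-1}_i)/dt|^2 with t = n*dt
rhs = (n * dt) * Np * np.sum(dt * (dZ / dt) ** 2)

assert lhs <= rhs                           # the Cauchy-Schwarz bound holds
```

The inequality holds for any choice of increments since it is a plain Cauchy-Schwarz estimate on $nN_{p}$ terms.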
\subsubsection*{For all $t\in (0,T)$, the limit function $\bo{z}_{\varepsilon}$ belongs to $\boldsymbol{Q}_{0}$.} Indeed, since $\boldsymbol{z}_{\varepsilon,\Delta} = \boldsymbol{Z}^{n}_{\varepsilon}$ on $(t^{n}, t^{n+1}]$ with $\boldsymbol{Z}^{n}_{\varepsilon} \in \boldsymbol{Q}_{0}$ for all $n$, we have $\boldsymbol{z}_{\varepsilon,\Delta}(t) \in \boldsymbol{Q}_{0}$ for almost every $t \in [0,T]$. Since $\boldsymbol{Q}_{0}$ is closed, it follows that \begin{equation*} \bo{z}_{\varepsilon}(t) := \lim_{m \to \infty}\bo{z}_{\varepsilon,\Delta_{m}}(t) \in \boldsymbol{Q}_{0}, \quad \forall\, t \in (0,T). \end{equation*} Combining this with $\bo{z}_{\varepsilon} \in L^{\infty}((0,T),\mathbb{R}^{2N_{p}})$, we obtain $\bo{z}_{\varepsilon} \in L^{\infty}((0,T), \boldsymbol{Q}_{0})$. \subsubsection*{The limit function $\bo{z}_{\varepsilon}$ satisfies $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N \left(\boldsymbol{Q}_{0},\bo{z}_{\varepsilon}\right)$.} Thanks to Lemma \ref{equality}, it suffices to prove that $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon}\right)$ a.e. in $(0,T]$. \begin{itemize} \item \textbf{Convergence:} Firstly, since \begin{equation}\label{CU} \bo{z}_{\varepsilon,\Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} \bo{z}_{\varepsilon}(t) \text{ uniformly for all } t \in [0,T], \end{equation} and since $\boldsymbol{\rho}$ is integrable, we have for any fixed $\varepsilon >0$ \begin{equation*} \dfrac{1}{\varepsilon} z_{\varepsilon,\Delta_{m},i}(t)\int_{0}^{\infty}\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} z_{\varepsilon,i}(t)\int_{0}^{\infty}\rho_{i}(a)da \text{ uniformly on } [0,T], \quad i=1,\cdots,N_{p}.
\end{equation*} Secondly, by \eqref{CU} taken pointwise and the fact that $\bo{\rho}$ is bounded, we have \begin{equation*} z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a) \longrightarrow z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a), \quad \forall i \text{ and } 0 \leq a \leq t/\varepsilon. \end{equation*} Furthermore there exists a constant $K_{i}$ independent of $\Delta_{m}$ such that \begin{equation*} |z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)| \leq K_{i} \dfrac{\beta_{i}\exp(- \underline{\zeta_{i}}a)}{1+\beta_{i} \int_{0}^{\infty} e^{-\int_{0}^{\sigma}\zeta_{i}(\tilde{a})d\tilde{a}}d\sigma}, \quad \forall i, \end{equation*} so that by the dominated convergence theorem, \begin{equation*} \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon}z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon}z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a)da , \quad \forall i. \end{equation*} The same arguments hold for the integral involving the positions $\bo{z}_{p}$ for negative times and give \begin{equation*} \dfrac{1}{\varepsilon} \int_{t/\varepsilon}^{\infty}z_{p,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} \int_{t/\varepsilon}^{\infty}z_{p,i}(t-\varepsilon a)\rho_{i}(a)da , \quad \forall i. \end{equation*} Lastly, using \eqref{CU} and the fact that $F$ is continuously differentiable, we have \begin{equation*} \bo{F}^{'}(\bo{z}_{\varepsilon, \Delta_{m}}) \underset{m \to \infty}{ \longrightarrow} \bo{F}^{'}(\bo{z}_{\varepsilon}).
\end{equation*} Gathering the above terms we have, for fixed $\varepsilon$, \begin{equation*} \bo{\pi}_{\varepsilon,\Delta_{m}}(t) :=\boldsymbol{\mathcal{L}}_{\varepsilon,\Delta_{m}}(t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon,\Delta_{m}}(t)) \underset{m \to \infty}{\longrightarrow} \boldsymbol{\mathcal{L}}_{\varepsilon}(t) + \boldsymbol{F}^{'}\left(\bo{z}_{\varepsilon}(t)\right) =: \boldsymbol{\pi}_{\varepsilon}(t), \quad \forall t \in (0,T], \end{equation*} which is the claim. \item \textbf{Inclusion:} Here we use the same arguments as in \cite{venel08}. We have to prove that \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t) \right), \text{ a.e. } t \in (0,T]. \end{equation*} Indeed by Lemma \ref{equivalences} in Appendix \ref{annexeA}, \eqref{discre_incl_diff} is equivalent to \begin{eqnarray*} \langle \bo{\pi}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\pi}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Replacing $\boldsymbol{\xi}$ by $-\boldsymbol{\xi}$ in the above inequality, we also have \begin{eqnarray*} \langle \bo{\pi}_{\varepsilon, \Delta_{m}}, -\boldsymbol{\xi} \rangle & \leq & \big| \bo{\pi}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Let us now prove that $|\bo{\pi}_{\varepsilon, \Delta_{m}}|$ is bounded uniformly with respect to the discretization step.
Indeed \begin{eqnarray} |\bo{\pi}_{\varepsilon, \Delta_{m}}| & \leq & \left| \boldsymbol{\mathcal{L}}_{\varepsilon,\Delta_{m}} \right| + \left|\bo{F}^{'}(\bo{z}_{\varepsilon,\Delta_{m}})\right|. \end{eqnarray} On one hand, since $\bo{z}_{\varepsilon,\Delta_{m}}$ is bounded with respect to $\varepsilon$ and $\Delta t$ and $F$ is continuously differentiable, there exists a constant $K_{F}$ independent of $\varepsilon$ and $\Delta t$ such that $\left|\bo{F}^{'}(\boldsymbol{z}_{\varepsilon,\Delta_{m}})\right| \leq K_{F}$. On the other hand, using the energy estimates and Jensen's inequality, \begin{equation*} |\bo{\mathcal{L}}^{n}_{\varepsilon}|^{2} \leq \frac{C_{0}}{\varepsilon} \sum_{i=1}^{N_{p}} \dfrac{\Delta a}{\varepsilon} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} \leq \frac{C_{0}}{\varepsilon}\left(K_{0} + F(\boldsymbol{Z}^{0}_{p}) - F(\bo{Z}^{n}_{\varepsilon})\right), \end{equation*} so that there exists a constant $K$ independent of $\Delta a$ and $\varepsilon$ such that $|\bo{\mathcal{L}}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\varepsilon}$, hence \begin{equation*} |\bo{\pi}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\varepsilon} + K_{F}. \end{equation*} Combining the two latter inequalities gives \begin{equation}\label{last} \big|\langle \bo{\pi}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle \big| \leq \left(\dfrac{K}{\varepsilon} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big).
\end{equation} Using the fact that $\boldsymbol{q} \mapsto d_{\bo{C}}(\bo{q})$ is $1$-Lipschitz for any nonempty closed convex set $\bo{C}$, and setting \begin{equation*} I_{\varepsilon,\Delta_{m}}(t):= \big| d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big)\big|, \end{equation*} we have \begin{eqnarray*} I_{\varepsilon,\Delta_{m}}(t) & \leq & \big| d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ & & \hspace{8.5em} + \big| d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ & \leq & \big| \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + \underbrace{\big| d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big|}_{J_{\varepsilon, \Delta_{m}}(t)}.
\end{eqnarray*} \end{itemize} Moreover, by Proposition \ref{convergenceofprojection} in Appendix \ref{annexeA}, there exists $\nu > 0$ such that for all $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}|\leq \nu$, $J_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0$.\\ Thus for any $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ with $|\boldsymbol{\xi}| \leq \nu$, \begin{equation*} 0 \leq I_{\varepsilon,\Delta_{m}}(t) \leq \big| \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + J_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0, \end{equation*} i.e. \begin{equation*} d_{\bo{K}(\bo{z}_{\varepsilon, \Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) \underset{ m \to \infty}{\longrightarrow} d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big). \end{equation*} Since $\varepsilon > 0$ is fixed, \eqref{last} finally gives \begin{equation*} \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \; |\boldsymbol{\xi}| \leq \nu, \quad |\langle \boldsymbol{\pi}_{\varepsilon}(t), \boldsymbol{\xi} \rangle| \leq \left(\frac{K}{\varepsilon} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon}(t))} \big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big), \end{equation*} which, using Lemma \ref{equivalences} again, is equivalent to \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t)), \quad \forall \varepsilon >0, \end{equation*} which ends the proof. \end{proof} \begin{proof} The proof is divided into two parts. \begin{itemize} \item Existence: the existence is obvious by compactness.
Indeed $\bo{z}_{\varepsilon}:=\lim_{m\to \infty} \bo{z}_{\varepsilon,\Delta_{m}}(t)$, where $(\bo{z}_{\varepsilon,\Delta_{m}})_{m}$ is defined in the proof of Theorem \ref{thm_conv}. \item Uniqueness: let $\bo{z}^{1}_{\varepsilon},\bo{z}^{2}_{\varepsilon} \in \boldsymbol{Q}_{0}$ be two solutions of \eqref{conDiff}. Since $- (\boldsymbol{\mathcal{E}}_{t}^{\varepsilon})^{'}(\boldsymbol{z}_{\varepsilon}) \in N(\boldsymbol{Q}_{0}, \boldsymbol{z}_{\varepsilon})$, we have by Proposition \ref{prox-reg-char} in Appendix \ref{annexeA} that \begin{equation*} \langle - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}] - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}\rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| }{2\eta}\Big| \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}\Big|^{2} \end{equation*} and \begin{equation*} \langle - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{2}_{\varepsilon}] - \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}), \bo{z}^{1}_{\varepsilon} - \bo{z}^{2}_{\varepsilon}\rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})| }{2\eta}\Big| \bo{z}^{1}_{\varepsilon} - \bo{z}^{2}_{\varepsilon}\Big|^{2}, \end{equation*} where we recall that $\eta >0$ is the prox-regularity constant of $\boldsymbol{Q}_{0}$ (see Theorem \ref{constant-prox-reg}).\\ Summing the above inequalities, we have \begin{equation*} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \langle \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| + |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})| }{2\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}, \end{equation*} where $\bo{\hat{\mathcal{L}}}_{\varepsilon} :=
\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{2}_{\varepsilon}] - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}]$ and $\bo{\hat{z}}_{\varepsilon} := \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}$. Since $F$ is convex we have that \begin{equation*} \langle \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \geq 0, \end{equation*} so that \begin{equation}\label{eq_interm} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle \leq \dfrac{|(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| + |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})|}{2\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}. \end{equation} Let us prove that $(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}_{\varepsilon}) = \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\bo{z}_{\varepsilon})$ is bounded, where $\bo{z}_{\varepsilon}$ solves \eqref{conDiff}.\\ On one hand, by decomposing the pointwise delay operator we have \begin{equation*} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}]= \dfrac{1}{\varepsilon}\mu_{0,i} z_{\varepsilon,i}(t) - \dfrac{1}{\varepsilon}\int_{0}^{t/\varepsilon}z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a)da - \dfrac{1}{\varepsilon}\int_{t/\varepsilon}^{\infty}z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a)da, \quad \forall i. \end{equation*} The first two terms are bounded.
Indeed since $\boldsymbol{z}_{\varepsilon}$ is bounded and $\bo{\rho}$ is integrable, \begin{eqnarray*} \left| \dfrac{\mu_{0,i}z_{\varepsilon,i}(t)}{\varepsilon} - \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon} z_{\varepsilon,i}(t -\varepsilon a) \rho_{i}(a)da \right| & \leq & \dfrac{1}{\varepsilon} \left( \mu_{0,i} + \int_{0}^{t/\varepsilon}\rho_{i}(a)da \right) \sup_{0 \leq t \leq T} |z_{\varepsilon,i}(t)| \\ & \leq & \dfrac{2\mu_{0,i}}{\varepsilon} \sup_{0 \leq t \leq T} |z_{\varepsilon,i}(t)|, \quad \forall i. \end{eqnarray*} The same arguments hold for the integral involving $\bo{z}_{p}$ and ensure that there exists a constant $\tilde{K}$ (independent of $\varepsilon$) such that $\left|\bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}]\right| \leq \tilde{K}/\varepsilon$.\\ On the other hand, since $\boldsymbol{z}_{\varepsilon}$ is uniformly bounded with respect to $\varepsilon$ in $(0,T]$ and $F$ is assumed to be continuously differentiable, $\bo{F}^{'}(\boldsymbol{z}_{\varepsilon})$ is bounded uniformly in $\varepsilon$, so that there exists a constant $K_{F}$ such that $|\bo{F}^{'}(\bo{z}_{\varepsilon})| \leq K_{F}$.\\ This implies that \begin{equation*} \left| (\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}_{\varepsilon}) \right| \leq \dfrac{\tilde{K}}{\varepsilon} + K_{F}. \end{equation*} By the latter inequality and \eqref{eq_interm} we have \begin{equation}\label{I} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle \leq \dfrac{\frac{\tilde{K}}{\varepsilon} + K_{F}}{\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}. \end{equation} Let us now find a lower bound for $\langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle$.
Since $\langle a-b,a \rangle \geq \dfrac{1}{2}\left(|a|^{2} - |b|^{2}\right)$, by assuming that $\bo{z}^{2}_{p}(t) = \bo{z}^{1}_{p}(t)$ for all $t \leq 0$, we have \begin{eqnarray*} \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty}\big| \hat{z}_{\varepsilon,i}(t)\big|^{2} \rho_{i}(a)da - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da & \leq & \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \end{eqnarray*} so that \begin{equation}\label{II} \dfrac{\mu_{0,m}}{2\varepsilon}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle. \end{equation} By \eqref{I} and \eqref{II}, we have \begin{equation*} \dfrac{\mu_{0,m}}{2}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \dfrac{\tilde{K} + \varepsilon K_{F}}{\eta}\left| \bo{\hat{z}}_{\varepsilon}(t) \right|^{2}.
\end{equation*} It follows that $\bo{\hat{z}}_{\varepsilon} \equiv 0$, which proves uniqueness. \end{itemize} \end{proof}
\bibliographystyle{plain}
\bibliography{biblio}
\end{document}
\documentclass{ws-m3as}
\usepackage{pgfkeys}
\usepackage{bbold}
\usepackage{bbm}
\usepackage{dsfont}
\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage{hyperref}
\usepackage[toc]{appendix}
\usepackage{pgfplots}
\pgfplotsset{compat=1.18}
\usepackage{pgfplotstable}
\newcommand{\ep}{\varepsilon}
\newcommand{\eps}[1]{{#1}_{\varepsilon}}
\newcommand{\bo}{\boldsymbol}
\newtheorem{Def}{Definition}
\newtheorem{Theo}{Theorem}
\newtheorem{Prop}{Proposition}
\newtheorem{Lemma}{Lemma}
\newtheorem{Corollary}{Corollary}
\newtheorem{Ass}{Assumption}
\newtheorem{Rmk}{Remark}
\newtheorem{EX}{Example}
\usepackage{tikz}
\DeclareMathOperator*{\argmin}{arg\,min}
\newcommand{\alert}[1]{{\color{red}#1}}
\newcommand{\cb}[1]{{\color{blue}#1}}
\newcommand{\RR}{{\mathbb{R}}}
\newcommand{\NN}{{\mathbb{N}}}
\begin{document}
\markboth{Thierno Mamadou Baldé and Vuk Milisic}{Analysis of non-overlapping models with a weighted infinite delay}
\author{Thierno Mamadou Baldé}
\address{Univ Brest, CNRS UMR 6205, Laboratoire de Mathématiques de Bretagne Atlantique 6, \\Avenue Victor Le Gorgeu, 29200 Brest, France}
\author{Vuk Milisic}
\address{Univ Brest, CNRS UMR 6205, Laboratoire de Mathématiques de Bretagne Atlantique 6, \\Avenue Victor Le Gorgeu, 29200 Brest, France}
\title{Analysis of non-overlapping models with a weighted infinite delay}
\maketitle
\begin{abstract} The framework of this article is cell motility modeling. Approximating cells as rigid spheres, we take into account both non-penetration and adhesion forces. Adhesions are modeled as memory-like microscopic elastic forces. This leads to a delayed and constrained vector-valued system of equations. We prove that the solution of these equations converges, when $\varepsilon$, the linkages' turnover parameter, tends to zero, to a constrained model with friction.
We discretize the problem and penalize the constraints to get an unconstrained minimization problem. The well-posedness of the constrained problem is obtained by letting the penalty parameter tend to zero. Energy estimates \emph{à la} De Giorgi are derived accounting for the delay. Thanks to these estimates and the convexity of the constraints, we obtain compactness uniformly with respect to the discretisation step and $\varepsilon$; this is the mathematically involved part of the article. Considering that the characteristic bonds' lifetime goes to zero, we recover a friction model comparable to [Venel {\em et al}, ESAIM, 2011] but under more realistic assumptions on the external load, this part being also one of the challenging aspects of the work. \end{abstract} \keywords{Adhesions, contact models, Volterra equations, optimal conditions, friction.} \ccode{Mathematics Subject Classification: xxx, xxx} \section{Introduction} Cell migration is driven by various extracellular guidance cues, of chemical or mechanical type. The first kind of response is due to gradients of diffusible cues that are either attractive or repulsive; this mechanism is called \textit{chemotaxis}. Examples of chemotaxis include bacteria migrating towards nutrients \cite{jen906} and lymphocytes responding to chemokine gradients in order to locate sites of immune response \cite{thom90}. In \cite{xue02}, the authors prove that molecules of the Fibroblast Growth Factor family of types 4 and 8 respectively control the attractive and repulsive chemotaxis during chicken gastrulation. In recent years, \textit{durotaxis} (migration guided by the mechanical compliance of the substrate) has been investigated in many papers. In \cite{jai2022}, the elastic properties of the migratory substrate are shown to bias single and collective cell migration.
The authors proved as well that cells exert higher traction and increase their spreading area when exposed to stiffer surfaces or stiffness gradients, and may alter their contractility to withstand the mechanical properties of the migratory substrate. Furthermore, the authors of \cite{jai2022} prove that human cancer cells display stronger phenotypes when exposed to stiffer substrates, and that collective epithelial cells undergo durotaxis even if the cells taken individually do not necessarily do so. These two mechanisms, chemotaxis and durotaxis, are both investigated in \cite{carole22}. There the authors underline the similarity but also the remarkable diversity of cells' responses to their local environment. In order to account for this locality, we model contacts between neighboring cells. When considering the literature related to this field, sweeping processes are the starting point. In his seminal paper \cite{mor77}, Moreau considers a point $q(t)$ in a moving closed and convex set $C(t)$ of a Hilbert space $H$ without external perturbation. The particle stays at rest as long as it happens to lie in the interior of $C$; once caught up by the boundary $\partial C(t)$, it can only move in the inward normal direction: it always belongs to $C(t)$. Many other authors have since attempted to either weaken the hypotheses or add an external perturbation to Moreau's system. For instance in \cite{cast93}, in finite dimension, the authors considered the set-valued function $C$ as the complement of a convex set. Moreover, the authors introduced a bounded, closed and convex-valued multifunction. In \cite{cast95}, the perturbation is supposed to be upper semi-continuous with \textit{linear compact growth}, and $C$ is Hausdorff continuous and satisfies the so-called \textit{interior ball condition}. To weaken the convexity of $C(t)$, Colombo et al. introduce prox-regular sets.
A prox-regular set (defined below in a more formal way) can be of any shape (non-convex for instance) but it is possible to project points onto it if these are close enough. The authors deal first with an unperturbed problem before adding external perturbations. More recently, Juliette Venel uses similar arguments to deal with non-penetration models in the case of human crowd motion and emergency exits \cite{venel08}. Pedestrians are idealized as rigid disks with radii $r_{i} > 0$ and centers $q_{i} \in \mathbb{R}^{2}$, and the individual centers are collected in a single vector called the global configuration. Venel models the crowd's dynamics so that individuals do not overlap. She perturbs the model by adding an individualistic (or desired) velocity (the velocity that individuals would aim at in the absence of others), represented by a Lipschitz and bounded function. The actual velocity is then the closest admissible velocity to the desired one. Here we model adhesions using a microscopic description of bonds as a continuous deterministic birth and death process. This approach was used in the pioneering work of Oelz and Schmeiser \cite{OelzSch10}. The model is based on the microscopic description of the dynamics and interactions of individual filaments, called the Filament-Based Lamellipodium Model. The adhesion forces inside this model rely on a microscopic description of proteic linkages. The authors in \cite{OelzSch10} derived a formal limit (when the rate of linkage turnover $\varepsilon$ is small enough). They end up with a gradient flow model with classical friction terms for the adhesion of actin filaments to the substrate and for cross-links. Using \textbf{minimizing movements} {\em à la} De Giorgi, they prove that the semi-discretisation in time of the problem converges and provides existence and uniqueness of the limit problem. Since then, various attempts were made to make this formal computation rigorous \cite{MiOelz11}, \cite{MiOelz16}, \cite{MiOelz18}, \cite{Mi20}.
To simplify the problem, a single adhesion point was considered. Its position is the first unknown of the problem and a population of bonds related to this point is the second one. The equation for the position is a Volterra equation accounting for the balance between the elastic forces of the linkages and an external load. The population density solves an age-structured problem with a non-local birth term modelling saturation of bonds. This equation depends on $\varepsilon$ as well. In \cite{MiOelz16}, the authors considered the fully coupled case (the death-rate of linkages depends on the unknown position). They proved that if the balance between the on-rate of the linkages and the external force is violated, then the velocity of the particles blows up as the density vanishes. This blow-up mimics the detachment of the binding site from the substrate. In a further step, space dependence was taken into account as well (see \cite{MiOelz18}, \cite{Mi20}). In \cite{Mi20}, a delayed harmonic map is considered on the sphere. A complete asymptotic study of a scalar fourth-order penalized and delayed problem was achieved recently in \cite{MiSou}, where the authors considered limits with respect to $\varepsilon$ and for large times. In the present work, we model the time-dependent positions of several cells. These minimize an energy functional under non-linear non-overlapping constraints. The energy contains two parts: a delay term representing the adhesive energy and a coercive and strictly convex function representing the energy of the external load. The adhesive terms in the total energy rely on the same memory models presented above. Their presence does not allow straightforward proofs of existence, nor does it provide compactness. This is why we discretize the problem with respect to time and age. This approach leads to delayed minimizing movements in the spirit of \cite{Mi20}.
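The delayed minimizing-movement idea can be made concrete on a toy configuration. The sketch below performs one such step for two disks on a line: the age integral is discretized, the non-overlap constraint $D_{12}(q) \geq 0$ is replaced by a quadratic penalty, and the penalized functional is minimized by a crude gradient descent. The linkage density, external load and all constants here are illustrative placeholders, not the model's actual data.

```python
import numpy as np

eps, da, r = 0.5, 0.1, 1.0                 # turnover parameter, age step, disk radius
ages = da * np.arange(1, 50)               # truncated age grid
w = da * np.exp(-ages) / eps               # weights of the age integral, toy rho(a)=e^{-a}

# history[l, i] stands for the past position z_i(t - eps*(l+1)*da) of particle i
h0 = -1.2 + 0.01 * np.arange(49)
history = np.stack([h0, -h0], axis=1)      # two particles approaching symmetrically

W = w.sum()                                # total delay weight
wz = (w[:, None] * history).sum(axis=0)    # weighted past positions
pen = 300.0                                # penalty parameter for the constraint

def grad(q):
    """Gradient of the penalized energy: delay term + load F(q)=|q|^2/2 + penalty."""
    g = W * q - wz + q
    s = q[1] - q[0] - 2 * r                # signed distance D_12(q)
    if s < 0:                              # overlap: quadratic penalty 0.5*pen*s^2
        g[0] -= pen * s
        g[1] += pen * s
    return g

q = history[0].copy()                      # start from the most recent past position
for _ in range(10_000):                    # crude explicit gradient descent
    q = q - 1e-3 * grad(q)

# the minimizer is (almost) feasible: overlap only up to the penalty error O(1/pen)
assert q[1] - q[0] >= 2 * r - 1e-2
```

Increasing the penalty parameter drives the residual overlap to zero; this mirrors, on a toy example only, the penalization strategy described above.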
We extend the energy estimates provided by classical {\em minimizing movements} \cite{OelzSch10} to the case with memory. The crucial property enabling this step is the monotonicity of the binding kernels. These estimates and convexity assumptions on the source term (the position-dependent {\em external load}) are used in order to prove compactness. Precisely, we prove that the time derivative of the solution is bounded in $L^{2}(0,T)$ for any $T>0$. We prove that the discrete minimization scheme is equivalent to a variational inclusion and show that the discrete approximation of the solution converges toward the solution of the continuous problem. We show as well that when $\varepsilon$, the instantaneous turnover parameter of our model, tends to zero, the limit function solves the model investigated in \cite{venel08} weighted by friction coefficients. Nevertheless, as we only assume coercivity and convexity of the external load, we cannot apply the same techniques as in \cite{venel08}: while the Lipschitz assumption made on the external load allows for the use of Uzawa's method in \cite{venel08}, this assumption is not made here and we propose a new alternative approach. Indeed, in \cite{venel08} the Lipschitz hypothesis is contradicted even for the simplest quadratic potentials. Instead, here, at each time step we penalize the discrete constraint and let the penalty parameter tend to zero. This extends the well-posedness of our discrete constrained problem and applies as well to \cite{venel08}. Moreover, in \cite{venel08} the Lipschitz feature of the external load guarantees the boundedness of the discrete time derivative of the solution. Here, since we have weakened this hypothesis, the arguments of \cite{venel08} do not apply in the asymptotics with respect to $\varepsilon$ (the delay operator is not uniformly bounded with respect to $\varepsilon$).
In order to overcome this difficulty, we test the Euler-Lagrange equations against a regular enough test function and transpose the delay operator onto it \cite{Mi20}. The paper is organized as follows: in Section 2 we set the framework of the problem. We first recall the notion of non-overlapping introduced in \cite{venel08}, then we define the contact adhesion model and lastly we state some assumptions on the data. Section 3 is devoted to the results of this paper. In this section we first prove the well-posedness of the discrete solution; we then establish a compactness criterion which we use to prove the convergence of our model toward a weighted differential inclusion. All the results are extended to the torus as well. We end Section 3 with some numerical simulations. \section{Definition of the model} \subsection{Preliminaries} Consider $N_{p}$ particles which we idealize as rigid disks whose centers and radii are $q_{i} := (q_{i}^{x}, q_{i}^{y}) \in \mathbb{R}^{2}$ and $r_{i}>0$, $i =1,\cdots,N_{p}$, respectively. We identify the $i$th particle with the pair $(q_{i},r_{i})$. The global configuration of all particles is given by \begin{equation} \boldsymbol{q}:= \left(q_{1},q_{2},\cdots,q_{N_{p}} \right) \in \mathbb{R}^{2N_{p}}. \end{equation} For $i < j$, we define $D_{ij}(\boldsymbol{q})$, the signed distance between $(q_{i},r_{i})$ and $(q_{j},r_{j})$, by \begin{equation}\label{signed_distance} D_{ij}(\boldsymbol{q}):= |q_{j}-q_{i}|-(r_{i}+r_{j}), \end{equation} see Figure \ref{distance}. Here $|\cdot|$ denotes the Euclidean norm.
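The definitions of $D_{ij}$ and of its gradient transcribe directly into code; the following sketch (function and variable names are ours) evaluates them for two disks of radius $1$ whose centers are $3$ apart, giving a signed distance of $1$.

```python
import numpy as np

def D(q, r, i, j):
    """Signed distance D_ij(q) = |q_j - q_i| - (r_i + r_j)."""
    return np.linalg.norm(q[j] - q[i]) - (r[i] + r[j])

def G(q, r, i, j):
    """Gradient of D_ij: zero everywhere except -e_ij in slot i and +e_ij in slot j."""
    e = (q[j] - q[i]) / np.linalg.norm(q[j] - q[i])
    g = np.zeros_like(q)
    g[i], g[j] = -e, e
    return g

q = np.array([[0.0, 0.0], [3.0, 0.0]])   # two centers, 3 apart
r = np.array([1.0, 1.0])                  # both radii equal to 1

print(D(q, r, 0, 1))                     # → 1.0 : the disks do not overlap
```

A positive value of `D` means the disks are separated, zero means contact, and a negative value flags an overlap (an infeasible configuration).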
\begin{figure}[!ht] \centering \begin{tikzpicture} \draw (0,0) circle (1); \draw[ball color=black](0,0) circle(0.04) node[pos=0.5, below]{$q_{i}$} ; \draw (5,0) circle (1.5); \draw[ball color=black](5,0) circle(0.05) node[below]{$q_{j}$}; \draw (0,0) -- (-0.707, 0.707) node[pos=0.5, left, above, sloped]{$r_{i}$}; \draw (5,0) -- (5,1.5) node[pos=0.5, left, above, left]{$r_{j}$}; \draw [<->] (1.05,0) -- (3.45,0) node[pos=0.5,above] {$D_{ij}(\boldsymbol{q})$}; \draw [thick,->] (-0.1,0) -- (-2.5,0) node[pos=0.8,above] {$-e_{ij}(\boldsymbol{q})$}; \draw [thick,->] (5.1,0) -- (7.5,0) node[pos=0.9,above] {$e_{ij}(\boldsymbol{q})$}; \end{tikzpicture} \caption{The signed distance} \label{distance} \end{figure} Therefore the gradient vector of $D_{ij}$ naturally involves the oriented vector $e_{ij}(\bo{q})$ of Figure \ref{distance} and reads \begin{equation*} \boldsymbol{G}_{ij}(\boldsymbol{q}) := \nabla D_{ij}(\bo{q}) = \left(0,\cdots 0, \underset{i}{-e_{ij}(\bo{q})}, 0\cdots 0, \underset{j}{e_{ij}(\bo{q})}, 0, \cdots,0\right), \quad e_{ij}(\bo{q}):= \dfrac{q_{j}-q_{i}}{|q_{j}-q_{i}|}, \quad \forall i<j. \end{equation*} The particles should not overlap, so we define $\boldsymbol{Q}_{0}$, the set of global configurations for which $D_{ij}$ is nonnegative for any pair of distinct particles. Precisely \begin{equation}\label{Q0} \boldsymbol{Q}_{0} := \left\{ \boldsymbol{q} \in \mathbb{R}^{2N_{p}}, \, D_{ij}(\boldsymbol{q}) \geq 0, \, \forall i<j \right\}. \end{equation} $\boldsymbol{Q}_{0}$ is called the set of feasible configurations. \subsection{Definition of the adhesion contact model} Let $T>0$ be any time value and let $\varepsilon$ be a nonnegative parameter.
In this article the positions of the $N_{p}$ particles in $\mathbb{R}^{2}$ at time $t$ are represented by $\bo{z}_{\varepsilon}(t)\in \mathbb{R}^{2N_{p}}$ and solve the minimization problem: \begin{equation}\label{Eq1} \begin{cases} \displaystyle{\bo{z}_{\varepsilon}(t) = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{Q}_{0}} E^{\varepsilon}_{t}(\boldsymbol{q}), \quad t \in (0,T]}, \vspace{0.5em} \\ \boldsymbol{z}_{\varepsilon}(t) = \boldsymbol{z}_{p}(t), \quad \forall t \leq 0, \end{cases} \end{equation} where the energy functional reads \begin{equation*} E^{\varepsilon}_{t}(\boldsymbol{q}) := \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{\mathbb{R}_{+}} \left|q_{i} - z_{\varepsilon,i}(t-\varepsilon a) \right|^{2}\rho_{i}(a)da + F(\boldsymbol{q}), \end{equation*} $\boldsymbol{z}_{p}$ represents the positions for negative times and $F:\mathbb{R}^{2N_{p}}\to \mathbb{R}$ is the energy associated with the external load. The parameter $\varepsilon$ represents the maximal lifetime of the linkages (a dimensionless parameter representing the ratio of a characteristic time to a characteristic age of the bonds) and its inverse is assumed to be proportional to the linkages' stiffness.\\ Furthermore we assume that the linkages' density is independent of time and of $\varepsilon$, and solves an age-structured equation.
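To make the energy functional concrete, the following sketch approximates the age integral by a left-endpoint Riemann sum on a truncated age grid; the truncation, the grid and all names are our own illustrative choices, not part of the model.

```python
import numpy as np

def energy(q, z_past, rho, a_grid, eps, F):
    # E_t^eps(q) ~ (1/(2 eps)) sum_i int |q_i - z_i(t - eps a)|^2 rho_i(a) da + F(q)
    # q: (Np, 2) candidate configuration; z_past(a): past positions z(t - eps*a);
    # rho: (Np, len(a_grid)) sampled linkage densities.
    da = a_grid[1] - a_grid[0]
    quad = 0.0
    for k, a in enumerate(a_grid):
        z = z_past(a)
        quad += np.sum(np.sum((q - z) ** 2, axis=1) * rho[:, k]) * da
    return quad / (2.0 * eps) + F(q)
```

A minimizer of such a discretized energy over the feasible set is precisely what the time discretization introduced later computes.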
Precisely, for any particle $i$, $\rho_{i}$ solves the following equation \begin{equation}\label{contRho} \begin{cases} \partial_{a}\rho_{i}(a) + (\zeta_{i}\rho_{i})(a) = 0, \quad a > 0, \vspace{0.75em} \\ \displaystyle{\rho_{i}(0) = \beta_{i}\left(1-\int_{0}^{\infty}\rho_{i}(a)da \right)}, \end{cases} \end{equation} where the linkages' off-rates $\zeta_{i}: \mathbb{R}_{+}\to \mathbb{R}_{+}$ and on-rates $\beta_{i} \in \mathbb{R}_{+}$ are given.\\ We mention that the non-local term in parentheses in \eqref{contRho} is a saturation term: if the integral is close enough to $0$, more births occur, while if it is large enough then $\rho_{i}(0)$ is small. We define the vector of linkage densities $\boldsymbol{\rho} := (\rho_{i})_{i=1,\cdots,N_{p}}$, as well as the vectors of on-rates $\boldsymbol{\beta}$ and off-rates $\boldsymbol{\zeta}$. \subsection{Main objective} We aim in this paper at proving that the global configuration $\boldsymbol{z}_{\varepsilon}$ satisfies \begin{equation}\label{goal1} \begin{cases} \boldsymbol{\mathcal{L}}_{\varepsilon}[\boldsymbol{z}_{\varepsilon}] +\nabla F(\boldsymbol{z}_{\varepsilon}) \in -N\left( \boldsymbol{K}(\boldsymbol{z}_{\varepsilon}),\boldsymbol{z}_{\varepsilon} \right), \quad \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ \boldsymbol{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad \forall t \leq 0, \end{cases} \end{equation} where the delay operator reads \begin{equation}\label{cont-delay-operator} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}](t):= \dfrac{1}{\varepsilon} \int_{0}^{\infty}\left(z_{\varepsilon,i}(t) - z_{\varepsilon,i}(t-\varepsilon a)\right)\rho_{i}(a)da, \quad \forall i.
\end{equation} Moreover we prove that $\boldsymbol{z}_{\varepsilon} \longrightarrow \boldsymbol{z}_{0}$ as $\varepsilon \to 0$ in $C\left([0,T]; \mathbb{R}^{2N_{p}}\right)$, where the limit function $\boldsymbol{z}_{0}$ solves \begin{equation}\label{eq.friction}\left\{ \begin{aligned} &\boldsymbol{\mu}_{1}\partial_{t}\boldsymbol{z}_{0} + \nabla F(\boldsymbol{z}_{0}) \in -N\left(\boldsymbol{K}(\boldsymbol{z}_{0}),\boldsymbol{z}_{0} \right), \quad \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ &\boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{aligned} \right. \end{equation} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\boldsymbol{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i} := \int_{0}^{\infty} \tilde{a} \rho_{i}(\tilde{a})d\tilde{a} \in \mathbb{R}, \quad \forall i. \end{equation*} We mention that $\bo{K}(\bo{z}_{\varepsilon})$ (respectively $\bo{K}(\bo{z}_{0})$) is the interior convex approximation of $\bo{Q}_{0}$ at $\bo{z}_{\varepsilon}$ (respectively at $\bo{z}_{0}$) and $N(\bo{K}(\bo{z}_{\varepsilon}),\bo{z}_{\varepsilon})$ (respectively $N(\bo{K}(\bo{z}_{0}),\bo{z}_{0})$) is the proximal-normal cone of $\bo{K}(\bo{z}_{\varepsilon})$ (respectively $\bo{K}(\bo{z}_{0})$) at $\bo{z}_{\varepsilon}$ (respectively at $\bo{z}_{0}$). \\ We recall that for any closed and nonempty set $S$ of a Hilbert space $H$ and $x \in S$, the proximal-normal cone of $S$ at $x$ (represented in Figure \ref{cone-normal}) is defined as \begin{equation}\label{proximal-normal} N(S,x) := \left\{ v \in H; \; \exists \alpha > 0 \text{ s.t. } x \in P_{S}(x + \alpha v) \right\}.
\end{equation} \begin{figure}[!ht] \centering \begin{tikzpicture} \fill[orange!30] plot[smooth cycle] coordinates {(0,0) (4,-0.5) (4.5,-2.5) (2,-3.5) (1.25,-2)}; \node at (3,-2) {$S$}; \filldraw[green!50!black] (1.5,-1) circle (2pt) node[below] {$z \in \mathring{S}$}; \node[green!50!black] at (1.5,-0.5) {$N(S,z) = \{0\}$}; \node[red] at (8,-4.5) {$N(S,a) = \emptyset$}; \filldraw[red] (8,-4) circle (2pt) node[above] {$a \notin S$}; \filldraw[blue] (4.4,-1) circle (2pt) node[below, rotate = 300] {$x \in \partial S$}; \draw[->, thick, blue] (4.4,-1) -- (6.5, -0.15); \filldraw[blue](6.575, -0.1) circle (2pt) node[right] {$x+v$}; \draw[blue](5.5, -2.5) circle(0) node[left, rotate=300]{$P_S(x+v)$}; \draw[blue] (-1,-4.45) node[right] {$N(S,y)$}; \draw[->, thick, blue] (2,-3.5) -- (0.9,-6.5); \filldraw(0.85,-6.605) circle (2pt) node[below] {$y+w$}; \draw[blue](4.05,-3.72) circle(0) node[left]{$P_S(y+w)$}; \filldraw[blue] (2,-3.5) circle (2pt) node[above] {$y \in \partial S$}; \shade[ball color=blue, opacity=0.15] (2,-3.5) -- (2.75,-7) arc[start angle=-25, end angle=-200, radius=2] -- cycle; \end{tikzpicture} \caption{The proximal-normal cone of $S$ at $z \in \mathring{S}$, $x,y \in \partial S$ and $a \notin S$.} \label{cone-normal} \end{figure} To reach this main objective, we proceed as follows: we consider the discrete version of our problem and prove that it converges to \eqref{goal1} by letting the discretization step go to $0$ for fixed $\varepsilon$; the resulting solution then converges, as $\varepsilon$ goes to $0$, to the solution of \eqref{eq.friction}. \subsection{Notations and assumptions on the data} \subsubsection{Notations} For any $T>0$, we introduce the following spaces: $\bo{\mathcal{C}} := \mathcal{C}([0,T]; \mathbb{R}^{2N_{p}})$, $\bo{H}^{1} := H^{1}([0,T]; \mathbb{R}^{2N_{p}}), \bo{L}^{2}:= L^{2}([0,T];\mathbb{R}^{2N_{p}}), \bo{L}^{\infty} := L^{\infty}([0,T];\mathbb{R}^{2N_{p}})$. \subsubsection{Assumptions}\label{Assump} \begin{itemize} \item [(i)] \textit{The off-rate} is assumed to be Lipschitz continuous, i.e.
there exists a constant $L_{\bo{\zeta}} > 0$ such that \begin{equation*} |\bo{\zeta}(a) - \bo{\zeta}(b)| \leq L_{\bo{\zeta}}\left|a- b\right|, \quad \forall a, b \in \mathbb{R}_{+}. \end{equation*} Moreover, for any particle there exist $\underline{\zeta_{i}}$ and $\overline{\zeta_{i}}$ such that $\displaystyle{0 < \underline{\zeta_{i}} < \zeta_{i}(a) < \overline{\zeta_{i}}}$. We define $\displaystyle{\underline{\zeta}:= \min_{i}\underline{\zeta_{i}}}$ (respectively $\displaystyle{\overline{\zeta}:= \max_{i}\overline{\zeta_{i}}}$) as well. \item[(ii)] \textit{The source term} $F$ is coercive (\textit{cf.} Definition \ref{annexeA}.\ref{coercive}), strictly convex and continuous. \item[(iii)] \textit{The past configurations} satisfy $\boldsymbol{z}_{p} \in \mathrm{Lip}\left(\mathbb{R}_{-}; \boldsymbol{Q}_{0}\right)$, i.e. $\boldsymbol{z}_{p}(t) \in \boldsymbol{Q}_{0}$ for all $t \leq 0$ and there exists $C_{\bo{z}_{p}}> 0$ such that \begin{equation*} \big|\bo{z}_{p}(t_{2}) - \bo{z}_{p}(t_{1})\big| \leq C_{\bo{z}_{p}}\big|t_{2} - t_{1}\big|, \quad \forall t_{1}, t_{2} \leq 0. \end{equation*} \end{itemize} Note as well that in this particular case the linkages' density has a closed form. Precisely, \begin{equation}\label{expr_rho} \rho_{i}(a) = \dfrac{\beta_{i}}{1+\beta_{i} \int_{0}^{\infty} e^{-\int_{0}^{\sigma}\zeta_{i}(\tilde{a})d\tilde{a}}d\sigma} e^{-\int_{0}^{a}\zeta_{i}(\tilde{a})d\tilde{a}}, \quad i=1,\cdots,N_{p}. \end{equation} By assumption \ref{Assump} (i), the moments $\mu_{k,i}:= \int_{0}^{\infty}a^{k}\rho_{i}(a)da$, $k \in \mathbb{N}$, are well defined. In particular, for any particle there exist $\underline{\mu_{k,i}}, \overline{\mu_{k,i}}$ such that \begin{equation*} 0 < \underline{\mu_{k,i}} \leq \mu_{k,i} \leq \overline{\mu_{k,i}}.
\end{equation*} \subsection{Time and age discretization and numerical approximations} The age interval $\mathbb{R}_{+}$ is divided with a constant discretization step $\Delta a$ so that \begin{equation*} \mathbb{R}_{+} = \bigcup_{l=0}^{\infty}\big[l\Delta a, (l+1)\Delta a\big), \end{equation*} and likewise the time interval, with a discretization grid satisfying $\Delta t = \varepsilon \Delta a$ and $N := \left\lfloor \dfrac{T}{\Delta t} \right\rfloor$, so that \begin{equation*} [0,T) = \bigcup_{n=0}^{N-1}\big[n\Delta t, (n+1)\Delta t\big). \end{equation*} We set $t^{n} :=n\Delta t$ and $a_{l}:= l\Delta a$ for $(n,l) \in \{0,1,\cdots,N\}\times \mathbb{N}$.\\ We discretize \eqref{contRho} using an implicit Euler scheme. This provides $R_{l,i}$ as a function of $R_{l-1,i}$ and reads: \begin{equation}\label{discreteRho} R_{l,i} = R_{l-1,i}/\big(1+\Delta a \zeta_{l,i}\big), \quad (l,i) \in \mathbb{N}^{\ast} \times \{1,2,\cdots,N_{p}\}, \end{equation} while at the age boundary \begin{equation}\label{rhoinitial} R_{0,i} = \dfrac{R_{b,i}}{1+\frac{\Delta t}{\varepsilon}\zeta_{0,i}}, \quad \forall i \in \{1,2,\cdots,N_{p}\}. \end{equation} For any particle $i$, the non-local condition relates $R_{b,i}$ to the discrete mean of the density $\mu_{0,\Delta,i}$ as \begin{equation}\label{rhobound} R_{b,i} = \beta_{i}\big(1-\Delta a \sum_{l=0}^{\infty}R_{l,i}\big) =: \beta_{i}(1-\mu_{0,\Delta,i}).
\end{equation} By induction over $l$ in \eqref{discreteRho} we have \begin{equation*} R_{l,i} = \left( \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\right) R_{0,i}, \quad \forall i \in \{1,2,\cdots,N_{p}\}, \end{equation*} so that the following system of two equations with two unknowns ($R_{b,i}$ and $R_{0,i}$) can be set: \begin{equation*} \begin{cases} R_{b,i} - \left( 1 + \Delta a \zeta_{0,i}\right)R_{0,i} = 0,\vspace{0.5em} \\ \displaystyle{R_{b,i} + \Delta a \beta_{i} \left( 1+\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a\zeta_{r,i}} \right)R_{0,i}} = \beta_{i}, \end{cases} \end{equation*} which can be solved explicitly, giving: \begin{equation}\label{rho_0} \left\{ \begin{aligned} R_{0,i} & = \beta_{i}\left(1+\Delta a\left(\beta_{i} +\zeta_{0,i} + \beta_{i}\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\right) \right)^{-1}, \\ R_{b,i} & = \dfrac{\beta_{i}(1+\Delta a \zeta_{0,i})}{1 +\Delta a\Big(\beta_{i} +\zeta_{0,i} + \beta_{i}\sum_{l=1}^{\infty} \prod_{r=1}^{l} \dfrac{1}{1+\Delta a \zeta_{r,i}}\Big)}. \end{aligned} \right. \end{equation} The discrete version of the minimization process \eqref{Eq1} reads \begin{equation}\label{Eq1_discret} \begin{cases} \displaystyle{\boldsymbol{Z}^{n}_{\varepsilon} = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{Q}_{0}} \left\{ E_{n,\varepsilon}(\boldsymbol{q}):= \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} |q_{i} - Z^{n-l}_{\varepsilon,i}|^{2} R_{l,i} + F(\boldsymbol{q}) \right\}}, \quad n = 1,2,\cdots,N, \vspace{0.5em} \\ \boldsymbol{Z}^{n}_{\varepsilon} = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0, \end{cases} \end{equation} where the discrete average of the positions for negative times is: \begin{equation*} \bo{Z}^{n}_{p} = \dfrac{1}{\Delta t} \int_{n\Delta t}^{(n+1)\Delta t} \bo{z}_{p}(s)ds, \quad \forall n \in \mathbb{Z}_{-}.
\end{equation*} We define as well \begin{itemize} \item the piecewise constant approximation functions \begin{equation}\label{Eq2} \bo{z}_{\varepsilon,\Delta}(t):= \displaystyle{\sum_{n=1}^{N} \bo{Z}_{\varepsilon}^{n} \mathbbm{1}_{(t^{n-1}, t^{n}]}}(t),\, \displaystyle{\bo{z}_{p,\Delta}(t):= \sum_{n = -\infty}^{0}\bo{Z}_{p}^{-n}\mathbbm{1}_{(t^{n-1}, t^{n}]}(t)}, \end{equation} \item the piecewise linear interpolation \begin{equation}\label{eq.linear.interp} \bo{\tilde{z}}_{\varepsilon,\Delta}(t) := \sum_{n=1}^{N}\left\{\bo{Z}^{n-1}_{\varepsilon} + \frac{t-t^{n-1}}{\Delta t} (\bo{Z}^{n}_{\varepsilon} - \bo{Z}^{n-1}_{\varepsilon}) \right\} \mathbbm{1}_{(t^{n-1}, t^{n}]}(t), \end{equation} \item the piecewise constant approximation of the linkages' density \begin{equation}\label{rho_delta} \bo{\rho}_{\Delta}(a) := \sum_{l=0}^{\infty} \bo{R}_{l}\mathbbm{1}_{(l\Delta a,(l+1)\Delta a)}(a). \end{equation} \end{itemize} \section{Results} We first prove that the piecewise constant approximation of the linkages' density converges towards $\bo{\rho}$ when the age stepsize $\Delta a$ is small enough. \begin{Prop} Under the CFL condition $\Delta t = \varepsilon \Delta a$, for any particle the solution $R_{l,i}$ of \eqref{discreteRho} is nonnegative. \end{Prop} \begin{proof} We perform the proof by induction over $l \in \mathbb{N}$. Indeed \begin{itemize} \item $l=0$: since the birth and death rates are nonnegative, we have that $R_{b,i} \geq 0$ and $R_{0,i} \geq 0$ for any particle (see \eqref{rho_0}). \\ \item Assume that the claim holds up to $l-1$. \item Let us prove that the claim is valid for $l$. We use the induction hypothesis ($R_{l-1,i} \geq 0$) and the fact that $\zeta_{l,i}$ is nonnegative in the definition \eqref{discreteRho}. \end{itemize} \end{proof} \begin{Lemma} Under the CFL condition $\Delta t = \varepsilon \Delta a$, if the linkages' density is defined as in \eqref{discreteRho}, then $$ R_{l,i} \geq 0, \; \forall l \in \mathbb{N} \Leftrightarrow \mu_{0,\Delta,i} \leq 1, \quad \forall i \in \{1,\dots,N_p\}.
$$ \end{Lemma} \begin{proof} The claim follows from the definition of the zeroth-order discrete moment and the fact that the on-rates and off-rates are nonnegative. Indeed,\\ $\Rightarrow)$ assume that $R_{l,i} \geq 0, \quad \forall (l,i) \in \mathbb{N} \times \{1,2,\cdots,N_{p}\}$. By \eqref{rhoinitial} and \eqref{rhobound}, we have that \begin{equation*} R_{0,i} = \frac{R_{b,i}}{1+\Delta a \zeta_{0,i}} \geq 0 \implies R_{b,i} = \beta_{i}(1-\mu_{0,\Delta,i}) \geq 0, \quad \forall i. \end{equation*} We have used the fact that $\zeta_{0,i} \geq 0$ in the latter denominator. The latter inequality gives the needed result. \\ $\Leftarrow)$ Assume that $\mu_{0,\Delta,i} \leq 1$. Since $\beta_{i} \geq 0$ for all $i$, by \eqref{rhobound} we have that \begin{equation*} R_{b,i} = \beta_{i}(1-\mu_{0,\Delta,i}) \geq 0, \quad \forall i, \end{equation*} so that $R_{b,i} \geq 0$ for all particles. This in turn, by \eqref{rhoinitial} and the fact that the death rate $\zeta_{0,i}$ is nonnegative, gives that the initial linkages' density satisfies $R_{0,i}\geq 0$ for all $i$. By induction over $l \in \mathbb{N}$ in equation \eqref{discreteRho}, this gives the nonnegativity of the discrete linkages' density. Furthermore note in this case that $\mu_{0,\Delta,i} \geq 0$ for all the particles. \end{proof} Define \begin{equation*} \overline{\bo{\rho}}_{\Delta}(a) := \sum_{l=0}^{\infty}\bo{\overline{R}}_{l}\mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a) \text{ where } \bo{\overline{R}}_{l} = \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(a)da, \end{equation*} where $\bo{\rho}$ solves \eqref{contRho}, as well as $\bo{\overline{\mu}}_{0,\Delta} = \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \bo{\mu}_{0}(a)da$.
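The recursion \eqref{discreteRho} together with the explicit value of $R_{0,i}$ in \eqref{rho_0} is easy to implement. The sketch below (in Python, with a finite truncation index $L$ and a constant off-rate test case of our own choosing) illustrates the lemma: the computed $R_{l,i}$ are nonnegative and $\mu_{0,\Delta,i} \leq 1$. For $\zeta_{i} \equiv 1$ and $\beta_{i} = 2$, the closed form \eqref{expr_rho} gives $\mu_{0,i} = \beta_{i}/(1+\beta_{i}) = 2/3$, which the discrete moment reproduces up to the truncation error.

```python
import numpy as np

def discrete_density(beta, zeta, da, L):
    # R_l = R_{l-1} / (1 + da * zeta_l) with R_0 given by the closed form (rho_0);
    # zeta is a callable l -> zeta_{l,i}; the age axis is truncated at index L.
    z = np.array([zeta(l) for l in range(L + 1)])
    prods = np.cumprod(1.0 / (1.0 + da * z[1:]))     # prod_{r=1}^{l}, l = 1..L
    R0 = beta / (1.0 + da * (beta + z[0] + beta * prods.sum()))
    return R0 * np.concatenate(([1.0], prods))       # R_l for l = 0..L

R = discrete_density(beta=2.0, zeta=lambda l: 1.0, da=0.01, L=4000)
mu0 = 0.01 * R.sum()                                  # discrete zeroth moment
```

For this constant off-rate the discrete moment even matches $\beta/(1+\beta)$ exactly, up to the tail cut off at $l > L$.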
We have \begin{Lemma} Under the same hypotheses as above, if $\bo{\rho}$ solves \eqref{contRho}, then we have that \begin{equation*} \left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|_{L^{1}_{a}} \leq O(\Delta a) \text{ and } \left| \bo{\overline{\rho}}_{\Delta} - \bo{\rho}\right|_{L^{1}_{a}} \leq O(\Delta a), \end{equation*} where $L^{1}_{a}:= L^{1}\left(\mathbb{R}_{+}, \mathbb{R}^{N_{p}}\right)$ and $\bo{\rho}_{\Delta}$ is defined in \eqref{rho_delta}. \end{Lemma} \begin{proof} Indeed, due to the consistency of the scheme \eqref{discreteRho}, we have that \begin{eqnarray*} \delta \overline{R}_{l,i} + \Delta a \zeta_{l,i} \overline{R}_{l,i} &=& \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a}(1+\zeta_{l,i} \Delta a) e^{-\int_{0}^{\Delta a}\zeta_{i}(s)ds}\rho_{i}(a)da - \dfrac{1}{\Delta a}\int_{l\Delta a}^{(l+1)\Delta a}\rho_{i}(a)da\\ & = & \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \left( \Delta a(\zeta_{l,i} - \zeta_{i}(a)) + O(\Delta a^{2})\right)\rho_{i}(a)da \leq L_{\bo{\zeta}} ||\zeta_{i}||_{W^{1,\infty}_{a}} \Delta a^{2}\overline{R}_{l,i}. \end{eqnarray*} We have used the fact that \begin{equation*} |\zeta_{l,i} - \zeta_{i}(a)| \leq \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} \left| \zeta_{i}(\sigma) - \zeta_{i}(a) \right| d\sigma, \quad \forall a \in \left(l\Delta a, (l+1)\Delta a\right), \forall i =1,\cdots,N_{p}, \end{equation*} so that for any particle \begin{eqnarray*} |\zeta_{l,i} - \zeta_{i}(a)| & \leq & \dfrac{1}{\Delta a} \int_{l\Delta a}^{(l+1)\Delta a} |a-\sigma| \left|\dfrac{ \zeta_{i}(\sigma) - \zeta_{i}(a) }{\sigma - a} \right|d\sigma \\ & \leq & \int_{l\Delta a}^{(l+1)\Delta a} \left|\left|\partial_{a}\zeta_{i}\right|\right|_{L^{\infty}_{a}}d\sigma \leq \Delta a \left|\left|\partial_{a}\zeta_{i}\right|\right|_{L^{\infty}_{a}}.
\end{eqnarray*} On the other hand, setting $E_{i} := \Delta a \sum_{l=0}^{\infty}\left|R_{l+1,i} - \overline{R}_{l+1,i}\right|$ for any particle, we have that \begin{eqnarray*} E_{i} &=& \Delta a\sum_{l=0}^{\infty}\left| \dfrac{R_{l,i}}{1+\Delta a \zeta_{l+1,i}} - \overline{R}_{l+1,i} \right| \leq \dfrac{\Delta a}{1+\Delta a \underline{\zeta}_{i}} \left(E_{i} + \sum_{l=0}^{\infty}\left|(1+\Delta a\zeta_{l+1,i})\overline{R}_{l+1,i} - \overline{R}_{l,i}\right|\right)\\ & \leq & \dfrac{\Delta a E_{i}}{1+\Delta a\underline{\zeta}_{i}} + \dfrac{C}{1+\Delta a \underline{\zeta}_{i}} \Delta a^{2}, \quad \forall i, \end{eqnarray*} which gives $E_{i} \leq C \Delta a, \; \forall i \in \{1,2,\cdots,N_{p}\}$, implying that $|\bo{E}| \leq C\Delta a$. It follows that \begin{equation*} \int_{0}^{\infty} \left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|(a)da \leq \int_{0}^{\infty} \sum_{l=0}^{\infty} |\bo{R}_{l} - \bo{\overline{R}}_{l}| \mathbbm{1}_{\left(l\Delta a,(l+1)\Delta a\right)}(a)da \leq C\Delta a, \end{equation*} so that $\left|\bo{\rho}_{\Delta} - \bo{\overline{\rho}}_{\Delta}\right|_{L^{1}_{a}} \leq O(\Delta a)$, which is the first claim. Next \begin{eqnarray*} \int_{0}^{\infty} \left| \bo{\overline{\rho}}_{\Delta}(a) - \bo{\rho}(a) \right|da & = & \int_{0}^{\infty} \Big| \bo{\rho}(a) - \dfrac{1}{\Delta a} \sum_{l=0}^{\infty} \Big( \int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(\sigma)d\sigma \Big) \mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a) \Big|da \\ & \leq & \sum_{l=0}^{\infty} \int_{0}^{\infty} \Big| \bo{\rho}(a) - \dfrac{1}{\Delta a}\int_{l\Delta a}^{(l+1)\Delta a} \bo{\rho}(\sigma)d\sigma \Big|\mathbbm{1}_{(l\Delta a, (l+1)\Delta a)}(a)da. \end{eqnarray*} Define the space $\displaystyle{U := \left\{ f \in L^{1}_{a} \text{ s.t.
} \limsup_{\sigma \to 0} \int_{0}^{\infty} \big|\dfrac{f(a+\sigma) - f(a)}{\sigma}\big| da < \infty \right\}}$ endowed with the norm \begin{equation*} ||f||_{U} := ||f||_{L^{1}_{a}} + \limsup_{\sigma \to 0} \int_{0}^{\infty} \left|\dfrac{f(a+\sigma) - f(a)}{\sigma}\right|da. \end{equation*} We have, by Lemma B.2, p.~36 of \cite{Mi20}, that \begin{equation*} \int_{0}^{\infty} \left| \bo{\overline{\rho}}_{\Delta}(a) - \bo{\rho}(a) \right|da \leq \Delta a\left|\bo{\rho}\right|_{U}. \end{equation*} Thus taking $\Delta a$ small enough gives the second claim. \end{proof} \subsection{Existence and uniqueness of the solution of the constrained problem} Since $\boldsymbol{Q}_{0}$ is nonconvex (see Figure \ref{lack_convexity} below), we consider its interior convex approximation $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ defined as follows \begin{equation}\label{constSet} \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) := \left\{ \boldsymbol{q} \in \mathbb{R}^{2N_{p}}:\, \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0, \; \forall \, i < j \right\}, \end{equation} where for any $n$ and $\varepsilon$ fixed, the constraint functions $\varphi^{n,\varepsilon}_{ij}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}$ are affine and read \begin{equation}\label{functions} \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}):=-D_{ij}(\bo{Z}^{n-1}_{\varepsilon}) - \boldsymbol{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\boldsymbol{q}- \bo{Z}^{n-1}_{\varepsilon}), \quad i <j. \end{equation} The minimization problem over this convex set reads: find $\boldsymbol{Z}^n_{\varepsilon} \in \RR^{2N_p}$ s.t. \begin{equation}\label{contranint} \left\{ \begin{aligned} \boldsymbol{Z}^{n}_{\varepsilon}& = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) } E_{n,\varepsilon}(\boldsymbol{q}) , \quad n \geq 1, \vspace{0.75em} \\ \boldsymbol{Z}^{n}_{\varepsilon} & = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0. \end{aligned}\right.
\end{equation} Due to Lemma \ref{equality} below, we have that \eqref{Eq1_discret} is equivalent to \eqref{contranint}, so that instead of \eqref{Eq1_discret} we may deal with \eqref{contranint} in the following investigations. \begin{Theo}\label{thm1} Let us fix the integer $n \geq 1$ and assume that $\boldsymbol{Z}^{n-1}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$. Moreover suppose that assumptions \ref{Assump} (i)-(iii) hold and consider the penalised problem: find $\boldsymbol{Z}^{n}_{\varepsilon,\delta}$ such that \begin{equation}\label{penalise} \begin{cases} \displaystyle{\boldsymbol{Z}^{n}_{\varepsilon,\delta} = \argmin_{\boldsymbol{q}\, \in \, \mathbb{R}^{2N_{p}}} \left\{ E^{\delta}_{n,\varepsilon}(\boldsymbol{q}):= E_{n,\varepsilon}(\boldsymbol{q}) + \dfrac{1}{2\delta} \sum_{i<j} \max\left(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}),0\right)^{2} \right\}}, \\ \boldsymbol{Z}^{n}_{\varepsilon,\delta} = \boldsymbol{Z}^{n}_{p}, \quad n \leq 0. \end{cases} \end{equation} Then there exists a unique $\boldsymbol{Z}^{n}_{\varepsilon, \delta} \in \RR^{2 N_p}$ solving the above problem. Moreover, letting the penalty parameter $\delta$ go to $0$, $\boldsymbol{Z}^{n}_{\varepsilon, \delta}$ converges to $\boldsymbol{Z}^{n}_{\varepsilon}$ solving \eqref{contranint}. Again, one has that $\boldsymbol{Z}^{n}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n}_{\varepsilon})$. The result is then true for any $n \in \NN^*$. \end{Theo} \begin{proof} Thanks to assumption \ref{Assump}.(iii), one has that $\boldsymbol{Z}^0_\varepsilon \equiv \boldsymbol{z}_p(0)$ is such that $\boldsymbol{Z}^0_\varepsilon \in \boldsymbol{K}(\boldsymbol{Z}^0_\varepsilon)$, which is thus non-empty. We check hereafter the hypotheses of Theorem \ref{annexeA}.\ref{ciarl}. Indeed \begin{enumerate} \item for $\varepsilon >0$ and $n \in \mathbb{N}^{\ast}$ fixed, $\boldsymbol{q} \mapsto E_{n,\varepsilon}(\boldsymbol{q})$ is continuous, coercive and strictly convex.
Indeed, this holds since a sum of continuous (respectively coercive, strictly convex) functions is continuous (respectively coercive, strictly convex). Let us mention that this ensures the existence and uniqueness of $\boldsymbol{Z}^{n}_{\varepsilon,\delta}$, the solution of \eqref{penalise}. \item {Let us define $\boldsymbol{K}(\boldsymbol{p}):=\{\boldsymbol{q} \in \RR^{2N_p}\; : \; \varphi_{ij}(\boldsymbol{p},\boldsymbol{q})\leq 0,\; i<j\}$, where $\varphi_{ij}(\boldsymbol{p},\boldsymbol{q}):=-D_{ij}(\boldsymbol{p})-\boldsymbol{G}_{ij}(\boldsymbol{p})\cdot(\boldsymbol{q}-\boldsymbol{p})$. Assume that $\boldsymbol{p}\in\RR^{2N_p}$ is s.t. $D_{ij}(\boldsymbol{p})\geq 0$ for all $i<j$. Then we claim that $\boldsymbol{K}(\boldsymbol{p})$ is a closed, convex and non-empty set. Indeed, $\varphi_{ij}(\boldsymbol{p},\boldsymbol{p}) = -D_{ij}(\boldsymbol{p}) \leq 0$, so that $\boldsymbol{p} \in \boldsymbol{K}(\boldsymbol{p})$, which implies that it is non-empty. Since each map $\bo{q} \mapsto \varphi_{ij}(\bo{p},\bo{q})$ is affine, hence convex, $\bo{K}(\bo{p})$ is convex as a finite intersection of convex sets. It is closed as a finite intersection of closed sets: indeed \begin{equation*} \boldsymbol{K}(\boldsymbol{p}) = \bigcap_{i<j} (\varphi_{ij}(\boldsymbol{p},\cdot))^{-1}((-\infty, 0]), \end{equation*} so that, since the maps $\boldsymbol{q} \mapsto \varphi_{ij}(\boldsymbol{p},\boldsymbol{q})$ are continuous and $(-\infty, 0]$ is a closed interval, $\boldsymbol{K}(\boldsymbol{p})$ is closed as an intersection of preimages of closed sets under continuous maps.
In particular, $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is a closed, convex and non-empty set, since $\boldsymbol{Z}^{n-1}_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$.} \item The map $\psi^{n,\varepsilon}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}$ defined by \begin{equation*} \psi^{n,\varepsilon}(\boldsymbol{q}): = \dfrac{1}{2}\sum_{i<j} \max\left( \varphi^{n, \varepsilon}_{ij}(\boldsymbol{q}),0 \right)^{2}, \end{equation*} satisfies \eqref{eq.equiv.U.Phi}: it is continuous, convex and verifies \begin{equation*} \psi^{n,\varepsilon}(\boldsymbol{q}) \geq 0 \text{ for every } \boldsymbol{q} \in \mathbb{R}^{2N_{p}} \text{ and } \psi^{n,\varepsilon}(\boldsymbol{q}) = 0 \iff \boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} We prove first the continuity. Indeed, for any $n \in \mathbb{N}$ and $\varepsilon > 0$ fixed, the maps $f^{n,\varepsilon}_{ij} := \max(\cdot, 0)^{2} \circ \varphi^{n,\varepsilon}_{ij}, \; i <j$, are continuous as compositions of continuous functions, so that $\psi^{n,\varepsilon} := \frac{1}{2}\sum_{i<j}f^{n,\varepsilon}_{ij}$ is continuous. For the convexity we use properties of compositions and sums of convex functions. Indeed, the functions $f^{n,\varepsilon}_{ij}$ are convex as compositions of the convex and nondecreasing function $\max(\cdot,0)^{2}$ with the affine functions $\varphi^{n,\varepsilon}_{ij}$, so that $\psi^{n,\varepsilon}$ is convex as a sum of convex functions. Furthermore, by definition $\psi^{n,\varepsilon}(\boldsymbol{q}) \geq 0, \forall \bo{q} \in \mathbb{R}^{2N_{p}}$ and $\psi^{n,\varepsilon}(\boldsymbol{q}) = 0 \iff \bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. Indeed \begin{equation*} \sum_{i<j}f^{n,\varepsilon}_{ij}(\boldsymbol{q}) = 0 \implies \max\left(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}), 0\right) = 0, \; \forall i < j \implies \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0,\quad \forall i<j.
\end{equation*} Conversely, let $\boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$; we have \begin{equation*} \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \leq 0, \; \forall i<j \implies \max(\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}), 0)^{2} = 0 , \; \forall i<j \implies \sum_{i<j} f^{n,\varepsilon}_{ij}(\bo{q}) = 0. \end{equation*} This shows the claim. \end{enumerate} Having fulfilled all the hypotheses of Theorem \ref{annexeA}.\ref{ciarl}, we have that the solution $\boldsymbol{Z}^{n}_{\varepsilon}$ of \eqref{contranint} exists as the limit of $\boldsymbol{Z}^{n}_{\varepsilon, \delta}$, the unique solution of \eqref{penalise}, when $\delta$ goes to $0$. Since $\boldsymbol{Z}^n_{\varepsilon}$ satisfies the constraint $\boldsymbol{Z}^n_{\varepsilon} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, the proof extends to every $n \in \NN^*$ by induction. \end{proof} \subsection{The constrained problem in terms of a primal-dual problem} We aim at proving that there exists a (in general non-unique) dual variable, called the Lagrange multiplier, such that the \textit{primal} problem \eqref{contranint} (whose variable $\boldsymbol{Z}^{n}_{\varepsilon}$ is called the primal variable) is equivalent to a problem involving both primal and dual variables: the \textit{primal-dual} problem. \begin{Def}(Feasible direction) Let $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ be a feasible configuration and let $\bo{w} \in \mathbb{R}^{2N_{p}}$. We say that $\bo{w}$ is a feasible direction if and only if there exists $\eta > 0$ such that for any $0 < s \leq \eta$ we have $\bo{q} + s\bo{w} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$.\\ In other words, $\bo{w}$ is a feasible direction if, starting from $\bo{q}$, one can move any step of size at most $\eta$ along $\bo{w}$ while staying in $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$.
Figure \ref{direction_memoire} shows the possible directions for $\boldsymbol{q}$ strictly interior to the domain on the one hand, and for $\boldsymbol{q}$ on the boundary of the domain on the other hand. \end{Def} Let $\bo{q}, \tilde{\bo{q}} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ be such that $\bo{q} \neq \tilde{\bo{q}}$. Since $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is convex, we have $[\bo{q},\tilde{\bo{q}}] \subset \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ and $\bo{w} = \tilde{\bo{q}} - \bo{q}$ is a feasible direction. \begin{figure}[!ht] \centering \begin{tikzpicture}[scale=0.75,x=1mm,y=1mm] \path[draw,fill=white] (8,8) circle (28); \path[draw,fill=lightgray](8,8)circle(17); \draw [dashed] (13,15) circle (7); \draw [red] [thick,->] (13,15) -- (17.25,20.25) node[pos = 0.5, above, sloped]{$\boldsymbol{w}$}; \draw (13,15) circle(0.4) node[left]{$\boldsymbol{q}$}; \draw [thick,->] (-20,-17) -- (-0,-2) node[pos=-0.4, left, above]{$\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$}; \draw (-13,21) node[above, right, rotate=30]{$\varphi^{n,\varepsilon}_{ij} > 0$}; \end{tikzpicture} \hfill \vline \hfill \begin{tikzpicture}[scale=0.75,x=1mm,y=1mm] \path[draw,fill=white] (8,8)circle(28); \path[draw,fill=lightgray](8,8)circle(17); \draw [red] [thick,->] (19.8,19.8) -- (21,13) node[pos = 1.1, below, below]{$\boldsymbol{w}$}; \draw [blue] [thick,->] (19.8,19.8) -- (5,5) node[pos=0.65, left, above, sloped]{$-\nabla \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q})$}; \draw (19.8,19.8) circle(0.5) node[left]{$\boldsymbol{q}$}; \draw (-13,21) node[above, right, rotate=30]{$\varphi^{n,\varepsilon}_{ij} > 0$}; \draw [thick,->] (38,-15) -- (18,-1) node[pos=-0.4, left, above]{$\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$}; \end{tikzpicture} \caption{Feasible directions for $\boldsymbol{q}$ strictly interior to $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ (left) vs.
$\bo{q}$ on the boundary (right).} \label{direction_memoire} \end{figure} \begin{Def}\cite{Allairel05}\label{feasible_directions_memoire} Let $\boldsymbol{q} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$; for any fixed $\varepsilon > 0$ we define the cone of feasible directions at $\boldsymbol{q}$ by \begin{equation*} \boldsymbol{C}(\boldsymbol{q}) = \left\{ \boldsymbol{w}\in \mathbb{R}^{2N_{p}}, \, \exists \boldsymbol{q}^{r} \in \left(\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})\right)^{\mathbb{N}}, \exists \, \delta^{r} \in (\mathbb{R}_{+}^{\ast})^{\mathbb{N}}, \boldsymbol{q}^{r} \to \boldsymbol{q},\, \delta^{r} \to 0 \text{ and } \lim_{r \to \infty} \dfrac{\boldsymbol{q}^{r} - \boldsymbol{q}}{\delta^{r}} = \boldsymbol{w} \right\}. \end{equation*} \end{Def} \begin{Rmk}\label{rmks-cone} $\boldsymbol{C}(\boldsymbol{q})$ is a cone in the sense that $\boldsymbol{0} \in \boldsymbol{C}(\boldsymbol{q})$ (take $\boldsymbol{q}^{r} = \boldsymbol{q}$ for any $r$) and if $\boldsymbol{w} \in \boldsymbol{C}(\boldsymbol{q})$ we have that $\lambda \boldsymbol{w} \in \boldsymbol{C}(\boldsymbol{q})$ for any $\lambda > 0$. Moreover we have the following: \begin{itemize} \item If $\boldsymbol{q}$ is strictly interior to the domain $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, we have that $\boldsymbol{C}(\boldsymbol{q})= \mathbb{R}^{2N_{p}}$. It suffices to take $\boldsymbol{q}^{r} = \boldsymbol{q} + \dfrac{1}{r}\boldsymbol{w}$ for any $\boldsymbol{w} \in \mathbb{R}^{2N_{p}}$ and $r$ large enough (see the left-hand side of Figure \ref{direction_memoire}). \item Since $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is convex, $\boldsymbol{C}(\boldsymbol{q}) = \left\{\boldsymbol{w} - \boldsymbol{q} \text{ for all } \boldsymbol{w} \in \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \right\}$. It suffices to take $\boldsymbol{q}^{r} = \boldsymbol{q} + \dfrac{1}{r}(\boldsymbol{w} - \boldsymbol{q})$ for all $r$.
\end{itemize} \end{Rmk} For any $\boldsymbol{q} \in \boldsymbol{K} (\boldsymbol{Z}^{n-1}_{\varepsilon})$, the cone $\bo{C}(\bo{q})$ of Definition \ref{feasible_directions_memoire} can be seen as the set of all vectors which are tangent at $\boldsymbol{q}$ to a curve lying in $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ and passing through $\boldsymbol{q}$. More precisely, $\bo{C}(\bo{q})$ is the set of all possible directions of variation from $\bo{q}$ which guarantee that one stays in $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. The main issue here is that a closed form of $\boldsymbol{C}(\boldsymbol{q})$ is not always available. Nevertheless, in some specific cases, under the so-called \textit{qualification conditions}, one may obtain an explicit form of $\boldsymbol{C}(\boldsymbol{q})$.\\ For any $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$, we have that: \begin{itemize} \item if $\varphi_{ij}^{n,\varepsilon}(\boldsymbol{q}) < 0$, then for any direction $\boldsymbol{w} \in \mathbb{R}^{2N_{p}}$ and $\eta > 0$ small enough, we have that $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \boldsymbol{w}) \leq 0$ (see Figure \ref{direction_memoire}, left). We say that the constraint $ij$ is \textit{nonactive}. \item If $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q})=0$, we want the direction $\boldsymbol{w}$ to satisfy the condition $\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \boldsymbol{w}) \leq 0$ for $i<j$, in order to ensure that all the constraints are satisfied for $\boldsymbol{q} + \eta \boldsymbol{w}$ (see Figure \ref{direction_memoire}, right).
Such conditions are called \textit{qualification conditions}.\\ Since the functions $\varphi^{n,\varepsilon}_{ij}$ are affine, for any $\bo{w} \in \mathbb{R}^{2N_{p}}$ and $\eta > 0$ we have \begin{equation*} \varphi^{n,\varepsilon}_{ij}(\bo{q}) = 0 \implies \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q} + \eta \bo{w}) = - \eta \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot \bo{w}, \quad \forall i<j. \end{equation*} Thus if there exists a direction $\overline{\bo{w}} \in \mathbb{R}^{2N_{p}}$ such that $\varphi^{n,\varepsilon}_{ij}(\bo{q} + \eta \overline{\boldsymbol{w}}) \leq 0$, we necessarily have $\boldsymbol{G}_{ij}(\boldsymbol{Z}^{n-1}_{\varepsilon})\cdot \overline{\bo{w}} \geq 0$. Such a direction exists: it suffices to take $\overline{\bo{w}} = \bo{0}$. We say that the constraints \eqref{constSet} are qualified at $\bo{q}$. \end{itemize} \begin{Rmk} Note that $\bo{q}$ above is chosen arbitrarily. Moreover $\boldsymbol{Z}^{n}_{\varepsilon}$ belongs to $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ for any time step, so that the constraints \eqref{constSet} are qualified at $\boldsymbol{Z}^{n}_{\varepsilon}$. \end{Rmk} \begin{Def}\cite{Allairel05}\label{qualified_memoire} Let $\bo{q} \in \boldsymbol{K}(\textbf{Z}^{n-1}_{\varepsilon})$; we define the set of active constraints by \begin{equation*} Ind(\bo{q}) := \left\{1\leq i<j \leq N_{p} : \varphi^{n,\varepsilon}_{ij}(\bo{q})=0 \right\}. \end{equation*} $Ind(\boldsymbol{q})$ is also called the set of saturated constraints. \end{Def} \begin{Rmk} Let $\bo{q} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon})$. We have that \begin{equation}\label{cone_dir_adm_memoire} \boldsymbol{C}(\boldsymbol{q}) = \left\{ \boldsymbol{w} \in \mathbb{R}^{2N_{p}}: \, \boldsymbol{G}_{ij}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \cdot \boldsymbol{w} \geq 0, \; \forall (i,j) \in Ind(\boldsymbol{q}) \right\}.
\end{equation} \end{Rmk} \begin{Def}\cite{Ciarlet89} Let $V$ and $M$ be two sets and consider $L: V \times M \longrightarrow \mathbb{R}$.\\ A couple $(u,\lambda) \in V\times M$ is called a saddle point of $L$ if $u$ is the minimum of $L(\cdot, \lambda): v \in V \longmapsto L(v,\lambda) \in \mathbb{R}$ and $\lambda$ is the maximum of $L(u,\cdot): \mu \in M \longmapsto L(u,\mu) \in \mathbb{R}$. In other words, $(u, \lambda)$ is a saddle point of $L$ if it satisfies \begin{equation*} \sup_{\mu\, \in \, M} L(u,\mu) = L(u,\lambda) = \inf_{v \, \in \, V} L(v,\lambda). \end{equation*} \end{Def} From now on, $V:=\mathbb{R}^{2N_{p}}$ and $M:=(\mathbb{R}_{+})^{N_{c}}$, where $N_{c} := N_{p}(N_{p} - 1)/2$ is the maximal number of contacts. We introduce the Euler-Lagrange equations associated with \eqref{contranint} and investigate the existence of optimal points. To this end, for $\boldsymbol{\mu} = (\mu_{ij})_{i<j}$, we define the Lagrangian $L: \mathbb{R}^{2N_{p}}\times \mathbb{R}^{N_{c}}_{+} \longrightarrow \mathbb{R}$ by \begin{equation}\label{Lag-op_memoire} L(\boldsymbol{q}, \boldsymbol{\mu}) = \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} \left| q_{i}-Z^{n-l}_{\varepsilon,i}\right|^{2} R_{l,i} + F(\boldsymbol{q}) +\sum_{i<j}\mu_{ij}\varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}). \end{equation} Since for all $n$ the mappings $E_{n}$ and $\varphi^{n,\varepsilon}_{ij}$, $i<j$, are convex, continuous in $\mathbb{R}^{2N_{p}}$ and differentiable in $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$, and the constraints are qualified at $\boldsymbol{Z}^{n}_{\varepsilon}$, the KKT theorem (cf.
Theorem \ref{annexeA}.\ref{kkt_cond}) guarantees that \eqref{contranint} is equivalent to the existence of $\boldsymbol{\lambda}^{n}_{\varepsilon} = (\lambda^{n,\varepsilon}_{ij})_{i<j} \in \left( \mathbb{R}_{+}\right)^{N_{c}}$ such that $(\boldsymbol{Z}^{n}_{\varepsilon}, \boldsymbol{\lambda}_{\varepsilon}^{n})$ is a saddle point of the Lagrangian \eqref{Lag-op_memoire} in $\mathbb{R}^{2N_{p}}\times \mathbb{R}^{N_{c}}_{+}$. This can be rephrased as follows: $\boldsymbol{Z}^{n}_{\varepsilon}$ is a solution of \eqref{contranint} if and only if there exists $\boldsymbol{\lambda}^{n}_{\varepsilon} = \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon})$ such that \begin{equation}\label{KKTconditions_memoire} \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \boldsymbol{0},\; \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) \geq \boldsymbol{0}, \; \boldsymbol{\lambda}^{n}_{\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon})\cdot \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) = 0; \, \boldsymbol{E}^{'}_{n}(\boldsymbol{Z}^{n}_{\varepsilon}) + \sum_{i<j} \lambda^{n,\varepsilon}_{ij}(\boldsymbol{Z}^{n}_{\varepsilon}) (\varphi^{n,\varepsilon}_{ij})^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where $\boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{q}) := \left( \varphi^{n,\varepsilon}_{ij}(\boldsymbol{q}) \right)_{i<j}: \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}^{N_{c}}$ is the vectorized form of the constraint functions.
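The complementarity structure of \eqref{KKTconditions_memoire} can be illustrated on a toy instance with a single affine constraint, for which the saddle point is available in closed form. The following is a minimal numerical sketch: the quadratic energy and the data \texttt{z}, \texttt{g}, \texttt{c} are illustrative and are not the model's actual operators.

```python
import numpy as np

# Toy instance of the KKT conditions: minimise the strictly convex
# energy E(q) = 0.5*|q - z|^2 under one affine constraint
# phi(q) = c - g.q <= 0.  The data z, g, c are illustrative; with a
# single affine constraint the saddle point (q, lam) is available in
# closed form.
z = np.array([0.0, 0.0])   # unconstrained minimiser of E
g = np.array([1.0, 1.0])   # phi'(q) = -g
c = 1.0

phi = lambda q: c - g @ q

if phi(z) <= 0:            # constraint nonactive: lam = 0, q = z
    lam, q = 0.0, z
else:                      # active: solve E'(q) + lam*phi'(q) = 0 on phi = 0
    lam = phi(z) / (g @ g)
    q = z + lam * g

# The four KKT conditions, checked numerically.
assert phi(q) <= 1e-12                    # primal feasibility
assert lam >= 0.0                         # dual feasibility
assert abs(lam * phi(q)) <= 1e-12         # complementary slackness
assert np.allclose((q - z) - lam * g, 0)  # stationarity
print(q, lam)                             # -> [0.5 0.5] 0.5
```

With these data the constraint is active and the multiplier $\lambda = 1/2$ plays the role that the contact pressures $\lambda^{n,\varepsilon}_{ij}$ play in \eqref{KKTconditions_memoire}.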
\subsection{Energy estimates and compactness criterion} \begin{Prop}\label{estimation_energie} Under assumptions \ref{Assump}, if $(\bo{R}_{l})_{l \in \mathbb{N}}$ and $(\bo{Z}^{n}_{\varepsilon})_{n=1,2,\cdots,N}$ are defined as above, there exists a constant $K_{0}$ independent of both $\varepsilon$ and $\Delta a$ such that \begin{equation}\label{energy-estimate-memoire} \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|Z^{n}_{\varepsilon,i} -Z^{n-l}_{\varepsilon,i}\right|^{2}R_{l,i} + \Delta t\sum_{m=1}^{n} D^{m}_{\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq K_{0} + F(\boldsymbol{Z}^{0}_{p}), \end{equation} where the dissipation term reads \begin{equation*} D^{n}_{\varepsilon} := \dfrac{\Delta a}{2} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} |U^{n-1}_{l,\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i}, \text{ and } U^{n}_{l,\varepsilon,i} :=\dfrac{1}{\varepsilon}( Z^{n}_{\varepsilon,i}-Z^{n-l}_{\varepsilon,i}), \quad \forall i=1,\cdots,N_{p},\; l \in \mathbb{N}^{\ast}. \end{equation*} \end{Prop} \begin{proof} By definition of the minimization process \begin{eqnarray*} E_{n,\varepsilon}(\boldsymbol{Z}^{n}_{\varepsilon}) & \leq & E_{n,\varepsilon}(\boldsymbol{Z}^{n-1}_{\varepsilon}) = \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=2}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{eqnarray*} so that by a change of index, \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-1-l}_{\varepsilon,i}|^{2}R_{l+1,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{equation*} where we have set \begin{equation*} I_{n,\varepsilon} := \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i}.
\end{equation*} Since $R_{l,i}$ solves \eqref{contRho}, we have that \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) + \dfrac{\Delta a}{2\varepsilon} \dfrac{\Delta t}{\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-1-l}_{\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \leq I_{n-1,\varepsilon} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}), \end{equation*} so that by induction over $n$ \begin{equation*} I_{n,\varepsilon} + F(\boldsymbol{Z}^{n}_{\varepsilon}) + \dfrac{\Delta a}{2\varepsilon} \dfrac{\Delta t}{\varepsilon} \sum_{m=1}^{n} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|Z^{n-1}_{\varepsilon,i} - Z^{n-1-l}_{\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \leq I_{0,p} + F(\boldsymbol{Z}^{0}_{p}). \end{equation*} Now we need to find an upper bound for $I_{0,p}$. Indeed for any $i \in \{1,2,\cdots,N_{p}\}$ fixed, \begin{equation*} \left|Z^{0}_{\varepsilon,i} - Z^{-l}_{\varepsilon,i}\right| \leq \varepsilon \Delta a C_{z_{p,i}} l, \end{equation*} so that \begin{equation*} I_{0,p} := \dfrac{\Delta a}{2\varepsilon}\sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}\left| Z^{0}_{\varepsilon,i} - Z^{-l}_{\varepsilon,i} \right|^{2}R_{l,i} \leq \dfrac{\varepsilon}{2} \sum_{i=1}^{N_{p}}C_{z_{p,i}}^{2} \mu_{2,i}. \end{equation*} It then follows that \begin{equation*} I_{n,\varepsilon} + \Delta t\sum_{m=1}^{n}D^{m}_{\varepsilon } + F(\boldsymbol{Z}^{n}_{\varepsilon}) \leq \underbrace{ \dfrac{\varepsilon}{2}\sum_{i=1}^{N_{p}}C^{2}_{z_{p,i}}\mu_{2,i}}_{:=K_{0}} + F(\boldsymbol{Z}^{0}_{p}), \end{equation*} which is the claim. \end{proof} \begin{Lemma}\label{boundness} Under the same hypotheses as in Proposition \ref{estimation_energie}, the sequence $(\bo{Z}^{n}_{\varepsilon})_{n \in \mathbb{N}}$ is bounded. \end{Lemma} \begin{proof} Assume that there exists a subsequence $(\bo{Z}^{n_{k}}_{\varepsilon})_{k \in \mathbb{N}}$ such that $|\bo{Z}^{n_{k}}_{\varepsilon}| \underset{k \to \infty}{\longrightarrow} \infty$. 
Since $F$ is coercive, for all $M > 0$ there exists $k_{0} \in \mathbb{N}$ such that for all $k > k_{0}$, $F(\bo{Z}^{n_{k}}_{\varepsilon}) > M$, which contradicts the fact that $F(\bo{Z}^{n}_{\varepsilon}) \leq K_{0} + F(\bo{Z}^{0}_{p})$. This proves that any subsequence $(\bo{Z}^{n_{k}}_{\varepsilon})_{k}$ is bounded; thus $(\bo{Z}^{n}_{\varepsilon})_{n}$ is bounded. \end{proof} \begin{Theo}$($Compactness$)$ \label{theo_compactness} Under assumptions \ref{Assump} (i)--(iii), there exists a constant $C > 0$, depending only on $\overline{\mu}_{2}, \underline{\mu_{0}}, \overline{\mu_{0}}, \overline{\zeta}$, such that \begin{equation}\label{compactness} \Delta t \sum_{n=1}^{N}\sum_{i=1}^{N_{p}} \left| \dfrac{Z^{n}_{\varepsilon,i}-Z^{n-1}_{\varepsilon,i}}{\Delta t} \right|^{2} \leq C. \end{equation} \end{Theo} \noindent Before performing the proof, we set the following notation: $\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}:= \boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{n-1}_{\varepsilon}, \quad \delta \boldsymbol{\mathcal{L}}^{n-\frac{1}{2}}_{\varepsilon}:= \boldsymbol{\mathcal{L}}^{n}_{\varepsilon} - \boldsymbol{\mathcal{L}}^{n-1}_{\varepsilon}$, where the discrete delay operator is $\boldsymbol{\mathcal{L}}^{n}_{\varepsilon} = (\mathcal{L}^{n}_{\varepsilon,i})_{i} \text{ and } \mathcal{L}^{n}_{\varepsilon,i} = \dfrac{\Delta a}{\varepsilon} \sum_{l=1}^{\infty} (Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i})R_{l,i}, \quad \forall i \in \{1,\dots,N_p\}. $ \begin{proof} First we easily check that the global elongation variable solves \begin{equation*} \varepsilon \dfrac{\textbf{U}^{n}_{\varepsilon,l} - \textbf{U}^{n-1}_{\varepsilon,l}}{\Delta t} + \dfrac{\textbf{U}^{n-1}_{\varepsilon,l} - \textbf{U}^{n-1}_{\varepsilon,l-1} }{\Delta a} = \dfrac{\textbf{Z}^{n}_{\varepsilon} -\textbf{Z}^{n-1}_{\varepsilon}}{\Delta t}.
\end{equation*} By multiplying this equation (taken component-wise) by $R_{l,i}$ and summing over the index $l \in \mathbb{N}^{\ast}$, we have \begin{equation}\label{T} \dfrac{\varepsilon}{\Delta t} \delta \mathcal{L}^{n-\frac{1}{2}}_{\varepsilon,i} + \sum_{l=1}^{\infty} \big({U}^{n-1}_{\varepsilon,l,i}-{U}^{n-1}_{\varepsilon,l-1,i}\big) R_{l,i} = \dfrac{1}{\Delta t}\underbrace{\left(\Delta a \sum_{l=1}^{\infty} R_{l,i} \right)}_{=:\theta_{\Delta,i} } \delta{Z}^{n-\frac{1}{2}}_{\varepsilon,i}, \quad i=1,\cdots, N_{p}. \end{equation} Moreover, since $R_{l,i}$ solves \eqref{discreteRho}, we have that \begin{eqnarray*} \sum_{l= 1}^{\infty} \big({U}^{n-1}_{\varepsilon,l,i} - {U}^{n-1}_{\varepsilon,l-1,i}\big) R_{l,i} & = & \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l,i}-\sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l-1,i}R_{l,i} = \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l,i} - \sum_{l=0}^{\infty}U^{n-1}_{\varepsilon,l,i} R_{l+1,i} \\ & = & \Delta a \sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l,i} \zeta_{l+1,i} R_{l+1,i}, \quad i=1,\cdots,N_{p}, \end{eqnarray*} which plugged into \eqref{T} gives \begin{equation*} \dfrac{\varepsilon}{\Delta t} \delta \mathcal{L}^{n-\frac{1}{2}}_{\varepsilon,i} + \Delta a \sum_{l=1}^{\infty}{U}^{n-1}_{\varepsilon,l,i}\zeta_{l+1,i}R_{l+1,i} = \theta_{\Delta,i}\dfrac{\delta Z^{n-\frac{1}{2}}_{\varepsilon,i}}{\Delta t}, \quad i =1,\cdots,N_{p}.
\end{equation*} On the other hand, setting \begin{equation*} H^{n}_{\varepsilon,i}:= \sum_{k<j}\lambda^{n,\varepsilon}_{kj}(\varphi^{n,\varepsilon}_{kj})_{i}^{'}(\bo{Z}^{n}_{\varepsilon}) \end{equation*} the $i$th component of the non-penetration velocity, we have by the optimality conditions \eqref{KKTconditions_memoire} that \begin{equation}\label{Africa} \theta_{\Delta,i}\dfrac{\delta Z^{n-\frac{1}{2}}_{\varepsilon,i}}{\Delta t} + \dfrac{\varepsilon}{\Delta t} (H^{n}_{\varepsilon,i}-H^{n-1}_{\varepsilon, i})= \Delta a \sum_{l=1}^{\infty}U^{n-1}_{\varepsilon, l,i}\zeta_{l+1,i}R_{l+1,i}- \dfrac{\varepsilon}{\Delta t}\left[F_{i}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) - F_{i}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon})\right],\quad \forall i. \end{equation} Since the mappings $\left( \boldsymbol{\varphi}^{n,\varepsilon}_{kj}\right)_{k<j}$ are convex and differentiable, using Proposition 10.1.4 \cite{Allairel05} we have \begin{equation*} (\varphi^{n,\varepsilon}_{kj})^{'}(\bo{Z}^{n-1}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon} \leq \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n}_{\varepsilon}) - \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n-1}_{\varepsilon}) \leq (\varphi^{n,\varepsilon}_{kj})^{'}(\bo{Z}^{n}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon}. \end{equation*} Moreover since for any time step, $\sum_{k<j} \lambda^{n,\varepsilon}_{kj}\varphi^{n,\varepsilon}_{kj}(\boldsymbol{Z}^{n}_{\varepsilon})=0$ with $ \varphi^{n,\varepsilon}_{kj}(\boldsymbol{q}) \leq 0$ and $\lambda^{n,\varepsilon}_{kj}\geq 0$, for any $k < j$, \begin{equation*} 0 \leq - \sum_{k<j}\left\{\lambda^{n,\varepsilon}_{kj} \varphi^{n,\varepsilon}_{kj}(\bo{Z}^{n-1}_{\varepsilon}) + \lambda^{n-1,\varepsilon}_{kj} \varphi^{n-1,\varepsilon}_{kj}(\bo{Z}^{n}_{\varepsilon}) \right\} \leq (\bo{H}^{n}_{\varepsilon} - \bo{H}^{n-1}_{\varepsilon})\cdot \delta \bo{Z}^{n-\frac{1}{2}}_{\varepsilon}. 
\end{equation*} We multiply \eqref{Africa} by $\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}$ in order to obtain \begin{equation}\label{cp} \underline{\theta} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \left( \boldsymbol{S}^{n}_{\varepsilon} - \dfrac{\varepsilon}{\Delta t}(\boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon})-\boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon}))\right) \cdot \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}, \end{equation} where $\underline{\theta}:= \min_{i}\theta_{\Delta,i}$ and $S^{n}_{\varepsilon, i}:= \Delta a \sum_{l=1}^{\infty} U^{n-1}_{\varepsilon,l,i}\zeta_{l+1,i}R_{l+1,i}$, for all $i$. As $F$ is strictly convex we have $\left(\boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) - \boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon}) \right)\cdot (\boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{n-1}_{\varepsilon}) > 0$, so that \begin{equation*} \underline{\theta} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \boldsymbol{S}^{n}_{\varepsilon}\cdot \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon} \leq \dfrac{\Delta t}{\gamma} \left|\boldsymbol{S}^{n}_{\varepsilon}\right|^{2} + \dfrac{\gamma}{\Delta t} \left|\delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}, \quad \forall \gamma > 0, \end{equation*} where we have used Young's inequality. It follows that \begin{equation*} (\underline{\theta} - \gamma)\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{\Delta t}{\gamma} \left|\boldsymbol{S}^{n}_{\varepsilon}\right|^{2}, \quad \forall \gamma > 0.
\end{equation*} Moreover \begin{equation*} |\boldsymbol{S}^{n}_{\varepsilon}|^{2} = \sum_{i=1}^{N_{p}} \Delta a^{2}\left|\sum_{l=1}^{\infty} U^{n-1}_{l,\varepsilon,i} R_{l+1,i} \zeta_{l+1,i}\right|^{2} \\ \leq \underbrace{2 \Delta a \overline{\zeta}\, \overline{R}}_{:=K_{1}} \left( \dfrac{\Delta a}{2} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty}|U^{n-1}_{l,\varepsilon,i}|^{2}R_{l+1,i}\zeta_{l+1,i} \right) \leq K_{1}D^{n}_{\varepsilon}, \end{equation*} where the first inequality follows from Jensen's inequality. It follows that \begin{equation*} (\underline{\theta} - \gamma)\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma} \Delta t D^{n}_{\varepsilon}, \quad \forall n=1,2,\cdots,N. \end{equation*} Summing over $n$ in the latter inequality gives \begin{equation*} (\underline{\theta} -\gamma)\sum_{n=1}^{N} \dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma } \left(\Delta t \sum_{n=1}^{N} D^{n}_{\varepsilon}\right), \quad \forall \gamma > 0, \end{equation*} which by the energy estimate \eqref{energy-estimate-memoire} gives \begin{equation*}\label{L2} (\underline{\theta} - \gamma)\sum_{n=1}^{N}\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq \dfrac{K_{1}}{\gamma}K_{0} + \dfrac{K_{1}}{\gamma}\left( F(\boldsymbol{Z}^{0}_{p}) - F(\boldsymbol{Z}^{N}_{\varepsilon}) \right), \quad \forall \gamma > 0. \end{equation*} By Lemma \ref{boundness}, there exist two constants $K_{2}$ and $K_{3}$, independent of $\varepsilon$ and $\Delta t$, such that \begin{equation*} K_{2} := \dfrac{K_{1}}{\gamma}K_{0} \; \text{ and } K_{3} \geq \dfrac{K_{1}}{\gamma}\left( F(\boldsymbol{Z}^{0}_{p}) - F(\boldsymbol{Z}^{N}_{\varepsilon})\right), \end{equation*} so that \begin{equation*} (\underline{\theta} - \gamma)\sum_{n=1}^{N}\dfrac{\left| \delta \boldsymbol{Z}^{n-\frac{1}{2}}_{\varepsilon}\right|^{2}}{\Delta t} \leq K_{2} + K_{3}, \quad \forall \gamma > 0.
\end{equation*} Hence, choosing $\gamma \in (0, \underline{\theta})$, the constant $C := \frac{K_{2} + K_{3}}{\underline{\theta} - \gamma}$ is such that \eqref{compactness} holds. This gives a bound on the discrete time derivative of $\boldsymbol{\tilde{z}}_{\varepsilon,\Delta}$ in $L^{2}((0,T))$ and ends the proof. \end{proof} \subsection{Convergences toward variational inclusions} This part is devoted to the convergence of the discrete model's solution toward the solution of the continuous variational inclusion when $\Delta a$ goes to $0$ and $\varepsilon > 0$ is fixed. Then we let $\varepsilon$ go to $0$ and prove that the resulting limit $\bo{z}_{0}$ solves a weighted differential inclusion. To this end, we prove that the constrained minimization problem is equivalent to a variational inclusion (by the use of projections onto closed, nonempty and convex sets) in order to deal with the convergence of the discrete problem to the continuous one when $\Delta a$ is small enough.\\ We mention that the set of admissible configurations is not convex (see Figure \ref{lack_convexity}), so that the projection onto $\boldsymbol{Q}_{0}$ is not well defined in general. Nevertheless, as shown in \cite[Proposition 3.12 p.51]{venel08}, there exists $\eta > 0$ such that $P_{\boldsymbol{Q}_{0}}\boldsymbol{q}$ is well defined for $\boldsymbol{q} \in \mathbb{R}^{2N_{p}}$ satisfying $\mathrm{dist}(\boldsymbol{Q}_{0},\boldsymbol{q}) < \eta$. We say that $\boldsymbol{Q}_{0}$ is $\eta$-\textit{prox-regular} or uniformly \textit{prox-regular}; see Appendix \ref{annexeA} or \cite{venel08} for more details.
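The lack of convexity of $\bo{Q}_{0}$ can also be checked numerically on two-disc configurations of the kind shown in Figure \ref{lack_convexity}: two admissible configurations whose midpoint makes the discs overlap. The coordinates below (discs of radius $1/2$) are illustrative only; this is a minimal sketch, not part of the model.

```python
import numpy as np

# Two discs of radius r are non-overlapping iff the distance between
# their centres is at least 2r.  Here q stacks the discs vertically and
# qt places them side by side; both are admissible, but their midpoint
# qb is not, so the set Q_0 of admissible configurations is not convex.
r = 0.5
admissible = lambda q1, q2: np.linalg.norm(q2 - q1) >= 2 * r

q  = (np.array([0.0, 0.0]), np.array([0.0, 1.0]))   # vertical stack
qt = (np.array([0.0, 0.0]), np.array([1.0, 0.0]))   # side by side
qb = tuple(0.5 * (a + b) for a, b in zip(q, qt))    # midpoint of q and qt

assert admissible(*q) and admissible(*qt)   # endpoints lie in Q_0
assert not admissible(*qb)                  # midpoint leaves Q_0:
# |qb[1] - qb[0]| = sqrt(0.5) ~ 0.707 < 1 = 2r, i.e. the discs overlap.
```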
\begin{figure}[ht] \begin{center}\scalebox{.85}{ \begin{tikzpicture} \draw[thick,->] (-1.,0) -- (1.5,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw (0,0) circle (0.5); \draw (0,1) circle (0.5); \draw[ball color=black](-0.5,-0.5) node[below]{$q_{1}$}; \draw[ball color=black](0.75,1) node[below]{$q_{2}$}; \draw[ball color=black](0,-2) node[below]{$\boldsymbol{q}=(q_{1},q_{2})$}; \end{tikzpicture} \quad \begin{tikzpicture} \draw[thick,->] (-1,0) -- (2,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw[ball color=black](-0.5,1) node[below]{$\tilde{q}_{1}$}; \draw[ball color=black](1,1.2) node[below]{$\tilde{q}_{2}$}; \draw (0,0) circle (0.5); \draw (1,0) circle (0.5); \draw[ball color=black](0,-2) node[below]{$\boldsymbol{\tilde{q}} = (\tilde{q}_{1},\tilde{q}_{2} )$}; \end{tikzpicture} \quad \begin{tikzpicture} \draw[thick,->] (-1,0) -- (1.5,0); \draw[thick,->] (0,-0.75) -- (0,1.75); \draw (0,0) circle (0.5); \draw (0.5,0.5) circle (0.5); \draw[ball color=black](-0.6,1) node[below]{$\overline{q}_{1}$}; \draw[ball color=black](0.7,0.8) node[below]{$\overline{q}_{2}$}; \draw[ball color=black](0.5,-2) node[below]{$\boldsymbol{\overline{q}}= \frac{1}{2}(\boldsymbol{q}+\boldsymbol{\tilde{q}})$}; \end{tikzpicture}} \end{center} \caption{Lack of convexity of $\boldsymbol{Q}_{0}$.} \label{lack_convexity} \end{figure} \subsubsection{Expression of the contact model as a variational inclusion} We use the fact that $\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ is convex to write the constrained minimization problem as a projection onto a convex set. \begin{Prop}\label{prop.projection} Suppose that assumption \ref{Assump} (iii) holds.
For any $\varepsilon > 0$, the solution of \eqref{Eq1_discret} also satisfies: \begin{equation}\label{projection} \bo{Z}^{n}_{\varepsilon} = P_{\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon})}\left(\bo{Z}^{n}_{\varepsilon} - \Delta t\boldsymbol{\mathcal{L}}^{n}_{\varepsilon} - \Delta t \boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \right), \quad n=0,\cdots, N-1. \end{equation} \end{Prop} \begin{proof} Since $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is nonempty, closed and convex, and the map $\boldsymbol{q} \mapsto E_{n,\varepsilon}(\boldsymbol{q})$ is differentiable at $\bo{Z}^{n}_{\varepsilon}$, by the Euler inequality (see \cite[Theorem 10.2.1 p. 307]{Allairel05}) we have that \begin{equation*} \langle (\boldsymbol{E}_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon}), \boldsymbol{q}- \boldsymbol{Z}^{n}_{\varepsilon} \rangle \geq 0, \quad \forall \boldsymbol{q} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} Since $\Delta t > 0$, this is equivalent to \begin{equation*} \langle \big(\boldsymbol{Z}^{n}_{\varepsilon}-\Delta t (\boldsymbol{E}_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon})\big) - \boldsymbol{Z}^{n}_{\varepsilon}, \boldsymbol{q} -\boldsymbol{Z}^{n}_{\varepsilon} \rangle \leq 0, \quad \forall\boldsymbol{q} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon}). \end{equation*} The latter inequality is nothing but the characterization of the projection onto $\bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ \cite[Theorem 5.2 p.132]{Haim11}, i.e. \begin{equation*} \boldsymbol{Z}^{n}_{\varepsilon} = P_{\boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})} \left( \boldsymbol{Z}^{n}_{\varepsilon} - \Delta t (E_{n,\varepsilon})^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \right), \end{equation*} which gives the claim.
\end{proof} By definition of the proximal-normal cone (see \eqref{proximal-normal}) for convex sets, \eqref{projection} is equivalent to \begin{equation}\label{normalCone} \boldsymbol{\mathcal{L}}_{\varepsilon}^{n} + \bo{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \in -N\left(\bo{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\right). \end{equation} \begin{Prop}\label{prop4} If assumption \ref{Assump} (iii) holds, then the discrete inclusion \eqref{normalCone} has a unique solution $\boldsymbol{Z}^{n}_{\varepsilon}$. \end{Prop} \begin{proof} The existence and uniqueness of solutions of \eqref{Eq1_discret} is given in Theorem \ref{thm1}; by Proposition \ref{prop.projection}, this solution also satisfies \eqref{projection}, which ends the proof. \end{proof} \subsubsection{Convergence for a fixed $\varepsilon > 0$ when $\Delta a $ goes to 0} Let $\varepsilon > 0$ be fixed. We need to check that the above inclusion is satisfied for the piecewise linear function $\boldsymbol{z}_{\varepsilon,\Delta}$ and then take the limit as $\Delta a$ goes to $0$. Consider the piecewise constant time functions \begin{equation*} \psi_{\Delta}|_{(t^{n-1},t^{n}]} := t^{n-1}, \; \theta_{\Delta}|_{(t^{n-1},t^{n}]} := t^{n}, \text{ and } \psi_{\Delta}(0) = 0,\; \theta_{\Delta}(0) = 0. \end{equation*} \begin{Lemma} Under the same conditions as in Proposition \ref{prop4}, given the sequence $(\boldsymbol{Z}^{n}_{\varepsilon})_{n\in \{0,\cdots,N\}}$, the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ defined in \eqref{eq.linear.interp} satisfies the following inclusion \begin{equation}\label{discre_incl_diff} \boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}(t)+ \textbf{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta}(t)) \in -N\Big(\boldsymbol{K}\left( \bo{\tilde{z}}_{\varepsilon,\Delta}(\psi_{\Delta}(t))\right), \bo{\tilde{z}}_{\varepsilon,\Delta}(\theta_{\Delta}(t))\Big) \text{ a.e.
} t \in [0,T], \end{equation} where $\boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}$ is the linear interpolation of $\boldsymbol{\mathcal{L}}^{n}_{\varepsilon}$. \end{Lemma} \begin{proof} Indeed we have that \begin{equation*} \boldsymbol{\mathcal{L}}^{n}_{\varepsilon} + \boldsymbol{F}^{'}(\boldsymbol{Z}^{n}_{\varepsilon}) \in -N\left(\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon}),\bo{Z}^{n}_{\varepsilon}\right), \, \forall \, n < N. \end{equation*} On the other hand, evaluating the latter inclusion at the two time steps $t^{n}$ and $t^{n-1}$ and using the definitions of $\bo{\tilde{z}}_{\varepsilon,\Delta}$ and $\bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}$, we have that \begin{equation*} \bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta}(t) + \bo{A}_{\varepsilon,\Delta}(t) \in - \dfrac{t-t^{n-1}}{\Delta t} N\left(\bo{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\right) - \big(1 - \dfrac{t-t^{n-1}}{\Delta t} \big) N\left(\bo{K}(\bo{Z}^{n-2}_{\varepsilon}), \bo{Z}^{n-1}_{\varepsilon}\right), \; t \in (t^{n-1},t^{n}), \end{equation*} where $\bo{A}_{\varepsilon,\Delta}(t):= \dfrac{t-t^{n-1}}{\Delta t} \bo{F}^{'}(\bo{Z}^{n}_{\varepsilon}) + \dfrac{t^{n}-t}{\Delta t} \bo{F}^{'}(\bo{Z}^{n-1}_{\varepsilon})$. \end{proof} Let $\varepsilon > 0$ be fixed. We prove that the piecewise constant function \eqref{Eq2} uniformly converges toward the solution of our continuous problem as the subdivision step $\Delta a$ goes to $0$. Moreover the limit function satisfies a variational inclusion. \begin{Lemma}\label{equality}\cite{venel08} Let $\boldsymbol{q} \in \boldsymbol{Q}_{0}$. We have equality between the cones \begin{equation}\label{equal_cones} N(\bo{Q}_{0}, \boldsymbol{q}) = N(\bo{ K}(\boldsymbol{q}), \boldsymbol{q}). \end{equation} Consequently, we shall consider $N\left(\bo{Q}_{0}, \bo{Z}^{n}_{\varepsilon} \right)$ instead of $N\big(\boldsymbol{K}(\bo{Z}^{n-1}_{\varepsilon}), \bo{Z}^{n}_{\varepsilon}\big)$ in what follows.
\end{Lemma} \begin{Theo}\label{thm_conv} Let $\varepsilon >0$ be fixed and $T> 0$. If assumptions \ref{Assump} (i)-(iii) hold, then the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ uniformly converges in $\mathcal{C}\left([0,T];\boldsymbol{Q}_{0} \right)$ when $\Delta a \to 0$. Moreover the limit function denoted by $\textbf{z}_{\varepsilon}$ satisfies \begin{equation}\label{conDiff} \begin{cases} \displaystyle{ \boldsymbol{\mathcal{L}}_ {\varepsilon}[\textbf{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \in -N(\boldsymbol{Q}_{0}, \textbf{z}_{\varepsilon}(t)), \, t > 0}, \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \; t \leq 0, \end{cases} \end{equation} where $\boldsymbol{\mathcal{L}}_{\varepsilon}(t)=\left(\mathcal{L}_{\varepsilon,1}(t),\cdots, \mathcal{L}_{\varepsilon,N_{p}}(t) \right)$ and for any particle $\mathcal{L}_{\varepsilon,i}$ is defined in \eqref{cont-delay-operator}. \end{Theo} \begin{proof} In this proof, we aim at using the Arzelà-Ascoli theorem. To this purpose, we use compactness arguments as in \cite{venel08}. We have the following: \begin{itemize} \item By definition the piecewise linear interpolation $\bo{\tilde{z}}_{\varepsilon,\Delta}$ is equicontinuous on $[0,T]$. \item Moreover by Lemma \ref{boundness}, $\bo{Z}^{n}_{\varepsilon}$ is bounded uniformly with respect to the discretization step $\Delta a$ for any time $t^{n} = n\Delta t$. This implies that $\bo{\tilde{z}}_{\varepsilon,\Delta}$ admits an $L^{\infty}$-bound uniformly with respect to $\Delta a$. \end{itemize} Let $(\Delta_{m})_{m \in \mathbb{N}}$ be a sequence of discretization steps decreasing to $0$.
Thanks to Arzelà-Ascoli's theorem, there exists a subsequence still denoted by $\left(\bo{\tilde{z}}_{\varepsilon, \Delta_{m}}\right)_{m \in \mathbb{N}}$ which uniformly converges to $\bo{z}_{\varepsilon}\in \bo{\mathcal{C}}$.\\ {We prove first that the limit function belongs to $\bo{Q_{0}}$ for all $t \in [0,T]$.} Indeed since \begin{equation*} \bo{\tilde{z}}_{\varepsilon,\Delta}|_{(t^{n-1}, t^{n})} = \left(\frac{t-t^{n-1}}{\Delta t} \right)\bo{Z}^{n}_{\varepsilon} + \left(1 - \frac{t - t^{n-1}}{\Delta t}\right) \bo{Z}^{n-1}_{\varepsilon}, \end{equation*} and $\bo{Z}^{n}_{\varepsilon}, \bo{Z}^{n-1}_{\varepsilon} \in \bo{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})$ which is convex, we have that $\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{K}(\bo{Z}^{n-1}_{\varepsilon}) \subset \bo{Q}_{0}$ for all $n = 1,2,\cdots,N$. On the other hand, since $\bo{Q}_{0}$ is closed for the $\mathcal{C}$-topology we have that \begin{equation*} \bo{z}_{\varepsilon}(t) =: \lim_{m \to \infty}\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(t) \in \boldsymbol{Q}_{0}, \quad \forall\, t \in [0,T]. \end{equation*} Combining this with the fact that $\bo{z}_{\varepsilon} \in \bo{\mathcal{C}}$, we claim that $\bo{z}_{\varepsilon} \in \mathcal{C}([0,T], \boldsymbol{Q}_{0})$.\\ We prove now that $\bo{\pi}_{\varepsilon}:= \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N \left(\boldsymbol{Q}_{0},\bo{z}_{\varepsilon}\right)$. In fact, thanks to \eqref{equal_cones}, it suffices to prove that $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon}\right), \quad \forall t \in [0,T]$. \begin{itemize} \item \textbf{Convergence: }First, we prove that the linear interpolation of the delay operator converges to the continuous limit with respect to the norm $||\cdot ||_{\bo{\mathcal{C}}}$. 
\\ Indeed for any $i=1,2,\cdots,N_{p}$, we have that \begin{multline*} \tilde{\mathcal{L}}_{\varepsilon,\Delta,i} = \dfrac{\mu_{\Delta,i}}{\varepsilon} \sum_{n=1}^{N} \left\{ \left(Z^{n-1}_{\varepsilon,i} + \dfrac{t - t^{n-1}}{\Delta t}(Z^{n}_{\varepsilon,i} - Z^{n-1}_{\varepsilon,i}) \right) \right\}\mathbbm{1}_{J_{n}}(t) \\ - \dfrac{\Delta a}{\varepsilon} \sum_{n=1}^{N} \left\{\sum_{l=0}^{\infty}\left(Z^{n-l-1}_{\varepsilon,i} + \dfrac{t - t^{n-1}}{\Delta t}(Z^{n-l}_{\varepsilon,i} - Z^{n-l-1}_{\varepsilon,i}) \right)R_{l,i}\right\}\mathbbm{1}_{J_{n}}(t)=: I^{1}_{\Delta,i} - I^{2}_{\Delta,i}, \end{multline*} where we have set $J_{n} := \big((n-1)\Delta t, n\Delta t\big)$. To deal with the convergence of $I_{\Delta,i}^{1}$, we use the fact that $\left|\bo{\rho}_{\Delta} - \bo{\rho}\right|_{L^{1}_{a}}\underset{\Delta \to 0}{\longrightarrow}0$, which for any particle gives \begin{equation*} I_{\Delta,i}^{1} = \dfrac{1}{\varepsilon} \tilde{z}_{\varepsilon, \Delta,i}(t) \int_{\mathbb{R}_{+}}\rho_{\Delta,i}(a)da \underset{\Delta \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \dfrac{1}{\varepsilon} z_{\varepsilon,i}(t) \int_{0}^{\infty}\rho_{i}(a)da, \text{ in } \bo{\mathcal{C}}. \end{equation*} On the other hand, we split the second term as follows \begin{eqnarray*} I^{2}_{\Delta,i} & = & \dfrac{1}{\varepsilon} \sum_{n=1}^{N} \left\{\Delta a \sum_{l=0}^{\infty} Z^{n-l-1}_{\varepsilon,i}R_{l,i} + \dfrac{t-t^{n-1}}{\Delta t} \Delta a \sum_{l=0}^{\infty}(Z^{n-l}_{\varepsilon,i} - Z^{n-l-1}_{\varepsilon,i})R_{l,i} \right\} \mathbbm{1}_{J_{n}}(t) \\ & = & \dfrac{1}{\varepsilon} \sum_{n=1}^{N}\left(\dfrac{t-t^{n-1}}{\Delta t} \int_{\mathbb{R}_{+}}\left(z_{\Delta,i}(n\Delta t - \varepsilon a) - z_{\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a) \right)\rho_{\Delta,i}(a)da \right) \mathbbm{1}_{J_{n}}(t)\\ & & \qquad + \dfrac{1}{\varepsilon} \sum_{n=1}^{N} \left( \int_{\mathbb{R}_{+}}z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a)\rho_{\Delta,i}(a)da \right) \mathbbm{1}_{J_{n}}(t) =: \dfrac{1}{\varepsilon} I^{2,1}_{\Delta,i} + \dfrac{1}{\varepsilon} I^{2,2}_{\Delta,i}. \end{eqnarray*} Let us now estimate $|\bo{I}^{2}_{\Delta} - \bo{\tilde{I}}_{\Delta}|$, where for any particle \begin{equation*} \tilde{I}_{\Delta,i} := \dfrac{1}{\varepsilon} \int_{\mathbb{R}_{+}} \tilde{z}_{\varepsilon,i}(t-\varepsilon\Delta a - \varepsilon a)\rho_{\Delta,i}(a)da. \end{equation*} We prove that $\bo{I}^{2}_{\Delta}, \bo{\tilde{I}}_{\Delta} \in \bo{L}^{2}$. Indeed \begin{eqnarray*} \int_{0}^{T} |I^{2,2}_{\Delta,i}(t)|^{2}dt & \leq & \sum_{n=1}^{N}\int_{J_{n}} \left|\int_{\mathbb{R}_{+}}z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a)\rho_{\Delta,i}(a)da \right|^{2} dt \\ & \leq & \sum_{n=1}^{N} \int_{J_{n}} \int_{\mathbb{R}_{+}} \rho_{\Delta,i}(\sigma)d\sigma \int_{\mathbb{R}_{+}} \left|z_{\varepsilon,\Delta,i}(n\Delta t - \varepsilon \Delta a - \varepsilon a)\right|^{2}\rho_{\Delta,i}(a)dadt, \quad \forall i, \end{eqnarray*} where we have used Jensen's inequality in the latter step. Furthermore, since \begin{equation*} \int_{\mathbb{R}_{+}} \rho_{\Delta,i}(a)da = \mu_{0, \Delta,i} < \infty, \quad \forall i, \end{equation*} we have that \begin{equation*} \int_{0}^{T} |I_{\Delta,i}^{2,2}(t)|^{2} dt \leq \mu_{0,\Delta,i}\Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i}\right|^{2}R_{l,i}, \end{equation*} which can be bounded uniformly with respect to $\varepsilon$ since \begin{equation*}\label{jo} \Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i}\right|^{2}R_{l,i} \leq T\left( |z_{\varepsilon, \Delta, i}|^{2}_{L^{\infty}_{t}} + C_{z_{p,i}}^{2} + |z^{-1}_{p,i}|^{2} \right) \int_{\mathbb{R}_{+}}(1+a)^{2}\rho_{\Delta,i}(a)da, \quad \forall i = 1,\cdots,N_{p}. \end{equation*} In the latter inequality, we have split the sum over the ages into $l \in \left\{0,1,\cdots,n-1 \right\}$ and $l \in \{n,n+1,\cdots \}$.
In the first part we inserted the past data and used the bound provided by \eqref{compactness}; in the second part we used the Lipschitz continuity of the past data. The same arguments guarantee that $\bo{I}^{2,1}_{\Delta}$ and $\bo{\tilde{I}}_{\Delta}$ belong to $\bo{L}^{2}$.\\ Furthermore, since the past data are Lipschitz and we have the bound \eqref{compactness}, it follows that \begin{equation*} \displaystyle{\int_{0}^{T}\left| \bo{I}^{2}_{\Delta}(t) - \bo{\tilde{I}}_{\Delta}(t)\right|}dt \lesssim \Delta t \sum_{n=1}^{N} \Delta a \sum_{l=0}^{\infty} \left|Z^{n-l-1}_{\varepsilon,i} - Z^{n-l-2}_{\varepsilon,i}\right|^{2}R_{l,i} = O(\Delta a). \end{equation*} Thus $|| \bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} - \bo{\mathcal{L}}_{\varepsilon}||_{\bo{\mathcal{C}}} \longrightarrow 0$ as $m$ goes to infinity.\\ Furthermore, using the fact that $F$ is continuously differentiable and $\bo{\tilde{z}}_{\varepsilon,\Delta_{m}} \to \bo{z}_{\varepsilon}$, we have that \begin{equation*} \bo{\tilde{\pi}}_{\varepsilon,\Delta_{m}} :=\boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} + \boldsymbol{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}) \underset{m \to \infty}{\xrightarrow{\hspace{1.25cm}}} \boldsymbol{\pi}_{\varepsilon} = \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\bo{z}_{\varepsilon}), \quad \forall t \in [0,T] \text{ and } \forall \varepsilon > 0, \end{equation*} which gives the convergence. \item \textbf{Inclusion:} here we use the same arguments as in \cite{venel08}.\\ We need to prove that \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t) \right), \quad \text{ a.e. } t \in [0,T].
\end{equation*} By Lemma \ref{annexeA}.\ref{equivalences}, \eqref{discre_incl_diff} is equivalent to \begin{eqnarray*} \langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Replacing $\boldsymbol{\xi}$ by $-\boldsymbol{\xi}$ in the above inequality, we also have that \begin{eqnarray*} -\langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(- \boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Let us now prove that $|\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}|$ is bounded uniformly with respect to $\Delta a$. Indeed, on one hand, since $\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}$ is uniformly bounded and $F$ is continuously differentiable, there exists a constant $K_{F}$ independent of $\varepsilon$ and $\Delta a$ such that $\big|\bo{F}^{'}(\boldsymbol{\tilde{z}}_{\varepsilon,\Delta_{m}})\big| \leq K_{F}$.
On the other hand, using the energy estimates and Jensen's inequality, we have \begin{equation}\label{nouniformity} |\bo{\mathcal{L}}^{n}_{\varepsilon}|^{2} \leq \frac{2 C_{0}}{\varepsilon} \sum_{i=1}^{N_{p}} \dfrac{\Delta a}{2\varepsilon} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} \leq \frac{2C_{0}}{\varepsilon}\left|K_{0} + F(\boldsymbol{Z}^{0}_{p}) - F(\bo{Z}^{n}_{\varepsilon})\right|, \end{equation} so that $|\bo{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\sqrt{\varepsilon}}$, where $K> 0$ is independent of $\Delta a$ and $\varepsilon$. Moreover, \begin{eqnarray} |\bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}| & \leq & \left| \boldsymbol{\tilde{\mathcal{L}}}_{\varepsilon,\Delta_{m}} \right| + \left|\bo{F}^{'}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}})\right| \leq \dfrac{K}{\sqrt{\varepsilon}} + K_{F}. \end{eqnarray} Combining the latter bound with the two inequalities above yields \begin{equation}\label{last} \big|\langle \bo{\tilde{\pi}}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle \big| \leq \left(\dfrac{K}{\sqrt{\varepsilon}} + K_{F}\right)d_{\bo{K}( \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( - \boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big), \quad \forall \varepsilon > 0.
\end{equation} Using the fact that the distance to a nonempty, closed and convex set is $1$-Lipschitz, and setting \begin{equation*} \tilde{I}_{\varepsilon,\Delta_{m}}(t):= \big|d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(-\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big)\big|, \end{equation*} we have that \begin{eqnarray*} \tilde{I}_{\varepsilon,\Delta_{m}} & \leq & \big| d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( -\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ & & \hspace{8.5em} + \big| d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ & \leq & \big| \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + \underbrace{\big| d_{\bo{K}( \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big|}_{\tilde{J}_{\varepsilon, \Delta_{m}}(t)}.
\end{eqnarray*} \end{itemize} Moreover, by Proposition \ref{annexeA}.\ref{convergenceofprojection}, there exists $\nu > 0$ such that for all $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}|\leq \nu$, $\tilde{J}_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0$.\\ Thus, for any $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}| \leq \nu$, we have \begin{equation*} 0 \leq \tilde{I}_{\varepsilon,\Delta_{m}} \leq \big| \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + \tilde{J}_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0, \end{equation*} i.e. \begin{equation*} d_{\bo{K}(\bo{\tilde{z}}_{\varepsilon, \Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) \underset{ m \to \infty}{\longrightarrow} d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big). \end{equation*} Since $\varepsilon > 0$ is fixed, equation \eqref{last} finally gives \begin{equation*} \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \; |\boldsymbol{\xi}| \leq \nu, \quad |\langle \boldsymbol{\pi}_{\varepsilon}(t), \boldsymbol{\xi} \rangle| \leq \left(\frac{K}{\sqrt{\varepsilon}} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon}(t))} \big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big), \end{equation*} which, by Lemma \ref{annexeA}.\ref{equivalences} again, is equivalent to \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t)), \quad \forall \varepsilon >0, \end{equation*} which ends the proof, since the remaining convergence $\tilde{J}_{\varepsilon, \Delta_{m}}(t) \to 0$ is a consequence of Proposition \ref{annexeA}.\ref{convergenceofprojection}. \end{proof} \subsubsection{Uniqueness of solutions of the continuous problem} \begin{Theo}\label{thm-exist-uniq} Let $\varepsilon > 0$ and $T>0$ be fixed.
Under assumptions \ref{Assump} (i)-(iii), the variational inclusion \eqref{conDiff} has a unique solution $\boldsymbol{z}_{\varepsilon}$ in $\bo{\mathcal{C}}$. \end{Theo} \begin{proof} The existence of the limit $\bo{z}_{\varepsilon}$ is due to compactness. Indeed $\displaystyle{\bo{z}_{\varepsilon}(t) = \lim_{m \to \infty} \bo{\tilde{z}}_{\varepsilon,\Delta_{m}}}(t)$ in $\bo{\mathcal{C}}$.\\ For the uniqueness, we use the fact that $\bo{z}_{\varepsilon} \in \bo{Q}_{0}$. Indeed, since $\bo{z}_{\varepsilon} \in \boldsymbol{Q}_{0}$ and solves \eqref{conDiff}, the same arguments as above give \begin{equation*} \begin{cases} (E^{\varepsilon}_{t})^{'}(\bo{z}_{\varepsilon}) \in - N\left( \bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon} \right), \quad t >0 \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad t \leq 0, \end{cases} \iff \begin{cases} \displaystyle{\bo{z}_{\varepsilon}(t) = \argmin_{\bo{q}\, \in \, \bo{K}(\bo{z}_{\varepsilon})} E^{\varepsilon}_{t}(\boldsymbol{q})}, \quad \forall t > 0 \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \quad t \leq 0. \end{cases} \end{equation*} For the same reasons as in \eqref{KKTconditions_memoire}, the latter equation is in turn equivalent to the existence of a saddle point $\left( \bo{\lambda}_{\varepsilon}, \boldsymbol{z}_{\varepsilon}\right)$ such that \begin{equation}\label{KKTconditions_memoireCont} \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\boldsymbol{z}_{\varepsilon}) + \sum_{i<j} \lambda^{\varepsilon}_{ij} (\bo{\varphi}^{\varepsilon}_{ij})^{'}(\boldsymbol{z}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where the functions $\varphi^{\varepsilon}_{ij}$ define the interior convex approximation set $\bo{K}(\bo{z}_{\varepsilon})$.\\ Consider two solutions $\bo{z}^{1}_{\varepsilon}, \bo{z}^{2}_{\varepsilon}$ of \eqref{KKTconditions_memoireCont} sharing the same positions for negative times $\bo{z}_{p}$ and the same linkage density $\bo{\rho}$.
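The uniqueness argument below relies on the monotonicity of the gradient of the convex source term, $\langle \bo{F}^{'}(\bo{z}^{2}) - \bo{F}^{'}(\bo{z}^{1}), \bo{z}^{2} - \bo{z}^{1}\rangle \geq 0$. A minimal numerical sanity check of this property follows, using a hypothetical convex potential as a stand-in for the unspecified $F$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the convex source term F:
# F(z) = sum_i (z_i^4 / 4 + z_i^2 / 2), whose gradient is z^3 + z.
def F_prime(z):
    return z**3 + z

# Monotonicity of the gradient of a convex function:
# <F'(z2) - F'(z1), z2 - z1> >= 0 for all z1, z2.
for _ in range(1000):
    z1 = rng.normal(size=8)
    z2 = rng.normal(size=8)
    assert np.dot(F_prime(z2) - F_prime(z1), z2 - z1) >= 0.0
```

The inequality holds exactly because the gradient is componentwise increasing; any other smooth convex potential would serve equally well.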
We have \begin{equation*} \langle \bo{\mathcal{\hat{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \langle \bo{F}^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \boldsymbol{\hat{z}}_{\varepsilon} \rangle + \left \langle \sum_{i<j} \left[ \lambda^{\varepsilon,2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\boldsymbol{z}^{1}_{\varepsilon})\right], \bo{\hat{z}}_{\varepsilon} \right\rangle = 0, \end{equation*} where $\bo{\hat{z}}_{\varepsilon}:= \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}$ and $\boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}:= \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}^{2}] - \bo{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}]$. Notice once again that, since $F$ is convex, we have \begin{equation*} \langle \bo{F}^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \boldsymbol{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \geq 0. \end{equation*} It follows that \begin{equation}\label{en_attendant} \langle \bo{\mathcal{\hat{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \left \langle \sum_{i<j} \left[ \lambda^{\varepsilon,2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\boldsymbol{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\boldsymbol{z}^{1}_{\varepsilon})\right], \bo{\hat{z}}_{\varepsilon} \right \rangle \leq 0. \end{equation} Let us now consider the second term on the left-hand side of \eqref{en_attendant}.
Since $\varphi^{\varepsilon,1}_{ij}$ and $\varphi^{\varepsilon,2}_{ij}$ are convex, by the same arguments as in the proof of Theorem \ref{theo_compactness}, we have that \begin{equation*} \langle (\bo{\varphi}^{\varepsilon,k}_{ij})^{'}(\bo{z}^{1}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon} \rangle \leq \bo{\varphi}^{\varepsilon,k}_{ij}(\bo{z}^{2}_{\varepsilon}) - \bo{\varphi}^{\varepsilon,k}_{ij}(\bo{z}^{1}_{\varepsilon}) \leq \langle (\bo{\varphi}^{\varepsilon,k}_{ij})^{'}(\bo{z}^{2}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon} \rangle, \quad k \in \{1,2\} \text{ and } i < j, \end{equation*} so that, since the Lagrange multipliers satisfy $\lambda^{\varepsilon,k}_{ij}(t) \geq 0$ for all $i <j$ and $t \in [0,T]$, and $\displaystyle{\sum_{i<j}\lambda^{\varepsilon,k}_{ij} \varphi_{ij}^{\varepsilon,k}(\bo{z}^{k}_{\varepsilon}) = 0}, \; k \in \{1,2\}$, we have that \begin{equation*} 0 \leq \sum_{i<j} \langle \lambda^{\varepsilon, 2}_{ij} (\bo{\varphi}^{\varepsilon,2}_{ij})^{'}(\bo{z}^{2}_{\varepsilon}) - \lambda^{\varepsilon,1}_{ij} (\bo{\varphi}^{\varepsilon,1}_{ij})^{'}(\bo{z}^{1}_{\varepsilon}), \bo{\hat{z}}_{\varepsilon}\rangle. \end{equation*} By \eqref{en_attendant}, this means that \begin{equation}\label{I} \langle \bo{\hat{\mathcal{L}}}_{\varepsilon} , \bo{\hat{z}}_{\varepsilon} \rangle \leq 0.
\end{equation} Then, using the elementary inequality\footnote{$\langle a-b,a \rangle \geq \dfrac{1}{2}\big(|a|^{2} - |b|^{2}\big)$ for any $a,b\in \mathbb{R}^{2N_{p}}$.} we have that \begin{eqnarray*} \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty}\big| \hat{z}_{\varepsilon,i}(t)\big|^{2} \rho_{i}(a)da - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da & \leq & \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \end{eqnarray*} so that, by definition of $\bo{\rho}$, \begin{equation}\label{II} \dfrac{\mu_{0,m}}{2\varepsilon}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \quad \forall \varepsilon > 0 \text{ fixed. } \end{equation} Combining \eqref{I} and \eqref{II}, we have \begin{equation*} \left|\bo{\hat{z}}_{\varepsilon}(t)\right|^{2} \leq \dfrac{\overline{\rho}}{\mu_{0,m}} \int_{0}^{t} |\bo{\hat{z}}_{\varepsilon}(s)|^{2}ds, \quad \forall t \in [0,T], \end{equation*} which, thanks to Grönwall's lemma, gives $|\bo{\hat{z}}_{\varepsilon}| \equiv 0$, i.e. $\bo{z}^{1}_{\varepsilon}(t) = \bo{z}^{2}_{\varepsilon}(t)$ a.e. $t\in [0,T]$. \end{proof} \subsubsection{Convergence when $\varepsilon$ is small enough} In this section we are interested in the asymptotic regime where the linkages remodelling rate becomes large. We prove the convergence of $\bo{z}_{\varepsilon}$ in $\bo{\mathcal{C}}$. Nevertheless, we cannot use the same arguments as in \cite[Theorem 4.5, p.72]{venel08}, because the delay operator is not uniformly bounded with respect to $\varepsilon$ (see \eqref{nouniformity}). \begin{Theo}\label{delay-friction} Let $T>0$ be fixed.
Under assumptions \ref{Assump} (i)-(iii), when $\varepsilon$ tends to 0 we have that \begin{equation}\label{z_0} \int_{0}^{T}\langle \mathcal{L}_{\varepsilon}[\bo{z}_{\varepsilon}], \bo{\psi}(t)\rangle dt \longrightarrow \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle - \int_{0}^{T} \langle\bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}. \end{equation} \end{Theo} \begin{proof} Let $\bo{z}_{\varepsilon}$ be the unique solution of \eqref{conDiff}. By the energy estimates there exists a constant $C$ independent of $\varepsilon$ such that \begin{equation*} \sum_{i=1}^{N_{p}}\int_{0}^{T} \int_{0}^{\infty} \rho_{i}|u_{\varepsilon,i}|^{2}\zeta_{i}(a)dadt \leq C < \infty. \end{equation*} On the other hand, since the death rate $\zeta$ is bounded from below, we have that \begin{equation*} \int_{0}^{T}\left|\bo{\mathcal{L}}_{\varepsilon}\right|^{2}dt = \sum_{i=1}^{N_{p}}\int_{0}^{T}\left| \int_{0}^{\infty}\dfrac{\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a) }{\zeta_{i}(a)}da \right|^{2}dt \leq \dfrac{1}{\underline{\zeta}^{2}} \sum_{i=1}^{N_{p}} \int_{0}^{T} \left|\int_{0}^{\infty}\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a)da\right|^{2}dt, \end{equation*} so that by Jensen's inequality \begin{equation*} \dfrac{1}{\underline{\zeta}^{2}} \sum_{i=1}^{N_{p}} \int_{0}^{T} \left|\int_{0}^{\infty}\rho_{i}u_{\varepsilon,i}(a,t)\zeta_{i}(a)da\right|^{2}dt \leq \dfrac{\overline{\zeta}}{\underline{\zeta}^{2}} \mu_{0,M} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty}\rho_{i}|u_{\varepsilon,i}|^{2}\zeta_{i}(a)dadt. \end{equation*} This shows that the delay operator belongs to $\bo{L}^{2}$ uniformly with respect to $\varepsilon$.
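The Jensen/Cauchy--Schwarz step used above, $\big|\int u\,\rho\zeta\,da\big|^{2} \leq \big(\int \rho\zeta\,da\big)\big(\int |u|^{2}\rho\zeta\,da\big)$, can be checked on a discretized age variable; the weight and integrand below are hypothetical stand-ins for $\rho_{i}\zeta_{i}$ and $u_{\varepsilon,i}(\cdot,t)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized age variable and an integrable weight
# (hypothetical stand-in for rho_i(a) * zeta_i(a)).
a = np.linspace(0.0, 10.0, 2001)
da = a[1] - a[0]
w = np.exp(-a)
u = rng.normal(size=a.size)  # stand-in for u_{eps,i}(., t)

# Jensen / Cauchy-Schwarz with respect to the measure w(a) da:
# ( int u w da )^2  <=  ( int w da ) * ( int u^2 w da ).
lhs = (np.sum(u * w) * da) ** 2
rhs = (np.sum(w) * da) * (np.sum(u**2 * w) * da)
assert lhs <= rhs
```

The same discrete inequality, summed over particles and integrated in time, is exactly what yields the uniform $\bo{L}^{2}$ bound above.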
Thus there exists $\bo{\mathcal{L}}_{0} \in \bo{L}^{2}$ such that $\bo{\mathcal{L}}_{\varepsilon}$ weakly converges to $\bo{\mathcal{L}}_{0}$ in $\bo{L}^{2}$ when $\varepsilon$ tends to $0$, implying that \begin{equation*} \int_{0}^{T} \langle \bo{\mathcal{L}}_{\varepsilon},\bo{\psi}(t)\rangle dt \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1cm}}} \int_{0}^{T} \langle \bo{\mathcal{L}}_{0},\bo{\psi}(t) \rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} As it stands, we have \begin{itemize} \item[i)] $\partial_{t}\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{L}^{2}$, \item[ii)] $\bo{\tilde{z}}_{\varepsilon,\Delta} \in \bo{\mathcal{C}}$ and \item[iii)] $|| \bo{\tilde{z}}_{\varepsilon,\Delta} - \bo{z}_{\varepsilon}||_{\bo{\mathcal{C}}} \underset{\Delta \longrightarrow 0}{\xrightarrow{\hspace{1cm}}}0$. \end{itemize} Setting $I[\bo{\rho},\bo{z}_{\varepsilon},\bo{\psi}]:= \int_{0}^{T}\langle \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}],\bo{\psi} \rangle dt$, we split the integral as follows \begin{multline}\label{star} I[\bo{\rho}, \bo{z}_{\varepsilon}, \bo{\psi}] = \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty} \int_{0}^{T}\langle z_{\varepsilon,i}(t), \psi_{i}(t) - \psi_{i}(t+\varepsilon a) \rangle\rho_{i}(a)dadt \, \\ + \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{T} \int_{0}^{\infty} \left\{ \langle z_{\varepsilon,i}(t),\psi_{i}(t+\varepsilon a)\rangle - \langle z_{\varepsilon,i}(t-\varepsilon a), \psi_{i}(t)\rangle \right\} \rho_{i}(a) dadt =: I^{\varepsilon}_{1} + I^{\varepsilon}_{2}. \end{multline} By Lebesgue's dominated convergence theorem, we have that \begin{equation*} I^{\varepsilon}_{1} \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1.5cm}}} -\int_{0}^{T} \langle \bo{\mu}_{1} \boldsymbol{z}_{0}, \partial_{t}\bo{\psi}\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}.
\end{equation*} Splitting $I_{2}^{\varepsilon}$ into $I^{\varepsilon}_{2,1}$ and $I^{\varepsilon}_{2,2}$ and using the same arguments as in \cite[Theorem 5.6 p.29]{Mi20} we have that \begin{equation*} I^{\varepsilon}_{2,1} := \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty}\langle z_{\varepsilon,i}(t),\psi_{i}(t+\varepsilon a)\rangle \rho_{i}(a)da dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \langle \bo{\mu}_{1} \bo{z}_{0}(T),\bo{\psi}(T)\rangle, \end{equation*} and \begin{equation*} I^{\varepsilon}_{2,2} := \dfrac{1}{\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{T} \int_{0}^{\infty} \langle z_{\varepsilon,i}(t-\varepsilon a),\psi_{i}(t)\rangle \rho_{i}(a)da dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \langle \bo{\mu}_{1} \bo{z}_{0}(0),\bo{\psi}(0)\rangle. \end{equation*} We gather the above convergences to obtain \begin{equation*} I[\bo{\rho}, \bo{z}_{\varepsilon},\bo{\psi}] \xrightarrow{\hspace{1cm}} \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle -\int_{0}^{T}\langle \bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}\rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}. \end{equation*} On the other hand since $\partial_{t}\bo{z}_{0} \in \bo{L}^{2}$ and $\bo{z}_{0} \in \bo{L}^{\infty}$ we have that $\bo{z}_{0} \in \bo{\mathcal{C}}$, so that the integration by parts is well-defined in $\bo{H}^{1}$ and we have \begin{equation*} \langle \bo{\mu}_{1}\bo{z}_{0}(T),\bo{\psi}(T)\rangle - \langle \bo{\mu}_{1}\bo{z}_{0}(0),\bo{\psi}(0)\rangle -\int_{0}^{T}\langle \bo{\mu}_{1}\bo{z}_{0},\partial_{t}\bo{\psi}\rangle dt = \int_{0}^{T}\langle \bo{\mu}_{1}\partial_{t}\bo{z}_{0},\bo{\psi} \rangle dt, \quad \forall \bo{\psi} \in \bo{H}^{1}.
\end{equation*} This gives that \begin{equation*} \int_{0}^{T} \langle \bo{\mathcal{L}}_{0} - \bo{\mu}_{1}\partial_{t}\bo{z}_{0},\bo{\psi} \rangle dt = 0, \quad \forall \bo{\psi} \in \bo{H}^{1} \iff \bo{\mathcal{L}}_{0} = \bo{\mu}_{1}\partial_{t}\bo{z}_{0} \text{ a.e. } t \in [0,T], \end{equation*} which ends the proof. \end{proof} \begin{Theo}\label{lamda0} Let $\bo{z}_{\varepsilon}$ be the unique solution of \eqref{KKTconditions_memoireCont}. Under hypotheses \ref{Assump} (i)--(iii) there exists $\bo{\lambda}_{0} = \left(\lambda_{ij}^{0} \right)_{i<j} \in L^{2}\left([0,T];(\mathbb{R}_{+})^{N_{c}} \right)$ depending only on time such that \begin{equation*} \sum_{i<j} \int_{0}^{T} \lambda^{\varepsilon}_{ij}(t)\langle \bo{G}_{ij}(\bo{z}_{\varepsilon}), \bo{\psi}(t)\rangle dt \underset{\varepsilon \to 0}{\xrightarrow{\hspace{1cm}}} \sum_{i<j} \int_{0}^{T}\lambda^{0}_{ij}(t)\langle \bo{G}_{ij}(\bo{z}_{0}),\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} \end{Theo} \begin{proof} Let $\bo{U} := \bo{\mathcal{L}}_{\varepsilon} - \bo{F}^{'}(\bo{z}_{\varepsilon})$ and \begin{equation*} \bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} := \left\{ \bo{\lambda}_{\varepsilon} \in \mathbb{R}^{N_{c}}, \; \sum_{i<j} \lambda^{\varepsilon}_{ij} \bo{G}_{ij}(\bo{z}_{\varepsilon}) = \bo{U}, \; \lambda^{\varepsilon}_{ij} \geq 0 \text{ and } \lambda^{\varepsilon}_{ij} = 0 \text{ if } D_{ij}(\bo{z}_{\varepsilon}) > 0 \right\}.
\end{equation*} If $\bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} \neq \emptyset$, the same arguments as in \cite[Proposition 4.21 p.86]{venel08} guarantee that \begin{equation}\label{riz} \forall \bo{\lambda}_{\varepsilon} \in \bo{\Lambda}^{\varepsilon}_{\bo{z}_{\varepsilon},\bo{U}} \text{ and } \forall i<j, \text{ we have } \; \lambda^{\varepsilon}_{ij} \leq |\bo{U}|b^{N_{p}}, \text{ where } b := \dfrac{2\sqrt{n_{v}}}{\min\left( \sin\left(\dfrac{\pi}{n_{v} +1}\right), \sin\left(\dfrac{\pi}{N}\right) \right)}, \end{equation} and $n_{v}$ is the maximal number of neighbours that a particle may have.\\ It follows that \begin{equation*} \int_{0}^{T}|\lambda^{\varepsilon}_{ij}|^{2} dt \leq 2 b^{2N_{p}}\int_{0}^{T} \left( |\bo{\mathcal{L}}_{\varepsilon}|^{2} + \big|\boldsymbol{F}^{'}(\bo{z}_{\varepsilon})\big|^{2} \right) dt \lesssim 2b^{2N_{p}}\left( \dfrac{\overline{\zeta}\mu_{0,M}}{\underline{\zeta}} + K^{2} T \right), \quad \forall i<j, \end{equation*} where we used the fact that $\bo{\mathcal{L}}_{\varepsilon} \in \bo{L}^{2}$ on one hand and $|\bo{F}^{'}(\bo{z}_{\varepsilon})| < \infty$ (since $\bo{F}^{'}$ is continuous and $\bo{z}_{\varepsilon}$ is bounded) on the other.\\ Furthermore, since $\bo{Q}_{0}$ is closed and $\bo{z}_{\varepsilon} \in \bo{Q}_{0}$, we have that $\displaystyle{\bo{z}_{0} := \lim_{\varepsilon \to 0}\bo{z}_{\varepsilon} \in \bo{Q}_{0}}$. On the other hand, since $\bo{G}_{ij}$ is by definition continuous on $\bo{Q}_{0}$, we have that \begin{equation*} \bo{G}_{ij}(\bo{z}_{\varepsilon}) \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{\mathcal{C}}, \quad \forall i < j.
\end{equation*} For any $i < j$, we have, up to a subsequence, that \begin{equation*} \begin{cases} \bo{\lambda}_{\varepsilon} \rightharpoonup \bo{\lambda}^{0} \text{ in } L^{2}\left([0,T]; (\mathbb{R}_{+})^{N_{c}} \right) \vspace{0.5em} \\ \bo{G}_{ij}(\bo{z}_{\varepsilon}) \longrightarrow \bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{\mathcal{C}}, \end{cases} \end{equation*} so that \begin{equation*} \lambda^{\varepsilon}_{ij} \bo{G}_{ij}(\bo{z}_{\varepsilon}) \rightharpoonup \lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}) \text{ in } \bo{L}^{2}, \end{equation*} implying that \begin{equation*} \sum_{i<j} \int_{0}^{T} \lambda^{\varepsilon}_{ij}(t) \langle \bo{G}_{ij}(\bo{z}_{\varepsilon})(t),\bo{\psi}(t) \rangle dt \underset{\varepsilon \longrightarrow 0}{\xrightarrow{\hspace{1.25cm}}} \sum_{i<j} \int_{0}^{T} \lambda^{0}_{ij}(t) \langle \bo{G}_{ij}(\bo{z}_{0}(t)),\bo{\psi}(t)\rangle dt, \quad \forall \bo{\psi} \in \bo{L}^{2}. \end{equation*} This ends the proof. \end{proof} \begin{Theo} Under hypotheses \ref{Assump} (i)-(iii), the unique solution of \eqref{conDiff} converges toward $\bo{z}_{0} \in \bo{\mathcal{C}}$ which in turn solves \begin{equation*} \begin{cases} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}+\bo{F}^{'}(\boldsymbol{z}_{0}) \in -N\left(\bo{K}(\bo{z}_{0}),\bo{z}_{0}\right) \text{ a.e. } t \in (0,T], \vspace{0.5em} \\ \boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{cases} \end{equation*} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i}:= \int_{0}^{\infty}a\rho_{i}(a)da \in \mathbb{R}, \quad \forall i. \end{equation*} Moreover the limit function $\bo{z}_{0}$ is unique.
\end{Theo} \begin{proof} \textit{The primal-dual problem:} as in the proof of Theorem \ref{thm-exist-uniq}, it suffices to prove that there exists $\bo{\lambda}_{0} \in L^{2}\left([0,T];(\mathbb{R}_{+})^{N_{c}}\right)$ depending only on time such that \begin{equation}\label{weak0} \bo{\mu}_{1}\partial_{t}\bo{z}_{0} + \bo{F}^{'}(\bo{z}_{0}) - \sum_{i<j} \lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}) = \bo{0}. \end{equation} The existence of $\bo{z}_{0}$ is due to compactness, since $\displaystyle{\bo{z}_{0} := \lim_{\varepsilon \to 0}\bo{z}_{\varepsilon}}$ where $\bo{z}_{\varepsilon}$ is the unique solution of \eqref{conDiff}. Furthermore, from Theorems \ref{delay-friction} and \ref{lamda0} on one hand and the fact that $F$ is continuously differentiable on the other, we have that $\bo{z}_{0}$ solves \begin{equation*} \int_{0}^{t}\langle\bo{\mu}_{1}\partial_{t}\bo{z}_{0} + \bo{F}^{'}(\bo{z}_{0}) -\sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}_{0}), \bo{\psi}\rangle ds = 0, \quad \forall \bo{\psi} \in \bo{H}^{1} \text{ and } \forall t \in [0,T]. \end{equation*} {Uniqueness:} Let $\bo{z}^{1}_{0}$ and $\bo{z}^{2}_{0}$ be two solutions of \eqref{weak0} sharing the same initial positions, i.e. $\bo{z}^{1}_{0}(0) = \bo{z}^{2}_{0}(0)$. We have \begin{equation*} \int_{0}^{t}\langle\bo{\mu}_{1}\partial_{t}\bo{\hat{z}}_{0} + \widehat{\bo{F}^{'}(\bo{z}_{0})}-\sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}^{2}_{0}) + \sum_{i<j}\lambda^{0}_{ij}\bo{G}_{ij}(\bo{z}^{1}_{0}), \bo{\psi}\rangle ds = 0, \; \forall \bo{\psi} \in \bo{\mathcal{C}}\cap \bo{H}^{1}, \end{equation*} where $\bo{\hat{z}}_{0} := \bo{z}^{2}_{0} - \bo{z}^{1}_{0}$ and $\widehat{\bo{F}^{'}(\bo{z}_{0})} := \bo{F}^{'}(\bo{z}^{2}_{0}) - \bo{F}^{'}(\bo{z}^{1}_{0})$. Let us choose $\bo{\psi} = \bo{\hat{z}}_{0}$ in the latter equation.
Since the source term and the constraint functions are convex, by the same arguments as in the proof of Theorem \ref{thm-exist-uniq}, we have that \begin{equation*} \mu_{1,m}\int_{0}^{t}\langle \partial_{t}\bo{\hat{z}}_{0},\bo{\hat{z}}_{0} \rangle dt \leq 0 \implies |\bo{\hat{z}}_{0}(t)|^{2} \leq 0, \quad \forall t \in [0,T], \end{equation*} which proves that $|\bo{\hat{z}}_{0}(t)| \equiv 0$, meaning that $\bo{z}^{1}_{0} = \bo{z}^{2}_{0}$ for almost every $t \in [0,T]$. \end{proof} \subsection{The periodic contact model} \subsubsection{Definition of the periodic signed distance} \begin{Prop}\label{2D} For any $x = (x_{1},x_{2}) \in \mathbb{R}^{2}$ set $\displaystyle{\overline{x}_{1} := x_{1} - \left \lfloor \dfrac{x_{1}}{L} \right \rfloor L}$ and $\overline{x}_{2} := x_{2} - \left \lfloor \dfrac{x_{2}}{H} \right \rfloor H$. We have the following statements: \begin{itemize} \item $(\overline{x}_{1}, \overline{x}_{2}) \in [0,L)\times [0,H)$, \item moreover \begin{equation*} \min_{h,k \in \mathbb{Z}}|x - hLe_{1} - kHe_{2}| = \min_{h,k \in \{0,1\}} |\overline{x} - hLe_{1} - kHe_{2}|, \end{equation*} where $e_{1}$ and $e_{2}$ denote the first and second vectors of the canonical basis of $\mathbb{R}^{2}$, respectively. \end{itemize} \end{Prop} \begin{proof} For the sake of simplicity, we first perform the proof in one dimension, i.e. $\mathcal{D} = [0,L]$; the 2D case is obtained by extension.\\ Let $x \in \mathbb{R}$. Since $\mathbb{R}$ is Archimedean, there exists $n:=\left\lfloor \dfrac{x}{L}\right\rfloor$ such that \begin{equation*} n \leq \dfrac{x}{L} < n+1, \end{equation*} which implies that \begin{equation}\label{niayla} nL \leq x < nL+L \implies 0 \leq \overline{x} < L, \end{equation} which proves the first claim.\\ For the second claim, we notice that \begin{equation*} \min_{k\in \mathbb{Z}}|x-kL| = \min_{k\in \mathbb{Z}}|\overline{x}+nL-kL|= \min_{k\in \mathbb{Z}}|\overline{x}-kL|.
\end{equation*} On the other hand, since there exists $k \in \mathbb{Z}$ such that $|\overline{x} - kL| < L$ (take $k=0$ for instance), the map $A: k \mapsto |\overline{x} - kL|$ attains its minimum at indices $k_{0}$ satisfying $A(k_{0}) < L$. But thanks to the first claim, \begin{equation*} |\overline{x}-kL| < L \implies (k-1)L < \overline{x} < (k+1)L, \end{equation*} so that by \eqref{niayla} we conclude that $-1<k<2$, i.e. $k \in \{0,1\}$. Equivalently, \begin{equation*} \min_{k\in \mathbb{Z}}|\overline{x}-kL| = \min_{k \in \{0,1\}} |\overline{x}-kL|. \end{equation*} We conclude that \begin{equation*} \min_{k \in \mathbb{Z}}|x-kL| = \min_{k\in \{0,1\}}|\overline{x}-kL| = \min\left(\overline{x},L-\overline{x} \right) = \begin{cases} \overline{x} \quad \text{ if } \overline{x} \leq L/2, \\ L-\overline{x} \quad \text{ if } \overline{x} > L/2. \end{cases} \end{equation*} This ends the proof. \end{proof} \subsubsection{The framework for the continuous periodic model} Consider the following minimization process \begin{equation}\label{cont_periodic} \tilde{\boldsymbol{z}}_{\varepsilon}(t) = \argmin_{ \bo{q} \, \in \, \tilde{\mathcal{U}}} \mathcal{E}_{t}(\boldsymbol{q}):= \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{\infty}\big|q_{i}-\tilde{z}_{\varepsilon,i}(t-\varepsilon a)\big|^{2}\rho_{i}(a)da+F(\boldsymbol{q}), \end{equation} where the set of constraints $\tilde{\mathcal{U}}$ reads \begin{equation*} \tilde{\mathcal{U}}:= \left\{ \bo{q} \in \mathbb{R}^{2N_{p}}\, \text{ s.t. } \phi^{\varepsilon}_{ij}(\boldsymbol{q}):= -d_{ij}(\tilde{\bo{z}}_{\varepsilon})-\nabla d_{ij}(\tilde{\bo{z}}_{\varepsilon})\cdot\left(\bo{q}- \tilde{\bo{z}}_{\varepsilon}\right) \leq 0, \, \forall \, i<j \right\}, \end{equation*} and the periodic distance \begin{equation}\label{period_distance} d_{ij}(\bo{q}): = \underset{(h,k)\in \mathbb{Z}^{2}}{\min}\big|q_{j}-q_{i}-hLe_{1} - kHe_{2}\big|-(r_{i}+r_{j}).
\end{equation} We denote by $\overline{\boldsymbol{q}} := (\overline{q_{1}}, \cdots, \overline{q_{N_{p}}})$ the projection of the particles' positions onto the 2D torus. For any particle we write $\overline{q_{i}}:=(\overline{q^{x}_{i}}, \overline{q^{y}_{i}})$, where $\overline{q^{x}_{i}}$ (resp. $\overline{q^{y}_{i}}$) is the projection in $[0,L]$ (resp. in $[0,H]$) of $q^{x}_{i}$ (resp. $q^{y}_{i}$) as in Proposition \ref{2D}. When accounting for adhesions, the corresponding energy keeps track of past positions in the 2D plane, whereas contact forces act on the torus. This is because adhesions whose length exceeds the periodicity dimensions $L$ and $H$ are taken into account; see Figure \ref{Fig1}. \begin{figure}[!ht] \centering \begin{tikzpicture} \definecolor{gray}{rgb}{0.5, 0.5, 0.5} \definecolor{green}{rgb}{0, 0.5, 0} \draw[<->,>=latex, line width=0.25mm] (-6, 0) -- (4, 0) node[right] {}; \fill[gray] (-1.5, 1.5) circle (0.3); \draw[->,gray, >=latex, line width=0.5mm] (-1.5, 1.2) -- (-1.5, 0) node[below] {}; \draw[->,gray, >=latex, line width=0.5mm] (-1.6, 1.22) -- (-2, 0) node[below] {}; \draw[->, gray,>=latex, line width=0.5mm] (-1.65, 1.23) -- (-3.5, 0) node[right] {}; \draw[->, gray,>=latex, line width=0.5mm] (-1.7, 1.3) -- (-5, 0) node[right] {}; \draw[->, gray, >=latex, line width=2mm] (-1.2, 1.5) -- (0.1, 1.5) node[above] {$\boldsymbol{U}(\bo{z}^{\ast}_{\varepsilon})$}; \fill[black] (1, 1.5) circle (0.3); \draw[->,black,>=latex,line width=0.5mm] (1, 1.2) -- (1, 0) node[below] {$\bo{z}(t)$}; \draw[->,black,>=latex,line width=0.5mm] (0.9, 1.22) -- (0.4, 0) node[below] {}; \draw[->,black,>=latex,line width=0.5mm] (0.85, 1.23) -- (-0.5, 0) node[below] {}; \draw[->, black, >=latex, line width=0.5mm] (0.85, 1.25) -- (-1.5, 0) node[below] {$\boldsymbol{z}(t-\varepsilon a_{1})$}; \draw[->, black, >=latex, line width=0.5mm] (0.8, 1.3) -- (-3.5, 0) node[below] {$\bo{z}(t- \varepsilon a_{2})$}; \draw[->, black, >=latex, line width=2mm] (1.3, 1.5) -- (2.5, 1.5) node[above]
{$\boldsymbol{U}(\bo{z}_{\varepsilon}(t))$}; \fill[red] (1.75, 0) circle (0.05) node[below] {$L$}; \fill[red] (-2.5, 0) circle (0.05) node[below] {$0$}; \end{tikzpicture} \caption{Linkages associated with some past positions in the domain $[0,L]$, where $\bo{z}^{\ast}_{\varepsilon}:= \bo{z}_{\varepsilon}(t-\varepsilon a_{1})$.} \label{Fig1} \end{figure} By Proposition \ref{2D}, we have that \begin{eqnarray*} d_{ij}(\bo{q}) & = & \underset{(h,k) \in \{0,1\}^{2}}{\min}\big| \overline{q_{j}}- \overline{q_{i}} -hLe_{1}-kHe_{2}\big|-(r_{i}+r_{j}). \end{eqnarray*} Since this distance is well defined, i.e. there exist $\underline{h},\underline{k} \in \{ 0,1\}$ such that \begin{equation*} d_{ij}(\bo{q}) = \big| \overline{q_{j}}- \overline{q_{i}} - \underline{h}Le_{1}- \underline{k}He_{2}\big|-(r_{i}+r_{j}), \end{equation*} we define the gradient vector of $d_{ij}$ in $\bo{\tilde{Q}}_{0}$ as \begin{equation*} \boldsymbol{\tilde{G}_{ij}} := \nabla d_{ij}(\bo{q}) = \Big(0,\cdots 0, \underset{i}{-\tilde{e}_{i,j}(\bo{q})}, 0\cdots 0, \underset{j}{\tilde{e}_{i,j}(\bo{q})}, 0, \cdots,0\Big), \quad i<j, \end{equation*} where $\tilde{e}_{ij}(\bo{q})$ is the direction of the action of the (periodic copy of) particle $i$ on particle $j$, oriented towards $j$. It is a unit vector and reads \begin{equation*} \tilde{e}_{ij}(\bo{q}):= \dfrac{q_{j}-q_{i}- (n^{x}_{j}-n^{x}_{i}+\underline{h})Le_{1} - (n^{y}_{j}-n^{y}_{i}+\underline{k})He_{2} }{\left| q_{j}-q_{i}- (n^{x}_{j}-n^{x}_{i}+\underline{h})Le_{1} - (n^{y}_{j}-n^{y}_{i}+\underline{k})He_{2} \right|}, \quad i < j, \end{equation*} where $n_{k}^{x} := \lfloor q_{k}^{x}/L \rfloor$ and $n_{k}^{y} := \lfloor q_{k}^{y}/H \rfloor$ for any particle $k$.
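Proposition \ref{2D} gives a constant-time recipe for evaluating the periodic distance \eqref{period_distance}: reduce the difference of positions to the periodic cell by the floor operation, then minimize over $(h,k)\in\{0,1\}^{2}$ only. The following sketch illustrates this reduction (the function names and parameters are ours, not part of the model):

```python
import math

def reduce_to_cell(x, L, H):
    """Floor reduction of Proposition 2D: project x in R^2 onto [0,L) x [0,H)."""
    return (x[0] - math.floor(x[0] / L) * L,
            x[1] - math.floor(x[1] / H) * H)

def periodic_distance(qi, qj, ri, rj, L, H):
    """Signed periodic distance d_ij between two discs: after reducing
    q_j - q_i to the cell, the minimum over (h,k) in Z^2 is attained
    for (h,k) in {0,1}^2 by Proposition 2D."""
    dbar = reduce_to_cell((qj[0] - qi[0], qj[1] - qi[1]), L, H)
    gap = min(math.hypot(dbar[0] - h * L, dbar[1] - k * H)
              for h in (0, 1) for k in (0, 1))
    return gap - (ri + rj)
```

A brute-force minimization of $|q_{j}-q_{i}-hLe_{1}-kHe_{2}|$ over a large window of integers $(h,k)$ returns the same value, which is precisely the content of the proposition.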
\subsubsection{The discrete periodic problem} The same arguments as earlier in this paper lead to the discrete minimization problem \begin{equation}\label{discret_periodic} \boldsymbol{\tilde{Z}}^{n}_{\varepsilon} = \argmin_{\bo{q} \, \in \, \bo{\tilde{K}}\left(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon}\right)}\, \left\{ \mathcal{E}_{n,\varepsilon}(\bo{q}):= \frac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|q_{i}- \tilde{Z}^{n-l}_{\varepsilon,i} \right|^{2}R_{l,i} + F(\boldsymbol{q}) \right\}, \end{equation} where the discrete constraint set reads \begin{equation*} \bo{\tilde{K}}\left(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon}\right):= \left\{ \bo{q} \in \mathbb{R}^{2N_{p}}\, \text{ s.t. } \phi^{n,\varepsilon }_{ij}(\bo{q}):= -d_{ij}(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon})-\nabla d_{ij}(\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon})\cdot\bigg(\bo{q}- {\boldsymbol{\tilde{Z}}^{n-1}_{\varepsilon}}\bigg) \leq 0, \quad i<j \right\}, \end{equation*} and \begin{equation*} \nabla \phi^{n,\varepsilon}_{ij}(\boldsymbol{q})= -\nabla d_{ij}(\tilde{\boldsymbol{Z}}^{n-1}_{\varepsilon}), \quad \forall \boldsymbol{q} \in \mathbb{R}^{2N_{p}}. \end{equation*} The same arguments as in Theorem \ref{thm1} still hold and guarantee the existence and uniqueness of the solution $\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}$ to \eqref{discret_periodic}. We define in the same way the sets of feasible configurations and of active constraints, denoted by $\tilde{\boldsymbol{Q}}_{0}$ and $\tilde{I}_{q}$, as in \eqref{Q0} and Definition \ref{qualified_memoire} respectively; the only difference is that the periodic distance $d_{ij}$ is considered instead of $D_{ij}$.
The Lagrangian $\tilde{L}$ is likewise defined from $\mathbb{R}^{2N_{p}} \times (\mathbb{R}_{+})^{N_{c}}$ into $\mathbb{R}$ by \begin{equation}\label{lagrangian_periodic} \tilde{L}(\bo{q},\boldsymbol{\mu})= \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}}\sum_{l=1}^{\infty} \left|q_{i}-\tilde{Z}^{n-l}_{\varepsilon,i}\right|^{2}R_{l,i} + F\left(\boldsymbol{q}\right) + \sum_{i<j}\mu_{ij}\phi^{n,\varepsilon}_{ij}(\bo{q}). \end{equation} All hypotheses of Theorem \ref{kkt_cond} hold and guarantee that \eqref{discret_periodic} is equivalent to the existence of a saddle point $(\bo{\tilde{Z}}^{n}_{\varepsilon}, \boldsymbol{\tilde{\lambda}}^{n}_{\varepsilon})$ satisfying \begin{equation}\label{Euler-Lagrange_periodic} \boldsymbol{\tilde{\lambda}}^{n}_{\varepsilon} \geq \boldsymbol{0}, \; \bo{\phi}^{n,\varepsilon}(\tilde{\boldsymbol{Z}}^{n}_{\varepsilon}) \leq \boldsymbol{0}, \; \boldsymbol{\tilde{\lambda}}^{n}_{\varepsilon} \cdot \boldsymbol{\phi}^{n,\varepsilon}(\boldsymbol{\tilde{Z}}^{n}_{\varepsilon})=0 \text{ and } (\boldsymbol{\mathcal{E}}_{n,\varepsilon})^{'}(\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}) + \sum_{i<j}\tilde{\lambda}^{n,\varepsilon}_{ij}(\phi^{n,\varepsilon}_{ij})^{'}(\boldsymbol{\tilde{Z}}^{n}_{\varepsilon}) = \boldsymbol{0}, \end{equation} where \begin{equation*} \boldsymbol{\phi}^{n,\varepsilon}(\boldsymbol{q}) := \left(\phi^{n,\varepsilon}_{ij}(\boldsymbol{q})\right)_{i<j} : \mathbb{R}^{2N_{p}} \longrightarrow \mathbb{R}^{N_{c}}. \end{equation*} Note that the periodic distance locally coincides with the one defined in \eqref{signed_distance}, in the sense that $d_{ij} = D_{ij}$ in $\mathcal{D}$, so the two distances share the same properties. The results obtained above with the usual distance (energy estimates, compactness criterion, variational inclusion, etc.) therefore carry over to the present setting.
\subsection{Numerical approximation and simulations} \subsubsection{Uzawa's algorithm} Note that, due to the assumptions on $F$ (see \ref{Assump}), the last equation in \eqref{KKTconditions_memoire} is nonlinear with respect to $\bo{Z}^{n}_{\varepsilon}$ at each time step $n$. This requires a nonlinear solver, such as Newton's method, to obtain the position from \eqref{KKTconditions_memoire} at time $t^n=n\Delta t$. To avoid the numerical cost of such an implementation, we replace the external load by a source term depending on the solution at the previous time step, so that the problem becomes linear with respect to the unknown position at each time step. More precisely, consider the following problem \begin{equation}\label{dicret_energy_uzawa} \boldsymbol{Z}^{n}_{\varepsilon} = \argmin_{\boldsymbol{q}\, \in \, \boldsymbol{K}(\boldsymbol{Z}^{n-1}_{\varepsilon})}\left\{ \dfrac{\Delta a}{2\varepsilon} \sum_{i=1}^{N_{p}} \sum_{l=1}^{\infty} |q_{i} - Z^{n-l}_{\varepsilon,i}|^{2} R_{l,i} + F(\boldsymbol{Z}^{n-1}_{\varepsilon}) + \boldsymbol{F}^{'}(\boldsymbol{Z}^{n-1}_{\varepsilon})\cdot(\bo{q} - \bo{Z}^{n-1}_{\varepsilon}) \right\}. \end{equation} We are tempted to use the projected-gradient descent algorithm to numerically approximate the solution of \eqref{dicret_energy_uzawa}, but a closed form of the projection onto $\bo{K}(\bo{Z}^{n-1}_{\varepsilon})$ is not at hand here. To overcome this, we pass to the dual problem, project the Lagrange multiplier onto $(\mathbb{R}_{+})^{N_{c}}$ by simple truncation, and iterate the process in the spirit of Uzawa's algorithm.
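Before stating the iteration precisely, the dual strategy just described can be sketched on a generic quadratic energy with linearized constraints $A\bo{q} - b \leq 0$ (the names, the stopping rule and the toy data below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def uzawa(z, f, A, b, alpha, eta, iters=5000):
    """Minimize (alpha/2)|q - z|^2 + f.q subject to A q <= b.
    The primal step solves the unconstrained quadratic problem exactly;
    the dual step projects the multiplier onto (R_+)^{N_c} by truncation."""
    lam = np.zeros(A.shape[0])
    q = np.array(z, dtype=float)
    for _ in range(iters):
        # stationarity of the Lagrangian: alpha (q - z) + f + A^T lam = 0
        q = z - (f + A.T @ lam) / alpha
        # projected dual ascent on the constraint violation A q - b
        lam = np.maximum(lam + eta * (A @ q - b), 0.0)
    return q, lam
```

For instance, projecting $z=(1,1)$ onto the half-plane $q_{1}+q_{2}\leq 1$ (with $f=0$, $\alpha=1$, $\eta=0.5$) returns $q=(0.5,0.5)$ with multiplier $\lambda=0.5$, and the sign and complementarity conditions hold at the limit.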
Precisely, we build a sequence of primal variables $\left(\bo{Z}^{n,r}_{\varepsilon}\right)_{r} \in \left(\mathbb{R}^{2N_{p}} \right)^{\mathbb{N}}$ and dual variables $(\bo{\lambda}^{n,r}_{\varepsilon})_{r} \in \left((\mathbb{R}_{+})^{N_{c}} \right)^{\mathbb{N}}$ as follows: \begin{equation*} \begin{cases} \bo{\lambda}^{n,0}_{\varepsilon} \in \left(\mathbb{R}_{+}\right)^{N_{c}} \vspace{0.5em} \\ \displaystyle{ L(\bo{Z}^{n,r}_{\varepsilon}, \bo{\lambda}^{n,r}_{\varepsilon}) = \inf_{\bo{q} \, \in \, \mathbb{R}^{2N_{p}}} L\left(\boldsymbol{q}, \boldsymbol{\lambda}^{n,r}_{\varepsilon}\right)} \vspace{0.5em} \\ \boldsymbol{\lambda}^{n,r+1}_{\varepsilon} = \max\Big( \boldsymbol{\lambda}^{n,r}_{\varepsilon} + \eta \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{Z}^{n,r}_{\varepsilon}), \boldsymbol{0} \Big), \end{cases} \end{equation*} which approximates the solution $\bo{Z}^{n}_{\varepsilon}$ of \eqref{dicret_energy_uzawa}. Here $L$ denotes the Lagrangian associated with \eqref{dicret_energy_uzawa}. \begin{Prop}\label{convergence_uzawa} If $0 < \eta < 2\alpha/(\varepsilon C^{2})$ with $ \alpha:= \mu_{\Delta}$ and $C:=\sqrt{2N_{c}}$, then Uzawa's algorithm converges: for any initial Lagrange multiplier $\bo{\lambda}^{n,0}_{\varepsilon}$, the sequence $\left(\bo{Z}^{n,r}_{\varepsilon} \right)_{r}$ converges to the solution of \eqref{dicret_energy_uzawa} as $r$ tends to infinity. \end{Prop} \begin{proof} Let us check the hypotheses of \cite[Theorem 10.5.9 p.343]{Allairel05}.
To this end: \begin{itemize} \item $E_{n,\varepsilon}$ is twice differentiable as the sum of a quadratic function and an affine function, and we have that \begin{equation*} E^{''}_{n,\varepsilon}(\boldsymbol{q}) = \mathrm{diag}\left( \alpha_{1},\alpha_{1},\cdots, \alpha_{N_{p}},\alpha_{N_{p}}\right), \quad \forall \bo{q} \in \mathbb{R}^{2N_{p}}, \end{equation*} where $\alpha_{i} = \dfrac{\mu_{\Delta,i}}{\varepsilon}$ for all $i$, so that $E_{n,\varepsilon}$ is uniformly convex with constant $\alpha_{m}:= \min_{i}\alpha_{i}$. \item $\boldsymbol{\varphi}^{n,\varepsilon}$ is convex and Lipschitz from $\mathbb{R}^{2N_{p}}$ into $\mathbb{R}^{N_{c}}$. The convexity is obvious.\\ To prove the Lipschitz property, consider $\boldsymbol{\overline{q}} , \boldsymbol{\tilde{q}} \in \mathbb{R}^{2N_{p}}$. We have \begin{eqnarray*} \big| \varphi^{n,\varepsilon}_{ij}(\bo{\tilde{q}}) - \varphi^{n,\varepsilon}_{ij}(\bo{\overline{q}})\big| & = & \big| \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\bo{\tilde{q}}-\bo{Z}^{n-1}_{\varepsilon})- \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot(\bo{\overline{q}} - \bo{Z}^{n-1}_{\varepsilon}) \big| \\ & = & \big| \bo{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\cdot (\bo{\overline{q}} - \bo{\tilde{q}}) \big|\\ & \leq & \sqrt{2}\big| \bo{\tilde{q}} - \bo{\overline{q}} \big|, \quad \forall i<j, \end{eqnarray*} where we used the fact that $\left|\boldsymbol{G}_{ij}(\bo{Z}^{n-1}_{\varepsilon})\right| = \sqrt{2}$ for all $i<j$ in the last line. Thus \begin{equation*} |\boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{\tilde{q}}) - \boldsymbol{\varphi}^{n,\varepsilon}(\boldsymbol{\overline{q}})|= \sqrt{\sum_{1\leq i<j \leq N_{p}} \big| \varphi^{n,\varepsilon}_{ij}(\boldsymbol{\tilde{q}}) - \varphi^{n,\varepsilon}_{ij}(\boldsymbol{\overline{q}})\big|^{2}} \leq \sqrt{2N_{c}}\big|\tilde{\bo{q}} - \overline{\bo{q}}\big|. \end{equation*} \end{itemize} Hence $\boldsymbol{\varphi}^{n,\varepsilon}$ is $C$-Lipschitz with $C=\sqrt{2N_{c}}$, which ends the proof.
\end{proof} \subsubsection{Numerical simulations} We consider here the quadratic case, namely the external load reads $F(\bo{q}) = \frac{1}{2}|\bo{q}|^{2}$. We expect the cells to follow the descent direction and eventually cluster at the origin. Uzawa's algorithm is performed to get the position at each time step. We start from a given deterministic initial configuration $\bo{z}_{p}$. We estimate the MSD\footnote{\label{msd-estim} $MSD(t) = \langle |\bo{z}(t) - \bo{z}_{ref}|^{2}\rangle= \dfrac{1}{N_{p}}\sum_{i=1}^{N_{p}}|z_{i}(t) - z_{ref,i}|^{2}$} (Mean Squared Displacement), which measures the deviation of the particles' positions with respect to a reference position $\bo{z}_{ref}=\bo{0}$. We compare the MSDs obtained with and without contact forces and plot the theoretical MSD in the same figure. To do so, we first consider the deterministic case without contact forces, whose explicit solution is at hand; we then perturb it by adding a Gaussian white noise to the system. \begin{itemize} \item Consider the following contact-free model, whose dynamics read \begin{equation}\label{EDO} \begin{cases} \boldsymbol{\dot z} = -\nu \bo{z}, \quad t > 0\\ \bo{z}(0) = \bo{z}_{p}(0), \end{cases} \end{equation} where $\nu>0$ can be seen as the inverse of the viscosity coefficient in our friction model. Figure \ref{fig-msd-deterministic} displays the curves of the global deviation with respect to the origin with and without contacts (top) and the curve of the average activation of the Lagrange multipliers (bottom); see \eqref{activation} below.
\begin{figure}[!ht] \centering \begin{tikzpicture} \begin{axis}[ width=12cm, height=6cm, ylabel={msd ($m^{2}$)}, grid=major, legend style={at={(0.98,0.98)},anchor=north east}, title={Analytical and estimated MSDs}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[orange, thick, line width=2pt] table[x index=0, y index=1] {points_deterministic.txt}; \addlegendentry{With contacts (simulation)} \addplot[blue, thick, line width=2pt] table[x index=0, y index=2] {points_deterministic.txt}; \addlegendentry{No contacts (simulation)} \addplot[red, dotted, thick, line width=2pt] table[x index=0, y index=3] {points_deterministic.txt}; \addlegendentry{$t \mapsto |\bo{z}_{p}|^{2} e^{-2t}$} \end{axis} \begin{axis}[ at={(0,-1.05cm)}, anchor=north west, width=12cm, height=6cm, xlabel={Time (s)}, ylabel={Activ$(t)$}, grid=major, title={Average activation}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[magenta, dotted, thick, line width=2pt] table[x index=0, y index=4] {points_deterministic.txt}; \end{axis} \end{tikzpicture} \caption{Deterministic MSDs with respect to $\bo{0}$ (top) and the average activation of multipliers (bottom).} \label{fig-msd-deterministic} \end{figure} In the top panel, the global deviation starts from $16\,m^{2}$ at $t=0$, decreases, and eventually follows horizontal asymptotes ($H=0$ for the red and blue curves and $H\simeq 3$ for the orange one). This is what we expect in the absence of contact forces: without contacts (blue curve), each particle may be represented by its center alone, as a simple dot without radius, which allows particles to overlap, and the source term attracts the particles towards the origin. The orange curve in Figure \ref{fig-msd-deterministic} represents the estimated MSD in the case of contacts between spheres. We remark that from $t=0\,s$ to $t \simeq 0.35\,s$ this curve coincides with the one without contact forces.
Indeed, as long as the particles are not in contact, the Lagrange multipliers are not activated, so that the particles' trajectories are driven only by the external load. Once the signed distance vanishes (from $t\simeq 0.35\,s$ to $t=5\,s$), the contact forces are active (see \eqref{KKTconditions_memoireCont}) and the trajectories are no longer governed by the external load alone. The contacts induce a reduction of the velocity, and the decay of the mean squared displacement is no longer the same; this is illustrated by the change in the shape of the orange curve around $t = 0.35\,s$. The particles rearrange and move toward the origin, increasing the number of contacts. As high congestion occurs, the particles move only very slightly and eventually become motionless around the origin. This jamming leads to a complete steady state. The bottom pink curve in Figure \ref{fig-msd-deterministic} represents the average activation of the Lagrange multipliers over time, defined as \begin{equation}\label{activation} Activ(t):= \dfrac{2}{N_{p}(N_{p}-1)} \sum_{1 \leq i < j \leq N_{p}}\mathbb{1}_{\{\lambda^{\varepsilon}_{ij}(t)\neq 0\}}. \end{equation} The activation in \eqref{activation} is, by definition, the cardinality of the set of active constraints $Ind(\bo{z}_{\varepsilon})$ defined above (see Definition \ref{qualified_memoire}) divided by the maximal number of constraints; in other words, the average activation is the ratio of the number of active constraints to the maximal number of constraints. Moreover, by definition of the Lagrange multipliers we have that \begin{equation*} supp(\lambda^{\varepsilon}_{ij}) \subset \left\{ t \geq 0 \text{ s.t. } D_{ij}(\bo{z}_{\varepsilon}(t)) = 0 \right\}, \quad \forall i< j, \end{equation*} so that the multipliers are activated once the signed distance vanishes. Here (bottom curve of Figure \ref{fig-msd-deterministic}), the Lagrange multipliers are inactive for small times, since there are no contacts at the beginning.
The jump at $t \simeq 0.35\,s$ is due to the fact that some particles $i$ and $j$ come into contact, so that the corresponding Lagrange multipliers become positive. After this first jump, the average activation of the multipliers remains constant, equal to $0.15$, for less than one second, while the particles in contact rearrange until they reach a steady state. \item Consider now the stochastic model in which a random perturbation is added to the previous model. We obtain the Ornstein-Uhlenbeck process \begin{equation}\label{EDS} \begin{cases} \bo{\dot z} = -\nu \bo{z} + \sigma \bo{\eta}_{t}, \quad t > 0\\ \bo{z}_{0} = \bo{z}_{p}(0), \end{cases} \end{equation} where $(\bo{\eta}_{t})_{t\geq 0}$ denotes the $\mathbb{R}^{2N_{p}}$-valued Gaussian white noise. The explicit solution of \eqref{EDS} as well as its second order moment are given in Appendix \ref{annexeC}. We compare the approximated MSD computed using the solutions of our algorithm with the theoretical value at each time step in Figure \ref{fig-msd}. \begin{figure}[!ht] \centering \begin{tikzpicture} \begin{axis}[ width=12cm, height=6cm, ylabel={msd ($m^{2}$)}, grid=major, legend style={at={(0.98,0.98)},anchor=north east}, title={Analytical and estimated MSDs}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[orange, thick, line width=2pt] table[x index=0, y index=1] {points_stochastic.txt}; \addlegendentry{With contacts (simulation)} \addplot[blue, thick, line width=2pt] table[x index=0, y index=2] {points_stochastic.txt}; \addlegendentry{No contacts (simulation)} \addplot[red, dotted, thick, line width=2pt] table[x index=0, y index=3] {points_stochastic.txt}; \addlegendentry{$t \mapsto |\bo{z}_{p}|^{2} e^{-2t} + \frac{1}{2}(1-e^{-2t})$} \end{axis} \begin{axis}[ at={(0,-1.05cm)}, anchor=north west, width=12cm, height=6cm, xlabel={Time (s)}, ylabel={Activ$(t)$}, grid=major, title={Average activation}, title style={at={(0.5,0.95)}, anchor=south}, ] \addplot[magenta, dotted, thick, line width=1.5pt] table[x index=0, y index=4]
{points_stochastic.txt}; \end{axis} \end{tikzpicture} \caption{Stochastic MSDs with respect to $\bo{0}$ (top) and the average activation of the multipliers (bottom).} \label{fig-msd} \end{figure} We observe trends similar to the deterministic case: the deviation decreases exponentially from $16\,m^{2}$ and ends up following horizontal asymptotes ($H=1/2$ for the red and blue curves and $H \simeq 4$ for the orange curve) for large times. Indeed, by Appendix \ref{annexeC}, we have that \begin{equation*} \lim_{t \to 0}\mathbb{E}|\bo{z}_{t}|^{2} = |\bo{z}_{p}(0)|^{2} \text{ and } \lim_{t \to \infty} \mathbb{E}|\bo{z}_{t}|^{2} = \dfrac{1}{2}, \end{equation*} so that, in contrast with the deterministic case, the particles never cluster at the origin, even without contacts. The red curve represents the second order moment \eqref{analytic-msd} of the trajectory when particles do not interact. \end{itemize} \section{Conclusions} In this paper we dealt with non-penetration models with adhesion forces. The position of the cells at each time minimizes a convex energy functional under non-penetration constraints. The energy contains two terms: a delay part and the external load. By penalizing the constraints and letting the penalty parameter go to zero, we prove that the solution of the constrained problem exists and is unique. We obtain energy estimates and use the convexity of the constraints and of the external load to obtain compactness. We then apply the Arzel\`a--Ascoli theorem to obtain existence for the continuous problem for a fixed $\varepsilon>0$. Finally, we prove that, as the characteristic lifetime of the bonds tends to zero, our model converges to the model investigated in \cite{venel08}, weighted by friction coefficients related to the microscopic adhesions. The same results hold on the torus; the periodic distance is then considered instead of the signed distance defined by the Euclidean norm. Numerical simulations are performed to validate the mathematical analysis.
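For reference, the stochastic experiment of the previous section can be reproduced in a few lines: an Euler--Maruyama discretization of \eqref{EDS} without contacts, averaged over independent realizations, recovers the analytic MSD of Appendix \ref{annexeC}. This is a sketch for a scalar component with $\nu=\sigma=1$; the step size, number of paths and seed are illustrative choices, not those used for the figures above.

```python
import numpy as np

def ou_msd(z0, T, dt=1e-3, nu=1.0, sigma=1.0, n_paths=10000, seed=0):
    """Euler-Maruyama scheme for dz = -nu z dt + sigma dW started at z0;
    returns the empirical mean of z_T^2 over n_paths realizations."""
    rng = np.random.default_rng(seed)
    z = np.full(n_paths, float(z0))
    for _ in range(int(round(T / dt))):
        z += -nu * z * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.mean(z**2))

def analytic_msd(t, z0):
    # E|z_t|^2 = z0^2 e^{-2t} + (1 - e^{-2t})/2, cf. (analytic-msd), nu = sigma = 1
    return z0**2 * np.exp(-2.0 * t) + 0.5 * (1.0 - np.exp(-2.0 * t))
```

With $\sigma=0$ the scheme reduces to the deterministic decay $z_{0}^{2}e^{-2t}$ of \eqref{EDO}, and for large $t$ the empirical value approaches the limit $1/2$ quoted above.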
\begin{appendices} \section{Theoretical background of contact modelling}\label{annexeA} \begin{Def}\label{coercive} Let $n \in \mathbb{N}^{\ast}$ and let $J: \mathbb{R}^{n} \to \mathbb{R}$ be a real-valued function. $J$ is called coercive if $ J(v) \longrightarrow \infty$ as $|v| \to \infty$. \end{Def} \begin{Theo}\cite{Ciarlet89}\label{ciarl} Let $J : \mathbb{R}^{n} \to \mathbb{R}$ be a continuous, coercive and strictly convex function, $U$ a nonempty, convex and closed subset of $\mathbb{R}^{n}$ and $\psi: \mathbb{R}^{n} \to \mathbb{R}$ a continuous, convex function satisfying \begin{equation}\label{eq.equiv.U.Phi} \psi(v) \geq 0 \text{ for every } v \in \mathbb{R}^{n}, \text{ and } \psi(v) = 0 \iff v \in U. \end{equation} Then for every $\delta > 0$ there exists a unique element $u_{\delta}$ satisfying \begin{equation*} u_{\delta} \in \mathbb{R}^{n} \text{ and } J_{\delta}(u_{\delta}) = \inf_{v \in \mathbb{R}^{n}} J_{\delta}(v), \quad J_{\delta}(v) := J(v) + \dfrac{1}{\delta} \psi(v). \end{equation*} Moreover, $u_{\delta} \to u$ as $\delta$ goes to $0$, where $u$ is the unique solution of \begin{equation*} \begin{cases} \text{ find } u \text{ such that }\\ \displaystyle{u \in U,\quad J(u) = \inf_{v \in U}J(v)}. \end{cases} \end{equation*} \end{Theo} \begin{Theo}\cite{Allairel05}\label{kkt_cond} Let $V$ be a Hilbert space, let $M \in \mathbb{N}^{\ast}$ and let $K$ be defined by \begin{equation*} K = \left\{ v \in V: F_{i}(v) \leq 0, \; \forall\, 1\leq i \leq M \right\}. \end{equation*} Assume that $J$ and $F_{1},\cdots, F_{M}$ are convex and continuous on $V$ and differentiable on $K$, and define the associated Lagrangian \begin{equation*} \mathcal{L}(v,q) = J(v) + q\cdot F(v), \quad \forall (v,q) \in V \times (\mathbb{R}_{+})^{M}. \end{equation*} Let $u$ be a point at which the constraints are qualified.
Then $u$ is a global minimum of $J$ if and only if there exists $p \in (\mathbb{R}_{+})^{M}$ such that $(u,p)$ is a saddle point of $\mathcal{L}$ on $V \times (\mathbb{R}_{+})^{M}$, or equivalently such that \begin{equation*} F(u)\leq 0, \; p \geq 0,\; p\cdot F(u) = 0, \; J^{'}(u) + \sum_{i=1}^{M} p_{i}F_{i}^{'}(u) = 0, \end{equation*} where $F = (F_{1},\cdots,F_{M})$. \end{Theo} \begin{Def}\cite{venel08} Let $H$ be a Hilbert space, let $S \subset H$ be a closed nonempty subset and let $x \in H$. We define: \begin{itemize} \item the set of nearest points of $x$ in $S$ \begin{equation*} P_{S}(x) := \left\lbrace v \in S: \, d_{S}(x) = |x-v| \right\rbrace, \quad d_{S}(x) := \inf_{u \in S}\left|x-u\right|, \end{equation*} \item the proximal normal cone to $S$ at $x$ \begin{equation*} N^{P}(S,x) := \left\{ v \in H: \exists \alpha > 0, \; x \in P_{S}(x + \alpha v) \right\}, \end{equation*} \item the limiting normal cone to $S$ at $x$ \begin{equation*} N^{L}(S,x) := \left\{ v \in H: \exists (x_{n}) \in S^{\mathbb{N}}, \exists (v_{n}) \text{ with } v_{n} \in N^{P}(S,x_{n}), \text{ s.t. } x_{n} \to x, \; v_{n} \rightharpoonup v \right\}. \end{equation*} \end{itemize} \end{Def} Note that if $S$ is convex, the proximal normal cone coincides with the outward normal cone $N(S,x)$ to $S$ at $x$, in which case $x = P_{S}(x+\alpha v)$ holds for every $\alpha > 0$ in the definition of $N^{P}(S,x)$. \begin{Def}\cite{JeanFenel06} Let $S \subset H$ be a nonempty closed subset of a Hilbert space $H$. For a fixed $\eta>0$, we say that $S$ is $\eta$-prox-regular (or uniformly prox-regular) if for any $x \in S$ and any $v \in N^{L}(S,x)$ with $|v| < 1$, one has $x \in P_{S}(x+\eta v)$. \end{Def} \begin{Prop}\label{prox-reg-char} \cite{JeanFenel06} Let $S$ be a closed nonempty subset of a Hilbert space $H$.
$S$ is $\eta$-prox-regular if and only if every nonzero proximal normal $v \in N^{L}(S,x)$ can be realized by an $\eta$-ball, that is, for all $x \in S$ and all $v \in N(S,x)\setminus \{ 0\}$, $$S\cap B\left(x+\frac{\eta}{|v|}v, \eta \right) = \emptyset.$$ In other words, for any $x \in S$ and $v \in N(S,x)$, \begin{equation*} \langle v, y-x \rangle \leq \dfrac{|v|}{2\eta} \left|y-x\right|^{2}, \quad \forall y \in S. \end{equation*} Furthermore, $S$ is convex if and only if it is $\infty$-prox-regular. \end{Prop} \begin{Theo}\cite{venel08} \label{constant-prox-reg} The set of admissible configurations $\boldsymbol{Q}_{0}$ is $\eta$-prox-regular, where \begin{equation} \eta = \dfrac{1}{N_{p}n_{n}}\left( \dfrac{\min\left(\sin\left(\dfrac{\pi}{n_{n}+1}\right), \sin\left(\dfrac{2\pi}{N_{p}}\right)\right)}{2\sqrt{n_{n}}} \right)^{N_{p}}\min_{i,j}(r_{i}+r_{j}), \end{equation} and $n_{n}$ is the maximal number of neighbors that a particle can have. \end{Theo} \begin{Lemma}[page 90 in \cite{venel08}] \label{equivalences} Let $S$ be a convex and closed subset of a Hilbert space $H$.
Let $x \in S$ and $\omega \in H$. We have the following equivalences: \begin{eqnarray} \omega \in N(S,x) & \overset{ \text{ def } }{\Leftrightarrow} & x = P_{S}(x+\omega) \\ & \Leftrightarrow & \forall y \in S, \quad \langle \omega, y-x \rangle \leq 0 \\ & \Leftrightarrow & \forall y \in H, \quad \langle \omega, y-x \rangle \leq |\omega| d_{S}(y) \\ & \Leftrightarrow & \forall \xi \in H, \quad \langle \omega, \xi \rangle \leq |\omega| d_{S}(\xi+x) \\ & \Leftrightarrow & \exists \eta > 0, \forall v \in H, |v| < \eta \quad \langle \omega, v \rangle \leq |\omega| d_{S}(v+x) \\ & \Leftrightarrow & \exists k > 0, \exists \eta > 0, \forall v \in H, |v| < \eta \quad \langle \omega, v \rangle \leq k d_{S}(v+x) \end{eqnarray} \end{Lemma} \begin{Prop}[page 76 in \cite{venel08}]\label{convergenceofprojection} Let $\boldsymbol{q}\in \boldsymbol{Q}_{0}$ and let $(\boldsymbol{q}_{\Delta_{m}})$ be a sequence in $\boldsymbol{Q}_{0}$ satisfying $\boldsymbol{q}_{\Delta_{m}} \to \boldsymbol{q}$. For any $\textbf{z} \in \mathbb{R}^{2N_{p}}$ denote $\boldsymbol{p}:=P_{K(\boldsymbol{q})}(\textbf{z})$ and $\boldsymbol{p}_{\Delta_{m}}:=P_{K(\boldsymbol{q}_{\Delta_{m}})}(\textbf{z})$. There exists $\nu>0$ such that for any $\textbf{z} \in \boldsymbol{B}( \textbf{q},\nu)$ we have $\boldsymbol{p}_{\Delta_{m}} \to \boldsymbol{p}$. In particular, $d_{K(\boldsymbol{q}_{\Delta_{m}})}(\textbf{z}) \longrightarrow d_{K(\boldsymbol{q})}(\textbf{z})$ as $m$ goes to infinity. \end{Prop} \begin{Def}\cite{Haim11} Let $(E, ||\cdot||_{E})$ be a normed vector space and let $A \subset E$ be a subset of $E$. The convex hull of $A$, denoted here by $conv(A)$, is the intersection of all convex subsets of $E$ containing $A$. \end{Def} \begin{Theo}\label{Mazur} \cite{Haim11} Let $(E, ||\cdot||_{E})$ be a normed vector space and let $(x_{n})_{n}$ be a sequence that weakly converges to $x$ in $E$.
There exists a sequence $(y_{n})_{n}$ of convex combinations of the terms of $\left(x_{n}\right)_{n}$, \begin{equation*} y_{n} = \sum_{k=n}^{N(n)}\lambda_{n,k}x_{k} \text{ with } \lambda_{n,k} \geq 0 \text{ and } \sum_{k=n}^{N(n)} \lambda_{n,k} = 1 \end{equation*} (for some $N:\mathbb{N}\to \mathbb{N}$), that converges strongly to $x$, i.e. \begin{equation*} ||y_{n} - x ||_{E} \to 0. \end{equation*} \end{Theo} \section{Mean Squared Displacement and Ornstein-Uhlenbeck process}\label{annexeC} Here we recall some properties of the Ornstein-Uhlenbeck process (in the absence of contact forces), for which we compute the explicit formula of its MSD. To this end consider equation \eqref{EDS} with $\nu = \sigma = 1$. By the variation of constants method we have \begin{equation*} \bo{z}_{t} = \bo{z}_{p}(0)e^{-t} + \int_{0}^{t}e^{-(t-r)}\eta_{r}dr. \end{equation*} Since the white noise has zero mean, we have that \begin{equation*} \mathbb{E}(\bo{z}_{t}) = \bo{z}_{p}(0)e^{-t}. \end{equation*} On the other hand, for any $t,s \geq 0$, setting $t \wedge s := \min(t,s)$, we have \begin{eqnarray*} \mathbb{E}[\bo{z}_{t}\cdot \bo{z}_{s}] & = & |\bo{z}_{p}(0)|^{2}e^{-(t+s)} + \mathbb{E}\left[ \left(\int_{0}^{t} e^{-(t-r_{1})}\eta_{r_{1}}dr_{1}\right)\cdot \left(\int_{0}^{s} e^{-(s-r_{2})}\eta_{r_{2}}dr_{2}\right) \right] \\ & = & |\bo{z}_{p}(0)|^{2}e^{-(t+s)} + \int_{0}^{t \wedge s } e^{-(t+s-2r)}dr, \end{eqnarray*} where we used It\^{o}'s isometry in the second equality. Thus for $s=t$ we have \begin{eqnarray*} \mathbb{E}|\bo{z}_{t}|^{2} & = & |\bo{z}_{p}(0)|^{2}e^{-2 t} + e^{-2 t} \int_{0}^{t}e^{2r}dr, \end{eqnarray*} which gives the explicit form \begin{eqnarray}\label{analytic-msd} \mathbb{E}|\bo{z}_{t}|^{2} & = & |\bo{z}_{p}(0)|^{2}e^{-2t} + \dfrac{1}{2}\left( 1 - e^{-2t}\right).
\end{eqnarray} \end{appendices} \section*{Use of AI tools declaration} The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article. \section*{Acknowledgments} \section*{Conflict of interest} \bibliography{biblio} \bibliographystyle{alpha} \end{document} \begin{Theo} Assume that hypotheses \ref{Assump} (i)-(iii) hold. There exists a unique function $\textbf{z}_{0} \in C((0,T],\mathbb{R}^{2N_{p}})$ such that $\left(\textbf{z}_{\varepsilon_{m}}\right)_{m}$ converges uniformly to $\textbf{z}_{0}$ as $m \to \infty$. Moreover the limit function $\textbf{z}_{0}$ satisfies the differential inclusion \begin{equation}\label{inclusion_limite} \begin{cases} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}+\bo{F}^{'}(\boldsymbol{z}_{0}) \in -N\left(\bo{K}(\bo{z}_{0}),\bo{z}_{0}\right) \text{ a.e. } t \in (0,T] \vspace{0.5em} \\ \boldsymbol{z}_{0}(0) = \boldsymbol{z}_{p}(0), \end{cases} \end{equation} where \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0} = (\mu_{1,i}\partial_{t}z_{0,i})_{i=1,\cdots,N_{p}} \text{ and } \mu_{1,i}:= \int_{0}^{\infty}a\rho_{i}(a)da \in \mathbb{R}, \quad \forall i. \end{equation*} \end{Theo} \begin{proof} We use the same extraction--convergence method as in the proof of Theorem \ref{thm_conv}. \subsubsection*{Extraction of a subsequence} Since $\bo{z}_{\varepsilon}$ is bounded and equicontinuous (as the limit of the bounded and equicontinuous functions $\bo{z}_{\varepsilon,\Delta}$), we can apply the Arzel\`a--Ascoli theorem to the sequence $\left(\bo{z}_{\varepsilon_{m}}\right)_{m}$: there exists a subsequence, still denoted by $\left( \bo{z}_{\varepsilon_{m}} \right)_{m}$, which converges uniformly on $(0,T]$.
Let $\bo{z}_{0}$ denote the limit function, i.e. $\bo{z}_{0}(t) = \lim_{m\to \infty }\bo{z}_{\varepsilon_{m}}(t), \quad t \in (0,T]$. \subsection*{The convergence of $\boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\textbf{z}_{\varepsilon_{m}}]+F^{'}(\textbf{z}_{\varepsilon_{m}})$.} Since the delay operator $\boldsymbol{\mathcal{L}}_{\varepsilon}$ is bounded in $L^{\infty}((0,T],\mathbb{R}^{2N_{p}})$ (see the proof of Theorem \ref{thm-exist-uniq}), there exists a subsequence denoted by $\left( \mathcal{L}_{\varepsilon_{m}} \right)_{m}$ that weakly-$*$ converges, i.e. \begin{equation*} \dfrac{1}{\varepsilon_{m}}\int_{0}^{\infty}\left(z_{\varepsilon_{m},i}(t) - z_{\varepsilon_{m},i}(t-\varepsilon_{m} a)\right)\rho_{i}(a)da \overset{*}{\rightharpoonup}\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da \text{ in } L^{\infty}((0,T], \mathbb{R}^{2}), \quad \forall i, \end{equation*} which by duality with $L^{1}((0,T], \mathbb{R}^{2})$ gives \begin{equation*} \dfrac{1}{\varepsilon_{m}}\int_{0}^{\infty}\left(z_{\varepsilon_{m},i}(t) - z_{\varepsilon_{m},i}(t-\varepsilon_{m} a)\right)\rho_{i}(a)da \rightharpoonup\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da,\quad \text{ a.e. } t \in (0,T]. \end{equation*} Moreover, since $\bo{z}_{\varepsilon_{m}}$ is bounded in $L^{\infty}((0,T])$ and $F$ is continuously differentiable, $\boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}})$ is bounded. Therefore \begin{equation}\label{weak_convergence} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}](t) + F_{i}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \rightharpoonup\partial_{t}z_{0,i}(t)\int_{0}^{\infty}a\rho_{i}(a)da + F_{i}^{'}(\bo{z}_{0}(t)) \text{ in } L^{1}((0,T],\mathbb{R}^{2}), \quad i=1,\cdots,N_{p}, \end{equation} which guarantees that the global operator converges as well, i.e.
\begin{equation*} \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) {\rightharpoonup}\, \boldsymbol{\mu}_{1} \partial_{t}\bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t))=:\boldsymbol{\pi}_{0}(t). \end{equation*} \subsection*{The weak limit satisfies the differential inclusion $\boldsymbol{\pi}_{0} \in -N\left( \bo{K}(\bo{z}_{0}),\bo{z}_{0} \right)$.} Consider the vector space $L^{1}([0,T],\mathbb{R}^{2N_{p}})$ endowed with its natural norm. We have that $\boldsymbol{\pi}_{\varepsilon_{m}}(t)\rightharpoonup \boldsymbol{\pi}_{0}(t)$, where we remind that $\boldsymbol{\pi}_{\varepsilon}:= \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon})$. Since the sequence $\left(\boldsymbol{\pi}_{\varepsilon_{m}} \right)_{m}$ weakly converges, there exists, by the Mazur lemma, a sequence $(\boldsymbol{y}_{\varepsilon_{m}})_{m}$ such that \begin{equation*} \boldsymbol{y}_{\varepsilon_{m}} \in Conv \left( \boldsymbol{\pi}_{\varepsilon_{m'}} ,\, \forall \, m'\geq m \right) \text{ and } ||\boldsymbol{y}_{\varepsilon_{m}} - \boldsymbol{\pi}_{0}||_{L^{1}((0,T], \mathbb{R}^{2N_{p}})} \underset{m \to \infty}{\longrightarrow} 0; \end{equation*} i.e. for almost every $t\in (0,T]$ \begin{equation*} \boldsymbol{y}_{\varepsilon_{m}} \in Conv \left( \boldsymbol{\pi}_{\varepsilon_{m'}}, \forall\, m'>m \right) \text{ and } \boldsymbol{y}_{\varepsilon_{m}}(t) \underset{m \to \infty}{\longrightarrow} \boldsymbol{\mu}_{1} \partial_{t}\boldsymbol{z}_{0}(t) + \bo{F}^{'}(\boldsymbol{z}_{0}(t)). \end{equation*} It remains to prove that \begin{equation*} \boldsymbol{\pi}_{0} \in -N\left(\boldsymbol{K}(\bo{z}_{0}), \bo{z}_{0}\right), \text{ a.e. } t \in (0,T].
\end{equation*} Indeed, since for any $m > 0$, \begin{equation}\label{legacy} \displaystyle{\boldsymbol{\pi}_{\varepsilon_{m}} \in - N\left( \boldsymbol{K}(\bo{z}_{\varepsilon_{m}}),\bo{z}_{\varepsilon_{m}} \right)}, \, \text{ a.e. in } (0,T], \end{equation} we have \begin{equation*} \langle \boldsymbol{y}_{\varepsilon_{m}}, \boldsymbol{\xi} \rangle \leq \sup_{m' \geq m} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m'}} [\bo{z}_{\varepsilon_{m'}}] + \bo{F}^{'}(\bo{z}_{\varepsilon_{m'}}), \boldsymbol{\xi} \rangle, \quad \forall \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{equation*} Taking the limit $m \to \infty$ and using $\boldsymbol{y}_{\varepsilon_{m}}(t) \longrightarrow \bo{\pi}_{0}(t)$, this gives \begin{equation*} \langle \boldsymbol{\pi}_{0}, \boldsymbol{\xi} \rangle \leq \limsup_{m} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}} \left[\bo{z}_{\varepsilon_{m}}\right] + \bo{F}^{'}(\bo{z}_{\varepsilon_{m}}), \boldsymbol{\xi} \rangle, \quad \forall \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}.
\end{equation*} On the other hand, by Lemma \ref{equivalences}, equation \eqref{conDiff} is equivalent to \begin{equation*} \forall \, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \, \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \bo{F}^{'}(\boldsymbol{z}_{\varepsilon_{m}}(t)), \boldsymbol{\xi} \rangle \leq \left| \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) +\bo{ F}^{'}(\boldsymbol{z}_{\varepsilon_{m}}(t)) \right| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}(\boldsymbol{\xi} + \boldsymbol{z}_{\varepsilon_{m}}(t)). \end{equation*} Replacing $\boldsymbol{\xi}$ by $- \boldsymbol{\xi}$ in the latter inequality yields \begin{equation*} \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \bo{F}^{'}(\bo{z}_{\varepsilon_{m}}(t)),- \boldsymbol{\xi} \rangle \leq \left| \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) +\bo{F} ^{'}(\bo{z}_{\varepsilon_{m}}(t)) \right| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}(- \boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t)). \end{equation*} By the same arguments as earlier on the delay operator and the source term, there exists a constant $M$ independent of $\varepsilon_{m}$ such that \begin{equation*} |\boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}}(t))| \leq M, \end{equation*} so that \begin{equation*} \forall \bo{\xi} \in \mathbb{R}^{2N_{p}}, \quad \langle \boldsymbol{\mathcal{L}}_{\varepsilon_{m}}[\bo{z}_{\varepsilon_{m}}](t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon_{m}}(t)), \boldsymbol{\xi} \rangle \leq 2M d_{\bo{K}(\bo{z}_{\varepsilon_{m}}(t))}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t) \right).
\end{equation*} Let us check that \begin{equation*} d_{\varepsilon_{m}}(t) := \left| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t) \right) - d_{\bo{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \longrightarrow 0, \quad \forall t \in (0,T]. \end{equation*} Indeed, using the fact that $\boldsymbol{q} \mapsto d_{\boldsymbol{K}(\bo{z}_{0})}(\boldsymbol{q})$ is $1$-Lipschitz, \begin{eqnarray*} d_{\varepsilon_{m}}(t) & \leq & \left| d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{\varepsilon_{m}}(t)\right) - d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| + \left| d_{\bo{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right) - d_{\boldsymbol{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \\ \\ & \leq & \left| \bo{z}_{\varepsilon_{m}}(t) - \bo{z}_{0}(t) \right| + \left| d_{\boldsymbol{K}(\bo{z}_{\varepsilon_{m}})}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right) - d_{\boldsymbol{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right|. \end{eqnarray*} In the same way as in Proposition \ref{convergenceofprojection}, there exists $\nu >0$ such that for all $\boldsymbol{\xi}$ with $|\boldsymbol{\xi}| \leq \nu$, \begin{equation*} \left| d_{\textbf{K}(\bo{z}_{\varepsilon_{m}})}\left( -\bo{\xi} + \bo{z}_{0}(t) \right) - d_{\bo{K}(\bo{z}_{0})} \left(-\boldsymbol{\xi} + \bo{z}_{0}(t) \right) \right| \longrightarrow 0.
\end{equation*} Taking the limit as $m \to \infty$, we obtain \begin{equation*} \forall\, \boldsymbol{\xi}\in \mathbb{R}^{2N_{p}}, \, |\boldsymbol{\xi}| \leq \nu, \, \langle \boldsymbol{\mu}_{1}\partial_{t}\bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t)), \boldsymbol{\xi} \rangle \leq 2M d_{\bo{K}(\bo{z}_{0}(t))}\left( -\boldsymbol{\xi} + \bo{z}_{0}(t) \right), \end{equation*} which, by the fifth equation in Lemma \ref{equivalences}, is finally equivalent to \begin{equation*} \boldsymbol{\mu}_{1}\partial_{t} \bo{z}_{0}(t) + \boldsymbol{F}^{'}(\bo{z}_{0}(t)) \in - N\left( \boldsymbol{K}(\bo{z}_{0}), \bo{z}_{0}(t) \right), \quad \text{a.e. } t \in (0,T], \end{equation*} which ends the proof. \end{proof} \begin{Theo}\label{thm_conv} Let $\varepsilon >0$ be fixed. If the assumptions \ref{Assump} (i)-(iii) hold, then the piecewise constant function $\bo{z}_{\varepsilon,\Delta}$ converges uniformly in $L^{\infty}\left([0,T];\boldsymbol{Q}_{0} \right)$ as $\Delta a \to 0$. Moreover the limit function, denoted by $\textbf{z}_{\varepsilon}$, satisfies \begin{equation}\label{conDiff} \begin{cases} \displaystyle{ \boldsymbol{\mathcal{L}}_ {\varepsilon}[\textbf{z}_{\varepsilon}](t) + \boldsymbol{F}^{'}(\boldsymbol{z}_{\varepsilon}(t)) \in -N(\boldsymbol{Q}_{0}, \textbf{z}_{\varepsilon}(t)), \, t > 0}, \vspace{0.5em} \\ \bo{z}_{\varepsilon}(t) = \bo{z}_{p}(t), \; t \leq 0, \end{cases} \end{equation} where $\boldsymbol{\mathcal{L}}_{\varepsilon}(t)=\left(\mathcal{L}_{\varepsilon,1}(t),\cdots, \mathcal{L}_{\varepsilon,N_{p}}(t) \right)$ and for any particle \begin{equation*} \mathcal{L}_{\varepsilon,i}\left[\textbf{z}_{\varepsilon}\right](t):= \displaystyle{\dfrac{1}{\varepsilon}\int_{0}^{\infty}\left(z_{\varepsilon,i}(t) - z_{\varepsilon,i}(t-\varepsilon a) \right)\rho_{i}(a)da}.
\end{equation*} \end{Theo} \begin{proof} We use the same arguments as in \cite{JeanFenel06}: extraction-convergence and inclusion.\\ To this end we use the Arzel\`a-Ascoli theorem to prove the extraction and convergence steps. \begin{itemize} \item The piecewise constant function $\bo{z}_{\varepsilon,\Delta}$ is equicontinuous on $[0,T]$. \item $\bo{z}_{\varepsilon,\Delta}$ admits an $L^{\infty}$-bound. Indeed, by \eqref{compactness} and the Cauchy-Schwarz inequality, we have \begin{eqnarray*} \left|\boldsymbol{Z}^{n}_{\varepsilon} - \boldsymbol{Z}^{0}_{\varepsilon}\right|^{2} = \left|\sum_{m=1}^{n} \sum_{i=1}^{N_{p}}(Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}) \right|^{2} & \leq & \left(\sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t\right) \sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t \left| \dfrac{Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}}{\Delta t} \right|^{2} \\ & \leq & tN_{p} \sum_{m=1}^{n}\sum_{i=1}^{N_{p}}\Delta t \left| \dfrac{Z^{m}_{\varepsilon,i}-Z^{m-1}_{\varepsilon,i}}{\Delta t} \right|^{2}, \quad \forall t > 0, \end{eqnarray*} i.e. by \eqref{compactness} \begin{equation}\label{Linfty} \left|\boldsymbol{Z}^{n}_{\varepsilon}\right|^{2} \leq TN_{p}C + \left| \boldsymbol{Z}^{0}_{p}\right|^{2} < \infty. \end{equation} Therefore \begin{equation*} || \bo{z}_{\varepsilon,\Delta}||_{L^{\infty}\left( (0,T),\mathbb{R}^{2N_{p}}\right)} := \underset{0 < t < T}{\sup} \left( \sum_{i=1}^{N_{p}}|z_{\varepsilon,\Delta, i}(t)|^{2} \right)^{1/2} = \underset{0 < t < T}{\sup} \left( \sum_{i=1}^{N_{p}}\sum_{n=0}^{N-1}|Z^{n}_{\varepsilon,i}|^{2}\mathbbm{1}_{(t^{n},t^{n+1}]}(t) \right)^{1/2} < \infty. \end{equation*} \end{itemize} For any sequence $(\Delta_{m})_{m>0}$ of discretization steps decreasing to $0$, thanks to the Arzel\`a-Ascoli theorem, there exists a subsequence still denoted by $\left(\bo{z}_{\varepsilon, \Delta_{m}}\right)$ which converges uniformly to $\bo{z}_{\varepsilon}\in L^{\infty}\left((0,T),\mathbb{R}^{2N_{p}}\right)$.
\subsubsection*{For all $t\in (0,T)$, the limit function $\bo{z}_{\varepsilon}$ belongs to $\boldsymbol{Q}_{0}$.} Indeed, since $\boldsymbol{z}_{\varepsilon,\Delta}|_{(t^{n}, t^{n+1}]} := \boldsymbol{Z}^{n}_{\varepsilon} \in \boldsymbol{Q}_{0}$ for all $n$, we have that $\boldsymbol{z}_{\varepsilon,\Delta} \in \boldsymbol{Q}_{0}$ almost everywhere in $[0,T]$, so that, since $\boldsymbol{Q}_{0}$ is closed, \begin{equation*} \bo{z}_{\varepsilon}(t) = \lim_{m \to \infty}\bo{z}_{\varepsilon,\Delta_{m}}(t) \in \boldsymbol{Q}_{0}, \quad \forall\, t \in (0,T). \end{equation*} Combining this with the fact that $\bo{z}_{\varepsilon} \in L^{\infty}((0,T),\mathbb{R}^{2N_{p}})$, we have $\bo{z}_{\varepsilon} \in L^{\infty}((0,T), \boldsymbol{Q}_{0})$. \subsubsection*{We now prove that the limit function $\bo{z}_{\varepsilon}$ satisfies $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N \left(\boldsymbol{Q}_{0},\bo{z}_{\varepsilon}\right)$.} Thanks to Lemma \ref{equality}, we only need to prove that $\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon}) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}), \bo{z}_{\varepsilon}\right)$, a.e. in $(0,T]$. \begin{itemize} \item \textbf{Convergence:} Firstly, since \begin{equation}\label{CU} \bo{z}_{\varepsilon,\Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} \bo{z}_{\varepsilon}(t) \text{ uniformly for all } t \in [0,T], \end{equation} and since $ \boldsymbol{\rho}$ is integrable, we have that for any fixed $\varepsilon >0$ \begin{equation*} \dfrac{1}{\varepsilon} z_{\varepsilon,\Delta_{m},i}(t)\int_{0}^{\infty}\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} z_{\varepsilon,i}(t)\int_{0}^{\infty}\rho_{i}(a)da \text{ uniformly in } [0,T], \quad i=1,\cdots,N_{p}.
\end{equation*} Secondly, by \eqref{CU} taken pointwise and the fact that $\bo{\rho}$ is bounded, we have that \begin{equation*} z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a) \longrightarrow z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a), \quad \forall i \text{ and } 0 \leq a \leq t/\varepsilon. \end{equation*} Furthermore, there exists a constant $K_{i}$ independent of $\Delta_{m}$ such that \begin{equation*} |z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)| \leq K_{i} \dfrac{\beta_{i}\exp(- \underline{\zeta_{i}}a)}{1+\beta_{i} \int_{0}^{\infty} e^{-\int_{0}^{\sigma}\zeta_{i}(\tilde{a})d\tilde{a}}d\sigma}, \quad \forall i, \end{equation*} so that, by the dominated convergence theorem, \begin{equation*} \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon}z_{\varepsilon,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon}z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a)da , \quad \forall i. \end{equation*} The same arguments hold for the integral involving the positions $\bo{z}_{p}$ at negative times and give \begin{equation*} \dfrac{1}{\varepsilon} \int_{t/\varepsilon}^{\infty}z_{p,\Delta_{m},i}(t-\varepsilon a)\rho_{i}(a)da \underset{m \to \infty}{\longrightarrow} \dfrac{1}{\varepsilon} \int_{t/\varepsilon}^{\infty}z_{p,i}(t-\varepsilon a)\rho_{i}(a)da , \quad \forall i. \end{equation*} Lastly, using \eqref{CU} and the fact that $F$ is continuously differentiable, we have that \begin{equation*} \bo{F}^{'}(\bo{z}_{\varepsilon, \Delta_{m}}) \underset{m \to \infty}{ \longrightarrow} \bo{F}^{'}(\bo{z}_{\varepsilon}).
\end{equation*} Gathering the above terms, we have that, for fixed $\varepsilon$, \begin{equation*} \bo{\pi}_{\varepsilon,\Delta_{m}}(t) :=\boldsymbol{\mathcal{L}}_{\varepsilon,\Delta_{m}}(t) + \boldsymbol{F}^{'}(\bo{z}_{\varepsilon,\Delta_{m}}(t)) \underset{m \to \infty}{\longrightarrow} \boldsymbol{\mathcal{L}}_{\varepsilon}(t) + \boldsymbol{F}^{'}\left(\bo{z}_{\varepsilon}(t)\right) =: \boldsymbol{\pi}_{\varepsilon}(t), \quad \forall t \in (0,T], \end{equation*} which is the claim. \item \textbf{Inclusion:} Here we use the same arguments as in \cite{venel08}. We have to prove that \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N\left(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t) \right), \text{ a.e. } t \in (0,T]. \end{equation*} Indeed, by Lemma \ref{equivalences} in Appendix \ref{annexeA}, \eqref{discre_incl_diff} is equivalent to \begin{eqnarray*} \langle \bo{\pi}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle & \leq & \big|\bo{\pi}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Replacing $\boldsymbol{\xi}$ by $-\boldsymbol{\xi}$ in the above inequality, we have that \begin{eqnarray*} \langle \bo{\pi}_{\varepsilon, \Delta_{m}}, -\boldsymbol{\xi} \rangle & \leq & \big| \bo{\pi}_{\varepsilon, \Delta_{m}}(t) \big|d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t))\big), \quad \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}. \end{eqnarray*} Let us now prove that $|\bo{\pi}_{\varepsilon, \Delta_{m}}|$ is bounded uniformly with respect to the discretization step.
Indeed \begin{eqnarray} |\bo{\pi}_{\varepsilon, \Delta_{m}}| & \leq & \left| \boldsymbol{\mathcal{L}}_{\varepsilon,\Delta_{m}} \right| + \left|\bo{F}^{'}(\bo{z}_{\varepsilon,\Delta_{m}})\right|. \end{eqnarray} On the one hand, since $\bo{z}_{\varepsilon,\Delta_{m}}$ is bounded with respect to $\varepsilon$ and $\Delta t$ and $F$ is continuously differentiable, there exists a constant $K_{F}$ independent of $\varepsilon$ and $\Delta t$ such that $\left|\bo{F}^{'}(\boldsymbol{z}_{\varepsilon,\Delta_{m}})\right| \leq K_{F}$. On the other hand, using the energy estimates and Jensen's inequality, \begin{equation*} |\bo{\mathcal{L}}^{n}_{\varepsilon}|^{2} \leq \frac{C_{0}}{\varepsilon} \sum_{i=1}^{N_{p}} \dfrac{\Delta a}{\varepsilon} \sum_{l=1}^{\infty}|Z^{n}_{\varepsilon,i} - Z^{n-l}_{\varepsilon,i}|^{2}R_{l,i} \leq \frac{C_{0}}{\varepsilon}\left(K_{0} + F(\boldsymbol{Z}^{0}_{p}) - F(\bo{Z}^{n}_{\varepsilon})\right), \end{equation*} so that there exists a constant $K$ independent of $\Delta a$ and $\varepsilon$ such that $|\bo{\mathcal{L}}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\varepsilon}$, hence \begin{equation*} |\bo{\pi}_{\varepsilon,\Delta_{m}}| \leq \dfrac{K}{\varepsilon} + K_{F}. \end{equation*} Combining this bound with the two inequalities above gives \begin{equation}\label{last} \big|\langle \bo{\pi}_{\varepsilon, \Delta_{m}}, \boldsymbol{\xi} \rangle \big| \leq \left(\dfrac{K}{\varepsilon} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big).
\end{equation} Using the fact that $\boldsymbol{q} \mapsto d_{\bo{C}}(\bo{q})$ is $1$-Lipschitz for any nonempty closed convex set $\bo{C}$, and setting \begin{equation*} I_{\varepsilon,\Delta_{m}}(t):= \big| d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big)\big|, \end{equation*} we have \begin{eqnarray*} I_{\varepsilon,\Delta_{m}} & \leq & \big| d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) - d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))} \big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ \\ & & \hspace{8.5em} + \big| d_{\bo{K}(\bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( - \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big| \\ \\ & \leq & \big| \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + \underbrace{\big| d_{\bo{K}( \bo{z}_{\varepsilon,\Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) - d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t) \big) \big|}_{J_{\varepsilon, \Delta_{m}}(t)}.
\end{eqnarray*} \end{itemize} Moreover, by Proposition \ref{convergenceofprojection} in Appendix \ref{annexeA}, there exists $\nu > 0$ such that for all $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}|\leq \nu$, $J_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0$.\\ Thus for any $\boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}$ satisfying $|\boldsymbol{\xi}| \leq \nu$, \begin{equation*} 0 \leq I_{\varepsilon,\Delta_{m}} \leq \big| \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) - \bo{z}_{\varepsilon}(t)\big| + J_{\varepsilon, \Delta_{m}}(t) \underset{m \to \infty}{\longrightarrow} 0, \end{equation*} i.e. \begin{equation*} d_{\bo{K}(\bo{z}_{\varepsilon, \Delta_{m}}(\psi_{\Delta_{m}}(t)))}\big( -\boldsymbol{\xi} + \bo{z}_{\varepsilon,\Delta_{m}}(\theta_{\Delta_{m}}(t)) \big) \underset{ m \to \infty}{\longrightarrow} d_{\bo{K}(\bo{z}_{\varepsilon}(t))}\big(-\boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big). \end{equation*} Since $\varepsilon > 0$ is fixed, \eqref{last} finally gives \begin{equation*} \forall\, \boldsymbol{\xi} \in \mathbb{R}^{2N_{p}}, \; |\boldsymbol{\xi}| \leq \nu, \quad |\langle \boldsymbol{\pi}_{\varepsilon}(t), \boldsymbol{\xi} \rangle| \leq \left(\frac{K}{\varepsilon} + K_{F}\right)d_{\bo{K}( \bo{z}_{\varepsilon}(t))} \big(- \boldsymbol{\xi} + \bo{z}_{\varepsilon}(t)\big), \end{equation*} which, using Lemma \ref{equivalences} again, is equivalent to \begin{equation*} \boldsymbol{\pi}_{\varepsilon}(t) \in -N(\bo{K}(\bo{z}_{\varepsilon}(t)), \bo{z}_{\varepsilon}(t)), \quad \forall \varepsilon >0. \end{equation*} This ends the proof, since $J_{\varepsilon, \Delta_{m}}(t) \to 0$ by Proposition \ref{convergenceofprojection} in Appendix \ref{annexeA}. \end{proof}
% Proof of uniqueness
\begin{proof} The proof is divided into two parts. \begin{itemize} \item Existence: the existence is immediate by compactness.
Indeed $\bo{z}_{\varepsilon}:=\lim_{m\to \infty} \bo{z}_{\varepsilon,\Delta_{m}}(t)$, where $(\bo{z}_{\varepsilon,\Delta_{m}})_{m}$ is defined in the proof of Theorem \ref{thm_conv}. \item Uniqueness: let $\bo{z}^{1}_{\varepsilon},\bo{z}^{2}_{\varepsilon} \in \boldsymbol{Q}_{0}$ be two solutions of \eqref{conDiff}. Since $- (\boldsymbol{\mathcal{E}}_{t}^{\varepsilon})^{'}(\boldsymbol{z}_{\varepsilon}) \in N(\boldsymbol{Q}_{0}, \boldsymbol{z}_{\varepsilon})$, we have by Proposition \ref{prox-reg-char} in Appendix \ref{annexeA} that \begin{equation*} \langle - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}] - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}\rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| }{2\eta}\Big| \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}\Big|^{2} \end{equation*} and \begin{equation*} \langle - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{2}_{\varepsilon}] - \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}), \bo{z}^{1}_{\varepsilon} - \bo{z}^{2}_{\varepsilon}\rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})| }{2\eta}\Big| \bo{z}^{1}_{\varepsilon} - \bo{z}^{2}_{\varepsilon}\Big|^{2}, \end{equation*} where we remind that $\eta >0$ is the prox-regularity constant of $\boldsymbol{Q}_{0}$ (see Theorem \ref{constant-prox-reg}).\\ Summing the above inequalities, we have \begin{equation*} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle + \langle \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \leq \dfrac{ |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| + |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})| }{2\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}, \end{equation*} where $\bo{\hat{\mathcal{L}}}_{\varepsilon} :=
\boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{2}_{\varepsilon}] - \boldsymbol{\mathcal{L}}_{\varepsilon}[\bo{z}^{1}_{\varepsilon}]$ and $\bo{\hat{z}}_{\varepsilon} := \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon}$. Since $F$ is convex, we have that \begin{equation*} \langle \bo{F}^{'}(\bo{z}^{2}_{\varepsilon}) - \bo{F}^{'}(\bo{z}^{1}_{\varepsilon}), \bo{z}^{2}_{\varepsilon} - \bo{z}^{1}_{\varepsilon} \rangle \geq 0, \end{equation*} so that \begin{equation}\label{eq_interm} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle \leq \dfrac{|(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{1}_{\varepsilon})| + |(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}^{2}_{\varepsilon})|}{2\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}. \end{equation} Let us prove that $(\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}_{\varepsilon}) = \bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}] + \bo{F}^{'}(\bo{z}_{\varepsilon})$ is bounded, where $\bo{z}_{\varepsilon}$ solves \eqref{conDiff}.\\ On the one hand, by decomposing the pointwise delay operator we have \begin{equation*} \mathcal{L}_{\varepsilon,i}[\boldsymbol{z}_{\varepsilon}]= \dfrac{1}{\varepsilon}\mu_{0,i} z_{\varepsilon,i}(t) - \dfrac{1}{\varepsilon}\int_{0}^{t/\varepsilon}z_{\varepsilon,i}(t-\varepsilon a)\rho_{i}(a)da - \dfrac{1}{\varepsilon}\int_{t/\varepsilon}^{\infty}z_{p,i}(t-\varepsilon a)\rho_{i}(a)da, \quad \forall i. \end{equation*} The first two terms are bounded.
Indeed, since $\boldsymbol{z}_{\varepsilon}$ is bounded and $\bo{\rho}$ is integrable, \begin{eqnarray*} \left| \dfrac{\mu_{0,i}z_{\varepsilon,i}(t)}{\varepsilon} - \dfrac{1}{\varepsilon} \int_{0}^{t/\varepsilon} z_{\varepsilon,i}(t -\varepsilon a) \rho_{i}(a)da \right| & \leq & \dfrac{1}{\varepsilon} \left( \mu_{0,i} + \int_{0}^{t/\varepsilon}\rho_{i}(a)da \right) \sup_{0 \leq t \leq T} |z_{\varepsilon,i}(t)| \\ & \leq & \dfrac{2\mu_{0,i}}{\varepsilon} \sup_{0 \leq t \leq T} |z_{\varepsilon,i}(t)|, \quad \forall i. \end{eqnarray*} The same arguments hold for the integral involving the past positions $\bo{z}_{p}$ and ensure that there exists a constant $\tilde{K}$ (independent of $\varepsilon$) such that $\left|\bo{\mathcal{L}}_{\varepsilon}[\bo{z}_{\varepsilon}]\right| \leq \tilde{K}/\varepsilon$.\\ On the other hand, since $\boldsymbol{z}_{\varepsilon}$ is uniformly bounded with respect to $\varepsilon$ in $(0,T]$ and $F$ is assumed to be continuously differentiable, $\bo{F}^{'}(\boldsymbol{z}_{\varepsilon})$ is bounded uniformly in $\varepsilon$: there exists a constant $K_{F}$ such that $|\bo{F}^{'}(\bo{z}_{\varepsilon})| \leq K_{F}$.\\ This implies that \begin{equation*} \left| (\bo{\mathcal{E}}_{t}^{\varepsilon})^{'}(\bo{z}_{\varepsilon}) \right| \leq \dfrac{\tilde{K}}{\varepsilon} + K_{F}. \end{equation*} By the latter inequality and \eqref{eq_interm} we have \begin{equation}\label{I} \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle \leq \dfrac{\frac{\tilde{K}}{\varepsilon} + K_{F}}{\eta}\left| \bo{\hat{z}}_{\varepsilon} \right|^{2}. \end{equation} Let us now find a lower bound for $\langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle$.
Since $\langle a-b,a \rangle \geq \dfrac{1}{2}\left(|a|^{2} - |b|^{2}\right)$, by assuming that $\bo{z}^{2}_{p}(t) = \bo{z}^{1}_{p}(t), \, \forall\, t \leq 0$, we have \begin{eqnarray*} \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}}\int_{0}^{\infty}\big| \hat{z}_{\varepsilon,i}(t)\big|^{2} \rho_{i}(a)da - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da & \leq & \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle, \end{eqnarray*} so that \begin{equation}\label{II} \dfrac{\mu_{0,m}}{2\varepsilon}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2\varepsilon} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \langle \boldsymbol{\hat{\mathcal{L}}}_{\varepsilon}, \bo{\hat{z}}_{\varepsilon} \rangle. \end{equation} By \eqref{I} and \eqref{II}, we have \begin{equation*} \dfrac{\mu_{0,m}}{2}|\bo{\hat{z}}_{\varepsilon}(t)|^{2} - \dfrac{1}{2} \sum_{i=1}^{N_{p}} \int_{0}^{t/\varepsilon} |\hat{z}_{\varepsilon,i}(t-\varepsilon a)|^{2}\rho_{i}(a)da \leq \dfrac{\tilde{K} + \varepsilon K_{F}}{\eta}\left| \bo{\hat{z}}_{\varepsilon}(t) \right|^{2}. \end{equation*} It follows \end{itemize} \end{proof} \bibliographystyle{plain} \bibliography{biblio} \end{document}
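The explicit MSD formula \eqref{analytic-msd} for the Ornstein-Uhlenbeck process of Appendix \ref{annexeC} can be checked numerically. The sketch below is illustrative and not part of the paper: it simulates the scalar equation $dz = -z\,dt + dW$ with a basic Euler-Maruyama scheme (function names, step size and path count are arbitrary choices) and compares the empirical mean of $z_t^2$ with $z_0^2 e^{-2t} + \tfrac12(1-e^{-2t})$.

```python
import math
import random

def ou_msd_exact(z0, t):
    # Explicit MSD from the variation-of-constants computation:
    # E[z_t^2] = z0^2 e^{-2t} + (1 - e^{-2t})/2
    return z0 ** 2 * math.exp(-2.0 * t) + 0.5 * (1.0 - math.exp(-2.0 * t))

def ou_msd_monte_carlo(z0, t, n_paths=5000, dt=0.01, seed=0):
    # Euler-Maruyama for dz = -z dt + dW, averaging z_t^2 over many paths
    rng = random.Random(seed)
    n_steps = int(round(t / dt))
    sq_sum = 0.0
    for _ in range(n_paths):
        z = z0
        for _ in range(n_steps):
            # Gaussian increment with variance dt approximates dW
            z += -z * dt + rng.gauss(0.0, math.sqrt(dt))
        sq_sum += z * z
    return sq_sum / n_paths
```

For $z_0 = 1$ and $t = 1$ the exact value is $e^{-2} + \tfrac12(1-e^{-2}) \approx 0.568$, and the Monte Carlo estimate agrees up to the statistical and discretization error of the scheme.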
% arXiv:2412.18692v1 (http://arxiv.org/abs/2412.18692v1)
% Most subrings of $\mathbb{Z}^n$ have large corank
\documentclass[12pt]{amsart} \usepackage{amssymb} \usepackage{amsthm} \usepackage[margin=1in]{geometry} \usepackage{amstext} \usepackage{amsbsy} \usepackage{amscd} \usepackage{enumerate} \usepackage{chngpage} \usepackage{mathtools} \usepackage{amsmath} \usepackage{hyperref} \usepackage{tikz} \usepackage{stmaryrd} \usepackage{color} \newtheorem{alg}{Algorithm} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{question}[thm]{Question} \newtheorem{conj}[thm]{Conjecture} \newtheorem{example}[thm]{Example} \newtheorem{obs}[thm]{Observation} \newtheorem{claim}[thm]{Claim} \newtheorem{rmk}[thm]{Remark} \makeatletter \let\@wraptoccontribs\wraptoccontribs \makeatother \newtheorem{thmA}{Theorem}[section] \newtheorem{defnA}[thmA]{Definition A\ignorespaces} \newtheorem{rmkA}[thmA]{Remark A\ignorespaces} \theoremstyle{plain} \newtheorem{conjA}[thmA]{Conjecture A\ignorespaces} \newcommand{\Z}{\mathbb Z} \newcommand{\N}{\mathbb N} \newcommand{\R}{\mathbb R} \newcommand{\C}{\mathbb C} \newcommand{\Q}{\mathbb Q} \newcommand{\F}{\mathbb F} \newcommand{\Fp}{\mathbb{F}_p} \newcommand{\Fq}{\mathbb{F}_q} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\HH}{\mathcal{H}} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\corank}{corank} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\col}{col} \DeclareMathOperator{\cok}{cok} \newcommand{\nathan}[1]{{\color{red} \sf $\clubsuit$ N:\ #1}} \newcommand{\kelly}[1]{{\color{blue} \sf $\clubsuit$ K:\ #1}} \newcommand{\mb}[1]{\mathbb{#1}} \let\longleftrightarrow\relax \DeclareRobustCommand{\longleftrightarrow}{\leftarrow\bigjoinrel\rightarrow} \newcommand\bigjoinrel{\mathrel{\mkern-7mu}} \title{Most subrings of $\Z^n$ have large corank} \author{Kelly Isham} \address{Colgate University, Hamilton, NY 13346} \email{[email protected]} \author{Nathan 
Kaplan} \address{University of California, Irvine, CA 92697} \email{[email protected]} \address{City College of New York, New York, NY, 10031} \email{[email protected]} \date{\today} \begin{document} \maketitle \vspace{-.5cm} {\centerline{(With an Appendix by Gautam Chinta)}} \begin{abstract} If $\Lambda \subseteq \Z^n$ is a sublattice of index $m$, then $\Z^n/\Lambda$ is a finite abelian group of order $m$ and rank at most $n$. Several authors have studied statistical properties of these groups as we range over all sublattices of index at most $X$. In this paper we investigate quotients by sublattices that have additional algebraic structure. While quotients $\Z^n/\Lambda$ follow the Cohen-Lenstra heuristics and are very often cyclic, we show that if $\Lambda$ is actually a subring, then once $n \ge 7$ these quotients are very rarely cyclic. More generally, we show that once $n$ is large enough the quotient typically has very large rank. In order to prove our main theorems, we combine inputs from analytic number theory and combinatorics. We study certain zeta functions associated to $\Z^n$ and also prove several results about matrices in Hermite normal form whose columns span a subring of $\Z^n$. \end{abstract} \section{Introduction} The focus of this article is a family of random finite abelian groups that arise in number theory. There has recently been extensive interest in this subject; see for example the survey of Wood \cite{WoodICM}. We show that quotients of $\Z^n$ by random subrings do not look like quotients of $\Z^n$ by random sublattices. This has an interpretation in terms of the distribution of cokernels of families of random integer matrices and the Cohen-Lenstra heuristics, a research topic that has been quite active in recent years. In order to describe our results, we introduce some notation. A \emph{sublattice} $\Lambda \subseteq \Z^n$ is a finite index subgroup, that is, we reserve this term for full-rank sublattices of $\Z^n$.
For vectors $u = (u_1,\ldots, u_n)$ and $w = (w_1,\ldots, w_n)$ in $\Z^n$, we write $u \circ w$ for the vector given by the componentwise product, $u \circ w = (u_1 w_1,\ldots, u_n w_n)$. A sublattice $\Lambda \subseteq \Z^n$ is \emph{multiplicatively closed} if $u,w \in \Lambda$ implies $u \circ w \in \Lambda$. A multiplicatively closed sublattice is a \emph{subring} if it also contains the multiplicative identity $(1,1,\ldots, 1)$. In this article, we address questions of the following type. \begin{question} Let $X$ be a positive real number and $R \subseteq \Z^n$ be a subring of index at most $X$ chosen uniformly at random. What does the quotient $\Z^n/R$ `look like'? For example, as $X \rightarrow \infty$ how often is this group cyclic? \end{question} A main point of this article is to show that the additional algebraic structure possessed by subrings leads these quotients to very often have large rank. In order to set up the contrast with quotients of random sublattices, we highlight some results about $\Z^n/\Lambda$ where $\Lambda \subseteq \Z^n$ is a sublattice of index at most $X$ chosen uniformly at random. A finite abelian group $G$ can be written uniquely in terms of its invariant factors, \[ G \cong \Z/\alpha_1 \Z \oplus \Z/\alpha_2 \Z \oplus \cdots \oplus \Z/\alpha_n\Z, \] where $\alpha_{i+1} \mid \alpha_i$ for $1 \le i \le n-1$. The \emph{rank} of $G$ is the largest $i$ for which $\alpha_i > 1$. The \emph{corank} of a sublattice $\Lambda \subseteq \Z^n$ is the rank of $\Z^n/\Lambda$. A sublattice $\Lambda \subseteq \Z^n$ is \emph{cocyclic} if either $\Lambda = \Z^n$ or $\Lambda$ has corank $1$. Nguyen and Shparlinski compute the proportion of sublattices of $\Z^n$ that are cocyclic \cite{ns}, a result which also follows by a different method from earlier work of Petrogradsky \cite{petro}. 
Chinta, Kaplan, and Koplewitz generalize this result \cite{CKK}, determining the proportion of sublattices of $\Z^n$ that have corank at most $k$ for any $k \in [1,n]$. Throughout this paper we use $p$ to denote a prime number and write $\prod_p$ for a product over all primes. \begin{thm}\label{CKK_thm}\cite[Corollary 1.2]{CKK}\label{thm:lattice_corank} Let $n,k$ be positive integers satisfying $1 \le k \le n$. Then, \begin{eqnarray*} p_{n,k} & := & \lim_{X \rightarrow \infty} \frac{\#\left\{\text{Sublattices } \Lambda \subseteq \Z^n\colon [\Z^n\colon \Lambda] < X \text{ and } \Lambda \text{ has corank at most } k\right\}}{\#\left\{\text{Sublattices } \Lambda \subseteq \Z^n\colon [\Z^n\colon \Lambda] < X\right\}} \\ & = & \prod_p \left( \prod_{j=1}^n (1-p^{-j})^2 \cdot \sum_{i=0}^k \frac{1}{p^{i^2} \prod_{j=1}^i (1-p^{-j})^2 \prod_{j=1}^{n-i} (1-p^{-j})}\right). \end{eqnarray*} \end{thm} These probabilities arise in the famous Cohen--Lenstra heuristics, which were developed during the study of statistical questions about Sylow $p$-subgroups of class groups of families of number fields \cite{cohen_lenstra}. Let $P_n$ be the distribution on finite abelian $p$-groups of rank at most $n$ that chooses a finite abelian $p$-group $G$ of rank $r \le n$ with probability \begin{equation}\label{eqn:PnG} P_n(G) = \frac{1}{\#\Aut(G)}\left(\prod_{i=1}^n (1-p^{-i})\right)\left(\prod_{i=n-r+1}^n (1-p^{-i})\right). \end{equation} Theorem \ref{CKK_thm} states that the proportion of $\Lambda \subseteq \Z^n$ that have corank at most $k$ is equal to the product over all primes of the probability that a finite abelian $p$-group chosen from the distribution $P_n$ has rank at most $k$. These connections are explained in \cite{CKK}. Even for small values of $k$, when $n$ goes to infinity these probabilities are quite large. 
For example, for large $n$ the proportion of cocyclic sublattices $\Lambda \subseteq \Z^n$ is approximately $85\%$, the proportion with corank at most $2$ is approximately $99.4\%$, and the proportion with corank at most $3$ is approximately $99.995\%$. The main theorem of this paper is that for large $n$, while the vast majority of random sublattices of $\Z^n$ have small corank, it is extremely rare for a random subring of $\Z^n$ to have small corank. \begin{thm}\label{proportion_corank_k} Let $n$ and $k$ be positive integers satisfying $1 \le k < n$. Define \[ p^R_{n,k} = \lim_{X\rightarrow \infty} \frac{\#\left\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] < X \text{ and } R \text{ has corank at most } k\right\}}{\#\left\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] < X\right\}}. \] \begin{enumerate} \item If $k \in \{1,2,3\}$ and $n \ge 6$, then $p^R_{n,k} = 0$. \item If $n \ge 7$, then $p^R_{n,4} = 0$. \item If $k \le (6-4\sqrt{2})n + 2\sqrt{2}-\frac{8}{3}$ and $n \ge 7$, then $p^R_{n,k} = 0$. \end{enumerate} \end{thm} \noindent For example, while approximately $85\%$ of sublattices of $\Z^7$ are cocyclic, the proportion of subrings of $\Z^7$ that are cocyclic is $0$. We prove Theorem \ref{proportion_corank_k} by studying the asymptotic rate of growth of the function \[ H_{n,k}(X) = \#\left\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] < X \text{ and } R \text{ has corank at most } k\right\} \] and comparing it to the asymptotic growth rate of the function \[ N_n(X) = \#\left\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] < X\right\}. \] We use techniques from the theory of zeta functions of rings to prove upper bounds for the growth rate of $H_{n,k}(X)$ and compare these results to lower bounds of Isham for $N_n(X)$ \cite[Theorem 1.6]{ish_subrings}. A key part of our argument involves counting special classes of matrices in Hermite normal form. 
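As a numerical sanity check (an illustrative sketch, not part of the paper's own computations), the percentages quoted above can be recovered by truncating the Euler product in Theorem \ref{CKK_thm} over small primes; the parameters $n = 30$ and $p < 300$ below are arbitrary truncation choices.

```python
def primes_below(bound):
    """Sieve of Eratosthenes."""
    sieve = [True] * bound
    sieve[0:2] = [False, False]
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(bound) if sieve[p]]

def p_nk(n, k, bound=300):
    """Truncated Euler product for the corank <= k proportion p_{n,k}."""
    total = 1.0
    for p in primes_below(bound):
        # A[i] = prod_{j=1}^{i} (1 - p^{-j})
        A = [1.0]
        for j in range(1, n + 1):
            A.append(A[-1] * (1.0 - p ** (-j)))
        inner = sum(1.0 / (p ** (i * i) * A[i] ** 2 * A[n - i])
                    for i in range(k + 1))
        total *= A[n] ** 2 * inner
    return total

# approximately 85%, 99.4%, and 99.995%, as quoted above
print(p_nk(30, 1), p_nk(30, 2), p_nk(30, 3))
```

Each Euler factor is a probability and tends to $1$ rapidly as $p \rightarrow \infty$, so the truncated product is already accurate for moderate bounds.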
Every finite index sublattice $\Lambda \subseteq \Z^n$ is the column span of a unique $n \times n$ matrix $H(\Lambda)$ in Hermite normal form. Moreover, $\Z^n/\Lambda$ is isomorphic to the cokernel of this matrix. We discuss matrices in Hermite normal form and their cokernels at the start of Section \ref{sec:counting} and in Proposition \ref{SNF_prop}. Theorem \ref{thm:lattice_corank} says that the probability that $\Z^n/\Lambda$ has rank at most $k$ is what one would expect if these random groups were distributed according to the Cohen-Lenstra heuristics. In \cite{CKK}, the authors also show that Sylow $p$-subgroups of $\Z^n/\Lambda$ are distributed according to the distribution $P_n(G)$ from \eqref{eqn:PnG}. These results can be interpreted by saying that cokernels of random integer matrices in Hermite normal form follow the Cohen-Lenstra heuristics. There is a general philosophy that cokernels of families of random integer and $p$-adic matrices should be distributed according to the Cohen-Lenstra heuristics, except when there is additional algebraic structure that must be taken into account; see for example \cite[Section 3.2]{WoodICM}. Theorem \ref{proportion_corank_k} says that once $n$ is not too small, cokernels of $n \times n$ matrices in Hermite normal form whose columns span a subring of $\Z^n$ are not distributed according to the Cohen-Lenstra heuristics. The additional algebraic structure of having a column span that is closed under componentwise multiplication leads to a completely different-looking distribution of finite abelian groups. In order to prove Theorem \ref{proportion_corank_k} we need only give an upper bound on the growth rate of $H_{n,k}(X)$, but for small values of $k$ we can prove something much more precise. We give asymptotic formulas for $H_{n,k}(X)$ when $k \le 3$. \begin{thm}\label{Hnk_123} Suppose $k \in \{1,2,3\}$ and $n > k$.
There exists a positive real number $C_{n,k}$ so that \[ H_{n,k}(X) \sim C_{n,k} X (\log X)^{\binom{n}{2}-1} \] as $X \rightarrow \infty$. \end{thm} \noindent We give a more precise description of $C_{n,k}$ in Section \ref{sec_corank}. We discuss the growth of $H_{n,4}(X)$ at the end of that section. In Section \ref{corank_smalln} we use our knowledge of $N_n(X)$ when $n \le 5$ to compute $p_{n,k}^R$ for all $k$ when $n \le 4$. We prove that $p^R_{n,n-1} > 0$ for all $n$, and as a consequence of these results, we show that $p^R_{5,k} > 0$ for each $k \in \{1,2,3,4\}$. Finally, an appendix by Gautam Chinta defines a multivariate zeta function which encodes not only the coranks of subrings of $\Z^n$, but also the full cotypes. A conjecture explicitly describing this {\em cotype zeta function} for $\Z^4$ is presented. \section{Counting lattices and counting subrings}\label{sec:counting} In this paper, we study the functions $H_{n,k}(X)$ and $N_n(X)$ by studying analytic properties of certain zeta functions associated to $\Z^n$. We begin by introducing the subgroup zeta function of $\Z^n$. \begin{defn} \label{defn:zetaZn} Let $a_k(\Z^n)$ denote the number of sublattices $\Lambda \subseteq \Z^n$ with $[\Z^n\colon \Lambda] = k$. Define \[ \zeta_{\Z^n}(s) = \sum_{k=1}^\infty a_k(\Z^n) k^{-s}, \] where $s$ is a complex variable. \end{defn} Since $\Z^n$ is a finitely generated nilpotent group, this zeta function has an Euler product and we can write \[ \zeta_{\Z^n}(s) = \prod_p \zeta_{\Z^n,p}(s),\ \text{ where }\ \ \ \zeta_{\Z^n,p}(s) = \sum_{e=0}^\infty a_{p^e}(\Z^n) p^{-es}. \] Let $M_n(\Z)$ denote the set of $n \times n$ matrices with entries in $\Z$. An invertible matrix $A \in M_n(\Z)$ with entries $a_{ij}$ is in \emph{Hermite normal form} if: \begin{enumerate} \item $A$ is upper triangular, and \item $0 \le a_{ij} < a_{ii}$ for $1 \le i<j \le n$. \end{enumerate} Let $\HH_n(\Z)$ denote the set of invertible matrices $A\in M_n(\Z)$ that are in Hermite normal form.
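As an illustrative aside (a sketch, not taken from the paper), matrices in $\HH_n(\Z)$ with a fixed determinant can be enumerated recursively: choosing the top-left diagonal entry $a$ leaves $a$ choices for each of the $n-1$ remaining entries of the first row, together with an $(n-1) \times (n-1)$ Hermite normal form matrix of determinant $m/a$.

```python
def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def a_m(n, m):
    """Number of n x n Hermite normal form matrices of determinant m,
    i.e. the number of sublattices of Z^n of index m."""
    if n == 0:
        return 1 if m == 1 else 0
    # a = a_{11}; the n-1 remaining first-row entries each take a values
    return sum(a ** (n - 1) * a_m(n - 1, m // a) for a in divisors(m))

# for n = 2 this recovers the divisor sum sigma(m), matching the
# factorization zeta_{Z^2}(s) = zeta(s) zeta(s-1)
assert all(a_m(2, m) == sum(divisors(m)) for m in range(1, 31))
print(a_m(3, 2), a_m(3, 4))  # 7 35
```

For $n = 3$ and $m = p$ a prime this gives $1 + p + p^2$, the number of index-$p$ sublattices of $\Z^3$.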
Every sublattice $\Lambda\subseteq \Z^n$ is the column span of a unique matrix $H(\Lambda) \in \HH_n(\Z)$, and moreover, $[\Z^n \colon \Lambda] = \det(H(\Lambda))$. Counting matrices in $\HH_n(\Z)$ with given determinant proves that \begin{equation}\label{zeta_Zn} \zeta_{\Z^n}(s) = \zeta(s) \zeta(s-1)\cdots \zeta(s-(n-1)). \end{equation} See the book of Lubotzky and Segal for five proofs of this result \cite{LubotzkySegal}. For an extensive introduction to this topic see the survey of Voll \cite{Voll} or the book of du Sautoy and Woodward~\cite{dusautoy}. Applying a standard Tauberian theorem allows one to deduce an asymptotic formula for $\sum_{k < X} a_k(\Z^n)$ in terms of analytic properties of $\zeta_{\Z^n}(s)$. The following Tauberian theorem is due to Delange. See \cite[Ch III, pages 121-122]{narkiewicz1984number} for an English translation. \begin{thm}\label{tauberian_theorem} \cite{delange} Let $F(s) = \sum_{n \ge 1} a(n) n^{-s}$ be a Dirichlet series with nonnegative coefficients that converges for $\Re(s) > \alpha > 0.$ If \begin{enumerate} \item $F(s)$ is analytic on $\Re(s) = \alpha$ except for $s=\alpha$ and \item for $s$ near $\alpha$ with $\Re(s) > \alpha$, \[ F(s) = \frac{G(s)}{(s-\alpha)^\beta}+ H(s) \] where $G(s)$ and $H(s)$ are analytic at $s= \alpha$ with $G(\alpha) \ne 0$, \end{enumerate} then $$ \sum_{n \le X} a(n) \sim \frac{G(\alpha)}{\alpha \Gamma(\beta)} X^{\alpha} (\log X)^{\beta -1}. $$ \end{thm} See the recent survey of Alberts \cite{alberts} for several concrete examples of how Tauberian theorems are applied to Dirichlet series constructed from Euler products. The function on the right-hand side of equation \eqref{zeta_Zn} has its right-most pole at $s=n$, and this pole is a simple pole. It is not difficult to see that this function satisfies the hypotheses of Theorem \ref{tauberian_theorem}, and then to carry out the application of this theorem. This leads to the following result.
\begin{cor} Let $n$ be a positive integer and define \[ L_n(X) = \#\{\text{Sublattices } \Lambda \subseteq \Z^n\colon [\Z^n \colon \Lambda] < X\}. \] We have \[ L_n(X) \sim \frac{\zeta(n) \zeta(n-1)\cdots \zeta(2)}{n} X^n, \] as $X \rightarrow \infty$. \end{cor} A main idea of this paper is to follow a similar strategy for the subring zeta function of~$\Z^n$. \begin{defn}\label{defn:subring_zeta} Let $f_n(k)$ denote the number of subrings $R \subseteq \Z^n$ with $[\Z^n \colon R] = k$. The \emph{subring zeta function of $\Z^n$} is defined by \[ \zeta_{\Z^n}^R(s) = \sum_{k =1}^\infty f_n(k) k^{-s}. \] \end{defn} Just as $\zeta_{\Z^n}(s)$ has an Euler product, we have \[ \zeta_{\Z^n}^R(s) = \prod_p \zeta_{\Z^n,p}^R(s),\ \text{ where }\ \ \ \zeta_{\Z^n,p}^R(s) = \sum_{e=0}^\infty f_n(p^e) p^{-es}. \] The subring zeta function of $\Z^n$ is only known explicitly for $n \le 4$. \begin{thm}\label{GeneratingFunctions} We have \begin{eqnarray*} \zeta_{\Z^2}^R(s) & = & \zeta(s), \\ \zeta_{\Z^3}^R(s) & = & \frac{\zeta(3s-1) \zeta(s)^3}{\zeta(2s)^2}, \\ \zeta_{\Z^4}^R(s) & = & \prod_p \frac{1}{(1-p^{-s})^2(1-p^2 p^{-4s})(1-p^3 p^{-6s})} \Big(1+4 p^{-s}+2 p^{-2s}\\ & & +(4p-3) p^{-3s}+(5p-1)p^{-4s} +(p^2-5p)p^{-5s}+(3p^2-4p) p^{-6s} \\ & & -2p^2 p^{-7s}-4p^2 p^{-8s} - p^2 p^{-9s}{\Big)}. \end{eqnarray*} \end{thm} \noindent The statement for $\Z^3$ follows from work of Datskovsky and Wright \cite{DW}, and the statement for $\Z^4$ follows from work of Nakagawa \cite{Nakagawa}. The right-most pole of $\zeta_{\Z^3}^R(s)$ is located at $s=1$ and has order $3$, and the right-most pole of $\zeta_{\Z^4}^R(s)$ is located at $s=1$ and has order $6$. Du Sautoy and Grunewald show that for any $n \ge 2$, the zeta function $\zeta_{\Z^n}^R(s)$ satisfies condition (1) of Theorem \ref{tauberian_theorem} \cite[Theorem 1.7]{dusautoy_grunewald}. Therefore, one can carry out the application of Theorem \ref{tauberian_theorem} for $n \le 4$, leading to the following result.
\begin{cor}\label{Nn_growth} We have \begin{eqnarray*} N_2(X) & \sim & X,\\ N_3(X) & \sim & \frac{1}{2\zeta(2)} X (\log X)^2,\\ N_4(X) & \sim &\frac{1}{5! \zeta(2)^3}X (\log X)^5. \end{eqnarray*} \end{cor} For larger $n$ precise asymptotic formulas for $N_n(X)$ are not known. Kaplan, Marcinek, and Takloo-Bighash \cite{kmt} compute the order of growth for $n=5$ but are unable to determine the constant in the asymptotic. \begin{thm}\cite[Theorem 6]{kmt}\label{N5X} There exists a positive real number $C_5$ such that \[ N_5(X) \sim C_5 X (\log X)^9 \] as $X \rightarrow \infty$. \end{thm} In the proof of Theorem \ref{proportion_corank_k}, we apply a lower bound for the growth of $N_n(X)$ for $n \ge 7$. Kaplan, Marcinek, and Takloo-Bighash show that for any $n$, there exists a positive real number $C_n$ such that for $X$ sufficiently large $N_n(X) > C_n X (\log X)^{\binom{n}{2}-1}$ \cite[Theorem 6]{kmt}. Building on work of Brakenhoff \cite{brakenhoff}, Isham proves a lower bound that is much stronger for $n \ge 7$. \begin{thm}\cite[Theorem 1.6 and Proposition 5.4]{ish_subrings}\label{ish_lower} Let \[ a(n) = \max_{0 \leq d \leq n-1} \left(\frac{d(n-1-d)}{n-1+d} + \frac{1}{n-1+d}\right) \] where $\max_{0 \leq d \leq n-1}$ is a maximum over integers $d \in [0, n-1]$. \begin{enumerate} \item We have $a(n) \ge (3-2\sqrt{2})(n-1) -(\sqrt{2}-1)$. \item There exists a positive real number $C_n$ such that for all sufficiently large $X$, \[ N_n(X) > C_n X^{a(n)} > C_n X^{(3-2\sqrt{2})(n-1) -(\sqrt{2}-1)}. \] In particular, when $n \ge 7$ there exists a positive real number $C_n$ such that for all sufficiently large $X$, \[ N_n(X) > C_n X^{9/8}. \] \end{enumerate} \end{thm} Theorem \ref{proportion_corank_k} (1) for $n \ge 7$ follows from Theorems \ref{Hnk_123} and \ref{ish_lower}. We prove Theorem \ref{proportion_corank_k} (2) and the $n=6$ case of Theorem \ref{proportion_corank_k} (1) at the very end of Section \ref{sec_corank}. 
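The exponent $a(n)$ in Theorem \ref{ish_lower} is a maximum of finitely many rational numbers and can be evaluated exactly. The following Python sketch (illustrative only) verifies part (1) with a small floating-point tolerance and confirms that $a(7) = 9/8$, the source of the exponent $9/8$ in part (2).

```python
from fractions import Fraction
from math import sqrt

def a(n):
    """a(n) = max over integers 0 <= d <= n-1 of (d(n-1-d) + 1)/(n-1+d)."""
    return max(Fraction(d * (n - 1 - d) + 1, n - 1 + d) for d in range(n))

assert a(7) == Fraction(9, 8)
# part (1): a(n) >= (3 - 2*sqrt(2))(n - 1) - (sqrt(2) - 1)
assert all(float(a(n)) + 1e-9 >= (3 - 2 * sqrt(2)) * (n - 1) - (sqrt(2) - 1)
           for n in range(2, 60))
# for n >= 7 the exponent is at least 9/8, as in part (2)
assert all(a(n) >= Fraction(9, 8) for n in range(7, 60))
print(a(7))  # 9/8
```

Since the maximum is over integers $d$, exact rational arithmetic avoids any rounding questions in the first two assertions.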
Theorem \ref{proportion_corank_k} (3) follows from Theorem \ref{ish_lower} and the following result. \begin{thm}\label{Hnk_upper} Let $a(n)$ be defined as in Theorem \ref{ish_lower}. Suppose $n \ge 7$ and $k \le (6-4\sqrt{2})n + 2\sqrt{2}-\frac{8}{3}$. Then, \[ \lim_{X \rightarrow \infty} \frac{H_{n,k}(X)}{X^{a(n)}} = 0. \] \end{thm} \noindent In order to prove this theorem, we give an upper bound on the growth of $H_{n,k}(X)$. We do this in Corollary \ref{Hnk_upper_cor}. \section{The corank at most $k$ zeta function of $\Z^n$}\label{sec_corank} In order to prove Theorems \ref{proportion_corank_k}, \ref{Hnk_123}, and \ref{Hnk_upper} we introduce the \emph{corank at most $k$ zeta function of $\Z^n$} and study its analytic properties. \begin{defn}\label{defn:corank_at_most} Let $\tilde{h}_{n,k}(j)$ be the number of subrings $R \subseteq \Z^n$ that have corank at most $k$ and index $j$ and ${h}_{n,k}(j)$ be the number of these subrings that have corank exactly $k$. Clearly, $$\tilde{h}_{n,k}(j) = \sum_{i = 0}^k {h}_{n,i}(j)\; \textrm{ and } \; H_{n,k}(X) = \sum_{j= 1}^{\lfloor X\rfloor} \tilde{h}_{n,k}(j).$$ We define the \emph{corank at most $k$ zeta function of $\Z^n$} by \[ \zeta_{\Z^n}^{R,(k)}(s) = \sum_{j = 1}^\infty \tilde{h}_{n,k}(j) j^{-s}. \] \end{defn} A finite abelian group has rank at most $k$ if and only if each of its Sylow $p$-subgroups has rank at most $k$. For the same reason that the subring zeta function $\zeta_{\Z^n}^R(s)$ has an Euler product, we see that $\zeta_{\Z^n}^{R,(k)}(s)$ has an Euler product as well. We have \[ \zeta_{\Z^n}^{R,(k)}(s) = \prod_p \zeta_{\Z^n, p}^{R,(k)}(s),\ \text{ where }\ \ \ \zeta_{\Z^n, p}^{R,(k)}(s) = \sum_{e =0}^\infty \tilde{h}_{n,k}(p^e)p^{-es}. \] In order to prove Theorem \ref{Hnk_123}, we find the right-most pole of $\zeta_{\Z^n}^{R, (k)}(s)$ for $k \le 3$ and then apply Theorem \ref{tauberian_theorem}. In this section, we prove Theorem \ref{Hnk_123}, first for $k=1$, then $k=2$, and finally for $k=3$. 
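Corank computations of the kind carried out in this section can be checked mechanically on examples (an illustrative sketch, not part of the paper; the example matrices are hypothetical): for an invertible integer matrix $A$, the $i$-th determinantal divisor $d_i$ is the gcd of all $i \times i$ minors, the invariant factors of the cokernel are the quotients $d_i/d_{i-1}$, and the rank of the cokernel $\Z^n/A\Z^n$ (the corank of the corresponding sublattice) is the number of invariant factors exceeding $1$.

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def corank(A):
    """Rank of the cokernel Z^n / A Z^n for an invertible integer matrix A."""
    n = len(A)
    d = [1]  # d[i] = gcd of all i x i minors, with d[0] = 1
    for i in range(1, n + 1):
        minors = (abs(det([[A[r][c] for c in cols] for r in rows]))
                  for rows in combinations(range(n), i)
                  for cols in combinations(range(n), i))
        d.append(reduce(gcd, minors))
    # invariant factors of the cokernel are the quotients d[i] / d[i-1]
    return sum(1 for i in range(1, n + 1) if d[i] // d[i - 1] > 1)

print(corank([[2, 1], [0, 2]]), corank([[2, 0], [0, 2]]))  # 1 2
```

The first matrix has cokernel $\Z/4\Z$, so its column span is cocyclic, while the second has cokernel $(\Z/2\Z)^2$ and corank $2$.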
\begin{prop}\label{count_cocyclic} Suppose $n \ge 2$ and $e \ge 1$. Then $h_{n,1}(p^e) = \binom{n}{2}$. \end{prop} \noindent We prove this result in Section \ref{subring_matrices}. We note that this is closely related to a result of Brakenhoff \cite[Theorem 1.5]{brakenhoff}. Proposition \ref{count_cocyclic} leads directly to the following expression for the corank at most $1$ zeta function of $\Z^n$. \begin{prop}\label{cocyclic_zeta} Suppose $n \ge 2$. We have \[ \zeta_{\Z^n}^{R,(1)}(s) = \zeta(s) \prod_p \left(1 + \left(\binom{n}{2} - 1\right)p^{-s}\right). \] \end{prop} The $k=1$ case of Theorem \ref{Hnk_123} follows directly from Proposition \ref{cocyclic_zeta}. The following is an instance of the Factorization Method discussed in the survey of Alberts \cite{alberts}. \begin{cor} \label{cor:cocyclic_asymptotic} Let $m = \binom{n}{2}$. We have \[ H_{n,1}(X) \sim \frac{1}{(m-1)!} \prod_p \left(p^{-m} (p-1)^{m-1} (p+m-1)\right) X (\log X)^{m-1}. \] \end{cor} \noindent Note that for $n = 2$ this is consistent with the result from Corollary \ref{Nn_growth} that $N_2(X) \sim X$. \begin{proof} The right-most pole of $\zeta_{\Z^n}^{R,(1)}(s)$ is located at $s = 1$. We will prove that the pole has order $\binom{n}{2}$. Multiplying the zeta function by $\frac{1}{\zeta(s)^t}$ for some integer $t > 0$, we observe that \begin{align*} \zeta_{\Z^n}^{R,(1)}(s) \prod_p (1-p^{-s})^{t}&=\prod_p \left(\left(1 + \left(\binom{n}{2} - 1\right)p^{-s}\right)(1-p^{-s})^{t-1}\right)\\ &=\prod_p\left(\left(1 + \left(\binom{n}{2} - 1\right)p^{-s}\right) \sum_{k=0}^{t-1} (-1)^k\binom{t-1}{k} p^{-ks}\right). \end{align*} From here it is clear that $\zeta_{\Z^n}^{R, (1)}(s) \prod_p (1-p^{-s})^{t}$ still has a pole at $s=1$ when $t < \binom{n}{2}$, and does not have a pole at $s=1$ when $t = \binom{n}{2}.$ We now apply Theorem \ref{tauberian_theorem}. To find $C_{n,1}$, we evaluate $\zeta(s)^{-\binom{n}{2}} \zeta_{\Z^n}^{R,(1)}(s)$ at $s=1$ and simplify.
\end{proof} We will apply this result in Section \ref{corank_smalln} when we study the proportion of subrings of $\Z^n$ with small corank for $n \le 5$. In order to state the analogous results for subrings of corank $2$ and corank $3$ we first recall some material due to Liu about irreducible subrings of $\Z^n$ \cite{Liu}. Liu defines irreducible subrings of $\Z_p^n$, but for notational convenience we prefer to define the corresponding notion for subrings of $\Z^n$ with index equal to a power of $p$. \begin{defn} A subring $R \subseteq \Z^n$ with index equal to a power of $p$ is \emph{irreducible} if for each $(x_1, \ldots, x_n) \in R,\ x_1\equiv x_2\equiv \cdots \equiv x_n \pmod{p}$. \end{defn} The motivation for this definition comes from the following decomposition result of Liu. \begin{thm}\cite[Theorem 3.4]{Liu} A subring $R \subseteq \Z^n$ with index equal to a power of $p$ can be written uniquely as a direct sum of irreducible subrings. \end{thm} Liu uses this result to prove a recurrence for subrings of $\Z^n$ in terms of irreducible subrings of $\Z^n$ and subrings of $\Z^j$ for $j< n$ \cite[Proposition 2.8]{Liu}. Let $g_n(k)$ be the number of irreducible subrings of $\Z^n$ of index $k$. We note that Liu uses slightly different notation for counting irreducible subrings. Our $g_n(k)$ is denoted by $g_{n-1}(k)$ in \cite{Liu}. We now state the analogues of Proposition \ref{count_cocyclic} for subrings of corank $2$ and $3$. \begin{thm}\label{corank2_upper} Suppose $e \ge 2$ and $n \ge 3$. Then \[ h_{n, 2}(p^e) =\frac{(3n^2-17n+36)}{12} \binom{n-1}{2} g_3(p^e) + 3\binom{n-1}{3}(e-1). \] \end{thm} We now give a similar, but more complicated, result for subrings of corank $3$. \begin{thm}\label{corank3_upper} Suppose $e \ge 3$ and $n \ge 4$. Then \[ h_{n,3}(p^e) = \frac{n^3 - 11n^2 + 40n - 40}{8} \binom{n-1}{3} g_4(p^e) + (3n - 5) \binom{n-1}{4}\sum_{j=2}^{e-1} (j-1) g_3(p^j).
\] \end{thm} \noindent We will prove these two theorems in Section \ref{subring_matrices}. Our aim is to prove Theorem \ref{Hnk_123} using the explicit formulas for $h_{n,k}(p^e)$ when $k \in \{1,2,3\}.$ \begin{prop} \label{prop:corank2_zeta} Let \begin{align*} a(n) &= \frac{3n^2-17n+36}{12} \binom{n-1}{2}\\ b(n) &= 3\binom{n-1}{3}\\ m &= \binom{n}{2}. \end{align*} We have \begin{align*} \zeta_{\Z^n}^{R, (2)}(s) &= \zeta(s)^2\zeta(3s-1)\prod_p \bigg(1 + \left(m-2\right) p^{-s} + ( a(n) + b(n) - m+1) p^{-2 s} - a(n)p^{-3s} \\ &+ ( a(n)-1 )p^{1-3s} + (a(n) - m + 2)p^{1-4 s} - (2a(n) + b(n) - m+1 )p^{1-5s} \bigg). \end{align*} The right-most pole of this function occurs at $s=1$ and has order $m$. \end{prop} \noindent We defined $a(n)$ and $b(n)$ in this way so that Theorem \ref{corank2_upper} becomes \[ h_{n,2}(p^e) = a(n) g_3(p^e) + b(n) (e-1). \] Before giving the proof, we remark that we can explicitly determine the constant derived from applying Theorem \ref{tauberian_theorem} to the function in Proposition \ref{prop:corank2_zeta}. \begin{cor}\label{cor:corank2_constant} Let $a(n), b(n),$ and $m$ be defined as in Proposition \ref{prop:corank2_zeta}. Then we have \[ H_{n,2}(X) \sim C_{n,2} X(\log X)^{m-1} \] where \begin{align*} C_{n,2} &= \frac{\zeta(2)}{(m-1)!} \prod_p (1-p^{-1})^{m-2} \bigg(1 + \left(m-2\right) p^{-1} + ( 2a(n) + b(n) - m) p^{-2 } \\ &- ( m-2)p^{-3} -(2a(n) + b(n) - m+1 ) p^{-4} \bigg). \end{align*} \end{cor} Note that $H_{3,2}(X) = N_3(X)$ since subrings in $\Z^3$ have corank at most 2. The constant $C_{3,2}$ from Corollary \ref{cor:corank2_constant} is consistent with the constant in the asymptotic for $N_3(X)$ given in Corollary \ref{Nn_growth}. \begin{proof}[Proof of Proposition \ref{prop:corank2_zeta}] Consider $\tilde{h}_{n,2}(p^e)$. Observe that $\tilde{h}_{n,2}(p^0) = 1$ and $\tilde{h}_{n,2}(p^1) = \binom{n}{2}$. For any $e \ge 2$, we have that $$ \tilde{h}_{n,2}(p^e) = \binom{n}{2} + h_{n,2}(p^e).
$$ Applying Theorem \ref{corank2_upper}, we obtain \begin{align*} \zeta_{\Z^n,p}^{R, (2)}(s) &= 1 + \sum_{e \ge 1}\binom{n}{2} p^{-es} + \sum_{e \ge 2} \left( a(n)g_3(p^e) + b(n) (e-1)\right)p^{-es}. \end{align*} Liu shows that \begin{align*} \sum_{e \ge 0} g_3(p^e) p^{-es}&=\sum_{e \ge 2} g_3(p^e) p^{-es}= \frac{p^{-2s} + p^{1-3s} + 2p^{1-4s}}{(1-p^{-s})(1-p^{1-3s})}. \end{align*} This is $B_2(p,x)$ with $x = p^{-s}$ \cite[page 296]{Liu}. Using this formula and standard geometric series formulas, we simplify our expression as follows using Mathematica \cite{mathematica}. The first author has posted the Mathematica worksheets on her website \cite{mathematica_work}. We have \begin{align*} \zeta_{\Z^n,p}^{R, (2)}(s) &= 1 + \binom{n}{2} \frac{p^{-s}}{1-p^{-s}} + a(n) \frac{p^{-2s} + p^{1-3s} + 2p^{1-4s}}{(1-p^{-s})(1-p^{1-3s})} + b(n) \frac{p^{-2s}}{(1-p^{-s})^2}\\ &= \frac{1}{(1-p^{-s})^2 (1-p^{1-3s})} \bigg(1 + ( a(n) + b(n) - \binom{n}{2}+1) p^{-2 s} + \left(\binom{n}{2}-2\right) p^{-s} \\ &- a(n)p^{-3s} + p^{1-3s}( a(n)-1 ) + p^{1-4 s} (a(n) - \binom{n}{2} + 2)- p^{1-5s} (2a(n) + b(n) - \binom{n}{2}+1 ) \bigg). \end{align*} An argument like the one given in the proof of Corollary \ref{cor:cocyclic_asymptotic} shows that the right-most pole of this function is located at $s=1$, and the order of this pole is $\binom{n}{2}$. Applying Theorem \ref{tauberian_theorem} now gives the asymptotic result. \end{proof} \begin{prop}\label{prop:corank3_zeta} Let \begin{align*} a(n) &= \frac{3n^2-17n+36}{12} \binom{n-1}{2}\\ b(n) &= 3\binom{n-1}{3}\\ c(n) &= \frac{n^3-11n^2+40n-40}{8} \binom{n-1}{3}\\ d(n) &= (3n-5) \binom{n-1}{4}\\ m &= \binom{n}{2}. \end{align*} The right-most pole of $\zeta_{\Z^n}^{R,(3)}(s)$ occurs at $s=1$ and has order $m$. 
We have \[ H_{n,3}(X) \sim C_{n,3} X(\log X)^{m-1} \] where \begin{align*} C_{n,3} = \frac{1}{(m-1)!} \prod_p (1-p^{-1})^{m-4} &\bigg(1 + p^{-1} (m-4) + p^{-2} (6 + 2 a(n) + b(n) +c(n) - 3 m) \\ &+p^{-3} (-4 - 4 a(n) - 2 b(n)+ 6 c(n) + 3 d(n) + 3 m)\\ &+ p^{-4}(1 + 2a(n) + b(n) - 7 c(n) + 2 d(n) - m)\bigg). \end{align*} \end{prop} \noindent We defined $c(n)$ and $d(n)$ in this way so that Theorem \ref{corank3_upper} says \[ h_{n,3}(p^e) = c(n) g_4(p^e) + d(n) \sum_{j=2}^{e-1} (j-1) g_3(p^j). \] We could write down an expression for $\zeta_{\Z^n}^{R,(3)}(s)$ analogous to the one given for $\zeta_{\Z^n}^{R,(2)}(s)$ in Proposition \ref{prop:corank2_zeta}, but in the interest of space, we omit it. Note that the constant $C_{4,3}$ from Proposition \ref{prop:corank3_zeta} is consistent with the constant in the asymptotic for $N_4(X)$ given in Corollary \ref{Nn_growth}. See the first author's website \cite{mathematica_work} for details. \begin{proof} Consider $\tilde{h}_{n,3}(p^e)$. Observe that $\tilde{h}_{n,3}(p^0) = 1,\ \tilde{h}_{n,3}(p^1) = \binom{n}{2}$, and $\tilde{h}_{n,3}(p^2) = \tilde{h}_{n,2}(p^2)$. For $e \ge 3$, \[ \tilde{h}_{n,3}(p^e) = \binom{n}{2} + h_{n,2}(p^e) + h_{n,3}(p^e). \] Plugging these expressions into $\zeta_{\Z^n, p}^{R, (3)}(s)$, we have \begin{eqnarray}\label{corank3_zeta} \zeta_{\Z^n, p}^{R, (3)}(s) &=& 1 + \sum_{e\ge 1} \binom{n}{2} p^{-es} + \sum_{e \ge 2} \left(a(n) g_3(p^e) + b(n)(e-1)\right)p^{-es} \\ &+& \sum_{e \ge 3} \left(c(n) g_4(p^e) + d(n) \sum_{j=2}^{e-1} (j-1)g_3(p^j)\right)p^{-es},\nonumber \end{eqnarray} where $a(n), b(n), c(n)$, and $d(n)$ are defined as in the statement of the proposition. The proofs of the $k=1$ and $k=2$ cases of Theorem \ref{Hnk_123} handle the first three summands, so we focus on the last summand. Many of these calculations and simplifications were performed in Mathematica \cite{mathematica}; see the first author's website \cite{mathematica_work} for the code. First consider \[ \sum_{e\ge 3} c(n) g_4(p^e)p^{-es}.
\] Liu proves that \begin{align*} \sum_{e\ge 3} g_4(p^e)p^{-es} &= \frac{p^{-3s}}{(1-p^{-s})^2 (1-p^{1-3s}) (1-p^{2-4s}) (1-p^{3-6s})}\bigg( 1 + (p^2+p-1)p^{-s} \\ &+ (5p^2-p)p^{-2s} +(p^3+p^2-p)p^{-3s} + (7p^3-11p^2+p)p^{-4s} + (p^3+p^2)p^{-5s}\\ &+ (3p^4 -13p^3+3p^2)p^{-6s} + (-p^5+2p^3)p^{-7s} + (-4p^5 -6p^4+4p^3)p^{-8s} \\ &+ (-2p^5+p^3)p^{-9s} +(-3p^6+4p^5)p^{-10s} +6p^{6-12s} \bigg). \end{align*} \noindent This is $B_3(p,x)$ with $x = p^{-s}$ \cite[Proposition 6.3]{Liu}. Next consider \begin{align*} \sum_{e \ge 3} \left( \sum_{j=2}^{e-1} (j-1)g_3(p^j)\right)p^{-es} &= \sum_{e \ge 3}\left( \sum_{j=2}^{e-1} (j-1)g_3(p^j) p^{-js} p^{(-e+j)s}\right)\\ &= g_3(p^2)p^{-2s} \sum_{e \ge 1} p^{-es} + 2g_3(p^3)p^{-3s} \sum_{e \ge 1} p^{-es}+ 3g_3(p^4) p^{-4s}\sum_{e \ge 1}p^{-es} + \cdots \\ &= \frac{p^{-s}}{1-p^{-s}} \sum_{e \ge 2} (e-1)g_3(p^{e}) p^{-es}. \end{align*} We consider the derivative with respect to $x$ of both expressions for $B_2(p,x)$ given by \cite[page 296]{Liu}. Observe that \[ \frac{d}{dx} \left(\sum_{e \ge 2} g_3(p^e)x^e \right) =x^{-1} \sum_{e \ge 2} eg_3(p^e) x^e. \] Differentiating the rational function expression for $B_2(p,x)$ with respect to $x$ gives \begin{align*} \frac{d}{dx} \left(\sum_{e \ge 2} g_3(p^e) x^e \right) &= \frac{d}{dx} \left( \frac{x^2 + p x^3 + 2p x^4}{(1-x)(1-p x^3)} \right)\\ &= \frac{-x (-2 + x - 3 p x - 6 p x^2 + 5 p x^3 + 2 p x^4 + 3 p^2 x^5)}{(1 - x)^2 (1 - p x^3)^2}. \end{align*} Combining the expressions for each of the four summands in \eqref{corank3_zeta} and putting them over a common denominator gives the formula for $\zeta_{\Z^n}^{R, (3)}(s)$. From here, we can see that neither of these last two summands contributes a pole to the right of $s=1$ when we take the Euler product. Since $g_4(p^e) \le f_4(p^e)$, we see that \[ \prod_p \left(1+\sum_{e \ge 2} g_4(p^e)p^{-es}\right) \] converges to the right of $\Re(s) = 1$ and cannot have a pole at $s=1$ of order larger than $\binom{n}{2}$. 
An analogous statement holds for \[ \prod_p \left(1+\sum_{e\ge 3} \left(\sum_{j=2}^{e-1} (j-1)g_3(p^j)\right) p^{-es}\right). \] We can now apply the strategy of the proof of Corollary \ref{cor:cocyclic_asymptotic}, multiplying the expression for $\zeta_{\Z^n, p}^{R, (3)}(s)$ by $(1-p^{-s})^{\binom{n}{2}}$ and then using the fact that if $a_n>0$ for all $n$, then the infinite product $\prod_n (1+a_n)$ converges if and only if $\sum_n a_n$ converges. In this way, we see that $\zeta_{\Z^n}^{R,(3)}(s)$ has its right-most pole at $s=1$ and that pole has order exactly $\binom{n}{2}$. \end{proof} We have now completed the proof of Theorem \ref{Hnk_123}. \begin{proof}[Proof of Theorem \ref{Hnk_123}] This theorem follows directly from Corollary \ref{cor:cocyclic_asymptotic}, Corollary \ref{cor:corank2_constant}, and Proposition \ref{prop:corank3_zeta}. \end{proof} We can use the expression for $\tilde{h}_{4,3}(p^e)$ that comes from Proposition \ref{count_cocyclic}, Theorem \ref{corank2_upper}, and Theorem \ref{corank3_upper} along with the ideas in this proof to verify that $\zeta_{\Z^4}^{R,(3)}(s) = \zeta_{\Z^4}^R(s)$, which gives a nice check of our results. Similarly, we can use the expression for $\tilde{h}_{3,2}(p^e)$ that comes from Theorem \ref{corank2_upper} and Proposition \ref{count_cocyclic} to verify that $\zeta_{\Z^3}^{R,(2)}(s) = \zeta_{\Z^3}^R(s)$. A main idea of the proof of Theorem \ref{Hnk_upper} (and thus Theorem \ref{proportion_corank_k}) is to prove upper bounds for $\tilde{h}_{n,k}(p^e)$, which then imply that the right-most pole of $\zeta_{\Z^n}^{R,(k)}(s)$ cannot be too large. Applying Theorem \ref{tauberian_theorem} then completes the proof. We state a general upper bound for the number of subrings of $\Z^n$ of corank $k$ and index $p^e$. We defer the proof until the next section. \begin{thm}\label{general_upper} Suppose $e$, $n$, and $k$ are positive integers with $k \le n-1$.
\begin{enumerate} \item We have \[ \binom{n-1}{k} g_{k+1}(p^e) \le h_{n,k}(p^e) \le (n-k)^k \binom{n-1}{k} g_{k+1}(p^e). \] \item We have \[ \tilde{h}_{n,k}(p^e) \le k (n-1)^k 2^{n-1} f_{k+1}(p^e). \] \end{enumerate} \end{thm} \noindent We could certainly prove sharper bounds, but these suffice for our main application. We apply the second part of Theorem \ref{general_upper} along with an upper bound on $N_{k+1}(X)$ due to Kaplan, Marcinek, and Takloo-Bighash in order to give an upper bound on $H_{n,k}(X)$. \begin{thm}\label{KMTB_upper}\cite[Theorem 6]{kmt} Let $k \ge 5$ be a positive integer. For any $\epsilon > 0$, there exists a constant $C_{k,\epsilon} > 0$ depending on $k$ and $\epsilon$ such that \[ N_{k+1}(X) < C_{k,\epsilon} X^{\frac{k}{2} - \frac{2}{3} + \epsilon} \] for all $X > 0$. \end{thm} We apply this result to prove the following upper bound for $H_{n,k}(X)$. \begin{cor}\label{Hnk_upper_cor} For any $\epsilon > 0$, we have \[ H_{n,k}(X) = O(X^{\frac{k}{2} - \frac{2}{3} + \epsilon}) \] where the constant depends on $k,n$, and $\epsilon$. \end{cor} \begin{proof} Our first main goal is to show that if the right-most pole of $\zeta_{\Z^{k+1}}^R(s)$ is at $s = \alpha$, then for any $n,\ \zeta_{\Z^n}^{R,(k)}(s)$ converges whenever $\Re(s) > \alpha$. We have \[ \zeta_{\Z^n}^{R,(k)}(s) = \prod_p \zeta_{\Z^n,p}^{R,(k)}(s) = \prod_p \left(1 + \sum_{e \ge 1} \tilde{h}_{n,k}(p^e) p^{-es}\right). \] We recall that \[ \prod_p \left(1 + \sum_{e \ge 1} \tilde{h}_{n,k}(p^e) p^{-es}\right) \] converges if and only if \[ \sum_p \sum_{e \ge 1} \tilde{h}_{n,k}(p^e) p^{-es} \] does. Theorem \ref{general_upper} (2) implies that for any positive real $s$ \[ \sum_p \sum_{e \ge 1} \tilde{h}_{n,k}(p^e) p^{-es} \le k(n-1)^k 2^{n-1} \sum_p \sum_{e \ge 1} f_{k+1}(p^e) p^{-es}. \] Since $k(n-1)^k 2^{n-1}$ is a constant, this expression converges if and only if $\sum_p \sum_{e \ge 1} f_{k+1}(p^e) p^{-es}$ does. 
But this expression converges if and only if \[ \prod_p \left(1 + \sum_{e \ge 1} f_{k+1}(p^e) p^{-es}\right) = \zeta_{\Z^{k+1}}^{R}(s) \] converges. Suppose the right-most pole of $\zeta_{\Z^{k+1}}^R(s)$ is at $s=\alpha$. Since $\zeta_{\Z^n}^{R,(k)}(s)$ has no poles to the right of $\alpha$, Theorem \ref{tauberian_theorem} now implies that for any $\epsilon > 0,\ H_{n,k}(X) = O(X^{\alpha+\epsilon})$. Applying Theorem \ref{KMTB_upper} completes the proof. \end{proof} We now show how this result completes the proof of Theorem \ref{Hnk_upper}. \begin{proof}[Proof of Theorem \ref{Hnk_upper}] Corollary \ref{Hnk_upper_cor} implies that for any $\epsilon > 0$ there exists a constant $C_{n,k,\epsilon} > 0$ such that \[ H_{n,k}(X) < C_{n,k,\epsilon} X^{\frac{k}{2}-\frac{2}{3}+\epsilon}. \] By Theorem \ref{ish_lower} (1), $X^{a(n)} >X^{(3-2\sqrt{2})(n-1) -(\sqrt{2}-1)}$. We find that \[ \lim_{X \rightarrow \infty} \frac{H_{n,k}(X)}{X^{a(n)}}=0 \] whenever $\frac{k}{2} - \frac{2}{3} < (3-2\sqrt{2})(n-1) -(\sqrt{2}-1)$. Solving for $k$ completes the proof. \end{proof} The proof of Theorem \ref{proportion_corank_k} (2) follows a similar argument, but in place of the upper bound from Corollary \ref{Hnk_upper_cor} we use the result from Theorem \ref{N5X} about the growth of $N_5(X)$. \begin{proof}[Proof of Theorem \ref{proportion_corank_k} (2)] Theorem \ref{N5X} implies that there exists a $C_5>0$ such that $N_5(X) \sim C_5 X (\log X)^9$. The argument from the proof of Corollary \ref{Hnk_upper_cor} then shows that for any $\epsilon>0,\ H_{n,4}(X) = O(X^{1+\epsilon})$, where the constant depends on $n$ and $\epsilon$. Note that $a(n)>1$ for any $n \ge 7$. Applying the argument from the proof of Theorem \ref{Hnk_upper} completes the proof. \end{proof} We now turn to the statement of Theorem \ref{proportion_corank_k} (1) for the particular case $n = 6$. While we do not know the asymptotic rate of growth of $N_6(X)$, we do have a lower bound that is good enough for our purposes. 
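To make the last step of the proof of Theorem \ref{Hnk_upper} concrete, the largest corank $k$ permitted by the inequality $\frac{k}{2} - \frac{2}{3} < (3-2\sqrt{2})(n-1) - (\sqrt{2}-1)$ is easy to compute for any given $n$. The following minimal Python sketch carries out this arithmetic (the helper name is ours):

```python
import math

def corank_threshold(n):
    # Largest integer k with k/2 - 2/3 < (3 - 2*sqrt(2))*(n - 1) - (sqrt(2) - 1).
    rhs = (3 - 2 * math.sqrt(2)) * (n - 1) - (math.sqrt(2) - 1)
    # The bound 2*(rhs + 2/3) is irrational, so taking the floor is safe.
    return math.floor(2 * (rhs + 2 / 3))
```

For instance, the threshold for $n = 6$ is $k = 2$, consistent with the fact that corank $3$ in $\Z^6$ requires the separate argument given below.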
We learned the following fact from Gautam Chinta and Ramin Takloo-Bighash. \begin{prop}\label{Z6_lower} We have \[ \lim_{X \rightarrow \infty} \frac{X(\log X)^{15}}{N_6(X)} = 0. \] \end{prop} \begin{proof} The key idea is to apply a result for subrings of small index from \cite{akkm}. By Theorem \ref{tauberian_theorem} it is enough to show that the order of the pole of $\zeta_{\Z^6}^R(s)$ at $s=1$ is at least $\binom{6}{2}+1$. We know that $\Z^6$ has $\binom{6}{2}$ subrings of index $p$ and at least $p^6$ subrings of index $p^7$ \cite[Theorem 1.3]{akkm}. Since \[ \prod_p (1+ 15 p^{-s} + p^6 p^{-7s}) \] has a pole of order at least $16$ at $s=1$, we see that $\zeta_{\Z^6}^R(s)$ does as well. \end{proof} \begin{proof}[Proof of Theorem \ref{proportion_corank_k} (1) for $n=6$] Theorem 1.4 implies that $H_{6,3}(X) = o(X (\log X)^{15})$, so the result of the previous proposition implies that \[ \lim_{X \rightarrow \infty} \frac{H_{6,3}(X)}{N_6(X)} = 0. \] \end{proof} We will prove Theorem \ref{general_upper} at the end of the next section. \section{Subring matrices and the proofs of Theorems \ref{corank2_upper}, \ref{corank3_upper}, and \ref{general_upper}}\label{subring_matrices} A main idea that we use to prove upper bounds for $h_{n,k}(p^e)$ is to consider matrices in $\HH_n(\Z)$ whose columns span a subring of $\Z^n$ of corank at most $k$. Throughout this paper, if $A \in M_n(\Z)$ we write $\col(A)$ for the column span of $A$. We write $v_1,\ldots, v_n$ for the columns of $A$ and $a_{ij}$ for the entries of $A$. \begin{defn} A matrix $A \in \HH_n(\Z)$ is a \emph{subring matrix} if \begin{enumerate} \item the identity $(1,1,\ldots,1)^T \in \col(A)$, and \item for any columns $v_i, v_j$ of $A$, we have $v_i \circ v_j \in \col(A)$. \end{enumerate} We see that $A$ is a subring matrix if and only if $\col(A)$ is a subring of $\Z^n$. Moreover, $\det(A) = [\Z^n\colon \col(A)]$. \end{defn} Suppose $A \in \HH_n(\Z)$ is a subring matrix.
Since $(1,1,\ldots,1)^T \in \col(A)$, it is clear that $a_{nn} = 1$. If $\det(A) = p^e$, then the diagonal entries of $A$ are $(p^{e_1}, p^{e_2},\ldots, p^{e_{n-1}},1)$, where each $e_i \ge 0$ and $e_1+\cdots + e_{n-1} = e$. That is, $(e_1,\ldots, e_{n-1})$ is a weak composition of $e$ into $n-1$ parts. Let $\alpha = (e_1,\ldots, e_{n-1})$ and define $g_\alpha(p)$ to be the number of irreducible subrings of $\Z^n$ whose subring matrix has diagonal entries $(p^{e_1},\ldots, p^{e_{n-1}},1)$. We now prove that the corank of a subring $R \subseteq \Z^n$ is at most $n-1$. Recall that the \emph{cokernel} of a matrix $A \in M_n(\Z)$ is $\cok(A) = \Z^n/\col(A)$. If $A$ is a subring matrix then the corank of $\col(A)$ is the number of nontrivial invariant factors of $\cok(A)$. We recall some basic facts about the Smith normal form of an integer matrix. \begin{prop}\label{SNF_prop} Let $A \in M_n(\Z)$ be invertible. There exist $P, Q \in \GL_n(\Z)$ such that $PAQ = S$ is a diagonal matrix whose diagonal entries $(s_1,s_2,\ldots, s_n)$ are positive integers satisfying $s_i \mid s_{i+1}$ for all $1 \le i \le n-1$. Since $\cok(A) \cong \cok(PAQ) \cong \cok(S)$, we have \[ \cok(A) \cong \Z/s_1 \Z \times \Z/s_2 \Z \times \cdots \times \Z/s_n\Z. \] Moreover, these $s_i$ are uniquely determined and \[ s_1 \cdots s_i = \gcd(i \times i \text{ minors of } A). \] \end{prop} We have seen that an $n \times n$ subring matrix $A$ has an entry equal to $1$, and therefore the greatest common divisor of the $1 \times 1$ minors of $A$ is $1$. By Proposition \ref{SNF_prop}, $\cok(A)$ has at most $n-1$ nontrivial invariant factors, and therefore $\col(A)$ has corank at most $n-1$. \begin{prop}\label{subringZ2} We have $\zeta_{\Z^2}^R(s) = \zeta(s)$. Every proper subring $R \subseteq \Z^2$ has corank $1$. \end{prop} \begin{proof} Suppose $A \in \HH_2(\Z)$ is a subring matrix with $\det(A) = k$. Then $A = \left(\begin{smallmatrix} k & x \\ 0 & 1\end{smallmatrix}\right)$ where $0 \le x < k$.
Since $(1,1)^T \in \col(A)$ we see that $x = 1$. \end{proof} Liu has determined the conditions under which the column span of a subring matrix is an irreducible subring. \begin{prop}\cite[Proposition 3.1]{Liu}\label{liu31} Suppose $A \in \HH_n(\Z)$ is a subring matrix with columns $v_1,\ldots, v_n$ and determinant equal to a power of $p$. Then $\col(A)$ is an irreducible subring if and only if $v_n = (1,1,\ldots, 1)^T$ and for each $i \in [1,n-1]$ every entry of $v_i$ is $0$ modulo $p$. \end{prop} If $A \in \HH_n(\Z)$ is a subring matrix with last column $(1,1,\ldots, 1)^T$, determinant equal to a power of $p$, and every entry of its first $n-1$ columns is divisible by $p$, then we say that $A$ is an \emph{irreducible subring matrix}. We see that $g_n(p^e)$ is equal to the number of $n\times n$ irreducible subring matrices with determinant $p^e$. The diagonal entries of an irreducible subring matrix $A$ with $\det(A) = p^e$ are of the form $(p^{e_1},\ldots, p^{e_{n-1}},1)$ where each $e_i \ge 1$ and $e_1+\cdots + e_{n-1} = e$. That is, $(e_1,\ldots, e_{n-1})$ is a composition of $e$ into $n-1$ parts. We see that $g_n(p^e)$ is equal to the sum of $g_\alpha(p)$ taken over all compositions $\alpha$ of $e$ into $n-1$ parts. \begin{prop}\cite[Proposition 3.3]{Liu}\label{liu33} Let $A$ be a subring matrix with diagonal entries $(p^{e_1}, \ldots, p^{e_{n-1}}, 1)$. If $e_i > 0$ for all $1 \le i \le n-1$, then $A$ is irreducible. \end{prop} Liu remarks that Propositions \ref{liu31} and \ref{liu33} give a sufficient condition for determining whether a subring matrix is irreducible. Namely, it suffices to check that the diagonal is of the form $(p^{e_1}, \ldots, p^{e_{n-1}}, 1)$ where each $e_i \ge 1$. These remarks lead to the following proposition. \begin{prop}\label{irred_corank_prop} Suppose $R \subseteq \Z^n$ is an irreducible subring. The corank of $R$ is $n-1$. \end{prop} \begin{proof} Let $A$ be the irreducible subring matrix for which $\col(A) = R$. 
Proposition \ref{liu31} implies that the last column of $A$ is $(1,\ldots, 1)^T$. In every other column of $A$, each entry is divisible by $p$. This implies that every $2\times 2$ minor of $A$ is divisible by $p$. By Proposition \ref{SNF_prop}, $\cok(A)$ has exactly $n-1$ nontrivial invariant factors. \end{proof} We next show that in a subring matrix $A$ with a given diagonal certain collections of entries must be divisible by $p$ and a particular submatrix constructed from $A$ is an irreducible subring matrix. \begin{prop}\label{prop:subring_matrix_divis} Let $A$ be an $n \times n$ subring matrix with diagonal $(p^{e_1},p^{e_2},\ldots, p^{e_{n-1}},1)$ and define $I = \{i_1,i_2,\ldots, i_k\}$ where $1 \le i_1<i_2<\cdots < i_k \le n-1$ and $e_j \neq 0$ if and only if $j \in I$. \begin{enumerate} \item For any $j_1, j_2$ satisfying $1\le j_1 \le j_2 \le k$, we have $v_{i_{j_1}} \circ v_{i_{j_2}} \in \Span(v_{i_1},v_{i_2},\ldots, v_{i_k})$. \item Consider the $k \times k$ matrix obtained by first taking the $n \times k$ matrix with columns $v_{i_1},v_{i_2},\ldots, v_{i_k}$ and then deleting the $j$\textsuperscript{th} row for any $j \not \in I$. Now append a row where every entry is $0$ to the bottom of this matrix, and append a final column where every entry is $1$. This is a $(k+1) \times (k+1)$ irreducible subring matrix. \item For any $j_1, j_2$ satisfying $1\le j_1 \le j_2 \le k$, we have $p \mid a_{i_{j_1} i_{j_2}}$. \item If $i \in I$, then every entry in the column $v_i$ is divisible by $p$. \end{enumerate} \end{prop} \begin{proof} We prove these statements in order. \begin{enumerate} \item Since $\col(A)$ is multiplicatively closed, for any pair $i_{j_1}, i_{j_2} \in I$ there exist unique $c_1,c_2,\ldots, c_n \in \Z$ such that \[ v_{i_{j_1}} \circ v_{i_{j_2}} = \sum_{\ell = 1}^n c_\ell v_\ell. \] Suppose $j \not\in I$, so that $p^{e_j} = 1$. 
Since $A$ is in Hermite normal form, the only nonzero entry in the $j$\textsuperscript{th} row of $A$ is this $1$ in column $j$. In particular, the entry of $v_{i_{j_1}} \circ v_{i_{j_2}}$ in row $j$ is $0$. This implies $c_j = 0$. We conclude that $v_{i_{j_1}} \circ v_{i_{j_2}}$ is in the span of the columns $v_i$ where $i \in I$. \item Consider the $n \times k$ matrix with columns $v_{i_1},\ldots, v_{i_k}$. Part (1) implies that the column span of this matrix is multiplicatively closed. If $j\not\in I$, then every entry of the $j$\textsuperscript{th} row of this matrix is $0$. We delete these rows and see that the column span of this $k \times k$ matrix is still multiplicatively closed. Appending a row where every entry is $0$ to the bottom of the matrix and a final column where every entry is equal to $1$ gives a $(k+1) \times (k+1)$ matrix whose column span is multiplicatively closed. The diagonal of this matrix consists of positive powers of $p$ in the first $k$ entries and then a $1$ in the last entry. Proposition \ref{liu31} implies that this is an irreducible subring matrix. \item The entry $a_{i_{j_1} i_{j_2}}$ is contained in one of the first $k$ columns of the irreducible subring matrix described in Part (2). Proposition \ref{liu31} implies that $p\mid a_{i_{j_1} i_{j_2}}$. \item Suppose $i \in I$ and consider $v_i$. Part (3) implies that any entry of $v_i$ in a row corresponding to an element of $I$ is divisible by $p$. Since $A$ is in Hermite normal form, the other entries of $v_i$ are $0$. \end{enumerate} \end{proof} The next result is the key observation that allows us to use combinatorial properties of subring matrices to prove upper bounds for $h_{n,k}(p^e)$. \begin{thm}\label{corank_diag_thm} Let $A$ be an $n\times n$ subring matrix with diagonal $(p^{e_1}, \ldots, p^{e_{n-1}}, 1)$ and define $I = \{i_1,i_2,\ldots, i_k\}$ where $1 \le i_1<i_2<\cdots < i_k \le n-1$ and $e_j \neq 0$ if and only if $j \in I$. Then $\corank(A) = k$.
\end{thm} The analogue of this statement for sublattices is not true. For example, the column span of the matrix in Hermite normal form $$ A = \begin{pmatrix} 2&1 & 1\\ 0 & 2 & 1\\ 0 & 0 &1 \end{pmatrix} $$ has corank 1 since the gcd of the $2 \times 2$ minors of $A$ is $1$, but $A$ has two diagonal entries that are positive powers of $2$. \begin{proof} We bound the corank from above and below. There is an $(n-k)\times (n-k)$ submatrix of $A$ that is upper triangular and has diagonal $(1,1,\ldots,1)$. Therefore, Proposition \ref{SNF_prop} implies that $\corank(A) \le k$. Proposition \ref{prop:subring_matrix_divis} implies that if $i \in I$, then every entry of the column $v_i$ is divisible by $p$. Therefore, every $(n-k+1)\times (n-k+1)$ submatrix of $A$ contains a column in which every entry is divisible by $p$, so every $(n-k+1)\times (n-k+1)$ minor of $A$ is divisible by $p$. Proposition \ref{SNF_prop} then implies that $\corank(A) \ge k$, completing the proof. \end{proof} At the end of the introduction we discussed how cokernels of $n \times n$ subring matrices ordered by determinant are not distributed according to the Cohen-Lenstra heuristics. For example, the cokernel of such a matrix is cyclic much less often than the heuristics would predict. We give a kind of rough explanation for why this should be true. A main issue is that the number of subring matrices of index equal to a power of $p$ that have cyclic cokernel is quite small. If we require that the column space of a matrix $A$ in Hermite normal form is closed under componentwise multiplication, a large collection of entries is forced to be congruent to $0$ modulo $p$, and so it seems much more likely that every $k\times k$ minor of $A$ is divisible by $p$. Proposition \ref{SNF_prop} then explains why we should expect the corank of $A$ not to be too small. We now prove Proposition \ref{count_cocyclic}, which counts cocyclic subrings of $\Z^n$ of determinant $p^e$. 
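Before doing so, we note that the minor-gcd computation from Proposition \ref{SNF_prop}, used in the example and proof above, is easy to mechanize. The following Python sketch (function names are ours; the cofactor expansion is only intended for the small matrices considered here) computes the corank of an integer matrix directly:

```python
from itertools import combinations
from math import gcd

def det(M):
    # Cofactor expansion along the first row; adequate for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor_gcd(A, r):
    # gcd of all r x r minors of A, as in the Smith normal form proposition.
    g = 0
    for rows in combinations(range(len(A)), r):
        for cols in combinations(range(len(A)), r):
            g = gcd(g, det([[A[i][j] for j in cols] for i in rows]))
    return g

def corank(A):
    # Number of invariant factors of cok(A) that are larger than 1.
    count, prev = 0, 1
    for r in range(1, len(A) + 1):
        d = minor_gcd(A, r)
        if d // prev > 1:
            count += 1
        prev = d
    return count

# The sublattice example above: the gcd of the 2x2 minors is 1, so the
# corank is 1 even though two diagonal entries are positive powers of 2.
A = [[2, 1, 1], [0, 2, 1], [0, 0, 1]]
```

Running this on the matrix $A$ above returns corank $1$, while the irreducible subring matrix with diagonal $(2,2,1)$, last column $(1,1,1)^T$, and all other off-diagonal entries equal to $0$ returns corank $2$, as Theorem \ref{corank_diag_thm} predicts.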
\begin{proof}[Proof of Proposition \ref{count_cocyclic}] Suppose $A \in \HH_n(\Z)$ is a subring matrix with determinant $p^e$ and that $\col(A)$ is a cocyclic subring of $\Z^n$. By Theorem \ref{corank_diag_thm} exactly one diagonal entry of $A$ is equal to $p^e$ and the rest of the diagonal entries are equal to $1$. We first consider the case where $A$ has the form \[ \begin{pmatrix} p^e & a_{12} & a_{13} & \cdots & a_{1n}\\ &1&0&\cdots &0\\ &&1&\cdots &0\\ &&&\ddots &\vdots\\ &&&&1 \end{pmatrix}. \] Since $\col(A)$ is multiplicatively closed, for any $i$ satisfying $2 \le i\le n$ there exist unique $c_1,c_2,\ldots, c_n \in \Z$ such that \[ v_i \circ v_i - v_i = \sum_{\ell=1}^n c_\ell v_\ell. \] Since the only nonzero entry in $v_i \circ v_i - v_i$ is an $a_{1i}^2-a_{1i}$ in the first row, we must have $p^e \mid (a_{1i}^2-a_{1i})$. If $i\neq j$ satisfy $2\le i,j \le n$, then the only nonzero entry of $v_i \circ v_j$ is an $a_{1i}a_{1j}$ in the first row. So we must have $p^e \mid a_{1i}a_{1j}$. Since $A$ is in Hermite normal form we have $0 \le a_{1i} < p^e$, so these conditions are satisfied if and only if each $a_{1i} \in \{0,1\}$ and at most one of the $a_{1i}$ is equal to $1$. We cannot have every $a_{1i}$ equal to $0$ since $\col(A)$ must contain $(1,1,\ldots, 1)^T$. Thus, exactly one $a_{1i}$ is equal to $1$, and there are a total of $n-1$ cocyclic subring matrices with this diagonal. For each $i \in [2,n-1]$ we repeat this analysis for the case where $a_{ii} = p^e$. In this case there are exactly $n-i$ cocyclic subring matrices. Taking a sum over $i$ completes the proof. \end{proof} In order to prove Theorems \ref{corank2_upper} and \ref{corank3_upper}, we need the following propositions about the structure of subring matrices with fixed diagonal and fixed corank. The first states that for any $i,j$ such that $e_i \neq 0$ and $e_j = 0$, the entry $a_{ij} \in \{0,1\}$. The second proposition states that for a fixed $i$ satisfying $e_i \neq 0$, there is exactly one $j$ for which $a_{ij} =1$.
\begin{prop}\label{entry0or1prop} Let $A$ be a subring matrix with diagonal $(p^{e_1}, \ldots, p^{e_{n-1}}, 1)$ and define $I = \{i_1,i_2,\ldots, i_k\}$ where $1 \le i_1<i_2<\cdots < i_k \le n-1$ and $e_j \neq 0$ if and only if $j \in I$. If $i \in I$ and $j \not\in I$, then $a_{ij} \in \{0,1\}$. \end{prop} \begin{proof} Since $\col(A)$ is multiplicatively closed there exist unique $c_1,\ldots, c_n \in \Z$ such that \[ v_j \circ v_j - v_j = \sum_{\ell=1}^n c_\ell v_\ell. \] All of the nonzero entries of $v_j \circ v_j - v_j$ are contained in rows $1,2,\ldots, j-1$. Therefore, $c_\ell =0$ if $\ell \ge j$. If $\ell \not\in I$, then $e_\ell = 0$ and $p^{e_\ell} = 1$. Since $A$ is in Hermite normal form the only nonzero entry in the $\ell$\textsuperscript{th} row of $A$ is this $1$ on the diagonal. Therefore, if $\ell < j$ satisfies $e_\ell = 0$, then $c_\ell = 0$. Let $I' = \{i_1, i_2,\ldots, i_m\}$ be the subset of $I$ consisting of integers less than $j$; since $j \not\in I$, either $i_m < j < i_{m+1}$, or $m = k$ and $I' = I$. The entry of $v_j \circ v_j - v_j$ in row $i_m$ is $(a_{i_m j}^2 -a_{i_m j }) = a_{i_m j}(a_{i_m j}-1)$. Note that $p$ cannot divide both $a_{i_m j} $ and $a_{i_m j }-1$. The entry of $\sum_{\ell=1}^n c_\ell v_\ell = \sum_{\ell=1}^{i_m} c_\ell v_\ell$ in row $i_m$ is $c_{i_m} p^{e_{i_m}}$. Since $A$ is in Hermite normal form, we have $0 \le a_{i_m j} < p^{e_{i_m}}$ and see that $p^{e_{i_m}} \mid a_{i_m j }(a_{i_m j}-1)$. This implies $a_{i_m j} = 0$ or $a_{i_m j} = 1$. We now see that it is not possible for $v_j \circ v_j - v_j$ to have any nonzero entries outside of rows $1,2,\ldots, i_{m-1}$. That is, $\sum_{\ell=1}^n c_\ell v_\ell = \sum_{\ell=1}^{i_{m-1}} c_\ell v_\ell$. Applying the argument that we just gave to row $i_{m-1}$ shows that $a_{i_{m-1}j} \in \{0,1\}$.
This implies that it is not possible for $v_j \circ v_j - v_j$ to have any nonzero entries outside of rows $1,2,\ldots, i_{m-2}$. Arguing by induction shows that for any $i \in I$ we have $a_{ij} \in \{0,1\}$. \end{proof} \begin{prop}\label{prop_exactly_1} Let $A$ be a subring matrix with diagonal $(p^{e_1}, \ldots, p^{e_{n-1}}, 1)$ and define $I = \{i_1,i_2,\ldots, i_k\}$ where $1 \le i_1<i_2<\cdots < i_k \le n-1$ and $e_j \neq 0$ if and only if $j \in I$. \begin{enumerate} \item Suppose $i \in I$ and $j_1, j_2 \in \{1,2,\ldots, n\} \setminus I$ with $j_1 \neq j_2$. Either $a_{ij_1} = 0$ or $a_{ij_2} = 0$. \item Suppose $i\in I$. There exists exactly one $j \in \{1,2,\ldots, n\} \setminus I$ for which $a_{ij} = 1$. \end{enumerate} \end{prop} Before giving the proof, we note that one can think of this proposition as a more general form of a result from the proof of Proposition \ref{count_cocyclic}. \begin{proof} Since $\col(A)$ is multiplicatively closed there exist unique $c_1,\ldots, c_n$ such that $ v_{j_1} \circ v_{j_2} = \sum_{\ell = 1}^n c_{\ell} v_\ell. $ We will prove that every entry of $v_{j_1} \circ v_{j_2}$ is $0$ by showing that $c_\ell = 0$ for each $\ell$. This shows that in each row, at most one of these two columns contains a nonzero entry. Since $A$ is in Hermite normal form, if $\ell \not\in I$, then the only nonzero entry in the $\ell$\textsuperscript{th} row of $A$ is a $1$ on the diagonal. Since $j_1 \neq j_2$, the entry of $v_{j_1} \circ v_{j_2}$ in row $\ell$ is $0$. Therefore, $c_\ell = 0$. So we see that $v_{j_1} \circ v_{j_2} = \sum_{\ell = 1}^k c_{i_\ell} v_{i_\ell}$. We first prove that $c_{i_k} = 0$. The entry of $v_{j_1} \circ v_{j_2}$ in row $i_k$ is $a_{i_k j_1} a_{i_k j_2}$. Proposition \ref{entry0or1prop} implies that if $a_{i_k j_1} \neq 0$, then $a_{i_k j_1} =1$, and the corresponding statement holds for $a_{i_k j_2}$. Therefore, if both entries are nonzero, then $a_{i_k j_1} a_{i_k j_2} = 1$. 
The entry of $\sum_{\ell = 1}^k c_{i_\ell} v_{i_\ell}$ in row $i_k$ is $c_{i_k} p^{e_{i_k}}$. Since $p^{e_{i_k}}\nmid 1$, we must have $c_{i_k} = a_{i_k j_1} a_{i_k j_2} = 0$. We next consider the entry of $v_{j_1} \circ v_{j_2}$ in row $i_{k-1}$. Repeating the argument just given shows that $c_{i_{k-1}} = 0$ and at least one of $a_{i_{k-1} j_1}, a_{i_{k-1} j_2}$ is $0$. Repeating this argument for the rows $i_{k-2}, i_{k-3}, \ldots, i_1$ in order we see that for any $i \in I$, either $a_{i j_1} = 0$ or $a_{i j_2} = 0$. Part (1) of this proposition implies that there is at most one $j \in \{1,2,\ldots, n\} \setminus I$ for which $a_{ij} \neq 0$. Proposition \ref{entry0or1prop} implies that if $a_{ij} \neq 0$ then $a_{ij} = 1$. Proposition \ref{prop:subring_matrix_divis} (3) implies that if for every $j \in \{1,2,\ldots, n\} \setminus I$ we have $a_{ij} = 0$, then every entry of the $i$\textsuperscript{th} row of $A$ is divisible by $p$. This is not possible since $\col(A)$ must contain $(1,1,\ldots, 1)^T$. \end{proof} Before applying Theorem \ref{corank_diag_thm} to prove Theorems \ref{corank2_upper}, \ref{corank3_upper}, and \ref{general_upper}, we recall one more result of Liu on the structure of subring matrices. \begin{prop}\cite[Lemma 3.5]{Liu}\label{liulastcol} If $A$ is a subring matrix, then every entry in the $n$\textsuperscript{th} column of $A$ is in $\{0,1\}$. If $a_{in} = 1$ and $a_{jn} =0$, then $a_{ij} =a_{ji}=0$. \end{prop} \begin{proof}[Proof of Theorem \ref{corank2_upper}] Suppose $A$ is an $n \times n$ subring matrix with diagonal $(p^{e_1},\ldots, p^{e_{n-1}},1)$ and $\col(A)$ has corank $2$. Theorem \ref{corank_diag_thm} implies that there exist $i,j$ satisfying $1\le i < j \le n-1$ where $e_i,e_j \ge 1$ and $e_\ell = 0$ for all $\ell \in [1,n-1]\setminus \{i,j\}$. Let $\alpha = (e_i,e_j)$. 
An example of such a matrix with $i = 1$ is \[ A = \begin{pmatrix} p^{e_1} & a_{12} &a_{13}& \ldots &\ldots&\ldots& a_{1n}\\ & 1 & 0 & \ldots &\ldots&\ldots& 0\\ &&\ddots&&&\\ &&&p^{e_j}&a_{j(j+1)} &\ldots& a_{jn}\\ &&&&1&\ldots&0\\ &&&&&\ddots&\\ &&&&&&1 \end{pmatrix}. \] Since $A$ is in Hermite normal form, if $\ell \in [1,n-1]\setminus \{i,j\}$ then $p^{e_\ell} = 1$ and the only nonzero entry in the $\ell$\textsuperscript{th} row of $A$ is this $1$ on the diagonal. Proposition \ref{prop_exactly_1} implies that there exists a unique $\ell_i \in [1,n] \setminus \{i,j\}$ where $a_{i \ell_i} = 1$, and for every $\ell \in [1,n] \setminus \{i,j,\ell_i\}$ we have $a_{i \ell} = 0$. The corresponding statement holds for the $j$\textsuperscript{th} row of $A$. There is a unique $\ell_j \in [1,n] \setminus \{i,j\}$ where $a_{j \ell_j} = 1$ and for every $\ell \in [1,n] \setminus \{j,\ell_j\}$ we have $a_{j \ell} = 0$. We count the number of subring matrices $A$ where the diagonal entry in row $i$ is $p^{e_i}$, the diagonal entry in row $j$ is $p^{e_j}$, and all other diagonal entries are equal to $1$. We will prove that there are \begin{equation}\label{ei_ej_count} (n-j-1) + (n-i-2) + g_\alpha(p) + g_\alpha(p) (n-i-2)(n-j-1) \end{equation} such matrices by dividing them up based on the pair $(a_{in},a_{jn})$. Proposition \ref{entry0or1prop} implies that $(a_{in},a_{jn}) \in \{(1,0),(0,1),(1,1),(0,0)\}$. We consider each of these possibilities separately. \begin{enumerate} \item $(a_{in},a_{jn}) = (1,0)$. Proposition \ref{liulastcol} implies that $a_{ij} = 0$. There are $n-j-1$ choices for which entry in row $j$ is equal to $1$. Any such choice completely determines the entries of $A$. It is easy to check that each of these choices does give a subring matrix. \item $(a_{in},a_{jn}) = (0,1)$. Proposition \ref{liulastcol} implies that $a_{ij} = 0$. There are $n-i-2$ choices for which entry in row $i$ is equal to $1$. Any such choice completely determines the entries of $A$. 
It is easy to check that each of these choices does give a subring matrix. \item $(a_{in},a_{jn}) = (1,1)$. Proposition \ref{prop:subring_matrix_divis} (2) implies that if $A$ is a subring matrix, then \[ \begin{pmatrix} p^{e_i} & a_{ij} & 1\\ 0 & p^{e_j} & 1 \\ 0 & 0 & 1 \end{pmatrix} \] is an irreducible subring matrix. This matrix is completely determined by $a_{ij}$. There are $g_\alpha(p)$ choices for this entry. Any such choice completely determines the entries of $A$. It is easy to check that each of these choices does give a subring matrix. \item $(a_{in},a_{jn}) = (0,0)$. As in the previous case, there are $g_\alpha(p)$ choices for the entry $a_{ij}$. There are $n-i-2$ choices for which entry in row $i$ is equal to $1$ and $n-j-1$ choices for which entry in row $j$ is equal to $1$. These three choices are independent and completely determine the entries of $A$. It is easy to check that every set of choices gives a subring matrix. \end{enumerate} It is straightforward to prove by induction that \[ \sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} \left((n-i-2)+(n-j-1)\right) = 3 \binom{n-1}{3} \] and that \[ \sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} \left(1+(n-i-2)(n-j-1)\right) = \frac{3n^2-17n+36}{12} \binom{n-1}{2}. \] There are $e-1$ pairs $(e_i,e_j)$ of positive integers with $e_i+e_j = e$. The sum of $g_\alpha(p)$ taken over all compositions $\alpha$ of $e$ into two parts is $g_3(p^e)$. Adding these terms together gives the formula in Theorem \ref{corank2_upper}. \end{proof} \begin{proof}[Proof of Theorem \ref{corank3_upper}] We closely follow the strategy of the proof of Theorem \ref{corank2_upper}. Suppose $A$ is an $n \times n$ subring matrix with diagonal $(p^{e_1},p^{e_2},\ldots, p^{e_{n-1}},1)$ and $\col(A)$ has corank $3$. Theorem \ref{corank_diag_thm} implies that there exist $i,j,k$ satisfying $1\le i < j < k \le n-1$ where $e_i,e_j, e_k \ge 1$ and $e_\ell = 0$ for all $\ell \in [1,n-1]\setminus \{i,j,k\}$. 
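The two summation identities used in the proof of Theorem \ref{corank2_upper} just given can also be confirmed numerically before proving them by induction; a brief Python sketch (helper names are ours):

```python
from math import comb

def sum1(n):
    # Left-hand side of the first identity in the proof of Theorem corank2_upper.
    return sum((n - i - 2) + (n - j - 1)
               for i in range(1, n - 1) for j in range(i + 1, n))

def sum2(n):
    # Left-hand side of the second identity.
    return sum(1 + (n - i - 2) * (n - j - 1)
               for i in range(1, n - 1) for j in range(i + 1, n))

for n in range(3, 20):
    assert sum1(n) == 3 * comb(n - 1, 3)
    assert 12 * sum2(n) == (3 * n * n - 17 * n + 36) * comb(n - 1, 2)
```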
If $\ell \in [1,n-1]\setminus \{i,j,k\}$, then the only nonzero entry in the $\ell$\textsuperscript{th} row of $A$ is a $1$ on the diagonal. Proposition \ref{prop_exactly_1} implies that there exists a unique $\ell_i \in [1,n]\setminus \{i,j,k\}$ where $a_{i \ell_i} = 1$, and for every $\ell\in [1,n] \setminus \{i,j,k,\ell_i\}$ we have $a_{i \ell} = 0$. Similarly, there exists a unique $\ell_j \in [1,n] \setminus \{i,j,k\}$ where $a_{j \ell_j} = 1$ and for every $\ell \in [1,n] \setminus \{i,j,k,\ell_j\}$ we have $a_{j \ell} = 0$. Also, there exists a unique $\ell_k \in [1,n] \setminus \{i,j,k\}$ where $a_{k \ell_k} = 1$, and for every $\ell \in [1,n] \setminus \{i,j,k,\ell_k\}$ we have $a_{k \ell} = 0$. Proposition \ref{entry0or1prop} implies that $a_{in},a_{jn}, a_{kn} \in \{0,1\}$. We count subring matrices $A$ where the diagonal entry in row $i$ is $p^{e_i}$, the diagonal entry in row $j$ is $p^{e_j}$, and the diagonal entry in row $k$ is $p^{e_k}$, by dividing up these matrices based on the $8$ possibilities for $(a_{in},a_{jn}, a_{kn})$. Let $\alpha = (e_i,e_j,e_k),\ \alpha_{12} = (e_i,e_j),\ \alpha_{13} = (e_i, e_k)$, and $\alpha_{23} = (e_j,e_k)$. We prove that the number of subring matrices with a particular value of $(a_{in}, a_{jn}, a_{kn})$ is given in the following table: \[ \begin{tabular}{|c|c|} \hline {$(a_{in}, a_{jn}, a_{kn})$} & $\#\{\text{Subring Matrices } A\}$ \\[2pt] \hline $(1,1,1)$ & $g_\alpha(p)$\\[2pt] \hline $(0,0,0)$ & $(n-i-3)(n-j-2)(n-k-1) g_\alpha(p)$ \\[2pt] \hline $(1,1,0)$ & $(n-k-1) g_{\alpha_{12}}(p)$ \\[2pt] \hline $(1,0,1)$ & $(n-j-2) g_{\alpha_{13}}(p)$ \\[2pt] \hline $(0,1,1)$ & $(n-i-3) g_{\alpha_{23}}(p)$ \\[2pt] \hline $(1,0,0)$ & $(n-j-2)(n-k-1) g_{\alpha_{23}}(p)$ \\[2pt] \hline $(0,1,0)$ & $(n-i-3)(n-k-1) g_{\alpha_{13}}(p)$ \\[2pt] \hline $(0,0,1)$ & $(n-i-3)(n-j-2) g_{\alpha_{12}}(p)$ \\[2pt] \hline \end{tabular}. \] Assuming for now that the values in this table are correct, we complete the proof of the theorem.
It is straightforward to prove the following formulas by induction: \begin{eqnarray*} \sum_{i=1}^{n-3} \sum_{j=i+1}^{n-2} \sum_{k=j+1}^{n-1} \left( (n-i-3)(n-j-2) + (n-k-1) \right) & = & \frac{1}{5} \binom{n-1}{4} (8n-25), \\ \sum_{i=1}^{n-3} \sum_{j=i+1}^{n-2} \sum_{k=j+1}^{n-1} \left( (n-i-3)(n-k-1) + (n-j-2) \right) & = & \frac{1}{5} \binom{n-1}{4} (4n-5), \\ \sum_{i=1}^{n-3} \sum_{j=i+1}^{n-2} \sum_{k=j+1}^{n-1} \left( (n-j-2)(n-k-1) + (n-i-3) \right) & = & \frac{1}{5}\binom{n-1}{4} (3n+5), \end{eqnarray*} and \begin{eqnarray*} \sum_{i=1}^{n-3} \sum_{j=i+1}^{n-2} \sum_{k=j+1}^{n-1} \left((n-i-3)(n-j-2)(n-k-1) +1\right) & = & \frac{1}{8} \binom{n-1}{3} (n^3-11n^2+40n-40). \end{eqnarray*} There is a bijection between compositions $\alpha = (e_i,e_j,e_k)$ of $e$ into three parts and compositions $(e_i,e_j)$ of an integer $m \in [2,e-1]$ into two parts. The number of compositions of $m$ into two parts is $m-1$. Taking a sum over all possible compositions $\alpha$ gives \[ \sum_\alpha g_{\alpha_{12}}(p) = \sum_{m=2}^{e-1} (m-1) g_3(p^m). \] For exactly the same reason we get the same expression when we sum $g_{\alpha_{13}}(p)$ or $g_{\alpha_{23}}(p)$ over this set of $\alpha$. The sum of $g_\alpha(p)$ taken over all compositions of $e$ into three parts is $g_4(p^e)$. Combining these facts with the observation that $(8n-25)+(4n-5)+(3n+5) = 5(3n-5)$ completes the proof. We now prove that the values in the table given above are correct. \begin{enumerate} \item $(a_{in},a_{jn},a_{kn}) = (1,1,1)$. Proposition \ref{prop:subring_matrix_divis} (2) implies that if $A$ is a subring matrix, then \[ \begin{pmatrix} p^{e_i} & a_{ij} & a_{ik} & 1\\ 0 & p^{e_j} & a_{jk} & 1 \\ 0 & 0 & p^{e_k}& 1 \\ 0 & 0 & 0& 1 \end{pmatrix} \] is an irreducible subring matrix. There are $g_{\alpha}(p)$ possibilities for the triple of entries $a_{ij},a_{ik},a_{jk}$. Any such choice completely determines the entries of $A$. It is easy to check that each of these choices does give a subring matrix.
\item $(a_{in},a_{jn},a_{kn}) = (0,0,0)$. As in the previous case, there are $g_\alpha(p)$ choices for the triple of entries $a_{ij},a_{ik},a_{jk}$. There are $n-i-3$ choices for which entry in row $i$ is equal to $1$, $n-j-2$ choices for which entry in row $j$ is equal to $1$, and $n-k-1$ choices for which entry in row $k$ is equal to $1$. These choices are independent and completely determine the entries of $A$. It is easy to check that every set of choices gives a subring matrix. \end{enumerate} We discuss two of the six additional possibilities for $(a_{in},a_{jn},a_{kn})$. The other cases are very similar. \begin{enumerate}\setcounter{enumi}{2} \item $(a_{in},a_{jn},a_{kn}) = (0,1,1)$. Proposition \ref{liulastcol} implies that $a_{ij} = a_{ik} = 0$. Proposition \ref{prop:subring_matrix_divis} (2) implies that if $A$ is a subring matrix, then \[ \begin{pmatrix} p^{e_i} & 0 & 0 & 1\\ 0 & p^{e_j} & a_{jk} & 1 \\ 0 & 0 & p^{e_k}& 1 \\ 0 & 0 & 0& 1 \end{pmatrix} \] is an irreducible subring matrix. This is an irreducible subring matrix if and only if \[ \begin{pmatrix} p^{e_j} & a_{jk} & 1 \\ 0 & p^{e_k}& 1 \\ 0 & 0& 1 \end{pmatrix} \] is an irreducible subring matrix. There are $g_{\alpha_{23}}(p)$ such matrices. There are $n-i-3$ choices for which entry in row $i$ is equal to $1$. These choices are independent and completely determine the entries of $A$. It is easy to check that every set of choices gives a subring matrix. \item $(a_{in},a_{jn},a_{kn}) = (0,1,0)$. Proposition \ref{liulastcol} implies that $a_{ij} = a_{jk} = 0$. Proposition \ref{prop:subring_matrix_divis} (2) implies that if $A$ is a subring matrix, then \[ \begin{pmatrix} p^{e_i} & 0 & a_{ik} & 1\\ 0 & p^{e_j} & 0 & 1 \\ 0 & 0 & p^{e_k}& 1 \\ 0 & 0 & 0& 1 \end{pmatrix} \] is an irreducible subring matrix. This is an irreducible subring matrix if and only if \[ \begin{pmatrix} p^{e_i} & a_{ik} & 1 \\ 0 & p^{e_k}& 1 \\ 0 & 0& 1 \end{pmatrix} \] is an irreducible subring matrix.
There are $g_{\alpha_{13}}(p)$ such matrices. There are $n-i-3$ choices for which entry in row $i$ is equal to $1$ and $n-k-1$ choices for which entry in row $k$ is equal to $1$. These choices are independent and completely determine the entries of $A$. It is easy to check that every set of choices gives a subring matrix. \end{enumerate} We omit the four remaining cases since they are very similar to the ones described here. \end{proof} We now give the proof of Theorem \ref{general_upper}. \begin{proof}[Proof of Theorem \ref{general_upper}] Theorem \ref{corank_diag_thm} implies that a subring of $\Z^n$ of corank $k$ and index $p^e$ is the column span of an $n \times n$ matrix $A$ with diagonal $(p^{e_1}, p^{e_2},\ldots, p^{e_{n-1}},1)$ where exactly $k$ of these first $n-1$ diagonal entries are positive powers of $p$. Suppose these entries occur in rows $i_1, i_2, \ldots, i_k$, where $1 \le i_1 < i_2< \cdots < i_k \le n-1$. Let $\alpha = (e_{i_1},e_{i_2},\ldots, e_{i_k})$. Note that $\alpha$ is a composition of $e$ into $k$ parts. Proposition \ref{prop:subring_matrix_divis} (2) implies that \[ \begin{pmatrix} p^{e_{i_1}} & a_{i_1 i_2} & \cdots & a_{i_1 i_k} & 1\\ 0 & p^{e_{i_2}} & \cdots & a_{i_2 i_k} & 1 \\ 0 & 0 & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & p^{e_{i_k}}& 1 \\ 0 & 0 & 0& 0 & 1 \end{pmatrix} \] is an irreducible subring matrix. There are $g_\alpha(p)$ choices for the entries of this matrix. Proposition \ref{prop_exactly_1} (2) implies that in each of rows $i_1, i_2,\ldots, i_k$ of the matrix $A$, there is precisely one entry equal to $1$. Once we make a choice for where these entries are, we have completely determined the entries of the matrix $A$. For the lower bound, we note that if we choose each of these entries to be in the final column of $A$, then it is easy to check that we do get a subring matrix.
For the upper bound, we note that in each one of these rows, the entry equal to $1$ cannot lie in the columns $i_1, \ldots, i_k$, so there are at most $n-k$ choices for where this $1$ could be. This means that the number of choices for $A$ where the collection of rows in which the diagonal entry is not equal to $1$ is $i_1, i_2, \ldots, i_k$ and these diagonal entries give the composition $\alpha$ is at least $g_\alpha(p)$ and is at most $(n-k)^k g_\alpha(p)$. Taking a sum over all $\binom{n-1}{k}$ choices for $(i_1,\ldots, i_k)$ and over all compositions $\alpha$ of $e$ into $k$ parts completes the proof of the first part of the theorem. For the second part, recall that $\tilde{h}_{n,k}(p^e) = \sum_{\ell=1}^{k} h_{n,\ell}(p^e)$. For any positive integer $\ell$, there is a trivial upper bound $\binom{n-1}{\ell} \le 2^{n-1}$. It is easy to see that for any $\ell \le k$, we have $g_{\ell+1}(p^e) \le f_{\ell+1}(p^e) \le f_{k+1}(p^e)$. We conclude that $\tilde{h}_{n,k}(p^e)$ is at most \[ \sum_{\ell=1}^{k} (n-\ell)^\ell \binom{n-1}{\ell} g_{\ell+1}(p^e) \le \sum_{\ell=1}^{k} (n-1)^k 2^{n-1} f_{k+1}(p^e) \le k (n-1)^k 2^{n-1} f_{k+1}(p^e). \] \end{proof} \section{Subrings of $\Z^n$ of given corank for $n \le 5$}\label{corank_smalln} When $n \ge 6$, Theorem \ref{proportion_corank_k} states that the proportion of subrings in $\Z^n$ with `small' corank is 0\%. In this section, we study subrings of $\Z^n$ with given corank when $n \le 5$. For each $n \le 4$ we compute the proportion of subrings of each fixed corank. For $n =5$ we show that the proportion of subrings of each fixed corank is positive, but we cannot determine exactly what these proportions are. Theorem \ref{proportion_corank_k} (1) implies that 100\% of subrings of $\Z^6$ have corank 4 or 5. We are unable to determine whether a positive proportion of these subrings have corank $4$ because we cannot currently determine the order of the pole at $s=1$ of $H_{6,4}(X)$. 
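The two elementary estimates used at the end of the proof of Theorem \ref{general_upper}, namely $\binom{n-1}{\ell}\le 2^{n-1}$ and $(n-\ell)^\ell \le (n-1)^k$ for $\ell \le k$, can be sanity-checked by brute force. The following Python sketch is purely illustrative (the function name and ranges are our own, not from the paper):

```python
from math import comb

def bounds_hold(n_max=12):
    """Brute-force check: for 2 <= n <= n_max and 1 <= l <= k <= n-1,
    (n-l)^l * C(n-1, l) <= (n-1)^k * 2^(n-1)."""
    for n in range(2, n_max + 1):
        for k in range(1, n):
            for l in range(1, k + 1):
                if (n - l) ** l * comb(n - 1, l) > (n - 1) ** k * 2 ** (n - 1):
                    return False
    return True

print(bounds_hold())  # expect True
```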
We will see below that a positive proportion of these subrings have corank $5$. In Proposition \ref{subringZ2} we saw that for each positive integer $k$ there is a unique subring of $\Z^2$ of index $k$ and that every proper subring of $\Z^2$ has corank $1$. We next consider subrings of~$\Z^3$. \begin{cor} We have \begin{align*} p_{3,1}^R &= \zeta(2)\prod_p (p^{-3}(p-1)^2(p+2)) \approx .471683\\ p_{3,2}^R &= 1 - p_{3,1}^R \approx .528317. \end{align*} \end{cor} \begin{proof} This follows from Corollary \ref{cor:cocyclic_asymptotic}, Corollary \ref{Nn_growth}, and the fact that every proper subring of $\Z^3$ has corank $1$ or $2$. \end{proof} We next consider subrings of $\Z^4$. Every proper subring in $\Z^4$ has corank $1,2$, or $3$. Specializing the formula for $\zeta_{\Z^n}^{R,(2)}(s)$ given in Proposition \ref{prop:corank2_zeta} to the case $n=4$ gives the following. \begin{cor}\label{crk2_4_cor} We have \begin{align*} \zeta_{\Z^4}^{R,(2)}(s) &= \zeta(3s-1)\zeta(s)^6 \prod_p\bigg( 1 - 6 p^{1 - 9 s} + 24 p^{1 - 8 s} - 33 p^{1 - 7 s}\\ &+ 12 p^{1 - 6 s} + 12 p^{1 - 5 s} - 12 p^{1 - 4 s} + 3 p^{1 - 3 s}- 4 p^{-7 s} + 18 p^{-6 s}\\ &- 28 p^{-5 s}+ 13 p^{-4 s }+ 8 p^{-3s} - 8 p^{-2s}\bigg). \end{align*} \end{cor} This expression leads directly to the following result. \begin{cor} We have \begin{align*} p_{4,1}^R &=\zeta(2)^3\prod_p\left( p^{-6}(p - 1)^5(p + 5)\right)\approx .0593079 \\ p_{4,2}^R &= \zeta(2)^4\prod_p p^{-8}\left((p-1)^5 (1+p) (p^2+4p+6)\right) - p_{4,1}^R \approx .4389531\\ p_{4,3}^R &= 1 - p_{4,1}^R - p_{4,2}^R \approx .501739. \end{align*} \end{cor} \begin{proof} This follows from Corollaries \ref{Nn_growth}, \ref{cor:cocyclic_asymptotic}, \ref{crk2_4_cor}, and from the fact that every proper subring of $\Z^4$ has corank $1,2$, or $3$. \end{proof} Theorem \ref{N5X} says that there exists a positive real number $C_5$ such that \[ N_5(X) \sim C_5 X (\log X)^9. \] However, it is not currently known what this constant $C_5$ is. 
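The numerical values of the Euler products above are easy to confirm. The following Python sketch (an independent numerical check, not part of the paper) truncates each product over primes up to $2\cdot 10^5$, absorbing the $\zeta(2)$ factors prime by prime:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

p31 = 1.0  # p_{3,1}^R = zeta(2) * prod_p p^{-3}(p-1)^2(p+2)
p41 = 1.0  # p_{4,1}^R = zeta(2)^3 * prod_p p^{-6}(p-1)^5(p+5)
for p in primes_up_to(200_000):
    # each factor below already includes the local zeta(2) factor (1-p^{-2})^{-1}
    p31 *= (p**3 - 3 * p + 2) / (p**3 - p)
    p41 *= (p - 1) ** 2 * (p + 5) / (p + 1) ** 3
print(p31, p41)  # approximately 0.471683 and 0.0593079
```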
Theorem \ref{Hnk_123} says that $H_{5,1}(X), H_{5,2}(X)$, and $H_{5,3}(X)$ each have similar rates of growth. We can conclude that $p_{5,k}^R >0$ for each $k \in \{1,2,3\}$, but we are not able to determine the exact values of these probabilities. We can determine their relative frequencies, for example, $\frac{p_{5,2}^R}{p_{5,1}^R} \approx 59.801$ and $\frac{p_{5,3}^R}{p_{5,1}^R} \approx 679.548$. We now prove that $p_{5,4}^R > 0$, or equivalently, that $p_{5,1}^R + p_{5,2}^R + p_{5,3}^R < 1$. We do this by proving more generally that $p_{n,n-1}^R>0$ for any positive integer $n$. For the rest of the paper, if $G$ is a finite abelian group we write $G_p$ for its Sylow $p$-subgroup. \begin{thm}\label{corank_n-1} Let $n$ be a positive integer and $p$ be a prime. Then \[ \lim_{X\rightarrow \infty} \frac{\#\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] \le X \text{ and } (\Z^n/R)_p \cong (\Z/p\Z)^{n-1}\} }{N_n(X)} > 0. \] Therefore, $p^R_{n,n-1} > 0$. \end{thm} The second statement follows from the first by noting that a finite abelian group has rank at most $r$ if and only if each of its Sylow $p$-subgroups has rank at most $r$. If $R \subseteq \Z^n$ is a finite index subring then $\Z^n/R$ is a finite abelian group of rank at most $n-1$. Therefore, if $(\Z^n/R)_p$ has rank $n-1$, then $\Z^n/R$ has rank $n-1$. The main idea in the proof of Theorem \ref{corank_n-1} is to break $R$ up into `prime power components'. Before explaining what we mean exactly, we highlight a particular subring of $\Z^n$ that plays an important role in our argument. \begin{prop}\label{unique_Rpstar} Let $n \ge 2$ be a positive integer, $p$ be a prime, and $e_1,e_2,\ldots, e_n$ be the standard basis vectors of $\R^n$. There is a unique subring $R_p^* \subsetneq \Z^n$ such that $\Z^n/R_p^* \cong (\Z/p\Z)^{n-1}$. This subring is generated by $pe_1, pe_2,\ldots, pe_{n-1}, e_1+\cdots + e_n$. 
\end{prop} \begin{proof} Suppose $R \subseteq \Z^n$ is a subring for which $\Z^n/R \cong (\Z/p\Z)^{n-1}$. It is given by $\col(A)$ for an $n \times n$ subring matrix $A$. Since $[\Z^n\colon \col(A)] = p^{n-1}$ and $\col(A)$ has corank $n-1$, Theorem \ref{corank_diag_thm} implies that the diagonal of $A$ must be $(p,p,\ldots, p, 1)$, and Proposition \ref{liu33} implies that $A$ is an irreducible subring matrix. Proposition \ref{liu31} implies that every entry in the last column of $A$ is $1$ and all of the other nonzero entries of $A$ are the entries equal to $p$ on the diagonal. In particular, $A$ is unique. \end{proof} We need one additional piece in order to prove Theorem \ref{corank_n-1}. \begin{prop}\label{p-part_trivial} Let $n$ be a positive integer and $p$ be a prime. Then \[ \lim_{X\rightarrow \infty} \frac{\#\{\text{Subrings } R \subseteq \Z^n\colon [\Z^n\colon R] \le X \text{ and } p \nmid [\Z^n \colon R]\} }{N_n(X)} = \zeta_{\Z^n,p}^R(\alpha)^{-1} > 0, \] where $\alpha$ is the abscissa of convergence of $\zeta_{\Z^n}^R(s)$. \end{prop} \begin{proof} The asymptotic formula for the expression in the numerator comes from applying Theorem \ref{tauberian_theorem} to the function $\zeta_{\Z^n,p}^R(s)^{-1} \zeta_{\Z^n}^R(s)$. Suppose that the rightmost pole of $\zeta_{\Z^n}^R(s)$ occurs at $s=\alpha$. The results of \cite[Section 4]{dusautoy_grunewald} imply that the abscissa of convergence of each local factor $\zeta_{\Z^n,p}^R(s)$ occurs to the left of $\alpha$, and so $\zeta_{\Z^n,p}^R(\alpha)$ is a positive real number. The only difference in the Euler products defining the counting functions in the numerator and in the denominator of this proposition is the factor of $\zeta_{\Z^n,p}^R(s)$ that occurs in the denominator but not in the numerator. Theorem \ref{tauberian_theorem} implies that the ratio in the proposition converges to $ \zeta_{\Z^n,p}^R(\alpha)^{-1}$. 
\end{proof} \begin{proof}[Proof of Theorem \ref{corank_n-1}] Let $p_1,\ldots, p_r$ be distinct primes, $a_1, \ldots, a_r$ be positive integers, and $k = p_1^{a_1}\cdots p_r^{a_r}$. The Euler product for $\zeta_{\Z^n}^R(s)$ reflects the fact that there is a bijection between subrings $R \subseteq \Z^n$ with $[\Z^n \colon R] = k$ and collections of subrings $(R_1,R_2,\ldots, R_r)$ where each $R_i \subseteq \Z^n$ is a subring with $[\Z^n\colon R_i] = p_i^{a_i}$. For a description of how to find these subrings $R_i$ given the matrix $H$ in Hermite normal form for which $\col(H) = R$, see \cite[Section 4]{CKK}. One can interpret this fact by noting that for any prime $p,\ \Z^n \hookrightarrow \Z_p^n$, so a subring $R \subseteq \Z^n$ gives a subring of $\Z_p^n$ for each $p$, where we get a proper subring if and only if $p \mid [\Z^n \colon R]$. Proposition \ref{unique_Rpstar} implies that $(\Z^n/R)_p \cong (\Z/p\Z)^{n-1}$ if and only if $R_p = R_p^*$. In this way, we see that there is a bijection between \[ \mathcal{A}_X = \{\text{Subrings } R \subset \Z^n\colon [\Z^n \colon R] \le X \text{ and } (\Z^n/R)_p \cong (\Z/p\Z)^{n-1}\} \] and \[ \mathcal{B}_X = \left\{\text{Subrings } R \subset \Z^n\colon [\Z^n \colon R] \le \frac{X}{p^{n-1}} \text{ and } p \nmid [\Z^n \colon R]\right\}. \] Proposition \ref{p-part_trivial} implies that as $X \rightarrow \infty$ the set $\mathcal{B}_X$ includes a positive proportion of all subrings of $\Z^n$ of index at most $X$. \end{proof} \appendix \section*{Appendix: A conjecture for the cotype zeta function of $\Z^4$\\ (by Gautam Chinta)} In this appendix we present a conjecture for the cotype zeta function of the ring $\Z^4$ and show how it is compatible with the results of Section \ref{sec_corank} and Section \ref{corank_smalln} on counting subrings of corank less than or equal to 3. 
We begin with a definition of the cotype zeta function, which generalizes both the subring zeta function of Definition \ref{defn:subring_zeta} and the corank at most $k$ zeta function of Definition \ref{defn:corank_at_most}. A finite index subring $R$ of $\Z^n$ will have corank at most $n-1$ since $(1,1,\ldots,1)\in R.$ Let $\alpha_1(R), \ldots, \alpha_{n-1}(R)$ be the invariant factors of the group $\Z^n/R$, where we set $\alpha_i(R)=1$ if $i$ is bigger than the corank of $R$. For a tuple $\alpha=(\alpha_1,\ldots,\alpha_{n-1})$ of positive integers with $\alpha_{i+1}\mid\alpha_i$, set $f_{n}(\alpha)=f_n(\alpha_1,\ldots, \alpha_{n-1})$ to be the number of subrings of $\Z^n$ of cotype $\alpha.$ \begin{defnA} The {\em subring cotype zeta function of} $\Z^n$ is \[\zeta_{\Z^n}^R(s_1, \ldots, s_{n-1}) =\sum_{\alpha_{n-1}\mid\alpha_{n-2}\mid\cdots\mid\alpha_1} \frac{f_n(\alpha_1, \ldots, \alpha_{n-1})}{\alpha_1^{s_1}\cdots\alpha_{n-1}^{s_{n-1}}}. \] \end{defnA} We will also refer to $\zeta_{\Z^n}^R(s_1, \ldots, s_{n-1})$ more simply as the {\em cotype zeta function of } $\Z^n$. Since $\alpha_1(R)\cdots \alpha_{n-1}(R)=[\Z^n:R]$ we have the relation \begin{equation} \label{eq:app1} \zeta_{\Z^n}^R(s)=\zeta_{\Z^n}^R(s, \ldots, s). \end{equation} (We hope no notational confusion will arise from letting the number of arguments distinguish between the single-variable subring zeta function on the left of (\ref{eq:app1}) and the multivariate cotype subring zeta function on the right.) Just as with the single-variable subring zeta function, the cotype zeta function has an Euler product: \begin{equation*} \zeta_{\Z^n}^R(s_1, \ldots, s_{n-1})= \prod_p F_n(p;p^{-s_1},\dots, p^{-s_{n-1}}) \end{equation*} for a rational function $F_n$ in $p^{-s_1},\dots, p^{-s_{n-1}}$. 
A straightforward calculation yields \begin{itemize} \item For $\Z^2$: $$F_{2}(p;x)=\frac 1{1-x} $$ \item For $\Z^3$: $$F_3(p;x,y)=\frac {1+2x-2x^2y-x^3y}{(1-x)(1-xy)(1-px^2y)} $$ \end{itemize} where we have set $x=p^{-s_1}, y=p^{-s_2}.$ Note the functional equations \begin{align} \label{eq:fes} F_2(1/p;1/x)&=-x\,F_2(p;x)\\ \nonumber F_3(1/p;1/x, 1/y)&=pxyF_3(p;x,y) \end{align} and the specialization \begin{equation*} F_{3}(p;x,x)=\frac{(1+x)^2}{(1-x)(1-px^3)}, \end{equation*} in agreement with the local factor of $\zeta_{\Z^3}^R(s)$ in Theorem \ref{GeneratingFunctions}. \subsection*{A conjecture for $\Z^4$} The subring cotype zeta function of $\Z^n$ has not been explicitly computed for $n\geq 4$. In this section, we give a conjecture for $n=4$ based on computer calculations. By virtue of the Euler product, it suffices to define the local factor $F_4(p;x,y,z).$ \begin{conjA}\label{app:conjecture} The local factor of the subring cotype zeta function of $\Z^4$ is $$F_4(p;x,y,z)=\frac {N(p;x,y,z)}{D(p;x,y,z)} $$ where the denominator is \begin{equation*} D(p;x,y,z)= (1-x)(1-xy)(1-xyz)(1-px^2y)(1-p^2x^2yz)(1-p^2x^2y^2z)(1-p^3x^3y^2z) \end{equation*} and the numerator $N(p;x,y,z)$ is the polynomial of total degree 21 (in $x,y,z$) with coefficients as given in Table \ref{tab:1}. 
\end{conjA} \begin{table}[htbp] \parbox{.45\linewidth}{ \centering \begin{tabular}{|l|l|} \hline monomial & coefficient \\ \hline \hline $1$ & $1$ \\ \hline $x$ & $5$ \\ \hline $x y$ & $6$ \\ \hline $x^{2} y$ & $3 \, p - 2$ \\ \hline $x^{2} y z$ & $p - 5$ \\ \hline $x^{3} y$ & $3 \, p - 4$ \\ \hline $x^{2} y^{2} z$ & $p - 6$ \\ \hline $x^{3} y z$ & $-p - 1$ \\ \hline $x^{3} y^{2}$ & $-6 \, p$ \\ \hline $x^{3} y^{2} z$ & $-5 \, p^{2} - 5 \, p + 1$ \\ \hline $x^{4} y^{2}$ & $-6 \, p$ \\ \hline $x^{3} y^{3} z$ & $-p - 1$ \\ \hline $x^{4} y^{2} z$ & $-14 \, p^{2} - 3 \, p + 5$ \\ \hline $x^{4} y^{3} z$ & $-4 \, p^{3} - 6 \, p^{2} + 6 \, p + 1$ \\ \hline $x^{5} y^{2} z$ & $p^{2} + p$ \\ \hline $x^{4} y^{3} z^{2}$ & $-p^{3} + 5 \, p^{2}$ \\ \hline $x^{5} y^{3} z$ & $-7 \, p^{3} + 8 \, p^{2} + 8 \, p$ \\ \hline $x^{5} y^{3} z^{2}$ & $-p^{4} + 13 \, p^{2}$ \\ \hline $x^{5} y^{4} z$ & $p^{2} + p$ \\ \hline $x^{6} y^{3} z$ & $5 \, p^{3} + 4 \, p^{2} - p$ \\ \hline \end{tabular}} \parbox{.45\linewidth}{ \centering \begin{tabular}{|l|l|}\hline monomial & coefficient \\ \hline \hline $x^{11} y^{7} z^{3}$ & $p^{5}$ \\ \hline $x^{10} y^{7} z^{3}$ & $5 \, p^{5}$ \\ \hline $x^{10} y^{6} z^{3}$ & $6 \, p^{5}$ \\ \hline $x^{9} y^{6} z^{3}$ & $-2 \, p^{5} + 3 \, p^{4}$ \\ \hline $x^{9} y^{6} z^{2}$ & $-5 \, p^{5} + p^{4}$ \\ \hline $x^{8} y^{6} z^{3}$ & $-4 \, p^{5} + 3 \, p^{4}$ \\ \hline $x^{9} y^{5} z^{2}$ & $-6 \, p^{5} + p^{4}$ \\ \hline $x^{8} y^{6} z^{2}$ & $-p^{5} - p^{4}$ \\ \hline $x^{8} y^{5} z^{3}$ & $-6 \, p^{4}$ \\ \hline $x^{8} y^{5} z^{2}$ & $p^{5} - 5 \, p^{4} - 5 \, p^{3}$ \\ \hline $x^{7} y^{5} z^{3}$ & $-6 \, p^{4}$ \\ \hline $x^{8} y^{4} z^{2}$ & $-p^{5} - p^{4}$ \\ \hline $x^{7} y^{5} z^{2}$ & $5 \, p^{5} - 3 \, p^{4} - 14 \, p^{3}$ \\ \hline $x^{7} y^{4} z^{2}$ & $p^{5} + 6 \, p^{4} - 6 \, p^{3} - 4 \, p^{2}$ \\ \hline $x^{6} y^{5} z^{2}$ & $p^{4} + p^{3}$ \\ \hline $x^{7} y^{4} z$ & $5 \, p^{3} - p^{2}$ \\ \hline $x^{6} y^{4} z^{2}$ & $8 \, p^{4} + 8 \, p^{3} - 
7 \, p^{2}$ \\ \hline $x^{6} y^{4} z$ & $13 \, p^{3} - p$ \\ \hline $x^{6} y^{3} z^{2}$ & $p^{4} + p^{3}$ \\ \hline $x^{5} y^{4} z^{2}$ & $-p^{4} + 4 \, p^{3} + 5 \, p^{2}$ \\ \hline \end{tabular}} \caption{Coefficients of the numerator $N(p;x,y,z)$} \label{tab:1} \end{table} This conjecture was obtained by first enumerating all the subrings (and cotypes) of $\Z_2^4$ of index less than $2^{23}$. A similar computation was done for subrings of $\Z_p^4$ for $p=3,5$ and $7$. Putting these computations together under the assumption that $N(p;x,y,z)$ is a polynomial in $p$ leads to Conjecture A\ref{app:conjecture}. \begin{rmkA} Note the functional equation \[F_4(1/p;1/x,1/y,1/z)=-p^3xyzF_4(p;x,y,z).\] This functional equation and the ones in (\ref{eq:fes}) above are consistent with the results of Voll \cite{Voll2010}. Furthermore, the specialization $F_4(p;p^{-s}, p^{-s}, p^{-s})$ agrees with the local factor of $\zeta_{\Z^4}^R(s)$ in Theorem \ref{GeneratingFunctions}. \end{rmkA} \begin{rmkA} We can show that the conjecture is compatible with the results of Section \ref{sec_corank} for $n=4$. Explicitly, the corank at most $k$ zeta function of $\Z^n$ can be obtained from the cotype zeta function as follows: \begin{equation} \label{eq:app5} \zeta_{\Z^n}^{R,(k)}(s) = \lim_{s_{k+1}, \ldots, s_{n-1}\to\infty} \zeta_{\Z^n}^{R}(s,\ldots,s,s_{k+1}, \ldots, s_{n-1}). \end{equation} When $n=4$ and $k=1$ or 2, we substitute the expression from Conjecture A\ref{app:conjecture} into the right-hand side of (\ref{eq:app5}) to get \begin{align*} \zeta_{\Z^4}^{R,(1)}(s) &= \prod_p\frac{ 1+5p^{-s}}{1-p^{-s}},\\ \zeta_{\Z^4}^{R,(2)}(s) &=\zeta(s)^6\cdot\prod_p\frac{\left( 1+4p^{-s}+2p^{-2s}+(3p-4)p^{-3s}-6p^{1-5s}\right)(1-p^{-s})^4} {1-p^{1-3s}}. \end{align*} These expressions for the corank at most 1 and 2 zeta functions agree with Proposition \ref{cocyclic_zeta} when $n=4$ and Corollary \ref{crk2_4_cor}.
\end{rmkA} \begin{rmkA} Using the expression for the cotype zeta function of $\Z^4$, it should be possible to similarly obtain results for counting subrings of $\Z^n$ of corank $1, 2$, or $3$ for arbitrary $n$. This will follow from an analogue of Liu's recursion relation \cite[Proposition 4.4]{Liu} expressing reducible subrings in $\Z^n$ in terms of subrings of $\Z^m$, for $m$ less than $n$. In fact, a suitable refinement of Liu's relation will identify subrings of $\Z^n$ of corank $m$ with products of subrings of $\Z^{m'}$ with $m'\leq m.$ \end{rmkA} \subsection*{Acknowledgments} The authors thank Gautam Chinta and Ramin Takloo-Bighash for helpful discussions, and in particular for letting us know about the lower bound for subrings of $\Z^6$ given in Proposition \ref{Z6_lower}. The second author was supported by NSF grants DMS 1802281 and DMS 2154223. \bibliographystyle{habbrv} \bibliography{bib_all} \end{document}
2412.18761v1
http://arxiv.org/abs/2412.18761v1
Tail Dependence of Multivariate Archimedean Copulas
\documentclass[12pt]{article} \usepackage{amsmath,amssymb,amsthm} \usepackage[mathscr]{eucal} \usepackage{rotate,graphics,epsfig} \usepackage{color} \usepackage{hyperref} \oddsidemargin=0in \evensidemargin=0in \textwidth=6.5in \headheight=0pt \headsep=0pt \topmargin=0in \textheight=8.6in \renewcommand{\baselinestretch}{1.2} \newtheorem{The}{Theorem} \newtheorem{Exa}[The]{Example} \newtheorem{Cor}[The]{Corollary} \newtheorem{Pro}[The]{Proposition} \newtheorem{Lem}[The]{Lemma} \theoremstyle{definition} \newtheorem{Def}[The]{Definition} \newtheorem{Rem}[The]{Remark} \numberwithin{equation}{section} \numberwithin{The}{section} \newcommand{\be}{\begin{eqnarray}} \newcommand{\ee}{\end{eqnarray}} \newcommand{\by}{\begin{eqnarray*}} \newcommand{\ey}{\end{eqnarray*}} \newcommand{\bn}{\begin{enumerate}} \newcommand{\en}{\end{enumerate}} \newcommand{\bi}{\begin{itemize}} \newcommand{\ei}{\end{itemize}} \def\red#1{{\textcolor{red}{#1}}} \def\oo{\infty} \def\a{\alpha} \def\p{\partial} \def\lm{\lambda} \def\Fbar{{\overline F}} \def\Cbar{{\overline C}} \def\g{\kappa} \def\G{\kappa} \def\half{{\textstyle {1\over 2}}} \def\rt#1{\sqrt{#1}\,} \def\frac#1#2{{#1 \over #2}} \def\th{\theta} \def\de{\delta} \def\k{\kappa} \def\Phibar{{\overline\Phi}} \def\ze{\zeta} \def\s{\sigma} \def\eqd{\,{\buildrel d \over =}\,} \def\b{\beta} \def\quarter{{\textstyle {1\over 4}}} \def\fr{Fr\'echet } \def\RV{\mbox{\rm{RV}}} \def\sign{{\rm sign}\,} \def\xibf{\boldsymbol{\xi}} \def\debf{\boldsymbol{\delta}} \def \Re {\mathbb{R}} \def \MDA {\mbox{\rm{MDA}}} \def \GUM {\mbox{\rm{Gumbel}}} \def \FRE {\mbox{\rm{Fr\'{e}chet}}} \def \PAR {\mbox{\rm{Pareto}}} \def \P {\mathbb{P}} \def \E {\mathbb{E}} \def\xx{\boldsymbol{x}} \def\XX{\boldsymbol{X}} \def\mmu{\boldsymbol{\mu}} \def\ddelta{\boldsymbol{\delta}} \def\llambda{\boldsymbol{\lambda}} \def\ttheta{\boldsymbol{\theta}} \def\yy{\boldsymbol{y}} \def\zz{\boldsymbol{z}} \def\YY{\boldsymbol{Y}} \def\UU{\boldsymbol{U}} \def\VV{\boldsymbol{V}} 
\def\RR{\boldsymbol{R}} \def\aa{\boldsymbol{a}} \def\ww{\boldsymbol{w}} \def\vv{\boldsymbol{v}} \def\ss{\boldsymbol{s}} \def\Zero{\boldsymbol{0}} \def\boo{\boldsymbol{\oo}} \def\ra{\rightarrow} \def\la{\leftarrow} \def\rav{{\buildrel v \over \rightarrow}} \def \red#1{\textcolor{red}{#1}} \def \pf {\noindent \emph{Proof: }} \def \QED {\hfill $\square$} \def \RV {\mbox{\rm{RV}}} \begin{document} \title{Tail Dependence of Multivariate Archimedean Copulas} \author{ Haijun Li\footnote{{\small\texttt{[email protected]}}, Department of Mathematics and Statistics, Washington State University, Pullman, WA 99164, U.S.A.} } \date{December 2024} \maketitle \begin{abstract} Archimedean copulas generated by Laplace transforms have been extensively studied in the literature, with much of the focus on tail dependence limited only to cases where the Laplace transforms exhibit regular variation with positive tail indices. In this paper, we extend the investigation to include Archimedean copulas associated with both slowly varying and rapidly varying Laplace transforms. We show that tail dependence functions with various tail orders effectively capture the extremal dependence across the entire class of Archimedean copulas, reflecting the full spectrum of tail behaviors exhibited by the underlying Laplace transforms. \medskip \noindent \textbf{Key words and phrases}: Tail dependence, regular variation, rapid variation, slow variation. \end{abstract} \section{Introduction} \label{S1} Archimedean copulas constitute a fundamental class of copula functions that are widely used in copula theory and applications (see, e.g., \cite{Nelsen2006, Joe2014, JL11}), and can be best illustrated by using scale mixtures of independent and identically distributed exponential random variables. Let $E_1, \dots, E_d$ be independent and exponentially distributed random variables with unit mean, and $R$ be a strictly positive random variable that is independent of $E_i$, $1\le i\le d$.
Define: \begin{equation} \label{mix exponential} { X}=(X_1, \dots, X_d) := (RE_1, \dots, RE_d). \end{equation} Assume that the Laplace transform of $R^{-1}$ exists and is given by $\varphi(t)$, which is continuous and decreasing with $\varphi(0)=1$ and $\lim_{t \to \infty}\varphi(t)=0$. The marginal survival function of $X_i$ is then described as $\overline{F}_i(t) = \mathbb{P}(X_i>t)= \mathbb{E}(e^{-t/R}) = \varphi(t)$, $t\ge 0$, which implies that its inverse is $\overline{F}_i^{\,-1}(u_i) = \varphi^{-1}(u_i)$, for $0\le u_i\le 1$, $1\le i\le d$. The joint distribution function of the transformed scale mixture $(\varphi(X_1), \dots, \varphi(X_d))$, after applying $\varphi$ to $X$, is now derived as follows. \begin{eqnarray} {C}(u_1, \dots, u_d) &:=& \mathbb{P}\big(RE_1>\overline{F}_1^{\,-1}(u_1), \dots, RE_d>\overline{F}_d^{\,-1}(u_d)\big)= \mathbb{E}\big(e^{-[\sum_{i=1}^d\varphi^{-1}(u_i)]/R}\big)\nonumber\\ &=& \varphi\Big(\sum_{i=1}^d\varphi^{-1}(u_i)\Big),~\forall~ (u_1, \dots, u_d)\in [0,1]^d.\label{Archimedean copula} \end{eqnarray} The function \eqref{Archimedean copula} is known as an Archimedean copula with {\em generator} $g:=\varphi^{-1}$. In general, a copula $C: [0,1]^d\to [0,1]$ is defined as a distribution function of a random vector $(U_1, \dots, U_d)$ with univariate, uniformly distributed margins $U_i$ on $[0,1]$, $1\le i\le d$; see \cite{Joe97, Joe2014} for detailed discussions on copulas and applications. Correspondingly, the distribution $\widehat{C}$ of the dual vector $(1-U_1, \dots, 1-U_d)$ is called the survival copula. As demonstrated from scale mixture \eqref{mix exponential}, any random vector can be decomposed into univariate margins and a copula, which encodes all the marginally scale-free dependence information, so that marginal behaviors and dependence structure can be studied separately.
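To make the construction \eqref{mix exponential}--\eqref{Archimedean copula} concrete, here is a small Monte Carlo sketch (an illustration of ours, not from the paper): taking $1/R \sim \mathrm{Gamma}(1/\theta, 1)$ gives $\varphi(t) = (1+t)^{-1/\theta}$, and \eqref{Archimedean copula} becomes the Clayton copula $C(u_1,\dots,u_d) = (\sum_i u_i^{-\theta} - d + 1)^{-1/\theta}$; the simulated orthant probability of the transformed scale mixture matches the copula formula.

```python
import random

# Illustration (assumed setup): 1/R ~ Gamma(1/theta, 1), so phi(t) = (1+t)^(-1/theta).
random.seed(0)
theta, d, n = 1.0, 2, 200_000
phi = lambda t: (1.0 + t) ** (-1.0 / theta)  # Laplace transform of 1/R

u = (0.3, 0.6)
hits = 0
for _ in range(n):
    w = random.gammavariate(1.0 / theta, 1.0)            # w = 1/R
    x = [random.expovariate(1.0) / w for _ in range(d)]  # X_i = R * E_i
    if all(phi(x[i]) <= u[i] for i in range(d)):         # U_i = phi(X_i)
        hits += 1

mc = hits / n
exact = (sum(ui ** -theta for ui in u) - d + 1) ** (-1.0 / theta)
print(mc, exact)  # exact = 0.25 here; the estimate should be within ~0.01
```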
Observe that the Laplace transform $\varphi$ of a positive random variable is {completely monotone} on $[0,\infty)$; that is, $\varphi$ has derivatives of all orders and \begin{equation} \label{complete monotone} (-1)^k\frac{d^k\varphi(t)}{dt^k}\ge 0, ~\forall~t\in [0,\infty), ~ k=0, 1, 2, \dots. \end{equation} In fact, since the Laplace transforms of positive random variables coincide with completely monotone functions on $[0, \infty)$ taking the value 1 at 0, the condition \eqref{complete monotone} is also sufficient for \eqref{Archimedean copula} to be a copula, according to Kimberling's theorem (see \cite{Kimberling74}). \begin{Pro}\rm \label{Kimberling} Let $\varphi: \mathbb{R}_+\to [0,1]$ be continuous and strictly decreasing with boundary conditions $\varphi(0)=1$ and $\lim_{t\to \infty}\varphi(t)=0$. The function \eqref{Archimedean copula} defined on $[0,1]^d$ is a copula for any $d\ge 2$ if and only if $\varphi$ is completely monotone. \end{Pro} It is observed from \eqref{Archimedean copula} that the decay rate of probabilities of the scale mixture $X$ in upper orthants, or equivalently, the lower-orthant decay rate of the Archimedean copula $C$, is determined by the right-tail behavior of the Laplace transform $\varphi$. The most commonly examined univariate right-tail pattern is power-law behavior, also known as regular variation: $\varphi$ is said to be {\em regularly varying at $\infty$} with tail parameter $-\alpha$, denoted by $\varphi\in \mbox{RV}_{-\alpha}$, if for all $x>0$, \begin{equation} \label{univ RV} \frac{\varphi(tx)}{\varphi(t)}\to x^{-\alpha}, \ \mbox{as}\ t\to \infty. \end{equation} It is straightforward that $\varphi\in \mbox{RV}_{-\alpha}$ if and only if $\varphi(x) = \ell(x)x^{-\alpha}$ for some $\ell(x)\in \mbox{RV}_{0}$, which is known as {\em slowly varying at $\infty$} \cite{Resnick07}.
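For instance, $\varphi(t)=(1+t)^{-1/\theta}$ (the Clayton Laplace transform, used here purely as an illustration of ours) lies in $\mbox{RV}_{-1/\theta}$, and the defining limit \eqref{univ RV} can be observed numerically:

```python
# Numerical illustration of the defining limit of regular variation:
# phi(t*x)/phi(t) -> x^(-alpha) as t -> infinity, with alpha = 1/theta.
theta = 2.0
phi = lambda t: (1.0 + t) ** (-1.0 / theta)

x = 3.0
limit = x ** (-1.0 / theta)  # predicted limit x^(-alpha)
ratios = [phi(t * x) / phi(t) for t in (1e2, 1e4, 1e6)]
errors = [abs(r - limit) for r in ratios]
print(ratios, limit)  # the ratios approach the limit as t grows
```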
On the other hand, a function $\varphi(x)$ is said to be {\em regularly varying at $0$} with tail parameter $\beta$ if $\varphi(x^{-1})$ is regularly varying at $\infty$ with tail parameter $-\beta$. The goal of this paper is to complete a tail dependence theory of Archimedean copulas with various Laplace transforms $\varphi$, including but not limited to regular variation. In fact, it is well known that if $\varphi\in \mbox{RV}_{-\alpha}$, where $\alpha>0$, then the lower-orthant decay rate of $C$ can be explicitly derived; see, e.g., \cite{CS09, JLN10}. In Section 2, we complement this result by deriving the lower-orthant decay rate of $C$ for a {slowly varying} Laplace transform $\varphi\in \mbox{RV}_0$. Moreover, in Section 3, we derive the lower-orthant decay rate of an Archimedean copula $C$ when the Laplace transform $\varphi$ is {\em rapidly varying} (in the sense of Omey \cite{Omey2013}). In short, the tail dependence of an Archimedean copula emerges from all three distinct right-tail decay patterns of its underlying Laplace transform $\varphi$: slow variation, regular variation, and rapid variation. The lower-orthant decay rate of a copula $C$ can be evaluated by using the $k$-th order tail dependence function (\cite{JLN10, HJ11}), denoted by $b(w; k)$, $k\ge 1$, which is defined as the non-zero limiting function \begin{equation} \label{k tail limit} b(w; k)= \lim_{u\downarrow 0}\frac{C(uw_1, \dots, uw_d)}{u^k\,\ell(u)}=\lim_{u\downarrow 0}\frac{\mathbb{P}\big(\cap_{i=1}^d\{U_i\le uw_i\}\big)}{u^k\,\ell(u)}, \end{equation} for $w=(w_1, \dots, w_d)\in (0,\infty)^d$, whenever the limit exists, for some function $\ell(\cdot)$ that is slowly varying at $0$; i.e., $\ell(t^{-1})\in \mbox{RV}_0$. Observe that the tail order $k$ satisfies $1\le k\le d$ for a non-trivial $k$-th order tail dependence function to exist.
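As a quick numerical illustration of \eqref{k tail limit} with $k=1$ and $\ell\equiv 1$ (this example is ours, not from the paper), take the bivariate Clayton copula $C(u_1,u_2)=(u_1^{-\theta}+u_2^{-\theta}-1)^{-1/\theta}$, whose Laplace transform is regularly varying with $\alpha=1/\theta$; the ratio $C(uw_1,uw_2)/u$ approaches $(w_1^{-\theta}+w_2^{-\theta})^{-1/\theta}$ as $u\downarrow 0$:

```python
# Clayton copula, theta > 0; tail index alpha = 1/theta.
theta = 1.5
C = lambda u1, u2: (u1 ** -theta + u2 ** -theta - 1.0) ** (-1.0 / theta)

w = (0.7, 1.3)
# predicted b(w;1) = (sum_j w_j^{-1/alpha})^{-alpha} with alpha = 1/theta
b_pred = (w[0] ** -theta + w[1] ** -theta) ** (-1.0 / theta)
approx = [C(u * w[0], u * w[1]) / u for u in (1e-2, 1e-4, 1e-6)]
print(approx, b_pred)  # the ratios converge to the predicted value
```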
We discuss in this paper the lower tail dependence of copula $C$ only, as the upper tail dependence of $C$ can be studied by analyzing the lower tail dependence of the survival copula $\widehat{C}$. If $k=1$, the tail dependence function $b(w;1)$, first introduced in \cite{JLN10}, can be trivially extended to $\mathbb{R}^d_+$. The existence of the first-order tail dependence for $C$ and its margins is equivalent to the existence of the first-order exponent function, denoted by $a(w;1)$, which is defined as the non-zero limiting function \begin{equation} \label{exponent function} a(w; 1)= \lim_{u\downarrow 0}\frac{\mathbb{P}\big(\cup_{i=1}^d\{U_i\le uw_i\}\big)}{u\,\ell(u)}, \ w=(w_1, \dots, w_d)\in \mathbb{R}^d_+, \end{equation} whenever the limit exists, for some function $\ell(\cdot)$ that is slowly varying at $0$. The exponent function of a copula, together with univariate, regularly varying margins, is shown in \cite{Li2009, LS2009, LH13} to be equivalent to multivariate regularly varying distributions. Multivariate regular variation describes multivariate power-law behaviors, and has been extensively studied in the literature; see, e.g., \cite{Resnick07, MS01}. If $k>1$, the tail dependence function $b(w;k)$ describes the higher order tail dependence of multivariate distributions that often emerges from hidden regular variation within a subcone of $\mathbb{R}^d\backslash \{0\}$ or from multivariate rapid variation (\cite{HJ11, HJL12, LH14}).
For example, if $X$ is distributed according to a multivariate normal distribution with mean zero and $d\times d$ equicorrelation matrix with pairwise correlation coefficient $0\le \rho<1$, then its first-order tail dependence parameter is $b((1,\dots, 1); 1) = 0$, but its tail order is $k = d/[1+(d-1)\rho]$ (see \cite{HJ11}), satisfying $1<k\le d$; this reveals the higher order tail dependence emerging from the multivariate normal distribution with exponentially decaying joint tails (also known as multivariate rapid variation). In this paper, we demonstrate that various tail orders also exist for Archimedean copulas with slowly varying, regularly varying, and rapidly varying Laplace transforms $\varphi$. Throughout this paper, the right-tail equivalence $f(x)\sim g(x)$ means that two non-zero functions satisfy $f(x)/g(x)\to 1$ as $x\to a$, $a\in [0,\infty]$. \section{Tail dependence of Archimedean copulas with regularly varying Laplace transforms} \label{S2} Since an Archimedean copula is generated from the Laplace transform $\varphi$ (or generator $\varphi^{-1}$) of a positive random variable, its joint tail decay structure depends on the tail behaviors of $\varphi$, as was shown in the case of regular variation in \cite{genest1989, CS09, JLN10}. We provide the proofs here for completeness. \begin{The}\rm \label{RV} Let $C({u}; \varphi)=\varphi(\sum_{i=1}^d\varphi^{-1}(u_i))$ be an Archimedean copula, $u=(u_1, \dots, u_d) \in [0,1]^d$, with Laplace transform $\varphi$. \begin{enumerate} \item If the Laplace transform $\varphi$ is regularly varying at $\infty$ with tail parameter $-\alpha<0$, then the lower tail dependence function of $C$ is given by \begin{equation} \label{lower tail-RV} b(w;1) =\Bigl(\sum_{j=1}^d w_j^{-1/\alpha}\Bigr)^{-\alpha}, \ w = (w_1, \dots, w_d)\in \mathbb{R}_+^d.
\end{equation} \item If the inverse Laplace transform (or generator) $\varphi^{-1}$ is regularly varying at 1, or $\varphi^{-1}(1-1/x)$ is regularly varying at $\infty$ with tail parameter $-\beta<0$, then the upper exponent function of $C$ is given by \begin{equation} \label{upper tail-RV} a(w;1) =\Bigl(\sum_{j=1}^d w_j^{\beta}\Bigr)^{1/\beta}, \ w = (w_1, \dots, w_d)\in \mathbb{R}_+^d. \end{equation} \end{enumerate} \end{The} \noindent {\sl Proof.} (1) Using the stochastic representation \eqref{mix exponential}, the tail dependence function \eqref{lower tail-RV} is a natural consequence of the regular variation of $\varphi$. In fact, it follows from Proposition 2.6 of \cite{Resnick07} that $\overline{F}_i^{\,-1}(u_i) = \varphi^{-1}(u_i)=\ell(u_i)u_i^{-1/\alpha}$ is regularly varying at $0$ with tail parameter $-1/\alpha$, where $\ell(\cdot)$ is slowly varying at $0$. Therefore, for $w = (w_1, \dots, w_d)>0$, \[\varphi^{-1}(uw_i)=\ell(uw_i)(uw_i)^{-1/\alpha} \sim \ell(u)u^{-1/\alpha}w_i^{-1/\alpha}=\varphi^{-1}(u)w_i^{-1/\alpha},\ 1\le i\le d. \] Since $\varphi(\cdot)$ is continuous, we have by \eqref{Archimedean copula}, \[b(w;1) = \lim_{u\downarrow 0}\frac{C(uw_1, \dots, uw_d)}{u} = \lim_{u\downarrow 0}\frac{\varphi\Big(\varphi^{-1}(u)\sum_{i=1}^dw_i^{-1/\alpha}\Big)}{\varphi(\varphi^{-1}(u))}= \Bigl(\sum_{j=1}^d w_j^{-1/\alpha}\Bigr)^{-\alpha}, \] which trivially holds if some of the $w_i$s are zero, and hence \eqref{lower tail-RV} holds for all $w = (w_1, \dots, w_d)\in \mathbb{R}_+^d$. (2) Similarly, the upper tail dependence of the Archimedean copula $C({ u}; \varphi)=\varphi\bigl(\sum_{i=1}^d\varphi^{-1}(u_i)\bigr)$ emerges when the inverse Laplace transform (or generator) $\varphi^{-1}$ is regularly varying at 1; that is, $\varphi^{-1}(1-1/x)$ is regularly varying at $\infty$ with tail parameter $-\beta<0$ (see \cite{genest1989} for the bivariate case).
For $w = (w_1, \dots, w_d)>0$, \[\varphi^{-1}(1-uw_i)=\ell(uw_i)(uw_i)^{\beta} \sim \ell(u)u^{\beta}w_i^{\beta}=\varphi^{-1}(1-u)w_i^{\beta},\ 1\le i\le d, \] implying that \[\varphi(\varphi^{-1}(1-u)w_i)\sim 1-uw_i^{1/\beta}, \ 1\le i\le d. \] Observe that \[\mathbb{P}(U_i>1-uw_i,\ \exists\ i)=1-\mathbb{P}(U_i\le 1-uw_i,\ \forall\ i)=1-\varphi\big(\sum_{i=1}^d \varphi^{-1}(1-uw_i)\big) \] \[\sim 1-\varphi\big(\varphi^{-1}(1-u)\sum_{i=1}^dw_i^{\beta}\big)\sim 1-\Big(1-u\big(\sum_{i=1}^dw_i^\beta\big)^{1/\beta}\Big)\sim u\big(\sum_{i=1}^dw_i^\beta\big)^{1/\beta}. \] Since $\varphi(\cdot)$ is continuous, we have \[a(w;1) = \lim_{u\downarrow 0}\frac{\mathbb{P}(U_i>1-uw_i,\ \exists\ i)}{u} = \Bigl(\sum_{j=1}^d w_j^{\beta}\Bigr)^{1/\beta}, \] and \eqref{upper tail-RV} holds. \hfill $\Box$ \begin{Rem} \label{Archimedean-r-1} \begin{enumerate} \item If the Laplace transform $\varphi$ is regularly varying at $\infty$ with tail parameter $-\alpha<0$, then $X$ in \eqref{mix exponential} is multivariate regularly varying at $\infty$, with upper intensity measure $\mu_\infty(\cdot)$; that is, for any fixed norm $||\cdot||$ on $\mathbb{R}^d_+$, \begin{equation} \label{MRV at infinity} \frac{\mathbb{P}(X\in tB)}{\mathbb{P}(||X||>t)}\to \mu_\infty(B),\ \mbox{as}\ t\to \infty, \end{equation} for all relatively compact subsets $B\subset \overline{\mathbb{R}}^d_+\backslash\{0\}$ bounded away from $0$ that are $\mu_\infty$-continuous in the sense that $\mu_\infty(\partial B)=0$. The intensity measure $\mu_\infty(\cdot)$ is known to enjoy the homogeneity property $\mu_\infty(tB) = t^{-\alpha}\mu_\infty(B)$, $t>0$, leading to the semi-parametric representation for multivariate extremes that is useful in statistical analysis \cite{Resnick07}.
Since an Archimedean copula $C$ can be viewed as the survival copula of $X$ in \eqref{mix exponential}, the intensity measure $\mu_\infty(\cdot)$ yields the lower tail dependence function of $C$, as presented in Theorem \ref{RV} (1). \item It is straightforward to see that the inverse Laplace transform $\varphi^{-1}$ is regularly varying at 1 if and only if $F_i(u) = 1 -\varphi(u)$ is regularly varying at $0$ with tail parameter $1/\beta$. If $\varphi^{-1}$ is regularly varying at 1, then $X$ in \eqref{mix exponential} is multivariate regularly varying at $0$, with lower intensity measure $\mu_0(\cdot)$; that is, \begin{equation} \label{MRV at zero} \frac{\mathbb{P}(X\in uB)}{\mathbb{P}(||X||\le u)}\to \mu_0(B) \end{equation} for all relatively compact subsets $B\subset \overline{\mathbb{R}}^d_+\backslash\{+\infty\}$ that are $\mu_0$-continuous in the sense that $\mu_0(\partial B)=0$ \cite{MS01}. Similar to Remark \ref{Archimedean-r-1} (1), \eqref{MRV at zero} yields the upper tail dependence function of $C$, as presented in Theorem \ref{RV} (2). \end{enumerate} \end{Rem} We focus on multivariate maxima in this paper, and hence, in view of Remark \ref{Archimedean-r-1}, we discuss only the tail dependence function $b(\cdot;k)$ for Archimedean copulas with Laplace transform $\varphi$. The proof of Theorem \ref{RV}, however, excludes the case $\alpha=0$, in which the Laplace transform $\varphi$ is slowly varying at infinity. To obtain the tail dependence in this case, the following lemma is needed. \begin{Lem}\rm (Elez and Djur\v{c}i\'c \cite{Elez2013}) \label{Inv-SV} Let $g: [0,\infty)\to (0,\infty)$ be slowly varying with inverse $g^{-1}$ and $\lim_{x\to \infty}g(x) = \infty$. Then the inverse $g^{-1}$ is rapidly varying in the sense of de Haan \cite{Haan1970}; that is, \[\lim_{y\to \infty}\frac{g^{-1}(\lambda y)}{g^{-1}(y)}= \infty,\ \ \forall\ \lambda>1.
\] \end{Lem} \begin{The}\rm \label{SV} Let $C({u}; \varphi)=\varphi(\sum_{i=1}^d\varphi^{-1}(u_i))$ be an Archimedean copula, $u=(u_1, \dots, u_d) \in [0,1]^d$, where the Laplace transform $\varphi$ is slowly varying at $\infty$. The lower tail dependence function of $C$ is given by \begin{equation} \label{lower tail-SV} b(w;1) =\min\{w_1, \dots, w_d\}, \ w = (w_1, \dots, w_d)\in \mathbb{R}_+^d. \end{equation} \end{The} \noindent {\sl Proof.} Since $\varphi(x)$ is slowly varying and decreasing to zero, $1/\varphi(x)$ is slowly varying and increasing to $\infty$. Observe that \[\left(1/\varphi\right)^{-1}(y) = \varphi^{-1}(1/y),\ y>0. \] Therefore, by Lemma \ref{Inv-SV}, $\varphi^{-1}(1/y)$ is rapidly varying in the sense of de Haan; that is, for any $\lambda>1$, \[\frac{\varphi^{-1}\left(1/(\lambda y)\right)}{\varphi^{-1}(1/y)}\to \infty,\ \ y\to \infty. \] Let $u=1/y$; thus, for any $\lambda>1$, \[\frac{\varphi^{-1}\left(u/\lambda\right)}{\varphi^{-1}(u)}\to \infty,\ \ u\to 0, \] which is equivalent to \begin{equation} \label{inv-SV0} \frac{\varphi^{-1}\left(\lambda u\right)}{\varphi^{-1}(u)}\to 0,\ \ u\to 0, \ \forall\ \lambda>1. \end{equation} For any fixed $w = (w_1, \dots, w_d)>0$, let $m=\min\{w_1, \dots, w_d\}>0$ and $\lambda_i = w_i/m\ge 1$, $1\le i\le d$. Consider \[\frac{\varphi\left(\sum_{i=1}^d\varphi^{-1}(uw_i)\right)}{u}= m\frac{\varphi\left(\sum_{i=1}^d\varphi^{-1}(\lambda_ium)\right)}{um} = m\frac{\varphi\left(\sum_{i=1}^d\frac{\varphi^{-1}(\lambda_ium)}{\varphi^{-1}(um)}\varphi^{-1}(um)\right)}{um}. \] It follows from \eqref{inv-SV0} that as $u\to 0$, \[\sum_{i=1}^d\frac{\varphi^{-1}(\lambda_ium)}{\varphi^{-1}(um)}\to c_w, \] where $c_w=|\{i: w_i=m,\ 1\le i\le d\}|\ge 1$ is a constant. Hence, for a fixed $0<\epsilon<1$, there exists a small $\delta>0$ such that whenever $0<u<\delta$, \[c_w-\epsilon\le \sum_{i=1}^d\frac{\varphi^{-1}(\lambda_ium)}{\varphi^{-1}(um)}\le c_w+\epsilon.
\] Since the Laplace transform $\varphi(\cdot)$ is decreasing, \[m\frac{\varphi\left((c_w-\epsilon)\varphi^{-1}(um)\right)}{um}\ge \frac{\varphi\left(\sum_{i=1}^d\varphi^{-1}(uw_i)\right)}{u}\ge m\frac{\varphi\left((c_w+\epsilon)\varphi^{-1}(um)\right)}{um}. \] When $u\to 0$, $\varphi^{-1}(um)\to \infty$. It then follows from the slow variation of $\varphi$ that as $u\to 0$, \[\frac{\varphi\left((c_w+\epsilon)\varphi^{-1}(um)\right)}{um}=\frac{\varphi\left((c_w+\epsilon)\varphi^{-1}(um)\right)}{\varphi(\varphi^{-1}(um))}\to 1, \] \[\frac{\varphi\left((c_w-\epsilon)\varphi^{-1}(um)\right)}{um}=\frac{\varphi\left((c_w-\epsilon)\varphi^{-1}(um)\right)}{\varphi(\varphi^{-1}(um))}\to 1, \] implying that $b(w;1)=\lim_{u\downarrow 0}\frac{\varphi\left(\sum_{i=1}^d\varphi^{-1}(uw_i)\right)}{u}=m$ for any $w>0$. The tail dependence \eqref{lower tail-SV} trivially holds if some of the $w_i$ are zero, and hence \eqref{lower tail-SV} holds for any $w\in \mathbb{R}_+^d$. \hfill $\Box$ \begin{Rem} \begin{enumerate} \item In the stochastic representation \eqref{mix exponential}, if $R$ is regularly varying at $\infty$ with tail parameter $-\alpha\le 0$, then, by Breiman's Theorem (see \cite{Breiman1965}), the margin $X_i$ is regularly varying with tail parameter $-\alpha\le 0$. In particular, if $R$ is slowly varying at $\infty$, then $\varphi\in \mbox{RV}_{0}$. \item Observe from \eqref{mix exponential} that $\varphi(x)$ is the marginal survival function of $X=(X_1, \dots, X_d)$. The fact that $\varphi\in \mbox{RV}_{-\alpha}$, $\alpha\ge 0$, indicates that $X$ in \eqref{mix exponential} is multivariate regularly varying in the sense of \eqref{MRV at infinity}, including the multivariate slowly varying case, with a simple mixture structure, from which tail dependence emerges. \end{enumerate} \end{Rem} \begin{Exa}\rm It is well-known that the Laplace transforms of positive random variables coincide with completely monotone functions on $[0, \infty)$ taking the value 1 at 0.
When $d=2$, in particular, it suffices, for $\varphi$ to generate a bivariate Archimedean copula, that $\varphi$ be a non-negative, non-increasing, convex function on $[0, \infty)$ with $\varphi(0)=1$. Let $\varphi(x) = 1/\log(x+e)$, $x\ge 0$. This function is non-negative, decreasing, convex and slowly varying as $x\to \infty$, and hence $\varphi(x)$ is the Laplace transform of a positive random variable and is slowly varying. The inverse is $\varphi^{-1}(u) = e^{1/u}-e$, $0< u\le 1$, and a straightforward calculation leads to \[b(w_1, w_2;1) = \lim_{u\downarrow 0}\frac{\varphi(\varphi^{-1}(uw_1)+\varphi^{-1}(uw_2))}{u}= \min\{w_1, w_2\}, \] for $(w_1,w_2)\in \mathbb{R}^2_+$. \end{Exa} \section{Tail dependence of Archimedean copulas with rapidly varying Laplace transforms} \label{S3} Higher order tail dependence naturally emerges from an Archimedean copula with rapidly varying Laplace transform $\varphi$. Let $\Gamma_\alpha(g)$ denote the class of measurable functions $f$ for which there exists a measurable and positive function $g$ such that \begin{equation} \label{gamma class} \lim_{t\to \infty}\frac{f(t+xg(t))}{f(t)}=e^{-\alpha x},\ \ \forall x\in \mathbb{R},\ \alpha\ge 0. \end{equation} The $\Gamma_\alpha(g)$-class is studied extensively in \cite{Omey2013}, where $\alpha$ can be any real number. We assume that $\alpha\ge 0$ because we focus only on non-increasing functions in $\Gamma_\alpha(g)$ in this paper. It is shown in \cite{Omey2013} that the limit \eqref{gamma class} holds locally uniformly, where $g$ is self-neglecting in the sense that $g(t)/t\to 0$ and \begin{equation} \label{self-neglecting} \lim_{t\to \infty}\frac{g(t+xg(t))}{g(t)}=1,\ \ \forall x\in \mathbb{R}, \end{equation} holds locally uniformly. The local uniform convergence yields representations for the functions in $\Gamma_\alpha(g)$, $\alpha\ge 0$. In particular, the following two representations from \cite{Omey2013} are needed.
\begin{enumerate} \item If $g$ is self-neglecting, then $g(x) = D(x)W(x)$, where $D(x)\to c>0$, and \begin{equation} \label{self-neglecting rep} W(x)=\exp\Big\{\int_{x_0}^x\frac{\epsilon^*(z)}{g(z)} dz\Big\} \end{equation} has derivative converging to zero. That is, $g$ can be taken as a function whose derivative converges to zero. Note that $\epsilon^*(z)$ can be negative, whereas $g(z)>0$, $z\in \mathbb{R}$. \item If $g$ is self-neglecting and $f$ has a non-increasing derivative $f'$, then $g$ and $-f/f'$ are tail equivalent. That is, if $f\in \Gamma_\alpha(g)$ is the survival function of a random variable, then the self-neglecting function $g$ can be taken as the reciprocal of the hazard rate. \end{enumerate} \begin{Lem}\rm \label{ultimately IFR} If $\varphi\in \Gamma_\alpha(g)$, where $g$ is ultimately monotone (i.e., monotone for all sufficiently large $t$), then the limit of $g(t)/g(\lambda t)$, as $t\to \infty$, exists (finite or $+\infty$) for any $\lambda>1$. \end{Lem} \noindent {\sl Proof.} We prove the ultimately decreasing case only; the ultimately increasing case is similar. It follows from \eqref{self-neglecting rep} that $\epsilon^*(z)$ is ultimately negative (i.e., negative for all sufficiently large $z$). Observe that \[\frac{g(t)}{g(\lambda t)}\sim \exp\Big\{\int_{\lambda t}^t\frac{\epsilon^*(z)}{g(z)} dz\Big\},\ \mbox{as}\ t\to \infty, \] is ultimately increasing, and therefore the limit of $g(t)/g(\lambda t)$ exists for any $\lambda>1$. \hfill $\Box$ \begin{Rem} The Laplace transform $\varphi(t)\in \Gamma_\alpha(g)$ is the survival function of a positive random variable $X_i$ (see \eqref{mix exponential}), and $g$ can be taken as the reciprocal of the failure rate of $X_i$. Therefore, the assumption that $g(t)$ is ultimately monotone can be interpreted as the failure rate of $X_i$ being ultimately monotone.
\end{Rem} Since $\varphi(x)$ is the marginal survival function of $X=(X_1, \dots, X_d)$, the rapid variation of the Laplace transform $\varphi\in \Gamma_{\alpha}(g)$ indicates that $X$ in \eqref{mix exponential} is multivariate rapidly varying in some sense, with a simple scale mixture structure, which yields tail dependence, as the following result shows. \begin{The}\rm \label{Rapid V} Let $C({u}; \varphi)=\varphi(\sum_{i=1}^d\varphi^{-1}(u_i))$ be an Archimedean copula, $u=(u_1, \dots, u_d) \in [0,1]^d$, where the Laplace transform $\varphi\in \Gamma_{\alpha}(g)$, $\alpha>0$, is rapidly varying at $\infty$ and $g$ is ultimately decreasing. If, in addition, the Laplace transform $\varphi$ satisfies, for some $1\le k\le d$, \begin{equation} \label{ratio rapid} \lim_{t \to \infty}\frac{\varphi(td)}{\varphi^k(t)}= \tau >0, \end{equation} then the lower tail dependence function of $C$ is given by \begin{equation} \label{tail-Rapid} b(w;k) =\tau\prod_{i=1}^dw_i^{k/d}, \ w = (w_1, \dots, w_d)\in \mathbb{R}_+^d. \end{equation} \end{The} \noindent {\sl Proof.} Since $\varphi(t)$ is the marginal survival function of $X_i$ in the stochastic representation \eqref{mix exponential} and $g$ is the reciprocal of its hazard rate, one can write \[\varphi(x) = \exp\Big\{-\int_0^x\frac{1}{g(z)}dz\Big\},\ x\ge 0. \] It follows from Lemma \ref{ultimately IFR} that the limit of $g(t)/g(td)$ exists, and thus applying L'H\^opital's rule to \eqref{ratio rapid} yields \[\tau = \lim_{t \to \infty}\frac{\varphi(td)d/g(td)}{\varphi^k(t)k/g(t)}=\tau \lim_{t \to \infty}\frac{d}{k}\frac{g(t)}{g(td)}, \] so that $\lim_{t \to \infty}g(t)/g(td) = k/d$. Let $u=\varphi(t)$, so that $u$ is small if and only if $t$ is large. Furthermore, let $e^{-\alpha x_i}=w_i$, $1\le i\le d$.
Observe that \[\varphi\big(t+g(t)(\log w_i^{-1/\alpha})\big)=\varphi(t+x_ig(t))\sim \varphi(t)e^{-\alpha x_i}=uw_i, \ (x_1, \dots, x_d)\in \mathbb{R}^d, \] implies that \[td+g(t)\sum_{i=1}^d\log w_i^{-1/\alpha}\sim \sum_{i=1}^d\varphi^{-1}(uw_i). \] Applying $\varphi$ to both sides, we obtain \[\varphi\Big(\sum_{i=1}^d\varphi^{-1}(uw_i)\Big)\sim \varphi\Big(td+g(t)\sum_{i=1}^d\log w_i^{-1/\alpha}\Big)\sim \varphi\Big(td+kg(td)/d\sum_{i=1}^d\log w_i^{-1/\alpha}\Big), \] which, in turn, is tail equivalent to $\varphi(td)\exp\Big\{-\alpha\sum_{i=1}^d\log w_i^{-k/(d\alpha)}\Big\}=\varphi(\varphi^{-1}(u)d)\prod_{i=1}^dw_i^{k/d} $. Hence, it follows from \eqref{ratio rapid} that \[ b(w;k) = \lim_{u\downarrow 0}\frac{\varphi\Big(\sum_{i=1}^d\varphi^{-1}(uw_i)\Big)}{u^k}=\tau\lim_{u\downarrow 0}\frac{\varphi\Big(\sum_{i=1}^d\varphi^{-1}(uw_i)\Big)}{\varphi(\varphi^{-1}(u)d)}=\tau\prod_{i=1}^dw_i^{k/d}, \] for $1\le k\le d$, as desired. \hfill $\Box$ \begin{Rem} \label{Archimedean-r-5} \begin{enumerate} \item Huang obtained this result in \cite{Huang2020} under the additional condition that the self-neglecting function $g$ satisfies that $g(t)/g(td)$ converges to a constant as $t\to \infty$. Such a restriction is not necessary in our new proof, thanks to Lemma \ref{ultimately IFR}. \item If $g$ is ultimately decreasing, then the failure rate of $X_i$, $1\le i\le d$, is ultimately increasing, and hence $X_i$, $1\le i\le d$, has a light right tail. On the other hand, if $g$ is ultimately increasing, then the failure rate of $X_i$, $1\le i\le d$, is ultimately decreasing, and the tail behaviors are more complex, with possibly heavier tails. \item Most distributions are defined in terms of densities, and the tail density approach is used in \cite{LH14, JL19, Li2021} to study multivariate rapid variation and higher order tail dependence functions.
As in the case of multivariate regular variation, if a density is multivariate rapidly varying, then the corresponding distribution is multivariate rapidly varying. In contrast, however, the densities of Archimedean copulas often do not enjoy compact explicit forms. \end{enumerate} \end{Rem} It should be mentioned that the tail order $k$ depends on $\alpha$ implicitly through \eqref{ratio rapid}, in contrast to the regular variation case of Theorem \ref{RV}, where the tail dependence function depends on $\alpha$ explicitly. The assumption \eqref{ratio rapid}, however, is mild and satisfied in various applications. Several examples are given below to illustrate Theorem \ref{Rapid V}, and all these Archimedean copulas and the corresponding Laplace transforms can be found in \cite{Joe97}. \begin{Exa}\rm Consider a bivariate Frank copula $C(u_1,u_2)$ with Laplace transform $\varphi(t) = -\frac{\log (1-(1-e^{-\theta})e^{-t})}{\theta}$, $\theta>0$. Since $\varphi(t)\sim \frac{1-e^{-\theta}}{\theta}e^{-t}$, letting $g(t) \equiv 1$ we have \begin{eqnarray} \varphi(t+g(t)x) &\sim & \frac{1-e^{-\theta}}{\theta} e^{-(t+x)} \sim \varphi(t)e^{- x}. \nonumber \end{eqnarray} That is, $\varphi\in \Gamma_1(g)$ with $\alpha =1$, and $g(t)/g(2t)=1$. Observe that \[ \frac{\varphi(2t)}{\varphi(t)^2} \sim \frac{(\frac{1-e^{-\theta}}{\theta})(e^{-t})^2}{(\frac{1-e^{-\theta}}{\theta} e^{-t})^2} = \frac{\theta}{1-e^{-\theta}}=\tau. \] According to Theorem \ref{Rapid V}, its lower tail dependence function is $b(w_1, w_2; 2) = \frac{\theta}{1-e^{-\theta}}w_1w_2$, $w_1, w_2\ge 0$. \end{Exa} \begin{Exa}\rm Consider a bivariate B5 copula $C(u_1,u_2)$ with Laplace transform $\varphi(t) = 1 - (1-e^{-t})^{1/\theta}$, $\theta\ge 1$. Since $\varphi(t) \sim (1/\theta) e^{-t}$, letting $g(t)\equiv 1$ we have \[\varphi(t+x g(t)) \sim (1/\theta) e^{-t} e^{-x} =\varphi(t)e^{-\alpha x}, \] where $\alpha = 1$. That is, $\varphi\in \Gamma_1(g)$ and $g(t)/g(2t)=1$.
Observe that \[\frac{\varphi(2t)}{\varphi(t)^2} \sim \frac{\frac{1}{\theta}(e^{-t})^{2}}{(\frac{1}{\theta}e^{-t})^{2}} = \theta = \tau. \] According to Theorem \ref{Rapid V}, its lower tail dependence function is $b(w_1, w_2; 2) = \theta w_1w_2$, $w_1, w_2\ge 0$. \end{Exa} \begin{Exa}\rm Consider a two-dimensional Archimedean copula $C(u_1,u_2)$ with Laplace transform $\varphi(t) = [(1-\theta)e^{-t}/(1-\theta e^{-t})]^\alpha$, $0\le \theta< 1$ and $\alpha>0$. This is the Laplace transform of a negative binomial distribution. Observe that \[\varphi(t)\sim (1-\theta)^\alpha e^{-\alpha t},\ 0\le \theta< 1,\ \alpha>0, \] and \[\varphi(t+x)\sim (1-\theta)^\alpha e^{-\alpha (t+x)}\sim \varphi(t) e^{-\alpha x}, \] where $g(t)\equiv 1$ is self-neglecting, leading to $\varphi\in \Gamma_{\alpha}(g)$, and $g(t)/g(2t)=1$. Since \begin{eqnarray} \frac{\varphi(2t)}{\varphi(t)^2} &\sim & \frac{(1-\theta)^\alpha e^{-2\alpha t}}{((1-\theta)^\alpha e^{-\alpha t})^{2}} = (1-\theta)^{-\alpha} = \tau, \nonumber \end{eqnarray} the lower tail dependence function is $b(w_1, w_2; 2) = (1-\theta)^{-\alpha}w_1w_2$, $w_1, w_2\ge 0$. \end{Exa} \begin{Exa}\rm Consider a two-dimensional Gumbel copula $C(u_1,u_2)$ with Laplace transform $\varphi(t) = e^{-t^{1/\theta}}$, $\theta\ge 1$. Let $g(t)=t^{1-1/\theta}$, $\theta\ge 1$, which is non-decreasing. It is straightforward to verify that $g(t)$ is self-neglecting, and \[\frac{g(t)}{g(2t)}= 2^{-1+1/\theta}. \] Observe that as $t\to \infty$, \[\frac{\varphi(t+y\,g(t))}{\varphi(t)}\sim \frac{e^{-t^{1/\theta}\big[1+y\,t^{-1/\theta}/\theta\big]}}{e^{-t^{1/\theta}}}\to e^{-y/\theta}, \] and thus $\varphi(t)\in \Gamma_{1/\theta}(g)$. Furthermore, we also have \[\frac{\varphi(2t)}{\varphi(t)^{2^{1/\theta}}}= e^{-t^{1/\theta}2^{1/\theta}+t^{1/\theta}2^{1/\theta}}=1, \] and thus the lower tail dependence function is given by \[b(w_1,w_2; 2^{1/\theta}) = \big(w_1w_2\big)^{2^{-1+1/\theta}}, \] where $w_1 \ge 0$ and $w_2 \ge 0$.
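The Gumbel limit above can be checked numerically. The sketch below (illustrative only, not from the original text) evaluates the ratio $C(uw_1,uw_2)/u^{2^{1/\theta}}$ at a small $u$ and compares it with the predicted limit $(w_1w_2)^{2^{-1+1/\theta}}$; the sample values $\theta=2$, $w_1=2$, $w_2=1$ are arbitrary choices.

```python
import math

# Gumbel Laplace transform: phi(t) = exp(-t**(1/theta));
# generator (inverse): phi^{-1}(s) = (-log(s))**theta.
def gumbel_tail_ratio(u, w1, w2, theta):
    gen = lambda s: (-math.log(s)) ** theta      # phi^{-1}
    t = gen(u * w1) + gen(u * w2)                # sum of generators
    k = 2.0 ** (1.0 / theta)                     # tail order 2^{1/theta} from the text
    log_C = -(t ** (1.0 / theta))                # log C(u*w1, u*w2); log scale avoids underflow
    return math.exp(log_C - k * math.log(u))     # C(u*w1, u*w2) / u**k

theta, w1, w2 = 2.0, 2.0, 1.0
limit = (w1 * w2) ** (2.0 ** (-1.0 + 1.0 / theta))  # predicted b(w1, w2; 2**(1/theta))
approx = gumbel_tail_ratio(1e-60, w1, w2, theta)
print(approx, limit)  # approx is close to limit for small u
```

The convergence is slow (the correction is of order $1/\log(1/u)$), which is why a very small $u$ is used; working on the log scale keeps the computation stable even though $C(uw_1,uw_2)$ itself is astronomically small.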
\end{Exa} \section{Concluding remarks} \label{S4} The extremal dependence of a random vector $X=(X_1, \dots, X_d)$ is influenced by, and thus can vary with, the tail behaviors of its marginal distributions. In contrast, tail dependence, as introduced in \cite{JLN10}, depends solely on the copula of $X$, independent of its marginal distributions. This inherent extremal dependence captures a universal characteristic present in all multivariate distributions, including those with multivariate slowly varying, regularly varying, and rapidly varying joint tails. In this paper, we illustrate this universality via the tail dependence function with tail order $k$ (\cite{JLN10, HJ11}) for Archimedean copulas whose marginal distributions, as characterized by the Laplace transform $\varphi$, exhibit diverse tail decay patterns ranging from slow to regular to rapid variation. Due to their simple scale-mixing structure, Archimedean copulas are typically specified by their distribution functions, whereas most multivariate distributions are defined through their densities \cite{Resnick07, BE07}. Tail densities of order $k$, introduced in \cite{LW2013, LH14}, have been used to analyze the universal characteristics of extremal dependence via the copula density approach. Additionally, \cite{JL19, Li2021} derived the tail dependence of skew-elliptical distributions, capturing the universal features of extremal dependence in both regularly varying and rapidly varying skew-elliptical distributions. More generally, the density-based approach can be used for analyzing tail dependence through distributional densities. \begin{thebibliography}{99} \bibitem{BE07} Balkema, G. and Embrechts, P.: High Risk Scenarios and Extremes: A Geometric Approach. European Mathematical Society, Z\"{u}rich, Switzerland, 2007. \bibitem{Breiman1965} Breiman, L.: On some limit theorems similar to the arc-sin law. Theory Prob. Appl., 1965, 10:323-331. \bibitem{CS09} Charpentier, A.
and Segers, J.: Tails of multivariate Archimedean copulas. {Journal of Multivariate Analysis}, 2009, 100:1521-1537. \bibitem{Haan1970} de Haan, L.: On Regular Variation and its Applications to the Weak Convergence of Sample Extremes. Math. Centre Tracts, vol. 32, CWI, Amsterdam, 1970. \bibitem{Elez2013} Elez, N. and Djur\v{c}i\'c, D.: Some properties of rapidly varying functions. Journal of Mathematical Analysis and Applications, 2013, 401:888-895. \bibitem{genest1989} Genest, C. and Rivest, L.-P.: A characterization of Gumbel's family of extreme value distributions. { Statistics and Probability Letters}, 1989, 8:207--211. \bibitem{HJ11} Hua, L. and Joe, H.: Tail order and intermediate tail dependence of multivariate copulas. J. Multivariate Anal., 2011, 102:1454-1471. \bibitem{HJL12} Hua, L., Joe, H. and Li, H.: Relations between hidden regular variation and tail order of copulas. Journal of Applied Probability, 2014, 51(1):37-57. \bibitem{Huang2020} Huang, R.: Higher Order Tail Dependence of Archimedean Copulas with Rapidly Varying Laplace Transforms. PhD Dissertation, Washington State University, Pullman, WA, 2020. \bibitem{Joe97} Joe, H.: {Multivariate Models and Dependence Concepts}. Chapman \& Hall, London, 1997. \bibitem{Joe2014} Joe, H.: Dependence Modeling with Copulas. Chapman \& Hall/CRC, Boca Raton, FL, 2014. \bibitem{JL11} Joe, H. and Li, H.: Tail risk of multivariate regular variation. Methodology and Computing in Applied Probability, 2011, 13:671-693. \bibitem{JL19} Joe, H. and Li, H.: Tail densities of skew-elliptical distributions. Journal of Multivariate Analysis, 2019, 171:421-435. \bibitem{JLN10} Joe, H., Li, H. and Nikoloulopoulos, A.K.: Tail dependence functions and vine copulas. {Journal of Multivariate Analysis}, 2010, 101:252-270. \bibitem{Kimberling74} Kimberling, C.: A probabilistic interpretation of complete monotonicity. {Aequationes Mathematicae}, 1974, 10:152-164.
\bibitem{Li2009} Li, H.: Orthant tail dependence of multivariate extreme value distributions. {Journal of Multivariate Analysis}, 2009, 100:243-256. \bibitem{LH13} Li, H.: Toward a copula theory for multivariate regular variation. Copulae in Mathematical and Quantitative Finance: Proceedings of the Workshop Held in Cracow, 2012, 177-199. \bibitem{Li2021} Li, H.: On rapid variation of multivariate probability densities. arXiv:2104.14071 [math.PR], 2021. \bibitem{LH14} Li, H. and Hua, L.: Higher order tail densities of copulas and hidden regular variation. {Journal of Multivariate Analysis}, 2015, 138:143--155. \bibitem{LS2009} Li, H. and Sun, Y.: Tail dependence for heavy-tailed scale mixtures of multivariate distributions. {J. Appl. Prob.}, 2009, 46(4):925-937. \bibitem{LW2013} Li, H. and Wu, P.: Extremal dependence of copulas: A tail density approach. {Journal of Multivariate Analysis}, 2013, 114:99-111. \bibitem{MS01} Meerschaert, M. M. and Scheffler, H.-P.: Limit Distributions for Sums of Independent Random Vectors. John Wiley \& Sons, 2001. \bibitem{Nelsen2006} Nelsen, R. B.: An introduction to Copulas. Springer, New York, 2006. \bibitem{Omey2013} Omey, E.: On the class gamma and related classes of functions. Publications de l'Institut Math\'ematique, 2013, 93(107):1-18. \bibitem{Resnick07} Resnick, S.: {Heavy-Tail Phenomena: Probabilistic and Statistical Modeling}. Springer, New York, 2007. \end{thebibliography} \end{document}
2412.18809v1
http://arxiv.org/abs/2412.18809v1
Existence and uniqueness of Generalized Polarization Tensors vanishing structures
\documentclass[10pt,reqno]{article} \usepackage{amsthm} \usepackage[numbers]{natbib} \usepackage{amsmath} \usepackage{bm} \usepackage{float} \usepackage{abstract} \usepackage{amssymb} \usepackage{setspace} \usepackage{graphicx} \usepackage{multirow} \usepackage{booktabs} \usepackage{geometry}\setlength{\textwidth}{140mm} \setlength{\textheight}{200mm} \setlength{\oddsidemargin}{11mm} \setlength{\evensidemargin}{11mm} \usepackage{xcolor} \usepackage{caption} \usepackage{subcaption} \usepackage{hyperref}\hypersetup{ colorlinks=true, linkcolor=blue, filecolor=blue, urlcolor=red, citecolor=cyan } \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}{Conjecture} \newtheorem{lemma}{Lemma}[section] \newtheorem{question}{Question} \newtheorem{algorithm}{Algorithm} \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{proposition}{Proposition} \newtheorem{fact}{Fact} \newtheorem{corollary}{Corollary}[section] \newtheorem{assumption}{Assumption} \newtheorem{example}{Example}[section] \theoremstyle{definition} \newtheorem{remark}{Remark}[section] \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\GL}{\Lambda} \newcommand{\eqnref}[1]{(\ref {#1})} \newcommand{\Ge}{\epsilon} \newcommand{\Gs}{\sigma} \title{Existence and uniqueness of Generalized Polarization Tensors vanishing structures} \author{ {Fanbo Sun} \thanks{School of Mathematics and Statistics, Central South University, Changsha, 410083, Hunan Province, China. \ \ Email: fanbo\[email protected]}\quad\quad {Youjun Deng} \thanks{Corresponding author. School of Mathematics and Statistics, Central South University, Changsha, 410083, Hunan Province, China. \ \ Email: [email protected]; dengyijun\[email protected]} } \date{} \begin{document} \maketitle \begin{abstract} This paper is concerned with the open problem proposed in Ammari et al., Commun. Math. Phys., 2013.
We first investigate the existence and uniqueness of Generalized Polarization Tensors (GPTs) vanishing structures locally in both two and three dimensions via a fixed point theorem. Employing Brouwer Degree Theory and the local uniqueness, we prove that for any radius configuration of $(N+1)$-layer concentric disks (balls) and a fixed core conductivity, there exists at least one piecewise homogeneous conductivity distribution which achieves the $N$-GPTs vanishing. Furthermore, we establish a global uniqueness result for the case of proportional radius settings, and derive an interesting asymptotic configuration for structures with thin coatings. Finally, we present some numerical examples to validate our theoretical conclusions. \medskip \noindent{\bf Keywords:} GPTs vanishing structure; Existence and uniqueness; Brouwer Degree Theory; Layer potential \noindent{\bf 2020 Mathematics Subject Classification: 35R30, 35R05, 35C10} \end{abstract} \section{Introduction} \quad\ Consider the conductivity problem \begin{equation}\label{conductivity problem} \begin{cases} \begin{aligned} &\nabla \cdot ((\sigma_{\Omega}\chi_{\Omega}+\chi_{\Omega_0})\nabla u)=0 & \text{in } \mathbb{R}^d,\\ &(u-H)(x) = O(|x|^{1-d}) & \text{as } |x|\to \infty, \end{aligned} \end{cases} \end{equation} where $d=2,3$ and $\Omega$ is an inclusion inserted into the infinite homogeneous background $\Omega_0=\mathbb{R}^d\backslash \Omega$, $\chi$ denotes the characteristic function, and $H(x)$ is a harmonic function in $\mathbb{R}^d$. The conductivity $\sigma_{\Omega}$ of the inclusion is different from that of the background, which causes a perturbation of the background fields. In the inverse conductivity problem, the measurement of boundary data is utilized to reconstruct the unknown shapes and conductivity distributions of inclusions embedded in the background. However, there are certain types of inclusions that only perturb the background fields very slightly (or even not at all).
These inclusions are referred to as near (or completely) cloaking structures, as they render themselves invisible to probing by electrical impedance tomography (EIT) \cite{you2020combined,ji2021neutral,blaasten2020recovering}. Cloaking by transformation optics is used to construct a singular conductivity distribution whose DtN map is exactly the same as that of a constant conductivity distribution \cite{greenleaf}. However, this transformation induces singular materials. To overcome this, the near cloaking structures developed by Kohn \textit{et al}. \cite{kohn2008cloaking} are widely applied in practice. They use a regular transformation to push forward the material constant into a small ball (with radius $\rho$) instead of a singular point, and the near cloaking effects can be estimated to be of order $\rho^d$. We also refer to \cite{DLU171,DLU172,LLRU15} for near cloaking anisotropic structures. In addition, neutral inclusions, which do not cause any perturbation to uniform background fields, have been extensively studied and widely used in designing invisibility cloaking structures with metamaterials in various contexts \cite{kang2019construction,kang2021polarization,kang2022existence,lim2020inclusions,nguyen2016cloaking}. The generalized polarization tensors (GPTs) vanishing structures were first put forward by Ammari \textit{et al}. \cite{ammari2013enhancement1}, where they designed the structure by multi-coated concentric disks or balls and proved that the near cloaking effects can be enhanced to the order of $\rho^{d+2N}$ by an $N$-GPTs vanishing structure. The GPTs are an extension to higher orders of the polarization tensor introduced by Schiffer and Szeg\H{o} \cite{schiffer1949virtual}. It is well known that GPTs are defined in an asymptotic sense and that they carry geometric information about the inclusion. Indeed, GPTs have been widely used in reconstructing small inclusions and shape description \cite{ammari2014reconstruction2,bruhl2003direct,deng2024identifying}.
The GPTs vanishing structure can also be regarded as a neutral inclusion in an asymptotic sense. In addition, GPTs-vanishing structures of general shape have also been proposed \cite{feng2017construction,kang2022existence}. However, the existence of high order GPTs vanishing structures with multi-coated concentric disks, an important problem reported initially, has seen little significant progress recently. The purpose of this paper is to prove that if the core $\Omega_{N+1}$ has any fixed constant conductivity, then there exist $N$ coatings surrounding $\Omega_{N+1}$ such that the inclusion $\Omega$ becomes an $N$-GPTs vanishing structure. This result is independent of the radius settings; that is, for any $(N+1)$-layer concentric disks (balls) with given radii, there exists a suitable conductivity distribution to achieve the $N$-GPTs vanishing. To address this problem, we first show that the $N$-GPTs vanishing structure exists and is unique under some smallness assumptions on the structure. Then we derive the continuous differentiability of the GPTs with respect to the conductivity contrast $\eta$. This derivation relies on the integral equation representation with the layer potential technique. Afterwards, by delicate and detailed analysis, we find $N$-GPTs vanishing structures employing the homotopy invariance of the Brouwer Degree (see Theorem \ref{main result}). We analyze the mapping properties of the GPTs with respect to $\eta$ and compute the value of the Brouwer Degree, which, together with the local uniqueness result, proves the main results. Finally, we study a type of structure with proportional radius settings and derive a uniqueness result. Besides, we notice that the GPTs vanishing structures designed with extremely thin coatings require high contrast conductivity settings, and that the contrast between the outermost layer and the background should be weaker than the others. The numerical experiments corroborate our theoretical findings. The organization of this paper is as follows.
In the following section, we introduce the layer potential technique and the GPTs. Our main results are presented in Section 3. In Section 4, we show the local existence and uniqueness of $N$-GPTs vanishing structures. In Section \ref{sec:3}, we first prove the continuous differentiability of the GPTs with respect to $\eta$, and then derive the existence of $N$-GPTs vanishing structures for any fixed core conductivity using Brouwer degree theory. Section \ref{sec:proportional} focuses on the uniqueness of $N$-GPTs vanishing structures under proportional radius settings; the extreme case is also discussed there. In Section \ref{sec:5} we present some numerical experiments to verify our results. \section{Layer potentials and GPTs} \quad \ \ In this section, we introduce the GPTs and the related integral system via the layer potential technique. Let $\Omega\subset \mathbb{R}^d$ be a piecewise homogeneous domain divided into $N$ parts $\Omega_k$, which are bounded by $C^{1,\alpha}$ ($0<\alpha<1$) closed and nonintersecting surfaces $\Gamma_k$, $k=1,2,\dots,N$, and let $\Omega_0=\mathbb{R}^d\backslash \Omega$ be the background medium. Each region $\Omega_k$ is filled with a homogeneous medium, that is, \begin{equation} \sigma(x) = \sigma_k, \quad x\in \Omega_k,\ k=0,1,\dots,N. \end{equation} It is noted that the solution $u$ of \eqref{conductivity problem} satisfies the following transmission conditions: \begin{equation}\label{transmission} u|_{+}=u|_{-}, \quad \sigma_{k-1}\partial_{\nu_k}u|_{+}=\sigma_{k}\partial_{\nu_k}u|_{-} \quad \text{on } \Gamma_k, \end{equation} where $\nu_k$ denotes the outward normal on $\Gamma_k$. Let $G(x)$ be the fundamental solution to the Laplace equation in $\mathbb{R}^d$, that is, \begin{equation}\label{fundamental} G(x)=\begin{cases} \frac{1}{2\pi}\ln|x|, &\text{ } d=2,\\ \frac{1}{(2-d)\omega_d}|x|^{2-d}, &\text{ } d=3, \end{cases} \end{equation} where $\omega_d$ is the area of the unit sphere in $\mathbb{R}^d$.
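As a quick numerical sanity check (not part of the original analysis), one can verify the harmonicity of $G$ away from the singularity by applying a second-order finite-difference Laplacian to \eqref{fundamental}; the following Python sketch does this for $d=2$ and $d=3$, using $\omega_3=4\pi$ so that $G(x)=-1/(4\pi|x|)$ in three dimensions.

```python
import math

def G2(x, y):
    # 2D fundamental solution: G(x) = (1 / (2*pi)) * ln|x|
    return math.log(math.hypot(x, y)) / (2.0 * math.pi)

def G3(x, y, z):
    # 3D fundamental solution: G(x) = |x|^{2-d}/((2-d)*omega_d) with d = 3,
    # omega_3 = 4*pi, i.e. G(x) = -1/(4*pi*|x|).
    return -1.0 / (4.0 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian2(f, x, y, h=1e-3):
    # Five-point second-order finite-difference Laplacian.
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

def laplacian3(f, x, y, z, h=1e-3):
    # Seven-point second-order finite-difference Laplacian.
    return (f(x + h, y, z) + f(x - h, y, z) + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h) - 6.0 * f(x, y, z)) / h ** 2

print(abs(laplacian2(G2, 0.7, 0.3)))       # ~0 away from the singularity
print(abs(laplacian3(G3, 0.5, 0.4, 0.6)))  # ~0 away from the singularity
```

Both printed values are at the level of the $\mathcal{O}(h^2)$ discretization error.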
For a bounded simple closed $C^{1,\alpha}$ surface $\Gamma$, the single and double layer potentials with density $\phi\in L^2(\Gamma)$ are defined by \begin{equation}\label{single} \mathcal{S}_{\Gamma}[\phi]({x}):=\int_{\Gamma}G({x-y})\phi(y)d\sigma_{{y}},\quad x\in \mathbb{R}^d,\notag \end{equation} \begin{equation}\label{double} \mathcal{D}_{\Gamma}[\phi]({x}):=\int_{\Gamma}\frac{\partial}{\partial_{\nu_{y}}}G({x-y})\phi(y)d\sigma_{{y}},\quad x\in \mathbb{R}^d\backslash\Gamma,\notag \end{equation} where $\partial_{\nu_{y}}$ denotes the outward normal derivative with respect to the ${y}$-variable. The following jump relations hold: \begin{equation}\label{sjump} \partial_{\nu}\mathcal{S}_{\Gamma}[\phi]({x})|_{\pm}=(\pm\frac{I}{2}+\mathcal{K}^{*}_{\Gamma})[\phi]\quad \text{on } \Gamma, \end{equation} \begin{equation} \mathcal{D}_{\Gamma}[\phi]({x})|_{\pm}=(\mp\frac{I}{2}+\mathcal{K}_{\Gamma})[\phi]\quad \text{on } \Gamma, \end{equation} where the Neumann-Poincar\'e operator $\mathcal{K}^{*}_{\Gamma}$ is defined by \begin{equation}\label{NP} \mathcal{K}^{*}_{\Gamma}[\phi](x):=\int_{\Gamma}\frac{\partial G({x-y})}{\partial_{\nu_{x}}}\phi(y)d\sigma_{{y}}, \end{equation} and $\mathcal{K}_{\Gamma}$ is the $L^2$-adjoint of $\mathcal{K}^{*}_{\Gamma}$: \begin{equation}\label{NPad} \mathcal{K}_{\Gamma}[\phi](x):=\int_{\Gamma}\frac{\partial G({x-y})}{\partial_{\nu_{y}}}\phi(y)d\sigma_{{y}}.
\end{equation} By using the layer potential technique, the solution $u$ to \eqref{conductivity problem} can be represented as \begin{equation}\label{solution} u(x)=H(x)+\sum_{k=1}^{N}\mathcal{S}_{\Gamma_k}[\phi_k](x), \end{equation} for some functions $\phi_k\in L_0^2(\Gamma_k)$, where \begin{equation} L_0^2(\Gamma_k):=\left\{ \phi\in L^2(\Gamma_k):\int_{\Gamma_k}\phi=0\right\}.\notag \end{equation} Using \eqref{sjump}, the transmission conditions \eqref{transmission} can be rewritten as \begin{equation}\label{integral} \begin{bmatrix} \lambda_1I-\mathcal{K}^{*}_{\Gamma_1} & -\mathcal{K}_{\Gamma_2,\Gamma_1} &\cdots & -\mathcal{K}_{\Gamma_N,\Gamma_1}\\ -\mathcal{K}_{\Gamma_1,\Gamma_2} & \lambda_2I-\mathcal{K}^{*}_{\Gamma_2} &\cdots & -\mathcal{K}_{\Gamma_N,\Gamma_2}\\ \vdots & \vdots & \ddots & \vdots\\ -\mathcal{K}_{\Gamma_1,\Gamma_N} & -\mathcal{K}_{\Gamma_2,\Gamma_N} &\cdots & \lambda_NI-\mathcal{K}^{*}_{\Gamma_N} \end{bmatrix}\begin{bmatrix} \phi_1\\\phi_2\\ \vdots \\\phi_N \end{bmatrix}=\begin{bmatrix} \partial_{\nu_1}H\\\partial_{\nu_2}H\\ \vdots \\\partial_{\nu_N}H \end{bmatrix}, \end{equation} where $\mathcal{K}_{\Gamma_{k_1},\Gamma_{k_2}}:L_0^2(\Gamma_{k_1})\to L_0^2(\Gamma_{k_2})$ denotes the normal derivative of the single layer potential, $\partial_{\nu_{k_2}}\mathcal{S}_{\Gamma_{k_1}}$, restricted to $\Gamma_{k_2}$, and \begin{equation} \lambda_k=\frac{\sigma_k+\sigma_{k-1}}{2(\sigma_k-\sigma_{k-1})},\quad k=1,2,\dots,N. \end{equation} Denote the integral operator on the left-hand side of \eqref{integral} by $\mathbb{J}_{\Omega}$. The solvability and uniqueness of the above integral system are established by the following theorem.
\begin{theorem}\label{invertible} For any $\lambda_k\in (-\infty,-1/2]\cup[1/2,+\infty)$, $k=1,2,\dots,N$, the operator $\mathbb{J}_{\Omega}: L_0^2(\Gamma_1)\times L_0^2(\Gamma_2)\times \dots\times L_0^2(\Gamma_N) \to L_0^2(\Gamma_1)\times L_0^2(\Gamma_2)\times \dots\times L_0^2(\Gamma_N)$ is invertible. \begin{proof} For notational simplicity, write $X=L_0^2(\Gamma_1)\times L_0^2(\Gamma_2)\times \dots\times L_0^2(\Gamma_N)$. It follows from \eqref{integral} that $\mathbb{J}_{\Omega}$ can be decomposed as \begin{equation} \begin{aligned}\mathbb{J}_{\Omega}&=\begin{bmatrix} \lambda_1I-\mathcal{K}^{*}_{\Gamma_1} & 0 &\cdots & 0\\ 0 & \lambda_2I-\mathcal{K}^{*}_{\Gamma_2} &\cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 &\cdots & \lambda_NI-\mathcal{K}^{*}_{\Gamma_N} \end{bmatrix}+\begin{bmatrix} 0 & -\mathcal{K}_{\Gamma_2,\Gamma_1} &\cdots & -\mathcal{K}_{\Gamma_N,\Gamma_1}\\ -\mathcal{K}_{\Gamma_1,\Gamma_2} & 0 &\cdots & -\mathcal{K}_{\Gamma_N,\Gamma_2}\\ \vdots & \vdots & \ddots & \vdots\\ -\mathcal{K}_{\Gamma_1,\Gamma_N} & -\mathcal{K}_{\Gamma_2,\Gamma_N} &\cdots & 0 \end{bmatrix}\\ &=:\mathbb{J}_{\Omega}^{(1)}+\mathbb{J}_{\Omega}^{(2)}.\notag \end{aligned} \end{equation} One can readily see that $\mathbb{J}_{\Omega}^{(1)}$ is invertible on $X$ for $|\lambda_k|\geq 1/2$. Since the integral kernel of $\mathcal{K}_{\Gamma_{k_1},\Gamma_{k_2}}$ is analytic, the operator $\mathbb{J}_{\Omega}^{(2)}$ is compact. By the Fredholm alternative, it then suffices to prove the injectivity of $\mathbb{J}_{\Omega}$. To this end, assume that \begin{equation} \mathbb{J}_{\Omega}(\phi_1,\phi_2,\dots,\phi_N)^T=(0,0,\dots,0)^T,\notag \end{equation} where the superscript $T$ denotes the matrix transpose.
Let \begin{equation} u(x):=\sum_{k=1}^{N}\mathcal{S}_{\Gamma_k}[\phi_k](x),\quad x\in\mathbb{R}^d.\notag \end{equation} It is easy to verify that $u$ satisfies the following transmission problem: \begin{equation}\label{gtransmission} \begin{cases} \Delta u=0,& \text{in }\mathbb{R}^d\backslash\cup_{k=1}^N\Gamma_k,\\ u|_{+}=u|_{-}, & \text{on } \Gamma_k,\\ (\lambda_k-1/2)\partial_{\nu}u|_{+}=(\lambda_k+1/2)\partial_{\nu}u|_{-}, & \text{on } \Gamma_k,\\ u(x)=\mathcal{O}(|x|^{1-d}) &|x|\to \infty. \end{cases} \end{equation} We claim that \eqref{gtransmission} admits only the trivial solution. If $|\lambda_k|\neq 1/2$ for each $k$, the system reduces to a standard transmission problem, and thus $u(x)=0$. Next, suppose $|\lambda_k|=1/2$ for some $k$. If $\lambda_k=-1/2$, it immediately follows that $\partial_{\nu}u|_{+}=0$ on $\Gamma_k$, hence $u(x)=0$ in $\mathbb{R}^d\backslash\cup_{i=1}^k\Omega_i$; using the continuity condition $u|_{+}=u|_{-}$ on $\Gamma_k$, we get $u(x)=0$. If $\lambda_k=1/2$ for some $k$, that is, $\partial_{\nu}u|_{-}=0$ on $\Gamma_k$, then $u(x)=0$ for $x\in \cup_{i=1}^k\Omega_i$, and consequently $u(x)=0$ in $\mathbb{R}^d$ by \eqref{transmission}. The jump formula \eqref{sjump} then yields \begin{equation} \phi_l=\partial_{\nu}u|_{+}-\partial_{\nu}u|_{-}=0\quad \text{on } \Gamma_l,\ l=1,2,\dots,N.\notag \end{equation} This completes the proof. \end{proof} \end{theorem} With the uniqueness of $(\phi_1,\phi_2,\dots,\phi_N)$ determined by \eqref{integral}, we proceed to introduce the polarization tensors of multi-layer structures. For any multi-index $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_d)$, let $x^{\alpha}=x_1^{\alpha_1}\dots x_d^{\alpha_d}$ and $\partial^{\alpha}=\partial_1^{\alpha_1}\dots\partial_d^{\alpha_d}$, in which $\partial_i=\partial/\partial x_i$.
With the help of the Taylor expansion, for $y$ lying in a compact set we have \begin{equation} G(x-y)=\sum_{|\alpha|=0}^{+\infty}\frac{(-1)^{|\alpha|}}{\alpha!}\partial^{\alpha}G(x)y^{\alpha},\quad |x|\to\infty,\notag \end{equation} which, together with \eqref{solution}, yields the following far-field expansion: \begin{equation}\label{repre} \begin{aligned} (u-H)(x)&=\sum_{k=1}^N\int_{\Gamma_k}G(x-y)\phi_k(y)d\sigma_y\\ &=\sum_{k=1}^N\sum_{|\alpha|=0}^{+\infty}\sum_{|\beta|=0}^{+\infty}\frac{(-1)^{|\alpha|}}{\alpha!\beta!}\partial^{\alpha}G(x)\partial^{\beta}H(0)\int_{\Gamma_k}y^{\alpha}\phi_{k,{\beta}}d\sigma_y, \end{aligned} \end{equation} as $|x|\to \infty$, where $(\phi_{1,\beta},\phi_{2,\beta},\dots,\phi_{N,\beta})^T=\mathbb{J}_{\Omega}^{-1}(\partial_{\nu_1}y^{\beta},\partial_{\nu_2}y^{\beta},\dots,\partial_{\nu_N}y^{\beta})^T$. \begin{definition} For $\alpha,\beta\in \mathbb{N}^d$, let $\phi_{k,\beta}$, $k=1,2,\dots,N$, be the solution of \begin{equation} \mathbb{J}_{\Omega}(\phi_{1,\beta},\dots,\phi_{N,\beta})^T=(\partial_{\nu_1}y^{\beta},\partial_{\nu_2}y^{\beta},\dots,\partial_{\nu_N}y^{\beta})^T. \notag \end{equation} Then the GPT $M_{\alpha\beta}$ is defined to be \begin{equation}\label{GPT} M_{\alpha\beta}:=\sum_{k=1}^N\int_{\Gamma_k}y^{\alpha}\phi_{k,{\beta}}d\sigma_y. \end{equation} In particular, writing $M_{\alpha\beta}$ as $M_{ij}$ when $|\alpha|=|\beta|=1$, the matrix $\mathbf{M}:=(M_{ij})_{i,j=1}^d$ is called the P\'olya--Szeg\H{o} polarization tensor. \end{definition} Through \eqref{repre}, the far-field expansion of the perturbed electric potential is completely characterized by the GPTs: \begin{equation}\label{expansion} (u-H)(x)=\sum_{|\alpha|=0}^{+\infty}\sum_{|\beta|=0}^{+\infty}\frac{(-1)^{|\alpha|}}{\alpha!\beta!}\partial^{\alpha}G(x)M_{\alpha\beta}\partial^{\beta}H(0),\quad \text{as }|x|\to \infty. \end{equation} \section{Main results} \quad \ \ In this section, we present the main results on the existence and uniqueness of GPTs vanishing structures.
We first introduce the contracted generalized polarization tensors (CGPTs) and the concept of $N$-GPTs vanishing structures in two and three dimensions, respectively. The proofs of the main results are given in the subsequent sections. \subsection{Two dimensional case} \quad\ Any harmonic function $H(x)$ in $\mathbb{R}^2$ admits the expansion \begin{equation}\label{H} H(x)=H(0)+\sum_{n=1}^{+\infty}r^n(a^c_n\cos n\theta+a^s_n\sin n\theta), \end{equation} where $x=(r\cos\theta,r\sin\theta)$. For multi-indices $\alpha,\beta$ with $|\alpha|=|\beta|=n$, we define the coefficients $(a_{\alpha}^c)$ and $(a_{\beta}^s)$ by \begin{equation} \sum_{|\alpha|=n}a_{\alpha}^cx^{\alpha}=r^n\cos n\theta\quad \text{and} \quad \sum_{|\beta|=n}a_{\beta}^sx^{\beta}=r^n\sin n\theta, \notag \end{equation} and then define the contracted generalized polarization tensors (CGPTs) by \begin{equation}\label{CGPT1} M_{mn}^{cc}:=\sum_{|\alpha|=m}\sum_{|\beta|=n}a_{\alpha}^ca_{\beta}^cM_{\alpha\beta}, \end{equation} \begin{equation}\label{CGPT2} M_{mn}^{cs}:=\sum_{|\alpha|=m}\sum_{|\beta|=n}a_{\alpha}^ca_{\beta}^sM_{\alpha\beta}, \end{equation} \begin{equation}\label{CGPT3} M_{mn}^{sc}:=\sum_{|\alpha|=m}\sum_{|\beta|=n}a_{\alpha}^sa_{\beta}^cM_{\alpha\beta}, \end{equation} \begin{equation}\label{CGPT4} M_{mn}^{ss}:=\sum_{|\alpha|=m}\sum_{|\beta|=n}a_{\alpha}^sa_{\beta}^sM_{\alpha\beta}. \end{equation} Note the expansion of $G(x-y)$ as $|x|\to \infty$, namely \begin{equation}\label{G} G(x-y)=\sum_{n=1}^{+\infty}\frac{-1}{2\pi n}\Big\{ \frac{\cos n\theta_x}{r_x^n}r_y^n\cos n\theta_y+\frac{\sin n\theta_x}{r_x^n}r_y^n\sin n\theta_y \Big\}+C, \end{equation} where $x=(r_x\cos\theta_x,r_x\sin\theta_x)$, $y=(r_y\cos\theta_y,r_y\sin\theta_y)$ and $y\in \Gamma_k$, $k=1,\dots,N$.
Plugging \eqref{H} and \eqref{G} into \eqref{expansion}, we get \begin{equation}\label{far-field}\begin{aligned} (u-H)(x)=&-\sum_{m=1}^{+\infty}\frac{\cos m\theta}{2\pi mr^m}\sum_{n=1}^{\infty}(M_{mn}^{cc}a_n^c+M_{mn}^{cs}a_n^s)\\ &-\sum_{m=1}^{+\infty}\frac{\sin m\theta}{2\pi mr^m}\sum_{n=1}^{\infty}(M_{mn}^{sc}a_n^c+M_{mn}^{ss}a_n^s), \end{aligned} \end{equation} uniformly as $|x|\to +\infty$. The structure $\Omega$ with conductivity distribution $\sigma_{\Omega}$ is called an $N$-GPTs vanishing structure if $M_{mn}^{cc}=M_{mn}^{cs}=M_{mn}^{sc}=M_{mn}^{ss}=0$ for all $m,n\leq N$. To obtain $N$-GPTs vanishing structures, Ammari \textit{et al}. \cite{ammari2013enhancement1} introduced multiple radially symmetric coatings. We shall prove that, for any $N$, an $(N+1)$-layer radially symmetric structure can be designed to achieve the $N$-GPTs vanishing. Suppose $0<r_{N+1}<r_{N}<\dots<r_{2}<r_{1}$ and denote the coating domains by \begin{equation} \Omega_k:=\{r_{k+1}<r\leq r_{k}\}, \quad k=1,2,\dots,N,\notag \end{equation} so that the core $\Omega_{N+1}=\{r\leq r_{N+1}\}$ is coated by the $\Omega_k$'s, and the background medium is $\Omega_{0}=\{r> r_{1}\}$. Let the conductivity of $\Omega_k$ be $\sigma_k$ and set $\sigma_{0}=1$. By the orthogonality of harmonic functions, there holds \begin{equation} \begin{aligned} &M_{nm}^{cs}=M_{nm}^{sc}=0\quad \text{for all } m,n,\\ &M_{nm}^{cc}=M_{nm}^{ss}=0\quad \text{for } m\neq n, \end{aligned}\notag \end{equation} so the $N$-GPTs vanishing condition reduces to \begin{equation} M_{nn}^{cc}=M_{nn}^{ss}=0\quad\text{for } n=1,\dots,N. \notag \end{equation} With an $N$-GPTs vanishing structure, the electric potential is therefore only mildly perturbed, namely \begin{equation} (u-H)(x)=\mathcal{O}(|x|^{-N-1})\quad \text{as }|x|\to +\infty.
\notag \end{equation} In what follows, define $$\eta_k=\frac{\sigma_k-\sigma_{k-1}}{\sigma_k+\sigma_{k-1}}, \quad k=1,2,\dots,N+1.$$ For the sake of simplicity, we write $M_n[\eta]=M_{nn}^{cc}$ generated by $\eta=(\eta_1,\dots,\eta_{N+1})$ and $\mathcal{M}_N(\eta)=(M_1[\eta],\dots,M_N[\eta])$. We are mainly concerned with solutions to the following nonlinear system: \begin{equation} \mathcal{M}_N(\eta)=0,\quad \eta\in[-1,1]^{N+1},\ \eta\neq 0. \notag \end{equation} It is readily seen that $\eta=0$ is a trivial solution to the above nonlinear system. The existence of $N$-GPTs vanishing structures can be stated as follows. \begin{theorem}\label{fixed core} For any given radii $0<r_{N+1}<r_{N}<\dots<r_{2}<r_{1}$ and fixed $\sigma_{N+1}\geq 0$, there exists a combination $\eta=(\eta_1,\dots,\eta_{N+1})\in [-1,1]^{N+1}$ such that the $N$-GPTs $\mathcal{M}_N$ vanish. \end{theorem} \subsection{Three dimensional case} Any harmonic function $H(x)$ in $\mathbb{R}^3$ can be written as \begin{equation} H(x)=H(0)+\sum_{n=1}^{+\infty}\sum_{n'=-n}^na_n^{n'}r^nY^{n'}_n(\theta,\phi), \notag \end{equation} where $Y_n^{n'}(\theta,\phi)$ denotes the spherical harmonic of degree $n$ and order $n'$. Similarly, the fundamental solution $G(x-y)$ has the following expansion as $|x|\to \infty$: \begin{equation}\label{G2} G(x-y)=\sum_{n=1}^{+\infty}\sum_{n'=-n}^n \frac{r_y^n}{(2n+1)r_x^{n+1}}Y_n^{n'}(\theta_x,\phi_x)\overline{Y_n^{n'}(\theta_y,\phi_y)}+C, \end{equation} where $(r_x,\theta_x,\phi_x)$ denotes the spherical coordinates of $x$.
For a multi-index $\alpha$ with $|\alpha|=n$, we define $(a_{\alpha}^{n,m})$ by \begin{equation} \sum_{|\alpha|=n}a_\alpha^{n,m}x^\alpha=r^nY_n^m(\theta,\phi), \notag \end{equation} and the contracted generalized polarization tensors (CGPTs) in three dimensions are then defined as \begin{equation} M_{mn}^{m'n'}:=\sum_{|\alpha|=m}\sum_{|\beta|=n}a_{\alpha}^{m,m'}a_{\beta}^{n,n'}M_{\alpha\beta}, \quad |m'|\leq m,\ |n'|\leq n, \notag \end{equation} which leads to the following far-field expansion: \begin{equation}\label{far-field2} (u-H)(x)=\sum_{m=1}^{\infty}\sum_{m'=-m}^{m}\frac{Y_m^{m'}(\theta,\phi)}{(2m+1)r^{m+1}}\sum_{n=1}^{\infty}\sum_{n'=-n}^{n}M_{mn}^{m'n'}a_n^{n'}, \end{equation} uniformly as $|x|\to +\infty$. We keep the geometric and conductivity settings of $\Omega$ in accordance with the two-dimensional case. Since $M_{nm}^{n'm'}=0$ for $n\neq m$ or $n'\neq m'$, and $M_{nn}^{n_1'n_1'}=M_{nn}^{n_2'n_2'}$ for $|n_1'|\leq n,\ |n_2'|\leq n$, we may denote $\tilde{M}_n=M_{nn}^{n'n'}$. Thus an $N$-GPTs vanishing structure $\Omega$ can be designed by requiring $\tilde{M}_n=0,\ n=1,\dots,N$. Let $\tilde{\mathcal{M}}_N(\eta)=(\tilde{M}_1[\eta],\dots,\tilde{M}_N[\eta])$ be generated by $\eta=(\eta_{1},\dots,\eta_{N+1})$. We now state the existence result in the three-dimensional case. \begin{theorem}\label{th:main2} For any given radii $0<r_{N+1}<r_{N}<\dots<r_{2}<r_{1}$ and fixed $\sigma_{N+1}\geq 0$, there exists a combination $\eta=(\eta_{1},\dots,\eta_{N+1})\in [-1,1]^{N+1}$ such that the $N$-GPTs $\tilde{\mathcal{M}}_N$ vanish. \end{theorem} \section{The existence and uniqueness of $N$-GPTs vanishing structure locally} \quad \ \ In this section, we prove the existence and uniqueness of $N$-GPTs vanishing structures under certain assumptions on the solutions or the structure. These results will be used to derive our final results in the last section.
Let us first recall the result obtained in \cite{FD23,kong2024inverse} for radially symmetric structures. It can be given explicitly that $M_n=-2\pi n b_0^{(n)}$, where $b_0^{(n)}=e^T \Upsilon_{N+1}^{(n)} (P_{N+1}^{(n)})^{-1} e$, $e=(1,1,\cdots,1)^T$, \begin{equation}\label{PN} P_{N+1}^{(n)}[\eta]:=\begin{bmatrix} 1/\eta_1 & (r_2/ r_1)^{2n} & \cdots & (r_{N+1}/ r_1)^{2n} \\ -1 & 1/\eta_2 & \cdots &(r_{N+1}/ r_2)^{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & 1/\eta_{N+1} \end{bmatrix}, \end{equation} and \begin{equation}\label{RN} \Upsilon_{N+1}^{(n)}:=\begin{bmatrix} r_1^{2n} & 0 & \cdots & 0\\ 0 & r_2^{2n} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & r_{N+1}^{2n} \end{bmatrix}. \end{equation} In the three-dimensional case, one similarly has $\tilde{M}_n=-(2n+1)e^T \tilde{\Upsilon}_{N+1}^{(n)} (\tilde{P}_{N+1}^{(n)})^{-1} e$, where \begin{equation} \tilde{P}_{N+1}^{(n)}[\eta]:=\begin{bmatrix} \frac{2n+1}{2n}\frac{1}{\eta_1}+\frac{1}{2n} & \frac{n+1}{n}(r_2/ r_1)^{2n+1} & \cdots & \frac{n+1}{n}(r_{N+1}/ r_1)^{2n+1} \\ -1 & \frac{2n+1}{2n}\frac{1}{\eta_2}+\frac{1}{2n} & \cdots &\frac{n+1}{n}(r_{N+1}/ r_2)^{2n+1} \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & \frac{2n+1}{2n}\frac{1}{\eta_{N+1}}+\frac{1}{2n} \end{bmatrix}, \end{equation} and \begin{equation} \tilde{\Upsilon}_{N+1}^{(n)}:=\begin{bmatrix} r_1^{2n+1} & 0 & \cdots & 0\\ 0 & r_2^{2n+1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & r_{N+1}^{2n+1} \end{bmatrix}.\notag \end{equation} We point out that, in the above setup, we have assumed that $\eta_n\neq 0$, $n=1,2,\ldots, N+1$. Indeed, if $\eta_{N_0}=0$ for some $N_0\geq 1$ and $\eta_{n}\neq 0$ for $n\neq N_0$, then the $N$ coatings essentially degenerate to $N-1$ coatings. It is commonly believed that $N-1$ coatings cannot achieve the $N$-GPTs vanishing, at least when the radii of the $N-1$ coatings are fixed. We shall show that this is actually true.
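To make the above representation concrete, the following Python sketch (an illustration, not part of the proof) assembles $P_{N+1}^{(n)}$ and $\Upsilon_{N+1}^{(n)}$ from \eqref{PN}--\eqref{RN} and evaluates $M_n=-2\pi n\,e^T\Upsilon_{N+1}^{(n)}(P_{N+1}^{(n)})^{-1}e$ in the two-dimensional case. For a single coating ($N=1$) it then locates, by bisection in $\eta_2$ with $\eta_1$ fixed, a combination with $M_1=0$; a short computation with the $2\times 2$ inverse suggests the root $\eta_2=-\eta_1(r_1/r_2)^{2}$, which the numerics confirm.

```python
import math

def solve(A, b):
    # Solve A x = b by Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def M_n(n, radii, eta):
    # M_n = -2*pi*n * e^T Upsilon^{(n)} (P^{(n)})^{-1} e for concentric disks.
    L = len(radii)
    P = [[1.0 / eta[i] if i == j
          else (radii[j] / radii[i]) ** (2 * n) if j > i
          else -1.0
          for j in range(L)] for i in range(L)]
    x = solve(P, [1.0] * L)                    # x = (P^{(n)})^{-1} e
    return -2.0 * math.pi * n * sum(r ** (2 * n) * xi for r, xi in zip(radii, x))

radii, eta1 = [1.0, 0.5], 0.1                  # one coating, eta_1 fixed
f = lambda e2: M_n(1, radii, [eta1, e2])
lo, hi = -0.9, -0.1                            # f changes sign on this bracket
for _ in range(200):                           # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) > 0:
        lo = mid
    else:
        hi = mid
eta2 = 0.5 * (lo + hi)
print(eta2, f(eta2))      # eta2 close to -eta1*(r1/r2)^2 = -0.4, M_1 close to 0
```

With $\eta_1=0.1$ the computed $\eta_2\approx-0.4$; note that $|\eta_1|\leq (r_2/r_1)^2$ is needed here for the root to stay in the admissible range $[-1,1]$.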
But before this, we need some local existence and uniqueness results for the $N$-GPTs vanishing with $N$ coatings. In what follows, we define $f_0:=\sum_{n=1}^{N+1} \eta_n$. Numerical observations in \cite{ammari2013enhancement1} show that when the $N$-GPTs vanish, $f_0$ is small, which indicates that the conductivity distribution in the multiple coatings is oscillating. So it is natural to first assume that $f_0$ is small enough. \subsection{Existence and uniqueness when $f_0$ is small} \begin{theorem}\label{th:mainsmall1} Suppose that $f_0=\sum_{n=1}^{N+1} \eta_n\ll 1$ is fixed. Then there exists a unique solution $\eta$ such that the $N$-GPTs vanish. \end{theorem} \begin{proof} We only prove the two-dimensional case; the three-dimensional case is similar. Let \beq \GL:= \begin{bmatrix} \eta_1 & 0 & \cdots & 0 \\ 0 & \eta_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \eta_{N+1} \end{bmatrix} \quad\mbox{and}\quad \Pi_n = P_{N+1}^{(n)} - \GL^{-1}. \eeq Then we have $$ (P_{N+1}^{(n)})^{-1}= (I+\GL\Pi_n)^{-1} \GL. $$ We now write $b_0^{(n)}$ as \beq\label{eq:comfin2} b_0^{(n)}= e^T \Upsilon_{N+1}^{(n)} \GL e + e^T \Upsilon_{N+1}^{(n)} [(I+\GL\Pi_n)^{-1}-I] \GL e. \eeq One can easily see that \beq e^T \Upsilon_{N+1}^{(n)} \GL e = \sum_{j=1}^{N+1} \eta_j r_j^{2n}. \eeq Define \beq\label{eq:dfnwk} w_n(\eta):= e^T \Upsilon_{N+1}^{(n)} [(I+\GL\Pi_n)^{-1}-I] \GL e; \eeq then \beq\label{wkest} |w_n(\eta)| \le C |\eta|^2, \eeq where $C$ depends on $r_1, \ldots, r_{N+1}$ and $N$.
Thus $\eta=(\eta_1, \ldots, \eta_{N+1})$ satisfies the following equation: \beq V \eta + W(\eta)= \begin{bmatrix} \sum_{j=1}^{N+1} \eta_j \\ b_{0}^{(1)} \\ \vdots \\ b_{0}^{(N)} \end{bmatrix}, \eeq where $$ V= \begin{bmatrix} 1 & 1 & \cdots & 1 \\ r_1^2 & r_2^2 & \cdots & r_{N+1}^2 \\ \vdots & \vdots & \ddots & \vdots \\ r_1^{2N} & r_2^{2N} & \cdots & r_{N+1}^{2N} \end{bmatrix}, \quad W(\eta) = \begin{bmatrix} 0 \\ w_1(\eta) \\ \vdots \\ w_N(\eta) \end{bmatrix}. $$ Note that $V$ is a Vandermonde matrix in the variables $r_j^2$ and is hence invertible. By \eqnref{wkest} there are $\Ge>0$ and constants $C_1, C_2$, depending on $r_1, \ldots, r_{N+1}$ and $N$, such that if $|\eta| \le \Ge$, then \beq C_1 |\eta| \le |V \eta + W(\eta)| \le C_2 |\eta|. \eeq By fixed point theory (see, for example, \cite{Ze86}), there is also $\delta_0>0$ such that for all $\mathbf{f}\in \mathbb{R}^{N+1}$ satisfying $|\mathbf{f}| \le \delta_0$ there is a unique solution $\eta$ with $|\eta| \le \Ge$ to \beq\label{VWf} V \eta + W(\eta) =\mathbf{f}. \eeq In particular, if we take $\mathbf{f}=(f_0, 0, \ldots, 0)^T$ with $|f_0| \le \delta_0$, then there is a unique solution to \eqnref{VWf}. \end{proof} \begin{corollary} If $\Gs_{N+1}$ is fixed and close enough to the background conductivity $\Gs_0$, then the $N$-GPTs vanishing structure exists and is unique. \end{corollary} \begin{proof} For the sake of simplicity, assume that the background conductivity is $\Gs_{0}=1$. Then there holds \beq\label{eq:Gsb_rela1} \Gs_{N+1}=\prod_{n=1}^{N+1}\frac{1+\eta_n}{1-\eta_n}. \eeq Since $\|\eta\|_{l^{\infty}}<1$, by Taylor expansion we have \beq \Gs_{N+1}=\prod_{n=1}^{N+1}\left[(1+\eta_n)\sum_{k=0}^{\infty}\eta_n^k\right]=1+2\sum_{n=1} ^{N+1}\eta_n + w_0(\eta), \eeq where the residual term $w_0(\eta)$ satisfies \beq |w_0(\eta)|\leq C |\eta|^2 \eeq for $|\eta|\leq\epsilon$.
Thus, if we take $\mathbf{f}=((1-\Gs_{N+1})/2, 0, \ldots, 0)^T$ and replace the first entry of $W(\eta)$ in Theorem \ref{th:mainsmall1} with $w_0(\eta)$, then, by following the same argument as in Theorem \ref{th:mainsmall1}, there is a unique solution to \eqnref{VWf} whenever $|(1-\Gs_{N+1})/2|\leq\delta_0$ for some small $\delta_0>0$. \end{proof} \subsection{Existence and uniqueness when $r_{N+1}$ is small} Next, we show that existence and uniqueness can also be proven in the case where the radius of the core, i.e. $r_{N+1}$, is small enough. This result will be used to establish the existence of $N$-GPTs vanishing structures in the general situation. \begin{theorem}\label{th:mainle2} Suppose that $r_{N+1}\ll 1$. Then for any fixed core conductivity $\Gs_{N+1}$, the $N$-GPTs vanishing structure exists and is unique. \end{theorem} \begin{proof} Note that, in view of \eqnref{eq:Gsb_rela1}, it is equivalent to fix $\Gs_{N+1}$ or $\eta_{N+1}$. Let \beq \tilde{V}=\left[ \begin{array}{cccc} r_1^2 & r_2^2 & \cdots & r_{N}^2 \\ r_1^4 & r_2^4 & \cdots & r_{N}^4 \\ \vdots & \vdots & \ddots & \vdots \\ r_1^{2N} & r_2^{2N} & \cdots & r_{N}^{2N} \end{array} \right],\quad \tilde{W}(\eta) = \begin{bmatrix} w_1(\eta) \\ w_2(\eta) \\ \vdots \\ w_N(\eta) \end{bmatrix}, \eeq and let $\eta'=(\eta_1,\ldots,\eta_{N})$, where $w_n(\eta)$ is defined in \eqnref{eq:dfnwk}. Setting $b_{0}^{(n)}=0$, a minor adjustment of \eqnref{VWf} gives \beq\label{VWf1} \tilde{V} {\eta'} + \tilde{W}(\eta) =\tilde{\mathbf{f}}, \eeq with $\tilde{\mathbf{f}}=-\eta_{N+1}(r_{N+1}^2,r_{N+1}^4,\ldots,r_{N+1}^{2N})^T$. Denote by $Q_{ij}^{(n)}$ the $(i,j)$-th entry of $(I+\GL\Pi_n)^{-1}-I$.
Since \begin{eqnarray*} w_n(\eta) & = & \sum_{i=1}^{N+1}\sum_{j=1}^{N} r_i^{2n}Q_{ij}^{(n)}\eta_j+\sum_{i=1}^{N+1}r_i^{2n}Q_{i(N+1)}^{(n)}\eta_{N+1} \\ & = & \sum_{j=1}^{N}r_{N+1}^{2n}Q_{(N+1)j}^{(n)}\eta_j + r_{N+1}^{2n}Q_{(N+1)(N+1)}^{(n)}\eta_{N+1}+O(r_{N+1}^{2n})+O(|{\eta'}|^2) \\ & = & O(r_{N+1}^{2n}) + O(|\eta'|^2), \end{eqnarray*} moving the $O(r_{N+1}^{2n})$ part of $w_n$ to the right-hand side of \eqnref{VWf1} shows that the right-hand side of the $n$-th equation is of order $r_{N+1}^{2n}$, $n=1,2,\ldots, N$. Since $\tilde{V}$ is invertible and does not depend on $r_{N+1}$, for any $\Ge>0$ there exists $\delta_1>0$ such that for $r_{N+1} \le \delta_1$ there is a unique solution $\eta'$ with $|\eta'| \le \Ge$ to \eqnref{VWf1}. \end{proof} \section{Proof of main theorems} \label{sec:3} \quad \ \ Let us first rewrite the integral system \eqref{integral} as \begin{equation}\label{integral2} \begin{bmatrix} I-2\eta_1\mathcal{K}^{*}_{\Gamma_1} & -2\eta_1\mathcal{K}_{\Gamma_2,\Gamma_1} &\cdots & -2\eta_1\mathcal{K}_{\Gamma_N,\Gamma_1}\\ -2\eta_2\mathcal{K}_{\Gamma_1,\Gamma_2} & I-2\eta_2\mathcal{K}^{*}_{\Gamma_2} &\cdots & -2\eta_2\mathcal{K}_{\Gamma_N,\Gamma_2}\\ \vdots & \vdots & \ddots & \vdots\\ -2\eta_N\mathcal{K}_{\Gamma_1,\Gamma_N} & -2\eta_N\mathcal{K}_{\Gamma_2,\Gamma_N} &\cdots & I-2\eta_N\mathcal{K}^{*}_{\Gamma_N} \end{bmatrix}\begin{bmatrix} \phi_1\\\phi_2\\ \vdots \\\phi_N \end{bmatrix}=\begin{bmatrix} 2\eta_1\partial_{\nu_1}H\\2\eta_2\partial_{\nu_2}H\\ \vdots \\2\eta_N\partial_{\nu_N}H \end{bmatrix}. \end{equation} Denoting the coefficient operator by $\mathbb{E}^{\eta}_{\Omega}$, we shall show that the CGPTs are continuously differentiable with respect to $\eta=(\eta_1,\eta_2,\dots,\eta_N)$. \begin{theorem}\label{the1} Let $\alpha,\beta\in\mathbb{N}^d$ and $\eta_k\in[-1,1]$, $k=1,\dots,N$.
Let $\phi^{\eta}_{k,\beta}$ be the solution of \begin{equation} \mathbb{E}^{\eta}_{\Omega}(\phi_{1,\beta},\dots,\phi_{N,\beta})^T=(2\eta_1\partial_{\nu_1}y^{\beta},2\eta_2\partial_{\nu_2}y^{\beta},\dots,2\eta_N\partial_{\nu_N}y^{\beta})^T.\notag \end{equation} Then the GPTs $M_{\alpha\beta}$ defined by \eqref{GPT} are continuously differentiable in $\eta_k\in[-1,1]$, and so are the CGPTs. \begin{proof} We first claim that the integral operator $\mathbb{E}^{\eta}_{\Omega}$ is invertible for all $\eta\in[-1,1]^N$. If $\eta_k\neq 0$ for every $k\leq N$, the claim follows immediately from the invertibility of $\mathbb{J}_{\Omega}$. Otherwise, assume $\eta_{k_i}=0$ for $i=1,2,\dots,s$; then the $k_i$-th equation becomes \begin{equation} I[\phi_{k_i,\beta}]=0,\quad i=1,2,\dots,s, \notag \end{equation} so $\phi_{k_i,\beta}=0$, and the system degenerates to an $(N-s)$-layer transmission problem, whose invertibility follows from Theorem \ref{invertible}. Let $\delta\ll 1$, let $\tilde{\eta}\in U(\eta,\delta)\cap [-1,1]^N$, where $U(\eta,\delta)$ denotes the $\delta$-neighborhood of $\eta$, and set $\Delta{\eta}=\tilde{\eta}-\eta$.
A direct computation shows \begin{equation} \begin{aligned} (\mathbb{E}_{\Omega}^{\eta}+\tilde{\mathbb{E}}_{\Omega})(\tilde{\phi}_{1,\beta},\dots,\tilde{\phi}_{N,\beta})^T&=(2\eta_1\partial_{\nu_1}y^{\beta},\dots,2\eta_N\partial_{\nu_N}y^{\beta})^T\\&+(2\Delta\eta_1\partial_{\nu_1}y^{\beta},\dots,2\Delta\eta_N\partial_{\nu_N}y^{\beta})^T, \end{aligned} \notag \end{equation} where \begin{equation} \tilde{\mathbb{E}}_{\Omega}=\begin{bmatrix} -2\Delta{\eta_1}\mathcal{K}^{*}_{\Gamma_1} & -2\Delta{\eta_1}\mathcal{K}_{\Gamma_2,\Gamma_1} &\cdots & -2\Delta{\eta_1}\mathcal{K}_{\Gamma_N,\Gamma_1}\\ -2\Delta{\eta_2}\mathcal{K}_{\Gamma_1,\Gamma_2} & -2\Delta{\eta_2}\mathcal{K}^{*}_{\Gamma_2} &\cdots & -2\Delta{\eta_2}\mathcal{K}_{\Gamma_N,\Gamma_2}\\ \vdots & \vdots & \ddots & \vdots\\ -2\Delta{\eta_N}\mathcal{K}_{\Gamma_1,\Gamma_N} & -2\Delta{\eta_N}\mathcal{K}_{\Gamma_2,\Gamma_N} &\cdots & -2\Delta{\eta_N}\mathcal{K}^{*}_{\Gamma_N} \end{bmatrix}. \notag \end{equation} It follows that $(\tilde{\phi}_{1,\beta},\dots,\tilde{\phi}_{N,\beta})=(\phi_{1,\beta},\dots,\phi_{N,\beta})+(\phi^{(1)}_{1,\beta},\dots,\phi^{(1)}_{N,\beta})+\mathcal{O}(\delta^2)$, where \begin{equation}\label{MD} \begin{aligned} (\phi^{(1)}_{1,\beta},\dots,\phi^{(1)}_{N,\beta})^T&=-(\mathbb{E}_{\Omega}^{\eta})^{-1}\tilde{\mathbb{E}}_{\Omega}(\mathbb{E}_{\Omega}^{\eta})^{-1}(2\eta_1\partial_{\nu_1}y^{\beta},\dots,2\eta_N\partial_{\nu_N}y^{\beta})^T\\&+(\mathbb{E}_{\Omega}^{\eta})^{-1}(2\Delta\eta_1\partial_{\nu_1}y^{\beta},\dots,2\Delta\eta_N\partial_{\nu_N}y^{\beta})^T. \end{aligned} \end{equation} Since $(\mathbb{E}_{\Omega}^{\eta})^{-1}$ is independent of $\Delta{\eta}$, the right-hand side of \eqref{MD} is linear in $\Delta{\eta}$.
Plugging \eqref{MD} into \eqref{GPT}, we obtain \begin{equation} \begin{aligned} \tilde{M}_{\alpha\beta}&=\sum_{k=1}^N\int_{\Gamma_k}y^{\alpha}\tilde{\phi}_{k,{\beta}}d\sigma_y\\&=\sum_{k=1}^N\int_{\Gamma_k}y^{\alpha}{\phi}_{k,{\beta}}d\sigma_y+\sum_{k=1}^N\int_{\Gamma_k}y^{\alpha}{\phi}^{(1)}_{k,{\beta}}d\sigma_y+\mathcal{O}(\delta^2)\\ &=:M_{\alpha\beta}+\mathbf{M}^{(1)}_{\alpha\beta}(\Delta\eta_1,\dots,\Delta\eta_N)^T+\mathcal{O}(\delta^2), \end{aligned}\notag \end{equation} where $\mathbf{M}^{(1)}_{\alpha\beta}=(m^1_{\alpha\beta},\dots,m^N_{\alpha\beta})$ is determined by \eqref{MD}. The continuous differentiability of the CGPTs then follows from \eqref{CGPT1}-\eqref{CGPT4}. \end{proof} \end{theorem} Before proving our main result on the existence of $N$-GPTs vanishing structures, we recall Brouwer degree theory, mainly following the monograph \cite{ciarlet2013linear}. \begin{definition}\label{deg} Let $D\subset \mathbb{R}^n$ be a bounded open set, let $\phi \in C^1(\overline{D},\mathbb{R}^n)$, and let $P$ be a regular value of $\phi$ with $P\notin \phi(\partial D)$. The Brouwer degree of $\phi$ in $D$ with respect to $P$ is defined as \begin{equation}\label{Brouwer degree} \text{deg}(\phi,D,P):=\sum_{x\in\phi^{-1}(P)}\text{sgn}J_{\phi}(x), \end{equation} where $\phi^{-1}(P)=\{x\in D:\phi(x)=P\}$ and $J_{\phi}(x)=\text{det}\nabla\phi(x)$. \end{definition} Note that the Brouwer degree can be extended to any $P\notin\phi(\partial D)$ and to merely continuous maps through an integral formulation. However, for simplicity and clarity of computation, we work with Definition \ref{deg}.
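As a toy illustration of Definition \ref{deg} (not taken from the original text), consider the odd scalar map $\phi(x)=x^3-x$ on $D=(-2,2)$ with the regular value $P=0$: the preimages are $x=-1,0,1$ with $\mathrm{sgn}\,J_{\phi}=+1,-1,+1$, so $\mathrm{deg}(\phi,D,0)=1$. The Python sketch below computes the degree directly from \eqref{Brouwer degree} by locating the preimages and summing the Jacobian signs.

```python
def brouwer_degree(f, df, a, b, P, m=4000):
    # deg(f, (a,b), P) = sum of sgn(f'(x)) over x in f^{-1}(P), assuming P is
    # a regular value of f that is not attained on the boundary {a, b}.
    sgn = lambda t: (t > 0) - (t < 0)
    deg = 0
    xs = [a + (b - a) * (i + 0.5) / m for i in range(m + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if (f(x0) - P) * (f(x1) - P) < 0:      # bracketed preimage of P
            for _ in range(80):                # refine by bisection
                xm = 0.5 * (x0 + x1)
                if (f(x0) - P) * (f(xm) - P) <= 0:
                    x1 = xm
                else:
                    x0 = xm
            deg += sgn(df(0.5 * (x0 + x1)))    # sign of the 1D Jacobian
    return deg

phi = lambda x: x ** 3 - x
dphi = lambda x: 3 * x ** 2 - 1
print(brouwer_degree(phi, dphi, -2.0, 2.0, 0.0))   # -> 1  (= +1 - 1 + 1)
```

Since $\phi$ is odd, the computed degree is odd, in agreement with Theorem 9.17-2 of \cite{ciarlet2013linear}.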
Moreover, one of the most important properties of the Brouwer degree is its homotopy invariance: \begin{theorem}\label{th:thmin}{\normalfont[Theorem 9.15-6 in \cite{ciarlet2013linear}]} Let $D\subset \mathbb{R}^n$ be a bounded open set, let $f$, $g\in C(\overline{D},\mathbb{R}^n)$, and let $H\in C(\overline{D}\times [0,1],\mathbb{R}^n)$ be a homotopy joining $f$ to $g$ in the space $C(\overline{D},\mathbb{R}^n)$, that is, \begin{equation} H(\cdot,0)=f\quad \text{and} \quad H(\cdot,1)=g. \notag \end{equation} If the point $b\notin H(\partial D\times [0,1])$, then \begin{equation} \mathrm{deg}(H(\cdot,\lambda),D,b)=\mathrm{deg}(f,D,b)\quad \text{for all }0\leq \lambda\leq1. \notag \end{equation} In particular, $\mathrm{deg}(f,D,b)=\mathrm{deg}(g,D,b)$. \end{theorem} In particular, the following theorem shows that the Brouwer degree of a continuous odd map is nonzero. \begin{theorem}\label{Borsuk-Ulam}{\normalfont[Theorem 9.17-2 in \cite{ciarlet2013linear}]} Let $D\subset \mathbb{R}^n$ be a bounded open set that is symmetric with respect to the origin and satisfies $0\in D$, and let $\phi\in C(\overline{D},\mathbb{R}^n)$ satisfy $\phi(x)=-\phi(-x)$ on $\partial D$. Then the Brouwer degree $\mathrm{deg}(\phi,D,0)$ is odd. \end{theorem} Brouwer degree theory is typically used to prove solvability; indeed, the following solvability theorem holds. \begin{theorem}\label{solvability}{\normalfont[Theorem 9.15-4 in \cite{ciarlet2013linear}]} Let $D\subset \mathbb{R}^n$ be a bounded open set, $\phi\in C(\overline{D},\mathbb{R}^n)$ and $b\notin \phi(\partial D)$. If $b \notin \phi(\overline{D})$, then \begin{equation} \mathrm{deg}(\phi,D,b)=0. \notag \end{equation} Hence, \begin{equation} \mathrm{deg}(\phi,D,b) \neq 0 \text{ implies that } \phi(x)=b \text{ for some } x\in D.
\notag \end{equation} \end{theorem} We establish the existence of $N$-GPTs vanishing structures by employing Brouwer degree theory: \begin{theorem}\label{main result} For any given radii $0<r_{N+1}<r_{N}<\dots<r_{2}<r_{1}$ and fixed $\eta_{N+1}\in [-1,1]$, there exists a nontrivial combination $\eta'=(\eta_1,\dots,\eta_N)$ with $\eta_n\in [-1,1]$, $n=1,\dots,N$, such that the $N$-GPTs vanish, i.e., $\mathcal{M}_N(\eta',\eta_{N+1})=0$. \begin{proof} Let $D=\{\eta' : |\eta_n|\leq 1,\ n=1,\dots,N\}$, and define the map $F_t(\eta')=\mathcal{M}_N(\eta',t)$, $|t|\leq1$. It follows from Theorem \ref{the1} that $F_t$ is a homotopy joining $F_{-1}$ to $F_1$. Firstly, we point out that if $\eta'\in \partial D$, then \begin{equation}\label{independent} \mathcal{M}_N(\eta',t_1)=\mathcal{M}_N(\eta',t_2) \quad\text{for any }t_1\neq t_2. \end{equation} In fact, if $\eta'\in \partial D$, there exists some $1\leq p\leq N$ such that $\eta_p=\pm 1$. It suffices to consider $\eta_p=-1$, as the other case is similar. Consider the following transmission system: \begin{equation}\label{ntransmission} \begin{cases} \Delta u=0,& \text{in }\mathbb{R}^2\backslash\cup_{k=1}^{N+1}\Gamma_k,\\ u|_{+}=u|_{-}, & \text{on } \Gamma_k, \ k<p,\\ (1-\eta_k)\partial_{\nu}u|_{+}=(1+\eta_k)\partial_{\nu}u|_{-}, & \text{on } \Gamma_k,\ k<p,\\ \partial_{\nu}u|_{+}=0, & \text{on } \Gamma_p,\\ u-r^n\cos n\theta=\mathcal{O}(|x|^{-1}), &|x|\to \infty. \end{cases} \end{equation} The orthogonality of harmonic functions shows that \begin{equation} u-r^n\cos n\theta=\frac{b_n\cos n\theta}{r^n} \quad \text{in } \mathbb{R}^2\backslash \Omega, \end{equation} and thus \begin{equation} M_n=-2\pi nb_n \end{equation} follows from \eqref{far-field}. Note that \eqref{ntransmission} can be treated as an exterior Neumann boundary value problem; therefore $M_n$ is independent of $\eta_{N+1}$, which implies \eqref{independent}. Next we suppose that $0\notin F_0(\partial D)$, which will be verified at the end of the proof.
In this case, by \eqref{independent}, one has $0\notin F_t(\partial D)$ for all $|t|\leq 1$. Now let $u_k^{\perp}$ be a harmonic conjugate of $u$ in each $\Omega_k$ and define $v=\sigma_k u_k^{\perp}+c_k$ in $\Omega_k$, where \begin{equation} \sigma_k=\prod_{i=1}^{k}\frac{1+\eta_i}{1-\eta_i},\quad k=0,1,\dots,N+1, \notag \end{equation} and the constants $c_k$ are chosen such that $v$ satisfies the transmission conditions across $\partial\Omega_k$, $k=0,1, \ldots, N+1$. Then $v$ satisfies the following transmission problem: \begin{equation}\label{ntransmission2} \begin{cases} \Delta v=0,& \text{in }\mathbb{R}^2\backslash\cup_{k=1}^{N+1}\Gamma_k,\\ v|_{+}=v|_{-}, & \text{on } \Gamma_k,\\ (1+\eta_k)\partial_{\nu}v|_{+}=(1-\eta_k)\partial_{\nu}v|_{-}, & \text{on } \Gamma_k,\\ v+r^n\sin n\theta=\mathcal{O}(|x|^{-1}), &|x|\to \infty. \end{cases} \end{equation} Therefore we immediately deduce that $\mathcal{M}(\eta',t)=-\mathcal{M}(-\eta',-t)$, and hence $F_0(\eta')=-F_0(-\eta')$. It follows from Theorem \ref{Borsuk-Ulam} that \begin{equation} \text{deg}(F_0,D,0)\neq 0. \notag \end{equation} By the homotopy invariance of the Brouwer degree (Theorem \ref{th:thmin}), we deduce that \begin{equation}\label{brouwer degree} \text{deg}(F_t,D,0)=\text{deg}(F_0,D,0)\neq 0, \end{equation} from which the existence of an $N$-GPTs vanishing structure follows immediately by Theorem \ref{solvability}. Finally, we show that $0\notin F_0(\partial D)$. Otherwise, by \eqref{independent}, there would exist $\eta'\in \partial D$ such that \begin{equation} F_t(\eta')=F_0(\eta')=0 \quad \text{for all }|t|\leq 1. \notag \end{equation} Since in this case $\eta_{N+1}=0$, the structure is equivalent to $N$ coatings over $\Omega_{N+1}$ with radius $r_{N+1}$ sufficiently small. Thus there would exist at least two solutions for such an $N$-GPTs vanishing structure, one with $\eta_{N+1}=0$ and the other with $\eta_{N+1}\neq 0$, which contradicts Theorem \ref{th:mainle2}.
\end{proof} \end{theorem} \begin{proof}[Proof of Theorem \ref{fixed core}] Without loss of generality, assume $\sigma_0=1$; then \begin{equation} \sigma_{N+1}=\prod_{i=1}^{N+1}\frac{1+\eta_i}{1-\eta_i}=t. \end{equation} Let $\overline{M}_t^{(n)}=M_n[\eta_1,\dots,\eta_{N+1}]$, where \begin{equation} \eta_{N+1}=\frac{t\prod_{i=1}^N(1-\eta_i)-\prod_{i=1}^N(1+\eta_i)}{t\prod_{i=1}^N(1-\eta_i)+\prod_{i=1}^N(1+\eta_i)}, \notag \end{equation} and define $G_t(\eta_1,\dots,\eta_N)=(\overline{M}_t^{(1)},\dots,\overline{M}_t^{(N)})$. For $t=0$, we immediately obtain $\eta_{N+1}=-1$, and the existence in this case is established in Theorem \ref{main result}. By \eqref{brouwer degree}, the Brouwer degree of $G_0$ in $D$ satisfies \begin{equation} \text{deg}(G_0,D,0)\neq 0. \notag \end{equation} Using the same approach as in Theorem \ref{main result}, the homotopy invariance of the Brouwer degree applies, and it follows that \begin{equation} \text{deg}(G_t,D,0)=\text{deg}(G_0,D,0)\neq 0. \end{equation} The existence of the $N$-GPTs vanishing structure is then proved. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main2}] Let us first define $D=\{\eta' : |\eta_n|\leq 1,\ n=1,\dots,N\}$ and $\tilde{F}_{t}(\eta')=\tilde{\mathcal{M}}_N(\eta',t)$. Note that we cannot use the same harmonic conjugate argument to show that $\tilde{\mathcal{M}}_N(\eta)$ is an odd map, so we shall compute the Brouwer degree of $\tilde{F}_0$ directly. Let \begin{equation} u(x)=\sum_{n=0}^N H_n(x)+\sum_{k=1}^{N+1}\mathcal{S}_{\Gamma_k}[\phi_k](x),\quad x\in\mathbb{R}^3, \notag \end{equation} with $H_n=r^nY_n^{n'}(\theta,\phi)$ and $(\phi_1,\dots,\phi_{N+1})$ satisfying \eqref{integral2}. Thus \begin{equation} (u-\sum_{n=0}^N H_n)(x)=\sum_{n=0}^N\frac{b_nY_n^{n'}(\theta,\phi)}{r^{n+1}} \quad\text{in }\mathbb{R}^3\backslash \Omega, \notag \end{equation} and $\tilde{M}_n=(2n+1)b_n$ follows from \eqref{far-field2}.
By using \eqref{MD} and the fact that (see, e.g., \cite{ammari2013spectral,ammari2013mathematical}) \begin{equation} \label{Single sphere} \mathcal{S}_{\Gamma_k}[Y_n^{n'}](x) = \begin{cases} - \frac{1}{2n+1} \frac{r^n}{r_k^{n-1}} Y_n^{n'} (\hat x), \quad & \mbox{if } |x|=r < r_k, \\ - \frac{1}{2n+1} \frac{r_k^{n+2}}{r^{n+1}} Y_n^{n'} (\hat x), \quad & \mbox{if } |x|=r > r_k, \end{cases} \end{equation} we immediately get \begin{equation} \frac{\partial\tilde{M}_n}{\partial \eta_k}\Big|_{\eta=0}=(2n+1)r^{n+1}\mathcal{S}_{\Gamma_k}[\partial_{\nu_k}H_n]=nr_k^{2n+1}. \end{equation} Note that $\tilde{F}_0=(\tilde{M}_1,\dots,\tilde{M}_N)$; therefore \begin{equation} \nabla{\tilde{F}_0}(0)=\begin{bmatrix} r_1^{3} & r_2^{3} & \cdots & r_N^{3} \\ 2r_1^{5} & 2r_2^{5} & \cdots & 2r_N^{5} \\ \vdots & \vdots & \ddots & \vdots \\ Nr_1^{2N+1} & Nr_2^{2N+1} & \cdots & Nr_N^{2N+1} \end{bmatrix},\notag \end{equation} where the Jacobian matrix $\nabla{\tilde{F}_0}(0)$ is of Vandermonde type and hence invertible. Similarly to Theorem \ref{main result}, for $\eta_{N+1}=0$ the structure reduces to $N$ coatings over $\Omega_{N+1}$ with radius $r_{N+1}$ sufficiently small. It follows from Theorem \ref{th:mainle2} that there exists a unique $\eta'$ with $|\eta'|\leq \epsilon$ such that the $N$-GPTs vanish. Therefore one readily derives that $\tilde{F}_0(\eta')\neq 0$ for $\eta'\neq 0$. Then it follows from \eqref{Brouwer degree} that \begin{equation} \text{deg}(\tilde{F}_0,D,0)=\text{sgn}J_{\tilde{F}_0}(0)\neq 0. \end{equation} Thus the existence for fixed $\eta_{N+1}$ can be proved by homotopy invariance, as in Theorem \ref{main result}. We have shown that there exists a nontrivial combination $\eta'=(\eta_1,\dots,\eta_N)$ with $\eta_n\in [-1,1]$, $n=1,\dots,N$, such that the $N$-GPTs vanish. Following steps similar to those in Theorem \ref{fixed core}, one can show the existence of an $N$-GPTs vanishing structure for any fixed $\sigma_{N+1}$. The proof is complete.
\end{proof} \section{Multi-layer structure with proportional radii} \label{sec:proportional} \quad\ After resolving the global existence problem, it is natural to ask whether the $N$-GPTs vanishing structure is unique for a fixed geometry and core conductivity. Note that we have only shown local uniqueness in Section 4. In this section, we investigate a specific class of $N$-GPTs vanishing structures with a special radius setting and derive a uniqueness result for an insulating core. Let $r_k=r_{N+1} \gamma^{N+1-k}$, $k=1,\dots,N+1$, with growth parameter $\gamma>1$, so that the radii of the layers increase by a constant factor. \subsection{A uniqueness result for proportional radii} \quad \ To start with, we provide an explicit formula for the CGPTs in two dimensions. We continue to use the notation $M_n[\eta]$ of Section \ref{sec:3}. Let $H_n=r^ne^{in\theta}$ in polar coordinates $(r,\theta)$. By orthogonality, we can express the solution $u_n$ in the form \begin{equation} u_n=a_k^{(n)}r^ne^{in\theta}+\frac{b_k^{(n)}}{r^n}e^{in\theta},\quad \text{in }\Omega_k, \quad k=0,1,\dots,N+1, \end{equation} where $a_{0}^{(n)}=1$ and $b_{N+1}^{(n)}=0$. Then it follows from \eqref{far-field} that \begin{equation}\label{k cgpt} M_n=-2\pi nb_0^{(n)}.
\end{equation} From the transmission conditions, the following recursion formulas hold: \begin{equation} \sigma_{k}a_k^{(n)}r_k^n+\sigma_{k}\frac{b_k^{(n)}}{r_k^n}=\sigma_{k}a_{k-1}^{(n)}r_k^n+\sigma_{k}\frac{b_{k-1}^{(n)}}{r_k^n},\notag \end{equation} \begin{equation} \sigma_{k}a_k^{(n)}r_k^{n}-\sigma_{k}\frac{b_k^{(n)}}{r_k^n}=\sigma_{k-1}a_{k-1}^{(n)}r_k^n-\sigma_{k-1}\frac{b_{k-1}^{(n)}}{r_k^n},\notag \end{equation} from which we immediately deduce \begin{equation}\label{recursion} \prod_{i=1}^k\begin{bmatrix} 2\sigma_i &0 \\ 0&2\sigma_i \end{bmatrix}\begin{bmatrix} a^{(n)}_{k} \\ b^{(n)}_{k} \\ \end{bmatrix}=\prod_{i=1}^k\begin{bmatrix} \sigma_i+\sigma_{i-1} &0 \\ 0&\sigma_i+\sigma_{i-1} \end{bmatrix} \begin{bmatrix} 1 &\eta_i r_i^{-2n} \\ \eta_i r_i^{2n}&1 \end{bmatrix} \begin{bmatrix} 1 \\ b^{(n)}_{0} \\ \end{bmatrix}. \end{equation} For the convenience of description, we denote by $\mathcal{C}^{m}_{n}$ the set of all combinations of $m$ elements out of $n$, $m\leq n$, with multi-index $\mathbf{i}=(i_1,i_2,\dots,i_m)\in \mathcal{C}^{m}_{n}$, where the indices are arranged so that $i_1<i_2<\dots<i_m$. Moreover, $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ denote the ceiling and floor functions, respectively. \begin{lemma}\label{explicit lemma} Let $r_1>r_2>\dots>r_{N+1}>0$ and $\sigma_{i-1}+ \sigma_{i}\neq 0$ for all $i\leq N+1$. The CGPTs $M_n$ generated by the $(N+1)$-layer structure admit the representation \begin{equation}\label{explicit M} M_n= \frac{2\pi np_{21}^{(n)}}{p_{22}^{(n)}},\quad n\in\mathbb{N}_{+}, \end{equation} where \begin{equation}\label{p21} p_{21}^{(n)}=\sum_{j=1}^{\lceil (N+1)/2\rceil}\sum_{\mathbf{i}\in \mathcal{C}_{N+1}^{2j-1}}\prod_{s=1}^{2j-1}\eta_{i_s}r_{i_s}^{(-1)^{s+1}2n}, \end{equation} \begin{equation}\label{p22} p_{22}^{(n)}=1+\sum_{j=1}^{\lfloor (N+1)/2\rfloor}\sum_{\mathbf{i}\in \mathcal{C}_{N+1}^{2j}}\prod_{s=1}^{2j}\eta_{i_s}r_{i_s}^{(-1)^{s+1}2n}.
\end{equation} \end{lemma} \begin{proof} Let \begin{equation} P^{(n)}=\begin{bmatrix} p_{11}^{(n)} &p_{12}^{(n)} \\ p_{21}^{(n)} &p_{22}^{(n)}\end{bmatrix}= \prod_{i=1}^{N+1}\begin{bmatrix} 1 &\eta_i r_i^{-2n} \\ \eta_i r_i^{2n}&1 \end{bmatrix}. \notag \end{equation} From \eqref{recursion}, one immediately gets that \begin{equation} \prod_{i=1}^{N+1}\begin{bmatrix} 2\sigma_i &0 \\ 0&2\sigma_i \end{bmatrix}\begin{bmatrix} a^{(n)}_{N+1} \\ b^{(n)}_{N+1} \\ \end{bmatrix}=\prod_{i=1}^{N+1}\begin{bmatrix} \sigma_i+\sigma_{i-1} &0 \\ 0&\sigma_i+\sigma_{i-1} \end{bmatrix} \begin{bmatrix} p_{11}^{(n)} &p_{12}^{(n)} \\ p_{21}^{(n)} &p_{22}^{(n)}\end{bmatrix} \begin{bmatrix} 1 \\ b^{(n)}_{0} \\ \end{bmatrix}. \notag \end{equation} Note that $b_{N+1}^{(n)}=0$, which implies $p_{21}^{(n)}+b_0^{(n)}p_{22}^{(n)}=0$; we conclude that \eqref{explicit M} holds with the help of \eqref{k cgpt}. We now deduce the explicit representations of $p_{21}^{(n)}$ and $p_{22}^{(n)}$. For simplicity of notation, we let \begin{equation} \Lambda_i^{(n)}=\begin{bmatrix} 0 &\eta_i r_i^{-2n} \\ \eta_i r_i^{2n}&0 \end{bmatrix},\notag \end{equation} so that the matrix $P^{(n)}$ becomes \begin{equation} P^{(n)}=\prod_{i=1}^{N+1}(I+\Lambda_i^{(n)})=\sum_{j=0}^{N+1}\sum_{\mathbf{i}\in \mathcal{C}_{N+1}^{j}}\prod_{s=1}^j \Lambda^{(n)}_{i_s}. \notag \end{equation} Since each $\Lambda_i^{(n)}$ is anti-diagonal, we have \begin{equation} \Lambda^{(n)}_{i_p}\Lambda^{(n)}_{i_q}=\begin{bmatrix} \eta_{i_p} r_{i_p}^{-2n}\eta_{i_q} r_{i_q}^{2n} & 0 \\ 0&\eta_{i_p} r_{i_p}^{2n}\eta_{i_q} r_{i_q}^{-2n} \end{bmatrix}.
\notag \end{equation} By simple induction, it follows that for any even $j$, \begin{equation} \prod_{s=1}^{j}\Lambda^{(n)}_{i_s}=\begin{bmatrix} \eta_{i_1} r_{i_1}^{-2n}\eta_{i_2} r_{i_2}^{2n}\dots\eta_{i_{j-1}} r_{i_{j-1}}^{-2n}\eta_{i_{j}} r_{i_{j}}^{2n} & 0 \\ 0&\eta_{i_1} r_{i_1}^{2n}\eta_{i_2} r_{i_2}^{-2n}\dots\eta_{i_{j-1}} r_{i_{j-1}}^{2n}\eta_{i_{j}} r_{i_{j}}^{-2n} \end{bmatrix},\notag \end{equation} and for any odd $j$ there holds \begin{equation} \prod_{s=1}^{j}\Lambda^{(n)}_{i_s}=\begin{bmatrix} 0 & \eta_{i_1} r_{i_1}^{-2n}\eta_{i_2} r_{i_2}^{2n}\dots\eta_{i_{j-1}} r_{i_{j-1}}^{2n}\eta_{i_{j}} r_{i_{j}}^{-2n} \\ \eta_{i_1} r_{i_1}^{2n}\eta_{i_2} r_{i_2}^{-2n}\dots\eta_{i_{j-1}} r_{i_{j-1}}^{-2n}\eta_{i_{j}} r_{i_{j}}^{2n}&0 \end{bmatrix}.\notag \end{equation} Summing these contributions, we deduce \eqref{p21} and \eqref{p22}. \end{proof} \begin{remark}\label{remark1} We remark that $p_{21}^{(n)}$ is a polynomial in $\eta$ whose coefficients are determined by the radii of the structure. Specifically, the coefficient of $\eta_{i_1}\cdots\eta_{i_{2j-1}}$ is given by \begin{equation} r_{i_1}^{2n}r_{i_2}^{-2n}\cdots r_{i_{2j-2}}^{-2n}r_{i_{2j-1}}^{2n}.\notag \end{equation} Since the indices are increasing and the radii decreasing, it is easy to see that \begin{equation} r_{i_{2j-1}}^{2n}<r_{i_1}^{2n}r_{i_2}^{-2n}\cdots r_{i_{2j-2}}^{-2n}r_{i_{2j-1}}^{2n}<r_{i_1}^{2n}.\notag \end{equation} Under the proportional radius setting, this property plays a crucial role in our subsequent analysis. \end{remark} \begin{lemma}\label{zero sol} Let $\eta=(\eta',\eta_{N+1})$ and $\mathcal{M}(\eta)=(M_1,M_2,\dots,M_N)$. If $r_k=r \gamma^{N+1-k}$, $k=1,\dots,N+1$, then $\mathcal{M}(\eta',0)=0$ if and only if $\eta'=0$. \begin{proof} The sufficiency is evident; we now prove the necessity. Note that for $\eta_{N+1}=0$, the structure can be regarded as an $N$-layer structure. In order to achieve $M_n=0$, it suffices to consider the value of $p_{21}^{(n)}$.
Let $\psi_n=p_{21}^{(n)}$; it follows from Lemma \ref{explicit lemma} that \begin{equation}\label{phik} \psi_{n} = \sum_{j=1}^{\lceil N/2\rceil}\sum_{\mathbf{i}\in \mathcal{C}_{N}^{2j-1}}r^{2n}\gamma^{2m_{{\mathbf{i}}}n}\prod_{s=1}^{2j-1}\eta_{i_s}, \end{equation} where \begin{equation}\label{mi1} m_\mathbf{i}=N+1-i_{1}+i_{2}-\dots+i_{2j-2}-i_{2j-1}. \end{equation} From the ordering $1\leq i_{1}<i_{2}<\dots<i_{2j-2}<i_{2j-1}\leq N$, it follows that \begin{equation} 1\leq N+1-i_{2j-1}\leq m_{\mathbf{i}}\leq N+1-i_{1}\leq N. \notag \end{equation} Define \begin{equation}\label{Aj1} A^N_k:=\{\mathbf{i}\in \mathcal{C}_{N}^{2j-1}\mid m_\mathbf{i}=k,\ 2j-1\leq N\},\ k=1,2,\dots,N; \end{equation} then by \eqref{phik} we have \begin{equation} \psi_{n} = \sum_{k=1}^{N}r^{2n}\gamma^{2kn}\sum_{\mathbf{i}\in A^N_k}\prod_{s=1}^{2j-1}\eta_{i_s}. \end{equation} Let $\xi_{k}:=\sum_{\mathbf{i}\in A_{k}^N}\prod_{s=1}^{2j-1}\eta_{i_s}$ and $\xi=(\xi_1,\xi_2,\dots,\xi_{N})$. The nonlinear system $\psi_{n} = 0$, $n=1,2,\dots,N$, then becomes the linear system \begin{equation}\label{linear system} \begin{bmatrix} r^2\gamma^2 & r^2\gamma^4 & \cdots & r^2\gamma^{2N}\\ r^4\gamma^4 & r^4\gamma^8 & \cdots & r^4\gamma^{4N}\\ r^6\gamma^6 & r^6\gamma^{12} & \cdots & r^6\gamma^{6N}\\ \vdots & \vdots & \ddots & \vdots\\ r^{2N}\gamma^{2N} & r^{2N}\gamma^{4N} & \cdots & r^{2N}\gamma^{2N^2} \end{bmatrix}\begin{bmatrix} \xi_1\\\xi_2\\ \vdots \\\xi_{N} \end{bmatrix}=\begin{bmatrix} 0\\0\\ \vdots \\0 \end{bmatrix}, \end{equation} whose coefficient matrix is of Vandermonde type and hence non-degenerate, so $\psi_{n} = 0$ for $n=1,\dots,N$ if and only if $\xi=0$. Note $\xi_1=\eta_N$ and $\xi_N=\eta_1$, so that $\eta_1=\eta_N=0$. More generally, assuming that $\eta_1=\eta_2=\dots=\eta_i=0$, it follows from Remark \ref{remark1} that $\xi_{N-i}=\eta_{i+1}=0$.
By induction, we immediately deduce \begin{equation} \eta_1=\eta_2=\dots=\eta_N=0, \end{equation} which implies that the nonlinear system $\mathcal{M}(\eta',0)=0$ has only the trivial solution. \end{proof} \end{lemma} We mention that the result of Lemma \ref{zero sol} has been proven for general radius settings; nevertheless, the above proof is useful for our subsequent uniqueness result. \begin{theorem}\label{pro radii} For the radii given by $r_k=r \gamma^{N+1-k}$, $ k=1,\dots,{N+1}$, $\gamma>1$, and fixed core conductivity $\sigma_{N+1}\geq 0$, there exists a combination $(\sigma_1,\dots,\sigma_{N})$ such that the $N$-GPTs vanish. Furthermore, if $\sigma_{N+1}=0$, the combination is unique. \end{theorem} \begin{proof} The existence follows immediately from Theorem \ref{fixed core}. We now show the uniqueness of the solution with $\sigma_{N+1}=0$. Let \begin{equation} \xi_{k+1}:=\sum_{\mathbf{i}\in A_{k}^{N+1}}\prod_{s=1}^{2j-1}\eta_{i_s}, \end{equation} where $A_{k}^{N+1}$ is defined by \eqref{mi1} and \eqref{Aj1}. Recall that $\sigma_{N+1}=0$ is equivalent to $\eta_{N+1}=-1$, and thus $\xi_1=\eta_{N+1}=-1$. Using the symmetry of \eqref{p21} and \eqref{p22}, we get \begin{equation} M_n(\eta_1,\eta_2,\dots,\eta_{N+1})=-M_n(-\eta_1,-\eta_2,\dots,-\eta_{N+1}), \end{equation} so it suffices to consider the case $\eta_{N+1}=1$. Using the same approach as in Lemma \ref{zero sol}, it can be shown that \begin{equation}\label{linear system2} \begin{bmatrix} r^2\gamma^2 & r^2\gamma^4 & r^2\gamma^6 &\cdots & r^2\gamma^{2N}\\ r^4\gamma^4 & r^4\gamma^8 & r^4\gamma^{12} &\cdots & r^4\gamma^{4N}\\ r^6\gamma^6 & r^6\gamma^{12} & r^6\gamma^{18} &\cdots & r^6\gamma^{6N}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ r^{2N}\gamma^{2N} & r^{2N}\gamma^{4N} & r^{2N}\gamma^{6N} &\cdots & r^{2N}\gamma^{2N^2} \end{bmatrix}\begin{bmatrix} \xi_2\\\xi_3\\\xi_4\\ \vdots \\\xi_{N+1} \end{bmatrix}=-\begin{bmatrix} r^2\\r^4\\r^6\\ \vdots \\r^{2N} \end{bmatrix}.
\end{equation} Therefore $(\xi_2,\xi_3,\xi_4,\dots,\xi_{N+1})$ is uniquely determined by the non-degenerate linear system \eqref{linear system2}, which implies that $p_{21}^{(n)}$ is uniquely determined for any $n\in \mathbb{N}_+$: \begin{equation} p_{21}^{(n)}=\begin{bmatrix} r^{2n}&r^{2n}\gamma^{2n}&\cdots& r^{2n}\gamma^{2nN}\end{bmatrix}\begin{bmatrix} \xi_1\\\xi_2\\ \vdots \\\xi_{N+1} \end{bmatrix}. \end{equation} Furthermore, we point out that $p_{22}^{(n)}$ can also be determined by $(\xi_1,\xi_2,\dots,\xi_{N+1})$. Let $\mathsf{m}_{\mathbf{i}}=-i_1+i_2-\dots-i_{2j-1}+i_{2j}$, where $\mathbf{i}\in \mathcal{C}^{2j}_{N+1}$; then we have \begin{equation} p_{22}^{(n)}=\begin{bmatrix} 1&\gamma^{2n}&\cdots& \gamma^{2nN}\end{bmatrix}\begin{bmatrix} 1\\\zeta_1\\ \vdots \\\zeta_{N} \end{bmatrix}, \end{equation} where $\zeta_{k}=\sum_{\mathbf{i}\in \mathcal{A}_k^{N+1}}\prod_{s=1}^{2j}\eta_{i_s}$ and $\mathcal{A}_k^{N+1}= \{\mathbf{i}\in \mathcal{C}_{N+1}^{2j}\mid\mathsf{m}_\mathbf{i}=k,\ 2j\leq N+1 \}$. Splitting the combinations according to whether they contain the index $N+1$, we can write $\xi_{k+1}=\xi_{k+1}^{(1)}+\xi_{k+1}^{(2)}$ with \begin{equation} \xi_{k+1}^{(1)}=\Big(\sum_{\mathbf{i}\in \mathcal{A}_k^N}\prod_{s=1}^{2j}\eta_{i_s}\Big)\eta_{N+1}, \end{equation} \begin{equation} \xi_{k+1}^{(2)}=\sum_{\mathbf{i}\in A_{k}^N}\prod_{s=1}^{2j-1}\eta_{i_s}. \end{equation} Since $\mathbf{i}=(i_1,i_2,\dots,i_{2j-1})\in A_{k}^{N}$ means that \begin{equation} N+1-i_{1}+i_{2}-\dots+i_{2j-2}-i_{2j-1}=k, \notag \end{equation} we have $(i_1,i_2,\dots,i_{2j-1},{N+1})\in \mathcal{A}_k^{N+1}$. Combining this with the definition of $\zeta_k$, it follows that \begin{equation} \zeta_k=\frac{\xi_{k+1}^{(1)}}{\eta_{N+1}}+\xi_{k+1}^{(2)}\eta_{N+1}; \end{equation} therefore, since $\eta_{N+1}=1$, $\zeta_k=\xi_{k+1}^{(1)}+\xi_{k+1}^{(2)}=\xi_{k+1}$ is unique. The total field $u_n-H_n=b_0^{(n)}e^{in\theta}/r^n=-p_{21}^{(n)}e^{in\theta}/\big(p_{22}^{(n)}r^n\big)$ is thus uniquely determined.
Thus the uniqueness of the conductivity distribution follows from the uniqueness of $L^{\infty}(\Omega)$ conductivities under the Dirichlet-to-Neumann (DtN) map \cite{astala2006calderon}. \end{proof} \subsection{The extreme case} \label{subsec:extreme} \quad \ In this part, we consider the case where the distance between two adjacent layers is extremely small; in other words, the core of the multi-layer structure is enclosed by extremely thin layers. Mathematically, we set $\gamma=1+\delta$, where $\delta\ll1$. One then has \begin{equation} r_k=r_{N+1} (1+\delta)^{N+1-k},\quad k=1,\dots,{N+1}. \end{equation} We focus on the conductivity distribution of the $N$-GPTs vanishing structure with an insulating core. Take $\eta_{N+1}=-1$; similarly to \eqref{linear system2}, we have \begin{equation}\label{extremeeq} \begin{bmatrix} (1+\delta)^2 & (1+\delta)^4 & (1+\delta)^6 &\cdots & (1+\delta)^{2N}\\ (1+\delta)^4 & (1+\delta)^8 & (1+\delta)^{12} &\cdots & (1+\delta)^{4N}\\ (1+\delta)^6 & (1+\delta)^{12} & (1+\delta)^{18} &\cdots & (1+\delta)^{6N}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ (1+\delta)^{2N} & (1+\delta)^{4N} & (1+\delta)^{6N} &\cdots & (1+\delta)^{2N^2} \end{bmatrix}\begin{bmatrix} \xi_2\\\xi_3\\\xi_4\\ \vdots \\\xi_{N+1} \end{bmatrix}=\begin{bmatrix} 1\\1\\1\\ \vdots \\1 \end{bmatrix}, \end{equation} for which the existence and uniqueness of the conductivity distribution have been established in Theorem \ref{pro radii}. The exact values of the conductivities are characterized by the following theorem. \begin{theorem} Assume that the unique solution $\eta$ determined by \eqref{extremeeq} is continuously differentiable with respect to $\delta$; that is, the solution of $\mathcal{M}_{\delta}(\eta)=0$ can be written as \begin{equation} \eta_k = \eta_k^{(0)}+\eta_k^{(1)}\delta +\mathcal{O}(\delta^2), \quad k=1,\dots,N. \end{equation} Then \begin{equation}\label{eta1} \eta_1=(-1)^{N-1}(1-N(N-1)\delta)+\mathcal{O}(\delta^2), \end{equation} \begin{equation} \eta_k=(-1)^{N-k}+\mathcal{O}(\delta^2),\quad k=2,\dots,N.
\end{equation} \begin{proof} With the help of the Vandermonde determinant, a straightforward computation shows \begin{equation} \begin{aligned} \xi_{N+1}&=(-1)^{N-1}\frac{\prod_{i\neq N}((1+\delta)^{2i}-1)}{\prod_{i\neq N}((1+\delta)^{2N}-(1+\delta)^{2i})}\\ &=(-1)^{N-1}\frac{\prod_{i\neq N}(2i\delta+i(2i-1)\delta^2)}{\prod_{i\neq N}((2N-2i)\delta+(N(2N-1)-i(2i-1))\delta^2)}\\ &=(-1)^{N-1}+(-1)^{N-1}\Big(\sum_{i=1}^{N-1}\Big(\frac{2i-1}{2}-\frac{2(N+i)-1}{2}\Big)\Big)\delta+\mathcal{O}(\delta^2)\\ &=\xi_{N+1}^{(0)}+\xi_{N+1}^{(1)}\delta+\mathcal{O}(\delta^2), \end{aligned}\notag \end{equation} where $\xi_{N+1}^{(0)}=(-1)^{N-1}$ and $\xi_{N+1}^{(1)}=(-1)^NN(N-1)$. Recalling the definition of $\xi_{k}$, we immediately get \eqref{eta1}. Similar calculations yield \begin{equation} \begin{aligned} \xi_{N}&=(-1)^{N-1}\frac{\prod_{i\neq N-1}((1+\delta)^{2i}-1)}{\prod_{i\neq N-1}((1+\delta)^{2N-2}-(1+\delta)^{2i})}\\ &=\xi_{N}^{(0)}+\xi_{N}^{(1)}\delta+\mathcal{O}(\delta^2), \end{aligned}\notag \end{equation} where $\xi_{N}^{(0)}=(-1)^{N-2}N$ and $\xi_{N}^{(1)}=(-1)^{N-1}N(N-1)^2$. We now rewrite $\xi_N$ as \begin{equation} \xi_N = \eta_2+\sum_{i=2}^N\eta_1\eta_i\eta_{i+1}. \end{equation} Noting that $|\eta_k|\leq 1$, $k=1,\dots,N$, we have $\eta_2^{(0)}=(-1)^{N-2}$ and $\eta_1^{(0)}\eta_i^{(0)}\eta_{i+1}^{(0)}=(-1)^{N-2}$. By induction one easily obtains $\eta_k^{(0)}=(-1)^{N-k}$, and thus $(-1)^{N-k}\eta_k^{(1)}\leq 0$. Furthermore, the first-order term can be written as \begin{equation} \begin{aligned} \xi_{N}^{(1)}&=-(N-1)\eta_{1}^{(1)}+\sum_{i=2}^N(-1)^i2 \eta_{i}^{(1)}\\&=(-1)^{N-1}N(N-1)^2+\sum_{i=2}^N(-1)^i2 \eta_{i}^{(1)}, \end{aligned}\notag \end{equation} and therefore $\eta_k^{(1)}=0$, $k=2,\dots,N$. \end{proof} \end{theorem} \begin{remark} We note that the extreme vanishing structure with two layers can also be regarded as the high-conductivity (HC) imperfect interface studied in \cite{kang2019construction}.
Indeed, it has been shown that if \begin{equation}\label{imperfect} \eta_1=1-\frac{2\sigma_2\sigma_0}{\sigma_2-\sigma_0}\delta+\mathcal{O}(\delta^2), \end{equation} then $\Omega$ is a 1-GPTs vanishing structure. For an insulating core with $\sigma_{2}=0$, condition \eqref{imperfect} simplifies to \begin{equation} \eta_1=1+\mathcal{O}(\delta^2), \notag \end{equation} which is consistent with \eqref{eta1}. \end{remark} \section{Numerical illustration} \label{sec:5} \quad \ \ In this section, we present numerical examples to corroborate the theoretical results of the previous sections. We focus on the two-dimensional case; for the three-dimensional case, only the entries of the matrices need to be changed accordingly. In order to find the conductivity combination of an $N$-GPTs vanishing structure, it suffices to solve the nonlinear system \begin{equation}\label{equations} \mathcal{M}_{N}(\eta)=0 \end{equation} by iterative methods. For computational simplicity, we use the formula derived in \cite{FD23,kong2024inverse}, that is, \begin{equation} M_n[\eta]=-2\pi ne^T\Upsilon_{N+1}^{(n)}(P_{N+1}^{(n)}[\eta])^{-1}e, \end{equation} where $e=(1,1,\cdots,1)^T$ and $P_{N+1}^{(n)}[\eta]$, $\Upsilon_{N+1}^{(n)}$ are defined by \eqref{PN} and \eqref{RN}. We note that $P_{N+1}^{(n)}[\eta]$ is invertible for $\|\eta\|_{l^{\infty}}\leq 1$ due to the well-posedness of the elliptic equation. Hence, a simple calculation shows \begin{equation} \frac{\partial{M_n}}{\partial\eta_i}=-\frac{2\pi n}{\eta^2_i}e^T \Upsilon_{N+1}^{(n)}(P_{N+1}^{(n)}[\eta])^{-1}E_{ii}(P_{N+1}^{(n)}[\eta])^{-1}e, \end{equation} where $E_{ii}$ is the matrix whose $(i,i)$ entry is $1$ and whose other entries are $0$. As an illustration, we put $\eta_{N+1}=-1$, i.e., the core is insulated.
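As a sanity check on the explicit representation of Lemma \ref{explicit lemma} (an illustrative sketch, not the solver used for the experiments below; the contrasts and radii are arbitrary test values), the following Python snippet forms $P^{(n)}=\prod_i(I+\Lambda_i^{(n)})$ directly and compares its entries $p_{21}^{(n)}$, $p_{22}^{(n)}$ with the combinatorial sums \eqref{p21} and \eqref{p22}; it also checks the odd symmetry $M_n(\eta)=-M_n(-\eta)$ used in the degree argument:

```python
# Internal consistency check for the explicit CGPT representation:
# p21, p22 from the 2x2 transfer-matrix product P = prod_i (I + Lambda_i)
# must match the combinatorial sums over odd/even-size index combinations.
from itertools import combinations

def transfer_entries(eta, r, n):
    # Multiply P = (I + Lambda_1)(I + Lambda_2)...(I + Lambda_{N+1}), left to right.
    P = [[1.0, 0.0], [0.0, 1.0]]
    for e, rad in zip(eta, r):
        T = [[1.0, e * rad ** (-2 * n)], [e * rad ** (2 * n), 1.0]]
        P = [[sum(P[i][k] * T[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return P[1][0], P[1][1]                   # p21, p22

def combinatorial_p21_p22(eta, r, n):
    # p21: odd-size combinations, exponent pattern +2n, -2n, +2n, ...
    # p22: 1 + even-size combinations with the same alternating pattern.
    m = len(eta)
    p21, p22 = 0.0, 1.0
    for size in range(1, m + 1):
        for idx in combinations(range(m), size):
            term = 1.0
            for s, i in enumerate(idx):       # s = 0, 1, 2, ...
                term *= eta[i] * r[i] ** ((-1) ** s * 2 * n)
            if size % 2 == 1:
                p21 += term
            else:
                p22 += term
    return p21, p22

eta = [0.3, -0.7, 0.5, -1.0]                  # test contrasts, |eta_k| <= 1
r = [2.0, 1.6, 1.3, 1.0]                      # decreasing test radii
for n in (1, 2, 3):
    a21, a22 = transfer_entries(eta, r, n)
    b21, b22 = combinatorial_p21_p22(eta, r, n)
    assert abs(a21 - b21) < 1e-9 * max(1.0, abs(b21))
    assert abs(a22 - b22) < 1e-9 * max(1.0, abs(b22))
    # p21 is odd and p22 is even in eta, so M_n(-eta) = -M_n(eta).
    c21, c22 = transfer_entries([-e for e in eta], r, n)
    assert abs(c21 + a21) < 1e-9 and abs(c22 - a22) < 1e-9
```

The agreement of the two computations reflects the fact that the $(2,1)$ and $(2,2)$ entries of each product of anti-diagonal factors $\Lambda_{i_s}^{(n)}$ carry the alternating exponents $(-1)^{s+1}2n$.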
To solve \eqref{equations} under the constraint $\|\eta\|_{l^{\infty}}\leq 1$, we use a projected Newton iteration. The well-known Newton direction is given by \begin{equation} p_{N}=-(\nabla \mathcal{M}(\eta))^\dagger\mathcal{M}^T(\eta), \end{equation} where $(\nabla\mathcal{M}(\eta))^\dagger$ is the pseudo-inverse of $\nabla\mathcal{M}(\eta)$. Since the projected Newton iteration is not always a descent iteration, we use the projected gradient iteration to ensure convergence, with the gradient direction given by \begin{equation} p_{G}=-(\nabla \mathcal{M}(\eta))^T\mathcal{M}^T(\eta). \end{equation} In each iteration, we first identify a descent point along the Newton direction and then project it onto the set $\|\eta\|_{l^{\infty}}\leq 1$. Next, we check whether the projected point is a descent point; if it is not, we repeat the process with the gradient direction to ensure that the iteration remains a descent iteration for the target. Due to the local second-order convergence of Newton's method, our iterative approach attains an accurate solution quickly. First, we consider the case where the gaps between adjacent layers are equidistant. Set \begin{equation}\label{radii1} r_i=2-(i-1)/N,\quad i=1,2,\dots,N+1, \end{equation} such that $r_1=2$ and $r_{N+1}=1$. In Figure \ref{fig:GPTva1}, we show the conductivity distribution of the GPTs vanishing structure designed by \eqref{radii1}, where $N=8$ and $\sigma_{N+1}=0$. It can be seen that the conductivities of adjacent layers exhibit an oscillatory pattern; mathematically, $\sigma_{j}< \sigma_{j-1},\sigma_{j+1}$ for odd $j$ and $\sigma_{j}> \sigma_{j-1},\sigma_{j+1}$ for even $j$. The oscillation becomes more pronounced as the layers approach the core and milder as they approach the background. Moreover, the conductivity distribution also exhibits monotonicity within the layers: the conductivity increases, slowly at first and then rapidly, toward the core in the even layers and decreases in the odd layers.
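The projected Newton procedure described above can be sketched in a few lines of Python. The sketch below is a minimal illustration, not the implementation used for the figures: it solves the reduced system $p_{21}^{(n)}(\eta)=0$, $n=1,\dots,N$ (equivalent to $\mathcal{M}_N(\eta)=0$ when $p_{22}^{(n)}\neq 0$), for an assumed small example with $N=2$, $\eta_3=-1$ and radii $(r_1,r_2,r_3)=(3,2,1)$, using a finite-difference Jacobian and projection onto $[-1,1]^N$:

```python
# Projected Newton iteration for the N-GPTs vanishing condition
# p21^(n)(eta) = 0, n = 1..N, with eta_{N+1} = -1 fixed (insulated core).
# Illustrative setting: N = 2, radii r = (3, 2, 1).

r = (3.0, 2.0, 1.0)
N = 2

def residual(eta):
    # p21^(n) for the 3-layer structure with eta_3 = -1, n = 1..N.
    e1, e2, e3 = eta[0], eta[1], -1.0
    out = []
    for n in range(1, N + 1):
        singles = e1 * r[0] ** (2 * n) + e2 * r[1] ** (2 * n) + e3 * r[2] ** (2 * n)
        triple = e1 * e2 * e3 * (r[0] * r[2] / r[1]) ** (2 * n)
        out.append(singles + triple)
    return out

def jacobian(eta, h=1e-7):
    # Forward finite-difference approximation of the N x N Jacobian.
    J = [[0.0] * N for _ in range(N)]
    f0 = residual(eta)
    for j in range(N):
        pert = list(eta)
        pert[j] += h
        fj = residual(pert)
        for i in range(N):
            J[i][j] = (fj[i] - f0[i]) / h
    return J

def solve2(J, b):
    # Solve a 2x2 linear system by Cramer's rule.
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - b[1] * J[0][1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

eta = [0.0, 0.0]
for _ in range(50):
    f = residual(eta)
    step = solve2(jacobian(eta), [-fi for fi in f])
    # Project the Newton update back onto the admissible box [-1, 1]^N.
    eta = [max(-1.0, min(1.0, x + s)) for x, s in zip(eta, step)]

assert max(abs(fi) for fi in residual(eta)) < 1e-8
assert all(abs(x) <= 1.0 for x in eta)
```

For these assumed radii the iteration converges rapidly to an admissible root with $\eta_1\approx-0.0645$, consistent with the local quadratic convergence noted above; the gradient-direction fallback of the full scheme is omitted here since no step leaves the box in this example.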
\begin{figure} \includegraphics[width=1\textwidth]{GPTva1.jpg} \captionsetup{font={small}} \caption{\label{fig:GPTva1} The left figure shows the conductivity distribution of the GPTs vanishing structure with \eqref{radii1}, and the right figure shows the values of the CGPTs generated by this distribution; $N=8$ and $\sigma_{N+1}=0$.} \end{figure} Next, we consider the case where the radii of the layers increase by the same factor $\gamma$, which is studied in Section \ref{sec:proportional}, that is, \begin{equation}\label{radii2} r_{i-1} = \gamma r_{i},\quad i=2,3,\dots,N+1. \end{equation} For ease of comparison, we again set $r_1=2$ and $r_{N+1}=1$. The proportional radius setting makes the inner layers thinner and, relatively, the outer layers thicker. Figure \ref{fig:GPTva2} presents the conductivity distribution of the $N$-GPTs vanishing structure designed by \eqref{radii2}, where $N=8$ and $\sigma_{N+1}=0$. The oscillation and monotonicity phenomena are likewise preserved in this configuration. Moreover, one can also see that the conductivity oscillation is more pronounced in the thinner layers and milder in the thicker layers. \begin{figure} \includegraphics[width=1\textwidth]{GPTva2.jpg} \captionsetup{font={small}} \caption{\label{fig:GPTva2} The left figure shows the conductivity distribution of the GPTs vanishing structure with \eqref{radii2}, and the right figure shows the values of the CGPTs generated by this distribution; $N=8$ and $\sigma_{N+1}=0$.} \end{figure} Finally, we verify the conclusion for the extreme case obtained in Subsection \ref{subsec:extreme}. Take $\delta = 0.01$ and therefore $\gamma = 1.01$. Table \ref{table:example 1} exhibits the conductivity values $\sigma_k$ and contrast parameters $\eta_k$ of the $N$-GPTs vanishing structure in the extreme case, where $N=8$. The numerical results in the table correspond to our asymptotic conclusion. Moreover, the coatings designed to achieve cloaking are quite thin; accordingly, the conductivities become very large.
Indeed, this is also physically justifiable. \begin{table}[ht] \renewcommand{\arraystretch}{1} \caption{The GPTs vanishing structure designed in the extreme case.} \label{table:example 1} \centering \tiny \resizebox{1\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multicolumn{6}{c}{$N=8$} \\ \midrule $\eta_k$ & \textasciitilde & -0.4485 & 0.9215 & -0.9844 & 0.9944 \\ $M_k$& \textasciitilde & -1.5632e-13 & 2.5935e-13 & 0 & 1.4388e-13 \\ $\sigma_k$&1 & 0.3436 & 8.4117 & 0.0661 & 23.3907\\ \midrule $\eta_k$&-0.9975 & 0.9989 & -0.9996 & 0.9999 & -1 \\ $M_k$& 1.2790e-13 & 2.5757e-13 & 7.4607e-14 & 1.0000e-15 & 1.6874e-4 \\ $\sigma_k$&0.0288 & 52.6041 & 0.0112 & 188.4053 & 0\\ \bottomrule \end{tabular} } \end{table} It is worth mentioning that the CGPTs are highly sensitive in the extreme case because $p_{22}^{(n)}$, defined by \eqref{p22}, becomes very small. This near-singular behavior greatly affects the stability of the numerical iteration. To overcome this, we divide the interval $\eta_{N+1}\in [-1,0]$ into $d$ equal parts, that is, $0=\eta_{N+1}^1>\eta_{N+1}^2> \dots>\eta_{N+1}^d=-1$. We solve for an accurate solution with fixed $\eta_{N+1}^j$ and take it as the initial value for $\eta_{N+1}^{j+1}$. This preprocessing technique is a practical method for solving for high-order GPTs vanishing structures. \section*{Acknowledgements} The work of Y. Deng was supported by NSFC-RGC Joint Research Grant No. 12161160314. \section*{Data Availability Statement} No data was used for the research described in the article. \section*{Conflict of interest statement} The authors declare that they have no conflicts of interest related to this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted. \small \begin{thebibliography}{00} \bibitem{ammari2013spectral} {H. Ammari, G. Ciraolo, H. Kang, H. Lee, and G. W.
Milton}, Spectral theory of a Neumann–Poincaré-type operator and analysis of cloaking due to anomalous localized resonance, Arch. Rat. Mech. Anal., 208 (2013), 667-692. \bibitem{ammari2014reconstruction2} {H. Ammari, Y. Deng, H. Kang, and H. Lee}, Reconstruction of inhomogeneous conductivities via the concept of generalized polarization tensors, A. I. H. Poincar\'e-AN., 31 (2014), 877–897. \bibitem{ammari2013mathematical} {H. Ammari, J. Garnier, W. Jing, H. Kang, M. Lim, K. Sølna, and H. Wang}, Mathematical and statistical methods for multistatic imaging, 2098, Springer, 2013. \bibitem{ammari2007polarization} {H. Ammari and H. Kang}, Polarization and moment tensors: with applications to inverse problems and effective medium theory, 162, Springer Science, 2007. \bibitem{ammari2005reconstruction} {H. Ammari, H. Kang, E. Kim, and M. Lim}, Reconstruction of closely spaced small inclusions, SIAM J. Numer. Anal., 42 (2005), 2408–2428. \bibitem{ammari2013enhancement1} {H. Ammari, H. Kang, H. Lee, and M. Lim}, Enhancement of near cloaking using generalized polarization tensors vanishing structures. Part I: The conductivity problem, Commun. Math. Phys., 317 (2013), 253–266. \bibitem{ammari2013enhancement2} {H. Ammari, H. Kang, H. Lee, and M. Lim}, Enhancement of near-cloaking. Part II: The Helmholtz equation, Commun. Math. Phys., 317 (2013), 485–502. \bibitem{astala} {K. Astala, M. Lassas, and L. Päivärinta}, Calderón's inverse problem for anisotropic conductivity in the plane, Comm. Part. Diff. Equa., 30 (2004), 207–224. \bibitem{astala2006calderon} {K. Astala and L. Päivärinta}, Calderón's inverse conductivity problem in the plane, Ann. Math., (2006), 265–299. \bibitem{bao2014nearly} {G. Bao, H. Liu, and J. Zou}, Nearly cloaking the full Maxwell equations: cloaking active contents with general conducting layers, J. Math. Pures. Appl., 101 (2014), 716–733. \bibitem{benveniste1999neutral} {Y. Benveniste and T. Miloh}, Neutral inhomogeneities in conduction phenomena, J.
Mech. Phys. Solids, 47 (1999), 1873–1892. \bibitem{blaasten2020recovering} {E. Blåsten and H. Liu}, Recovering piecewise constant refractive indices by a single far-field pattern, Inverse Problems, 36 (2020), 085005. \bibitem{bruhl2003direct} {M. Brühl, M. Hanke, and M. S. Vogelius}, A direct impedance tomography algorithm for locating small inhomogeneities, Numer. Math., 93 (2003), 635–654. \bibitem{choi2023geometric} {D. Choi, J. Kim, and M. Lim}, Geometric multipole expansion and its application to semineutral inclusions of general shape, Z. Angew. Math. Phys., 74 (2023), 39. \bibitem{choi2024construction} {D. Choi and M. Lim}, Construction of inclusions with vanishing generalized polarization tensors by imperfect interfaces, Stud. Appl. Math., 152 (2024), 673–695. \bibitem{ciarlet2013linear} {P. G. Ciarlet}, Linear and nonlinear functional analysis with applications, SIAM, (2013). \bibitem{DLU171} Y. Deng, H. Liu and G. Uhlmann, On regularized full- and partial-cloaks in acoustic scattering, Commun. Part. Differential Eq., 42 (6) (2017), 821-851. \bibitem{DLU172} Y. Deng, H. Liu and G. Uhlmann, Full and partial cloaking in electromagnetic scattering, Arch. Ration. Mech. Anal., 223 (1)(2017), 265-299. \bibitem{deng2024identifying} {Y. Deng, H. Liu, and Y. Wang}, Identifying Active Anomalies in a Multilayered Medium by Passive Measurement in EIT, SIAM J. Appl. Math., 84 (2024), 1362–1384. \bibitem{FD23} X. Fang and Y. Deng, On plasmon modes in multi-layer structures, Math. Meth. Appl. Sci. 46 (17) (2023), 18075-18095. \bibitem{feng2017construction} {T. Feng, H. Kang, and H. Lee}, Construction of gpt-vanishing structures using shape derivative, J. Comput. Phys., (2017), 569–585. \bibitem{greenleaf} {A. Greenleaf, M. Lassas, and G. Uhlmann}, On nonuniqueness for calderon’s inverse problem, Math. Res. Lett., 10 (2003), 685–693. \bibitem{jarczyk2012neutral} {P. Jarczyk and V. Mityushev}, Neutral coated inclusions of finite conductivity, Proc. Math. Phys. Eng. 
Sci., 468 (2012), 954–970. \bibitem{ji2021neutral} {Y.-G. Ji, H. Kang, X. Li, and S. Sakaguchi}, Neutral inclusions, weakly neutral inclusions, and an over-determined problem for confocal ellipsoids, in Geometric Properties for Parabolic and Elliptic PDE’s, Springer, 2021, 151–181. \bibitem{kang2019construction} {H. Kang and X. Li}, Construction of weakly neutral inclusions of general shape by imperfect interfaces, SIAM J. Appl. Math., 79 (2019), 396–414. \bibitem{kang2021polarization} {H. Kang, X. Li, and S. Sakaguchi}, Polarization tensor vanishing structure of general shape: Existence for small perturbations of balls, Asymptot. Anal., 125 (2021), 101–132. \bibitem{kang2022existence} {H. Kang, X. Li, and S. Sakaguchi}, Existence of weakly neutral coated inclusions of general shape in two dimensions, Appl. Anal., 101 (2022), 1330–1353. \bibitem{kohn2010cloaking} {R. V. Kohn, D. Onofrei, M. S. Vogelius, and M. I. Weinstein}, Cloaking via change of variables for the helmholtz equation, Commun. Pure. Appl. Math., 63 (2010), 973–1016. \bibitem{kohn2008cloaking} {R. V. Kohn, H. Shen, M. S. Vogelius, and M. I. Weinstein}, Cloaking via change of variables in electric impedance tomography, Inverse Problems, 24 (2008), 015016. \bibitem{kong2024inverse} {L. Kong, Y. Deng, and L. Zhu}, Inverse conductivity problem with one measurement: uniqueness of multi-layer structures, Inverse Problems, 40 (2024), 085005. \bibitem{kong2024enlargement} {L. Kong, L. Zhu, Y. Deng, and X. Fang}, Enlargement of the localized resonant band gap by using multi-layer structures, J. Comput. Phys., 518 (2024), 113308. \bibitem{leonhardt2009broadband} {U. Leonhardt and T. Tyc}, Broadband invisibility by non-euclidean cloaking, Science, 323 (2009), 110–112. \bibitem{li2016literature} {J. Li}, A literature survey of mathematical study of metamaterials, Int. J. Numer. Anal. Model, 13 (2016), 230–243. 
\bibitem{LLRU15} Li, J., Liu, H., Rondi, L., Uhlmann, G., Regularized transformation-optics cloaking for the Helmholtz equation: from partial cloak to full cloak. Commun. Math. Phys. 335, 671–712 (2015) \bibitem{lim2020inclusions} {M. Lim and G. W. Milton}, Inclusions of general shapes having constant field inside the core and nonelliptical neutral coated inclusions with anisotropic conductivity, SIAM J. Appl. Math., 80 (2020), 1420–1440. \bibitem{nachman1996global} {A. I. Nachman}, Global uniqueness for a two-dimensional inverse boundary value problem, Ann. Math., (1996), 71–96. \bibitem{nguyen2016cloaking} {H.-M. Nguyen}, Cloaking using complementary media in the quasistatic regime, ANN. I. H. POINCARE-AN., 33 (2016), 1509–1518. \bibitem{pham2018solutions} {D. C. Pham}, Solutions for the conductivity of multi-coated spheres and spherically symmetric inclusion problems, Z. Angew. Math. Phys., 69 (2018), 1–14. \bibitem{schiffer1949virtual} {M. Schiffer and G. Szegö}, Virtual mass and polarization, Trans. Am. Math. Soc., 67 (1949), 130–205. \bibitem{wang2013neutral} {X. Wang and P. Schiavone}, A neutral multi-coated sphere under non-uniform electric field in conductivity, Z. Angew. Math. Phys., 64 (2013), 895–903. \bibitem{you2020combined} {L. You, J. Man, K. Yan, D. Wang, and H. Li}, Combined fourier-wavelet transforms for studying dynamic response of anisotropic multi-layered flexible pavement with linear-gradual interlayers, Appl. Math. Model, 81 (2020), 559–581. \bibitem{Ze86} E. Zeidler, Nonlinear functional analysis and its applications: Fixed point theorems, Springer- Verlag, New York, 1986. \end{thebibliography} \end{document}
% arXiv:2412.18841v1, http://arxiv.org/abs/2412.18841v1
% Splitting the difference: Computations of the Reynolds operator in classical invariant theory
\documentclass[11pt]{amsart} \usepackage[dvipsnames]{xcolor} \usepackage{amssymb,amsmath,amsthm,enumerate,mathtools,mathptmx} \usepackage[new]{old-arrows} \usepackage{tikz-cd} \usepackage[utf8]{inputenc} \usepackage{hyperref} \hypersetup{ colorlinks = true, linkcolor = BrickRed, citecolor = Green, urlcolor = blue, filecolor = red, } \usepackage{cleveref} \usepackage{enumitem} \usepackage[margin=0.9in]{geometry} \usepackage{parskip} \usepackage[backend=biber,style=alphabetic,doi=false,isbn=false,url=false,eprint=false,maxbibnames=5,minbibnames=5,mincitenames=5,maxcitenames=5,maxalphanames=5,minalphanames=5,backref=true]{biblatex} \addbibresource{../refs.bib} \DeclareFieldFormat{extraalpha}{#1} \DeclareLabelalphaTemplate{ \labelelement{ \field[final]{shorthand} \field{label} \field[strwidth=2,strside=left,ifnames=1]{labelname} \field[strwidth=1,strside=left]{labelname} } } \DefineBibliographyStrings{english}{ backrefpage={}, backrefpages={} } \renewcommand{\finentrypunct}{} \usepackage{xpatch} \DeclareFieldFormat{backrefparens}{\addperiod#1} \xpatchbibmacro{pageref}{parens}{backrefparens}{}{} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{identity}[thm]{Identity} \theoremstyle{definition} \newtheorem{rem}[thm]{Remark} \newtheorem{defn}[thm]{Definition} \newtheorem{example}[thm]{Example} \numberwithin{equation}{section} \crefname{thm}{theorem}{theorems} \crefname{rem}{remark}{remarks} \crefname{prop}{proposition}{propositions} \crefname{lem}{lemma}{lemmas} \crefname{identity}{identity}{identities} \crefname{equation}{}{} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\SU}{SU} \DeclareMathOperator{\UU}{U} \DeclareMathOperator{\OO}{O} \DeclareMathOperator{\GG}{G} \DeclareMathOperator{\Sp}{Sp} \DeclareMathOperator{\SpU}{SpU} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\Pf}{Pf}
\DeclareMathOperator{\chr}{char} \DeclareMathOperator{\adj}{adj} \newcommand{\md}[1]{{\left\lvert #1 \right\lvert}} \newcommand{\deff}[1]{{\color{blue}#1}} \newcommand{\into}{\longhookrightarrow} \DeclareRobustCommand{\onto}{\relbar\joinrel\twoheadrightarrow} \newcommand{\tr}{\operatorname{tr}} \newcommand{\dG}{{\mathrm{d}}G} \newcommand{\Sage}{\texttt{SageMath}} \let\emptyset\varnothing \let\subset\subseteq \let\supset\supseteq \let\ge\geqslant \let\le\leqslant \let\mapsto\longmapsto \let\to\longrightarrow \setcounter{tocdepth}{1} \begin{document} \title[Splitting the difference]{Splitting the difference: Computations of the Reynolds operator \\ in classical invariant theory} \author{Aryaman Maithani} \address{Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT~84112, USA} \email{[email protected]} \thanks{The author was supported by NSF grants DMS 2101671 and DMS 2349623.} \subjclass[2020]{Primary 13A50; Secondary 13P99, 14L24, 14L35.} \keywords{Reynolds operator, ring of invariants, classical groups, linearly reductive groups.} \begin{abstract} If $G$ is a linearly reductive group acting rationally on a polynomial ring $S$, then the inclusion $S^{G} \into S$ possesses a unique $G$-equivariant splitting, called the Reynolds operator. We describe algorithms for computing the Reynolds operator for the \emph{classical actions} as in Weyl's book. The groups are the general linear group, the special linear group, the orthogonal group, and the symplectic group, with their classical representations: direct sums of copies of the standard representation and copies of the dual representation. \end{abstract} \maketitle {\setlength{\parskip}{0em} \tableofcontents} \section{Introduction} \label{sec:introduction} Consider a group $G$ acting on a ring $S$ by ring automorphisms. 
The \deff{ring of invariants} for this group action is defined as \begin{equation*} S^{G} \coloneqq \{s \in S : g(s) = s \ \text{for all} \ g \in G\}, \end{equation*} i.e., $S^{G}$ is the subring of elements that are fixed by each group element. We have the inclusion of rings \begin{equation} \label{eq:inclusion} S^{G} \into S. \end{equation} The above is also then an inclusion of $S^{G}$-modules. A natural question to ask is whether~\Cref{eq:inclusion} splits in the category of $S^{G}$-modules---in which case $S^{G}$ is a direct summand of $S$. A positive answer to this question often implies good properties about the subring; for example, a direct summand of a noetherian ring is again noetherian. A deeper result is the Hochster--Roberts theorem~\Cite{HochsterRoberts}, which states that a direct summand of a polynomial ring is Cohen--Macaulay. The inclusion~\Cref{eq:inclusion} does not always split; a simple example is the alternating group $A_{3}$ acting on $\mathbb{F}_{3}[x, y, z]$ by permuting the variables. A more dramatic example was given by \Citeauthor{Nagarajan}~\Cite{Nagarajan} where a group of order two acts on a regular ring for which the ring of invariants is not noetherian. For finite groups, a simple condition that ensures the existence of a splitting is having order invertible in $S$; the inclusion~\Cref{eq:inclusion} then splits with an $S^{G}$-linear splitting given by \begin{equation*} s \mapsto \frac{1}{\md{G}} \sum_{g \in G} g(s). \end{equation*} The above is the \emph{Reynolds operator} and has the additional property of being \emph{$G$-equivariant} (\Cref{defn:splitting}). In this paper, our groups of interest are certain linear algebraic groups over a field $k$, i.e., Zariski-closed subgroups of $\GL_{n}(k)$. If such a group $G$ acts (rationally) on a $k$-vector space $V$, then we get a (rational) degree-preserving $k$-algebra action of $G$ on the polynomial ring $S \coloneqq \Sym(V)$.
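For a finite group, the Reynolds operator is literal averaging over the group, and it is easy to run in code. The following is a minimal sketch in Python with sympy; the helper \texttt{reynolds} and the choice of the symmetric group $S_3$ permuting the variables of $\mathbb{Q}[x, y, z]$ are ours, for illustration.

```python
from itertools import permutations

import sympy as sp

x, y, z = sp.symbols('x y z')
VARS = (x, y, z)

def reynolds(f):
    """Average f over all permutations of the variables: the Reynolds
    operator for the symmetric group S_3 acting on Q[x, y, z]."""
    orbit = [f.subs(list(zip(VARS, p)), simultaneous=True)
             for p in permutations(VARS)]
    return sp.expand(sum(orbit) / len(orbit))

# A non-invariant is sent to a symmetric polynomial ...
print(reynolds(x**2 * y))
# ... while an invariant is fixed, as any splitting must satisfy.
print(reynolds(x + y + z))
```

Applying it to $x^2 y$ returns the symmetrization $\frac{1}{6}(x^2 y + x^2 z + x y^2 + y^2 z + x z^2 + y z^2)$, and every symmetric polynomial is returned unchanged.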
Hilbert's fourteenth problem asked if $S^{G}$ is always a finitely generated $k$-algebra---a question answered in the negative by \Citeauthor{Nagata14th}~\Cite{Nagata14th} by giving an example where $S^{G}$ is not noetherian. For linear algebraic groups, the analogue to having invertible order is to be \emph{linearly reductive}. These groups admit a similar Reynolds operator, see \Cref{thm:linearly-reductive-reynolds-unique-linear}; in particular, the inclusion~\Cref{eq:inclusion} splits $G$-equivariantly and $S^{G}$-linearly. We focus on the following titular \emph{classical groups} of Weyl's book~\Cite{WeylClassical}: the general linear group $\GL_{n}(k)$, the special linear group $\SL_{n}(k)$, the orthogonal group $\OO_{n}(k)$, and the symplectic group $\Sp_{2n}(k)$. As in the book, we look at their classical actions, corresponding to the direct sum of copies of the standard representation and possibly copies of the dual representation. We record the rings of invariants for some of these actions in \Cref{thm:classical-invariants}. This includes infinite fields of positive characteristic as in~\Cite{ConciniProcesiCharacteristicFree, Hashimoto:AnotherProof}. There is, however, a stark difference between characteristics zero and positive: if $k$ is a field of characteristic zero, then the groups listed above are all linearly reductive. This is typically not the case in positive characteristic wherein these groups admit representations for which the ring of invariants is not Cohen--Macaulay~\Cite{Kohls:NonCM}. Moreover---while the classical rings of invariants continue to be Cohen--Macaulay even in positive characteristic---the inclusion~\Cref{eq:inclusion} is rarely split~\Cite{HochsterJeffriesPandeySingh}. This has the interesting consequence that given any splitting over $\mathbb{Q}$, every prime must appear in the denominator of the image of any basis; see \Cref{rem:primes-in-denominators} for a precise statement. 
For the most part, we consider these classical groups in characteristic zero. Because these are then linearly reductive, the inclusion~\Cref{eq:inclusion} splits. We give an algorithm for explicitly computing the Reynolds operator in each case in terms of certain integrals of monomial functions. We do this by reducing the computation to one over a compact Lie group, in which case we may integrate with respect to the Haar measure akin to averaging over a finite group. Methods to compute these integrals are of interest in mathematical physics due to their important role in areas such as mesoscopic transport, quantum chaos, and quantum information and decoherence. This interest has led to the development of various algorithms---such as the \emph{invariant method} and the \emph{column vector method}---to compute these integrals; see the introduction of~\Cite{GorinLopez} for more on this topic. We remark that there are conditions weaker than having invertible order or being linearly reductive that imply finite generation of $S^{G}$. Indeed, Noether~\Cite{Noether:Invariants} showed that if $G$ is a finite group acting on a finitely generated $k$-algebra $S$ by $k$-algebra automorphisms, then $S^{G}$ is a finitely generated $k$-algebra. Similarly, \Citeauthor{Haboush:Reductive}~\Cite{Haboush:Reductive} proved that if $G$ is a \emph{reductive group} acting rationally on a finitely generated $k$-algebra $S$, then $S^{G}$ is finitely generated. While the classical groups are no longer linearly reductive in positive characteristic, they continue to be reductive, and hence the invariant subrings are known to be finitely generated. The paper is arranged as follows. After setting up the notations and definitions in \Cref{sec:basic-notions}, we define the classical group actions in \Cref{sec:classical-group-actions} and record the rings of invariants. In \Cref{sec:linearly-reductive}, we recall the relevant facts about linearly reductive groups. 
\Cref{sec:splitting-over-lie-group} discusses the computation of the Reynolds operator for a compact Lie group. We discuss facts about the Haar measure and set up the required machinery to integrate functions that take values in polynomial rings. \Cref{sec:reynolds-classical} begins by describing how the computation of the Reynolds operator for a classical group over an arbitrary field of characteristic zero can be reduced to that for a compact Lie group. With this reduction in place, we then give algorithms that one may implement on a computer algebra system. We make use of these algorithms in \Cref{sec:explicit-formulae} to provide explicit formulae for the Reynolds operators for the $\SL$ and $\GL$ actions. These algorithms have been implemented in \Sage~\Cite{sagemath}, and we note some conjectures arising out of these computations. Lastly, we compare with the situation in positive characteristic in \Cref{sec:positive-characteristic}. \section{Notations and definitions} \label{sec:basic-notions} The letter $k$ will denote a field. For $n \ge 1$, $\mathbb{A}_{k}^{n}$ denotes the topological space $k^{n}$ with the Zariski topology. We recall the following classical groups of invertible matrices. \begin{enumerate}[label=(\alph*)] \item (General linear group) $\GL_{n}(k)$ is the group of $n \times n$ invertible matrices over $k$. \item (Special linear group) $\SL_{n}(k) \coloneqq \{M \in \GL_{n}(k) : \det(M) = 1\}$. \item (Orthogonal group) $\OO_{n}(k) \coloneqq \{M \in \GL_{n}(k) : M^{\tr} M = I_{n}\}$, where $I_{n}$ denotes the identity matrix. \item (Symplectic group) $\Sp_{2n}(k) \coloneqq \{M \in \GL_{2n}(k) : M^{\tr} \Omega M = \Omega\}$, where $\Omega \coloneqq \left( \begin{smallmatrix} O & I_{n} \\ -I_{n} & O \\ \end{smallmatrix} \right)$. \end{enumerate} When the field $k$ is taken to be the complex numbers, we have the following additional subgroups. 
\begin{enumerate}[label=(\alph*), resume] \item (Unitary group) $\UU_{n}(\mathbb{C}) \coloneqq \{U \in \GL_{n}(\mathbb{C}) : U U^{\ast} = I_{n}\}$, where $U^{\ast}$ denotes the conjugate transpose of $U$. \item (Special unitary group) $\SU_{n}(\mathbb{C}) \coloneqq \UU_{n}(\mathbb{C}) \cap \SL_{n}(\mathbb{C})$. \item (Symplectic unitary group) $\SpU_{2n}(\mathbb{C}) \coloneqq \UU_{2n}(\mathbb{C}) \cap \Sp_{2n}(\mathbb{C})$. \end{enumerate} All the above groups inherit the subspace topology from $\mathbb{A}_{k}^{n^{2}}$, and we refer to this as the Zariski topology. These are all topological groups---though typically not Hausdorff---because the product and inversion functions are continuous in the Zariski topology, being given by rational functions in the entries of the matrices. When $k = \mathbb{C}$, these groups also have the Euclidean topology and moreover are smooth submanifolds of $\mathbb{C}^{n^{2}}$. In this case, the product and inversion functions are smooth; hence, these are all Lie groups. \begin{defn} \label{defn:splitting} Let $G$ be a group acting by ring automorphisms on a ring $S$. A \deff{splitting} for the inclusion $S^{G} \into S$ is an additive function $\mathcal{R} \colon S \to S^{G}$ such that $\mathcal{R}(r) = r$ for all $r \in S^{G}$. The splitting is \deff{$G$-equivariant} if $\mathcal{R}(g(s)) = \mathcal{R}(s)$ for all $g \in G$ and $s \in S$. The splitting is \deff{$S^{G}$-linear} if $\mathcal{R}(rs) = r \mathcal{R}(s)$ for all $r \in S^{G}$ and $s \in S$. \end{defn} \section{The classical group actions} \label{sec:classical-group-actions} Let $k$ be a field, and $t$, $m$, $n$ be positive integers. We use the notation \begin{equation*} k[Y_{t \times n}] \coloneqq k[y_{ij} : 1 \le i \le t,\, 1 \le j \le n], \end{equation*} i.e., $k[Y_{t \times n}]$ is a polynomial ring over $k$ in $tn$ variables. Once the dimensions have been specified, we write $k[Y]$ for brevity. We use the letter $Y$ for the $t \times n$ matrix $[y_{ij}]_{i, j}$. 
The notation naturally extends to $k[X_{m \times t}, Y_{t \times n}]$. Let $G$ be one of the groups $\GL_{t}(k)$, $\SL_{t}(k)$, $\OO_{t}(k)$, or $\Sp_{t}(k)$, where for the last case, we assume that $t$ is even. We will consider the following two types of rational actions of $G$. \begin{enumerate}[label=(R\arabic*)] \item \label{item:standard-action} The group $G$ acts on $k[Y_{t \times n}]$, where the action of $M \in G$ is given by \begin{equation*} M \colon Y \mapsto M Y; \end{equation*} by the above, we mean that $[Y]_{ij} \mapsto [MY]_{ij}$. \item \label{item:standard-dual-action} The group $G$ acts on $k[X_{m \times t}, Y_{t \times n}]$, where the action of $M \in G$ is given by \begin{equation*} M \colon \begin{cases} X \mapsto X M^{-1}, \\ Y \mapsto M Y. \end{cases} \end{equation*} \end{enumerate} The first action corresponds to the direct sum of $n$ copies of the standard representation, whereas the second has an additional $m$ copies of the dual representation. We will describe the splittings for all of these actions. We recall below the \emph{classical rings of invariants} as in Weyl's book~\Cite{WeylClassical} where they were originally discussed in characteristic zero. A characteristic-free proof of the following theorem can be found in~\Cite{ConciniProcesiCharacteristicFree, Hashimoto:AnotherProof}. \begin{thm} \label{thm:classical-invariants} Let $k$ be an infinite field. With the above actions, we have the following rings of invariants. \begin{enumerate}[label=(\alph*)] \item (General linear group) For positive integers $t$, $m$, $n$, the equality \begin{equation*} k[X_{m \times t}, Y_{t \times n}]^{\GL_{t}(k)} = k[XY] \end{equation*} holds, i.e., the invariant ring is generated, as a $k$-algebra, by the entries of the matrix product $XY$. 
\item (Special linear group) For positive integers $t$, $n$ with $t \le n$, the equality \begin{equation*} k[Y_{t \times n}]^{\SL_{t}(k)} = k[\text{size $t$ minors}] \end{equation*} holds, i.e., the invariant ring is generated, as a $k$-algebra, by the size $t$ minors of the matrix $Y$. \item (Orthogonal group) For positive integers $t$, $n$ and $\chr(k) \neq 2$, the equality \begin{equation*} k[Y_{t \times n}]^{\OO_{t}(k)} = k[Y^{\tr} Y] \end{equation*} holds, i.e., the invariant ring is generated, as a $k$-algebra, by the entries of the matrix product $Y^{\tr} Y$. \item (Symplectic group) For positive integers $t$, $n$, the equality \begin{equation*} k[Y_{2t \times n}]^{\Sp_{2t}(k)} = k[Y^{\tr} \Omega Y] \end{equation*} holds, i.e., the invariant ring is generated, as a $k$-algebra, by the entries of the matrix product $Y^{\tr} \Omega Y$. \end{enumerate} \end{thm} \begin{rem} For each of the above actions, the fixed subring is of independent interest for the reasons described below. We denote the invariant subring in the respective cases by $R$. \begin{enumerate}[label=(\alph*)] \item (General linear group) The ring $R$ is isomorphic to the determinantal ring $k[Z_{m \times n}]/I_{t + 1}(Z)$, where $I_{t + 1}(Z)$ is the ideal generated by the size $t + 1$ minors of $Z$. \item (Special linear group) The ring $R$ is the Pl\"ucker coordinate ring of the Grassmannian of $t$-dimensional subspaces of an $n$-dimensional space. \item (Orthogonal group) The ring $R$ is isomorphic to $k[Z]/I_{t + 1}(Z)$, where $Z$ is an $n \times n$ symmetric matrix of indeterminates. \item (Symplectic group) The ring $R$ is isomorphic to $k[Z]/\Pf_{2t + 2}(Z)$, where $Z$ is an $n \times n$ alternating matrix of indeterminates, and $\Pf_{2t + 2}(Z)$ the ideal generated by its principal $2t + 2$-Pfaffians. \end{enumerate} \end{rem} \section{Linearly reductive groups} \label{sec:linearly-reductive} This section contextualises our results with the broader theory of linearly reductive groups. 
For the most part, this is only for theoretical interest, as we will compute the Reynolds operator concretely by integrating over a compact Lie group. For an introduction to linear algebraic groups and rational actions, we refer the reader to one of~\Cite{FogartyInvariant, MumfordFourteenthProblem, HochsterInvariantSurvey, DerksenKemper}. We record the relevant facts here. \begin{defn} \label{defn:reynolds-operator} Let $G$ be a linear algebraic group over the field $k$, and $V$ a rational representation of $G$. A \deff{Reynolds operator} is a $k$-linear, $G$-equivariant splitting $\mathcal{R} \colon k[V] \to k[V]^{G}$. \end{defn} \begin{thm} \label{thm:linearly-reductive-reynolds-unique-linear} If $G$ is linearly reductive, then for every rational representation $V$, there exists a \emph{unique} Reynolds operator $\mathcal{R} \colon k[V] \to k[V]^{G}$. Moreover, $\mathcal{R}$ is $k[V]^{G}$-linear. \end{thm} \begin{proof} The statements are Theorem 2.2.5 and Corollary 2.2.7 in~\Cite{DerksenKemper}, respectively. \end{proof} \begin{example} We give an example of a group $G$ acting on a polynomial ring $S$ for which there exists an $S^{G}$\nobreakdash-linear splitting but no $G$-equivariant splitting. Let $G$ be the symmetric group on two elements, and $S \coloneqq \mathbb{F}_{2}[x, y]$. The group $G$ acts on $S$ by permuting the variables, and the invariant subring is $\mathbb{F}_{2}[x+y, xy]$. Because $S$ is a free $S^{G}$-module with $\{1, x\}$ as a basis, the inclusion $S^{G} \into S$ splits $S^{G}$-linearly. Suppose that $\pi \colon S \to S^{G}$ is a $G$-equivariant splitting. Then, $\pi(x) = \pi(y)$ because $x$ and $y$ are in the same orbit. But then, \begin{equation*} x + y = \pi(x + y) = \pi(x) + \pi(y) = 2 \pi(x) = 0, \end{equation*} a contradiction. Thus, $S^{G} \into S$ admits no $G$-equivariant splitting even though it splits $S^{G}$-linearly.
This example extends mutatis mutandis to any positive characteristic $p$ by considering the permutation action of $\Sigma_{p}$---the symmetric group on $p$ elements---on the polynomial ring $\mathbb{F}_{p}[x_{1}, \ldots, x_{p}]$. \end{example} \begin{example} We now give an example of a group action for which no $S^{G}$-linear splitting exists. Consider the action of the alternating group $G \coloneqq A_{3}$ on the polynomial ring $S \coloneqq \mathbb{F}_{3}[x, y, z]$ by permuting the variables. If we let $e_{1}$, $e_{2}$, $e_{3}$ denote the elementary symmetric polynomials in $x$, $y$, $z$ and set $\Delta \coloneqq (x - y)(y - z)(z - x)$, then one can check that $\Delta \in S^{G}$, $\Delta \notin (e_{1}, e_{2}, e_{3}) S^{G}$, but $\Delta \in (e_{1}, e_{2}, e_{3}) S$. This implies that $S^{G} \into S$ does not split over $S^{G}$. More generally, if $A_{n}$ acts on $S = \mathbb{F}_{p}[x_{1}, \ldots, x_{n}]$ by permuting variables, the inclusion $S^{A_{n}} \into S$ splits if and only if $p$ does not divide $\md{A_{n}}$; the nontrivial implication was proven in~\Cite[Theorem 12.2]{Glassbrenner:CMFrational} for $p \nmid n(n - 1)$, and the general case can be found in \Cite[Theorem 5.5]{Singh:FailureF}, \Cite{Smith:AlternatingInvariants}, \Cite[Theorem 2.18]{Jeffries:Thesis}, and \Cite[Corollary 4.2]{GoelJeffriesSingh}. \end{example} \begin{example} If $k$ is a field of characteristic zero, then the classical groups $\GL_{n}(k)$, $\SL_{n}(k)$, $\OO_{n}(k)$, and $\Sp_{2n}(k)$ are all linearly reductive, as are all finite groups. For a finite group $G$, the Reynolds operator is just averaging over the group: $\mathcal{R}(f) = \frac{1}{\md{G}} \sum\limits_{g \in G} g(f)$. \end{example} The above Reynolds operator extends naturally to smooth actions of a compact Lie group, see \Cref{thm:reynolds-for-lie-group}. 
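The two ideal-membership claims for $\Delta$ in the alternating-group example above can be checked mechanically by Gröbner reduction. The following is a minimal sketch in Python with sympy (the variable names are ours): the remainder of $\Delta$ modulo $(e_1, e_2, e_3)$ is nonzero over $\mathbb{Q}$ but vanishes over $\mathbb{F}_3$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
e = [x + y + z, x*y + y*z + z*x, x*y*z]   # elementary symmetric polynomials e1, e2, e3
Delta = sp.expand((x - y)*(y - z)*(z - x))

# Reduce Delta modulo a Groebner basis of (e1, e2, e3), first over Q ...
_, rem_Q = sp.groebner(e, x, y, z, order='grevlex').reduce(Delta)
# ... and then over the field with three elements.
_, rem_F3 = sp.groebner(e, x, y, z, order='grevlex', modulus=3).reduce(Delta)

print(rem_Q)   # nonzero: Delta is not in (e1, e2, e3)S over Q
print(rem_F3)  # zero: Delta lies in (e1, e2, e3)S over F_3
```

This matches the discussion above: in characteristic three, $\Delta$ falls into the expansion of the ideal to $S$, which is exactly what obstructs an $S^{G}$-linear splitting.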
The following theorem, in conjunction with \Cref{prop:invariants-and-operator-over-GC-and-intersection}, tells us how the computation of the Reynolds operator for a linearly reductive group over $\mathbb{C}$ can be reduced to that for a compact Lie group. \begin{thm} \label{thm:equivalent-linearly-reductive-over-C} Let $G$ be a linear algebraic group over $\mathbb{C}$. The following are equivalent. \begin{enumerate}[label=(\alph*)] \item $G$ is linearly reductive. \item $G$ has a Zariski-dense subgroup that is a compact Lie group (in the Euclidean topology). \end{enumerate} \end{thm} We shall deduce the above theorem for the classical groups of interest by producing Zariski-dense subgroups in \Cref{thm:density}. \section{The Reynolds operator for a Lie group} \label{sec:splitting-over-lie-group} We will now describe the Reynolds operator for a compact Lie group acting on a polynomial ring. Strictly speaking, the term ``Reynolds operator'' was defined for the rational action of a linear algebraic group, but we continue to use this term to mean a ($\mathbb{C}$-)linear $G$-equivariant splitting. We first recall some theory of integration over such a group. In this section, a finite-dimensional vector space over $\mathbb{R}$ will have its canonical structure of a real differentiable manifold. Examples include $\mathbb{C}$ and finite-dimensional vector spaces over $\mathbb{C}$. Let $G$ be a compact real Lie group and $\dG$ denote the (normalised) Haar measure on $G$. Given an element $g \in G$, we denote by $L_{g}$ and $R_{g}$ the left and right translation maps: \begin{equation} \label{eq:translation-maps} \begin{aligned} L_{g} \colon G &\to G, \\ h &\mapsto gh, \end{aligned} \qquad\qquad \begin{aligned} R_{g} \colon G &\to G, \\ h &\mapsto hg. \end{aligned} \end{equation} For an introduction to the Haar measure, we refer the reader to one of~\Cite{HalmosMeasure, RoydenAnalysis, LangAnalysis}. We next recall the properties of interest to us. 
\begin{thm} \label{thm:invariance-to-field} Let $\psi \colon G \to \mathbb{R}$ be smooth, and $g \in G$. Then, \begin{equation*} \int_{G} \psi \,\dG = \int_{G} (\psi \circ L_{g}) \,\dG = \int_{G} (\psi \circ R_{g}) \,\dG. \end{equation*} If $\psi$ is constant and takes the value $1$, then \begin{equation*} \int_{G} \psi \, \dG = 1. \end{equation*} \end{thm} We may naturally extend the integration of scalar-valued functions to vector-valued functions: \begin{defn} Let $V$ be a finite-dimensional $\mathbb{R}$-vector space, and $\psi \colon G \to V$ a smooth function. Fix a basis $\{v_{1}, \ldots, v_{n}\}$ of $V$. Let $\psi_{i} \colon G \to \mathbb{R}$ be the corresponding coordinate functions, satisfying $\psi(g) = \sum \psi_{i}(g) v_{i}$. We define \begin{equation*} \int_{G} \psi \coloneqq \sum_{i = 1}^{n} \left(\int_{G} \psi_{i} \,\dG\right) v_{i} \in V. \end{equation*} \end{defn} One checks that the above definition is independent of the choice of basis. Note that our notation above drops the ``$\dG$'' when integrating vector-valued functions. This is for ease of notation as we will always be integrating with respect to the Haar measure. The linearity of scalar integration and the properties of the Haar measure readily extend to the following. \begin{lem} \label{lem:integral-commute-linear-maps} Let $T \colon V \to W$ be a linear map of finite-dimensional vector spaces, and let $\psi \colon G \to V$ be a smooth function. Then, \begin{equation*} \int_{G} (T \circ \psi) = T\left(\int_{G} \psi\right). \end{equation*} \end{lem} \begin{lem} \label{lem:invariance-to-vector-space} Let $\psi \colon G \to V$ be smooth, and $g \in G$. Then, \begin{equation*} \int_{G} \psi = \int_{G} (\psi \circ L_{g}) = \int_{G} (\psi \circ R_{g}). \end{equation*} If $\psi$ is constant and takes the value $v$, then \begin{equation*} \int_{G} \psi = v.
\end{equation*} \end{lem} \begin{defn} Suppose $V$ is an infinite-dimensional vector space, and $\Psi \colon G \to V$ a function such that the vector space spanned by the image of $\Psi$ is finite-dimensional. Let $W \subset V$ be any finite-dimensional subspace containing the image of $\Psi$, and let $\psi \colon G \to W$ be the restriction of $\Psi$. We say that $\Psi$ is \deff{smooth} if $\psi$ is smooth, and define \begin{equation*} \int_{G} \Psi \coloneqq \int_{G} \psi, \end{equation*} where we note that the above definitions are independent of the choice of $W$. \end{defn} Let $S = \mathbb{C}[x_{1}, \ldots, x_{n}]$ be a polynomial ring, and let $[S]_{1}$ denote the $\mathbb{C}$-vector space of homogeneous degree one polynomials. There is a natural isomorphism of groups \begin{equation*} \{\text{degree-preserving $\mathbb{C}$-algebra automorphisms of $S$}\} \longleftrightarrow \{\text{$\mathbb{C}$-linear automorphisms of $[S]_{1}$}\}. \end{equation*} A degree-preserving $\mathbb{C}$-algebra action of $G$ on $S$ is called \deff{smooth} if the corresponding action $G \times [S]_{1} \to [S]_{1}$ is smooth. In this case, the corresponding action $G \times [S]_{d} \to [S]_{d}$ is smooth for all $d \ge 0$, where $[S]_{d}$ denotes the space of homogeneous polynomials of degree $d$. For $f \in S$, define the orbit map \begin{align*} \psi_{f} \colon G &\to S \\ g &\mapsto g(f). \end{align*} The function $\psi_{f}$ takes values within a finite-dimensional subspace of $S$, for example, the space of polynomials of degree at most the degree of $f$. If the $G$-action is smooth, then $\psi_{f}$ defines a smooth function. \begin{thm} \label{thm:reynolds-for-lie-group} Let $G$ be a compact Lie group acting smoothly on the polynomial ring $S \coloneqq \mathbb{C}[x_{1}, \ldots, x_{n}]$ by degree-preserving $\mathbb{C}$\nobreakdash-algebra automorphisms.
Then, $S^{G} \into S$ splits with a degree-preserving, $G$-equivariant, $S^{G}$-linear splitting $\mathcal{R} \colon S \onto S^{G}$ given by \begin{equation*} \mathcal{R} \colon f \mapsto \int_{G} \psi_{f}. \end{equation*} Suggestively, the above may be written as \begin{equation*} \mathcal{R}(f) = \int_{g \in G} g(f), \end{equation*} resembling the Reynolds operator for finite groups. \end{thm} \begin{proof} The $\mathbb{C}$-linearity of $\mathcal{R}$ is clear. If $f$ is homogeneous, then $\psi_{f}$ takes values in the subspace $[S]_{\deg(f)}$ and in turn, $\mathcal{R}(f) \in [S]_{\deg(f)}$. Thus, $\mathcal{R}$ is a degree-preserving $\mathbb{C}$-linear map. For the rest of the proof, we will make repeated use of \Cref{lem:integral-commute-linear-maps,lem:invariance-to-vector-space}. Recall that $L_{g}$ and $R_{g}$ denote the translation maps, defined in~\Cref{eq:translation-maps}. For $f \in S$ and $g \in G$, we define the $\mathbb{C}$-linear maps $S \xrightarrow{\rho_{f}} S$ and $S \xrightarrow{\mu_{g}} S$ given by left multiplication by $f$ and the action of $g$, respectively. Consequently, \begin{align*} \mathcal{R}(f) &= \int_{G} \psi_{f} = \int_{G} \psi_{f} \circ R_{g} = \int_{G} \psi_{g(f)} = \mathcal{R}(g(f)) \\[5pt] &= \int_{G} \psi_{f} \circ L_{g} = \int_{G} \mu_{g} \circ \psi_{f} = \mu_{g}\left(\int_{G} \psi_{f}\right) = g(\mathcal{R}(f)). \end{align*} The above shows that $\mathcal{R}$ takes values in $S^{G}$ and is $G$-equivariant. Lastly, if $f \in S^{G}$ and $h \in S$, then \begin{equation*} \mathcal{R}(fh) = \int_{G} \psi_{fh} = \int_{G} \rho_{f} \circ \psi_{h} = \rho_{f} \left(\int_{G} \psi_{h}\right) = f \mathcal{R}(h), \end{equation*} and $\psi_{f}$ is identically equal to $f$, giving us \begin{equation*} \mathcal{R}(f) = \int_{G} \psi_{f} = f. \end{equation*} This finishes the proof that $\mathcal{R}$ is an $S^{G}$-linear splitting.
\end{proof} \section{The Reynolds operator for the classical actions} \label{sec:reynolds-classical} Fix an integer $t \ge 1$ and let $\GG(-)$ be one of $\GL_{t}(-)$, $\SL_{t}(-)$, $\OO_{t}(-)$, or $\Sp_{t}(-)$, where we assume that $t$ is even in the last case. Define $C \coloneqq \GG(\mathbb{C}) \cap \UU_{t}(\mathbb{C})$. The intersections in the respective cases are $\UU_{t}(\mathbb{C})$, $\SU_{t}(\mathbb{C})$, $\OO_{t}(\mathbb{R})$, and $\SpU_{t}(\mathbb{C})$. Let $k$ be an arbitrary field of characteristic zero. \begin{thm}[The density theorem] \label{thm:density} With the above notation, we have: \begin{enumerate}[label=(\alph*)] \item $\GG(\mathbb{Q})$ is a Zariski-dense subgroup of $\GG(k)$; and \item $C$ is a Zariski-dense subgroup of $\GG(\mathbb{C})$. \end{enumerate} \end{thm} \begin{proof} For (b), see the proof of~\Cite[Anhang II, Satz 4]{KraftGeometrische}. We give a more elementary proof for $\GL$ and $\SL$ in \Cref{sec:proof-density}, see \Cref{prop:U-GL-dense,prop:SU-SL-dense}. We also prove (a) in \Cref{sec:proof-density}, see \Cref{thm:G-Q-dense-in-G-k}. \end{proof} By $k[Z]$, we will mean one of $k[Y]$ or $k[X, Y]$. In either case, we have a rational action of $\GG(k)$ on $k[Z]$, as described in \Cref{sec:classical-group-actions}. Note that $C$ is a compact Lie group, and the action of $\GG(\mathbb{C})$ on $\mathbb{C}[Z]$ restricts to a smooth action of $C$. We have the following inclusions of groups. \begin{equation*} \begin{tikzcd} \GG(k) & & \GG(\mathbb{C}) & \\ & \GG(\mathbb{Q}) \arrow[lu, no head] \arrow[ru, no head] & & C \arrow[lu, no head] \end{tikzcd} \end{equation*} We will first show how the computation of the Reynolds operator for $\GG(k)$ reduces to that for $C$. The key point is that the action is rational, and each inclusion above is Zariski-dense by \Cref{thm:density}. This reduction is useful because $C$ is a compact Lie group; thus, we have its Reynolds operator by \Cref{thm:reynolds-for-lie-group}.
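As a toy illustration of this reduction (this example is ours and is not needed in the sequel), take $t = 1$ and $\GG = \GL_{1}$, so that $C = \UU_{1}(\mathbb{C}) = \mathbb{S}^{1}$. For the action~\ref{item:standard-dual-action} on $k[x, y]$ with $X = (x)$ and $Y = (y)$, a scalar $\lambda$ acts by $x \mapsto \lambda^{-1} x$ and $y \mapsto \lambda y$, so the monomial $x^{a} y^{b}$ is scaled by $\lambda^{b - a}$. Integrating over the circle with respect to the normalised Haar measure gives
\begin{equation*}
\mathcal{R}(x^{a} y^{b}) = \left(\int_{\mathbb{S}^{1}} \lambda^{b - a}\right) x^{a} y^{b} = \begin{cases} (xy)^{a} & \text{if } a = b, \\ 0 & \text{else}, \end{cases}
\end{equation*}
recovering the invariant ring $k[xy]$.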
\begin{prop} \label{prop:same-invariants-upon-field-extension} Let $f_{1}, \ldots, f_{n} \in \mathbb{Q}[Z]^{\GG(\mathbb{Q})}$ be generating invariants, i.e., we have $\mathbb{Q}[Z]^{\GG(\mathbb{Q})} = \mathbb{Q}[f_{1}, \ldots, f_{n}]$. Then, the equality $k[Z]^{\GG(k)} = k[f_{1}, \ldots, f_{n}]$ holds. In particular, we have the inclusion $\mathbb{Q}[Z]^{\GG(\mathbb{Q})} \subset k[Z]^{\GG(k)}$ as subsets of $k[Z]$. \end{prop} \begin{proof} We first show that each $f_{i}$ is $\GG(k)$-invariant. To this end, note that the equation \begin{equation*} \sigma(f_{i}) - f_{i} = 0 \end{equation*} holds for each fixed $i$ and for all $\sigma \in \GG(\mathbb{Q})$. Because the action is rational and $\GG(\mathbb{Q})$ is Zariski-dense in $\GG(k)$ by \Cref{thm:G-Q-dense-in-G-k}, the above equation must hold for all $\sigma \in \GG(k)$. In other words, each $f_{i}$ is $\GG(k)$-invariant. We now prove the inclusion $k[Z]^{\GG(k)} \subset k[f_{1}, \ldots, f_{n}]$. Let $B$ be a $\mathbb{Q}$-basis for $k$. Given $h \in k[Z]^{\GG(k)}$, write \begin{equation*} h = \sum_{b \in B} b h_{b} \end{equation*} for $h_{b} \in \mathbb{Q}[Z]$. If we apply $\sigma \in \GG(\mathbb{Q})$ to the above equation, we get \begin{equation*} h = \sum_{b \in B} b \sigma(h_{b}) \end{equation*} because $\sigma(h) = h$ and $\sigma(b) = b$ for all $b \in k$. Comparing the two displayed equations above gives us that each $h_{b}$ is fixed by $\GG(\mathbb{Q})$ and thus $h_{b} \in \mathbb{Q}[f_{1}, \ldots, f_{n}]$ for all $b$. In turn, $h \in k[f_{1}, \ldots, f_{n}]$, as desired. \end{proof} \begin{prop} Let $\mathcal{R}_{k} \colon k[Z] \onto k[Z]^{\GG(k)}$ denote the Reynolds operator over the field $k$. The following diagram commutes \begin{equation*} \begin{tikzcd} {k[Z]} \arrow[r, "\mathcal{R}_{k}", two heads] & {k[Z]^{\GG(k)}} \\ {\mathbb{Q}[Z]} \arrow[r, "\mathcal{R}_{\mathbb{Q}}"', two heads] \arrow[u, hook] & {\mathbb{Q}[Z]^{\GG(\mathbb{Q})}}. 
\arrow[u, hook] \end{tikzcd} \end{equation*} In particular, if $\mu \in k[Z]$ is a monomial, then \begin{equation} \label{eq:R-k-mu-R-C-mu} \mathcal{R}_{k}(\mu) = \mathcal{R}_{\mathbb{C}}(\mu). \end{equation} \end{prop} The above equation makes sense by interpreting $\mu$ as an element of $\mathbb{C}[Z]$. \begin{proof} In view of \Cref{prop:same-invariants-upon-field-extension}, we may extend $\mathcal{R}_{\mathbb{Q}}$ $k$-linearly to obtain a retraction $\pi$ making the diagram \begin{equation*} \begin{tikzcd} {k[Z]} \arrow[r, "\pi", two heads] & {k[Z]^{\GG(k)}} \\ {\mathbb{Q}[Z]} \arrow[r, "\mathcal{R}_{\mathbb{Q}}"', two heads] \arrow[u, hook] & {\mathbb{Q}[Z]^{\GG(\mathbb{Q})}} \arrow[u, hook] \end{tikzcd} \end{equation*} commute. We need to show that $\pi = \mathcal{R}_{k}$. By the uniqueness of the Reynolds operator, \Cref{thm:linearly-reductive-reynolds-unique-linear}, it suffices to show that $\pi$ is $\GG(k)$-equivariant. Note that $\GG(k)$-equivariance can be checked on monomials, where it is true again by the Zariski-density of $\GG(\mathbb{Q})$. This proves that $\pi = \mathcal{R}_{k}$, and hence that the diagram commutes. Now, if $\mu \in \mathbb{Q}[Z]$ is a monomial, then the diagram gives us $\mathcal{R}_{k}(\mu) = \mathcal{R}_{\mathbb{Q}}(\mu)$. Because $k$ was arbitrary, we get~\Cref{eq:R-k-mu-R-C-mu}. \end{proof} The Zariski-density of $C$ in $\GG(\mathbb{C})$ similarly yields the following proposition. \begin{prop} \label{prop:invariants-and-operator-over-GC-and-intersection} The equality $\mathbb{C}[Z]^{\GG(\mathbb{C})} = \mathbb{C}[Z]^{C}$ holds, and the splitting $\mathcal{R} \colon \mathbb{C}[Z] \to \mathbb{C}[Z]^{C}$ described in \Cref{thm:reynolds-for-lie-group} is $\GG(\mathbb{C})$-equivariant. In other words, $\mathcal{R}$ is the Reynolds operator for the $\GG(\mathbb{C})$-action.
\end{prop} \begin{rem} The above has now made the computation of $\mathcal{R}_{k}$ clear: because the Reynolds operator $\mathcal{R}_{k}$ is a $k$-linear map, it suffices to compute it on monomials; and for monomials, $\mathcal{R}_{k}$ agrees with the Reynolds operator for the Lie group $C$ by~\Cref{eq:R-k-mu-R-C-mu} and \Cref{prop:invariants-and-operator-over-GC-and-intersection}. \end{rem} In the following two subsections, we describe algorithms to implement this splitting on a computer algebra system. \subsection{Computing the Reynolds operator for copies of the standard representation} \label{subsec:standard-computation} Continuing our notation from earlier, let $\GG(k) \le \GL_{t}(k)$ be one of the classical groups, and $C \coloneqq \GG(\mathbb{C}) \cap \UU_{t}(\mathbb{C})$ the corresponding compact Lie group. For a positive integer $n$, the group $\GG(k)$ acts on $k[Y_{t \times n}]$ as described in~\ref{item:standard-action}. We describe the Reynolds operator for this action. Consider the larger polynomial ring $k[Y][U_{t \times t}]$, and define the $k$-algebra map \begin{align*} \phi \colon k[Y] &\to k[Y][U] \\ Y &\mapsto UY. \end{align*} For $f \in k[Y]$, write \begin{equation*} \phi(f) = \sum_{I} \alpha_{I}(f) u^{I}, \end{equation*} where $\alpha_{I}(f) \in k[Y]$; in the above, the sum is over multi-indices $I \in \mathbb{N}^{t^{2}}$, and $u^{I}$ is the corresponding monomial. Each $u^{I}$ can be naturally interpreted as a smooth function $C \to \mathbb{C}$ and the Reynolds operator is then given as \begin{equation} \label{eq:reynolds-standard-representation} \begin{aligned} \mathcal{R} \colon k[Y] &\to k[Y]^{\GG(k)} \\ f &\mapsto \sum_{I} \alpha_{I}(f) \int_{C} u^{I}. 
\end{aligned} \end{equation} \subsection{Computing the Reynolds operator for copies of the standard and the dual representations} \label{subsec:standard-dual-computation} We now consider the action of $\GG(k)$ on $k[X_{m \times t}, Y_{t \times n}]$ as described in~\ref{item:standard-dual-action}. Note that while the action of $\GG(k)$ involves an inverse, $C$ is a subgroup of the unitary group and thus, $U^{-1} = \overline{U}^{\tr}$ for $U \in C$. We now consider the larger polynomial ring $k[X, Y][U_{t \times t}, \overline{U}_{t \times t}]$ with $2t^{2}$ additional indeterminates; explicitly, the new variables are the symbols ${\{u_{ij} : 1 \le i, j \le t\} \cup \{\overline{u}_{ij} : 1 \le i, j \le t\}}$. Define the $k$-algebra map \begin{align*} \phi \colon k[X, Y] &\to k[X, Y][U, \overline{U}] \\ X &\mapsto X \overline{U}^{\tr}, \\ Y &\mapsto U Y. \end{align*} For $f \in k[X, Y]$, write \begin{equation*} \phi(f) = \sum_{I, J} \alpha_{I, J}(f) u^{I} \overline{u}^{J}. \end{equation*} Each monomial $u^{I} \overline{u}^{J}$ can again be interpreted as a smooth function on $C$, and the Reynolds operator is given as \begin{equation} \label{eq:reynolds-standard-dual-representation} \begin{aligned} \mathcal{R} \colon k[X, Y] &\to k[X, Y]^{\GG(k)} \\ f &\mapsto \sum_{I, J} \alpha_{I, J}(f) \int_{C} u^{I} \overline{u}^{J}. \end{aligned} \end{equation} \subsection{Some remarks} \label{subsec:remarks} We stress that the only non-algebraic calculations above are the integrals of monomial functions over $C$, where $C$ is one of $\UU_{t}(\mathbb{C})$, $\SU_{t}(\mathbb{C})$, $\OO_{t}(\mathbb{R})$, or $\SpU_{t}(\mathbb{C})$. Note moreover that these are scalar functions. While we discussed the theory of integration of vector-valued functions to prove the above, one only needs to work with $\mathbb{C}$-valued functions in practice.
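Since the only non-algebraic input is the list of scalar integrals $\int_{C} u^{I}$, the splitting~\Cref{eq:reynolds-standard-representation} is straightforward to prototype. The following Python/SymPy sketch (our own illustration; the names \texttt{reynolds} and \texttt{monomial\_integral} are ad hoc and do not come from any package) carries out the algebraic step $\phi(f) = \sum_{I} \alpha_{I}(f) u^{I}$ for $t = 2$, taking the monomial integrals as a user-supplied function:

```python
import sympy as sp

t, N = 2, 2
# Entries of Y (the t x N matrix of variables) and of U (the generic group element)
Y = sp.Matrix(t, N, lambda i, j: sp.Symbol(f"y{i + 1}{j + 1}"))
U = sp.Matrix(t, t, lambda i, j: sp.Symbol(f"u{i + 1}{j + 1}"))
uvars = list(U)

# The k-algebra map phi : k[Y] -> k[Y][U] induced by Y |-> U*Y,
# recorded as a substitution on the variables y_{ij}.
phi = dict(zip(list(Y), list((U * Y).applyfunc(sp.expand))))

def reynolds(f, monomial_integral):
    """Expand phi(f) = sum_I alpha_I(f) u^I and replace each u^I by the
    scalar integral over C supplied via monomial_integral(I)."""
    p = sp.Poly(sp.expand(f.xreplace(phi)), *uvars)
    return sp.expand(sum(c * monomial_integral(I) for I, c in p.terms()))
```

As a check of the plumbing, one may take $C$ to be the trivial group $\{I\}$: the Haar integral of $u^{I}$ is then simply its value at the identity matrix, and the operator above returns $f$ itself.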
The integration of these monomial functions over $\UU_{t}(\mathbb{C})$, $\OO_{t}(\mathbb{R})$, and $\SpU_{t}(\mathbb{C})$ is of interest in various fields of mathematical physics; see the introduction of~\Cite{GorinLopez}. Methods to compute these integrals are described in~\Cite{CollinsSniady, GorinLopez}. In particular, the integration of arbitrary monomial functions over $\UU_{t}(\mathbb{C})$ has been implemented in the \texttt{Mathematica} package \texttt{IntU}~\Cite{PuchalaMiszczak}. Using this package, we have implemented the splitting~\Cref{eq:reynolds-standard-dual-representation} for the action~\ref{item:standard-dual-action} of $\GL_{t}(\mathbb{C})$ in the computer algebra system \Sage~\Cite{sagemath}. We have also implemented the splitting~\Cref{eq:reynolds-standard-representation} for the action~\ref{item:standard-action} of $\SL_{2}(\mathbb{C})$ using \Cref{thm:integrating-over-SU2}. For $\SL_{t}(k)$ and $\OO_{t}(k)$, the method described in \Cref{subsec:standard-dual-computation} for the action~\ref{item:standard-dual-action} may be modified as follows. \begin{enumerate}[label=(\alph*)] \item (Special linear group) If $C = \SL_{t}(\mathbb{C}) \cap \UU_{t}(\mathbb{C})$, then the inverse of $U \in C$ is given by the adjugate $\adj(U)$. Note that the entries of $\adj(U)$ are polynomials in the entries of $U$, so we may modify $\phi$ as \begin{align*} \phi \colon k[X, Y] &\to k[X, Y][U] \\ X &\mapsto X \adj(U), \\ Y &\mapsto U Y. \end{align*} \item (Orthogonal group) If $C = \OO_{t}(\mathbb{C}) \cap \UU_{t}(\mathbb{C})$, then the inverse of $U \in C$ is just the transpose $U^{\tr}$, so we may modify $\phi$ as \begin{align*} \phi \colon k[X, Y] &\to k[X, Y][U] \\ X &\mapsto X U^{\tr}, \\ Y &\mapsto U Y. \end{align*} \end{enumerate} \section{Explicit formulae} \label{sec:explicit-formulae} In this section, we use the formulae of \Cref{sec:reynolds-classical} to compute the Reynolds operators for $\SL_{2}$ and $\GL_{t}$.
We give expressions for these in terms of the invariants described in \Cref{thm:classical-invariants}. \subsection{The Reynolds operator for \texorpdfstring{$\SL_{2}$}{SL2}} We use formula~\Cref{eq:reynolds-standard-representation} to compute the Reynolds operator~$\mathcal{R}$ for the standard action~\ref{item:standard-action} of $\SL_{2}(k)$ on $k[Y_{2 \times N}]$; the relevant monomial integrals are determined in \Cref{thm:integrating-over-SU2} and we can thus compute $\mathcal{R}$ on any element of $k[Y]$. We begin the section by recording the value of $\mathcal{R}$ on various families of monomials, postponing the proofs until the end of the section. By \Cref{thm:classical-invariants}, we know that $k[Y]^{\SL_{2}(k)}$ is generated by the size $2$ minors of $Y$. For ease of notation, we write \begin{equation*} Y = \begin{bmatrix} a_{1} & a_{2} & \cdots & a_{N} \\ b_{1} & b_{2} & \cdots & b_{N} \\ \end{bmatrix} , \qquad \{\Delta\} \coloneqq \{\text{size $2$ minors of $Y$}\}, \qquad \text{and} \qquad \Delta_{i, j} \coloneqq a_{i} b_{j} - a_{j} b_{i}. \end{equation*} The next theorem describes the Reynolds operator on $k[Y_{2 \times 2}]$. \begin{thm} \label{thm:reynolds-operator-SL-2-by-2} Let $\mathcal{R} \colon k[Y_{2 \times 2}] \to k[\{\Delta\}]$ be the Reynolds operator and $\mu \in k[Y_{2 \times 2}]$ a monomial. \begin{enumerate}[leftmargin=*, label=(\alph*)] \item If $\mu$ is of the form $(a_{1} b_{2})^{n} (a_{2} b_{1})^{m}$ for some nonnegative integers $n$ and $m$, then \begin{equation} \label{eq:R-SL-2-2} \mathcal{R}(\mu) = \mathcal{R}\left((a_{1} b_{2})^{n} (a_{2} b_{1})^{m}\right) = \frac{n! m!}{(n + m + 1)!} \Delta_{1,2}^{n} \Delta_{2,1}^{m}; \end{equation} in particular, for $n \ge 0$, we have \begin{equation} \label{eq:R-SL-2-1} \mathcal{R}\left((a_{1} b_{2})^{n}\right) = \frac{1}{n + 1}\Delta_{1,2}^{n}. \end{equation} \item If $\mu$ is not of the above form, then \begin{equation*} \mathcal{R}(\mu) = 0. 
\end{equation*} \end{enumerate} \end{thm} We give $k[Y_{2 \times N}]$ a multi-grading by defining $\deg(a_{i}) = (1, 0)$ and $\deg(b_{i}) = (0, 1)$ for all $1 \le i \le N$. \begin{thm} \label{thm:row-unbalanced-in-kernel} Let $\mu \in k[Y]$ be a monomial such that $\deg(\mu) = (m, n)$ with $m \neq n$. Then, $\mathcal{R}(\mu) = 0$. \end{thm} Computations suggest that~\Cref{eq:R-SL-2-2} generalises as follows. \begin{conj} \label{conj:2x3-formula} For all nonnegative integers $i$, $j$, $k$, we have \begin{equation*} \mathcal{R}\left( (a_{1} b_{2})^{i} (a_{1} b_{3})^{j} (a_{2} b_{3})^{k} \right) = \frac{(i + j)! (k + j)!}{(i + j + k + 1)! j!} \Delta_{1, 2}^{i} \Delta_{1, 3}^{j} \Delta_{2, 3}^{k}. \end{equation*} \end{conj} \begin{conj} \label{conj:odd-powers-in-kernel} For all nonnegative integers $n$, we have \begin{equation*} \mathcal{R}\left((a_{1} a_{2} a_{3} b_{1} b_{2} b_{3})^{2n + 1}\right) = 0. \end{equation*} \end{conj} \begin{thm} \label{thm:integrating-over-SU2} For all nonnegative integers $a$, $b$, $c$, $d$, we have \begin{equation*} \int_{\SU_{2}(\mathbb{C})} u_{11}^{a} u_{12}^{b} u_{21}^{c} u_{22}^{d} = \begin{cases} (-1)^{b} \dfrac{a! b!}{(a + b + 1)!} & \text{if $a = d$ and $b = c$}, \\[3pt] 0 & \text{else}. \end{cases} \end{equation*} \end{thm} \begin{proof} See \Cref{thm:integrating-over-SU2-appendix}. \end{proof} We say that a monomial in $k[Y]$ is \deff{balanced} if it is a product of monomials of the form $a_{i} b_{j}$ with $i \neq j$, and \deff{unbalanced} otherwise. The following are straightforward observations. \begin{enumerate}[label=(\alph*)] \item The algebra of minors $k[\{\Delta\}]$ sits inside the $k$-subalgebra generated by the balanced monomials. \item If $\mu \in k[Y]$ is a balanced monomial, then $\deg(\mu) = (d, d)$ for some $d \ge 0$. \end{enumerate} Note however that $\deg(a_{1} b_{1}) = (1, 1)$, yet $a_{1} b_{1}$ is unbalanced. 
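\Cref{thm:integrating-over-SU2} makes the splitting~\Cref{eq:reynolds-standard-representation} for $\SL_{2}$ completely explicit, and the resulting operator is small enough to verify directly. The following Python/SymPy sketch (our own check, independent of the \Sage implementation mentioned earlier) encodes the integral formula and reproduces the values of $\mathcal{R}$ recorded in \Cref{thm:reynolds-operator-SL-2-by-2}:

```python
from math import factorial
import sympy as sp

a1, a2, b1, b2 = sp.symbols("a1 a2 b1 b2")
u11, u12, u21, u22 = sp.symbols("u11 u12 u21 u22")

# phi : k[Y] -> k[Y][U] for the standard action of a 2x2 matrix, Y |-> U*Y
phi = {a1: a1*u11 + b1*u12, b1: a1*u21 + b1*u22,
       a2: a2*u11 + b2*u12, b2: a2*u21 + b2*u22}

def su2_integral(e11, e12, e21, e22):
    # Closed form for the integral of u11^e11 u12^e12 u21^e21 u22^e22 over SU(2)
    if e11 == e22 and e12 == e21:
        return sp.Rational((-1)**e12 * factorial(e11) * factorial(e12),
                           factorial(e11 + e12 + 1))
    return sp.Integer(0)

def reynolds(f):
    # Expand phi(f) as a polynomial in the u's and integrate term by term
    p = sp.Poly(sp.expand(f.xreplace(phi)), u11, u12, u21, u22)
    return sp.expand(sum(c * su2_integral(*m) for m, c in p.terms()))

delta = a1*b2 - a2*b1  # the minor Delta_{1,2}
```

For instance, the $n = 1$ case of the first formula gives $\mathcal{R}(a_{1} b_{2}) = \tfrac{1}{2}\Delta_{1, 2}$, and the unbalanced monomial $a_{1} b_{1}$ is sent to $0$.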
\begin{rem} Assuming \Cref{conj:2x3-formula}, the $k[\{\Delta\}]$-linearity of $\mathcal{R}$ would then determine the value of $\mathcal{R}$ on any balanced monomial in $k[Y_{2 \times 3}]$. For example, one may verify \Cref{conj:2x3-formula} in the two cases needed for the following computation and obtain \begin{equation*} \mathcal{R}\left((a_{1} b_{2})(a_{2} b_{3})(a_{3} b_{1})\right) = \mathcal{R}\left((a_{1} b_{2})(a_{2} b_{3})(a_{1} b_{3} - \Delta_{1, 3})\right) = \left(\frac{1}{6} - \frac{1}{6}\right) \Delta_{1, 2} \Delta_{1, 3} \Delta_{2, 3} = 0. \end{equation*} The above gives us the case $n = 0$ in \Cref{conj:odd-powers-in-kernel}. In particular, it shows that a balanced monomial may be in the kernel of $\mathcal{R}$; \Cref{thm:reynolds-operator-SL-2-by-2} tells us that this does not happen for $k[Y_{2 \times 2}]$, where the monomials in the kernel are precisely the unbalanced ones. \end{rem} \begin{rem} It is not true that the image of a monomial is again a monomial in the $\Delta_{i, j}$. One checks that \begin{equation*} \mathcal{R}(a_{1} b_{2} a_{3} b_{4}) = \frac{1}{3} \Delta_{1, 2} \Delta_{3, 4} - \frac{1}{6} \Delta_{1, 3} \Delta_{2, 4}. \end{equation*} The expression on the right is not divisible by any $\Delta_{i, j}$ and thus cannot be expressed as a monomial in the $\{\Delta\}$. \end{rem} \begin{proof}[Proof of \Cref{thm:reynolds-operator-SL-2-by-2} (a)] The map $\phi$ from \Cref{subsec:standard-computation} is given by \begin{equation*} \begin{bmatrix} a_{1} & \cdots & a_{N} \\ b_{1} & \cdots & b_{N} \\ \end{bmatrix} \mapsto \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} a_{1} & \cdots & a_{N} \\ b_{1} & \cdots & b_{N} \\ \end{bmatrix}. \end{equation*} Thus, \begin{align*} \phi(a_{1}) &= a_{1} u_{11} + b_{1} u_{12}, \ \text{and} \\ \phi(b_{2}) &= a_{2} u_{21} + b_{2} u_{22}. 
\end{align*} Because $\phi$ is a ring homomorphism, we have \begin{align*} \phi\left((a_{1}b_{2})^{n}\right) &= \sum_{i + j + k + \ell = n} \binom{n}{i, j, k, \ell} \left(a_{1} a_{2} u_{11} u_{21}\right)^{i} \left(a_{1} b_{2} u_{11} u_{22}\right)^{j} \left(a_{2} b_{1} u_{12} u_{21}\right)^{k} \left(b_{1} b_{2} u_{12} u_{22}\right)^{\ell}. \end{align*} \Cref{thm:integrating-over-SU2} tells us that if we integrate the above over $\SU_{2}(\mathbb{C})$, the only terms that remain are those with $i = \ell$. Integrating those terms, we get \begin{align*} \mathcal{R}\left((a_{1}b_{2})^{n}\right) &= \sum_{2i + j + k = n} \binom{n}{i, j, k, i} (a_{1} b_{2})^{i + j} (a_{2} b_{1})^{i + k} (-1)^{i + k} \frac{(i + j)! (i + k)!}{(n + 1)!} \\ &= \frac{1}{n + 1}(a_{1} b_{2} - a_{2} b_{1})^{n} = \frac{\Delta_{1, 2}^{n}}{n + 1}, \end{align*} where the penultimate equality uses \Cref{identity:x-plus-y-multinomial}, proving~\Cref{eq:R-SL-2-1}. For~\Cref{eq:R-SL-2-2}, note that $a_{2} b_{1} = a_{1} b_{2} + \Delta_{2, 1}$. Because $\mathcal{R}$ is $k[\{\Delta\}]$-linear and $\Delta_{1, 2} = -\Delta_{2, 1}$, we get \begin{align*} \mathcal{R}((a_{1} b_{2})^{n} (a_{2} b_{1})^{m}) &= \mathcal{R}\left( (a_{1} b_{2})^{n} (a_{1} b_{2} + \Delta_{2, 1})^{m} \right) \\ &= \sum_{k = 0}^{m} \binom{m}{k} \Delta_{2, 1}^{m - k} \mathcal{R}\left( (a_{1} b_{2})^{n + k} \right) \\ &= \sum_{k = 0}^{m} \binom{m}{k} \Delta_{2, 1}^{m - k} \cdot \frac{\Delta_{1, 2}^{n + k}}{n + k + 1} \\ &= \Delta_{1, 2}^{n} \Delta_{2, 1}^{m} \sum_{k = 0}^{m} \binom{m}{k} \frac{(-1)^{k}}{n + k + 1}. \end{align*} \Cref{identity:2} finishes the proof. \end{proof} \begin{proof}[Proof of \Cref{thm:row-unbalanced-in-kernel}] Consider the element $\sigma = \left( \begin{smallmatrix} 2 & 0 \\ 0 & 2^{-1} \\ \end{smallmatrix} \right) \in \SL_{2}(k)$. We have $\sigma(\mu) = 2^{m - n} \mu$ and thus, the $\SL_{2}(k)$-equivariance of $\mathcal{R}$ implies that $\mathcal{R}(\mu) = 2^{m - n} \mathcal{R}(\mu)$. 
Because $m \neq n$, we get $\mathcal{R}(\mu) = 0$. \end{proof} \begin{proof}[Proof of \Cref{thm:reynolds-operator-SL-2-by-2} (b)] We first prove the statement when $\mu$ is of the form $(a_{1} b_{1})^{m} (a_{1} b_{2})^{n}$ for some $m > 0$ and $n \ge 0$. We have \begin{align*} \phi(\mu) &= (a_{1}^{2} u^{\ast} + a_{1} b_{1} u^{\ast} + a_{1} b_{1} u^{\ast} + b_{1}^{2} u^{\ast})^{m} \cdot (a_{1} a_{2} u^{\ast} + a_{1} b_{2} u^{\ast} + a_{2} b_{1} u^{\ast} + b_{1} b_{2} u^{\ast})^{n}, \end{align*} where each $u^{\ast}$ denotes some monomial in the $u_{ij}$. Because $m > 0$, when we expand the above, each monomial that appears will be unbalanced in the sense that we may write \begin{equation*} \phi(\mu) = \sum_{I} \alpha_{I} \mu_{I} u_{I}, \end{equation*} where $\alpha_{I} \in k$, $\mu_{I} \in k[Y]$ is an unbalanced monomial, and $u_{I} \in k[U]$ is a monomial. Integrating the above yields \begin{equation*} \mathcal{R}(\mu) = \sum_{I} \left(\alpha_{I} \textstyle \int\! u_{I}\right) \mu_{I}. \end{equation*} Now, note that $\mathcal{R}(\mu) \in k[\{\Delta\}] \subset k[\text{balanced monomials}]$, whereas each $\mu_{I}$ above is unbalanced. Thus, the terms above must cancel out to give us $\mathcal{R}(\mu) = 0$. The $k[\{\Delta\}]$-linearity of $\mathcal{R}$ then implies the statement for $\mu$ of the form $(a_{1} b_{1})^{m} \nu$ with $m > 0$ and $\nu \in k[a_{1} b_{2}, a_{2} b_{1}]$. By symmetry, the statement also holds for $\mu$ of the form $(a_{2} b_{2})^{m} \nu$. \Cref{thm:row-unbalanced-in-kernel} takes care of unbalanced monomials not of the above form. \end{proof} \subsection{The Reynolds operator for \texorpdfstring{$\GL_{t}$}{GLt}} Let $t$, $n$, $m$ be positive integers, and $\mathcal{R} \colon k[X_{m \times t}, Y_{t \times n}] \to k[X, Y]^{\GL_{t}(k)}$ the Reynolds operator for the action~\ref{item:standard-dual-action}. 
By \Cref{thm:classical-invariants}, we know the image of $\mathcal{R}$ to lie in $k[XY]$, the subalgebra of $k[X, Y]$ generated by the entries of $XY$. Experimenting with the package \texttt{IntU}~\Cite{PuchalaMiszczak} suggests a formula similar to~\Cref{eq:R-SL-2-1}. \begin{conj} For $t = 2$ and $n \ge 0$, we have \begin{equation*} \mathcal{R}\left((x_{11} y_{11})^{n}\right) = \frac{1}{n + 1}(x_{11} y_{11} + x_{12} y_{21})^{n} = \frac{1}{n + 1}\left([XY]_{1, 1}\right)^{n}. \end{equation*} More generally, for $t \ge 1$ and $n \ge 0$, we have \begin{equation*} \mathcal{R}\left((x_{11} y_{11})^{n}\right) = \binom{n + t - 1}{t - 1}^{-1}\left([XY]_{1, 1}\right)^{n}. \end{equation*} \end{conj} \section{Comparison with positive characteristic} \label{sec:positive-characteristic} The classical groups $\GL$, $\SL$, $\OO$, $\Sp$ are typically \emph{not} linearly reductive in positive characteristic. Thus, there is no guarantee of the existence of splittings that are linear over the fixed subring. In fact, the following theorem tells us that this is essentially never the case. \begin{thm}[{\Cite[Theorem 1.1]{HochsterJeffriesPandeySingh}}] \label{thm:HJPS} Let $k$ be a field of characteristic $p > 0$. Fix positive integers $m$, $n$, and $t$, and let $R \subset S$ denote one of the following inclusions: \begin{enumerate}[label=(\alph*)] \item $k[XY] \subset k[X_{m \times t}, Y_{t \times n}]$; \item $k[\{\Delta\}] \subset k[Y_{t \times n}]$ with $t \le n$, where $\{\Delta\}$ is the set of size $t$ minors of $Y$; \item $k[Y^{\tr} Y] \subset k[Y_{t \times n}]$; \item $k[Y^{\tr} \Omega Y] \subset k[Y_{2t \times n}]$. \end{enumerate} Then the inclusion $R \subset S$ splits $R$-linearly if and only if, in the respective cases, \begin{enumerate}[label=(\alph*)] \item $t = 1$ or $\min\{m, n\} \le t$; \item $t = 1$ or $t = n$; \item $t = 1$; $t = 2$ and $p$ is odd; $p = 2$ and $n \le (t + 1)/2$; or $p$ is odd and $n \le (t + 2)/2$; \item $n \le t + 1$. 
\end{enumerate} \end{thm} \begin{rem} The above theorem does not reference any group (action). However, compare with \Cref{thm:classical-invariants} to see the connection for infinite fields of positive characteristic. \end{rem} \begin{rem} \label{rem:primes-in-denominators} We describe a curious implication of \Cref{thm:HJPS}. We revisit formula~\Cref{eq:R-SL-2-1}: \begin{equation*} \mathcal{R}\left((a_{1} b_{2})^{n}\right) = \frac{1}{n + 1}\Delta_{1,2}^{n}. \end{equation*} Note the denominator `$n + 1$'. This means that each prime number shows up as a factor of the denominator for some monomial. Said differently, $\mathcal{R}$ does not restrict to a map $\mathbb{Z}_{(p)}[Y] \to \mathbb{Z}_{(p)}[\{\Delta\}]$ for any prime $p > 0$, where $\mathbb{Z}_{(p)}$ is the subring of $\mathbb{Q}$ defined as \begin{equation*} \mathbb{Z}_{(p)} \coloneqq \left\{\frac{a}{b} \in \mathbb{Q} : a, b \in \mathbb{Z} \ \text{with} \ p \nmid b\right\}. \end{equation*} \Cref{thm:HJPS} tells us that this must essentially always happen for any of the Reynolds operators described in the paper. More generally, the above must happen for essentially any splitting that is linear over the subring. Indeed, pick a situation in \Cref{thm:HJPS} where the inclusion does not split in positive characteristic. For example, $\mathbb{F}_{p}[\{\Delta\}] \into \mathbb{F}_{p}[Y_{t \times n}]$ with $1 < t < n$. As discussed earlier, the inclusion $\mathbb{Q}[\{\Delta\}] \into \mathbb{Q}[Y_{t \times n}]$ \emph{does} split. Moreover, if we are only interested in splittings that are linear over the subring, then there is typically more than one. Let $\pi \colon \mathbb{Q}[Y_{t \times n}] \to \mathbb{Q}[\{\Delta\}]$ be any such $\mathbb{Q}[\{\Delta\}]$-linear splitting.
The following must hold: given any prime $p > 0$, there exists some monomial $\mu = \mu(p) \in \mathbb{Q}[Y]$ such that, when we express $\pi(\mu)$ as a polynomial in the $\{\Delta\}$ with rational coefficients, one of the coefficients has denominator divisible by $p$. Indeed, if this were not the case for some prime $p$, then $\pi$ would restrict to a splitting $\mathbb{Z}_{(p)}[Y] \to \mathbb{Z}_{(p)}[\{\Delta\}]$, and we could go mod $p$ to obtain an $\mathbb{F}_{p}[\{\Delta\}]$-linear splitting, contradicting \Cref{thm:HJPS}. \end{rem} \appendix \section{Proof of the density theorem} \label{sec:proof-density} \begin{defn} For $X$ a topological space and $Y$ a subspace of $X$, a \deff{retraction} of $X$ onto $Y$ is a continuous function $r \colon X \to Y$ satisfying $r(y) = y$ for all $y \in Y \subset X$. \end{defn} \begin{lem} \label{lem:vanishing-on-product-infinite-sets} Let $k$ be a field, and $S \subset k$ be an infinite subset. If $f \in k[x_{1}, \ldots, x_{n}]$ is a polynomial vanishing on the product $S^{n} \subset \mathbb{A}_{k}^{n}$, then $f$ is the zero polynomial. Equivalently, $S^{n}$ is Zariski-dense in $\mathbb{A}_{k}^{n}$. \end{lem} \begin{proof} We prove the statement by induction on $n$. It is clear for $n = 1$ because a nonzero polynomial in one variable has only finitely many roots, whereas $S$ is infinite. Assume $n > 1$ and suppose $f$ is nonzero. Write $f = f_{0} + f_{1} x_{n} + \cdots + f_{d} x_{n}^{d}$ with $d \ge 0$, $f_{d} \neq 0$, and $f_{i} \in k[x_{1}, \ldots, x_{n - 1}]$. By induction, there exists $\mathbf{s} = (s_{1}, \ldots, s_{n - 1}) \in S^{n - 1}$ with $f_{d}(\mathbf{s}) \neq 0$. Then, $f(\mathbf{s}, x_{n})$ is a nonzero polynomial in the single variable $x_{n}$, so the case $n = 1$ furnishes $s_{n} \in S$ with $f(\mathbf{s}, s_{n}) \neq 0$, finishing the proof. \end{proof} \begin{lem} \label{lem:dense-intersection-dense} Let $X$ be a topological space, $Z \subset X$ a dense subspace, and $Y \subset X$ a subspace such that there exists a retraction $r \colon X \onto Y$ with $r(Z) \subset Z$. Then, $Z \cap Y$ is dense in $Y$. \end{lem} \begin{proof} Let $y \in Y$ be arbitrary.
As $Z$ is dense in $X$, there exists a net $\langle z_{\lambda} \rangle_{\lambda \in \Lambda}$ in $Z$ with $z_{\lambda} \to y$. In turn, by the continuity of $r$, the net $\langle r(z_{\lambda}) \rangle_{\lambda}$ lies in $Z \cap Y$ and converges to $r(y) = y$. \end{proof} For the next few proofs, we define the function \begin{equation} \label{eq:retraction-GL-SL} \begin{aligned} r \colon \GL_{n}(k) & \to \SL_{n}(k) \\ U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ u_{21} & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{n1} & u_{n2} & \cdots & u_{nn} \end{bmatrix} &\mapsto \begin{bmatrix} \frac{u_{11}}{\det U} & \frac{u_{12}}{\det U} & \cdots & \frac{u_{1n}}{\det U} \\ u_{21} & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{n1} & u_{n2} & \cdots & u_{nn} \end{bmatrix}. \end{aligned} \end{equation} That is, $r$ scales the first row of $U$ by $\frac{1}{\det U}$. \begin{lem} For any field $k$, the function $r$ defined by~\Cref{eq:retraction-GL-SL} is a retraction of $\GL_{n}(k)$ onto $\SL_{n}(k)$. \end{lem} \begin{proof} The multilinearity of $\det$ implies that $\det(r(U)) = 1$ for all $U \in \GL_{n}(k)$, that is, $r$ indeed takes values in $\SL_{n}(k)$. The function $r$ is continuous in the Zariski topology because it is given by rational functions. It is clear that the restriction of $r$ to $\SL_{n}(k)$ is the identity. \end{proof} \begin{prop} \label{prop:U-GL-dense} For all $n \ge 1$, the subgroup $\UU_{n}(\mathbb{C})$ is Zariski-dense in $\GL_{n}(\mathbb{C})$. \end{prop} \begin{proof} Let $C$ be the Zariski closure of $\UU_{n}(\mathbb{C})$ in $\GL_{n}(\mathbb{C})$. Write $C = V(\mathfrak{a}) \cap \GL_{n}(\mathbb{C})$ for $\mathfrak{a}$ an ideal. Let $f \in \mathfrak{a}$. Note that if $z_{1}, \ldots, z_{n}$ are elements of the unit circle $\mathbb{S}^{1}$, then $\diag(z_{1}, \ldots, z_{n})$ is an element of $\UU_{n}$. Thus, $f$ vanishes on all diagonal matrices with entries coming from $\mathbb{S}^{1}$.
By \Cref{lem:vanishing-on-product-infinite-sets}, we see that $f$ must vanish on \emph{all} diagonal matrices. Thus, $C$ contains all invertible diagonal matrices. Because $\GL_{n}(\mathbb{C})$ is a topological group in the Zariski topology, and $\UU_{n}(\mathbb{C})$ is a subgroup, it follows that $C$ is a subgroup. As every invertible matrix can be decomposed as $U D V$ with $U, V \in \UU_{n}(\mathbb{C})$ and $D$ invertible diagonal, we are done. \end{proof} \begin{prop} \label{prop:SU-SL-dense} For all $n \ge 1$, the subgroup $\SU_{n}(\mathbb{C})$ is Zariski-dense in $\SL_{n}(\mathbb{C})$. \end{prop} \begin{proof} We use \Cref{lem:dense-intersection-dense} with $X = \GL_{n}(\mathbb{C})$, $Z = \UU_{n}(\mathbb{C})$, $Y = \SL_{n}(\mathbb{C})$, and $r$ given by~\Cref{eq:retraction-GL-SL}. The density of $Z$ then follows from \Cref{prop:U-GL-dense}. All that is left to be shown is that $r(\UU_{n}(\mathbb{C})) \subset \UU_{n}(\mathbb{C})$. To this end, note that a matrix is unitary if and only if its rows form an orthonormal basis. If $U \in \UU_{n}(\mathbb{C})$, then $\det(U) \in \mathbb{S}^{1}$ and thus, the rows of $r(U)$ continue to be orthonormal. \end{proof} \begin{thm} \label{thm:G-Q-dense-in-G-k} Let $k$ be a field of characteristic zero. For each of the following inclusions, the subgroup is Zariski-dense in the larger group. \begin{enumerate}[label=(\alph*)] \item $\GL_{n}(\mathbb{Q}) \subset \GL_{n}(k)$, \item $\SL_{n}(\mathbb{Q}) \subset \SL_{n}(k)$, \item $\OO_{n}(\mathbb{Q}) \subset \OO_{n}(k)$, and \item $\Sp_{2n}(\mathbb{Q}) \subset \Sp_{2n}(k)$. \end{enumerate} \end{thm} \begin{proof} General linear group: By \Cref{lem:vanishing-on-product-infinite-sets}, the subspace $\mathbb{Q}^{n^{2}}$ is dense in $\mathbb{A}_{k}^{n^{2}}$. Intersecting with the open set $\GL_{n}(k)$ gives us (a). 
Special linear group: (b) then follows by use of \Cref{lem:dense-intersection-dense} with $X = \GL_{n}(k)$, $Y = \SL_{n}(k)$, $Z = \GL_{n}(\mathbb{Q})$, and $r$ given by~\Cref{eq:retraction-GL-SL}. Orthogonal group: We note that the orthogonal group $\OO_{n}(k)$ is generated by the set of reflections \begin{equation*} R(k) \coloneqq \left\{I - \frac{2 u u^{\tr}}{u^{\tr} u} : \text{$u \in k^{n}$ with $u^{\tr} u \neq 0$}\right\}; \end{equation*} in fact, the Cartan--Dieudonn\'e theorem states that every orthogonal matrix is a product of at most $n$ such reflections; see~\Cite{DieudonneGroupesClassiquesOG, ScherkOrthogonal}. Because the closure of $\OO_{n}(\mathbb{Q})$ must be a subgroup of $\OO_{n}(k)$, it suffices to show that $R(\mathbb{Q})$ is dense in $R(k)$. To this end, note that $I(k) \coloneqq \{u \in k^{n} \colon u^{\tr} u \neq 0\}$ is an open subset of $\mathbb{A}_{k}^{n}$; thus, intersecting with the dense set $\mathbb{Q}^{n}$, we get that $I(\mathbb{Q})$ is dense in $I(k)$. Now, $R(k)$ is the image of $I(k)$ under the continuous map $u \mapsto I - \frac{2 u u^{\tr}}{u^{\tr} u}$ and hence $R(\mathbb{Q})$ is dense in $R(k)$. Symplectic group: (d) follows similarly by using the fact that the symplectic group $\Sp_{2n}(k)$ is generated by \begin{equation*} \begin{bmatrix} A & O \\ O & (A^{\tr})^{-1} \\ \end{bmatrix}, \qquad \begin{bmatrix} I & B \\ O & I \\ \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} O & I \\ -I & O \\ \end{bmatrix}, \end{equation*} where $A$ varies over $\SL_{n}(k)$, and $B$ over all symmetric $n \times n$ matrices. This description is originally due to \Citeauthor{DieudonneGenerators}~\Cite{DieudonneGenerators} and can also be found in~\Cite[\S2.2]{OMearaSymplectic}. \end{proof} \section{Multinomial coefficient and integration identities} \label{sec:identities} \begin{identity} \label{identity:beta-integral} For integers $a, b \ge 0$, we have \begin{equation*} \int_{0}^{1} t^{a} (1 - t)^{b} \,{\mathrm{d}}t = \frac{a!
b!}{(a + b + 1)!}. \end{equation*} \end{identity} \begin{proof} The formula is readily verified if $b = 0$. For $a \ge 0$ and $b > 0$, integration by parts yields \begin{equation*} \int_{0}^{1} t^{a} (1 - t)^{b} \,{\mathrm{d}}t = \frac{b}{a + 1} \int_{0}^{1} t^{a + 1} (1 - t)^{b - 1} \,{\mathrm{d}}t. \end{equation*} Repeated application of the above gives the desired formula. \end{proof} \begin{identity} \label{identity:x-plus-y-multinomial} Let $n \ge 0$ be an integer. One has the identity \begin{equation*} \frac{(x + y)^{n}}{n + 1} = \sum_{2i + j + k = n} \binom{n}{i, i, j, k} \frac{(i + j)! (i + k)!}{(n + 1)!} x^{i + j} y^{i + k}, \end{equation*} where, explicitly, the sum is taken over all triples $(i, j, k) \in \mathbb{N}^{3}$ satisfying $2i + j + k = n$. \end{identity} \begin{proof} Note that \begin{equation*} \binom{n}{i, i, j, k} \frac{(i + j)! (i + k)!}{(n + 1)!} = \frac{n!}{i! i! j! k!} \frac{(i + j)! (i + k)!}{(n + 1)!} = \frac{1}{n + 1} \binom{i + j}{i} \binom{i + k}{k}. \end{equation*} Thus, the identity of interest is equivalent to \begin{equation*} (x + y)^{n} = \sum_{2i + j + k = n} \binom{i + j}{i} \binom{i + k}{k} x^{i + j} y^{i + k}. \end{equation*} Because both sides of the equation are homogeneous of degree $n$, it suffices to verify that \begin{align} \label{eq:001} (x + 1)^{n} = \sum_{2i + j + k = n} \binom{i + j}{i} \binom{i + k}{k} x^{i + j}. \tag{$\star$} \end{align} To prove the above identity, we need to show that the coefficient of $x^{a}$ is the same on both sides for each $0 \le a \le n$. The coefficient of $x^{a}$ on the right-hand-side of~\Cref{eq:001} is given by \begin{align*} \sum_{\substack{2i + j + k = n \\ i + j = a}} \binom{i + j}{i} \binom{i + k}{k} = \sum_{i + k = n - a} \binom{a}{i} \binom{i + k}{i} = \sum_{i} \binom{a}{i} \binom{n - a}{i}. \end{align*} Thus, it suffices to prove that \begin{align} \label{eq:002} \binom{n}{a} = \sum_{i} \binom{a}{i} \binom{n - a}{i}. 
\tag{$\dagger$} \end{align} To this end, note that \begin{equation*} (1 + X)^{a} (1 + Y)^{n - a} = \sum_{i, j} \binom{a}{i} \binom{n - a}{j} X^{i} Y^{j}. \end{equation*} Substituting $Y = 1/X$ gives \begin{equation*} (1 + X)^{a} \left(1 + \frac{1}{X}\right)^{n - a} = \sum_{i, j} \binom{a}{i} \binom{n - a}{j} X^{i - j}. \end{equation*} Thus, \begin{equation*} \frac{1}{X^{n - a}} (1 + X)^{n} = \sum_{i, j} \binom{a}{i} \binom{n - a}{j} X^{i - j}. \end{equation*} Comparing the coefficient of $X^{0}$ on both sides gives us~\Cref{eq:002}. \end{proof} \begin{identity} \label{identity:2} For integers $m, n \ge 0$, one has the identity \begin{equation*} \sum_{k = 0}^{n} \binom{n}{k} \frac{(-1)^{k}}{m + k + 1} = \frac{m! n!}{(m + n + 1)!}. \end{equation*} \end{identity} \begin{proof} We note \begin{align*} \sum_{k = 0}^{n} \binom{n}{k} \frac{(-1)^{k}}{m + k + 1} &= \sum_{k = 0}^{n} \binom{n}{k} (-1)^{k} \int_{0}^{1} t^{m + k} \,{\mathrm{d}}t \\ &= \int_{0}^{1} t^{m} \cdot \sum_{k = 0}^{n} \binom{n}{k} (-t)^{k} \,{\mathrm{d}}t \\ &= \int_{0}^{1} t^{m} (1 - t)^{n} \,{\mathrm{d}}t \\ &= \frac{m! n!}{(m + n + 1)!}, \end{align*} where the last step uses \Cref{identity:beta-integral}. \end{proof} \begin{identity} \label{identity:integrate-cos-sin} For integers $a, b \ge 0$, we have \begin{equation*} \int_{0}^{\pi/2} \cos^{2a}(\theta) \sin^{2b}(\theta) \sin(2\theta) \,{\mathrm{d}}\theta = \frac{a!b!}{(a + b + 1)!}. \end{equation*} \end{identity} \begin{proof} The integrand can be rewritten as \begin{align*} \cos^{2a}(\theta) \sin^{2b}(\theta) \sin(2\theta) &= 2 \cos^{2a + 1}(\theta) \sin^{2b + 1}(\theta) \\ &= 2 (\cos^{2}(\theta))^{a} (\sin(\theta))^{2b + 1} \cos(\theta) \\ &= 2 (1 - \sin^{2}(\theta))^{a} (\sin(\theta))^{2b + 1} \cos(\theta). 
\end{align*} The substitution $u = \sin(\theta)$ gives us \begin{align*} \int_{0}^{\pi/2} \cos^{2a}(\theta) \sin^{2b}(\theta) \sin(2\theta) \,{\mathrm{d}}\theta &= \int_{0}^{1} 2 (1 - u^{2})^{a} u^{2b + 1} \,{\mathrm{d}}u \\ &= \int_{0}^{1} (1 - u^{2})^{a} (u^{2})^{b} (2u \,{\mathrm{d}}u) \\ &= \int_{0}^{1} (1 - t)^{a} t^{b} \,{\mathrm{d}}t. \end{align*} The desired identity now follows from \Cref{identity:beta-integral}. \end{proof} \begin{identity} \label{thm:integrating-over-SU2-appendix} For nonnegative integers $a, b, c, d$, we have \begin{equation*} \int_{\SU_{2}(\mathbb{C})} u_{11}^{a} u_{12}^{b} u_{21}^{c} u_{22}^{d} = \begin{cases} (-1)^{b} \dfrac{a! b!}{(a + b + 1)!} & \text{if $a = d$ and $b = c$}, \\[3pt] 0 & \text{else}. \end{cases} \end{equation*} \end{identity} \begin{proof} We use the formula for the Haar measure on $\SU_{2}(\mathbb{C})$ from~\Cite[Proposition 7.4.1]{FarautAnalysis}. Given a smooth function $f \colon \SU_{2}(\mathbb{C}) \to \mathbb{C}$, we have \begin{align*} \int_{\SU_{2}(\mathbb{C})} f &= \frac{1}{2 \pi^{2}} \int_{0}^{\pi/2} \int_{0}^{\pi} \int_{-\pi}^{\pi} f\left( \begin{bmatrix} e^{\iota \psi} & \\ & e^{-\iota \psi} \\ \end{bmatrix} \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \\ \end{bmatrix} \begin{bmatrix} e^{\iota \varphi} & \\ & e^{-\iota \varphi} \\ \end{bmatrix} \right) \sin(2 \theta) \,{\mathrm{d}}\psi \,{\mathrm{d}}\varphi \,{\mathrm{d}}\theta \\[8pt] &= \frac{1}{2 \pi^{2}} \int_{0}^{\pi/2} \int_{0}^{\pi} \int_{-\pi}^{\pi} f\left( \begin{bmatrix} e^{\iota (\psi + \varphi)} \cos(\theta) & e^{\iota (\psi - \varphi)} \sin(\theta) \\ -e^{\iota (-\psi + \varphi)}\sin(\theta) & e^{\iota (-\psi - \varphi)} \cos(\theta) \\ \end{bmatrix} \right) \sin(2 \theta) \,{\mathrm{d}}\psi \,{\mathrm{d}}\varphi \,{\mathrm{d}}\theta. 
\end{align*} Rewriting in terms of the above coordinates, we get \begin{equation*} u_{11}^{a} u_{12}^{b} u_{21}^{c} u_{22}^{d} = (-1)^{c} \exp(\iota \psi(a + b - c - d)) \exp(\iota \varphi(a - b + c - d)) \cos^{a + d}(\theta) \sin^{b + c}(\theta). \end{equation*} We integrate using Fubini's theorem to obtain \begin{align*} & 2\pi^{2} \int_{\SU_{2}(\mathbb{C})} u_{11}^{a} u_{12}^{b} u_{21}^{c} u_{22}^{d} \\ &= (-1)^{c} \cdot \int_{-\pi}^{\pi} \exp(\iota \psi(a + b - c - d)) \,{\mathrm{d}}\psi \cdot \int_{0}^{\pi} \exp(\iota \varphi(a - b + c - d)) \,{\mathrm{d}}\varphi \cdot \int_{0}^{\pi/2} \cos^{a + d}(\theta) \sin^{b + c}(\theta) \sin(2\theta) \,{\mathrm{d}}\theta. \end{align*} For the first integral to be nonzero, we must have $a + b - c - d = 0$. This implies that $a - b + c - d$ is even and hence must be zero if the second integral is to be nonzero. Solving these two equations simultaneously gives us $a = d$ and $b = c$. Assume now that these two equations hold. We then have \begin{align*} & 2\pi^{2} \int_{\SU_{2}(\mathbb{C})} u_{11}^{a} u_{12}^{b} u_{21}^{c} u_{22}^{d} \\ &= (-1)^{b} \cdot \int_{-\pi}^{\pi} 1 \,{\mathrm{d}}\psi \cdot \int_{0}^{\pi} 1 \,{\mathrm{d}}\varphi \cdot \int_{0}^{\pi/2} \cos^{2a}(\theta) \sin^{2b}(\theta) \sin(2\theta) \,{\mathrm{d}}\theta \\ &= (-1)^{b} (2 \pi^{2}) \cdot \frac{a!b!}{(a + b + 1)!}, \end{align*} where the last equality follows from \Cref{identity:integrate-cos-sin}. \end{proof} \printbibliography \end{document}
2412.18949v1
http://arxiv.org/abs/2412.18949v1
Metric Space Recognition by Gromov-Hausdorff Distances to Simplexes
\documentclass[leqno]{article} \usepackage{geometry} \usepackage{graphicx} \usepackage[cp1251]{inputenc} \usepackage[english]{babel} \usepackage{mathtools} \usepackage{amsfonts,amssymb,mathrsfs,amscd,amsmath,amsthm} \usepackage{verbatim} \usepackage{url} \usepackage{stmaryrd} \usepackage{cmap} \def\ig#1#2#3#4{\begin{figure}[!ht]\begin{center}\includegraphics[height=#2\textheight]{#1.eps}\caption{#4}\label{#3}\end{center}\end{figure}} \def\labelenumi{\rm(\theenumi)} \def\thtext#1{ \catcode`@=11 \gdef\@thmcountersep{. #1} \catcode`@=12 } \def\threst{ \catcode`@=11 \gdef\@thmcountersep{.} \catcode`@=12 } \theoremstyle{plain} \newtheorem*{mainthm}{Main Theorem} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{ass}[thm]{Assertion} \newtheorem{lem}[thm]{Lemma} \theoremstyle{definition} \newtheorem{dfn}[thm]{Definition} \newtheorem{rk}[thm]{Remark} \newtheorem{examp}[thm]{Example} \newtheorem{sol}[thm]{Solution} \pagestyle{myheadings} \catcode`@=11 \def\.{.\spacefactor\@m} \catcode`@=12 \def\C{{\mathbb C}} \def\N{{\mathbb N}} \def\Q{{\mathbb Q}} \def\R{\mathbb R} \def\X{{\mathbb X}} \def\Y{{\mathbb Y}} \def\Z{{\mathbb Z}} \def\a{\alpha} \def\b{\beta} \def\e{\varepsilon} \def\dl{\delta} \def\D{\Delta} \def\g{\gamma} \def\G{\Gamma} \def\om{\omega} \def\Om{\Omega} \def\k{\varkappa} \def\l{\lambda} \def\r{\rho} \def\s{\sigma} \def\S{\Sigma} \def\t{\tau} \def\v{\varphi} \def\0{\emptyset} \def\:{\colon} \def\<{\langle} \def\>{\rangle} \def\[{\llbracket} \def\]{\rrbracket} \def\ang{\angle} \def\c{\circ} \def\d{\partial} \def\eqae{\almost{=}} \def\equiv{\approx} \def\geae{\almost{\ge}} \def\gtae{\almost{>}} \def\imply{\Rightarrow} \def\inae{\almost{\in}} \def\leae{\almost{\le}} \def\ltae{\almost{<}} \def\lra{\Leftrightarrow} \def\n{\nabla} \def\osm{\overline{\overline{o}}} \def\refitem#1{(\ref{#1})} \def\rra{\rightrightarrows} \def\rom#1{\emph{#1}} \def\({\rom(} \def\){\rom)} \def\sm{\setminus} \def\ss{\subset} 
\def\ssae{\almost{\ss}} \def\sp{\supset} \def\spae{\almost{\sp}} \def\toae{\almost{\to}} \def\topr{\stackrel{\mbox{\scriptsize p}}{\to}} \def\x{\times} \def\ba{{\bar a}} \def\bA{{\bar A}} \def\bcAD{\overline{\mathstrut\cAD}} \def\bb{{\bar b}} \def\bB{{\bar B}} \def\bbo{\bar{\bar{\operatorname{o}}}} \def\bc{{\bar c}} \def\bcK{{\bar\cK}} \def\bd{{\bar d}} \def\be{{\bar e}} \def\bF{{\bar F}} \def\bg{{\bar g}} \def\bG{{\bar G}} \def\bbG{{\mathbb G}} \def\bcG{\bar{\cal G}} \def\bI{{\bar I}} \def\bJ{{\bar J}} \def\bla{{\bar\l}} \def\bM{{\bar M}} \def\bOm{{\bar\Om}} \def\bpi{{\bar\pi}} \def\bq{{\bar q}} \def\bQ{{\bar Q}} \def\bbQ{{\mathbb Q}} \def\br{{\bar r}} \def\bR{{\bar R}} \def\bsg{{\bar\sigma}} \def\bS{{\bar S}} \def\bcS{\bar{\cal S}} \def\bu{{\bar u}} \def\bv{\bar v} \def\bw{\bar w} \def\bx{{\bar x}} \def\bX{{\bar X}} \def\by{{\bar y}} \def\bY{{\bar Y}} \def\bz{{\bar z}} \def\bZ{{\bar Z}} \def\arcsh{\operatorname{arcsh}} \def\CARD{\operatorname{CARD}} \def\capa{\operatorname{cap}} \def\CC{\operatorname{CC}} \def\Cl{\operatorname{Cl}} \def\const{\operatorname{const}} \def\conv{\operatorname{conv}} \def\cov{\operatorname{cov}} \def\crit{{\operatorname{crit}}} \def\diam{\operatorname{diam}} \def\dil{\operatorname{dil}} \def\dis{\operatorname{dis}} \def\dist{\operatorname{dist}} \def\div{\operatorname{div}} \def\dLS{\operatorname{dLS}} \def\dom{\operatorname{dom}} \def\ext{\operatorname{ext}} \def\Ext{\operatorname{Ext}} \def\GL{\operatorname{GL}} \def\gr{\operatorname{gr}} \def\Gr{\operatorname{Gr}} \def\id{\operatorname{id}} \def\im{\operatorname{im}} \def\Im{\operatorname{Im}} \def\Iso{\operatorname{Iso}} \def\Int{\operatorname{Int}} \def\loc{{\operatorname{loc}}} \def\ln{\operatorname{ln}} \def\mst{\operatorname{mst}} \def\MST{\operatorname{MST}} \def\np{\operatorname{np}} \def\O{\operatorname{O}} \def\opt{{\operatorname{opt}}} \def\Re{\operatorname{Re}} \def\RP{\operatorname{\R P}} \def\sign{\operatorname{sign}} \def\SL{\operatorname{SL}} 
\def\smt{\operatorname{smt}} \def\SMT{\operatorname{SMT}} \def\SO{\operatorname{SO}} \def\SU{\operatorname{SU}} \def\supp{\operatorname{supp}} \def\tanh{\operatorname{th}} \def\U{\operatorname{U}} \def\vol{\operatorname{vol}} \def\XST{\operatorname{XST}} \def\xst{\operatorname{xst}} \def\cA{{\cal A}} \def\cAD{\mathcal{AD}} \def\cB{{\cal B}} \def\cC{{\cal C}} \def\cD{{\cal D}} \def\cF{{\cal F}} \def\cG{{\cal G}} \def\cH{{\cal H}} \def\cI{{\cal I}} \def\cJ{{\cal J}} \def\cK{{\cal K}} \def\cL{{\cal L}} \def\cM{{\cal M}} \def\cN{{\cal N}} \def\cP{{\cal P}} \def\cR{{\cal R}} \def\cS{{\cal S}} \def\cT{{\cal T}} \def\cU{{\cal U}} \def\cV{{\cal V}} \def\cX{{\cal X}} \def\cY{{\cal Y}} \def\tf{{\tilde f}} \def\tX{{\tilde X}} \def\tR{{\tilde R}} \def\hd{{\hat d}} \def\hF{{\hat F}} \def\hR{{\hat R}} \def\hx{{\hat x}} \def\hX{{\hat X}} \def\hy{{\hat y}} \def\hY{{\hat Y}} \def\gM{{\mathfrak M}} \def\gR{{\mathfrak R}} \usepackage{epigraph} \begin{document} \title{Metric Space Recognition by Gromov--Hausdorff Distances to Simplexes} \author{A.\,O.~Ivanov, E.\,S.~Lychagina, and A.\,A.~Tuzhilin} \date{} \maketitle \begin{abstract} In the present paper the distinguishability of bounded metric spaces by the set of Gromov--Hausdorff distances to so-called simplexes (metric spaces with a unique non-zero distance) is investigated. It is easy to construct an example of non-isometric metric spaces such that the Gromov--Hausdorff distance between them vanishes. Such spaces are non-distinguishable, of course. But we also give examples of non-distinguishable metric spaces for which the Gromov--Hausdorff distance between them is non-zero. Moreover, we prove several necessary and sufficient conditions for metric spaces to be non-distinguishable.
\textbf{Keywords:} metric space, ultrametric space, minimal spanning tree, graph chromatic number, graph clique covering number, Borsuk's number, Hausdorff distance, Gromov--Hausdorff distance, one-distance metric space \end{abstract} \setlength{\epigraphrule}{0pt} \section{Introduction} \markright{\thesection.~Introduction} Modern computer graphics, image comparison and recognition systems, which are actively developing at present, often use so-called ``spaces of spaces''. In addition to these applications, such spaces are of theoretical significance, and therefore they continue to attract the attention of specialists in pure and applied mathematics. One of the possible approaches to the study of such spaces is based on the construction of a distance function between spaces as a ``measure of their dissimilarity''. In~1914, F.~Hausdorff~\cite{Hausdorff} introduced a non-negative symmetric function on pairs of non-empty subsets of a metric space $X$, equal to the greatest lower bound of the numbers $r$ such that each of the two sets is contained in the $r$-neighborhood of the other. This function turned out to be a metric on the space of closed bounded subsets of $X$. Later, in~1975, D.~Edwards~\cite{Edwards} and, independently, in~1981, M.~Gromov~\cite{Gromov} generalized the Hausdorff construction to the family of all metric compacta, using their isometric embeddings into all possible ambient spaces (see the definition below). The resulting function is called the Gromov--Hausdorff distance, and the corresponding metric space of metric compacta, considered up to isometry, is called the Gromov--Hausdorff space and is denoted by $\cM$. The geometry of this space turns out to be rather bizarre and is actively studied. It is well known that $\cM$ is a complete, separable, geodesic space, and also that $\cM$ is not proper~\cite{BurBurIva,IvaNikTuz}. The structure of the Gromov--Hausdorff space is the subject of the present paper.
Pursuing the idea of ``introducing coordinates in the space $\cM$'', a family of metric spaces with one non-zero distance, which we will call simplexes, was proposed for the role of coordinate axes. More precisely, each axis is a set of simplexes of fixed cardinality (but of different diameters), and a metric compactum is associated with the set of distance functions from it to the points of the axes. The problem of calculating the Gromov--Hausdorff distance between arbitrary metric spaces is quite nontrivial; this choice of coordinate axes is therefore motivated by the relative simplicity of such distance functions, a detailed study of which is given in~\cite{GrigIvaTuzSimpDist}. In the paper~\cite{IvaTuzDistSympl} an example was given of two finite metric spaces that do not differ in their distances to finite simplexes (see also below). From the results of~\cite{GrigIvaTuzSimpDist} it immediately follows that the distances from these spaces to all possible simplexes are the same. The question arises: which spaces can be distinguished from each other in this way? We will give a number of necessary and sufficient conditions for the indistinguishability of spaces. We will show that the cardinalities of finite indistinguishable spaces must coincide. We will also give an example of an indistinguishable pair consisting of a finite space and a space of continuum cardinality, as well as a series of examples of indistinguishable spaces among which there are pairs of all possible cardinalities not less than the continuum. \section{Preliminaries}\label{sec:GH} \markright{\thesection.~Preliminaries} Let us recall the necessary concepts and results concerning the Hausdorff and Gromov--Hausdorff distances. More detailed information can be found in~\cite{BurBurIva}. Let $X$ be an arbitrary non-empty set. Denote by $\#X$ the cardinality of $X$.
If $X$ is a metric space and $x$ and $y$ are its points, then by $|xy|$ we denote the distance between these points, and if $A$ and $B$ are non-empty subsets of $X$, then we set $|AB|=|BA|=\inf\bigl\{|ab|:a\in A,\,b\in B\bigr\}$. If $A=\{a\}$, then instead of $\bigl|\{a\}B\bigr|=\bigl|B\{a\}\bigr|$ we write $|aB|=|Ba|$. For a metric space $X$, a point $x\in X$ and a real number $r>0$, by $U_r(x)$ we denote the open ball with center at $x$ and radius $r$, and for a non-empty subset $A\ss X$ its $r$-neighborhood is defined as follows: $$ U_r(A)=\big\{x\in X:|Ax|<r\big\}=\cup_{a\in A}U_r(a). $$ Let $X$ be a set. By $\cP_0(X)$ we denote the family of all its non-empty subsets. Further, let $X$ be a metric space, and $A,B\in\cP_0(X)$. The value $$ d_H(A,B)=\inf\bigl\{r\in[0,\infty]:A\ss U_r(B)\quad\text{and}\quad U_r(A)\sp B\bigr\} $$ is called the \emph{Hausdorff distance\/} between $A$ and $B$. It is well-known that $d_H$ is a metric on the set $\cH(X)$ of all non-empty closed bounded subsets of the metric space $X$. Further, let $X$ and $Y$ be metric spaces. A triple $(X',Y',Z)$ consisting of a metric space $Z$ and its two subsets $X'$ and $Y'$ that are isometric to $X$ and $Y$, respectively, is called a \emph{realization of the pair $(X,Y)$}. The greatest lower bound $d_{GH}(X,Y)$ of the reals $r$ such that there exists a realization $(X',Y',Z)$ of the pair $(X,Y)$ satisfying the inequality $d_H(X',Y')\le r$ is called the \emph{Gromov--Hausdorff distance\/} between the metric spaces $X$ and $Y$. It is well-known that the function $d_{GH}$ is a generalized pseudometric on the proper class of all metric spaces considered up to an isometry. This means that $d_{GH}$ can take infinite values and can also vanish on a pair of non-isometric metric spaces. However, on the family of compact metric spaces, considered up to isometry, $d_{GH}$ is a metric.
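For finite non-empty subsets the infimum in the definition of the Hausdorff distance is attained, and $d_H$ reduces to a max--min computation. The following Python sketch is purely illustrative (the function name and the example on the real line are ours, not part of the paper):

```python
# Hausdorff distance d_H(A, B) between finite non-empty subsets A, B of a
# metric space with distance function dist.  For finite sets the infimum in
# the definition is attained, and
#     d_H(A, B) = max( max_{a in A} |aB|, max_{b in B} |Ab| ).

def hausdorff_distance(A, B, dist):
    sup_a = max(min(dist(a, b) for b in B) for a in A)  # how far A sticks out of B
    sup_b = max(min(dist(a, b) for a in A) for b in B)  # how far B sticks out of A
    return max(sup_a, sup_b)

# Example: subsets of the real line with the usual metric.
A = [0.0, 1.0]
B = [0.0, 3.0]
print(hausdorff_distance(A, B, lambda x, y: abs(x - y)))  # prints 2.0
```

Here $d_H(A,B)=2$ because the point $3\in B$ lies at distance $2$ from the nearest point of $A$.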
For specific calculations of the Gromov--Hausdorff distance, the concept of distortion of generalized mappings --- binary relations --- turns out to be useful. A multivalued surjective mapping $R$ from a set $X$ to a set $Y$ is called a \emph{correspondence}. The set of all correspondences between $X$ and $Y$ is denoted by $\cR(X,Y)$. Thus, a correspondence is a binary relation between $X$ and $Y$ for which the natural projections onto $X$ and $Y$ are surjective. For metric spaces $X$ and $Y$ and a binary relation $\s\in\cP_0(X\x Y)$ the value $$ \dis\s:=\sup\Bigl\{\bigl||xx'|-|yy'|\bigr|:(x,y),\,(x',y')\in\s\Bigr\} $$ is called the \emph{distortion\/} of the binary relation $\s$. The next result is well-known, see, for example,~\cite{BurBurIva}. \begin{thm}\label{thm:GH-metri-and-relations} For any non-empty metric spaces $X$ and $Y$ the following equality holds\/\rom: $$ d_{GH}(X,Y)=\frac12\inf\bigl\{\dis R:R\in\cR(X,Y)\bigr\}. $$ \end{thm} For a cardinal number $n$, let $\D_n$ denote the metric space of cardinality $n$ in which all nonzero distances are equal to $1$. Note that $\D_1$ is a one-point metric space. For a metric space $X$ and a positive real number $\l$, let $\l X$ denote the metric space obtained from $X$ by multiplying all distances by $\l$. If $X$ is bounded and $\l=0$, then we put $\l X=\D_1$. Clearly, $\l\D_1=\D_1$ for every $\l\ge0$. The spaces $\l\D_n$ are referred to as \emph{simplexes of cardinality $n$}. For a metric space $X$, let $\diam X$ denote its diameter, i.e., the value $\sup\bigl\{|xy|:x,y\in X\bigr\}$. Then it is easy to see that $2d_{GH}(\D_1,X)=\diam X$, see~\cite{BurBurIva}. \section{Indistinguishable Metric Spaces} \markright{\thesection.~Indistinguishable Metric Spaces} We start with the following result. \begin{thm}[\cite{GrigIvaTuzSimpDist}]\label{thm:BigSympl} Let $X$ be a metric space, $\l\ge0$ a real number, and let $n$ be a cardinal number such that $\#X<n$. Then $$ 2d_{GH}(\l\D_n,X)=\max\{\l,\,\diam X-\l\}.
$$ \end{thm} In the present paper, we call metric spaces \emph{indistinguishable\/} if the Gromov--Hausdorff distances from them to each simplex are the same. Otherwise, we call the spaces \emph{distinguishable}. Theorem~\ref{thm:BigSympl} immediately implies the following result. \begin{cor}\label{cor:diam} If metric spaces are indistinguishable, then their diameters are the same. \end{cor} \begin{rk} It also follows from Theorem~\ref{thm:BigSympl} that metric spaces $X$ and $Y$ of the same diameter can be distinguished only by simplexes whose cardinality is less than or equal to $\max\{\#X,\#Y\}$. \end{rk} The following results concern the computation of the Gromov--Hausdorff distance from a metric space $X$ to simplexes whose cardinality does not exceed $\#X$. Let $X$ be an arbitrary set and $n$ a cardinal number not exceeding $\#X$. By $\cD_n(X)$ we denote the family of all possible partitions of $X$ into $n$ nonempty subsets. Now, let $X$ be a metric space. Then for each $D=\{X_i\}_{i\in I}\in\cD_n(X)$ put $$ \diam D=\sup_{i\in I}\diam X_i. $$ Notice that $\diam D=\diam X$ for $D\in\cD_1(X)$. Further, for each $D=\{X_i\}_{i\in I}\in\cD_n(X)$ put $$ \a(D)= \begin{cases} \inf\bigl\{|X_iX_j|:i\ne j\bigr\}&\text{for $n\ge2$,}\\ \infty&\text{for $n=1$.} \end{cases} $$ \begin{thm}[\cite{GrigIvaTuzSimpDist}]\label{thm:GenFormulaSmallSympl} For a metric space $X$, a cardinal number $n\le\#X$ and real $\l\ge0$ the equality $$ 2d_{GH}(\l\D_n,X)=\inf_{D\in\cD_n(X)}\max\Bigl\{\diam D,\l-\a(D),\diam X-\l\Bigr\} $$ holds. \end{thm} Put $$ \a_n(X)=\sup_{D\in\cD_n(X)}\a(D). $$ Notice that $\a_1(X)=\infty$.
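When $X$ is finite, the infimum in Theorem~\ref{thm:GenFormulaSmallSympl} runs over finitely many partitions, so the distance to a simplex can be computed by exhaustive search. The following Python sketch is our illustration only (the function names are ours, and the brute force is practical only for very small spaces):

```python
def partitions_into(elems, n):
    """Yield all partitions of the list elems into exactly n non-empty blocks."""
    if n == 1:
        yield [list(elems)]
        return
    if len(elems) < n:
        return
    head, rest = elems[0], elems[1:]
    # Either head joins one of the blocks of a partition of rest into n blocks...
    for p in partitions_into(rest, n):
        for i in range(len(p)):
            yield p[:i] + [[head] + p[i]] + p[i + 1:]
    # ...or head forms a block of its own.
    for p in partitions_into(rest, n - 1):
        yield [[head]] + p

def two_dGH_to_simplex(points, dist, n, lam):
    """2 d_GH(lam Delta_n, X) for a finite metric space X and n <= #X,
    computed as inf_D max{diam D, lam - alpha(D), diam X - lam} over all
    partitions D of X into n non-empty blocks."""
    diam = lambda S: max((dist(x, y) for x in S for y in S), default=0.0)
    diam_X = diam(points)
    best = float("inf")
    for P in partitions_into(list(points), n):
        diam_D = max(diam(S) for S in P)
        alpha = (min(dist(x, y)
                     for i, S in enumerate(P) for T in P[i + 1:]
                     for x in S for y in T)
                 if n >= 2 else float("inf"))
        best = min(best, max(diam_D, lam - alpha, diam_X - lam))
    return best

# X = {0, 1, 3} on the real line, n = 2 and lam = 6 >= 2 diam X: the best
# partition is {0, 1} | {3}, giving lam - alpha_2(X) = 6 - 2 = 4.
print(two_dGH_to_simplex([0.0, 1.0, 3.0], lambda x, y: abs(x - y), 2, 6.0))  # 4.0
```

For $\l\ge2\diam X$ the search indeed returns $\l-\a_n(X)$, in agreement with the corollary stated next.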
\begin{cor}\label{cor:BigLambda} For a metric space $X$, a cardinal number $n\le\#X$ and a real $\l$ such that $\l\ge2\diam X$ the equality $$ 2d_{GH}(\l\D_n,X)= \begin{cases} \inf_{D\in\cD_n(X)}\bigl(\l-\a(D)\bigr)=\l-\a_n(X)&\text{for $n\ge2$,}\\ \diam X&\text{for $n=1$} \end{cases} $$ holds. \end{cor} \begin{proof} Since for any partition $D$ the inequalities $\diam D\le \diam X$ and $\a(D)\le\diam X$ hold, we have $$ \max\Bigl\{\diam D,\l-\a(D),\diam X-\l\Bigr\}=\l-\a(D) $$ for $\l\ge2\diam X$, from which the required conclusion follows. \end{proof} Let $\CARD$ stand for the proper class of non-zero cardinal numbers. For a metric space $X$ define a mapping $\cA_X\:\CARD\to[0,\infty]$ as follows: $\cA_X(n)=\a_n(X)$ for $n\le\#X$, and $\cA_X(n)=0$ for $n>\#X$. Theorem~\ref{thm:BigSympl} and Corollary~\ref{cor:BigLambda} immediately imply the following result. \begin{cor} If metric spaces $X$ and $Y$ are indistinguishable, then $\cA_X=\cA_Y$. \end{cor} \section{$\a$-connected Metric Spaces} \markright{\thesection.~$\a$-connected Metric Spaces} A metric space $X$ is called \emph{$\a$-connected\/} if for each $n\ge2$ the equality $\a_n(X)=0$ holds. Every connected space is $\a$-connected~\cite{GrigIvaTuzSimpDist}. Since the transition from a partition to its enlargement does not increase the value of $\a$, the following result is true. \begin{prop}[\cite{GrigIvaTuzSimpDist}]\label{prop:cAmonot} For every metric space $X$, the function $\cA_X$ is non-increasing. \end{prop} \begin{rk} Proposition~\ref{prop:cAmonot} implies that a metric space $X$ is $\a$-connected if and only if $\a_2(X)=0$. \end{rk} Let us give another corollary of Theorem~\ref{thm:GenFormulaSmallSympl}. Put $$ d_n(X)=\inf_{D\in\cD_n(X)}\diam D. $$ Notice that $d_1(X)=\diam X$. \begin{cor}\label{cor:GHforAlpha0} Let $X$ be an arbitrary $\a$-connected metric space, and assume that $n\le\#X$. Then $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{d_n(X),\l,\diam X-\l\bigr\}.
$$ \end{cor} Since $\max\bigl\{\l,\diam X-\l\bigr\}\ge(\diam X)/2$ for all $\l$, the values $d_n(X)$ not exceeding $(\diam X)/2$ do not affect this formula, so such characteristics do not distinguish metric spaces. For a metric space $X$ we define a mapping $\cD_X\:\CARD\to[0,\infty]$ as follows: $$ \cD_X(n)= \begin{cases} d_n(X)&\text{for $n\le\#X$,}\\ 0&\text{for $n>\#X$.} \end{cases} $$ Put $$ \cD_X^{1/2}(n)= \begin{cases} \cD_X(n)&\text{for $\cD_X(n)\ge(\diam X)/2$,}\\ 0&\text{otherwise.} \end{cases} $$ \begin{cor} Let $X$ and $Y$ be arbitrary $\a$-connected metric spaces. If $X$ and $Y$ are indistinguishable, then $\cD_X^{1/2}=\cD_Y^{1/2}$. \end{cor} \section{Distinguishability and $\mst$-spectrum} \markright{\thesection.~Distinguishability and $\mst$-spectrum} Let $G=(V,E)$ be an arbitrary graph with vertex set $V$ and edge set $E$. We say that $G$ is \emph{defined on a metric space $X$} if $V\ss X$. For each such graph, define the \emph{length} $|e|$ of an \emph{edge} $e=vw$ as the distance $|vw|$ between its vertices $v$ and $w$, and the \emph{length} $|G|$ of the \emph{graph} $G$ itself as the sum of the lengths of all its edges. Let $M$ be a finite metric space. Define the value $\mst(M)$ by setting it equal to the length of the shortest tree among all trees of the form $(M,E)$. The resulting value is called the \emph{length of the minimal spanning tree on $M$}; a tree $G=(M,E)$ such that $|G|=\mst(M)$ is called a \emph{minimal spanning tree on $M$}. Note that a minimal spanning tree exists for any finite metric space $M$. A minimal spanning tree is generally not uniquely defined. The set of all minimal spanning trees on $M$ is denoted by $\MST(M)$. For $G\in\MST(M)$, let $\s(G)$ denote the vector of edge lengths of $G$, ordered in descending order. The following result is well known, see e.g.~\cite{Emel}. \begin{prop} For each $G_1,\,G_2\in\MST(M)$ the equality $\s(G_1)=\s(G_2)$ holds.
\end{prop} For each finite metric space $M$, by $\s(M)$ we denote the vector $\s(G)$ for an arbitrary $G\in\MST(M)$ and call it the \emph{$\mst$-spectrum of the space $M$}. \begin{thm}[\cite{TuzMSTSpec}]\label{thm:MSTSpec} Let $X$ be a finite metric space consisting of $m$ points, $\s(X)=(\s_1,\ldots,\s_{m-1})$, and $\l\ge2\diam X$. Then $2d_{GH}(\l\D_{k+1},X)=\l-\s_k$, where $1\le k\le m-1$. \end{thm} \begin{cor}\label{cor:MSTSpect} Let $X$ and $Y$ be indistinguishable finite metric spaces. Then their cardinalities are the same, their diameters are equal to each other, and their $\mst$-spectra coincide. \end{cor} \begin{proof} The diameters are equal to each other due to Corollary~\ref{cor:diam}. Let us verify that the cardinalities coincide. Assume the contrary, and let $\#X<\#Y$. By the assumption of the corollary, $d_{GH}(\l\D_k,X)=d_{GH}(\l\D_k,Y)$ for any $k$ and any $\l$. Choose $k>\#X$; then in accordance with Theorem~\ref{thm:BigSympl} we have $$ 2d_{GH}(\l\D_k,X)=\max\{\l,\diam X-\l\}=\l \qquad\text{for $\l\ge2\diam X$}. $$ On the other hand, due to Theorem~\ref{thm:MSTSpec}, we have $2d_{GH}(\l\D_{k},Y)=\l-\s_{k-1}(Y)$ for $k\le \#Y$, and hence $$ \l=2d_{GH}(\l\D_k,X)=2d_{GH}(\l\D_k,Y)=\l-\s_{k-1}(Y) $$ for $k=\#Y$ and $\l\ge2\diam X$. The latter implies $\s_{k-1}(Y)=0$, a contradiction. The coincidence of the spectra follows from Theorem~\ref{thm:MSTSpec}. \end{proof} \section{Distinguishability of Ultrametric Spaces} \markright{\thesection.~Distinguishability of Ultrametric Spaces} Recall that a metric space $X$ is called \emph{ultrametric} if its distance function satisfies the following enhanced triangle inequality: $$ |xz|\le\max\bigl\{|xy|,\,|yz|\bigr\} $$ for each $x,\,y,\,z\in X$. This inequality implies a similar ``polygon inequality'': for an arbitrary set $x_1, x_2,\ldots,x_k$ of points of the space $X$ we have $$ |x_1x_k|\le\max\bigl\{|x_1x_2|,|x_2x_3|,\ldots,|x_{k-1}x_k|\bigr\}. $$ In the paper~\cite{IvaTuzUltra} the next result is obtained.
\begin{thm}\label{thm:ultra} Let $X$ be a finite ultrametric space consisting of $m$ points, and let $\s(X)=(\s_1,\s_2,\ldots,\s_{m-1})$ be its $\mst$-spectrum. Then for each positive integer $k$ we have $$ 2d_{GH}(\l\D_k,X)= \begin{cases} \s_1&\text{for $k=1$},\\ \max\{\s_1-\l,\,\s_k,\,\l-\s_{k-1}\}&\text{for $1<k<m$},\\ \max\{\s_1-\l,\,\l-\s_{m-1}\}&\text{for $k=m$},\\ \max\{\s_1-\l,\,\l\}&\text{for $k>m$}. \end{cases} $$ \end{thm} \begin{cor}\label{cor:ultras} Finite ultrametric spaces are indistinguishable if and only if their cardinalities are the same and their $\mst$-spectra coincide. \end{cor} \section{Distinguishability in Terms of Extreme Points} \markright{\thesection.~Distinguishability in Terms of Extreme Points} According to Theorem~\ref{thm:GenFormulaSmallSympl}, in order to calculate the function $g_n(\l)=d_{GH}(\l\D_n,X)$ exactly for a given bounded metric space $X$ and a cardinal number $n\le\#X$, it is sufficient to know all pairs $\bigl(\a(D),\diam D\bigr)$, $D\in\cD_n(X)$. We consider these pairs as points of the plane on which the standard coordinates $(\a,d)$ are fixed. We denote the set of all such pairs by $\cAD_n(X)\ss\R^2$, and also put $h_{\a,d}(\l):=\max\{d,\,\l-\a\}$. Let us rewrite Theorem~\ref{thm:GenFormulaSmallSympl} in the new notation, using the fact that the function $\diam X-\l$ does not depend on the partition $D$, and swapping $\inf$ and $\max$. \begin{cor}\label{cor:GH-dist-alpha-diam-set} Let $X$ be an arbitrary bounded metric space, and $n\le\#X$. Then $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{\diam X-\l,\,\inf_{(\a,d)\in\cAD_n(X)}h_{\a,d}(\l)\bigr\}.
$$ \end{cor} \begin{rk} For $\a$-connected metric spaces the formula from Corollary~\ref{cor:GH-dist-alpha-diam-set} coincides with the one from Corollary~\ref{cor:GHforAlpha0}. \end{rk} We need the following notations: for an arbitrary metric space $X$ and a cardinal number $n\le\#X$, put \begin{gather*} \a_n^-(X)=\inf_{D\in\cD_n(X)}\a(D),\qquad \a_n^+(X)=\sup_{D\in\cD_n(X)}\a(D),\\ d_n^-(X)=\inf_{D\in\cD_n(X)}\diam D,\qquad d_n^+(X)=\sup_{D\in\cD_n(X)}\diam D. \end{gather*} Notice that the notations $\a_n(X)=\a_n^+(X)$ and $d_n(X)=d_n^-(X)$ were used above. \begin{rk} For $n<\#X$ in the case of a finite metric space, and for $n\le\#X$ in the case of an infinite metric space, one has $d_n^+(X)=\diam X$. \end{rk} In what follows, it will be more convenient to work not with the set $\cAD_n(X)$, but with its closure $\bcAD_n(X)$. Notice that both $\cAD_n(X)$ and $\bcAD_n(X)$ lie in the rectangle formed by the intersection of two strips in the plane $\R^2(\a,d)$: the horizontal strip defined by the inequality $d_n^-(X)\le d\le d_n^+(X)$, and the vertical strip defined by the inequality $\a_n^-(X)\le\a\le\a_n^+(X)$. Thus, both of these sets are bounded, and $\bcAD_n(X)$ is compact. \begin{cor}\label{cor:GH-dist-alpha-diam-set-closure} Let $X$ be an arbitrary bounded metric space, and $n\le\#X$. Then $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{\diam X-\l,\,\inf_{(\a,d)\in\bcAD_n(X)}h_{\a,d}(\l)\bigr\}. $$ \end{cor} Next, notice that for any pair $(\a,d)$ the graph of the function $y=h_{\a,d}(\l)$ is an angle with its vertex at the point $T_{\a,d}=(\a+d,d)$, one side of which is a horizontal ray directed opposite to the abscissa axis, and the other is co-directed with the bisector of the first quadrant. Notice that for any $(\a,d),\,(\a',d')$ such that $\a\ge\a'$ and $d\le d'$ we have $h_{\a,d}(\l)\le h_{\a',d'}(\l)$ for all $\l$. Thus, when calculating $d_{GH}(\l\D_n,X)$, those $(\a',d')\in\bcAD_n(X)$ for which there exists $(\a,d)\in\bcAD_n(X)$ with $\a\ge\a'$, $d\le d'$, and $(\a,d)\ne(\a',d')$ can be ignored.
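For a finite set of pairs $(\a,d)$, the discarding rule above is a Pareto-type filter: a pair may be ignored whenever another pair has a larger (or equal) $\a$ and a smaller (or equal) $d$. A small illustrative Python sketch (the names are ours; the paper works with the closure $\bcAD_n(X)$, which need not be finite):

```python
def undominated(pairs):
    """Keep the pairs (alpha, d) for which no other pair (alpha2, d2) with
    alpha2 >= alpha and d2 <= d exists; the dominated pairs do not change
    inf over (alpha, d) of h_{alpha,d}(lam) = max(d, lam - alpha)."""
    return [(a, d) for (a, d) in pairs
            if not any((a2 >= a and d2 <= d) and (a2, d2) != (a, d)
                       for (a2, d2) in pairs)]

pts = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.5), (2.0, 3.0)]
print(undominated(pts))  # [(2.0, 1.0), (0.5, 0.5)]
```

The infimum of $h_{\a,d}$ over the filtered set coincides with the infimum over the original set for every $\l$, which is exactly why the dominated pairs can be dropped.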
A point $(\a,d)\in\bcAD_n(X)$ is called \emph{extremal\/} if there does not exist a point $(\a',d')\in\bcAD_n(X)$ different from it for which $\a'\ge\a$ and $d'\le d$. The set of all extremal points of $\bcAD_n(X)$ is denoted by $\Ext_n(X)$. In the paper~\cite{GrigIvaTuzSimpDist} it is shown that the set $\Ext_n(X)$ is not empty, and the following result holds. \begin{thm}\label{thm:extr} Let $X$ be an arbitrary bounded metric space, and $n\le\#X$. Then $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{\diam X-\l,\,\inf_{(\a,d)\in\Ext_n(X)}h_{\a,d}(\l)\bigr\}. $$ \end{thm} Notice that the closure $\overline{\Ext}_n(X)$ is contained in $\bcAD_n(X)$ and contains $\Ext_n(X)$, so from the equality of the left-hand parts of the formulas in Theorem~\ref{thm:extr} and Corollary~\ref{cor:GH-dist-alpha-diam-set-closure} we obtain the following result. \begin{cor}\label{cor:extr} Let $X$ be an arbitrary bounded metric space, and $n\le\#X$. Then $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{\diam X-\l,\,\inf_{(\a,d)\in\overline{\Ext}_n(X)}h_{\a,d}(\l)\bigr\}. $$ \end{cor} For a bounded metric space $X$, let $\Pi_X^+\ss\R^2(\a,d)$ denote the half-plane defined by $\a+2d\ge\diam X$, and let $\Pi_X^-\ss\R^2(\a,d)$ denote the half-plane defined by $\a+2d\le\diam X$. Let us define a subset $M_n(X)\ss\R^2$. If $\Ext_n(X)\ss\Pi_X^+$, then we put $M_n(X)=\Ext_n(X)$. If $\Ext_n(X)\ss\Pi_X^-$, then $M_n(X)=\{A_+\}$, where $$ A_+=\Big(\a_n^+(X),\frac{\diam X-\a_n^+(X)}{2}\Big). $$ In the remaining case, we denote by $(\a_\pm,d_\pm)$ the point of the set $\overline{\Ext}_n(X)\cap\Pi_X^\pm$ closest to the straight line $\a+2d=\diam X$ (note that in the case under consideration both of these sets are non-empty). If the point $(\a_-,d_+)$ lies in $\Pi_X^-$, then put $M_n(X)=\overline{\Ext}_n(X)\cap\Pi_X^+$. Otherwise $M_n(X)=\bigl(\overline{\Ext}_n(X)\cap\Pi_X^+\bigr)\cup\big\{A\big\}$, where $$ A=\Big(\a_-,\frac{\diam X-\a_-}{2}\Big).
$$ \begin{cor}\label{cor:disting_bounded_Ext} Bounded metric spaces $X$ and $Y$ of the same cardinality are indistinguishable if and only if their diameters are the same and the sets $M_n(X)$ and $M_n(Y)$ coincide for each $n\le\#X=\#Y$. \end{cor} \begin{proof} The spaces $X$ and $Y$ are indistinguishable if and only if the functions $f_n(\l)=2d_{GH}(\l\D_n,X)$ and $g_n(\l)=2d_{GH}(\l\D_n,Y)$ are equal for all $n\le\#X=\#Y$ and $\l\ge0$. Let us consider their graphs on the plane with coordinates $(\l,y)$. It follows from Theorem~\ref{thm:extr} that the graph of the function $f_n(\l)=2d_{GH}(\l\D_n,X)$ is a broken line, one link of which is a segment of the straight line $y=\diam X-\l$, and the remaining links lie on the rays of the graphs of the functions $h_{\a,d}$, where $(\a,d)\in\overline{\Ext}_n(X)$. Obviously, the functions $f_n$ and $g_n$ are equal if and only if their graphs coincide, and the latter takes place if and only if the vertex sets of the corresponding broken lines coincide. Let us describe the graph of the function $f_n(\l)$. Denote by $L_{\a,d}$ the graph of the function $h_{\a,d}$. Recall that this is a broken line consisting of two rays with a vertex at the point with coordinates $(d+\a,d)$. Since the formula from Theorem~\ref{thm:extr} takes the maximum of the greatest lower bound of the functions $h_{\a,d}$ and the function $\diam X-\l$, the graph of the function $f_n(\l)$ contains those and only those extremal points that lie in the half-plane $\Pi_X^+$. If all extremal points fall in this half-plane, then the set $M_n(X)=\Ext_n(X)$ completely determines this graph. If $\Ext_n(X)\ss\Pi_X^-$, then the graph of the function $f_n(\l)$ consists of two links lying on the straight lines $y=\diam X-\l$ and $y=\l-\a_n^+(X)$.
These two straight lines on the plane with coordinates $(\l,y)$ intersect at the point with coordinates $$ \Big(\frac{\diam X+\a_n^+(X)}{2},\frac{\diam X-\a_n^+(X)}{2}\Big), $$ which corresponds to the point $$ A_+=\Big(\a_n^+(X),\frac{\diam X-\a_n^+(X)}{2}\Big) $$ on the plane with coordinates $(\a,d)$, $\a=\l-y$, $d=y$. Thus, in this case the set $M_n(X)=\{A_+\}$ completely defines the graph of the function $f_n(\l)$ and, conversely, $M_n(X)$ can be restored from the graph. Finally, let the extremal points be located in both half-planes. In this situation, the lower envelope $E_L$ of the family of broken lines $\{L_{\a,d}\}$ intersects the line $y=\diam X-\l$ at some point. To find this point, consider the vertices of the envelope $E_L$ closest to this line. These are the points $B_\pm=(\a_\pm+d_\pm,d_\pm)\in\R^2(\l,y)$, where $(\a_\pm,d_\pm)\in\Pi_X^\pm$. The part of $E_L$ between these two points is a broken line consisting of a horizontal straight segment with endpoint at $B_+$ and an inclined straight segment with endpoint at $B_-$. The common vertex of these straight segments is $(\a_-+d_+,d_+)$. On the plane $\R^2(\a,d)$ this point has the form $(\a_-,d_+)$. If this point lies in $\Pi_X^-$, then the line $y=\diam X-\l$ intersects the horizontal segment of the lower envelope, and the coordinates of the intersection point are calculated through $(\a_+,d_+)\in\Pi_X^+$ and $\diam X$. If the point $(\a_-,d_+)$ lies in $\Pi_X^+$, then the line $y=\diam X-\l$ intersects the inclined segment of the envelope $E_L$, and the coordinates of the intersection point have the form $$ \Big(\frac{\diam X+\a_-}{2},\frac{\diam X-\a_-}{2}\Big), $$ which corresponds to the point $$ A=\Big(\a_-,\frac{\diam X-\a_-}{2}\Big) $$ in the plane $\R^2(\a,d)$. Thus, in this case also the set $M_n(X)$ completely determines the graph of the function $f_n(\l)$ and, conversely, it can be reconstructed from the graph. The corollary is proved.
\end{proof} Notice that indistinguishable bounded metric spaces can have different sets of extremal points. \begin{examp} Let $X$ be a subset of the plane consisting of four equal straight segments of length $s$ lying on the same straight line $\ell$, and let $\dl$ be the distance between the nearest endpoints of adjacent segments. Let us calculate the distance $d_{GH}(X,\l\D_4)$. It is clear that $\diam X=4s+3\dl$, $d_4^+(X)=\diam X$, $d_4^-(X)=s$, $\a_4^-(X)=0$, $\a_4^+(X)=\dl$, and there is one extremal point, with coordinates $(\dl,s)$ on the plane $\R^2(\a,d)$, or $(s+\dl,s)$ in the coordinates $(\l,y)$. This point lies in the half-plane $\Pi_X^-$, and the set $M_4(X)$ consists of a single point $A_+(X)$. Let us change the space $X$, preserving the diameter and $\a_4^+$, and changing $d_4^-$. To do this, let us rotate the segments about their midpoints and extend them so that their ends remain on the straight lines perpendicular to the straight line $\ell$ and passing through the endpoints of the segments forming the space $X$, see Fig.~\ref{fig:X_Y}. We denote the resulting space by $Y$. \ig{X_Y}{0.15}{fig:X_Y}{Indistinguishable metric spaces $X$ and $Y$ with different sets $\Ext_4$.} Then $\diam Y=\diam X$, $d_4^+(Y)=\diam Y$, $d_4^-(Y)=S>s$, $\a_4^-(Y)=0$, $\a_4^+(Y)=\dl$, and the space $Y$ has one extremal point, with coordinates $(\dl,S)$ on the plane $\R^2(\a,d)$, or $(S+\dl,S)$ in the coordinates $(\l,y)$. For $S<\dl+2s$ this point lies in the half-plane $\Pi_Y^-=\Pi_X^-$, and the set $M_4(Y)$ consists of one point $A_+(Y)=A_+(X)$. Thus, in the notation of the proof of Corollary~\ref{cor:disting_bounded_Ext}, the graphs of the functions $f_4(\l)$ and $g_4(\l)$ coincide, while the sets $\Ext_4(X)$ and $\Ext_4(Y)$ are different, see Fig.~\ref{fig:graph_f_g}.
\ig{graph_f_g}{0.25}{fig:graph_f_g}{The graphs of the functions $f_4(\l)=2d_{GH}(\l\D_4,X)$ and $g_4(\l)=2d_{GH}(\l\D_4,Y)$ coincide.} \end{examp} Combining Corollaries~\ref{cor:MSTSpect} and~\ref{cor:disting_bounded_Ext}, we obtain the following indistinguishability criterion for finite metric spaces. \begin{cor}\label{cor:finite_disting_bounded_Ext} Finite metric spaces $X$ and $Y$ are indistinguishable if and only if their diameters are equal and the subsets $M_n(X)$ and $M_n(Y)$ coincide for each $n\le\#X=\#Y$. \end{cor} \section{Distinguishability and Graph Chromatic Number} \markright{\thesection.~Distinguishability and Graph Chromatic Number} In the paper~\cite{IvaTuzTwoDist} a connection was found between the Gromov--Hausdorff distances from metric spaces with two nonzero distances to simplexes and the chromatic numbers of graphs. Recall that the \emph{chromatic number of a simple graph $G$} is the smallest number of colors with which the vertices of this graph can be colored so that adjacent vertices receive different colors. We denote this number by $\g(G)$. \begin{thm}[\cite{IvaTuzTwoDist}]\label{thm:ChromNum} Let $G=(V,E)$ be an arbitrary finite simple graph, where $V$ stands for its vertex set and $E$ stands for its edge set. Fix arbitrary reals $a$ and $b$ such that $0<a<b\le2a$ and define a metric on $V$ as follows\/\rom: put the distances between adjacent vertices of $G$ to be equal to $b$, and put all the remaining non-zero distances to be equal to $a$. If $m$ is the largest positive integer such that $2d_{GH}(a\D_m,V)=b$, then $\g(G)=m+1$. \end{thm} For $0<a<b\le2a$, a metric space in which nonzero distances take only the values $a$ or $b$ is called an \emph{$(a,b)$-space}. Let $X$ be an arbitrary $(a,b)$-space. Construct a graph $G_X=(V,E)$ by reversing the procedure from Theorem~\ref{thm:ChromNum}, namely, set $V=X$ and connect vertices $x,y\in X$ by an edge if and only if $|xy|=b$.
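The construction of $G_X$ can be checked on small examples. Below is a hedged Python sketch (our own illustration, not from the paper; the brute-force colouring search is feasible only for small graphs, and the input is assumed to be the distance matrix of an $(a,b)$-space):

```python
from itertools import product

def graph_from_ab_space(dist, b):
    """Edges of G_X: connect x, y iff |xy| = b in the (a,b)-space."""
    n = len(dist)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] == b]

def chromatic_number(n, edges):
    """Brute-force chromatic number of a simple graph on n vertices."""
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[i] != colouring[j] for i, j in edges):
                return k
```

For instance, a five-point $(1,2)$-space whose $b$-distances form a $5$-cycle gives $\g(G_X)=3$.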
From Corollary~\ref{cor:MSTSpect} and Theorem~\ref{thm:ChromNum}, we obtain the following result. \begin{cor} Let $X_i$, $i=1,\,2$, be some $(a_i,b_i)$-spaces and let $0<a_1<b_1\le2a_1$, $0<a_2<b_2\le2a_2$. Assume that $X_1$ and $X_2$ are indistinguishable. Then they have the same cardinality, $a_1=a_2$, $b_1=b_2$, and the chromatic numbers of the graphs $G_{X_1}$ and $G_{X_2}$ coincide. \end{cor} \section{Distinguishability and Graph Clique Covering Number} \markright{\thesection.~Distinguishability and Graph Clique Covering Number} In the same paper~\cite{IvaTuzTwoDist} a connection was found between the Gromov--Hausdorff distances from metric spaces with two nonzero distances to simplexes and the clique covering numbers of graphs. The point is that the clique covering number and the chromatic number are dual, see, for example,~\cite{Emel}. Let us recall the corresponding definitions. A \emph{clique\/} in an arbitrary simple graph $G=(V,E)$ is a subgraph in which every two vertices are connected by an edge. Notice that every single-vertex subgraph is a clique by definition. It is clear that the family of vertices of all cliques covers $V$. The smallest number of cliques whose vertices cover $V$ is called the \emph{clique covering number}. We denote this number by $\theta(G)$. For a simple graph $G$, let $G'$ stand for its \emph{dual\/} graph, i.e., the graph that has the same set of vertices and the complementary set of edges (two vertices in $G'$ are connected by an edge if and only if they are not connected by an edge in $G$). The following fact is well known. \begin{prop} For each simple graph $G$ the equality $\theta(G)=\g(G')$ holds. \end{prop} Theorem~\ref{thm:ChromNum} can be reformulated in the following form. \begin{thm}[\cite{IvaTuzTwoDist}]\label{thm:clique} Let $G=(V,E)$ be an arbitrary finite simple graph, where $V$ is its vertex set, and $E$ is its edge set.
Fix arbitrary $0<a<b\le2a$ and define a metric on $V$ as follows\/\rom: put the distances between adjacent vertices to be equal to $a$, and put all the remaining nonzero distances to be equal to $b$. If $n$ is the largest positive integer such that $2d_{GH}(a\D_n,V)=b$, then $\theta(G)=n+1$. \end{thm} Let $0<a<b\le2a$ and let $X$ be an arbitrary $(a,b)$-space. Construct the graph $H_X=(V,E)$ by reversing the procedure from Theorem~\ref{thm:clique}, namely, set $V=X$ and connect vertices $x,y\in X$ by an edge if and only if $|xy|=a$. From Corollary~\ref{cor:MSTSpect} and Theorem~\ref{thm:clique}, we immediately obtain the following result. \begin{cor} Let $X_i$, $i=1,\,2$, be some $(a_i,b_i)$-spaces and $0<a_1<b_1\le2a_1$, $0<a_2<b_2\le2a_2$. Assume that $X_1$ and $X_2$ are indistinguishable. Then they have the same cardinality, $a_1=a_2$, $b_1=b_2$, and the clique covering numbers of the graphs $H_{X_1}$ and $H_{X_2}$ coincide. \end{cor} \section{Distinguishability and the Borsuk Number} \markright{\thesection.~Distinguishability and the Borsuk Number} In this section we discuss the connection between distinguishability and the famous Borsuk problem: Into how many parts should an arbitrary non-single-point subset of the Euclidean space $\R^n$ be partitioned so that the diameters of the partition elements are smaller than the diameter of the initial set? In~1933, K.~Borsuk stated the famous conjecture that any bounded subset of $\R^n$ can be partitioned into $(n+1)$ subsets of smaller diameter. This conjecture was proved by Hadwiger~\cite{Hadw,Hadw2} for convex subsets with smooth boundary, and unexpectedly disproved in the general case in~1993~\cite{KahnKalai}. The current state of the art is described, for example, in~\cite{Raig}. In~\cite{IvaTuzBorsuk} the following generalization of the Borsuk problem is formulated. Let $X$ be a bounded metric space, $n$ a cardinal number, $n\le\#X$, and $D=\{X_i\}_{i\in I}\in\cD_n(X)$.
We say that $D$ is a partition into parts \emph{of strictly smaller diameter} if there exists $\e>0$ such that $\diam X_i\le\diam X-\e$ for all $i\in I$. The following problem is called the \emph{generalized Borsuk problem\/}: find out whether a given bounded metric space can be partitioned into a given number of parts of strictly smaller diameter. It turns out that the answer can be obtained in terms of the Gromov--Hausdorff distance to suitable simplexes. \begin{thm}[\cite{IvaTuzBorsuk}] Let $X$ be an arbitrary bounded metric space and $n$ a cardinal number such that $n\le\#X$. Choose an arbitrary real $\l$, $0<\l<\diam X$. Then $X$ can be partitioned into $n$ parts of strictly smaller diameter if and only if $2d_{GH}(\l\D_n,X)<\diam X$. \end{thm} \begin{cor} Let $X$ and $Y$ be indistinguishable metric spaces. Then for each cardinal number $n$, $n\le\min\{\#X,\#Y\}$, the spaces $X$ and $Y$ either simultaneously allow or do not allow a partition into $n$ parts of strictly smaller diameter. \end{cor} \section{Examples of Indistinguishable Spaces} \markright{\thesection.~Examples of Indistinguishable Spaces} In conclusion, let us give some examples of indistinguishable spaces. \subsection{Indistinguishable three-point and continuum spaces} Let $M$ be a three-point space with distances $a<b<c$, and let $X$ be a disjoint union of connected bounded subsets $X_1$, $X_2$, $X_3$ such that each of the three has diameter $d<(c-a)/2$, and the distances between any points from $X_1$ and $X_2$, $X_1$ and $X_3$, $X_2$ and $X_3$ are $a$, $b$, $c$, respectively. Then $$ d_{GH}(\l\D_n,M)=d_{GH}(\l\D_n,X). $$ Indeed, for $n>\#X$ this follows from Theorem~\ref{thm:BigSympl}. It remains to verify the case $n\le\#X$. First, let $n>3$. Then each $D\in\cD_n(X)$ partitions one of the connected spaces $X_i$, and therefore $\a(D)=0$. By Corollary~\ref{cor:GHforAlpha0}, we have $$ 2d_{GH}(\l\D_n,X)=\max\Bigl\{d_n(X),\l,\diam X-\l\Bigr\}.
$$ On the other hand, $\max\{\l,\diam X-\l\}\ge(\diam X)/2=c/2$, and for $D\in\cD_n(X)$, each element of which lies in some $X_i$, we have $\diam D\le(c-a)/2<c/2$; therefore $d_n(X)<c/2$ and hence $2d_{GH}(\l\D_n,X)=\max\{\l,\diam X-\l\}$. By Theorem~\ref{thm:BigSympl}, the same formula holds for $d_{GH}(\l\D_n,M)$. Let $n=3$. Put $D_0=\{X_1,X_2,X_3\}$; then for each $D\in\cD_3(X)$, $D\ne D_0$, we have $\a(D)=0$ again. Define $d'_n(X)=\inf_{D\ne D_0}\diam D$; then $d'_n(X)<c/2$ again and $$ \inf_{D\ne D_0}\max\Bigl\{\diam D,\l,\diam X-\l\Bigr\}=\max\{\l,\diam X-\l\}=\max\{\l,c-\l\}. $$ On the other hand, $\a(D_0)=a$, therefore $$ \max\Bigl\{\diam D_0,\l-\a(D_0),\diam X-\l\Bigr\}=\max\{d,\l-a,c-\l\}. $$ Given that $d<(c-a)/2$ and $\max\{d,\l-a,c-\l\}\ge(c-a)/2$, we obtain $2d_{GH}(\l\D_n,X)=\max\{\l-a,c-\l\}$. Finally, in the paper~\cite{IvaTuzIrreduce} it is shown that the Gromov--Hausdorff distance between two three-point spaces can be calculated as the $\max$-norm of the difference of their ordered distance vectors. Therefore, the same formula holds for $d_{GH}(\l\D_n,M)$. Let $n=2$. Considering $\l\D_2$ as a degenerate three-point space with the distances $0$, $\l$, $\l$ and applying the same result from~\cite{IvaTuzIrreduce}, we conclude that $$ 2d_{GH}(\l\D_2,M)=\max\bigl\{a,|b-\l|,|c-\l|\bigr\}. $$ Let us show that the same formula is valid for the space $X$. Notice that for each partition $D\in\cD_2(X)$ we have $\diam D\ge a$, because one element of this partition contains points from at least two of the sets $X_i$ of which $X$ consists. Besides, $\a(D)\le b$, because either the partition $D$ splits one of the $X_i$ and $\a(D)=0$, or it does not split any $X_i$, and then one of its elements is some $X_i$, and the other one is the union of the remaining two $X_j$. So, $$ 2d_{GH}(\l\D_2,X)\ge\max\big\{a,|\l-b|,|c-\l|\big\}. $$ Notice that this estimate is attained at the partition $D=\{X_1\cup X_2,X_3\}$, which completes the proof.
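The computation for $n=2$ above can be replayed numerically. The sketch below follows the document's usage of the cited result of~\cite{IvaTuzIrreduce}, reading it as: twice the Gromov--Hausdorff distance between three-point spaces equals the max-norm of the difference of the ordered distance vectors (the Python code is illustrative only):

```python
def two_gh_three_point(d1, d2):
    """2 d_GH between three-point spaces via sorted distance vectors."""
    return max(abs(x - y) for x, y in zip(sorted(d1), sorted(d2)))
```

Treating $\l\D_2$ as the degenerate three-point space with distances $(0,\l,\l)$, the call `two_gh_three_point([0, lam, lam], [a, b, c])` returns $\max\{a,|b-\l|,|c-\l|\}$, in agreement with the displayed formula.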
\subsection{Indistinguishable connected spaces} Let $X=[0,1]$ be a straight segment and $Z$ a connected metric space, $\diam Z\le1/2$. Put $Y=X\x Z$ and define a distance function on $Y$ as follows: $$ \bigl|(x_1,z_1)(x_2,z_2)\bigr|=\max\bigl\{|x_1x_2|,|z_1z_2|\bigr\}. $$ The space $Y$ is connected and $\diam X=\diam Y=1$, therefore $$ 2d_{GH}(\l\D_n,X)=\max\bigl\{d_n(X),\l,1-\l\bigr\},\qquad 2d_{GH}(\l\D_n,Y)=\max\bigl\{d_n(Y),\l,1-\l\bigr\}. $$ Notice that $d_2(X)\le1/2$, $d_2(Y)\le1/2$, and $\max\{\l,1-\l\}\ge1/2$; therefore, using the monotonicity of $d_n$, we conclude that $d_n(X)\le1/2$ and $d_n(Y)\le1/2$ for all $n\ge2$, and hence $d_{GH}(\l\D_n,X)=d_{GH}(\l\D_n,Y)$ for $n\ge2$. Since $\diam X=\diam Y$, we also have $d_{GH}(\D_1,X)=d_{GH}(\D_1,Y)$, so the spaces $X$ and $Y$ are indistinguishable. Let us generalize this example as follows. \begin{thm} Let $X$ and $Y$ be $\a$-connected metric spaces of the same diameter, and assume that $d_2(X)\le(\diam X)/2$ and $d_2(Y)\le(\diam Y)/2$. Then $X$ and $Y$ are indistinguishable. In particular, for any cardinal number greater than or equal to the continuum there exist indistinguishable metric spaces of that cardinality. \end{thm} \markright{References} \begin{thebibliography}{20} \bibitem{Hausdorff} F.~Hausdorff, {\it Grundz\"uge der Mengenlehre}, Veit, Leipzig, 1914 [reprinted by Chelsea in 1949]. \bibitem{Edwards} D.~Edwards, \emph{The Structure of Superspace}, in {\it Studies in Topology}, Academic Press, 1975. \bibitem{Gromov} M.~Gromov, \emph{Groups of Polynomial Growth and Expanding Maps}, Publications Mathematiques I.H.E.S., {\bf 53}, 1981. \bibitem{BurBurIva} D.~Burago, Yu.~Burago, S.~Ivanov, \emph{A Course in Metric Geometry}, Graduate Studies in Mathematics {\bf 33}, A.M.S., Providence, RI, 2001. \bibitem{IvaNikTuz} A.\,O.~Ivanov, N.\,K.~Nikolaeva, A.\,A.~Tuzhilin, \emph{The Gromov--Hausdorff Metric on the Space of Compact Metric Spaces is Strictly Intrinsic}, ArXiv e-prints, {\tt arXiv:1504.03830}, 2015.
\bibitem{GrigIvaTuzSimpDist} D.~S.~Grigor'ev, A.~O.~Ivanov and A.~A.~Tuzhilin, \emph{Gromov--Hausdorff Distance to Simplexes}, Chebyshevskii Sbornik, \textbf{20} (2), 100--114 (2019). \bibitem{IvaTuzDistSympl} A.\,O.~Ivanov, A.\,A.~Tuzhilin, \emph{Geometry of Compact Metric Space in Terms of Gromov--Hausdorff Distances to Regular Simplexes}, ArXiv e-prints, {\tt arXiv:1607.06655}, 2016. \bibitem{TuzMSTSpec} A.\,A.~Tuzhilin, \emph{Calculation of Minimum Spanning Tree Edges Lengths using Gromov--Hausdorff Distance}, ArXiv e-prints, {\tt arXiv:1605.01566}, 2016. \bibitem{IvaTuzTwoDist} A.\,O.~Ivanov, A.\,A.~Tuzhilin, \emph{The Gromov--Hausdorff Distance between Simplexes and Two-Distance Spaces}, ArXiv e-prints, {\tt arXiv:1907.09942}, 2019. \bibitem{Hadw} H.~Hadwiger, \emph{\"Uberdeckung einer Menge durch Mengen kleineren Durchmessers}, Commentarii Mathematici Helvetici, \textbf{18} (1), 73--75 (1945). \bibitem{Hadw2} H.~Hadwiger, \emph{Mitteilung betreffend meine Note: \"Uberdeckung einer Menge durch Mengen kleineren Durchmessers}, Commentarii Mathematici Helvetici, \textbf{19} (1946). \bibitem{KahnKalai} J.~Kahn, G.~Kalai, \emph{A counterexample to Borsuk's conjecture}, Bull. Amer. Math. Soc., \textbf{29} (1), 60--62 (1993). \bibitem{Raig} A.~M.~Raigorodskii, \emph{Around Borsuk's Hypothesis}, Journal of Mathematical Sciences, \textbf{154} (4), 604--623 (2008). \bibitem{IvaTuzBorsuk} A.~O.~Ivanov and A.~A.~Tuzhilin, \emph{Calculation of the Gromov--Hausdorff distance using the Borsuk number}, Moscow University Mathematics Bulletin, \textbf{78} (1), 37--43 (2023). \bibitem{IvaTuzIrreduce} A.\,O.~Ivanov, A.\,A.~Tuzhilin, \emph{Gromov--Hausdorff Distance, Irreducible Correspondences, Steiner Problem, and Minimal Fillings}, ArXiv e-prints, {\tt arXiv:1604.06116}, 2016. \bibitem{IvaTuzUltra} A.\,O.~Ivanov, A.\,A.~Tuzhilin, \emph{The Gromov--Hausdorff distances between simplexes and ultrametric spaces}, ArXiv e-prints, {\tt arXiv:1907.03828}, 2019. \bibitem{Emel} V.~A.
Emelichev, O.~I. Mel'nikov, V.~I. Sarvanov, and R.~I. Tyshkevich, \emph{Lectures on Graph Theory}, Moscow, URSS, 2019. \end{thebibliography} \end{document}
2412.18968v1
http://arxiv.org/abs/2412.18968v1
Boundary Blow-up Solutions of Second Order Quasilinear Equation on Infinite Cylinders
\documentclass[A4paper, 11pt]{amsart} \usepackage{graphicx} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{xcolor} \usepackage{amssymb} \usepackage{amsmath,stackengine} \usepackage{amsfonts} \usepackage{amstext} \usepackage{amsthm} \usepackage[english]{babel} \usepackage[autostyle]{csquotes} \usepackage[super]{nth} \usepackage{hyperref} \usepackage[margin=1.4in]{geometry} \usepackage{txfonts} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem*{remark}{Remark} \newtheorem{claim}{Claim} \newtheorem*{example}{Example} \newcommand{\rn}{\mathbb{R}^{n}} \newcommand{\re}{\mathbb{R}} \numberwithin{equation}{section} \providecommand*\lemmaautorefname{Lemma} \providecommand*\propositionautorefname{Proposition} \providecommand*\corollaryautorefname{Corollary} \providecommand*\definitionautorefname{Definition} \providecommand*\remarkautorefname{Remark} \hypersetup{ colorlinks=true, linkcolor= blue, citecolor=blue, pdfpagemode=FullScreen, } \newenvironment{Assumptions}{\setcounter{enumi}{0} \renewcommand{\theenumi}{(\textbf{A}.\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} \begin{enumerate}}{\end{enumerate} } \newcommand{\bi}{\color{magenta}} \newcommand{\ei}{\normalcolor} \title[LARGE SOLUTIONS OF QUASI-LINEAR EQUATION ON INFINITE CYLINDERS]{BOUNDARY BLOW-UP SOLUTIONS OF SECOND ORDER QUASILINEAR EQUATION ON INFINITE CYLINDERS} \author[I. Chowdhury]{Indranil Chowdhury} \address{\parbox{.8\linewidth} {{\textbf{I. Chowdhury}}\medskip \\ Indian Institute of Technology - Kanpur, India \medskip}} \curraddr{} \email{[email protected]} \author[N. N. Dattatreya]{N. N. Dattatreya} \address{\parbox{.8\linewidth} {{\textbf{N. N. 
Dattatreya}}\medskip \\ Indian Institute of Technology - Kanpur, India \medskip}} \curraddr{} \email{[email protected]} \date{} \keywords{Boundary blow-up solution, Asymptotic behaviour, $p$-Laplace operator, unbounded domain, infinite cylinder, Keller--Osserman condition} \smallskip \subjclass{35B44, 35A01, 35J92, 35J62, 35J25} \begin{document} \begin{abstract} This article studies large solutions for a class of quasilinear equations involving the $p$-Laplacian on \textit{infinite cylindrical domains}. We study the well-posedness of `weak' large solutions on infinite cylinders via the convergence of large solutions on finite cylinders, and observe that any such solution coincides with the large solution on its cross-section. Finally, the results are generalized to a class of operators involving non-linearity in the gradient. \end{abstract} \maketitle \section{Introduction} The focus of this paper is the existence and uniqueness of solutions for the following problem \begin{align}\label{eqn_mainQ} \begin{cases} \text{div}(Q(|\nabla u|)\nabla u)= f(u)\quad &\text{in}~ S(\omega)\\ u(x)\to \infty &\text{as} ~dist(x,\partial S(\omega))\to 0, \end{cases} \end{align} on the infinite cylinder $S(\omega)=\omega\times \mathbb{R}$, where $\omega$ is a $C^{2}$ domain in $\mathbb{R}^{n-1}$, $Q:(0,\infty)\to (0,\infty)$ is a $C^1$ function such that $Q(0^{+})=0$ and $(Q(r)r)'>0$ for $r>0$, and $f:[0,\infty)\to [0,\infty)$ is an increasing, continuous function such that $f(r)\to \infty$ as $r\to\infty$. In general, the existence of a solution for \eqref{eqn_mainQ} is connected with the `Keller--Osserman' condition, which, when $Q(r)=1$, reads \begin{align*} \int^{\infty}\frac{ds}{\sqrt{F(s)}}<\infty, \end{align*} where $F$ is the primitive of $f$. Such solutions, whenever they exist, are called \textit{Large~Solutions}. Other terminologies used in the literature are \textit{Explosive Solutions} and \textit{Boundary Blow-up Solutions}.
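For a concrete force function, the Keller--Osserman integral can be checked numerically. The Python sketch below uses the choice $f(s)=e^{s}$, so that $F(s)=e^{s}-1$; the choice of $f$, the lower limit, and the quadrature parameters are illustrative assumptions, not taken from the paper:

```python
import math

def integrand(s):
    # 1 / sqrt(F(s)) for the model choice f(s) = e^s, i.e. F(s) = e^s - 1
    return 1.0 / math.sqrt(math.exp(s) - 1.0)

def tail(T, n=100000):
    """Trapezoid approximation of int_1^T ds / sqrt(F(s))."""
    h = (T - 1.0) / n
    total = 0.5 * (integrand(1.0) + integrand(T))
    total += sum(integrand(1.0 + i * h) for i in range(1, n))
    return total * h
```

The values stabilize as $T$ grows, here towards $\int_1^\infty ds/\sqrt{e^s-1}=\pi-2\arctan\sqrt{e-1}\approx1.303$, reflecting the convergence of the integral; for a slowly growing $f$ (e.g. linear) the analogous partial integrals keep growing.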
In connection with complex analysis, the work of L.~Bieberbach \cite{biberbach} in 1916 dealt with the well-posedness of similar problems for $Q(r)=1$ and $f(t)=e^{t}$ on a planar domain, and with a conformal-map representation of the solution on a simply connected domain. Simultaneous yet independent works of Keller and Osserman around 1957 marked the beginning of the systematic study of large solutions. Keller's work was on electrodynamics, concerning the equilibrium of charged gas particles in a container \cite{osti_4157678, Keller1957}, while Osserman's \cite{Osserman1957} is concerned with the geometry of the ball and the Gauss curvature. Another interesting work on boundary blow-up solutions involving the Laplace operator, related to stochastic control, is \cite{lions} by J.~M.~Lasry and P.~L.~Lions. \medskip The well-posedness of large solutions for semi-linear elliptic equations on bounded domains is well studied by now; the related developments are in, e.g., \cite{bandlemarcus4, MR1240799, essen, bandlegiarrusso, Amandine1997, veronmarcus} and the references therein. For instance, the existence of blow-up solutions for two types of locally Lipschitz non-decreasing force functions, one having a zero and the other strictly positive, is studied in \cite{Amandine1997}. The articles \cite{essen,bandlegiarrusso} deal with $-\Delta u=au-b(x)f(u)$ and $\Delta u\pm |\nabla u|^{q}=p(x)u^{\gamma}$, respectively, whereas \cite{Costin, Lieberman} deal with a symmetric solution on a ball for $f(t)=\Tilde{f}(t)-\lambda t$. Another interesting topic regarding large solutions is their asymptotic behaviour near the boundary; see \cite{Costin, Payne} for results in the unit ball and the half-cylinder, respectively.
The asymptotic behaviour of large solutions has been studied extensively, since it depends not only on the operator and the force function but also on the shape of the domain's boundary; we refer to \cite{bandlemarcus4, Payne, McKenna, Bandle1998OnSE, cbandle, bandlesurvay, Shuibo2010, Lieberman} for related results on bounded domains. The papers \cite{cbandle, bandlesurvay} deal with force functions of the form $u^{q}$ and $e^{u}$, and \cite{Shuibo2010} provides the exact rate of blow-up of the solution to $\Delta u=b(x)f(u)$ on a smooth domain. The second-order effect for such solutions is studied in \cite{Bandle1998OnSE}. Asymptotics of the derivative of the large solution are found in \cite{bandlemarcus3}. The dependence of the large solution on the curvature of the boundary can be found in \cite{Bandlemarcus1, Pino}. Results for non-smooth domains can be found in \cite{veronmarcus} and on fractal domains in \cite{matero}. Work regarding logistic equations is available in \cite{Letelier2001, FLORICA2002}. The existence of large solutions to the Monge-Ampère equation is dealt with in \cite{Materononlinear, Mohammed2007, Zhijun2018, Zhang2020}. An interesting use of a large solution is discussed in \cite{kichenassamy}. \medskip The well-posedness and asymptotics of large solutions associated with the p-Laplacian on bounded domains have also been explored in recent decades; we refer to \cite{materopaper, materothisis, Zongming2007} and the references therein. In \cite{diaz1993, Du2003}, the authors studied similar results on a bounded domain for a general quasilinear equation of the form $-\text{div}(Q(|\nabla u|)\nabla u)+\lambda f(u)=g$, and, on the same note, fully nonlinear problems of the form $u-\Delta_{p}u+\beta |\nabla u|^{q}=f$ are studied in \cite{Buccheri}. \medskip If $f=0$, the Keller--Osserman condition fails, rendering the non-existence of large solutions. Blowing up at the boundary is not an inherent property of this local operator but is imposed by the boundary condition.
The nonlinearity tries to prevent the solution from blowing up, in the form of a local bound on the solution, while the boundary condition pushes the solution to blow up. The border at which one takes over the other is the Keller--Osserman condition. \medskip As discussed in \cite{Keller1957}, the non-existence of blow-up solutions on the whole space $\rn$ raises a natural question: to explore similar results when one or more directions are unbounded, but not all. The answer for semi-linear equations ($Q(r)=1$) can be found in \cite{bandle2013large}. In that paper, the authors proved that such a solution on an infinite cylindrical domain exists, using the convergence of large solutions on finite cylinders becoming unbounded in particular direction(s), and that such a large solution coincides with the large solution on the corresponding slice of the cylinder; the intuition comes from the same problem in $\mathbb{R}$, where they showed that the solutions $v_{\ell}$ in $(-\ell,\ell)$ converge to zero locally uniformly as $\ell\to \infty$. \medskip We aim to achieve similar existence results for \eqref{eqn_mainQ} by the approximation of the large solutions on finite cylinders becoming unbounded in particular direction(s). For simplicity, we first carry out the analysis when $Q(r)=r^{p-2}$ for $p>1$ and on $S=S(B^{n-1}_{1}(0))$, and prove the following statements: \begin{itemize} \item [{\small{(i)}}] There exists $u\in W^{1,\infty}_{loc}(S)$ that solves \eqref{eqn_mainQ} with $Q(r)=r^{p-2}, p>1$, when $f$ satisfies the \textit{Keller--Osserman}-type condition given in \ref{A1} below. \item [{\small{(ii)}}] $u$ is independent of the last variable and solves a corresponding problem on $B^{n-1}_{1}(0)$ for a fixed last variable. \item [{\small{(iii)}}] $u$ is unique in solving \eqref{eqn_mainQ} for $Q(r)=r^{p-2}, p>1$, with an additional compatibility condition on $f$. \end{itemize} The first two results are then generalized to \eqref{eqn_mainQ} in Section \ref{section 5}.
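For the model data in statement (i), the Keller--Osserman-type condition \ref{A1} admits a closed-form test: for $Q(r)=r^{p-2}$ and the power nonlinearity $f(t)=t^{q}$ it reduces to the exponent inequality $q>p-1$. A hedged Python sketch (the power-law case and the cutoff $T$ are illustrative assumptions):

```python
import math

def ko_power_case(p, q):
    """Keller-Osserman-type condition for Q(r)=r^{p-2}, f(t)=t^q:
    int^infty F(s)^{-1/p} ds with F(s)=s^{q+1}/(q+1) converges iff q > p-1."""
    return q > p - 1

def tail_integral(p, q, T):
    """Closed form of int_1^T s^{-(q+1)/p} ds (constant factors dropped):
    bounded as T -> infinity exactly when q > p-1."""
    e = 1.0 - (q + 1.0) / p
    if abs(e) < 1e-12:
        return math.log(T)
    return (T ** e - 1.0) / e
```

For $p=2$, $q=3$ the partial integrals stay bounded, while for $p=2$, $q=1$ they grow logarithmically in $T$.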
Along the proof, we observe that $\{u_{\ell}\}$, the large solutions on the finite cylinders $B^{n-1}_{1}(0) \times (-\ell, \ell)$, form a decreasing sequence of non-negative functions, and so we define $u$ to be its limit. In the case of the Laplace operator, the local uniform convergence of $u_{\ell}$ to $u$ allows one to pass the limit through the operator as in \cite{bandle2013large}, where the authors consider point-wise solutions. They construct \textit{maximal large solutions} and \textit{minimal large solutions} on the infinite cylinder and show that both the maximal and the minimal large solutions of the finite cylinders converge to the maximal large solution on the infinite cylinder, which is due to the construction of the maximal large solution (see \cite{bandle2013large}); for the minimal large solution, they consider slightly different, finite boundary data on the finite cylinders. For $p\neq 2$, our solution is weak, which necessitates interchanging the limit and the integral; the hurdle appears for two reasons: one is the presence of the non-linearity in the gradient, and the other is the boundary condition, which does not allow the solution to become a test function, up to scaling and translation. Although it is possible to get a local uniform convergence similar to the proof of the existence of a large solution on a bounded domain as in \cite{diaz1993}, which uses a local uniform gradient bound on domains with positive mean curvature, we avoid it, since we are only concerned with the limit of $\{u_\ell\}$ being a large solution on the infinite cylinder, which can be achieved by a weaker convergence. We show a local $W^{1,p}$ convergence using a vector inequality and a suitable cutoff test function.
In Section \ref{section 3}, we show how the idea in the proof of blow-up control in \cite{materopaper} can be used to obtain a local uniform bound on the solution of \eqref{eqn_mainQ} by building a radial, blow-up super solution on balls, with the help of results from Section \ref{Section 2}. The remaining part of this paper is organized as follows: in Section \ref{Section 2} we discuss the given problem in dimension one; in Section \ref{section 3}, we provide definitions and basic properties, and we also prove a local uniform bound for weak solutions and a local $L^{p}$ bound for their gradients in a bounded domain; in Section \ref{section 4}, we prove the results stated above; and in Section \ref{section 5}, we generalize some of those results to \eqref{eqn_mainQ}. \section{Analysis in One Dimension}\label{Section 2} This section presents a one-dimensional analysis for a problem more general than \eqref{eqn_mainQ}. The results are similar to those in \cite{bandle2013large}, studied there for a linear operator and also conjectured for equations of the type \eqref{eqn_1d}. This analysis provides intuition for the asymptotics in higher dimensions. Moreover, the solution built in this section provides a way to construct a radially symmetric, increasing solution of \eqref{eqn_mainQ} on balls. Similar techniques for \eqref{eqn_mainQ} can be found in \cite{2diaz1993}. The uniqueness of positive large solutions for the $p$-Laplace operator can be found in \cite{Guo2007}. Consider the problem on a sequence of domains in $\mathbb{R}$ whose limit is the whole space. For $\ell>0$, we take \begin{align} (A(v'))'= f(v) \quad &\text{in} \quad (-\ell,\ell) \label{eqn_1d}\\ v(x)\rightarrow \infty \quad\quad &\text{as} \quad x\rightarrow \pm \ell, \label{eqn_1d_bdry} \end{align} where $A:\mathbb{R}\rightarrow \mathbb{R}$ is a continuously differentiable, increasing function with $A(0)=0$ and $A'(r)>0~\forall ~r>0$; further assumptions on $A$, required for the analysis, will be discussed in due course.
In addition, $f:[0,\infty)\rightarrow [0,\infty)$ is a continuous, increasing function with $f(r)>0$ for $r>0$ and $f(t)\to \infty$ as $t\to \infty$. \begin{example} One may consider, \begin{enumerate} \item [\textit{i)}] $A(r)=|r|^{p-2}r$, which gives the $p$-Laplacian. \item [\textit{ii)}] $A(r)=\frac{r}{\sqrt{1+r^{2}}}$. \end{enumerate} \end{example} \makeatletter \newcommand{\myitem}[1]{\medskip \item[#1]\protected@edef\@currentlabel{#1}} \begin{enumerate} \myitem{$\textbf{(A1)}$} \label{A1} \textit{We assume the following Keller-Osserman condition: for any $r>0$,} \begin{align*} \Psi_{A}(r):=\int_{r}^{\infty}\frac{ds}{B^{-1}\{F(s)\}}<\infty, \end{align*} \end{enumerate} which we abbreviate as $\int^{\infty}\frac{ds}{B^{-1}\{F(s)\}}<\infty$, where \begin{align*} \displaystyle F(x)=\int_{0}^{x}f(s)\ ds,\quad B(x)=\int_{0}^{x}A'(s)s\ ds. \end{align*} \begin{example} When $A(r)=|r|^{p-2}r$ we get $ \displaystyle \Psi_{p}(r)=\biggl(1-\frac{1}{p}\biggl)^{\frac{1}{p}}\int_{r}^{\infty}\frac{ds}{F(s)^{\frac{1}{p}}}<\infty$, and for $f(t)=t^{q}$ condition \ref{A1} becomes, up to a constant, $\displaystyle \int^{\infty}\frac{ds}{s^{\frac{q+1}{p}}}<\infty. $ Therefore, $q>p-1$. \end{example} For $v_{0}\in (0,+\infty)$, we consider the following IVP: \begin{align}\label{eqn_1d_ivp} \begin{cases} (A(w'))'= f(w) \quad &\forall~ x>x_{m}\\ w'(x_{m})=0 \quad \text{and} &w(x_{m})=v_{0}. \end{cases} \end{align} Multiplying both sides by $w'$, integrating from $x_{m}$ to $x$, and changing variables, we get \begin{align*} \int_{0}^{w'(x)}A'(s)s\ ds=\int_{v_{0}}^{w(x)}f(s)\ ds. \end{align*} In the above notation this reads $B(w'(x))=F(w(x))-F(v_{0})$, which implies \begin{align}\label{eqn_one dimensional analysis} \frac{w'(x)}{B^{-1}\{F(w(x))-F(v_{0})\}}=1. \end{align} Integrating from $x_{m}$ to $x$ with a change of variable on the left-hand side, we get \begin{equation}\label{eqn_for_KO} \int_{v_{0}}^{w(x)}\frac{ ds}{B^{-1}\{F(s)-F(v_{0})\}}=x-x_{m}.
\end{equation} Note that the right side of \eqref{eqn_for_KO} blows up to $\infty$ as $x\to \infty$, whereas, assuming \ref{A1}, the left side remains finite as long as $v_{0}=v(x_{m})>0$. Therefore $w$ must blow up at a finite value of $x$, say $x=\ell(v_{0})$, where $(x_{m}, \ell(v_{0}))$ is the maximal interval of existence. We have \begin{equation}\label{eqn_KO_ell} \ell(v_{0})=x_{m}+\int_{v_{0}}^{\infty}\frac{ds}{B^{-1}\{F(s)-F(v_{0})\}} . \end{equation} Tracing the steps back, any $w$ defined implicitly as in \eqref{eqn_for_KO} solves \eqref{eqn_1d_ivp}. To see the uniqueness of the solution to \eqref{eqn_1d_ivp}, assume $w_{1}$ and $w_{2}$ are two solutions; then by \eqref{eqn_for_KO} \begin{align*} \int_{w_{1}(x)}^{w_{2}(x)}\frac{ ds}{B^{-1}\{F(s)-F(v_{0})\}}=0, \end{align*} which implies $w_{1}=w_{2}$ for all $x>x_{m}$, as the integrand is positive. In conclusion, $w$ is the only solution to \eqref{eqn_1d_ivp} and is implicitly given by \eqref{eqn_for_KO}. Similarly, letting $\Tilde{w}$ solve the counterpart \begin{align*} \begin{cases} (A(w'))'=f(w) \quad &\forall~ x<x_{m}\\ w'(x_{m})=0 \quad \text{and} &w(x_{m})=v_{0}, \end{cases} \end{align*} we can construct a function \begin{equation*} v(x)=\begin{cases} w(x) \quad \text{if } x\geq x_{m}\\ \Tilde{w}(x) \quad \text{if } x\leq x_{m} \end{cases} \end{equation*} that solves \eqref{eqn_1d} and \eqref{eqn_1d_bdry} uniquely on $(-\ell(v_{0}),\ell(v_{0}))$ and is given implicitly as in \eqref{eqn_for_KO}. Further, whenever $x<x_{m}$, by reflection $2x_{m}-x>x_{m}$, and so $v(2x_{m}-x)$ solves \eqref{eqn_1d_ivp}, and vice versa. We deduce that $v(2x_{m}-x)=v(x)$ for all $x$ for which $v$ is defined. Since the domain we are looking for is symmetric about $0$, we are forced to take $x_{m}=0$.
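The construction above can be made completely explicit in a special case; the following computation is only for orientation. \begin{example} Take $A(r)=r$, so that \eqref{eqn_1d} reads $v''=f(v)$, and $f(t)=6t^{2}$. Then $F(s)=2s^{3}$, $B(x)=\frac{x^{2}}{2}$, $B^{-1}(y)=\sqrt{2y}$, hence $B^{-1}\{F(s)\}=2s^{3/2}$ and \begin{align*} \Psi_{A}(r)=\int_{r}^{\infty}\frac{ds}{2s^{3/2}}=\frac{1}{\sqrt{r}}<\infty, \end{align*} so \ref{A1} holds. With $x_{m}=0$, the substitution $s=v_{0}t$ in the integral for $\ell(v_{0})$ gives \begin{align*} \ell(v_{0})=\int_{v_{0}}^{\infty}\frac{ds}{2\sqrt{s^{3}-v_{0}^{3}}} =\frac{1}{2\sqrt{v_{0}}}\int_{1}^{\infty}\frac{dt}{\sqrt{t^{3}-1}}, \end{align*} so $\ell(v_{0})$ is a constant multiple of $v_{0}^{-1/2}$: continuous, strictly decreasing, with $\ell(v_{0})\to\infty$ as $v_{0}\to 0$ and $\ell(v_{0})\to 0$ as $v_{0}\to\infty$. One may also check directly that $w(x)=(\ell-x)^{-2}$ satisfies $w''=6(\ell-x)^{-4}=6w^{2}$ and $\Psi_{A}(w(x))=\ell-x$, which exhibits the blow-up rate encoded in \eqref{eqn_for_KO}. \end{example}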
Thus, $v$ blows up at $\pm \ell(v_{0})$, $v(-x)=v(x)$ for all $x\in (-\ell(v_{0}),\ell(v_{0}))$, and $v$ is implicitly given by \begin{equation}\label{eqn_1d_sln_implicit} \int_{v_{0}}^{v(x)}\frac{ ds}{B^{-1}\{F(s)-F(v_{0})\}}=x \quad \forall~ x\in (-\ell(v_{0}),\ell(v_{0})). \end{equation} It is easy to see that $v$ is decreasing in $(-\ell(v_{0}),0)$ and increasing in $(0,\ell({v_{0}}))$: by the chain rule $A'(v')v''=f(v)>0$, and since $A'>0$ we get $v''>0$. Thus we have \begin{align}\label{eqn:v_0} \displaystyle v_{0}=\min_{x\in (-\ell(v_{0}),\ell(v_{0}))} v(x)=v(x_{m}). \end{align} Before going further, we consider two alternative assumptions regarding the integrability of $\int_{0}^{r}\frac{ds}{B^{-1}\{F(s)\}}$ for $r\in (0,\infty)$, which is denoted by $\int_{0^{+}}\frac{ds}{B^{-1}\{F(s)\}}$. \begin{enumerate} \myitem{$\textbf{(A2)}$}\label{A2} $\int_{0^{+}}\frac{ds}{B^{-1}\{F(s)\}}=\infty$, which is called the \textit{Osgood condition}. \myitem{$\textbf{(A3)}$}\label{A3} $\int_{0^{+}}\frac{ds}{B^{-1}\{F(s)\}}<\infty$. \end{enumerate} First, we prove a lemma that yields the non-existence of a large solution on $\re$. \begin{lemma}\label{lemma phi_0 neq 0} Assume \ref{A2} and let $v_0$ be as in \eqref{eqn:v_0}; then $v_{0}\ne 0$. \end{lemma} \begin{proof} Assume the contrary; then $F(v_{0})=0$. In this case \eqref{eqn_1d_ivp} has the unique solution $w=0$: indeed, if $w(x)>0$ for some $x$, take $x_{0}=\inf\{x~|~w(x)>0\}$, so that $x_{0}\geq 0$, $w(x_{0})=0$ and $(x_{0},\infty)\subset \{x~|~w(x)>0\}$. \\ With the same argument as between \eqref{eqn_1d_ivp} and \eqref{eqn_for_KO}, integrating between $x_{0}<x<y$ we get \begin{align*} \displaystyle \int_{w(x)}^{w(y)}\frac{ds}{B^{-1}\{F(s)\}}=y-x. \end{align*} Letting $x \to x_{0}$, we get a contradiction to \ref{A2}: the left-hand side becomes infinite, whereas the right-hand side remains finite. Hence $v_{0}\ne 0$.
\end{proof} \begin{theorem} With \ref{A1} and \ref{A2}, for any $\ell>0$ there exists a unique solution $v_{\ell}$ to \eqref{eqn_1d},\eqref{eqn_1d_bdry} with the following properties: \begin{itemize} \item[\textit{(i)}] $v_{\ell}$ is convex. \item [\textit{(ii)}] $v_{\ell}\to 0$ uniformly on any compact subset of $\mathbb{R}$, as $\ell\to \infty$. \end{itemize} \end{theorem} \begin{proof} We prove the existence of $v_{\ell}$ for all $\ell>0$ by showing that $\ell(v_{0}):(0,+\infty)\to(0,+\infty)$, as a function of $v_{0}$, is onto, and uniqueness by showing that it is one-to-one. To show that it is onto, we show that it is continuous, $\displaystyle \lim_{v_{0}\to\infty}\ell(v_{0})=0 $ and $\displaystyle \lim_{v_{0}\to 0}\ell(v_{0})=\infty $.\\ The solution $v_{\ell}$ was constructed above and, as observed there via the chain rule, $v_{\ell}''>0$; hence $(i)$ holds. \\ \noindent \textit{\underline{Step 1:} }$\ell(v_{0})$ is onto.\\ By \eqref{eqn_KO_ell} and a change of variable, we get \begin{align*} \ell(v_{0})=\int_{0}^{\infty}\frac{ds}{B^{-1}\{F(s+v_{0})-F(v_{0})\}}, \end{align*} hence $\ell$ is a continuous function of $v_{0}$. Differentiating the denominator of the integrand with respect to $v_{0}$, \begin{equation}\label{eqn_diff_wrt_phi0} \frac{d}{dv_{0}}\{F(s+v_{0})-F(v_{0})\}=f(s+v_{0})-f(v_{0})\geq 0, \end{equation} which shows that $F(s+v_{0})-F(v_{0})$ is increasing with respect to $v_{0}$ for fixed $s$. Since $B$ is increasing, $B^{-1}\{F(s+v_{0})-F(v_{0})\}$ is increasing in $v_{0}$, and hence $\ell(v_{0})$ is decreasing in $v_{0}$. For $s>0$, \begin{align*} F(s+v_{0})-F(v_{0})=\int_{v_{0}}^{s+v_{0}}f(t)\ dt\geq \int_{v_{0}}^{s+v_{0}}f(v_{0})\ dt =sf(v_{0}), \end{align*} thus $F(s+v_{0})-F(v_{0})\to +\infty$ as $v_{0}\to \infty$, because $f(r)\to\infty$ as $r\to\infty$. We claim that $B(x)\to +\infty$ as $x\to+\infty$.
With this we have $B^{-1}\{F(s+v_{0})-F(v_{0})\} \to \infty$ as $v_{0}\to \infty$, and so $\displaystyle \lim_{v_{0}\to\infty}\ell(v_{0})=0 $.\\ To prove the claim, we notice that $A'>\alpha$ for some $\alpha>0$, as it is continuous. Hence, \begin{equation*} \displaystyle\lim_{x\to+\infty}B(x)\geq \lim_{x\to+\infty}\int_{0}^{x}\alpha s\ ds= +\infty. \end{equation*} On the other hand, as $v_{0}\to 0$, $F(s+v_{0})-F(v_{0})\to F(s)$; thus $B^{-1}\{F(s+v_{0})-F(v_{0})\}$ decreases to $ B^{-1}\{F(s)\}$ by \eqref{eqn_diff_wrt_phi0}. By the monotone convergence theorem and \ref{A2} we then conclude that $\displaystyle \lim_{v_{0}\to 0}\ell(v_{0})=\infty $. \\ \noindent \textit{\underline{Step 2:}} $\ell$ is strictly decreasing with respect to $v_{0}$.\\ Since $f$ is monotonically increasing, $f(s+v_{0})-f(v_{0})>0$ for large $s$; by \eqref{eqn_diff_wrt_phi0} we get that $F(s+v_{0})-F(v_{0})$ is strictly increasing for large $s$, and so \begin{align*} \frac{1}{B^{-1}\{F(s+v_{0})-F(v_{0})\}} \end{align*} is strictly decreasing; as a result, $\ell$ is strictly decreasing. That proves the claim.\\ \noindent \textit{\underline{Step 3:}} Uniqueness for the solution of \eqref{eqn_1d},\eqref{eqn_1d_bdry}.\\ Say $v_{1}$ and $v_{2}$ are two solutions. If $v^{1}_{0}=v^{2}_{0}$, uniqueness follows from the uniqueness of \eqref{eqn_1d_ivp} and its counterpart. If $v^{1}_{0}<v^{2}_{0}$, then $\ell(v^{1}_{0})>\ell(v^{2}_{0})$ by Step 2, so $v_{1}$ and $v_{2}$ cannot simultaneously satisfy \eqref{eqn_1d_bdry}.\\ \noindent \textit{\underline{Step 4:}} Proof of \textit{(ii)}.\\ Integrating \eqref{eqn_one dimensional analysis} from $x>x_{m}$ to $\ell(v_{0})$ for $w=v_{\ell}$, we get \begin{equation*} \int_{v_{\ell}(x)}^{\infty}\frac{ ds}{B^{-1}\{F(s)-F(v_{0})\}}=\ell(v_{0})-x . \end{equation*} When $\ell=\ell(v_{0})\to \infty$, the right-hand side tends to infinity; by the Keller-Osserman condition the integral from any fixed positive level is finite, while by the Osgood condition it diverges at $0$, so necessarily $v_{\ell}(x)\to 0$.
For any compact set $K\subset\re$ we can find $a>0$ such that $K\subset [-a,a]$; since $v_{\ell}$ is even and increasing on $[0,\ell)$, we have $v_{\ell}\leq v_{\ell}(a)$ on $[-a,a]$, and since $v_{\ell}(a)\to 0$, $v_{\ell}\to 0$ uniformly on $K$ as $\ell\to\infty$. \end{proof} \begin{corollary}{(Non-existence)} There is no large solution of $(A(v'))'= f(v)$ on $\re$. \end{corollary} \begin{proof} From \eqref{eqn_KO_ell}, $\ell(v_{0})<+\infty$ whenever $v_{0}>0$, and from the last proof $\displaystyle\lim_{v_{0}\to 0}\ell(v_{0})=+\infty$. Given that $f\geq 0$, if $v$ were such a large solution on $\re$, then $v_{0}=\displaystyle\min_{x\in \re}v(x)=0$, a contradiction to Lemma \ref{lemma phi_0 neq 0}. \end{proof} \begin{theorem} Assume \ref{A1} and \ref{A3}. Fix \begin{align*} L=\int_{0}^{\infty}\frac{ds}{B^{-1}\{F(s)\}}; \end{align*} then for every $\ell>0$ there is a unique solution $\Tilde{v}_{\ell}$ to \eqref{eqn_1d},\eqref{eqn_1d_bdry}. Also $\Tilde{v}_{\ell}\to 0$ uniformly on any compact set; moreover, for $\ell>L$, $\Tilde{v}_{\ell}$ develops a dead core. \end{theorem} \begin{proof} The arguments are similar when $\Tilde{v}_{0}>0$, except that as $\Tilde{v}_{0}\to 0$, $F(\Tilde{v}_{0}+s)-F(\Tilde{v}_{0})\to F(s)$, and so by the monotone convergence theorem \begin{align*} \lim_{\Tilde{v}_{0}\to 0}\ell(\Tilde{v}_{0})=\int_{0}^{\infty}\frac{ds}{B^{-1}\{F(s)\}}=L<\infty. \end{align*} For the case $\Tilde{v}_{0}=0$ and $\ell\geq L$, we construct the solution using the solution $v_{L}$ from the previous theorem; before doing so, we see how it should fit. \\ If an increasing $v$ is to solve \eqref{eqn_1d_ivp}, let $x_{0}=\inf\{x>x_{m}~|~v(x)>0\}$ and define $v$ implicitly by \begin{align*} \int_{0}^{v(x)}\frac{ds}{B^{-1}\{F(s)\}}=x-x_{0} \quad \quad \forall ~ x>x_{0}; \end{align*} when $x\to L+x_{0}$ we have $v(x)\to \infty$. By taking $x_{0}=\ell-L$ we obtain a solution of \eqref{eqn_1d_ivp} that blows up at $\ell$. That is,
there is a unique solution $\Tilde{v}_{\ell}$ solving \eqref{eqn_1d_ivp} such that $\Tilde{v}_{\ell}(x)=0 ~\forall~ x\in (x_{m},\ell-L]$, given implicitly by \begin{align*} \int_{0}^{\Tilde{v}_{\ell}(x)}\frac{ds}{B^{-1}\{F(s)\}}=x-\ell+L \quad \forall x\in [\ell-L,\ell). \end{align*} By the symmetric construction, we obtain a unique solution to \eqref{eqn_1d}, \eqref{eqn_1d_bdry} of the form \begin{align*} \Tilde{v}_{\ell}(x)=0 \quad \forall~x\in [L-\ell,\ell-L], \end{align*} implicitly defined by \begin{align*} \int_{0}^{\Tilde{v}_{\ell}(x)}\frac{ds}{B^{-1}\{F(s)\}}=|x|-\ell+L \quad \forall x\in (-\ell, L-\ell)\cup (\ell-L, \ell). \end{align*} Thus, $\Tilde{v}_{\ell}$ develops a dead core in $ [L-\ell,\ell-L]$. The uniform convergence of $\Tilde{v}_{\ell}$ follows from the uniform convergence of $v_{L}$. \end{proof} \section{Preliminary Results in Higher Dimensions}\label{section 3} In the current section, we gather results for large solutions on bounded domains in general dimension $n$. Similar results can be found in \cite{diaz1993}. Let $\Omega \subset \rn$ be bounded and for $p>1$ consider the problem \begin{equation}\label{eqn_noboundary condition} \Delta_{p}u= f(u)\quad \text{in}\quad \Omega, \end{equation} along with the boundary condition \begin{equation*} u(x)\to \infty \quad \text{as} \quad x\to \partial \Omega. \end{equation*} The assumption on $f$ is as follows: \begin{enumerate} \myitem{$\textbf{(A4)}$}\label{A4} $f:[0,\infty)\rightarrow [0,\infty)$ is a continuous, increasing function that is positive on $(0,\infty)$, with $f(0)=0$, satisfying the Keller-Osserman condition \ref{A1}. \end{enumerate} An example of such an $f$ is $f(x)=x^{q}$ for $q>p-1$.
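For the $p$-Laplacian, the preceding example can be verified directly; the following routine computation is included only for orientation. \begin{example} For $A(r)=|r|^{p-2}r$ one has $B(x)=\frac{p-1}{p}x^{p}$ and $B^{-1}(y)=\bigl(\frac{p}{p-1}y\bigr)^{1/p}$. With $f(t)=t^{q}$, $F(s)=\frac{s^{q+1}}{q+1}$, so $B^{-1}\{F(s)\}$ is a constant multiple of $s^{\frac{q+1}{p}}$, and \ref{A1} holds precisely when $\frac{q+1}{p}>1$, i.e. $q>p-1$. In this range $\int_{0^{+}}\frac{ds}{B^{-1}\{F(s)\}}=\infty$, so a pure power automatically satisfies the Osgood condition \ref{A2}; to realize \ref{A3} together with \ref{A1}, one has to slow $f$ down near $0$, e.g. (an illustrative choice) $f(t)=t^{q_{0}}$ for $t\leq 1$ with $0<q_{0}<p-1$, and $f(t)=t^{q}$ for $t\geq 1$ with $q>p-1$. \end{example}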
We consider the following notion of a local weak solution: \begin{definition}\label{dfn_solution} Let $\Omega$ be a domain in $\rn$. A function $u\in W_{loc}^{1,p}(\Omega)$ is a weak solution of $\Delta_{p}u= f(u)$ in $\Omega$ if \begin{align*} -\int_{\Omega'}|\nabla u|^{p-2}\nabla u\cdot\nabla \phi \ dx=\int_{\Omega'}f(u)\phi \ dx \quad \forall~ \phi\in W_{0}^{1,p}(\Omega'), \end{align*} for every $\Omega'\subset\subset\Omega$.\\ By a \textit{weak subsolution} (or, weak supersolution) to $\Delta_{p}u= f(u)$ in $\Omega$, we mean \begin{align*} -\int_{\Omega'}|\nabla u|^{p-2}\nabla u\cdot\nabla \phi \ dx\geq \int_{\Omega'}f(u)\phi \ dx \ \ \text{\bigg(or, $\leq \int_{\Omega'}f(u)\phi \ dx $\bigg)} \ \quad \forall~ \phi\in W_{0}^{1,p}(\Omega'),\phi\geq 0, \end{align*} for every $\Omega'\subset \subset \Omega$. \end{definition} One of the most important tools available for dealing with the well-posedness of general nonlinear operators is the comparison principle; we state it for \eqref{eqn_noboundary condition}, and a proof can be found in \cite[Theorem 2.2]{diaz1993} and \cite[Theorem 2.15]{lindqvist}. \begin{proposition}[\cite{diaz1993}]{(Comparison Principle)}\label{prop_comparison} Let $u,v\in W_{loc}^{1,p}(\Omega)\cap C(\Omega)$ be two functions such that \begin{align*} \Delta_{p}u- f(u)\geq \Delta_{p}v- f(v)\quad \text{in}~\Omega, \quad \text{weakly,} \end{align*} where $f$ is increasing with $f(0^{+})=0$, and such that \begin{align*} \limsup \frac{u(x)}{v(x)}\leq 1\quad \text{as}~\mathrm{dist}(x,\partial\Omega)\to 0. \end{align*} Then $u\leq v$ in $\Omega$. In particular, if $u$ is a weak subsolution and $v$ is a weak supersolution, the result holds. \end{proposition} Next, we give a local uniform bound for subsolutions of \eqref{eqn_noboundary condition}. Similar results can be found in \cite{materopaper} and \cite{diaz1993}.
\begin{proposition}\label{prop_weak_local_bound} Let $\Omega$ be a bounded domain in $\rn$ and let $u\in W_{loc}^{1,p}(\Omega)\cap C(\Omega)$ be a weak subsolution of \eqref{eqn_noboundary condition}. For any $x_{0}\in \Omega$ and $R>0$ such that $B_{R}(x_{0})\subset \subset \Omega$, we have \begin{equation}\label{eqn_local_bound} u(x)\leq \omega\Big(\frac{R}{2}\Big) \quad \text{for all } x\in B_{\frac{R}{2}}(x_{0}), \end{equation} where $\omega$ solves \begin{equation} \begin{cases} (|\omega'(t)|^{p-2}\omega'(t))'=f(\omega(t)) \quad &\text{in } (-R,R)\\ \omega(t)\to\infty \quad &\text{as } t\to\pm R. \end{cases} \end{equation} \end{proposition} \begin{proof} We construct a radially symmetric weak boundary blow-up solution $v$ of \eqref{eqn_noboundary condition} on $B_{R}(x_{0})$ and use the comparison principle for $u$ and $v$. Define $v(x)=\omega(|x-x_{0}|)\in C(B_{R}(x_{0}))$. For any $\phi\in C_{c}^{\infty}(B_{R}(x_{0}))$, write $x=x_{0}+t\,\theta(x)$, where $t=|x-x_{0}|\in (0,R)$ and $\theta(x)=\frac{x-x_{0}}{|x-x_{0}|}$; points of $\partial B_{t}(x_{0})$ are parametrized by $\partial B_{R}(x_{0})$ with surface Jacobian $\frac{t^{n-1}}{R^{n-1}}$. We get \begin{equation}\label{eqn_eta'} \begin{split} &\int_{B_{R}(x_{0})}|\nabla v |^{p-2}\nabla v\cdot \nabla\phi \ dx\\ &=\int_{B_{R}(x_{0})}|\omega'(|x-x_{0}|)|^{p-2}\omega'(|x-x_{0}|) \frac{x-x_{0}}{|x-x_{0}|}\cdot \nabla \phi \ dx\\ &=\int_{0}^{R}\int_{\partial B_{t}(x_{0})}|\omega'(t)|^{p-2}\omega'(t)\, \theta(x) \cdot \nabla \phi(t,x) \ dH(x) dt\\ &=\int_{0}^{R}|\omega'(t)|^{p-2}\omega'(t)~ \int_{\partial B_{R}(x_{0})}\frac{\partial \phi}{\partial t}(t,x) \frac{t^{n-1}}{R^{n-1}} \ dH(x) dt,\\ \end{split} \end{equation} as $\frac{\partial \phi}{\partial t}(t,x)=\theta(x) \cdot \nabla \phi(t,x)$.\\ Define \begin{equation*} \eta(t)=\begin{cases} \int_{0}^{t} \int_{\partial B_{R}(x_{0})}\frac{\partial}{\partial s}\Big(\phi(s,x) \frac{s^{n-1}}{R^{n-1}}\Big) \ dH(x) ds \quad &\text{for }t> 0\\ 0 \quad &\text{for } t\leq 0, \end{cases} \end{equation*} which is a $C_{c}(\mathbb{R})$ function, as $\phi\in C_{c}^{\infty}(B_{R}(x_{0}))$.
Since $\phi(x)=0$ for $|x-x_{0}|=R$, \begin{equation*} \eta(t)=\int_{0}^{t} \int_{\partial B_{R}(x_{0})}\frac{\partial \phi}{\partial s}(s,x) \frac{s^{n-1}}{R^{n-1}} \ dH(x) ds \quad \text{for }t> 0. \end{equation*} Thus \begin{equation*} \eta'(t)=\int_{\partial B_{R}(x_{0})}\frac{\partial \phi}{\partial t}(t,x) \frac{t^{n-1}}{R^{n-1}} \ dH(x). \end{equation*} Since $\omega$ is a weak solution of the one-dimensional problem, \eqref{eqn_eta'} gives \begin{align*} -\int_{B_{R}(x_{0})}|\nabla v|^{p-2}\nabla v\cdot \nabla\phi \ dx&=\int_{0}^{R}f(\omega(t)) \eta(t) \ dt\\ &=\int_{0}^{R}f(\omega(t))\int_{\partial B_{R}(x_{0})} \phi(t,x)\frac{t^{n-1}}{R^{n-1}}\ dH(x) dt\\ &=\int_{B_{R}(x_{0})} f(v(x))\phi(x) \ dx. \end{align*} As $v(x)\to \infty$ when $|x-x_{0}|\to R$ while $u<\infty$ on $\overline{B_{R}(x_{0})}$, the comparison principle implies \begin{equation*} u(x)\leq v(x)\quad \text{for all }x\in B_{R}(x_{0}). \end{equation*} Because $\omega$ is increasing on $(0,R)$, we get \eqref{eqn_local_bound}. \end{proof} Next, we show that the gradient of a subsolution has a local $L^{p}$ bound independent of the boundary condition, by the usual cutoff technique. \begin{proposition}\label{prop_grad_lp_bound} Let $\Omega$ be a bounded domain in $\rn$ and let $u\in W_{loc}^{1,p}(\Omega)\cap C(\Omega)$ be any weak subsolution of \eqref{eqn_noboundary condition}; then, for every compact $K\subset\Omega$, $\|\nabla u\|_{L^{p}(K)}$ is bounded independently of $u$. \end{proposition} \begin{proof} Let $K$ be a compact subset of $\Omega$, let $K_{1}\subset \Omega$ be compact with $K\subsetneq K_{1}$, and let $\phi\in C_{c}^{\infty}(K_{1})$ be such that $0\leq\phi\leq 1$ and $\phi=1$ on $K$.
By Proposition \ref{prop_weak_local_bound}, taking $u\phi$ as a test function in the weak formulation, we get \begin{equation*} \begin{split} \int_{K_{1}}|\nabla u|^{p}\phi \ dx &\leq -\int_{K_{1}}f(u)u\phi \ dx - \int_{K_{1}}|\nabla u|^{p-2}u \nabla u\cdot \nabla \phi \ dx\\ &\leq \int_{K_{1}}f(u)u\phi \ dx + \int_{K_{1}}|\nabla u|^{p-1}u |\nabla \phi| \ dx,\\ \end{split} \end{equation*} where in the last step we used $f(u)u\phi\geq 0$. By the assumption that $f$ is increasing and Proposition \ref{prop_weak_local_bound}, and using Young's inequality with $c>0$, \begin{equation*} \int_{K_{1}}|\nabla u|^{p}\phi \ dx \leq f(\|u\|_{L^{\infty}(K_{1})})\|u\|_{L^{\infty}(K_{1})}|K_{1}|+ \frac{c(p-1)}{p}\int_{K_{1}}|\nabla u|^{p} \ dx + \frac{1}{pc^{p-1}}\int_{K_{1}} u^{p}|\nabla \phi|^{p} \ dx. \end{equation*} Choosing $c=\frac{p}{2(p-1)}$, we get \begin{equation*} \int_{K}|\nabla u|^{p} \ dx\leq 2f(\|u\|_{L^{\infty}(K_{1})})\|u\|_{L^{\infty}(K_{1})}|K_{1}|+ \frac{2^{p}(p-1)^{p-1}}{p^{p}}\|\nabla \phi\|^{p}_{L^{p}(K_{1})}\|u\|^{p}_{L^{\infty}(K_{1})}. \end{equation*} The result follows from the previous proposition and compactness. \end{proof} The existence result for a large solution in a bounded domain is stated in the following theorem in a compact form. The proof and related explanations can be found in \cite[Section 6]{diaz1993} and \cite{materopaper}. \begin{theorem}[\cite{materopaper},\cite{diaz1993}] Let $\Omega$ be a bounded domain in $\rn$, $n\geq 2$, whose boundary is $C^{2}$, and let $u_{m}$ solve \begin{align}\label{finite_dirchlet} \begin{cases} \Delta_{p}u_{m}- f(u_{m})=0 \quad &\text{in} ~\Omega \\ u_{m}=m \quad &\text{on}~\partial\Omega, \end{cases} \end{align} with $f$ satisfying \ref{A4}. \begin{enumerate} \item Then $\{u_{m}\}\subset L_{loc}^{\infty}(\Omega)$ is bounded.
\item Let $u(x)=\sup_{m} u_{m}(x)$; then $u\in W_{loc}^{1,\infty}(\Omega)$, and $u$ solves \begin{equation}\label{singular_bdry} \begin{cases} \Delta_{p}u=f(u)\quad &\text{in } \Omega\\ u(x)\to \infty \quad &\text{as } x\to\zeta\in\partial\Omega. \end{cases} \end{equation} \end{enumerate} \end{theorem} Existence and $C^{1}$ regularity of solutions to \eqref{finite_dirchlet} can be found in \cite{diaz1985}. The uniqueness of boundary blow-up solutions can be formalized by looking at their asymptotics at the boundary; one needs to assume an additional condition relating the operator and the force function. \begin{enumerate} \myitem{$\textbf{(A5)}$}\label{A5} $f$ is such that \begin{equation*} \liminf_{t\to\infty}\frac{\Psi_{p}(\beta t)}{\Psi_{p}(t)}>1\quad \text{for all } \beta\in (0,1) \quad (\Psi_{p} \text{ defined in \ref{A1}}). \end{equation*} \end{enumerate} \begin{theorem}[\cite{materopaper}]\label{thrm_matero_asym} Let $\Omega$ be a bounded domain in $\rn$ with $C^{2}$ boundary and let $u\in W_{loc}^{1,p}(\Omega)$ be a solution of \eqref{singular_bdry}, where $f$ satisfies \ref{A4} and \ref{A5}. Then \begin{equation}\label{eqn_matero_asymptotic} \displaystyle \lim_{x\to \partial\Omega}\frac{u(x)}{\Phi_{p}(d(x))}=1, \end{equation} where $\Phi_{p}$ is the inverse of $\Psi_{p}$ and $d(x)=\mathrm{dist}(x,\partial\Omega)$. \end{theorem} \begin{proof} Since $\Omega$ is $C^{2}$, it satisfies uniform interior and exterior ball conditions. For $z\in\partial\Omega$, let $R>0$ be such that $B^{z}_{R}(x_{0})$, for some $x_{0}\in \Omega$, is the interior ball and $\Tilde{B}_{R}^{z}(y_{0})$, for some $y_{0}\in \Omega^{c}$, is the exterior ball associated with $z$.
For $0<r<R$, we consider two barriers: $w_{r}\in W_{loc}^{1,p}(B_{r}^{z}(x_{0}))$, a radially symmetric, increasing function on $B_{r}^{z}(x_{0})$, and $v_{r}\in W_{loc}^{1,p}(\Tilde{B}^{z}_{r,2R}(y_{0}))$, a radially symmetric, decreasing function on the annulus $\Tilde{B}^{z}_{r,2R}(y_{0}):=\Tilde{B}^{z}_{2R}(y_{0})\setminus \Tilde{B}_{r}^{z}(y_{0})$. They respectively solve \begin{equation*} \begin{cases} \Delta_{p}w_{r}(t)=f(w_{r}(t)) \quad &\text{in } [0,r)\\ w_{r}(t)\to \infty \quad &\text{as } t\to r, \end{cases} \end{equation*} and \begin{equation*} \begin{cases} \Delta_{p}v_{r}(t)=f(v_{r}(t)),\quad &\text{for } t\in (r,2R)\\ v_{r}(t)\to\infty, \quad &\text{as } t\to r\\ v_{r}(t)\to 0, \quad &\text{as } t\to 2R, \end{cases} \end{equation*} where $f$ satisfies \ref{A4}. The barriers $w_{r}$ and $v_{r}$ verify \begin{equation*} \displaystyle \lim_{t\to r} \frac{\Psi_{p}(w_{r}(t))}{r-t}= 1 \end{equation*} and \begin{equation*} \displaystyle \lim_{t\to r} \frac{\Psi_{p}(v_{r}(t))}{t-r}\leq 1. \end{equation*} For the existence of such functions, we refer to \cite[Corollary 3.5, Proposition 4.2 and Proposition 4.3]{materopaper}. The comparison principle implies $u\leq w_{r}$ on $B_{r}^{z}(x_{0})$ and hence, as $\Psi_{p}$ is decreasing, $\Psi_{p}(u)\geq \Psi_{p}(w_{r})$ on $B_{r}^{z}(x_{0})$. Therefore, writing $t$ for the radial coordinate of $x$ in $B_{r}^{z}(x_{0})$, \begin{equation*} \frac{\Psi_{p}(u(x))}{d(x,z)}\geq \frac{\Psi_{p}(w_{r}(t))}{r-t}\cdot\frac{r-t}{d(x,z)}. \end{equation*} Letting $r\to R$, as $d(x)\leq d(x,z)$, \begin{equation}\label{eqn_matero inequality} \displaystyle \lim_{x\to z}\frac{\Psi_{p}(u(x))}{d(x)}\geq 1. \end{equation} For the reverse inequality, we compare $u$ and $v_{r}$ on $\Tilde{B}^{z}_{r,2R}(y_{0})\cap \Omega$; letting $r\to R$ and using \eqref{eqn_matero inequality}, \begin{equation*} \displaystyle \lim_{x\to z}\frac{\Psi_{p}(u(x))}{d(x)}= 1. \end{equation*} By condition \ref{A5} on $f$, one can derive \eqref{eqn_matero_asymptotic}.
\end{proof} This leads to uniqueness: \begin{theorem}\label{thrm_uniqueness_bdd_domain} Let $\Omega$, $f$ be as in Theorem \ref{thrm_matero_asym}; then \eqref{singular_bdry} admits a unique solution. \end{theorem} \begin{proof} If $u$ and $v$ are two solutions of \eqref{singular_bdry}, then by \eqref{eqn_matero_asymptotic} \begin{equation*} \displaystyle \lim_{x\to\partial\Omega} \frac{u(x)}{v(x)}=1, \end{equation*} and using the comparison principle we derive $u=v$. \end{proof} In \cite{materopaper}, the solutions are not necessarily in $C(\Omega)$, and the regularity of the solution considered is $C^{1,\alpha}(K)$ for some compact subset $K$ of $\Omega$ and some $\alpha\in(0,1)$. The comparison principle used is local in nature, as given in \cite{diaz1985}; thus, uniqueness needs further assumptions on $f$. \section{Infinite Cylinder}\label{section 4} In this section, we address the existence and uniqueness of large solutions on infinite cylinders. As in \cite{bandle2013large}, we would like to obtain existence by approximation with solutions on the finite cylinders $B^{n-1}_{1}(0)\times (-\ell,\ell)$. However, to work with a $C^{2}$ bounded domain, we modify $B^{n-1}_{1}(0)\times (-\ell,\ell)$ by attaching suitable domains on either side. Let \begin{align*} S_{\ell}=\{x\in\rn:\sum_{1}^{n-1}x_{i}^{2}<1, -\ell<x_{n}<\ell\}\cup\{x\in\rn:\sum_{1}^{n-1}x_{i}^{4}+(x_{n}\pm \ell)^{4}<1\}, \end{align*} whose boundary is at least $C^{2}$. We note that $S_{\ell}\to S$ as $\ell\to\infty$, where $S=(B^{n-1}_{1}(0)\times \mathbb{R}) \subset \rn$.\\ By the comparison principle, we have the following result. \begin{proposition} Let $f$ satisfy \ref{A4} and let $u_{\ell}$ solve \eqref{singular_bdry} on $S_{\ell}$ in the weak sense.
Then, \begin{itemize} \item[(\textit{i})] $u_{\ell}\geq 0~\forall~\ell>0$ \quad (as $f(0)=0$); \item[(\textit{ii})] $\{u_{\ell}(x)\}_{\ell}$ is a decreasing sequence in the following sense:\\ for $\ell(x)>0$ with $x\in S_{\ell(x)}$, $\{u_{\ell}(x)\}_{\ell\geq\ell(x)}$ is decreasing in $\ell$. \end{itemize} \end{proposition} Define \begin{equation}\label{eqn_dfn_of_u} u(x)=\inf_{\ell\geq\ell(x)}u_{\ell}(x) \quad \forall x\in S. \end{equation} Then by Proposition \ref{prop_weak_local_bound}, $u\in L^{\infty}(\Tilde{S})$ for every $\Tilde{S}$ compactly contained in $S$ ($\Tilde{S}\subset\subset S$). Thus, \begin{equation}\label{eqn_lp convergence} u_{\ell}\to u \quad \text{in}~ L^{q}(\Tilde{S})\quad \text{for } 1\leq q< \infty \quad \text{and for any} \quad \Tilde{S}\subset\subset S. \end{equation} Our aim is to find a large solution to the following problem: \begin{equation}\label{eqn_large solution for plaplacianon} \begin{cases} \Delta_{p}v= f(v)\quad &\text{in}\quad S\\ v(x)\to \infty \quad &\text{as}~x\to \zeta\in\partial S. \end{cases} \end{equation} The function $u$ defined in \eqref{eqn_dfn_of_u} is a candidate, but one needs to pass to the limit in the weak formulation. \begin{theorem} For $u_{\ell}$, $u$ and $\Tilde{S}$ as above and $f$ satisfying \ref{A4}, we have \begin{align*} u_{\ell}\rightharpoonup u \quad \text{in}~W^{1,p}(\Tilde{S}). \end{align*} Thus $u\in W_{loc}^{1,p}(S)$. \end{theorem} \begin{proof} The result follows directly from Propositions \ref{prop_weak_local_bound} and \ref{prop_grad_lp_bound}. \end{proof} This brings us to the crucial part: \begin{theorem}\label{thrm_large solution existence} For $f$ satisfying \ref{A4} and $p\geq 2$, the function $u$ defined in \eqref{eqn_dfn_of_u} solves \eqref{eqn_large solution for plaplacianon} in the weak sense. \end{theorem} \begin{proof} Note that $u(x) \to \infty$ as $x\to \partial S$ by the definition of $u$.
By Proposition \ref{prop_weak_local_bound}, $\{u_{\ell}\}$ is, for large $\ell$, uniformly bounded on $\Tilde{S}$, and hence $\{f(u_{\ell})\}$ is uniformly bounded on $\Tilde{S}$. For any $\phi\in C_{c}^{\infty}(S)$, taking $\Tilde{S}$ such that $\mathrm{supp}(\phi)\subset\Tilde{S}$, the dominated convergence theorem implies that \begin{align*} \int_{\Tilde{S}}f(u_{\ell})\phi\to \int_{\Tilde{S}}f(u)\phi. \end{align*} Since each $u_{\ell}$ is a weak solution, this means \begin{equation}\label{eqn_int convergence} -\int_{\Tilde{S}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla\phi\to \int_{\Tilde{S}}f(u)\phi. \end{equation} We aim to achieve \begin{equation}\label{eqn_weak convergence of operator} \int_{\Tilde{S}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla\phi\to \int_{\Tilde{S}}|\nabla u|^{p-2}\nabla u\cdot \nabla\phi, \end{equation} by showing that $u_{\ell}\to u$ in $W^{1,p}(\Tilde{S})$. We use a vector inequality (see \cite{lindqvist}): for $p\geq 2$ there is a constant $c=c(p)>0$ such that, for all $x,y\in \rn$, \begin{equation}\label{vector inequlity} |x-y|^{p}\leq c ~ (|x|^{p-2}x-|y|^{p-2}y) \cdot (x-y). \end{equation} Choose $\Tilde{S}_{1}$ with $\Tilde{S}\subset\subset\Tilde{S}_{1}\subset\subset S$ and a cutoff function $\psi\in C_{c}^{\infty}(\Tilde{S}_{1})$ such that $\psi=1$ on $\Tilde{S}$ and $0\leq \psi \leq 1$; then \begin{equation}\label{eqn_strong convergence} \begin{split} \int_{\Tilde{S}}|\nabla u_{\ell}-\nabla u|^{p} &\leq \int_{\Tilde{S}_{1}}\psi|\nabla u_{\ell}-\nabla u|^{p}\\ &\leq c \int_{\Tilde{S}_{1}}\psi \{(|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}-|\nabla u|^{p-2}\nabla u) \cdot (\nabla u_{\ell}-\nabla u)\}. \end{split} \end{equation} Since $\phi\mapsto \int_{\Tilde{S}_{1}}\psi\{|\nabla u|^{p-2}\nabla u \cdot \nabla \phi\}$ is a continuous linear functional on $W^{1,p}(\Tilde{S_{1}})$, by weak convergence \begin{align*} \int_{\Tilde{S}_{1}}\psi |\nabla u|^{p-2}\nabla u \cdot \nabla (u_{\ell}-u)\to 0\quad \text{as} \quad \ell\to\infty.
\end{align*} Consider \begin{align*} \int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell} \cdot \psi \nabla (u_{\ell}-u) &=\int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell} \cdot \{\nabla(\psi(u_\ell-u))-(u_{\ell}-u)\nabla \psi\}\\ &=I_{1}-I_{2}. \end{align*} As $\|\nabla u_{\ell}\|_{L^{p}}$ is uniformly bounded with respect to $\ell$ by Proposition \ref{prop_grad_lp_bound}, there is a $K>0$ such that \begin{align*} |I_{2}|&\leq \int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-1}|\nabla \psi||u_{\ell}-u|\leq \sup_{\Tilde{S}_{1}}|\nabla\psi| \|\nabla u_{\ell}\|_{L^{p}}^{p-1} \|u_{\ell}-u\|_{L^{p}} \leq K \|u_{\ell}-u\|_{L^{p}} \xrightarrow{\ell \to \infty} 0. \end{align*} Taking care of $I_{1}$ is more delicate; we rewrite it as \begin{align*} I_{1} =\int_{\Tilde{S}_{1}} \big(|\nabla u_{\ell}|^{p-2}\nabla u_{\ell} \cdot \nabla (\psi(u_{\ell}-u))+ f(u)(u_{\ell}-u)\psi \big) - \int_{\Tilde{S}_{1}} f(u)(u_{\ell}-u)\psi . \end{align*} The second integral on the right-hand side tends to $0$ as $\ell\to \infty$ by \eqref{eqn_lp convergence}. By \eqref{eqn_int convergence} and density, \begin{align*} -\int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla\phi\to \int_{\Tilde{S}_{1}}f(u)\phi, \quad \forall~\phi\in W_{0}^{1,p}(\Tilde{S}_{1}), \end{align*} which implies \begin{align*} \biggl|\int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla\phi+ f(u)\phi\biggl|\leq \mathcal{O}_{\ell}(1)\|\phi\|_{W_{0}^{1,p}}\quad\forall~\phi\in W_{0}^{1,p}(\Tilde{S}_{1}). \end{align*} Taking $\phi=(u_{\ell}-u)\psi$, we get \begin{align*} \biggl|\int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla\{(u_{\ell}-u)\psi\}+ f(u)(u_{\ell}-u)\psi\biggl|&\leq \mathcal{O}_{\ell}(1)\|(u_{\ell}-u)\psi\|_{W_{0}^{1,p}(\Tilde{S_{1}})} \leq \mathcal{O}_{\ell}(1) M \end{align*} for some $M>0$, as $u_{\ell},u$ are bounded in $W^{1,p}(\Tilde{S}_{1})$.
Thus \begin{equation*} \int_{\Tilde{S}_{1}}|\nabla u_{\ell}|^{p-2}\nabla u_{\ell}\cdot \nabla((u_{\ell}-u)\psi)- f(u)(u_{\ell}-u)\psi \to 0 \quad \text{as}~\ell\to \infty. \end{equation*} Hence $I_1 \to 0$ as $\ell \to \infty$. Then $u_{\ell}\to u$ in $W^{1,p}(\Tilde{S})$ follows from \eqref{eqn_strong convergence}. Passing to a subsequence and applying the dominated convergence theorem, \eqref{eqn_weak convergence of operator} holds. The proof is complete by \eqref{eqn_int convergence} and \eqref{eqn_weak convergence of operator}. \end{proof} Similar arguments can also be found in \cite[Theorem 6.4]{diaz1993}, \cite{materopaper}, and in the Browder--Minty method from \cite{evans}. \medskip Finally, we show that the large solution of \eqref{eqn_large solution for plaplacianon} coincides with the large solution of the same problem on $B^{n-1}_{1}(0) \subset \mathbb{R}^{n-1}$, as stated in the next theorem. \begin{theorem}\label{thrm_large solution on S} Assume \ref{A4} and let $u$ be as in Theorem \ref{thrm_large solution existence}; then $u$ is independent of the $n^{\text{th}}$ variable, i.e., $u(x)=u(x')$ where $x=(x',x_{n})$. Moreover, if we denote $v(x')=u(x',x_{n})$ for a fixed $x_{n}$, then $v$ solves \begin{equation}\label{Eqn_crossection} \begin{cases} \Delta_{p}v- f(v)=0 \quad &\text{in}~B^{n-1}_{1}(0)\\ v\to \infty \quad &\text{as}~\mathrm{dist}(x',\partial B^{n-1}_{1}(0))\to 0. \end{cases} \end{equation} \begin{remark} The large solution on the infinite cylinder coincides with the large solution on the cross-section $B^{n-1}_{1}(0)$. In other words, solutions of \eqref{singular_bdry} on $S_{\ell}$ asymptotically converge to the large solution of the cross-sectional problem. \end{remark} \end{theorem} \begin{proof}[Proof of Theorem \ref{thrm_large solution on S}] Let $\tau=(0,\cdots,0,\tau_{n})$ with $\tau_{n}\in \mathbb{R}$, and define \begin{equation*} u_{\tau}(x)=u(x+\tau).
\end{equation*} As $S= B_{1}^{n-1}(0) \times \mathbb{R}$ and $u$ is a weak solution of \eqref{eqn_large solution for plaplacianon}, for any $\psi \in C_c^{\infty}(S)$ we have \begin{align*} & \int_{S}|\nabla u_{\tau}(x)|^{p-2}\nabla u_{\tau}(x)\cdot \nabla\psi(x)\ dx =\int_{S}|\nabla u(x)|^{p-2}\nabla u(x)\cdot \nabla\psi(x)\ dx\\ &=- \int_{S}f(u(x))\psi(x)\ dx=- \int_{S}f(u(x+\tau))\psi(x)\ dx=- \int_{S}f(u_{\tau}(x))\psi(x)\ dx. \end{align*} Thus $u_{\tau}$ also solves \eqref{eqn_large solution for plaplacianon}. Then by the comparison principle (Proposition \ref{prop_comparison}), $u(x+\tau)\leq u(x)$. Replacing $\tau$ by $-\tau$ we get $u(x-\tau)\leq u(x)$. Then for $y=x+\tau$ we have $u(x)=u(y-\tau)\leq u(y)=u(x+\tau)$. Thus $u(x)$ is independent of the last variable. \smallskip To see that $v$ is the large solution on $B_{1}^{n-1}(0)$, consider $\phi\in C_{c}^{\infty}(B^{n-1}_{1}(0))$, take $\Tilde{\phi}\in C_{c}^{\infty} (\re)$ such that $\int_{\mathbb{R}} \Tilde{\phi} =1$, and define $\psi(x)=\phi(x')\Tilde{\phi}(x_{n})$; clearly $\psi\in C_{c}^{\infty}(S)$. Then \begin{equation}\label{thrm_test_fn_v} -\int_{S}|\nabla u(x)|^{p-2}\nabla u(x)\cdot \big(\Tilde{\phi}(x_n)\, \nabla_{x'}\phi (x'),\phi(x') \, \Tilde{\phi}'(x_n)\big)\ dx=\int_{S}f(u(x))\psi(x)\ dx, \end{equation} where $\nabla u=(\nabla_{x'}v,0)$. Now separating the domain of integration in the $x'$ and $x_n$ variables, and using $\int_{\mathbb{R}} \Tilde{\phi} =1$, we find \begin{align*} -\int_{B^{n-1}_{1}(0)}|\nabla_{x'}v(x')|^{p-2}\nabla_{x'}v(x')\cdot \nabla_{x'}\phi(x')\ dx' = \int_{B^{n-1}_{1}(0)}f(v(x'))\phi(x')\ dx'. \end{align*} This completes the proof. \end{proof} \begin{theorem}[Uniqueness] Assume $f$ satisfies \ref{A5} and \ref{A4}; then the equation \eqref{eqn_large solution for plaplacianon} has a unique solution. \end{theorem} \begin{proof} The result follows from Theorem \ref{thrm_uniqueness_bdd_domain} and Theorem \ref{thrm_large solution on S}.
\end{proof} \section{Extension}\label{section 5} In this section, we generalize the results from the preceding section to \begin{itemize} \item Operators of the form \begin{equation}\label{eqn_general} \operatorname{div}(Q(|\nabla u|)\nabla u)- f(u)=0 \quad \text{in} ~\Omega, \end{equation} where $Q:(0,\infty)\to (0,\infty)$ is a continuously differentiable function such that $Q(0^{+})=0$; denoting $a(r)=Q(r)r$, we assume that $a'(r)>0$ for $r>0$. \item Domains that become unbounded in more than one direction, but not all, with $C^{2}$ cross-section. \end{itemize} In both cases, we discuss the differences in the proofs of the results given in Section \ref{section 4}. \medskip While considering \eqref{eqn_general}, one needs to modify the notion of weak solution; first we notice that $|\nabla u|^{p-2}\nabla u$ gives a bounded linear functional on $W^{1,p}(\Omega)$ whenever $u\in W^{1,p}(\Omega)$, which need not be the case for $Q(|\nabla u|)\nabla u$; thus we have the following definition. \begin{definition} Let $1<q<\infty$ and $q'$ be its conjugate. Let $\Omega$ be an open set in $\rn$. A function $u\in\{v\in W^{1,q}_{loc}(\Omega)~|~Q(|\nabla v|)\nabla v\in (L_{loc}^{q'}(\Omega))^{n}\}$ is called a \textit{weak solution} to \eqref{eqn_general} if for any $\Omega'\subset\subset\Omega$ \begin{align*} \int_{\Omega'}Q(|\nabla u|)\nabla u\cdot \nabla v=\int_{\Omega'}f(u)v \quad \forall~v\in W_{0}^{1,q}(\Omega') \end{align*} whenever $f(u)\in L_{loc}^{q'}(\Omega)$. \end{definition} For $Q(r)=r^{p-2}$ we recover the $p$-Laplacian. Another example for $Q$ is $(1+r^{2})^{-\frac{1}{2}}$; see \cite[Remark 2.1]{diaz1993}. The definitions of weak supersolution and weak subsolution follow parallel to Definition \ref{dfn_solution} and the above modification. The existence result on the infinite cylinder for \eqref{eqn_general} closely follows the results in Section \ref{section 4}.
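For the second example above, the structural hypothesis on $a$ can be verified directly; the following routine computation is recorded only for the reader's convenience.

```latex
% Q(r) = (1+r^2)^{-1/2}, so a(r) = Q(r) r = r / sqrt(1+r^2):
a(r)=\frac{r}{\sqrt{1+r^{2}}},\qquad
a'(r)=\frac{\sqrt{1+r^{2}}-\dfrac{r^{2}}{\sqrt{1+r^{2}}}}{1+r^{2}}
     =\frac{1}{(1+r^{2})^{3/2}}>0\quad\text{for }r>0,
% and a(0^+) = 0, so a is continuous, strictly increasing,
% and vanishes at the origin.
```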
\begin{theorem}\label{thrm_Q existence} Let $S_{\ell}(\omega)$ and $S(\omega)$ be as above, and let $u_{\ell}$ be a large solution to \eqref{eqn_general} on $S_{\ell}(\omega)$ with $f$ satisfying \ref{A4}. Then there is a function $u\in W_{loc}^{1,\infty}(S(\omega))$ such that \begin{align*} u_{\ell}\rightharpoonup u \quad \text{in}~W^{1,q}_{loc}(S(\omega))\quad \text{where} ~1\leq q<\infty, \end{align*} and \begin{align*} u_{\ell}\overset{\ast}{\rightharpoonup} u \quad \text{in} ~W_{loc}^{1,\infty}(S(\omega)). \end{align*} Moreover, $u$ solves \eqref{eqn_mainQ}. \end{theorem} \begin{proof} An essential ingredient for the proof is a local uniform bound for the gradient of the solution. See \cite[Corollary 5.5]{diaz1993} for such an estimate. By replacing $|\nabla u|^{p-2}\nabla u$ with $Q(|\nabla u|)\nabla u$ in the proof of Proposition \ref{prop_weak_local_bound}, we get a local bound for the solution $u$. The first part of the result follows from the Banach--Alaoglu theorem and separability. To show that $u$ solves \eqref{eqn_mainQ}, one needs a similar inequality for the nonlinearity in the gradient, as in \eqref{vector inequlity}, to obtain local strong convergence of $u_{\ell}$. Otherwise, one has to work with weak convergence. The idea can be found in the proof of the existence of a large solution on a bounded domain in \cite[Theorem 6.4]{diaz1993}. \end{proof} This solution is translation invariant with respect to the unbounded variable. \begin{theorem} Let $u$ be as in Theorem \ref{thrm_Q existence}; then $u$ is independent of the $n^{\text{th}}$ variable. Moreover, if we denote $v(x')=u(x',x_{n})$ for a fixed $x_{n}$, then $v$ solves \begin{equation*} \begin{cases} \operatorname{div}(Q(|\nabla v|)\nabla v)- f(v)=0 \quad &\text{in}~\omega\\ v\to \infty \quad &\text{as}~\mathrm{dist}(x',\partial \omega)\to 0.
\end{cases} \end{equation*} ``The large solution on the infinite cylinder coincides with the large solution on the cross-section $\omega$.'' \end{theorem} The proof is similar to that of Theorem \ref{thrm_large solution on S}. \begin{remark} The large solution on $S(\omega)$ is unique whenever the large solution on $\omega$ is unique. \end{remark} For completeness, we state the comparison principle for \eqref{eqn_general}; see \cite[Theorem 2.2]{diaz1993} for the proof. \begin{proposition}[Comparison Principle] Let $u,v\in W_{0}^{1,p}(\Omega)\cap C(\Omega)$ be two functions such that \begin{align*} \operatorname{div}(Q(|\nabla u|)\nabla u)- f(u)\geq \operatorname{div}(Q(|\nabla v|)\nabla v)- f(v)\quad \text{in}~\Omega, \end{align*} and that \begin{align*} \limsup \frac{u(x)}{v(x)}\leq 1\quad \text{as}~\mathrm{dist}(x,\partial\Omega)\to 0. \end{align*} Then $u\leq v$ in $\Omega$. In particular, if $u$ is a weak subsolution and $v$ is a weak supersolution, the result holds. \end{proposition} \medskip In order to discuss the results for domains becoming unbounded in more than one direction, consider the infinite cylindrical domain $\tilde{S}(\omega)=\omega \times \re^{n-m}$, where $1\leq m<n$ and $\omega\subset \re^{m}$ is an open bounded domain with $C^{2}$ boundary. Let $\tilde{S}_{\ell}(\omega)$ be a $C^{2}$ bounded domain such that $\omega\times (-\ell,\ell)^{n-m}\subset \tilde{S}_{\ell}(\omega)$, and $\tilde{S}_{\ell}(\omega)\to \tilde{S}(\omega)$ as $\ell\to \infty$. \begin{theorem} Let $f$ satisfy \ref{A4} and let $w_{\ell}$ be a solution of \eqref{singular_bdry} on $\tilde{S}_{\ell}(\omega)$; define $\displaystyle w(x)=\lim_{\ell\to\infty}w_{\ell}(x)$ for $x\in \tilde{S}(\omega)$. Then \begin{enumerate} \item The function $w\in W^{1,p}_{loc}(\tilde{S}(\omega))$ and $w_{\ell}\rightharpoonup w$ in $W^{1,p}_{loc}(\tilde{S}(\omega))$. \item The function $w$ solves \eqref{singular_bdry} on $\tilde{S}(\omega)$. \item Let $x=(x_{1},x_{2})$, where $x_{1}\in \omega$ and $x_{2}\in \re^{n-m}$.
The solution $w$ is independent of the $x_{2}$ variable, and $w(\cdot,x_{2})$ solves \eqref{Eqn_crossection} on $\omega$ for each fixed $x_{2}\in \re^{n-m}$. \item In addition, if $f$ satisfies \ref{A5}, then \eqref{singular_bdry} on $\tilde{S}(\omega)$ has a unique solution. \end{enumerate} \end{theorem} Apart from the usual technical generalizations, the proofs are similar to those in Section \ref{section 4}, and we omit them here. \section*{Acknowledgements} I.C. was supported by the DST-INDIA INSPIRE faculty fellowship (IFA22-MA187). N.N.D. is supported by PMRF grant (2302262). \renewcommand\refname{Bibliography} \bibliographystyle{abbrv} \bibliography{ref.bib} \end{document}
2412.19008v1
http://arxiv.org/abs/2412.19008v1
On the minimal parabolic induction
\documentclass[a4paper]{article} \usepackage[scale=0.8]{geometry} \usepackage{amsmath} \usepackage{titlesec} \usepackage{amsthm} \usepackage{amssymb} \usepackage{array} \usepackage{cite} \usepackage{color} \usepackage{mathrsfs} \usepackage{graphicx} \usepackage{subfigure} \usepackage[version=4]{mhchem} \usepackage{textcomp} \usepackage[all]{xy} \usepackage{tikz-cd} \usepackage{enumerate} \usepackage{float} \usepackage{ulem} \usepackage[colorlinks, linkcolor=blue, citecolor=red]{hyperref} \normalem \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{definition}{Definition}[section] \newtheorem{example}{Example}[section] \newtheorem{observation}{Observation}[section] \newtheorem{main}{Main Theorem} \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \tikzcdset{scale cd/.style={every label/.append style={scale=#1}, cells={nodes={scale=#1}}}} \newcommand{\n}{\mathfrak{n}} \newcommand{\g}{\mathfrak{g}} \newcommand{\C}{\mathbb{C}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\End}{\operatorname{End}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\id}{\operatorname{id}} \newcommand{\Ind}{\operatorname{Ind}} \renewcommand{\b}{\mathfrak{b}} \newcommand{\h}{\mathfrak{h}} \newcommand{\Res}{\operatorname{Res}} \newcommand{\p}{\mathfrak{p}} \renewcommand{\l}{\mathfrak{l}} \newcommand{\Ext}{\operatorname{Ext}} \renewcommand{\O}{\mathcal{O}} \renewcommand{\u}{\mathfrak{u}} \newcommand{\im}{\operatorname{im}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\s}{\mathfrak{s}} \newcommand{\z}{\mathfrak{z}} \newcommand{\incl}{\hookrightarrow} \newcommand{\proj}{\twoheadrightarrow} \renewcommand{\sl}{\mathfrak{sl}} \newcommand{\Ann}{\operatorname{Ann}} \newcommand{\coker}{\operatorname{coker}} \newcommand{\I}{\mathbf{I}} \newcommand{\R}{\mathbf{R}} \newcommand{\vac}{|0\rangle} 
\newcommand{\J}{\mathbb{J}} \newcommand{\V}{\mathbb{V}} \renewcommand{\L}{\mathcal{L}} \newcommand{\M}{\mathcal{M}} \newcommand{\tosim}{\xrightarrow{\sim}} \renewcommand{\H}{\operatorname{H}} \newcommand{\D}{\mathcal{D}} \newcommand{\SL}{\operatorname{SL}} \renewcommand{\P}{\mathbb{P}} \newcommand{\A}{\mathbb{A}} \newcommand{\Aut}{\operatorname{Aut}} \renewcommand{\k}{\mathfrak{k}} \newcommand{\Ad}{\operatorname{Ad}} \newcommand{\W}{\mathcal{W}} \newcommand{\T}{\mathcal{T}} \newcommand{\Adm}{\operatorname{Adm}} \newcommand{\Fl}{\mathcal{F}\ell} \newcommand{\Mod}{\text{-}\mathsf{Mod}} \newcommand{\wtMod}{\text{-}\mathsf{wtMod}} \renewcommand{\mod}{\text{-}\mathsf{mod}} \newcommand{\F}{\mathcal{F}} \newcommand{\pr}{\operatorname{pr}} \newcommand{\ord}{\text{ord}} \newcommand{\PBW}{\text{PBW}} \newcommand{\gr}{\operatorname{gr}} \newcommand{\N}{\mathcal{N}} \renewcommand{\J}{\operatorname{J}} \renewcommand{\a}{\mathfrak{a}} \newcommand{\fin}{\text{fin}} \newcommand{\Oblv}{\operatorname{Oblv}} \begin{document} \title{On the minimal parabolic induction} \date{\today} \author{Xinyu Li} \maketitle \begin{abstract} Motivated by Beilinson--Bernstein's proof of the Jantzen conjectures \cite{beilinson1993proof}, we define the minimal parabolic induction functor for Kac--Moody algebras, and establish some basic properties. As applications of the formal theory, we examine first extension groups between simple highest weight modules in the category of weight modules, and analyze the annihilators of some simple highest weight modules. \end{abstract} \tableofcontents \setcounter{section}{-1} \section{Introduction} \subsection{Motivation} Let $\g$ be a Kac--Moody algebra. For two weights $\lambda,\mu$ of $\g$, it is a basic problem in representation theory to calculate the first extension groups between $L(\lambda)$ and $L(\mu)$ in the category of $\g$-weight modules (or the BGG category $\O$).
In particular, such problems arise in recent studies on simple affine vertex algebras (for instance, \cite{kawasetsu2022relaxed} and \cite{arakawa2023weight}). However, the explicit result is not easy to calculate in general. Even for $\g$ being a finite dimensional semisimple Lie algebra, the answer depends on the Jantzen conjecture (cf. \cite{humphreys2008representations} Chapter 8.15.), which is a deep result in representation theory. In the finite case, the Jantzen conjecture was proved by Beilinson--Bernstein \cite{beilinson1993proof}, using their localisation theorem \cite{beilinson1981localisation}, the weight filtration of $\ell$-adic mixed perverse sheaves \cite{beilinson1982faisceaux}, and the monodromy weight filtration of nearby cycles \cite{deligne1980conjecture}. We do not have a Beilinson--Bernstein type localisation theorem for a general Kac--Moody algebra $\g$. Therefore, some more tools need to be developed to calculate the extension groups. Instead of trying to obtain a direct formula, our idea is to reduce the calculation to some Levi subalgebra (which is usually of finite type, and hence we can apply the known results for finite dimensional reductive Lie algebras). This leads us to the construction of \textit{minimal parabolic induction}. Let us briefly describe our main constructions and main results below. \subsection{Main results} Let $\g$ be a Kac--Moody algebra with a fixed choice $\Pi$ of simple roots. For any subset $\Xi\subset\Pi$, we have the corresponding standard (resp. opposite) parabolic subalgebra $\p^+_\Xi$ (resp. $\p^-_\Xi$), and the associated Levi subalgebra $\l_\Xi$. The parabolic induction functor $$\Ind_{\Xi,!}\colon\l_\Xi\Mod\to\g\Mod,N\mapsto U\g\otimes_{U\p^+_\Xi} N,$$ which maps $\l_\Xi$-Verma modules to $\g$-Verma modules, is intensively studied in representation theory. Here we inflate an $\l_\Xi$-module $N$ to a $\p^+_\Xi$-module through the projection $\p^+_\Xi\proj\l_\Xi$. 
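To fix ideas, consider the degenerate parabolic type $\Xi=\emptyset$ (a standard special case, recalled only for orientation): then $\p^{+}_{\emptyset}=\b$ is the Borel subalgebra, $\l_{\emptyset}=\h$, and $!$-induction produces Verma modules.

```latex
% Xi = \emptyset: !-induction from the Cartan subalgebra recovers Verma modules.
\Ind_{\emptyset,!}(\C_{\lambda})
  = U\g\otimes_{U\b}\C_{\lambda}
  = M(\lambda),
\qquad\lambda\in\h^{*},
% where C_lambda denotes the one-dimensional h-module of weight lambda.
```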
Equally important is the parabolic coinduction functor $$\Ind_{\Xi,*}\colon\l_\Xi\Mod\to\g\Mod,N\mapsto\Hom_{U\p^-_\Xi}(U\g,N),$$ which maps completed $\l_\Xi$-coVerma modules to completed $\g$-coVerma modules. Here we inflate an $\l_\Xi$-module $N$ to a $\p^-_\Xi$-module through the projection $\p^-_\Xi\proj\l_\Xi$. In the other direction, there is the parabolic restriction functor $$\Res_\Xi^!\colon\g\Mod\to\l_\Xi\Mod,M\mapsto \Hom_{U\u^+_\Xi}(\C,M),$$ and the parabolic corestriction functor $$\Res_\Xi^*\colon\g\Mod\to\l_\Xi\Mod,M\mapsto \C\otimes_{U\u^-_\Xi}M,$$ where $\u^\pm_\Xi$ is the nilradical of $\p^\pm_\Xi$. We have adjoint pairs of functors $(\Ind_{\Xi,!},\Res_\Xi^!),(\Res_\Xi^*,\Ind_{\Xi,*})$. Moreover, $$\Res_\Xi^!\circ\Ind_{\Xi,*}=\Res_\Xi^*\circ\Ind_{\Xi,!}=\id.$$ This induces a natural transformation (see Definition~\ref{minimal-pinduction}) $$\Ind_{\Xi,!}\to\Ind_{\Xi,*}.$$ Let us denote by $\Ind_{\Xi,!*}$ the image of the above transformation. We call $\Ind_{\Xi,!*}$ the \textit{minimal parabolic induction}, or the \textit{intermediate parabolic induction}. In this paper, we establish some basic properties of minimal parabolic induction. For example, like the construction of IC sheaves, one may expect that $\Ind_{\Xi,!*}$ sends ``good'' (lisse) simple objects to simple objects. We confirm this expectation in Proposition~\ref{ind-simple} by showing that $\Ind_{\Xi,!*}$ maps a simple weight module to a simple weight module (see Section~\ref{weight-module} for the definition of weight modules and the category $\g\wtMod$). \begin{main} For any simple $\l_\Xi$-weight module $N$, $\Ind_{\Xi,!*}(N)$ is a simple $\g$-weight module. \end{main} We exhibit two applications of the general theory in Section~\ref{applications}. First, we explore the behavior of first extension groups between some simple highest weight modules under minimal parabolic induction. 
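Before turning to the applications, it may help to record the simplest instance of the Main Theorem (a standard special case, included as a consistency check): for $\Xi=\emptyset$, every simple $\h$-weight module is a one-dimensional $\C_{\lambda}$, and the map $\Ind_{\emptyset,!}(\C_{\lambda})\to\Ind_{\emptyset,*}(\C_{\lambda})$ is the canonical map $M(\lambda)\to M(\lambda)^{*}$ induced by the Shapovalov form, so

```latex
\Ind_{\emptyset,!*}(\C_{\lambda})
  =\im\bigl(M(\lambda)\to M(\lambda)^{*}\bigr)
  =M(\lambda)/N(\lambda)
  \simeq L(\lambda),
```

which is indeed simple.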
Under some rather technical conditions on two weights $\lambda,\mu$, we are able to prove that there is an isomorphism (Proposition~\ref{ind-ext}) $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))=\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)).$$ \begin{main} Let $\mu,\lambda$ be two weights of $\g$ such that $\mu-\lambda\in\Z\Xi$, then we always have an inclusion $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))\incl\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)).$$ Moreover, suppose that $\mu-\lambda\notin\Z_{\geq0}\Xi$ and $\lambda$ is $\Xi$-joyful (a technical notion introduced in Definition~\ref{joyful}), then the above inclusion is an isomorphism. \end{main} Another application is about annihilators. Assume $\l_\Xi$ is a finite dimensional reductive Lie algebra, then we are able to prove that $$\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda)))$$ for $\lambda$ being a $\rho$-anti-dominant integral weight for $\l_\Xi$ and $w\in W_\Xi$, the Weyl group of $\l_\Xi$. As a corollary, we show that under the same assumption, $\Ann_{U\g}(L(\lambda))\subset\Ann_{U\g}(L(w\cdot\lambda))$. \begin{main} Suppose $\Xi$ is of finite type, i.e. $\l_\Xi$ is a finite dimensional reductive Lie algebra. Let $\lambda\in\h^*$ be a weight such that $\langle\lambda+\rho,\alpha^\vee\rangle\in\Z_{\leq0}$ for any $\alpha^\vee\in\Xi^\vee$, then we have $$\Ann_{U\g}(L(\lambda))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda)))\subset\Ann_{U\g}(L(w\cdot\lambda))$$ for any $w\in W_\Xi$, the Weyl group of $\l_\Xi$. \end{main} To the knowledge of the author, this result is new for arbitrary Kac--Moody algebra.\footnote{When $\g$ is a finite dimensional semisimple Lie algebra, this result was proved by Vogan \cite{vogan1980ordering}. When $\g$ is an affine Kac--Moody algebra, this result was claimed by Dhillon, based on his work \cite{campbell2021affine}. 
Both of them used Harish-Chandra bimodules, whose general theory has not been developed for arbitrary Kac--Moody algebras yet.} \subsection{Convention} Throughout the paper, we work over an algebraically closed field $\C$ of characteristic $0$. Symbols $\otimes$ and $\Hom$ without subscripts mean the corresponding operations in the category of $\C$-vector spaces, that is, $\otimes_\C$ and $\Hom_\C$. For a ring (or a Lie algebra) $\Lambda$, we use $\Lambda\Mod$ to denote the category of left $\Lambda$-modules. By a module, we mean a left module. \subsection{Acknowledgement} The author would like to express his deepest gratitude to his advisors, Prof. Wenbin Yan and Prof. Peng Shan, without whose help the current work would have never been done. He especially thanks Peng Shan for pointing out many inaccuracies, due to the author's arrogance and over-optimism, in an earlier draft of the paper. There is a similar construction in the theory of vertex algebras, namely Zhu's induction. The author thanks Tomoyuki Arakawa, Thomas Creutzig, and Yongchang Zhu for correspondences on this subject. The author thanks Gurbir Dhillon for teaching him parabolic induction in categorical representation theory, Dingxin Zhang for teaching him $\ell$-adic mixed perverse sheaves, and Qixian Zhao for teaching him Duflo's theorem. \section{Kac--Moody algebra setup} \subsection{Parabolic type} Let $\g$ be a Kac--Moody algebra associated with the triple $(\h,\Pi,\Pi^\vee)$, where $\h$ is a fixed Cartan subalgebra, $\Pi\subset\h^*$ (resp. $\Pi^\vee\subset\h$) is the collection of simple roots (resp. coroots). Denoting by $\Delta$ the set of roots of $\g$, $\Delta^+$ (resp. $\Delta^-$) the set of positive (resp.
negative) roots of $\g$, we have the root decomposition $$\g=\h\oplus\bigoplus_{\alpha\in\Delta}\g_\alpha=\n^-\oplus\h\oplus\n^+,$$ where $$\n^+=\bigoplus_{\alpha\in\Delta^+}\g_\alpha,\n^-=\bigoplus_{\alpha\in\Delta^-}\g_\alpha.$$ Any subset $\Xi$ of $\Pi$ (together with the corresponding subset $\Xi^\vee$ of $\Pi^\vee$) defines a \textit{parabolic type}. More precisely, let $\Delta_\Xi=\Delta\cap\Z\Xi,\Delta^\pm_\Xi=\Delta_\Xi\cap\Delta^\pm$. Then we have the standard (opposite) parabolic subalgebra of type $\Xi$ $$\p^+_\Xi=\bigoplus_{\alpha\in\Delta^-_\Xi}\g_\alpha\oplus\h\oplus\n^+,\p^-_\Xi=\bigoplus_{\alpha\in\Delta^+_\Xi}\g_\alpha\oplus\h\oplus\n^-,$$ the (opposite) nilradical $$\u^\pm_\Xi=\bigoplus_{\alpha\in\Delta^\pm\backslash\Delta^\pm_\Xi}\g_\alpha,$$ and the Levi $$\l_\Xi=\h\oplus\bigoplus_{\alpha\in\Delta_\Xi}\g_\alpha.$$ \subsection{Transpose anti-involution} \label{transpose-anti-involution} Let us name the simple roots of $\g$ by $\alpha_1,\cdots,\alpha_n$. We fix the Chevalley generators $e_i\in\g_{\alpha_i}$, $f_i\in\g_{-\alpha_i}$. From the structure theory of Kac--Moody algebras (the Serre relations), it is well-known that the map $\tau(e_i)=f_i,\tau(f_i)=e_i$ and $\tau(h)=h$ for $h\in\h$ extends to an anti-involution of $U\g$, which we still denote by $\tau$. This anti-involution $\tau$ is called the \textit{transpose anti-involution}, in the sense that for $\g=\sl_n$ with the standard choice of Chevalley generators, $\tau(x)=x^t$ is the transpose of a matrix. The transpose anti-involution interchanges $\n^+$ and $\n^-$. More generally, for any parabolic type $\Xi$, $\tau$ interchanges $\p^+_\Xi$ and $\p^-_\Xi$, $\u^+_\Xi$ and $\u^-_\Xi$, and restricts to an anti-involution on $\l_\Xi$. The transpose anti-involution $\tau$ leads to an isomorphism of algebras $U\g\simeq(U\g)^{\text{op}}$. As a consequence, we can identify the category of left $U\g$-modules with the category of right $U\g$-modules via twisting by $\tau$.
More precisely, for any left $U\g$-module $M$, we can define a right $U\g$-module structure on $M$ by $$m.X=\tau(X).m,X\in U\g,m\in M.$$ Conversely, for any right $U\g$-module $M$, we can define a left $U\g$-module structure on $M$ by $$X.m=m.\tau(X),X\in U\g,m\in M.$$ \subsection{Basic representations} Let $\lambda\in\h^*$ be a weight. We have three basic types of $\g$-modules, that is, the Verma module $M(\lambda)$ of highest weight $\lambda$, the simple module $L(\lambda)$ of highest weight $\lambda$, and the completed coVerma module $M(\lambda)^*$\footnote{The usual coVerma module, or dual Verma module, is $M(\lambda)^\vee$. Here $^\vee$ is the restricted dual introduced in Section~\ref{km-weight}.}. It is known that $M(\lambda)$ has a maximal proper $\g$-submodule, usually denoted by $N(\lambda)$. There is a canonical (up to a nonzero constant) morphism $M(\lambda)\to M(\lambda)^*$ (that is induced by the Shapovalov form on $M(\lambda)$), whose kernel is $N(\lambda)$ and whose image is $M(\lambda)/N(\lambda)\simeq L(\lambda)$. Similarly, let us denote by $M_\Xi(\lambda),L_\Xi(\lambda)$ and $M_\Xi(\lambda)^*$, respectively, the $\l_\Xi$-Verma module of highest weight $\lambda$, the $\l_\Xi$-simple module of highest weight $\lambda$, and the $\l_\Xi$-completed coVerma module. Let $N_\Xi(\lambda)\subset M_\Xi(\lambda)$ be the maximal proper $\l_\Xi$-submodule of $M_\Xi(\lambda)$, then $M_\Xi(\lambda)/N_\Xi(\lambda)\simeq L_\Xi(\lambda)$. \subsection{Geometry of weight} Let $W$ be the Weyl group of $\g$. We fix an element $\rho\in\h^*$ such that $\langle\rho,\alpha^\vee\rangle=1$ for any $\alpha^\vee\in\Pi^\vee$. The \textit{dot action} of $W$ on $\h^*$ is defined by $$w\cdot\lambda=w(\lambda+\rho)-\rho.$$ For any root $\alpha$ of $\g$, let us denote by $s_\alpha$ the corresponding reflection in $W$. Let $W_\Xi$ be the subgroup of $W$ generated by $s_\alpha$ for $\alpha\in\Xi$. It is the Weyl group of $\l_\Xi$. 
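In the rank one case the dot action just defined can be made completely explicit; the sketch below is an illustration only (not part of the formal development), taking $\g=\sl_2$ and identifying a weight $\lambda$ with the scalar $\langle\lambda,\alpha^\vee\rangle$, so that $\rho=1$ and the simple reflection acts by $s_\alpha(\lambda)=-\lambda$.

```python
# Dot action w . lambda = w(lambda + rho) - rho for g = sl_2,
# identifying a weight lambda with <lambda, alpha^vee>, rho = 1.
# Illustration only; not part of the formal development.

def s(x):
    """Simple reflection on h^*: s(lambda) = -lambda in this identification."""
    return -x

def dot(w, lam, rho=1):
    """Dot action w . lambda = w(lambda + rho) - rho."""
    return w(lam + rho) - rho

assert dot(s, 0) == -2                                   # s . 0 = -2
assert dot(s, -1) == -1                                  # -rho is fixed
assert all(dot(s, dot(s, lam)) == lam for lam in range(-5, 6))  # involution
```

So $s_\alpha\cdot\lambda=-\lambda-2$ in this identification, with $\lambda=-\rho$ the unique fixed point of the dot action.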
For two weights $\mu,\lambda\in\h^*$, we write $\mu\geq\lambda$ if $\mu-\lambda\in\Z_{\geq0}\Pi$. Let $Q_\Xi$ be the root lattice of $\l_\Xi$. The following observation will be useful when dealing with weight modules in Section~\ref{weight-module}. \begin{observation} \label{weight} The root lattice $Q_\Xi$ of $\l_\Xi$ has zero intersection with the monoid spanned by $\Delta^-\backslash\Delta^-_\Xi$. \end{observation} \section{Minimal parabolic induction} \label{mpind} Let us fix a parabolic type $\Xi\subset\Pi$. \begin{definition}[parabolic restriction] There are two types of parabolic restriction functors. The \textit{parabolic $!$-restriction} (invariant) functor is defined by $$\Res_\Xi^!\colon\g\Mod\to\l_\Xi\Mod,M\mapsto\Hom_{U\u^+_\Xi}(\C,M),$$ while the \textit{parabolic $*$-restriction} (coinvariant) functor is defined by $$\Res_\Xi^*\colon\g\Mod\to\l_\Xi\Mod,M\mapsto\C\otimes_{U\u^-_\Xi}M.$$ Here $\C$ means the trivial representation. For a $\g$-module $M$, the $\l_\Xi$-action on $\Res_\Xi^?(M)$ ($?\in\{!,*\}$) is inherited from the $\l_\Xi$-action on $M$. It is well-defined because $[\l_\Xi,\u^\pm_\Xi]\subset\u^\pm_\Xi$. \end{definition} \begin{definition} There are two types of parabolic induction functors. The \textit{parabolic $!$-induction} functor is defined by $$\Ind_{\Xi,!}\colon\l_\Xi\Mod\to\g\Mod,N\mapsto U\g\otimes_{U\p^+_\Xi}N.$$ Here we inflate an $\l_\Xi$-module $N$ to a $\p^+_\Xi$-module via the projection $\p^+_\Xi\proj\l_\Xi$. The \textit{parabolic $*$-induction} (coinduction) functor is defined by $$\Ind_{\Xi,*}\colon\l_\Xi\Mod\to\g\Mod,N\mapsto\Hom_{U\p^-_\Xi}(U\g,N).$$ Here we inflate an $\l_\Xi$-module $N$ to a $\p^-_\Xi$-module via the projection $\p^-_\Xi\proj\l_\Xi$. The left multiplication of $U\g$ and the right multiplication of $U\p^+_\Xi$ make $U\g$ a $(U\g,U\p^+_\Xi)$-bimodule. Then we view $U\g$ as a $(U\p^-_\Xi,U\g)$-bimodule via the transpose anti-involution. 
\end{definition} Here we collect some well-known facts about parabolic restrictions and parabolic inductions. \begin{proposition} \begin{enumerate} \item We have adjoint pairs of functors $(\Ind_{\Xi,!},\Res_\Xi^!),(\Res_\Xi^*,\Ind_{\Xi,*})$. \item $\Res_\Xi^!\circ\Ind_{\Xi,*}=\Res_\Xi^*\circ\Ind_{\Xi,!}=\id$. \item The functor $\Ind_{\Xi,?}$ ($?\in\{!,*\}$) is exact, the functor $\Res_\Xi^!$ (resp. $\Res_\Xi^*$) is left (resp. right) exact. \item The $!$-induction $\Ind_{\Xi,!}$ maps an $\l_\Xi$-Verma module to the $\g$-Verma module with the same highest weight, while the $*$-induction $\Ind_{\Xi,*}$ maps a completed $\l_\Xi$-coVerma module to the completed $\g$-coVerma module associated to the same weight. \end{enumerate} \end{proposition} \begin{proof} By the $\otimes$-$\Hom$ adjunction, for an $\l_\Xi$-module $N$ and a $\g$-module $M$, we have canonical isomorphisms $$\Hom_{U\g}(U\g\otimes_{U\p^+_\Xi}N,M)=\Hom_{U\p^+_\Xi}(N,M)=\Hom_{U\l_\Xi}(N,\Hom_{U\u^+_\Xi}(\C,M)).$$ This shows that $(\Ind_{\Xi,!},\Res_\Xi^!)$ is an adjoint pair. Similarly, $$\Hom_{U\l_\Xi}(\C\otimes_{U\u^-_\Xi}M,N)=\Hom_{U\p^-_\Xi}(M,N)=\Hom_{U\g}(M,\Hom_{U\p^-_\Xi}(U\g,N)).$$ This shows that $(\Res^*_\Xi,\Ind_{\Xi,*})$ is an adjoint pair. Moreover, for any $\l_\Xi$-module $N$, $$\Hom_{U\u^+_\Xi}(\C,\Hom_{U\p^-_\Xi}(U\g,N))=\Hom_{U\p^-_\Xi}(U\g\otimes_{U\u^+_\Xi}\C,N)=\Hom_{U\p^-_\Xi}(U\p^-_\Xi,N)=N.$$ This shows that $\Res^!_\Xi\circ\Ind_{\Xi,*}=\id$. Similarly, $$\C\otimes_{U\u^-_\Xi}U\g\otimes_{U\p^+_\Xi}N=U\p^+_\Xi\otimes_{U\p^+_\Xi}N=N.$$ This shows that $\Res^*_\Xi\circ\Ind_{\Xi,!}=\id$. By the PBW theorem, $U\g=U\u^-_\Xi\otimes U\p^+_\Xi$ is a free (hence projective, flat) right $U\p^+_\Xi$-module. This shows the exactness of $\Ind_{\Xi,?}$. As a right (resp. left) adjoint, $\Res_\Xi^!$ (resp. $\Res_\Xi^*$) is left (resp. right) exact. Recall that Verma modules (resp. completed coVerma modules) are constructed by $!$-induction (resp. 
$*$-induction) with respect to the empty collection $\emptyset\subset\Pi$. The last assertion follows from the composability of $!$-induction (resp. $*$-induction), which is again due to the composability of $\otimes$ (resp. $\Hom$). \end{proof} Now we see that there is a natural transformation from $\Ind_{\Xi,!}$ to $\Ind_{\Xi,*}$ fitting into the following diagram $$\begin{tikzcd} {\Ind_{\Xi,!}} \arrow[Rightarrow, r, no head] \arrow[d] & {\Ind_{\Xi,!}\circ\Res^!_\Xi\circ\Ind_{\Xi,*}} \arrow[d] \\ {\Ind_{\Xi,*}\circ\Res^*_\Xi\circ\Ind_{\Xi,!}} \arrow[Rightarrow, r, no head] & {\Ind_{\Xi,*}} \end{tikzcd}$$ \begin{definition} \label{minimal-pinduction} Let $\Ind_{\Xi,!*}\colon\l_\Xi\Mod\to\g\Mod$ be the image of the natural transformation $\Ind_{\Xi,!}\to\Ind_{\Xi,*}$, that is (on the object level), $$\Ind_{\Xi,!*}(N)=\im(\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)).$$ We call $\Ind_{\Xi,!*}$ the \textit{minimal parabolic induction} functor, or the \textit{intermediate parabolic induction} functor. \end{definition} \begin{proposition} \label{inj-surj} Let $N_1\proj N_2$ be a surjection in $\l_\Xi\Mod$, then the induced map $\Ind_{\Xi,!*}(N_1)\to\Ind_{\Xi,!*}(N_2)$ is also surjective. Dually, let $N_1'\incl N_2'$ be an injection in $\l_\Xi\Mod$, then the induced map $\Ind_{\Xi,!*}(N_1')\to\Ind_{\Xi,!*}(N_2')$ is also injective. \end{proposition} \begin{proof} Since $\Ind_{\Xi,!}$ is exact, the map $\Ind_{\Xi,!}(N_1)\to\Ind_{\Xi,!}(N_2)$ is surjective. Consider the commutative diagram $$\begin{tikzcd} {\Ind_{\Xi,!}(N_1)} \arrow[d, two heads] \arrow[r, two heads] & {\Ind_{\Xi,!}(N_2)} \arrow[d, two heads] \\ {\Ind_{\Xi,!*}(N_1)} \arrow[r] & {\Ind_{\Xi,!*}(N_2)} \end{tikzcd}$$ The surjectivity of the bottom arrow follows. Dually, the map $\Ind_{\Xi,*}(N_1')\to\Ind_{\Xi,*}(N_2')$ is injective because $\Ind_{\Xi,*}$ is exact. 
Consider the commutative diagram $$\begin{tikzcd} {\Ind_{\Xi,!*}(N_1')} \arrow[r] \arrow[d, hook] & {\Ind_{\Xi,!*}(N_2')} \arrow[d, hook] \\ {\Ind_{\Xi,*}(N_1')} \arrow[r, hook] & {\Ind_{\Xi,*}(N_2')} \end{tikzcd}$$ The injectivity of the top arrow follows. \end{proof} \begin{definition} Let $\J_{\Xi,!}^{-1}\colon\l_\Xi\Mod\to\g\Mod$ be the kernel of the natural transformation $\Ind_{\Xi,!}\proj\Ind_{\Xi,!*}$, $\J_{\Xi,*}^1\colon\l_\Xi\Mod\to\g\Mod$ be the cokernel of the natural transformation $\Ind_{\Xi,!*}\incl\Ind_{\Xi,*}$.\footnote{$\J$ stands for Jantzen.} \end{definition} Let $N$ be an $\l_\Xi$-module, then the map $\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)$ can be explicitly described as follows. Using the PBW decomposition $U\g=U\u^-_\Xi\otimes U\p^+_\Xi$, we can identify $U\g\otimes_{U\p^+_\Xi}N$ with $U\u^-_\Xi\otimes N$, and $\Hom_{U\p^-_\Xi}(U\g,N)$ with $\Hom(U\u^-_\Xi,N)$. Let $$\epsilon^\pm\colon U\u^\pm_\Xi\proj U\u^\pm_\Xi/\u^\pm_\Xi(U\u^\pm_\Xi)=\C$$ be the augmentation maps, $\phi=\epsilon^-\otimes\id\otimes\epsilon^+$ be the map $$\phi=\epsilon^-\otimes\id\otimes\epsilon^+\colon U\g=U\u^-_\Xi\otimes U\l_\Xi\otimes U\u^+_\Xi\to\C\otimes U{\l_\Xi}\otimes\C =U\l_\Xi.$$ Then the map $\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)$ can be identified with $$U\u^-_\Xi\otimes N\to\Hom(U\u^-_\Xi,N),X\otimes n\mapsto[Y\mapsto\phi(\tau(X)Y)n=\phi(\tau(Y)X)n].$$ \begin{proposition} \label{j-1} Let $N$ be an $\l_\Xi$-module. Under the identification $$\Ind_{\Xi,!}(N)=U\g\otimes_{U\p^+_\Xi}N=U\u^-_\Xi\otimes N,$$ we have $\J^{-1}_{\Xi,!}(N)\subset\u^-_\Xi(U\u^-_\Xi)\otimes N$. Under the identification $$\Ind_{\Xi,*}(N)=\Hom_{U\p^-_\Xi}(U\g,N)=\Hom(U\u^-_\Xi,N),$$ we have $\Hom(\u^-_\Xi(U\u^-_\Xi),N)\proj\J^1_{\Xi,*}(N)$. 
\end{proposition} \begin{proof} Let us consider a commutative diagram with exact rows $$\begin{tikzcd} 0 \arrow[r] & {\J^{-1}_{\Xi,!}(N)} \arrow[r] \arrow[dd, dashed] & {\Ind_{\Xi,!}(N)} \arrow[r] \arrow[Rightarrow, dd, no head] & {\Ind_{\Xi,!*}(N)} \arrow[r] \arrow[d] & 0 \\ & & & {\Ind_{\Xi,*}(N)} \arrow[d] & \\ 0 \arrow[r] & \u^-_\Xi(U\u^-_\Xi)\otimes N \arrow[r] & U\u^-_\Xi\otimes N \arrow[r] & \C\otimes N=N \arrow[r] & 0 \end{tikzcd}$$ The existence of the dashed arrow follows from the exactness of rows and the commutativity of the diagram. For the dual statement, notice that we have a commutative diagram with exact rows $$\begin{tikzcd} 0 \arrow[r] & {\Hom(\C,N)=N} \arrow[d] \arrow[r] & {\Hom(U\u^-_\Xi,N)} \arrow[r] \arrow[Rightarrow, dd, no head] & {\Hom(\u^-_\Xi(U\u^-_\Xi),N)} \arrow[r] \arrow[dd, dashed] & 0 \\ & {\Ind_{\Xi,!}(N)} \arrow[d] & & & \\ 0 \arrow[r] & {\Ind_{\Xi,!*}(N)} \arrow[r] & {\Ind_{\Xi,*}(N)} \arrow[r] & {\J^1_{\Xi,*}(N)} \arrow[r] & 0 \end{tikzcd}$$ The existence of the dashed arrow follows from the exactness of rows and the commutativity of the diagram. \end{proof} \begin{proposition} \label{j-2} Let $N$ be an $\l_\Xi$-module. For any $\g$-submodule $M\subset\Ind_{\Xi,!}(N)$ that is contained in $\u^-_\Xi(U\u^-_\Xi)\otimes N$ under the identification $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$, we have $M\subset\J^{-1}_{\Xi,!}(N)$. Dually, any quotient $\g$-module $\Ind_{\Xi,*}(N)\proj M'$ factoring through $\Hom(\u^-_\Xi(U\u^-_\Xi),N)$ under the identification $\Ind_{\Xi,*}(N)=\Hom(U\u^-_\Xi,N)$ is a quotient of $\J^1_{\Xi,*}(N)$. \end{proposition} \begin{proof} Every element in $M$ can be written as a finite sum $\sum_i X_i\otimes n_i$ for some $X_i\in U\u^-_\Xi$ and $n_i\in N$. To show that it lies in $$\J^{-1}_{\Xi,!}(N)=\ker(\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)),$$ it suffices to show that $\sum_i\phi(\tau(Y)X_i)n_i=0$ for any $Y\in U\g$.
Let us decompose each $\tau(Y)X_i$ as a finite sum $\tau(Y)X_i=\sum_j X_{i,j}^-X_{i,j}^0X_{i,j}^+$ for some $X_{i,j}^-\in U\u^-_\Xi,X_{i,j}^0\in U\l_\Xi,X_{i,j}^+\in U\u^+_\Xi$, then $$\sum_i\phi(\tau(Y)X_i)n_i=\sum_{i,j}\phi(X_{i,j}^-X_{i,j}^0X_{i,j}^+)n_i=\sum_{i,j}\epsilon^-(X_{i,j}^-)X_{i,j}^0\epsilon^+(X_{i,j}^+)n_i.$$ Notice that the $\tau(Y)$ action on $\sum_i X_i\otimes n_i$ is $$\sum_i\tau(Y)X_i\otimes_{U\p^+_\Xi}n_i=\sum_{i,j}X_{i,j}^-X_{i,j}^0X_{i,j}^+\otimes_{U\p^+_\Xi}n_i=\sum_{i,j}X_{i,j}^-\otimes X_{i,j}^0\epsilon^+(X_{i,j}^+)n_i.$$ By assumption, it is contained in $M\subset\u^-_\Xi(U\u^-_\Xi)\otimes N$. This means exactly $$\sum_{i,j}\epsilon^-(X_{i,j}^-)X_{i,j}^0\epsilon^+(X_{i,j}^+)n_i=0.$$ Therefore, we conclude that $\sum_i\phi(\tau(Y)X_i)n_i=0$. Dually, to show that $M'$ is a quotient of $$\J^1_{\Xi,*}(N)=\coker(\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)),$$ it suffices to show that the composition $$\Ind_{\Xi,!}(N)\to\Ind_{\Xi,*}(N)\to M'$$ vanishes. As a $U\g$-module, $\Ind_{\Xi,!}(N)$ is generated by $1\otimes n\in 1\otimes N=N$. Hence it suffices to show that the image of $1\otimes n$ vanishes in $M'$. Notice that we have a commutative diagram $$\begin{tikzcd} {\Ind_{\Xi,!}(N)} \arrow[r] \arrow[Rightarrow, d, no head] & {\Ind_{\Xi,*}(N)} \arrow[r, two heads] & M' \\ U\u^-_\Xi\otimes N \arrow[r] & {\Hom(U\u^-_\Xi,N)} \arrow[r, two heads] & {\Hom(\u^-_\Xi(U\u^-_\Xi),N)} \arrow[u, two heads] \end{tikzcd}$$ The image of $1\otimes n$ in $\Hom(\u^-_\Xi(U\u^-_\Xi),N)$ vanishes, so its image in $M'$ also vanishes. We are done. \end{proof} \begin{remark} \label{max} Combining Proposition~\ref{j-1}~and~\ref{j-2}, we see that for any $\l_\Xi$-module $N$, $\J^{-1}_{\Xi,!}(N)$ is \textit{the} maximal $\g$-submodule of $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$ that is contained in $\u^-_\Xi(U\u^-_\Xi)\otimes N$, and $\J^1_{\Xi,*}(N)$ is \textit{the} maximal $\g$-module quotient of $\Ind_{\Xi,*}(N)=\Hom(U\u^-_\Xi,N)$ that factors through $\Hom(\u^-_\Xi(U\u^-_\Xi),N)$. 
\end{remark} Let $\operatorname{Oblv}^\g_{\l_\Xi}$ be the forgetful functor from $\g\Mod$ to $\l_\Xi\Mod$. There are canonical natural transformations $$\Res_\Xi^!\incl\Oblv^\g_{\l_\Xi},\Oblv^\g_{\l_\Xi}\proj\Res_\Xi^*,$$ the composition of which induces a natural transformation $\Res_\Xi^!\to\Res_\Xi^*$. \begin{definition} Let $\Res_\Xi^{!*}\colon\g\Mod\to\l_\Xi\Mod$ be the image of the natural transformation $\Res_\Xi^!\to\Res_\Xi^*$. It seems plausible to call $\Res_\Xi^{!*}$ the \textit{intermediate parabolic restriction} functor. \end{definition} Now we have a commutative diagram of natural transformations $$\begin{tikzcd} \id \arrow[r] \arrow[Rightarrow, rrrr, no head, bend left] \arrow[Rightarrow, dd, no head] & {\Res_\Xi^!\circ\Ind_{\Xi,!}} \arrow[r] \arrow[d, two heads] & {\Res_\Xi^!\circ\Ind_{\Xi,!*}} \arrow[r, hook] \arrow[d, two heads] & {\Res_\Xi^!\circ\Ind_{\Xi,*}} \arrow[Rightarrow, r, no head] \arrow[d, two heads] & \id \arrow[Rightarrow, dd, no head] \\ & {\Res^{!*}_\Xi\circ\Ind_{\Xi,!}} \arrow[r] \arrow[d, hook] & {\Res^{!*}_\Xi\circ\Ind_{\Xi,!*}} \arrow[r] \arrow[d, hook] & {\Res^{!*}_\Xi\circ\Ind_{\Xi,*}} \arrow[d, hook] & \\ \id \arrow[Rightarrow, r, no head] \arrow[Rightarrow, rrrr, no head, bend right] & {\Res^*_\Xi\circ\Ind_{\Xi,!}} \arrow[r, two heads] & {\Res^*_\Xi\circ\Ind_{\Xi,!*}} \arrow[r] & {\Res^*_\Xi\circ\Ind_{\Xi,*}} \arrow[r] & \id \end{tikzcd}$$ Here the natural transformations $\id\to\Res^!_\Xi\circ\Ind_{\Xi,!}$ and $\Res^*_\Xi\circ\Ind_{\Xi,*}\to\id$ are induced by adjunctions. The map $\Res^!_\Xi\circ\Ind_{\Xi,!*}\to\Res^!_\Xi\circ\Ind_{\Xi,*}$ (resp. $\Res^*_\Xi\circ\Ind_{\Xi,!}\to\Res^*_\Xi\circ\Ind_{\Xi,!*}$) is monic (resp. epic) because $\Res^!_\Xi$ (resp. $\Res^*_\Xi$) is left (resp. right) exact. \begin{proposition} \label{res-ind} We have $\Res^!_\Xi\circ\Ind_{\Xi,!*}=\Res^*_\Xi\circ\Ind_{\Xi,!*}=\Res^{!*}_\Xi\circ\Ind_{\Xi,!*}=\id$. 
\end{proposition} \begin{proof} Consider the following commutative diagram of natural transformations $$\begin{tikzcd} & \id \arrow[r] & {\Res^!_\Xi\circ\Ind_{\Xi,!}} \arrow[r] \arrow[d, hook] & {\Res^!_\Xi\circ\Ind_{\Xi,!*}} \arrow[d, hook] & \\ 0 \arrow[r] & {\Oblv^\g_{\l_\Xi}\circ\J^{-1}_{\Xi,!}} \arrow[r] & {\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!}} \arrow[r] & {\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}} \arrow[r] & 0 \end{tikzcd}$$ whose bottom row is exact. We claim that the composition $$\id\to\Res^!_\Xi\circ\Ind_{\Xi,!}\to\Res^!_\Xi\circ\Ind_{\Xi,!*}\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}$$ is monic. In fact, for any $\l_\Xi$-module, $$\im(N\to\Res^!_\Xi\circ\Ind_{\Xi,!}(N)\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!}(N))=1\otimes N$$ under the identification $\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$. By Proposition~\ref{j-1}, we see that it has zero intersection with $$\Oblv^\g_{\l_\Xi}\circ\J^{-1}_{\Xi,!}(N)\subset\u^-_\Xi(U\u^-_\Xi)\otimes N.$$ This shows that the composition $$\id\to\Res^!_\Xi\circ\Ind_{\Xi,!}\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!}\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}$$ is monic, i.e. the composition $$\id\to\Res^!_\Xi\circ\Ind_{\Xi,!}\to\Res^!_\Xi\circ\Ind_{\Xi,!*}\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}$$ is monic. Consequently, the composition $$\id\to\Res^!_\Xi\circ\Ind_{\Xi,!}\to\Res^!_\Xi\circ\Ind_{\Xi,!*}$$ is monic. Now we have two monos $$\id\incl\Res^!_\Xi\circ\Ind_{\Xi,!*},\Res^!_\Xi\circ\Ind_{\Xi,!*}\to\Res^!_\Xi\circ\Ind_{\Xi,*}=\id$$ whose composition is the identity. This shows that they are isomorphisms, and $\Res^!_\Xi\circ\Ind_{\Xi,!*}=\id$. 
Dually, consider the following commutative diagram of natural transformations $$\begin{tikzcd} 0 \arrow[r] & {\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}} \arrow[r] \arrow[d, two heads] & {\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,*}} \arrow[r] \arrow[d, two heads] & {\Oblv^\g_{\l_\Xi}\circ\J^1_{\Xi,*}} \arrow[r] & 0 \\ & {\Res^*_\Xi\circ\Ind_{\Xi,!*}} \arrow[r] & {\Res^*_\Xi\circ\Ind_{\Xi,*}} \arrow[r] & \id & \end{tikzcd}$$ whose top row is exact. We claim that the composition $$\Res^*_\Xi\circ\Ind_{\Xi,!*}\to\Res^*_\Xi\circ\Ind_{\Xi,*}\to \id$$ is epic. In fact, for any $\l_\Xi$-module $N$, the composition $$\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,*}(N)\to\Res^*_\Xi\circ\Ind_{\Xi,*}(N)\to N$$ is identified with the quotient $$\Hom(U\u^-_\Xi,N)\proj\Hom(\C,N),$$ whose kernel is identified with $\Hom(\u^-_\Xi(U\u^-_\Xi),N)$. By Proposition~\ref{j-1}, $\Hom(\u^-_\Xi(U\u^-_\Xi),N)$ maps surjectively to $\Oblv^\g_{\l_\Xi}\circ\J^1_{\Xi,*}(N)$. This shows that the composition $$\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}\to\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,*}\to\Res^*_\Xi\circ\Ind_{\Xi,*}\to\id$$ is epic, i.e. the composition $$\Oblv^\g_{\l_\Xi}\circ\Ind_{\Xi,!*}\to\Res^*_\Xi\circ\Ind_{\Xi,!*}\to\Res^*_\Xi\circ\Ind_{\Xi,*}\to\id$$ is epic. Consequently, the composition $$\Res^*_\Xi\circ\Ind_{\Xi,!*}\to\Res^*_\Xi\circ\Ind_{\Xi,*}\to\id$$ is epic. Now we have two epis $$\id=\Res^*_\Xi\circ\Ind_{\Xi,!}\proj\Res^*_\Xi\circ\Ind_{\Xi,!*},\Res^*_\Xi\circ\Ind_{\Xi,!*}\proj\id$$ whose composition is the identity. This shows that they are isomorphisms, and $\Res^*_\Xi\circ\Ind_{\Xi,!*}=\id$. Now $$\Res^{!*}_\Xi\circ\Ind_{\Xi,!*}=\im(\Res^!_\Xi\circ\Ind_{\Xi,!*}\to\Res^*_\Xi\circ\Ind_{\Xi,!*})=\id,$$ and we are done. \end{proof} \begin{remark} \label{exactness-minimal} As we have seen, the functors $\Ind_{\Xi,!}$ and $\Ind_{\Xi,*}$ are exact. However, $\Ind_{\Xi,!*}$ is not exact in general.
Let $N^\bullet$ be an acyclic cochain complex of $\l_\Xi$-modules; then we have a short exact sequence of cochain complexes $$0\to\J^{-1}_{\Xi,!}(N^\bullet)\to\Ind_{\Xi,!}(N^\bullet)\to\Ind_{\Xi,!*}(N^\bullet)\to 0.$$ This induces a long exact sequence of cohomologies $$\cdots\to\H^i(\Ind_{\Xi,!}(N^\bullet))\to\H^i(\Ind_{\Xi,!*}(N^\bullet))\to\H^{i+1}(\J^{-1}_{\Xi,!}(N^\bullet))\to\H^{i+1}(\Ind_{\Xi,!}(N^\bullet))\to\cdots.$$ Since $\Ind_{\Xi,!}$ is exact, the cochain complex $\Ind_{\Xi,!}(N^\bullet)$ is also acyclic, hence $\H^i(\Ind_{\Xi,!}(N^\bullet))=0$ for all $i$. So we get an isomorphism $$\H^i(\Ind_{\Xi,!*}(N^\bullet))=\H^{i+1}(\J^{-1}_{\Xi,!}(N^\bullet)).$$ \end{remark} From the above discussion, we deduce the following useful lemma. \begin{lemma} \label{exact} Let $N^{-1}\xrightarrow{f}N^0\xrightarrow{g}N^1$ be an exact sequence of $\l_\Xi$-modules; then the induced sequence $$\Ind_{\Xi,!*}(N^{-1})\to\Ind_{\Xi,!*}(N^0)\to\Ind_{\Xi,!*}(N^1)$$ is exact if the map $\J^{-1}_{\Xi,!}(g)\colon\J^{-1}_{\Xi,!}(N^0)\to\J^{-1}_{\Xi,!}(N^1)$ is surjective. \end{lemma} \begin{proof} We extend the exact sequence to an acyclic complex $$N^\bullet=[0\to\ker(f)\to N^{-1}\to N^0\to N^1\to\coker(g)\to0]$$ such that $N^i$ lives in cohomological degree $i$. Then from Remark~\ref{exactness-minimal}, we have $$\H^0(\Ind_{\Xi,!*}(N^\bullet))=\H^1(\J^{-1}_{\Xi,!}(N^\bullet))=0,$$ where the vanishing holds because $\J^{-1}_{\Xi,!}(g)$ is surjective. Therefore, we conclude that $$\Ind_{\Xi,!*}(N^{-1})\to\Ind_{\Xi,!*}(N^0)\to\Ind_{\Xi,!*}(N^1)$$ is exact. \end{proof} \section{Weight modules} \label{weight-module} In this section, we focus on weight modules. \subsection{Generalities on weight modules} Let $\a$ be a finite-dimensional abelian Lie algebra. \begin{definition} An $\a$-module $M$ is called a \textit{weight module} if $$M=\bigoplus_{\lambda\in\a^*}M_\lambda,\text{ where }M_\lambda=\{v\in M:a\cdot v=\lambda(a)v\text{ for all }a\in\a\}.$$ We denote by $\a\wtMod$ the category of weight $\a$-modules, and by $\a\wtMod^\fin$ the full subcategory of $\a\wtMod$ consisting of weight $\a$-modules of which each weight space is finite dimensional.
\end{definition} The following proposition is well known (cf. \cite{kac1990infinite} Proposition 1.5). \begin{proposition} \label{weight-abelian} Any submodule or quotient of an $\a$-weight module is also a weight module. \end{proposition} \begin{proof} Let $M$ be a weight $\a$-module and $N\subset M$ be a submodule. For any $v\in N$, we can decompose $v$ as a finite sum $v=\sum_{j=1}^m v_j$, where $v_j\in M_{\lambda_j}$ and the weights $\lambda_1,\cdots,\lambda_m$ are distinct. The polynomial $$\prod_{1\leq i<j\leq m}(\lambda_i-\lambda_j)\in S\a^*=\C[\a]$$ is nonzero, so we can find $a\in\a$ such that $\prod_{1\leq i<j\leq m}(\lambda_i-\lambda_j)(a)\neq0$. This means that $\lambda_1(a),\cdots,\lambda_m(a)$ are distinct. For $k=0,1,\cdots,m-1$, we have $$a^k\cdot v=\sum_{j=1}^m\lambda_j(a)^k v_j\in N.$$ This is a system of linear equations associated to a nondegenerate matrix (the Vandermonde determinant does not vanish). Hence all the $v_j$ lie in $N$. Now let $f\colon M\proj L$ be a quotient. We know that $\ker f$ is a weight module, so the quotient $L$ is also a weight module. \end{proof} In particular, $\a\wtMod^\fin\subset\a\wtMod$ is a Serre subcategory. \begin{definition} For an $\a$-weight module $M$, the \textit{support} of $M$ is $$\supp(M):=\{\lambda\in\a^*:M_\lambda\neq0\}.$$ \end{definition} \subsection{Weight Kac--Moody modules} \label{km-weight} Let $\g$ be a Kac--Moody algebra, with a fixed Cartan subalgebra $\h$. Let us fix a parabolic type $\Xi$ from now on in this section. \begin{definition} Let $\g\wtMod$ (resp. $\g\wtMod^\fin$) be the preimage of $\h\wtMod$ (resp. $\h\wtMod^\fin$) under $\operatorname{Oblv}^\g_\h$. We know that $\g\wtMod$ is an abelian category, with $\g\wtMod^\fin$ being a Serre subcategory. Similarly, let $\l_\Xi\wtMod$ be the category of weight $\l_\Xi$-modules, and $\l_\Xi\wtMod^\fin$ the full subcategory of $\l_\Xi\wtMod$ consisting of weight $\l_\Xi$-modules of which each weight space is finite dimensional. \end{definition} \begin{remark} The category $\g\wtMod^\fin$ is not closed under arbitrary colimits and tensor products. Moreover, since $\g\wtMod^\fin$ is a Serre subcategory of $\g\wtMod$, many homological properties of $\g\wtMod^\fin$ can be recovered from those of $\g\wtMod$.
For example, for two objects $N_1,N_2\in\g\wtMod^\fin$, the first extension group between them in $\g\wtMod^\fin$ is the same as the one in $\g\wtMod$. \end{remark} There is an operation on $\g\wtMod^\fin$ that cannot be performed on $\g\wtMod$, that is, the restricted duality functor presented below. \begin{definition} Let $M\in\g\wtMod^\fin$ be a $\g$-weight module of which each weight space is finite dimensional. Let $M=\bigoplus_{\lambda\in\h^*}M_\lambda$ be the weight decomposition. The linear dual $M^*=\Hom(M,\C)$ is naturally a right $U\g$-module, which we can make into a left $U\g$-module via the transpose anti-involution $\tau$. Then the subspace $$M^\vee=\bigoplus_{\lambda\in\h^*}M_\lambda^*\subset M^*$$ is a $\g$-submodule. We call $M^\vee$ the \textit{restricted dual} of $M$. \end{definition} The following properties are well-known (cf. \cite{humphreys2008representations}, \cite{kac1990infinite}). \begin{proposition} \begin{enumerate} \item The restricted duality defines a functor $$^\vee\colon\g\wtMod^\fin\to\g\wtMod^\fin,\text{ with }(M^\vee)_\lambda=M_\lambda^*\text{ for all }\lambda\in\h^*.$$ \item The functor $^\vee$ is an anti-involution, i.e. ${^\vee}{^\vee}=\id$. \item The restricted duality functor $^\vee$ is exact. Moreover, it defines an anti-equivalence of abelian categories. \item The simple highest weight modules are self-dual, i.e. $L(\lambda)^\vee\simeq L(\lambda)$ for each $\lambda\in\h^*$. \end{enumerate} \end{proposition} As an easy corollary, we have \begin{corollary} For any $N_1,N_2\in\g\wtMod^\fin\subset\g\wtMod$, there is a canonical isomorphism of Yoneda extension groups $$\Ext^n_{\g\wtMod}(N_1,N_2)=\Ext^n_{\g\wtMod}(N_2^\vee,N_1^\vee)$$ for any $n\geq0$. \end{corollary} Similarly, there is a restricted duality functor $$^\vee\colon\l_\Xi\wtMod^\fin\to\l_\Xi\wtMod^\fin$$ satisfying the same properties.
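The Vandermonde argument in the proof of Proposition~\ref{weight-abelian} can be checked numerically: from the vectors $a^k\cdot v=\sum_j\lambda_j(a)^k v_j$ for $k=0,\dots,m-1$, each weight component $v_j$ is recovered by solving a linear system whose coefficient matrix is Vandermonde, hence invertible when the scalars $\lambda_j(a)$ are distinct. A minimal sketch in Python (the weights and component vectors below are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative data: v = v_1 + v_2 + v_3 with pairwise distinct scalars lam_j = lambda_j(a).
lam = np.array([0.0, 1.0, -2.0])
V = np.array([[1.0, 0.0],   # the weight components v_j, stored as rows
              [0.0, 2.0],
              [3.0, 1.0]])
m = len(lam)

# The vectors a^k.v = sum_j lam_j^k v_j for k = 0, ..., m-1; each of them lies
# in the submodule N whenever v does, since N is stable under the a-action.
akv = np.array([sum(lam[j] ** k * V[j] for j in range(m)) for k in range(m)])

# Vandermonde coefficient matrix A[k, j] = lam_j^k; invertible because the
# lam_j are distinct (nonvanishing Vandermonde determinant).
A = np.vander(lam, m, increasing=True).T

# Solving A x = (a^k.v)_k recovers the components v_j exactly,
# so each v_j lies in N.
recovered = np.linalg.solve(A, akv)
assert np.allclose(recovered, V)
```

The computation uses only the distinctness of the $\lambda_j(a)$, mirroring the choice of $a$ in the proof.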
\subsection{Weight modules under parabolic induction} \begin{proposition} The functors $\Ind_{\Xi,!},\Ind_{\Xi,!*}$ and $\J^{-1}_{\Xi,!}$ restrict to functors $$\Ind_{\Xi,!}\colon\l_\Xi\wtMod\to\g\wtMod,$$ $$\Ind_{\Xi,!*}\colon\l_\Xi\wtMod\to\g\wtMod,$$ $$\J^{-1}_{\Xi,!}\colon\l_\Xi\wtMod\to\g\wtMod.$$ \end{proposition} \begin{proof} For $N\in\l_\Xi\Mod$, the isomorphism $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$ is not only an isomorphism of vector spaces but also an isomorphism of $\h$-modules. Here the right hand side is understood as the tensor product of $\h$-modules. Therefore, we see that if $N\in\l_\Xi\wtMod$, then $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$ lives in $\g\wtMod$. As a quotient (resp. sub), we see that $\Ind_{\Xi,!*}$ (resp. $\J^{-1}_{\Xi,!}$) also maps weight modules to weight modules. \end{proof} \begin{remark} Unlike the parabolic $!$-induction $\Ind_{\Xi,!}$, the parabolic $*$-induction $\Ind_{\Xi,*}$ does not map a weight $\l_\Xi$-module to a weight $\g$-module in general (instead of a direct sum of weight spaces, it produces a direct product of weight spaces). Though our construction in the previous section is completely ``dualizable'', asymmetric phenomena may happen when focusing on weight modules. \end{remark} \begin{proposition} The functors $\Res^!_\Xi,\Res^*_\Xi$ and $\Res^{!*}_\Xi$ restrict to functors $$\Res^!_\Xi\colon\g\wtMod\to\l_\Xi\wtMod,$$ $$\Res^*_\Xi\colon\g\wtMod\to\l_\Xi\wtMod,$$ $$\Res^{!*}_\Xi\colon\g\wtMod\to\l_\Xi\wtMod,$$ and further to functors $$\Res^!_\Xi\colon\g\wtMod^\fin\to\l_\Xi\wtMod^\fin,$$ $$\Res^*_\Xi\colon\g\wtMod^\fin\to\l_\Xi\wtMod^\fin,$$ $$\Res^{!*}_\Xi\colon\g\wtMod^\fin\to\l_\Xi\wtMod^\fin.$$ \end{proposition} \begin{proof} The forgetful functor $\Oblv^\g_{\l_\Xi}\colon\g\Mod\to\l_\Xi\Mod$ preserves weight spaces. Therefore, as a sub (resp. quotient), $\Res^!_\Xi$ (resp. $\Res^*_\Xi$) can be restricted. Thus $\Res^{!*}_\Xi=\im(\Res^!_\Xi\to\Res^*_\Xi)$ can be restricted.
\end{proof} \subsection{Simple weight modules} \begin{definition} For $\xi\in\h^*$, an $\l_\Xi$-weight module $N\in\l_\Xi\wtMod$ is called \textit{$\xi$-shifted} if $\supp(N)\subset\xi+Q_\Xi$.\footnote{Recall that $Q_\Xi$ is the root lattice of $\l_\Xi$.} Let us denote by $\l_\Xi\wtMod_\xi$ the full subcategory of $\l_\Xi\wtMod$ consisting of $\xi$-shifted weight modules. \end{definition} It is easy to see that we have a decomposition of abelian categories: \begin{proposition} If $\xi\equiv\xi'\pmod {Q_\Xi}$, then $\l_\Xi\wtMod_\xi=\l_\Xi\wtMod_{\xi'}$. There is a direct sum decomposition of abelian categories $$\l_\Xi\wtMod=\bigoplus_{\xi\in\h^*/Q_\Xi}\l_\Xi\wtMod_\xi.$$ Here $\xi$ runs through (a choice of representatives of) the coset space $\h^*/Q_\Xi$. \end{proposition} \begin{corollary} For any simple (hence indecomposable) weight $\l_\Xi$-module $N\in\l_\Xi\wtMod$, there exists $\xi\in\h^*$ such that $N\in\l_\Xi\wtMod_\xi$. \end{corollary} \begin{remark} \label{max-new} Let $N\in\l_\Xi\wtMod_\xi$ be a $\xi$-shifted $\l_\Xi$-weight module for some $\xi\in\h^*$. In this situation, for any $\g$-submodule $M$ of $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$, we have, by Observation~\ref{weight}, that $M$ lies in $\u^-_\Xi(U\u^-_\Xi)\otimes N$ if and only if $M$ has zero intersection with $1\otimes N$. Therefore, in the sense of Remark~\ref{max}, $\J^{-1}_{\Xi,!}(N)$ is the maximal $\g$-submodule of $\Ind_{\Xi,!}(N)=U\u^-_\Xi\otimes N$ that has zero intersection with $1\otimes N$. \end{remark} \begin{proposition} \label{res-simple} The functors $\Res^!_\Xi,\Res^*_\Xi$ and $\Res^{!*}_\Xi$ map a simple $\g$-weight module to a simple $\l_\Xi$-weight module or $0$. \end{proposition} \begin{proof} Let $M$ be a simple $\g$-weight module. Suppose that $\Res^!_\Xi(M)\neq0$. Pick any nonzero weight vectors $v_1,v_2\in\Res^!_\Xi(M)\subset\Oblv^\g_{\l_\Xi}(M)$. By the simplicity of $M$, we have $M=U\g.v_1=U\g.v_2$.
Noting that $v_i\in\Res^!_\Xi(M)=\Hom_{U\u^+_\Xi}(\C,M)$, we have $\u^+_\Xi.v_i=0$ for $i=1,2$. So by the PBW theorem, $$M=U\g.v_i=(U\u^-_\Xi\otimes U\l_\Xi\otimes U\u^+_\Xi).v_i=(U\u^-_\Xi\otimes U\l_\Xi).v_i.$$ In particular, we can find $X_1,X_2\in U\u^-_\Xi\otimes U\l_\Xi$ such that $v_1=X_1.v_2,v_2=X_2.v_1$. Now we have $v_1=X_1X_2.v_1$, so the weight of $X_1X_2$ is $0$. By Observation~\ref{weight}, we see that $X_1,X_2\in1\otimes U\l_\Xi=U\l_\Xi$, so $v_1$ and $v_2$ lie in the same $\l_\Xi$-submodule of $\Res^!_\Xi(M)$. This verifies the simplicity of $\Res^!_\Xi(M)$. Dually, let $M'$ be a simple $\g$-weight module such that $\Res^*_\Xi(M')\neq0$. Pick any nonzero weight vectors $\bar{w}_1,\bar{w}_2\in\Res^*_\Xi(M')$. Let $w_i$ be a weight vector in $\Oblv^\g_{\l_\Xi}(M')$ that is in the preimage of $\bar{w}_i$. By the simplicity of $M'$, we have $M'=U\g.w_1=U\g.w_2$. Therefore, we can find $Y_1,Y_2\in U\g=U\u^-_\Xi\otimes U\l_\Xi\otimes U\u^+_\Xi$ such that $w_1=Y_1.w_2,w_2=Y_2.w_1$. Since $w_1,w_2$ do not vanish in the quotient $\Res^*_\Xi(M')$, we must have $Y_1,Y_2\in1\otimes U\l_\Xi\otimes U\u^+_\Xi$. Now we have $w_1=Y_1Y_2.w_1$, so the weight of $Y_1Y_2$ is $0$. By Observation~\ref{weight}, we see that $Y_1,Y_2\in U\l_\Xi\otimes 1=U\l_\Xi$, so $w_1,w_2$ (and hence $\bar{w}_1,\bar{w}_2$) lie in the same $\l_\Xi$-submodule. This verifies the simplicity of $\Res^*_\Xi(M')$. The functor $\Res^{!*}_\Xi$ is a quotient of $\Res^!_\Xi$ (and a sub of $\Res^*_\Xi$), so it also maps simple $\g$-weight modules to simple $\l_\Xi$-weight modules or $0$. \end{proof} \begin{corollary} For any weight $\lambda\in\h^*$, $\Res^!_\Xi(L(\lambda))=\Res^*_\Xi(L(\lambda))=\Res^{!*}_\Xi(L(\lambda))=L_\Xi(\lambda)$. \end{corollary} \begin{proof} Using Proposition~\ref{res-simple}, we conclude that $\Res^!_\Xi(L(\lambda)),\Res^*_\Xi(L(\lambda)),\Res^{!*}_\Xi(L(\lambda))$ are simple highest weight $\l_\Xi$-modules of highest weight $\lambda$.
Therefore $\Res^!_\Xi(L(\lambda))=\Res^*_\Xi(L(\lambda))=\Res^{!*}_\Xi(L(\lambda))=L_\Xi(\lambda)$. \end{proof} \begin{proposition} \label{ind-simple} The functor $\Ind_{\Xi,!*}$ maps a simple $\l_\Xi$-weight module to a simple weight $\g$-module. \end{proposition} \begin{proof} Let $N$ be a simple $\l_\Xi$-weight module, then it is $\xi$-shifted for some $\xi$. Pick any nonzero $n\in N$, then $U\l_\Xi.n=N$ by the simplicity of $N$. Therefore $\Ind_{\Xi,!}(N)=U\g\otimes_{U\p^+_\Xi}N=U\g.(1\otimes n)$ and the image of $1\otimes n$ in $\Ind_{\Xi,!*}(N)$ is cyclic. Let $\bar{v}$ be any nonzero vector in $\Ind_{\Xi,!*}(N)$, and $v$ be a preimage of $\bar{v}$ in $\Ind_{\Xi,!}(N)$. Consider the $\g$-submodule $U\g.v\subset\Ind_{\Xi,!}(N)$. If it has zero intersection with $1\otimes N$, then by Remark~\ref{max-new}, it is contained in $\J^{-1}_{\Xi,!}(N)$. As a consequence, its image in $\Ind_{\Xi,!*}(N)$ will be zero. In particular, $\bar{v}=0$, a contradiction. Therefore the intersection $U\g.v\cap(1\otimes N)$ is nonzero. This means that there exists a nonzero element in $U\g.v$ of the form $1\otimes n$ for some $n\in N$. By previous discussion, we see that $1\otimes n$ generates $\Ind_{\Xi,!}(N)$. Hence $U\g.v=\Ind_{\Xi,!}(N)$ and $U\g.\bar{v}=\Ind_{\Xi,!*}(N)$. This shows the simplicity of $\Ind_{\Xi,!*}(N)$. \end{proof} \begin{remark} The above Proposition~\ref{ind-simple} justifies the term ``minimal''. \end{remark} \begin{corollary} \label{minimal-maps-hwsimple} For any weight $\lambda\in\h^*$, $\Ind_{\Xi,!*}(L_\Xi(\lambda))=L(\lambda)$. \end{corollary} \begin{proof} By Proposition~\ref{ind-simple}, $\Ind_{\Xi,!*}(L_\Xi(\lambda))$ is a simple $\g$-weight module. Noting that it is a highest weight module of highest weight $\lambda$, we conclude that $\Ind_{\Xi,!*}(L_\Xi(\lambda))=L(\lambda)$. \end{proof} \section{Applications} \label{applications} We exhibit two applications of our theory of minimal parabolic induction. 
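Before turning to the applications, it may help to keep in mind the smallest instance of the formalism, the case $\Xi=\emptyset$ (a sketch; here $\l_\emptyset=\h$, $L_\emptyset(\lambda)=\C_\lambda$, and we write $\b^\pm=\p^\pm_\emptyset$ for the two Borel subalgebras):

```latex
% Xi = \emptyset: parabolic induction from the Cartan subalgebra.
\Ind_{\emptyset,!}(\C_\lambda)=U\g\otimes_{U\b^+}\C_\lambda=M(\lambda),
\qquad
\Ind_{\emptyset,*}(\C_\lambda)=\Hom_{U\b^-}(U\g,\C_\lambda),
% and the minimal parabolic induction is the image
\Ind_{\emptyset,!*}(\C_\lambda)
  =\im\bigl(\Ind_{\emptyset,!}(\C_\lambda)\to\Ind_{\emptyset,*}(\C_\lambda)\bigr)
  =L(\lambda),
% as predicted by Corollary~\ref{minimal-maps-hwsimple}.
```

In this case the kernel $\J^{-1}_{\emptyset,!}(\C_\lambda)$ is the maximal proper submodule $N(\lambda)$ of the Verma module $M(\lambda)$, consistently with Remark~\ref{max}.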
\subsection{Extensions} In this section, we examine the behavior of first extension groups between some simple highest weight modules under minimal parabolic induction. \begin{lemma} \label{res-exact} Let $\lambda\in\h^*$ be a weight. Suppose we have a short exact sequence in $\g\wtMod$ $$0\to M_1\to M_2\to L(\lambda)\to0.$$ Moreover, assume that there is no weight $\mu\in\supp(M_1)$ such that $\mu>\lambda$; then the restricted sequence $$0\to\Res^!_\Xi(M_1)\to\Res^!_\Xi(M_2)\to\Res^!_\Xi(L(\lambda))=L_\Xi(\lambda)\to0$$ is also exact. \end{lemma} \begin{proof} The functor $\Res^!_\Xi$ is left exact, so we only need to show that $\Res^!_\Xi(M_2)\to L_\Xi(\lambda)$ is surjective. Let $v_\lambda$ be the highest weight vector in $L(\lambda)$. By assumption, $v_\lambda$ must come from a highest weight vector $v\in M_2$. We have $v\in M_2^{\n^+}\subset M_2^{\u^+_\Xi}=\Res^!_\Xi(M_2)$. Therefore $v_\lambda$, now viewed as the highest weight vector in $L_\Xi(\lambda)$, lives in the image of $\Res^!_\Xi(M_2)\to L_\Xi(\lambda)$. As a consequence, the map $\Res^!_\Xi(M_2)\to L_\Xi(\lambda)$ is surjective. \end{proof} \begin{proposition} \label{res-ext} Let $\mu,\lambda\in\h^*$ be two weights such that $\mu\equiv\lambda\pmod{Q_\Xi}$; then we have an inclusion $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))\incl\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)).$$ \end{proposition} \begin{proof} Using the duality functor introduced in Section~\ref{km-weight}, we have $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))=\Ext^1_{\g\wtMod}(L(\mu),L(\lambda)),\quad\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))=\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\mu),L_\Xi(\lambda)).$$ So without loss of generality, we may assume that $\mu\ngtr\lambda$.
Now by Lemma~\ref{res-exact}, for any extension $$[0\to L(\mu)\to P\to L(\lambda)\to0]\in\Ext^1_{\g\wtMod}(L(\lambda),L(\mu)),$$ the restricted sequence $$0\to\Res^!_\Xi(L(\mu))=L_\Xi(\mu)\to\Res^!_\Xi(P)\to\Res^!_\Xi(L(\lambda))=L_\Xi(\lambda)\to0$$ is also exact, and hence gives an element in $\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))$. Moreover, it induces a linear map $$\R\colon\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))\to\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)).$$ Let us show that $\R$ is injective. Let $$\left[0\to L(\mu)\xrightarrow{f} P\xrightarrow{g} L(\lambda)\to0\right]$$ be an element in $\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))$ such that $$\R\left[0\to L(\mu)\xrightarrow{f} P\xrightarrow{g} L(\lambda)\to0\right]=0.$$ This means that the projection $\Res^!_\Xi(P)\proj\Res^!_\Xi(L(\lambda))=L_\Xi(\lambda)$ admits a section, which we denote by $\iota$. Denote by $v_\lambda$ the highest weight vector in $L(\lambda)$ (it is also the highest weight vector in $L_\Xi(\lambda)$), then any element in $L(\lambda)$ is of the form $X.v_\lambda$ for some $X\in U\g$. Moreover, we know that $\iota(v_\lambda)$ is a highest weight vector in $P$ because $\mu\ngtr\lambda$. We claim that the map $\Tilde{\iota}\colon L(\lambda)\to P$ defined by $X.v_\lambda\mapsto X.\iota(v_\lambda)$ is well-defined and provides a section for $g$. Suppose $X_1.v_\lambda=X_2.v_\lambda$ for some $X_1,X_2\in U\g$, then $$g((X_1-X_2).\iota(v_\lambda))=(X_1-X_2).g(\iota(v_\lambda))=(X_1-X_2).v_\lambda=0.$$ Therefore $(X_1-X_2).\iota(v_\lambda)=f(v)$ for some $v\in L(\mu)$. If $v\neq0$, then it generates $L(\mu)$ by the simplicity of $L(\mu)$. In particular, there exists $Y\in U\g$ such that $Y.v=v_\mu$, where $v_\mu$ is the highest weight vector of $L(\mu)$ (it is also the highest weight vector of $L_\Xi(\mu)$). 
Now we have $$f(v_\mu)=Y.f(v)=Y(X_1-X_2).\iota(v_\lambda)\in U\g.\iota(v_\lambda)=(U\u^-_\Xi\otimes U\l_\Xi\otimes U\u^+_\Xi).\iota(v_\lambda)=(U\u^-_\Xi\otimes U\l_\Xi).\iota(v_\lambda).$$ Since $\lambda\equiv\mu\pmod{Q_\Xi}$, we have, by Observation~\ref{weight}, $$f(v_\mu)\in U\l_\Xi.\iota(v_\lambda)=\iota(L_\Xi(\lambda)).$$ This contradicts the fact that $\iota$ is a section of $\Res^!_\Xi(g)$. Therefore, we have $v=0$. Hence $X_1.\iota(v_\lambda)=X_2.\iota(v_\lambda)$, and $\Tilde{\iota}$ is well-defined. It is now transparent that $\Tilde{\iota}$ provides a section for $g$. \end{proof} We introduce the following definition for technical reasons. \begin{definition} \label{joyful} An integral weight $\lambda\in\h^*$ is called \textit{$\Xi$-joyful} if $$N(\lambda)=\J^{-1}_{\Xi,!}(M_\Xi(\lambda))+\Ind_{\Xi,!}(N_\Xi(\lambda)).$$ Recall that $N(\lambda)$ (resp. $N_\Xi(\lambda)$) is the maximal $\g$ (resp. $\l_\Xi$) proper submodule of $M(\lambda)$ (resp. $M_\Xi(\lambda)$). \end{definition} \begin{remark} We always have an inclusion $$\J^{-1}_{\Xi,!}(M_\Xi(\lambda))+\Ind_{\Xi,!}(N_\Xi(\lambda))\incl N(\lambda).$$ \end{remark} \begin{proposition} \label{gen-singular} Suppose $\lambda$ is an integral weight and $N(\lambda)$ is generated by singular vectors, i.e. $$N(\lambda)=\sum_{\alpha\in\Pi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda),$$ then $\lambda$ is $\Xi$-joyful for any parabolic type $\Xi$. \end{proposition} \begin{proof} We have $$\sum_{\alpha\in\Pi\backslash\Xi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda)\subset\u^-_\Xi(U\u^-_\Xi)\otimes M_\Xi(\lambda),$$ so $$\sum_{\alpha\in\Pi\backslash\Xi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda)\subset\J^{-1}_{\Xi,!}(M_\Xi(\lambda))$$ by Proposition~\ref{j-2}.
Moreover, $$\sum_{\alpha\in\Xi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda)=\Ind_{\Xi,!}\left(\sum_{\alpha\in\Xi:s_\alpha\cdot\lambda<\lambda}M_\Xi(s_\alpha\cdot\lambda)\right)\subset\Ind_{\Xi,!}(N_\Xi(\lambda)),$$ so $$N(\lambda)=\sum_{\alpha\in\Pi\backslash\Xi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda)+\sum_{\alpha\in\Xi:s_\alpha\cdot\lambda<\lambda}M(s_\alpha\cdot\lambda)\subset\J^{-1}_{\Xi,!}(M_\Xi(\lambda))+\Ind_{\Xi,!}(N_\Xi(\lambda)).$$ Combined with the reverse inclusion from the preceding remark, this gives the desired equality. \end{proof} The notion of $\Xi$-joyfulness is useful due to the following proposition. \begin{proposition} \label{sing-surj} Suppose an integral weight $\lambda\in\h^*$ is $\Xi$-joyful, then the map $$\J^{-1}_{\Xi,!}(M_\Xi(\lambda))\to\J^{-1}_{\Xi,!}(L_\Xi(\lambda))$$ induced from the projection $M_\Xi(\lambda)\proj L_\Xi(\lambda)$ is surjective. \end{proposition} \begin{proof} Let us consider a commutative diagram with exact rows $$\begin{tikzcd} 0 \arrow[r] & {\J^{-1}_{\Xi,!}(M_\Xi(\lambda))} \arrow[r] \arrow[d, "a"] & {\Ind_{\Xi,!}(M_\Xi(\lambda))} \arrow[r] \arrow[d, "b"] & {\Ind_{\Xi,!*}(M_\Xi(\lambda))} \arrow[r] \arrow[d, "c"] & 0 \\ 0 \arrow[r] & {\J^{-1}_{\Xi,!}(L_\Xi(\lambda))} \arrow[r] & {\Ind_{\Xi,!}(L_\Xi(\lambda))} \arrow[r] & {\Ind_{\Xi,!*}(L_\Xi(\lambda))} \arrow[r] & 0 \end{tikzcd}$$ By the Snake Lemma, it suffices to show that the map $$\ker(\Ind_{\Xi,!}(M_\Xi(\lambda))\xrightarrow{b}\Ind_{\Xi,!}(L_\Xi(\lambda)))\to\ker(\Ind_{\Xi,!*}(M_\Xi(\lambda))\xrightarrow{c}\Ind_{\Xi,!*}(L_\Xi(\lambda)))$$ is surjective.
Since $\Ind_{\Xi,!}$ is exact, $$\ker(\Ind_{\Xi,!}(M_\Xi(\lambda))\xrightarrow{b}\Ind_{\Xi,!}(L_\Xi(\lambda)))=\Ind_{\Xi,!}(N_\Xi(\lambda)).$$ Noticing that $\Ind_{\Xi,!}(M_\Xi(\lambda))=M(\lambda)$ and $\Ind_{\Xi,!*}(L_\Xi(\lambda))=L(\lambda)$, we obtain $$\ker(\Ind_{\Xi,!}(M_\Xi(\lambda))\to\Ind_{\Xi,!*}(L_\Xi(\lambda)))=\ker(M(\lambda)\to L(\lambda))=N(\lambda).$$ Therefore $$\ker(\Ind_{\Xi,!*}(M_\Xi(\lambda))\xrightarrow{c}\Ind_{\Xi,!*}(L_\Xi(\lambda)))=\frac{\ker(\Ind_{\Xi,!}(M_\Xi(\lambda))\to\Ind_{\Xi,!*}(L_\Xi(\lambda)))}{\ker(\Ind_{\Xi,!}(M_\Xi(\lambda))\to\Ind_{\Xi,!*}(M_\Xi(\lambda)))}=\frac{N(\lambda)}{\J^{-1}_{\Xi,!}(M_\Xi(\lambda))}.$$ By our assumption that $\lambda$ is $\Xi$-joyful, the map $\Ind_{\Xi,!}(N_\Xi(\lambda))\to\frac{N(\lambda)}{\J^{-1}_{\Xi,!}(M_\Xi(\lambda))}$ is surjective, so we are done. \end{proof} \begin{lemma} \label{ind-exact} Let $\mu,\lambda\in\h^*$ be two integral weights such that $\mu-\lambda\notin\Z_{\geq0}\Xi$. Moreover, suppose that $\lambda$ is $\Xi$-joyful; then for any short exact sequence of weight $\l_\Xi$-modules $$0\to L_\Xi(\mu)\to Q\to L_\Xi(\lambda)\to0,$$ the induced sequence $$0\to\Ind_{\Xi,!*}(L_\Xi(\mu))=L(\mu)\to\Ind_{\Xi,!*}(Q)\to\Ind_{\Xi,!*}(L_\Xi(\lambda))=L(\lambda)\to0$$ is exact. \end{lemma} \begin{proof} By Proposition~\ref{inj-surj}, the map $\Ind_{\Xi,!*}(L_\Xi(\mu))\to\Ind_{\Xi,!*}(Q)$ is injective and the map $\Ind_{\Xi,!*}(Q)\to\Ind_{\Xi,!*}(L_\Xi(\lambda))$ is surjective. It remains to check exactness at the middle term. Since $\mu-\lambda\notin\Z_{\geq0}\Xi$, the highest weight vector in $L_\Xi(\lambda)$ must come from a highest weight vector in $Q$. Therefore we have an $\l_\Xi$-module morphism $M_\Xi(\lambda)\to Q$ such that the composition $M_\Xi(\lambda)\to Q\to L_\Xi(\lambda)$ is nonzero.
This induces a sequence $$\J^{-1}_{\Xi,!}(M_\Xi(\lambda))\to\J^{-1}_{\Xi,!}(Q)\to\J^{-1}_{\Xi,!}(L_\Xi(\lambda)).$$ The composition $\J^{-1}_{\Xi,!}(M_\Xi(\lambda))\to\J^{-1}_{\Xi,!}(L_\Xi(\lambda))$ is surjective by Proposition~\ref{sing-surj}, so the map $\J^{-1}_{\Xi,!}(Q)\to\J^{-1}_{\Xi,!}(L_\Xi(\lambda))$ is surjective. Now using Lemma~\ref{exact}, we conclude that the sequence $$\Ind_{\Xi,!*}(L_\Xi(\mu))\to\Ind_{\Xi,!*}(Q)\to\Ind_{\Xi,!*}(L_\Xi(\lambda))$$ is exact at the middle term. \end{proof} Now we can state and prove the main result of the section. \begin{proposition} \label{ind-ext} Let $\mu,\lambda\in\h^*$ be two integral weights such that $\mu\equiv\lambda\pmod{Q_\Xi}$ and $\mu-\lambda\notin\Z_{\geq0}\Xi$. Moreover, suppose that $\lambda$ is $\Xi$-joyful; then we have isomorphisms $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))=\Ext^1_{\g\wtMod}(L(\mu),L(\lambda))=\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))=\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\mu),L_\Xi(\lambda)).$$ \end{proposition} \begin{proof} Using the restricted dual introduced in Section~\ref{km-weight}, it is enough to show that $$\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))=\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)).$$ In Proposition~\ref{res-ext}, we have constructed an inclusion $$\R\colon\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))\incl\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))$$ that is induced by the functor $\Res^!_\Xi$. By Lemma~\ref{ind-exact}, for any element $$[0\to L_\Xi(\mu)\to Q\to L_\Xi(\lambda)\to0]\in\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu)),$$ the induced sequence $$0\to\Ind_{\Xi,!*}(L_\Xi(\mu))=L(\mu)\to\Ind_{\Xi,!*}(Q)\to\Ind_{\Xi,!*}(L_\Xi(\lambda))=L(\lambda)\to0$$ is exact, and hence gives an element in $\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))$. This operation induces a linear map $$\I\colon\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))\to\Ext^1_{\g\wtMod}(L(\lambda),L(\mu)).$$ By Proposition~\ref{res-ind}, $\R\circ\I=\id$.
Combining this with the fact that $\R$ is injective, we conclude that $\R$ induces an isomorphism $\R\colon\Ext^1_{\g\wtMod}(L(\lambda),L(\mu))\xrightarrow{\sim}\Ext^1_{\l_\Xi\wtMod}(L_\Xi(\lambda),L_\Xi(\mu))$. \end{proof} \subsection{Annihilators} In this section, let us assume that $\Xi$ is of finite type, i.e. $\l_\Xi$ is a finite dimensional reductive Lie algebra. Let $\lambda\in\h^*$ be a $\rho$-anti-dominant integral weight for $\l_\Xi$, i.e. $\langle\lambda+\rho,\alpha^\vee\rangle\in\Z_{\leq0}$ for any $\alpha^\vee\in\Xi^\vee$. The following statement is a direct consequence of Duflo's theorem on annihilators of Verma modules \cite{duflo1973construction}. \begin{proposition} Let $\lambda\in\h^*$ be a $\rho$-anti-dominant integral weight for $\l_\Xi$. Then for any $w\in W_\Xi$, we have $$\Ann_{U\l_\Xi}(M_\Xi(\lambda))=\Ann_{U\l_\Xi}(M_\Xi(w\cdot\lambda)).$$ \end{proposition} We prove that this property is stable under minimal parabolic induction. \begin{proposition} \label{ann} Let $\lambda\in\h^*$ be a $\rho$-anti-dominant integral weight for $\l_\Xi$. Then for any $w\in W_\Xi$, we have $$\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))).$$ \end{proposition} \begin{proof} Since $\lambda$ is a $\rho$-anti-dominant integral weight for $\l_\Xi$, $M_\Xi(\lambda)=L_\Xi(\lambda)$ is simple, and $L_\Xi(\lambda)$ is the unique simple submodule of $M_\Xi(w\cdot\lambda)$ (its socle). The inclusion $$M_\Xi(\lambda)=L_\Xi(\lambda)\incl M_\Xi(w\cdot\lambda)$$ induces an inclusion $$\Ind_{\Xi,!*}(M_\Xi(\lambda))\incl\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda)).$$ Therefore $\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))\supset\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda)))$. Let $I=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))$; then $I$ is a two-sided ideal of $U\g$. It remains to prove that $I.\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))=0$. Suppose $I.\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))\neq0$, and pick any nonzero $\bar{v}\in I.\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))$.
Let $v$ be a preimage of $\bar{v}$ in $\Ind_{\Xi,!}(M_\Xi(w\cdot\lambda))=U\u^-_\Xi\otimes M_\Xi(w\cdot\lambda)$. Since $\bar{v}$ is nonzero in $\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))$, $U\g.v$ has nonzero intersection with $1\otimes M_\Xi(w\cdot\lambda)$ (Remark~\ref{max-new}). Noting that $M_\Xi(\lambda)$ is the only simple submodule of $M_\Xi(w\cdot\lambda)$, we have $$1\otimes M_\Xi(\lambda)\subset(U\g.v)\cap(1\otimes M_\Xi(w\cdot\lambda)).$$ Let $v_\lambda$ (resp. $v_{w\cdot\lambda}$) be the highest weight vector of $M_\Xi(\lambda)$ (resp. $M_\Xi(w\cdot\lambda)$), and let $\bar{v}_\lambda$ (resp. $\bar{v}_{w\cdot\lambda}$) be the image of $1\otimes v_\lambda$ (resp. $1\otimes v_{w\cdot\lambda}$) in $\Ind_{\Xi,!*}(M_\Xi(\lambda))$ (resp. $\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))$). Then $$\bar{v}_\lambda\in U\g.\bar{v}\subset I.\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))=IU\g.\bar{v}_{w\cdot\lambda}=I.\bar{v}_{w\cdot\lambda}.$$ Choose $X\in I\subset U\g=U\u^-_\Xi\otimes U\l_\Xi\otimes U\u^+_\Xi$ such that $\bar{v}_\lambda=X.\bar{v}_{w\cdot\lambda}$. Let us decompose $X$ as $X=X_1+X_2+X_3$ for some $$X_1\in1\otimes U\l_\Xi\otimes 1=U\l_\Xi,\quad X_2\in\u^-_\Xi(U\u^-_\Xi)\otimes U\l_\Xi\otimes1,\quad X_3\in U\u^-_\Xi\otimes U\l_\Xi\otimes\u^+_\Xi(U\u^+_\Xi).$$ Since $\bar{v}_{w\cdot\lambda}$ is a highest weight vector, $X_3.\bar{v}_{w\cdot\lambda}=0$, so $$\bar{v}_\lambda=X_1.\bar{v}_{w\cdot\lambda}+X_2.\bar{v}_{w\cdot\lambda}.$$ Noticing that $w\cdot\lambda-\lambda\in Q_\Xi$, we have, by Observation~\ref{weight}, $\bar{v}_\lambda=X_1.\bar{v}_{w\cdot\lambda}$. But $X\in I=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))$, so for any $m\in M_\Xi(\lambda)$, $$0=X.\bar{m}=X_1.\bar{m}+X_2.\bar{m}.$$ Here $\bar{m}$ denotes the image of $1\otimes m$ in $\Ind_{\Xi,!*}(M_\Xi(\lambda))$. Therefore $$1\otimes(X_1.m)+X_2.(1\otimes m)\in\J^{-1}_{\Xi,!}(M_\Xi(\lambda))\subset\u^-_\Xi(U\u^-_\Xi)\otimes M_\Xi(\lambda).$$ Thus $X_1.m=0$.
Consequently, $$X_1\in\Ann_{U\l_\Xi}(M_\Xi(\lambda))=\Ann_{U\l_\Xi}(M_\Xi(w\cdot\lambda)).$$ This implies that $\bar{v}_\lambda=X_1.\bar{v}_{w\cdot\lambda}=0$, which is absurd. So $I$ annihilates $\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))$, and we are done. \end{proof} \begin{corollary} Let $\lambda\in\h^*$ be a $\rho$-anti-dominant integral weight for $\l_\Xi$. Then for any $w\in W_\Xi,$ we have $$\Ann_{U\g}(L(\lambda))\subset\Ann_{U\g}(L(w\cdot\lambda)).$$ \end{corollary} \begin{proof} Since $\lambda$ is a $\rho$-anti-dominant integral weight for $\l_\Xi$, $L_\Xi(\lambda)=M_\Xi(\lambda)$. By Corollary~\ref{minimal-maps-hwsimple}, $$L(\lambda)=\Ind_{\Xi,!*}(L_\Xi(\lambda))=\Ind_{\Xi,!*}(M_\Xi(\lambda)).$$ From Proposition~\ref{ann}, we know that $$\Ann_{U\g}(L(\lambda))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(\lambda)))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))).$$ The quotient $M_\Xi(w\cdot\lambda)\proj L_\Xi(w\cdot\lambda)$ induces a surjection $\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda))\proj\Ind_{\Xi,!*}(L_\Xi(w\cdot\lambda))=L(w\cdot\lambda)$ (Proposition~\ref{inj-surj}), so $$\Ann_{U\g}(L(\lambda))=\Ann_{U\g}(\Ind_{\Xi,!*}(M_\Xi(w\cdot\lambda)))\subset\Ann_{U\g}(L(w\cdot\lambda)).$$ \end{proof} \section{Minimal type} To give a hopefully more accessible orientation to our general theory, let us consider the minimal parabolic type, i.e. $\Xi=\{\alpha\}$ consists of a single simple root $\alpha$. For brevity, let us write $\p^+_\alpha$ for $\p^+_{\{\alpha\}}$, $\Ind_{\alpha,!*}$ for $\Ind_{\{\alpha\},!*}$, etc. In this case, $\l_\alpha$ is a finite dimensional reductive Lie algebra, whose derived subalgebra $\s_\alpha=[\l_\alpha,\l_\alpha]$ is isomorphic to $\sl_2$. Let $\z_\alpha$ be the center of $\l_\alpha$; then $\z_\alpha\subset\h$ and $\l_\alpha=\s_\alpha\oplus\z_\alpha$.
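The weights appearing in the exact sequences below are linked by the standard dot action $s_\alpha\cdot\lambda=s_\alpha(\lambda+\rho)-\rho=\lambda-\langle\lambda+\rho,\alpha^\vee\rangle\alpha$: for $n\in\Z_{\geq0}$ and $\xi\in\z_\alpha^*$, using $\langle\xi,\alpha^\vee\rangle=0$ and $\langle\rho,\alpha^\vee\rangle=1$, we compute $$s_\alpha\cdot\left(\frac{n}{2}\alpha+\xi\right)=\frac{n}{2}\alpha+\xi-(n+1)\alpha=-\frac{n+2}{2}\alpha+\xi.$$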
For any $n\in\Z_{\geq0}$ and $\xi\in\z_\alpha^*\subset\h^*$, we have a non-split short exact sequence $$0\to M_\alpha\left(-\frac{n+2}{2}\alpha+\xi\right)\to M_\alpha\left(\frac{n}{2}\alpha+\xi\right)\to L_\alpha\left(\frac{n}{2}\alpha+\xi\right)\to0.$$ Notice that $-\frac{n+2}{2}\alpha+\xi$ is a $\rho$-anti-dominant integral weight for $\l_\alpha$, so $M_\alpha\left(-\frac{n+2}{2}\alpha+\xi\right)=L_\alpha\left(-\frac{n+2}{2}\alpha+\xi\right)$ is simple. By applying minimal parabolic induction $\Ind_{\alpha,!*}$, we obtain a sequence $$0\to L\left(-\frac{n+2}{2}\alpha+\xi\right)\to\Ind_{\alpha,!*}\left(M_\alpha\left(\frac{n}{2}\alpha+\xi\right)\right)\to L\left(\frac{n}{2}\alpha+\xi\right)\to0.$$ By Proposition~\ref{ann}, we have $$\Ann_{U\g}\left(L\left(-\frac{n+2}{2}\alpha+\xi\right)\right)=\Ann_{U\g}\left(\Ind_{\alpha,!*}\left(M_\alpha\left(\frac{n}{2}\alpha+\xi\right)\right)\right)\subset\Ann_{U\g}\left(L\left(\frac{n}{2}\alpha+\xi\right)\right)$$ in the above sequence. Suppose, in addition, that the weight $\frac{n}{2}\alpha+\xi$ is integral and $\alpha$-joyful. Then by Proposition~\ref{ind-ext}, the above sequence is exact. It does not split, because it does not split after applying $\Res^!_\alpha$. This gives rise to a nonzero element in $\Ext^1_{\g\wtMod}\left(L\left(\frac{n}{2}\alpha+\xi\right),L\left(-\frac{n+2}{2}\alpha+\xi\right)\right)$. \begin{example} Let $\g=\widehat{E_8}$ be the untwisted affine Kac--Moody algebra associated to $E_8$. Let $\Pi=\{\alpha_0,\alpha_1,\cdots,\alpha_8\}$ be the set of simple roots of $\widehat{E_8}$, where $\alpha_1,\cdots,\alpha_8$ are the simple roots of the finite root system of $E_8$. Here we follow the Bourbaki convention on root systems \cite{bourbakigroupes}. Recall that in Section~\ref{transpose-anti-involution}, we introduced the Chevalley generators $e_i\in\g_{\alpha_i}$, $f_i\in\g_{-\alpha_i}$. Let $\Lambda_0,\cdots,\Lambda_8$ be the fundamental weights of $\widehat{E_8}$.
Let $s_i$ be the simple reflection in $W$ associated to the simple root $\alpha_i$. Notice that $-\Lambda_4=s_2\cdot(-\Lambda_4)+\alpha_2$. As described above, we have a sequence \begin{equation} \label{sequence} 0\to L(s_2\cdot(-\Lambda_4))\to\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))\to L(-\Lambda_4)\to0, \end{equation} where $$\Ann_{U\widehat{E_8}}(L(s_2\cdot(-\Lambda_4)))=\Ann_{U\widehat{E_8}}(\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4)))\subset\Ann_{U\widehat{E_8}}(L(-\Lambda_4)).$$ In this case, there is a trick to show that the sequence~\eqref{sequence} is exact. Let $L_{-6}(E_8)$ be the simple affine vertex algebra associated to $E_8$ at level $-6$. By the classification of \cite{arakawa2018joseph}, we know that $L(s_2\cdot(-\Lambda_4))$ is an $L_{-6}(E_8)$-module. Since $\Ann_{U\widehat{E_8}}(L(s_2\cdot(-\Lambda_4)))=\Ann_{U\widehat{E_8}}(\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4)))$, $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$ is also an $L_{-6}(E_8)$-module.\footnote{Recall that $L_{-6}(E_8)=V^{-6}(E_8)/J$. Here $V^{-6}(E_8)$ is the universal affine vertex algebra associated to $E_8$ at level $-6$, and $J$ is some vertex algebra ideal. A $V^{-6}(E_8)$-module $M$ is an $L_{-6}(E_8)$-module if and only if $Y(v,z).m=0$ for any $v\in J$ and $m\in M$, where $Y(v,z)=\sum_{n\in\Z}v_{(n)}z^{-n-1}$ is the field corresponding to $v$. Notice that we can write the Fourier modes $v_{(n)}$ as elements from $U\widehat{E_8}$. More precisely, $v_{(n)}\in\End(M)$ lies in the image of $U\widehat{E_8}\to\End(M)$ for any $n\in\Z$.
Therefore, the $V^{-6}(E_8)$-module $M$ is an $L_{-6}(E_8)$-module if and only if $v_{(n)}\in\Ann_{U\widehat{E_8}}(M)$ for any $v\in J$.} Again using the classification in \cite{arakawa2018joseph}, we see that each composition factor of $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$ is of the form $L(w\cdot(-\Lambda_4))$ for some $$w\in\{\id,s_2,s_3,s_1s_3,s_5,s_6s_5,s_7s_6s_5,s_8s_7s_6s_5,s_0s_8s_7s_6s_5\}.$$ We have \begin{align*} -\Lambda_4&=s_2\cdot(-\Lambda_4)+\alpha_2,\\ -\Lambda_4&=s_3\cdot(-\Lambda_4)+\alpha_3,\\ s_3\cdot(-\Lambda_4)&=s_1s_3\cdot(-\Lambda_4)+2\alpha_1,\\ -\Lambda_4&=s_5\cdot(-\Lambda_4)+\alpha_5,\\ s_5\cdot(-\Lambda_4)&=s_6s_5\cdot(-\Lambda_4)+2\alpha_6,\\ s_6s_5\cdot(-\Lambda_4)&=s_7s_6s_5\cdot(-\Lambda_4)+3\alpha_7,\\ s_7s_6s_5\cdot(-\Lambda_4)&=s_8s_7s_6s_5\cdot(-\Lambda_4)+4\alpha_8,\\ s_8s_7s_6s_5\cdot(-\Lambda_4)&=s_0s_8s_7s_6s_5\cdot(-\Lambda_4)+5\alpha_0. \end{align*} Let $v_{-\Lambda_4}$ be the highest weight vector in $M_{\alpha_2}(-\Lambda_4)$, and let $\bar{v}_{-\Lambda_4}$ be the image of $1\otimes v_{-\Lambda_4}$ in $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$. For $i\in\{0,1,3,5,6,7,8\}$, we have $$e_jf_i.(1\otimes v_{-\Lambda_4})=f_ie_j.(1\otimes v_{-\Lambda_4})=0$$ for $j\in\{0,\cdots,8\}\backslash\{i\}$. Moreover, $$e_if_i.(1\otimes v_{-\Lambda_4})=f_ie_i.(1\otimes v_{-\Lambda_4})+\alpha_i^\vee.(1\otimes v_{-\Lambda_4})=0+\langle-\Lambda_4,\alpha_i^\vee\rangle(1\otimes v_{-\Lambda_4})=0.$$ Therefore, $f_i.(1\otimes v_{-\Lambda_4})$ is a (possibly zero) highest weight vector in $\Ind_{\alpha_2,!}(M_{\alpha_2}(-\Lambda_4))=U\u^-_{\alpha_2}\otimes M_{\alpha_2}(-\Lambda_4)$. In particular, it generates an $\widehat{E_8}$-submodule of $\Ind_{\alpha_2,!}(M_{\alpha_2}(-\Lambda_4))=U\u^-_{\alpha_2}\otimes M_{\alpha_2}(-\Lambda_4)$ that has zero intersection with $1\otimes M_{\alpha_2}(-\Lambda_4)$.
By Remark~\ref{max-new}, we see that this submodule lies in $\J^{-1}_{\alpha_2,!}(M_{\alpha_2}(-\Lambda_4))$, and hence $f_i.\bar{v}_{-\Lambda_4}=0$ in $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$. This shows that $L(w\cdot(-\Lambda_4))$ cannot appear as a composition factor of $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$ for $w\in\{s_3,s_1s_3,s_5,s_6s_5,s_7s_6s_5,s_8s_7s_6s_5,s_0s_8s_7s_6s_5\}$, and hence the only possible composition factors of $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$ are $L(-\Lambda_4)$ and $L(s_2\cdot(-\Lambda_4))$. Notice that the weight spaces $\Ind_{\alpha_2,!}(M_{\alpha_2}(-\Lambda_4))_{-\Lambda_4}$ and $\Ind_{\alpha_2,!}(M_{\alpha_2}(-\Lambda_4))_{s_2\cdot(-\Lambda_4)}$ are both one dimensional, so both $L(-\Lambda_4)$ and $L(s_2\cdot(-\Lambda_4))$ can appear at most once as composition factors of $\Ind_{\alpha_2,!*}(M_{\alpha_2}(-\Lambda_4))$. This shows the exactness of the sequence~\eqref{sequence}. Moreover, we see that this is a short exact sequence of $L_{-6}(E_8)$-modules. By similar careful analysis, we see that the extension quiver of simple highest weight $L_{-6}(E_8)$-modules is the same as the double quiver of the affine Dynkin quiver of type $E_8^{(1)}$. \begin{theorem} Consider the diagram $$\begin{tikzcd} & & & & & s_2 \arrow[d, bend left] & & \\ s_0s_8s_7s_6s_5 \arrow[r, bend left] & s_8s_7s_6s_5 \arrow[r, bend left] \arrow[l, bend left] & s_7s_6s_5 \arrow[r, bend left] \arrow[l, bend left] & s_6s_5 \arrow[r, bend left] \arrow[l, bend left] & s_5 \arrow[r, bend left] \arrow[l, bend left] & \id \arrow[r, bend left] \arrow[l, bend left] \arrow[u, bend left] & s_3 \arrow[r, bend left] \arrow[l, bend left] & s_1s_3 \arrow[l, bend left] \end{tikzcd}$$ Then we have $$\dim\Ext^1_{\widehat{E_8}\wtMod}(L(w\cdot(-\Lambda_4)),L(w'\cdot(-\Lambda_4)))=\#\text{ of arrows from }w\text{ to }w'\text{ in the above diagram}$$ for $w,w'\in\{\id,s_2,s_3,s_1s_3,s_5,s_6s_5,s_7s_6s_5,s_8s_7s_6s_5,s_0s_8s_7s_6s_5\}$.
Furthermore, all nontrivial extensions are in fact extensions of $L_{-6}(E_8)$-modules. \end{theorem} We can perform similar constructions for vertex algebras $L_{-4}(E_7),L_{-3}(E_6)$ and $L_{-2}(D_4)$. They arise from rank one SCFT under the $4d/2d$ duality (cf. \cite{beem2015infinite}, \cite{shan2023mirror}). The case for $L_{-2}(D_4)$ was already considered by \cite{kawasetsu2022relaxed}, from a different point of view (Zhu's induction). \end{example} \begin{theorem} Consider the diagram $$\begin{tikzcd} & & & s_2 \arrow[d, bend left] & & & \\ s_0s_1s_3 \arrow[r, bend left] & s_1s_3 \arrow[r, bend left] \arrow[l, bend left] & s_3 \arrow[r, bend left] \arrow[l, bend left] & \id \arrow[r, bend left] \arrow[u, bend left] \arrow[l, bend left] & s_5 \arrow[r, bend left] \arrow[l, bend left] & s_6s_5 \arrow[r, bend left] \arrow[l, bend left] & s_7s_6s_5 \arrow[l, bend left] \end{tikzcd}$$ Then we have $$\dim\Ext^1_{\widehat{E_7}\wtMod}(L(w\cdot(-\Lambda_4)),L(w'\cdot(-\Lambda_4)))=\#\text{ of arrows from }w\text{ to }w'\text{ in the above diagram}$$ for $w,w'\in\{\id,s_2,s_3,s_1s_3,s_0s_1s_3,s_5,s_6s_5,s_7s_6s_5\}$. Furthermore, all nontrivial extensions are in fact extensions of $L_{-4}(E_7)$-modules. \end{theorem} \begin{theorem} Consider the diagram $$\begin{tikzcd} & & s_0s_2 \arrow[d, bend left] & & \\ & & s_2 \arrow[d, bend left] \arrow[u, bend left] & & \\ s_1s_3 \arrow[r, bend left] & s_3 \arrow[r, bend left] \arrow[l, bend left] & \id \arrow[r, bend left] \arrow[u, bend left] \arrow[l, bend left] & s_5 \arrow[r, bend left] \arrow[l, bend left] & s_6s_5 \arrow[l, bend left] \end{tikzcd}$$ Then we have $$\dim\Ext^1_{\widehat{E_6}\wtMod}(L(w\cdot(-\Lambda_4)),L(w'\cdot(-\Lambda_4)))=\#\text{ of arrows from }w\text{ to }w'\text{ in the above diagram}$$ for $w,w'\in\{\id,s_2,s_0s_2,s_3,s_1s_3,s_5,s_6s_5\}$. Furthermore, all nontrivial extensions are in fact extensions of $L_{-3}(E_6)$-modules. 
\end{theorem} \begin{theorem} Consider the diagram $$\begin{tikzcd} & s_0 \arrow[d, bend left] & \\ s_3 \arrow[r, bend left] & \id \arrow[r, bend left] \arrow[u, bend left] \arrow[l, bend left] \arrow[d, bend left] & s_4 \arrow[l, bend left] \\ & s_1 \arrow[u, bend left] & \end{tikzcd}$$ Then we have $$\dim\Ext^1_{\widehat{D_4}\wtMod}(L(w\cdot(-\Lambda_2)),L(w'\cdot(-\Lambda_2)))=\#\text{ of arrows from }w\text{ to }w'\text{ in the above diagram}$$ for $w,w'\in\{\id,s_0,s_1,s_3,s_4\}$. Furthermore, all nontrivial extensions are in fact extensions of $L_{-2}(D_4)$-modules. \end{theorem} \bibliographystyle{alpha} \bibliography{bib} \end{document}
2412.19054v2
http://arxiv.org/abs/2412.19054v2
Regularized neural network for general variational inequalities involving monotone couples of operators in Hilbert spaces
\begin{filecontents*}{example.eps} gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore \end{filecontents*} \RequirePackage{fix-cm} \documentclass[smallextended,envcountsect]{svjour3} \smartqed \usepackage{tikz} \colorlet{mygreen}{green!60!black} \newcommand{\mygreen}[1]{{\color{mygreen}#1}} \usepackage{graphicx} \usepackage{bigints} \usepackage{amssymb} \usepackage{verbatim} \usepackage{subfigure} \usepackage{booktabs} \usepackage{booktabs} \usepackage{amsfonts,amsmath} \usepackage{capt-of}\usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=blue, urlcolor=blue, citecolor=blue } \usepackage{cite} \newtheorem{assumption}{Assumption}[section] \newtheorem{algorithm}{Algorithm} \newtheorem{counterexample}{Counterexample}[section] \begin{document} \title{Regularized neural network for general variational inequalities involving monotone couples of operators in Hilbert spaces} \titlerunning{Regularized neural networks for monotone GVIs} \author{Pham Ky Anh \and Trinh Ngoc Hai\thanks{Dedicated to Professor Hoang Xuan Phu on the occasion of his 70th birthday.} \and Nguyen Van Manh } \institute{Pham Ky Anh \at Department of Mathematics, Vietnam National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam\\ \email{anhpk\symbol{64}vnu.edu.vn} \and Trinh Ngoc Hai \at School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, 1 Dai Co Viet, Hai Ba Trung, Hanoi, Vietnam\\ \email{hai.trinhngoc\symbol{64}hust.edu.vn} \and Nguyen Van Manh \at Faculty of Basic Sciences, Hanoi University of Industry, 298 Cau Dien, Bac Tu Liem, 100000, Hanoi, Vietnam\\ \email{nvmanhhhn\symbol{64}haui.edu.vn} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, based on the Tikhonov regularization technique, we study a monotone general variational inequality (GVI) by considering an associated strongly monotone GVI, depending 
on a regularization parameter $\alpha,$ such that the latter admits a unique solution $x_\alpha$ which tends to some solution of the initial GVI as $\alpha \to 0.$ However, instead of solving the regularized GVI for each $\alpha$, which may be very expensive, we consider a neural network (also known as a dynamical system) associated with the regularized GVI and establish the existence and uniqueness of the strong global solution to the corresponding Cauchy problem. An explicit discretization of this neural network leads to strongly convergent iterative regularization algorithms for monotone general variational inequalities. Numerical tests are performed to show the effectiveness of the proposed methods.\\ This work extends our recent results in [Anh, Hai, Optim. Eng. 25 (2024) 2295-2313] to a more general setting. \keywords{General variational inequality \and Bilevel variational inequality \and Regularized neural network \and Monotonicity \and Monotone couple \and Lipschitz continuity} \subclass{47J20 \and 49J40 \and 49M30} \end{abstract} \section{Introduction and Preliminaries} Let $H$ be a real Hilbert space, whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle $ and $\| \cdot\| $, respectively. Let $C \subset H$ be a nonempty, closed and convex set and $A, B, F, G : H \to H$ be given operators. \\ The variational inequality of $G$ on $C$ is \begin{equation}\label{VI} \text{to find } x^* \in C \text{ such that } \left\langle G x^*, y- x^* \right\rangle \geq 0 \quad \forall y \in C. \tag{VI($G,C$)} \end{equation} The inverse variational inequality of $B$ on $C$ is \begin{equation}\label{IVI} \text{to find } x^* \in H \text{ such that } Bx^* \in C \text{ and } \left\langle x^*, y- B x^* \right\rangle \geq 0 \quad \forall y \in C.
\tag{IVI($B,C$)} \end{equation} In this paper, we focus our attention on solving the following General Variational Inequality (GVI, for short): \begin{equation}\label{GVI} {\rm Find}\; x^* \in H \; {\rm such \; that} \; Fx^* \in C \; {\rm and } \;\langle Ax^*, y - Fx^*\rangle \geq 0 \quad \; \forall y \in C.\;\; \tag{GVI($A, F, C$)} \end{equation} Denote by Sol($A, F, C$) the solution set of GVI($A, F, C$), which is assumed to be nonempty throughout this paper.\\ A couple $(A, F)$ is said to be (see \cite{LiZou}) \begin{itemize} \item[] $\gamma$-strongly monotone, $\gamma >0$, if $$ \langle Ax - Ay, Fx - Fy \rangle \geq \gamma \|x - y\|^2, \; \forall x, y \in H; $$ \item[] monotone, if $$ \langle Ax - Ay, Fx - Fy \rangle \geq 0, \; \forall x, y \in H. $$ \end{itemize} A general variational inequality GVI($A, F, C$) involving a monotone (strongly monotone) couple ($A, F$) is called a monotone (strongly monotone) GVI.\\ \indent In recent years there has been a growing interest in general variational inequalities, driven by their numerous applications. When $F=I$, the identity operator, the GVI collapses into the variational inequality VI($A,C$), and if $A=I,$ we get the inverse variational inequality IVI($F, C$). Thus, the general variational inequality provides a unified formulation for several important problems. In what follows, only the literature directly related to GVIs will be cited. The general variational inequality was first introduced and studied by Noor \cite{Noor1}. Pang and Yao \cite{PangYao} established some sufficient conditions for the existence of solutions to a GVI and investigated their stability properties. Zhao \cite{Zhao} and He \cite{He1} proposed some implicit methods for monotone general variational inequalities in finite dimensional Hilbert spaces. Noor et al.\ \cite{AlShejari}, \cite{Noor1}, and Tangkhawiwetkul \cite{Tang} proposed some neural networks, as well as their discretized versions, for solving strongly monotone GVIs.
Based on the Banach contraction principle, they established strong convergence of iterative methods under very restrictive conditions. \\ In this paper, we combine the Tikhonov regularization technique and the dynamical system approach to construct a class of iterative regularization methods for solving monotone general variational inequalities. This distinguishes our methods from most other projection-type methods for general variational inequalities with monotone/strongly monotone couples of operators.\\ The paper is organized as follows. In Section 1, we briefly state some definitions and lemmas which will be used subsequently. In Section 2, the existence and uniqueness of the solution to strongly monotone GVIs is established under some mild conditions. In Section 3, a regularized neural network associated with the unique solution to the strongly monotone GVI depending on the regularization parameter $\alpha,$ is studied. It is proved that the corresponding Cauchy problem admits a unique global solution $x(t),$ whose limit at infinity exists and solves the given monotone GVI. In Section 4, we derive a class of explicit iterative regularization methods by discretizing the above-mentioned Cauchy problem. Finally, in Section 5, several examples, including an infinite dimensional one, are given to demonstrate the applicability of our results.\\ Now, let us recall some notions for subsequent needs.\\ An operator $A:C\to H$ is said to be \begin{enumerate} \item $\lambda$-strongly monotone on $C$, if there exists a constant $\lambda >0$ such that for all $x,y\in C$, we have $$ \langle Ax-Ay, x-y \rangle \geq \lambda \|x-y\|^2;$$ \item monotone on $C$, if for all $x,y\in C$, it holds that $$ \langle Ax - Ay,x-y \rangle \geq 0;$$ \item firmly nonexpansive, if for all $x,y\in C$, one gets $$ \langle Ax - Ay,x-y \rangle \geq \| Ax - Ay \|^2; $$ \item $L_A$-Lipschitz continuous on $C$, if for all $x,y\in C$, it holds that $$ \| Ax-Ay\| \leq L_A \| x-y\|.
$$ \item sequentially weak-to-weak continuous on $C$, if for any sequence $\{x^k\}$ satisfying $x^k \rightharpoonup \bar x$, it holds that $Ax^k \rightharpoonup A\bar x$. \end{enumerate} It is well known that the metric projection of $x\in H$ onto $C,$ defined by $Px:= \underset{y \in C} {\rm argmin}\| x- y\|$, enjoys two important properties: \begin{itemize} \item[i)] $z=Px$ if and only if $$ \left\langle x-z,y-z \right\rangle \leq 0 \quad \forall y \in C; $$ \item[ii)] $\langle Px - Py, x-y \rangle \geq \|Px - Py\|^2, \; \forall x, y \in H.$ \end{itemize} The second property means that $P$ is firmly nonexpansive; hence, it is monotone and nonexpansive. \vskip0.1cm \indent We start with the following generalization of \cite[Lemma 2.2]{AnhHai}. \begin{lemma}\label{lemma1} \begin{itemize} \item[\textup{(a)}] Assume that the couple $(A,F)$ is $\gamma$-strongly monotone and $x^*$ is a solution of \ref{GVI}. Then, for all $f\in C, y\in H$, we have \begin{equation}\label{3a} \left\langle Fy- f, Ay-Ax^* \right\rangle +\left\langle f-Fx^*, Ay \right\rangle \geq \gamma \| y-x^* \|^2 . \end{equation} \item[\textup{(b)}] Assume that $(A,F)$ is monotone. Then, $x^*$ is a solution of \ref{GVI} if and only if \begin{equation}\label{1a} Fx^* \in C \textup{ and } \left \langle Ay-Ax^*, Fy -f \right \rangle + \left \langle Ay, f-Fx^* \right \rangle \geq 0 \quad \forall y\in H, f\in C. \end{equation} \end{itemize} \end{lemma} \begin{proof} (a) Since $(A,F)$ is $\gamma$-strongly monotone, for all $f\in C, y\in H$ we have \begin{equation}\label{2} \left\langle Fy- f, Ay-Ax^* \right\rangle +\left\langle f-Fx^*, Ay \right\rangle \geq \gamma \| y-x^* \|^2 + \left\langle f-Fx^*, Ax^* \right\rangle. \end{equation} Since $x^*$ is a solution of \ref{GVI}, it holds that $\left\langle f-Fx^*, Ax^* \right\rangle \geq 0$.
Combining this and \eqref{2}, we get the desired result.\\ (b) Since $(A,F)$ is monotone, for all $f\in C, y\in H$ it holds that \begin{equation*} \left\langle Fy- f, Ay-Ax^* \right\rangle +\left\langle f-Fx^*, Ay \right\rangle \geq \left\langle f-Fx^*, Ax^* \right\rangle \geq 0. \end{equation*} The necessity is established. Conversely, choosing $y=x^*$ in \eqref{1a}, we get the sufficiency. \end{proof} The following result was established in a finite dimensional space by He and Dong \cite{HeDong}; however, it remains true in any real Hilbert space. \begin{lemma}\label{existuniq}\cite[Theorem 3.2]{HeDong} Let $C$ be a nonempty closed convex subset of a real Hilbert space $\mathcal{H},$ and let $B :\mathcal{H} \to \mathcal{H}$ be a Lipschitz continuous and strongly monotone operator. Then the inverse variational inequality IVI($B, C$) has a unique solution. \end{lemma} \begin{lemma}\label{lemma31}\cite{Xu} Let $\{U_k \}, \{\theta_k \}, \{\zeta_k \} $ be positive sequences satisfying $\{\theta_k\} \subset (0,1)$, $\sum_{k=1}^\infty \theta_k =\infty$, $\lim\limits_{k \to \infty} \dfrac{\zeta_k}{\theta_k}=0 $ and $$ U_{k+1} \leq (1-\theta_k) U_k +\zeta_k\quad \forall k\geq 0. $$ Then, $U_k \to 0$ as $k\to \infty$. \end{lemma} \section{Strongly monotone general variational inequalities} \begin{theorem}\label{Th2.1} Suppose that the operators $A$ and $F,$ acting in a finite dimensional Hilbert space $H,$ are Lipschitz continuous with coefficients $L_A$ and $L_F$, respectively, and that, in addition, the couple $(A, F)$ is $\gamma$-strongly monotone. Then \begin{itemize} \item[i.] $A, F$ admit Lipschitz continuous inverses; moreover, for all $u, v \in H,$ \begin{equation}\label{1} \frac{1}{L_F}\|u-v\| \leq \|F^{-1}u - F^{-1}v\| \leq \frac{L_A}{\gamma}\|u-v\|, \end{equation} $$\frac{1}{L_A}\|u-v\| \leq \|A^{-1}u - A^{-1}v\| \leq \frac{L_F}{\gamma}\|u-v\|. $$ \item[ii.] The general variational inequality GVI($A, F, C$) admits a unique solution.
Moreover, $x^*$ is a solution to the general variational inequality GVI($A, F, C$) if and only if $u^*:= F x^*$ is a solution to the strongly monotone variational inequality VI($AF^{-1}, C$), and if and only if $v^*:= Ax^*$ is a solution to the inverse variational inequality IVI($FA^{-1}, C$). \end{itemize} \end{theorem} \begin{proof} From the $\gamma$-strong monotonicity of the couple $(A, F),$ one has $$\gamma \|x - y\|^2 \leq \langle Ax -Ay, Fx-Fy \rangle \leq \|Ax-Ay\| \|Fx - Fy\|, \; \forall x, y \in H.$$ Using the last inequality and the Lipschitz continuity of $A$ and $F,$ one finds\\ $$\|Fx-Fy\| \geq \frac{\gamma}{L_A}\|x -y\| \;\; {\rm and} \;\; \|Ax-Ay\| \geq \frac{\gamma}{L_F}\|x-y\|,$$ which show that $A$ and $F$ are injective mappings. By Brouwer's invariance of domain theorem \cite{Brou}, the sets $A(H)$ and $F(H)$ are open. To prove that $A(H)$ is also closed, we assume $A(H) \ni y_n \to \tilde{y}.$ Choose $x_n \in H$ such that $Ax_n = y_n.$ Since $\frac{\gamma}{L_F}\|x_n-x_m \| \leq \|Ax_n - Ax_m\| = \|y_n - y_m\|,$ the sequence $\{x_n\}$ is a Cauchy sequence, hence $x_n \to \tilde{x}.$ Further, $\tilde{y} = \lim_{n\to \infty} y_n = \lim_{n\to \infty} Ax_n = A\tilde{x}.$ The last relation shows that $A(H)$ is closed, hence $A(H) =H.$ By the same argument, one can conclude that $F(H) = H.$ Thus both $A$ and $F$ are invertible; moreover, the assertion \eqref{1} holds for $A^{-1}$ and $F^{-1}.$\\ Now suppose $x^* \in {\rm Sol}(A, F, C)$; then $Fx^* \in C$ and $\langle Ax^*, y - Fx^*\rangle \geq 0 $ for all $ y \in C.$ Denoting $v^*:= A x^*, \; u^*:= F x^*,$ one gets $\langle v^*, y - FA^{-1}v^*\rangle \geq 0, \; FA^{-1}v^* \in C $ and $\langle AF^{-1} u^*, y - u^* \rangle \geq 0 $ for all $y \in C.$ That means $u^*$ is the unique solution to the strongly monotone variational inequality VI($AF^{-1}, C$) and $v^*$ is the unique solution to the strongly monotone inverse variational inequality IVI($FA^{-1}, C$).
Conversely, if $u^*$ (resp. $v^*$) is the unique solution to the strongly monotone variational inequality VI($AF^{-1}, C$) (resp. to the strongly monotone inverse variational inequality IVI($FA^{-1}, C$)), then $x^* = F^{-1}u^* = A^{-1}v^*$ is the unique solution to GVI($A, F, C$). $\blacksquare$ \end{proof} {\bf Remark 2.1} The finite dimensionality of $H$ is used only in the proof of the invertibility of $A$ and $F.$ The conclusions of Theorem \ref{Th2.1} may fail to hold if $H$ is an infinite dimensional Hilbert space. In particular, the operators $A, F$ may fail to be invertible, even if ($A, F$) is a strongly monotone couple of Lipschitz continuous operators. Indeed, let $H$ be a real separable Hilbert space with an orthonormal basis $\{e_i\}_{i=1}^\infty$ and $Ax = Fx = \sum_{i=1}^\infty x_ie_{i+1},$ for any $x = \sum_{i=1}^\infty x_i e_i.$ Clearly, $A$ and $F$ are bounded linear non-surjective operators with ${\rm im}\, A = {\rm im}\, F = \overline{\rm span}(\{e_i\}_{i=2}^\infty) \ne H$. However, the couple ($A,F$) is $1$-strongly monotone, since the shift is an isometry: $\langle Ax-Ay, Fx-Fy\rangle = \|x-y\|^2$ for all $x, y \in H$. \vskip0.1cm \begin{theorem}\label{Th2.2} Let $C$ be a nonempty closed convex subset of a real Hilbert space $H,$ and let $A, F: H \to H$ be Lipschitz continuous operators on $H$ with constants $L_A, \; L_F, $ respectively. Assume that $A$ is a $\lambda$-strongly monotone operator and the couple ($A, F$) is $\gamma$-strongly monotone on $H.$ Then the general variational inequality GVI($A, F, C$) admits a unique solution in $H,$ i.e., Sol($A, F, C$) = $\{x^*\}.$ \end{theorem} \begin{proof} Since $A$ is $L_A$-Lipschitz continuous and $\lambda$-strongly monotone on $H,$ it possesses a continuous inverse $A^{-1}.$ The strong monotonicity of the couple ($A, F$) implies that $FA^{-1}$ is $\frac{\gamma}{L_A^2}$-strongly monotone and $\frac{L_F}{\lambda}$-Lipschitz continuous.
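For completeness, let us verify these constants. Writing $u=Ax$, $v=Ay$, so that $\lambda\|x-y\|\leq\|u-v\|\leq L_A\|x-y\|$, we have $$\langle FA^{-1}u-FA^{-1}v, u-v\rangle=\langle Fx-Fy, Ax-Ay\rangle\geq\gamma\|x-y\|^2\geq\frac{\gamma}{L_A^2}\|u-v\|^2,$$ and $$\|FA^{-1}u-FA^{-1}v\|=\|Fx-Fy\|\leq L_F\|x-y\|\leq\frac{L_F}{\lambda}\|u-v\|.$$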
According to Lemma \ref{existuniq}, the inverse variational inequality IVI($FA^{-1}, C$) has a unique solution $u^*$; hence, GVI($A, F, C$) admits a unique solution $x^* := A^{-1}u^*.$ $\blacksquare$ \end{proof} \noindent{\bf Remark 2.2} Tangkhawiwetkul \cite{Tang} proposed a neural network for the so-called generalized inverse mixed variational inequality with a strongly monotone couple of operators. In particular, based on the contraction mapping theorem, the author proved Theorem \ref{Th2.2} under the additional assumptions: $$ \sqrt{L_A^2 - 2\gamma + L_F^2} + \sqrt{1 - 2\lambda + L_A^2} < 1,$$ where $\gamma < \frac{L_A^2 + L_F^2}{2}; \; \lambda < \frac{1 + L_A^2}{2}.$\\ This additional condition narrows the class of GVI problems being studied. \section{Regularized neural network} \noindent{\bf Main assumptions} \begin{assumption}\label{assumption1}We consider problem \ref{GVI} under the following conditions: \begin{itemize} \item[\textbf{(A1)}] $A: H \to H $ is $\lambda$-strongly monotone and $L_A$-Lipschitz continuous, $F$ is $L_F$-Lipschitz continuous, and $A, F$ are sequentially weak-to-weak continuous; \item[\textbf{(A2)}] the couple ($A, F$) is monotone; \item[\textbf{(A3)}] the solution set Sol($A, F, C$) is nonempty. Moreover, the set $\mathcal C:= A({\rm Sol}(A, F, C))$ is convex. \end{itemize} \end{assumption} \begin{remark}\label{CondA3} Suppose $A: H \to H$ is a positive definite, bounded linear operator, i.e., $\langle Ax, x\rangle \geq \ell\|x\|^2$ for all $x\in H$; then $A$ satisfies the conditions of Assumption (A1) with $L_A = \|A\|, \; \lambda = \ell.$ Moreover, $A$ is sequentially weak-to-weak continuous, and the set $A({\rm Sol}(A, F, C))$ is convex if and only if Sol($A, F, C$) is convex.
\end{remark} We begin with the following obvious result, whose proof is based on the properties of the metric projection $P$ onto the closed convex set $C:$ \begin{lemma}\label{L3.1} $x^* \in$ Sol($A, F, C$) if and only if it is a solution to the equation \begin{equation} \label{Eq1} Fx^* = P(Fx^* - \mu Ax^*), \end{equation} where $\mu$ is an arbitrary fixed positive number. \end{lemma} In what follows, we assume that Assumption (A3) is satisfied. Clearly, Sol($A,F,C$) as well as $\mathcal{C}$ are closed subsets.\\ Since $A$ is $\lambda$-strongly monotone and $L_A$-Lipschitz continuous, its inverse $\mathcal{A}:= A^{-1}$ is $\frac{1}{\lambda}$-Lipschitz continuous and $\frac{\lambda}{L_A^2}$-strongly monotone. Thus, the following bilevel variational inequality problem: \begin{equation}\label{BVI} \text{Find }u \in \mathcal{C}, \text{ such that, } \langle \mathcal{A}u, v - u \rangle \geq 0, \; \forall v \in \mathcal{C} \tag{BVI} \end{equation} admits a unique solution, denoted by $u^\dagger$. Let $x^\dagger := \mathcal{A}u^\dagger \in {\rm Sol}(A,F,C).$ \vskip0.1cm For each $\alpha >0,$ we define the operator $F_\alpha := F + \alpha I,$ where $I$ is the identity operator in $H.$ Clearly, $F_\alpha$ is $(L_F + \alpha)$-Lipschitz continuous, and the couple ($A, F_\alpha$) is $\alpha \lambda$-strongly monotone.
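The fixed-point characterization of Lemma \ref{L3.1} is easy to probe numerically: a point solves the GVI exactly when the residual $r(x) := Fx - P(Fx - \mu Ax)$ vanishes. Below is a minimal sketch in Python; the toy data ($A = F = I$ on $\mathbb{R}^2$, $C$ the closed unit ball, $\mu = 0.5$) are illustrative choices only, not part of the examples studied later.

```python
import numpy as np

def proj_ball(u):
    # Metric projection P onto the closed unit ball C = B[0, 1]
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

def residual(x, A, F, mu=0.5):
    # Residual of Lemma 3.1: r(x) = Fx - P(Fx - mu*Ax); r(x) = 0 iff x in Sol(A, F, C)
    return F(x) - proj_ball(F(x) - mu * A(x))

# Toy data (illustrative): A = F = identity on R^2, so the GVI reduces to VI(I, C)
A = lambda x: x
F = lambda x: x

print(np.linalg.norm(residual(np.zeros(2), A, F)))           # 0.0: x* = 0 solves the GVI
print(np.linalg.norm(residual(np.array([2.0, 0.0]), A, F)))  # 1.0: nonzero residual away from x*
```

The same residual, with $F$ replaced by the regularized operator $F_\alpha$, drives the neural network introduced later in this section.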
According to Theorem 2.2, the so-called regularized general variational inequality GVI($A, F_\alpha, C$) admits a unique solution, denoted by $x_\alpha.$ \vskip0.1cm {\bf Lemma 3.2}\label{lemma1h} Under Assumption \ref{assumption1}, the net $\{x_\alpha\}$ possesses the following properties: \begin{itemize} \item[(i)] boundedness on $(0, \infty);$ \item[(ii)] $\lim_{\alpha \to 0} x_\alpha = x^\dagger \in {\rm Sol}(A, F, C),$ where $x^\dagger=\mathcal A u^\dagger$ and $u^\dagger$ is the unique solution of the bilevel variational inequality \eqref{BVI}; \item[(iii)] There exists $M > 0$ such that for all $\alpha, \beta > 0,$ the estimate $\|x_\alpha - x_\beta \| \leq \frac{M |\alpha -\beta|}{\beta}$ holds. \end{itemize} \begin{proof} (i) Let $x^* \in {\rm Sol}(A, F, C)$. Since $x_\alpha$ is the unique solution of the general variational inequality GVI($A, F_\alpha, C$), whose couple $(A, F_\alpha)$ is $\alpha\lambda$-strongly monotone, according to Lemma \ref{lemma1}-(a), we have \begin{equation}\label{4} \left\langle F_\alpha x^*- f, Ax^*-Ax_\alpha \right\rangle +\left\langle f-F_\alpha x_\alpha, Ax^* \right\rangle \geq \lambda \alpha \| x^*-x_\alpha \|^2 \quad \forall f\in C. \end{equation} Taking $f=Fx^*$ in \eqref{4}, we get \begin{equation}\label{5} \alpha \left\langle x^*, Ax^*-Ax_\alpha \right\rangle +\left\langle Fx^* -F_\alpha x_\alpha, Ax^* \right\rangle \geq \lambda \alpha \| x^*-x_\alpha \|^2. \end{equation} Noting that $ \left\langle Fx^* -F_\alpha x_\alpha, Ax^* \right\rangle \leq 0 $, from \eqref{5} we arrive at \begin{equation}\label{6} \left\langle x^*, Ax^*-Ax_\alpha \right\rangle \geq \lambda \| x^*-x_\alpha \|^2. \end{equation} Using the Lipschitz continuity of $A$, we get \begin{equation*} \| x^*-x_\alpha \| \leq \frac {L_A}{\lambda } \|x^* \|. \end{equation*} Thus the net $\{x_\alpha \}$ is bounded on $(0,\infty)$.\\ (ii) Let $\hat x$ be a weak cluster point of $\{x_\alpha\}$.
There exists a subsequence $\{x_{\alpha_k}\} \subset \{x_\alpha \}$ satisfying $ x_{\alpha_k} \rightharpoonup \hat x$. Using the definition of $x_{\alpha_k} \in {\rm Sol}(A, F_{\alpha_k},C)$ and Lemma \ref{lemma1}-(a), we have \begin{equation}\label{7} \left \langle Ay-Ax_{\alpha_k}, F y +\alpha_k y -f \right \rangle + \left \langle Ay, f-F x_{\alpha_k} -\alpha_k x_{\alpha_k} \right \rangle \geq 0 \quad \forall y\in H, f\in C. \end{equation} In \eqref{7}, taking the limit as $k\to \infty$ and using the weak-to-weak continuity of $A, F$, we get \begin{equation}\label{8} \left \langle Ay-A\hat x, F y -f \right \rangle + \left \langle Ay, f-F \hat x \right \rangle \geq 0 \quad \forall y\in H, f\in C. \end{equation} We have $F_{\alpha_k} x_{\alpha_k} \in C$ for all $k \geq 0$ and $F_{\alpha_k} x_{\alpha_k} =Fx_{\alpha_k}+\alpha_kx_{\alpha_k} \rightharpoonup F \hat x$. Moreover, the set $C$, being closed and convex, is also weakly closed. Hence, \begin{equation}\label{haj1} F \hat x \in C. \end{equation} Lemma \ref{lemma1}-(b), together with \eqref{8} and \eqref{haj1}, implies that $\hat x \in \text{Sol}(A,F,C)$. On the other hand, from \eqref{6}, it follows that \begin{equation}\label{hai1} \left\langle x^*, Ax^*-A\hat x \right\rangle \geq 0 \quad \forall x^* \in {\rm Sol}(A,F,C). \end{equation} Recall that the strongly monotone variational inequality problem VI($\mathcal A, \mathcal C$) admits a unique solution $u^\dagger$.\\ From \eqref{hai1}, we see that $\hat u:= A \hat x$ is a solution of the problem \begin{equation}\label{gf1} \text{find } \hat u \in \mathcal C \text{ such that } \left\langle \mathcal{A} u,u- \hat u \right\rangle \geq 0 \quad \forall u \in \mathcal C. \end{equation} By the Minty lemma \cite[Lemma 1.5]{Kinderlehrer}, this problem is equivalent to the problem VI($\mathcal A, \mathcal C$), which admits a unique solution $u^\dagger$.
Thus, $ \hat u \equiv u^\dagger;$ the net $\{x_\alpha\}$ has a unique weak cluster point, and hence $x_\alpha \rightharpoonup x^\dagger =\mathcal A u^\dagger$. Finally, in \eqref{6}, taking $x^* = x^\dagger$ and letting $\alpha \to 0$, we get $x_\alpha \to x^\dagger.$ (iii) From the definition of $x_\alpha$, we have \begin{equation}\label{9} \left\langle F_\beta x_\beta- F_\alpha x_\alpha ,A x_\alpha \right\rangle \geq 0. \end{equation} Similarly, \begin{equation}\label{10} \left\langle F_\alpha x_\alpha -F_\beta x_\beta ,A x_\beta \right\rangle \geq 0. \end{equation} Adding \eqref{9} and \eqref{10} yields \begin{equation}\label{11} \left\langle F_\beta x_\beta- F_\beta x_\alpha , A x_\alpha -Ax_\beta \right\rangle + \left\langle F_\beta x_\alpha- F_\alpha x_\alpha , A x_\alpha -Ax_\beta \right\rangle \geq 0. \end{equation} Using the $\beta \lambda$-strong monotonicity of the couple $(A,F_\beta)$, we get \begin{equation}\label{12} (\beta-\alpha) \left\langle x_\alpha , A x_\alpha -Ax_\beta \right\rangle \geq \beta \lambda \| x_\alpha -x_\beta \|^2.
\end{equation} Denoting $K:=\sup\{\|x_\alpha\|: \alpha>0\}$ and $M:= \frac{L_A K}{\lambda},$ from \eqref{12}, we have \begin{equation}\label{12a} \| x_\alpha -x_\beta \| \leq \frac{M}{\beta}|\beta -\alpha|. \end{equation} $\blacksquare$ \end{proof} According to Lemma \ref{L3.1}, each regularized solution $x_\alpha$ satisfies the relation $$ P(F_\alpha x_\alpha - \mu A x_\alpha) - F_\alpha x_\alpha = 0.$$ This observation leads us to the following regularized neural network \begin{equation}\label{RegNN} \begin{cases} & \dot x(t)=f(\alpha (t), \mu (t), x(t))\\ & x(0)= x_0 \in H, \end{cases} \end{equation} where $\alpha(t), \; \mu(t) : [0, \infty) \to (0, \infty)$ are continuous functions and $f(\alpha (t), \mu (t), x(t)) := P\Big(F_{\alpha(t)} x(t) -\mu (t) Ax(t) \Big)-F_{\alpha(t)} x(t)$.\\ Denote by $AC_{\mathrm{loc}}\left([0, +\infty), H \right) \; (L^1_{\mathrm{loc}}\left([0, +\infty), H \right)) $ the spaces of locally absolutely continuous (locally integrable) functions, i.e., the spaces of all absolutely continuous (integrable) functions on each finite interval $[0, T], \; T >0.$\\ A function $x: [0, +\infty) \to H$ is said to be a strong global solution of \eqref{RegNN} if the following properties hold: \begin{itemize} \item[a)] $x \in AC_{\mathrm{loc}}[0, +\infty),$ i.e., $x(t)$ is absolutely continuous on each interval $[0, T],$ for any $0<T< +\infty;$ \item[b)] Equation \eqref{RegNN} holds for almost every $t \in [0, +\infty);$ \item[c)] $x(0) = x_0.$ \end{itemize} \begin{theorem}\label{Th3.3} Let $\alpha(t), \; \mu(t): [0, \infty) \to (0, \infty)$ be given positive continuous functions. Under Assumption \ref{assumption1}, the neural network \eqref{RegNN} admits a unique solution.
\end{theorem} \begin{proof} First observe that the two functions $\alpha \longmapsto f(\alpha, \mu, x):= P(Fx+\alpha x- \mu Ax) - (Fx + \alpha x)$ and $\mu \longmapsto f(\alpha, \mu, x)$ are continuous on $[0, +\infty).$ Then, for each $t\in [0, +\infty)$, the mapping $f(\alpha(t),\mu(t),\cdot): H \to H$ is continuous. Moreover, by the nonexpansiveness of $P$ and the Lipschitz continuity of $A, \; F,$ we get $$\forall x, y \in H, \; \|f(\alpha(t),\mu(t), x)- f(\alpha(t),\mu(t), y)\| \leq M(t)\|x-y\|,$$ where $M(t):= 2L_F + 2 \alpha(t) + \mu(t)L_A \in C[0, +\infty).$ \\ Besides, the function $t\longmapsto \varphi(t):= \|f(\alpha(t), \mu(t), x^\dagger)\|,$ where $x^\dagger$ is the unique solution of the bilevel problem BVI($A,F,C$), is continuous on $[0, +\infty).$ \\ For any $x \in H,$ we have \begin{align*} \|f(\alpha(t), \mu(t), x)\| & \leq \|f(\alpha(t), \mu(t), x^\dagger)\| + \|f(\alpha(t), \mu(t), x) - f(\alpha(t), \mu(t), x^\dagger)\| \\ & \leq \varphi(t) + M(t) \|x - x^\dagger\| \\ &\leq \varphi(t)+M(t)\|x^\dagger\|+M(t)\|x\| \\ &\leq \sigma(t)(1 + \|x\|), \end{align*} where the function $\sigma(t) := \max\{\varphi(t) + M(t)\|x^\dagger\|, M(t)\}$ is also continuous on $[0, +\infty),$ hence $\sigma \in L^1_{\mathrm{loc}}[0, +\infty). $\\ Finally, from the continuity of the mapping $t \longmapsto f(\alpha(t),\mu(t),x)$ on $[0, +\infty)$ and the estimate $ \|f(\alpha(t), \mu(t), x)\| \leq \sigma(t)(1 + \|x\|),$ with $\sigma \in L^1_{\mathrm{loc}}[0, +\infty),$ which holds for all $x\in H,$ we conclude that $$\forall x \in H, \; f(\cdot, \cdot, x) \in L^1_{\mathrm{loc}}\left([0, +\infty), H\right).$$ Thus, by the Cauchy-Lipschitz-Picard theorem \cite[Prop. 6.2.1]{Haraux}, the Cauchy problem \eqref{RegNN} admits a unique global solution. $\blacksquare$ \end{proof} \begin{theorem}\label{Theorem_continuous_convergence} Let $x(t)$ be a strong global solution of \eqref{RegNN}.
Suppose that Assumption \ref{assumption1} is satisfied, $A$ is a linear, self-adjoint operator, and that $$ \lim_{t\to \infty} \alpha(t) = 0, \; \lim_{t\to \infty} \alpha(t) \mu(t) =\infty,\quad \lim_{t\to \infty} \frac{\dot \alpha (t)}{\alpha^2 (t)} =0,\quad \bigintssss_0^\infty \alpha(t) dt=\infty, $$ where $\alpha(t)$ is a given positive continuously differentiable function and $\mu(t)$ is a chosen positive continuous function. Then, $\|x(t) - x^\dagger \| \to 0,$ as $t \to \infty.$ \end{theorem} \begin{proof} According to Lemma \ref{lemma1h}(ii), $\|x_{\alpha(t)}- x^\dagger \| \to 0,$ as $t \to \infty,$ so we only need to prove that $\lim\limits_{t\to\infty} \| x(t)-x_{\alpha(t)} \|=0$. Denote $$ V(t):=\frac 1 2 \left\langle A x(t)-Ax_{\alpha(t)},x(t)-x_{\alpha(t)} \right\rangle. $$ Since $ V(t)\geq \frac 1 2 \lambda \|x(t)-x_{\alpha(t)} \|^2 $, it remains to show that $V(t) \to 0 $ as $t \to \infty.$ We have \begin{align}\label{rf1} \dot V(t) &= \frac 1 2 \left\langle A x(t)-Ax_{\alpha(t)}, \dot x(t) -\dfrac{d}{dt} x_{\alpha(t)} \right\rangle +\frac 1 2 \left\langle \dfrac{d}{dt} A x(t)-\dfrac{d}{dt}Ax_{\alpha(t)} , x(t)-x_{\alpha(t)} \right\rangle\notag\\ &= \left\langle A x(t)-Ax_{\alpha(t)}, \dot x(t) -\dfrac{d}{dt} x_{\alpha(t)} \right\rangle. \end{align} In the last equality, we use the assumption that $A$ is a linear, self-adjoint operator. By Lemma \ref{lemma1h}-(iii), the regularized solution $x_{\alpha(t)} \in {\rm Sol}(A, F_{\alpha(t)}, C)$ satisfies the relation $$\|x_{\alpha(t)} - x_{\alpha(s)}\| \leq M \left|\frac{\alpha(t)- \alpha(s)}{\alpha(t)}\right|, \; \forall t, s \in [0, +\infty), $$ hence, it is absolutely continuous due to the absolute continuity of $\alpha(t).$ Therefore, it is differentiable almost everywhere in $t.$ Obviously, \begin{equation}\label{1Th3} \left\| \frac d {dt} x_{\alpha(t)} \right\| \leq M \left| \frac {\dot \alpha(t)}{\alpha(t)} \right|. \end{equation} Denote $y(t):=P\Big(F_{\alpha(t)} x(t) -\mu (t) Ax(t) \Big)$.
Using the property of the projection, we have $$ \left\langle u-y(t),F_{\alpha(t)} x(t)-\mu(t) A x(t) -y(t) \right\rangle \leq 0 \quad \forall u \in C. $$ Taking $u=F_{\alpha(t)} x _{\alpha(t)} $, we obtain $$ \left\langle y(t)-F_{\alpha(t)} x _{\alpha(t)} ,F_{\alpha(t)} x(t)-\mu(t) A x(t) -y(t) \right\rangle \geq 0. $$ On the other hand, since $x_{\alpha(t)} $ solves the problem GVI$(A,F _{\alpha(t)},C )$, it holds that $$ \mu(t) \left\langle A x _{\alpha(t)} ,y(t)-F _{\alpha(t)} x _{\alpha(t)} \right\rangle \geq 0. $$ Combining the last two inequalities, we arrive at $$ \left\langle y(t)-F _{\alpha(t)} x _{\alpha(t)},F _{\alpha(t)} x(t)-\mu(t) [A x(t)- A x _{\alpha(t)} ]-y(t) \right\rangle \geq 0. $$ Thus, \begin{align}\label{re3} &\mu(t) \left\langle A x(t) -A x _{\alpha(t)} ,y(t) -F _{\alpha(t)} x(t) \right\rangle \leq \left\langle F _{\alpha(t)} x(t)-y(t),F _{\alpha(t)} x(t) -F _{\alpha(t)} x _{\alpha(t)} \right\rangle+ \notag\\ & -\mu(t) \left\langle A x(t) -A x _{\alpha(t)} , F _{\alpha(t)} x(t) - F _{\alpha(t)} x _{\alpha(t)} \right\rangle - \|y(t) -F _{\alpha(t)} x(t) \| ^2 \notag\\ & \leq -\mu(t) \alpha(t) \lambda \| x(t)-x _{\alpha(t)} \| ^2 -\frac 1 2 \|y(t) -F _{\alpha(t)} x(t) \| ^2+ \frac 1 2 \|F _{\alpha(t)} x(t) -F _{\alpha(t)} x _{\alpha(t)} \| ^2 \notag\\ &\leq - \left( \mu(t) \alpha(t) \lambda -\frac 1 2 (L_F+\alpha(t))^2 \right) \| x(t)-x _{\alpha(t)} \| ^2 - \frac 1 2 \|y(t) -F _{\alpha(t)} x(t) \| ^2. \end{align} Combining \eqref{rf1} and \eqref{re3}, we have \begin{align}\label{re4} \dot V(t)&\leq \left\langle A x(t) -A x _{\alpha(t)} ,y(t) -F _{\alpha(t)} x(t) \right\rangle + ML_A\left| \frac{\dot \alpha(t)}{\alpha(t)}\right| \| x(t)-x _{\alpha(t)} \| \notag\\ &\leq - \left( \alpha(t) \lambda -\frac 1 {2\mu(t)} (L_F+\alpha(t))^2 \right) \| x(t)-x _{\alpha(t)} \| ^2 - \frac 1 {2\mu(t)} \|y(t) -F _{\alpha(t)} x(t) \| ^2+\notag\\ &+ML_A\left| \frac{\dot \alpha(t)}{\alpha(t)}\right| \| x(t)-x _{\alpha(t)} \|.
\end{align} Since $\alpha (t) \mu (t) \to +\infty$ as $t\to \infty$, without loss of generality, we may assume that $\alpha(t) \lambda -\frac 1 {2\mu(t)} (L_F+\alpha(t))^2 >0$ for all $t\geq 0.$ Thus, using the Lipschitz continuity and the strong monotonicity of $A$, from \eqref{re4} we get \begin{align}\label{ret2} \dot V(t) \leq - \left( \frac {2\alpha(t) \lambda} {L_A} -\frac {(L_F+\alpha(t))^2} {L_A\mu(t)} \right) V(t) + ML_A\left| \frac{\dot \alpha(t)}{\alpha(t)}\right|\sqrt{\frac{2V(t)}{\lambda}}. \end{align} From this differential inequality, we obtain $$ \sqrt{V(t)} \leq \frac{\bigints_0^t \left\{ \exp \left\{ \bigintss_0^u \left( \frac {\alpha(s) \lambda} {L_A} -\frac {(L_F+\alpha(s))^2} {2L_A\mu(s)} \right) ds \right\} ML_A\left| \frac{\dot \alpha(u)}{\alpha(u)}\right|\sqrt{\frac{1}{2\lambda}} \right\}du +\sqrt{V(0)} }{ \exp \left\{ \bigints_0^t \left( \frac {\alpha(u) \lambda} {L_A} -\frac {(L_F+\alpha(u))^2} {2L_A\mu(u)} \right) du \right\} }. $$ Since $\alpha(t) \mu(t) \to \infty$ as $t\to \infty$, there exists $t_0>0$ such that $$ \frac {\alpha(u) \lambda} {L_A} -\frac {(L_F+\alpha(u))^2} {2L_A\mu(u)} \geq \frac {\alpha(u) \lambda} {2L_A} \quad \forall u\geq t_0. $$ Moreover, we have $\int_0^{\infty} \alpha (t) dt =\infty$, hence $$ \lim_{t\to \infty} \exp \left\{ \bigintss_0^t \left( \frac {\alpha(u) \lambda} {L_A} -\frac {(L_F+\alpha(u))^2} {2L_A\mu(u)} \right) du \right\} =\infty. $$ Now, it is sufficient to show that $$ L:=\lim_{t\to \infty} \frac{\bigints_0^t \left\{ \exp \left\{ \bigintss_0^u \left( \frac {\alpha(s) \lambda} {L_A} -\frac {(L_F+\alpha(s))^2} {2L_A\mu(s)} \right) ds \right\} ML_A\left| \frac{\dot \alpha(u)}{\alpha(u)}\right|\sqrt{\frac{1}{2\lambda}} \right\}du}{\exp \left\{ \bigints_0^t \left( \frac {\alpha(u) \lambda} {L_A} -\frac {(L_F+\alpha(u))^2} {2L_A\mu(u)} \right) du \right\}}=0. $$ If the numerator is finite, we get the desired result.
In the opposite case, applying L'H\^{o}pital's rule, we have \begin{align*} L&=\lim_{t\to \infty} \frac{ \exp \left\{ \bigints_0^t \left( \frac {\alpha(s) \lambda} {L_A} -\frac {(L_F+\alpha(s))^2} {2L_A\mu(s)} \right) ds \right\} ML_A\left| \frac{\dot \alpha(t)}{\alpha(t)}\right|\sqrt{\frac{1}{2\lambda}} }{\exp \left\{ \bigints_0^t \left( \frac {\alpha(u) \lambda} {L_A} -\frac {(L_F+\alpha(u))^2} {2L_A\mu(u)} \right) du \right\} \left( \frac {\alpha(t) \lambda} {L_A} -\frac {(L_F+\alpha(t))^2} {2L_A\mu(t)} \right) }\\ &=\lim_{t\to \infty} \frac{ML_A^2|\dot \alpha(t)|}{\lambda \sqrt{2\lambda}\alpha^2(t)}\\ &=0 \end{align*} and this completes the proof. $\blacksquare$ \end{proof} \begin{remark} In Theorem \ref{Theorem_continuous_convergence}, one can choose $\alpha(t) = (1+t)^{-p}$ and $\mu(t) = (1+t)^{q},$ where $0<p< q< 1.$ \end{remark} Next, we assume that $(A,F)$ is $\gamma$-strongly monotone. In this case, we do not need to regularize the mapping $F$, and \eqref{RegNN} becomes \begin{equation}\label{RegNN2} \begin{cases} \dot x(t)=P(Fx(t)-\mu A x(t) )-Fx(t),\\ x(0)\in H. \end{cases} \end{equation} We will establish the convergence rate of the solution $x(t)$ of \eqref{RegNN2} to the unique solution $x^\dagger$ of \ref{GVI}. \begin{theorem}\label{Theo_rate} Let $x(t)$ be a strong global solution of \eqref{RegNN2}. Suppose that $A,F$ satisfy all the conditions in Theorem \ref{Theorem_continuous_convergence}; moreover, $(A,F)$ is $\gamma$-strongly monotone and $\mu > \frac 1 {2\gamma} L_F^2$. Then, we have $$ \|x(t) -x^\dagger \| \leq \sqrt{\frac{L_A}{\lambda}} \| x(0)-x^\dagger \| e^{-ct}, $$ where $c=\frac{1}{L_A}\left(\gamma -\frac 1 {2\mu} L_F^2\right)>0$. \end{theorem} \begin{proof} Denote $$ W(t)=\frac 1 2 \left\langle Ax(t)-Ax^\dagger,x(t)-x^\dagger \right\rangle , y(t)=P(Fx(t)-\mu Ax(t)).
$$ Similarly to \eqref{re3}, we obtain \begin{align}\label{re3a} \left\langle A x(t) -A x ^\dagger ,y(t) -F x(t) \right\rangle\leq - \left( \gamma -\frac 1 {2\mu} L_F^2 \right) \| x(t)-x^\dagger \| ^2 - \frac 1 {2\mu} \|y(t) -F x(t) \| ^2. \end{align} Thus, instead of \eqref{re4}, we obtain \begin{align}\label{re4a} \dot W(t)&\leq- \frac{2}{L_A}\left( \gamma -\frac 1 {2\mu} L_F^2 \right) W(t) - \frac 1 {2\mu} \|y(t) -F x(t) \| ^2. \end{align} This implies that $$ W(t)\leq W(0) e^{-\frac{2}{L_A}\left( \gamma -\frac 1 {2\mu} L_F^2\right)t}. $$ Since $A$ is $\lambda$-strongly monotone and $L_A$-Lipschitz continuous, we have $$ \lambda \| x(t)-x^\dagger \| ^2 \leq \left\langle Ax(t)-A x^\dagger,x(t)-x^\dagger \right\rangle \leq L_A \| x(t)-x^\dagger \| ^2 $$ and hence $$ \| x(t)-x^\dagger \| \leq \sqrt{\frac{L_A}{\lambda}} \| x(0)-x^\dagger \| e^{-ct}, $$ where $c:= \frac{1}{L_A}\left(\gamma -\frac 1 {2\mu} L_F^2\right) >0$. \; $\blacksquare$ \end{proof} \section{Discretization of the neural network} An explicit Euler scheme applied to the Cauchy problem \eqref{RegNN} gives \begin{equation}\label{Discret} \begin{cases} & y^{k}=P \Big(F_{\alpha_k} x^k -\mu_k Ax^k \Big)\\ &x^{k+1} = x^k +h_k (y^k-F_{\alpha_k}x^k )\\ & x^0 \in H. \end{cases} \end{equation} \begin{theorem}\label{theo2discret} Suppose that $A,F$ satisfy all the conditions in Theorem \ref{Theorem_continuous_convergence} and $$ \sum_{k=0}^\infty h_k\alpha_k =\infty; \lim_{k\to\infty} \mu_k h_k =0; \lim_{k\to\infty} \mu_k \alpha_k =\infty; \lim_{k\to\infty} \frac{|\alpha_k -\alpha_{k+1} |}{h_k \alpha_k^2} =0; \lim_{k\to\infty} \alpha_k=0. $$ Then the sequence $\{x^k\}$ generated by \eqref{Discret} converges to the solution $x^\dagger$ of the GVI($A, F, C$).
\end{theorem} \begin{proof} Similarly to \eqref{re3}, we get \begin{equation}\label{fe1} \left\langle A x^k -A x _{\alpha_k} ,y^k -F _{\alpha_k} x^k \right\rangle \leq - \left( \alpha_k \lambda -\frac 1 {2\mu_k} (L_F+\alpha_k)^2 \right) \| x^k-x _{\alpha_k} \| ^2 - \frac 1 {2\mu_k} \|y^k -F _{\alpha_k} x^k \| ^2. \end{equation} Denote $U_{k}:=\left\langle Ax^{k}-A x_{\alpha_{k}},x^k-x_{\alpha_k}\right\rangle$. We have \begin{align}\label{aq1} U_{k+1}-U_k&=\left\langle A x^{k+1}-Ax _{\alpha_{k+1} } , x^{k+1}-x^k -(x _{\alpha_{k+1}}-x _{\alpha_{k}} ) \right\rangle+ \notag \\ &+ \left\langle A x^{k+1}-A x^k-(Ax _{\alpha_{k+1} } -A x _{\alpha_{k} } ),x^k -x _{\alpha_{k}} \right\rangle. \end{align} Since $A$ is a linear, self-adjoint and Lipschitz continuous operator, it holds that \begin{align}\label{aq2} & \left\langle A x^{k+1}-Ax _{\alpha_{k+1} } , x^{k+1}-x^k -(x _{\alpha_{k+1}}-x _{\alpha_{k}} ) \right\rangle \leq \left\langle A x^k -A x _{\alpha_k} ,x^{k+1}-x^k \right\rangle +\notag \\ & + L_A \| x^{k+1}-x _{\alpha_{k+1} } \| \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| + L_A \| x^{k+1}-x^k \|^2 +L_A \| x^{k+1}-x^k \| \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| \notag \\ &\leq \left\langle A x^k -A x _{\alpha_k} ,x^{k+1}-x^k \right\rangle + 2L_A \| x^{k+1}-x^k \| \|x _{\alpha_{k+1}}-x _{\alpha_{k}} \|+ L_A \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| ^2 + \notag \\ &+L_A \| x^k -x _{\alpha_{k}} \| \| x _{\alpha_{k+1}} -x _{\alpha_{k}} \| + L_A \| x^{k+1}-x^k \| ^2. \end{align} and \begin{align}\label{aq3} \left\langle A x^{k+1}-A x^k-(Ax _{\alpha_{k+1} } -A x _{\alpha_{k} } ),x^k -x _{\alpha_{k}} \right\rangle & \leq \left\langle A x^k -A x _{\alpha_k} ,x^{k+1}-x^k \right\rangle + \notag \\ &+L_A \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| \| x^k-x _{\alpha_{k}} \| . 
\end{align} Combining \eqref{fe1}, \eqref{aq1}, \eqref{aq2} and \eqref{aq3}, we get \begin{align}\label{aq4} & U_{k+1}-U_k \leq 2\left\langle A x^k -A x _{\alpha_k} ,x^{k+1}-x^k \right\rangle + 2L_A \| x^{k+1}-x^k \| \|x _{\alpha_{k+1}}-x _{\alpha_{k}} \| + \notag \\ &+2L_A \| x^k -x _{\alpha_{k}} \| \| x _{\alpha_{k+1}} -x _{\alpha_{k}} \| + L_A \| x^{k+1}-x^k \| ^2+ L_A \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| ^2 \notag \\ & \leq - \left( 2h_k \alpha_k \lambda -\frac {h_k} {\mu_k} (L_F+\alpha_k)^2 \right) \| x^k-x _{\alpha_k} \| ^2 - \frac {h_k} {\mu_k} \|y^k -F _{\alpha_k} x^k \| ^2 + \notag \\ &+2L_A \| x^k -x _{\alpha_{k}} \| \| x _{\alpha_{k+1}} -x _{\alpha_{k}} \| + L_A \| x^{k+1}-x^k \| ^2+ L_A \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| ^2+ \notag \\ &+2L_A \| x^{k+1}-x^k \| \|x _{\alpha_{k+1}}-x _{\alpha_{k}} \|. \end{align} Since $\mu_k \alpha_k \to \infty$ as $k\to \infty$, without loss of generality, we may assume that $$ 2h_k \alpha_k \lambda -\frac {h_k} {\mu_k} (L_F+\alpha_k)^2 \geq h_k \alpha_k \lambda\quad \forall k\geq 0. $$ On the other hand, applying the Cauchy inequality, we get $$ 2L_A \| x^k -x _{\alpha_{k}} \| \| x _{\alpha_{k+1}} -x _{\alpha_{k}} \| \leq \frac 1 2 h_k \alpha_k \lambda \| x^k -x _{\alpha_{k}} \| ^2 + \frac {2L_A^2} {h_k \alpha_k \lambda}\| x _{\alpha_{k+1}} -x _{\alpha_{k}} \|^2. 
$$ and $$ 2L_A \| x^{k+1}-x^k \| \|x _{\alpha_{k+1}}-x _{\alpha_{k}} \| \leq L_A \| x^{k+1}-x^k \| ^2 + L_A \|x _{\alpha_{k+1}}-x _{\alpha_{k}} \|^2 $$ Thus, \begin{align}\label{aq5} U_{k+1}-U_k & \leq -\frac 1 2 h_k \alpha_k \lambda \| x^k-x _{\alpha_k} \| ^2 - \left( \frac {h_k} {\mu_k} -2L_Ah_k^2 \right) \|y^k -F _{\alpha_k} x^k \| ^2 + \notag \\ &+ \left( 2L_A +\frac {2L_A^2} {h_k \alpha_k \lambda} \right) \| x _{\alpha_{k+1}}-x _{\alpha_{k}} \| ^2 \notag \\ & \leq -\frac 1 2 h_k \alpha_k \lambda \| x^k-x _{\alpha_k} \| ^2 - \left( \frac {h_k} {\mu_k} -2L_Ah_k^2 \right) \|y^k -F _{\alpha_k} x^k \| ^2 + \notag \\ &+ M^2 \left( 2L_A +\frac {2L_A^2} {h_k \alpha_k \lambda} \right)\frac {| \alpha_{k+1}-\alpha_{k}|^2}{\alpha_k^2}. \end{align} Since $\mu_k h_k \to 0$, without loss of generality, we may assume that $\frac {h_k} {\mu_k} -2L_Ah_k^2 \geq 0 $ for all $k\geq 0.$ On the other hand, since $A$ is $L_A$-Lipschitz continuous, we have $ U_k \leq L_A \| x^k-x_{\alpha_k} \|^2$. Combining these facts with \eqref{aq5}, we obtain \begin{align}\label{aq6} U_{k+1} & \leq \left(1-\frac {h_k \alpha_k \lambda} {2L_A} \right) U_k + M^2 \left( 2L_A +\frac {2L_A^2} {h_k \alpha_k \lambda} \right)\frac {| \alpha_{k+1}-\alpha_{k}|^2}{\alpha_k^2}. \end{align} Since $$ \sum_{k=0}^\infty h_k \alpha_k =\infty,\quad \lim_{k\to \infty} \frac{ | \alpha_{k+1}-\alpha_{k}|^2 }{\alpha_k^4 h_k^2}=0, $$ applying Lemma \ref{lemma31} with $$ \theta_k:= \frac {h_k \alpha_k \lambda} {2L_A};\quad \zeta_k := M^2 \left( 2L_A +\frac {2L_A^2} {h_k \alpha_k \lambda} \right)\frac {| \alpha_{k+1}-\alpha_{k}|^2}{\alpha_k^2}, $$ we get $U_k\to 0$. 
Finally, since $A$ is strongly monotone, we have $U_k \geq \lambda \| x^k-x_{\alpha_k}\|^2$, and therefore, $\| x^k-x_{\alpha_k}\| \to 0.$ According to Lemma \ref{lemma1h}-(ii), $x_{\alpha_k} \to x^\dagger$ as $k \to\infty,$ hence $\lim\limits_{k\to \infty} x^k = x^\dagger.$ $\blacksquare$ \end{proof} \begin{remark}\label{remark1h} An example of $\{\alpha_k\}$, $\{\mu_k\}$, $\{h_k\}$ satisfying the conditions in Theorem \ref{theo2discret} is $$ \alpha_k=\frac 1 {(k+1)^p},\ \mu_k=(k+1)^q,\ h_k=\frac 1 {(k+1)^r},\quad 0<p<q<r, \; p+r<1. $$ \end{remark} \section{Numerical experiments} \begin{example} Let $ H = \ell^2$ and $m\geq 2$ be a fixed positive integer. For any $x = (x_1, x_2, \ldots) \in \ell^2,$ we define the operators $A x = (x_1, x_2,\frac{4}{3}x_3 , \ldots, \frac{n+1}{n}x_n, \ldots)$ and $Fx = (-x_2, x_1, 0, \ldots ).$ It is easily seen that $A$ is Lipschitz continuous and strongly monotone with $L_A = 2, \; \lambda =1.$ Further, $F$ is a Lipschitz continuous operator with $L_F = 1.$ Let $\ell^2 \ni x^n =(x_1^n, x_2^n, \ldots) \rightharpoonup x=(x_1, x_2, \ldots) $ and $u = (1,0, \ldots), \; v =(0, -1, 0, \ldots).$ Then $x_1^n = \langle x^n, u \rangle \to \langle x, u \rangle = x_1, \;\; -x_2^n = \langle x^n,v \rangle \to \langle x, v\rangle = -x_2 \; (n \to \infty). $ The fact that $F(x^n)= (-x_2^n, x_1^n, 0, \ldots) \to (-x_2, x_1, 0, \ldots) = F(x)$ as $n \to \infty$ shows that $F$ is sequentially weak-to-strong continuous, hence it is weak-to-weak continuous. Finally, for all $x=(x_1, x_2, \ldots, x_n, \ldots)$ and $y=(y_1, y_2, \ldots, y_n, \ldots),$ one has $\langle Ax - Ay, Fx -Fy\rangle = 0,$ i.e., the couple ($A, F$) is monotone.\\ Define a set $C= \{x \in \ell^2: x_i = 0, \; i=1, \ldots, m\}$.\\ Observe that $Fx \in C$ if and only if $x = (0, 0, x_3,\ldots).$ For such $x$ and for any $y \in C,$ we have $F x =0 ,$ and $$T(x, y):= \langle Ax, y - Fx\rangle = \langle Ax, y \rangle= \sum_{j= m+1}^\infty \frac{j+1}{j}x_jy_j.
$$ We show that $T(x, y) \geq 0$ for all $y\in C$ if and only if $x_j = 0, \; j \geq m+1.$ \\ Indeed, if the latter condition holds then $T(x, y) =0$ trivially for all $y \in C.$\\ Conversely, suppose that there exists $\tilde{x}=(0,0,\tilde{x}_3, \ldots, \tilde{x}_j, \ldots)$ such that $\tilde{x}_j \neq 0$ for some $ j \geq m+1,$ while $T(\tilde{x},y) \geq 0$ for all $y \in C.$ Take $\tilde{y} \in C$ with $\tilde{y}_j := - \tilde x_j $ and $\tilde{y}_k := 0$ for all $k\neq j$. Then, $T(\tilde{x}, \tilde{y})= -\frac{j+1}{j}(\tilde{x}_j)^2 <0,$ which is a contradiction. Hence the solution set Sol($A, F, C$) consists of all $x=(0, 0, x_3, \ldots, x_m, 0, \ldots).$ Clearly, Sol($A, F, C$) is a nonempty, closed, convex subset of $\ell^2.$ According to Remark \ref{CondA3}, $A$ is weak-to-weak continuous, and the subset $\mathcal{C}:= A({\rm Sol}(A,F,C))$ is convex.\\ Thus, all the assumptions (A1)-(A3) are fulfilled.\\ Clearly, $\mathcal{A}y = (y_1, y_2, \frac{3}{4}y_3,\ldots, \frac{n}{n+1}y_n, \ldots).$ Let $u^\dagger:= (0, 0, u_3, \ldots,u_m, 0, \ldots) \in \mathcal{C}$ be the unique solution to the BVI $$\langle \mathcal{A}u^\dagger, v- u^\dagger \rangle \geq 0, \; \forall v \in \mathcal{C}.$$ Letting $v =0 \in \mathcal{C},$ we get $$-\langle \mathcal{A}u^\dagger, u^\dagger \rangle = -\frac{3}{4}u_3^2-\ldots-\frac{m}{m+1}u_m^2 \geq 0.
$$ Thus $u^\dagger = 0,$ hence $x^\dagger = \mathcal{A} u^\dagger =0.$\\ The regularized neural network \eqref{RegNN} in this case is of the form: \begin{equation}\label{RegNN1} \begin{cases} &\dot{x}_1(t) = x_2(t) - \alpha(t) x_1(t),\\ &\dot{x}_2(t) = -x_1(t) - \alpha(t) x_2(t),\\ &\dot{x}_k(t)= - \alpha(t) x_k(t), \; k= 3,\ldots,m ,\\ &\dot{x}_j(t) = - \frac{j+1}{j}\mu(t) x_j(t), \; j \geq m+1,\\ & x(0) = x^0 \in \ell^2. \end{cases} \end{equation} Integrating \eqref{RegNN1} gives \begin{equation*} \begin{cases} &{x}_1(t) = e^ {-\int_0^t \alpha (u)du } (x^0_1 \cos t +x^0_2 \sin t ),\\ &{x}_2(t) = e^ {-\int_0^t \alpha (u)du } (x^0_2 \cos t -x^0_1 \sin t ),\\ &x_k(t)= x^0_k e^ {-\int_0^t \alpha (u)du }, \; k= 3,\ldots,m ,\\ &x_j(t) = x^0_j e^ {-\frac{j+1}{j}\int_0^t \mu (u)du }, \; j \geq m+1. \end{cases} \end{equation*} Further, \begin{equation*} \|x(t)- x^\dagger\|^2 = \|x(t)\|^2 = \sum\limits_{i=1}^m|x_i^0|^2 e^{-2\int_0^t \alpha(u)du} + \sum\limits_{j=m+1}^\infty |x_j^0|^2 e^{-2\frac{j+1}{j}\int_0^t\mu(u) du}. \end{equation*} Since $\frac{j+1}{j} \geq 1,$ the last identity shows that $$\|x(t) - x^\dagger\|^2 \leq \sum\limits_{i=1}^m|x_i^0|^2 e^{-2\int_0^t \alpha(u)du} + \Big(\|x^0\|^2 - \sum\limits_{i=1}^m|x_i^0|^2 \Big)e^{-2\int_0^t \mu(u) du}.$$ If $\int_0^\infty \alpha(u)du = \int_0^\infty \mu(u)du = +\infty$ then $\|x (t) - x^\dagger \| \to 0$ as $t$ approaches $\infty.$ \vskip0.1cm \end{example} \begin{example}\label{ex2} Let $H = \mathbb{R}^3$ and $C = B[0, 1]$, the closed unit ball centered at $0.$ Define a nonsingular matrix $$ B=\begin{pmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1\\ \end{pmatrix} $$ and let $A = B^TB;$ then $A$ is a symmetric positive definite matrix.
A simple calculation shows that $$ A=\begin{pmatrix} 1 &1 &1\\ 1 & 2 & 2\\ 1 & 2 & 3 \\ \end{pmatrix},\,\,\,\,\,\, A^{-1} = \begin{pmatrix} 2 & -1 &0\\ -1 & 2 &-1\\ 0 & -1 & 1\\ \end{pmatrix} $$ Let $Fx:=A^{-1}P x,$ where $Px$ is the metric projection of $x$ onto $C.$ Then, \begin{equation*} Fx = \left \{ \begin{array}{ll} A^{-1} x, \; \; \|x\| \leq 1, \\ \frac{A^{-1}x}{\|x\|}, \; \;\; \|x\| >1. \end{array} \right . \end{equation*} Since the metric projection $P$ is firmly nonexpansive and the matrix $A$ is symmetric, one gets $$\langle Ax- Ay, Fx - Fy\rangle = \langle x- y, Px - Py\rangle \geq \|P x -P y\|^2 \geq 0, $$ for all $x, y \in H,$ which yields the monotonicity of the couple ($A, F$).\\ Clearly, $x = 0$ is a solution of the GVI($A, F, C$). Suppose $x^* $ is another solution; then $Fx^* \in C, \; \langle Ax^*, y - Fx^*\rangle \geq 0 $ for all $y \in C.$ In particular, setting $y = 0,$ we find $-\langle Ax^*, Fx^*\rangle \geq 0,$ or equivalently, $\langle x^*, Px^*\rangle \leq 0,$ which implies that $x^* =0.$ Thus Sol($A, F, C$) $= \{0\}$ and $x^\dagger = 0.$ Applying the algorithm \eqref{Discret}, we get \begin{equation}\label{Discret1} \begin{cases} &z^k = A^{-1}P x^k\\ & y^{k}=P \Big(z^k + \alpha_k x^k -\mu_k Ax^k \Big)\\ &x^{k+1} = x^k +h_k (y^k-z^k - \alpha_k x^k )\\ & x^0 \in \mathbb{R}^3. \end{cases} \end{equation} \vskip0.1cm The parameters of the algorithms are chosen as follows: \begin{itemize} \item[(i)] In Algorithm \eqref{Discret}, $\alpha_k=\frac 1 {(k+1)^p}$, $\mu_k=(k+1)^q,$ $h_k=\frac 1 {(k+1)^r}$. We test the algorithm with different $p,q,r$ satisfying $0<p<q<r, \; p+r<1$. \item[(ii)] In the dynamical system \eqref{RegNN}, $\alpha (t)=(t+1)^{-0.2} $, $\mu (t)=(t+1)^{0.4}.$ \end{itemize} The starting points are randomly generated. The results are presented in Figures \ref{fig2} and \ref{fig1}.
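For reference, scheme \eqref{Discret1} takes only a few lines to implement. The NumPy sketch below is illustrative: the exponents $p=0.2$, $q=0.4$, $r=0.5$, the iteration count, and the starting point are arbitrary admissible choices, not the exact data behind the reported figures.

```python
import numpy as np

B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])
A = B.T @ B                     # symmetric positive definite matrix A = B^T B
Ainv = np.linalg.inv(A)

def proj(u):
    # Metric projection onto the closed unit ball C = B[0, 1]
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

def iterate(x0, iters=2000, p=0.2, q=0.4, r=0.5):
    # Explicit Euler scheme with alpha_k = (k+1)^-p, mu_k = (k+1)^q, h_k = (k+1)^-r
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        ak, mk, hk = (k + 1.0) ** -p, (k + 1.0) ** q, (k + 1.0) ** -r
        z = Ainv @ proj(x)                   # z^k = A^{-1} P x^k
        y = proj(z + ak * x - mk * (A @ x))  # y^k
        x = x + hk * (y - z - ak * x)        # x^{k+1}
    return x

# The unique solution x^† = 0 is a fixed point of the scheme; with the
# diminishing step sizes, convergence toward it is slow.
print(np.linalg.norm(iterate(np.array([1.0, -1.0, 0.5]))))
```

The projection onto the ball and the matrix inverse are precomputed once; only the three parameter sequences change between runs.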
\begin{figure}[!ht] \centering \includegraphics[width=9cm]{roirac.eps} \caption{Behavior of the sequence $\{x^k\}$ generated by Algorithm \eqref{Discret} in Example \ref{ex2}}\label{fig2} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=9cm]{lientuc.eps} \caption{Behavior of the function $x(t)$ generated by the dynamical system \eqref{RegNN} in Example \ref{ex2}}\label{fig1} \end{figure} \end{example} \begin{example}\label{example3} Let $C:=\{ x=(x_1,x_2,\ldots,x_n)^T \in \mathbb{R}^n: x_i\in [-1,1], \; i=1,\ldots, n \}$, $A:\mathbb{R}^n \to \mathbb{R}^n$, $A(x)=(x_1, 2x_2,\ldots,n x_n )$, $F: \mathbb{R}^n \to \mathbb{R}^n$, $F(x)=Bx$, where $B=(2n)^{-1}(b_{ij})_{n\times n}$ and $$ b_{ij}=\begin{cases} 0\text{ if } i=j,\\ i+j\text{ if } j>i,\\ -j(i+j)/i\text{ if } j<i. \end{cases} $$ It is easily seen that $A$ and $F$ are Lipschitz continuous and $A$ is $1$-strongly monotone. Moreover, since $ \left\langle Ax,Fx \right\rangle =0 $ for all $x \in \mathbb{R}^n$, we have $$ \left\langle Ax-Ay,Fx-Fy \right\rangle =0\quad \forall x,y\in \mathbb{R}^n. $$ Hence, the couple $(A,F)$ is monotone. Next, we show that Sol$(A, F, C)=\{0\}$. Indeed, suppose that $x^*$ is a solution of the problem \ref{GVI}. Thus, $ \left\langle Ax^*,y-Fx^* \right\rangle \geq 0 $ for all $y\in C$. Since $ \left\langle Ax^*,Fx^* \right\rangle =0 $, it follows that $ \left\langle Ax^*,y\right\rangle \geq 0 $ for all $y\in C.$ If $Ax^* =0$, then $x^*=0$; otherwise, taking $y=-\frac 1 {\|Ax^*\|}Ax^* \in C$, we have $ - \left\langle Ax^*,Ax^* \right\rangle \geq 0 $, a contradiction. Thus, $x^* =0$ is the unique solution of the problem \ref{GVI}. We apply the dynamical system \eqref{RegNN} to solve this problem with $\alpha (t)=(t+1)^{-0.2} $, $\mu (t)=(t+1)^{0.4}.$ The starting point is randomly generated. We implement the algorithm with different $n$. The results are presented in Figure \ref{fig3}. As we can see, in all cases, the function $x(t)$ converges to the unique solution $x^* =0$ of the problem.
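The identity $\langle Ax, Fx\rangle = 0$ used above amounts to the matrix $DB$ being skew-symmetric, where $D = \mathrm{diag}(1, 2, \ldots, n)$ represents $A$. This structural fact is easy to verify numerically; a short check (the dimension $n = 6$ is an illustrative choice):

```python
import numpy as np

def make_B(n):
    # B = (2n)^{-1}(b_ij): b_ii = 0, b_ij = i+j for j > i, b_ij = -j(i+j)/i for j < i
    B = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j > i:
                B[i - 1, j - 1] = i + j
            elif j < i:
                B[i - 1, j - 1] = -j * (i + j) / i
    return B / (2 * n)

n = 6
B = make_B(n)
D = np.diag(np.arange(1, n + 1, dtype=float))  # A(x) = (x_1, 2x_2, ..., n x_n) = Dx

M = D @ B                    # <Ax, Fx> = x^T (D B) x
print(np.allclose(M, -M.T))  # True: D B is skew-symmetric, so <Ax, Fx> = 0 for every x
```

Since the quadratic form of a skew-symmetric matrix vanishes identically, the monotonicity of the couple $(A, F)$ follows for every dimension $n$.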
\begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{fig3.eps} \caption{Behaviors of the function $x(t)$ generated by the dynamical system \eqref{RegNN} in Example \ref{example3}}\label{fig3} \end{figure} \end{example} \section{Conclusions} In this paper, based on regularization techniques, we introduce neural networks for solving general variational inequalities involving monotone couples of operators. We prove that under suitable conditions, the regularized neural network as well as its discretized version converges strongly to a certain solution of the GVI. We also give some numerical experiments, including an infinite-dimensional one, to certify the effectiveness of the proposed algorithm. {\bf Open questions.} {\it Do Theorems \ref{Theorem_continuous_convergence} and \ref{theo2discret} still hold when $A$ is not necessarily a bounded linear self-adjoint positive definite operator? } \vskip0.1cm \begin{thebibliography}{99} \bibitem{AlShejari} A.A. AlShejari, M.A. Noor, K.I. Noor, Recent developments in general quasi variational inequalities, Int. J. Anal. Appl. 22:84 (2024) \bibitem{AnhHai} P.K. Anh, T.N. Hai, Regularized dynamics for monotone inverse variational inequalities in Hilbert spaces, Optim. Eng. 25, 2295–2313 (2024) \bibitem{Brou} L.E.J. Brouwer, Beweis der Invarianz des $n$-dimensionalen Gebietes, Math. Ann. 71, 305–313 (1911) \bibitem{Haraux} A. Haraux, Systèmes dynamiques dissipatifs et applications, Recherches en Mathématiques Appliquées, Masson (1991) \bibitem{He1} B.S. He, Inexact implicit methods for monotone general variational inequalities, Math. Program., Ser. A 86, 199–217 (1999) \bibitem{HeDong} S. He, Q.-L. Dong, An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities, J. Inequal. Appl. 2018:351 (2018) \bibitem{Kinderlehrer} D. Kinderlehrer, G. Stampacchia, An introduction to variational inequalities and their applications, Academic, New York (1980) \bibitem{LiZou} X.
Li, Y.Z. Zou, Existence result and error bounds for a new class of inverse mixed quasi variational inequalities, J. Inequal. Appl. 42 (2016) \bibitem{Noor1} M.A. Noor, General variational inequalities, Appl. Math. Lett. 1, 119-121 (1988) \bibitem{PangYao} J.S. Pang, J.C. Yao, On a generalization of a normal map and equation, SIAM J. Control Optim. 33, 168–184 (1995) \bibitem{Tang} J. Tangkhawiwetkul, A neural network for solving the generalized inverse mixed variational inequality problem in Hilbert spaces, AIMS Math. 8(3), 7258-7276 (2023) \bibitem{Xu} H. Xu, Another control condition in an iterative method for nonexpansive mappings, Bull. Austral. Math. Soc. 65, 109–113 (2002) \bibitem{Zhao} Y.-B. Zhao, The iterative methods for monotone generalized variational inequalities, Optimization 42(4), 285-307 (1997) \end{thebibliography} \end{document}
\documentclass[11pt]{article} \usepackage{amssymb,amsmath,amsthm,amsfonts,mathrsfs} \usepackage{subfigure,graphicx} \usepackage{ulem} \graphicspath{{pictureSimulation/}} \usepackage{color} \usepackage{verbatim} \usepackage{float, tikz, array, caption} \usetikzlibrary{shapes,arrows} \textwidth=16.5cm \textheight=24cm \def\disp{\displaystyle} \def\baselinestretch{1.3} \oddsidemargin 0cm \headsep=-1.3cm \raggedbottom \usepackage[CJKbookmarks, colorlinks, bookmarksnumbered=true,pdfstartview=FitH,linkcolor=black]{hyperref} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{conclusion}{Conclusion}[section] \numberwithin{equation}{section} \theoremstyle{definition} \newtheorem{definition}{Definition}\newtheorem{example}{Example}[section] \newtheorem{remark}{Remark}[section] \newtheorem*{remarks}{Remarks} \newtheorem*{notation}{Notation} \def\re{\par\hang\textindent} \def\crr{\cr\noalign{\vskip2mm}} \def\dfrac{\displaystyle\frac} \newcommand{\etal}{\textit{et al.~}} \def\dref#1{(\ref{#1})} \newcommand{\R}{{\mathbb R}} \newcommand{\C}{{\mathbb C}} \def\H{{\cal H}} \def\P{\mathbb{P}} \newcommand{\N}{{\mathbb N}} \def\D{\mathcal{D}} \def\Z{\mathbb{Z}} \def\A{\mathcal{A}} \def\B{\mathcal{B}} \def\X{\mathcal{X}} \begin{document} \title{{\bf A New Adaptive Control Scheme for Unstable Heat Equation with Unknown Control Coefficient\footnote{\small This work is supported by the National Natural Science Foundation of China (No. 62273217, 12131008, 62373231) and the Fundamental Research Program of Shanxi Province (202203021223002).} }} \author{ Hongyinping Feng\footnote{\small Corresponding author. Email: [email protected].} \ and\ \ Hai-Li Du \\ {\it \small School of Mathematical Sciences} {\it \small Shanxi University, Taiyuan, Shanxi, 030006, P.R. 
China}\\ } \maketitle \begin{abstract}In this paper, we develop a novel and simple adaptive control scheme for a one-dimensional unstable heat equation with unknown control coefficient. A new state observer is designed to estimate the system state, while a new update law is devised to estimate the reciprocal of the control coefficient. In contrast with a conventional state observer, which is usually available for all controllers, the newly designed state observer depends on a special controller factorization. Very importantly, neither the unknown control coefficient nor its estimate appears in the state observer. Consequently, the stabilization of the control plant comes down to the stabilization of the state observer. In this way, the obstacles caused by the unknown control coefficient can be overcome thoroughly. As an application, the performance output tracking is also considered by the newly developed approach. When the reference signal is persistently exciting, the reciprocal of the control coefficient can be estimated effectively by the designed update law. All the aforementioned results are proved mathematically, although the resulting closed-loop system may be nonlinear. Some of them are validated visually by numerical simulations. \end{abstract} \vspace{0.3cm} \noindent {\bf Keywords:}~Adaptive control, unknown control coefficient, unstable heat equation, observer. \vspace{0.3cm} \section{Introduction} Systems with an unknown control coefficient are ubiquitous in many fields of engineering, even in the infinite-dimensional setting. However, compared with the works on finite-dimensional systems in the literature, the problems caused by an unknown control coefficient are rarely considered for infinite-dimensional systems. To the best of our knowledge, the estimation of an unknown control coefficient in unstable infinite-dimensional systems remains a challenging problem.
It is quite difficult to achieve the desired system performance when the control coefficient is unknown. In this paper, we present an adaptive approach to address the problem caused by the unknown control coefficient through the following benchmark unstable heat system: \begin{equation}\label{20237122035} \left\{\begin{array}{l} w_{t}(x,t)= w_{xx}(x,t) , \; x\in (0,1), \; t> 0,\crr w_{x} (0,t)=-qw(0,t) ,\ \ w_{x}(1,t)=bu(t) , \ \ t\ge 0, \ \ q>0,\crr y (t)= (w(0,t),w(1,t)) ,\ \ t\ge 0, \end{array}\right. \end{equation} where $w $ is the system state, $u$ is the control input, $y $ is the measurement output and $b\neq0$ is the unknown control coefficient. The model \dref{20237122035} is a general one-dimensional heat equation with boundary convection. It depicts the flow of heat in a rod that is insulated everywhere except the two ends. The convection at the left end is proportional to the temperature $w(0,t)$. It may destroy the system stability. The actuator with control coefficient $b$ is installed on the right end $x=1$. The control coefficient $b$ usually depends on many factors such as the specific heat capacity, the area of cross-section, the density as well as the thermal conductivity of the rod. It may be unknown in practical applications. More details on the physical modeling of the heat equation can be found in \cite{Hahn2012}. System \dref{20237122035} is unstable provided $q>1$ \cite{Andrey4}. Unstable heat systems have been extensively studied in recent years. Some representative works can be found in \cite{Liuw}, \cite{backsteppingheat2004}, \cite{backsteppingheat2005}, \cite{Deutscher2}, \cite{Fengheat} and the monograph \cite{Backsteppingbook}, where the partial differential equation backstepping method has been used to cope with the unstable term. Although the method of backstepping is powerful, it relies on precise information about the control coefficient, which may be unknown in practical applications.
Therefore, how to stabilize infinite-dimensional systems with unknown control coefficient, including the unstable heat system \dref{20237122035}, is still an unsolved issue. In this paper, we address the problems caused by the unknown control coefficient via the adaptive control approach. A new update law is devised to estimate the reciprocal of $b$ rather than the control coefficient $b$ itself. Meanwhile, a new state observer, which relies on neither the control coefficient nor its estimate, is designed to estimate the system state effectively. We emphasize that the state estimation does not depend on the convergence of the control coefficient estimation. As a consequence of this good characteristic of the state observer, a new adaptive control scheme is developed to stabilize the unstable heat equation \dref{20237122035} even if the control coefficient $b$ is unknown. Additionally, if the controller meets the so-called persistent excitation condition, the unknown control coefficient can be estimated effectively by the newly designed update law. We point out that the introduction of the adaptive control approach into the controller design for infinite-dimensional unstable systems is not our original creation. In \cite{Krs2}, \cite{Andrey3}, \cite{Andrey4} and the references therein, the adaptive control approach, together with the backstepping technique, has been used successfully to deal with the heat equation with an unknown unstable coefficient. We proceed as follows. In Section \ref{Obs}, we present the update law for the reciprocal of $b$ and the state observer for system \dref{20237122035}. An observer-based output feedback stabilizer is designed in Section \ref{Obserstabilizer}. As an application of the results obtained in Sections \ref{Obs} and \ref{Obserstabilizer}, we consider the performance output tracking for system \dref{20237122035} in Section \ref{Tracking}.
The main results in Sections \ref{Obs}, \ref{Obserstabilizer} and \ref{Tracking} are proved rigorously in Sections \ref{PfTh1}, \ref{PfTh2} and \ref{PfTh3}, respectively. The mathematical proofs arranged in these sections can be skipped if the reader is interested only in the design procedure. Some numerical simulations are presented in Section \ref{NumSim} to validate the theoretical results, followed by some conclusions in Section \ref{Concluding}. A mathematical result that is less relevant to the feedback or observer design is arranged in the Appendix. For the sake of simplicity, we drop the obvious temporal and spatial domains in the rest of the paper. Throughout the article, $\H$ denotes the Hilbert space $ L^2(0,1)$ equipped with the inner product \begin{equation}\label{2023962248} \langle f,g\rangle_{\H}=\int_{0}^{1}f(x)g(x)dx,\ \ \forall\ f,g\in \H. \end{equation} We denote by $W$ the set of functions \begin{equation}\label{202473949} W =\{f+g\ |\ f\in L^{\infty}(0,\infty), g\in L^{2}(0,\infty)\}. \end{equation} \section{Observer design}\label{Obs} When $b$ is known, the following controller can stabilize system \dref{20237122035} exponentially by using the method of partial differential equation backstepping \cite{Backsteppingbook}: \begin{equation}\label{2023813835} \left.\begin{array}{l} \disp u(t)=-\frac{q+c_0}{b}\left[ w(1,t)+q \int_{0}^{1}e^{q(1-x)}w(x,t)dx\right], \end{array} \right. \end{equation} where $c_0$ is a positive tuning parameter. However, when $b$ is unknown, the situation becomes completely different. The unknown control coefficient $b$ may give rise to great difficulties in the controller design. For example, the conventional approach to the controller design may be infeasible even if we know that the estimate satisfies \begin{equation}\label{2023719933} \hat{b}(t)\to b\ \ \mbox{as}\ \ t\to+\infty.
\end{equation} Actually, in terms of \dref{2023719933} and \dref{2023813835}, the controller is designed naturally as \begin{equation}\label{2023719934} u(t)=-\frac{q+c_0}{\hat{b}(t)}\left[ w(1,t)+q \int_{0}^{1}e^{q(1-x)}w(x,t)dx\right]. \end{equation} However, the controller \dref{2023719934} is meaningless at time $t$ if $\hat{b}(t)=0$, which may occur and is hard to avoid during the convergence \dref{2023719933}. Moreover, the unknown control coefficient can also affect the estimation/cancellation of the disturbance in the well-known Active Disturbance Rejection Control \cite{FengAnnual} and the solvability of the regulator equations in the Internal Model Principle \cite{Natarajan2016TAC}, \cite{PauLassi2016TAC} and \cite{PauLassi2017SIAM}. In summary, we will encounter great difficulties in the presence of an unknown control coefficient. In order to overcome the obstacles caused by the unknown control coefficient $b$, we propose a novel strategy that estimates the reciprocal of $ b$ rather than $b$ itself. Additionally, we factorize the controller by \begin{equation}\label{2023719937} u(t)=\zeta(t)u_0(t), \end{equation} where $\zeta $ is used to compensate for the unknown $b$ and $u_0 $ is a new controller. Both of them will be determined later. Thanks to the controller factorization \dref{2023719937}, the observer of system \dref{20237122035} can be designed as \begin{equation}\label{20237122036} \left\{\begin{array}{l} \hat{w}_{t}(x,t)=\hat{w}_{xx}(x,t) , \crr \hat{w}_{x} (0,t)=-q {w}(0,t),\ \ \hat{w}_{x}(1,t)= u_0(t)+c_1 [w(1,t)-\hat{w}(1,t)] , \crr \dot{\zeta}(t)= -\mbox{sgn}(b)[w(1,t)-\hat{w}(1,t)]u_0(t) , \end{array}\right. \end{equation} where $ c_1 $ is a positive tuning gain. The observer \dref{20237122036} consists of two parts: the state observer, i.e., the $\hat{w}$-subsystem, and the update law for $\zeta$. Note that neither the unknown control coefficient $b$ nor the newly designed $\zeta$ appears in the state observer.
This is precisely the subtlety of controller factorization \dref{2023719937} and is very important for the controller design. Let the observation error be \begin{equation}\label{20237122039frac1b} \tilde{w} (x,t)=w(x,t)-\hat{w}(x,t),\ \ \tilde{\zeta}(t)=\frac{1}{b}- \zeta(t). \end{equation} Then it is governed by \begin{equation}\label{20237122040frac1b} \left\{\begin{array}{l} \tilde{w}_{t}(x,t)=\tilde{w} _{xx}(x,t) , \crr \tilde{w} _{x} (0,t)= 0 ,\ \ \tilde{w} _{x}(1,t)=-b\tilde{\zeta}(t)u_0(t) -c_1\tilde{w} (1,t), \crr \dot{\tilde{\zeta}}(t)= \mbox{sgn}(b) u_0(t)\tilde{w} (1,t). \end{array}\right. \end{equation} By Lemmas \ref{Lm2023828938} and \ref{Lm2023811150} in Section \ref{PfTh1} below, we can prove that the error system \dref{20237122040frac1b} is asymptotically stable for any $u_0\in W$ and any initial state $(\tilde{w}(\cdot,0),\tilde{\zeta}(0))\in \H\times \R$. Actually, if we choose the Lyapunov function of error system \dref{20237122040frac1b} as \begin{equation}\label{202310142044} V(t)=\frac12\int_{0}^{1}\tilde{w}^2(x,t)dx+\frac{|b|}{2}\tilde{\zeta}^2(t), \end{equation} a formal computation will show that $$\dot{V}(t)=-\int_{0}^{1}\tilde{w}^2_x(x,t)dx-c_1\tilde{w}^2(1,t)\leq0.$$ The new design of the update law for $\zeta$ in observer \dref{20237122036} is based on the Lyapunov function \dref{202310142044}. \begin{theorem}\label{Th20238281955} Let $q>0$, $b\neq 0$, $c_1>0$ and $u_0\in W $. Suppose that the sign of $b$ is known. Then the observer \dref{20237122036} of system \dref{20237122035} is well-posed: for any initial state $(w(\cdot,0),\hat{w}(\cdot,0),\zeta(0))\in \H^2\times \R$, the observer \dref{20237122036} admits a unique solution $(\hat{w}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H\times\R)$ such that \begin{equation}\label{20238282000} |{\zeta} (t) -\zeta_0|+\|w(\cdot,t)-\hat{w}(\cdot,t)\|_{\H} \to 0 \ \ \mbox{as}\ \ t\to+\infty, \end{equation} where $\zeta_0$ is a constant that may not be $\frac1b$. 
If we suppose further that the controller $u_0$ meets the following persistent excitation condition: there exists a time $\tau>0$ such that \begin{equation}\label{202381809*} \lim_{t\to+\infty}\int_{t}^{t+\tau} u_0 (s) ds \neq0, \end{equation} then \begin{equation}\label{20238282005} \zeta (t) \to \frac1b \ \ \mbox{as}\ \ t\to+\infty . \end{equation} \end{theorem} \begin{remark}\label{Re20238282005} By \dref{20237122035} and \dref{2023719937}, we have $w_x(1,t)=bu(t)=b\zeta(t)u_0(t)$. Although the observer \dref{20237122036} is well-posed, the new control coefficient $b\zeta(t)$ of $u_0(t)$ remains unknown until the estimate \dref{20238282005} is achieved. Consequently, the design of $u_0$ may still remain challenging. This implies that the separation principle based on the observer \dref{20237122036} cannot be directly applied to the output feedback design of system \dref{20237122035}, even if full state estimation \dref{20238282000} is available. On the other hand, according to Theorem \ref{Th20238281955}, the state estimation in \dref{20238282000} holds true as long as $u_0 \in W $. Consequently, we can stabilize the control plant \dref{20237122035} via stabilizing the state observer \dref{20237122036}. Given that the coefficient of the controller $u_0$ in the state observer \dref{20237122036} always equals $1$, stabilizing the state observer becomes straightforward and feasible. This feature is particularly significant for observer-based controller designs when the control coefficient $b$ is unknown. \end{remark} \begin{remark}\label{Re2023914} By the conventional definition \cite{PE1987}, the signal $u_0$ is said to satisfy the persistent excitation condition if there exist constants $\tau $, $t_0 $ and $c_0$ such that \begin{equation}\label{202381917} \int_{t }^{t+\tau}|u_0(s)|^2ds \geq c_0>0,\ \ \forall\ t> t_0\geq 0,\ \ \tau>0. \end{equation} Clearly, a function $u_0$ that meets \dref{202381917} always meets the persistent excitation condition \dref{202381809*}.
Therefore, our assumption \dref{202381809*} is not stronger than the conventional persistent excitation condition \dref{202381917}. \end{remark} \section{Observer-based stabilizer}\label{Obserstabilizer} This section is devoted to the observer-based stabilizer design for the system \dref{20237122035}. Taking the observation error \dref{20237122039frac1b} into account, the state observer in \dref{20237122036} can be rewritten as \begin{equation}\label{20238312133} \left\{\begin{array}{l} \hat{w}_{t}(x,t)=\hat{w}_{xx}(x,t) , \crr \hat{w}_{x} (0,t)=-q \hat{w}(0,t)-q \tilde{w}(0,t),\crr \hat{w}_{x}(1,t)= u_0(t)+c_1 \tilde{w}(1,t) . \end{array}\right. \end{equation} By Theorem \ref{Th20238281955}, $ \|w(\cdot,t)-\hat{w}(\cdot,t)\|_{\H} \to 0$ as $t\to+\infty$ provided $u_0\in W$. In this case, the stabilization of system \dref{20237122035} can come down to stabilization of system \dref{20238312133}. Since $\| \tilde{w}(\cdot,t)\|_{\H} \to 0$ and the unknown control coefficient $b$ does not appear in the observer \dref{20238312133}, we can stabilize the observer \dref{20238312133} easily by ignoring the $\tilde{w}$-parts. Indeed, in view of the feedback \dref{2023813835}, the controller $u_0$ can be designed as \begin{equation}\label{20237122039} u_0(t)= -(q+c_0) \left[ \hat{w}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{w}(x,t)dx\right], \end{equation} under which we get the closed-loop system of system \dref{20237122035} \begin{equation}\label{2023720848} \left\{\begin{array}{l} w_{t}(x,t)=w_{xx}(x,t) ,\crr w_{x} (0,t)=-qw(0,t) ,\crr \disp w_{x}(1,t)=-b(q+c_0)\zeta(t) \left[ \hat{w}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{w}(x,t)dx\right], \crr \hat{w}_{t}(x,t)=\hat{w}_{xx}(x,t) ,\crr \hat{w}_{x} (0,t)=-q {w}(0,t),\crr \disp \hat{w}_{x}(1,t)= -(q+c_0) \left[ \hat{w}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{w}(x,t)dx\right]+c_1 [w(1,t)-\hat{w}(1,t)] , \crr \disp\dot{\zeta}(t)= \mbox{sgn}(b)(q+c_0) [w(1,t)-\hat{w}(1,t)] \left[ \hat{w}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{w}(x,t)dx\right] . 
\end{array}\right. \end{equation} \begin{theorem}\label{th20237141510} Let $q>0$, $b\neq 0$, $c_0>0$ and $c_1>0$. Suppose that the sign of $b$ is known. Then, for any initial state $(w(\cdot,0),\hat{w}(\cdot,0),\zeta(0))\in \H^2\times \R$, the closed-loop system \dref{2023720848} admits a weak solution $(w(\cdot,t), \hat{w}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023720844w} |\zeta(t)- \zeta_0|+\|(w(\cdot,t) ,\hat{w}(\cdot,t))\|_{\H^2} \to 0 \ \ \mbox{as}\ \ t\to+\infty, \end{equation} where $\zeta_0$ is a constant that may not be $\frac1b$. \end{theorem} \begin{remark}\label{Re2023831} In Theorem \ref{th20237141510}, we have not verified the persistent excitation condition \dref{202381809*} for the function $u_0$ in \dref{20237122039}. In fact, the stability of the closed-loop system \dref{2023720848} always implies that the function $u_0$ is not persistently exciting. This will be proved in Lemma \ref{Lm2023951644} and demonstrated visually through numerical simulations in Section \ref{NumSim}. \end{remark} \section{Performance output tracking}\label{Tracking} In this section, we apply the theoretical results in Sections \ref{Obs} and \ref{Obserstabilizer} to the performance output tracking for system \dref{20237122035}. In this way, the corresponding controller can be persistently exciting and hence the unknown control coefficient $b$ can be estimated effectively. We aim at designing a feedback control such that the performance output $w(0,t) $ satisfies \begin{equation}\label{2023421108} \int_0^{\infty}| w(0,t) - r(t)|^2dt<+\infty, \end{equation} where $r\in C^{\infty}[0,+\infty)$ is a given reference. Performance output tracking for infinite-dimensional systems has been studied extensively in recent years. However, to the best of our knowledge, the results on infinite-dimensional systems with unknown control coefficient are still fairly scarce.
All the approaches that have been used in \cite{Natarajan2016TAC}, \cite{PauLassi2016TAC}, \cite{PauLassi2017SIAM}, \cite{GuoWAdaptiveAuto}, \cite{GuoZhaoRX2022} and \cite{Guomeng} rely on the precise information on the control coefficient. Moreover, in contrast with the aforementioned results where the reference is assumed to be generated by an exosystem, the considered reference $r$ is more general and is non-collocated to the controller. This also brings some difficulties to the controller design. To achieve the desired output tracking, we need to construct the servo dynamics. Inspired by \cite{105} and \cite[Chapter 12]{Backsteppingbook}, we let \begin{equation}\label{2023829858} v(x,t)=\sum_{j=0}^{\infty}r^{(j)}(t)\left[\frac{x^{2j}}{(2j)!}-\frac{ qx ^{2j+1}}{(2j+1)!}\right], \ \ x\in[0,1],\ \ t\geq0. \end{equation} By a simple computation, the function $v$ is governed by \begin{equation}\label{20221131932} \left\{\begin{array}{l} \disp v_t(x,t)=v_{xx}(x,t),\crr \disp v_x(0,t)=-qr(t),\ \ v(0,t)=r(t) \end{array}\right. \end{equation} and furthermore, \begin{equation}\label{2023828927} v_x(1,t) = -qr(t)-\sum_{j=1}^{\infty}r^{(j)}(t)\left[\frac{q}{(2j)!} -\frac{1}{(2j-1)!}\right]. \end{equation} If we let \begin{equation}\label{2023829902} z(x,t)=w(x,t)-v(x,t), \ \ x\in[0,1],\ \ t\geq0, \end{equation} then \begin{equation}\label{2023829903} z(0,t)=w(0,t)-v(0,t)=w(0,t)-r(t), \ \ t\geq0, \end{equation} and hence \begin{equation}\label{20221131937} \left\{\begin{array}{l} \disp z_t(x,t)=z_{xx}(x,t), \crr \disp z_x(0,t)= -qz(0,t) , \ z_x(1,t)=bu(t)-v_x(1,t). \end{array}\right. \end{equation} It follows from \dref{2023829902} and \dref{2023829858} that \begin{equation}\label{2023828z1} z(1,t) = w(1,t)-v(1,t)=w(1,t)- \sum_{j=0}^{\infty}r^{(j)}(t)\left[\frac{1}{(2j)!} -\frac{q }{(2j+1)!} \right]. \end{equation} Owing to \dref{2023829903}, we are able to achieve the output tracking \dref{2023421108} as long as we can stabilize system \dref{20221131937}. 
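The series \dref{2023829858} can also be checked symbolically. The following sympy sketch (ours, not from the paper; the reference $r(t)=\sin t$, the value $q=3/2$ and the truncation order $J=4$ are arbitrary illustrative choices) confirms that a truncated sum satisfies the heat equation \dref{20221131932} up to the order-$(J+1)$ tail term and reproduces the boundary data $v(0,t)=r(t)$ and $v_x(0,t)=-qr(t)$:

```python
import sympy as sp

x, t = sp.symbols('x t')
q = sp.Rational(3, 2)        # sample convection coefficient (our choice)
r = sp.sin(t)                # a smooth sample reference (our choice)
J = 4                        # truncation order of the series

# Truncation of v(x,t) = sum_j r^{(j)}(t)[x^{2j}/(2j)! - q x^{2j+1}/(2j+1)!]
v = sum(sp.diff(r, t, j) * (x**(2*j) / sp.factorial(2*j)
                            - q * x**(2*j + 1) / sp.factorial(2*j + 1))
        for j in range(J + 1))

# v_t - v_xx telescopes, leaving only the order-(J+1) tail term:
residual = sp.diff(v, t) - sp.diff(v, x, 2) \
    - sp.diff(r, t, J + 1) * (x**(2*J) / sp.factorial(2*J)
                              - q * x**(2*J + 1) / sp.factorial(2*J + 1))
print(sp.simplify(residual))                         # 0
# Boundary data of the servo dynamics:
print(sp.simplify(v.subs(x, 0) - r))                 # 0: v(0,t) = r(t)
print(sp.simplify(sp.diff(v, x).subs(x, 0) + q*r))   # 0: v_x(0,t) = -q r(t)
```

This makes visible why condition \dref{2023829957} on the derivatives $r^{(j)}$ is needed: it keeps the series and its tail under control as the truncation order grows.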
Inspired by the observer and stabilizer designs in Sections \ref{Obs} and \ref{Obserstabilizer}, the observer of system \dref{20221131937} can be designed as \begin{equation}\label{20238291031} \left\{\begin{array}{l} u(t)=\zeta(t)u_0(t),\crr \hat{z}_{t}(x,t)=\hat{z}_{xx}(x,t) , \; x\in (0,1), \crr \hat{z}_{x} (0,t)=-q {z}(0,t),\ \ \hat{z}_{x}(1,t)= u_0(t)-v_x(1,t)+c_1 [z(1,t)-\hat{z}(1,t)] , \crr \dot{\zeta}(t)= -\mbox{sgn}(b)[z(1,t)-\hat{z}(1,t)]u_0(t) , \end{array}\right. \end{equation} and the observer-based controller is designed as \begin{equation}\label{20221111734} \begin{array}{ll} u_0(t)&\disp = \disp -(q+c_0)\left[ \hat{z}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{z}(x,t)dx\right]+ v_x(1,t), \end{array} \end{equation} where $c_0 $ and $c_1$ are positive tuning parameters. Combining \dref{2023828z1}, \dref{20238291031}, \dref{20221111734} and \dref{2023829903}, we obtain the closed-loop system of \dref{20237122035}: \begin{equation}\label{2023829945} \left\{\begin{array}{l} w_{t}(x,t)=w_{xx}(x,t) ,\crr w_{x} (0,t)=-qw(0,t) ,\ \ \disp w_{x}(1,t)= b\zeta(t)u_0(t) , \crr u_0(t) \disp = \disp -(q+c_0)\left[ \hat{z}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{z}(x,t)dx\right]+ v_x(1,t),\crr \hat{z}_{t}(x,t)=\hat{z}_{xx}(x,t) , \; x\in (0,1), \crr \hat{z}_{x} (0,t)=-q [w(0,t)-r(t)],\ \ \hat{z}_{x}(1,t)= u_0(t)+c_1 [w(1,t)-v(1,t)-\hat{z}(1,t)] -v_x(1,t), \crr \dot{\zeta}(t)= -\mbox{sgn}(b)[w(1,t)-v(1,t)-\hat{z}(1,t)]u_0(t) , \end{array}\right. \end{equation} where $v(1,t)$ and $v_x(1,t)$ are given by \dref{2023829858} and \dref{2023828927}, respectively. \begin{theorem}\label{th2023829953} Let $q>0$, $b\neq 0$, $c_0>0$ and $c_1>0$. Suppose that the sign of $b$ is known and the reference $r\in C^{\infty}[0,\infty)$ satisfies \begin{equation}\label{2023829957} \sup_{t\geq0, j=0,1,\cdots} | r^{(j)}(t)|<+\infty. 
\end{equation} Then for any initial state $(w(\cdot,0),\hat{z}(\cdot,0),\zeta(0))\in \H^2\times\R$, the closed-loop system \dref{2023829945} admits a weak solution $(w(\cdot,t), \hat{z}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023829954} \int_0^{\infty}|w(0,t)-r(t)|^2dt<+\infty \end{equation} and \begin{equation}\label{2023829956} \sup_{t\geq0}\left[\|w(\cdot,t)\|_{\H}+\|\hat{z}(\cdot,t)\|_{\H}+| b\zeta(t)-1 |\right]<+\infty. \end{equation} If we assume further that the reference $r$ satisfies the following persistent excitation condition: \begin{equation}\label{202396903} \lim_{t\to+\infty}\int_t^{t+\tau}v_x(1,s)ds\neq0, \end{equation} then \begin{equation}\label{202396902} \zeta(t)-\frac1b \to 0 \ \ \mbox{as}\ \ t\to+\infty. \end{equation} \end{theorem} When $r(t)\equiv r^*$ with $r^* \neq0$, by \dref{2023828927} we have $v_x(1,t) = -qr(t)\equiv-qr^*$, and hence the persistent excitation condition \dref{202396903} obviously holds. In this case, the closed-loop system \dref{2023829945} is reduced to \begin{equation}\label{2023829945constant} \left\{\begin{array}{l} w_{t}(x,t)=w_{xx}(x,t) ,\crr w_{x} (0,t)=-qw(0,t) ,\ \ \disp w_{x}(1,t)= b\zeta(t)u_0(t) , \crr u_0(t) \disp = \disp -(q+c_0)\left[ \hat{z}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{z}(x,t)dx\right]- qr^*,\crr \hat{z}_{t}(x,t)=\hat{z}_{xx}(x,t) , \; x\in (0,1), \crr \hat{z}_{x} (0,t)=-q [w(0,t)-r^*],\ \ \hat{z}_{x}(1,t)= u_0(t)+c_1 [w(1,t)- (1-q)r^*-\hat{z}(1,t)] +qr^*, \crr \dot{\zeta}(t)= -\mbox{sgn}(b)[w(1,t)-(1-q)r^*-\hat{z}(1,t)]u_0(t) . \end{array}\right. \end{equation} Theorem \ref{th2023829953} leads immediately to the following corollary: \begin{corollary}\label{Co202396} Suppose that $q>0$, $b\neq 0$, $c_0>0$, $c_1>0$, $ r^*\neq0$ and the sign of $b$ is known.
Then for any initial state $(w(\cdot,0),\hat{z}(\cdot,0),\zeta(0))\in \H^2\times\R$, the closed-loop system \dref{2023829945constant} admits a weak solution $(w(\cdot,t), \hat{z}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \dref{2023829954}, \dref{2023829956} and \dref{202396902} hold with $r(t)\equiv r^*$. \end{corollary} \section{Proof of Theorem \ref{Th20238281955} }\label{PfTh1} Before proving Theorem \ref{Th20238281955}, we first give two lemmas. \begin{lemma}\label{Lm2023828938} Suppose that $b\neq 0$, $c_1>0$ and $u_0\in L^2_{\rm loc}(0,\infty)$. Then, for any initial state $(\tilde{w}(\cdot,0),\tilde{\zeta}(0))\in \H\times\R$ and for any $T>0$, system \dref{20237122040frac1b} admits a unique weak solution $(\tilde{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,T;\H\times\R)$ such that \begin{equation}\label{2023828938} \int_0^t\int_{0}^{1}\tilde{w}_x^2(x,s)dx+c_1\tilde{w}^2(1,s)ds = F(0)-{F}(t) \end{equation} and \begin{equation}\label{2023828938E} \int_0^t \int_{0}^{1}\tilde{w}_{ x}^2(x,s)dx +b u_0(s) \tilde{\zeta} (s)\tilde{w}(1,s) + c_1 \tilde{w} ^2(1,s)ds= E(0)-E(t), \end{equation} where $0<t\leq T$ and \begin{equation}\label{20237141056} F(t)=E(t)+\frac{|b|}{2}\tilde{\zeta}^2(t),\ \ E(t)=\frac12\int_{0}^{1}\tilde{w}^2(x,t)dx . \end{equation} \end{lemma} \begin{proof} We will prove Lemma \ref{Lm2023828938} by the Galerkin method \cite{Evans}. Let \begin{equation}\label{wxh201912303Ad429} \left\{\begin{array}{l} \disp \phi_n(x)=\sqrt{2} \sin \sqrt{\lambda_n}x ,\crr \lambda_n=\left(n-\frac{1}{2}\right)^2\pi^2, \end{array}\right. \ \ x\in[0,1],\ \ n\geq 1. \end{equation} Then, $\{\phi_n(\cdot) \}_{n=1}^{\infty}$ forms an orthonormal basis for $\H$. Fix a positive integer $N$ and let $\mathcal{P}_N$ be the orthogonal projection of $\H$ onto $V_N:=\mbox{the linear span of } \{\phi_j:j=1,2,\cdots,N \} $.
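As a small symbolic check (ours, not part of the proof), one can confirm that the family \dref{wxh201912303Ad429} is orthonormal in $\H=L^2(0,1)$ and satisfies $-\phi_n''=\lambda_n\phi_n$:

```python
import sympy as sp

x = sp.symbols('x')
lam = lambda n: sp.Rational(2*n - 1, 2)**2 * sp.pi**2   # (n - 1/2)^2 pi^2
phi = lambda n: sp.sqrt(2) * sp.sin(sp.sqrt(lam(n)) * x)

# <phi_n, phi_n>_H = 1 and <phi_n, phi_m>_H = 0 for n != m:
print(sp.integrate(phi(2) * phi(2), (x, 0, 1)))              # 1
print(sp.integrate(phi(1) * phi(3), (x, 0, 1)))              # 0
# Eigenfunction relation -phi_n'' = lambda_n phi_n:
print(sp.simplify(-sp.diff(phi(2), x, 2) - lam(2) * phi(2))) # 0
```

Orthonormality is what allows the approximate solutions below to be expanded in $V_N$ with the coefficients $\tilde{w}_{N,j}(t)=\langle \tilde{w}_N(\cdot,t),\phi_j\rangle_{\H}$.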
Let \begin{equation} \label{2018961642} \tilde{w}_N(\cdot,t)=\sum_{j=1}^{N}\tilde{w}_{N,j}(t)\phi_j,\ \ \tilde{\zeta}_N(t) \end{equation} be the approximate solutions that satisfy the following system of ordinary differential equations: \begin{equation} \label{2023818918} \left\{ \begin{array}{l} \disp \langle \tilde{w}_{Nt} (\cdot,t),\phi_j\rangle_{\H} + \langle \tilde{w}_{Nx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = -\phi_j(1) \left[ b \tilde{\zeta}_N(t)u_0(t) + c_1 \tilde{w}_N(1,t) \right] ,\crr \dot{\tilde{\zeta}}_N(t)= \mbox{sgn}(b)u_0(t) \tilde{w}_N (1,t),\crr \disp \tilde{w}_N(\cdot,0)=\mathcal{P}_N\tilde{w}(\cdot,0), \ \ \tilde{\zeta}_N(0)=\tilde{\zeta}(0),\ \ \ j=1,2,\cdots, N. \end{array}\right. \end{equation} Clearly, the initial values satisfy \begin{equation} \label{20189101501} \lim\limits_{N\to+\infty} \|\tilde{w}_N(\cdot,0)-\tilde{w}(\cdot,0)\|_{\H} =0 \end{equation} and system \dref{2023818918} is an initial-value problem for a first-order system of $N+1$ linear ordinary differential equations with the time-varying coefficient $u_0(t)$ and the unknown functions $\tilde{w}_{N,j}, \tilde{\zeta}_N$. By the theory of ordinary differential equations, for every $N \geq 1$ and any $T>0$, \dref{2023818918} has a unique solution $\tilde{w}_{N,j}, \tilde{\zeta}_N\in C^1[0, T]$. In terms of system \dref{2023818918}, we let \begin{equation}\label{2023818807N} F_N(t)=E_N(t)+\frac{|b|}{2}\tilde{\zeta}_N^2(t),\ \ E_N(t)=\frac12\int_{0}^{1}\tilde{w}_N^2(x,t)dx . \end{equation} Then \dref{20189101501} implies that \begin{equation}\label{20238291125} F_N(0)\to F(0)\ \ \mbox{as}\ \ N\to+\infty .
\end{equation} Multiplying the first equation in \dref{2023818918} by $\tilde{w}_{N,j}(t)$ and summing for $j = 1, \cdots ,N$, and multiplying the second equation in \dref{2023818918} by $\tilde{\zeta}_N(t)$, we get \begin{equation}\label{2023818806EN} \dot{E}_N(t)= -\int_{0}^{1}\tilde{w}_{Nx}^2(x,t)dx -b u_0(t) \tilde{\zeta}_N(t) \tilde{w}_N (1,t) - c_1 \tilde{w}_N^2(1,t) \end{equation} and \begin{equation}\label{2023818940} \begin{array}{l} \dot{F}_N(t)= \disp -\int_{0}^{1}\tilde{w}_{Nx}^2(x,t)dx-c_1\tilde{w}_N^2(1,t) \leq 0. \end{array} \end{equation} That is, for any $t\in [0,T]$, \begin{equation}\label{20238291501} \int_0^t\int_{0}^{1}\tilde{w}_{Nx}^2(x,s)dx +b u_0(s) \tilde{\zeta}_N(s) \tilde{w}_N (1,s) + c_1 \tilde{w}_N^2(1,s)ds=E_N(0)-E_N(t) \end{equation} and \begin{equation}\label{2023828938N} \int_0^t\int_{0}^{1}\tilde{w}_{Nx}^2(x,s)dx+c_1\tilde{w}^2_N(1,s)ds +{F}_N(t)= F_N(0). \end{equation} Let $(\tilde{w}_N(\cdot,t),\tilde{\zeta}_{N}(t)) , (\tilde{w}_L(\cdot,t),\tilde{\zeta}_{L}(t))$ be two approximate solutions, and without loss of generality, we assume $N > L$. Set $$ \left\{\begin{array}{l} \disp \tilde{w}_{NL}(\cdot,t) =\tilde{w}_N(\cdot,t)-\tilde{w}_L(\cdot,t)= \sum_{j=1}^N[\tilde{w}_{N,j}(t)-\tilde{w}_{L,j}(t)]\phi_j,\crr \disp \tilde{\zeta}_{NL}(t)= \tilde{\zeta}_{N}(t)-\tilde{\zeta}_{L}(t), \end{array}\right. $$ with the understanding that $\tilde{w}_{L,j}(t)\equiv 0$ when $j>L$. Then $\tilde{w}_{NL}(\cdot,t)$ and $\tilde{\zeta}_{NL}(t)$ satisfy \begin{equation} \label{20239152217} \left\{ \begin{array}{l} \disp \langle \tilde{w}_{NLt} (\cdot,t),\phi_j\rangle_{\H} + \langle \tilde{w}_{NLx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = - \phi_j(1) \left[ bu_0(t) \tilde{\zeta}_{NL}(t) + c_1 \tilde{w}_{NL}(1,t) \right] ,\crr \dot{\tilde{\zeta}}_{NL}(t)= \mbox{sgn}(b)u_0(t) \tilde{w}_{NL} (1,t), \crr \disp \tilde{w}_{NL}(\cdot,0)=\mathcal{P}_N\tilde{w}(\cdot,0)-\mathcal{P}_L\tilde{w}(\cdot,0), \ \tilde{\zeta}_{NL}(0)=0,\ \ j=1,2\cdots ,N. 
\end{array}\right. \end{equation} Let \begin{equation} \label{20239152218} F_{NL}(t)=\frac12\int_{0}^{1}\tilde{w}_{NL}^2(x,t)dx +\frac{|b|}{2}\tilde{\zeta}_{NL}^2(t),\ \ t\geq0. \end{equation} Since system \dref{20239152217} takes the same form as system \dref{2023818918}, we get \begin{equation}\label{2023828938NAd915} \int_0^t\int_{0}^{1}\tilde{w}_{NLx}^2(x,s)dx+c_1\tilde{w}^2_{NL}(1,s)ds +{F}_{NL}(t)= F_{NL}(0) , \end{equation} just as we obtained \dref{2023828938N}. Since \begin{equation}\label{20239152220} F_{NL}(0) \to 0\ \mbox{as}\ N,L\to+\infty, \end{equation} \dref{2023828938NAd915} implies that, for any given $t\in[0,T]$, $\{\tilde{w}_N(\cdot,t) \}$ is a Cauchy sequence in $ \H$ and $\{\tilde{\zeta}_N(t)\}$ is a Cauchy sequence in $\R $. Moreover, $\{\tilde{w}_N \}$ is a Cauchy sequence in $L^2(0,T;H ^1(0,1))$ and $\{\tilde{w} _N(1,t)\}$ is a Cauchy sequence in $L^2(0,T) $. Hence, there exists $(\tilde{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,T;\H\times\R)$ such that \begin{equation} \label{2023951453} \left\{\begin{array}{l} \tilde{w}_{N} (\cdot,t)\to \tilde{w}(\cdot,t)\ \ \mbox{ in}\ \ \H,\ \ \forall\ t\in[0,T], \crr \disp \tilde{\zeta}_N(t)\to \tilde{\zeta}(t) \ \ \mbox{ in}\ \ \R,\ \ \forall\ t\in[0,T],\crr \disp \tilde{w}_N \to \tilde{w} \ \ \mbox{ in}\ \ \ L^{2}(0, T;H ^1(0,1)),\crr \disp \tilde{w}_N(1,\cdot)\to \tilde{w}(1,\cdot) \ \ \mbox{ in}\ \ \ L^{2}( 0, T ), \end{array}\right.\mbox{as}\ \ N\to+\infty.
\end{equation} For any $T>0$ and $0<t<T$, by integrating \dref{2023818918} from $0$ to $t$, we obtain \begin{equation} \label{2023818918inte} \left\{ \begin{array}{l} \disp \langle \tilde{w}_{N} (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \tilde{w}_{Nx}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \tilde{w}_{N} (\cdot,0),\phi_j\rangle_{\H}\crr \disp - \phi_j(1) \int_0^t \left[ b \tilde{\zeta}_N(\tau)u_0(\tau) + c_1 \tilde{w}_N(1,\tau) \right]d\tau ,\crr \tilde{\zeta} _N(t)=\disp \tilde{\zeta} (0) +\mbox{sgn}(b) \int_0^t \tilde{w}_N (1,\tau)u_0(\tau)d\tau, \end{array}\right. \ \ j=1,2\cdots ,N. \end{equation} We pass to the limit as $N\to+\infty$ in \dref{2023818918inte} to get \begin{equation} \label{2023818918inteweak} \left\{ \begin{array}{l} \disp \langle \tilde{w} (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \tilde{w}_{x}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \tilde{w} (\cdot,0),\phi_j\rangle_{\H}\crr \disp - \phi_j(1) \int_0^t \left[ b \tilde{\zeta} (\tau)u_0(\tau) + c_1 \tilde{w} (1,\tau) \right]d\tau ,\crr \tilde{\zeta} (t)=\disp \tilde{\zeta} (0) +\mbox{sgn}(b) \int_0^t \tilde{w} (1,\tau)u_0(\tau)d\tau, \end{array}\right. \end{equation} which implies that $ (\tilde{w},\tilde{\zeta} ) $ is a weak solution of the system \dref{20237122040frac1b}. Since system \dref{20237122040frac1b} is a linear system, the uniqueness of the solution is trivial. By virtue of \dref{2023951453}, we let $N\to+\infty$ in \dref{2023828938N} and \dref{20238291501} to obtain \dref{2023828938} and \dref{2023828938E}, respectively. \end{proof} \begin{lemma}\label{Lm2023811150} Let $b\neq 0$ and $c_1>0$. Suppose that the controller $u_0\in W $. 
Then, for any initial state $(\tilde{w}(\cdot,0),\tilde{\zeta}(0))\in \H\times\R$, system \dref{20237122040frac1b} admits a unique weak solution $(\tilde{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H\times\R)$ such that \begin{equation}\label{20238291544} \int_{0}^{\infty}\tilde{w}^2(0,t)+\tilde{w}^2(1,t)dt<+\infty, \end{equation} \begin{equation}\label{2023811157} \|\tilde{w}(\cdot,t)\|_{\H}\to 0 \ \ \mbox{as}\ \ t\to+\infty \end{equation} and \begin{equation}\label{2023811157B} \tilde{\zeta} (t) \to \zeta_* \ \ \mbox{as}\ \ t\to+\infty , \end{equation} where $\zeta_*$ is a constant that may not be zero. If we suppose further that there exists a time $\tau>0$ such that the controller $u_0 $ satisfies \dref{202381809*}, then, \begin{equation}\label{20238231156} \tilde{\zeta} (t) \to 0 \ \ \mbox{as}\ \ t\to+\infty . \end{equation} \end{lemma} \begin{proof} The existence of the solution can be obtained by Lemma \ref{Lm2023828938}. It suffices to prove \dref{20238291544}, \dref{2023811157}, \dref{2023811157B} and \dref{20238231156}. We first prove \dref{20238291544} and the convergence \dref{2023811157}. By \dref{2023828938} and \dref{20237141056}, we have \begin{equation}\label{2023720757} \tilde{\zeta}^2(t)\leq \frac{2}{|b|} F(t) \leq \frac{2}{|b|}F(0). \end{equation} By \cite[p.258, Theorem 1]{Evans}, there exists a positive constant $M$ such that \begin{equation}\label{202411242219FD} \tilde{w}^2(0,t) \leq M \left [\tilde{w}^2(1,t)+\disp \int_{0}^{1}\tilde{w}_x^2(x,t)dx \right]. \end{equation} Passing to the limit as $ t\to+\infty$ in \dref{2023828938}, we obtain \begin{equation}\label{20238231205} \int_0^{\infty}\left[ \int_{0}^{1}\tilde{w}_x^2(x,t)dx+c_1\tilde{w}^2(1,t)\right] dt \leq F(0), \end{equation} which, together with \dref{202411242219FD}, leads to \dref{20238291544}.
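The trace-type bound \dref{202411242219FD} can in fact be seen directly for smooth functions: writing $\tilde{w}(0,t)=\tilde{w}(1,t)-\int_0^1\tilde{w}_x(x,t)dx$ and applying the Cauchy-Schwarz inequality shows that $M=2$ works. A minimal numerical sanity check of this bound (a Python sketch with arbitrarily chosen test functions; not part of the proof):

```python
import math

def trace_bound_holds(f, df, n=2000, M=2.0):
    """Check f(0)^2 <= M*(f(1)^2 + int_0^1 f'(x)^2 dx) by midpoint quadrature."""
    integral = sum(df((k + 0.5) / n) ** 2 for k in range(n)) / n
    return f(0.0) ** 2 <= M * (f(1.0) ** 2 + integral) + 1e-9

# a few arbitrary smooth test functions (illustrative choices only)
tests = [
    (lambda x: math.cos(math.pi * x), lambda x: -math.pi * math.sin(math.pi * x)),
    (lambda x: 1.0 + x * x,           lambda x: 2.0 * x),
    (lambda x: math.exp(-x),          lambda x: -math.exp(-x)),
]
print(all(trace_bound_holds(f, df) for f, df in tests))  # True
```

The constant $M=2$ comes from $(a+b)^2\leq 2a^2+2b^2$ together with $(\int_0^1 f')^2\leq\int_0^1 (f')^2$; the cited result in \cite{Evans} gives the same kind of estimate for general $H^1$ functions.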
By \dref{2023828938E}, \dref{2023720757} and Poincar\'{e}'s inequality, there exists a positive constant $\omega$ such that \begin{equation*}\label{2023941442} \begin{array}{rl} \disp \dot{E}(t)&\disp =-\int_{0}^{1}\tilde{w}_{ x}^2(x,t)dx -b u_0(t) \tilde{\zeta} (t)\tilde{w}(1,t) - c_1 \tilde{w} ^2(1,t) \crr &\leq -\omega E(t) +\sqrt{2|b|F(0)} \cdot |u_0(t)| | \tilde{w}(1,t)| . \end{array} \end{equation*} Applying Lemma \ref{Lm20231004} to this inequality, we get \dref{2023811157} easily. Now we prove the convergence \dref{2023811157B}. Owing to \dref{2023828938} and \dref{20237141056}, we conclude that \begin{equation}\label{202395808} \dot{F}(t)\leq0\ \ \mbox{and}\ \ 0\leq F(t)\leq F(0),\ \ \forall\ t\geq0. \end{equation} By the monotone convergence theorem, there exists an $F_*\geq0$ such that $F(t)\to F_*$ as $t\to+\infty$. Furthermore, \begin{equation}\label{202395815} \lim_{t\to+\infty}\tilde{\zeta}^2(t)=\frac{2}{|b|} \lim_{t\to+\infty}\left[F(t)-E(t)\right]= \frac{2 F_*}{|b|} \end{equation} and hence \begin{equation}\label{202395815abs} \lim_{t\to+\infty}|\tilde{\zeta} (t)|= \sqrt{\frac{2 F_* }{|b|} } . \end{equation} When $F_*=0$, \dref{202395815abs} implies that $\tilde{\zeta} (t)\to0$ as $t\to+\infty$. It suffices to consider the case that $F_*\neq0$. By \dref{202395815}, there exists a $t_0>0$ such that \begin{equation}\label{202395821} \tilde{\zeta}^2(t)> \frac{F_*}{|b|}>0,\ \ \forall\ t>t_0. \end{equation} Hence, $\tilde{\zeta}(t)\neq 0$ for all $t>t_0$. Since $\tilde{\zeta}\in C[0,\infty)$, it follows from the intermediate value theorem that \begin{equation}\label{202395827} |\tilde{\zeta}(t)|=\tilde{\zeta}(t) \ \ \mbox{or}\ \ |\tilde{\zeta}(t)|=-\tilde{\zeta}(t),\ \ \forall\ t>t_0, \end{equation} which, together with \dref{202395815abs}, leads to \begin{equation*}\label{2023831906} \lim_{t\to+\infty} \tilde{\zeta} (t) = \sqrt{\frac{2 F_* }{|b|} } \ \ \mbox{or}\ \ \lim_{t\to+\infty} \tilde{\zeta} (t) = -\sqrt{\frac{2 F_* }{|b|} } .
\end{equation*} So we have proved the convergence \dref{2023811157B}. In order to prove \dref{20238231156}, we suppose that the condition \dref{202381809*} holds but $\zeta_*\neq0$. Define \begin{equation}\label{20238251127} \rho (t)=\int_0^1x\tilde{w} (x,t)dx,\ \ t\geq0 . \end{equation} Differentiating $\rho$ along the solution of system \dref{20237122040frac1b}, we get \begin{equation}\label{202382511250} \begin{array}{l} \disp \dot{\rho} (t)= -b\tilde{\zeta} (t)u_0(t) -(1+c_1)\tilde{w} (1,t) + \tilde{w}(0,t). \end{array} \end{equation} For any $t>0$ and $\tau>0$, it follows from \dref{2023811157} and \dref{20238251127} that \begin{equation}\label{20238251130toinfty} \int_{t}^{t+\tau} \dot{\rho} (s)ds= {\rho} (t+\tau)- {\rho} (t) \to 0\ \ \mbox{as}\ \ t\to+\infty. \end{equation} By \dref{20238291544}, for any $\tau>0$, we have \begin{equation}\label{20238291546} \int_{t}^{t+\tau}\tilde{w}^2(0,s)+\tilde{w}^2(1,s)ds\to 0 \ \ \mbox{as}\ \ t\to+\infty, \end{equation} which yields \begin{equation}\label{20238251135} \int_{t}^{t+\tau} \tilde{w}(0,s) -(1+c_1)\tilde{w} (1,s) ds \to0\ \ \mbox{as}\ \ t\to+\infty. \end{equation} Combining \dref{202382511250}, \dref{20238251130toinfty} and \dref{20238251135}, we arrive at \begin{equation}\label{20238291552} b \int_{t}^{t+\tau}\tilde{\zeta} (s) u_0(s)ds \to0\ \ \mbox{as}\ \ t\to+\infty. \end{equation} On the other hand, it follows from \dref{2023811157B} that \begin{equation}\label{20238291555} b \int_{t}^{t+\tau}[\zeta_*-\tilde{\zeta} (s)]u_0(s)ds \to0\ \ \mbox{as}\ \ t\to+\infty. \end{equation} Note that \begin{equation}\label{20238251136} b \int_{t}^{t+\tau}\zeta_* u_0(s)ds= b \int_{t}^{t+\tau}[\zeta_*-\tilde{\zeta} (s)]u_0(s)ds + b \int_{t}^{t+\tau} \tilde{\zeta} (s) u_0(s)ds.
\end{equation} Combining \dref{20238291555}, \dref{20238251136} and \dref{20238291552}, we have \begin{equation}\label{20238291557} b \zeta_* \int_{t}^{t+\tau} u_0(s)ds \to0\ \ \mbox{as}\ \ t\to+\infty, \end{equation} which, together with the assumption $ b \zeta_* \neq0$, contradicts the condition \dref{202381809*}. \end{proof} {\it Proof of Theorem \ref{Th20238281955}.} By Lemma \ref{Lm2023828938}, we can suppose that $(\tilde{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H\times\R)$ is a weak solution to system \dref{20237122040frac1b}. Let \begin{equation}\label{20238291644} \zeta(t)=\frac1b-\tilde{\zeta}(t),\ \ t\geq0. \end{equation} Then $\zeta\in C[0,\infty)$ and $u(t)=\zeta(t)u_0(t)$ belongs to $L^2_{\rm loc}(0,\infty)$. Hence, system \dref{20237122035} with $u(t)=\zeta(t)u_0(t)$ admits a unique solution $ {w}(\cdot,t) \in C(0,\infty;\H)$. Let \begin{equation}\label{20238291646} \hat{w}(x,t)=w(x,t)-\tilde{w}(x,t),\ \ x\in[0,1],\ t\geq0. \end{equation} A simple computation shows that $(\hat{w}(\cdot,t),\zeta(t))\in C(0,\infty;\H\times\R)$ defined by \dref{20238291644} and \dref{20238291646} is a weak solution of the observer \dref{20237122036}. Furthermore, the convergence \dref{20238282000} and \dref{20238282005} can be obtained by Lemma \ref{Lm2023811150} directly. \hfill$\Box$ \section{Proof of Theorem \ref{th20237141510}}\label{PfTh2} Before proving Theorem \ref{th20237141510}, we first consider the following transformed system: \begin{equation}\label{20238292127} \left\{\begin{array}{l} \tilde{w}_{t}(x,t)=\tilde{w} _{xx}(x,t) , \crr \tilde{w} _{x} (0,t)= 0 ,\ \disp\tilde{w} _{x}(1,t)= b(q+c_0)\tilde{\zeta}(t) \check{w}(1,t)-c_1\tilde{w} (1,t), \crr \disp\dot{\tilde{\zeta}}(t)= - (q+c_0) \mbox{sgn}(b) \tilde{w} (1,t)\check{w}(1,t) ,\crr \check{w}_{t}(x,t)=\check{w}_{xx}(x,t) +q^2e^{qx}\tilde{w}(0,t) ,\crr \check{w}_{x} (0,t)=-q\tilde{w}(0,t),\ \ \disp \check{w}_{x}(1,t)= - c_0 \check{w}(1,t) +c_1 \tilde{w}(1,t) . \end{array}\right.
\end{equation} \begin{lemma}\label{Lm2023828938Ad92} Suppose that $b\neq 0$, $c_0 >0$ and $c_1>0$. Then for any initial state $(\tilde{w}(\cdot,0),\check{w}(\cdot,0),\tilde{\zeta}(0))\in \H^2\times\R$, system \dref{20238292127} admits a weak solution $(\tilde{w}(\cdot,t),\check{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023951709} \int_{0}^{\infty} \int_{0}^{1}\tilde{w}_{x}^2(x,t)+\check{w}_{x}^2(x,t)dx + \tilde{w}^2(1,t)+\check{w}^2(1,t)dt<+\infty. \end{equation} \end{lemma} \begin{proof} We will prove Lemma \ref{Lm2023828938Ad92} by the Galerkin method. Let $\{\phi_n \} $ be given by \dref{wxh201912303Ad429}. Suppose that \begin{equation} \label{2018961642Ad92} \tilde{w}_N(\cdot,t)=\sum_{j=1}^{N}\tilde{w}_{N,j}(t)\phi_j,\ \ \check{w}_N(\cdot,t)=\sum_{j=1}^{N}\check{w}_{N,j}(t)\phi_j,\ \ \tilde{\zeta}_N(t) \end{equation} satisfy the following system of ordinary differential equations: \begin{equation} \label{2023818918Ad92} \left\{ \begin{array}{l} \disp \langle \tilde{w}_{Nt} (\cdot,t),\phi_j\rangle_{\H} + \langle \tilde{w}_{Nx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ b (q+c_0) \tilde{\zeta}_N(t) \check{w}_N(1,t) - c_1 \tilde{w}_N(1,t) \right] ,\crr \disp \langle \check{w}_{Nt} (\cdot,t),\phi_j\rangle_{\H} + \langle \check{w}_{Nx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ c_1 \tilde{w}_N(1,t) -c_0\check{w}_N(1,t)\right] ,\crr \dot{\tilde{\zeta}}_N(t)= -\mbox{sgn}(b)(q+c_0) \tilde{w}_N (1,t) \check{w}_N(1,t),\crr \disp \tilde{w}_N(\cdot,0)=\mathcal{P}_N\tilde{w}(\cdot,0), \ \disp \check{w}_N(\cdot,0)=\mathcal{P}_N\check{w}(\cdot,0), \ \tilde{\zeta}_N(0)=\tilde{\zeta}(0),\ \ j=1,2,\cdots ,N, \end{array}\right. \end{equation} where $\mathcal{P}_N$ is the orthogonal projection of $\H$ onto $V_N:=\mbox{the linear span of } \{\phi_j:j=1,2,\cdots,N \} $.
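Once a concrete basis is fixed, \dref{2023818918Ad92} is a finite system of ODEs that can be integrated numerically. The sketch below assumes, purely for illustration, the Neumann eigenbasis $\phi_1=1$, $\phi_{j+1}=\sqrt2\cos(j\pi x)$ (so the mass matrix is the identity, $\langle\phi_{jx},\phi_{kx}\rangle_{\H}$ is diagonal, and $\phi_{j+1}(1)=\sqrt2(-1)^j$); the actual basis \dref{wxh201912303Ad429} may differ. It integrates the scaled system \dref{202310131549} introduced below with a classical RK4 step, for sample constants, and checks that the functional $L_N$ from \dref{2023931622} is nonincreasing, as the a priori estimate predicts once $\gamma$ satisfies \dref{2023100161043}:

```python
import math

# illustrative parameters (not from the paper): b=1, q=1, c0=1, c1=3
b, q, c0, c1 = 1.0, 1.0, 1.0, 3.0
gamma = 2.0                 # satisfies gamma > max{1/2, c1/(2*c0)} = 1.5
N = 4                       # number of Galerkin modes per field
lam = [(j * math.pi) ** 2 for j in range(N)]                        # stiffness eigenvalues
phi1 = [1.0] + [math.sqrt(2.0) * (-1.0) ** j for j in range(1, N)]  # phi_j(1)

def rhs(state):
    """Right-hand side of the scaled Galerkin ODE system (202310131549)."""
    z, wc, xi = state[:N], state[N:2 * N], state[2 * N]
    z1 = sum(zj * pj for zj, pj in zip(z, phi1))     # z_N(1,t)
    wc1 = sum(wj * pj for wj, pj in zip(wc, phi1))   # \check{w}_N(1,t)
    dz = [-lam[j] * z[j] + phi1[j] * (b * (q + c0) / gamma * xi * wc1 - c1 * z1)
          for j in range(N)]
    dwc = [-lam[j] * wc[j] + phi1[j] * (c1 / gamma * z1 - c0 * wc1)
           for j in range(N)]
    dxi = -math.copysign(1.0, b) * (q + c0) * gamma * z1 * wc1
    return dz + dwc + [dxi]

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs([s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = rhs([s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = rhs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * p + 2 * c + d)
            for s, a, p, c, d in zip(state, k1, k2, k3, k4)]

def L_func(state):
    """Functional L_N of (2023931622) in modal coordinates."""
    z, wc, xi = state[:N], state[N:2 * N], state[2 * N]
    return 0.5 * (sum(v * v for v in z) + sum(v * v for v in wc)) \
        + abs(b) / (2 * gamma ** 2) * xi * xi

state = [1.0, 0.5, -0.3, 0.2] + [0.8, -0.4, 0.1, 0.0] + [0.7]  # arbitrary initial data
dt, steps = 0.002, 1000
history = [L_func(state)]
for _ in range(steps):
    state = rk4_step(state, dt)
    history.append(L_func(state))
monotone = all(history[k + 1] <= history[k] + 1e-8 for k in range(steps))
print(monotone)
```

The key structural point, mirrored in the code, is that the $\xi_N$ equation is scaled exactly so that the sign-indefinite cubic terms cancel in $\dot L_N$, leaving only the boundary damping.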
Clearly the initial values satisfy \begin{equation} \label{20189101501Ad92} \lim\limits_{N\to+\infty} \|\tilde{w}_N(\cdot,0)-\tilde{w}(\cdot,0)\|_{\H}+\|\check{w}_N(\cdot,0)-\check{w}(\cdot,0)\|_{\H} =0. \end{equation} It follows from the Cauchy-Peano theorem that for every $N \geq 1$, \dref{2023818918Ad92} has a solution $\tilde{w}_{N,j}, \check{w}_{N,j}, \tilde{\zeta}_N\in C^1[0, T_N]$ for some $T_N > 0$. To address the nonlinearities in \dref{2023818918Ad92}, we introduce \begin{equation}\label{202310131547} z_{N}(x,t)=\gamma \tilde{w}_{N}(x,t),\ \ \xi_{N}(t)=\gamma^2\tilde{\zeta}_{N}(t), \end{equation} where $\gamma$ is a positive constant that is sufficiently large. Then, system \dref{2023818918Ad92} becomes \begin{equation} \label{202310131549} \left\{ \begin{array}{l} \disp \langle z _{Nt} (\cdot,t),\phi_j\rangle_{\H} + \langle z _{Nx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ \frac{b (q+c_0)}{\gamma} \xi _N(t) \check{w}_N(1,t) - c_1 z _N(1,t) \right] ,\crr \disp \langle \check{w}_{Nt} (\cdot,t),\phi_j\rangle_{\H} + \langle \check{w}_{Nx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ \frac{c_1}{\gamma} z _N(1,t) -c_0\check{w}_N(1,t)\right] ,\crr \dot{\xi }_N(t)= -\mbox{sgn}(b)(q+c_0)\gamma z _N (1,t) \check{w}_N(1,t),\crr \disp z _N(\cdot,0)=\gamma\mathcal{P}_N \tilde{w} (\cdot,0), \ \disp \check{w}_N(\cdot,0)=\mathcal{P}_N\check{w}(\cdot,0), \ \xi _N(0)=\gamma^2\tilde{\zeta} (0),\ \ j=1,2,\cdots ,N. \end{array}\right. \end{equation} We divide the remaining proof into four parts for clarity. {\it (1), A priori estimate.} Define \begin{equation}\label{2023931622} L_N(t)=\frac12\int_{0}^{1}z_N^2(x,t)+\check{w}_N^2(x,t)dx +\frac{|b|}{2\gamma^2}\xi_N^2(t).
\end{equation} Multiplying the first two equations of \dref{202310131549} by $\gamma\tilde{w}_{N,j}(t)$, $\check{w}_{N,j}(t)$ respectively and summing for $j = 1, \cdots ,N$, and multiplying the third equation of \dref{202310131549} by $\xi_N(t)$, we get \begin{equation}\label{2023818940Ad93} \begin{array}{l} \dot{L}_N(t)= \disp -\int_{0}^{1} z _{Nx}^2(x,t)+ \check{w} _{Nx}^2(x,t)dx-c_1 z _N^2(1,t)-c_0\check{w}_N^2(1,t)+\frac{c_1}{\gamma}z_N (1,t)\check{w}_N (1,t). \end{array} \end{equation} By Cauchy's inequality, we get \begin{equation}\label{2023931658} \begin{array}{l} \dot{L}_N(t)\leq \disp -\int_{0}^{1} z _{Nx}^2(x,t)+\check{w}_{Nx}^2(x,t)dx -\left(c_1-\frac{c_1}{2\gamma}\right) z _N^2(1,t)- \left(c_0-\frac{c_1}{2\gamma}\right)\check{w}_N^2(1,t) \leq 0. \end{array} \end{equation} If we choose $\gamma$ large enough such that \begin{equation}\label{2023100161043} \gamma>\max\left\{\frac12,\frac{c_1}{2c_0}\right\}, \end{equation} then, for any $t\geq0$, there exists a positive constant $\alpha$ such that \begin{equation}\label{2023951539} \begin{array}{l} \disp \alpha \int_0^t\int_{0}^{1} z _{Nx}^2(x,s)+\check{w}_{Nx}^2(x,s)dx+ z _N^2(1,s)+ \check{w}_N^2(1,s)ds+ {L}_N(t) \leq L_N(0)< M_0, \end{array} \end{equation} where $M_0$ is a positive constant that is independent of $\gamma$ and $N$. Combining \dref{2023951539}, \dref{2023931622} and \dref{202310131547}, we can conclude that the solution of system \dref{2023818918Ad92} can be extended to $\tilde{w}_{N,j},\check{w}_{N,j}, \tilde{\zeta}_N\in C^1[0, \infty)$. It follows from \dref{2023931622} and \dref{2023951539} that \begin{equation}\label{202310131615} |\xi_N(t)|<\sqrt{\frac{2 M_{0}}{|b|}}\gamma,\ \ \forall\ t\geq0. \end{equation} Moreover, it follows from \dref{202310131549} that \begin{equation}\label{202310231510} \gamma(q+c_0)\left|\int_{0}^{t} z_N (1,s)\check{w}_N(1,s)ds\right| \leq |\xi_N (0)|+|\xi_N(t)|\leq \gamma^2|\tilde{\zeta} (0)|+ \sqrt{\frac{2 M_{0}}{|b|}}\gamma .
\end{equation} {\it (2), Weak convergence.} \dref{2023951539} implies that, for any given $T>0$, $\{ z _N \}$ and $\{\check{w}_N \}$ are bounded sequences in $L^{2}(0,T;H^1 (0,1))$. Therefore, there exist subsequences of $\{ z _N \}$ and $\{\check{w}_N \}$ which we still denote respectively by $\{ z _N \}$ and $\{\check{w}_N \}$ that satisfy \begin{equation} \label{2023951542} \left\{\begin{array}{l} \disp z _N \to z \ \ \mbox{weakly in}\ \ \ L^{2}(0, T; H ^1(0,1) ),\crr \disp \check{w}_N \to \check{w} \ \ \mbox{weakly in}\ \ \ L^{2}(0, T; H ^1(0,1) ), \end{array}\right.\mbox{as}\ \ N\to+\infty, \end{equation} where $ z (\cdot,t), \check{w}(\cdot,t) \in C(0,T;H ^1(0,1) )$. Using \dref{2023951539} again, the sequences $\{ z _N(1,\cdot)\}$ and $\{\check{w} _N(1,\cdot)\}$ are bounded in $L^{2}(0,T)$. By \dref{2023951542}, there exist subsequences of $\{ z _N(1,\cdot)\}$ and $\{\check{w} _N(1,\cdot)\}$ which we still denote respectively by $\{ z _N(1,\cdot)\}$ and $\{\check{w} _N(1,\cdot)\}$ that satisfy \begin{equation} \label{2023951542check} \left\{\begin{array}{l} \disp z _N(1,\cdot)\to z (1,\cdot) \ \ \mbox{weakly in}\ \ \ L^{2}( 0, T ),\crr \disp \check{w}_N(1,\cdot)\to \check{w}(1,\cdot) \ \ \mbox{weakly in}\ \ \ L^{2}( 0, T ), \end{array}\right.\mbox{as}\ \ N\to+\infty. \end{equation} As a consequence of \dref{202310131615}, there exists a subsequence of $\{\xi_N\}$ which we still denote by $\{\xi_N\}$ that satisfies \begin{equation}\label{202310231458} \xi_N(t)\to \xi(t) \ \ \mbox{as}\ \ N\to+\infty,\ \ \forall\ t\geq0. \end{equation} {\it (3), Strong convergence.} Let $( z _N(\cdot,t),\xi _{N}(t)) , ( z _L(\cdot,t),\xi _{L}(t))$ be two approximate solutions, and without loss of generality, we assume $N > L$. Set $$\left\{ \begin{array}{l} \disp z _{NL}(\cdot,t) = z _N(\cdot,t)- z _L(\cdot,t)= \gamma\sum_{j=1}^N[ \tilde{w} _{N,j}(t)- \tilde{w} _{L,j}(t)]\phi_j,\crr \disp \xi _{NL}(t)= \xi _{N}(t)-\xi _{L}(t), \end{array}\right. 
$$ with the understanding that $ \tilde{w} _{L,j}(t)\equiv 0$ when $j>L$. Then $ z _{NL}(\cdot,t)$, $\check{w}_{NL}(\cdot,t)$ and $\xi _{NL}(t)$ satisfy \begin{equation} \label{20231013856} \left\{ \begin{array}{l} \disp \langle z _{NLt} (\cdot,t),\phi_j\rangle_{\H} + \langle z _{NLx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ \Gamma_{\gamma}(t) - c_1 z _{NL}(1,t) \right] ,\crr \disp \langle \check{w}_{NLt} (\cdot,t),\phi_j\rangle_{\H} + \langle \check{w}_{NLx}(\cdot,t) , \phi_{jx}\rangle_{\H} \disp = \phi_j(1) \left[ \frac{c_1}{\gamma} z _{NL}(1,t) -c_0\check{w}_{NL}(1,t)\right] ,\crr \dot{ \xi }_{NL}(t)= -\mbox{sgn}(b)\gamma(q+c_0) \left[ z _N (1,t) \check{w}_N(1,t)- z _L (1,t) \check{w}_L(1,t)\right],\crr \disp z _{NL}(\cdot,0)=\gamma\mathcal{P}_N \tilde{w} (\cdot,0)-\gamma\mathcal{P}_L \tilde{w} (\cdot,0), \ \check{w}_{NL}(\cdot,0)= \mathcal{P}_N \check{w} (\cdot,0)- \mathcal{P}_L \check{w} (\cdot,0),\crr \xi _{NL}(0)=0,\ j=1,2\cdots ,N, \end{array}\right. \end{equation} where \begin{equation*}\label{20231013903} \begin{array}{l} \Gamma_{\gamma}(t)= \disp \frac{ b (q+c_0)}{\gamma} \left[ {\xi}_{NL}(t) \check{w}_N(1,t) +{\xi}_L(t) \check{w}_{NL}(1,t)\right]. \end{array} \end{equation*} Let \begin{equation}\label{2023931622NL1012} L_{NL}(t)= \frac{\varepsilon}2\int_{0}^{1} z _{NL}^2(x,t)dx +\frac{|b|}{2} \xi _{NL}^2(t) + \frac{1} 2\int_{0}^{1}\check{w}_{NL}^2(x,t)dx, \end{equation} where $\varepsilon $ is a positive constant that is small enough. 
Multiplying the first two equations of \dref{20231013856} by $ \varepsilon\gamma[ \tilde{w} _{N,j}(t)- \tilde{w} _{L,j}(t)]$ and $\check{w}_{N,j}(t)-\check{w}_{L,j}(t)$, respectively, and summing for $j = 1, \cdots ,N$, and multiplying the third equation in \dref{20231013856} by $ \xi _{NL}(t)$, we get \begin{equation}\label{20231092217} \begin{array}{l} \disp \dot{L}_{NL}(t) \leq\disp - \varepsilon \| z _{NLx}(\cdot,t)\|_{\H}^2- \|\check{w}_{NLx}(\cdot,t)\|_{\H}^2 \disp -c_0 \check{w}_{NL}^2(1,t)-c_1 \varepsilon z _{NL}^2(1,t)\crr \disp+\left[\frac{c_1}{\gamma}+ \varepsilon (q+c_0)\sqrt{ {2|b| M_{0}} }\right] |\check{w}_{NL}(1,t) z _{NL}(1,t)| + [I_{1}(t)+I_{2}(t)]|\xi _{NL}(t)| \end{array} \end{equation} where \begin{equation}\label{202310231520} \left\{ \begin{array}{l} \disp I_{1}(t)= |b|\gamma(q+c_0)|z_N(1,t)\check{w}_N(1,t)-z_L(1,t)\check{w}_L(1,t)| ,\crr \disp I_{2}(t)=\frac{\varepsilon |b|(q+c_0)}{\gamma}|\check{w}_N(1,t)z_{NL}(1,t)| . \end{array}\right. \end{equation} For any $T>0$ and any $ 0<t\leq T$, $\xi_{NL}$ is continuous on $[0,t]$. Hence, there exists a $t_*\in [0,t]$ such that \begin{equation}\label{202310231538} |\xi_{NL}(t_*)|=\max_{s\in [0,t]}|\xi_{NL}(s)|. \end{equation} It then follows from \dref{202310231510} that there exists a positive constant $M_1$ that is independent of $N,L$ and $t$ such that \begin{equation}\label{202310231529} \int_{0}^{t} I_{1}(s)|\xi_{NL}(s)|ds\leq |\xi_{NL}(t_*)|\int_{0}^{t}I_1(s)ds\leq M_1 |\xi_{NL}(t_*)|. \end{equation} Combining \dref{202310231538}, \dref{2023951539} and \dref{202310231520}, there exists a positive constant $M_2$ that is independent of $N,L$ and $t$ such that \begin{equation}\label{202310231550} \begin{array}{l} \disp \int_{0}^{t} I_{2}(s)|\xi_{NL}(s)|ds\leq |\xi_{NL}(t_*)|\int_{0}^{t}I_2(s)ds\crr \disp \leq \frac{\varepsilon|b|(q+c_0)}{2\gamma}|\xi_{NL}(t_*)|\int_{0}^{t}\check{w}_N^2(1,s)+z_{NL}^2(1,s) ds \leq M_2|\xi_{NL}(t_*)|.
\end{array} \end{equation} By Young's inequality, for any $ \delta>0 $, we have \begin{equation}\label{202310142306} |\check{w}_{NL}(1,t) z _{NL}(1,t)|\leq \frac{1}{2\delta}\check{w}_{NL}^2(1,t)+ \frac{\delta}{2} z^2_{NL}(1,t) . \end{equation} We choose \begin{equation}\label{202310142308} 0<\delta<\frac{\sqrt{2}c_1}{(q+c_0)\sqrt{ |b| M_{0} }},\ \ 0<\varepsilon<\frac{\sqrt{2}c_0\delta}{ (q+c_0)\sqrt{|b| M_{0}} } \end{equation} and choose $\gamma$ large enough such that \begin{equation}\label{202310142315} 0<\frac{c_1}{2\gamma }<\min \left\{ c_0 -\frac{\varepsilon(q+c_0)\sqrt{ |b| M_{0} }}{\sqrt{2}\delta}, c_1\varepsilon-\frac{\delta\varepsilon(q+c_0)\sqrt{ |b| M_{0}} }{\sqrt{2}} \right\}. \end{equation} Substituting the inequalities \dref{202310231529}, \dref{202310231550}, \dref{202310142306}, \dref{202310142308} and \dref{202310142315} into \dref{20231092217}, we get \begin{equation}\label{20231013939} {L}_{NL}(t)+\int_{0}^{t}\varepsilon\| z _{NLx}(\cdot,s)\|_{\H}^2+ \|\check{w}_{NLx}(\cdot,s)\|_{\H}^2ds\leq {L}_{NL}(0)+ (M_1+M_2) |\xi _{NL}(t_*)| \end{equation} for any $t\in [0,T]$. Since $\xi_{NL}(t_*)\to 0$ as $N,L\to+\infty$, it follows from \dref{20189101501Ad92}, \dref{202310131547} and \dref{20231013939} that \begin{equation}\label{20231013944} {L}_{NL}(t) +\int_{0}^{t}\varepsilon\| z _{NLx}(\cdot,s)\|_{\H}^2+ \|\check{w}_{NLx}(\cdot,s)\|_{\H}^2ds\to 0\ \ \mbox{as}\ \ N,L\to+\infty,\ \ \forall\ t\in[0,T], \end{equation} which, together with \dref{202310131547} and \dref{2023931622NL1012}, implies that $\{\tilde{w}_N(\cdot,t) \}$ and $\{\check{w}_N(\cdot,t) \}$ are two Cauchy sequences in $ \H$; $\{\tilde{w}_N \}$ and $\{\check{w}_N \}$ are two Cauchy sequences in $ L^2(0,t;H^1(0,1))$. 
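The parameter choices \dref{202310142308} and \dref{202310142315} are consistent: any $\delta$ in the stated range makes the second entry of the minimum positive, any admissible $\varepsilon$ makes the first entry positive, and a finite $\gamma$ then exists. A quick numerical check of this bookkeeping with sample constants (illustrative values only, not tied to the paper):

```python
import math

# sample constants; any positive values may be substituted
b, q, c0, c1, M0 = 2.0, 1.0, 1.0, 3.0, 5.0
K = (q + c0) * math.sqrt(abs(b) * M0)        # recurring factor (q+c0)*sqrt(|b|*M0)

delta = 0.9 * math.sqrt(2) * c1 / K          # inside (0, sqrt(2)*c1/K)
eps = 0.9 * math.sqrt(2) * c0 * delta / K    # inside (0, sqrt(2)*c0*delta/K)

term1 = c0 - eps * K / (math.sqrt(2) * delta)      # first entry of the min
term2 = c1 * eps - delta * eps * K / math.sqrt(2)  # second entry of the min
bound = min(term1, term2)
gamma = 1.1 * c1 / (2 * bound)               # any gamma with c1/(2*gamma) < bound
ok = term1 > 0 and term2 > 0 and 0 < c1 / (2 * gamma) < bound
print(ok)  # True
```

Indeed, with these choices $\varepsilon K/(\sqrt2\,\delta)=0.9\,c_0$ and $\delta K/\sqrt2=0.9\,c_1$, so both entries of the minimum equal ten percent of $c_0$ and $c_1\varepsilon$ respectively.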
Since $z_{NL}(0,t)=\check{w}_{NL}(0,t)=0$, it follows from the Sobolev trace embedding that \begin{equation}\label{20231014901} z_{NL}^2(1,t)\leq \int _0^1z _{NLx}^2(x,t)dx,\ \ \ \check{w}_{NL}^2(1,t)\leq \int_0^1\check{w}_{NLx}^2(x,t)dx, \end{equation} which, together with \dref{202310131547} and \dref{20231013944}, implies that $\{\tilde{w}_N(1,t) \}$ and $\{\check{w}_N(1,t) \}$ are two Cauchy sequences in $L^2(0,t)$ for any $t\in [0,T]$. In particular, we have \begin{equation}\label{20239201457} \int_0^t \tilde{w}_N (1,s) \check{w}_N(1,s)ds \to \int_0^t \tilde{w} (1,s) \check{w} (1,s)ds \ \ \mbox{as}\ \ N\to+\infty. \end{equation} Moreover, \dref{202310131615} and \dref{202310231458} yield \begin{equation}\label{20239201610} \lim_{N\to+\infty }\int_0^t\xi _N(s) \check{w}_N(1,s)ds= \int_0^t\xi (s) \check{w} (1,s) ds. \end{equation} {\it (4), Passage to the limit.} For any $0<t<T$, by integrating \dref{2023818918Ad92} from $0$ to $t$, we obtain \begin{equation} \label{20239201614} \left\{ \begin{array}{l} \disp \langle \tilde{w}_N (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \tilde{w}_{Nx}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \tilde{w}_N (\cdot,0),\phi_j\rangle_{\H}\crr \disp + \phi_j(1) \int_0^t \left[ b (q+c_0) \tilde{\zeta}_N (\tau)\check{w}_N(1,\tau) - c_1 \tilde{w}_N (1,\tau) \right]d\tau ,\crr \tilde{\zeta}_N (t)=\disp \tilde{\zeta} (0) -\mbox{sgn}(b)(q+c_0) \int_0^t \tilde{w} _N (1,\tau)\check{w}_N(1,\tau)d\tau,\crr \disp \langle \check{w}_N (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \check{w}_{Nx}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \check{w}_N (\cdot,0),\phi_j\rangle_{\H}\crr \disp +\phi_j(1) \int_0^t \left[ c_1 \tilde{w}_N (1,\tau) -c_0\check{w}_N (1,\tau)\right] d\tau. \end{array}\right.
\end{equation} In view of \dref{2023951542}, \dref{2023951542check}, \dref{202310231458}, \dref{20239201457}, \dref{20239201610}, \dref{20189101501Ad92} and \dref{202310131547}, we pass to the limit as $N\to+\infty$ in \dref{20239201614} to get \begin{equation} \label{20239161659} \left\{ \begin{array}{l} \disp \langle \tilde{w} (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \tilde{w}_{x}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \tilde{w} (\cdot,0),\phi_j\rangle_{\H}\crr \disp + \phi_j(1) \int_0^t \left[ b (q+c_0) \tilde{\zeta} (\tau)\check{w}(1,\tau) -c_1 \tilde{w} (1,\tau) \right]d\tau ,\crr \tilde{\zeta} (t)=\disp \tilde{\zeta} (0) -\mbox{sgn}(b)(q+c_0) \int_0^t \tilde{w} (1,\tau)\check{w}(1,\tau)d\tau,\crr \disp \langle \check{w} (\cdot,t),\phi_j\rangle_{\H} + \int_0^t\langle \check{w}_{x}(\cdot,\tau) , \phi_{jx}\rangle_{\H}d\tau \disp =\langle \check{w} (\cdot,0),\phi_j\rangle_{\H}\crr \disp +\phi_j(1) \int_0^t \left[ c_1 \tilde{w} (1,\tau) -c_0\check{w} (1,\tau)\right] d\tau, \end{array}\right. \ j=1,2,\cdots , \end{equation} which implies that $ (\tilde{w},\tilde{\zeta},\check{w} ) $ is a weak solution of the system \dref{20238292127}. By the weak convergence \dref{2023951542} and \dref{2023951542check}, the estimate \dref{2023951709} follows from \dref{2023951539}, \dref{202310131547} and the weak lower semicontinuity of the norm $\|\cdot\|_{L^2(0,\infty)}$ \cite[p.6, Theorem 1.1.1]{EvansweakConv}. \end{proof} \begin{lemma}\label{Lm2023951644} Suppose that $b\neq 0$, $c_0 >0$ and $c_1>0$. Then for any initial state $(\tilde{w}(\cdot,0),\check{w}(\cdot,0),\tilde{\zeta}(0))\in \H^2\times\R$, system \dref{20238292127} admits a weak solution $(\tilde{w}(\cdot,t),\check{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023951645} |\tilde{\zeta}(t)- {\zeta}_* |+\|(\tilde{w}(\cdot,t),\check{w}(\cdot,t))\|_{\H^2} \to 0 \ \ \mbox{as}\ \ t\to+\infty, \end{equation} where $ {\zeta}_*$ is a constant that may not be $0$.
\end{lemma} \begin{proof} By Lemma \ref{Lm2023828938Ad92}, system \dref{20238292127} admits a weak solution $(\tilde{w}(\cdot,t),\check{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \dref{2023951709} holds. If we let $u_0(t)=-(q + c_0 )\check{w}(1,t)$, then $u_0\in L^2(0,\infty)$. Since the $\tilde{w},\tilde{\zeta}$-subsystem of \dref{20238292127} happens to be system \dref{20237122040frac1b}, Lemma \ref{Lm2023811150} yields a constant $\zeta_*$, which may not be zero, such that \begin{equation}\label{2023951705} \|\tilde{w}(\cdot,t)\|_{\H} +|\tilde{\zeta}(t)-\zeta_*|\to 0 \ \ \mbox{as}\ \ t\to+\infty. \end{equation} Moreover, $\tilde{w}(0,\cdot),\tilde{w}(1,\cdot)\in L^2(0,\infty) $. Since the $\check{w}$-subsystem of \dref{20238292127} is a well-known exponentially stable heat equation with inhomogeneous terms $q^2e^{qx}\tilde{w}(0,t)$, $-q\tilde{w}(0,t)$ and $c_1\tilde{w}(1,t)$, we can deduce from \cite[Lemma 3.1]{FengAnnual} that $\|\check{w}(\cdot,t)\|_{\H} \to 0$ as $t\to+\infty$. \end{proof} {\it Proof of Theorem \ref{th20237141510}.} By Lemma \ref{Lm2023951644}, the transformed system \dref{20238292127} admits a weak solution $(\tilde{w}(\cdot,t),\check{w}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \dref{2023951645} holds. In order to relate the transformed system \dref{20238292127} to the original system \dref{2023720848}, we define the transformation $ \Pi : \H\to \H $ by \begin{equation}\label{2023813837} ( \Pi f)(x)=f(x)+q\int_0^xe^{q(x-s)}f(s)ds,\ \ \forall\ f\in \H . \end{equation} Then, $ \Pi \in \mathcal{L}(\H )$ is invertible; more specifically, \begin{equation}\label{2023813841} ( \Pi ^{-1}g)(x)=g(x)-q\int_0^xg(s)ds,\ \ \forall\ g\in \H . \end{equation} In terms of the constant $b$, we define the transformation $ \Upsilon_b : \R\to \R$ by \begin{equation}\label{2023951821} \Upsilon_b (s) = \frac1b-s,\ \ \forall\ s\in \R. \end{equation} Clearly, $ \Upsilon_b ^{-1} =\Upsilon_b$.
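That \dref{2023813841} indeed inverts \dref{2023813837} can be verified by interchanging the order of integration in $\Pi^{-1}\Pi f$, or checked numerically by quadrature. A small sketch (Python, with an arbitrary sample value of $q$ and an arbitrary test function; purely illustrative):

```python
import math

q = 1.5  # sample value

def Pi(f, x, n=300):
    """(Pi f)(x) = f(x) + q * int_0^x e^{q(x-s)} f(s) ds, by the midpoint rule."""
    if x == 0.0:
        return f(0.0)
    h = x / n
    return f(x) + q * h * sum(math.exp(q * (x - (k + 0.5) * h)) * f((k + 0.5) * h)
                              for k in range(n))

def Pi_inv(g, x, n=300):
    """(Pi^{-1} g)(x) = g(x) - q * int_0^x g(s) ds, by the midpoint rule."""
    if x == 0.0:
        return g(0.0)
    h = x / n
    return g(x) - q * h * sum(g((k + 0.5) * h) for k in range(n))

f = lambda x: math.sin(2.0 * x) + 0.5   # arbitrary test function
g = lambda x: Pi(f, x)                  # g = Pi f
err = max(abs(Pi_inv(g, x) - f(x)) for x in [0.0, 0.25, 0.5, 0.75, 1.0])
print(err < 1e-3)  # True: Pi_inv recovers f up to quadrature error
```

The involution $\Upsilon_b^{-1}=\Upsilon_b$ is immediate, since $\frac1b-(\frac1b-s)=s$.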
Let \begin{equation}\label{2023951826} \begin{pmatrix} w(\cdot,t) \\ \hat{w}(\cdot,t) \\ \zeta(t) \end{pmatrix} =\begin{pmatrix} 1&-1&0 \\ 0&\Pi&0 \\ 0&0&\Upsilon_b \end{pmatrix}^{-1}\begin{pmatrix} \tilde{w}(\cdot,t) \\ \check{w}(\cdot,t) \\ \tilde{\zeta}(t) \end{pmatrix} =\begin{pmatrix} 1&\Pi^{-1}&0 \\ 0&\Pi^{-1}&0 \\ 0&0&\Upsilon_b \end{pmatrix} \begin{pmatrix} \tilde{w}(\cdot,t) \\ \check{w}(\cdot,t) \\ \tilde{\zeta}(t) \end{pmatrix}. \end{equation} A straightforward computation shows that the so-defined $( {w}(\cdot,t),\hat{w}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H^2\times\R)$ is a weak solution of system \dref{2023720848}. Moreover, we can obtain the convergence \dref{2023720844w} directly by combining \dref{2023951826} and \dref{2023951645}. \hfill$\Box$ \section{Proof of Theorem \ref{th2023829953}}\label{PfTh3} We first consider the following transformed system: \begin{equation}\label{2023952151} \left\{\begin{array}{l} \tilde{z}_{t}(x,t)=\tilde{z} _{xx}(x,t) , \crr \tilde{z} _{x} (0,t)= 0 ,\ \disp\tilde{z} _{x}(1,t)=- b \tilde{\zeta}(t) u_0(t)-c_1\tilde{z} (1,t), \crr \disp\dot{\tilde{\zeta}}(t)= \mbox{sgn}(b) \tilde{z} (1,t)\left[d(t)- (q+c_0)\check{z}(1,t) \right] ,\ \ d\in L^{\infty}(0,\infty),\crr \check{z}_{t}(x,t)=\check{z}_{xx}(x,t) ,\crr \check{z}_{x} (0,t)=-q\tilde{z}(0,t),\ \disp \check{z}_{x}(1,t)= - c_0 \check{z}(1,t) +c_1 \tilde{z}(1,t) . \end{array}\right. \end{equation} \begin{lemma}\label{Lm2023952250} Let $b\neq 0$, $c_0 >0$, $c_1>0$ and $d\in L^{\infty}(0,\infty)$.
Then for any initial state $(\tilde{z}(\cdot,0),\check{z}(\cdot,0),\tilde{\zeta}(0))\in \H^2\times\R$, system \dref{2023952151} admits a weak solution $(\tilde{z}(\cdot,t),\check{z}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023961016} \int_0^{\infty}\left[\check{z}^2(0,t) + \tilde{z}^2(0,t) + \tilde{z}^2(1,t)+\check{z}^2(1,t) \right] dt<+\infty, \end{equation} and \begin{equation}\label{2023952251} \sup_{t\geq0} \left[ \|(\tilde{z}(\cdot,t),\check{z}(\cdot,t))\|_{\H^2}+|\tilde{\zeta}(t) |\right]< +\infty. \end{equation} If we assume further that there exists a time $\tau>0$ such that \begin{equation}\label{202381809*d} \lim_{t\to+\infty}\int_{t}^{t+\tau} d (s) ds \neq0, \end{equation} then \begin{equation}\label{2023952252} \tilde{\zeta}(t) \to 0 \ \ \mbox{as}\ \ t\to+\infty. \end{equation} \end{lemma} \begin{proof} By the Galerkin approximation, for any initial state $(\tilde{z}(\cdot,0),\check{z}(\cdot,0),\tilde{\zeta}(0))\in \H^2\times\R$, system \dref{2023952151} admits a weak solution $(\tilde{z}(\cdot,t),\check{z}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \begin{equation}\label{2023951709zz} \int_{0}^{\infty} \int_{0}^{1}\tilde{z}_{x}^2(x,t)+\check{z}_{x}^2(x,t)dx+\tilde{z}^2(1,t)+\check{z}^2(1,t)dt<+\infty, \end{equation} which leads to \dref{2023961016} directly by the Sobolev trace embedding. Since the proof is almost the same as that of Lemma \ref{Lm2023828938Ad92}, we omit it for brevity. We only consider the stability. Let $u_0(t)=d(t)- (q+c_0)\check{z}(1,t)$. By \dref{2023951709zz}, we have $\check{z}(1,\cdot)\in L^2(0,\infty)$, while $d\in L^{\infty}(0,\infty)$ holds by assumption. By Lemma \ref{Lm2023811150}, it follows that $\tilde{z}(0,\cdot)\in L^2(0,\infty)$ and \begin{equation}\label{2023811157zz} \|\tilde{z}(\cdot,t)\|_{\H} +|\tilde{\zeta} (t)- \zeta_*|\to 0 \ \ \mbox{as}\ \ t\to+\infty, \end{equation} where $\zeta_*$ is a constant that may not be zero.
Since the $\check{z}$-subsystem of \dref{2023952151} is an exponentially stable heat equation with inhomogeneous terms $-q\tilde{z}(0,t)$ and $c_1\tilde{z}(1,t)$, we can conclude from \cite[Lemma 3.1]{FengAnnual} that $\|\check{z}(\cdot,t)\|_{\H} \to 0$ as $t\to+\infty$. As a result, the boundedness \dref{2023952251} follows immediately from \dref{2023811157zz}. By \dref{2023951709zz}, it follows that \begin{equation}\label{20239211456} \lim_{t\to+\infty}\int_{t}^{t+\tau} \check{z}^2(1,s) ds=0 ,\ \ \forall\ \tau>0. \end{equation} H\"{o}lder's inequality yields \begin{equation}\label{20239211511} \int_{t}^{t+\tau} |\check{z} (1,s)| ds \leq \sqrt{\tau} \left(\int_{t}^{t+\tau} \check{z}^2(1,s) ds\right)^{1/2} ,\ \ \forall\ \tau>0. \end{equation} Consequently, it follows from \dref{20239211456}, \dref{20239211511} and the assumption \dref{202381809*d} that \begin{equation}\label{202396802} \lim_{t\to+\infty}\int_{t}^{t+\tau} u_0 (s) ds=\lim_{t\to+\infty}\int_{t}^{t+\tau} d (s) ds \neq0. \end{equation} Since the $\tilde{z},\tilde{\zeta}$-subsystem of \dref{2023952151} is equivalent to system \dref{20237122040frac1b}, we can employ Lemma \ref{Lm2023811150} to get $\zeta_*=0$. So \dref{2023952252} holds. \end{proof} {\it Proof of Theorem \ref{th2023829953}.} Let $d(t)=v_x(1,t)$, where $v_x(1,t)$ is given by \dref{2023828927}. Owing to the assumption \dref{2023829957}, $\|d\|_{L^{\infty}(0,\infty)}<+\infty$ and \begin{equation}\label{2023961052} \sup_{t\geq 0}\|v(\cdot,t)\|_{\H}<+\infty, \end{equation} where $v$ is given by \dref{2023829858}. By Lemma \ref{Lm2023952250}, the transformed system \dref{2023952151} admits a weak solution $(\tilde{z}(\cdot,t),\check{z}(\cdot,t),\tilde{\zeta}(t))\in C(0,\infty;\H^2\times\R)$ such that \dref{2023961016} and \dref{2023952251} hold.
In terms of the operators $\Pi$ and $\Upsilon_b$ that are given by \dref{2023813837} and \dref{2023951821} respectively, we define \begin{equation}\label{2023952235} \begin{pmatrix} z(\cdot,t) \\ \hat{z}(\cdot,t) \\ \zeta(t) \end{pmatrix} =\begin{pmatrix} 1&\Pi^{-1}&0 \\ 0&\Pi^{-1}&0 \\ 0&0&\Upsilon_b \end{pmatrix} \begin{pmatrix} \tilde{z}(\cdot,t) \\ \check{z}(\cdot,t) \\ \tilde{\zeta}(t) \end{pmatrix}. \end{equation} A direct computation shows that the so-defined $({z}(\cdot,t),\hat{z}(\cdot,t),{\zeta}(t))\in C( 0,\infty ;\H^2\times\R)$ is governed by \begin{equation}\label{2023952148} \left\{\begin{array}{l} \disp z_t(x,t)=z_{xx}(x,t), \crr \disp z_x(0,t)= -qz(0,t) , \ z_x(1,t)=b\zeta(t)u_0(t)-v_x(1,t),\crr \hat{z}_{t}(x,t)=\hat{z}_{xx}(x,t) , \; x\in (0,1), \crr \hat{z}_{x} (0,t)=-q {z}(0,t),\ \ \hat{z}_{x}(1,t)= u_0(t)-v_x(1,t)+c_1 [z(1,t)-\hat{z}(1,t)] , \crr \dot{\zeta}(t)= -\mbox{sgn}(b)[z(1,t)-\hat{z}(1,t)]u_0(t) ,\crr u_0(t) \disp = \disp -(q+c_0)\left[ \hat{z}(1,t)+q \int_{0}^{1}e^{q(1-x)}\hat{z}(x,t)dx\right]+ v_x(1,t). \end{array}\right. \end{equation} Let \begin{equation}\label{202396916} w(x,t)=z(x,t)+v(x,t), \ \ x\in[0,1],\ t\geq0, \end{equation} where $v$ is given by \dref{2023829858}. By using \dref{2023829858}, \dref{20221131932} and \dref{2023952148}, a straightforward computation shows that $( w(\cdot,t),\hat{z}(\cdot,t), {\zeta}(t))\in C(0,\infty;\H^2\times\R)$ is a weak solution of the closed-loop system \dref{2023829945}. Noting that \begin{equation}\label{2023961037} w(0,t)-r(t)= w(0,t)-v(0,t)=z(0,t)=\hat{z}(0,t)+\tilde{z}(0,t)= \check{z}(0,t)+\tilde{z}(0,t), \end{equation} we see that \dref{2023829954} holds due to \dref{2023961016}. Furthermore, the boundedness \dref{2023829956} follows from \dref{202396916}, \dref{2023952235}, \dref{2023961052} and \dref{2023952251}. When the reference $r$ satisfies \dref{202396903}, $d(t)=v_x(1,t)$ satisfies \dref{202381809*d}. Hence, \dref{2023952252} holds due to Lemma \ref{Lm2023952250}.
By recalling \dref{2023951821} and \dref{2023951826}, it follows that $\zeta(t)=\Upsilon_b\tilde{\zeta}(t)= \frac1b- \tilde{\zeta}(t)$, which, together with \dref{2023952252}, leads to the convergence \dref{202396902}. \hfill$\Box$ \section{Numerical simulations}\label{NumSim} In this section, we present numerical simulations for systems \dref{2023720848} and \dref{2023829945constant} to validate the theoretical results. A finite difference scheme is adopted for the discretization, and the numerical results are obtained in Matlab. The time step and the space step are taken as $0.0001$ and $0.02$, respectively. The reference signal and the corresponding parameters are chosen as \begin{equation}\label{20181211918} \left.\begin{array}{l} \disp r^* =3,\ c_0=c_1=5,\ q=2,\ b=-10. \end{array}\right. \end{equation} In both cases, the initial state is chosen as \begin{equation}\label{2023962257} w(x,0)=qx-1, \ \hat{w}(x,0)=\hat{z}(x,0)=0, \ \zeta(0)=0. \end{equation} The solution of system \dref{2023720848} is plotted in Figure \ref{Fig1}, from which we see that the system state is stabilized effectively, that is, the proposed controller works well. Both the state observer and the control coefficient update law are convergent; however, $\zeta(t)$ does not converge to $\frac1b$. Hence, the numerical simulation is consistent with the theoretical results in Theorem \ref{th20237141510} and Remark \ref{Re2023831}. The tracking result of system \dref{2023829945constant} is plotted in Figure \ref{Fig2}, which shows that the performance output converges to the constant reference smoothly. Since the persistent excitation condition is satisfied, the estimate $\zeta(t)$ converges to $\frac1b$ as $t\to+\infty$. All the states of system \dref{2023829945constant} are bounded. \begin{figure}[!htb]\centering \subfigure[$w(x,t)$.] {\includegraphics[width=0.32\textwidth]{NoPEw}} \subfigure[ $\hat{w}(x,t)$.]
{\includegraphics[width=0.32\textwidth]{hatw}} \subfigure[Control coefficient estimation.] {\includegraphics[width=0.32\textwidth]{NoPEzeta}} \caption{Simulations for system \dref{2023720848}.}\label{Fig1} \end{figure} \begin{figure}[!htb]\centering \subfigure[$w(x,t)$.] {\includegraphics[width=0.48\textwidth]{w}} \subfigure[ $\hat{z}(x,t)$.] {\includegraphics[width=0.48\textwidth]{hatz}}\\ \subfigure[Control coefficient estimation.] {\includegraphics[width=0.48\textwidth]{zeta}} \subfigure[Output tracking.] {\includegraphics[width=0.48\textwidth]{tracking}} \caption{Simulations for system \dref{2023829945constant}.}\label{Fig2} \end{figure} \section{Conclusions}\label{Concluding} This paper presents a new adaptive control scheme for the unstable heat equation \dref{20237122035}. Since the control coefficient is unknown, the conventional partial differential equation backstepping cannot be used directly. We overcome this obstacle by dividing the controller into two factors: one compensates for the unknown control coefficient, while the other stabilizes the system. A state observer, which involves neither the unknown control coefficient nor its estimate, is designed to estimate the system state, while a new update law is proposed to estimate the reciprocal of the control coefficient. Importantly, the convergence of the state estimation does not rely on the convergence of the control coefficient estimation. This implies that a controller design based on the conventional separation principle is nontrivial, since an effective estimate of the unknown control coefficient is generally unavailable before the controller is designed. In summary, the difficulties caused by the unknown control coefficient are successfully addressed in stabilizing the unstable heat equation.
The successful controller design is mainly attributed to two innovative ideas: (1) we design the update law to estimate the reciprocal of $b$ rather than the control coefficient $b$ itself; (2) we design the controller for the observer rather than for the plant itself. Inspired by these ideas, the proposed approach can also be extended straightforwardly to stabilize other distributed parameter systems with an unknown control coefficient, such as the wave equation and the beam equation. \begin{thebibliography}{99} \bibitem{Deutscher2} J. Deutscher, A backstepping approach to the output regulation of boundary controlled parabolic PDEs, {\it Automatica}, 57(2015), 56-64. \bibitem{Evans} L.C. Evans, {\it Partial Differential Equations}, Graduate Studies in Mathematics, Vol. 19, American Mathematical Society, Providence, RI, 1997. \bibitem{EvansweakConv} L.C. Evans, {\it Weak Convergence Methods for Nonlinear Partial Differential Equations}, CBMS Reg. Conf. Ser. Math. 74, AMS, Providence, RI, 1990. \bibitem{Fengheat} H. Feng and B.Z. Guo, New unknown input observer and output feedback stabilization for uncertain heat equation, {\it Automatica}, 86(2017), 1-10. \bibitem{FengAnnual} H. Feng and B.Z. Guo, Active disturbance rejection control: New and old results, {\it Annual Reviews in Control}, 44(2017), 238-248. \bibitem{GuoZhaoRX2022} B.Z. Guo and R.X. Zhao, Output regulation for a heat equation with unknown exosystem, {\it Automatica}, 138(2022), 110159. \bibitem{Guomeng} B.Z. Guo and T. Meng, Robust error based non-collocated output tracking control for a heat equation, {\it Automatica}, 114(2020), 108818. \bibitem{GuoWAdaptiveAuto} W. Guo and F.F. Jin, Adaptive error feedback regulator design for 1D heat equation, {\it Automatica}, 113(2020), 108810. \bibitem{Hahn2012} D.W. Hahn and M.N. \"{O}zisik, {\it Heat Conduction}, John Wiley \& Sons, Inc., New Jersey, 2012. \bibitem{Backsteppingbook} M. Krstic and A.
Smyshlyaev, {\it Boundary Control of PDEs: A Course on Backstepping Designs}, Philadelphia, PA, USA: SIAM, 2008. \bibitem{Krs2} M. Krstic and A. Smyshlyaev, Adaptive boundary control for unstable parabolic PDEs-Part I: Lyapunov design, {\it IEEE Transactions on Automatic Control}, 53(2008), 1575-1591. \bibitem{105} B. Laroche, P. Martin and P. Rouchon, Motion planning for the heat equation, {\it International Journal of Robust and Nonlinear Control}, 10(2000), 629-643. \bibitem{Liuw} W. Liu, Boundary feedback stabilization of an unstable heat equation, {\it SIAM Journal on Control and Optimization}, 42(2003), 1033-1043. \bibitem{Natarajan2016TAC} V. Natarajan, D.S. Gilliam and G. Weiss, The state feedback regulator problem for regular linear systems, {\it IEEE Transactions on Automatic Control}, 59(2014), 2708-2723. \bibitem{PauLassi2016TAC} L. Paunonen, Controller design for robust output regulation of regular linear systems, {\it IEEE Transactions on Automatic Control}, 61(2016), 2974-2986. \bibitem{PauLassi2017SIAM} L. Paunonen, Robust controllers for regular linear systems with infinite-dimensional exosystems, {\it SIAM Journal on Control and Optimization}, 55(2017), 1567-1597. \bibitem{PE1987} N. Shimkin and A. Feuer, Persistency of excitation in continuous-time systems, {\it Systems $\&$ Control Letters}, 9(1987), 225-233. \bibitem{Andrey3} A. Smyshlyaev and M. Krstic, Adaptive boundary control for unstable parabolic PDEs-Part II: Estimation-based designs, {\it Automatica}, 43(2007), 1543-1556. \bibitem{Andrey4} A. Smyshlyaev and M. Krstic, Adaptive boundary control for unstable parabolic PDEs-Part III: Output feedback examples with swapping identifiers, {\it Automatica}, 43(2007), 1557-1564. \bibitem{backsteppingheat2004} A. Smyshlyaev and M. Krstic, Closed-form boundary state feedbacks for a class of 1-D partial integro-differential equations, {\it IEEE Transactions on Automatic Control}, 49(2004), 2185-2202. \bibitem{backsteppingheat2005} A. 
Smyshlyaev and M. Krstic, Backstepping observers for a class of parabolic PDEs, {\it Systems $\&$ Control Letters}, 54(2005), 613-625. \end{thebibliography} \section{Appendix} \begin{lemma}\label{Lm20231004} Let $\omega>0$. Suppose that the functions $\gamma_1,\gamma_2\in L^2(0, \infty)$ and $\gamma_3\in L^{\infty}(0,\infty)$. If the nonnegative function $\eta$ satisfies the differential inequality \begin{equation}\label{202310041509} \dot{\eta}(t)\leq -\omega\eta(t)+[\gamma_1(t)+\gamma_3(t)]\gamma_2(t) , \end{equation} then $\eta(t)\to 0$ as $t\to+\infty$. \end{lemma} \begin{proof} By Cauchy's inequality, \begin{equation*}\label{202310041359} |[\gamma_1(t)+\gamma_3(t)]\gamma_2(t)|\leq \frac{1}{2}\gamma_1^2(t)+\frac{1}{2}\gamma_2^2(t)+ |\gamma_2 (t)\gamma_3 (t)|. \end{equation*} If we let $g_1(t)= |\gamma_2 (t)\gamma_3 (t)|$, $g_2(t)=\frac{1}{2}\gamma_1^2(t)+\frac{1}{2}\gamma_2^2(t)$, then $g_1\in L^2(0,\infty)$ and $g_2\in L^1(0,\infty)$. Furthermore, \begin{equation}\label{202310011537} \disp \dot{\eta} (t)\leq -\omega\eta (t)+g_1(t)+g_2(t). \end{equation} Since $g_1\in L^2(0,\infty)$, for any $\sigma >0$, there exists a $t_1>0$ such that $\|g_1\|_{L^2(t_1,\infty )}<\sigma$. It follows from H\"{o}lder's inequality that \begin{equation}\label{20239202222} \int_{t_1}^te^{-\omega(t-s)} g_1 (s)ds \leq \left( \int_{t_1}^te^{-2\omega(t-s)}ds\right)^{1/2} \left( \int_{t_1}^tg_1^2 (s)ds \right)^{1/2}\leq \frac{\|g_1\|_{L^{2}(t_1,\infty )}}{\sqrt{2\omega}} < \frac{\sigma}{\sqrt{2\omega}}. \end{equation} Since $g_2\in L^1(0,\infty)$, for any $\sigma >0$, there exists a $t_2>0$ such that $\|g_2\|_{L^1(t_2,\infty)}<\sigma$, which means that \begin{equation}\label{APPP6} \int_{t_2}^te^{-\omega(t-s)} g_2 (s)ds \leq \int_{t_2}^t g_2 (s)ds \leq \|g_2\|_{L^{1}(t_2,\infty )}<\sigma.
\end{equation} Choosing $t_0=\max\{t_1,t_2\}$ and taking \dref{20239202222} and \dref{APPP6} into account, we solve \dref{202310011537} to get \begin{eqnarray}\label{APPP4} \begin{array}{ll} \eta ( t) &\disp \leq e^{-\omega t} \eta (0) +e^{-\omega(t-t_0)}\int_0^{t_0}e^{-\omega(t_0-s)} [ g_1 (s)+g_2(s)]ds + \int_{t_0}^te^{-\omega(t-s)} [ g_1 (s)+g_2(s)]ds\crr &\disp \leq e^{-\omega t} \eta (0) +e^{-\omega(t-t_0)}\int_0^{t_0}e^{-\omega(t_0-s)} [ g_1 (s)+g_2(s)]ds + \left(\frac{1}{\sqrt{2\omega}}+1\right)\sigma. \end{array} \end{eqnarray} Passing to the upper limit as $t \to+\infty$ in \dref{APPP4}, we obtain \begin{eqnarray}\label{APPP5B} \limsup_{t\to+\infty} \eta (t)\leq \left(\frac{1}{\sqrt{2\omega}}+1\right)\sigma, \end{eqnarray} which, since $\eta$ is nonnegative, leads to $\eta(t)\to 0$ as $t\to+\infty$ due to the arbitrariness of $\sigma$. \end{proof} \end{document}
2412.19100v1
http://arxiv.org/abs/2412.19100v1
Constrained stochastic linear quadratic control under regime switching with controlled jump size
\documentclass[12pt]{amsart} \usepackage[vmargin=2.6cm,hmargin=2.2cm]{geometry} \usepackage[colorlinks,citecolor=blue,urlcolor=blue,bookmarks=false,hypertexnames=true]{hyperref} \usepackage{enumerate} \usepackage{color} \usepackage{amsthm} \usepackage{graphicx} \usepackage{times} \usepackage{xcolor} \graphicspath{ {./images/} } \usepackage{tikz} \usepackage{multirow} \usepackage{caption} \usepackage{subcaption} \usepackage[square,numbers]{natbib} \usepackage{comment} \usepackage[title]{appendix} \usepackage{fancyhdr} \allowdisplaybreaks[3] \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{example}{Example}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{definition}{Definition}[section] \newtheorem{assumptions}{Assumptions}[section] \newcommand{\dd}{\operatorname{d}\! } \newcommand{\dt}{\operatorname{d}\! t} \newcommand{\de}{\operatorname{d}\! e} \newcommand{\ds}{\operatorname{d}\! s} \newcommand{\dr}{\operatorname{d}\! r} \newcommand{\du}{\operatorname{d}\! u} \newcommand{\dv}{\operatorname{d}\! v} \newcommand{\dx}{\operatorname{d}\! x} \newcommand{\dy}{\operatorname{d}\! y} \newcommand{\dz}{\operatorname{d}\! z} \newcommand{\ddp}{\operatorname{d}\! p} \newcommand{\db}{\operatorname{d}\! B} \newcommand{\dw}{\operatorname{d}\! 
W} \newcommand{\cH}{\ensuremath{\mathcal{H}}} \newcommand{\cM}{\ensuremath{\mathcal{M}}} \newcommand{\cZ}{\ensuremath{\mathcal{Z}}} \newcommand{\cE}{\ensuremath{\mathcal{E}}} \newcommand{\argmin}{\ensuremath{\operatorname*{argmin}}} \newcommand{\argmax}{\ensuremath{\operatorname*{Argmax}}} \newcommand{\essinf}{\ensuremath{\operatorname*{essinf}}} \newcommand{\esssup}{\ensuremath{\operatorname*{ess\;sup}}} \newcommand{\dualgamma}{\widehat{\Gamma}} \newcommand{\dualpi}{\widehat{\Pi}} \newcommand{\ep}{\varepsilon} \newcommand{\lnu}{\ensuremath{L^{2,\nu}}} \newcommand{\lpnu}{\ensuremath{L^{p,\nu}_{\mathcal{P}}}} \newcommand{\ltwonu}{\ensuremath{L^{2, \nu}_{\mathcal{P}}}} \newcommand{\linnu}{\ensuremath{L^{\infty,\nu}_{\mathcal{P}}}} \newcommand{\nn}{\nonumber} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\blue}[1]{{\color{blue}#1}} \newcommand{\green}[1]{{\color{green}#1}} \newcommand{\yellow}[1]{{\color{yellow}#1}} \newcommand{\Var}{{\rm Var}} \newcommand{\E}{\mathbb{E}} \newcommand{\R}{\mathbb{R}} \newcommand{\pf}{\noindent\textbf{Proof:} } \newcommand{\eof}{\hfill{$\Box$}} \newcommand{\revise}[1]{\textcolor{red}{#1}} \newcommand{\BMO}{L^{2, \;\mathrm{BMO}}_{\mathbb F^{W,N}}(0, T;\R^{n})} \newcommand{\dpt}{\dd\mathbb{P}\otimes \dt\textrm{-a.e.}} \newcommand{\dptv}{\dd\mathbb{P}\otimes \dt\otimes\dd\nu\textrm{-a.e.}} \newcommand{\jo}[1]{{\textcolor{blue}{#1}}} \DeclareMathOperator*{\supp}{supp} \title[LQ with controlled jump size]{Constrained stochastic linear quadratic control under regime switching with controlled jump size} \author[Shi]{Xiaomin Shi} \author[Xu]{Zuo Quan Xu} \date{\today} \keywords{} \address{X.~Shi: School of Statistics and Mathematics, Shandong University of Finance and Economics, Jinan 250100, China.} \email{{[email protected]}} \address{Z.Q.~Xu: Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.} \email{{[email protected]}} \begin{document} \setcitestyle{numbers} \maketitle \begin{abstract} 
In this paper, we examine a stochastic linear-quadratic control problem characterized by regime switching and Poisson jumps. All the coefficients in the problem are random processes adapted to the filtration generated by the Brownian motion and the Poisson random measure for each given regime. The model incorporates two distinct types of controls: the first is a conventional control that appears in the continuous diffusion component, while the second is an unconventional control, dependent on the variable $z$, which influences the jump size in the jump diffusion component. Both controls are constrained within general closed cones. By employing the Meyer-It\^o formula in conjunction with a generalized squares completion technique, we rigorously and explicitly derive the optimal value and optimal feedback control. These depend on solutions to certain multi-dimensional fully coupled stochastic Riccati equations, which are essentially backward stochastic differential equations with jumps (BSDEJs). We establish the existence of a unique nonnegative solution to the BSDEJs. Among the major tools used in the proof are the newly established comparison theorems for multidimensional BSDEJs. \end{abstract} \subsection*{Keywords:} linear-quadratic control; regime switching; controlled jump size; fully coupled stochastic Riccati equations; backward stochastic differential equations with jumps. \subsection*{Mathematics Subject Classification (2020):} 60H30, 60J28, 60J76, 93E20. \section{Introduction} Since the pioneering work of Wonham \cite{Wo}, stochastic linear-quadratic (LQ) theory has been extensively studied by numerous researchers. For instance, Bismut \cite{Bi} was the first to study stochastic LQ problems with random coefficients. In order to obtain an optimal random feedback control, he formally derived a stochastic Riccati equation (SRE), but he could not solve the SRE in the general case.
Kohlmann and Tang \cite{KT} were the first to establish the existence and uniqueness of solutions to the one-dimensional SRE. Tang \cite{Tang03, Tang15} made another breakthrough and proved the existence and uniqueness of solutions to the matrix-valued SRE with uniformly positive control weighting matrix using two different approaches. Sun, Xiong and Yong \cite{SXY} studied the indefinite stochastic LQ problem with random coefficients. Hu and Zhou \cite{HZ} solved the stochastic LQ problem with cone control constraints. Zhang, Dong and Meng \cite{ZDM} made great progress in solving stochastic LQ control problems and the related SRE with jumps, with uniformly positive definite control weight, by an inverse flow technique. Li, Wu and Yu \cite{LWY} considered the stochastic LQ problem with jumps in the indefinite case. Please refer to Chapter 6 of Yong and Zhou \cite{YZ} for a systematic account of this subject. Stochastic LQ problems for Markovian regime switching systems were studied in Wen, Li and Xiong \cite{WLX} and Zhang, Li and Xiong \cite{ZLX}, where weak closed-loop solvability, open-loop solvability and closed-loop solvability were established. But the coefficients in these papers are assumed to be \emph{deterministic} functions of time $t$ for each given regime $i$, so their SREs are indeed deterministic ordinary differential equations (ODEs). Hu, Shi and Xu \cite{HSX, HSX2} formulated cone-constrained stochastic LQ problems with regime switching on a finite time horizon and an infinite time horizon, respectively, in which the coefficients are \emph{stochastic} processes adapted to the filtration generated by the Brownian motion for each given regime $i$. Due to the randomness of the coefficients, the corresponding SREs in \cite{HSX, HSX2} are actually BSDEs. In this paper, we generalize the LQ problem in \cite{HSX} to a model in which the coefficients are \emph{stochastic} processes adapted to the filtration generated by the Brownian motion and the Poisson random measure for each given regime $i$.
In addition to a usual control $u_1$, we introduce a second control $u_2(z)$ depending on the jump size $z$. The motivations for incorporating the second control are twofold: in the insurance area, optimal reinsurance strategies may in general depend on the claim size, see, e.g., Liu and Ma \cite{LM} and Wu, Shen, Zhang and Ding \cite{WSZD}; and in controllability issues for stochastic systems with jump diffusions, a control depending on the jump size is necessary as a consequence of the martingale representation theorem for Poisson random measures, see, e.g., Goreac \cite{Goreac} and Song \cite{Song}. An application of this kind of stochastic LQ model to an optimal liquidation problem with dark pools can be found in our working paper \cite{FSX}. The first main contribution of this paper is to provide a purely analytic proof (using tools such as an approximation technique, a comparison theorem for multi-dimensional BSDEJs, a logarithmic transformation, etc.) of the existence of a unique solution to the corresponding system of SREs, which is a $2\ell$-dimensional coupled system of BSDEJs. This is interesting in its own right from the point of view of BSDE theory. Note that even though the SREs in \cite{HSX} are $2\ell$-dimensional, they are partially coupled, that is, the first $\ell$ equations for $\{P^i_1\}_{i\in\cM}$ and the second $\ell$ equations for $\{P^i_2\}_{i\in\cM}$ are totally decoupled. But in our new model, the equation for $P^i_1$ also depends on $P^i_2,\Gamma^i_2$, rendering the $2\ell$-dimensional SREs in our new model fully coupled. This more complicated phenomenon comes from the fact that the optimal state process may change its sign at the jump times of the underlying Poisson random measure. Compared with the $2$-dimensional SREs in Hu, Shi and Xu \cite{HSX4}, here we need to study $2\ell$-dimensional SREs because of the new coupling terms $\sum_j q^{ij}P^j_1$ and $\sum_j q^{ij}P^j_2$.
The second main contribution is to give a rigorous verification theorem for the optimal value and optimal control, using the unique solution to the corresponding system of SREs, Meyer-It\^o's formula, a generalized squares completion technique and some delicate analysis. The rest of this paper is organized as follows. In Section \ref{section:fm}, we formulate a constrained stochastic LQ control problem with regime switching, controlled jump size and random coefficients. Section \ref{section:Ri} is devoted to proving the existence of a unique nonnegative solution to the related $2\ell$-dimensional fully coupled SREs in the standard and singular cases. In Section \ref{section:Veri}, we solve the LQ problem by establishing a rigorous verification theorem. \section{Problem formulation}\label{section:fm} Let $(\Omega, \mathcal F, \mathbb{F}, \mathbb{P})$ be a fixed complete filtered probability space. The filtration $\mathbb{F}=\{\mathcal F_t, t\geq0\}$ is generated by the following three independent random sources augmented by all the $\mathbb{P}$-null sets. \medskip \begin{itemize} \item The first random source is a standard $n_1$-dimensional Brownian motion $W_t=(W_{1,t}, \ldots, W_{n_1,t})^{\top}$.\medskip \item The second one is an $n_2$-dimensional Poisson random measure $N=(N_1, \ldots,N_{n_2})^{\top}$ defined on $\R_+\times\cZ$, where $\mathcal{Z}\subset\R^{\ell}\setminus\{0\}$ is a nonempty Borel subset of some Euclidean space. For each $k=1,\ldots,n_2$, $N_k$ possesses the same stationary compensator (intensity measure) $\nu(\dz)\dt$ satisfying $\nu(\cZ)<\infty$. The compensated Poisson random measure is denoted by $\tilde N(\dt,\dz)$.\medskip \item The third one is a continuous-time stationary Markov chain $\alpha_t$ taking values in a finite state space $\mathcal M=\{1, 2, \ldots, \ell\}$ with $\ell\geq 1$. The Markov chain has a generator $Q=(q_{ij})_{\ell\times\ell}$ with $q_{ij}\geq0$ for $i\neq j$ and $\sum_{j=1}^{\ell}q_{ij}=0$ for every $i\in\cM$.
\end{itemize} Besides the filtration $\mathbb{F}$, we will often use the filtration $\mathbb{F}^{W,N}=\{\mathcal F^{W,N}_t, t\geq0\}$ which is generated by the Brownian motion $W$ and the Poisson random measure $N$ and augmented by all the $\mathbb{P}$-null sets. Throughout the paper, let $T$ denote a fixed positive constant, $\mathcal{P}$ (resp. $\mathcal{P}^{W,N}$) denote the $\mathbb{F}$ (resp. $\mathbb{F}^{W,N}$)-predictable $\sigma$-field on $\Omega\times[0,T]$, and $\mathcal{B}(\cZ)$ denote the Borel $\sigma$-field on $\cZ$. We denote by $\R^\ell$ the set of $\ell$-dimensional column vectors, by $\R^\ell_+$ the set of vectors in $\R^\ell$ whose components are nonnegative, by $\R^{\ell\times n}$ the set of $\ell\times n$ real matrices, by $\mathbb{S}^n$ the set of $n\times n$ symmetric real matrices, by $\mathbb{S}^n_+$ the set of $n\times n$ nonnegative definite real matrices, and by $\mathbf{1}_n$ the $n$-dimensional identity matrix. For any vector $Y$, we denote $Y_i$ as its $i$-th component. For any matrix $M=(m_{ij})$, we denote its transpose by $M^{\top}$, and its norm by $|M|=\sqrt{\sum_{ij}m_{ij}^2}$. If $M\in\mathbb{S}^n$ is positive definite (resp. positive semidefinite), we write $M>$ (resp. $\geq$) $0.$ We write $A>$ (resp. $\geq$) $B$ if $A, B\in\mathbb{S}^n$ and $A-B>$ (resp. $\geq$) $0.$ We write the positive and negative parts of $x\in\R$ as $x^+=\max\{x, 0\}$ and $x^-=\max\{-x, 0\}$ respectively. The elementary inequality $|a^{\top}b|\leq c|a|^2+\frac{|b|^2}{2c}$ for any $a,b\in\R^{n}$, $c>0$, will be used frequently without explicit mention. Throughout the paper, we use $c$ to denote a suitable positive constant, which is independent of $(t,\omega, i)$ and can be different from line to line.
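As a concrete illustration of the Markov chain introduced above (not part of the paper's analysis), a path of a chain with generator $Q$ can be simulated by drawing an exponential holding time with rate $-q_{ii}$ in the current state $i$ and then jumping according to the normalized off-diagonal rates. The two-state generator used below is purely hypothetical.

```python
import numpy as np

def simulate_ctmc(Q, i0, T, rng):
    """Simulate one path of a continuous-time Markov chain on [0, T] whose
    generator Q has q_ij >= 0 for i != j and rows summing to zero."""
    Q = np.asarray(Q, dtype=float)
    assert np.allclose(Q.sum(axis=1), 0.0), "generator rows must sum to zero"
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -Q[i, i]                      # total jump intensity out of state i
        if rate <= 0:                        # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)     # exponential holding time
        if t >= T:
            break
        probs = Q[i].clip(min=0.0) / rate    # jump distribution over states j != i
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return times, states

rng = np.random.default_rng(0)
Q = [[-1.0, 1.0], [2.0, -2.0]]               # hypothetical 2-state generator
times, states = simulate_ctmc(Q, 0, 10.0, rng)
```

Because the diagonal entry is clipped out of the jump distribution, the chain never "jumps" to its current state, matching the convention $q_{ij}\geq0$ for $i\neq j$ and $\sum_{j}q_{ij}=0$.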
\subsection*{Notation} We use the following notation throughout the paper: \begin{align*} L^{\infty}_{\mathcal{F}_T}(\Omega;\R)&=\Big\{\xi:\Omega\rightarrow \R\;\Big|\;\xi\mbox { is }\mathcal{F}_{T}\mbox{-measurable, and essentially bounded}\Big\}, \\ L^{2}_{\mathbb{F}}(0, T;\R)&=\Big\{\phi:[0, T]\times\Omega\rightarrow \R\;\Big|\;\phi\mbox{ is } \mathbb{F}\mbox{-predictable and }\E\int_{0}^{T}|\phi_t|^{2}\dt<\infty \Big\}, \\ L^{\infty}_{\mathbb{F}}(0, T;\R)&=\Big\{\phi:[0, T]\times\Omega \rightarrow\R\;\Big|\;\phi\mbox{ is }\mathbb{F}\mbox{-predictable and essentially bounded} \Big\},\\ L^{2,\nu}(\R)&=\Big\{\phi:\cZ\rightarrow\R\mbox{ is measurable with} \ \|\phi(\cdot)\|^2_{\nu}:=\int_{\cZ}\phi(z)^2\nu(\dz)<\infty\Big\},\\ L^{\infty,\nu}(\R)&=\Big\{\phi:\cZ\rightarrow\R\mbox{ is measurable and} \ \phi \mbox{ is bounded }\dd\nu \textrm{-a.e.}\Big\},\\ L^{2,\nu}_{\mathcal{P}}(0, T;\R)&=\Big\{\phi:[0, T]\times\Omega\times\cZ\rightarrow \R\;\Big|\;\phi\mbox{ is } \mathcal{P}\otimes\mathcal{B}(\cZ)\mbox{-measurable }\\ &\qquad\mbox{ \ \ \ \ and }\E\int_{0}^{T}\int_{\cZ}|\phi_t(z)|^{2}\nu(\dz)\dt<\infty \Big\},\\ L^{\infty}_{\mathcal{P}}(0, T;\R)&=\Big\{\phi:[0, T]\times\Omega\times\cZ \rightarrow\R\;\Big|\;\phi \mbox{ is }\mathcal{P}\otimes\mathcal{B}(\cZ)\mbox{-measurable and essentially bounded} \Big\},\\ S^{\infty}_{\mathbb{F}}(0,T;\R)&=\Big\{\phi:\Omega\times[0,T]\to \R \;\Big|\;\phi \mbox{ is c\`ad-l\`ag, $\mathbb{F}$-adapted and essentially bounded}\Big\}. \end{align*} These definitions are generalized in the obvious way to the cases that $\mathcal{F}$ is replaced by $\mathcal F^{W,N}$, $\mathbb{F}$ by $\mathbb{F}^{W,N}$, $\mathcal{P}$ by $\mathcal{P}^{W,N}$ and $\R$ by $\R^n$, $\R^{n\times m}$ or $\mathbb{S}^n$. In our argument, $t$, $\omega$, ``almost surely'' and ``almost everywhere'', will be suppressed for simplicity in many circumstances, when no confusion occurs. 
All the processes and maps considered in this paper, unless otherwise stated, are stochastic, so, for notational simplicity, we will not write their dependence on $\omega$ explicitly. Equations and inequalities shall be understood to hold true $\dptv$ For a random variable or stochastic process $X$, we write $X\gg1$ (resp. $X\ll1$) if there exists a constant $c>0$ such that $X\geq c$ (resp. $|X|\leq c$). Consider the following real-valued linear stochastic differential equation (SDE) with jumps: \begin{align} \label{state} \begin{cases} \dd X_t=\left[A_t^{\alpha_{t-}}X_{t-}+(B^{\alpha_{t-}}_{1,t})^{\top}u_{1,t}+\int_{\cZ}B^{\alpha_{t-}}_{2,t}(z)^{\top}u_{2,t}(z)\nu(\dz)\right]\dt\\ \qquad\qquad\qquad+\left[C^{\alpha_{t-}}_tX_{t-}+D^{\alpha_{t-}}_t u_{1,t}\right]^{\top}\dw_t\\ \qquad\qquad\qquad+\int_{\cZ}\left[E_t^{\alpha_{t-}}(z)X_{t-}+F_t^{\alpha_{t-}}(z)u_{2,t}(z)\right]^{\top}\tilde N(\dt,\dz), \quad t\in[0,T], \\ X_0=x,~~ \alpha_0=i_0, \end{cases} \end{align} where $A^{i}, \ B_1^{i}, \ C^{i}, \ D^{i}$ are all $\mathbb{F}^{W,N}$-predictable processes, and $B^{i}_{2}(\cdot), \ E^{i}(\cdot), \ F^{i}(\cdot)$ are $\mathcal{P}^{W,N}\otimes\mathcal{B}(\cZ)$-measurable processes of suitable dimensions, $(u_1,u_2)$ is the control and $x\in\R$, $i_0\in\cM$ are the known initial values. Let $\Pi_1$, $\Pi_2$ be two given closed cones (not necessarily convex) in $\R^{m_1}$ and $\R^{m_2}$, respectively. The class of admissible controls is defined as the set \begin{align*} \mathcal{U} &:=\Big\{(u_{1},u_{2}) \;\Big|\;u_{1}\in L^2_\mathbb{F}(0, T;\R^{m_1}),~u_{1,t}\in\Pi_1, ~\dpt, \\ &\qquad\qquad\qquad\ \mbox{and }\ u_{2}\in L^{2,\nu}_{\mathcal{P}}(0, T;\R^{m_2}),~ u_{2,t}\in\Pi_2, ~\dptv \Big\}.\end{align*} If $u\equiv (u_{1},u_{2})\in\mathcal{U}$, then the SDE \eqref{state} admits a unique strong solution $X$, and we refer to $(X, u)$ as an admissible pair.
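To build intuition for the controlled dynamics \eqref{state}, the following sketch simulates a scalar special case by an Euler scheme, freezing the regime and all coefficients to constants and taking the intensity measure to be $\nu=\lambda\delta_{z_0}$ (a single jump size), so that the integral over $\nu(\dz)$ reduces to multiplication by $\lambda$ and the compensated measure contributes a Poisson increment minus its mean $\lambda\dt$. All numerical values are hypothetical; this is an illustration, not part of the paper's analysis.

```python
import numpy as np

def euler_jump_sde(x0, T, n_steps, u1, u2, rng,
                   A=-0.5, B1=1.0, C=0.2, D=0.1,
                   B2=0.4, E=0.3, F=0.05, lam=2.0):
    """Euler scheme for a scalar case of the controlled jump SDE with
    nu = lam * delta_{z0}: drift A*x + B1*u1 + lam*B2*u2, diffusion
    (C*x + D*u1) dW, and compensated jumps (E*x + F*u2)(dN - lam dt)."""
    dt = T / n_steps
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        x = X[k]
        drift = A * x + B1 * u1 + lam * B2 * u2        # integral over nu(dz)
        dW = rng.normal(0.0, np.sqrt(dt))              # Brownian increment
        dN = rng.poisson(lam * dt)                     # Poisson increment
        X[k + 1] = (x + drift * dt
                    + (C * x + D * u1) * dW
                    + (E * x + F * u2) * (dN - lam * dt))  # compensated jump
    return X

rng = np.random.default_rng(1)
path = euler_jump_sde(x0=1.0, T=1.0, n_steps=1000, u1=0.0, u2=0.0, rng=rng)
```

With $x_0=0$ and $u_1=u_2=0$ the path stays identically zero, reflecting that the dynamics are linear in $(X,u_1,u_2)$.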
Let us now state our stochastic linear quadratic optimal control problem as follows: \begin{align} \begin{cases} \mbox{Minimize} &\ J(u;x,i_0 )\smallskip\\ \mbox{subject to} &\ u \in\mathcal{U}, \end{cases} \label{LQ}\end{align} where the cost functional $J$ is given as the following quadratic form \begin{align}\label{costfunc} J(u;x,i_0) &:=\E\Big\{ G^{\alpha_{T}}X_T^2 +\int_0^T\Big[u_{1,t}^{\top}R^{\alpha_{t}}_{1,t}u_{1,t} +Q^{\alpha_{t}}_tX_t^2+\int_{\cZ}u_{2,t}(z)^{\top}R^{\alpha_{t}}_{2,t}(z)u_{2,t}(z)\nu(\dz)\Big]\dt\Big\}. \end{align} The optimal value of the problem is defined as \begin{align*} V(x,i_0)=\inf_{u \in\mathcal{U}}~J( u;x,i_0). \end{align*} Problem \eqref{LQ} is said to be solvable, if there exists a control $u^*\in\mathcal{U}$ such that \begin{align*} -\infty<J( u^*;x,i_0)\leq J( u;x,i_0), \quad \forall\; u\in\mathcal{U}, \end{align*} in which case, $u^*$ is called an optimal control for problem \eqref{LQ} and one has \begin{align*} V(x,i_0)= J( u^*;x,i_0). \end{align*} \begin{remark} By choosing $\Pi_1=\{0\}$ (resp. $\Pi_2=\{0\}$), our model covers the case of $R_1^i=0$, $D^i=0$, $B^i_1=0$ (resp. $R_2^i=0$, $F^i=0$, $B^i_2=0$). In particular, our model covers the pure jump (i.e. $(B^i_1,C^i,D^i,R_1^i)=0$) and pure diffusion (i.e., $(B^i_2,E^i,F^i,R_2^i)=0$) models. \end{remark} Throughout this paper, we put the following assumption on the coefficients. 
\begin{assumption} \label{assu1} It holds, for every $i\in\cM$, that \begin{align*} \begin{cases} A^{i}\in L_{\mathbb{F}^{W,N}}^\infty(0, T;\R), \ B^{i}_1\in L_{\mathbb{F}^{W,N}}^\infty(0, T;\R^{m_1}), \ B^{i}_2\in L_{\mathcal{P}^{W,N}}^\infty(0,T;\R^{m_2}),\\ C^{i} \in L_{\mathbb{F}^{W,N}}^\infty(0, T;\R^{n_1}), \ D^{i}\in L_{\mathbb{F}^{W,N}}^\infty(0, T;\R^{n_1\times m_1}), \\ E^{i}\in L_{\mathcal{P}^{W,N}}^\infty(0,T;\R^{n_2}),\ F^{i}\in L_{\mathcal{P}^{W,N}}^\infty(0,T;\R^{n_2\times m_2}),\\ R^{i}_1\in L_{\mathbb{F}^{W,N}}^\infty(0, T;\mathbb{S}^{m_1}_+), \ R^{i}_2 \in L_{\mathcal{P}^{W,N}}^\infty(0,T;\mathbb{S}^{m_2}_+),\\ Q^{i}\in L_{\mathbb{F}^{W,N}}^\infty(0, T;\R_+), \ G^{i}\in L_{\mathcal{F}^{W,N}_T}^\infty(\Omega;\R_+). \end{cases} \end{align*} \end{assumption} Under Assumption \ref{assu1}, the cost functional \eqref{costfunc} is nonnegative; hence problem \eqref{LQ} is well-posed. \bigskip Besides Assumption \ref{assu1}, we need the following hypotheses: \begin{enumerate}[1.] \item \label{R11} $R^{i}_1\geq \delta \mathbf{1}_{m_1}$. \item \label{R12} $(D^{i})^{\top}D^i\geq \delta \mathbf{1}_{m_1}$. \item \label{R21} $R^{i}_2\geq \delta \mathbf{1}_{m_2}$. \item \label{R22} $(F^{i})^{\top}F^i\geq \delta \mathbf{1}_{m_2}$. \end{enumerate} We will consider the problem under one of the following two assumptions. \begin{assumption} [Standard case] \label{assu2} There exists a constant $\delta>0$ such that both hypotheses \ref{R11} and \ref{R21} hold. \end{assumption} \begin{assumption}[Singular case] \label{assu3} There exists a constant $\delta>0$ such that $G^i\geq\delta$ and one of the following holds: \begin{enumerate} \item[Case I.] Both hypotheses \ref{R12} and \ref{R21} hold; \item[Case II.] Both hypotheses \ref{R12} and \ref{R22} hold; \item[Case III.] Both hypotheses \ref{R11} and \ref{R22} hold.
\end{enumerate} \end{assumption} \section{Solvability of the Riccati equations}\label{section:Ri} For any $i\in\cM$ and $j=1,2$, denote by $E^i_k$ the $k$-th component of $E^i$, by $\Gamma^i_{jk}$ the $k$-th component of $\Gamma^i_j$, and by $F^i_k$ the $k$-th row of $F^i$, $k=1,...,n_2$. To solve problem \eqref{LQ}, we need to study the following $2\ell$-dimensional stochastic Riccati equation (SRE) with jumps: \begin{align} \label{P} \begin{cases} \dd P_{1,t}^i=-\Big[(2A^i+|C^i|^2)P^i_{1,t-}+2(C^i)^{\top}\Lambda^i_1+Q^i+H_{11}^{i,*}(P^i_1,\Lambda^i_1)\\ \qquad\qquad\qquad\qquad+\int_{\cZ}H_{12}^{i,*}(z,P^i_1,P^i_2,\Gamma^i_1, \Gamma^i_2) \nu(\dz)+\sum_{j=1}^{\ell}q^{ij}P_{1}^j\Big]\dt\\ \qquad\quad\;+(\Lambda^i_1)^{\top}\dw+\int_{\cZ}\Gamma^i_1(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ \dd P_{2,t}^i=-\Big[(2A^i+|C^i|^2)P^i_{2,t-}+2(C^i)^{\top}\Lambda^i_2+Q^i+H_{21}^{i,*}(P^i_2,\Lambda^i_2)\\ \qquad\qquad\qquad\qquad+\int_{\cZ}H_{22}^{i,*}(z,P^i_1,P^i_2,\Gamma^i_1, \Gamma^i_2) \nu(\dz)+\sum_{j=1}^{\ell}q^{ij}P_{2}^j\Big]\dt\\ \qquad\quad\;+(\Lambda^i_2)^{\top}\dw+\int_{\cZ}\Gamma^i_2(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ P^i_{1,T}=G^i, \ P^i_{2,T}=G^i, ~~R^i_{1,t}+P^i_{1,t}(D^i_t)^{\top}D^i_t> 0, \ R^i_{1,t}+P^i_{2,t}(D^i_t)^{\top}D^i_t> 0, ~~i\in\cM,\\ \end{cases} \end{align} where, for any $(z,P_1,P_2,\Lambda,\Gamma_1,\Gamma_2)\in\cZ\times\R_+\times\R_+\times\R^{n_1}\times \R^{n_2} \times \R^{n_2}$, \begin{align*} H^i_{11}(v,P_1,\Lambda)&:=v^{\top}(R^i_1+P_1(D^i)^{\top}D^i)v+2(P_1(B^i_1+(D^i)^{\top}C^i)+(D^i)^{\top}\Lambda)^{\top}v, \\ H^i_{12}(v,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=v^{\top}R^i_2v+\sum_{k=1}^{n_2}(P_1+\Gamma_{1k})\big[((1+E^i_k+F^i_k v)^+)^2-1\big]\\ &\quad-2P_1\sum_{k=1}^{n_2}(E^i_k+F^i_k v)+2P_1(B^i_2)^{\top}v+\sum_{k=1}^{n_2}(P_2+\Gamma_{2k}) ((1+E^i_k+F^i_k v)^-)^2, \\ H^i_{21}(v,P_2,\Lambda)&:=v^{\top}(R^i_1+P_2(D^i)^{\top}D^i)v-2(P_2(B^i_1+(D^i)^{\top}C^i)+(D^i)^{\top}\Lambda)^{\top}v,\\ H^i_{22}(v,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=v^{\top}R^i_2v+\sum_{k=1}^{n_2}(P_2+\Gamma_{2k})\big[((-1-E^i_k+F^i_k v)^-)^2-1\big]\\ &\quad-2P_2\sum_{k=1}^{n_2}(E^i_k-F^i_kv)-2P_2(B^i_2)^{\top}v+\sum_{k=1}^{n_2}(P_1+\Gamma_{1k})((-1-E^i_k+F^i_k v)^+)^2, \end{align*} and \begin{align*} H_{11}^{i,*}(t,P_1,\Lambda)&:=\inf_{v\in\Pi_1} H_{11}^i(v,P_1,\Lambda),\\ H_{12}^{i,*}(t,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=\inf_{v\in\Pi_2} H_{12}^i(v,z,P_1,P_2,\Gamma_1, \Gamma_2),\\ H_{21}^{i,*}(t,P_2,\Lambda)&:=\inf_{v\in\Pi_1} H_{21}^i(v,P_2,\Lambda),\\ H_{22}^{i,*}(t,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=\inf_{v\in\Pi_2} H_{22}^i(v,z,P_1,P_2,\Gamma_1, \Gamma_2). \end{align*} Note that in the above we have omitted the argument $t$. Since the generators in \eqref{P} depend on all the $P^i_j$'s, the system \eqref{P} is a fully coupled BSDE with jumps (BSDEJ). \begin{definition}\label{def} A vector of stochastic processes $(P_j^i,\Lambda_j^i,\Gamma_j^i)_{i\in\cM,\;j=1,2}$ is called a solution to the BSDEJ \eqref{P} if it satisfies all the equations and constraints in \eqref{P}, and $(P_j^i,\Lambda_j^i,\Gamma_j^i)\in S^{\infty}_{\mathbb{F}^{W,N}}(0,T;\R)\times L^{2}_{\mathbb{F}^{W,N}}(0,T;\R^{n_1})\times L^{\infty,\nu}_{\mathcal{P}^{W,N}}(0, T;\R^{n_2})$ for all $i\in\cM$, $j=1,2$. Furthermore, the solution is called nonnegative if $P^i_j\geq0$ and $P^i_j+\Gamma^i_j\geq0$, and uniformly positive if $P^i_j\gg1$ and $P^i_j+\Gamma^i_j\gg1$, for all $i\in\cM$, $j=1,2$. \end{definition} Before giving the proof of the main theorem of this section, let us recall the definition of bounded mean oscillation martingales, briefly called BMO martingales. We refer to Kazamaki \cite{Ka} for a systematic account of continuous BMO martingales. A process $\int_0^{t}\phi_s^\top dW_s$ is called a BMO martingale if and only if there exists a constant $c>0$ such that \[ \E\Big[\int_{\tau}^T|\phi_s|^2\ds\;\Big|\;\mathcal{F}^{W,N}_{\tau}\Big]\leq c \] for all $\mathbb{F}^{W,N}$-stopping times $\tau\leq T$.
In the uniqueness part of Theorem \ref{existence}, we will use the following property of BMO martingales: if $\int_0^{t}\phi_s^\top dW_s$ is a BMO martingale on $[0,T]$, then the Dol\'eans-Dade stochastic exponential $\mathcal{E}\big(\int_0^{t}\phi_s^{\top}dW_s\big)$ is a uniformly integrable martingale on $[0,T]$. The following comparison theorem for multi-dimensional BSDEJs was first established in \cite[Theorem 2.2]{HSX4}. We list it here as it plays a crucial role in the solvability of the BSDEJ \eqref{P}. \begin{lemma} \label{comparison} Suppose, for every $ i\in \{1,2,...,m\}$, \begin{align*} (Y_i, Z_i, \Phi_i), (\overline Y_i, \overline Z_i, \overline\Phi_i)\in S^{2}_{\mathbb{F}}(0,T;\R)\times L^{2}_{\mathbb F}(0, T;\R^{n})\times L^{2,\nu}_{\mathcal{P}}(0, T;\R), \end{align*} and they satisfy the BSDEJs \begin{align*} Y_{i,t}&=\xi_i+\int_t^T f_i(s,Y_{s-}, Z_{i,s}, \Phi_{s})\ds-\int_t^T Z_{i,s}^{\top}\dw_s-\int_t^T\int_{\cE}\Phi_{i,s}(e) \widetilde N(\ds,\de), \end{align*} and \begin{align*} \overline Y_{i,t}&=\overline\xi_i+\int_t^T \overline f_i(s, \overline Y_{s-}, \overline Z_{i,s}, \overline \Phi_{s})\ds-\int_t^T\overline Z_{i,s}^{\top}\dw_s-\int_t^T\int_{\cE}\overline\Phi_{i,s}(e) \widetilde N(\ds,\de).
\end{align*} Also suppose that, for all $i\in\{1,2,...,m\}$ and $s\in[0,T]$, \begin{enumerate}[(1)] \item \label{cond-boundary} $\xi_i\leq\overline\xi_i$; \item \label{cond-gamma} there exists a constant $c>0$ such that \begin{align*} &\quad\;f_i(s,Y_{s-}, Z_{i,s}, \Phi_{1,s}, \cdots, \Phi_{i,s}, \cdots, \Phi_{m,s})\\ &\qquad\quad-f_i(s,Y_{s-}, Z_{i,s}, \Phi_{1,s}, \cdots, \overline\Phi_{i,s}, \cdots, \Phi_{m,s}) \\ &\leq c \int_{\cE} (\Phi_{i,s}(e)-\overline \Phi_{i,s}(e))^{+}\nu(\de) +\int_{\cE} |\Phi_{i,s}(e)-\overline \Phi_{i,s}(e)|\nu(\de); \end{align*} \item \label{cond-growth} there exists a constant $c>0$ such that \begin{align*} &\quad\;f_i(s,Y_{s-}, Z_{i,s}, \Phi_{1,s}, \cdots, \overline\Phi_{i,s}, \cdots, \Phi_{m,s})-f_i(s, \overline Y_{s-}, \overline Z_{i,s}, \overline \Phi_{s})\\ &\leq c\Big (| Y_{i,s-}- \overline Y_{i,s-}|+\sum_{j\neq i} (Y_{j,s-}- \overline Y_{j,s-})^{+}+|Z_{i,s}- \overline Z_{i,s}|\\ &\qquad\quad+\sum_{j\neq i} \int_{\cE}( Y_{j,s-}+\Phi_{j,s}(e)-\overline Y_{j,s-} -\overline\Phi_{j,s}(e))^+\nu(\de)\Big); \end{align*} \item $f_{i}(\cdot, 0,0,0)$ and $\overline f_{i}(\cdot, 0,0,0)\in L^{2}_{\mathbb F}(0, T;\R)$; \item both $f_{i}$ and $\overline f_i$ are Lipschitz in $(y,z,\phi)$;~\text{and} \item \label{cond-size} $f_i(s, \overline Y_{s-}, \overline Z_{i,s}, \overline \Phi_{s}) \leq \overline f_i(s, \overline Y_{s-}, \overline Z_{i,s}, \overline \Phi_{s})$. \end{enumerate} Then $\mathbb{P}\{Y_{i,t}\leq\overline Y_{i,t}, \forall t\in[0,T]\}=1$ for all $i\in\{1,2,...,m\}$. \end{lemma} \begin{theorem}\label{existence} Under Assumptions \ref{assu1} and \ref{assu2}, the BSDEJ \eqref{P} admits a unique nonnegative solution $(P_j^i,\Lambda_j^i,\Gamma_j^i)_{i\in\cM,\;j=1,2}$.
\end{theorem} \pf For each natural number $k$, define the maps \begin{align*} H_{11}^{i,*,k}(P_{1},\Lambda) &:=\inf_{v\in\Pi_1,|v|\leq k} H_{11}^{i}(v,P_{1},\Lambda),\\ H_{12}^{i,*,k}(z,P_{1},P_{2},\Gamma_{1},\Gamma_2) &:=\inf_{v\in\Pi_2,|v|\leq k} H_{12}^{i}(v,z,P_{1},P_{2},\Gamma_{1},\Gamma_2),\\ H_{21}^{i,*,k}(P_{2},\Lambda) &:=\inf_{v\in\Pi_1,|v|\leq k} H_{21}^{i}(v,P_{2},\Lambda),\\ H_{22}^{i,*,k}(z,P_{1},P_{2},\Gamma_{1},\Gamma_2) &:=\inf_{v\in\Pi_2,|v|\leq k} H_{22}^{i}(v,z,P_{1},P_{2},\Gamma_{1},\Gamma_2). \end{align*} Then they are uniformly Lipschitz in $(P_{1},P_{2},\Lambda,\Gamma_1,\Gamma_2)$ and decrease to $H_{11}^{i,*}$, $H_{12}^{i,*}$, $H_{21}^{i,*}$, $H_{22}^{i,*}$, respectively, as $k$ goes to infinity. For each $k$, the following system \begin{align} \label{Ptrun} \begin{cases} \dd P_{1,t}^{i,k}=-\Big[(2A^i+|C^i|^2)P^{i,k}_{1,t-}+2(C^i)^{\top}\Lambda^{i,k}_1+Q^i+ H_{11}^{i,*,k}(P^{i,k}_1,\Lambda^{i,k}_1)+\sum_{j=1}^{\ell}q^{ij}P_{1}^{j,k}\\ \qquad\qquad\qquad\qquad+\int_{\cZ}H_{12}^{i,*,k}(z,P^{i,k}_1,P^{i,k}_2,\Gamma^{i,k}_1, \Gamma^{i,k}_2) \nu(\dz)\Big]\dt\\ \qquad\quad\;+(\Lambda^{i,k}_1)^{\top}\dw+\int_{\cZ}\Gamma^{i,k}_1(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ \dd P_{2,t}^{i,k}=-\Big[(2A^i+|C^i|^2)P^{i,k}_{2,t-}+2(C^i)^{\top}\Lambda^{i,k}_2+Q^i+ H_{21}^{i,*,k}(P^{i,k}_2,\Lambda^{i,k}_2)+\sum_{j=1}^{\ell}q^{ij}P_{2}^{j,k}\\ \qquad\qquad\qquad\qquad+\int_{\cZ}H_{22}^{i,*,k}(z,P^{i,k}_1,P^{i,k}_2,\Gamma^{i,k}_1, \Gamma^{i,k}_2)\nu(\dz)\Big]\dt\\ \qquad\quad\;+(\Lambda^{i,k}_2)^{\top}\dw+\int_{\cZ}\Gamma^{i,k}_2(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ P^{i,k}_{1,T}=G^i, \ P^{i,k}_{2,T}=G^i, \ i\in\cM, \end{cases} \end{align} is a $2\ell$-dimensional BSDEJ with a Lipschitz generator.
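The uniform Lipschitz property can be traced back to the elementary bound $|\inf_{v} f(v)-\inf_{v} g(v)|\leq\sup_{v}|f(v)-g(v)|$, valid for any two functions over a common index set. For instance, for $H_{11}^{i,*,k}$,
\begin{align*}
\big|H_{11}^{i,*,k}(P_{1},\Lambda)-H_{11}^{i,*,k}(P_{1}',\Lambda')\big|
&\leq \sup_{v\in\Pi_1,\,|v|\leq k}\big|H_{11}^{i}(v,P_{1},\Lambda)-H_{11}^{i}(v,P_{1}',\Lambda')\big|\\
&\leq c(1+k^2)\big(|P_{1}-P_{1}'|+|\Lambda-\Lambda'|\big),
\end{align*}
since, for each fixed $|v|\leq k$, the map $(P_{1},\Lambda)\mapsto H_{11}^{i}(v,P_{1},\Lambda)$ is affine with coefficients of size $O(1+k^2)$ by Assumption \ref{assu1}; the other three truncated maps are handled in the same way.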
According to \cite[Lemma 2.4]{TL}, it admits a unique solution $(P_j^{i,k},\Lambda_j^{i,k},\Gamma_j^{i,k})_{i\in\cM,\;j=1,2}$ such that $$(P_j^{i,k},\Lambda_j^{i,k},\Gamma_j^{i,k})\in S^{2}_{\mathbb{F}^{W,N}}(0, T;\R)\times L^{2}_{\mathbb{F}^{W,N}}(0, T;\R^{n_1})\times L^{2,\nu}_{\mathcal{P}^{W,N}}(0, T;\R^{n_2}), \ i\in\cM, \ j=1,2.$$ Next we will show that $(P_1^{i,k},P_2^{i,k})_{i\in\cM}$ are bounded from below and above, uniformly in $k$. Indeed, the following two linear BSDEJs with bounded coefficients (see, e.g., \cite[Proposition 2.2]{BBP}) \begin{align} \label{overlineP} \begin{cases} \dd \overline P_{1,t}^i=-\Big[(2A^i+|C^i|^2)\overline P^i_{1,t-}+2(C^i)^{\top}\overline \Lambda^i_1+Q^i+\int_{\cZ}H_{12}^{i}(0,z,\overline P^i_1,\overline P^i_2,\overline \Gamma^i_1, \overline \Gamma^i_2) \nu(\dz)\\ \qquad\qquad+\sum_{j=1}^{\ell}q^{ij}\overline P_{1}^j\Big]\dt+(\overline \Lambda^i_1)^{\top}\dw+\int_{\cZ}\overline \Gamma^i_1(z)^{\top}\tilde N(\dt,\dz),\\ \dd \overline P^i_{2,t}=-\Big[(2A^i+|C^i|^2)\overline P^i_{2,t-}+2(C^i)^{\top}\overline \Lambda^i_2+Q^i+\int_{\cZ}H_{22}^{i}(0,z,\overline P^i_1,\overline P^i_2,\overline \Gamma^i_1, \overline \Gamma^i_2) \nu(\dz)\\ \qquad\qquad+\sum_{j=1}^{\ell}q^{ij}\overline P_{2}^j\Big]\dt+(\overline \Lambda^i_2)^{\top}\dw+\int_{\cZ}\overline \Gamma^i_2(z)^{\top}\tilde N(\dt,\dz),\\ \overline P^i_{1,T}=G^i, \ \overline P^i_{2,T}=G^i, \ i\in\cM.
\end{cases} \end{align} and \begin{align} \label{underlineP} \begin{cases} \dd\underline P_{1,t}^i=-\Big[(2A^i+|C^i|^2)\underline P^i_{1,t-}+2(C^i)^{\top}\underline \Lambda^i_1+\sum_{j=1}^{\ell}q^{ij}\underline P_{1}^j\Big]\dt+(\underline \Lambda^i_1)^{\top}\dw+\int_{\cZ}\underline \Gamma^i_1(z)^{\top}\tilde N(\dt,\dz),\\ \dd\underline P_{2,t}^i=-\Big[(2A^i+|C^i|^2)\underline P^i_{2,t-}+2(C^i)^{\top}\underline \Lambda^i_2+\sum_{j=1}^{\ell}q^{ij}\underline P_{2}^j\Big]\dt+(\underline \Lambda^i_2)^{\top}\dw+\int_{\cZ}\underline \Gamma^i_2(z)^{\top}\tilde N(\dt,\dz),\\ \underline P^i_{1,T}=0, \ \underline P^i_{2,T}=0, \ i\in\cM. \end{cases} \end{align} admit unique uniformly bounded solutions $(\overline P_j^i, \overline\Lambda_j^i, \overline \Gamma_j^i)_{i\in\cM,\;j=1,2}$ and $(\underline P_j^i, \underline\Lambda_j^i, \underline \Gamma_j^i)_{i\in\cM,\;j=1,2}$, respectively. Clearly, $(\underline P_j^i, \underline\Lambda_j^i, \underline \Gamma_j^i)_{i\in\cM,\;j=1,2}=0$ by uniqueness. According to the definitions of $H_{jj'}^{i,*,k}$ and $H_{jj'}^{i}$, we have \begin{align*} H_{11}^{i,*,k}(\overline P_1,\overline \Lambda_1)&\leq H_{11}^i(0,\overline P_1,\overline \Lambda_1)=0,\\ H_{12}^{i,*,k}(z,\overline P_1,\overline P_2,\overline \Gamma_1, \overline \Gamma_2)&\leq H_{12}^i(0,z,\overline P_1,\overline P_2,\overline \Gamma_1, \overline \Gamma_2),\\ H_{21}^{i,*,k}(\overline P_2,\overline \Lambda_2)&\leq H_{21}^i(0,\overline P_2,\overline \Lambda_2)=0,\\ H_{22}^{i,*,k}(z,\overline P_1,\overline P_2,\overline \Gamma_1, \overline \Gamma_2)&\leq H_{22}^i(0,z,\overline P_1,\overline P_2,\overline \Gamma_1, \overline \Gamma_2).
\end{align*} Also, thanks to Assumption \ref{assu2}, \begin{align*} Q^i+H_{11}^{i,*,k}(\underline P_1,\underline \Lambda_1)+\int_{\cZ}H_{12}^{i,*,k}(z,\underline P_1,\underline P_2,\underline \Gamma_1,\underline \Gamma_2)\nu(\dz)=Q^i\geq 0,\\ Q^i+H_{21}^{i,*,k}(\underline P_2,\underline \Lambda_2)+\int_{\cZ}H_{22}^{i,*,k}(z,\underline P_1,\underline P_2,\underline \Gamma_1,\underline \Gamma_2)\nu(\dz)=Q^i\geq 0. \end{align*} We can apply Lemma \ref{comparison}\footnote{Conditions (1)--(5) can be verified in a similar way as in \cite[Theorem 3.1]{HSX4}.} to \eqref{Ptrun} and \eqref{overlineP}, and to \eqref{Ptrun} and \eqref{underlineP}, respectively, to get \begin{align*} 0\leq P_1^{i,k}\leq \overline P_1^{i}, \ 0\leq P_2^{i,k}\leq \overline P_2^{i}. \end{align*} Applying the same comparison theorem to different $k$'s in \eqref{Ptrun}, we see that $P_j^{i,k}$ is non-increasing in $k$, for any $i\in\cM, \ j=1,2$. A nonnegative solution to \eqref{P} can be constructed in much the same way as in \cite[Theorem 3.1]{HSX4} by proving the strong convergence of $(P_j^{i,k},\Lambda_j^{i,k},\Gamma_j^{i,k})_{i\in\cM,\;j=1,2}$ as $k\to\infty$. Details are left to the interested reader. \bigskip We now turn to the proof of uniqueness. Suppose $(P^i_j,\Lambda^i_j,\Gamma^i_j)_{i\in\cM,\;j=1,2}$ and $(\tilde P^i_j,\tilde\Lambda^i_j,\tilde\Gamma^i_j)_{i\in\cM,\;j=1,2}$ are two nonnegative solutions of \eqref{P}. Then there exists a constant $M>0$ such that $$0\leq P^i_j,~\tilde P^i_j\leq M.$$ Estimates similar to those in \cite[Theorem 3.1]{HSX4} also yield that $$0\leq P^i_{j,t-}+\Gamma^i_{j,t},~ \tilde P^i_{j,t-}+\tilde\Gamma^i_{j,t}\leq M.$$ First, we show that $\int_0^{\cdot}\Lambda^i_1\dw$ is a BMO martingale.
Applying It\^o's formula to $|P_{1,t}^i|^2$, we get, for any $\mathbb{F}^{W,N}$-stopping time $\tau\leq T$, \begin{align*} &\quad\;\E\Big[\int_{\tau}^T|\Lambda^i_1|^2\ds\;\Big|\;\mathcal{F}^{W,N}_{\tau}\Big]\\ &\leq |G^i|^2 +\E\Big\{\int_{\tau}^T2P_{1}^i\Big[(2A^i+|C^i|^2)P^i_{1,t-}+2(C^i)^{\top}\Lambda^i_1+Q^i\\ &\qquad\qquad\qquad+H_{11}^{i,*}(P^i_1,\Lambda^i_1)+\int_{\cZ}H_{12}^{i,*}(z,P^i_1,P^i_2,\Gamma^i_1, \Gamma^i_2) \nu(\dz)+\sum_{j=1}^{\ell}q^{ij}P_{1}^j\Big]\ds\;\Big|\;\mathcal{F}^{W,N}_{\tau}\Big\}\\ &\leq c+\frac{1}{2}\E\Big[\int_{\tau}^T|\Lambda^i_1|^2\ds\;\Big|\;\mathcal{F}^{W,N}_{\tau}\Big], \end{align*} where we used Assumption \ref{assu1}, $H_{11}^{i,*}\leq 0$, $H_{12}^{i,*}(z,P^i_1,P^i_2,\Gamma^i_1, \Gamma^i_2)\leq H_{12}^{i}(0,z,P^i_1,P^i_2,\Gamma^i_1, \Gamma^i_2)$, and the fact that the solution $(P_j^{i},\Lambda_j^{i},\Gamma_j^{i})_{i\in\cM,\;j=1,2}$ is uniformly bounded. Note that both sides in the above estimate are finite since $\Lambda^i_1\in L^{2}_{\mathbb{F}^{W,N}}(0,T;\R^{n_1})$. After rearrangement, we conclude that $\int_0^{\cdot}\Lambda^i_1\dw$ is a BMO martingale. Likewise, $\int_0^{\cdot}\Lambda^i_2\dw$ is also a BMO martingale. Let $a>0$ be a sufficiently small constant such that $R^i_1-a(D^i)^{\top}D^i>0$. Write $\varrho=\frac{a}{a+M}$; then $0<\varrho< 1$. Let \begin{align*} &(U^i_j,V^i_j,\Phi^i_{jk})=\Big(\ln (P^i_j+a),\frac{\Lambda^i_j}{P^i_j+a},\ln\Big(1+\frac{\Gamma^i_{jk}}{P^i_{j,t-}+a}\Big)\Big),\\ &(\tilde U^i_j,\tilde V^i_j,\tilde \Phi^i_{jk})=\Big(\ln (\tilde P^i_j+a),\frac{\tilde \Lambda^i_j}{\tilde P^i_j+a},\ln\Big(1+\frac{\tilde \Gamma^i_{jk}}{\tilde P^i_{j,t-}+a}\Big)\Big) \end{align*} for all $i$, $j$, $k$.
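The triple $(U^i_j,V^i_j,\Phi^i_{jk})$ is the logarithmic transform of $(P^i_j,\Lambda^i_j,\Gamma^i_{jk})$. Indeed, applying It\^o's formula with $f(p)=\ln(p+a)$, the quadratic variation of the continuous part contributes the term $\frac{1}{2}|V^i_j|^2$ to the generator, while for the jump part
\[
\ln\big(P^i_{j,t-}+\Gamma^i_{jk}+a\big)-\ln\big(P^i_{j,t-}+a\big)=\Phi^i_{jk},
\qquad
\frac{\Gamma^i_{jk}}{P^i_{j,t-}+a}=e^{\Phi^i_{jk}}-1,
\]
so compensating the jumps produces exactly the drift correction $\sum_{k=1}^{n_2}\int_{\cZ}(e^{\Phi^i_{jk}}-\Phi^i_{jk}-1)\nu(\dz)$ appearing in the transformed system below.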
Then we have the estimates \begin{align}\label{Philowerbound} \varrho\leq e^{\Phi^i_{jk}}, e^{\tilde\Phi^i_{jk}}\leq \varrho^{-1}, \ \ \ \frac{e^{U^i_1+\Phi^i_{1,k}}-a}{e^{U^i_1}}=\frac{P^i_1+\Gamma^i_{1k}}{e^{U^i_1}}\geq 0, \ \ \frac{e^{U^i_2+\Phi^i_{2,k}}-a}{e^{U^i_1}}=\frac{P^i_2+\Gamma^i_{2k}}{e^{U^i_1}}\geq 0. \end{align} Also, \begin{align*} \begin{cases} \dd U_{1}^i=-\Big[(2A^i+|C^i|^2)(1-ae^{-U^i_{1}})+2(C^i)^{\top}V^i_{1}+Q^ie^{-U^i_{1}} +\frac{1}{2}|V^i_1|^2+\sum_{j=1}^{\ell} q^{ij}e^{U_1^{j}-U_1^{i}}\\ \qquad\qquad\quad+\tilde H^{i,*}_{11}(U^i_{1},V^i_{1})+\int_{\cZ}\tilde H^{i,*}_{12}(U^i_{1},U^i_{2},\Phi^i_{1},\Phi^i_2)\nu(\dz)+\sum_{k=1}^{n_2}\int_{\cZ}(e^{\Phi^i_{1,k}}-\Phi^i_{1,k}-1)\nu(\dz)\Big]\dt\\ \qquad\quad\;+(V_{1}^i)^{\top}\dw+\int_{\cZ}\Phi_1^i(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ \dd U^i_{2}=-\Big[(2A^i+|C^i|^2)(1-ae^{-U^i_{2}})+2(C^i)^{\top}V^i_{2}+Q^ie^{-U^i_{2}} +\frac{1}{2}|V^i_2|^2+\sum_{j=1}^{\ell} q^{ij}e^{U_2^{j}-U_2^{i}}\\ \qquad\qquad\quad+\tilde H^{i,*}_{21}(U^i_{2},V^i_{2})+\int_{\cZ}\tilde H^{i,*}_{22}(U^i_{1},U^i_{2},\Phi^i_{1},\Phi^i_2)\nu(\dz)+\int_{\cZ}\sum_{k=1}^{n_2}(e^{\Phi^i_{2,k}}-\Phi^i_{2,k}-1)\nu(\dz)\Big]\dt\\ \qquad\quad\;+(V^i_{2})^{\top}\dw+\int_{\cZ}\Phi_2^i(z)^{\top}\tilde N(\dt,\dz),\bigskip\\ U^i_{1,T}=U^i_{2,T}=\ln(G^i+a),~~i\in\cM, \end{cases} \end{align*} where \begin{align*} \tilde H^{i}_{11}(v,U_1,V_1)&:=v^{\top}(R^i_1e^{-U_{1}}+(1-ae^{-U_1})(D^i)^{\top}D^{i})v\\ &\qquad+2((1-ae^{-U_1})(B^{i}_1+(D^{i})^{\top}C^{i})+(D^{i})^{\top}V_1)^{\top}v, \\ \tilde H^{i}_{12}(v,U_1,U_2,\Phi_1, \Phi_2)&:=v^{\top}R^{i}_2e^{-U_1}v+\sum_{k=1}^{n_2}\frac{e^{U_1+\Phi_{1,k}}-a}{e^{U_1}}\Big(((1+E^{i}_k+F^{i}_kv)^+)^2-1\Big)\\ &\qquad-2(1-ae^{-U_{1}})\sum_{k=1}^{n_2}(E^{i}_k+F^{i}_kv)+2(1-ae^{-U_{1}})(B^{i}_2)^{\top}v\\ &\qquad+\sum_{k=1}^{n_2}\frac{e^{U_2+\Phi_{2,k}}-a}{e^{U_1}} ((1+E^{i}_k+F^{i}_kv)^-)^2, \\ \tilde H^{i}_{21}(v,U_2,V_2)&:=v^{\top}(R^{i}_1e^{-U_{2}}+(1-ae^{-U_2})(D^{i})^{\top}D^{i})v\\ 
&\qquad-2((1-ae^{-U_2})(B^{i}_1+(D^{i})^{\top}C^{i})+(D^{i})^{\top}V_2)^{\top}v, \\ \tilde H^{i}_{22}(v,U_1,U_2,\Phi_1, \Phi_2)&:=v^{\top}R^{i}_2e^{-U_2}v+\sum_{k=1}^{n_2}\frac{e^{U_2+\Phi_{2,k}}-a}{e^{U_2}}\Big(((-1-E^{i}_k+F^{i}_kv)^-)^2-1\Big)\\ &\qquad-2(1-ae^{-U_{2}})\sum_{k=1}^{n_2}(E^{i}_k-F^{i}_kv)-2(1-ae^{-U_{2}})(B_2^{i})^{\top}v\\ &\qquad+\sum_{k=1}^{n_2}\frac{e^{U_1+\Phi_{1,k}}-a}{e^{U_2}} ((-1-E^{i}_k+F^{i}_kv)^+)^2, \end{align*} and \begin{align*} \tilde H_{11}^{i,*}(U_1,V_1)&:=\inf_{v\in\Pi_1} \tilde H^{i}_{11}(v,U_1,V_1),\\ \tilde H_{12}^{i,*}(z,U_1,U_2,\Phi_1, \Phi_2)&:=\inf_{v\in\Pi_2} \tilde H^{i}_{12}(v,z,U_1,U_2,\Phi_1, \Phi_2),\\ \tilde H_{21}^{i,*}(U_2,V_2)&:=\inf_{v\in\Pi_1} \tilde H^{i}_{21}(v,U_2,V_2),\\ \tilde H_{22}^{i,*}(z,U_1,U_2,\Phi_1, \Phi_2)&:=\inf_{v\in\Pi_2} \tilde H^{i}_{22}(v,z,U_1,U_2,\Phi_1, \Phi_2). \end{align*} Set \[ \bar U^i_j=U^i_j-\tilde U^i_j, \ \bar V^i_j=V^i_j-\tilde V^i_j, \ \bar\Phi^i_j=\Phi^i_j-\tilde\Phi^i_j, \ i\in\cM, \ j=1,2. \] Then applying It\^{o}'s formula to $(\bar U^i_j)^2$, we deduce that \begin{align*} &\quad\;(\bar U^i_{1,t})^2+\int_t^T|\bar V^i_1|^2\ds+\int_t^T\int_{\cZ}|\bar\Phi^i_1|^2\nu(\dz)\ds\\ &=\int_t^T\Big[L^i_{11}+\int_{\cZ}L^i_{12}(z)\nu(\dz)\Big]\ds-\int_t^T2\bar U^i_1(\bar V^i_{1})^{\top}\dw\\ &\quad\;-\sum_{k=1}^{n_2}\int_t^T\int_{\cZ}(2\bar U^i_{1}\bar\Phi^i_{1,k}+(\bar\Phi^i_{1,k})^2)\tilde N_k(\ds,\dz), \end{align*} and \begin{align*} &\quad\;(\bar U^i_{2,t})^2+\int_t^T|\bar V^i_2|^2\ds+\int_t^T\int_{\cZ}|\bar\Phi^i_2|^2\nu(\dz)\ds\\ &=\int_t^T\Big[L^i_{21}+\int_{\cZ}L^i_{22}(z)\nu(\dz)\Big]\ds -\int_t^T2\bar U^i_2(\bar V^i_{2})^{\top}\dw\\ &\quad\;-\sum_{k=1}^{n_2}\int_t^T\int_{\cZ}(2\bar U^i_{2}\bar\Phi^i_{2,k}+(\bar\Phi^i_{2,k})^2)\tilde N_k(\ds,\dz), \end{align*} where \begin{align*} L^i_{11}:&=2\bar U^i_1\Big[(Q^i-2aA^i-a|C^i|^2)(e^{-U^i_1}-e^{-\tilde U^i_1})+2(C^i)^{\top}\bar V^i_1+\frac{1}{2}(V^i_1+\tilde V^i_1)^{\top}\bar V^i_1\\
&\qquad\qquad+\sum_{j=1}^{\ell}q^{ij}\Big(e^{U^j_1-U^i_1}-e^{\tilde U^j_1-\tilde U^i_1}\Big)+\tilde H_{11}^{i,*}(U^i_{1},V^i_{1})-\tilde H_{11}^{i,*}(\tilde U^i_{1},\tilde V^i_{1})\Big],\\ L^i_{12}(z):&=2\bar U^i_1\Big[\sum_{k=1}^{n_2}[(e^{\Phi^i_{1,k}}-\Phi^i_{1,k}-1)-(e^{\tilde\Phi^i_{1,k}}-\tilde\Phi^i_{1,k}-1)]\\ &\qquad\qquad +\tilde H_{12}^{i,*}(z,U^i_{1},U^i_{2},\Phi^i_1,\Phi^i_2)-\tilde H_{12}^{i,*}(z,\tilde U^i_{1},\tilde U^i_{2},\tilde\Phi^i_1,\tilde\Phi^i_2)\Big],\\ L^i_{21}:&=2\bar U^i_2\Big[(Q^i-2aA^i-a|C^i|^2)(e^{-U^i_2}-e^{-\tilde U^i_2})+2(C^i)^{\top}\bar V^i_2+\frac{1}{2}(V^i_2+\tilde V^i_2)^{\top}\bar V^i_2\\ &\qquad\qquad+\sum_{j=1}^{\ell}q^{ij}\Big(e^{U^j_2-U^i_2}-e^{\tilde U^j_2-\tilde U^i_2}\Big)+\tilde H_{21}^{i,*}(U^i_{2},V^i_{2})-\tilde H_{21}^{i,*}(\tilde U^i_{2},\tilde V^i_{2})\Big],\\ L^i_{22}(z):&=2\bar U^i_2\Big[\sum_{k=1}^{n_2}[(e^{\Phi^i_{2,k}}-\Phi^i_{2,k}-1)-(e^{\tilde\Phi^i_{2,k}}-\tilde\Phi^i_{2,k}-1)]\\ &\qquad\qquad+\tilde H_{22}^{i,*}(z,U^i_{1},U^i_{2},\Phi^i_1,\Phi^i_2)-\tilde H_{22}^{i,*}(z,\tilde U^i_{1},\tilde U^i_{2},\tilde\Phi^i_1,\tilde\Phi^i_2)\Big]. \end{align*} The terms $L^i_{11}$ and $L^i_{21}$ can be estimated in much the same way as in \cite[Theorem 3.5]{HSX} to get\footnote{$R^i_1-a(D^i)^{\top}D^i>0$ is required in these estimates.} \begin{align*} &L^i_{11}\leq |\beta^i|(\bar U^i_1)^2+c|\bar U^i_1|\sum_{j=1}^{\ell}|\bar U^j_1|+c\bar U^i_1(\beta^i)^{\top}\bar V^i_1,\\ &L^i_{21}\leq |\beta^i|(\bar U^i_2)^2+c|\bar U^i_2|\sum_{j=1}^{\ell}|\bar U^j_2|+c\bar U^i_2(\beta^i)^{\top}\bar V^i_2, \end{align*} where $\beta^i$ is some $\mathbb{F}^{W,N}$-predictable process satisfying $|\beta^i|\leq c(1+|V^i_1|+|\tilde V^i_1|+|V^i_2|+|\tilde V^i_2|)$, so that $\int_0^{\cdot}(\beta^i)^{\top}\dw$ is a BMO martingale.
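We shall also use the exponential integrability encoded in the BMO property: by the John--Nirenberg inequality (see Kazamaki \cite{Ka}), if $M$ is a BMO martingale with $\|M\|_{\rm BMO}<1$, then for every stopping time $\tau\leq T$,
\[
\E\Big[\exp\big(\langle M\rangle_T-\langle M\rangle_{\tau}\big)\;\Big|\;\mathcal{F}^{W,N}_{\tau}\Big]\leq \frac{1}{1-\|M\|_{\rm BMO}^2}.
\]
Applied, after a suitable rescaling, to $M=\int_0^{\cdot}(\beta^i)^{\top}\dw$, this yields conditional exponential moments of $\int_0^T|\beta^i_r|\dr$, which is precisely the type of bound provided by \cite[Lemma 3.4]{HSX3}.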
On the other hand, from Assumptions \ref{assu1}, \ref{assu2} and \eqref{Philowerbound}, there are positive constants $c_2,c_3$ such that \begin{align*} &\qquad\tilde H^i_{12}(v,z,U^i_1,U^i_2,\Phi^i_1, \Phi^i_2)-\tilde H^i_{12}(0,z,U^i_1,U^i_2,\Phi^i_1, \Phi^i_2)\\ &\geq \frac{\delta}{M+a}|v|^2-\sum_{k=1}^{n_2}\frac{e^{U^i_1+\Phi^i_{1,k}}-a}{e^{U^i_1}} -2(1-ae^{-U^i_{1}})\sum_{k=1}^{n_2}(E^i_k+F^i_kv)+2(1-ae^{-U^i_{1}})(B^i_2)^{\top}v\\ &\qquad-\Big[\sum_{k=1}^{n_2}\frac{e^{U^i_1+\Phi^i_{1,k}}-a}{e^{U^i_1}}\Big(((1+E^i_k)^+)^2-1\Big)\\ &\qquad\qquad-2(1-ae^{-U^i_{1}})\sum_{k=1}^{n_2}E^i_k+\sum_{k=1}^{n_2}\frac{e^{U^i_2+\Phi^i_{2,k}}-a}{e^{U^i_1}} ((1+E^i_k)^-)^2\Big]\\ &\geq \frac{\delta}{M+a}|v|^2-c_2|v|-c_3> 0, \end{align*} if $|v|> c$ with $c>0$ sufficiently large. Hence, \begin{align}\label{boundeddomain} \tilde H_{12}^{i,*}(z,U^i_1,U^i_2,\Phi^i_1, \Phi^i_2)&=\inf_{v\in\Pi_2, |v|\leq c} \tilde H^i_{12}(v,z,U^i_1,U^i_2,\Phi^i_1, \Phi^i_2). \end{align} Furthermore, noting that $U^i_1,\tilde U^i_1, U^i_2,\tilde U^i_2, \Phi^i_1, \tilde\Phi^i_1, \Phi^i_2, \tilde\Phi^i_2$ are bounded, we have \begin{align*} L^i_{12}(z) \leq c|\bar U^i_1|(|\bar U^i_1|+|\bar\Phi^i_1(z)|+|\bar U^i_2|+|\bar\Phi^i_2(z)|). \end{align*} Similar arguments applied to $\tilde H^{i}_{22}(v,z,U^i_1,U^i_2,\Phi^i_1, \Phi^i_2)$ yield \begin{align*} L^i_{22}(z) \leq c|\bar U^i_2|(|\bar U^i_1|+|\bar\Phi^i_1(z)|+|\bar U^i_2|+|\bar\Phi^i_2(z)|). \end{align*} For each $i\in\cM$, introduce the processes \begin{align*} J^i_t=\exp\Big(\int_0^t|\beta^i_s|\ds\Big), \ N^i_t=\exp\Big(\int_0^t(\beta^i_s)^{\top}\dw_s-\frac{1}{2}\int_0^t|\beta^i_s|^2\ds\Big).
\end{align*} It\^{o}'s formula gives \begin{align*} &\qquad J^i_tN^i_t|\bar U^i_{1,t}|^2+\E_t\int_t^TJ^i_sN^i_s|\bar V^i_1|^2\ds+\E_t\int_t^T\int_{\cZ}J^i_sN^i_s|\bar\Phi^i_1|^2\nu(\dz)\ds\\ &\leq\E_t\int_t^TJ^i_sN^i_s\Big[c|\bar U^i_1||\bar U^i_2|+c|\bar U^i_1|\sum_{j=1}^{\ell}|\bar U^j_1|+c|\bar U^i_1|\int_{\cZ}(|\bar\Phi^i_1(z)|+|\bar\Phi^i_2(z)|)\nu(\dz)\Big]\ds\\ &\leq c\E_t\int_t^{T}J^i_sN^i_s(|\bar U^i_1|^2+|\bar U^i_2|^2+\sum_{j=1}^{\ell}|\bar U_1^j|^2)\ds+\frac{1}{4}\E_t\int_t^T\int_{\cZ}J^i_sN^i_s(|\bar\Phi^i_1(z)|^2+|\bar\Phi^i_2(z)|^2)\nu(\dz)\ds. \end{align*} Note that $N^i_t$ is a uniformly integrable martingale; thus \begin{align*} \widetilde W^i_t:=W_t-\int_0^t\beta^i_s\ds \end{align*} is a Brownian motion under the probability $\tilde{\mathbb{P}}^i$ defined by \[ \frac{d\tilde{\mathbb{P}}^i}{d\mathbb{P}}\Big|_{\mathcal{F}^{W,N}_T}=N^i_T. \] Denoting by $\widetilde \E^i_t$ the conditional expectation with respect to the probability $\tilde{\mathbb{P}}^i$, we then have \begin{align}\label{e1} &\qquad J^i_t|\bar U^i_{1,t}|^2+\widetilde\E^i_t\int_t^TJ^i_s|\bar V^i_1|^2\ds+\widetilde\E^i_t\int_t^T\int_{\cZ}J^i_s|\bar\Phi^i_1|^2\nu(\dz)\ds\nn\\ &\leq c\widetilde\E^i_t\int_t^{T}J^i_s(|\bar U^i_1|^2+|\bar U^i_2|^2+\sum_{j=1}^{\ell}|\bar U_1^j|^2)\ds+\frac{1}{4}\widetilde\E^i_t\int_t^T\int_{\cZ}J^i_s(|\bar\Phi^i_1|^2+|\bar\Phi^i_2|^2)\nu(\dz)\ds. \end{align} Similarly, we have \begin{align}\label{e2} &\qquad J^i_t|\bar U^i_{2,t}|^2+\widetilde\E^i_t\int_t^TJ^i_s|\bar V^i_2|^2\ds+\widetilde\E^i_t\int_t^T\int_{\cZ}J^i_s|\bar\Phi^i_2|^2\nu(\dz)\ds\nn\\ &\leq c\widetilde\E^i_t\int_t^{T}J^i_s(|\bar U^i_1|^2+|\bar U^i_2|^2+\sum_{j=1}^{\ell}|\bar U_2^j|^2)\ds+\frac{1}{4}\widetilde\E^i_t\int_t^T\int_{\cZ}J^i_s(|\bar\Phi^i_1|^2+|\bar\Phi^i_2|^2)\nu(\dz)\ds.
\end{align} Combining the above two inequalities yields \begin{align*} |\bar U_{1,t}^i|^2+|\bar U^i_{2,t}|^2&\leq c\widetilde\E^i_t\Big[\exp\Big(\int_t^T|\beta^i_r|\dr\Big)\sum_{j=1}^{\ell}\int_t^{T}(|\bar U^j_{1,s}|^2+|\bar U^j_{2,s}|^2)\ds\Big]\\ &\leq c\widetilde\E^i_t \Big[\exp\Big(\int_t^T|\beta^i_r|\dr\Big)\Big]\sum_{j=1}^{\ell}\int_t^{T}\Xi^j_{s}\ds, \end{align*} where \[ \Xi^j_{s}:=\esssup_{\omega\in\Omega} \Big(|\bar U^j_{1,s}|^{2}+|\bar U^j_{2,s}|^{2}\Big). \] According to \cite[Lemma 3.4]{HSX3}, $\widetilde\E^i_t \Big[\exp\Big(\int_t^T|\beta^i_r|\dr\Big)\Big]\leq c$; taking the essential supremum on both sides, we deduce that \begin{align*} 0\leq \sum_{i=1}^{\ell}\Xi^i_{t}\leq c\int_t^{T}\sum_{i=1}^{\ell}\Xi^i_{s} \ds. \end{align*} We infer from Gronwall's inequality that $\Xi^i=0$, so $\bar U^i_{1}=\bar U^i_{2}=0$, for all $i\in\cM$. Consequently, it follows from \eqref{e1} and \eqref{e2} that $\bar V^i_{1}=\bar V^i_{2}=0$ and $\bar\Phi^i_1=\bar\Phi^i_2=0$ for all $i\in\cM$. This completes the proof. \eof \begin{theorem}\label{existencesing} Under Assumptions \ref{assu1} and \ref{assu3}, the BSDEJ \eqref{P} admits a unique uniformly positive solution $(P^i_j,\Lambda^i_j,\Gamma^i_j)_{i\in\cM,\;j=1,2}$. \end{theorem} \pf The proof of existence is similar to that of Theorem \ref{existence}, so we only indicate briefly why the solution to \eqref{P} is uniformly positive. \underline{When both \ref{R12} and \ref{R21} hold.} In this case, there exists a constant $c_2>0$ such that \[ 2A^i+|C^{i}|^2-\delta^{-1}|B^i_1+(D^i)^{\top}C^i|^2\geq -c_2, \ -\delta^{-1}|B_2^i|^2\geq -c_2, \] where $\delta$ is the constant in Assumption \ref{assu3}.
Notice that $(\underline P_{t},\underline\Lambda_{t},\underline\Gamma_{t})=(\frac{1}{(\delta^{-1}+1)e^{c_2(T-t)}-1},0,0)$ solves the following BSDEJ \begin{align}\label{Plowersingu1} \begin{cases} \dd\underline P=-(-c_2\underline P-c_2\underline P^2)\dt+\underline\Lambda^{\top}\dw+\int_{\cZ}\underline\Gamma(z) \widetilde N(\dt,\dz),\\ \underline P_{T}=\delta. \end{cases} \end{align} Moreover, we have the following inequalities: \begin{align*} H_{11}^{i,*,k}(\underline P,\underline \Lambda)&\geq \inf_{v\in\R^{m_1}} H_{11}^{i}(v,\underline P,\underline \Lambda)\geq-\delta^{-1} |B^i_1+(D^i)^{\top}C^i|^2\underline P,\\ H_{12}^{i,*,k}(z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)&\geq \inf_{v\in\R^{m_2}}H_{12}^{i}(v,z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)\geq-\delta^{-1}|B_2^i|^2\underline P^2\geq -c_2\underline P^2. \end{align*} Similar results also hold for $H_{21}^{i,*,k}(\underline P,\underline \Lambda)$ and $H_{22}^{i,*,k}(z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)$. Applying Lemma \ref{comparison} to \eqref{Ptrun} and \eqref{Plowersingu1}, we get \begin{align*} P^{i,k}_{j,t}\geq \underline P_{t}\geq \frac{1}{(\delta^{-1}+1)e^{c_2T}-1}, \ t\in[0,T], \ i\in\cM, \ j=1,2. \end{align*} Sending $k\rightarrow\infty$ leads to the desired uniformly positive lower bound. \underline{When both \ref{R12} and \ref{R22} hold.} In this case, there exists a constant $c_3>0$ such that \[ 2A^i+|C^{i}|^2-\delta^{-1}|B^i_1+(D^i)^{\top}C^i|^2-\delta^{-1}\int_{\cZ}|(F^i)^{\top}E^i+B_2^i|^2\nu(\dz)\geq -c_3, \] where $\delta$ is the constant in Assumption \ref{assu3}. Notice that $(\underline P_{t},\underline\Lambda_{t},\underline\Gamma_{t})=(\delta e^{-c_3(T-t)},0,0)$ solves the following BSDEJ \begin{align}\label{Plowersingu2} \begin{cases} \dd\underline P=-(-c_3\underline P)\dt+\underline\Lambda^{\top}\dw+\int_{\cZ}\underline\Gamma(z) \widetilde N(\dt,\dz),\\ \underline P_{T}=\delta.
\end{cases} \end{align} Moreover, we have the following inequalities: \begin{align*} H_{11}^{i,*,k}(\underline P,\underline \Lambda)&\geq \inf_{v\in\R^{m_1}} H_{11}^{i}(v,\underline P,\underline \Lambda)\geq-\delta^{-1} |B^i_1+(D^i)^{\top}C^i|^2\underline P,\\ H_{12}^{i,*,k}(z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)&\geq \inf_{v\in\R^{m_2}}H_{12}^{i}(v,z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)\geq-\delta^{-1}|(F^i)^{\top}E^i+B_2^i|^2\underline P. \end{align*} Therefore, \begin{align*} &(2A^i+|C^i|^2)\underline P+2(C^i)^{\top}\underline\Lambda+Q^i+\sum_{j=1}^{\ell}q^{ij}\underline P+H_{11}^{i,*,k}(\underline P,\underline \Lambda)+\int_{\cZ}H_{12}^{i,*,k}(z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)\nu(\dz)\\ &\geq (2A^i+|C^i|^2)\underline P-\delta^{-1} |B^i_1+(D^i)^{\top}C^i|^2\underline P-\int_{\cZ}\delta^{-1}|(F^i)^{\top}E^i+B_2^i|^2\underline P\nu(\dz)\geq -c_3\underline P. \end{align*} Similar results also hold for $H_{21}^{i,*,k}(\underline P,\underline \Lambda)$ and $H_{22}^{i,*,k}(z,\underline P,\underline P,\underline \Gamma,\underline \Gamma)$. Applying Lemma \ref{comparison} to \eqref{Ptrun} and \eqref{Plowersingu2}, we get \begin{align*} P^{i,k}_{j,t}\geq \underline P_{t}=\delta e^{-c_3(T-t)}\geq \delta e^{-c_3T}, \ t\in[0,T], \ i\in\cM, \ j=1,2. \end{align*} Sending $k\rightarrow\infty$ leads to the desired uniformly positive lower bound. \underline{When both \ref{R11} and \ref{R22} hold.} This case can be handled in a similar manner; the details are left to the reader. As for uniqueness, one just sets $a=0$ in the proof of the uniqueness part of Theorem \ref{existence}. \eof \section{Solution to the LQ problem \eqref{LQ}}\label{section:Veri} In this section we present an explicit solution to the LQ problem \eqref{LQ} in terms of solutions to the BSDEJ \eqref{P}.
When $R^i_1+P(D^i)^{\top}D^i>0, \ i\in\cM$, we define \begin{align}\label{hatv1} \hat v_{11}^{i}(t,P,\Lambda)&:=\argmin_{v\in\Pi_1} H_{11}^i(t,v,P,\Lambda),\nn\\ \hat v_{21}^{i}(t,P,\Lambda)&:=\argmin_{v\in\Pi_1} H_{21}^i(t,v,P,\Lambda). \end{align} When $R^i_2>0$ or $(P_j+\Gamma_{jk})(F^i)^{\top}F^i>0$, $i\in\cM, \ j=1,2, \ k=1,2,...,n_2$, we define \begin{align}\label{hatv2} \hat v_{12}^{i}(t,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=\argmin_{v\in\Pi_2} H_{12}^i(t,v,z,P_1,P_2,\Gamma_1, \Gamma_2),\nn\\ \hat v_{22}^{i}(t,z,P_1,P_2,\Gamma_1, \Gamma_2)&:=\argmin_{v\in\Pi_2} H_{22}^i(t,v,z,P_1,P_2,\Gamma_1, \Gamma_2). \end{align} \begin{theorem}\label{Th:verif} Let Assumptions \ref{assu1} and \ref{assu2} (resp. \ref{assu3}) hold. Let $(P^i_j,\Lambda^i_j,\Gamma^i_j)\in S^{\infty}_{\mathbb{F}^{W,N}}(0,T;\R)\times L^{2}_{\mathbb{F}^{W,N}}(0,T;\R^{n_1}) \times L^{\infty,\nu}_{\mathcal{P}^{W,N}}(0, T;\R^{n_2}), \ i\in\cM, \ j=1,2$, be the nonnegative (resp. uniformly positive) solution to the BSDEJ \eqref{P}. Then the state feedback control $u^*=(u^*_1,u^*_2)$ given by \begin{align} \label{ustar} \begin{cases} u^*_1(t,X,\alpha)=\hat v_{11}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_{1,t-},\Lambda^{\alpha_{t-}}_{1,t})X_{t-}^++\hat v_{21}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_{2,t-},\Lambda^{\alpha_{t-}}_{2,t})X_{t-}^-,\\ u^*_2(t,X,\alpha)=\hat v_{12}^{\alpha_{t-}}(t,z,P^{\alpha_{t-}}_{1,t-},P^{\alpha_{t-}}_{2,t-},\Gamma^{\alpha_{t-}}_{1,t}, \Gamma^{\alpha_{t-}}_{2,t})X_{t-}^+ +\hat v_{22}^{\alpha_{t-}}(t,z,P^{\alpha_{t-}}_{1,t-},P^{\alpha_{t-}}_{2,t-},\Gamma^{\alpha_{t-}}_{1,t}, \Gamma^{\alpha_{t-}}_{2,t})X_{t-}^-, \end{cases} \end{align} is optimal for the LQ problem \eqref{LQ}. Moreover, the optimal value is \begin{align*} V(x,i_0)=P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2. \end{align*} \end{theorem} A proof of this theorem is contained in the following two lemmas.
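To illustrate \eqref{hatv1}, note that in the unconstrained case $\Pi_1=\R^{m_1}$, completing the square in $H^i_{11}$ gives the classical linear feedback gain
\[
\hat v_{11}^{i}(t,P,\Lambda)=-\big(R^i_1+P(D^i)^{\top}D^i\big)^{-1}\big(P(B^i_1+(D^i)^{\top}C^i)+(D^i)^{\top}\Lambda\big);
\]
for a general closed cone $\Pi_1$, the infimum is still attained, since $v\mapsto H^i_{11}(v,P,\Lambda)$ is continuous and coercive whenever $R^i_1+P(D^i)^{\top}D^i>0$, but it admits no closed form in general.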
To avoid unwieldy formulas as far as possible, we agree to suppress the superscripts and subscripts of $A,B,C,D,E,F,R,Q,G$, and we will write $\hat v_{ij}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_1,\Lambda^{\alpha_{t-}}_1)$ simply as $\hat v_{ij}$, $i,j=1,2$, when no confusion can arise. \begin{lemma}\label{lemma1} Under the condition of Theorem \ref{Th:verif}, we have \begin{align}\label{lower1} J(u;x,i_0)\geq P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2, \end{align} for any $u\in\mathcal{U}$, and \begin{align} J(u^*;x,i_0)= P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2. \end{align} \end{lemma} \pf By the Meyer-It\^o formula \cite[Theorem 70]{Protter}, we deduce that, for any $u=(u_1,u_2)\in\mathcal{U}$, \begin{align*} \dd X_t^+&=\mathbf{1}_{\{X_{t-}>0\}}\Big[\Big(AX+B_1^{\top}u_1 +\int_{\cZ}B_2(z)^{\top}u_2(z)\nu(\dz)-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big)\dt\\ &\quad+(CX+Du_1)^{\top}\dw_t\Big]+\sum_{k=1}^{n_2}\int_{\cZ}\Big[(X+E_k(z)X+F_k(z)u_2(z))^+-X^+\Big]N_k(\dt,\dz)+\frac{1}{2}\dd L_t, \end{align*} and \begin{align*} \dd X_t^-&=-\mathbf{1}_{\{X_{t-}\leq0\}}\Big[\Big(AX+B_1^{\top}u_1 +\int_{\cZ}B_2(z)^{\top}u_2(z)\nu(\dz)-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big)\dt\\ &\quad+(CX+Du_1)^{\top}\dw_t\Big]+\sum_{k=1}^{n_2}\int_{\cZ}\Big[(X+E_k(z)X+F_k(z)u_2(z))^--X^-\Big]N_k(\dt,\dz)+\frac{1}{2}\dd L_t, \end{align*} where $L$ is the local time of $X$ at $0$.
Since $X_{t-}^{\pm}\dd L_t=0$, it follows from the It\^{o} formula that \begin{align*} \dd\; (X_t^+)^2&=\Big[2X_{t-}^+\Big(AX+B_1^{\top}u_1 +\int_{\cZ}B_2(z)^{\top}u_2(z)\nu(\dz)-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big)\\ &\qquad+\mathbf{1}_{\{X_{t-}>0\}}|CX+Du_1|^2\Big]\dt+2X^+(CX+Du_1)^{\top}\dw_t\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}\Big[((X+E_k(z)X+F_k(z)u_2(z))^+)^2-(X^+)^2\Big]N_k(\dt,\dz), \end{align*} and \begin{align*} \dd\;(X_t^-)^2&=\Big[-2X_{t-}^-\Big(AX+B_1^{\top}u_1 +\int_{\cZ}B_2(z)^{\top}u_2(z)\nu(\dz)-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big)\\ &\qquad+\mathbf{1}_{\{X_{t-}\leq0\}}|CX+Du_1|^2\Big]\dt-2X^-(CX+Du_1)^{\top}\dw_t\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}\Big[((X+E_k(z)X+F_k(z)u_2(z))^-)^2-(X^-)^2\Big]N_k(\dt,\dz). \end{align*} Applying the It\^{o} formula to $P^{\alpha_t}_{1,t}(X_t^+)^2$ then yields \begin{align*} \dd P^{\alpha_t}_{1,t}(X_t^+)^2&=P^{\alpha_{t-}}_{1,t-}\Big[2X_{t-}^+\Big(AX+B_1^{\top}u_1+\int_{\cZ}B_2(z)^{\top}u_2\nu(\dz)\\ &\qquad-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big) +\mathbf{1}_{\{X_{t-}>0\}}|CX+Du_1|^2\Big]\dt\\ &\qquad+2X^+(CX+Du_1)^{\top}\Lambda^{\alpha_{t-}}_1\dt\\ &\qquad-(X^+)^2\Big[(2A+|C|^2)P^{\alpha_{t-}}_{1,t-}+2C^{\top}\Lambda^{\alpha_{t-}}_1+Q +H_{11}^{\alpha_{t-},*}(P^{\alpha_{t-}}_1,\Lambda^{\alpha_{t-}}_1)\\ &\qquad\qquad+\int_{\cZ}H_{12}^{\alpha_{t-},*}(P^{\alpha_{t-}}_1,P^{\alpha_{t-}}_2,\Gamma^{\alpha_{t-}}_1, \Gamma^{\alpha_{t-}}_2)\nu(\dz)\Big]\dt\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}(P^{\alpha_{t-}}_{1,t-}+\Gamma^{\alpha_{t-}}_{1k,t})\Big[\big((X+E_k(z)X+F_k(z)u_2(z))^+\big)^2-(X^+)^2\Big]\nu(\dz)\dt\\ &\qquad +\Big[2P^{\alpha_{t-}}_{1,t-}X^+(CX+Du_1)+(X^+)^2\Lambda^{\alpha_{t-}}_1\Big]^{\top}\dw\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}(P^{\alpha_{t-}}_{1,t-}+\Gamma^{\alpha_{t-}}_{1k,t})\Big[\big((X+E_k(z)X+F_k(z)u_2(z))^+\big)^2-(X^+)^2\Big]\tilde N_k(\dt,\dz)\\ &\qquad + (X^+)^2\int_{\cZ}\Gamma^{\alpha_{t-}}_1(z)^{\top}\tilde N(\dt,\dz)+(X^+)^2
\sum_{j,j'\in\cM}(P_1^{j}-P_1^{j'})\mathbf{1}_{\{\alpha_{t-}=j'\}}\dd \tilde N^{j'j}, \end{align*} where $\{N^{j'j}\}_{j,j'\in\cM}$ are independent Poisson processes each with intensity $q^{j'j}$, and $\tilde N^{j'j}_t=N^{j'j}_t-q^{j'j}t, \ t\geq 0$, are the corresponding compensated Poisson martingales. Likewise, \begin{align*} \dd P^{\alpha_t}_{2,t}(X_t^-)^2&=P^{\alpha_{t-}}_{2,t-}\Big[-2X_{t-}^-\Big(AX+B_1^{\top}u_1+\int_{\cZ}B_2(z)^{\top}u_2\nu(\dz)\\ &\qquad-\sum_{k=1}^{n_2}\int_{\cZ}(E_k(z)X+F_k(z)u_2(z))\nu(\dz)\Big) +\mathbf{1}_{\{X_{t-}\leq0\}}|CX+Du_1|^2\Big]\dt\\ &\qquad-2X^-(CX+Du_1)^{\top}\Lambda^{\alpha_{t-}}_2\dt\\ &\qquad-(X^-)^2\Big[(2A+|C|^2)P^{\alpha_{t-}}_{2,t-}+2C^{\top}\Lambda^{\alpha_{t-}}_2+Q +H_{21}^{\alpha_{t-},*}(P^{\alpha_{t-}}_2,\Lambda^{\alpha_{t-}}_2)\\ &\qquad\qquad+\int_{\cZ}H_{22}^{\alpha_{t-},*}(P^{\alpha_{t-}}_1,P^{\alpha_{t-}}_2,\Gamma^{\alpha_{t-}}_1, \Gamma^{\alpha_{t-}}_2)\nu(\dz)\Big]\dt\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}(P^{\alpha_{t-}}_{2,t-}+\Gamma^{\alpha_{t-}}_{2k,t})\Big[\big((X+E_k(z)X+F_k(z)u_2(z))^-\big)^2-(X^-)^2\Big]\nu(\dz)\dt\\ &\qquad +\Big[-2P^{\alpha_{t-}}_{2,t-}X^-(CX+Du_1)+(X^-)^2\Lambda^{\alpha_{t-}}_2\Big]^{\top}\dw\\ &\qquad+\sum_{k=1}^{n_2}\int_{\cZ}(P^{\alpha_{t-}}_{2,t-}+\Gamma^{\alpha_{t-}}_{2k,t})\Big[\big((X+E_k(z)X+F_k(z)u_2(z))^-\big)^2-(X^-)^2\Big]\tilde N_k(\dt,\dz)\\ &\qquad + (X^-)^2\int_{\cZ}\Gamma^{\alpha_{t-}}_2(z)^{\top}\tilde N(\dt,\dz)+(X^-)^2 \sum_{j,j'\in\cM}(P_2^{j}-P_2^{j'})\mathbf{1}_{\{\alpha_{t-}=j'\}}\dd \tilde N^{j'j}. \end{align*} We define, for $n\geq 1$, the following stopping time $\tau_n$: \begin{align*} \tau_n:=\inf\{t\geq 0 : |X_{t}|\geq n\}\wedge T, \end{align*} with the convention that $\inf\emptyset=\infty$. Obviously, $\tau_n\uparrow T$ a.s. as $n\uparrow\infty$.
Summing the two equations above, integrating from $0$ to $\tau_n$, and then taking expectations, we deduce \begin{align}\label{lower2} &\E\Big[ P^{\alpha_{\tau_n}}_{1,{\tau_n}}(X_{\tau_n}^+)^2+P^{\alpha_{\tau_n}}_{2,{\tau_n}}(X_{\tau_n}^-)^2\Big]\nn\\ &\qquad\qquad+\E\int_0^{\tau_n}\Big[u_{1}^{\top}R_{1}u_{1} +QX^2+\int_{\cZ}(u_{2}(z)^{\top}R_{2}(z)u_{2}(z))\nu(\dz)\Big]\dt\nn\\ &=P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2 +\E\int_0^{\tau_n}\Big[u_1^{\top}(R_1+\mathbf{1}_{\{X>0\}}P_1D^{\top}D+\mathbf{1}_{\{X\leq0\}}P_2D^{\top}D)u_1\nn\\ &\qquad+2u_1^{\top}(P_1B_1+D^{\top}(P_1C+\Lambda_1))X^+ -2u_1^{\top}(P_2B_1+D^{\top}(P_2C+\Lambda_2))X^-\nn\\ &\qquad -H_{11}^{\alpha_{t-},*}(P_1,\Lambda_1)(X^+)^2-H_{21}^{\alpha_{t-},*}(P_2,\Lambda_2)(X^-)^2\Big]\dt\nn\\ &\qquad + \E\int_0^{\tau_n}\int_{\cZ}\Big[u_2^{\top}R_2u_2 +2P_1X^+\Big(B_2(z)^{\top}u_2(z)-\sum_{k=1}^{n_2}(E_k(z)X+F_k(z)u_2(z))\Big)\nn\\ &\qquad\qquad-2P_2X^-\Big(B_2(z)^{\top}u_2(z)-\sum_{k=1}^{n_2}(E_k(z)X+F_k(z)u_2(z))\Big)\nn\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{1}+\Gamma_{1k})\Big(\big((X+E_k(z)X+F_k(z)u_2(z))^+\big)^2-(X^+)^2\Big)\nn\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{2}+\Gamma_{2k})\Big(\big((X+E_k(z)X+F_k(z)u_2(z))^-\big)^2-(X^-)^2\Big)\nn\\ &\qquad\qquad-H_{12}^{\alpha_{t-},*}(P_1,P_2,\Gamma_1, \Gamma_2)(X^+)^2 -H_{22}^{\alpha_{t-},*}(P_1,P_2,\Gamma_1, \Gamma_2)(X^-)^2\Big]\nu(\dz)\dt. \end{align} We will denote by $\phi(X,u)$ the right-hand side (RHS) of the above equation and show $\phi(X,u)\geq 0, ~\dptv$, for any $u\in\mathcal{U}$. Indeed, let us define \begin{align*} v_t=(v_{1,t},v_{2,t}(z))= \begin{cases} \Big(\frac{u_{1,t}}{|X_{t-}|},\frac{u_{2,t}(z)}{|X_{t-}|}\Big), & \ \mbox {if} \ |X_{t-}|>0;\\ (0,0), & \ \mbox {if} \ |X_{t-}|=0. \end{cases} \end{align*} It is clear that the above process $v$ is valued in $\Pi_1\times\Pi_2$ since $\Pi_1, \Pi_2$ are cones.
If $X_{t-}>0$, then \begin{align*} \phi(X,u)&=X^2\Big[v_1^{\top}(R_1+P_1D^{\top}D)v_1+2v_1^{\top}(P_1B_1+D^{\top}(P_1C+\Lambda_1)) -H_{11}^{\alpha_{t-},*}(P_1,\Lambda_1)\Big]\\ &\qquad+X^2\int_{\cZ}\Big[v_2^{\top}R_2v_2+2P_1B_2(z)^{\top}v_2(z)-2P_1\sum_{k=1}^{n_2}(E_k(z)+F_k(z)v_2(z))\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{1}+\Gamma_{1k})\Big(\big((1+E_k(z)+F_k(z)v_2(z))^+\big)^2-1\Big)\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{2}+\Gamma_{2k})\big((1+E_k(z)+F_k(z)v_2(z))^-\big)^2\\ &\qquad\qquad-H_{12}^{\alpha_{t-},*}(P_1,P_2,\Gamma_1, \Gamma_2)\Big]\nu(\dz)\geq 0, \end{align*} from the definitions of $H_{11}^{i,*}, H_{12}^{i,*}$. Moreover, the equality holds at \begin{align*} u^*_1(t,X,\alpha)=\hat v_{11}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_1,\Lambda^{\alpha_{t-}}_1)X_{t-}^+, \ u^*_2(t,X,\alpha)=\hat v_{12}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_1,P^{\alpha_{t-}}_2,\Gamma^{\alpha_{t-}}_1, \Gamma^{\alpha_{t-}}_2)X_{t-}^+. \end{align*} Next, if $X_{t-}<0$, then \begin{align*} \phi(X,u)&=X^2\Big[v_1^{\top}(R_1+P_2D^{\top}D)v_1-2v_1^{\top}(P_2B_1+D^{\top}(P_2C+\Lambda_2)) -H_{21}^{\alpha_{t-},*}(P_2,\Lambda_2)\Big]\\ &\qquad+X^2\int_{\cZ}\Big[v_2^{\top}R_2v_2-2P_2B_2(z)^{\top}v_2(z)-2P_2\sum_{k=1}^{n_2}(E_k(z)-F_k(z)v_2(z))\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{1}+\Gamma_{1k})\big((-1-E_k(z)+F_k(z)v_2(z))^+\big)^2\\ &\qquad\qquad+\sum_{k=1}^{n_2}(P_{2}+\Gamma_{2k})\Big(\big((-1-E_k(z)+F_k(z)v_2(z))^-\big)^2-1\Big)\\ &\qquad\qquad-H_{22}^{\alpha_{t-},*}(P_1,P_2,\Gamma_1, \Gamma_2)\Big]\nu(\dz)\geq 0, \end{align*} from the definitions of $H_{21}^{i,*}, H_{22}^{i,*}$. Moreover, the equality holds at \begin{align*} u^*_1(t,X,\alpha)=\hat v_{21}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_2,\Lambda^{\alpha_{t-}}_2)X_{t-}^-, \ u^*_2(t,X,\alpha)=\hat v_{22}^{\alpha_{t-}}(t,P^{\alpha_{t-}}_1,P^{\alpha_{t-}}_2,\Gamma^{\alpha_{t-}}_1, \Gamma^{\alpha_{t-}}_2)X_{t-}^-.
\end{align*} Finally, when $X_{t-}=0$, we have \begin{align*} \phi(X,u)&=u_1^{\top}(R_1+P_2D^{\top}D)u_1+\int_{\cZ}\Big[u_2^{\top}R_2u_2 +\sum_{k=1}^{n_2}(P_{1}+\Gamma_{1k})\big((F_ku_2)^+\big)^2\\ &\qquad+\sum_{k=1}^{n_2}(P_{2}+\Gamma_{2k})\big((F_ku_2)^-\big)^2\Big]\nu(\dz)\geq0; \end{align*} here the equality holds at $u^*_1=0$, $u^*_2=0$. The above analysis together with \eqref{lower2} shows that \begin{align*} &\E\Big[ P^{\alpha_{\tau_n}}_{1,{\tau_n}}(X_{\tau_n}^+)^2+P^{\alpha_{\tau_n}}_{2,{\tau_n}}(X_{\tau_n}^-)^2\Big]+\E\int_0^{\tau_n}\Big[u_{1}^{\top}R_{1}u_{1} +QX^2+\int_{\cZ}u_{2}(z)^{\top}R_{2}(z)u_{2}(z)\nu(\dz)\Big]\dt\\ &\geq P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2. \end{align*} Note that for any $u\in\mathcal{U}$, the corresponding state process satisfies $X\in S^{2}_{\mathbb{F}}(0,T;\R)$. Sending $n\rightarrow\infty$, we conclude, from the dominated convergence theorem, that \eqref{lower1} holds for any $u\in\mathcal{U}$, where the equality is achieved when $u^*$ is defined by \eqref{ustar}. \eof \begin{lemma} Under the condition of Theorem \ref{Th:verif}, $(u^*_1(t,X,\alpha),u^*_2(t,X,\alpha))\in\mathcal{U}$. \end{lemma} \pf It is clear that $(u^*_1(t,X,\alpha),u^*_2(t,X,\alpha))$ is valued in $\Pi_1\times\Pi_2$. It remains to prove $$(u^*_1(t,X,\alpha),u^*_2(t,X,\alpha))\in L^2_\mathbb{F}(0, T;\R^{m_1})\times L^{2,\nu}_{\mathcal{P}}(0, T;\R^{m_2}).$$ Substituting \eqref{ustar} into the state process \eqref{state}, we have \begin{align}\label{stateoptimal} \begin{cases} \dd X_t=\left[AX+B^{\top}(\hat v_{11}X^++\hat v_{21}X^-)+\int_{\cZ}B(z)^{\top}(\hat v_{12}X^++\hat v_{22}X^-)\nu(\dz)\right]\dt\\ \qquad\qquad\qquad+\left[CX+D (\hat v_{11}X^++\hat v_{21}X^-)\right]^{\top}\dw_t\\ \qquad\qquad\qquad+\int_{\cZ}\left[E(z)X+F(z)(\hat v_{12}X^++\hat v_{22}X^-)\right]^{\top}\tilde N(\dt,\dz), \ t\in[0,T], \\ X_0=x,\ \alpha_0=i_0. \end{cases} \end{align} According to \cite[Theorem 3.5]{HSX}, we have $\hat v_{11}, \hat v_{21}\in L^{2}_{\mathbb{F}}(0, T;\R^{m_1})$.
From \eqref{boundeddomain}, we know that $\hat v_{12},\hat v_{22}\in L^{\infty}_{\mathcal{P}}(0, T;\R^{m_2})$. By the basic theorem of Gal'chuk \cite[p.756-757]{Gal}, the SDE \eqref{stateoptimal} admits a unique solution, denoted by $X^*$. From the proof of Lemma \ref{lemma1}, we find that, for any stopping time $\iota\leq T$, \begin{align}\label{iota} &\E\Big[ P^{\alpha_{\theta_n\wedge\iota}}_{1,\theta_n\wedge\iota}((X^*_{\theta_n\wedge\iota})^+)^2 +P^{\alpha_{\theta_n\wedge\iota}}_{2,\theta_n\wedge\iota}((X^*_{\theta_n\wedge\iota})^-)^2\Big]\nn\\ &\qquad\qquad+\E\int_0^{\theta_n\wedge\iota}\Big[(u^*_{1})^{\top}R_{1}u^*_{1} +Q(X^*)^2+\int_{\cZ}(u^*_{2}(z)^{\top}R_{2}(z)u^*_{2}(z))\nu(\dz)\Big]\dt\nn\\ &= P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2, \end{align} where \begin{align*} \theta_n:=\inf\{t\geq 0:|X^*_{t}|\geq n\}\wedge T. \end{align*} \underline{When Assumption \ref{assu2} holds.} We have, from \eqref{iota}, \begin{align*} \delta\E\int_0^{\theta_n\wedge T}\Big[|u^*_{1}|^2 +\int_{\cZ}|u^*_{2}(z)|^2\nu(\dz)\Big]\dt\leq P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2. \end{align*} Letting $n\rightarrow\infty$, it follows from the monotone convergence theorem that $(u^*_1,u^*_2)\in L^2_\mathbb{F}(0, T;\R^{m_1})\times L^{2,\nu}_{\mathcal{P}}(0, T;\R^{m_2})$. \underline{When Assumption \ref{assu3} holds.} In this case, there exists $c>0$ such that $P^i_j\geq c, \ P^i_{j}+\Gamma^i_{jk}\geq c, \ i\in\cM, \ j=1,2, \ k=1,\ldots, n_2$. From \eqref{iota}, we get \begin{align*} c\E[ |X^*_{\theta_n\wedge\iota}|^2]\leq P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2. \end{align*} Letting $n\rightarrow\infty$, it follows from Fatou's lemma that \begin{align*} \E[ |X^*_{\iota}|^2]\leq c^{-1}\big(P^{i_0}_{1,0}(x^+)^2+P^{i_0}_{2,0}(x^-)^2\big)=:c_4, \end{align*} for any stopping time $\iota\leq T$. This further implies \begin{align*} \E \int_{0}^{ T}|X^*_{t}|^2\dt \leq c_4T.
\end{align*} Applying the It\^o formula to $|X^*_{t}|^2$ yields \begin{align*} &\quad x^2+\E\int_0^{\theta_n\wedge T} |Du^*_1|^2\dt+\E\int_0^{\theta_n\wedge T}\int_{\cZ}|Fu^*_2| ^2\nu(\dz)\dt\\ &= \E [|X^*_{\theta_n\wedge T}|^2]-\E\int_0^{\theta_n\wedge T}\Big[(2A+|C|^2+\int_{\cZ}|E(z)|^2\nu(\dz))|X^*_{t}|^2\\ &\qquad+2X^*_{t}(B_1+D^{\top}C)^{\top}u_1^* +2X^*_{t}\int_{\cZ}(B_2^{\top}u_2^*+\sum_{k=1}^{n_2}E_kF_ku_2^*)\nu(\dz) \Big]\dt. \end{align*} If \ref{R12} and \ref{R22} hold, we have \begin{align*} &\quad \delta\E\int_0^{\theta_n\wedge T}|u^*_1|^2\dt+\delta\E\int_0^{\theta_n\wedge T}\int_{\cZ}|u^*_2| ^2\nu(\dz)\dt\\ &\leq c +c\E\int_0^{\theta_n\wedge T}\Big[|X^*_{t}|^2+2|X^*_{t}||u_1^* | +2|X^*_{t}|\int_{\cZ}|u_2^*|\nu(\dz) \Big]\dt\\ &\leq c +c\Big(1+\frac{2}{\delta}+\frac{2\nu(\cZ)}{\delta}\Big)\E\int_0^{\theta_n\wedge T}|X^*_{t}|^2\dt+\frac{\delta}{2}\E\int_0^{\theta_n\wedge T}|u_1^* |^2\dt +\frac{\delta}{2}\E\int_0^{\theta_n\wedge T}\int_{\cZ}|u_2^*|^2\nu(\dz) \dt. \end{align*} After rearrangement, it follows from the monotone convergence theorem that \begin{align*} \E\int_0^{ T}|u^*_1|^2\dt+\E\int_0^{ T}\int_{\cZ}|u^*_2| ^2\nu(\dz)\dt\leq c. \end{align*} Hence $(u^*_1,u^*_2)\in L^2_\mathbb{F}(0, T;\R^{m_1})\times L^{2,\nu}_{\mathcal{P}}(0, T;\R^{m_2})$. If \ref{R12} and \ref{R21} hold, $u^*_1\in L^2_\mathbb{F}(0, T;\R^{m_1})$ follows exactly as above. On the other hand, we obtain $u_2^*\in L^{2,\nu}_{\mathcal{P}}(0, T;\R^{m_2})$ from \eqref{iota}. The last case in Assumption \ref{assu3} can be handled similarly. \eof \begin{thebibliography}{99} \bibitem {BBP} Barles G, Buckdahn R, Pardoux E. Backward stochastic differential equations and integral-partial differential equations. Stochastics, 1997, 60(1-2): 57-83. \bibitem {Bi} Bismut J. Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim., 1976, 14(3): 419-444. \bibitem {FSX} Fu G, Shi X, Xu Z Q.
The system of BSDEs with singular terminal values arising in optimal liquidation with regime switching. Preprint. \bibitem {Gal} Gal'chuk L I. Existence and uniqueness of a solution for stochastic equations with respect to semimartingales. Theory Probab. Appl., 1979, 23(4): 751-763. \bibitem {Goreac} Goreac D. A note on the controllability of jump diffusions with linear coefficients. IMA J. Math. Control Inform., 2012, 29(3): 427-435. \bibitem {HSX} Hu Y, Shi X, Xu Z Q. Constrained stochastic LQ control with regime switching and application to portfolio selection. Ann. Appl. Probab., 2022, 32(1): 426-460. \bibitem {HSX2} Hu Y, Shi X, Xu Z Q. Constrained stochastic LQ control on infinite time horizon with regime switching. ESAIM Control Optim. Calc. Var., 2022, 28: 5. \bibitem {HSX3} Hu Y, Shi X, Xu Z Q. Non-homogeneous stochastic LQ control with regime switching and random coefficients. Math. Control Relat. Fields, 2024, 14(2): 671-694. \bibitem {HSX4} Hu Y, Shi X, Xu Z Q. Comparison theorems for multi-dimensional BSDEs with jumps and applications to constrained stochastic linear-quadratic control. arXiv:2311.06512, 2023. \bibitem {HZ} Hu Y, Zhou X. Constrained stochastic LQ control with random coefficients, and application to portfolio selection. SIAM J. Control Optim., 2005, 44(2): 444-466. \bibitem {Ka} Kazamaki N. Continuous exponential martingales and BMO. Springer, 2006. \bibitem {KT} Kohlmann M, Tang S. Global adapted solution of one-dimensional backward stochastic Riccati equations, with application to the mean-variance hedging. Stochastic Process. Appl., 2002, 97: 255-288. \bibitem {LWY} Li N, Wu Z, Yu Z. Indefinite stochastic linear-quadratic optimal control problems with random jumps and related stochastic Riccati equations. Sci. China Math., 2018, 61: 563-576. \bibitem {LM} Liu Y, Ma J. Optimal reinsurance/investment problems for general insurance models. Ann. Appl. Probab., 2009, 19(4): 1495-1528. \bibitem {Protter} Protter P.
Stochastic Integration and Differential Equations, 2nd edition. Springer, Berlin, 2005. \bibitem {Song} Song Y. The Partial Controllability of Linear Stochastic Control Systems with Terminal Constraints and Its Applications to Game-Based Control Systems with Jumps. SIAM J. Control Optim., 2023, 61(6): 3635-3663. \bibitem {SXY} Sun J, Xiong J, Yong J. Indefinite stochastic linear-quadratic optimal control problems with random coefficients: Closed-loop representation of open-loop optimal controls. Ann. Appl. Probab., 2021, 31(1): 460-499. \bibitem {Tang03} Tang S. General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and stochastic Riccati equations. SIAM J. Control Optim., 2003, 42(1): 53-75. \bibitem {Tang15} Tang S. Dynamic programming for general linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim., 2015, 53(2): 1082-1106. \bibitem {TL} Tang S, Li X. Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim., 1994, 32(5): 1447-1475. \bibitem {WLX} Wen J, Li X, Xiong J. Weak closed-loop solvability of stochastic linear quadratic optimal control problems of Markovian regime switching system. Appl. Math. Optim., 2020: 1-31. \bibitem {Wo} Wonham W. On a matrix Riccati equation of stochastic control. SIAM J. Control, 1968, 6(4): 681-697. \bibitem {WSZD} Wu F, Shen Y, Zhang X, Ding K. Optimal claim-dependent proportional reinsurance under a self-exciting claim model. J. Optim. Theory Appl., 2024: 1-27. \bibitem {YZ} Yong J, Zhou X. Stochastic controls: Hamiltonian systems and HJB equations. Springer, New York, 1999. \bibitem {ZDM} Zhang F, Dong Y, Meng Q. Stochastic Riccati equation with jumps associated with stochastic linear quadratic optimal control with jumps and random coefficients. SIAM J. Control Optim., 2020, 58(1): 393-424. \bibitem {ZLX} Zhang X, Li X, Xiong J.
Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems of Markovian regime switching system. ESAIM Control Optim. Calc. Var., 2021, 27: 1-35. \end{thebibliography} \end{document}
2412.19095v1
http://arxiv.org/abs/2412.19095v1
On Laplacian and Distance Laplacian Spectra of Generalized Fan Graph & a New Graph Class
\documentclass[12pt]{article} \usepackage{tikz,float,hyperref,collref} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[margin=2.75cm]{geometry} \usepackage{amsmath,amsfonts,mathtools,authblk,amssymb,amsthm} \usepackage{cleveref,graphicx,tabularx,ragged2e} \usepackage{booktabs,dirtytalk,multicol} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{obs}[theorem]{Observation} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{claim}[theorem]{Claim} \newtheorem{note}{Note}[section] \newtheorem{conjecture}[theorem]{Conjecture} \allowdisplaybreaks \date{} \title{On Laplacian and Distance Laplacian Spectra of Generalized Fan Graph \& a New Graph Class} \author{\noindent\large Subarsha Banerjee$^{1}$\footnote{Corresponding author.\\ Email address: \href{mailto:[email protected]}{[email protected]/[email protected]}}, and Soumya Ganguly$^{2}$ } \affil{$^{1}$\small \footnotesize Department of Mathematics, JIS University, Kolkata, West Bengal 700109, India. \\ $^{2}$\small \footnotesize BTech(2nd Year), Department of Computer Science \& Engineering, JIS University, Kolkata, West Bengal 700109, India.} \begin{document} \maketitle \begin{abstract} Given a graph $G$, the Laplacian matrix of $G$, $L(G)$, is the difference of the degree matrix $\text{Deg}(G)$ and the adjacency matrix $A(G)$, where $\text{Deg}(G)$ is the diagonal matrix of vertex degrees. The distance Laplacian matrix $D^L({G})$ is the difference of the transmission matrix of $G$ and the distance matrix of $G$. In this paper, we first obtain the Laplacian and distance Laplacian spectrum of generalized fan graphs. We then introduce a new graph class which is denoted by $\mathcal{NC}(F_{m,n})$.
Finally, we determine the Laplacian spectrum and the distance Laplacian spectrum of $\mathcal{NC}(F_{m,n})$. \end{abstract} \textbf{Keywords:} Laplacian spectrum; distance Laplacian spectrum; generalized fan graph; equitable partition. \\ \textbf{2010 Mathematics Subject Classification:} 05C07, 05C12, 05C50. \section{Introduction} Throughout the paper, $G$ shall denote a finite, simple, and undirected graph. Let $V(G)=\{v_1,v_2,\dots, v_n\}$ denote the set of all vertices of $G$, and let $E(G)$ denote the set of all edges of $G$. The \textit{order} of $G$ is the number of elements in $V(G)$. Let $v_i,v_j\in V(G)$. We say that the vertex $v_i$ is \textit{adjacent} to $v_j$ provided there is an edge between $v_i$ and $v_j$. If the vertices $v_i$ and $v_j$ are adjacent to each other, we write $v_i\sim v_j$. The total number of vertices in $G$ that are adjacent to a given vertex $v$ is known as the \textit{degree} of $v$. The \textit{join} of two graphs $G_1$ and $G_2$ is denoted by $G_1+G_2$. The \textit{adjacency} matrix of $G$ is the $n\times n$ matrix $A(G)=(a_{ij})_{n\times n}$ defined as follows: $a_{ij}=\begin{cases} 1 & \text{ if } v_i\sim v_j\\ 0 & \text{ elsewhere }. \end{cases}$. The \textit{Laplacian} matrix of $G$ is the $n\times n$ matrix $L(G)=(l_{ij})_{n\times n}$ defined as follows: $l_{ij}=\begin{cases} d_i & \text{ if } i=j\\ -1 & \text{ if } v_i\sim v_j\\ 0 & \text{ elsewhere }. \end{cases}$. Here, $d_i$ denotes the degree of the $i^{th}$ vertex $v_i$. All eigenvalues of the Laplacian matrix $L(G)$ of a graph $G$ are real numbers. Moreover, $L(G)$ is a positive semidefinite matrix, so all its eigenvalues are non-negative. It is known that the entries in each row of a Laplacian matrix sum to zero. Thus, the determinant of $L(G)$ is always $0$, and hence $0$ is always an eigenvalue of $L(G)$. A sequence of vertices and edges in a graph $G$ is known as a \textit{walk}.
A walk is said to be \textit{closed} if the starting vertex is the same as the end vertex. If all the edges in a walk are distinct, then the walk is known as a \textit{trail}. A \textit{path} is a trail in which no vertex is repeated. A closed path is said to be a \textit{cycle}. The number of edges in a path is known as the \textit{length} of the path. The \textit{distance} matrix of a connected graph $G$ is defined as $D(G)=(d_{ij})_{n\times n}$, where $d_{ij}=d(v_i,v_j)$ is the distance between two vertices $v_i$ and $v_j$. The sum of distances from a vertex $v$ to all other vertices of ${G}$ is known as the \textit{transmission} of $v$. The transmission of a vertex $v$ is denoted by $Tr(v).$ The \textit{transmission matrix} of $G$ is the $n\times n$ diagonal matrix whose $i^{th}$ diagonal entry is the transmission of $v_i$, all off-diagonal entries being $0$. The \textit{distance Laplacian} matrix $D^L({G})$ of a connected graph $G$ is defined as $D^L({G})=Tr({G})-D({G})$. It was introduced in \cite{1}. The \textit{distance signless Laplacian} matrix $D^Q({G})$ is defined as $D^{Q}({G})=Tr({G})+D({G})$. Recently, researchers have studied these two matrices extensively; see, for example, \cite{2}, \cite{3}, \cite{4}, \cite{5}, \cite{6}, \cite{7}, and \cite{8}. Both the distance Laplacian matrix and the distance signless Laplacian matrix of a graph are positive semi-definite, and consequently both have non-negative eigenvalues. Over the last few decades, various researchers have pondered whether it is possible to predict the eigenvalues of a graph by observing the structure of the graph. One way to study the given problem is to perform various graph operations and create new graphs from existing graphs. Several graph operations have been introduced by researchers till now, some of them being the \textit{join} of two graphs, \textit{disjoint union}, \textit{Cartesian product}, \textit{direct product}, and \textit{lexicographic product}.
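As an illustrative aside (ours, not part of the original text), the definitions of the distance matrix, the transmission matrix, and the distance Laplacian above can be checked on the path $P_4$. The sketch below assumes NumPy is available; it computes $D^L(P_4)=Tr(P_4)-D(P_4)$ and verifies the stated properties that each row sums to zero and that the matrix is positive semi-definite.

```python
# Illustrative sketch (ours, not from the paper): the distance Laplacian
# D^L(G) = Tr(G) - D(G) of the path P_4, checking the properties stated
# above (rows sum to zero; positive semi-definiteness).
import numpy as np

def distance_matrix(adj):
    """All-pairs shortest-path distances via Floyd-Warshall."""
    n = len(adj)
    d = np.where(np.asarray(adj, dtype=float) > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

adj_P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
D = distance_matrix(adj_P4)
Tr = np.diag(D.sum(axis=1))   # transmission matrix: Tr(v_i) on the diagonal
DL = Tr - D                   # distance Laplacian D^L(P_4)

eigs = np.linalg.eigvalsh(DL)
print(np.allclose(DL.sum(axis=1), 0))   # each row sums to zero
print(bool((eigs > -1e-9).all()))       # all eigenvalues non-negative
```

In particular, $0$ is an eigenvalue of $D^L(P_4)$ with the all-ones eigenvector, mirroring the Laplacian case discussed earlier.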
Several variants of the corona product of two graphs have also been introduced and studied by various researchers in the recent past. Readers may refer to the papers \cite{9}, \cite{10}, \cite{11}, \cite{12}, \cite{13}, and \cite{14} for a detailed discussion in this regard. Moreover, researchers have determined the eigenvalues of the resulting graphs in terms of the eigenvalues of the constituent graphs. Readers are referred to the papers \cite{15} and \cite{16} for more details. Recently, in \cite{17}, the authors have determined the distance Laplacian and distance signless Laplacian spectrum of \textit{generalized wheel graphs}. They have also introduced a new graph class and named it the \textit{dumbbell graph.} The authors continued their study on dumbbell graphs in \cite{18}. The above works motivate us to study the Laplacian as well as the distance Laplacian spectrum of the \textit{generalized fan graph} in this paper. We also introduce a new graph class and deduce its Laplacian and distance Laplacian spectra. \section{Preliminaries} \label{S2} The following definitions and theorems will be used in the subsequent sections. \begin{definition}\cite{19} \label{EqP} Let $M$ be an order-$n$ matrix defined as follows: \begin{center} \( \begin{pmatrix} M_{11} & \cdots & M_{1t} \\ \vdots & \ddots & \vdots \\ M_{t1} & \cdots & M_{tt} \end{pmatrix}. \) \end{center} Each block $M_{ij}$ has order $n_i\times n_j$ for $1\leq i, j\leq t$, and $M$ is equal to its transpose. Moreover, $n=n_1+\cdots+n_t$. For $1\leq i, j\leq t$, let $b_{ij}$ denote the number obtained by adding all the entries in $M_{ij}$ and then dividing by the number of rows $n_i$. The matrix $B=(b_{ij})$ so obtained is known as the \textit{quotient} matrix of $M$. Additionally, if for each pair $i,j$, the sum of the entries in each row of $M_{ij}$ is constant, then we call $B$ the \textit{equitable quotient} matrix of $M$.
\end{definition} There exists a relation between the set of eigenvalues of $B$ and $M$, which is given by the following theorem. \begin{theorem}\cite[Lemma $2.3.1$]{19} \label{P1} If $\rho(M)$ is the set of eigenvalues of $M$, and $\rho(B)$ is the set of eigenvalues of $B$, then $\rho(B)$ is contained in $\rho(M)$. \end{theorem} \section{Laplacian Spectra of Generalized Fan Graph and a New Graph Class} We first determine the eigenvalues of Laplacian matrix of generalized fan graphs. We then introduce a new graph class and determine its Laplacian spectrum. \begin{definition} The generalized fan graph, denoted by $F_{m,n}$, is given by $F_{m,n}=\overline K_m+P_n$, where $\overline{K}_m$ is the null graph on $m$ vertices, and $P_n$ is the path graph on $n$ vertices. \end{definition} To determine the Laplacian spectrum of the generalized fan graph $F_{m,n}$, we shall first require the following result from \cite[Corollary 3.7]{20}. \begin{theorem} \label{Thjoin} Let $G_1+ G_2$ denote the join of two graphs $G_1$ and $G_2$. Then \begin{flalign*} \mu(G_1+ G_2;x)=\frac{x(x-n_1-n_2)}{(x-n_1)(x-n_2)}\mu(G_1,x-n_2)\mu(G_2,x-n_1), \end{flalign*} where $n_1$ and $n_2$ are orders of $G_1$ and $G_2$ respectively. \end{theorem} \begin{theorem} \label{II} If $m,n\ge 2$, then the Laplacian eigenvalues of $F_{m,n}$ are $0$ having multiplicity $1$, $m+n$ having multiplicity $1$, $n$ having multiplicity $m-1$, and $m+2-2\cos \frac{\pi j}{n}$ having multiplicity $1$ for $1\le j\le n-1$. \end{theorem} \begin{proof} We know that the Laplacian eigenvalues of $\overline K_m$ are $0$ having multiplicity $m$. Hence, $\mu(\overline{K}_m;x)=x^m$. Moreover, using \cite[Section 1.4.4]{19}, we find that the Laplacian eigenvalues of $P_n$ are $2-2\cos (\frac{\pi j}{n})$, where $ 0\le j\le n-1$. Hence, the characteristic polynomial of the Laplacian matrix of ${P}_n$ is given as follows: \begin{flalign*} \mu(P_n;x)&=x \times \bigg[ \prod_{j=1}^{n-1}\bigg(x-2+2\cos \frac{\pi j}{n}\bigg)\bigg]. 
\end{flalign*} Thus, using \Cref{Thjoin}, we get, \begin{flalign*} \mu(F_{m,n};x)&=\frac{x(x-m-n)}{(x-m)(x-n)}\times \mu(\overline{K}_m,x-n)\times \mu(P_n,x-m) \\ &=\frac{x(x-m-n)}{(x-m)(x-n)}\times (x-n)^m \times (x-m) \times \bigg[ \prod_{j=1}^{n-1}\bigg(x-m-2+2\cos \frac{\pi j}{n}\bigg)\bigg] \\ &=x(x-m-n)\times (x-n)^{m-1} \times \bigg[ \prod_{j=1}^{n-1}\bigg(x-m-2+2\cos \frac{\pi j}{n}\bigg)\bigg]. \end{flalign*} Hence the result follows. \end{proof} \begin{corollary} The Laplacian spectrum of the usual fan graph $F_{1,n}$ consists of $0$ having multiplicity $1$, $1+n$ having multiplicity $1$, and $3-2\cos \frac{\pi j}{n}$ having multiplicity $1$ for $1\le j\le n-1$. \end{corollary} \begin{proof} The proof follows from \cref{II} by putting $m=1$. \end{proof} We shall now introduce a new graph class and derive the Laplacian spectrum of the same. We shall denote the new graph class by $\mathcal{NC}(F_{m,n})$. We shall define the new graph in what follows. \begin{definition} \label{Def1} The graph $\mathcal{NC}(F_{m,n})$ has $2(m + n)$ vertices and is obtained by taking two copies of the generalized fan graph $F_{m,n}$, where $m,n \ge 2$, and joining their $m$ central vertices pairwise through $m$ edges. \end{definition} We shall now illustrate the newly defined graph class $\mathcal{NC}(F_{m,n})$ with an example in what follows. \begin{example} We consider $m=3$ and $n=4$. We have the following two graphs, namely $\overline K_3$ and $P_4$. We shall first construct the generalized fan graph $F_{m,n}$.
\begin{multicols}{2} \begin{figure}[H] \begin{tikzpicture}[scale=0.5] \node[shape=circle,draw=black] (0) at (0,0) {$0$}; \node[shape=circle,draw=black] (1) at (3,3) {$1$}; \node[shape=circle,draw=black] (2) at (6,0) {$2$}; \end{tikzpicture} \caption{$\overline K_3$} \label{Figure 1} \end{figure} \begin{figure}[H] \begin{tikzpicture}[scale=0.75] \node[shape=circle,draw=black] (0) at (3,0) {$a$}; \node[shape=circle,draw=black] (1) at (6,0) {$b$}; \node[shape=circle,draw=black] (2) at (9,0) {$c$}; \node[shape=circle,draw=black] (3) at (12,0) {$d$}; \draw (0) -- (1); \draw (1) -- (2); \draw (2) -- (3); \end{tikzpicture} \caption{$P_4$} \label{Figure 2} \end{figure} \end{multicols} Using $\overline{K}_3$ and $P_4$, the generalized fan graph $F_{3,4}$ is given as follows: \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.75] \node[shape=circle,draw=black] (0) at (0,3) {$a$}; \node[shape=circle,draw=black] (1) at (0,6) {$b$}; \node[shape=circle,draw=black] (2) at (0,9) {$c$}; \node[shape=circle,draw=black] (3) at (0,12) {$d$}; \node[shape=circle,draw=black] (a) at (9,9) {$0$}; \node[shape=circle,draw=black] (b) at (9,5) {$2$}; \node[shape=circle,draw=black] (c) at (9,7) {$1$}; \draw (0) -- (a); \draw (0) -- (b); \draw (0) -- (c); \draw (0) -- (1); \draw (1) -- (2); \draw (1) -- (2); \draw (2) -- (3); \draw (1) -- (a); \draw (1) -- (b); \draw (1) -- (c); \draw (2) -- (a); \draw (2) -- (b); \draw (2) -- (c); \draw (3) -- (a); \draw (3) -- (b); \draw (3) -- (c); \end{tikzpicture} \caption{The generalized fan graph $F_{3,4}$.} \label{Figure 3} \end{figure} Using \Cref{Def1}, the new graph class $\mathcal{NC}(F_{3,4})$ is given as follows: \begin{figure}[H] \begin{multicols}{2} \begin{tikzpicture}[scale=0.75] \node[shape=circle,draw=black] (0) at (2,3) {$a$}; \node[shape=circle,draw=black] (1) at (2,6) {$b$}; \node[shape=circle,draw=black] (2) at (2,9) {$c$}; \node[shape=circle,draw=black] (3) at (2,12) {$d$}; \node[shape=circle,draw=black] (a) at (9,9) {$0$}; 
\node[shape=circle,draw=black] (b) at (9,5) {$2$}; \node[shape=circle,draw=black] (c) at (9,7) {$1$}; \draw (0) -- (a); \draw (0) -- (b); \draw (0) -- (c); \draw (0) -- (1); \draw (1) -- (2); \draw (2) -- (3); \draw (1) -- (a); \draw (1) -- (b); \draw (1) -- (c); \draw (2) -- (a); \draw (2) -- (b); \draw (2) -- (c); \draw (3) -- (a); \draw (3) -- (b); \draw (3) -- (c); \node[shape=circle,draw=black] (a1) at (12,9) {$0$}; \node[shape=circle,draw=black] (b1) at (12,5) {$2$}; \node[shape=circle,draw=black] (c1) at (12,7) {$1$}; \node[shape=circle,draw=black] (01) at (19,3) {$a$}; \node[shape=circle,draw=black] (11) at (19,6) {$b$}; \node[shape=circle,draw=black] (21) at (19,9) {$c$}; \node[shape=circle,draw=black] (31) at (19,12) {$d$}; \draw (01) -- (a1); \draw (01) -- (b1); \draw (01) -- (c1); \draw (01) -- (11); \draw (11) -- (21); \draw (21) -- (31); \draw (11) -- (a1); \draw (11) -- (b1); \draw (11) -- (c1); \draw (21) -- (a1); \draw (21) -- (b1); \draw (21) -- (c1); \draw (31) -- (a1); \draw (31) -- (b1); \draw (31) -- (c1); \draw (a) -- (a1); \draw (b) -- (b1); \draw (c) -- (c1); \end{tikzpicture} \end{multicols} \caption{The graph $\mathcal{NC}(F_{3,4})$.} \label{Figure3} \end{figure} \end{example} We shall now determine the Laplacian eigenvalues of $\mathcal{NC}(F_{m,n})$ in what follows. It is known that the Laplacian eigenvalues of $P_n$ are $0$ and $2(1-\cos \frac{\pi j}{n})$ having multiplicity $1$ for $1\le j\le n-1$.
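The Laplacian spectrum of $F_{m,n}$ obtained in \cref{II} can also be checked numerically. The following sketch (plain Python; the helper names \texttt{det} and \texttt{laplacian\_fan} are ours, and the check is illustrative rather than part of the formal argument) builds $L(F_{3,4})$ and verifies that each claimed eigenvalue, namely $0$, $m+n$, $n$ with multiplicity $m-1$, and $m+2-2\cos(\pi j/n)$ for $1\le j\le n-1$, is a root of $\det(xI-L)$.

```python
import math

def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0          # (numerically) singular
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def laplacian_fan(m, n):
    # Laplacian of F_{m,n} = (complement of K_m) + P_n:
    # centers are vertices 0..m-1, the path occupies m..m+n-1
    N = m + n
    A = [[0] * N for _ in range(N)]
    for i in range(m):
        for j in range(m, N):
            A[i][j] = A[j][i] = 1       # join edges
    for j in range(m, N - 1):
        A[j][j + 1] = A[j + 1][j] = 1   # path edges
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(N)]
            for i in range(N)]

m, n = 3, 4
L = laplacian_fan(m, n)
claimed = [0.0, float(m + n)] + [float(n)] * (m - 1) + \
          [m + 2 - 2 * math.cos(math.pi * j / n) for j in range(1, n)]
assert len(claimed) == m + n
for x in claimed:
    S = [[x * (i == j) - L[i][j] for j in range(m + n)] for i in range(m + n)]
    assert abs(det(S)) < 1e-6, x
```

The same loop with other small values of $m$ and $n$ reproduces the remaining rows of the tables in the appendix.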
\begin{theorem} \label{I} If $m,n\ge 2$, then the Laplacian eigenvalues of $\mathcal{NC}(F_{m,n})$ are as follows: \begin{enumerate} \item [$\bullet$] $2(1-\cos \frac{\pi j}{n})+m$ having multiplicity $2$ for $1\le j\le n-1$, \item [$\bullet$] $n$ having multiplicity $m-1$, \item [$\bullet$] $n+2$ having multiplicity $m-1$, \item [$\bullet$] $\frac{m+n+2}{2} \pm \frac{1}{2}\sqrt{m^2 + 2(m + 2)n + n^2 - 4m + 4}$ having multiplicity $1$, \item [$\bullet$] $m+n$ having multiplicity $1$, \item [$\bullet$] $0$ having multiplicity $1$. \end{enumerate} \end{theorem} \begin{proof} We shall first index the vertices of $P_n$, then list the vertices of $\overline{K}_m$. We again list the vertices of the second copy of $\overline{K}_m$ and finally list the vertices of the second copy of $P_n$. Thus the Laplacian matrix of $\mathcal{NC}(F_{m,n})$ is given as follows: \begin{flalign*} L(\mathcal{NC}(F_{m,n}))= \left(\begin{matrix} L(P_n)+mI && -J_{n\times m} && 0_{n\times m} && 0_{n\times n} \\ \\ -J_{m\times n} && (n+1)I_{m\times m} && -I_{m\times m} && 0_{m\times n} \\ \\ 0_{n\times m} && -I_{m\times m} && (n+1)I_{m\times m} && -J_{m\times n} \\ \\ 0_{n\times n}&& 0_{n\times m} && -J_{n\times m} && L(P_n)+mI \end{matrix}\right). \end{flalign*} Since $L(P_n)$ is singular, zero is an eigenvalue of $L(P_n)$, and an eigenvector corresponding to the eigenvalue $0$ is $\mathbf{1}=[1,1,\dots, 1]^T$. For a symmetric matrix, if $\lambda_i$ and $\lambda_j$ are two distinct eigenvalues with eigenvectors $v_i$ and $v_j$ respectively, then $v_i$ and $v_j$ are orthogonal to each other. Hence, if $\lambda(\neq 0)$ is an eigenvalue of $L(P_n)$ having eigenvector $\mathbf{v}$, then $\mathbf{1}^T\mathbf{v}=0$. Let $\mathbf{v_i}$, $2\le i\le n$, be an eigenvector corresponding to the non-zero eigenvalue $\lambda_i=2(1-\cos \frac{\pi (i-1)}{n})$ of $L(P_n)$. Let $\mathbf{V_i}=\left(\begin{array}{cc} \mathbf{v_i}_{n}\\ \mathbf{0}_{m}\\ \mathbf{0}_{m}\\\mathbf{0}_{n} \end{array}\right)$.
Now $L(\mathcal{NC}(F_{m,n}))\mathbf{V_i}= (\lambda_i+m)\mathbf{V_i}$. Thus, $\lambda_i+m$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$. Similarly, letting $\mathbf{W_i}=\left(\begin{array}{cc} \mathbf{0}_{n}\\ \mathbf{0}_{m}\\ \mathbf{0}_{m}\\\mathbf{v_i}_{n} \end{array}\right)$, we observe that $L(\mathcal{NC}(F_{m,n}))\mathbf{W_i}= (\lambda_i+m)\mathbf{W_i}$. Thus, again, we find that $\lambda_i+m$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$ for $2\le i\le n$. Hence, we observe that $\lambda_i+m$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$ having multiplicity $2$ for each $2\le i\le n$. Next, let $\mathbf{v_i}$, $2\le i\le m$, denote $m-1$ linearly independent vectors in $\mathbb{R}^m$ satisfying $\mathbf{1}^T\mathbf{v_i}=0$, and let $\mathbf{X_i}=\left(\begin{array}{cc} \mathbf{0}_{n}\\ \mathbf{v_i}_{m}\\ \mathbf{v_i}_{m}\\\mathbf{0}_{n} \end{array}\right)$. We have \begin{flalign*} &L(\mathcal{NC}(F_{m,n}))\mathbf{X_i} \\ &=\left(\begin{matrix} L(P_n)+mI && -J_{n\times m} && 0_{n\times m} && 0_{n\times n} \\ \\ -J_{m\times n} && (n+1)I_{m\times m} && -I_{m\times m} && 0_{m\times n} \\ \\ 0_{n\times m} && -I_{m\times m} && (n+1)I_{m\times m} && -J_{m\times n} \\ \\ 0_{n\times n}&& 0_{n\times m} && -J_{n\times m} && L(P_n)+mI \end{matrix}\right) \left(\begin{array}{cc} \mathbf{0}_{n}\\\\ \mathbf{v_i}_{m}\\\\ \mathbf{v_i}_{m}\\\\\mathbf{0}_{n} \end{array}\right) \\ &=\left(\begin{array}{cc} \mathbf{0}\\\\((n+1)-1)\mathbf{v_i}_{m}\\\\ ((n+1)-1)\mathbf{v_i}_{m}\\\\\mathbf{0} \end{array}\right) \\ &=\left(\begin{array}{cc} \mathbf{0}\\\\n\mathbf{v_i}_m\\\\ n\mathbf{v_i}_m\\\\\mathbf{0} \end{array}\right) \\ &=n\left(\begin{array}{cc} \mathbf{0}\\\\\mathbf{v_i}_{m}\\\\ \mathbf{v_i}_{m}\\\\\mathbf{0} \end{array}\right). \end{flalign*} We thus obtain $L(\mathcal{NC}(F_{m,n}))\mathbf{X_i}= n\mathbf{X_i}$. Thus, $n$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$. Hence, we find that $n$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$ having multiplicity $m-1$. Let $\mathbf{Y_i}=\left(\begin{array}{cc} \mathbf{0}_{n}\\ \mathbf{v_i}_{m}\\ \mathbf{-v_i}_{m}\\\mathbf{0}_{n} \end{array}\right)$.
Now $L(\mathcal{NC}(F_{m,n}))\mathbf{Y_i}= (n+2)\mathbf{Y_i}$. Thus, $n+2$ is an eigenvalue of $L(\mathcal{NC}(F_{m,n}))$ having multiplicity $m-1$. Thus, we have determined $2(n+m-2)$ eigenvalues of $L(\mathcal{NC}(F_{m,n}))$. We shall now use the notion of equitable partitions (\Cref{EqP}) together with \Cref{P1} to find the $4$ remaining eigenvalues of $L(\mathcal{NC}(F_{m,n}))$. We find that they are contained in the spectrum of the matrix $B$ given as follows: \[ B= \left( \begin{array}{cccccccc} m &&-m && 0 && 0 \\ \\ -n && n+1 && -1 && 0 \\ \\ 0 && -1 && n+1 && -n \\ \\ 0 && 0 && -m && m \end{array} \right). \] The characteristic polynomial of $B$ is: \begin{flalign*} \Theta(B,x)&=x^4 + (-2m - 2n - 2)x^3 + (m^2 + 2mn + n^2 + 4m + 2n)x^2 + (-2m^2 - 2mn)x. \end{flalign*} On solving $\Theta(B,x)=0$, we obtain the required result. \end{proof} \section{Distance Laplacian Spectrum of Generalized Fan Graph and a New Graph Class} \label{S3} In this section, we evaluate the distance Laplacian spectrum of the generalized fan graph. We then determine the distance Laplacian spectrum of the new graph class that was introduced in the previous section. To determine the distance Laplacian spectrum of the generalized fan graph, we shall need the following theorem. \begin{theorem}\label{Th1} \label{Join} Let $G_1$ be a graph on $n_1$ vertices having Laplacian eigenvalues $0=\lambda_1\le \lambda_2\le\cdots \le \lambda_{n_1}$ and $G_2$ be a graph on $n_2$ vertices having Laplacian eigenvalues $0=\mu_1\le \mu_2\le\cdots \le \mu_{n_2}$. Then the distance Laplacian spectrum of $G_1+ G_2$ consists of $n_2+2n_1-\lambda_i$ having multiplicity $1$ for $2\le i\le n_1$, $n_1+2n_2-\mu_j$ having multiplicity $1$ for $2\le j\le n_2$, and $0$ and $n_1+n_2$, each having multiplicity $1$. \end{theorem} \begin{proof} We shall first index the vertices of the graph $G_1$. We then index the vertices of the graph $G_2$.
We have: \begin{flalign*} D^L(G_1+ G_2)&= \left(\begin{matrix} D^{L_1} && -J_{n_1\times n_2} \\ \\ -J_{n_2\times n_1} && D^{L_2} \end{matrix}\right). \end{flalign*} Here, \begin{flalign*} D^{L_1}&=Tr(G_1)-D(G_1) \\ &=Tr(G_1)+A(G_1)-2J_{n_1\times n_1}+2I_{n_1\times n_1} \\ &=\bigg((n_2+2(n_1-1))I_{n_1\times n_1}\bigg)-\text{Deg}(G_1) \\&+A(G_1)-2J_{n_1\times n_1}+2I_{n_1\times n_1} \\ &=\bigg((n_2+2(n_1-1)+2)I_{n_1\times n_1}\bigg)-\text{Deg}(G_1)+A(G_1)-2J_{n_1\times n_1} \\ &=\bigg((n_2+2n_1)I_{n_1\times n_1}\bigg)-\text{Deg}(G_1)+A(G_1)-2J_{n_1\times n_1} \\ &=\bigg((n_2+2n_1)I_{n_1\times n_1}\bigg)-2J_{n_1\times n_1}-L(G_1), \end{flalign*} and, \begin{flalign*} D^{L_2}&=Tr(G_2)-D(G_2) \\ &=\bigg((n_1+2n_2)I_{n_2\times n_2}\bigg)-2J_{n_2\times n_2}-L(G_2). \end{flalign*} Since each row of the Laplacian matrix $L(G_1)$ sums to $0$, the matrix $L(G_1)$ is singular and $0$ is an eigenvalue of $L(G_1)$. Hence, we have $L(G_1)\mathbf{1}=L(G_1)[1,1,\dots, 1]^T=\mathbf{0}$. Let $\lambda_i$ be a non-zero eigenvalue of $L(G_1)$ whose eigenvector is $\mathbf{v_i}$, $2\le i\le n_1$. Moreover, $\mathbf{1}^T\mathbf{v_i}=0$. Let $\mathbf{V_i}=\left(\begin{array}{cc} \mathbf{v_i}_{n_1}\\ \mathbf{0}_{n_2} \end{array}\right)$.
We obtain, \begin{flalign*} &D^L(G_1+ G_2)\mathbf{V_i} \\ &=\left(\begin{matrix} D^{L_1} & -J_{n_1\times n_2} \\ \\ -J_{n_2\times n_1} & D^{L_2} \end{matrix}\right)\left(\begin{array}{cc} \mathbf{v_i}_{n_1}\\\\ \mathbf{0}_{n_2} \end{array}\right) \\ &=\left(\begin{array}{cc} D^{L_1}\mathbf{v_i}\\\\ \mathbf{0} \end{array}\right) \\ &=\left(\begin{array}{cc}\bigg(((n_2+2n_1)I_{n_1\times n_1})-2J_{n_1\times n_1}-L(G_1)\bigg)\mathbf{v_i}\\\\ \mathbf{0}\end{array}\right) \\ &=\left(\begin{array}{cc}(n_2+2n_1)\mathbf{v_i}-\lambda_i\mathbf{v_i}\\\\ \mathbf{0}\end{array}\right) \\ &=\left(\begin{array}{cc}(n_2+2n_1-\lambda_i)\mathbf{v_i}\\\\ \mathbf{0}\end{array}\right) \\ &=(n_2+2n_1-\lambda_i)\mathbf{V_i}. \end{flalign*} Thus, if $\lambda_i$ is an eigenvalue of $L(G_1)$ for $2\le i\le n_1$, we find that $n_2+2n_1-\lambda_i$ is an eigenvalue of $D^L(G_1+ G_2)$. This provides us with $n_1-1$ distance Laplacian eigenvalues of $G_1+G_2$. Let $\mu_j$, $2\le j\le n_2$, be a non-zero eigenvalue of $L(G_2)$ with eigenvector $\mathbf{w}$. Using similar arguments as given above, we find that $n_1+2n_2-\mu_j$ is a distance Laplacian eigenvalue of $G_1+ G_2$ corresponding to the eigenvector $\mathbf{W}=\left(\begin{array}{cccccccc} \mathbf{0}_{n_1}\\\mathbf{w}_{n_2} \end{array}\right).$ This provides us with $n_1+n_2-2$ distance Laplacian eigenvalues of $G_1+G_2$. The remaining two eigenvalues of $D^L(G_1+G_2)$ can be obtained by using the concept of equitable partitions (\Cref{EqP}). Since each block matrix of $D^L(G_1+ G_2)$ has a constant row sum, we find that the equitable quotient matrix of $D^L(G_1+ G_2)$ is given as follows: \begin{flalign*} B&=\left( \begin{array}{cccc} n_2&& -n_2\\ -n_1&&n_1 \end{array} \right).
\end{flalign*} Since $\sigma(B)=\left(\begin{array}{ccccc} n_1+n_2 & & 0\\ 1 && 1 \end{array}\right)$, using Theorem \ref{P1}, we find that the eigenvalues of $D^L(G_1+ G_2)$ are $n_2+2n_1-\lambda_i$ having multiplicity $1$ for $2\le i\le n_1$, $n_1+2n_2-\mu_j$ having multiplicity $1$ for $2\le j\le n_2$, and $0$ and $n_1+n_2$, each having multiplicity $1$. \end{proof} We now determine the distance Laplacian spectrum of the generalized fan graph $F_{m,n}$. \begin{theorem} \label{Fan1} The spectrum of the distance Laplacian matrix of $F_{m,n}$ consists of $n+2m$ having multiplicity $m-1$, $m+2n-2+2\cos (\frac{\pi j}{n})$ having multiplicity $1$ for $1\le j\le n-1$, and $0,m+n$ having multiplicity $1$. \end{theorem} \begin{proof} We know $F_{m,n}=\overline K_m+P_n$. Since every Laplacian eigenvalue of $\overline K_m$ is $0$ and the non-zero Laplacian eigenvalues of $P_n$ are $2(1-\cos \frac{\pi j}{n})$ for $1\le j\le n-1$, using \Cref{Th1}, the eigenvalues of the distance Laplacian matrix of $F_{m,n}$ are $n+2m$ having multiplicity $m-1$, $m+2n-2+2\cos (\frac{\pi j}{n})$ having multiplicity $1$ for $1\le j\le n-1$, and $0,m+n$ having multiplicity $1$. \end{proof} \begin{corollary} The distance Laplacian spectrum of the usual fan graph $F_{1,n}$ consists of $2n-1+2\cos (\frac{\pi j}{n})$ having multiplicity $1$ for $1\le j\le n-1$, and $0,n+1$ having multiplicity $1$. \end{corollary} \begin{proof} The proof follows by substituting $m=1$ in \Cref{Fan1}. \end{proof} \subsection{Distance Laplacian spectrum of $\mathcal{NC}(F_{m,n})$} In our next theorem, we shall now determine the distance Laplacian spectrum of the new graph class $\mathcal{NC}(F_{m,n})$.
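As a sanity check of \Cref{Fan1}, the following sketch (plain Python; helper names are ours, and the check is illustrative only) computes the distance matrix of $F_{3,4}$ by breadth-first search, forms the distance Laplacian $D^L = Tr - D$, and verifies that each claimed eigenvalue, namely $0$, $m+n$, $n+2m$ with multiplicity $m-1$, and $m+2n-2+2\cos(\pi j/n)$ for $1\le j\le n-1$, is a root of $\det(xI-D^L)$.

```python
import math
from collections import deque

def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def fan_adjacency(m, n):
    # F_{m,n}: centers 0..m-1 joined to every vertex of a path on m..m+n-1
    N = m + n
    A = [[0] * N for _ in range(N)]
    for i in range(m):
        for j in range(m, N):
            A[i][j] = A[j][i] = 1
    for j in range(m, N - 1):
        A[j][j + 1] = A[j + 1][j] = 1
    return A

def distance_matrix(A):
    # all-pairs shortest path lengths via BFS from every vertex
    N = len(A)
    D = []
    for s in range(N):
        dist = [-1] * N
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(N):
                if A[u][v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        D.append(dist)
    return D

m, n = 3, 4
D = distance_matrix(fan_adjacency(m, n))
N = m + n
# distance Laplacian: diagonal matrix of transmissions minus the distance matrix
DL = [[(sum(D[i]) if i == j else 0) - D[i][j] for j in range(N)] for i in range(N)]
claimed = [0.0, float(m + n)] + [float(n + 2 * m)] * (m - 1) + \
          [m + 2 * n - 2 + 2 * math.cos(math.pi * j / n) for j in range(1, n)]
for x in claimed:
    S = [[x * (i == j) - DL[i][j] for j in range(N)] for i in range(N)]
    assert abs(det(S)) < 1e-6, x
```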
\begin{theorem} \label{1} The distance Laplacian spectrum of $\mathcal{NC}(F_{m,n})$ consists of: \begin{itemize} \item [$\bullet$] $(5n+3m-\lambda_i)$ having multiplicity $2$ for $2\le i\le n$, \item [$\bullet$] $3n+5m-4$ having multiplicity $m-1$, \item [$\bullet$] $3n+5m$ having multiplicity $m-1$, \item[$\bullet$] $\frac{9}{2}(n +m) - 2\pm \frac{1}{2}\sqrt{A} $, where $A=24n + 9n^2 - 14nm-24m + 9m^2 + 16$, having multiplicity $1$, \item[$\bullet$] $3(n+m)$ having multiplicity $1$, and \item[$\bullet$] $0$ having multiplicity $1$. \end{itemize} Here, $\lambda_i$, $2\le i\le n$, denote the non-zero Laplacian eigenvalues of $P_n$. \end{theorem} \begin{proof} We shall first index the vertices of $P_n$, then list the vertices of $\overline{K}_m$. We again list the vertices of the second copy of $\overline{K}_m$ and finally list the vertices of the second copy of $P_n$. We have: \begin{flalign*} &D^L(\mathcal{NC}(F_{m,n})) \\&= \left(\begin{matrix} D^{L_1} && -J_{n\times m} && -2J_{n\times m} && -3J_{n\times n} \\ \\ -J_{m\times n} && D^{L_2} && -(3J-2I)_{m\times m} && -2J_{m\times n} \\ \\ -2J_{m\times n} && -(3J-2I)_{m\times m} && D^{L_2} && -J_{m\times n} \\ \\ -3J_{n\times n}&& -2J_{n\times m} && -J_{n\times m} && D^{L_1} \end{matrix}\right). \end{flalign*} Here, \begin{flalign*} D^{L_1}&=(5n+3m)I_{n}-2J_{n\times n}-L(P_n), \text{ and } \\ D^{L_2}&=(3n+5m-2)I_{m}-2J_{m\times m}. \end{flalign*} Assuming $\lambda_i$ to be a non-zero eigenvalue of $L(P_n)$ with eigenvector $\mathbf{v_i}$ for $2\le i\le n$, we have $\mathbf{1}^T\mathbf{v_i}=0$.
Considering $\mathbf{V_i}=\left(\begin{array}{cc} \mathbf{v_i}_{n}\\ \mathbf{0}_{m}\\ \mathbf{0}_{m}\\\mathbf{0}_{n} \end{array}\right)$, we obtain, \begin{flalign*} &D^L(\mathcal{NC}(F_{m,n}))\mathbf{V_i} \\ &=\left(\begin{matrix} D^{L_1} & -J_{n\times m} & -2J_{n\times m} & -3J_{n\times n} \\ \\ -J_{m\times n} & D^{L_2} & -(3J-2I)_{m\times m} & -2J_{m\times n} \\ \\ -2J_{m\times n} & -(3J-2I)_{m\times m} & D^{L_2} & -J_{m\times n} \\ \\ -3J_{n\times n}& -2J_{n\times m} & -J_{n\times m} & D^{L_1} \end{matrix}\right)\left(\begin{array}{cc} \mathbf{v_i}_{n}\\\\ \mathbf{0}_{m}\\\\ \mathbf{0}_{m}\\\\\mathbf{0}_{n} \end{array}\right) \\ &=\left(\begin{array}{cc} D^{L_1}\mathbf{v_i}\\\\ \mathbf{0}\\\\ \mathbf{0}\\\\\mathbf{0} \end{array}\right) \\ &=\left(\begin{array}{cc}\bigg((5n+3m)I_{n}-2J_{n\times n}-L(P_n)\bigg)\mathbf{v_i}\\\\ \mathbf{0}\\\\ \mathbf{0}\\\\\mathbf{0}\end{array}\right) \\ &=\left(\begin{array}{cc}(5n+3m)\mathbf{v_i}-L(P_n)\mathbf{v_i}\\\\ \mathbf{0}\\\\ \mathbf{0}\\\\\mathbf{0}\end{array}\right) \\ &=(5n+3m-\lambda_i)\mathbf{V_i}. \end{flalign*} Thus, we observe that $(5n+3m-\lambda_i)$ becomes an eigenvalue of $D^L(\mathcal{NC}(F_{m,n}))$ having multiplicity $1$. Here, $2\le i\le n$. Let $\mathbf{W_i}=\left(\begin{array}{cc} \mathbf{0}_{n}\\ \mathbf{0}_{m}\\ \mathbf{0}_{m}\\\mathbf{v_i}_{n} \end{array}\right)$ for $2\le i\le n$. We observe that $D^L(\mathcal{NC}(F_{m,n}))\mathbf{W_i}=(5n+3m-\lambda_i)\mathbf{W_i}.$ Thus, $(5n+3m-\lambda_i)$ is an eigenvalue of $D^L(\mathcal{NC}(F_{m,n}))$ having multiplicity $1$ for $2\le i\le n$. Hence we find that $(5n+3m-\lambda_i)$ is an eigenvalue of $D^L(\mathcal{NC}(F_{m,n}))$ having multiplicity $2$ for each $2\le i\le n$. Moreover, we observe that $3n+5m$ and $3n+5m-4$ are eigenvalues of $D^L(\mathcal{NC}(F_{m,n}))$, each having multiplicity $m-1$; they arise, exactly as in the proof of \Cref{I}, from the vectors $(\mathbf{0}_n, \mathbf{v_i}, \mathbf{v_i}, \mathbf{0}_n)^T$ and $(\mathbf{0}_n, \mathbf{v_i}, -\mathbf{v_i}, \mathbf{0}_n)^T$, respectively, where $\mathbf{v_i}\in \mathbb{R}^m$ satisfies $\mathbf{1}^T\mathbf{v_i}=0$. This gives us $2(n+m-2)$ eigenvalues of $D^L(\mathcal{NC}(F_{m,n}))$.
The remaining $4$ eigenvalues of $D^L(\mathcal{NC}(F_{m,n}))$ are contained in the spectrum of $B$ where \[ B= \left( \begin{array}{cccccccc} 3(n+m) &&-m && -2m && -3n \\ \\ -n && 3(n+m)-2 && -(3m-2)&& -2n \\ \\ -2n && -(3m-2)&& 3(n+m)-2&& -n \\ \\ -3n&& -2m&& -m && 3(n+m) \end{array} \right). \] The characteristic polynomial of $B$ is $x^4 + (-12m - 12n + 4)x^3 + (45m^2 + 98mn + 45n^2 - 24m - 36n)x^2 + (-54m^3 - 186m^2n - 186mn^2 - 54n^3 + 36m^2 + 108mn + 72n^2)x$. On solving, we find that the eigenvalues of $B$ are $\frac{9}{2}(n +m) - 2\pm \frac{1}{2}\sqrt{A}$, $3(n + m)$ and $0$, each having multiplicity $1$, where $A=24n + 9n^2 - 14nm-24m + 9m^2 + 16$. \end{proof} \section{Conclusion} In this paper, we determined the Laplacian and the distance Laplacian spectra of the generalized fan graph $F_{m,n}$. Moreover, we also introduced a new graph class and determined its Laplacian as well as its distance Laplacian spectrum. We illustrated our results with various examples. We now pose a problem for future work. The matrix $D_t(G) = t\text{Tr}(G) + (1-t) D(G), 0<t<1$, is known as the \textit{generalized distance matrix} of a graph $G$. We encourage the readers to determine the spectrum of the generalized distance matrix of the generalized fan graph as well as the new graph class introduced in this paper. \section{Declarations} \subsection{Conflict of interest:} The authors state that there is no conflict of interest. \subsection{Funding:} Not Applicable. \subsection{Authors contribution:} S.B.-Conceptualization, Methodology, Formal Analysis, Writing-Review and Editing, Supervision. S.G.- Resources, Writing-Original Draft, Methodology. The final submitted version of this manuscript has been read and approved by all the authors. \subsection{Acknowledgement:} Not Applicable. \subsection{Data availability statement:} The article includes all the data that support the findings of this study.
\begin{thebibliography}{99} \bibitem{1} Aouchiche, M., Hansen, P., Two Laplacians for the distance matrix of a graph, Linear algebra and its applications, 439(1), 21-33, 2013. \bibitem{2} Xing, R., Zhou, B., On the distance and distance signless Laplacian spectral radii of bicyclic graphs, Linear algebra and its applications, 439(12), 3955-3963, 2013. \bibitem{3} Aouchiche, M., Hansen, P., Some properties of the distance Laplacian eigenvalues of a graph, Czechoslovak mathematical journal, 64(3), 751-761, 2014. \bibitem{4} Lin, H., Zhou, B., On the distance Laplacian spectral radius of graphs, Linear Algebra and Its Applications, 475, 265-275, 2015. \bibitem{5} Lin, H., Lu, X., Bounds on the distance signless Laplacian spectral radius in terms of clique number, Linear and Multilinear Algebra, 63(9), 1750-1759, 2015. \bibitem{6} Aouchiche, M., Hansen, P., On the distance signless Laplacian of a graph, Linear and multilinear algebra, 64(6), 1113-1123, 2016. \bibitem{7} Alhevaz, A., Baghipur, M., Hashemi, E., Ramane, H. S., On the distance signless Laplacian spectrum of graphs. Bulletin of the Malaysian Mathematical Sciences Society, 42, 2603-2621, 2019. \bibitem{8} Alhevaz, A., Baghipur, M., Paul, S., Spectrum of graphs obtained by operations, Asian-European Journal of Mathematics, 13(02), 2050045, 2020. \bibitem{9} Liu, X., Lu, P., Spectra of subdivision-vertex and subdivision-edge neighbourhood coronae, Linear algebra and its applications, 438(8), 3547-3559, 2013. \bibitem{10} Liu, X., Zhou, S., Spectra of the neighbourhood corona of two graphs, Linear and Multilinear Algebra, 62(9), 1205-1219. \bibitem{11} McLeman, C., McNicholas, E., Spectra of coronae, Linear algebra and its applications, 435(5), 998-1007, 2011. \bibitem{12} Thomas, A. S., Kalayathankal, S. J., Kureethara, J. V., Spectrum of corona products based on splitting graphs, Discrete Mathematics, Algorithms and Applications, 15(03), 2250087, 2023. 
\bibitem{13} Das, A., Panigrahi, P., Construction of simultaneous cospectral graphs for adjacency, Laplacian, and normalized Laplacian matrices, Kragujevac Journal of Mathematics, 47(6), 947-964, 2023. \bibitem{14} Varghese, R. P., Second-stage spectrum of corona of two graphs, Discrete Mathematics, Algorithms, and Applications, 14(07), 2250034, 2022. \bibitem{15} Barik, S., Kalita, D., Pati, S., Sahoo, G., Spectra of graphs resulting from various graph operations and products: a survey, Special Matrices, 6(1), 323-342, 2018. \bibitem{16} Banerjee, S. Distance Laplacian spectra of various graph operations and its application to graphs on algebraic structures, Journal of Algebra and Its Applications, 22(01), 2350022, 2023. \bibitem{17} Kaliyaperumal, S., Desikan, K., Distance (signless) Laplacian spectrum of dumbbell graphs, Transactions on Combinatorics, 12(4), 207-216, 2023. \bibitem{18} Kaliyaperumal, S., Desikan, K., Resistance distance of generalized wheel and dumbbell graph using symmetric {1}-inverse of Laplacian matrix, Proyecciones (Antofagasta), 42(1), 145-166, 2023. \bibitem{19} Brouwer, A. E., Haemers, W. H., Spectra of graphs, Springer Science \& Business Media, 2011. \bibitem{20} Mohar, B., Alavi, Y., Chartrand, G., Oellermann, O., The Laplacian spectrum of graphs, Graph theory, combinatorics, and applications, 2(871-898), 12, 1991. \end{thebibliography} \section{Appendix} In this section, we provide the adjacency and the Laplacian eigenvalues of Fan graph $F_{1,n}$(\Cref{Tab}) and the generalized Fan graph $F_{m,n}$(\Cref{Tab1}) for various values of $m$ and $n$. 
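The Laplacian columns of the tables below can be re-derived mechanically. As an illustration, the following sketch (plain Python with exact rational arithmetic; the helper names are ours and the check is not part of the paper) confirms one Laplacian row of \Cref{Tab1}: since both $\det(xI-L)$ and $\prod_{\lambda}(x-\lambda)$ are monic of degree $7$, their agreement at eight sample points forces them to coincide, so the multiset $\{0,3,3,3,5,7,7\}$ is exactly the Laplacian spectrum of $F_{4,3}=\overline{K}_4+P_3$.

```python
from fractions import Fraction

def det(M):
    # exact determinant over the rationals by Gaussian elimination
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def laplacian_fan(m, n):
    # Laplacian of F_{m,n}: centers 0..m-1 joined to a path on m..m+n-1
    N = m + n
    A = [[0] * N for _ in range(N)]
    for i in range(m):
        for j in range(m, N):
            A[i][j] = A[j][i] = 1
    for j in range(m, N - 1):
        A[j][j + 1] = A[j + 1][j] = 1
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(N)]
            for i in range(N)]

m, n = 4, 3
L = laplacian_fan(m, n)
claimed = [0, 3, 3, 3, 5, 7, 7]          # claimed Laplacian multiset of F_{4,3}
for x in [1, 2, 4, 6, 8, 9, 10, 11]:     # eight sample points
    charval = det([[x * (i == j) - L[i][j] for j in range(m + n)]
                   for i in range(m + n)])
    prodval = 1
    for lam in claimed:
        prodval *= (x - lam)
    assert charval == prodval, (x, charval, prodval)
```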
\begin{table}[ht] \begin{center} \begin{tabular}{c|c|c} $n$ & $\text{Adjacency Spectrum}$ & $\text{Laplacian Spectrum}$ \\ \hline $3$ & $\{0, -1, -1.56, 2.56\}$ & $\{0, 2, 4, 4\}$ \\ \hline $4$ & $\{-1.62, 0.62, -1.47, -0.46, 2.93\}$ & $\{5, 3, 0, 1.58, 4.41\}$ \\ \hline $5$ & $\{3.22, 0.11, -1.53, -1.81, 1,-1\}$ & $\{6, 0, 2.38, 4.62, 1.38, 3.62\}$ \\ \hline $6$ & $\{-1.80, -0.44, 1.25, -1.82, -1.18, 0.54, 3.46\}$ & $\{7, 4, 3, 2, 0, 1.27, 4.73\}$\\ \hline $7$ & $\{0, -2, -1.41, 1.41, -1.81, -0.71, 0.84, 3.67\}$ & $\{8, 0, 1.75, 3.44, 4.8, 1.2, 2.55, 4.25\}$\\ \end{tabular} \end{center} \caption{Comparison Table for Adjacency Spectrum \& Laplacian Spectrum of Fan Graph } \label{Tab} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{c|c|c|c} $m$ & $n$ &$\text{Adjacency Spectrum}$ & $\text{Laplacian Spectrum}$ \\ \hline $2$ & $2$& $\{2.56, 0, -1, -1.56\}$ & $\{0, 2, 4, 4\}$ \\ \hline $2$ &$3$ &$\{-2, 0, 0, -1.24, 3.24\}$ & $\{0, 5, 5, 3, 3\}$ \\ \hline $3$ &$2$ & $\{3, -1, -2, 0, 0\}$ & $\{0,5,5,2,2\}$ \\ \hline $3$ & $4$ &$\{0, 0, -1.62, 0.62, -2.84, -0.49, 4.32\}$ & $\{0,7, 5, 4, 4, 3.58, 6.41\}$\\ \hline $4$ &$3$ & $\{0, 0, 0, 0, -2.92, -1.3, 4.22\}$ & $\{0, 5, 7, 7, 3, 3, 3\}$\\ \end{tabular} \end{center} \caption{Comparison Table for Adjacency Spectrum \& Laplacian Spectrum of Generalized Fan Graph } \label{Tab1} \end{table} \end{document}
2412.19126v2
http://arxiv.org/abs/2412.19126v2
Polycyclic Codes over the Product Ring $\mathbb{F}_q^l$ and their Annihilator Dual
\documentclass[11pt,a4paper]{article} \usepackage{amsmath, amsfonts, amssymb} \usepackage{a4wide} \usepackage{parskip} \usepackage{enumitem} \usepackage{xcolor} \DeclareMathOperator{\spn}{span} \usepackage{hyperref} \usepackage{booktabs}\usepackage{caption}\usepackage{siunitx}\usepackage{tabularx} \usepackage{comment} \usepackage{multirow} \usepackage{mathtools} \usepackage{caption} \usepackage{parskip} \usepackage{graphicx} \usepackage{adjustbox} \usepackage{multirow} \usepackage{amsthm} \usepackage{bm, makecell} \usepackage{lscape} \usepackage{rotating} \usepackage{floatrow} \floatsetup[table]{capposition=top} \usepackage[noadjust]{cite} \def\proof{\noindent{\textit{Proof. }}} \def\qed{\hfill {$\square$}\goodbreak \medskip} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{observation}[theorem]{Observation} \newtheorem{notation}[theorem]{Notation} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{xca}[theorem]{Exercise} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[theorem]{Questions} \numberwithin{equation}{section} \renewcommand{\baselinestretch}{1.2} \newcommand{\lcm}{\textnormal{lcm}} \newcommand{\Char}{\textnormal{char}} \newcommand{\Tr}{\textnormal{Tr}} \newcommand{\Supp}{\textnormal{Supp}} \usepackage{tikz,xcolor,hyperref} \usepackage{mathdots} \definecolor{lime}{HTML}{A6CE39} \DeclareRobustCommand{\orcidicon}{ \begin{tikzpicture} \draw[lime, fill=lime] (0,0) circle [radius=0.16] node[white] {{\fontfamily{qag}\selectfont \tiny ID}}; \draw[white, fill=white] (-0.0625,0.095) circle [radius=0.007]; \end{tikzpicture} \hspace{-2mm} } \foreach \x in {A, ..., Z}{ \expandafter\xdef\csname orcid\x\endcsname{\noexpand\href{https://orcid.org/\csname orcidauthor\x\endcsname}{\noexpand\orcidicon}} } 
\newcommand{\orcidauthorA}{0009-0006-4407-5700}\newcommand{\orcidauthorB}{0000-0003-0174-2447}\newcommand{\orcidauthorC}{0000-0002-8668-1948} \begin{document} \date{} \title{Polycyclic Codes over the Product Ring $\mathbb{F}_q^l$ and their Annihilator Dual } \author{{\bf Akanksha\footnote{email: {\tt [email protected]}}\orcidA{}, \bf Ritumoni Sarma\footnote{ email: {\tt [email protected]}}\orcidC{}} \\ Department of Mathematics\\ Indian Institute of Technology Delhi\\Hauz Khas, New Delhi-110016, India} \maketitle \begin{abstract} In this article, for the finite field $\mathbb{F}_q$, we show that the $\mathbb{F}_q$-algebra $\mathbb{F}_q[x]/\langle f(x) \rangle$ is isomorphic to the product ring $\mathbb{F}_q^{\deg f(x)}$ if and only if $f(x)$ splits into distinct linear factors over $\mathbb{F}_q$. We generalize this result to the quotient of the polynomial algebra $\mathbb{F}_q[x_1, x_2,\dots, x_k]$ by the ideal $\langle f_1(x_1), f_2(x_2),\dots, f_k(x_k)\rangle.$ On the other hand, we establish that every finite-dimensional $\mathbb{F}_q$-algebra $\mathcal{S}$ has an orthogonal basis of idempotents with their sum equal to $1_{\mathcal{S}}$ if and only if $\mathcal{S}\cong\mathbb{F}_q^l$ as $\mathbb{F}_q$-algebras, where $l=\dim_{\mathbb{F}_q} \mathcal{S}$. Instead of studying polycyclic codes over $\mathbb{F}_q$-algebras $\mathbb{F}_q[x_1, x_2,\dots, x_k]/\langle f_1(x_1), f_2(x_2),\dots, f_k(x_k)\rangle$ where $f_i(x_i)$ splits into distinct linear factors over $\mathbb{F}_q,$ which form a subclass of the rings isomorphic to $\mathbb{F}_q^l,$ we study polycyclic codes over $\mathbb{F}_q^l$ and obtain their unique decomposition into polycyclic codes over $\mathbb{F}_q$ for every such orthogonal basis of $\mathbb{F}_q^l$. We refer to it as an $\mathbb{F}_q$-decomposition.
An $\mathbb{F}_q$-decomposition enables us to use results of polycyclic codes over $\mathbb{F}_q$ to study polycyclic codes over $\mathbb{F}_q^l$; for instance, we show that the annihilator dual of a polycyclic code over $\mathbb{F}_q^l$ is a polycyclic code over $\mathbb{F}_q^l$. Furthermore, with the help of different Gray maps, we produce a good number of examples of MDS, almost-MDS, and/or optimal codes; some of them are LCD over $\mathbb{F}_q$. Finally, we study Gray maps from $(\mathbb{F}_q^l)^n$ to $\mathbb{F}_q^{nl},$ and use them to construct quantum codes with the help of the CSS construction. \medskip \noindent \textit{Keywords:} Linear Code, Cyclic Code, Polycyclic Code, $\mathbb{F}_q$-algebra, Product Ring \medskip \noindent \textit{2020 Mathematics Subject Classification:} 94B05, 94B15, 94B99, 13M05 \end{abstract} \section{Introduction}\label{Section 1}\label{sec1} Linear codes have been extensively studied over finite fields due to their crucial role in error detection and error correction, which are essential for reliable information transmission. In 1994, Hammons et al. (\cite{hammons1994z}) studied linear codes over $\mathbb{Z}_4$ and constructed codes as binary images under the Gray maps of linear codes over $\mathbb{Z}_4$ for the first time. This drew the attention of coding theorists towards codes over finite commutative rings. A linear code invariant under the right (and hence left) cyclic shift is called a \textit{cyclic code}, first introduced by E. Prange (\cite{prange1957cyclic}) in 1957. Due to their relatively easier implementation and a great deal of rich algebraic structure, they have been studied widely. Numerous algebraic coding theorists have encouraged the study of cyclic codes for both burst-error correction and random-error correction.
Cyclic codes over finite chain rings have been studied in different contexts, for instance, see \cite{abualrub2007cyclic},\cite{bonnecaze1999cyclic},\cite{dinh2004cyclic} and \cite{kanwar1997cyclic} and many authors also studied cyclic codes over finite non-chain rings, for instance, see \cite{yildiz2011cyclic} and \cite{islam2021cyclic}. \par Constacyclic codes are a generalization of cyclic codes, first introduced by Berlekamp (\cite{berlekamp2015algebraic}) in 1960. Negacyclic codes are a special case of constacyclic codes. Both constacyclic and negacyclic codes over finite fields have been widely studied, for example, see \cite{bakshi2012class},\cite{chen2012constacyclic} and \cite{raka2015class}. Further, constacyclic codes over various rings have been extensively studied, for instance, see \cite{ABUALRUB2009520},\cite{swati_raka}, \cite{gao2018u},\cite{karadeniz20111+},\cite{QIAN2006820}, \cite{raza2024quantum} and \cite{shi2021construction}. Note that the rings considered to study constacyclic codes in \cite{swati_raka}, \cite{gao2018u}, \cite{raza2024quantum} and \cite{shi2021construction} are isomorphic to a product ring of the form $\mathbb{F}_q^l,$ for some $l \in \mathbb{N}.$ \par Polycyclic codes are a generalization of constacyclic codes, introduced by Peterson (\cite{peterson1972error}) in 1972. In recent years, polycyclic codes have been studied in \cite{aydin2022polycyclic},\cite{lopez2009dual},\cite{shi2020polycyclic},\cite{shi2020construction}, \cite{tabue2018polycyclic} and \cite{wu2022structure}. In 2009, L\'{o}pez-Permouth et al. (\cite{lopez2009dual}) studied polycyclic codes and sequential codes, and they established that a linear code over $\mathbb{F}_q$ is polycyclic if and only if its Euclidean dual is sequential. However, sequential codes are not necessarily polycyclic, and hence, the Euclidean dual of a polycyclic code need not be polycyclic. In 2016, Alahmadi et al. 
(\cite{alahmadi2016duality}) studied polycyclic codes over finite fields, and they introduced a different non-degenerate bilinear form, with respect to which the dual of polycyclic code is polycyclic. In 2020, Tabue et al. (\cite{tabue2018polycyclic}) studied polycyclic codes over finite chain rings and presented several results regarding the freeness of polycyclic codes and their annihilator dual. \par In 2021, Islam et al. (\cite{islam2021cyclic}) studied cyclic codes over $R_{e,q}:=\mathbb{F}_q[u]/\langle u^e-1 \rangle,$ where $e|(q-1).$ In 2022, Bhardwaj and Raka (\cite{swati_raka}) studied constacyclic codes over a general non-chain ring $\mathcal{R}:=$ $\mathbb{F}_q[u_1,u_2,\dots,u_k]/\langle f_1(u_1),f_2(u_2),\dots,f_k(u_k)\rangle,$ where each $f_i(u_i)$ splits over $\mathbb{F}_q$ and has simple roots. Observe that both $R_{e, q}$ and $\mathcal{R}$ are isomorphic to a product ring $\mathbb{F}_q^l, $ for some $l \in \mathbb{N}.$ In 2021, Qi (\cite{qi2022polycyclic}) studied polycyclic codes over $\mathbb{F}_q[u]/\langle u^2-u\rangle,$ a non-chain ring which is isomorphic to $\mathbb{F}_q^2.$ However, note that in any case $\mathcal{R}$ and $R_{e,q}$ are not isomorphic to $\mathbb{F}_{p_1}^{p_2},$ where $p_1<p_2$ are primes. Hence, it is worth mentioning that the product ring $\mathbb{F}_q^l$ is a wider class as compared to the rings considered in the literature mentioned above. In 2024, Bajalan and Moro (\cite{bajalan2024polycyclic}) studied polycyclic codes over rings of the form $\mathcal{A}:=R[x_1,x_2,\dots,x_k]/\langle t_1(x_1),t_2(x_2),\dots,t_k(x_k)\rangle,$ where $R$ denotes a finite chain ring and each $t_i(x_i)$ is a monic square-free polynomial over $R.$ It is important to emphasize that the base ring considered in our article does not represent a special case of the class considered by the authors in \cite{bajalan2024polycyclic}. Moreover, neither of the two classes can be regarded as a generalization of the other. 
Specifically, for any choice of a finite chain ring $R,$ $\mathcal{A}$ is not isomorphic to $\mathbb{F}_{p_1}^{p_2},$ where $p_1< p_2$ are primes. This motivates us to study polycyclic codes over the product ring $\mathbb{F}_q^l,$ for some $l \in \mathbb{N}.$ \par The structure of the article is as follows: Preliminaries are presented in Section \ref{sec2}. In Section \ref{sec3}, we characterize a ring $\mathcal{S}$ isomorphic to a product ring $\mathbb{F}_q^l.$ In Section \ref{sec4}, we study polycyclic codes over $\mathbb{F}_q^l$ and study their annihilator duals. In Section \ref{sec5}, we consider different Gray maps from the $\mathbb{F}_q$-algebra $(\mathbb{F}_q^l)^n$ to $\mathbb{F}_q^{nl}$ and study certain properties of the Gray image. Moreover, we utilize Gray maps from $(\mathbb{F}_q^l)^n$ to $\mathbb{F}_q^{nl}$ to construct quantum codes with the help of the CSS construction. We conclude the article in Section \ref{sec6}. \section{Preliminaries}\label{sec2} Throughout this article, for a prime power $q=p^m,$ $\mathbb{F}_q$ denotes the finite field of order $q.$ Let $R$ denote a finite commutative unital ring, and $U(R)$ denote the group of units of $R$. \begin{definition}[$R$-linear Code] A code $C$ of length $n$ is said to be an \textit{$R$-linear code} if it is an $R$-submodule of $R^n.$ \end{definition} Recall that, for an $\mathbb{F}_q$-linear code $C$ of length $n$, dimension $k$, and minimum distance $d$, we denote its parameters by $[n,k,d].$ Every $\mathbb{F}_q$-linear code with parameters $[n,k,d]$ satisfies the Singleton bound $d -1\le n-k$, and codes for which $d-1=n-k$ are said to be \textit{maximum distance separable} (in short, \textit{MDS}) codes. These codes maximize the minimum distance for a given length and dimension. Consequently, they can detect and correct the highest number of errors.
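The Singleton bound and the MDS property can be checked exhaustively for small codes. The following sketch (plain Python; the generator matrix and helper are illustrative choices of ours, not taken from this article) enumerates all codewords of the binary $[3,2]$ single-parity-check code, computes its minimum distance, and verifies $d-1\le n-k$ with equality, so this code is MDS.

```python
from itertools import product

def min_distance(G, q=2):
    # minimum Hamming weight over all non-zero codewords generated by G
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue
        # codeword = message * G over F_q (computed column by column)
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        best = min(best, sum(1 for c in cw if c != 0))
    return best

G = [[1, 0, 1],   # generator matrix of the [3, 2] binary parity-check code
     [0, 1, 1]]
n, k = 3, 2
d = min_distance(G)
assert d - 1 <= n - k   # Singleton bound
assert d - 1 == n - k   # equality: the code is MDS
```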
\begin{definition}[Euclidean Dual] The \textit{Euclidean dual} of an $R$-linear code $C$ of length $n$, denoted by $C^\perp,$ is the $R$-linear code given by $\{\mathbf{x}\in R^n : \langle \mathbf{x}, \mathbf{y}\rangle =\mathbf{x}\cdot \mathbf{y}=\underset{i=1}{\overset{n}{\sum}}x_iy_i=0 \, , \forall \, \mathbf{y} \in C\}.$ \end{definition} \begin{definition}[LCD Codes] An {\it LCD code} $C$ is an $R$-linear code which intersects its Euclidean dual trivially, that is, $C\cap C^{\perp}=\{0\}$. \end{definition} \begin{definition} Let $C$ be an $R$-linear code. Then: \begin{itemize} \item $C$ is {\it self-orthogonal} if $ C \subset C^\perp$; \item $C$ is {\it self-dual} if $C=C^\perp$; \item $C$ is {\it dual-containing} if $C^\perp\subset C.$ \end{itemize} \end{definition} \begin{definition}[Polycyclic Codes] Suppose that $ \mathbf{a} =(a_0,a_1,\dots,a_{n-1}) \in R^n $ and $a_0 \in U(R)$. A linear code $C$ over $R$ is called \textit{$\mathbf{a}$-polycyclic} if whenever $ \mathbf{c}=(c_0,c_1,\dots,c_{n-1}) \in C$, then $(0,c_0,c_1,\dots,c_{n-2})+c_{n-1}(a_0,a_1,\dots,a_{n-1}) \in C.$ \end{definition} Associate \begin{equation}\label{tuple to polynomial} \mathbf{b}=(b_0,b_1,\dots,b_{n-1}) \in R^n \ \text{ with } \ \mathbf{b}(x)=b_0+b_1x+\dots+b_{n-1}x^{n-1} \in R[x]. \end{equation} Similar to a cyclic code, for $\mathbf{a}=(a_0,a_1,\dots,a_{n-1})\in R^n,$ we can characterize an $\mathbf{a}$-polycyclic code over $R$ as an ideal of $R^{\mathbf{a}} =R[x]/\langle x^n-\mathbf{a}(x)\rangle$, where $\mathbf{a}(x)\in R[x]$ is as in Equation \ref{tuple to polynomial}. \begin{proposition}\cite{lopez2009dual} A linear code over $R$ is an {\it $\mathbf{a}$-polycyclic code} if and only if it is an ideal of $R^\mathbf{a}$.
\end{proposition} \begin{definition}[Sequential Codes] An $R$-linear code $C$ of length $n$ is said to be {\it $\textbf{b}$-sequential}, for $ \textbf{b} \in R^n $, if $ \mathbf{c}=(c_0,c_1,\dots,c_{n-1}) \in C\Longrightarrow (c_1,c_2,\dots, c_{n-1}, \mathbf{c}\cdot\textbf{b}) \in C,$ where $\mathbf{c}\cdot\mathbf{b}$ denotes the Euclidean inner product of $\mathbf{c}$ and $\mathbf{b}$ in $R^n.$ \end{definition} \begin{definition}[$\mathbb{F}_q$-algebra] \cite{atiyah2018introduction} A commutative ring $R$ with unity (denoted by $1_R$) is called an $\mathbb{F}_q$-algebra if there exists a ring homomorphism $f:\mathbb{F}_q\to R$ such that $f(1)=1_R.$ \end{definition} \begin{notation}[Product Ring] The $l$-dimensional vector space $\mathbb{F}_q^l$ is a ring with respect to component-wise multiplication, that is, for $\mathbf{x}=(x_1, x_2, \dots, x_l), \mathbf{y}=(y_1, y_2,\dots, y_l)\in \mathbb{F}_q^l,$ the multiplication is $\mathbf{x} \mathbf{y}=(x_1y_1, x_2y_2,\dots, x_ly_l).$ We denote this product ring simply by $\mathbb{F}_q^l.$ Note that, for $a\in\mathbb{F}_q$, the map $a\mapsto (a, a, \dots, a)$ defines an $\mathbb{F}_q$-algebra structure on this product ring. \end{notation} \section{Characterization of the Base Ring}\label{sec3} In this section we investigate when a particular class of $\mathbb{F}_q$-algebras is isomorphic to the product ring $\mathbb{F}_q^l$. \begin{lemma}\label{Existence of basis consisting of orthogonal idempotents} Let $\mathcal{S}$ be an $\mathbb{F}_q$-algebra. Then, $\mathcal{S}$ is isomorphic to the $\mathbb{F}_q$-algebra $\mathbb{F}_q^l$ if and only if there exists an $\mathbb{F}_q$-basis $\{e_1, e_2, \dots, e_l\}$ of $\mathcal{S}$ consisting of orthogonal idempotent elements satisfying $\sum\limits_{i=1}^l{e_i}=1$.
\end{lemma} \begin{proof}For $1\le i\le l$, denote by $\epsilon_i$ the $l$-tuple over $\mathbb{F}_q$ whose $i$-th component is $1$ and whose $j$-th component is $0$ for $1\le j\ne i\le l$, so that $\{\epsilon_1,\dots,\epsilon_l\}$ is a basis of $\mathbb{F}_q^l.$ Let $\phi:\mathcal{S} \to \mathbb{F}_q^l$ be an isomorphism of $\mathbb{F}_q$-algebras. Then, $\{\phi^{-1}(\epsilon_1),\dots,\phi^{-1}(\epsilon_l)\}$ is an $\mathbb{F}_q$-basis of $\mathcal{S}$ consisting of orthogonal idempotents such that $\underset{i=1}{\overset{l}{\sum}}\phi^{-1}(\epsilon_i)=1.$ Conversely, let $\mathcal{S}$ be an $\mathbb{F}_q$-algebra that has a basis $\{e_1,\dots,e_l\}$ such that $e_ie_j=\delta_{i, j} e_i$ and $\underset{i=1}{\overset{l}{\sum}} e_i=1.$ Then, $e_i \mapsto \epsilon_i$, for $1\le i\le l$, induces an isomorphism of $\mathbb{F}_q$-algebras from $\mathcal{S}$ to $\mathbb{F}_q^l$.\hfill $\square$ \end{proof} \begin{proposition} If an $\mathbb{F}_q$-algebra $\mathcal{S}$ has an $\mathbb{F}_q$-basis $\mathcal{B}=\{e_1, e_2, \dots, e_l\}$ consisting of orthogonal idempotent elements satisfying $\sum\limits_{i=1}^l{e_i}=1$, then we have the following: \begin{enumerate} \item $e_i\mathbb{F}_q\lhd \mathcal{S}$ and $\mathcal{S}=\underset{i=1}{\overset{l}{\bigoplus}}e_i\mathbb{F}_q$; \item $e_i\mathcal{S}=e_i\mathbb{F}_q,$ and hence $\mathcal{S}=\underset{i=1}{\overset{l}{\bigoplus}}e_i\mathcal{S}.$ \end{enumerate} \end{proposition} \begin{proof} Suppose $a=\underset{j=1}{\overset{l}{\sum}}e_ja_j\in\mathcal{S}$ for $a_j\in\mathbb{F}_q.$ \begin{enumerate} \item Observe that $e_i\mathbb{F}_q$ is an $\mathbb{F}_q$-subspace of $\mathcal{S}$. Since $e_ie_j=0$ for $i\ne j$ and $e_i^2=e_i$, we have $a(e_ib)=e_ia_ib$ for $b\in\mathbb{F}_q$, so that $e_i\mathbb{F}_q\lhd\mathcal{S}$. The second statement follows as $\mathcal{B}$ is a basis.
\item Observe that $e_i\mathcal{S}\supset e_i\mathbb{F}_q.$ For the other containment, observe that $e_ia=e_ia_i.$ \end{enumerate} \hfill $\square$ \end{proof} \begin{lemma}\label{Rings} Let $e, l\in\mathbb{N}$ and let $p(x)\in \mathbb{F}_q[x]$ be irreducible. Then, $\frac{\mathbb{F}_q[x]}{\langle{p(x)}^{e}\rangle}$ is isomorphic to the product ring ${\mathbb{F}_q^{l}}$ if and only if $\deg(p(x))=1$, $e=1$ and $l=1$. \end{lemma} \begin{proof}Assume that $\frac{\mathbb{F}_q[x]}{\langle{p(x)}^{e}\rangle}\cong{\mathbb{F}_q^{l}}$ as rings. Since $p(x)$ is irreducible over $\mathbb{F}_q,$ the group of units of $\frac{\mathbb{F}_q[x]}{\langle{p(x)}^{e}\rangle}$ is the complement of $\langle p(x)\rangle/\langle p(x)^e\rangle$, so the number of units in $\frac{\mathbb{F}_q[x]}{\langle{p(x)}^{e}\rangle}$ is $q^{e\deg{p(x)}}-q^{(e-1)\deg{p(x)}}=q^{(e-1)\deg{p(x)}}(q^{\deg{p(x)}}-1).$ But the number of units in $\mathbb{F}_q^{l}$ is $(q-1)^l.$ Hence, $q^{(e-1)\deg{p(x)}}(q^{\deg{p(x)}}-1)=(q-1)^l.$ Since $q^{(e-1)\deg{p(x)}}$ is a power of the characteristic $p$ while $(q-1)^l$ is coprime to $p,$ we must have $(e-1)\deg{p(x)}=0,$ that is, $e=1.$ Comparing cardinalities then gives $\deg{p(x)}=l,$ so $q^{\deg{p(x)}}-1=(q-1)^{\deg{p(x)}},$ which holds only for $\deg{p(x)}=1$; hence $\deg{p(x)}=1$ and $l=1.$\\ Conversely, if $\deg p(x)=e=l=1$, then $p(x)=x+a$ for some $a\in\mathbb{F}_q$ and $\frac{\mathbb{F}_q[x]}{\langle{x+a}\rangle}\cong \mathbb{F}_q$.\hfill $\square$ \end{proof} The following theorem is immediate from Lemma \ref{Rings}. \begin{theorem}\label{isomorphic to F_q^l} Let $f(x)\in\mathbb{F}_q[x]$ and $l\in\mathbb{N}$. Then, $\frac{\mathbb{F}_q[x]}{\langle f(x) \rangle}$ is isomorphic to the product ring ${\mathbb{F}_q^l}$ if and only if $f(x)$ splits over $\mathbb{F}_q$ into distinct linear factors and $l=\deg{f(x)}$.
\end{theorem} \begin{proof}Suppose the factorization of $f(x)\in\mathbb{F}_q[x]$ into irreducible factors is given by $$p_1(x)^{e_1}p_2(x)^{e_2}\dots p_r(x)^{e_r},$$ where $p_i(x)\ne p_j(x)$ for $i\ne j.$ Then, by the Chinese Remainder Theorem, $\frac{\mathbb{F}_q[x]}{\langle f(x) \rangle} \cong \underset{i=1}{\overset{r}{\prod}}\frac{\mathbb{F}_q[x]}{\langle p_i(x)^{e_i} \rangle}.$ By Lemma \ref{Rings}, this product is isomorphic to $\mathbb{F}_q^l$ if and only if each factor is isomorphic to $\mathbb{F}_q$, that is, if and only if $\deg{p_i(x)}=1$ and $ e_i=1$ for each $i.$ \hfill $\square$ \end{proof} For a field $\mathbb{F}$, if $A$ and $B$ are $\mathbb{F}$-algebras, then the tensor product $A\otimes_\mathbb{F} B$ (or $A\otimes B$) is an $\mathbb{F}$-algebra [see Chapter 2, \cite{atiyah2018introduction}]. \begin{lemma}\label{tensortodirectproduct} Let $\mathbb{F}$ be a field and $m, n\in\mathbb{N}$. If $A=\mathbb{F}^m$ and $B=\mathbb{F}^n,$ then $A\otimes B\cong \mathbb{F}^{mn}$ as $\mathbb{F}$-algebras. More generally, $ \underset{i=1}{\overset{l}{\otimes}}\mathbb{F}^{d_i}\cong \mathbb{F}^{\underset{i=1}{\overset{l}{\prod}}d_i}$ as $\mathbb{F}$-algebras, where $d_i\in \mathbb{N}$ for all $1\leq i\leq l.$ \end{lemma} \begin{proof} Let $\{e_1,e_2,\dots,e_{m}\}$ denote the standard basis of $A$, let $\{f_1,f_2,\dots,f_{n}\}$ denote the standard basis of $B$, and let $\{\epsilon_1,\epsilon_2,\dots,\epsilon_{mn}\}$ denote the standard basis of ${\mathbb{F}^{mn}}$. Consider the bilinear map $\psi: \mathbb{F}^{m} \times {\mathbb{F}}^{n}\longrightarrow\mathbb{F}^{mn}$ given by $\psi(e_i, f_j)=\epsilon_{i+(j-1)m}.$ By the universal property of the tensor product, $\psi$ induces an $\mathbb{F}$-linear map $\tilde{\psi}:A\otimes B\to \mathbb{F}^{mn}$, which is surjective since its image contains a basis, and is in fact an isomorphism of vector spaces as the domain and the codomain have the same dimension over $\mathbb{F}$.
For $\tilde{\psi}$ to be an isomorphism of $\mathbb{F}$-algebras, it is enough to check that $\tilde{\psi}((e_i\otimes f_j)(e_s\otimes f_t))= \tilde{\psi}(e_i\otimes f_j)\tilde{\psi}(e_s\otimes f_t)$ for $1\le i, s\le m$ and $1\le j, t\le n$. Note that $\tilde{\psi}((e_i\otimes f_j)(e_s\otimes f_t))=\tilde{\psi}(e_i e_s \otimes f_j f_t)= \tilde{\psi}(e_i\delta_{i,s} \otimes f_j\delta_{j,t})=\tilde{\psi}(\delta_{i,s}\delta_{j,t}e_i\otimes f_j)=\delta_{i,s}\delta_{j,t}\epsilon_{i+(j-1)m}.$ On the other hand, $\tilde{\psi}(e_i\otimes f_j)\tilde{\psi}(e_s\otimes f_t)= \epsilon_{i+(j-1)m}\epsilon_{s+(t-1) m}=\delta_{i+(j-1)m, s+(t-1)m}\epsilon_{s+(t-1)m}.$ Observe that $i+(j-1)m=s+(t-1)m$ if and only if $i=s$ and $j=t$. The general statement follows by induction. \hfill $\square$ \end{proof} In \cite{swati_raka}, Bhardwaj and Raka explicitly constructed a basis consisting of orthogonal idempotents whose sum is the unity (referred to as a complete set of orthogonal idempotents) for the quotient ring $\mathcal{R}=\mathbb{F}_q[x_1,\dots,x_k]/\langle f_1(x_1),\dots,f_k(x_k) \rangle,$ where each $f_i(x_i)$ splits over $\mathbb{F}_q$ and has simple roots. As a consequence, they obtained an $\mathbb{F}_q$-decomposition of $\mathcal{R}$, so that $\mathcal{R}\cong \mathbb{F}_q^l$, where $l=\dim_{\mathbb{F}_q} \mathcal{R}$. However, Theorem \ref{complete orthogonal idempotents} below shows that, to prove $\mathcal{R}\cong\mathbb{F}_q^l$ for some $l\in\mathbb{N}$, it is not necessary to construct such a basis explicitly. Note that the construction given in \cite{swati_raka} requires knowledge of the roots of each $f_i(x_i)$.
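To illustrate the kind of explicit construction discussed above in the one-variable case, the following Python sketch builds the Lagrange-interpolation idempotents $e_i(x)=\prod_{j\ne i}(x-r_j)/(r_i-r_j)$ in $\mathbb{F}_p[x]/\langle f(x)\rangle$ from the roots $r_1,\dots,r_l$ of $f$; the concrete choices $p=5$ and $f(x)=x^3-x$ (with roots $0,1,4$ in $\mathbb{F}_5$) are illustrative assumptions, not taken from the text. Under the evaluation isomorphism $g\mapsto(g(r_1),\dots,g(r_l))$, these polynomials correspond to the standard basis of $\mathbb{F}_5^3$, i.e., they form a complete set of orthogonal idempotents.

```python
p = 5
roots = [0, 1, 4]  # f(x) = x(x-1)(x+1) = x^3 - x over F_5, where -1 = 4

def poly_mul_linear(poly, r):
    """Multiply a coefficient list (low degree first) by (x - r), mod p."""
    new = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        new[k] = (new[k] - c * r) % p
        new[k + 1] = (new[k + 1] + c) % p
    return new

def lagrange_idempotent(i):
    """e_i(x) = prod_{j != i} (x - r_j)/(r_i - r_j), so e_i(r_j) = delta_{ij}."""
    poly = [1]
    for j, r in enumerate(roots):
        if j == i:
            continue
        inv = pow((roots[i] - r) % p, p - 2, p)  # Fermat inverse mod p
        poly = [(c * inv) % p for c in poly_mul_linear(poly, r)]
    return poly

def evaluate(poly, x):
    return sum(c * pow(x, k, p) for k, c in enumerate(poly)) % p

idems = [lagrange_idempotent(i) for i in range(len(roots))]
# Each e_i evaluates to the i-th standard basis vector of F_5^3 on the roots,
# and the e_i sum to the constant polynomial 1.
for i, e in enumerate(idems):
    assert [evaluate(e, r) for r in roots] == [int(i == j) for j in range(3)]
assert [sum(cs) % p for cs in zip(*idems)] == [1, 0, 0]
```

This makes concrete why the roots of $f$ are needed for the explicit construction, in contrast with the root-free isomorphism argument of the next theorem.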
In fact, the following theorem can also be deduced from the results of \cite{poli1985important}, \cite{martinez2006multivariable} or \cite{chillag1995regular}; however, a simple alternative proof follows from the lemmas proved above. \begin{theorem}\label{complete orthogonal idempotents} Suppose, for $1\le i\le k,$ that $f_i(x_i)$ splits over $\mathbb{F}_q$ and has simple roots. Then $\mathcal{R}=\mathbb{F}_q[x_1,\dots,x_k]/\langle f_1(x_1), \dots,f_k(x_k) \rangle$ is isomorphic to a product ring of the form $\mathbb{F}_q^l.$ \end{theorem} \begin{proof} Observe that as rings $$\mathcal{R} \cong \mathbb{F}_q[x_1]/\langle f_1(x_1) \rangle \otimes \mathbb{F}_q[x_2]/\langle f_2(x_2) \rangle \otimes \dots \otimes \mathbb{F}_q[x_k]/\langle f_k(x_k) \rangle.$$ So, by Theorem \ref{isomorphic to F_q^l}, $$\mathcal{R}\cong \mathbb{F}_q^{\deg{f_1}} \otimes \mathbb{F}_q^{\deg{f_2}}\otimes \dots \otimes \mathbb{F}_q^{\deg{f_k}}.$$ Therefore, by Lemma \ref{tensortodirectproduct}, $\mathcal{R}$ is a ring isomorphic to $\mathbb{F}_q^l,$ where $l=\underset{i=1}{\overset{k}{\prod}}\deg{f}_i.$ \hfill $\square$ \end{proof} \begin{remark} Although Bhardwaj and Raka in \cite{swati_raka} considered codes over a wide class of rings that are isomorphic to product rings, considering codes over a product ring ${\mathbb{F}_q^l}$, for $l \in \mathbb{N},$ is still more general than considering codes over a ring of the form $\mathbb{F}_q[x_1,\dots,x_k]/\langle f_1(x_1),\dots,f_k(x_k) \rangle,$ where $k\in \mathbb{N}$ and each $f_i(x_i)$ splits over $\mathbb{F}_q$ and has simple roots. Indeed, for any prime power $q$ and any such $f_1(x_1),\dots,f_k(x_k)\in\mathbb{F}_q[x_1,\dots,x_k],$ we have $\mathbb{F}_q[x_1,\dots,x_k]/\langle f_1(x_1),\dots,f_k(x_k) \rangle\not\cong\mathbb{F}_{p_1}^{p_2},$ where $p_1<p_2$ are primes.
\end{remark} \begin{remark} Another advantage of using $\mathbb{F}_q^l$ as the base ring is that a basis consisting of orthogonal idempotents is readily available, namely, the standard basis. \end{remark} \begin{remark} In a finite commutative $\mathbb{F}_q$-algebra with unity, a basis of idempotents summing to the unity need not be unique; for instance, in $\mathbb{F}_2^4$, besides the standard basis \\ $\{(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)\},$ the set $\{(1,1,1,0),(0,1,1,1),(1,0,1,1),(1,1,0,1)\}$ is also a basis of idempotents summing to $(1,1,1,1),$ whose elements are pairwise orthogonal with respect to the Euclidean inner product (though not under the ring multiplication). \end{remark} \begin{remark}\label{remark2.10PIR} Note that if $\mathcal{S}$ is isomorphic to the product ring $\mathbb{F}_q^l$, then $\mathcal{S}[x]$ is a principal ideal ring, since $\mathbb{F}_q^l[x]\cong \mathbb{F}_q[x]\times \dots \times \mathbb{F}_q[x]$ ($l$ times) as rings and $\mathbb{F}_q[x]$ is a principal ideal domain. Hence, for any polynomial $f(x) \in \mathcal{S}[x]$, the quotient ring $\frac{\mathcal{S}[x]}{\langle f(x) \rangle}$ is also a principal ideal ring. \end{remark} \section{Polycyclic Codes over a Product Ring}\label{sec4} Let $\mathcal{B}=\{\bm{e}_1,\bm{e}_2,\dots,\bm{e}_l\}$ denote an $\mathbb{F}_q$-basis of $\mathbb{F}_q^l$ consisting of orthogonal idempotents having sum $\bm{1}.$ Note that one such basis is the standard basis. Let $\bm{a}\in\mathbb{F}_q^l$ and write $\bm{a}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_ia_i,$ where $a_i\in\mathbb{F}_q,$ for $1\le i\le l$. Denote the $i$-th canonical projection of $\mathbb{F}_q^l$ to $\mathbb{F}_q$ by $\pi_i$, so that $\pi_i(\bm{a})=a_i$.
Extend this to $\tilde{\pi}_i:(\mathbb{F}_q^l)^n\to\mathbb{F}_q^n$ such that $\tilde{\pi}_i(\bm{a}_0,\dots,\bm{a}_{n-1})=(\pi_i(\bm{a}_0),\dots,\pi_i(\bm{a}_{n-1})).$ We will use the following notations throughout the article: \begin{notation} For any $\Bar{\bm{a}}=\left(\bm{a}_0, \bm{a}_1,\dots, \bm{a}_{n-1}\right)\in (\mathbb{F}_q^l)^n,$ write $\bm{a}_i=\underset{j=1}{\overset{l}{\sum}} \bm{e}_ja_{i,j}$, where $a_{i,j}\in\mathbb{F}_q,$ for $0\leq i\leq {n-1}.$ For $1 \leq j \leq l,$ we define $\bm{a}^{(j)}=\big(a_{0,j},a_{1,j}, \dots, a_{{n-1},j}\big)\in\mathbb{F}_q^n$, so that \begin{equation}\label{decomposition of a vector} \Bar{\bm{a}} =\underset{j=1}{\overset{l}{\sum}}(a_{0,j}\bm{e}_j,a_{1,j}\bm{e}_j,\dots,a_{n-1,j}\bm{e}_j)=\underset{j=1}{\overset{l}{\sum}}\bm{e}_j*{\bm{a}}^{(j)}. \end{equation} \end{notation} \begin{notation} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ for some $\Bar{\bm{a}}=(\bm{a}_0,\bm{a}_1,\dots,\bm{a}_{n-1})\in (\mathbb{F}_q^l)^n.$ Then, from Equation \ref{tuple to polynomial}, we have $\Bar{\bm{a}}(x)=\bm{a}_0+\bm{a}_1x+\dots+\bm{a}_{n-1}x^{n-1}$ and $\mathcal{C}$ is an ideal of $\mathbb{F}_q^l[x]/\langle x^n-\Bar{\bm{a}}(x)\rangle.$ Further, from Equation \ref{decomposition of a vector}, we get $\Bar{\bm{a}}=\underset{j=1}{\overset{l}{\sum}}(a_{0,j}\bm{e}_j,a_{1,j}\bm{e}_j,\dots,a_{n-1,j}\bm{e}_j)=\underset{j=1}{\overset{l}{\sum}}\bm{e}_j*{\bm{a}}^{(j)}\in (\mathbb{F}_q^l)^n$ with $\bm{a}^{(j)}=\big(a_{0,j},a_{1,j}, \dots, a_{{n-1},j}\big)\in\mathbb{F}_q^n.$ Observe that $x^n-\Bar{\bm{a}}(x)=\underset{j=1}{\overset{l}{\sum}}\bm{e}_j(x^n-\bm{a}^{(j)}(x)),$ where the polynomials $x^n-\bm{a}^{(j)}(x)\in \mathbb{F}_q[x]$ are uniquely determined.
\end{notation} With respect to the basis $\mathcal{B}$, every $\mathbb{F}_q^l$-submodule of $(\mathbb{F}_q^l)^n$ admits a unique decomposition into $\mathbb{F}_q$-linear subspaces of $\mathbb{F}_q^n$, as the following theorem shows. \begin{theorem}[$\mathbb{F}_q$-decomposition]\label{linear} Let $\{\bm{e}_1,\bm{e}_2,\dots,\bm{e}_l\}$ be a basis of $\mathbb{F}_q^l$ over $\mathbb{F}_q$ consisting of orthogonal idempotents such that $\sum_{i=1}^l\bm{e}_i=\bm{1}$. For $\mathcal{C}\subset(\mathbb{F}_q^l)^n$, denote $\tilde\pi_i(\mathcal{C})$ by $\mathcal{C}_i,$ for $1\le i\le l.$ Then, \begin{enumerate} \item[(a)] $\mathcal{C}$ is an $\mathbb{F}_q^l$-submodule of $(\mathbb{F}_q^l)^n$ if and only if, for all $1\le i\le l,$ $\mathcal{C}_i$ is an $\mathbb{F}_q$-subspace of $\mathbb{F}_q^n$ and $\mathcal{C}=\underset{i=1}{\overset{l}{\sum}} \bm{e}_i\mathcal{C}_i.$ \item[(b)] If $\mathcal{C}$ is an $\mathbb{F}_q^l$-submodule of $(\mathbb{F}_q^l)^n$ and $\mathcal{C}=\underset{i=1}{\overset{l}{\sum}} \bm{e}_i\mathcal{C}_i'$, where $\mathcal{C}_i'\subset\mathbb{F}_q^n$ ($1\le i\le l$), then $\mathcal{C}_i'=\mathcal{C}_i$ for each $i$, so that the sum is direct.
\end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item [(a)]Let $\mathcal{C}$ be an $\mathbb{F}_q^l$-submodule of $(\mathbb{F}_q^l)^n$, and let $\Bar{\bm{a}}=(\bm{a}_1,\bm{a}_2,\dots,\bm{a}_n)$ and $\Bar{\bm{b}}=(\bm{b}_1,\bm{b}_2,\dots, \bm{b}_n)\in \mathcal{C}.$ For $\alpha,\beta\in \mathbb{F}_q,$ observe that $\alpha\tilde{\pi}_{i}(\Bar{\bm{a}})+\beta\tilde{\pi}_{i}(\Bar{\bm{b}})=\tilde{\pi}_{i}(\alpha\Bar{\bm{a}})+\tilde{\pi}_{i}(\beta\Bar{\bm{b}})=\tilde{\pi}_{i}(\alpha\Bar{\bm{a}}+\beta\Bar{\bm{b}}).$ Since $\alpha\Bar{\bm{a}}+\beta\Bar{\bm{b}}\in \mathcal{C}$, we have $\alpha\tilde{\pi}_{i}(\Bar{\bm{a}})+\beta\tilde{\pi}_{i}(\Bar{\bm{b}})\in \mathcal{C}_i$ so that $\mathcal{C}_i$ is an $\mathbb{F}_q$-subspace of $\mathbb{F}_q^n.$ Further, by definition of $\mathcal{C}_i$, it follows that $\mathcal{C}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\mathcal{C}_i.$ Conversely, suppose that $\mathcal{C}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\mathcal{C}_i$ and each $\mathcal{C}_i$ is an $\mathbb{F}_q$-subspace. Let $\Bar{\bm{a}}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i*{\bm{a}}^{(i)},\,\Bar{\bm{b}}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i*{\bm{b}}^{(i)}\in\mathcal{C},$ where ${\bm{a}}^{(i)}, {\bm{b}}^{(i)} \in \mathcal{C}_i $ for $1\leq i\leq l,$ and let $\bm{\alpha}=\underset{i=1}{\overset{l}{\sum}}\alpha_i\bm{e}_i\in\mathbb{F}_q^l,$ where $\alpha_i\in \mathbb{F}_q$ for $1\leq i \leq l.$ Then, $\bm{\alpha}\Bar{\bm{a}}+\Bar{\bm{b}}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i*(\alpha_i\bm{a}^{(i)}+\bm{b}^{(i)})\in \mathcal{C}$ as each $\mathcal{C}_i$ is an $\mathbb{F}_q$-subspace. \item[(b)] Since $\{\bm{e}_1,\bm{e}_2,\dots,\bm{e}_l\}$ is an $\mathbb{F}_q$-basis of $\mathbb{F}_q^l,$ the assertion follows. 
\end{enumerate} \hfill $\square$ \end{proof} Thus, $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}} \bm{e}_i\mathcal{C}_i$, where $\mathcal{C}_i=\tilde\pi_i(\mathcal{C}).$ We refer to this decomposition of $\mathcal{C}$ as the {\it $\mathbb{F}_q$-decomposition} with respect to the basis $\{\bm{e}_1,\bm{e}_2,\dots,\bm{e}_l\}$. In part (a) of Theorem \ref{linear}, the ``if" direction fails if we drop the hypothesis that $\mathcal{C}=\underset{i=1}{\overset{l}{\sum}} \bm{e}_i\mathcal{C}_i.$ \begin{example} Let $\mathcal{C}=\mathbb{F}_2^2\setminus\{0\}.$ Then, $\tilde{\pi}_1(\mathcal{C})=\mathbb{F}_2=\tilde{\pi}_2(\mathcal{C}).$ But $\mathcal{C}$ is not $\mathbb{F}_2$-linear. \end{example} By Theorem \ref{linear}, given any linear code $\mathcal{C}$ over $\mathbb{F}_q^l$, we obtain $l$ linear codes over $\mathbb{F}_q.$ \begin{theorem}\label{polycyclic_classification} Let $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i$ be an $\mathbb{F}_q^l$-linear code of length $n,$ and let $\Bar{\bm{a}}=\bm{e}_1*\bm{a}^{(1)}+\bm{e}_2*\bm{a}^{(2)}+ \dots +\bm{e}_l*\bm{a}^{(l)} \in (\mathbb{F}_q^l)^n$, where $\bm{a}^{(i)}\in\mathbb{F}_q^n$, for $1\le i\le l$. Then, $\mathcal{C}$ is $\Bar{\bm{a}}$-polycyclic over $\mathbb{F}_q^l$ if and only if for each $1 \leq i \leq l$, $\mathcal{C}_i$ is $\bm{a}^{(i)}$-polycyclic over $\mathbb{F}_q$. \end{theorem} \begin{proof}Let $\Bar{\bm{c}}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i*\bm{c}^{(i)}\in\mathcal{C},$ where for each $1\leq i\leq l, \, \, \bm{c}^{(i)}=(c_{1,i},c_{2,i},\dots,c_{n,i} )\in \mathcal{C}_i$.
Then, $\Bar{\bm{c}}=(\bm{c}_1,\bm{c}_2,\dots,\bm{c}_{n})$ where $\bm{c}_i=\underset{j=1}{\overset{l}{\sum}}\bm{e}_jc_{i,j}\in\mathbb{F}_q^l.$ Suppose $\Bar{\bm{a}}=(\bm{a}_0,\bm{a}_1,\dots,\bm{a}_{n-1})\in(\mathbb{F}_q^l)^n,$ where $\bm{a}_i=\underset{j=1}{\overset{l}{\sum}}\bm{e}_ja_{i,j},$ for $0\leq i\leq n-1.$ Set, for $1\le i\le l$, $\bm{a}^{(i)}=(a_{0,i},a_{1,i},\dots,a_{n-1,i})\in\mathbb{F}_q^n.$ Observe that $\bm{c}_n\cdot \bm{a}_i=\underset{j=1}{\overset{l}{\sum}}a_{i,j}c_{n,j}\bm{e}_j,$ for $0\leq i\leq n-1,$ and \begin{align*} (\bm{0},\bm{c}_1,\dots,\bm{c}_{n-1})+\bm{c}_n\Bar{\bm{a}} = \,\, & \bm{e}_1*[(0,c_{1,1},c_{2,1},\dots,c_{n-1,1})+c_{n,1}\bm{a}^{(1)}] \\ +\,\, & \bm{e}_2*[(0,c_{1,2},c_{2,2},\dots,c_{n-1,2})+c_{n,2}\bm{a}^{(2)}] \\ + \,\,& \cdots \\ + \,\,& \bm{e}_l*[(0,c_{1,l},c_{2,l},\dots,c_{n-1,l})+c_{n,l}\bm{a}^{(l)}]. \end{align*} Then, $(0, c_{1,i},c_{2,i},\dots,c_{n-1,i})+c_{n,i}\bm{a}^{(i)}\in \mathcal{C}_i$ for each $1\le i\le l$ if and only if $(\bm{0},\bm{c}_1,\dots,\bm{c}_{n-1}) +\bm{c}_n\Bar{\bm{a}} \in \mathcal{C}.$ Hence, each $\mathcal{C}_i$ is $\bm{a}^{(i)}$-polycyclic over $\mathbb{F}_q$ if and only if $\mathcal{C}$ is $\Bar{\bm{a}}$-polycyclic over $\mathbb{F}_q^l.$ \hfill $\square$ \end{proof} \textbf{Note}: From Remark \ref{remark2.10PIR}, it is clear that every $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ is principally generated. Moreover, a generator polynomial can be given as: \begin{theorem}\label{Generator polynomial of Polycyclic codes} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ and let $\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i$ be the $\mathbb{F}_q$-decomposition of $\mathcal{C}$ with respect to $\mathcal{B}$.
If $\mathcal{C}_i$ is $\mathbb{F}_q$-isomorphic to the ideal generated by $\bm{g}^{(i)}(x)$ in $\mathbb{F}_q[x]/\langle x^n-\bm{a}^{(i)}(x)\rangle$, then $\Bar{\bm{g}}(x):=\underset{i=1}{\overset{l}{ \sum}} \bm{e}_i\bm{g}^{(i)}(x)$ divides $x^n-\Bar{\bm{a}}(x)$ in $\mathbb{F}_q^l[x],$ and the ideal generated by $\Bar{\bm{g}}(x)$ in $\mathbb{F}_q^l[x]/\langle x^n-\Bar{\bm{a}}(x)\rangle$ is isomorphic to $\mathcal{C}$ as an $\mathbb{F}_q^l$-module. Moreover, if $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle$, then $\mathcal{C}_i=\langle \bm{g}^{(i)}(x) \rangle$ for each $1 \leq i \leq l,$ where $\Bar{\bm{g}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x).$ \end{theorem} \begin{proof} Assume the hypothesis of the theorem. Then, there exists $\bm{h}^{(i)}(x)\in \mathbb{F}_q[x]$ such that $\bm{g}^{(i)}(x)\bm{h}^{(i)}(x)=x^n-\bm{a}^{(i)}(x).$ Define $\Bar{\bm{h}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{h}^{(i)}(x)\in\mathbb{F}_q^l[x].$ Then, $\Bar{\bm{g}}(x)\Bar{\bm{h}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x)\bm{h}^{(i)}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i(x^n-\bm{a}^{(i)}(x))=x^n-\Bar{\bm{a}}(x),$ so that $\Bar{\bm{g}}(x) \mid (x^n-\Bar{\bm{a}}(x))$ over $\mathbb{F}_q^l.$ Next, let $\overline{\Bar{\bm{c}}(x)}\in\mathcal{C}.$ Since $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i,$ we have $\overline{\Bar{\bm{c}}(x)}=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i \overline{\bm{g}^{(i)}(x) \bm{q}^{(i)}(x)}$ for some $\bm{q}^{(i)}(x)\in \mathbb{F}_q[x].$ Observe that $\overline{\Bar{\bm{c}}(x)}=\overline{\Bar{\bm{q}}(x)}\, \overline{\Bar{\bm{g}}(x)}$ for $\Bar{\bm{q}}(x)=\left(\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{q}^{(i)}(x)\right)\in\mathbb{F}_q^l[x].$ The converse follows from the fact that, for each $1 \leq i \leq l$, $\bm{e}_i\mathcal{C}_i$ is embedded in $\mathcal{C}.$ \hfill $\square$ \end{proof} \begin{corollary} Let $\Bar{\bm{a}}=(\bm{a}_0,\bm{a}_1,\dots,\bm{a}_{n-1})\in(\mathbb{F}_q^l)^n,$ and let
$x^n-\Bar{\bm{a}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i(x^n-\bm{a}^{(i)}(x)).$ Suppose $x^n-\bm{a}^{(i)}(x)=f_{i,1}(x)^{n_{i,1}}\dots f_{i,{k_i}}(x)^{n_{i,{k_i}}}$ is the prime factorization of $x^n-\bm{a}^{(i)}(x) $ over $\mathbb{F}_q$ for $1\leq i\leq l.$ Then, there are exactly $\underset{i=1}{\overset{l}{\prod}}\underset{j=1}{\overset{k_i}{\prod}}(n_{i,j}+1)$ distinct $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$. \end{corollary} \begin{proof} Let $\Bar{\bm{a}}(x)\in \mathbb{F}_q^l[x]$ and let $x^n-\Bar{\bm{a}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i(x^n-\bm{a}^{(i)}(x)).$ Every $\Bar{\bm{a}}$-polycyclic code $\mathcal{C}$ is an ideal of $\mathbb{F}_q^l[x]/\langle x^n-\Bar{\bm{a}}(x)\rangle$ and can be uniquely decomposed as $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i,$ where each $\mathcal{C}_i$ is an $\mathbb{F}_q$-linear subspace of $\mathbb{F}_q^n$ and a principal ideal of $\mathbb{F}_q[x]/\langle x^n-\bm{a}^{(i)}(x) \rangle.$ Hence, the number of distinct $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$ is equal to the product of the numbers of distinct $\bm{a}^{(i)}$-polycyclic codes over $\mathbb{F}_q$ for $1\le i\le l,$ and the number of distinct $\bm{a}^{(i)}$-polycyclic codes over $\mathbb{F}_q$ is equal to the number of monic divisors of $x^n-\bm{a}^{(i)}(x).$ \hfill $\square$ \end{proof} \begin{corollary} Let $\mathcal{C}$ and $\widetilde{\mathcal{C}}$ be $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$ with generator polynomials $\Bar{\bm{g}}(x)$ and $\widetilde{\Bar{\bm{g}}(x)},$ respectively. Then, $\mathcal{C} \subset \widetilde{\mathcal{C}}$ if and only if $\widetilde{\Bar{\bm{g}}(x)} $ divides $ \Bar{\bm{g}}(x)$ over $\mathbb{F}_q^l$.
\end{corollary} \begin{proof} $\mathcal{C} \subset \widetilde{\mathcal{C}} \iff \langle \Bar{\bm{g}}(x) \rangle \subset \langle \widetilde{\Bar{\bm{g}}(x)}\rangle \iff \Bar{\bm{g}}(x)\in \langle \widetilde{\Bar{\bm{g}}(x)}\rangle \iff \Bar{\bm{g}}(x)=\widetilde{\Bar{\bm{g}}(x)}\Bar{\bm{b}}(x)$ for some $\Bar{\bm{b}}(x)\in \mathbb{F}_q^l[x] \iff \widetilde{\Bar{\bm{g}}}(x) $ divides $\Bar{\bm{g}}(x)$ over $\mathbb{F}_q^l.$ \hfill $\square$ \end{proof} \begin{corollary} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x)\rangle$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ of length $n.$ If $\Bar{\bm{g}}(x)$ is monic, then $\mathcal{C}$ is a free $\mathbb{F}_q^l$-submodule of $(\mathbb{F}_q^l)^n$ and rank$(\mathcal{C})=n-\deg \Bar{\bm{g}}(x)$. \end{corollary} \begin{proof} Consider the set $\{\Bar{\bm{g}}(x)\Bar{\bm{h}}(x):\Bar{\bm{h}}(x)\in\mathbb{F}_q^l[x]\}.$ Note that, for distinct $\Bar{\bm{h}}_1(x)\ne\Bar{\bm{h}}_2(x)$ with $\deg \Bar{\bm{h}}_i(x) \le n-\deg \Bar{\bm{g}}(x)-1$ for $1\le i \le 2$, we have $\Bar{\bm{g}}(x)\Bar{\bm{h}}_1(x) \not \equiv \Bar{\bm{g}}(x)\Bar{\bm{h}}_2(x) \, (\textnormal{mod}\, \, x^n-\Bar{\bm{a}}(x)).$ In fact, for any codeword $\Bar{\bm{g}}(x)\Bar{\bm{h}}(x)\in \mathcal{C},$ we write $\Bar{\bm{g}}(x)\Bar{\bm{h}}(x)=(x^n-\Bar{\bm{a}}(x))\Bar{\bm{q}}(x)+\Bar{\bm{r}}(x),$ where $\deg \Bar{\bm{r}}(x) \le n-1.$ So, we have $\Bar{\bm{r}}(x)=\Bar{\bm{g}}(x)\Bar{\bm{h}}(x)-(x^n-\Bar{\bm{a}}(x))\Bar{\bm{q}}(x).$ Hence, $\Bar{\bm{g}}(x) \mid \Bar{\bm{r}}(x)$ and $\Bar{\bm{r}}(x)=\Bar{\bm{g}}(x)\Bar{\bm{l}}(x).$ Since $\Bar{\bm{g}}(x)$ is monic, $\deg \Bar{\bm{l}}(x) \le n-\deg \Bar{\bm{g}}(x)-1.$ So, $\{\Bar{\bm{g}}(x),x\Bar{\bm{g}}(x),\dots,x^{n-\deg \Bar{\bm{g}}(x)-1}\Bar{\bm{g}}(x)\}$ spans $\mathcal{C}.$ Also, as $\Bar{\bm{g}}(0)$ is a unit (since $\Bar{\bm{g}}(x)$ divides $x^n-\Bar{\bm{a}}(x)$ and $\Bar{\bm{a}}(0)=\bm{a}_0\in U(\mathbb{F}_q^l)$), the set $\{\Bar{\bm{g}}(x),x\Bar{\bm{g}}(x),\dots,x^{n-\deg \Bar{\bm{g}}(x)-1}\Bar{\bm{g}}(x)\}$ is linearly independent over $\mathbb{F}_q^l.$ \hfill $\square$ \end{proof} \begin{corollary} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle$ be an
$\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$. Then $\Bar{\bm{g}}(x)$ is monic if and only if $\deg(\bm{g}^{(i)}(x))=\deg(\bm{g}^{(j)}(x))$ for $1\le i, j\le l,$ where $\Bar{\bm{g}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x)$ and each $\bm{g}^{(i)}(x)\in\mathbb{F}_q[x]$ is monic. \end{corollary} \begin{proof} If each $\bm{g}^{(i)}(x)$ has the same degree, then the leading coefficient of $\Bar{\bm{g}}(x)$ is $\underset{i=1}{\overset{l}{\sum}}\bm{e}_i=\bm{1},$ so $\Bar{\bm{g}}(x)$ is monic. Conversely, if the degrees are not all equal, then the leading coefficient of $\Bar{\bm{g}}(x)$ is a sum of $l-1$ or fewer elements of $\{\bm{e}_1,\dots,\bm{e}_l\},$ which is a zero divisor in $\mathbb{F}_q^l$ and hence not $\bm{1}$. \hfill $\square$ \end{proof} In 2016, Alahmadi et al. (\cite{alahmadi2016duality}) introduced the following bilinear form. \begin{definition}\label{bilinearform} Let $\mathbf{f}(x) \in \mathbb{F}_q[x]$. Define the $\mathbb{F}_q$-valued bilinear form on $\mathbb{F}_q[x]/\langle \mathbf{f}(x)\rangle$ by \begin{equation} \langle \overline{\mathbf{h}_1(x)},\overline{\mathbf{h}_2(x)} \rangle_\mathbf{f}=\mathbf{r}(0) \end{equation} for $\mathbf{h}_1(x), \mathbf{h}_2(x)\in \mathbb{F}_q[x],$ where $\mathbf{r}(x)$ is the remainder when $\mathbf{h}_1(x)\mathbf{h}_2(x)$ is divided by $\mathbf{f}(x)$ in $\mathbb{F}_q[x].$ \end{definition} This bilinear form can be extended to $\frac{\mathbb{F}_q^l[x]}{\langle x^n-\Bar{\bm{a}}(x) \rangle}$ in the following manner. Define the $\mathbb{F}_q^l$-valued bilinear form on the quotient $\frac{\mathbb{F}_q^l[x]}{\langle x^n-\Bar{\bm{a}}(x) \rangle }$ by \begin{equation} \langle \overline{\Bar{\bm{h}}_1(x)},\overline{\Bar{\bm{h}}_2(x)} \rangle_{\Bar{\bm{a}}}=\Bar{\bm{r}}(0) \end{equation} for $\Bar{\bm{h}}_1(x), \Bar{\bm{h}}_2(x)\in \mathbb{F}_q^l[x],$ where $\Bar{\bm{r}}(x)$ is the remainder when $\Bar{\bm{h}}_1(x)\Bar{\bm{h}}_2(x) $ is divided by $x^n-\Bar{\bm{a}}(x)$ in $ \mathbb{F}_q^l[x].$ \begin{lemma} If $\Bar{\bm{a}}(0)=\bm{a}_0$ is a unit in $\mathbb{F}_q^l$, then the bilinear form $\langle \cdot,\cdot \rangle_{\Bar{\bm{a}}}$ defined above is non-degenerate.
\end{lemma} \begin{proof}Let $\Bar{\bm{h}}(x)\in\mathbb{F}_q^l[x]$ be such that $\langle \overline{\Bar{\bm{f}}(x)}, \overline{\Bar{\bm{h}}(x)} \rangle_{\Bar{\bm{a}}}=0\, \forall \, \Bar{\bm{f}}(x)\in \mathbb{F}_q^l[x].$ Assume, without loss of generality, that $\deg(\Bar{\bm{h}}(x))\le n-1$, and set $\Bar{\bm{h}}(x)=\bm{h}_0+\bm{h}_1x+\dots+\bm{h}_{n-1}x^{n-1}$. Putting $\Bar{\bm{f}}(x)=1$, we get $\bm{h}_0=\Bar{\bm{h}}(0)=\bm{0}.$ Inductively, putting $\Bar{\bm{f}}(x)=x^{n-i}$ for $1\le i\le n-1,$ we get $\bm{a}_0\bm{h}_{i}=\bm{0}$, and hence $\bm{h}_{i}=\bm{0}$ as $\bm{a}_0$ is a unit, so that $\Bar{\bm{h}}(x)=\Bar{\bm{0}}.$ \hfill $\square$ \end{proof} However, the converse is false. \begin{example} For $\Bar{\bm{a}}(x)=\bm{0}$ and $n=1$, in $\mathbb{F}_q^l[x]/\langle x \rangle$ the above bilinear form reduces to the product in the ring $\mathbb{F}_q^l,$ and hence it is non-degenerate even though $\bm{a}_0=\bm{0}$ is not a unit. \end{example} \begin{definition} The bilinear form $\langle \cdot,\cdot \rangle_{\Bar{\bm{a}}}$ defined after Definition \ref{bilinearform} is called the {\it annihilator product.} We denote the dual of $\mathcal{C}$ with respect to the annihilator product by $\mathcal{C}^\circ,$ and call it the {\it annihilator dual.} Symbolically, $\mathcal{C}^\circ:=\{\overline{\Bar{\bm{f}}(x)}\in\frac{\mathbb{F}_q^l[x]}{\langle x^n-\Bar{\bm{a}}(x)\rangle}\,|\,\langle \overline{\Bar{\bm{f}}(x)},\overline{\Bar{\bm{h}}(x)} \rangle_{\Bar{\bm{a}}}=\bm{0}, \forall\, \overline{\Bar{\bm{h}}(x)} \in \mathcal{C}\}.$ \end{definition} \begin{definition} The {\it annihilator} of $\mathcal{C}$ is defined as \begin{equation} \textnormal{ Ann}(\mathcal{C}):=\{\overline{\Bar{\bm{f}}(x)}\in\frac{\mathbb{F}_q^l[x]}{\langle x^n-\Bar{\bm{a}}(x)\rangle}|\, \overline{\Bar{\bm{f}}(x)}\,\overline{\Bar{\bm{h}}(x)}=\Bar{\bm{0}} \in \frac{\mathbb{F}_q^l[x]} { \langle x^n-\Bar{\bm{a}}(x)\rangle} \textnormal{ for all } \overline{\Bar{\bm{h}}(x)} \in \mathcal{C}\}.
\end{equation} \end{definition} \begin{lemma}\label{Ann(C)=C^0} Let $\Bar{\bm{a}}(x)=\bm{a}_{n-1}x^{n-1}+\dots+\bm{a}_1x+\bm{a}_0 \in \mathbb{F}_q^l[x]$, where $\bm{a}_0 \in U(\mathbb{F}_q^l)$, and let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ of length $n$. Then $\mathcal{C}^\circ=\textnormal{Ann}(\mathcal{C}).$ \end{lemma} \begin{proof}By definition, $\textnormal{Ann}(\mathcal{C})\subset\mathcal{C}^\circ.$ For the converse, let $\overline{\Bar{\bm{f}}(x)}\in\mathcal{C}^\circ.$ We need to show that $\overline{\Bar{\bm{f}}(x)\Bar{\bm{h}}(x)}=\Bar{\bm{0}}$ for every $\overline{\Bar{\bm{h}}(x)}\in\mathcal{C}.$ Suppose that $\Bar{\bm{g}}(x)$ generates $\mathcal{C}$ and write $\Bar{\bm{f}}(x)\Bar{\bm{g}}(x)=(x^n-\Bar{\bm{a}}(x))\Bar{\bm{q}}(x)+\Bar{\bm{r}}(x),$ where $\Bar{\bm{q}}(x), \Bar{\bm{r}}(x)\in \mathbb{F}_q^l[x]$ and $\Bar{\bm{r}}(x)=\bm{r}_{n-1}x^{n-1}+\dots+\bm{r}_1x+\bm{r}_0.$ Then, for each $i\ge0,$ $\overline{x^i\Bar{\bm{g}}(x)}\in\mathcal{C}$, so that $\bm{r}_0=\langle \overline{\Bar{\bm{f}}(x)}, \overline{\Bar{\bm{g}}(x)} \rangle_{\Bar{\bm{a}}}=\bm{0},\, \bm{a}_0\bm{r}_{n-1}=\langle \overline{\Bar{\bm{f}}(x)}, \overline{x\Bar{\bm{g}}(x)} \rangle_{\Bar{\bm{a}}}=\bm{0},\dots, \bm{a}_0\bm{r}_1=\langle \overline{\Bar{\bm{f}}(x)}, \overline{x^{n-1}\Bar{\bm{g}}(x)} \rangle_{\Bar{\bm{a}}}=\bm{0}.$ Since $\bm{a}_0$ is a unit, $\bm{r}_j=\bm{0}$ for each $j$, so that $\overline{\Bar{\bm{f}}(x)\Bar{\bm{g}}(x)}=\Bar{\bm{0}}.$ As $\Bar{\bm{g}}(x)$ generates $\mathcal{C}$, this yields $\overline{\Bar{\bm{f}}(x)\Bar{\bm{h}}(x)}=\Bar{\bm{0}}$ for every $\overline{\Bar{\bm{h}}(x)}\in\mathcal{C}.$ \hfill $\square$ \end{proof} We have a few results which hold for sequential codes over $\mathbb{F}_q^l.$ \begin{theorem}\label{classification of sequential codes}Let $\mathcal{C}$ be an $\mathbb{F}_q^l$-linear code of length $n,$ and let $\Bar{\bm{a}}=\underset{i=1}{\overset{l}{\sum}} \bm{e}_i*\bm{a}^{(i)}\in (\mathbb{F}_q^l)^n,$ where $\bm{a}^{(i)}\in\mathbb{F}_q^n,$ for $1\le i\le l$.
Then, $\mathcal{C}$ is $\Bar{\bm{a}}$-sequential over $\mathbb{F}_q^l$ if and only if $\mathcal{C}_i$ is $\bm{a}^{(i)}$-sequential over $\mathbb{F}_q$ for each $i.$ \end{theorem} \begin{proof} The proof follows along the same lines as that of Theorem \ref{polycyclic_classification}. \end{proof} \begin{lemma} An $\mathbb{F}_q^l$-linear code is $\Bar{\bm{a}}$-polycyclic if and only if its Euclidean dual is $\Bar{\bm{a}}$-sequential over $\mathbb{F}_q^l.$ \end{lemma} \begin{proof}By Theorem \ref{polycyclic_classification}, $\mathcal{C}$ is an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ if and only if each $\mathcal{C}_i$ is $\bm{a}^{(i)}$-polycyclic over $\mathbb{F}_q.$ By [Theorem 3.2, \cite{lopez2009dual}], $\mathcal{C}_i$ is $\bm{a}^{(i)}$-polycyclic over $\mathbb{F}_q$ if and only if $\mathcal{C}_i^\perp$ is $\bm{a}^{(i)}$-sequential over $\mathbb{F}_q.$ Hence, by Theorem \ref{classification of sequential codes}, $\mathcal{C}^\perp$ is $\Bar{\bm{a}}$-sequential over $\mathbb{F}_q^l.$ \hfill $\square$ \end{proof} \begin{lemma}\label{Dual of dual} If $\mathcal{C}$ is an $\mathbb{F}_q^l$-linear $\Bar{\bm{a}}$-polycyclic code, then $\mathcal{C}^\circ=(\mathcal{C}.A)^\perp,$ where $\mathcal{C}.A=\{\Bar{\bm{c}}A\,|\,\Bar{\bm{c}}\in\mathcal{C}\}$ with $A:=(\langle \Bar{\bm{\epsilon}}_i(x),\Bar{\bm{\epsilon}}_j(x)\rangle_{\Bar{\bm{a}}})_{1\leq i,j\leq n}$, for the standard basis $\{\Bar{\bm{\epsilon}}_i \,|\,1\le i\le n\}$ of $(\mathbb{F}_q^l)^n$ over $\mathbb{F}_q^l.$ In particular, $(\mathcal{C}^\circ)^\circ=\mathcal{C}.$ \end{lemma} \begin{proof} Note that $\langle \Bar{\bm{x}}, \Bar{\bm{y}}\rangle_{\Bar{\bm{a}}}=\Bar{\bm{x}}A\Bar{\bm{y}}^t$ for $\Bar{\bm{x}}, \Bar{\bm{y}}\in (\mathbb{F}_q^l)^n.$ But $A^t=A$, and hence $\Bar{\bm{x}}A\Bar{\bm{y}}^t=\Bar{\bm{x}}(\Bar{\bm{y}}A)^t=\langle \Bar{\bm{x}},\Bar{\bm{y}}A\rangle.$ Further, consider $\mathcal{C}^\circ=\{\Bar{\bm{x}}\in(\mathbb{F}_q^l)^n \,|\, \langle \Bar{\bm{x}}, \Bar{\bm{y}}\rangle_{\Bar{\bm{a}}} =\bm{0}, \, \forall \,\Bar{\bm{y}}\in \mathcal{C}\} =\{\Bar{\bm{x}}\in(\mathbb{F}_q^l)^n \,|\, \langle \Bar{\bm{x}},\Bar{\bm{y}}A\rangle=\bm{0}, \, \forall \,\Bar{\bm{y}}\in \mathcal{C}\}=\{\Bar{\bm{x}}\in(\mathbb{F}_q^l)^n\,|\, \langle \Bar{\bm{x}},\Bar{\bm{y}}\rangle=\bm{0}, \, \forall \,\Bar{\bm{y}}\in \mathcal{C}.A\}=(\mathcal{C}.A)^\perp.$\\ Observe that $\Bar{\bm{y}}\in (\mathcal{C}.A)^\perp \iff \langle \Bar{\bm{y}}, \Bar{\bm{x}}A \rangle =\bm{0}\,\, \forall \,\Bar{\bm{x}}\in \mathcal{C} \iff \Bar{\bm{y}}(\Bar{\bm{x}}A)^t=\bm{0}\,\, \forall\, \Bar{\bm{x}}\in \mathcal{C} \iff \langle \Bar{\bm{y}}A, \Bar{\bm{x}} \rangle =\bm{0} \, \forall \, \Bar{\bm{x}}\in \mathcal{C} \iff \Bar{\bm{y}}A\in \mathcal{C}^\perp \iff \Bar{\bm{y}}\in \mathcal{C}^\perp.A^{-1}.$ Hence, $(\mathcal{C}.A)^\perp=\mathcal{C}^\perp.A^{-1},$ so that $\mathcal{C}^\circ.A=\mathcal{C}^\perp.A^{-1}.A=\mathcal{C}^\perp.$ Therefore, $(\mathcal{C}^\circ)^\circ=(\mathcal{C}^\circ.A)^\perp=(\mathcal{C}^\perp)^\perp=\mathcal{C}.$ \hfill $\square$ \end{proof} \begin{notation} We will denote $A$ in Lemma \ref{Dual of dual} as $(\langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j\rangle_{\Bar{\bm{a}}})$ for convenience.
\end{notation} \begin{theorem}\label{Dual decomposition} For an $\Bar{\bm{a}}$-polycyclic code $\mathcal{C}$ over $\mathbb{F}_q^l,$ if the $\mathbb{F}_q$-decomposition with respect to $\mathcal{B}$ is $\mathcal{C}=\bigoplus_{i=1}^l{\bm{e}_i\,\mathcal{C}_i}$, then the $\mathbb{F}_q$-decomposition of $\mathcal{C}^\circ$ with respect to $\mathcal{B}$ is given by $\mathcal{C}^\circ=\bigoplus_{i=1}^l{\bm{e}_i\,\mathcal{C}^\circ_i}.$ \end{theorem} \begin{proof}Suppose $A$ is as in Lemma \ref{Dual of dual}. Then $A=\underset{i=1}{\overset{l}{\sum}}\bm{e}_iA_i,$ where $A_i\in M_n(\mathbb{F}_q).$ By Lemma \ref{Dual of dual}, $$\mathcal{C}^\circ=(\mathcal{C}.A)^\perp =\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\left(\mathcal{C}_i. A_i\right)^\perp =\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i^\circ.$$ \hfill $\square$ \end{proof} \begin{theorem} Let $\Bar{\bm{a}}\in(\mathbb{F}_q^l)^n$. Then, an $\mathbb{F}_q^l$-linear code is $\Bar{\bm{a}}$-polycyclic if and only if its annihilator dual is $\Bar{\bm{a}}$-polycyclic. \end{theorem} \begin{proof} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l,$ and let $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i.$ By Theorem \ref{Dual decomposition}, $\mathcal{C}^\circ=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i^\circ.$ By Theorem \ref{polycyclic_classification}, each $\mathcal{C}_i$ is $\bm{a}^{(i)}$-polycyclic. By [Proposition 3, \cite{tabue2018polycyclic}], it follows that each $\mathcal{C}_i^\circ$ is $\bm{a}^{(i)}$-polycyclic, as $\mathbb{F}_q$ is a chain ring. Again, by Theorem \ref{polycyclic_classification}, $\mathcal{C}^\circ$ is $\Bar{\bm{a}}$-polycyclic.
The converse follows from Lemma \ref{Dual of dual}.\hfill $\square$ \end{proof} \begin{corollary}\label{generatorpolynomialofC^0} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l,$ and let $\Bar{\bm{h}}(x)$ be a check polynomial of $\mathcal{C},$ so that $x^n-\Bar{\bm{a}}(x)=\Bar{\bm{g}}(x)\Bar{\bm{h}}(x)$. Then, $\mathcal{C}^\circ$ is an $\Bar{\bm{a}}$-polycyclic code and is generated by $\Bar{\bm{h}}(x).$ \end{corollary} \begin{proof} Suppose $\Bar{\bm{g}}(x)=\underset{i=1}{\overset{l}{ \sum}} \bm{e}_i\bm{g}^{(i)}(x)$ and $\Bar{\bm{h}}(x)=\underset{i=1}{\overset{l}{ \sum}} \bm{e}_i\bm{h}^{(i)}(x).$ By Theorem \ref{Generator polynomial of Polycyclic codes}, $\mathcal{C}_i=\langle \bm{g}^{(i)}(x) \rangle$ for each $1\le i\le l.$ Then, by [Lemma 2.4, \cite{qi2022polycyclic}], $\mathcal{C}^\circ_i=\langle \bm{h}^{(i)}(x) \rangle$ for each $1\le i\le l.$ So, by Theorem \ref{Dual decomposition}, the result follows. \hfill $\square$ \end{proof} \begin{corollary} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x)\rangle$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l,$ and let $\Bar{\bm{h}}(x)$ be a check polynomial of $\mathcal{C}.$ Then $\mathcal{C}$ is annihilator self-orthogonal if and only if $\Bar{\bm{h}}(x) $ divides $ \Bar{\bm{g}}(x)$ in $\mathbb{F}_q^l[x].$ \end{corollary} \begin{proof} By Corollary \ref{generatorpolynomialofC^0}, we have $\mathcal{C}^\circ=\langle \Bar{\bm{h}}(x)\rangle.$ Hence, $\mathcal{C}\subset\mathcal{C}^\circ \iff \langle \Bar{\bm{g}}(x) \rangle \subset \langle \Bar{\bm{h}}(x) \rangle \iff \Bar{\bm{g}}(x)\in \langle \Bar{\bm{h}}(x) \rangle \iff \Bar{\bm{h}}(x) \mid \Bar{\bm{g}}(x).
$ \hfill $\square$ \end{proof} \begin{definition} Let $\mathcal{C}$ be an $\mathbb{F}_q^l$-linear code. \begin{itemize} \item It is called {\it annihilator self-orthogonal} if $\mathcal{C} \subset \mathcal{C}^\circ.$ \item It is called {\it annihilator self-dual} if $\mathcal{C}=\mathcal{C}^\circ.$ \item It is called {\it annihilator dual-containing} if $\mathcal{C}^\circ \subset \mathcal{C}.$ \item It is called an {\it annihilator LCD code} if $\mathcal{C} \cap \mathcal{C}^\circ=\{\bm{0}\}.$ \end{itemize} \end{definition} From Theorem \ref{Dual decomposition}, the following corollary is immediate: \begin{corollary}\label{CiffC_i} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l.$ Then, \begin{enumerate} \item $\mathcal{C}$ is annihilator self-orthogonal $\Leftrightarrow$ each $\mathcal{C}_i$ is annihilator self-orthogonal over $\mathbb{F}_q,$ \item $\mathcal{C}$ is annihilator dual-containing $\Leftrightarrow$ each $\mathcal{C}_i$ is annihilator dual-containing over $\mathbb{F}_q,$ \item $\mathcal{C}$ is annihilator self-dual $\Leftrightarrow$ each $\mathcal{C}_i$ is annihilator self-dual over $\mathbb{F}_q,$ \item $\mathcal{C}$ is annihilator LCD $\Leftrightarrow$ each $\mathcal{C}_i$ is annihilator LCD over $\mathbb{F}_q.$ \end{enumerate} \end{corollary} \hfill $\square$\\ The following result is due to [Corollary 3, \cite{bajalan2024polycyclic}]: \begin{theorem}\label{Ann dual containing over F_q} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle $ be an $\Bar{\bm{a}}$-polycyclic code of length $n$ over $\mathbb{F}_q.$ Suppose $x^n-\Bar{\bm{a}}(x)=\Bar{\bm{h}}(x)\Bar{\bm{g}}(x).$ Then $\mathcal{C}^\circ \subseteq \mathcal{C}$ if and only if $\Bar{\bm{g}}(x)$ is a divisor of $\Bar{\bm{h}}(x).$ \end{theorem} Now, from Theorems 2.7, 2.8, 3.3 and 4.6 of \cite{qi2022polycyclic}, Theorem \ref{Ann dual containing over F_q}, and Corollary \ref{CiffC_i}, we get the following results: \begin{corollary} \label{decomposition} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic
code over $\mathbb{F}_q^l.$ Then, \begin{enumerate} \item $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle=\langle \underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x) \rangle$ is annihilator dual-containing over $\mathbb{F}_q^l \Leftrightarrow $ $\bm{g}^{(i)}(x)\mid\bm{h}^{(i)}(x)$ for each $1\leq i\leq l.$ \item $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle=\langle \underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x) \rangle$ is annihilator self-dual over $\mathbb{F}_q^l \Leftrightarrow x^n-\bm{a}^{(i)}(x)=a_i(\bm{g}^{(i)}(x))^2,$ where $ a_i\in U(\mathbb{F}_q),$ for each $1\leq i\leq l.$ \item $\mathcal{C}=\langle \Bar{\bm{g}}(x) \rangle=\langle \underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x) \rangle$ is annihilator LCD over $\mathbb{F}_q^l \Leftrightarrow $ $\gcd(\bm{g}^{(i)}(x),\bm{h}^{(i)}(x))=1,$ where $x^n-\bm{a}^{(i)}(x)=\bm{g}^{(i)}(x)\bm{h}^{(i)}(x),$ for each $1\leq i\leq l.$ \end{enumerate} \end{corollary} \begin{corollary} Let $\Bar{\bm{a}}(x)\in\mathbb{F}_q^l[x],$ where $\bm{a}_0 \in U(\mathbb{F}_q^l).$ Suppose, for each $1\leq i \leq l,$ that $x^n-\bm{a}^{(i)}(x)=\underset{j=1}{\overset{s_i}{\prod}}f_{i,j}^{m_{i,j}}(x)$ is the factorization into powers of distinct irreducible polynomials $f_{i, j}(x)\in \mathbb{F}_q[x].$ Then, \begin{enumerate} \item The number of annihilator self-orthogonal $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$ is $$\underset{i=1}{\overset{l}{\prod}}\underset{j=1}{\overset{s_i}{\prod}}\Big(m_{i, j}-\Big\lceil \frac{m_{i, j}}{2} \Big\rceil +1\Big).$$ \item The number of annihilator self-dual $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$ is one if $m_{i, j}$ is even for each $(i, j)$ and zero otherwise.
\item The number of $\Bar{\bm{a}}$-polycyclic annihilator LCD codes over $\mathbb{F}_q^l$ is $2^{\underset{i=1}{\overset{l}{\sum}} s_i}.$ \end{enumerate} \end{corollary} \begin{proof} From Corollary \ref{CiffC_i}, the number of annihilator self-orthogonal $\Bar{\bm{a}}$-polycyclic codes over $\mathbb{F}_q^l$ is the product of the numbers of annihilator self-orthogonal $\bm{a}^{(i)}$-polycyclic codes over $\mathbb{F}_q$ for $1\leq i \leq l,$ and the latter can be computed by [Theorem 2.9, \cite{qi2022polycyclic}]. Similarly, the proofs of (2) and (3) follow from [Theorem 2.9, \cite{qi2022polycyclic}]. \hfill $\square$ \end{proof} \begin{remark} Note that the results proved above for the product $\mathbb{F}_q^l$ also hold for any $\mathbb{F}_q$-algebra isomorphic to the product ring, due to Lemma \ref{Existence of basis consisting of orthogonal idempotents}. But product rings enable us to use the readily available standard basis for implementation in MAGMA and SageMath. \end{remark} \section{Gray Maps}\label{sec5} There are several Gray maps that we can define to get codes over $\mathbb{F}_q$ from codes over $\mathbb{F}_q^l.$ Suppose $\{\bm{e}_1,\bm{e}_2,\dots,\bm{e}_l\}$ is an $\mathbb{F}_q$-basis of $\mathbb{F}_q^l$ consisting of orthogonal idempotents with $\underset{i=1}{\overset{l}{\sum}}\bm{e}_i=\bm{1},$ and define a map $\Phi:(\mathbb{F}_q^l)^n\rightarrow\mathbb{F}_q^{nl}$ by $$\Phi(\bm{c}_1,\dots,\bm{c}_n)=(c_{1,1},c_{1,2},\dots,c_{1,l},\dots,c_{n,1},c_{n,2},\dots,c_{n,l}),$$ where $\bm{c}_i=\underset{j=1}{\overset{l}{\sum}}\bm{e}_jc_{i,j}.$ Note that $\Phi$ is a linear transformation from $(\mathbb{F}_q^{l})^n$ to $\mathbb{F}_q^{nl}.$ For $\Bar{\bm{b}} \in (\mathbb{F}_q^{l})^n,$ the Gray weight of $\Bar{\bm{b}}$ is defined as $w_G(\Bar{\bm{b}})=w_H(\Phi(\Bar{\bm{b}})),$ where $w_H(\Phi(\Bar{\bm{b}}))$ is the Hamming weight of $\Phi(\Bar{\bm{b}})$.
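In implementation terms, $\Phi$ simply concatenates the $l$ coordinate vectors of the $n$ symbols. A minimal Python sketch (a toy illustration, not the paper's MAGMA/SageMath code; a symbol of $\mathbb{F}_q^l$ is stored as an $l$-tuple of its coordinates with respect to $\bm{e}_1,\dots,\bm{e}_l$):

```python
# Toy sketch of the Gray map Phi: (F_q^l)^n -> F_q^{nl}.
# A symbol of F_q^l is stored as an l-tuple of coordinates w.r.t. e_1,...,e_l.

def gray_map(codeword):
    """Concatenate the l coordinates of each of the n symbols."""
    return [coord for symbol in codeword for coord in symbol]

def hamming_weight(vec):
    """Number of non-zero entries of a vector."""
    return sum(1 for v in vec if v != 0)

# Example with q = 3, l = 2, n = 3: by definition, the Gray weight of b
# equals the Hamming weight of Phi(b).
b = [(1, 0), (0, 0), (2, 2)]
print(gray_map(b))                  # [1, 0, 0, 0, 2, 2]
print(hamming_weight(gray_map(b)))  # 3
```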
The Gray distance $d_G(\mathcal{C}):=\min\{d_G(\Bar{\bm{b}},\Bar{\bm{c}})\,|\,\Bar{\bm{b}}\neq\Bar{\bm{c}}\in \mathcal{C}\}$ is clearly the minimum Gray weight of a non-zero codeword. \begin{theorem} Let $\mathcal{C}$ be a linear code over $\mathbb{F}_q^l$ of length $n,$ and let $\mathcal{C}=\underset{i=1}{\overset{l}{\bigoplus}}\bm{e}_i\mathcal{C}_i$ be the $\mathbb{F}_q$-decomposition with respect to $\mathcal{B}$. Then, $\Phi$ is a distance-preserving map from $(\mathbb{F}_q^{l})^n$ to $\mathbb{F}_q^{nl}$. Moreover, $\Phi(\mathcal{C})$ is an $\mathbb{F}_q$-linear code with parameters $[nl,nl-\underset{i=1}{\overset{l}{\sum}}\deg \bm{g}^{(i)}(x)],$ where $\bm{g}^{(i)}(x)$ is the generator polynomial of $\mathcal{C}_i.$ \end{theorem} \begin{proof} The proof follows directly from the definition of the Gray map. \hfill $\square$ \end{proof} \begin{definition}[Polycyclic shift] Let $\Bar{\bm{a}}=(\bm{a}_1,\dots,\bm{a}_n)\in(\mathbb{F}_q^l)^n,$ where $\bm{a}_1$ is a unit in $\mathbb{F}_q^l.$ Then, for $\Bar{\bm{c}}=(\bm{c}_1,\dots,\bm{c}_n)\in (\mathbb{F}_q^l)^n,$ the polycyclic shift $\tau_{\Bar{\bm{a}}}$ is defined as $ \tau_{\Bar{\bm{a}}}(\Bar{\bm{c}})=(\bm{0},\bm{c}_1,\dots,\bm{c}_{n-1})+\bm{c}_n(\bm{a}_1,\dots,\bm{a}_n).$ \end{definition} \begin{definition}[Sequential shift]Let $\Bar{\bm{a}}=(\bm{a}_1,\dots,\bm{a}_n)\in(\mathbb{F}_q^l)^n,$ where $\bm{a}_1$ is a unit in $\mathbb{F}_q^l.$ Then, for $\Bar{\bm{c}}=(\bm{c}_1,\dots,\bm{c}_n)\in (\mathbb{F}_q^l)^n,$ the sequential shift $\widetilde{\tau}_{\Bar{\bm{a}}}$ is defined as $ \widetilde{\tau}_{\Bar{\bm{a}}}(\Bar{\bm{c}})=(\bm{c}_2,\dots,\bm{c}_{n},\Bar{\bm{c}}\cdot\Bar{\bm{a}}).$ \end{definition} Define $ \mathcal{T}^l_{\Bar{\bm{a}}},\, \widetilde{\mathcal{T}}^l_{\Bar{\bm{a}}}:\mathbb{F}_q^{nl}\rightarrow\mathbb{F}_q^{nl}$ by \begin{multline*}
\mathcal{T}^l_{\Bar{\bm{a}}}(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\dots,r_{n,l})=(0,\dots,0,r_{1,1},r_{1,2},\dots,r_{1,l},\dots,\\r_{n-1,1},\dots,r_{n-1,l})+(r_{n,1}\,a_{1,1},\dots,r_{n,l}\,a_{1,l},r_{n,1}\,a_{2,1},\dots,r_{n,l}\,a_{2,l},\dots,r_{n,1}\,a_{n,1},\dots,r_{n,l}\,a_{n,l}) \end{multline*} and \begin{align*} \widetilde{\mathcal{T}}^{l}_{\Bar{\bm{a}}}(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\dots,r_{n,l})=&(r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\\&\dots,r_{n,l},\,\bm{r}^1\cdot\bm{a}^{(1)},\bm{r}^2\cdot\bm{a}^{(2)},\dots,\bm{r}^l\cdot\bm{a}^{(l)}), \end{align*} where $\bm{r}^j:=(r_{1,j},r_{2,j},\dots,r_{n,j})\in\mathbb{F}_q^n.$ Let $\Bar{\bm{a}}=(\bm{a}_1,\bm{a}_2,\dots,\bm{a}_{n})=\bm{e}_1*\bm{a}^{(1)}+\bm{e}_2*\bm{a}^{(2)}+ \dots +\bm{e}_l*\bm{a}^{(l)} \in (\mathbb{F}_q^l)^n,$ where $\bm{a}_1$ is a unit in $\mathbb{F}_q^l.$ \begin{definition}[$\Bar{\bm{a}}$-quasi-cyclic code] A linear code $C$ of length $nl$ over $\mathbb{F}_q$ is said to be an $\Bar{\bm{a}}$-quasi-cyclic code of index $l$ over $\mathbb{F}_q$ if $(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\dots,r_{n,l})\in C$ implies $\mathcal{T}^l_{\Bar{\bm{a}}}(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\dots,r_{n,l})\in C.$ \end{definition} \begin{definition}[$\Bar{\bm{a}}$-quasi-sequential code] A linear code $C$ of length $nl$ over $\mathbb{F}_q$ is said to be an $\Bar{\bm{a}}$-quasi-sequential code of index $l$ over $\mathbb{F}_q$ if $(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\\ \dots,r_{n,l})\in C$ implies $\widetilde{\mathcal{T}}^{l}_{\Bar{\bm{a}}}(r_{1,1},r_{1,2},\dots,r_{1,l},r_{2,1},r_{2,2},\dots,r_{2,l},\dots,r_{n,1},\dots,r_{n,l})\in C.$ \end{definition} \begin{lemma} Let $\Bar{\bm{a}}=(\bm{a}_1,\bm{a}_2,\dots,\bm{a}_{n})=\bm{e}_1*\bm{a}^{(1)}+\bm{e}_2*\bm{a}^{(2)}+ \dots +\bm{e}_l*\bm{a}^{(l)} \in (\mathbb{F}_q^l)^n$ be such that $\bm{a}_1$ is a unit in $\mathbb{F}_q^l.$
Denote by $\tau_{\Bar{\bm{a}}}$ the $\Bar{\bm{a}}$-polycyclic shift. Then, $\Phi\circ\tau_{\Bar{\bm{a}}}=\mathcal{T}^l_{\Bar{\bm{a}}}\circ\Phi.$ \end{lemma} \begin{proof} Let $\Bar{\bm{c}}=(\bm{c}_1,\dots,\bm{c}_n)\in\mathcal{C}$ and let $\Bar{\bm{a}}$ be as given in the lemma. Consider \begin{align*} \tau_{\Bar{\bm{a}}}(\Bar{\bm{c}})&=(\bm{0},\bm{c}_1,\dots,\bm{c}_{n-1})+\bm{c}_n(\bm{a}_1,\dots,\bm{a}_n)\\ &=\left(\bm{0},\underset{i=1}{\overset{l}{\sum}}c_{1,i}\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}c_{n-1,i}\bm{e}_i\right)+\underset{i=1}{\overset{l}{\sum}}c_{n,i}\bm{e}_i\left(\underset{i=1}{\overset{l}{\sum}}a_{1,i}\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}a_{n,i}\bm{e}_i\right)\\ &=\left(\bm{0},\underset{i=1}{\overset{l}{\sum}}c_{1,i}\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}c_{n-1,i}\bm{e}_i\right)+\left(\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{1,i}\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{n,i}\bm{e}_i\right)\\ &=\left(\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{1,i}\bm{e}_i,\underset{i=1}{\overset{l}{\sum}}c_{1,i}\bm{e}_i+\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{2,i}\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}c_{n-1,i}\bm{e}_i+\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{n,i}\bm{e}_i\right)\\ &=\left(\underset{i=1}{\overset{l}{\sum}}c_{n,i}a_{1,i}\bm{e}_i,\underset{i=1}{\overset{l}{\sum}}(c_{1,i}+c_{n,i}a_{2,i})\bm{e}_i,\dots,\underset{i=1}{\overset{l}{\sum}}(c_{n-1,i}+c_{n,i}a_{n,i})\bm{e}_i\right) \end{align*} Furthermore,\\ \begin{multline*}\label{Phitau} \Phi(\tau_{\Bar{\bm{a}}}(\Bar{\bm{c}}))=(c_{n,1}a_{1,1},c_{n,2}a_{1,2},\dots,c_{n,l}a_{1,l},c_{1,1}+c_{n,1}a_{2,1},c_{1,2}+c_{n,2}a_{2,2},\dots,c_{1,l}+c_{n,l}a_{2,l},\\\dots,c_{n-1,1}+c_{n,1}a_{n,1},\dots,c_{n-1,l}+c_{n,l}a_{n,l}) \end{multline*} Now consider \begin{multline*} \Phi(\Bar{\bm{c}})=(c_{1,1},\dots,c_{1,l},c_{2,1},\dots,c_{2,l},\dots,c_{n,1},\dots,c_{n,l}) \textnormal{ and } \\
\mathcal{T}^l_{\Bar{\bm{a}}}(\Phi(\Bar{\bm{c}}))=(0,\dots,0,c_{1,1},\dots,c_{1,l},\dots,c_{n-1,1},\dots,c_{n-1,l})+(c_{n,1}a_{1,1},\dots,c_{n,l}a_{1,l},c_{n,1}a_{2,1},\\\dots,c_{n,l}a_{2,l},\dots,c_{n,1}a_{n,1},\dots,c_{n,l}a_{n,l}). \end{multline*} Comparing the two expressions, we get $\mathcal{T}^l_{\Bar{\bm{a}}}(\Phi(\Bar{\bm{c}}))=\Phi(\tau_{\Bar{\bm{a}}}(\Bar{\bm{c}})).$ \hfill $\square$ \end{proof} Hence, the following theorem is immediate. \begin{theorem} Let $\mathcal{C}$ be an $\mathbb{F}_q^l$-linear code of length $n$. Then, $\mathcal{C}$ is $\Bar{\bm{a}}$-polycyclic over $\mathbb{F}_q^l$ if and only if $\Phi(\mathcal{C})$ is an $\Bar{\bm{a}}$-quasi-cyclic code of index $l$ over $\mathbb{F}_q.$ \end{theorem} \begin{lemma} Let $\Bar{\bm{a}}=(\bm{a}_1,\bm{a}_2,\dots,\bm{a}_{n})=\bm{e}_1*\bm{a}^{(1)}+\bm{e}_2*\bm{a}^{(2)}+ \dots +\bm{e}_l*\bm{a}^{(l)} \in (\mathbb{F}_q^l)^n$ be such that $\bm{a}_1$ is a unit in $\mathbb{F}_q^l.$ Denote by $\widetilde{\tau}_{\Bar{\bm{a}}}$ the $\Bar{\bm{a}}$-sequential shift. Then, $\Phi\circ\widetilde{\tau}_{\Bar{\bm{a}}}=\widetilde{\mathcal{T}}^{l}_{\Bar{\bm{a}}}\circ\Phi.$ \end{lemma} \begin{proof} Let $\Bar{\bm{c}}=(\bm{c}_1,\bm{c}_2,\dots,\bm{c}_n)\in(\mathbb{F}_q^l)^n \text{ and } \Bar{\bm{a}}=(\bm{a}_1,\bm{a}_2,\dots,\bm{a}_n)\in(\mathbb{F}_q^l)^n.$ Then consider \begin{align*} \widetilde{\tau}_{\Bar{\bm{a}}}(\Bar{\bm{c}})&=(\bm{c}_2,\dots,\bm{c}_n,\Bar{\bm{c}}\cdot\Bar{\bm{a}})\\ &=\left(\underset{j=1}{\overset{l}{\sum}}c_{2,j}\bm{e}_j,\dots,\underset{j=1}{\overset{l}{\sum}}c_{n,j}\bm{e}_j,\underset{j=1}{\overset{l}{\sum}}(c_{1,j}a_{1,j}+c_{2,j}a_{2,j}+\dots + c_{n,j}a_{n,j})\bm{e}_j\right). \end{align*} Then,\\ \begin{align*} \Phi(\widetilde{\tau}_{\Bar{\bm{a}}}(\Bar{\bm{c}})) = & \big( c_{2,1},\dots,c_{2,l},c_{3,1},\dots,c_{3,l},\dots,c_{n,1},\dots,c_{n,l},c_{1,1}a_{1,1}+c_{2,1}a_{2,1}\\ & +\dots+c_{n,1}a_{n,1},c_{1,2}a_{1,2}+\dots+c_{n,2}a_{n,2},\dots,c_{1,l}a_{1,l}+c_{2,l}a_{2,l}+\dots+c_{n,l}a_{n,l}\big), \end{align*} which is clearly equal to $\widetilde{\mathcal{T}}^{l}_{\Bar{\bm{a}}}(\Phi(\Bar{\bm{c}})).$ \hfill $\square$ \end{proof} Hence, the following theorem is immediate: \begin{theorem}
Let $\mathcal{C}$ be an $\mathbb{F}_q^l$-linear code of length $n$. Then, $\mathcal{C}$ is an $\Bar{\bm{a}}$-sequential code over $\mathbb{F}_q^l$ if and only if $\Phi(\mathcal{C})$ is an $\Bar{\bm{a}}$-quasi-sequential code of index $l$ over $\mathbb{F}_q.$ \end{theorem} For the Gray map $\Phi$ defined above, we know the algebraic structure of the corresponding Gray image of $\mathcal{C}.$ However, to improve the distance, we define a generalized Gray map obtained by composing $\Phi$ with a suitable automorphism of $\mathbb{F}_q^{nl}$; this yields a code with the same length and dimension.\\ Hence, we consider the Gray map of the form: \begin{equation}\label{GenGraymap} \Psi(c_{1,1},c_{1,2},\dots,c_{1,l},\dots,c_{n,1},c_{n,2},\dots,c_{n,l})= ((c_{1,1},c_{1,2},\dots,c_{1,l})M,\dots, (c_{n,1},c_{n,2},\dots,c_{n,l})M), \end{equation} where $M\in GL(l, \mathbb{F}_q).$\\ For Table \ref{table1}, $l=2$ and $M=\begin{pmatrix} 1 & 1\\ 0 &1 \\ \end{pmatrix}.$ In Tables \ref{table2} and \ref{table3}, $l$ takes the values $3$ and $2$ respectively, and $M$ is as mentioned in each case. All codes presented in Tables \ref{table1} and \ref{table2} (except possibly item 31 of Table \ref{table1}) are optimal as per the database \cite{codetables} and new in the sense that they are inequivalent to the codes available in MAGMA \cite{cannon2011handbook}, whereas all codes in Table \ref{table3} are MDS.\\ \begin{lemma} \label{annpreserving} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x)=\underset{i=1}{\overset{l}{\sum}}\bm{e}_i\bm{g}^{(i)}(x)\rangle$ be an $\Bar{\bm{a}}$-polycyclic code of length $n$ over $\mathbb{F}_q^l$, and let $\Psi$ be the map defined in Equation \eqref{GenGraymap}. Then the parameters of $\Psi(\mathcal{C})$ are $[nl,nl-\underset{i=1}{\overset{l}{\sum}}\deg {\bm{g}}^{(i)}(x),\geq d]_q,$ where $d$ is the Hamming distance of $\mathcal{C}$.
\end{lemma} \begin{proof} The length and dimension of $\Psi(\mathcal{C})$ are clear from the definition of $\Psi.$ The distance remains greater than or equal to the Hamming distance of $\mathcal{C},$ since $M$ is an invertible matrix. \hfill $\square$ \end{proof} Observe that the Gram matrix in Lemma \ref{Dual of dual} with respect to the annihilator product is given by\\ $$A=( \langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j \rangle_{\Bar{\bm{a}}})=\begin{pmatrix} (1,1,\dots,1)&\bm{0}&\bm{0}&\dots&\bm{0}\\ \bm{0}&\bm{0}&\bm{0}&\dots&\bm{a}_0\\ \bm{0}&\bm{0}&\bm{0}&\dots&\langle \Bar{\bm{\epsilon}}_3,\Bar{\bm{\epsilon}}_n \rangle_{\Bar{\bm{a}}}\\ \vdots&\vdots&\vdots&\iddots&\vdots\\ \bm{0}&\bm{a}_0&\langle \Bar{\bm{\epsilon}}_n,\Bar{\bm{\epsilon}}_3 \rangle_{\Bar{\bm{a}}}&\dots&\langle \Bar{\bm{\epsilon}}_n,\Bar{\bm{\epsilon}}_n \rangle_{\Bar{\bm{a}}} \end{pmatrix}. $$Set $\langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j \rangle_{\Bar{\bm{a}}}:=(\epsilon_{i,j}^{(1)},\epsilon_{i,j}^{(2)},\dots,\epsilon_{i,j}^{(l)})\in \mathbb{F}_q^l.$ For every such $A,$ we can define $\Bar{A}\in GL(nl,\mathbb{F}_q)$ as follows:\\ $$\Bar{A}=\begin{pmatrix} \begin{pmatrix} 1&0&\dots&0\\ 0&1&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&1 \end{pmatrix}&\begin{pmatrix} 0&0&\dots&0\\ 0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&0 \end{pmatrix}&\dots&\begin{pmatrix} 0&0&\dots&0\\ 0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&0 \end{pmatrix}\\ \begin{pmatrix} 0&0&\dots&0\\ 0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&0 \end{pmatrix}&\begin{pmatrix} 0&0&\dots&0\\ 0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&0 \end{pmatrix}&\dots&\begin{pmatrix} a_0&0&\dots&0\\ 0&a_0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&a_0 \end{pmatrix}\\ \vdots&\vdots&\iddots&\vdots\\ \begin{pmatrix} 0&0&\dots&0\\ 0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&0 \end{pmatrix}&\begin{pmatrix} a_0&0&\dots&0\\ 0&a_0&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\
0&0&\dots&a_0 \end{pmatrix}&\dots&\begin{pmatrix} \epsilon_{n,n}^{(1)}&0&\dots&0\\ 0&\epsilon_{n,n}^{(2)}&\dots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\dots&\epsilon_{n,n}^{(l)} \end{pmatrix} \end{pmatrix},$$ that is, $\Bar{A}=(A_{i,j}),$ where $A_{i,j}$ is the $l\times l$ diagonal matrix $\textnormal{diag} (\langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j\rangle_{\Bar{\bm{a}}}).$ Note that if $\Bar{\bm{a}}=(\bm{a}_0,\bm{a}_1,\dots,\bm{a}_{n-1})\in(\mathbb{F}_q^l)^n$ is such that $\bm{a}_i=(b_i,b_i,\dots,b_i)\in \mathbb{F}_q^l$ for each $0\leq i \leq n-1,$ then every entry of the matrix $A=(\langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j\rangle_{\Bar{\bm{a}}})$ is again an $l$-tuple over $\mathbb{F}_q$ with equal components, that is, $\langle \Bar{\bm{\epsilon}}_i,\Bar{\bm{\epsilon}}_j \rangle_{\Bar{\bm{a}}}=(\epsilon_{i,j}^{(1)},\epsilon_{i,j}^{(2)},\dots, \epsilon_{i,j}^{(l)})=(\epsilon_{i,j}, \epsilon_{i,j},\dots, \epsilon_{i,j})\in \mathbb{F}_q^l.$ With these notations, we prove the following lemmas: \begin{lemma}\label{Psi(C.A)=Psi(C).Bar{A}} Let $\mathcal{C}$ be an $\Bar{\bm{a}}$-polycyclic code over $\mathbb{F}_q^l$ of length $n,$ and let $\Psi$ denote the map in Equation \eqref{GenGraymap}.
If each $\bm{a}_i=(b_i, b_i, \dots, b_i)\in \mathbb{F}_q^l,$ where $b_i\in\mathbb{F}_q,$ then $$\Psi(\mathcal{C}.A)=\Psi(\mathcal{C}).\Bar{A}.$$ \end{lemma} \begin{proof} For any $\Bar{\bm{c}}=(\bm{c}_1,\bm{c}_2,\dots,\bm{c}_n)\in\mathcal{C},$ we get \begin{align*} \Psi\big(\Bar{\bm{c}}A\big)=&\Big((\bm{c}_1(1,1,\dots,1))M\textbf{,}\,\,(\bm{c}_n\bm{a}_0)M\textbf{,}\,\, (\bm{c}_{n-1}\bm{a}_0)M+(\bm{c}_n\langle \Bar{\bm{\epsilon}}_n,\Bar{\bm{\epsilon}}_3 \rangle_{\Bar{\bm{a}}})M\textbf{,} \,\, \dots\,\,\textbf{,} (\bm{c}_2\bm{a}_0)M+(\bm{c}_3\\&\langle \Bar{\bm{\epsilon}}_3,\Bar{\bm{\epsilon}}_n \rangle_{\Bar{\bm{a}}})M+\dots+(\bm{c}_n\langle \Bar{\bm{\epsilon}}_n,\Bar{\bm{\epsilon}}_n \rangle_{\Bar{\bm{a}}})M \Big)\\=&\Big(\bm{c}_1M,\,b_0\bm{c}_nM,\, b_0\bm{c}_{n-1}M+\epsilon_{n,3}\,\bm{c}_n M, \dots, b_0\bm{c}_2M+\epsilon_{3,n}\,\bm{c}_3 M+\dots+\epsilon_{n,n}\,\bm{c}_nM\Big)\\=&(\bm{c}_1M,\,\bm{c}_2M,\dots,\bm{c}_nM)\Bar{A}\\=& \Psi(\Bar{\bm{c}})\Bar{A}. \end{align*} Hence, the result follows. \hfill $\square$ \end{proof} \begin{lemma}\label{same distance of Psi(C) and Psi(C).Bar(A)} If $\Bar{\bm{a}}(x)=\bm{a}_0\in U(\mathbb{F}_q^l)$ is a constant polynomial, then $\min\{w_H(\Psi(\Bar{\bm{c}})):\Bar{\bm{c}}\in\mathcal{C}\setminus\{{\Bar{\bm{0}}}\}\}=\min\{w_H(\Psi(\Bar{\bm{c}})\Bar{A}):\Bar{\bm{c}}\in \mathcal{C}\setminus\{{\Bar{\bm{0}}}\}\}.$ Consequently, the parameters of $\Psi(\mathcal{C})$ are the same as those of $\Psi(\mathcal{C}).\Bar{A}.$ \end{lemma} \begin{proof} The proof follows from the fact that $\Bar{A}$ is invertible with exactly one non-zero entry in each row and each column.\hfill $\square$ \end{proof} \begin{theorem}[Euclidean CSS construction]\cite{calderbank1998quantum}\label{EuclcideanCSSconstruction} Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be codes over $\mathbb{F}_q$ with parameters $[n,k_1,d_1]_q$ and $[n,k_2,d_2]_q$ respectively, such that $\mathcal{C}_2^{\perp}\subseteq \mathcal{C}_1.$ Then there exists a quantum error-correcting code $\mathcal{C}$ with parameters $[[n,k_1+k_2-n,d]]_q,$ where $d=\min\{w_H(c):
c\in(\mathcal{C}_1\setminus \mathcal{C}_2^\perp)\cup(\mathcal{C}_2\setminus \mathcal{C}_1^\perp)\}.$ \end{theorem} Now, we have the following theorem. \begin{theorem}\label{Quantumcons} Let $\mathcal{C}=\langle \Bar{\bm{g}}(x)\rangle$ be an annihilator dual-containing $\Bar{\bm{a}}$-polycyclic code of length $n$ over $\mathbb{F}_q^l,$ where $\Bar{\bm{a}}(x)=\bm{a}_0=(b_0,b_0,\dots,b_0)\in\mathbb{F}_q^l,$ let $\Psi$ be defined as in Equation \eqref{GenGraymap}, and let $MM^T=\lambda\, I$ for some $\lambda\in U(\mathbb{F}_q).$ Then there exists a quantum code with parameters $[[nl,nl-2\underset{i=1}{\overset{l}{\sum}}\deg {\bm{g}}^{(i)}(x),\geq d]]_q,$ where $d$ is the Hamming distance of $\Psi(\mathcal{C}).$ \end{theorem} \begin{proof} From Lemma \ref{Psi(C.A)=Psi(C).Bar{A}}, we have $\Psi(\mathcal{C}.A)=\Psi(\mathcal{C}).\Bar{A}$. Moreover, $\Psi$ is a Euclidean-duality-preserving map, which follows from [Theorem 5.4, \cite{akanksha2025thetadeltathetamathbfacycliccodes}]. Then, $$\Psi(\mathcal{C}^\circ)=\Psi((\mathcal{C}.A)^\perp)=(\Psi(\mathcal{C}.A))^\perp=(\Psi(\mathcal{C}).\Bar{A})^\perp.$$ Hence, $\mathcal{C}^\circ\subset \mathcal{C}\implies \Psi(\mathcal{C}^\circ)\subset\Psi(\mathcal{C})\implies (\Psi(\mathcal{C}).\Bar{A})^\perp\subset \Psi(\mathcal{C}).$ Note that, since $\Bar{A}$ is invertible, Lemma \ref{same distance of Psi(C) and Psi(C).Bar(A)} gives that $\Psi(\mathcal{C})$ and $\Psi(\mathcal{C}).\Bar{A}$ have the same parameters.
Hence, by Theorem \ref{EuclcideanCSSconstruction}, there exists a quantum code with parameters $[[nl, nl-2\underset{i=1}{\overset{l}{\sum}}\deg {\bm{g}}^{(i)}(x),\geq d]]_q,$ where $d$ is the Hamming distance of $\Psi(\mathcal{C}).$ \hfill $\square$ \end{proof} An instance of this theorem can be seen in the following example: \begin{example} For $q=5$ and $l=2,$ consider the ring $\mathbb{F}_5^2.$ Let $x^n-\Bar{\bm{a}}(x)=(1,1)x^5-(1,1).$ Let $\mathcal{C}=\bm{e}_1\mathcal{C}_1\oplus\bm{e}_2\mathcal{C}_2$ be a $(1,1)$-polycyclic code of length $5$ over $\mathbb{F}_5^2.$ Then, by Theorem \ref{polycyclic_classification}, each $\mathcal{C}_i$ is a cyclic code of length $5$ over $\mathbb{F}_5$. Now, $$x^5-1=(x^2+3x+1)(x^3 + 2x^2 + 3x + 4)=(x+4)(x^4 + x^3 + x^2 + x + 1).$$ Let $\mathcal{C}_1=\langle x^2+3x+1 \rangle$ and $\mathcal{C}_2=\langle x+4\rangle$ over $\mathbb{F}_5,$ and let $M=\begin{pmatrix} 1&4\\ 4&4\\ \end{pmatrix}.$ Then $\Psi(\mathcal{C})$ has parameters $[10,7,3].$ Moreover, since $x^2+3x+1$ divides $x^3 + 2x^2 + 3x + 4$ and $x+4$ divides $x^4 + x^3 + x^2 + x + 1,$ both $\mathcal{C}_1$ and $\mathcal{C}_2$ are dual-containing, and consequently $\mathcal{C}$ is dual-containing by Corollary \ref{decomposition}. Hence, by Theorem \ref{Quantumcons}, there exists a quantum code with parameters $[[10,4,\geq 3]]_5.$ \end{example} Table \ref{table4} consists of examples constructed using this construction.
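The two factorizations of $x^5-1$ over $\mathbb{F}_5$ used in this example, and the divisibilities witnessing dual-containment, can be re-checked outside MAGMA with a few lines of modular polynomial arithmetic; a minimal Python sketch (polynomials stored as coefficient lists, constant term first):

```python
# Verify the factorizations of x^5 - 1 over F_5 used in the example.
# A polynomial over F_p is a list of coefficients, constant term first.

def polymul(f, g, p):
    """Product of two polynomials, coefficients reduced mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

p = 5
x5_minus_1 = [4, 0, 0, 0, 0, 1]   # x^5 - 1 = x^5 + 4 over F_5

g1, h1 = [1, 3, 1], [4, 3, 2, 1]  # x^2+3x+1 and x^3+2x^2+3x+4
g2, h2 = [4, 1], [1, 1, 1, 1, 1]  # x+4      and x^4+x^3+x^2+x+1

assert polymul(g1, h1, p) == x5_minus_1
assert polymul(g2, h2, p) == x5_minus_1
# Dual-containment: g_i divides h_i; concretely,
# h1 = g1*(x+4) and h2 = g2*(x^3+2x^2+3x+4) over F_5.
assert polymul(g1, [4, 1], p) == h1
assert polymul(g2, [4, 3, 2, 1], p) == h2
print("all factorization checks passed")
```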
\subsection{Examples for Tables} \textbf{Example 1.} For $q=3$ and $l=3,$ consider the ring $\mathbb{F}_3^3.$ Let $x^n-\Bar{\bm{a}}(x)=(1,1,1)x^4-(2,2,2)x^3-(1,1,1)x-(1,1,1)\in \mathbb{F}_3^3[x].$ Let $\mathcal{C}=\bm{e}_1\mathcal{C}_1\oplus\bm{e}_2\mathcal{C}_2\oplus\bm{e}_3\mathcal{C}_3$ be a $((1,1,1),(1,1,1),(2,2,2))$-polycyclic code of length $4$ over $\mathbb{F}_3^3.$ Then, by Theorem \ref{polycyclic_classification}, $\mathcal{C}_i$ is a $(1,1,2)$-polycyclic code of length $4$ over $\mathbb{F}_3$ for each $1\leq i \leq 3.$ Now,\\ $$x^4-2x^3-x-1=\bm{g}^{(1)}(x)\times\bm{h}^{(1)}(x)=\bm{g}^{(2)}(x)\times\bm{h}^{(2)}(x)=\bm{g}^{(3)}(x)\times\bm{h}^{(3)}(x)$$ in $\mathbb{F}_3[x].$ If $\mathcal{C}_1:=\langle\bm{g}^{(1)}(x)=x^3+2\rangle,$ $\mathcal{C}_2:=\langle\bm{g}^{(2)}(x)=x^2+2\rangle$, $\mathcal{C}_3:=\langle\bm{g}^{(3)}(x)=x^3+2x^2+2x+1\rangle$ and $M=\begin{pmatrix} 2&1&1\\ 1&2&1 \\ 0&1&1\\ \end{pmatrix},$ then $\Psi(\mathcal{C})$ is an optimal $[12, 4, 6]_3$ linear code over $\mathbb{F}_3.$ This is example $5$ of Table \ref{table2}. All the above computations were done using MAGMA \cite{cannon2011handbook}.\\ \textbf{Example 2.} For $q=7$ and $l=2,$ consider the ring $\mathbb{F}_7^2.$ Let $x^n-\Bar{\bm{a}}(x)=(1,1)x^3-(1,1)x-(1,1)\in \mathbb{F}_7^2[x].$ Let $\mathcal{C}=\bm{e}_1\mathcal{C}_1\oplus\bm{e}_2\mathcal{C}_2$ be a $((1,1),(1,1))$-polycyclic code of length $3$ over $\mathbb{F}_7^2.$ Then, by Theorem \ref{polycyclic_classification}, $\mathcal{C}_i$ is a $(1,1)$-polycyclic code of length $3$ over $\mathbb{F}_7$ for each $1\leq i \leq 2.$ Now,\\ $$x^3+6x+6=\bm{g}^{(1)}(x)\times\bm{h}^{(1)}(x)=\bm{g}^{(2)}(x)\times\bm{h}^{(2)}(x)$$ in $\mathbb{F}_7[x].$ If $\mathcal{C}_1:=\langle\bm{g}^{(1)}(x)=x^2+5x+3\rangle,$ $\mathcal{C}_2:=\langle\bm{g}^{(2)}(x)=x+2\rangle$ and $M=\begin{pmatrix} 5&2\\ 2&2\\ \end{pmatrix},$ then $\Psi(\mathcal{C})$ is an MDS $[6,3,4]_7$ linear code over $\mathbb{F}_7.$ This is example $4$ of Table \ref{table3}.
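The displayed factorizations of the two moduli (taking the base field $\mathbb{F}_3$ for Example 1, matching $q=3$, and $\mathbb{F}_7$ for Example 2) can likewise be cross-checked independently of MAGMA; a small self-contained Python sketch, with polynomials stored as coefficient lists (constant term first):

```python
# Re-check the factorizations of the moduli from Examples 1 and 2.
# A polynomial over F_p is a list of coefficients, constant term first.

def polymul(f, g, p):
    """Product of two polynomials, coefficients reduced mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

# Example 1 (q = 3): x^4 - 2x^3 - x - 1 = x^4 + x^3 + 2x + 2 over F_3.
m1 = [2, 2, 0, 1, 1]
assert polymul([2, 0, 0, 1], [1, 1], 3) == m1    # (x^3+2)(x+1)
assert polymul([2, 0, 1], [1, 1, 1], 3) == m1    # (x^2+2)(x^2+x+1)
assert polymul([1, 2, 2, 1], [2, 1], 3) == m1    # (x^3+2x^2+2x+1)(x+2)

# Example 2 (q = 7): x^3 - x - 1 = x^3 + 6x + 6 over F_7.
m2 = [6, 6, 0, 1]
assert polymul([3, 5, 1], [2, 1], 7) == m2       # (x^2+5x+3)(x+2)
print("all example factorizations verified")
```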
All the above computations were done using MAGMA \cite{cannon2011handbook}. \begin{landscape} \begin{table}[!ht] \begin{adjustbox}{width=.9\textwidth} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline S. No.& $q$ & $n$ & $\Bar{\bm{g}}(x)=x^n-\Bar{\bm{a}}(x)$ & $\bm{g}^{(1)}(x)$ & $\bm{g}^{(2)}(x)$ & Parameters of $\Psi(C)$ & Remarks \\ \hline 1 & 2 & 6 & $(1,1)x^6-(1,1)x^5-(1,1)x^2-(1,1)$ & $x^2+x+1$ & $x^4+x^2+x+1$ & $[12,6,4]$ & Optimal \\ \hline 2 &2 & 6&$(1,1)x^6 + (1,1)x^3 + (1,1)x + (1,1)$& $x+1$ & $x^3 + x^2 + 1$ & $[12,8,3]$ & Optimal \\ \hline 3 &2 & 7&$(1,1)x^7-(1,1)$& $x + 1$ & $x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$ & $[14,7,4]$ & Optimal and Cyclic \\ \hline 4 &2 & 7&$(1,1)x^7-(1,1)$& $x^3 + x^2 + 1$ & $x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$ & $[14,5,6]$ & Optimal and Quasicyclic \\ \hline 5 &2 & 8&$(1,1)x^8 + (1,1)x^5 + (1,1)x^3 + (1,1)$& $x + 1$ & $x^6 + x^5 + x + 1$ & $[16,9,4]$ & Optimal and Quasicyclic \\ \hline 6 &2 & 8&$(1,1)x^8 + (1,1)x^4 + (1,1)x^2 + (1,1)$& $x^5 + x^4 + x^3 + 1$ & $x^8 + x^4 + x^2 + 1$ & $[16, 3, 8]$ & Optimal and Quasicyclic \\ \hline 7 &2 & 8&$(1,1)x^8 - (1,1)x^4 - (1,1)x^2 - (1,1)$& $x + 1$ & $x^5 + x^4 + x^3 + 1$ & $[16,10,4]$ & Optimal and Quasicyclic \\ \hline 8 &2 & 8&$(1,1)x^8+(1,1)x^6+(1,1)x^2+(1,1)$& $x^5+x^3+x^2+1$ & $x^8+x^6+x^2+1$ & $[16,3,8]$ & Optimal and Quasicyclic \\ \hline 9 &3 & 6 & $(1,1)x^6 + (2,2)x^2 + (2,2)x + (2,2)$ & $x + 1$ &$x^3 + 2x + 1$&$[12,8,3]$ & Optimal, LCD \\ \hline 10 &3 & 6 & $(1,1)x^6+(2,2)x^2+(2,2)x+(2,2)$& $x+1$ & $x^4+x^3+2x^2+1$ & $[12,7,4]$ & Optimal and Quasicyclic\\ \hline 11 &3 & 7 & $(1,1)x^7 + (1,1)x^4 + (2,2)x + (2,2)$&$x + 1$&$x^3 + x^2 + 2$&$[14,10,3]$&Optimal \\ \hline 12 &3&7&$(1,1)x^7-(2,2)x^4-(1,1)x-(1,1)$&$x+2$&$x^4+2x^2+2x+1$&$[14,9,4]$&Optimal and Quasicyclic\\ \hline 13 &3&8&$(1,1)x^8-(2,2)x^4-(1,1)x^3-(1,1)$&$x+2$&$x^3+2x+2$&$[16,12,3]$&Optimal, LCD\\ \hline 14 &3&8&$(1,1)x^8 + (1,1)x^3 + (2,2)x + (2,2)$&$x+1$&$x^5+2x^4+2x^3+x^2+1$&$[16,10,4]$&Optimal \\ \hline 15 
&3&8&$(1,1)x^8-(2,2)x^3-(1,1)x-(1,1)$&$x^5+2x^4+2x^3+x^2+1$&$x^8+x^3+2x+2$&$[16,3,10]$&Optimal and Quasicyclic\\ \hline 16 &3&8&$(1,1)x^8-(2,2)x^3-(1,1)x-(1,1)$&$x^6+x^5+2x^3+2x^2+x+2$&$x^8+x^3+2x+2$&$[16,2,12]$&Optimal and Quasicyclic \\ \hline 17 &4 & 4 & $(1,1)x^4-(u,u)x^3-(u,u)x^2-(1,1)$ & $x+1$ &$x^2+x+u^2$&$[8,5,3]$ & Optimal \\ \hline 18 &4& 4&$(1,1)x^4+(u,u)x^3+(1,1)$&$x+u^2$&$x^3+x^2+u^2x+u$&$[8,4,4]$&Optimal and A-MDS \\ \hline 19 &4& 5& $(1,1)x^5+(u,u)x^2+(u,u)x+(1,1)$&$x+1$&$x^2+x+u$&$[10,7,3]$&Optimal \\ \hline 20 &4&5&$(1,1)x^5+(u,u)x^2+(u,u)x+(1,1)$&$x^2+x+u$&$x^5+ux^2+ux+1$&$[10,3,6]$&Optimal and Quasicyclic \\ \hline 21 &4&5&$(1,1)x^5+(1,1)$&$x+1$&$x^3+u^2x^2+u^2x+1$&$[10,6,4]$&Optimal, Quasicyclic, and A-MDS \\ \hline 22 &4&6&$(1,1)x^6+(u,u)x^2+(u,u)x+(1,1)$&$x+1$&$x^4+u^2x^3+x^2+x+u$&$[12,7,4]$&Optimal and Quasicyclic \\ \hline 23 &4&7&$(1,1)x^7+(1,1)x^4+(u,u)x+(1,1)$&$x+u^2$&$x^4+x^3+ux^2+x+1$&$[14,9,4]$&Optimal \\ \hline 24 &5 & 4& $(1,1)x^4-(1,1)$& $x+1$ & $x^3+4x^2+x+4$ & $[8,4,4]$ & Optimal, LCD and A-MDS \\ \hline 25 &5 & 4&$(1,1)x^4-(1,1)x^3-(1,1)x-(1,1)$& $x+3$ & $x^2 +4x + 4$ & $[8,5,3]$ & Optimal and A-MDS \\ \hline 26 &5 & 5&$(1,1)x^5-(1,1)$& $x-1$ & $(x-1)^3$ & $[10,6,4]$ & Optimal and A-MDS \\ \hline 27 &5 & 5 &$(1,1)x^5-(1,1)$ & $x-1$ & $(x-1)^2$ & $[10,7,3]$ & Optimal and A-MDS \\ \hline 28 &5 & 5 & $(1,1)x^5+(1,1)x^4+(1,1)x^3+(2,2)x^2+(4,4)$ & $x^3 + 3x^2 + 3x + 1$ & $x^5 + x^4 + x^3 + 2x^2 + 4$ & $[10,2,8]$ & Optimal and A-MDS \\ \hline 29 &7 & 5 & $(1,1)x^5-(2,2)x^4-(1,1)x-(1,1)$ & $x+2$ & $x^3+6x^2+5x+6$ & $[10,6,4]$ & Optimal, LCD and A-MDS \\ \hline 30 &7 & 5 & $(1,1)x^5+(5,5)x^3+(3,3)x+(6,6)$ & $x^3+4x^2+6x+6$ & $x^5+5x^3+3x+6$ & $[10,2,8]$ & Optimal, LCD, Quasicyclic and A-MDS \\ \hline 31 &11 & 4 & $(1,1)x^4+(5,5)x^2+(6,6)x+(2,2)$ & $x+3$ & $x^2+5x+10$ & $[8,5,3]$ & LCD, A-MDS (No data available) \\ \hline \multicolumn{7}{l}{\normalsize In Table A-MDS stands for Almost MDS, that is, $k+d=n$.} \end{tabular} \end{adjustbox} 
\caption{$l=2$} \Large\label{table1} \end{table} \newpage \centering \begin{table}[!ht] \begin{adjustbox}{width=0.8\textwidth} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline S. No.& $q$ & $n$ &$M$& $\Bar{\bm{g}}(x)=x^n-\Bar{\bm{a}}(x)$ & $\bm{g}^{(1)}(x)$ & $\bm{g}^{(2)}(x)$ &$\bm{g}^{(3)}(x)$& Parameters $\Psi(C)$ & Remarks \\ \hline 1 &2&5 &$\begin{pmatrix} 1&1&1\\ 0&1&1 \\ 1&0&1 \end{pmatrix}$ & $(1,1,1)x^5+(1,1,1)$ & $x+1$& $x^4+x^3+x^2+x+1$&$x^4+x^3+x^2+x+1$&$[15,6,6]$&Optimal, LCD and Quasicyclic\\ \hline 2 &2&6&$\begin{pmatrix} 1&1&1\\ 0&1&1 \\ 1&0&1 \end{pmatrix}$ & $(1,1,1)x^6+(1,1,1)$ & $x^4+x^3+x+1$& $x+1$&$x+1$&$[18,12,4]$&Optimal and Quasicyclic\\ \hline 3 &2&7 &$\begin{pmatrix} 1&1&1\\ 0&1&1 \\ 1&0&1 \end{pmatrix}$ & $(1,1,1)x^7+(1,1,1)$ & $x^4+x^3+x^2+1$& $x+1$&$x+1$&$[21,15,4]$&Optimal and Quasicyclic\\ \hline 4 &3&5 &$\begin{pmatrix} 1&1&1\\ 0&2&1 \\ 0&1&1 \end{pmatrix}$ & $(1,1,1)x^5-(1,1,1)$ & $x-1$& $x-1$&$x^4+x^3+x^2+x+1$&$[15,9,4]$&Optimal \\ \hline 5 &3&4 &$\begin{pmatrix} 2&1&1\\ 1&2&1 \\ 0&1&1 \end{pmatrix}$ & $(1,1,1)x^4-(2,2,2)x^3-(1,1,1)x-(1,1,1)$ & $x^3+2$& $x^2+2$&$x^3+2x^2+2x+1$&$[12,4,6]$&Optimal \\ \hline 6 &3&4 &$\begin{pmatrix} 2&1&1\\ 1&2&1 \\ 0&1&1 \end{pmatrix}$ & $(1,1,1)x^4 + (1,1,1)x^3 + (2,2,2)x + (2,2,2)$ & $x^3 + 2x^2 + 2x + 1$& $x+2$&$x+1$&$[12,7,4]$&Optimal \\ \hline 7 &3&4 &$\begin{pmatrix} 2&1&1\\ 1&2&1 \\ 0&1&1 \end{pmatrix}$ & $(1,1,1)x^4+(1,1,1)x^3+(2,2,2)x+(2,2,2)$ & $x^2+x+1$& $x+1$&$x+1$&$[12,8,3]$&Optimal \\ \hline 8 &4&3 &$\begin{pmatrix} u^2&0&u\\ u&u^2&1 \\ 1&1&u \end{pmatrix}$ & $(1,1,1)x^3+(1,1,1)$ & $x+u^2$& $x+1$&$x+u^2$&$[9,6,3]$&Optimal, LCD \\ \hline 9 &5&3 &$\begin{pmatrix} 1&0&1\\ 0&1&0 \\ 2&2&1 \end{pmatrix}$ & $(1,1,1)x^3+(1,1,1)x^2+(4,4,4)$ & $x+2$& $x^2+4x+2$&$1$&$[9,6,3]$&Optimal, LCD and A-MDS \\ \hline 10 &7&3 &$\begin{pmatrix} 1&0&1\\ 0&1&0 \\ 2&2&1 \end{pmatrix}$ & $(1,1,1)x^3+(4,4,4)x+(4,4,4)$ & $x+3$& $x^2+4x+6$&$1$&$[9,6,3]$&Optimal, LCD and A-MDS \\ \hline 11 &7&3 &$\begin{pmatrix} 
1&0&1\\ 1&1&0 \\ 0&1&1 \end{pmatrix}$ & $(1,1,1)x^3-(6,6,6)x^2-(5,5,5)x-(9,9,9)$ & $x^2+3x+1$& $x+5$&$x+5$&$[9,5,4]$&Optimal and A-MDS \\ \hline 12 &7&3 &$\begin{pmatrix} 1&5&1\\ 1&1&0 \\ 2&1&1 \end{pmatrix}$ & $(1,1,1)x^3-(6,6,6)x^2-(5,5,5)x-(9,9,9)$ & $x^3+x^2+2x+5$& $x^2+3x+1$&$x+5$&$[9,3,6]$&Optimal, LCD and A-MDS \\ \hline \multicolumn{7}{l}{\normalsize In Table A-MDS stands for Almost MDS, that is, $k+d=n$.} \end{tabular} \end{adjustbox} \caption{$l=3$} \Large\label{table2} \end{table} \end{landscape} Apart from these codes, we are able to obtain some MDS codes. {\Large{ \begin{table}[!ht] \centering \begin{adjustbox}{width=.9\textwidth} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline S.No.&$q$ & $n$ & $M$& $\Bar{\bm{g}}(x)=x^n-\Bar{\bm{a}}(x)$ & $\bm{g}^{(1)}(x)$ & $\bm{g}^{(2)}(x)$ & Parameters $\Psi(C)$ & Remark \\ \hline 1 &4 &3 &$\begin{pmatrix} u&u^2\\ u&u \\ \end{pmatrix}$&$(1,1)x^3+(1,1)$&$x+u$&$x^2+ux+u^2$&$[6,3,4]$&LCD, MDS\\ \hline 2 &5 &3 &$\begin{pmatrix} 3&2\\ 2&2 \\ \end{pmatrix}$&$(1,1)x^3+(4,4)$&$x^2+x+1$&$x+4$&$[6,3,4]$&LCD, MDS\\ \hline 3 &7 &3 &$\begin{pmatrix} 5&2\\ 2&2 \\ \end{pmatrix}$&$(1,1)x^3+(6,6)$&$x+6$&$x+3$&$[6,4,3]$&MDS\\ \hline 4 &7 &3 &$\begin{pmatrix} 5&2\\ 2&2 \\ \end{pmatrix}$&$(1,1)x^3+(6,6)x+(6,6)$&$x^2+5x+3$&$x+2$&$[6,3,4]$&MDS\\ \hline 5 &7 &3 &$\begin{pmatrix} 5&2\\ 2&2 \\ \end{pmatrix}$&$(1,1)x^3+(6,6)$&$x^2+4x+2$&$x^2+2x+4$&$[6,2,5]$&MDS\\ \hline 6 &8 &2 &$\begin{pmatrix} u&u^3\\ 1&u \\ \end{pmatrix}$&$(1,1)x^2+(u^2,u^2)x+(u,u)$&$x+u^3$&$x+u^5$&$[4,2,3]$&LCD, MDS\\ \hline 7 &8 &3 &$\begin{pmatrix} u&u^3\\ u&u \\ \end{pmatrix}$&$(1,1)x^3+(u^2,u^2)x+(u,u)$&$x+u^4$&$x^2+u^4x+u^4$&$[6,3,4]$&LCD, MDS\\ \hline 8 &9 &4 &$\begin{pmatrix} u&u\\ u^2&1 \\ \end{pmatrix}$&$(1,1)x^4 + (2,2)$&$x^2+u^3x + u^2$&$x^3 + u^2x^2+2x + u^6$&$[8, 3, 6]$& LCD, MDS\\ \hline 9 &9 &4 &$\begin{pmatrix} u&u\\ u^2&1 \\ \end{pmatrix}$&$(1,1)x^4 + (2,2)$&$x + u^2$&$x^2 + u^5x + u^6$&$[8, 5, 4]$&LCD, MDS\\ \hline \end{tabular} \end{adjustbox} 
\caption{$l=2$} \label{table3} \end{table}}} \label{Graymap} {\Large{ \begin{table}[!ht] \centering \begin{adjustbox}{width=.9\textwidth} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline $q$ & $n$ & $M$& $\Bar{\bm{g}}(x)=x^n-\Bar{\bm{a}}(x)$ & $\bm{g}^{(1)}(x)$ & $\bm{g}^{(2)}(x)$ & Parameters $\Psi(C)$&Remarks about $\Psi(C)$ & Parameters of Quantum Code \\ \hline 3 &9 &$\begin{pmatrix} 2&1\\ 2&2 \\ \end{pmatrix}$&$(1,1)x^9+(2,2)$&$x+2$&$x^4+2x^3+2x+1$&$[18,13,3]$&Almost optimal&$[[18,8,\geq 3]]$\\ \hline 3 &9 &$\begin{pmatrix} 1&0\\ 0&1 \\ \end{pmatrix}$&$(1,1)x^9+(2,2)$&$x+2$&$x+2$&$[18,16,2]$&Optimal&$[[18,14,2]]$\\ \hline 5 &5 &$\begin{pmatrix} 1&4\\ 4&4 \\ \end{pmatrix}$&$(1,1)x^5-(1,1)$&$x^2+3x+1$&$x+4$&$[10,7,3]$&Optimal&$[[10,4,\geq 3]]$\\ \hline 7 &7 &$\begin{pmatrix} 2&4\\ 3&2 \\ \end{pmatrix}$&$(1,1)x^7-(1,1)$&$x^2+5x+1$&$x^2+5x+1$&$[14,10,3]$&Almost Optimal&$[[14,6,\geq 3]]$\\ \hline 7 &7 &$\begin{pmatrix} 2&4\\ 3&2 \\ \end{pmatrix}$&$(1,1)x^7-(1,1)$&$x^2+5x+1$&$x^3+4x^2+3x+6$&$[14,9,4]$&Almost Optimal&$[[14,4,\geq 4]]$\\ \hline \end{tabular} \end{adjustbox} \caption{Quantum Codes} \label{table4} \end{table}}} \section{Conclusion}\label{sec6} To study polycyclic (in particular, cyclic) codes, many authors utilized $\mathbb{F}_q$-algebras that have an orthogonal basis of idempotents having sum 1 and they could extract interesting codes, for instance, Qi (\cite{qi2022polycyclic}) constructed almost MDS binary codes, Islam et al. (\cite{islam2021cyclic}) produced many optimal and MDS codes, and Bhardwaj et al. (\cite{swati_raka}) considered a broader class of rings generalizing the above two and studied constacyclic codes over them. Since the algebras considered by them are each isomorphic to $\mathbb{F}_q^l$ for some $l\in\mathbb{N},$ we have studied polycyclic codes over the product ring $\mathbb{F}_q^l$. In fact, in these articles, the product ring structure of the base ring was utilized to extract good codes. 
For instance, in \cite{islam2021cyclic}, they picked a base ring which is isomorphic to $\mathbb{F}_q[u]/\langle u^l-1\rangle$ so that $u^l-1$ splits over $\mathbb{F}_q$; in particular, this requires $l\leq q$ and $l\mid q-1.$ Note that the product ring $\mathbb{F}_q^l$ cannot be realized as $\mathbb{F}_q[u]/\langle f(u)\rangle$ if $l>q.$ Our study of polycyclic codes over $\mathbb{F}_q^l$ does not assume any restriction on $q$ and $l$. For example, binary codes cannot be obtained as the Gray image by the construction in \cite{islam2021cyclic}. Moreover, for $l=2,$ linear codes (for instance, Items 3, 4, 21 in Table 1, and 1 in Table 3) over fields with characteristic $2$ cannot be obtained by the above-mentioned constructions. In fact, our general setup enables us to construct a large number of interesting codes, since $q$ and $l$ are independent of each other. As future work, one can try to find a characterization of Gray maps that can produce good codes; in other words, one can try to find a class of $M$ in Section \ref{Graymap} which yields codes with good parameters. \subsection*{Declarations} {\bf Ethical Approval and Consent to participate:} Not applicable.\\ {\bf Consent for publication:} Not applicable.\\ {\bf Availability for supporting data:} Not applicable.\\ {\bf Competing interests:} The authors declare that they have no competing interests.\\ {\bf Funding:} The first author expresses gratitude to MHRD, India, for financial support in the form of PMRF (PMRF ID 1403187) at the Indian Institute of Technology, Delhi.\\ {\bf Authors' contributions:} Akanksha and Ritumoni Sarma contributed equally to this work. Both authors read and approved the final manuscript.\\ {\bf Acknowledgments:} We thank the FIST lab (project number: SR/FST/MS-1/2019/45) for the computation facility. \bibliographystyle{abbrv} \bibliography{Genproductring} \nocite{huffman2010fundamentals} \nocite{ling2004coding} \nocite{cannon2006handbook} \end{document}
2412.19223v1
http://arxiv.org/abs/2412.19223v1
Functional identities involving additive maps on division rings
\documentclass{amsart} \newcommand{\la}{\lambda} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\nn}{\nonumber} \newcommand{\bee}{\begin{eqnarray*}} \newcommand{\eee}{\end{eqnarray*}} \newcommand{\lb}{\label} \newcommand{\nii}{\noindent} \newcommand{\ii}{\indent} \newcommand{\0}{${\bf 0}$} \newcommand{\tab}{\hspace*{2em}} \usepackage{cases} \usepackage{latexsym} \usepackage{amsmath} \usepackage[arrow,matrix]{xy} \usepackage{stmaryrd} \usepackage{amsfonts} \usepackage{amsmath,amssymb,amscd,bbm,amsthm,mathrsfs,dsfont} \usepackage{fancyhdr} \usepackage{amsxtra,ifthen} \usepackage{verbatim} \renewcommand{\labelenumi}{(\alph{enumi})} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{maintheorem}{Theorem} \newtheorem{THM}{Theorem} \newtheorem{Cor}[THM]{Corollary} \renewcommand{\theTHM}{\Alph{THM}} \newtheorem*{theorem*}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem*{proposition*}{Proposition} \newtheorem{conjecture}{Conjecture} \newtheorem*{problem}{Problem} \theoremstyle{definition} \newtheorem{remark}{Remark} \newtheorem*{example*}{Example} \newtheorem*{notation}{Notation} \newtheorem{fact}{Fact} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prove}[thm]{Proof} \newtheorem{prop}[thm]{Proposition} \newtheorem{question}[thm]{Question} \theoremstyle{definition} \newtheorem{case}{\bf Case} \newtheorem{subcase}{\bf Subcase} \newtheorem{case-x}{\bf Case} \newtheorem{subcase-x1}{\bf Subcase} \newtheorem{subcase-x2}{\bf Subcase} \newtheorem{subcase-x3}{\bf Subcase} \newtheorem{subcase-x4}{\bf Subcase} \newtheorem{case-y}{\bf Case} \newtheorem{subcase-y1}{\bf Subcase} \newtheorem{subcase-y2}{\bf Subcase} \newtheorem{subcase-y3}{\bf Subcase} \newtheorem{subcase-y4}{\bf Subcase} \newtheorem{subcase-y5}{\bf Subcase} \newtheorem{subcase-y6}{\bf Subcase} 
\newtheorem{subcase-y7}{\bf Subcase} \newtheorem{subcase-y8}{\bf Subcase} \newtheorem{deff}[thm]{Definition} \newtheorem{fac}[thm]{Fact} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem{example}[thm]{Example} \newtheorem*{theproof-2}{\bf Proof of Proposition \ref{propos}} \newtheorem*{theproof-1}{\bf Proof of Proposition \ref{propM}} \usepackage{amsmath} \newcommand{\To}{\longrightarrow} \newcommand{\h}{\mathcal{H}} \newcommand{\s}{\mathcal{S}} \newcommand{\A}{\mathcal{A}} \newcommand{\J}{\mathcal{J}} \newcommand{\M}{\mathcal{M}} \newcommand{\W}{\mathcal{W}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\BOP}{\mathbf{B}} \newcommand{\BH}{\mathbf{B}(\mathcal{H})} \newcommand{\KH}{\mathcal{K}(\mathcal{H})} \newcommand{\Real}{\mathbb{R}} \newcommand{\Int}{\mathbb{Z}} \newcommand{\nat}{\mathbb{N}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\Field}{\mathbb{F}} \newcommand{\RPlus}{\Real^{+}} \newcommand{\Polar}{\mathcal{P}_{\s}} \newcommand{\Poly}{\mathcal{P}(E)} \newcommand{\EssD}{\mathcal{D}} \newcommand{\Lom}{\mathcal{L}} \newcommand{\States}{\mathcal{T}} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\seq}[1]{\left<#1\right>} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\essnorm}[1]{\norm{#1}_{\ess}} \pagenumbering{arabic} \renewcommand{\labelenumi}{(\alph{enumi})} \begin{document} \title[Functional identities involving additive maps on division rings] {Functional identities involving additive maps on division rings} \author{Lovepreet Singh} \address{Lovepreet Singh, Department of Mathematics, Indian Institute of Technology Patna, Bihar, 801106, India} \email{[email protected]} \author{S. K. Tiwari} \address{S. K. 
Tiwari, Department of Mathematics, Indian Institute of Technology Patna, Bihar, 801106, India} \email{[email protected] \& [email protected]} \thanks{{\it Mathematics Subject Classification:} 16K40, 16R50, 16R60 } \thanks{{\it Key Words and Phrases.} Division ring, Functional identity, Generalized polynomial identity, GPI-algebra, PI-ring. } \thanks{ } \dedicatory{} \commby{} \begin{abstract} Let $g$ be an additive map on a division ring $D$, and let $G_{1}(Y)$, $G_{2}(Y) \neq 0$, and $H(Y)$ be generalized polynomials in $D\{Y\}$. In this paper, we study the functional identity $G_{1}(y)g(y)G_{2}(y) = H(y)$. By applying this result and its implications, we prove that if $D$ is a non-commutative division ring with characteristic different from $2$, then the only additive maps $g_{1},g_{2}: D \rightarrow D$ satisfying the identity $g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1})= 0$, where $m,n$ are positive integers with $(m,n) \neq (1,1)$, are $g_{1} = g_{2} = 0$. \end{abstract} \maketitle \section{Introduction} Throughout, $R^{*}$ denotes the set of all units in a ring $R$, and all rings and algebras are assumed to be associative with unity. In $1821$, Cauchy's functional equation $g(y_{1} + y_{2}) = g(y_{1})+ g(y_{2})$ paved the way for the study of inverse identities in division rings. For an additive map $g: \mathbb{R} \rightarrow \mathbb{R}$ satisfying $g(y) = y^{2}g(y^{-1})$, Halperin \cite{h1} in $1963$ posed a question regarding the continuity of the function $g$. In $1964$, Kurepa \cite{k1} proved that if two non-zero additive maps $g_{1},g_{2}: \mathbb{R} \rightarrow \mathbb{R}$ satisfy $g_{1}(y) = P(y)g_{2}(y^{-1})$, where $P: \mathbb{R} \rightarrow \mathbb{R}$ is continuous and $P(1) = 1$, then $P(y) = y^{2}$, $g_{1}(y) + g_{2}(y) = 2yg_{2}(1)$, and the map $y \mapsto g_{1}(y) - y g_{1}(1)$ is a derivation.
Further, in $1968$, Nishiyama and Horinouchi \cite{n1} characterized the maps satisfying $g(y^{m_{1}}) = ay^{m_{2}}g(y^{m_{3}})$ for all $y \in \mathbb{R}$, where $m_{1},m_{2},m_{3}$ are integers. In $1970$--$1971$, Kannappan and Kurepa studied some identities involving inverses in \cite{pl1} and \cite{pl2}. We study the relation of derivations with functional identities. By a derivation $\delta: R \rightarrow R$ on a ring $R$, we mean an additive map with $\delta(y_{1}y_{2}) = \delta(y_{1})y_{2} + y_{1} \delta (y_{2})$ for all $y_{1},y_{2} \in R$. Every derivation $\delta : R \rightarrow R$ satisfies $\delta (y) =-y \delta (y^{-1})y$ for all $y \in R^{*}$; indeed, $0=\delta(1)=\delta(yy^{-1})=\delta(y)y^{-1}+y\delta(y^{-1})$, whence $\delta(y)=-y\delta(y^{-1})y$. In $1995$, Bre\v{s}ar \cite{bres1} showed that if $g_{1}(y)y = yg_{2}(y)$ holds for all $y \in D$, where $g_{1},g_{2}: D \rightarrow D$ are additive maps, then $g_{1}(y) = ya + \eta (y)$ and $g_{2}(y) = ay + \eta (y)$ for all $y \in D$, where $a \in D$ is a fixed element and $\eta$ is an additive map from $D$ to $Z(D)$. In $2018$, to show how rational identities can be extended to functional identities, Catalano \cite{lc1} proved the following. \begin{thm}(\cite[Theorem $1$]{lc1}) \label{catmain} Suppose $g_{1},g_{2}$ are additive maps on a division ring $D$ having characteristic different from $2$ such that \begin{equation} \label{ceq1} g_{1}(y)y^{-1} + yg_{2}(y^{-1})=0, \end{equation} for all $y \in D^{*}$. Then $g_{1}(y) = ya + \delta (y)$ and $g_{2}(y) = -ay + \delta (y)$ for all $y \in D$, where $a \in D$ is a fixed element and $\delta : D \rightarrow D$ is a derivation. \end{thm} If $\operatorname{char} D =2$, then the maps $g_{1}(y)=y$ and $g_{2}(y) = y$ satisfy \eqref{ceq1} but are not of the form given in Theorem \ref{catmain}, so the condition $\operatorname{char} D \neq 2$ cannot be removed. In \cite{lc1}, Catalano also proved Theorem \ref{catmain} for the matrix case and asked whether the prerequisite $\operatorname{char} D \neq 3$ in \cite[Theorem $4$]{lc1} can be removed.
In $2020$, Arga\c c and the authors of \cite{narg1} extended Catalano's results and provided an affirmative response to this question. More precisely, \begin{thm} (\cite[Theorem $1.1$]{narg1}) \label{catmain2} Suppose $g_{1},g_{2}$ are additive maps on $R$, where $R$ is either a matrix ring $M_{t}(D)$, $t \geq 1$, over a division ring $D$ having characteristic different from $2$, or a non-commutative division ring, such that \begin{equation*} g_{1}(y)y^{-1} + yg_{2}(y^{-1})=0, \end{equation*} for all $y \in R^{*}$. Then $g_{1}(y) =ay + \delta (y)$ and $g_{2}(y) = - ya +\delta (y)$ for all $y \in R$, where $\delta : R \rightarrow R$ is a derivation and $a \in R$ is a fixed element. \end{thm} For a counter-example to Theorem \ref{catmain2} in the case when $R$ is commutative and $\operatorname{char} D = 2$, we refer the reader to \cite[Example]{narg1}. In $2024$, Catalano and Merch\'an \cite{lc2} studied the identity $g_{1}(y) = -y^{n}g_{2}(y^{-1})$ on a division ring $D$, where $g_{1},g_{2}$ are additive maps on $D$ and $n$ is a positive integer. This identity was completely solved by Ng \cite{ng1} over fields. Recently, Lee and Lin \cite{tk1} proved the following. \begin{thm} (\cite[Theorem $5.1$]{tk1}) \label{leemain} Suppose that $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ having characteristic different from $2$ and $n \neq 2$ is a positive integer such that \begin{equation} \label{leeeq} g_{1}(y) = y^{n}g_{2}(y^{-1}), \end{equation} for all $y \in D^{*}$. Then $g_{1}=g_{2}=0$. \end{thm} For the case $n=2$ in equation \eqref{leeeq}, Dar and Jing \cite{na1} proved that such additive maps $g_{1},g_{2}: D \rightarrow D$ are of the form $g_{1}(y) = ya$ and $g_{2}(y) = ya$ for all $y \in D$, where $a \in D$ is a fixed element; indeed, $y^{2}g_{2}(y^{-1}) = y^{2}y^{-1}a = ya = g_{1}(y)$. If we take $D= \mathbb{Z}_{2}$ and $g_{1}(y) = g_{2}(y) = y$ in Theorem \ref{leemain}, then clearly $g_{1}$ and $g_{2}$ satisfy \eqref{leeeq}.
Thus, Theorem \ref{leemain} does not hold when $D$ is commutative or $\operatorname{char} D =2$. In view of the results mentioned above, it is natural to ask for a characterization of the additive maps $g_{1},g_{2}: D \rightarrow D$ satisfying the identity \begin{equation} \label{maineq} g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1}) =0, \end{equation} where $n,m$ are positive integers. To do this, we first study a few functional identities involving maps defined by generalized polynomials in the indeterminate $Y$ with coefficients in a division ring $D$ with center $Z(D)$. Let $D\{Y\}$ denote the free product of the $Z(D)$-algebra $D$ and the polynomial algebra $Z(D)[Y]$. More precisely, elements of $D\{Y\}$ are sums of finitely many monomials, each of some degree $s$ and of the form \begin{equation*} q_{1}Yq_{2}Yq_{3} \cdots q_{s}Yq_{s+1}, \end{equation*} for some $q_{i} \in D$. We refer the reader to \cite{C2} or \cite{M1}. \begin{deff} We say that a map $g: D \rightarrow D$ is an elementary operator if there exist finitely many non-zero $p_{i},q_{i} \in D$ such that $g(y) = \sum_{i} p_{i}yq_{i}$ for all $y \in D$. \end{deff} We will prove the following in this paper. \begin{thm} \label{mainB} Suppose $g$ is an additive map on a division ring $D$ and $G_1(Y), G_2(Y),\\ H(Y) \in D\{Y\}$ such that $G_1(y)g(y)G_2(y) = H(y)$ for all $y \in D$. If $G_1(Y)$ and $G_{2}(Y)$ are non-zero, then either $g$ is an elementary operator or $D$ is finite-dimensional over $Z(D)$. \end{thm} Applying these characterizations and their implications, we solve equation \eqref{maineq}. This extends Catalano's result from 2018 and Lee and Lin's result from 2024.
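The restriction $(m,n)\neq(1,1)$ in what follows is necessary: for $m=n=1$, Theorem \ref{catmain} provides the non-zero solutions $g_{1}(y)=ya+\delta(y)$ and $g_{2}(y)=-ay+\delta(y)$ of \eqref{maineq}. This is a direct check using the relation $\delta(y)y^{-1}=-y\delta(y^{-1})$ noted earlier:
\begin{align*}
g_{1}(y)y^{-1} + y g_{2}(y^{-1})
&= \bigl(ya+\delta(y)\bigr)y^{-1} + y\bigl(-ay^{-1}+\delta(y^{-1})\bigr)\\
&= yay^{-1} + \delta(y)y^{-1} - yay^{-1} + y\delta(y^{-1})\\
&= -y\delta(y^{-1}) + y\delta(y^{-1}) = 0.
\end{align*}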
Particularly, \begin{thm}\label{main2} Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ having characteristic different from $2$ and $n,m$ are positive integers with $(n,m) \neq (1,1)$ such that \begin{equation} \label{maineq1} g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0, \end{equation} for all $y \in D^{*}$. Then $g_{1} = g_{2} = 0$. \end{thm} Note that for $n = 1$ and any positive integer $m \geq 2$, if additive maps $g_{1},g_{2} : D \rightarrow D$ satisfy $g_{1}(y)y^{-m} + yg_{2}(y^{-1}) = 0$ for all $y \in D^{*}$, then by Theorem \ref{main2} we have $g_{1} = g_{2}= 0$. As an application of this result, we have the following corollary. \begin{cor} \label{dcor} Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ having characteristic different from $2$, $m\geq 2$ is a positive integer and $ a \in D^{*}, l \in D$ such that $g_{1}(y_{1}) y_{2}^{-m} + y_{1} g_{2}(y_{2}) = l$ for all $y_{1}, y_{2} \in D$ with $y_{1}y_{2}=a=y_{2}y_{1}$. Then $g_{1}(y) = 0$ and $g_{2}(y)= ya^{-1}l$ for all $y \in D$. \end{cor} \begin{proof} Let $y_{1} \in D^{*}$ and set $y_{2} = y_{1}^{-1}a$, so that $y_{1}y_{2} = a$. Therefore, we get $g_{1}(y_{1})y_{1}^{-m}a^{m} + y_{1}g_{2}(y_{1}^{-1}a) = l$ and so \begin{equation} \label{eqc2} g_{1}(y_{1})y_{1}^{-m}a^{m} + y_{1}(g_{2}(y_{1}^{-1}a)-y_{1}^{-1}l) = 0. \end{equation} Multiplying both sides of equation \eqref{eqc2} by $a^{-m}$ from the right, we get \begin{equation} g_{1}(y_{1})y_{1}^{-m} + y_{1}(g_{2}(y_{1}^{-1}a)-y_{1}^{-1}l)a^{-m} = 0. \end{equation} Thus, by applying Theorem \ref{main2}, we get \begin{center} $g_{1}(y_{1})= 0$ \hspace{0.2cm} and \hspace{0.2cm} $g_{2}(y_{1}) = y_{1}a^{-1}l$, \end{center} for all $y_{1} \in D$.
\end{proof} In Corollary \ref{dcor}, if either $D$ is commutative, or $\operatorname{char} D = 2$, or $D$ is not a division ring, then it cannot be guaranteed that $g_{1}$ and $g_{2}$ are of the form mentioned in Corollary \ref{dcor}. Here we give some examples. \begin{example} Consider the ring $R = \mathbb{Z}_{2}$. Define additive maps $g_{1}, g_{2}: R \rightarrow R$ by $g_{1}(y) = -y^{2}$ and $g_{2}(y) = y + y^{2}$ for all $y \in R$, and take $m=3$ and $a = 1 = l$. Clearly, $g_{1}(y)y^{-3} + yg_{2}(y^{-1}) = 1$ for all $y \in R^{*}$. But $g_{1}$ and $g_{2}$ are not of the form mentioned in Corollary \ref{dcor}. \end{example} \begin{example} Let $R = M_{t}(\mathbb{Z}_{4})$, $t \geq 2$, which is a non-commutative ring. Define additive maps $g_{1},g_{2}: R \rightarrow R$ by $g_{1}(y) = g_{2}(y) = y$ for all $y \in R$, and take $m=1$, $a = 1$ and $l = 2$. Clearly, $g_{1}(y_{1})y_{2} + y_{1}g_{2}(y_{2}) = 2$ for all $y_{1},y_{2} \in R^{*}$ such that $y_{1}y_{2}=1=y_{2}y_{1}$. But $g_{1}$ and $g_{2}$ are not of the form mentioned in Corollary \ref{dcor}. \end{example} \section{The identity $G_1(y)g(y)G_2(y) = H(y)$} Throughout this section, let $D$ be a division ring with center $Z(D)$. We define $D \{Y_1, \cdots, Y_m \}$ to be the free product of the $Z(D)$-algebra $D$ and the free algebra $Z(D) \{Y_1, \cdots,Y_m \}$ over $Z(D)$, that is, the generalized free algebra over $Z(D)$ in the variables $Y_1, \cdots, Y_m$ with coefficients in $D$ (see \cite{C2} and \cite{M1}). We say $D$ is a GPI-algebra if there exists a non-zero $g(Y_1, \cdots, Y_m) \in D\{Y_1, \cdots, Y_m\}$ such that $g(y_1, \cdots, y_m) = 0$ for all $y_i \in D$. In this case, $g(Y_1, \cdots, Y_m)$ is a non-trivial GPI for $D$. Here, we have a special case of Martindale's theorem. \begin{thm} (\cite[Theorem $3$]{M1})\label{m1} If $D$ is a division GPI-algebra, then it is finite-dimensional over $Z(D)$.
\end{thm} Given additive maps $g_{i1}, \cdots, g_{is}: D \rightarrow D$ and $p_{ij} \in D$, assume \begin{center} $G(y) = \sum_{i} p_{i1}g_{i1}(y)p_{i2} \cdots p_{is}g_{is}(y)p_{is+1}$, \end{center} for all $ y \in D$. Applying the standard linearization argument to $G(y)$, we define \begin{center} $G^{(1)}(y_1) = G(y_1)$, \hspace{0.5cm} $G^{(2)}(y_1,y_2) = G^{(1)}(y_1 + y_2) - G^{(1)}(y_1) - G^{(1)}(y_2)$. \end{center} In general, we set \begin{multline*} G^{(k+1)}(y_1, \cdots, y_k, y_{k+1}) = \\ G^{(k)}(y_1, \cdots, y_{k-1},y_k+ y_{k+1}) - G^{(k)}(y_1, \cdots, y_{k-1}, y_{k}) - G^{(k)}(y_1, \cdots, y_{k-1}, y_{k+1}) \end{multline*} for $k \geq 1$. Also, it is known that \begin{equation*} G^{(s)}(y_1, \cdots, y_s) = \sum_{i} \sum_{\sigma \in Sym(s)} p_{i1}g_{i1}(y_{\sigma(1)})p_{i2} \cdots p_{is}g_{is}(y_{\sigma(s)})p_{is+1}, \end{equation*} for all $y_i \in D$, where $Sym(s)$ is the symmetric group on $\{ 1,2, \cdots, s \}$. If $k > s$, then \begin{equation*} G^{(k)}(y_1, \cdots, y_k) = 0, \end{equation*} for all $y_i \in D$. Throughout this paper, we define \begin{equation*} G^{(k)}(y_1, \cdots, \hat{y_{j}}, \cdots, y_k)=G^{(k)}(y_1, \cdots, y_{j-1},y_{j+1}, \cdots, y_k) \end{equation*} for $y_{1}, \cdots , y_{k} \in D$ and any integer $k \geq 1$. At this point, we have some remarks. \begin{rem} (\cite[Remark $2.2$]{tk1}) \label{r2} Let $D$ be a division ring and $0 \neq G(Y) \in D\{Y\}$. \begin{itemize} \item[(i)] If $\operatorname{deg} G(Y) >1$, then $G(Y_{1}+Y_{2}) -G(Y_{1}) - G(Y_{2}) \neq 0 $ in $D\{Y_{1},Y_{2}\}$. \item[(ii)] If $G(Y_{1}+Y_{2}) - G(Y_{1}) - G(Y_{2}) \neq 0$ in $D\{Y_{1},Y_{2}\}$ and $G(0) = 0$, then $\operatorname{deg} G(Y_{1}) >1 $. \item[(iii)] $G(Y_{1}+Y_{2}) = G(Y_{1}) + G(Y_{2})$ if and only if there exist finitely many $p_i,q_i \in D$ such that $G(Y_{1}) = \sum_i p_iY_{1}q_i$. \item[(iv)] If $s= \operatorname{deg} G(Y) \geq 1$ and $G(0) = 0$, then $G^{(s)}(Y_1, \cdots, Y_s)$ is non-zero and multilinear in $D\{Y_1,\cdots,Y_s\}$.
\end{itemize} \end{rem} Given additive maps $g_{ij}:D \rightarrow D$, let $G(y) = \sum_s G_s(y)$, for $y \in D$, where \begin{equation*} G_s(y) = \sum_{i} p_{i1s}g_{i1s}(y)p_{i2s} \cdots p_{iss}g_{iss}(y)p_{is+1s}, \end{equation*} for all $y \in D$. For a positive integer $t$, we define \begin{equation*} G^{(t)}(y_1, \cdots, y_t) = \sum_s G_s^{(t)}(y_1, \cdots, y_t), \end{equation*} for all $y_i \in D$. We now move towards the first main theorem. To prove Theorem \ref{mainB}, we need the following result. \begin{fact} \label{t3} Suppose $g$ is an additive map on a division ring $D$ and $G(Y), H(Y) \in D\{Y\}$ such that $G(y)g(y) = H(y)$ for all $y \in D$. If $G(Y)$ is non-zero, then either $g$ is an elementary operator or $D$ is finite-dimensional over its center. \end{fact} \begin{proof}[\textbf{Proof of Theorem \ref{mainB}}] Take $\operatorname{deg} G_1(Y) = s_1$ and $\operatorname{deg} G_2(Y) = s_2$. If $s_2=0$, then there exists $b^{'} \in D^{*}$ such that $G_2(Y) = b^{'}$ and we get \begin{equation*} G_1(y)g(y)b^{'} = H(y). \end{equation*} From Fact \ref{t3}, we get our conclusion by taking $g^{'}(y) = g(y)b^{'}$ as an additive map from $D$ to itself. We can proceed similarly in the case $s_1 = 0$. Now we consider the case when $s_1 = s_2 = 0$. Then there exist $b,b^{'} \in D^{*}$ such that $G_1(Y) = b$ and $G_2(Y) = b^{'}$. So we have $g(y) = b^{-1} H(y) b^{'-1}$ for all $y \in D$, and additivity of $g$ implies that $H(y_{1}+y_{2}) = H(y_{1}) + H(y_{2})$ for all $y_{1},y_{2} \in D$ and $H(0) = 0$. If $\operatorname{deg} H(Y) \leq 1$, then we have finitely many non-zero $p_i,q_i \in D$ such that $H(Y) = \sum_i p_i Y q_i$. Thus, $g$ is an elementary operator. On the other hand, if $\operatorname{deg} H(Y) > 1$, then by Remark \ref{r2} $(i)$, we have \begin{equation} \label{ts1} H(Y_{1} + Y_{2}) - H(Y_{1}) - H(Y_{2}) \neq 0, \end{equation} in $D\{Y_{1},Y_{2}\}$, and additivity of $g$ implies that (\ref{ts1}) is a non-trivial GPI for $D$.
Hence, $D$ is a GPI-algebra and, by Theorem \ref{m1}, we get $[D: Z(D)] < \infty$. Finally, we consider the case where $\operatorname{deg} G_1(Y) = s_1 \geq 1$ and $\operatorname{deg} G_2(Y) = s_2 \geq 1$. We write \begin{equation*} G_{i}(Y) = G_{i0}(Y) + G_{i1}(Y), \end{equation*} where $G_{i1}(Y)$ is the homogeneous part of $G_{i}(Y)$ of degree $s_i$, $i \in \{1,2\}$. Take \begin{center} $A_{lk}(y) = G_{1l}(y)g(y)G_{2k}(y)$ \hspace{0.3cm} and \hspace{0.3cm} $A_{11}(y) = G_{11}(y) g(y)G_{21}(y)$, \end{center} for all $y \in D$, where $(l,k) \neq (1,1)$ and $l,k \in \{0,1\}$. Since $\operatorname{deg} G_{10}(Y) < s_1$ and $\operatorname{deg} G_{20}(Y) < s_2$, we have \begin{equation*} A_{lk}^{(s_1+s_2+1)}(y_1, \cdots, y_{s_1 + s_2 + 1})= 0 \end{equation*} and \begin{align*} A_{11}^{(s_1+s_2+1)} & (y_1, \cdots, y_{s_1 + s_2 + 1})= \\ & \sum_{j=1}^{s_1 + s_2+ 1} G^{(s_1)}_{11}(y_1,\cdots, \Hat{y}_{j},\Hat{y}_{j+1}, \cdots, \Hat{y}_{j+s_2}, \cdots, y_{s_1+s_2+1})g(y_j) G_{21}^{(s_2)}(y_{j+1}, \cdots, y_{j+s_2}), \end{align*} for all $y_i \in D$, where $y_{j+i} = y_{j+i-s_{1}-s_{2}-1}$ for $j+i>s_1+s_2+1$, $i \in \{ 1,2, \cdots, s_2 \}$, $(l,k) \neq (1,1)$ and $l,k \in \{0,1\}$. Since $G_1(y)g(y)G_2(y) = H(y)$ for all $y \in D$, we get \begin{equation} \label{ts2} \begin{aligned} H^{(s_1+s_2+1)} & (y_1, \cdots, y_{s_1 + s_2 + 1})= \\ & \sum_{j=1}^{s_1 + s_2+ 1} G^{(s_1)}_{11}(y_1,\cdots, \Hat{y}_{j},\Hat{y}_{j+1}, \cdots, \Hat{y}_{j+s_2}, \cdots, y_{s_1+s_2+1})g(y_j) G_{21}^{(s_2)}(y_{j+1}, \cdots, y_{j+s_2}), \end{aligned} \end{equation} for all $y_i \in D$, where $y_{j+i} = y_{j+i-s_{1}-s_{2}-1}$ for $j+i>s_1+s_2+1$ and $i \in \{ 1,2, \cdots, s_2 \}$.
By Remark \ref{r2} $(iv)$, all \begin{center} $G^{(s_1)}_{11}(Y_1,\cdots, \Hat{Y}_{j},\Hat{Y}_{j+1}, \cdots, \Hat{Y}_{j+s_2}, \cdots, Y_{s_1+s_2+1})$ \end{center} and \begin{center} $G_{21}^{(s_2)}(Y_{j+1}, \cdots, Y_{j+s_2})$, \end{center} are multilinear and non-zero. Thus, we rewrite (\ref{ts2}) as \begin{equation} \label{ts3} \begin{aligned} G^{(s_1)}_{11}&(y_{s_2+1},\cdots, y_{s_1+s_2})g(y_{s_1+s_2+1}) G_{21}^{(s_2)}(y_{1}, \cdots, y_{s_2})=\\ & \sum_{j=1}^{s_{1}+ s_{2}}b_{j}(y_{1}, \cdots, y_{s_1 +s_2}) y_{s_1 +s_2+1}c_{j}(y_{1}, \cdots, y_{s_1+s_2}) + \\ &H^{(s_1+s_2+1)} (y_1, \cdots, y_{s_1 + s_2 + 1}), \end{aligned} \end{equation} for all $y_i \in D$, where $b_{j},c_{j}$ are generalized monomials in $y_{1}, \cdots, y_{s_1+s_2},g(y_{1}),\\ \cdots, g(y_{s_1+s_2})$. \textbf{Case $1$:} If $G^{(s_1)}_{11}(y_{s_2+1},\cdots, y_{s_2+s_1}) =0$ and $G_{21}^{(s_2)}(y_{1}, \cdots, y_{s_2}) = 0$ for all $y_{1}, \cdots, y_{s_1+s_2} \in D$, then $D$ is a division GPI-algebra and from Theorem \ref{m1} we have $[D: Z(D)] < \infty$. \textbf{Case $2$:} If $G^{(s_1)}_{11}(z_{s_2+1},\cdots, z_{s_2+s_1}) \neq 0$ for some $z_{s_2+1}, \cdots, z_{s_1+s_2} \in D$ and $G_{21}^{(s_2)}(y_{1}, \cdots, y_{s_2}) = 0$ for all $y_{1}, \cdots, y_{s_2} \in D$, then we proceed as in Case $1$ and get the same conclusion. \textbf{Case $3$:} If $G^{(s_1)}_{11}(y_{s_2+1},\cdots, y_{s_2+s_1}) = 0$ for all $y_{s_2+1}, \cdots, y_{s_1+s_2} \in D$ and\\ $G_{21}^{(s_2)}(z_{1}, \cdots, z_{s_2}) \neq 0$ for some $z_{1}, \cdots, z_{s_2} \in D$, then we get the conclusion $[D: Z(D)] < \infty$. \textbf{Case $4$:} If $G^{(s_1)}_{11}(z_{s_2+1},\cdots, z_{s_2+s_1}) \neq 0$ and $G_{21}^{(s_2)}(z_{1}, \cdots, z_{s_2}) \neq 0$ for some $z_{1}, \cdots, z_{s_1+s_2} \in D$, then let \begin{center} $\Tilde{b}_{j}= b_{j}(z_1, \cdots, z_{s_1+s_2})$ \hspace{0.3cm} and \hspace{0.3cm} $\Tilde{c}_{j}= c_{j}(z_{1}, \cdots, z_{s_1+s_2})$ \end{center} for all $j$.
Take $t = G^{(s_1)}_{11}(z_{s_2+1},\cdots, z_{s_2+s_1})$ and $t'= G_{21}^{(s_2)}(z_{1}, \cdots, z_{s_2})$. In view of equation (\ref{ts3}), we have
\begin{equation} \label{ts4}
g(y_{s_1+s_2+1})= \sum_{j} t^{-1} \Tilde{b}_{j}y_{s_1+s_2+1} \Tilde{c}_{j}t'^{-1} + E(y_{s_1+s_2+1}),
\end{equation}
for all $y_{s_1+s_2+1} \in D$, where
\begin{equation*}
E(Y_{s_1+s_2+1}) = t^{-1}H^{(s_1 + s_2 + 1)}(z_{1}, \cdots, z_{s_1+s_2}, Y_{s_1+s_2+1})t'^{-1} \in D \{ Y_{s_1 + s_2 +1} \}.
\end{equation*}
If $E(Y_{s_1+s_2+1})$ is linear in $Y_{s_1 +s_2+1}$, then, by equation (\ref{ts4}), $g$ is an elementary operator. Otherwise, $\operatorname{deg} E(Y_{s_1+s_2+1}) > 1$, and the additivity of $g$ together with equation (\ref{ts4}) implies that $D$ satisfies the GPI
\begin{equation*}
E(y_{s_1 +s_2+1} + y_{s_1+s_2+2})- E(y_{s_1 + s_2+1}) - E(y_{s_1+s_2+2}).
\end{equation*}
From Remark \ref{r2} $(i)$, we get $E(Y_{s_1 + s_2 +1} + Y_{s_1 + s_2 +2})- E(Y_{s_1 + s_2 +1}) - E(Y_{s_1 + s_2 +2}) \neq 0$. Then, by Theorem \ref{m1}, $D$ is finite-dimensional over $Z(D)$.
\end{proof}
\begin{thm} \label{tc5}
Suppose $g_{1}, \cdots, g_{s}$ are additive maps on a division ring $D$ and $G_{1j}(Y_{1}, \cdots, Y_{s})$, $G_{2j}(Y_{1}, \cdots, Y_{s})$, $H(Y_{1}, \cdots, Y_{s}) \in D\{Y_{1}, \cdots, Y_{s}\}$, for $j=1, \cdots, s$, such that
\begin{equation} \label{ts5}
\sum_{j=1}^{s}G_{1j}(y_{1}, \cdots, y_{s})g_{j}(y_{j})G_{2j}(y_{1}, \cdots,y_{s}) = H(y_{1}, \cdots, y_{s}),
\end{equation}
for all $y_i \in D$. If $[D: Z(D)] = \infty$ and for some $j$, $G_{1j}(Y_{1}, \cdots, Y_{s}) \neq 0$ and $G_{2j}(Y_{1}, \cdots, Y_{s}) \neq 0$, then $g_{j}$ is an elementary operator.
\end{thm}
\begin{proof}
Suppose for some $t$, $G_{1t}(Y_{1}, \cdots, Y_{s}) \neq 0$ and $G_{2t}(Y_{1}, \cdots, Y_{s}) \neq 0$. We claim that $g_{t}$ is an elementary operator.
If either $G_{1t}(Y_{1}, \cdots, Y_{s})$ or $G_{2t}(Y_{1}, \cdots, Y_{s})$ is a GPI for $D$, then by Theorem \ref{m1} $D$ is finite-dimensional over $Z(D)$, which is a contradiction. Thus neither $G_{1t}(Y_{1}, \cdots, Y_{s})$ nor $G_{2t}(Y_{1}, \cdots, Y_{s})$ is a GPI for $D$. Then $G_{1t}(p_{1}, \cdots, p_{s}) \neq 0$ and $G_{2t}(p_{1}, \cdots, p_{s}) \neq 0$ for some $p_i \in D$, $1\leq i \leq s$. By equation (\ref{ts5}), we have
\begin{equation} \label{ts6}
\begin{aligned}
G_{1t}& (p_{1},\cdots,y_{t}, \cdots, p_{s})g_{t}(y_{t})G_{2t}(p_{1},\cdots,y_{t}, \cdots, p_{s})= \\
& -\sum_{j=1, j\neq t}^{s} G_{1j}(p_{1},\cdots,y_{t}, \cdots, p_{s})g_{j}(p_{j}) G_{2j}(p_{1},\cdots, y_{t}, \cdots, p_{s}) \\
& + H(p_{1},\cdots, y_{t}, \cdots, p_{s}),
\end{aligned}
\end{equation}
for all $y_t \in D$. Note that
\begin{equation*}
\begin{aligned}
G_{1t}& (p_{1},\cdots, Y_{t}, \cdots, p_{s}),G_{2t}(p_{1},\cdots, Y_{t}, \cdots, p_{s}), \\
& -\sum_{j=1, j \neq t}^{s} G_{1j}(p_{1},\cdots, Y_{t}, \cdots, p_{s})g_{j}(p_{j}) G_{2j}(p_{1},\cdots,Y_{t}, \cdots, p_{s}) \\
& + H(p_{1},\cdots,Y_{t}, \cdots, p_{s}) \in D \{ Y_{t} \}.
\end{aligned}
\end{equation*}
Since $G_{1t}(p_{1},\cdots,Y_{t}, \cdots, p_{s}) \neq 0$, $G_{2t}(p_{1},\cdots, Y_{t}, \cdots, p_{s}) \neq 0$ and $[D:Z(D)] = \infty$, by Theorem \ref{mainB} we conclude that $g_{t}$ is an elementary operator.
\end{proof}
Here, we generalize Theorem \ref{mainB} to several additive maps.
\begin{thm}
Suppose $g_{1}, \cdots, g_{n}$ are additive maps on a division ring $D$ and $G_{1j}(Y)$, $G_{2j}(Y) \in D\{Y\} \backslash \{0\}$ and $H(Y) \in D\{Y\}$ such that
\begin{equation} \label{teql7}
\sum_{j=1}^{n} G_{1j}(y)g_{j}(y)G_{2j}(y) = H(y),
\end{equation}
for all $y\in D$. If $[D: Z(D)] = \infty$ and $\operatorname{deg} G_{1j}(Y) + \operatorname{deg} G_{2j}(Y) \neq \operatorname{deg} G_{1i}(Y) + \operatorname{deg} G_{2i}(Y)$ for $i \neq j$, then all $g_{j}$ are elementary operators.
\end{thm}
\begin{proof}
We proceed by induction on $n$.
The case $n = 1$ is done by Theorem \ref{mainB}. So we assume $n >1$ and that the conclusion holds for $n-1$. Let $s_{kj} = \operatorname{deg} G_{kj}(Y)$, for $j= 1, \cdots, n$ and $k \in \{1,2\}$. We may assume $s_{1j} + s_{2j} < s_{1n} + s_{2n}$, for $j = 1, \cdots, n-1$. If $E_{k1}(Y)$ is the homogeneous part of $G_{kn}(Y)$ of degree $s_{kn}$, $k \in \{1,2\}$, then we write
\begin{equation*}
G_{kn}(Y) = E_{k0}(Y) + E_{k1}(Y).
\end{equation*}
Since $s_{1j} + s_{2j} < s_{1n} + s_{2n}$ for $j = 1, \cdots, n-1$, applying the same argument given in the proof of Theorem \ref{mainB}, we get
\begin{equation*}
\begin{aligned}
H^{(s_{1n}+s_{2n}+1)} & (y_{1}, \cdots, y_{s_{1n} + s_{2n} + 1})= \\
& \sum_{j=1}^{s_{1n} + s_{2n}+ 1} E^{(s_{1n})}_{11}(y_{1},\cdots, \Hat{y}_{j},\Hat{y}_{j+1}, \cdots, \Hat{y}_{j+s_{2n}}, \cdots, y_{s_{1n}+s_{2n}+1})g_{n}(y_{j})\\
& E_{21}^{(s_{2n})}(y_{j+1}, \cdots, y_{j+s_{2n}}),
\end{aligned}
\end{equation*}
for all $y_i \in D$, where $y_{j+i} = y_{j+i-s_{1n}-s_{2n}-1}$ for $j+i>s_{1n}+s_{2n}+1$ and $i \in \{ 1,2, \cdots, s_{2n} \}$. As $E_{11}^{(s_{1n})}(Y_{1}, \cdots, Y_{s_{1n}}) \neq 0$ and $E_{21}^{(s_{2n})}(Y_{1}, \cdots, Y_{s_{2n}}) \neq 0$, by Theorem \ref{tc5} $g_{n}$ is an elementary operator. Therefore, there are finitely many non-zero $p_{i}, q_{i} \in D$ such that $g_n(y) = \sum_{i} p_{i} y q_{i}$ for all $y \in D$. By equation \eqref{teql7},
\begin{equation*}
\sum_{j=1}^{n-1} G_{1j}(y)g_{j}(y)G_{2j}(y) = H(y) - G_{1n}(y)(\sum_{i} p_{i}y q_{i})G_{2n}(y),
\end{equation*}
for all $y \in D$. Note that $H(Y) - G_{1n}(Y)(\sum_{i} p_{i}Y q_{i})G_{2n}(Y) \in D \{Y\}$. So, by the induction hypothesis, $g_{1}, \cdots, g_{n-1}$ are elementary operators, and hence all of $g_{1}, \cdots, g_{n}$ are elementary operators.
\end{proof} \section{The identity $g(y^2) = w_{1}(y)g(y)+ g(y)w_{2}(y) + w_{3}(y)g(y)w_{4}(y)$} \begin{lem} \label{tl6} Suppose $g_{1},g_{2}$ are additive maps on a division ring $D$ having characteristic different from $2$ and $w_{1},w_{2},w_{3},w_{4}: D \rightarrow D$ are maps such that $g_{1}(y^2) = w_{1}(y)g_{2}(y)+ g_{2}(y)w_{2}(y) + w_{3}(y)g_{2}(y)w_{4}(y)$ for all $ y \in D$. Then \begin{equation*} \begin{aligned} &\Bigl(2w_{1}(2y_{1})- 2w_{1}(y_{1}) -w_{1}(y_{1}+y_{2}) -w_{1}(y_{1}-y_{2}) \Bigl)g_{2}(y_{1}) \\ & + g_{2}(y_{1})\Bigl(2w_{2}(2y_{1}) - 2w_{2}(y_{1}) - w_{2}(y_{1}+y_{2}) - w_{2}(y_{1}-y_{2})\Bigl)\\ &+ \Bigl(2w_{1}(y_{2}) -w_{1}(y_{1}+y_{2}) + w_{1}(y_{1}-y_{2})\Bigl)g_{2}(y_{2})\\ & + g_{2}(y_{2}) \Bigl(2w_{2}(y_{2}) -w_{2}(y_{1}+y_{2}) + w_{2}(y_{1}-y_{2})\Bigl)\\ & + 2w_{3}(2y_{1})g_{2}(y_{1})w_{4}(2y_{1}) - w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2})\\ & - w_{3}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})w_{4}(y_{1}-y_{2}) - 2w_{3}(y_{1})g_{2}(y_{1})w_{4}(y_{1}) \\ & + 2w_{3}(y_{2})g_{2}(y_{2})w_{4}(y_{2}) = 0, \end{aligned} \end{equation*} for all $y_{1},y_{2} \in D$. \end{lem} \begin{proof} Let $y_{1},y_{2} \in D$. We compute, \begin{equation} \begin{aligned} \label{ts8} g_{1}(y_{1}y_{2}+y_{2}y_{1}) &= g_{1}\Bigl((y_{1}+y_{2})^2 - y_{1}^2 - y_{2}^2\Bigl)\\ & = w_{1}(y_{1} + y_{2})g_{2}(y_{1} +y_{2}) + g_{2}(y_{1} + y_{2})w_{2}(y_{1}+y_{2}) \\ &+ w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2}) - w_{1}(y_{1})g_{2}(y_{1})\\ & - g_{2}(y_{1})w_{2}(y_{1}) - w_{1}(y_{2})g_{2}(y_{2}) - g_{2}(y_{2})w_{2}(y_{2})\\ & - w_{3}(y_{1})g_{2}(y_{1})w_{4}(y_{1})- w_{3}(y_{2})g_{2}(y_{2})w_{4}(y_{2}). 
\end{aligned} \end{equation} Replacing $(y_{1},y_{2})$ by $(y_{1}+y_{2}, y_{1}-y_{2})$ in (\ref{ts8}), we get \begin{equation*} \begin{aligned} g_{1}\Bigl((y_{1}+y_{2})(y_{1}-y_{2})& + (y_{1}-y_{2})(y_{1}+y_{2})\Bigl)\\ &=2w_{1}(2y_{1})g_{2}(y_{1}) + 2g_{2}(y_{1})w_{2}(2y_{1})- w_{1}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})\\ &- g_{2}(y_{1}+y_{2})w_{2}(y_{1}+y_{2})- w_{1}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})\\ &- g_{2}(y_{1}-y_{2})w_{2}(y_{1}-y_{2})+ 2w_{3}(2y_{1})g_{2}(y_{1})w_{4}(2y_{1}) \\ &- w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2})\\ &- w_{3}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})w_{4}(y_{1}-y_{2})\\ & = (2w_{1}(2y_{1}) -w_{1}(y_{1}+y_{2}) -w_{1}(y_{1}-y_{2}) )g_{2}(y_{1}) \\ & + g_{2}(y_{1})(2w_{2}(2y_{1}) - w_{2}(y_{1}+y_{2}) - w_{2}(y_{1}-y_{2}))\\ & + (-w_{1}(y_{1}+y_{2}) + w_{1}(y_{1}-y_{2}))g_{2}(y_{2}) \\ & + g_{2}(y_{2}) (-w_{2}(y_{1}+y_{2}) + w_{2}(y_{1}-y_{2})) + 2w_{3}(2y_{1})g_{2}(y_{1})w_{4}(2y_{1}) \\ &- w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2})\\ &- w_{3}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})w_{4}(y_{1}-y_{2}). \end{aligned} \end{equation*} On the other hand we have, \begin{equation*} \begin{aligned} &g_{1}\Bigl((y_{1}+y_{2})(y_{1}-y_{2}) + (y_{1}-y_{2})(y_{1}+y_{2})\Bigl)\\ & = 2g_{1}(y_{1}^{2}) - 2g_{1}(y_{2}^{2}) \\ & = 2w_{1}(y_{1})g_{2}(y_{1}) + 2g_{2}(y_{1})w_{2}(y_{1}) - 2w_{1}(y_{2})g_{2}(y_{2}) -2g_{2}(y_{2})w_{2}(y_{2})\\ & + 2w_{3}(y_{1})g_{2}(y_{1})w_{4}(y_{1}) - 2w_{3}(y_{2})g_{2}(y_{2})w_{4}(y_{2}). 
\end{aligned} \end{equation*} Comparing the above two equations, we get \begin{equation*} \begin{aligned} &\Bigl(2w_{1}(2y_{1})- 2w_{1}(y_{1}) -w_{1}(y_{1}+y_{2}) -w_{1}(y_{1}-y_{2})\Bigl)g_{2}(y_{1}) \\ & + g_{2}(y_{1})\Bigl(2w_{2}(2y_{1}) - 2w_{2}(y_{1}) - w_{2}(y_{1}+y_{2}) - w_{2}(y_{1}-y_{2})\Bigl)\\ &+ \Bigl(2w_{1}(y_{2}) -w_{1}(y_{1}+y_{2}) + w_{1}(y_{1}-y_{2})\Bigl)g_{2}(y_{2})\\ & + g_{2}(y_{2}) \Bigl(2w_{2}(y_{2}) -w_{2}(y_{1}+y_{2}) + w_{2}(y_{1}-y_{2})\Bigl)\\ & + 2w_{3}(2y_{1})g_{2}(y_{1})w_{4}(2y_{1}) - w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2})\\ & - w_{3}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})w_{4}(y_{1}-y_{2}) - 2w_{3}(y_{1})g_{2}(y_{1})w_{4}(y_{1}) \\ & + 2w_{3}(y_{2})g_{2}(y_{2})w_{4}(y_{2}) = 0, \end{aligned} \end{equation*} as desired. \end{proof} \begin{thm} \label{ttl2} Suppose $g_{1},g_{2}$ are additive maps on a division ring $D$ having characteristic different from $2$ and $w_{1}(Y),w_{2}(Y),w_{3}(Y),w_{4}(Y) \in D \{Y\}$ such that $$g_{1}(y^{2}) = w_{1}(y)g_{2}(y)+ g_{2}(y)w_{2}(y) + w_{3}(y)g_{2}(y)w_{4}(y)$$ for all $y \in D$. If $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y), \operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$, then either $D$ is finite-dimensional over $Z(D)$ or $g_{2}$ is an elementary operator. \end{thm} \begin{proof} Assume that $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y), \operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$ and $[D:Z(D)] = \infty$. By Remark \ref{r2} $(i)$, we have $ w_{1}(Y_{1}+Y_{2}) - w_{1}(Y_{1}) - w_{1}(Y_{2}), w_{2}(Y_{1}+Y_{2}) - w_{2}(Y_{1}) - w_{2}(Y_{2}), w_{3}(Y_{1}+Y_{2}) - w_{3}(Y_{1}) - w_{3}(Y_{2}), w_{4}(Y_{1}+Y_{2}) - w_{4}(Y_{1}) - w_{4}(Y_{2}) \in D\{Y_{1},Y_{2}\} \backslash \{0\}. 
$ It follows from Lemma \ref{tl6} that
\begin{equation*}
\begin{aligned}
&\Bigl(2w_{1}(2y_{1})- 2w_{1}(y_{1}) -w_{1}(y_{1}+y_{2}) -w_{1}(y_{1}-y_{2})\Bigr)g_{2}(y_{1}) \\
& + g_{2}(y_{1})\Bigl(2w_{2}(2y_{1}) - 2w_{2}(y_{1}) - w_{2}(y_{1}+y_{2}) - w_{2}(y_{1}-y_{2})\Bigr)\\
&+ \Bigl(2w_{1}(y_{2}) -w_{1}(y_{1}+y_{2}) + w_{1}(y_{1}-y_{2})\Bigr)g_{2}(y_{2})\\
& + g_{2}(y_{2}) \Bigl(2w_{2}(y_{2}) -w_{2}(y_{1}+y_{2}) + w_{2}(y_{1}-y_{2})\Bigr)\\
& + 2w_{3}(2y_{1})g_{2}(y_{1})w_{4}(2y_{1}) - w_{3}(y_{1}+y_{2})g_{2}(y_{1}+y_{2})w_{4}(y_{1}+y_{2})\\
& - w_{3}(y_{1}-y_{2})g_{2}(y_{1}-y_{2})w_{4}(y_{1}-y_{2}) - 2w_{3}(y_{1})g_{2}(y_{1})w_{4}(y_{1}) \\
& + 2w_{3}(y_{2})g_{2}(y_{2})w_{4}(y_{2}) = 0.
\end{aligned}
\end{equation*}
Suppose, on the contrary, that $g_{2}$ is not an elementary operator. Then Theorem \ref{tc5} forces $w_{3}(Y) = 0$ or $w_{4}(Y) = 0$, which is a contradiction, since $\operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$. Hence $g_{2}$ is an elementary operator, and the claim follows.
\end{proof}
Before proving the next lemma, we need the following fact.
\begin{fact} \cite[Theorem $2(a)$]{tl1} \label{ll3}
Suppose $\{ p_{1}, \cdots, p_{s}\}$ and $\{ q_{1}, \cdots , q_{s}\}$ are two linearly independent subsets of a division ring $D$ over $Z(D)$. Then $\sum_{j} p_{j} Y q_{j} \neq 0$ in $D\{Y\}$.
\end{fact}
\begin{lem} \label{tll4}
Suppose $\{ p_{1}, \cdots, p_{s}\}$ and $\{ q_{1}, \cdots , q_{s}\}$ are two linearly independent subsets of a division ring $D$ over $Z(D)$ and $w_{1}(Y), w_{2}(Y), w_{3}(Y), w_{4}(Y) \in D\{Y\}$ with $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y),\operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$ and $\operatorname{deg} w_{1}(Y) \neq \operatorname{deg} w_{2}(Y) \neq \operatorname{deg} w_{3}(Y)+ \operatorname{deg} w_{4}(Y)$. Then
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(Y) p_{j} Y q_{j} + p_{j} Y q_{j} w_{2}(Y)\Bigr) - \sum_{j=1}^{s} w_{3}(Y) p_{j} Y q_{j} w_{4}(Y) \neq 0.
\end{equation*}
\end{lem}
\begin{proof}
Assume $\operatorname{deg} w_{i}(Y) = l_{i} > 1$ for $i \in \{1,2,3,4\}$. Write $w_{i}(Y) = \sum_{k=0}^{l_{i}} w_{ik}(Y)$, where $w_{ik}(Y)$ denotes the homogeneous part of $w_{i}(Y)$ of degree $k$, taken to be zero when $w_{i}(Y)$ has no component of degree $k$. Thus $w_{il_{i}}(Y) \neq 0$ for $i \in \{1,2,3,4\}$. If possible, assume that
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(Y) p_{j} Y q_{j} + p_{j} Y q_{j} w_{2}(Y)\Bigr) - \sum_{j=1}^{s} w_{3}(Y) p_{j} Y q_{j} w_{4}(Y) = 0.
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - &\sum_{k=0}^{l_{1}} w_{1k}(Y)\Bigl(\sum_{j=1}^{s} p_{j} Y q_{j}\Bigr) - \Bigl(\sum_{j=1}^{s} p_{j} Y q_{j}\Bigr) \sum_{k=0}^{l_{2}} w_{2k}(Y)\\
& -\sum_{k=0}^{l_{3}} w_{3k}(Y)\Bigl(\sum_{j=1}^{s} p_{j} Y q_{j}\Bigr) \sum_{k=0}^{l_{4}} w_{4k}(Y) = 0.
\end{aligned}
\end{equation*}
Comparing the homogeneous parts of highest degree, since $l_{i} > 1$ for $i \in \{1,2,3,4\}$, we get $w_{1l_{1}}(Y)(\sum_{j=1}^{s}p_{j}Yq_{j})=0$, $(\sum_{j=1}^{s}p_{j}Yq_{j})w_{2l_{2}}(Y)=0$ or $w_{3l_{3}}(Y)(\sum_{j=1}^{s}p_{j}Yq_{j})w_{4l_{4}}(Y)=0$, according as $l_{1} = \max \{l_{1},l_{2},l_{3}+l_{4}\}$, $l_{2} = \max \{l_{1},l_{2},l_{3}+l_{4}\}$ or $l_{3}+l_{4} = \max \{l_{1},l_{2},l_{3}+l_{4}\}$, respectively. As $D\{Y\}$ is a domain (see \cite[Corollary, p.$379$]{coh1}, \cite[Theorem $2.4$]{cohn2}), we get $\sum_{j=1}^{s} p_{j} Y q_{j} = 0$, which contradicts Fact \ref{ll3}.
\end{proof}
If we take $l_{1} = l_{2}$ in Lemma \ref{tll4}, then we can proceed as in the case $l_{3}+l_{4} = \max \{l_{1},l_{2},l_{3}+l_{4}\}$ of Lemma \ref{tll4} and get
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(Y) p_{j} Y q_{j} + p_{j} Y q_{j} w_{2}(Y)\Bigr) - \sum_{j=1}^{s} w_{3}(Y) p_{j} Y q_{j} w_{4}(Y) \neq 0.
\end{equation*}
But if either $l_{1}= \max \{l_{1},l_{3}+l_{4}\}$ or $l_{1}=l_{2}=l_{3}+l_{4}$, then we have counter-examples, as follows.
\begin{example}
Suppose $w_{1}(Y) = Y + Y^{4}$, $w_{2}(Y) = -2Y^{4}$, $w_{3}(Y) = w_{4}(Y)= Y^{2}$ and $s=1$ with $p_{1} = q_{1}=1$. All other conditions of Lemma \ref{tll4} are satisfied, but $Y^{2} - w_{1}(Y)Y - Yw_{2}(Y) - w_{3}(Y)Yw_{4}(Y) = Y^{2} - Y^{2} - Y^{5} + 2Y^{5} - Y^{5} = 0$.
\end{example}
\begin{example}
Suppose $w_{1}(Y) = Y + Y^{5}$, $w_{2}(Y) = -Y^{5}-Y^{4}$, $w_{3}(Y) = w_{4}(Y)= Y^{2}$ and $s=1$ with $p_{1} = q_{1}=1$. All other conditions of Lemma \ref{tll4} are satisfied, but $Y^{2} - w_{1}(Y)Y - Yw_{2}(Y) - w_{3}(Y)Yw_{4}(Y) = 0$.
\end{example}
Now take $l_{1} \neq l_{2}$ in Lemma \ref{tll4}, keeping all other assumptions and notation of Lemma \ref{tll4}. Without loss of generality, we can assume that $l_{1}>l_{2}$. If $l_{1} = l_{3}+l_{4}$, then the conclusion of Lemma \ref{tll4} need not hold.
\begin{example}
Suppose $w_{1}(Y) = -Y^{2} - Y^{4}$, $w_{2}(Y) = Y + Y^{2}$, $w_{3}(Y) = w_{4}(Y)= Y^{2}$ and $s=1$ with $p_{1} = q_{1}=1$. All other conditions of Lemma \ref{tll4} are satisfied, but $Y^{2} - w_{1}(Y)Y - Yw_{2}(Y) - w_{3}(Y)Yw_{4}(Y) = 0$.
\end{example}
\begin{rem} \label{rem lemm1}
Assume $w_{1}(Y) =w_{2}(Y)= Y^{2l}$, $w_{3}(Y)= 4Y^{l}$, $w_{4}(Y) = Y^{l}$, where $l>1$ is an integer, and keep all other assumptions and notation as in Lemma \ref{tll4}. If we assume
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s} \Bigl(Y^{2l} p_{j} Y q_{j} + p_{j} Y q_{j} Y^{2l}\Bigr) - 4\sum_{j=1}^{s}Y^{l}p_{j}Yq_{j}Y^{l} = 0,
\end{equation*}
then, comparing the homogeneous parts of degree $2$, we get $\sum_{j=1}^{s} p_{j} Y^{2} q_{j}=0$, which is a contradiction by Fact \ref{ll3} or \cite[Theorem $2(a)$]{tl1}.
\end{rem}
Naturally, this leads us to the following question.
\begin{question}
Suppose $\{ p_{1}, \cdots, p_{s}\}$ and $\{ q_{1}, \cdots , q_{s}\}$ are two linearly independent subsets of a division ring $D$ over $Z(D)$ and $w_{1}(Y), w_{2}(Y), w_{3}(Y), w_{4}(Y) \in D\{Y\}$ with $\operatorname{deg} w_{i}(Y) > 1$ for $i \in \{1,2,3,4\}$.
Then is it possible to conclude that either
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(Y) p_{j} Y q_{j} + p_{j} Y q_{j} w_{2}(Y)\Bigr) - \sum_{j=1}^{s} w_{3}(Y) p_{j} Y q_{j} w_{4}(Y) \neq 0,
\end{equation*}
or $D$ is finite-dimensional over its center?
\end{question}
\begin{thm} \label{tcl5}
Suppose $g$ is an additive map on a division ring $D$ having characteristic different from $2$ and $w_{1}(Y), w_{2}(Y),w_{3}(Y),w_{4}(Y) \neq 0$ in $D\{Y\}$ such that $$g(y^2) = w_{1}(y)g(y) + g(y)w_{2}(y) + w_{3}(y)g(y)w_{4}(y)$$ for all $y \in D$. If $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y),\operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$ and $\operatorname{deg} w_{1}(Y) \neq \operatorname{deg} w_{2}(Y) \neq \operatorname{deg} w_{3}(Y)+ \operatorname{deg} w_{4}(Y)$, then $D$ is finite-dimensional over $Z(D)$.
\end{thm}
\begin{proof}
Since $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y), \operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$, from Theorem \ref{ttl2} we have that either $D$ is finite-dimensional over its center or $g$ is an elementary operator. If we assume $g$ is an elementary operator, then there exist finitely many $p_{j}, q_{j} \in D$ such that $g(y) = \sum_{j=1}^{s} p_{j} y q_{j}$ for all $y \in D$. We may assume $s$ to be minimal. Then $\{p_{1}, \cdots, p_{s} \}$ and $\{ q_{1}, \cdots, q_{s} \}$ are two linearly independent sets over $Z(D)$. As $g(y^2) = w_{1}(y)g(y) + g(y)w_{2}(y) + w_{3}(y)g(y)w_{4}(y)$ for all $y \in D$, we get
\begin{equation*}
\sum_{j=1}^{s} p_{j} y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(y) p_{j} y q_{j} + p_{j} y q_{j} w_{2}(y)\Bigr) - \sum_{j=1}^{s} w_{3}(y) p_{j} y q_{j} w_{4}(y) = 0,
\end{equation*}
for all $y \in D$. Since $\operatorname{deg} w_{1}(Y), \operatorname{deg} w_{2}(Y), \operatorname{deg} w_{3}(Y), \operatorname{deg} w_{4}(Y) > 1$ and $\operatorname{deg} w_{1}(Y) \neq \operatorname{deg} w_{2}(Y) \neq \operatorname{deg} w_{3}(Y)+ \operatorname{deg} w_{4}(Y)$, the hypotheses of Lemma \ref{tll4} are satisfied.
It follows from Lemma \ref{tll4} that
\begin{equation*}
\sum_{j=1}^{s} p_{j} Y^{2} q_{j} - \sum_{j=1}^{s}\Bigl( w_{1}(Y) p_{j} Y q_{j} + p_{j} Y q_{j} w_{2}(Y)\Bigr) - \sum_{j=1}^{s} w_{3}(Y) p_{j} Y q_{j} w_{4}(Y)
\end{equation*}
is a non-trivial GPI for $D$. By Theorem \ref{m1}, we get $[D:Z(D)] < \infty$.
\end{proof}
\begin{rem} \label{rem lemm2}
Assume $w_{1}(Y) =w_{2}(Y)= Y^{2l}$, $w_{3}(Y)= 4Y^{l}$, $w_{4}(Y) = Y^{l}$, where $l>1$ is an integer, and keep all other assumptions and notation as in Theorem \ref{tcl5}. Then, following the same process as in Theorem \ref{tcl5}, we can prove that $D$ is finite-dimensional over its center.
\end{rem}
\section{The identity $g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1})=0$}
In this section, we prove our main result on the identity $g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1})=0$. Note that Theorem \ref{main2} is not true in general if $D$ is non-commutative but not a division ring. Here is an example.
\begin{example}
Let $D = \mathbb{Z}_{5} \{Y_{1}, Y_{2}\}$. Define additive maps $g_{1},g_{2}: D \rightarrow D$ such that $g_{1}|_{\mathbb{Z}_{5}}(y) = 3y^{5}$ and $g_{2}|_{\mathbb{Z}_{5}}(y) = 2y$. Then, for $n=4$, $m=2$, equation \eqref{maineq1} is satisfied, but neither $g_{1} = 0$ nor $g_{2} = 0$.
\end{example}
Now we prove the following proposition, which will be useful in establishing Theorem \ref{main2}.
\begin{prop}\label{t12}
Suppose $g_{1},g_{2}$ are additive maps on an algebra $R$ over a field $F$ and $m,n \neq 1$ are integers such that
\begin{equation} \label{eq9}
g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1})=0,
\end{equation}
for all $y \in R^*$. If we exclude the case $\operatorname{char} F = p >0$ with $p-1 | n+m-2$, then $g_{1} = g_{2} = 0$ on $R^*$.
\end{prop}
\begin{proof}
If $\operatorname{char} F = 0$, we set $k=2$. If $\operatorname{char} F = p > 0$, we choose $k$ such that $\Bar{k}$ generates the cyclic multiplicative group $\mathbb{Z}_{p}^{*}$; recall that in this case $p-1 \not| n+m-2$.
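The choice of $k$ can be illustrated numerically (a quick sanity check; the specific values $p=7$, $n=4$, $m=2$ are our own illustration and avoid the excluded case $p-1 \mid n+m-2$):

```python
# Illustrative check of the choice of k in the proof above:
# take p = 7, n = 4, m = 2, so n + m - 2 = 4 and p - 1 = 6 does not divide it.
p, n, m = 7, 4, 2
assert (n + m - 2) % (p - 1) != 0  # the excluded case is avoided

# Pick k whose residue generates the cyclic group Z_p^* (which has order p - 1).
def is_generator(k, p):
    return len({pow(k, e, p) for e in range(1, p)}) == p - 1

k = next(k for k in range(2, p) if is_generator(k, p))  # k = 3 for p = 7

# The constant k * (k^(n+m-2) - 1), used below to force g_1 = 0, is non-zero mod p.
assert k * (pow(k, n + m - 2, p) - 1) % p != 0
```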
Thus we have $k^{n+m-2} -1 \neq 0$ in $F$, as $p-1 \not| n+m-2$. So, in both cases, $k(k^{n+m-2}-1) \neq 0$ in $F$. Replacing $y$ by $ky$ in (\ref{eq9}), we have
\begin{align*}
kg_{1}(y) & = g_{1}(ky)\\
& = - k^{n} y^{n}g_{2}(k^{-1}y^{-1})k^{m}y^{m}\\
& = - k^{n+m-1}y^{n}g_{2}(y^{-1})y^{m}\\
& = k^{m+n-1}g_{1}(y),
\end{align*}
for all $y \in R^*$, which implies $k(k^{n+m-2}-1)g_{1}(y) = 0$ for all $y \in R^*$. Hence $g_{1} = 0$ on $R^*$, and by equation (\ref{eq9}) we have $g_{2} = 0$ on $R^*$.
\end{proof}
In particular, we have the following.
\begin{lem} \label{4t2}
Suppose $g_{1},g_{2}$ are additive maps on a division ring $D$ and $m,n$ are positive integers with $(m,n) \neq (1,1)$ such that $g_{1}(y)y^{-m} + y^{n}g_{2}(y^{-1})=0$ for all $y \in D^{*}$. If we exclude the case $\operatorname{char} D = p >0$ with $p-1 | n+m-2$, then $g_{1} = g_{2} = 0$ on $D^*$.
\end{lem}
The examples in \cite[Example $4.3$]{tk1} show the reason for the restriction on the characteristic in Proposition \ref{t12}.
\begin{rem}
In Theorem \ref{main2} the case $m=n=1$ was proved by L. Catalano \cite[Theorem $1$]{lc1}. If $m=1$ and $n=2$, or $m=2$ and $n=1$, then $g_{1}=g_{2}=0$ follows from Lemma \ref{4t2}.
\end{rem}
Therefore, in this section, we make the following assumption unless otherwise specified:
\begin{equation} \tag{*} \label{1*}
\operatorname{char} D = p > 2, \quad p-1 | n+m-2, \quad n,m \geq 2.
\end{equation}
To study Theorem \ref{main2}, we divide the problem into the following cases:
\textbf{Case $1$:} $n= p^{l_{1}} k_{1}$, $m=p^{l_{2}}k_{2}$, where $l_{i} \geq 0$, $k_{i} >1$, $\gcd(p,k_{i}) = 1$ and $k_{i}-1$ is not a non-negative power of $p$, $i \in \{1,2\}$;
\textbf{Case $2$:} $n= p^{l_{1}} k_{1}$, $m=p^{l_{2} +m_{2}} + p^{l_{2}}$, where $l_{i},m_{2} \geq 0$ for $i \in \{1,2\}$, $k_{1} >1$, $\gcd(p,k_{1}) = 1$, $(l_{2},m_{2}) \neq (0,0)$ and $k_{1}-1$ is not a non-negative power of $p$;
\textbf{Case $3$:} $n= p^{l_{1} + m_{1}} + p^{l_{1}}$, $m=p^{l_{2}}k_{2}$, where $l_{i},m_{1} \geq 0$ for $i \in \{1,2\}$, $k_{2} >1$, $\gcd(p,k_{2}) = 1$, $(l_{1},m_{1}) \neq (0,0)$ and $k_{2}-1$ is not a non-negative power of $p$;
\textbf{Case $4$:} $n = p^{l_{1}+m_{1}}+ p^{l_{1}}$, $m= p^{l_{2}+m_{2}} + p^{l_{2}}$, for some integers $l_{i} \geq 0$ and $m_{i} \geq 0$ such that $(l_{i},m_{i}) \neq (0,0)$, for $i \in \{1,2\}$.
Consider $P_{1}(Y)= 2\Bigl( \sum^{k_{1}}_{i=0} \binom{k_{1}}{i} Y^{p^{l_{1}}i} \Bigr)$ and $P_{2}(Y) = \sum^{k_{2}}_{j=0} \binom{k_{2}}{j} Y^{p^{l_{2}}j}$.
\begin{lem} \label{l7}
Suppose $k > 1$ is a positive integer, $p$ is an odd prime, $\gcd(p,k)=1$ and $k-1$ is not a non-negative power of $p$. Also, suppose that $s \geq 0$ is the largest integer such that $p^{s} | k-1$. Then we have the following.
\begin{itemize}
\item[(i)] $p \not| \binom{k}{p^s+1}$,
\item[(ii)] $p \not| \binom{k}{p^s}$,
\item[(iii)] If \eqref{1*} and Case $1$ hold, then $P_{1}(Y), P_{2}(Y) \in \mathbb{Z}_p[Y] \backslash \{0\}$.
\end{itemize}
\end{lem}
\begin{proof}
\begin{itemize}
\item[(i)] The conclusion follows from \cite[Lemma $5.3 (i)$]{tk1}.
\item[(ii)] This can be proved as in $(i)$, so we omit the proof for the sake of brevity.
\item[(iii)] Since $p$ is odd and $p-1 | n+m-2$, $n+m$ must be even, implying that $m$ and $n$ are both even or both odd. Therefore $k_{1}$ and $k_{2}$ are both even or both odd.
We may assume that $s_{1},s_{2} \geq 0$ are the largest integers such that $p^{s_{1}} | k_{1}-1$ and $p^{s_{2}}|k_{2}-1$. As $k_{1}-1$ and $k_{2}-1$ are not non-negative powers of $p$, we have $p^{s_{1}} < k_{1}-1$ and $p^{s_{2}} < k_{2}-1$. First assume that $k_{1}$ is even. In this case $p^{s_{1}} +1$ is even and $p^{s_{1}}+1 \leq k_{1}-2$. Now assume that $k_{1}$ is odd. In this case, $p^{s_{1}}$ is odd and $p^{s_{1}} \leq k_{1}-2$. By $(i)$ and $(ii)$, we get $p \not| \binom{k_{1}}{p^{s_{1}}+1}$ and $p \not| \binom{k_{1}}{p^{s_{1}}}$. By similar arguments for $k_{2}$, we get $p \not| \binom{k_{2}}{p^{s_{2}}+1}$ and $p \not| \binom{k_{2}}{p^{s_{2}}}$. So $P_{1}(Y)$ and $P_{2}(Y)$ are non-zero in $\mathbb{Z}_p[Y]$.
\end{itemize}
\end{proof}
\begin{lem} \label{l8}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ and $n,m \geq 2$ are integers such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$, for all $y \in D^{*}$. Then
\begin{equation*}
g_{1}(a_{1}) = -(1-a_{1})^ng_{1}(1)(1-a_{1})^m + a_{1}^ng_{1}(1)a_{1}^m +g_{1}(1) + g_{2}(a_{1}),
\end{equation*}
for all $a_{1} \in D$.
\end{lem}
\begin{proof}
For $a_{1} \in \{0,1\}$ the claimed identity is verified directly, using $g_{1}(1) + g_{2}(1) = 0$, which follows from the hypothesis at $y=1$; so assume $a_{1} \neq 0,1$. By Hua's identity, we have
\begin{equation*}
1-a_{1}= [ 1 + ( a_{1}^{-1} - 1)^{-1} ]^{-1}.
\end{equation*}
Applying the additive map $g_{1}$ to the above, we get
\begin{equation}
\begin{aligned}
g_{1}(a_{1}) & = g_{1}(1) - g_{1}( [ 1 + ( a_{1}^{-1} - 1)^{-1} ]^{-1})\\
& = g_{1}(1) + ( 1 + ( a_{1}^{-1} - 1)^{-1} )^{-n}g_{2}(1 + ( a_{1}^{-1} - 1)^{-1}) ( 1 + ( a_{1}^{-1} - 1)^{-1} )^{-m}\\
& = g_{1}(1) + (1-a_{1})^ng_{2}(1)(1-a_{1})^m \\
& \hspace{1.8cm} + a_{1}^n(a_{1}^{-1} -1)^n g_{2}( ( a_{1}^{-1} -1)^{-1}) (a_{1}^{-1} -1)^m a_{1}^m \\
& = g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m - a_{1}^n g_{1}( a_{1}^{-1} -1) a_{1}^m \\
& = g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m +g_{2}(a_{1}).
\end{aligned}
\end{equation}
\end{proof}
\begin{lem} \label{l9}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ with (\ref{1*}) and Case $1$ such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$, for all $y \in D^{*}$. Then $g_{1}(1) = 0$ and $g_{1} = g_{2}$.
\end{lem}
\begin{proof}
Since $p$ is odd and $p-1 | n+m -2$, $n+m$ must be even, implying that $n$ and $m$ are both even or both odd. Therefore $k_{1}$ and $k_{2}$ are both even or both odd. Also, as $k_{i}-1 \neq p^0=1$, we have $k_{i} > 2$ for $i \in \{1,2\}$. By Lemma \ref{l8}, we get
\begin{equation} \label{eq1}
(g_{1} - g_{2})(a_{1})= g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$. Replace $a_{1}$ by $-a_{1}$ in (\ref{eq1}). Since $n+m$ is even, we have
\begin{equation} \label{eq2}
-(g_{1} - g_{2})(a_{1})= g_{1}(1) - (1+a_{1})^ng_{1}(1)(1+a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$. Comparing (\ref{eq1}) and (\ref{eq2}), we have
\begin{equation} \label{eq3}
0= 2g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m - (1+a_{1})^ng_{1}(1)(1+a_{1})^m + 2a_{1}^ng_{1}(1)a_{1}^m,
\end{equation}
for all $a_{1} \in D$. As $k_{i} >2$ for $i \in \{1,2\}$, equation (\ref{eq3}) becomes
\begin{equation*}
2\Bigl( \sum^{k_{1}}_{i=0} \binom{k_{1}}{i} a_{1}^{p^{l_{1}}i} \Bigr)g_{1}(1)\Bigl( \sum^{k_{2}}_{j=0} \binom{k_{2}}{j} a_{1}^{p^{l_{2}}j}\Bigr)=0,
\end{equation*}
for all $a_{1} \in D$, where the expanded sum runs only over the pairs $(i,j)$ with $i+j$ even and $(i,j) \neq (0,0), (k_{1},k_{2})$. Take $P_{1}(Y)= 2\Bigl( \sum^{k_{1}}_{i=0} \binom{k_{1}}{i} Y^{p^{l_{1}}i} \Bigr)$ and $P_{2}(Y) = \sum^{k_{2}}_{j=0} \binom{k_{2}}{j} Y^{p^{l_{2}}j}$ in $\mathbb{Z}_p[Y]$. Then
\begin{equation*}
P_{1}(a_{1})g_{1}(1)P_{2}(a_{1})=0,
\end{equation*}
for all $a_{1} \in D$. If $g_{1}(1) \neq 0$, then by Lemma \ref{l7}, $P_{1}(Y)g_{1}(1)P_{2}(Y) \neq 0$ in $D[Y]$.
By Jacobson's theorem \cite[Theorem $13.11$]{tl1}, there exists $a_{1} \in D$ such that $P_{1}(a_{1})g_{1}(1)P_{2}(a_{1}) \neq 0$, a contradiction. Therefore, we get $g_{1}(1) = 0$. Hence $g_{1} = g_{2}$.
\end{proof}
\begin{lem} \label{lc9}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ with (\ref{1*}) and Case $2$ such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$, for all $y \in D^{*}$. Then $g_{1}(1) = 0$ and $g_{1} = g_{2}$.
\end{lem}
\begin{proof}
Since $m=p^{l_{2}+m_{2}} + p^{l_{2}}$ with $p$ odd, $m$ is even; as $p-1 | n+m -2$ forces $n+m$ to be even, $n$, and hence $k_{1}$, is also even. Also, as $k_{1}-1 \neq p^0=1$, we have $k_{1} > 2$. By Lemma \ref{l8}, we get
\begin{equation} \label{eqc1}
(g_{1}-g_{2})(a_{1})= g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$. Replace $a_{1}$ by $-a_{1}$ in (\ref{eqc1}). Since $n+m$ is even, we have
\begin{equation} \label{ceq2}
-(g_{1}-g_{2})(a_{1})= g_{1}(1) - (1+a_{1})^ng_{1}(1)(1+a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$. Comparing (\ref{eqc1}) and (\ref{ceq2}), we have
\begin{equation} \label{eqc3}
0= 2g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m - (1+a_{1})^ng_{1}(1)(1+a_{1})^m + 2a_{1}^{n}g_{1}(1)a_{1}^{m},
\end{equation}
for all $a_{1} \in D$. As $k_{1} >2$ and $m=p^{l_{2}+m_{2}} + p^{l_{2}}$, equation (\ref{eqc3}) becomes
\begin{equation*}
\begin{aligned}
2\Bigl( & \sum^{k_{1}-2}_{i=2, even} \binom{k_{1}}{i} a_{1}^{p^{l_{1}}i} \Bigr)g_{1}(1)(1+a_{1}^{p^{l_{2}+m_{2}} + p^{l_{2}}}) + 2\Bigl( \sum^{k_{1}-1}_{i=1, odd} \binom{k_{1}}{i} a_{1}^{p^{l_{1}}i} \Bigr)g_{1}(1)\\
&(a_{1}^{p^{l_{2}+m_{2}}} + a_{1}^{p^{l_{2}}}) + 2a_{1}^{p^{l_{1}}k_{1}}g_{1}(1) + 2g_{1}(1)a_{1}^{p^{l_{2} + m_{2}} + p^{l_{2}}} =0,
\end{aligned}
\end{equation*}
for all $a_{1} \in D$.
Take $Q_{1}(Y) = 2\Bigl( \sum^{k_{1}-2}_{i=2, even} \binom{k_{1}}{i} Y^{p^{l_{1}}i} \Bigr)g_{1}(1)(1+Y^{p^{l_{2}+m_{2}} + p^{l_{2}}}) + 2\Bigl( \sum^{k_{1}-1}_{i=1, odd} \binom{k_{1}}{i} Y^{p^{l_{1}}i} \Bigr)g_{1}(1)(Y^{p^{l_{2}+m_{2}}} + Y^{p^{l_{2}}}) + 2Y^{p^{l_{1}}k_{1}}g_{1}(1) + 2g_{1}(1)Y^{p^{l_{2} + m_{2}} + p^{l_{2}}}.$ If $Q_{1}(Y) = 0$ in $D[Y]$, then comparing terms we get $2Y^{p^{l_{1}}k_{1}}g_{1}(1) = 0$, which implies $g_{1}(1) = 0$. Otherwise, $Q_{1}(Y) \neq 0$ in $D[Y]$. By Lemma \ref{l7}, $P_{1}(Y)g_{1}(1)(1+Y^{p^{l_{2}+m_{2}} + p^{l_{2}}}) \neq 0$ in $D[Y]$. By Jacobson's theorem \cite[Theorem $13.11$]{tl1}, there exists $a_{1} \in D$ such that $Q_{1}(a_{1})\neq 0$, a contradiction. Therefore, we get $g_{1}(1) = 0$. Thus $g_{1} = g_{2}$.
\end{proof}
\begin{lem} \label{clc9}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ with (\ref{1*}) and Case $3$ such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$, for all $y \in D^{*}$. Then $g_{1}(1) = 0$ and $g_{1} = g_{2}$.
\end{lem}
We leave the proof of Lemma \ref{clc9}, which is similar to that of Lemma \ref{lc9}, as an exercise for the reader.
\begin{lem} \label{ll6}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ with (\ref{1*}) and Case $4$ such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$, for all $y \in D^{*}$. Then $g_{1}(1) = 0$ and $g_{1} = g_{2}$.
\end{lem}
\begin{proof}
By Lemma \ref{l8}, we have
\begin{equation} \label{leq1}
(g_{1} - g_{2})(a_{1})= g_{1}(1) - (1-a_{1})^ng_{1}(1)(1-a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$. Replacing $a_{1}$ by $-a_{1}$ in (\ref{leq1}), we get
\begin{equation} \label{leq2}
-(g_{1} - g_{2})(a_{1})= g_{1}(1) - (1+a_{1})^ng_{1}(1)(1+a_{1})^m + a_{1}^n g_{1}(1) a_{1}^m,
\end{equation}
for all $a_{1} \in D$.
Since $n= p^{l_{1}+m_{1}} + p^{l_{1}}$ and $m= p^{l_{2}+m_{2}} + p^{l_{2}}$, from (\ref{leq1}) and (\ref{leq2}) we have
\begin{equation} \label{leq3}
(g_{1} - g_{2})(a_{1}) = (1+a_{1}^{n})g_{1}(1)(a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{2} +m_{2}}}) + (a_{1}^{p^{l_{1}}} + a_{1}^{p^{l_{1} +m_{1}}})g_{1}(1)(1+a_{1}^{m}),
\end{equation}
for all $a_{1} \in D$. Assume that $g_{1}(1) \neq 0$. Then the map $a_{1} \mapsto (1+a_{1}^{n})g_{1}(1)(a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{2} +m_{2}}}) + (a_{1}^{p^{l_{1}}} + a_{1}^{p^{l_{1} +m_{1}}})g_{1}(1)(1+a_{1}^{m})$ is additive. Using the additivity of the right-hand side of \eqref{leq3} together with equation \eqref{leq1}, we define
\begin{equation*}
\begin{aligned}
Q(X,Y) = &-(1-(X+Y))^{n}g_{1}(1)(1 - (X+Y))^{m} + (X+Y)^{n}g_{1}(1)(X+Y)^{m} - g_{1}(1)\\
& + (1-X)^{n}g_{1}(1)(1-X)^{m} - X^{n}g_{1}(1)X^{m} + (1 - Y)^{n}g_{1}(1)(1- Y)^{m}\\
& - Y^{n}g_{1}(1)Y^{m} \in D\{X,Y\}.
\end{aligned}
\end{equation*}
Therefore, $D$ is a PI-ring, as $Q(X,Y) \neq 0$. By Posner's theorem \cite{pose}, $D$ is finite-dimensional over its center. Moreover, $Z(D)$ is an infinite field, since $D$ is non-commutative (a finite division ring is commutative). Let $F$ be a maximal subfield of $D$ containing $Z(D)$. Then $D \otimes_{Z(D)} F \cong M_{r}(F)$, where $r=\sqrt{\dim_{Z(D)}D} > 1$. We may regard $Q(X,Y)$ as an identity for $M_r(F)$, as $D$ and $M_r(F)$ satisfy the same polynomial identities (see \cite[Corollary, p.$64$]{jacob}). Substituting $X= I$, $Y= -I$, where $I$ is the $r \times r$ identity matrix, in $Q(X,Y)$, we get $Q(I,-I) \neq 0$, which is a contradiction. Thus $g_{1}(1) = 0$, which implies $g_{1} = g_{2}$.
\end{proof}
By Lemma \ref{4t2}, Lemma \ref{l9}, Lemma \ref{lc9}, Lemma \ref{clc9} and Lemma \ref{ll6}, we have the following lemma.
\begin{lem} \label{ll56}
Suppose $g_{1},g_{2}$ are additive maps on a non-commutative division ring $D$ having characteristic different from $2$ and $m,n \geq 2$ are positive integers such that $g_{1}(y)y^{-m} + y^ng_{2}(y^{-1})=0$ for all $y \in D^*$. Then $g_{1} = g_{2}$.
\end{lem} From now on, we always make the following assumptions to make our statements neat unless specified. Suppose $g$ is an additive map on a non-commutative division ring $D$ with (\ref{1*}) such that $g(y)y^{-m} + y^{n}g(y^{-1}) = 0$ for all $y \in D^{*}$. Our aim is to show that $g = 0$. We denote by $C_{D}(a_{2}) = \{ a_{1} \in D \mid a_{1}a_{2} = a_{2}a_{1} \}$ the centralizer of $a_{2}$ in $D$. \begin{lem} \label{l10} Suppose $P(Y;a_{1}) = (1+Y)^ng(a_{1})(1+Y)^m + (1-Y)^ng(a_{1})(1-Y)^m - 2Y^ng(a_{1})Y^m -2g(a_{1}) \in D[Y]$, for $a_{1} \in D$. Then the following hold: \begin{itemize} \item[(i)] $g(y^2) = (1+y)^{n}g(y)(1+y)^{m} - y^{n}g(y)y^{m} - g(y)$ for all $y \in D$, \item[(ii)] $P(a_{1}a_{2};a_{1}) = 0$ for all $a_{1}, a_{2} \in D$ such that $a_{1}a_{2} = a_{2}a_{1}$, \item[(iii)] If $d \in D$ is such that $P(d;a_{1}) \neq 0$ for all $a_{1} \in C_D(d)^{*}$, then $g(C_D(d)) = 0$, \item[(iv)] If $ P(Z(D);a_{1})\neq 0$, then $g =0$. \end{itemize} \end{lem} \begin{proof} Let $a_{1}, a_{2} \in D$ be such that $a_{1}, a_{2} \neq 0,1$. Then, by Hua's identity, $a_{1}- a_{1}a_{2}a_{1} = [a_{1}^{-1} + (a_{2}^{-1} - a_{1})^{-1}]^{-1}$.
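Hua's identity is purely formal, so it can be sanity-checked in any convenient non-commutative ring in which the relevant elements are invertible. The following sketch (ours, not part of the paper) verifies it for a pair of $2\times 2$ rational matrices; the particular matrices are arbitrary choices for which all four inverses exist.

```python
# Check Hua's identity  a - a*b*a == (a^{-1} + (b^{-1} - a)^{-1})^{-1}
# in the noncommutative ring of 2x2 rational matrices.
from fractions import Fraction

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert det != 0, "matrix must be invertible for the identity to apply"
    d = Fraction(1) / det
    return [[ d * A[1][1], -d * A[0][1]],
            [-d * A[1][0],  d * A[0][0]]]

# Two arbitrary invertible matrices (chosen so b^{-1}-a is also invertible).
a = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(5)]]
b = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]

lhs = sub(a, mul(mul(a, b), a))                 # a - a b a
rhs = inv(add(inv(a), inv(sub(inv(b), a))))     # (a^{-1} + (b^{-1}-a)^{-1})^{-1}
assert lhs == rhs
```

Since the identity holds whenever every inverse appearing in it exists, the check succeeds for any admissible choice of $a$ and $b$.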
Applying the additive map $g$ to the above identity, we get \begin{equation} \label{eq4} \begin{aligned} g(a_{1}-a_{1}a_{2}a_{1}) =& g( [a_{1}^{-1} + (a_{2}^{-1} - a_{1})^{-1}]^{-1}) \\ = & -\Bigl((a_{1}^{-1} + (a_{2}^{-1} - a_{1})^{-1})^{-n}g(a_{1}^{-1} + (a_{2}^{-1} - a_{1})^{-1})\Bigl) \\ &\hspace{0.5cm}\Bigl((a_{1}^{-1} + (a_{2}^{-1} - a_{1})^{-1})^{-m}\Bigl)\\ = & -(a_{1} - a_{1}a_{2}a_{1})^{n}\Bigl(-a_{1}^{-n}g(a_{1})a_{1}^{-m}\\ &- (a_{2}^{-1} - a_{1})^{-n}g( a_{2}^{-1} -a_{1} )( a_{2}^{-1} - a_{1} )^{-m} \Bigl)\\ & \hspace{0.4cm} \Bigl(( a_{1} - a_{1}a_{2}a_{1} )^{m}\Bigl)\\ = &(a_{1} - a_{1}a_{2}a_{1})^{n}a_{1}^{-n}g(a_{1})a_{1}^{-m}(a_{1} - a_{1}a_{2}a_{1})^{m} \\ &+ \Bigl((a_{1} - a_{1}a_{2}a_{1})^{n}(a_{2}^{-1} - a_{1})^{-n}g(a_{2}^{-1} - a_{1})\Bigl)\\ & \hspace{0.5cm}\Bigl((a_{2}^{-1} - a_{1})^{-m}(a_{1} - a_{1}a_{2}a_{1})^{m}\Bigl), \end{aligned} \end{equation} for all $a_{1},a_{2} \in D$ with $a_{1}a_{2} \neq 0,1$. Assume that $a_{1}a_{2} = a_{2}a_{1}$. Then equation (\ref{eq4}) becomes \begin{equation*} \begin{aligned} g(a_{1}-a_{1}^{2}a_{2}) &= (1 - a_{1}a_{2})^ng(a_{1})(1 -a_{1}a_{2})^{m} - (a_{1}a_{2})^ng(a_{1})(a_{1}a_{2})^{m}\\ & - (a_{1}a_{2})^na_{2}^{-n}g(a_{2})a_{2}^{-m}(a_{1}a_{2})^{m}. \end{aligned} \end{equation*} This implies \begin{equation} \begin{aligned} g( a_{1} - a_{1}^2a_{2}) = & (1 - a_{1}a_{2})^ng(a_{1})(1 -a_{1}a_{2})^{m} - (a_{1}a_{2})^ng(a_{1})(a_{1}a_{2})^{m}\\ & - a_{1}^ng(a_{2})a_{1}^{m}, \end{aligned} \end{equation} which implies \begin{equation} \label{eq5} \begin{aligned} g(a_{1}^2a_{2}) = & -(1-a_{1}a_{2})^ng(a_{1})(1-a_{1}a_{2})^{m} + (a_{1}a_{2})^ng(a_{1})(a_{1}a_{2})^{m}\\ & + a_{1}^ng(a_{2})a_{1}^{m} + g(a_{1}), \end{aligned} \end{equation} for all $ a_{1}, a_{2} \in D$ with $a_{1}a_{2} = a_{2}a_{1}$. Since $p-1 \mid n+m-2$, $ n+m $ must be even.
Replacing $a_{1}$ by $-a_{1}$ in equation (\ref{eq5}), we get \begin{equation} \label{eq6} \begin{aligned} g(a_{1}^2a_{2}) = & (1+a_{1}a_{2})^ng(a_{1})(1+a_{1}a_{2})^{m} - (a_{1}a_{2})^ng(a_{1})(a_{1}a_{2})^{m}\\ & + a_{1}^ng(a_{2})a_{1}^{m} - g(a_{1}), \end{aligned} \end{equation} for all $a_{1}, a_{2} \in D$ with $a_{1}a_{2} = a_{2}a_{1}$. Since $g(1) = 0$, substituting $a_{2} = 1$ in equation (\ref{eq6}), we get \begin{equation} \label{eq7} g(a_{1}^2) = (1+a_{1})^ng(a_{1})(1+a_{1})^{m} - a_{1}^ng(a_{1})a_{1}^{m} - g(a_{1}). \end{equation} This proves $(i)$. Equating equations (\ref{eq5}) and (\ref{eq6}), we obtain $(ii)$. For $(iii)$, assume $d \in D$ is such that $P(d;a_{1}) \neq 0$ and let $a_{1} \in C_D(d)^{*}$. Take $a_{2} = a_{1}^{-1}d$. Then $a_{1}a_{2} = a_{2}a_{1}$. By $(ii)$, we have $P(d;a_{1}) = P(a_{1}a_{2};a_{1}) = 0$, which is a contradiction. Thus, we get $g(a_{1})=0$, i.e., $g(C_D(d)) = 0$. Obviously, $(iv)$ follows from $(iii)$. \end{proof} \begin{lem} \label{lem9} If Case $1$ holds, then $g = 0$. \end{lem} \begin{proof} Suppose $ g \neq 0$. Let \begin{equation*} \begin{aligned} P(Y;a_{1}) = (1+Y)^{n}g(a_{1})(1+Y)^{m} + (1-Y)^{n}g(a_{1})(1-Y)^{m} & - 2Y^{n}g(a_{1})Y^{m} \\ & -2g(a_{1}) \in D[Y]. \end{aligned} \end{equation*} By Lemma \ref{l7}$(ii)$, we have $P(Y;a_{1}) = P_{1}(Y)g(a_{1})P_{2}(Y) \neq 0$ in $D[Y]$, and by Lemma \ref{l10}$(iv)$, $P(Z(D);a_{1})=0$. \textbf{Case $A$:} Suppose first that $P(d;a_{1}) \neq 0$ for every $ d \in D \backslash Z(D) $. Then, by Lemma \ref{l10}$(iii)$, we have $g(C_D(d))=0$ for all $d \in D \backslash Z(D)$. In particular, $g(Z(D))=0$ and $g(d)=0$ for all $d \in D \backslash Z(D)$, which implies $g=0$. \textbf{Case $B$:} On the other hand, consider the case $P(d;a_{1})=0$ for some $d \in D \backslash Z(D)$. Let $w \in D \backslash Z(D)$. Then $C_D(w)$ is an infinite division ring according to \cite[Theorem $13.10$]{tl1}.
Also, we have $P(z;a_{1}) \neq 0$ for some $z \in C_D(w)$ using \cite[Theorem $16.7$]{tl1}. So, by application of Lemma \ref{l10} $(iii)$, we have $g(C_D(z))=0$. In particular, we get $g(w)=0$ and $g(Z(D))=0$. Hence $g=0$. \end{proof} Using similar arguments as in Lemma \ref{lem9}, we have the following remarks. \begin{rem} \label{crem} If Case $2$ holds, then $g=0$. \end{rem} \begin{rem} \label{c1rem} If Case $3$ holds, then $g=0$. \end{rem} Next, we study Case $4$ and make the following key observation. \begin{lem} \label{ll10} Assume that Case $4$ holds. If $a_{1} \in Z(D)$, then $g(a_{1}) = 0$ and $(a_{1}^{m} + a_{1}^{n} + (a_{1}^{p^{l_{1}}} + a_{1}^{p^{l_{1}+m_{1}}})(a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{2}+m_{2}}}))g(a_{2}) = 0$ for all $a_{2} \in D$. \end{lem} \begin{proof} From equation (\ref{eq6}) we have \begin{equation} \label{leq22 } \begin{aligned} &g(a_{1}^2a_{2}) = a_{1}^na_{1}^{m}g(a_{2}) + a_{1}^{n}a_{2}^{n} g(a_{1}) + g(a_{1}) a_{1}^{m}a_{2}^{m} \\ &+ (a_{1}^{p^{l_{1}}}a_{2}^{p^{l_{1}}} + a_{1}^{p^{l_{1} + m_{1}}}a_{2}^{p^{l_{1} + m_{1}}})g(a_{1})(a_{1}^{p^{l_{2}}}a_{2}^{p^{l_{2}}} + a_{1}^{p^{l_{2} + m_{2}}}a_{2}^{p^{l_{2} + m_{2}}}), \end{aligned} \end{equation} for all $a_{2} \in D$ with $a_{1} \in Z(D)$. Suppose $g(a_{1}) \neq 0$. Then $a_{2} \mapsto (a_{1}^{n}a_{2}^{n} + a_{1}^{m}a_{2}^{m} + (a_{1}^{p^{l_{1}}}a_{2}^{p^{l_{1}}} + a_{1}^{p^{l_{1} + m_{1}}}a_{2}^{p^{l_{1} + m_{1}}})(a_{1}^{p^{l_{2}}}a_{2}^{p^{l_{2}}} + a_{1}^{p^{l_{2} + m_{2}}}a_{2}^{p^{l_{2} + m_{2}}}))g(a_{1}) $ is an additive map, say $g'$. Let \begin{equation*} \begin{aligned} Q(X,Y) = g^{'}(X+Y) - g^{'}(X) - g^{'}(Y) \in D\{X,Y\}. \end{aligned} \end{equation*} Then $Q(X,Y)$ is a non-zero GPI for $D$. By Theorem \ref{m1} and the non-commutativity of $D$, we get that $D$ is finite-dimensional over its infinite center.
As $D$ is a non-commutative finite-dimensional algebra over $Z(D)$, $Z(D)$ is not algebraic over $\mathbb{Z}/p\mathbb{Z}$ by Jacobson's theorem \cite[Theorem $13.11$]{tl1}. Thus, there exists $\alpha \in Z(D)$ such that $a_{1}^{n}\alpha^{n} + a_{1}^{m}\alpha^{m} + (a_{1}^{p^{l_{1}}}\alpha^{p^{l_{1}}} + a_{1}^{p^{l_{1} + m_{1}}}\alpha^{p^{l_{1} + m_{1}}})(a_{1}^{p^{l_{2}}}\alpha^{p^{l_{2}}} + a_{1}^{p^{l_{2} + m_{2}}}\alpha^{p^{l_{2} + m_{2}}}) \neq 0$. This implies $Q(\alpha, -\alpha) \neq 0$, which is a contradiction. Hence $g(a_{1}) = 0$. It follows from equation (\ref{leq22 }) that \begin{equation} \label{leq23} g(a_{1}^{2}a_{2}) = a_{1}^{n}a_{1}^{m}g(a_{2}), \end{equation} for all $a_{2} \in D$. Replacing $a_{1}$ by $a_{1}+1$ in equation (\ref{leq23}), we get \begin{equation*} \begin{aligned} 2g(a_{1}a_{2}) = & g(a_{2})a_{1}^{p^{l_{2}}} + g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} + g(a_{2})a_{1}^{m}\\ & \; \; + a_{1}^{p^{l_{1}}}g(a_{2}) + a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} + a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{m} \\ & \; \; + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2}) + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} \\ & \; \; + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{m} + a_{1}^{n}g(a_{2}) + a_{1}^{n}g(a_{2})a_{1}^{p^{l_{2}}} + a_{1}^{n}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} , \end{aligned} \end{equation*} for all $a_{2} \in D$.
Replacing $a_{1}$ by $-a_{1}$ in the above equation, we get \begin{equation*} \begin{aligned} -2g(a_{1}a_{2}) = & -g(a_{2})a_{1}^{p^{l_{2}}} - g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} + g(a_{2})a_{1}^{m}\\ & \; \; - a_{1}^{p^{l_{1}}}g(a_{2}) + a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} - a_{1}^{p^{l_{1}}}g(a_{2})a_{1}^{m} \\ & \; \; - a_{1}^{p^{l_{1} + m_{1}}}g(a_{2}) + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} \\ & \; \; - a_{1}^{p^{l_{1} + m_{1}}}g(a_{2})a_{1}^{m} + a_{1}^{n}g(a_{2}) - a_{1}^{n}g(a_{2})a_{1}^{p^{l_{2}}} - a_{1}^{n}g(a_{2})a_{1}^{p^{l_{2} + m_{2}}} , \end{aligned} \end{equation*} for all $a_{2} \in D$. Adding the above two expressions, we get \begin{equation} \label{mleq24} \begin{aligned} (a_{1}^{m} + a_{1}^{n} + (a_{1}^{p^{l_{1}}} + a_{1}^{p^{l_{1}+m_{1}}})(a_{1}^{p^{l_{2}}} + a_{1}^{p^{l_{2}+m_{2}}}))g(a_{2}) = 0, \end{aligned} \end{equation} for all $a_{2} \in D$. \end{proof} \begin{rem} \label{rem1} If $D$ is a non-commutative division ring with $\operatorname{char} D \neq 2,3$ and all other assumptions of Lemma \ref{ll10} hold, then substituting $a_{1} =1$ in equation (\ref{mleq24}) gives $6g(a_{2})=0$ for all $a_{2} \in D$, and hence $g = 0$. \end{rem} Next, we prove the following result in Case $4$, when $(m_{1},m_{2}) \neq (0,0)$ and the characteristic of $D$ is $3$. \begin{lem} \label{c3} Suppose $D$ is a non-commutative division ring having characteristic $3$. If $n= p^{l_{1}+m_{1}}+p^{l_{1}}, m= p^{l_{2}+m_{2}}+p^{l_{2}}$ for some integers $l_{1},l_{2},m_{1},m_{2} \geq 0$ and $(m_{1},m_{2}) \neq (0,0)$, then $g = 0$. \end{lem} \begin{proof} Let $\alpha \in Z(D)$. Then by Lemma \ref{ll10}, \begin{equation*} \alpha^{n}g(a_{2}) = -( \alpha^{m} + (\alpha^{p^{l_{1}}} + \alpha^{p^{l_{1} + m_{1}}}) (\alpha^{p^{l_{2}}} + \alpha^{p^{l_{2} + m_{2}}})) g(a_{2}), \end{equation*} for all $a_{2} \in D$.
Thus \begin{equation*} \alpha^{2n}g(a_{2}) = -(\alpha^{2m} + (\alpha^{2p^{l_{1}}} + \alpha^{2p^{l_{1} + m_{1}}}) (\alpha^{2p^{l_{2}}} + \alpha^{2p^{l_{2} + m_{2}}}) )g(a_{2}), \end{equation*} for all $a_{2} \in D$. On the other hand, \begin{equation*} \begin{aligned} \alpha ^{2n} g(a_{2}) & = \alpha^{n}( \alpha^{n} g(a_{2})) \\ & = - \alpha^{n} ( \alpha^{m} + (\alpha^{p^{l_{1}}} + \alpha^{p^{l_{1} + m_{1}}}) (\alpha^{p^{l_{2}}} + \alpha^{p^{l_{2} + m_{2}}})) g(a_{2}) \\ & = ( \alpha^{m} + (\alpha^{p^{l_{1}}} + \alpha^{p^{l_{1} + m_{1}}}) (\alpha^{p^{l_{2}}} + \alpha^{p^{l_{2} + m_{2}}}))^{2} g(a_{2}), \end{aligned} \end{equation*} for all $a_{2} \in D$. Comparing the above two equations, we get \begin{equation*} \begin{aligned} ( \alpha^{m} ( \alpha^{2p^{l_{1}}} + \alpha^{2p^{l_{1} + m_{1}}}) + & \alpha^{m} (\alpha^{m} + \alpha^{n}) + 2\alpha^{m+n} - \alpha^{2n} \\ & + \alpha^{n} ( \alpha^{2p^{l_{2}}} + \alpha^{2p^{l_{2} + m_{2}}}) ) g(a_{2}) = 0, \end{aligned} \end{equation*} for all $ a_{2} \in D$. Assuming $g(a_{2}) \neq 0$, we get a contradiction by substituting $\alpha = 1$ in the above equation. Hence $g = 0$. \end{proof} Finally, we deal with the case $m_{1} = m_{2} = 0$. \begin{lem} \label{l11} If $n = 2p^{l_{1}}, m = 2p^{l_{2}}$ for some integers $l_{1},l_{2} > 0$ and \eqref{1*} holds together with $g(y)y^{-m} + y^{n}g(y^{-1}) = 0$ for all $y \in D^{*}$, then $g = 0$. \end{lem} \begin{proof} By Lemma \ref{l10}, \begin{equation} \label{leq24} g(a_{1}^{2}) = (1+a_{1}^{p^{l_{1}}})^{2}g(a_{1})(1+a_{1}^{p^{l_{2}}})^{2} - a_{1}^{2p^{l_{1}}}g(a_{1})a_{1}^{2p^{l_{2}}} -g(a_{1}), \end{equation} for all $a_{1} \in D$. Replacing $a_{1}$ by $-a_{1}$ in \eqref{leq24}, we get \begin{equation} \label{nleq25} g(a_{1}^{2}) = (1-a_{1}^{p^{l_{1}}})^{2}g(a_{1})(1-a_{1}^{p^{l_{2}}})^{2} + a_{1}^{2p^{l_{1}}}g(a_{1})a_{1}^{2p^{l_{2}}} +g(a_{1}), \end{equation} for all $ a_{1} \in D$.
Combining equations \eqref{leq24} and \eqref{nleq25}, we get \begin{equation} \label{nleq26} g(a_{1}^{2}) = a_{1}^{2p^{l_{1}}}g(a_{1}) + g(a_{1})a_{1}^{2p^{l_{2}}} + 4a_{1}^{p^{l_{1}}}g(a_{1})a_{1}^{p^{l_{2}}}, \end{equation} for all $a_{1} \in D$. Since $p^{l_{1}}, p^{l_{2}} >1$, by application of Theorem \ref{tcl5} and Remark \ref{rem lemm2} to \eqref{nleq26}, we get that $D$ is finite-dimensional over $Z(D)$. If $Z(D)$ were finite, then the division ring $D$ would also be finite. According to Wedderburn's theorem (see \cite{wed}), $D$ would then be commutative, which is a contradiction. Therefore, $Z(D)$ is infinite. Also, by Lemma \ref{ll10}, we have \begin{equation} \label{leq25} 2g(\alpha a_{2}) = \beta g(a_{2}), \end{equation} where $\beta \in Z(D)$ depends on $\alpha$, for all $\alpha \in Z(D)$ and $a_{2} \in D$. Note that $2^{p^{l_{i}}} \equiv 2 \pmod{p}$ for $ i \in \{1,2\}$. By application of Lemma \ref{tl6} to equation (\ref{leq24}), we get \begin{equation} \label{leq26} \begin{aligned} &\Bigl(6a_{1}^{2p^{l_{1}}} -(a_{1}+a_{2})^{2p^{l_{1}}} -(a_{1}-a_{2})^{2p^{l_{1}}} \Bigl)g(a_{1}) + g(a_{1})\Bigl(6a_{1}^{2p^{l_{2}}}\\ & -(a_{1}+a_{2})^{2p^{l_{2}}} -(a_{1}-a_{2})^{2p^{l_{2}}}\Bigl) + 24a_{1}^{p^{l_{1}}}g(a_{1})a_{1}^{p^{l_{2}}}\\ &-4(a_{1}+ a_{2})^{p^{l_{1}}}g(a_{1})(a_{1} + a_{2})^{p^{l_{2}}} -4(a_{1} - a_{2})^{p^{l_{1}}}g(a_{1})(a_{1} -a_{2})^{p^{l_{2}}} \\= & \Bigl(-6a_{2}^{2p^{l_{1}}} +(a_{1}+a_{2})^{2p^{l_{1}}} - (a_{1}-a_{2})^{2p^{l_{1}}}\Bigl)g(a_{2}) \\ &+ g(a_{2}) \Bigl(-6a_{2}^{2p^{l_{2}}} +(a_{1}+a_{2})^{2p^{l_{2}}} - (a_{1}-a_{2})^{2p^{l_{2}}}\Bigl)\\ & +4(a_{1} + a_{2})^{p^{l_{1}}}g(a_{2})(a_{1} + a_{2})^{p^{l_{2}}} -4(a_{1}- a_{2})^{p^{l_{1}}}g(a_{2})(a_{1} -a_{2})^{p^{l_{2}}}\\ & + 8a_{2}^{p^{l_{1}}}g(a_{2})a_{2}^{p^{l_{2}}}, \end{aligned} \end{equation} for all $a_{1},a_{2} \in D$.
Write \begin{center} $(Y_{1}+Y_{2})^{p^{l_{1}}} = \sum_{j=1}^{p^{l_{1}}}P_{j}(Y_{1},Y_{2}),$ \hspace{0.5cm} $(Y_{1}+Y_{2})^{2p^{l_{1}}} = \sum_{j=1}^{2p^{l_{1}}}U_{j}(Y_{1},Y_{2})$ \end{center} and \begin{center} $(Y_{1}+Y_{2})^{2p^{l_{2}}} = \sum_{j=1}^{2p^{l_{2}}}V_{j}(Y_{1},Y_{2}),$ \hspace{0.5cm} $(Y_{1}+Y_{2})^{p^{l_{2}}} = \sum_{j=1}^{p^{l_{2}}}Q_{j}(Y_{1},Y_{2})$ \end{center} where $P_{j}(Y_{1},Y_{2}),Q_{j}(Y_{1},Y_{2}), U_{j}(Y_{1},Y_{2}), V_{j}(Y_{1},Y_{2}) \in \mathbb{Z}_{p}\{Y_{1},Y_{2}\}$ are homogeneous in $Y_{2}$ of degree $j$. Suppose $l= \max\{l_{1},l_{2} \}$. In case $l_{1} \neq l_{2}$, without loss of generality we can assume $l_{1}<l_{2}$. Then equation (\ref{leq26}) becomes \begin{equation} \label{leq27} \begin{aligned} & \sum_{j=1}^{p^{l}} \Bigl(U_{2j}(a_{1},a_{2})g(a_{1}) + g(a_{1})V_{2j}(a_{1},a_{2})\Bigl) \\ &+ \sum_{j=1,k=1}^{\frac{p^{l}-1}{2},\frac{p^{l}-1}{2}} 4P_{2j}(a_{1},a_{2})g(a_{1})Q_{2k}(a_{1},a_{2})\\ & + \sum_{j=1,k=1}^{\frac{p^{l}+1}{2},\frac{p^{l}+1}{2}} 4P_{2j-1}(a_{1},a_{2})g(a_{1})Q_{2k-1}(a_{1},a_{2}) \\ &= \sum_{j=1}^{p^{l}} \Bigl(U_{2j-1}(a_{1},a_{2})g(a_{2}) + g(a_{2})V_{2j-1}(a_{1},a_{2})\Bigl)\\ & + \sum_{j=1,k=1}^{\frac{p^{l}-1}{2},\frac{p^{l}+1}{2}} 4P_{2j}(a_{1},a_{2})g(a_{2})Q_{2k-1}(a_{1},a_{2})\\ & + \sum_{j=1,k=1}^{\frac{p^{l}+1}{2},\frac{p^{l}-1}{2}} 4P_{2j-1}(a_{1},a_{2})g(a_{2})Q_{2k }(a_{1},a_{2}), \end{aligned} \end{equation} for all $a_{1},a_{2} \in D$, where $P_{j}(a_{1},a_{2})= 0 $ for $j > p^{l_{1}}$ and $U_{j}(a_{1},a_{2}) = 0$ for $j> 2p^{l_{1}}$.
Replacing $a_{2}$ by $\alpha a_{2}$ and multiplying both sides by $2$ in equation (\ref{leq27}) and applying equation (\ref{leq25}), we get \begin{equation} \label{leq28} \begin{aligned} & \sum_{j=1}^{p^{l}} 2 \alpha^{2j}\Bigl(U_{2j}(a_{1},a_{2})g(a_{1}) + g(a_{1})V_{2j}(a_{1},a_{2})\Bigl) \\ &+ \sum_{j=1,k=1}^{\frac{p^{l}-1}{2},\frac{p^{l}-1}{2}} 8 \alpha^{2j + 2k} P_{2j}(a_{1},a_{2})g(a_{1})Q_{2k}(a_{1},a_{2})\\ & + \sum_{j=1,k=1}^{\frac{p^{l}+1}{2},\frac{p^{l}+1}{2}} 8 \alpha^{2j-1 + 2k -1} P_{2j-1}(a_{1},a_{2})g(a_{1})Q_{2k-1}(a_{1},a_{2}) \\ &= \sum_{j=1}^{p^{l}} \alpha^{2j-1} \beta\Bigl(U_{2j-1}(a_{1},a_{2})g(a_{2}) + g(a_{2})V_{2j-1}(a_{1},a_{2})\Bigl)\\ & + \sum_{j=1,k=1}^{\frac{p^{l}-1}{2},\frac{p^{l}+1}{2}} 4\alpha^{2j +2k -1} \beta P_{2j}(a_{1},a_{2})g(a_{2})Q_{2k-1}(a_{1},a_{2}) \\ & + \sum_{j=1,k=1}^{\frac{p^{l}+1}{2},\frac{p^{l}-1}{2}} 4\alpha^{2j-1 + 2k} \beta P_{2j-1}(a_{1},a_{2})g(a_{2})Q_{2k }(a_{1},a_{2}), \end{aligned} \end{equation} for all $a_{1},a_{2} \in D$ and $\alpha \in Z(D)$. Since $Z(D)$ is infinite, a standard Vandermonde argument applied to equation (\ref{leq28}) yields \begin{equation} \label{eq29} \begin{aligned} & 2\Bigl(U_{2j}(a_{1},a_{2})g(a_{1}) + g(a_{1})V_{2j}(a_{1},a_{2})\Bigl)\\ & + \sum_{k=1}^{\frac{p^{l}-1}{2}} 8 \alpha^{2k}P_{2j}(a_{1},a_{2})g(a_{1})Q_{2k}(a_{1},a_{2})\\ & - \sum_{k=1}^{\frac{p^{l}-1}{2}} 4 \alpha^{2k-1} \beta P_{2j}(a_{1},a_{2})g(a_{2})Q_{2k-1}(a_{1},a_{2})\\ & = \beta \Bigl(U_{2j-1}(a_{1},a_{2})g(a_{2}) + g(a_{2})V_{2j-1}(a_{1},a_{2})\Bigl)\\ & + \sum_{k=1}^{\frac{p^{l}-1}{2}} 4 \alpha^{2k} \beta P_{2j-1}(a_{1},a_{2})g(a_{2})Q_{2k}(a_{1},a_{2})\\ & - \sum_{k=1}^{\frac{p^{l}-1}{2}} 8 \alpha^{2k-1}P_{2j-1}(a_{1},a_{2})g(a_{1})Q_{2k-1}(a_{1},a_{2}), \end{aligned} \end{equation} for all $a_{1},a_{2} \in D$ and $1 \leq j \leq p^{l} $.
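The "standard Vandermonde argument" invoked above rests on the fact that if $\sum_{j} c_{j}\alpha^{j} = 0$ for more distinct central values $\alpha$ than there are coefficients, then every $c_{j} = 0$, because the Vandermonde matrix at distinct nodes is invertible. A small sketch of this underlying fact (ours; the nodes are chosen arbitrarily):

```python
# The Vandermonde matrix V = (x_i^j) at distinct nodes x_0,...,x_{n-1} is
# invertible: det V = prod_{i<j} (x_j - x_i) != 0.  Hence a linear relation
# sum_j c_j * alpha^j = 0 holding at n distinct values of alpha forces c_j = 0.
from fractions import Fraction
from itertools import permutations

def det(M):
    """Leibniz-formula determinant (fine for small matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(sign)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += prod
    return total

nodes = [Fraction(k) for k in (1, 2, 3, 5)]     # any distinct values work
V = [[x ** j for j in range(len(nodes))] for x in nodes]

expected = Fraction(1)
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        expected *= nodes[j] - nodes[i]

assert det(V) == expected and expected != 0
```

Since $Z(D)$ is infinite, one can always pick enough distinct nodes, which is exactly why the coefficient of each power of $\alpha$ in (\ref{leq28}) must vanish separately.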
Fixing $j=p^{l}$ in equation (\ref{eq29}), we get \begin{equation} \label{eq30} 2(U_{2p^{l}-1}(a_{1},a_{2})g(a_{2}) + g(a_{2}) V_{2p^{l}-1}(a_{1},a_{2})) =0, \end{equation} for all $a_{1},a_{2} \in D$. In case $l_{1} \neq l_{2}$, from equation \eqref{eq30}, we get \begin{equation} \label{nleq31} g(a_{2}) V_{2p^{l}-1}(a_{1},a_{2})=0, \end{equation} for all $a_{1},a_{2} \in D$. Let $a_{2} \in D$ be such that $g(a_{2}) \neq 0$. Then this implies $ V_{2p^{l}-1}(a_{1},a_{2}) = 0$ and \begin{equation*} [a_{1}^{2p^{l}},a_{2}]=[a_{1},V_{2p^{l}-1}(a_{1},a_{2})]=0, \end{equation*} for all $a_{1} \in D$. Thus given $a_{1} \in D$, we get \begin{equation*} D = C_{D}(a_{1}^{2p^{l}}) \cup \{ a_{2} \in D | g(a_{2}) =0\}. \end{equation*} We get $D = C_{D}(a_{1}^{2p^{l}})$ as $g \neq 0$. So, $a_{1}^{2p^{l}} \in Z(D)$ for all $a_{1} \in D$. By Kaplansky's theorem \cite[Theorem]{kap1}, $D$ is commutative, which is a contradiction. On the other hand, suppose $l_{1} = l_{2}$. Then $P_{j}(a_{1}, a_{2}) = Q_{j}(a_{1}, a_{2})$ for every $j$, and equation \eqref{eq29} becomes \begin{equation*} \begin{aligned} \beta U_{2p^{l}-1}(a_{1},a_{2})g(a_{1}) + g(a_{1}) \beta & U_{2p^{l}-1}(a_{1},a_{2}) + \sum_{k=1}^{\frac{p^{l}-1}{2}} 4 \alpha^{2k} \beta P_{2p^{l}-1}(a_{1},a_{2})g(a_{2})P_{2k}(a_{1},a_{2})\\ & - \sum_{k=1}^{\frac{p^{l}-1}{2}} 8 \alpha^{2k-1}P_{2p^{l}-1}(a_{1},a_{2})g(a_{1})P_{2k-1}(a_{1},a_{2}) = 0, \end{aligned} \end{equation*} for all $a_{1} \in D$. Taking $a_{1} = 1$ in the above equation, we get $\sum_{k=1}^{\frac{p^{l}-1}{2}} \alpha^{2k} \beta P_{2p^{l}-1}(1,a_{2}) g(a_{2})P_{2k}(1,a_{2})=0$; by a standard Vandermonde argument we have $P_{2p^{l}-1}(1,a_{2}) = 0$, i.e., $2p^{l}a_{2}^{2p^{l}-1}=0$, which is a contradiction. This proves that $g = 0$. \end{proof} \begin{proof}[ \textbf{Proof of Theorem \ref{main2}}] By Lemma \ref{4t2}, we may assume that \begin{equation*} \operatorname{char} D = p >2, \quad n,m>2, \quad p-1 \mid n+m-2.
\end{equation*} By Lemma \ref{ll56}, we may assume that $ g_{2} = -g_{1}$. That is, \begin{equation*} g_{1}(y) = -y^{n}g_{1}(y^{-1})y^{m}, \end{equation*} for all $y \in D^{*}$. We get our conclusion $g_{1}= 0$ in Case $1$, by Lemma \ref{lem9}. Remark \ref{crem} and Remark \ref{c1rem} conclude that $g=0$ in both Case $2$ and Case $3$. In Case $4$, by the application of Remark \ref{rem1}, Lemma \ref{c3} and Lemma \ref{l11}, we get $g_{1}(y)= 0$, for all $y \in D$. Hence $g_{1} =g_{2} = 0$. \end{proof} \section{Data availability} There is no data available in this article. \section{Acknowledgment} The authors would like to express their sincere thanks to the reviewers and referees for the constructive comments and suggestions which helped to improve the paper's quality. L. Singh is supported by DST-INSPIRE (IF$230146$), Ref. No. DST/INSPIRE/03/2023/002370. All the authors contributed equally. \begin{thebibliography}{20} \bibitem{narg1} N. Arga\c{c}, M.P. Ero\v{g}lu, T.-K. Lee, J.-H. Lin, Identities with inverses on matrix rings, Linear Multilinear Algebra \textbf{68} (2020) 635–651. \bibitem{h1} J. Acz\'el, L. Kossuth, Some unsolved problems in the theory of functional equations, Arch. Math. \textbf{15} (1964) 435–444. \bibitem{bres1} M. Bre\v{s}ar, On generalized biderivations and related maps. J. Algebra \textbf{172} (1995) 764–786. \bibitem{lc1} L. Catalano, On a certain functional identity involving inverses. Comm. Algebra \textbf{46} (2018) 3430–3435. \bibitem{lc2} L. Catalano, T. Merch\'an, On rational functional identities, Comm. Algebra \textbf{52} (2024) 717–722. \bibitem{C2} C. L. Chuang, GPI's having coefficients in Utumi quotient rings, Proc. Amer. Math. Soc. \textbf{103} (1988) 723–728. \bibitem{coh1} P. M. Cohn, On the free product of associative rings. III, J. Algebra \textbf{8} (1968) 376–383. \bibitem{cohn2} P. M. Cohn, On the free product of associative rings II, Math. Z. \textbf{73} (1960) 433–456. \bibitem{na1} N. A. Dar, W.
Jing, On a functional identity involving inverses on matrix rings, Quaest. Math. \textbf{46} (2023) 927–937. \bibitem{jacob} N. Jacobson, PI-Algebras: An Introduction, Lecture Notes in Mathematics, Springer-Verlag, Berlin-New York \textbf{441} (1975). \bibitem{pl1} P. L. Kannappan, S. Kurepa, Some relations between additive functions–I, Aequ. Math. \textbf{4} (1970) 163–175. \bibitem{pl2} P. L. Kannappan, S. Kurepa, Some relations between additive functions–II, Aequ. Math. \textbf{6} (1971) 46–58. \bibitem{kap1} I. Kaplansky, A theorem on division rings, Canad. J. Math. \textbf{3} (1951) 290–292. \bibitem{k1} S. Kurepa, The Cauchy functional equation and scalar product in vector spaces, Glas. Mat.-Fiz. Astr. Ser. II \textbf{19} (1964) 23–36. \bibitem{tl1} T. Y. Lam, A First Course in Noncommutative Rings, second edition, Grad. Texts in Math., Springer-Verlag, New York, \textbf{131} (2001) xx+385 pp. \bibitem{tk1} T. K. Lee, J. H. Lin, Certain functional identities on division rings, J. Algebra \textbf{647} (2024) 492–514. \bibitem{M1} W. S. Martindale 3rd, Prime rings satisfying a generalized polynomial identity, J. Algebra \textbf{12} (1969) 576–584. \bibitem{ng1} C. T. Ng, The equation $F(x) + M(x)G(1/x) =0$ and homogeneous biadditive forms, Linear Algebra Appl. \textbf{93} (1987) 255–279. \bibitem{n1} A. Nishiyama, S. Horinouchi, On a system of functional equations, Aequ. Math. \textbf{1} (1968) 1–5. \bibitem{pose} E. C. Posner, Prime rings satisfying a polynomial identity, Proc. Amer. Math. Soc. \textbf{11} (1960) 180–183. \bibitem{rr1} R. Raphael, Rings which are generated by their units, J. Algebra \textbf{28} (1974) 199–205. \bibitem{wed} J. H. Maclagan-Wedderburn, A theorem on finite algebras, Trans. Amer. Math. Soc. \textbf{6} (1905) 349–352. \end{thebibliography} \end{document}
2412.19258v1
http://arxiv.org/abs/2412.19258v1
Complexity and Structural Results for the Hull and Convexity Numbers in Cycle Convexity for Graph Products
\documentclass[12pt]{article} \usepackage{amsmath, amsthm, amscd, amsfonts, amssymb, graphicx} \usepackage{amsmath,amssymb,latexsym} \usepackage{mathtools} \usepackage[T1]{fontenc} \usepackage{graphicx,tikz,color,url} \usetikzlibrary{decorations.pathreplacing} \usepackage{comment} \usepackage{enumitem} \usepackage{multicol} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{fact}[theorem]{Fact} \newtheorem{observation}[theorem]{Observation} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newcommand{\s}{\color{blue}} \usepackage[normalem]{ulem} \newcommand{\rudini}[2]{\textcolor{blue}{#1}\textcolor{red}{\sout{#2}}} \DeclareMathOperator {\gp} {gp} \DeclareMathOperator {\mono} {mp} \DeclareMathOperator {\smono} {smp} \DeclareMathOperator {\diam} {diam} \DeclareMathOperator {\rad} {rad} \DeclareMathOperator {\ecc} {ecc} \let\deg\relax \DeclareMathOperator {\deg} {deg} \DeclareMathOperator {\Pl} {P\ell} \DeclareMathOperator{\cp}{\,\square\,} \textwidth 15.0cm \textheight 20.5cm \oddsidemargin 0.4cm \evensidemargin 0.4cm \voffset -1cm \title{Complexity and Structural Results for the Hull and Convexity Numbers in Cycle Convexity for Graph Products} \author{Bijo S. Anand$^{a}$\footnote{bijos[email protected]} \and Ullas Chandran S. V. $^{b}$\footnote{[email protected]} \and Julliano R. Nascimento$^{c}$\footnote{[email protected]} \and Revathy S.
Nair$^{d}$\footnote{[email protected]} \\\\ $^{a}$\small Department of Mathematics, Sree Narayana College, Punalur, Kerala\\ $^{b}$\small Department of Mathematics, Mahatma Gandhi College,\\\small Thiruvananthapuram, Kerala, India\\ $^{c}$ \small Instituto de Informática, Universidade Federal de Goiás, Goiânia, GO, Brazil\\ $^{d}$ \small Department of Mathematics, Mar Ivanios College, University of Kerala, \\\small Thiruvananthapuram, India } \date{\today} \textwidth15cm \textheight20.0cm \oddsidemargin 0.4cm \evensidemargin 0.4cm \voffset-1cm \begin{document} \maketitle \begin{abstract} Let $G$ be a graph and $S \subseteq V(G)$. In the cycle convexity, we say that $S$ is \textit{cycle convex} if for any $u\in V(G)\setminus S$, the induced subgraph of $S\cup\{u\}$ contains no cycle that includes $u$. The \textit{cycle convex hull} of $S$ is the smallest convex set containing $S$. The \textit{cycle hull number} of $G$, denoted by $hn_{cc}(G)$, is the cardinality of the smallest set $S$ such that the convex hull of $S$ is $V(G)$. The \textit{convexity number} of $G$, denoted by $C_{cc}(G)$, is the maximum cardinality of a proper convex set of $V(G)$. This paper studies cycle convexity in graph products. We show that the cycle hull number is always two for strong and lexicographic products. For the Cartesian product, we establish tight bounds and provide a closed formula when the factors are trees, generalizing an existing result for grid graphs. In addition, given a graph $G$ and an integer $k$, we prove that $hn_{cc}(G) \leq k$ is NP-complete even if $G$ is a bipartite Cartesian product graph, addressing an open question in the literature. Furthermore, we present exact formulas for the cycle convexity number in those three graph products. This leads to the NP-completeness of deciding, given a graph $G$ and an integer $k$, whether $C_{cc}(G) \geq k$ when $G$ is a Cartesian, strong, or lexicographic product graph.
\end{abstract} \noindent{\small {\bf Keywords:} convexity; convexity number; hull number; cycle convexity number; cycle hull number; Cartesian product; strong product; lexicographic product.} \noindent{\small {\bf AMS Subj.Class:} 05C69, 05C76, 05C85} \section{Introduction} \label{sec:intro} A \emph{finite convexity space} is defined as a pair $(V, \mathcal{C})$, where $V$ is a non-empty finite set and $\mathcal{C}$ is a collection of subsets of $V$ satisfying the following properties: $\emptyset \in \mathcal{C}$, $V \in \mathcal{C}$, and $\mathcal{C}$ is closed under intersections. The elements of $\mathcal{C}$ are referred to as \emph{convex sets}; see \cite{van-1993}. Various convexity structures associated with the vertex set of a graph are widely studied. The most natural convexities in graphs are path convexities, defined in terms of a family of paths $\mathcal{P}$, where a set $S$ of vertices in $G$ is $\mathcal{P}$-\emph{convex} if $S$ includes all vertices of every path in $\mathcal{P}$ between any two vertices of $S$. An extensive overview of different types of path convexities is provided in \cite{pelayo-2015}. The well-known \emph{geodesic convexity} corresponds to the case where $\mathcal{P}$ is the family of all shortest paths; see \cite{everett1985hull, buckley-1990, farber-1986}. Other significant examples include \emph{monophonic convexity} \cite{caceres-2005, source17, source20} and $P_3$-\emph{convexity}~\cite{source11, centeno2011irreversible, coelho2019p3}. These convexities are defined respectively over induced paths and paths with three vertices. There are also convexity definitions that do not rely on path systems; some significant examples include \emph{Steiner convexity}~\cite{source9} and $\Delta$-\emph{convexity}~\cite{bijo2,bijo3,bijo1}.
In the case of Steiner convexity, a set $S \subseteq V(G)$ is \emph{Steiner convex} if, for any subset $S' \subseteq S$, all vertices of any Steiner tree with terminal set $S'$ are contained within $S$. In the case of $\Delta$-convexity, a set $S\subseteq V(G)$ is $\Delta$-\emph{convex} if every vertex $u\in V(G)\setminus S$ fails to form a triangle with any two vertices in $S$. Graph convexities have been studied in many contexts, including the determination of convexity invariants such as the hull number, the interval number, and the convexity number. A major reference work by Pelayo~\cite{pelayo-2015} provides an extensive survey of geodesic convexity. A newly introduced convexity, known as \textit{cycle convexity}, was recently studied in~\cite{interval08}. In this convexity, for a set $S$ of vertices in a graph $G$, the \textit{cycle interval} of $S$, denoted by $\langle S \rangle$, is the set formed by the vertices of $S$ and any $w \in V(G)$ that forms a cycle with the vertices of $S$. If $\langle S \rangle = S$, then $S$ is \textit{cycle convex} in $G$. The \textit{cycle convexity number} of $G$, $C_{cc}(G)$, is the maximum cardinality of a proper cycle convex set of $V(G)$. The \textit{cycle convex hull} of a set $S$, denoted by $\langle S \rangle_C$, is the smallest cycle convex set containing $S$. We say that a vertex $w \in V(G)$ is \textit{generated} by $S$ if $w\in\langle S \rangle_C$. The \textit{hull number} of $G$ in the cycle convexity, or more briefly, the \textit{cycle hull number} of $G$, $hn_{cc}(G)$, is the cardinality of the smallest set $S$ such that $\langle S \rangle_C = V(G)$. The study of cycle convexity in graphs has gained attention due to its applications in both graph theory and related fields such as Knot Theory~\cite{araujo2020cycle}.
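To make these definitions concrete, the following small computational sketch (ours, not from the cited works) computes the cycle interval and the cycle convex hull. It uses the observation that a vertex $w \notin S$ is generated in one step exactly when $w$ has two distinct neighbours in $S$ lying in the same connected component of $G[S]$, since the path joining them inside $S$ closes a cycle through $w$; iterating the interval operator yields the hull.

```python
# Cycle interval <S> and cycle convex hull <S>_C of a vertex set S, for a
# graph given as an adjacency dict.  A vertex w outside S is generated when
# two of its neighbours in S lie in the same connected component of G[S]:
# the path between them inside S, together with w, forms a cycle through w.

def components(adj, S):
    """Map each vertex of S to a label of its component in G[S] (DFS)."""
    comp, seen = {}, set()
    for s in S:
        if s in seen:
            continue
        stack, label = [s], s
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp[v] = label
            stack.extend(u for u in adj[v] if u in S)
    return comp

def cycle_interval(adj, S):
    comp = components(adj, S)
    out = set(S)
    for w in adj:
        if w in S:
            continue
        labels = [comp[u] for u in adj[w] if u in S]
        if len(labels) > len(set(labels)):   # two neighbours share a component
            out.add(w)
    return out

def cycle_hull(adj, S):
    cur = set(S)
    while True:
        nxt = cycle_interval(adj, cur)
        if nxt == cur:
            return cur
        cur = nxt

# Example on the cycle C_4 with vertices 0-1-2-3-0:
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert cycle_interval(C4, {0, 1}) == {0, 1}          # no cycle is closed
assert cycle_hull(C4, {0, 1, 2}) == {0, 1, 2, 3}     # vertex 3 closes the 4-cycle
```

On $C_4$, the set $\{0,1\}$ is already cycle convex, while $\{0,1,2\}$ generates the whole vertex set, illustrating both notions on one small example.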
Concerning the cycle hull number, Araujo et al.~\cite{araujo2024hull} presented bounds for this parameter in $4$-regular planar graphs and proved that, given a planar graph $G$ and an integer $k$, determining whether $hn_{cc}(G) \leq k$ is NP-complete. They also showed that the parameter is computable in polynomial time for classes like chordal, $P_4$-sparse, and grid graphs. Regarding the cycle convexity number, Lima, Marcilon, and Medeiros~\cite{lima2024complexity} showed that, given a graph $G$ and an integer $k$, determining whether $C_{cc}(G) \geq k$ is both NP-complete and W[1]-hard when parameterized by the size of the solution. However, for certain graph classes such as extended $P_4$-laden graphs, they present an algorithm that solves the problem in polynomial time. In addition, other results have been obtained for related parameters in cycle convexity, for instance, the interval number~\cite{interval08} and the percolation time~\cite{lima2024complexity}. In this work, we explore some properties of cycle convex sets in graph products, focusing on the hull number and the convexity number. For the strong and lexicographic products of nontrivial connected graphs, we show that the cycle hull number is always two. For the Cartesian product, we establish tight bounds on the cycle hull number and present a closed formula for this parameter for Cartesian products of trees, which generalizes a known result for grid graphs~\cite{araujo2024hull}. On the complexity side, we close a gap in the literature regarding the cycle hull number for bipartite graphs, left open by Araujo et al.~\cite{araujo2024hull}. Specifically, given a graph $G$ and an integer $k$, we show that determining whether $hn_{cc}(G) \leq k$ is NP-complete even if $G$ is a bipartite Cartesian product graph. Regarding the cycle convexity number, we show exact formulas for the Cartesian, strong, and lexicographic products of nontrivial connected graphs.
As a corollary, we find the cycle convexity number of the three products when the factors are complete, cycle, or path graphs. In addition, the provided formulas directly imply the NP-completeness of the decision problem related to the convexity number for the three considered product graphs. The remainder of the paper is organized as follows. Section~\ref{sec:pre} presents notation and terminology, Section~\ref{sec:hull} focuses on the cycle hull number, and Section~\ref{sec:cx} addresses the cycle convexity number. \section{Preliminaries}\label{sec:pre} We now introduce the terminology used throughout this paper. All graphs $G = (V, E)$ considered here are finite, undirected, and simple. The \emph{open neighborhood} $N(u)$ of a vertex $u \in V(G)$ is the set $\{ v \in V(G) : uv\in E(G) \}$, while the \emph{closed neighborhood} $N[u]$ is defined as $N(u) \cup \{ u \}$. We write $u \sim v$ if vertices $u$ and $v$ are adjacent. A vertex $v$ in a connected graph $G$ is a \emph{cut-vertex} if the graph $G - v$ is disconnected. The subgraph of $G$ induced by a subset $S \subseteq V(G)$ is denoted by $G[S]$. A vertex is \emph{simplicial} if its neighborhood induces a clique. The \emph{clique number} $\omega(G)$ of $G$ is the size of a largest clique in $G$, while the \emph{independence number} $\alpha(G)$ is the size of a largest independent set. The path of order $\ell$ is denoted by $P_{\ell}$, the cycle of length $\ell$ by $C_{\ell}$, and the complete graph of order $\ell$ by $K_{\ell}$. Let $G$ and $H$ be two graphs. In this paper, we discuss the cycle hull number and the cycle convexity number of the \emph{Cartesian product} $G \Box H$, the \emph{lexicographic product} $G \circ H$, and the \emph{strong product} $G \boxtimes H$. The graphs $G$ and $H$ are called \textit{factors}. All these products share the vertex set $V(G) \times V(H)$.
For $(g_1, h_1), (g_2, h_2) \in V(G) \times V(H)$: In the Cartesian product $G \Box H$, the vertices $(g_1, h_1)$ and $(g_2, h_2)$ are adjacent if and only if either i) $g_1 \sim g_2$ in $G$ and $h_1 = h_2$, or ii) $g_1 = g_2$ and $h_1 \sim h_2$ in $H$. In the lexicographic product $G \circ H$, these vertices are adjacent if either i) $g_1 \sim g_2$, or ii) $g_1 = g_2$ and $h_1 \sim h_2$. Finally, in the strong product $G \boxtimes H$, the vertices $(g_1, h_1)$ and $(g_2, h_2)$ are adjacent if one of the following holds: i) $g_1 \sim g_2$ and $h_1 = h_2$, ii) $g_1 = g_2$ and $h_1 \sim h_2$, or iii) $g_1 \sim g_2$ and $h_1 \sim h_2$. If $\ast \in \{\Box, \circ, \boxtimes\}$, then the \emph{projection mappings} $\pi_G: V(G \ast H) \rightarrow V(G)$ and $\pi_H: V(G \ast H) \rightarrow V(H)$ are given by $\pi_G(u,v) = u$ and $\pi_H(u,v) = v$, respectively. We adopt the following conventions. For $u \in V(G)$ and $v \in V(H)$, we define $^uH$ to be the subgraph of $G \ast H$ induced by $\{ u \} \times V(H)$, which we call an \emph{$H$-layer}, while the \emph{$G$-layer} $G^v$ is the subgraph induced by $V(G) \times \{ v \}$. If $S \subseteq V(G \Box H)$, then the set $\{g \in V(G) :\ (g,h) \in S \text{ for some } h \in V(H)\}$ is the \emph{projection} $\pi_G(S)$ of $S$ on $G$. The projection $\pi_H(S)$ of $S$ on $H$ is defined analogously. Finally, for $S \subseteq V(G \ast H)$, we call $S$ a \emph{sub-product} of $G \ast H$ if $S = \pi_G(S) \times \pi_H(S)$. In Section~\ref{sec:intro}, we introduced the cycle convexity definitions. Here, we revisit them, now specifically for the $P_3$-convexity, which will be useful in Section~\ref{sec:hull}. For a set $S$ of vertices in a graph $G$, the \textit{$P_3$-interval} of $S$, denoted by $\langle S \rangle^{{P_3}}$, is the set formed by the vertices of $S$ and any $w \in V(G)$ that forms a $P_3: u, w, v$ with $u,v \in S$. If $\langle S \rangle^{{P_3}} = S$, then $S$ is \textit{$P_3$-convex} in $G$.
The \textit{$P_3$-convex hull} of a set $S$, denoted by $\langle S \rangle^{{P_3}}_C$, is the smallest $P_3$-convex set containing $S$. The \textit{$P_3$-hull number} of $G$, denoted as $hn_{P_3}(G)$, is the cardinality of a smallest set $S$ such that $\langle S \rangle^{{P_3}}_C = V(G)$. To avoid ambiguity, we will use the super/subscript $cc$ instead of $P_3$ when referring to cycle convexity. Additionally, we may include the super/subscripts $P_3(G)$ or $cc(G)$ to specify the graph $G$ under consideration. \section{Hull Number}\label{sec:hull} In this section, we investigate the cycle hull number of the Cartesian, strong, and lexicographic products of nontrivial connected graphs. We start by examining the strong and lexicographic products in Theorems~\ref{hullstrong} and~\ref{hulllexico}, respectively, in which we prove that the cycle hull number is always two. \begin{theorem}\label{hullstrong} Let $G$ and $H$ be two nontrivial connected graphs. Then $$hn_{cc}(G\boxtimes H)=2.$$ \end{theorem} \begin{proof} Let $h_1h_2\in E(H)$ and $g\in V(G)$. We claim that $\{(g,h_1),(g,h_2)\}$ is a hull set of $G\boxtimes H$. By the definition of the strong product, for any $g'\in N_G(g)$, the vertices $(g,h_1)$ and $(g,h_2)$ form triangles with both $(g',h_1)$ and $(g',h_2)$. Thus, $(g',h_1),(g',h_2)\in \langle\{(g,h_1),(g,h_2)\}\rangle_C$. This proves that $N_G(g)\times \{h_1,h_2\}\subseteq \langle\{(g,h_1),(g,h_2)\}\rangle_C$. By similar arguments, $N_G(N_G(g))\times \{h_1,h_2\}$ is contained in $\langle N_G(g)\times \{h_1,h_2\}\rangle_C$. Iterating this argument shows that the vertices of the two $G$-layers, $G^{h_1}$ and $G^{h_2}$, are contained in $\langle\{(g,h_1),(g,h_2)\}\rangle_C$. Now $(g,h_1),(g',h_1)\in V(G^{h_1})\subseteq \langle\{(g,h_1),(g,h_2)\}\rangle_C$, where $g'\in N_G(g)$. By similar arguments, we get $V(^gH)\cup V(^{g'}H)\subseteq \langle \{(g,h_1),(g',h_1)\}\rangle_C$.
This shows that $V(G\boxtimes H)\subseteq\langle \{(g,h_1),(g,h_2)\}\rangle_C$ since both $G$ and $H$ are connected. \end{proof} \begin{theorem}\label{hulllexico} Let $G$ and $H$ be two nontrivial connected graphs. Then $$hn_{cc}(G\circ H)=2.$$ \end{theorem} \begin{proof} Let $(g,h_1)(g,h_2)$ be any edge in $G\circ H$. Every vertex $(g',h)$ with $g'\in N_G(g)$ is adjacent to both $(g,h_1)$ and $(g,h_2)$, forming a triangle, so $\langle \{(g,h_1),(g,h_2)\}\rangle_C$ contains $N_G(g)\times V(H)$. Iterating this argument proves that $V(G\circ H)\subseteq\langle \{(g,h_1),(g,h_2)\}\rangle_C$ and so $hn_{cc}(G\circ H)=2.$ \end{proof} Next, we turn our attention to the cycle hull number of Cartesian product graphs. Unlike the strong and lexicographic products, this is a more involved problem that requires several intermediate steps to establish general bounds. \begin{lemma}\label{union of subproduct} Let $G$ and $H$ be two nontrivial connected graphs. Then any convex set $S$ in $G\Box H$ is of the form $\displaystyle S=\bigcup_{i=1}^{r}(S_i\times T_i)$, where each $S_i\subseteq V(G)$ and $T_i\subseteq V(H)$. \end{lemma} \begin{proof} First suppose that $S$ is connected. In the following we prove that $S$ is a subproduct of $G\Box H$. Specifically, for any \( (g,h), (g',h') \in S \), we need to show that \( \{g,g'\} \times \{h,h'\} \subseteq S \). For that, consider a path \( P: (g,h) = (x_1, y_1), (x_2, y_2), \ldots,\\ (x_k, y_k) = (g', h') \) between \( (g,h) \) and \( (g',h') \) in the induced subgraph of \( S \). We will use induction on \( k \). If \( k \leq 2 \), then \( g = g' \) or \( h = h' \) and the claim is trivial. For the case \( k = 3 \), without loss of generality the path \( P \) is given by \( (g,h), (g',h), (g',h') \). Then \( (g,h') \) has the two neighbours \( (g,h) \) and \( (g',h') \) in the same connected component of the subgraph induced by \( S \), and since \( S \) is convex, it follows that \( (g, h') \in S \). Therefore, the result holds for \( k = 3 \). Assume that the result holds for all paths with fewer than \( k \) vertices. That is, if \( (x,y) \in S \) and there exists a path with fewer than \( k \) vertices between \( (g,h) \) and \( (x,y) \) in the induced subgraph of \( S \) in \( G \Box H \), then \( \{g,x\} \times \{h,y\} \subseteq S \).
Applying this assumption to the path $P$ implies that \( \{x_1, x_2, \ldots, x_{k-1}\} \times \{y_1, y_2, \ldots, y_{k-1}\} \subseteq S \). Now, since $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$ are adjacent in $G\Box H$, without loss of generality we may assume that $y_k = y_{k-1}$. This in turn implies that $(x_k, y_{k-1}) \in S$. From the path $P$ in $G\Box H$, it is clear that \( \pi_H(P) : y_1, y_2, \ldots, y_{k-1} \) forms an $h-h'$ walk in \( H \). Now, choose any \( y_i \) where \( 1 \leq i \leq k-2 \). Let \( Q \) be a \( y_{k-1} - y_i \) subwalk of \( \pi_H(P) \), say \( Q : y_{k-1}, v_1, v_2, \ldots, v_r = y_i \), where \( \{v_1, v_2, \ldots, v_r\} \subseteq \{y_1, y_2, \ldots, y_{k-1}\} \). By the induction hypothesis, \( \{x_{k-1}\} \times \{v_1, v_2, \ldots, v_r\} \subseteq S \). This sequentially shows that \( (x_k, v_1), (x_k, v_2), \ldots, (x_k, v_r) \in S \). Therefore, \( \{x_k\} \times \{y_1, y_2, \ldots, y_{k-1}\} \subseteq S \). This ensures that \( \{g,g'\} \times \{h,h'\} \subseteq S \). Now, consider the case that the induced subgraph of $S$ contains more than one component. Let $S=C_1\cup C_2\cup \ldots \cup C_r$, where the $C_i$ are the pairwise disjoint components of the induced subgraph of $S$. By the definition of cycle convexity, each component $C_i$ ($i=1,2,\ldots, r$) is convex, and so, by the first part of this proof, each $C_i$ ($i=1,2,\ldots, r$) is a subproduct of $G\Box H$; that is, for $i=1,2,\ldots, r$, $C_i=S_i\times T_i$ for some $S_i\subseteq V(G)$ and $T_i\subseteq V(H)$. Therefore $\displaystyle S=\bigcup_{i=1}^{r}(S_i\times T_i)$. \end{proof} \begin{theorem}\label{theo:StimesT} Let $G$ and $H$ be two nontrivial connected graphs and let $S$ and $T$ be any two convex sets in $G$ and $H$ respectively. Then $S\times T$ is convex in $G\Box H$. \end{theorem} \begin{proof} Let $S$ and $T$ be any two convex sets in $G$ and $H$ respectively. Assume to the contrary that $S\times T$ is not convex in $G\Box H$.
Then we can find a vertex $(g,h)\in V(G\Box H)\setminus (S\times T)$ such that $(g,h)$ has at least two distinct neighbours, say $(g_1,h_1)$ and $(g_2,h_2)$, in the same connected component of the induced subgraph of $S\times T$. We consider the following four cases.\\ {\bf Case 1}: $g=g_1=g_2$, $hh_1,hh_2\in E(H)$. Since $g=g_1=g_2$, the three vertices $(g,h), (g_1,h_1),(g_2,h_2)$ lie on a single $H$-layer $^gH$ and $h$ is adjacent to both $h_1$ and $h_2$. Moreover, $h_1$ and $h_2$ lie in the same connected component of the subgraph of $H$ induced by $T$, since the projection of a connected subgraph of $G\Box H$ is connected. Since $h_1,h_2\in T$ and $T$ is convex, $h\in T$. Therefore $(g,h)\in S\times T$, a contradiction.\\ {\bf Case 2}: $h=h_1=h_2$ and $gg_1,gg_2\in E(G)$. Here we also get a contradiction by a similar argument.\\ {\bf Case 3}: $g=g_1$, $h=h_2$, $hh_1\in E(H)$ and $gg_2\in E(G)$. Since $g_1,g_2\in S$ and $h_1,h_2\in T$, $(g,h)=(g_1,h_2)\in (\{g_1,g_2\}\times \{h_1,h_2\})\subseteq (S\times T)$. Therefore Case 3 is not possible. \\{\bf Case 4}: $g=g_2$, $h=h_1$, $hh_2\in E(H)$ and $gg_1\in E(G)$. This case is also not possible, by arguments similar to those of Case 3. \end{proof} \begin{lemma}\label{projections} Let $G$ and $H$ be two connected nontrivial graphs. If $S$ is a hull set of $G\Box H$, then $\pi_G(S)$ and $\pi_H(S)$ are hull sets of $G$ and $H$ respectively. \end{lemma} \begin{proof} Let $S$ be a hull set of $G\Box H$. Then $S\subseteq \pi_G(S)\times \pi_H(S)\subseteq \langle \pi_G(S)\rangle_C \times \langle \pi_H(S)\rangle_C$. By Theorem~\ref{theo:StimesT}, the set $\langle \pi_G(S)\rangle_C \times \langle \pi_H(S)\rangle_C$ is convex, so $V(G\Box H)=\langle S\rangle_C \subseteq \langle \pi_G(S)\rangle_C \times \langle \pi_H(S)\rangle_C$ and hence $\langle \pi_G(S)\rangle_C = V(G)$ and $\langle\pi_H(S)\rangle_C = V(H)$. \end{proof} \begin{lemma}\label{line_column} Let $G$ and $H$ be two connected nontrivial graphs and $S \subseteq V(G \Box H)$. If $V(G^h) \cup V(^gH) \subseteq \langle S \rangle_C$, for some $g \in V(G)$ and $h \in V(H)$, then $S$ is a hull set of $G \Box H$. \end{lemma} \begin{proof} We first prove that, for each vertex $g' \in N_G(g)$, $V(^{g'}H) \subseteq \langle S\rangle_C$.
For each vertex $h' \in N_H(h)$, we have $(g',h')\in \langle \{(g,h),(g',h),(g,h')\}\rangle$. Therefore $(g',h') \in \langle S\rangle_C$. Now for any $h'' \in N_H(h')$, $(g',h'') \in \langle\{(g,h''), (g,h'),(g',h')\}\rangle$ and we get $(g',h'')\in \langle S\rangle_C$. By continuing this process, we obtain $V(^{g'}H)\subseteq \langle S\rangle_C$. Analogously, we can prove that for any $g'' \in N_G(g')$, $V(^{g''}H) \subseteq \langle S\rangle_C$. By following these steps repeatedly, we can conclude that $V(^{g'}H)\subseteq \langle S\rangle_C$, for all $g' \in V(G)$. Therefore $\langle S\rangle_C = V(G\Box H)$. \end{proof} \begin{theorem}\label{hull1} Let $G$ and $H$ be two nontrivial connected graphs. Then $$\max \{hn_{cc}(G), hn_{cc}(H),3\}\leq hn_{cc}(G\Box H)\leq hn_{cc}(G)+hn_{cc}(H)-1.$$ \end{theorem} \begin{proof} For the upper bound, let $\{g_1,g_2,\ldots,g_r\}$ and $\{h_1,h_2,\ldots, h_s\}$ be two minimum hull sets of $G$ and $H$ respectively. Then $hn_{cc}(G)=r$ and $hn_{cc}(H)=s$. Consider the set $S=\{(g_1,h_1), (g_1,h_2),\ldots, (g_1,h_s), (g_2,h_s),(g_3,h_s), \ldots ,(g_r,h_s)\}$. Since $\langle \{g_1,g_2,\ldots,g_r\}\rangle_C=V(G)$, $\langle\{(g_1,h_s), (g_2,h_s), \ldots ,(g_r,h_s)\}\rangle_C =V(G^{h_s})$. Also, since $\langle\{h_1,h_2,\ldots, h_s\}\rangle_C=V(H)$, $\langle \{(g_1,h_1), (g_1, h_2),\ldots, (g_1,h_s)\} \rangle_C=V(^{g_1}H)$. Hence, by Lemma~\ref{line_column} we conclude that $\langle S\rangle_C =V(G\Box H)$. On the other hand, if $G$ and $H$ are two nontrivial connected graphs, then any 2-element subset of $V(G\Box H)$ is either an independent set of $G\Box H$ or its convex hull is contained in a single layer. This shows that any hull set in $G\Box H$ needs at least three vertices. Hence the lower bound immediately follows from Lemma \ref{projections}. \end{proof} Theorem \ref{hull1} immediately shows that $hn_{cc}(G\Box H)=3$ when both factors have hull number equal to two.
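This consequence can also be checked computationally on a small instance. The following self-contained Python sketch (all helper names are ours) builds the Cartesian product adjacency directly from its definition and brute-forces the cycle hull number; for $K_3 \Box K_3$ it returns $3$, matching the bounds above.

```python
from itertools import combinations

def cartesian_product(g, h):
    """Adjacency of G [] H: move in exactly one coordinate."""
    prod = {}
    for a in g:
        for b in h:
            prod[(a, b)] = {(a2, b) for a2 in g[a]} | {(a, b2) for b2 in h[b]}
    return prod

def cycle_hull(graph, s):
    """<S>_C: repeatedly add every w with at least two neighbours in a
    single connected component of the subgraph induced by the current set."""
    s = set(s)
    while True:
        grown = set(s)
        pool, comps = set(s), []
        while pool:  # connected components of the induced subgraph
            stack, comp = [pool.pop()], set()
            while stack:
                v = stack.pop()
                comp.add(v)
                for w in graph[v] & pool:
                    pool.discard(w)
                    stack.append(w)
            comps.append(comp)
        for comp in comps:
            for w in graph:
                if w not in grown and len(comp & graph[w]) >= 2:
                    grown.add(w)
        if grown == s:
            return s
        s = grown

def hull_number(graph):
    """hn_cc by exhaustive search over vertex subsets of increasing size."""
    for k in range(1, len(graph) + 1):
        for s in combinations(graph, k):
            if cycle_hull(graph, s) == set(graph):
                return k

# K_3 as an adjacency dictionary; both factors have cycle hull number two.
k3 = {i: {j for j in range(3) if j != i} for i in range(3)}
```

Here `hull_number(cartesian_product(k3, k3))` evaluates to $3$: no pair of vertices generates more than one triangle layer of the product, while a suitable triple does.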
This applies, for instance, to complete graphs, where $hn_{cc}(K_m \Box K_n) = 3$. When at least one of the factors has hull number equal to two, some bounds are provided by the next theorem. \begin{theorem}\label{theo:bounds_with_hnccH_two} Let $G$ and $H$ be two nontrivial connected graphs with $hn_{cc}(H)=2$. Then $hn_{cc}(G)\leq hn_{cc}(G\Box H)\leq hn_{cc}(G)+1$. Moreover, $ hn_{cc}(G\Box H)= hn_{cc}(G)$ if and only if there is a minimum hull set $S$ in $G$ such that $S$ can be partitioned into two sets $S_1$ and $S_2$ with $\langle S_1 \rangle_C \cap \langle S_2 \rangle_C \neq\emptyset$. \end{theorem} \begin{proof} Suppose $S= S_1 \cup S_2$ with $\langle S_1 \rangle_C \cap \langle S_2 \rangle_C \neq \emptyset$. Let $\{h,h'\}$ be a hull set of $H$ and let $g \in \langle S_1 \rangle_C \cap \langle S_2 \rangle_C$. Let $A_1,A_2, \dots, A_r$ be the components of $\langle S_1 \rangle_C$ and $B_1,B_2, \dots, B_s$ be the components of $\langle S_2 \rangle_C$. Assume without loss of generality that $g \in A_1 \cap B_1$. Let $T= ((S \setminus V(B_1)) \times \{h\}) \cup ((S \cap V(B_1)) \times \{h'\})$. In the following, we prove that $T$ is a hull set of $G \Box H$. \smallskip \noindent $\textbf{Claim 1: } V(G) \times \{h\} \subseteq \langle T \rangle_C$. \smallskip \noindent \textit{Proof of Claim~1.} Let $g' \in V(B_1) \setminus \{g\}$ and $P:g=g_1,g_2,\ldots,g_k=g'$ be a $g-g'$ path in $B_1$. Then $(g_i,h') \in \langle T \rangle_C$ for all $i$ with $1 \leq i \leq k$. Since $(g_1,h),(g_1,h'),(g_2,h') \in \langle T \rangle_C$, we get $(g_2,h) \in \langle T \rangle_C$. Similarly, given that $(g_2,h),(g_2,h'),(g_3,h') \in \langle T \rangle_C$, we obtain $(g_3,h) \in \langle T \rangle_C$. By continuing this way, we get $(g_k,h) \in \langle T \rangle_C$. This shows that $V(B_1) \times \{h\} \subseteq \langle T \rangle_C$. Since $(S \setminus V(B_1)) \times \{h\} \subseteq T$ and $S$ is a hull set of $G$, it follows that $V(G) \times \{h\} \subseteq \langle T \rangle_C$.
\hfill $\blacksquare$ \medskip Since $g \in A_1 \cap B_1$, it follows that $(g,h),(g,h') \in \langle T \rangle_C$. Given that $\{h,h'\}$ is a hull set of $H$, we have $V(^gH) \subseteq \langle \{(g,h),(g,h')\} \rangle_C$. Furthermore, by Claim~1, $V(G^h) \subseteq \langle T \rangle_C$. Hence Lemma~\ref{line_column} implies that $T$ is a hull set of $G \Box H$. \medskip Conversely, assume that $hn_{cc}(G \Box H)=hn_{cc}(G)$. Let $T$ be a hull set of $G \Box H$ of size $hn_{cc}(G)$. Since ${\pi}_G(T)$ is a hull set of $G$, it follows that $|\pi_G(T)|=|T|$. Write $T=(S_1 \times \{h_1\}) \cup (S_2 \times \{h_2\}) \cup \cdots \cup (S_k\times \{h_k\})$ and let $\{h,h'\}$ be a hull set of $H$. Also, consider $S=\pi_G(T)=S_1\cup S_2 \cup \cdots \cup S_k$, which is a hull set of $G$ of size $hn_{cc}(G)$. \smallskip \noindent $\textbf{Claim 2: }$ $S$ can be partitioned into $U \cup V$ such that $\langle U \rangle_C \cap \langle V \rangle_C \neq \emptyset.$ \smallskip \noindent \textit{Proof of Claim~2.} Suppose by contradiction that $\langle S_i \rangle_C \cap \langle S_j \rangle_C = \emptyset$ for every distinct $i, j \in \{1, \dots, k\}.$ Consequently, $\langle \bigcup_{j \neq i}(S_j \times \{h_j\}) \rangle_C \cap (V(G) \times \{h_i\})=\emptyset$. This shows that $\langle S_i \times \{h_i\} \rangle_C=V(G) \times \{h_i\}$ and so $S_i$ is a hull set of $G$, a contradiction. Hence $\langle S_i \rangle_C \cap \langle S_j \rangle_C \neq \emptyset$ for some pair $i, j \in \{1, \dots, k\}$, $i \neq j$. Without loss of generality, fix $i=1$ and $j=2.$ Then $S=U \cup V$, where $U=S_1$ and $V=S_2 \cup S_3 \cup \cdots \cup S_k$ with $\langle U \rangle_C \cap \langle V \rangle_C \neq \emptyset.$ \hfill $\blacksquare$ \smallskip Claims~1 and~2 imply the bounds $hn_{cc}(G)\leq hn_{cc}(G\Box H)\leq hn_{cc}(G)+1$. \end{proof} We now focus on the complexity of determining the hull number in cycle convexity for Cartesian product graphs.
To this end, we first formally recall the associated decision problem as stated in \cite{araujo2024hull}. \begin{problem}{\textsc{Hull Number in Cycle Convexity}}\\ \textbf{Instance:} A graph $G$ and a positive integer $k$.\\ \textbf{Question:} Is $hn_{cc}(G) \leq k$? \end{problem} As far as we know, the complexity of \textsc{Hull Number in Cycle Convexity} for bipartite graphs was open~\cite{araujo2024hull}. Theorem~\ref{theo:bipartiteNP-complete} fills this gap in the literature and is also used in Theorem~\ref{theo:cartesianNPComplete} to prove that the problem remains NP-complete even for bipartite Cartesian product graphs. \begin{theorem}\label{theo:bipartiteNP-complete} \textsc{Hull Number in Cycle Convexity} remains NP-complete on bipartite graphs. \end{theorem} \begin{proof} The problem is known to belong to NP~\cite{araujo2024hull}. For the hardness part, we perform a reduction from \textsc{Hull Number in $P_3$ Convexity} for bipartite graphs, which is NP-complete~\cite{araujo2013hull}. We first describe some subgraphs we use. Let $H(w)$ be the graph shown in Figure~\ref{fig:Hw_Fuv}(a). The \textit{non-edge} gadget $F^{uv}$ arises from the disjoint union of four vertices $u,u',v,v'$ together with five copies of $H(w)$, denoted $H(w_i^{uv})$ for $1 \leq i \leq 5$. We add to $E(F^{uv})$ the edges forming: \begin{itemize} \item a $C_4: u, u', y_0(w_2^{uv}), y_0(w_1^{uv})$; \item a $C_4: v, v', y_0(w_4^{uv}), y_0(w_5^{uv})$; and \item a $P_3: u', y_0(w_3^{uv}),v'$. \end{itemize} A sketch of the graph $F^{uv}$ is depicted in Figure~\ref{fig:Hw_Fuv}(b).
\begin{figure}[htb] \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (61.3,129.28) -- (112.64,129.16) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (61.44,179.28) -- (112.77,179.16) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (61.3,129.28) -- (61.44,179.28) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (112.77,179.16) -- (112.64,129.16) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (177.36,130.01) -- (228.69,129.89) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (177.49,180.01) -- (228.82,179.88) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (177.36,130.01) -- (177.49,180.01) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (228.82,179.88) -- (228.69,129.89) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (114.59,244.34) -- (147.39,210.62) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (80.2,211.02) -- (112.77,179.16) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (209.64,212.31) -- (177.49,180.01) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (178.43,242.95) -- (147.39,210.62) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (114.59,244.34) -- (80.2,210.13) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (178.43,242.95) -- (208.7,212.37) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (147.39,210.62) -- (112.77,178.26) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (177.36,130.01) -- (146.13,98.39) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (147.39,210.62) -- 
(177.49,179.12) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (112.64,129.16) -- (144.25,96.49) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (473.34,159.53) -- (540.33,159.32) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (413.58,158.08) -- (445.1,194.74) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (476.76,159.91) -- (445.1,194.74) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (476.94,86.81) -- (540,86.65) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (347.11,86.72) -- (413.11,86.64) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (472.91,219.74) -- (445.1,194.74) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (420.07,218.72) -- (445.1,194.74) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (567.07,219.19) -- (540.62,192.84) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (574.61,159.15) -- (601.06,185.5) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (602.43,218.03) -- (567.07,219.19) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (574.61,159.15) -- (540.09,158.89) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (601.77,218.03) -- (601.06,185.5) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (540.62,192.84) -- (540.09,158.89) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (539.85,52.74) -- (566.58,26.68) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (599.77,61.16) -- (573.03,87.22) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (540.49,88.12) -- (539.85,52.74) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (599.77,61.16) -- (600.53,26.65) ; \draw 
[fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (540.49,87.45) -- (573.03,87.22) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (566.58,26.68) -- (600.53,26.65) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (286.27,185.23) -- (312.64,158.79) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (346.31,192.8) -- (319.95,219.23) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (287.42,220.59) -- (286.27,185.23) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (346.31,192.8) -- (346.59,158.28) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (287.41,219.92) -- (319.95,219.23) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (312.64,158.79) -- (346.59,158.28) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (346.59,158.28) -- (413.58,158.08) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (346.54,85.5) -- (346.36,158.6) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (540,86.65) -- (540.33,159.32) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (413.11,86.64) -- (413.58,158.08) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (476.94,86.81) -- (476.76,159.91) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (320.38,26.31) -- (346.72,52.77) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (312.59,86.32) -- (286.25,59.86) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (285.5,26.77) -- (320.38,26.31) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (312.59,86.32) -- (347.11,86.72) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (285.5,26.77) -- (286.25,59.86) ; \draw [fill={rgb, 255:red, 255; 
green, 255; blue, 255 } ,fill opacity=1 ] (346.72,52.77) -- (347.11,86.72) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (351.36,158.83) .. controls (351.49,156.08) and (349.35,153.74) .. (346.6,153.61) .. controls (343.84,153.48) and (341.5,155.61) .. (341.37,158.37) .. controls (341.24,161.13) and (343.37,163.47) .. (346.13,163.6) .. controls (348.89,163.72) and (351.23,161.59) .. (351.36,158.83) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (448.46,191.49) .. controls (446.59,189.47) and (443.42,189.34) .. (441.4,191.22) .. controls (439.37,193.09) and (439.24,196.25) .. (441.12,198.28) .. controls (442.99,200.31) and (446.16,200.43) .. (448.18,198.56) .. controls (450.21,196.68) and (450.34,193.52) .. (448.46,191.49) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (540.47,154.33) .. controls (537.71,154.25) and (535.41,156.42) .. (535.33,159.18) .. controls (535.25,161.94) and (537.42,164.24) .. (540.18,164.32) .. controls (542.94,164.4) and (545.24,162.23) .. (545.32,159.47) .. controls (545.4,156.71) and (543.23,154.41) .. (540.47,154.33) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (346.38,90.49) .. controls (349.14,90.59) and (351.45,88.42) .. (351.54,85.66) .. controls (351.63,82.9) and (349.46,80.59) .. (346.7,80.5) .. controls (343.94,80.41) and (341.63,82.57) .. (341.54,85.33) .. controls (341.45,88.09) and (343.62,90.4) .. (346.38,90.49) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (535,86.45) .. controls (534.89,89.21) and (537.04,91.54) .. (539.8,91.65) .. 
controls (542.56,91.76) and (544.88,89.62) .. (545,86.86) .. controls (545.11,84.1) and (542.96,81.77) .. (540.2,81.66) .. controls (537.45,81.55) and (535.12,83.69) .. (535,86.45) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (418.11,86.64) .. controls (418.11,83.88) and (415.87,81.64) .. (413.11,81.64) .. controls (410.34,81.64) and (408.11,83.88) .. (408.11,86.64) .. controls (408.11,89.4) and (410.34,91.64) .. (413.11,91.64) .. controls (415.87,91.64) and (418.11,89.4) .. (418.11,86.64) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (481.94,86.81) .. controls (481.94,84.05) and (479.7,81.81) .. (476.94,81.81) .. controls (474.18,81.81) and (471.94,84.05) .. (471.94,86.81) .. controls (471.94,89.57) and (474.18,91.81) .. (476.94,91.81) .. controls (479.7,91.81) and (481.94,89.57) .. (481.94,86.81) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (418.58,158.08) .. controls (418.58,155.32) and (416.34,153.08) .. (413.58,153.08) .. controls (410.82,153.08) and (408.58,155.32) .. (408.58,158.08) .. controls (408.58,160.84) and (410.82,163.08) .. (413.58,163.08) .. controls (416.34,163.08) and (418.58,160.84) .. (418.58,158.08) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (481.76,159.91) .. controls (481.76,157.15) and (479.52,154.91) .. (476.76,154.91) .. controls (474,154.91) and (471.76,157.15) .. (471.76,159.91) .. controls (471.76,162.67) and (474,164.91) .. (476.76,164.91) .. controls (479.52,164.91) and (481.76,162.67) .. 
(481.76,159.91) -- cycle ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (419.5,256.05) -- (420.07,218.72) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (472.91,219.74) -- (472.34,257.07) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (445.91,280.67) -- (419.5,256.05) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (445.91,280.67) -- (472.34,257.07) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (118.48,244.12) .. controls (118.58,241.97) and (116.92,240.14) .. (114.77,240.04) .. controls (112.62,239.94) and (110.79,241.61) .. (110.69,243.76) .. controls (110.59,245.91) and (112.25,247.73) .. (114.41,247.83) .. controls (116.56,247.93) and (118.38,246.27) .. (118.48,244.12) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (182.32,243.13) .. controls (182.42,240.98) and (180.76,239.16) .. (178.61,239.06) .. controls (176.46,238.96) and (174.63,240.62) .. (174.53,242.77) .. controls (174.43,244.92) and (176.1,246.75) .. (178.25,246.85) .. controls (180.4,246.95) and (182.22,245.29) .. (182.32,243.13) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (151.28,210.8) .. controls (151.38,208.65) and (149.72,206.82) .. (147.57,206.72) .. controls (145.41,206.62) and (143.59,208.29) .. (143.49,210.44) .. controls (143.39,212.59) and (145.05,214.42) .. (147.2,214.52) .. controls (149.36,214.62) and (151.18,212.95) .. (151.28,210.8) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (212.6,212.55) .. controls (212.7,210.4) and (211.03,208.57) .. (208.88,208.47) .. 
controls (206.73,208.37) and (204.9,210.04) .. (204.8,212.19) .. controls (204.7,214.34) and (206.37,216.17) .. (208.52,216.27) .. controls (210.67,216.37) and (212.5,214.7) .. (212.6,212.55) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (232.72,180.07) .. controls (232.82,177.91) and (231.16,176.09) .. (229.01,175.99) .. controls (226.85,175.89) and (225.03,177.55) .. (224.93,179.7) .. controls (224.83,181.86) and (226.49,183.68) .. (228.64,183.78) .. controls (230.8,183.88) and (232.62,182.22) .. (232.72,180.07) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (181.39,180.19) .. controls (181.49,178.04) and (179.82,176.21) .. (177.67,176.11) .. controls (175.52,176.01) and (173.69,177.68) .. (173.59,179.83) .. controls (173.49,181.98) and (175.16,183.8) .. (177.31,183.9) .. controls (179.46,184) and (181.29,182.34) .. (181.39,180.19) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (84.09,210.31) .. controls (84.19,208.16) and (82.53,206.33) .. (80.38,206.23) .. controls (78.23,206.13) and (76.4,207.79) .. (76.3,209.95) .. controls (76.2,212.1) and (77.86,213.92) .. (80.01,214.02) .. controls (82.17,214.12) and (83.99,212.46) .. (84.09,210.31) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (116.67,179.34) .. controls (116.77,177.18) and (115.1,175.36) .. (112.95,175.26) .. controls (110.8,175.16) and (108.97,176.82) .. (108.87,178.97) .. controls (108.77,181.13) and (110.44,182.95) .. (112.59,183.05) .. controls (114.74,183.15) and (116.57,181.49) .. 
(116.67,179.34) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (65.33,179.06) .. controls (65.43,176.91) and (63.77,175.08) .. (61.62,174.98) .. controls (59.47,174.88) and (57.64,176.55) .. (57.54,178.7) .. controls (57.44,180.85) and (59.1,182.68) .. (61.26,182.78) .. controls (63.41,182.88) and (65.23,181.21) .. (65.33,179.06) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (65.2,129.47) .. controls (65.3,127.31) and (63.64,125.49) .. (61.49,125.39) .. controls (59.33,125.29) and (57.51,126.95) .. (57.41,129.1) .. controls (57.31,131.25) and (58.97,133.08) .. (61.12,133.18) .. controls (63.28,133.28) and (65.1,131.62) .. (65.2,129.47) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (116.53,129.34) .. controls (116.63,127.19) and (114.97,125.36) .. (112.82,125.26) .. controls (110.67,125.16) and (108.84,126.83) .. (108.74,128.98) .. controls (108.64,131.13) and (110.31,132.96) .. (112.46,133.06) .. controls (114.61,133.16) and (116.43,131.49) .. (116.53,129.34) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (148.15,96.67) .. controls (148.25,94.52) and (146.58,92.69) .. (144.43,92.59) .. controls (142.28,92.49) and (140.45,94.15) .. (140.35,96.3) .. controls (140.25,98.46) and (141.92,100.28) .. (144.07,100.38) .. controls (146.22,100.48) and (148.05,98.82) .. (148.15,96.67) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (181.25,130.19) .. controls (181.35,128.04) and (179.69,126.22) .. (177.54,126.12) .. controls (175.39,126.02) and (173.56,127.68) .. (173.46,129.83) .. 
controls (173.36,131.98) and (175.03,133.81) .. (177.18,133.91) .. controls (179.33,134.01) and (181.15,132.35) .. (181.25,130.19) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (232.59,130.07) .. controls (232.69,127.92) and (231.03,126.09) .. (228.87,125.99) .. controls (226.72,125.89) and (224.9,127.56) .. (224.8,129.71) .. controls (224.7,131.86) and (226.36,133.69) .. (228.51,133.79) .. controls (230.66,133.89) and (232.49,132.22) .. (232.59,130.07) -- cycle ; \draw (59.55,106.83) node [anchor=north west][inner sep=0.75pt] {$x_{1}( w)$}; \draw (131.38,71.43) node [anchor=north west][inner sep=0.75pt] {$y_{0}( w)$}; \draw (127.51,30.51) node [anchor=north west][inner sep=0.75pt] {$H( w)$}; \draw (63.37,157.68) node [anchor=north west][inner sep=0.75pt] {$x_{2}( w)$}; \draw (86.11,200.43) node [anchor=north west][inner sep=0.75pt] {$x_{3}( w)$}; \draw (71.16,243.45) node [anchor=north west][inner sep=0.75pt] {$x_{4}( w)$}; \draw (180.43,241.85) node [anchor=north west][inner sep=0.75pt] {$x_{5}( w)$}; \draw (214.34,205.39) node [anchor=north west][inner sep=0.75pt] {$x_{6}( w)$}; \draw (229.98,158.98) node [anchor=north west][inner sep=0.75pt] {$x_{7}( w)$}; \draw (229.94,110) node [anchor=north west][inner sep=0.75pt] {$x_{8}( w)$}; \draw (114.47,122.25) node [anchor=north west][inner sep=0.75pt] {$y_{1}( w)$}; \draw (114.7,157.56) node [anchor=north west][inner sep=0.75pt] {$y_{2}( w)$}; \draw (152.3,201.04) node [anchor=north west][inner sep=0.75pt] {$y_{3}( w)$}; \draw (178.07,108.47) node [anchor=north west][inner sep=0.75pt] {$y_{5}( w)$}; \draw (179.88,157.58) node [anchor=north west][inner sep=0.75pt] {$y_{4}( w)$}; \draw (347.7,89.23) node [anchor=north west][inner sep=0.75pt] {$y_{0}\left( w_{1}^{uv}\right)$}; \draw (419,86.41) node [anchor=north west][inner sep=0.75pt] {$u$}; \draw (483.67,89.07) node [anchor=north west][inner sep=0.75pt] {$v$}; \draw 
(347.7,137.23) node [anchor=north west][inner sep=0.75pt] {$y_{0}\left( w_{2}^{uv}\right)$}; \draw (453.04,181.9) node [anchor=north west][inner sep=0.75pt] {$y_{0}\left( w_{3}^{uv}\right)$}; \draw (543.04,139.23) node [anchor=north west][inner sep=0.75pt] {$y_{0}\left( w_{4}^{uv}\right)$}; \draw (542.85,87.47) node [anchor=north west][inner sep=0.75pt] {$y_{0}\left( w_{5}^{uv}\right)$}; \draw (419.67,139.74) node [anchor=north west][inner sep=0.75pt] {$u'$}; \draw (481,139.07) node [anchor=north west][inner sep=0.75pt] {$v'$}; \draw (288.63,44.66) node [anchor=north west][inner sep=0.75pt] {$H\left( w_{1}^{uv}\right)$}; \draw (543.3,45.66) node [anchor=north west][inner sep=0.75pt] {$H\left( w_{5}^{uv}\right)$}; \draw (291.3,176) node [anchor=north west][inner sep=0.75pt] {$H\left( w_{2}^{uv}\right)$}; \draw (419.3,224) node [anchor=north west][inner sep=0.75pt] {$H\left( w_{3}^{uv}\right)$}; \draw (542.36,179.26) node [anchor=north west][inner sep=0.75pt] {$H\left( w_{4}^{uv}\right)$}; \draw (431.67,29.51) node [anchor=north west][inner sep=0.75pt] {$F^{uv}$}; \draw (98,29.61) node [anchor=north west][inner sep=0.75pt] [align=left] {(a)}; \draw (403,29.61) node [anchor=north west][inner sep=0.75pt] [align=left] {(b)}; \end{tikzpicture} \caption{Graphs (a) $H(w)$ and (b) $F^{uv}$. The black vertices in (a) represent a $cc$-hull set of the graph $H(w)$. The hexagons in (b) represent the subgraphs $H(w)$ in $F^{uv}$.} \label{fig:Hw_Fuv} \end{figure} Let $(G,k)$ be an instance of \textsc{Hull number in $P_3$ convexity}, where $G$ is a bipartite graph and $k$ is a positive integer. We construct an instance $(G', k')$ of \textsc{Hull number in Cycle Convexity} as follows. Let $L = \{uv :$ $uv \notin E(G)$ and $N_G(u) \cap N_G(v) \neq \emptyset \}$ be the set of non-edges in $G$. The graph $G'$ arises from $G$ by adding a subgraph $F^{uv}$, for every $uv \in L$. We make $k' = k + 45|L|$. 
It is easy to see that $G'$ is a bipartite graph, since $G$ and $F^{uv}$ are clearly bipartite and, by definition of $L$, for every $uv \in L$, the connection between vertices in $V(F^{uv})$ and $V(G)$ forms a $C_6: u,u', y_0(w_3^{uv}), v',v, z$, for every $z \in N_G(u) \cap N_G(v)$, which is bipartite. Further, the construction can be accomplished in polynomial time, since $|L| = O(|V(G)|^2)$. We show that $hn_{P_3}(G) \leq k$ if and only if $hn_{cc}(G') \leq k'$. \smallskip First, suppose that $hn_{P_3}(G) \leq k$. Let $S$ be a $P_3$-hull set of $G$ with $|S| \leq k$. For every $1 \leq i \leq 5$, we define $S(w_i^{uv}) = \{y_j(w_i^{uv}) : 1 \leq j \leq 5\} \cup \{ x_j(w_i^{uv}) : j \in \{1,3,5,7\} \}$, and, for every $uv \in L$, we define $\mathcal{S}^{uv} = S(w_1^{uv}) \cup \dots \cup S(w_5^{uv})$. We show that $S' = S \cup \big(\bigcup_{uv \in L} \mathcal{S}^{uv}\big)$ is a $cc$-hull set of $G'$. Notice that $|S(w_i^{uv})| = 9$ and $|\mathcal{S}^{uv}| = 5|S(w_i^{uv})| = 45$, so $|S'| \leq k + 45|L| = k'$. Let us proceed with some useful claims. \medskip \noindent $\textbf{Claim 1: }$ Let $1 \leq i \leq 5$. Then $\langle S(w_i^{uv}) \rangle_C = V(H(w_i^{uv}))$. \smallskip \noindent \textit{Proof of Claim~1.} For short, we omit $(w_i^{uv})$ from the notation and write $S = \{y_1, \dots, y_5, x_1, x_3, x_5, x_7 \}$. The construction of $H$ implies that $y_0$ lies in the cycle $y_0, y_1, \dots, y_5$; $x_2$ lies in the cycle $x_1, y_1, y_2, x_2$; $x_4$ lies in the cycle $x_3, y_2, y_3, x_4$; $x_6$ lies in the cycle $x_5,y_3,y_4,x_6$; and $x_8$ lies in the cycle $x_7,y_4,y_5,x_8$. In each of these cycles, all vertices other than the absorbed one belong to $S$, so it holds that $y_0, x_2, x_4, x_6, x_8 \in \langle S \rangle_C$ and hence $\langle S \rangle_C = V(H)$. \hfill $\blacksquare$ \medskip \noindent $\textbf{Claim 2: }$ Let $z \in V(G)$. If $z$ belongs to the $P_3$-convex hull of $S$ in $G$, then $z$ belongs to the $cc$-convex hull of $S'$ in $G'$.
\smallskip \noindent \textit{Proof of Claim~2.} Here, for the $P_3$-convexity in $G$ and the cycle convexity in $G'$, we adopt the superscripts $P_3(G)$ and $cc(G')$, respectively, to clarify the convexity and the graph being considered in the interval $\langle \cdot \rangle$ and convex hull $\langle \cdot \rangle_C$ operations. Suppose that $z \in \langle S \rangle^{_{P_3(G)}}_C$. If $z \in S$, then $z \in S'$ and there is nothing to do. So, suppose that $z \notin S$; since $S' \cap V(G) = S$, this also gives $z \notin S'$. Since $z \in \langle S \rangle^{_{P_3(G)}}_C$, there exist $u,v \in \langle S \rangle^{_{P_3(G)}}_C$ such that $z \in \langle u, v \rangle^{_{P_3(G)}}$, that is, $z$ lies in a $P_3: u,z,v$ in $G$. We prove by induction that $z \in \langle S' \rangle^{_{cc(G')}}_C$. For the base case, let $u,v \in S$, which means $u, v \in S'$. Since $G$ is bipartite and $z$ lies in a $P_3: u,z,v$ in $G$, it follows that $uv \notin E(G)$. Thus $uv \in L$ and, by construction, a non-edge gadget $F^{uv}$ exists in $G'$. Recall that Claim~1 implies $y_0(w_i^{uv}) \in \langle S' \rangle_C^{_{cc(G')}}$, for every $1 \leq i \leq 5$. Thus, given that $u,v \in S'$, we have that $u'$ lies in the cycle $u, u', y_0(w_2^{uv}), y_0(w_1^{uv})$ as well as $v'$ lies in the cycle $v, v', y_0(w_4^{uv}), y_0(w_5^{uv})$. Consequently $u',v' \in \langle S' \rangle^{_{cc(G')}}_C$. Now, since $z$ lies in the cycle $u,u',y_0(w_3^{uv}),v',v,z$, we obtain that $z \in \langle S' \rangle^{_{cc(G')}}_C$. For the inductive step, suppose that $u, v \in \langle S' \rangle^{_{cc(G')}}_C$ with $u \notin S'$ or $v \notin S'$. Since $G$ is bipartite, $u,z,v$ does not induce a $K_3$ in $G$, which implies $uv \notin E(G)$ and $uv \in L$. Consequently, the existence of a non-edge gadget $F^{uv}$ with $u',v' \in \langle S' \rangle^{_{cc(G')}}_C$ ensures that $z \in \langle S' \rangle^{_{cc(G')}}_C$, as $z$ lies in the cycle $u,u',y_0(w_3^{uv}),v',v,z$.
\hfill $\blacksquare$ \medskip By Claims~1 and~2, we conclude that $S'$ is a $cc$-hull set of $G'$ and then $hn_{cc}(G') \leq k'$. \medskip For the converse, suppose that $hn_{cc}(G') \leq k'$. Let $S'$ be a $cc$-hull set of $G'$ with $|S'| \leq k' = k + 45|L|$. The subsequent claims will be helpful. \medskip \noindent $\textbf{Claim 3: }$ Let $1 \leq i \leq 5$. It holds that: \begin{enumerate} \item[(i)] $|S' \cap V(H(w_i^{uv}))| \geq 9$; and \item[(ii)] if $S'$ is a minimum $cc$-hull set of $G'$, then $y_0(w_i^{uv}) \notin S'$. \end{enumerate} \smallskip \noindent \textit{Proof of Claim~3.} For short, we omit $(w_i^{uv})$ from the notation. First, we examine some properties of $H$. Let us fix $X = \{x_i : 1 \leq i \leq 8\}$, $Y = \{y_i : 0 \leq i \leq 5\}$, and define the following cycles: \begin{multicols}{3} \begin{itemize} \item $A_1: x_1, y_1, y_2, x_2$; \item $A_2: x_3, y_2, y_3, x_4$; \item $A_3: x_6, y_4, y_3, x_5$; \item $A_4: x_8, y_5, y_4, x_7$; \item $A_5: y_0,y_1,\dots, y_5$. \end{itemize} \end{multicols} Let $1 \leq i \leq 4$. For every $a \in A_i \setminus Y$, notice that $|N_H(a) \cap Y| = 1$. Then, since $S'$ is a $cc$-hull set, it follows that $S' \cap (A_i \setminus Y) \neq \emptyset$, which means that $|S' \cap X| \geq 4$. Furthermore, for every $a \in A_5 \setminus \{y_0\}$, we have that $|N_H(a) \cap (X \cup \{y_0\})| = 2$, say $N_H(a) \cap (X \cup \{y_0\}) = \{a_1, a_2\}$. Since $G' \setminus Y$ is disconnected and $a_1$ and $a_2$ belong to distinct connected components, we have that $|S' \cap (A_5 \setminus \{y_0\})| \geq 1$. (i) Suppose, by contradiction, that $|S' \cap V(H)| \leq 8$, say $S' \cap V(H) \subseteq \{v_1, \dots, v_8\}$. From the above, we may assume that $v_1, \dots, v_4 \in S' \cap X$, and $v_5 \in S' \cap (A_5 \setminus \{y_0\})$. By construction, $A_i$ induces a $C_4$, $|A_i \cap A_j| \leq 1$, and all the paths from vertices in $A_i$ to vertices in $A_j$ pass through $Y$, for $i, j \in [4]$, $i \neq j$.
This implies that $v_6, v_7, v_8$ generate at most three cycles in $\{ A_1, \dots, A_4 \}$, a contradiction. (ii) Let $S'$ be a minimum $cc$-hull set of $G'$. By (i), we have that $|S' \cap V(H)| = 9$, with $|S' \cap X| \geq 4$ and $|S' \cap (A_5 \setminus \{y_0\})| \geq 1$. Suppose, by contradiction, that $y_0 \in S'$. We denote $S' \cap V(H) = \{y_0, v_1, \dots, v_8\}$. We assume that $v_1, \dots, v_4 \in S' \cap X$, and $v_5 \in S' \cap (A_5 \setminus \{y_0\})$. By construction, $y_0$ lies only in cycle $A_5$; then, similarly to Item (i), $v_6, v_7, v_8$ generate at most three cycles in $\{ A_1, \dots, A_4 \}$, a contradiction. \hfill $\blacksquare$ \medskip \noindent $\textbf{Claim 4: }$ Let $uv \in L$. Then $|S' \cap F^{uv}| \geq 45$. \smallskip \noindent \textit{Proof of Claim~4.} Let $1 \leq i \leq 5$. By Claim~3(ii), we know that $y_0(w_i^{uv})$ does not belong to a minimum $cc$-hull set of $G'$. Given that $y_0(w_i^{uv})$ is a cut-vertex in $G'$ and $H(w_i^{uv}) \setminus \{ y_0(w_i^{uv}) \} $ is a connected component of $G' \setminus \{ y_0(w_i^{uv}) \} $, it follows that $|S' \cap F^{uv}| \geq |S' \cap (V(H(w_1^{uv})) \cup \dots \cup V(H(w_5^{uv})))| = 5 \cdot 9 = 45$. \hfill $\blacksquare$ \medskip By Claim~4, we may assume that $|S' \cap (V(G) \cup \{u',v'\})| \leq k$. Notice that if $u' \in S'$, a $cc$-hull set $S''$ of $G'$ of cardinality at most $|S'|$ can be obtained by setting $S'' = (S' \setminus \{u'\}) \cup \{u\}$. The same reasoning applies to $v'$. We then assume that $S''$ is a $cc$-hull set of $G'$ with $|S''| \leq k'$ and $S'' \cap \{u', v'\} = \emptyset$, for every $uv \in L$. We show that $S'' \cap V(G)$ is a $P_3$-hull set of $G$. Recall that $y_0(w_i^{uv}) \in \langle S'' \rangle_C$, for every $1 \leq i \leq 5$. \medskip \noindent $\textbf{Claim 5: }$ Let $z \in V(G)$. If $z$ is in the $cc$-convex hull of $S''$ in $G'$, then $z$ is in the $P_3$-convex hull of $S = S'' \cap V(G)$ in $G$.
\smallskip \noindent \textit{Proof of Claim~5.} As done in the proof of Claim~2, for the $P_3$-convexity in $G$ and the cycle convexity in $G'$, we use the superscripts $P_3(G)$ and $cc(G')$, respectively, to identify the convexity and the graph being considered in the interval and convex hull operations. Suppose that $z \in \langle S'' \rangle^{_{cc(G')}}_C$. If $z \in S''$, then $z \in S$ and there is nothing to do. So, suppose that $z \notin S''$. Since $z \in \langle S'' \rangle^{_{cc(G')}}_C$, there exist $u,v \in \langle S'' \rangle^{_{cc(G')}}_C$ such that $z \in \langle u, v \rangle^{_{cc(G')}}$. We prove by induction that $z \in \langle S \rangle^{_{P_3(G)}}_C$. For the base case, let $u, v \in S$. Since $uv \notin E(G')$ and $uv \in L$, $z \in \langle S'' \rangle_C^{_{cc(G')}}$ because of the cycle $z, u, u', y_0(w_3^{uv}), v', v$ created by the non-edge gadget $F^{uv}$. Thus, the path with three vertices $u,z,v$ implies that $z \in \langle S \rangle_C^{_{P_3(G)}}$. For the inductive step, suppose that $u, v \in \langle S \rangle^{_{P_3(G)}}_C$ with $u \notin S$ or $v \notin S$. Since $G'$ is bipartite, we have that $uv \notin E(G')$. Consequently $uv \in L$, and by the non-edge gadget $F^{uv}$, we have $u',v' \in \langle S'' \rangle^{_{cc(G')}}_C$. Since $z$ lies in the cycle $u,u',y_0(w_3^{uv}),v',v,z$, we get $z \in \langle S'' \rangle^{_{cc(G')}}_C$. This, together with the three-vertex path $u,z,v$ in $G$, implies that $z \in \langle S \rangle_C^{_{P_3(G)}}$. \hfill $\blacksquare$ \medskip By Claim~5, we obtain that $S$ is a $P_3$-hull set of $G$ and hence $hn_{P_3}(G) \leq k$, completing the proof. \end{proof} We are now ready to prove the NP-completeness of \textsc{Hull Number in Cycle Convexity} for Cartesian product of graphs.
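It may help to make the cycle-convexity operations used in these proofs computational. The sketch below is ours, not from the paper (names such as \texttt{cc\_closure} are made up): a vertex $v$ is absorbed into the hull exactly when it lies on a cycle whose remaining vertices are already in the hull, equivalently, when two neighbours of $v$ inside the hull are joined by a path inside the hull avoiding $v$. A brute-force search then computes $hn_{cc}$ for tiny graphs.

```python
from itertools import combinations

def _connected_within(adj, allowed, a, b):
    """DFS from a to b using only vertices in `allowed`."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        if x == b:
            return True
        for y in adj[x]:
            if y in allowed and y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def cc_closure(adj, seed):
    """Cycle-convexity hull of `seed`: repeatedly absorb any vertex v
    having two hull-neighbours joined by a path inside the hull
    (such a path plus v forms the required cycle)."""
    hull = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in hull:
                continue
            nbrs = [u for u in adj[v] if u in hull]
            if any(_connected_within(adj, hull, a, b)
                   for a, b in combinations(nbrs, 2)):
                hull.add(v)
                changed = True
    return hull

def cc_hull_number(adj):
    """Smallest |S| with cc_closure(S) = V, by brute force (tiny graphs only)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if cc_closure(adj, S) == set(vertices):
                return k
```

For instance, on $C_4$ any three vertices form a $cc$-hull set, while a tree needs all of its vertices, since no vertex of a tree lies on a cycle.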
\begin{theorem}\label{theo:cartesianNPComplete} \textsc{Hull number in Cycle Convexity} remains NP-complete on Cartesian product graphs $G_1 \square G_2$ under either of the following conditions: \begin{enumerate} \item[(i)] $G_1 \square G_2$ is bipartite; \item[(ii)] both $G_1$ and $G_2$ are planar. \end{enumerate} \end{theorem} \begin{proof} The NP-membership is already established~\cite{araujo2024hull}. For the NP-hardness, the proofs of \textit{(i)} and \textit{(ii)} make use of the graph $H(w)$ shown in Figure~\ref{fig:Hw_Fuv}(a) in order to produce a graph that has a minimum hull set $S$ that can be partitioned into two sets $S_1$ and $S_2$ with $\langle S_1 \rangle_C \cap \langle S_2 \rangle_C \neq \emptyset$. Let $\mathcal{B}$ and $\mathcal{P}$ be the classes of bipartite and planar graphs, respectively. From an instance $(G,k)$ of \textsc{Hull number in Cycle Convexity} in which $G \in \mathcal{B} \cup \mathcal{P}$, we construct an instance $(G' \square K_2, k')$ of the same problem as described below. Beforehand, let us remark that if $G \in \mathcal{B}$, the problem is NP-complete by Theorem~\ref{theo:bipartiteNP-complete}, and if $G \in \mathcal{P}$, the NP-completeness follows from Theorem~4.2 by Araujo et al.~\cite{araujo2024hull}. Let $H$ be the graph formed by two copies of $H(w)$ from Figure~\ref{fig:Hw_Fuv}(a), labeled $H(w_1), H(w_2)$, where the vertices $y_0(w_1)$ and $y_0(w_2)$ are identified as a single vertex $v$. The graph $G'$ arises from the disjoint union of $G$ and $H$ by adding $uv \in E(G')$, for some vertex $u \in V(G)$. A sketch of the graph $G'$ is depicted in Figure~\ref{fig:graph_H}. The graph $G'$ is used to compute the Cartesian product $G' \square K_2$. Given that $H$ and $K_2$ are planar bipartite graphs, we have that: if $G \in \mathcal{B}$, then $G' \square K_2$ is bipartite, as required in Item \textit{(i)}; and if $G \in \mathcal{P}$, then both $G'$ and $K_2$ are planar, as required in Item \textit{(ii)}. We show some properties of $G'$ in Claims~1 and~2.
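Since the construction above revolves around Cartesian products such as $G' \square K_2$, here is a minimal sketch of the product on adjacency dictionaries (ours, not from the paper): the vertex set is the set of pairs, and $(g,h)$ is adjacent to $(g',h')$ exactly when the pairs agree in one coordinate and are adjacent in the other.

```python
def cartesian_product(adj_g, adj_h):
    """Adjacency of G □ H:
    (g, h) ~ (g', h')  iff  (g = g' and h ~ h') or (h = h' and g ~ g')."""
    prod = {(g, h): set() for g in adj_g for h in adj_h}
    for g in adj_g:
        for h in adj_h:
            for h2 in adj_h[h]:          # copy of G's layer structure in H-direction
                prod[(g, h)].add((g, h2))
            for g2 in adj_g[g]:          # copy of H's layer structure in G-direction
                prod[(g, h)].add((g2, h))
    return prod
```

A quick sanity check: $P_2 \square P_3$ has $2\cdot 3 = 6$ vertices and $|V(G)||E(H)| + |V(H)||E(G)| = 2\cdot 2 + 3\cdot 1 = 7$ edges, and $K_2 \square K_2$ is the $4$-cycle.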
\begin{figure}[htb] \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (218.3,168.98) -- (243.05,140.94) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (218.84,116.13) -- (243.05,140.94) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (181.51,115.9) -- (218.84,116.13) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (218.3,168.98) -- (180.97,168.75) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (157.13,142.54) -- (181.51,115.9) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (157.13,142.54) -- (180.97,168.75) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (133.5,169.38) -- (158.25,141.34) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (134.04,116.53) -- (158.25,141.34) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (96.71,116.3) -- (134.04,116.53) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (133.5,169.38) -- (96.17,169.15) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (72.33,142.94) -- (96.71,116.3) ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (72.33,142.94) -- (96.17,169.15) ; \draw (157.23,142.13) -- (157.33,83) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (152.33,83) .. controls (152.33,80.24) and (154.57,78) .. (157.33,78) .. controls (160.09,78) and (162.33,80.24) .. (162.33,83) .. controls (162.33,85.76) and (160.09,88) .. (157.33,88) .. controls (154.57,88) and (152.33,85.76) .. 
(152.33,83) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (152.23,142.13) .. controls (152.23,139.37) and (154.47,137.13) .. (157.23,137.13) .. controls (159.99,137.13) and (162.23,139.37) .. (162.23,142.13) .. controls (162.23,144.89) and (159.99,147.13) .. (157.23,147.13) .. controls (154.47,147.13) and (152.23,144.89) .. (152.23,142.13) -- cycle ; \draw (79.67,71.41) .. controls (79.67,57.09) and (114.26,45.48) .. (156.93,45.48) .. controls (199.61,45.48) and (234.2,57.09) .. (234.2,71.41) .. controls (234.2,85.73) and (199.61,97.33) .. (156.93,97.33) .. controls (114.26,97.33) and (79.67,85.73) .. (79.67,71.41) -- cycle ; \draw (62.8,108.08) -- (255.4,108.08) -- (255.4,179.08) -- (62.8,179.08) -- cycle ; \draw (140,67.07) node [anchor=north west][inner sep=0.75pt] {$u$}; \draw (151,148.73) node [anchor=north west][inner sep=0.75pt] {$v$}; \draw (50.4,62.33) node [anchor=north west][inner sep=0.75pt] {$G$}; \draw (93.19,132.93) node [anchor=north west][inner sep=0.75pt] {$H( w_{1})$}; \draw (177.99,132.93) node [anchor=north west][inner sep=0.75pt] {$H( w_{2})$}; \draw (30.4,132.73) node [anchor=north west][inner sep=0.75pt] {$H$}; \end{tikzpicture} \caption{Graph $G'$. The hexagons represent the subgraphs $H(w)$ from Figure~\ref{fig:Hw_Fuv}(a).} \label{fig:graph_H} \end{figure} \smallskip \noindent $\textbf{Claim 1: }$ For every minimum hull set $S$ of $G'$, $|S \cap V(H)| = 18$. \smallskip \noindent \textit{Proof of Claim~1.} Let $S$ be a minimum hull set of $G'$. Since $v$ is a cut-vertex in $G'$ and $v \notin S$, Claim~3 from the proof of Theorem~\ref{theo:bipartiteNP-complete} implies that $|S \cap V(H(w_i))| = 9$, for $i = 1,2$. Summing over $H(w_1)$ and $H(w_2)$ gives $|S \cap V(H)| = 18$. \hfill $\blacksquare$ \smallskip For the next, let us fix $S(w_i) = \{y_j(w_i) : 1 \leq j \leq 5\} \cup \{ x_j(w_i) : j \in \{1,3,5,7\} \}$ for $i = 1, 2$. 
\smallskip \noindent $\textbf{Claim 2: }$ There is a minimum hull set $S$ in $G'$ such that $S$ can be partitioned into two sets $S_1$ and $S_2$ with $\langle S_1 \rangle_C \cap \langle S_2 \rangle_C \neq \varnothing$. \smallskip \noindent \textit{Proof of Claim~2.} Let $S$ be a minimum hull set of $G'$ such that $S(w_1) \cup S(w_2) \subseteq S$. By considering $S_1 = S(w_1)$ and $S_2 = S \setminus S(w_1)$, we have that $\langle S_1 \rangle_C = V(H(w_1))$, and $\langle S_2 \rangle_C = V(G) \cup V(H(w_2))$. Then $\langle S_1 \rangle_{C} \cap \langle S_2 \rangle_{C} = \{v\}$ as desired. \hfill $\blacksquare$ \medskip By Claim~2, Theorem~\ref{theo:bounds_with_hnccH_two} implies that $hn_{cc}(G' \square K_2) = hn_{cc}(G')$. Hence, it suffices to show that $hn_{cc}(G) \leq k$ if and only if $hn_{cc}(G') \leq k + 18$. Let $S$ be a hull set of $G$ with $|S| \leq k$ and $S' = S \cup S(w_1) \cup S(w_2)$. Since $\langle S' \cap V(G) \rangle_C = V(G)$ and $\langle S' \cap V(H) \rangle_C = V(H)$, we obtain that $S'$ is a hull set of $G'$. For the converse, let $S'$ be a hull set of $G'$ with $|S'| \leq k+18$. By Claim~1, we know that $|S' \cap V(H)| \geq 18$. Then, $|S' \cap V(G)| \leq k$. Since $v$ is a cut-vertex in $G'$ and $G$ is a connected component of $G' \setminus \{v\}$, any vertex in $V(G)$ must be generated by the convex hull of vertices in $V(G)$; consequently, $S'\cap V(G)$ is a hull set of $G$. \end{proof} As a positive result, in the following, we prove that the cycle hull number of the Cartesian product of any two trees can be computed in linear time. This finding generalizes the already known result for grid graphs, $hn_{cc}(P_m\Box P_n)=m+n-1$~\cite{araujo2024hull}. \begin{theorem} Let $T_1$ and $T_2$ be two nontrivial trees with orders $m$ and $n$, respectively. Then $hn_{cc}(T_1\Box T_2)=m+n-1.$ \end{theorem} \begin{proof} For a tree $T$ and an end vertex $v$ of $T,$ let $T^v$ be the tree obtained from $T$ by deleting the vertex $v$.
We prove the theorem by induction on $|V(T_2)|$; for brevity, we write $h(\cdot)$ for $hn_{cc}(\cdot)$. For the case $|V(T_2)| = 2$, Theorem~\ref{theo:bounds_with_hnccH_two} implies that $h(T_1 \Box T_2)=h(T_1)+1 = |V(T_1)|+1$ for any tree $T_1$ (recall that $h(T) = |V(T)|$ for every tree $T$, as no vertex of a tree lies on a cycle). Assume that $h(T_1 \Box T_2)=h(T_1)+h(T_2)-1$ for any trees $T_1$ and $T_2$ with $|V(T_2)|=k$. Choose arbitrary trees $T_1$ and $T_2$ with $|V(T_2)|=k+1$. Since the Cartesian product is commutative, we may assume that $|V(T_1)| \geq |V(T_2)|$. We first prove the following claim. \smallskip \noindent $\textbf{Claim 1: }$ $h(T_1\Box T_2^v) < h(T_1 \Box T_2)$. \smallskip \noindent \textit{Proof of Claim~1.} Suppose, to the contrary, that $h(T_1\Box T_2^v) \geq h(T_1 \Box T_2)$. Let $S$ be a minimum hull set of $T_1 \Box T_2$. For each $x \in V(T_2)$, we fix $S_x=S \cap (V(T_1) \times \{x\})$. Now, for the support vertex $u$ of $v$ in $T_2$, we fix $U=\{x: (x,v)\in S_v\}$ and $S'= (S\setminus S_v) \cup (U \times \{u\})$. Suppose that there is no edge between $S_u$ and $S_v$ in $T_1 \Box T_2$. Then, it is clear that $\langle S \rangle_C = \langle S \setminus S_v \rangle_C \cup S_v$ and $S_v \neq V(T_1) \times \{v\}$. This is a contradiction to the fact that $S$ is a hull set of $T_1 \Box T_2$. Hence, there exists at least one edge between $S_u$ and $S_v$. This, in turn, implies that $|S'|<|S|$, since some vertex of $U \times \{u\}$ already lies in $S_u$. Also, it is straightforward to verify that $S'$ is a hull set of $T_1 \Box T_2^v$. This is a contradiction to the fact that $|S'|<|S|=h(T_1 \Box T_2) \leq h(T_1 \Box T_2^v)$. Hence the claim follows. \hfill $\blacksquare$ \smallskip Now, the induction hypothesis and Claim 1 show that $h(T_1)+k-1=h(T_1\Box T_2^v)<h(T_1 \Box T_2) \leq h(T_1)+h(T_2)-1=h(T_1)+k. $ Thus, $h(T_1 \Box T_2)=h(T_1)+k$ and so the result follows for all trees $T_1$ and $T_2$. \end{proof} \section{Convexity Number} \label{sec:cx} We recall that the convexity number of a connected graph $G$ is the cardinality of a maximum proper convex set in $G$ and is denoted by $C_{cc}(G)$.
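A brute-force check of this definition can be useful for small instances. The sketch below is ours (names are made up): a set $S$ is convex in cycle convexity when no outside vertex lies on a cycle whose remaining vertices all belong to $S$, equivalently, when no outside vertex has two $S$-neighbours joined by a path inside $S$; the convexity number is then the largest proper set passing this test.

```python
from itertools import combinations

def _connected_within(adj, allowed, a, b):
    """DFS from a to b using only vertices in `allowed`."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        if x == b:
            return True
        for y in adj[x]:
            if y in allowed and y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def is_cc_convex(adj, S):
    """S is convex iff no outside vertex v has two neighbours in S
    joined by a path inside S (which would close a cycle through v)."""
    S = set(S)
    for v in adj:
        if v in S:
            continue
        nbrs = [u for u in adj[v] if u in S]
        if any(_connected_within(adj, S, a, b)
               for a, b in combinations(nbrs, 2)):
            return False
    return True

def cc_convexity_number(adj):
    """Largest proper convex set, by brute force (tiny graphs only)."""
    vertices = list(adj)
    for k in range(len(vertices) - 1, 0, -1):
        if any(is_cc_convex(adj, S) for S in combinations(vertices, k)):
            return k
```

On tiny graphs this reproduces the values used below, e.g. $C_{cc}(C_n) = n-2$, $C_{cc}(K_n) = 1$, and $C_{cc}(T_n) = n-1$.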
Note first that $C_{cc}(G) \leq n-1$ for every connected graph $G$ of order $n$. Moreover, if $G$ contains a vertex $g$ that lies on no cycle, then the convex hull of the remaining vertices does not generate $g$, and hence $C_{cc}(G) = n-1$, attaining this upper bound. Our first result establishes an equality for this parameter in the Cartesian product of two graphs. \begin{theorem}\label{theo:cx} Let $G$ and $H$ be two nontrivial connected graphs with orders $m$ and $n$, respectively. Then $$C_{cc}(G\Box H) = \max\{n \cdot C_{cc}(G), m \cdot C_{cc}(H)\}.$$ \end{theorem} \begin{proof} Let $S_1$ be a proper convex set of $G$ with $|S_1| = C_{cc}(G)$ and set $T_1 = V(H)$. Define $S_2 = V(G)$ and let $T_2$ be a proper convex set of $H$ with $|T_2| = C_{cc}(H)$. By Theorem~\ref{theo:StimesT}, $S_i \times T_i$ for $i = 1,2$ is convex in $G \Box H$. Then $C_{cc}(G\Box H) \geq \max\{|S_1 \times T_1|, |S_2 \times T_2| \}$ and the lower bound follows. For the upper bound, suppose, by contradiction, that there exists a proper convex set $S$ in $G \Box H$ with $|S| > \max\{n \cdot C_{cc}(G), m \cdot C_{cc}(H)\}$. We assume first that $n \cdot C_{cc}(G) \geq m \cdot C_{cc}(H)$. Given that $|S| > n \cdot C_{cc}(G)$, there exists some $h \in V(H)$ such that $|S \cap V(G^h)| > C_{cc}(G)$. This implies that $V(G^h) \subset \langle S \cap V(G^h) \rangle_C$. Given that $|S| > m \cdot C_{cc}(H)$, there exists some $g \in V(G)$ such that $|S \cap V(^gH)| > C_{cc}(H)$. This gives that $V(^gH) \subset \langle S \cap V(^gH) \rangle_C$. So, given that $V(G^h) \cup V(^gH) \subset \langle S \rangle_C$, we have that $\langle S \rangle_C = V(G \Box H)$, which contradicts the assumption that $S$ is a proper convex set. Since the Cartesian product is commutative, the case $n \cdot C_{cc}(G) < m \cdot C_{cc}(H)$ follows by similar arguments. \end{proof} Given that $C_{cc}(K_n) = 1$, $C_{cc}(C_n) = n-2$, and $C_{cc}(T_n) = n-1$, Theorem~\ref{theo:cx} implies the following corollary.
\begin{corollary} For positive integers $m$ and $n$, it holds that \begin{enumerate}[label=\arabic*.] \item $C_{cc}(K_m\Box K_n)= \max\{ m,n \}$. \item $C_{cc}(K_m\Box C_n)= \max\{ n, mn-2m \}$. \item $C_{cc}(K_m\Box T_n)= \max\{ n, mn-m \}$. \item $C_{cc}(C_m\Box C_n)= \max\{ mn-2n, mn-2m \}$. \item $C_{cc}(C_m\Box T_n)= \max\{ mn-2n, mn-m \}$. \item $C_{cc}(T_m\Box T_n)=\max\{ mn-n, mn-m \}$. \end{enumerate} \end{corollary} Now, we proceed to strong and lexicographic products, where there is a relation with the independence number. \begin{theorem}\label{theo:convexity_strong_lexico} Let $G$ and $H$ be two nontrivial connected graphs. For $\ast\in \{\boxtimes, \circ\},$ it holds that $$C_{cc}(G\ast H)=\alpha(G\ast H).$$ \end{theorem} \begin{proof} Let $\ast\in \{\boxtimes, \circ\}$. By Theorems~\ref{hullstrong} and~\ref{hulllexico}, any two adjacent vertices in $G\ast H$ form a hull set of $G\ast H$. Thus the only proper convex sets in $G\ast H$ are independent sets. Therefore $C_{cc}(G\ast H)=\alpha(G\ast H)$. \end{proof} It is easy to verify that $\alpha(K_n) = 1$, $\alpha(C_n) = \lfloor \frac{n}{2} \rfloor$, and $\alpha(P_n) = \lceil \frac{n}{2} \rceil$. Using the result $\alpha(K_n \boxtimes G) = \alpha(G)$ by Vesel~\cite{vesel1998independence}, the results for the strong product of cycles by Sonnemann and Krafft~\cite{sonnemann1974independence}, as well as the equality for the lexicographic product $\alpha(G \circ H) = \alpha(G)\alpha(H)$ by Geller and Stahl~\cite{geller1975chromatic}, we obtain the next corollary. \begin{corollary} For positive integers $j, k, m$ and $n$, the following hold: \begin{enumerate}[label=\arabic*.] \item $C_{cc}(K_m\boxtimes K_n) = C_{cc}(K_m\circ K_n)= 1$. \item $C_{cc}(K_m\boxtimes C_n)= C_{cc}(K_m\circ C_n)= \lfloor \frac{n}{2} \rfloor$. \item $C_{cc}(K_m\boxtimes P_n)= C_{cc}(K_m\circ P_n)= \lceil \frac{n}{2} \rceil$. \item $C_{cc}(C_m\circ C_n)= \lfloor \frac{m}{2} \rfloor \lfloor \frac{n}{2} \rfloor$.
\item $C_{cc}(C_{2j}\boxtimes C_n)= j \lfloor \frac{n}{2} \rfloor$. \item $C_{cc}(C_{2j+1}\boxtimes C_{2k+1})= jk + \lfloor \frac{k}{2} \rfloor$, with $j \geq k$. \item $C_{cc}(C_m\boxtimes P_n)= C_{cc}(C_m\circ P_n)= \lfloor \frac{m}{2} \rfloor \lceil \frac{n}{2} \rceil$. \item $C_{cc}(P_m\boxtimes P_n)= C_{cc}(P_m\circ P_n)= \lceil \frac{m}{2} \rceil \lceil \frac{n}{2} \rceil$. \end{enumerate} \end{corollary} We close this section with a complexity result for the cycle convexity number derived from the above discussions. For that, we recall the related decision problem. \begin{problem}{\textsc{Convexity Number in Cycle Convexity}}\\ \textbf{Instance:} A graph $G$ and a positive integer $k$.\\ \textbf{Question:} Is $C_{cc}(G) \geq k$? \end{problem} \begin{corollary} \textsc{Convexity number in Cycle Convexity} is NP-complete when restricted to the class of Cartesian, strong, or lexicographic products of two nontrivial graphs. \end{corollary} \begin{proof} The NP membership is clear. For the Cartesian product, given that \textsc{Convexity number in Cycle Convexity} is NP-complete~\cite{lima2024complexity}, the result is straightforward from Theorem~\ref{theo:cx}. For the strong and lexicographic products, recall that given a graph $G$ and a positive integer $k$, deciding whether $\alpha(G) \geq k$ is NP-complete~\cite{garey1979computers}. Hence the conclusion follows by Theorem~\ref{theo:convexity_strong_lexico}. \end{proof} \bibliographystyle{amsplain} \bibliography{cycle} \end{document}
2412.19277v1
http://arxiv.org/abs/2412.19277v1
A study on the dual of $C(X)$ with the topology of (strong) uniform convergence on a bornology
\documentclass[reqno, 12pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathtools} \usepackage{tikz-cd} \usepackage{enumerate} \usepackage[mathscr]{eucal} \usepackage{graphicx} \usepackage{pgfplots} \setlength{\textwidth}{5.5in} \setlength{\textheight}{8in} \setlength{\oddsidemargin}{.5in} \setlength{\evensidemargin}{.5in} \setlength{\topmargin}{.25in} \setlength{\parskip}{0pt} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{note}[theorem]{Note} \newtheorem{guess}[theorem]{Guess} \newtheorem{definitions}[theorem]{Definitions} \theoremstyle{definition} \newtheorem{example}[theorem]{Example} \newtheorem{result}[theorem]{Result} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[theorem]{Question} \newcommand{\trinorm}{\vert\hspace{-0.5mm}\vert\hspace{-0.5mm}\vert} \begin{document} \title[A study on the Dual spaces of $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$]{A study on the dual of $C(X)$ with the topology of (strong) uniform convergence on a bornology} \author{Akshay Kumar} \newcommand{\acr}{\newline\indent} \address{Akshay Kumar: Department of Mathematics, Indian Institute of Technology Madras, Chennai-600036, India} \email{[email protected]} \thanks{This work is supported by IIT Madras (Project No. 
SB22231267MAETWO008573).} \subjclass[2010]{ Primary: 54C35, 54C40, 46A16; Secondary: 28A33, 54D70} \keywords{Bornologies, topology of (strong) uniform convergence on a bornology, compact-open topology, extended locally convex spaces, topology of uniform convergence on bounded sets, Borel measures supported on bornology.} \begin{abstract} This article begins by deriving a measure-theoretic decomposition of continuous linear functionals on $C(X)$, the space of all real-valued continuous functions on a metric space $(X, d)$, equipped with the topology $\tau_\mathcal{B}$ of uniform convergence on a bornology $\mathcal{B}$. We characterize the bornologies for which $(C(X), \tau_{\mathcal{B}})^*=(C(X), \tau_{\mathcal{B}}^s)^*$, where $\tau_{\mathcal{B}}^s$ represents the topology of strong uniform convergence on $\mathcal{B}$. Furthermore, we examine the normability of $\tau_{ucb}$, the topology of uniform convergence on bounded subsets, on $(C(X), \tau_{\mathcal{B}})^*$, and explore its relationship with the operator norm topology. Finally, we derive a topology on measures that shares a connection with $(C(X), \tau_{\mathcal{B}})^*$ when endowed with $\tau_{ucb}$. \end{abstract} \maketitle \section{Introduction and Preliminaries} \subsection{Introduction} Let $C(X)$ be the space of real-valued continuous functions on a metric space $(X,d)$. On $C(X)$, the classical topology of uniform convergence on a bornology $\mathcal{B}$, usually denoted by $\tau_{\mathcal{B}}$, has garnered significant attention from many mathematicians over the past several decades (see, \cite{Ucanbfms, Suc, SWasucob, MN}). It is known that if $\mathcal{B}=\mathcal{K}$, the collection of all non-empty relatively compact subsets of $X$, then $(C(X), \tau_k)$ forms a locally convex space. Moreover, the dual $(C(X), \tau_k)^*$ is isometrically isomorphic to the space of all closed regular finite Borel measures on $X$ with compact supports (see, Theorem 2.4.1, p. 88 in \cite{bns}). 
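As a concrete illustration of how convergence in $\tau_{\mathcal{B}}$ depends on the choice of bornology, consider $f_n(x)=x/n$ on $X=\mathbb{R}$: $\rho_B(f_n)\to 0$ for every finite or compact $B$, while $\sup_{x\in\mathbb{R}}|f_n(x)|=\infty$ for every $n$, so $f_n\to 0$ in $\tau_p$ and $\tau_k$ but not in $\tau_u$. The sketch below is ours, not from the paper; it approximates the suprema on finite sample sets, and the chosen sets and grids are illustrative assumptions.

```python
def rho(f, B):
    """Seminorm rho_B(f) = sup over x in B of |f(x)|, on a finite sample of B."""
    return max(abs(f(x)) for x in B)

def f(n):
    """f_n(x) = x / n, which tends to 0 uniformly on bounded sets only."""
    return lambda x: x / n

finite_B = [1.0, 5.0]                              # a finite member of F
compact_B = [k / 100 for k in range(-1000, 1001)]  # grid on [-10, 10], in K
unbounded_B = [10.0 ** k for k in range(7)]        # sample of an unbounded set

for n in (1, 10, 100):
    print(n, rho(f(n), finite_B), rho(f(n), compact_B), rho(f(n), unbounded_B))
```

On the finite and compact samples the value shrinks like $1/n$, while on the unbounded sample it stays large, reflecting $\rho_{\mathbb{R}}(f_n)=\infty$ and the failure of uniform convergence on all of $\mathbb{R}$.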
For a general bornology $\mathcal{B}$, $(C(X), \tau_{\mathcal{B}})$ may not necessarily form a topological vector space, as the scalar multiplication may fail to be jointly continuous. However, as shown in \cite{esaetvs}, $(C(X), \tau_{\mathcal{B}})$ is always an extended locally convex space (elcs), which allows for the study of its dual space. This paper is organized into four sections. Following the necessary preliminaries, the second section develops a decomposition of continuous linear functionals on $(C(X), \tau_{\mathcal{B}})$ in terms of closed, regular, finite Borel measures supported on $\mathcal{B}$. In the third section, we compare the dual spaces of $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$, where $\tau_{\mathcal{B}}^s$ denotes the topology of strong uniform convergence on the bornology $\mathcal{B}$. This topology was defined by Beer and Levi in \cite{Suc} and has been further explored by many authors (see, \cite{ATasucob, Bspafs, SWasucob}). We provide an example demonstrating that the duals of these spaces may differ (see, Example \ref{dual of both spaces may not coincide}). Additionally, in Theorem \ref{condition under which both the spaces have same dual}, we find conditions under which both the spaces share the same dual. Since $(C(X), \tau_\mathcal{B})$ forms a locally convex space for $\mathcal{B}\subseteq\mathcal{K}$, the natural topology on $(C(X), \tau_{\mathcal{B}})^*$ is the topology $\tau_{ucb}$ of uniform convergence on bounded subsets of $(C(X), \tau_{\mathcal{B}})$ (see, Definition \ref{def of uniform convergence on bounded sets}). However, many authors have considered the operator norm topology when studying $(C(X), \tau_{k})^*$ (see, \cite{bns, kundu1989spaces, wheeler1976mackey}). A natural question arises: do $\tau_{ucb}$ and the $\|\cdot\|_{op}$-topology coincide? In the fourth section, we present Example \ref{example for operator and strong topology not coincide} to illustrate that this is not always the case.
We further examine the normability of $\tau_{ucb}$ on $(C(X), \tau_{\mathcal{B}})^*$ and $(C(X), \tau_{\mathcal{B}}^s)^*$. We show that $\|\cdot\|_{op}$ may not define a norm on $(C(X), \tau_{\mathcal{B}})^*$. To address this, we introduce a topology $\sigma$ on measures, which links $((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb})$ with the measures supported on $\mathcal{B}$. \subsection{Preliminaries} We denote a metric space by $(X, d)$ (or $X$). A collection $\mathcal{B}$ of non-empty subsets of $X$ is said to form a \textit{bornology} on $X$ if it covers $X$ and is closed under finite union and taking subsets of its members. A subfamily $\mathcal{B}_0$ of $\mathcal{B}$ is said to be a \textit{base} for $\mathcal{B}$ if it is cofinal in $\mathcal{B}$ under set inclusion. If all the members of $\mathcal{B}_0$ are closed in $(X,d)$, then we say $\mathcal{B}$ has a \textit{closed base}. For more details on bornologies, we refer to \cite{bala, Ballf, hogbe, Bc}. \begin{definition}\label{definition of tauB}{\normalfont (\cite{Suc})} \normalfont The topology $\tau_{\mathcal{B}}$ of \textit{uniform convergence on} $\mathcal{B}$ is determined by a uniformity on $C(X)$ having base consisting of sets of the form $$[B, \epsilon]=\left\lbrace (f, g) : \forall x\in B,~ |f(x)-g(x)|<\epsilon \right\rbrace~ (B\in\mathcal{B},~ \epsilon>0).$$\end{definition} \noindent Note here that \begin{enumerate}[(1)] \item if $\mathcal{B}=\mathcal{F}$, the bornology of all non-empty finite subsets of $X$, then $\tau_\mathcal{B}$ will give the \textit{topology of pointwise convergence}, which is usually denoted by $\tau_p$; \item if $\mathcal{B}=\mathcal{K}$, the bornology of all non-empty relatively compact subsets of $X$, then $\tau_\mathcal{B}$ will give the \textit{topology of uniform convergence on compact sets} or \textit{compact open topology}, which is usually denoted by $\tau_{k}$; \item if $\mathcal{B}=\mathcal{P}_0(X)$, the bornology of all non-empty subsets of
$X$, then $\tau_\mathcal{B}$ will give the \textit{topology of uniform convergence}. We usually denote this topology by $\tau_u$. \end{enumerate} \noindent These particular cases have been widely explored in the literature (see, \cite{kelley, Fswufagt, willard}). For a bornology $\mathcal{B}$ on $X$, in \cite{Suc}, Beer and Levi present a variational form of $\tau_{\mathcal{B}}$ known as the topology of strong uniform convergence on $\mathcal{B}$. \begin{definition}{\normalfont (\cite{Suc})} \normalfont The topology $\tau^{s}_{\mathcal{B}}$ of \textit{strong uniform convergence on $\mathcal{B}$} is determined by a uniformity on $C(X)$ having base consisting of sets of the form $$[B, \epsilon]^s=\left\lbrace (f, g) : \exists~ \delta>0 ~\forall x\in B^\delta,~ |f(x)-g(x)|<\epsilon \right\rbrace~ (B\in\mathcal{B},~ \epsilon>0), $$ where for $B\subseteq X$, $B^\delta=\displaystyle{\bigcup_{y\in B}}\{x\in X: d(x, y)<\delta\}$. \end{definition} \begin{remark}\normalfont \begin{enumerate}[(1)] \item For every bornology $\mathcal{B}$ on $X$, $\overline{\mathcal{B}}= \{A\subseteq \overline{B}: B\in \mathcal{B}\}$ forms a bornology on $X$. It is easy to prove that $\tau_{\mathcal{B}}=\tau_{\overline{\mathcal{B}}}$ and $\tau_{\mathcal{B}}^s=\tau_{\overline{\mathcal{B}}}^s$ on $C(X)$. Hence, from this point of view, we assume that every bornology $\mathcal{B}$ has a closed base $\mathcal{B}_0$. \item Note here that if $\mathcal{B}=\mathcal{F}$ and we extend our function space to $\mathbb{R}^X$, the set of all real-valued functions on $(X,d)$, then $\tau_{\mathcal{B}}^s$ will give the weakest topology on $\mathbb{R}^X$ for which $C(X)$ is closed (see, Corollary 6.8 in \cite{Suc}). 
\end{enumerate} \end{remark} Recall from \cite{flctopology, esaetvs} that a vector space $Z$ equipped with a topology $\tau$ is said to be an \textit{extended locally convex space} (elcs) if $\tau$ is induced by a collection of extended seminorms (functions which satisfy all the properties of a seminorm and may attain the infinity value). Note that the topologies $\tau_{\mathcal{B}}$ and $\tau_{\mathcal{B}}^s$ on $C(X)$ are induced by the collections $\mathscr{P}=\{\rho_B: B\in \mathcal{B}_0\}$ and $\mathscr{P}^s=\{\rho_B^s: B\in \mathcal{B}_0\}$ of extended seminorms, respectively, where $\rho_B(f) =\sup_{x\in B}|f(x)|$ and $\displaystyle{\rho_B^s(f) =\inf_{\delta>0}\left\lbrace \sup_{x\in B^\delta}|f(x)|\right\rbrace}$ for all $f\in C(X)$ (see, \cite{flctopology}). Therefore, both the function spaces $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$ form extended locally convex spaces. One can note that both $\mathscr{P}$ and $\mathscr{P}^s$ are directed families as for every $B_1, B_2\in\mathcal{B}$, we have $B_1\cup B_2\in\mathcal{B}$. Consequently, $\max \{\rho_{B_1}, \rho_{B_2}\}\leq \rho_{B_1\cup B_2}$ and $\max \{\rho^s_{B_1}, \rho^s_{B_2}\}\leq \rho^s_{B_1\cup B_2}$. Therefore, $$\mathcal{N}=\{\rho_B^{-1}([0, r)): r>0 \text{ and } B\in\mathcal{B}\}\text{ and }$$ $$\mathcal{N}^s=\{(\rho_B^s)^{-1}([0, r)): r>0 \text{ and } B\in\mathcal{B}\}$$ give a neighborhood base at $0$ in $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$, respectively. For more details on extended locally convex spaces, we refer to \cite{doelcs, flctopology, belcsaubp, Relcs, esaetvs}. \section{Measure theoretical Decomposition} Recall that $(C(X), \tau_{k})^*$ is isometrically isomorphic to the space of all closed regular Borel measures with compact support (see, Theorem 2.4.1, p. 88 in \cite{bns}). 
Therefore, a natural candidate to describe the dual of $(C(X), \tau_{\mathcal{B}})$ is the collection of all closed regular finite Borel measures with support in $\mathcal{B}$. Let \begin{enumerate} \item $\mathscr{B}(X)$ denote the Borel sigma algebra of $X$; \item $\mathscr{M}(X)$ denote the collection of all finite Borel measures on $\mathscr{B}(X)$; \item $\mathscr{M}_{\mathcal{B}}(X)$ denote the collection of all closed regular, finite, Borel measures on $\mathscr{B}(X)$ supported on $\mathcal{B}$, that is, for every $\mu\in\mathscr{M}_{\mathcal{B}}(X)$, there exists a closed $B\in\mathcal{B}$ such that $|\mu|\left(X\setminus B\right)=0$. \end{enumerate} We use the next proposition to demonstrate that corresponding to every $\mu\in\mathscr{M}_{\mathcal{B}}(X)$, there exists a continuous linear functional on $(C(X), \tau_{\mathcal{B}})$. For $B\in\mathcal{B}$, we define \begin{eqnarray*} C(X)_{fin}^{B}&=& \{f\in C(X): \rho_B(f)<\infty\}\\ &=&\{f\in C(X): \sup_{x\in B}|f(x)|<\infty\}, \text{ and }\\ C(X)_{fin}^{B_s}&=& \{f\in C(X): \rho_B^s(f)<\infty\}\\ &=&\left\lbrace f\in C(X): \inf_{\delta>0}\left(\sup_{x\in B^\delta}|f(x)|\right)<\infty\right\rbrace. \end{eqnarray*} \begin{proposition}\label{every measure gives a continuous map} Suppose $\mu\in \mathscr{M}_{\mathcal{B}}(X)$ is supported on closed $B\in\mathcal{B}$. Then the linear functional $H_\mu(f)=\int_X fd\mu$ for all $f\in C(X)_{fin}^B$ is continuous on $\left(C(X)_{fin}^B, \tau_\mathcal{B}|_{C(X)_{fin}^B}\right)$. \end{proposition} \begin{proof} For every $f\in C(X)_{fin}^B$, we have \begin{eqnarray*} |H_\mu(f)|&=& \left|\int_X fd\mu\right|\leq \int_X|f|d|\mu|\\ & =&\int_B|f|d|\mu|\leq \sup_{x\in B}|f(x)| \int_{B}1 d|\mu|\\ & \leq& \rho_B(f)|\mu|(B). \end{eqnarray*} Hence, $H_\mu$ is continuous on $C(X)_{fin}^B.$ \end{proof} \begin{lemma}{\normalfont(\cite{flctopology, esaetvs})} For every continuous extended seminorm $\rho$ on an elcs $(Z, \tau)$, the following statements hold.
\begin{enumerate}[(1)] \item $Z_{fin}^\rho:=\{x\in Z: \rho(x)<\infty\}$ is a clopen subspace of $(Z, \tau)$. \item If $Z=Z_{fin}^\rho\oplus M$ (algebraic direct sum) for some subspace $M$ of $Z$, then $M$ is a discrete subspace of $(Z, \tau)$. \end{enumerate} \end{lemma} Observe here that for every $B\in \mathcal{B}$, $\rho_B$ gives a continuous extended seminorm on $(C(X), \tau_{\mathcal{B}})$. Therefore, $C(X)_{fin}^{B}$ is a clopen subspace of $(C(X), \tau_{\mathcal{B}})$ and $M_B$ is a discrete subspace of $(C(X), \tau_{\mathcal{B}})$, where $C(X)=C(X)_{fin}^B\oplus M_B$ (an algebraic direct sum; it will turn out to be a topological direct sum). Consequently, the linear map $H_\mu\circ p_B$ is continuous on $(C(X), \tau_{\mathcal{B}})$, where $p_B$ is the projection of $C(X)$ onto $C(X)_{fin}^B$ and $H_\mu$ is the linear map as defined in Proposition \ref{every measure gives a continuous map}. Hence, for every closed regular, finite Borel measure $\mu$ with support in $\mathcal{B}$, we have a continuous linear functional $H_\mu\circ p_B$ on $(C(X), \tau_{\mathcal{B}})$. Since $M_B$ is a discrete subspace of $(C(X), \tau_{\mathcal{B}})$, every linear functional on $M_B$ is continuous. Thus, for every $\mu\in\mathscr{M}_{\mathcal{B}}(X)$ supported on $B$ and every linear functional $H_2$ on $M_B$, the linear functional $F:= H_\mu\circ p_B+H_2\circ q_B$ is continuous on $(C(X), \tau_{\mathcal{B}})$, where $q_B$ is the projection of $C(X)$ onto $M_B$. We next show that every continuous linear functional on $(C(X), \tau_{\mathcal{B}})$ has this form. \\ \noindent We now define the support of a continuous linear functional on $(C(X), \tau_{\mathcal{B}})$. \begin{definition}\normalfont We say a linear functional $F$ on $C(X)$ is \textit{supported on} $B\in\mathcal{B}$ if $F(f)=0$ whenever $f|_B=0$.\end{definition} \begin{lemma}\label{support of a continuous linear functional} Suppose $F$ is a continuous linear functional on $(C(X), \tau_{\mathcal{B}})$.
Then there exists a closed $B\in \mathcal{B}$ such that $F$ is supported on $B$.\end{lemma} \begin{proof} Since $F$ is continuous on $(C(X), \tau_{\mathcal{B}})$, there exists a $B\in \mathcal{B}_0$ such that $|F(f)|\leq \rho_B(f)$ for all $f\in C(X)$. It is easy to see that $F$ is supported on $B$. \end{proof} The next example shows that a continuous linear functional on $(C(X), \tau_{\mathcal{B}})$ can be supported on two distinct sets. \begin{example} Let $X=\mathbb{R}$ and $\mathcal{B}=\{A\subseteq [a, \infty): a\in \mathbb{R}\}$. Consider the linear functional $F$ on $(C(X), \tau_{\mathcal{B}})$ defined by $$F(f)=\int_{0}^{1}f(x)dx.$$ Then $F\in (C(X), \tau_{\mathcal{B}})^*$. It is easy to see that $F$ is supported on both $[0, 1]$ and $[0, 4]$. \end{example} \begin{note}\normalfont One can note here that if $F$ is supported on $A$, then $F$ is supported on $D$ for every $D$ with $A\subseteq D.$\end{note} Recall that one of the important steps in examining barreledness and many other functional properties of $(C(X), \tau_{k})$ is to show that every continuous linear functional on $(C(X), \tau_{k})$ has minimal support in $\mathcal{K}$ (see, \cite{bns}). The next example shows that this may not be true in the case of an arbitrary bornology. However, for a non-zero $F\in (C(X), \tau_{\mathcal{B}})^*$, the collection $\mathscr{S}_F:=\{B\in\mathcal{B}: F \text{ is supported on } B \}$ always satisfies the finite intersection property. \begin{proposition}\label{F is supported on A intersection B} Let $0\neq F\in \left(C(X), \tau_{\mathcal{B}}\right)^*$ be supported on $A, B\in\mathcal{B}$. Then $F$ is supported on the non-empty set $A\cap B$. \end{proposition} \begin{proof} We first show that $A\cap B\neq \emptyset$. Suppose $A\cap B=\emptyset$. By the pasting lemma and the Tietze extension theorem, for each $f\in C(X)$, we have an $h\in C(X)$ such that $h=f$ on $A$ and $h=0$ on $B$. Since $F$ is supported on both $A$ and $B$, $F(f)=F(h)=0$. Thus, $F(f)=0$ for all $f\in C(X)$, which is not possible since $F\neq 0$.
Now, suppose $f|_{A\cap B}=0$. Define $g_0:A\cup B\to \mathbb{R}$ by $g_0=f$ on $A$ and $g_0=0$ on $B$. Let $g$ be a continuous extension of $g_0$ on $X$. Since $g|_B=0$ and $(f-g)|_A=0$ and $F$ is supported on both $A$ and $B$, we have $F(f)=F(g)=0$. Hence, $F$ is supported on $A\cap B$. \end{proof} \begin{example} Let $X=\mathbb{R}$ and $\mathcal{B}=\{A\subseteq [a, \infty): a\in \mathbb{R}\}$. Define a continuous linear functional $F$ on $(C(X), \tau_{\mathcal{B}})$ by $F(f)=\int_{-1}^{0}q_B(f)(x)\,dx$, where $B=[0, \infty)$. Then $F$ is supported on $[a, \infty)$ for all $a\in\mathbb{R}$ (as $[0, a]$ is compact for every $a>0$). Hence, $\bigcap \{B\in \mathcal{B}_0: F \text{ is supported on } B\}=\emptyset$. \end{example} We need the following lemma in order to derive the main theorem of this section. \allowdisplaybreaks \begin{lemma}\label{2} Suppose $F\in (C(X), \tau_{\mathcal{B}})^*$. Then there exists a closed $B\in\mathcal{B}$ and a unique $F_B\in (C_b(B), \parallel\cdot\parallel_\infty)^*$ such that $F_B(f|_B)=F(f)$ for all $f\in C(X)_{fin}^B$, where $C_b(B)$ is the collection of all bounded, continuous functions on $B$ and $\parallel h\parallel_\infty=\sup_{x\in B}|h(x)|$ for all $h\in C_b(B)$. \end{lemma} \begin{proof} Let $B\in \mathcal{B}$ be such that $|F|\leq \rho_B$. Define $F_B(h)=F(\hat{h})$ for all $h\in C_b(B)$, where $\hat{h}$ is any continuous extension of $h$ on $X$. Since $F$ is supported on $B$, $F_B$ is well-defined and unique. Note that $|F_B(f)|\leq \rho_B(f)=\sup_{x\in B}|f(x)|=\|f\|_\infty$ for all $f\in C_b(B)$. Thus, $F_B\in \left(C_b(B), \parallel\cdot\parallel_\infty\right)^*$. \end{proof} If $B\in\mathcal{B}$ and $C(X)=C(X)_{fin}^B\oplus M_B$, then $p_B$ and $q_B$ denote the projections of $C(X)$ onto $C(X)_{fin}^B$ and $M_B$, respectively. \begin{theorem}\label{decomposition of continuous linear functionals} Suppose $F\in (C(X), \tau_{\mathcal{B}})^*$.
Then there exists a closed set $B\in \mathcal{B}$ such that $F=H_\mu\circ p_B+ F\circ q_B$ for some $\mu\in\mathscr{M}_{\mathcal{B}}(X)$ supported on $B$, that is, $F(f)=\int_X fd\mu$ for all $f\in C(X)_{fin}^B$. Moreover, $\|\mu\|=\sup\{|F(f)|: \|f\|_\infty\leq 1\}$, where $\|\mu\|$ represents the total variation of $\mu$, defined by $\|\mu\|:=|\mu|(X)$. \end{theorem} \begin{proof}By Lemma \ref{support of a continuous linear functional}, $F$ is supported on some closed $B\in\mathcal{B}$. By Lemma \ref{2}, there exists a unique $F_B\in (C_b(B), \parallel\cdot\parallel_\infty)^*$ such that $F_{B}(f|_B)=F(f)$ for all $f\in C(X)_{fin}^B$. There exists a closed regular, finite signed Borel measure $\nu$ on $\mathscr{B}(B)$ such that $F_B(f|_B)=\int_{B}f|_Bd\nu$ for all $f\in C(X)_{fin}^B$. Since $\mathscr{B}(B)= \{A\cap B: A\in \mathscr{B}(X)\}$, $\mu(A):=\nu(A\cap B)$ defines a finite measure on $\mathscr{B}(X)$. It is easy to see that $\mu$ is supported on $B$. We now show that $\mu$ is closed regular. Let $A\in \mathscr{B}(X)$. Then $A\cap B\in \mathscr{B}(B)$. Therefore, for $\epsilon>0$, there exist a closed set $C$ and an open set $U$ in $B$ such that $C\subseteq A\cap B\subseteq U$ and $|\nu|(U\setminus C)<\epsilon$. Since $B$ is closed in $X$, $C$ is closed in $X$. Take $V=U\cup (X\setminus B)$. Then $V$ is open in $X$. It is easy to see that $C\subseteq A\subseteq V$, and $|\mu|(V\setminus C)=|\mu|((U\setminus C)\cap B)=|\nu|(U\setminus C)<\epsilon.$ Thus, $\mu$ is closed regular. Note that $F(f)=F_B(f|_B)=\int_Bf|_Bd\nu=\int_Xfd\mu$ for all $f\in C(X)_{fin}^B$. Hence, $F=H_\mu\circ p_B+H_2\circ q_B$ for $H_2=F|_{M_B}$. It is easy to see that $\sup\{|F(f)|: \|f\|_\infty\leq 1\}\leq |\mu|(X)= |\mu|(B)$. Since $\mu$ is closed regular, for $\epsilon>0$, we have a closed set $C$ and an open set $U$ in $X$ such that $C\subseteq B\subseteq U$ and $|\mu|\left(U\setminus C\right) <\epsilon$.
Since $C\cap \left(X\setminus U\right)=\emptyset$, we have a continuous function $h$ on $X$ such that $\left|h-\frac{d\mu}{d|\mu|}\right|\leq \frac{\epsilon}{|\mu|(B)+1}$ on $C$, $h|_{X\setminus U}=0$, and $\|h\|_\infty \leq 1$. Note that \begin{eqnarray*} \left|\int_X hd\mu-|\mu|(B)\right|&\leq& \left|\int_C hd\mu+\int_{U\setminus C} hd\mu-|\mu|(B)\right|\\ &\leq& \frac{\epsilon|\mu|(C)}{|\mu|(B)+1}+|\mu|\left(U\setminus C\right)+|\mu|\left(U\setminus C\right)\\ &\leq& \frac{\epsilon|\mu|(B)}{|\mu|(B)+1}+2|\mu|\left(U\setminus C\right) \leq 3\epsilon. \end{eqnarray*} Thus, $\left|\int_Xhd\mu-|\mu|(B)\right|\leq 3\epsilon$. Consequently, \begin{eqnarray*} |\mu|(B)-3\epsilon &\leq& \left|\int_Xhd\mu\right|\\ &\leq& \sup\{|F(f)|: \|f\|_\infty\leq 1\}. \end{eqnarray*} Since $\epsilon>0$ is arbitrary, $\|\mu\|=\sup\{|F(f)|: \|f\|_\infty\leq 1\}$. \end{proof} \begin{remark}Suppose $\mathcal{B}=\mathcal{K}$, the collection of all non-empty compact subsets of $X$. Then for each $B\in\mathcal{B}$, $\rho_B$ is a seminorm on $C(X)$, which implies that $M_B=\{0\}$. Consequently, $q_B=0$. Hence, every continuous linear functional $F$ on $(C(X), \tau_k)$ has the form $F=H_\mu$ for some finite Borel measure $\mu$ supported on some compact subset of $X$. Since $C_b(X)$ is dense in $(C(X), \tau_{k})$, the function $$\|F\|_{op}=\sup\{|F(f)|: \|f\|_\infty\leq 1\}$$ defines a norm on $(C(X), \tau_{k})^*$. As $\|F\|_{op}=\|\mu\|$, $((C(X), \tau_{k})^*, \|\cdot\|_{op})$ is isometrically isomorphic to $(\mathscr{M}_{\mathcal{K}}, \|\cdot\|)$. \end{remark} \begin{corollary} Suppose $\mathcal{B}\subseteq \mathcal{K}$. Then $((C(X), \tau_{\mathcal{B}})^*, \|\cdot\|_{op})$ is isometrically isomorphic to $(\mathscr{M}_{\mathcal{B}}, \|\cdot\|)$. \end{corollary} \section{Comparison between the duals of $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$} As mentioned earlier, an extended locally convex space (elcs) may not form a topological vector space.
Therefore, we may not directly apply the classical techniques of locally convex spaces to these new spaces. To address this problem, the authors in \cite{flctopology} constructed the finest locally convex topology for an elcs which is still coarser than the given extended locally convex space topology (they call this the flc topology, denoted by $\tau_F$). It is also shown in the same paper that if $\tau_F$ and $\tau_F^s$ are the flc topologies for $(C(X), \tau_{\mathcal{B}})$ and $(C(X), \tau_{\mathcal{B}}^s)$, respectively, then $\tau_F$ coincides with $\tau_F^s$ if and only if $\mathcal{B}$ is shielded from closed sets. A natural question one can ask here is: when do both spaces have the same weak topology? This is equivalent to asking when both spaces have the same dual. We show that the inclusion $(C(X), \tau_{\mathcal{B}})^* \subseteq (C(X), \tau_{\mathcal{B}}^s)^*$ may be strict. We also find a condition under which both spaces have the same dual. \begin{lemma}\label{support of a continuous linear functional on tauBs} Suppose $0\neq F\in (C(X), \tau_{\mathcal{B}}^s)^*$. Then there exists a closed $B\in \mathcal{B}$ such that $F$ is supported on $B^\delta$ for every $\delta>0$. \end{lemma} \begin{proof} It is similar to the proof of Lemma \ref{support of a continuous linear functional}. \end{proof} Recall from \cite{Suc} that a superset $A_1$ of $A$ is said to be a \textit{shield} for $A$ if for every non-empty closed set $C\subseteq X$ with $C\cap A_1=\emptyset$, $C$ is not near to $A$, that is, $C\cap A^\delta=\emptyset$ for some $\delta>0$. A bornology $\mathcal{B}$ is said to be \textit{shielded from closed sets} if every member of $\mathcal{B}$ has a closed shield in $\mathcal{B}$. The next example shows that, in general, $(C(X), \tau_{\mathcal{B}})^*$ may not coincide with $(C(X), \tau_{\mathcal{B}}^s)^*$.
\begin{example}\label{dual of both spaces may not coincide}\normalfont Suppose $X=\mathbb{R}$ and $\mathcal{B}$ is the bornology induced by the base $$\mathcal{B}_0=\{\mathbb{N}\cup A: A \text{ is any non-empty finite subset of } \mathbb{R}\}.$$ Note that if $A$ is any finite subset of $\mathbb{R}$, then there exists a $k>3$ such that $A\cap [k, \infty)=\emptyset$. Since an arbitrary union of a locally finite family of closed sets is closed (see, Corollary 1.1.12, p. 17 in \cite{GTE}), $C:=\bigcup_{n\geq k}\left[n+\frac{1}{n}, n+1-\frac{1}{n}\right]$ is closed in $\mathbb{R}$. It is easy to prove that $C\cap (\mathbb{N}\cup A)=\emptyset$ and $\mathbb{N}^\delta \cap C\neq \emptyset$ for all $\delta>0$. So, $\mathcal{B}$ is not shielded from closed sets. Thus, $\tau_{\mathcal{B}}\subsetneq \tau_{\mathcal{B}}^s$ on $C(X)$. We now construct an $F\in (C(X), \tau_{\mathcal{B}}^s)^*\setminus(C(X), \tau_{\mathcal{B}})^*$. Let $B=\mathbb{N}$. Then $\rho_B^s$ is continuous on $(C(X), \tau_{\mathcal{B}}^s)$. Consequently, $C(X)_{fin}^{B_s}=\{f\in C(X): \rho_B^s(f)<\infty\}$ is clopen in $(C(X), \tau_{\mathcal{B}}^s)$. Define an $h\in C(X)$ by \[ h(x)= \begin{cases} \text{$0,$} &\quad\text{if $x\leq 1$}\\ \text{$n(n+1)(x-n)$,} &\quad\text{if $n\leq x\leq n+\frac{1}{n+1}$ for $n\in \mathbb{N}$}\\ \text{$(n+1)(n+1-x),$} &\quad\text{if $n+\frac{1}{n+1}\leq x\leq n+1$ for $n\in\mathbb{N}$}.\\ \end{cases}\] Clearly, $h(n)=0$ for all $n\in \mathbb{N}$. Suppose $\delta>0$. Then for every positive integer $m$ with $\frac{1}{m}<\delta$, we have $m+\frac{1}{m+1}\in B^\delta$ and $h(m+\frac{1}{m+1})=m$. Consequently, $\sup_{x\in B^\delta}|h(x)|=\infty$. Since this holds for all $\delta>0$, $\rho_B^s(h)=\infty$. Therefore, $h\notin C(X)_{fin}^{B_s}$. Suppose $M_{B_s}$ is any algebraic complement of $C(X)_{fin}^{B_s}$ containing $h$. Then $C(X)=C(X)_{fin}^{B_s}\oplus M_{B_s}$ becomes a topological direct sum.
Consider the linear functional $F$ defined by $$F(f)=\int_{1}^{\frac{3}{2}}q_{B_s}(f)(x)dx \text{ for all } f\in C(X),$$ where $q_{B_s}$ is the projection of $C(X)$ on $M_{B_s}$. Note that $|F(f)|\leq \rho_B^s(f)$ for all $f\in C(X)$. Therefore, $F\in (C(X), \tau_{\mathcal{B}}^s)^*$. It should be noted that $F$ is supported on $\mathbb{N}^\delta$ for every $\delta>0$. Suppose, for contradiction, that $F\in (C(X), \tau_{\mathcal{B}})^*$. By Lemma \ref{support of a continuous linear functional}, there exists a finite set $A\subseteq \mathbb{R}$ such that $F$ is supported on $\mathbb{N}\cup A$. Consider a bounded function $g\in C(X)$ such that $g|_{A\cup \mathbb{N}}=h|_{A\cup\mathbb{N}}$. It is easy to see that $g\in C(X)_{fin}^{B_s}$ and $(g-h)|_{A\cup\mathbb{N}}=0$. Since $q_{B_s}(g-h)=-h$, $F(g-h)=-\int_{1}^{\frac{3}{2}}h(x)dx\neq 0$. We arrive at a contradiction. \end{example} \begin{theorem}\label{condition under which both the spaces have same dual} Suppose $\mathcal{B}$ is a bornology on $X$ with a closed base $\mathcal{B}_0$. Then the following statements are equivalent: \begin{enumerate}[(1)] \item $\tau_{\mathcal{B}}=\tau_{\mathcal{B}}^s$ on $C(X)$; \item $\mathcal{B}$ is shielded from closed sets; \item $(C(X), \tau_{\mathcal{B}})^*=(C(X), \tau_{\mathcal{B}}^s)^*$. \end{enumerate} \end{theorem} \begin{proof} The implications (1)$\Leftrightarrow$(2)$\Rightarrow$(3) follow from Theorem 4.1 in \cite{Suc}. (3)$\Rightarrow$(2). Suppose $B_0\in\mathcal{B}$ does not have any closed shield in $\mathcal{B}$, that is, for every closed set $B\in\mathcal{B}$ with $B_0\subseteq B$, there exists a closed set $C_B$ such that $C_B\cap B=\emptyset$ and $C_B\cap B_0^\delta \neq \emptyset$ for all $\delta >0$. Let $C(X)=C(X)_{fin}^{B_0s}\oplus M_{B_0s}$. Consider the linear functional $F$ on $C(X)$ defined by $F(f):=G(q_{B_0s}(f)),$ where $G$ is any linear functional on $M_{B_0s}$ such that $G(h)=0$ only if $h=0$. Note that $|F(f)|\leq \rho_{B_0}^s(f)$ for all $f\in C(X)$.
Thus, $F\in (C(X), \tau_{\mathcal{B}}^s)^*$. By our hypothesis, $F\in (C(X), \tau_{\mathcal{B}})^*$. By Lemma \ref{support of a continuous linear functional}, $F$ is supported on some closed set $A\in\mathcal{B}$. Take $D=A\cup B_0$. Clearly, $D\in\mathcal{B}$ and $B_0\subseteq D$. Since $B_0$ does not have any closed shield in $\mathcal{B}$, there exists a closed set $C_D$ such that $C_D\cap D=\emptyset$ and $C_D\cap B_0^{\delta}\neq \emptyset$ for all $\delta>0$. Let $x_n\in C_D\cap B_0^{\frac{1}{n}}$. Note that if $x_{n_k}$ is any subsequence of $(x_n)$ which converges to $x$, then there exists a sequence $(b_{n_k})$ in $B_0$ such that $d(x_{n_k}, b_{n_k})<\frac{1}{n_k}\to 0$. Thus, $x\in C_D\cap D$, which is not possible. Therefore, no subsequence of $(x_n)$ is convergent in $X$. Thus, $P=\{x_n:n\in\mathbb{N}\}$ is a closed, discrete subset of $X$. We can always find an $h\in C(X)$ such that $h|_D=0$ and $h(x_n)=n$ for all $n\in\mathbb{N}$. Consequently, $h|_A=0$ and $q_{B_0s}(h)\neq 0$. Thus, $h|_A=0$ and $F(h)\neq 0$. We arrive at a contradiction as $F$ is supported on $A$. \end{proof} \section{Topology of uniform convergence on bounded sets} In Section 2, we have noted that $((C(X), \tau_k)^*, \|\cdot\|_{op})$ is isometrically isomorphic to $(\mathscr{M}_{\mathcal{K}}, \|\cdot\|)$, where $\tau_k$ is the compact open topology on $C(X)$. However, $\|\cdot\|_{op}$ bears no relation to $\mathcal{B}$. In the case of a Hausdorff locally convex space $(Z, \tau)$, the generalization of the operator norm topology is the strong topology, which is the topology of uniform convergence on bounded subsets of $Z$. The suitable adaptation of this topology for an elcs has been studied in \cite{doelcs, belcsaubp, Relcs} (there this topology is denoted by $\tau_{ucb}$). In this section, we first compare $\tau_{ucb}$ with $\|\cdot\|_{op}$.
We then identify the conditions under which $\tau_{ucb}$ is normable, and derive a topology on $\mathscr{M}_\mathcal{B}$ that shares a connection with $(C(X), \tau_{\mathcal{B}})^*$ when endowed with $\tau_{ucb}$. We begin with the following definition. \begin{definition}\normalfont(\cite{doelcs}) Suppose $(Z, \tau)$ is an elcs. Then $A\subseteq Z$ is said to be \textit{bounded} in $(Z, \tau)$ if for every neighborhood $U$ of $0$, there exist an $r>0$ and a finite set $F\subseteq Z$ such that $A\subseteq F+rU$. \end{definition} \noindent The following points about bounded subsets of an elcs $(Z, \tau)$ are either easy to verify or given in \cite{doelcs}. \begin{enumerate}[(1)] \item Every finite subset of $Z$ is bounded. \item If $(Z, \tau)$ is a locally convex space, then $A\subseteq Z$ is bounded if and only if for every neighborhood $U$ of $0$, there exists an $r>0$ such that $A\subseteq rU$. \item The collection of all bounded sets forms a bornology on $Z$. \item If $A$ is bounded, then for all $f\in Z^*$, $f(A)$ is bounded. The converse of this holds if $(Z, \tau)$ forms a locally convex space. \end{enumerate} \begin{definition}\label{def of uniform convergence on bounded sets}\normalfont(\cite{doelcs}) Let $(Z, \tau)$ be an elcs. Then the topology $\tau_{ucb}$, on $Z^*$, of \textit{uniform convergence on bounded subsets of} $(Z, \tau)$ is induced by the collection $\mathcal{P}=\{\rho_A: A ~\text{is a bounded subset of}~ (Z, \tau)\}$ of seminorms on $Z^*$, where $\rho_A(F)=\sup_{x\in A}|F(x)| ~\text{for all}~ F\in Z^*$.\end{definition} \noindent The following points about $\tau_{ucb}$ are given in \cite{doelcs}. \begin{enumerate}[(1)] \item The space $(Z^*, \tau_{ucb})$ forms a locally convex space. \item The collection $\mathcal{N}=\{A^\circ: A \text{ is bounded in } (Z, \tau)\}$ gives a neighborhood base at $0$ in $(Z^*, \tau_{ucb})$, where $A^\circ:=\{F\in Z^*: |F(x)|\leq 1 \text{ for all } x\in A\}$.
\item For an lcs $(Z, \tau)$, $\tau_{ucb}$ coincides with its strong topology. \end{enumerate} \begin{proposition}\label{op forms a norm}\normalfont The map $\|\cdot\|_{op}$ defines a norm on $(C(X), \tau_{\mathcal{B}})^*$ if and only if $\mathcal{B}\subseteq \mathcal{K}$. \end{proposition} \begin{proof} Suppose $B\in\mathcal{B}$ is not compact. Then $C(X)_{fin}^B$ is a proper open subspace of $(C(X), \tau_{\mathcal{B}})$. Let $C(X)=C(X)_{fin}^B\oplus M_B$. Consider a linear functional $F$ on $C(X)$ such that $F(f)= 0$ for all $f\in C(X)_{fin}^B$ and $F(f)\neq 0$ for some non-zero $f\in M_B$. Clearly, $F$ is a non-zero continuous functional on $(C(X), \tau_{\mathcal{B}})$ as $|F(f)|\leq \rho_B(f)$ for all $f\in C(X)$. Since $C_b(X) \subseteq C(X)_{fin}^B$, $\|F\|_{op}=0$. Conversely, if $\mathcal{B}\subseteq\mathcal{K}$, then $C_b(X)$ is dense in $(C(X), \tau_{\mathcal{B}})$. Thus, $\|\cdot\|_{op}$ defines a norm on $(C(X), \tau_{\mathcal{B}})^*$. \end{proof} A natural question is whether the topologies $\tau_{ucb}$ and $\tau_{\|\cdot\|_{op}}$ coincide for $\mathcal{B}=\mathcal{K}$. The next example shows that, in general, this need not be the case. \begin{example}\label{example for operator and strong topology not coincide} Suppose $X=\mathbb{R}$ and $\mathcal{B}=\mathcal{K}$. For each $m\in\mathbb{N}$, consider the function $f_m$ defined by \[ f_m(x)= \begin{cases} \text{$0,$} &\quad\text{if $x<0$}\\ \text{$x$,} &\quad\text{if $0\leq x\leq m$ }\\ \text{$m,$} &\quad\text{if $x>m$}\\ \end{cases}\] Note that if $K=[a, b]$, then $\sup_{x\in K}|f_m(x)|\leq \sup_{x\in K}|f_{m_0}(x)|\leq m_0$ for $m_0>b$. Therefore, $\mathbb{A}=\{f_m:m\in\mathbb{N}\}$ is bounded in $(C(X), \tau_{\mathcal{B}})$. Thus, $\mathbb{A}^\circ$ gives a neighborhood of $0$ in $\left((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb}\right)$. Suppose $\tau_{ucb}=\tau_{\|\cdot\|_{op}}$.
Then there exists an $r>0$ such that $B_{op}[0, r]\subseteq \mathbb{A}^\circ$, where $B_{op}[0, r]=\{F\in (C(X), \tau_{\mathcal{B}})^*: \|F\|_{op}\leq r\}$. Consequently, $\mathbb{A}\subseteq (\mathbb{A}^\circ)_\circ\subseteq (B_{op}[0, r])_\circ$, where $(\mathbb{A}^\circ)_\circ= \{f\in C(X): |F(f)|\leq 1 \text{ for all } F\in \mathbb{A}^\circ\}$. This is not true: for $k\in\mathbb{N}$ with $kr>1$, the continuous linear functional $F(f):=\int_{k}^{k+r}f(x)dx$ belongs to $B_{op}[0, r]$, but $|F(f_k)|=\int_{k}^{k+r}f_k(x)dx=\int_{k}^{k+r}kdx=kr>1$. \end{example} We next find conditions under which the topology $\tau_{ucb}$ is normable on $(C(X), \tau_{\mathcal{B}}^s)^*$ (or $(C(X), \tau_{\mathcal{B}})^*$). We use the following well-known results in the sequel. \begin{enumerate}[(1)] \item A locally convex space topology is normable if and only if it has a bounded neighborhood of $0$ (see Theorem 6.2.1, p. 160 in \cite{tvsnarici}). \item If $(Z, \tau)$ is an elcs and $A^\circ$ is bounded in $(Z^*, \tau_{ucb})$, then $A$ is absorbing in $Z$ (this follows from Theorem 3.2 in \cite{flctopology} and Theorem 8.3.5, p. 234 in \cite{tvsnarici}). \end{enumerate} \begin{proposition}\label{normability of strong topology} Suppose $\tau_{ucb}$ on $(C(X), \tau_{\mathcal{B}}^s)^*$ $(\text{or } (C(X), \tau_{\mathcal{B}})^*)$ is normable. Then $\mathcal{B}\subseteq \mathcal{K}$. \end{proposition} \begin{proof} Let $\mathbb{B}$ be a closed, convex, and bounded subset of $(C(X), \tau_{\mathcal{B}}^s)$ such that $\mathbb{B}^\circ$ gives a bounded neighborhood of $0$ in $((C(X), \tau_{\mathcal{B}}^s)^*, \tau_{ucb})$. Consequently, $\mathbb{B}$ is an absorbing subset of $C(X)$. Let $A\in \mathcal{B}$ be closed. Then $\mathbb{B}\subseteq \rho_A^{-1}([0, r))+\mathbb{F}$ for some $r>0$ and finite subset $\mathbb{F}$ of $C(X)$. Therefore, $\mathbb{B}\subseteq C(X)_{fin}^A\oplus M$ for some finite dimensional subspace $M$ of $C(X)$.
If $A$ is not compact, then there exists a closed, discrete subset $D=\{x_k:k\in\mathbb{N}\}$ of $A$. For $n\in\mathbb{N}$, consider the continuous function $h_n:X\to \mathbb{R}$ such that \[ h_n(x_k)= \begin{cases} \text{$0,$} &\quad\text{when $k<n$}\\ \text{$k^{2n},$} &\quad\text{when $k\geq n$}\\ \end{cases}\] Note that if $\sum_{j=1}^{m}\alpha_j h_{n_j}=0$ with $n_1>n_2>\dots>n_m$, then evaluating at $x_{n_j}$ gives $\sum_{i=j}^{m}\alpha_i n_j^{2n_i}=0$ for each $1\leq j\leq m$; an induction starting from $j=m$ yields $\alpha_j=0$ for all $j$. Thus, the set $\mathbb{K}=\{h_n: n\in\mathbb{N}\}$ is linearly independent. Since $\mathbb{B}$ is absorbing, $\mathbb{K}\subseteq C(X)_{fin}^A\oplus M$. We next show that $\text{span}(\mathbb{K})\cap C(X)_{fin}^A=\{0\}$. Let $h:=\sum_{i=1}^{m}\alpha_i h_{i}$ with $\alpha_m>0$ (which we may assume after replacing $h$ by $-h$) and $m>1$. Then for $k>\max\{m^2, \frac{|\alpha_i|}{\alpha_m}: 1\leq i\leq m \}$, we have \begin{eqnarray*} h(x_k)&=&\sum_{i=1}^{m}\alpha_i h_{i}(x_k)= \sum_{i=1}^{m} \alpha_i k^{2i}\\ &\geq& \alpha_mk^{2m}-\sum_{i=1}^{m-1}|\alpha_i|k^{2i}\geq \alpha_mk^{2m}-k^{2(m-1)}\,k\alpha_m \sum_{i=1}^{m-1}1\\ &\geq& \alpha_m k^{2m-1}(k-m^2)>0. \end{eqnarray*} Since $h(x_k)\to\infty$ and $x_k\in A$, we get $h\notin C(X)_{fin}^A$. Hence, $\text{span}(\mathbb{K})\cap C(X)_{fin}^A=\{0\}$. Consequently, the codimension of $C(X)_{fin}^A$ in $C(X)_{fin}^A\oplus M$ is infinite as $C(X)_{fin}^A\oplus\text{span}(\mathbb{K}) \subseteq C(X)_{fin}^A\oplus M$, which is not possible.\end{proof} Recall that an lcs $(Z, \tau)$ is \textit{quasibarreled} if every \textit{bornivorous} barrel (an absolutely convex, closed set that absorbs all bounded subsets of $Z$) is a neighborhood of $0$ (see \cite{hanslocally, tvsnarici}). \begin{proposition}\label{quasibarreled} Suppose $(Z, \tau)$ is a quasibarreled lcs. Then the strong dual $(Z^*, \tau_s)$ is normable if and only if $(Z, \tau)$ is normable. \end{proposition} \begin{proof} Suppose $(Z^*, \tau_s)$ is normable. Then its strong dual $Z^{**}$ is normable.
Since $(Z, \tau)$ is quasibarreled, $Z$ is isomorphic to its canonical image $J(Z)\subseteq Z^{**}$. Hence, $(Z, \tau)$ is normable. \end{proof} Suppose $(Z, \tau)$ is an elcs. We denote by $\tau_F$ the finest locally convex topology (flc topology) for $(Z, \tau)$ which is coarser than $\tau$. The space $(Z, \tau_F)$ is called the finest space for $(Z, \tau)$ (see \cite{spoens, flctopology}). \begin{theorem}\label{normability thm} Suppose $\mathcal{B}$ is a bornology on $X$ such that the finest space $(C(X), \tau_F)$ for $(C(X), \tau_{\mathcal{B}})$ is quasibarreled. Then the following statements are equivalent. \begin{enumerate}[$(1)$] \item The topology $\tau_{ucb}$ on the dual of $(C(X), \tau_{\mathcal{B}})$ is normable; \item $\tau_{ucb}$ on the dual of $(C(X), \tau_{\mathcal{B}}^s)$ is normable; \item $\tau_{\mathcal{B}}$ is normable on $C(X)$; \item $\tau_{\mathcal{B}}^s$ is normable on $C(X)$; \item $X$ is compact and $X\in \mathcal{B}$. \end{enumerate} \end{theorem} \begin{proof} The equivalence $(1)\Leftrightarrow(2)$ follows from Theorem 4.1 of \cite{Suc} and Proposition \ref{normability of strong topology}. The equivalences $(3)\Leftrightarrow (4)\Leftrightarrow(5)$ are easy to prove, as for $B\in \mathcal{B}$, $\rho_{B}$ (or $\rho_{B}^s$) gives a norm on $C(X)$ if and only if $B=X$ and $B$ is compact. $(1)\Rightarrow(3).$ By Proposition \ref{normability of strong topology}, $\mathcal{B}\subseteq \mathcal{K}$. Therefore, $\tau_{\mathcal{B}}=\tau_{F}$. Hence, by Proposition \ref{quasibarreled}, $\tau_{\mathcal{B}}$ is normable. $(5)\Rightarrow(1)$. If $X\in\mathcal{B}$ and $X$ is compact, then $\tau_\mathcal{B}$ is induced by the supremum norm. Consequently, its strong dual is normable. \end{proof} \begin{corollary} For every metric space $X$, the strong dual of $(C(X), \tau_k)$ (respectively, $(C(X), \tau_{p})$) is normable if and only if $X$ is compact (respectively, finite).
\end{corollary} \begin{proof} For every metric space $X$, $(C(X), \tau_k)$ and $(C(X), \tau_{p})$ are always quasibarreled (see Corollary 3 and Theorem 5, p. 234 in \cite{hanslocally}). The result now follows from Theorem \ref{normability thm}. \end{proof} We have noted in Proposition \ref{op forms a norm} that the operator norm may not define a norm on $(C(X), \tau_{\mathcal{B}})^*$ when $\mathcal{B}\supsetneq \mathcal{K}$. Therefore, we introduce a topology $\sigma$ on $\mathscr{M}_\mathcal{B}(X)$ that establishes a strong relationship with $((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb})$. In the proof of Theorem \ref{decomposition of continuous linear functionals}, we have seen that if $\mu\in\mathscr{M}_{\mathcal{B}}(X)$, then $$\|\mu\|=\sup\left\lbrace \int_X fd\mu: \|f\|_\infty\leq 1\right\rbrace.$$ If we replace $\left\lbrace f: \|f\|_\infty\leq 1\right\rbrace$ by an arbitrary bounded set $\mathbb{B}$ of $(C(X), \tau_{\mathcal{B}})$, then the map $\omega_\mathbb{B}(\mu)=\sup_{f\in \mathbb{B}}\left|\int_Xfd\mu\right|$ for $\mu\in\mathscr{M}_{\mathcal{B}}$ may take infinite values. However, if $D\in\mathcal{B}$ is such that $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$, then for every bounded subset $\mathbb{B}$ of $C(X)_{fin}^D$, $\omega_\mathbb{B}$ defines a seminorm on $\mathscr{M}_{\mathcal{B}}$. The spaces admitting such a $D$ are referred to in the literature as fundamental extended locally convex spaces. \begin{definition}\normalfont (\cite{flctopology, esaetvs}) An elcs $(Z, \tau)$ is said to be a \textit{fundamental elcs} if $Z_{fin}$ is open in $(Z, \tau)$, where $$Z_{fin}=\bigcap\left\lbrace Z_{fin}^\rho: \rho \text{ is a continuous extended seminorm on } (Z, \tau)\right\rbrace.$$ \end{definition} \begin{proposition} The following statements are equivalent for a bornology $\mathcal{B}$ on $X$. \begin{enumerate}[(1)] \item $(C(X), \tau_{\mathcal{B}})$ forms a fundamental elcs.
\item There exists a $D\in\mathcal{B}$ such that $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$. \end{enumerate} \end{proposition} \begin{proof} Suppose $D\in\mathcal{B}$ and $r>0$ are such that $\rho_{D}^{-1}([0,r))\subseteq \bigcap_{B\in\mathcal{B}} C(X)_{fin}^B.$ It is now easy to see that $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$. Conversely, if $\rho$ is any continuous extended seminorm on $(C(X), \tau_{\mathcal{B}})$, then $\rho\leq \rho_B$ for some $B\in\mathcal{B}$. Consequently, $\bigcap_{B\in\mathcal{B}} C(X)_{fin}^B\subseteq C(X)_{fin}^\rho$ for every continuous extended seminorm $\rho$ on $(C(X), \tau_\mathcal{B}).$ Thus, $(C(X), \tau_{\mathcal{B}})$ forms a fundamental elcs. \end{proof} Now, suppose $\mathcal{B}$ is a bornology such that $(C(X), \tau_{\mathcal{B}})$ forms a fundamental elcs. Then there exists a $D\in\mathcal{B}$ such that $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$. Let $\sigma$ be the topology on $\mathscr{M}_{\mathcal{B}}(X)$ induced by the directed family $\{\omega_\mathbb{B}:\mathbb{B} \text{ is a bounded subset of } C(X)_{fin}^D\}$ of seminorms. Then $(\mathscr{M}_\mathcal{B}(X), \sigma)$ forms a locally convex space. \begin{theorem}\label{tauucb on the dual of C(X)} Suppose $\mathcal{B}$ is a bornology on $X$ such that $(C(X), \tau_{\mathcal{B}})$ forms a fundamental elcs. Then there exists a $D\in\mathcal{B}$ such that $((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb})$ is isomorphic to the product space $(\mathscr{M}_\mathcal{B}(X), \sigma)\times (M^*, \tau_{w^*})$, where $C(X)=C(X)_{fin}^D\oplus M$ and $\tau_{w^*}$ is the weak$^*$ topology on the dual $M^*$ of $M$. \end{theorem} \begin{proof} Let $D\in\mathcal{B}$ be such that $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$. Suppose $C(X)=C(X)_{fin}^D\oplus M$.
Consider the map $$\Psi: (\mathscr{M}_\mathcal{B}(X), \sigma)\times (M^*, \tau_{w^*})\to ((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb})$$ defined by $$\Psi(\mu, H)(f)= \int_{X} p_D(f)d\mu + H(q_D(f)) \text{ for } f\in C(X),$$ where $p_D$ and $q_D$ are the projections of $C(X)$ onto $C(X)_{fin}^D$ and $M$, respectively. Clearly, $\Psi$ is linear. If $\int_{X} p_D(f)d\mu + H(q_D(f))=0$ for all $f\in C(X)$, then taking $f\in M$ gives $H=0$, and since every $f$ with $\|f\|_{\infty}\leq 1$ lies in $C(X)_{fin}^D$, we also obtain $\|\mu\|=\sup\left\lbrace \left| \int_{X} fd\mu\right| : \|f\|_{\infty}\leq 1\right\rbrace=0$. Consequently, $\mu=0$ and $H=0$. Thus, $\Psi$ is injective. Since $C(X)_{fin}^D\oplus M$ is a topological direct sum and $C(X)_{fin}^D\subseteq C(X)_{fin}^B$ for all $B\in\mathcal{B}$, by Theorem \ref{decomposition of continuous linear functionals}, $\Psi$ is surjective. For the continuity of $\Psi$, let $\mathbb{B}$ be a bounded set in $(C(X), \tau_{\mathcal{B}})$. Clearly, $p_D(\mathbb{B})$ is a bounded subset of $C(X)_{fin}^D$. Suppose $r>0$ and $\mathbb{F}$ is a finite subset of $M$ such that $\mathbb{B}\subseteq \rho_{D}^{-1}([0, r))+ \mathbb{F}$. Consequently, $q_D(\mathbb{B})\subseteq q_D(\mathbb{F})$, a finite subset of $M$. Note that \begin{eqnarray*} \sup_{f\in \mathbb{B}} \left|\int_{X} p_D(f) d\mu+ H(q_D(f)) \right|&\leq& \sup_{f\in \mathbb{B}} \left|\int_{X} p_D(f) d\mu\right|+ \sup_{f\in \mathbb{B}}\left|H(q_D(f))\right|\\ &\leq& \sup_{f\in p_D(\mathbb{B})} \left|\int_{X} f d\mu\right|+ \sup_{f\in q_D(\mathbb{F})}\left|H(f)\right|\\ &=& \omega_{p_D(\mathbb{B})}(\mu)+\rho_{q_D(\mathbb{F})}(H). \end{eqnarray*} Hence, $\Psi$ is continuous. Now, let $\mathbb{B}$ and $\mathbb{F}$ be any bounded and finite subsets of $C(X)_{fin}^D$ and $M$, respectively. Then $\mathbb{B}\cup\mathbb{F}$ is bounded in $(C(X), \tau_{\mathcal{B}})$ and $$\sup_{f\in \mathbb{B}}\left|\int_X fd\mu\right|+\sup_{f\in \mathbb{F}}|H(f)|\leq 2\sup_{f\in \mathbb{B}\cup \mathbb{F}}\left|\int_X p_D(f)d\mu+H(q_D(f))\right|.$$ Therefore, $\Psi^{-1}$ is continuous. Hence, $\Psi$ is an isomorphism.
\end{proof} Note that for every relatively compact set $B\subseteq X$, $\rho_B$ forms a seminorm on $C(X)$. Consequently, $M_B=\{0\}$. Therefore, for $\mathcal{B}\subseteq \mathcal{K}$, $\sigma$ is induced by $\{\omega_{\mathbb{B}}: \mathbb{B} \text{ is bounded in } (C(X), \tau_{\mathcal{B}})\}$. Thus, the following theorem follows from Theorem \ref{tauucb on the dual of C(X)}. \begin{theorem} For every $\mathcal{B}\subseteq \mathcal{K}$, $((C(X), \tau_{\mathcal{B}})^*, \tau_{ucb})$ is isomorphic to $(\mathscr{M}_\mathcal{B}(X), \sigma)$. \end{theorem} \begin{remark}\normalfont If $X$ is compact and $X\in \mathcal{B}$, then $\sigma$ and $\tau_{\mathcal{B}}$ are induced by the total variation norm $\|\cdot\|$ and the supremum norm $\|\cdot\|_\infty$, respectively. \end{remark} \begin{remark} Observe that whenever $\mathcal{K} \subsetneq \mathcal{B}$, the operator norm $\|\cdot\|_{op}$ no longer defines a norm on $(C(X), \tau_{\mathcal{B}})^*$. Therefore, the topology $\sigma$ may serve as a suitable candidate for studying the functional-analytic properties of the function space $(C(X), \tau_{\mathcal{B}})$. \end{remark} \bibliographystyle{plain} \bibliography{reference_file} \end{document}
2412.19334v1
http://arxiv.org/abs/2412.19334v1
The cuspidal cubic and line arrangements with only triple points
\documentclass[11pt,english]{amsart} \usepackage[T1]{fontenc} \usepackage[latin1]{inputenc} \usepackage{verbatim} \usepackage{amstext} \usepackage{amsthm} \usepackage{amssymb} \makeatletter \numberwithin{equation}{section} \numberwithin{figure}{section} \theoremstyle{plain} \newtheorem{thm}{\protect\theoremname} \theoremstyle{remark} \newtheorem{rem}[thm]{\protect\remarkname} \theoremstyle{plain} \newtheorem{prop}[thm]{\protect\propositionname} \usepackage{tikz} \usepackage{pgfplots} \makeatother \makeatother \usepackage{babel} \providecommand{\propositionname}{Proposition} \providecommand{\remarkname}{Remark} \providecommand{\theoremname}{Theorem} \begin{document} \addtolength{\textwidth}{0mm} \addtolength{\hoffset}{-0mm} \addtolength{\textheight}{0mm} \addtolength{\voffset}{-0mm} \global\long\def\AA{\mathbb{A}}\global\long\def\CC{\mathbb{C}} \global\long\def\BB{\mathbb{B}} \global\long\def\PP{\mathbb{P}} \global\long\def\QQ{\mathbb{Q}} \global\long\def\RR{\mathbb{R}} \global\long\def\FF{\mathbb{F}} \global\long\def\DD{\mathbb{D}} \global\long\def\NN{\mathbb{N}}\global\long\def\ZZ{\mathbb{Z}} \global\long\def\HH{\mathbb{H}} \global\long\def\Gal{{\rm Gal}} \global\long\def\bA{\mathbf{A}} \global\long\def\kP{\mathfrak{P}} \global\long\def\kQ{\mathfrak{q}} \global\long\def\ka{\mathfrak{a}}\global\long\def\kP{\mathfrak{p}}\global\long\def\kn{\mathfrak{n}}\global\long\def\km{\mathfrak{m}} \global\long\def\cA{\mathfrak{\mathcal{A}}}\global\long\def\cB{\mathfrak{\mathcal{B}}}\global\long\def\cC{\mathfrak{\mathcal{C}}}\global\long\def\cD{\mathcal{D}}\global\long\def\cH{\mathcal{H}}\global\long\def\cK{\mathcal{K}} \global\long\def\cF{\mathcal{F}} \global\long\def\cI{\mathfrak{\mathcal{I}}}\global\long\def\cJ{\mathcal{J}} \global\long\def\cL{\mathcal{L}}\global\long\def\cM{\mathcal{M}}\global\long\def\cN{\mathcal{N}}\global\long\def\cO{\mathcal{O}}\global\long\def\cP{\mathcal{P}}\global\long\def\cR{\mathcal{R}}\global\long\def\cS{\mathcal{S}}\global\long\def\cW{\mathcal{W}} 
\global\long\def\cQ{\mathcal{Q}}\global\long\def\kBS{\mathfrak{B}_{6}}\global\long\def\kR{\mathfrak{R}}\global\long\def\kU{\mathfrak{U}}\global\long\def\kUn{\mathfrak{U}_{9}}\global\long\def\ksU{\mathfrak{U}_{7}} \global\long\def\a{\alpha} \global\long\def\b{\beta} \global\long\def\d{\delta} \global\long\def\D{\Delta} \global\long\def\L{\Lambda} \global\long\def\g{\gamma}\global\long\def\om{\omega} \global\long\def\G{\Gamma} \global\long\def\d{\delta} \global\long\def\D{\Delta} \global\long\def\e{\varepsilon} \global\long\def\k{\kappa} \global\long\def\l{\lambda} \global\long\def\m{\mu} \global\long\def\o{\omega} \global\long\def\p{\pi} \global\long\def\P{\Pi} \global\long\def\s{\sigma} \global\long\def\S{\Sigma} \global\long\def\t{\theta} \global\long\def\T{\Theta} \global\long\def\f{\varphi} \global\long\def\ze{\zeta} \global\long\def\deg{{\rm deg}} \global\long\def\det{{\rm det}} \global\long\def\Dem{Proof: } \global\long\def\ker{{\rm Ker}} \global\long\def\im{{\rm Im}} \global\long\def\rk{{\rm rk}} \global\long\def\car{{\rm car}} \global\long\def\card{{\rm Card }} \global\long\def\codim{{\rm codim}} \global\long\def\coker{{\rm Coker}} \global\long\def\pgcd{{\rm pgcd}} \global\long\def\ppcm{{\rm ppcm}} \global\long\def\la{\langle} \global\long\def\ra{\rangle} \global\long\def\Alb{{\rm Alb}} \global\long\def\Jac{{\rm Jac}} \global\long\def\Disc{{\rm Disc}} \global\long\def\Tr{{\rm Tr}} \global\long\def\Nr{{\rm Nr}} \global\long\def\NS{{\rm NS}} \global\long\def\Pic{{\rm Pic}} \global\long\def\Km{{\rm Km}}\global\long\def\rk{{\rm rk}}\global\long\def\Hom{{\rm Hom}} \global\long\def\End{{\rm End}} \global\long\def\aut{{\rm Aut}} \global\long\def\SSm{{\rm S}} \global\long\def\psl{{\rm PSL}} \global\long\def\cu{{\rm (-2)}} \global\long\def\mod{{\rm \,mod\,}} \global\long\def\cros{{\rm Cross}} \global\long\def\nt{z_{o}} \global\long\def\co{\mathfrak{\mathcal{C}}_{0}} \global\long\def\ldt{\Lambda_{\{2\},\{3\}}}
\global\long\def\ltd{\Lambda_{\{3\},\{2\}}}\global\long\def\lldt{\lambda_{\{2\},\{3\}}} \global\long\def\ldq{\Lambda_{\{2\},\{4\}}} \global\long\def\lldq{\lambda_{\{2\},\{4\}}} \subjclass[2000]{Primary: 05B35 14N20 52C30 } \title{the cuspidal cubic and line arrangements with only triple points} \author{Xavier Roulleau} \begin{abstract} We describe a new infinite family of line arrangements in the projective plane with only triple point singularities and recover previously known examples. \end{abstract} \maketitle \section{Introduction} Line arrangements, i.e. finite unions of lines in the projective plane, are studied in various domains: algebra, topology, combinatorics... A double (respectively triple) point on a line arrangement is a point where exactly $2$ (respectively $3$) lines of the arrangement meet. The celebrated Sylvester-Gallai Theorem asserts that any line arrangement over the real field has at least one double point, or is the trivial example: the union of lines passing through the same point. Hirzebruch \cite{Hirzebruch} then proved that over the complex field, a line arrangement is either the trivial example or possesses double or triple points. Up to now, there exists a unique known non-trivial example of a complex line arrangement with only triple points. Since then, the construction of line arrangements with many or only triple points has attracted attention; see e.g. \cite{HKS} for a historical account. In the present paper, we give a new construction, over finite fields, of line arrangements with only triple points. Line arrangements with only triple points are also interesting for their combinatorics, linked with Steiner triple systems, which are sets $T$ of subsets of order $3$ of a finite set such that any subset of order two is contained in a unique subset $t\in T$. Before presenting our results, let us introduce some notation and recall some properties of line arrangements.
If $\cL=\ell_{1}+\dots+\ell_{n}$ is a line arrangement labeled by $\{1,\dots,n\}$, the data $M(\cL)$ of the triples $\{i,j,k\}\subset\{1,\dots,n\}$ such that the lines $\ell_{i},\ell_{j}$ and $\ell_{k}$ meet at a common point is called the matroid associated to $\cL$; it encodes the combinatorics of $\cL$. If $\cL$ has only triple points, $M(\cL)$ is a Steiner triple system. If $\g$ is a projective transformation of the plane, one may define $\g\cL=\g\ell_{1}+\dots+\g\ell_{n}$. Both line arrangements $\cL,\,\g\cL$ have the same combinatorics: $M(\cL)=M(\g\cL)$. Given a matroid $M$, a labelled line arrangement $\cL$ such that $M(\cL)=M$ is called a realization of $M$. Given a field $K$, there exists a scheme $\cR(M)_{/K}$ parametrizing the realizations over $K$, and the quotient of these realizations modulo the action of $\text{PGL}_{3}(K)$ is called the realization space. One says that a line arrangement over a field $K$ is rigid if $\cR(M)_{/K}$ is zero dimensional. The aim of the present paper is to describe a new infinite family of line arrangements in the plane with only triple points, and to study some of their associated realization spaces: \begin{thm} For any power $q$ of $3$, there exists a line arrangement $\cL_{q}$ of $q$ lines over $\FF_{q}$ with $\tfrac{q(q-1)}{6}$ triple points and no other singularities. \\ Let $M_{q}=M(\cL_{q})$ be the matroid associated to $\cL_{q}$. For $q=3^{n}$ and $n\geq3$, the matroid $M_{q}$ has no realizations over any field of characteristic $\neq3$. \\ The realization space $\cR(M_{27})_{/\overline{\FF}_{3}}$ is $3$ dimensional with a two dimensional immersed component. The reduced scheme of $\cR(M_{27})_{/\overline{\FF}_{3}}$ is an open sub-scheme of a quadric in $\PP^{4}$. \end{thm} We obtain the line arrangement $\cL_{q}$ as the dual of the $q$ points defined over $\FF_{q}$ on the smooth part of the cuspidal cubic $y^{2}z=x^{3}$.
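The counts in the theorem above can be checked by brute force on small cases: by the collinearity criterion recalled in Section 2, three smooth points of the cuspidal cubic are collinear exactly when their parameters sum to $0$ in $(K,+)\cong(\FF_{3})^{n}$, so only the additive group matters. The following sketch (illustrative only; the function name is ours, not from the paper) also verifies that the matroid $M_{q}$ is a Steiner triple system.

```python
from itertools import combinations, product

def triple_lines(n, p):
    """Unordered triples {a, b, c} of distinct vectors of (F_p)^n with a + b + c = 0,
    i.e. the 3-rich lines through the parametrised points of the cuspidal cubic."""
    pts = list(product(range(p), repeat=n))
    return {frozenset(t) for t in combinations(pts, 3)
            if all(sum(coord) % p == 0 for coord in zip(*t))}

# Characteristic 3: q(q-1)/6 three-rich lines for q = 3^n, and every pair of
# points lies on exactly one such line (Steiner triple system).
for n in (1, 2, 3):
    q = 3 ** n
    lines = triple_lines(n, 3)
    assert len(lines) == q * (q - 1) // 6
    for pair in combinations(product(range(3), repeat=n), 2):
        assert sum(1 for t in lines if set(pair) <= t) == 1
```

For $q=9$ this recovers the $12$ triple points of the dual Hesse combinatorics discussed below.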
The previously known line arrangements with only triple points are: a) The trivial example: three lines meeting at one point. b) Over a field containing a primitive third root of unity (in particular over any algebraically closed field of characteristic $\neq3$), one has the $\text{Ceva}(3)$ line arrangement with $9$ lines, \[ (x^{3}-y^{3})(x^{3}-z^{3})(z^{3}-y^{3})=0, \] and $12$ triple points. It is the dual of the Hesse configuration, i.e. of the $9$ flex points of a smooth cubic. In characteristic $3$, there is also a line arrangement of $9$ lines having the same associated matroid as $\text{Ceva}(3)$: it consists of the $9$ lines in $\PP^{2}(\FF_{3})$ not containing a given point $p$ of $\PP^{2}(\FF_{3})$. These line arrangements are rigid. c) The Fano plane: the $7$ lines in $\PP^{2}(\FF_{2})$. It is also a rigid line arrangement. d) For any $n\geq4$, the sequence of line arrangements $\cC_{2^{n}-1}'$ with $2^{n}-1$ lines defined over $\overline{\FF}_{2}$ constructed in \cite{HKS}. Their construction is obtained by taking the dual of a generic projection to $\PP^{2}$ of the $2^{n}-1$ points of $\PP^{n-1}(\FF_{2})$ in $\PP^{n-1}$. e) A rigid example over $\FF_{11}$ with $19$ lines (and $57$ triple points) constructed in \cite{HKS}. It is an open question \cite{Urzua} whether or not a) and b) are the unique examples of line arrangements over $\CC$ with only triple points. For any $n\geq2$, we also obtain over $\overline{\FF}_{2}$ some line arrangements $\cC_{2^{n}-1}$ with $2^{n}-1$ lines and only triple points, by using the dual of the non-zero points of $\AA^{1}(\FF_{2^{n}})$ in the smooth part of the cuspidal cubic curve (which is isomorphic to $\AA^{1}$). For $n\geq4$, we obtain that our line arrangement $\cC_{2^{n}-1}$ and the line arrangement $\cC_{2^{n}-1}'$ of example d) define isomorphic matroids; thus $\cC_{2^{n}-1}$ is another construction of $\cC_{2^{n}-1}'$.
Our construction of $\cC_{2^{n}-1}$ slightly generalizes that of $\cC_{2^{n}-1}'$ since it also holds for $n=2$ and $3$. The line arrangement $\cC_{3}$ is the trivial example, and $\cC_{7}$ is the Fano plane, example c). About the line arrangements $\cL_{q}$: $\cL_{3}$ is the trivial example, and $\cL_{9}$ is example b). We checked that the $19$ points in the dual of example e) are not contained in a cubic curve. This example is the only known one that does not appear to be related to cubic curves. \section{Line arrangements using the cuspidal cubic} \subsection{The cuspidal cubic} Let $C\hookrightarrow\PP^{2}$ be the cuspidal cubic over a field $K$ with a flex point $O$ defined over $K$. Let $C^{*}$ be the complement of the singular point. As for a smooth cubic curve, one may define a composition law (denoted by $+$) on $C^{*}$ by chords and tangents, so that $O$ is neutral. \begin{thm} \label{thm:Silver}(See e.g. \cite[Chapter III, Proposition 2.5]{Silverman}). The composition law $+$ makes $C^{*}(K)$ an abelian group isomorphic to $(K,+)$. Three points $a,b,c\in C^{*}(K)$ (identified with $K$) are on a line if and only if \[ a+b+c=0\text{ in }K. \] \end{thm} In particular the tangent to $C^{*}$ at a point $a$ cuts $C^{*}$ at a residual point $b$ such that $2a+b=0$. Suppose that $K$ is a finite field of order $\#K=q=p^{n}$, where $p$ is the characteristic of $K$. By Theorem \ref{thm:Silver}, the number of lines containing three distinct points of $K$ equals the number of sets \[ S_{a,b}=\{a,b,-(a+b)\} \] which have order $3$. One has $|S_{a,b}|=3$ if and only if \[ a\neq b\text{ and }a\neq-(a+b)\text{ and }b\neq-(a+b), \] which is equivalent to \begin{equation} a\neq b\text{ and }b\neq-2a\text{ and }2b\neq-a.\label{eq:UN} \end{equation} \subsection{Characteristic $3$} \subsubsection{General construction} Suppose that $\text{Char}(K)=3$. Let $a,b\in K$ and $S_{a,b}=\{a,b,-(a+b)\}$.
Since $-2=1$ in $K$, the conditions \eqref{eq:UN} required to have $\#S_{a,b}=3$ reduce to $a\neq b$. There are then $\tfrac{1}{3}\left(\begin{array}{c} q\\ 2 \end{array}\right)=\tfrac{q(q-1)}{6}$ lines containing $3$ points, and through each point in $K$ there are $\tfrac{1}{2}(q-1)$ such lines. That gives a \[ \left(q_{\tfrac{1}{2}(q-1)},\,\left(\tfrac{q(q-1)}{6}\right)_{3}\right) \] configuration of points and lines. Let $\cL_{q}$ be the dual of the $q$ points of $K$. \begin{thm} \label{thm:arrangement-Char3}The line arrangement $\cL_{q}$ of $q$ lines has $\tfrac{q(q-1)}{6}$ triple points and no other singularities. \end{thm} \begin{proof} The duals of the lines containing three points of $K$ are triple points of the line arrangement $\cL_{q}$, thus $\cL_{q}$ has $\tfrac{q(q-1)}{6}$ triple points. A line arrangement with $q$ lines has at most $\tfrac{1}{3}\tfrac{q(q-1)}{2}$ triple points, and if that bound is reached there are no other singularities. \end{proof} We remark that every point $a$ on $C^{*}$ is a flex: for any $a$ one has $3a=0$. Geometrically, the intersection of the tangent line at $a$ with the cubic is three times the point $a$. \begin{rem} As a group, or $\FF_{p}$-vector space, $\FF_{q}$ is isomorphic to $(\FF_{p})^{n}$, where $q=p^{n}$. Since the relation $a+b+c=0$ is preserved under such an isomorphism, the group $GL_{n}(\FF_{p})$ induces symmetries on the matroid $M_{q}$ associated to $\cL_{q}$. For $p=3$, since for any element $t$ one has $(a+t)+(b+t)+(c+t)=a+b+c$, the relation $a+b+c=0$ is preserved by translation; thus the automorphism group of $M_{3^{n}}$ is the general affine group of $\AA_{/\FF_{3}}^{n}$, the semi-direct product of $(\ZZ/3\ZZ)^{n}$ with $GL_{n}(\FF_{3})$. \end{rem} \subsubsection{Examples in characteristic $3$} \noindent $\bullet$ The line arrangement $\cL_{3}$ is $3$ lines through the same point. $\bullet$ The line arrangement $\cL_{9}$ has $9$ lines and $12$ triple points.
The matroid associated to $\cL_{9}$ is the same as the matroid associated to the Ceva(3) line arrangement (after suitable labeling of the lines). $\bullet$ Let $M_{27}$ be the matroid associated to the line arrangement $\cL_{27}$. Using \cite{Oscar}, one computes that the realization space $\cR(M_{27})$ over $\overline{\FF}_{3}$ of $M_{27}$ is $3$ dimensional with a non-reduced immersed component $Z$ of dimension $2$ and multiplicity $2$; a model of it is an open sub-scheme of a rank $3$ quadric in $\AA^{4}$. We give the equations for that model of $\cR(M_{27})$ in $\AA^{4}$ and the associated $27$ normal vectors of the line arrangements in the appendix of the arXiv version of the paper. Using the generic point of $\cR(M_{27})$, one can check with the MAGMA software that the $27$ points of the dual of the generic realization of $M_{27}$ are not on a cuspidal cubic. However, for the generic element of $Z$ (which is such that $Z_{\text{red}}$ is a sub-scheme of $\cR(M_{27})$), the $27$ points dual to a realization of $M_{27}$ are on a cuspidal cubic curve. \begin{prop} For any $q=3^{n}$ with $n\geq3$, the matroid $M_{q}$ has no realizations over any field of characteristic $\neq3$. \end{prop} \begin{proof} Using \cite{Oscar}, one computes that the matroid $M_{27}$ has no realizations over any field of characteristic $\neq3$. The matroid $M_{q},\,q=3^{n}$, may also be described as the set of triples $\{a,b,c\}$ in $(\FF_{3})^{n}$ such that $a+b+c=0$. For $n\geq3$, the group $(\FF_{3})^{3}$ is a sub-group of $(\FF_{3})^{n}$, and a realization of $M_{q}$ gives a realization of $M_{27}$ by taking the labels indexed by that sub-group. That forces the realization to be in characteristic $3$. \end{proof} \subsection{Characteristic $2$} \subsubsection{General construction} Suppose that $\text{Char}(K)=2$ and $K=\FF_{q}$, where $q=2^{n}$ for $n>1$. The conditions in \eqref{eq:UN} for $S_{a,b}=\{a,b,-(a+b)\}$ to be of order $3$ are then $a\neq b$ and $a\neq0,\,b\neq0$.
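These conditions can be verified by brute force: only the additive group $(\FF_{2})^{n}$ matters, so one can enumerate the triples $\{a,b,a+b\}$ of distinct non-zero vectors and recover the count of $3$-rich lines appearing in Theorem \ref{thm:arrangement-Char2} below. A small sketch (illustrative only; the function name is ours, not from the paper):

```python
from itertools import combinations, product

def char2_triple_lines(n):
    """Unordered triples {a, b, a+b} of distinct non-zero vectors of (F_2)^n;
    addition is coordinatewise XOR, and -(a+b) = a+b in characteristic 2."""
    pts = [v for v in product((0, 1), repeat=n) if any(v)]
    return {frozenset((a, b, tuple(x ^ y for x, y in zip(a, b))))
            for a, b in combinations(pts, 2)}

# The q-1 points of K* give (q-1)(q-2)/6 three-rich lines for q = 2^n;
# n = 3 recovers the 7 lines of the Fano plane.
for n in (2, 3, 4):
    q = 2 ** n
    assert len(char2_triple_lines(n)) == (q - 1) * (q - 2) // 6
```

Each pair $\{a,b\}$ determines the third point $a+b$ uniquely, so the resulting triple system is again a Steiner triple system, on $q-1$ points.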
More geometrically, since for any $a\in K$ one has $2a=0$, the tangent line through $a$ contains $0$ and the point $a$ with multiplicity $2$, thus it contains no other points of the cuspidal cubic. That implies that for any pair $a\neq b$ in $K^{*}$, the line going through $a$ and $b$ meets the cubic in a third point $c=-(a+b)\not\in\{0,a,b\}$. Through each point $a\neq0$ of $K$ there pass $\tfrac{1}{2}(q-2)$ lines containing $a$ and another point of $K^{*}$. There are $\tfrac{(q-1)(q-2)}{6}$ such lines, and each line contains $3$ points of $K^{*}$. One obtains in this way a \[ \left((q-1)_{\tfrac{1}{2}(q-2)},\,\left(\tfrac{(q-1)(q-2)}{6}\right)_{3}\right) \] configuration of points and lines. Let $\cC_{q-1}$ be the dual of the $q-1$ points of $K^{*}$. We obtain \begin{thm} \label{thm:arrangement-Char2}The line arrangement $\cC_{q-1}$ of $q-1$ lines has $\tfrac{(q-1)(q-2)}{6}$ triple points and no other singularities. \end{thm} \subsubsection{Examples in characteristic $2$} \noindent $\bullet$ The line arrangement $\cC_{2^{2}-1}$ is $3$ lines through the same point. $\bullet$ The line arrangement $\cC_{2^{3}-1}$ is the Fano plane. $\bullet$ Let $N_{2^{n}-1}$ be the matroid associated to $\cC_{2^{n}-1}$; this is the set of triples $\{a,b,a+b\}\subset\FF_{2^{n}}$ with $a\neq b,\,a\neq0,\,b\neq0$. Using \cite{Oscar}, one computes that the realization space $\cR(N_{15})$ of realizations of $N_{15}$ over $\overline{\FF}_{2}$ is an open subscheme of $\PP_{/\overline{\FF_{2}}}^{3}$, in particular it is $3$ dimensional. We checked that for the generic element in $\cR(N_{15})$ there is a cuspidal cubic containing the $15$ points dual to the $15$ lines. For $n>3$, some line arrangements $\cC_{2^{n}-1}'$ (over the field $\FF_{2^{3n-1}}$) with only triple points and with the same number $q-1=2^{n}-1$ of lines as $\cC_{2^{n}-1}$ are constructed in \cite{HKS}.
The $2^{n}-1$ points dual to $\cC_{2^{n}-1}'$ are the image of the $2^{n}-1$ points of $\PP^{n-1}(\FF_{2})$ in $\PP^{n-1}$ under a generic projection onto $\PP^{2}$. In fact one can check that the line arrangements $\cC_{q-1}$ and $\cC_{q-1}'$ define isomorphic matroids, as follows. As a group, the field $K=\FF_{q}$ (for $q=2^{n}$) is the group $(\FF_{2})^{n}$, and the set $(\FF_{2})^{n}\setminus\{0\}$ is contained in $\PP^{n-1}(\FF_{2})$. Two distinct points $a,b$ of $(\FF_{2})^{n}\setminus\{0\}$ generate a line in $\PP^{n-1}$ which contains the points $a,b$, $c=a+b$ (so that $a+b+c=0$, since we are in characteristic $2$), and no other points of $\PP^{n-1}(\FF_{2})$. As shown in \cite{HKS}, a generic projection $f:\PP^{n-1}\dashrightarrow\PP^{2}$ induces a bijection between the set $\PP^{n-1}(\FF_{2})$ and its image, and $f$ preserves the alignments: for $a,b,c\in\PP^{n-1}(\FF_{2})$, one has $a+b+c=0$ in $(\FF_{2})^{n}$ if and only if $f(a),f(b)$ and $f(c)$ are on a line in $\PP^{2}$. Thus a bijective morphism $\eta$ of groups between $K$ and $(\FF_{2})^{n}$ defines an isomorphism between the matroids $M(\cC_{q})$ and $M(\cC_{q}')$, i.e.\ it sends bijectively the triples $\{a,b,c\}$ of $N=N(\cC_{q})$ onto the triples of $N'=N'(\cC_{q}')$. If $\cC_{q}'=(\ell_{a})_{a\in\FF_{2}^{n}}$ is a realization of $N'$, then $(\ell_{\eta(a)})_{a\in K}$ is a realization of $N$. The construction of $\cC_{15}'$ in \cite{HKS} explains why the realization space $\cR(N_{15})$ is an open sub-scheme of $\PP_{/\overline{\FF}_{2}}^{3}$: the line arrangement $\cC_{15}'$ is obtained by projecting the $15$ points of $\PP^{3}(\FF_{2})$ to $\PP^{2}$ from a generic point of $\PP^{3}$. \begin{rem} For $n\geq3$, the projections $\PP^{n}\dashrightarrow\PP^{2}$ are parametrized by the Grassmannian $G(n-2,n+1)$.
For $n>4$, it would be interesting to understand if, as for $n=3$, an open sub-scheme of the Grassmannian is also the realization space of $N_{2^{n}-1}$. At least we checked that for $n=4$, the realization space $\cR(N_{31})$ is $6$ dimensional and rational, as $G(2,5)$. \end{rem} \begin{rem} The line arrangements $\cC_{2^{n}-1}$ are defined over $\FF_{2^{n}}$: this answers the question before Theorem 3.4 in \cite{HKS}, of finding a smaller field over which $\cC'_{q}$ may be defined. \end{rem} \subsection{Characteristic $\protect\neq2,3$} Suppose that $\text{Char}(K)=p>0$ and $p\notin\{2,3\}$. Then for $b=-2a$ and $b=-\tfrac{1}{2}a$, $S_{a,b}$ has order $2$ (these are the tangent lines). Thus, the number of lines containing $3$ points and passing through $a\in K^{*}$ is $\tfrac{1}{2}(q-3)$, while through the point $0$ there pass $\tfrac{1}{2}(q-1)$ such lines (the lines through the triples $\{0,b,-b\}$). The number of lines containing $3$ (distinct) points of $K$ is therefore \[ \tfrac{1}{6}(q-1)(q-2) \] and the number of lines containing exactly $2$ points is $q-1$ (the tangent lines at the points of $K^{*}$). Since the point $0$ plays a special role, the incidences no longer form a symmetric configuration. The dual is a line arrangement of $q$ lines and $\tfrac{(q-1)(q-2)}{6}$ triple points. It has \[ \tfrac{q(q-1)}{2}-3\tfrac{(q-1)(q-2)}{6}=q-1 \] double points (corresponding to the $q-1$ $2$-rich lines in the dual space). \begin{thebibliography}{99} \bibitem{Oscar} Corey D., K\"uhne L., Schr\"oter B., Matroids in OSCAR, to appear as a chapter in the book The Computer Algebra System OSCAR. \bibitem{Hirzebruch} Hirzebruch F., Arrangements of lines and algebraic surfaces, Birkh\"auser, Progress in Math., Vol. 36, pp. 113--140. \bibitem{HKS} K\"uhne L., Szemberg T., Tutaj-Gasi\'nska H., Line arrangements with many triple points, Rend. Circ. Mat. Palermo (2) 73 (2024), no. 7, 2501--2512. \bibitem{Silverman} Silverman J., The arithmetic of elliptic curves, GTM 106, Springer, 2009.
xx+513 pp. \bibitem{Urzua} Urz\'ua G., Some open questions about line arrangements in the projective plane, Proyecciones 41(2), 517--536 (2022). \end{thebibliography} \vspace{0.3cm} \noindent Xavier Roulleau\\ Universit\'e d'Angers, \\ CNRS, LAREMA, SFR MATHSTIC, \\ F-49000 Angers, France \noindent [email protected] \end{document}
2501.06197v1
http://arxiv.org/abs/2501.06197v1
A Physics-informed Sheaf Model
\documentclass{article} \usepackage{arxiv} \usepackage[table,xcdraw]{xcolor} \usepackage{amsmath,amsthm,amssymb,amscd} \usepackage[all,cmtip]{xy} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{lipsum} \usepackage{mathrsfs} \usepackage{fancyhdr} \usepackage{color} \usepackage{subfig} \usepackage{graphicx} \usepackage{textcomp} \usepackage{ytableau} \usepackage{tikz} \usetikzlibrary{calc,intersections,through,backgrounds,patterns} \newtheorem{exam.}[subsection]{Example} \newtheorem{def.}[subsection]{Definition} \newtheorem{lemma}[subsection]{Lemma} \newtheorem{prop.}[subsection]{Proposition} \newtheorem{coro}[subsection]{Corollary} \newtheorem{theorem}[subsection]{Theorem} \newtheorem*{remark}{Remark} \newtheorem*{notation}{Notation} \usepackage{algorithm} \usepackage{algpseudocode} \newcommand{\bbE}{\mathbb{E}} \newcommand{\Obj}{{\rm{Obj}}} \newcommand{\id}{{\rm{id}}} \newcommand{\ObjC}{{\rm{Obj}}(\mathcal{C})} \newcommand{\bbQ}{\mathbb{Q}} \newcommand{\bbA}{\mathbb{A}} \newcommand{\bbP}{\mathbb{P}} \newcommand{\bbZ}{\mathbb{Z}} \newcommand{\bbR}{\mathbb{R}} \newcommand{\bbW}{\mathbb{W}} \newcommand{\bbC}{\mathbb{C}} \newcommand{\bbN}{\mathbb{N}} \newcommand{\Fq}{\mathbb{F}_q} \newcommand{\calS}{\mathcal{S}} \newcommand{\calT}{\mathcal{T}} \newcommand{\Hom}{{\rm{Hom}}} \newcommand{\Kom}{{\rm{Kom}}} \newcommand{\End}{{\rm{End}}} \newcommand{\HomC}{{\rm{Hom}}_{\mathcal{C}}} \newcommand{\im}{{\rm{im}}} \newcommand{\Top}{\mathfrak{Top}} \newcommand{\sd}{{\rm sd}} \newcommand{\fA}{\mathfrak{A}} \newcommand{\fB}{\mathfrak{B}} \newcommand{\fC}{\mathfrak{C}} \newcommand{\fD}{\mathfrak{D}} \newcommand{\fE}{\mathfrak{E}} \newcommand{\fF}{\mathfrak{F}} \newcommand{\fG}{\mathfrak{G}} \newcommand{\fH}{\mathfrak{H}} \newcommand{\fI}{\mathfrak{I}} \newcommand{\fJ}{\mathfrak{J}} \newcommand{\fK}{\mathfrak{K}} 
\newcommand{\fL}{\mathfrak{L}} \newcommand{\fM}{\mathfrak{M}} \newcommand{\fN}{\mathfrak{N}} \newcommand{\fO}{\mathfrak{O}} \newcommand{\fP}{\mathfrak{P}} \newcommand{\fQ}{\mathfrak{Q}} \newcommand{\fR}{\mathfrak{R}} \newcommand{\fS}{\mathfrak{S}} \newcommand{\fT}{\mathfrak{T}} \newcommand{\fU}{\mathfrak{U}} \newcommand{\fV}{\mathfrak{V}} \newcommand{\fW}{\mathfrak{W}} \newcommand{\fX}{\mathfrak{X}} \newcommand{\fY}{\mathfrak{Y}} \newcommand{\fZ}{\mathfrak{Z}} \newcommand{\Ab}{\mathfrak{Ab}} \newcommand{\calH}{\mathcal{H}} \newcommand{\calF}{\mathcal{F}} \newcommand{\cKom}{\mathbf{cKom}} \newcommand{\diam}{{\rm diam}} \newcommand{\shA}{\mathscr{A}} \newcommand{\shB}{\mathscr{B}} \newcommand{\shC}{\mathscr{C}} \newcommand{\shD}{\mathscr{D}} \newcommand{\shE}{\mathscr{E}} \newcommand{\shF}{\mathscr{F}} \newcommand{\shG}{\mathscr{G}} \newcommand{\shH}{\mathscr{H}} \newcommand{\shI}{\mathscr{I}} \newcommand{\shJ}{\mathscr{J}} \newcommand{\shK}{\mathscr{K}} \newcommand{\shL}{\mathscr{L}} \newcommand{\shM}{\mathscr{M}} \newcommand{\shN}{\mathscr{N}} \newcommand{\shO}{\mathscr{O}} \newcommand{\shP}{\mathscr{P}} \newcommand{\shQ}{\mathscr{Q}} \newcommand{\shR}{\mathscr{R}} \newcommand{\shS}{\mathscr{S}} \newcommand{\shT}{\mathscr{T}} \newcommand{\shU}{\mathscr{U}} \newcommand{\shV}{\mathscr{V}} \newcommand{\shW}{\mathscr{W}} \newcommand{\shX}{\mathscr{X}} \newcommand{\shY}{\mathscr{Y}} \newcommand{\shZ}{\mathscr{Z}} \newcommand{\spec}{{\rm Spec}} \newcommand{\Max}{{\rm Max}} \newcommand{\fa}{\mathfrak{a}} \newcommand{\fb}{\mathfrak{b}} \newcommand{\fp}{\mathfrak{p}} \newcommand{\fq}{\mathfrak{q}} \newcommand{\fm}{\mathfrak{m}} \newcommand{\bfa}{\mathbf{a}} \newcommand{\bfb}{\mathbf{b}} \newcommand{\bfc}{\mathbf{c}} \newcommand{\bfd}{\mathbf{d}} \newcommand{\bfe}{\mathbf{e}} \newcommand{\bff}{\mathbf{f}} \newcommand{\bfg}{\mathbf{g}} \newcommand{\bfh}{\mathbf{h}} \newcommand{\bfi}{\mathbf{i}} \newcommand{\bfj}{\mathbf{j}} \newcommand{\bfk}{\mathbf{k}} 
\newcommand{\bfl}{\mathbf{l}} \newcommand{\bfm}{\mathbf{m}} \newcommand{\bfn}{\mathbf{n}} \newcommand{\bfo}{\mathbf{o}} \newcommand{\bfp}{\mathbf{p}} \newcommand{\bfq}{\mathbf{q}} \newcommand{\bfr}{\mathbf{r}} \newcommand{\bfs}{\mathbf{s}} \newcommand{\bft}{\mathbf{t}} \newcommand{\bfu}{\mathbf{u}} \newcommand{\bfv}{\mathbf{v}} \newcommand{\bfw}{\mathbf{w}} \newcommand{\bfx}{\mathbf{x}} \newcommand{\bfy}{\mathbf{y}} \newcommand{\bfz}{\mathbf{z}} \newcommand{\supp}{{\rm supp}} \newcommand{\Der}{{\rm Der}} \newcommand{\ann}{{\rm ann}} \newcommand{\conv}{{\rm conv}} \newcommand{\aff}{{\rm aff}} \newcommand{\pos}{{\rm pos}} \newcommand{\cl}{{\rm cl}} \newcommand{\st}{{\rm st}} \newcommand{\Int}{{\rm int}} \newcommand{\reInt}{{\rm reInt}} \newcommand{\reBd}{{\rm reBd}} \newcommand{\bd}{{\rm bd}} \newcommand{\inte}{{\rm int}} \newcommand{\bbV}{\mathbb{V}} \newcommand{\Sd}{{\rm Sd}} \newcommand{\PC}{{\rm PC}} \newcommand{\Ext}{{\rm Ext}} \newcommand{\Open}{{\rm Open}} \newcommand{\op}{{\rm op}} \newcommand{\disp}{\displaystyle} \newcommand{\reint}{{\rm reInt}} \newcommand{\recl}{{\rm reCl}} \newcommand{\ddiag}{{\rm diag}} \title{A Physics-informed Sheaf Model} \author{ Chuan-Shen Hu \\ Division of Mathematical Sciences\\ School of Physical and Mathematical Sciences\\ Nanyang Technological University \\ Singapore 637371 \\ \texttt{[email protected]} \\ \And Xiang Liu \\ Division of Mathematical Sciences\\ School of Physical and Mathematical Sciences\\ Nanyang Technological University \\ Singapore 637371 \\ \texttt{[email protected]}\\ \AND Kelin Xia \\ Division of Mathematical Sciences\\ School of Physical and Mathematical Sciences\\ Nanyang Technological University \\ Singapore 637371 \\ \texttt{[email protected]} } \date{} \begin{document} \maketitle \begin{abstract} Normal mode analysis (NMA) provides a mathematical framework for exploring the intrinsic global dynamics of molecules through the definition of an energy function, where normal modes correspond to the 
eigenvectors of the Hessian matrix derived from the second derivatives of this function. The energy required to ``trigger'' each normal mode is proportional to the square of its eigenvalue, with six zero-eigenvalue modes representing universal translation and rotation, common to all molecular systems. In contrast, modes associated with small non-zero eigenvalues are more easily excited by external forces and are thus closely related to molecular functions. Inspired by the anisotropic network model (ANM), this work establishes a novel connection between normal mode analysis and sheaf theory by introducing a cellular sheaf structure, termed the anisotropic sheaf, defined on undirected, simple graphs, and identifying the conventional Hessian matrix as the sheaf Laplacian. By interpreting the global section space of the anisotropic sheaf as the kernel of the Laplacian matrix, we demonstrate a one-to-one correspondence between the zero-eigenvalue-related normal modes and a basis for the global section space. We further analyze the dimension of this global section space, representing the space of harmonic signals, under conditions typically considered in normal mode analysis. Additionally, we propose a systematic method to streamline the Delaunay triangulation-based construction for more efficient graph generation while preserving the ideal number of normal modes with zero eigenvalues in ANM analysis. \end{abstract} \section{Introduction} \label{Section: Introduction} Molecular functions are inherently tied to their dynamic behavior, as Richard Feynman famously stated: ``everything that living things do can be understood in terms of the jigglings and wigglings of atoms.'' However, molecular motions can be highly complex. In particular, proteins continuously experience various motions under physiological conditions, including atomic thermal vibrations, sidechain rotations, residue movements, and large-scale domain shifts.
Protein functions are closely linked to their motions and fluctuations, significantly influencing processes such as drug binding~\cite{Alvarez-Garcia:2014}, molecular docking~\cite{Fischer:2014}, self-assembly~\cite{marsh2014protein}, allosteric signaling~\cite{bu2011proteins}, and enzyme catalysis~\cite{fraser2009hidden}. The extent of protein motion within a cellular environment is largely determined by the local flexibility of its structure, an inherent characteristic of the protein. This flexibility is commonly assessed using the Debye-Waller factor (B-factor), which represents the atomic mean-square displacement and is measured through techniques such as X-ray crystallography, NMR spectroscopy, or single-molecule force experiments~\cite{dudko2006intrinsic}. However, the B-factor is not an absolute measure of flexibility, as it is also influenced by factors such as the crystal environment, solvent type, data collection conditions, and the procedures used for structural refinement~\cite{kondrashov2007protein,hinsen2008structural}. Various models have been proposed to assess the flexibility of biomolecules, including molecular dynamics (MD)~\cite{mccammon1977dynamics}, normal mode analysis (NMA)~\cite{brooks1983charmm,go1983dynamics,levitt1985protein,tasumi1982normal}, elastic network models (ENM)~\cite{bahar1997direct, bahar1998vibrational, Atilgan:2001,cui2005normal,hinsen1998analysis,li2002coarse,tama2001conformational}, and approaches based on graph theory~\cite{jacobs2001protein}. In particular, NMA serves as a time-independent MD model~\cite{park2013coarse}, decomposing molecular dynamics using eigenvalues and eigenvectors, leveraging this spectral information to predict the motions of a given biomolecule. Notably, the first few eigenvectors derived from NMA models often capture the collective and global motions of the biomolecule, which potentially play a significant role in its functional behavior. 
Among the Elastic Network Models (ENMs), the Gaussian Network Model (GNM)~\cite{bahar1998vibrational,bahar1997direct} and the Anisotropic Network Model (ANM)~\cite{Atilgan:2001} have emerged as widely used tools for studying protein dynamics. Both models have gained significant attention in the scientific community due to their simplicity and ability to accurately describe experimental data on equilibrium protein dynamics. The GNM offers a straightforward approach to investigating isotropic motions, while the ANM focuses on anisotropic motions. Specifically, while GNM is commonly applied in molecular flexibility analysis, such as B-factor prediction~\cite{Opron:2014,park2013coarse,LWYang:2008}, ANM is employed to analyze molecular functions directly related to intrinsic global normal modes. In molecular dynamics, the eigenvectors, known as \textit{eigenmodes}, of the ANM Hessian matrix describe the intrinsic motion of atoms. Generally, eigenmodes associated with smaller eigenvalues correspond to more global and stable molecular motions, whereas those with larger eigenvalues reflect more pronounced and unstable local movements. Notably, the eigenmodes associated with an eigenvalue of zero, referred to as \textit{trivial modes}, represent rigid body motions, such as the translation and rotation of the entire molecule~\cite{cui2005normal}. Despite the wide applications and significant success of normal mode analysis (NMA) in studying molecular structure, flexibility, dynamics, and function, a solid mathematical foundation for NMA is largely absent. A longstanding problem, involving the rigorous lower bound for the cutoff distance or a systematic approach for constructing the underlying graph essential to NMA models, remains unsolved. Furthermore, while it is known that normal modes are directly influenced by the molecular ``shape,'' the fundamental connections between normal mode behaviors and the underlying molecular topology have not been thoroughly explored. 
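To make the ANM Hessian discussed above concrete, the following sketch assembles it for a toy structure and counts its trivial modes. The tetrahedral coordinates, the complete contact graph, and the unit spring constant are our illustrative assumptions, not data from any molecule or from this paper:

```python
import numpy as np

# A sketch of the standard ANM Hessian with uniform spring constant gamma = 1:
# for connected atoms i != j with separation d = x_j - x_i, the 3x3
# off-diagonal block is -(d d^T)/|d|^2, and the diagonal blocks are chosen
# so that each block row sums to zero.
def anm_hessian(coords, edges):
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i, j in edges:
        d = coords[j] - coords[i]
        block = np.outer(d, d) / np.dot(d, d)
        H[3*i:3*i+3, 3*j:3*j+3] -= block
        H[3*j:3*j+3, 3*i:3*i+3] -= block
        H[3*i:3*i+3, 3*i:3*i+3] += block
        H[3*j:3*j+3, 3*j:3*j+3] += block
    return H

# A generic (non-degenerate) tetrahedron with all six springs: the only
# zero modes should be the three translations and three rotations.
coords = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0.3, 0.4, 1.]])
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
H = anm_hessian(coords, edges)
zero_modes = int(np.sum(np.linalg.eigvalsh(H) < 1e-8))
print(zero_modes)  # 6 trivial modes
```

Degenerate geometries (e.g., collinear atoms) or too-sparse contact graphs would produce more than six near-zero eigenvalues, which is exactly the pathology the graph constructions studied later in the paper are designed to avoid.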
In summary, traditional graph-spectral-based NMA has inherent limitations in characterizing and analyzing the fundamental relationships between molecular dynamics and underlying topology. Establishing a rigorous mathematical framework for normal mode analysis remains a significant challenge. \paragraph{Sheaf theory} Molecular representation and featurization are essential in the application of AI to scientific fields. Mathematically, geometry and topology offer foundational frameworks for physical, chemical, and AI models~\cite{huangstability, nguyen2020review, nguyen2019mathematical, nguyen2020mathdl, peigeom, ye2019curvature, ghrist2014elementary}. Geometry focuses on shape and spatial configurations, while topology examines the underlying connections and relationships. Sheaf theory serves as a powerful bridge between geometry and topology, providing a framework to integrate rich and complex information about topological spaces equipped with ``data'' or ``signals'' in their local regions. Specifically, for an underlying topological space, a sheaf assigns algebraic structures (e.g., groups, vector spaces, rings, etc.) to local regions, captures local coherence, and shapes global information by coherently ``gluing'' these local data together~\cite{bredon2012sheaf,hartshorne2013algebraic,curry2014sheaves,robinson2014topological,kashiwara2018persistent}. Originally developed to address fixed-point problems in partial differential equations and good cover problems in nerve theory, sheaf theory has become a foundation of modern mathematics. Today, it serves as a common language across various disciplines, facilitating exploration in fields such as algebraic geometry, algebraic topology, number theory, and complex geometry~\cite{bredon2012sheaf,hartshorne2013algebraic, artin2012arithmetic,wells2017differential}.
In applications, due to its richer and more flexible mathematical framework, the sheaf theory-based representation has also been incorporated into machine learning and deep learning architectures~\cite{barbero2022sheaf_attention,hansen2020sheaf,caralt2024joint,battiloro2023tangent,he2023sheaf,braithwaite2024heterogeneous,duta2024sheaf}. Recently, a discretized version of sheaves, known as \textit{cellular sheaves} and their variants, has been proposed for Topological Data Analysis (TDA) on simplicial complexes (or cellular complexes)~\cite{curry2014sheaves,ghrist2014elementary,hansen2021opinion,robinson2014topological,hansRiess2022}. Specifically, unlike sheaves defined on general topological spaces, a cellular sheaf is defined on topological objects with strong combinatorial relationships between local structures, such as graphs, simplicial complexes, cell complexes, and even partially-ordered sets. By assigning algebraic objects (e.g., groups, vector spaces, rings) to combinatorial elements in the underlying topological complexes, such as simplices within a simplicial complex, the sheaf cohomology, sheaf Laplacian, and global and local section spaces can be directly constructed and computed algebraically. This approach provides a practical framework from a computational perspective. In particular, as a generalization of the cohomology of simplicial or cell complexes, cellular sheaf cohomology, through the various possible assignments of stalk spaces and restriction maps, incorporates additional geometric and physical information into the topological structure. Recently, the \textit{force cosheaf} model has been developed, providing a solid mathematical foundation for analyzing truss mechanisms~\cite{cooperband2023towards,cooperband2024equivariant,cooperband2023cosheaf,cooperband2024cellular}.
The homology theory of the force cosheaf captures the axial forces along incident members connected by truss joints, offering a novel mathematical framework for studying equilibrium stresses and truss system configurations. In particular, the modern form of Maxwell's Rule for 3-dimensional truss systems can be understood through Euler characteristics within this homology theory~\cite{cooperband2023cosheaf,cooperband2024cellular}. By treating the cellular sheaf as a dual structure to the force cosheaf, it has been shown that force cosheaf homology can be regarded as the dual counterpart of cellular sheaf cohomology. From an NMA perspective, the physical and geometric insights gained from the global sections, sheaf cohomology, sheaf Laplacian, and spectral information can be leveraged to explore connections between atomic movements and the underlying graph statics. \paragraph{Main contributions} In this paper, we establish a rigorous sheaf theory-based mathematical framework for normal mode analysis (NMA), presenting a physics-informed sheaf model specifically tailored for the anisotropic network model (ANM). The main results of this paper are presented in three parts. First, for molecules modeled as graphs, we represent the atomic system as a cellular sheaf, termed the \textit{anisotropic sheaf}, based on the principles of the anisotropic network model (ANM). This sheaf acts as a ``dual'' counterpart to the force cosheaf model with spring constants on edges~\cite{cooperband2023towards,cooperband2023cosheaf,cooperband2024cellular}. We demonstrate that the corresponding sheaf Laplacian matrix is equivalent to the Hessian matrix used in NMA, with its eigenvectors capturing intrinsic global molecular motions (Theorem \ref{Theorem: Main result 1}).
Second, based on cellular sheaf modeling, sheaf theory concepts such as cohomology, global sections, and Hodge theory are utilized to provide deeper insight into the intrinsic dynamics of molecules. In particular, the global section space and cellular sheaf cohomology provide a rigorous interpretation of rigid motions, offering a clear geometric description of these physical phenomena (Theorem \ref{Theorem: Main result 2}). Additionally, the spectral information of the sheaf Laplacian, including its eigenvalues and eigenvectors, provides insights into the dynamics and fluctuations of molecules. Furthermore, we present a sheaf-based proof for the existence of a minimal graph model that induces an ANM Hessian with the desired number of trivial modes (Theorem \ref{Theorem: Main result 3}). Finally, we analyze the relationships between rigid motions and the underlying graph structure. In particular, drawing on the proofs and discussions in Section \ref{Section: Normal Mode Analysis in Anisotropic Sheaf Models}, we introduce the concept of an \textit{admissible simplicial complex} for building the graph from a given atomic system (Definition \ref{Definition: admissible homogeneous 3-complex}). We prove that any ANM Hessian based on the $1$-skeleton of an admissible simplicial complex induces exactly six trivial modes (Theorem \ref{Theorem: Main result 4-1}). Inspired by previous works employing Delaunay triangulation and related mathematical tools for constructing underlying graphs~\cite{xia2014identifying,zhou2014alpha}, we further examine the use of the sheaf framework in Delaunay triangulation-based construction. Specifically, we prove that every Delaunay triangulation of a 3D point cloud is admissible (Corollary \ref{Corollary: Main result 4-2}), ensuring that its $1$-skeleton induces a Hessian matrix with six trivial modes.
Additionally, Algorithm \ref{Algorithm: Main result 5-2} provides a systematic method for constructing a minimal graph for a given point cloud in \( \mathbb{R}^3 \) that induces exactly six trivial modes, and any subgraph obtained by removing edges leads to a Hessian matrix with more than six trivial modes. \paragraph{Organization of the Paper} The remainder of this paper is organized into four sections (Sections \ref{Section: Foundations of the Mathematical Framework}--\ref{Section: Graph Construction for Anisotropic Sheaves}). Section \ref{Section: Foundations of the Mathematical Framework} introduces the mathematical foundations necessary for this work, including the theory of cellular sheaves on simplicial complexes, global sections, sheaf cohomology, and Laplacians. In Section \ref{Section: Anisotropic Sheaves}, we revisit the mathematical formulation of the anisotropic network model and present the proposed anisotropic sheaf model. Section \ref{Section: Normal Mode Analysis in Anisotropic Sheaf Models} discusses normal mode analysis using anisotropic sheaves, with a focus on the mathematical interpretation of eigenmodes with zero eigenvalues through global sections and an examination of the dimension of the global section space. Finally, Section \ref{Section: Graph Construction for Anisotropic Sheaves} addresses the construction of underlying graphs for anisotropic analysis, providing mathematical proofs for the construction and an efficient method for implementation. \section{Foundations of the Mathematical Framework} \label{Section: Foundations of the Mathematical Framework} Cellular sheaves defined on simplicial complexes and graphs form the foundation of the proposed work. In this section, we introduce the definition of a cellular sheaf on an abstract simplicial complex, along with the associated sheaf cohomology and sheaf Laplacian, with a particular focus on cellular sheaves defined on finite abstract simplicial complexes and graphs. 
For a more comprehensive and general treatment—including cellular sheaves of vector spaces on posets or cellular sheaves of Hilbert spaces on cell complexes—refer to~\cite{curry2014sheaves, curry2016discrete, robinson2014topological, hansen2019toward}. \paragraph{Cellular sheaves on simplicial complexes} Mathematically, a \textit{cellular sheaf} of $\mathbb{R}$-vector spaces on an abstract simplicial complex $K$ is defined as a functor $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$, from the poset category $(K, \leq)$ to the category $\textup{\textsf{Vect}}_{\mathbb{R}}$ of vector spaces and linear transformations, where $\leq$ denotes the partial order induced by the face relations of simplices in $K$. Specifically, an \textit{abstract simplicial complex} $K$ over a vertex set $V$ is a collection of non-empty subsets of $V$, called \textit{simplices}, with the following property: if $\sigma \in K$ and $\tau$ is a non-empty subset of $\sigma$, then $\tau \in K$, where $\tau$ is referred to as a \textit{face} of $\sigma$. The subset relation $\subseteq$ on simplices in $K$ forms a partial order, typically denoted by $\leq$ (or $\trianglelefteq$), which defines a category $(K, \leq)$. Additionally, a simplex $\sigma$ in $K$ is called a $q$\textit{-simplex} or is said to have \textit{dimension} $q$, with $q \in \mathbb{Z}_{\geq 0}$, if it consists of $q+1$ vertices. The dimension of $\sigma \in K$ is denoted by $\dim(\sigma)$, and the collection of $q$-simplices within $K$ is denoted by $K_{(q)}$. Under this setup, a cellular sheaf $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ over an abstract simplicial complex $K$ consists of the following data: \begin{itemize} \item[\rm (a)] a vector space $\mathcal{F}_\sigma$ for each simplex $\sigma \in K$; \item[\rm (b)] an $\mathbb{R}$-linear transformation $\mathcal{F}_{\sigma, \tau}: \mathcal{F}_\sigma \rightarrow \mathcal{F}_\tau$ for simplices $\sigma \leq \tau$. 
\end{itemize} Moreover, as a functor, the map $\mathcal{F}_{\sigma, \sigma}: \mathcal{F}_\sigma \rightarrow \mathcal{F}_\sigma$ is defined as the identity map $\id_{\mathcal{F}_\sigma}$, and $\mathcal{F}_{\tau, \eta} \circ \mathcal{F}_{\sigma, \tau} = \mathcal{F}_{\sigma, \eta}$ for every chain of simplices $\sigma \leq \tau \leq \eta$ in $K$. The vector space $\mathcal{F}_\sigma$ is called the \textit{stalk} of $\mathcal{F}$ at $\sigma$, and the linear transformation $\mathcal{F}_{\sigma, \tau}: \mathcal{F}_\sigma \rightarrow \mathcal{F}_\tau$ is called the \textit{restriction map} from $\mathcal{F}_\sigma$ to $\mathcal{F}_\tau$. Elements in the stalk space $\mathcal{F}_\sigma$ are referred to as \textit{local sections} of $\mathcal{F}$ on $\sigma$. Indeed, the cellular sheaf structure can be defined straightforwardly when considering an undirected graph as a simplicial complex composed of $0$- and $1$-simplices (vertices and edges, respectively). Specifically, for a graph $G = (V,E)$, a cellular sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ can be defined as follows: \begin{itemize} \item[\rm (a)] a vector space $\mathcal{F}_{v}$ for each vertex $v \in V$; \item[\rm (b)] a vector space $\mathcal{F}_{e}$ for each edge $e \in E$; \item[\rm (c)] a linear transformation $\mathcal{F}_{v,e}: \mathcal{F}_v \rightarrow \mathcal{F}_e$ for any incident pair $v \leq e$. \end{itemize} As a convention, we define $\mathcal{F}_{v,e}: \mathcal{F}_v \rightarrow \mathcal{F}_e$ to be the zero map when $v \nleq e$. This convention simplifies the representation of the sheaf coboundary maps and the Laplacian using matrices (e.g., Equation \ref{Eq. Coboundary matrix}).
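The graph case of this definition is small enough to express directly in code. The following sketch is our own illustration, not part of the paper's framework: the class name, the orientation convention $u < v$ on edges, and the sign choice in $\delta^0$ are assumptions (the paper fixes orientations via the signed incidence function later in this section).

```python
import numpy as np

# A minimal cellular sheaf on an undirected graph: each vertex v carries a
# stalk R^{d_v}, each edge e = (u, v) with u < v a stalk R^{d_e}, and each
# incidence v <= e a restriction matrix F[(v, e)].
class GraphSheaf:
    def __init__(self, vertex_dims, edge_dims, restrictions):
        self.vertex_dims = vertex_dims  # {v: dim F_v}
        self.edge_dims = edge_dims      # {(u, v): dim F_e}, with u < v
        self.F = restrictions           # {(v, e): (dim F_e) x (dim F_v) matrix}

    def coboundary(self):
        """Matrix of delta^0, with (delta x)_e = F_{v,e} x_v - F_{u,e} x_u."""
        verts = sorted(self.vertex_dims)
        offset, n = {}, 0
        for v in verts:
            offset[v], n = n, n + self.vertex_dims[v]
        blocks = []
        for e in sorted(self.edge_dims):
            u, v = e
            row = np.zeros((self.edge_dims[e], n))
            row[:, offset[u]:offset[u] + self.vertex_dims[u]] = -self.F[(u, e)]
            row[:, offset[v]:offset[v] + self.vertex_dims[v]] = self.F[(v, e)]
            blocks.append(row)
        return np.vstack(blocks)

# Constant sheaf R on a triangle: the sheaf Laplacian delta^T delta reduces
# to the ordinary graph Laplacian [[2,-1,-1],[-1,2,-1],[-1,-1,2]].
edges = [(0, 1), (0, 2), (1, 2)]
sheaf = GraphSheaf({v: 1 for v in range(3)},
                   {e: 1 for e in edges},
                   {(v, e): np.eye(1) for e in edges for v in e})
delta = sheaf.coboundary()
print(delta.T @ delta)
```

The closing example illustrates the remark that the sheaf Laplacian generalizes the graph Laplacian: choosing one-dimensional stalks and identity restriction maps recovers the classical object.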
It is worth noting that in the case of graphs, the composition rule $\mathcal{F}_{\tau, \eta} \circ \mathcal{F}_{\sigma, \tau} = \mathcal{F}_{\sigma, \eta}$ holds naturally for every chain of simplices $\sigma \leq \tau \leq \eta$ in $G$, since there are no simplices in dimension greater than or equal to $2$. \paragraph{Global section spaces of cellular sheaves} Global sections play an important role in both cellular and general sheaf theory, as they gather local section information from the underlying space and consistently glue them together~\cite{bredon2012sheaf,curry2014sheaves,hansen2019toward}. To define the global section space of a cellular sheaf $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ over an abstract simplicial complex $K$, we begin by introducing the concept of cochain spaces. Specifically, for every non-negative integer $q \in \mathbb{Z}_{\geq 0}$, the $q$\textit{-th cochain space} of $\mathcal{F}$ is defined as \begin{equation*} C^q(K,\mathcal{F}) = \prod_{\sigma \in K_{(q)}} \mathcal{F}_\sigma. \end{equation*} That is, $C^q(K,\mathcal{F})$ is the product of the stalk spaces of $\mathcal{F}$ on $q$-simplices. Elements in $C^q(K,\mathcal{F})$ are usually expressed by $|K_{(q)}|$-tuples $(\mathbf{x}_\sigma)_{\sigma \in K_{(q)}}$ with $\mathbf{x}_\sigma \in \mathcal{F}_\sigma$ for every $\sigma \in K_{(q)}$. In particular, if $K$ is finite, then the cochain spaces can be written as \begin{equation*} C^q(K,\mathcal{F}) = \bigoplus_{\sigma \in K_{(q)}} \mathcal{F}_\sigma. \end{equation*} \begin{def.} Let $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ be a cellular sheaf over an abstract simplicial complex $K$ with vertex set $V = K_{(0)}$.
A $|K_{(0)}|$-tuple $(\mathbf{x}_v)_{v \in V} \in C^0(K,\mathcal{F})$ is called a \textbf{global section} of $\mathcal{F}$ if $\mathcal{F}_{v,e}(\mathbf{x}_v) = \mathcal{F}_{w,e}(\mathbf{x}_w)$ for every $1$-simplex $e \in K_{(1)}$ such that $v \leq e$ and $w \leq e$. The collection of global sections of $\mathcal{F}$, referred to as the \textbf{global section space} of $\mathcal{F}$, is denoted by $\Gamma(K,\mathcal{F})$. The mathematical representation of the global section space is given by \begin{equation*} \Gamma(K,\mathcal{F}) = \{ (\bfx_v)_{v \in K_{(0)}} \in C^0(K,\mathcal{F}) \ | \ \mathcal{F}_{v, e}(\bfx_v) = \mathcal{F}_{w, e}(\bfx_w) \text{ if } v \leq e \text{ and } w \leq e \text{ for some } e \in K_{(1)} \}. \end{equation*} \end{def.} Because each map $\mathcal{F}_{v,e}$ is $\mathbb{R}$-linear, the global section space $\Gamma(K,\mathcal{F})$ is an $\mathbb{R}$-vector subspace of the $0$-th cochain space $C^0(K,\mathcal{F})$. In sheaf theory, this space collects the \textit{harmonic signals} on the simplices that can be coherently assembled into a global signal of the sheaf~\cite{robinson2014topological,hansen2019toward, bodnar2022neural}.
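As a quick numerical illustration of this definition (our own toy example, not from the paper): for the constant sheaf with stalk $\mathbb{R}^{d}$ on a connected graph, where every restriction map is the identity, the compatibility conditions force all $\mathbf{x}_v$ to coincide, so $\dim\Gamma(K,\mathcal{F}) = d$. This can be confirmed by computing the kernel of the linear system encoding the conditions $\mathcal{F}_{v,e}(\mathbf{x}_v) - \mathcal{F}_{w,e}(\mathbf{x}_w) = 0$:

```python
import numpy as np

# Constant sheaf R^d on a connected 4-cycle: one block row of constraints
# per edge, namely F_{v,e} x_v - F_{w,e} x_w = 0 with both maps the identity.
d = 3                                      # stalk dimension (illustrative)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a connected 4-cycle
n_vert = 4
I = np.eye(d)
rows = []
for (v, w) in edges:
    block = np.zeros((d, n_vert * d))
    block[:, v*d:(v+1)*d] = I              # F_{v,e} = id
    block[:, w*d:(w+1)*d] = -I             # F_{w,e} = id
    rows.append(block)
A = np.vstack(rows)
# dim Gamma = nullity of the constraint matrix
nullity = A.shape[1] - np.linalg.matrix_rank(A)
print(nullity)  # d, the dimension of the space of constant assignments
```

With non-identity restriction maps the same kernel computation applies unchanged, which is how the dimension counts for the anisotropic sheaf are verified numerically later in the paper's setting.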
Let $\mathcal{F}: (K,\leq) \rightarrow \textsf{Vect}_{\mathbb{R}}$ be a cellular sheaf defined on a simplicial complex $K$ and let $L$ be a subcomplex of $K$. Then the face relation $\leq$ on simplices in $L$ inherits the partial order from $K$. The restriction sheaf of $\mathcal{F}$ on $L$, denoted as $\mathcal{F}|_L: (L,\leq) \rightarrow \textsf{Vect}_{\mathbb{R}}$, is defined as follows. For every simplex $\sigma$ and pair $\sigma \leq \tau$ in $L \subseteq K$, the stalk spaces and restriction maps are defined by $(\mathcal{F}|_L)_\sigma = \mathcal{F}_\sigma$ and $(\mathcal{F}|_L)_{\sigma, \tau} = \mathcal{F}_{\sigma, \tau}$. By inheriting the structure of $\mathcal{F}$, $\mathcal{F}|_L$ satisfies all the sheaf axioms. The following proposition provides a straightforward relationship between the global sections of $\mathcal{F}$ and those of $\mathcal{F}|_L$. \begin{prop.}\label{Proposition: global section space and restriction sheaf} Let $\mathcal{F}: (K,\leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ be a cellular sheaf over a simplicial complex $K$ with vertex set $V = K_{(0)}$, and let $L \subseteq K$ be a subcomplex. Let $(\mathbf{x}_v)_{v \in K_{(0)}} \in C^0(K,\mathcal{F})$ be a $|K_{(0)}|$-tuple of local sections. If $(\mathbf{x}_v)_{v \in K_{(0)}}$ is a global section of $\mathcal{F}$, then the restricted $|L_{(0)}|$-tuple $(\mathbf{x}_v)_{v \in L_{(0)}} \in C^0(L,\mathcal{F}|_L)$ is a global section of $\mathcal{F}|_L$. \end{prop.} \begin{proof} Let $e$ be an edge (i.e., a $1$-simplex) of $L$. Because $L$ is a subcomplex of $K$, $e$ is also a $1$-simplex of $K$. Because $(\mathbf{x}_v)_{v \in K_{(0)}} \in \Gamma(K,\mathcal{F})$, $\mathcal{F}_{v,e}(\mathbf{x}_v) = \mathcal{F}_{w,e}(\mathbf{x}_w)$ whenever $v, w \in K_{(0)}$ with $v \leq e$ and $w \leq e$. Since $L_{(0)} \subseteq K_{(0)}$, this equality shows that $(\mathbf{x}_v)_{v \in L_{(0)}}$ is a global section of $\mathcal{F}|_L$.
\end{proof} \paragraph{Sheaf cohomology and sheaf Laplacians} Similar to the Laplacian matrix of an undirected graph, a cellular sheaf defined on a simplicial complex gives rise to a related structure known as the \textit{sheaf Laplacian} on the underlying simplicial complex. To provide an overview of cellular sheaf theory and to leverage some useful results, we briefly introduce sheaf cohomology and the sheaf Laplacians of cellular sheaves of finite-dimensional vector spaces defined on finite abstract simplicial complexes. Let $K$ be a finite abstract simplicial complex with vertex set $V = K_{(0)}$, and let $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ be a cellular sheaf. To define the sheaf cohomology and sheaf Laplacian of $\mathcal{F}$, the cochain spaces $C^q(K, \mathcal{F}) = \bigoplus_{\sigma \in K_{(q)}} \mathcal{F}_\sigma$ and the coboundary maps $\delta^q: C^q(K, \mathcal{F}) \rightarrow C^{q+1}(K, \mathcal{F})$ are essential components. Specifically, by defining a total order $<$ on $V$, each $q$-simplex in $K$ is uniquely determined by the oriented sequence $[v_0, v_1, \dots, v_q]$ with $v_0 < v_1 < \cdots < v_q$. Moreover, under these defined orientations, the signed incidence function $[\cdot: \cdot]: K \times K \rightarrow \{ -1, 0, 1 \}$ can be defined as follows. For every pair $(\sigma, \tau) \in K \times K$ of simplices in $K$, the signed incidence of the pair is defined by \begin{equation} \label{Eq. signed incidence function} [\sigma:\tau] = \begin{cases} (-1)^i &, \ \text{if } \tau = [v_0, v_1, ..., v_n] \text{ and } \sigma = [v_0, ..., \widehat{v_i}, ..., v_n], \\ 0 &, \ \text{otherwise}.
\\ \end{cases} \end{equation} For any simplicial complex $K$, the signed incidence function satisfies the following relation for any $\sigma, \tau \in K$: \begin{equation} \label{Eq: Signed incidence relation} \sum_{\eta \in K} [\sigma : \eta] \cdot [\eta : \tau] = 0, \end{equation} where analogous signed incidence functions can be defined in more general settings, such as for regular cell complexes (cf. \cite{curry2014sheaves}). The $q$-\textit{th coboundary map} $\delta^q: C^q(K,\mathcal{F}) \rightarrow C^{q+1}(K,\mathcal{F})$ is thus defined as the $\mathbb{R}$-linear map extended by the following assignment: \begin{equation*} \delta^q|_{\mathcal{F}_\sigma} = \sum_{\tau \in K_{(q+1)}} [\sigma:\tau] \cdot \mathcal{F}_{\sigma, \tau}. \end{equation*} In particular, the composition $\delta^{q+1} \circ \delta^q$ from $C^q(K,\mathcal{F})$ to $C^{q+2}(K,\mathcal{F})$ satisfies $\delta^{q+1} \circ \delta^q = 0$ for every $q \geq 0$ (cf. Lemma 6.6.2, \cite{curry2014sheaves}), and thus the $q$-th sheaf cohomology is defined as the quotient vector space \begin{equation*} H^q(K,\mathcal{F}) = \frac{\ker(\delta^q)}{{\rm im}(\delta^{q-1})}. \end{equation*} \begin{remark} To define sheaf cohomology, a total order is used on the vertex set of the underlying simplicial complex. However, as is well-known in the computation of simplicial homology (e.g., Section 3.1, \cite{curry2015topological}), different total orders on $V$ induce isomorphic simplicial homologies for the simplicial complex. Similarly, as a dual structure of homology, the choice of total order on the vertex set does not affect the sheaf cohomology. For more information, refer to \cite{curry2014sheaves,robinson2014topological,XiaoqiWei2024FODS}. \end{remark} The \textit{sheaf Laplacian} is also defined using the coboundary maps $\delta^q: C^q(K,\mathcal{F}) \rightarrow C^{q+1}(K,\mathcal{F})$.
In particular, for a cellular sheaf of finite-dimensional $\mathbb{R}$-spaces on a finite simplicial complex $K$, the $q$-\textit{th sheaf Laplacian} of the cochain complex \begin{equation*} \xymatrix@+0.0em{ \cdots \ar[r] & C^{q-1}(K,\mathcal{F}) \ar[r]^{\delta^{q-1}} & C^{q}(K,\mathcal{F}) \ar[r]^{\delta^{q}} \ar@/^1pc/[l]^{(\delta^{q-1})^*} & C^{q+1}(K,\mathcal{F}) \ar@/^1pc/[l]^{(\delta^{q})^*} \ar[r] & \dots } \end{equation*} is defined as the linear operator $\Delta^q = (\delta^{q})^* \circ \delta^{q} + \delta^{q-1} \circ (\delta^{q-1})^*$ from $C^q(K,\mathcal{F})$ to itself, where $(\delta^{q-1})^*$ and $(\delta^{q})^*$ are the unique adjoint $\mathbb{R}$-linear maps of $\delta^{q-1}$ and $\delta^{q}$. In particular, the existence of the adjoint maps is ensured by the finite-dimensionality of the vector spaces in the cochain complex $C^\bullet(K,\mathcal{F})$. Indeed, by considering each $\delta^{q}$ as a matrix over $\mathbb{R}$, the adjoint map $(\delta^{q})^*$ is precisely the transpose of $\delta^{q}$. For the construction of the sheaf Laplacian in more general settings, such as for cellular sheaves of Hilbert spaces, refer to~\cite{hansen2019toward} for more information. Sheaf cohomology, the global section space, and the sheaf Laplacian are intimately related. First, for a cellular sheaf $\mathcal{F}: (K, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ on a simplicial complex $K$, the $0$-th cohomology recovers the space of global sections. Notably, by conventionally setting $C^{-1}(K,\mathcal{F}) = 0$ and $\delta^{-1}: C^{-1}(K,\mathcal{F}) \rightarrow C^{0}(K,\mathcal{F})$ as the zero map, one obtains \begin{equation} \label{Eq. Global section space representation} H^0(K,\mathcal{F}) = \ker(\delta^0) \simeq \{ (\mathbf{x}_v)_{v \in K_{(0)}} \ | \ \mathcal{F}_{v,[v,w]}(\mathbf{x}_v) = \mathcal{F}_{w,[v,w]}(\mathbf{x}_w) \text{ if } [v,w] \in K_{(1)} \} = \Gamma(K,\mathcal{F}).
\end{equation} Second, by the Hodge decomposition theorem, each $q$-th sheaf cohomology can be identified with the kernel of the $q$-th sheaf Laplacian. This connection between sheaf cohomology and the sheaf Laplacian is formally stated in the following theorem. \begin{theorem} \label{Theorem: Hodge's theorem} Let $(K,\leq)$ be a finite abstract simplicial complex and $\mathcal{F}: (K,\leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$ be a cellular sheaf of finite-dimensional $\mathbb{R}$-vector spaces defined on $K$. Let $\Delta^q: C^{q}(K,\mathcal{F}) \rightarrow C^{q}(K,\mathcal{F})$ be the $q$-th sheaf Laplacian. Then, $H^q(K,\mathcal{F}) \simeq \ker(\Delta^q)$. In particular, \begin{equation} \label{Eq. Global section space and the kernel of 0th Laplacian} \Gamma(K,\mathcal{F}) = H^0(K,\mathcal{F}) = \ker(\Delta^0) = \ker((\delta^0)^* \circ \delta^0). \end{equation} Consequently, the nullity of the sheaf Laplacian is the dimension of the global section space, i.e., $\dim_\mathbb{R} \ker(\Delta^0) = \dim_\mathbb{R} \Gamma(K,\mathcal{F}) = \dim_\mathbb{R} \ker(\delta^0)$. \end{theorem} \begin{proof} For a proof of the theorem, refer to Theorem 3.1 in~\cite{hansen2019toward}. \end{proof} From a molecular dynamics perspective, with the introduction of the proposed anisotropic sheaf model in the following section, we demonstrate how incorporating geometric and physical information into the cellular sheaf framework enables local sections at the vertices to capture localized fluctuations and movements. In contrast, integrating these local sections into a global section offers a cohesive view of the motion of the entire simplicial complex. \section{Anisotropic Sheaves} \label{Section: Anisotropic Sheaves} In this section, we introduce the proposed sheaf modeling for atomic systems, which is defined on $1$-dimensional simplicial complexes, namely, undirected, simple, finite graphs.
The anisotropic sheaf model, which will be the primary focus of our analysis, is inspired by the Anisotropic Network Model (ANM), a powerful framework for analyzing the normal modes of proteins and exploring the functional relationships and dynamics of numerous proteins~\cite{Atilgan:2001, doruker2000dynamics, eyal2006anisotropic, xia2015multiscale, xia2018multiscale}. \begin{figure} \centering \includegraphics[width=\linewidth]{Normal_modes_2.png} \caption{Illustration of the eigenmodes with zero and non-zero eigenvalues for chain A of the protein with ID 1URP from the RCSB Protein Data Bank~\cite{BJORKMAN1998651, Berman:2000}. The visualizations of the eigenmodes were generated using the VMD software~\cite{VMD}.} \label{fig: Comparison of zero and non-zero eigenmodes} \end{figure} \paragraph{Anisotropic network and Hessian matrix} The mathematical formulation of the anisotropic network and the associated Hessian matrix is presented below. Refer to~\cite{Atilgan:2001, doruker2000dynamics, eyal2006anisotropic, xia2015multiscale, xia2018multiscale} for more detailed settings, discussions, and formula derivations related to the ANM framework. Given an atomic system with 3D coordinates of atomic positions, particularly for the ${\rm C}_\alpha$ atoms, the anisotropic network is constructed as follows. Let $\{ v_i = (x_i^\circ, y_i^\circ, z_i^\circ) \ | \ i = 1, 2, ..., n \}$ be the coordinates of the equilibrium positions of the atoms. Two distance measurements between two atoms $v_i$ and $v_j$ are considered: the equilibrium distance $s_{ij}^\circ$ and the instantaneous distance $s_{ij}$. With a spring constant $\gamma$, the harmonic potential between these two atoms is formulated as \begin{equation*} V_{ij} = \frac{\gamma}{2} (s_{ij} - s_{ij}^\circ)^2.
\end{equation*} Furthermore, by using a fixed cutoff distance \( R_c \) and the Heaviside step function \( H: \mathbb{R} \rightarrow \{ 0, 1 \} \), defined by \( H(x) = 1 \) for \( x \geq 0 \) and \( H(x) = 0 \) otherwise, the potential function \( V_{\text{ANM}} \) for the entire atomic system is given by \begin{equation} \label{Eq. ANM energy function} \begin{split} V_{\text{ANM}} = \frac{\gamma}{2} \sum_{i,j} (d_{ij} - d_{ij}^\circ)^2 \cdot H(R_c - d_{ij}^\circ), \end{split} \end{equation} where $d_{ij} := \Vert (x_j - x_i, y_j - y_i, z_j - z_i) \Vert$ and $d_{ij}^\circ := \Vert (x_j^\circ - x_i^\circ, y_j^\circ - y_i^\circ, z_j^\circ - z_i^\circ) \Vert$. In particular, $H(R_c - d_{ij}^\circ) = 0$ if the equilibrium distance $d_{ij}^\circ$ is larger than a given cutoff distance $R_c$. Then, for the $i$-th and $j$-th atoms, the ANM Hessian matrix is defined as \begin{equation} \label{Eq. small Hessian matrix} \begin{split} H_{ij} &= \begin{bmatrix} \frac{\partial^2 V_{ij}}{\partial x_i \partial x_j} & \frac{\partial^2 V_{ij}}{\partial x_i \partial y_j} & \frac{\partial^2 V_{ij}}{\partial x_i \partial z_j}\\ \frac{\partial^2 V_{ij}}{\partial y_i \partial x_j} & \frac{\partial^2 V_{ij}}{\partial y_i \partial y_j} & \frac{\partial^2 V_{ij}}{\partial y_i \partial z_j}\\ \frac{\partial^2 V_{ij}}{\partial z_i \partial x_j} & \frac{\partial^2 V_{ij}}{\partial z_i \partial y_j} & \frac{\partial^2 V_{ij}}{\partial z_i \partial z_j}\\ \end{bmatrix} \\ &= \frac{-\gamma}{(s_{ij}^\circ)^2} \cdot \begin{bmatrix} (x_j^\circ-x_i^\circ)(x_j^\circ-x_i^\circ) & (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (y_j^\circ-y_i^\circ)(y_j^\circ-y_i^\circ) & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) & (z_j^\circ-z_i^\circ)(z_j^\circ-z_i^\circ) \\ \end{bmatrix} \end{split}.
\end{equation} In particular, according to Equation~\eqref{Eq. ANM energy function}, $H_{ij}$ equals the zero matrix if $d_{ij}^{\circ}$ is larger than the cutoff distance. The force constant of the entire system is described by the following $3n \times 3n$ matrix, known as the \textit{Hessian matrix} of the system. That is, \begin{equation} \label{Eq. The big Hessian matrix} \mathbf{H}_{\rm ANM} = \begin{bmatrix} H_{11} & H_{12} & \cdots & H_{1n}\\ H_{21} & H_{22} & \cdots & H_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ H_{n1} & H_{n2} & \cdots & H_{nn}\\ \end{bmatrix}, \end{equation} where each $H_{ij}$ with $i \neq j$ is a $3 \times 3$ matrix defined in Equation \eqref{Eq. small Hessian matrix} and $H_{ii} := -\sum_{j \neq i} H_{ij}$. The Hessian matrix is symmetric and positive semi-definite, and can therefore be diagonalized with \( 3n \) non-negative eigenvalues. The eigenvectors of \( \mathbf{H}_{\rm ANM} \) are referred to as \textit{eigenmodes} of the anisotropic network. In physical terms, the eigenvalues of the Hessian matrix represent the energetic cost of displacing the system. Eigenmodes with smaller eigenvalues correspond to low-energy deformations and typically reflect delocalized motions. Specifically, the eigenmodes associated with zero eigenvalues correspond to the rigid motions of the entire biomolecule, such as global translations and rotations. In contrast, eigenmodes with larger eigenvalues carry higher energetic costs and are associated with more significant local deformations of the biomolecule~\cite{cui2005normal}. Figure \ref{fig: Comparison of zero and non-zero eigenmodes} illustrates examples of eigenmodes with zero and non-zero eigenvalues.
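The assembly of $\mathbf{H}_{\rm ANM}$ from Equations \eqref{Eq. small Hessian matrix} and \eqref{Eq. The big Hessian matrix} can be sketched numerically. The Python snippet below is a minimal illustration (the function name \texttt{anm\_hessian} and the test coordinates are our own choices): for four non-coplanar points with an infinite cutoff, it recovers exactly six zero eigenvalues, matching the rigid translations and rotations described above.

```python
import numpy as np
from itertools import combinations

def anm_hessian(coords, gamma=1.0, cutoff=np.inf):
    """Assemble the 3n x 3n ANM Hessian from equilibrium coordinates:
    off-diagonal blocks H_ij = -(gamma / s^2) * outer(d, d) for edges within
    the cutoff, and diagonal blocks H_ii = -sum_{j != i} H_ij."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i, j in combinations(range(n), 2):
        d = coords[j] - coords[i]
        s2 = d @ d
        if np.sqrt(s2) > cutoff:
            continue  # pair beyond the cutoff contributes nothing
        block = -(gamma / s2) * np.outer(d, d)
        H[3*i:3*i+3, 3*j:3*j+3] = block
        H[3*j:3*j+3, 3*i:3*i+3] = block
    for i in range(n):
        H[3*i:3*i+3, 3*i:3*i+3] = -sum(
            H[3*i:3*i+3, 3*j:3*j+3] for j in range(n) if j != i)
    return H

# Four non-coplanar points: the complete-graph ANM Hessian has exactly six
# zero eigenvalues (rigid translations and rotations of the whole system).
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
H = anm_hessian(pts)
eig = np.linalg.eigvalsh(H)
print(sum(abs(e) < 1e-9 for e in eig))  # -> 6
```

The tetrahedral configuration is infinitesimally rigid, so the kernel contains only the six trivial rigid-body modes.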
With a sufficiently large cutoff distance \( R_c \), the ANM Hessian matrix for the underlying graph structure typically exhibits six zero eigenvalues, corresponding to the trivial modes of translation and rotation for the entire biomolecular complex (Figure \ref{fig: Six non-zero eigenmodes}). However, if \( R_c \) is too small, the number of zero-eigenvalue modes can significantly exceed six, creating challenges for accurate ANM analysis. On the other hand, using extremely large cutoff distances complicates the network formed by the atomic system, resulting in significantly higher computational complexity. \paragraph{Mathematical formulation of anisotropic sheaves} With an underlying geometric graph as the foundation, the \textit{anisotropic sheaf} is defined as follows. Let $G = (V, E)$ be an undirected, simple, finite graph, where the vertices in $V$ are labeled as $v_1, v_2, \dots, v_n$, each accompanied by coordinates $(x_i^\circ, y_i^\circ, z_i^\circ) \in \mathbb{R}^3$ for $i = 1, 2, \dots, n$. Then, a cellular sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ is defined by the following assignments. For each vertex $v_i \in V$ and edge $[v_i, v_j] \in E$ with $i < j$, we define $\mathcal{F}_{v_i} = \mathbb{R}^3$ and $\mathcal{F}_{[v_i,v_j]} = \mathbb{R}$. For the linear transformations associated with the face relations $v_i \leq [v_i, v_j]$ and $v_j \leq [v_i, v_j]$, we define \begin{equation} \label{Eq. Anisotropic sheaf definition} \mathcal{F}_{v_i, [v_i,v_j]} = w_{ij} \cdot \begin{bmatrix} x_j^\circ-x_i^\circ & y_j^\circ-y_i^\circ & z_j^\circ-z_i^\circ \\ \end{bmatrix} = \mathcal{F}_{v_j, [v_i,v_j]} \end{equation} as a $1 \times 3$ matrix, which represents a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^1$, where $w_{ij}$ is an assigned \textit{weight value} for the edge $[v_i, v_j]$.
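As a quick numerical illustration of Equation \eqref{Eq. Anisotropic sheaf definition}, the sketch below builds a restriction map as a weighted $1 \times 3$ row vector and checks that the induced $3 \times 3$ block $\mathcal{F}^T\mathcal{F}$ is a symmetric rank-one matrix; the helper name \texttt{restriction\_map} and the sample coordinates are our own choices.

```python
import numpy as np

def restriction_map(p_i, p_j, w_ij=1.0):
    # F_{v_i,[v_i,v_j]} = F_{v_j,[v_i,v_j]}: the weighted coordinate
    # difference of the two endpoints, stored as a 1 x 3 matrix.
    return w_ij * (np.asarray(p_j) - np.asarray(p_i)).reshape(1, 3)

p1, p2 = [0., 0., 0.], [1., 2., 2.]
F = restriction_map(p1, p2, w_ij=0.5)

# The 3 x 3 block F^T F is symmetric and rank one: it is w^2 times the
# outer product of the edge direction with itself.
block = F.T @ F
print(np.linalg.matrix_rank(block))  # -> 1
```

This rank-one outer-product structure is exactly what makes the sheaf Laplacian blocks agree with the ANM Hessian blocks up to scaling.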
Furthermore, following the convention introduced in Section \ref{Section: Foundations of the Mathematical Framework}, we also set \begin{equation} \label{Eq. General cellular sheaf convention} \mathcal{F}_{v_i, [v_j,v_k]} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} \end{equation} for $i \notin \{ j, k \}$, denoting the zero map from $\mathcal{F}_{v_i}$ to $\mathcal{F}_{[v_j,v_k]}$ when $v_i$ is not a face of $[v_j,v_k]$. Then, for every $[v_i, v_j] \in E$ with $i < j$, we have \begin{equation} \label{Eq. Local 3 times 3 matrix of the ANM sheaf} \begin{split} \mathcal{F}_{v_j, [v_i,v_j]}^T\mathcal{F}_{v_i, [v_i,v_j]} &= \mathcal{F}_{v_i, [v_i,v_j]}^T\mathcal{F}_{v_j, [v_i,v_j]} = w_{ij}^2 \cdot\begin{bmatrix} x_j^\circ-x_i^\circ \\ y_j^\circ-y_i^\circ \\ z_j^\circ-z_i^\circ \\ \end{bmatrix} \cdot \begin{bmatrix} x_j^\circ-x_i^\circ & y_j^\circ-y_i^\circ & z_j^\circ-z_i^\circ \\ \end{bmatrix} \\ & = w_{ij}^2 \cdot \begin{bmatrix} (x_j^\circ-x_i^\circ)^2 & (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (y_j^\circ-y_i^\circ)^2 & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) & (z_j^\circ-z_i^\circ)^2 \\ \end{bmatrix}. \end{split} \end{equation} In particular, compared to Equation~\eqref{Eq. small Hessian matrix}, the matrix $\mathcal{F}_{v_j, [v_i,v_j]}^T\mathcal{F}_{v_i, [v_i,v_j]}$ corresponds to the Hessian matrix for the $i$-th and $j$-th atoms in the ANM model, differing only in the scaling factor $\frac{-\gamma}{(s_{ij}^\circ)^2}$. We call the cellular sheaf $\mathcal{F}$ an \textit{anisotropic sheaf} defined on the geometric graph $G = (V,E)$. Moreover, to establish the sheaf cohomology of an anisotropic sheaf, we consider the following setup. By \eqref{Eq.
signed incidence function}, the signed incidence is defined by $[v_i:[v_i,v_j]] := -1$, $[v_j:[v_i,v_j]] := 1$, and $[v_i:[v_j,v_k]] = 0$ for $i \notin \{ j, k \}$. Let $E = \{ e_1, ..., e_m \}$ be the set of edges in $G$; then the $0$-th and $1$-st cochain spaces of $\mathcal{F}$ can be explicitly expressed by \begin{equation*} C^0(G,\mathcal{F}) = \bigoplus_{k = 1}^n \mathcal{F}_{v_k} \simeq \mathbb{R}^{3n} \text{ and } C^1(G,\mathcal{F}) = \bigoplus_{l = 1}^m \mathcal{F}_{e_l} \simeq \mathbb{R}^{m}. \end{equation*} Representing elements in $C^0(G, \mathcal{F})$ and $C^1(G, \mathcal{F})$ as elements in $\mathbb{R}^{3n}$ and $\mathbb{R}^{m}$ in column form, the $0$-th coboundary map becomes an $\mathbb{R}$-linear map $\mathbf{C}: C^0(G, \mathcal{F}) \longrightarrow C^1(G, \mathcal{F})$, represented by the following $m \times 3n$ matrix: \begin{equation} \label{Eq. Coboundary matrix} \mathbf{C} = \begin{bmatrix} [v_1:e_1] \cdot \mathcal{F}_{v_1, e_1} & [v_2:e_1] \cdot \mathcal{F}_{v_2, e_1} & \cdots & [v_n:e_1] \cdot \mathcal{F}_{v_n, e_1}\\ [v_1:e_2] \cdot \mathcal{F}_{v_1, e_2} & [v_2:e_2] \cdot \mathcal{F}_{v_2, e_2} & \cdots & [v_n:e_2] \cdot \mathcal{F}_{v_n, e_2}\\ \vdots & \vdots & \ddots & \vdots \\ [v_1:e_m] \cdot \mathcal{F}_{v_1, e_m} & [v_2:e_m] \cdot \mathcal{F}_{v_2, e_m} & \cdots & [v_n:e_m] \cdot \mathcal{F}_{v_n, e_m}\\ \end{bmatrix}, \end{equation} where $\mathbf{C}$ is composed of $m \times n$ blocks, and the $(l,k)$-th block is the $1 \times 3$ matrix $[v_k:e_l] \cdot \mathcal{F}_{v_k, e_l}$. Then, the $0$-th sheaf Laplacian $L_\mathcal{F} := \Delta^0$ is defined as the $3n \times 3n$ matrix \begin{equation} \label{Eq.
Sheaf Laplacian} L_\mathcal{F} = \mathbf{C}^T \cdot \mathbf{C} = \begin{bmatrix} L_{11} & L_{12} & \cdots & L_{1n}\\ L_{21} & L_{22} & \cdots & L_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ L_{n1} & L_{n2} & \cdots & L_{nn}\\ \end{bmatrix}, \end{equation} where the $(i,j)$-block matrix, with $i, j \in \{ 1, 2, \dots, n \}$, of the Laplacian matrix $L_\mathcal{F}$ is given by \begin{equation*} L_{ij} = \sum_{l = 1}^m \ [v_j : e_l] \cdot [v_i : e_l] \cdot \mathcal{F}_{v_i,e_l}^T\mathcal{F}_{v_j,e_l}. \end{equation*} Since $G$ is a simple graph, i.e., a $1$-dimensional abstract simplicial complex, any two vertices $v_i$ and $v_j$ with $i < j$ can have at most one edge between them. In particular, the $(i,j)$-block matrix of $L_\mathcal{F}$ satisfies the equation \begin{equation*} L_{ij} = \begin{cases} [v_j : [v_i,v_j]] \cdot [v_i : [v_i,v_j]] \cdot \mathcal{F}_{v_i,[v_i,v_j]}^T\mathcal{F}_{v_j,[v_i,v_j]} = -\mathcal{F}_{v_i,[v_i,v_j]}^T\mathcal{F}_{v_j,[v_i,v_j]} &, \ \text{if } [v_i,v_j] \in E, \\ 0 &, \ \text{otherwise}. \\ \end{cases} \end{equation*} More precisely, by Equation \eqref{Eq. Local 3 times 3 matrix of the ANM sheaf}, for $[v_i,v_j] \in E$, the matrix $L_{ij}$ can be written as \begin{equation*} L_{ij} = -\mathcal{F}_{v_i,[v_i,v_j]}^T\mathcal{F}_{v_j,[v_i,v_j]} = -w_{ij}^2 \cdot \begin{bmatrix} (x_j^\circ-x_i^\circ)^2 & (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(y_j^\circ-y_i^\circ) & (y_j^\circ-y_i^\circ)^2 & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) \\ (x_j^\circ-x_i^\circ)(z_j^\circ-z_i^\circ) & (y_j^\circ-y_i^\circ)(z_j^\circ-z_i^\circ) & (z_j^\circ-z_i^\circ)^2 \\ \end{bmatrix}. \end{equation*} On the other hand, by applying the convention in Equation~\eqref{Eq.
General cellular sheaf convention} and the definition of $\mathcal{F}_{v_i, [v_i,v_j]} = \mathcal{F}_{v_j, [v_i,v_j]}$, \begin{equation*} L_{ii} = \sum_{l = 1}^m \mathcal{F}_{v_i, e_l}^T\mathcal{F}_{v_i, e_l} = \sum_{j \neq i} \mathcal{F}_{v_i, [v_i,v_j]}^T\mathcal{F}_{v_i, [v_i,v_j]} = \sum_{j \neq i} \mathcal{F}_{v_i, [v_i,v_j]}^T\mathcal{F}_{v_j, [v_i,v_j]} = \sum_{j \neq i} -L_{ij} \end{equation*} for every $i \in \{ 1, 2, ..., n \}$. In particular, when setting each weight $w_{ij}$ to be $\gamma^{1/2}/s_{ij}^\circ$, the induced sheaf Laplacian $L_\mathcal{F}$ is exactly the Hessian matrix from the ANM model (cf. Equations \eqref{Eq. The big Hessian matrix} and \eqref{Eq. small Hessian matrix}), i.e., $L_{ij} = H_{ij}$. Furthermore, by Theorem \ref{Theorem: Hodge's theorem}, the space $\ker(L_\mathcal{F})$ equals the $0$-th cohomology $H^0(G,\mathcal{F})$; that is, the space of global sections of $\mathcal{F}$ over the graph $G$. According to the ANM framework, the global section space corresponds to the eigenmodes associated with the zero eigenvalue of the Hessian matrix. We summarize the above inferences in the following theorem. \begin{theorem} \label{Theorem: Main result 1} Let $G = (V, E)$ be an undirected, simple, finite graph with vertex coordinates $(x_i^\circ, y_i^\circ, z_i^\circ)$, where each vertex has distinct coordinates. Let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined as in~\eqref{Eq. Anisotropic sheaf definition}. If $w_{ij} = \gamma^{1/2}/s_{ij}^\circ$, where $\gamma$ denotes the spring constant and $s_{ij}^\circ$ represents the equilibrium distance between the $i$-th and $j$-th vertices, then the sheaf Laplacian $L_\mathcal{F} := \Delta^0$ equals the Hessian matrix $\mathbf{H}_{\rm ANM}$ based on the underlying graph $G = (V, E)$. \end{theorem} \begin{exam.} \label{Example: anisotropic sheaf defined on K3} Let $G = K_3$ be the complete graph with three vertices.
Furthermore, we represent $V = \{ 1, 2, 3\}$ and $E = \{ [1,2], [1,3], [2,3] \}$. Let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be an anisotropic sheaf. Then the $0$-th coboundary matrix is \begin{equation*} \mathbf{C} = \begin{bmatrix} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{2, [1,2]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{3, [1,3]} \\ \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{3, [2,3]} \\ \end{bmatrix}, \end{equation*} where each block is a $1 \times 3$ matrix. On the other hand, the sheaf Laplacian $L_\mathcal{F}$ is the matrix \begin{equation*} \begin{split} L_\mathcal{F} = \mathbf{C}^T \cdot \mathbf{C} &= \begin{bmatrix} -\mathcal{F}_{1, [1,2]}^T & -\mathcal{F}_{1, [1,3]}^T & \mathbf{0} \\ \mathcal{F}_{2, [1,2]}^T & \mathbf{0} & -\mathcal{F}_{2, [2,3]}^T \\ \mathbf{0} & \mathcal{F}_{3, [1,3]}^T & \mathcal{F}_{3, [2,3]}^T \\ \end{bmatrix} \cdot \begin{bmatrix} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{2, [1,2]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{3, [1,3]} \\ \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{3, [2,3]} \\ \end{bmatrix} \\ &= \begin{bmatrix} L_{11} & -\mathcal{F}_{1, [1,2]}^T\mathcal{F}_{2, [1,2]} & -\mathcal{F}_{1, [1,3]}^T\mathcal{F}_{3, [1,3]}\\ -\mathcal{F}_{2, [1,2]}^T \mathcal{F}_{1, [1,2]} & L_{22} & -\mathcal{F}_{2,[2,3]}^T\mathcal{F}_{3,[2,3]} \\ -\mathcal{F}_{3, [1,3]}^T\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{3, [2,3]}^T\mathcal{F}_{2, [2,3]} & L_{33} \\ \end{bmatrix}, \end{split} \end{equation*} where the diagonal block matrices are \begin{equation*} \begin{split} L_{11} &= \mathcal{F}_{1, [1,2]}^T \mathcal{F}_{1, [1,2]} + \mathcal{F}_{1, [1,3]}^T \mathcal{F}_{1, [1,3]}, \\ L_{22} &= \mathcal{F}_{2, [1,2]}^T\mathcal{F}_{2, [1,2]} + \mathcal{F}_{2, [2,3]}^T\mathcal{F}_{2, [2,3]}, \\ L_{33} &= \mathcal{F}_{3, [1,3]}^T\mathcal{F}_{3, [1,3]} + \mathcal{F}_{3, [2,3]}^T\mathcal{F}_{3, [2,3]}.
\end{split} \end{equation*} In particular, the rank of $\mathbf{C}$ is at most 3. Let $\mathbf{C}_1$, $\mathbf{C}_2$, and $\mathbf{C}_3$ denote the first, second, and third rows of $\mathbf{C}$, respectively. Suppose $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}$ satisfy $\lambda_1 \mathbf{C}_1 + \lambda_2 \mathbf{C}_2 + \lambda_3 \mathbf{C}_3 = 0$. Then, it follows that $\lambda_1 \mathcal{F}_{1, [1,2]} + \lambda_2 \mathcal{F}_{1, [1,3]} = \mathbf{0}$ and $\lambda_2 \mathcal{F}_{3, [1,3]} + \lambda_3 \mathcal{F}_{3, [2,3]} = \mathbf{0}$. Furthermore, if $\mathcal{F}_{1, [1,2]}$ and $\mathcal{F}_{1, [1,3]}$ are linearly independent and $\mathcal{F}_{3, [2,3]} \neq \mathbf{0}$, then $\lambda_1 = \lambda_2 = \lambda_3 = 0$, implying that ${\rm rank}(\mathbf{C}) = 3$. \end{exam.} If \( w_{ij} = 1 \) for every \( i < j \), the anisotropic sheaf can be regarded as a dual structure to the force cosheaf defined on the graph~\cite{cooperband2023towards,cooperband2024equivariant,cooperband2023cosheaf,cooperband2024cellular}. Specifically, the restriction maps of the anisotropic sheaf from vertices to edges correspond to the transpose matrices of the force cosheaf’s maps from edges to vertices. In particular, the force cosheaf can be represented as a functor $\mathcal{H}: (G,\leq)^{\rm op} \rightarrow \textsf{Vect}_{\mathbb{R}}$ from the opposite category of $(G,\leq)$ to $\textsf{Vect}_{\mathbb{R}}$, and the induced boundary matrix $\mathbf{B}: C_1(G,\mathcal{H}) \rightarrow C_0(G,\mathcal{H})$ is exactly the transpose of the coboundary matrix $\mathbf{C}: C^0(G,\mathcal{F}) \rightarrow C^1(G,\mathcal{F})$, where $C_0(G,\mathcal{H}) = C^0(G,\mathcal{F})$, $C_1(G,\mathcal{H}) = C^1(G,\mathcal{F})$.
In particular, the $1$-st cosheaf homology $H_1(G,\mathcal{H})$, whose dimension represents the number of self-stresses $|S|$, satisfies \begin{equation} \begin{split} \dim H_1(G,\mathcal{H}) &= \dim \ker(\mathbf{B}) = \dim C_1(G,\mathcal{H}) - {\rm rank}(\mathbf{B}) = \dim C_1(G,\mathcal{H}) - {\rm rank}(\mathbf{B}^T) \\ &= \dim C_1(G,\mathcal{H}) - {\rm rank}(\mathbf{C}) = |E| - {\rm rank}(\mathbf{C}) = \dim_\mathbb{R} H^1(G,\mathcal{F}). \end{split} \end{equation} On the other hand, the dimension of the $0$-th force cosheaf homology $H_0(G,\mathcal{H})$ is given by \begin{equation} \begin{split} \dim H_0(G,\mathcal{H}) &= \dim \frac{C_0(G,\mathcal{H})}{{\rm im}(\mathbf{B})} = \dim C_0(G,\mathcal{H}) - \dim {\rm im}(\mathbf{B}) \\ &= \dim C_0(G,\mathcal{H}) - {\rm rank}(\mathbf{B}) = \dim C_0(G,\mathcal{H}) - {\rm rank}(\mathbf{B}^T) \\ &= \dim C_0(G,\mathcal{H}) - {\rm rank}(\mathbf{C}) = \dim \ker(\mathbf{C}) = \dim H^0(G,\mathcal{F}). \end{split} \end{equation} In other words, the dimension of the $0$-th force cosheaf homology equals the dimension of the global section space of the anisotropic sheaf $\mathcal{F}: (G,\leq) \rightarrow \textsf{Vect}_{\mathbb{R}}$. More precisely, the $3$-dimensional Maxwell's rule states that \begin{equation} \label{Eq. Maxwell's rule} 3|V| - |E| = \dim H_0(G,\mathcal{H}) - \dim H_1(G,\mathcal{H}) = 6 + |M| - |S|, \end{equation} where $|M| = \dim H_0(G,\mathcal{H}) - 6 = \dim H_0(G,\mathcal{H}) - \binom{4}{2}$ is the number of \textit{linkage mechanisms} and $|S| = \dim H_1(G,\mathcal{H})$ is the number of \textit{self-stresses} of the truss mechanism. For further details and a more general version of the $n$-dimensional Maxwell's rule, refer to~\cite{cooperband2023towards,cooperband2024equivariant,cooperband2023cosheaf,cooperband2024cellular}.
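Maxwell's rule \eqref{Eq. Maxwell's rule} can be checked numerically on the complete graph $K_3$ of Example \ref{Example: anisotropic sheaf defined on K3}. The Python sketch below (with $w_{ij} = 1$ and sample planar coordinates of our own choosing) assembles the coboundary matrix of Equation \eqref{Eq. Coboundary matrix} and verifies $3|V| - |E| = \dim H_0(G,\mathcal{H}) - \dim H_1(G,\mathcal{H})$.

```python
import numpy as np

# Coboundary matrix C for the anisotropic sheaf on K3 (w_ij = 1): one row
# per edge, one 1 x 3 block per vertex, with signed incidences -1 and +1.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
edges = [(0, 1), (0, 2), (1, 2)]
C = np.zeros((len(edges), 9))
for l, (i, j) in enumerate(edges):
    d = pts[j] - pts[i]            # F_{v_i,e} = F_{v_j,e} = d^T
    C[l, 3*i:3*i+3] = -d           # [v_i : e] = -1
    C[l, 3*j:3*j+3] = +d           # [v_j : e] = +1

r = np.linalg.matrix_rank(C)
dim_H0 = 9 - r                     # dim ker(C) = dim H^0 = dim H_0(G, H)
dim_H1 = len(edges) - r            # dim H^1 = dim H_1(G, H) = |S|
# Maxwell's rule: 3|V| - |E| = dim H_0 - dim H_1
print(3 * 3 - len(edges), dim_H0 - dim_H1)
```

For three non-collinear points, the example's rank argument gives ${\rm rank}(\mathbf{C}) = 3$, so both sides of Maxwell's rule equal $6$, with $|M| = |S| = 0$.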
\paragraph{Ranks of anisotropic sheaves} At the end of the section, the concept of the \textit{rank} of an anisotropic sheaf on a graph is introduced, which is crucial for determining the dimension of the global section space of the sheaf. In general, for a cellular sheaf $\mathcal{F}$ on a graph $G = (V, E)$, the rank of $\mathcal{F}$ reflects the linear independence of the linear transformations $\mathcal{F}_{v, e}$ considered as vectors in $\mathbb{R}^3$. As illustrated in Example \ref{Example: anisotropic sheaf defined on K3}, the matrix $\mathbf{C}$ achieves a rank of $3$ when $\mathcal{F}_{1, [1,2]}$ and $\mathcal{F}_{1, [1,3]}$ are linearly independent and $\mathcal{F}_{3, [2,3]} \neq \mathbf{0}$. In other words, counting the number of linearly independent vectors $\mathcal{F}_{v, e} \in \mathbb{R}^3$ is useful for approximating the rank of the coboundary matrix and, consequently, for estimating the dimension of the global section space. The formal definition of the rank of an anisotropic sheaf is provided below. \begin{def.} Let $G = (V,E)$ be an undirected, simple, and finite graph, and let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be an anisotropic sheaf. The \textbf{rank} of $\mathcal{F}$ is defined as follows. \begin{equation*} {\rm rank}(\mathcal{F}) = \dim_\mathbb{R} \bigg( {\rm span}_\mathbb{R} \bigg\{ \mathcal{F}_{v_i, [v_i,v_j]} \ \bigg| \ i < j \bigg\} \bigg). \end{equation*} An anisotropic sheaf $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ is said to have \textbf{full rank} if ${\rm rank}(\mathcal{F}) = 3$. \end{def.} It is evident that ${\rm rank}(\mathcal{F}) \leq 3$, since all vectors $\mathcal{F}_{v, e}$ lie in the $3$-dimensional Euclidean space. 
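The rank of an anisotropic sheaf can be computed directly from the definition above by stacking the row vectors $\mathcal{F}_{v_i, [v_i,v_j]}$ and taking a matrix rank. The following sketch (the helper name \texttt{sheaf\_rank} and the coordinates are illustrative choices of ours) shows both a degenerate and a full-rank configuration.

```python
import numpy as np

def sheaf_rank(pts, edges, weights=None):
    # rank(F) = dim span{ F_{v_i,[v_i,v_j]} : i < j }, where each restriction
    # map is the row vector w_ij * (p_j - p_i) in R^3.
    rows = []
    for k, (i, j) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        rows.append(w * (pts[j] - pts[i]))
    return np.linalg.matrix_rank(np.array(rows))

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(sheaf_rank(pts, [(0, 1), (0, 2)]))          # edge directions span a plane
print(sheaf_rank(pts, [(0, 1), (0, 2), (0, 3)]))  # three independent directions
```

With nonzero weights the rank is unchanged by rescaling each row, which is why the proposition below may restrict attention to the edges incident to a single vertex.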
Excluding cases where an anisotropic sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ has $w_{ij} = 0$ for some $i < j$, the following proposition simplifies the definition of the rank of an anisotropic sheaf to facilitate a more efficient analysis. \begin{prop.} Let $G = (V, E)$ be a graph with $n$ vertices, each associated with 3D coordinates $(x_i^\circ, y_i^\circ, z_i^\circ)$. The vertices are denoted by $v_1, \ldots, v_n \in V$, and the edges by $[v_i, v_j]$ with $i < j$. Let $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be an anisotropic sheaf defined as in~\eqref{Eq. Anisotropic sheaf definition}. If $w_{ij} \neq 0$ for every $i < j$ and $[v_1, v_j] \in E$ for every $j \in \{ 2, ..., n\}$, then \begin{equation*} {\rm rank}(\mathcal{F}) = \dim_\mathbb{R} \bigg( {\rm span}_\mathbb{R} \bigg\{ \mathcal{F}_{v_1, [v_1,v_j]} \ \bigg| \ j \in \{ 2, ..., n \} \bigg\} \bigg); \end{equation*} that is, the set $\{ \mathcal{F}_{v_1, [v_1,v_j]} \ | \ j \in \{ 2, ..., n \} \}$ contains a basis of the space spanned by the vectors $\mathcal{F}_{v_i, [v_i,v_j]}$ with $i < j$. In particular, if $G = K_n$ is a complete graph with $n$ vertices and $k$ is a fixed number in $\{ 1, 2, ..., n \}$, then the set $\{ \mathcal{F}_{v_k, [v_k,v_l]} \ | \ l \in \{ 1, 2, ..., n \} \setminus \{ k \} \}$ spans all the vectors $\mathcal{F}_{v_i, [v_i,v_j]}$ with $i < j$. \end{prop.} \begin{proof} For every $i < j$ in $\{ 1, 2, ..., n \}$, the weight $w_{ij}$ is non-zero. Then, \begin{equation} \label{Eq.
ij vector spanned by 1j and 1i} \begin{split} \frac{1}{w_{ij}} \cdot \mathcal{F}_{v_i, [v_i,v_j]} &= \begin{bmatrix} x_j^\circ-x_i^\circ & y_j^\circ-y_i^\circ & z_j^\circ-z_i^\circ \\ \end{bmatrix} = \begin{bmatrix} x_j^\circ & y_j^\circ & z_j^\circ \\ \end{bmatrix} - \begin{bmatrix} x_i^\circ & y_i^\circ & z_i^\circ \\ \end{bmatrix} \\ &= \begin{bmatrix} x_j^\circ - x_1^\circ & y_j^\circ - y_1^\circ & z_j^\circ - z_1^\circ \\ \end{bmatrix} - \begin{bmatrix} x_i^\circ - x_1^\circ & y_i^\circ - y_1^\circ & z_i^\circ - z_1^\circ \\ \end{bmatrix} \\ &= \frac{1}{w_{1j}} \cdot \mathcal{F}_{v_1, [v_1,v_j]} - \frac{1}{w_{1i}} \cdot \mathcal{F}_{v_1, [v_1,v_i]} \in {\rm span}_\mathbb{R} \{ \mathcal{F}_{v_1, [v_1,v_j]}, \mathcal{F}_{v_1, [v_1,v_i]} \} \end{split} \end{equation} if $[v_1, v_i]$ and $[v_1, v_j]$ also belong to the edge set $E$ of $G$. This shows that $\{ \mathcal{F}_{v_1, [v_1,v_2]}, ..., \mathcal{F}_{v_1, [v_1,v_{n}]} \}$ spans all the vectors $\mathcal{F}_{v_i, [v_i,v_j]}$ with $i < j$, as desired. Because $\{ \mathcal{F}_{v_1, [v_1,v_j]} \ | \ j \in \{ 2, ..., n \}\}$ is a subset of $\{ \mathcal{F}_{v_i, [v_i,v_j]} \ | \ i < j \}$, the proposition follows. \end{proof} \section{Normal Mode Analysis in Anisotropic Sheaf Models} \label{Section: Normal Mode Analysis in Anisotropic Sheaf Models} In this section, we explore the physical significance of the global sections and the global section space of the anisotropic sheaf defined on an atomic system. The section is organized into two main parts. First, we establish the connection between the global sections of the anisotropic sheaf and the normal modes of the atomic system. Specifically, as demonstrated in Theorem \ref{Theorem: Main result 2}, each global section corresponds to a normal mode involving global translation or rotation in the anisotropic network model associated with the graph. Second, we examine the dimension of the global section space of a given anisotropic sheaf.
By analyzing the anisotropic sheaf on a complete graph under specific geometric conditions, such as the coordinates being in general position, we provide a mathematical proof for the existence of a graph model of an anisotropic sheaf with a global section space of dimension exactly $6$, which is generally considered an ideal condition in ANM analysis (cf. Theorems \ref{Theorem: Main result 3-} and \ref{Theorem: Main result 3}). \paragraph{Global Section Space and Normal Modes} \begin{figure} \centering \includegraphics[width=\linewidth]{Glabal_section_toy_example.png} \caption{An illustration of two global sections of an anisotropic sheaf defined on a graph $G$, consisting of two vertices $v_1$ and $v_2$ with coordinates $(x_1^\circ, y_1^\circ, z_1^\circ)$ and $(x_2^\circ, y_2^\circ, z_2^\circ)$, connected by a single edge. According to the definition of anisotropic sheaves, the stalk spaces are $\mathcal{F}_{v_1} = \mathbb{R}^3$, $\mathcal{F}_{v_2} = \mathbb{R}^3$, $\mathcal{F}_{[v_1,v_2]} = \mathbb{R}$. The restriction maps $\mathcal{F}_{v_1, [v_1, v_2]}$ and $\mathcal{F}_{v_2, [v_1, v_2]}$ are defined as the $1 \times 3$ matrix $\begin{bmatrix} x_2^\circ-x_1^\circ & y_2^\circ-y_1^\circ & z_2^\circ-z_1^\circ \\ \end{bmatrix}$, which sends each vector $\mathbf{x} \in \mathbb{R}^3$ to the inner product value $\langle \mathbf{u}, \mathbf{x} \rangle$, where $\mathbf{u} = (x_2^\circ-x_1^\circ, y_2^\circ-y_1^\circ, z_2^\circ-z_1^\circ)$. Then, for any $\mathbf{v} \in \mathbb{R}^3$, $(\mathbf{v}, \mathbf{v})$ is a global section since $\mathcal{F}_{v_1,[v_1,v_2]}(\mathbf{v}) = \mathcal{F}_{v_2,[v_1,v_2]}(\mathbf{v})$.
On the other hand, if $\mathbf{w} \in \mathbb{R}^3$ is perpendicular to $\mathbf{u}$, then $(\mathbf{0}, \mathbf{w})$ is a global section since $\mathcal{F}_{v_2, [v_1,v_2]}(\mathbf{w}) = \langle \mathbf{u}, \mathbf{w} \rangle = 0 = \langle \mathbf{u}, \mathbf{0} \rangle = \mathcal{F}_{v_1, [v_1,v_2]}(\mathbf{0})$.} \label{fig: demo example of global sections} \end{figure} Global sections of an anisotropic sheaf have direct geometric meanings in normal mode analysis within molecular dynamics. Starting with a simple example of a graph consisting of two vertices and one edge with 3D coordinate information, Figure \ref{fig: demo example of global sections} illustrates the geometric visualization of two types of global sections for any anisotropic sheaf defined on this graph. In this example, the tuples $(\mathbf{v}, \mathbf{v})$ and $(\mathbf{0}, \mathbf{w})$, where $\mathbf{v}, \mathbf{w} \in \mathbb{R}^3$ and $\mathbf{0} = (0,0,0)$, are global sections since $\mathcal{F}_{v_1,[v_1,v_2]}(\mathbf{v}) = \mathcal{F}_{v_2,[v_1,v_2]}(\mathbf{v})$ and $\mathcal{F}_{v_2,[v_1,v_2]}(\mathbf{w}) = \mathcal{F}_{v_1,[v_1,v_2]}(\mathbf{0})$. Specifically, $(\mathbf{v}, \mathbf{v})$ corresponds to a translation operation where both vertices move consistently along the 3D direction of $\mathbf{v}$, resulting in the translation of the entire graph. In contrast, $\mathbf{w}$ is set perpendicular to the vector $(x_2^\circ - x_1^\circ, y_2^\circ - y_1^\circ, z_2^\circ - z_1^\circ)$, implying that $(\mathbf{0}, \mathbf{w})$ is a global section. This corresponds to a movement where vertex $v_1$ remains fixed while vertex $v_2$ rotates, with $\mathbf{w}$ as the velocity of the circular motion centered at $v_1$ with radius $s_{12}^\circ$. The following theorem shows that the dimension of the global section space of any anisotropic sheaf must be at least $6$.
This space has a linearly independent set consisting of three global sections corresponding to translations in three directions, and three more corresponding to rotations around the three principal axes, namely operations that act on the entire structure as a whole (Figure \ref{fig: Six non-zero eigenmodes}). This phenomenon is summarized in the following theorem. \begin{theorem}\label{Theorem: Main result 2} Let $G = (V,E)$ be an undirected, simple, and finite graph, where each vertex is annotated with a $3$-dimensional coordinate $(x_i^\circ, y_i^\circ, z_i^\circ)$. Let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be an anisotropic sheaf. If there are three affinely independent coordinates, then $\dim_\mathbb{R} \ker(L_{\calF}) \geq 6$. In particular, this space contains a linearly independent set $S$ with $|S| = 6$, comprising three global sections corresponding to translations in three directions and three more corresponding to rotations around the three principal axes. \end{theorem} \begin{proof} Suppose $V = \{ v_1, v_2, ..., v_n \}$ consists of $n$ vertices. Then, the $0$-th cochain space $C^0(G,\mathcal{F})$ of $\mathcal{F}$ is defined as the direct sum $\mathbb{R}^3 \oplus \mathbb{R}^3 \oplus \cdots \oplus \mathbb{R}^3 \simeq \mathbb{R}^{3n}$ of $n$ $\mathbb{R}^3$ spaces. Elements in $C^0(G,\mathcal{F})$ are $n$-tuples $\alpha = (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_n)$ of vectors in $\mathbb{R}^3$, recording local sections on vertices $v_1, v_2, ..., v_n$. Let $\{ \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 \}$ be the standard basis for $\mathbb{R}^3$, and let $\alpha_k = (\mathbf{e}_k, \mathbf{e}_k, ..., \mathbf{e}_k)$ be the $n$-tuple consisting of $n$ $\mathbf{e}_k$ vectors. Then, $\{ \alpha_1, \alpha_2, \alpha_3 \}$ is a linearly independent subset of $C^0(G,\mathcal{F})$. Moreover, since $\mathcal{F}_{v_i,[v_i,v_j]} = \mathcal{F}_{v_j,[v_i,v_j]}$ for every edge $[v_i,v_j] \in E$, each $\alpha_k$ satisfies $\mathcal{F}_{v_i,[v_i,v_j]}(\mathbf{e}_k) = \mathcal{F}_{v_j,[v_i,v_j]}(\mathbf{e}_k)$ on every edge, so $\alpha_1, \alpha_2, \alpha_3$ are global sections of $\mathcal{F}$ corresponding to translations.
Therefore, it is sufficient to show that, in addition to the global sections corresponding to translations, there are three linearly independent global sections arising from rotational operations. To simplify the proof, we focus on rotations around the $x$-, $y$-, and $z$-axes and claim that these rotational operations correspond to global sections of any anisotropic sheaf. Let $R_z: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ denote the rotation around the $z$-axis. Let $(x_i^\circ, y_i^\circ, z_i^\circ)$ be the coordinates of the $i$-th vertex of $G$. Then the velocity vector due to the rotation $R_z$ at this vertex is $(-y_i^\circ, x_i^\circ, 0)$. Therefore, \begin{equation*} \begin{split} \mathcal{F}_{v_i,[v_i,v_j]}(-y_i^\circ, x_i^\circ, 0) &= \langle (x_j^\circ - x_i^\circ, y_j^\circ - y_i^\circ, z_j^\circ - z_i^\circ), (-y_i^\circ, x_i^\circ, 0) \rangle \\ &= -x_j^\circ y_i^\circ + x_i^\circ y_i^\circ + x_i^\circ y_j^\circ - x_i^\circ y_i^\circ = -x_j^\circ y_i^\circ + x_i^\circ y_j^\circ, \\ \mathcal{F}_{v_j,[v_i,v_j]}(-y_j^\circ, x_j^\circ, 0) &= \langle (x_j^\circ - x_i^\circ, y_j^\circ - y_i^\circ, z_j^\circ - z_i^\circ), (-y_j^\circ, x_j^\circ, 0) \rangle \\ &= -x_j^\circ y_j^\circ + x_i^\circ y_j^\circ + x_j^\circ y_j^\circ - x_j^\circ y_i^\circ = x_i^\circ y_j^\circ - x_j^\circ y_i^\circ, \end{split} \end{equation*} for every edge $[v_i, v_j] \in E$, and the two values coincide. By setting $\alpha_z = (\bfx_i)_{i = 1}^n \in C^0(G,\mathcal{F}) = \bigoplus_{i = 1}^n \mathcal{F}_{v_i}$ with $\bfx_i = (-y_i^\circ, x_i^\circ, 0) \in \mathcal{F}_{v_i} = \mathbb{R}^3$ for each $i$, we deduce that $\alpha_z \in \Gamma(G,\mathcal{F})$. By symmetric arguments, the associated $\alpha_x$ and $\alpha_y$ are also global sections of $\mathcal{F}$. Second, we claim that if the point cloud $\{ (x_i^\circ, y_i^\circ, z_i^\circ) \}_{i = 1}^n$ contains three affinely independent coordinates, then the set $\{ \alpha_1, \alpha_2, \alpha_3, \alpha_x, \alpha_y, \alpha_z \}$ is linearly independent.
Without loss of generality, we assume that $\{ (x_i^\circ, y_i^\circ, z_i^\circ) \}_{i = 1}^3$ is affinely independent in $\mathbb{R}^3$, and it is sufficient to prove that the matrix \begin{equation*} \begin{bmatrix} 1 & 0 & 0 & 0 & z_1^\circ & -y_1^\circ \\ 0 & 1 & 0 & -z_1^\circ & 0 & x_1^\circ \\ 0 & 0 & 1 & y_1^\circ & -x_1^\circ & 0 \\ 1 & 0 & 0 & 0 & z_2^\circ & -y_2^\circ \\ 0 & 1 & 0 & -z_2^\circ & 0 & x_2^\circ \\ 0 & 0 & 1 & y_2^\circ & -x_2^\circ & 0 \\ 1 & 0 & 0 & 0 & z_3^\circ & -y_3^\circ \\ 0 & 1 & 0 & -z_3^\circ & 0 & x_3^\circ \\ 0 & 0 & 1 & y_3^\circ & -x_3^\circ & 0 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & z_2^\circ - z_1^\circ & y_1^\circ - y_2^\circ \\ 0 & 1 & 0 & z_1^\circ - z_2^\circ & 0 & x_2^\circ - x_1^\circ \\ 0 & 0 & 1 & y_2^\circ - y_1^\circ & x_1^\circ - x_2^\circ & 0 \\ 1 & 0 & 0 & 0 & z_3^\circ - z_1^\circ & y_1^\circ - y_3^\circ \\ 0 & 1 & 0 & z_1^\circ - z_3^\circ & 0 & x_3^\circ - x_1^\circ \\ 0 & 0 & 1 & y_3^\circ - y_1^\circ & x_1^\circ - x_3^\circ & 0 \\ \end{bmatrix} \end{equation*} has rank $6$, where the columns of the first matrix record the values of $\alpha_1, \alpha_2, \alpha_3, \alpha_x, \alpha_y, \alpha_z$ at the vertices $v_1, v_2, v_3$, and the second matrix is obtained from the first through column elimination. Since the triangle formed by the three coordinates is non-degenerate, the submatrix consisting of the last three columns of the right-hand-side matrix has a rank of $3$. Consequently, the whole matrix has a rank of $6$, as desired. \end{proof} \begin{remark} Using cosheaf language, Theorem \ref{Theorem: Main result 2} can be interpreted as saying that for any non-degenerate $1$-simplicial complex in \( \mathbb{R}^3 \), the cosheaf homology space, representing the linkage kinematic degrees of freedom, has dimension at least \( \binom{4}{2} \)~\cite{cooperband2024cellular}.
Dually, within the framework of the proposed anisotropic sheaf model, we explicitly construct the global sections corresponding to the $3$-dimensional translations and rotations, which are directly connected to the geometric significance of atomic local movements. \end{remark} \begin{figure} \centering \includegraphics[width=\linewidth]{Six_trivial_modes.png} \caption{Illustration of six linearly independent eigenmodes with zero eigenvalues for the protein with ID 103M from the RCSB Protein Data Bank~\cite{smith1999correlations, Berman:2000}. Three of the eigenmodes correspond to translations of the entire protein in three directions, while the other three correspond to rotations of the entire protein around three axes. The visualizations of the eigenmodes were generated using the VMD software~\cite{VMD}.} \label{fig: Six non-zero eigenmodes} \end{figure} Theorems \ref{Theorem: Main result 2} and \ref{Theorem: Main result 1} together offer a mathematical interpretation of the geometric significance of the eigenmodes associated with a zero eigenvalue by analyzing the global sections of the proposed anisotropic sheaf. Specifically, Theorem \ref{Theorem: Main result 1} establishes that each eigenmode of the anisotropic Hessian matrix $\mathbf{H}_{\rm ANM}$ with a zero eigenvalue corresponds to a global section of the sheaf. Importantly, by Theorem \ref{Theorem: Main result 2}, this correspondence holds independently of the cutoff distance $R_c$ used in network construction for Equation~\eqref{Eq. ANM energy function}, ensuring that all underlying anisotropic networks consistently exhibit the same six eigenmodes: three corresponding to translations and three to rotations. In particular, as demonstrated in Theorem \ref{Theorem: Main result 2}, the Hessian matrix $\mathbf{H}_{\rm ANM}$ associated with any atomic system must exhibit at least six trivial (zero) eigenvalues if the system contains at least three affinely independent coordinates.
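The correspondence between rigid motions and the kernel of the coboundary map can also be checked numerically: the nullity equals $3|V| - {\rm rank}(\mathbf{C})$, and the six translation and rotation velocity fields lie in $\ker(\mathbf{C})$. Below is a hedged sketch assuming NumPy and all weights $w_{ij} = 1$; the function names \texttt{coboundary} and \texttt{rigid\_motions}, and the incidence-sign convention ($+$ on the $v_i$ block, $-$ on the $v_j$ block), are illustrative choices, not the paper's code.

```python
import numpy as np

def coboundary(coords, edges):
    # 0-th coboundary matrix C: one row per edge, one 3-column block per
    # vertex.  For the edge [v_i, v_j] with u = p_j - p_i, the row carries
    # +u on the v_i block and -u on the v_j block (a choice of orientation).
    P = np.asarray(coords, dtype=float)
    C = np.zeros((len(edges), 3 * len(P)))
    for r, (i, j) in enumerate(edges):
        u = P[j] - P[i]
        C[r, 3 * i:3 * i + 3] = u
        C[r, 3 * j:3 * j + 3] = -u
    return C

def rigid_motions(coords):
    # Six candidate global sections: the three translations e_1, e_2, e_3
    # and the velocity fields of rotations about the x-, y-, z-axes.
    P = np.asarray(coords, dtype=float)
    sections = [np.tile(e, len(P)) for e in np.eye(3)]
    sections += [np.cross(axis, P).ravel() for axis in np.eye(3)]
    return sections
```

For five points in general position joined by a complete graph, the nullity $3 \cdot 5 - {\rm rank}(\mathbf{C})$ evaluates to exactly $6$, and every vector returned by \texttt{rigid\_motions} is annihilated by $\mathbf{C}$, consistent with the six trivial eigenmodes.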
Next, we present two propositions as preparation for a further analysis of global section spaces. The first proposition provides a formula relating the dimension of the global section space of an anisotropic sheaf to the rank of the $0$-coboundary map. This allows one to compute the dimension of $H^0(G, \mathcal{F})$ by calculating the rank of the coboundary matrix. \begin{prop.} \label{Proposition: Dimension of the global section space of an anisotropic sheaf} Let $G = (V,E)$ be an undirected, simple, and finite graph. Let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be an anisotropic sheaf. Let $\mathbf{C}: C^0(G,\calF) \rightarrow C^1(G,\calF)$ be the $0$-th coboundary matrix. Then \begin{equation*} \dim_\mathbb{R} \ker(L_\calF) = \dim_\mathbb{R} H^0(G,\calF) = 3 \cdot |V| - {\rm rank}(\mathbf{C}). \end{equation*} \end{prop.} \begin{proof} As presented in \eqref{Eq. Coboundary matrix}, the coboundary map $\mathbf{C}$ is represented by an $m \times 3n$ matrix, where $n = |V|$ and $m = |E|$ are the numbers of vertices and edges of $G$, respectively. In particular, it is a linear transformation from $\mathbb{R}^{3n} \simeq C^0(G,\calF)$ to $\mathbb{R}^m \simeq C^1(G,\calF)$. By Theorem \ref{Theorem: Hodge's theorem} and the dimension theorem, we have $\dim_\mathbb{R} \ker(L_\calF) = \dim_\mathbb{R} H^0(G,\calF) = \dim_\mathbb{R} \ker(\mathbf{C}) = \dim_\mathbb{R} C^0(G,\calF) - {\rm rank}(\mathbf{C}) = 3 \cdot |V| - {\rm rank}(\mathbf{C})$, as desired. \end{proof} The second proposition plays a crucial role in investigating the dimensions of the global section spaces of an anisotropic sheaf and its restriction sheaves. This proposition will be highly valuable in exploring the kernel of the Laplacian matrix of anisotropic sheaves. \begin{prop.} \label{Proposition: Contravariant relation of global section spaces} Let $V$ be a finite set and $G_1 = (V,E_1), G_2 = (V,E_2)$ be undirected simple graphs defined on the same vertex set.
Suppose $E_1 \subseteq E_2$ and $\calF_1, \calF_2$ are anisotropic sheaves defined on $G_1$ and $G_2$, respectively. If $\mathcal{F}_1 = \mathcal{F}_2|_{G_1}$, then $\dim_\mathbb{R} \ker(L_{\calF_2}) \leq \dim_\mathbb{R} \ker(L_{\calF_1})$. \end{prop.} \begin{proof} Suppose $V = \{ v_1, ..., v_n \}$ and $E_1 = \{ e_1, ..., e_m \}$ consist of $n$ vertices and $m$ edges, respectively. Since $E_1 \subseteq E_2$, $E_2 = \{ e_1, ..., e_m, e_{m+1}, ..., e_{m+d} \}$, where $e_{m+1}, ..., e_{m+d} \in E_2 \setminus E_1$ are the additional edges of $G_2$. Then, the $0$-th coboundary matrix $\mathbf{C}_2: C^0(G_2,\calF_2) \rightarrow C^1(G_2,\calF_2)$ can be represented by \begin{equation*} \mathbf{C}_2 = \begin{bmatrix} [v_1:e_1] \cdot (\mathcal{F}_1)_{v_1, e_1} & [v_2:e_1] \cdot (\mathcal{F}_1)_{v_2, e_1} & \cdots & [v_n:e_1] \cdot (\mathcal{F}_1)_{v_n, e_1}\\ [v_1:e_2] \cdot (\mathcal{F}_1)_{v_1, e_2} & [v_2:e_2] \cdot (\mathcal{F}_1)_{v_2, e_2} & \cdots & [v_n:e_2] \cdot (\mathcal{F}_1)_{v_n, e_2}\\ \vdots & \vdots & \ddots & \vdots \\ [v_1:e_m] \cdot (\mathcal{F}_1)_{v_1, e_m} & [v_2:e_m] \cdot (\mathcal{F}_1)_{v_2, e_m} & \cdots & [v_n:e_m] \cdot (\mathcal{F}_1)_{v_n, e_m}\\ [v_1:e_{m+1}] \cdot (\mathcal{F}_2)_{v_1, e_{m+1}} & [v_2:e_{m+1}] \cdot (\mathcal{F}_2)_{v_2, e_{m+1}} & \cdots & [v_n:e_{m+1}] \cdot (\mathcal{F}_2)_{v_n, e_{m+1}}\\ [v_1:e_{m+2}] \cdot (\mathcal{F}_2)_{v_1, e_{m+2}} & [v_2:e_{m+2}] \cdot (\mathcal{F}_2)_{v_2, e_{m+2}} & \cdots & [v_n:e_{m+2}] \cdot (\mathcal{F}_2)_{v_n, e_{m+2}}\\ \vdots & \vdots & \ddots & \vdots \\ [v_1:e_{m+d}] \cdot (\mathcal{F}_2)_{v_1, e_{m+d}} & [v_2:e_{m+d}] \cdot (\mathcal{F}_2)_{v_2, e_{m+d}} & \cdots & [v_n:e_{m+d}] \cdot (\mathcal{F}_2)_{v_n, e_{m+d}}\\ \end{bmatrix}, \end{equation*} where the submatrix of the first $m$ rows of $\mathbf{C}_2$ is the $0$-th coboundary matrix of the sheaf $\mathcal{F}_1: (G_1,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$, denoted by $\mathbf{C}_1$. 
In particular, ${\rm rank}(\mathbf{C}_1) \leq {\rm rank}(\mathbf{C}_2)$. By Proposition \ref{Proposition: Dimension of the global section space of an anisotropic sheaf}, $\dim_\mathbb{R} \ker(L_{\calF_2}) \leq \dim_\mathbb{R} \ker(L_{\calF_1})$. \end{proof} \paragraph{Dimension Analysis of Global Section Spaces} In the second part of this section, we examine the dimension of the global section space of the proposed anisotropic sheaf model. Generally, by selecting an appropriate cutoff distance $R_c$ (see Equation~\eqref{Eq. ANM energy function}), the associated ANM Hessian matrix typically exhibits six zero eigenvalues, corresponding to the trivial eigenmodes of translation and rotation for the entire biomolecular complex (Figure \ref{fig: Six non-zero eigenmodes}). However, if $R_c$ is not sufficiently large, the number of zero-eigenvalue eigenmodes may significantly exceed six, complicating the ANM analysis. To analyze the dimension of the global section space of an anisotropic sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ with non-zero weight values $w_{ij}$ (refer to Equation \eqref{Eq. Anisotropic sheaf definition}), we focus on the case where the anisotropic sheaf is of the form: \begin{equation} \label{Eq. Anisotropic sheaf definition-form 2} \mathcal{F}_{v_i, [v_i,v_j]} = \begin{bmatrix} x_j^\circ-x_i^\circ & y_j^\circ-y_i^\circ & z_j^\circ-z_i^\circ \\ \end{bmatrix} = \mathcal{F}_{v_j, [v_i,v_j]}. \end{equation} That is, we set $w_{ij} := 1$ for every edge $[v_i,v_j] \in E$. In particular, since each row of the induced coboundary matrix $\mathbf{C}$ only involves the two $1 \times 3$ matrices $\mathcal{F}_{v_i,[v_i,v_j]}$ and $\mathcal{F}_{v_j,[v_i,v_j]}$, dividing this row by the scalar $w_{ij}$ does not change the rank of the matrix.
That is, to investigate the dimension of the global section space of an anisotropic sheaf with non-zero edge weights, it is sufficient to study the anisotropic sheaf of the form in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}. In particular, the linear relation depicted in \eqref{Eq. ij vector spanned by 1j and 1i} becomes \begin{equation} \label{Eq. ij vector spanned by 1j and 1i-simple} \begin{split} \mathcal{F}_{v_i, [v_i,v_j]} &= \mathcal{F}_{v_1, [v_1,v_j]} - \mathcal{F}_{v_1, [v_1,v_i]} \in {\rm span}_\mathbb{R} \{ \mathcal{F}_{v_1, [v_1,v_j]}, \mathcal{F}_{v_1, [v_1,v_i]} \}. \end{split} \end{equation} Evidently, by verifying the local consistency of the local sections on the stalk spaces of vertices in the underlying graph, sheaves defined as in Equations~\eqref{Eq. Anisotropic sheaf definition} and \eqref{Eq. Anisotropic sheaf definition-form 2} share the same global section space. In particular, the associated sheaf Laplacians have identical nullity. \begin{theorem}\label{Theorem: weighted and non-weighted anisotropic sheaves} Let $G = (V, E)$ be an undirected, simple, and finite graph. Define an anisotropic sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ as in Equation~\eqref{Eq. Anisotropic sheaf definition}, with non-zero weights $w_{ij}$. For every edge $[v_i, v_j] \in E$ with endpoints $v_i, v_j \in V$, the sheaf map is given by: \begin{equation*} \mathcal{F}_{v_i, [v_i, v_j]} = w_{ij} \cdot \begin{bmatrix} x_j^\circ - x_i^\circ & y_j^\circ - y_i^\circ & z_j^\circ - z_i^\circ \\ \end{bmatrix} = \mathcal{F}_{v_j, [v_i, v_j]}. \end{equation*} On the other hand, let $\mathcal{G}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined as in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}. Then, $\mathcal{F}$ and $\mathcal{G}$ have the same global section space, i.e., $H^0(G, \mathcal{F}) = H^0(G, \mathcal{G})$. 
\end{theorem} \begin{proof} Let $v_i$ and $v_j$ be two vertices in $G$, forming an edge $[v_i, v_j] \in E$. Let $\mathbf{x}_{v_i} \in \mathcal{F}_{v_i} = \mathbb{R}^3 = \mathcal{G}_{v_i}$ and $\mathbf{x}_{v_j} \in \mathcal{F}_{v_j} = \mathbb{R}^3 = \mathcal{G}_{v_j}$ be local sections on the stalk spaces on $v_i$ and $v_j$, respectively. Because $w_{ij} \neq 0$, the equation $\mathcal{F}_{v_i,[v_i,v_j]}(\mathbf{x}_{v_i}) = \mathcal{F}_{v_j,[v_i,v_j]}(\mathbf{x}_{v_j})$ holds if and only if $\mathcal{G}_{v_i,[v_i,v_j]}(\mathbf{x}_{v_i}) = \mathcal{G}_{v_j,[v_i,v_j]}(\mathbf{x}_{v_j})$. Thus, $\mathcal{F}$ and $\mathcal{G}$ have the same global section space, i.e., $H^0(G, \mathcal{F}) = H^0(G, \mathcal{G})$. Furthermore, since the global sections determine the null space of the associated sheaf Laplacian, both sheaves also share the same nullity. This completes the proof. \end{proof} \begin{remark} In applications, the weight values $w_{ij}$ for the edges are set as $\gamma^{1/2}/s_{ij}^{\circ}$ (refer to Equation \eqref{Eq. small Hessian matrix} and Theorem \ref{Theorem: Main result 1}). Since the spring constant $\gamma$ is generally non-zero, anisotropic sheaves with non-zero weight values align with the assumptions used in ANM analysis. \end{remark} Using the anisotropic sheaf model, the second main result of this section provides a sheaf-based perspective for proving the existence of underlying graphs that model a given atomic system, ensuring that the number of trivial eigenmodes is exactly six. Specifically, as demonstrated in Theorem \ref{Theorem: Main result 3-}, for an ANM Hessian matrix constructed on the complete graph of atoms in general position, the number of trivial modes is precisely six. Moreover, by Proposition \ref{Proposition: Contravariant relation of global section spaces}, adding edges to the underlying graph of the atomic system never increases the number of trivial modes.
Consequently, it follows from Proposition \ref{Proposition: Contravariant relation of global section spaces} that there exists a graph with the minimal number of edges that ensures $\dim_{\mathbb{R}} \ker(\mathbf{H}_{\rm ANM}) = 6$. This result is formalized in the following theorem. \begin{theorem}\label{Theorem: Main result 3} Let $V \subseteq \mathbb{R}^3$ be a finite point cloud in general position whose cardinality $|V|$ is larger than $3$. Then there is a graph $G = (V, E)$ whose edges have endpoints in $V$ that satisfies the following two properties: \begin{itemize} \item[\rm (a)] For the anisotropic sheaf $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ defined as in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}, $\dim_{\mathbb{R}} H^0(G, \mathcal{F}) = 6$. \item[\rm (b)] Let $G' = (V,E')$ be the graph obtained by removing an arbitrary edge in $E$, and let $\mathcal{G}: (G', \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined as in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}; then $\dim_{\mathbb{R}} H^0(G', \mathcal{G}) > 6$. \end{itemize} \end{theorem} For a point cloud $V \subseteq \mathbb{R}^3$ in general position, a graph $G = (V,E)$ whose edges have endpoints in $V$ and that satisfies properties (a) and (b) is referred to as a \textit{minimal graph} for the desired nullity of the induced Hessian matrix. \section{Graph Construction for Anisotropic Sheaves} \label{Section: Graph Construction for Anisotropic Sheaves} In practical applications, modeling a network from a point cloud with a complete graph is often inefficient due to the excessive number of edges involved. In the following discussion, we propose an alternative approach by constructing an anisotropic sheaf $\mathcal{F}$ on a graph $G$ using a homogeneous simplicial $3$-complex to represent the point cloud data in general position, such as the ${\rm C}_\alpha$ point cloud extracted from a protein structure.
In particular, by employing 3D Delaunay triangulation to connect the data points~\cite{delaunay1934bulletin}, the resulting anisotropic sheaf consistently maintains the global section space at a dimension of $6$. Compared to complete graphs, this method offers a more streamlined network model that ensures the constructed anisotropic sheaf achieves the minimal dimension of the global section space. \paragraph{Homogeneous simplicial $3$-complex construction} In this paper, we construct the underlying graph of a given point cloud in $\mathbb{R}^3$ by considering simplicial complexes composed of $3$-dimensional tetrahedra. Specifically, we focus on \textit{homogeneous} (or \textit{pure}) simplicial $3$-complexes in $\mathbb{R}^3$, built upon the point cloud to obtain the underlying graph. These complexes play a significant role in computer graphics, particularly in generating network structures from discrete objects embedded in 2D or 3D Euclidean spaces~\cite{thompson1998handbook}. We briefly recall the definition of homogeneous simplicial $3$-complexes below. \begin{def.} \label{Definition: Homogeneous simplicial 3-complex} A \textbf{pure} or \textbf{homogeneous} simplicial $3$-complex $K$ in the $3$-dimensional Euclidean space $\mathbb{R}^3$ is a simplicial complex consisting of geometric simplices in $\mathbb{R}^3$, where every simplex of dimension less than $3$ serves as a face of at least one simplex $\sigma$ in $K$ of dimension exactly $3$. For every $2$-simplex $\tau$ in $K$, the \textbf{degree of upper adjacency}, denoted by $\deg_{K,u}(\tau)$, is defined as the number of $3$-simplices $\sigma$ in $K$ such that $\tau \leq \sigma$. \end{def.} In other words, a homogeneous simplicial $3$-complex $K$ comprises a collection of non-overlapping $3$-simplices, where ``non-overlapping'' means that two distinct $3$-simplices intersect, if at all, only in a shared $q$-face with $0 \leq q < 3$.
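The degree of upper adjacency is straightforward to compute from a list of tetrahedra: for each $2$-face, count how many tetrahedra contain it. The following sketch uses only the Python standard library; the function name \texttt{upper\_adjacency\_degrees} is illustrative and not part of the paper.

```python
from collections import Counter
from itertools import combinations

def upper_adjacency_degrees(tetrahedra):
    # deg_{K,u}(tau) for every 2-simplex tau of a homogeneous simplicial
    # 3-complex K, where K is given by its tetrahedra as 4-tuples of
    # vertex indices.  Each tetrahedron contributes its four 2-faces.
    deg = Counter()
    for tet in tetrahedra:
        for face in combinations(sorted(tet), 3):
            deg[face] += 1
    return deg
```

For two tetrahedra glued along a common triangle, the shared face has degree $2$ while the six remaining faces have degree $1$, matching the dichotomy asserted for homogeneous simplicial $3$-complexes in $\mathbb{R}^3$.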
Notably, to explore the face intersection behaviors between two $3$-simplices within a homogeneous simplicial $3$-complex, we utilize the notation $\sigma \sim_q \tau$ to denote the relation that $\sigma \cap \tau$ is a $q$-simplex with $0 \leq q < 3$. The following two propositions serve as useful tools for investigating the face relations of simplices in $\mathbb{R}^3$. \begin{prop.} \label{Proposition: |K| = |L| with homogeneous K, L, then K=L} Let $K$ be a finite homogeneous simplicial $3$-complex in $\mathbb{R}^3$ and let $L$ be the simplicial complex generated by a collection of $3$-simplices in $K$. Then, $|K| = |L|$ if and only if $K = L$. \end{prop.} \begin{proof} It is clear that $|K| = |L|$ if $K = L$. Therefore, it is sufficient to prove the `only if' direction. Since $L$ is generated by simplices in $K$, we must have $L \subseteq K$. Suppose $K \neq L$, say $\tau \in K \setminus L$. In particular, $\tau \subseteq |K| = |L|$. Because $K$ is a homogeneous simplicial $3$-complex, there is a $3$-simplex $\sigma \in K$ such that $\tau \subseteq \sigma$. Because $\tau \notin L$ and $L$ is a simplicial complex, $\sigma \notin L$. Because $\dim(\sigma) = 3$, the interior of $\sigma$ is non-empty (Proposition \ref{Proposition: Elementary Munkres properties}(b)). Let $\mathbf{x}$ be an interior point in $\sigma$. Because $K$ is a simplicial complex in $\mathbb{R}^3$ and $\dim(\sigma) = 3$, $\sigma$ is the only simplex in $K$ containing $\mathbf{x}$ (Proposition \ref{Proposition: Elementary Munkres properties}(e)). In particular, $\mathbf{x} \notin \rho$ for every $\rho \in L$. This shows that $\mathbf{x} \in |K| \setminus |L|$, which is a contradiction. \end{proof} \begin{prop.}\label{Proposition: Upper adjacency of a 2-simplex} Let $K$ be a homogeneous simplicial $3$-complex in $\mathbb{R}^3$. Then either $\deg_{K,u}(\tau) = 1$ or $\deg_{K,u}(\tau) = 2$ for every $2$-simplex $\tau \in K$. \end{prop.} \begin{proof} Let $\tau \in K$ be a $2$-simplex.
Because $K$ is a homogeneous simplicial $3$-complex, there is a $3$-simplex $\sigma \in K$ such that $\tau \leq \sigma$. In particular, $\deg_{K,u}(\tau) \geq 1$. Let $H$ be the hyperplane spanned by $\tau$, such that $\sigma$ is contained in one of the closed half-spaces separated by $H$. Then, by Proposition \ref{Proposition: Supporting hyperplane of simplex on face}(d), $\deg_{K,u}(\tau) > 1$ only if there is a unique $3$-simplex $\sigma' \in K$ with $\tau \leq \sigma'$ such that $\sigma'$ lies in the opposite closed half-space of $H$, as desired. \end{proof} In particular, building on a finite point cloud in $\mathbb{R}^3$, such as a collection of atom coordinates, we consider certain homogeneous simplicial $3$-complexes, referred to as \textit{admissible simplicial complexes}, which are constructed based on the point cloud. The properties required for selecting admissible simplicial complexes are formally defined below. \begin{def.} \label{Definition: admissible homogeneous 3-complex} Let $V$ be a finite point cloud in $\mathbb{R}^3$. A simplicial complex $K = K(V)$ of simplices in $\mathbb{R}^3$ is said to be \textbf{admissible} for $V$ if it satisfies the following three properties: \begin{itemize} \item[\rm (a)] $V$ is the vertex set of $K$; \item[\rm (b)] $K$ is a homogeneous simplicial $3$-complex; \item[\rm (c)] For every two $3$-simplices $\tau$ and $\rho$ in $K$, there is a sequence $\sigma_1, ..., \sigma_n$ of $3$-simplices in $K$ such that $\sigma_1 = \tau$, $\sigma_n = \rho$, and $\sigma_i \sim_2 \sigma_{i+1}$ for every $i \in \{ 1, ..., n-1 \}$. \end{itemize} \end{def.} Specifically, the $1$-skeleton $G$ of an admissible simplicial complex $K = K(V)$, constructed from a given point cloud $V \subseteq \mathbb{R}^3$, is primarily considered as the underlying graph of the ANM model in this study.
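In practice, the $1$-skeleton of such a complex can be extracted directly from a tetrahedral mesh. The sketch below builds the 3D Delaunay triangulation with SciPy and collects the edges of its tetrahedra; the reliance on \texttt{scipy.spatial.Delaunay} and the function name \texttt{delaunay\_skeleton} are implementation assumptions for illustration, not prescriptions of the paper.

```python
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

def delaunay_skeleton(points):
    # 1-skeleton of the 3D Delaunay triangulation: each row of
    # tri.simplices lists the four vertex indices of one tetrahedron,
    # and every pair of them spans an edge of the underlying graph.
    tri = Delaunay(np.asarray(points, dtype=float))
    edges = set()
    for tet in tri.simplices:
        for a, b in combinations(sorted(int(v) for v in tet), 2):
            edges.add((a, b))
    return sorted(edges)
```

For a point cloud in general position, the resulting graph is connected and is typically far sparser than the complete graph on the same vertices.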
In algebraic topology, for any simplicial complex $K$, the $1$-skeleton of $K$ is defined as the subcomplex generated by all its $1$-simplices~\cite{munkres2018elements}. Furthermore, for a homogeneous simplicial $3$-complex $K$ in $\mathbb{R}^3$, the $1$-skeleton is a homogeneous $1$-complex, resulting in a simple, undirected, geometric graph embedded in $\mathbb{R}^3$, with the $0$-simplices of $K$ as its vertices. Notably, property (c) implies that $G$ is a connected graph, and every anisotropic sheaf defined on $G$ (cf. \eqref{Eq. Anisotropic sheaf definition}) has rank $3$ since $K$ is a homogeneous simplicial $3$-complex. In particular, the following proposition shows that the homogeneous simplicial $3$-complex produced through the 3D Delaunay triangulation of any point cloud $V \subseteq \mathbb{R}^3$ in general position is admissible in the sense of Definition \ref{Definition: admissible homogeneous 3-complex}. \begin{prop.} \label{Proposition: DL is a 3D Tetrahedral mesh} Let $V \subseteq \mathbb{R}^3$ be a finite point cloud in general position. Let $K = K(V)$ be the 3D Delaunay triangulation of $V$. Then, $K$ is admissible, i.e., $K$ satisfies (a), (b), and (c) in Definition \ref{Definition: admissible homogeneous 3-complex}. \end{prop.} \begin{proof} Since a 3D Delaunay triangulation is a homogeneous simplicial $3$-complex with the point cloud as its vertex set, properties (a) and (b) hold. It remains to prove that $K$ satisfies property (c). Let $|K| \subseteq \mathbb{R}^3$ be the geometric realization of $K$. By the definition of 3D Delaunay triangulation, $|K|$ is the convex hull of $V$.
Pick an arbitrary $3$-simplex $\sigma_0 \in K$, and define $L \subseteq K$ as the subcomplex generated by the collection $\{ \sigma_0 \} \cup C$, where $C$ is the collection of $3$-simplices $\sigma \in K$ such that there exists a sequence of $3$-simplices $\sigma_1, \dots, \sigma_n$ in $K$ with $\sigma_n = \sigma$ and $\sigma_{i} \sim_2 \sigma_{i+1}$ for $i = 0, 1, \dots, n-1$. In particular, $L$ is also a homogeneous simplicial $3$-complex. We claim that $|L| = |K|$, which shows that $L = K$ by Proposition \ref{Proposition: |K| = |L| with homogeneous K, L, then K=L}. Suppose $|L| \neq |K|$. Then, there exists a point $\mathbf{x}_0 \in |K| \setminus |L|$. We claim that there is a $3$-simplex $\sigma \in K$ and a hyperplane $H$ spanned by a $2$-face $\tau \leq \sigma$, with $\deg_{L,u}(\tau) = 1$ (see Definition \ref{Definition: Homogeneous simplicial 3-complex} and Proposition \ref{Proposition: Upper adjacency of a 2-simplex}), such that $H$ separates $\mathbf{x}_0$ and $\sigma$ into disjoint open and closed half-spaces. To prove this claim by induction, we proceed as follows. Pick an arbitrary $3$-simplex $\sigma \in L$. By Corollary \ref{Corollary: A point outside the sigma and supporting hyperplane thm}, there is a $2$-face $\tau \leq \sigma$ such that the spanned hyperplane $H$ separates $\mathbf{x}_0$ and $\sigma$ into disjoint open and closed half-spaces. We are done if $\deg_{L,u}(\tau) = 1$. Otherwise, $\deg_{L,u}(\tau) = 2$ by Proposition \ref{Proposition: Upper adjacency of a 2-simplex}. Therefore, there is a $3$-simplex $\sigma' \in L$ such that $\tau = \sigma \cap \sigma'$ and $\{ \mathbf{x}_0 \} \cup \sigma'$ lies in the opposite closed half-space of $H$. By the same arguments, there is a $2$-face $\tau'$ of $\sigma'$ such that the hyperplane $H'$ spanned by $\tau'$ separates $\mathbf{x}_0$ and $\sigma'$. In particular, we have $\deg_{L,u}(\tau') = 1$ or $2$.
This process terminates because $L$ is a finite simplicial complex, and the claim is proved by induction. Let $\sigma \in K$, along with $H$ and $\tau$, be selected with the properties described in the previous paragraph. Say $\tau = \conv(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$ for some $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \in V$. Because $|K|$ is convex, the $3$-simplex $\omega := \conv(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$ is a closed subset of $|K|$. In particular, if $\mathbf{z} \in \reInt(\tau)$, then the line segment $\conv(\mathbf{z}, \mathbf{x}_0)$ is a closed subset of $|K|$. Because $\deg_{L,u}(\tau) = 1$, there is a $3$-simplex $\sigma' \in K \setminus L$ such that $\mathbf{z} \in \sigma \cap \sigma'$. Because $K$ is a simplicial complex, this forces $\tau = \sigma \cap \sigma'$; in particular, $\sigma \sim_2 \sigma'$. However, by the definition of $L$, this implies that $\sigma' \in L$ since $\sigma \in L$. This is a contradiction, and we conclude that $|L|$ and $|K|$ must be equal, as desired. \end{proof} The $1$\textit{-skeleton} $G$ of an abstract simplicial complex $K$ is the collection of $0$- and $1$-simplices within $K$, forming a simplicial complex of dimension $1$, which can be viewed as a graph $G = (V, E)$ consisting of vertices and edges. Notably, if $K$ is an admissible simplicial complex as in Definition \ref{Definition: admissible homogeneous 3-complex} and $G$ is the $1$-skeleton of $K$, then property (c) implies that $G$ is a connected geometric graph embedded in $\mathbb{R}^3$. The following theorem demonstrates that every $1$-skeleton $G$ derived in this manner supports anisotropic sheaves that achieve the minimum dimension of the global section space. \begin{theorem}\label{Theorem: Main result 4-1} Let $V = \{ v_1, v_2, ..., v_n \} \subseteq \mathbb{R}^3$ be a finite point cloud in general position, and let $K = K(V)$ be an admissible simplicial complex for $V$, as defined in Definition \ref{Definition: admissible homogeneous 3-complex}.
If $G$ is the $1$-skeleton of $K$, and $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ is an anisotropic sheaf with $w_{ij} \neq 0$ whenever $i < j$, then $\dim_\mathbb{R} \Gamma(G,\mathcal{F}) = 6$. \end{theorem} \begin{proof} Let $\mathbf{C}: C^0(G, \mathcal{F}) \rightarrow C^1(G, \mathcal{F})$ be the associated $0$-th coboundary matrix of the sheaf $\mathcal{F}$. For every non-empty collection $S$ of $3$-simplices in $K$, let $N(S)$ denote the set $N(S) = \{ \tau \in K_{(3)} \ | \ \sigma \sim_2 \tau \text{ for some } \sigma \in S\}$. We begin with the case of the subcomplex generated by a single $3$-simplex and prove the theorem by induction. First, pick an arbitrary $3$-simplex $\sigma_1$ in $K$ and define $L_1$ as the subcomplex generated by $S_1 = \{ \sigma_1 \}$, i.e., $L_1 = \{ \tau \in K \ | \ \tau \leq \sigma_1 \}$. Let $G_1 = (V_1, E_1)$ denote the $1$-skeleton of $L_1$ and let $\mathcal{F}_1 = \mathcal{F}|_{G_1}$. By extending with zeros, each row of the $0$-th coboundary matrix of the sheaf $\mathcal{F}_1$, denoted $\mathbf{C}_{(1)}: C^0(G_1, \mathcal{F}_1) \rightarrow C^1(G_1, \mathcal{F}_1)$, can be identified with a row of the $0$-th coboundary matrix $\mathbf{C}: C^0(G, \mathcal{F}) \rightarrow C^1(G, \mathcal{F})$ of the sheaf $\mathcal{F}$. Because $\sigma_1$ is a non-degenerate $3$-simplex, the rank of $\mathbf{C}_{(1)}$ equals $6$ by Example \ref{Example: K4 example}. In particular, \begin{equation*} {\rm rank}(\mathbf{C}_{(1)}) = 6 = 3 \cdot 4 - 6 = 3 \cdot |V_1| - 6. \end{equation*} Second, let $L_2$ be the simplicial complex generated by \( S_1 \cup N(S_1) \), i.e., $L_2 = \{ \tau \in K \ | \ \tau \leq \sigma \text{ for some } \sigma \in S_1 \cup N(S_1) \}$; then the $1$-skeleton $G_2 = (V_2, E_2)$ of $L_2$, the restriction sheaf $\mathcal{F}_2 := \mathcal{F}|_{G_2}$, and the $0$-th coboundary matrix $\mathbf{C}_{(2)}: C^0(G_2, \mathcal{F}_2) \rightarrow C^1(G_2, \mathcal{F}_2)$ are defined.
In particular, we have $L_1 \subseteq L_2$, $V_1 \subseteq V_2$, $E_1 \subseteq E_2$, and $\mathcal{F}_2|_{G_1} = \mathcal{F}_1$. Again, $\mathbf{C}_{(2)}$ is a submatrix of $\mathbf{C}$, where the corresponding rows are extended to match those in $\mathbf{C}$ by appending zeros. By the definition of $L_2$, each vertex in $V_2 \setminus V_1$ must be a vertex of some member of $N(S_1)$, and hence has at least $3$ edges in $E_2$ that connect it to vertices in $V_1$. Since every tetrahedron in $N(S_1)$ is non-degenerate, each vertex in $V_2 \setminus V_1$ contributes at least three rows of $\mathbf{C}_{(2)}$ that are linearly independent of the rows of $\mathbf{C}_{(1)}$. Then, \begin{equation*} {\rm rank}(\mathbf{C}_{(2)}) \geq {\rm rank}(\mathbf{C}_{(1)}) + 3 \cdot | V_2 \setminus V_1 | = 3 \cdot |V_1| - 6 + 3 \cdot | V_2 \setminus V_1 | = 3 \cdot |V_2| - 6. \end{equation*} On the other hand, since ${\rm rank}(\mathbf{C}_{(2)}) = 3 \cdot |V_2| - \dim_{\mathbb{R}}(\ker(L_{\mathcal{F}_2})) \leq 3 \cdot |V_2| - 6$ by Theorem \ref{Theorem: Main result 2}, we obtain \begin{equation*} {\rm rank}(\mathbf{C}_{(2)}) = 3 \cdot |V_2| - 6. \end{equation*} If $V_2 = V_1$, then ${\rm rank}(\mathbf{C}_{(2)}) = {\rm rank}(\mathbf{C}_{(1)})$ since $3 \cdot |V_2| - 6 = 3 \cdot |V_1| - 6 = {\rm rank}(\mathbf{C}_{(1)}) \leq {\rm rank}(\mathbf{C}_{(2)}) \leq 3 \cdot |V_2| - 6$ by Theorem \ref{Theorem: Main result 2}. Similarly, we define $S_2$ as the collection of all $3$-simplices in $L_2$, and the same process can be continued by defining $L_3$ as the simplicial complex generated by $S_2 \cup N(S_2)$.
By continuing this process, we obtain the following filtration of subcomplexes of $K$: \begin{equation*} L_1 \subseteq L_2 \subseteq L_3 \subseteq \cdots \end{equation*} with $1$-skeletons $G_{n} = (V_n, E_n)$, sets $S_n$ and $N(S_n)$, sheaves $\mathcal{F}_n: (G_n, \leq) \rightarrow \textup{\textsf{Vect}}_{\mathbb{R}}$, and $0$-th coboundary matrices $\mathbf{C}_{(n)}: C^0(G_n, \mathcal{F}_n) \rightarrow C^1(G_n, \mathcal{F}_n)$ for $n = 1, 2, ...$, satisfying \begin{equation*} {\rm rank}(\mathbf{C}_{(n)}) = 3 \cdot |V_n| - 6. \end{equation*} Because $V$ is finite, this process must terminate. Moreover, every two tetrahedra $\tau$ and $\rho$ in $K$ admit a sequence $\tau_1, ..., \tau_m$ of $3$-simplices such that $\tau_1 = \tau$, $\tau_m = \rho$, and $\tau_i \sim_2 \tau_{i+1}$ for each $i$, since $K$ is admissible according to Definition \ref{Definition: admissible homogeneous 3-complex}. Eventually, this process encompasses all the $3$-simplices in $K$ and yields an $N \in \mathbb{N}$ such that $L_N = K$, $G_N = G$, and $\mathbf{C}_{(N)} = \mathbf{C}$. By Proposition \ref{Proposition: Dimension of the global section space of an anisotropic sheaf}, we conclude that $\dim_\mathbb{R} \ker(L_{\mathcal{F}}) = 3 \cdot |V_N| - {\rm rank}(\mathbf{C}_{(N)}) = 3 \cdot |V| - {\rm rank}(\mathbf{C}) = 3 \cdot |V| - (3 \cdot |V| - 6) = 6$. \end{proof} The following corollary follows from Proposition \ref{Proposition: DL is a 3D Tetrahedral mesh} and Theorem \ref{Theorem: Main result 4-1}. Specifically, for any 3D atomic system in general position, the $1$-skeleton of the 3D Delaunay triangulation of the system guarantees that the associated ANM Hessian matrix has exactly six eigenmodes with zero eigenvalues. \begin{coro}\label{Corollary: Main result 4-2} Let $V = \{ v_1, v_2, ..., v_n \} \subseteq \mathbb{R}^3$ be a finite point cloud in general position, and let $K = K(V)$ be the 3D Delaunay triangulation of $V$.
If $G$ is the $1$-skeleton of $K$, and $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ is an anisotropic sheaf with $w_{ij} \neq 0$ whenever $i < j$, then $\dim_\mathbb{R} \Gamma(G,\mathcal{F}) = 6$. \end{coro} \paragraph{Minimal Graph Construction} At the end of this section, we propose a systematic method for constructing the $3$-simplices and their faces of an admissible simplicial complex, as defined in Definition \ref{Definition: admissible homogeneous 3-complex}. The $1$-skeleton of this simplicial complex induces the desired Hessian matrix with nullity exactly $6$. Furthermore, this graph is minimal in the sense of Theorem \ref{Theorem: Main result 3}; that is, removing any edge from the constructed graph would result in a Hessian matrix with nullity larger than $6$. The construction is presented in the following algorithm. \begin{algorithm}[H] \caption{Algorithm for the minimal graph construction}\label{Algorithm: Main result 5-2} \begin{algorithmic}[1] \Require A finite point cloud $V \subseteq \mathbb{R}^3$ in general position ($|V| \geq 4$). \Ensure A minimal graph $G = (V, E)$ that satisfies properties (a) and (b) in Theorem \ref{Theorem: Main result 3}. \State Pick an arbitrary point in $V$, say $\mathbf{v}_1$. \State Pick the three points nearest to $\mathbf{v}_1$, say $\mathbf{v}_2$, $\mathbf{v}_3$, $\mathbf{v}_4$. \State Let $K_1$ be the homogeneous simplicial complex generated by ${\rm conv}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4)$. \For{$i = 5$ to $|V|$} \State Pick a point $\mathbf{v}_i \in V \setminus \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_{i-1}\}$ that has the smallest distance to $|K_{i-1}|$.
\State Pick a 2-simplex $\sigma$ in $K_{i-1}$, say $\sigma = \mathrm{conv}(\mathbf{v}_{j_1}, \mathbf{v}_{j_2}, \mathbf{v}_{j_3})$, such that \begin{equation}\label{Equation: Crucial step in the Algorithm} \mathrm{conv}(\mathbf{v}_{j_1}, \mathbf{v}_{j_2}, \mathbf{v}_{j_3}, \mathbf{v}_i) \cap V = \{ \mathbf{v}_{j_1}, \mathbf{v}_{j_2}, \mathbf{v}_{j_3}, \mathbf{v}_i \}. \end{equation} \State Form $K_i$ from $K_{i-1}$ by adding $\mathbf{v}_i$ and the edges $\{\mathbf{v}_i, \mathbf{v}_{j_1}\}$, $\{\mathbf{v}_i, \mathbf{v}_{j_2}\}$, $\{\mathbf{v}_i, \mathbf{v}_{j_3}\}$. \EndFor \State Let $K_N$ denote the final simplicial complex after all vertices in $V$ have been processed. \State Let $G = (V, E)$ be the $1$-skeleton of the simplicial complex $K_N$. \Return $G$. \end{algorithmic} \end{algorithm} For step 3 in Algorithm \ref{Algorithm: Main result 5-2}, ${\rm conv}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4) \cap V = \{ \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4 \}$, since $\mathbf{v}_2$, $\mathbf{v}_3$, and $\mathbf{v}_4 \in V$ are the points nearest to $\mathbf{v}_1$. For step 6, such a $2$-simplex exists by the argument, based on Corollary \ref{Corollary: A point outside the sigma and supporting hyperplane thm}, that establishes the existence of a suitable $2$-face of a homogeneous simplicial complex. Furthermore, Equation~\eqref{Equation: Crucial step in the Algorithm} holds due to the minimal distance property of $\mathbf{v}_i$. By the construction, the resulting $K_N$ is an admissible simplicial complex as defined in Definition \ref{Definition: admissible homogeneous 3-complex}. Let $\mathcal{F}: (G, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined as in~\eqref{Eq. Anisotropic sheaf definition-form 2}, i.e., \begin{equation*} \mathcal{F}_{v_i, [v_i,v_j]} = \begin{bmatrix} x_j^\circ - x_i^\circ & y_j^\circ - y_i^\circ & z_j^\circ - z_i^\circ \end{bmatrix} = \mathcal{F}_{v_j, [v_i,v_j]}.
\end{equation*} By Theorem \ref{Theorem: Main result 4-1}, the global section space of the anisotropic sheaf $\mathcal{F}$ has dimension $6$, i.e., the nullity of the induced ANM Hessian matrix is exactly $6$. Finally, for the minimality of the generated $1$-skeleton, note that there are exactly $6 + 3 \cdot (|V| - 4) = 3 \cdot |V| - 6$ edges. Let $G = (V, E)$ be the output graph, and let $G' = (V, E')$ be obtained by removing an edge from $E$. Let $\mathcal{F}': (G', \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the induced anisotropic sheaf. The coboundary matrix $\mathbf{C}': C^0(G', \mathcal{F}') \rightarrow C^1(G', \mathcal{F}')$ then has dimensions $|E'| \times 3|V| = (3|V| - 7) \times 3|V|$. Consequently: \begin{equation*} \dim_{\mathbb{R}} \ker(\mathbf{C}') = 3|V| - \mathrm{rank}(\mathbf{C}') \geq 3|V| - (3|V| - 7) = 7. \end{equation*} This confirms that removing any edge increases the nullity of the Hessian matrix beyond $6$, ensuring the minimality of the constructed graph. \bibliography{refs_2} \bibliographystyle{abbrv} \appendix \section{Anisotropic Sheaves on Complete Graphs} \label{Section: Anisotropic Sheaves on Complete Graphs} To prove that any anisotropic sheaf $\mathcal{F}: (K_n, \leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ of rank $3$ over a complete graph $K_n$ ($n \geq 3$) must satisfy $\dim_\mathbb{R} \Gamma(K_n, \mathcal{F}) = 6$, we begin by examining the cases $n = 4$, $5$, and $6$. \begin{exam.} \label{Example: K4 example} Let $G = K_4$ be the complete graph with vertex set $V = \{ 1,2,3,4 \}$ equipped with 3D coordinates, and let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined in Equation~\eqref{Eq.
Anisotropic sheaf definition-form 2}, then the coboundary matrix $\mathbf{C}: C^0(G,\mathcal{F}) \rightarrow C^1(G,\mathcal{F})$ is a $6 \times 12$ matrix represented by \begin{equation*} \mathbf{C} = \left[\begin{array}{cccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{2, [1,2]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{3, [1,3]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{4, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{3, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{4, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{4, [3,4]} \\ \end{array} \right] = \left[\begin{array}{cccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right]. \end{equation*} Then, ${\rm rank}(\mathbf{C}) \leq 6$ and hence $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 3 \cdot 4 - {\rm rank}(\mathbf{C}) \geq 6$. Furthermore, by applying column operations with block matrices, the coboundary matrix becomes \begin{equation} \label{Eq. 
Column and Row operations for K4-1} \begin{split} \mathbf{C} &= \left[\begin{array}{cccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right] \longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right] \\ &\longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right] \longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathcal{F}_{1, [1,4]} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,4]} & 
\mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right] = \widetilde{\mathbf{C}}. \end{split} \end{equation} In particular, ${\rm rank}(\mathbf{C}) = {\rm rank}(\widetilde{\mathbf{C}})$. Moreover, by investigating the rank of the matrix $\widetilde{\mathbf{C}}$, the following assertions hold: \begin{itemize} \item[\rm (a)] If ${\rm rank}(\mathcal{F}) = 1$, then ${\rm rank}(\mathbf{C}) = 3$ and $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 9$. \item[\rm (b)] If ${\rm rank}(\mathcal{F}) = 2$, then ${\rm rank}(\mathbf{C}) = 5$ and $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 7$. \item[\rm (c)] If ${\rm rank}(\mathcal{F}) = 3$, then ${\rm rank}(\mathbf{C}) = 6$ and $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 6$. \end{itemize} \end{exam.} \begin{proof} Suppose the assumption of (a) holds. Without loss of generality, we may assume that all the vectors $\mathcal{F}_{i,[i,j]}$ are generated by $\mathcal{F}_{1,[1,2]} \neq [ 0 \ 0 \ 0 ]$. Specifically, say $\mathcal{F}_{1,[1,3]} = \lambda \cdot \mathcal{F}_{1,[1,2]}$ and $\mathcal{F}_{1,[1,4]} = \mu \cdot \mathcal{F}_{1,[1,2]}$ for some $\lambda, \mu \in \mathbb{R}$. In particular, by column operations, \begin{equation*} \mathbf{C} \longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \lambda \cdot \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mu \cdot \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & -(\lambda - 1) \cdot \mathcal{F}_{1, [1,2]} & (\lambda - 1) \cdot \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -(\mu - 1) \cdot \mathcal{F}_{1, [1,2]} & \mathbf{0} & (\mu - 1) \cdot \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -(\mu - \lambda) \cdot \mathcal{F}_{1, [1,2]} & (\mu - \lambda) \cdot \mathcal{F}_{1, [1,2]} \\ \end{array} \right].
\end{equation*} Evidently, the first three rows of the right-hand-side matrix are linearly independent and generate all the rows. This shows that ${\rm rank}(\mathbf{C}) = 3$. By Proposition \ref{Proposition: Dimension of the global section space of an anisotropic sheaf}, $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 3 \times 4 - 3 = 9$. Suppose the assumption of (b) holds. Without loss of generality, we may assume that the collection $\{ \mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]} \}$ generates all vectors $\mathcal{F}_{i,[i,j]}$ with $i < j$. Say $\mathcal{F}_{1,[1,4]} = \alpha \mathcal{F}_{1,[1,2]} + \beta \mathcal{F}_{1,[1,3]}$ for some $\alpha, \beta \in \mathbb{R}$. As in the previous case, applying column and row operations to $\mathbf{C}$ yields the following: \begin{equation*} \begin{split} \mathbf{C} &\longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right] \longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \alpha \mathcal{F}_{1,[1,2]} + \beta \mathcal{F}_{1,[1,3]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} \\ \end{array} \right] \\ &\longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} &
\mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & -\alpha \cdot (\alpha \mathcal{F}_{1,[1,2]} + \beta \mathcal{F}_{1,[1,3]}) & -\beta\cdot (\alpha \mathcal{F}_{1,[1,2]} + \beta \mathcal{F}_{1,[1,3]}) & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} \\ \end{array} \right] \\ &\longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & -\alpha\beta\mathcal{F}_{1,[1,3]} & -\alpha\beta\mathcal{F}_{1,[1,2]} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} \\ \end{array} \right] \longrightarrow \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} \\ \end{array} \right]. \\ \end{split} \end{equation*} Because $\mathcal{F}_{1, [1,2]}$ and $\mathcal{F}_{1, [1,3]}$ are linearly independent row vectors, the matrix $\mathbf{C}$ has rank $5$. In particular, the dimension of the kernel of $L_\mathcal{F}$ is $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 3 \cdot 4 - 5 = 7$. Finally, suppose the assumption of (c) holds. 
Because the rank of $\mathcal{F}$ equals $3$, the sets $\{ \mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]} \}$, $\{ \mathcal{F}_{2, [2,3]}, \mathcal{F}_{2, [2,4]} \}$, and $\{\mathcal{F}_{3, [3,4]} \}$ depicted in~\eqref{Eq. Column and Row operations for K4-1} are sets of linearly independent row vectors. This shows that ${\rm rank}(\mathbf{C}) = {\rm rank}(\widetilde{\mathbf{C}}) = 6$ and $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 6$. \end{proof} \begin{exam.} \label{Example: K5 example} Let $G = K_5$ be the complete graph with vertex set $V = \{ 1,2,3,4,5\}$ equipped with 3D coordinates, and let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}, then the coboundary matrix $\mathbf{C}: C^0(G,\mathcal{F}) \rightarrow C^1(G,\mathcal{F})$ is a $10 \times 15$ matrix represented by \begin{equation*} \mathbf{C} = \left[\begin{array}{ccccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,5]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,5]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,5]} & \mathbf{0} & \mathcal{F}_{3, [3,5]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{4, [4,5]} & \mathcal{F}_{4, [4,5]} \\ \end{array} \right].
\end{equation*} In particular, ${\rm rank}(\mathbf{C}) \leq 10$ and hence $\dim_\mathbb{R}(\ker(L_\mathcal{F})) = 3 \cdot 5 - {\rm rank}(\mathbf{C}) \geq 5$. Furthermore, the matrix $\mathbf{C}$ contains the following $6 \times 12$ matrix $\mathbf{D}$ that corresponds to the $0$-th coboundary matrix of the restriction sheaf $\mathcal{F}|_H$, where $H$ is the complete graph on vertices $1, 2, 3$, and $4$: \begin{equation*} \mathbf{D} = \left[\begin{array}{cccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right]. \end{equation*} More precisely, the matrix $\mathbf{D}$ is obtained by collecting the $1$-st, $2$-nd, $3$-rd, $5$-th, $6$-th, and $8$-th rows of the matrix $\mathbf{C}$ and removing the last block column of $\mathbf{C}$. Furthermore, suppose $\mathcal{F}_{1,[1,5]} = \alpha_2 \mathcal{F}_{1,[1,2]} + \alpha_3 \mathcal{F}_{1,[1,3]} + \alpha_4 \mathcal{F}_{1,[1,4]}$ for some real numbers $\alpha_2, \alpha_3, \alpha_4 \in \mathbb{R}$; then ${\rm rank}(\mathbf{C}) \leq {\rm rank}(\mathbf{D}) + 3$, with equality if $\mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]}$ are linearly independent.
\end{exam.} \begin{proof} By applying column operations with block matrices, the coboundary matrix $\mathbf{C}$ becomes \begin{equation*} \widetilde{\mathbf{C}} = \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,5]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,5]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,5]} & \mathbf{0} & \mathcal{F}_{3, [3,5]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{4, [4,5]} & \mathcal{F}_{4, [4,5]} \\ \end{array} \right] \end{equation*} with ${\rm rank}(\mathbf{C}) = {\rm rank}(\widetilde{\mathbf{C}})$. These column operations also affect the matrix $\mathbf{D}$ as \begin{equation*} \widetilde{\mathbf{D}} = \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} \\ \end{array} \right]. \end{equation*} Similarly, we have ${\rm rank}(\mathbf{D}) = {\rm rank}(\widetilde{\mathbf{D}})$. In particular, based on the linear relation described in~\eqref{Eq.
ij vector spanned by 1j and 1i-simple}, the matrix $\widetilde{\mathbf{C}}$ can be written as \begin{equation*} \begin{split} \widetilde{\mathbf{C}} &= \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -(\mathcal{F}_{1, [1,3]} - \mathcal{F}_{1, [1,2]}) & \mathcal{F}_{1, [1,3]} - \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -(\mathcal{F}_{1, [1,4]} - \mathcal{F}_{1, [1,2]}) & \mathbf{0} & \mathcal{F}_{1, [1,4]} - \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -(\mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,2]}) & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -(\mathcal{F}_{1, [1,4]} - \mathcal{F}_{1, [1,3]}) & \mathcal{F}_{1, [1,4]} - \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -(\mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,3]}) & \mathbf{0} & \mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -(\mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,4]}) & \mathcal{F}_{1, [1,5]} - \mathcal{F}_{1, [1,4]} \\ \end{array} \right]. \end{split} \end{equation*} Next, we perform row operations to eliminate entries as follows.
\begin{equation*} \begin{split} \widetilde{\mathbf{C}} &\longrightarrow \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & - \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & - \mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & - \mathcal{F}_{1, [1,4]} \\ \end{array} \right]\\ &\longrightarrow \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & -\alpha_2 \cdot \sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & -\alpha_3 \cdot \sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & -\alpha_4 \cdot \sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & \mathbf{0} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & - \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & 
\mathbf{0} & - \mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & - \mathcal{F}_{1, [1,4]} \\ \end{array} \right] = \overline{\mathbf{C}}. \end{split} \end{equation*} Let $\overline{\mathbf{C}}_k$ with $k \in \{ 1,2, \ldots, 10 \}$ denote the $k$-th row of $\overline{\mathbf{C}}$. We add the row vector combination \begin{equation*} \alpha_2^2 \overline{\mathbf{C}}_1 + \alpha_3^2 \overline{\mathbf{C}}_2 + \alpha_4^2 \overline{\mathbf{C}}_3 - \alpha_2\alpha_3 \overline{\mathbf{C}}_5 - \alpha_2\alpha_4 \overline{\mathbf{C}}_6 - \alpha_3\alpha_4 \overline{\mathbf{C}}_8 \end{equation*} to the $4$-th row of $\overline{\mathbf{C}}$. Then the matrix $\overline{\mathbf{C}}$ becomes \begin{equation*} \overline{\overline{\mathbf{C}}} = \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & \mathbf{0} & \mathbf{0} & - \mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & - \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & \mathbf{0} & - \mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} & - \mathcal{F}_{1, [1,4]} \\ \end{array} \right].
\end{equation*} During the row operation process, the submatrix $\widetilde{\mathbf{D}}$ of $\widetilde{\mathbf{C}}$ becomes \begin{equation*} \overline{\overline{\mathbf{D}}} = \left[\begin{array}{cccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} \\ \end{array} \right], \end{equation*} whose rows correspond to the $1$-st, $2$-nd, $3$-rd, $5$-th, $6$-th, and $8$-th rows of $\overline{\overline{\mathbf{C}}}$. In particular, the matrices $\overline{\overline{\mathbf{D}}}$ and $\mathbf{D}$ have the same rank. Because $\overline{\overline{\mathbf{C}}}_7$, $\overline{\overline{\mathbf{C}}}_9$, and $\overline{\overline{\mathbf{C}}}_{10}$ are the only non-zero rows of $\overline{\overline{\mathbf{C}}}$ that do not correspond to rows of $\overline{\overline{\mathbf{D}}}$, we deduce that \begin{equation*} {\rm rank}(\mathbf{C}) = {\rm rank}(\overline{\overline{\mathbf{C}}}) \leq {\rm rank}(\overline{\overline{\mathbf{D}}}) + 3 = {\rm rank}(\mathbf{D}) + 3. \end{equation*} In addition, if the $1 \times 3$ row vectors $\mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]}$ are linearly independent, then ${\rm rank}(\mathbf{C}) = {\rm rank}(\mathbf{D}) + 3$. \end{proof} Next, in the following example, we further explore the anisotropic sheaf defined on $K_6$. In particular, we will build on the results from Example \ref{Example: K5 example} to establish that ${\rm rank}(\mathbf{C}) = {\rm rank}(\mathbf{D}) + 3 = 9 + 3 = 12$ for the anisotropic sheaf of rank $3$ on $K_6$.
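The rank identity above can also be checked numerically. The sketch below is a minimal illustration only: it takes each restriction map $\mathcal{F}_{i,[i,j]}$ to be the $1 \times 3$ difference row vector $x_j - x_i$ of assumed 3D coordinates (a stand-in for the anisotropic maps of Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}, not the paper's actual definition), and verifies ${\rm rank}(\mathbf{C}) = {\rm rank}(\mathbf{D}) + 3$ for $K_5$ and $K_6$ with exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Exact rank over the rationals via Gaussian elimination."""
    m = [[Fraction(v) for v in r] for r in rows]
    rk = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def coboundary(points):
    """One row per edge [i, j] of K_n: block -F_{i,[i,j]} at vertex i and
    +F_{i,[i,j]} at vertex j, with F_{i,[i,j]} := x_j - x_i (an illustrative
    stand-in for the paper's anisotropic restriction maps)."""
    n, rows = len(points), []
    for i, j in combinations(range(n), 2):
        f = [points[j][k] - points[i][k] for k in range(3)]
        row = [0] * (3 * n)
        row[3 * i:3 * i + 3] = [-v for v in f]
        row[3 * j:3 * j + 3] = f
        rows.append(row)
    return rows

pts = [(t, t * t, t ** 3) for t in range(1, 7)]  # generic points on the moment curve

C5, D5 = coboundary(pts[:5]), coboundary(pts[:4])  # K_5 and its K_4 submatrix
assert rank(C5) == rank(D5) + 3 == 9               # rank(C) = rank(D) + 3 = 6 + 3

C6, D6 = coboundary(pts[:6]), coboundary(pts[:5])  # K_6 and its K_5 submatrix
assert rank(C6) == rank(D6) + 3 == 12              # rank(C) = rank(D) + 3 = 9 + 3
```

With integer coordinates on the moment curve, the rank computation involves no floating-point tolerance, so the equalities are exact; the coordinates only need to span $\mathbb{R}^3$ affinely for the difference-vector family to have rank $3$.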
Moreover, we will show that by systematically applying the methods outlined in Examples \ref{Example: K5 example} and \ref{Example: K6 example}, similar rank properties are preserved for rank-$3$ sheaves $\mathcal{F}: (K_n,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$. Specifically, based on the demonstrations in Examples \ref{Example: K5 example} and \ref{Example: K6 example}, we can confirm that ${\rm rank}(\mathbf{C}) = 3 \cdot (|V| - 2) = 3 \cdot (n - 2) = 3n - 6$. Notably, $\dim_\mathbb{R}(L_\mathcal{F}) = 3 \cdot |V| - {\rm rank}(\mathbf{C}) = 3n - (3n - 6) = 6$ remains constant. \begin{exam.} \label{Example: K6 example} Let $G = K_6$ be the complete graph with vertex set $V = \{ 1,2,3,4,5,6\}$ equipped with 3D coordinates, and let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be the anisotropic sheaf defined in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}. Then the coboundary matrix $\mathbf{C}: C^0(G,\mathcal{F}) \rightarrow C^1(G,\mathcal{F})$ is the $15 \times 18$ matrix \begin{equation*} \mathbf{C} = \left[\begin{array}{cccccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,6]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,5]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,5]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,6]} &
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,6]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,5]} & \mathbf{0} & \mathcal{F}_{3, [3,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,6]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{3, [3,6]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{4, [4,5]} & \mathcal{F}_{4, [4,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{4, [4,6]} & \mathbf{0} & \mathcal{F}_{4, [4,6]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{5, [5,6]} & \mathcal{F}_{5, [5,6]} \\ \end{array}\right]. \end{equation*} If ${\rm rank}(\mathcal{F}) = 3$, then ${\rm rank}(\mathbf{C}) = {\rm rank}(\mathbf{D}) + 3 = 9 + 3 = 12$, where \begin{equation*} \mathbf{D} = \left[\begin{array}{ccccc} -\mathcal{F}_{1, [1,2]} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ -\mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -\mathcal{F}_{2, [2,3]} & \mathcal{F}_{2, [2,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,4]} & \mathbf{0} & \mathcal{F}_{2, [2,4]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{2, [2,5]} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{2, [2,5]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,4]} & \mathcal{F}_{3, [3,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{3, [3,5]} & \mathbf{0} & \mathcal{F}_{3, [3,5]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{4, [4,5]} & \mathcal{F}_{4, [4,5]} \\ \end{array}\right] \end{equation*} consists of the rows corresponding to rows $\mathbf{C}_1, \mathbf{C}_2, \mathbf{C}_3, \mathbf{C}_4, \mathbf{C}_6,
\mathbf{C}_7, \mathbf{C}_8, \mathbf{C}_{10}, \mathbf{C}_{11}, \mathbf{C}_{13}$ of $\mathbf{C}$. In particular, we have $\dim_\mathbb{R}(L_\mathcal{F}) = 3 \cdot |V| - {\rm rank}(\mathbf{C}) = 3 \cdot 6 - 12 = 6$. \end{exam.} \begin{proof} By assumption, the rank of $\mathcal{F}$ is $3$. Because $G = K_6$ is complete, every row vector $\mathcal{F}_{i,[i,j]}$ lies in the span of the set $\{ \mathcal{F}_{1,[1,j]} \ | \ j = 2, ..., 6 \}$ of row vectors. Namely, all edges can be uniquely represented by the ordered sequence \begin{equation*} \overbrace{[v_1, v_2], [v_1, v_3], ..., [v_1, v_n]}^{n-1 \ \text{edges}}, \overbrace{[v_2, v_3], [v_2, v_4], ..., [v_2, v_n]}^{n-2 \ \text{edges}}, ..., \overbrace{[v_{n-1}, v_n]}^{1 \ \text{edge}}. \end{equation*} Without loss of generality, we may assume that $\mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]}$ are linearly independent. In particular, every row vector $\mathcal{F}_{i,[i,j]}$ lies in the span of the set $\{ \mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]} \}$.
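This spanning step can be made concrete with a small numerical check. The sketch below again uses the illustrative difference-vector model $\mathcal{F}_{i,[i,j]} := x_j - x_i$ (an assumption for the sketch, not Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}); it confirms that three of the row vectors out of vertex $1$ are linearly independent, and that every $\mathcal{F}_{i,[i,j]}$ is a difference of two of the vectors $\mathcal{F}_{1,[1,j]}$.

```python
from fractions import Fraction
from itertools import combinations

# Six "generic" 3D coordinates on the moment curve: no four points are coplanar.
pts = [(Fraction(t), Fraction(t) ** 2, Fraction(t) ** 3) for t in range(1, 7)]

def diff(i, j):  # illustrative restriction map F_{i,[i,j]} := x_j - x_i
    return tuple(pts[j][k] - pts[i][k] for k in range(3))

def det3(a, b, c):  # 3x3 determinant with rows a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# F_{1,[1,2]}, F_{1,[1,3]}, F_{1,[1,4]} are linearly independent:
assert det3(diff(0, 1), diff(0, 2), diff(0, 3)) != 0

# Every F_{i,[i,j]} lies in their span, via x_j - x_i = (x_j - x_1) - (x_i - x_1):
for i, j in combinations(range(6), 2):
    assert diff(i, j) == tuple(diff(0, j)[k] - diff(0, i)[k] for k in range(3))
```

The identity $x_j - x_i = (x_j - x_1) - (x_i - x_1)$ is what makes the star of vertex $1$ a spanning family for all edge maps in this model.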
Similar to the proof in Example \ref{Example: K5 example}, we employ column and row operations on $\mathbf{C}$, leading to the matrix \begin{equation*} \overline{\mathbf{C}} = \left[\begin{array}{cccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,6]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & -\mathcal{F}_{1, [1,5]} \\ \end{array} \right]. 
\end{equation*} Moreover, based on the same column and row operations, the submatrix $\mathbf{D}$ becomes \begin{equation*} \overline{\mathbf{D}} = \left[\begin{array}{ccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & -\mathcal{F}_{1, [1,4]} \\ \end{array} \right]. \end{equation*} Let $\overline{\mathbf{C}}_k$ with $k = 1,2, \ldots, 15$ denote the rows of $\overline{\mathbf{C}}$. By the proof of Example \ref{Example: K5 example}, the row vector $\overline{\mathbf{C}}_5 = \begin{bmatrix} \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,6]} \\ \end{bmatrix}$ is generated by the row vectors of the submatrix of $\overline{\mathbf{C}}$: \begin{equation} \label{Eq. submatrix of Cbar in K6 case} \left[\begin{array}{cccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \end{array} \right]. \end{equation} Note that the rows of this submatrix only involve linear maps $\mathcal{F}_{i,[i,j]}$ with $i < j$ and $i, j \in \{ 1, 2, 3, 4, 6 \}$. In particular, the row $\overline{\mathbf{C}}_5$ can be spanned by rows in $\overline{\mathbf{C}}$ that do not involve the linear transformation $\mathcal{F}_{1,[1,5]}$. Because the set $\{ \mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]} \}$ spans all the row vectors $\mathcal{F}_{i,[i,j]}$ with $i < j$, we may represent $\mathcal{F}_{1, [1,5]}$ and $\mathcal{F}_{1, [1,6]}$ by \begin{equation*} \mathcal{F}_{1,[1,5]} = \sum_{i = 2}^4 \alpha_i \mathcal{F}_{1,[1,i]} \text{ and } \mathcal{F}_{1,[1,6]} = \sum_{i = 2}^4 \beta_i \mathcal{F}_{1,[1,i]}.
\end{equation*} By row operations, we obtain \begin{equation*} \begin{split} \overline{\mathbf{C}} &\longrightarrow \left[\begin{array}{cccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & -\mathcal{F}_{1, [1,5]} \\ \end{array} \right] \\ &\longrightarrow \left[\begin{array}{cccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & 
\mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \bfu & \bfv & \bfw & \mathbf{0} & \mathbf{0} \\ \end{array} \right] = \overline{\overline{\mathbf{C}}}, \end{split} \end{equation*} where $\bfu = \beta_2\mathcal{F}_{1,[1,5]} + \alpha_2\mathcal{F}_{1,[1,6]}$, $\bfv = \beta_3\mathcal{F}_{1,[1,5]} + \alpha_3\mathcal{F}_{1,[1,6]}$, and $\bfw = \beta_4\mathcal{F}_{1,[1,5]} + \alpha_4\mathcal{F}_{1,[1,6]}$. 
By expanding the terms $\mathcal{F}_{1,[1,5]}$ and $\mathcal{F}_{1,[1,6]}$ as linear combinations of $\mathcal{F}_{1, [1,2]}$, $\mathcal{F}_{1, [1,3]}$, and $\mathcal{F}_{1, [1,4]}$, we obtain \begin{equation*} \begin{split} \bfu &= 2\alpha_2\beta_2 \cdot \mathcal{F}_{1,[1,2]} + (\alpha_3\beta_2 + \alpha_2\beta_3) \cdot \mathcal{F}_{1,[1,3]} + (\alpha_4\beta_2 + \alpha_2\beta_4) \cdot \mathcal{F}_{1,[1,4]}, \\ \bfv &= (\alpha_2\beta_3 + \alpha_3\beta_2) \cdot \mathcal{F}_{1,[1,2]} + 2\alpha_3\beta_3 \cdot \mathcal{F}_{1,[1,3]} + (\alpha_4\beta_3 + \alpha_3\beta_4) \cdot \mathcal{F}_{1,[1,4]}, \\ \bfw &= (\alpha_2\beta_4 + \alpha_4\beta_2) \cdot \mathcal{F}_{1,[1,2]} + (\alpha_3\beta_4 + \alpha_4\beta_3) \cdot \mathcal{F}_{1,[1,3]} + 2\alpha_4\beta_4 \cdot \mathcal{F}_{1,[1,4]}. \\ \end{split} \end{equation*} We add the row vector combination \begin{equation} \label{Eq. K6 linear combination} \begin{split} &-2\alpha_2\beta_2 \overline{\mathbf{C}}_1 -2\alpha_3\beta_3 \overline{\mathbf{C}}_2 -2\alpha_4\beta_4 \overline{\mathbf{C}}_3 + (\alpha_3\beta_2 + \alpha_2\beta_3) \cdot \overline{\mathbf{C}}_6 + (\alpha_4\beta_2 + \alpha_2\beta_4) \cdot \overline{\mathbf{C}}_7 + (\alpha_4\beta_3 + \alpha_3\beta_4) \cdot \overline{\mathbf{C}}_{10} \end{split} \end{equation} to the row vector $\overline{\overline{\mathbf{C}}}_{15} = \begin{bmatrix} \mathbf{0} & \bfu & \bfv & \bfw & \mathbf{0} & \mathbf{0} \\ \end{bmatrix}$. More precisely, let $\bfs_1, \bfs_2, \bfs_3, \bfs_4, \bfs_5, \bfs_6$ be the corresponding $1 \times 3$ blocks of the row vector in \eqref{Eq.
K6 linear combination}, then we have \begin{equation*} \begin{split} \bfs_1 &= \bfs_5 = \bfs_6 = \mathbf{0}, \\ \bfs_2 &= -2\alpha_2\beta_2 \cdot \mathcal{F}_{1, [1,2]} -(\alpha_3\beta_2 + \alpha_2\beta_3) \cdot \mathcal{F}_{1, [1,3]} - (\alpha_4\beta_2 + \alpha_2\beta_4) \cdot \mathcal{F}_{1, [1,4]} = -\bfu, \\ \bfs_3 &= -2\alpha_3\beta_3 \cdot \mathcal{F}_{1, [1,3]} -(\alpha_3\beta_2 + \alpha_2\beta_3) \cdot \mathcal{F}_{1, [1,2]} - (\alpha_4\beta_3 + \alpha_3\beta_4) \cdot \mathcal{F}_{1, [1,4]} = -\mathbf{v}, \\ \bfs_4 &= -2\alpha_4\beta_4 \cdot \mathcal{F}_{1, [1,4]} - (\alpha_4\beta_2 + \alpha_2\beta_4) \cdot \mathcal{F}_{1, [1,2]} - (\alpha_4\beta_3 + \alpha_3\beta_4) \cdot \mathcal{F}_{1, [1,3]} = -\mathbf{w}. \end{split} \end{equation*} In particular, these equations imply that the matrix $\overline{\overline{\mathbf{C}}}$ becomes \begin{equation*} \widehat{\mathbf{C}} = \left[\begin{array}{cccccc} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,5]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} & 
\mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,5]} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,6]} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \end{array} \right]. \end{equation*} Because $\mathcal{F}_{1,[1,2]}, \mathcal{F}_{1,[1,3]}, \mathcal{F}_{1,[1,4]}$ are linearly independent, ${\rm rank}(\widehat{\mathbf{C}}) = {\rm rank}(\mathbf{D}) + 3$. By Example \ref{Example: K5 example}, ${\rm rank}(\mathbf{D}) = 9$. Therefore, ${\rm rank}(\mathbf{C}) = {\rm rank}(\widehat{\mathbf{C}}) = 9 + 3 = 12$. \end{proof} Building on the findings from Examples \ref{Example: K5 example} and \ref{Example: K6 example}, we are now prepared to demonstrate that any rank-$3$ sheaf $\mathcal{F}: (K_n,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$, as described in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2} on any $K_n$, necessarily possesses a $0$-th coboundary matrix $\mathbf{C}: C^0(K_n,\mathcal{F}) \rightarrow C^1(K_n,\mathcal{F})$ of rank $3 \cdot (n-2)$. \begin{theorem}\label{Theorem: Main result 3-} \label{Theorem: Kn example} Let $n \geq 4$, $G = K_n$ with $V = \{ 1, 2, ...,n \}$, and let $\mathcal{F}: (G,\leq) \rightarrow \textup{\textsf{Vect}}_\mathbb{R}$ be defined as in Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}. If ${\rm rank}(\mathcal{F}) = 3$, then the $0$-th coboundary matrix $\mathbf{C}: C^0(G,\mathcal{F}) \rightarrow C^1(G,\mathcal{F})$ has rank $3 \cdot (n-2)$. In particular, we have $\dim_\mathbb{R}(L_\mathcal{F}) = 3 \cdot |V| - {\rm rank}(\mathbf{C}) = 3 \cdot n - (3 \cdot n - 6) = 6$. \end{theorem} \begin{proof} We prove the theorem by induction on $n$. For $n = 4$, the theorem follows from Example \ref{Example: K4 example}.
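Before the inductive step, the claimed pattern can be previewed numerically. As before, the sketch uses an illustrative difference-vector model for the restriction maps (an assumption for the sketch, not the maps of Equation~\eqref{Eq. Anisotropic sheaf definition-form 2}): for small $n$, the coboundary matrix has rank $3(n-2)$ and kernel dimension $6$.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Exact rank over the rationals via Gaussian elimination."""
    m = [[Fraction(v) for v in r] for r in rows]
    rk = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def coboundary(points):
    """Illustrative coboundary of K_n with F_{i,[i,j]} := x_j - x_i."""
    n, rows = len(points), []
    for i, j in combinations(range(n), 2):
        f = [points[j][k] - points[i][k] for k in range(3)]
        row = [0] * (3 * n)
        row[3 * i:3 * i + 3] = [-v for v in f]
        row[3 * j:3 * j + 3] = f
        rows.append(row)
    return rows

for n in range(4, 8):
    pts = [(t, t * t, t ** 3) for t in range(1, n + 1)]  # generic coordinates
    r = rank(coboundary(pts))
    assert r == 3 * (n - 2)   # rank(C) = 3(n - 2)
    assert 3 * n - r == 6     # dim L_F = 3|V| - rank(C) = 6
```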
Suppose the theorem holds for some $n \geq 4$; we prove it for $G = K_{n+1}$. By applying column and row operations to the coboundary matrix of $K_{n+1}$, we obtain the matrix \begin{equation*} \overline{\mathbf{C}} = \left[ \begin{array}{cccccc|c} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathcal{F}_{1, [1,n]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathcal{F}_{1, [1,n+1]} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \mathbf{0} & -\mathcal{F}_{1, [1,n]} & \mathbf{0} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,2]} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n]} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,3]} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,n+1]} & -\mathcal{F}_{1, [1,n]} \\ \end{array}\right]. \end{equation*} By assumption, the rank of $\mathcal{F}$ is $3$.
Because $G = K_{n+1}$ is complete, every row vector $\mathcal{F}_{i,[i,j]}$ lies in the span of the set $\{ \mathcal{F}_{1,[1,j]} \ | \ j = 2, ..., n+1 \}$ of row vectors. Without loss of generality, we may assume that $\mathcal{F}_{1, [1,2]}, \mathcal{F}_{1, [1,3]}, \mathcal{F}_{1, [1,4]}$ are linearly independent. By the proof of Example \ref{Example: K5 example}, the vector \begin{equation*} \left[ \begin{array}{cccccc|c} \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathcal{F}_{1, [1,n+1]} \\ \end{array}\right] \end{equation*} is generated by the row vectors of the submatrix of size $9 \times 3(n+1)$: \begin{equation} \label{Eq. Submatrix A1} \mathbf{A}_1 = \left[ \begin{array}{ccccccc|c} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & - \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \end{array}\right].
\end{equation} On the other hand, by the proof of Example \ref{Example: K6 example}, for every $j \in \{ 5, ..., n \}$, the row vector \begin{equation*} \left[ \begin{array}{ccccccc|c} \mathbf{0} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,j]} \\ \end{array}\right] \end{equation*} in which the block $-\mathcal{F}_{1, [1,n+1]}$ occupies the $j$-th block position, is generated by the row vectors of the submatrix of size $13 \times 3(n+1)$: \begin{equation*} \label{Eq. Submatrix A2} \mathbf{A}_2 = \left[\begin{array}{cccccccc|c} \mathbf{0} & \mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathcal{F}_{1, [1,4]} & \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathcal{F}_{1, [1,j]} & \cdots & \mathbf{0} & \mathbf{0} \\ \hline \mathbf{0} & -\mathcal{F}_{1, [1,3]} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,4]} & \mathbf{0} & -\mathcal{F}_{1, [1,2]} & \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,j]} & \mathbf{0} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,2]} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,2]} \\ \hline \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,4]} & -\mathcal{F}_{1, [1,3]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,j]} & \mathbf{0} & \cdots & -\mathcal{F}_{1, [1,3]} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \cdots & \cdots &
\mathbf{0} & -\mathcal{F}_{1, [1,3]} \\ \hline \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,j]} & \cdots & -\mathcal{F}_{1, [1,4]} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathcal{F}_{1, [1,n+1]} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & -\mathcal{F}_{1, [1,4]} \\ \end{array}\right]. \end{equation*} Note that the matrix $\overline{\mathbf{C}}$ is formed by augmenting $\overline{\mathbf{D}}$---the matrix corresponding to the complete graph $K_n$ with vertex set $\{1, 2, \ldots, n \}$, padded with a zero block column for the new vertex $n+1$---with $n$ new rows, each a $1 \times 3(n+1)$ vector. Given that $\mathcal{F}_{1,[1,2]}, \mathcal{F}_{1,[1,3]}, \mathcal{F}_{1,[1,4]}$ are linearly independent, and by the induction hypothesis, the rank of $\overline{\mathbf{D}}$ is $3n - 6$. Observing the rows in matrices $\mathbf{A}_1$ and $\mathbf{A}_2$, we note that, apart from the rows in matrix $\overline{\mathbf{D}}$, only three additional linearly independent vectors are required to generate all the rows in $\overline{\mathbf{C}}$. This shows that ${\rm rank}(\overline{\mathbf{C}}) = {\rm rank}(\overline{\mathbf{D}}) + 3 = 3n - 3 = 3(n+1) - 6$. By mathematical induction, the theorem follows. \end{proof} \section{Simplicial Complexes and Their Geometric Realizations} This paper uses two typical representations of a finite simplicial complex in $\mathbb{R}^3$. First, we represent a simplicial complex in $\mathbb{R}^3$ as a collection $K$ of geometric simplices embedded in $\mathbb{R}^3$, equipped with a partial order $\leq$ that is based on the face relations of the simplices; that is, as a partially ordered set $(K, \leq)$ of geometric simplices in $\mathbb{R}^3$. Second, we consider the simplicial complex as a subspace of $\mathbb{R}^3$, known as its geometric realization in $\mathbb{R}^3$ and denoted by $|K|$.
The former representation focuses primarily on the combinatorial face relations of simplices in $K$, while the latter embeds the entire finite simplicial complex as a compact subset of $\mathbb{R}^3$, formed by the union of geometric simplices in $\mathbb{R}^3$. We briefly recap these formal definitions as follows. \begin{def.} A \textbf{simplicial complex} in \( \mathbb{R}^3 \) is a collection \( K \) of simplices, where each simplex is the convex hull \( \sigma = \mathrm{conv}(S) \) for some affinely independent set \( S \subseteq \mathbb{R}^3 \). Two simplices $\sigma$ and $\tau$ satisfy the relation $\sigma \leq \tau$ if $\sigma$ is a face of $\tau$. The collection \( K \) satisfies the following properties: {\rm (a)} if \( \sigma \in K \) and \( \tau \leq \sigma \), then \( \tau \in K \); and {\rm (b)} if \( \sigma, \tau \in K \) and \( \sigma \cap \tau \neq \emptyset \), then \( \sigma \cap \tau \) is a face of both \( \sigma \) and \( \tau \). \end{def.} A simplicial complex in $\mathbb{R}^3$, as defined above, is a collection of subsets of $\mathbb{R}^3$, with an emphasis on the combinatorial properties of a collection of simplices within $\mathbb{R}^3$. On the other hand, when analyzing the global structure of a simplicial complex embedded in $\mathbb{R}^3$, its geometric realization is considered, which is defined as follows. \begin{def.} Let $K$ be a finite simplicial complex in $\mathbb{R}^3$. Then, the \textbf{geometric realization} of $K$, denoted as $|K|$, is the union of all simplices of $K$ in $\mathbb{R}^3$, i.e., $|K| = \bigcup_{\sigma \in K} \sigma \subseteq \mathbb{R}^3$. \end{def.} To investigate the geometric and topological properties of $|K|$, we briefly recap the following well-known properties of simplices within $\mathbb{R}^3$, focusing particularly on their boundary, interior, and supporting hyperplane properties. \begin{prop.} \label{Proposition: Elementary Munkres properties} Let $S = \{ \bfx_0, ..., \bfx_q \}$ be an affinely independent set in $\mathbb{R}^3$ and let $\sigma = \conv(S)$ be the simplex generated by $S$.
Then the following assertions hold. \begin{itemize} \item[\rm (a)] The relative interior of $\sigma$, denoted by $\reInt(\sigma)$, is $\{ \sum_{i = 0}^q t_i \bfx_i \ | \ t_i \in (0, 1), \sum_{i = 0}^q t_i = 1 \}$. \item[\rm (b)] If $\sigma$ is a $3$-simplex, then the interior and relative interior of $\sigma$ coincide, i.e., $\reInt(\sigma) = \Int(\sigma)$. \item[\rm (c)] The $(q-1)$-faces of $\sigma$ are exactly $\conv(S_i)$, where $S_i = S \setminus \{ \bfx_i \}$ for $i \in \{ 0, ..., q \}$. \item[\rm (d)] The relative boundary of $\sigma$, denoted by $\reBd(\sigma)$, is the union of all $(q-1)$-faces of $\sigma$. \item[\rm (e)] If $\sigma$ is a $3$-simplex, then the boundary and relative boundary of $\sigma$ coincide, i.e., $\reBd(\sigma) = \bd(\sigma)$. \end{itemize} \end{prop.} \begin{proof} These properties are not limited to the case $N = 3$; they hold whenever $S = \{ \bfx_0, \dots, \bfx_q \}$ is a set of $q+1$ affinely independent vectors in $\mathbb{R}^N$ for any $N, q \in \mathbb{N}$ with $q \leq N$ \cite{munkres2018elements}. \end{proof} Furthermore, due to the convexity of simplices, every $3$-simplex in $\mathbb{R}^3$ lies on one side of the hyperplane spanned by each of its $2$-faces. More precisely, for a $3$-simplex \( \sigma = \mathrm{conv}(S) \) generated by an affinely independent set \( S = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \} \subseteq \mathbb{R}^3 \), any $2$-face spans a hyperplane in \( \mathbb{R}^3 \). For instance, for the $2$-face \( \tau = \mathrm{conv}(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2) \) of \( \sigma \), the set \( \{ \mathbf{x}_1 - \mathbf{x}_0, \mathbf{x}_2 - \mathbf{x}_0 \} \) is linearly independent, and the hyperplane spanned by \( \tau \) is defined as the affine hull of \( \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2 \), i.e., $\mathrm{aff}(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2) = \mathbf{x}_0 + \mathrm{span}_{\mathbb{R}} \{ \mathbf{x}_1 - \mathbf{x}_0, \mathbf{x}_2 - \mathbf{x}_0 \}$.
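As a quick numerical sanity check, a normal vector $\mathbf{v}$ to such a hyperplane can be computed as the cross product of the two spanning vectors. The following pure-Python sketch (illustrative helper names and coordinates, not from the paper) verifies that $\mathbf{v}$ annihilates the spanning directions of $\mathrm{aff}(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2)$ and can be oriented so that $\langle \mathbf{v}, \mathbf{x}_3 - \mathbf{x}_0 \rangle > 0$:

```python
# Numerical check of the hyperplane spanned by the 2-face
# tau = conv(x0, x1, x2) of a 3-simplex (illustrative coordinates).

def sub(a, b):  # componentwise difference a - b
    return tuple(ai - bi for ai, bi in zip(a, b))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# An affinely independent set S = {x0, x1, x2, x3} in R^3.
x0, x1, x2, x3 = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)

# v is perpendicular to x1 - x0 and x2 - x0, hence normal to
# aff(x0, x1, x2) = x0 + span{x1 - x0, x2 - x0}.
v = cross(sub(x1, x0), sub(x2, x0))
if dot(v, sub(x3, x0)) < 0:   # orient v so that <v, x3 - x0> > 0
    v = tuple(-c for c in v)

assert dot(v, sub(x1, x0)) == 0 and dot(v, sub(x2, x0)) == 0
assert dot(v, sub(x3, x0)) > 0
```

With this orientation of $\mathbf{v}$, the computation carried out next in the text shows that the whole simplex lies in the closed half-space on the positive side of the hyperplane.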
In addition, if $\bfv$ is a vector that is perpendicular to the vectors $\bfx_1 - \bfx_0$ and $\bfx_2 - \bfx_0$, then \begin{equation*} \bfx_0 + {\rm span}_{\mathbb{R}} \{ \bfx_1 - \bfx_0, \bfx_2 - \bfx_0 \} = H(\bfx_0, \bfv) := \{ \bfx \in \mathbb{R}^3 \ | \ \langle \bfv, \bfx - \bfx_0 \rangle = 0 \}. \end{equation*} Furthermore, if one chooses $\bfv$ with the property that $\langle \bfv, \bfx_3 - \bfx_0 \rangle > 0$, then for scalars $t_0, t_1, t_2, t_3 \in [0,1]$ with $t_0 + t_1 + t_2 + t_3 = 1$, one obtains \begin{equation*} \begin{split} \langle \bfv, (t_0\bfx_0 + t_1\bfx_1 + t_2\bfx_2 + t_3\bfx_3) -\bfx_0 \rangle &= \sum_{i = 0}^3 t_i \cdot \langle \bfv, \bfx_i - \bfx_0 \rangle = t_3 \cdot \langle \bfv, \bfx_3 - \bfx_0 \rangle \geq 0. \end{split} \end{equation*} In other words, the entire simplex $\sigma$ is contained in the \textit{closed half-space} $H(\mathbf{x}_0, \mathbf{v})_{\geq 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle \geq 0 \}$. Furthermore, the interior of \(\sigma\) is contained in the \textit{open half-space} \( H(\mathbf{x}_0, \mathbf{v})_{> 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle > 0 \} \). The opposite closed and open half-spaces \( H(\mathbf{x}_0, \mathbf{v})_{\leq 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle \leq 0 \} \) and \( H(\mathbf{x}_0, \mathbf{v})_{< 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle < 0 \} \) are defined similarly. With this geometric intuition, the intersection \( \sigma \cap H(\mathbf{x}_0, \mathbf{v}) \) is often referred to as the \textit{face} of \( \sigma \) opposite to \( \mathbf{x}_3 \)~\cite{munkres2018elements}. We summarize this observation in the following proposition, which will be beneficial in investigating simplicial complexes in \( \mathbb{R}^3 \).
\begin{prop.} \label{Proposition: Supporting hyperplane of simplex on face} Let \( S = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \} \) be an affinely independent set in \(\mathbb{R}^3\), and let \(\sigma = \operatorname{conv}(S)\) be the $3$-simplex generated by \( S \). Let \(\mathbf{v}\) be a vector in \(\mathbb{R}^3\) that is perpendicular to the vectors \(\mathbf{x}_1 - \mathbf{x}_0\) and \(\mathbf{x}_2 - \mathbf{x}_0\) and satisfies \(\langle \mathbf{v}, \mathbf{x}_3 - \mathbf{x}_0 \rangle > 0 \). Then, \begin{itemize} \item[\rm (a)] $\sigma \subseteq H(\mathbf{x}_0, \mathbf{v})_{\geq 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle \geq 0 \}$; \item[\rm (b)] $\Int(\sigma) \subseteq H(\mathbf{x}_0, \mathbf{v})_{> 0} := \{ \mathbf{x} \in \mathbb{R}^3 \mid \langle \mathbf{v}, \mathbf{x} - \mathbf{x}_0 \rangle > 0 \}$; \item[\rm (c)] each $\bfw \in \reInt(\conv(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2))$ admits an open ball $B_{\epsilon}(\bfw) := \{ \bfx \in \mathbb{R}^3 \ | \ \Vert \bfx - \bfw \Vert < \epsilon \}$ centered at $\bfw$ with radius $\epsilon$ such that $B_{\epsilon}(\bfw) \cap H(\mathbf{x}_0, \mathbf{v})_{> 0} \subseteq \Int(\sigma)$; \item[\rm (d)] if $\bfy$ is another point in $H(\mathbf{x}_0, \mathbf{v})_{> 0}$ such that $T = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \bfy \}$ is an affinely independent set and $\tau = \conv(T)$, then $\Int(\tau) \cap \Int(\sigma) \neq \emptyset$; moreover, $\tau$ and $\sigma$ share the $2$-face $\conv(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2)$. \end{itemize} \end{prop.} \begin{proof} Generalized statements for properties (a), (b), and (c) in the case of simplices of arbitrary dimension can be found in \cite{munkres2018elements}. Leveraging (a), (b), and (c), we prove property (d), which will be useful in exploring homogeneous simplicial $3$-complexes in $\mathbb{R}^3$.
Let $\bfy$ be another point in $H(\mathbf{x}_0, \mathbf{v})_{> 0}$ such that $T = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \bfy \}$ is an affinely independent set and $\tau = \conv(T)$. Since $\langle \mathbf{v}, \bfy - \mathbf{x}_0 \rangle > 0$, the vector $\mathbf{v}$ satisfies the hypotheses of the proposition for $\tau$ as well, so both $\sigma$ and $\tau$ satisfy properties (a), (b), and (c). Choose $\bfw \in \reInt(\conv(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2))$. Then there exist $\epsilon_1 > 0$ and $\epsilon_2 > 0$ such that $B_{\epsilon_1}(\bfw) \cap H(\mathbf{x}_0, \mathbf{v})_{> 0} \subseteq \Int(\sigma)$ and $B_{\epsilon_2}(\bfw) \cap H(\mathbf{x}_0, \mathbf{v})_{> 0} \subseteq \Int(\tau)$. Setting $\epsilon = \min\{ \epsilon_1, \epsilon_2 \}$, we obtain $\Int(\tau) \cap \Int(\sigma) \neq \emptyset$, since $\emptyset \neq B_{\epsilon}(\bfw) \cap H(\mathbf{x}_0, \mathbf{v})_{> 0} \subseteq \Int(\sigma)$ and $\emptyset \neq B_{\epsilon}(\bfw) \cap H(\mathbf{x}_0, \mathbf{v})_{> 0} \subseteq \Int(\tau)$. \end{proof} In addition to representing a $3$-simplex \( \sigma \subseteq \mathbb{R}^3 \) as the convex hull of affinely independent vectors \( \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \), one can also represent it as the intersection of closed half-spaces defined by its faces. In fact, any convex polytope is the intersection of the closed half-spaces defined by its faces (cf. \cite{ewald1996combinatorial}). We formalize this fact for the case of $3$-simplices in the following proposition. \begin{prop.} \label{Proposition: sigma as the intersection of all its face spaces} Let $\sigma \subseteq \mathbb{R}^3$ be the $3$-simplex defined as the convex hull of an affinely independent set $S = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \}$. For each $i \in \{ 0, 1, 2, 3 \}$, there is a normal vector $\mathbf{v}_i \in \mathbb{R}^3$ as in Proposition \ref{Proposition: Supporting hyperplane of simplex on face} such that $\sigma \subseteq H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0}$ and $\sigma = \bigcap_{i=0}^3 H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0}$.
\end{prop.} The intersection formula in Proposition \ref{Proposition: sigma as the intersection of all its face spaces} has an alternative geometric explanation. More precisely, by choosing the vectors $\mathbf{v}_i$ properly, the supporting hyperplanes $H(\mathbf{x}_i, \mathbf{v}_i)$ in Proposition \ref{Proposition: sigma as the intersection of all its face spaces} are exactly the hyperplanes spanned by the $2$-faces of $\sigma$. For instance, if $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ and $\mathbf{v} \in \mathbb{R}^3$ are defined as in Proposition \ref{Proposition: Supporting hyperplane of simplex on face}, then $H(\mathbf{x}_0, \mathbf{v}) = {\rm aff}(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2)$, where ${\rm aff}(S)$ denotes the affine hull of a set $S \subseteq \mathbb{R}^3$. \begin{coro} \label{Corollary: A point outside the sigma and supporting hyperplane thm} Let $\sigma \subseteq \mathbb{R}^3$ be the $3$-simplex defined as the convex hull of an affinely independent set $S = \{ \mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3 \}$. For every $\mathbf{x} \in \mathbb{R}^3 \setminus \sigma$, there is an $i \in \{ 0, 1, 2, 3 \}$ such that $H := {\rm aff}(S \setminus \{ \mathbf{x}_i \})$ is a supporting hyperplane separating $\bfx$ from $\sigma$: the point $\bfx$ lies in an open half-space and $\sigma$ in the opposite closed half-space. \end{coro} \begin{proof} By Proposition \ref{Proposition: sigma as the intersection of all its face spaces}, with the vectors $\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ chosen as required there, the $3$-simplex $\sigma$ can be represented as the intersection $\sigma = \bigcap_{i=0}^3 H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0}$. Then, \begin{equation*} \mathbf{x} \in \mathbb{R}^3 \setminus \left( \bigcap_{i=0}^3 H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0} \right) = \bigcup_{i=0}^3 \mathbb{R}^3 \setminus H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0} = \bigcup_{i=0}^3 H(\mathbf{x}_i, \mathbf{v}_i)_{< 0}.
\end{equation*} Hence $\mathbf{x} \in H(\mathbf{x}_i, \mathbf{v}_i)_{< 0}$ for some $i \in \{ 0, 1, 2, 3 \}$; set $H = H(\mathbf{x}_i, \mathbf{v}_i)$. Because $\sigma \subseteq H(\mathbf{x}_i, \mathbf{v}_i)_{\geq 0}$, the supporting hyperplane $H$ separates $\mathbf{x}$ from $\sigma$ into two different half-spaces. \end{proof} \end{document}
\begin{multicols}{2}\raggedcolumns \subsubsection*{Basic arrows} \begin{tabular}{ll} \displayarrowstyle{to head}\\ \displayarrowstyle{rightarrow}\\ \displayarrowstyle{leftarrow}\\ \displayarrowstyle{leftrightarrow}\\ \displayarrowstyle{Rightarrow}\\ \displayarrowstyle{Leftarrow}\\ \displayarrowstyle{Leftrightarrow} \end{tabular} \subsubsection*{Arrows from bar} \begin{tabular}{ll} \displayarrowstyle{maps to}\\ \displayarrowstyle{mapsto}\\ \displayarrowstyle{mapsfrom}\\ \displayarrowstyle{Mapsto}\\ \displayarrowstyle{Mapsfrom}\\ \end{tabular} \subsubsection*{Arrows with hook} \begin{tabular}{ll} \displayarrowstyle{hook}\\ \displayarrowstyle{hook'}\\ \displayarrowstyle{hookrightarrow}\\ \displayarrowstyle{hookleftarrow}\\ \end{tabular} \subsubsection*{Arrows with tail} \begin{tabular}{ll} \displayarrowstyle{tail}\\ \displayarrowstyle{rightarrowtail}\\ \displayarrowstyle{leftarrowtail}\\ \end{tabular} \subsubsection*{Two-headed arrows} \begin{tabular}{ll} \displayarrowstyle{two heads}\\ \displayarrowstyle{twoheadrightarrow}\\ \displayarrowstyle{twoheadleftarrow}\\ \end{tabular} \subsubsection*{Harpoons} \begin{tabular}{ll} \displayarrowstyle{harpoon}\\ \displayarrowstyle{harpoon'}\\ \displayarrowstyle{rightharpoonup}\\ \displayarrowstyle{rightharpoondown}\\ \displayarrowstyle{leftharpoonup}\\ \displayarrowstyle{leftharpoondown}\\ \end{tabular} \subsubsection*{Dashed arrows} \begin{tabular}{ll} \displayarrowstyle{dashed}\\ \displayarrowstyle{dashrightarrow}\\ \displayarrowstyle{dashleftarrow}\\ \end{tabular} \subsubsection*{Squiggly arrows} \begin{tabular}{ll} \displayarrowstyle{squiggly}\\ \displayarrowstyle{rightsquigarrow}\\ \displayarrowstyle{leftsquigarrow}\\ \displayarrowstyle{leftrightsquigarrow} \end{tabular} \subsubsection*{Non-arrows} \begin{tabular}{ll} \displayarrowstyle{no head}\\ \displayarrowstyle{no tail}\\ \displayarrowstyle{dash}\\ \displayarrowstyle{equal}\\ \end{tabular} \end{multicols} A gray cross (\tikz \path[/pgf/tips=true,gray x-] (0,0) -- 
(1mm,0);) in the samples above indicates that the corresponding tip is kept unchanged. This allows several arrow styles to be superimposed. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, tail, two heads, dashed] & B \end{tikzcd} \end{codeexample} \subsection{Alternative syntax for arrows} \label{sec:altern-synt-arrows} The following forms of the arrow command were used before the appearance of the quotes syntax for labels, and now may seem somewhat convoluted. They are nonetheless still available for backwards compatibility. \begin{command}{\arrow\opt{\oarg{options}}\marg{direction}\meta{labels}} \end{command} Here, \meta{direction} is a string containing the characters |r|, |l|, |d|, |u| determining the arrow target, and \meta{labels} can be the empty string or of the form \begin{quote} \opt{\oarg{label options}}\marg{label text}\meta{more labels}. \end{quote} The equivalent command |\ar| can also be used in this form. Here is an example. \begin{codeexample}[] \begin{tikzcd} A \arrow{d} \arrow{r}[near start]{\phi}[near end]{\psi} & B \arrow[red]{d}{\xi} \\ C \arrow[red]{r}[blue]{\eta} & D \end{tikzcd} \end{codeexample} There are further shortened forms: \begin{pgfmanualentry} \extractcommand\rar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\lar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\dar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\uar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\drar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\urar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\dlar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\ular\opt{\oarg{options}}\meta{labels}\@@ \pgfmanualbody \end{pgfmanualentry} The first one is equivalent to \begin{quote} |\arrow|{\oarg{options}}|{r}|\meta{labels} \end{quote} and the other ones work analogously. 
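For instance, the following diagram is a minimal illustration written entirely with the shortened forms; it is equivalent to using |\arrow{r}| and |\arrow{d}| with the corresponding labels in each entry.
\begin{codeexample}[]
\begin{tikzcd}
A \rar{\phi} \dar & B \dar{\psi} \\
C \rar{\eta} & D
\end{tikzcd}
\end{codeexample}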
\subsection{Usage in plain \TeX{}} \label{sec:usage-plain-tex} To use this software in plain \TeX{}, load \tikzname{} and the \texttt{cd} library by saying \begin{quote} \ttfamily\char`\\input \declare{tikz}.tex\\ \index{cd@\protect\texttt{cd} library} \index{Libraries!cd@\protect\texttt{cd}} \ttfamily\char`\\usetikzlibrary\char`\{\declare{cd}\char`\} \end{quote} The |{tikzcd}| environment should then be replaced by the following: \begin{plainenvironment}{{tikzcd}\opt{\oarg{options}}} \end{plainenvironment} All other functions of this library work as described in this manual without change. \subsection{Usage in Con\TeX t} \label{sec:usage-plain-context} To use this software in Con\TeX t, load \tikzname{} and then the \texttt{cd} library by saying \begin{quote} \ttfamily\char`\\usemodule[\declare{tikz}]\\ \index{cd@\protect\texttt{cd} library} \index{Libraries!cd@\protect\texttt{cd}} \ttfamily\char`\\usetikzlibrary[\declare{cd}] \end{quote} The |{tikzcd}| environment should then be replaced by the following: \begin{contextenvironment}{{tikzcd}\opt{\oarg{options}}} \end{contextenvironment} All other functions of this library work as described in this manual without change. \section{Controlling the appearance of diagrams} \label{sec:chang-appe-diagr} This section describes a number of customization keys defined by this package. All keys are located in the path |/tikz/commutative diagrams|. Options passed to |{tikzcd}| or |\arrow| are searched for in that path, and, if not found there, in |/tikz|. To set options globally, it is convenient to use the following command. \begin{command}{\tikzcdset\marg{options}} Executes \meta{options} in the path |/tikz/commutative diagrams|. \end{command} Besides the keys described in this manual, numerous \tikzname\ parameters can affect the appearance of a diagram. However, only a few of them (namely those appearing in |every diagram|, |every cell|, |every arrow|, and |every label| below) are reinitialized when |{tikzcd}| is called. 
This means that modifying a certain \tikzname\ parameter globally may or may not affect the output of |{tikzcd}|. We also point out that besides the options and styles provided by this package, several keys defined by \tikzname{} are useful for arrows. Some examples are |dashed|, |dotted|, and their relatives, |line width=|\meta{dimension}, |color=|\meta{color}, |bend right|, |bend left|, |in=|\meta{angle}, |out=|\meta{angle}, |loop|, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-cap-joins} and \S\ref*{pgfman-library-to-paths}]{pgfman}. Likewise, \tikzname{} provides several keys that are useful for labels, such as |above|, |below|, |left|, |right|, |swap| (which makes the label be placed on the right side of the arrow, relative to its direction), |sloped|, |pos=|\meta{fraction}, |near start|, |near end|, |inner sep=|\meta{dimension}, |font=|\meta{font command}, |text width=|\meta{dimension}, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-nodes}, esp.\ \S\ref*{pgfman-section-pos-option}]{pgfman}. \subsection{General options} \label{sec:general-options} \begin{stylekey}{/tikz/commutative diagrams/every diagram} This style is applied to every |{tikzcd}| environment. Initially, it contains the following: \begin{quote} |row sep=normal||,|\\ |column sep=normal||,|\\ |/tikz/baseline=0pt| \end{quote} \end{stylekey} The |baseline=0pt| setting is used to make equation numbers be placed correctly (as an exception, one-row diagrams are anchored at their matrix base, which is exactly what you want). \begin{key}{/tikz/commutative diagrams/diagrams=\meta{options}} This key appends \meta{options} to the style |every diagram|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/every matrix} This style is applied to the \tikzname{} matrix created internally by |{tikzcd}|.
Initially, it contains the following: \begin{quote} |/tikz/inner sep=0pt| \end{quote} \end{stylekey} \begin{stylekey}{/tikz/commutative diagrams/every cell} This style is applied to every \tikzname{} matrix cell created by |{tikzcd}|. Initially, it contains the following: \begin{quote} |/tikz/shape=asymmetrical rectangle||,|\\ |/tikz/inner xsep=1ex||,|\\ |/tikz/inner ysep=0.85ex| \end{quote} \end{stylekey} The |asymmetrical rectangle| shape is described in \S\ref{sec:asymm-rect-shape}. The |inner xsep|, |inner ysep| options determine the spacing between a diagram entry and any arrows reaching it. \begin{key}{/tikz/commutative diagrams/cells=\meta{options}} This key appends \meta{options} to the style |every cell|. \end{key} \def\printsepaux+#1em{#1\,em} \def\printsep#1#2{\edef\temp{\pgfkeysvalueof{/tikz/commutative diagrams/#1 sep/#2}}\expandafter\printsepaux\temp} \begin{key}{/tikz/commutative diagrams/row sep=\meta{size}} This key acts as a ``frontend'' to \tikzname's |/tikz/row sep| key. If the key \begin{quote} |/tikz/commutative diagrams/row sep/|\meta{size} \end{quote} stores a \meta{value}, then it is read and |/tikz/row sep|=\meta{value} is set. If the key above is not initialized, then \meta{size} is presumably a dimension, and |/tikz/row sep|=\meta{size} is set. The initially available sizes, and their values, are the following: \begin{center} \begin{tabular}{cccccc} |tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\ \printsep{row}{tiny} & \printsep{row}{small} & \printsep{row}{scriptsize} & \printsep{row}{normal} & \printsep{row}{large} & \printsep{row}{huge} \end{tabular} \end{center} \end{key} Notice that setting, say, |row sep=1cm| globally with |\tikzcdset| will have no effect, since the |row sep| option is re-set at the beginning of each diagram. To make all diagrams have |row sep| equal to 1\,cm, you can modify the meaning of |normal| by saying \begin{quote} |\tikzcdset{row sep/normal=1cm}|. 
\end{quote} You can also create new sizes, but note that \pgfname\ requires new keys to be initialized explicitly. For example, to create a size |my size|, meaning 1\,ex, you should use \begin{quote} |\tikzcdset{row sep/my size/.initial=1ex}|. \end{quote} \begin{key}{/tikz/commutative diagrams/column sep=\meta{size}} This works analogously to the |row sep| key above. The sizes available initially are the following: \begin{center} \begin{tabular}{cccccc} |tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\ \printsep{column}{tiny} & \printsep{column}{small} & \printsep{column}{scriptsize} & \printsep{column}{normal} & \printsep{column}{large} & \printsep{column}{huge} \end{tabular} \end{center} \end{key} In the examples below, the triangular diagrams would look too wide or too tall if the column or row separation were not set appropriately. \begin{codeexample}[] \begin{tikzcd}[column sep=small] & A \arrow[dl] \arrow[dr] & \\ B \arrow{rr} & & C \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd}[row sep=tiny] & B \arrow[dd] \\ A \arrow[ur] \arrow[dr] & \\ & C \end{tikzcd} \end{codeexample} Section \ref*{pgfman-section-matrices}.3.2 of the \pgfname{} manual \cite{pgfman} contains further details on the spacing of matrix cells. \begin{key}{/tikz/commutative diagrams/math mode=\meta{boolean} (default true)} This key determines whether or not the contents of a diagram are typeset in math mode. If set globally or diagram-wise, it affects both the diagram entries and arrow labels. If used with |\arrow|, it affects only its labels. \end{key} \begin{key}{/tikz/commutative diagrams/background color=\meta{color} (initially white)} This key stores the name of a color, and is read by styles that fill the background, such as |description| and |crossing over|. It does not cause the background of diagrams to be filled.
\end{key} \subsection{Global options for arrows} \label{sec:options-arrows} \begin{stylekey}{/tikz/commutative diagrams/every arrow} This style is applied to every |\arrow|. Initially, it contains the following: \begin{quote} |/tikz/draw,|\\ |/tikz/line width=rule_thickness||,|\\ |rightarrow| \end{quote} \end{stylekey} \begin{key}{/tikz/commutative diagrams/arrows=\meta{options}} This key appends \meta{options} to the style |every arrow|. \end{key} \begin{key}{/tikz/commutative diagrams/arrow style=\meta{style}} This key determines which collection of arrow tips is used by the arrow tip selection styles listed in \S\ref{sec:changing-arrow-tips}. The initial setting is suitable for documents using the Computer Modern font at any size. The available choices for \meta{style} are: \begin{description} \item[\texttt{Latin Modern}] A small variant of the initial settings, intended for documents using the Latin Modern font at any size. \item[\texttt{math font}] This setting uses the |Glyph| meta arrow tip described in \S\ref{sec:font-arrow-tips}. \item[\texttt{tikz}] This setting uses the arrow tips defined in \tikzname's |arrows.meta| library. It honors the option |/tikz/>|. \end{description} \end{key} If you are using a font different from Computer Modern or Latin Modern, you may find the best results by selecting the |math font| style. As detailed in \S\ref{sec:font-arrow-tips}, this is not guaranteed to work perfectly with all fonts, but gives good results in many cases. If the \texttt{math font} style gives unsatisfactory results, you can try selecting the \texttt{tikz} style, and setting |/tikz/>| to the value that best matches your font (among those shown in \cite[\S\ref*{pgfman-section-arrows-meta}]{pgfman}). 
\begin{codeexample}[] \tikzcdset{ arrow style=tikz, diagrams={>={Straight Barb[scale=0.8]}} } \begin{tikzcd} A \arrow[r, tail] \arrow[rd] & B \arrow[d, two heads]\\ & D \end{tikzcd} \end{codeexample} \subsection{Absolute placement of arrows} \label{sec:absol-positioning} The usual behavior of |\arrow| is to produce an arrow starting at the diagram entry where the command appears, and ending at an entry whose location is specified relative to that. The following keys override this behavior, allowing source and target to be selected explicitly. \begin{key}{/tikz/commutative diagrams/from=\meta{node}} Sets the arrow source to \meta{node}. When called with an argument of the form \meta{row number}\texttt{-}\meta{column number}, sets the arrow source to be the corresponding entry of the diagram's matrix. \end{key} \begin{key}{/tikz/commutative diagrams/to=\meta{node}} Similar to |from|, but refers to the arrow target. \end{key} Recall that it is possible to give a specific entry of a \tikzname{} matrix a name by using the \verb!|[!\meta{options}\verb!]|! syntax, as done for entry $C$ in the example below. {\catcode`\|=12 \begin{codeexample}[] \begin{tikzcd} A \arrow[to=c, red] \arrow[to=2-2, blue] & B \\ |[alias=c]| C & D \arrow[from=1-1, to=1-2, purple] \end{tikzcd} \end{codeexample} } In the next examples, empty labels are used to create nodes for later reference. The |draw=red| option is used to show where these empty nodes are located, but of course you want to remove that when using this technique. 
\begin{codeexample}[] \begin{tikzcd}[column sep=scriptsize] A \arrow[dr] \arrow[rr, ""{name=U, below, draw=red}]{} & & B \arrow[dl] \\ & C \arrow[Rightarrow, from=U, "\psi"] \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, bend left=50, ""{name=U, below, draw=red}] \arrow[r, bend right=50, ""{name=D, draw=red}] & B \arrow[Rightarrow, from=U, to=D] \end{tikzcd} \end{codeexample} \subsection{Phantom arrows} \label{sec:phantom-arrows} Sometimes it is necessary to insert a symbol outside the grid subjacent to the diagram. The easiest way to achieve this is as a label to an invisible arrow. \begin{stylekey}{/tikz/commutative diagrams/phantom} Creates an invisible arrow. Labels to this arrow are not invisible. They will be anchored at their center and typeset in full size (i.e., with |\textstyle|). To get smaller labels, as in ordinary arrows, use the |\scriptstyle| command. \end{stylekey} In the picture below, the arrow containing the |phantom| option goes from $A$ to $D$, and the |\ulcorner| symbol ($\ulcorner$) is inserted closer to the starting point $A$. \begin{codeexample}[] \begin{tikzcd} A \arrow[r] \arrow[d] \arrow[dr, phantom, "\ulcorner", very near start] & B \arrow[d] \\ C \arrow[r] & D \end{tikzcd} \end{codeexample} \subsection{Fine-tuning the placement of arrows} \label{sec:fine-tuning-arrows} \begin{key}{/tikz/commutative diagrams/shift left=\meta{dimension} (default 0.56ex)} Shifts arrows by \meta{dimension} to the left, relative to the arrow direction. A dimensionless argument causes that multiple of the default value to be used. \end{key} \begin{key}{/tikz/commutative diagrams/shift right=\meta{dimension} (default 1)} A shortcut to |shift left=-|\meta{dimension}. 
\end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, red, shift left=1.5ex] \arrow[r] \arrow[dr, blue, shift right=1.5ex] \arrow[dr] & B \arrow[d, purple, shift left=1.5ex] \arrow[d]\\ & C \end{tikzcd} \end{codeexample} The default values of |shift left| and |shift right| are appropriate for a pair of parallel arrows, and dimensionless arguments are useful to create sets of multiple parallel arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[r] & B \arrow[r, shift left] \arrow[r, shift right] & C \arrow[r] \arrow[r, shift left=2] \arrow[r, shift right=2] & \cdots \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/shift=\marg{coordinate}} Shifts arrows by \meta{coordinate}. \end{key} \begin{key}{/tikz/commutative diagrams/xshift=\meta{dimension}} Shifts arrows right by \meta{dimension}. \end{key} \begin{key}{/tikz/commutative diagrams/yshift=\meta{dimension}} Shifts arrows up by \meta{dimension}. \end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, yshift=0.7ex] \arrow[r, yshift=-0.7ex] & B \arrow[d, xshift=0.7ex] \arrow[d, xshift=-0.7ex] \\ & C \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/start anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key specifies at which anchor of the source node the arrow should start. Optionally, additional coordinate transformations can be supplied. An empty \meta{anchor} argument causes no anchor to be specified, which is the usual behavior. \end{key} \begin{key}{/tikz/commutative diagrams/end anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key works analogously, but refers to the target node of the arrow. \end{key} See the picture on \S\ref{sec:asymm-rect-shape} for some of the possible values for \meta{anchor}. 
\begin{codeexample}[] \begin{tikzcd}[cells={nodes={draw=gray}}] A \arrow[r, black] \arrow[r, blue, end anchor=north east] \arrow[r, red, start anchor={[xshift=-1ex]}, end anchor={[yshift=2ex]north east}] & B \end{tikzcd} \end{codeexample} \subsection{Three-dimensional diagrams} \label{sec:crossing-over} \begin{stylekey}{/tikz/commutative diagrams/crossing over} This style causes a thicker line, with color |background color|, to be drawn under the current arrow, simulating the effect of its passing over other arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[dr] & B \arrow[dl, crossing over] \\ C & D \end{tikzcd} \end{codeexample} \end{stylekey} Note that, since arrows are drawn in the order they are read, some arrows may need to run ``backwards'' to achieve the desired result. The following picture illustrates this. \begin{codeexample}[] \begin{tikzcd}[row sep=scriptsize, column sep=scriptsize] & f^* E_V \arrow[dl] \arrow[rr] \arrow[dd] & & E_V \arrow[dl] \arrow[dd] \\ f^* E \arrow[rr, crossing over] \arrow[dd] & & E \\ & U \arrow[dl] \arrow[rr] & & V \arrow[dl] \\ M \arrow[rr] & & N \arrow[uu, crossing over, leftarrow]\\ \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/crossing over clearance=\meta{dimension} (initially 1.5ex)} This key specifies the width of the background-colored line drawn under a |crossing over| arrow. \end{key} \subsection{Options for labels} \label{sec:options-labels} \begin{stylekey}{/tikz/commutative diagrams/every label} This style is applied to every label produced with |\arrow|. It is initially set to \begin{quote} |/tikz/auto,|\\ |/tikz/font=|\meta{something}|,|\\ |/tikz/inner sep=0.5ex| \end{quote} where \meta{something} is something that makes |\scriptstyle| be applied to labels in math mode. \end{stylekey} The key |/tikz/auto| makes the label be placed on the left side of the arrow, relative to its direction. The key |/tikz/inner sep| controls the distance between a label and the corresponding arrow.
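As a minimal illustration of these defaults, the first label below is placed by |auto| on the left of the arrow direction, while |swap| moves the second one to the opposite side:
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, "\phi"] & B \arrow[r, "\psi" swap] & C
\end{tikzcd}
\end{codeexample}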
\begin{key}{/tikz/commutative diagrams/labels=\meta{options}} This key appends \meta{options} to |every label|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/description} This style causes the label to be placed over the arrow, with the background filled. The clearance around the label is determined by |/tikz/inner sep|. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi" description] & B \end{tikzcd} \end{codeexample} \end{stylekey} \section{Advanced usage} \label{sec:advanced-usage} This section provides further details on the functioning of this package, with the aim of allowing the advanced user to make a more or less arbitrary use of other \tikzname{} features within |{tikzcd}|. \subsection{Internals of \texttt{tikzcd} and the arrow commands} \label{sec:intern-arrow-comm} The |{tikzcd}| environment works by substituting code of the form \begin{quote} |\begin{tikzcd}[|\meta{options}|]|\\ \hspace*{1.5ex} \meta{contents}\\ |\end{tikzcd}| \end{quote} with roughly the following: \begin{quote} |\begin{tikzpicture} [|\meta{options}|]|\\ \hspace*{1.5ex}| \matrix [matrix of nodes] {|\\ \hspace*{3ex}| |\meta{contents} |\\|\\ \hspace*{1.5ex}| };|\\ \hspace*{1.5ex}| |\meta{paths}\\ |\end{tikzpicture}| \end{quote} Not shown above are a number of initialization procedures, such as defining |\arrow| and its relatives, as well as applying the default settings specified by |every diagram| and its relatives. Note that the next-row command |\\| for the last row is inserted by |{tikzcd}|, and therefore should not be present in \meta{contents}. Notice also that you can use the key |execute at end picture| in \meta{options} to have arbitrary \tikzname{} code executed after a diagram is drawn. Initially, \meta{paths} is the empty string.
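As a minimal illustration of the latter key (the gray frame drawn here is, of course, arbitrary), the following executes extra \tikzname{} code once the diagram is complete:
\begin{codeexample}[]
\begin{tikzcd}[execute at end picture={
    \draw[gray] (current bounding box.south west) rectangle
      (current bounding box.north east);}]
A \arrow[r] \arrow[d] & B \arrow[d] \\
C \arrow[r] & D
\end{tikzcd}
\end{codeexample}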
A command |\arrow[|\meta{options}|]| does nothing at the point it is inserted, and causes the following code to be appended to \meta{paths}: \begin{quote} |\path[|\meta{options}|] (|\meta{current node}|) to (|\meta{target~node}|);| \end{quote} Here, \meta{current node} is the node corresponding to the matrix cell where the command |\arrow| is present. A special |.unknown| key handler is set up to parse direction arguments in \meta{options} and set \meta{target node} accordingly. \subsection{Tweaking \texttt{to} paths} \label{sec:tweaking-to-paths} Recall that the \texttt{to} path operation used in the paths created by |\arrow| can take a number of options, as described in \S\ref*{pgfman-library-to-paths} of the \pgfname{} manual~\cite{pgfman}. In particular, the key |/tikz/to path| determines the path that is actually drawn, and can be used to do all sorts of fiddling. \begin{codeexample}[] \begin{tikzcd} A \arrow[dr, controls={+(1.5,0.5) and +(-1,0.8)}] \arrow[dr, dashed, to path=|- (\tikztotarget)] & \\ & B \arrow[loop right] \end{tikzcd} \end{codeexample} The following example shows how to produce a ``snake'' map. The arrow with the |phantom| option (going from $B$ to $E$) has the sole purpose of creating a coordinate, named |Z|, lying halfway between these two cells. The arrow starting at $C$ has target $D$, so the macros |\tikztostart| and |\tikztotarget| will expand to the nodes corresponding to these two cells in the argument of |to path|. Notice also the use of |\tikztonodes| at the point where we want the label to be inserted. 
\begin{codeexample}[] \begin{tikzcd} A \arrow[r] & B \arrow[r] \arrow[d, phantom, ""{coordinate, name=Z}] & C \arrow[dll, "\delta", rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}] \\ D \arrow[r] & E \arrow[r] & F \end{tikzcd} \end{codeexample} \subsection{Drawing diagrams directly with Ti\emph{k}Z} \label{sec:draw-diagr-directly} If you find that this package is not flexible enough for some particular application, you can use the methods described in \cite{lenders}, \cite{milne} and draw diagrams directly with \tikzname. In this case, you can still use the styles provided here to obtain pictures with a uniform appearance throughout your document. The pictures below show how this can be done (the second one is adapted from \cite{milne}). \begin{codeexample}[] \begin{tikzpicture}[commutative diagrams/every diagram] \matrix[matrix of math nodes, name=m, commutative diagrams/every cell] { X & \bar X \\ & Y \\}; \path[commutative diagrams/.cd, every arrow, every label] (m-1-1) edge[commutative diagrams/hook] (m-1-2) edge[commutative diagrams/dashed] (m-2-2) (m-1-2) edge (m-2-2); \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture}[commutative diagrams/every diagram] \node (P0) at (90:2.3cm) {$X\otimes (Y\otimes (Z\otimes T))$}; \node (P1) at (90+72:2cm) {$X\otimes ((Y\otimes Z)\otimes T))$} ; \node (P2) at (90+2*72:2cm) {\makebox[5ex][r]{$(X\otimes (Y\otimes Z))\otimes T$}}; \node (P3) at (90+3*72:2cm) {\makebox[5ex][l]{$((X\otimes Y)\otimes Z)\otimes T$}}; \node (P4) at (90+4*72:2cm) {$(X\otimes Y)\otimes (Z\otimes T)$}; \path[commutative diagrams/.cd, every arrow, every label] (P0) edge node[swap] {$1\otimes\phi$} (P1) (P1) edge node[swap] {$\phi$} (P2) (P2) edge node {$\phi\otimes 1$} (P3) (P4) edge node {$\phi$} (P3) (P0) edge node {$\phi$} (P4); \end{tikzpicture} \end{codeexample} \subsection{Issues with active ampersand} 
\label{sec:issues-with-active-ampersand} By default, \tikzname{} makes the character |&| active inside matrices, and this causes the error message \begin{quote} |! Package pgfbasematrix Error: Single ampersand used with wrong catcode.| \end{quote} when |{tikzcd}| is used inside the argument to a macro such as a Beamer frame or a footnote. One solution to this problem is to call |{tikzcd}| with the option |ampersand replacement=\&|, and replace all occurrences of |&| with |\&| in the diagram. This procedure is also needed if you want to use matrices in a diagram cell or label. \begin{codeexample}[/tikz/commutative diagrams/diagrams={column sep=large}] \begin{tikzcd}[ampersand replacement=\&] A \oplus B \ar[r, "{\begin{pmatrix} e & f \\ g & h \end{pmatrix}}"] \& C \oplus D \end{tikzcd} \end{codeexample} An alternative fix to this issue that does not require replacing |&| with a different column separator consists in adding the following line to your document after all packages have been loaded: \begin{quote} |\def\temp{&} \catcode`&=\active \let&=\temp| \end{quote} However, this may interfere in unexpected ways with other packages. Use this trick at your own risk. \section{Additional goodies} \label{sec:general-infra} This package provides some general \pgfname\ infrastructure to achieve its goals. These additional goodies are documented in this section. \subsection{The \texttt{asymmetrical rectangle} shape} \label{sec:asymm-rect-shape} The following shape is used inside |{tikzcd}| to ensure that arrows between nodes in the same row are perfectly horizontal, even if the nodes contain text with different heights and depths. \begin{shape}{asymmetrical rectangle} This shape is similar to the |rectangle| shape, but its |center| is located at a fixed distance of the |base|, as determined by the |center yshift| key, rather than lying at the shape's geometric center. 
The numerical anchors, as well as |east| and |west|, are modified accordingly, and there are anchors called |real center|, |real east|, and |real west| matching |rectangle|'s original definitions. All other anchors provided by |rectangle| are available and remain unmodified. \end{shape} \begin{key}{/tikz/commutative diagrams/center yshift=\meta{dimension} (initially axis\_height)} Determines the distance between |asymmetrical rectangle|'s |base| and |center| anchors. \end{key} The picture below shows some of the available anchors. \begin{center}\Huge \begin{tikzpicture} \node[name=s,shape=asymmetrical rectangle,shape example] {Asymmetrical rectangle\vrule width 1pt height 2cm}; \foreach \anchor/\placement in {north west/above left, north/above, north east/above right, west/left, center/right, east/right, real west/left, real center/right, real east/right, base west/left, base/right, base east/right, south west/below left, south/below, south east/below right, text/left, 10/right, 130/above, 230/below, -10/below} \draw[shift=(s.\anchor)] plot[mark=x] coordinates{(0,0)} node[\placement] {\scriptsize\texttt{\anchor}}; \end{tikzpicture} \end{center} \subsection{Reading font parameters} \label{sec:read-font-param} The following are |pgfmath| functions used to access relevant math font parameters. They take no arguments, but the result depends on the currently selected font size. \begin{math-function}{axis\_height} Returns the axis height parameter (a.k.a.\ $\sigma_{22}$) of the document's math font. \end{math-function} \begin{math-function}{rule\_thickness} Returns the fraction rule thickness (a.k.a.\ $\xi_8$) of the document's math font. \end{math-function} \subsection{Computer Modern arrow tips} \label{sec:comp-modern-arrow} The following arrow tips mimic the Computer Modern designs.
It is useful to know that at size 10\,pt, the Computer Modern arrow stems are 0.4\,pt thick; for larger font sizes, this parameter should be scaled accordingly, or replaced by the equivalent figure 0.0928\,ex. Notice that by using the mechanism explained in \S\ref{sec:changing-arrow-tips}, it is not necessary, and in fact not advisable, to directly refer to the arrow tips listed in this section inside a |{tikzcd}|. \begin{multicols}{2}\raggedcolumns \begin{tabular}{ll} \displayarrow{cm to}\\ \displayarrow[/tikz/commutative diagrams/double line]{cm implies}\\ \displayarrow[line width=1.5*rule_thickness]{cm bold to}\\ \displayarrow{cm double to}\\ \displayarrow{cm to reversed}\\ \end{tabular} \begin{tabular}{ll} \displayarrow{cm bar}\\ \displayarrow{cm left to}\\ \displayarrow{cm right to}\\ \displayarrow{cm left hook}\\ \displayarrow{cm right hook}\\ \end{tabular} \end{multicols} Incidentally, \tikzname's original \texttt{to} arrow tip seems to be based on the pre-1992 version of Computer Modern, which, in spite of its author's wish \cite{knuth}, can still be found in many systems. \TeX{}Live, for instance, distributed the old version up until 2007 or 2008. Therefore, an up-to-date \TeX{} distribution may be necessary to get matching arrows in formulas and diagrams. \subsection{Glyph arrow tips} \label{sec:font-arrow-tips} As an attempt to provide a general solution to the problem of having matching arrow tips in text and pictures, this feature produces arrow tips that consist of (pieces of) font glyphs carefully placed at the endpoints of the path. To activate it in |{tikzcd}| diagrams, refer to the |arrow style| key. \begin{arrowtipsimple}{Glyph} An arrow tip made from a piece of text. It accepts the following parameters. \begin{key}{/pgf/arrow keys/glyph math command=\meta{name}} The name of a command (to be used inside |$\csname| \dots |\endcsname$|) producing the desired glyph. 
\end{key} \begin{key}{/pgf/arrow keys/glyph length=\meta{dimension} (initially 1ex)} The length of the portion of the glyph not clipped away. Also used to set the `tip end' parameter. \end{key} \begin{key}{/pgf/arrow keys/glyph axis=\meta{dimension} (initially axis\_height)} A vertical displacement applied to the glyph in order to make the glyph's central axis (typically an arrow stem) aligned with the path. \end{key} \begin{key}{/pgf/arrow keys/glyph shorten=\meta{dimension} (initially -0.1ex)} An additional amount by which the end of the path is shortened. This is used to compensate for the gap that usually exists between the tip end and the glyph's bounding box. \end{key} \end{arrowtipsimple} Below are some usage examples. Notice that glyph arrow tips do not scale with \pgfname{} line width; instead, their size depends on the current font size, so you will probably want to set \texttt{line width=rule\_thickness} when using them. Also, contrary to the arrow parameters defined by the \texttt{arrows.meta} library, the parameters described above are evaluated only at the time the arrow tip is drawn, so they can (and should) be given in the units em or ex. \begin{codeexample}[] \tikzset{ math to/.tip={Glyph[glyph math command=rightarrow]}, loop/.tip={Glyph[glyph math command=looparrowleft, swap]}, weird/.tip={Glyph[glyph math command=Rrightarrow, glyph length=1.5ex]}, pi/.tip={Glyph[glyph math command=pi, glyph length=1.5ex, glyph axis=0pt]}, } \begin{tikzpicture}[line width=rule_thickness] \draw[loop-math to, bend left] (0,2) to (1,2); \draw[math to-weird] (0,1) to (1,1); \draw[pi-pi] (0,0) to (1,0); \end{tikzpicture} \end{codeexample} It is important to be aware of some drawbacks of this feature. First, the transition between a line and the arrow tip may become visible with some printers (especially in low resolutions or draft mode) and document viewers, as you may be able to see in the samples above.
Second, these rather long tips may (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (3.4ex,0);) or may not (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (4ex,0);) fit nicely with dashed or curved lines. Finally, the method used to place the arrow tip at the end of a stroked path and clip away the arrow stem makes certain assumptions about the font design and could fail in cases where unusual design choices are made. \begin{thebibliography}{9} \bibitem{knuth} Donald Knuth, \emph{Important message to all users of \TeX}. Available at \url{http://www-cs-staff.stanford.edu/~uno/cm.html}. \bibitem{lenders} Felix Lenders, \emph{Commutative diagrams using \tikzname}. Available at \url{http://www.felixl.de/commu.pdf}. \bibitem{milne} James Milne, \emph{Guide to commutative diagrams}. Available at \url{http://www.jmilne.org/not/CDGuide.html}. \bibitem{pgfman} Till Tantau, \emph{The \tikzname{} and \pgfname{} packages: Manual for version 3.0.0}. Available at \url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}.
\end{thebibliography} \printindex \end{document} \edef\tikzatcode{\the\catcode`\@} \catcode`\@=11 \input pgf.tex \input pgffor.tex \input tikz.code.tex \catcode`\@=\tikzatcode \endinput \ProvidesFileRCS[v\pgfversion] $Header: /context/mirror/tikz/cvs/pgf/generic/pgf/frontendlayer/tikz/libraries/tikzlibraryquotes.code.tex,v 1.4 2014/03/21 19:52:38 tantau Exp $ \def\tikz@quote@parser#1{\tikz@quote@@parser#1\pgf@stop} \def\tikz@quote@@parser"#1"{ \pgfutil@ifnextchar\bgroup{ \tikz@quote@@parser@group{#1}}{ \pgfutil@ifnextchar'{ \tikz@quote@@parser@apo{#1}}{ \tikz@quote@@parser@normal{#1}}}} \def\tikz@quote@@parser@apo#1'{ \pgfutil@ifnextchar\bgroup{\tikz@quote@@parser@apo@group{#1}}{\tikz@quote@@parser@apo@normal{#1}}} \def\tikz@quote@@parser@group#1#2#3\pgf@stop{ \expandafter\def\expandafter\tikz@temp\expandafter{\tikz@quotes@as{#1}{#2}} \expandafter\pgfkeysalso\expandafter{\tikz@temp}} \def\tikz@quote@@parser@normal#1#2\pgf@stop{ \expandafter\def\expandafter\tikz@temp\expandafter{\tikz@quotes@as{#1}{#2}} \expandafter\pgfkeysalso\expandafter{\tikz@temp}} \def\tikz@quote@@parser@apo@group#1#2#3\pgf@stop{ \expandafter\def\expandafter\tikz@temp\expandafter{\tikz@quotes@as{#1}{',#2}} \expandafter\pgfkeysalso\expandafter{\tikz@temp}} \def\tikz@quote@@parser@apo@normal#1#2\pgf@stop{ \expandafter\def\expandafter\tikz@temp\expandafter{\tikz@quotes@as{#1}{',#2}} \expandafter\pgfkeysalso\expandafter{\tikz@temp}} \pgfkeys{/handlers/first char syntax=true} \def\tikz@enable@node@quotes{ \pgfkeyssetvalue{/handlers/first char syntax/the character "}{\tikz@quote@parser} \let\tikz@quotes@as\tikz@node@quotes@as} \def\tikz@enable@edge@quotes{ \pgfkeyssetvalue{/handlers/first char syntax/the character "}{\tikz@quote@parser} \let\tikz@quotes@as\tikz@edge@quotes@as} \def\tikz@enable@pic@quotes{ \pgfkeyssetvalue{/handlers/first char syntax/the character "}{\tikz@quote@parser} \let\tikz@quotes@as\tikz@pic@quotes@as} \tikzset{ node quotes mean/.code={\def\tikz@node@quotes@as##1##2{#1}}, edge 
quotes mean/.code={\def\tikz@edge@quotes@as##1##2{#1}}, pic quotes mean/.code={\def\tikz@pic@quotes@as##1##2{#1}}, quotes mean pin/.style={node quotes mean={ pin={[direction shorthands,every pin quotes/.try,##2]##1}}}, quotes mean label/.style={node quotes mean={ label={[direction shorthands,every label quotes/.try,##2]##1}}}, quotes mean label, edge quotes mean={edge node={node [every edge quotes,#2]{#1}}}, pic quotes mean={pic text={#1},pic text options={every pic quotes/.try,#2}}, every edge quotes/.style={auto}, direction shorthands/.code={ \pgfkeyslet{/tikz/centered/.@cmd}\tikz@label@@centered \pgfkeyslet{/tikz/above/.@cmd}\tikz@label@@above \pgfkeyslet{/tikz/below/.@cmd}\tikz@label@@below \pgfkeyslet{/tikz/left/.@cmd}\tikz@label@@left \pgfkeyslet{/tikz/right/.@cmd}\tikz@label@@right \pgfkeyslet{/tikz/above left/.@cmd}\tikz@label@@above@left \pgfkeyslet{/tikz/above right/.@cmd}\tikz@label@@above@right \pgfkeyslet{/tikz/below left/.@cmd}\tikz@label@@below@left \pgfkeyslet{/tikz/below right/.@cmd}\tikz@label@@below@right } } \def\tikz@label@@centered#1\pgfeov{\pgfkeysalso{label position=center,pin position=center}} \def\tikz@label@@above#1\pgfeov{\pgfkeysalso{label position=90,pin position=90}} \def\tikz@label@@below#1\pgfeov{\pgfkeysalso{label position=-90,pin position=-90}} \def\tikz@label@@left#1\pgfeov{\pgfkeysalso{label position=180,pin position=180}} \def\tikz@label@@right#1\pgfeov{\pgfkeysalso{label position=0,pin position=0}} \def\tikz@label@@above@left#1\pgfeov{\pgfkeysalso{label position=135,pin position=135}} \def\tikz@label@@below@left#1\pgfeov{\pgfkeysalso{label position=-135,pin position=-135}} \def\tikz@label@@above@right#1\pgfeov{\pgfkeysalso{label position=45,pin position=45}} \def\tikz@label@@below@right#1\pgfeov{\pgfkeysalso{label position=-45,pin position=-45}} \endinput \usetikzlibrary{matrix,quotes,arrows.meta} \newif\iftikzcd@mathmode \def\tikzcdset{\pgfqkeys{/tikz/commutative diagrams}} \tikzcdset{ arrows/.code={\tikzcdset{every 
arrow/.append style={#1}}}, labels/.code={\tikzcdset{every label/.append style={#1}}}, cells/.code={\tikzcdset{every cell/.append style={#1}}}, diagrams/.code={\tikzcdset{every diagram/.append style={#1}}}, execute before arrows/.code={\expandafter\def\expandafter\tikzcd@before@paths@hook\expandafter{\tikzcd@before@paths@hook#1}}, to/.code={\tikzcd@setarrowend\tikzcd@ar@target{#1}}, from/.code={\tikzcd@setarrowend\tikzcd@ar@start{#1}}, description/.style={ /tikz/anchor=center, /tikz/fill=\pgfkeysvalueof{/tikz/commutative diagrams/background color}}, phantom/.style={ /tikz/draw=none, /tikz/commutative diagrams/labels={ /tikz/font=, /tikz/anchor=center}}, crossing over/.style={ /tikz/preaction={ /tikz/draw=\pgfkeysvalueof{/tikz/commutative diagrams/background color}, /tikz/arrows=-, /tikz/line width=\pgfkeysvalueof{/tikz/commutative diagrams/crossing over clearance}}}, cramped/.code={\tikzcdset{ every matrix/.append style={inner sep=+-0.3em}, every cell/.append style={inner sep=+0.3em}}}, row sep/.code={\tikzcd@sep{row}{#1}}, column sep/.code={\tikzcd@sep{column}{#1}}, sep/.code={\tikzcdset{row sep={#1},column sep={#1}}}, math mode/.is if=tikzcd@mathmode, arrow style/.is choice} \def\tikzcd@sep#1#2{ \pgfkeysifdefined{/tikz/commutative diagrams/#1 sep/#2} {\pgfkeysgetvalue{/tikz/commutative diagrams/#1 sep/#2}\tikzcd@temp \pgfkeysalso{/tikz/#1 sep/.expand once=\tikzcd@temp}} {\pgfkeysalso{/tikz/#1 sep={#2}}}} \def\tikzcd@setarrowend#1#2{ \pgfutil@ifundefined{pgf@sh@ns@\tikzcdmatrixname-#2} { \c@pgf@counta=0 \c@pgf@countb=0 \let\tikzcd@temp=\tikzcd@parse \expandafter\tikzcd@temp#2\relax \ifx\tikzcd@temp\pgfutil@empty \advance\c@pgf@counta by\tikzcd@currentrow \advance\c@pgf@countb by\tikzcd@currentcolumn \edef#1{\tikzcdmatrixname-\the\c@pgf@counta-\the\c@pgf@countb} \else \def#1{#2} \fi}{ \def#1{\tikzcdmatrixname-#2} }} \tikzcdset{ .unknown/.code={ \ifpgfkeysaddeddefaultpath \c@pgf@counta=0 \c@pgf@countb=0 \let\tikzcd@temp=\tikzcd@parse
\expandafter\tikzcd@temp\pgfkeyscurrentname\relax \ifx\tikzcd@temp\pgfutil@empty \advance\c@pgf@counta by\tikzcd@currentrow \advance\c@pgf@countb by\tikzcd@currentcolumn \edef\tikzcd@ar@target{\tikzcdmatrixname-\the\c@pgf@counta-\the\c@pgf@countb} \else \pgfqkeys{/tikz}{\pgfkeyscurrentname={#1}} \else \def\pgfutilnext{\pgfkeysvalueof{/handlers/.unknown/.@cmd}#1\pgfeov}\pgfutilnext}} \fi\fi\fi% \tikzcd@temp} \def\tikzcd{ \let\arrow\tikzcd@arrow \let\ar\tikzcd@arrow \def\rar{\tikzcd@xar{r}} \def\lar{\tikzcd@xar{l}} \def\dar{\tikzcd@xar{d}} \def\uar{\tikzcd@xar{u}} \def\urar{\tikzcd@xar{ur}} \def\ular{\tikzcd@xar{ul}} \def\drar{\tikzcd@xar{dr}} \def\dlar{\tikzcd@xar{dl}} \global\let\tikzcd@savedpaths\pgfutil@empty \matrix[ nodes, /tikz/every cell/.append code={\tikzcdset{every cell}}, /tikz/commutative diagrams/.cd,every matrix] \bgroup} \def\endtikzcd{ \pgfmatrixendrow\egroup \pgfextra{\global\let\tikzcdmatrixname\tikzlastnode}; \tikzcdset{\the\pgfmatrixcurrentrow-row diagram/.try} \begingroup \pgfkeys{ /handlers/first char syntax/the character "/.initial=\tikzcd@forward@quotes, /tikz/edge quotes mean={, , /tikz/commutative diagrams/.cd,every label,##2]{##1}}}} \let\tikzcd@errmessage\errmessage \def\errmessage##1{\tikzcd@errmessage{##1^^J...^^Jl.\tikzcd@lineno\space I think the culprit is a tikzcd arrow in cell \tikzcd@currentrow-\tikzcd@currentcolumn}} \tikzcd@before@paths@hook \tikzcd@savedpaths \endgroup \endtikzpicture} \def\tikzcd@arrow{ \relax \edef\tikzcd@temp{ \noexpand\def\noexpand\tikzcd@currentcolumn{\the\pgfmatrixcurrentcolumn} \noexpand\def\noexpand\tikzcd@currentrow{\the\pgfmatrixcurrentrow} \noexpand\def\noexpand\tikzcd@lineno{\the\inputlineno}} \expandafter\pgfutil@g@addto@macro\expandafter\tikzcd@savedpaths\expandafter{\tikzcd@temp} \pgfutil@ifnextchar[{\tikzcd@handle@shortcuts@next\tikzcd@@arrow}{\tikzcd@ar@old[]}} \def\tikzcd@@arrow[#1]{\pgfutil@ifnextchar\bgroup{\tikzcd@ar@old[#1]}{\tikzcd@ar@new[#1]}} \def\tikzcd@ar@new[#1]{ 
\pgfutil@g@addto@macro\tikzcd@savedpaths{ \path[/tikz/commutative diagrams/.cd,every arrow,#1] (\tikzcd@ar@start\tikzcd@startanchor) to (\tikzcd@ar@target\tikzcd@endanchor); }} \def\tikzcd@ar@old[#1]#2{ \pgfutil@ifnextchar[ {\tikzcd@handle@shortcuts@next\tikzcd@ar@getlabel{to={#2},#1}} {\pgfutil@ifnextchar\bgroup {\tikzcd@ar@getlabel{to={#2},#1}[]} {\tikzcd@ar@new[to={#2},#1]}}} \def\tikzcd@ar@getlabel#1[#2]#3{ \pgfutil@ifnextchar[ {\tikzcd@handle@shortcuts@next\tikzcd@ar@getlabel{#1,"{#3}"{#2}}} {\pgfutil@ifnextchar\bgroup {\tikzcd@ar@getlabel{#1,"{#3}"{#2}}[]} {\tikzcd@ar@new[#1,"{#3}"{#2}]}}} \def\tikzcd@xar#1{\relax\pgfutil@ifnextchar[{\tikzcd@handle@shortcuts@next\tikzcd@@xar{#1}}{\tikzcd@arrow[]{#1}}} \def\tikzcd@@xar#1[#2]{\tikzcd@arrow[#2]{#1}} \def\tikzcd@handle@shortcuts@next{ \iftikz@handle@active@code \begingroup \tikz@switchoff@shorthands\expandafter \endgroup\expandafter\fi} \def\tikzcd@ar@target{\tikzcdmatrixname-\tikzcd@currentrow-\tikzcd@currentcolumn} \def\tikzcd@ar@start{\tikzcdmatrixname-\tikzcd@currentrow-\tikzcd@currentcolumn} \def\tikzcd@forward@quotes#1{\tikzset{every to/.append style={#1}}} \let\tikzcd@before@paths@hook\pgfutil@empty \def\tikzcd@setanchor#1[#2]#3\relax{ \ifx\relax#2\relax\else \tikzcdset{@#1transform/.append style={#2},@shiftabletopath} \fi \ifx\relax#3\relax \pgfutil@namelet{tikzcd@#1anchor}{pgfutil@empty} \else \pgfutil@namedef{tikzcd@#1anchor}{.#3}\fi} \tikzcdset{ @shiftabletopath/.style={ /tikz/execute at begin to={ \begingroup \def\tikz@tonodes{coordinate[pos=0,commutative diagrams/@starttransform/.try](tikzcd@nodea) coordinate[pos=1,commutative diagrams/@endtransform/.try](tikzcd@nodeb)} \path (\tikztostart) \tikz@to@path; \endgroup \def\tikztostart{tikzcd@nodea} \def\tikztotarget{tikzcd@nodeb} \tikzset{insert path={(tikzcd@nodea)}}}, /tikz/commutative diagrams/@shiftabletopath/.code={}}, start anchor/.code={ \pgfutil@ifnextchar[{\tikzcd@setanchor{start}}{\tikzcd@setanchor{start}[]}#1\relax}, end anchor/.code={
\pgfutil@ifnextchar[{\tikzcd@setanchor{end}}{\tikzcd@setanchor{end}[]}#1\relax}, start anchor=, end anchor=, shift left/.style={ /tikz/commutative diagrams/@shiftabletopath, /tikz/execute at begin to={ \pgfpointnormalised{ \pgfpointdiff{\pgfpointanchor{tikzcd@nodeb}{center}}{\pgfpointanchor{tikzcd@nodea}{center}}} \pgfgetlastxy{\tikzcd@x}{\tikzcd@y} \pgfmathparse{#1} \ifpgfmathunitsdeclared\else \pgfmathparse{\pgfmathresult*\pgfkeysvalueof{/tikz/commutative diagrams/shift left/.@def}} \fi \coordinate (tikzcd@nodea) at ([shift={(\pgfmathresult*\tikzcd@y,-\pgfmathresult*\tikzcd@x)}]tikzcd@nodea); \coordinate (tikzcd@nodeb) at ([shift={(\pgfmathresult*\tikzcd@y,-\pgfmathresult*\tikzcd@x)}]tikzcd@nodeb); \tikzset{insert path={(tikzcd@nodea)}}}}, shift right/.style={ /tikz/commutative diagrams/shift left={-(#1)}}, transform nodes/.style={ /tikz/commutative diagrams/@shiftabletopath, /tikz/commutative diagrams/@starttransform/.append style={#1}, /tikz/commutative diagrams/@endtransform/.append style={#1}}, shift/.style={ /tikz/shift={#1}, /tikz/commutative diagrams/transform nodes={/tikz/shift={#1}}}, xshift/.style={ /tikz/xshift={#1}, /tikz/commutative diagrams/transform nodes={/tikz/xshift={#1}}}, yshift/.style={ /tikz/yshift={#1}, /tikz/commutative diagrams/transform nodes={/tikz/yshift={#1}}}} \pgfutil@ifluatex \directlua{tex.enableprimitives('tikzcd@', {'Umathaxis', 'Umathfractionrule'})} \pgfmathdeclarefunction{axis_height}{0}{ \begingroup $\relax$ \pgfmathreturn\the\tikzcd@Umathaxis\textstyle \endgroup} \pgfmathdeclarefunction{rule_thickness}{0}{ \begingroup $\relax$ \pgfmathreturn\the\tikzcd@Umathfractionrule\textstyle \endgroup} \else \pgfmathdeclarefunction{axis_height}{0}{ \begingroup $\relax$ \pgfmathreturn\the\fontdimen22\textfont2 \endgroup} \pgfmathdeclarefunction{rule_thickness}{0}{ \begingroup $\relax$ \pgfmathreturn\the\fontdimen8\textfont3 \endgroup} \fi \pgfdeclareshape{asymmetrical rectangle} { \inheritsavedanchors[from={rectangle}]
\inheritanchor[from={rectangle}]{base} \inheritanchor[from={rectangle}]{north} \inheritanchor[from={rectangle}]{south} \inheritanchor[from={rectangle}]{base west} \inheritanchor[from={rectangle}]{north west} \inheritanchor[from={rectangle}]{south west} \inheritanchor[from={rectangle}]{base east} \inheritanchor[from={rectangle}]{north east} \inheritanchor[from={rectangle}]{south east} \inheritanchor[from={rectangle}]{mid} \inheritanchor[from={rectangle}]{mid west} \inheritanchor[from={rectangle}]{mid east} \inheritbackgroundpath[from={rectangle}] \anchor{center}{\pgf@anchor@rectangle@center\pgfmathsetlength\pgf@y {\pgfkeysvalueof{/tikz/commutative diagrams/center yshift}}} \anchor{west}{\pgf@anchor@rectangle@west\pgfmathsetlength\pgf@y {\pgfkeysvalueof{/tikz/commutative diagrams/center yshift}}} \anchor{east}{\pgf@anchor@rectangle@east\pgfmathsetlength\pgf@y {\pgfkeysvalueof{/tikz/commutative diagrams/center yshift}}} \anchor{real center}{\pgf@anchor@rectangle@center} \anchor{real west}{\pgf@anchor@rectangle@west} \anchor{real east}{\pgf@anchor@rectangle@east} \anchorborder{ \pgfmathsetlength\pgfutil@tempdima {\pgfkeysvalueof{/tikz/commutative diagrams/center yshift}} \pgf@xb=\pgf@x \pgf@yb=\pgf@y \southwest \pgf@xa=\pgf@x \pgf@ya=\pgf@y \northeast \advance\pgf@x by-\pgf@xa \advance\pgf@y by-\pgf@ya \pgf@xc=.5\pgf@x \pgf@yc=.5\pgf@y \advance\pgf@xa by\pgf@xc \advance\pgf@ya by\pgf@yc \ifdim\pgf@yb>0pt \northeast \pgf@yc=\pgf@y \advance\pgf@yc by-\pgfutil@tempdima \else \southwest \pgf@yc=-\pgf@y \advance\pgf@yc by\pgfutil@tempdima \fi \edef\pgf@marshal{ \noexpand\pgfpointborderrectangle {\noexpand\pgfqpoint{\the\pgf@xb}{\the\pgf@yb}} {\noexpand\pgfqpoint{\the\pgf@xc}{\the\pgf@yc}} } \pgf@process{\pgf@marshal} \advance\pgf@x by\pgf@xa \advance\pgf@y by\pgfutil@tempdima }} \pgfkeys{ cm to/.tip={Computer Modern Rightarrow[length=+0pt 6.2]}, cm bold to/.tip={cm to[scale=0.667]}, cm double to/.tip={cm to[sep=+0pt -2.6]cm to}, cm bar/.tip={Bar[width=+0pt 8.2 0.89,line
cap=round]}, cm left hook/.tip={Hooks[width=+0pt 10.8,length=+0pt 3.6,harpoon,line cap=round]}, cm implies/.tip={Implies}, cm left to/.tip={cm to[left]}, cm right to/.tip={cm to[right]}, cm right hook/.tip={cm left hook[swap]}, cm to reversed/.tip={cm to[reversed]}, cm */.tip={Circle[length=+0pt 7.6]}, cm o/.tip={cm *[open]}, cm |/.tip={cm bar}} \pgfqkeys{/pgf/arrow keys}{ glyph math command/.code={ \pgfarrowsaddtooptions{\def\tikzcd@glyph{$\begingroup\expandafter\endgroup\csname #1\endcsname$}}}, glyph axis/.code={\pgfarrowsaddtooptions{\pgfmathsetlengthmacro\tikzcd@glyph@axis{#1}}}, glyph length/.code={\pgfarrowsaddtooptions{\pgfmathsetlengthmacro\tikzcd@glyph@len{#1}}}, glyph shorten/.code={\pgfarrowsaddtooptions{\pgfmathsetlengthmacro\tikzcd@glyph@shorten{#1}}}} \pgfdeclarearrow{ name=Glyph, cache=false, bending mode=none, parameters={\tikzcd@glyph@len,\tikzcd@glyph@shorten}, setup code={ \pgfarrowssettipend{\tikzcd@glyph@len\advance\pgf@x by\tikzcd@glyph@shorten}}, defaults={ glyph axis=axis_height, glyph length=+0.9ex, glyph shorten=+-0.1ex}, drawing code={ \pgfpathrectangle{\pgfpoint{+0pt}{+-1ex}}{\pgfpoint{+\tikzcd@glyph@len}{+2ex}} \pgfusepathqclip \pgftransformxshift{+\tikzcd@glyph@len} \pgftransformyshift{+-\tikzcd@glyph@axis} \pgftext[right,base]{\tikzcd@glyph}}} \tikzcdset{ double line/.code={\tikzset{ double equal sign distance, double=\pgfkeysvalueof{/tikz/commutative diagrams/background color}}}, dashed/.style={/tikz/dash pattern={on 7\pgflinewidth off 4\pgflinewidth}}, tikzcd to/.tip={cm to}, tikzcd bar/.tip={cm bar}, tikzcd left hook/.tip={cm left hook}, tikzcd double to/.tip={cm double to}, tikzcd implies/.tip={Implies}, tikzcd right hook/.tip={tikzcd left hook[swap]}, tikzcd left to/.tip={tikzcd to[harpoon]}, tikzcd right to/.tip={tikzcd left to[swap]}, tikzcd to reversed/.tip={tikzcd to[reversed]}, tikzcd cap/.tip={}, tikzcd implies cap/.tip={}, tikzcd implies bar/.tip={tikzcd bar}, no tail/.code={\pgfsetarrowsstart{tikzcd cap}}, to 
head/.code={\pgfsetarrowsend{tikzcd to}}, maps to/.code={\pgfsetarrowsstart{tikzcd bar}}, hook/.code={\pgfsetarrowsstart{tikzcd right hook}}, hook'/.code={\pgfsetarrowsstart{tikzcd left hook}}, harpoon/.code={\pgfsetarrowsend{tikzcd left to}}, harpoon'/.code={\pgfsetarrowsend{tikzcd right to}}, two heads/.code={\pgfsetarrowsend{tikzcd double to}}, tail/.code={\pgfsetarrowsstart{tikzcd to reversed}}, rightarrow/.code={\pgfsetarrows{tikzcd cap-tikzcd to}}, Rightarrow/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies cap-tikzcd implies}}, leftarrow/.code={\pgfsetarrows{tikzcd to-tikzcd cap}}, Leftarrow/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies-tikzcd implies cap}}, leftrightarrow/.code={\pgfsetarrows{tikzcd to-tikzcd to}}, Leftrightarrow/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies-tikzcd implies}}, mapsto/.code={\pgfsetarrows{tikzcd bar-tikzcd to}}, mapsfrom/.code={\pgfsetarrows{tikzcd to-tikzcd bar}}, Mapsto/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies bar-tikzcd implies}}, Mapsfrom/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies-tikzcd implies bar}}, hookrightarrow/.code={\pgfsetarrows{tikzcd right hook-tikzcd to}}, hookleftarrow/.code={\pgfsetarrows{tikzcd to-tikzcd left hook}}, rightharpoonup/.code={\pgfsetarrows{tikzcd cap-tikzcd left to}}, rightharpoondown/.code={\pgfsetarrows{tikzcd cap-tikzcd right to}}, leftharpoonup/.code={\pgfsetarrows{tikzcd right to-tikzcd cap}}, leftharpoondown/.code={\pgfsetarrows{tikzcd left to-tikzcd cap}}, rightarrowtail/.code={\pgfsetarrows{tikzcd to reversed-tikzcd to}}, leftarrowtail/.code={\pgfsetarrows{tikzcd to-tikzcd to reversed}}, twoheadrightarrow/.code={\pgfsetarrows{tikzcd cap-tikzcd double to}}, twoheadleftarrow/.code={\pgfsetarrows{tikzcd double to-tikzcd cap}}, no head/.code={\pgfsetarrowsend{tikzcd cap}}, dash/.code={\pgfsetarrows{tikzcd cap-tikzcd cap}}, dashrightarrow/.code={\tikzcdset{rightarrow,dashed}}, 
dashleftarrow/.code={\tikzcdset{leftarrow,dashed}}, equal/.code={\tikzcdset{double line}\pgfsetarrows{tikzcd implies cap-tikzcd implies cap}}, equals/.code={\tikzcdset{equal}}, rightsquigarrow/.code={\tikzcdset{rightarrow,squiggly}}, leftsquigarrow/.code={\tikzcdset{leftarrow,squiggly}}, leftrightsquigarrow/.code={\tikzcdset{leftrightarrow,squiggly}}, squiggly/.code={ \pgfutil@ifundefined{tikz@library@decorations.pathmorphing@loaded} {\pgfutil@packageerror{tikz-cd}{You need to say \string\usetikzlibrary{decorations.pathmorphing} to use squiggly arrows}{}}{} \pgfkeysalso{ /tikz/decorate, /tikz/decoration={ zigzag, segment length=9.25\pgflinewidth, amplitude=1.9\pgflinewidth, post=lineto, post length=6\pgflinewidth, pre=lineto, pre length=6\pgflinewidth, #1}}}} \pgfkeysdef{/tikz/commutative diagrams/arrow style/Latin Modern}{ \tikzcdset{ tikzcd bar/.tip={Bar[width=+0pt 12 .992,line cap=round]}, tikzcd left hook/.tip={Hooks[width=+0pt 15,length=+0pt 4.2,arc=190,harpoon,line cap=round]}}} \pgfkeysdef{/tikz/commutative diagrams/arrow style/tikz}{ \tikzcdset{ tikzcd to/.tip={>}, tikzcd bar/.tip={Bar[width=+3pt 4 0.9]}, tikzcd left hook/.tip={Hooks[harpoon]}, tikzcd double to/.tip={tikzcd to[]tikzcd to}}} \pgfkeysdef{/tikz/commutative diagrams/arrow style/math font}{ \tikzcdset{ tikzcd to/.tip={Glyph[glyph math command=rightarrow]}, tikzcd cap/.tip={Glyph[glyph math command=leftarrow]}, tikzcd to reversed/.tip={Glyph[glyph math command=leftarrowtail]}, tikzcd bar/.tip={Glyph[glyph math command=mapsfrom]}, tikzcd left hook/.tip={Glyph[glyph math command=hookleftarrow]}, tikzcd right hook/.tip={Glyph[glyph math command=hookleftarrow, swap]}, tikzcd implies/.tip={Glyph[glyph math command=Rightarrow, glyph length=1.2ex]}, tikzcd implies cap/.tip={Glyph[glyph math command=Leftarrow]}, tikzcd implies bar/.tip={Glyph[glyph math command=Mapsfrom]}, tikzcd double to/.tip={Glyph[glyph math command=twoheadrightarrow, glyph length=1.4ex]}, tikzcd left to/.tip={Glyph[glyph math
command=rightharpoonup]}, tikzcd right to/.tip={Glyph[glyph math command=rightharpoonup,swap]}, double line/.append code={\tikzset{double distance={2*(height("$=$")-axis_height-rule_thickness)}}}, dashed/.code={\tikzset{dash pattern=on 0.8ex off 0.4ex, dash phase=0.8ex}}, squiggly/.default={pre length=1ex, post length=1ex}}} \tikzcdset{ every arrow/.style={ /tikz/draw, /tikz/line width=rule_thickness, /tikz/commutative diagrams/rightarrow}, every label/.style={ /tikz/auto, /tikz/font=\everymath\expandafter{\the\everymath\scriptstyle}, /tikz/inner sep=+0.5ex}, every cell/.style={ /tikz/shape={asymmetrical rectangle}, /tikz/inner xsep=+1ex, /tikz/inner ysep=+0.85ex}, every matrix/.style={/tikz/inner sep=+0pt}, every diagram/.style={ /tikz/commutative diagrams/row sep=normal, /tikz/commutative diagrams/column sep=normal, /tikz/baseline=+0pt}, 1-row diagram/.style={/tikz/baseline/.expanded=(\tikzcdmatrixname.base)}, math mode=true, center yshift/.initial=axis_height, row sep/huge/.initial=+3.6em, row sep/large/.initial=+2.7em, row sep/normal/.initial=+1.8em, row sep/scriptsize/.initial=+1.35em, row sep/small/.initial=+0.9em, row sep/tiny/.initial=+0.45em, column sep/huge/.initial=+4.8em, column sep/large/.initial=+3.6em, column sep/normal/.initial=+2.4em, column sep/scriptsize/.initial=+1.8em, column sep/small/.initial=+1.2em, column sep/tiny/.initial=+0.6em, crossing over clearance/.initial=+1.5ex, shift left/.default=+0.56ex, shift right/.default=1, background color/.initial=white} \pgfutil@IfUndefined{starttikzpicture}{}{ \def\starttikzcd{\tikzcd} \def\stoptikzcd{\endtikzcd} \tikzcdset{ every matrix/.append code={ \def\NC{\pgfmatrixnextcell} \def\NR{\pgfmatrixendrow}}} } \endinput \def\pgfautoxrefs{1} \documentclass[a4paper]{ltxdoc} \def\xcolorversion{2.00} \usepackage[version=latest]{pgf} \usepackage{xkeyval,calc,listings,tikz,fp} \usepackage{hyperref} \hypersetup{ colorlinks=false, linkcolor=blue, filecolor=blue, pagecolor=blue, urlcolor=blue, citecolor=blue, 
pdfborder=0 0 0, } \usetikzlibrary{ arrows, arrows.meta, calc, fit, patterns, plotmarks, shapes.geometric, shapes.misc, shapes.symbols, shapes.arrows, shapes.callouts, shapes.multipart, shapes.gates.logic.US, shapes.gates.logic.IEC, circuits.logic.US, circuits.logic.IEC, circuits.logic.CDH, circuits.ee.IEC, datavisualization, datavisualization.formats.functions, er, automata, backgrounds, chains, topaths, trees, petri, mindmap, matrix, calendar, folding, fadings, shadings, spy, through, turtle, positioning, scopes, decorations.fractals, decorations.shapes, decorations.text, decorations.pathmorphing, decorations.pathreplacing, decorations.footprints, decorations.markings, shadows, lindenmayersystems, intersections, fixedpointarithmetic, fpu, svg.path, external, } \usepackage[a4paper,left=2.25cm,right=2.25cm,top=2.5cm,bottom=2.5cm,nohead]{geometry} \usepackage{amsmath,amssymb} \usepackage{xxcolor} \usepackage{pifont} \usepackage{makeidx} \usepackage{tikz-cd,xr,multicol,microtype} \IfFileExists{pgfmanual-en-macros} {\input{pgfmanual-en-macros}} {\PackageError{tikz-cd-doc}{ This document requires the file pgfmanual-en-macros.tex (distributed with pgf) to compile. Please place a copy of that file in the current directory}{}} \IfFileExists{pgfmanual.aux} {\externaldocument[pgfman-]{pgfmanual}} {\PackageWarning{tikz-cd-doc}{ Couldn't find pgfmanual.aux. 
To get cross-references to the pgf manual, compile it and copy all resulting .aux files to the current directory}{}} \makeindex \tikzset{ every plot/.style={prefix=plots/pgf-}, shape example/.style={ color=black!30, draw, fill=yellow!30, line width=.5cm, inner xsep=2.5cm, inner ysep=0.5cm} } \hypersetup{unicode, pdftitle={tikzcd: Commutative diagrams with TikZ}, pdfkeywords={Commutative diagrams; TeX; LaTeX; ConTeXt; TikZ; pgf; tikz-cd; tikzcd}, } \pgfkeys{ /pdflinks/search key prefixes in= {/tikz/commutative diagrams/}, /pdflinks/internal link prefix=tikzcd, /pdflinks/warnings=false, /pdflinks/show labels=false, } \tikzset{/codeexample/every codeexample/.append style={/tikz/commutative diagrams/background color=graphicbackground}} \tikzset{gray x/.tip={Rays[color=gray]}} \newcommand{\displayarrow}[2][]{ \index{#2@\protect\texttt{#2} arrow tip} \index{Arrow tips!#2@\protect\texttt{#2}} \texttt{#2} & yields \tikz[baseline=-axis_height] \draw[{#2}-{#2}, line width=rule_thickness, #1] (0,0) -- (1,0);} \newcommand{\displayarrowstyle}[1]{ \def\mykey{/tikz/commutative diagrams/#1} \def\mypath{} \def\myname{}\firsttimetrue \decompose/tikz/commutative diagrams/#1/\nil \texttt{#1} & yields \tikz[baseline=-axis_height, line width=rule_thickness] \draw[arrows=gray x-gray x,commutative diagrams/.cd, #1] (0,0) -- (0.6,0);} \begin{document} \begin{center} \vspace*{1em} \tikz\node[scale=1.2]{ \color{gray}\Huge\ttfamily \char`\{\textcolor{red!75!black}{tikzcd}\char`\}}; \vspace{0.5em} {\Large\bfseries Commutative diagrams with \tikzname} \vspace{1em} {Version 0.9e \qquad October 30, 2014} \end{center} \vspace{1.5em} The general-purpose drawing package Ti\emph{k}Z can be used to typeset commutative diagrams and other kinds of mathematical pictures, generating high-quality results. The present package facilitates the creation of such diagrams by providing a convenient set of macros and reasonable default settings.
Familiarity with Ti\emph{k}Z is helpful but not necessary, since the examples contained here cover the most common situations. This software is distributed under the terms of the GNU General Public License, version 3 or later. \tableofcontents \setcounter{section}{-1} \section{Disclaimer} \label{sec:disclaimer} Before version 1.0 of this package is released, minor modifications and bug fixes may be performed. All documents making an orthodox use of this package will continue to compile and generate essentially the same output. However, if you have strict stability requirements (for instance, if you want to assure no page break changes will happen in your documents), keep a copy of the current version of the file \texttt{tikzlibrarycd.code.tex} in your document's directory. \section{Getting started} \label{sec:basic-usage} To invoke this package in \LaTeX, type \begin{verse} \index{tikz-cd@\protect\texttt{tikz-cd} package} \index{Packages and files!tikz-cd@\protect\texttt{tikz-cd}} |\usepackage{tikz-cd}| \end{verse} or load \tikzname{} and then type \begin{verse} \index{cd@\protect\texttt{cd} library} \index{Libraries!cd@\protect\texttt{cd}} |\usetikzlibrary{cd}|\end{verse} \subsection{Creating a diagram} \label{sec:creating-diagrams} The basic tool to create a commutative diagram is the following environment. \begin{environment}{{tikzcd}\opt{\oarg{options}}} \end{environment} The environment contents should describe a matrix, as in \LaTeX's familiar |{tabular}| environment. The optional argument \meta{options} may be used to modify the appearance of the diagram. Any of the customization keys described in this manual, as well as those originally provided by \tikzname{}, can be used here. Arrows between matrix entries can be created with the |\arrow| command described below. 
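To fix ideas, here is a small complete diagram combining these ingredients: cells are separated by |&| and rows by |\\|, exactly as in |{tabular}|, and each |\arrow| points from the cell where it is written to the cell indicated by its direction parameter (a sketch using only features documented in this manual; the labels $A$, $B$, $C$, $D$ are arbitrary).
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r] \arrow[d] & B \arrow[d] \\
C \arrow[r] & D
\end{tikzcd}
\end{codeexample}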
Everything inside |{tikzcd}| is typeset in math mode, but you will probably want to use it inside an |{equation}| environment or |\[| \dots |\]|, so that the diagram is placed on a new line and centered. It is important to note that \textsc{dvi} viewers will not display diagrams correctly. It is necessary to convert the \textsc{dvi} file to \textsc{pdf} or \textsc{ps} format---or, even better, use a tool that generates \textsc{pdf} files directly, such as \texttt{pdflatex}. \subsection{Inserting arrows} \label{sec:inserting-arrows} Inside the |{tikzcd}| environment, the following synonymous commands are provided to produce arrows. \begin{pgfmanualentry} \extractcommand\arrow|[|\meta{options}|]|\@@ \extractcommand\ar|[|\meta{options}|]|\@@ \pgfmanualbody \end{pgfmanualentry} Here, \meta{options} is a comma-separated list of options which can be used to specify the arrow target, add labels, change arrow tips, and perform additional modifications. The arrow target can be specified by a direction parameter, which consists of a string of characters |r|, |l|, |d|, |u| (standing for right, left, down and up). Labels can be placed on an arrow by means of the quotes syntax, described in detail in the \pgfname{} manual \cite[\S\ref*{pgfman-section-label-quotes}]{pgfman}. Notice the use of |"\phi"| in the example below. \begin{codeexample}[] \begin{tikzcd} A \arrow[rd] \arrow[r, "\phi"] & B \\ & C \end{tikzcd} \end{codeexample} To further modify the appearance of an arrow, note that \meta{options} may contain any key that can be passed to \tikzname's |\path| command. Similarly, a label can receive additional options via the syntax \begin{verse} |"|\meta{label text}|"|\opt{\meta{label options}}. \end{verse} Both \meta{label text} and \meta{label options} need to be enclosed in curly braces if they contain commas. 
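As an illustration of the last remark, in the following sketch both the label text and the label options contain commas, so each is wrapped in curly braces (the label $(f,g)$ is an arbitrary example):
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, "{(f,g)}"{near start, blue}] & B
\end{tikzcd}
\end{codeexample}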
\begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi"] \arrow[d, red] & B \arrow[d, "\psi" red] \\ C \arrow[r, red, "\eta" blue] & D \end{tikzcd} \end{codeexample} Arrows can have an arbitrary number of labels, by repeated use of arguments in quotes. The example below shows how to control the positioning of labels. Notice in particular that an apostrophe as \meta{label option} causes the label to be placed on the opposite side of the arrow. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi" near start, "\psi"', "\eta" near end] & B \end{tikzcd} \end{codeexample} We provide two real-life examples. \begin{codeexample}[] \begin{tikzcd} T \arrow[drr, bend left, "x"] \arrow[ddr, bend right, "y"] \arrow[dr, dotted, "{(x,y)}" description] & & \\ & X \times_Z Y \arrow[r, "p"] \arrow[d, "q"] & X \arrow[d, "f"] \\ & Y \arrow[r, "g"] & Z \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd}[column sep=tiny] & \pi_1(U_1) \ar[dr] \ar[drr, "j_1", bend left=20] & &[1.5em] \\ \pi_1(U_1\cap U_2) \ar[ur, "i_1"] \ar[dr, "i_2"'] & & \pi_1(U_1) \ast_{ \pi_1(U_1\cap U_2)} \pi_1(U_2) \ar[r, dashed, "\simeq"] & \pi_1(X) \\ & \pi_1(U_2) \ar[ur]\ar[urr, "j_2"', bend right=20] & & \end{tikzcd} \end{codeexample} \subsection{Changing arrow tips} \label{sec:changing-arrow-tips} A set of |\arrow| options is provided to create different kinds of arrows. Some of these options have a short descriptive name, such as |hook|, and others are named after \TeX{} arrow-producing commands (without a ``|\|''), like |dashrightarrow|. \begin{codeexample}[] \begin{tikzcd} X \arrow[r, hook] \arrow[dr, dashrightarrow] & \bar{X} \arrow[d]\\ & Y \end{tikzcd} \end{codeexample} The following list shows all available arrow types (each of them is a style key located in |/tikz/commutative diagrams|). 
\begin{multicols}{2}\raggedcolumns \subsubsection*{Basic arrows} \begin{tabular}{ll} \displayarrowstyle{to head}\\ \displayarrowstyle{rightarrow}\\ \displayarrowstyle{leftarrow}\\ \displayarrowstyle{leftrightarrow}\\ \displayarrowstyle{Rightarrow}\\ \displayarrowstyle{Leftarrow}\\ \displayarrowstyle{Leftrightarrow} \end{tabular} \subsubsection*{Arrows from bar} \begin{tabular}{ll} \displayarrowstyle{maps to}\\ \displayarrowstyle{mapsto}\\ \displayarrowstyle{mapsfrom}\\ \displayarrowstyle{Mapsto}\\ \displayarrowstyle{Mapsfrom}\\ \end{tabular} \subsubsection*{Arrows with hook} \begin{tabular}{ll} \displayarrowstyle{hook}\\ \displayarrowstyle{hook'}\\ \displayarrowstyle{hookrightarrow}\\ \displayarrowstyle{hookleftarrow}\\ \end{tabular} \subsubsection*{Arrows with tail} \begin{tabular}{ll} \displayarrowstyle{tail}\\ \displayarrowstyle{rightarrowtail}\\ \displayarrowstyle{leftarrowtail}\\ \end{tabular} \subsubsection*{Two-headed arrows} \begin{tabular}{ll} \displayarrowstyle{two heads}\\ \displayarrowstyle{twoheadrightarrow}\\ \displayarrowstyle{twoheadleftarrow}\\ \end{tabular} \subsubsection*{Harpoons} \begin{tabular}{ll} \displayarrowstyle{harpoon}\\ \displayarrowstyle{harpoon'}\\ \displayarrowstyle{rightharpoonup}\\ \displayarrowstyle{rightharpoondown}\\ \displayarrowstyle{leftharpoonup}\\ \displayarrowstyle{leftharpoondown}\\ \end{tabular} \subsubsection*{Dashed arrows} \begin{tabular}{ll} \displayarrowstyle{dashed}\\ \displayarrowstyle{dashrightarrow}\\ \displayarrowstyle{dashleftarrow}\\ \end{tabular} \subsubsection*{Squiggly arrows} \begin{tabular}{ll} \displayarrowstyle{squiggly}\\ \displayarrowstyle{rightsquigarrow}\\ \displayarrowstyle{leftsquigarrow}\\ \displayarrowstyle{leftrightsquigarrow} \end{tabular} \subsubsection*{Non-arrows} \begin{tabular}{ll} \displayarrowstyle{no head}\\ \displayarrowstyle{no tail}\\ \displayarrowstyle{dash}\\ \displayarrowstyle{equal}\\ \end{tabular} \end{multicols} A gray cross (\tikz \path[/pgf/tips=true,gray x-] (0,0) -- 
(1mm,0);) in the samples above indicates that the corresponding tip is kept unchanged. This allows several arrow styles to be superimposed. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, tail, two heads, dashed] & B \end{tikzcd} \end{codeexample} \subsection{Alternative syntax for arrows} \label{sec:altern-synt-arrows} The following forms of the arrow command were used before the appearance of the quotes syntax for labels, and now may seem somewhat convoluted. They are nonetheless still available for backwards compatibility. \begin{command}{\arrow\opt{\oarg{options}}\marg{direction}\meta{labels}} \end{command} Here, \meta{direction} is a string containing the characters |r|, |l|, |d|, |u| and is used to determine the arrow target. Alternatively, you can specify an explicit matrix cell by replacing \meta{direction} with something of the form \meta{row number}\texttt{-}\meta{column number}, or the name of a node. The trailing \meta{labels} can be the empty string or of the form \begin{verse} \opt{\oarg{label options}}\marg{label text}\meta{more labels}. \end{verse} The equivalent command |\ar| can also be used in this form. Here is an example. 
\begin{codeexample}[] \begin{tikzcd} A \arrow{d} \arrow{r}[near start]{\phi}[near end]{\psi} & B \arrow[red]{d}{\xi} \\ C \arrow[red]{r}[blue]{\eta} & D \end{tikzcd} \end{codeexample} There are further shortened forms: \begin{pgfmanualentry} \extractcommand\rar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\lar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\dar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\uar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\drar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\urar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\dlar\opt{\oarg{options}}\meta{labels}\@@ \extractcommand\ular\opt{\oarg{options}}\meta{labels}\@@ \pgfmanualbody \end{pgfmanualentry} The first one is equivalent to \begin{verse} |\arrow|{\oarg{options}}|{r}|\meta{labels} \end{verse} and the other ones work analogously. \subsection{Usage in plain \TeX{}} \label{sec:usage-plain-tex} To use this software in plain \TeX{}, load \tikzname{} and the \texttt{cd} library by saying \begin{verse} |\input tikz.tex|\\ \index{cd@\protect\texttt{cd} library} \index{Libraries!cd@\protect\texttt{cd}} |\usetikzlibrary{cd}| \end{verse} The |{tikzcd}| environment should then be replaced by the following: \begin{plainenvironment}{{tikzcd}\opt{\oarg{options}}} \end{plainenvironment} All other functions of this library work as described in this manual without change. \subsection{Usage in Con\TeX t} \label{sec:usage-plain-context} To use this software in Con\TeX t, load \tikzname{} and then the \texttt{cd} library by saying \begin{verse} |\usemodule[tikz]|\\ \index{cd@\protect\texttt{cd} library} \index{Libraries!cd@\protect\texttt{cd}} |\usetikzlibrary[cd]| \end{verse} The |{tikzcd}| environment should then be replaced by the following: \begin{contextenvironment}{{tikzcd}\opt{\oarg{options}}} \end{contextenvironment} Moreover, you may replace the column and row separators |&|, |\\| by their Con\TeX t analogues |\NC|, |\NR|. 
However, you should use |\NC| only \emph{between} cells, and not before the first column or after the last column, as in usual Con\TeX t tables. Similarly, |\NR| should be used only between rows. All other functions of this library work as described in this manual without change.

\section{Controlling the appearance of diagrams}
\label{sec:chang-appe-diagr}

This section describes a number of customization keys defined by this package. All keys are located in the path |/tikz/commutative diagrams|. Options passed to |{tikzcd}| or |\arrow| are searched for in that path and, if not found there, in |/tikz|. To set options globally, it is convenient to use the following command.

\begin{command}{\tikzcdset\marg{options}}
Executes \meta{options} in the path |/tikz/commutative diagrams|.
\end{command}

Besides the keys described in this manual, numerous \tikzname\ parameters can affect the appearance of a diagram. However, only a few of them (namely those appearing in |every diagram|, |every cell|, |every arrow|, and |every label| below) are reinitialized when |{tikzcd}| is called. This means that modifying a certain \tikzname\ parameter globally may or may not affect the output of |{tikzcd}|.

We also point out that, besides the options and styles provided by this package, several keys defined by \tikzname{} are useful for arrows. Some examples are |dashed|, |dotted|, and their relatives, |line width=|\meta{dimension}, |color=|\meta{color}, |bend right|, |bend left|, |in=|\meta{angle}, |out=|\meta{angle}, |loop|, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-cap-joins} and \S\ref*{pgfman-library-to-paths}]{pgfman}.
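Such \tikzname{} options can be freely combined inside |\arrow|. For instance (a minimal illustrative sketch):
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, bend left, dotted] \arrow[r, bend right, dashed]
& B \arrow[loop right]
\end{tikzcd}
\end{codeexample}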
Likewise, \tikzname{} provides several keys that are useful for labels, such as |above|, |below|, |left|, |right|, |swap| (which makes the label be placed on the right side of the arrow, relative to its direction), |sloped|, |pos=|\meta{fraction}, |near start|, |near end|, |inner sep=|\meta{dimension}, |font=|\meta{font command}, |text width=|\meta{dimension}, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-nodes}, esp.\ \S\ref*{pgfman-section-pos-option}]{pgfman}. \subsection{General options} \label{sec:general-options} \begin{stylekey}{/tikz/commutative diagrams/every diagram} This style is applied to every |{tikzcd}| environment. Initially, it contains the following: \begin{verse} |row sep=normal||,|\\ |column sep=normal||,|\\ |/tikz/baseline=0pt| \end{verse} \end{stylekey} The |baseline=0pt| setting is used to make equation numbers be placed correctly (as an exception, one-row diagrams are anchored at their matrix base, which is exactly what you want). \begin{key}{/tikz/commutative diagrams/diagrams=\meta{options}} This key appends \meta{options} to the style |every diagram|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/every matrix} This style is applied to the \tikzname{} matrix created internally by |{tikzcd}|. Initially, it contains the following: \begin{verse} |/tikz/inner sep=0pt| \end{verse} \end{stylekey} \begin{stylekey}{/tikz/commutative diagrams/every cell} This style is applied to every \tikzname{} matrix cell created by |{tikzcd}|. Initially, it contains the following: \begin{verse} |/tikz/shape=asymmetrical rectangle||,|\\ |/tikz/inner xsep=1ex||,|\\ |/tikz/inner ysep=0.85ex| \end{verse} \end{stylekey} The |asymmetrical rectangle| shape is described in \S\ref{sec:asymm-rect-shape}. The |inner xsep|, |inner ysep| options determine the spacing between a diagram entry and any arrows reaching it. \begin{key}{/tikz/commutative diagrams/cells=\meta{options}} This key appends \meta{options} to the style |every cell|. 
\end{key} \def\printsepaux+#1em{#1\,em} \def\printsep#1#2{\edef\temp{\pgfkeysvalueof{/tikz/commutative diagrams/#1 sep/#2}}\expandafter\printsepaux\temp} \begin{key}{/tikz/commutative diagrams/row sep=\meta{size}} This key acts as a ``frontend'' to \tikzname's |/tikz/row sep| key. If the key \begin{verse} |/tikz/commutative diagrams/row sep/|\meta{size} \end{verse} stores a \meta{value}, then it is read and |/tikz/row sep|=\meta{value} is set. If the key above is not initialized, then \meta{size} is presumably a dimension, and |/tikz/row sep|=\meta{size} is set. The initially available sizes, and their values, are the following: \begin{center} \begin{tabular}{cccccc} |tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\ \printsep{row}{tiny} & \printsep{row}{small} & \printsep{row}{scriptsize} & \printsep{row}{normal} & \printsep{row}{large} & \printsep{row}{huge} \end{tabular} \end{center} \end{key} Notice that setting, say, |row sep=1cm| globally with |\tikzcdset| will have no effect, since the |row sep| option is re-set at the beginning of each diagram. To make all diagrams have |row sep| equal to 1\,cm, you can modify the meaning of |normal| by saying \begin{verse} |\tikzcdset{row sep/normal=1cm}|. \end{verse} You can also create new sizes, but note that \pgfname\ requires new keys to be initialized explicitly. For example, to create a size |my size|, meaning 1\,ex, you should use \begin{verse} |\tikzcdset{row sep/my size/.initial=1ex}|. \end{verse} \begin{key}{/tikz/commutative diagrams/column sep=\meta{size}} This works analogously to the |row sep| key above. 
The sizes available initially are the following:
\begin{center}
\begin{tabular}{cccccc}
|tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\
\printsep{column}{tiny} & \printsep{column}{small} & \printsep{column}{scriptsize} & \printsep{column}{normal} & \printsep{column}{large} & \printsep{column}{huge}
\end{tabular}
\end{center}
\end{key}

\begin{key}{/tikz/commutative diagrams/sep=\meta{size}}
This key sets |row sep=|\meta{size}, |column sep=|\meta{size}.
\end{key}

In the examples below, the triangular diagrams would look too wide or too tall if the column or row separation were not set appropriately.
\begin{codeexample}[]
\begin{tikzcd}[column sep=small]
& A \arrow[dl] \arrow[dr] & \\
B \arrow{rr} & & C
\end{tikzcd}
\end{codeexample}
\begin{codeexample}[]
\begin{tikzcd}[row sep=tiny]
& B \arrow[dd] \\
A \arrow[ur] \arrow[dr] & \\
& C
\end{tikzcd}
\end{codeexample}
Section \ref*{pgfman-section-matrices}.3.2 of the \pgfname{} manual \cite{pgfman} contains further details on the spacing of matrix cells.

\begin{stylekey}{/tikz/commutative diagrams/cramped}
By default, a generous amount of white space is added around diagram cells, which is appropriate for large, displayed diagrams. The present style removes some of this extra white space, and is intended for smaller diagrams that should blend with the surrounding text, or for very wide material that would not otherwise fit on the page.
\end{stylekey}

Keep in mind that while there are some legitimate uses for |{tikzcd}| diagrams in inline formulas, standard \LaTeX\ constructs such as |\overset| and |\xrightarrow| are often sufficient and should be preferred. The picture below shows the (somewhat subtle) difference between the cramped and the non-cramped styles.
\begin{codeexample}[pre=\minipage{6cm},post=\endminipage] This \begin{tikzcd} A \arrow[r] & B \end{tikzcd} is a regular diagram.\\ This \begin{tikzcd}[cramped, sep=small] A \arrow[r] & B \end{tikzcd} is a cramped diagram.\\ This $A \to B$ is just a formula. \end{codeexample} \begin{key}{/tikz/commutative diagrams/math mode=\meta{boolean} (default true)} This key determines whether or not the contents of a diagram are typeset in math mode. If set globally or diagram-wise, it affects both the diagram entries and arrow labels. If used with |\arrow|, it affects only its labels. \end{key} \begin{key}{/tikz/commutative diagrams/background color=\meta{color} (initially white)} This key stores the name of a color, and is read by styles that fill the background, such as |description| and |crossing over|. It does not cause the background of diagrams to be filled. \end{key} \subsection{Global options for arrows} \label{sec:options-arrows} \begin{stylekey}{/tikz/commutative diagrams/every arrow} This style is applied to every |\arrow|. Initially, it contains the following: \begin{verse} |/tikz/draw,|\\ |/tikz/line width=rule_thickness||,|\\ |rightarrow| \end{verse} \end{stylekey} \begin{key}{/tikz/commutative diagrams/arrows=\meta{options}} This key appends \meta{options} to the style |every arrow|. \end{key} \begin{key}{/tikz/commutative diagrams/arrow style=\meta{style}} This key determines which collection of arrow tips is used by the arrow tip selection styles listed in \S\ref{sec:changing-arrow-tips}. The initial setting is suitable for documents using the Computer Modern font at any size. The available choices for \meta{style} are: \begin{description} \item[\texttt{Latin Modern}] A small variant of the initial settings, intended for documents using the Latin Modern font at any size. \item[\texttt{math font}] This setting uses the |Glyph| meta arrow tip described in \S\ref{sec:font-arrow-tips}. 
\item[\texttt{tikz}] This setting uses the arrow tips defined in \tikzname's |arrows.meta| library. It honors the option |/tikz/>|. \end{description} This key is usually invoked in the document preamble, and should be set only once. \end{key} If you are using a font different from Computer Modern or Latin Modern, you may find the best results by selecting the |math font| style. As detailed in \S\ref{sec:font-arrow-tips}, this is not guaranteed to work perfectly with all fonts, but gives good results in many cases. If the \texttt{math font} style gives unsatisfactory results, you can try selecting the \texttt{tikz} style, and setting |/tikz/>| to the value that best matches your font (among those shown in \cite[\S\ref*{pgfman-section-arrows-meta}]{pgfman}). \begin{codeexample}[] \tikzcdset{ arrow style=tikz, diagrams={>={Straight Barb[scale=0.8]}} } \begin{tikzcd} A \arrow[r, tail] \arrow[rd] & B \arrow[d, two heads]\\ & D \end{tikzcd} \end{codeexample} \subsection{Absolute placement of arrows} \label{sec:absol-positioning} The usual behavior of |\arrow| is to produce an arrow starting at the diagram entry where the command appears, and ending at an entry whose location is specified relative to that. The following keys override this behavior, allowing source and target to be selected explicitly. \begin{key}{/tikz/commutative diagrams/from=\meta{argument}} If \meta{argument} is of the form \meta{row number}\texttt{-}\meta{column number}, or if it is a string of characters |r|, |l|, |d|, |u|, this key sets the arrow source to be the corresponding cell in the diagram matrix. Otherwise, it assumes the argument is the name of a node and sets the arrow source to \meta{argument}. \end{key} \begin{key}{/tikz/commutative diagrams/to=\meta{argument}} Similar to |from|, but refers to the arrow target. \end{key} Recall that it is possible to give a specific entry of a \tikzname{} matrix a name by using the \verb!|[!\meta{options}\verb!]|! 
syntax, as done for entry $C$ in the example below. You must be careful not to create nodes whose name contains only the characters |l|, |r|, |u|, |d| if you want to refer to them using |from| or |to|.
{\catcode`\|=12
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[to=Z, red] \arrow[to=2-2, blue] & B \\
|[alias=Z]| C & D \arrow[from=ul, to=1-2, purple]
\end{tikzcd}
\end{codeexample}
}
In the next examples, empty labels are used to create nodes for later reference. The |draw=red| option is used to show where these empty nodes are located, but of course you will want to remove it when using this technique.
\begin{codeexample}[]
\begin{tikzcd}[column sep=scriptsize]
A \arrow[dr] \arrow[rr, ""{name=U, below, draw=red}]{} & & B \arrow[dl] \\
& C \arrow[Rightarrow, from=U, "\psi"]
\end{tikzcd}
\end{codeexample}
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, bend left=50, ""{name=U, below, draw=red}]
\arrow[r, bend right=50, ""{name=D, draw=red}]
& B \arrow[Rightarrow, from=U, to=D]
\end{tikzcd}
\end{codeexample}

\subsection{Phantom arrows}
\label{sec:phantom-arrows}

Sometimes it is necessary to insert a symbol outside the grid underlying the diagram. The easiest way to achieve this is as a label to an invisible arrow.

\begin{stylekey}{/tikz/commutative diagrams/phantom}
Creates an invisible arrow. Labels to this arrow are not invisible. They will be anchored at their center and typeset in full size (i.e., with |\textstyle|). To get smaller labels, as in ordinary arrows, use the |\scriptstyle| command.
\end{stylekey}

In the picture below, the arrow containing the |phantom| option goes from $A$ to $D$, and the |\ulcorner| symbol ($\ulcorner$) is inserted closer to the starting point $A$.
\begin{codeexample}[] \begin{tikzcd} A \arrow[r] \arrow[d] \arrow[dr, phantom, "\ulcorner", very near start] & B \arrow[d] \\ C \arrow[r] & D \end{tikzcd} \end{codeexample} \subsection{Fine-tuning the placement of arrows} \label{sec:fine-tuning-arrows} \begin{key}{/tikz/commutative diagrams/shift left=\meta{dimension} (default 0.56ex)} Shifts arrows by \meta{dimension} to the left, relative to the arrow direction. A dimensionless argument causes that multiple of the default value to be used. \end{key} \begin{key}{/tikz/commutative diagrams/shift right=\meta{dimension} (default 1)} A shortcut to |shift left=-|\meta{dimension}. \end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, red, shift left=1.5ex] \arrow[r] \arrow[dr, blue, shift right=1.5ex] \arrow[dr] & B \arrow[d, purple, shift left=1.5ex] \arrow[d]\\ & C \end{tikzcd} \end{codeexample} The default values of |shift left| and |shift right| are appropriate for a pair of parallel arrows, and dimensionless arguments are useful to create sets of multiple parallel arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[r] & B \arrow[r, shift left] \arrow[r, shift right] & C \arrow[r] \arrow[r, shift left=2] \arrow[r, shift right=2] & \cdots \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/shift=\marg{coordinate}} Shifts arrows by \meta{coordinate}. \end{key} \begin{key}{/tikz/commutative diagrams/xshift=\meta{dimension}} Shifts arrows right by \meta{dimension}. \end{key} \begin{key}{/tikz/commutative diagrams/yshift=\meta{dimension}} Shifts arrows up by \meta{dimension}. \end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, yshift=0.7ex] \arrow[r, yshift=-0.7ex] & B \arrow[d, xshift=0.7ex] \arrow[d, xshift=-0.7ex] \\ & C \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/start anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key specifies at which anchor of the source node the arrow should start. 
Optionally, additional coordinate transformations can be supplied. An empty \meta{anchor} argument causes no anchor to be specified, which is the usual behavior. \end{key} \begin{key}{/tikz/commutative diagrams/end anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key works analogously, but refers to the target node of the arrow. \end{key} See the picture on \S\ref{sec:asymm-rect-shape} for some of the possible values for \meta{anchor}. \begin{codeexample}[] \begin{tikzcd}[cells={nodes={draw=gray}}] A \arrow[r, black] \arrow[r, blue, end anchor=north east] \arrow[r, red, start anchor={[xshift=-1ex]}, end anchor={[yshift=2ex]north east}] & B \end{tikzcd} \end{codeexample} \subsection{Three-dimensional diagrams} \label{sec:crossing-over} \begin{stylekey}{/tikz/commutative diagrams/crossing over} This style makes a thicker line, with color |background color|, to be drawn under the current arrow, simulating the effect of its passing over other arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[dr] & B \arrow[dl, crossing over] \\ C & D \end{tikzcd} \end{codeexample} \end{stylekey} Note that, since arrows are drawn in the order they are read, it may be necessary to defer the drawing of certain arrows to achieve the desired result. This can be done using the |from| key, as shown in the following picture. \begin{codeexample}[] \begin{tikzcd}[row sep=scriptsize, column sep=scriptsize] & f^* E_V \arrow[dl] \arrow[rr] \arrow[dd] & & E_V \arrow[dl] \arrow[dd] \\ f^* E \arrow[rr, crossing over] \arrow[dd] & & E \\ & U \arrow[dl] \arrow[rr] & & V \arrow[dl] \\ M \arrow[rr] & & N \arrow[from=uu, crossing over]\\ \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/crossing over clearance=\meta{dimension} (initially 1.5ex)} This key specifies the width of the background-colored line drawn under a |crossing over| arrow. 
\end{key} \subsection{Options for labels} \label{sec:options-labels} \begin{stylekey}{/tikz/commutative diagrams/every label} This style is applied to every label produced with |\arrow|. It is initially set to \begin{verse} |/tikz/auto,|\\ |/tikz/font=|\meta{something}|,|\\ |/tikz/inner sep=0.5ex| \end{verse} where \meta{something} is something that makes |\scriptstyle| be applied to labels in math mode. \end{stylekey} The key |/tikz/auto| makes the label be placed on the left side of the arrow, relative to its direction. The key |/tikz/inner sep| controls the distance between a label and the corresponding arrow. \begin{key}{/tikz/commutative diagrams/labels=\meta{options}} This key appends \meta{options} to |every label|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/description} This style causes the label to be placed over the arrow, with the background filled. The clearance around the label is determined by \texttt{/tikz/inner sep}. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi" description] & B \end{tikzcd} \end{codeexample} \end{stylekey} \section{Advanced usage} \label{sec:advanced-usage} This section provides further details on the functioning of this package, with the aim of allowing the advanced user to make a more or less arbitrary use of other \tikzname{} features within |{tikzcd}|. 
\subsection{Internals of \texttt{tikzcd} and the arrow commands}
\label{sec:intern-arrow-comm}

The |{tikzcd}| environment works by substituting code of the form
\begin{verse}
|\begin{tikzcd}[|\meta{options}|]|\\
\hspace*{1.5ex} \meta{contents}\\
|\end{tikzcd}|
\end{verse}
with roughly the following:
\begin{verse}
|\begin{tikzpicture}[|\meta{options}|]|\\
\hspace*{1.5ex}| \matrix[matrix of nodes] {|\\
\hspace*{3ex}| |\meta{contents} |\\|\\
\hspace*{1.5ex}| };|\\
\hspace*{1.5ex}| |\meta{paths}\\
|\end{tikzpicture}|
\end{verse}
Not shown above are a number of initialization procedures, such as defining |\arrow| and its relatives, as well as applying the default settings specified by |every diagram| and its relatives. Note that the next-row command |\\| for the last row is inserted by |{tikzcd}|, and therefore should not be present in \meta{contents}. Notice also that you can use the key |execute at end picture| in \meta{options} to have arbitrary \tikzname{} code executed after a diagram is drawn.

Initially, \meta{paths} is the empty string. A command |\arrow[|\meta{options}|]| does nothing at the point it is inserted, and causes the following code to be appended to \meta{paths}:
\begin{verse}
|\path[|\meta{options}|] (|\meta{source~node}|) to (|\meta{target~node}|);|
\end{verse}
By default, \meta{source node} and \meta{target node} refer to the node corresponding to the matrix cell where the command |\arrow| is present. This can be changed using the |from| and |to| keys, or a direction argument (a string consisting of the characters |r|, |l|, |d|, |u|).

\subsection{Tweaking \texttt{to} paths}
\label{sec:tweaking-to-paths}

Recall that the \texttt{to} path operation used in the paths created by |\arrow| can take a number of options, as described in \S\ref*{pgfman-library-to-paths} of the \pgfname{} manual~\cite{pgfman}. In particular, the key |/tikz/to path| determines the path that is actually drawn, and can be used to do all sorts of fiddling.
\begin{codeexample}[] \begin{tikzcd} A \arrow[dr, controls={+(1.5,0.5) and +(-1,0.8)}] \arrow[dr, dashed, to path=|- (\tikztotarget)] & \\ & B \arrow[loop right] \end{tikzcd} \end{codeexample} The following example shows how to produce a ``snake'' map. The arrow with the |phantom| option (going from $B$ to $E$) has the sole purpose of creating a coordinate, named |Z|, lying halfway between these two cells. The arrow starting at $C$ has target $D$, so the macros |\tikztostart| and |\tikztotarget| will expand to the nodes corresponding to these two cells in the argument of |to path|. Notice also the use of |\tikztonodes| at the point where we want the label to be inserted. \begin{codeexample}[] \begin{tikzcd} A \arrow[r] & B \arrow[r] \arrow[d, phantom, ""{coordinate, name=Z}] & C \arrow[dll, "\delta", rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}] \\ D \arrow[r] & E \arrow[r] & F \end{tikzcd} \end{codeexample} \subsection{Drawing diagrams directly with Ti\emph{k}Z} \label{sec:draw-diagr-directly} If you find that this package is not flexible enough for some particular application, you can use the methods described in \cite{lenders}, \cite{milne} and draw diagrams directly with \tikzname. In this case, you can still use the styles provided here to obtain pictures with a uniform appearance throughout your document. The pictures below show how this can be done (the second one is adapted from \cite{milne}). 
\begin{codeexample}[]
\begin{tikzpicture}[commutative diagrams/every diagram]
\matrix[matrix of math nodes, name=m, commutative diagrams/every cell] {
X & \bar X \\
& Y \\};
\path[commutative diagrams/.cd, every arrow, every label]
(m-1-1) edge[commutative diagrams/hook] (m-1-2)
edge[commutative diagrams/dashed] (m-2-2)
(m-1-2) edge (m-2-2);
\end{tikzpicture}
\end{codeexample}
\begin{codeexample}[]
\begin{tikzpicture}[commutative diagrams/every diagram]
\node (P0) at (90:2.3cm) {$X\otimes (Y\otimes (Z\otimes T))$};
\node (P1) at (90+72:2cm) {$X\otimes ((Y\otimes Z)\otimes T)$};
\node (P2) at (90+2*72:2cm) {\makebox[5ex][r]{$(X\otimes (Y\otimes Z))\otimes T$}};
\node (P3) at (90+3*72:2cm) {\makebox[5ex][l]{$((X\otimes Y)\otimes Z)\otimes T$}};
\node (P4) at (90+4*72:2cm) {$(X\otimes Y)\otimes (Z\otimes T)$};
\path[commutative diagrams/.cd, every arrow, every label]
(P0) edge node[swap] {$1\otimes\phi$} (P1)
(P1) edge node[swap] {$\phi$} (P2)
(P2) edge node {$\phi\otimes 1$} (P3)
(P4) edge node {$\phi$} (P3)
(P0) edge node {$\phi$} (P4);
\end{tikzpicture}
\end{codeexample}

\subsection{Issues with active characters}
\label{sec:issues-with-active-ampersand}

By default, \tikzname{} makes the character |&| active inside matrices, and this causes the error message
\begin{verse}
|! Package pgfbasematrix Error: Single ampersand used with wrong catcode.|
\end{verse}
when |{tikzcd}| is used inside the argument to a macro such as a Beamer frame or a footnote. One solution to this problem is to call |{tikzcd}| with the option |ampersand replacement=\&|, and replace all occurrences of |&| with |\&| in the diagram. This procedure is also needed if you want to use matrices in a diagram cell or label.
\begin{codeexample}[/tikz/commutative diagrams/diagrams={column sep=large}]
\begin{tikzcd}[ampersand replacement=\&]
A \oplus B \ar[r, "{\begin{pmatrix} e & f \\ g & h \end{pmatrix}}"]
\& C \oplus D
\end{tikzcd}
\end{codeexample}
An alternative fix to this issue that does not require replacing |&| with a different column separator consists in adding the following line to your document after all packages have been loaded:
\begin{verse}
|\def\temp{&} \catcode`&=\active \let&=\temp|
\end{verse}
However, this may interfere in unexpected ways with other packages. Use this trick at your own risk.

A different but related issue is that some packages, notably \texttt{babel}, modify the catcodes of certain characters in a way that may upset \tikzname's parser. To fix this, add
\begin{verse}
|\usetikzlibrary{babel}|
\end{verse}
to your document preamble.

\section{Additional goodies}
\label{sec:general-infra}

This package provides some general \pgfname\ infrastructure to achieve its goals. These additional goodies are documented in this section.

\subsection{The \texttt{asymmetrical rectangle} shape}
\label{sec:asymm-rect-shape}

The following shape is used inside |{tikzcd}| to ensure that arrows between nodes in the same row are perfectly horizontal, even if the nodes contain text with different heights and depths.

\begin{shape}{asymmetrical rectangle}
This shape is similar to the |rectangle| shape, but its |center| is located at a fixed distance from the |base|, as determined by the |center yshift| key, rather than lying at the shape's geometric center. The numerical anchors, as well as |east| and |west|, are modified accordingly, and there are anchors called |real center|, |real east|, and |real west| matching |rectangle|'s original definitions. All other anchors provided by |rectangle| are available and remain unmodified.
\end{shape}

\begin{key}{/tikz/commutative diagrams/center yshift=\meta{dimension} (initially axis\_height)}
Determines the distance between |asymmetrical rectangle|'s |base| and |center| anchors.
\end{key}

The picture below shows some of the available anchors.
\begin{center}\Huge
\begin{tikzpicture}
\node[name=s,shape=asymmetrical rectangle,shape example] {Asymmetrical rectangle\vrule width 1pt height 2cm};
\foreach \anchor/\placement in {north west/above left, north/above, north east/above right, west/left, center/right, east/right, real west/left, real center/right, real east/right, base west/left, base/right, base east/right, south west/below left, south/below, south east/below right, text/left, 10/right, 130/above, 230/below, -10/below}
\draw[shift=(s.\anchor)] plot[mark=x] coordinates{(0,0)} node[\placement] {\scriptsize\texttt{\anchor}};
\end{tikzpicture}
\end{center}

\subsection{Reading font parameters}
\label{sec:read-font-param}

The following are |pgfmath| functions used to access relevant math font parameters. They take no arguments, but the result depends on the currently selected font size.

\begin{math-function}{axis\_height}
Returns the axis height parameter (a.k.a.\ $\sigma_{22}$) of the document's math font.
\end{math-function}

\begin{math-function}{rule\_thickness}
Returns the fraction rule thickness (a.k.a.\ $\xi_8$) of the document's math font.
\end{math-function}

\subsection{Computer Modern arrow tips}
\label{sec:comp-modern-arrow}

The following arrow tips mimic the Computer Modern designs. It is useful to know that at size 10\,pt, the Computer Modern arrow stems are 0.4\,pt thick; for other font sizes, scale this parameter accordingly, or set \texttt{line width=rule\_thickness}. Notice that by using the mechanism explained in \S\ref{sec:changing-arrow-tips}, it is not necessary, and in fact not advisable, to refer directly to the arrow tips listed in this section inside a |{tikzcd}| environment.
\begin{multicols}{2}\raggedcolumns \begin{tabular}{ll} \displayarrow{cm to}\\ \displayarrow[/tikz/commutative diagrams/double line]{cm implies}\\ \displayarrow[line width=1.5*rule_thickness]{cm bold to}\\ \displayarrow{cm double to}\\ \displayarrow{cm to reversed}\\ \end{tabular} \begin{tabular}{ll} \displayarrow{cm bar}\\ \displayarrow{cm left to}\\ \displayarrow{cm right to}\\ \displayarrow{cm left hook}\\ \displayarrow{cm right hook}\\ \end{tabular} \end{multicols} \subsection{Glyph arrow tips} \label{sec:font-arrow-tips} As an attempt to provide a general solution to the problem of having matching arrow tips in text and pictures, this feature produces arrow tips that consist of (pieces of) font glyphs carefully placed at the endpoints of the path. To activate it in |{tikzcd}| diagrams, refer to the |arrow style| key. \begin{arrowtipsimple}{Glyph} An arrow tip made from a piece of text. It accepts the following parameters. \begin{key}{/pgf/arrow keys/glyph math command=\meta{name}} The name of a command (to be used inside |$\csname| \dots |\endcsname$|) producing the desired glyph. \end{key} \begin{key}{/pgf/arrow keys/glyph length=\meta{dimension} (initially 1ex)} The length of the portion of the glyph not clipped away. Also used to set the `tip end' parameter. \end{key} \begin{key}{/pgf/arrow keys/glyph axis=\meta{dimension} (initially axis\_height)} A vertical displacement applied to the glyph in order to make the glyph's central axis (typically an arrow stem) aligned with the path. \end{key} \begin{key}{/pgf/arrow keys/glyph shorten=\meta{dimension} (initially -0.1ex)} An additional amount by which the end of the path is shortened. This is used to compensate for the gap that usually exists between the tip end and the glyph's bounding box. \end{key} \end{arrowtipsimple} Below are some usage examples. 
Notice that glyph arrow tips do not scale with \pgfname{} line width but their size depends on the current font size, so you will probably want to set \texttt{line width=rule\_thickness} when using them. Also, contrary to the arrow parameters defined by the \texttt{arrows.meta} library, the parameters described above are evaluated only at the time the arrow tip is drawn, so they can (and should) be given in the units em or ex.
\begin{codeexample}[]
\tikzset{
  math to/.tip={Glyph[glyph math command=rightarrow]},
  loop/.tip={Glyph[glyph math command=looparrowleft, swap]},
  weird/.tip={Glyph[glyph math command=Rrightarrow, glyph length=1.5ex]},
  pi/.tip={Glyph[glyph math command=pi, glyph length=1.5ex, glyph axis=0pt]},
}
\begin{tikzpicture}[line width=rule_thickness]
  \draw[loop-math to, bend left] (0,2) to (1,2);
  \draw[math to-weird] (0,1) to (1,1);
  \draw[pi-pi] (0,0) to (1,0);
\end{tikzpicture}
\end{codeexample}
It is important to be aware of some drawbacks of this feature. First, the transition between a line and the arrow tip may become visible with some printers (especially in low resolutions or draft mode) and document viewers, as you may be able to see in the samples above. Second, these rather long tips may (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (3.4ex,0);) or may not (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (4ex,0);) fit nicely with dashed or curved lines. Finally, the method used to place the arrow tip at the end of a stroked path and clip away the arrow stem makes certain assumptions about the font design and could fail in cases where unusual design choices are made.
\begin{thebibliography}{9}
\bibitem{lenders} Felix Lenders, \emph{Commutative diagrams using \tikzname}. Available at \url{http://www.felixl.de/commu.pdf}.
\bibitem{milne} James Milne, \emph{Guide to commutative diagrams}.
Available at \url{http://www.jmilne.org/not/CDGuide.html}. \bibitem{pgfman} Till Tantau, \emph{The \tikzname{} and \pgfname{} packages: Manual for version 3.0.0}. Available at \url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}. \end{thebibliography} \printindex \end{document}
2412.19410v1
http://arxiv.org/abs/2412.19410v1
Game theoretical asymptotic mean value properties for non-homogeneous $p$-Laplace problems
\documentclass[a4paper, 10pt]{amsart} \usepackage[square, numbers, comma]{natbib} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \usepackage{amsmath,amsthm,amssymb,amssymb,esint,verbatim,tabularx,graphicx} \usepackage{dsfont} \usepackage{bbm} \usepackage{fancyhdr} \usepackage{enumerate} \usepackage{hyperref} \usepackage{cleveref} \usepackage{amsmath} \usepackage{fancyhdr} \usepackage{epic} \usepackage{pgf,tikz} \usetikzlibrary{arrows} \usepackage[utf8]{inputenc} \usepackage{color} \usepackage{hyperref} \usepackage{verbatim} \usepackage{pdfpages} \usepackage{empheq} \usepackage{microtype} \usepackage[toc,page]{appendix} \usepackage{mathtools} \usepackage[inner=2.2cm,outer=2.2cm,bottom=2.6cm,top=2.6cm]{geometry} \parskip 4pt \newtheorem{theorem}{Theorem}[section] \newtheorem{question}{Open question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}{Definition}[section] \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{example}{Example}[section] \numberwithin{equation}{section} \newcommand{\R}{\ensuremath{\mathbb{R}}} \newcommand{\N}{\ensuremath{\mathbb{N}}} \newcommand{\E}{\ensuremath{\mathcal{E}}} \newcommand{\Z}{\ensuremath{\mathbb{Z}}} \newcommand{\J}{\ensuremath{\mathbb{J}}} \newcommand{\Hvar}{\ensuremath{\gamma}} \newcommand{\Lvar}{L_{\varphi}} \newcommand{\dif}{\mathrm{d}} \newcommand{\ra}{\rightarrow} \newcommand{\alp}{\rho} \newcommand{\veps}{\varepsilon} \newcommand{\Div}{\mbox{div}} \newcommand{\Te}{\ensuremath{T^{\textup{exp}}}} \newcommand{\Ti}{\ensuremath{T^{\textup{imp}}}} \newcommand{\s}{\ensuremath{s}} \newcommand{\Levy}{\ensuremath{\mathcal{L}}} \newcommand{\Operator}{\ensuremath{\mathfrak{L}^{\sigma,\mu}}} \newcommand{\Levyleqr}{\Levymu_{\leq r}} \newcommand{\Levygeqr}{\Levymu_{> r}} \newcommand{\Levymu}{\ensuremath{\mathcal{L}^\mu}} \newcommand{\Levynu}{\ensuremath{\mathcal{L}^\nu}} 
\newcommand{\Levynux}{\ensuremath{\mathcal{L}^{\nu_{\Dx}}}} \newcommand{\Levyd}{\ensuremath{\overline{\mathcal{L}}}} \newcommand{\plap}{\ensuremath{\Delta_p}} \newcommand{\nplap}{\ensuremath{\Delta_p}^{\textup{N}}} \newcommand{\flap}{\ensuremath{(-\Delta)^{\s}}} \newcommand{\iflap}{\ensuremath{(-\Delta)^{-\sigma}}} \newcommand{\FpLap}{\ensuremath{(-\Delta)_p^{\s}}} \newcommand{\mun}{\ensuremath{\mu_\s^N}} \newcommand{\muo}{\ensuremath{\mu_\s^1}} \newcommand{\p}{\ensuremath{\sigma}} \newcommand{\LL}{\mathcal{L}} \newcommand{\Tep}{\ensuremath{T_\veps}} \newcommand{\TTep}{\ensuremath{\widetilde{T}_\veps}} \newcommand{\Lep}{\ensuremath{L_\veps}} \newcommand{\LLep}{\ensuremath{\widetilde{L}_\veps}} \newcommand{\Imvp}{\mathcal{I}_r^p} \newcommand{\Mmvp}{\mathcal{M}_r^p} \newcommand{\MVPrh}{\Delta_{p}^h} \newcommand{\Mmvpb}{\mathcal{M}^p} \newcommand{\yt}{\tilde{y}} \newcommand{\Dx}{\Delta x} \newcommand{\Dt}{\Delta t} \newcommand{\U}{\mathcal{U}} \newcommand{\G}{\tilde{G}} \newcommand{\hv}{\widehat{v}} \newcommand{\uu}{\hat{u}} \newcommand{\vv}{\hat{v}} \newcommand{\B}{B_\varepsilon^\mu} \newcommand{\Bn}{B_{\varepsilon_n}^\mu} \newcommand{\W}{\mathcal{W}} \newcommand{\V}{\mathcal{V}} \newcommand{\M}{\mathcal{M}} \newcommand{\As}{{A}} \newcommand{\Bs}{{B}} \newcommand{\Cs}{{C}} \newcommand{\Ss}{{S}} \newcommand{\Gs}{{G}} \newcommand{\dd}{\,\mathrm{d}} \newcommand{\diver}{\mathrm{div}} \newcommand{\dell}{\partial} \newcommand{\indikator}{\mathbf{1}_{|z|\leq 1}} \newcommand{\indik}{\mathbf{1}} \newcommand{\qqquad}{\qquad\quad} \newcommand{\e}{\varepsilon} \DeclareMathOperator*{\esssup}{ess \, sup} \DeclareMathOperator*{\essinf}{ess \, inf} \DeclareMathOperator*{\esslim}{ess\,lim} \DeclareMathOperator{\sgn}{\textup{sign}} \DeclareMathOperator{\Ent}{E} \DeclareMathOperator{\Lip}{Lip} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator*{\cO}{\mathit{O}} \DeclareMathOperator*{\co}{\mathit{o}} \DeclareMathOperator*{\sinc}{sinc} \newcommand{\lap}{\Delta}
\newcommand{\Ux}{\overline{U}} \newcommand{\Ut}{\widetilde{U}} \newcommand{\Vt}{\widetilde{V}} \newcommand{\Uxt}{\widetilde{\overline{U}}} \newcommand{\Grid}{\mathcal{G}_h} \newcommand{\GridT}{\mathcal{T}_{\Delta t}^T} \renewcommand{\vec}{} \newcommand{\Se}{\mathcal{S}_{\tau}} \newcommand{\hphi}{\hat{\phi}} \newcommand{\hpsi}{\hat{\psi}} \newcommand{\RN}{\mathbb{R}^N} \newcommand{\bc}{\color{magenta}} \newcommand{\ec}{\color{blue}} \newcommand{\nc}{\normalcolor} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min}
\begin{document}
\title[Asymptotic mean value properties for non-homogeneous $p$-Laplace problems]{Game theoretical asymptotic mean value properties \\ for non-homogeneous $p$-Laplace problems}
\author[F.~del~Teso]{F\'elix del Teso}
\address[F. del Teso]{Departamento de Matemáticas, Universidad Aut\'onoma de Madrid (UAM), Campus de Cantoblanco, 28049 Madrid, Spain}
\email[]{felix.delteso@uam.es}
\urladdr{https://sites.google.com/view/felixdelteso}
\keywords{Asymptotic Expansion, Mean Value Property, Dynamic Programming Principle, Game Theoretical, Two-Person Game, Viscosity Solution, $p$-Laplacian, Non-homogeneous $p$-Laplace problem}
\author[J. D. Rossi]{Julio D. Rossi}
\address[J. D. Rossi]{Departamento de Matem\'atica, FCEyN, Universidad de Buenos Aires, Pabell\'on I, Ciudad Universitaria (1428), Buenos Aires, Argentina.}
\email[]{[email protected]}
\subjclass[2020]{35D40, 35J92, 35B05, 35Q91, 91A05}
\begin{abstract}
We extend the classical mean value property for the Laplacian operator to address a nonlinear and non-homogeneous problem related to the $p$-Laplacian operator for $p>2$. Specifically, we characterize viscosity solutions to the $p$-Laplace equation $\Delta_p u \coloneqq \nabla\cdot(|\nabla u|^{p-2} \nabla u) = f$ with a nontrivial right-hand side $f$, through novel asymptotic mean value formulas.
While asymptotic mean value formulas for the homogeneous case ($f = 0$) have been previously established, leveraging the normalization $\Delta_p^{\textup{N}} u\coloneqq |\nabla u|^{2-p} \Delta_p u = 0$, which yields the 1-homogeneous normalized $p$-Laplacian, such normalization is not applicable when $f \neq 0$. Furthermore, the mean value formulas introduced here motivate, for the first time in the literature, a game-theoretical approach for non-homogeneous $p$-Laplace equations. We also analyze the existence, uniqueness, and convergence of the game values, which are solutions to a dynamic programming principle derived from the mean value property. \end{abstract} \maketitle \tableofcontents \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} \section{Introduction}\label{sec:intro} The aim of this paper is to propose and study game-theoretical asymptotic expansions, asymptotic mean value properties and dynamic programming principles for the classical $p$-Laplacian: \begin{align*} \Delta_p \varphi \coloneqq \nabla \cdot \left(\left|\nabla \varphi\right|^{p-2}\nabla \varphi\right). \end{align*} We begin by briefly reviewing the state of the art on this topic, which has been extensively studied over the last decade. The work by Manfredi, Parviainen, and Rossi \cite{MaPaRo10} is the first known result in this direction. They benefit from the identity \begin{align*} \Delta_p\varphi = |\nabla \varphi|^{p-2} \nplap\varphi, \quad \text{with} \quad \nplap\varphi \coloneqq (p-2) \left\langle D^2\varphi \frac{\nabla \varphi}{|\nabla \varphi|}, \frac{\nabla \varphi}{|\nabla \varphi|}\right\rangle + \Delta \varphi, \end{align*} where $\nplap$ is the so-called \emph{normalized} $p$-Laplacian. This formulation is particularly useful due to the following fundamental property of $p$-harmonic functions: \begin{align}\label{eq:equivalencepharmonic} \plap u = 0 \quad \text{if and only if} \quad \nplap u = 0. 
\end{align}
In this way, they proposed an asymptotic expansion for $p$-harmonic functions via an asymptotic expansion of the normalized $p$-Laplacian:
\begin{theorem}[Manfredi-Parviainen-Rossi 2010]\label{thm:MaPaRo10}
Let $d \in \mathbb{N}$, $p \geq 2$, $x\in \R^d$, and $\varphi \in C^2(B_R(x))$ for some $R > 0$, such that $\nabla \varphi(x) \neq 0$. For $r > 0$ small enough, define
\begin{align*}
\mathcal{M}_r[\varphi](x) \coloneqq \frac{\beta}{2}\left(\sup_{B_{\gamma r}(x)} \varphi + \inf_{B_{\gamma r}(x)} \varphi\right) + (1-\beta) \fint_{B_{\gamma r}(x)} \varphi(y) \, \mathrm{d}y,
\end{align*}
where $\beta = (p-2)/(p+d)$ and $\gamma = \sqrt{2(p+d)}$. Then,
\begin{align*}
\mathcal{M}_r[\varphi](x) = \varphi(x) + r^2 \nplap \varphi(x) + o(r^2) \quad \text{as} \quad r \to 0^+.
\end{align*}
As a consequence, a continuous function $u$ satisfies $\Delta_p u (x) = 0$ in the viscosity sense if and only if $ u(x) = \mathcal{M}_r[u] (x) + o(r^2)$ in the viscosity sense.
\end{theorem}
Moreover, the above result allows for a game-theoretical interpretation of $p$-harmonic functions through the so-called \emph{tug-of-war with noise}, a two-player zero-sum game whose rules are the following:
\begin{enumerate}[\noindent\rm (i)]
\item Fix a parameter $r>0$, an open bounded domain $\Omega$, a starting point $x\in \Omega$ and a payoff function $g$ defined in $\R^d\setminus\Omega$.
\medskip
\item\label{tugogwarnoise-item2} Each turn, toss a biased coin with probabilities $\beta$ for heads and $1-\beta$ for tails.
\begin{enumerate}[$\bullet$]
\item If the result is tails, the new position of the game is chosen randomly in the ball of radius $\gamma r$.
\item If the result is heads, a round of tug-of-war is played: a fair coin decides which of the two players chooses the next position of the game inside the ball of radius $\gamma r$.
\end{enumerate}
\medskip
\item The process described in \eqref{tugogwarnoise-item2} is repeated until the position of the game lies outside $\Omega$ for the first time (this position is denoted by $x_\tau$).
When this happens, the second player pays the amount $g(x_\tau)$ to the first player.
\end{enumerate}
The value of the game $u_r:\R^d\to\R$ solves the following boundary value problem:
\begin{align}\label{eq:DPPintrotugogwarwithnoise}
\left\{
\begin{aligned}
u_r(x)&= \mathcal{M}_{r}[u_r](x) &\textup{if}&\quad x\in \Omega,\\
u_r(x)&=g(x) &\textup{if}& \quad x\in \R^d\setminus\Omega.
\end{aligned}\right.
\end{align}
In the game-theoretical literature, the equation in \eqref{eq:DPPintrotugogwarwithnoise} is known as the Dynamic Programming Principle (DPP) of the game. It can be proved (see \cite{MaPaRo10,PSSW,PS}) that the value functions $u_{r}$ converge uniformly as $r \to 0^+$ to the unique viscosity solution $u$ of $\Delta_p u = 0$ in $\Omega$ with $u=g$ on $\partial\Omega$.

One limitation of the above result is that it is only valid for $p$-harmonic functions, since the equivalence \eqref{eq:equivalencepharmonic} does not hold for nontrivial right-hand sides. Consequently, asymptotic mean value formulas for problems involving $\nplap$ do not yield, in general, asymptotic mean value formulas for problems involving $\plap$. An attempt to overcome this limitation was made by del Teso and Lindgren in \cite{dTLi21} (see also \cite{BuSq22}), where they present a direct asymptotic expansion for the $p$-Laplacian.
\begin{theorem}[del Teso-Lindgren 2021]\label{thm:dTLi21}
Let $d \in \mathbb{N}$, $p \geq 2$, $x\in \R^d$, and $\varphi \in C^2(B_R(x))$ for some $R > 0$. For $r > 0$ small enough, define
\begin{align*}
\mathcal{L}_r[\varphi](x) \coloneqq \frac{C_{d,p}}{r^{p}} \fint_{B_r} |\varphi(x+y) - \varphi(x)|^{p-2} (\varphi(x+y) - \varphi(x)) \, \mathrm{d}y,
\end{align*}
where $C_{d,p} \coloneqq \big(\tfrac{d}{2(d+p)} \fint_{\partial B_1}|y_1|^p \dd \sigma(y)\big)^{-1}$. Then,
\begin{align*}
\mathcal{L}_r[\varphi](x) = \plap \varphi(x) + o_r(1) \quad \text{as} \quad r \to 0^+.
\end{align*}
As a consequence, a continuous function $u$ satisfies $\Delta_p u (x) = f (x)$ in the viscosity sense if and only if $\mathcal{L}_r[u] (x) = f (x) + o_r(1)$ in the viscosity sense.
\end{theorem}
We observe that \Cref{thm:dTLi21} provides a framework to approximate the $p$-Laplacian and produce approximations in the form of dynamic programming principles (see \cite{dTLi21}). Nevertheless, the inability to express $\mathcal{L}_r[\varphi](x)$ in the form $A_{r}[\varphi](x) - \varphi(x)$, where $A_r$ is a monotone averaging operator, does not allow us to provide a probabilistic interpretation of the $p$-Laplacian $\Delta_p$ derived from this asymptotic expansion.

The \textbf{main goal of this paper} is to obtain direct asymptotic expansions for the $p$-Laplacian having the following properties:
\begin{enumerate}[(i)]
\item They allow us to characterize solutions of non-homogeneous problems related to the $p$-Laplacian through asymptotic mean value formulas.
\medskip
\item They have an associated Dynamic Programming Principle with a probabilistic game-theoretical interpretation.
\end{enumerate}
In order to explain the underlying ideas of the formulas and properties that will be displayed throughout the paper, let us introduce them in a simplified scenario and operate in a very informal way. Consider the problem $\plap u = 1$ for $p = 3$, that is,
\begin{align}\label{eq:toyproblem}
\Delta_3 u(x) = |\nabla u(x)| \Delta_3^{\textup{N}} u(x) = 1.
\end{align}
Note that we necessarily have $|\nabla u(x)| > 0$ and $\Delta_3^{\textup{N}} u(x) > 0$ (their product equals $1$ and $|\nabla u(x)|\geq 0$). It is easy to check (see Lemma \ref{lem:ident-crucial}) that, given $a,b \geq 0$, we have $a^{\frac{1}{2}}b^{\frac{1}{2}} = \inf_{c > 0}\left\{ \frac{c}{2} a + \frac{1}{2c}b \right\}$. Thus, \Cref{eq:toyproblem} can be equivalently expressed as
\begin{equation}\label{eq:toyproblembis}
\inf_{c > 0}\left\{ \frac{c}{2} |\nabla u(x)| + \frac{1}{2c}\Delta_3^{\textup{N}} u(x) \right\} = 1.
\end{equation}
Now, we use that for $\rho$ small we have $ |\nabla u(x)| \sim \frac{1}{\rho} (\sup_{B_{\rho} (x)}u - u(x))$ and that, from Theorem \ref{thm:MaPaRo10} with $r$ small, we obtain $\Delta_3^{\textup{N}} u(x) \sim \frac{1}{r^2} \left(\mathcal{M}_{r}[u](x)-u(x)\right) $ to get
\begin{align*}
\inf_{c > 0}\left\{ \frac12 \frac{c}{\rho} \left(\sup_{B_{\rho} (x)}u - u(x) \right)\! + \! \frac{1}{2} \frac{1}{cr^2} \left(\mathcal{M}_{r}[u](x)-u(x)\right) \right\} \sim 1.
\end{align*}
In order to write the left-hand side of this asymptotic formula in the form $A_{\veps}[u](x) - u(x)$, we just choose $\rho$ and $r$ depending on $c$ and a small parameter $\varepsilon>0$. More specifically, we take $\rho = c \varepsilon^2$ and $r = \varepsilon c^{-1/2}$ to formally obtain the asymptotic expansion
$$ A_{\veps}[u](x) \sim u(x) + \varepsilon^2 \quad \textup{with} \quad A_{\veps}[u](x)\coloneqq\inf_{c > 0}\left\{ \frac{1}{2} \sup_{B_{\varepsilon^2 c}(x)} u + \frac{1}{2} \mathcal{M}_{\varepsilon c^{-1/2}}[u](x) \right\}, $$
and, hence, we arrive at the desired asymptotic mean value formula
\begin{equation} \label{0-4}
A_{\veps}[u](x)= u(x) + \varepsilon^2 +o (\varepsilon^2)
\end{equation}
that formally characterizes solutions to \eqref{eq:toyproblem}. The associated dynamic programming principle (DPP) for this asymptotic mean value formula just drops the error term and reads as
\begin{equation} \label{0-5}
A_{\veps}[u](x)= u(x)+ \varepsilon^2.
\end{equation}
Now, let us describe two main properties of the asymptotic mean value formula \eqref{0-4} that allow us to achieve the proposed goals. The left-hand side of the mean value formula \eqref{0-4} is monotone increasing with respect to $u$ and, in addition, the coefficients add up to 1, so they can play the role of conditional probabilities (this fact will be crucial to show that there exists a solution to the DPP for the game that we describe in the next section).
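For the reader's convenience, let us record the elementary computation behind this choice of radii: with $\rho = c\varepsilon^2$ and $r = \varepsilon c^{-1/2}$, each of the two terms rescales as

```latex
\begin{align*}
\frac{1}{2}\,\frac{c}{\rho}\Big(\sup_{B_{\rho}(x)}u - u(x)\Big)
  &= \frac{1}{2\varepsilon^{2}}\Big(\sup_{B_{\varepsilon^{2}c}(x)}u - u(x)\Big),\\
\frac{1}{2}\,\frac{1}{c\,r^{2}}\big(\mathcal{M}_{r}[u](x) - u(x)\big)
  &= \frac{1}{2\varepsilon^{2}}\big(\mathcal{M}_{\varepsilon c^{-1/2}}[u](x) - u(x)\big),
\end{align*}
```

so the infimum above equals $\varepsilon^{-2}\big(A_{\veps}[u](x) - u(x)\big)$, and requiring it to be close to $1$ is exactly the expansion $A_{\veps}[u](x) \sim u(x) + \varepsilon^{2}$.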
\subsection*{Main difficulties.} Let us discuss the main difficulties that arise when we want to make this argument rigorous.

\noindent $\bullet$ For a general $p>2$, and assuming that $f> 0$, we write the problem $\Delta_p u (x) = f(x)$ as
\begin{align*}
|\nabla u(x)|^{\frac{p-2}{p-1}} \left(\Delta_p^{\textup{N}} u(x)\right)^{\frac{1}{p-1}} = (f(x))^{\frac{1}{p-1}}.
\end{align*}
We can then reproduce the previous idea using the identity $a^{\alpha}b^{1-\alpha}=\inf_{c>0}\left\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\right\}$ (see \Cref{lem:ident-crucial}), that is valid for $a,b \geq 0$ and $\alpha\in(0,1)$, by taking $\alpha=(p-2)/(p-1)$. However, when $p\in(1,2)$ we have that $(p-2)/(p-1)<0$ and this methodology does not work.

\noindent $\bullet$ We are performing approximations for $\rho = c \varepsilon^2$ and $r = \varepsilon c^{-1/2}$ small, but in the infimum we consider every $c > 0$. To tackle this, we restrict $c$ and compute the infimum for $c\in [m(\varepsilon), M(\varepsilon)]$ with $m(\varepsilon) \to 0^+$ and $M(\varepsilon) \to + \infty$ as $\veps\to0^+$ (in order to cover the whole set $(0,+\infty)$ in the limit as $\varepsilon\to 0^+$), and $\varepsilon (m(\varepsilon))^{-1/2} \to 0^+$ and $\varepsilon^2 M(\varepsilon) \to 0^+$ as $\veps\to0^+$ (to make the asymptotic expansions rigorous). A suitable choice for $m(\varepsilon)$ and $M(\varepsilon)$ is given by formula \eqref{as:trunc} below. Observe that, when we compute the infimum for $m(\varepsilon) \leq c \leq M(\varepsilon)$ with $\varepsilon (m(\varepsilon))^{-1/2} \to 0^+$ and $\varepsilon^2 M(\varepsilon) \to 0$, we have that formulas \eqref{0-4} and \eqref{0-5} are \emph{local}. This holds since, as $\varepsilon \to 0^+$, the radii of the involved balls where we consider values of $u$ go to zero uniformly.
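As a quick sanity check of the arithmetic-geometric identity invoked in the first difficulty above, note that for $a>0$ the infimum in $a^{\alpha}b^{1-\alpha}=\inf_{c>0}\{\alpha c^{1-\alpha}a+(1-\alpha)c^{-\alpha}b\}$ is attained at $c=b/a$. The following snippet (ours, purely illustrative and not part of the paper) verifies this numerically:

```python
# Numerical check (illustrative) of the identity
#   a^alpha * b^(1-alpha) = inf_{c>0} { alpha*c^(1-alpha)*a + (1-alpha)*c^(-alpha)*b },
# used with alpha = (p-2)/(p-1); the infimum is attained at c = b/a.
def rhs(a, b, alpha, c):
    return alpha * c ** (1 - alpha) * a + (1 - alpha) * c ** (-alpha) * b

p = 3.0
alpha = (p - 2) / (p - 1)   # alpha = 1/2 for p = 3
a, b = 2.0, 5.0
exact = a ** alpha * b ** (1 - alpha)
# Brute-force infimum over a grid of c-values in (0, 20]:
inf_val = min(rhs(a, b, alpha, k / 1000) for k in range(1, 20001))
assert abs(inf_val - exact) < 1e-3                     # grid infimum matches
assert abs(rhs(a, b, alpha, b / a) - exact) < 1e-12    # minimizer is c = b/a
```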
\noindent $\bullet$ Moreover, an extra difficulty comes from the fact that we want to deal with solutions to $\Delta_3 u = |\nabla u| \Delta_3^{\textup{N}} u = f$ without assuming a sign condition on $f$. Indeed, the very first step in the previous formal computation writes the product of two quantities as an infimum, $a^{\frac{1}{2}}b^{\frac{1}{2}} = \inf_{c > 0}\left\{ \frac{c}{2} a + \frac{1}{2c}b \right\}$, but this cannot be used when $a$ or $b$ are negative (the right-hand side is $-\infty$ and the left-hand side does not make sense). Hence, to handle this issue we need to introduce two different averaging operators $\mathcal{A}_\veps^+ [u]$ and $\mathcal{A}_\veps^- [u]$ (see \Cref{sec:mainpmayor2}). Heuristically speaking, for a solution to the PDE, $|\nabla u(x)|$ is always nonnegative and $ \Delta_3^{\textup{N}} u(x)$ has the same sign as $f(x)$. Then, when $f(x)<0$ we will use the formula that holds for $-u$ (which is a solution to the same equation with a change of sign). This change turns infima into suprema and vice versa in the previous formula and gives rise to two different mean value formulas according to the sign of $f(x)$.

\noindent $\bullet$ On top of these technicalities, solutions to $\Delta_p u = f$ are not in general $C^2$ but $C^{1,\alpha}$ with $\alpha<1$ (\cite{ATU1,ATU2}), and then we cannot perform second order expansions of the solution. Hence, we need to interpret the mean value formula and the corresponding PDE in the viscosity sense (this fact already appeared in previous works \cite{BChMR,dTLi21,MaPaRo10}).

\noindent $\bullet$ The fact that we have two different formulas, $\mathcal{A}_\veps^+[u](x)$ and $\mathcal{A}_\veps^-[u](x)$, depending on the sign of $f(x)$ creates several difficulties when dealing with rigorous proofs. The main difficulty appears at points where $f(x)=0$, since at those points we are forced to choose one of the two formulas, $\mathcal{A}_\veps^+[u](x)$ or $\mathcal{A}_\veps^-[u](x)$.
The technical problem is tackled by restricting ourselves to test functions with non-vanishing gradient at those points.

\noindent $\bullet$ In addition, difficulties also arise when we prove the convergence of solutions to the DPP \eqref{0-5} to solutions to the Dirichlet problem for the $p$-Laplacian. In this case we have to remark that the DPP \eqref{0-5} is not consistent in the sense of the work \cite{BarSou} by Barles and Souganidis. This fact does not allow us to apply their classical convergence results for viscosity solutions, since we need to look carefully at which test functions are admissible for our problem.

\noindent $\bullet$ Finally, let us point out that, when we aim to show the equivalence between being a solution to the PDE $\Delta_p u = f$ and the mean value formula (and also for the convergence of the associated DPP), we will need some control on the error term in the asymptotic expansion for a smooth function at points where $f=0$. Hence, we work with $C^3$ test functions, and prove a quantitative version of our previous results. We verify the equivalence of being a smooth ($C^3$) sub- or supersolution to the PDE with being a smooth sub- or supersolution to an asymptotic mean value formula. After this, we obtain the equivalence of being a viscosity solution to the PDE with verifying an appropriate asymptotic mean value formula also in the viscosity sense.

\subsection*{Description of the main novelties.} The main novelties in this paper can be summarized as follows:

\noindent $\bullet$ For the first time in the literature, we obtain a mean value formula that characterizes solutions to the non-homogeneous $p$-Laplace problem and that is well suited to design a game whose value functions verify a Dynamic Programming Principle that is given by the mean value formula dropping the error term.
This task involves an operator that is nonlinear and homogeneous of degree $p-1$ (remark that the obtained mean value formula is homogeneous of degree $1$ in $u$). In addition, associated with the mean value formula, we can design a game whose value functions converge uniformly to the unique solution to the Dirichlet problem for the PDE $\Delta_p u = f$. Therefore, here we introduce a new approach to find game-theoretical mean value properties that is flexible enough to handle operators that are not $1$-homogeneous.

\noindent $\bullet$ Partial differential equations and probability are closely related areas of mathematics. This relation goes back to the fact that harmonic functions and martingales both satisfy mean value properties. The asymptotic mean value formulas that we found are of the form
$$u(x) = \mathcal{A}_\varepsilon [u](x) - \varepsilon^2 J_p (f(x))+o(\varepsilon^2),$$
see \eqref{0-4} and Corollary \ref{teo-f>0-intro-bis} below. Here, $\mathcal{A}_\varepsilon$ is an averaging operator that is monotone increasing in $u$ and, moreover, the coefficients add up to 1, and $J_p$ is the signed $\frac{1}{p-1}$ power, $J_p(\xi):= \textup{sign}(\xi) (\textup{sign}(\xi) \xi )^{\frac{1}{p-1}}$. Then, we can consider solutions to the equation
$$u_\varepsilon(x) = \mathcal{A}_\varepsilon [u_\varepsilon](x) - \varepsilon^2 J_p (f(x))$$
inside a domain $\Omega$ with a fixed outer datum $u_\varepsilon (x) = g(x)$ for $x\in \mathbb{R}^d \setminus \Omega$. This equation is the dynamic programming principle (DPP) associated with a game (whose rules can be intuitively deduced from the form of the operator $\mathcal{A}_\varepsilon$). Here we also describe this game and show that its value function converges as $\varepsilon \to 0^+$ to the solution to $\Delta_p u(x) = f(x)$ in $\Omega$, with $u (x) = g(x)$ on $\partial \Omega$. This is the first probabilistic game-theoretical interpretation for non-homogeneous $p$-Laplace problems.
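Since the signed power $J_p$ plays a central role in these formulas, we note that it is simply the inverse of the signed $(p-1)$-power $\xi\mapsto \textup{sign}(\xi)|\xi|^{p-1}$. The following short snippet (ours, purely illustrative and not part of the paper) makes this concrete:

```python
import math

# J_p(xi) = sign(xi) * |xi|**(1/(p-1)): the signed power appearing in
# the mean value formula u = A_eps[u] - eps^2 * J_p(f) + o(eps^2).
def J(xi, p):
    if xi == 0:
        return 0.0
    return math.copysign(abs(xi) ** (1.0 / (p - 1.0)), xi)

p = 3.0
assert abs(J(9.0, p) - 3.0) < 1e-12      # 9**(1/2) = 3 when p = 3
assert J(-9.0, p) == -J(9.0, p)          # J_p is odd
xi = -0.7                                # J_p inverts xi -> sign(xi)|xi|^(p-1)
assert abs(J(math.copysign(abs(xi) ** (p - 1.0), xi), p) - xi) < 1e-12
```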
\noindent $\bullet$ The obtained mean value formulas reproduce the diffusive character of the $p$-Laplacian. In fact, the diffusion coefficient in the $p$-Laplacian, $\mbox{div} (|\nabla u|^{p-2} \nabla u)$, is given by $|\nabla u|^{p-2}$, and then the diffusion is large or small according to the size of the gradient. The mean value formula and the dynamic programming principle found here reproduce this property. In fact, if we go back to the formula that we found for $\Delta_3 u = 1$, \eqref{0-4}, we observe that, when the gradient of the function $u$ is large, we have to choose the constant $c$ small in order to make the term $\sup_{B_{\varepsilon^2 c}(x)} u -u(x)$ small (recall that we aim for an infimum in $c$ in \eqref{0-4} and in \eqref{0-5}). This forces the radii of the balls where we play tug-of-war with noise to be large (since they are given by $\varepsilon c^{-1/2}$ with $c$ small). That is, the bigger the gradient is, the larger the balls are when playing tug-of-war with noise. Conversely, when the gradient of $u$ is small, it will be better to choose $c$ large and hence the balls in the diffusive part of the mean value formula will be small.

\subsection*{Comments on related literature} Now, let us briefly comment on previous results concerning mean value formulas. It is well known that there exists a mean value formula that characterizes harmonic functions. Specifically, a function $u$ is harmonic (i.e., a solution to $\Delta u= \nabla\cdot(\nabla u) = 0$) if and only if $u(x)$ equals the mean value of $u$ inside every ball $B_\varepsilon(x)$ centered at $x$. A weaker statement, known as the asymptotic mean value property, dates back to \cite{Blaschke,Privaloff}. This property states that $u$ is harmonic if and only if
\[
u(x) = \fint_{B_\varepsilon(x)} u(y)\, \dd y + o(\varepsilon^2).
\]
Extensions of these ideas to the classical Poisson equation $-\Delta u = f$ yield the formula
\[
u(x) = \fint_{B_\varepsilon(x)} u(y)\, \dd y + \frac{\varepsilon^2}{2(d+2)} f(x) + o(\varepsilon^2).
\]
Mean value formulas for operators other than the Laplacian have also been studied. For instance, \cite{Littman.et.al1963} establishes a mean value theorem for linear divergence form operators with bounded measurable coefficients. A simpler statement, expressed in terms of mean value sets, was obtained in \cite{BH2015,Caffarelli.Roquejoffre.2007}. For results concerning mean value properties in the sub-Riemannian setting, see \cite{BoLan}. An extension to general mean value formulas in non-Euclidean spaces can be found in \cite{cur}.

The extension of the linear theory to nonlinear operators is relatively recent. As mentioned earlier, a nonlinear mean value property for $p$-harmonic functions was first introduced in \cite{MaPaRo10,PSSW,PS}. For further references and more general equations, see also \cite{TPSS,KAWOHL2012173,Arroyo,BH2015,LiuSchikorra2015,dTMP,HansHartikainen,Angel.Arroyo.Tesis,I, ArroyoHeinoParviainen2017,DoMaRiSt,Lewicka2017,MiRo24,Gonzalvez2024} and the recently published book \cite{BlancandRossi2019}. For mean value properties related to the $p$-Laplacian in the Heisenberg group, refer to \cite{LMan,DoMaRiSt22}, while for the standard variational $p$-Laplacian, see the previously quoted work \cite{dTLi21}. For mean value formulas concerning solutions to more general elliptic problems, including Monge-Ampère and $k$-Hessian equations, we refer to \cite{BChMR,BChMR2}. Finally, this topic has also been addressed in the nonlocal literature (see \cite{BuSq22,dTEnLe22,Lew22,dTMeOc24}).

Concerning the interplay between game theory and partial differential equations there is a large literature dealing with linear problems.
Recently, motivated by \cite{PSSW} where the authors introduce and analyze the tug-of-war game related to the infinity Laplacian, a number of papers deal with more general nonlinear equations and different problems including, for example, the obstacle problem. We refer to \cite{TPSS,Chandra,dTMP,LeMa,PSSW,PS} and the books \cite{BlancandRossi2019} and \cite{Lew20}. We highlight that none of these references deal with the non-homogeneous $p$-Laplace problem. \medskip \subsection*{Organization of the paper.} The paper is organized as follows: In Section \ref{sect-main} we describe and state precisely our main results; in Section \ref{sect-asymp} we analyze asymptotic expansions for the $p$-Laplacian and prove the characterization of viscosity solutions to the PDE in terms of the asymptotic mean value formula; in Section \ref{sect-dynamic} we deal with existence, uniqueness and convergence as $\varepsilon \to 0^+$ of solutions to the Dynamic Programming Principle (DPP); in Section \ref{sec:probabilitythings} we analyze the game associated with our mean value formula and show that the game has a value that coincides with the unique solution to the DPP; finally, we include two appendixes, in the first one, Appendix \ref{ApA}, we include proofs of some arithmetic-geometric identities and in the second one, Appendix \ref{app:viscositydef}, we discuss the definition of being a viscosity solution both to the PDE and to the asymptotic mean value formula. \subsection*{Notations.} Given $p \in (2,\infty)$ and $d\in \mathbb{N}$, we introduce the constants $$\beta=(p-2)/(p+d), \qquad \gamma=\sqrt{2(p+d)} \qquad \mbox{and} \qquad \alpha=(p-2)/(p-1).$$ We will also use $J_p$ to denote the signed $\frac{1}{p-1}$ power, that is, \[ J_p(\xi)\coloneqq\left\{ \begin{array}{ll} \xi^{\frac{1}{p-1}} \quad &\textup{if} \quad \xi\geq0,\\ -(-\xi)^{\frac{1}{p-1}} \quad &\textup{if} \quad \xi\leq0. \end{array} \right. 
\] \section{Main results} \label{sect-main} In this section we include the precise statements of our results. \subsection{Asymptotic expansions and mean value properties} \label{sec:mainpmayor2} Our first set of main results concerns asymptotic expansions and mean value properties for the $p$-Laplacian in the range $p\in(2,\infty)$. Let us carefully introduce all the ingredients before stating the results. As we have mentioned, to make all the computations of this paper rigorous, we need to reduce the set for the infimum in \eqref{eq:toyproblembis} from $\{c>0 \}$ to a compact set. Let us define the following functions: \begin{equation}\label{as:trunc}\tag{$\textup{M-m}$} \textup{Let $m,M:\R_+ \to \R_+$ be defined by $m(\veps)=\veps^{\frac{2}{2+\alpha}}$ and $M(\veps)=\veps^{-\frac{2}{2-\alpha}}$}. \end{equation} Let us also consider the following averaging operators, defined for $\veps>0$, $x\in \R^d$, and a bounded Borel function $\varphi:B_R(x)\to \R$ for some $R>0$ large enough: \begin{align}\label{def:A+} \mathcal{A}_\veps^+[\varphi](x)\coloneqq \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} \varphi + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\} \end{align} and \begin{align}\label{def:A-} \mathcal{A}_\veps^-[\varphi](x)\coloneqq\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} \varphi + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\}. \end{align} We are now ready to state our first main result. It provides an expansion of the $p$-Laplacian, in asymptotic form for $C^2$ functions and with a quantitative error estimate for $C^3$ functions. Moreover, while the asymptotic formula may fail for $C^2$ functions when the $p$-Laplacian vanishes (due to the lack of control of the error term), this problem is avoided when applied to $C^3$ functions.
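Before stating the theorem, let us record the elementary optimization behind the choice of the weights in \eqref{def:A+} and \eqref{def:A-} (a sketch of the computation for the untruncated infimum; the truncated version actually used in the proofs is among the arithmetic-geometric estimates of Appendix \ref{ApA}). For $A,B>0$ and $\alpha\in(0,1)$ we have \[ \inf_{c>0}\left\{\alpha c^{1-\alpha}A + (1-\alpha)c^{-\alpha}B\right\} = A^{\alpha}B^{1-\alpha}, \] since the derivative in $c$ of the quantity in brackets is $\alpha(1-\alpha)c^{-\alpha-1}(cA-B)$, so the infimum is attained at $c=B/A$. Choosing $A=|\nabla\varphi(x)|$ and $B=\Delta_p^{\textup{N}}\varphi(x)>0$, and recalling that $\alpha=(p-2)/(p-1)$, this yields \[ A^{\alpha}B^{1-\alpha} = \left(|\nabla\varphi(x)|^{p-2}\,\Delta_p^{\textup{N}}\varphi(x)\right)^{\frac{1}{p-1}} = J_p(\Delta_p\varphi(x)), \] which is precisely the quantity that appears in the asymptotic expansions of this section.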
\begin{theorem}\label{thm:main-p-greater2} Let $d\in \mathbb{N}$, $p>2$, $\veps_0\in (0,1)$, $x\in \R^d$, and $\varphi\in C^2(B_{\veps_0}(x))$ such that $\nabla\varphi(x)\not=0$. Assume \eqref{as:trunc}. The following assertions hold: \begin{enumerate}[\rm (a)] \item\label{thm:main-p-greater2-item1} There exists $\overline{\veps}\in(0,\veps_0)$ such that, for all $\veps\in(0,\overline{\veps})$, the following quantity is well defined: \[ \mathcal{A}_\veps[\varphi](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)\geq0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)<0. \end{aligned}\right. \] \item\label{thm:main-p-greater2-item2} If $\Delta_p\varphi(x)\not=0$, then we have the following asymptotic expansion of the $p$-Laplacian: \[ \mathcal{A}_\veps[\varphi](x)=\varphi(x)+\veps^2 J_p(\Delta_p\varphi(x)) +o(\veps^2) \quad \textup{as} \quad \veps\to0^+. \] \item\label{thm:main-p-greater2-item3} If, additionally, $\varphi\in C^3(B_{\veps_0}(x))$, then, for all $\veps\in(0,\overline{\veps})$, we have that \[ \mathcal{A}_\veps[\varphi](x)=\varphi(x)+\veps^2 J_p(\Delta_p\varphi(x)) + \veps^2 E(\varphi,\veps), \] with \begin{align*} |E(\varphi,\veps)|\leq& K_1 \|D^2\varphi\|_{L^\infty(B_{\overline{\veps}}(x))} \veps^{\frac{2(p-2)}{p}} + K_2\left(\|\nabla\varphi\|_{L^\infty(B_{\overline{\veps}}(x))}+\|D^{3}\varphi\|_{L^\infty(B_{\overline{\veps}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) \veps^{\frac{2}{3p-4}}, \end{align*} where $K_1$ and $K_2$ are positive constants depending only on $p$ and $d$. \end{enumerate} \end{theorem} \begin{remark} We can make the choice $\mathcal{A}_\veps[\varphi](x)=\mathcal{A}_\veps^-[\varphi](x)$ if $\Delta_p\varphi(x)=0$ and the above result remains true. \end{remark} Let us note that the above asymptotic expansion holds only when $\nabla\varphi(x)\not=0$ (since \Cref{thm:MaPaRo10} cannot be used to approximate $\nplap\varphi(x)$).
This is due to the fact that $\nplap\varphi(x)$ is not well defined when $\nabla\varphi(x)=0$. Even with this limitation, the above results still provide asymptotic mean value characterizations of viscosity solutions to the non-homogeneous $p$-Laplace equation when $p>2$. Such a result follows directly from the following characterization of smooth sub- and supersolutions. \begin{theorem}\label{thm:subsupersmooth-bis} Let $d\in \mathbb{N}$, $p>2$, $x\in \R^d$, $\varphi\in C^3(B_{R}(x))$ for some $R>0$, and $f\in C(B_{R}(x))$ such that $f(x)\geq0$. Additionally, assume that both $\nabla\varphi(x)\not=0$ and $\Delta_p\varphi(x)\not=0$ if $f(x)=0$. Assume also that $\veps>0$ and that \eqref{as:trunc} holds. Then, \begin{align*} \begin{aligned} &\Delta_p \varphi(x)\leq f(x) \quad (\textup{resp. } \Delta_p \varphi(x)\geq f(x) ) \end{aligned} \end{align*} if and only if \begin{align*} \begin{aligned} &\mathcal{A}_\veps^+[\varphi](x)\leq \varphi(x)+\veps^2 J_p(f(x)) +o(\veps^2) \quad (\textup{resp. }\mathcal{A}^+_\veps[\varphi](x)\geq \varphi(x)+\veps^2 J_p(f(x)) +o(\veps^2) ) \quad \textup{as} \quad \veps\to0^+. \end{aligned} \end{align*} The same result holds if $f(x)\leq 0$, replacing $\mathcal{A}_\veps^+$ by $\mathcal{A}_\veps^-$. \end{theorem} Let us finally fix the notation \[ \overline{\mathcal{A}}_\veps[\varphi;f](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad f(x)\geq0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad f(x)<0, \end{aligned}\right. \quad \textup{and} \quad \underline{\mathcal{A}}_\veps[\varphi;f](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad f(x)>0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad f(x)\leq 0. \end{aligned}\right. \] We are now ready to formulate the second main result: We provide two asymptotic mean value characterizations of viscosity solutions to non-homogeneous $p$-Laplace problems.
\begin{corollary}[Asymptotic mean value property] \label{teo-f>0-intro-bis} Let $d\in \mathbb{N}$, $p>2$, $\Omega\subset\R^d$ be an open set, and $f\in C(\Omega)$. Assume also that $\veps>0$ and that \eqref{as:trunc} holds. Let $\mathcal{A}_\veps[\varphi;f]$ be defined by either $\overline{\mathcal{A}}_\veps[\varphi;f]$ or $\underline{\mathcal{A}}_\veps[\varphi;f]$. Then, $u$ is a viscosity solution to \[ \Delta_p u(x)=f(x) \quad \textup{for} \quad x\in \Omega, \] if and only if $u$ is a viscosity solution to \begin{align*} \begin{aligned} &u(x) =\mathcal{A}_\veps[u;f](x) - \veps^2 J_p(f(x)) +o(\veps^2) \quad \textup{for} \quad x\in \Omega, \quad \textup{as} \quad \veps\to0^+. \end{aligned} \end{align*} \end{corollary} The precise definitions of viscosity solutions for both the $p$-Laplace equation and the asymptotic mean value property can be found in \Cref{app:viscositydef}. Observe that in the previous result we are not assuming any a priori regularity for $u$ besides continuity. Also notice that we have two different mean value operators, $\overline{\mathcal{A}}_\veps[\varphi;f]$ and $\underline{\mathcal{A}}_\veps[\varphi;f]$, such that the associated mean value formulas characterize viscosity solutions to $\Delta_p u(x)=f(x)$. \begin{remark}Let us point out that classical weak solutions for the $p$-Laplacian that are obtained by minimizing the energy $$ E(u) = \frac1p \int_\Omega |\nabla u(x)|^p \dd x - \int_\Omega f(x) u(x) \dd x $$ coincide with solutions in the viscosity sense; see \cite{Is,JuJu,JuLiMan}. Hence, our main result also provides a characterization in terms of an asymptotic mean value formula for weak solutions.
\end{remark} \subsection{Dynamic Programming Principle and game theoretical interpretation} The previous asymptotic expansion naturally gives a dynamic programming principle for the $p$-Laplace Poisson problem \begin{align}\label{eq:ppoisson} \left\{ \begin{aligned} \Delta_p u(x)&= f(x) &\textup{if}& \quad x\in \Omega,\\ u(x)&= g(x) &\textup{if}& \quad x\in \partial\Omega. \end{aligned} \right. \end{align} We will need the following regularity assumption on the domain $\Omega$: \begin{equation}\label{as:regudom} \tag{$\textup{A}_{\Omega}$} \begin{split} \textup{$\Omega\subset \R^d$ is a bounded open domain with the uniform exterior ball condition: there exists }\\ \textup{$R>0$ such that for all $x_0\in \partial \Omega$ there exists $z_0\in \R^d\setminus \Omega$ such that $\overline{B_{R}(z_0)}\cap \overline{\Omega}=\{x_0\}$. } \end{split} \end{equation} In the following result, Theorem \ref{thm:DPPexsandconv}, we present the well-posedness of the associated dynamic programming principle as well as the uniform convergence of its solution to the unique solution of \eqref{eq:ppoisson}. \begin{theorem} \label{thm:DPPexsandconv} Let $d\in \mathbb{N}$, $p>2$, $\Omega$ satisfying \eqref{as:regudom}, $f\in C(\overline{\Omega})$ and $g\in C_{\textup{b}}(\R^d\setminus \Omega)$. Assume \eqref{as:trunc}. For every $\veps>0$, the following dynamic programming principle has a unique bounded Borel solution $u_\veps$: \begin{align} \label{DPP} \left\{ \begin{aligned} u_\veps(x)&= \mathcal{A}_\veps[u_\veps;f](x) - \veps^2 J_p(f(x)) &\textup{if}& \quad x\in \Omega,\\ u_\veps(x)&= g(x) &\textup{if}& \quad x\in \R^d\setminus\Omega, \end{aligned} \right. \end{align} where $\mathcal{A}_\veps[\varphi;f]$ is defined by either $\overline{\mathcal{A}}_\veps[\varphi;f]$ or $\underline{\mathcal{A}}_\veps[\varphi;f]$. 
Moreover, $$u_\veps\to u \textup{ as $\veps\to0^+$ uniformly in $\overline{\Omega}$ }$$ where $u$ is the unique viscosity solution of the $p$-Laplace Poisson problem \eqref{eq:ppoisson}. \end{theorem} \subsection*{The $p$-Laplacian gambling house.} The previous result naturally provides us with a game-theoretical approach to solutions of the $p$-Laplace Poisson problem \eqref{eq:ppoisson}. More precisely, the Dynamic Programming Principle \eqref{DPP} corresponds to the problem satisfied by the value function of the following two-player, zero-sum game, which can be called {\it the $p$-Laplacian gambling house}, whose rules are described below: \begin{enumerate}[\noindent \rm g(i)] \item\label{item1} Fix a parameter $\veps>0$, an open bounded domain $\Omega$, a starting point $x_0\in \Omega$, a payoff function $g$ defined in $\R^d\setminus\Omega$, and a running payoff function $f$ defined in $\Omega$. \medskip \item\label{item2} At each turn, given the current position $x\in \Omega$, if $f(x)\geq0$, then the second player (who wants to minimize the final payoff) chooses a constant $c \in [m(\veps),M(\veps)]$. Then they toss a biased coin with probabilities $\alpha$ for heads and $1-\alpha$ for tails. \begin{enumerate}[$\bullet$] \item If the result is heads, the first player (who wants to maximize) chooses the next position at any point in the ball $B_{\veps^2c^{1-\alpha}}(x)$. \item If the result is tails, they play a round of tug-of-war with noise in the ball $B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)$ with probabilities $\beta$ and $1-\beta$ (see description in \Cref{sec:intro}). \end{enumerate} \medskip \item\label{item3} If $f(x)<0$, the roles of the players are reversed. More precisely, the first player chooses $c$ and the second player chooses the next position in the ball $B_{\veps^2c^{1-\alpha}}(x)$ if the result of the first coin toss is heads.
\medskip \item\label{item4} The process described in \rm{g}\eqref{item2} and \rm{g}\eqref{item3} is repeated, with a running payoff at each movement of amount $-\veps^2 J_p(f(x))$. The game continues until the position of the game lies outside $\Omega$ for the first time (this position is denoted by $x_\tau$). When this happens, the second player pays the amount $g(x_\tau)$ to the first player. \end{enumerate} Note that, in this game, the players are somehow ``gambling'' by choosing the value of $c$. If one player chooses a small $c$, then, with probability $\alpha$, the other player moves freely in a small ball (of radius $\veps^2c^{1-\alpha}$, which becomes small as $c$ does), while, with probability $1-\alpha$, they both play tug-of-war with noise in a large ball (of radius $\gamma \veps c^{-\frac{\alpha}{2}}$, which becomes large as $c$ becomes small). On the other hand, if the player chooses a large $c$, the other player moves freely, with probability $\alpha$, in a large ball, and they both play tug-of-war with noise, with probability $1-\alpha$, in a smaller ball. This kind of dynamics allows a much richer set of strategies than the usual tug-of-war with noise. Also notice that the rules of the game are intuitively reflected in the form of the DPP. In fact, take a point $x$ with $f(x) >0$; at this point the DPP reads $$ u_\veps(x)= \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[u_\veps](x) \right\} - \veps^2 J_p(f(x)). $$ One can think of $u_\veps$ as the expected amount that the players will get starting the game at $x$ (this is the meaning of the value function of the game). This equation says that the value function at $x$ equals the result after one play, which is determined by the infimum among constants \( c \). Recall that, when \( f(x) > 0 \), the player aiming to minimize the payoff chooses \( c \).
The calculation of the infimum involves taking the weighted average, where the weights are determined by the probabilities of heads (\( \alpha \)) and tails (\( 1-\alpha \)) in a coin toss. This average includes the expected outcomes of selecting the next position. In this context, the player choosing the position seeks to maximize the outcome, which is where the supremum appears. Subsequently, the game involves a ``tug-of-war'' with noise in balls of radius related to \( c \) and \( \varepsilon \). Finally, the running payoff is subtracted, which in this case is \( \varepsilon^2 J_p(f(x)) \). A similar interpretation holds at points $x$ with $f(x) <0$, since in this case the DPP is $$ u_\veps(x)= \sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[u_\veps](x) \right\} - \veps^2 J_p(f(x)). $$ The following result states the connection between this game and the solution of the dynamic programming principle. \begin{theorem} Let the assumptions of \Cref{thm:DPPexsandconv} hold. Consider the game described in {\rm{g}\eqref{item1}-\rm{g}\eqref{item2}-\rm{g}\eqref{item3}-\rm{g}\eqref{item4}}. Then, the game has a value, and it is given by the unique solution $u_\veps$ of the dynamic programming principle \eqref{DPP}. \end{theorem} A more rigorous statement of the above result will be given in \Cref{sec:probabilitythings} after we introduce all the probability machinery that is required. Notice that this result, combined with Theorem \ref{thm:DPPexsandconv}, shows that the value functions of the game approximate the solution to the $p$-Laplace Poisson problem \eqref{eq:ppoisson} as the parameter that controls the size of the steps in the game, $\varepsilon$, goes to zero. \subsection{The limit cases and possible extensions} Here we collect possible extensions of our results and gather some final comments. In this subsection, for simplicity, we assume $f>0$.
\noindent $\bullet$ {\bf The limit case $p=+\infty$.} For $p\to+\infty$, we have $ \alpha \to 1$ and $(f(x))^{\frac{1}{p-1}} \to1$, and then the mean value formula reads as $$ \sup_{B_{\veps^2}(x)} u = u(x) + \varepsilon^2 +o(\varepsilon^2). $$ The corresponding DPP is given by \begin{align*} \left\{ \begin{aligned} u_\varepsilon (x) & = \sup_{B_{\veps^2}(x)} u_\varepsilon - \varepsilon^2 &\textup{if}& \quad x\in \Omega,\\ u_\veps(x)&= g(x) &\textup{if}& \quad x\in \R^d\setminus\Omega, \end{aligned} \right. \end{align*} whose solutions converge to solutions to the Eikonal equation \begin{align} \label{eq:ppoisson-Eik} \left\{ \begin{aligned} |\nabla u(x) |&= 1 &\textup{if}& \quad x\in \Omega,\\ u(x)&= g(x) &\textup{if}& \quad x\in \partial\Omega. \end{aligned} \right. \end{align} Notice that the limit as $p\to +\infty$ of solutions to the $p$-Laplace Poisson problem \eqref{eq:ppoisson} is the solution to \eqref{eq:ppoisson-Eik}, see \cite{BBM}. Therefore, taking $p=+\infty$ in our mean value formulas (or in the corresponding dynamic programming principles) is compatible with taking the limit as $p \to + \infty$ in the $p$-Laplace Poisson problem. \noindent $\bullet$ {\bf The limit case $p=2$.} For $p=2$, we get $\beta=0$, $\alpha=0$ and $\gamma=\sqrt{2(2+d)}$. Hence, as $(f(x))^{\frac{1}{p-1}}= f(x)$, the mean value formula is given by $$ \fint_{ B_{\gamma \varepsilon}(x)} u (y) \, \mathrm{d}y = u(x) + \varepsilon^2 f(x) +o(\varepsilon^2), $$ which is a well-known formula for solutions to $\Delta u(x) = f(x)$, see \cite{Blaschke,Privaloff}. The corresponding DPP is given by \begin{align*} \left\{ \begin{aligned} u_\veps(x)&= \fint_{ B_{\gamma\varepsilon}(x)} u_\varepsilon (y) \, \mathrm{d}y - \varepsilon^2 f(x) &\textup{if}& \quad x\in \Omega,\\ u_\veps(x)&= g(x) &\textup{if}& \quad x\in \R^d\setminus\Omega, \end{aligned} \right. \end{align*} whose solutions converge to solutions to the Poisson problem for the usual Laplacian.
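As a quick consistency check (a worked example), for the quadratic $u(y)=|y|^2$, so that $f=\Delta u = 2d$, the $p=2$ mean value formula above holds without any error term: since the average of $|y|^2$ over $B_r(x)$ equals $|x|^2+\frac{d}{d+2}r^2$ and $\gamma^2=2(2+d)$, we get \[ \fint_{B_{\gamma\veps}(x)} |y|^2 \, \mathrm{d}y = |x|^2 + \frac{d}{d+2}\,(\gamma\veps)^2 = u(x) + 2d\,\veps^2 = u(x)+\veps^2 f(x). \]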
Hence, also in this case, our asymptotic expansions are compatible with the classical ones. \noindent $\bullet$ {\bf Mean value properties for more general equations.} Using similar techniques to the ones used to prove our results, we can provide asymptotic expansions, asymptotic mean value characterizations of viscosity solutions, dynamic programming principles, and game theoretical interpretations for a large class of related equations. In general, if one has an asymptotic mean value formula of the form $$ u(x) = \mathcal{B}_r[u] (x) + o(r^2) $$ that characterizes solutions to $F(D^2u(x), \nabla u(x), x) =0$, one can expect that $$ u(x) = \inf_{c\in [m(\veps),M(\veps)]} \left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} u + (1-\alpha) \mathcal{B}_{\veps c^{-\frac{\alpha}{2}}}[u](x) \right\} - \varepsilon^2 f(x) + o(\varepsilon^2) $$ characterizes solutions to $$ |\nabla u(x)|^{\alpha } (F(D^2u(x), \nabla u(x), x))^{1-\alpha} = f(x), $$ assuming that $f>0$ (a similar formula, exchanging the infimum and the supremum and modifying $\mathcal{B}$, holds when $f<0$). Nonlocal versions of $F$ can also be treated similarly. For a reference on this kind of problem we refer to \cite{IS13}. In particular, for $a_1,a_2\geq0$ and $a_3>0$, we can deal with equations of the form \begin{equation*} |\nabla u (x) |^{a_1} \left(a_2 \left\langle D^2u (x) \frac{\nabla u}{|\nabla u|} (x) , \frac{\nabla u}{|\nabla u|} (x) \right\rangle + a_3\Delta u (x) \right)=f(x), \end{equation*} and also with Pucci type operators \begin{equation*} |\nabla u (x) |^{a_1} \left( \inf_{\lambda I \leq A \leq \Lambda I} \mbox{trace} \left( A \, D^2 u (x) \right) \right)=f(x), \end{equation*} as long as $f(x)>0$.
\section{Asymptotic expansions and mean value properties for the $p$-Laplacian} \label{sect-asymp} The goal of this section is to prove the asymptotic expansion and mean value property results for the $p$-Laplacian in the range $p\in(2,\infty)$, as stated in \Cref{thm:main-p-greater2}, \Cref{thm:subsupersmooth-bis} and \Cref{teo-f>0-intro-bis}. \subsection{Asymptotic expansions for $C^2$ functions} Let us start by recalling the result of Theorem \ref{thm:MaPaRo10} related to the averaging operator \begin{align*} \mathcal{M}_r[\varphi](x)\coloneqq \frac{\beta}{2}\sup_{B_{\gamma r}(x)} \varphi + \frac{\beta}{2}\inf_{B_{\gamma r}(x)} \varphi + (1-\beta) \fint_{ B_{\gamma r}(x)} \varphi(y) \, \mathrm{d}y, \end{align*} where $\beta = (p-2)/(p+d)$ and $\gamma =\sqrt{2(p+d)}$. It states that, for $p>2$ and $\varphi \in C^2(B_R(x))$ for some $R>0$ such that $\nabla\varphi(x)\not=0$, we have the following asymptotic expansion for the normalized $p$-Laplacian: \begin{equation}\label{eq:plapnasexp_littleo} \mathcal{M}_r[\varphi](x)-\varphi(x) = r^2\Delta_p^{\textup{N}} \varphi(x) + o(r^2) \quad \textup{as} \quad r\to0^+. \end{equation} Additionally, we recall that the following asymptotic expansion for the modulus of the gradient holds: \begin{equation}\label{eq:gradasexpsup_littleo} \sup_{B_r(x)} \varphi-\varphi(x) = r\left|\nabla \varphi(x)\right| + o(r) \quad \textup{as} \quad r\to0^+. \end{equation} We also introduce the following assumption, which can be seen as a weakened version of \eqref{as:trunc}: \begin{equation}\label{as:trunc2}\tag{$\textup{H}_{T}$} \left\{\begin{aligned} &\textup{Let } m,M:\R_+ \to \R_+ \textup{ be non-decreasing and non-increasing functions respectively, and such}\\ &\textup{that }m(\veps)\to0^+, \quad M(\veps)\to+\infty, \quad \veps m(\veps)^{-\frac{\alpha}{2}}\to0^+ \quad \textup{and} \quad \veps^2M(\veps)^{1-\alpha}\to0^+ \quad \textup{as} \quad \veps\to0^+. \end{aligned}\right. \end{equation} \begin{remark} It is standard to check that, if \eqref{as:trunc} holds, then \eqref{as:trunc2} holds.
\end{remark} Finally, we recall the definitions of $\mathcal{A}^+_\veps$ and $\mathcal{A}^-_\veps$ given in \Cref{sec:mainpmayor2}, \begin{align}\label{def:A+bis} \mathcal{A}_\veps^+[\varphi](x)\coloneqq\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} \varphi + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\} \end{align} and \begin{align}\label{def:A-bis} \mathcal{A}_\veps^-[\varphi](x)\coloneqq\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} \varphi + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\}. \end{align} Let us start by proving that \eqref{def:A+bis} and \eqref{def:A-bis} are well defined under very mild assumptions. This actually proves a more general version of Theorem \ref{thm:main-p-greater2}\eqref{thm:main-p-greater2-item1}. \begin{lemma} Let $d\in \mathbb{N}$, $p>2$, $\veps_0\in(0,1)$, $x\in \R^d$ and $\varphi:B_{\veps_0}(x)\to\R$ be a bounded Borel function. Assume \eqref{as:trunc2}. There exists $\overline{\veps}\in(0,\veps_0)$ such that, for all $\veps\in(0,\overline{\veps})$, the quantities \eqref{def:A+bis} and \eqref{def:A-bis} are well defined. \end{lemma} \begin{proof} Since $c\in [m(\veps),M(\veps)]$, for $\veps>0$ small enough we have that \[ B_{\veps^2 c^{1-\alpha}}(x) \subset B_{\veps^2 M(\veps)^{1-\alpha}}(x)\subset B_{\veps_0}(x) \quad \textup{and} \quad B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x) \subset B_{\gamma \veps m(\veps)^{-\frac{\alpha}{2}}}(x)\subset B_{\veps_0}(x), \] where the last inclusions follow from assumption \eqref{as:trunc2}. Since the function $\varphi$ is bounded in $B_{\veps_0}(x)$, all the $\inf$ and $\sup$ involved in \eqref{def:A+bis} and \eqref{def:A-bis} are well defined. Finally, since $\varphi$ is additionally a Borel function, the integrals in \eqref{def:A+bis} and \eqref{def:A-bis} are also well defined.
\end{proof} We are now ready to prove Theorem \ref{thm:main-p-greater2}\eqref{thm:main-p-greater2-item2}. We restate it here for convenience. \begin{proposition}\label{pro:asexp-greater2} Let $d\in \mathbb{N}$, $p>2$, $\veps_0\in(0,1)$, $x\in \R^d$, and $\varphi\in C^2(B_{\veps_0}(x))$ such that $\nabla\varphi(x)\not=0$ and $\Delta_p\varphi(x)\not=0$. Assume \eqref{as:trunc2} and define, for $\veps<\veps_0$ small enough, the average \[ \mathcal{A}_\veps[\varphi](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)\geq 0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)<0. \end{aligned}\right. \] Then, \begin{align}\label{eq:asexp-proofs} \mathcal{A}_\veps[\varphi](x)=\varphi(x)+\veps^2 J_p(\Delta_p\varphi(x)) +o(\veps^2) \quad \textup{as} \quad \veps\to0^+. \end{align} \end{proposition} \begin{proof} \textbf{Step 1: }Let us begin by assuming that $\Delta_p\varphi(x)>0$. In this case, $\mathcal{A}_\veps[\varphi](x)=\mathcal{A}_\veps^+[\varphi](x)$ and $J_p(\Delta_p\varphi(x))=(\Delta_p\varphi(x))^\frac{1}{p-1}=|\nabla\varphi(x)|^{\alpha}(\nplap\varphi(x))^{1-\alpha}$. Note also that $|\nabla\varphi(x)|>0$ and $\nplap \varphi(x)>0$.
Then, by \eqref{eq:plapnasexp_littleo} and \eqref{eq:gradasexpsup_littleo}, we have that \begin{align*}\label{eq:asexpid} \frac{\mathcal{A}^+_\veps[\varphi](x)-\varphi(x)}{\veps^2}&=\!\!\!\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \left(\frac{\displaystyle \sup_{B_{\veps^2c^{1-\alpha}}(x)} \varphi-\varphi(x)}{\veps^2}\right) + (1-\alpha)\left( \frac{\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x)-\varphi(x)}{\veps^2} \right)\right\}\\ &=\!\!\!\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \left(c^{1-\alpha}\left|\nabla \varphi(x)\right| + \frac{o(\veps^2c^{1-\alpha})}{\veps^2} \right) + (1-\alpha) \left(c^{-\alpha}\Delta_p^{\textup{N}} \varphi(x) + \frac{o(\veps^2 c^{-\alpha})}{\veps^2}\right)\right\}\\ &= \!\!\!\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha c^{1-\alpha} \left(\left|\nabla \varphi(x)\right| + \frac{o(\veps^2c^{1-\alpha})}{\veps^2c^{1-\alpha}} \right) + (1-\alpha) c^{-\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) + \frac{o(\veps^2 c^{-\alpha})}{\veps^2 c^{-\alpha}}\right)\right\}. \end{align*} Now let us fix $\delta>0$ small enough such that $|\nabla\varphi(x)|-\delta>0$ and $\Delta_p^{\textup{N}} \varphi(x)-\delta >0$. By \eqref{as:trunc2} we can also choose $\hat{\veps}$ small enough in such a way that $\left|\frac{o(\veps^2c^{1-\alpha})}{\veps^2c^{1-\alpha}}\right|\leq \delta$ and $\left|\frac{o(\veps^2 c^{-\alpha})}{\veps^2 c^{-\alpha}}\right|\leq \delta$ for all $c\in[m(\veps),M(\veps)]$ and all $\veps<\hat{\veps}$. 
Thus, by \eqref{eq:gm-am}, the above identity implies \begin{align*} \frac{\mathcal{A}^+_\veps[\varphi](x)-\varphi(x)}{\veps^2}&\geq \inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} \left(\left|\nabla \varphi(x)\right| -\delta \right) + (1-\alpha) c^{-\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) -\delta \right)\Big\}\\ &\geq\inf_{c>0}\Big\{\alpha c^{1-\alpha} \left(\left|\nabla \varphi(x)\right| -\delta \right) + (1-\alpha) c^{-\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) -\delta \right)\Big\}\\ &= \left(\left|\nabla \varphi(x)\right| -\delta \right)^{\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) -\delta \right)^{1-\alpha}\\ &= \left|\nabla \varphi(x) \right|^{\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) \right)^{1-\alpha} + o_\delta(1). \end{align*} Since $\delta>0$ was arbitrary, we have proved one of the inequalities of identity \eqref{eq:asexp-proofs}. To prove the reverse inequality, we proceed in a similar way to obtain \begin{align*} \frac{\mathcal{A}^+_\veps[\varphi](x)-\varphi(x)}{\veps^2}&\leq \inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} \left(\left|\nabla \varphi(x)\right| +\delta \right) + (1-\alpha) c^{-\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) +\delta \right)\Big\}. \end{align*} We can now use \Cref{lem:gm-am-aprox} together with \eqref{as:trunc2} to get \begin{align*} \frac{\mathcal{A}^+_\veps[\varphi](x)-\varphi(x)}{\veps^2}\leq& \left(\left|\nabla \varphi(x)\right| +\delta \right)^{\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) +\delta \right)^{1-\alpha}\\ &+ \alpha \left(\left|\nabla \varphi(x)\right| +\delta \right) m(\veps)^{1-\alpha} + (1-\alpha) \left( \Delta_p^{\textup{N}} \varphi(x) +\delta \right) M(\veps)^{-\alpha}\\ =& \left|\nabla \varphi(x) \right|^{\alpha} \left( \Delta_p^{\textup{N}} \varphi(x) \right)^{1-\alpha} + o_\delta(1) + o_\veps(1). \end{align*} The arbitrariness of $\delta>0$ completes the proof of the remaining inequality in \eqref{eq:asexp-proofs}.
With this, we have completed the proof when $\Delta_p\varphi(x)>0$. \textbf{Step 2: }Let us now assume that $\Delta_p\varphi(x)<0$. We define $\psi=-\varphi$ and note that $$\Delta_p\psi(x)=-\Delta_p\varphi(x)>0.$$ Thus, we can apply the previous result to get \begin{align*} \mathcal{A}_\veps^+[\psi](x)=\psi(x)+\veps^2 J_p(\Delta_p\psi(x))+ o(\veps^2). \end{align*} Let us rewrite the above identity in terms of $\varphi$. On one hand, \[ J_p(\Delta_p\psi(x))=J_p(-\Delta_p\varphi(x))=-J_p(\Delta_p\varphi(x)). \] On the other hand, \begin{align*} \mathcal{A}^+_\veps[\psi](x)&= \mathcal{A}^+_\veps[-\varphi](x) = \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} \{-\varphi\} + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[-\varphi](x) \right\}\\ &= \inf_{c\in [m(\veps),M(\veps)]}\left\{-\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} \varphi - (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\}\\ &=-\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} \varphi + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \right\}=- \mathcal{A}^-_\veps[\varphi](x), \end{align*} that is, \begin{align*} -\mathcal{A}_\veps^-[\varphi](x)=-\varphi(x)-\veps^2 J_p(\Delta_p\varphi(x))+ o(\veps^2), \end{align*} which is precisely what we wanted to prove. \end{proof} \subsection{Asymptotic expansions with quantitative error estimates for $C^3$ functions} We now rely on the following quantitative version of Theorem \ref{thm:MaPaRo10}, which can be found in Exercise 3.7 in \cite{Lew20}. \begin{lemma}\label{lem:asexp-plapnorm} Let $d \in \mathbb{N}$, $p \geq 2$, $x\in \R^d$, and $\varphi \in C^3(B_R(x))$ for some $R > 0$, such that $\nabla \varphi(x) \neq 0$.
Then, there exists $\overline{R}$ small enough such that for all $r\in(0,\overline{R})$, we have that \begin{align*} \mathcal{M}_r[\varphi](x) - \varphi(x)= r^2\Delta_p^{\textup{N}} \varphi(x) + r^2E_1(\varphi,r), \end{align*} with \[ |E_1(\varphi,r)|\leq K \left(\|D^{3}\varphi\|_{L^\infty(B_{\overline{R}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) r, \] where $K$ is a positive constant depending only on $p$ and $d$. \end{lemma} We will also use the following quantitative estimate for the approximation of the modulus of the gradient. \begin{lemma}\label{lem:asexp-gradmod} Let $d\in \mathbb{N}$, $R>0$, $x\in \R^d$, and $\varphi\in C^3(B_{R}(x))$. Then, for all $r\in(0,R/2)$, we have that \begin{align*} \sup_{B_{r}(x)}\varphi - \varphi(x) = r|\nabla \varphi(x)| + r E_2(\varphi,r) \end{align*} with \[ |E_2(\varphi,r)|\leq K \|D^{2}\varphi\|_{L^\infty(B_{R/2}(x)) } r, \] where $K$ is a positive constant depending only on $p$ and $d$. \end{lemma} The proof of the above result is standard and we omit it. We are now ready to prove \Cref{thm:main-p-greater2}\eqref{thm:main-p-greater2-item3}, which we restate here for convenience. \begin{proposition}\label{pro:asexp-greater2-quantitative} Let $d\in \mathbb{N}$, $p>2$, $\veps_0\in(0,1)$, $x\in \R^d$, and $\varphi\in C^3(B_{\veps_0}(x))$ such that $\nabla\varphi(x)\not=0$. Assume \eqref{as:trunc} and define, for $\veps<\veps_0$ small enough, the average \[ \mathcal{A}_\veps[\varphi](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)\geq0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad \Delta_p\varphi(x)<0. \end{aligned}\right.
\] Then, there exists $\overline{\veps}$ small enough such that for all $\veps\in(0,\overline{\veps})$, we have that \begin{align*} \mathcal{A}_\veps[\varphi](x)-\varphi(x)=\veps^2 J_p(\Delta_p\varphi(x)) +\veps^2E(\varphi,\veps), \end{align*} with \begin{align}\label{eq:quanterror} |E(\varphi,\veps)|\leq& K_1 \|D^2\varphi\|_{L^\infty(B_{\overline{\veps}}(x))} \veps^{\frac{2(p-2)}{p}} + K_2\left(\|\nabla\varphi\|_{L^\infty(B_{\overline{\veps}}(x))}+\|D^{3}\varphi\|_{L^\infty(B_{\overline{\veps}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) \veps^{\frac{2}{3p-4}}, \end{align} where $K_1$ and $K_2$ are positive constants depending only on $p$ and $d$. \end{proposition} \begin{proof} We prove only the case $\Delta_p\varphi(x)\geq0$ since the case $\Delta_p\varphi(x)<0$ follows by replacing $\varphi$ by $-\varphi$ as in the proof of \Cref{pro:asexp-greater2}. We first observe that, since $\nabla\varphi(x)\not=0$, we have $\Delta_p \varphi(x)= |\nabla \varphi(x)|^{p-2} \Delta_p^{\textup{N}}\varphi(x)$ with $\Delta_p^{\textup{N}}\varphi(x)\geq0$. We can now use \Cref{lem:gm-am-aprox} to get \begin{align*} J_p(\Delta_p\varphi(x))&= |\nabla \varphi(x)|^{\alpha} (\Delta_p^{\textup{N}}\varphi(x))^{1-\alpha}\\ &=\inf_{c\in[m(\veps),M(\veps)]}\{\alpha c^{1-\alpha} |\nabla \varphi(x)| + (1-\alpha) c^{-\alpha}\Delta_p^{\textup{N}}\varphi(x)\} + E_3(\varphi,\veps), \end{align*} where $E_3(\varphi,\veps)$ can be bounded using the choice $m(\veps)=\veps^{\frac{2}{2+\alpha}}$ and $M(\veps)=\veps^{-\frac{2}{2-\alpha}}$ given by \eqref{as:trunc} in the following way: \[ |E_3(\varphi,\veps)|\leq |\nabla\varphi(x)|\veps^{\frac{2(1-\alpha)}{2+\alpha}} + \Delta_p^{\textup{N}}\varphi(x)\veps^{\frac{2\alpha}{2-\alpha}} \leq \|\nabla\varphi\|_{L^\infty(B_{\overline{\veps}}(x))}\veps^{\frac{2}{3p-4}} + \|D^2\varphi\|_{L^\infty(B_{\overline{\veps}}(x))}\veps^{\frac{2(p-2)}{p}} .
\] We now use Lemma \ref{lem:asexp-plapnorm} and Lemma \ref{lem:asexp-gradmod} to obtain \begin{align*} (\Delta_p\varphi(x))^{\frac{1}{p-1}} =&\inf_{c\in[m(\veps),M(\veps)]}\Bigg\{\alpha c^{1-\alpha} \left( \frac{\displaystyle\sup_{B_{\veps^2c^{1-\alpha}}(x)}\varphi - \varphi(x)}{\veps^2c^{1-\alpha}} - E_2(\varphi,\veps^2c^{1-\alpha})\right) \\ &\qquad \qquad \qquad \, + (1-\alpha) c^{-\alpha}\left(\frac{\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) - \varphi(x)}{\veps^2c^{-\alpha}}-E_1(\varphi,\veps c^{-\frac{\alpha}{2}}) \right)\Bigg\} + E_3(\varphi,\veps), \end{align*} that is, \begin{align*} \veps^2 (\Delta_p\varphi(x))^{\frac{1}{p-1}} =&\!\!\!\inf_{c\in[m(\veps),M(\veps)]}\Bigg\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)}\varphi +(1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x) \\ &\qquad \quad \quad \,- \veps^2 \alpha c^{1-\alpha} E_2(\varphi,\veps^2c^{1-\alpha}) -\veps^2(1-\alpha) c^{-\alpha}E_1(\varphi,\veps c^{-\frac{\alpha}{2}}) \Bigg\}-\varphi(x) + \veps^2E_3(\varphi,\veps)\\ =&\!\!\!\inf_{c\in[m(\veps),M(\veps)]}\Bigg\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)}\varphi +(1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x)\Bigg\}-\varphi(x) + \veps^2E_4(\varphi,\veps)+ \veps^2E_3(\varphi,\veps), \end{align*} where \begin{align*} |E_4(\varphi,\veps)| &\leq \!\!\! \sup_{c\in [m(\veps),M(\veps)]}\big\{\alpha c^{1-\alpha} |E_2(\varphi,\veps^2c^{1-\alpha})| +(1-\alpha) c^{-\alpha}|E_1(\varphi,\veps c^{-\frac{\alpha}{2}})|\big\}\\ &\leq C \!\!\! \sup_{c\in [m(\veps),M(\veps)]}\Bigg\{ \|D^{2}\varphi\|_{L^\infty(B_{\overline{\veps}}(x))} \veps^2c^{2(1-\alpha)} +\left(\|D^{3}\varphi\|_{L^\infty(B_{\overline{\veps}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) \veps c^{-\frac{3\alpha}{2}}\Bigg\}\\ & \leq C \!
\left(\|D^{2}\varphi\|_{L^\infty(B_{\overline{\veps}}(x))} \veps^{2}M(\veps)^{2(1-\alpha)}+ \left(\|D^{3}\varphi\|_{L^\infty(B_{\overline{\veps}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) \veps m(\veps)^{-\frac{3}{2}\alpha } \right)\\ &= C \! \left(\|D^{2}\varphi\|_{L^\infty(B_{\overline{\veps}}(x))} \veps^{\frac{2(p-2)}{p}}+ \left(\|D^{3}\varphi\|_{L^\infty(B_{\overline{\veps}}(x)) }+ \frac{|D^2\varphi(x)|^2}{|\nabla\varphi(x)|}\right) \veps^{\frac{2}{3p-4}} \right), \end{align*} where $C$ is a positive constant depending only on $p$ and $d$. This concludes the proof. \end{proof} \subsection{Asymptotic mean value characterization of viscosity solutions} The first goal of this section is to prove the characterization of smooth sub and supersolutions given by \Cref{thm:subsupersmooth-bis}. We only prove it in the case $f(x)\geq0$ since the result for $f(x)\leq 0$ follows by replacing $f$ by $-f$ and $\varphi$ by $-\varphi$ as in the proof of \Cref{pro:asexp-greater2}. Again, we restate the result here for convenience. \begin{proposition}\label{prop:subsupersmooth} Let $d\in \mathbb{N}$, $p>2$, $x\in \R^d$, $\varphi\in C^3(B_{R}(x))$ for some $R>0$, and $f\in C(B_{R}(x))$ such that $f(x)\geq0$. Additionally, assume that both $\nabla\varphi(x)\not=0$ and $\Delta_p\varphi(x)\not=0$ if $f(x)=0$. Assume also that $\veps>0$ and that \eqref{as:trunc} holds. Then, \begin{eqnarray} &\Delta_p \varphi(x)\leq f(x) \label{eq:smoothsuper-plap}\\ (\textup{resp. } &\Delta_p \varphi(x)\geq f(x) )\label{eq:smoothsub-plap} \end{eqnarray} if and only if \begin{eqnarray} \mathcal{A}_\veps^+[\varphi](x)\leq \varphi(x)+\veps^2 J_p(f(x)) +o(\veps^2) \quad \textup{as} \quad &\veps\to0^+ \label{eq:smoothsuper-MVP} \\ (\textup{resp. } \mathcal{A}_\veps^+[\varphi](x)\geq \varphi(x)+\veps^2 J_p(f(x)) +o(\veps^2) \quad \textup{as} \quad &\veps\to0^+) \label{eq:smoothsub-MVP} . 
\end{eqnarray} \end{proposition} \begin{proof} First, let us note that if $\Delta_p\varphi(x)>0$ (and thus, $\nabla\varphi(x)\not=0$), then the equivalence follows directly from \Cref{pro:asexp-greater2}. From now on, let us assume that $\Delta_p\varphi(x)\leq0$. \textbf{Case 1:} Let us prove the equivalence \eqref{eq:smoothsuper-plap}$\iff$\eqref{eq:smoothsuper-MVP}. \noindent \textbf{Step 1: Proof of \eqref{eq:smoothsuper-plap} $\implies$ \eqref{eq:smoothsuper-MVP}.} On one hand, if $\nabla\varphi(x)=0$, we have \begin{align*} \frac{\mathcal{A}_\veps^+[\varphi](x)- \varphi(x)}{\veps^2} &\leq \inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} \delta + (1-\alpha)c^{-\alpha} K\Big\}= \delta^\alpha K^{1-\alpha} + o_\veps(1). \end{align*} Note that \eqref{eq:smoothsuper-MVP} follows since $\delta>0$ is arbitrarily small and $f(x)\geq0$. On the other hand, if $\nabla\varphi(x)\not=0$, then $\nplap \varphi(x)$ is well defined and $\nplap \varphi(x)\leq0$ since $\plap \varphi(x)\leq0$. Consequently, \begin{align*} \frac{\mathcal{A}_\veps^+[\varphi](x)- \varphi(x)}{\veps^2} &\leq \inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} K + (1-\alpha)c^{-\alpha} \delta \Big\}= K^\alpha \delta^{1-\alpha} + o_\veps(1), \end{align*} and \eqref{eq:smoothsuper-MVP} follows. \noindent \textbf{Step 2: Proof of \eqref{eq:smoothsuper-MVP} $\implies$ \eqref{eq:smoothsuper-plap}.} There is actually nothing to prove here since we are assuming that $\Delta_p \varphi(x)\leq 0$, so we trivially have $\Delta_p \varphi(x)\leq f(x)$ since $f(x)\geq0$. \textbf{Case 2:} Let us prove the equivalence \eqref{eq:smoothsub-plap}$\iff$\eqref{eq:smoothsub-MVP}. \noindent \textbf{Step 1: Proof of \eqref{eq:smoothsub-plap} $\implies$ \eqref{eq:smoothsub-MVP}.} If $f(x)>0$, then, by \eqref{eq:smoothsub-plap}, $\plap\varphi(x)\geq f(x)>0$, which contradicts our standing assumption that $\plap\varphi(x)\leq0$.
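The closed forms $\delta^{\alpha}K^{1-\alpha}$ and $K^{\alpha}\delta^{1-\alpha}$ obtained in Step 1 of Case 1 are instances of the weighted arithmetic--geometric mean identity $\inf_{c>0}\{\alpha c^{1-\alpha}a+(1-\alpha)c^{-\alpha}b\}=a^{\alpha}b^{1-\alpha}$, attained at $c=b/a$. The following sketch checks this numerically; the values of $p$, $a$ and $b$ are illustrative and not part of the proof:

```python
import numpy as np

# Check inf_{c>0} { alpha*c^(1-alpha)*a + (1-alpha)*c^(-alpha)*b } = a^alpha * b^(1-alpha).
# Illustrative values only: a plays the role of the small parameter delta and
# b the role of the bound K appearing in Step 1.

def objective(c, a, b, alpha):
    return alpha * c**(1 - alpha) * a + (1 - alpha) * c**(-alpha) * b

p = 3.0
alpha = (p - 2) / (p - 1)        # exponent from J_p(Delta_p phi) = |grad phi|^alpha (Delta_p^N phi)^(1-alpha)
a, b = 0.01, 5.0

cs = np.logspace(-6, 6, 400001)  # dense grid containing the minimizer c = b/a = 500
inf_val = objective(cs, a, b, alpha).min()
geo_mean = a**alpha * b**(1 - alpha)
print(inf_val, geo_mean)         # the two values agree to high accuracy
```

Since $a$ plays the role of the arbitrarily small $\delta$, the infimum is itself small, which is how the right-hand side of \eqref{eq:smoothsuper-MVP} is reached in the limit.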
On the other hand, if $f(x)=0$, then, by assumption, we have that $\Delta_p\varphi(x)\not=0$, and thus, $\plap\varphi(x)<0$. This contradicts \eqref{eq:smoothsub-plap}, and the proof of this step is completed. \noindent \textbf{Step 2: Proof of \eqref{eq:smoothsub-MVP} $\implies$ \eqref{eq:smoothsub-plap}.} We are already assuming that $\plap \varphi(x)\leq 0$. Let us assume first, by contradiction, that $ \plap \varphi(x) <0$ (and thus $\nabla\varphi(x)\not=0$ and $\nplap\varphi(x) <0$). Then, \begin{align*} \frac{\mathcal{A}_\veps^+[\varphi](x)- \varphi(x)}{\veps^2} &=\!\!\! \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \left(\frac{\displaystyle \sup_{B_{\veps^2c^{1-\alpha}}(x)} \varphi-\varphi(x)}{\veps^2}\right) + (1-\alpha)\left( \frac{\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\varphi](x)-\varphi(x)}{\veps^2} \right)\right\}\\ &= \!\!\!\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha c^{1-\alpha} \left(\left|\nabla \varphi(x)\right| + \frac{o(\veps^2c^{1-\alpha})}{\veps^2c^{1-\alpha}} \right) + (1-\alpha)c^{-\alpha} \left(\nplap \varphi(x) + \frac{o(\veps^2 c^{-\alpha})}{\veps^2c^{-\alpha}}\right)\right\}\\ &\leq \!\!\!\inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} \left(2\left|\nabla \varphi(x)\right|\right) + (1-\alpha)c^{-\alpha} \left(\nplap \varphi(x)/2 \right)\Big\}\\ &\leq \alpha m(\veps)^{1-\alpha} \left(2\left|\nabla \varphi(x)\right|\right) + (1-\alpha)m(\veps)^{-\alpha} \left(\nplap \varphi(x)/2 \right) \\ &<-1, \end{align*} where the last inequality follows from the fact that $m(\veps)\to0^+$ as $\veps\to0^+$ and $\nplap\varphi(x)<0$. This is clearly a contradiction with \eqref{eq:smoothsub-MVP} since $f(x)\geq0$. Thus, we must have $\Delta_p\varphi(x)=0$. Let us assume now, by contradiction, that $\nabla\varphi(x)=0$.
Then, by assumption, we have that $f(x)>0$, which implies that \begin{align*} 0<J_p(f(x))\leq \frac{\mathcal{A}_\veps^+[\varphi](x)- \varphi(x)}{\veps^2} + o_\veps(1) &\leq \inf_{c\in [m(\veps),M(\veps)]}\Big\{\alpha c^{1-\alpha} \delta + (1-\alpha)c^{-\alpha} K\Big\}= \delta^\alpha K^{1-\alpha} + o_\veps(1). \end{align*} From here, we reach a contradiction by letting $\veps\to0^+$ and using the arbitrariness of $\delta>0$. At this point, we only need to check the case $\nabla\varphi(x)\not=0$ and $\nplap \varphi(x)=0$. In this case, we can use the quantitative asymptotic expansion estimate given in \Cref{pro:asexp-greater2-quantitative}, which also covers the case when $\Delta_p\varphi(x)=0$ with $\nabla\varphi(x)\not=0$, to prove the desired result. \end{proof} Let us now comment on the proof of the mean value characterization of viscosity solutions given by \Cref{teo-f>0-intro-bis}. \begin{corollary}[Asymptotic mean value property] \label{teo-f>0-sect} Let $d\in \mathbb{N}$, $p>2$, $\Omega\subset\R^d$ be an open set, and $f\in C(\Omega)$. Assume also that $\veps>0$ and that \eqref{as:trunc} holds. Let $\mathcal{A}_\veps[\varphi;f]$ be defined by either $\overline{\mathcal{A}}_\veps[\varphi;f]$ or $\underline{\mathcal{A}}_\veps[\varphi;f]$. Then, $u$ is a viscosity solution to \[ \Delta_p u(x)=f(x) \quad \textup{for} \quad x\in \Omega, \] if and only if $u$ is a viscosity solution to \begin{align*} \begin{aligned} &u(x) =\mathcal{A}_\veps[u;f](x) - \veps^2 J_p(f(x)) +o(\veps^2) \quad \textup{for} \quad x\in \Omega, \quad \textup{as} \quad \veps\to0^+. \end{aligned} \end{align*} \end{corollary} \begin{proof} The proof follows from the previous result, Proposition \ref{prop:subsupersmooth}, using the definitions of solution to both the PDE and the mean value property stated in Appendix \ref{app:viscositydef}.
In fact, from Proposition \ref{prop:subsupersmooth}, we have that $u$ is a viscosity supersolution to the PDE $\Delta_p u(x)=f(x)$ in the sense of Definition \ref{def:viscsol-p-Laplace} (every test function $\varphi$ touching $u$ from below at a point $x$ verifies $\Delta_p \varphi(x)\leq f(x)$; notice that we restrict to test functions with non-vanishing gradient at points where $f(x)=0$) if and only if it is a viscosity supersolution to the mean value formula in the sense of Definition \ref{def:asMVPviscosity}. The same holds for subsolutions, proving the desired characterization of viscosity solutions. \end{proof} \section{Dynamic programming principles for the $p$-Laplacian} \label{sect-dynamic} The aim of this section is to prove \Cref{thm:DPPexsandconv}. Let us recall the framework. We consider the boundary value problem \begin{equation}\label{eq:DPP}\tag{DPP} \left\{ \begin{aligned} u_\veps(x)&= \mathcal{A}_\veps[u_\veps;f](x)-\veps^2J_p(f(x)), &\textup{if}&\quad x\in \Omega,\\ u_\veps(x)&=g(x), &\textup{if}& \quad x\in \R^d\setminus\Omega, \end{aligned}\right. \end{equation} with \[ \mathcal{A}_\veps[u_\veps;f](x):=\left\{\begin{aligned} \mathcal{A}_\veps^+[u_\veps](x) \quad \textup{if} \quad f(x)\geq0,\\ \mathcal{A}_\veps^-[u_\veps](x) \quad \textup{if} \quad f(x)<0. \end{aligned}\right. \] \begin{remark} We will only prove \Cref{thm:DPPexsandconv} with $\mathcal{A}_\veps[u_\veps;f]$ as given above. The proof for \[ \mathcal{A}_\veps[u_\veps;f](x):=\left\{\begin{aligned} \mathcal{A}_\veps^+[u_\veps](x) \quad \textup{if} \quad f(x)>0,\\ \mathcal{A}_\veps^-[u_\veps](x) \quad \textup{if} \quad f(x)\leq 0, \end{aligned}\right. \] follows in a similar way. \end{remark} \subsection{Existence, uniqueness and properties of solutions} We first prove a comparison principle that, in particular, implies uniqueness of solutions. \begin{lemma}\label{lem:comparisonDPP} Let $d\in \N$, $p>2$, $\veps\in(0,1)$, $f,f_1,f_2\in C(\overline{\Omega})$ and $g_1,g_2\in C_{\textup{b}}(\R^d\setminus\Omega)$.
Assume \eqref{as:trunc2}. Consider two bounded Borel functions $\overline{u},\underline{u} \colon \R^d \to \R$ such that \begin{equation*} \left\{ \begin{aligned} \overline{u}(x)&\geq \mathcal{A}_\veps[\overline{u};f](x)-\veps^2J_p(f_1(x)), &\textup{if}&\quad x\in \Omega,\\ \overline{u}(x)&\geq g_1(x), &\textup{if}& \quad x\in \R^d\setminus\Omega, \end{aligned}\right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} \underline{u}(x)&\leq \mathcal{A}_\veps[\underline{u};f](x)-\veps^2J_p(f_2(x)), &\textup{if}&\quad x\in \Omega,\\ \underline{u}(x)&\leq g_2(x), &\textup{if}& \quad x\in \R^d\setminus\Omega. \end{aligned}\right. \end{equation*} If $f_1\leq f_2$ and $g_1\geq g_2$, then $\overline{u}\geq \underline{u}$ in $\R^d$. In particular, there exists at most one bounded Borel function $u$ satisfying \eqref{eq:DPP}. \end{lemma} \begin{proof} Since $\overline{u}\geq \underline{u}$ in $\R^d\setminus \Omega$, we only need to show that $\overline{u}\geq \underline{u}$ in $\Omega$. Assume by contradiction that $\overline{u}(x)< \underline{u}(x)$ for some $x\in \Omega$, and let $x_0\in \Omega$ and $S>0$ be such that \[ S=\underline{u}(x_0)-\overline{u}(x_0)= \sup_{x\in \R^d}\{\underline{u}(x)-\overline{u}(x)\}. \] Define $\underline{v}=\underline{u}-S$, so that $\underline{v}(x_0)=\overline{u}(x_0)$ and $\underline{v}\leq \overline{u}$ in $\R^d$. Moreover, \begin{align*} \underline{v}(x)&=\underline{u}(x)-S \\ & \leq \mathcal{A}_\veps[\underline{u};f](x)-S-\veps^2J_p(f_2(x)) \\ & = \mathcal{A}_\veps[\underline{u}-S;f](x)-\veps^2J_p(f_2(x))\\ &= \mathcal{A}_\veps[\underline{v};f](x)-\veps^2J_p(f_2(x)), \end{align*} that is, \begin{equation*} \left\{ \begin{aligned} \underline{v}(x)&\leq \mathcal{A}_\veps[\underline{v};f](x)-\veps^2J_p(f_2(x)), &\textup{if}&\quad x\in \Omega,\\ \underline{v}(x)&\leq g_2(x)-S, &\textup{if}& \quad x\in \R^d\setminus\Omega. \end{aligned}\right. 
\end{equation*} In particular, we have that \begin{align*} 0&=\underline{v}(x_0)-\overline{u}(x_0) \\ &\leq \mathcal{A}_\veps[\underline{v};f](x_0)-\mathcal{A}_\veps[\overline{u};f](x_0) - \veps^2 (J_p(f_2(x_0))-J_p(f_1(x_0)))\\ &\leq \mathcal{A}_\veps[\underline{v};f](x_0)-\mathcal{A}_\veps[\overline{u};f](x_0). \end{align*} If $f(x_0)\geq0$, we have, from the above identity, \begin{align*} 0\leq &\mathcal{A}_\veps^+[\underline{v}](x_0)-\mathcal{A}_\veps^+[\overline{u}](x_0)\\ = & \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x_0)} \underline{v} + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{v}](x_0) \right\}\\ &-\inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x_0)} \overline{u} + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\overline{u}](x_0) \right\}\\ \leq&\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \left(\sup_{B_{\veps^2c^{1-\alpha}}(x_0)} \underline{v}-\sup_{B_{\veps^2c^{1-\alpha}}(x_0)} \overline{u}\right)+ (1-\alpha) \left(\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{v}](x_0)-\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\overline{u}](x_0) \right)\right\}\\ \leq&\sup_{c\in [m(\veps),M(\veps)]}\left\{(1-\alpha)(1-\beta)\fint_{B_{\gamma \veps c^{-\frac{\alpha}{2}} }(x_0)}(\underline{v}(y)-\overline{u}(y))\dd y\right\}\\ \leq & \frac{(1-\alpha)(1-\beta)}{|B_{\gamma \veps m(\veps)^{-\frac{\alpha}{2}}}(x_0)|}\int_{B_{\gamma \veps M(\veps)^{-\frac{\alpha}{2}}}(x_0)}(\underline{v}(y)-\overline{u}(y))\dd y. \end{align*} This implies that $\underline{v}= \overline{u}$ on $B_\eta(x_0)$ for some $\eta>0$ that depends only on $\veps$. A similar argument follows if $f(x_0)< 0$. Repeating this process iteratively at new contact points, we get that $\underline{v}(x)= \overline{u}(x)$ for some $x\in \R^d\setminus \Omega$, and thus, we have that \[ \overline{u}(x)=\underline{v}(x) \leq g_2(x)-S \leq g_1(x)-S \leq \overline{u}(x)-S, \] which is a contradiction since $S>0$. Uniqueness of solutions of \eqref{eq:DPP} follows by taking $f_1=f_2=f$ and $g_1=g_2$.
\end{proof} Before proceeding to prove existence of solutions, we need to ensure the existence of a subsolution and a supersolution. \begin{lemma}\label{lem:barrier} Let $d\in \N$, $p>2$, $\veps\in(0,1)$, $f\in C(\overline{\Omega})$ and $g\in C_{\textup{b}}(\R^d\setminus\Omega)$. Assume \eqref{as:trunc2}. Then there exist bounded Borel functions $\underline{u}$ and $\overline{u}$ such that \begin{align}\label{eq:expsub} \left\{\begin{aligned}& \underline{u}(x) \leq \mathcal{A}_\veps[\underline{u};f](x)-\veps^2 J_p(f(x))&\textup{if}& \quad x\in \Omega,\\ & \underline{u}(x) \leq g(x)&\textup{if}&\quad x\in\mathbb{R}^d\setminus \Omega, \end{aligned}\right. \end{align} and \begin{align}\label{eq:expsuper} \left\{\begin{aligned}& \overline{u}(x) \geq \mathcal{A}_\veps[\overline{u};f](x)-\veps^2 J_p(f(x))&\textup{if}& \quad x\in \Omega,\\ & \overline{u}(x) \geq g(x)&\textup{if}&\quad x\in\mathbb{R}^d\setminus \Omega. \end{aligned}\right. \end{align} \end{lemma} \begin{proof} First, let us observe that, given any function $\underline{u}$, we have \[ \mathcal{A}_\veps[\underline{u};f](x) \geq \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} \underline{u} + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{u}](x) \right\}. \] Since $\Omega$ is a bounded set, we can choose $R>0$ large enough such that $\Omega \subset B_R(0)$ and \[ (B_{\veps^2 M(\veps)^{1-\alpha}}(0)+ \Omega)\cup (B_{ \veps m(\veps)^{-\frac{\alpha}{2}}\sqrt{2(p+d)}}(0)+ \Omega) \subset B_R(0). \] Given two generic constants $L,T>0$, to be chosen later, and using the convention $x=(x_1,\ldots,x_d)$, let us define \begin{align*} \underline{u}(x)=\left\{ \begin{aligned} &e^{L x_1}-T &\textup{if}& \quad x\in B_R(0),\\ &-\|g\|_{L^\infty(\R^d\setminus\Omega)} \quad &\textup{if}& \quad x\in \mathbb{R}^d\setminus B_R(0), \end{aligned} \right. \end{align*} which is clearly a bounded Borel function.
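The elementary bounds for the exponential barrier that are derived next can be sanity-checked numerically in one dimension. The sketch below (with illustrative values of $L$ and of the radii, and an interval in place of a ball) verifies the two Taylor-type inequalities $\inf_{[x-r,x+r]}e^{L\cdot}-e^{Lx}\geq -rLe^{Lx}$ and, by convexity, that the average of $e^{L\cdot}$ over $[x-r,x+r]$ exceeds $e^{Lx}$ by at least $(r^{2}/6)L^{2}e^{Lx}$:

```python
import numpy as np

# Numerical sanity check (illustrative; an interval stands in for a ball) of the
# two Taylor-type bounds used for the barrier profile x -> exp(L*x):
#   inf over [x-r, x+r]:      min exp(L*y) - exp(L*x) >= -r*L*exp(L*x)
#   average over [x-r, x+r]:  mean exp(L*y) - exp(L*x) >= (r^2/6)*L^2*exp(L*x)
# The first follows from exp(-t) - 1 >= -t; the second from convexity, since the
# exact average gap is (sinh(L*r)/(L*r) - 1)*exp(L*x) >= ((L*r)^2/6)*exp(L*x).

L = 3.0
u = lambda x: np.exp(L * x)

rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    r = rng.uniform(1e-3, 0.2)
    ys = np.linspace(x - r, x + r, 2001)
    inf_gap = u(ys).min() - u(x)    # attained at y = x - r by monotonicity
    mean_gap = u(ys).mean() - u(x)
    assert inf_gap >= -r * L * u(x)
    assert mean_gap >= (r**2 / 6) * L**2 * u(x)
print("barrier inequalities verified on random samples")
```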
Note that, in $B_R(0)$, we have that $\underline{u}$ and all its derivatives are strictly increasing functions in the first variable $x_1$. More precisely, for all $n\in \mathbb{N}$ and all $x\in B_R(0)$, we have \[ \partial_{x_1}^n \underline{u}(x) = L^n e^{L x_1} >0. \] Moreover, for every $x\in \Omega$, we have \[ \inf_{B_{\veps^2c^{1-\alpha}}(x)} \underline{u}-\underline{u}(x)= -\veps^2 c^{1-\alpha} \partial_{x_1} \underline{u}(x) + \frac{(\veps^2 c^{1-\alpha})^2 }{2} \partial_{x_1}^2 \underline{u}(\xi)\geq -\veps^2 c^{1-\alpha} L e^{Lx_1}, \] where $\xi\in (x_1-\veps^2c^{1-\alpha}, x_1)$. In a similar way, \begin{align*} \frac{1}{2} \inf_{B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)}\underline{u}+ \frac{1}{2} \sup_{B_{\gamma\veps c^{-\frac{\alpha}{2}}}(x)}\underline{u}-\underline{u}(x) \geq \veps^2c^{-\alpha}\partial^2_{x_1}\underline{u}(x) = \veps^2c^{-\alpha} L^2 e^{L x_1}, \end{align*} and \[ \fint_{B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)} \underline{u}(y) \dd y - \underline{u}(x) \geq \veps^2c^{-\alpha} K_{d} L^2 e^{Lx_1}, \] where $K_d$ is a positive constant depending only on the dimension. Thus, there exists a constant $\tilde{K}_d$, depending only on the dimension, such that \[ \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{u}](x)-\underline{u}(x) \geq \veps^2c^{-\alpha} \tilde{K}_{d} L^2 e^{Lx_1}. \] We are now ready to show that $\underline{u}$ satisfies \eqref{eq:expsub} for a suitable choice of the constants $L$ and $T$.
By direct computations, \begin{align*} &\frac{ \mathcal{A}_\veps[\underline{u};f](x)-\underline{u}(x)}{\veps^2} -J_p(f(x)) \\ &\geq \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \frac{\displaystyle \inf_{B_{\veps^2c^{1-\alpha}}(x)} \underline{u}-\underline{u}(x)}{\veps^2} + (1-\alpha) \frac{\mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{u}](x)-\underline{u}(x)}{\veps^2} \right\}- \|f\|_{L^\infty(\Omega)}^{\frac{1}{p-1}}\\ &\geq e^{Lx_1} \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha c^{1-\alpha} (-L) + (1-\alpha) c^{-\alpha}\left(\tilde{K}_d L^2\right) \right\}- \|f\|_{L^\infty(\Omega)}^{\frac{1}{p-1}}\\ &\geq e^{Lx_1} \left( - \alpha M(\veps)^{1-\alpha} L + (1-\alpha) M(\veps)^{-\alpha}\tilde{K}_d L^2- \|f\|_{L^\infty(\Omega)}^{\frac{1}{p-1}}\right)\\ &\geq0, \end{align*} where, in the next-to-last estimate, we have used that the quantity inside the infimum is decreasing in $c$ and that $e^{Lx_1}\geq1$ (after a translation, we may assume that $\Omega\subset\{x_1>0\}$), and the last estimate follows by taking $L$ big enough, since the positive term depends quadratically on $L$ while the negative ones depend sub-quadratically on $L$. Thus, for all $x\in \Omega$, we have that \[ \underline{u}(x) \leq\mathcal{A}_\veps[\underline{u};f](x) - \veps^2 J_p(f(x)), \] and this holds independently of the choice of $T$. Finally, we can choose $T$ large enough such that \[ \underline{u}(x) \leq -\|g\|_{L^\infty(\R^d\setminus\Omega)} \] for all $x\in B_R(0)$. This implies that $\underline{u}\leq g$ in $\R^d\setminus \Omega$, which concludes the proof of the fact that $\underline{u}$ satisfies \eqref{eq:expsub}. To prove the existence of $\overline{u}$ satisfying \eqref{eq:expsuper}, it is enough to take $\overline{u}=-\underline{u}$ and observe that \[ \mathcal{A}_\veps[\overline{u};f](x) \leq \sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} \overline{u} + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\overline{u}](x) \right\}. \] \end{proof} We are now ready to establish existence of solutions of the dynamic programming principle \eqref{eq:DPP}.
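The existence proof that follows is constructive: it iterates the operator defining \eqref{eq:DPP} starting from a sub- or supersolution. The following one-dimensional numerical sketch illustrates this iteration under simplifying assumptions that are ours, not the paper's: the average $\mathcal{M}$ is replaced by a plain interval mean, the truncation interval $[m(\veps),M(\veps)]$ by a fixed grid of values of $c$, and all numerical parameters are illustrative. Starting from the constant $0$, which is a supersolution in this discrete setting, the iterates decrease monotonically to a fixed point:

```python
import numpy as np

# Minimal 1D sketch of the iteration u_k = A_eps[u_{k-1}; f] - eps^2 J_p(f)
# behind the existence proof for (DPP). Simplified, illustrative setup only.

p, eps, h = 3.0, 0.2, 0.02
alpha = (p - 2) / (p - 1)
J = lambda t: np.sign(t) * abs(t) ** (1 / (p - 1))

xs = np.linspace(-0.5, 1.5, 101)            # Omega = (0,1) plus an exterior collar
inside = (xs > h / 2) & (xs < 1 - h / 2)
idx_inside = np.where(inside)[0]
f_val, g_val = 1.0, 0.0                     # f > 0 selects the A_eps^+ branch; u = g outside
cs = [0.5 + 0.375 * i for i in range(5)]    # stand-in for c in [m(eps), M(eps)]
windows = [(max(1, int(round(eps**2 * c**(1 - alpha) / h))),
            max(1, int(round(eps * c**(-alpha / 2) / h)))) for c in cs]

def dpp_step(u):
    """inf over c of alpha*(sup over small ball) + (1-alpha)*(interval mean), minus eps^2 J_p(f)."""
    out = u.copy()
    for i in idx_inside:
        best = min(alpha * u[i - ks:i + ks + 1].max()
                   + (1 - alpha) * u[i - ka:i + ka + 1].mean() for ks, ka in windows)
        out[i] = best - eps**2 * J(f_val)
    return out

u = np.full_like(xs, g_val)                 # u_0 = 0 is a discrete supersolution here
for it in range(8000):
    u_new = dpp_step(u)
    assert np.all(u_new <= u + 1e-12)       # iterates decrease monotonically
    res = np.abs(u_new - u).max()
    u = u_new
    if res < 1e-10:
        break
print("converged in", it + 1, "iterations; residual", res)
```

This mirrors the monotone scheme of the proof below, with the roles of sub- and supersolutions exchanged; the fixed point solves the discrete analogue of \eqref{eq:DPP}.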
\begin{lemma} Let $d\in \N$, $p>2$, $\veps\in(0,1)$, $f\in C(\overline{\Omega})$ and $g\in C_{\textup{b}}(\R^d\setminus\Omega)$. Assume \eqref{as:trunc2}. Then there exists a unique bounded Borel function $u_\veps$ satisfying \eqref{eq:DPP}. \end{lemma} \begin{proof} Uniqueness is already established. Let us prove existence. By Lemma \ref{lem:barrier}, we can find a bounded Borel function $u_0$ such that \begin{align*} \left\{\begin{aligned}& u_{0}(x) \leq \mathcal{A}_\veps[u_{0};f](x)-\veps^2 J_p(f(x))&\textup{if}& \quad x\in \Omega,\\ & u_{0}(x) \leq g(x)&\textup{if}&\quad x\in\mathbb{R}^d\setminus \Omega, \end{aligned}\right. \end{align*} and define $\{u_k\}_{k\in \mathbb{N}}$ by recursion as \begin{align*} u_{k}(x)= \left\{\begin{aligned}&\mathcal{A}_\veps[u_{k-1};f](x)-\veps^2 J_p(f(x))&\textup{if}& \quad x\in \Omega,\\ &g(x)&\textup{if}&\quad x\in\mathbb{R}^d\setminus \Omega. \end{aligned}\right. \end{align*} \noindent\textbf{Step 1:} Let us prove that, for every $x\in \R^d$, the sequence $\{u_k(x)\}_{k\in \mathbb{N}}$ is nondecreasing in $k$. Clearly, $u_0\leq u_1$ in $\mathbb{R}^d$. Assume, by induction, that $u_{k-1}\leq u_k$ in $\mathbb{R}^d$. Then, on one hand, if $x\in \R^d\setminus \Omega$, we have \[ u_{k+1}(x)=g(x)=u_k(x). \] On the other hand, if $x\in \Omega$, we can use the monotonicity of $\mathcal{A}_\veps$ together with the inductive hypothesis to get \[ u_{k+1}(x)= \mathcal{A}_\veps[u_{k};f](x)-\veps^2 J_p(f(x)) \geq \mathcal{A}_\veps[u_{k-1};f](x)-\veps^2 J_p(f(x))= u_k(x). \] \noindent\textbf{Step 2:} We now prove that for all $k\in \mathbb{N}$ we have that \begin{align*} \left\{\begin{aligned}& u_{k}(x) \leq \mathcal{A}_\veps[u_{k};f](x)-\veps^2 J_p(f(x))&\textup{if}& \quad x\in \Omega,\\ & u_{k}(x) =g(x)&\textup{if}&\quad x\in\mathbb{R}^d\setminus \Omega. \end{aligned}\right. \end{align*} By definition, we have that $u_k=g$ in $\mathbb{R}^d\setminus \Omega$.
Moreover, by Step 1, for every $x\in \Omega$ we have that \[ u_{k}(x) = \mathcal{A}_\veps[u_{k-1};f](x)-\veps^2 J_p(f(x)) \leq \mathcal{A}_\veps[u_{k};f](x)-\veps^2 J_p(f(x)). \] \noindent\textbf{Step 3:} By Lemma \ref{lem:barrier}, there exists $\overline{u}$ such that \begin{equation*} \left\{ \begin{aligned} \overline{u}(x)&\geq \mathcal{A}_\veps[\overline{u};f](x)-\veps^2J_p(f(x)) &\textup{if}&\quad x\in \Omega,\\ \overline{u}(x)&\geq g(x) &\textup{if}& \quad x\in \R^d\setminus\Omega. \end{aligned}\right. \end{equation*} We then have, by the comparison principle stated in Lemma \ref{lem:comparisonDPP}, that $u_k\leq \overline{u}$ in $\mathbb{R}^d$. This implies, by monotonicity of the sequence $u_k$, that there exists a bounded Borel function $u$ such that \[ u_k\to u \quad \textup{as} \quad k\to+\infty \quad \textup{pointwise in } \mathbb{R}^d. \] \noindent\textbf{Step 4:} We now show that $u_k\to u$ as $k\to+\infty$ uniformly in $\mathbb{R}^d$. We follow ideas from \cite{Lew20}. Note that $u=u_k=g$ in $\mathbb{R}^d\setminus\Omega$ for all $k\in \mathbb{N}$, so we only need to ensure uniform convergence in $\Omega$. First, given $u_m,u_n$ with $m>n$, we have the following estimate for all $x\in \Omega$: \begin{align*} |u_{m+1}(x)-u_{n+1}(x)|=& |\mathcal{A}_\veps[u_m;f](x)-\mathcal{A}_\veps[u_n;f](x)| \\ \leq& \sup_{c\in [m(\veps),M(\veps)]} \Big\{\alpha \sup_{y\in\R^d} |u_m(y)-u_n(y)|+(1-\alpha)\beta \sup_{y\in\R^d} |u_m(y)-u_n(y)| \\ &+ (1-\alpha)(1-\beta) \frac{1}{|B_{\gamma \veps c^{-\frac{\alpha}{2}}}|}\|u_m-u_n\|_{L^1(\Omega)}\Big\}\\ \leq& \eta \sup_{y\in\R^d} |u_m(y)-u_n(y)| + \frac{C_{d,p}}{|B_{\veps M(\veps)^{-\frac{\alpha}{2}}}|}\|u_m-u_n\|_{L^1(\Omega)}\\ \leq& \eta \sup_{y\in\R^d} |u(y)-u_n(y)| + \frac{C_{d,p}}{|B_{\veps M(\veps)^{-\frac{\alpha}{2}}}|}\|u-u_n\|_{L^1(\Omega)}, \end{align*} where $\eta=\alpha+(1-\alpha)\beta<1$ and we have used in the last step that $u_k$ is a nondecreasing sequence.
Passing to the limit as $m\to+\infty$ in the above estimate, we get \[ |u(x)-u_{n+1}(x)|\leq \eta \sup_{y\in\R^d} |u(y)-u_n(y)| + \frac{C_{d,p}}{|B_{\veps M(\veps)^{-\frac{\alpha}{2}}}|}\|u-u_n\|_{L^1(\Omega)}, \] which implies \[ \sup_{x\in\R^d} |u(x)-u_{n+1}(x)|\leq \eta \sup_{x\in\R^d} |u(x)-u_n(x)| + \frac{C_{d,p}}{|B_{\veps M(\veps)^{-\frac{\alpha}{2}}}|}\|u-u_n\|_{L^1(\Omega)}. \] Let $$A_{n}:=\sup_{x\in\R^d} |u(x)-u_{n+1}(x)|.$$ Passing to the limit as $n\to+\infty$ in the above estimate, using the Dominated Convergence Theorem (or the Monotone Convergence Theorem), and noting that the limit exists since $A_n$ is nonincreasing, \[ \lim_{n\to+\infty} A_{n}=\lim_{n\to+\infty} A_{n+1} \leq \eta\lim_{n\to+\infty} A_{n}, \] which implies that $$\lim_{n\to+\infty} A_{n}=0,$$ and completes the proof of this step. \noindent\textbf{Step 5:} Finally, we show that $u$ is a solution of \eqref{eq:DPP}. Clearly, $u=g$ in $\mathbb{R}^d\setminus \Omega$. Moreover, it holds that \begin{align*} u(x)&= \lim_{k\to+\infty} u_k(x) \\ & = \lim_{k\to +\infty}\mathcal{A}_\veps[u_{k-1};f](x)-\veps^2 J_p(f(x)) \\ &= \mathcal{A}_\veps[\lim_{k\to +\infty} u_{k-1};f](x)-\veps^2 J_p(f(x))\\ &=\mathcal{A}_\veps[u;f](x)-\veps^2 J_p(f(x)), \end{align*} where we have used the uniform convergence of Step 4 to interchange the limit and the average $\mathcal{A}_\veps$. The proof is completed. \end{proof} Let us finally prove that the solutions of \eqref{eq:DPP} are uniformly bounded independently of $\veps>0$. We also find boundary estimates. \begin{lemma}\label{lem:unifBarr+boundaryesti} Let $d\in \N$, $p>2$, $\veps\in(0,1)$, $\Omega\subset\R^d$ be an open bounded set, $f\in C(\overline{\Omega})$ and $g\in C_{\textup{b}}(\R^d\setminus\Omega)$. Assume \eqref{as:trunc2}. Let $u_\veps$ be the solution to \eqref{eq:DPP}. Then, there exist $\veps_0>0$ and $T>0$ such that for all $\veps\in(0,\veps_0)$ we have that \[ \|u_\veps\|_{L^\infty(\R^d)} \leq T, \] where $T$ is a positive constant depending on $p$, $\Omega$, $f$ and $g$ (but not on $\veps$).
Moreover, for all $x_0\in \partial \Omega$, it holds that \begin{align*} \limsup_{\veps\to0, y\to x_0} u_\veps (y) \leq g(x_0), \quad \liminf_{\veps\to0, y\to x_0} u_\veps (y) \geq g(x_0). \end{align*} \end{lemma} \begin{proof} \textbf{Step 1: } Let us first construct a smooth barrier for the Poisson problem \eqref{eq:ppoisson}. It is standard to check that the function $$W(x)= |x|^{-\lambda}$$ with $\lambda = \frac{p+d-2}{p-1}$ (we write $\lambda$ for this exponent to avoid confusion with the exponent $\alpha$ of the averages) is radially decreasing and $\Delta_p W(x) = C_{d,p} |x|^{-(\lambda+1)(p-1)-1}$ for some positive constant $C_{d,p}$. Let us now take a point $x_0\in \partial\Omega$. By the uniform exterior ball condition, there exist $R>0$ and $z_0\in \R^d\setminus \Omega$ such that $\overline{B_R(z_0)}\cap \overline{\Omega}= \{x_0\}$. Now fix a parameter $\eta>0$ and a constant $K>0$ to be chosen later, and consider the following function \begin{align*} \underline{W}(x)= K (|x-z_0|^{-\lambda}-R^{-\lambda}) + g(x_0)-\eta. \end{align*} First, let us note that, for all $x\in \Omega$, we have that \[ \Delta_p \underline{W}(x) = K^{p-1} \Delta_p W(x-z_0) \geq K^{p-1} C_{d,p}\tilde{R}^{-(\lambda+1)(p-1)-1}, \] where $\tilde{R}$ is such that $\Omega \subset B_{\tilde{R}}(z_0)$. Thus, we can choose $K$ large enough in such a way that $$\Delta_p \underline{W}(x) \geq \|f\|_{L^\infty(\Omega)}+1$$ for all $x\in \Omega$. On the other hand, if $x\in \overline{\Omega}\subset \R^d\setminus B_R(z_0) $, we have that \[ \underline{W}(x) \leq g(x_0)-\eta. \] Thus, there exists $r=r(g,\eta)>0$ such that \[ \underline{W}(x) \leq g(x)-\frac{\eta}{2} \quad \textup{for all} \quad x\in B_r(x_0)\cap \partial \Omega. \] Again, we can choose $K$ large enough such that \[ \underline{W}(x) \leq g(x)-\frac{\eta}{2} \quad \textup{for all} \quad x\in \partial \Omega \setminus (B_r(x_0)\cap \partial \Omega), \] since $|x-z_0|^{-\lambda}-R^{-\lambda}<0$ for all $ x\in \partial \Omega \setminus (B_r(x_0)\cap \partial \Omega) \subset \R^d\setminus B_{R}(z_0)$.
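The ``standard'' computation for the radial barrier can be verified with exact rational arithmetic. Assuming the usual radial form $\Delta_p u=|u'(r)|^{p-2}\big((p-1)u''(r)+\frac{d-1}{r}u'(r)\big)$ for $u(r)=r^{-\lambda}$ with $\lambda=\frac{p+d-2}{p-1}$ (written \texttt{lam} below), the bracket reduces to $2(p-1)\lambda\, r^{-\lambda-2}>0$, and the resulting power of $r$ matches the one stated above:

```python
from fractions import Fraction as F

# Exact verification, for several integer (p, d), that for u(r) = r^(-lam) with
# lam = (p+d-2)/(p-1), the bracket (p-1)*u'' + (d-1)*u'/r equals
# 2*(p-1)*lam * r^(-lam-2)  (so Delta_p W > 0), and that the total power of r,
# coming from |u'|^(p-2) * r^(-lam-2), is -(lam+1)*(p-1) - 1.
for p in (3, 4, 5, 7):
    for d in (1, 2, 3, 5, 10):
        lam = F(p + d - 2, p - 1)
        # bracket coefficient, divided by the common factor lam * r^(-lam-2):
        assert (p - 1) * (lam + 1) - (d - 1) == 2 * (p - 1)
        # power of r: (-lam-1)*(p-2) from |u'|^(p-2), plus -lam-2 from the bracket
        assert (-lam - 1) * (p - 2) + (-lam - 2) == -(lam + 1) * (p - 1) - 1
print("radial exponent identities verified for all sampled (p, d)")
```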
Summarizing, we have obtained a function $\underline{W}$ such that \begin{align*} \left\{ \begin{aligned} \Delta_p \underline{W}(x)&\geq \|f\|_{L^\infty(\Omega)}+1 &\textup{if}& \quad x\in \Omega,\\ \underline{W}(x)&\leq g(x)-\eta/2 &\textup{if}& \quad x\in \partial\Omega. \end{aligned} \right. \end{align*} Moreover, $\underline{W}$ is smooth and has a nonvanishing gradient in a neighborhood of $\Omega$. \textbf{Step 2:} We now show that $\underline{W}$ is a uniform-in-$\veps$ barrier for \eqref{eq:DPP}. On one hand, if $f(x)\geq0$, and since $\Delta_p \underline{W}\geq0$ in $\Omega$, we can use the quantitative estimate given in \Cref{pro:asexp-greater2-quantitative} to get \begin{align*} \frac{\mathcal{A}_\veps[\underline{W};f](x) - \underline{W}(x)}{\veps^2} = \frac{\mathcal{A}_\veps^+[\underline{W}](x) - \underline{W}(x)}{\veps^2} = J_p(\Delta_p \underline{W}(x)) + o_\veps(1) \geq J_p(\|f\|_{L^\infty(\Omega)}+1) + o_\veps(1) \geq J_p(f(x)), \end{align*} where we have used that $o_\veps(1)$ is uniform in $\Omega$ and taken $\veps$ small enough in the last inequality. On the other hand, if $f(x)<0$, we have that $\mathcal{A}_\veps[\underline{W};f](x)=\mathcal{A}_\veps^-[\underline{W}](x)$. In this case, we note that \begin{align*} & \frac{\mathcal{A}_\veps^-[\underline{W}](x)- \underline{W}(x)}{\veps^2}= \!\!\!
\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \frac{1}{\veps^2}\left(\displaystyle \inf_{B_{\veps^2c^{1-\alpha}}(x)} \underline{W}-\underline{W}(x)\right) + (1-\alpha)\frac{1}{\veps^2}\left( \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[\underline{W}](x)-\underline{W}(x) \right)\right\}\\ &\qquad = \sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha c^{1-\alpha} \left(-\left|\nabla \underline{W}(x)\right| + \frac{o(\veps^2c^{1-\alpha})}{\veps^2c^{1-\alpha}} \right) + (1-\alpha)c^{-\alpha} \left(\nplap \underline{W}(x) + \frac{o(\veps^2 c^{-\alpha})}{\veps^2c^{-\alpha}}\right)\right\}\\ &\qquad \geq \sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha c^{1-\alpha} \left(-2\left|\nabla \underline{W}(x)\right|\right) + (1-\alpha)c^{-\alpha} \left(\nplap \underline{W}(x)/2 \right)\right\}\\ &\qquad \geq \alpha m(\veps)^{1-\alpha} \left(-2\left|\nabla \underline{W}(x)\right|\right) + (1-\alpha)m(\veps)^{-\alpha} \left(\nplap \underline{W}(x)/2 \right) \\ &\qquad \geq 0 > J_p(f(x)), \end{align*} where we have taken $\veps$ small enough, using that $m(\veps)\to0^+$ as $\veps\to0^+$ and that $\nplap \underline{W}$ is bounded below by a positive constant in $\overline{\Omega}$. Thus, by the comparison principle given in \Cref{lem:comparisonDPP}, we have that $u_\veps \geq \underline{W}$. We conclude that $u_\veps$ is uniformly bounded from below. In a similar way, we can check that $u_\veps$ is uniformly bounded from above. \textbf{Step 3: } Given $x_0\in \partial \Omega$, we have \[ \liminf_{\veps\to0, \, y\to x_0} u_\veps(y) \geq \liminf_{\veps\to0, \, y\to x_0} \underline{W}(y) = \underline{W}(x_0) = g(x_0)-\eta. \] Using an analogous argument to the one used to find the lower bound, we can get that \[ \limsup_{\veps\to0, \, y\to x_0} u_\veps(y) \leq g(x_0)+\eta. \] The conclusion follows by the arbitrariness of $\eta$. \end{proof} \subsection{Convergence as $\veps\to0^+$} Let us establish, for convenience, the notation \begin{equation*} \mathcal{S}(\veps, x, t,\varphi) = \left\{ \begin{aligned} &\frac{t- \mathcal{A}_\veps[\varphi;f](x)}{\veps^2}+J_p(f(x)) \quad &\textup{if}&\quad x\in \Omega,\\ &t-g(x)\quad &\textup{if}&\quad x\in \R^d\setminus\Omega, \end{aligned}\right.
\end{equation*} so that problem \eqref{eq:DPP} can be equivalently formulated as \[ \mathcal{S}(\veps,x, u_\veps(x),u_\veps)=0 \quad \textup{for} \quad x\in \R^d. \] First, we observe the following monotonicity and shifting invariance properties of $\mathcal{S}$, which follow directly from the definition. \begin{lemma} \begin{enumerate}[\rm (a)] \item Let $t\in \R$ and $\phi,\psi$ be two bounded Borel functions such that $\phi\leq \psi$ in $\R^d$. Then \[ \mathcal{S}(\veps,x, t,\phi)\geq \mathcal{S}(\veps,x, t,\psi). \] \item Let $t,\xi,\eta\in \R$, and $\phi$ be a bounded Borel function in $\R^d$. Then, for all $x\in \Omega$, we have that \[ \mathcal{S}(\veps,x, t+\xi,\phi+\xi+\eta)=\mathcal{S}(\veps,x, t,\phi)-\frac{\eta}{\veps^2}. \] \end{enumerate} \end{lemma} \begin{proof}[Proof of convergence of solutions to the DPP as $\varepsilon \to 0^+$] Let us define \begin{align*} \overline{u}(x)= \limsup_{\veps\to0, y\to x} u_\veps(y), \quad \textup{and} \quad \underline{u}(x)= \liminf_{\veps\to0, y\to x} u_\veps(y). \end{align*} Note that, by definition, it holds that $$\underline{u} (x)\leq \overline{u}(x).$$ Now, if we can show that $\overline{u}$ (resp. $\underline{u}$) is a viscosity subsolution (resp. supersolution) of the Poisson problem \eqref{eq:ppoisson}, then, by the comparison principle for \eqref{eq:ppoisson}, we get that $\underline{u}\geq \overline{u}$. Thus, we conclude $$u_\veps (x) \to u(x)=\underline{u}(x) =\overline{u}(x) $$ as $\veps\to0$ uniformly in $\overline{\Omega}$. \noindent \textbf{Step 1:} Let us first show that $\overline{u}$ is a viscosity subsolution at points where $f(x_0)\geq0$. We note first that $\overline{u}$ is a bounded upper semicontinuous function. Take $x_0\in \Omega$ and a function $\varphi\in C^\infty_{\textup{b}}(B_R(x_0))$ such that $\varphi(x_0)=\overline{u}(x_0)$ and $\varphi(x)>\overline{u}(x)$ for all $x\in B_R(x_0)\setminus\{x_0\}$. Our goal is to check that \begin{align}\label{eq:subsolproof} \Delta_p\varphi(x_0)\geq f(x_0).
\end{align}
It is standard to check that there exists a sequence $(\veps_n,y_n) \to (0,x_0)$ as $n\to+\infty$ in such a way that
\[
u_{\veps_n}(x) - \varphi(x) \leq u_{\veps_n}(y_n) - \varphi(y_n) + e^{-\frac{1}{\veps_n}} \quad \textup{for all} \quad x\in B_{R}(x_0).
\]
Now let $\xi_n\coloneqq u_{\veps_n}(y_n)- \varphi(y_n)$. By the monotonicity and shifting properties of $\mathcal{S}$, we get that
\begin{align*}
0&=\mathcal{S}(\veps_n,y_n, u_{\veps_n}(y_n),u_{\veps_n})\\
& =\mathcal{S}(\veps_n,y_n, \varphi(y_n) + \xi_n ,u_{\veps_n}) \\
& \geq \mathcal{S}(\veps_n,y_n, \varphi(y_n) + \xi_n ,\varphi + \xi_n + e^{-\frac{1}{\veps_n}})\\
& \geq \mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi)- \frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2},
\end{align*}
that is,
\begin{equation}\label{eq:keysupervisc}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) \leq \frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2}.
\end{equation}

\textbf{Case 1: } Assume $\Delta_p\varphi(x_0)>0$. We note first that, if $f(x_0)=0$, then $\Delta_p\varphi(x_0)\geq f(x_0)$ trivially. On the other hand, if $f(x_0)>0$, then there exist $\rho>0$ and $N>0$ such that $f(y_n)>0$, $\Delta_p\varphi(y_n)>\rho>0$, $|\nabla\varphi(y_n)| >\rho>0$ and $y_n\in \Omega$ for all $n>N$. Then,
\begin{align*}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) &= \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}[\varphi;f](y_n)}{\veps_n^2}+J_p(f(y_n)) \\
&= \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}^+[\varphi](y_n)}{\veps_n^2}+J_p(f(y_n))\\
&= -J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)),
\end{align*}
where $o_{\veps_n}(1)$ is uniform in $y_n$ by the quantitative estimate given in \Cref{pro:asexp-greater2-quantitative}. Combining the above identity with \eqref{eq:keysupervisc}, we get
\[
-J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)) \leq \frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2}.
\]
Sending $n \to +\infty$, we get that $\Delta_p\varphi(x_0)\geq f(x_0)$, which is what we wanted to prove.

\textbf{Case 2:} Assume now, by contradiction, that $\Delta_p\varphi(x_0)<0$.
Then, there exist $\rho>0$ and $N>0$ such that $ |\nabla\varphi(y_n)| >\rho>0$, $\nplap\varphi(y_n)<-\rho$ and $y_n\in \Omega$ for all $n>N$. Assume that there exists a subsequence $y_{n_j}\to x_0$ as $j\to+\infty$ such that $f(y_{n_j})\geq0$. Then,
\begin{align*}
-\mathcal{S}(\veps_{n_j},y_{n_j}, \varphi(y_{n_j}) ,\varphi)&= \frac{\mathcal{A}_{\veps_{n_j}}[\varphi;f](y_{n_j})-\varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j})) \\
& = \frac{\mathcal{A}_{\veps_{n_j}}^+[\varphi](y_{n_j})-\varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j}))\\
&\leq \inf_{c\in[m({\veps_{n_j}}),M({\veps_{n_j}})]}\Big\{ \alpha c^{1-\alpha}(|\nabla \varphi (y_{n_j})|+\delta)+ (1-\alpha) c^{-\alpha} (\nplap\varphi(y_{n_j}) + \delta)\Big\}\\
&\leq \inf_{c\in[m(\veps_{n_j}),M(\veps_{n_j})]}\Big\{ \alpha c^{1-\alpha}(2|\nabla \varphi(y_{n_j})|)+ (1-\alpha) c^{-\alpha} (-\rho/2)\Big\}\\
& \leq \alpha m(\veps_{n_j})^{1-\alpha}(2|\nabla \varphi(y_{n_j})|)+ (1-\alpha) m(\veps_{n_j})^{-\alpha} (-\rho/2)\\
&<-1,
\end{align*}
where the last inequality holds for $\veps_{n_j}$ small enough. This is clearly a contradiction with \eqref{eq:keysupervisc}. Note that we have strongly used the quantitative estimates of \Cref{pro:asexp-greater2-quantitative}, which are valid away from points where the gradient vanishes.

Thus, we must necessarily have that $f(y_n)<0$ for all $n>\tilde{N}$ with $\tilde{N}>N$ big enough. In this case, we can use \Cref{pro:asexp-greater2-quantitative} to get
\begin{align*}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) &= \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}[\varphi;f](y_n)}{\veps_n^2}+J_p(f(y_n)) \\
&= \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}^-[\varphi](y_n)}{\veps_n^2}+J_p(f(y_n))\\
&= -J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)).
\end{align*}
This estimate, together with \eqref{eq:keysupervisc}, yields
\[
\frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2}\geq -J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)),
\]
and, sending $n\to+\infty$, we obtain $\Delta_p\varphi(x_0)\geq f(x_0)\geq0$, which is a contradiction with the assumption $\Delta_p\varphi(x_0)<0$.

\textbf{Case 3:} Assume by contradiction that $\plap \varphi(x_0)=0$ and $\nabla \varphi(x_0)=0$. By definition of viscosity solution of the $p$-Laplace Poisson problem (see Appendix \ref{app:viscositydef}), we only need to test with this kind of test function at points where $f(x)>0$. Then, there exist $\rho>0$ and $N>0$ such that $ f(y_n) >\rho>0$ for all $n>N$. Thus,
\begin{align*}
-\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi)&\leq \inf_{c\in[m({\veps_n}),M({\veps_n})]}\Big\{ \alpha c^{1-\alpha}(|\nabla \varphi (y_n)|+\delta)+ (1-\alpha) c^{-\alpha} K\Big\}- J_p(f(y_n))\\
&\leq (|\nabla \varphi (y_n)|+\delta)^{\alpha}K^{1-\alpha} + o_{\veps_n}(1) - J_p(\rho).
\end{align*}
From here, we can reach again a contradiction with \eqref{eq:keysupervisc} by letting $n\to+\infty$.

\textbf{Case 4:} Assume that $\plap \varphi(x_0)=0$ and $\nabla \varphi(x_0)\not=0$. On one hand, if $f(x_0)=0$, we trivially have $\Delta_p\varphi(x_0)=0 =f(x_0)$. On the other hand, assume by contradiction that $f(x_0)>0$. We claim that there exists $N>0$ big enough such that $\plap \varphi(y_n)\geq0$ for all $n>N$. If the claim holds, then we conclude, by \Cref{pro:asexp-greater2-quantitative}, that
\begin{align*}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) &= -J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)).
\end{align*}
From here, the conclusion $\Delta_p\varphi(x_0)\geq f(x_0)$ follows using \eqref{eq:keysupervisc}.

Let us prove the claim by contradiction. Assume that there exists a subsequence with $y_{n_j}\to x_0$ as $j\to+\infty$ such that $\Delta_p\varphi(y_{n_j})<0$ (and, thus, $\nplap \varphi(y_{n_j})<0$). Note that there exist $\rho>0$ and $N>0$ such that $f(y_{n_j})>\rho>0$ for all $n_j>N$.
Then, we have
\begin{align*}
-\mathcal{S}(\veps_{n_j},y_{n_j}, \varphi(y_{n_j}) ,\varphi)&= \frac{\mathcal{A}_{\veps_{n_j}}[\varphi;f](y_{n_j})-\varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j})) \\
& = \frac{\mathcal{A}_{\veps_{n_j}}^+[\varphi](y_{n_j})-\varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j}))\\
&\leq \inf_{c\in[m({\veps_{n_j}}),M({\veps_{n_j}})]}\Big\{ \alpha c^{1-\alpha}(|\nabla \varphi (y_{n_j})|+\delta)+ (1-\alpha) c^{-\alpha} (\nplap\varphi(y_{n_j}) + \delta)\Big\}-J_p(\rho)\\
&\leq \inf_{c\in[m(\veps_{n_j}),M(\veps_{n_j})]}\Big\{ \alpha c^{1-\alpha}(2|\nabla \varphi(y_{n_j})|)+ (1-\alpha) c^{-\alpha} \delta\Big\}-J_p(\rho)\\
& \leq (2|\nabla \varphi(y_{n_j})|)^{\alpha} \delta^{1-\alpha}+ o_{\veps_{n_j}}(1)-J_p(\rho)\\
&<-J_p(\rho)/2.
\end{align*}
This is clearly a contradiction with \eqref{eq:keysupervisc}.

\textbf{Final comment on Step 1:} When $x_0\in \partial \Omega$, we can use \Cref{lem:unifBarr+boundaryesti} to get that $$\overline{u}(x_0)\leq g(x_0).$$ Thus, we have shown that $\overline{u}$ is a viscosity subsolution of \eqref{eq:ppoisson}.

\noindent\textbf{Step 2:} Let us show first that $\underline{u}$ is a viscosity supersolution at points where $f(x_0)\geq0$. We note first that $\underline{u}$ is a bounded lower semicontinuous function. Take $x_0\in \Omega$ and a function $\varphi\in C^\infty_{\textup{b}}(B_R(x_0))$ such that $\varphi(x_0)=\underline{u}(x_0)$ and $\varphi(x)<\underline{u}(x)$ for all $x\in (B_R(x_0)\setminus\{x_0\})\cap\Omega$. Arguing as in Step 1, we get that there exists a sequence $(\veps_n,y_n) \to (0,x_0)$ as $n\to+\infty$ in such a way that
\begin{equation}\label{eq:keysubvisc}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) \geq - \frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2}.
\end{equation}

\textbf{Case 1:} Assume $\Delta_p\varphi(x_0) >0$. Then, there exist $\rho>0$ and $N>0$ such that $\nplap\varphi(y_n)>\rho>0$, $|\nabla\varphi(y_n)| >\rho>0$ and $y_n\in \Omega$ for all $n>N$.
Assume that there exists a subsequence $y_{n_j}\to x_0$ as $j\to+\infty$ such that $f(y_{n_j})<0$. Then,
\begin{align*}
-\mathcal{S}(\veps_{n_j},y_{n_j}, \varphi(y_{n_j}) ,\varphi) &= \frac{\mathcal{A}_{\veps_{n_j}}[\varphi;f](y_{n_j})- \varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j})) \\
& = \frac{ \mathcal{A}_{\veps_{n_j}}^-[\varphi](y_{n_j})-\varphi(y_{n_j})}{\veps_{n_j}^2}-J_p(f(y_{n_j}))\\
& \geq \sup_{c\in[m({\veps_{n_j}}),M({\veps_{n_j}})]}\Big\{ \alpha c^{1-\alpha}(-|\nabla \varphi (y_{n_j})|-\delta)+ (1-\alpha) c^{-\alpha} (\nplap\varphi(y_{n_j}) - \delta)\Big\}\\
& \geq \sup_{c\in[m({\veps_{n_j}}),M({\veps_{n_j}})]}\Big\{ \alpha c^{1-\alpha}(-2|\nabla \varphi (y_{n_j})|)+ (1-\alpha) c^{-\alpha} (\nplap\varphi(y_{n_j})/2)\Big\}\\
& \geq \alpha m({\veps_{n_j}})^{1-\alpha}(-2|\nabla \varphi (y_{n_j})|)+ (1-\alpha) m({\veps_{n_j}})^{-\alpha} (\nplap\varphi(y_{n_j})/2)\\
&>1,
\end{align*}
where the last inequality holds for $\veps_{n_j}$ small enough since $\nplap \varphi(y_{n_j})>\rho$. This is clearly a contradiction with \eqref{eq:keysubvisc}.

Thus, we must necessarily have that $f(y_n)\geq0$ for all $n>\tilde{N}$ with $\tilde{N}>N$ big enough. Then,
\begin{align*}
\mathcal{S}(\veps_n,y_n, \varphi(y_n) ,\varphi) &= \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}[\varphi;f](y_n)}{\veps_n^2}+J_p(f(y_n)) \\
& = \frac{\varphi(y_n)- \mathcal{A}_{\veps_n}^+[\varphi](y_n)}{\veps_n^2}+J_p(f(y_n))\\
&= -J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)),
\end{align*}
where $o_{\veps_n}(1)$ is uniform in $y_n$ by the quantitative estimate of \Cref{pro:asexp-greater2-quantitative}. Combining the above identity with \eqref{eq:keysubvisc}, we get
\[
-J_p(\Delta_p \varphi(y_n)) + o_{\veps_n}(1) + J_p(f(y_n)) \geq - \frac{e^{-\frac{1}{\veps_n}}}{\veps_n^2}.
\]
Sending $n \to +\infty$, we get that $\Delta_p\varphi(x_0)\leq f(x_0)$, which is what we wanted to prove.

\textbf{Case 2:} Assume $\Delta_p\varphi(x_0) \leq0$.
In this case, the result is trivial since $\Delta_p\varphi(x_0) \leq0\leq f(x_0)$.

\textbf{Final comment on Step 2:} When $x_0\in \partial \Omega$, we can use \Cref{lem:unifBarr+boundaryesti} to get that $$\underline{u}(x_0)\geq g(x_0).$$ Thus, we conclude that $\underline{u}$ is a viscosity supersolution of \eqref{eq:ppoisson}.

\noindent \textbf{Step 3:} The cases for sub- and supersolutions if $f(x_0)<0$ follow from the above results by replacing $f$ with $-f$ and $u_\veps$ with $-u_\veps$. This concludes the proof.
\end{proof}

\section{The associated game}\label{sec:probabilitythings}

As highlighted in \Cref{sec:intro,sect-main}, the Dynamic Programming Principle examined in the previous section relates to a game. In this section we prove, using probabilistic arguments, that the game has a value that coincides with the solution to the DPP. To simplify the presentation, we will only analyze the case $f\geq0$ in $\Omega$ (the necessary adaptations to cover the general case are similar). We recall the rules of the game here for convenience:
\begin{enumerate}[\noindent \rm G(i)]
\item\label{item1b} Fix a parameter $\veps>0$, an open bounded domain $\Omega$, a starting point $x_0\in \Omega$, a payoff function $g$ defined in $\R^d\setminus\Omega$, and a running payoff function $f$ defined in $\Omega$.
\item\label{item2b} Each turn, given the current position $x\in \Omega$, the second player (who wants to minimize the final payoff) chooses a constant $c \in [m(\veps),M(\veps)]$. Then, the players toss a biased coin with probabilities $\alpha$ for heads and $1-\alpha$ for tails.
\begin{enumerate}[$\bullet$]
\item If the result is heads, the first player (who wants to maximize) chooses the next position at any point in the ball $B_{\veps^2c^{1-\alpha}}(x)$.
\item If the result is tails, they play a round of tug-of-war with noise in the ball $B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)$ with probabilities $\beta$ and $1-\beta$ (see description in \Cref{sec:intro}).
\end{enumerate}
\item\label{item4b} The process described in \rm{G}\eqref{item2b} is repeated, with a running payoff at each move of amount $-\veps^2 J_p(f(x))$. The game continues until the position of the game lies outside $\Omega$ for the first time (this position is denoted by $x_\tau$). When this happens, the second player pays the amount $g(x_\tau)$ to the first player.
\end{enumerate}

To gain some intuition on why the solution of \eqref{eq:DPP} coincides with the value of the game, let us heuristically make some observations. When tug-of-war with noise is played, the expected outcome is given by
$$
\mathcal{M}_{\varepsilon c^{-\frac{\alpha}{2}}}[ u_\veps](x) = \beta\bigg(\frac{1}{2} \sup_{ B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)} u_\veps + \frac{1}{2} \inf_{ B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)} u_\veps\bigg) + (1-\beta)\frac{1}{|B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)|}\int_{ B_{\gamma \veps c^{-\frac{\alpha}{2}}}(x)} u_\veps (y) \, \mathrm{d}y,
$$
while when the first player chooses the next position in $B_{\veps^2c^{1-\alpha}}(x)$ the expected value is given by $$ \sup_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps.$$ Then, for a choice of $c\in [m(\veps),M(\veps)]$, we obtain the following expected outcome:
$$ \alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[u_\veps ](x). $$
Taking into account that the second player wants to minimize and chooses $c\in [m(\veps),M(\veps)]$, we finally arrive at the fact that the expected value at $x$ is given by the expected outcome after playing one round of the game, that is,
$$ u_\veps (x) = \inf_{c\in [m(\veps),M(\veps)]}\left\{\alpha \sup_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[u_\veps](x) \right\} - \veps^2 J_p(f(x)). $$
Thus, we arrive at the fact that the value function verifies $$u_\veps(x)= \mathcal{A}_\veps[u_\veps;f](x) - \veps^2 J_p(f(x))$$ for $x\in \Omega$ and $u_\veps=g$ outside the domain, that is, $u_\veps$ solves \eqref{eq:DPP}.
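Although the analysis here is carried out at the level of the continuous operators, the heuristic above can be sanity-checked numerically. The following Python sketch is purely illustrative and not part of the paper's arguments: the one-dimensional grid, the domain $\Omega=(0,1)$, all parameter values, and the linear choice of $J_p$ are hypothetical assumptions made only to exercise one round of the expected-outcome operator, with the infimum over $c$ discretized.

```python
import numpy as np

# Illustrative sketch (not from the paper) of one round of the DPP operator
#   u(x) = inf_c { alpha * sup_{B_{eps^2 c^{1-alpha}}(x)} u
#                  + (1-alpha) * [ beta*(sup+inf)/2 + (1-beta)*mean ] }
#          - eps^2 * Jp(f(x)),
# on a 1-d grid, with u = g kept fixed outside Omega = (0,1).

def one_round(u, x, f, Jp, eps=0.5, alpha=0.5, beta=0.5, gamma=1.0,
              m=0.5, M=2.0, n_c=20):
    """Apply the one-round expected-outcome operator to grid values u."""
    new_u = u.copy()
    inside = (x > 0.0) & (x < 1.0)            # Omega = (0,1), hypothetical
    for i in np.where(inside)[0]:
        best = np.inf
        for c in np.linspace(m, M, n_c):      # Player II's choice of c
            ball1 = np.abs(x - x[i]) <= eps**2 * c**(1.0 - alpha)
            ball2 = np.abs(x - x[i]) <= gamma * eps * c**(-alpha / 2.0)
            # tug-of-war with noise: coin toss (prob beta) + uniform noise
            tug = beta * 0.5 * (u[ball2].max() + u[ball2].min()) \
                  + (1.0 - beta) * u[ball2].mean()
            best = min(best, alpha * u[ball1].max() + (1.0 - alpha) * tug)
        new_u[i] = best - eps**2 * Jp(f(x[i]))  # running payoff of the round
    return new_u
```

With $f\equiv 0$, any constant function is a fixed point of `one_round`, mirroring the fact that constants solve the homogeneous DPP; iterating the operator is the natural way to approximate the value function in this toy setting.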
\begin{remark}
When $f(x)<0$, a similar reasoning leads to the part of the DPP where
$$
u_\veps(x)=\sup_{c\in [m(\veps),M(\veps)]}\left\{\alpha \inf_{B_{\veps^2c^{1-\alpha}}(x)} u_\veps + (1-\alpha) \mathcal{M}_{\veps c^{-\frac{\alpha}{2}}}[u_\veps](x) \right\} - \veps^2 J_p(f(x)).
$$
\end{remark}

\subsection{Rigorous game-related results}

The game described in {\rm{G}\eqref{item1b}-\rm{G}\eqref{item2b}-\rm{G}\eqref{item4b}} generates a sequence of states
$$ P=\{ x_{0},x_{1},\dots,x_{\tau}\}, $$
where $x_0$ is the initial position of the game and $x_\tau$ is the final position of the game (the first time that the position of the game lies outside $\Omega$).

A \emph{strategy} for Player~I is a function \( S_I \) defined on the partial histories of the game, which specifies the choices that Player~I will make. In particular, it determines the next position of the game when the biased coin toss results in heads, as well as the next position when Player~I is playing tug-of-war. Similarly, a strategy for Player~II is a function \( S_{II} \), also defined on the partial histories, that specifies the choice of \( c\in [m(\veps),M(\veps)] \) and the next position of the game when playing tug-of-war.

When the two players have fixed their strategies \( S_I \) and \( S_{II} \), we can compute the \emph{expected outcome} as follows: the expected payoff, when starting from \( x_0 \) and using the strategies \( S_I \) and \( S_{II} \), is given by
\begin{equation} \label{eq:defi-expectation}
\mathbb{E}_{S_{I},S_{II}}^{x_0} \left[ - \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i)) + g(x_\tau) \right].
\end{equation}
Here, the expected value is computed according to the probability measure obtained from Kolmogorov's extension theorem. Observe that the game ends almost surely, regardless of the strategies used by the players, that is, \( \mathbb{P} (\tau =+\infty) = 0 \), and therefore the expected value in \eqref{eq:defi-expectation} is well-defined.
This is due to the random movements that occur when playing tug-of-war with noise, which force the position of the game out of the domain after a finite number of plays with positive probability. Then, since Player I tries to maximize the expected outcome and Player II aims to minimize it, the \emph{value of the game for Player I} is given by
\[
u^\varepsilon_I(x_0)=\sup_{S_I}\inf_{S_{II}} \, \mathbb{E}_{S_{I},S_{II}}^{x_0}\left[- \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i))+ g (x_\tau) \right],
\]
while the \emph{value of the game for Player II} is given by
\[
u^\varepsilon_{II}(x_0)= \inf_{S_{II}}\sup_{S_I} \, \mathbb{E}_{S_{I},S_{II}}^{x_0}\left[ - \varepsilon^2\sum_{i=0}^{\tau-1} J_p(f(x_i)) + g (x_\tau) \right].
\]
Intuitively, the values $u^\varepsilon_I(x_0)$ and $u^\varepsilon_{II}(x_0)$ are the best expected outcomes each player can guarantee when the game starts at $x_0$. Notice that it holds that $$u^\varepsilon_I (x) \leq u^\varepsilon_{II}(x).$$ If these two values coincide, $u^\varepsilon_I (x) = u^\varepsilon_{II}(x)$, we say that \emph{the game has a value}.

Now, we are ready to state and prove our main result in this section.

\begin{theorem} \label{teo.valor.DPP}
Let the assumptions of \Cref{thm:DPPexsandconv} hold. Then, the game described in {\rm{G}\eqref{item1b}-\rm{G}\eqref{item2b}-\rm{G}\eqref{item4b}} has a value that is characterized by the unique solution $u_\veps$ to \eqref{eq:DPP}, that is, it holds that
$$u_\varepsilon(x)=u^\varepsilon_{I}(x)=u^\varepsilon_{II}(x) \quad \textup{for all} \quad x \in \mathbb{R}^d.$$
\end{theorem}

\begin{proof}
\textbf{Step 1:} Let us prove first that $u_\veps \leq u_I^\veps$. Fix $\delta>0$.
Given any $c \in [m(\veps),M(\veps)]$ chosen by Player II, we define a strategy $S_I^*$ for Player I following the solution to the DPP \eqref{eq:DPP} as follows: if the result of the toss is heads, Player I chooses a point
\begin{align*}
x_{k+1}^I=S_{I}^{\ast}(x_0,\dots,x_k) \quad \mbox{such that} \quad \sup_{y \in B_{\varepsilon^2 c^{1-\alpha}}(x_k)}u_{\varepsilon}(y)-\frac{\delta}{2^{k+1}}\leq u_{\varepsilon}(x_{k+1}^I),
\end{align*}
which corresponds to a quasi-optimal strategy aiming to maximize $u_\varepsilon$. If the result of the toss is tails, and thus they must play tug-of-war with noise, Player I chooses another point
\begin{align*}
\tilde{x}_{k+1}^I=S_{I}^{\ast}(x_0,\dots,x_k) \quad \mbox{such that} \quad \sup_{y \in B_{\gamma \varepsilon c^{-\frac{\alpha}{2}}}(x_k)}u_{\varepsilon}(y)-\frac{\delta}{2^{k+1}}\leq u_{\varepsilon}(\tilde{x}_{k+1}^I),
\end{align*}
which again corresponds to a quasi-optimal strategy. Given the strategy $S^*_I$ for Player I and any strategy $S_{II}$ (i.e., $(c_{k+1},x^{II}_{k+1})=S_{II}(x_0,\dots,x_k)$) for Player II, we consider the sequence of random variables
$$ M_k=u_{\varepsilon}(x_k) - \varepsilon^2 \sum_{i=0}^{k-1} J_p(f(x_i)) - \frac{\delta}{2^k}.$$
Let us see that $(M_k)_{k\geq 0}$ is a \textit{submartingale}. To this end, we need to estimate $$\mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0}[M_{k+1}|M_0,\dots ,M_k]=\mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0}[M_{k+1}|x_k].$$ It holds that
\begin{align*}
\mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0}[M_{k+1}|x_k] \geq & \, \alpha \left( \sup_{B_{\varepsilon^2 c_{k+1}^{1-\alpha}}(x_k)} u_{\varepsilon} -\frac{\delta}{2^{k+1}}\right)\\
& + (1-\alpha) \left( \beta\left(\frac{1}{2} \left(\sup_{B_{\gamma \veps c_{k+1}^{-\frac{\alpha}{2}}}(x_k)} u_{\varepsilon}-\frac{\delta}{2^{k+1}}\right) + \frac{1}{2} \inf_{B_{\gamma \veps c_{k+1}^{-\frac{\alpha}{2}}}(x_k)} u_{\varepsilon}\right) + (1-\beta)\frac{1}{|B_{\gamma \veps c_{k+1}^{-\frac{\alpha}{2}}}(x_k)|}\int_{ B_{\gamma \veps c_{k+1}^{-\frac{\alpha}{2}}} (x_k)} u_{\varepsilon} (y) \, \mathrm{d}y \right)\\
& - \varepsilon^2 \sum_{i=0}^{k} J_p(f(x_i)) -\frac{\delta}{2^{k+1}}\\
\geq & \, \mathcal{A}_\veps^+[u_\veps](x_k)- \left(\alpha+ (1-\alpha)\frac{\beta}{2}\right) \frac{\delta}{2^{k+1}}- \veps^2 \sum_{i=0}^{k} J_p(f(x_i)) -\frac{\delta}{2^{k+1}}\\
\geq& \, u_\veps(x_k) + \veps^2 J_p(f(x_k))-\veps^2 \sum_{i=0}^{k} J_p(f(x_i)) -\frac{\delta}{2^{k}}\\
=& \, M_k,
\end{align*}
where we have used that $u_\veps$ is a solution of \eqref{eq:DPP} in the last step. Therefore, $(M_k)_{k\geq 0}$ is a \textit{submartingale}. Now, using the \textit{Optional Stopping Theorem} (cf. \cite{Williams}), we conclude that, for any $k \in \mathbb{N}$, we have that
\begin{align*}
\mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0}[M_{\min\{\tau, k\}}]\geq M_0,
\end{align*}
where $\tau$ is the first time such that $x_{\tau}\notin\Omega$. Letting $k\rightarrow\infty$, we arrive at
$$ \mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0}[M_\tau] = \mathbb{E}_{S_{I}^{\ast},S_{II}}^{x_0} \Big[ g(x_\tau) - \varepsilon^2\sum_{i=0}^{\tau-1} J_p(f(x_i)) -\frac{\delta}{2^\tau}\Big] \geq u_\varepsilon (x_0) - \delta . $$
Since the above estimate holds for any strategy $S_{II}$ used by Player II, it also holds after taking the infimum over all possible strategies of Player II. Thus, we get
\begin{align*}
u^\varepsilon_I(x_0)&=\sup_{S_I}\inf_{S_{II}}\, \mathbb{E}_{S_{I},S_{II}}^{x_0}\left[- \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i)) + g (x_\tau) \right] \\
&\geq \inf_{S_{II}}\, \mathbb{E}_{S_{I}^*,S_{II}}^{x_0}\left[- \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i)) + g (x_\tau) \right]\\
&\geq u_\veps(x_0)-\delta.
\end{align*}
The arbitrariness of $\delta$ concludes the proof of Step 1.

\textbf{Step 2:} Let us prove now that $u_\veps \geq u_{II}^\veps$. Fix $\delta>0$.
To this end, we define the following quasi-optimal strategy $S_{II}^*$ for Player II: first choose $c^*_{k+1}$ in such a way that
\begin{align*}
\displaystyle \mathcal{A}_\veps^+[u_\veps](x_k) +\frac{\delta}{2^{k+2}} \geq \left\{\alpha \sup_{B_{\veps^2(c^*_{k+1})^{1-\alpha}}(x_k)} u_{\varepsilon} + (1-\alpha) \mathcal{M}_{\veps (c^*_{k+1})^{-\frac{\alpha}{2}}}[u_{\varepsilon}](x_k) \right\},
\end{align*}
and then choose a point
\begin{align*}
x_{k+1}^{II}=S_{II}^{\ast}(x_0,\dots,x_k) \qquad \mbox{such that} \qquad \inf_{y \in B_{\gamma \varepsilon (c^*_{k+1})^{-\frac{\alpha}{2}}}(x_k)}u_{\varepsilon}(y)+\frac{\delta}{2^{k+2}}\geq u_{\varepsilon}(x_{k+1}^{II}).
\end{align*}
Given any strategy $S_I$ for Player I, define the random variables
$$ N_k=u_{\varepsilon}(x_k)- \varepsilon^2 \sum_{i=0}^{k-1} J_p(f(x_i)) + \frac{\delta}{2^k},$$
and let us show that $(N_k)_{k\geq 0}$ is a \textit{supermartingale}. Indeed, we have
\begin{align*}
\mathbb{E}_{S_{I},S_{II}^{\ast}}^{x_0}[N_{k+1}|x_k] \leq & \, \alpha \sup_{B_{\veps^2(c^*_{k+1})^{1-\alpha}}(x_k)} u_{\varepsilon}\\
& + (1-\alpha) \left( \beta\left(\frac{1}{2} \sup_{B_{\gamma \veps (c^*_{k+1})^{-\frac{\alpha}{2}}}(x_k)} u_{\varepsilon} + \frac{1}{2}\left( \inf_{B_{\gamma \veps (c^*_{k+1})^{-\frac{\alpha}{2}}}(x_k)} u_{\varepsilon} +\frac{\delta}{2^{k+2}}\right)\right) + (1-\beta)\frac{1}{|B_{\gamma \veps (c^*_{k+1})^{-\frac{\alpha}{2}}}(x_k)|}\int_{ B_{\gamma \veps (c^*_{k+1})^{-\frac{\alpha}{2}}}(x_k)} u_{\varepsilon} (y) \, \mathrm{d}y \right)\\
& - \varepsilon^2 \sum_{i=0}^{k} J_p(f(x_i)) +\frac{\delta}{2^{k+1}}\\
\leq & \, \mathcal{A}_\veps^+[u_\veps](x_k)+ \left(1+ (1-\alpha)\frac{\beta}{2}\right) \frac{\delta}{2^{k+2}} - \veps^2 \sum_{i=0}^{k} J_p(f(x_i)) +\frac{\delta}{2^{k+1}}\\
\leq& \, u_\veps(x_k) + \veps^2 J_p(f(x_k)) - \veps^2 \sum_{i=0}^{k} J_p(f(x_i)) +\frac{\delta}{2^{k}}\\
=& \, N_k.
\end{align*}
From here, the rest of the proof of Step 2 follows as in Step 1 to obtain
\begin{align*}
u^\varepsilon_{II}(x_0)&=\inf_{S_{II}} \sup_{S_I}\, \mathbb{E}_{S_{I},S_{II}}^{x_0}\left[- \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i)) + g (x_\tau) \right] \\
&\leq \sup_{S_I} \, \mathbb{E}_{S_{I},S_{II}^*}^{x_0}\left[- \varepsilon^2 \sum_{i=0}^{\tau-1} J_p(f(x_i)) + g (x_\tau) \right]\\
&\leq u_\veps(x_0)+ \delta.
\end{align*}
Since $\delta$ is arbitrary, this concludes the proof of the result.
\end{proof}

\appendix

\section{The geometric mean as infimum of arithmetic means} \label{ApA}

A key tool that allows us to derive asymptotic expansions for the $p$-Laplacian (and related operators) is the fact that any geometric mean can be equivalently expressed as the infimum of certain arithmetic means. More precisely, we rely on the identity given in the following result.
\begin{lemma}\label{lem:ident-crucial}
Let $a,b\geq0$ and $\alpha \in (0,1)$. Then,
\begin{equation}\label{eq:gm-am}
a^{\alpha}b^{1-\alpha}=\inf_{c>0}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\}.
\end{equation}
\end{lemma}
\begin{proof}
If $a=0$, the result follows since the infimum on the right-hand side of \eqref{eq:gm-am} equals $0=a^{\alpha}b^{1-\alpha}$, approached as $c\to+\infty$. Similarly, if $b=0$ and $a>0$, the infimum equals $0$, approached as $c\to0^+$. Now, let us assume that $a,b>0$. On one hand, by choosing $c=b/a$, we obtain
\[
\inf_{c>0}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\}\leq \alpha \left(b/a\right)^{1-\alpha} a + (1-\alpha)\left(b/a\right)^{-\alpha} b = a^{\alpha}b^{1-\alpha}.
\]
The reverse inequality follows from the weighted arithmetic mean--geometric mean inequality. Specifically,
\[
\inf_{c>0}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\}\geq \inf_{c>0}\Big\{\left(c^{1-\alpha} a\right)^{\alpha} \left(c^{-\alpha} b\right)^{1-\alpha}\Big\} = a^{\alpha}b^{1-\alpha}.
\]
\end{proof}

Since the results in this paper require performing approximations on quantities within the infimum over $c>0$, we need to localize the region over which the infimum is taken and ensure that this truncation does not introduce significant error. More precisely, we have the following result.
\begin{lemma}\label{lem:gm-am-aprox}
Let $a,b\geq0$, $\alpha \in (0,1)$ and $0<m<M<+\infty$. Then,
\begin{equation}\label{eq:gm-am-aprox}
\left|a^{\alpha}b^{1-\alpha}-\inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\}\right| \leq \alpha a \, m^{1-\alpha} + (1-\alpha) b\, M^{-\alpha}.
\end{equation}
\end{lemma}
\begin{proof}
First, we note that
\begin{align*}
a^{\alpha}b^{1-\alpha} = \inf_{c>0}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} \leq \inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\},
\end{align*}
so we only need to prove the reverse inequality. We do this by considering different cases.

If $a = b = 0$, then \eqref{eq:gm-am-aprox} holds since $$ \inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} = 0 = a^{\alpha}b^{1-\alpha}.$$

If $a > 0$ and $b = 0$, then
\begin{align*}
\inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} = \inf_{c\in[m,M]}\left\{\alpha c^{1-\alpha} a \right\} = \alpha a \inf_{c\in[m,M]}\left\{ c^{1-\alpha}\right\} = \alpha a m^{1-\alpha},
\end{align*}
and thus, \eqref{eq:gm-am-aprox} holds since $a^{\alpha}b^{1-\alpha} = 0$. The case $a = 0$ and $b > 0$ follows in a similar way.

Finally, assume that $a, b > 0$. Let us define $F(c) = \alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b$ and note that $F'(c) = \alpha(1-\alpha) c^{-\alpha} \left(a - b/c\right)$. This implies that $c = b/a$ is the global minimum point of $F$ on $(0,+\infty)$. In particular, if $b/a \in [m, M]$, then
\begin{align*}
\inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} = \inf_{c>0}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} = a^{\alpha}b^{1-\alpha}.
\end{align*}
On the other hand, if $b/a < m$, then
\begin{align*}
\inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} &= \alpha m^{1-\alpha} a + (1-\alpha)m^{-\alpha}b \\
& \leq \alpha m^{1-\alpha} a + (1-\alpha)\left(\frac{b}{a}\right)^{-\alpha} b \\
&= \alpha m^{1-\alpha} a + a^{\alpha}b^{1-\alpha}.
\end{align*} Finally, if $b/a > M$, then \begin{align*} \inf_{c\in[m,M]}\Big\{\alpha c^{1-\alpha} a + (1-\alpha)c^{-\alpha} b\Big\} &= \alpha M^{1-\alpha} a + (1-\alpha)M^{-\alpha}b \\ & \leq \alpha \left(\frac{b}{a}\right)^{1-\alpha} a + (1-\alpha)M^{-\alpha} b \\ &= a^{\alpha}b^{1-\alpha} + (1-\alpha) M^{-\alpha} b, \end{align*} which concludes the proof. \end{proof} \section{Viscosity solutions for the $p$-Laplacian and the mean value properties}\label{app:viscositydef} We devote this appendix to discuss the notion of viscosity solutions for the $p$-Laplacian and also for the asymptotic mean value properties used in this paper. \subsection{Viscosity solutions of $p$-Laplace equations for $p\in(2,\infty)$} We adopt the following notion of viscosity solution for the $p$-Laplacian. \begin{definition}[Viscosity solution of $p$-Laplace equation]\label{def:viscsol-p-Laplace} Let $p\in(2,\infty)$ and $\Omega\subset\R^d$ be an open set. Suppose that $f$ is a continuous function in $\Omega$. We say that a bounded lower (resp. upper) semicontinuous function $u$ in $\Omega$ is a \emph{viscosity supersolution} (resp. \emph{subsolution}) of \[ \Delta_p u =f \quad \textup{in} \quad \Omega \] if the following holds: Whenever $x_0\in \Omega$ and $\varphi\in C^2(B_R(x_0))$ for some $R>0$ are such that \begin{align}\label{eq:visctouch-super} \varphi(x)-u(x)&\leq \varphi(x_0)-u(x_0) \quad \textup{for all} \quad x\in (B_R(x_0)\setminus\{x_0\})\cap\Omega,\\\label{eq:visctouch-sub} \text{(resp. }\varphi(x)-u(x)&\geq \varphi(x_0)-u(x_0)) \end{align} with $\nabla\varphi(x_0)\not=0$ if $f(x_0)=0$, then we have \begin{align}\label{eq:viscsuper} \Delta_p\varphi(x_0) &\leq f(x_0).\\\label{eq:viscsub} \text{(resp. } \Delta_p\varphi(x_0) &\geq f(x_0).) \end{align} A \emph{viscosity solution} is a bounded continuous function $u$ in $\Omega$ that is both a viscosity supersolution and a viscosity subsolution. 
\end{definition}
\begin{remark}\label{rem:novanishgrad}
Note that we do not need to test with test functions with vanishing gradient when the right-hand side is zero. This is due to the fact that, if $\nabla\varphi(x_0)=0$, then $\Delta_p\varphi(x_0)=0$ for $p>2$, and thus \Cref{eq:viscsub,eq:viscsuper} are trivially satisfied.
\end{remark}

It is well known that the class of test functions $\varphi$ in \Cref{def:viscsol-p-Laplace} can be restricted to a smaller class without losing any generality. We recall several simplifications that are useful in our setting. We present all of them in the context of supersolutions, since the results for subsolutions follow in the same way.

\begin{remark}[Strict contact points]
It is well known that condition \eqref{eq:visctouch-super} can be replaced by
\begin{align*}
\varphi(x_0)=u(x_0) \quad \textup{and} \quad \varphi(x)&< u(x) \quad \textup{for all} \quad x\in (B_R(x_0)\setminus\{x_0\})\cap\Omega.
\end{align*}
\end{remark}

\begin{remark} [More regular test functions]\label{rem:regulartest}
Without loss of generality, we can assume that $\varphi\in C^\infty(B_R(x_0))$. Indeed, let us consider a test function $\varphi\in C^2(B_R(x_0))$ for viscosity supersolutions (i.e. satisfying \eqref{eq:visctouch-super}), a smooth mollifier $\rho_\delta\in C_c^\infty(\overline{B_\delta(0)})$ and the mollified version of $\varphi$ given by $\varphi_\delta=\rho_\delta*\varphi$. Let us recall that $ \varphi_\delta \to \varphi$, $\nabla\varphi_\delta \to \nabla\varphi$ and $D^2\varphi_\delta \to D^2\varphi$ as $\delta\to0^+$ uniformly in $B_{R/2}(x_0)$. By uniform convergence of $\varphi_\delta$, there exists a sequence $\{x_\delta\}_{\delta>0}$ with $x_\delta\to x_0$ as $\delta\to0$ such that
\begin{align*}
\varphi_\delta(x)-u(x)&\leq \varphi_\delta(x_\delta)-u(x_\delta) \quad \textup{for all} \quad x\in (B_{R/2}(x_\delta)\setminus\{x_\delta\})\cap\Omega.
\end{align*}
By definition of viscosity supersolution, we have that $\Delta_p\varphi_\delta(x_\delta) \leq f(x_\delta)$, which implies \eqref{eq:viscsuper} by uniform convergence of $D^2 \varphi_\delta$ and $\nabla \varphi_\delta$ to $D^2 \varphi$ and $\nabla \varphi$ respectively, and the continuity of $f$.
\end{remark}

\begin{remark}[Non-vanishing $p$-Laplacian]\label{rem:novanplap}
Without loss of generality, we can assume that $\varphi$ is such that, whenever $\nabla \varphi(x_0)\not=0$, then $ \Delta_p \varphi(x_0)\not=0$. First note that $|\nabla \varphi(x_0)|^{p-2}$ is well defined for all $p\in(1,\infty)$ since $\nabla \varphi(x_0)\not=0$, and, thus,
\[
\Delta_p\varphi(x_0)= |\nabla \varphi(x_0)|^{p-2}\nplap \varphi(x_0)=|\nabla \varphi(x_0)|^{p-2} \left(\Delta \varphi(x_0) + (p-2) \left<D^2\varphi(x_0) \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|}, \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|}\right>\right).
\]
Let us assume that $\varphi$ is a test function for supersolutions at $x_0$, that is,
\begin{align*}
\varphi(x_0)=u(x_0) \quad \textup{and} \quad \varphi(x)&< u(x) \quad \textup{for all} \quad x\in (B_R(x_0)\setminus\{x_0\})\cap\Omega,
\end{align*}
and such that $\Delta_p\varphi(x_0)=0$. Since $\nabla \varphi(x_0)\not=0$, then we must have that $\nplap \varphi(x_0)=0$. Now consider the function
\[
\varphi_\delta(x)=\varphi(x)-\delta|x-x_0|^2.
\]
Clearly,
\begin{align*}
\varphi_\delta(x_0)=u(x_0) \quad \textup{and} \quad \varphi_\delta(x)&< u(x) \quad \textup{for all} \quad x\in (B_R(x_0)\setminus\{x_0\})\cap\Omega,
\end{align*}
and then $\Delta_p\varphi_\delta (x_0)\leq f(x_0)$. Moreover, note that $\nabla \varphi_\delta(x_0)=\nabla\varphi(x_0)$ and $D^2 \varphi_\delta(x_0)=D^2\varphi(x_0)- 2\delta I$.
Then, \begin{align*} \Delta_p^{\textup{N}}\varphi_\delta(x_0)&=\Delta\varphi_\delta(x_0) + (p-2)\left<D^2\varphi_\delta(x_0) \frac{\nabla\varphi_\delta(x_0)}{|\nabla\varphi_\delta(x_0)|}, \frac{\nabla\varphi_\delta(x_0)}{|\nabla\varphi_\delta(x_0)|}\right>\\ &=(\Delta\varphi(x_0)-2\delta d) + (p-2)\left<\left(D^2\varphi(x_0)- 2\delta I\right) \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|}, \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|}\right>\\ &= \Delta_p^{\textup{N}}\varphi(x_0) - 2(d+p-2) \delta\\ &= - 2(d+p-2) \delta<0. \end{align*} Thus, we get that $\Delta_p\varphi_\delta(x_0)= -2(d+p-2)|\nabla\varphi(x_0)|^{p-2} \delta <0$. Finally, we observe that \begin{align*} f(x_0)\geq \Delta_p\varphi_\delta(x_0) &= |\nabla\varphi_\delta(x_0)|^{p-2} \Delta_p^{\textup{N}}\varphi_\delta(x_0) =|\nabla\varphi(x_0)|^{p-2} \left(\Delta_p^{\textup{N}}\varphi(x_0) - 2(d+p-2) \delta\right)\\ &=\Delta_p\varphi(x_0) -2(d+p-2) |\nabla\varphi(x_0)|^{p-2} \delta. \end{align*} Since we can take $\delta>0$ arbitrarily small, we conclude that $\Delta_p\varphi(x_0)\leq f(x_0)$, as we wanted to check. \end{remark} Let us define now the concept of viscosity solutions for the boundary value problem \begin{align}\label{eq:ppoisson-appendix} \left\{ \begin{aligned} \Delta_p u(x)&= f(x) &\textup{if}& \quad x\in \Omega,\\ u(x)&= g(x) &\textup{if}& \quad x\in \partial\Omega. \end{aligned} \right. \end{align} \begin{definition}[Viscosity solution of $p$-Laplace boundary value problem]\label{def:viscsol-p-Laplace-poisson} Let $p\in(2,\infty)$ and $\Omega\subset\R^d$ be a bounded open set. Suppose that $f$ is a continuous function in $\overline{\Omega}$ and that $g$ is a continuous function in $\partial \Omega$. We say that a bounded lower (resp. upper) semicontinuous function $u$ in $\overline{\Omega}$ is a \emph{viscosity supersolution} (resp. \emph{ subsolution}) of \eqref{eq:ppoisson-appendix} if the following holds: \begin{enumerate}[\rm (a)] \item $u$ is a viscosity supersolution (resp. 
subsolution) of $\Delta_p u=f$ in $\Omega$ (in the sense of \Cref{def:viscsol-p-Laplace}) \medskip \item $u(x)\geq g(x)$ (resp. $u(x)\leq g(x)$) for all $x\in \partial \Omega$. \end{enumerate} \end{definition} \subsection{Asymptotic expansions in the viscosity sense for $p\in(2,\infty)$} Bearing in mind the considerations of the previous section about how restrictive the class of test functions may be chosen, we propose a definition of viscosity solution of the asymptotic expansion problems under minimal requirements on the test functions. Let us recall the notation \[ \overline{\mathcal{A}}_\veps[\varphi;f](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad f(x)\geq0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad f(x)<0, \end{aligned}\right. \quad \textup{and} \quad \underline{\mathcal{A}}_\veps[\varphi;f](x)\coloneqq\left\{\begin{aligned} \mathcal{A}_\veps^+[\varphi](x) \quad \textup{if} \quad f(x)>0,\\ \mathcal{A}_\veps^-[\varphi](x) \quad \textup{if} \quad f(x)\leq 0. \end{aligned}\right. \] \begin{definition}[The asymptotic mean value property in the viscosity sense]\label{def:asMVPviscosity} Let $p\in(2,\infty)$, $\Omega\subset\R^d$ be an open set and $\mathcal{A}_{\veps}$ be given by either $\overline{\mathcal{A}}_\veps$ or $\underline{\mathcal{A}}_\veps$. Suppose that $f$ is a continuous function in $\Omega$. We say that a bounded lower (resp. upper) semicontinuous function $u$ in $\Omega$ is a \emph{viscosity supersolution} (resp. \emph{subsolution}) of \begin{align*} \mathcal{A}_\veps[u;f]-u=\veps^2 J_p(f) + o(\veps^2) \quad \textup{in} \quad \Omega \end{align*} if the following holds: Whenever $x_0\in \Omega$ and $\varphi\in C^\infty(B_R(x_0))$ for some $R>0$ are such that \eqref{eq:visctouch-super} (resp.
\eqref{eq:visctouch-sub}) holds with $\nabla\varphi(x_0)\not=0$ and $\Delta_p\varphi(x_0)\not=0$ if $f(x_0)=0$, then we have \begin{align*} \mathcal{A}_\veps[\varphi;f](x_0)-\varphi(x_0)&\leq \veps^2 J_p(f(x_0)) + o(\veps^2). \\ \text{(resp. } \mathcal{A}_\veps[\varphi;f](x_0)-\varphi(x_0)&\geq \veps^2 J_p(f(x_0)) + o(\veps^2).) \end{align*} A \emph{viscosity solution} is a continuous function $u$ in $\Omega$ that is both a viscosity supersolution and a viscosity subsolution. \end{definition} \begin{remark} Note that we have the extra assumptions $\nabla\varphi(x_0)\not=0$ and $\Delta_p\varphi(x_0)\not=0$ if $f(x_0)=0$ in the definition of viscosity solution for the asymptotic mean value property. This is motivated by the definition of viscosity solution of the $p$-Laplacian problem together with \Cref{rem:novanishgrad,rem:novanplap}. We have also required $\varphi\in C^\infty(B_R(x_0))$ instead of just $\varphi\in C^2(B_R(x_0))$, following \Cref{rem:regulartest}. \end{remark} \section*{Acknowledgments} We want to thank J. J. Manfredi for several nice discussions that helped us to improve our results. This work was started during a visit of J. D. Rossi to Universidad Autónoma de Madrid and ICMAT for the thematic semester ``Diffusion, Geometry, Probability and Free Boundaries'' and finished during a visit of F. del Teso to Universidad de Buenos Aires funded by the ``José Castillejo'' scholarship with reference CAS23/00051. The authors are grateful to both institutions for the friendly and stimulating working atmosphere. F. del Teso was partially supported by the Spanish Government through RYC2020-029589-I, PID2021-127105NB-I00 and CEX2019-000904-S funded by the MICIN/AEI. J. D. Rossi was partially supported by CONICET PIP GI No 11220150100036CO (Argentina), PICT-03183 (Argentina) and UBACyT 20020160100155BA (Argentina). \begin{thebibliography}{100} \bibitem{TPSS} T.~Antunovic, Y.~Peres, S.~Sheffield, and S.~Somersille.
\newblock Tug-of-war and infinity {L}aplace equation with vanishing {N}eumann boundary condition. \newblock {\em Comm. Partial Differential Equations}, 37(10):1839--1869, 2012. \bibitem{ATU1} D.~Araujo, E.~Teixeira, and J.-M. Urbano. \newblock A proof of the {$C^{p'}$}-regularity conjecture in the plane. \newblock {\em Adv. Math.}, 316:541--553, 2017. \bibitem{ATU2} D.~Araujo, E.~Teixeira, and J.-M. Urbano. \newblock Towards the {$C^{p'}$}-regularity conjecture in higher dimensions. \newblock {\em Int. Math. Res. Not. IMRN}, (20):6481--6495, 2018. \bibitem{ArroyoHeinoParviainen2017} A.~Arroyo, J.~Heino, and M.~Parviainen. \newblock Tug-of-war games with varying probabilities and the normalized {$p(x)$}-{L}aplacian. \newblock {\em Communications on Pure and Applied Analysis}, 16(3):915--944, 2017. \bibitem{Arroyo} A.~Arroyo and J.~G. Llorente. \newblock On the asymptotic mean value property for planar $p$-harmonic functions. \newblock {\em Proc. Amer. Math. Soc.}, 144(9):3859--3868, 2016. \bibitem{BarSou} G.~Barles and P.~E. Souganidis. \newblock Convergence of approximation schemes for fully nonlinear second order equations. \newblock {\em Asymptotic Anal.}, 4(3):271--283, 1991. \bibitem{BBM} T.~Bhattacharya, E.~DiBenedetto, and J.~Manfredi. \newblock Limits as $p \to \infty$ of $\Delta_p u_p = f$ and related extremal problems. \newblock {\em Rend. Sem. Mat. Univ. Politec. Torino}, pages 15--68, 1991. \bibitem{BChMR2} P.~Blanc, F.~Charro, J.~J. Manfredi, and J.~D. Rossi. \newblock A nonlinear mean value property for {M}onge-{A}mp\`ere. \newblock {\em J. Convex Anal.}, 28(2):353--386, 2021. \bibitem{BChMR} P.~Blanc, F.~Charro, J.~J. Manfredi, and J.~D. Rossi. \newblock Asymptotic mean value expansions for solutions of general second-order elliptic equations. \newblock {\em Adv. Nonlinear Stud.}, 22:118--142, 2022. \bibitem{BlancandRossi2019} P.~Blanc and J.~D. Rossi. \newblock {\em {G}ame {T}heory and {P}artial {D}ifferential {E}quations}.
\newblock De Gruyter, Berlin, Boston, 2019. \bibitem{BH2015} I.~Blank and Z.~Hao. \newblock The mean value theorem and basic properties of the obstacle problem for divergence form elliptic operators. \newblock {\em Comm. Anal. Geom.}, 23(1):129--158, 2015. \bibitem{Blaschke} W.~Blaschke. \newblock Ein {M}ittelwertsatz und eine kennzeichnende {E}igenschaft des logarithmischen potentials. \newblock {\em Ber. Verh. S\"{a}chs. Akad. Wiss., Leipziger}, 68:3--7, 1916. \bibitem{BoLan} A.~Bonfiglioli and E.~Lanconelli. \newblock Subharmonic functions in sub-{R}iemannian settings. \newblock {\em J. Eur. Math. Soc. (JEMS)}, 15(2):387--441, 2013. \bibitem{BuSq22} C.~Bucur and M.~Squassina. \newblock An asymptotic expansion for the fractional p-{L}aplacian and for gradient-dependent nonlocal operators. \newblock {\em Communications in Contemporary Mathematics}, 24(04):2150021, 2022. \bibitem{Caffarelli.Roquejoffre.2007} L.~Caffarelli and J.~M. Roquejoffre. \newblock Uniform {H}\"{o}lder estimates in a class of elliptic systems and applications to singular limits in models for diffusion flames. \newblock {\em Arch. Ration. Mech. Anal.}, 183(3):457--487, 2007. \bibitem{Chandra} E.~W. Chandra, M.~Ishiwata, R.~Magnanini, and H.~Wadade. \newblock Variational p-harmonious functions: existence and convergence to p-harmonic functions. \newblock {\em NoDEA, Nonlinear Differ. Equ. Appl.}, 28(5):Paper No. 51, 23 p., 2021. \bibitem{dTEnLe22} F.~del Teso, J.~Endal, and M.~Lewicka. \newblock On asymptotic expansions for the fractional infinity {L}aplacian. \newblock {\em Asymptot. Anal.}, 127(3):201--216, 2022. \bibitem{dTLi21} F.~del Teso and E.~Lindgren. \newblock A mean value formula for the variational $p$-{L}aplacian. \newblock {\em NoDEA Nonlinear Differential Equations Appl.}, 28(3):Paper No. 27, 33, 2021. \bibitem{dTMP} F.~del Teso, J.~J. Manfredi, and M.~Parviainen. \newblock Convergence of dynamic programming principles for the p-{L}aplacian. \newblock {\em Adv. Calc. 
Var.}, 15(2):191--212, 2022. \bibitem{dTMeOc24} F.~del Teso, M.~Medina, and P.~Ochoa. \newblock Higher-order asymptotic expansions and finite difference schemes for the fractional $p$-{L}aplacian. \newblock {\em Mathematische Annalen}, 390:157--203, 2024. \bibitem{DoMaRiSt22} A.~Domokos, J.~J. Manfredi, D.~Ricciotti, and B.~Stroffolini. \newblock Convergence of natural $p$-means for the $p$-{L}aplacian in the {H}eisenberg group. \newblock {\em Nonlinear Analysis}, 223:113058, 2022. \bibitem{DoMaRiSt} A.~Domokos, J.~J. Manfredi, D.~Ricciotti, and B.~Stroffolini. \newblock Boundary convergence properties of {$L^p$}-averages. \newblock {\em Journal of Mathematical Sciences}, 269(6):778--791, 2023. \bibitem{LMan} F.~Ferrari, Q.~Liu, and J.~J. Manfredi. \newblock On the characterization of $p$-harmonic functions on the {H}eisenberg group by mean value properties. \newblock {\em Discrete Contin. Dyn. Syst.}, 34(7):2779--2793, 2014. \bibitem{Angel.Arroyo.Tesis} A.~R.~A. Garc\'ia. \newblock {\em Nonlinear {M}ean {V}alue {P}roperties related to the $p$-{L}aplacian}. \newblock PhD thesis, Universitat Aut\`onoma de Barcelona, 2017. \bibitem{Gonzalvez2024} I.~Gonz\'alvez, A.~Miranda, J.~D. Rossi, and J.~Ruiz-Cases. \newblock Finding the convex hull of a set using the flow by minimal curvature with an obstacle. {A} game theoretical approach. \newblock {\em arXiv}, 2409.06855, 2024. \bibitem{HansHartikainen} H.~Hartikainen. \newblock A dynamic programming principle with continuous solutions related to the $p$-{L}aplacian. \newblock {\em Differential and Integral Equations}, 29(5/6):583--600, 2016. \bibitem{IS13} C.~Imbert and L.~Silvestre. \newblock {$C^{1,\alpha}$} regularity of solutions of some degenerate fully nonlinear elliptic equations. \newblock {\em Adv. Math.}, 233:196--206, 2013. \bibitem{Is} H.~Ishii. \newblock On the equivalence of two notions of weak solutions, viscosity solutions and distribution solutions. \newblock {\em Funkcial. Ekvac}, 38:101--120, 1995.
\bibitem{I} M.~Ishiwata, R.~Magnanini, and H.~Wadade. \newblock A natural approach to the asymptotic mean value property for the p-{L}aplacian. \newblock {\em Calc. Var. Partial Differential Equations}, 56(4):Art. 97, 22 pp., 2017. \bibitem{JuJu} V.~Julin and P.~Juutinen. \newblock A new proof for the equivalence of weak and viscosity solutions for the p-{L}aplace equation. \newblock {\em Comm. Part. Differ. Equ.}, 37(5):934--946, 2012. \bibitem{JuLiMan} P.~Juutinen, P.~Lindqvist, and J.~J. Manfredi. \newblock On the equivalence of viscosity solutions and weak solutions for a quasi-linear equation. \newblock {\em SIAM J. Math. Anal.}, 33(3):699--717, 2001. \bibitem{KAWOHL2012173} B.~Kawohl, J.~Manfredi, and M.~Parviainen. \newblock Solutions of nonlinear {PDE}s in the sense of averages. \newblock {\em Journal de Mathématiques Pures et Appliquées}, 97(2):173--188, 2012. \bibitem{Lew20} M.~Lewicka. \newblock {\em A course on tug-of-war games with random noise}. \newblock Universitext. Springer, Cham, 2020. \bibitem{Lew22} M.~Lewicka. \newblock Non-local {T}ug-of-{W}ar with noise for the geometric fractional {$\bold p$}-{L}aplacian. \newblock {\em Adv. Differential Equations}, 27(1-2):31--76, 2022. \bibitem{Lewicka2017} M.~Lewicka and J.~J. Manfredi. \newblock The obstacle problem for the {$p$}-{L}aplacian via optimal stopping of tug-of-war games. \newblock {\em Probability Theory and Related Fields}, 167(1):349--378, 2017. \bibitem{LeMa} M.~Lewicka and J.~J. Manfredi. \newblock The obstacle problem for the p-{L}aplacian via optimal stopping of tug-of-war games. \newblock {\em Probab. Theory Related Fields}, 167(1-2):349--378, 2017. \bibitem{Littman.et.al1963} W.~Littman, G.~Stampacchia, and H.~F. Weinberger. \newblock Regular points for elliptic equations with discontinuous coefficients. \newblock {\em Ann. Scuola Norm. Sup. Pisa. Cl. Sci.}, 17(1-2):43--77, 1963. \bibitem{LiuSchikorra2015} Q.~Liu and A.~Schikorra.
\newblock General existence of solutions to dynamic programming equations. \newblock {\em Communications on Pure and Applied Analysis}, 14(1):167--184, 2015. \bibitem{MaPaRo10} J.~J. Manfredi, M.~Parviainen, and J.~D. Rossi. \newblock An asymptotic mean value characterization for {$p$}-harmonic functions. \newblock {\em Proc. Amer. Math. Soc.}, 138(3):881--889, 2010. \bibitem{cur} A.~Minne and D.~Tewodrose. \newblock Asymptotic mean value {L}aplacian in metric measure spaces. \newblock {\em J. Math. Anal. Appl.}, 491(2):Art. ID 124330, 21 p., 2020. \bibitem{MiRo24} A.~Miranda and J.~D. Rossi. \newblock Games for the two membranes problem. \newblock {\em Orbita Mathematicae}, 1(1):59--101, 2024. \bibitem{PSSW} Y.~Peres, O.~Schramm, S.~Sheffield, and D.~Wilson. \newblock Tug-of-war and the infinity {L}aplacian. \newblock {\em J. Amer. Math. Soc.}, 22:167--210, 2009. \bibitem{PS} Y.~Peres and S.~Sheffield. \newblock Tug-of-war with noise: a game theoretic view of the $p$-{L}aplacian. \newblock {\em Duke Math. J.}, 145(1):91--120, 2008. \bibitem{Privaloff} I.~Privaloff. \newblock Sur les fonctions harmoniques. \newblock {\em Mat. Sb.}, 32(3):464--471, 1925. \bibitem{Williams} D.~Williams. \newblock {\em Probability with {M}artingales}. \newblock Cambridge University Press, Cambridge, 1991. \end{thebibliography} \end{document}
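The perturbation step in Remark \ref{rem:novanplap} above rests on the identity that subtracting $\delta|x-x_0|^2$ from a test function leaves the gradient at $x_0$ unchanged and lowers the normalized $p$-Laplacian by exactly $2(d+p-2)\delta$. This can be sanity-checked numerically; a minimal sketch with NumPy, where the helper name, dimension, exponent and random test data are ours and chosen purely for illustration:

```python
import numpy as np

def normalized_p_laplacian(grad, hess, p):
    """Delta_p^N phi = tr(H) + (p-2) <H g/|g|, g/|g|> at a point with grad != 0."""
    g = grad / np.linalg.norm(grad)
    return np.trace(hess) + (p - 2) * g @ hess @ g

d, p, delta = 3, 4.0, 1e-2
rng = np.random.default_rng(0)
grad = rng.normal(size=d)            # plays the role of nabla phi(x_0) != 0
hess = rng.normal(size=(d, d))
hess = (hess + hess.T) / 2           # plays the role of D^2 phi(x_0), symmetric

# phi_delta = phi - delta |x - x_0|^2 has the same gradient at x_0
# and Hessian D^2 phi(x_0) - 2 delta I
lhs = normalized_p_laplacian(grad, hess - 2 * delta * np.eye(d), p)
rhs = normalized_p_laplacian(grad, hess, p) - 2 * (d + p - 2) * delta
assert abs(lhs - rhs) < 1e-12
```

Multiplying both sides by $|\nabla\varphi(x_0)|^{p-2}$ recovers the corresponding shift of $\Delta_p\varphi_\delta$ used in the remark.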
2412.19397v1
http://arxiv.org/abs/2412.19397v1
Well-Posedness of Second-Order Uniformly Elliptic PDEs with Neumann Conditions
\documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{geometry,amsmath,mathtools,blkarray,amssymb,amsthm,setspace,xcolor,bbm,tikz-cd,xparse,amsfonts,authblk,graphics,datetime,thmtools,bm} \usepackage[normalem]{ulem} \usepackage[shortlabels]{enumitem} \usepackage[colorlinks,linkcolor=blue,citecolor=blue,urlcolor=blue,bookmarks=false,hypertexnames=true]{hyperref} \usepackage[backend=biber, style=bwl-FU, sortcites=false, maxcitenames=2, mincitenames=1, maxbibnames=8, uniquelist=false]{biblatex} \usepackage{tikz-cd} \usetikzlibrary{decorations.text} \usepackage{tikz,tikz-3dplot} \usetikzlibrary{decorations.pathreplacing} \allowdisplaybreaks \bibliography{./ref.bib} \providecommand{\keywords}[1] { \small \textbf{{Keywords---}} #1 } \setlength\parindent{24pt} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{claim}{Claim}[section] \theoremstyle{definition} \newtheorem{axiom}{Axiom}[section] \newtheorem{definition}{Definition}[section] \newtheorem{example}{Example} \newtheorem{assumption}{Assumption}[section] \newtheorem{condition}{Condition}[section] \newtheorem{remark}{Remark}[section] \renewcommand\thmcontinues[1]{Continued} \newtheorem{conditionalt}{Condition}[condition] \newenvironment{conditionp}[1]{ \renewcommand\theconditionalt{#1} \conditionalt }{\endconditionalt} \newtheorem{assumptionalt}{Assumption}[assumption] \newenvironment{assumptionp}[1]{ \renewcommand\theassumptionalt{#1} \assumptionalt }{\endassumptionalt} \newcommand{\indep}{\perp \!\!\! 
\perp} \newcommand{\tr}{\mathrm{tr}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\ip}[2]{\left<#1, #2\right>} \newcommand{\argmin}{\mathop{\mathrm{argmin}}} \newcommand{\argmax}{\mathop{\mathrm{argmax}}} \newcommand{\at}[1]{\ \bigg |_{#1}} \newcommand{\memo}[2]{\textcolor{#1}{\textbf{#2}}} \renewcommand{\div}{\mathrm{div}} \newcommand{\id}{\mathrm{id}} \newcommand{\Int}{\mathrm{Int}} \newcommand{\cof}{\hspace{0.1em}\mathrm{cof}\hspace{0.1em}} \newcommand{\supp}{\mathrm{supp} \hspace{0.15em}} \NewDocumentCommand{\defmathletter}{m}{ \expandafter\newcommand\csname b#1\endcsname{\mathbb{#1}} \expandafter\newcommand\csname c#1\endcsname{\mathcal{#1}}} \NewDocumentCommand{\defmathletters}{>{\SplitList{,}}m}{\ProcessList{#1}{\defmathletter}} \defmathletters{A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,X,Y,Z} \NewDocumentCommand{\defvector}{m}{ \expandafter\newcommand\csname v#1\endcsname{\mathbf{#1}}} \NewDocumentCommand{\defvectors}{>{\SplitList{,}}m}{\ProcessList{#1}{\defvector}} \defvectors{A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,X,Y,Z} \defvectors{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,x,y,z} \newcommand{\vmu}{{\boldsymbol{\mu}}} \newcommand{\vnu}{{\boldsymbol{\nu}}} \newcommand{\vgamma}{{\boldsymbol{\gamma}}} \newcommand{\vvarphi}{{\boldsymbol{\varphi}}} \newdateformat{monthyeardate}{\monthname[\THEMONTH] \THEYEAR} \title{Well-Posedness of Second-Order Uniformly Elliptic PDEs with Neumann Conditions} \author{Haruki Kono \thanks{Email: [email protected].}} \affil{MIT} \date{\monthyeardate\today} \begin{document} \maketitle \begin{abstract} Extending the results of \cite{nardi2015schauder}, this note establishes an existence and uniqueness result for second-order uniformly elliptic PDEs in divergence form with Neumann boundary conditions. A Schauder estimate is also derived. 
\end{abstract} \section{Introduction} For a $C^{2, \alpha}$-domain $\Omega \subset \bR^d,$ we consider \begin{align} \label{eq:main-pde} \begin{cases} \nabla \cdot (A \nabla u) = f \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = g \text{ on } \partial \Omega \end{cases} , \end{align} where $f \in C^{0, \alpha} (\bar \Omega),$ $g \in C^{1, \alpha} (\bar \Omega),$ and $\vn : \partial \Omega \to \bR^d$ is the outward normal unit vector of $\partial \Omega.$ Throughout this note, we assume that $A = (a^{ij}) : \bar \Omega \to \bR^{d \times d}$ is a matrix-valued function that satisfies $a^{ij} \in C^{1, \alpha} (\bar \Omega)$ and is uniformly elliptic, i.e., there exists $\lambda > 0$ such that $v^\prime A (x) v \geq \lambda$ for $x \in \Omega$ and $v \in \bR^d$ with $\norm{v} = 1.$ The symmetric part of $A$ defined as $A_S \coloneqq (A + A^\prime) / 2$ admits the eigendecomposition $A_S = \sum_i \lambda_i q_i q_i^\prime,$ where $\lambda_i \geq \lambda > 0$ and the $q_i$ form an orthonormal basis of $\bR^d.$ Although the PDE (\ref{eq:main-pde}) is common, its well-posedness is not explicitly stated in the literature. \cite{nardi2015schauder} proves the existence and uniqueness and obtains a Schauder estimate for the Laplace operator, i.e., the case of $A = I.$ We extend the ideas of \cite{nardi2015schauder} to second-order uniformly elliptic PDEs in divergence form. \section{Existence and Uniqueness Result} Before stating our main result, we shall show several auxiliary propositions. \begin{lemma} \label{lem:second-derivative-is-definite} Let $\Omega \subset \bR^d$ be a $C^2$-domain. If $u \in C^2 (\bar \Omega)$ satisfies $u (p) = \max_{\bar \Omega} u \ (\min_{\bar \Omega} u)$ and $\nabla u (p) = 0$ for $p \in \partial \Omega,$ then $D^2 u (p)$ is negative (positive) semi-definite. \end{lemma} \begin{proof} We shall show only the case where $p$ attains the maximum.
It suffices to show $v^\prime D^2 u (p) v \leq 0$ for each $v \in \bR^d \setminus \{0\}.$ We consider two distinct cases. First, suppose that $v \notin T_p \partial \Omega.$ Then we have $p + \varepsilon v \in \Omega$ or $p - \varepsilon v \in \Omega$ for small enough $\varepsilon > 0.$ Since the same argument goes through, we focus on the first case. Consider the map $f : [0, \bar \varepsilon) \ni \varepsilon \mapsto u (p + \varepsilon v)$ for small $\bar \varepsilon > 0.$ Observe that $f^{\prime \prime} (0+)$ exists since $u \in C^2 (\bar \Omega).$ Suppose for a contradiction that $f^{\prime \prime} (0 +) > 0.$ Then for small $\varepsilon > 0,$ we have $f^\prime (\varepsilon) = f^\prime (0+) + \int_0^\varepsilon f^{\prime\prime} (\delta) d \delta > f^\prime (0 +) = 0,$ where the last equality holds because $\nabla u (p) = 0.$ Hence, it holds that $f (\varepsilon) = f (0) + \int_0^\varepsilon f^\prime (\delta) d \delta > f (0).$ Since $f (\varepsilon) = u (p + \varepsilon v)$ and $f (0) = u (p),$ this inequality contradicts the fact that $u$ attains its maximum at $p.$ Therefore, we have $f^{\prime\prime} (0 +) \leq 0,$ which implies $v^\prime D^2 u (p) v = f^{\prime\prime} (0 +) \leq 0.$ Next, suppose that $v \in T_p \partial \Omega.$ Recall from Section 6.2 of \cite{gilbarg1977elliptic} and Definition 2.2 of \cite{nardi2015schauder} that there is a diffeomorphism $\varphi_p$ around $p$ that ``straightens'' $\partial \Omega.$ There exists a $C^2$-path $c_\varepsilon \in \bR^d$ such that $\varphi_p (c_0) = p,$ $\varphi_p (c_\varepsilon) \in \Omega$ and $v = \lim_{\varepsilon \to 0+} \frac{\varphi_p (c_\varepsilon) - p}{\varepsilon}.$ The rest of the proof runs similarly for $f : [0, \bar \varepsilon) \ni \varepsilon \mapsto u (\varphi_p (c_\varepsilon)).$ \end{proof} \begin{corollary} \label{cor:maximum-principle} Let $\Omega \subset \bR^d$ be a $C^2$-domain.
If $u \in C^2 (\bar \Omega)$ satisfies $u (p) = \max_{\bar \Omega} u \ (\min_{\bar \Omega} u)$ and $\nabla u (p) = 0$ for $p \in \partial \Omega,$ then $\nabla \cdot (A \nabla u) (p) \leq 0 \ (\geq 0).$ \end{corollary} \begin{proof} Recall the eigendecomposition $A_S = \sum_i \lambda_i q_i q_i^\prime$ where $\lambda_i > 0.$ As $\nabla u (p) = 0,$ we have \begin{align*} \nabla \cdot (A \nabla u) (p) = \tr (A_S (p) D^2 u (p)) = \sum_i \lambda_i (p) q_i (p)^\prime (D^2 u (p)) q_i (p) . \end{align*} The conclusion holds by Lemma \ref{lem:second-derivative-is-definite}. \end{proof} \begin{lemma} \label{lem:estimate} Let $\Omega \subset \bR^d$ be a $C^2$-domain. For $f \in C^{0, \alpha} (\bar \Omega),$ suppose that $u \in C^{2, \alpha} (\bar \Omega)$ is a solution to \begin{align*} \begin{cases} \nabla \cdot (A \nabla u) - u = f \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = 0 \text{ on } \partial \Omega \end{cases} . \end{align*} Then $\norm{u}_{C^0} \leq \norm{f}_{C^0}.$ \end{lemma} \begin{proof} Let $p \in \bar \Omega$ be such that $|u (p)| = \max_{\bar \Omega} |u|.$ First, suppose that $p \in \Omega,$ and that $u (p) = \max_{\bar \Omega} |u|.$ The same argument works for the case $u (p) = - \max_{\bar \Omega} |u|$ as well. Observe that $u (p) \geq 0.$ Since $p$ is an interior maximizer, $\nabla u (p) = 0$ holds, and $D^2 u (p)$ is negative semi-definite.
Therefore, we have \begin{align*} \nabla \cdot (A \nabla u) (p) = \tr (A_S (p) D^2 u (p)) = \sum_i \lambda_i (p) q_i (p)^\prime D^2 u (p) q_i (p) \leq 0 , \end{align*} where $\lambda_i (p) > 0.$ Thus, $0 \leq u (p) = \nabla \cdot (A \nabla u) (p) - f (p) \leq - f (p),$ and consequently, $\norm{u}_{C^0} \leq \norm{f}_{C^0}.$ Next, suppose that $p \in \partial \Omega$ and that $u (p) = \max_{\bar \Omega} |u|.$ Since $p$ is a maximizer of $u\mid_{\partial \Omega}$ in particular, $\ip{\nabla u (p)}{\tau} = 0$ holds for $\tau \in T_p \partial \Omega.$ Recall also that $0 = \ip{A (p) \nabla u (p)}{\vn (p)} = \ip{\nabla u (p)}{A (p)^\prime \vn (p)}.$ Since $\ip{\vn (p)}{A (p)^\prime \vn (p)} \geq \lambda > 0$ by the uniform ellipticity of $A,$ we have $A (p)^\prime \vn (p) \notin T_p \partial \Omega.$ It follows that $T_p \partial \Omega$ and $A (p)^\prime \vn (p)$ together span $\bR^d.$ Hence, $\nabla u (p) = 0$ holds. By Corollary \ref{cor:maximum-principle}, $0 \leq u (p) = \nabla \cdot (A \nabla u) (p) - f (p) \leq - f (p),$ which implies $\norm{u}_{C^0} \leq \norm{f}_{C^0}.$ \end{proof} Now, we are ready to prove the existence and uniqueness result. \begin{theorem} Let $\Omega \subset \bR^d$ be a $C^{2, \alpha}$-domain. Suppose that $f \in C^{0, \alpha} (\bar \Omega)$ and $g \in C^{1, \alpha} (\bar \Omega)$ satisfy the compatibility condition \begin{align} \label{eq:compatibility-condition} \int_\Omega f = \int_{\partial \Omega} g . \end{align} Then the problem (\ref{eq:main-pde}) admits a unique solution in the space \begin{align*} \cC \coloneqq \left\{ u \in C^{2, \alpha} (\bar \Omega) \mid \int_\Omega u = 0 \right\} . \end{align*} \end{theorem} \begin{proof} First consider the following problem \begin{align} \label{eq:augumented-pde} \begin{cases} \nabla \cdot (A \nabla u) - u = f \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = g \text{ on } \partial \Omega \end{cases} .
\end{align} Since the homogeneous problem ($f = g = 0$) admits only the trivial solution, this PDE has a unique solution in $C^{2, \alpha} (\bar \Omega)$ for any $(f, g) \in C^{0, \alpha} (\bar \Omega) \times C^{1, \alpha} (\bar \Omega)$ by Theorem 5.1 of \cite{nardi2015schauder}. (See also page 130 of \cite{gilbarg1977elliptic}.) Hence, the map $C^{0, \alpha} (\bar \Omega) \times C^{1, \alpha} (\bar \Omega) \ni (f, g) \mapsto u \in C^{2, \alpha} (\bar \Omega),$ which assigns to each $(f, g)$ the solution $u$ of (\ref{eq:augumented-pde}), is well-defined. Restricting this map to the space \begin{align*} \cA \coloneqq \left\{ (f, g) \in C^{0, \alpha} (\bar \Omega) \times C^{1, \alpha} (\bar \Omega) \mid \int_\Omega f = \int_{\partial \Omega} g \right\} , \end{align*} we define $\cU : \cA \ni (f, g) \mapsto u \in \cC.$ This map is well-defined because for $(f, g) \in \cA,$ the corresponding solution $u$ satisfies \begin{align*} \int_\Omega u = \int_\Omega \nabla \cdot (A \nabla u) - \int_\Omega f = \int_{\partial \Omega} \ip{A \nabla u}{\vn} - \int_\Omega f = \int_{\partial \Omega} g - \int_\Omega f = 0 . \end{align*} Observe also that the map $\cU$ is bijective.
Indeed, it is injective by the uniqueness of the solution to (\ref{eq:augumented-pde}) for $f = g = 0,$ and it is surjective because for $u \in \cC,$ the pair of $f = \nabla \cdot (A \nabla u) - u$ and a $C^{1, \alpha}$ extension $g$ of $\ip{A \nabla u}{\vn}$ on $\partial \Omega$ is in $\cA.$ For the space \begin{align*} \cF \coloneqq \left\{ f \in C^{0, \alpha} (\bar \Omega) \mid \int_\Omega f = 0 \right\} , \end{align*} consider $\tilde T : \cF \ni f \mapsto \cU (-f, 0) \in \cC.$ Notice that this map is well-defined because $(-f, 0) \in \cA$ for $f \in \cF.$ Since $u = \tilde T f$ with $f \in \cF$ solves the PDE \begin{align*} \begin{cases} \nabla \cdot (A \nabla u) - u = - f \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = 0 \text{ on } \partial \Omega \end{cases} , \end{align*} we have the estimate \begin{align*} \norm{u}_{C^{2, \alpha}} \leq C (\norm{u}_{C^0} + \norm{f}_{C^{0, \alpha}}) \end{align*} by Theorem 6.30 of \cite{gilbarg1977elliptic}. Combined with Lemma \ref{lem:estimate}, this yields \begin{align*} \norm{u}_{C^{2, \alpha}} \leq C \norm{f}_{C^{0, \alpha}} , \end{align*} which implies that $\tilde T$ is bounded. Recall also that the embedding $\iota : C^{2, \alpha} (\bar \Omega) \hookrightarrow C^{0, \alpha} (\bar \Omega)$ is compact by Lemma 6.36 of \cite{gilbarg1977elliptic}. Hence, the map $T \coloneqq \iota \circ \tilde T : \cF \to \cF$ is a compact operator. We shall apply the Fredholm alternative to $I - T : \cF \to \cF.$ Consider the equation $(I - T) u = 0$ for $u \in \cF.$ Since $u = Tu = \tilde T u = \cU (- u, 0) \in C^{2, \alpha} (\bar \Omega),$ this equation is equivalent to $u$ solving \begin{align*} \begin{cases} \nabla \cdot (A \nabla u) = 0 \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = 0 \text{ on } \partial \Omega \end{cases} , \end{align*} which admits only the trivial solution.
By the Fredholm alternative, for any $v \in \cF,$ there exists a unique $u \in \cF$ such that $(I - T) u = v.$ In particular, for any $(f, g) \in \cA,$ there exists a unique $u \in \cF$ such that $(I - T) u = \cU (f, g).$ Since $u = \tilde T u + \cU (f, g) \in \cC$ holds, $u$ uniquely solves the PDE (\ref{eq:main-pde}) in $\cC.$ \end{proof} \section{Schauder Estimate} \begin{theorem} Let $\Omega \subset \bR^d$ be a $C^{2, \alpha}$-domain. Suppose that $f \in C^{0, \alpha} (\bar \Omega)$ and $g \in C^{1, \alpha} (\bar \Omega)$ satisfy the compatibility condition (\ref{eq:compatibility-condition}). Then, the unique solution $u \in \cC$ to the PDE (\ref{eq:main-pde}) satisfies \begin{align*} \norm{u}_{C^{2, \alpha}} \leq C \left(\norm{f}_{C^{0, \alpha}} + \norm{g}_{C^{1, \alpha}}\right) . \end{align*} \end{theorem} \begin{proof} To obtain the estimate, assume otherwise, i.e., for each $k \in \bN,$ there is $(f_k, g_k) \in C^{0, \alpha} (\bar \Omega) \times C^{1, \alpha} (\bar \Omega)$ satisfying (\ref{eq:compatibility-condition}) such that the corresponding solution $u_k \in \cC$ to (\ref{eq:main-pde}) satisfies \begin{align*} \norm{u_k}_{C^{2, \alpha}} > k \left(\norm{f_k}_{C^{0, \alpha}} + \norm{g_k}_{C^{1, \alpha}}\right) . \end{align*} Since $u_k \neq 0,$ by rescaling $(f_k, g_k)$ we may assume without loss of generality that $\norm{u_k}_{C^{2, \alpha}} = 1.$ By exactly the same argument as in pp.~429--430 of \cite{nardi2015schauder}, $u_k$ converges to a function $u$ in $\cC$ up to a subsequence, and the limit $u$ satisfies \begin{align*} \begin{cases} \nabla \cdot (A \nabla u) = 0 \text{ in } \Omega \\ \ip{A \nabla u}{\vn} = 0 \text{ on } \partial \Omega \end{cases} , \end{align*} which implies $u = 0.$ However, this yields \begin{align*} 0 = \norm{u}_{C^{2, \alpha}} = \lim_{k \to \infty} \norm{u_k}_{C^{2, \alpha}} = 1 , \end{align*} which is a contradiction. Hence, the desired estimate holds. \end{proof} \printbibliography \end{document}
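The compatibility condition (\ref{eq:compatibility-condition}) in the note above is exactly the divergence theorem applied to the flux $A\nabla u$: integrating $f = \nabla\cdot(A\nabla u)$ over $\Omega$ must equal integrating $g = \ip{A\nabla u}{\vn}$ over $\partial\Omega$. A symbolic sanity check with SymPy on the unit square (chosen only because its boundary integrals are easy to write out explicitly; the particular non-symmetric $A$ and the test function $u$ are ours, not from the note):

```python
import sympy as sp

x, y = sp.symbols('x y')
# illustrative choices: Omega = (0,1)^2, smooth elliptic A, smooth u
u = sp.sin(sp.pi * x) * (y**3 - y)
A = sp.Matrix([[2 + x*y, sp.Rational(1, 2)], [0, 1 + x**2]])

flux = A * sp.Matrix([sp.diff(u, x), sp.diff(u, y)])   # A grad u
f = sp.diff(flux[0], x) + sp.diff(flux[1], y)          # div(A grad u)

vol = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))
bdry = (sp.integrate(flux[0].subs(x, 1), (y, 0, 1))    # right edge, n = (1, 0)
        - sp.integrate(flux[0].subs(x, 0), (y, 0, 1))  # left edge, n = (-1, 0)
        + sp.integrate(flux[1].subs(y, 1), (x, 0, 1))  # top edge, n = (0, 1)
        - sp.integrate(flux[1].subs(y, 0), (x, 0, 1))) # bottom edge, n = (0, -1)

assert abs(float(vol - bdry)) < 1e-9    # int_Omega f == int_dOmega g
```

The square is of course not a $C^{2,\alpha}$-domain, but the identity itself is independent of boundary regularity, which is why any computable domain serves for the check.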
2412.19486v1
http://arxiv.org/abs/2412.19486v1
A group and the completion of its coset semigroup
\def\CTeXPreproc{Created by ctex v0.2.13, don't edit!} \documentclass[12pt]{article} \usepackage{amssymb} \usepackage{amsmath} \usepackage{tikz} \usepackage{amscd} \usepackage{latexsym} \usepackage{mathrsfs} \usepackage{enumitem} \usepackage{lipsum} \usepackage[noadjust]{cite} \setlength{\oddsidemargin}{-0.1cm} \setlength{\evensidemargin}{-0.1cm} \setlength{\topmargin}{-1.0cm} \setlength{\parindent}{12pt} \setlength{\parskip}{3ptplus1ptminus2pt} \setlength{\baselineskip}{20pt plus2pt minus1pt} \setlength{\textheight}{23true cm} \setlength{\textwidth}{16true cm} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newcommand{\bm}[1]{\mbox{\boldmath{$#1$}}} \newcommand{\qed}{\hfill $\Box$} \newenvironment{Proof}{ \emph{Proof.}}{\qed} \def\bc{\begin{center}} \def\ec{\end{center}} \def\fo{{\cal F}(S)/{\rho}} \def\no{\noindent} \numberwithin{equation}{section} \begin{document} \title{{\bf A group and the completion of its coset semigroup}\footnote{This paper is supported by National Natural Science Foundation of China (11971383, 12201495), and Shaanxi Fundamental Science Research Project for Mathematics and Physics (22JSY023).}} \author{{\bf Xian-zhong Zhao}, {\bf Zi-dong Gao}, {\bf Dong-lin Lei} \footnote{Corresponding author. E-mail: [email protected]} \\ {\small School of Mathematics, Northwest University}\\ {\small Xi'an, Shaanxi, 710127, P.R. China}\\} \date{} \maketitle \vskip -4pt \baselineskip 16pt {\small \noindent\textbf{ABSTRACT:} Let ${\cal K}_1(G)$ denote the inverse subsemigroup of ${\cal K}(G)$ consisting of all right cosets of all non-trivial subgroups of $G$. This paper concentrates on the study of the group $\Sigma({\cal K}_1(G))$ of all units of the completion of ${\cal K}_1(G)$. 
The characterizations and the representations of $\Sigma({\cal K}_1(G))$ are given when $G$ is a periodic group whose minimal subgroups permute with each other. Based on these, for such groups $G$ except some special $p$-groups, it is shown that $G$ and its coset semigroup ${\cal K}_1(G)$ are uniquely determined by each other, up to isomorphism. This extends the related results obtained by Schein in 1973. \vskip 6pt \noindent \textbf{Keywords:} periodic groups; coset semigroups; inverse semigroups; completions \vskip 6pt \noindent \textbf{2020 Mathematics Subject Classifications:} 20M18; 20F50 } \section{Introduction} Let $G$ be a group and ${\cal K}(G)$ denote the set of all right cosets of all subgroups of $G$. Then ${\cal K}(G)$ equipped with the binary operation $$Ha* Kb = \langle H, K^{a^{-1}}\rangle ab$$ becomes an inverse monoid (see \cite{schein1966} or \cite{mcali2}). An inverse subsemigroup of ${\cal K}(G)$ is said to be a \emph{coset semigroup} of $G$. The coset semigroups of a group play an important role in the theory of inverse semigroups. They have attracted the attention of many scholars, such as Schein, McAlister, East, Ara\'{u}jo and Shum (see \cite{schein1966, schei4, mcali2, james, james2, shum, arau}). It is easy to see that ${\cal K}_1 (G)\cong {\cal K}_1 (\widetilde{G})$ if $ G\cong \widetilde{G}$ for any two groups $G$ and $\widetilde{G}$, where ${\cal K}_1(G)$ (resp. ${\cal K}_1 (\widetilde{G})$) denotes the inverse subsemigroup of ${\cal K}(G)$ (resp. ${\cal K} (\widetilde{G})$) consisting of all right cosets of all non-trivial subgroups of $G$ (resp. $\widetilde{G}$), but not vice versa. Schein \cite{schei2} tells us that an inverse semigroup and its completion are uniquely determined by each other, up to isomorphism. That is to say, for any inverse semigroups $S$ and $T$, $S\cong T$ if and only if $C(S)\cong C(T)$, where $C(S)$ and $C(T)$ are the completions of $S$ and $T$, respectively.
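The coset product $Ha* Kb = \langle H, K^{a^{-1}}\rangle ab$ is easy to experiment with on a small abelian group, where the conjugate $K^{a^{-1}}$ is just $K$. A minimal sketch for $G = \mathbb{Z}_{12}$ written additively (the helper names are ours, chosen for illustration):

```python
n = 12  # illustrative choice: G = Z_12, written additively

def subgroup(gens):
    """Subgroup of Z_n generated by gens; here <H, K> = subgroup(H | K)."""
    S = {0}
    while True:
        T = S | {(s + g) % n for s in S for g in gens}
        if T == S:
            return frozenset(S)
        S = T

def coset(H, a):
    """The right coset H + a as a set of residues."""
    return frozenset((h + a) % n for h in H)

def star(Ha, Kb):
    """Ha * Kb = <H, K^(a^-1)> (a + b); conjugation is trivial here."""
    (H, a), (K, b) = Ha, Kb
    return (subgroup(H | K), (a + b) % n)

H = subgroup({4})                          # {0, 4, 8}
K = subgroup({6})                          # {0, 6}
J, c = star((H, 1), (K, 5))
assert sorted(J) == [0, 2, 4, 6, 8, 10]    # <H, K> = <2>
assert coset(J, c) == J                    # 1 + 5 = 6 lies in <2>
assert star((H, 0), (H, 0)) == (H, 0)      # subgroups are the idempotents
```

The last assertion reflects the general fact that the idempotents of ${\cal K}(G)$ are exactly the subgroups of $G$.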
In this paper we investigate the groups $G$ for which ${\cal K}_1 (G)\cong {\cal K}_1 (\widetilde{G})$ implies that $ G\cong \widetilde{G}$ for any group $\widetilde{G}$. As a consequence, we obtain that for such groups, the group $G$, the coset semigroup ${\cal K}_1(G)$ of $G$ and the completion $C({\cal K}_1(G))$ of ${\cal K}_1(G)$ are uniquely determined by each other, up to isomorphism. This not only extends the related result obtained by Schein in \cite{schei2}, but also builds a bridge between the investigation of such groups and that of their coset semigroups. The following concepts and results on the completion of an inverse semigroup, due to Schein, are needed. Following Schein \cite{schei2}, we say that two elements $x$ and $y$ of an inverse semigroup $S$ are \emph{compatible}, written $x\thicksim y$, if both $xy^{-1}$ and $x^{-1}y$ are idempotents. A subset $A$ of $S$ is said to be \emph{compatible} if the elements of $A$ are pairwise compatible. Recall that the natural partial order on $S$ is defined by $x\leq y$ if and only if $x=ey$ for some idempotent $e$ of $S$, and that $S$ equipped with this order becomes a partially ordered set. A compatible subset $A$ of $S$ is said to be \emph{permissible} if it is an order ideal. The set of all permissible subsets of $S$ is denoted by $C(S)$, called the \emph{completion} of $S$. It is well-known that $C(S)$ is a complete and infinitely distributive inverse monoid under the usual multiplication of subsets and that the natural partial order on $C(S)$ coincides with subset inclusion. An idempotent of $C(S)$ is just an order ideal contained in $E(S)$. The identity of $C(S)$ is $E(S)$. For any $A\in C(S)$, the inverse $A^{-1}$ of $A$ is equal to $\{a^{-1} \mid a \in A\}$. Also, $S$ can be embedded in $C(S)$ by means of the embedding $\iota: S\to C(S)$ defined by $\iota(s)=[s]$, where $[s]$ is the order ideal generated by $s$. The group of units of $C(S)$ is denoted by $\Sigma(S)$ (see Schein \cite{schei2} or Lawson \cite{lawson} for details).
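The compatibility relation has a very concrete meaning in the symmetric inverse monoid of partial bijections: two partial bijections are compatible exactly when their union is again a partial bijection. The following minimal Python sketch (the dict encoding and helper names are ours) tests compatibility directly from the definition above.

```python
def comp(f, g):
    """Composition f∘g of partial bijections encoded as dicts."""
    return {x: f[g[x]] for x in g if g[x] in f}

def inv(f):
    """The inverse partial bijection."""
    return {v: k for k, v in f.items()}

def idem(f):
    """A partial bijection is idempotent iff it is an identity map on its domain."""
    return all(k == v for k, v in f.items())

def compatible(f, g):
    """f ~ g iff both f g^{-1} and f^{-1} g are idempotents."""
    return idem(comp(f, inv(g))) and idem(comp(inv(f), g))

f = {0: 1}          # defined only at 0
g = {0: 1, 1: 0}    # extends f, so f ~ g, and f ∪ g = g is their join
h = {0: 0}          # disagrees with f at 0, so f and h are incompatible
assert compatible(f, g) and not compatible(f, h)
```

Here $\{f,g\}$ is a compatible set with join $f\cup g=g$, while $\{f,h\}$ is not compatible, matching the failure of $f\cup h$ to be a function.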
There is a series of papers in the literature devoted to the study of the completions of inverse semigroups and their applications. Schein \cite{schei2} describes the translational hulls and ideal extensions of an inverse semigroup via its completion. McAlister \cite{mcali5} and Lawson \cite{lawson1} introduce and study the $E$-unitary inverse semigroups and the almost factorisable inverse semigroups $S$ via the semigroup $C(S)$ and the group $\Sigma(S)$. Also, the semigroup $C(S)$ has numerous applications in the research on many other related topics, such as $S$-systems, expansions of inverse semigroups and pseudogroups (see \cite{schei3, shoji,lawson2, lawson3, cast}). Noticing that the mapping $\eta: G \rightarrow \Sigma({\cal K}_1(G))$ defined by $\eta(a)=\{Ha \mid 1 \neq H \leq G\}$ is a group homomorphism whose kernel is the intersection of all non-trivial subgroups of $G$, we can see that a group $G$ embeds into the unit group $\Sigma({\cal K}_1(G))$ of $C({\cal K}_1(G))$ whenever the intersection of all non-trivial subgroups of $G$ is trivial. This motivates us to concentrate on the study of the unit group $\Sigma({\cal K}_1(G))$. Unless otherwise specified in this article, $G$ always denotes a non-trivial periodic group and $\Omega_G = \{H_i\}_{i\in I}$ the set of all minimal subgroups of $G$, and we write $${\cal A}_G=\{\{H_ia_i\}_{i\in I}\mid (\forall \,j, k\in I)\,H_ja_j\thicksim H_ka_k \text{ and } |\{H_ia_i\}_{i\in I}\cap L_{\scriptscriptstyle {H_j}}|=1\}.$$ Then ${\cal A}_G$ equipped with the multiplication $A\cdot B=\overline {[A][B]}$ becomes a group, and the mapping $\varphi$ from ${\cal A}_G$ to $ \Sigma({\cal K}_1(G))$ defined by $\varphi (A) = [A]$ is an isomorphism (see Lemma \ref{tonggou}). Also, it is proved that $$A\cdot B=\{H_ia_ib_{i^{\tau(A)}}\}_{i\in I}$$ for any $A=\{H_ia_i\}_{i\in I}$, $B=\{H_ib_i\}_{i\in I}\in {\cal A}_G$, where $\tau$ is a group homomorphism from ${\cal A}_G$ to the symmetric group $Sym(I)$.
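As a brute-force sanity check of this machinery, the following Python sketch (our own encoding, not from the paper) realises the Klein four-group $G=\mathbb{Z}_2\times\mathbb{Z}_2$ as $\{0,1,2,3\}$ under bitwise XOR, enumerates all permissible subsets of ${\cal K}_1(G)$ directly from the definitions, and counts the units of $C({\cal K}_1(G))$. The count comes out as $8=2|G|$, so for this small $2$-group the unit group is strictly larger than $G$.

```python
from itertools import combinations

# Klein four-group realised as {0,1,2,3} under bitwise XOR.
V4 = [0, 1, 2, 3]
subgroups = [frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3}), frozenset(V4)]
K1 = {frozenset(h ^ g for h in H) for H in subgroups for g in V4}   # the 7 cosets
E = frozenset(X for X in K1 if 0 in X)      # the idempotents, i.e. the subgroups

def mul(X, Y):
    # Ha * Kb = <H, K^{a^{-1}}>ab; in an abelian group this is just the product set.
    return frozenset(x ^ y for x in X for y in Y)

# Every element here is its own inverse, so (Ha)^{-1} = Ha and compatibility of
# X and Y reduces to X*Y being idempotent.
def permissible(A):
    compat = all(mul(X, Y) in E for X, Y in combinations(A, 2))
    ideal = all(Y in A for X in A for Y in K1 if X <= Y)   # down-closed for <=_n
    return compat and ideal

elems = list(K1)
CS = [frozenset(c) for r in range(1, 8)
      for c in combinations(elems, r) if permissible(set(c))]

def prodset(A, B):
    return frozenset(mul(X, Y) for X in A for Y in B)

units = [A for A in CS if prodset(A, A) == E]
assert len(units) == 8
```

Each unit consists of one coset of each of the three minimal subgroups together with $G$ itself, in line with the description of ${\cal A}_G$ above.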
When $H_i H_j = H_j H_i$ for all $H_i,\, H_j \in \Omega_G $, we introduce, for any given $H \in \Omega_G $, a simple graph $\Gamma(H)$ corresponding to $H$ and prove that $$|\Sigma({\cal K}_1(G))|=|{\cal A}_G|=|G||H|^{|J|-1},$$ where $|J|$ denotes the cardinality of the set of all connected components of $\Gamma(H)$. Also, we give the value of $|J|$ by means of the related results in group theory and the theory of vector spaces. Based on these, for a periodic group $G$ whose minimal subgroups permute with each other, we shall show that $G$ is isomorphic to $\Sigma({\cal K}_1(G))$ except for some special $p$-groups. That is to say, for such groups, $G$ and $\Sigma({\cal K}_1(G))$ are uniquely determined by each other, up to isomorphism. Recall also Schein's result \cite{schei2} that an inverse semigroup and its completion are uniquely determined by each other, up to isomorphism; that is, for any inverse semigroups $S$ and $T$, $S$ and $T$ are isomorphic if and only if $C(S)$ and $C(T)$ are isomorphic. Thus we can see that for the periodic groups $G$ whose minimal subgroups permute with each other, except for some special $p$-groups, $G$ and its coset semigroup ${\cal K}_1(G)$ are uniquely determined by each other, up to isomorphism. This extends the related result obtained by Schein in \cite{schei2}. For other unexplained notation and terminology used in this paper, the reader is referred to \cite{lawson,petrich,robinson}. \section{Preliminaries} In this section, the order filters and the compatible relations in the coset semigroups ${\cal K}_1(G)$ are studied. Some properties of ${\cal K}_1(G)$ are given. It is easy to see that the following elementary properties of ${\cal K}(G)$ hold also in ${\cal K}_1(G)$. \begin{Lemma}\cite[Lemma 1.1]{mcali2}\label{kgjiben} \rm Let $G$ be a group and let $Ha, Kb\in{\cal K}(G)$.
Then \begin{itemize} \item[$(i)$] $(Ha)^{-1}=a^{-1}H=H^{a}a^{-1}$; thus $(Ha)*(Ha)^{-1}=H,$ $(Ha)^{-1}*(Ha)=H^{a}$; further $Ha=Kb$ implies $H=K,\,ab^{-1}\in H$; \item[$(ii)$] $Ha{\,\cal R\,}Kb$ $[Ha{\,\cal L\,}Kb]$ if and only if $H = K$ [$H^a=K^b$]; $Ha{\,\cal D\,}Kb$ if and only if $H$ is conjugate to $K$; $Ha\leq_{\cal J} Kb$ if and only if $H$ is conjugate to a subgroup of $K$; \item[$(iii)$] the idempotents of ${\cal K}(G)$ are precisely the subgroups of $G$; the central idempotents are the normal subgroups of $G$; \item[$(iv)$] if $H$ is an idempotent of ${\cal K}(G)$ (i.e., $H$ is a subgroup of $G$), then the $\cal H$-class of ${\cal K}(G)$ containing $H$ is isomorphic to $N_G(H)/H$; \item[$(v)$] the natural partial order on ${\cal K}(G)$ is the inverse of inclusion. \end{itemize} \end{Lemma} In this paper we write $\leq_n$ to denote the natural partial order on ${\cal K}(G)$. It is well-known that a group $G$ is periodic if and only if every non-trivial subgroup of $G$ contains a minimal subgroup, and that the Frattini subgroup $\Phi(G)$ of a group $G$ is equal to the intersection of all maximal subgroups of $G$ if $G$ has at least one maximal subgroup, and is equal to $G$ otherwise. Since a minimal (maximal, respectively) subgroup of $G$ is precisely a maximal (primitive, respectively) idempotent of ${\cal K}_1(G)$, it follows that $\Omega_G$ is precisely the set of all maximal idempotents of ${\cal K}_1(G)$. Thus we have immediately \begin{Lemma}\label{minmax} \rm Let $G$ be a group, $H\leq G$ and $a\in G$. Then \begin{itemize} \item[$(i)$] $Ha\in {\cal K}_1(G)$ is a maximal element if and only if $H\in \Omega_G$; \item[$(ii)$] $G$ is periodic if and only if for any given $K\in E({\cal K}_1(G))$ there exists $L \in \Omega_G$ such that $K\leq_n L$; \item[$(iii)$] $H\leq\Phi(G)$ if and only if $K\leq_n H$ for every primitive idempotent $K$ of ${\cal K}_1(G)$.
\end{itemize} \end{Lemma} \begin{Lemma}\label{filter} \rm In the partially ordered set $({\cal K}_1(G), \leq_n)$, the order filter $(Ha)^\upharpoonright$ generated by an element $Ha$ is equal to $ \{Kb \mid 1< K\leq H, \, b\in G, \, ab^{-1} \in H\}$. In particular, $H^\upharpoonright = {\cal K}_1(H)$. \end{Lemma} \begin{Proof} For any $Kb\in{\cal K}_1(G)$, we have $$\begin{array}{lll} Ha \leq_n Kb &\Longleftrightarrow& (\, \exists \, M \in E({{\cal K}_1(G)})) \, Ha= M* Kb \\ &\Longleftrightarrow& (\, \exists \, 1< M \leq G) \, Ha= \langle M, K\rangle b \\ &\Longleftrightarrow&(\, \exists \, 1< M \leq G) \, H= \langle M, K\rangle , \, ab^{-1}\in H \\ &\Longleftrightarrow& K\leq H , \, ab^{-1}\in H. \\ \end{array}$$ \vskip -6pt This shows that $(Ha)^\upharpoonright = \{Kb \mid 1< K\leq H, \, b\in G, \, ab^{-1} \in H\}$. \end{Proof} \begin{Lemma}\label{order} \rm Let $G$ be a finite group and $\pi_k$ denote the set of all prime divisors of $\prod_{H\in \Omega_G}|R_{\scriptscriptstyle H}|$, where $R_{\scriptscriptstyle H}$ denotes the ${\cal R}$-class of ${\cal K}_1(G)$ containing $H$. If $|\pi_k|>1$ (i.e., $G$ is not a $p$-group), then $|G|={\rm l.c.m.}\{|R_{\scriptscriptstyle H}|: H\in \Omega_G\}$. Otherwise, $\pi_k=\{p\}$ for some prime $p$ (i.e., $G$ is a $p$-group), and then $|G|=p\cdot {\rm l.c.m.}\{|R_{\scriptscriptstyle H}|: H\in \Omega_G\}$. \end{Lemma} \begin{Proof} \rm Suppose that $|G|=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_n^{\alpha_n}$, where $p_1,\ldots,p_n$ are distinct primes. Then we know by the Cauchy Theorem (see \cite[1.6.17]{robinson}) that there exists $H_i\in \Omega_G$ with $|H_i|=p_i$ for each $p_i$, $i=1,2,\ldots,n$.
Hence, we can see from Lemma \ref{kgjiben} that $$|R_{\scriptscriptstyle H_i}|=|G:H_i|=p_1^{\alpha_1}\cdots p_{i-1}^{\alpha_{i-1}}p_i^{\alpha_i-1} p_{i+1}^{\alpha_{i+1}}\cdots p_n^{\alpha_n}.$$ This implies that $$|\pi_k|>1 \iff n>1 \iff \text{ $G$ is not a $p$-group,}$$ and $|G|={\rm l.c.m.}\{|R_{\scriptscriptstyle H}|: H\in \Omega_G\}$ if $|\pi_k|>1$, and $|G|=p\cdot {\rm l.c.m.}\{|R_{\scriptscriptstyle H}|: H\in \Omega_G\}$ if $\pi_k=\{p\}$ for some prime $p$, as required. \end{Proof} Lemma \ref{filter} tells us that the filter $H^\upharpoonright$ of ${\cal K}_1(G)$ is an inverse subsemigroup of ${\cal K}_1(G)$ for each $H\in E({\cal K}_1(G))$. Combining Lemma \ref{order} and \cite[Theorem 1.2.7]{schmi}, we have \begin{Lemma}\label{cyclic} \rm Let $H$ be a finite subgroup of a group $G$. Then $H$ is a cyclic subgroup of order $p_1^{n_1}p_2^{n_2}\cdots p_s^{n_s}$ if and only if $E(H^\upharpoonright)^1$ is a direct product of chains of length $n_1,n_2,\ldots,n_s$, where $p_1,p_2,\ldots,p_s$ are distinct primes. \end{Lemma} \begin{Lemma}\label{perm} \rm Let $G$ be a group and $H$, $K\in \Omega_G$ with $H\neq K$. Then $H$ and $K$ permute in $G$ if and only if $|R_{ \scriptscriptstyle H}^{\scriptscriptstyle (H*K)^\upharpoonright}|$ is a prime, where $R_{ \scriptscriptstyle H}^{\scriptscriptstyle (H*K)^\upharpoonright}$ denotes the ${\cal R}$-class of the inverse subsemigroup $(H*K)^\upharpoonright$ containing $H$. \end{Lemma} \begin{Proof} We know that $H$ and $K$ permute if and only if $HK=KH$, i.e., $HK=\langle H,K\rangle$ in $G$. The latter holds if and only if $|\langle H,K\rangle:H|=|HK:H|$. Notice that $$|HK:H|=|K:K\cap H|=|K|/|H\cap K|=|K|.$$ Also, we know that $|K|$ is a prime, since a maximal idempotent of ${\cal K}_1(G)$ is a minimal subgroup of $G$.
Thus we can see that $H$ and $K$ permute in $G$ if and only if $|R_{ \scriptscriptstyle H}^{\scriptscriptstyle (H*K)^\upharpoonright}|$ is a prime, since $|R_{ \scriptscriptstyle H}^{\scriptscriptstyle (H*K)^\upharpoonright}|=|\langle H,K\rangle:H|$. \end{Proof} To describe the completion of ${\cal K}_1(G)$, we need to study the compatible relations in ${\cal K}(G)$, since the compatible relation on ${\cal K}_1(G)$ is exactly the restriction of the compatible relation on ${\cal K}(G)$ to ${\cal K}_1(G)$. \begin{Lemma}\label{hakbc} \rm Let $G$ be a group and $Ha, Kb\in {\cal K}(G)$. Then $$Ha\thicksim Kb\iff ab^{-1}\in \langle H,K^{ba^{-1}}\rangle\cap \langle H, K\rangle.$$ \end{Lemma} \begin{Proof} We can see from Lemma \ref{kgjiben} that $$\begin{aligned} Ha\thicksim Kb&\iff Ha*(Kb)^{-1},\,(Ha)^{-1}*Kb\in E({\cal K}(G))\\ &\iff \langle H,K^{ba^{-1}}\rangle ab^{-1},\,\langle H^a, K^a\rangle a^{-1}b\leq G\\ &\iff ab^{-1}\in \langle H,K^{ba^{-1}}\rangle,\,a^{-1}b\in \langle H^a, K^a\rangle\\ &\iff ab^{-1}\in \langle H,K^{ba^{-1}}\rangle\cap \langle H, K\rangle. \end{aligned}$$ \end{Proof} \begin{Corollary}\label{hak} \rm Let $G$ be a group, $a\in G$ and $H, K\leq G$ with $HK=KH$. Then the following statements are equivalent: \begin{itemize} \item[$(i)$] $Ha\thicksim K;$ \item[$(ii)$] $a\in HK;$ \item[$(iii)$]$(\exists\, b\in K)\,\, Ha=Hb.$ \end{itemize} \end{Corollary} \begin{Proof} By Lemma \ref{hakbc}, $(i)\Rightarrow (ii)$ and $(ii)\Rightarrow (iii)$ are evident. To show $(iii)\Rightarrow (i)$, notice that $$Ha*K^{-1}=Hb*K= \langle H,K^{b^{-1}}\rangle b= \langle H,K\rangle b=\langle H,K\rangle \in E({\cal K}(G)),$$ $$(Ha)^{-1}*K=H^{b^{-1}}b*K=\langle H^{b^{-1}}, K^{b^{-1}}\rangle b= \langle H^{b^{-1}}, K\rangle b= \langle H^{b^{-1}}, K\rangle \in E({\cal K}(G)).$$ Hence $Ha\thicksim K$, as required. \end{Proof} As a consequence of \cite[Lemma 1.3]{lawson1}, we have immediately \begin{Lemma}\label{rjiaowei1} \rm Let $S$ be an inverse semigroup.
Then $\thicksim \cap \,{\cal L}=\thicksim \cap \,{\cal R}=\Delta_S$. \end{Lemma} \begin{Lemma} \label{jiaowei1} \rm Let $G$ be a group, $\{K_\lambda \}_{\lambda \in \Lambda} = \{K \mid 1 \neq K \leq G\}$ and $A = \{K_\lambda a_\lambda\}_{\lambda \in \Lambda_{\scriptscriptstyle A}}\in C({\cal K}_1(G))$. Then $|A\cap L_{\scriptscriptstyle {K_\lambda}}|\leq 1$ and $|A\cap R_{\scriptscriptstyle {K_\lambda}}|\leq1$ for any $\lambda\in\Lambda$. Moreover, $$A\in \Sigma({\cal K}_1(G)) \iff (\forall \lambda \in \Lambda)\,\,|A\cap L_{\scriptscriptstyle {K_\lambda}}|=|A\cap R_{\scriptscriptstyle {K_\lambda}}|=1.$$ \end{Lemma} \begin{Proof} It is easy to see from Lemma $\ref{rjiaowei1}$ that $|A\cap L_{\scriptscriptstyle {K_\lambda}}|\leq 1$ and $|A\cap R_{\scriptscriptstyle {K_\lambda}}|\leq1$ for any $\lambda\in\Lambda$. Also, Lemma \ref{kgjiben} tells us that $ E({\cal K}_1(G))=\{K_\lambda\}_{\lambda \in \Lambda}$, and we have from \cite[Chapter 1, Lemma 22]{lawson} \begin{eqnarray}\label{21} AA^{-1} & = & \{(K_\lambda a_\lambda)*(K_\lambda a_\lambda)^{-1}\}_{\lambda \in \Lambda_{\scriptscriptstyle A}} = \{K_\lambda\}_{\lambda \in \Lambda_{\scriptscriptstyle A}}, \end{eqnarray} \begin{eqnarray}\label{22} A^{-1}A & = &\{(K_\lambda a_\lambda)^{-1}*(K_\lambda a_\lambda)\}_{\lambda \in \Lambda_{\scriptscriptstyle A}} = \{K_\lambda ^{a_\lambda}\}_{\lambda \in \Lambda_{\scriptscriptstyle A}}. \end{eqnarray} Suppose that $A\in \Sigma({\cal K}_1(G))$ and $AB = BA = E({\cal K}_1(G))$. Then $B$ is exactly the inverse of $A$ in the inverse semigroup $C({\cal K}_1(G))$. That is to say, $B = A^{-1}$ and so $AA^{-1}=A^{-1}A=E({\cal K}_1(G))$. Thus it follows from (\ref{21}) and (\ref{22}), respectively, that $\Lambda_A=\Lambda$ and that for each $\lambda\in \Lambda$ there exists $K_\nu a_\nu\in A$ such that $K_\nu^{a_\nu} = K_\lambda$.
This implies that $K_\nu a_\nu \, {\cal L} \, K_\lambda$ and $K_\lambda a_\lambda \, {\cal R} \, K_\lambda$ for any $\lambda\in \Lambda$ and so $1 \leq |A\cap L_{\scriptscriptstyle {K_\lambda}}|$, $1 \leq |A\cap R_{\scriptscriptstyle {K_\lambda}}|$ for any $\lambda \in \Lambda$. Now we have shown that $$(\forall \lambda \in \Lambda)\,\,|A\cap L_{\scriptscriptstyle {K_\lambda}}|=|A\cap R_{\scriptscriptstyle {K_\lambda}}|=1.$$ Conversely, suppose that $|A\cap L_{\scriptscriptstyle {K_\lambda}}|=|A\cap R_{\scriptscriptstyle {K_\lambda}}|=1$ for any $\lambda \in \Lambda$. Then there exist $K_\mu a_\mu, \, K_\nu a_\nu\in A$ such that $K_\mu a_\mu \, {\cal L} \, K_\lambda$ and $K_\nu a_\nu \, {\cal R} \, K_\lambda$ for any $\lambda\in \Lambda$. That is to say, $K_\lambda=K_\nu\in AA^{-1}$ and $K_\lambda=K_\mu^{a_\mu} \in A^{-1}A$. This shows that $AA^{-1}=A^{-1}A=E({\cal K}_1(G))$ and so $A\in \Sigma({\cal K}_1(G))$, as required. \end{Proof} \section{The unit group $\Sigma({\cal K}_1(G))$} In this section we shall investigate the unit group $\Sigma({\cal K}_1(G))$ of the completion $C({\cal K}_1(G))$ of ${\cal K}_1(G)$. Recall that $\{K_\lambda \}_{\lambda \in \Lambda} = \{K \mid 1 \neq K \leq G\}$ and $\Omega_G = \{H_i\}_{i\in I}$ denotes the set of all minimal subgroups of $G$ and $${\cal A}_G=\{\{H_ia_i\}_{i\in I}\mid (\forall \,j, k\in I)\,H_ja_j\thicksim H_ka_k \text{ and } |\{H_ia_i\}_{i\in I}\cap L_{\scriptscriptstyle {H_j}}|=1\}.$$ It is well known that ${\cal K}_1(G)$ is an inverse semigroup and a partially ordered set under the natural order. For any $A\subseteq {\cal K}_1(G)$ we denote by $[A]$ the order ideal generated by $A$, and by $\overline A$ the set of all maximal elements in $A$. In particular, for any $U=\{K_{\lambda}a_{\lambda}\}_{\lambda\in \Lambda}\in \Sigma({\cal K}_1(G))$, $\overline U=\{H_ia_i\}_{i\in I}$, and $[\,\overline{U}\,]=U$. 
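To illustrate the operations $[\,\cdot\,]$ and $\overline{(\,\cdot\,)}$, we record a small example computed routinely from the definitions above.

\begin{Example} \rm Let $G=\mathbb{Z}_6$, written additively. Then $\Omega_G=\{H_1, H_2\}$ with $H_1=\{0,3\}$ and $H_2=\{0,2,4\}$, and ${\cal K}_1(G)$ consists of the three cosets of $H_1$, the two cosets of $H_2$ and $G$ itself. Take $A=\{H_1+1,\, H_2+1\}\in {\cal A}_G$. Then $[A]=A\cup\{G\}$, since $G$ is the only element of ${\cal K}_1(G)$ lying below the elements of $A$ in the natural order, and $\overline{[A]}=A$. Moreover, $$A\cdot A=\overline{[A][A]}=\{H_1+2,\,H_2+2\},$$ since $(H_1+1)*(H_1+1)=H_1+2$, $(H_2+1)*(H_2+1)=H_2+2$, and all other products of elements of $[A]$ equal $G$. This agrees with the formula $A\cdot B=\{H_ia_ib_{i^{\tau(A)}}\}_{i\in I}$, in which $\tau_A$ is the identity permutation because $G$ is abelian. \end{Example}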
Firstly, we have \begin{Lemma}\label{tonggou} \rm ${\cal A}_G$ equipped with the multiplication $A\cdot B=\overline {[A][B]}$ becomes a group, and the mapping $\varphi$ from ${\cal A}_G$ to $ \Sigma({\cal K}_1(G))$ defined by $\varphi (A) = [A]$ is an isomorphism. \end{Lemma} \begin{Proof} \rm Let $A=\{H_ia_i\}_{i\in I}\in {\cal A}_G$. It follows immediately from \cite[Section 1.4, Lemma 14]{lawson} that $[A]\in C({\cal K}_1(G))$ and so $|[A]\cap L_{\scriptscriptstyle {K}}|\leq 1$ and $|[A]\cap R_{\scriptscriptstyle {K}}|\leq1$ for any $1\neq K\leq G$ by Lemma \ref{jiaowei1}. Also, there exists $H_s\in\Omega_G$ such that $H_s\leq K$, since $G$ is periodic. It follows that there exists $H_ta_t\in A$ such that $H_ta_t \, {\cal L} \, H_s$, since $|\{H_ia_i\}_{i\in I}\cap L_{\scriptscriptstyle {H_s}}|=1$. That is to say, $H_t^{a_t}=H_s\leq K$. This implies that $Ka_s\leq_n H_sa_s$ and $K^{a_t^{-1}}a_t\leq_n H_ta_t$ and so $Ka_s, \, K^{a_t^{-1}}a_t\in [A]$. Noticing that $Ka_s \, {\cal R} \, K$ and $K^{a_t^{-1}}a_t \, {\cal L} \, K$, we have that $|[A]\cap L_{\scriptscriptstyle {K}}|= 1$ and $|[A]\cap R_{\scriptscriptstyle {K}}|=1$ for any $1\neq K\leq G$. Thus $[A]\in \Sigma({\cal K}_1(G))$ by Lemma \ref{jiaowei1}. We have shown that $\varphi$ is well-defined. Also, $\varphi$ is injective, since $A=\overline{[A]}$ for any $A\in {\cal A}_G$. Suppose that $U\in \Sigma({\cal K}_1(G))$ and write $A=\overline{U}$. Then $|U\cap L_{\scriptscriptstyle {K}}|=|U\cap R_{\scriptscriptstyle {K}}|=1$ for any $1\neq K\leq G$ by Lemma \ref{jiaowei1}. It is clear that $A$ is compatible and we may write $A=\{H_ia_i\}_{i\in I}$ by Lemma \ref{minmax}. Also, for any $H_j\in\Omega_G$, we have $|A\cap L_{\scriptscriptstyle {H_j}}|\leq1$ by Lemma \ref{rjiaowei1} and there exists $Ha\in U$ such that $Ha \, {\cal L} \, H_j$. That is to say, $H^a=H_j$ and so $H=H_j^{a^{-1}}\in\Omega_G$. It follows from Lemma \ref{minmax} that $Ha\in A$ and so $A\in {\cal A}_G$.
Noticing that $\varphi(A)=[A]=[\overline{U}]=U$, we have shown that $\varphi$ is surjective and the multiplication $\cdot$ on ${\cal A}_G$ defined above is well-defined. Also, we have that $$\varphi(A\cdot B)=[A\cdot B]=\left[\overline{[A] [B]}\right]= [A] [B]=\varphi(A) \varphi(B),$$ for all $A,B\in{\cal A}_G$, since $[A] [B]\in \Sigma({\cal K}_1(G))$. Hence $\varphi$ is bijective and preserves multiplication, and so $({\cal A}_G,\cdot)$ is a group, since $\Sigma({\cal K}_1(G))$ is a group, as required. \end{Proof} Suppose that $A=\{H_ia_i\}_{i\in I}$, $B=\{H_ib_i\}_{i\in I}\in {\cal A}_G$ and $A\cdot B=\{H_i c_i\}_{i\in I}$. Then for any $k\in I$, there exist $i,j\in I$ such that $$H_kc_k=H_ia_i* H_ja_j=\langle H_i,\,H_j^{a_i^{-1}}\rangle a_ib_j.$$ Hence $\langle H_i,\,H_j^{a_i^{-1}}\rangle = H_k$, and so $H_i=H_j^{a_i^{-1}}=H_k$, thus $H_j=H_i^{a_i}$. Further, defining a transformation $\tau_A$ on $I$ by $i^{\tau_A}=j$ whenever $H_j=H_i^{a_i}$, we have $$A\cdot B=\{H_ia_ib_{i^{\tau_A}}\}_{i\in I}.$$ In fact, we can prove that $\tau_A$ is a permutation on $I$. It is easy to see that $\tau_A$ is well-defined, since $H_ia_i=H_ia^{\prime}_i$ implies that $H_i^{a_i}=H_i^{a'_i}$. Suppose that $i^{\tau_A}=j^{\tau_A}$. Then $H_i^{a_i}=H_j^{a_j}$ and so $H_ia_i \, {\cal L} \, H_ja_j$. Noticing that $H_ia_i\thicksim H_ja_j$, we have that $H_ia_i=H_ja_j$ by Lemma \ref{rjiaowei1}. Thus $i=j$ and so $\tau_A$ is injective. Also, there exists $H_ia_i\in A$ such that $H_ia_i \, {\cal L} \, H_j$ for any $j\in I$, since $|A\cap L_{\scriptscriptstyle {H_j}}|=1$. It follows that $H_i^{a_i}=H_j$ and so $i^{\tau_A}=j$. This shows that $\tau_A$ is surjective and so it is a permutation on $I$. Define a map $\tau$ from ${\cal A}_G$ to $Sym(I)$ by $\tau(A)=\tau_A$. Then for each $k\in I$, one has $$H_{k^{\tau(A)\tau(B)}}=H_{(k^{\tau_A})^{\tau_B}}=(H_{k^{\tau_A}})^{b_{k^{\tau_A}}}=H_k^{a_kb_{k^{\tau_A}}} =H_{k^{\tau(A\cdot B)}},$$ and so $k^{\tau(A)\tau(B)}=k^{\tau(A\cdot B)}$.
This shows that $$\tau(A)\tau(B)=\tau(A\cdot B).$$ That is to say, $\tau$ is a homomorphism. \noindent{\bf Remark\,} Let $\{\Omega_t\}_{t\in T}$ be all orbits of $\Omega_G$ under the conjugation action of $G$ on $\Omega_G$ and $X_t=\{k\in I\mid H_k\in \Omega_t\}$ for any $t\in T$. Then $$\tau({\cal A}_G)\leq {\textstyle\prod}_{t\in T}\, Sym(X_t),$$ since $H_{i^{\tau_A}}$ is conjugate to $H_{j^{\tau_A}}$ whenever $H_i$ is conjugate to $H_j$. Summarizing the above results, we have \begin{Proposition} \rm For any $A=\{H_ia_i\}_{i\in I}$, $B=\{H_ib_i\}_{i\in I}\in {\cal A}_G$, $$A\cdot B=\{H_ia_ib_{i^{\tau_A}}\}_{i\in I},$$ where the transformation $\tau_A$ defined by $i^{\tau_A}=j$ whenever $H_j=H_i^{a_i}$ is a permutation on $I$ and the map $\tau$ from ${\cal A}_G$ to $Sym(I)$ defined by $\tau(A)=\tau_A$ is a group homomorphism. \end{Proposition} Next, we shall investigate the unit group $\Sigma({\cal K}_1(G))$ when $G$ is a nontrivial periodic group satisfying $H_iH_j = H_jH_i$ for any $H_i, \,H_j\in \Omega_G$ and $|\Omega_G|\geq 2$. Write $\Omega_p(G)$ to denote the subgroup of $G$ generated by all minimal subgroups of $G$ of order $p$, i.e., $\Omega_p(G)=\langle H_i\in \Omega_G \mid |H_i|=p\rangle=\langle x\in G\mid x^p=1\rangle$. Let $\Omega(G)=\prod_{p\in\mathbb{P}}\Omega_p(G)$, where $\mathbb{P}$ denotes the set of all primes. Then we have \begin{Lemma}\label{minper} \rm The following statements are equivalent: \begin{itemize} \item[$(i)$] $H_iH_j = H_jH_i$ for any $H_i, \,H_j\in \Omega_G$; \item[$(ii)$] $\Omega_p(G)$ is either a trivial subgroup or an elementary abelian $p$-subgroup of $G$ for each prime $p$; \item[$(iii)$] $\Omega(G)$ is an abelian subgroup of $G$. \end{itemize} \end{Lemma} \begin{Proof} Firstly, we prove that $(i)\Rightarrow(ii)$. Suppose that $p$ is a prime and $\Omega_p(G)$ is not trivial. It is sufficient to show that $\Omega_p(G)$ is abelian. Suppose $a, b\in G$ with $|a|=|b|= p$. 
Then $\langle a\rangle=H_i$, $\langle b\rangle=H_j$ for some $H_i$, $H_j\in \Omega_G$, and $H_iH_j\leq G$. Also, we can see that $|H_iH_j|=p$ or $|H_iH_j|=p^2$, since $$|H_iH_j|=|H_i||H_j|/|H_i\cap H_j|=p^2/|H_i\cap H_j|.$$ This implies that $H_iH_j$ is an abelian group by \cite[1.6.15]{robinson} and so $ab = ba$, as required. Now, we prove that $(ii)\Rightarrow(iii)$. It is easy to see that $\Omega_p(G)\unlhd G$ for any prime $p$. This implies that $\Omega(G)$ is the direct product of $\Omega_p(G)$, $p\in\mathbb{P}$. Thus $\Omega(G)$ is abelian. It is easy to see $(iii)\Rightarrow(i)$, since $\Omega(G)$ contains all minimal subgroups of $G$. This completes the proof. \end{Proof} \begin{Lemma}\label{ag} \rm For any $A=\{H_i a_i\}_{i\in I}\in {\cal A}_G$ and $g\in G$, write $Ag=\{H_i a_ig\}_{i\in I}$. Then $Ag \in {\cal A}_G$. \end{Lemma} \begin{Proof} It is easy to see from Lemma \ref{hakbc} that $H_ia_ig\thicksim H_ja_jg$ for any $i, \, j \in I$, since $H_ia_i\thicksim H_ja_j$. Also, there exists $H_ia_i \in A$ such that $H_ia_i \, {\cal L} \, H_j^{g^{-1}}$ for any $j\in I$, since $|A\cap L_{\scriptscriptstyle{H_j^{g^{-1}}}}|=1$. That is to say, $H_i^{a_i}=H_j^{g^{-1}}$, i.e., $H_i^{a_ig}=H_j$. This implies that $H_ia_ig\, {\cal L} \,H_j$ and so $|Ag \cap L_{\scriptscriptstyle {H_j}}|=1$ for any $j\in I$. Thus $Ag\in {\cal A}_G$, as required. \end{Proof} In the remainder of this section, we always assume that $G$ is a nontrivial periodic group satisfying $H_iH_j = H_jH_i$ for any $H_i, \,H_j\in \Omega_G$ and $|\Omega_G|\geq 2$. Fix $H_r \in \Omega_G$ and take a right transversal ${\cal{C}}$ of $H_r$ in $G$. Write ${\cal A}(H_r)$ to denote the subset of ${\cal A}_G$ consisting of all $\{H_i a_i\}_{i\in I}\in{\cal A}_G$ such that $ a_r \in H_r $. Define a map $\psi$ from ${\cal A}(H_r)\times {\cal C}$ to ${\cal A}_G$ by $\psi(A, c_{\scriptscriptstyle\ell})= Ac_{\scriptscriptstyle\ell}$.
Then we have \begin{Lemma}\label{t-1} \rm The mapping $\psi$ defined above is a bijection. In particular, $$|{\cal A}_G|=|{\cal A}(H_r)||G:H_r|.$$ \end{Lemma} \begin{Proof} We can see from Lemma \ref{ag} that $\psi$ is well-defined. Suppose that $Ac_{\scriptscriptstyle\ell}=Bc_k$ for some $A,\, B\in {\cal A}(H_r)$ and $c_{\scriptscriptstyle\ell}, \, c_k\in {\cal C}$. Then $H_rc_{\scriptscriptstyle\ell}=H_rc_k$ and so $c_{\scriptscriptstyle\ell}=c_k$. This implies that $$B=(Bc_k)c_k^{-1}=(Ac_{\scriptscriptstyle\ell})c_{\scriptscriptstyle\ell}^{-1}=A$$ and so $\psi$ is injective. For each $A=\{H_ia_i\}_{i\in I}\in{\cal A}_G$, there exists $c_{\scriptscriptstyle\ell}\in\mathcal C$ such that $H_r a_r = H_r c_{\scriptscriptstyle\ell}$. Let $A'=Ac_{\scriptscriptstyle\ell}^{-1}$. Then $A'\in {\cal A}(H_r)$ by Lemma \ref{ag} and $\psi(A',c_{\scriptscriptstyle\ell})=A$. This shows that $\psi$ is bijective, and so $$|{\cal A}_G|=|{\cal A}(H_r)\times {\cal C}|=|{\cal A}(H_r)| |{\cal C}|=|{\cal A}(H_r)||G:H_r|,$$ as required. \end{Proof} Define a simple graph $\Gamma(H_r)$ corresponding to $H_r$ by $\Gamma(H_r)=(V_{\scriptscriptstyle{H_r}},E_{\scriptscriptstyle{H_r}})$, where $V_{\scriptscriptstyle{H_r}}=\Omega_G\setminus \{H_r\}$ and $$E_{\scriptscriptstyle{H_r}}=\{(H_i,H_j)\mid H_i\neq H_j,\,\,H_iH_j\cap H_r=1\}.$$ Write $\{\Gamma_j\}_{j\in J}$ to denote the set of all connected components of $\Gamma(H_r)$. We have \begin{Lemma}\label{graph} \rm The following statements are true: \begin{itemize} \item[$(i)$] ${\cal A}(H_r)=\left\{\bigcup_{j\in J}\Gamma_ja_j\cup \{H_r\}\mid (\forall j\in J)\, a_j\in H_r\right\},$ \item[$(ii)$] $|{\cal A}(H_r)|=|H_r|^{|J|}.$ \end{itemize} \end{Lemma} \begin{Proof} \rm We can see from Corollary \ref{hak} that $$\begin{aligned} H_ia_i\thicksim H_r\iff&(\exists\, b_i\in H_r)\,\, H_ia_i=H_ib_i,\\ H_ia_i\thicksim H_ja_j\iff& a_ia_j^{-1}\in H_iH_j\iff b_ib_j^{-1}\in H_iH_j, \end{aligned}$$ for any given $A=\{H_ia_i\}_{i\in I}\in {\cal A}(H_r)$.
It follows that $$A=\{H_ib_i\}_{i\in I\setminus\{r\}}\cup\{H_r\},$$ where $b_i\in H_r$, $b_ib_j^{-1}\in H_iH_j$ for all $i,j\in I\setminus\{r\}$. On the other hand, $b_i=b_j$ if $(H_i, H_j)\in E_{\scriptscriptstyle{H_r}}$, since $b_ib_j^{-1}\in H_iH_j\cap H_r=1$. This implies that $b_i=b_j$ if $H_i, H_j\in \Gamma_k$ for some $k\in J$. This shows that $$A=\bigcup_{j\in J}\Gamma_jc_j\cup \{H_r\}$$ for some $c_j\in H_r,\,j\in J$. Conversely, suppose that $A=\bigcup_{j\in J}\Gamma_ja_j\cup \{H_r\}$, where $a_j\in H_r$ for any $j\in J$. Then, it is easy to see that $A$ is a compatible subset of ${\cal K}_1(G)$. Also, we can see from Lemma \ref{minper} that $H_i^{a_j}=H_i$, i.e., $H_ia_j \, {\cal L} \, H_i$ for any $H_ia_j\in A$. Hence $|A \cap L_{\scriptscriptstyle {H_i}}|=1$ for any $H_i\in\Omega_G$ and so $A\in {\cal A}(H_r)$. This shows that $${\cal A}(H_r)=\left\{\bigcup_{j\in J}\Gamma_ja_j\cup \{H_r\}\mid (\forall j\in J)\, a_j\in H_r\right\},$$ and so $|{\cal A}(H_r)|=|H_r|^{|J|}$, as required. \end{Proof} By Lemma \ref{tonggou}, Lemma \ref{t-1} and Lemma \ref{graph} we immediately have \begin{Theorem}\label{|ag|} $|\Sigma({\cal K}_1(G))|=|{\cal A}_G|=|G||H_r|^{|J|-1}.$ \end{Theorem} \rm Next, we shall show that $\Gamma(H_r)$ is connected if $G$ is not a $p$-group, or if $G$ is a $p$-group with $|\Omega_G|> p^2$ for some prime $p$; otherwise it is an empty graph. \begin{Theorem}\label{pq} \rm If $G$ is not a $p$-group for any prime $p$ and $H\in V_{\scriptscriptstyle H_r}$ with $|H|\neq |H_r|$, then $(H_i,H)\in E_{\scriptscriptstyle H_r}$ for all $H_i\in V_{\scriptscriptstyle H_r}$ with $H_i\neq H$, and $|J|=1$. \end{Theorem} \begin{Proof} \rm Suppose that $|H|=p$ and $|H_r|=q$. If $(H_i,H)\notin E_{\scriptscriptstyle H_r}$ for some $H_i\in V_{\scriptscriptstyle H_r}$ with $H_i\neq H$, then $H_iH\cap H_r=H_r$, and so $H_r\leq H_iH$.
Thus we have $$p |H_i| = |H||H_i|/|H \cap H_i| = |H H_i| = |H H_i : H_r||H_r| = q |H H_i : H_r|.$$ This implies that $|H_i|=q$, since $p,\,q $ and $|H_i|$ are prime numbers and $p \neq q$. We can now see that $q^2 = |H_i||H_r| = |H_iH_r|$ and $ pq =|H||H_i| = |HH_i|$. Again by $H_r\leq HH_i$ and $H_i\leq HH_i$, we have that $H_iH_r\leq HH_i$ and so $q^2$ divides $pq$, a contradiction. This shows that $(H,H_i)\in E_{\scriptscriptstyle H_r}$ for all $H_i\in V_{\scriptscriptstyle H_r}$ with $H_i\neq H$, and so $|J|=1$, as required. \end{Proof} Now, we shall consider the case where $G$ is a $p$-group. In this case $\Omega(G)=\Omega_p(G)$ is an elementary abelian $p$-subgroup of $G$ (see Lemma \ref{minper}). It is well known that the elementary abelian group $\Omega(G)$ equipped with the scalar multiplication defined by $\overline k\circ \alpha=k\alpha$ becomes a vector space over the field $\mathbb Z_p$. Also, $H_i$ is a $1$-dimensional subspace of $\Omega(G)$ for every $H_i\in \Omega_G$, since $H_i=\{\overline{k}a_i\mid \overline{k}\in\mathbb Z_p\}$ for some $a_i\in \Omega(G)$. To study $\Gamma(H_r)$ for the $p$-groups, the following two lemmas are needed. \begin{Lemma}\label{zikongjiantu} \rm Let $W$ be a vector space over a field $\mathbb F$ such that $\dim W\geqslant 2$ and $v\in W\setminus \{0\}$. Define a graph $\Gamma_v=(V_{\scriptscriptstyle \Gamma_v},E_{\scriptscriptstyle \Gamma_v})$, where $V_{\scriptscriptstyle \Gamma_v}=\{L(\alpha)\mid \alpha\in W\setminus\{0\}\}\setminus\{L(v)\}$ and $$ E_{\scriptscriptstyle \Gamma_v} = \{(L(\alpha),L(\beta))\mid L(\alpha)\neq L(\beta),\,\, v\notin L(\alpha,\beta)\}. $$ Here $L(\alpha_1,\ldots,\alpha_k)$ denotes the subspace of $W$ spanned by $\alpha_1,\ldots,\alpha_k$. Then $$\ell_{\scriptscriptstyle\Gamma_v}=\begin{cases} |V_{\scriptscriptstyle \Gamma_v}|, &\dim W=2,\\ 1, &\dim W\geqslant 3, \end{cases}$$ where $\ell_{\scriptscriptstyle\Gamma_v}$ denotes the cardinal number of the set of all connected components of $\Gamma_v$.
\end{Lemma} \begin{Proof} Let $L(\alpha),\, L(\beta)\in V_{\scriptscriptstyle \Gamma_v}$ with $L(\alpha)\neq L(\beta)$. Then $\dim L(\alpha,\beta)=2$. If $\dim W=2$, then $W=L(\alpha,\beta)$, and so $v\in L(\alpha,\beta)$. This implies that $\left(L(\alpha),L(\beta)\right)\notin E_{\scriptscriptstyle \Gamma_v}$ and $\ell_{\scriptscriptstyle\Gamma_v}= |V_{\scriptscriptstyle \Gamma_v}|$. Otherwise, $\dim W\geqslant 3$. Suppose that $\left(L(\alpha),L(\beta)\right)\notin E_{\scriptscriptstyle \Gamma_v}$. Then $v\in L(\alpha, \beta)$. Thus $L(\alpha, \beta,v)=L(\alpha,\beta)\neq W$, since $\dim W\geqslant 3$. We can show that $$(L(\alpha),L(\theta)),\,(L(\theta),L(\beta))\in E_{\scriptscriptstyle \Gamma_v}$$ for any $\theta\in W\setminus L(\alpha, \beta,v)$. In fact, if $(L(\alpha),L(\theta))\notin E_{\scriptscriptstyle \Gamma_v}$, then $v\in L(\alpha, \theta)$, and so $v=k_1\alpha+k_2\theta$ for some $k_1,k_2\in \mathbb F$ with $k_2\neq 0$, since $L(v)\neq L(\alpha)$. This implies that $$\theta=1/k_2(v-k_1\alpha)\in L(\alpha,\beta,v),$$ a contradiction. Thus $(L(\alpha),L(\theta))\in E_{\scriptscriptstyle \Gamma_v}$. Dually, we can prove that $(L(\theta),L(\beta))\in E_{\scriptscriptstyle \Gamma_v}$. This shows that $\ell_{\scriptscriptstyle\Gamma_v}=1$, as required. \end{Proof} The following result is well-known. \begin{Lemma}\label{yiwiezikongjian} \rm Let $W$ be a $m$-dimensional vector space over field $\mathbb Z_p$, where $p$ is a prime number. Then $W$ has $(p^m-1)/(p-1)$ $1$-dimensional subspaces, i.e., $$ \left|\{L(\alpha)\mid \alpha\in W\setminus\{0\}\}\right| = (p^m-1)/(p-1).$$ \end{Lemma} By Lemma $\ref{minper}$, Lemma $\ref{zikongjiantu}$ and Lemma $\ref{yiwiezikongjian}$ we immediately have \begin{Theorem}\label{pm} \rm If $G$ is a $p$-group, then $\Gamma(H_r)$ is empty when $|\Omega(G)|=p^2$, and $\Gamma(H_r)$ is connected when $|\Omega(G)|>p^2$. 
\end{Theorem} As a consequence, by Theorem \ref{|ag|} we have \begin{Corollary} \rm $|\Sigma({\cal K}_1(G))|=|G|$ if $G$ is not a $p$-group for any prime $p$ or $G$ is a $p$-group with $|\Omega(G)|> p^2$ for some prime $p$. \end{Corollary} \section{Groups with isomorphic coset semigroups} In this section we directly confront the question: for which groups $G$ are $G$ and the coset semigroup ${\cal K}_1(G)$ uniquely determined by each other, up to isomorphism? The following shows that a group $G$ embeds into the unit group $\Sigma({\cal K}_1(G))$ of $C({\cal K}_1(G))$ whenever the intersection of all non-trivial subgroups of $G$ is trivial. \begin{Lemma}\label{embedding} \rm Let $G$ be a group and $\{K_\lambda\}_{\lambda\in \Lambda}$ the set of all non-trivial subgroups of $G$. Define a map $\eta$ from $G$ to $\Sigma({\cal K}_1(G))$ by $\eta(a)=\{K_\lambda a\}_{\lambda\in \Lambda}$. Then $\eta$ is a group homomorphism, and $\ker\eta=\bigcap\{K_\lambda\}_{\lambda\in \Lambda}$. \end{Lemma} \begin{Proof} \rm By virtue of Lemma \ref{hakbc} and Lemma \ref{jiaowei1}, $\eta(a)\in \Sigma({\cal K}_1(G))$ for each $a\in G$. That is to say, $\eta$ is well-defined. For any $a,b\in G$, we have $$ \eta(a)\eta(b)=\{K_\lambda a\}_{\lambda\in \Lambda}\{K_\lambda b\}_{\lambda\in \Lambda} =\{K_\lambda a* K_\mu b\}_{\lambda,\,\mu\in\Lambda} =\{\langle K_\lambda,K_\mu^{a^{-1}}\rangle ab\}_{\lambda,\,\mu\in\Lambda} \subseteq \eta(ab).$$ On the other hand, $K_\lambda ab=K_\lambda a*K_\lambda^a b\in\eta(a)\eta(b)$ for any $K_\lambda ab\in \eta(ab)$. This implies that $ \eta(a)\eta(b) \supseteq\eta(ab)$. Thus we have shown that $\eta(a)\eta(b)=\eta(ab)$, and so $\eta$ is a group homomorphism. Also, for any $a\in G$, we have $$a\in \ker\eta\iff \eta(a)=\{K_\lambda\}_{\lambda\in \Lambda}\iff (\forall \,\lambda\in\Lambda)\,\,a\in K_\lambda\iff a\in \textstyle\bigcap\{K_\lambda\}_{\lambda\in \Lambda},$$ since the identity of $\Sigma({\cal K}_1(G))$ is $\{K_\lambda\}_{\lambda\in \Lambda}$.
This shows that $\ker\eta=\bigcap\{K_\lambda\}_{\lambda\in \Lambda}$, as required. \end{Proof} \begin{Lemma}\label{surj} \rm Let $G$ be a periodic group such that $H_iH_j=H_jH_i$ for all $H_i, H_j\in \Omega_G$ and $|\Omega_G|\geq2$. If $\eta$ is as defined in Lemma \ref{embedding}, then $\eta$ is an isomorphism from $G$ to $\Sigma({\cal K}_1(G))$ if and only if $|J|=1$. \end{Lemma} \begin{Proof} \rm Suppose that $|J|=1$. Then we can see from Lemma \ref{tonggou}, Lemma \ref{t-1} and Lemma \ref{graph} that $\eta$ is surjective. Also, Theorem \ref{pq} and Theorem \ref{pm} tell us that $|J|=1$ if and only if $G$ is not a $p$-group for any prime $p$ or a $p$-group with $|\Omega(G)|>p^2$ for some prime $p$. Hence it follows from Lemma \ref{embedding} that $\ker\eta=1$. This shows that $\eta$ is injective, and so $\eta$ is an isomorphism. Conversely, suppose that $\eta$ is an isomorphism. If $|J|>1$, then by Lemma \ref{t-1} we may take $$A=\bigcup\{\Gamma_ja_0\}_{j\in J,\,j\neq i}\cup\Gamma_ia_i\cup \{H_r\}\in {\cal A}(H_r),$$ where $a_i,a_0\in H_r$ and $a_i\neq a_0$. Also, it follows from Lemma \ref{tonggou} that there exists $a\in G$ such that $[A]=\eta(a)$, since $\eta$ is surjective. In particular, we have that $H_ia_i=H_ia$, $H_ja_0=H_ja$ and $H_ra=H_r$ for any $H_i\in\Gamma_i$ and $H_j\in\Gamma_j$, where $j\neq i$. This implies that $$a\in H_r\cap H_ia_i=H_ra_i\cap H_ia_i=(H_r\cap H_i)a_i=\{a_i\},$$ and so $a=a_i$. Dually, we can show that $a=a_0$. That is to say, $a_i=a_0$, a contradiction. This shows that $|J|=1$, as required. \end{Proof} The following tells us that, for periodic groups whose minimal subgroups permute with each other (apart from some special $p$-groups), $G$ and ${\cal K}_1(G)$ are uniquely determined by each other, up to isomorphism. \begin{Theorem}\label{isom1} \rm Let $G$ be a periodic group such that $H_iH_j=H_jH_i$ for all $H_i, H_j\in \Omega_G$ and $|\Omega_G|\geq2$.
If $G$ is not a $p$-group for any prime $p$ or $G$ is a $p$-group with $|\Omega(G)|>p^2$ for some prime $p$, then for any group $\widetilde{G}$, ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ if and only if $G\cong \widetilde{G}$. \end{Theorem} \begin{Proof} \rm It is evident that $G\cong \widetilde{G}$ implies that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. Conversely, suppose that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. Then we have that $\Sigma({\cal K}_1(G))\cong G$ by Theorem \ref{pq}, Theorem \ref{pm} and Lemma \ref{surj}. Also, we can see from Lemma \ref{minmax} and Lemma \ref{perm} that $\widetilde{G}$ is a periodic group whose minimal subgroups permute with each other. Moreover, we have that $\Gamma(H_r)\cong \Gamma(\widetilde{H_r})$, where $\widetilde{H_r}$ is the image of $H_r$ under the isomorphism from ${\cal K}_1(G)$ to $ {\cal K}_1(\widetilde{G})$, since $(H_i,H_j)\in E_{\scriptscriptstyle{H_r}}$ if and only if $(H_i* H_j)\vee H_r$ does not exist in ${\cal K}_1(G)$. In particular, $\Gamma(\widetilde{H_r})$ is connected and so $\Sigma({\cal K}_1(\widetilde{G}))\cong \widetilde{G}$ by Lemma \ref{surj}. Now, ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ implies that $\Sigma({\cal K}_1(G))\cong \Sigma({\cal K}_1(\widetilde{G}))$ by \cite[Corollary of 1.17]{schei2}. We then have that $G\cong \widetilde{G}$, as required. \end{Proof} To answer the above question for the $p$-groups $G$ with $|\Omega(G)| \leq p^2$ (i.e., $|\Omega(G)|=p$ or $|\Omega(G)|=p^2$), we need the following results. \begin{Lemma}\label{abelian} \rm Let $G$ be a group. Then ${\cal K}_1(G)$ is abelian if and only if $G$ is either an abelian group or else the quaternion group $Q_8$. \end{Lemma} \begin{Proof} \rm It is easy to see that ${\cal K}_1(G)$ is abelian if $G$ is an abelian group. Also, ${\cal K}_1(Q_8)$ is abelian.
In fact, noticing that $Q_8$ is a Dedekind group and $Ha\in Z(Q_8/H)$ for any $Ha\in {\cal K}_1(Q_8)$, we have $$Ha*Kb=HKab=HKa*HKb=HKb*HKa=HKba=KHba=Kb*Ha,$$ for any $Ha,\,Kb\in {\cal K}_1(Q_8)$. Conversely, suppose that ${\cal K}_1(G)$ is abelian for some non-abelian group $G$. Then every idempotent of ${\cal K}_1(G)$ is central. Thus, by Lemma \ref{kgjiben}, every subgroup of $G$ is normal, that is, $G$ is a Dedekind group. Then we have $G\cong Q_8\times A$ by \cite[Theorem 5.3.8]{robinson}, where $A$ is an abelian group. If $A\neq 1$, then by Lemma \ref{kgjiben}, $H_{\scriptscriptstyle A}$, the ${\cal H}$-class of ${\cal K}_1(G)$ containing $A$, is equal to $G/A\cong Q_8$, which is a non-abelian subgroup of ${\cal K}_1(G)$, a contradiction. This implies that $A=1$ and so $G\cong Q_8$, as required. \end{Proof} \begin{Theorem}\label{omega1} \rm Let $G$ be a $p$-group with $|\Omega(G)|=p$. Write $\Omega(G)=K$. If ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ for some group $\widetilde{G}$, then $G/K\cong \widetilde{G}/\widetilde{K}$, where $\widetilde{K}$ is the unique minimal subgroup of $\widetilde{G}$. Further, $G\cong \widetilde{G}$ if $G$ is a finite group of composite order. \end{Theorem} \begin{Proof} \rm It is easy to see that $\Omega_G=\{K\}$. Suppose that $\varphi$ is an isomorphism from ${\cal K}_1(G)$ to ${\cal K}_1(\widetilde{G})$. Then we have $\Omega_{\widetilde{G}}=\{ \widetilde{K}\}$, where $\widetilde{K}=\varphi(K)$, since $\varphi$ preserves the natural partial order. Also, it is easy to see that $K$ and $\widetilde{K}$ are respectively the identities of ${\cal K}_1(G)$ and ${\cal K}_1(\widetilde{G})$. This implies that their groups of units $U({\cal K}_1(G))=H^{\scriptscriptstyle {\cal K}_1(G)}_{\scriptscriptstyle K}$ and $U({\cal K}_1(\widetilde G))=H^{\scriptscriptstyle {\cal K}_1(\widetilde G)}_{\scriptscriptstyle \widetilde K}$ are isomorphic.
On the other hand, it follows from Lemma \ref{kgjiben} that $H^{\scriptscriptstyle {\cal K}_1(G)}_{\scriptscriptstyle K}=G/K$ and $H^{\scriptscriptstyle {\cal K}_1( \widetilde G)}_{\scriptscriptstyle \widetilde K}=\widetilde{G}/\widetilde{K}$. This shows that $G/K\cong \widetilde{G}/\widetilde{K}$. Now, assume that $G$ is a finite group of composite order. Then $G>H$, and by Lemma \ref{order}, we have $|\widetilde{G}|=|G|$. Since $|\Omega_G|=|\Omega_{\widetilde{G}}|=1$, $G$ is either a cyclic group of prime power order or else a generalized quaternion group by \cite[5.3.6]{robinson} and by the Sylow Theorem. Also, the same conclusion holds for $\widetilde{G}$. Suppose first that $G$ is a cyclic group of prime power order. Then ${\cal K}_1(G)$ is abelian and so is ${\cal K}_1(\widetilde{G})$. Hence we can see from Lemma \ref{abelian} that $\widetilde{G}\cong G$ if $|G|\neq8$. Suppose that $|G|=8$ and $\widetilde{G}\ncong G$. Then $\widetilde{G}\cong Q_8$ and so ${\cal K}_1(\widetilde{G})$ contains three primitive idempotents, i.e., maximal subgroups of $\widetilde{G}$. However, ${\cal K}_1(G)$ contains exactly one primitive idempotent, a contradiction. Hence $\widetilde{G}\cong G$. Suppose now that $G$ is a generalized quaternion group. If $|G|=8$ then we can see that $\widetilde{G}\cong G$ by the preceding argument. Hence we assume that $|G|>8$. Then ${\cal K}_1(G)$ is non-abelian and so is ${\cal K}_1(\widetilde{G})$. This implies that $\widetilde{G}$ is also a generalized quaternion group. Since $|G|=|\widetilde{G}|$, we get $G\cong \widetilde{G}$, as required. \end{Proof} In the remainder of this section, we concentrate on the case of finite groups. We shall show that for finite abelian groups and metacyclic $p$-groups of composite order (in particular, for finite $p$-groups with $|\Omega(G)|=p^2$, $p>2$), $G$ and ${\cal K}_1(G)$ are uniquely determined by each other, up to isomorphism.
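As a purely illustrative aside (our own addition, not part of the original argument), the subgroup count in Lemma \ref{yiwiezikongjian}, which underlies the distinction between $|\Omega(G)|=p$ and $|\Omega(G)|=p^2$ used above, can be checked by brute force; the function name \texttt{minimal\_subgroup\_count} below is ours:

```python
# Illustrative brute-force check of Lemma "yiwiezikongjian": the elementary
# abelian group (Z_p)^m has exactly (p^m - 1)/(p - 1) subgroups of order p,
# each of the form <v> = {k*v mod p : k in Z_p} for a nonzero vector v.
from itertools import product

def minimal_subgroup_count(p, m):
    """Count the distinct order-p subgroups of (Z_p)^m."""
    spans = set()
    for v in product(range(p), repeat=m):
        if any(v):  # v nonzero
            spans.add(frozenset(tuple(k * x % p for x in v) for k in range(p)))
    return len(spans)

for p, m in [(2, 2), (3, 2), (2, 3), (5, 2)]:
    assert minimal_subgroup_count(p, m) == (p**m - 1) // (p - 1)
```

For instance, $(\mathbb Z_2)^2$ has $3$ minimal subgroups and $(\mathbb Z_2)^3$ has $7$, matching $(p^m-1)/(p-1)$.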
Recall that the exponent of a finite group $G$, denoted by $\exp G$, is the least common multiple of the orders of all elements of $G$. \begin{Lemma}\label{exp} \rm Let $G$ be a finite group of composite order and $K$ a non-trivial subgroup of $G$. If $\varphi$ is an isomorphism from ${\cal K}_1(G)$ to ${\cal K}_1(\widetilde{G})$ for some group $\widetilde{G}$, then $|K|=|\varphi(K)|$ and $\exp K=\exp \varphi(K)$. \end{Lemma} \begin{Proof} \rm Firstly, we can see from Lemma \ref{order} that $|G|=|\widetilde{G}|=|\varphi(G)|$, since $G$ is the zero of ${\cal K}_1(G)$. Also, we have that $|K|=|\varphi(K)|$, since $|K|=|G|/|G:K|=|G|/|R_{\scriptscriptstyle K}|$. If $g$ is a non-trivial element of $K$, then $\langle g\rangle$ is a cyclic subgroup of $G$. It follows from Lemma \ref{cyclic} that $\varphi(\langle g\rangle)$ is cyclic. Suppose that $\varphi(\langle g\rangle)=\langle \widetilde{g}\rangle$. Then we have $|\widetilde{g}|=|g|$, since $|\langle \widetilde{g}\rangle|=|\langle g\rangle|$. It follows that $\exp K=\exp \varphi(K)$, as required. \end{Proof} \begin{Theorem}\label{isoab} \rm Let $G$ be a finite abelian group of composite order. Then for any group $\widetilde{G}$, ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ if and only if $G\cong \widetilde{G}$. \end{Theorem} \begin{Proof} \rm It is evident that $G\cong \widetilde{G}$ implies that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. Conversely, suppose that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. By Theorem \ref{isom1}, Theorem \ref{omega1} and \cite[4.2.6]{robinson}, we only need to consider the case where $G$ is a $p$-group and $|\Omega(G)|=p^2$, and so we may write $G=\langle a\rangle\times \langle b\rangle$, where $|a|=p^m$, $|b|=p^n$ and $m\geq n\geq 1$. We can see from Lemma \ref{exp} that $|\widetilde{G}|=|G|$ and $\exp \widetilde{G}=\exp G=p^m$. Hence $\widetilde{G}$ is a $p$-group. Let $\widetilde{g}\in \widetilde{G}$ be such that $|\widetilde{g}|=p^m$.
Then we may write $\widetilde{G}=\langle \widetilde{g}\rangle\times \widetilde{A}$ for some subgroup $\widetilde{A}$ of $\widetilde{G}$ by \cite[4.2.7]{robinson}. By Lemma \ref{cyclic}, we have that $$\Omega(G)=\langle H\leq G: |H|=p\rangle=\textstyle\bigwedge\{H\in E({\cal K}_1(G)): |E(H^\upharpoonright)|=1\}.$$ This implies that the image of $\Omega(G)$ under the isomorphism from ${\cal K}_1(G)$ to $ {\cal K}_1(\widetilde{G})$ is precisely $\Omega(\widetilde{G})$ and so $|\Omega(\widetilde{G})|=|\Omega(G)|=p^2$ by Lemma \ref{order} and Lemma \ref{filter}. It follows that $\widetilde{A}$ is cyclic and $|\widetilde{A}|=|\widetilde{G}|/|\langle \widetilde{g}\rangle|=|G|/p^m=p^n$. Thus $G\cong \widetilde{G}$, as required. \end{Proof} \begin{Lemma}\label{centre} \rm Let $G$ be a finite group and $\{M_1,M_2,\ldots,M_n\}$ the set of all maximal abelian subgroups of $G$. Then $Z(G)=\bigcap_{i=1}^nM_i$. \end{Lemma} \begin{Proof} \rm It is easy to see that $Z(G)\leq M_i$ for each $i\in\underline{n}$, since $Z(G)M_i$ is an abelian subgroup of $G$ containing $M_i$, and so $Z(G)M_i=M_i$ by the maximality of $M_i$. Thus $Z(G)\leq \bigcap_{i=1}^nM_i$. On the other hand, for each $y\in G$, there exists $i\in\underline n$ such that $\langle y\rangle\leq M_i$, since $\langle y\rangle$ is an abelian subgroup of $G$. Thus for each $x\in\bigcap_{i=1}^nM_i$, $xy=yx$, since $x,y\in M_i$. This shows that $Z(G)\geqslant \bigcap_{i=1}^nM_i$. We have proved that $Z(G)=\bigcap_{i=1}^nM_i$, as required. \end{Proof} \begin{Lemma}\label{isoc} \rm Let $G$ be a finite group of composite order. Then ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ implies that $Z(G)\cong Z(\widetilde{G})$ for any group $\widetilde{G}$. \end{Lemma} \begin{Proof} Let $\varphi: {\cal K}_1(G)\to {\cal K}_1(\widetilde{G})$ be a semigroup isomorphism. We first show that $H$ is an abelian subgroup of $G$ if and only if $\varphi(H)$ is an abelian subgroup of $\widetilde{G}$. Suppose that $H$ is an abelian subgroup of $G$.
If $|H|$ is a prime, then $|E(H^\upharpoonright)|=1$, and so $|E(\varphi(H)^\upharpoonright)|=1$. This implies that $|\varphi(H)|$ is also a prime and so $\varphi(H)$ is an abelian subgroup of $\widetilde{G}$. Otherwise, suppose that $|H|$ is composite. Then $${\cal K}_1(H)=H^\upharpoonright\cong \varphi(H)^\upharpoonright={\cal K}_1(\varphi(H)),$$ since $\varphi$ preserves the natural partial order. It follows from Theorem \ref{isoab} that $\varphi(H)\cong H$, and so $\varphi(H)$ is an abelian subgroup of $\widetilde{G}$. Dually, we can show that $H$ is an abelian subgroup of $G$ if $\varphi(H)$ is an abelian subgroup of $\widetilde G$. Let $M=\{M_1,M_2,\ldots, M_s\}$ be the set of all maximal abelian subgroups of $G$. Then $$\widetilde M=\{\varphi(M_1),\varphi(M_2),\ldots, \varphi(M_s)\}$$ is the set of all maximal abelian subgroups of $\widetilde G$, since $\varphi$ preserves the natural partial order. If $\bigvee M$ exists, then we can see from Lemma \ref{centre} that $Z(G)=\bigvee M$, and so $$\varphi(Z(G))=\varphi(\textstyle\bigvee M)=\textstyle\bigvee\widetilde M=Z(\widetilde G).$$ This implies that $${\cal K}_1(Z(G))=Z(G)^\upharpoonright\cong \varphi(Z(G))^\upharpoonright={\cal K}_1(\varphi(Z(G)))={\cal K}_1(Z(\widetilde G)).$$ Also, we can see from Lemma \ref{exp} that $|Z(G)|=|Z(\widetilde G)|$. Thus, it follows from Theorem \ref{isoab} that $Z(\widetilde G)=\varphi(Z(G))\cong Z(G)$. Finally, if $\bigvee M$ does not exist, then neither does $\bigvee \widetilde M$. In this case, both $Z(G)$ and $Z(\widetilde G)$ are trivial, and so $Z(G)\cong Z(\widetilde G)$, as required. \end{Proof} A group $G$ is called metacyclic if $G/N$ is cyclic for some normal cyclic subgroup $N$ of $G$. Based on King's characterization of metacyclic groups in \cite{king}, we can show that \begin{Theorem}\label{metacy} \rm Let $G$ be a finite metacyclic $p$-group of composite order.
Then for any group $\widetilde{G}$, ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ if and only if $G\cong \widetilde{G}$. \end{Theorem} \begin{Proof} \rm It is evident that $G\cong \widetilde{G}$ implies that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. Conversely, suppose that ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$. We can see from Lemma \ref{kgjiben}, Lemma \ref{cyclic} and Lemma \ref{isoc} that $\widetilde{G}$ is metacyclic and $Z(G)\cong Z(\widetilde{G})$. Also, we have that $G/G'\cong \widetilde{G}/\widetilde{G}'$, since $G/G'$ is the greatest abelian factor group of $G$, i.e., the largest abelian ${\cal H}$-class of ${\cal K}_1(G)$ which contains a central idempotent. Similarly, $G$ and $\widetilde{G}$ have isomorphic greatest dihedral factor groups. Further, we can see from Lemma \ref{minmax} and Lemma \ref{exp} that the minima of the orders of the non-Frattini elements of $G$ and $\widetilde{G}$ are equal. Thus, $G\cong \widetilde{G}$ by \cite[Theorem 5.4]{king}. \end{Proof} Let $G$ be a finite $p$-group with $|\Omega(G)|=p^2$, $p>2$. Noticing that $\Omega(G)$ contains all minimal subgroups of $G$, we have that $\Omega(G)$ is the unique subgroup of type $(p,p)$ in $G$. Then by \cite[Introduction, Lemma 5]{berk}, there exists a minimal subgroup $Z$ of $G$ such that $C_G(Z)=G$. It follows from \cite[Proposition 13.26]{berk} that $G$ is metacyclic. As a consequence of Theorem \ref{metacy}, we have \begin{Corollary} \rm Let $G$ be a finite $p$-group with $|\Omega(G)|=p^2$, $p>2$. Then for any group $\widetilde{G}$, ${\cal K}_1(G)\cong {\cal K}_1(\widetilde{G})$ if and only if $G\cong \widetilde{G}$. \end{Corollary} \noindent{\bf Remark} \rm We have shown that $G$, ${\cal K}_1(G)$ and $C({\cal K}_1(G))$ are uniquely determined by each other, up to isomorphism, if $G$ is one of the groups appearing in Theorem \ref{isom1}, Theorem \ref{isoab} and Theorem \ref{metacy}.
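As an illustrative computational footnote (our own addition, with all helper names hypothetical), Lemma \ref{centre} can be checked directly on a small concrete group such as $S_3$, whose centre and maximal abelian subgroups are easy to enumerate by brute force:

```python
# Illustrative check of the lemma Z(G) = intersection of all maximal abelian
# subgroups, for G = S_3 with elements encoded as permutation tuples.
from itertools import combinations, permutations

elems = list(permutations(range(3)))          # the six elements of S_3

def mul(a, b):                                # composition: (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def closed(S):    # a nonempty subset of a finite group closed under
    return all(mul(a, b) in S for a in S for b in S)   # products is a subgroup

def abelian(S):
    return all(mul(a, b) == mul(b, a) for a in S for b in S)

abelian_subgroups = [frozenset(c)
                     for r in range(1, len(elems) + 1)
                     for c in combinations(elems, r)
                     if closed(frozenset(c)) and abelian(frozenset(c))]
maximal = [S for S in abelian_subgroups
           if not any(S < T for T in abelian_subgroups)]
centre = frozenset(g for g in elems
                   if all(mul(g, h) == mul(h, g) for h in elems))
assert centre == frozenset.intersection(*maximal)
```

For $S_3$ the maximal abelian subgroups are the three subgroups generated by a transposition together with $A_3$, and their intersection is the trivial subgroup, which is indeed $Z(S_3)$.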
\begin{thebibliography}{99} \bibitem{arau} Ara\'{u}jo, J., Kinyon, M., Konieczny, J.: Conjugacy in inverse semigroups. \emph{J. Algebra}, $\bf533$, 142-173 (2019) \bibitem{berk} Berkovich, Y.: Groups of Prime Power Order, Volume 1, Walter de Gruyter, Berlin, 2008 \bibitem{cast} de Castro, G.: Coverages on inverse semigroups. \emph{Semigroup Forum}, $\bf102(2)$, 375-396 (2021) \bibitem{james} East, J.: Factorizable inverse monoids of cosets of subgroups of a group. \emph{Comm. Algebra}, $\bf 34$, 2659-2665 (2006) \bibitem{james2} East, J.: Embeddings in coset monoids. \emph{J. Aust. Math. Soc.}, $\bf85$, 75-80 (2008) \bibitem{king} King, B. W.: Presentations of metacyclic groups. \emph{Bull. Aust. Math. Soc.}, $\bf 8$, 103-131 (1973) \bibitem{lawson1} Lawson, M. V.: Almost factorisable inverse semigroups. \emph{Glasgow Math. J.}, $\bf 36$, 97-111 (1994) \bibitem{lawson} Lawson, M. V.: Inverse semigroups: the theory of partial symmetries, World Scientific Publishing Co., Inc., NJ, 1998 \bibitem{lawson2} Lawson, M. V., Margolis, S. W., Steinberg, B.: Expansions of inverse semigroups. \emph{J. Aust. Math. Soc.}, $\bf80(2)$, 205-228 (2006) \bibitem{lawson3} Lawson, M. V., Lenz, D. H.: Pseudogroups and their \'{e}tale groupoids. \emph{Adv. Math.}, $\bf244$, 117-170 (2013) \bibitem{mcali5} McAlister, D. B.: Some covering and embedding theorems for inverse semigroups. \emph{J. Austral. Math. Soc.}, $\bf 22A$, 188-211 (1976) \bibitem{mcali2} McAlister, D. B.: Embedding inverse semigroups in coset semigroups. \emph{Semigroup Forum}, $\bf 20$, 255-267 (1980) \bibitem{petrich} Petrich, M.: Inverse semigroups, John Wiley and Sons, New York, 1984 \bibitem{robinson} Robinson, D.J.S.: A Course in the Theory of Groups, Springer-Verlag, Berlin, 1996 \bibitem{schein1966} Schein, B.M.: Semigroups of strong subsets. \emph{Volskii Matem.
Sbornik, Kuibysev}, $\bf 4$, 180-186 (1966) (Russian) \bibitem{schei2} Schein, B.M.: Completions, translational hulls and ideal extensions of inverse semigroups. \emph{Czechoslovak Math. J.}, {\bf 23}, 575-610 (1973) \bibitem{schei3} Schein, B.M.: Injectives in certain classes of semigroups. \emph{Semigroup Forum}, {\bf9}, 159-171 (1974) \bibitem{schei4} Schein, B.M.: Semigroups of cosets of semigroups: variations on a Dubreil theme. \emph{Collect. Math.}, {\bf46(1)}, 171-182 (1995) \bibitem{schmi} Schmidt, R.: Subgroup Lattices of Groups, Walter de Gruyter, Berlin, 1994 \bibitem{shoji} Shoji, K.: Completions and injective hulls of $E$-reflexive semigroups. \emph{Semigroup Forum}, {\bf36}, 55-68 (1987) \bibitem{shum} Yong, H., Wei, T., Shum, K. P.: $\mathscr{P}$-Condense and $\mathscr{CP}$-Condensing Operators on Semilattices, Join-Complete Lattices and Some Inverse Semigroups, \emph{Comm. Math. Stat.}, $\bf4(4)$, 1-13 (2016) \end{thebibliography} \end{document}
2412.19536v1
http://arxiv.org/abs/2412.19536v1
Potential Vector Fields in $\mathbb R^3$ and $\alpha$-Meridional Mappings of the Second Kind $(\alpha\in \mathbb R)$
\documentclass[sn-mathphys,Numbered]{sn-jnl} \usepackage{graphicx}\usepackage{multirow}\usepackage{amsmath,amssymb,amsfonts}\usepackage{amsthm}\usepackage{mathrsfs}\usepackage[title]{appendix}\usepackage{xcolor}\usepackage{textcomp}\usepackage{manyfoot}\usepackage{booktabs}\usepackage{algorithm}\usepackage{algorithmicx}\usepackage{algpseudocode}\usepackage{listings} \theoremstyle{thmstyleone}\newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition}\newtheorem{lemma}[theorem]{Lemma}\newtheorem{corollary}[theorem]{Corollary} \theoremstyle{thmstyletwo}\newtheorem{example}{Example}\newtheorem{remark}{Remark} \theoremstyle{thmstylethree}\newtheorem{definition}{Definition} \raggedbottom \begin{document} \title[Potential Vector Fields in $\mathbb R^3$] {Potential Vector Fields in $\mathbb R^3$ and $\alpha$-Meridional Mappings of the Second Kind $(\alpha \in \mathbb R)$} \author*{\fnm{Dmitry} \sur{Bryukhov}} \email{[email protected] https://orcid.org/0000-0002-8977-3282} \affil*{ \orgname{Independent scholar}, \orgaddress{\street{Mira Avenue 19, apt. 225}, \city{Fryazino}, \postcode{141190}, \state{Moscow region}, \country{Russian Federation}}} \abstract{This paper extends the approach developed in the author's recent paper on analytic models of potential fields in inhomogeneous media. New three-dimensional analytic models of potential vector fields in some layered media are constructed. Properties of various analytic models in Cartesian and cylindrical coordinates in $\mathbb R^3$ are compared. The original properties of the Jacobian matrix $\mathbf{J}(\vec V)$ of potential meridional fields $\vec V$ in cylindrically layered media, where $\phi( \rho) = \rho^{-\alpha}$ $(\alpha \in \mathbb R)$, lead to the concept of \emph{$\alpha$-meridional mappings of the first and second kind}. The concept of \emph{$\alpha$-Meridional functions of the first and second kind} naturally arises in this way.
When $\alpha =1$, the special concept of \emph{Radially holomorphic functions in $\mathbb R^3$}, introduced by G\"{u}rlebeck, Habetha and Spr\"{o}ssig in 2008, is developed in more detail. Certain key properties of the radially holomorphic functions $G$ and functions reversed with respect to $G$ are first characterized. Surprising properties of the radially holomorphic potentials represented by superposition of the radially holomorphic exponential function $e^{\breve{\beta} x}$ $(\breve{\beta} \in \mathbb R)$ and function reversed with respect to $e^{\breve{\beta} x}$ are demonstrated explicitly. The basic properties of the radially holomorphic potential represented by the radially holomorphic extension of the Joukowski transformation in $\mathbb R^3$ are studied. } \keywords{Potential meridional fields, Set of zeros, $\alpha$-Meridional mappings, Elliptic equations with singular coefficients, Radially holomorphic functions} \pacs[MSC Classification]{30G35, 30C65, 35J15, 35Q05, 37N10} \maketitle \section{Introduction} \label{sec:intro} A rich variety of three-dimensional analytic and numerical models of potential vector fields $\vec V = \vec V(\vec x) $ in mathematical physics and continuum mechanics (see, e.g., \cite{BornWolf:2003,BorisTar:1979,Carslaw,KhmKravOv:2010,Reddy:2018,Br:Hefei2020}) may be investigated by means of the following first-order system with a variable $C^1$-coefficient $\phi= \phi(x_0,x_1,x_2)>0$: \begin{gather} \begin{cases} \mathrm{div} \, (\phi \ \vec V) =0, \\[1ex] \mathrm{curl}{\ \vec V} =0, \end{cases} \label{potential-system-3} \end{gather} where $\ \vec V = (V_0, V_1, V_2)$, $\ \vec x = (x_0, x_1, x_2)$. The Euclidean space $\mathbb R^3=\{(x_0, x_1,x_2)\}$ in this setting involves the longitudinal variable $x_0$, the cylindrical radial variable $\rho = \sqrt{x_1^2+x_2^2}$ and the azimuthal angle $\ \theta = \arccos \frac{x_1}{\rho}$. 
The scalar potential $h = h(x_0,x_1,x_2)$ in simply connected open domains $\Lambda \subset \mathbb R^3$, where $\vec V = \mathrm{grad} \ h$, allows us to reduce every $C^1$-solution of the system~\eqref{potential-system-3} to a $C^2$-solution of the continuity equation \begin{gather} \mathrm{div} \, ( \phi \ \mathrm{grad}{\ h}) = 0. \label{Liouville-3} \end{gather} In particular, the coefficient $\phi= \phi(x_0,x_1,x_2)$ and the scalar potential $h= h(x_0,x_1,x_2)$ in the context of the theory of \emph{Conduction of heat} may be interpreted as the thermal conductivity $\kappa = \kappa(x_0, x_1,x_2)$ and the steady state temperature $T = T(x_0,x_1,x_2)$ (see, e.g., \cite {Carslaw,Br:Hefei2020}), respectively. The potential vector field $\vec V$, satisfying the relations $\vec V = \frac {d{\vec x}}{dt} = \mathrm{grad} \ h$, in continuum mechanics in the case of a steady flow is interpreted as the potential velocity field, and the scalar potential $h$ as the velocity potential (see, e.g., \cite{KochinKibelRoze:1964,Ilyushin:1990,Sedov:1994,Acheson,WhiteXue:2021,AnderCadou:2024}), respectively. The geometric properties of the Jacobian matrix $\mathbf{J}(\vec V)$ in three dimensions, where $ \mathbf{J_{l m}}(\vec V) = \frac{\partial{V_l}}{\partial{x_m}}$ $(l, m = 0,1,2)$, are difficult to treat in detail, in contrast to the properties of the Jacobian matrix in two dimensions within the framework of the concept of \emph{Conformal mappings of the second kind} (see, e.g., \cite{KochinKibelRoze:1964,LavSh:1987,Acheson,WhiteXue:2021,AnderCadou:2024}). It should be noted that the system~\eqref{potential-system-3} under the condition $\phi(\rho) = \rho^{-\alpha}$ $(\rho >0)$ in the expanded form is written as \begin{gather} \begin{cases} \mathrm{div}\ { \vec V} - \alpha \left( \frac{x_1}{\rho^2} V_1 + \frac{x_2}{\rho^2} V_2 \right) =0, \\[1ex] \mathrm{curl}{\ \vec V} =0.
\end{cases} \label{alpha-axial-hyperbolic-system-3} \end{gather} The corresponding continuity equation~\eqref{Liouville-3} is written as \begin{gather} (x_1^2+x_2^2)\Delta{h} - \alpha \left( x_1\frac{\partial{h}}{\partial{x_1}} + x_2\frac{\partial{h}}{\partial{x_2}}\right) =0. \label{eq-axial-hyperbolic-3-alpha} \end{gather} The general class of $C^1$-solutions of the system~\eqref{alpha-axial-hyperbolic-system-3} in the context of \emph{Non-Euclidean modifications of quaternionic analysis in $\mathbb R^3$} (see, e.g., \cite{Leut:2000,LeZe:CMFT2004,Br:Hefei2020}) is equivalently represented as the general class of $C^1$-solutions of a family of axially symmetric generalizations of the Cauchy-Riemann system in $\mathbb R^3$ \begin{gather} \begin{cases} (x_1^2+x_2^2) \left( \frac{\partial{u_0}}{\partial{x_0}}- \frac{\partial{u_1}}{\partial{x_1}}-\frac{\partial{u_2}}{\partial{x_2}} \right) + \alpha (x_1u_1+x_2u_2)=0, \\[1ex] \frac{\partial{u_0}}{\partial{x_1}}=-\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}}=-\frac{\partial{u_2}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}}=\ \ \frac{\partial{u_2}}{\partial{x_1}}, \end{cases} \label{A_3^alpha-system} \end{gather} where $(u_0, u_1, u_2)=(V_0, -V_1, -V_2)$. New three-dimensional analytic models of potential vector fields $\vec V$ in cylindrically layered media, where $\phi( \rho) = \rho^{-\alpha}$ $(\alpha \in \mathbb R)$, were constructed by the author in 2021 \cite{Br:Hefei2020} using exact solutions of the system~\eqref{alpha-axial-hyperbolic-system-3} and the system~\eqref{A_3^alpha-system}. Potential meridional fields are provided by the condition $ \frac{\partial{h}}{\partial{\theta}} = 0$ (see, e.g., \cite{KhmKravOv:2010,Br:Hefei2020}). Potential transverse fields are provided by the condition $\frac{\partial{h}}{\partial{x_0}} = 0$, respectively.
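As a sanity check (our own illustration, not part of the original text), one can verify symbolically with SymPy that substituting $\phi(\rho)=\rho^{-\alpha}$ into the continuity equation~\eqref{Liouville-3} and clearing the common factor $\rho^{-\alpha-2}$ yields exactly equation~\eqref{eq-axial-hyperbolic-3-alpha}:

```python
# Symbolic check: div(rho^{-alpha} grad h) * rho^{alpha+2} equals
# (x1^2 + x2^2)*Laplacian(h) - alpha*(x1*h_{x1} + x2*h_{x2}),
# i.e. the expanded continuity equation for phi(rho) = rho^{-alpha}.
import sympy as sp

x0, x1, x2, alpha = sp.symbols('x0 x1 x2 alpha', real=True)
rho2 = x1**2 + x2**2                     # rho^2
phi = rho2**(-alpha/2)                   # phi = rho^{-alpha}
h = sp.Function('h')(x0, x1, x2)

div_phi_grad_h = sum(sp.diff(phi * sp.diff(h, v), v) for v in (x0, x1, x2))
lhs = sp.expand(div_phi_grad_h * rho2**(alpha/2 + 1))
target = rho2 * (sp.diff(h, x0, 2) + sp.diff(h, x1, 2) + sp.diff(h, x2, 2)) \
         - alpha * (x1 * sp.diff(h, x1) + x2 * sp.diff(h, x2))
residual = sp.simplify(lhs - sp.expand(target))
assert residual == 0
```

The factor $\rho^{\alpha+2}$ is nonvanishing for $\rho>0$, so the two equations have the same $C^2$-solutions there.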
The original properties of the Jacobian matrix of a wide range of potential meridional fields in cylindrically layered media, where $\phi( \rho) = \rho^{-\alpha}$, $\alpha \ge 0$, were established in 2021 \cite{Br:Hefei2020} using cylindrical coordinates in $\mathbb R^3$. The main goal of this paper is to develop new applications of the concept of $\alpha$-meridional mappings of the second kind in the context of the theory of \emph{Potential meridional velocity fields $\vec V$} in some special layered media. The paper is organized as follows. In Section 2, the basic concepts of \emph{Reduced quaternion-valued functions} are characterized in the first subsection. The basic concepts of \emph{Potential vector fields in $\mathbb R^3$} are characterized in the second subsection. The basic concepts of \emph{Autonomous systems and gradient systems} are characterized in the third subsection. In Section 3, new three-dimensional analytic models of potential velocity fields $\vec V$ in special inhomogeneous isotropic media are constructed. Boundary value problems for the continuity equation represented by an elliptic equation with two singular coefficients in $\mathbb R^3$ are discussed. In Section 4, the basic properties of analytic models of potential meridional velocity fields in cylindrically layered media with the mass density $\phi( \rho) = \rho^{-\alpha}$, where $\alpha \ge 0$, are studied. Applied properties of $\alpha$-meridional mappings of the second kind are viewed in the context of \emph{Stability theory of gradient systems} in $\mathbb R^3=\{(x_0, x_1,x_2)\}$. In Section 5, the specifics of $1$-meridional mappings of the second kind are considered in the context of \emph{Generalized axially symmetric potential theory (GASPT)}.
New tools of the radially holomorphic potential in $\mathbb R^3$ allow us to extend analytic and geometric tools of the complex potential within potential meridional velocity fields in cylindrically layered media with the mass density $\phi( \rho) = \rho^{-1}$. In Section 6, we conclude the paper by describing future work in the context of \emph{Non-Euclidean modifications of quaternionic analysis in $\mathbb R^4$}. \section{Preliminaries} \label{sec2} \subsection{Reduced Quaternion-Valued Functions: Basic Concepts} \label{subsec21} The real algebra of quaternions $\mathbb H$ is a four-dimensional skew algebra over the real field generated by the real unity $1$. The three imaginary unities $i, j,$ and $k$ satisfy the multiplication rules \begin{gather*} i^2 = j^2 = k^2 = ijk = -1, \quad ij = -ji = k. \end{gather*} The independent quaternionic variable is defined as $$x = x_0 + ix_1 + jx_2 + kx_3.$$ The quaternion conjugation of $x$ is defined by the following anti-automorphism: $$ x \mapsto \overline{x} := x_0 - ix_1 - jx_2 - kx_3.$$ If $\rho = \sqrt {x_1^2+x_2^2+x_3^2} > 0$, then $x= x_0 + I \rho$, where $ I = \frac{i x_1+ j x_2+ k x_3 }{\rho}$, $ I^2=-1.$ The independent quaternionic variable may be interpreted as the vector \\ $\vec x = (x_0, x_1, x_2, x_3)$ in $\mathbb R^4$, where we deal with the Euclidean norm $$ \| x \|^2 := x \overline{x} = x_0^2 + x_1^2 + x_2^2 + x_3^2 := r^2.
$$ If $x_3 > 0$, the independent quaternionic variable in cylindrical coordinates in $\mathbb{R}^4$ is described as $x = x_0 + \rho (i\cos{\theta} + j \sin{\theta}\cos{\psi} + k\sin{\theta}\sin{\psi}),$ where $x_1 = \rho \cos{\theta}, \quad x_2 = \rho \sin{\theta}\cos{\psi}$, $ \quad x_3 = \rho \sin{\theta}\sin{\psi},$ $ \varphi= \arccos \frac{x_0}{r} \ (0 < \varphi < \pi)$, $\quad \theta = \arccos \frac{x_1}{\rho} \ (0 \leq \theta \leq 2\pi),$ $\psi = \mathrm{arccot} \frac{x_2}{x_3} \ (0 < \psi < \pi).$ The dependent quaternionic variable is defined as $$ u = u_0 + iu_1 + ju_2 + ku_3 \sim (u_0, u_1, u_2, u_3). $$ The quaternion conjugation of $u$ is defined by the following anti-automorphism: $$ u \mapsto \overline{u} := u_0 - iu_1 - ju_2 - ku_3. $$ If $x_3 = 0$, then we deal with the independent reduced quaternionic variable $x = x_0 + ix_1 + jx_2.$ The independent reduced quaternionic variable may be interpreted as the vector $\vec x = (x_0, x_1, x_2)$ in $\mathbb R^3$. If $\rho > 0$, the independent reduced quaternionic variable in cylindrical coordinates in $\mathbb{R}^3$ is described as $x = x_0 + \rho (i\cos{\theta} + j \sin{\theta})$, where $\varphi= \arccos \frac{x_0}{r} = \mathrm{arccot}\frac{x_0}{\rho} \ (0 < \varphi < \pi), \quad \theta = \arccos \frac{x_1}{\rho} \ (0 \leq \theta \leq 2\pi).$ The dependent reduced quaternionic variable is defined as $$ u = u_0 + iu_1 + ju_2 \sim (u_0, u_1, u_2). $$ \begin{definition} Let $\Omega \subset \mathbb R^3$ be an open set. Every continuously differentiable mapping $u= u_0 + iu_1 + ju_2: \Omega \rightarrow \mathbb{R}^3$ is called the reduced quaternion-valued $C^1$-function in $\Omega$. \end{definition} Analytic models of three-dimensional harmonic potential fields $\vec V = \vec V(x_0,x_1,x_2)$ satisfy the Riesz system in $\mathbb R^3$ \begin{gather*} \begin{cases} \mathrm{div}\ { \vec V} =0, \\[1ex] \mathrm{curl}{\ \vec V} =0.
\end{cases} \end{gather*} The general class of exact solutions of the Riesz system in $\mathbb R^3$ in the context of \emph{Quaternionic analysis in $\mathbb R^3$} (see, e.g., \cite{Leut:2000,BraDel:2003,Del:2007}) is equivalently represented as the general class of analytic solutions of the system \begin{gather*} (R) \begin{cases} \frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_1}}{\partial{x_1}} - \frac{\partial{u_2}}{\partial{x_2}} = 0, \\[1ex] \frac{\partial{u_0}}{\partial{x_1}} = -\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}} = -\frac{\partial{u_2}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}} = \ \ \frac{\partial{u_2}}{\partial{x_1}}, \end{cases} \end{gather*} where $(u_0, u_1, u_2) := (V_0, -V_1, -V_2)$. Exact solutions of the system $(R)$ are referred to as reduced quaternion-valued monogenic functions $u = u_0 + iu_1 + ju_2$ with harmonic components $u_l = u_l(x_0,x_1,x_2)$ $(l = 0,1,2)$. Unfortunately, the set of reduced quaternion-valued monogenic functions does not contain the reduced quaternionic power functions $u = u_0 + iu_1 + ju_2 = (x_0 + ix_1 + jx_2)^n$, $n \in \mathbb{Z}$ (see, e.g., \cite{Leut:CV20,Leut:2000}). A multifaceted analytic extension of the concept of power series with real and complex coefficients has been developed by Leutwiler and Eriksson-Bique since 1992 in the context of \emph{Modified quaternionic analysis in $\mathbb R^3$} (see, e.g., \cite{Leut:CV17,Leut:CV20,Leut:Rud96,ErLe:1998}). An important concept of radially holomorphic functions was introduced by G\"{u}rlebeck, Habetha and Spr\"{o}ssig in 2008 in the context of the theory of \emph{Holomorphic functions in $n$-dimensional space} \cite{GuHaSp:2008}.
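The failure of the reduced quaternionic power functions to solve the system $(R)$ can be checked directly. The following sketch (illustrative Python, not part of the paper; the sample point is our own choice) implements the Hamilton product, verifies the multiplication rules above, and evaluates the first equation of $(R)$ for $u = (x_0 + ix_1 + jx_2)^2$ by central differences.

```python
# Illustrative sketch (not part of the paper): Hamilton product and a
# finite-difference check that u = (x0 + i x1 + j x2)^2 violates the
# first equation of the Riesz system (R).

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)             # ijk = -1
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)  # ij = -ji = k

def u(x0, x1, x2):
    """Components (u0, u1, u2) of (x0 + i x1 + j x2)^2; the k-part vanishes."""
    return qmul((x0, x1, x2, 0), (x0, x1, x2, 0))[:3]

# First equation of (R) at a sample point: du0/dx0 - du1/dx1 - du2/dx2
x0, x1, x2, eps = 1.0, 2.0, 3.0, 1e-6
d = lambda comp, e: (u(x0 + eps*e[0], x1 + eps*e[1], x2 + eps*e[2])[comp]
                     - u(x0 - eps*e[0], x1 - eps*e[1], x2 - eps*e[2])[comp]) / (2*eps)
residual = d(0, (1, 0, 0)) - d(1, (0, 1, 0)) - d(2, (0, 0, 1))
print(abs(residual))  # approximately 2*x0 = 2, not 0: x^2 is not monogenic
```

The residual equals $-2x_0$ here, in agreement with $u_0 = x_0^2 - x_1^2 - x_2^2$, $u_1 = 2x_0x_1$, $u_2 = 2x_0x_2$.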
\subsection{Potential Vector Fields in $\mathbb R^3$ and the Scalar Potentials: Basic Concepts} \label{subsec22} Numerous mathematical problems of two-dimensional analytic models of potential fields $\vec V = \vec V(x,y)$ in homogeneous media have been studied by means of the complex potential. In accordance with the theory of holomorphic functions of a complex variable, where $f = f(z) = u + iv$, $z = x + iy$ \cite{LavSh:1987,Br:Hefei2020}, analytic models of potential velocity fields $\vec V$ in continuum mechanics are characterized by the principal invariants \begin{gather*} I_{\mathbf{J}(\vec V)} = \mathrm{tr} \mathbf{J}(\vec V) = 0, \quad II_{\mathbf{J}(\vec V)} = \det \mathbf{J}(\vec V) = -|f'(z)|^2 \leq 0. \end{gather*} The general class of $C^1$-solutions of the system~\eqref{potential-system-3} was equivalently represented as the general class of $C^1$-solutions of the system \begin{gather} \begin{cases} \phi \left( \frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_1}}{\partial{x_1}} - \frac{\partial{u_2}}{\partial{x_2}}\right) + \left(\frac{\partial{\phi}}{\partial{x_0}}u_0 - \frac{\partial{\phi}}{\partial{x_1}}u_1 - \frac{\partial{\phi}}{\partial{x_2}}u_2\right) = 0,\\[1ex] \frac{\partial{u_0}}{\partial{x_1}} = -\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}} = -\frac{\partial{u_2}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}} = \frac{\partial{u_2}}{\partial{x_1}}, \end{cases} \label{Bryukhov-Kaehler-3} \end{gather} where $(u_0, u_1, u_2) = (V_0, -V_1, -V_2)$, in 2021 \cite{Br:Hefei2020}. The system~\eqref{Bryukhov-Kaehler-3} is characterized as a generalized non-Euclidean modification of the system $(R)$ with respect to the conformal metric \begin{gather} ds^2 = \phi^2 (d{x_0}^2 + d{x_1}^2 + d{x_2}^2).
\label{Riemannian conformal metric} \end{gather} The system~\eqref{A_3^alpha-system} under the condition $\alpha>0$ is characterized as an $\alpha$-axial-hyperbolic non-Euclidean modification of the system $(R)$ with respect to the conformal metric~\eqref{Riemannian conformal metric} defined outside the axis $x_0$ by the formula \begin{gather*} ds^2 = \frac{d{x_0}^2 + d{x_1}^2 + d{x_2}^2}{\rho^{2\alpha}}. \end{gather*} \begin{definition} Every exact solution of eqn~\eqref{eq-axial-hyperbolic-3-alpha} under the condition $\alpha>0$ in a simply connected open domain $\Lambda \subset \mathbb R^3$ $(\rho > 0)$ is called an $\alpha$-axial-hyperbolic harmonic potential in $\Lambda$. \end{definition} The continuity equation~\eqref{Liouville-3} in the expanded form is expressed as \begin{gather} \phi \Delta h + \frac{\partial{\phi}}{\partial{x_0}} \frac{\partial{h}}{\partial{x_0}} + \frac{\partial{\phi}}{\partial{x_1}} \frac{\partial{h}}{\partial{x_1}} + \frac{\partial{\phi}}{\partial{x_2}}\frac{\partial{h}}{\partial{x_2}} = 0. \label{Liouville-eq-3-expanded} \end{gather} The equipotential surfaces (often called ``the level surfaces'', see, e.g., \cite{ZachThoe:1986,BorisTar:1979}) in $\Lambda$ are given by the equation \begin{gather} h(x_0,x_1,x_2) = C = const. \label{equipotential} \end{gather} Using the total differential $dh$, eqn~\eqref{equipotential} may be reformulated as an exact differential equation (see, e.g., \cite{Walter:1998}) \begin{gather*} dh = \frac{\partial{h}}{\partial{x_0}} d{x_0} + \frac{\partial{h}}{\partial{x_1}} d{x_1} + \frac{\partial{h}}{\partial{x_2}} d{x_2} = 0. \end{gather*} Let $\varsigma$ be a real independent variable.
Assume that the following homogeneous linear first-order partial differential equation (see, e.g., \cite{ZachThoe:1986,Zaud:2006}) \begin{gather} \frac{\partial{h}}{\partial{x_0}} W_0 + \frac{\partial{h}}{\partial{x_1}} W_1 + \frac{\partial{h}}{\partial{x_2}} W_2 = 0 \label{PDE} \end{gather} is satisfied in $\Lambda$ such that \begin{gather*} \frac{dx_l}{d\varsigma} = W_l(x_0,x_1,x_2) \quad (l = 0,1,2). \end{gather*} According to \cite{ZachThoe:1986} and \cite{ArnoldGeom}, a surface $S$ in $\Lambda$ is an integral surface of the characteristic vector field $\vec W = (W_0, W_1, W_2)$ of eqn~\eqref{PDE} if $S$ is a level surface of a first integral of $\vec W$. In other words, $S$ is described by the equation~\eqref{equipotential}, where $h = h(x_0,x_1,x_2)$ is a solution of eqn~\eqref{PDE} in $\Lambda$ such that $\mathrm{grad} \ h \neq 0$. An integral surface of $\vec W$ is a member of a one-parameter family of integral surfaces of $\vec W$ given by eqn~\eqref{equipotential} with $C$ considered as a parameter. Eqn~\eqref{PDE} is geometrically interpreted as the orthogonality condition for the potential vector field $\vec V = \mathrm{grad} \ h$ and the characteristic vector field $\vec W = \frac{d{\vec x}}{d\varsigma}$: \begin{gather} (\vec V, \vec W) = (\mathrm{grad} \ h, \vec W) = 0. \label{orthogonality} \end{gather} Eqn~\eqref{orthogonality} is satisfied, in particular, under the condition $\mathrm{grad} \ h = 0$. \begin{definition} A point $\vec x^* = (x_0^*,x_1^*,x_2^*) \in \Lambda$ is said to be a critical point of the scalar potential $h$ if $\mathrm{grad} \ h(x_0^*,x_1^*,x_2^*) = 0$. The set of all critical points is called the critical set of $h$ in $\Lambda$.
\end{definition} \begin{remark} As follows from the three conditions $\frac{\partial{h(x_0^*,x_1^*,x_2^*)}}{\partial{x_0}} = 0$, $\frac{\partial{h(x_0^*,x_1^*,x_2^*)}}{\partial{x_1}} = 0$, $\frac{\partial{h(x_0^*,x_1^*,x_2^*)}}{\partial{x_2}} = 0$, eqn~\eqref{Liouville-eq-3-expanded} takes the simplified form $\Delta h = 0$ within the critical set of $h$. \end{remark} \begin{definition} A critical point $\vec x^* = (x_0^*,x_1^*,x_2^*) \in \Lambda$ of the scalar potential $h = h(x_0, x_1, x_2)$ is said to be a degenerate critical point if $\det\mathbf{H}(h(x_0^{*},x_1^{*},x_2^{*})) = 0$. Otherwise, it is called a nondegenerate critical point of $h$. \end{definition} \begin{remark} It is well known (see, e.g., \cite{LavSh:1987}) that an arbitrary critical point in the complex plane is nondegenerate. \end{remark} The characteristic equation of the Jacobian matrix of an arbitrary potential $C^1$-vector field $\vec V$ in the general setting \begin{gather} \begin{pmatrix} \frac{\partial{V_0}}{\partial{x_0}} & \frac{\partial{V_0}}{\partial{x_1}} & \frac{\partial{V_0}}{\partial{x_2}} \\[1ex] \frac{\partial{V_1}}{\partial{x_0}} & \frac{\partial{V_1}}{\partial{x_1}} & \frac{\partial{V_1}}{\partial{x_2}} \\[1ex] \frac{\partial{V_2}}{\partial{x_0}} & \frac{\partial{V_2}}{\partial{x_1}} & \frac{\partial{V_2}}{\partial{x_2}} \end{pmatrix} = \begin{pmatrix} \ \ \frac{\partial{u_0}}{\partial{x_0}} & \ \ \frac{\partial{u_0}}{\partial{x_1}} & \ \ \frac{\partial{u_0}}{\partial{x_2}} \\[1ex] -\frac{\partial{u_1}}{\partial{x_0}} & -\frac{\partial{u_1}}{\partial{x_1}} & -\frac{\partial{u_1}}{\partial{x_2}} \\[1ex] -\frac{\partial{u_2}}{\partial{x_0}} & -\frac{\partial{u_2}}{\partial{x_1}} & -\frac{\partial{u_2}}{\partial{x_2}} \end{pmatrix} \label{Hessian-matrix-3} \end{gather} is expressed as (see, e.g., \cite{BorisTar:1979,LaiRubKr:2010,Br:Hefei2020}) \begin{gather} \lambda^3 - I_{\mathbf{J}(\vec V)} \lambda^2 + II_{\mathbf{J}(\vec V)} \lambda - III_{\mathbf{J}(\vec V)} = 0.
\label{characteristic lambda-3} \end{gather} The principal scalar invariants $I_{\mathbf{J}(\vec V)}$, $II_{\mathbf{J}(\vec V)}$, $III_{\mathbf{J}(\vec V)}$ are given by the formulas \begin{gather} \begin{cases} I_{{\mathbf{J}(\vec V)}} \equiv \mathrm{tr} \mathbf{J}(\vec V) = \lambda_0 + \lambda_1 + \lambda_2 = J_{00} + J_{11} + J_{22}, \\[1ex] II_{{\mathbf{J}(\vec V)}} = \lambda_0 \lambda_1 + \lambda_0 \lambda_2 + \lambda_1 \lambda_2 = \\[1ex] J_{00}J_{11} + J_{00}J_{22} + J_{11}J_{22} - (J_{01})^2 - (J_{02})^2 - (J_{12})^2, \\[1ex] III_{{\mathbf{J}(\vec V)}} \equiv \det\mathbf{J}(\vec V) = \lambda_0 \lambda_1 \lambda_2 = \\[1ex] J_{00}J_{11}J_{22} + 2J_{01}J_{02}J_{12} - J_{00}(J_{12})^2 - J_{11}(J_{02})^2 - J_{22}(J_{01})^2, \end{cases} \label{principal invariants} \end{gather} where the real roots $\lambda_0$, $\lambda_1$, $\lambda_2$ of eqn~\eqref{characteristic lambda-3} are the eigenvalues of~\eqref{Hessian-matrix-3}. The principal scalar invariants~\eqref{principal invariants} in $\mathbb R^3$ play key roles within analytic models of potential fields in mathematical physics and continuum mechanics (see, e.g., \cite{BorisTar:1979,Ilyushin:1990,LaiRubKr:2010,Br:Hefei2020}). The third principal invariant may have a variable sign in simply connected open domains $\Lambda \subset \mathbb R^3$, in contrast to the second principal invariant within the framework of the concept of \emph{Conformal mappings of the second kind}. The Jacobian matrix $\mathbf{J}(\vec V)$ in the case of a potential velocity field $\vec V$ in $\mathbb R^3$ in continuum mechanics is interpreted as the rate of deformation tensor (see, e.g., \cite{BorisTar:1979,Ilyushin:1990,Sedov:1994,LaiRubKr:2010,Reddy:2018}). \begin{definition} A point $(x_0,x_1,x_2) \in \Lambda$ is said to be a degenerate point of the Jacobian matrix $\mathbf{J}(\vec V)$ in $\Lambda$ if $\det\mathbf{J}(\vec V(x_0,x_1,x_2)) = 0$. Otherwise, it is called a nondegenerate point of $\mathbf{J}(\vec V)$ in $\Lambda$.
\end{definition} The Jacobian matrix $\mathbf{J}(\vec V)$ of an arbitrary potential $C^1$-vector field $\vec V$ coincides with the Hessian matrix $\mathbf{H}(h)$ of the corresponding scalar potential $h$. Along with that, the set of degenerate points of the Jacobian matrix $\mathbf{J}(\vec V)$ in $\Lambda$ covers the set of degenerate critical points of the scalar potential $h$ in $\Lambda$. \subsection{Vector Fields in the Phase Space, Autonomous Systems and Gradient Systems: Basic Concepts} \label{subsec23} The development and applications of analytic models of potential vector fields in continuum mechanics require immersion in the theory of \emph{Autonomous systems of first-order ordinary differential equations} (see, e.g., \cite{AbrMarsden:1987,Goriely:2001,Perko:2001,Wiggins:2003,HirschSmaleDev:2013,Zhang:2017,Strogatz:2018}). Let us take a look at the basic concepts of autonomous systems in the Euclidean space $\mathbb R^n = \{(x_1, \ldots, x_n)\}$. The space $\mathbb R^n$ is known as the phase space. \begin{definition} Let $\vec Q = (Q_1, \ldots, Q_n)$ be a vector field in an open set $\Omega \subset \mathbb R^n$. An autonomous system of first-order ordinary differential equations \begin{gather} \frac{d \vec x}{dt} = \vec Q(\vec x) \label{auton-n} \end{gather} is said to be smooth if $\vec Q \in C^1(\Omega)$. \end{definition} \begin{definition} A point $\vec x^{**} = (x_1^{**}, \ldots, x_n^{**}) \in \Omega$ is said to be an equilibrium point of a smooth system~\eqref{auton-n} if $\vec Q(\vec x^{**}) = 0$. Otherwise, it is called a regular point of~\eqref{auton-n}. The set of all equilibrium points in $\Omega$ is called the set of equilibria of~\eqref{auton-n} in $\Omega$.
\end{definition} \begin{definition} A linear autonomous system of the form \begin{gather*} \frac{d \vec x}{dt} = \mathbf{A}(\vec x^{**}) \vec x \end{gather*} is said to be the linearization of a smooth system~\eqref{auton-n} at an equilibrium point $\vec x^{**} \in \Omega$ if the $n \times n$ matrix $\mathbf{A}(\vec x^{**})$ coincides with the Jacobian matrix $\mathbf{J}(\vec Q(\vec x^{**}))$ of the vector field $\vec Q$ at $\vec x^{**}$. \end{definition} \begin{definition} An equilibrium point $\vec x^{**} \in \Omega$ of the system~\eqref{auton-n} is said to be degenerate if $\det\mathbf{J}(\vec Q(\vec x^{**})) = 0$. Otherwise, it is called a nondegenerate equilibrium point of~\eqref{auton-n}. \end{definition} Equilibrium points of the system~\eqref{auton-n} in the context of \emph{Stability theory}, \emph{Bifurcation theory} and the theory of \emph{Integrability of differential systems} are often referred to as singular points (also sometimes as ``zeros'', ``critical points'', ``fixed points'', or ``stationary points'') (see, e.g., \cite{Perko:2001,Wiggins:2003,Strogatz:2018,Goriely:2001,LlibreZhang:2012,Zhang:2016,Zhang:2017}). Consider the basic concepts of autonomous systems in the space $\mathbb R^n = \{(x_1, \ldots, x_n)\}$ in a broader context, where a $C^1$-vector field $\vec Q = (Q_1, \ldots, Q_n)$ depends on a variable parameter $\mu$, $\mu \in \mathbb R$, in an open set $\Omega \subset \mathbb R^n$. Such systems are referred to as autonomous systems depending on a parameter $\mu$ (see, e.g., \cite{ChowHale:1982,Perko:2001,HirschSmaleDev:2013,Kuznetsov:2023}).
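The notions of linearization, degeneracy and hyperbolicity introduced above can be made concrete numerically. The following sketch (illustrative Python; the planar system $dx/dt = \mu - x^2$, $dy/dt = -y$ and all numerical values are our own choices, not taken from the cited works) builds the Jacobian matrix at an equilibrium point by central differences and classifies it through the eigenvalues.

```python
import math

def jacobian(Q, x, y, eps=1e-6):
    """Central-difference Jacobian matrix of a planar vector field Q at (x, y)."""
    return [[(Q(x + eps, y)[r] - Q(x - eps, y)[r]) / (2*eps),
             (Q(x, y + eps)[r] - Q(x, y - eps)[r]) / (2*eps)] for r in range(2)]

def eigenvalues_2x2(J):
    """Eigenvalues of a 2x2 matrix via its trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    disc = tr*tr - 4*det
    s = math.sqrt(abs(disc))
    if disc >= 0:
        return ((tr - s)/2, (tr + s)/2)
    return (complex(tr, -s)/2, complex(tr, s)/2)

# dx/dt = mu - x^2, dy/dt = -y: equilibria at (+-sqrt(mu), 0) for mu > 0
mu = 1.0
Q = lambda x, y: (mu - x*x, -y)
lam = eigenvalues_2x2(jacobian(Q, math.sqrt(mu), 0.0))
# lam is approximately (-2, -1): both eigenvalues lie off the imaginary axis,
# so the equilibrium is hyperbolic, with index 2 (two negative eigenvalues)
# in the sense of Abraham and Marsden

# At mu = 0 the equilibrium (0, 0) is degenerate (det J = 0), hence nonhyperbolic
lam0 = eigenvalues_2x2(jacobian(lambda x, y: (-x*x, -y), 0.0, 0.0))
```

At $\mu = 0$ one eigenvalue vanishes, illustrating how a degenerate equilibrium of a parameter-dependent system is exactly a nonhyperbolic one.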
\begin{definition} An equilibrium point $\vec x^{**} = (x_1^{**}, \ldots, x_n^{**}) \in \Omega$ of a smooth system of the form \begin{gather} \frac{d \vec x}{dt} = \vec Q(\vec x; \mu) \label{auton-n-mu} \end{gather} is said to be hyperbolic if all the eigenvalues $\lambda_1, \ldots, \lambda_n$ of the Jacobian matrix $\mathbf{J}(\vec Q(\vec x^{**}; \mu))$ of the vector field $\vec Q(\vec x^{**}; \mu)$ lie off the imaginary axis, i.e., $\mathrm{Re}(\lambda_l) \neq 0$ for $l = 1, \ldots, n$. Otherwise, it is called a nonhyperbolic point of the system~\eqref{auton-n-mu}. \end{definition} Hyperbolic equilibrium points are sometimes referred to as elementary equilibrium (or ``elementary critical'') points (see, e.g., \cite{AbrMarsden:1987}). According to (\cite{Strogatz:2018}, p.156), ``Hyperbolic fixed points are sturdy; their stability type is unaffected by small nonlinear terms. Nonhyperbolic fixed points are the fragile ones.'' Following the concept given by Abraham and Marsden (\cite{AbrMarsden:1987}, p.75), the number of eigenvalues with negative real part (counting multiplicities) of the matrix $\mathbf{J}(\vec Q(\vec x^{**}; \mu))$ may be viewed as the index of $\vec x^{**}$. As noted by Strogatz (\cite{Strogatz:2018}, p.47), ``Bifurcation theory is rife with conflicting terminology. The subject really hasn't settled down yet, and different people use different words for the same thing.'' Nevertheless, the basic concepts of autonomous systems in the phase space $\mathbb R^n = \{(x_1, \ldots, x_n)\}$ have been extended to the case of several variable parameters $\mu_1, \ldots, \mu_{\check{m}}$, $\check{m} > 1$ (see, e.g., \cite{ChowHale:1982,ArnAfrIlyashShil:1994,Kuznetsov:2023}). In particular, the real coefficients of polynomials within polynomial autonomous systems may be interpreted as variable parameters $\mu_1, \ldots, \mu_{\check{m}}$, such that $Q_1 = Q_1(x_1, \ldots, x_n; \mu_1, \ldots, \mu_{\check{m}}), \ldots, Q_n = Q_n(x_1, \ldots, x_n; \mu_1, \ldots, \mu_{\check{m}})$.
The space $\mathbb R^{\check{m}} = \{(\mu_1, \ldots, \mu_{\check{m}})\}$ is known as the space of parameters (see, e.g., \cite{ArnAfrIlyashShil:1994}). In the last two decades, fundamentally new properties of polynomial autonomous systems in $\mathbb R^3$ and $\mathbb R^4$ have attracted special attention in the context of the theory of \emph{Integrability of differential systems} (see, e.g., \cite{Goriely:2001,GasLliZh:2009,Zhang:2011,WalZhang:2021,LlibreZhang:2012,Zhang:2016,Zhang:2017}). Some remarkable properties of polynomial systems in $\mathbb R^4$ represented by the so-called one-dimensional quaternion homogeneous polynomial differential equation \begin{gather} \frac{dq}{dt} = \check{a} q^{\check{k}}\overline{q}^{\check{n}}, \label{a-overline-monomial-k,n} \end{gather} where $\check{a} \in \mathbb H$, $\check{k}, \check{n} \in \mathbb N \bigcup \{0\}$, $q = q_0 + q_1i + q_2j + q_3k$ and $\overline{q}$ is the quaternion conjugation of $q$, were considered by Gasull, Llibre and Zhang in 2009 \cite{GasLliZh:2009}. According to \cite{GasLliZh:2009}, the right-hand side of~\eqref{a-overline-monomial-k,n} is a unique monomial. When $\check{n} = 0$, the quaternion differential equation~\eqref{a-overline-monomial-k,n} is written as \begin{gather} \frac{dq}{dt} = \check{a} q^{\check{k}}. \label{monomial-k} \end{gather} Certain important cases of~\eqref{monomial-k}, where $\check{a} \in \mathbb H$, were studied. When $\check{k} = 0$, eqn~\eqref{a-overline-monomial-k,n} is written as \begin{gather} \frac{dq}{dt} = \check{a} \overline{q}^{\check{n}}. \label{overline-monomial-n} \end{gather} Certain important cases of~\eqref{overline-monomial-n}, where $\check{a} \in \mathbb H$, were highlighted.
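The monomial right-hand side of eqn~\eqref{a-overline-monomial-k,n} is easy to experiment with numerically. A minimal sketch (illustrative Python; the coefficient $\check{a}$, the initial value $q$ and the step size are our own choices, not taken from \cite{GasLliZh:2009}):

```python
# Minimal sketch (coefficient, initial value and step size are illustrative
# choices): evaluating the right-hand side a * q^k * conj(q)^n of the
# one-dimensional quaternion monomial differential equation.

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qpow(q, n):
    """n-th power of a quaternion (n a nonnegative integer)."""
    out = (1.0, 0.0, 0.0, 0.0)
    for _ in range(n):
        out = qmul(out, q)
    return out

conj = lambda q: (q[0], -q[1], -q[2], -q[3])

def rhs(a, q, k, n):
    """Right-hand side a q^k conj(q)^n of the monomial quaternion equation."""
    return qmul(a, qmul(qpow(q, k), qpow(conj(q), n)))

q = (1.0, 2.0, 3.0, 4.0)
# q * conj(q) is the real quaternion |q|^2 = 30
assert qmul(q, conj(q)) == (30.0, 0.0, 0.0, 0.0)
# with k = 1, n = 0 and a real coefficient the equation is linear: rhs = a*q
assert rhs((2.0, 0, 0, 0), q, 1, 0) == (2.0, 4.0, 6.0, 8.0)

# one explicit Euler step of dq/dt = a conj(q)^2 with a = 0.5*i
a, dt = (0.0, 0.5, 0.0, 0.0), 1e-3
q_next = tuple(qi + dt*vi for qi, vi in zip(q, rhs(a, q, 0, 2)))
```

The two assertions confirm the basic identities $q\overline{q} = \|q\|^2$ and the linear case $\check{k} = 1$, $\check{n} = 0$ of~\eqref{monomial-k}.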
Several new kinds of polynomial autonomous systems in $\mathbb R^4$ represented by polynomial differential equations over the quaternions \begin{gather} \frac{dx}{dt} = P(x), \label{WaZh-polynomial} \end{gather} where $x = x_0 + x_1i + x_2j + x_3k$ and $P(x)$ is a quaternionic polynomial with complex coefficients, were studied by Zhang in 2011 \cite{Zhang:2011} and by Walcher and Zhang in 2021 \cite{WalZhang:2021}. As may be seen in \cite{WalZhang:2021}, qualitative properties of equilibrium (or ``stationary'') points of polynomial autonomous systems represented by~\eqref{WaZh-polynomial} raise new issues for consideration in the context of \emph{Stability theory}. Here it is necessary to clarify that the potential vector field $\vec V = \vec V(x_0, x_1, x_2)$ within the concept of \emph{Smooth autonomous systems in the phase space $\mathbb R^3 = \{(x_0, x_1, x_2)\}$} may be interpreted as the gradient vector field, and the coefficient $\phi = \phi(x_0,x_1,x_2)$ as the density associated with the invariant measure of the form $\int_{\Lambda} \phi(x_0,x_1,x_2)dx_0 dx_1 dx_2$ (see, e.g., \cite{Wiggins:2003,Strogatz:2018,Goriely:2001}), respectively. A smooth gradient system with scalar potential $h$ in a simply connected open domain $\Lambda \subset \mathbb R^3 = \{(x_0, x_1, x_2)\}$ may be described as (see, e.g., \cite{Wiggins:2003,HirschSmaleDev:2013,Strogatz:2018,BrRhod:2013,BrRhod:2014}) \begin{gather} \frac{d{\vec x}}{dt} = \vec V = \mathrm{grad} \ h(\vec x), \quad t \in \mathbb R. \label{grad-system-3} \end{gather} \begin{remark} As noted by Wiggins (\cite{Wiggins:2003}, p.231), ``The minus sign in front of the gradient is traditional and imposes no restriction as we can always redefine $h(\vec x)$ as $-h(\vec x)$'' (see, e.g., the plus sign in front of the gradient in the definition of gradient systems with harmonic potential given by Kozlov and Furta \cite{KozlovFurta:2001}).
\end{remark} \begin{remark} An equilibrium point $\vec x^{**} = (x_0^{**}, x_1^{**}, x_2^{**}) \in \Lambda$ of a smooth gradient system with scalar potential $h$ depending on a parameter $\mu$ \begin{gather} \frac{d \vec x}{dt} = \vec V(\vec x; \mu) = \mathrm{grad} \ h(\vec x; \mu) \label{grad-system-mu} \end{gather} is nonhyperbolic if and only if there is at least one zero eigenvalue of the Jacobian matrix $\mathbf{J}(\vec V(\vec x^{**}; \mu))$ of the gradient vector field $\vec V(\vec x^{**}; \mu)$. Therefore, nonhyperbolic equilibrium points and degenerate equilibrium points of the system~\eqref{grad-system-mu} are the same. \end{remark} It is interesting to note that critical points $\vec x^*$ of any scalar potential $h$ in $\Lambda$ may be studied as equilibrium points $\vec x^{**}$ of the corresponding gradient system~\eqref{grad-system-mu} in $\Lambda$. The Jacobian matrix $\mathbf{J}(\vec V)$ in the context of \emph{Stability theory of gradient systems} (see, e.g., \cite{Chetayev:1961,Gilmore:1993}) may be regarded as the stability matrix at $\vec x^{**}$, and the eigenvalues of $\mathbf{J}(\vec V)$ at $\vec x^{**}$ as the stability coefficients of $\vec x^{**}$, respectively. Following the concept given by Kozlov \cite{Kozlov:1993}, the number of positive eigenvalues (counting multiplicities) of the Jacobian matrix $\mathbf{J}(\vec V(\vec x^{**}; \mu))$ at an equilibrium point $\vec x^{**}$ may be viewed as the degree of instability of $\vec x^{**}$. The first applications of the concept of \emph{Gradient systems}~\eqref{grad-system-3} were provided in 2013-2014 \cite{BrRhod:2013,BrRhod:2014}. 
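A basic property worth keeping in mind: along trajectories of the gradient system~\eqref{grad-system-3}, $\frac{dh}{dt} = |\mathrm{grad}\ h|^2 \geq 0$, so the potential never decreases. A quick numerical sketch (illustrative Python; the sample potential, initial point and step size are our own choices):

```python
# Sketch (sample potential, initial point and Euler step are illustrative
# choices): along the flow of dx/dt = grad h, dh/dt = |grad h|^2 >= 0,
# so the scalar potential h is nondecreasing.

def h(x0, x1, x2):
    return x0*x0 - x1*x1 + 0.5*x2*x2     # a sample scalar potential

def grad_h(x0, x1, x2, eps=1e-6):
    """Gradient of h by central differences."""
    return ((h(x0 + eps, x1, x2) - h(x0 - eps, x1, x2)) / (2*eps),
            (h(x0, x1 + eps, x2) - h(x0, x1 - eps, x2)) / (2*eps),
            (h(x0, x1, x2 + eps) - h(x0, x1, x2 - eps)) / (2*eps))

x, dt = (0.3, 0.2, -1.0), 1e-3
values = []
for _ in range(1000):                    # explicit Euler integration
    values.append(h(*x))
    x = tuple(xi + dt*gi for xi, gi in zip(x, grad_h(*x)))

assert all(b >= a for a, b in zip(values, values[1:]))   # h is nondecreasing
```

Note that the Jacobian matrix of the numerical gradient field here coincides with the Hessian of $h$, in line with the identification $\mathbf{J}(\vec V) = \mathbf{H}(h)$ used above.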
Potential (often referred to as ``irrotational'' in mathematical physics and continuum mechanics \cite{BorisTar:1979,Ilyushin:1990,LaiRubKr:2010,BrKos:2012,BrRhod:2013}) velocity fields $\vec V$ in special inhomogeneous isotropic media with the mass density $\phi = \rho^{-1}$ were represented by the following reduced quaternion-valued ordinary differential equation: \begin{gather*} \frac{dx}{dt} = V_0 + i V_1 + j V_2 = \overline{F}(x), \end{gather*} where $x = x_0 + ix_1 + jx_2$, $\overline{F}(x) = u_0 - i u_1 - j u_2$ and $F(x) = \frac{\partial{h}}{\partial{x_0}} - i \frac{\partial{h}}{\partial{x_1}} - j\frac{\partial{h}}{\partial{x_2}}$. \section{Analytic Models of Potential Velocity Fields in Some Special Inhomogeneous Media} \label{sec3} Hereinafter, the vector $\vec V = \mathrm{grad} \ h$ will be identified with a potential velocity field, the scalar potential $h$ with the velocity potential, the coefficient $\phi$ with the mass density of an inhomogeneous isotropic medium, and the Jacobian matrix $\mathbf{J}(\vec V)$ with the rate of deformation tensor (see, e.g., \cite{LaiRubKr:2010,Reddy:2018,WhiteXue:2021,AnderCadou:2024}), respectively. The continuity equation~\eqref{Liouville-3} in continuum mechanics allows one to provide local conservation of mass at any point $\vec x = (x_0,x_1,x_2) \in \Lambda$ in an inhomogeneous isotropic medium with the mass density $\phi = \phi(x_0,x_1,x_2)$. Thus, the invariant measure $\int_{\Lambda} \phi(x_0,x_1,x_2)dx_0 dx_1 dx_2$ may be identified with the total mass of the matter occupying $\Lambda$ (see, e.g., \cite{LaiRubKr:2010,Reddy:2018}). Inhomogeneous isotropic media whose properties are constant throughout every plane perpendicular to a fixed direction are referred to in mathematical physics and continuum mechanics as layered media (see, e.g., \cite{BornWolf:2003,Brekh:1980,Br:Hefei2020}).
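The passage from the continuity equation $\mathrm{div}(\phi\,\mathrm{grad}\ h) = 0$ to its expanded form~\eqref{Liouville-eq-3-expanded} is just the product rule $\mathrm{div}(\phi\,\mathrm{grad}\ h) = \phi \Delta h + \mathrm{grad}\ \phi \cdot \mathrm{grad}\ h$. A finite-difference sanity check (illustrative Python; the sample potential and sample point are our own choices) with the mass density $\phi = \rho^{-1}$:

```python
import math

# Sanity check (sample potential and point are illustrative choices): the
# product rule div(phi * grad h) = phi * Laplacian(h) + grad(phi) . grad(h)
# behind the expanded continuity equation, with mass density phi = 1/rho.

phi = lambda x0, x1, x2: 1.0 / math.sqrt(x1*x1 + x2*x2)
h   = lambda x0, x1, x2: x0*x0 - 0.5*(x1*x1 + x2*x2)

eps = 1e-4

def partial(f, x, l):
    """First central difference of f in direction l at the point x."""
    xp, xm = list(x), list(x)
    xp[l] += eps
    xm[l] -= eps
    return (f(*xp) - f(*xm)) / (2*eps)

def laplacian(f, x):
    """Sum of second central differences of f at the point x."""
    tot = 0.0
    for l in range(3):
        xp, xm = list(x), list(x)
        xp[l] += eps
        xm[l] -= eps
        tot += (f(*xp) - 2*f(*x) + f(*xm)) / (eps*eps)
    return tot

x = (0.7, 1.2, -0.4)
lhs = sum(partial(lambda *y: phi(*y) * partial(h, y, l), x, l) for l in range(3))
rhs = phi(*x) * laplacian(h, x) + sum(partial(phi, x, l) * partial(h, x, l)
                                      for l in range(3))
print(abs(lhs - rhs))  # small: both sides agree up to discretization error
```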
Let us turn our attention to some original properties of analytic models of potential velocity fields $\vec V$ in biplanarly layered media, where $\phi = \phi_1(x_1)\phi_2(x_2)$, $\phi_1(x_1) > 0$, $\phi_2(x_2) > 0$: \begin{gather} \begin{cases} \mathrm{div} \, ( \phi_1(x_1)\phi_2(x_2) \vec V ) = 0, \\[1ex] \mathrm{curl}{\ \vec V} = 0. \end{cases} \label{bi-potential-system-3} \end{gather} The general class of $C^1$-solutions of the system~\eqref{bi-potential-system-3} is equivalently represented as the general class of $C^1$-solutions of the system \begin{gather} \begin{cases} \phi_1(x_1)\phi_2(x_2) \left(\frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_1}}{\partial{x_1}} - \frac{\partial{u_2}}{\partial{x_2}}\right) - \left( \phi_2(x_2)\frac{d{{\phi}_1}}{d{x_1}}u_1 + \phi_1(x_1)\frac{d{{\phi}_2}}{d{x_2}}u_2 \right) = 0, \\[1ex] \frac{\partial{u_0}}{\partial{x_1}} = -\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}} = -\frac{\partial{u_2}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}} = \frac{\partial{u_2}}{\partial{x_1}}, \end{cases} \label{Bryukhov-3-hyperbolic-3} \end{gather} where $(V_0,V_1,V_2) = (u_0, -u_1, -u_2)$. Eqn~\eqref{Liouville-eq-3-expanded} is written as \begin{gather} \phi_1(x_1)\phi_2(x_2) \left( \frac{{\partial}^2{h}}{{\partial{x_0}}^2} + \frac{{\partial}^2{h}}{{\partial{x_1}}^2} + \frac{{\partial}^2{h}}{{\partial{x_2}}^2} \right) + \phi_2(x_2)\frac{d{{\phi}_1}}{d{x_1}} \frac{\partial{h}}{\partial{x_1}} + \phi_1(x_1)\frac{d{{\phi}_2}}{d{x_2}} \frac{\partial{h}}{\partial{x_2}} = 0. \label{alpha_1,2-biplanar} \end{gather} Suppose that $\phi_1(x_1) = x_1^{-\alpha_1}$, $\phi_2(x_2) = x_2^{-\alpha_2}$ $(\alpha_1, \alpha_2 \in \mathbb{R})$. Eqn~\eqref{alpha_1,2-biplanar} is reduced to the following elliptic equation with two singular coefficients: \begin{gather} \Delta{h} - \frac{\alpha_1}{x_1}\frac{\partial{h}}{\partial{x_1}} - \frac{\alpha_2}{x_2}\frac{\partial{h}}{\partial{x_2}} = 0.
\label{alpha_1,2-bihyperbolic-3} \end{gather} The system~\eqref{bi-potential-system-3} is expressed as \begin{gather*} \begin{cases} \mathrm{div} \, ( x_1^{-\alpha_1} x_2^{-\alpha_2} \vec V ) = 0, \\[1ex] \mathrm{curl}{\ \vec V} = 0, \end{cases} \end{gather*} and the system~\eqref{Bryukhov-3-hyperbolic-3} is simplified: \begin{gather*} \begin{cases} \left(\frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_1}}{\partial{x_1}} - \frac{\partial{u_2}}{\partial{x_2}}\right) + \frac{\alpha_1}{x_1} u_1 + \frac{\alpha_2}{x_2} u_2 = 0, \\[1ex] \frac{\partial{u_0}}{\partial{x_1}} = -\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}} = -\frac{\partial{u_2}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}} = \ \ \frac{\partial{u_2}}{\partial{x_1}}. \end{cases} \end{gather*} This system under the conditions $\alpha_1 > 0$, $\alpha_2 > 0$ may be characterized as an $(\alpha_1, \alpha_2)$-bihyperbolic non-Euclidean modification of the system $(R)$ with respect to the conformal metric~\eqref{Riemannian conformal metric} defined on the quarter-space $\{x_1 > 0, x_2 > 0\}$ by the formula \begin{gather*} ds^2 = \frac{d{x_0}^2 + d{x_1}^2 + d{x_2}^2}{x_1^{2\alpha_1} x_2^{2\alpha_2}}. \end{gather*} \begin{definition} Every exact solution of eqn~\eqref{alpha_1,2-bihyperbolic-3} under the conditions $\alpha_1 > 0$, $\alpha_2 > 0$ in a simply connected open domain $\Lambda \subset \mathbb R^3$ $(x_1 > 0, x_2 > 0)$ is called an $(\alpha_1, \alpha_2)$-bihyperbolic harmonic potential in $\Lambda$. \end{definition} The basic analytic properties of $(\alpha_1, \alpha_2)$-bihyperbolic harmonic potentials may be established using separation of variables.
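An elementary exact solution of eqn~\eqref{alpha_1,2-bihyperbolic-3} is $h = x_1^{\alpha_1+1} + x_2^{\alpha_2+1}$, since $\frac{d^2}{dx^2}x^{\alpha+1} = \frac{\alpha}{x}\frac{d}{dx}x^{\alpha+1}$. A finite-difference verification (illustrative Python; the parameter values and the sample point are our own choices):

```python
# Finite-difference check (parameters and sample point are illustrative
# choices): h = x1^(a1+1) + x2^(a2+1) solves
# Laplacian(h) - (a1/x1) dh/dx1 - (a2/x2) dh/dx2 = 0 for x1, x2 > 0.

a1, a2 = 1.5, 2.0
h = lambda x0, x1, x2: x1**(a1 + 1) + x2**(a2 + 1)

eps = 1e-4

def d1(x, l):
    """First central difference of h in direction l."""
    xp, xm = list(x), list(x)
    xp[l] += eps
    xm[l] -= eps
    return (h(*xp) - h(*xm)) / (2*eps)

def d2(x, l):
    """Second central difference of h in direction l."""
    xp, xm = list(x), list(x)
    xp[l] += eps
    xm[l] -= eps
    return (h(*xp) - 2*h(*x) + h(*xm)) / (eps*eps)

x = (0.4, 1.3, 0.8)
residual = (sum(d2(x, l) for l in range(3))
            - (a1 / x[1]) * d1(x, 1) - (a2 / x[2]) * d1(x, 2))
print(abs(residual))  # approximately 0, up to discretization error
```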
\begin{theorem} A special class of three-dimensional solutions of eqn~\eqref{alpha_1,2-bihyperbolic-3} may be obtained using the Bessel functions of the first and second kind for different values of the separation constants $\breve{\lambda}$ and $\breve{\mu}$: \begin{align*} & h(x_0, x_1, x_2) = {x_1}^\frac{\alpha_1+1}{2} \left[ c_{\breve{\lambda}}^1 J_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1) + c_{\breve{\lambda}}^2 Y_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1) \right] \times \\ & \sum_{\breve{\mu}= -\infty}^\infty \left( b^1_{\breve{\mu}} \cos{\breve{\mu} x_0} + b^2_{\breve{\mu}} \sin{\breve{\mu} x_0} \right) {x_2}^\frac{\alpha_2+1}{2} \left[ a^1_{\breve{\lambda}, \breve{\mu}} J_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2) + a^2_{\breve{\lambda}, \breve{\mu}} Y_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2) \right], \end{align*} where $\ \breve{\nu} = \sqrt{ \breve{\lambda}^2 + \breve{\mu}^2}$; $\ c^1_{\breve{\lambda}}, c^2_{\breve{\lambda}}, b^1_{\breve{\mu}}, b^2_{\breve{\mu}}, a^1_{\breve{\lambda}, \breve{\mu}}, a^2_{\breve{\lambda}, \breve{\mu}} = const \in \mathbb R $. \end{theorem} \begin{proof} Consider a special class of exact solutions of eqn~\eqref{alpha_1,2-bihyperbolic-3} under the condition $h(x_0, x_1, x_2) =$ $p(x_0, x_2) \varpi(x_1)$: $$ \varpi \left( \frac{\partial{^2}{p}}{\partial{x_0}^2} + \frac{\partial {^2}{p}}{\partial{ x_2}^2} \right) - \frac{\varpi \alpha_2}{x_2} \frac{\partial{p}}{\partial{ x_2}} + p \frac{d{^2}{\varpi}}{d{x_1}^2} - \frac{ \alpha_1}{x_1} p \frac{d{\varpi}}{d{x_1}} = 0. 
$$ Relations \begin{align*} - p \frac{d{^2}{\varpi}}{d{x_1}^2} + \frac{\alpha_1}{x_1} p \frac{d{\varpi}}{d{x_1}} = \varpi \left( \frac{\partial{^2}{p}}{\partial{x_0}^2} + \frac{\partial{^2}{p}}{\partial{x_2}^2} \right) - \frac{\varpi \alpha_2}{x_2} \frac{\partial{p}}{\partial{x_2}} = \breve{\lambda}^2 p\varpi \quad (\breve{\lambda} = const \in \mathbb R) \end{align*} lead to the following system of equations: \begin{gather} \begin{cases} \frac{d{^2}{\varpi}}{d{x_1}^2} - \frac{\alpha_1}{x_1} \frac{d{\varpi}}{d{x_1}} + \breve{\lambda}^2 \varpi = 0, \\ \frac{\partial{^2}{p}}{\partial{x_0}^2} + \frac{\partial{^2}{p}}{\partial{x_2}^2} - \frac{\alpha_2}{x_2} \frac{\partial{p}}{\partial{x_2}} - \breve{\lambda}^2 p = 0. \end{cases} \label{Laplace-Beltrami equation, bi-sep-3} \end{gather} The first equation of the system~\eqref{Laplace-Beltrami equation, bi-sep-3}, a linear second-order ordinary differential equation with a power-function coefficient, may be solved using linearly independent solutions (see, e.g., \cite{PolZait:Ordin-2018}, Chapter 14, p. 526, item 63): $$ \varpi_{\breve{\lambda}}(x_1) = {x_1}^\frac{\alpha_1+1}{2} \left[ c_{\breve{\lambda}}^1 J_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1) + c_{\breve{\lambda}}^2 Y_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1) \right]; \quad c_{\breve{\lambda}}^1, c_{\breve{\lambda}}^2 = const \in \mathbb{R}, $$ where $J_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1)$ and $Y_{\frac{\alpha_1+1}{2}}(\breve{\lambda}x_1)$ are the Bessel functions of the first and second kind of real order $\frac{\alpha_1 + 1}{2}$ and real argument $\breve{\lambda}x_1$ (see, e.g., \cite{Watson:1944,Koren:2002}). The second equation of the system~\eqref{Laplace-Beltrami equation, bi-sep-3} may be solved using separation of variables $p(x_0, x_2) = \Xi(x_0) \Upsilon(x_2)$: $$ \frac{1}{\Xi} \frac{d{^2}{\Xi}}{d{x_0}^2} + \frac{1}{\Upsilon} \frac{d{^2}{\Upsilon}}{d{x_2}^2} - \frac{\alpha_2}{\Upsilon x_2} \frac{d{\Upsilon}}{d{x_2}} - \breve{\lambda}^2 = 0.
$$ Relations \begin{align*} - \frac{1}{\Xi} \frac{d{^2}{\Xi}}{d{x_0}^2} = \frac{1}{\Upsilon} \frac{d{^2}{\Upsilon}}{d{x_2}^2} - \frac{\alpha_2}{\Upsilon x_2} \frac{d{\Upsilon}}{d{x_2}} - \breve{\lambda}^2 = \breve{\mu}^2 \quad (\breve{\mu} = const \in \mathbb R) \end{align*} lead to the following system of equations \begin{gather} \begin{cases} \frac{d{^2}{\Xi}}{d{x_0}^2} + \breve{\mu}^2 \Xi = 0, \\[1ex] x_2^2 \frac{d{^2}{\Upsilon}}{d{x_2}^2} - \alpha_2 x_2 \frac{d{\Upsilon}}{d{x_2}} - (\breve{\lambda}^2 + \breve{\mu}^2)x_2^2 \Upsilon = 0. \end{cases} \label{eq-sep-x_2-x_0} \end{gather} The first equation of the system~\eqref{eq-sep-x_2-x_0} may be solved using trigonometric functions: $\quad \Xi_{\breve{\mu}}(x_0) = b^1_{\breve{\mu}} \cos{\breve{\mu} x_0} + b^2_{\breve{\mu}} \sin{\breve{\mu} x_0},$ where $\breve{\mu} \in \mathbb Z$. The second equation of the system~\eqref{eq-sep-x_2-x_0} may be solved using linearly independent solutions (see, e.g., \cite{PolZait:Ordin-2018}, Chapter 14, p. 526, item 63): $$ \Upsilon_{\breve{\lambda}, \breve{\mu}}(x_2) = {x_2}^\frac{\alpha_2+1}{2} \left[ a^1_{\breve{\lambda}, \breve{\mu}} J_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2) + a^2_{\breve{\lambda}, \breve{\mu}} Y_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2) \right], $$ keeping in mind that $J_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2)$ and $Y_{\frac{\alpha_2+1}{2}}(i \breve{\nu}x_2)$ are the Bessel functions of the first and second kind of real order $\frac{\alpha_2 + 1}{2}$ and purely imaginary argument $i \breve{\nu}x_2$, where $\breve{\nu} = \sqrt{\breve{\lambda}^2 + \breve{\mu}^2}$ (see, e.g., \cite{Watson:1944,Koren:2002}). \end{proof} \begin{remark} The Dirichlet problem in a bounded rectangular parallelepiped for eqn~\eqref{alpha_1,2-bihyperbolic-3} under the conditions $\alpha_1 > 0$, $\alpha_2 > 0$ was studied by Urinov and Karimov in 2023 in a three-dimensional setting \cite{UriKar:2023}.
It is important to note that various boundary value problems for elliptic equations with singular coefficients (see, e.g., \cite{UrinovKarimovKT:2019,UrinovKarimovKT:2020}) may have rich applications in the mechanics of layered media. Two-dimensional analytic models of potential meridional and transverse fields are of particular interest. \end{remark} When $\alpha_1=0$, $\alpha_2 \neq 0$, the equation~\eqref{alpha_1,2-bihyperbolic-3} leads to the Weinstein equation in $\mathbb R^3$ (see, e.g., \cite{Leut:CV20,ErOrel:2014}) \begin{gather} x_2 \Delta{h} - \alpha_2 \frac{\partial{h}}{\partial{x_2}} =0. \label{alpha-hyperbolic-3} \end{gather} Surprising analytic properties of exact solutions of eqn~\eqref{alpha-hyperbolic-3} have been studied by Leutwiler, Eriksson and Orelma in the context of \emph{Hyperbolic function theory in $\mathbb R^3$} (see, e.g., \cite{ErLeut:2007,ErOrel:2014}), and later in the context of the theory of \emph{Modified harmonic functions in $\mathbb R^3$} (see, e.g., \cite{Leut:2017-AACA,Leut:2017-CAOT,Leut:2021-MMAS}). \begin{definition} Every exact solution of eqn~\eqref{alpha-hyperbolic-3} under the condition $\alpha_2>0$ in a simply connected open domain $\Lambda \subset \mathbb R^3$ $(x_2 > 0)$ is called an $\alpha_2$-hyperbolic harmonic potential in $\Lambda$. \end{definition} Fundamentally new analytic properties of exact solutions of eqn~\eqref{alpha-hyperbolic-3} under the condition $\alpha_2=1$ have been investigated by Leutwiler and Eriksson-Bique in the context of \emph{Modified quaternionic analysis in $\mathbb R^3$} (see, e.g., \cite{Leut:CV17,Leut:CV20,Leut:Rud96,ErLe:1998}) using the reduced quaternionic power series with complex coefficients. Nowadays, exact solutions of eqn~\eqref{alpha-hyperbolic-3} in the context of the theory of \emph{Modified harmonic functions in $\mathbb R^3$}, where $\alpha_2 < 0$, are referred to as $(-\alpha_2)$-modified harmonic functions (see, e.g., \cite{Leut:2021-MMAS}).
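Simple polynomial solutions of the Weinstein equation~\eqref{alpha-hyperbolic-3} can be verified symbolically. The following minimal sketch checks two illustrative test potentials, $h = x_2^{\alpha_2+1}$ and $h = x_0 x_1$; these are convenience examples chosen here, not taken from the cited works.

```python
# Minimal symbolic sketch: check that the illustrative potentials
#   h = x2^(alpha2 + 1)  and  h = x0*x1
# satisfy the Weinstein equation  x2*Laplace(h) - alpha2*dh/dx2 = 0.
import sympy as sp

x0, x1, x2, a2 = sp.symbols('x0 x1 x2 alpha2', positive=True)

def weinstein(h):
    """Left-hand side of the Weinstein equation applied to h."""
    lap = sp.diff(h, x0, 2) + sp.diff(h, x1, 2) + sp.diff(h, x2, 2)
    return sp.simplify(x2*lap - a2*sp.diff(h, x2))

print(weinstein(x2**(a2 + 1)))  # 0
print(weinstein(x0*x1))         # 0
```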
Let us compare the similarities and differences between eqn~\eqref{eq-axial-hyperbolic-3-alpha} and eqn~\eqref{alpha_1,2-bihyperbolic-3} in Cartesian coordinates. This immediately leads to the following formulation. \begin{proposition} [The first criterion] Any $(\alpha_1, \alpha_2)$-bihyperbolic harmonic potential $h= h(x_0, x_1, x_2)$ in $\Lambda \subset \mathbb R^3$ $(x_1>0, x_2>0)$ represents an $(\alpha_1+ \alpha_2)$-axial-hyperbolic harmonic potential if and only if in $\Lambda$ \begin{gather} x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}. \label{meridional-condition} \end{gather} \end{proposition} \begin{proof} Suppose that $\alpha = \alpha_1+ \alpha_2$ in eqn~\eqref{eq-axial-hyperbolic-3-alpha} and $x_1>0$, $x_2>0$. As may be seen, $\ x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$ if and only if $\ \frac{1}{x_1} \frac{\partial{h}}{\partial{x_1}} = \frac{1}{x_2} \frac{\partial{h}}{\partial{x_2}}$. As follows from eqns~\eqref{eq-axial-hyperbolic-3-alpha} and~\eqref{alpha_1,2-bihyperbolic-3}, \begin{gather} \Delta{h} = \frac{(\alpha_1+ \alpha_2)x_1}{(x_1^2+x_2^2)} \frac{\partial{h}}{\partial{x_1}} + \frac{(\alpha_1+ \alpha_2) x_2}{(x_1^2+x_2^2)} \frac{\partial{h}}{\partial{x_2}} = \frac{\alpha_1}{x_1} \frac{\partial{h}}{\partial{x_1}} + \frac{\alpha_2}{x_2} \frac{\partial{h}}{\partial{x_2}}. \label{Rel-axial-hyperbolic-bihyperbolic-3} \end{gather} Relations~\eqref{Rel-axial-hyperbolic-bihyperbolic-3} imply that \begin{gather} \frac{(\alpha_1+ \alpha_2)x_1^2 - \alpha_1(x_1^2+x_2^2)}{(x_1^2+x_2^2)} \frac{1}{x_1} \frac{\partial{h}}{\partial{x_1}} = \frac{\alpha_2(x_1^2+x_2^2) - (\alpha_1+ \alpha_2) x_2^2}{(x_1^2+x_2^2)} \frac{1}{x_2} \frac{\partial{h}}{\partial{x_2}}. \label{alpha-axial-hyperbolic-bihyperbolic-3} \end{gather} The numerators on both sides of eqn~\eqref{alpha-axial-hyperbolic-bihyperbolic-3} coincide: $(\alpha_1+ \alpha_2)x_1^2 - \alpha_1(x_1^2+x_2^2) = \alpha_2(x_1^2+x_2^2) - (\alpha_1+ \alpha_2) x_2^2 = \alpha_2 x_1^2 - \alpha_1 x_2^2$. Hence eqn~\eqref{alpha-axial-hyperbolic-bihyperbolic-3} is satisfied if and only if the axially symmetric condition~\eqref{meridional-condition} is satisfied.
\end{proof} Now let us compare the similarities and differences between eqns~\eqref{eq-axial-hyperbolic-3-alpha} and~\eqref{alpha_1,2-bihyperbolic-3} in cylindrical coordinates. This immediately leads to the following formulation. \begin{proposition} [The second criterion] Any $(\alpha_1, \alpha_2)$-bihyperbolic harmonic potential $h= h(x_0, x_1, x_2)$ in $\Lambda \subset \mathbb R^3$ $(x_1>0, x_2>0)$ represents an $(\alpha_1+ \alpha_2)$-axial-hyperbolic harmonic potential if and only if in $\Lambda$ in cylindrical coordinates \begin{gather} \frac{\partial{h}}{\partial{\theta}} = 0. \label{meridional-condition-cyl} \end{gather} \end{proposition} \begin{proof} When $\alpha = \alpha_1+ \alpha_2$, eqn~\eqref{eq-axial-hyperbolic-3-alpha} in cylindrical coordinates is written as \begin{gather} \rho^2 \left( \frac{\partial{^2}{h}}{\partial{x_0}^2} + \frac{\partial {^2}{h}}{\partial{\rho}^2} \right) - (\alpha_1+ \alpha_2 -1) \rho \frac{\partial{h}}{\partial{\rho}} + \frac{\partial {^2}{h}}{\partial{\theta}^2} = 0. \label{eq-axial-hyperbolic-3-alpha-cyl} \end{gather} Eqn~\eqref{alpha_1,2-bihyperbolic-3} in cylindrical coordinates is written as \begin{gather} \rho^2 \left( \frac{\partial{^2}{h}}{\partial{x_0}^2} + \frac{\partial {^2}{h}}{\partial{\rho}^2} \right) - (\alpha_1 + \alpha_2 -1) \rho \frac{\partial{h}}{\partial{\rho}} + \frac{\partial {^2}{h}}{\partial{\theta}^2} + (\alpha_1 \tan{\theta} - \alpha_2 \cot{\theta}) \frac{\partial{h}}{\partial{\theta}} =0. \label{alpha_1,2-bihyperbolic-3-cyl} \end{gather} The difference of eqns~\eqref{alpha_1,2-bihyperbolic-3-cyl} and~\eqref{eq-axial-hyperbolic-3-alpha-cyl} is reduced to the relation $(\alpha_1 \tan{\theta} - \alpha_2 \cot{\theta}) \frac{\partial{h}}{\partial{\theta}} = 0$, where the factor $(\alpha_1 \tan{\theta} - \alpha_2 \cot{\theta})$ does not vanish identically. This implies that the condition~\eqref{meridional-condition-cyl} is necessary and sufficient.
\end{proof} As follows from the second criterion, a new joint class of exact solutions of eqns~\eqref{eq-axial-hyperbolic-3-alpha-cyl} and~\eqref{alpha_1,2-bihyperbolic-3-cyl}, satisfying the condition~\eqref{meridional-condition-cyl}, may be equivalently represented as a general class of exact solutions of the elliptic Euler-Poisson-Darboux equation in cylindrical coordinates \cite{Br:Hefei2020}: \begin{gather} \rho \left( \frac{\partial{^2}{g}}{\partial{x_0}^2} + \frac{\partial {^2}{g}}{\partial{\rho}^2} \right) - (\alpha -1) \frac{\partial{g}}{\partial{\rho}} = 0, \label{EPD equation} \end{gather} where, according to \cite{Br:Hefei2020}, $h(x_0, x_1, x_2) := g(x_0, \rho)$, and $\alpha = \alpha_1 + \alpha_2$. \begin{remark} The corresponding analytic models in mathematical physics and continuum mechanics lead to potential meridional fields in cylindrically layered media, where $\phi( \rho) = \rho^{-\alpha}$. \end{remark} The class of exact solutions of eqn~\eqref{EPD equation} in the context of \emph{GASPT} (see, e.g., \cite{Weinstein:1948-flows,Weinstein:1953,Br:Hefei2020}) is referred to as the class of generalized axially symmetric potentials. A special class of generalized axially symmetric potentials is provided by means of separation of variables of the form $g(x_0, \rho) = \Xi(x_0) \Upsilon(\rho)$ \cite{Br:Hefei2020}, where \begin{gather} \begin{cases} \Xi_{\breve{\beta}}(x_0) = b^1_{\breve{\beta}} \cosh(\breve{\beta} x_0) + b^2_{\breve{\beta}} \sinh(\breve{\beta}x_0); \quad \breve{\beta}, b^1_{\breve{\beta}}, b^2_{\breve{\beta}}= const \in \mathbb R, \\[1ex] \Upsilon_{\breve{\beta}}(\rho) = {\rho}^\frac{\alpha}{2} \left[ a^1_{\breve{\beta}} J_{\frac{\alpha}{2}}( \breve{\beta} \rho) + a^2_{\breve{\beta}} Y_{\frac{\alpha}{2}}( \breve{\beta} \rho) \right]; \quad a^1_{\breve{\beta}}, a^2_{\breve{\beta}}= const \in \mathbb R.
\end{cases} \label{EPD special} \end{gather} Every generalized axially symmetric potential $g = g(x_0, \rho)$ indicates the existence of the Stokes stream function $\hat{g} = \hat{g}(x_0, \rho)$, which is defined by the generalized Stokes-Beltrami system in the meridian half-plane $(\rho > 0)$ \begin{gather*} \begin{cases} {\rho}^{-(\alpha -1)} \frac{\partial{g}}{\partial{x_0}} = \frac{\partial{\hat{g}}}{\partial{\rho}}, \\[1ex] {\rho}^{-(\alpha -1)} \frac{\partial{g}}{\partial{\rho}}=-\frac{\partial{\hat{g}}}{\partial{x_0}}. \end{cases} \end{gather*} The Stokes stream function $\hat{g} = \hat{g}(x_0, \rho)$, in contrast to generalized axially symmetric potential, satisfies the following equation: \begin{gather} \rho \left( \frac{\partial{^2}{\hat{g}}}{\partial{x_0}^2} + \frac{\partial {^2}{\hat{g}}}{\partial{\rho}^2} \right) + (\alpha -1) \frac{\partial{\hat{g}}}{\partial{\rho}} = 0. \label{Stokes stream} \end{gather} When $\alpha=0$, generalized axially symmetric potential $g = g(x_0, \rho)$ and the Stokes stream function $\hat{g} = \hat{g}(x_0, \rho)$ satisfy equations \begin{gather} \rho \left( \frac{\partial{^2}{g}}{\partial{x_0}^2} + \frac{\partial {^2}{g}}{\partial{\rho}^2} \right) + \frac{\partial{g}}{\partial{\rho}} = 0, \label{EPD equation-0} \end{gather} \begin{gather} \rho \left( \frac{\partial{^2}{\hat{g}}}{\partial{x_0}^2} + \frac{\partial {^2}{\hat{g}}}{\partial{\rho}^2} \right) - \frac{\partial{\hat{g}}}{\partial{\rho}} = 0. \label{Stokes stream-0} \end{gather} The specifics of boundary value problems for eqns~\eqref{EPD equation-0} and~\eqref{Stokes stream-0} in simply connected domains of the meridian half-plane $(\rho >0)$ has been studied, in particular, by Plaksa, Shpakivskyi and Gryshchuk in the context of the theory of \emph{Monogenic functions in spaces with commutative multiplication and applications in fluid mechanics} (see, e.g., \cite{Plaksa:2001,Plaksa:2003,PlakShpak:2023}). 
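The fact that the Stokes stream function satisfies eqn~\eqref{Stokes stream} follows from cross-differentiation of the generalized Stokes-Beltrami system. A minimal symbolic sketch of this step, with $\hat{g}$ kept as an unspecified smooth function:

```python
# Sketch: eliminate g from the generalized Stokes-Beltrami system
#   g_x0 = rho^(alpha-1) * ghat_rho,   g_rho = -rho^(alpha-1) * ghat_x0
# by equating mixed partials; the compatibility condition is exactly
#   rho*(ghat_x0x0 + ghat_rhorho) + (alpha - 1)*ghat_rho = 0.
import sympy as sp

x0, rho, a = sp.symbols('x0 rho alpha', positive=True)
gh = sp.Function('ghat')(x0, rho)

g_x0 = rho**(a - 1)*sp.diff(gh, rho)    # g_x0 from the system
g_rho = -rho**(a - 1)*sp.diff(gh, x0)   # g_rho from the system

# equality of mixed partials: d(g_x0)/drho - d(g_rho)/dx0 = 0
compat = sp.diff(g_x0, rho) - sp.diff(g_rho, x0)

stokes = rho*(sp.diff(gh, x0, 2) + sp.diff(gh, rho, 2)) + (a - 1)*sp.diff(gh, rho)
print(sp.simplify(compat - rho**(a - 2)*stokes))  # 0
```

The compatibility condition coincides with eqn~\eqref{Stokes stream} up to the nonvanishing factor $\rho^{\alpha-2}$.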
\section {Gradient Systems in $\mathbb R^3$ and $\alpha$-Meridional Mappings of the Second Kind in Continuum Mechanics } \label{sec4} Let us turn our attention to some important properties of a smooth gradient system~\eqref{grad-system-mu} with scalar potential $h$ depending on a parameter $\mu$ in the following expanded form: \begin{gather} \begin{cases} \frac {dx_0}{dt} = V_0(x_0,x_1,x_2; \mu) = \frac{\partial{h(x_0,x_1,x_2; \mu)}}{\partial{x_0}}, \\[1ex] \frac {dx_1}{dt} = V_1(x_0,x_1,x_2; \mu) = \frac{\partial{h(x_0,x_1,x_2; \mu)}}{\partial{x_1}}, \\[1ex] \frac {dx_2}{dt} = V_2(x_0,x_1,x_2; \mu) = \frac{\partial{h(x_0,x_1,x_2; \mu)}}{\partial{x_2}}. \end{cases} \label{traject} \end{gather} This system in continuum mechanics may be interpreted as the system of the pathline equations, where the scalar potential $h$ is identified with the velocity potential (see, e.g., \cite{Ilyushin:1990,Sedov:1994,LaiRubKr:2010,Batch:2000,WhiteXue:2021,AnderCadou:2024}). The original analytic properties of potential velocity fields $\vec V$ depending on a variable parameter $\mu$ in inhomogeneous isotropic media with the mass density $\phi = \phi(x_0,x_1,x_2)$ may be established in the context of \emph{Stability theory} and \emph{Bifurcation theory}. The sets of zeros of $\vec V$ in simply connected open domains $\Lambda \subset \mathbb R^3$ coincide with the critical sets of the velocity potential $h$ in $\Lambda$. The system of the streamline equations in continuum mechanics is described as (see, e.g., \cite{Ilyushin:1990,Sedov:1994,Acheson,Batch:2000,WhiteXue:2021,AnderCadou:2024}) \begin{gather} \frac{\frac{dx_0}{ds}}{V_0} = \frac{\frac{dx_1}{ds}}{V_1} = \frac{\frac{dx_2}{ds}}{V_2}, \label{streamline-Acheson} \end{gather} where $s$ characterizes an independent parameter, $s \in \mathbb R$. In general, the systems of equations~\eqref{traject} and~\eqref{streamline-Acheson} are different. 
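For intuition, the pathline system~\eqref{traject} can be integrated numerically for a concrete potential. The harmonic potential $h = x_0^2 - (x_1^2+x_2^2)/2$ below is an illustrative choice (it satisfies the axially symmetric condition $x_2 \partial h/\partial x_1 = x_1 \partial h/\partial x_2$), not an example from the cited literature:

```python
# Minimal sketch: explicit Euler integration of the pathline (gradient) system
# dx/dt = grad h for the illustrative harmonic potential
#   h = x0^2 - (x1^2 + x2^2)/2,
# whose only equilibrium is the origin.
def grad_h(x):
    x0, x1, x2 = x
    return (2*x0, -x1, -x2)

def pathline(start, dt=0.01, steps=1000):
    x = tuple(start)
    for _ in range(steps):
        v = grad_h(x)
        x = tuple(xi + dt*vi for xi, vi in zip(x, v))
    return x

x = pathline((0.1, 1.0, 1.0))
# the trajectory escapes along the x0-axis: x0 grows while x1, x2 decay
print(x[0] > 1.0, abs(x[1]) < 1e-3, abs(x[2]) < 1e-3)  # True True True
```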
Nevertheless, the systems~\eqref{traject} and~\eqref{streamline-Acheson} may be identical in the case of a steady flow, where $V_l \neq 0$ $(l = 0,1,2)$ in $\Lambda$. According to (\cite{WhiteXue:2021}, p.42), the system~\eqref{streamline-Acheson} may be viewed as an integrable system in $\Lambda$, if the velocity field $\vec V$ is given in $\Lambda$. When the component $V_0 \neq 0$ in $\Lambda$, the system~\eqref{traject} may be represented as (see, e.g., the system of the streamline equations in continuum mechanics \cite{Sedov:1994}, pp.43-44) \begin{gather*} \begin{cases} \frac {dx_1}{dx_0} = \frac {V_1(x_0,x_1,x_2; \mu)}{V_0(x_0,x_1,x_2; \mu)}, \\[1ex] \frac {dx_2}{dx_0} = \frac {V_2(x_0,x_1,x_2; \mu)}{V_0(x_0,x_1,x_2; \mu)}. \end{cases} \end{gather*} When the component $V_1 \neq 0$ in $\Lambda$, the system~\eqref{traject} may be represented as \begin{gather*} \begin{cases} \frac {dx_0}{dx_1} = \frac {V_0(x_0,x_1,x_2; \mu)}{V_1(x_0,x_1,x_2; \mu)}, \\[1ex] \frac {dx_2}{dx_1} = \frac {V_2(x_0,x_1,x_2; \mu)}{V_1(x_0,x_1,x_2; \mu)}, \end{cases} \end{gather*} respectively. \begin{definition} The set of all points $\vec x = (x_0,x_1,x_2)$, where $V_l(x_0,x_1,x_2; \mu) =0$ $(l = 0,1,2)$ in $\Lambda$, is said to be the $x_l$-nullcline of~\eqref{traject} in $\Lambda$. \end{definition} According to (\cite{HirschSmaleDev:2013}, p.187), the nullclines may be regarded as one of the most useful tools for analyzing the behavior of~\eqref{traject} in the context of \emph{Global nonlinear techniques}. In particular, the intersections of the $x_0$-, $x_1$- and $x_2$-nullclines in $\Lambda$ yield the set of equilibria of~\eqref{traject} in $\Lambda$. Let us take a look at the basic properties of analytic models of potential meridional velocity fields $\vec V$ in cylindrically layered media with the mass density $\phi( \rho) = \rho^{-\alpha}$. 
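The role of nullcline intersections can be illustrated symbolically; the potential below is a hypothetical example chosen for simplicity, not drawn from \cite{HirschSmaleDev:2013}:

```python
# Sketch: equilibria of the gradient system as the intersection of the
# x0-, x1- and x2-nullclines, for an illustrative harmonic potential.
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2', real=True)
h = x0**2 - (x1**2 + x2**2)/2            # illustrative potential
V = [sp.diff(h, v) for v in (x0, x1, x2)]  # right-hand sides of the system

# each nullcline V_l = 0 is a coordinate plane here; their intersection
# is the single equilibrium at the origin
equilibria = sp.solve(V, [x0, x1, x2], dict=True)
print(equilibria)  # a single equilibrium at the origin
```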
Eqn~\eqref{EPD equation} leads to a family of Vekua type systems in the meridian half-plane for different values of $\alpha$ \cite{Br:Hefei2020}: \begin{gather} \begin{cases} \rho \left( \frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_{\rho}}}{\partial{\rho}} \right) + (\alpha -1) u_{\rho} = 0, \\[1ex] \frac{\partial{u_0}}{\partial{\rho}}=-\frac{\partial{u_{\rho}}}{\partial{x_0}}, \end{cases} \label{A_3^alpha system-meridional} \end{gather} where $u_0 = \frac{\partial{g}}{\partial{x_0}}, \quad u_{\rho} = - \frac{\partial{g}}{\partial{\rho}}$. The system~\eqref{alpha-axial-hyperbolic-system-3} is reduced to the following two-dimensional system: \begin{gather} \begin{cases} \rho \left( \frac{\partial{V_0}}{\partial{x_0}} + \frac{\partial{V_{\rho}}}{\partial{\rho}} \right) - (\alpha -1) V_{\rho} = 0, \\[1ex] \frac{\partial{V_0}}{\partial{\rho}} = \frac{\partial{V_{\rho}}}{\partial{x_0}}, \end{cases} \label{Bryukhov-vector-meridional} \end{gather} where \begin{gather*} V_0= u_0, \quad V_1 = \frac{x_1}{\rho} V_{\rho} = -u_1, \quad V_2 = \frac{x_2}{\rho} V_{\rho} = -u_2, \quad V_{\rho} = -u_{\rho}. 
\end{gather*} The Jacobian matrix $\mathbf{J}(\vec V)$ of potential meridional fields $\vec V = \left(V_0,\frac{x_1}{\rho} V_{\rho},\frac{x_2}{\rho} V_{\rho} \right)$ in $\mathbb R^3$ is expressed as \begin{gather} \begin{pmatrix} \left[ -\frac{\partial{V_{\rho}}}{\partial{\rho}} +\frac{V_{\rho}}{\rho} (\alpha -1) \right] & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_1^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_2^2}{\rho^2}\right) & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}}- \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}}- \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_2^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_1^2}{\rho^2}\right) \end{pmatrix} \label{VG tensor-merid} \end{gather} The characteristic equation~\eqref{characteristic lambda-3} of~\eqref{VG tensor-merid} is written as \begin{gather} \lambda^3 - \alpha \frac{V_{\rho}}{\rho} \lambda^2 - \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 - (\alpha -1) \frac{V_{\rho}}{\rho} \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} + \frac{V_{\rho}}{\rho} \right) \right] \lambda \notag \\ + \frac{V_{\rho}}{\rho} \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 - (\alpha -1) \frac{V_{\rho}}{ \rho} \frac{\partial{V_{\rho}}}{\partial{\rho}} \right] = 0. 
\label{characteristic lambda-alpha} \end{gather} \begin{theorem}[see \cite{Br:Hefei2020}] Roots of~\eqref{characteristic lambda-alpha} are given by the formulas: \begin{align} \lambda_{0} &= \frac{V_{\rho}}{\rho}; \notag\\ \lambda_{1, 2} &=\frac{(\alpha -1)}{2} \frac{ V_{\rho}}{ \rho} \pm \notag\\ &\hspace*{5ex}\sqrt{ \frac{(\alpha -1)^2}{4} \left( \frac{V_{\rho}}{ \rho} \right)^2 - (\alpha -1) \frac{V_{\rho}}{\rho} \frac{\partial{V_{\rho}}}{\partial{\rho}}+ \left( \frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2}. \label{Roots-alpha} \end{align} \end{theorem} \begin{remark} The second formula~\eqref{Roots-alpha} may be simplified: \begin{align*} \lambda_{1,2} &= \frac{(\alpha -1)}{2} \frac{V_{\rho}}{\rho} \pm \sqrt{ \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left( \frac{\alpha -1}{2} \frac{V_{\rho}}{\rho} - \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2}. \end{align*} It implies that the radicand cannot take negative values. \end{remark} The formulas~\eqref{Roots-alpha} may play key roles in the context of \emph{Stability theory of gradient systems}~\eqref{traject} and the corresponding \emph{Bifurcation theory}. As may be seen from~\eqref{traject} in conjunction with the first criterion of meridional fields and eqn~\eqref{EPD equation}, remarkable properties of potential meridional fields $\vec V = \mathrm{grad} \ h$ in cylindrically layered media with a mass density $\phi = \rho^{-\alpha}$ in $\Lambda$ $(x_1 \neq 0, x_2 \neq 0)$ may be studied by means of gradient systems with $\alpha$-axial-hyperbolic harmonic velocity potential $h$, satisfying the condition $x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$. 
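The root formulas~\eqref{Roots-alpha} and the simplification of the radicand stated in the remark can be checked symbolically. In the sketch below, $v$, $p$, $q$ abbreviate $V_{\rho}/\rho$, $\partial V_{\rho}/\partial x_0$, $\partial V_{\rho}/\partial \rho$:

```python
# Symbolic check of the roots of the characteristic cubic, with
#   v = V_rho/rho, p = dV_rho/dx0, q = dV_rho/drho.
import sympy as sp

lam, a, v, p, q = sp.symbols('lambda alpha v p q')
cubic = (lam**3 - a*v*lam**2
         - (p**2 + q**2 - (a - 1)*v*(q + v))*lam
         + v*(p**2 + q**2 - (a - 1)*v*q))

# lambda_0 = v is a root
print(sp.expand(cubic.subs(lam, v)))  # 0

# dividing out (lambda - v) leaves the quadratic whose roots are lambda_{1,2}
quotient, remainder = sp.div(cubic, lam - v, lam)
print(sp.expand(remainder))  # 0

# the radicand in (Roots-alpha) equals p^2 + ((alpha-1)*v/2 - q)^2, as in the remark
radicand = (a - 1)**2*v**2/4 - (a - 1)*v*q + p**2 + q**2
print(sp.expand(radicand - (p**2 + ((a - 1)*v/2 - q)**2)))  # 0
```

The last identity makes explicit why the radicand cannot take negative values.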
\begin{theorem}[On the structure of the sets of equilibria of gradient systems] Assume that the set of equilibria of a gradient system~\eqref{traject} with $\alpha$-axial-hyperbolic harmonic potential $h$, satisfying the condition $x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$, is not empty in $\Lambda$ $(x_1 \neq 0, x_2 \neq 0)$. Then every equilibrium point $\vec x^{**}$ of the system~\eqref{traject} in $\Lambda$ is degenerate. The index and the degree of instability of $\vec x^{**}$ are both equal to one for any $\alpha$. \end{theorem} \begin{proof} As noted in \cite{Br:Hefei2020}, the set of degenerate points of the Jacobian matrix~\eqref{VG tensor-merid} is provided by two independent equations: \begin{align} {V_{\rho}}=0; \quad \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left(\frac{\partial{V_{\rho}}}{\partial{\rho}}\right)^2 - (\alpha-1)\frac{V_{\rho}}{\rho}\frac{\partial{V_{\rho}}}{\partial{\rho}}=0. \label{degenerate-alpha} \end{align} Every equilibrium point $\vec x^{**}$ is defined by the condition $\vec V (\vec x^{**}) = 0$. As follows from the first equation of~\eqref{degenerate-alpha}, all equilibrium points $\vec x^{**}$ of~\eqref{traject} belong to the set of degenerate points of the Jacobian matrix~\eqref{VG tensor-merid}. The eigenvalues of~\eqref{VG tensor-merid} at $\vec x^{**}$ are given by the formulas \begin{align*} \lambda_{0} &= 0; \notag\\ \lambda_{1,2} &= \pm \sqrt{ \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2}. \end{align*} The eigenvalues $\lambda_1$ and $\lambda_2$ differ only in sign, so that exactly one of them is positive and exactly one is negative, while $\lambda_0 = 0$. Hence the index and the degree of instability of $\vec x^{**}$ are both equal to one for any $\alpha$. \end{proof} \begin{definition} Let $\Lambda \subset \mathbb R^3$ $(x_1 \neq 0, x_2 \neq 0)$ be a simply connected open domain. Assume that an exact solution $(u_0, u_1, u_2)$ of the system~\eqref{A_3^alpha-system}, where $\alpha \neq 0$, satisfies the axially symmetric condition $x_2 u_1 = x_1 u_2$ in $\Lambda$.
Then the mapping $u = u_0 + iu_1 + ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called an $\alpha$-meridional mapping of the first kind, while the mapping $ \overline{u} = u_0 - iu_1 - ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called an $\alpha$-meridional mapping of the second kind. \end{definition} \begin{remark} An arbitrary $\alpha$-meridional mapping of the second kind may be equivalently represented as a mapping $\overline{u} = V_0 + iV_1 + jV_2: \Lambda \rightarrow \mathbb{R}^3$, where $x_2 V_1 = x_1 V_2$. The Jacobian matrix $\mathbf{J}(\overline{u})$ of every $\alpha$-meridional mapping of the second kind $\overline{u} = u_0 - iu_1 - ju_2: \Lambda \rightarrow \mathbb{R}^3$ may be identified with the Jacobian matrix~\eqref{VG tensor-merid} of the corresponding potential meridional field $\vec V$ in cylindrically layered media with the mass density $\phi( \rho) = \rho^{-\alpha}$. \end{remark} Following the concepts given by Chetayev and Gilmore in the context of \emph{Stability theory of gradient systems} \cite{Chetayev:1961,Gilmore:1993}, the eigenvalues of the Jacobian matrix~\eqref{VG tensor-merid} at equilibrium points $\vec x^{**}$ of~\eqref{traject} with $\alpha$-axial-hyperbolic harmonic velocity potential $h$, satisfying the condition $x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$, may be regarded as the stability coefficients of $\vec x^{**}$. Now let us consider the specifics of harmonic meridional mappings of the second kind within analytic models of potential meridional fields $\vec V$ in fluid mechanics in the case of a steady flow. \begin{definition} Let $\Lambda \subset \mathbb R^3$ $(x_1 \neq 0, x_2 \neq 0)$ be a simply connected open domain. Assume that an exact solution $(u_0, u_1, u_2)$ of the system $(R)$ satisfies the axially symmetric condition $x_2 u_1 = x_1 u_2$ in $\Lambda$.
Then the mapping $u = u_0 + iu_1 + ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called a harmonic meridional mapping of the first kind, while the mapping $ \overline{u} = u_0 - iu_1 - ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called a harmonic meridional mapping of the second kind. \end{definition} When $\alpha =0$, eqn~\eqref{EPD equation} in cylindrical coordinates leads to the well-known Vekua type system \begin{gather*} \begin{cases} \rho \left( \frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_{\rho}}}{\partial{\rho}} \right) - u_{\rho} = 0,\\[1ex] \frac{\partial{u_0}}{\partial{\rho}}=-\frac{\partial{u_{\rho}}}{\partial{x_0}}. \end{cases} \end{gather*} The system~\eqref{Bryukhov-vector-meridional} is simplified: \begin{gather*} \begin{cases} \rho \left( \frac{\partial{V_0}}{\partial{x_0}} + \frac{\partial{V_{\rho}}}{\partial{\rho}} \right) + V_{\rho} = 0, \\[1ex] \frac{\partial{V_0}}{\partial{\rho}} = \frac{\partial{V_{\rho}}}{\partial{x_0}}. \end{cases} \end{gather*} The basic properties of the Jacobian matrix~\eqref{VG tensor-merid} of potential meridional fields $\vec V$ in homogeneous media \begin{gather} \begin{pmatrix} \left( -\frac{\partial{V_{\rho}}}{\partial{\rho}} - \frac{V_{\rho}}{\rho} \right) & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_1^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_2^2}{\rho^2}\right) & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}}- \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} - \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_2^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_1^2}{\rho^2}\right) \end{pmatrix} \label{VG tensor-merid-0} \end{gather} were presented
in 2021 \cite{Br:Hefei2020}. According to (\cite{LavSh-Hydro}, pp. 210-211), analytic models of potential fields with axial symmetry are of particular interest in hydrodynamic problems. In the case $\alpha = 0$ the characteristic equation~\eqref{characteristic lambda-alpha} is written as an incomplete cubic equation \begin{gather} \lambda^3 - \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 + \frac{V_{\rho}}{\rho} \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} + \frac{V_{\rho}}{\rho} \right) \right] \lambda \notag \\ + \frac{V_{\rho}}{\rho} \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 + \frac{V_{\rho}}{ \rho} \frac{\partial{V_{\rho}}}{\partial{\rho}} \right] = 0. \label{characteristic-0} \end{gather} Roots of~\eqref{characteristic-0} are given by the formulas \begin{gather*} \lambda_{0} = \frac{V_{\rho}}{\rho}; \quad \lambda_{1, 2} = - \frac{V_{\rho}}{2 \rho} \pm \sqrt{\left( \frac{V_{\rho}}{2 \rho} \right)^2 + \frac{V_{\rho}}{\rho} \frac{\partial{V_{\rho}}}{\partial{\rho}} + \left( \frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2}. \end{gather*} The set of degenerate points of the Jacobian matrix~\eqref{VG tensor-merid-0} in $\Lambda$ $(x_1 \neq 0, x_2 \neq 0)$ is provided by two independent equations: $$ {V_{\rho}}=0; \quad \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 +\left(\frac{\partial{V_{\rho}}}{\partial{\rho}}\right)^2 + \frac{V_{\rho}}{\rho}\frac{\partial{V_{\rho}}}{\partial{\rho}}=0. $$ \begin{remark} The sets of zeros of potential meridional fields $\vec V$ in homogeneous media in $\Lambda$ $(x_1 \neq 0, x_2 \neq 0)$ coincide with the sets of equilibria of gradient systems~\eqref{traject} with harmonic velocity potential $h$, satisfying the condition $x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$.
\end{remark} Potential meridional velocity fields $\vec V$ in fluid mechanics in the case of a steady flow are often referred to as irrotational axisymmetric flow fields (see, e.g., \cite{Batch:2000,WhiteXue:2021,AnderCadou:2024}). As noted by White and Xue (\cite{WhiteXue:2021}, p. 17), ``Foremost among the properties of a flow is the velocity field $V$. In fact, determining the velocity is often tantamount to solving a flow problem, since other properties follow directly from the velocity field.'' Zeros of $\vec V$ are known as stagnation points (see, e.g., \cite{Acheson,WhiteXue:2021,AnderCadou:2024}). The sets of equilibrium points $\vec x^{**}$ of gradient systems~\eqref{traject} with harmonic velocity potential $h$ in fluid mechanics may be interpreted as the sets of stagnation points. According to Theorem (On the structure of the sets of equilibria of gradient systems), the degree of instability of $\vec x^{**}$ is equal to one. \begin{definition} Let $\Lambda \subset \mathbb R^3$ $(x_1 \neq 0, x_2 \neq 0)$ be a simply connected open domain. Suppose that $\alpha \neq 0$. Then the $\alpha$-meridional mapping of the first kind $u= u_0 + iu_1 + ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called an $\alpha$-meridional function of the first kind $u= u_0(x_0, x_1, x_2) + iu_1(x_0, x_1, x_2) + ju_2(x_0, x_1, x_2)$ in $\Lambda$, while the $\alpha$-meridional mapping of the second kind $ \overline{u} = u_0 - iu_1 - ju_2: \Lambda \rightarrow \mathbb{R}^3$ is called an $\alpha$-meridional function of the second kind $\overline{u}= u_0(x_0, x_1, x_2) - iu_1(x_0, x_1, x_2) - ju_2(x_0, x_1, x_2)$ in $\Lambda$. \end{definition} The sets of zeros of $\alpha$-meridional functions of the second kind $\overline{u} = u_0 - iu_1 - ju_2$ in $\Lambda$ $(x_1 \neq 0, x_2 \neq 0)$ coincide with the sets of zeros of potential meridional fields $\vec V = (V_0, V_1, V_2)$ in cylindrically layered media with the mass density $\phi = \rho^{-\alpha}$ in $\Lambda$.
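The stability coefficients at a stagnation point can be checked directly: at an equilibrium ($V_{\rho} = 0$) the Jacobian matrix~\eqref{VG tensor-merid} depends only on $p = \partial V_{\rho}/\partial x_0$, $q = \partial V_{\rho}/\partial \rho$ and the direction cosines $x_1/\rho$, $x_2/\rho$. In the sketch below, the unit direction $(3/5, 4/5)$ is an arbitrary sample:

```python
# Symbolic check: at an equilibrium point (V_rho = 0) the Jacobian matrix of a
# potential meridional field has characteristic polynomial
#   lambda^3 - (p^2 + q^2)*lambda,
# so its eigenvalues are 0 and +/- sqrt(p^2 + q^2).
import sympy as sp

lam, p, q = sp.symbols('lambda p q')
c1, c2 = sp.Rational(3, 5), sp.Rational(4, 5)  # sample direction cosines x1/rho, x2/rho

J = sp.Matrix([
    [-q,    p*c1,     p*c2],
    [p*c1,  q*c1**2,  q*c1*c2],
    [p*c2,  q*c1*c2,  q*c2**2],
])

charpoly = J.charpoly(lam).as_expr()
print(sp.expand(charpoly - (lam**3 - (p**2 + q**2)*lam)))  # 0
```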
The topological properties of the sets of zeros of $\alpha$-meridional functions are of particular interest in the context of \emph{Bifurcation theory}. As noted by Chow and Hale (\cite{ChowHale:1982}, Preface), ``Static bifurcation theory is concerned with the changes that occur in the structure of the set of zeros of a function as parameters in the function are varied.'' \section {$1$-Meridional Mappings of the Second Kind and the Radially Holomorphic Potential in $\mathbb R^3$} \label{sec5} Consider the specifics of $1$-meridional mappings of the second kind within analytic models of potential meridional fields $\vec V$ in $\mathbb R^3$. When $\alpha =1$, the system~\eqref{A_3^alpha system-meridional} leads to the well-known Cauchy-Riemann type system in the meridian half-plane (see, e.g., \cite{Br:Hefei2020}) \begin{gather} \begin{cases} \frac{\partial{u_0}}{\partial{x_0}} - \frac{\partial{u_{\rho}}}{\partial{\rho}}= 0,\\[1ex] \frac{\partial{u_0}}{\partial{\rho}}=-\frac{\partial{u_{\rho}}}{\partial{x_0}}. \end{cases} \label{CR-merid} \end{gather} The system~\eqref{Bryukhov-vector-meridional} is simplified: \begin{gather} \begin{cases} \frac{\partial{V_0}}{\partial{x_0}} + \frac{\partial{V_{\rho}}}{\partial{\rho}} = 0,\\[1ex] \frac{\partial{V_0}}{\partial{\rho}} = \frac{\partial{V_{\rho}}}{\partial{x_0}}.
\end{cases} \label{Bryukhov-vector-1} \end{gather} The Jacobian matrix~\eqref{VG tensor-merid} is expressed as \begin{gather} \begin{pmatrix} -\frac{\partial{V_{\rho}}}{\partial{\rho}} & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_1}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_1^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_2^2}{\rho^2}\right) & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}}- \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} \\[1ex] \frac{\partial{V_{\rho}}}{\partial{x_0}} \frac{x_2}{\rho} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}}- \frac{V_{\rho}}{\rho}\right) \frac{x_1 x_2}{\rho^2} & \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \frac{x_2^2}{\rho^2} + \frac{V_{\rho}}{\rho} \frac{x_1^2}{\rho^2}\right) \end{pmatrix} \label{VG tensor-merid-1} \end{gather} The characteristic equation~\eqref{characteristic lambda-alpha} of~\eqref{VG tensor-merid-1} is written as \begin{gather} \lambda^3 - \frac{V_{\rho}}{\rho} \lambda^2 - \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 \right] \lambda + \frac{V_{\rho}}{\rho} \left[ \left( \frac{\partial{V_\rho}}{\partial{x_0}} \right)^2 + \left( \frac{\partial{V_{\rho}}}{\partial{\rho}} \right)^2 \right] = 0. \label{characteristic-1} \end{gather} Roots of~\eqref{characteristic-1} are given by the formulas \begin{gather*} \lambda_{0} = \frac{V_{\rho}}{\rho}; \quad \lambda_{1, 2} = \pm \sqrt{ \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left(\frac{\partial{V_{\rho}}}{\partial{\rho}}\right)^2}. 
\end{gather*} The set of degenerate points of~\eqref{VG tensor-merid-1} is provided by two independent equations \cite{Br:Hefei2020}: \begin{gather} {V_{\rho}}=0; \quad \left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left(\frac{\partial{V_{\rho}}}{\partial{\rho}}\right)^2 =0. \label{degenerate-1} \end{gather} Let us turn our attention to the concept of \emph{Radially holomorphic functions} introduced by G\"{u}rlebeck, Habetha and Spr\"{o}ssig in 2008 \cite{GuHaSp:2008}. \begin{definition} The radial differential operator in $\mathbb R^3$ is defined by formula \begin{gather*} \partial_{rad}{G} := \frac{1}{2} \left( \frac{\partial{}}{\partial{x_0}}- I \frac{\partial{}}{\partial{\rho}} \right) G := G' \quad (G = g + I \hat{g}). \end{gather*} Every reduced quaternion-valued function $G = g + I \hat{g}$ satisfying a Cauchy-Riemann type differential equation in simply connected open domains $\Lambda \subset \mathbb R^3$ $(x_1 \neq 0, x_2 \neq 0)$ \begin{gather} \overline{\partial}_{rad}{G} := \frac{1}{2} \left( \frac{\partial{}}{\partial{x_0}} + I \frac{\partial{}}{\partial{\rho}} \right) G = 0 \label{CRDO-con} \end{gather} is called radially holomorphic in $\Lambda$. The reduced quaternion-conjugate function $\overline{G} = g - I \hat{g}$ is called radially anti-holomorphic in $\Lambda$. \end{definition} As noted in \cite{GuHaSp:2008,Br:Hefei2020}, the basic elementary radially holomorphic functions in $\mathbb R^3$ may be viewed as elementary functions of the reduced quaternionic variable $ G = G(x) = g(x_0, \rho) + I \hat{g}_{\rho}(x_0, \rho)$ satisfying relations \begin{align*} [x^{n} &:= r^{n}(\cos{n\varphi} + I \sin{n\varphi})]' = n x^{n-1};\\ [e^{x} &:= e^{x_0}(\cos{\rho}+I \sin{\rho})]'= e^{x};\\ [\cos{x} &:= \frac{1}{2}( e^{-Ix} + e^{Ix})]'= - \sin{x};\\ [\sin{x} &:= \frac{I}{2}( e^{-Ix} - e^{Ix})]'= \cos{x};\\ [\ln{x} &:= \ln r +I \varphi]' = x^{-1}. 
\end{align*} \begin{definition} Let $G = g + I \hat{g}$ be a radially holomorphic function in $\Lambda$ satisfying the following differential equation: \begin{gather} G' = F, \label{primitive} \end{gather} where $F = u_0 + I u_{\rho}$ is a radially holomorphic function with components $u_0$, $u_{\rho}$ satisfying the system~\eqref{CR-merid} in $\Lambda$. The function $G$ is called a radially holomorphic primitive of $F$ in $\Lambda$. \end{definition} \begin{definition} A radially holomorphic primitive $G = g + I \hat{g}$ in $\Lambda$, considered within the framework of the system~\eqref{Bryukhov-vector-1}, is said to be the radially holomorphic potential in the context of \emph{GASPT} (or, in short, the radially holomorphic potential) in $\Lambda$. \end{definition} New tools of the radially holomorphic potential in $\mathbb R^3$ were developed by the author in 2021 \cite{Br:Hefei2020} as the radially holomorphic extension of the tools of the complex potential in the context of \emph{GASPT}. According to \cite{Br:Hefei2020}, the general class of exact solutions of eqn~\eqref{CRDO-con} is equivalently represented as the general class of exact solutions of the Cauchy-Riemann type system in the meridian half-plane~\eqref{CR-merid} concerning the functions $g = g(x_0, \rho)$, $\hat{g} = \hat{g}(x_0, \rho)$: \begin{gather} \begin{cases} \frac{\partial{g}}{\partial{x_0}}-\frac{\partial{\hat{g}}}{\partial{\rho}}=0,\\[1ex] \frac{\partial{g}}{\partial{\rho}}=-\frac{\partial{\hat{g}}}{\partial{x_0}}. \end{cases} \label{invar} \end{gather} It should be noted that eqns~\eqref{EPD equation} and~\eqref{Stokes stream} in the case of $\alpha = 1$ are identical: \begin{gather*} \frac{\partial{^2}{g}}{\partial{x_0}^2} + \frac{\partial {^2}{g}}{\partial{\rho}^2} = 0, \quad \frac{\partial{^2}{\hat{g}}}{\partial{x_0}^2} + \frac{\partial {^2}{\hat{g}}}{\partial{\rho}^2} = 0, \end{gather*} in contrast to the case of $\alpha = 0$. \begin{proposition} Let $G = g + I \hat{g}$ be a radially holomorphic function in $\Lambda$.
Then any reduced quaternion-valued expression of the form $\Theta = \vartheta + I \hat{\vartheta} = I G = -\hat{g} + I g $ also yields a radially holomorphic function in $\Lambda$. \end{proposition} \begin{proof} The system~\eqref{invar} may be equivalently represented as \begin{gather*} \begin{cases} \frac{\partial{(-\hat{g})}}{\partial{\rho}} = - \frac{\partial{g}}{\partial{x_0}},\\[1ex] \frac{\partial{(-\hat{g})}}{\partial{x_0}} - \frac{\partial{g}}{\partial{\rho}} =0. \end{cases} \end{gather*} Interchanging the two equations leads to the Cauchy-Riemann type system \begin{gather*} \begin{cases} \frac{\partial{(-\hat{g})}}{\partial{x_0}} - \frac{\partial{g}}{\partial{\rho}} =0, \\[1ex] \frac{\partial{(-\hat{g})}}{\partial{\rho}} = - \frac{\partial{g}}{\partial{x_0}}, \end{cases} \end{gather*} where $\vartheta = -\hat{g}$, $\hat{\vartheta} = g$. \end{proof} As may be seen, the generalized axially symmetric potential $g = g(x_0, \rho)$ and the Stokes stream function $\hat{g} = \hat{g}(x_0, \rho)$ within analytic models of potential meridional fields $\vec V$ in $\mathbb R^3=\{(x_0, x_1,x_2)\}$ in the context of \emph{GASPT} may reverse roles, similar to the velocity potential and the stream function in complex analysis (see, e.g., \cite{KochinKibelRoze:1964,LavSh:1987,Batch:2000,Acheson,WhiteXue:2021}). If $g = g(x_0, \rho)$ and $\hat{g} = \hat{g}(x_0, \rho)$ both exist in $\Lambda$, then the streamlines and equipotential lines are everywhere mutually perpendicular except at zeros of $\vec V$ in $\Lambda$. Now we are ready to introduce an appropriate definition. \begin{definition} Let $G = g + I \hat{g}$ be a radially holomorphic function in $\Lambda$. The radially holomorphic function $\Theta = \vartheta + I \hat{\vartheta}= I G$ is said to be the radially holomorphic function reversed with respect to $G$ in $\Lambda$.
\end{definition} \begin{corollary} A linear superposition of a radially holomorphic function $G$ and the radially holomorphic function reversed with respect to $G$ yields a radially holomorphic potential in the context of \emph{GASPT}. \end{corollary} \begin{proof} The system of partial differential equations~\eqref{invar} is linear. Therefore, any sum of several solutions of~\eqref{invar} is also a solution. \end{proof} The following definition in the context of the concept of \emph{Radially holomorphic functions} \cite{GuHaSp:2008} and its applications in the context of \emph{Analytic models of potential meridional fields} may be quite natural. \begin{definition} A radially holomorphic function $F = F(x) = u_0(x_0, \rho) + I u_{\rho}(x_0, \rho)$ satisfying the equation~\eqref{primitive} in $\Lambda$ is called the first derivative (or, in short, derivative) of $G = G(x) = g(x_0, \rho) + I \hat{g}_{\rho}(x_0, \rho)$ in $\Lambda$. \end{definition} The radially anti-holomorphic function $\overline{F} = \overline{F}(x) = u_0(x_0, \rho) - I u_{\rho}(x_0, \rho)$ may be equivalently represented as $\overline{F}(x) = V_0(x_0, \rho) + I V_{\rho}(x_0, \rho)$, where the components $V_0$ and $V_{\rho}$ satisfy the system~\eqref{Bryukhov-vector-1}. Analytic models of potential meridional fields $\vec V= (V_0,\frac{x_1}{\rho} V_{\rho},\frac{x_2}{\rho}V_{\rho})$ in cylindrically layered media with the mass density $\phi( \rho) = \rho^{-1}$ may be studied in detail in this setting. Let us denote the derivative of the radially holomorphic function $\Theta = \Theta(x) = \vartheta(x_0, \rho) + I \hat{\vartheta}(x_0, \rho)$ by $\Psi = \Psi(x) = \Psi_0(x_0, \rho) + I \Psi_{\rho}(x_0, \rho)$. The components $\Psi_0$ and $\Psi_{\rho}$ of the radially holomorphic function $\Theta = \Theta(x)$ satisfy the system~\eqref{CR-merid}. 
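The Cauchy-Riemann type system~\eqref{invar} and the role reversal $(g, \hat{g}) \mapsto (-\hat{g}, g)$ established above are easy to test numerically. The sketch below (an illustration added here, not part of the original text) takes $g$, $\hat{g}$ from the radially holomorphic square $x^2$, namely $g = x_0^2 - \rho^2$ and $\hat{g} = 2 x_0 \rho$, and checks both the original and the reversed pair by central differences:

```python
# Numerical sanity check of the Cauchy-Riemann type system (invar):
#   dg/dx0 = d(gh)/drho,   dg/drho = -d(gh)/dx0,
# for the components of x^2 and for the "reversed" pair (-gh, g).
def g(x0, rho):   # scalar part of x^2
    return x0**2 - rho**2

def gh(x0, rho):  # I-part of x^2
    return 2.0 * x0 * rho

def d(f, x0, rho, var, h=1e-5):
    """Central difference of f with respect to x0 (var=0) or rho (var=1)."""
    if var == 0:
        return (f(x0 + h, rho) - f(x0 - h, rho)) / (2 * h)
    return (f(x0, rho + h) - f(x0, rho - h)) / (2 * h)

def max_cr_residual():
    pairs = [(g, gh), (lambda a, b: -gh(a, b), g)]  # original and reversed
    pts = [(0.7, 1.3), (-1.1, 0.4), (0.2, 2.5)]
    res = 0.0
    for u, v in pairs:
        for x0, rho in pts:
            res = max(res,
                      abs(d(u, x0, rho, 0) - d(v, x0, rho, 1)),
                      abs(d(u, x0, rho, 1) + d(v, x0, rho, 0)))
    return res

print(max_cr_residual())
```

The residual stays at the level of the difference scheme, confirming that the reversed pair satisfies the same system.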
Any linear superposition of a radially holomorphic function $F$ and the radially holomorphic function reversed with respect to $F$ also yields a radially holomorphic function with components satisfying the system~\eqref{CR-merid}. Consider some surprising properties of generalized axially symmetric potentials expressed in terms of the Bessel functions of the first and second kind of real order $\frac{\alpha}{2}$ and real argument~\eqref{EPD special} in the case of $\alpha = 1$. \begin{example} Assume that $g(x_0, \rho) = \Xi(x_0) \Upsilon(\rho)$, where \begin{gather} \begin{cases} \Xi_{\breve{\beta}}(x_0) = b^1_{\breve{\beta}} \cosh(\breve{\beta} x_0) + b^2_{\breve{\beta}} \sinh(\breve{\beta}x_0); \quad \breve{\beta}, b^1_{\breve{\beta}}, b^2_{\breve{\beta}}= const \in \mathbb R, \\[1ex] \Upsilon_{\breve{\beta}}(\rho) = {\rho}^\frac{1}{2} \left[ a^1_{\breve{\beta}} J_{\frac{1}{2}}( \breve{\beta} \rho) + a^2_{\breve{\beta}} Y_{\frac{1}{2}}( \breve{\beta} \rho) \right]; \quad a^1_{\breve{\beta}}, a^2_{\breve{\beta}} = const \in \mathbb R, \end{cases} \label{EPD 1} \end{gather} while $\breve{\beta}> 0$, $b^1_{\breve{\beta}} = b^2_{\breve{\beta}} =1$, $\ a^1_{\breve{\beta}} = -A^1_{\breve{\beta}} \sqrt{\frac{\pi \breve{\beta}}{2}}$, $\ a^2_{\breve{\beta}} = -A^2_{\breve{\beta}} \sqrt{\frac{\pi \breve{\beta}}{2}}$; $\quad A^1_{\breve{\beta}}, A^2_{\breve{\beta}} = const \in \mathbb R.$ The Bessel functions of the first and second kind of half-integer order may be expressed in finite terms using algebraic and trigonometric functions (see, e.g., \cite{Koren:2002}, p.~10 and pp.~16--17). In particular, $ \ J_{\frac{1}{2}}(\breve{\beta} \rho) = \sqrt{\frac{2}{\pi \breve{\beta} }} \rho^{-\frac{1}{2}} \sin ( \breve{\beta} \rho)$ and $ \ Y_{\frac{1}{2}}(\breve{\beta} \rho) = -\sqrt{\frac{2}{\pi \breve{\beta} }} \rho^{-\frac{1}{2}} \cos ( \breve{\beta} \rho)$.
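The quoted closed forms of $J_{\frac{1}{2}}$ and $Y_{\frac{1}{2}}$ can be confirmed against the defining power series $J_{\nu}(x) = \sum_{k \geq 0} \frac{(-1)^k}{k!\,\Gamma(k+\nu+1)} (x/2)^{2k+\nu}$ together with the standard identity $Y_{\frac{1}{2}} = -J_{-\frac{1}{2}}$. A numerical sketch (added for illustration):

```python
import math

def bessel_J(nu, x, terms=30):
    """Truncated defining power series of the Bessel function J_nu (x > 0)."""
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1))
               * (x / 2.0) ** (2 * k + nu) for k in range(terms))

def closed_form_error(x):
    # J_{1/2}(x) = sqrt(2/(pi x)) sin x,  Y_{1/2}(x) = -sqrt(2/(pi x)) cos x
    j_half = math.sqrt(2.0 / (math.pi * x)) * math.sin(x)
    y_half = -math.sqrt(2.0 / (math.pi * x)) * math.cos(x)
    return max(abs(bessel_J(0.5, x) - j_half),
               abs(-bessel_J(-0.5, x) - y_half))  # Y_{1/2} = -J_{-1/2}

print(max(closed_form_error(x) for x in (0.3, 1.7, 4.2)))
```

The series and the elementary closed forms agree to double precision on the sampled arguments.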
Formulas~\eqref{EPD 1} then simplify to \begin{gather*} \begin{cases} \Xi_{\breve{\beta}}(x_0) = e^{\breve{\beta}x_0}, \\[1ex] \Upsilon_{\breve{\beta}}(\rho) = - A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho), \end{cases} \end{gather*} and the generalized axially symmetric potentials take the form \begin{gather} g(x_0, \rho) = e^{\breve{\beta} x_0} [-A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)]. \label{GASP-1} \end{gather} Assume that any radially holomorphic potential is represented by the superposition of the radially holomorphic exponential function $e^{\breve{\beta} x}$ and the radially holomorphic function reversed with respect to $e^{\breve{\beta} x}$: \begin{align*} G(x) = A^2_{\breve{\beta}} e^{\breve{\beta}x} + A^1_{\breve{\beta}} I e^{\breve{\beta}x} = A^2_{\breve{\beta}} e^{\breve{\beta}x_0}[\cos(\breve{\beta}\rho)+I \sin(\breve{\beta}\rho)] + A^1_{\breve{\beta}} e^{\breve{\beta}x_0}[-\sin(\breve{\beta}\rho)+I \cos(\breve{\beta}\rho)]. \end{align*} The following reduced quaternion-valued expression may be more convenient: \begin{gather} G(x) = g + I \hat{g} = e^{\breve{\beta} x_0} [- A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)] + I e^{\breve{\beta} x_0} [A^1_{\breve{\beta}} \cos ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \sin ( \breve{\beta} \rho)]. \label{exponential} \end{gather} It is now obvious that the generalized axially symmetric potential~\eqref{GASP-1} coincides with the scalar part of the radially holomorphic potential~\eqref{exponential}.
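Since in the case $\alpha = 1$ both governing equations reduce to the two-dimensional Laplace equation, the scalar part~\eqref{GASP-1} must be harmonic in $(x_0, \rho)$. A finite-difference sketch (illustrative values of $\breve{\beta}$, $A^1_{\breve{\beta}}$, $A^2_{\breve{\beta}}$ chosen here, not taken from the text):

```python
import math

BETA, A1, A2 = 1.3, 0.8, -0.5  # illustrative constants

def g(x0, rho):
    """Generalized axially symmetric potential (GASP-1) for alpha = 1."""
    return math.exp(BETA * x0) * (-A1 * math.sin(BETA * rho)
                                  + A2 * math.cos(BETA * rho))

def laplacian(f, x0, rho, h=1e-4):
    """Five-point finite-difference Laplacian in the (x0, rho) half-plane."""
    return (f(x0 + h, rho) + f(x0 - h, rho)
            + f(x0, rho + h) + f(x0, rho - h) - 4 * f(x0, rho)) / h**2

def max_laplace_residual():
    return max(abs(laplacian(g, x0, rho))
               for x0, rho in [(0.0, 0.5), (0.4, 1.2), (-0.6, 2.0)])

print(max_laplace_residual())
```

The residual is at the level of the discretization error, as expected for an exactly harmonic function.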
The derivative of $G = G(x)$ is written as $$ F(x) = \breve{\beta} (A^2_{\breve{\beta}} e^{\breve{\beta}x} + A^1_{\breve{\beta}} I e^{\breve{\beta}x} ).$$ The corresponding radially anti-holomorphic function in the expanded form is described as $$ \overline{F}(x) = \breve{\beta} e^{\breve{\beta} x_0} [- A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)] - I \breve{\beta} e^{\breve{\beta} x_0} [ A^1_{\breve{\beta}} \cos ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \sin ( \breve{\beta} \rho)].$$ We obtain new analytic models of potential meridional fields, where \begin{gather*} V_0 = \frac{\partial{g}}{\partial{x_0}} = \breve{\beta} e^{\breve{\beta} x_0} [- A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)], \end{gather*} \begin{gather*} V_{\rho} = \frac{\partial{g}}{\partial{\rho}} = - \breve{\beta} e^{\breve{\beta} x_0} [ A^1_{\breve{\beta}} \cos ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \sin ( \breve{\beta} \rho)], \end{gather*} \begin{align*} \frac{\partial{V_{\rho}}}{\partial{x_0}} = - \breve{\beta}^2 e^{\breve{\beta} x_0} [A^1_{\breve{\beta}} \cos ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \sin ( \breve{\beta} \rho)], \end{align*} \begin{align*} \frac{\partial{V_{\rho}}}{\partial{\rho}} = \breve{\beta}^2 e^{\breve{\beta} x_0} [ A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) - A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)]. \end{align*} The relations \begin{gather*} \begin{cases} V_0 = \breve{\beta} e^{\breve{\beta} x_0} [- A^1_{\breve{\beta}} \sin ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \cos ( \breve{\beta} \rho)] = 0, \\[1ex] V_{\rho} = - \breve{\beta} e^{\breve{\beta} x_0} [ A^1_{\breve{\beta}} \cos ( \breve{\beta} \rho) + A^2_{\breve{\beta}} \sin ( \breve{\beta} \rho)] = 0 \end{cases} \end{gather*} imply that the set of zeros of $\vec V$ is empty whenever $(A^1_{\breve{\beta}})^2 + (A^2_{\breve{\beta}})^2 \neq 0$: the factor $\breve{\beta} e^{\breve{\beta} x_0}$ never vanishes, and the remaining homogeneous linear system in $\sin(\breve{\beta}\rho)$, $\cos(\breve{\beta}\rho)$ has determinant $-[(A^1_{\breve{\beta}})^2 + (A^2_{\breve{\beta}})^2]$, so it would force $\sin(\breve{\beta}\rho)$ and $\cos(\breve{\beta}\rho)$ to vanish simultaneously.
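The emptiness of the zero set can also be read off from the identity $V_0^2 + V_{\rho}^2 = \breve{\beta}^2 e^{2\breve{\beta} x_0} [(A^1_{\breve{\beta}})^2 + (A^2_{\breve{\beta}})^2]$, which follows from $\sin^2 + \cos^2 = 1$. A quick numerical confirmation (illustrative constants chosen here):

```python
import math

BETA, A1, A2 = 0.9, 1.1, -0.7  # illustrative constants

def field(x0, rho):
    """Components V_0 = dg/dx0, V_rho = dg/drho of the exponential model."""
    e = BETA * math.exp(BETA * x0)
    v0 = e * (-A1 * math.sin(BETA * rho) + A2 * math.cos(BETA * rho))
    vr = -e * (A1 * math.cos(BETA * rho) + A2 * math.sin(BETA * rho))
    return v0, vr

def max_identity_error():
    err = 0.0
    for x0, rho in [(0.0, 0.3), (0.5, 1.7), (-1.2, 2.9)]:
        v0, vr = field(x0, rho)
        rhs = BETA**2 * math.exp(2 * BETA * x0) * (A1**2 + A2**2)
        err = max(err, abs(v0**2 + vr**2 - rhs))
    return err

print(max_identity_error())
```

The squared speed never vanishes, so the field has no stagnation points in the half-plane.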
\end{example} The Joukowski transformation $$w(z) = U \left( z + \frac{a^2}{z} \right), \ U, a \in \mathbb R$$ is well known in fluid mechanics (see, e.g., \cite{KochinKibelRoze:1964,Acheson}) as the complex potential of an elementary flow, which has a singularity at infinity. This elementary flow is irrotational in the complex plane everywhere except at the origin. Let us consider the basic properties of the radially holomorphic potential represented by the radially holomorphic extension of the Joukowski transformation in $\mathbb R^3$. \begin{example} Assume that the radially holomorphic potential is expressed as $$G =G(x) = B(x + \gamma^2 x^{-1}), \quad x_1\neq 0, x_2\neq 0.$$ The derivative of $G = G(x)$ is written as $$ F(x)= B(1 - \gamma^2 x^{-2}); \quad B, \gamma \in \mathbb R.$$ The corresponding radially anti-holomorphic function is described as $$ \overline{F}(x) = B(1 - \gamma^2 \overline{x}^{-2}) = V_0 + I V_{\rho}.$$ Here we obtain analytic models of potential meridional fields $\vec V = \left( V_0, \frac{x_1}{\rho} V_{\rho}, \frac{x_2}{\rho} V_{\rho} \right)$, where \begin{align*} V_0 = B - \frac{B \gamma^2(x_0^2 - \rho^2)}{(x_0^2 + \rho^2)^2}, \quad V_{\rho} = - \frac{2B \gamma^2 x_0 \rho}{(x_0^2 + \rho^2)^2}. \end{align*} \begin{align*} \frac{\partial{V_{\rho}}}{\partial{x_0}} = \frac{2B \gamma^2 \rho(3x_0^2 - \rho^2)}{(x_0^2 + \rho^2)^3}, \quad \frac{\partial{V_{\rho}}}{\partial{\rho}} = - \frac{2B \gamma^2 x_0(x_0^2 - 3\rho^2)}{(x_0^2 + \rho^2)^3}. \end{align*} Since $\left(\frac{\partial{V_{\rho}}}{\partial{x_0}}\right)^2 + \left(\frac{\partial{V_{\rho}}}{\partial{\rho}}\right)^2 = \frac{4B^2 \gamma^4}{(x_0^2 + \rho^2)^3}$ never vanishes for $B, \gamma \neq 0$, the set of degenerate points of~\eqref{VG tensor-merid-1} is provided by the first of equations~\eqref{degenerate-1} alone: \begin{align*} x_0 = 0.
\end{align*} Relations \begin{gather*} \begin{cases} V_0 = B - \frac{B \gamma^2(x_0^2 - \rho^2)}{(x_0^2 + \rho^2)^2} = 0, \\[1ex] V_{\rho} = - \frac{2B \gamma^2 x_0 \rho}{(x_0^2 + \rho^2)^2} = 0 \end{cases} \end{gather*} imply that the set of zeros of $\vec V$ in $\Lambda$ is empty for $B, \gamma \neq 0$: the second equation yields $x_0 = 0$ $(\rho > 0)$, whence $V_0 = B(1 + \gamma^2 \rho^{-2}) \neq 0$. The stagnation points $x_0 = \pm\gamma$, $\rho = 0$ of the plane Joukowski flow lie on the axis of symmetry outside $\Lambda$. \end{example} Potential meridional fields of the form $\vec V= \vec V(\vec x; \mu)$ in cylindrically layered media with the mass density $\phi = \rho^{-1}$ may be represented by the reduced quaternion-valued ordinary differential equation \begin{gather*} \frac {dx}{dt} = V_0 + i V_1 + j V_2 = \overline{F}(x; \mu), \end{gather*} where $x= x_0 + ix_1 + jx_2$, $\overline{F}(x; \mu) = u_0 - i u_1 - j u_2$ and $F(x; \mu) = \frac{\partial{h}}{\partial{x_0}} - i \frac{\partial{h}}{\partial{x_1}} - j\frac{\partial{h}}{\partial{x_2}}$. Geometric and topological properties of the sets of zeros of potential meridional fields $\vec V= \vec V(\vec x; \mu)$ are of particular interest in \emph{Bifurcation theory}. \section{Concluding Remarks} \label{sec6} Let us now turn our attention to the specifics of a smooth gradient system with scalar potential $h$ depending on a parameter $\mu$ in $\mathbb R^4=\{(x_0, x_1,x_2, x_3)\}$: \begin{gather} \begin{cases} \frac {dx_0}{dt} = V_0(x_0,x_1,x_2,x_3; \mu) = \frac{\partial{h(x_0,x_1,x_2,x_3; \mu)}}{\partial{x_0}}, \\[1ex] \frac {dx_1}{dt} = V_1(x_0,x_1,x_2,x_3; \mu) = \frac{\partial{h(x_0,x_1,x_2,x_3; \mu)}}{\partial{x_1}}, \\[1ex] \frac {dx_2}{dt} = V_2(x_0,x_1,x_2,x_3; \mu) = \frac{\partial{h(x_0,x_1,x_2,x_3; \mu)}}{\partial{x_2}}, \\[1ex] \frac {dx_3}{dt} = V_3(x_0,x_1,x_2,x_3; \mu) = \frac{\partial{h(x_0,x_1,x_2,x_3; \mu)}}{\partial{x_3}}. \end{cases} \label{traject-4} \end{gather} The vector $\vec V = (V_0, V_1, V_2, V_3)$ gives rise to a rich variety of analytic models of four-dimensional potential velocity fields.
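A gradient system of the form~\eqref{traject-4} can be explored numerically. The toy sketch below uses a hypothetical quadratic potential $h = -\frac{1}{2}(x_0^2 + x_1^2 + x_2^2 + x_3^2)$ (chosen here purely for illustration) and integrates $\dot{\vec x} = \operatorname{grad} h$ with the explicit Euler method; the trajectory approaches the unique critical point of $h$, i.e. the unique zero of $\vec V$:

```python
# Toy gradient system in R^4: h(x) = -|x|^2 / 2, so grad h = -x and the
# origin is the only equilibrium (the critical set of h). Explicit Euler.
def grad_h(x):
    return [-xi for xi in x]

def euler_flow(x, dt=0.01, steps=2000):
    for _ in range(steps):
        v = grad_h(x)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

x_final = euler_flow([1.0, -2.0, 0.5, 3.0])
print(max(abs(c) for c in x_final))  # decays toward the equilibrium at 0
```

Each coordinate contracts by the factor $(1 - dt)$ per step, so the trajectory converges geometrically to the equilibrium.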
The sets of zeros of $\vec V$ in simply connected open domains $\Lambda \subset \mathbb R^4$ coincide with the critical sets of the velocity potential $h$ in $\Lambda$. Consider the following four-dimensional extension of the system~\eqref{alpha-axial-hyperbolic-system-3}: \begin{gather} \begin{cases} (x_1^2+x_2^2+x_3^2) \ \mathrm{div}\ { \vec V} - \alpha \left(x_1 V_1 + x_2 V_2 + x_3 V_3 \right) =0, \\[1ex] \frac{\partial{V_0}}{\partial{x_1}}= \frac{\partial{V_1}}{\partial{x_0}}, \quad \frac{\partial{V_0}}{\partial{x_2}}= \frac{\partial{V_2}}{\partial{x_0}}, \quad \frac{\partial{V_0}}{\partial{x_3}}= \frac{\partial{V_3}}{\partial{x_0}}, \\[1ex] \frac{\partial{V_1}}{\partial{x_2}}= \frac{\partial{V_2}}{\partial{x_1}}, \quad \frac{\partial{V_1}}{\partial{x_3}}= \frac{\partial{V_3}}{\partial{x_1}}, \quad \frac{\partial{V_2}}{\partial{x_3}}= \frac{\partial{V_3}}{\partial{x_2}}. \end{cases} \label{alpha-axial-isotropic-system-4} \end{gather} The velocity potential $h$ in $\Lambda$, where $\vec V = \mathrm{grad} \ h,$ allows us to reduce every $C^1$-solution of the system~\eqref{alpha-axial-isotropic-system-4} to a $C^2$-solution of the continuity equation \begin{gather} (x_1^2+ x_2^2 + x_3^2)\Delta{h} - \alpha \left( x_1\frac{\partial{h}}{\partial{x_1}} + x_2\frac{\partial{h}}{\partial{x_2}} + x_3\frac{\partial{h}}{\partial{x_3}} \right) =0. 
\label{alpha-axial-hyperbolic-4} \end{gather} The first-order system \begin{gather} \begin{cases} (x_1^2+x_2^2+x_3^2) \left( \frac{\partial{u_0}}{\partial{x_0}}- \frac{\partial{u_1}}{\partial{x_1}}-\frac{\partial{u_2}}{\partial{x_2}}- \frac{\partial{u_3}}{\partial{x_3}} \right) + \alpha (x_1u_1+x_2u_2+x_3u_3)=0, \\[1ex] \frac{\partial{u_0}}{\partial{x_1}}=-\frac{\partial{u_1}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_2}}=-\frac{\partial{u_2}}{\partial{x_0}}, \quad \frac{\partial{u_0}}{\partial{x_3}}=-\frac{\partial{u_3}}{\partial{x_0}}, \\[1ex] \frac{\partial{u_1}}{\partial{x_2}}=\ \ \frac{\partial{u_2}}{\partial{x_1}}, \quad \frac{\partial{u_1}}{\partial{x_3}}=\ \ \frac{\partial{u_3}}{\partial{x_1}}, \quad \frac{\partial{u_2}}{\partial{x_3}}=\ \ \frac{\partial{u_3}}{\partial{x_2}}, \end{cases} \label{eq:A_4^alpha-system} \end{gather} where $(u_0, u_1, u_2, u_3) :=(V_0, -V_1, -V_2, -V_3)$, demonstrates explicitly a family of axially symmetric generalizations of the Cauchy-Riemann system in $\mathbb R^4$ in accordance with the system~\eqref{alpha-axial-isotropic-system-4} for different values of the parameter $\alpha$ (see \cite{Br:Hefei2020}). \begin{definition} Every exact solution of eqn~\eqref{alpha-axial-hyperbolic-4} under the condition $\alpha>0$ in a simply connected open domain $\Lambda \subset \mathbb R^4$ $(\rho > 0)$ is called an $\alpha$-axial-hyperbolic harmonic potential in $\Lambda$. \end{definition} Properties of the sets of zeros of potential meridional fields $\vec V= \vec V(\vec x; \mu)$ represented by the following quaternion-valued ordinary differential equation \begin{gather*} \frac {dx}{dt} = V_0 + i V_1 + j V_2 + k V_3 = \overline{F}(x; \mu), \end{gather*} where $x= x_0 + ix_1 + jx_2 + kx_3$, $\overline{F}(x; \mu) = u_0 - i u_1 - j u_2 - k u_3$ and $F(x; \mu) = \frac{\partial{h}}{\partial{x_0}} - i \frac{\partial{h}}{\partial{x_1}} - j\frac{\partial{h}}{\partial{x_2}} - k\frac{\partial{h}}{\partial{x_3}}$, have not been studied.
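One readily verifiable family of exact solutions of eqn~\eqref{alpha-axial-hyperbolic-4} (supplied here only as an illustrative check of the equation) is $h = (x_1^2 + x_2^2 + x_3^2)^{(\alpha-1)/2}$: for radial $h = \tilde\rho^{\,s}$ with $\tilde\rho^2 = x_1^2 + x_2^2 + x_3^2$, the left-hand side equals $s(s+1-\alpha)\,\tilde\rho^{\,s}$, which vanishes for $s = \alpha - 1$. A finite-difference sketch:

```python
# Finite-difference check that h = (x1^2 + x2^2 + x3^2)^((alpha-1)/2)
# satisfies (x1^2+x2^2+x3^2)*Lap(h) - alpha*(x1 h_x1 + x2 h_x2 + x3 h_x3) = 0,
# where Lap is the full Laplacian in (x0, x1, x2, x3); h is independent of x0.
def residual(alpha, p, h=1e-4):
    s = alpha - 1.0
    def f(x):
        return (x[1]**2 + x[2]**2 + x[3]**2) ** (s / 2.0)
    def d1(i):
        q1, q2 = list(p), list(p)
        q1[i] += h; q2[i] -= h
        return (f(q1) - f(q2)) / (2 * h)
    def d2(i):
        q1, q2 = list(p), list(p)
        q1[i] += h; q2[i] -= h
        return (f(q1) - 2 * f(p) + f(q2)) / h**2
    r2 = p[1]**2 + p[2]**2 + p[3]**2
    lap = sum(d2(i) for i in range(4))
    drift = sum(p[i] * d1(i) for i in (1, 2, 3))
    return r2 * lap - alpha * drift

pt = [0.3, 0.9, -0.7, 1.1]
print(max(abs(residual(a, pt)) for a in (2.0, 3.0, 5.0)))
```

The residual stays at the level of the difference scheme for each sampled value of the parameter $\alpha$.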
The eigenvalues of the Jacobian matrix $\mathbf{J}(\vec V)$ at zeros of $\vec V$ may be regarded as the stability coefficients of equilibrium points $\vec x^{**}$ of gradient systems~\eqref{traject-4} with $\alpha$-axial-hyperbolic harmonic velocity potential $h$, satisfying the conditions $x_2 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_2}}$, $x_3 \frac{\partial{h}}{\partial{x_1}} = x_1 \frac{\partial{h}}{\partial{x_3}}$, $x_3 \frac{\partial{h}}{\partial{x_2}} = x_2 \frac{\partial{h}}{\partial{x_3}}$. \begin{definition} Let $\Lambda \subset \mathbb R^4$ $(x_1 \neq 0, x_2 \neq 0, x_3 \neq 0)$ be a simply connected open domain. Assume that an exact solution $(u_0, u_1, u_2, u_3)$ of the system~\eqref{eq:A_4^alpha-system}, where $\alpha \neq 0$, satisfies the axially symmetric conditions ${u_1}{x_2}={u_2}{x_1}, \quad {u_1}{x_3}={u_3}{x_1}, \quad {u_2}{x_3}={u_3}{x_2}$ in $\Lambda$. Then the mapping $u = u_0 + iu_1 + ju_2 + ku_3: \Lambda \rightarrow \mathbb{R}^4$ is called a four-dimensional $\alpha$-meridional mapping of the first kind, while the mapping $ \overline{u} = u_0 - iu_1 - ju_2 - ku_3: \Lambda \rightarrow \mathbb{R}^4$ is called a four-dimensional $\alpha$-meridional mapping of the second kind. \end{definition} Analytic and geometric properties of four-dimensional $\alpha$-meridional mappings may be studied in more detail in the context of \emph{Non-Euclidean modifications of quaternionic analysis in $\mathbb R^4$} (see, e.g., \cite{HempLeut:1996,LeZe:CMFT2004,ErOrel:2019,Br:Hefei2020}) using the well-known formulas \begin{gather*} \begin{cases} x_0 = \frac{1}{4}(x - ixi - jxj - kxk), \\[1ex] x_1 = \frac{1}{4i}(x - ixi + jxj + kxk), \\[1ex] x_2 = \frac{1}{4j}(x + ixi - jxj + kxk), \\[1ex] x_3 = \frac{1}{4k}(x + ixi + jxj - kxk). \end{cases} \end{gather*} \bmhead{Acknowledgments} The author would like to thank Prof. Aksenov, Prof. Grigor'ev, Prof. K\"{a}hler, Dr. Kisil, Prof. Leutwiler, Prof. Plaksa, Prof. Shamolin, Prof.
Spr\"{o}{\ss}ig, Prof. Urinov for inspiring discussions. \begin{thebibliography}{1} \bibitem {AbrMarsden:1987} Abraham,~R., Marsden,~J.E.: {Foundations of Mechanics}, 2nd edn. Addison-Wesley, Redwood City, CA (1987) \bibitem {Acheson} Acheson,~D. J.: {Elementary Fluid Dynamics.} Oxford Applied Mathematics and Computing Science Series. Clarendon Press, Oxford (1990) \bibitem {AnderCadou:2024} Anderson, Jr.,~J. D., Cadou,~C.P.: {Fundamentals of Aerodynamics}, 7th edn. Series in Aeronautical and Aerospace Engineering. McGraw-Hill, New York (2024) \bibitem {ArnoldGeom} Arnol'd,~V.I.: {Geometrical Methods in the Theory of Ordinary Differential Equations}, 2nd edn. Grundlehren der mathematischen Wissenschaften, vol. 250. Springer New York, NY (1988) \bibitem {ArnAfrIlyashShil:1994} Arnol'd,~V.I. (Ed.): {Dynamical Systems V. Bifurcation Theory and Catastrophe Theory}. Encyclopaedia of Mathematical Sciences, vol. 5. Springer, Berlin, Heidelberg (1994) \bibitem {Batch:2000} Batchelor,~G. K.: {An Introduction to Fluid Dynamics}. Cambridge University Press, Cambridge (2000) \bibitem {BorisTar:1979} Borisenko,~A.I., Tarapov,~I.E.: {Vector and Tensor Analysis with Applications}. Revised English edition translated and edited by R.A. Silverman. Dover Publications, New York (1979) \bibitem {BornWolf:2003} Born,~M., Wolf,~E.: {Principles of Optics}. 7th edn., Cambridge University Press, Cambridge (2003) \bibitem{BraDel:2003} Brackx,~F., Delanghe,~R.: {On harmonic potential fields and the structure of monogenic functions}. Z. Anal. Anwend. \textbf{22}(2), 261--273 (2003) \bibitem {Brekh:1980} Brekhovskikh,~L.M.: {Waves in Layered Media}, 2nd edn. Applied Mathematics and Mechanics, vol. 16. Academic Press, New York (1980) \bibitem{BrKos:2012} Bryukhov,~D.: {On modified quaternionic analysis, irrotational velocity fields and new applications of the Riesz system in $\mathbb R^3$}. In: Simos~T.E. et~al. (eds.) ICNAAM 2012, Kos, Greece. AIP Conf. Proc., vol. 1479, New York, pp.
275--278 (2012) \bibitem{BrRhod:2013} Bryukhov,~D.: {On modified quaternionic analysis, irrotational velocity fields and new gradient dynamical systems in $\mathbb R^3$}. In: Simos~T.E. et~al. (eds.) ICNAAM 2013, Rhodes, Greece. AIP Conf. Proc., vol. 1558, New York, pp. 485--488 (2013) \bibitem{BrRhod:2014} Bryukhov,~D.: {On modified quaternionic analysis, gradient dynamical systems and Kolmogorov equations in $\mathbb R^3$}. In: Simos~T.E. et~al. (eds.) ICNAAM 2014, Rhodes, Greece. AIP Conf. Proc., vol. 1648, New York, 440002-1--440002-4 (2015) \bibitem {Br:Hefei2020} Bryukhov,~D.: {Electrostatic fields in some special inhomogeneous media and new generalizations of the Cauchy-Riemann system}. Adv. Appl. Clifford Algebras \textbf{31}, 61 (2021) \bibitem{Carslaw} Carslaw,~H.S., Jaeger,~J.C.: {Conduction of Heat in Solids}. 2nd edn., Clarendon Press, Oxford (1959) \bibitem {Chetayev:1961} Chetayev,~N.G.: {The Stability of Motion}. Pergamon Press, London (1961) \bibitem {ChowHale:1982} Chow,~S.-N., Hale,~J.K.: {Methods of Bifurcation Theory}. Grundlehren der mathematischen Wissenschaften, vol. 251. Springer New York, NY (1982) \bibitem {Del:2007} Delanghe,~R.: {On homogeneous polynomial solutions of the Riesz system and their harmonic potentials}. Complex Var. Elliptic Equ. \textbf{52}(10-11), 1047--1062 (2007) \bibitem {ErLe:1998} Eriksson-Bique,~S.-L., Leutwiler,~H.: {On modified quaternionic analysis in $\mathbf R^3$}. Archiv der Math. \textbf{70}, 228--234 (1998) \bibitem {ErLeut:2007} Eriksson,~S.-L., Leutwiler,~H.: {Hyperbolic function theory}. Adv. Appl. Clifford Algebras \textbf{17}, 437--450 (2007) \bibitem {ErOrel:2014} Eriksson,~S.-L., Orelma,~H.: {Hyperbolic Laplace operator and the Weinstein equation in $\mathbb R^3$}. Adv. Appl. Clifford Algebras \textbf{24}(1), 109--124 (2014) \bibitem {ErOrel:2019} Eriksson,~S.-L., Orelma,~H.: {Hyperbolic function theory in the skew-field of quaternions}. Adv. Appl. 
Clifford Algebras \textbf{29}, 97 (2019) \bibitem {GasLliZh:2009} Gasull,~A., Llibre,~J., Zhang,~X.: {One-dimensional quaternion homogeneous polynomial differential equations}. J. Math. Phys. \textbf{50}, 082705 (2009) \bibitem {Gilmore:1993} Gilmore,~R.: {Catastrophe Theory for Scientists and Engineers}. Dover Publications, New York (1993) \bibitem {Goriely:2001} Goriely,~A.: {Integrability and Nonintegrability of Dynamical Systems}. Advanced Series in Nonlinear Dynamics, vol. 19. World Scientific, Singapore (2001) \bibitem {GuHaSp:2008} G\"{u}rlebeck,~K., Habetha,~K., Spr\"{o}{\ss}ig,~W.: {Holomorphic Functions in the Plane and $n$-Dimensional Space}. Birkh\"{a}user, Basel (2008) \bibitem {HempLeut:1996} Hempfling,~Th., Leutwiler,~H.: {Modified quaternionic analysis in $\mathbb R^4$}. In: Dietrich~V., Habetha~K., Jank~G. (eds.) Clifford Algebras and Their Application in Mathematical Physics. Fundamental Theories of Physics, vol. 94, pp. 227--237. Kluwer, Dordrecht (1998) \bibitem {HirschSmaleDev:2013} Hirsch,~M.W., Smale,~S., Devaney,~R.L.: {Differential Equations, Dynamical Systems, and an Introduction to Chaos}, 3rd edn. Elsevier/Academic Press, Oxford (2013) \bibitem {Ilyushin:1990} Ilyushin,~A.A.: {Continuum Mechanics} [in Russian], 3rd edn. Moscow University Press, Moscow (1990) \bibitem {KhmKravOv:2010} Khmelnytskaya,~K.V., Kravchenko,~V.V., Oviedo,~H.: {On the solution of the static Maxwell system in axially symmetric inhomogeneous media}. Math. Methods Appl. Sci. \textbf{33}(4), 439--447 (2010) \bibitem {KochinKibelRoze:1964} Kochin,~N.E., Kibel',~I.A., Roze,~N.V.: {Theoretical Hydromechanics}. Translated from the 5th Russian Edition by D. Boyanovitch. Edited by J.R.M.~Radok. Interscience Publ., New York (1964) \bibitem {Koren:2002} Korenev,~B.G.: {Bessel Functions and Their Applications.} Analytical Methods and Special Functions, vol. 8. Taylor \& Francis, London/New York (2002) \bibitem {Kozlov:1993} Kozlov,~V.V.: {On the degree of instability}. J. Appl.
Math. Mech. \textbf{57}(5), 771--776 (1993) \bibitem {KozlovFurta:2001} Kozlov,~V.V., Furta,~S.D.: {The first Lyapunov method for strongly nonlinear systems of differential equations} [in Russian]. In: Matrosov~V.M. et al. (eds.) Nonlinear Mechanics, pp. 89--113. Fizmatlit, Moscow (2001) \bibitem {Kuznetsov:2023} Kuznetsov,~Yu.A.: {Elements of Applied Bifurcation Theory}, 4th edn. Applied Mathematical Sciences, vol. 112. Springer, Cham (2023) \bibitem {LaiRubKr:2010} Lai,~W.M., Rubin,~D., Krempl,~E.: {Introduction to Continuum Mechanics}, 4th edn. Elsevier, Oxford (2010) \bibitem {LavSh:1987} Lavrentyev,~M.A., Shabat,~B.V.: {Methods of the Theory of Functions of a Complex Variable} [in Russian], 5th edn. Nauka, Moscow (1987) \bibitem{LavSh-Hydro} Lavrentyev,~M.A., Shabat,~B.V.: {Hydrodynamics Problems and Their Mathematical Models} [in Russian]. Nauka, Moscow (1973) \bibitem {Leut:CV17} Leutwiler,~H.: {Modified Clifford analysis}. Complex Var. Theory Appl. \textbf{17}(3-4), 153--171 (1992) \bibitem {Leut:CV20} Leutwiler,~H.: {Modified quaternionic analysis in $\mathbb R^3$}. Complex Var. Theory Appl. \textbf{20}, 19--51 (1992) \bibitem {Leut:Rud96} Leutwiler,~H.: {Rudiments of function theory in $\mathbb R^3$}. Expo. Math. \textbf{14}, 97--123 (1996) \bibitem {Leut:2000} Leutwiler,~H.: {Quaternionic analysis in $\mathbb R^3$ versus its hyperbolic modification}. In: Brackx,~F., Chisholm,~J.S.R., Soucek,~V. (eds.) NATO Science Series II: Mathematics, Physics and Chemistry, vol. 25, pp. 193--211. Kluwer, Dordrecht (2001) \bibitem {LeZe:CMFT2004} Leutwiler,~H., Zeilinger,~P.: {On quaternionic analysis and its modifications}. Comput. Methods Funct. Theory \textbf{4}(1), 159--183 (2004) \bibitem {Leut:2017-AACA} Leutwiler,~H.: {Modified spherical harmonics}. Adv. Appl.
Clifford Algebras \textbf{27}, 1479--1502 (2017) \bibitem {Leut:2017-CAOT} Leutwiler,~H.: {An orthonormal system of modified spherical harmonics}. Complex Anal. Oper. Theory \textbf{11}(5), 1241--1251 (2017) \bibitem {Leut:2021-MMAS} Leutwiler,~H.: {Further results on modified harmonic functions in three dimensions}. Math. Methods Appl. Sci., 1--9 (2021). https://doi.org/10.1002/mma.7277 \bibitem {LlibreZhang:2012} Llibre,~J., Zhang,~X.: {On the Darboux Integrability of Polynomial Differential Systems}. Qual. Theory Dyn. Syst. \textbf{11}, 129--144 (2012) \bibitem {Perko:2001} Perko,~L.: {Differential Equations and Dynamical Systems}, 3rd edn. Texts in Applied Mathematics, vol. 7. Springer, New York (2001) \bibitem {Plaksa:2001} Plaksa,~S.A.: {Dirichlet problem for an axisymmetric potential in a simply-connected domain of the meridian plane}. Ukr. Math. J. \textbf{53}(12), 1976--1997 (2001) \bibitem {Plaksa:2003} Plaksa,~S.A.: {Dirichlet problem for the Stokes flow function in a simply-connected domain of the meridian plane}. Ukr. Math. J. \textbf{55}(2), 241--281 (2003) \bibitem {PlakShpak:2023} Plaksa,~S.A., Shpakivskyi,~V.S.: {Monogenic Functions in Spaces with Commutative Multiplication and Applications}. Frontiers in Mathematics. Birkh\"{a}user, Cham (2023) \bibitem {PolZait:Ordin-2018} Polyanin,~A.D., Zaitsev,~V.F.: {Handbook of Ordinary Differential Equations: Exact Solutions, Methods, and Problems}. CRC Press, Boca Raton/London (2018) \bibitem {Reddy:2018} Reddy,~J.N.: {Principles of Continuum Mechanics. Conservation and Balance Laws with Applications}, 2nd edn. Cambridge University Press, Cambridge (2018) \bibitem {Sedov:1994} Sedov,~L.I.: {Continuum Mechanics}, vol. 1 [in Russian], 5th edn. Nauka, Moscow (1994) \bibitem {Strogatz:2018} Strogatz,~S.H.: {Nonlinear Dynamics and Chaos With Applications to Physics, Biology, Chemistry, and Engineering}, 2nd edn.
CRC Press, Boca Raton (2018) \bibitem {UrinovKarimovKT:2019} Urinov,~A.K., Karimov,~K.T.: {The unique solvability of boundary value problems for a 3D elliptic equation with three singular coefficients}. Russ. Math. (Iz. VUZ), No. 2, 62--73 (2019) \bibitem {UrinovKarimovKT:2020} Urinov,~A.K., Karimov,~K.T.: {The Dirichlet problem for an elliptic equation with singular coefficients in a semi-cylindrical domain}. Lobachevskii J. Math., \textbf{41}, 1898--1909 (2020) \bibitem {UriKar:2023} Urinov,~A.K., Karimov,~K.T.: {The Dirichlet problem for an elliptic equation with three singular coefficients and negative parameters}. J. Math. Sci. \textbf{274}(2), 285--300 (2023) \bibitem {WalZhang:2021} Walcher,~S., Zhang,~X.: {Polynomial differential equations over the quaternions}. J. Differ. Equ. \textbf{282}, 566--595 (2021) \bibitem {Walter:1998} Walter,~W.: {Ordinary Differential Equations}. Graduate Texts in Mathematics, vol. 182. Springer, New York (1998) \bibitem {Watson:1944} Watson,~G.N.: {A Treatise on the Theory of Bessel Functions}, 2nd edn. Cambridge University Press, Cambridge (1995) \bibitem {Weinstein:1948-flows} Weinstein,~A.: {On axially symmetric flows}. Quart. Appl. Math. \textbf{5}, 429--444 (1948) \bibitem {Weinstein:1953} Weinstein,~A.: {Generalized axially symmetric potential theory}. Bull. Amer. Math. Soc. \textbf{59}(1), 20--38 (1953) \bibitem {WhiteXue:2021} White,~F.M., Xue,~H.: {Fluid Mechanics}, 9th edn. McGraw--Hill Education, New York (2021) \bibitem {Wiggins:2003} Wiggins,~S.: {Introduction to Applied Nonlinear Dynamical Systems and Chaos}, 2nd edn. Texts in applied mathematics, vol. 2. Springer, New York (2003) \bibitem {ZachThoe:1986} Zachmanoglou,~E.C., Thoe,~D.W.: {Introduction to Partial Differential Equations with Applications}. Dover, New York (1986) \bibitem {Zaud:2006} Zauderer,~E.: {Partial Differential Equations of Applied Mathematics}, 3rd edn. Wiley Series in Pure and Applied Mathematics. Wiley, Hoboken, N.J. 
(2006) \bibitem {Zhang:2011} Zhang,~X.: {Global structure of quaternion polynomial differential equations}. Commun. Math. Phys. \textbf{303}, 301--316 (2011) \bibitem {Zhang:2016} Zhang,~X.: {Liouvillian integrability of polynomial differential systems}. Trans. Amer. Math. Soc. \textbf{368}(1), 607--620 (2016) \bibitem {Zhang:2017} Zhang,~X.: {Integrability of Dynamical Systems: Algebra and Analysis}. Developments in Mathematics, vol. 47. Springer, Singapore (2017) \end{thebibliography} \end{document}
2412.19558v1
http://arxiv.org/abs/2412.19558v1
Pretabular Tense Logics over S4t
\documentclass[a4paper]{easychair} \usepackage{bussproofs} \usepackage{amsmath,amssymb} \usepackage{mathrsfs} \usepackage{enumerate} \usepackage{stmaryrd} \usepackage{setspace} \setstretch{1.2} \geometry{left=1.5in,right=1.5in,top=1in,bottom=1in} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{fact}[theorem]{Fact} \newtheorem{remark}[theorem]{Remark} \theoremstyle{plain} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \renewcommand{\phi}{\varphi} \newcommand{\hra}{\hookrightarrow} \newcommand{\Imp}{\Rightarrow} \newcommand{\bd}{\blacklozenge} \newcommand{\bb}{\blacksquare} \newcommand{\sub}{\subseteq} \newcommand{\p}{\prime} \newcommand{\com}[1]{{#1}^{c}} \newcommand{\comb}[1]{(#1)^{c}} \newcommand{\ol}{\overline} \newcommand{\ul}{\underline} \newcommand{\w}{\widehat} \newcommand{\ve}{\varnothing} \newcommand{\D}{\Diamond} \newcommand{\B}{\Box} \newcommand{\ua}{\uparrow} \newcommand{\da}{\downarrow} \newcommand{\cset}[1]{{\{ #1 \}}} \newcommand{\tup}[1]{\langle #1 \rangle} \newcommand{\eq}{\leftrightarrow} \newcommand{\Eq}{\Leftrightarrow} \newcommand{\pmi}{\leftarrow} \newcommand{\Pmi}{\Leftarrow} \newcommand{\rsto}{{\upharpoonright}} \newcommand{\iso}{\cong} \newcommand{\ff}{\mathscr{F}} \newcommand{\ar}{\mathsf{ar}} \newcommand{\A}{\mathfrak{A}} \newcommand{\AB}{\mathfrak{B}} \newcommand{\dd}{\Delta} \newcommand{\db}{\nabla} \newcommand{\R}{R_\sharp} \renewcommand{\S}{S_\sharp} \newcommand{\gf}{\mathbb{F}} \renewcommand{\gg}{\mathbb{G}} \newcommand{\F}{\mathfrak{F}} \newcommand{\G}{\mathfrak{G}} \renewcommand{\H}{\mathfrak{H}} \newcommand{\X}{\mathfrak{X}} \newcommand{\Y}{\mathfrak{Y}} \newcommand{\Z}{\mathfrak{Z}} \newcommand{\M}{\mathfrak{M}} \newcommand{\N}{\mathfrak{N}} \newcommand{\C}{\mathfrak{C}} \renewcommand{\L}{ \mathscr{L}_t} \newcommand{\J}{\mathcal{J}} \newcommand{\md}{\models} 
\usepackage{tikz} \tikzset{shorten <>/.style={shorten >=#1, shorten <=#1}} \usetikzlibrary{patterns,arrows} \title{Pretabular Tense Logics over $\mathsf{S4}_t$} \author{ Qian Chen } \institute{The Tsinghua-UvA JRC for Logic, Department of Philosophy, Tsinghua University, China \and Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands} \authorrunning{~} \titlerunning{~} \begin{document} \maketitle \begin{abstract} A logic $L$ is called tabular if it is the logic of some finite frame and $L$ is pretabular if it is not tabular while all of its proper consistent extensions are tabular. Pretabular modal logics are by now well investigated. In this work, we study pretabular tense logics in the lattice $\mathsf{NExt}(\mathsf{S4}_t)$ of all extensions of $\mathsf{S4}_t$, tense $\mathsf{S4}$. For all $n,m,k,l\in\mathbb{Z}^+\cup\cset{\omega}$, we define the tense logic $\mathsf{S4BP}_{n,m}^{k,l}$ with respectively bounded width, depth and z-degree. We give characterizations of pretabular logics in some lattices of the form $\mathsf{NExt}(\mathsf{S4BP}_{n,m}^{k,l})$. We show that the set $\mathsf{Pre}(\mathsf{S4.3}_t)$ of all pretabular logics extending $\mathsf{S4.3}_t$ contains exactly 5 logics. Moreover, we prove that $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})|=\aleph_0$ and $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,3})|=2^{\aleph_0}$. Finally, we show that for every cardinal $\kappa$ such that $\kappa\leq{\aleph_0}$ or $\kappa=2^{\aleph_0}$, $|\mathsf{Pre}(L)|=\kappa$ for some $L\in\mathsf{NExt}(\mathsf{S4}_t)$. It follows that $|\mathsf{Pre}(\mathsf{S4}_t)|=2^{\aleph_0}$, which answers the open problem about the cardinality of $\mathsf{Pre}(\mathsf{S4}_t)$ raised in \cite{Rautenberg1979}. \end{abstract} \section{Introduction} Tabular and pretabular logics are important in the study of lattices of logics. A logic is called \textit{tabular} if it is the logic of some finite frame or algebra.
Tabular modal logics have been well-investigated, see \cite{blok1983,Chagrov1997}. A logic is called \textit{pretabular} if it is not tabular but all of its proper consistent extensions are tabular. Since the 1970s, a number of by-now famous results on pretabular modal logics and super-intuitionistic logics have been obtained. Let $\mathsf{Pre}(L)$ denote the set of pretabular logics extending $L$. Maksimova \cite{Maksimova1972} proved that there are exactly 3 pretabular super-intuitionistic logics. Esakia and Meskhi \cite{ESAKIA1977} and Maksimova~\cite{Maksimova1975} showed independently that $|\mathsf{Pre}(\mathsf{S4})|=5$. Bellissima \cite{Bellissima1988} proved that there is exactly 1 pretabular logic in $\mathsf{NExt}(\mathsf{KAlt}_1)$, but there exists an infinite set of pretabular modal logics in $\mathsf{NExt}(\mathsf{KAlt}_2)$. Blok \cite{Blok1980b} showed that every pretabular logic in $\mathsf{NExt}(\mathsf{K4})$ enjoys the finite model property and that $|\mathsf{Pre}(\mathsf{K4})|=2^{\aleph_0}$. Tense logics are bi-modal logics that include a future-looking necessity modality $\B$ and a past-looking possibility modality $\bd$. These two modalities are adjoint, in the sense that for any tense logic $L$ and formulas $\phi$ and $\psi$, $\bd\phi\to\psi\in L$ if and only if $\phi\to\B\psi\in L$. The expressive power of the tense language is properly stronger than that of the basic modal language, and the lattices of tense logics are more complex than those of modal logics. For example, every modal logic in the lattice $\mathsf{NExt}(\mathsf{S4.3})$ enjoys the finite model property (FMP), but there are countably many tense logics in $\mathsf{NExt}(\mathsf{S4.3}_t)$ which do not have the FMP (see \cite{Bull1966,Wolter1996tlwto}). Makinson \cite{Makinson1971} showed that $\mathsf{NExt}(\mathsf{K})$ has only 2 co-atoms, while it was shown in \cite{Chen.Ma2024a} that there are continuum many co-atoms in $\mathsf{NExt}(\mathsf{K}_t)$. 
For more on the differences between the lattices of modal logics and tense logics, we refer the reader to \cite{Rautenberg1979,Kracht1992,Thomason1972,Ma2021,Ma.Chen2023,Chen.Ma2024a}. Tabularity and pretabularity of tense logics have also been studied. For example, tense analogs $T_{n,m}$ of alternative modal logics $\mathsf{KAlt}_n$ were investigated in \cite{Ma2021}. It turns out that $|\mathsf{Pre}(T_{1,1})|=1$ and $|\mathsf{Pre}(T_{1,2})|=|\mathsf{Pre}(T_{2,1})|\geq\aleph_0$. Pretabularity in the lattice $\mathsf{NExt}(\mathsf{S4}_t)$ has not been fully understood yet. Kracht \cite{Kracht1992} defined a pretabular tense logic $\mathsf{Ga}\in\mathsf{NExt}(\mathsf{S4}_t)$ of depth 2, whose modal fragment is tabular. Rautenberg \cite[Page 23]{Rautenberg1979} posed the question: how many pretabular tense logics exist in the lattice $\mathsf{NExt}(\mathsf{S4}_t)$? As a partial answer, Rautenberg claimed in the same work that there are infinitely many pretabular tense logics extending $\mathsf{S4}_t$. However, no proof was provided, and the only explanation for this claim was that the tense logic $\mathsf{Ga}$ is pretabular and has dimension 3. Since Kracht \cite{Kracht1992} later demonstrated that $\mathsf{Ga}$ does not have dimension 3, we believe determining the exact cardinality of $\mathsf{Pre}(\mathsf{S4}_t)$ remains an open problem. In this work, we investigate pretabular tense logics in $\mathsf{NExt}(\mathsf{S4}_t)$, where $\mathsf{S4}_t$ is the tense logic of all reflexive-transitive frames. One of our main goals is to determine the cardinality of $\mathsf{Pre}(\mathsf{S4}_t)$. Since the lattice $\mathsf{NExt}(\mathsf{S4}_t)$ is highly complex, we start by studying sub-lattices of $\mathsf{NExt}(\mathsf{S4}_t)$ where logics are bounded by some parameters. Intuitively, a rooted frame $\F$ is of \textit{z-degree} $l$ if $\F$ can be generated within $l$ steps from any point $x\in\F$.
For all $n,m,k,l\in\mathbb{Z}^+\cup\cset{\omega}$, we define the tense logic with bounding parameters $\mathsf{S4BP}_{n,m}^{k,l}$, which is the tense logic of all frames with forth-width, back-width, depth and z-degree no more than $n,m,k$ and $l$, respectively. Transitive modal logics with bounded forth-width and depth are widely studied, see \cite{Fine1974,Fine1985,Chagrov1997}. Due to the past-looking modality in the tense language, we need to take into account not only the forth-width and depth of frames, but also the back-width and z-degree of frames. Our first step is to study tense logics in $\mathsf{NExt}(\mathsf{S4BP}_{n,m}^{k,l})$ where $n,m,k,l\in\mathbb{Z}^+$. Given the constraints on width, depth and z-degree, every skeleton of $\mathsf{S4BP}_{n,m}^{k,l}$ is finite and there are only finitely many non-isomorphic skeletons. Moreover, we introduce c-irreducible pre-skeletons and show that every pretabular logic in $\mathsf{NExt}(\mathsf{S4BP}_{n,m}^{k,l})$ is the tense logic of some rooted c-irreducible $\aleph_0$-pre-skeleton. This gives us a full characterization of pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}_{n,m}^{k,l})$. We then study pretabular tense logics in some lattices $\mathsf{NExt}(\mathsf{S4BP}_{n,m}^{k,l})$ where some of the bounds are removed. Let $\mathsf{S4.3}_t$ be the tense logic of linear reflexive-transitive frames. We show that $\mathsf{S4.3}_t=\mathsf{S4BP}_{1,1}^{\omega,\omega}$ and give a full characterization of $\mathsf{Pre}(\mathsf{S4.3}_t)$, which implies that $|\mathsf{Pre}(\mathsf{S4.3}_t)|=5$. A key distinction between interpretations of modal logics and tense logics is that the latter are sensitive to ``zigzag-like'' paths. Therefore, we turn our attention to the tense logic $\mathsf{Ga}$ of garlands, which has been studied by Kracht \cite{Kracht1992}. It turns out that $\mathsf{Ga}=\mathsf{S4BP}_{2,2}^{2,\omega}$. We provide a characterization of the rooted frames of $\mathsf{Ga}$.
Utilizing this characterization, we point out an error in the characterization of $\mathsf{NExt}(\mathsf{Ga})$ given in \cite{Kracht1992}. We also give a full characterization of $\mathsf{Pre}(\mathsf{S4BP}_{2,2}^{2,\omega})$. It follows that the following theorem on the cardinality of pretabular logics holds: for every cardinal $\kappa\leq{\aleph_0}$, there exists $L\in\mathsf{NExt}(\mathsf{S4BP}_{2,2}^{2,\omega})$ with $|\mathsf{Pre}(L)|=\kappa$. We then investigate pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}_{2,3}^{2,\omega})$. The main result we obtain is that $|\mathsf{Pre}(\mathsf{S4BP}_{2,3}^{2,\omega})|=2^{\aleph_0}$, which we prove by constructing a continuum-sized family of rooted frames whose tense logics are pairwise different and pretabular. For each sequence $\alpha\in 2^\mathbb{Z}$, we define an infinite rooted frame $\Z_\alpha$. We show that for all dissimilar finitely perfect sequences $\alpha,\beta\in 2^\mathbb{Z}$, $\mathsf{Log}(\Z_\alpha)$ is pretabular and $\mathsf{Log}(\Z_\alpha)\neq\mathsf{Log}(\Z_\beta)$. To obtain the desired sequences, we introduce the generalized Thue-Morse sequences $\chi^f$. It turns out that $\cset{{\chi^f}:f\in 2^\omega}$ is a continuum-sized set of pairwise dissimilar finitely perfect sequences. In order to show that $\mathsf{Log}(\Z_{\alpha})$ is pretabular, we show that if $\G$ is an infinite rooted frame such that $\mathsf{Log}(\Z_{\chi^f})\sub\mathsf{Log}(\G)$, then every finite fragment of $\G$ is contained in $\Z_{\chi^f}$. Since rooted frames for the logic $\mathsf{S4BP}_{2,3}^{2,\omega}$ have involved structures and thus are hard to handle, we need to develop new technical tools. The tools used here are generalized Jankov formulas for image-finite Kripke frames and the corresponding local t-morphisms. Jankov formulas are widely used in the study of lattices of intermediate logics and modal logics (see \cite{Jankov1963,deJongh1968,Kracht1993,Rautenberg1980,Wolter1997}).
Fine \cite{fine1974ascending} developed frame formulas for finite rooted $\mathsf{S4}$-frames, which are similar to Jankov formulas for finite subdirectly irreducible Heyting algebras. Such a formula for $\F$ is refuted by some $\mathsf{S4}$-frame $\G$ if and only if $\F$ is a p-morphic image of a generated subframe of $\G$. In this work, we define local t-morphisms and associate to every image-finite rooted pointed-frame $(\F,x)$ and $k\in\mathbb{Z}^+$ the generalized Jankov formula $\J^k(\F,x)$. We show that for any frame $\G=(Y,S)$ and $y\in Y$, $(\G,y)$ validates $\neg\J^k(\F,x)$ if and only if there exists no $k$-t-morphism $f:(\G,y)\twoheadrightarrow^k(\F,x)$. This tells us that $\G,y\md\J^k(\F,x)$ implies that the neighborhood of $y$ is similar to the neighborhood of $x$ within z-degree $k$. Generalized Jankov formulas provide a useful tool in the study of $\mathsf{Pre}(\mathsf{S4BP}_{2,3}^{2,\omega})$, since all we need in the main proofs is to characterize large enough neighborhoods of some arbitrarily fixed points. In fact, the proofs of many of the lemmas in Section \ref{sec:BS223} heavily rely on properties of generalized Jankov formulas and local t-morphisms. Finally, we obtain the following anti-dichotomy theorem for the cardinality of pretabular extensions of logics in $\mathsf{NExt}(\mathsf{S4}_t)$: \begin{center} For every cardinal $\kappa$ such that $\kappa\leq{\aleph_0}$ or $\kappa=2^{\aleph_0}$, there exists $L\in\mathsf{NExt}(\mathsf{S4}_t)$ such that $|\mathsf{Pre}(L)|=\kappa$. \end{center} Consequently, we determine that the cardinality of $\mathsf{Pre}(\mathsf{S4}_t)$ is $2^{\aleph_0}$. This gives a full solution to the problem presented in \cite{Rautenberg1979}. The paper is structured as follows: In Section 2 we recall preliminaries of tense logic. Section 3 introduces generalized Jankov formulas and local t-morphisms. In Section 4 we introduce tense logics with bounding parameters $\mathsf{S4BP}_{n,m}^{k,l}$ and study their basic properties.
In Section 5, we give a characterization of $\mathsf{Pre}(\mathsf{S4BP}_{n,m}^{k,l})$ for each $n,m,k,l\in\mathbb{Z}^+$. Section 6 provides a characterization of $\mathsf{Pre}(\mathsf{S4.3}_t)$. In Section 7, we provide a characterization of $\mathsf{Pre}(\mathsf{S4BP}_{2,2}^{2,\omega})$ and prove the anti-dichotomy theorem for the cardinality of pretabular logics extending $\mathsf{S4BP}_{2,2}^{2,\omega}$. We also point out the connection between Kracht's work on the lattice $\mathsf{NExt}(\mathsf{Ga})$ in \cite[Section 4]{Kracht1992} and our work in this section. In Section 8, we introduce generalized Thue-Morse sequences and construct a continuum-sized family of pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}_{2,3}^{2,\omega})$. It follows that the cardinality of $\mathsf{Pre}(\mathsf{S4}_t)$ is $2^{\aleph_0}$. Conclusions are given in Section 9. \section{Preliminaries} Basic notions of tense logic can be found in e.g. \cite{Ma2021,Chagrov1997,Blackburn2001}. Let $\mathbb{N}$ and $\mathbb{Z}^+$ be the sets of all natural numbers and positive integers, respectively. We use $\mathbb{O}$ and $\mathbb{E}$ for the sets of odd numbers and even numbers in $\mathbb{N}$, respectively. Let $\mathbb{Z}^*=\mathbb{Z}^+\cup\cset{\omega}$. The cardinality of a set $X$ is denoted by $|X|$. The power set of $X$ is denoted by $\mathcal{P}(X)$. The set of all finite tuples of elements of $X$ is denoted by $X^*$. We use Boolean operations $\cap$, $\cup$ and $\comb{\cdot}$ (complementation) on $\mathcal{P}(X)$. \begin{definition} The {language $\mathscr{L}_t$ of tense logic} consists of a denumerable set of variables $\mathsf{Prop}=\{p_i: i\in\mathbb{N}\}$, connectives $\bot$ and $\to$, and unary tense operators $\B$ and $\bd$. The set of all formulas $\L$ is defined as follows: \[ \L\ni \phi ::= p \mid \bot \mid (\phi\to\phi) \mid \B\phi\mid \bd\phi,~\text{where $p \in \mathsf{Prop}$.} \] Formulas in $\mathsf{Prop}\cup\{\bot\}$ are called {\em atomic}.
The connectives $\top,\neg,\wedge$ and $\vee$ are defined as usual. Let $\D\phi:=\neg\B\neg\phi$ and $\bb\phi:=\neg\bd\neg\phi$. Let $var(\phi)$ be the set of all variables in a formula $\phi$. For each $n\in\omega$, let $\mathsf{Prop}(n)=\cset{p_i:i<n}$ and $\L(n)=\cset{\phi\in\L:var(\phi)\sub\mathsf{Prop}(n)}$. The {\em complexity} $c(\phi)$ of a formula $\phi$ is defined by \begin{align*} c(p) &= 0 = c(\bot),\\ c(\phi\to\psi)&=\max\{c(\phi),c(\psi)\}+1,\\ c(\B\phi)&=c(\phi)+1=c(\bd\phi). \end{align*} The {\em modal degree} $md(\phi)$ of a formula $\phi$ is defined inductively as follows: \begin{align*} md(p) &= 0 = md(\bot),\\ md(\phi\to\psi)&=\max\{md(\phi),md(\psi)\},\\ md(\B\phi)&=md(\phi)+1=md(\bd\phi). \end{align*} The set of all subformulas of a formula $\phi$ is denoted by $Sub(\phi)$. A set of formulas $\Sigma$ is {\em closed under subformulas} if $\Sigma=\bigcup_{\phi\in\Sigma}Sub(\phi)$. A {\em substitution} is a homomorphism $(.)^s:\L\twoheadrightarrow \L$ on the formula algebra $\L$.\end{definition} \begin{definition} A {\em frame} is a pair $\F =(X,R)$ where $X\neq\ve$ and $R\sub X\times X$. We write $Rxy$ if $\tup{x,y}\in R$. The {\em inverse} of $R$ is defined as $\breve{R}=\{\tup{y,x}:Rxy\}$. The {\em inverse} of $\F$ is defined as the frame $\breve{\F}=(X,\breve{R})$. For every $x\in X$, let $R[x]=\{y\in X: Rxy\}$ and $\breve{R}[x]=\{y\in X: Ryx\}$. For every $Y\sub X$, we define $R[Y]=\bigcup_{y\in Y}R[y]$ and $\breve{R}[Y]=\bigcup_{y\in Y}\breve{R}[y]$. Let $x\in X$. For all $n\geq 0$, we define $\R^n[x]$ by: \begin{center} $\R^0[x] = \{x\}$ and $\R^{k+1}[x] =\R^k[x]\cup R[\R^k[x]]\cup \breve{R}[\R^k[x]]$. \end{center} Let $\R[x] = \R^1[x]$ and $\R^\omega[x] = \bigcup_{k\geq 0}\R^k[x]$. Let $\mathsf{Fr}$ and $\mathsf{Fin}$ denote the class of all frames and finite frames, respectively. 
\end{definition} \begin{definition} A {\em general frame} is a triple $\gf=(X,R,A)$ where $(X,R)$ is a frame and $A\sub\mathcal{P}(X)$ is a set such that $\ve\in A$ and $A$ is closed under $\cap$, $(\cdot)^c$, $R[\cdot]$ and $\breve{R}[\cdot]$. We call $A\sub\mathcal{P}(X)$ {\em the set of internal sets in $\gf$}. \end{definition} For any $Q\sub\mathcal{P}(X)$, let $[Q]$ denote the smallest subset of $\mathcal{P}(X)$ such that $Q\sub[Q]$ and $(\F,[Q])$ is a general frame. Since $(X,R,\bigcap_{i\in I}A_i)$ is a general frame for every family $(\gf_i=(X,R,A_i))_{i\in I}$ of general frames, $[Q]$ is well-defined. A general frame $\gf=(X,R,A)$ is called {\em finitely generated}, if $A=[Q]$ for some finite $Q\sub\mathcal{P}(X)$. Let $X$ be a set and $A\sub\mathcal{P}(X)$. We say that $A$ has the finite intersection property (FIP), if $\bigcap B\neq\ve$ for any finite subset $B$ of $A$. \begin{definition} Let $\gf=(X,R,A)$ be a general frame. Then $\gf$ is called \begin{enumerate}[(i)] \item {\em differentiated}, if for all distinct $x,y\in X$, $x\in U$ and $y\not\in U$ for some $U\in A$; \item {\em tight}, if for all $x,y\in X$ such that $y\not\in R[x]$, there are internal sets $U,V\in A$ such that $x\in U\setminus\breve{R}[V]$ and $y\in V\setminus R[U]$; \item {\em compact}, if $\bigcap B\neq\ve$ for any $B\sub A$ which has the FIP; \item {\em refined}, if $\gf$ is both differentiated and tight; \item {\em descriptive}, if $\gf$ is both refined and compact. \end{enumerate} Let $\mathsf{GF}$, $\mathsf{RF}$ and $\mathsf{DF}$ denote the class of all general frames, refined frames and descriptive frames, respectively. \end{definition} For each general frame $\gf=(X,R,A)$, we call $\kappa\gf=(X,R)$ the underlying frame of $\gf$. Moreover, if a general frame is of the form $\gf=(X,R,\mathcal{P}(X))$, then we identify $\gf$ with its underlying frame $\kappa\gf$. In this sense, we see that $\mathsf{Fr}\sub\mathsf{RF}\sub\mathsf{GF}$.
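To illustrate how the operation $\R^n[\cdot]$ collects points along both $R$ and $\breve{R}$, we include a small example (ours, for illustration only).

\begin{example}
Let $\F=(X,R)$ with $X=\cset{x,y,z}$ and $R=\cset{\tup{x,y},\tup{z,y}}$. Then $\R^0[x]=\cset{x}$ and $\R^1[x]=\cset{x}\cup R[x]\cup\breve{R}[x]=\cset{x,y}$, since $R[x]=\cset{y}$ and $\breve{R}[x]=\ve$. Moreover, $\R^2[x]=\R^1[x]\cup R[\R^1[x]]\cup\breve{R}[\R^1[x]]=\cset{x,y,z}$, since $z\in\breve{R}[y]$. Hence $\R^2[x]=\R^\omega[x]=X$, so $\F$ is rooted, although $z$ is reachable from $x$ only by passing through $y$ backwards along $R$.
\end{example}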
\begin{definition} Let $\gf=(X,R,A)$ be a general frame. Then a map $V:\mathsf{Prop}\to A$ is called a valuation in $\gf$. A valuation $V$ is extended to $V:\L\to A$ as follows: \begin{align*} V(\bot) &=\ve, &V(\phi\to\psi) &= (V(\phi))^c\cup V(\psi),\\ V(\bd\phi) &= \bd_R V(\phi), &V(\B\phi) &= \B_R V(\phi), \end{align*} where the operations $\B_R$ and $\bd_R$ are defined by $\bd_R:U\mapsto R[U]$ and $\B_R:U\mapsto (\breve{R}[U^c])^c$. A Kripke model is a pair $\M=(\gf,V)$ where $\gf\in\mathsf{GF}$ and $V$ is a valuation in $\gf$. Let $\phi$ be a formula and $w\in X$. Then (i) $\phi$ is {\em true} at $w$ in $\M$ (notation: $\M,w\models\phi$) if $w\in V(\phi)$; (ii) $\phi$ is {\em valid} at $w$ in $\gf$ (notation: $\gf,w\models\phi$) if $w\in V(\phi)$ for every valuation $V$ in $\gf$; (iii) $\phi$ is {\em valid} in $\gf$ (notation: $\gf\models\phi$) if $\gf,w\models\phi$ for every $w\in X$; and (iv) $\phi$ is {\em valid} in a class of general frames $\mathcal{K}$ (notation: $\mathcal{K}\models\phi$) if $\gf\models\phi$ for every $\gf\in\mathcal{K}$. For all sets $\Sigma\sub\L$ of formulas and classes $\mathcal{K}\sub\mathsf{GF}$ of general frames, let \begin{center} $\mathcal{K}(\Sigma)=\cset{\gf\in\mathcal{K}:\gf\md\Sigma}$ and $\mathsf{Log}(\mathcal{K})=\cset{\phi:\mathcal{K}\md\phi}$. \end{center} \end{definition} We often write $(X,R,V)$ for the model $(\gf,V)$. Let $\M=(X,R,V)$ be a model and $\Sigma$ a set of formulas. Then for all $x,y\in X$, we write $x\equiv_\Sigma y$ if $\cset{\phi\in\Sigma:\M,x\md\phi}=\cset{\phi\in\Sigma:\M,y\md\phi}$. \begin{definition} Let $\gf=(X,R,A)$ be a general frame. For every subset $Y$ of $X$, the {\em subframe of $\gf$ induced by $Y$} is defined as $\gf\rsto Y=(Y,R\rsto Y,A\rsto Y)$, where $R\rsto Y=R\cap(Y\times Y)$ and $A\rsto Y=\cset{U\cap Y:U\in A}$. Let $\gg$ be a general frame. If $\gg\cong\gf\rsto Y$, then we say $\gg$ can be embedded into $\gf$ and write $\gg\rightarrowtail\gf$.
For all $x\in X$, let $\gf_x$ denote the frame $\gf\rsto \R^\omega[x]$ and we call $\gf\rsto\R^\omega[x]$ the {\em subframe of $\gf$ generated by $x$}. We call $\gf$ a {\em rooted frame}, if $\gf=\gf\rsto \R^\omega[x]$ for some $x\in X$. For each class $\mathcal{K}$ of general frames, let $\mathcal{K}_r=\cset{\gf_x:\gf\in\mathcal{K}\text{ and }x\in\gf}$. Let $\mathcal{F}=(\gf_i=(X_i,R_i,A_i))_{i\in I}$ be a family of general frames. Then the {\em disjoint union of $\mathcal{F}$} is defined as $\bigoplus_{i\in I}\gf_i=(X,R,A)$, where $X=\biguplus_{i\in I}X_i=\cset{\tup{y,i}:y\in X_i,i\in I}$, $R=\cset{\tup{\tup{x,i},\tup{y,i}}:R_ixy\text{ and }i\in I}$ and $A=\cset{U\sub X:\forall_{i\in I}(U\cap X_i\in A_i)}$. Let $\gf=(X,R,A)$ and $\gf'=(X',R',A')$ be general frames. A map $f:X\to X'$ is said to be a {\em t-morphism from $\gf$ to $\gf'$}, if $f^{-1}[Y']\in A$ for any $Y'\in A'$ and \begin{center} for all $x\in X$, $f[R[x]]= R'[f(x)]$ and $f[\breve{R}[x]]=\breve{R'}[f(x)]$. \end{center} We write $f:\gf\twoheadrightarrow\gf'$ if $f$ is a surjective t-morphism from $\gf$ to $\gf'$. Moreover, $\gf'$ is called a {\em t-morphic image of $\gf$ (notation: $\gf\twoheadrightarrow\gf'$)} if there exists $f:\gf\twoheadrightarrow\gf'$. For each class $\mathcal{K}$ of general frames, let $\mathsf{TM}(\mathcal{K})$ denote the class of all t-morphic images of frames in $\mathcal{K}$. \end{definition} \begin{fact}\label{fact:inj-tmorphism} Let $\F$ and $\G$ be frames and $f:\F\twoheadrightarrow\G$. If $f$ is injective, then $f:\F\iso\G$. \end{fact} We recall the definition and some basic results about ultraproducts of general frames (cf. \cite{Kracht1999}). Given a family $\mathcal{X}=(X_i)_{i\in I}$ of sets, the {\em direct product $\prod_{i\in I}X_i$ of $\mathcal{X}$} is defined as follows: $$ \prod_{i\in I}X_i=\cset{f:I\to\bigcup_{i\in I}X_i: \forall{i\in I}(f(i)\in X_i)}. $$ Let $I$ be a set and $F\sub\mathcal{P}(I)$.
Then $F$ is called a filter over $I$ if (1) $I\in F$; (2) for all $J,K\in F$, $J\cap K\in F$ and (3) for all $J\in F$ and $K\in\mathcal{P}(I)$, $J\sub K$ implies $K\in F$. A filter $F$ over $I$ is called proper if $\ve\not\in F$. Moreover, $F$ is called an ultrafilter if $F$ is proper and for all $J\in\mathcal{P}(I)$, $J\in F$ or $J^c\in F$. \begin{definition} Let $(\X_i=(X_i,R_i,A_i))_{i\in I}$ be a family of general frames and $U$ an ultrafilter over $I$. Then the ultraproduct $\prod_U\X_i=(\prod_UX_i,R,A)$ of $(\X_i)_{i\in I}$ is defined as follows: \begin{itemize} \item $\prod_UX_i=\cset{[x]:x\in\prod_{i\in I}X_i}$, where $[x]=\cset{y\in\prod_{i\in I}X_i:\cset{i\in I:x(i)=y(i)}\in U}$; \item the set $A$ of internal sets is $\cset{[P]:P\in\prod_{i\in I}A_i}$, where \begin{center} $[P]=\cset{[x]\in X:\cset{i\in I:x(i)\in P(i)}\in U}$. \end{center} \item for all $x,y\in\prod_{i\in I}X_i$, $R[x][y]$ if and only if $\cset{i\in I:R_ix(i)y(i)}\in U$. \end{itemize} \end{definition} \begin{proposition}\label{prop:UltraproductOfGF} Let $(\gf_i=(X_i,R_i,A_i))_{i\in I}$ be a family of general frames, $U$ an ultrafilter over $I$ and $\prod_U\gf_i$ the ultraproduct of $(\gf_i)_{i\in I}$. Then for all $P,Q\in\prod_{i\in I}A_i$ \begin{align*} ([P])^c&=[\tup{P(i)^c:i\in I}], &[P]\cap[Q]&=[\tup{P(i)\cap Q(i):i\in I}],\\ \B_R[P]&=[\tup{\B_{R_i}P(i):i\in I}], &\bd_R[P]&=[\tup{\bd_{R_i}P(i):i\in I}]. \end{align*} As a corollary, $\prod_U\gf_i$ is a general frame. Moreover, if $\gf_i$ is differentiated (tight) for each $i\in I$, then $\prod_U\gf_i$ is differentiated (tight). \end{proposition} \begin{proof} The fact that $\prod_U\gf_i$ is a general frame follows from \cite[Lemma 5.7.1]{Kracht1999}. Suppose $\gf_i$ is differentiated for each $i\in I$. Take any distinct $[x],[y]\in X$. Then $J=\cset{j\in I:x(j)\neq y(j)}\in U$. For each $j\in J$, there exists $P_j\in A_j$ such that $x(j)\in P_j$ and $y(j)\not\in P_j$. Let $[P]\in A$ be such that $P(j)=P_j$ for all $j\in J$. 
Then clearly, $[x]\in[P]$ and $[y]\not\in[P]$, which entails $\prod_U\gf_i$ is differentiated. The case for tightness is similar. \end{proof} \begin{theorem}\label{thm:UltraproductOfGF} Let $(\M_i=(\gf_i,V_i))_{i\in I}$ be a family of models, $V=\tup{V_i:i\in I}$ and $U$ an ultrafilter over $I$. Let $[V]$ be the valuation in $\prod_U\gf_i$ such that $[V]:p\mapsto[\tup{V_i(p):i\in I}]$. Then for all $x=\tup{x_i:i\in I}$ and $\phi\in\L$, \begin{enumerate}[(1)] \item $\prod_U\gf_i,[V],[x]\md\phi$ if and only if $\cset{i\in I:\M_i,x_i\md\phi}\in U$. \item $\bigoplus_{i\in I}\gf_i\md\phi$ implies $\prod_U\gf_i\md\phi$. \end{enumerate} \end{theorem} \begin{proof} (1) follows from \cite[Lemma 5.7.2]{Kracht1999}. For (2), suppose $\bigoplus_{i\in I}\gf_i\md\phi$. Take any $y=\tup{y_i:i\in I}$ and valuation $V'$ in $\prod_U\gf_i$. Note that for all $j\in\omega$, $V'(p_j)=[P_j]$ for some $P_j\in\prod_{i\in I}A_i$. For all $i\in I$, let $V'_i$ be the valuation in $\gf_i$ such that $V'_i(p_j)=P_j(i)$ for all $j\in\omega$. Then we see $V'=[\tup{V'_i:i\in I}]$. Since $\bigoplus_{i\in I}\gf_i\md\phi$, we have $\cset{i\in I:\gf_i,V'_i,y_i\md\phi}=I\in U$. Then by (1), $\prod_U\gf_i,V',[y]\md\phi$. \end{proof} \begin{definition} A (normal) {\em tense logic} is a set of formulas $L$ such that (i) all instances of classical propositional tautologies belong to $L$; (ii) $\bd \phi\to \psi\in L$ if and only if $\phi\to \B\psi\in L$; (iii) if $\phi,\phi\to\psi\in L$, then $\psi\in L$; (iv) if $\phi\in L$, then $\phi^s\in L$ for every substitution $s$. The least tense logic is denoted by $\mathsf{K}_t$. \end{definition} For every tense logic $L$ and set of formulas $\Sigma$, let $L\oplus\Sigma$ denote the smallest tense logic containing $L\cup\Sigma$. A tense logic $L_1$ is a {\em sublogic} of $L_2$ (or $L_2$ is an {\em extension} of $L_1$) if $L_1\sub L_2$. Let $\mathsf{NExt}(L)$ be the set of all extensions of $L$. A tense logic $L$ is {\em consistent} if $\bot\not\in L$. 
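As a quick illustration of the adjointness condition (ii) in the definition of tense logics, note that it already yields the two classical tense axioms.

\begin{example}
Let $L$ be a tense logic. Taking $\phi=p$ and $\psi=\bd p$ in condition (ii), the tautology instance $\bd p\to\bd p\in L$ yields $p\to\B\bd p\in L$. Taking $\phi=\B p$ and $\psi=p$, the tautology instance $\B p\to\B p\in L$ yields $\bd\B p\to p\in L$. These are the two standard axioms connecting the future and past modalities.
\end{example}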
The only inconsistent tense logic is $\L$. Clearly $(\mathsf{NExt}(L), \cap,\oplus)$ is a lattice with top $\L$ and bottom $L$. For all tense logics $L_1$ and $L_2$ such that $L_1\sub L_2$, we have the {\em interval} $[L_1,L_2] = \{L\in \mathsf{NExt}(\mathsf{K}_t) : L_1\sub L\sub L_2\}$. \begin{fact} For every tense logic $L$, $L=\mathsf{Log}(\mathsf{DF}(L))=\mathsf{Log}(\mathsf{DF}_r(L))=\mathsf{Log}(\mathsf{RF}_r(L))$. \end{fact} \begin{definition} A tense logic $L$ is {\em Kripke-complete} if $L=\mathsf{Log}(\mathsf{Fr}(L))$. A tense logic $L$ has the {\em finite model property} (FMP) (notation: $\mathrm{FMP}(L)$) if $L=\mathsf{Log}(\mathsf{Fin}(L))$. A tense logic $L$ is {\em tabular} if $L = \mathsf{Log}(\F)$ for some finite frame $\F$. \end{definition} We recall some results on tabularity of tense logics (cf. \cite{Chen.Ma2024a}). For each $n\in\omega$ and $\phi\in\L$, we define the formula $\Delta^n\phi$ by: \begin{center} $\Delta^0\phi=\phi$ and $\Delta^{k+1}\phi=\Delta^k\phi\vee\D\Delta^k\phi\vee\bd\Delta^k\phi$. \end{center} Let $\nabla^n\phi:=\neg\Delta^n\neg\phi$. For each $n\in\omega$, the formula $\mathsf{tab}^T_n$ is defined as \begin{align*} \mathsf{tab}^T_n=\neg(\Delta^n\psi_0\wedge\cdots\wedge\Delta^n\psi_n), \end{align*} where $\psi_i=\neg p_0\wedge\cdots\wedge\neg p_{i-1}\wedge p_i$ for each $i\leq n$. Note that $\psi_0=p_0$. For example, $\Delta p = p\vee\D p\vee\bd p$, $\Delta^2q=q\vee\D q\vee\bd q\vee\D^2 q\vee\D\bd q\vee\bd\D q\vee\bd^2 q$ and $\mathsf{tab}^T_1=\neg((p_0\vee\D p_0\vee\bd p_0)\wedge((\neg p_0\wedge p_1)\vee\D(\neg p_0\wedge p_1)\vee\bd(\neg p_0\wedge p_1)))$. Semantically, one can check that $\M,w\md\Delta^n\phi$ iff $\M,u\md\phi$ for some $u\in \R^n[w]$. \begin{fact}\label{fact:tabn} Let $\gf=(X,R,A)\in\mathsf{GF}$ and $x\in X$. Then for every $n\in\omega$, \begin{enumerate}[(1)] \item $\R^{n}[x]=\R^{n+1}[x]$ if and only if $\R^n[x]=\R^\omega[x]$. \item $\kappa\gf,x\md\mathsf{tab}^T_n$ if and only if $|\R^n[x]|\leq n$.
\end{enumerate} \end{fact} \begin{proof} See \cite[Lemma 3.4 and Lemma 3.5]{Chen.Ma2024a}. \end{proof} \begin{theorem}[{\cite[Theorem 3.7]{Chen.Ma2024a}}]\label{thm:tabular} For every consistent logic $L\in\mathsf{NExt}(\mathsf{K}_t)$, $L\in\mathsf{TAB}$ if and only if $\mathsf{tab}^T_n\in L$ for some $n\geq 1$. \end{theorem} \begin{theorem}\label{thm:nontabular-infrootedframe} Let $L$ be a non-tabular tense logic. Then $\mathsf{Fin}_r(L)\neq\mathsf{RF}_r(L)$, i.e., there exists an infinite rooted refined frame $\gf$ such that $\gf\md L$. \end{theorem} \begin{proof} Suppose every $\gf\in\mathsf{RF}_r(L)$ is finite. Since $L$ is non-tabular, by Theorem \ref{thm:tabular}, $\mathsf{tab}^T_n\not\in L$ for any $n\in\omega$. By $L=\mathsf{Log}(\mathsf{RF}_r(L))$, for all $n\in\omega$, there exists $\gf_n=(X_n,R_n)\in\mathsf{RF}_r(L)$ and $x_n\in X_n$ such that $\gf_n,x_n\not\md\mathsf{tab}^T_n$. By Fact \ref{fact:tabn}, we see $\gf_m,x_m\not\md\mathsf{tab}^T_n$ for all $m\geq n$. By Theorem \ref{thm:UltraproductOfGF}(1), $\prod_U\gf_n,[x]\not\md\mathsf{tab}^T_n$ for any $n\in\omega$, where $U$ is a non-principal ultrafilter over $\omega$ and $x:n\mapsto x_n$ for all $n\in\omega$. Let $\gf$ be the rooted general subframe of $\prod_U\gf_n$ generated by $[x]$. Then we have $\gf,[x]\not\md\mathsf{tab}^T_n$ for any $n\in\omega$, which entails that $\gf$ is infinite. By Theorem \ref{thm:UltraproductOfGF}, $\prod_U\gf_n\md L$, which implies $\gf\md L$ and contradicts the assumption. \end{proof} \begin{corollary} If $L\in\mathsf{Pre}(\mathsf{K}_t)$, then $L=\mathsf{Log}(\gf)$ for some rooted refined frame $\gf$. \end{corollary} \section{Generalized Jankov formulas and local t-morphisms} In this section, we introduce generalized Jankov formulas and local t-morphisms, which are the main tools we use in the following sections. Given any image-finite frame $\F=(X,R)$, $x\in X$ and $k\in\mathbb{Z}^+$, we associate the generalized Jankov formula $\J^k(\F,x)$ to it.
We show that for any frame $\G=(Y,S)$ and $y\in Y$, $(\G,y)$ validates $\neg\J^k(\F,x)$ if and only if there exists no $k$-t-morphism $f:(\G,y)\twoheadrightarrow^k(\F,x)$. This generalizes the Jankov formulas for finite algebras or finite rooted frames defined in \cite{deJongh1968,Jankov1963,fine1974ascending}. \begin{definition} Let $\gf=(X,R,A)\in\mathsf{GF}$. Then $\gf$ is said to be {\em image-finite} if $|\R[x]|<\aleph_0$ for all $x\in X$. \end{definition} \begin{lemma}\label{lem:pointwise-finite-kappaF} Let $\gf=(X,R,A)\in\mathsf{GF}$ be image-finite. Then \begin{enumerate}[(1)] \item $|\R^n[x]|<\aleph_0$ for all $x\in X$ and $n\in\omega$. \item If $\gf$ is differentiated, then $\mathsf{Log}(\gf)=\mathsf{Log}(\kappa\gf)$. \end{enumerate} \end{lemma} \begin{proof} We prove (1) by induction on $n$. The case $n=0$ is trivial. Suppose $n>0$. Then by the induction hypothesis, we have \begin{center} $|\R^n[x]|\leq\sum_{y\in \R^{n-1}[x]}|\R[y]|<\aleph_0$. \end{center} For (2), it suffices to show $\mathsf{Log}(\gf)\sub\mathsf{Log}(\kappa\gf)$. Take any $\phi\not\in\mathsf{Log}(\kappa\gf)$. Then there exist $x\in X$ and a valuation $V$ in $\kappa\gf$ such that $\kappa\gf,V,x\not\md\phi$. Let $md(\phi)=m$. By~(1), $\R^m[x]$ is finite. Since $\gf$ is differentiated, there exists a valuation $V'$ in $\gf$ such that $V(p)\cap \R^m[x]=V'(p)\cap \R^m[x]$ for all $p\in\mathsf{Prop}$. Thus $(\kappa\gf,V)\rsto \R^m[x]\cong(\gf,V')\rsto \R^m[x]$. Since $md(\phi)=m$, we have $\gf,V',x\not\md\phi$. Thus $\phi\not\in\mathsf{Log}(\gf)$. \end{proof} \begin{definition} Let $\F=(X,R)$ be an image-finite frame and $x\in X$. Let $k\in\mathbb{Z}^+$ and $\tup{x_i:i\in n}$ be an enumeration of $\R^{k}[x]$ where $x=x_0$.
Then the formula $\J^k(\F,x)$ is defined to be the conjunction of the following formulas: \begin{enumerate}[(1)] \item $p_0\wedge\nabla^k(p_0\vee\cdots\vee p_{n-1})$ \item $\nabla^k(p_i\to\neg p_j)$, for all $i\neq j$ \item $\nabla^{k-1}((p_i\to\D p_j)\wedge(p_j\to\bd p_i))$, for all $Rx_ix_j$ \item $\nabla^{k-1}((p_i\to\neg\D p_j)\wedge(p_j\to\neg\bd p_i))$, for all $x_j\not\in R[x_i]$ \end{enumerate} $\J^k(\F,x)$ is called the Jankov formula of $(\F,x)$ with degree $k$. \end{definition} By Lemma \ref{lem:pointwise-finite-kappaF}(1), $\R^k[x]$ is finite for all $k\in\omega$. Thus $\J^k(\F,x)$ is well-defined. Let $V$ be a valuation in $\F$ such that $V(p_i)=\cset{x_i}$ for all $i\in n$. Then it is not hard to check that $\F,V,x\md\J^k(\F,x)$. Thus $\F,x\not\md\neg\J^k(\F,x)$ for any $k\in\mathbb{Z}^+$. Intuitively, $\J^k(\F,x)$ is a local description of $\F$ which contains the information of $\F\rsto\R^k[x]$. If $\F$ is generated by $x$ in $k$ steps, then $\J^k(\F,x)$ plays a role similar to that of the classical Jankov formula. \begin{definition} Let $\F=(X,R)$ and $\F'=(X',R')$ be frames, $x\in X$ and $x'\in X'$. For each $k\in\mathbb{Z}^+$, a function $f:\R^k[x]\to X'$ is called a $k$-t-morphism from $(\F,x)$ to $(\F',x')$ (notation: $f:(\F,x)\twoheadrightarrow^k(\F',x')$), if $f(x)=x'$ and \begin{center} for all $y\in \R^{k-1}[x]$, $f[R[y]]= R'[f(y)]$ and $f[\breve{R}[y]]=\breve{R'}[f(y)]$. \end{center} We write $(\F,x)\twoheadrightarrow^k(\F',x')$ if there is a $k$-t-morphism $f:(\F,x)\twoheadrightarrow^k(\F',x')$. \end{definition} To simplify notation, given any partial function $f:X\to X'$ such that $\mathsf{dom}(f)\supseteq\R^k[x]$, we write $f:(\F,x)\twoheadrightarrow^k(\F',x')$ for $f\rsto\R^k[x]:(\F,x)\twoheadrightarrow^k(\F',x')$ if there is no danger of confusion. It is clear that if $f:\F\twoheadrightarrow\F'$ and $f(x)=x'$, then $f:(\F,x)\twoheadrightarrow^k(\F',x')$ for all $k\in\mathbb{Z}^+$. Moreover, the following proposition holds.
\begin{proposition}\label{prop:k-t-morphism} Let $\F=(X,R)$, $\F'=(X',R')$ be frames, $x\in X$, $x'\in X'$ and $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$. Let $m<k$. Suppose $z,z'\in \R^{m}[x]$ and $f(z)=f(z')$. Then \begin{center} $f[\R^n[z]]=f[\R^n[z']]$ for all $n\in\omega$ such that $n+m<k$. \end{center} \end{proposition} \begin{proof} The proof proceeds by induction on $n$. The case $n=0$ is trivial. Let $n>0$. Take any $w\in\R^{n}[z]$. Then there exists $u\in\R^{n-1}[z]$ such that $w\in\R[u]$. By the induction hypothesis, $f(u)\in f[\R^{n-1}[z]]=f[\R^{n-1}[z']]$ and so $f(u)=f(u')$ for some $u'\in\R^{n-1}[z']$. Note that $u,u'\in\R^{k-1}[x]$, we have $f[\R[u]]={R'}_\sharp[f(u)]={R'}_\sharp[f(u')]=f[\R[u']]$. Thus $f(w)\in f[\R[u]]=f[\R[u']]\sub f[\R^{n}[z']]$. By the arbitrariness of $w$, $f[\R^n[z]]\sub f[\R^n[z']]$. Symmetrically, we see $f[\R^n[z']]\sub f[\R^n[z]]$. Hence $f[\R^n[z]]=f[\R^n[z']]$. \end{proof} Proposition \ref{prop:k-t-morphism} tells us that if two points have the same image under a local t-morphism, then they have similar neighborhoods after suitable restriction. \begin{definition} Let $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$. A subset $Y\sub\R^{k-1}[x]$ is called {\em sufficient} if for all $y\in Y$, there exist $u,v\in Y$ such that $f(y)=f(u)=f(v)$ and $R[u]\cup\breve{R}[v]\sub Y$. We call $f$ {\em sufficient} if there is a nonempty sufficient set $Y\sub\mathsf{dom}(f)$. \end{definition} Intuitively, that $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$ is sufficient means that $\F\rsto\R^k[x]$ contains enough information to describe $\F'$. The readers can readily check that the following fact holds: \begin{fact}\label{fact:boundary-sufficient} Let $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$ and $Y\sub\R^{k-1}[x]$. Then $Y$ is sufficient if there exists $Z\sub Y$ such that $\R[Z]\sub Y$ and $f[Y\setminus Z]\sub f[Z]$. \end{fact} \begin{lemma}\label{lem:sufficient-full} Let $\F=(X,R)$ and $\F'=(X',R')$ be rooted frames, $x\in X$ and $x'\in X'$.
Suppose $k\in\mathbb{Z}^+$ and $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$ is sufficient. Then $f$ is full. \end{lemma} \begin{proof} Suppose $Y\sub\mathsf{dom}(f)$ is sufficient and $y\in Y$. We show by induction on $n$ that $(R')_\sharp^n[f(y)]\sub f[Y]$ for all $n\in\omega$. The case $n=0$ is trivial. Let $n>0$. Take any $z'\in(R')_\sharp^{n-1}[f(y)]$. By the induction hypothesis, $f(z)=z'$ for some $z\in Y$. Since $Y$ is sufficient, there are $u,v\in Y$ such that $z'=f(u)=f(v)$ and $R[u]\cup\breve{R}[v]\sub Y$. Since $Y\sub\R^{k-1}[x]$, we have $R'[z']=f[R[u]]\sub f[Y]$ and $\breve{R'}[z']=f[\breve{R}[v]]\sub f[Y]$. By the arbitrariness of $z'$, we see $\bigcup_{z'\in(R')_\sharp^{n-1}[f(y)]}(R')_\sharp[z']\sub f[Y]$, which entails $(R')_\sharp^n[f(y)]\sub f[Y]$. Then $X'=(R')_\sharp^\omega[f(y)]\sub f[Y]\sub\mathsf{ran}(f)\sub X'$, which means $f$ is full. \end{proof} \begin{corollary}\label{coro:tmorphism-ktmorphism} Let $\F=(X,R)$ and $\F'=(X',R')$ be rooted frames, $x\in X$ and $x'\in X'$. Suppose $k\in\mathbb{Z}^+$, $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$ and $\R^{k-1}[x]=X$. Then $f:\F\twoheadrightarrow\F'$. \end{corollary} \begin{proof} Since $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$ and $\R^{k-1}[x]=X$, we see $f:X\to X'$ is a t-morphism. Note that $X\sub\R^{k-1}[x]$ is sufficient, by Lemma \ref{lem:sufficient-full}, $f$ is full. Thus $f:\F\twoheadrightarrow\F'$. \end{proof} \begin{lemma}\label{lem:k-t-morphism} Let $\F=(X,R)$ and $\F'=(X',R')$ be frames, $x\in X$ and $x'\in X'$. Suppose $k\in\mathbb{Z}^+$ and $f:(\F,x)\twoheadrightarrow^{k}(\F',x')$. Then for every formula $\phi$ with $\mathsf{md}(\phi)\leq k$, \begin{center} $\F,x\md\phi$ implies $\F',x'\md\phi$. \end{center} \end{lemma} \begin{proof} Suppose $\F',V',x'\not\md\phi$. Let $V$ be a valuation in $\F$ such that $V(p)=f^{-1}[V'(p)]$ for all $p\in\mathsf{Prop}$.
It suffices to prove the following claim: \noindent\textbf{Claim:} For all $l\leq k$, $y\in\R^{k-l}[x]$ and $\psi\in\L$ with $\mathsf{md}(\psi)\leq l$, \begin{center} $\F,V,y\md\psi$ if and only if $\F',V',f(y)\md\psi$. \end{center} \textbf{Proof of Claim:} By induction on $l$. The case $l=0$ follows from the definition of $V$ immediately. Let $l>0$. By the induction hypothesis, $y$ and $f(y)$ agree on all formulas with modal depth at most $l-1$. Take any $\gamma\in\L$ with $\mathsf{md}(\gamma)\leq l-1$. Suppose $\F,V,y\md\bd\gamma$. Then $\F,V,z\md\gamma$ for some $z\in\breve{R}[y]$. Note that $z\in\R^{k-(l-1)}[x]$, by the induction hypothesis, $\F',V',f(z)\md\gamma$. Since $f(z)\in f[\breve{R}[y]]=\breve{R'}[f(y)]$, we see $\F',V',f(y)\md\bd\gamma$. Suppose $\F',V',f(y)\md\bd\gamma$. Then $\F',V',z'\md\gamma$ for some $z'\in\breve{R'}[f(y)]$. Since $\breve{R'}[f(y)]=f[\breve{R}[y]]$, there exists $z\in\breve{R}[y]$ such that $f(z)=z'$. Note that $z\in\R^{k-(l-1)}[x]$, by the induction hypothesis, $\F,V,z\md\gamma$, which entails $\F,V,y\md\bd\gamma$. Since $f[R[y]]= R'[f(y)]$, by a similar proof, we see $\F,V,y\md\B\gamma$ if and only if $\F',V',f(y)\md\B\gamma$. \hfill$\B$ By the claim above, taking $l=k$ and $\psi=\phi$, we see $\F,x\not\md\phi$. \end{proof} \begin{lemma}\label{lem:JankovLemma-k} Let $\F=(X,R)$ be a frame, $\G=(Y,S)$ an image-finite frame, $x\in X$ and $y\in Y$. Then for all $k\in\mathbb{Z}^+$, \begin{center} $(\F,x)\twoheadrightarrow^{k}(\G,y)$ if and only if $\F,x\not\md\neg\J^k(\G,y)$. \end{center} \end{lemma} \begin{proof} Suppose $(\F,x)\twoheadrightarrow^{k}(\G,y)$. Note that $\G,y\not\md\neg\J^k(\G,y)$ and $\mathsf{md}(\J^k(\G,y))=k$, by Lemma \ref{lem:k-t-morphism}, we have $\F,x\not\md\neg\J^k(\G,y)$. Suppose $\F,V,x\not\md\neg\J^k(\G,y)$ for some valuation $V$ in $\F$. Let $\tup{y_i:i\in n}$ be the enumeration of $\S^k[y]$ used in the definition of $\J^k(\G,y)$.
We define the function $f:\R^k[x]\to Y$ as follows: \begin{center} for all $z\in\R^k[x]$, $f(z)=y_i$ if and only if $z\md p_i$. \end{center} Since $x\md\nabla^k(p_0\vee\cdots\vee p_{n-1})$ and $x\md\nabla^k(p_i\to\neg p_j)$ for all $i\neq j$, we see $\R^k[x]\sub\bigcup_{i\in n}V(p_i)$ and $\R^k[x]\cap V(p_i)\cap V(p_j)=\ve$ for all $i\neq j$. Thus $f$ is well-defined. It suffices to show that $f:(\F,x)\twoheadrightarrow^k(\G,y)$. Since $y=y_0$ and $x\md p_0$, $f(x)=y$. Take any $z\in\R^{k-1}[x]$. Then $f(z)=y_i$ and $z\md p_i$ for some $i\in n$. For all $y_j\in Y$, we have \begin{center} $y_j\in S[f(z)]\Longrightarrow x\md\nabla^{k-1}(p_i\to\D p_j)\Longrightarrow z\md\D p_j\Longrightarrow y_j\in f[R[z]]$ \end{center} and \begin{center} $y_j\not\in S[f(z)]\Longrightarrow x\md\nabla^{k-1}(p_i\to\neg\D p_j)\Longrightarrow z\md\neg\D p_j\Longrightarrow y_j\not\in f[R[z]]$. \end{center} Thus $f[R[z]]= S[f(z)]$. Similarly, we see $f[\breve{R}[z]]= \breve{S}[f(z)]$. Hence $(\F,x)\twoheadrightarrow^{k}(\G,y)$. \end{proof} By Lemma \ref{lem:JankovLemma-k} and Corollary \ref{coro:tmorphism-ktmorphism}, we have \begin{theorem}\label{thm:JankovLemma} Let $\F=(X,R)$ be a frame, $\G=(Y,S)$ an image-finite frame and $y\in Y$. Let $k\in\mathbb{Z}^+$. Suppose $X=\R^k[x]$ for all $x\in X$. Then \begin{center} $\F\twoheadrightarrow\G$ if and only if $\F\not\md\neg\J^k(\G,y)$. \end{center} \end{theorem} \section{Tense logics over $\mathsf{S4}_t$ with bounding parameters} Recall that $\mathsf{S4}_t=\mathsf{K}_t\oplus\cset{\mathbf{4},\mathbf{T}}$, where $\mathbf{T}=\B p\to p$ and $\mathbf{4}=\B p\to\B\B p$. From this section on, we focus on tense logics in $\mathsf{NExt}(\mathsf{S4}_t)$. Unless otherwise specified, frames are always assumed to be reflexive and transitive. In this section, we introduce tense logics in $\mathsf{NExt}(\mathsf{S4}_t)$ with bounding parameters and investigate their basic properties. \begin{definition} Let $\gf=(X,R,A)\in\mathsf{GF}$ be a general frame and $x\in X$.
Let $k\in\mathbb{Z}^+$. Then we say {\em $x$ is of z-degree $k$ (notation: $\mathrm{zdg}(x)=k$)}, if $\R^{k-1}[x]\neq \R^{k}[x]=\R^\omega[x]$. In particular, $\mathrm{zdg}(x)=0$ if $\R^\omega[x]=\cset{x}$ and $\mathrm{zdg}(x)=\aleph_0$ if $\R^k[x]\neq\R^{k+1}[x]$ for any $k\in\omega$. We define the {\em z-degree $\mathrm{zdg}(\gf)$ of $\gf$} by $\mathrm{zdg}(\gf)=\mathrm{sup}\cset{\mathrm{zdg}(x):x\in X}$. \end{definition} \begin{definition} Let $\gf=(X,R,A)\in\mathsf{GF}$ be a general frame, $\alpha$ an ordinal and $\mathcal{Y}=\tup{y_i\in X:i<\alpha}$. Then \begin{itemize} \item $\mathcal{Y}$ is called a chain in $\gf$ if $Ry_\lambda y_\gamma$ for all $\lambda<\gamma<\alpha$; \item $\mathcal{Y}$ is called a strict chain in $\gf$ if it is a chain and $y_\lambda\not\in R[y_\gamma]$ for all $\lambda<\gamma<\alpha$; \item $\mathcal{Y}$ is called a (strict) co-chain in $\gf$ if it is a (strict) chain in $\breve{\gf}$; \item $\cset{y_i\in X:i<\alpha}$ is called an anti-chain in $\gf$ if $y_\lambda\not\in R[y_\gamma]$ for all $\lambda\neq\gamma<\alpha$. \end{itemize} The length $l(\mathcal{Y})$ of a strict chain $\mathcal{Y}=\tup{y_i\in X:i<\alpha}$ is defined to be $\alpha$. We say that $x\in X$ is of depth $n$ (notation: $\mathrm{dep}(x)=n$), if there exists a strict chain $\mathcal{Y}$ in $\gf\rsto R[x]$ with $l(\mathcal{Y})=n$ and there is no strict chain of greater length. Otherwise $x$ is said to be of infinite depth and we write $\mathrm{dep}(x)=\aleph_0$. We define the {\em depth $\mathrm{dep}(\gf)$ of $\gf$} by $\mathrm{dep}(\gf)=\mathrm{sup}\cset{\mathrm{dep}(x):x\in X}$. Let $n\in\mathbb{Z}^+$ and $x\in X$. We say that $x$ is of forth-width $n$ (notation: $\mathrm{wid}^+(x)=n$), if there exists an anti-chain $Y\sub R[x]$ with $|Y|=n$ and there is no anti-chain in $R[x]$ of greater size. Otherwise we write $\mathrm{wid}^+(x)=\aleph_0$. We say that $\gf$ is of forth-width $n$ (notation: $\mathrm{wid}^+(\gf)=n$), if $\mathrm{sup}\cset{\mathrm{wid}^+(x):x\in X}=n$.
Back-width is defined dually and we write $\mathrm{wid}^-(x)=n$ and $\mathrm{wid}^-(\gf)=n$ if $x$ and $\gf$ are of back-width $n$, respectively. \end{definition} \begin{definition} For each $n\in\mathbb{Z}^+$, let $(\mathbf{bz}_n)$, $(\mathbf{bw}^+_n)$ and $(\mathbf{bw}^-_n)$ denote the following formulas respectively: \begin{align*} \tag{$\mathbf{bz}_n$} \Delta^{n+1}p&\to\Delta^{n}p\\ \tag{$\mathbf{bw}^+_n$} \bigwedge_{i\leq n}\D p_i&\to\bigvee_{i\neq j\leq n}\D(p_i\wedge(p_j\vee\D p_j))\\ \tag{$\mathbf{bw}^-_n$} \bigwedge_{i\leq n}\bd p_i&\to\bigvee_{i\neq j\leq n}\bd(p_i\wedge(p_j\vee\bd p_j)) \end{align*} Moreover, we define the formula $(\mathbf{bd}_n)$ for each $n\in\mathbb{Z}^+$ as follows: \begin{align*} \mathbf{bd}_1 &= \D\B p_0\to p_0\\ \mathbf{bd}_{k+1} &= \D(\B p_{k}\wedge\neg\mathbf{bd}_k)\to p_{k} \end{align*} In particular, we define $\mathbf{bd}_\omega=\mathbf{bz}_\omega=\mathbf{bw}^+_\omega=\mathbf{bw}^-_\omega=\top$. Let $k,l,n,m\in\mathbb{Z}^*$. We define the tense logic $\mathsf{S4BP}^{k,l}_{n,m}$ by \begin{center} $\mathsf{S4BP}^{k,l}_{n,m}=\mathsf{S4}_t\oplus\cset{\mathbf{bd}_k,\mathbf{bz}_l,\mathbf{bw}^+_n,\mathbf{bw}^-_m}$. \end{center} \end{definition} It is not hard to verify that $\mathsf{S4.3}_t=\mathsf{S4BP}^{\omega,\omega}_{1,1}=\mathsf{S4BP}^{\omega,1}_{1,1}$ and that the following fact holds. \begin{fact}\label{fact:bounds} Let $\gf=(X,R,A)\in\mathsf{RF}$, $x\in X$ and $n\in\mathbb{Z}^+$. Then \begin{enumerate}[(1)] \item $\gf,x\md\mathbf{bd}_n$ if and only if $\mathrm{dep}(x)\leq n$. \item $\gf,x\md\mathbf{bz}_n$ if and only if $\mathrm{zdg}(x)\leq n$. \item $\gf,x\md\mathbf{bw}^+_n$ if and only if $\mathrm{wid}^+(x)\leq n$. \item $\gf,x\md\mathbf{bw}^-_n$ if and only if $\mathrm{wid}^-(x)\leq n$. \end{enumerate} \end{fact} \begin{proof} The proof of this fact is standard, see \cite{Chagrov1997}. \end{proof} \begin{definition} Let $\gf=(X,R,A)\in\mathsf{GF}$ be a general frame and $x\in X$.
The {\em cluster generated by $x$}, denoted by $C(x)$, is defined as follows: \[ C(x)=R[x]\cap \breve{R}[x] \] A subset $C\sub X$ is called a cluster in $\gf$ if $C=C(x)$ for some $x\in X$. Let $n\in\mathbb{Z}^+$. We say $\gf$ is of girth $n$ (notation: $\mathrm{gir}(\gf)=n$) if there exists a cluster $C$ in $\gf$ such that $|C|=n$ and there is no cluster in $\gf$ of larger size. We write $\mathrm{gir}(\gf)=\aleph_0$ if for all $k\in\mathbb{Z}^+$, there exists a cluster $C$ in $\gf$ such that $|C|>k$. \end{definition} \begin{lemma}\label{lem:cluster-n-formula} Let $\M=(X,R,V)$ be a model and $C\sub X$ a cluster. Suppose $n\in\omega$, $x,y\in C$ and $x\equiv_{\mathsf{Prop}(n)} y$. Then $x\equiv_{\L(n)} y$. \end{lemma} \begin{proof} The proof proceeds by induction on the complexity of $\phi$. The case $\phi\in\mathsf{Prop}(n)$ follows from $x\equiv_{\mathsf{Prop}(n)} y$ immediately and the Boolean cases are standard. Consider the case $\phi=\B\psi$. Suppose $\M,x\md\phi$. Then $\M,z\md\psi$ for all $z\in R[x]$. Since $C$ is a cluster and $x,y\in C$, we see $R[x]= R[y]$. Thus $\M,z\md\psi$ for all $z\in R[y]$ and so $\M,y\md\phi$. Symmetrically, $\M,y\md\phi$ implies $\M,x\md\phi$. Note that $\breve{R}[x]=\breve{R}[y]$, the proof for the case $\phi=\bd\psi$ is similar. \end{proof} \begin{proposition}\label{prop:finitecluster} Let $\gf=(X,R,A)\in\mathsf{RF}$ be finitely generated. Then $\mathrm{gir}(\gf)<\aleph_0$. \end{proposition} \begin{proof} Let $\gf$ be generated by $U_0,\cdots,U_{n-1}\in A$ for some $n\in\omega$. Let $V$ be a valuation in $\gf$ such that $V(p_i)=U_i$ for all $i<n$. Then $A=\cset{V(\phi):\phi\in\L(n)}$. Let $\M=(\gf,V)$. Since $\gf$ is differentiated, we have $x\not\equiv_{\L(n)}y$ for any distinct $x,y\in X$. Take any cluster $C$ in $\gf$. By Lemma \ref{lem:cluster-n-formula}, $x\not\equiv_{\mathsf{Prop}(n)}y$ for any distinct $x,y\in C$. Thus $|C|\leq 2^n<\aleph_0$.
Since $C$ was chosen arbitrarily, $\mathrm{gir}(\gf)\leq 2^n<\aleph_0$. \end{proof} \begin{lemma}\label{lem:nstep-fin} Let $n,m,k\in\mathbb{Z}^+$ and $\gf=(X,R,A)$ be a finitely generated refined frame for $\mathsf{S4BP}^{k,\omega}_{n,m}$. Then $\gf$ is image-finite. \end{lemma} \begin{proof} Let $x\in X$. By Fact \ref{fact:bounds}, every anti-chain in $R[x]$ or $\breve{R}[x]$ is of size at most $n$ or $m$ respectively, and every strict chain in them is of length at most $k$. Hence $R[x]$ and $\breve{R}[x]$ contain only finitely many clusters. Since $\gf$ is finitely generated, by Proposition \ref{prop:finitecluster}, every cluster in $\gf$ is finite. Thus $\R[x]=\cset{x}\cup R[x]\cup\breve{R}[x]$ is finite. \end{proof} \begin{theorem}\label{thm:BP-KC-FMP} Let $n,m,k,l\in\mathbb{Z}^+$ and $L\in\mathsf{NExt}(\mathsf{S4BP}^{k,\omega}_{n,m})$. Then $L$ is Kripke complete. Moreover, if $L\in\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$, then $L$ has the finite model property. \end{theorem} \begin{proof} Let $\mathcal{K}$ be the class of all rooted finitely generated refined frames of $L$. By Lemma~\ref{lem:nstep-fin}, every frame in $\mathcal{K}$ is image-finite. By Lemma \ref{lem:pointwise-finite-kappaF}(2), we see $L=\mathsf{Log}(\mathcal{K})=\mathsf{Log}(\cset{\kappa\gf:\gf\in\mathcal{K}})$, which entails that $L$ is Kripke complete. Suppose $L\in\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$. Take any $\gf=(X,R,A)\in\mathcal{K}$ and $x\in X$. Since $\gf$ is rooted, by Lemma \ref{lem:pointwise-finite-kappaF}(1) and Fact \ref{fact:bounds}, we see that $X=\R^l[x]$ is finite. Thus every frame in $\mathcal{K}$ is finite, which immediately entails that $L$ has the FMP. \end{proof} \begin{remark}\label{rem:K4BP} The readers can see that reflexivity of frames plays no role in the proofs above. In fact, it is also natural to drop the axiom $\mathbf{T}$ and define transitive tense logics $\mathsf{K4BP}^{k,l}_{n,m}$ with bounding parameters. The basic properties above can be easily generalized.
\end{remark} \section{Pretabular tense logics over $\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$}\label{sec:S4BP} In this section, we give a characterization of pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$ where $k,l,n,m\in\mathbb{Z}^+$. By Theorem \ref{thm:BP-KC-FMP}, every logic in $\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$ is Kripke complete. It turns out that a tense logic $L\in\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$ is pretabular if and only if $L=\mathsf{Log}(\F)$ for some rooted frame $\F$ with certain conditions. \begin{definition} Let $\F=(X,R)$ be a frame. Let $W^S=\cset{C(x):x\in X}$ and $R^S=\cset{\tup{C(x),C(y)}:Rxy}$. Then $\F^S=(W^S,R^S)$ is called the {\em skeleton of $\F$}. We call $\F$ a skeleton if $\F\iso\F^S$. \end{definition} \begin{definition} Let $\F=(X,R)$ be a frame. Then for each $\lambda\leq\omega$ and $x\in X$, we define the frame $\F^x_\lambda=(X^x_\lambda,R^x_\lambda)$ as follows: \begin{itemize} \item $X^x_\lambda=X\uplus N^x_\lambda$, where $N^x_\lambda=\cset{x_i:0<i\leq\lambda}$; \item $R^x_\lambda=R\cup(C^x_\lambda\times R[x])\cup(\breve{R}[x]\times C^x_\lambda)\cup(C^x_\lambda\times C^x_\lambda)$, where $C^x_\lambda=N^x_\lambda\cup\cset{x}$. \end{itemize} A frame $\G$ is called a {\em $\lambda$-pre-skeleton} if $\G\iso\F^x_\lambda$ for some skeleton $\F$ and $\lambda>0$. \end{definition} Intuitively, $\F^x_\lambda$ is the frame obtained from $\F$ by replacing one reflexive point $x$ in $\F$ by a cluster with $1+\lambda$ points. The readers can readily verify that pre-skeletons are those frames containing exactly 1 proper cluster. It should be clear that $\F$ and $\F^x_\lambda$ share the same skeleton and have the same z-degree, width and depth. In what follows, without loss of generality, we always assume that $X\cap N^x_\lambda=\ve$. \begin{lemma}\label{lem:cluster-clopen-S4BP} Let $\gf=(X,R,A)\in\mathsf{RF}_r(\mathsf{S4BP}^{k,l}_{n,m})$ for some $k,l,n,m\in\mathbb{Z}^+$. 
Then \begin{enumerate}[(1)] \item $\kappa\gf^S$ is finite. \item $C\in A$ for every cluster $C\sub X$. \end{enumerate} \end{lemma} \begin{proof} For (1), by Fact \ref{fact:bounds}, $\kappa\gf\in\mathsf{RF}_r(\mathsf{S4BP}^{k,l}_{n,m})$. Note that $\kappa\gf^S$ contains no proper cluster, $\kappa\gf^S$ is image-finite and (1) follows from Lemma \ref{lem:pointwise-finite-kappaF}(1). For (2), let $\tup{C_i}_{i\leq s}$ be an enumeration of the clusters in $\gf$ such that $C=C_s$. Let $c\in C$. For each $i<s$, take a point $c_i\in C_i$. Suppose $c_i\not\in R[c]$. Since $\gf$ is tight, $c_i\in U_i$ and $c\not\in\D U_i$ for some $U_i\in A$. Since $\gf$ is differentiated, $c_i\in U_i'$ and $c\not\in U_i'$ for some $U_i'\in A$. Let $V_i=(U_i\cap U_i')\cup\breve{R}[U_i\cap U_i']$. Then $c_i\in V_i$ and $c\not\in V_i$. Note that $\breve{R}[V_i]\sub V_i$, we have $C_i\sub V_i$ and $C\cap V_i=\ve$. Suppose $c_i\in R[c]$. Since $c_i\not\in C$, we see $c_i\not\in\breve{R}[c]$. By a similar argument, there exists $V_i\in A$ with $C_i\sub V_i$ and $C\cap V_i=\ve$. It is not hard to see that $C=X\setminus\bigcup_{i<s}V_i\in A$. \end{proof} \begin{lemma}\label{lem:pre-skeleton-inftofin} Let $\gf=(\F^x_\lambda,A)\in\mathsf{RF}_r(\mathsf{S4BP}^{k,l}_{n,m})$ for some $k,l,n,m\in\mathbb{Z}^+$ and $\lambda\geq\aleph_0$. Then $\gf\twoheadrightarrow\F^x_s$ for all $s\in\omega$. \end{lemma} \begin{proof} Let $s\in\omega$ and let $C^x_\lambda$ denote the cluster in $\gf$ generated by $x$. By Lemma \ref{lem:cluster-clopen-S4BP}, $C^x_\lambda\in A$. Note that $C^x_\lambda$ is infinite and $\gf$ is differentiated, there are pairwise disjoint nonempty $U_0,\cdots,U_{s}\in A$ such that $\bigcup_{i\leq s}U_i=C^x_\lambda$. Let $D=\cset{d_0,\cdots,d_{s}}=C^x_s$ denote the cluster in $\F^x_s$ generated by $x$. We define the map $f:\gf\to\F^x_s$ by \begin{align*} f(z)= \begin{cases} d_i, &\text{ if }z\in U_i,\\ z, &\text{ otherwise.} \end{cases} \end{align*} It is easy to check that $f:\gf\twoheadrightarrow\F^x_s$.
\end{proof} \begin{definition} Let $\M=(X,R,V)$ be a model and $C\sub X$ a cluster. A subset $D\sub C$ is called a $\mathsf{Prop}(n)$-approximation of $C$ if for all $c\in C$, $c\equiv_{\mathsf{Prop}(n)}d$ for some $d\in D$. \end{definition} \begin{lemma}\label{lem:selection-in-cluster} Let $\M=(X,R,V)$ be a model and $C\sub X$ a cluster. Let $Y$ be a subset of $X$ such that $X\setminus C\sub Y$ and $Y\cap C$ is a $\mathsf{Prop}(n)$-approximation of $C$. Then for all $y\in Y$ and $\phi\in\L(n)$, \begin{center} $\M,y\md\phi$ if and only if $\M\rsto Y,y\md\phi$. \end{center} \end{lemma} \begin{proof} Let $(Y,S)$ denote the frame underlying $\M\rsto Y$. The proof proceeds by induction on the complexity of $\phi$. The case $\phi\in\mathsf{Prop}(n)$ is trivial and the Boolean cases are standard. Consider the case $\phi=\B\psi$. Suppose $\M,y\md\phi$. Then $\M,z\md\psi$ for all $z\in R[y]$. Since $S[y]\sub R[y]$, by the induction hypothesis, $\M\rsto Y,z\md\psi$ for all $z\in S[y]$. Thus $\M\rsto Y,y\md\phi$. Suppose $\M,y\not\md\phi$. Then $\M,z\not\md\psi$ for some $z\in R[y]$. Assume $z\not\in C$. Since $X\setminus C\sub Y$, we see $z\in Y$. By the induction hypothesis, $\M\rsto Y,z\not\md\psi$ and so $\M\rsto Y,y\not\md\phi$. Assume $z\in C$. Since $Y\cap C$ is a $\mathsf{Prop}(n)$-approximation of $C$, there is $z'\in C\cap Y$ such that $z\equiv_{\mathsf{Prop}(n)}z'$. By Lemma \ref{lem:cluster-n-formula}, $\M,z'\not\md\psi$. By the induction hypothesis, $\M\rsto Y,z'\not\md\psi$ and so $\M\rsto Y,y\not\md\phi$. The case $\phi=\bd\psi$ can be proved in a similar way. \end{proof} \begin{lemma}\label{lem:ThFomega=CapThFn} Let $\F=(X,R)$ be a skeleton. Then for all $x\in X$, \begin{center} $\mathsf{Log}(\F^x_\omega)=\bigcap\limits_{n\in\omega}\mathsf{Log}(\F^x_n)$. \end{center} \end{lemma} \begin{proof} Note that $\F^x_\omega\twoheadrightarrow\F^x_n$ for each $n\in\omega$, we have $\mathsf{Log}(\F^x_\omega)\sub\bigcap_{n\in\omega}\mathsf{Log}(\F^x_n)$.
Suppose $\phi(p_0,\cdots,p_{m-1})\not\in\mathsf{Log}(\F^x_\omega)$. Then $\F^x_\omega,V\not\md\phi$ for some valuation $V$ in $\F^x_\omega$. Let $\M=(\F^x_\omega,V)$. Since $\mathsf{Prop}(m)$ is finite, there exists a finite $\mathsf{Prop}(m)$-approximation $C$ of $C^x_\omega$. Let $Y=(X^x_\omega\setminus C^x_\omega)\cup C\cup\cset{x}$. Then $C\cup\cset{x}$ is still a finite $\mathsf{Prop}(m)$-approximation of $C^x_\omega$ and $\F^x_\omega\rsto Y\iso\F^x_{s}$, where $s=|C\cup\cset{x}|-1$. Note that $X^x_\omega\setminus C^x_\omega\sub Y$, by Lemma \ref{lem:selection-in-cluster}, $\M\rsto Y\not\md\phi$, which entails $\phi\not\in\mathsf{Log}(\F^x_{s})$ and so $\phi\not\in\bigcap_{n\in\omega}\mathsf{Log}(\F^x_n)$. \end{proof} \begin{lemma}\label{lem:Finr-FMP} Let $L$ be a tense logic with $\mathbf{bz}_k\in L$ and $\cset{\F_i:i\in I}$ a family of frames. Suppose $L=\bigcap\cset{\mathsf{Log}(\F_i):i\in I}$. Then $\mathsf{Fin}_{r}(L)\sub\bigcup_{i\in I}\mathsf{TM}(\F_i)$. \end{lemma} \begin{proof} Let $\G\in\mathsf{Fin}_r(L)$ and $y\in\G$. Note that $\G\not\md\neg\J^{k+1}(\G,y)$, we see $\neg\J^{k+1}(\G,y)\not\in L$. Since $L=\bigcap\cset{\mathsf{Log}(\F_i):i\in I}$, $\F_i\not\md\neg\J^{k+1}(\G,y)$ for some $i\in I$. By Lemma \ref{lem:JankovLemma-k}, $(\F_i,x_i)\twoheadrightarrow^{k+1}(\G,y)$ for some $x_i$ in $\F_i$. Since $\mathbf{bz}_k\in L$ and $\F_i\md L$, we see $\mathsf{zdg}(\F_i)\leq k$. By Corollary~\ref{coro:tmorphism-ktmorphism}, we have $\F_i\twoheadrightarrow\G$. Hence $\G\in\mathsf{TM}(\F_i)$. \end{proof} \begin{lemma}\label{lem:finiteK4-ontopmorphism-cluster} Let $\F=(X,R)$ and $\G=(Y,S)$ be frames of finite depth and $x\in X$. Suppose $f:\F\twoheadrightarrow\G$ is an onto t-morphism and $|C(f(x))|\geq 2$. Then \begin{enumerate}[(1)] \item $|C(y)|\geq 2$ for some $y\in R[x]$. \item $|C(y')|\geq 2$ for some $y'\in\breve{R}[x]$. \end{enumerate} Moreover, if $\F=(\F')^x_n$ is an $n$-pre-skeleton for some $n\geq 1$, then \begin{enumerate} \item[(3)] for all $z\in X$, $C(z)$ is proper if and only if $C(f(z))$ is proper. \item[(4)] $f[C(x)]=C(f(x))$.
\end{enumerate} \end{lemma} \begin{proof} For (1), let $f(x)=y_0$ and $\mathrm{dep}(\F)=k$. Since $|C(y_0)|\geq 2$, there exists $y_1\in C(y_0)$ with $y_0\neq y_1$. Since $f$ is a t-morphism and $Sf(x)y_1$, there exists $x_1\in X$ such that $f(x_1)=y_1$ and $Rxx_1$. Again, since $Sf(x_1)y_0$, there exists $x_2\in X$ such that $f(x_2)=y_0$ and $Rx_1x_2$. By repeating this construction, we get an $R$-chain $\tup{x_i\in X:i\leq 2k+1}$ such that $x=x_0$, $f(x_{2i})=y_0$ and $f(x_{2i+1})=y_1$ for all $i\leq k$. Since $\mathrm{dep}(\F)=k$, this chain is not strict, i.e., $x_{m}\in R[x_{l}]$ for some $m<l\leq 2k+1$. Note that $Rx_{m+1}x_l$ and $Rx_lx_m$, by transitivity of $\F$, we see $Rx_{m+1}x_m$ and so $C(x_m)=C(x_{m+1})$. Note that $f(x_m)\neq f(x_{m+1})$, we have $x_m\neq x_{m+1}$ and so $|C(x_m)|\geq 2$. (2) can be proved symmetrically. For (3), suppose $C(f(z))$ is proper. Then $|C(f(z))|\geq 2$. By (1) and (2), there are $z_1\in R[z]$ and $z_2\in\breve{R}[z]$ such that $|C(z_1)|\geq 2$ and $|C(z_2)|\geq 2$. Since $\F$ is a pre-skeleton, there is exactly one proper cluster $C$ in $\F$. Thus $C(z_1)=C(z_2)=C$. Since $z_2RzRz_1$, we have $z\in C$ and so $C(z)=C$, which implies $C(z)$ is proper. Suppose $C(z)$ is proper. Then we have $z\in C(x)$, which entails $C(f(z))=C(f(x))$ is proper. For (4), it is clear that $f(y)\in C(f(x))$ for all $y\in C(x)$, which entails $f[C(x)]\sub C(f(x))$. Let $u\in C(f(x))$. Since $f$ is onto, $f(z)=u$ for some $z\in X$. Since $C(u)=C(f(x))$ is proper, by (3), $C(z)$ is proper. Note that $C(x)$ is the only proper cluster in $\F$, $z\in C(x)$ and so $u\in f[C(x)]$. Thus $f[C(x)]=C(f(x))$. \end{proof} \begin{lemma}\label{lem:preskeleton-tmorphism} Let $\F=(X,R)$ and $\G=(Y,S)$ be finite skeletons. Then the following are equivalent: \begin{enumerate}[(1)] \item $\F^x_n\twoheadrightarrow\G^y_m$ for some $1\leq m\leq n\leq\omega$; \item $\F^x_k\twoheadrightarrow\G^y_l$ for all $l\leq k\leq\omega$. \end{enumerate} \end{lemma} \begin{proof} Clearly (1) follows from (2).
Suppose $f:\F^x_n\twoheadrightarrow\G^y_m$. Let $X_0=X\setminus C(x)$ and $Y_0=Y\setminus C(y)$. By Lemma~\ref{lem:finiteK4-ontopmorphism-cluster}(3-4), $f[C^x_n]=C^y_m$ and $f[X_0]=Y_0$. For any $k\geq l$, there is an onto map $g:C^x_k\to C^y_l$. Let $f'=f\rsto X_0$ and $h=g\cup f'$. Clearly, $h:\F^x_k\twoheadrightarrow\G^y_l$. \end{proof} \begin{definition} Let $\lambda>0$. Then a $\lambda$-pre-skeleton $\F^x_\lambda$ is called c-irreducible if \begin{center} $\mathsf{TM}(\F^x_\lambda)=\mathsf{Iso}(\cset{\F^x_m:0<m\leq\lambda})\cup\mathsf{TM}(\F)$. \end{center} Otherwise, $\F^x_\lambda$ is called c-reducible. \end{definition} Intuitively, a pre-skeleton $\F^x_\lambda$ is c-irreducible if and only if $\F^x_\lambda$ does not admit any proper t-morphic image with the same girth. \begin{example} Consider the frames $\F_1$ and $\F_2$ in Figure \ref{fig:example-c-irreducible}. We see that $\F_1$ is c-reducible, since $\F_2$ is a t-morphic image of $\F_1$. However, $\F_2$ is c-irreducible, since for every non-injective t-morphism $f$, the $f$-image is always a skeleton.
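The t-morphism witnessing that $\F_2$ is a t-morphic image of $\F_1$ can also be checked mechanically. The following Python sketch is illustrative only: the finite frames encoded below are one plausible reading of Figure \ref{fig:example-c-irreducible} (a reflexive root $r$ seeing a two-element cluster and one or two maximal points), and the names \texttt{closure} and \texttt{is\_t\_morphism} are our own.

```python
def closure(points, arcs):
    """Reflexive-transitive closure of `arcs` over `points`."""
    R = {(p, p) for p in points} | set(arcs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(R):
            for (c, d) in list(R):
                if b == c and (a, d) not in R:
                    R.add((a, d))
                    changed = True
    return R

def is_t_morphism(f, X, R, Y, S):
    """Check that f is an onto t-morphism: f[R[y]] = S[f(y)] and
    f[R-breve[y]] = S-breve[f(y)] for every point y."""
    if {f[x] for x in X} != set(Y):
        return False  # not onto
    for y in X:
        if {f[z] for z in X if (y, z) in R} != {w for w in Y if (f[y], w) in S}:
            return False  # forward condition fails
        if {f[z] for z in X if (z, y) in R} != {w for w in Y if (w, f[y]) in S}:
            return False  # backward condition fails
    return True

# F_1: root r below the cluster {c1, c2} and two maximal points x1, x2.
X1 = {"r", "c1", "c2", "x1", "x2"}
R1 = closure(X1, {("r", "c1"), ("r", "x1"), ("r", "x2"),
                  ("c1", "c2"), ("c2", "c1")})
# F_2: root r below the cluster {c1, c2} and one maximal point x1.
X2 = {"r", "c1", "c2", "x1"}
R2 = closure(X2, {("r", "c1"), ("r", "x1"), ("c1", "c2"), ("c2", "c1")})

# Collapsing x2 onto x1 is an onto t-morphism, so F_1 is c-reducible.
f = {"r": "r", "c1": "c1", "c2": "c2", "x1": "x1", "x2": "x1"}
print(is_t_morphism(f, X1, R1, X2, R2))  # True
```

The check mirrors the definition of t-morphism used throughout: forward and backward image conditions at every point, plus surjectivity.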
\begin{figure}[ht] \[ \begin{tikzpicture}[scale=1] \draw (0.5,-1.2) node[below]{$\F_1$}; \draw (0,0) node{$2$}; \draw (0,0) ellipse (.3 and .3); \draw [->] (0.52,-1.05) -- (0.76,-.1); \draw (0.5,-1.1) node{$\circ$}; \draw [->] (0.45,-1.05) -- (.05,-.3); \draw (.8,-.05) node{$\circ$}; \draw (.8,-.05) node[above]{$x_1$}; \draw [->] (0.55,-1.05) -- (1.45,-.1); \draw (1.5,-.05) node{$\circ$}; \draw (1.5,-.05) node[above]{$x_2$}; \end{tikzpicture} \quad\quad\quad\quad\quad\quad \begin{tikzpicture}[scale=1] \draw (0.5,-1.2) node[below]{$\F_2$}; \draw (0,0) node{$2$}; \draw (0,0) ellipse (.3 and .3); \draw [->] (0.55,-1.05) -- (0.95,-.1); \draw (0.5,-1.1) node{$\circ$}; \draw [->] (0.45,-1.05) -- (.05,-.3); \draw (1,-.05) node{$\circ$}; \end{tikzpicture} \] \caption{A c-reducible frame and a c-irreducible frame} \label{fig:example-c-irreducible} \end{figure} \end{example} \begin{lemma}\label{lem:c-irreducible-2-omega} Let $\F=(X,R)$ be a skeleton and $x\in X$ a reflexive point. Then \begin{center} $\F^x_1$ is c-irreducible if and only if $\F^x_\omega$ is c-irreducible. \end{center} \end{lemma} \begin{proof} The right-to-left direction follows from Lemma \ref{lem:finiteK4-ontopmorphism-cluster}(4) immediately. For the left-to-right direction, take any $\H'\in\mathsf{TM}(\F^x_\omega)$. Let $h:\F^x_\omega\twoheadrightarrow\H'$. By Lemma~\ref{lem:finiteK4-ontopmorphism-cluster}, $\H'$ is of the form $\H^z_m$ for some skeleton $\H$ and $m\leq\omega$. If $m=0$, then $\H'\in\mathsf{TM}(\F)$ by Lemma~\ref{lem:preskeleton-tmorphism}. Suppose $m>0$. By Lemma~\ref{lem:preskeleton-tmorphism}, $\H^z_1\in\mathsf{TM}(\F^x_1)$. Note that $\mathsf{TM}(\F^x_1)=\mathsf{Iso}(\cset{\F^x_1})\cup\mathsf{TM}(\F)$ and every frame in $\mathsf{TM}(\F)$ contains no proper cluster, $\H^z_1\iso\F^x_1$. Thus $\H'\iso\H^z_m\iso\F^x_m$. \end{proof} \begin{lemma}\label{lem:pretabularity-fin-skeleton} Let $\F=(X,R)$ be a rooted finite skeleton and $x\in X$.
Then \begin{center} $\mathsf{Log}(\F^x_\omega)$ is pretabular if and only if $\F^x_1$ is c-irreducible. \end{center} \end{lemma} \begin{proof} Suppose $\F^x_1$ is c-reducible. Then there exists a frame $\G_0$ such that $\G_0\not\iso\F^x_1$ and $\G_0\in\mathsf{TM}(\F^x_1)\setminus\mathsf{TM}(\F)$. Let $f:\F^x_1\twoheadrightarrow\G_0$. Note that $C^x_1$ is the unique proper cluster in $\F^x_1$. We claim $|f[C^x_1]|>1$. Suppose $|f[C^x_1]|=1$. Then $f[C^x_1]=\cset{y_0}$ for some $y_0\in\G_0$. Then we define $f':\F\to\G_0$ by $f'(x)=y_0$ and $f'(z)=f(z)$ for all $z\in X\setminus\cset{x}$. It is clear that $f':\F\twoheadrightarrow\G_0$, which entails $\G_0\in\mathsf{TM}(\F)$ and leads to a contradiction. Thus $f[C^x_1]=C(f(x))$ is a proper cluster in $\G_0$. By Lemma \ref{lem:finiteK4-ontopmorphism-cluster}(3), $\G_0$ is a 1-pre-skeleton and so $\G_0\cong\G^y_1$ for some $\G=(Y,S)$. It suffices to show $\mathsf{Log}(\G^y_\omega)\supsetneq\mathsf{Log}(\F^x_\omega)$. Since $\F^x_1\twoheadrightarrow\G^y_1$, by Lemma \ref{lem:preskeleton-tmorphism}, $\F^x_\omega\twoheadrightarrow\G^y_\omega$ and so $\mathsf{Log}(\G^y_\omega)\supseteq\mathsf{Log}(\F^x_\omega)$. Consider the formula $\phi=\neg\J^k(\F^x_1,x)$ where $k=\mathsf{zdg}(\F)+1$. We show that $\phi\in\mathsf{Log}(\G^y_\omega)\setminus\mathsf{Log}(\F^x_\omega)$. Clearly, $\phi\not\in\mathsf{Log}(\F^x_\omega)$. Suppose $\phi\not\in\mathsf{Log}(\G^y_\omega)$. By Lemma \ref{lem:ThFomega=CapThFn}, $\G^y_m\not\md\phi$ for some $m\in\omega$. By Theorem \ref{thm:JankovLemma}, $\G^y_m\twoheadrightarrow\F^x_1$. By Lemma \ref{lem:preskeleton-tmorphism}, $\G_0=\G^y_1\twoheadrightarrow\F^x_1$. Thus $|\F^x_1|=|\G_0|$ and so $f:\F^x_1\twoheadrightarrow\G_0$ is injective. By Fact \ref{fact:inj-tmorphism}, $\G_0\iso\F^x_1$, which contradicts the assumption. Suppose $\F^x_1$ is c-irreducible. By Lemma \ref{lem:c-irreducible-2-omega}, $\F^x_\omega$ is c-irreducible. Take any $L\supsetneq\mathsf{Log}(\F^x_\omega)$.
By Theorem~\ref{thm:BP-KC-FMP}, $L=\mathsf{Log}(\mathsf{Fin}_r(L))$. By Lemma \ref{lem:Finr-FMP}, we see \begin{center} $\mathsf{Fin}_r(L)\subsetneq\mathsf{Fin}_r(\mathsf{Log}(\F^x_\omega))\sub\mathsf{TM}(\F^x_\omega)=\mathsf{Iso}(\cset{\F^x_{m}:0<m\leq\omega})\cup\mathsf{TM}(\F)$. \end{center} Thus $\mathsf{Fin}_r(L)\sub\mathsf{Iso}(\cset{\F^x_{m}:0<m\leq n})\cup\mathsf{TM}(\F)$ for some $n\in\mathbb{Z}^+$. So $L=\mathsf{Log}(\mathsf{Fin}_r(L))$ is tabular. Clearly, $\mathsf{Log}(\F^x_\omega)$ is non-tabular. Hence $\mathsf{Log}(\F^x_\omega)$ is pretabular. \end{proof} \begin{theorem}\label{thm:pretabular-S4BP-ch} Let $L\in\mathsf{NExt}(\mathsf{S4BP}^{k,l}_{n,m})$ for some $k,l,n,m\in\mathbb{Z}^+$. Then $L$ is pretabular if and only if $L=\mathsf{Log}(\F^x_\omega)$ for some c-irreducible rooted pre-skeleton $\F^x_\omega$. \end{theorem} \begin{proof} The right-to-left direction follows from Lemma \ref{lem:pretabularity-fin-skeleton} immediately. For the other direction, suppose $L$ is pretabular. By Theorem \ref{thm:nontabular-infrootedframe}, $L=\mathsf{Log}(\gf')$ for some infinite rooted refined frame $\gf'$. Since $\gf'\md\mathsf{S4BP}^{k,l}_{n,m}$, we see that $\kappa\gf'$ is finite and so $\kappa\gf'\iso\F^x_\lambda$ for some frame $\F^x_\lambda$ and $\lambda\geq\aleph_0$. By Lemma \ref{lem:pre-skeleton-inftofin}, $\gf'\twoheadrightarrow\F^x_n$ for all $n\in\mathbb{Z}^+$. By Lemma \ref{lem:ThFomega=CapThFn}, \begin{center} $\mathsf{Log}(\F^x_\lambda)=\mathsf{Log}(\kappa\gf')\sub\mathsf{Log}(\gf')\sub\bigcap_{0<i<\omega}\mathsf{Log}(\F^x_i)=\mathsf{Log}(\F^x_\lambda)=\mathsf{Log}(\F^x_\omega)$. \end{center} Since $L=\mathsf{Log}(\gf')$ is pretabular, $L=\mathsf{Log}(\F^x_\omega)$. Let $\G=(\F^x_\omega)^S$. Consider the frame $\G^{C(x)}_\omega$. Note that $\G^{C(x)}_\omega$ is isomorphic to the frame obtained from $\F^x_\omega$ by collapsing all proper clusters except $C(x)$. It is clear that $\F^x_\omega\twoheadrightarrow\G^{C(x)}_\omega$.
Since $L$ is pretabular, $L=\mathsf{Log}(\G^{C(x)}_\omega)$. Since $\G$ is a finite skeleton, by Lemma \ref{lem:pretabularity-fin-skeleton}, $\G^{C(x)}_\omega$ is c-irreducible. \end{proof} \begin{lemma}\label{lem:irreducible-frame-same-logic-implies-iso} Let $\F^x_\omega$ and $\G^y_\omega$ be finite c-irreducible rooted pre-skeletons. Then \begin{center} $\mathsf{Log}(\F^x_\omega)=\mathsf{Log}(\G^y_\omega)$ implies $\F^x_1\iso\G^y_1$. \end{center} \end{lemma} \begin{proof} Let $k=\mathsf{zdg}(\F)+\mathsf{zdg}(\G)$. Since $\F^x_1\md\mathsf{Log}(\F^x_\omega)$ and $\F^x_1,x\not\md\neg\J^k(\F^x_1,x)$, we see $\neg\J^k(\F^x_1,x)\not\in\mathsf{Log}(\G^y_\omega)$ and so $\G^y_\omega\not\md\neg\J^k(\F^x_1,x)$. By Theorem \ref{lem:JankovLemma-k} and Corollary \ref{coro:tmorphism-ktmorphism}, $\G^y_\omega\twoheadrightarrow\F^x_1$. By Lemma \ref{lem:preskeleton-tmorphism}, $\G^y_1\twoheadrightarrow\F^x_1$. Symmetrically, we can show that $\F^x_1\twoheadrightarrow\G^y_1$. Then $|\G^y_1|=|\F^x_1|$. By Fact \ref{fact:inj-tmorphism}, $\F^x_1\iso\G^y_1$. \end{proof} \begin{theorem} For all $k,l,n,m\in\mathbb{Z}^+$, $|\mathsf{Pre}(\mathsf{S4BP}^{k,l}_{n,m})|<\aleph_0$. \end{theorem} \begin{proof} Take any skeleton $\F\in\mathsf{Fr}_r(\mathsf{S4BP}^{k,l}_{n,m})$ and $x\in\F$. It is not hard to see that $|\F|\leq|\R^l[x]|<{(k(m+n))}^l$. Thus there are only finitely many skeletons validating $\mathsf{S4BP}^{k,l}_{n,m}$. By Theorem \ref{thm:pretabular-S4BP-ch}, $|\mathsf{Pre}(\mathsf{S4BP}^{k,l}_{n,m})|<\aleph_0$. \end{proof} \begin{theorem} For all $k,l,n,m\in\mathbb{Z}^+$ and $L\in\mathsf{Pre}(\mathsf{S4BP}^{k,l}_{n,m})$, $L$ has the FMP. \end{theorem} \begin{proof} Follows from Lemma \ref{lem:ThFomega=CapThFn} and Theorem \ref{thm:pretabular-S4BP-ch} immediately. \end{proof} \section{Pretabular tense logics in $\mathsf{NExt}(\mathsf{S4.3}_t)$} In Section \ref{sec:S4BP}, we studied pretabular tense logics of finite width, depth and z-degree and obtained a full characterization.
In this section, let us move to the tense logic $\mathsf{S4.3}_t$ of linear frames and its extensions. The reader can readily verify that $\mathsf{S4.3}_t=\mathsf{S4BP}_{1,1}^{\omega,1}$, which means that it has an extremely strong bound on width and z-degree, but no restriction on depth. By the characterization of pretabular modal logics over $\mathsf{S4}$ in \cite{ESAKIA1977,Maksimova1975}, we see $|\mathsf{Pre}(\mathsf{S4.3})|=3$, where $\mathsf{S4.3}$ is the modal logic of linear frames. The aim of this section is to show the following theorem: \begin{theorem} There are exactly 5 pretabular tense logics in $\mathsf{NExt}(\mathsf{S4.3}_t)$. \end{theorem} Let $L^\ua$ denote the tense logic of all finite chains. To be precise, for each $n\in\mathbb{Z}^+$, let $\C^\ua_n=(n,\geq)$. We define $L^\ua=\bigcap_{n=1}^\omega\mathsf{Log}(\C^\ua_n)$. The first thing we show in this section is that $L^\ua$ is the only pretabular logic in $\mathsf{NExt}(\mathsf{S4.3}_t)$ with infinite depth. \begin{lemma}\label{lem:S4.3-infdep-morphism} Let $\gf\in\mathsf{RF}_r(\mathsf{S4.3}_t)$ and $n\in\mathbb{Z}^+$. If $\mathrm{dep}(\gf)>n$, then $\C^\ua_{n+1}\in\mathsf{TM}(\gf)$. \end{lemma} \begin{proof} Let $\gf=(X,R,A)$. Since $\mathrm{dep}(\gf)>n$, by Fact \ref{fact:bounds}(1), $\gf\not\md\mathbf{bd}_n$. Then there exists a valuation $V$ and a co-chain $\tup{x_i:i\leq n}$ in $\gf$ such that $\gf,V,x_i\md\B p_i$ and $\gf,V,x_{i+1}\md\neg p_i$ for all $i<n$. We define the function $f:X\to n+1$ by: \begin{align*} f(x)= \begin{cases} i, &\text{ if }x\in V(\B p_i)\setminus V(\B p_{i+1})\text{ and }i<n\\ n, &\text{ otherwise.} \end{cases} \end{align*} Clearly, $f^{-1}[D]\in A$ for all $D\sub n+1$. Take any $j<n$ and $x\in V(\B p_j)$. Since $\gf$ is linear and $x_{j+1}\md\B p_{j+1}\wedge\neg p_j$, we see $x\in R[x_{j+1}]$ and so $x\in V(\B p_{j+1})$. Moreover, $x_{i+1}\in V(\B p_{i+1})\setminus V(\B p_i)$ for all $i<n$. Thus $V(\B p_i)\subsetneq V(\B p_{i+1})$ for all $i<n$. Hence $f:\gf\twoheadrightarrow\C^\ua_{n+1}$.
\end{proof} \begin{theorem}\label{thm:infchain-Lua} Let $L\in\mathsf{NExt}(\mathsf{S4.3}_t)$. Then $L\sub L^\ua$ iff $\mathbf{bd}_n\not\in L$ for any $n\in\mathbb{Z}^+$. \end{theorem} \begin{proof} The left-to-right direction is trivial. Suppose $\mathbf{bd}_n\not\in L$ for any $n\in\mathbb{Z}^+$. Then for each $n\in\mathbb{Z}^+$, there exists $\gf_n\in\mathsf{RF}_r(L)$ with $\gf_n\not\md\mathbf{bd}_n$. By Fact \ref{fact:bounds}(1) and Lemma \ref{lem:S4.3-infdep-morphism}, $\gf_n\twoheadrightarrow\C^\ua_{n+1}$, which entails $\C^\ua_{n+1}\md L$. Hence $L\sub\bigcap_{n=1}^\omega\mathsf{Log}(\C^\ua_n)=L^\ua$. \end{proof} \begin{lemma}\label{lem:grz.3-pretabular} $L^\ua$ is pretabular. \end{lemma} \begin{proof} Let $L\supsetneq L^\ua$. By Theorem \ref{thm:infchain-Lua}, $\mathbf{bd}_n\in L$ for some $n\in\mathbb{Z}^+$. Since $\mathbf{bz}_1\in L^\ua$, by Lemma \ref{lem:Finr-FMP}, $\mathsf{Fin}_r(L^\ua)=\bigcup_{n\in\mathbb{Z}^+}\mathsf{TM}(\C^\ua_n)$. Thus $\mathsf{Fin}_r(L)\sub\mathsf{Iso}(\cset{\C^\ua_m:m\leq n})$. By Theorem \ref{thm:BP-KC-FMP}, $L=\mathsf{Log}(\mathsf{Fin}_r(L))$ is tabular. Thus $L^\ua$ is pretabular. \end{proof} By Theorem \ref{thm:infchain-Lua}, $L^\ua$ is the maximal tense logic in $\mathsf{NExt}(\mathsf{S4.3}_t)$ with infinite depth. Thus $L^\ua$ is the only pretabular logic in $\mathsf{NExt}(\mathsf{S4.3}_t)$ with infinite depth. To characterize logics in $\mathsf{Pre}(\mathsf{S4.3}_t)$ with finite depth, we introduce some auxiliary definitions. \begin{definition} Let $\mathsf{tp}=\cset{\pm,+,-,\circ}$. For all $\lambda\leq\omega$, we define $\C^\circ_\lambda=(\C^\ua_1)^0_\lambda$, $\C^+_\lambda=(\C^\ua_2)^1_\lambda$, $\C^-_\lambda=(\C^\ua_2)^0_\lambda$ and $\C^\pm_\lambda=(\C^\ua_3)^1_\lambda$. For all $t\in\mathsf{tp}$, let $L^t=\mathsf{Log}(\C^t_\omega)$.
\end{definition} \begin{figure}[ht] \[ \begin{tikzpicture}[scale=1.5] \draw (0,-1.2) node[below]{$\C^\circ_\lambda$}; \draw (0,0) node{$1+\lambda$}; \draw (0,0) ellipse (.3 and .3); \end{tikzpicture} \quad\quad\quad \begin{tikzpicture}[scale=1.5] \draw (0,-1.2) node[below]{$\C^+_\lambda$}; \draw (0,0) node{$1+\lambda$}; \draw (0,0) ellipse (.3 and .3); \draw [->] (0,.3) -- (0,1); \draw (0,1.05) node{$\circ$}; \end{tikzpicture} \quad\quad\quad \begin{tikzpicture}[scale=1.5] \draw (0,-1.2) node[below]{$\C^-_\lambda$}; \draw (0,0) node{$1+\lambda$}; \draw (0,0) ellipse (.3 and .3); \draw [->] (0,-1) -- (0,-.3); \draw (0,-1.05) node{$\circ$}; \end{tikzpicture} \quad\quad\quad \begin{tikzpicture}[scale=1.5] \draw (0,-1.2) node[below]{$\C^\pm_\lambda$}; \draw (0,0) node{$1+\lambda$}; \draw (0,0) ellipse (.3 and .3); \draw [->] (0,.3) -- (0,1); \draw (0,1.05) node{$\circ$}; \draw [->] (0,-1) -- (0,-.3); \draw (0,-1.05) node{$\circ$}; \end{tikzpicture} \] \caption{Frames ${\C^\circ_\lambda}$, $\C^+_\lambda$, $\C^-_\lambda$ and $\C^\pm_\lambda$} \label{fig:pretabular-S4.3} \end{figure} The frames ${\C^\circ_\lambda}$, $\C^+_\lambda$, $\C^-_\lambda$ and $\C^\pm_\lambda$ are depicted in Figure~\ref{fig:pretabular-S4.3}. In what follows, we show that $\mathsf{Pre}(\mathsf{S4.3}_t)=\cset{L^\pm,L^+,L^-,L^\circ,L^\ua}$. \begin{lemma}\label{lem:pretab-S43-fin-depth} For all $t\in\mathsf{tp}$, $L^t$ is pretabular. \end{lemma} \begin{proof} Since $\C^t_1$ is c-irreducible, by Lemma \ref{lem:pretabularity-fin-skeleton}, $L^t=\mathsf{Log}(\C^t_\omega)$ is pretabular. \end{proof} \begin{lemma}\label{lem:4-pretab-S43-fin-depth} Let $t,s\in\mathsf{tp}$. Then $t=s$ if and only if $L^t=L^s$. \end{lemma} \begin{proof} Follows from Lemma \ref{lem:irreducible-frame-same-logic-implies-iso} immediately. \end{proof} \begin{lemma}\label{lem:c-irreducible-S4.3-finite-depth} Let $\F=(X,R)\in\mathsf{Fr}_r(\mathsf{S4.3}_t)$ be a skeleton of finite depth and $x\in X$.
Then \begin{center} $\F^x_\omega$ is c-irreducible if and only if $\F^x_\omega\in\mathsf{Iso}(\cset{\C^t_\omega:t\in\mathsf{tp}})$. \end{center} \end{lemma} \begin{proof} The right-to-left direction is trivial. For the other direction, suppose $\F^x_\omega$ is c-irreducible. Let $Y_0=R[x]\setminus C(x)$ and $Y_1=\breve{R}[x]\setminus C(x)$. We claim that $|Y_0|<2$ and $|Y_1|<2$. Suppose $|Y_0|\geq 2$. Then let $\G=(X_0,R_0)$ where $X_0=Y_1\cup C(x)\cup\cset{y}$ for some fresh point $y$ and $R_0$ is the transitive-reflexive closure of $(Y_1\times C(x))\cup (C(x)\times\cset{y})$. It is obvious that $\G\in\mathsf{TM}(\F^x_\omega)\setminus\mathsf{Iso}(\cset{\F^x_n:n\leq\omega})$, which contradicts the c-irreducibility of $\F^x_\omega$. Similarly, $|Y_1|\geq 2$ implies $\F^x_\omega$ is c-reducible, which is impossible. Thus $|Y_0|<2$ and $|Y_1|<2$. The reader can verify that $\tup{|Y_0|,|Y_1|} = \tup{0,0}$, $\tup{0,1}$, $\tup{1,0}$, and $\tup{1,1}$ correspond to $\F^x_\omega \iso \C^\circ_\omega$, $\C^+_\omega$, $\C^-_\omega$, and $\C^\pm_\omega$, respectively. \end{proof} \begin{theorem} $\mathsf{Pre}(\mathsf{S4.3}_t)=\cset{L^\pm,L^+,L^-,L^\circ,L^\ua}$ and $|\mathsf{Pre}(\mathsf{S4.3}_t)|=5$. \end{theorem} \begin{proof} By Lemma \ref{lem:grz.3-pretabular}, Lemma \ref{lem:pretab-S43-fin-depth} and Lemma \ref{lem:4-pretab-S43-fin-depth}, $\cset{L^\pm,L^+,L^-,L^\circ,L^\ua}\sub\mathsf{Pre}(\mathsf{S4.3}_t)$ and $|\cset{L^\pm,L^+,L^-,L^\circ,L^\ua}|=5$. Take any $L\in\mathsf{Pre}(\mathsf{S4.3}_t)$. Suppose $\mathbf{bd}_n\not\in L$ for any $n\in\mathbb{Z}^+$. By Theorem \ref{thm:infchain-Lua}, $L\sub L^\ua$, which entails $L=L^\ua$. Suppose $\mathbf{bd}_n\in L$ for some $n\in\mathbb{Z}^+$. Then $L\in\mathsf{NExt}(\mathsf{S4BP}^{n,1}_{1,1})$. By Theorem \ref{thm:pretabular-S4BP-ch}, $L=\mathsf{Log}(\F^x_\omega)$ for some c-irreducible rooted $\F^x_\omega$.
By Lemma~\ref{lem:c-irreducible-S4.3-finite-depth}, $\F^x_\omega\in\mathsf{Iso}(\cset{\C^t_\omega:t\in\mathsf{tp}})$ and so $L\in\cset{L^t:t\in\mathsf{tp}}$. \end{proof} \section{Pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,2})$} In this section, we study pretabular logics over $\mathsf{S4BP}_{2,2}^{2,\omega}$. Compared with $\mathsf{S4.3}_t$, the tense logic $\mathsf{S4BP}^{2,\omega}_{2,2}$ has weaker constraints on the width of logics, but a stronger constraint on the depth. As we will show in this section later, rooted frames of $\mathsf{S4BP}^{2,\omega}_{2,2}$ are garlands and hoops. A full characterization of $\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$ is given and it turns out that the cardinality of $\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$ is $\aleph_0$. Thus we have a constructive proof for the claim in \cite{Rautenberg1979} that $\mathsf{Pre}(\mathsf{S4}_t)$ is infinite. Before characterizing the pretabular logics in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,2})$, we need to define some finite skeletons of $\mathsf{S4BP}^{2,\omega}_{2,2}$, which play an important role in our proof. \begin{definition} Let $\G_\mathbb{Z}$ denote the frame $(\mathbb{Z},R_z)$ where \begin{center} $R_z=\cset{\tup{i,i}:i\in\mathbb{Z}}\cup\cset{\tup{2i+1,2i}:i\in\mathbb{Z}}\cup\cset{\tup{2i+1,2i+2}:i\in\mathbb{Z}}$. \end{center} For all $\lambda\leq\omega$, we define $\G_\lambda$ as the frame $\G_\mathbb{Z}\rsto(1+\lambda)$. For each $n\in\mathbb{O}$, we define $\H_n$ as the frame $(1+n,(R_z\rsto{(1+n)})\cup\cset{\tup{n,0}})$. We call $\G_n$ an $n$-garland and $\H_n$ an $n$-hoop. Let $\mathcal{G}=\cset{\G_n:n\in\omega}$, $\breve{\mathcal{G}}=\cset{\breve{\G_n}:n\in\omega}$ and $\mathcal{H}=\cset{\H_n:n\in\mathbb{O}}$. \end{definition} The frames $\G_\mathbb{Z}$, $\G_n$ and $\H_n$ are depicted in Figure~\ref{fig:BS222}.
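Since garlands and hoops are small finite digraphs, the definitions above can be explored mechanically. The following Python sketch is ours, not part of the paper: it encodes a frame as a pair of a point set and a relation, builds $\G_n$ and $\H_n$, and checks a surjective t-morphism condition (forth, back, and the back condition for the converse relation) for a sample folding map between garlands.

```python
# Illustrative encoding (ours): a frame is a pair (points, relation).  For
# garlands and hoops the listed pairs are already reflexive and transitive,
# since odd points only see themselves and their even neighbours.

def garland(n):
    """G_n: points 0..n, odd points minimal, each seeing its even neighbours."""
    pts = set(range(n + 1))
    R = {(i, i) for i in pts}
    R |= {(i, i - 1) for i in pts if i % 2 == 1}
    R |= {(i, i + 1) for i in pts if i % 2 == 1 and i + 1 in pts}
    return pts, R

def hoop(n):
    """H_n (n odd): the garland on 0..n closed up by the extra pair (n, 0)."""
    pts, R = garland(n)
    R.add((n, 0))
    return pts, R

def is_tense_morphism(F, G, f):
    """Surjective t-morphism: forth and back for R, plus back for the
    converse of R (the tense-logical ingredient)."""
    (X, R), (Y, S) = F, G
    forth = all((f[x], f[y]) in S for (x, y) in R)
    back = all(any((x, y) in R and f[y] == z for y in X)
               for x in X for z in Y if (f[x], z) in S)
    cback = all(any((y, x) in R and f[y] == z for y in X)
                for x in X for z in Y if (z, f[x]) in S)
    return forth and back and cback and set(f.values()) == Y

X, R = garland(6)
# Degree bounds in the spirit of the width axioms: in- and out-degree <= 3.
assert all(len([y for y in X if (x, y) in R]) <= 3 for x in X)
assert all(len([y for y in X if (y, x) in R]) <= 3 for x in X)
# A folding map G_6 ->> G_3: x for x <= 3, and 6 - x otherwise.
fold = {x: (x if x <= 3 else 6 - x) for x in X}
assert is_tense_morphism((X, R), garland(3), fold)
```

Folds of this shape are what witness the collapses of garlands and hoops used later in this section; the checker above is only a brute-force illustration of the t-morphism conditions, not the paper's own apparatus.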
\begin{figure}[ht] \[ \begin{tikzpicture}[scale=.7] \def\ptRad{.2pt} \draw (-2,1.5) node{$\G_\mathbb{Z}$}; \draw (5.8,-.5) node{$\G_n$}; \draw (3.5,.8) node{$\cdots$}; \draw (-1.5,.8) node{$\cdots$}; \draw (6.5,.8) node{$\cdots$}; \draw [dashed] (-.2,2.5) rectangle (5.2,-1); \node (-1) at (-1,0)[label=below:$-1$]{$\circ$}; \node (0) at (0,1.5)[label=above:$0$]{$\circ$}; \node (1) at (1,0)[label=below:$1$]{$\circ$}; \node (2) at (2,1.5)[label=above:$2$]{$\circ$}; \node (3) at (3,0)[label=below:$3$]{$\circ$}; \node (4) at (4,1.5)[label=above:$n-1$]{$\circ$}; \node (5) at (5,0)[label=below:$n$]{$\circ$}; \node (6) at (6,1.5)[label=above:$n+1$]{$\circ$}; \draw [->,shorten <>=\ptRad] (-1) -- (0); \draw [->,shorten <>=\ptRad] (1) -- (0); \draw [->,shorten <>=\ptRad] (1) -- (2); \draw [->,shorten <>=\ptRad] (3) -- (2); \draw [->,shorten <>=\ptRad] (5) -- (4); \draw [->,shorten <>=\ptRad] (5) -- (6); \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[scale=.7] \def\ptRad{.2pt} \draw (-1,1.5) node{$\H_n$}; \draw (3,.5) node{$\cdots$}; \node (0) at (0,1.5)[label=above:$0$]{$\circ$}; \node (1) at (0,0)[label=below:$1$]{$\circ$}; \node (2) at (2,1.5)[label=above:$2$]{$\circ$}; \node (3) at (2,0)[label=below:$3$]{$\circ$}; \node (4) at (4,1.5)[label=above:$n-3$]{$\circ$}; \node (5) at (4,0)[label=below:$n-2$]{$\circ$}; \node (6) at (6,1.5)[label=above:$n-1$]{$\circ$}; \node (7) at (6,0)[label=below:$n$]{$\circ$}; \draw [->,shorten <>=\ptRad] (1) -- (0); \draw [->,shorten <>=\ptRad] (1) -- (2); \draw [->,shorten <>=\ptRad] (3) -- (2); \draw [->,shorten <>=\ptRad] (5) -- (4); \draw [->,shorten <>=\ptRad] (5) -- (6); \draw [->,shorten <>=\ptRad] (7) -- (6); \draw [->,shorten <>=\ptRad] (7) -- (0); \end{tikzpicture} \] \caption{The garlands $\G_n$ and hoops $\H_n$ for some $n\in\mathbb{O}$} \label{fig:BS222} \end{figure} \begin{lemma} $\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}\cup\cset{\G_\omega,\breve{\G_\omega}}\sub\mathsf{TM}(\G_\mathbb{Z})$. 
\end{lemma} \begin{proof} It is easy to see $\mathrm{abs}(\cdot):\G_\mathbb{Z}\twoheadrightarrow\G_\omega$, where $\mathrm{abs}(\cdot)$ is the absolute value function. Take any $n\in\omega$. Then the function $h_n:x\mapsto x\mod(1+n)$ is a t-morphism from $\G_\mathbb{Z}$ to $\H_n$. Moreover, $g_n:\G_\omega\twoheadrightarrow\G_n$, where \begin{align*} g_n(x)= \begin{cases} x\mod 2n, &\text{ if }(x\mod 2n)\leq n;\\ 2n-(x\mod 2n), &\text{ otherwise.} \end{cases} \end{align*} Thus $\mathcal{G}\cup\mathcal{H}\cup\cset{\G_\omega}\sub\mathsf{TM}(\G_\mathbb{Z})$, which entails $\breve{\mathcal{G}}\cup\cset{\breve{\G_\omega}}\sub\mathsf{TM}(\breve{\G_\mathbb{Z}})$. Since $s:x\mapsto x+1$ is an isomorphism between $\G_\mathbb{Z}$ and $\breve{\G_\mathbb{Z}}$, we see $\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}\cup\cset{\G_\omega,\breve{\G_\omega}}\sub\mathsf{TM}(\G_\mathbb{Z})$. \end{proof} \begin{fact}\label{fact:finite-frame-BS222} Let $\F=(X,R)\in\mathsf{Fr}_r(\mathsf{S4}_t)$ be a skeleton. Then the following hold: \begin{enumerate}[(1)] \item If $\F\md\mathbf{bd}_2$, then for all $x\in X$, $R[x]=\R[x]$ or $\breve{R}[x]=\R[x]$. \item If $\F\md\mathbf{bw}^+_2$, then for all $x\in X$, $|R[x]|\leq 3$. \item If $\F\md\mathbf{bw}^-_2$, then for all $x\in X$, $|\breve{R}[x]|\leq 3$. \end{enumerate} \end{fact} \begin{proof} The proof for this fact is standard. \end{proof} For each skeleton $\F=(X,R)\in\mathsf{Fin}_r(\mathsf{S4BP}^{2,\omega}_{2,2})$ and $x\in X$, we call $x$ a top (bottom) point if $\R[x]=\breve{R}[x]$ ($\R[x]=R[x]$), respectively. \begin{lemma}\label{lem:rooted-skeleton-BS222} Let $\F=(X,R)\in\mathsf{Fr}_r(\mathsf{S4BP}^{2,\omega}_{2,2})$ be a skeleton and $x\in X$. Then \begin{enumerate}[(1)] \item for all $n\in\omega$, $\F\rsto\R^n[x]\in\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H})$. \item $\F\rsto\R^\omega[x]\in\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}\cup\cset{\G_\omega,\breve{\G_\omega},\G_\mathbb{Z}})$.
\end{enumerate} \end{lemma} \begin{proof} For (1), the proof proceeds by induction on $n$. When $n=0$, we see $\F\rsto\R^0[x]\iso\G_0$. Let $n>0$. By induction hypothesis, $\F\rsto\R^{n-1}[x]\in\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H})$. Suppose $\F\rsto\R^{n-1}[x]\in\mathsf{Iso}(\mathcal{H})$. Then $|\R[y]\cap\R^{n-1}[x]|=3$ for all $y\in\R^{n-1}[x]$. By Fact \ref{fact:finite-frame-BS222}, $\R^n[x]=\R^{n-1}[x]$ and so $\F\rsto\R^{n}[x]\in\mathsf{Iso}(\mathcal{H})$. Suppose $\F\rsto\R^{n-1}[x]\in\mathsf{Iso}(\mathcal{G})$. Then there exists $f:\G_m\iso\F\rsto\R^{n-1}[x]$ for some $m\in\omega$. Since $\R^{n-1}[x]\sub\R^n[x]\neq X$, by Fact \ref{fact:tabn}(1), $\R[f(0)]\nsubseteq\R^{n-1}[x]$ or $\R[f(m)]\nsubseteq\R^{n-1}[x]$. Then we have three cases: \begin{itemize} \item $\R[f(0)]\sub\R^{n-1}[x]$ and $\R[f(m)]\nsubseteq\R^{n-1}[x]$. Since $\cset{f(m-1),f(m)}\sub\R[f(m)]$, by Fact \ref{fact:finite-frame-BS222}, there exists a unique point $y\in\R[f(m)]\setminus\R^{n-1}[x]$. Clearly, $\R[y]\cap\R^{n-1}[x]=\cset{f(m)}$. Thus $f\cup\cset{\tup{m+1,y}}:\G_{m+1}\iso\F\rsto\R^n[x]$. \item $\R[f(0)]\nsubseteq\R^{n-1}[x]$ and $\R[f(m)]\sub\R^{n-1}[x]$. By a similar argument, by Fact \ref{fact:finite-frame-BS222}, there is a unique point $y\in\breve{R}[f(0)]\setminus\R^{n-1}[x]$ and we see that $g:\breve{\G_{m+1}}\iso\F\rsto\R^n[x]$, where $g(0)=y$ and $g(1+k)=f(k)$ for all $k\leq m$. \item $\R[f(0)]\nsubseteq\R^{n-1}[x]$ and $\R[f(m)]\nsubseteq\R^{n-1}[x]$. Again, by Fact \ref{fact:finite-frame-BS222}, there exists a unique point $y\in\R[f(m)]\setminus\R^{n-1}[x]$ and a unique point $z\in\breve{R}[f(0)]\setminus\R^{n-1}[x]$. Suppose $y\neq z$. Then $g:\breve{\G_{m+2}}\iso\F\rsto\R^n[x]$, where $g(0)=z$, $g(m+2)=y$ and $g(1+k)=f(k)$ for all $k\leq m$. Otherwise, $y=z$ and we see that $f\cup\cset{\tup{m+1,y}}:\H_{m+1}\iso\F\rsto\R^n[x]$. \end{itemize} Suppose $\F\rsto\R^{n-1}[x]\in\mathsf{Iso}(\breve{\mathcal{G}})$.
Then $\breve{\F}\rsto\R^{n-1}[x]\in\mathsf{Iso}(\mathcal{G})$, which entails $\breve{\F}\rsto\R^{n}[x]\in\mathsf{Iso}(\mathcal{G})$. Thus $\F\rsto\R^{n}[x]\in\mathsf{Iso}(\breve{\mathcal{G}})$. Since $\F$ is rooted, (2) follows from (1) immediately. \end{proof} Let $\mathsf{Ga}=\mathsf{Log}(\G_\mathbb{Z})$. The following theorem shows that $\mathsf{Ga}$ is the maximal logic in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,2})$ with infinite z-degree. \begin{theorem}\label{thm:infzigzag-Ga} Let $L\in\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,2})$. Then $L\sub\mathsf{Ga}$ iff $\mathbf{bz}_n\not\in L$ for any $n\in\mathbb{Z}^+$. \end{theorem} \begin{proof} The left-to-right direction is trivial. Suppose $\mathbf{bz}_n\not\in L$ for any $n\in\mathbb{Z}^+$. Take any $\phi\not\in\mathsf{Ga}$ with $md(\phi)=k$. Then $\G_\mathbb{Z},m\not\md\phi$ for some $m\in\mathbb{Z}$. Since $\mathbf{bz}_{4k+2}\not\in L$ and $L$ is Kripke complete, there exists a frame $\F=(X,R)\in\mathsf{Fr}_r(L)$ and $x\in X$ such that $\F,x\not\md\mathbf{bz}_{4k+2}$. Then $\F^S,C(x)\not\md\mathbf{bz}_{4k+2}$ and so $\mathsf{zdg}(C(x))\geq 4k+2$. Let $\G=\F^S\rsto(R^S)_\sharp^{4k+1}[C(x)]$. By Lemma \ref{lem:rooted-skeleton-BS222}, $\G\in\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H})$. Since $\mathsf{zdg}(C(x))\geq 4k+2$, $\G\not\in\mathsf{Iso}(\mathcal{H})$. Suppose $g:\G\iso\G_l$ for some $l\in\omega$. Since $\mathsf{zdg}(C(x))\geq 4k+2$, we see $l\geq 4k+1$. Let $f:1+l\to\mathbb{Z}$ be the function such that $f(x)=x+m+(m\mod 2)-2k$. Then we see $f\circ g:(\F,y)\twoheadrightarrow^{l-1}(\G_\mathbb{Z},m)$, where $y$ is such that $g(y)=2k-(m\mod 2)$. By Lemma \ref{lem:k-t-morphism}, $\F,x\not\md\phi$ and so $\phi\not\in L$. \end{proof} \begin{corollary}\label{coro:extensions-Ga} Let $\mathcal{F}\sub\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}$ be infinite. Then $\mathsf{Ga}=\mathsf{Log}(\mathcal{F})$. 
\end{corollary} \begin{theorem}\label{thm:Ga-is-pretabular} $\mathsf{Ga}\in\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$. \end{theorem} \begin{proof} Clearly, $\mathsf{Ga}$ is non-tabular. Take any $L\supsetneq\mathsf{Ga}$. By Theorem \ref{thm:infzigzag-Ga}, $\mathbf{bz}_n\in L$ for some $n\in\mathbb{Z}^+$. By Lemma \ref{lem:rooted-skeleton-BS222}, $\mathsf{Fr}_r(L)\sub\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H})$. By Corollary \ref{coro:extensions-Ga}, $\mathsf{Fr}_r(L)\sub\mathsf{Iso}(\mathcal{F})$ for some finite set $\mathcal{F}\sub\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}$. Thus $L=\mathsf{Log}(\bigoplus\mathcal{F})$ is tabular. \end{proof} In what follows, we give a characterization for pretabular logics over $\mathsf{S4BP}^{2,\omega}_{2,2}$ with finite z-degree. \begin{lemma}\label{lem:finite-BS222-frame-irreducible} Let $m\leq n\in\omega$, $\F^x_1$ be a pre-skeleton and $f:(\G_n)^m_1\twoheadrightarrow\F^x_1$. Then \begin{enumerate}[(1)] \item $f^{-1}[f(m)]=\cset{m}$ and $f^{-1}[f(m_1)]=\cset{m_1}$. \item for all different $k,l\in 1+n$, $f(k)=f(l)$ implies $k+l=2m$. \item if $2m<n$, then $f:(\G_n)^m_1\iso\F^x_1$. \end{enumerate} \end{lemma} \begin{proof} Recall that the domain of $(\G_n)^m_1$ is $(1+n)\cup\cset{m_1}$. Since $f:(\G_n)^m_1\twoheadrightarrow\F^x_1$, by Lemma \ref{lem:finiteK4-ontopmorphism-cluster}, $f[\cset{m,m_1}]=\cset{x,x_1}$ is the unique proper cluster in $\F^x_1$. For (1), take any $k\in(\G_n)^m_1$. Suppose $f(k)=f(m)$. Then $C(f(k))$ is a proper cluster. By Lemma \ref{lem:finiteK4-ontopmorphism-cluster}, $C(k)$ is proper. Since $f(m)\neq f(m_1)$ and $\cset{m,m_1}$ is the unique proper cluster in $(\G_n)^m_1$, we see $k=m$. Hence $f^{-1}[f(m)]=\cset{m}$. Similarly, $f^{-1}[f(m_1)]=\cset{m_1}$. For (2), take any different $k,l\in 1+n$ such that $f(k)=f(l)$. By (1), $k\neq m\neq l$. Let $\F^x_1=(X,R)$. Then there exists $a\in\mathbb{Z}^+$ such that $f(m)\in\R^{a}[f(k)]\setminus\R^{a-1}[f(k)]$. 
Thus $f(m)\in f[\R^{a}[k]]\setminus f[\R^{a-1}[k]]$ and $f(m)\in f[\R^{a}[l]]\setminus f[\R^{a-1}[l]]$. By (1), $m\in\R^{a}[k]\setminus\R^{a-1}[k]$ and $m\in\R^{a}[l]\setminus\R^{a-1}[l]$. Since $k\neq l$, we see $\cset{k,l}=\cset{m-a,m+a}$. Thus $k+l=2m$. For (3), suppose $2m<n$. By (1) and (2), $f^{-1}[f[y]]=\cset{y}$ for all $y\in\cset{m,m_1}\cup\cset{k\in 1+n:k>2m}$. Since $2m+1\in\R^m[m+a]\setminus\R^m[m-a]$ for all $a\leq m$, by (2), we see $f^{-1}[f[k]]=\cset{k}$ for all $k\leq 2m$. Thus $f$ is injective, which entails $f:(\G_n)^m_1\iso\F^x_1$. \end{proof} \begin{lemma}\label{lem:Gn-irreducible} Let $m,n\in\omega$. Suppose $n>0$ and $2m\leq n$. Then \begin{enumerate}[(1)] \item $(\G_n)^m_1$ is c-irreducible if and only if $2m\neq n$. \item $(\breve{\G_n})^m_1$ is c-irreducible if and only if $2m\neq n$. \end{enumerate} \end{lemma} \begin{proof} Since $\mathsf{TM}((\breve{\G_n})^m_1)=\cset{\breve{\F}:\F\in\mathsf{TM}((\G_n)^m_1)}$, (2) follows from (1) immediately. For (1), suppose $2m\neq n$. By Lemma \ref{lem:finite-BS222-frame-irreducible}(3), $(\G_n)^m_1$ is c-irreducible. For the other direction, suppose $2m=n$. Let $f:(1+n)\cup\cset{m_1}\to(1+m)\cup\cset{m_1}$ be defined as follows: \begin{align*} f(x)= \begin{cases} x, &\text{ if }x\leq m \text{ or }x=m_1;\\ n-x, &\text{ otherwise.} \end{cases} \end{align*} Then $f:(\G_n)^m_1\twoheadrightarrow(\G_m)^m_1$. Since $n>0$, we see $n>m$ and so $(\G_m)^m_1\not\iso(\G_n)^m_1$. By Lemma \ref{lem:finiteK4-ontopmorphism-cluster}, $(\G_m)^m_1\not\in\mathsf{TM}(\G_n)$. Thus $(\G_n)^m_1$ is c-reducible. \end{proof} \begin{lemma}\label{lem:Hn-reducible} For all $n\in\mathbb{O}$ and $m\leq n$, $(\H_n)^m_1$ is c-reducible. \end{lemma} \begin{proof} Suppose $n=4k+1$ for some $k\in\omega$ and $m\in\mathbb{E}$.
Let $f:(4k+2)\cup\cset{m_1}\to(4k+2)\cup\cset{(2k)_1}$ be the map defined as: \begin{align*} f(x)= \begin{cases} (x+2k-m)\mod(4k+2), &\text{ if } x\in(4k+2);\\ (2k)_1, &\text{ otherwise.} \end{cases} \end{align*} We see that $f:(\H_n)^m_1\iso(\H_n)^{2k}_1$. We define $g:(4k+2)\cup\cset{(2k)_1}\to(2k+2)\cup\cset{0_1}$ by \begin{align*} g(x)= \begin{cases} l, &\text{ if } \mathrm{abs}(x-2k)=l \text{ and }x\in(n+1);\\ 0_1, &\text{ otherwise.} \end{cases} \end{align*} Then we see that $g:(\H_n)^{2k}_1\twoheadrightarrow(\G_{2k+1})^0_1$, which entails $g\circ f:(\H_n)^{m}_1\twoheadrightarrow(\G_{2k+1})^0_1$. Since $(\G_{2k+1})^0_1\not\in\mathsf{TM}(\H_n)$ and $(\G_{2k+1})^0_1\not\iso(\H_n)^{m}_1$, $(\H_n)^{m}_1$ is c-reducible. Assume $m\in\mathbb{O}$. Then we can construct maps $f':(\H_n)^m_1\iso(\H_n)^{2k+1}_1$ and $g':(\H_n)^{2k+1}_1\twoheadrightarrow(\breve{\G_{2k+1}})^0_1$ in a similar way, which also implies that $(\H_n)^{m}_1$ is c-reducible. Similar arguments work for the case when $n=4k+3$ for some $k\in\omega$. \end{proof} Now we are ready to prove the main theorem of this section: \begin{theorem}\label{thm:pretabular-BS222-ch} $\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})=\cset{\mathsf{Ga},\mathsf{S5}_t}\cup\cset{\mathsf{Log}((\G_n)^m_\omega),\mathsf{Log}((\breve{\G_n})^m_\omega):2m<n\in\mathbb{Z}^+}$. \end{theorem} \begin{proof} Clearly, $\mathsf{S5}_t=L^\circ\supseteq\mathsf{S4BP}^{2,\omega}_{2,2}$ is pretabular. By Theorem \ref{thm:Ga-is-pretabular}, $\mathsf{Ga}\in\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$. Take any $2m<n\in\omega$. By Lemma \ref{lem:Gn-irreducible} and Lemma \ref{lem:c-irreducible-2-omega}, $(\G_n)^m_\omega$ is c-irreducible. By Theorem \ref{thm:pretabular-S4BP-ch}, $\mathsf{Log}((\G_n)^m_\omega)$ is pretabular. Take any $L\in\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$. Suppose $\mathbf{bz}_n\not\in L$ for any $n\in\mathbb{Z}^+$. Then by Theorem \ref{thm:infzigzag-Ga}, $L\sub\mathsf{Ga}$. Since $L$ is pretabular, $L=\mathsf{Ga}$.
Suppose $\mathbf{bz}_n\in L$ for some $n\in\mathbb{Z}^+$. Then $L\in\mathsf{Pre}(\mathsf{S4BP}^{2,n}_{2,2})$. By Theorem \ref{thm:pretabular-S4BP-ch}, $L=\mathsf{Log}(\F^x_\omega)$ for some c-irreducible rooted finite pre-skeleton $\F^x_\omega$. By Lemma \ref{lem:rooted-skeleton-BS222}, $\F\in\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H})$. If $|\F|=1$, then $\F^x_\omega\iso(\G_0)^0_\omega$. Suppose $|\F|\neq 1$, then $|\F|=1+n$ for some $n\in\mathbb{Z}^+$. By Lemma \ref{lem:Gn-irreducible} and Lemma \ref{lem:Hn-reducible}, $\F^x_\omega\iso(\G_n)^m_\omega$ or $\F^x_\omega\iso(\breve{\G_n})^m_\omega$ for some $m\in\omega$ such that $2m\neq n$. If $2m<n$, then we are done. If $2m>n$, then we see that $\F^x_\omega\iso(\G_n)^{n-m}_\omega$ or $\F^x_\omega\iso(\breve{\G_n})^{n-m}_\omega$. Thus $L\in\cset{\mathsf{Log}((\G_n)^m_\omega),\mathsf{Log}((\breve{\G_n})^m_\omega):2m<n\in\mathbb{Z}^+}$. \end{proof} \begin{theorem}\label{thm:pretab-BS222} $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})|=\aleph_0$. \end{theorem} \begin{proof} By Theorem \ref{thm:pretabular-BS222-ch}, $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})|\leq\aleph_0$. Take any $n,m,k,l\in\omega$ such that $2m<n$, $2l<k$ and $\tup{n,m}\neq\tup{k,l}$. It is clear that $(\G_n)^m_\omega$, $(\breve{\G_n})^m_\omega$, $(\G_k)^l_\omega$ and $(\breve{\G_k})^l_\omega$ are pairwise non-isomorphic. By Lemma \ref{lem:irreducible-frame-same-logic-implies-iso}, their logics are pairwise different. Thus by Theorem \ref{thm:pretabular-BS222-ch}, $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})|=\aleph_0$. \end{proof} Moreover, we show the following anti-dichotomy theorem for the cardinality of pretabular extensions of logics in $\mathsf{NExt}(\mathsf{S4BP}_{2,2}^{2,\omega})$: \begin{theorem}\label{thm:anti-dichotomy-BS222} For every cardinal $\kappa\leq{\aleph_0}$, there exists $L\in\mathsf{NExt}(\mathsf{S4}_t)$ with $|\mathsf{Pre}(L)|=\kappa$. \end{theorem} \begin{proof} By Theorem \ref{thm:pretab-BS222}, $|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})|=\aleph_0$.
Obviously, $\mathsf{Log}(\C^\ua_1)\supseteq\mathsf{S4BP}_{2,2}^{2,\omega}$ and $|\mathsf{Pre}(\mathsf{Log}(\C^\ua_1))|=0$. Take any $\kappa\in\mathbb{Z}^+$. Let $\mathcal{F}=\cset{(\G_{n+1})^0_\omega:n<\kappa}$ and $L=\bigcap_{\F\in\mathcal{F}}\mathsf{Log}(\F)$. Then $|\mathcal{F}|=\kappa$. Since $\mathbf{bd}_1\not\in L$ and $\mathbf{bz}_{\kappa+1}\in L$, we have $\cset{\mathsf{Ga},\mathsf{S5}_t}\cap\mathsf{Pre}(L)=\ve$. Take any $\F^x_\omega\in\cset{(\G_n)^m_\omega,(\breve{\G_n})^m_\omega:2m<n\in\mathbb{Z}^+}$. Suppose $\F^x_\omega\not\in\mathcal{F}$. Then similar to the proof of Lemma \ref{lem:irreducible-frame-same-logic-implies-iso}, we see $\neg\J^{\mathsf{zdg}(\F)+\kappa}(\F^x_1,x)\in L$. Thus $\F^x_\omega\in\mathcal{F}$ if and only if $\mathsf{Log}(\F^x_\omega)\supseteq L$. By Theorem \ref{thm:pretabular-BS222-ch}, $\mathsf{Pre}(L)=\cset{\mathsf{Log}(\F):\F\in\mathcal{F}}$. By Lemma \ref{lem:irreducible-frame-same-logic-implies-iso}, $|\mathsf{Pre}(L)|=\kappa$. \end{proof} \begin{theorem}\label{thm:BS222-pretab-fmp} For all $L\in\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,2})$, $L$ has the FMP. \end{theorem} \begin{proof} Follows from Lemma \ref{lem:ThFomega=CapThFn}, Corollary \ref{coro:extensions-Ga} and Theorem \ref{thm:pretabular-BS222-ch} immediately. \end{proof} \begin{remark}\label{rem:Kracht-Ga} The results obtained in this section are closely related to the ones in \cite[Section 4]{Kracht1992}. The logic $\mathsf{Ga}$ was defined there to be $\mathsf{S4}_t\oplus\cset{\mathbf{grz},\mathbf{alt}^+_3,\mathbf{alt}^-_3,\mathbf{bd}_2}$ and garlands $\G_n$ were also defined there. It was proved that $\mathsf{Ga}=\mathsf{Log}(\G_\omega)=\bigcap_{n\in\omega}\mathsf{Log}(\G_n)$ is pretabular. However, there are some problematic claims in \cite[Section 4]{Kracht1992}, which make the characterization of $\mathsf{NExt}(\mathsf{Ga})$ given there incomplete. It was claimed that a rooted frame $\F$ validates $\mathsf{Ga}$ if and only if $\F\iso\G_n$ for some $n\in\omega$.
Thus $\cset{\mathsf{Log}(\G_n):n\text{ is prime}}$ would be the set of all logics in $\mathsf{NExt}(\mathsf{Ga})$ which are of co-dimension 3. But as Lemma \ref{lem:rooted-skeleton-BS222} shows, the class of rooted frames for $\mathsf{Ga}$ is $\mathsf{Iso}(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H}\cup\cset{\G_\omega,\breve{\G_\omega},\G_\mathbb{Z}})$, not $\mathsf{Iso}(\mathcal{G})$. Consider the frames $\H_3$ and $\breve{\G_2}$, as shown in Figure \ref{fig:H3G2}. Then we see that $\H_3\md\mathsf{Ga}$ but $\H_3\ncong\G_n$ for any $n\in\omega$. Moreover, since $\mathsf{tab}^T_4\wedge\neg\J^4(\G_3,0)\in\mathsf{Log}(\H_3)$ and $\mathsf{tab}^T_3\not\in\mathsf{Log}(\H_3)$, we see that $\mathsf{Log}(\H_3)\not\in\cset{\bigcap_{i\in I}\mathsf{Log}(\G_i):I\sub\omega}$. Thus $\mathsf{Log}(\H_3)$ is missing from the characterization given by Kracht \cite{Kracht1992}. It is also straightforward to show that $\mathsf{Log}(\breve{\G_2})\not\in\cset{\bigcap_{i\in I}\mathsf{Log}(\G_i):I\sub\omega}$. With the results obtained in this section, we can even give a full characterization of $\mathsf{NExt}(\mathsf{Ga})$. It can be shown that $\mathsf{NExt}(\mathsf{Ga})$ is dually isomorphic to the distributive lattice freely $\bigcup$-generated by $(\mathcal{G}\cup\breve{\mathcal{G}}\cup\mathcal{H},\twoheadrightarrow)$. But we do not go into the details now and leave this work for future research.
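The non-isomorphism claim for the frames of Figure \ref{fig:H3G2} is small enough to confirm by brute force. The following Python sketch is ours (the encoding of frames as pairs of a point set and a relation is an assumption for illustration, not the paper's notation):

```python
# Brute-force isomorphism test for the tiny frames of this remark.
from itertools import permutations

def garland(n):
    """G_n: points 0..n, odd points minimal, each seeing its even neighbours."""
    pts = set(range(n + 1))
    R = {(i, i) for i in pts}
    R |= {(i, i - 1) for i in pts if i % 2 == 1}
    R |= {(i, i + 1) for i in pts if i % 2 == 1 and i + 1 in pts}
    return pts, R

def hoop(n):
    """H_n (n odd): the garland on 0..n plus the closing pair (n, 0)."""
    pts, R = garland(n)
    R.add((n, 0))
    return pts, R

def isomorphic(F, G):
    """Try every bijection between the point sets."""
    (X, R), (Y, S) = F, G
    if len(X) != len(Y) or len(R) != len(S):
        return False
    xs = sorted(X)
    return any({(f[a], f[b]) for (a, b) in R} == S
               for f in (dict(zip(xs, p)) for p in permutations(sorted(Y))))

# H_3 has one more arrow than G_3 (the closing pair), so no bijection works.
assert not isomorphic(hoop(3), garland(3))
```

This confirms, for the 4-point case, that hoops fall outside the garlands-only picture; the general statement is Lemma \ref{lem:rooted-skeleton-BS222}.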
\begin{figure}[ht] \[ \begin{tikzpicture}[scale=.7] \def\ptRad{.2pt} \draw (.5,1.5) node{$\breve{\G_2}$}; \node (1) at (1,0)[label=below:$0$]{$\circ$}; \node (2) at (2,1.5)[label=above:$1$]{$\circ$}; \node (3) at (3,0)[label=below:$2$]{$\circ$}; \draw [->,shorten <>=\ptRad] (1) -- (2); \draw [->,shorten <>=\ptRad] (3) -- (2); \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[scale=.7] \def\ptRad{.2pt} \draw (-1,1.5) node{$\H_3$}; \node (0) at (0,1.5)[label=above:$0$]{$\circ$}; \node (1) at (0,0)[label=below:$1$]{$\circ$}; \node (2) at (2,1.5)[label=above:$2$]{$\circ$}; \node (3) at (2,0)[label=below:$3$]{$\circ$}; \draw [->,shorten <>=\ptRad] (1) -- (0); \draw [->,shorten <>=\ptRad] (1) -- (2); \draw [->,shorten <>=\ptRad] (3) -- (2); \draw [->,shorten <>=\ptRad] (3) -- (0); \end{tikzpicture} \] \caption{The garland $\breve{\G_2}$ and hoop $\H_3$} \label{fig:H3G2} \end{figure} \end{remark} \section{Pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,3})$}\label{sec:BS223} In this section, we study the pretabular tense logics extending $\mathsf{S4BP}^{2,\omega}_{2,3}$, which imposes a weaker constraint on the back-width than $\mathsf{S4BP}^{2,\omega}_{2,2}$ does. The class of rooted frames for $\mathsf{S4BP}^{2,\omega}_{2,3}$ is much more complicated than the one for $\mathsf{S4BP}^{2,\omega}_{2,2}$. It turns out that there are continuum many rooted frames for $\mathsf{S4BP}^{2,\omega}_{2,3}$ whose logics are pairwise different. The aim of this section is to show that the cardinality of $\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,3})$ is $2^{\aleph_0}$. We first construct a continuum of so-called finitely perfect sequences by generalizing the Thue-Morse sequence. Based on these sequences, we construct corresponding `umbrella-like' frames whose logics are pairwise different and pretabular. As a corollary, we see $|\mathsf{Pre}(\mathsf{S4}_t)|=2^{\aleph_0}$, which answers an open problem posed in \cite{Rautenberg1979}.
\subsubsection*{Preliminaries of sequences} For all $i,j\in\mathbb{Z}$, we write $[i,j]$ for $\cset{k\in\mathbb{Z}:i\leq k\leq j}$. A subset $I$ of $\mathbb{Z}$ is said to be an interval in $\mathbb{Z}$ if for all $i,j\in I$, $[i,j]\sub I$. A map $t:\mathbb{Z}\to\mathbb{Z}$ is called a translation if there exists $k\in\mathbb{Z}$ such that $t(i)=i+k$ for all $i\in\mathbb{Z}$. We write $t:a\mapsto b$ for the translation $t$ such that $t:i\mapsto(i+b-a)$ for all $i\in\mathbb{Z}$. Let $X$ be a non-empty set. An {\em $X$-sequence} is a partial function $f:\mathbb{Z}\to X$ where $\mathsf{dom}(f)$ is an interval in $\mathbb{Z}$. Let $\mathsf{Seq}(X)$ and $\mathsf{Seq}^{<\aleph_0}(X)$ denote the sets of all $X$-sequences and finite $X$-sequences, respectively. Let $\alpha:[a,b]\to X$ be a nonempty finite sequence such that $\alpha(i)=x_{i-a}$ for all $i\in[a,b]$. Then we write $\tup{a,\tup{x_0,\cdots,x_{b-a}}}$ for $\alpha$. If $a=0$, we write $\tup{x_0,\cdots,x_b}$ for $\alpha$. \begin{definition} Let $X\neq\ve$ and $\alpha,\beta\in\mathsf{Seq}(X)$. Then we say (i) $\alpha$ is embedded into $\beta$ (notation: $\alpha\trianglelefteq\beta$), if $\alpha\circ s\sub\beta$ for some translation $s$. (ii) $\alpha$ is finitely covered by $\beta$ (notation: $\alpha\preceq\beta$), if $\gamma\trianglelefteq\beta$ for every finite sequence $\gamma\sub\alpha$. (iii) $\alpha$ and $\beta$ are similar (notation: $\alpha\approx\beta$), if $\alpha\preceq\beta$ and $\beta\preceq\alpha$. We say $\alpha$ and $\beta$ are dissimilar if $\alpha\not\approx\beta$. \end{definition} \begin{definition}[Concatenation]\label{def:concatenation} Let $\alpha:[a,b]\to X$ and $\beta:[c,d]\to X$ be finite $X$-sequences for some nonempty set $X$. Then we define the sequence $\alpha\ast\beta$ by \begin{center} $\alpha\ast\beta={\tup{\alpha(a),\cdots,\alpha(b),\beta(c),\cdots,\beta(d)}}$.
\end{center} The sequences $\alpha^\dagger\ast\beta$ and $\alpha\ast\beta^\dagger$ are defined as follows: \begin{center} $\alpha^\dagger\ast\beta=\tup{a,\alpha\ast\beta}$ and $\alpha\ast\beta^\dagger=\tup{c+a-(b+1),\alpha\ast\beta}$. \end{center} Let $\tup{\alpha_i:i\in n}$ be a finite tuple of finite $X$-sequences. Then we write $\alpha_0\ast\alpha_1\ast\cdots\ast\alpha_{n-1}$ or $\alpha_0\alpha_1\cdots\alpha_{n-1}$ for $(\cdots(\alpha_0\ast\alpha_1)\ast\cdots\ast\alpha_{n-2})\ast\alpha_{n-1}$. Moreover, we define \begin{center} $\alpha_0\ast\cdots\ast{\alpha_m}^\dagger\ast\cdots\ast\alpha_{n-1}=((\alpha_0\ast\cdots\ast{\alpha_{m-1}})\ast{\alpha_m}^\dagger)^\dagger\ast({\alpha_{m+1}}\ast\cdots\ast\alpha_{n-1})$. \end{center} \end{definition} The notation in Definition \ref{def:concatenation} is a bit complicated, but the idea is simple. Given any finite tuple $A=\tup{\alpha_i:i\in n}$ of finite $X$-sequences, the sequence $\alpha_0\ast\cdots\ast{\alpha_m}^\dagger\ast\cdots\ast\alpha_{n-1}$ is designed to be the concatenation of $A$ which always preserves the indices of $\alpha_m$. An example is given in Table \ref{tab:example-concatenation}. \begin{table}[htbp] \begin{tabular}{l|cccccccccccccccccc} $\mathbb{Z}$&$\cdots$&-5&-4&-3&-2&-1&0&1&2&3&4&5&6&$\cdots$\\ \\ $\alpha$&&&&&&a&b&c&d&&&&\\ $\beta$&&&&&x&y&z&&&&&&\\ $\gamma$&&&&&u&v&w&&&&&&\\ $\alpha\ast\beta$&&&&&&&a&b&c&d&x&y&z\\ $\alpha^\dagger\ast\beta$&&&&&&a&b&c&d&x&y&z&\\ $\beta\ast\gamma^\dagger\ast\alpha$&&x&y&z&u&v&w&a&b&c&d&&\\ \end{tabular} \caption{Example of concatenations of sequences}\label{tab:example-concatenation} \end{table} \begin{definition} Let $X$ be a non-empty set, $A\sub\mathsf{Seq}(X)$ and $n\in\omega$. Then the set $\mathsf{Con}(A,n)$ of all $n$-concatenations of $A$ is defined as follows: \begin{center} $\mathsf{Con}(A,n)=\cset{\alpha_0\cdots\alpha_{n-1}:\forall{i<n}(\alpha_i\in A)}$. \end{center} Let $\mathsf{Con}(A)=\bigcup_{n\in\omega}\mathsf{Con}(A,n)$.
We call $\mathsf{Con}(A)$ the set of all concatenations of $A$. \end{definition} \begin{definition} Let $\alpha:\mathbb{Z}\to 2$ be a map. We say that $\alpha$ is finitely perfect if for every finite subsequence $\beta$ of $\alpha$, there is $n\in\omega$ such that $\beta\triangleleft\zeta$ for all $\zeta\triangleleft\alpha$ with $|\zeta|>n$. \end{definition} \subsubsection*{Generalized Thue-Morse sequences} In this subsection, our goal is to construct a continuum of pairwise dissimilar finitely perfect $\cset{0,1}$-sequences. For every function $f:\omega\to\cset{0,1}$, we define the generalized Thue-Morse sequence $\chi^f$ generated by $f$. The Thue-Morse sequence $\alpha^t$ is defined by $\alpha^t=\bigcup_{i\in\omega}\alpha^t_i$, where $\alpha^t_0=\tup{0}$ and $\alpha^t_{i}=\alpha^t_{i-1}\ast\ol{\alpha^t_{i-1}}$ for all $i>0$. The sequence $\alpha^t$ has many nice properties. For example, $\alpha^t$ is shown to be overlap-free, i.e., $x\beta x\beta x\ntrianglelefteq\alpha^t$ for any $\cset{0,1}$-sequence $\beta$ and $x\in\cset{0,1}$, see \cite[Proposition 5.1.6]{Pytheas2002}. The reader can also check that $\alpha^t$ is finitely perfect. To obtain a continuum of finitely perfect $\cset{0,1}$-sequences, we generalize the Thue-Morse sequence as follows: \begin{definition}[Generalized Thue-Morse sequence] Let $f:\omega\to\cset{0,1}$. For each $i\in\omega$, the finite binary sequence $\chi^f_i$ is defined as follows: \begin{itemize} \item $\chi^f_0=\tup{0,0,1}$; \item $\chi^f_{2i+1}={\chi^f_{2i}}^\dagger\ast\tup{f(2i)}\ast\ol{\chi^f_{2i}}$; \item $\chi^f_{2i+2}=\ol{\chi^f_{2i+1}}\ast\tup{f(2i+1)}\ast{\chi^f_{2i+1}}^\dagger$. \end{itemize} Let $\chi^f=\bigcup_{i\in\omega}\chi^f_i$. Then we see that $\chi^f$ is a function from $\mathbb{Z}$ to $\cset{0,1}$. The sequence $\chi^f$ is called the generalized Thue-Morse sequence generated by $f$. \end{definition} \begin{example} Consider the constant maps $f:\omega\to\cset{0}$ and $g:\omega\to\cset{1}$.
Then the sequences $\chi^f$ and $\chi^g$ are constructed as follows: \begin{center} \begin{tabular}{l|cccccccccccccccccc} $\mathbb{Z}$&$\cdots$&-8&-7&-6&-5&-4&-3&-2&-1&0&1&2&3&4&5&6&7&$\cdots$\\ \\ $\chi^f_0$&&&&&&&&&&0&0&1&&&&&\\ $\chi^g_1$&&&&&&&&&&0&0&1&1&1&1&0&\\ $\chi^f_2$&&1&1&0&1&0&0&1&0&0&0&1&0&1&1&0&&\\ $\chi^g_2$&&1&1&0&0&0&0&1&1&0&0&1&1&1&1&0&&\\ &&&&&&&&&$\vdots$&&&&&&&\\ $\chi^f$&$\cdots$&1&1&0&1&0&0&1&0&0&0&1&0&1&1&0&0&$\cdots$\\ $\chi^g$&$\cdots$&1&1&0&0&0&0&1&1&0&0&1&1&1&1&0&1&$\cdots$\\ \end{tabular} \end{center} \end{example} \begin{lemma} Let $f\in 2^\omega$, $j\leq i\in\omega$ and $\nu(j)=2^{i-j}-1$. Then there are $\alpha_0,\cdots,\alpha_{\nu(j)}\in\cset{\chi^f_j,\ol{\chi^f_j}}$ and $n_0,\cdots,n_{{\nu(j)}-1}\in\cset{0,1}$ such that $\chi^f_i\approx\alpha_0n_0\cdots\alpha_{{\nu(j)}-1}n_{{\nu(j)}-1}\alpha_{{\nu(j)}}$. \end{lemma} \begin{proof} The proof proceeds by induction on $i-j$. The case $i-j=0$ is trivial. Suppose $i-j>0$. By the induction hypothesis, $\chi^f_i\approx\alpha_0n_0\cdots\alpha_{\nu(j+1)-1}n_{\nu(j+1)-1}\alpha_{{\nu(j+1)}}$ for some $\alpha_0,\cdots,\alpha_{\nu(j+1)}\in\cset{\chi^f_{j+1},\ol{\chi^f_{j+1}}}$ and $n_0,\cdots,n_{{\nu(j+1)}-1}\in\cset{0,1}$. Note that for all $l\leq\nu(j+1)$, we have $\alpha_l\approx\beta_l^1m_l\beta_l^2$ for some $\beta_l^1,\beta_l^2\in\cset{\chi^f_j,\ol{\chi^f_j}}$ and $m_l\in\cset{0,1}$. Thus we have $\chi^f_i\approx\beta_0^1m_0\beta_0^2n_0\cdots n_{\nu(j+1)-1}\beta_{\nu(j+1)}^1m_{\nu(j+1)}\beta_{\nu(j+1)}^2$, which concludes the proof. \end{proof} \begin{corollary}\label{coro:construction-chif} Let $f\in 2^\omega$. Then for all $j\in\omega$, \[ \chi^f\preceq\mathsf{Con}(\cset{\chi^f_j0,\chi^f_j1,\ol{\chi^f_j}0,\ol{\chi^f_j}1}). \] \end{corollary} Intuitively, Corollary \ref{coro:construction-chif} says that the sequence $\chi^f$ is built up by concatenating elements of $\cset{\chi^f_j0,\chi^f_j1,\ol{\chi^f_j}0,\ol{\chi^f_j}1}$. This makes the structure of $\chi^f$ clear.
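The recursion defining $\chi^f_i$ is easy to run mechanically. The following Python sketch is ours, not part of the paper (the name \texttt{chi} is our own choice); it computes the pair (start index, list of values) of $\chi^f_n$, where $f$ is passed as a Python function. The dagger bookkeeping amounts to remembering which end of the sequence keeps its indices.

```python
def chi(f, n):
    """Return (start index, values) of the generalized Thue-Morse
    sequence chi^f_n, following the recursion:
      chi^f_0      = <0,0,1>                                  (domain [0,2])
      chi^f_{2i+1} = chi^f_{2i}^dagger * <f(2i)> * complement(chi^f_{2i})
      chi^f_{2i+2} = complement(chi^f_{2i+1}) * <f(2i+1)> * chi^f_{2i+1}^dagger
    The daggered block keeps its original indices."""
    start, vals = 0, [0, 0, 1]
    for i in range(n):
        comp = [1 - v for v in vals]      # the complemented copy
        if i % 2 == 0:                    # odd stage: extend to the right,
            vals = vals + [f(i)] + comp   # so the start index is unchanged
        else:                             # even stage: extend to the left,
            start -= len(vals) + 1        # so the daggered block keeps its indices
            vals = comp + [f(i)] + vals
    return start, vals

# chi^g_2 for the constant map g = 1, as in the example table:
print(chi(lambda k: 1, 2))
# (-8, [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0])
```

Replacing the constant map $g$ by the constant map $f=0$ reproduces the row for $\chi^f_2$ of the example table in the same way.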
\begin{lemma}\label{lem:chif-finitely-perfect} Let $f\in 2^\omega$. Then $\chi^f$ is finitely perfect. \end{lemma} \begin{proof} Take any finite subsequence $\alpha\triangleleft\chi^f$. Then $\alpha\triangleleft\chi^f_{i-1}$ for some $i\in\omega$. Let $n=2(|\chi^f_i|+1)$. Take any $\gamma\triangleleft\chi^f$ with $|\gamma|>n$. By Corollary \ref{coro:construction-chif}, $\beta\triangleleft\gamma$ for some $\beta\in\cset{\chi^f_i0,\chi^f_i1,\ol{\chi^f_i}0,\ol{\chi^f_i}1}$. Since $\chi^f_{i-1}\triangleleft\chi^f_i$ and $\chi^f_{i-1}\triangleleft\ol{\chi^f_i}$, we see $\alpha\triangleleft\beta\triangleleft\gamma$. \end{proof} \begin{lemma}\label{lem:chifi-emb-chifi1-unique} Let $f\in 2^\omega$, $i\in\mathbb{Z}^+$ and $x\in\cset{0,1}$. Let $\alpha:[a,b]\to 2$ and $\beta:[c,d]\to 2$ be 2-sequences such that $\alpha\approx\chi^f_i$ and $\beta\approx\alpha x\ol{\alpha}$. Let $t:\mathbb{Z}\to\mathbb{Z}$ be a translation. Then (1) $\alpha\circ t\sub\beta$ implies $t:a\mapsto c$; (2) $\alpha\circ t\sub\ol{\beta}$ implies $t:b\mapsto d$. \end{lemma} \begin{proof} We prove (1) and (2) together by induction on $i$. Suppose $i=1$. Then we see that $\alpha\approx 001f(0)110$, $\beta\approx 001f(0)110x110\ol{f(0)}001$ and $\ol{\beta}\approx 110\ol{f(0)}001\ol{x}001{f(0)}110$. It is not hard to verify that both (1) and (2) hold. Let $i>1$. Assume $i\in\mathbb{O}$. Then $\alpha\approx\chi^f_{i-1}f(i-1)\ol{\chi^f_{i-1}}$ and $\beta\approx{\chi^f_{i-1}}{f(i-1)}\ol{\chi^f_{i-1}}x\ol{\chi^f_{i-1}}\ol{f(i-1)}{\chi^f_{i-1}}$. Suppose $\alpha\circ t\sub\beta$. Let $\gamma=\alpha\rsto[a,a+|\chi^f_{i-1}|-1]$, $\gamma'=\alpha\rsto[b+1-|\chi^f_{i-1}|,b]$, $\lambda=\beta\rsto[c,c+|\alpha|-1]$ and $\lambda'=\beta\rsto[d+1-|\alpha|,d]$. Then either $\gamma\circ t\sub\lambda$ or $\gamma'\circ t\sub\lambda'$. By the induction hypothesis, we see $t:a\mapsto c$. The case for $\ol{\beta}$ is similar, and the case $i\in\mathbb{E}$ is symmetric. \end{proof} \begin{lemma}\label{lem:TM-sequence-emb} Let $f,g\in 2^\omega$ be distinct. Then $\chi^f\npreceq\chi^g$.
\end{lemma} \begin{proof} Since $f\neq g$, there exists $i\in\omega$ such that $f(i)\neq g(i)$ and $f(j)=g(j)$ for all $j<i$. Without loss of generality, assume $f(i)=0$ and $i\in\mathbb{E}$. Then $\chi^f_{i+2}\approx\ol{\chi^f_i}1{\chi^f_i}f(i+1){\chi^f_i}0\ol{\chi^f_i}$. By Corollary \ref{coro:construction-chif}, $\chi^g\preceq\mathsf{Con}(\cset{\chi^g_{i+1}0,\chi^g_{i+1}1,\ol{\chi^g_{i+1}}0,\ol{\chi^g_{i+1}}1})$. Since $g(i)=1$ and $f(j)=g(j)$ for all $j<i$, $\chi^g_{i+1}=\chi^f_i1\ol{\chi^f_i}$. Then $\chi^g\preceq\mathsf{Con}(\cset{\chi^f_i1\ol{\chi^f_i}0,\chi^f_i1\ol{\chi^f_i}1,\ol{\chi^f_i}0{\chi^f_i}0,\ol{\chi^f_i}0{\chi^f_i}1})$. However, by Lemma \ref{lem:chifi-emb-chifi1-unique}, neither $\chi^f_i1\ol{\chi^f_i}\triangleleft\chi^f_{i+2}\approx\ol{\chi^f_i}1{\chi^f_i}f(i+1){\chi^f_i}0\ol{\chi^f_i}$ nor $\ol{\chi^f_i}0\chi^f_i\triangleleft\chi^f_{i+2}$ holds. Thus $\chi^f_{i+2}\ntrianglelefteq\chi^g$ and so $\chi^f\npreceq\chi^g$. \end{proof} \subsubsection*{Umbrellas and their properties} We are now ready to construct the umbrellas, which are rooted frames for $\mathsf{S4BP}_{2,3}^{2,\omega}$. For each sequence $\alpha:\mathbb{Z}\to 2$, we define the umbrella $\Z_\alpha$ generated by $\alpha$. A key property of umbrellas is that their structures are locally preserved by $k$-t-morphisms, provided $k$ is sufficiently large. The precise statement of this key property is given in Lemma \ref{lem:injective-map}. With the help of this property, we show that $\mathsf{Log}(\Z_\alpha)$ is pretabular for any finitely perfect $\alpha:\mathbb{Z}\to 2$. Let us begin with the definition of umbrellas. \begin{definition} Let $\Z_0=(Z_0,R_0)$ and $\Z_1=(Z_1,R_1)$ be frames as depicted in Figure~\ref{fig:BulidingBlocks}.
To be precise, we define $Z_0=\cset{a_i:i<6}\cup\cset{b_0,b_1}\cup\cset{c_i:i<3}$, $Z_1=Z_0\cup\cset{a_6,a_7}$, $R_1$ to be the reflexive-transitive closure of the set \begin{center} $\cset{\tup{a_{2i},a_{2i+1}}:i<4}\cup\cset{\tup{a_{2i+2},a_{2i+1}}:i<3}\cup\cset{\tup{b_0,b_1},\tup{b_0,a_1},\tup{c_0,c_1},\tup{c_2,c_1},\tup{c_0,a_3}}$, \end{center} and $R_0=R_1\rsto Z_0$. For each $\alpha:\mathbb{Z}\to 2$, we define the frame $\Z_\alpha=(Z_\alpha,R_\alpha)$ as follows: \begin{itemize} \item $Z_\alpha=\biguplus_{i\in\mathbb{Z}}Z_{\alpha(i)}=\cset{\tup{x,i}:x\in Z_{\alpha(i)}\text{ and }i\in\mathbb{Z}}$ \item $\tup{x,i}R_{\alpha}\tup{y,j}$ if and only if one of the following holds: \begin{itemize} \item $i=j$ and $R_{\alpha(i)}xy$; \item $j=i+1$, $\alpha(i)=0$, $x=a_5$ and $y=a_0$; \item $j=i+1$, $\alpha(i)=1$, $x=a_7$ and $y=a_0$. \end{itemize} \end{itemize} \end{definition} \begin{figure}[ht] \[ \begin{tikzpicture} \draw (-2,0.5) node{$\Z_0$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (1.5,1) node[right]{$a_5$}; \draw (-1,0) node{$\circ$}; \draw (-1,0) node[left]{$a_0$}; \draw [->] (.05-1,.05) -- (0.45-1,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw [->] (1.05,0.05) -- (1.45,.95); \draw (0,0) node[right]{\small $a_2$}; \draw (-0.5,1) node[right]{\small $a_1$}; \draw (1,0) node[right]{\small $a_4$}; \draw (0.5,1) node[right]{\small $a_3$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$b_0$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[left]{$b_1$}; \draw (.5,-1.5) node{$\circ$}; \draw (.5,-1.5) node[right]{$c_0$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1,-.5) node[right]{$c_1$}; \draw (1.5,-1.5) node{$\circ$}; \draw (1.5,-1.5)
node[right]{$c_2$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \end{tikzpicture} \quad\quad\quad \begin{tikzpicture} \draw (-2,0.5) node{$\Z_1$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (1.5,1) node[right]{$a_5$}; \draw (2,0) node{$\circ$}; \draw (2,0) node[right]{$a_6$}; \draw [->] (1.95,.05) -- (1.55,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[left]{$a_0$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw (2.5,1) node{$\circ$}; \draw (2.5,1) node[right]{$a_7$}; \draw [->] (1.05,0.05) -- (1.45,.95); \draw [->] (1.05+1,0.05) -- (1.45+1,.95); \draw (0,0) node[right]{\small $a_2$}; \draw (-0.5,1) node[right]{\small $a_1$}; \draw (1,0) node[right]{\small $a_4$}; \draw (0.5,1) node[right]{\small $a_3$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$b_0$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[left]{$b_1$}; \draw (.5,-1.5) node{$\circ$}; \draw (.5,-1.5) node[right]{$c_0$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1,-.5) node[right]{$c_1$}; \draw (1.5,-1.5) node{$\circ$}; \draw (1.5,-1.5) node[right]{$c_2$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \end{tikzpicture} \] \caption{The frames $\Z_0$ and $\Z_1$} \label{fig:BulidingBlocks} \end{figure} In what follows, let $\alpha:\mathbb{Z}\to 2$ be an arbitrarily fixed sequence on $\mathbb{Z}$ and $\Z_\alpha=(Z,R)$. Our aim is to show that $\mathsf{Log}(\Z_\alpha)$ is pretabular. 
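To make the definition concrete, the finite building blocks and the gluing condition of $R_\alpha$ can be generated mechanically. The following Python sketch is ours, not the authors' (the names \texttt{rt\_closure} and \texttt{bridge} are our own): it computes $R_1$ as the reflexive-transitive closure of the generating edges listed above and expresses the one-step bridge between adjacent copies.

```python
def rt_closure(nodes, edges):
    """Reflexive-transitive closure of a set of directed edges."""
    R = {(x, x) for x in nodes} | set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(R):
            for (u, v) in list(R):
                if y == u and (x, v) not in R:
                    R.add((x, v))
                    changed = True
    return R

# Points of Z_1; the frame Z_0 is the same without a6 and a7.
Z1 = [f"a{i}" for i in range(8)] + ["b0", "b1", "c0", "c1", "c2"]
gen = ([(f"a{2*i}", f"a{2*i+1}") for i in range(4)]      # a_{2i}   -> a_{2i+1}
       + [(f"a{2*i+2}", f"a{2*i+1}") for i in range(3)]  # a_{2i+2} -> a_{2i+1}
       + [("b0", "b1"), ("b0", "a1"),
          ("c0", "c1"), ("c2", "c1"), ("c0", "a3")])
R1 = rt_closure(Z1, gen)

def succ(x):
    return sorted(y for (u, y) in R1 if u == x)

print(succ("b0"))  # ['a1', 'b0', 'b1']
print(succ("c0"))  # ['a3', 'c0', 'c1']

def bridge(alpha, x, i, y, j):
    """One-step R_alpha edge between distinct copies: copy i is glued to
    copy i+1 via a5 -> a0 if alpha(i) = 0, and via a7 -> a0 if alpha(i) = 1."""
    return j == i + 1 and y == "a0" and x == ("a5" if alpha(i) == 0 else "a7")
```

Since every generating edge already ends in a maximal point of the zigzag, the closure only adds the reflexive pairs; the successor sets printed above match the shape of $\Z_0$ and $\Z_1$ in Figure~\ref{fig:BulidingBlocks}.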
To make the proofs below easier to read, we re-index the elements of $Z$ via an onto map $l:Z\to\mathbb{Z}$ such that: \begin{itemize} \item $l(\tup{a_0,0})=0$; \item for all $\tup{a_i,j},\tup{a_{i'},j'}\in Z$, $l(\tup{a_i,j})<l(\tup{a_{i'},j'})$ iff either $j<j'$, or $j=j'$ and $i<i'$; \item for all $i<2$ and $j\in\mathbb{Z}$, $l(\tup{b_i,j})=l(\tup{a_1,j})$; \item for all $i<3$ and $j\in\mathbb{Z}$, $l(\tup{c_i,j})=l(\tup{a_3,j})$. \end{itemize} It is easy to see that such a map $l$ exists and is unique. To simplify notation, we write $a^i_{l(\tup{a_i,j})}$, $b^i_{l(\tup{b_i,j})}$ and $c^i_{l(\tup{c_i,j})}$ for $\tup{a_i,j}$, $\tup{b_i,j}$ and $\tup{c_i,j}$, respectively. We also write $a_j$ for $a^i_j$. A fragment of the re-indexed frame $\Z_\alpha$ is shown in Figure~\ref{fig:relabel}. For all $i,j\in\mathbb{Z}$ such that $i\leq j$, we define $Z[i,j]=\cset{z\in Z: i\leq l(z)\leq j}$. Note that for all $a^{i_1}_{j_1},b^{i_2}_{j_2},c^{i_3}_{j_3}\in Z$, we have $i_1+j_1\in\mathbb{E}$, and $j_2,j_3\in\mathbb{O}$.
\begin{figure}[ht] \[ \begin{tikzpicture} \draw (2.5,0.5) node{$\cdots$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (1.5,1) node[right]{$a^5_5$}; \draw (2,0) node{$\circ$}; \draw (2,0) node[right]{$a^6_6$}; \draw [->] (1.95,.05) -- (1.55,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[right]{$a^0_0$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw [->] (1.05,0.05) -- (1.45,.95); \draw (0,0) node[right]{$a^2_2$}; \draw (-0.5,1) node[right]{$a^1_1$}; \draw (1,0) node[right]{$a^4_4$}; \draw (0.5,1) node[right]{$a^3_3$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$b^0_1$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[left]{$b^1_1$}; \draw (.5,-1.5) node{$\circ$}; \draw (.5,-1.5) node[right]{$c^0_3$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1,-.5) node[right]{$c^1_3$}; \draw (1.5,-1.5) node{$\circ$}; \draw (1.5,-1.5) node[right]{$c^2_3$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \draw (-4+-1.5,0.5) node{$\cdots$}; \draw (-4+0,0) node{$\circ$}; \draw (-4+.5,1) node{$\circ$}; \draw [->] (-4+.05,.05) -- (-4+0.45,.95); \draw (-4+1,0) node{$\circ$}; \draw [->] (-4+.95,.05) -- (-4+0.55,.95); \draw (-4+1.5,1) node{$\circ$}; \draw (-4+1.5,1) node[right]{$a^5_{-3}$}; \draw (-3+1.5,1) node{$\circ$}; \draw (-3+1.5,1) node[right]{$a^7_{-1}$}; \draw (-4+2,0) node{$\circ$}; \draw [->] (-4+1.95,.05) -- (-4+1.55,.95); \draw [->] (-3+1.95,.05) -- (-3+1.55,.95); \draw (-4+-.5,1) node{$\circ$}; \draw [->] (-4+-.05,.05) -- (-4+-0.45,.95); \draw (-4+-1,0) node{$\circ$}; \draw [->] (-4+-.95,.05) -- (-4+-0.55,.95); \draw [->] (-4+1.05,0.05) -- (-4+1.45,.95); \draw [->] (-3+1.05,0.05) -- (-3+1.45,.95); \draw 
(-4+0,0) node[below]{$a^2_{-6}$}; \draw (-4+-0.5,1) node[right]{$a^1_{-7}$}; \draw (-4+-1,0) node[left]{$a^0_{-8}$}; \draw (-3+1,0) node[right]{$a^6_{-2}$}; \draw (-4+1,0) node[right]{$a^4_{-4}$}; \draw (-4+0.5,1) node[right]{$a^3_{-5}$}; \draw (-4+-.5,-1.5) node{$\circ$}; \draw (-4+-.5,-1.5) node[right]{$b^0_{-7}$}; \draw [->] (-4+-.5,-1.45) -- (-4+-.5,.95); \draw [->] (-4+-.55,-1.45) -- (-4+-0.95,-.55); \draw (-4+-1,-.5) node{$\circ$}; \draw (-4+-1,-.5) node[left]{$b^1_{-7}$}; \draw (-4+.5,-1.5) node{$\circ$}; \draw (-4+.5,-1.5) node[right]{$c^0_{-5}$}; \draw [->] (-4+.5,-1.45) -- (-4+.5,.95); \draw [->] (-4+.55,-1.45) -- (-4+0.95,-.55); \draw (-4+1,-.5) node{$\circ$}; \draw (-4+1,-.5) node[right]{$c^1_{-5}$}; \draw (-4+1.5,-1.5) node{$\circ$}; \draw (-4+1.5,-1.5) node[right]{$c^2_{-5}$}; \draw [->] (-4+1.45,-1.45) -- (-4+1.05,-.55); \end{tikzpicture} \] \caption{Fragment of relabelled $\Z_\alpha$} \label{fig:relabel} \end{figure} \begin{lemma}\label{lem:BoundaryOfIntervalsInZ-alpha} Let $\G=(Y,S)\in\mathsf{Fr}_r$ and $f:(\Z_\alpha,z_0)\twoheadrightarrow^{n}(\G,y_0)$. Then for all $i\leq j$ such that $Z[i,j]\sub\R^{n}[z_0]$, $Z[i,j]$ is sufficient if $f[\cset{x_i,x_j}]\sub f[Z[i,j]\setminus\cset{x_i,x_j}]$. \end{lemma} \begin{proof} Note that $\R[Z[i,j]\setminus\cset{x_i,x_j}]\sub Z[i,j]$. By Fact \ref{fact:boundary-sufficient}, $f[\cset{x_i,x_j}]\sub f[Z[i,j]\setminus\cset{x_i,x_j}]$ implies that $Z[i,j]$ is sufficient. \end{proof} \begin{lemma}\label{lem:Zalpha-k-morphism-same-type} Let $\G=(Y,S)\in\mathsf{Fr}_r(\mathsf{S4}_t)$, $z_0\in Z$, $y_0\in Y$ and $n\in\omega$. Suppose $f:(\Z_\alpha,z_0)\twoheadrightarrow^{n+19}(\G,y_0)$ and $f$ is not sufficient. 
Then for all $x,y\in\cset{a,b,c}$ and $i,j,k,l\in\mathbb{Z}$ such that $f(x^i_j)=f(y^k_l)$, the following holds: \begin{enumerate}[(1)] \item if $x^i_j,y^k_l\in\R^{n+18}[z_0]$, then $i+k\in\mathbb{E}$; \item if $x^i_j,y^k_l\in\R^{n+16}[z_0]$, then $x=b$ implies $y\neq c$; \item if $x^i_j,y^k_l\in\R^{n+6}[z_0]$, then $x=b$ implies $y\neq a$; \item if $x^i_j,y^k_l\in\R^{n}[z_0]$, then $x=c$ implies $y\neq a$. \end{enumerate} \end{lemma} \begin{proof} For (1), suppose $x^i_j,y^k_l\in\R^{n+18}[z_0]$ and $i+k\not\in\mathbb{E}$. Assume $i\in\mathbb{E}$. Then $f(x^i_j)=f(y^k_l)$ and $R[y^k_l]\cup\breve{R}[x^i_j]\sub\cset{x^i_j,y^k_l}$, which entails that $\cset{x^i_j,y^k_l}$ is sufficient. Assume $i\in\mathbb{O}$. Then $\breve{R}[y^k_l]\cup R[x^i_j]\sub\cset{x^i_j,y^k_l}$, which also implies that $\cset{x^i_j,y^k_l}$ is sufficient. \vspace{.5em} \noindent For (2), suppose $x^i_j,y^k_l\in\R^{n+16}[z_0]$, $x=b$ and $y=c$. Then we have two cases: (2.1) $i=1$. By (1), $k=1$. By Proposition \ref{prop:k-t-morphism}, we see $\cset{f(b^0_j),f(b^1_j)}=f[\R[b^1_j]]=f[\R[c^1_l]]=\cset{f(c^0_l),f(c^1_l),f(c^2_l)}$, which entails $f(c^0_l)=f(c^2_l)=f(b^0_j)$. Let $X=\cset{b^0_j,b^1_j,c^2_l,c^1_l}$. Then $\R[b^1_j]\cup\R[c^2_l]\sub X$. Since $X\sub\R^{n+18}[z_0]$, $f(c^2_l)=f(b^0_j)$ and $f(c^1_l)=f(b^1_j)$, by Fact \ref{fact:boundary-sufficient}, we see that $X$ is sufficient. (2.2) $i\neq 1$. Then $i=0$ and so $k\in\cset{0,2}$. By (2.1), $f(b^1_j)\neq f(c^1_l)$. If $k=2$, then by Proposition \ref{prop:k-t-morphism} and (1), $f(b^1_j)=f(c^1_l)$, which is impossible. Thus $k=0$. By Proposition \ref{prop:k-t-morphism} and (1), we see $f(a^1_j)=f(c^1_l)$ and $f(a^1_l)=f(b^1_j)$. Consider the set $X'=Z[j,j]\cup Z[l,l]$. Clearly, $X'\sub\R^2[x^i_j]\cup\R^2[y^k_l]\sub\R^{n+18}[z_0]$. Since $\R[X'\setminus\cset{a^1_j,a^1_l}]\sub X'$ and $f[\cset{a^1_j,a^1_l}]\sub f[X'\setminus\cset{a^1_j,a^1_l}]$, we see that $X'$ is sufficient.
Thus this case is impossible. \vspace{.5em} \noindent For (3), let $x=b$ and $y=a$. We first prove the following two claims: \noindent\textbf{Claim 1:} Suppose $i=1$. If $x^i_j,y^k_l\in\R^{n+15}[z_0]$, then $j=l+2$. \noindent\textbf{Proof of Claim 1:} Since $x^i_j,y^k_l\in\R^{n+15}[z_0]$, we see that $Z[\min(j,l),\max(j,l)]\sub\R^{n+18}[z_0]$. Suppose $l=j$. Then $f(b^1_j)=f(a^1_j)$. By Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[j,j]$ is sufficient. Suppose $l=j+2$. Then by Proposition \ref{prop:k-t-morphism} and (1), we have $f(b^0_j)=f(c^0_l)$, which contradicts (2). Suppose $l>j+2$. By (1) and Proposition \ref{prop:k-t-morphism}, we see $f(b^0_j)=f(a_{l-1})$ and $f(a_j)=f(a_{l-2})$. Since $a_{l-2}\neq a_j$, we see $f[\cset{a_j,a_l}]\sub f[Z[j,l]\setminus\cset{a_j,a_l}]$. By Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[j,l]$ is sufficient. Suppose $l+2<j$. By (1) and Proposition \ref{prop:k-t-morphism}, we see $f(b^0_j)=f(a_{l+1})$ and $f(a_j)=f(a_{l+2})$. Since $a_{l+2}\neq a_j$, we see $f[\cset{a_j,a_l}]\sub f[Z[l,j]\setminus\cset{a_j,a_l}]$. By Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[l,j]$ is sufficient.\hfill$\dashv$ \vspace{.5em} \noindent\textbf{Claim 2:} Suppose $i=1$. If $x^i_j,y^k_l\in\R^{n+7}[z_0]$, then $j\neq l+2$. \noindent\textbf{Proof of Claim 2:} Suppose $j=l+2$. Then $k\in\cset{5,7}$. (a) $k=5$. Consider the frame $\Z_\alpha$ with labels in Figure \ref{fig:aux-proof-1}. By Proposition \ref{prop:k-t-morphism} and (1), points with the same label have the same $f$-image. Then $f(c^0_{l-2})\in f[\cset{a_{j+1},b^0_j}]$. By (2), $f(c^0_{l-2})\neq f(b^0_j)$ and so $f(a_{j+1})=f(c^0_{l-2})$. Since $f(a_{l-2})=f(a_j)$ and $Z[l-2,j+1]\sub\R^{n+18}[z_0]$, by Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[l-2,j+1]$ is sufficient, which is impossible. (b) $k=7$. Consider the frame $\Z_\alpha$ with labels given in Figure \ref{fig:aux-proof-2}(a).
By Proposition \ref{prop:k-t-morphism} and (1), we see that points with the same label have the same $f$-image. Moreover, we see that $f(c^0_{l-4})\in f[\cset{a_{j+3},c^0_{j+2}}]$. Suppose $f(c^0_{l-4})=f(a_{j+3})$. Since $Z[l-4,j+3]\sub\R^{n+18}[z_0]$, by Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[l-4,j+3]$ is sufficient. Suppose $f(c^0_{l-4})=f(c^0_{j+2})$. By (1) and Proposition \ref{prop:k-t-morphism}, we can verify that in Figure \ref{fig:aux-proof-2}(b), points with the same label have the same $f$-image. Then $f(b^1_{l-6})\in f[\cset{a_{j+2},a_{j+6}}]$. Since $\cset{a_{j+2},a_{j+6},b^1_{l-6}}\sub\R^{n+15}[z_0]$, by Claim 1, $f(b^1_{l-6})\not\in f[\cset{a_{j+2},a_{j+6}}]$, which is a contradiction.\hfill$\dashv$ \vspace{.5em} Suppose $x^i_j,y^k_l\in\R^{n+6}[z_0]$. By Claim 1 and Claim 2, $i\neq 1$. Then $i=0$. By (1) and Proposition \ref{prop:k-t-morphism}, $f(b^1_j)\in\cset{f(a_{l-1}),f(a_{l+1})}$. Since $\cset{b^1_j,a_{l-1},a_{l+1}}\sub\R^{n+7}[z_0]$, by Claim 1 and Claim 2, $f(b^1_j)\not\in\cset{f(a_{l-1}),f(a_{l+1})}$, which is impossible. \vspace{.5em} \noindent For (4), let $x=c$ and $y=a$. We first prove the following claims: \noindent\textbf{Claim 3:} Suppose $x^i_j,y^k_l\in\R^{n+2}[z_0]$. Then $i\neq 2$. \noindent\textbf{Proof of Claim 3:} Suppose $x^i_j,y^k_l\in\R^{n+2}[z_0]$ and $i=2$. By (1), $j+l\in\mathbb{O}$. Suppose $l=j+1$. By Proposition \ref{prop:k-t-morphism} and (1), $f(a_l)=f(c^1_l)$. By Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[j,l]$ is sufficient. Similarly, $l=j-1$ implies that $Z[l,j]$ is sufficient. Suppose $l>j+3$. By Proposition \ref{prop:k-t-morphism} and (1), $f(a_l)=f(a_{j-3})$. By Lemma \ref{lem:BoundaryOfIntervalsInZ-alpha}, $Z[j,l]$ is sufficient. Similarly, $l<j-3$ implies that $Z[l,j]$ is sufficient. Suppose $l=j-3$. By Proposition \ref{prop:k-t-morphism} and (1), $f(b^0_{l-1})\in f[\R^2[a_l]]=f[\R^2[c^2_j]]=f[\cset{c^0_j,c^1_j,c^2_j}]$.
Since $\cset{b^0_{l-1},c^0_j,c^1_j,c^2_j}\sub\R^{n+16}[z_0]$, by (2), $f(b^0_{l-1})\not\in f[\cset{c^0_j,c^1_j,c^2_j}]$. Suppose $l=j+3$. Then $k=0$ or $k=6$. If $k=0$, then by (1) and Proposition~\ref{prop:k-t-morphism}, we see $f(b^0_{l+1})\in f[\R^2[a_l]]=f[\R^2[c^2_j]]=f[\cset{c^0_j,c^1_j,c^2_j}]$, which contradicts (2). Suppose $k=6$. Consider the relabelled frame in Figure \ref{fig:aux-proof-3}. By Proposition \ref{prop:k-t-morphism} and (1), points with the same label have the same $f$-image. Thus $f(b^0_{l+3})\in\cset{f(c^0_j),f(a_{j-1})}$. Since $\cset{b^0_{l+3},c^0_j,a_{j-1}}\sub\R^{n+6}[z_0]$, by (2) and (3), $f(b^0_{l+3})\not\in\cset{f(c^0_j),f(a_{j-1})}$, which is a contradiction.\hfill$\dashv$ \noindent\textbf{Claim 4:} Suppose $x^i_j,y^k_l\in\R^{n+1}[z_0]$. Then $i\neq 1$. \noindent\textbf{Proof of Claim 4:} Suppose $x^i_j,y^k_l\in\R^{n+1}[z_0]$ and $i=1$. Since $\R[a^k_l]\sub\R^{n+2}[z_0]$, by (1) and Proposition \ref{prop:k-t-morphism}, $f(c^2_j)\in f[\R[a^k_l]]$. By (2) and (3), $k=3$ and $f(c^0_l)=f(c^2_j)$. By Proposition \ref{prop:k-t-morphism}, $f(c^1_j)=f(c^1_l)$, which entails that $\cset{c^0_l,c^1_l,c^2_l,c^1_j,c^2_j}$ is sufficient.\hfill$\dashv$ \vspace{.5em} Suppose $x^i_j,y^k_l\in\R^{n}[z_0]$. By Claim 3 and Claim 4, $i\not\in\cset{1,2}$. Then $i=0$. By (1) and Proposition \ref{prop:k-t-morphism}, $f(c^1_j)\in\cset{f(a_{l-1}),f(a_{l+1})}$. Since $\cset{c^1_j,a_{l-1},a_{l+1}}\sub\R^{n+1}[z_0]$, by Claim 3 and Claim 4, $f(c^1_j)\not\in\cset{f(a_{l-1}),f(a_{l+1})}$, which is impossible. \end{proof} \begin{lemma}\label{lem:Zalpha-k-morphism-same-label} Let $\G=(Y,S)\in\mathsf{Fr}_r(\mathsf{S4}_t)$, $z_0\in Z$, $y_0\in Y$ and $n\in\omega$. Suppose $f:(\Z_\alpha,z_0)\twoheadrightarrow^{n+26}(\G,y_0)$ and $f$ is not sufficient. Then for all $x,y\in\cset{a,b,c}$ and $i,j,k,l\in\mathbb{Z}$ such that $f(x^i_j)=f(y^k_l)$, the following holds: \begin{enumerate}[(1)] \item if $x^i_j,y^k_l\in\R^{n+4}[z_0]$, then $i=k$; \item if $x^i_j,y^k_l\in\R^{n}[z_0]$, then $j=l$.
\end{enumerate} \end{lemma} \begin{proof} Take any $x^i_j,y^k_l\in\R^n[z_0]$ such that $f(x^i_j)=f(y^k_l)$. By Lemma \ref{lem:Zalpha-k-morphism-same-type}, $x=y$ and $i+k\in\mathbb{E}$. For (1), consider the following three cases: (1.1) $x=b$. Then $i,k\in\cset{0,1}$. Since $i+k\in\mathbb{E}$, we see $i=k$. (1.2) $x=c$. Then $i,k\in\cset{0,1,2}$. Suppose $i\neq k$. Then $\cset{i,k}=\cset{0,2}$. Suppose $i=0$. By Proposition \ref{prop:k-t-morphism}, $f(a_j)\in f[\R[c^0_j]]=f[\R[c^2_l]]=\cset{f(c^2_l),f(c^1_l)}$. Since $f:(\Z_\alpha,z_0)\twoheadrightarrow^{n+26}(\G,y_0)$ and $\cset{a_j,c^2_l,c^1_l}\sub\R^{n+7}[z_0]$, by Lemma \ref{lem:Zalpha-k-morphism-same-type}(4), $f(a_j)\not\in\cset{f(c^2_l),f(c^1_l)}$, which leads to a contradiction. Symmetrically, $i=2$ is also impossible. (1.3) $x=a$. Then $i,k\leq 7$. We now show $i=k$ by proving the following claims: \noindent\textbf{Claim 1:} if $x^i_j,y^k_l\in\R^{n+5}[z_0]$, then $i\in\mathbb{E}$ implies $i=k$. \noindent\textbf{Proof of Claim 1:} Consider the following cases: (a) $2\in\cset{i,k}$. Suppose $i=2$. By Proposition \ref{prop:k-t-morphism}, $f(b^0_{j-1}),f(c^0_{j+1})\in f[\R^2[a^k_l]]$. Since $\R^2[a^k_l]\sub\R^{n+7}[z_0]$, by Lemma \ref{lem:Zalpha-k-morphism-same-type}, $k\in\mathbb{E}$ and there are $b^{k_1}_{l_1},c^{k_2}_{l_2}\in\R^2[a^k_l]$, which entails $k=2$. Symmetrically, $k=2$ implies $i=2$. (b) $0\in\cset{i,k}$. Suppose $i=0$. By Proposition \ref{prop:k-t-morphism}, $f(b^0_{j+1})\in f[\R^2[a^k_l]]$, which entails $k\in\cset{0,1,2}$. By Lemma \ref{lem:Zalpha-k-morphism-same-type}(1) and (a), $k=0$. Symmetrically, $k=0$ implies $i=k$. (c) $4\in\cset{i,k}$. Suppose $i=4$. Similar to the argument for (b). Since $i+k\in\mathbb{E}$ and $i,k\leq 7$, by (a)-(c), we see that $i=6$ if and only if $k=6$.\hfill$\dashv$ \noindent\textbf{Claim 2:} if $x^i_j,y^k_l\in\R^{n+4}[z_0]$, then $i=k$. \noindent\textbf{Proof of Claim 2:} The case $i\in\mathbb{E}$ follows from Claim 1 immediately. Suppose $i\in\mathbb{O}$.
By Proposition \ref{prop:k-t-morphism} and the definition of $Z$, $f[\cset{a^{i-1}_{j-1},a^{i'}_{j+1}}]\sub f[\cset{a^{k-1}_{l-1},a^{k'}_{l+1}}]$. Since $\cset{a_{j-1},a_{j+1}}\sub\R^{n+5}[z_0]$, by Claim 1, $\cset{i-1,i'}=\cset{k-1,k'}$. Suppose $i\neq k$. Then $i-1=k'\in\cset{k+1,0}$ and $k-1=i'\in\cset{i+1,0}$. If $i-1=0$, then $k\in\cset{5,7}$, which contradicts $k-1\in\cset{i+1,0}=\cset{2,0}$. Thus $i-1=k+1$ and so $k-1=0$. Then $i\in\cset{5,7}$, which contradicts $i-1\in\cset{k+1,0}=\cset{2,0}$. Hence $i=k$.\hfill$\dashv$ \vspace{.5em} For (2), suppose $j<l$. By Lemma \ref{lem:Zalpha-k-morphism-same-type} and (1), $x=y$ and $i=k$. By Proposition~\ref{prop:k-t-morphism}, $f(a_j)=f(a_l)$. Since $a_j,a_l\in\R^{3}[x^i_j]\cup\R^3[y^k_l]$, we see that $\R[a_{j}]\cup\R[a_{l}]\sub\R^{n+4}[z_0]$. Then by (1), $f(a_{j-1})=f(a_{l-1})$. Thus $Z[j-1,l]$ is sufficient. Symmetrically, $j>l$ implies that $Z[l-1,j]$ is sufficient. By assumption, $f$ is not sufficient, which leads to a contradiction. Hence $j=l$. \end{proof} \begin{lemma}\label{lem:injective-map} Let $\G=(Y,S)\in\mathsf{Fr}_r$ be an infinite frame. Let $z_0\in Z$, $y_0\in Y$ and $n\in\omega$. Suppose $f:(\Z_\alpha,z_0)\twoheadrightarrow^{n+26}(\G,y_0)$. Then $g=f\rsto\R^n[z_0]$ is injective. \end{lemma} \begin{proof} Follows from Lemma \ref{lem:Zalpha-k-morphism-same-type} and Lemma \ref{lem:Zalpha-k-morphism-same-label} immediately.
\end{proof} \begin{figure}[ht] \[ \begin{tikzpicture} \draw (2.5,0.5) node{$\cdots$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (2,0) node{$\circ$}; \draw [->] (1.95,.05) -- (1.55,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[right]{$1$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw [->] (1.05,0.05) -- (1.45,.95); \draw (0,0) node[right]{$3$}; \draw (-0.5,1) node[right]{$2$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$1$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[below]{$0$}; \draw (-1,-.5) node[right]{$b^1_j$}; \draw (.5,-1.5) node{$\circ$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1.5,-1.5) node{$\circ$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \draw (-3+-1.5,0.5) node{$\cdots$}; \draw (-3+0,0) node{$\circ$}; \draw (-3+.5,1) node{$\circ$}; \draw [->] (-3+.05,.05) -- (-3+0.45,.95); \draw (-3+1,0) node{$\circ$}; \draw [->] (-3+.95,.05) -- (-3+0.55,.95); \draw (-3+1.5,1) node{$\circ$}; \draw (-3+1.5,1) node[right]{$0$}; \draw (-3+1.5,1) node[above]{$a_l$}; \draw (-3+2,0) node{$\circ$}; \draw [->] (-3+1.95,.05) -- (-3+1.55,.95); \draw (-3+-.5,1) node{$\circ$}; \draw [->] (-3+-.05,.05) -- (-3+-0.45,.95); \draw (-3+-1,0) node{$\circ$}; \draw [->] (-3+-.95,.05) -- (-3+-0.55,.95); \draw [->] (-3+1.05,0.05) -- (-3+1.45,.95); \draw (-3+0,0) node[below]{$a_{l-3}$}; \draw (0,0) node[below]{$a_{j+1}$}; \draw (-3+1,0) node[right]{$1$}; \draw (-3+0.5,1) node[right]{$2$}; \draw (-3+-.5,-1.5) node{$\circ$}; \draw [->] (-3+-.5,-1.45) -- (-3+-.5,.95); \draw [->] (-3+-.55,-1.45) -- (-3+-0.95,-.55); \draw (-3+-1,-.5) node{$\circ$}; \draw (-3+.5,-1.5) node{$\circ$}; \draw (-3+.5,-1.5) 
node[right]{$c^0_{l-2}$}; \draw (-1+.5,-1.5) node[left]{$b^0_{j}$}; \draw [->] (-3+.5,-1.45) -- (-3+.5,.95); \draw [->] (-3+.55,-1.45) -- (-3+0.95,-.55); \draw (-3+1,-.5) node{$\circ$}; \draw (-3+1.5,-1.5) node{$\circ$}; \draw [->] (-3+1.45,-1.45) -- (-3+1.05,-.55); \end{tikzpicture} \] \caption{For Claim 2(a) in the proof of Lemma \ref{lem:injective-map}} \label{fig:aux-proof-1} \end{figure} \begin{figure}[ht] \[ \begin{tikzpicture} \draw (-5+-1,0) node[left]{(a)}; \draw (2.5,0.5) node{$\cdots$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (1.5,1) node[right]{$6$}; \draw (2,0) node{$\circ$}; \draw (2,0) node[right]{$7$}; \draw [->] (1.95,.05) -- (1.55,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[right]{$1$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw [->] (1.05,0.05) -- (1.45,.95); \draw (0,0) node[right]{$3$}; \draw (0,0) node[below]{$a_{j+1}$}; \draw (-0.5,1) node[right]{$2$}; \draw (1,0) node[left]{$5$}; \draw (1,0) node[right]{$a_{j+3}$}; \draw (0.5,1) node[right]{$4$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$1$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[below]{$0$}; \draw (-1,-.5) node[right]{$b^1_j$}; \draw (.5,-1.5) node{$\circ$}; \draw (.5,-1.5) node[right]{$4'$}; \draw (.5,-1.5) node[below]{$c^0_{j+2}$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1,-.5) node[right]{$4''$}; \draw (1.5,-1.5) node{$\circ$}; \draw (1.5,-1.5) node[right]{$4'''$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \draw (-4+-1.5,0.5) node{$\cdots$}; \draw (-4+0,0) node{$\circ$}; \draw (-4+.5,1) node{$\circ$}; \draw [->] (-4+.05,.05) -- (-4+0.45,.95); \draw (-4+1,0) node{$\circ$}; \draw [->] 
(-4+.95,.05) -- (-4+0.55,.95); \draw (-4+1.5,1) node{$\circ$}; \draw (-4+1.5,1) node[right]{$2$}; \draw (-3+1.5,1) node{$\circ$}; \draw (-3+1.5,1) node[right]{$0$}; \draw (-3+1.5,1) node[above]{$a_l$}; \draw (-4+2,0) node{$\circ$}; \draw [->] (-4+1.95,.05) -- (-4+1.55,.95); \draw [->] (-3+1.95,.05) -- (-3+1.55,.95); \draw (-4+-.5,1) node{$\circ$}; \draw [->] (-4+-.05,.05) -- (-4+-0.45,.95); \draw (-4+-1,0) node{$\circ$}; \draw [->] (-4+-.95,.05) -- (-4+-0.55,.95); \draw [->] (-4+1.05,0.05) -- (-4+1.45,.95); \draw [->] (-3+1.05,0.05) -- (-3+1.45,.95); \draw (-3+1,0) node[right]{$1$}; \draw (-4+1,0) node[right]{$3$}; \draw (-4+0.5,1) node[right]{$4$}; \draw (-4+0.5,1) node[above]{$a_{l-4}$}; \draw (-4+-.5,-1.5) node{$\circ$}; \draw [->] (-4+-.5,-1.45) -- (-4+-.5,.95); \draw [->] (-4+-.55,-1.45) -- (-4+-0.95,-.55); \draw (-4+-1,-.5) node{$\circ$}; \draw (-4+.5,-1.5) node{$\circ$}; \draw (-4+.5,-1.5) node[right]{$c^0_{l-4}$}; \draw [->] (-4+.5,-1.45) -- (-4+.5,.95); \draw [->] (-4+.55,-1.45) -- (-4+0.95,-.55); \draw (-4+1,-.5) node{$\circ$}; \draw (-4+1.5,-1.5) node{$\circ$}; \draw [->] (-4+1.45,-1.45) -- (-4+1.05,-.55); \end{tikzpicture} \] \[ \begin{tikzpicture} \draw (-5+-1,0) node[left]{(b)}; \draw (3,0.5) node{$\cdots$}; \draw (0,0) node{$\circ$}; \draw (.5,1) node{$\circ$}; \draw [->] (.05,.05) -- (0.45,.95); \draw (1,0) node{$\circ$}; \draw [->] (.95,.05) -- (0.55,.95); \draw (1.5,1) node{$\circ$}; \draw (1.5,1) node[right]{$6$}; \draw (2,0) node{$\circ$}; \draw (2,0) node[right]{$7$}; \draw [->] (1.95,.05) -- (1.55,.95); \draw (2.5,1) node{$\circ$}; \draw (2.5,1) node[right]{$8$}; \draw (2.5,1) node[above]{$a_{j+6}$}; \draw [->] (2.05,.05) -- (2.45,.95); \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[right]{$1$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw [->] (1.05,0.05) -- (1.45,.95); \draw (0,0) node[right]{$3$}; \draw (-0.5,1) node[right]{$2$}; \draw (1,0) node[right]{$5$}; \draw (.5,1) 
node[above]{$a_{j+2}$}; \draw (0.5,1) node[right]{$4$}; \draw (-.5,-1.5) node{$\circ$}; \draw (-.5,-1.5) node[right]{$1$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-1,-.5) node[below]{$0$}; \draw (-1,-.5) node[right]{$b^1_j$}; \draw (.5,-1.5) node{$\circ$}; \draw (.5,-1.5) node[right]{$4'$}; \draw [->] (.5,-1.45) -- (.5,.95); \draw [->] (.55,-1.45) -- (0.95,-.55); \draw (1,-.5) node{$\circ$}; \draw (1,-.5) node[right]{$4''$}; \draw (1.5,-1.5) node{$\circ$}; \draw (1.5,-1.5) node[right]{$4'''$}; \draw [->] (1.45,-1.45) -- (1.05,-.55); \draw (-4+-1.5,0.5) node{$\cdots$}; \draw (-4+0,0) node{$\circ$}; \draw (-4+.5,1) node{$\circ$}; \draw [->] (-4+.05,.05) -- (-4+0.45,.95); \draw (-4+1,0) node{$\circ$}; \draw [->] (-4+.95,.05) -- (-4+0.55,.95); \draw (-4+1.5,1) node{$\circ$}; \draw (-4+1.5,1) node[right]{$2$}; \draw (-3+1.5,1) node{$\circ$}; \draw (-3+1.5,1) node[right]{$0$}; \draw (-3+1.5,1) node[above]{$a_l$}; \draw (-4+2,0) node{$\circ$}; \draw [->] (-4+1.95,.05) -- (-4+1.55,.95); \draw [->] (-3+1.95,.05) -- (-3+1.55,.95); \draw (-4+-.5,1) node{$\circ$}; \draw [->] (-4+-.05,.05) -- (-4+-0.45,.95); \draw (-4+-1,0) node{$\circ$}; \draw [->] (-4+-.95,.05) -- (-4+-0.55,.95); \draw [->] (-4+1.05,0.05) -- (-4+1.45,.95); \draw [->] (-3+1.05,0.05) -- (-3+1.45,.95); \draw (-4+0,0) node[right]{$5$}; \draw (-4+-0.5,1) node[right]{$6$}; \draw (-3+1,0) node[right]{$1$}; \draw (-4+1,0) node[right]{$3$}; \draw (-4+0.5,1) node[right]{$4$}; \draw (-4+-.5,-1.5) node{$\circ$}; \draw (-4+-1,-.5) node[left]{$b^1_{l-6}$}; \draw [->] (-4+-.5,-1.45) -- (-4+-.5,.95); \draw [->] (-4+-.55,-1.45) -- (-4+-0.95,-.55); \draw (-4+-1,-.5) node{$\circ$}; \draw (-4+.5,-1.5) node{$\circ$}; \draw (-4+.5,-1.5) node[right]{$4'$}; \draw [->] (-4+.5,-1.45) -- (-4+.5,.95); \draw [->] (-4+.55,-1.45) -- (-4+0.95,-.55); \draw (-4+1,-.5) node{$\circ$}; \draw (-4+1,-.5) node[right]{$4''$}; \draw (-4+1.5,-1.5) node{$\circ$}; \draw 
(-4+1.5,-1.5) node[right]{$4'''$}; \draw [->] (-4+1.45,-1.45) -- (-4+1.05,-.55); \end{tikzpicture} \] \caption{For Claim 2(b) in the proof of Lemma \ref{lem:injective-map}} \label{fig:aux-proof-2} \end{figure} \begin{figure}[ht] \[ \begin{tikzpicture} \draw (-2,0) node[left]{$a_l$}; \draw (-4+1.5,-1.5) node[left]{$c^2_j$}; \draw (.5,0.5) node{$\cdots$}; \draw (0,0) node{$\circ$}; \draw (-.5,1) node{$\circ$}; \draw [->] (-.05,.05) -- (-0.45,.95); \draw (-1,0) node{$\circ$}; \draw (-1,0) node[right]{$2$}; \draw [->] (-.95,.05) -- (-0.55,.95); \draw (-0.5,1) node[right]{$3$}; \draw (-0.5,-1.5) node[right]{$b^0_{l+3}$}; \draw (-.5,-1.5) node{$\circ$}; \draw [->] (-.5,-1.45) -- (-.5,.95); \draw [->] (-.55,-1.45) -- (-0.95,-.55); \draw (-1,-.5) node{$\circ$}; \draw (-4.5,0.5) node{$\cdots$}; \draw (-4+0,0) node{$\circ$}; \draw (-4+.5,1) node{$\circ$}; \draw [->] (-4+.05,.05) -- (-4+0.45,.95); \draw (-4+1,0) node{$\circ$}; \draw [->] (-4+.95,.05) -- (-4+0.55,.95); \draw (-4+1.5,1) node{$\circ$}; \draw (-4+1.5,1) node[right]{$1$}; \draw (-3+1.5,1) node{$\circ$}; \draw (-3+1.5,1) node[right]{$1$}; \draw (-4+2,0) node{$\circ$}; \draw [->] (-4+1.95,.05) -- (-4+1.55,.95); \draw [->] (-3+1.95,.05) -- (-3+1.55,.95); \draw [->] (-4+1.05,0.05) -- (-4+1.45,.95); \draw [->] (-3+1.05,0.05) -- (-3+1.45,.95); \draw (-4+0,0) node[left]{$4$}; \draw (-4+0,0) node[below]{$a_{j-1}$}; \draw (-3+1,0) node[right]{$0$}; \draw (-4+1,0) node[right]{$2$}; \draw (-4+1,0) node[left]{$c^0_j$}; \draw (-4+0.5,1) node[right]{$3$}; \draw (-4+.5,-1.5) node{$\circ$}; \draw (-4+.5,-1.5) node[right]{$2$}; \draw [->] (-4+.5,-1.45) -- (-4+.5,.95); \draw [->] (-4+.55,-1.45) -- (-4+0.95,-.55); \draw (-4+1,-.5) node{$\circ$}; \draw (-4+1,-.5) node[right]{$1$}; \draw (-4+1.5,-1.5) node{$\circ$}; \draw (-4+1.5,-1.5) node[right]{$0$}; \draw [->] (-4+1.45,-1.45) -- (-4+1.05,-.55); \end{tikzpicture} \] \caption{For Claim 3 in the proof of Lemma \ref{lem:injective-map}} \label{fig:aux-proof-3} \end{figure} 
\begin{lemma}\label{lem:partial-iso-Zalpha} Let $\G=(Y,S)\in\mathsf{Fr}_r(\mathsf{Log}(\Z_\alpha))$ be an infinite frame. Then for all $k\in\omega$ and $y\in Y$, $\G\rsto S_\sharp^k[y]\rightarrowtail\Z_\alpha$. \end{lemma} \begin{proof} Since $\G\md\mathsf{Log}(\Z_\alpha)$ and $\G,y\not\md\neg\J^{k+26}(\G,y)$, there exists $x\in Z$ such that $\Z_\alpha,x\not\md\neg\J^{k+26}(\G,y)$. By Proposition \ref{lem:JankovLemma-k}, there exists a map $f:(\Z_\alpha,x)\twoheadrightarrow^{k+26}(\G,y)$. By Lemma \ref{lem:injective-map}, $g=f\rsto \R^k[x]$ is injective. By Fact \ref{fact:inj-tmorphism}, $g:\R^k[x]\iso\R^k[y]$. \end{proof} \begin{lemma} If $\alpha:\mathbb{Z}\to 2$ is finitely perfect, then $\mathsf{Log}(\Z_\alpha)$ is pretabular. \end{lemma} \begin{proof} Let $L\supseteq\mathsf{Log}(\Z_\alpha)$ be non-tabular. By Theorem \ref{thm:nontabular-infrootedframe}, $L\sub\mathsf{Log}(\gf)$ for some rooted refined frame $\gf$. Since $\gf\md\mathbf{alt}^+_3\wedge\mathbf{alt}^-_4$, we see that $\gf$ is image-finite. By Lemma \ref{lem:pointwise-finite-kappaF}, $\mathsf{Log}(\gf)=\mathsf{Log}(\kappa\gf)$. Let $\kappa\gf=\G=(Y,S)$. It suffices to show that $\mathsf{Log}(\Z_\alpha)\supseteq\mathsf{Log}(\G)$. Take any $\phi\not\in\mathsf{Log}(\Z_\alpha)$. Then $\Z_\alpha,z\not\md\phi$ for some $z\in Z$ and there exists a finite subsequence $\beta$ of $\alpha$ such that $\Z_\alpha\rsto\R^{\mathsf{md}(\phi)}[z]\rightarrowtail\Z_\beta$. Since $\alpha$ is finitely perfect, there exists $n\in\omega$ such that $\beta\triangleleft\gamma$ for all $\gamma\triangleleft\alpha$ with $|\gamma|>n$. Let $m=8(n+3)$. Take any $y\in Y$. By Lemma \ref{lem:partial-iso-Zalpha}, $\G\rsto S_\sharp^m[y]\cong\Z$ for some $\Z\rightarrowtail\Z_\alpha$. Then $\mathsf{zdg}(\Z)\geq m$. By the construction of $\Z_\alpha$, we see $\Z_\gamma\rightarrowtail\Z$ for some $\gamma\triangleleft\alpha$ with $|\gamma|\geq n$.
Thus $\Z_\alpha\rsto\R^{\mathsf{md}(\phi)}[z]\rightarrowtail\Z_\beta\rightarrowtail\Z_\gamma\rightarrowtail\Z\iso\G\rsto S_\sharp^m[y]$, which implies $\phi\not\in\mathsf{Log}(\G)$. Hence $\mathsf{Log}(\Z_\alpha)\supseteq\mathsf{Log}(\G)\supseteq L\supseteq\mathsf{Log}(\Z_\alpha)$, which entails that $\mathsf{Log}(\Z_\alpha)$ is pretabular. \end{proof} \begin{corollary}\label{coro:LogZchif-is-pretabular} For all $f:\omega\to 2$, the logic $\mathsf{Log}(\Z_{\chi^f})$ is pretabular. \end{corollary} \begin{lemma}\label{lem:distinctMap-distinctLogic} For all distinct functions $f,g:\omega\to 2$, $\mathsf{Log}(\Z_{\chi^f})\nsubseteq\mathsf{Log}(\Z_{\chi^g})$. \end{lemma} \begin{proof} Let $f,g:\omega\to 2$ be distinct. By Lemma \ref{lem:TM-sequence-emb}, there exists $\beta\triangleleft\chi^g$ such that $\beta\ntriangleleft\chi^f$. Suppose $\Z_{\chi^g}\md\mathsf{Log}(\Z_{\chi^f})$. By Lemma \ref{lem:partial-iso-Zalpha}, $\Z_\beta\rightarrowtail\Z_{\chi^f}$. By the construction of $\Z_\beta$ and $\Z_{\chi^f}$, it is clear that $\beta\triangleleft\chi^f$, which is impossible. Thus $\mathsf{Log}(\Z_{\chi^f})\nsubseteq\mathsf{Log}(\Z_{\chi^g})$. \end{proof} As consequences, the following theorems hold: \begin{theorem}\label{thm:pretab-BS223} $|\mathsf{Pre}(\mathsf{S4}_t)|\geq|\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,3})|=2^{\aleph_0}$. \end{theorem} \begin{proof} By Lemma \ref{lem:distinctMap-distinctLogic}, $|\cset{\mathsf{Log}(\Z_{\chi^f}):f\in 2^\omega}|=2^{\aleph_0}$. By Corollary \ref{coro:LogZchif-is-pretabular}, we see that $\cset{\mathsf{Log}(\Z_{\chi^f}):f\in 2^\omega}\sub\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,3})\sub\mathsf{Pre}(\mathsf{S4}_t)$. \end{proof} \begin{theorem}\label{thm:pretab-S4t} For every cardinal $\kappa$ with $\kappa\leq\aleph_0$ or $\kappa=2^{\aleph_0}$, there exists $L\in\mathsf{NExt}(\mathsf{S4}_t)$ such that $|\mathsf{Pre}(L)|=\kappa$.
\end{theorem} \begin{remark}\label{rem:fmp-pretab} As shown in this section, the logics $\mathsf{Log}(\Z_{\chi^f})$ are pretabular, Kripke complete and of finite depth. There exist modal logics in $\mathsf{NExt}(\mathsf{S4})$ which also satisfy these properties, for example $\mathsf{S5}$. However, there are substantial differences between the lattices $\mathsf{NExt}(\mathsf{S4})$ and $\mathsf{NExt}(\mathsf{S4}_t)$. For example, by \cite[Theorems 12.7 and 12.11]{Chagrov1997}, the following claims are true in $\mathsf{NExt}(\mathsf{S4})$, even in $\mathsf{NExt}(\mathsf{K4})$: \begin{enumerate}[(i)] \item every tabular logic has finitely many immediate predecessors; \item every pretabular logic enjoys the finite model property. \end{enumerate} On the other hand, our conjecture is that $\mathsf{Fin}_r(\mathsf{Log}(\Z_{\chi^f}))=\mathsf{Iso}(\cset{\C^\uparrow_1,\C^\uparrow_2})$ for any $f\in 2^\omega$. This would imply that $\mathsf{Log}(\C^\ua_2)$ has a continuum of immediate predecessors and that $\mathsf{Log}(\Z_{\chi^f})$ is pretabular but lacks the finite model property. Thus neither (i) nor (ii) would hold in $\mathsf{NExt}(\mathsf{S4}_t)$. In order to prove this conjecture, we would need to show that the critical exponent of $\chi^f$ is always finite. We leave this to future research. \end{remark} \section{Conclusions} The present work contributes a series of results on pretabularity in tense logics above $\mathsf{S4}_t$. We started with the tense logics $\mathsf{S4BP}_{n,m}^{k,l}$ with bounding parameters. We provided a full characterization of $\mathsf{Pre}(\mathsf{S4BP}_{n,m}^{k,l})$ for the cases where all parameters $k,l,n,m$ are finite. Then we investigated some concrete tense logics where some of the parameters are infinite. Full characterizations of the pretabular logics extending $\mathsf{S4.3}_t$ and $\mathsf{S4BP}_{2,2}^{2,\omega}$ were provided, where $\mathsf{S4.3}_t=\mathsf{S4BP}_{1,1}^{\omega,\omega}$.
It was shown that $|\mathsf{Pre}(\mathsf{S4.3}_t)|=5$ and $|\mathsf{Pre}(\mathsf{S4BP}_{2,2}^{2,\omega})|=\aleph_0$. An anti-dichotomy theorem for the cardinality of pretabular extensions of logics in $\mathsf{NExt}(\mathsf{S4BP}_{2,2}^{2,\omega})$ was provided. Finally, we studied pretabular tense logics in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,3})$. By Theorem \ref{thm:pretab-BS223}, there is a continuum of pretabular logics in $\mathsf{NExt}(\mathsf{S4BP}^{2,\omega}_{2,3})$. This answered the open problem raised in \cite{Rautenberg1979}. We also established Theorem \ref{thm:pretab-S4t}, which gives a general result on the cardinality of pretabular extensions of logics in $\mathsf{NExt}(\mathsf{S4}_t)$. On the other hand, Theorem \ref{thm:pretab-BS223} indicates that it is hopeless to have a full characterization of $\mathsf{Pre}(\mathsf{S4BP}^{2,\omega}_{2,3})$ or $\mathsf{Pre}(\mathsf{S4}_t)$. However, this does not mean that research on pretabular tense logics in $\mathsf{NExt}(\mathsf{S4}_t)$ is complete. Much future work remains; some directions have already been mentioned in the remarks (see Remark \ref{rem:K4BP} and Remark \ref{rem:Kracht-Ga}). Additionally, we outline a few more topics here: One possible direction is to explore further the pretabular logics extending $\mathsf{S4BP}_{n,m}^{k,l}$. For example, consider the tense logic $\mathsf{S4BP}_{2,2}^{3,\omega}$, which has a forth-width and back-width of 2 and a depth of 3. The cardinality of $\mathsf{Pre}(\mathsf{S4BP}_{2,2}^{3,\omega})$ remains unknown. Pretabular logics extending $\mathsf{S4BP}_{n,m}^{k,l}$ for finite $l$ are not yet well understood. Another direction for future work is to investigate pretabular logics in $\mathsf{NExt}(\mathsf{S4}_t)$ with the FMP. Pretabular logics can be viewed as boundaries of tabular logics. It is natural to consider that pretabular logics with the FMP act as limits of certain sets of tabular logics.
As shown in \cite[Theorem 12.11]{Chagrov1997}, every pretabular modal logic in $\mathsf{NExt}(\mathsf{K4})$ has the FMP. By Theorem \ref{thm:BS222-pretab-fmp}, every pretabular tense logic in $\mathsf{NExt}(\mathsf{S4BP}_{2,2}^{2,\omega})$ has the FMP. However, if our conjecture in Remark \ref{rem:fmp-pretab} is correct, then there exists a continuum-sized family of pretabular tense logics lacking the FMP in $\mathsf{NExt}(\mathsf{S4BP}_{2,3}^{2,\omega})$. This raises at least two natural questions: (i) When does $\mathsf{NExt}(L)$ contain pretabular logics lacking the FMP? (ii) How many pretabular logics with the FMP exist in $\mathsf{NExt}(\mathsf{S4}_t)$? Exploring these questions will deepen our understanding of the lattices of tense logics. \vspace{2em} \noindent\textbf{Acknowledgement.} The author is grateful to Nick Bezhanishvili for his very helpful and insightful comments, which significantly improved the manuscript. The author also thanks Tenyo Takahashi for his inspiring suggestions, contributing to the main proof in Section~\ref{sec:BS223}. Finally, the author is indebted to Minghui Ma and Rodrigo Nicolau Almeida for the discussions that helped shape the ideas presented here. The author is supported by the Chinese Scholarship Council. \bibliographystyle{siam} \bibliography{reference-arXiv} \end{document}
2412.19618v1
http://arxiv.org/abs/2412.19618v1
On Counting Constructions and Isomorphism Classes of $I$-Graphs
\documentclass[12pt]{article} \usepackage[margin=1.5in]{geometry} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{tikz} \usetikzlibrary{positioning} \usepackage{tkz-graph} \usepackage{float} \GraphInit[vstyle = Shade] \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \DeclareMathOperator{\id}{id} \newcommand{\bd}[1]{\mathbf{#1}} \newcommand{\RR}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\col}[1]{\left[\begin{matrix} #1 \end{matrix} \right]} \newcommand{\comb}[2]{\binom{#1^2 + #2^2}{#1+#2}} \begin{document} \title{\textbf{On Counting Constructions and Isomorphism Classes of $I$-Graphs}} \author{Harrison Bohl \\ School of Mathematics and Physics \\ University of Queensland \\ \texttt{[email protected]} \\ \ \\ Adrian W. Dudek \\ School of Mathematics and Physics \\ University of Queensland \\ \texttt{[email protected]}} \date{} \maketitle \begin{abstract} We prove a collection of asymptotic density results for several interesting classes of the $I$-graphs. Specifically, we quantify precisely the proportion of $I$-graphs that are generalised Petersen graphs as well as those that are connected. Our results rely on the estimation of sums over tuples satisfying various coprimality conditions along with other techniques from analytic number theory. \end{abstract} \section{Introduction and Main Results} The purpose of this paper is to prove some results regarding the $I$-graphs, a further generalisation of the generalised Petersen graphs (see \cite{Boben} for an introduction). Specifically, our main result is to quantify the extent of this generalisation, that is, we determine the density of the $I$-graphs that are isomorphic to generalised Petersen graphs. 
We first dispense with terminology; if one lets $n, k \in \mathbb{N}$ such that $n\geq 3$ and $k \leq n/2$, then one can define the generalised Petersen graph (or GPG) $P(n,k)$ to be the graph with vertex set $$V=\{a_i, b_i : 0 \leq i \leq n-1\}$$ and edge set $$E=\{a_i a_{i+1}, a_i b_i, b_i b_{i+k} : 0 \leq i \leq n-1 \}$$ with the subscript arithmetic being performed modulo $n$. \begin{figure}[H] \centering \includegraphics[width=0.45\linewidth,angle=270]{GPG.png} \caption{The generalised Petersen graph $P(10,3)$.} \label{fig:GPGex} \end{figure} Moreover, throughout this paper, $\zeta(s)$ refers to the Riemann zeta-function, $\varphi(n)$ is the Euler totient function, $\omega(n)$ denotes the number of distinct prime divisors of $n$, $\tau(n)$ denotes the number of divisors of $n$, and $\mu(n)$ denotes the M\"obius function evaluated at $n$. One can see the textbook of Bateman--Diamond \cite{BatemanDiamond} for more information on these and other functions central to analytic number theory. To further generalise the GPGs, let $n, j, k \in \mathbb{N}$ such that $n \geq 3$, $k \leq n/2$ and $1 \leq j \leq k$, and define the $I$-graph $I(n,j,k)$ to be the graph with vertex set $$V=\{a_i, b_i : 0 \leq i \leq n-1\}$$ and edge set $$E=\{a_i a_{i+j}, a_i b_i, b_i b_{i+k} : 0 \leq i \leq n-1 \}.$$ Clearly, we have that $I(n,1,k) = P(n,k)$ for all $n$ and $k \leq n/2$. The main purpose of this paper is to prove some results on the number of $I$-graphs that are GPGs as well as on the number of $I$-graphs that are connected. We begin with the statement of a result which says that, in the asymptotic sense, approximately 89.32\% of permissible choices for $(n,j,k)$ give rise to an $I$-graph that is also a generalised Petersen graph. \begin{thm} \label{igraphsextendgpgs} Let $A(N)$ count the number of tuples $(n, j, k)$ with $3 \leq n \leq N$, $k \leq n/2$ and $1 \leq j \leq k$. Let $B(N)$ count the number of these tuples such that $I(n, j, k)$ is a generalised Petersen graph.
Then $$\lim_{N \rightarrow \infty} \frac{B(N)}{A(N)} = \frac{12}{\pi^2} - C = 0.8932\ldots$$ where $C = \prod_p (1-2/p^2) = 0.3226\ldots$ is an infinite product over the prime numbers. \end{thm} It should be noted that the constant $C$ is studied elsewhere. Specifically, it is the asymptotic density of integers $n$ such that $n$ and $n+1$ are both square-free \cite{Mirsky1947} and is approximately equal to 0.3226. Moreover, $(1+C)/2$ is known as the Feller--Tornier constant and is equal to the asymptotic density of numbers with an even number of non-unitary prime divisors (see Feller--Tornier \cite{FellerTornier}). It should also be noted that some tuples $(n, j, k)$ will result in an $I$-graph that is not connected. Our next result gives the density of tuples that result in a connected $I$-graph. Specifically, we show that about 83.2\% of all choices for $(n,j,k)$ result in a connected graph. \begin{thm} \label{igraphsconnected} Let $A(N)$ count the number of tuples $(n, j, k)$ with $3 \leq n \leq N$, $k \leq n/2$ and $1 \leq j \leq k$. Let $C(N)$ count the subset of these tuples such that $I(n, j, k)$ is connected. Then $$\lim_{N \rightarrow \infty} \frac{C(N)}{A(N)} = \frac{1}{\zeta(3)} = 0.83190\ldots.$$ \end{thm} The above results are, in a strong sense, number-theoretic, in that they arise from the counting of integer tuples satisfying various properties. Indeed, it may be considered more natural to perform the counting amongst the isomorphism classes of the graphs themselves, rather than the tuples. Our second set of results acts upon this consideration, starting with the following, which provides an asymptotic estimate for the number of isomorphism classes of the $I$-graphs with $n \leq N$. We first remind the reader that we write $f(N) \sim g(N)$ to mean that $f(N)/g(N) \rightarrow 1$ as $N \rightarrow \infty$.
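The limiting ratios in the two theorems above can be explored by direct enumeration, using the coprimality criteria quoted in Section 3 ($(n,j)=1$ or $(n,k)=1$ for the GPG property, and $(n,j,k)=1$ for connectedness). The following is a rough numerical sketch, not part of the paper; the helper name is ours, and convergence in $N$ is slow, so moderate $N$ only approximates the limits.

```python
from math import gcd

def tuple_counts(N):
    """Count tuples (n, j, k) with 3 <= n <= N and 1 <= j <= k <= n/2.

    Returns (A, B, C): all tuples; tuples with gcd(n, j) = 1 or
    gcd(n, k) = 1 (the GPG criterion); and tuples with
    gcd(n, j, k) = 1 (the connectedness criterion).
    """
    A = B = C = 0
    for n in range(3, N + 1):
        half = n // 2
        coprime = [False] + [gcd(n, t) == 1 for t in range(1, half + 1)]
        prefix = [0] * (half + 1)  # prefix[t] = #{j <= t : gcd(n, j) = 1}
        for t in range(1, half + 1):
            prefix[t] = prefix[t - 1] + coprime[t]
        for k in range(1, half + 1):
            A += k
            # if gcd(n, k) = 1, every j <= k qualifies; otherwise only
            # the j coprime to n do
            B += k if coprime[k] else prefix[k]
            C += sum(1 for j in range(1, k + 1)
                     if gcd(gcd(n, j), k) == 1)
    return A, B, C

A, B, C = tuple_counts(200)
print(B / A, C / A)  # finite-N approximations of the limiting densities
```

Note that every tuple counted by $B(N)$ is also counted by $C(N)$, since $(n,j)=1$ or $(n,k)=1$ implies $(n,j,k)=1$.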
\begin{thm} \label{countingisomorphisms} Let $CI(N)$ denote the count of isomorphism classes of $I$-graphs $I(n, j, k)$ with $n \leq N$. Then $$CI(N) \sim \frac{5}{16} N^2$$ as $N \rightarrow \infty$. \end{thm} The above theorem immediately tells us that, on average, there are more than $c N$ graphs in each isomorphism class for some constant $c > 0$. To see this directly, one notes that when we count all the tuples $(n, j, k)$ with $3 \leq n \leq N$, $k \leq n/2$ and $1 \leq j \leq k$, we furnish a result asymptotic to a constant multiple of $N^3$. The remaining two theorems provide us with the density of our interesting subclasses amongst the isomorphism classes. \begin{thm} \label{igraphsextendgpgsisomorphism} Let $CP(N)$ denote the count of isomorphism classes of generalised Petersen graphs $P(n,k)$ with $n\leq N$ and let $CI(N)$ denote the count of isomorphism classes of $I$-graphs with $n \leq N$. Then $$\lim_{N\to\infty}\frac{CP(N)}{CI(N)}=\frac{4(\pi^2-3)}{5 \pi^2} = 0.55683\dots.$$ \end{thm} This says that approximately 55.7\% of isomorphism classes of $I$-graphs are isomorphism classes of generalised Petersen graphs. \begin{thm} \label{igraphsconnectedisomorphism} Let $CI_c(N)$ denote the count of isomorphism classes of connected $I$-graphs with $n\leq N$ and let $CI(N)$ denote the count of isomorphism classes of $I$-graphs with $n\leq N$. Then $$\lim_{N\to\infty}\frac{CI_c(N)}{CI(N)}=\frac1{\zeta(2)}=0.60793\dots.$$ \end{thm} This says that approximately 60.8\% of isomorphism classes of $I$-graphs are isomorphism classes of connected $I$-graphs. It is also interesting to note that this is the asymptotic density of the square-free integers or, alternatively, the probability that a randomly chosen pair of integers is coprime. These two results illustrate the importance of counting isomorphism classes of the graphs, rather than simply counting all permissible tuples.
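The decimal values quoted in the two theorems above, together with the ratio of the two limits (used below to compare $CP(N)$ with $CI_c(N)$), are easily checked numerically; this is a verification sketch only, with variable names of our own choosing.

```python
from math import pi

zeta2 = pi ** 2 / 6                              # zeta(2)

cp_over_ci = 4 * (pi ** 2 - 3) / (5 * pi ** 2)   # stated limit of CP(N)/CI(N)
cic_over_ci = 1 / zeta2                          # stated limit of CI_c(N)/CI(N), = 6/pi^2
cp_over_cic = cp_over_ci / cic_over_ci           # implied limit of CP(N)/CI_c(N)

print(cp_over_ci, cic_over_ci, cp_over_cic)
```

The quotient simplifies algebraically to $2(\pi^2-3)/15$, since $\frac{4(\pi^2-3)}{5\pi^2}\cdot\frac{\pi^2}{6}=\frac{4(\pi^2-3)}{30}$.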
One may combine the two above results to see that $$\lim_{N \rightarrow \infty} \frac{CP(N)}{CI_c(N)} = \frac{2 (\pi^2-3)}{15} = 0.91594\ldots.$$ In short, the overwhelming majority of connected $I$-graphs are, in fact, GPGs, and there are many graphs in each isomorphism class. However, of the $I$-graphs that are not connected, there are many different isomorphism classes with relatively few graphs in each class. \section{Preliminary Lemmas and Results} \subsection{Estimates for Sums} \label{estimates} To prove Theorems \ref{igraphsextendgpgs} and \ref{igraphsconnected}, we will state and prove a couple of useful lemmas. Some of these sums will be of independent interest to number theorists. \begin{lem}\label{countingcoprimenumbers} Let $m, n \in \mathbb{N}$. We have that \begin{equation} \sum_{\substack{k \leq m \\ (k, n) = 1}} 1 = \frac{m \varphi(n)}{n} + O( 2^{\omega(n)}). \end{equation} \end{lem} \begin{proof} We proceed directly through \begin{eqnarray*} \sum_{\substack{k \leq m \\ (k, n)=1}} 1 & = & \sum_{k \leq m} \sum_{d | (k, n)} \mu(d) \\ & = & \sum_{d | n} \mu(d) \sum_{\substack{k \leq m \\ d | k} } 1 \\ & = & \sum_{d | n} \mu(d) \bigg( \frac{m}{d} + O(1) \bigg) \\ & = & m \sum_{d |n} \frac{\mu(d)}{d} + O\bigg( \sum_{d|n} | \mu(d) | \bigg) \\ & = & \frac{m \varphi(n)}{n} + O( 2^{\omega(n)}). \end{eqnarray*} \end{proof} \begin{lem}\label{summingcoprimenumbers} Let $m, n \in \mathbb{N}$. We have that \begin{equation} \sum_{\substack{k \leq m \\ (k, n) = 1}} k = \frac{m^2 \varphi(n)}{2 n} + O(m 2^{\omega(n)}). \end{equation} \end{lem} \begin{proof} We proceed directly through \begin{eqnarray*} \sum_{\substack{k \leq m \\ (k, n)=1}} k & = & \sum_{k \leq m} k \sum_{d | (k, n)} \mu(d) \\ & = & \sum_{d | n} \mu(d) \sum_{\substack{k \leq m \\ d | k} }k \\ & = & \sum_{d | n} \mu(d) \bigg( \frac{m^2}{2d} + O(m) \bigg) \\ & = & \frac{m^2}{2} \sum_{d |n} \frac{\mu(d)}{d} + O\bigg( m \sum_{d|n} |\mu(d)| \bigg) \\ & = & \frac{m^2 \varphi(n)}{2n} + O(m 2^{\omega(n)}).
\end{eqnarray*} \end{proof} \begin{lem}\label{phisquared} We have that \begin{equation} \sum_{n \leq N} \varphi^2(n) = C \frac{N^3}{3} + O(N^2 \log N) \end{equation} where $$C = \prod_{p} \bigg( 1-\frac{2}{p^2}\bigg),$$ the product being over all prime numbers $p$. \end{lem} \begin{proof} Set $k=0$ in Equation 30 of \cite{Mirsky}. \end{proof} \begin{lem}\label{nphi} We have that \begin{equation} \sum_{n \leq N} n \varphi(n) = \frac{2}{\pi^2} N^3 +o(N^3) . \end{equation} \end{lem} \begin{proof} This follows from the formula $$\sum_{n \leq N} \varphi(n) = \frac{3}{\pi^2} N^2 + o(N^2)$$ and the method of partial summation. One can see Bateman--Diamond \cite{BatemanDiamond} for more details. \end{proof} \subsection{Results on Counting $I$-Graphs}\label{counting_sec} For the proofs of Theorems \ref{countingisomorphisms}, \ref{igraphsextendgpgsisomorphism} and \ref{igraphsconnectedisomorphism}, we require exact formulas for the number of isomorphism classes of $I$-graphs and GPGs of order $2n$. These are provided by Petkov\v{s}ek and Zakraj\v{s}ek \cite{PetkovsekZakrajsek} and we state them below. \begin{lem}\label{I_count} Let $n=p_1^{k_1}p_2^{k_2}\dots p_{\omega(n)}^{k_{\omega(n)}}$ be the prime factorization of $n$. Then the number $I(n)$ of isomorphism classes of $I$-graphs on $2n$ vertices is given by \begin{align} I(n)=\frac1{4}\sum_{i=1}^4\prod_{j=1}^{\omega(n)}g_i\left(p_j^{k_j}\right)-\begin{cases} 2\tau(n)-1,&n\text{ even},\\ \tau(n),&n\text{ odd}, \end{cases} \end{align} where \begin{align} g_1(p^k)&=\frac{(p+1)p^k-2}{p-1},\\ g_2(p^k)&=\begin{cases} 4k,&p=2,\\ 2k+1,&p>2, \end{cases}\\ g_3(p^k)&=\begin{cases} 2,&p=2\text{ and }k=1,\\ 4(k-1),&p=2\text{ and }k\geq2,\\ 2k+1,&p>2, \end{cases}\\ g_4(p^k)&=\begin{cases} 2,&p=2,\\ 2k+1,&p\equiv1\mod4,\\ 1,&p\equiv3\mod4.
\end{cases} \end{align} \end{lem} \begin{lem}\label{connectedI_count} The number $I_c(n)$ of isomorphism classes of connected $I$-graphs on $2n$ vertices is given by \begin{align} I_c(n)=\frac1{4}\left(\frac{J_2(n)}{\varphi(n)}+r(n)+s(n)+t(n)\right)-\begin{cases} 1, & n\text{ odd}\\ 2, & n\equiv0\mod{4}\\ 3, &n\equiv2\mod{4} \end{cases}, \end{align} where \begin{align} t(n)=\begin{cases} 2^{\omega(n)}+2^{\omega(n/2)},&n\text{ even},\\ 2^{\omega(n)},&n\text{ odd}, \end{cases} \end{align} $J_2(n)=n^2\prod_{p\mid n}(1-1/p^2)$ denotes the Jordan totient function, and $r(n)$ and $s(n)$ are as defined in \cite{PetkovsekZakrajsek}. \end{lem} \begin{lem}\label{peterson_count} The number $P(n)$ of isomorphism classes of generalised Petersen graphs on $2n$ vertices is given by \begin{align}P(n)=\frac1{4}(2n-\varphi(n)-2\gcd(n,2)+r(n)+s(n)).\end{align} \end{lem} \section{Proofs of Main Theorems } \subsection{Proof of Theorem \ref{igraphsextendgpgs}} To prove Theorem \ref{igraphsextendgpgs}, we require the following result of Boben, Pisanski and \v{Z}itnik \cite{Boben}. \begin{thm} A graph $I(n, j, k)$ is a GPG if and only if $(n,j)=1$ or $(n,k)=1$. \end{thm} We may now proceed directly. As in the theorem statement, we let $A(N)$ count the number of tuples $(n, j, k)$ with $3 \leq n \leq N$, $k \leq n/2$ and $1 \leq j \leq k$. It follows then that \begin{eqnarray*} A(N) & = & \sum_{3 \leq n \leq N} \sum_{k \leq \lfloor n/2 \rfloor} \sum_{j \leq k} 1 \\ & = & \frac{1}{2} \sum_{3 \leq n \leq N} \lfloor n/2 \rfloor (\lfloor n/2 \rfloor + 1) \\ & = & \frac{N^3}{24} + O(N^2). \end{eqnarray*} Let $B(N)$ count the number of these tuples such that $I(n, j, k)$ is a generalised Petersen graph. Equivalently, $B(N)$ is the count of triples $(n, j, k)$ such that $3 \leq n \leq N$, $1 \leq j \leq k$, $1 \leq k \leq n/2$ and $(n, j) = 1$ or $(n,k)=1$.
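The estimate $A(N)=N^3/24+O(N^2)$ derived above can be sanity-checked against the exact sum; the following throwaway script (the helper name is ours) compares the two.

```python
def A_exact(N):
    # A(N) = sum over 3 <= n <= N of the number of pairs
    # 1 <= j <= k <= floor(n/2), i.e. h(h + 1)/2 with h = floor(n/2)
    return sum((n // 2) * (n // 2 + 1) // 2 for n in range(3, N + 1))

for N in (100, 1000, 10000):
    print(N, A_exact(N) * 24 / N ** 3)  # ratio tends to 1 at rate O(1/N)
```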
Considering separately the cases where $(n,k) = 1$ and where $(n,k) > 1$, we have that $$B(N) = \sum_{3 \leq n \leq N} \sum_{\substack{k \leq n/2 \\ (k, n) = 1}} \sum_{j \leq k} 1 + \sum_{3 \leq n \leq N} \sum_{\substack{k \leq n/2 \\ (k, n) > 1}} \sum_{\substack{j < k \\ (n,j) = 1}} 1.$$ To simplify our working, we write $$B_1(n) = \sum_{\substack{k \leq n/2 \\ (k, n) = 1}} \sum_{j \leq k} 1$$ and $$B_2(n) = \sum_{\substack{k \leq n/2 \\ (k, n) > 1}} \sum_{\substack{j < k \\ (n,j) = 1}} 1,$$ so that $B(N) = \sum_{3 \leq n \leq N} (B_1(n) + B_2(n))$, and estimate each of these in turn using the tools developed in Section \ref{estimates}. From Lemma \ref{summingcoprimenumbers}, it follows directly that $$B_1(n) = \frac{1}{8} n \varphi(n) + O(n 2^{\omega(n)}).$$ We can apply Lemma \ref{countingcoprimenumbers} to the second sum to get \begin{eqnarray*} B_2(n) & = & \frac{\varphi(n)}{n} \sum_{\substack{k \leq n/2 \\ (k,n)>1}} k + O(n 2^{\omega(n)}) \\ & = & \frac{\varphi(n)}{n} \bigg(\sum_{k \leq n/2} k - \sum_{\substack{k \leq n/2 \\ (k,n)=1}} k \bigg) + O(n 2^{\omega(n)}). \end{eqnarray*} An application of Lemma \ref{summingcoprimenumbers} gives us that $$B_2(n) = \frac{1}{8} n \varphi(n) - \frac{1}{8} \varphi^2(n) + O(n 2^{\omega(n)}).$$ We can feed these results back into our expression for $B(N)$ to get \begin{eqnarray*} B(N) & = & \sum_{3 \leq n \leq N} \bigg( \frac{1}{4} n \varphi(n) - \frac{1}{8} \varphi^2(n)\bigg) + O\bigg(\sum_{n\leq N} n 2^{\omega(n)} \bigg) \\ & = & \frac{1}{4} \sum_{n \leq N} n \varphi(n) - \frac{1}{8} \sum_{n \leq N} \varphi^2(n) + O\bigg(\sum_{n\leq N} n 2^{\omega(n)} \bigg). \end{eqnarray*} A direct application of Lemmas \ref{phisquared} and \ref{nphi} gives us that $$B(N) = \bigg(\frac{1}{2 \pi^2} - \frac{C}{24} \bigg) N^3 + O\bigg(\sum_{n\leq N} n 2^{\omega(n)} \bigg)$$ where $C$ is specified in Lemma \ref{phisquared}.
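The totient estimates feeding into this computation are easy to corroborate numerically. The following sketch (our own, using a standard totient sieve; not part of the formal argument) checks the leading constants of $\sum_{n\leq N}\varphi(n)$ and of Lemma \ref{nphi}:

```python
from math import pi

# sieve Euler's totient for all n <= N
N = 5000
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:                  # p is prime: no smaller prime has reduced it
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p

sum_phi = sum(phi[1:])
sum_nphi = sum(n * phi[n] for n in range(1, N + 1))

print(sum_phi / (3 * N**2 / pi**2))    # close to 1, per the classical estimate
print(sum_nphi / (2 * N**3 / pi**2))   # close to 1, per the second lemma
```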
We now note that $2^{\omega(n)} \leq d(n)$, where $d(n)$ denotes the number of divisors of $n$, and use the well-known estimate $$\sum_{n \leq N} d(n) = N \log N + O(N),$$ so that $\sum_{n \leq N} n 2^{\omega(n)} \leq N \sum_{n \leq N} d(n) = O(N^2 \log N)$. This completes the proof of Theorem \ref{igraphsextendgpgs}. \subsection{Proof of Theorem \ref{igraphsconnected}} The proof relies on Theorem 8 of Oliveira--Vinagre \cite{OliveiraVinagre}, which we state as follows. \begin{thm}\label{connected} The graph $I(n,j,k)$ is connected if and only if $(n,j,k)=1$. \end{thm} Let $C(N)$ count the number of tuples $(n, j, k)$ with $3 \leq n \leq N$, $k \leq n/2$ and $1 \leq j \leq k$, as well as the added condition that $(n,j,k)=1$. This is the sum $$C(N) = \sum_{3 \leq n \leq N} \sum_{k \leq n/2} \sum_{\substack{j \leq k \\ (n, j, k) = 1}} 1.$$ It should be noted that we have already counted many of these simply by virtue of counting the tuples such that $(n,j)=1$ or $(n,k) = 1$ in the previous section. Indeed, all that is left is to count the tuples such that $(n,j,k)=1$, $(n,k)>1$ and $(n,j)>1$, and add this to $B(N)$ to get $C(N)$. However, we compute the sum $C(N)$ directly using an idea of T\'oth (see Remark 1 on p.~13 of \cite{toth}) which is altogether not too dissimilar to the ideas used in the proofs of Lemmas \ref{countingcoprimenumbers} and \ref{summingcoprimenumbers}. Proceeding directly, we write $$C(N) = \sum_{3 \leq n \leq N} \sum_{k \leq n/2} \sum_{j \leq k} \sum_{d | (n,j,k)} \mu(d).$$ We switch the order of summation to get \begin{equation} \label{CSummationSwitched} C(N) = \sum_{d=1}^N \mu(d) \sum_{1 \leq n_1\leq N/d} \sum_{k_1 \leq n_1/2} \sum_{j_1 \leq k_1} 1 \end{equation} where we have written $n=dn_1$, $k = dk_1$ and $j = d j_1$.
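Since the Möbius rearrangement is an exact identity at every finite $N$, it can be tested directly; a minimal brute-force comparison (illustrative only, with function names of our own choosing):

```python
from math import gcd

def mobius(n):
    # Möbius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0             # a squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def C_direct(N):
    # count (n, j, k) with 3 <= n <= N, 1 <= j <= k <= n/2, gcd(n, j, k) = 1
    return sum(1 for n in range(3, N + 1)
                 for k in range(1, n // 2 + 1)
                 for j in range(1, k + 1)
                 if gcd(gcd(n, j), k) == 1)

def C_switched(N):
    # the same count after inserting sum_{d | (n,j,k)} mu(d) and switching
    # the order of summation, with n = d*n1, k = d*k1, j = d*j1
    total = 0
    for d in range(1, N + 1):
        mu = mobius(d)
        if mu == 0:
            continue
        inner = 0
        for n1 in range(1, N // d + 1):
            if d * n1 >= 3:          # the original constraint n >= 3
                m = n1 // 2          # admissible values of k1
                inner += m * (m + 1) // 2
        total += mu * inner
    return total

print(C_direct(100), C_switched(100))  # the two counts agree exactly
```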
The evaluation of the inner triple sum is as before; we get that \begin{eqnarray*} \sum_{1 \leq n_1\leq N/d} \sum_{k_1 \leq n_1/2} \sum_{j_1 \leq k_1} 1 & = & \sum_{1 \leq n_1\leq N/d} \sum_{k_1 \leq n_1/2} \big( k_1 + O(1) \big) \\ & = & \sum_{1 \leq n_1 \leq N/d} \bigg(\frac{1}{8} n_1^2 + O (n_1) \bigg) \\ & = & \frac{1}{24} \frac{N^3}{d^3} + O\bigg(\frac{N^2}{d^2}\bigg). \end{eqnarray*} Substituting this directly into Equation \ref{CSummationSwitched} gives us that $$C(N) = \frac{N^3}{24} \sum_{d=1}^N \frac{\mu(d)}{d^3} + O(N^2).$$ We note that the sum may be evaluated as \begin{eqnarray*} \sum_{d=1}^N \frac{\mu(d)}{d^3} & = & \sum_{d=1}^{\infty} \frac{\mu(d)}{d^3} + O \bigg( \sum_{d > N} \frac{1}{d^3} \bigg) \\ & = & \frac{1}{\zeta(3)} + O(N^{-2}), \end{eqnarray*} and this completes the proof. \subsection{Proof of Theorem \ref{countingisomorphisms}} The purpose of this section is to evaluate asymptotically the sum $$CI(N) = \sum_{n \leq N} I(n)$$ where $I(n)$ is as provided in Lemma \ref{I_count}. Working directly, it follows that $$CI(N) = \frac{1}{4} \sum_{n \leq N} \sum_{i=1}^4 \prod_{j=1}^{\omega(n)} g_i(p_j^{k_j}) +O \bigg( \sum_{n \leq N} \tau(n) \bigg).$$ To ease the notation, we write $$g_i(n) = \prod_{j=1}^{\omega(n)} g_i(p_j^{k_j})$$ where $n = p_1^{k_1} \cdots p_{\omega(n)}^{k_{\omega(n)}}$. It then follows that \begin{equation} \label{estimateInPieces} CI(N) = \frac{1}{4} ( \Sigma_1 + \Sigma_2 + \Sigma_3 + \Sigma_4) +O \bigg( \sum_{n \leq N} \tau(n) \bigg) \end{equation} where $$\Sigma_i = \sum_{n \leq N} g_i(n).$$ A glance at the structures of the piecewise functions $g_2$, $g_3$ and $g_4$ suggests that sums of these functions should all be of similar asymptotic order. Indeed, each piece, and therefore each of these three functions, can be bounded above by $g_u(p^k) = (k+1)^2$.
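The claimed bound can be checked mechanically on a range of prime powers; a throwaway sketch (our own function names; note also that $(k+1)^2 = d(p^k)^2$, the square of the divisor function at a prime power):

```python
def g2(p, k):
    return 4 * k if p == 2 else 2 * k + 1

def g3(p, k):
    if p == 2:
        return 2 if k == 1 else 4 * (k - 1)
    return 2 * k + 1

def g4(p, k):
    if p == 2:
        return 2
    return 2 * k + 1 if p % 4 == 1 else 1

def g_u(p, k):
    # the common upper bound (k + 1)^2, i.e. d(p^k)^2
    return (k + 1) ** 2

# verify g_i(p^k) <= (k + 1)^2 on a sample of prime powers
for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
    for k in range(1, 10):
        assert max(g2(p, k), g3(p, k), g4(p, k)) <= g_u(p, k)
print("bound verified on the sampled prime powers")
```

The bound is in fact elementary in general: $4k \leq (k+1)^2$ is equivalent to $0 \leq (k-1)^2$, and each remaining piece is at most $2k+1 \leq (k+1)^2$.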
This may seem, at a glance, wasteful, but as we will see this function lends itself swiftly to estimation using analytic techniques. As such, we have that $$\Sigma_2 + \Sigma_3 + \Sigma_4 = O\bigg( \sum_{n \leq N} g_u(n) \bigg)$$ where $g_u$ is extended to all positive integers multiplicatively via its values on prime powers. We will now proceed to estimate this sum, after which we will estimate what turns out to be the main term, namely $\Sigma_1$. Our method of estimation will be similar for both; namely, we construct a Dirichlet series that embeds the function of interest into its coefficients. Then, we apply the Ikehara--Wiener theorem to extract an asymptotic formula for the partial sums. Consider the Euler product $$F_u(s) = \prod_{p} \bigg(1 + \frac{4}{p^{s}} + \frac{9}{p^{2s}} + \frac{16}{p^{3s}} + \cdots \bigg)$$ which converges for $\text{Re}(s) > 1$ (see \cite{BatemanDiamond} for further theory). In the region of convergence, we have that $$F_u(s) = \sum_{n=1}^{\infty} \frac{g_u(n)}{n^s}.$$ We now wish to appeal to analytic properties of the above Dirichlet series in order to estimate the sum $\sum_{n \leq N} g_u(n)$ via the Ikehara--Wiener theorem. This theorem comes in many forms; a useful one is provided by Theorem 2.4.1 in Cojocaru and Murty \cite{cojocarumurty}. \begin{thm} \label{ikeharawiener1} Let $$F(s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}$$ be a Dirichlet series with non-negative coefficients converging for $\text{Re}(s) >1$. Suppose that $F(s)$ extends analytically to all points on $\text{Re}(s)=1$ apart from $s=1$, and that at $s=1$ we can write $$F(s) = \frac{H(s)}{(s-1)^{1-\alpha}}$$ for some $\alpha \in \mathbb{R}$ and some $H(s)$ holomorphic in the region $\text{Re}(s) \geq 1$ and non-zero there. Then $$\sum_{n \leq x} a_n \sim \frac{c x}{( \log x)^{\alpha}}$$ with $$c:=\frac{H(1)}{\Gamma(1 - \alpha)}$$ where $\Gamma$ is the Gamma function.
\end{thm} It is fairly easy to see that $F_u(s)$ satisfies the conditions of the above theorem. Indeed, it follows from the derivation of (1.2.10) of Titchmarsh \cite{titchmarsh1986theory} that $$F_u(s) = \frac{\zeta^4(s)}{\zeta(2s)}$$ and we can now borrow from the known properties of the Riemann zeta-function. Specifically, we know that at $s=1$ we may write $$F_u(s) = \frac{H(s)}{(s-1)^4}$$ where $H(s)$ is holomorphic and non-zero in the region $\text{Re}(s) \geq 1$. It follows that $$\sum_{n \leq N} g_u(n) = O(N \log^3 N).$$ Substituting this back into Equation \ref{estimateInPieces}, along with the fact that $\sum_{n \leq N} \tau(n) = O(N \log N)$ (see \cite{BatemanDiamond}), we have that \begin{equation} CI(N) = \frac{1}{4} \Sigma_1 + O(N \log^3 N). \end{equation} Finally, we seek to estimate $\Sigma_1$. An upper bound will not do this time. It is both fortunate and intriguing that the function $g_1$ naturally resolves via the Riemann zeta-function. \begin{lem} We have for $\text{Re}(s) > 2$ that $$\sum_{n = 1}^{\infty} \frac{g_1(n)}{n^s} = \frac{\zeta(s)^2 \zeta(s-1)}{\zeta(2s)}.$$ \end{lem} \begin{proof} We start by expressing the Dirichlet series for the multiplicative function $g_1$ as an Euler product: \begin{align*} \sum_{n=1}^\infty \frac{g_1(n)}{n^s} &=\prod_p\left(1+\frac{g_1(p)}{p^s}+\frac{g_1(p^2)}{p^{2s}}+\frac{g_1(p^3)}{p^{3s}}+\dots\right) \\ &=\prod_p\left(1+\frac{p+1}{p-1}\left(\frac1{p^{s-1}}+\frac1{p^{2(s-1)}}+\dots\right)-\frac{2}{p-1}\left(\frac1{p^s}+\frac1{p^{2s}}+\dots\right)\right). \end{align*} Now, $\sum_{n=1}^\infty 1/p^{ns}=1/(p^s-1)$ and similarly $\sum_{n=1}^\infty 1/p^{n(s-1)}=1/(p^{s-1}-1)$, so \begin{align*} \sum_{n=1}^\infty\frac{g_1(n)}{n^s}&=\prod_p\left(1+\frac{p+1}{p-1}\cdot\frac1{p^{s-1}-1}-\frac2{p-1}\cdot\frac1{p^s-1}\right)\\ &=\prod_p\left(\frac{p^s+1}{p^s-1}\cdot\frac1{1-p^{1-s}}\right) \\ &=\prod_p\left(\frac{p^s+1}{p^s-1}\right)\cdot\prod_p\left(\frac1{1-p^{1-s}}\right)=\frac{\zeta(s)^2\zeta(s-1)}{\zeta(2s)}.
\end{align*} The final equality follows directly from the proof of (1.2.8) of Titchmarsh \cite{titchmarsh1986theory}. \end{proof} It is interesting to note, from Lemma 3.13 of \cite{PetkovsekZakrajsek}, that before the direct evaluation one has $$g_1(n) = \frac{1}{\varphi(n)} \sum_{a \in \mathbb{Z}_n^*} \gcd(n, a-1)^2.$$ It is striking that the above sum should find itself arising quite naturally in terms of the Riemann zeta-function. As before, we will use the analytic properties of this Dirichlet series to estimate $\sum_{n\leq N}g_1(n)$ via the Ikehara--Wiener theorem. The following useful form of this theorem can be found in Murty \cite{Murty}. \begin{thm}\label{ikeharawiener2} Suppose $F(s)=\sum_{n=1}^\infty a_n/n^s$ is a Dirichlet series with non-negative coefficients that is convergent for $\text{Re}(s)>c>0$. If $F(s)$ extends to a meromorphic function in the region $\text{Re}(s)\geq c$ with only a simple pole at $s=c$ and residue $R$, then $$\sum_{n\leq x}a_n\sim\frac{Rx^c}{c}.$$ \end{thm} Here we have $F(s)=\sum_{n=1}^\infty g_1(n)/n^s$. Since we have just evaluated this to be $\zeta(s)^2\zeta(s-1)/\zeta(2s)$, it is clear that $F(s)$ extends to a meromorphic function in the region $\text{Re}(s)\geq 2$ with a simple pole at $s=2$. Since $$\mathrm{res}_{s=2}\,\zeta(s-1)=1,$$ the residue of $F(s)$ at $s=2$ is $\zeta^2(2)/\zeta(4)$, and so it follows that $$\sum_{n\leq N}g_1(n)\sim\frac{\zeta^2(2)}{2 \zeta(4)}N^2 = \frac{5}{4} N^2.$$ Substituting this back into the expression for $CI(N)$ gives $$CI(N)=\frac{5}{16}N^2+O(N\log^3N)\sim\frac{5}{16}N^2$$ and this completes the proof.\hfill$\Box$ \subsection{Proof of Theorem \ref{igraphsextendgpgsisomorphism}} Let $CP(N)$ denote the count of isomorphism classes of generalised Petersen graphs $P(n,k)$. Using Lemma \ref{peterson_count} we have that \begin{align*} CP(N)&=\sum_{n\leq N}P(n)\\ &=\sum_{n\leq N}\frac1{4}(2n-\varphi(n)-2\gcd(n,2)+r(n)+s(n)) \\ &=\frac1{4}\left(\sum_{n\leq N}2n-\sum_{n\leq N}\varphi(n)-\sum_{n\leq N}2\gcd(n,2)+\sum_{n\leq N}(r(n)+s(n))\right).
\end{align*} It can now be seen that each of these summations can be evaluated using classical methods. First, as seen in \cite{apostol}, we have that $$\sum_{n \leq N} \varphi(n) = \frac{3}{\pi^2} N^2 + O(N\log N),$$ and $$\sum_{n\leq N}2n=N^2+O(N).$$ Finally, it is clear that $\gcd(n,2)=O(1)$ and both $$s(n)=\begin{cases} 0,&4|n\mbox{ and }\exists p : (p|n\mbox{ and }p\equiv3\mod 4), \\ 2^{\psi(n)},&\mbox{otherwise}, \end{cases}$$ and $$r(n)=\begin{cases} 2^{\omega(n)},&n\equiv1\mod2\mbox{ or }n\equiv4\mod8,\\ 2^{\omega(n)-1},&n\equiv2\mod4,\\ 2^{\omega(n)+1},&n\equiv0\mod8, \end{cases}$$ are $O(2^{\omega(n)})$. Since it is known that $$2^{\omega(n)}=\sum_{d|n}|\mu(d)|,$$ switching the order of summation gives $$\sum_{n\leq N}2^{\omega(n)}=\sum_{d\leq N}|\mu(d)|\left\lfloor\frac{N}{d}\right\rfloor=O(N\log N).$$ Combining these results we obtain \begin{align*} CP(N)&=\frac1{4}\left(N^2-\frac3{\pi^2}N^2\right)+O(N\log N)\\ &=\frac{\pi^2-3}{4\pi^2}N^2+O(N\log N). \end{align*} Now, as proven in Theorem \ref{countingisomorphisms}, we have that $$CI(N)=\frac{5}{16} N^2+O(N\log^3N)$$ and so we are now able to compute the final result: \begin{align*} \frac{CP(N)}{CI(N)}&=\frac{\frac{\pi^2-3}{4\pi^2}N^2+O(N\log N)}{\frac5{16}N^2+O(N\log^3N)} \end{align*} and so \begin{align*} \lim_{N\to\infty}\frac{CP(N)}{CI(N)}&=\frac{4(\pi^2-3)}{5\pi^2}. \end{align*} This completes the proof of Theorem \ref{igraphsextendgpgsisomorphism}.\hfill$\Box$ \subsection{Proof of Theorem \ref{igraphsconnectedisomorphism}} Let $CI_c(N)$ denote the count of isomorphism classes of connected $I$-graphs $I(n,j,k)$. Using Lemma \ref{connectedI_count} we have \begin{align*} CI_c(N)&=\sum_{n\leq N}I_c(n)\\ &=\sum_{n\leq N}\frac1{4}\left(\frac{J_2(n)}{\varphi(n)}+r(n)+s(n)+t(n)\right)-\begin{cases} 1, & n\text{ odd}\\ 2, & n\equiv0\mod{4}\\ 3, &n\equiv2\mod{4}.
\end{cases} \end{align*} We first note that the piecewise term is $O(1)$ for each $n$, and so contributes only $O(N)$ in total. Since \begin{align} t(n)=\begin{cases} 2^{\omega(n)}+2^{\omega(n/2)},&n\text{ even},\\ 2^{\omega(n)},&n\text{ odd}, \end{cases} \end{align} we have that the sums $\sum_{n\leq N}r(n)$, $\sum_{n\leq N}s(n)$ and $\sum_{n\leq N}t(n)$ are all $O(N\log N)$. The main focus of this sum will therefore be $J_2(n)/\varphi(n)$. This is simply the Dedekind psi-function and it is known that $$\sum_{n\leq N}\frac{J_2(n)}{\varphi(n)}=\frac{15}{2\pi^2}N^2+O(N\log N).$$ A sketch of this proof can be found in Chapter 3 of \cite{apostol}. Combining these results we obtain $$CI_c(N)=\frac{15}{8\pi^2}N^2+O(N\log N).$$ Now, as proven in Theorem \ref{countingisomorphisms}, $CI(N)=\frac5{16}N^2+O(N\log^3N)$. We are now able to compute the final result. \begin{align*} \frac{CI_c(N)}{CI(N)}&=\frac{\frac{15}{8\pi^2}N^2+O(N\log N)}{\frac5{16}N^2+O(N\log^3N)}\\ \lim_{N\to\infty}\frac{CI_c(N)}{CI(N)}&=\frac6{\pi^2}=\frac1{\zeta(2)}. \end{align*} This completes the proof of Theorem \ref{igraphsconnectedisomorphism}.\hfill$\Box$ \section*{Acknowledgements} The authors would like to thank Randell Heyman for continuously pointing them in the right direction on the evaluation of many of the sums involved in Theorems \ref{igraphsextendgpgs} and \ref{igraphsconnected}. \clearpage \bibliographystyle{plain} \bibliography{biblio} \end{document}
2412.19605v1
http://arxiv.org/abs/2412.19605v1
Infinitary combinatorics in condensed math and strong homology
\documentclass{amsart} \usepackage{amsthm,amsmath,amssymb, color, graphics,enumerate,enumitem} \usepackage[dvipsnames]{xcolor} \usepackage{tikz-cd} \usepackage{mathrsfs,mathtools} \usepackage{adjustbox} \usepackage[pdftex]{hyperref} \hypersetup{ colorlinks=true, linktoc=all, linkcolor=blue, allcolors=blue, } \def\colim{\qopname\relax m{colim}} \let\oldqedbox\qedsymbol \newcommand{\twoqedbox}{\oldqedbox\oldqedbox} \newtheorem*{thma}{Theorem~A} \newtheorem*{thmb}{Theorem~B} \newtheorem*{thmc}{Theorem~C} \newtheorem*{thmd}{Theorem~D} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem{subclaim}[theorem]{Subclaim} \newtheorem{fact}[theorem]{Fact} \newtheorem{question}[theorem]{Question} \newtheorem{example}[theorem]{Example} \newtheorem*{questionA1}{Question A1} \newtheorem*{questionA2}{Question A2} \newtheorem*{questionB1}{Question B1} \newtheorem*{questionB2}{Question B2} \newtheorem*{questionC1}{Question C1} \newtheorem*{questionC2}{Question C2} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newcommand{\bb}{\mathbb} \newcommand{\mb}{\mathbf} \newcommand{\ra}{\rightarrow} \newcommand{\otp}{\mathrm{otp}} \newcommand{\sh}{\mathrm{sh}} \newcommand{\supp}{\mathrm{supp}} \newcommand{\cf}{\mathrm{cf}} \newcommand{\ssup}{\mathrm{ssup}} \newcommand{\CondAb}{\mathsf{Cond(Ab)}} \author[Bergfalk]{Jeffrey Bergfalk} \address{Departament de Matem\`{a}tiques i Inform\`{a}tica \\ Universitat de Barcelona \\ Gran Via de les Corts Catalanes 585 \\ 08007 Barcelona, Catalonia} \email{[email protected]} \urladdr{https://www.jeffreybergfalk.com/} \author[Lambie-Hanson]{Chris Lambie-Hanson} \address{Institute of Mathematics of the Czech Academy of Sciences \\ \v{Z}itn\'{a} 25, 110 00 Praha 1, Czechia} \email{[email protected]} 
\urladdr{https://users.math.cas.cz/~lambiehanson/} \title[Infinitary combinatorics in condensed math and strong homology]{Infinitary combinatorics \\ in condensed mathematics and strong homology} \subjclass[2020]{18F10, 18G80, 03E05, 03E35, 03E75, 13D05} \keywords{condensed mathematics, anima, compact projectives, derived limits, $n$-coherence, strong homology, Banach-Smith duality} \thanks{The first author was supported by Marie Sk\l odowska Curie (project 101110452) and Ram\'{o}n y Cajal fellowships; the second author was supported by GA\v{C}R project 23-04683S and the Academy of Sciences of the Czech Republic (RVO 67985840).} \begin{document} \begin{abstract} Recent advances in our understanding of higher derived limits carry multiple implications in the fields of condensed and pyknotic mathematics, as well as for the study of strong homology. These implications are thematically diverse, pertaining, for example, to the sheaf theory of extremally disconnected spaces, to Banach--Smith duality, to the productivity of compact projective condensed anima, and to the structure of the derived category of condensed abelian groups. Underlying each of these implications are the combinatorics of multidimensionally coherent families of functions of small infinite cardinal height, and it is for this reason that we convene accounts of them together herein. \end{abstract} \maketitle \section{Introduction} The aim of this article is to describe the role of a rich but rather specific area of infinitary combinatorics, namely that of higher-dimensional coherence phenomena, in the answers to an array of superficially unrelated questions. A main background to this work was Dustin Clausen and Peter Scholze's 2019 recognition that generalizations of the derived limit computations of \cite{Bergfalk_simultaneously_21} would carry structural implications within their emergent framework of \emph{condensed mathematics}. 
Not long thereafter, the present paper's authors realized that these generalizations afforded a more cohesive approach than hitherto to the well-studied questions of the additivity both of strong homology and of the derived limit functors themselves. These generalizations center on a family of groups $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$ parametrizing nontrivial $n$-dimensional coherence phenomena on index systems of cardinal width $\kappa$ and height $\lambda$, and the interest of these phenomena derives in no small part from their relations to the cardinals $\aleph_n$. Scholze in 2021 converted a further question in condensed mathematics to one, in essence, about such phenomena, and the answer to this question, as we will see, leverages precisely this relation. More recently, Clausen and Scholze observed that a strong variant of the main result of \cite{Bergfalk_simultaneously_21} would carry pleasing implications for the Banach--Smith duality so prominent within condensed functional analysis, implications we describe herein as well. More precisely, this paper records proofs of the theorems A through D listed below, showing in the process that they all, combinatorially speaking, drink from the same well. We should underscore without delay that we neither presume nor require expertise in any of the several mathematical subfields which these theorems involve; that these results amount ultimately to little more than ordinal combinatorics is, after all, much of our point. How these combinatorics arise is, on the other hand, much of the story as well, and we have accordingly included such basic accounts and definitions as should render these conversions legible. 
That said, fuller comprehension of the often inescapably technical machinery of these conversions \emph{will} depend to some degree on readers' backgrounds; to better accommodate their range, we have shaped our text to permit multiple reading paths through its material, as we describe in greater detail below. Our first theorem addresses the question, communicated by Clausen and Scholze, of whether the natural functor from the pro-category of derived abelian groups to the derived category of condensed abelian groups is consistently fully faithful. For a fuller discussion and motivation of this question, see Section \ref{subsect:pro-to-D}; in Section \ref{subsect:systemsAkl} we show the following. \begin{thma} The natural embedding of $\mathsf{Pro}(\mathsf{D}\mathsf{(Ab)})$ into $\mathsf{D}\mathsf{(Cond(Ab))}$ is not full. In particular, the left hand side of the equation \[ \mathrm{RHom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}\left(\text{``}\prod_{\omega}\text{''}\,\bigoplus_{\omega_1} \bb{Z},\bb{Z}\right)\not\cong\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{\omega} \bigoplus_{\omega_1} \bb{Z}, \bb{Z}\right), \] is concentrated in degree zero, but the right hand side is not. \end{thma} If, however, one restricts attention to \emph{countable} abelian groups and operations upon them, then the main pathologies (so to speak) of Theorem A are avoidable. We show the following in Section \ref{subsect:alternative}; the $\mb{A}[H]$ appearing in its premise is the natural generalization of the inverse system $\mb{A}=\mb{A}[\bb{Z}]$ tracing back, via a substantial set theoretic literature, to \cite{Mardesic_additive_88}. \begin{thmb} The following statements are all consistent; in fact each holds under the assumption that $\lim^n \mb{A}[H] = 0$ for all $n > 0$ and all abelian groups $H$. 
\begin{enumerate} \item $\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{\omega} \bigoplus_{\omega} \bb{Z}, \bigoplus_\mu \bb{Z}\right)$ is concentrated in degree zero for all cardinals $\mu$. \item Whenever $H$ is an abelian group and $\mb{M} = (M_i, \pi_{i,j}, \omega)$ is an inverse sequence of countable abelian groups whose transition maps $\pi_{i,j} : M_{j} \to M_i$ are all surjective, then $$\mathrm{Ext}_{\mathsf{Cond(Ab)}}^n\left(\lim_{i < \omega}\,M_i,H\right)=\colim_{i < \omega}\,\mathrm{Ext}_{\mathsf{Cond(Ab)}}^n\left(M_i,H\right)$$ for all $n \geq 0$. \item Whenever $p$ is a prime and $X$ is a separable solid $\bb{Q}_p$-Banach space, \[ \underline{\mathrm{Ext}}^i_{\mathsf{Solid}_{\bb{Q}_p}}(X, \bb{Q}_p) = 0 \] for all $i > 0$. \end{enumerate} \end{thmb} The first and the second of the theorem's conclusions are of additional interest (as we explain just after its proof) for each implying that the continuum is of cardinality at least $\aleph_{\omega+1}$. Its clause (3), on the other hand, implies that the classical stereotype duality between Banach spaces and Smith spaces consistently extends to the derived category of solid $\bb{Q}_p$-vector spaces when restricted to separable solid $\bb{Q}_p$-Banach spaces. Let us underscore before proceeding any further that the forms of Theorems A, B, and D, as well as their conversions to questions about derived limits, are all essentially due to Clausen and Scholze; in each case, our own contributions to the theorems consist principally in the derived limit analyses (exemplified by our Theorems \ref{thm:limsofAkappalambda} and \ref{thm:nonzero_cohomology_on_opens}) which complete their proofs. The background to Theorem C, in contrast, is our own joint work on the additivity problem for the strong homology functor $\overline{\mathrm{H}}_\bullet$; see Section \ref{sec:additivity} for a brief discussion of this problem's history. 
The main immediate point is that the computations underlying Theorem A readily furnish the $\mathsf{ZFC}$ counterexample to the additivity of $\overline{\mathrm{H}}_\bullet$ cited below, along with closely related $\mathsf{ZFC}$ counterexamples to the additivity of the functors $\lim^n : \mathsf{Pro(Ab)}\to\mathsf{Ab}$ for $n = 1$ and $n=2$. None of these is the first such counterexample to have been recorded; that distinction belongs to those appearing in \cite{Prasolov_non_05}. In both their unity and comparative simplicity, however, those recorded herein carry substantial advantages over those of \cite{Prasolov_non_05} (the generalized Hawaiian earring $Y^n$ below, for example, is compact, in contrast to its counterpart in \cite{Prasolov_non_05}), and together render the additivity problem's combinatorial content significantly plainer. \begin{thmc} Fix a natural number $n > 0$, let $B^n$ denote the $n$-dimensional open ball, and let $Y^n$ denote the one-point compactification of $\coprod_{\omega_1} B^n$. Then \[ \overline{\mathrm{H}}_{n-1}\left(\coprod_{\omega} Y^n \right) \not\cong \bigoplus_{\omega} \overline{\mathrm{H}}_{n-1}\left(Y^n\right). \] \end{thmc} For this paper's final results, we return to the setting of condensed mathematics. From both conceptual and computational points of view, one of condensed categories' most fundamental virtues is their possession of generating classes of compact projective objects. Both annoying, accordingly, and insufficiently well understood are the failures of these classes to be closed under binary products. For example, the tensor product $X \otimes Y$ of two compact projective objects in the category $\mathsf{Cond}(\mathsf{Ab})$ of condensed abelian groups need not be projective; see the introduction to Section \ref{sect:products} for further discussion. More subtle is the still-open question of whether or when such $X \otimes Y$ have finite projective dimension in $\mathsf{Cond}(\mathsf{Ab})$. 
Here we provide a negative answer to a stronger conjecture by proving that products of compact projective condensed anima are not, in general, compact, and we do so via an auxiliary result about classical sheaf categories that we feel is of interest in its own right. In what follows, $\mathsf{ED}$ denotes the category of extremally disconnected compact Hausdorff spaces. \begin{thmd} \begin{enumerate} \item For every field $K$ and $S \in \mathsf{ED}$, the constant sheaf $\mathcal{K}$ on $S$ is of injective dimension at most one. \item If $S,T \in \mathsf{ED}$ are each \v{C}ech-Stone compactifications of discrete sets of cardinality at least $\aleph_\omega$, then the injective dimension of the constant sheaf $\mathcal{K}$ on $S \times T$ is infinite. \item There exist compact projective condensed anima $S$ and $T$ whose product is not compact. \end{enumerate} \end{thmd} Let us turn now in greater detail to the paper's organization and the varying demands that it places upon its readers. Section \ref{sec:pro} divides into five subsections; of these, only \ref{subsect:condensedbackground}, which provides a brief review of derived and condensed settings, and \ref{subsect:systemsAkl}, recording the fundamentals of the groups $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$, play a significant role in subsequent sections. Subsections \ref{subsect:pro-to-D} and the more down-to-earth \ref{subsect:alternative}, containing main portions of the arguments of Theorems A and B, respectively, provide motivation and context for the computations of \ref{subsect:systemsAkl}, while \ref{subsect:morenonvanishing} records a more set theoretically involved refinement of the latter. 
Put differently, in Subsections \ref{subsect:condensedbackground}, \ref{subsect:systemsAkl}, and \ref{subsect:alternative}, we presume only a basic familiarity with homological algebra (\cite[Ch.\ 1--3]{Weibel}, say) and set theory, while in Subsections \ref{subsect:pro-to-D} and \ref{subsect:morenonvanishing} (both of which may be skimmed in a first reading), we assume a little more familiarity, respectively, with each. Sections \ref{sec:additivity} and \ref{sect:products} may be read independently of one another; the first of these divides into two subsections, neither of which presumes any further background of the reader. Its first subsection explores the implications of the results of Subsection \ref{subsect:systemsAkl} along the lines discussed before Theorem C above, which it also proves. A more optional second subsection contextualizes these implications within a broader discussion of strong homology. Next, after a brief introduction, the two subsections of Section \ref{sect:products} contain our proof of Theorem D, with the first containing the proofs of clauses (1) and (2) and the second containing that of clause (3). These portions of the paper require a bit more of the reader. Subsection \ref{subsect:sheaves} involves sheaf theory at roughly the level of \cite[Ch.\ 2]{Iversen}. Subsection \ref{subsect:products}, like any discussion of condensed anima, then necessitates an at least occasionally $\infty$-category theoretic vocabulary; relevant references are provided along the way. The paper then concludes with a list of open questions; among these, those amounting essentially to problems in infinitary combinatorics predominate. We hope by this outline to have suggested the kinds of math which our text engages; its more fundamental roots in the more contemporary field of condensed mathematics, though, merit a few further words of comment.
Core references for condensed mathematics are the Scholze and Clausen--Scholze lecture notes \cite{CS1, CS2, CS3} and the online Clausen--Scholze lecture series \cite{Masterclass} and \cite{analytic_stacks}; multiple theses \cite{Aparicio_condensed_21, Asgeirsson, Mair} and course notes \cite{UChic,Columb,JHop} usefully supplement this material. In so rapidly expanding a field, however, reference lists take on a markedly provisional character. Contemporaneous with the development of condensed mathematics has been that of Barwick and Haine's closely related \emph{pyknotic mathematics}; among the fundamental references here are \cite{pyknotic} and the MSRI lecture series \cite{MSRI}, although our comments above on references proliferating faster than we can confidently track continue to apply. Below, we have uniformly framed our results in condensed terms, both for simplicity and for the essentially contingent reason that our first encounter with this world of ideas was by way of an email from Dustin Clausen and Peter Scholze; these results' translations to the pyknotic framework are, in general, straightforward. For that initial contact and for much rich and generous conversation thereafter, we wish to underscore our thanks to Clausen and Scholze. For further discussion and clarifications of the material of Sections \ref{subsect:pro-to-D} and \ref{subsect:products}, we are grateful to Lucas Mann as well. Lastly, a word on some of our most fundamental notations. We write $P(X)$ for the power set, and $|X|$ for the cardinality, of a set $X$. If $X$ is partially ordered then we write $\mathrm{cf}(X)$ for the least cardinality of a cofinal subset of $X$, while if $\kappa$ is a cardinal then $\mathrm{Cof}(\kappa)$ denotes the class of ordinals $\xi$ for which $\mathrm{cf}(\xi)=\kappa$. If $X$ is well-ordered then $\mathrm{otp}(X)$ denotes the unique ordinal which is order-isomorphic to $X$. 
We write ${^\kappa}X$ for the collection of functions from $\kappa$ to $X$, although we will at times also exponentiate on the right. Relatedly, $[X]^n$ and $[X]^{<\omega}$ denote, respectively, the collections of size-$n$ subsets, and of finite subsets, of $X$; when $X$ is linearly ordered, we will identify these with ordered tuples in the natural way. If $\mathcal{C}$ is a category, then we write $X\in\mathcal{C}$ to indicate that $X$ is an object of $\mathcal{C}$. \section{The category $\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}$ and the systems $\mathbf{A}_{\kappa,\lambda}$} \label{sec:pro} \subsection{A brief survey of the relevant categories} \label{subsect:condensedbackground} Let us begin by recalling the notion of an \emph{abelian category}: a category $\mathcal{A}$ is abelian if it possesses \begin{enumerate} \item a zero object (i.e., an object which is both initial and terminal), \item binary biproducts (i.e., coinciding binary products and sums), and \item kernels and cokernels, such that \item all of $\mathcal{A}$'s monomorphisms are kernels, and all of its epimorphisms are cokernels. \end{enumerate} The paradigmatic example, of course, is the category $\mathsf{Ab}$ of abelian groups, and we may view conditions (1) -- (4) as abstracting from $\mathsf{Ab}$ the categorical equipment for basic homological algebra. More general examples include the category $\mathsf{Mod}_R$ of right $R$-modules for a fixed ring $R$, or the category of presheaves of abelian groups over a fixed space $X$ or category $\mathcal{D}$, with inverse systems of abelian groups over a fixed partial order $I$ a special case of the latter. 
Letting $I$ range over the class of directed sets determines the objects --- frequently denoted \[\textnormal{``}\lim_{i\in I}\!\textnormal{''}\,A_i\] --- of the \emph{pro-category} $\mathsf{Pro}(\mathsf{Ab})$ of abelian groups, whose morphisms are formally described by the formula \begin{align} \label{eq:limcolim} \mathrm{Hom}_{\mathsf{Pro}(\mathsf{Ab})}\left(\textnormal{``}\lim_{i\in I}\!\textnormal{''}\,A_i,\textnormal{``}\lim_{j\in J}\!\textnormal{''}\,B_j\right)=\lim_{j\in J}\colim_{i\in I}\mathrm{Hom}_{\mathsf{Ab}}\left(A_i,B_j\right) \end{align} \noindent More generally and intuitively, $\mathsf{Pro}(\mathcal{C})$ freely adjoins cofiltered limits to its base category $\mathcal{C}$, as its notation suggests, and it, too, is abelian, if $\mathcal{C}$ itself is. Moreover, if $\mathcal{C}$ possesses cofiltered limits then the inverse limit functor \[\textnormal{``}\lim_{i\in I}\!\textnormal{''}\,C_i\,\mapsto\,\lim_{i\in I} C_i\] is well-defined on $\mathsf{Pro}(\mathcal{C})$. In particular (as a closer inspection of the morphisms of $\mathsf{Pro}(\mathcal{C})$ then implies), $\lim_{i\in J} C_i\cong\lim_{i\in I} C_i$ whenever $J$ is a cofinal subset of $I$. For more on the subject of pro-categories (often via their dual notion $\mathsf{Ind}(\mathcal{C}^{\mathrm{op}})=(\mathsf{Pro}(\mathcal{C}))^{\mathrm{op}}$, which plays, in turn, a significant conceptual role in Section \ref{sect:products} below), see \cite[I.8]{SGA4}, \cite[Appendix]{ArtinMazur}, \cite[VI]{Johnstone}, \cite[I.1]{Mardesic_strong_00}, or \cite[\S 6]{Kashiwara_Categories_06}. 
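As a toy illustration of this notation and of the cofinality remark just made (a standard example, not drawn from the present paper), consider the profinite completion of $\mathbb{Z}$:

```latex
% The pro-abelian group "lim"_{n} Z/nZ, indexed by the positive
% integers under the divisibility order, has as its inverse limit the
% profinite completion of Z:
\[
\lim_{n\geq 1}\,\mathbb{Z}/n\mathbb{Z}
\;\cong\;\widehat{\mathbb{Z}}
\;\cong\;\lim_{k\geq 1}\,\mathbb{Z}/k!\,\mathbb{Z},
\]
% where the second isomorphism instantiates the cofinality principle:
% the factorials form a cofinal subset of the divisibility order,
% since every n divides some k!.
```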
By the phrase \emph{basic homological algebra} above, we largely mean the following sequence: any abelian category $\mathcal{A}$ gives rise to the abelian category $\mathsf{Ch}^{\star}(\mathcal{A})$ of cochain complexes in $\mathcal{A}$ (here the ${^\star}$ may be read to denote any of the standard boundedness conditions, or none at all); identifying chain homotopic chain maps $f^\bullet,g^\bullet:A^\bullet\to B^\bullet$ determines in turn the \emph{homotopy category} $\mathsf{K}^{\star}(\mathcal{A})$ of $\mathcal{A}$, and localizing $\mathsf{K}^{\star}(\mathcal{A})$ (or, equivalently, $\mathsf{Ch}^{\star}(\mathcal{A})$) with respect to the class of \emph{quasi-isomorphisms} $f^\bullet:A^\bullet\to B^\bullet$ --- i.e., of maps $f^\bullet$ inducing isomorphisms $f^n_*:\mathrm{H}^n(A^\bullet)\to\mathrm{H}^n(B^\bullet)$ of all the cohomology groups of $A^\bullet$, $B^\bullet$ --- determines the \emph{derived category} $\mathsf{D}^{\star}(\mathcal{A})$ of $\mathcal{A}$. The latter two operations exhibit strong analogies with identifications of topological spaces up to \emph{homotopy equivalence} and \emph{weak homotopy equivalence}, respectively; as in both of those cases, they render legible relations otherwise invisible among the objects of the source category. 
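A standard example (again not drawn from the present paper) may help to fix the difference between $\mathsf{K}^{\star}(\mathcal{A})$ and $\mathsf{D}^{\star}(\mathcal{A})$: quasi-isomorphisms need not be homotopy equivalences.

```latex
% Consider the chain map of complexes of abelian groups
\[
\begin{array}{ccc}
(\cdots\to 0\to\mathbb{Z}&\xrightarrow{\;\times 2\;}&\mathbb{Z}\to 0\to\cdots)\\
\big\downarrow& &\big\downarrow\\
(\cdots\to 0\to 0&\longrightarrow&\mathbb{Z}/2\to 0\to\cdots)
\end{array}
\]
% whose nonzero component is the quotient map Z -> Z/2. It induces
% isomorphisms on all cohomology groups (each complex has a single
% nonvanishing cohomology group, namely Z/2), hence becomes invertible
% in D(Ab); it is not a homotopy equivalence, however, since every
% homomorphism Z/2 -> Z is zero.
```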
Observe next that any additive functor $F:\mathcal{A}\to\mathcal{B}$ between abelian categories naturally induces functors $\mathsf{Ch}(F):\mathsf{Ch}^{\star}(\mathcal{A})\to\mathsf{Ch}^{\star}(\mathcal{B})$ and $\mathsf{K}(F):\mathsf{K}^{\star}(\mathcal{A})\to\mathsf{K}^{\star}(\mathcal{B})$, but that there is, in general, no $G:\mathsf{D}^{\star}(\mathcal{A})\to\mathsf{D}^{\star}(\mathcal{B})$ making the following square commute: \begin{center} \begin{tikzcd} \mathsf{K}^{\star}(\mathcal{A}) \arrow[d, "\mathsf{K}(F)"'] \arrow[r, "\ell_{\mathcal{A}}"] & \mathsf{D}^{\star}(\mathcal{A}) \arrow[d, "G", dashed] \\ \mathsf{K}^{\star}(\mathcal{B}) \arrow[r, "\ell_{\mathcal{B}}"'] & \mathsf{D}^{\star}(\mathcal{B}) \end{tikzcd} \end{center} \noindent What may exist in its stead is a $G$ such that $G\circ\ell_{\mathcal{A}}$ optimally approximates $\ell_{\mathcal{B}}\circ \mathsf{K}(F)$ (in the sense of forming a terminal or initial object in the appropriate category of functors $\mathsf{K}^{\star}(\mathcal{A})\to\mathsf{D}^{\star}(\mathcal{B})$ over or under $\ell_{\mathcal{B}}\circ \mathsf{K}(F)$, respectively); such a $G$ is then the (left or right) \emph{derived functor} ($\mathrm{L}F$ or $\mathrm{R}F$, respectively) of $F$. Functors of particular interest in what follows will be $\mathrm{Hom}:\mathcal{A}^{\mathrm{op}}\times\mathcal{A}\to\mathsf{Ab}$ and $\mathrm{lim}:\mathsf{Pro}(\mathcal{A})\to\mathcal{A}$ for abelian categories $\mathcal{A}$; the cohomology groups of their derived functors $\mathrm{RHom}(A,B)$ and $\mathrm{Rlim}\,\mathbf{X}$ define the classical expressions $\mathrm{Ext}^n(A,B)$ and $\mathrm{lim}^n\,\mathbf{X}$, respectively, for $n>0$.
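For a concrete instance of the latter derived functor (a classical computation, included here only for orientation), one may evaluate $\mathrm{lim}^1$ on the tower of multiplication-by-$p$ maps on $\mathbb{Z}$:

```latex
% The short exact sequence of towers
%   0 -> (Z, x p) -> (Z, id) -> (Z/p^n, proj) -> 0,
% in which the n-th inclusion Z -> Z is multiplication by p^n, induces
% upon application of lim the exact sequence
\[
0\to\lim_n\,(\mathbb{Z},\times p)\to\mathbb{Z}\to\mathbb{Z}_p
\to{\lim_n}^{1}\,(\mathbb{Z},\times p)\to 0,
\]
% since lim^1 vanishes on the constant tower (Z, id). As no nonzero
% integer is divisible by every power of p, the first limit is 0, whence
\[
{\lim_n}^{1}\,\bigl(\mathbb{Z}\xleftarrow{\times p}\mathbb{Z}
\xleftarrow{\times p}\cdots\bigr)\;\cong\;\mathbb{Z}_p/\mathbb{Z}\neq 0.
\]
```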
A \emph{non-example} of an abelian category is the category $\mathsf{TopAb}$ of topological abelian groups: writing $\mathbb{R}^{\mathrm{disc}}$ and $\mathbb{R}$ for the group $\mathbb{R}$ endowed with the discrete and standard topologies, respectively, the map \begin{align} \label{eq:toyexample}\mathrm{id}:\mathbb{R}^{\mathrm{disc}}\to\mathbb{R} \end{align} witnesses the failure of condition (4) above: it is both monic and epic in $\mathsf{TopAb}$, yet it is not an isomorphism, while in an abelian category any morphism which is at once a monomorphism and an epimorphism is invertible. The choice of $\mathbb{R}$ here is, of course, essentially arbitrary; the same issue arises for any fixed algebraic object admitting multiple topologies, some finer than others. These examples epitomize a tension between topological data and any practice of algebra partaking of the constructions of the previous paragraph; they are standard within the condensed literature and cue one of its main motivating questions: \emph{How to do algebra when rings/modules/groups carry a topology?} \cite[p.\ 6]{CS1}. We turn now, with this as background, to a brief introductory overview of the condensed framework. This subsection's remainder is entirely drawn from \cite[I--II]{CS1}. Write $\mathsf{CHaus}$ to denote the category of compact Hausdorff spaces and $\mathsf{ProFin}$ to denote its full subcategory of Stone spaces, i.e., of totally disconnected compact Hausdorff spaces; the notation derives from this subcategory's equivalence with the pro-category of finite sets (to see this, view the objects of the latter as inverse systems of discrete topological spaces and take their limits). Write $\mathsf{ED}\subset\mathsf{ProFin}$ for the category of extremally disconnected compact Hausdorff spaces, i.e., for the full subcategory of $\mathsf{CHaus}$ spanned by its projective objects (see \cite{Gleason}). Letting $\kappa$ range through the cardinals filters these large categories by their essentially small subcategories $\mathsf{ED}_\kappa\subseteq\mathsf{ProFin}_\kappa$ spanned by their objects of topological weight less than $\kappa$.
The finite jointly surjective covers determine a Grothendieck topology $\tau$ on $\mathsf{ProFin}$, and we write $\tau$ also for its restriction to any subcategory of $\mathsf{ProFin}$. \begin{definition} \label{def:condset} Fix an infinite cardinal $\kappa$ and let $\mathcal{C}=\mathsf{ProFin}_\kappa$; a \emph{$\kappa$-condensed set} $F$ is then a $\mathsf{Set}$-valued sheaf on the site $(\mathcal{C},\tau)$. More concretely, $F\in\mathrm{Fun}(\mathcal{C}^{\mathrm{op}},\mathsf{Set})$ is $\kappa$-condensed iff it satisfies the following conditions: \begin{enumerate} \item $F(\varnothing)=*$ (i.e., $F(\varnothing)$ is a singleton, a terminal object of $\mathsf{Set}$); \item the natural map $F(S\sqcup T)\to F(S)\times F(T)$ is a bijection for any $S,T\in\mathcal{C}$; \item for any surjection $T\twoheadrightarrow S$ in $\mathcal{C}$ with induced maps $p_1,p_2$ from the fiber product $T\times_S T$ to $T$, the natural map $F(S)\to\{x\in F(T)\mid p_1^*(x)=p_2^*(x)\}$ is a bijection. \end{enumerate} Such $F$ form the objects of a category $\mathsf{Cond}(\mathsf{Set})_{\kappa}$, with morphisms the natural transformations between them. For strong limit $\kappa\leq\lambda$ we then have natural embeddings $\mathsf{Cond}(\mathsf{Set})_{\kappa}\to\mathsf{Cond}(\mathsf{Set})_{\lambda}$, and the category $\mathsf{Cond}(\mathsf{Set})$ is the direct limit of this system of embeddings as $\kappa$ ranges over the class of strong limit cardinals. \end{definition} A frequently useful observation is the following: by coinitiality (every Stone space is covered by an extremally disconnected one), the restriction of $\tau$ to the extremally disconnected sets forms a basis for the topology $\tau$, hence condensed sets may be equivalently defined by taking $\mathcal{C}=\mathsf{ED}_\kappa$ in Definition \ref{def:condset} above. Over this more restricted category, the sheaf condition reduces to the first two conditions of Definition \ref{def:condset} (i.e., the third may be safely ignored), and this fact carries a number of pleasing consequences within the theory.
The fact, on the other hand, that the category $\mathsf{ED}$ doesn't in general possess products is the source of some unpleasantness, and this deficiency forms the main concern of Section \ref{sect:products} below. Let $X$ be a topological space which is T1, meaning that any point of $X$ forms a closed set. Then the restricted Yoneda functor $$\underline{X}:\mathsf{ProFin}^{\mathrm{op}}\to\mathsf{Set}:S\mapsto\mathrm{Cont}(S,X),$$ where $\mathrm{Cont}(S,X)$ denotes the set of continuous maps from $S$ to $X$, is a condensed set, as the reader is encouraged to verify. In fact, the assignment $X\mapsto\underline{X}$ defines a fully faithful embedding of the category of compactly generated weakly Hausdorff topological spaces into the category $\mathsf{Cond}(\mathsf{Set})$; this is the first fundamental fact about the theory (\cite[Prop.\ 1.7]{CS1}). For the second, define the category $\mathsf{Cond}(\mathsf{Ab})$ either by replacing $\mathsf{Set}$ with $\mathsf{Ab}$ in Definition \ref{def:condset} or by restricting to the abelian group objects of $\mathsf{Cond}(\mathsf{Set})$. We then have \cite[Thm.\ 1.10]{CS1}: \begin{theorem} $\mathsf{Cond}(\mathsf{Ab})$ is a complete and cocomplete abelian category. \end{theorem} In particular, it is an abelian category into which the category, for example, of locally compact abelian groups embeds: within it, the latter's deficiencies epitomized by line \ref{eq:toyexample} above find resolution. And it is a category to which the derived machinery described above readily applies (in particular, it possesses enough projectives; see Section 4); with this observation, we arrive at the following subsection.
\subsection{The functor $\mathsf{Pro}(\mathsf{D}(\mathsf{Ab}))^{\mathtt{b}}\to\mathsf{D}(\mathsf{Cond(Ab)})$} \label{subsect:pro-to-D} It is plain from the preceding discussion that the category of abelian groups fully faithfully embeds into $\mathsf{Cond}(\mathsf{Ab})$; so too, by way of the category of locally compact abelian groups, does the category of \emph{profinite} abelian groups. Whether the category $\mathsf{Pro}(\mathsf{Ab})$ does as well is a natural next question; its most encompassing formulation is at the derived level, as follows: \begin{question} \label{ques:pro-D} Is the natural map $\mathsf{Pro}(\mathsf{D}\mathsf{(Ab)})^{\mathtt{b}}\to \mathsf{D}\mathsf{(Cond(Ab))}$ given by \begin{align} \label{eq:Q1} \mathrm{``lim}\textnormal{''}\,C_i\mapsto\mathrm{Rlim}\,\underline{C}_i\end{align} fully faithful? \end{question} A few remarks are immediately in order. First: as, in general, it's only at the $\infty$-category theoretic level that the $\mathsf{Pro}( - )$ operation preserves derived structure (cf.\ \cite[\S 15.4]{Kashiwara_Categories_06}), it's at this level that Question \ref{ques:pro-D} should ultimately be posed. We won't belabor this point, since it only momentarily affects our reformulation of the question below; more generally, as our introduction suggested, a relaxed or ``naive'' reading (cf.\ \cite[Ch.\ I.1]{Cisinskietal}; the analogy, of course, is with naive set theory) of the $\infty$-category theoretic manipulations of both this subsection and Subsection \ref{subsect:products} is broadly compatible with this paper's main aims. This is fortunate, since any rigorous review of the subject of $\infty$-categories is beyond the scope of this paper. Lurie's \cite{LurieHTT} or its more recent online reworking \cite{kerodon} are standard one-stop references; for the subject of $\infty$-derived categories, see \cite[\S 1.3]{HA}.
For a highly economical review of much of that material, see Appendix A of \cite{Mann}, which we cite more particularly in Section \ref{sect:products} below. In any case, a main point in the present context is that, when interpreted at the $\infty$-derived level, the $\mathrm{Rlim}$ of (\ref{eq:Q1}) is simply a limit; thus at this level, the functor (\ref{eq:Q1}) is plainly limit-preserving, and Question \ref{ques:pro-D} is, modulo size issues, one of whether $\mathsf{Pro}(\mathsf{D}\mathsf{(Ab)})^{\mathtt{b}}$ is a reflective subcategory of $\mathsf{D}\mathsf{(Cond(Ab))}$ \cite[\S 5.2.7]{LurieHTT}.\footnote{The issues in question are that the constituent categories aren't presentable, so that the standard lemma \cite[Cor.\ 5.5.2.9]{LurieHTT} completing the characterization doesn't apply.} We will expand on the question's significance in this subsection's conclusion below. Second: we should clarify what we mean by the superscript $\mathtt{b}$. We begin by noting that the variant of Question \ref{ques:pro-D} given by omitting $\mathtt{b}$ admits an easy negative answer; we thank Lucas Mann for the following example. For any complex $E$, $$\mathrm{Hom}_{\mathsf{Pro(D(Ab))}}\left(``\prod_{\mathbb{N}}\textnormal{''}\,\mathbb{Z}[-n],E\right)=\bigoplus_\mathbb{N}\mathrm{Hom}_{\mathsf{D(Ab)}}(\mathbb{Z}[-n],E);$$ see \cite[2.6.2]{Kashiwara_Categories_06}.
Here $\mathbb{Z}[-n]$ denotes the cochain complex taking the value $\mathbb{Z}$ in degree $n$ and $0$ elsewhere, and the natural map $\bigoplus_{\mathbb{N}}\mathbb{Z}[-n]\to\prod_{\mathbb{N}}\mathbb{Z}[-n]$ is easily seen to be a quasi-isomorphism (both complexes have vanishing differentials, and in each degree $n$ only the $n^{\mathrm{th}}$ summand or factor contributes a copy of $\mathbb{Z}$), with the consequence that \begin{align*} \mathrm{Hom}_{\mathsf{D(Cond(Ab))}}\left(\prod_{\mathbb{N}}\underline{\mathbb{Z}[-n]},\underline{E}\right) & =\mathrm{Hom}_{\mathsf{D(Cond(Ab))}}\left(\bigoplus_{\mathbb{N}}\underline{\mathbb{Z}[-n]},\underline{E}\right) \\ & =\prod_{\mathbb{N}}\mathrm{Hom}_{\mathsf{D(Cond(Ab))}}\left(\underline{\mathbb{Z}[-n]},\underline{E}\right). \end{align*} In each of the above cases the concluding $\mathrm{Hom}$ terms are $\mathrm{H}^n(E)$, hence if the cohomology of $E$ is nonvanishing in unboundedly many degrees, the two leftmost terms are non-isomorphic, and this instance of the functor (\ref{eq:Q1}) isn't full. Restricting attention to the full subcategory of $\mathsf{Pro(D(Ab))}$ spanned by inverse systems of uniformly bounded complexes, though, averts this issue, and it is this category which we have denoted above by $\mathsf{Pro(D(Ab))}^{\mathtt{b}}$. So posed, Question \ref{ques:pro-D} reduces to a more computationally concrete and interesting question, by steps which we now describe. First, note that one may without loss of generality restrict to complexes bounded from below by $0$, and hence to the derived category $\mathsf{D}^{\geq 0}$; observe next that by equation \ref{eq:limcolim}, our question is then that of whether for all inverse systems of uniformly bounded cochain complexes $(C_i)_{i\in I}$ and $(E_j)_{j\in J}$ in $\mathsf{D}^{\geq 0}(\mathsf{Ab})$ the natural map $$\lim_{j\in J}\colim_{i\in I}\mathrm{Hom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}(C_i,E_j)\to \mathrm{Hom}_{\mathsf{D}^{\geq 0}(\mathsf{Cond(Ab}))}\left(\lim_{i\in I} \underline{C}_i,\lim_{j\in J} \underline{E}_j\right)$$ is an isomorphism. Again all operations are to be read as $\infty$-categorical.
Factoring out the rightmost $\mathrm{lim}$ reduces our question to that of whether $$\colim_{i\in I}\mathrm{Hom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}(C_i,E)\to \mathrm{Hom}_{\mathsf{D}^{\geq 0}(\mathsf{Cond(Ab}))}\left(\lim_{i\in I} \underline{C}_i,\underline{E}\right)$$ is an isomorphism for every bounded $E$ in $\mathsf{D}^{\geq 0}(\mathsf{Ab})$ (note that by the example above, this is in general false if $E$ is unbounded). By passing the $\colim$ back across the parentheses and recalling that $\mathrm{RHom}$ commutes with all finite limits and colimits, we may by \cite[4.4.2.7]{LurieHTT} further reduce our question to that of whether \begin{align*}\mathrm{RHom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}\left(\text{``}\prod_{i\in I}\text{''}\,C_i,E\right)=\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{i\in I} \underline{C}_i, \underline{E}\right)\end{align*} for all $(C_i)_{i\in I}$ and $E$ as above. Standard d\'{e}vissage arguments (cf.\ \cite[Prop.\ 15.4.2]{Kashiwara_Categories_06}) using the truncation functors $\tau^{\geq n}$ and fiber sequences $\tau^{<n}E\to E\to \tau^{\geq n}E$ allow us to further reduce to the case in which $E$ is concentrated in a single degree; similarly for the first coordinate. Working with $\mathrm{RHom}$ allows us to assume, without loss of generality, that this degree is in both cases zero. We have thus reduced our question to the following. \emph{For abelian groups $G_i$ $(i\in I)$ and $H$, does the following equality hold:} \begin{align}\label{eq:Q1'}\mathrm{RHom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}\left(\text{``}\prod_{i\in I}\text{''}\,G_i,H\right)=\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{i\in I} \underline{G}_i, \underline{H}\right)?\end{align} This question further simplifies as follows.
Observe first that free resolutions of arbitrary groups induce long exact sequences reducing the question to one for free groups (see the proof of Theorem \ref{thm:fromstacks} for more concrete argumentation along these lines); we may therefore assume above that each $G_i=\bigoplus_J\mathbb{Z}$ and $H=\bigoplus_K\mathbb{Z}$ for some sets $J$ and $K$. In this case, the left-hand side is concentrated in degree zero, for the reason that \begin{align*} \mathrm{RHom}_{\mathsf{Pro}(\mathsf{D}^{\geq 0}(\mathsf{Ab}))}\left(\text{``}\prod_{i\in I}\text{''}\,G_i,H\right)=\bigoplus_{i\in I}\mathrm{RHom}_{\mathsf{D}^{\geq 0}(\mathsf{Ab})}\left(G_i,H\right), \end{align*} together with the fact that each $G_i$ is free. Moreover, the right-hand term readily identifies in the zeroth degree with $\bigoplus_I\prod_J H$. It is then easy to see that this, in turn, equals the zeroth degree of the right-hand side of equation \ref{eq:Q1'}, namely $\mathrm{lim}\,\mathbf{A}_{I,J}[H]$, in the language of a reformulation which we now describe. Recall that $[J]^{<\omega}$ denotes the collection of finite subsets of $J$. For any function $f:I\to [J]^{<\omega}$, let $X(f)=\{(i,j)\in I\times J\mid j\in f(i)\}$, and let ${^I}([J]^{<\omega})$ denote the partial order on the set of functions from $I$ to $[J]^{<\omega}$ given by $f\leq g$ if and only if $X(f)\subseteq X(g)$. Now observe that $$\prod_{i\in I} G_i=\underset{f\in {^I}([J]^{<\omega})}{\mathrm{colim}} \prod_{i\in I} \mathbb{Z}^{f(i)}\,.$$ By way of this observation, one may rewrite the right-hand side of equation \ref{eq:Q1'} as $$\mathrm{Rlim}_{f\in {^I}([J]^{<\omega})} \,\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{i\in I} \underline{\mathbb{Z}^{f(i)}}, \underline{H}\right)$$ (cf.\ \cite[Prop.\ 3.6.3]{Prosmans}).
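The colimit identity just displayed may be checked by hand at the level of underlying abelian groups (the statement in the text is its condensed analogue; the following routine verification is ours):

```latex
% Any x = (x_i)_{i in I} in \prod_{i in I} G_i, with each
% G_i = \bigoplus_J Z, has finitely supported coordinates; recording
% the supports defines
\[
f_x:I\to[J]^{<\omega},\qquad f_x(i)=\mathrm{supp}(x_i),
\]
% and x then lies in the subgroup \prod_{i in I} Z^{f_x(i)}.
% Conversely, f <= g (that is, X(f) \subseteq X(g)) gives an inclusion
% \prod_i Z^{f(i)} \subseteq \prod_i Z^{g(i)}, so these subgroups form
% a directed family whose union exhausts \prod_{i in I} G_i; this is
% the underlying-group content of the colimit formula.
```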
Observe now that, since $\prod_{i\in I} \underline{\mathbb{Z}^{f(i)}}$ is projective in the subcategory $\mathsf{Solid}$ of $\mathsf{Cond(Ab)}$ and the derived category of the former fully faithfully embeds in that of the latter (\cite[Thm.\ 5.8]{CS1}), the interior $\mathrm{RHom}$ above concentrates in degree zero, and may be computed in that degree to equal $\bigoplus_{i\in I}\bigoplus_{f(i)} H=\bigoplus_{X(f)} H$. This last point follows essentially from \cite[Prop.\ 97.7 or Cor.\ 94.5]{Fuchs_Infinite_73}. We arrive in this way at the derived limits of the inverse systems $\mathbf{A}_{I,J}[H]$, the focus of the following subsection. As will soon grow clear, these limits are substantially more complicated than the left-hand side expressions of line \ref{eq:Q1'}; as such, they exemplify the computational import of Question \ref{ques:pro-D}: it is asking whether, on a significant subcategory of $\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}$, the $\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}$ functor admits a more transparent or manageable description. That subcategory $\mathsf{Pro}(\mathsf{D}\mathsf{(Ab)})^{\mathtt{b}}$ is significant not only for formalizing questions about limits, but for its centrality (hinted at above) to the intermediate subcategory $\mathsf{D}^{\geq 0}(\mathsf{Solid})$, one of the most critical in condensed mathematics (see \cite[\S 5, 6]{CS1}, \cite[\S 2]{CS2}). In essence, the above conversion proceeded via the case of finite $J$, and the rather nontrivial computation of that case is a foundational result in the theory of solid abelian groups. Question \ref{ques:pro-D} may be read as one of how that computation extends. \subsection{The systems $\mathbf{A}_{\kappa,\lambda}$} \label{subsect:systemsAkl} Readers skipping over \ref{subsect:pro-to-D} to this subsection should refer, when necessary, to the previous paragraph for any unfamiliar notation.
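Before the formal definitions below, a small concrete instance of the indexing notation (our own illustration) may be of use: take $I=J=\omega$ and compare $f(i)=\{0,\dots,i\}$ with $g(i)=\{0,\dots,2i\}$.

```latex
% The associated subsets of I x J are
\[
X(f)=\{(i,j)\in\omega\times\omega\mid j\leq i\},\qquad
X(g)=\{(i,j)\in\omega\times\omega\mid j\leq 2i\},
\]
% so X(f) is contained in X(g) and hence f <= g in the partial order
% ^omega([omega]^{<omega}); note that X(f) \subseteq X(g) holds if and
% only if f(i) \subseteq g(i) for every i.
```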
\begin{definition} \label{def:Akappalambda} Let $I$ and $J$ be nonempty sets; for any function $f:I\to [J]^{<\omega}$, let $$A_f=\bigoplus_{X(f)}\mathbb{Z}\,.$$ For any $f\leq g$ in ${^I}([J]^{<\omega})$, there is a natural projection map $$p_{fg}:A_g\to A_f;$$ these together comprise the inverse system $$\mathbf{A}_{I,J}=\left(A_f,p_{fg},{^I}([J]^{<\omega})\right).$$ For any abelian group $H$, let $\mathbf{A}_{I,J}[H]$ denote the inverse system defined by replacing each instance of $\mathbb{Z}$ above with $H$; similarly for the systems defined in parallel below. \end{definition} We may now summarize the reasoning of Subsection \ref{subsect:pro-to-D} in the following terms: \begin{proposition} \label{prop:fullifvanishing} If the natural map $\mathsf{Pro}(\mathsf{D}^{\geq 0}\mathsf{(Ab)})^{\mathtt{b}}\to \mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}$ is fully faithful then $\mathrm{lim}^n\,\mathbf{A}_{I,J}[H]=0$ for all free abelian groups $H$, sets $I$ and $J$, and $n>0$. \qed \end{proposition} \begin{remark} \label{rmk:AklandA} Let $\kappa=|I|$ and $\lambda=|J|$ and observe that bijections $I\to\kappa$ and $J\to\lambda$ determine an isomorphism of the systems $\mathbf{A}_{I,J}$ and $\mathbf{A}_{\kappa,\lambda}$, so that we may without loss of generality restrict our attention to systems of the latter form. Thus for any nonzero cardinals $\kappa$ and $\lambda$ define inverse systems $\mathbf{B}_{\kappa,\lambda}=\left(B_f,p_{fg},{^\kappa}([\lambda]^{<\omega})\right)$ by letting $$B_f=\prod_{X(f)}\mathbb{Z}$$ and $(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}=\left(B_f/A_f,p_{fg},{^\kappa}([\lambda]^{<\omega})\right)$, where the maps $p_{fg}$ are in each case the obvious projection maps. Note that $\mathrm{lim}\,\mathbf{A}_{\kappa,\lambda}\cong\bigoplus_\kappa\prod_{\lambda}\mathbb{Z}$ and that $\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda}\cong\prod_{\kappa\times\lambda}\mathbb{Z}$.
Write $\mathbf{X}\restriction\Lambda$ for the restriction of an inverse system $\mathbf{X}$ to a suborder $\Lambda$ of its index-set. Let $[\omega]^{\mathrm{in}}$ denote the set of strictly initial segments of $\omega$; this induces an identification of $\mathbf{A}_{\omega,\omega}\restriction {^\omega}([\omega]^{\mathrm{in}})$ with the inverse system denoted $\mathbf{A}$ in \cite{Mardesic_additive_88} and \cite{Bergfalk_simultaneously_23}, for example, and much intervening literature; since $[\omega]^{\mathrm{in}}$ is cofinal in $[\omega]^{<\omega}$, it follows that $\mathbf{A}_{\omega,\omega}$ and $\mathbf{A}$ are isomorphic in the category $\mathsf{pro}\text{-}\mathsf{Ab}$. The lemmas and definitions preceding Theorem \ref{thm:limsofAkappalambda} below are routine generalizations to the systems $\mathbf{A}_{\kappa,\lambda}$ of a sequence of results standard within the literature on $\mathbf{A}$. \end{remark} \begin{lemma} \label{lem:Bvanishing} $\mathrm{lim}^n\,\mathbf{B}_{\kappa,\lambda}=0$ for $n>0$. \end{lemma} \begin{proof} Observe that $\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda}$ surjects onto $\mathrm{lim}\,(\mathbf{B}_{\kappa,\lambda}\restriction\Lambda)$ for any downwards-closed suborder $\Lambda$ of ${^\kappa}([\lambda]^{<\omega})$; in other words, $\mathbf{B}_{\kappa,\lambda}$ is \emph{flasque} in the sense of \cite{Jensen_les}. By way of this observation, the lemma follows from \cite[Th\'{e}or\`{e}me 1.8]{Jensen_les}.
\end{proof} Consider now the short exact sequence of inverse systems \begin{align} \label{eq:ABBASES} \mathbf{0}\to\mathbf{A}_{\kappa,\lambda}\to\mathbf{B}_{\kappa,\lambda}\to(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}\to \mathbf{0}.\end{align} The functor $\mathrm{lim}$ being left exact, its application to (\ref{eq:ABBASES}) induces a long exact sequence $$0\to\mathrm{lim}\,\mathbf{A}_{\kappa,\lambda}\to\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda}\to\mathrm{lim}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}\to \mathrm{lim}^1\,\mathbf{A}_{\kappa,\lambda}\to\mathrm{lim}^1\,\mathbf{B}_{\kappa,\lambda}\to\cdots$$ whose terms $\mathrm{lim}^n\,\mathbf{X}$ are the $n^{\mathrm{th}}$ cohomology groups of $\mathrm{Rlim}\,\mathbf{X}$ (with a natural identification of $\mathrm{lim}$ and $\mathrm{lim}^0$). Lemma \ref{lem:Bvanishing} then implies that \begin{align} \label{eq:lim1} \mathrm{lim}^1\,\mathbf{A}_{\kappa,\lambda}\cong\frac{\mathrm{lim}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}}{\mathrm{im}(\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda})}\end{align} and that \begin{align} \label{eq:limn} \mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}\cong\mathrm{lim}^{n-1}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}\text{ for all }n>1. \end{align} These equations facilitate combinatorial reformulations of the conditions ``$\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}=0$'' for $n>0$ as in the definition and lemma below; in what follows, $=^*$ denotes equality modulo a finite set. \begin{definition} \label{def:coherence} Let $\Phi=\langle\varphi_f:X(f)\to\mathbb{Z}\mid f\in {^\kappa}([\lambda]^{<\omega})\rangle$ be a family of functions. \begin{itemize} \item $\Phi$ is \emph{coherent} if $$\varphi_f=^*\varphi_g|_{X(f)}$$ for all $f\leq g$ in ${^\kappa}([\lambda]^{<\omega})$. \item $\Phi$ is \emph{trivial} if there is a function $\psi:\kappa\times\lambda\to\mathbb{Z}$ such that $$\varphi_f=^*\psi|_{X(f)}$$ for all $f\in {^\kappa}([\lambda]^{<\omega})$. 
\end{itemize} For any $(n+1)$-tuple $\vec{f}$, write $\vec{f}^i$ for the $n$-tuple formed by omitting the $i^{\mathrm{th}}$ element of $\vec{f}$. Also write $X(\vec{f})$ for $\bigcap_{f\in\vec{f}}X(f)$ and let $\Phi=\langle\varphi_{\vec{f}}:X(\vec{f})\to\mathbb{Z}\mid \vec{f}\in ({^\kappa}([\lambda]^{<\omega}))^n\rangle$ be a family of functions for some $n>1$. Here and in all that follows, we assume without further comment that such $\Phi$ are \emph{alternating}, meaning that $\varphi_{\vec{f}}=\mathrm{sign}(\sigma)\,\varphi_{\sigma\cdot\vec{f}}$ for all permutations $\sigma$ of $n$, where $\sigma\cdot\vec{f}$ denotes the natural action of $\sigma$ on $\vec{f}$. \begin{itemize} \item $\Phi$ is \emph{$n$-coherent} if $$\sum_{i\leq n}(-1)^i\varphi_{\vec{f}^i}|_{X(\vec{f})}=^* 0$$ for all $\vec{f}\in ({^\kappa}([\lambda]^{<\omega}))^{n+1}$. \item $\Phi$ is \emph{$n$-trivial} if there exists a $\Psi=\langle\psi_{\vec{f}}:X(\vec{f})\to\mathbb{Z}\mid \vec{f}\in ({^\kappa}([\lambda]^{<\omega}))^{n-1}\rangle$ such that $$\sum_{i<n}(-1)^i\psi_{\vec{f}^i}|_{X(\vec{f})}=^* \varphi_{\vec{f}}$$ for all $\vec{f}\in ({^\kappa}([\lambda]^{<\omega}))^{n}$. \end{itemize} \end{definition} The adaptations of these definitions to families of $H$-valued functions are straightforward, and the following lemma will continue to apply. For simplicity, however, we will tend in this subsection to proceed, as above, with a default codomain of $\mathbb{Z}$. Note also that \emph{coherence} (respectively, \emph{triviality}) readily identifies in the above definition with the $n=1$ instance of \emph{$n$-coherence} (respectively, \emph{$n$-triviality}); this permits more uniform statements, as in the lemma below. \begin{lemma} \label{lem:zeroiftriv} For all $n>0$, $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}=0$ if and only if every $n$-coherent family of functions indexed by $({^\kappa}([\lambda]^{<\omega}))^n$ is $n$-trivial.
\end{lemma} \begin{proof} In the case of $n=1$, elements of $\mathrm{lim}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}$ may be viewed as collections of $=^*$-equivalence classes $\langle [\varphi_f]\mid f\in{^\kappa}([\lambda]^{<\omega})\rangle$, where $\langle\varphi_f\mid f\in {^\kappa}([\lambda]^{<\omega})\rangle$ is a coherent family of functions. Since $\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda}=\prod_{\kappa\times\lambda}\mathbb{Z}$ and the map $\mathrm{lim}\,\mathbf{B}_{\kappa,\lambda}\to\mathrm{lim}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}$ takes a function $\psi:\kappa\times\lambda\to\mathbb{Z}$ to the collection $\langle[\psi|_{X(f)}]\mid f\in {^\kappa}([\lambda]^{<\omega})\rangle$ of equivalence classes of its restrictions, the assertion follows from equation \ref{eq:lim1}. The case of higher $n$ is close in spirit, only a bit more tedious; since the argument is essentially that given in \cite[Section 2.1]{Bergfalk_simultaneously_21}, readers are referred there for details. The fundamental point is that, by equation \ref{eq:limn}, elements of $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$ admit representation as elements of $\mathrm{lim}^{n-1}\,(\mathbf{B}/\mathbf{A})_{\kappa,\lambda}$ which, when construed as cohomology classes in the associated Roos complex, give rise to the representations of Definition \ref{def:coherence}. \end{proof} We now record the most basic facts about the groups $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$; these will more than suffice for the applications structuring this and the following section. For further (and subtler) results, see Theorem \ref{thm:limnAkl_further} below. \begin{theorem} \label{thm:limsofAkappalambda} The following hold for the inverse systems $\mathbf{A}_{\kappa,\lambda}$: \begin{enumerate} \item If $\kappa$ or $\lambda$ is finite then $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}=0$ for all $n>0$.
\item If $\kappa=\lambda=\aleph_0$ then it is consistent with the \textsf{ZFC} axioms that $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}=0$ for all $n>0$. \item If $\kappa=\lambda=\aleph_0$ and $n>0$ then it is consistent with the \textsf{ZFC} axioms that $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}\neq 0$. \item If $\mu\geq\kappa$ and $\nu\geq\lambda$ and $n\geq 0$ then $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$ forms a subgroup of $\mathrm{lim}^n\,\mathbf{A}_{\mu,\nu}$. \item If $\kappa=\aleph_0$ and $\lambda=\aleph_1$ then $\mathrm{lim}^1\,\mathbf{A}_{\kappa,\lambda}\neq 0$. \end{enumerate} Moreover, for any nontrivial abelian group $H$, these results all equally hold if $\mathbf{A}_{\kappa,\lambda}$ is replaced with $\mathbf{A}_{\kappa,\lambda}[H]$. \end{theorem} The following corollary now establishes Theorem A from the introduction. \begin{corollary} \label{cor:notfull} The functor of equation \ref{eq:Q1} is not full. \end{corollary} \begin{proof} This is immediate from item (5) of Theorem \ref{thm:limsofAkappalambda}, together with Proposition \ref{prop:fullifvanishing}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:limsofAkappalambda}] Item (1) is immediate from Definition \ref{def:coherence} and Lemma \ref{lem:zeroiftriv}, for if $\kappa$ is finite then any associated $n$-coherent family is trivialized by a family of zero functions (or by a single zero function, if $n=1$), while if $\lambda$ is finite then the partial order ${^\kappa}([\lambda]^{<\omega})$ possesses a maximum. By \cite{Bergfalk_simultaneously_23} and Remark \ref{rmk:AklandA}, item (2) holds in any forcing extension given by adding $\beth_\omega$-many Cohen reals; see also \cite{Bergfalk_simultaneously_21} for an alternate deduction of its consistency from the existence of a weakly compact cardinal.
Similarly, item (3) is the main result of \cite{Velickovic_non_21}; more precisely, item (3) was therein shown to hold for $\mathbf{A}_{\aleph_0,\aleph_0}[H]$ for any fixed nontrivial abelian group $H$. For a strong generalization of item (2) to arbitrary abelian groups $H$, see \cite[Theorem 1.2]{Bannister_additivity_23} and the discussion following the proof of Theorem \ref{thm:fromstacks} below. For item (4), fix $n\geq 0$ and observe that for any $\mu\geq\kappa$ and $\nu\geq\lambda$, any $n$-coherent $\Phi=\langle\varphi_{\vec{f}}:X(\vec{f})\to\mathbb{Z}\mid\vec{f}\in {^\kappa}([\lambda]^{<\omega})\rangle$ naturally extends to an $n$-coherent $\Psi=\langle\psi_{\vec{f}}:X(\vec{f})\to\mathbb{Z}\mid\vec{f}\in {^\mu}([\nu]^{<\omega})\rangle$ which is trivial if and only if $\Phi$ is, an extension which determines, in turn, an embedding of $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$ into $\mathrm{lim}^n\,\mathbf{A}_{\mu,\nu}$. Concretely, for any $f:\mu\to[\nu]^{<\omega}$ define $g:\kappa\to [\lambda]^{<\omega}$ by $g(\xi)=f(\xi)\cap\lambda$ for all $\xi\in\kappa$, a correspondence extending in an obvious fashion to $n$-tuples $\vec{f}\in({^\mu}([\nu]^{<\omega}))^n$. Let then $\psi_{\vec{f}}\,|_{X(\vec{g})}=\varphi_{\vec{g}}$ and $\psi_{\vec{f}}\,|_{X(\vec{f})\backslash X(\vec{g})}=0$; the verification that $\Psi$ is as claimed is straightforward and left to the reader. For item (5), begin by fixing a family $E=\langle e_\alpha:\alpha\to\omega\mid\alpha<\omega_1\rangle$ of finite-to-one functions which is nontrivially coherent in the classical sense that $e_\beta |_\alpha=^* e_\alpha$ for all $\alpha<\beta<\omega_1$ but for no $e:\omega_1\to\omega$ does $e |_\alpha =^* e_\alpha$ for all $\alpha<\omega_1$. (Among the more prominent such families are the fiber maps $e_\alpha=\rho_1(\,\cdot\,,\alpha)$ of the $\rho_1$ or $\rho$ functions of \cite{Todorcevic_Walks_07}, for example, but interested readers might equally construct such an $E$ by transfinite recursion.) 
For any $f:\omega\to [\omega_1]^{<\omega}$, let $\mathrm{sp}(f)=\mathrm{sup}\bigcup_{i\in\omega} f(i)$ and let \begin{equation*} \varphi_f(i,\xi)= \begin{cases} 1 & \text{if } e_{\mathrm{sp}(f)}(\xi)=i \\ 0 & \text{otherwise} \end{cases} \end{equation*} for all $(i,\xi)\in X(f)$. To complete the proof, it will suffice to show that $\Phi=\langle\varphi_f\mid f\in {^\omega}([\omega_1]^{<\omega})\rangle$ is nontrivially coherent. To aid in seeing this, consider an auxiliary family $T=\langle\tau_\gamma\mid \gamma<\omega_1\rangle$ of functions $\tau_\gamma:\omega\times\gamma\to\{0,1\}$, each of which is the characteristic function of the reflection of the graph of $e_\gamma$; more concisely, $\tau_\gamma(i,\xi)=1$ if and only if $e_\gamma(\xi)=i$. It is then clear from the observation that $\varphi_f=\tau_{\mathrm{sp}(f)}|_{X(f)}$ that $\Phi$, by way of $T$, inherits the coherence of $E$. For the nontriviality of $\Phi$, simply observe that $\varphi |_{X(f)}\neq^*\varphi_f$ for any $\varphi_f$ whose domain $X(f)$ contains the $\{(k,\alpha_k)\mid k\in Y\}$ of the following claim: \begin{claim} For any $\varphi:\omega\times\omega_1\to\mathbb{Z}$ there exist a $\gamma<\omega_1$ and infinite $Y\subseteq \omega$ and $\alpha_k<\gamma$ with $\tau_\gamma(k,\alpha_k)\neq\varphi(k,\alpha_k)$ for all $k\in Y$. \end{claim} \begin{proof} If not, then for some cofinal $\Gamma\subseteq\omega_1$ and $\ell\in\omega$ we would have $\tau_\gamma|_{[\ell,\omega)\times\gamma}=\varphi|_{[\ell,\omega)\times\gamma}$ for all $\gamma\in\Gamma$. Let then $e(\xi)=n$ if and only if $n\geq\ell$ and $\varphi(n,\xi)=1$; this defines a partial function $e:\omega_1\to\omega$, any extension of which to all of $\omega_1$ trivializes $E$ (since each $e_\gamma\in E$ is finite-to-one), a contradiction. 
\renewcommand{\qedsymbol}{\twoqedbox} \end{proof} \let\qed\relax \end{proof} \subsection{An alternative framing} \label{subsect:alternative} Relations between the main expressions of Sections \ref{subsect:pro-to-D} and \ref{subsect:systemsAkl} may be more neatly summarized as follows. The theorem is due to Clausen and Scholze, and appears (in essence) in the closing minutes of the fourth lecture in their \emph{Analytic Stacks} series \cite{analytic_stacks}. We fill in the details of its proof. Note that this establishes clauses (1) and (2) of Theorem B. \begin{theorem} \label{thm:fromstacks} For any cardinals $\mu>0$ and $\lambda\geq\aleph_0$, the following assertions are equivalent: \begin{enumerate} \item $\lim^n \mb{A}_{\aleph_0,\lambda}[\bigoplus_{\mu} \bb{Z}] = 0$ for all $n>0$. \item $\mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}} (\prod_{\omega} \bigoplus_\lambda \underline{\bb{Z}}, \bigoplus_{\mu} \underline{\bb{Z}})$ is concentrated in degree zero. \item Whenever $\mb{M} = (M_i, \pi_{i,j}, \omega)$ is an inverse sequence of abelian groups of cardinality at most $\lambda$ whose transition maps $\pi_{i,j} : M_{j} \to M_i$ are all surjective, and $H$ is an abelian group of cardinality at most $\mu$, we have $$\mathrm{Ext}_{\mathsf{Cond(Ab)}}^n\left(\lim_{i < \omega}\,\underline{M_i},\underline{H}\right)=\colim_{i < \omega}\,\mathrm{Ext}_{\mathsf{Cond(Ab)}}^n\left(\underline{M_i},\underline{H}\right)$$ for all $n \geq 0$. \end{enumerate} \end{theorem} \begin{proof} An argument that $(1) \Rightarrow (2)$ concluded Section \ref{subsect:pro-to-D}; let us briefly recall it, emphasizing its reversibility, thereby establishing $(2) \Rightarrow (1)$.
Again, since \[ \prod_{\omega} \bigoplus_\lambda \bb{Z} = \colim_{f \in {^{\omega}([\lambda]^{<\omega})}} \prod_{i < \omega} \bb{Z}^{f(i)}, \] the expression in clause 2 may be written as \[ \mathrm{Rlim}_{f \in {^{\omega}([\lambda]^{<\omega})}} \mathrm{RHom}_{\mathsf{D}^{\geq 0}\mathsf{(Cond(Ab))}}\left(\prod_{i < \omega} \underline{\bb{Z}^{f(i)}}, \bigoplus_\mu \underline{\bb{Z}}\right). \] Since $\prod_{i < \omega} \underline{\bb{Z}^{f(i)}}$ is projective in $\mathsf{Solid}$, the $\mathrm{RHom}$ in the above expression concentrates in degree zero, where it is equal to $\bigoplus_{X(f)} \bigoplus_{\mu} \bb{Z}$. The clause 2 expression thus reduces to \[ \mathrm{Rlim}_{f \in {^{\omega}([\lambda]^{<\omega})}} \bigoplus_{X(f)} \bigoplus_{\mu} \bb{Z} = \mathrm{Rlim} \,\mb{A}_{\aleph_0, \lambda}\left[\bigoplus_{\mu} \bb{Z}\right], \] hence concentrates in degree zero if and only if the rightmost expression just above does; this shows $(1) \Leftrightarrow (2)$. Let us now show that $(2) \Rightarrow (3)$. To this end, fix $\mb{M}$ and $H$ as in the statement of (3). By resolving $H$ by free groups of cardinality $\mu$, we reduce to the case in which $H = \bigoplus_\mu \bb{Z}$. Turning to the first coordinate, fix a resolution \begin{equation} \label{eq:basicKFM} 0 \ra K_i \ra F_i \ra M_i \ra 0 \end{equation} of each $M_i$ in which the groups $F_i$ and $K_i$ are free of size $\lambda$ (i.e., each is of the form $\bigoplus_\lambda \bb{Z}$) and observe that since products are exact in $\CondAb$ (\cite[Thm.\ 2.2 (AB4*)]{CS1}), these resolutions assemble into a short exact sequence \begin{equation} \label{eq:prodSES} 0 \ra \prod_{\omega} \bigoplus_\lambda \underline{\bb{Z}} \ra \prod_{\omega} \bigoplus_\lambda \underline{\bb{Z}} \ra \prod_{i<\omega} \underline{M_i} \ra 0 \end{equation} of condensed abelian groups with natural projections to the condensations of each short exact sequence as in line \ref{eq:basicKFM}.
Applying $\mathrm{Hom}( - , \bigoplus_{\mu} \underline{\bb{Z}})$ to all of them determines a family of long exact sequences; under our assumptions, these carry the following implications: \begin{enumerate}[label=(\alph*)] \item $\mathrm{Ext}^n(\prod_{i < \omega} \underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}) = 0$, and hence \begin{equation} \label{eq:prodcoprodequality} \mathrm{Ext}^n\left(\prod_{i < \omega} \underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right) \cong \bigoplus_{i < \omega} \mathrm{Ext}^n\left(\underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right)\end{equation} for all $n>1$. \item Equation \ref{eq:prodcoprodequality} in fact holds for $n=0$ and $n=1$ as well. \end{enumerate} The first is an easy consequence of the theorem's clause 2, while the second further leverages the slenderness of $\bigoplus_\mu \bb{Z}$ \cite[\S 94]{Fuchs_Infinite_73}. For the aforementioned projection maps from (\ref{eq:prodSES}) to the condensations of (\ref{eq:basicKFM}) induce a chain map from the sum of the long exact sequences given by $\mathrm{Hom}( - , \bigoplus_{\mu} \underline{\bb{Z}})$ of the latter to that given by $\mathrm{Hom}( - , \bigoplus_{\mu} \underline{\bb{Z}})$ of the former, and slenderness tells us that the two constituents $$\bigoplus_{\omega}\,\mathrm{Hom}\left(\bigoplus_\lambda \underline{\bb{Z}}, \bigoplus_\mu \underline{\bb{Z}}\right)\to\mathrm{Hom}\left(\prod_{\omega} \bigoplus_\lambda \underline{\bb{Z}}, \bigoplus_\mu \underline{\bb{Z}}\right)$$ of this chain map are each isomorphisms. This, together with the Five Lemma and item (a), implies that the two other nontrivial constituents of this map, namely the $n=0$ and $n=1$ instances of \begin{equation*} \bigoplus_{i < \omega} \mathrm{Ext}^n\left(\underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right)\to\mathrm{Ext}^n\left(\prod_{i < \omega} \underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right)\end{equation*} are isomorphisms as well. 
To upgrade these isomorphisms from products and coproducts to limits and colimits, consider, for any nonzero $j\leq\omega$, the shift map $$\mathrm{sh}:\prod_{i<1+j}\underline{M_i}\to\prod_{i<j}\underline{M_i}:(x_i)\mapsto(\pi_{i,i+1}(x_{i+1}))$$ and observe that for any finite such $j$, both rows of the natural commutative diagram \[ \begin{tikzcd} 0 \arrow[r] & \lim_{i<\omega}\,\underline{M_i} \arrow[r] \arrow[d] & \prod_{i<\omega}\underline{M_i} \arrow[r, "\mathrm{id}-\mathrm{sh}"] \arrow[d] & \prod_{i<\omega}\underline{M_i} \arrow[r] \arrow[d] & 0 \\ 0 \arrow[r] & \underline{M_j} \arrow[r] & \prod_{i< j+1}\underline{M_i} \arrow[r, "\mathrm{id}-\mathrm{sh}"] & \prod_{i< j}\underline{M_i} \arrow[r] & 0 \end{tikzcd} \] are exact, since the transition maps of $\mathbf{M}$ are all surjective. Again applying $\mathrm{Hom}( - , \bigoplus_{\mu} \underline{\bb{Z}})$ induces a family of long exact sequences, and a chain map from the colimit of those associated to the lower row to that associated to the upper row above. Since every second and third constituent of this chain map is, by the previous paragraph, an isomorphism of the form $$\colim_{j<\omega}\,\mathrm{Ext}^n\left(\prod_{i<j}\,\underline{M_i},\bigoplus_\mu\underline{\bb{Z}}\right)\cong\bigoplus_{i < \omega} \mathrm{Ext}^n\left(\underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right)\xrightarrow{\cong}\mathrm{Ext}^n\left(\prod_{i < \omega} \underline{M_i}, \bigoplus_\mu \underline{\bb{Z}}\right),$$ again the Five Lemma implies that all the other constituents of this chain map, namely the natural morphisms $$\colim_{j<\omega}\,\mathrm{Ext}^n\left(\underline{M_j},\bigoplus_\mu\underline{\bb{Z}}\right)\to \mathrm{Ext}^n\left(\mathrm{lim}_{i<\omega}\,\underline{M_i},\bigoplus_\mu\underline{\bb{Z}}\right)$$ for each $n\in\omega$, are isomorphisms as well. This shows $(2) \Rightarrow (3)$. 
But since $\prod_{\omega} \bigoplus_\lambda \underline{\bb{Z}}$ admits expression as the limit of a sequence as in clause 3 (namely, of its finite subproducts $\prod_{i<j} \bigoplus_\lambda \underline{\bb{Z}}$ under the natural projections), we in fact have $(2) \Leftrightarrow (3)$, and this concludes the proof of the theorem. \end{proof} Let us briefly discuss the consistency of the (equivalent) assertions in Theorem \ref{thm:fromstacks}, first considering the case in which $\lambda = \aleph_0$. In recent work, Bannister considers the additivity of derived limits for a particular type of inverse system known as an \emph{$\Omega_\kappa$-system} (where the parameter $\kappa$ denotes an arbitrary infinite cardinal). In particular, he proves in \cite[Thm.\ 1.2]{Bannister_additivity_23} that, for any fixed $\kappa$, derived limits are additive for $\Omega_\kappa$-systems in any forcing extension obtained by adding at least $\beth_\omega(\kappa)$-many Cohen reals to a model of $\mathsf{ZFC}$. The assertion that $\lim^n \mb{A}_{\aleph_0,\aleph_0}[H] = 0$ for all $n > 0$ and all abelian groups $H$ is a special case of the additivity of derived limits for $\Omega_{\aleph_0}$-systems; therefore, for $\lambda = \aleph_0$, the conditions in Theorem \ref{thm:fromstacks} hold simultaneously for all $\mu > 0$ in any forcing extension obtained by adding at least $\beth_\omega$-many Cohen reals. In particular, starting with a model of $\mathsf{GCH}$, one sees that they are consistent with $2^{\aleph_0} = \aleph_{\omega+1}$. Furthermore, a result of Casarosa and Lambie-Hanson \cite{casarosa_lh} shows that, again for $\lambda = \aleph_0$, $\aleph_{\omega+1}$ is the \emph{minimum} value of the continuum compatible with the conditions holding simultaneously for all $\mu$. They prove that, if the dominating number $\mathfrak{d}$ equals $\aleph_n$ for some $1 \leq n < \omega$, then $\lim^n \mb{A}_{\aleph_0,\aleph_0}[\bigoplus_{\omega_n} \mathbb{Z}] \neq 0$.
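To render the role of $\mathfrak{d}$ concrete, note the $n=1$ instance of their result: in any model of $\mathsf{CH}$ (or, more generally, whenever $\mathfrak{d} = \aleph_1$), we have
\[
\lim^1 \mb{A}_{\aleph_0,\aleph_0}\left[\bigoplus_{\omega_1} \mathbb{Z}\right] \neq 0,
\]
so that condition (1) of Theorem \ref{thm:fromstacks} already fails for $\lambda = \aleph_0$ and $\mu = \aleph_1$ in any such model.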
In particular, if the conditions in Theorem \ref{thm:fromstacks} hold for $\lambda = \aleph_0$ and all cardinals $\mu < \aleph_\omega$, then we must have $2^{\aleph_0} > \aleph_\omega$. A more axiomatic approach to these results is possible as well; the assumption $[\mathrm{PH}_n$ for all $n\geq 0]$, where $\mathrm{PH}_n$ denotes the $n^{\mathrm{th}}$ \emph{partition hypothesis} of \cite{BBMT}, also implies the conditions of Theorem \ref{thm:fromstacks} for $\lambda=\aleph_0$ and all $\mu>0$ simultaneously. This, though, is a logically stronger assumption than those discussed above: by \cite[Cor.\ 8.21]{BBMT}, its consistency strength is exactly that of the existence of a weakly compact cardinal. On the other hand, if $\lambda > \aleph_0$, then the conditions of Theorem \ref{thm:fromstacks} are inconsistent even when $\mu = 1$, by clauses (5) and (4) of Theorem \ref{thm:limsofAkappalambda}. We end this subsection by highlighting a particular consequence of the assertions in Theorem \ref{thm:fromstacks} for the duality of condensed Banach spaces and condensed Smith spaces in non-archimedean settings. For concreteness, we work over $\bb{Q}_p$ for some fixed prime $p$. As noted in \cite[Lecture 6]{analytic_stacks}, the derived category $\mathsf{D(Solid}_{\bb{Q}_p})$ of solid $\bb{Q}_p$-vector spaces is a full subcategory of the derived category $\mathsf{D(Solid)}$ of solid abelian groups, which in turn is a full subcategory of the derived category $\mathsf{D(Cond}(\mathsf{Ab}))$ of condensed abelian groups. Classically, a Smith space is a complete compactly generated locally convex topological vector space that has a universal compact set. Smith spaces were introduced in \cite{smith}, where they were shown to be in stereotype duality with Banach spaces. This Banach--Smith duality persists in the condensed setting.
Recall that $\bb{Q}_p = \bb{Z}_p[\frac{1}{p}]$, where the latter expression is defined to be \[ \colim (\bb{Z}_p \xrightarrow{\times p} \bb{Z}_p \xrightarrow{\times p} \bb{Z}_p \xrightarrow{\times p} \cdots ). \] As shown in \cite[Lemma 3.8]{rj_rc} (see also \cite[Lecture 6]{analytic_stacks}, which more closely matches our notation), the solid $\bb{Q}_p$-Banach spaces are precisely the solid $\bb{Q}_p$-vector spaces of the form \[ \Big(\bigoplus_I \bb{Z}_p\Big)^{\wedge}_p\left[\frac{1}{p}\right] = \left(\lim_{n<\omega} \Big(\bigoplus_I \bb{Z}_p/p^n \bb{Z}_p\Big)\right)\left[\frac{1}{p}\right] \] for some index set $I$, while the solid $\bb{Q}_p$-Smith spaces are precisely the solid $\bb{Q}_p$-vector spaces of the form \[ \Big(\prod_I \bb{Z}_p\Big)\left[\frac{1}{p}\right] \] for some index set $I$ (readability dictates that for the remainder of this section we forego the underline notation for condensed images of rings). It is shown in \cite[Lemma 3.10]{rj_rc} that the classical Banach--Smith duality extends to the solid $\bb{Q}_p$-vector space setting via the internal $\mathrm{Hom}$ functor $\underline{\mathrm{Hom}}(-,-): (\mathsf{Solid}_{\bb{Q}_p})^{\mathrm{op}} \times \mathsf{Solid}_{\bb{Q}_p} \ra \mathsf{Solid}_{\bb{Q}_p}$ (see \cite[p.\ 13]{CS1} for basic definitions of condensed internal $\mathrm{Hom}$ functors). In particular, there it is shown that, for any index set $I$, we have \begin{itemize} \item $\underline{\mathrm{Hom}}\left((\bigoplus_I \bb{Z}_p)^{\wedge}_p\left[\frac{1}{p}\right], \bb{Q}_p\right) = (\prod_I \bb{Z}_p)\left[\frac{1}{p}\right]$, and \item $\underline{\mathrm{Hom}}\left((\prod_I \bb{Z}_p)\left[\frac{1}{p}\right], \bb{Q}_p\right) = (\bigoplus_I \bb{Z}_p)^{\wedge}_p\left[\frac{1}{p}\right]$. 
\end{itemize} As $\prod_I\mathbb{Z}_p$ is, moreover, a projective solid $\mathbb{Z}_p$-module, it is essentially immediate that the second of these equalities holds even at the level of the derived category $\mathsf{D(Solid}_{\bb{Q}_p})$, i.e., even with the $\underline{\mathrm{Hom}}$ replaced by $\mathrm{R}\underline{\mathrm{Hom}}$; see \cite[Remark 3.11]{rj_rc}. The question of whether this holds for the first bullet (and hence of whether this is a \emph{derived} duality relation), though, is more delicate. We close with a brief proof of the observation of Clausen and Scholze (again, see \cite[Lecture 6]{analytic_stacks}) that, under the conditions of Theorem \ref{thm:fromstacks} with $\lambda = \aleph_0$, this question also has an affirmative answer in the context of \emph{separable} solid $\bb{Q}_p$-Banach spaces, i.e., in the situation in which the index set $I$ is countable. The following theorem yields clause (3) of Theorem B: \begin{theorem} Suppose that $\lim^n \mb{A}_{\aleph_0,\aleph_0}[H] = 0$ for all $n > 0$ and all abelian groups $H$. Then, for all primes $p$, we have \[ \mathrm{R}\underline{\mathrm{Hom}}\left(\Big(\bigoplus_\omega \bb{Z}_p\Big)^{\wedge}_p\left[\frac{1}{p}\right], \bb{Q}_p \right) = \Big(\prod_\omega \bb{Z}_p\Big)\left[\frac{1}{p}\right]. \] \end{theorem} \begin{proof} By \cite[Lemma 3.10]{rj_rc}, we have $\underline{\mathrm{Hom}}\left( (\bigoplus_\omega \bb{Z}_p)^{\wedge}_p\left[\frac{1}{p}\right], \bb{Q}_p \right) = (\prod_\omega \bb{Z}_p)\left[\frac{1}{p}\right]$. Therefore, it suffices to show that $\mathrm{R}\underline{\mathrm{Hom}}\left( (\bigoplus_\omega \bb{Z}_p)^{\wedge}_p\left[\frac{1}{p}\right], \bb{Q}_p \right)$ is concentrated in degree zero. Note that \[ \Big(\bigoplus_\omega \bb{Z}_p\Big)^{\wedge}_p\left[\frac{1}{p}\right] \cong \lim_{n < \omega} \Big(\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)\Big) \] and that $\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)$ is countable and discrete for each $n < \omega$.
Similarly, we have $\bb{Q}_p \cong \lim_{m < \omega} \bb{Q}_p/(p^m \bb{Z}_p)$. Therefore, for all $S \in \mathsf{ED}$, we have \begin{align} \label{eq:Banach-Smith} & \mathrm{R\underline{Hom}}\left(\Big(\bigoplus_\omega \bb{Z}_p\Big)^{\wedge}_p\left[\frac{1}{p}\right], \bb{Q}_p \right)(S) \nonumber \\ &= \underset{m<\omega}{\mathrm{Rlim}} \:\mathrm{R\underline{Hom}}\left(\lim_{n < \omega} \Big(\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)\Big), \bb{Q}_p/(p^m \bb{Z}_p)\right)(S) \nonumber \\ &= \underset{m<\omega}{\mathrm{Rlim}} \:\mathrm{RHom}\left(\lim_{n < \omega} \Big(\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)\Big), \mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))\right). \end{align} For the second equality, see the third paragraph of \cite[\S 4]{TangProfinite} (or, a little more concretely, argue as for \cite[Lem.\ 11.1]{BLHS}). Since $S \in \mathsf{ED}$ and $\bb{Q}_p/(p^m \bb{Z}_p)$ is discrete, we know that $\mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))$ (with the compact-open topology) is a discrete abelian group for all $m < \omega$. Hence clause (3) of Theorem \ref{thm:fromstacks} applies to the interior RHom of line \ref{eq:Banach-Smith}, entailing that \begin{align*} & H^i\mathrm{RHom}\left(\lim_{n<\omega} \Big(\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)\Big), \mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))\right) \\ & = \underset{n<\omega}{\mathrm{colim}} \:\mathrm{Ext}^i\left( \Big(\bigoplus_\omega \bb{Q}_p/(p^n \bb{Z}_p)\Big), \mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))\right) \\ &= \underset{n<\omega}{\mathrm{colim}} \prod_\omega \mathrm{Ext}^i\left(\bb{Q}_p/(p^n \bb{Z}_p), \mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))\right) \end{align*} for all $i\geq 0$. Both arguments inside $\mathrm{Ext}^i$ in the last line above are (discrete) abelian groups, and $\mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))$ is divisible, and hence injective.
Thus \[ \mathrm{Ext}^i\left(\bb{Q}_p/(p^n \bb{Z}_p), \mathrm{Cont}(S, \bb{Q}_p/(p^m \bb{Z}_p))\right) = 0 \] for all $i>0$, and the $\mathrm{Rlim}$ of line \ref{eq:Banach-Smith} is therefore that of an inverse sequence of abelian groups whose transition maps are all surjective. In consequence, it, too, concentrates in the zeroth degree, and this completes the proof. \end{proof} \subsection{Consistent nonvanishing in higher degrees} \label{subsect:morenonvanishing} In the presence of additional assumptions, item (5) of Theorem \ref{thm:limsofAkappalambda} does generalize to higher degrees; a main result in this direction is Theorem \ref{thm:limnAkl_further} below. Here a few preliminary words may be of value. First, this being the paper's most combinatorially involved result, readers may wish to defer a reading of its proof until they've digested the argument of Theorem \ref{thm:nonzero_cohomology_on_opens}; its proof will in fact reference that argument at one point. That said, these references are essentially to motifs which readers familiar with $n$-coherence arguments will already know. Among the most central of these motifs is the following theorem \cite{Goblot}; this result is frequently instrumental, as in Theorems \ref{thm:limnAkl_further} and \ref{thm:nonzero_cohomology_on_opens}, in the construction of higher nontrivially coherent families. \begin{theorem}[Goblot 1970] \label{thm:Goblot} Let $I$ index an inverse system $\mathbf{X}$ of abelian groups. If the cofinality of $I$ is less than or equal to $\aleph_k$ then $\mathrm{lim}^n\,\mathbf{X}=0$ for all $n>k+1$. \end{theorem} \noindent If the transition maps of $\mathbf{X}$ are all surjective, this vanishing holds even for all $n>k$, as observed in \cite[Theorem 2.4]{Velickovic_non_21}; in what follows, by \emph{Goblot's Theorem} we will mean both Theorem \ref{thm:Goblot} and this corollary.
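For orientation, we record the classical $k=0$ instance of these facts: for any inverse sequence $\mathbf{X}=(X_i,\pi_{i,j},\omega)$ of abelian groups, the derived limits $\mathrm{lim}^n\,\mathbf{X}$ are computed by the two-term complex
\[
\prod_{i<\omega} X_i \xrightarrow{\ (x_i)\,\mapsto\,(x_i-\pi_{i,i+1}(x_{i+1}))\ } \prod_{i<\omega} X_i,
\]
so that $\mathrm{lim}\,\mathbf{X}$ and $\mathrm{lim}^1\,\mathbf{X}$ are its kernel and cokernel, respectively, and $\mathrm{lim}^n\,\mathbf{X}=0$ for all $n>1$. If the transition maps of $\mathbf{X}$ are moreover surjective, then the displayed map is itself surjective (preimages may be constructed by recursion on $i$, using the surjectivity of each $\pi_{i,i+1}$), whence $\mathrm{lim}^1\,\mathbf{X}=0$ as well, just as in the corollary.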
This theorem brings us to a second point, which is the fundamental relationship of the functors $\mathrm{lim}^n$ to the cardinals $\aleph_n$. Since it will grow clearer in the course of this paper, there's no need to linger on this point, except to note that Theorem \ref{thm:limnAkl_further} is exemplary in two respects: (1) In it, each rise in the degree of the derived limit corresponds to a rise in the cofinality of a nonvanishing system; by Goblot's result, this is essentially unavoidable. (2) The question of whether the theorem's hypotheses may be weakened to the \textsf{ZFC} axioms is an open and good one (and is recorded in our conclusion). This brings us to a further item, which is that of the theorem's hypotheses; in it, for example, a notion of nontrivial $n$-coherence slightly different from that of Definition \ref{def:coherence} is invoked. Both, along with the main mechanism of Theorem \ref{thm:nonzero_cohomology_on_opens}, fall within a more general framework which we pause to record; here $\mathcal{I}^n$ denotes the cartesian product of an ideal $\mathcal{I}$ with itself $n$ times. \begin{definition} \label{def:coh2} For any $n>0$ and abelian group $H$ and ideals $\mathcal{I}$, $\mathcal{J}$ on a set $Y$, a family $$\Phi=\langle\varphi_{\vec{I}}:\bigcap\vec{I}\to H\mid\vec{I}\in\mathcal{I}^n\rangle$$ is \emph{$n$-coherent (modulo $\mathcal{J}$)} if $$\mathrm{supp}\left(\sum_{i\leq n}(-1)^i\varphi_{\vec{I}^i}\big|_{\bigcap\vec{I}}\right)\in\mathcal{J}$$ for any $\vec{I}\in \mathcal{I}^{n+1}$. If $n=1$ then $\Phi$ is \emph{trivial} if there exists a $\psi:Y\to H$ such that $$\mathrm{supp}\left(\psi\big|_{I}-\varphi_I\right)\in\mathcal{J}$$ for all $I\in\mathcal{I}$. 
Similarly, if $n>1$ then $\Phi$ is \emph{trivial} if there exists a $$\Psi=\langle\psi_{\vec{I}}:\bigcap\vec{I}\to H\mid\vec{I}\in \mathcal{I}^{n-1}\rangle$$ such that $$\mathrm{supp}\left(\sum_{i\leq n}(-1)^i\psi_{\vec{I}^i}\big|_{\bigcap\vec{I}}-\varphi_{\vec{I}}\right)\in\mathcal{J}$$ for all $\vec{I}\in \mathcal{I}^n$. For any $\mathcal{I}$, $\mathcal{J}$, and $H$ as above, let $\mathbf{X}[\mathcal{I},\mathcal{J},H]=(X_I,\pi_{I,J}, \mathcal{I})$ denote the $\mathcal{I}$-indexed inverse system with terms $$X_I=\{\varphi:I\to H\mid\mathrm{supp}(\varphi)\in\mathcal{J}\}$$ and transition maps $$\pi_{I,J}:X_J\to X_I:\varphi\mapsto \varphi\big|_I$$ for each $I\subseteq J$ in $\mathcal{I}$. \end{definition} \begin{example} If $Y=\kappa\times\lambda$, $\mathcal{I}=\{X(f)\mid f\in{^\kappa}([\lambda]^{<\omega})\}$, and $\mathcal{J}=[Y]^{<\omega}$ then $\mathbf{X}[\mathcal{I},\mathcal{J},H]$ equals the system $\mathbf{A}_{\kappa,\lambda}[H]$ of Definition \ref{def:Akappalambda}. \end{example} \begin{lemma} \label{lem:nontrivfamsarenontrivlimsagain} For all $n>0$ and $\mathcal{I}$, $\mathcal{J}$, and $H$ as in Definition \ref{def:coh2} above, $$\mathrm{R}^n\mathrm{lim}_{I\in\mathcal{I}}\,\mathbf{X}[\mathcal{I},\mathcal{J},H]\neq 0$$ if and only if the associated $n$-coherent families of functions are not all trivial. \end{lemma} \begin{proof} The argument is essentially identical to that of Lemma \ref{lem:zeroiftriv}.\end{proof} \begin{example} For any fixed regular cardinal $\lambda$, letting $Y=\lambda$, $H=\mathbb{Z}$, $\mathcal{J}=[Y]^{<\omega}$, and $\mathcal{I}$ equal the bounded ideal on $\lambda$ (within which the initial segments of $\lambda$ are cofinal) recovers some of the most fundamental set theoretic notions of coherence.
The $n=1$ case of Definition \ref{def:coh2} is the one most closely associated with Todorcevic's walks apparatus \cite{Todorcevic_Walks_07}; moreover, $$\mathrm{R}^n\mathrm{lim}_{I\in\mathcal{I}}\,\mathbf{X}[\mathcal{I},\mathcal{J},H]\cong\mathrm{H}^n(\lambda;\mathcal{H})$$ for all $n>0$, where the right-hand terms are the sheaf cohomology groups of $\lambda$ (endowed with its order topology) with coefficients in the constant sheaf $\mathcal{H}$ associated to $H$ (the argument is much as for \cite[Thm.\ 3.2]{IHW}). \end{example} The argument of Theorem \ref{thm:limnAkl_further} below will simultaneously involve notions of coherence from our previous two examples; we won't notationally belabor their distinction, though, as the dangers of confusing them are marginal. Turning lastly to the theorem's set theoretic hypotheses: good references for the approachability ideal and the principle $\diamondsuit(S)$ are \cite{Eisworth} and \cite{Rinot_diamond}, respectively; that said, what we need of them will appear, simply, in the course of the theorem's proof. We note that the hypotheses of the theorem are relatively weak. They all hold, for instance, in G\"{o}del's constructible universe $\mathrm{L}$, as well as in various other canonical inner models. In addition, the failure of the approachability assumption in the main statement of the theorem has large cardinal strength precisely that of a greatly Mahlo cardinal (see \cite{Mitchell_approachability}). In particular, if there fails to be a stationary subset of $S^{\aleph_{n+1}}_{\aleph_n}$ in $I[\aleph_{n+1}]$, then $\aleph_{n+1}$ must be greatly Mahlo in $\mathrm{L}$ (here, given cardinals $\kappa < \lambda$, $S^\lambda_\kappa$ denotes the set $\{\alpha < \lambda \mid \cf(\alpha) = \kappa\}$).
The hypotheses of the theorem therefore hold, for instance, in the forcing extension of $\mathrm{L}$ formed by adding $\aleph_\omega$-many Cohen reals, a model in which, by results of \cite{Bergfalk_simultaneously_23} and \cite{Bannister_additivity_23}, we have \[ \mathrm{lim}^{n+1}\,\mathbf{A}_{\aleph_0, \aleph_0}\left[H\right]=0 \] for every $n < \omega$ and every abelian group $H$. \begin{theorem} \label{thm:limnAkl_further} Suppose that $1 \leq n < \omega$ and there is a stationary subset $S \subseteq S^{\aleph_{n+1}}_{\aleph_n}$ in the approachability ideal $I[\aleph_{n+1}]$. Then \[ \mathrm{lim}^{n+1}\,\mb{A}_{\aleph_n, \aleph_{n+1}}\left[\bigoplus_{\omega_{n+1}} \bb{Z}\right] \neq 0. \] If, additionally, $\diamondsuit(S)$ holds and there is a nontrivial $n$-coherent family $\langle \tau_{\vec{\eta}} : \bigwedge \vec{\eta} \rightarrow \bb{Z} \mid \vec{\eta} \in (\omega_n)^n \rangle$, then $\mathrm{lim}^{n+1}\, \mb{A}_{\aleph_n, \aleph_{n+1}} \neq 0$. \end{theorem} \begin{remark} \label{rmk:cofinality} Below, we will at times invoke a principle which may be seen either via the description of the lim functor $\mathsf{Pro}(\mathcal{C})\to\mathcal{C}$ in Section \ref{subsect:condensedbackground} or, in any particular case, verified by hand, namely: \emph{only cofinal phenomena matter in analyses of $\mathrm{lim}^n$ or $n$-coherence.} \end{remark} \begin{proof}[Proof of Theorem \ref{thm:limnAkl_further}] Let $\mu := \aleph_n$, $H := \bigoplus_{\omega_{n+1}} \bb{Z}$, and $\Lambda := {^\mu}([\mu^+]^{<\omega})$. We will begin by constructing an $(n+1)$-coherent family $\Phi' = \langle \varphi'_{\vec{\beta}} : \mu \times \bigwedge \vec{\beta} \ra H' \mid \vec{\beta} \in (\mu^+)^{n+1} \rangle$, where $H'$ will be either $\bb{Z}$ or $H$, depending on whether or not we are assuming the additional hypotheses in the statement of the theorem. Fix a sequence $\langle c_\alpha \mid \alpha < \mu^+ \rangle$ witnessing that $S \in I[\mu^+]$.
In particular, we may assume that each $c_\alpha$ is a subset of $\mu^+$ and, for all $\gamma \in S$, there is a club $d_\gamma$ in $\gamma$ such that \begin{itemize} \item $\otp(d_\gamma) = \mu$; and \item for all $\beta < \gamma$, there is $\alpha < \gamma$ such that $d_\gamma \cap \beta = c_\alpha$. \end{itemize} We can assume that $\otp(c_\alpha) < \mu$ for all $\alpha < \mu^+$. If $\diamondsuit(S)$ holds, then we may fix a sequence $\langle A_\gamma \mid \gamma \in S \rangle$ such that \begin{itemize} \item for each $\gamma \in S$, $A_\gamma$ is of the form \[ \left\langle \psi^\gamma_b : \bigcap_{\alpha \in b} c_\alpha \ra \bb{Z} ~ \middle| ~ b \in \gamma^n \right\rangle; \] \item for every sequence of the form \[ \left\langle \psi_b : \bigcap_{\alpha \in b} c_\alpha \ra \bb{Z} ~ \middle| ~ b \in (\mu^+)^n \right\rangle, \] there are stationarily many $\gamma \in S$ such that $A_\gamma = \langle \psi_b \mid b \in \gamma^n \rangle$. \end{itemize} We now define $\varphi'_{\vec{\beta}}$ for $\vec{\beta} \in (\mu^+)^{n+1}$. Since we require $\Phi'$ to be alternating, it suffices to define $\varphi'_{\vec{\beta}}$ for strictly increasing sequences $\vec{\beta} = \langle \beta_i \mid i \leq n \rangle$. We do so by recursion on $\beta_n$. For terminological convenience, let ``Case 1" denote the case in which we are not assuming any additional hypotheses and $H' = \bigoplus_{\omega_{n+1}} \bb{Z}$, and let ``Case 2" denote the case in which the additional hypotheses are being assumed and $H' = \bb{Z}$. As part of our recursion hypotheses, we will require that, if we are in Case 1, then, for all increasing $\vec{\beta} \in (\mu^+)^{n+1}$ and all $(\eta, \alpha) \in \mu \times \beta_0$, we have $\varphi'_{\vec{\beta}}(\eta, \alpha) \in \bigoplus_{\beta_n + \mu} \bb{Z}$. 
If we are in Case 1, fix a nontrivial $n$-coherent family $\langle \tau_{\vec{\eta}} : \bigwedge \vec{\eta} \rightarrow \bigoplus_{\mu} \bb{Z} \mid \vec{\eta} \in (\omega_n)^n \rangle$ (such exist by a slight modification of the argument of Theorem \ref{thm:nonzero_cohomology_on_opens} below; alternatively, see \cite[Theorem 7.6]{TFOA}). Recall that, in Case 2, we already have a nontrivial $n$-coherent family $\langle \tau_{\vec{\eta}} : \bigwedge \vec{\eta} \rightarrow \bb{Z} \mid \vec{\eta} \in (\omega_n)^n \rangle$. Given $\gamma < \mu^+$, we define a ``shift homomorphism" $\sh_\gamma : \bigoplus_\mu \bb{Z} \ra \bigoplus_{\gamma + \mu} \bb{Z}$ as follows: for all $x \in \bigoplus_\mu \bb{Z}$, let $\sh_\gamma(x)$ be the unique $y \in \bigoplus_{\gamma + \mu} \bb{Z}$ such that $\supp(y) = \{\gamma + \eta \mid \eta \in \supp(x)\}$ and, for all $\eta \in \supp(x)$, we have $y(\gamma + \eta) = x(\eta)$. Fix $\gamma < \mu^+$, and assume that we have defined $\langle \varphi'_{\vec{\beta}} \mid \vec{\beta} \in \gamma^{n+1} \rangle$. We now define $\varphi'_{\vec{\beta}}$ for all increasing $\vec{\beta}$ with $\beta_n = \gamma$. By Goblot's Theorem, we can fix a family $\langle \sigma^\gamma_{\vec{\beta}} : \mu \times \beta_0 \ra H' \mid \vec{\beta} \in \gamma^n \rangle$ witnessing that $\langle \varphi'_{\vec{\beta}} \mid \vec{\beta} \in \gamma^{n+1} \rangle$ is trivial. In Case 1, our recursion hypothesis implies that, for each $\vec{\beta} \in \gamma^{n+1}$, $\varphi'_{\vec{\beta}}$ maps into $\bigoplus_{\beta_n + \mu} \bb{Z}$, so we can assume that, for each $\vec{\beta} \in \gamma^n$, $\sigma^\gamma_{\vec{\beta}}$ maps into $\bigoplus_{\xi_\gamma} \bb{Z}$, where $\xi_\gamma = \sup\{\beta + \mu \mid \beta < \gamma\}$. In particular, $\xi_\gamma \leq \gamma + \mu$ and, if $\cf(\gamma) = \mu$, then $\xi_\gamma = \gamma$.
If $\gamma \notin S$, then simply let $\varphi'_{\vec{\beta} ^\frown \langle \gamma \rangle} := (-1)^n \sigma^\gamma_{\vec{\beta}}$ for all $\vec{\beta} \in \gamma^n$. It is routine to verify that this maintains coherence and, in Case 1, our recursion hypothesis. If $\gamma \in S$, then we do a bit more work. Since $d_\gamma$ is cofinal in $\gamma$, it will suffice by Remark \ref{rmk:cofinality} to define $\varphi'_{\vec{\beta} ^\frown \langle \gamma \rangle}$ for $\vec{\beta} \in (d_\gamma)^n$. Let $\langle \delta^\gamma_\eta \mid \eta < \mu \rangle$ be the increasing enumeration of $d_\gamma$. Suppose first that we are in Case 1. Fix $\vec{\beta} \in (d_\gamma)^n$, and let $\vec{\eta} \in \mu^n$ be such that $\vec{\beta} = \langle \delta^\gamma_{\eta_i} \mid i < n \rangle$. For each $(\xi, \alpha) \in \mu \times \beta_0$, let \[ \varphi'_{\vec{\beta}^\frown \langle \gamma \rangle}(\xi, \alpha) := \begin{cases} (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\alpha) + \sh_\gamma(\tau_{\vec{\eta}}(\xi)) & \text{if } \xi < \eta_0 \text{ and } \alpha = \delta^\gamma_\xi \\ (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\alpha) & \text{otherwise.} \end{cases} \] Again, the verification that this maintains coherence and our recursion hypothesis is routine, if somewhat tedious. Suppose now that we are in Case 2. We begin by defining auxiliary families $\bar{\Phi}^\gamma = \langle \bar{\varphi}_{\vec{\eta}} : \eta_0 \ra \bb{Z} \mid \vec{\eta} \in \mu^{n+1} \rangle$, $\bar{\Sigma}^\gamma = \langle \bar{\sigma}^\gamma_{\vec{\eta}}:\eta_0 \ra \bb{Z} \mid \vec{\eta} \in \mu^n \rangle$, and $\bar{\Psi}^\gamma = \langle \bar{\psi}^\gamma_{\vec{\eta}}: \eta_0 \ra \bb{Z} \mid \vec{\eta} \in \mu^n \rangle$. For the first, temporarily fix $\vec{\eta} \in \mu^{n+1}$, and let $\vec{\beta} \in \gamma^{n+1}$ be given by letting $\beta_i = \delta^\gamma_{\eta_i}$ for all $i \leq n$. Now, for all $\xi < \eta_0$, let $\bar{\varphi}_{\vec{\eta}}(\xi) := \varphi'_{\vec{\beta}}(\xi, \delta^\gamma_\xi)$.
For the latter two, fix $\vec{\eta} \in \mu^n$, and let $\vec{\beta} \in \gamma^n$ be given by letting $\beta_i = \delta^\gamma_{\eta_i}$ for all $i < n$. First, for all $\xi < \eta_0$, let $\bar{\sigma}^\gamma_{\vec{\eta}}(\xi) = \sigma^\gamma_{\vec{\beta}}(\xi, \delta^\gamma_\xi)$. Finally, to define $\bar{\psi}^{\gamma}_{\vec{\eta}}$, define $b \in \gamma^n$ by letting $b(i)$ be such that $d_\gamma \cap \beta_i = c_{b(i)}$ for all $i < n$, and let $\bar{\psi}^{\gamma}_{\vec{\eta}}(\xi) = \psi^\gamma_b(\delta^\gamma_\xi)$ for all $\xi < \eta_0$. By construction, $\bar{\Phi}^\gamma$ is $(n+1)$-coherent, and $\bar{\Sigma}^\gamma$ trivializes $\bar{\Phi}^\gamma$. If $\bar{\Psi}^\gamma$ does not trivialize $\bar{\Phi}^\gamma$, then proceed exactly as in the case in which $\gamma \notin S$, i.e., set $\varphi'_{\vec{\beta}^\frown \langle \gamma \rangle} := (-1)^n \sigma^\gamma_{\vec{\beta}}$ for all $\vec{\beta} \in \gamma^n$. Suppose now that $\bar{\Psi}^\gamma$ \emph{does} trivialize $\bar{\Phi}^\gamma$. It follows that the family $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma = \langle \bar{\sigma}^\gamma_{\vec{\eta}} - \bar{\psi}^\gamma_{\vec{\eta}} \mid \vec{\eta} \in \mu^n \rangle$ is $n$-coherent. If $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$ is nontrivial, then again proceed as in the case in which $\gamma \notin S$. On the other hand, if $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$ is trivial, then proceed as follows. Suppose that $\vec{\beta} \in d^n_\gamma$, and let $\vec{\eta} \in \mu^n$ be such that $\beta_i = \delta^\gamma_{\eta_i}$ for all $i < n$. Then, for all $(\xi, \alpha) \in \mu \times \beta_0$, set \[ \varphi'_{\vec{\beta}^\frown \langle \gamma \rangle}(\xi,\alpha) := \begin{cases} (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\alpha) + \tau_{\vec{\eta}}(\xi) & \text{if } \xi < \eta_0 \text{ and } \alpha = \delta^\gamma_\xi \\ (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\alpha) & \text{otherwise.} \end{cases} \] Once again, verifying coherence is routine. 
This completes the construction of $\Phi'$. We now use $\Phi'$ to generate an $(n+1)$-coherent family $\Phi = \langle \varphi_{\vec{f}} : X(\vec{f}) \ra H' \mid \vec{f} \in \Lambda^{n+1} \rangle$. For each $f \in \Lambda$, let $\beta_f := \ssup(\bigcup\{f(\eta) \mid \eta < \mu\})$.\footnote{For a set of ordinals $x$, $\ssup(x)$ denotes the \emph{strong supremum} of $x$, i.e., $\sup\{\alpha + 1 \mid \alpha \in x\}$.} Given $\vec{f} \in \Lambda^{n+1}$, let $\vec{\beta}_{\vec{f}} \in (\mu^+)^{n+1}$ be given by $\vec{\beta}_{\vec{f}}(i) = \beta_{f_i}$ for $i \leq n$. Finally, let $\varphi_{\vec{f}} := \varphi'_{\vec{\beta}_{\vec{f}}} \restriction X(\vec{f})$. The $(n+1)$-coherence of $\Phi$ follows immediately from the $(n+1)$-coherence of $\Phi'$. It remains to show that $\Phi$ is nontrivial. Assume that $n > 1$; the case $n = 1$ is similar but easier. Suppose for sake of contradiction that $\Psi = \langle \psi_{\vec{f}} : X(\vec{f}) \ra H' \mid \vec{f} \in \Lambda^n \rangle$ witnesses that $\Phi$ is trivial. For each $\alpha < \mu^+$, define a function $f_\alpha \in \Lambda$ by letting \[ f_\alpha(\eta) := \begin{cases} \{c_\alpha(\eta)\} & \text{if } \eta < \otp(c_\alpha) \\ \emptyset & \text{otherwise.} \end{cases} \] If $b \in (\mu^+)^{\leq n}$ is such that, for all $i < j < |b|$, we have $c_{b(i)} \sqsubseteq c_{b(j)}$, then let $\vec{f}_b \in \Lambda^{|b|}$ be defined by letting $\vec{f}_b(i) = f_{b(i)}$ for all $i < |b|$, and, if $|b| = n$, then define $\psi^*_b : c_{b(0)} \ra H'$ by letting $\psi^*_b(c_{b(0)}(\eta)) = \psi_{\vec{f}_b}(\eta, c_{b(0)}(\eta))$ for all $\eta < \otp(c_{b(0)})$. For all other $b \in (\mu^+)^n$, simply let $\psi^*_b : \bigcap_{\alpha \in b} c_\alpha \ra H'$ be the constant function, taking value $0$. Assume first that we are in Case 1. 
Define a function $h:(\mu^+)^n \ra \mu^+$ by letting \[ h(b) = \sup\{\varepsilon \mid \exists \beta \in \bigcap_{\alpha \in b} c_\alpha ~ [\varepsilon \in \supp(\psi^*_b(\beta))]\} \] for all $b \in (\mu^+)^n$. Note that this is well-defined, since each $c_\alpha$ has cardinality less than $\mu$ and each $\psi^*_b$ maps into $\bigoplus_{\mu^+} \bb{Z}$. Now fix $\gamma \in S$ such that $h(b) < \gamma$ for all $b \in \gamma^n$. Recall that $\langle \delta^\gamma_\eta \mid \eta < \mu \rangle$ is the increasing enumeration of $d_\gamma$. Define a function $g \in \Lambda$ by letting $g(\xi) := \{\delta^\gamma_\xi\}$ for all $\xi < \mu$, and note that $\beta_g = \gamma$. For each $\eta < \mu$, define $g_\eta \in \Lambda$ by letting \[ g_\eta(\xi) = \begin{cases} \{\delta^\gamma_\xi\} & \text{if } \xi < \eta \\ \emptyset & \text{otherwise.} \end{cases} \] If $\eta < \mu$ is a limit ordinal, then $\beta_{g_\eta} = \delta^\gamma_\eta$; moreover, there is $\alpha < \gamma$ such that $\{\delta^\gamma_\xi \mid \xi < \eta\} = c_\alpha$, and hence $g_\eta = f_\alpha$. Temporarily fix an increasing $\vec{\eta} \in (\lim(\mu))^n$, let $\vec{g}_{\vec{\eta}} \in \Lambda^n$ be equal to $\langle g_{\eta_i} \mid i < n \rangle$, and let $\vec{\beta} \in (d_\gamma)^n$ equal $\langle \delta^\gamma_{\eta_i} \mid i < n \rangle$. By construction, for all $\xi < \eta_0$, we have \[ \varphi_{\vec{g}_{\vec{\eta}}{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) = \varphi'_{\vec{\beta}^\frown \langle \gamma \rangle}(\xi, \delta^\gamma_\xi) = (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\delta^\gamma_\xi) + \sh_\gamma(\tau_{\vec{\eta}}(\xi)). \] By our assumption about $\Psi$, it follows that, for all but finitely many $\xi < \eta_0$, we have \[ (-1)^n \psi_{\vec{g}_{\vec{\eta}}}(\xi, \delta^\gamma_\xi) + \sum_{i < n}(-1)^i \psi_{\vec{g}_{\vec{\eta}}^i{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) = (-1)^n \sigma^\gamma_{\vec{\beta}}(\xi,\delta^\gamma_\xi) + \sh_\gamma(\tau_{\vec{\eta}}(\xi)).
\] By our choice of $\gamma$ and $\sigma^\gamma_{\vec{\beta}}$, respectively, the supports of $\psi_{\vec{g}_{\vec{\eta}}}(\xi, \delta^\gamma_\xi)$ and $\sigma^\gamma_{\vec{\beta}}(\xi,\delta^\gamma_\xi)$ are contained in $\gamma$. It follows that, for all but finitely many $\xi < \eta_0$, we have \[ \sum_{i < n}(-1)^i \psi_{\vec{g}_{\vec{\eta}}^i{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) \restriction [\gamma, \gamma + \mu) = \sh_\gamma(\tau_{\vec{\eta}}(\xi)). \] For each $\vec{\zeta} \in (\lim(\mu))^{n-1}$, let $\vec{g}_{\vec{\zeta}} = \langle g_{\zeta_i} \mid i < n-1 \rangle$, and let $\nu_{\vec{\zeta}} : \zeta_0 \ra \bigoplus_{\mu} \bb{Z}$ be the unique function such that, for all $\xi < \zeta_0$, \[ \sh_\gamma(\nu_{\vec{\zeta}}(\xi)) = \psi_{\vec{g}_{\vec{\zeta}}{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) \restriction [\gamma, \gamma + \mu). \] But then the calculations above show that the family $\langle \nu_{\vec{\zeta}} \mid \vec{\zeta} \in (\lim(\mu))^{n-1} \rangle$ witnesses that the family $\langle \tau_{\vec{\eta}} \mid \vec{\eta} \in (\lim(\mu))^n \rangle$ is trivial. Since $\lim(\mu)$ is cofinal in $\mu$, this implies that the entire family $\langle \tau_{\vec{\eta}} \mid \vec{\eta} \in \mu^n \rangle$ is trivial, which is a contradiction. Now assume that we are in Case 2, and fix a $\gamma \in S$ such that $A_\gamma = \langle \psi^*_b \mid b \in \gamma^n \rangle$. Define $g$ and $g_\eta$ for $\eta < \mu$ as in the previous case. Let $\bar{\Phi}^\gamma$, $\bar{\Sigma}^\gamma$, and $\bar{\Psi}^\gamma$ be as in stage $\gamma$ of the construction of $\Phi'$. By our choice of $\gamma$, it follows that $\bar{\Psi}^\gamma$ trivializes $\bar{\Phi}^\gamma$. Suppose first that $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$ is nontrivial. Fix $\vec{\eta} \in \mu^n$, and let $b \in \gamma^n$ be such that $c_{b(i)} = \{\delta^\gamma_\xi \mid \xi < \eta_i\}$ for all $i < n$. Unraveling the definitions and constructions yields the following facts.
\begin{enumerate} \item For all $\xi < \eta_0$, we have $\psi^*_b (\delta^\gamma_\xi) = \bar{\psi}^\gamma_{\vec{\eta}}(\xi)$. \item For all $\xi < \eta_0$, we have $\varphi_{\vec{f}_b{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) = (-1)^n\bar{\sigma}^\gamma_{\vec{\eta}}(\xi)$. \item For all but finitely many $\xi < \eta_0$, we have \[ (-1)^n \bar{\psi}^\gamma_{\vec{\eta}}(\xi) + \sum_{i < n} (-1)^i \psi_{\vec{f}_b^i{}^\frown \langle g \rangle} (\xi, \delta^\gamma_\xi) = (-1)^n \bar{\sigma}^\gamma_{\vec{\eta}}(\xi). \] \end{enumerate} For $\vec{\zeta} \in \mu^{n-1}$, define $\nu_{\vec{\zeta}}:\zeta_0 \ra \bb{Z}$ as follows. Let $\vec{g}_{\vec{\zeta}} = \langle g_{\zeta_i} \mid i < n-1 \rangle$, and let $\nu_{\vec{\zeta}}(\xi) = \psi_{\vec{g}_{\vec{\zeta}}{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi)$ for all $\xi < \zeta_0$. Then item (3) above implies that $\langle (-1)^n \nu_{\vec{\zeta}} \mid \vec{\zeta} \in \mu^{n-1} \rangle$ trivializes $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$, which is a contradiction. Finally, assume that $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$ is trivial, as witnessed by $\langle \rho_{\vec{\zeta}} \mid \vec{\zeta} \in \mu^{n-1} \rangle$. Fix $\vec{\eta} \in \mu^n$ and $b \in \gamma^n$ as in the previous paragraph. Item (1) of that paragraph is unchanged, and items (2) and (3) become \begin{enumerate} \item[(2')] For all $\xi < \eta_0$, we have $\varphi_{\vec{f}_b{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi) = (-1)^n\bar{\sigma}^\gamma_{\vec{\eta}}(\xi) + \tau_{\vec{\eta}}(\xi)$. \item[(3')] For all but finitely many $\xi < \eta_0$, we have \[ (-1)^n \bar{\psi}^\gamma_{\vec{\eta}}(\xi) + \sum_{i < n} (-1)^i \psi_{\vec{f}_b^i{}^\frown \langle g \rangle} (\xi, \delta^\gamma_\xi) = (-1)^n \bar{\sigma}^\gamma_{\vec{\eta}}(\xi) + \tau_{\vec{\eta}}(\xi).
\] \end{enumerate} Rearranging item (3') and using the triviality of $\bar{\Sigma}^\gamma - \bar{\Psi}^\gamma$, we see that, for all but finitely many $\xi < \eta_0$, we have \[ (-1)^{n-1} \sum_{i < n} (-1)^i \rho_{\vec{\eta}^i}(\xi) + \sum_{i < n} (-1)^i \psi_{\vec{f}_b^i{}^\frown \langle g \rangle} (\xi, \delta^\gamma_\xi) = \tau_{\vec{\eta}}(\xi). \] For all $\vec{\zeta} \in \mu^{n-1}$, define $\nu_{\vec{\zeta}}:\zeta_0 \ra \bb{Z}$ as follows. Let $\vec{g}_{\vec{\zeta}} = \langle g_{\zeta_i} \mid i < n-1 \rangle$, and let $\nu_{\vec{\zeta}}(\xi) = (-1)^{n-1} \rho_{\vec{\zeta}}(\xi) + \psi_{\vec{g}_{\vec{\zeta}}{}^\frown \langle g \rangle}(\xi, \delta^\gamma_\xi)$ for all $\xi < \zeta_0$. Then the above calculation implies that $\langle \nu_{\vec{\zeta}} \mid \vec{\zeta} \in \mu^{n-1} \rangle$ trivializes $\langle \tau_{\vec{\eta}} \mid \vec{\eta} \in \mu^n \rangle$, yielding the final contradiction. \end{proof} \begin{corollary} If $\mathrm{V} = \mathrm{L}$, then $\mathrm{lim}^{n+1}\, \mathbf{A}_{\aleph_n, \aleph_{n+1}}\neq 0$ for all $n<\omega$. \end{corollary} \begin{proof} The case of $n=0$ is the \textsf{ZFC} result Theorem \ref{thm:limsofAkappalambda}(5). For higher $n$, the relevant conditions of Theorem \ref{thm:limnAkl_further} all hold in G\"{o}del's constructible universe $\mathrm{L}$. To see this, first note that $\mathsf{GCH}$ holds in $\mathrm{L}$ and therefore, by \cite[Theorem 3.40]{Eisworth}, $S^{\aleph_{n+1}}_{\aleph_n} \in I[\aleph_{n+1}]$ for all $1 \leq n < \omega$. By a theorem of Jensen \cite{jensen_constructible}, $\diamondsuit(S^{\aleph_{n+1}}_{\aleph_n})$ holds in $\mathrm{L}$. Finally, there exists a nontrivial $n$-coherent family $\langle \tau_{\vec{\eta}} : \bigwedge \vec{\eta} \rightarrow \bb{Z} \mid \vec{\eta} \in (\omega_n)^n \rangle$ in $\mathrm{L}$ by \cite[Cor. 3.27]{BLHCoOI}.
\end{proof} \section{The additivity of derived limits and of strong homology} \label{sec:additivity} \subsection{Additivity} \label{subsect:additivity} The question of the additivity of strong homology has, over the past 35 years, attracted a substantial literature; main results therein include the following: \begin{enumerate} \item The continuum hypothesis implies that strong homology is not additive \cite{Mardesic_additive_88}. \item There exist non-metrizable \textsf{ZFC} counterexamples to the additivity of strong homology \cite{Prasolov_non_05}. \item It is consistent with the \textsf{ZFC} axioms that strong homology is additive on the category of locally compact separable metric spaces \cite{Bannister_on_22}. \end{enumerate} A primary aim of this section is to outline a more direct and cohesive approach to these theorems: each may be regarded as a result about the additivity of \emph{higher derived limits}, one moreover which is instantiated by the groups $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}$ for suitable cardinals $\kappa$ and $\lambda$. We will show more precisely that items (1), (2) and (3) above correspond in strong senses to items (3), (5), and (2) of Theorem \ref{thm:limsofAkappalambda}, respectively; each in this view is at heart a statement of infinitary combinatorics. For concreteness, we begin by recalling what it means for a functor to be additive. We record also a definition of strong homology which we elucidate in this section's conclusion; readers seeking a fuller treatment of the subject are referred to \cite{Mardesic_additive_88} or \cite{Mardesic_strong_00}. 
\begin{definition} A homology theory $\mathrm{H}_q:\mathsf{Top}\to\mathsf{Ab}$ $(q\geq 0)$ is \emph{additive} if for every natural number $q$ and every family $\{X_\alpha\,|\,\alpha\in A\}$ of topological spaces, the map $$i^{*}_q:\bigoplus_A \mathrm{H}_q(X_\alpha)\rightarrow \mathrm{H}_q\Big(\coprod_A X_\alpha\Big)$$ induced by the inclusion maps $i_\alpha:X_\alpha\hookrightarrow\coprod_A X_\alpha$ is an isomorphism. Similarly, a functor $F:\mathsf{Pro(Ab)}\to\mathsf{Ab}$ is \emph{additive} if for every family $\{\mathbf{X}_\alpha\,|\,\alpha\in A\}$ of inverse systems of abelian groups, the map $$i^{*}:\bigoplus_A F(\mathbf{X}_\alpha)\rightarrow F\Big(\coprod_A \mathbf{X}_\alpha\Big)$$ induced by the inclusion maps $i_\alpha:\mathbf{X}_\alpha\hookrightarrow\coprod_A \mathbf{X}_\alpha$ is an isomorphism. Adaptations of these definitions to subcategories $\mathcal{C}$ of the source categories $\mathsf{Top}$ and $\mathsf{Pro(Ab)}$ are straightforward; note, however, that additivity in these contexts is only a question of those sums of $\mathcal{C}$-objects which also fall within $\mathcal{C}$. The \emph{strong homology group} $\overline{\mathrm{H}}_q$ of a topological space $X$ is the $q^{\mathrm{th}}$ homology group of the homotopy limit of the system of chain complexes given by the application of the singular chain functor to any representative of the \emph{strong shape} $\mathsf{sSh}(X)\in\mathsf{Ho}(\mathsf{Pro(}\mathsf{Pol}))$ of $X$ \cite{CordierStrong}; in plainer English, it is the most programmatically \emph{strong shape invariant} extension of \emph{Steenrod homology} from the category of metric compacta to that of all topological spaces. \end{definition} We turn now to our central examples. For $n>0$ let $B^n$ denote an $n$-dimensional open ball, and let $S^n$, as usual, denote an $n$-dimensional sphere. 
For any cardinal $\lambda>0$ let $\mathbf{Y}^{n,\lambda}$ denote the inverse system $(Y^{n,\lambda}_a,p_{ab},[\lambda]^{<\omega}\backslash\varnothing)$ in which for all nonempty $a\subseteq b$ in $[\lambda]^{<\omega}$ the space $Y^{n,\lambda}_a$ is $\bigvee_{a}S^n$ and the bonding map $p_{ab}:Y^{n,\lambda}_b\to Y^{n,\lambda}_a$ is the identity on $\bigvee_a S^n$, sending all other points in $\bigvee_b S^n$ to the wedge's basepoint. \begin{lemma} \label{lemma:resolution} For any $n,\lambda>0$ the space $Y^{n,\lambda}=\mathrm{lim}\,\mathbf{Y}^{n,\lambda}$ is homeomorphic to the one-point compactification of $\coprod_\lambda B^n$; in consequence, the system $\mathbf{Y}^{n,\lambda}$ defines a resolution of this space in the sense of \cite{Mardesic_strong_00} or \cite{Mardesic_additive_88}. \end{lemma} \begin{proof} The second claim follows from the first together with \cite[Thm. 6.20]{Mardesic_strong_00}. For the first, observe simply that $Y^{n,\lambda}$ is compact, that the $\alpha^{\mathrm{th}}$ summand of $\coprod_\lambda B^n$ naturally identifies with the complement of the basepoint in $Y^{n,\lambda}_{\{\alpha\}}$, and that the complement of the union of these $B^n$-images in $Y^{n,\lambda}$ is exactly one point. \end{proof} When $\lambda=\aleph_0$, of course, $Y^{n,\lambda}$ is a \emph{compact bouquet of $n$-spheres}, familiarly known as an \emph{$n$-dimensional Hawaiian earring}. The following theorem generalizes the computations in \cite{Mardesic_additive_88} of the $\lambda=\aleph_0$ case to arbitrary cardinalities. \begin{theorem} \label{thm:homology_groups} For any nonzero cardinals $\kappa$ and $\lambda$, let $\mathbf{A}_{\kappa,\lambda}$ denote the inverse system of Definition \ref{def:Akappalambda} above.
Then for any $n>0$, the strong homology groups of the space $Y^{n,\lambda}$ and its sums are as follows: \begin{equation*} \overline{\mathrm{H}}_q(Y^{n,\lambda})= \begin{cases} 0 & \text{if } q\neq 0,n\\ \prod_\lambda\mathbb{Z} & \text{if } q=n\\ \mathbb{Z} & \text{if } q=0 \end{cases} \end{equation*} and \begin{equation*} \overline{\mathrm{H}}_q\left(\coprod_\kappa Y^{n,\lambda}\right)= \begin{cases} 0 & \text{if } q>n\\ \mathrm{lim}^{n-q}\,\mathbf{A}_{\kappa,\lambda} & \text{if } 0<q\leq n\\ \mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}\oplus\bigoplus_\kappa\mathbb{Z} & \text{if } q=0. \end{cases} \end{equation*} In particular, if strong homology is additive then $\mathrm{lim}^s\,\mathbf{A}_{\kappa,\lambda}=0\textit{ for all }s,\kappa,\lambda>0$. \end{theorem} \begin{proof} The last assertion follows from the first simply by letting $n$ range above $0$ and letting $q$ range strictly between $0$ and $n$. For the first, observe that by Lemma \ref{lemma:resolution} the arguments of \cite[\S 5-6]{Mardesic_additive_88} fully generalize to arbitrary cardinalities $\kappa$ and $\lambda$, with only the partial exception of the (implicit) invocation of the Mittag-Leffler principle at the top of page 736. In our context, the key point at that step is that for every $q\geq 0$ the derived limits $\mathrm{lim}^s$ of the system of singular homology groups $\mathrm{H}_q(Y^{n,\lambda}_b)$ $(b\in [\lambda]^{<\omega})$ associated to $\mathbf{Y}^{n,\lambda}$ vanish in all degrees $s>0$. When $q\neq n$, though, this is clear, since these groups are either constantly $0$ or $\mathbb{Z}$, while if $q=n$, this system is $\mathbf{A}_{1,\lambda}$, whereupon the assertion follows immediately from item (1) of Theorem \ref{thm:limsofAkappalambda}. \end{proof} The main result of \cite{Prasolov_non_05}, and our Theorem C, now follow as an immediate corollary: \begin{corollary} \label{cor:notadd} There exists a non-metrizable \textsf{ZFC} counterexample to the additivity of strong homology.
\end{corollary} \begin{proof} Our counterexample is a countable sum of copies of $Y^{n,\aleph_1}$. By Theorem \ref{thm:homology_groups}, taking $q=n-1$, the additivity of strong homology over this sum would require that $\mathrm{lim}^1\,\mathbf{A}_{\aleph_0,\aleph_1}=0$, but we know from Theorem \ref{thm:limsofAkappalambda} that this limit is nonzero. \end{proof} As noted, $Y^{n,\aleph_1}$ is a significantly simpler counterexample to additivity than that appearing in \cite{Prasolov_non_05} (which had derived, in turn, from \cite{Mardesic_nonvanishing_96}); it is also, in contrast to that counterexample, compact. See our conclusion for the question of whether \emph{metrizable} \textsf{ZFC} counterexamples to additivity exist. There exists no \emph{locally compact separable metric} \textsf{ZFC} counterexample to the additivity of strong homology since, by \cite{Bannister_on_22}, the only obstructions to additivity on this class of spaces are any nonvanishing derived limits $\mathrm{lim}^n$ $(n>0)$ of a family of inverse systems which sufficiently resemble $\mathbf{A}_{\aleph_0,\aleph_0}$ so that their higher limits all vanish for the same reasons that the latter's do in the model of \cite{Bergfalk_simultaneously_21} (and likewise in the model of \cite{Bergfalk_simultaneously_23}, by \cite{Bannister_additivity_23}). When $\mathrm{lim}^1\,\mathbf{A}_{\aleph_0,\aleph_0}\neq 0$, on the other hand, strong homology fails to be additive even on closed subspaces of $\mathbb{R}^2$, as witnessed by a countable sum of earrings $Y^{1,\aleph_0}$, and it was the derivation of this scenario from the continuum hypothesis which formed the main result of \cite{Mardesic_additive_88}. These examples, like those of \cite{Prasolov_non_05}, may be regarded as materializing a more fundamental failure of additivity, namely one at the level of the derived limit functors.
More particularly, \cite[Theorem 1]{Prasolov_non_05} is also immediate within our framework, this time from Theorem \ref{thm:limsofAkappalambda}. \begin{corollary} \label{cor:nonadditive} The functor $\mathrm{lim}^n:\mathsf{Pro(Ab)}\to\mathsf{Ab}$ is not additive for either $n=1$ or $n=2$. \end{corollary} Note in contrast that the functor $\mathrm{lim}^0=\mathrm{lim}:\mathsf{Pro(Ab)}\to\mathsf{Ab}$ \emph{is} additive, as follows from the observation beginning the proof below.\footnote{The additivity of \v{C}ech homology is argued in \cite[Theorem 9]{Mardesic_additive_88}; the proof applies in its essentials to the functor $\mathrm{lim}$ as well, but readers may find it at least as efficient to supply the argument themselves.} \begin{proof} The essential observation is that the sum, in a pro-category, of a set of inverse systems is represented by a system indexed by the product of their index sets (with terms the naturally associated sum of objects; see \cite[pp. 502-503]{Prasolov_non_05} for details). In particular, $\mathbf{A}_{\kappa,\lambda}=\coprod_\kappa\mathbf{A}_{1,\lambda}$ for all cardinals $\kappa$ and $\lambda$, so the inequality $\mathrm{lim}^1\,\mathbf{A}_{\aleph_0,\aleph_1}\neq 0 = \bigoplus_\omega\mathrm{lim}^1\,\mathbf{A}_{1,\aleph_1}$ witnesses the $n=1$ instance of the corollary. For the $n=2$ case, let $\mathbf{C}_{\kappa,\lambda}=(C_f,q_{fg},{^\kappa}([\lambda]^{<\omega}))$ and $\mathbf{D}_{\kappa,\lambda}=(D_f,r_{fg},{^\kappa}([\lambda]^{<\omega}))$, where $$C_f=\bigoplus_{(\kappa\times\lambda)\backslash X(f)}\mathbb{Z}$$ and $D_f=\bigoplus_{\kappa\times\lambda}\mathbb{Z}$, and the morphisms $q_{fg}$ and $r_{fg}$ are the natural inclusion and identity maps, respectively.
Note that $\mathbf{D}_{\kappa,\lambda}$ is flasque, hence $\mathrm{lim}^s\,\mathbf{D}_{\kappa,\lambda}=0$ for all $s>0$, so that the long exact sequences associated to the short exact sequences $\mathbf{C}_{\kappa,\lambda}\to\mathbf{D}_{\kappa,\lambda}\to\mathbf{A}_{\kappa,\lambda}$ induce isomorphisms $\mathrm{lim}^1\,\mathbf{A}_{\kappa,\lambda}\to\mathrm{lim}^2\,\mathbf{C}_{\kappa,\lambda}$, converting the previous inequality to $\mathrm{lim}^2\,\mathbf{C}_{\aleph_0,\aleph_1}\neq 0 = \bigoplus_\omega\mathrm{lim}^2\,\mathbf{C}_{1,\aleph_1}$. \end{proof} An oddity of these results, of course, is their restriction to the degrees $n\leq 2$. By Goblot's Theorem, results for higher $n$ will be, just as in Section \ref{sec:pro}, at least in part about the combinatorics of $\aleph_{n-1}$, and this indeed is much of the interest of Question \ref{ques:add_of_lims} below. In particular, the question of whether the functor $\mathrm{lim}^3:\mathsf{Pro(Ab)}\to\mathsf{Ab}$ is consistently additive appears to be closely related to the question of whether the group $\mathrm{H}^2(\omega_2;\mathbb{Z})$ can vanish in any model of the \textsf{ZFC} axioms.\footnote{See \cite[Question 9.3]{TFOA}, \cite[Question 7.7]{Bergfalk_simultaneously_23}, and \cite[Question 11.2]{IHW} for general formulations of this question, which is one of the most important in this research area.} Of comparable interest is the question \emph{on what classes $\mathcal{C}$ may the functors $\mathrm{lim}^n$ be additive?} After all, the \textsf{ZFC} obstructions recorded in Theorem \ref{thm:limsofAkappalambda} and Corollary \ref{cor:nonadditive} only arise for systems of ``height'' $\lambda \geq \aleph_1$; it is natural, in consequence, to wonder if $\mathrm{lim}^n$ may nevertheless be additive for restricted or unrestricted sums of towers (i.e., countable inverse sequences) of abelian groups, and it's indeed in these terms that the theorem \cite[Thm.\ 1.2]{Bannister_additivity_23} referenced in Section \ref{subsect:alternative} is framed.
In fact (as noted in \cite[\S 7]{Bannister_additivity_23}), this result coupled with the argument of \cite[Thm.\ 5.1]{Be17} implies the consistency of $\mathrm{lim}^1\,\mathbf{A}_{\kappa,\aleph_0}[H]=0$ for all $\kappa$ and abelian groups $H$, and the question of whether this can hold for higher degrees as well appears to be a deep one. As indicated, for whatever light it may shed on these and related computations, we conclude this section with a broader discussion of strong homology; readers uninterested in the latter should proceed without delay to Section \ref{sect:products}. \subsection{Strong homology} \label{subsect:stronghomology} Works on the additivity problem have, following \cite{Mardesic_additive_88} and \cite{Mardesic_strong_00}, tended to emphasize the axiomatic and, to a lesser degree, computational features of strong homology, somewhat eliding more conceptual formulations in the process (see \cite[\S 1]{Bannister_on_22} for a recent example). The cumulative effect has been a distorted picture, one in which strong homology figures mainly (1) as a sort of compromise-formation between the requirements of a homology functor and the unruly realities of geometric topology (represented by a standard roster of motivating examples of ``bad local behavior'': the Warsaw circle, Hawaiian earrings, solenoids, and so on), or (2) as an elaborate, if not \emph{ad hoc}, exactness-recovering corrective to \v{C}ech homology, or (3) as both. There is an element of truth to each of these images, but taken alone, they are seriously misleading as well, insofar as strong homology figures within them as essentially isolated from the more structural and homotopy theoretic emphases of the contemporary mainstream of algebraic topology. 
The reality, though, is that those emphases and strong homology \emph{grew up together} in large part out of the arrival, through the 1970s, of model category, homotopy (co)limit, and homotopy coherent technologies \cite{BoardmanVogt, BousfieldKan, CordierCoherent}, and it is in these terms, at least as much as (1) and (2) above, that strong homology should be understood; that \emph{strong shape}, the background homotopy theory of strong homology, is a conspicuously $\infty$-categorical entity is just one manifestation of this heritage (see, e.g., \cite[\S 7.1.6] {LurieHTT} and \cite{GuntherSemi}). We briefly sketch a more balanced, but nevertheless classical, account below. Surely the best-known general invariant of a topological space $X$ is its \emph{homotopy type}; put differently, the most prominent relation refining the identity relation on topological spaces is that of \emph{homotopy equivalence}. For all its virtues, this relation is, from some perspectives, weak; natural extensions of this relation determine further invariants of a topological space $X$, such as its \emph{strong shape} and its \emph{shape}.\footnote{We note in passing that these extensions are, in spirit, orthogonal to that of \emph{weak homotopy equivalence}: while the latter identifies the Warsaw circle, for example, with a point, the former identify it with a circle. These more intuitive identifications are the source both of the name \emph{shape} and of much initial interest in its theory, concerns emphasized in \cite{CordierPorter} and recapitulated (alongside many fundamental techniques) in the more contemporary field of persistent homology.} More precisely, we have a sequence of categories $\mathsf{Ho(Top)}\to\mathsf{sSh(Top)}\to\mathsf{Sh(Top)}$ corresponding to the aforementioned invariants, respectively, together with commuting functors from $\mathsf{Top}$ to each of them. 
The middle of these admits multiple incarnations, or models (within the \emph{coherent homotopy category} $\mathsf{CH}(\mathsf{Pro}(\mathsf{Top}))$, for example; see \cite[pp. 90, 2--3, and Ch.\ 1]{Mardesic_strong_00}); most classical among these is that of \cite{EdwardsHastings}, whereby the above diagram takes the form \begin{equation} \label{eq:shape_categories} \mathsf{Ho(Top)}\xrightarrow{F}\textsf{Ho(Pro(Top))}\xrightarrow{G}\textsf{Pro(Ho(Top))},\end{equation} with each of the latter two terms naturally construed as categories of \emph{systems of approximations to a topological space} $X$.\footnote{To be clear, the homotopy category of the first and third terms is, in the terminology of \cite[p.\ 395]{MayPonto}, the ``naive'' one, corresponding to the Hurewicz model structure on the category of topological spaces; that of the middle term derives from either of the classes of weak equivalences discussed in \cite{PorterTwo}. For the functor $F$, see \cite[\S 8]{Mardesic_strong_00}; on objects, $G$ factors through the natural map $\textsf{Pro}(\mathsf{Top})\to\mathsf{Pro}(\mathsf{Ho(Top)})$.} Such systems are only informative, of course, when they take values in some restricted class (or \emph{dense subcategory}) of ``nice'' topological spaces, the categories $\mathsf{Pol}$ of polyhedra or of absolute neighborhood retracts (ANRs) being standard historical choices. And indeed, the \emph{Vietoris nerve} and \emph{\v{C}ech nerve} constructions on the objects of $\mathsf{Top}$ may each naturally be viewed as functors to the $\textsf{Ho(Pro(Pol))}$ and $\textsf{Pro(Ho(Pol))}$ subcategories, respectively, of the incarnations of $\mathsf{sSh(Top)}$ and $\mathsf{Sh(Top)}$ in equation (\ref{eq:shape_categories}) above (see \cite{Gunther}).
As is well known, within the \v{C}ech construction, the plurality of maps witnessing the refinement relation between two open coverings of a space induces a single homotopy class of maps between the associated polyhedra, and it is for this reason that the shape functor takes values in the pro-category of $\mathsf{Ho(Top)}$. The further passage to a homology theory then depends on the assignment of (homotopy classes of) chain complexes to the objects of the terms of equation (\ref{eq:shape_categories}). The canonical choice in the case of $\mathsf{Ho(Top)}$ is, of course, the \emph{singular chain} functor $X\mapsto C^{\mathrm{sing}}_\bullet(X)$, the crucial point here being that $C^{\mathrm{sing}}_\bullet$ maps homotopy equivalent spaces to homotopy equivalent chain complexes; the homology groups of $C^{\mathrm{sing}}_\bullet(X)$ are the \emph{singular homology groups} of $X$. On the other hand, applying $C^{\mathrm{sing}}_\bullet$ to the $\mathsf{Top}$ and $\mathsf{Ho(Top)}$ constituents of the two subsequent terms of line (\ref{eq:shape_categories}) yields objects in the homotopy category of the pro-category of chain complexes and the pro-category of the homotopy category of chain complexes, respectively.
The natural means for consolidating these systems into a single graded abelian group are the \emph{homotopy inverse limit} and the \emph{inverse limit}, respectively --- although, since limits do not in general exist in homotopy categories, it is in the latter case standard to interchange steps, i.e., to take the limit of the homology groups of the chain complexes of the elements of the \v{C}ech nerve of $X$; the resultant groups are the \emph{\v{C}ech homology groups} of $X$.\footnote{This parallels, of course, the standard construction of \v{C}ech cohomology groups; that the latter are both shape and strong shape invariant is essentially immediate.} The \emph{strong homology groups} of a topological space $X$ are, in contrast, the homology groups of the chain complex arising from the application of the composition of the \emph{homotopy inverse limit} (holim) and \emph{singular} functors to the strong shape of $X$; to sum up, for any fixed space $X$ and $q\geq 0$, \begin{align*} \mathrm{H}_q(X) & = h_q(C^{\mathrm{sing}}_\bullet(X)),\\ \overline{\mathrm{H}}_q(X) & = h_q(\mathrm{holim}(C^{\mathrm{sing}}_\bullet(sS(X)))),\textnormal{ and }\\ \check{\mathrm{H}}_q(X) & = \mathrm{lim}\,h_q(C^{\mathrm{sing}}_\bullet(S(X))),\end{align*} where $\mathrm{H}_q$ denotes singular homology, $S:\mathsf{Top}\to\mathsf{Sh(Top)}$ and $sS:\mathsf{Top}\to\mathsf{sSh(Top)}$ denote the shape (see \cite[I.4.2, Thm.\ 3]{ShapeTheory}) and strong shape functors, respectively, and $h_q$ denotes the $q^{\mathrm{th}}$ homology group of a chain complex.
Cordier (who characterizes the $\mathrm{holim}$ and $\mathrm{lim}^s$ functors as complementary responses to the ``deficiencies'' of $\mathrm{lim}$ \cite[p.\ 35]{CordierStrong}) derives the existence of a homotopy limit in the category of non-negative chain complexes of abelian groups from the Dold-Kan correspondence together with its existence in the category of simplicial abelian groups; he shows moreover that it takes the concrete form of precisely that total complex figuring so prominently in Chapter 4 of \cite{Mardesic_strong_00}. Let us conclude with a few summary remarks. \begin{itemize} \item Generalized framings of the strong shape functor are of a wider significance than we have so far suggested; see in particular Hoyois's \emph{Higher Galois theory} \cite[Def.\ 2.3, Rmk.\ 2.13]{Hoyois}, where this functor, ranging over $\infty$-topoi and denoted $\Pi_\infty$, carries the name \emph{fundamental pro-$\infty$-groupoid}; shape functors arise naturally in both the condensed and pyknotic settings as well, and remain objects of active research.\footnote{See forthcoming work by Mair; see also \cite[II.4]{exodromy} for further references.} \item Recall that Eilenberg and Steenrod showed, via a geometric realization of a simple inverse sequence of groups possessing a nonvanishing $\mathrm{lim}^1$ ``that the \v{C}ech `homology theory' with integer coefficients is not exact,'' or more precisely that \emph{no integral homology theory on the category of compact pairs is simultaneously continuous (meaning that it commutes with inverse limits) and exact} \cite[p.\ 265]{EilenbergSteenrod}. Consider then what is sometimes framed (in a sense made precise by a Milnor short exact sequence \cite[Thm.\ 21.9]{Mardesic_strong_00}) as rectifying the non-exactness of $\check{\mathrm{H}}_\bullet$, namely the strong homology functor $\overline{\mathrm{H}}_\bullet$.
On the category of compact metric spaces, $\overline{\mathrm{H}}_\bullet$ is fully axiomatized by the Eilenberg-Steenrod axioms together with relative homeomorphism and cluster axioms \cite{Milnor1960}; it is in consequence thereon equivalent to the homology theories of Steenrod \cite{SteenrodCycles}, Borel-Moore \cite{BorelMoore}, Sitnikov, and others \cite{Massey78}. Where these theories may differ is in their extension to the broader category of locally compact spaces; of the central question of whether the Steenrod-Sitnikov theory embodying the most naive strategy of extension (direct limits) is strong shape invariant, Sklyarenko writes that it ``will likely be a test not only for Steenrod-Sitnikov homology, but also for strong shape theory itself'' (\cite{Sklyarenko}, quoted in \cite{Melikhov}). It is reasonable to wonder whether materializations like $\coprod_\omega Y^{n,\aleph_1}$ of nonvanishing derived limits may, much as in the classical \v{C}ech case, play some role in the answer. \end{itemize} \section{Products of compact projectives} \label{sect:products} Recall the introductory account of condensed mathematics concluding Section \ref{subsect:condensedbackground} above. For motivation, as is standard, we began by emphasizing the shortcomings of, for example, the category of Hausdorff topological abelian groups, together with its faithful embedding into the abelian category $\mathsf{Cond(Ab)}$ as one far-reaching kind of remedy. Note, though, that the provision of an abelian --- and even a symmetric monoidal abelian --- category structure extending that of some ill-behaved preabelian category is not, in and of itself, particularly novel: Yoneda embeddings into abelian sheaf categories will always have these features (cf.\ comments, e.g., at \cite[p.\ 9]{CS2}).
The distinction of condensed categories begins with something further, namely the conjunction of such features with the existence of a class of \emph{compact projective generators}; we record the relevant definitions just below. Let us more immediately recall, though, that this class derives from the existence \emph{in the underlying category \textsf{ProFin} of the condensed site} of a generating class of projective objects, and that the latter, as noted in Section \ref{subsect:condensedbackground} above, are precisely the extremally disconnected compact Hausdorff spaces, or $\mathsf{ED}$ spaces, for short. Note for future reference that the $\mathsf{ED}$ spaces also admit characterization as the retracts of the \v{C}ech--Stone compactifications $\beta X$ of discrete spaces $X$ (these $\beta X$ are accordingly termed the \emph{free} objects of $\mathsf{CHaus}$ in \cite{Rainwater}); recall as well the description of any such $\beta X$ as $(\{\mathcal{U}\subseteq P(X)\mid\mathcal{U}\textnormal{ is an ultrafilter on }X\},\tau)$, where the topology $\tau$ on $\beta X$ is generated by the sets $N_I:=\{\mathcal{U}\in\beta X\mid I\in\mathcal{U}\}$ for $I\subseteq X$. Returning to our more general discussion, let us quote from \cite{CS2} the basic definitions: \begin{quote} Let $\mathcal{C}$ be a category that admits all small colimits. Recall that an object $X\in\mathcal{C}$ is compact (also called finitely presented) if $\mathrm{Hom}(X,-)$ commutes with filtered colimits. An object $X\in\mathcal{C}$ is projective if $\mathrm{Hom}(X,-)$ commutes with reflexive coequalizers\footnote{Note that this corresponds with the more standard definitions in abelian contexts.} [\dots] Let $\mathcal{C}^{\mathrm{cp}}\subset\mathcal{C}$ be the full subcategory of compact projective objects. 
\end{quote} \noindent Fundamental examples include the following (as in item (5) below, in this section we will discard the underline convention for condensed images of a space): \begin{quote} \begin{enumerate} \item If $\mathcal{C}=\mathsf{Set}$ is the category of sets, then $\mathcal{C}^{\mathrm{cp}}$ is the category of finite sets, which generates $\mathcal{C}$ under small colimits. \item If $\mathcal{C}=\mathsf{Ab}$ is the category of abelian groups, then $\mathcal{C}^{\mathrm{cp}}$ is the category of finite[ly generated] free abelian groups, which generates $\mathcal{C}$ under small colimits. [\dots] \item[(4)] If $\mathcal{C}=\mathsf{Cond(Set)}$ is the category of condensed sets, then $\mathcal{C}^{\mathrm{cp}}$ is the category of extremally disconnected profinite sets, which generates $\mathcal{C}$ under small colimits. \item[(5)] If $\mathcal{C}=\mathsf{Cond(Ab)}$ is the category of condensed abelian groups, then $\mathcal{C}^{\mathrm{cp}}$ is the category of direct summands of $\mathbb{Z}[S]$ for extremally disconnected $S$, which generates $\mathcal{C}$ under small colimits. \cite[pp.\ 74--75]{CS2} \end{enumerate} \end{quote} Extrapolating, the compact projective objects of a given category appear to incarnate the \emph{finite} and the \emph{extremally disconnected} in, respectively, classical and condensed settings; observe, however, that only the first of these classes is closed under (finite) products. \begin{proposition} \label{prop:Rudin} If two extremally disconnected spaces $S$ and $T$ are both infinite, then their product is not extremally disconnected. \end{proposition} This proposition traces at the latest to Walter Rudin around 1960, by way of the attribution in \cite{Curtis}, and carries with it, in turn, a number of consequences and questions within the field of condensed mathematics. By item (4) above, for example, it implies that $\mathsf{Cond(Set)}^{\mathrm{cp}}$ is not closed under products. Subtler questions attach to item (5).
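Before turning to item (5), it may be worth recalling one standard route to Proposition \ref{prop:Rudin} in its simplest instance: in an extremally disconnected space, disjoint open sets have disjoint closures, since the closure of an open set is itself open. Within $\beta\mathbb{N}\times\beta\mathbb{N}$, however, the sets $$U=\{(m,n)\in\mathbb{N}\times\mathbb{N}\mid m<n\}\quad\textnormal{ and }\quad V=\{(m,n)\in\mathbb{N}\times\mathbb{N}\mid m>n\}$$ are disjoint and open, while for any nonprincipal ultrafilter $\mathcal{U}$ on $\mathbb{N}$, any basic neighborhood $N_A\times N_B$ of the point $(\mathcal{U},\mathcal{U})$ satisfies $A,B\in\mathcal{U}$, so that $A\cap B$ is infinite and $N_A\times N_B$ consequently meets both $U$ and $V$; hence $(\mathcal{U},\mathcal{U})\in\overline{U}\cap\overline{V}$, and $\beta\mathbb{N}\times\beta\mathbb{N}$ is not extremally disconnected. With this picture in view, we turn to item (5).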
There, $\mathbb{Z}[ - ]:\mathsf{Cond(Set)}\to\mathsf{Cond(Ab)}$ denotes the left adjoint to the forgetful functor $\mathsf{Cond(Ab)}\to\mathsf{Cond(Set)}$, and within $\mathsf{Cond(Ab)}$, the relation $\mathbb{Z}[S]\otimes\mathbb{Z}[T]=\mathbb{Z}[S\times T]$ renders $\mathbb{Z}[ - ]$-images of products of $\mathsf{ED}$ spaces computationally ubiquitous. These are not, in general, projective, although the main result in this direction requires more argument than one might at first expect; the proof given in \cite{CS3}, for example, answers an open question from \cite{Aviles} in Banach space theory\footnote{That question, of whether the Banach space $C(\beta\mathbb{N}\times\beta\mathbb{N})$ is separably injective, typifies how this stratum of projectivity questions dualizes to injectivity questions in Banach settings; see \cite[Prop.\ 3.17]{CS3}.} along the way: \begin{proposition}[Clausen--Scholze 2022] \label{prop:projnotprods} For any infinite sets $X$ and $Y$, the tensor product $\mathbb{Z}[\beta X]\otimes\mathbb{Z}[\beta Y]=\mathbb{Z}[\beta X\times\beta Y]$ is not projective. \end{proposition} See \cite[Appendix to III and pp.\ 26--27]{CS3} for further discussion of these issues; the latter includes, for example, the following observation: by item (5), $\mathsf{Cond(Ab)}$ is rich in projective objects, i.e., in $A=\mathbb{Z}[S]$ for which $\mathrm{Ext}^i_{\mathsf{Cond(Ab)}}(A,-)=0$ for all $i>0$. For such to be \emph{internally} projective, though, would entail that the internal Ext functors $$\underline{\mathrm{Ext}}^i_{\mathsf{Cond(Ab)}}(\mathbb{Z}[S],-)(T)=\mathrm{Ext}^i_{\mathsf{Cond(Ab)}}(\mathbb{Z}[S\times T],-)=0$$ for all $\mathsf{ED}$ spaces $T$ and $i>0$. Thus $\mathbb{Z}[\beta X]$ is not internally projective for any infinite set $X$, by Proposition \ref{prop:projnotprods}. 
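To spell out the concluding deduction: were $\mathbb{Z}[\beta X]$ internally projective for some infinite $X$, then taking $T=\beta Y$ in the displayed identity would yield $$\mathrm{Ext}^1_{\mathsf{Cond(Ab)}}(\mathbb{Z}[\beta X\times\beta Y],-)=\underline{\mathrm{Ext}}^1_{\mathsf{Cond(Ab)}}(\mathbb{Z}[\beta X],-)(\beta Y)=0$$ for any infinite $Y$, i.e., by the characterization of projectivity via the vanishing of $\mathrm{Ext}^1$, the projectivity of $\mathbb{Z}[\beta X\times\beta Y]$, contrary to Proposition \ref{prop:projnotprods}.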
More refined analyses take account of the \emph{degree} of failure of a condensed abelian group to be projective; put differently, results like Proposition \ref{prop:projnotprods} may be viewed as partial answers to the general question \emph{For $\mathsf{ED}$ spaces $S$ and $T$, what is the projective dimension of $\mathbb{Z}[S\times T]$?} We record the question of whether this dimension may uniformly be finite in our conclusion below. In the present section, we address a strong variant of this prospect, one formulated at the level of the category $\mathsf{Cond(Ani)}$ of condensed anima. This, heuristically, is the condensed category of homotopy types of topological spaces; for its precise $\infty$-category theoretic definition, see Section \ref{subsect:products}. The more immediate point is that, just as in items (4) and (5) above, the $\mathsf{ED}$ spaces define the compact projective objects of $\mathsf{Cond(Ani)}$. This is the reason that we may frame our main contribution to these analyses in the following classical terms, yielding clauses (1) and (2) of Theorem D. \begin{theorem} \label{thm:injdimconstantfield} For any field $K$ and $\mathsf{ED}$ space $S$, the constant sheaf $\mathcal{K}$ on $S$ is of injective dimension at most one; this dimension is zero if $K$ is finite. In contrast, if $\mathsf{ED}$ spaces $S$ and $T$ are each \v{C}ech-Stone compactifications of sets of cardinality at least $\aleph_\omega$, then the injective dimension of the constant sheaf $\mathcal{K}$ on their product is infinite. \end{theorem} This theorem is, of course, evocative of both Proposition \ref{prop:Rudin} and Proposition \ref{prop:projnotprods}, but with a striking jump in the threshold cardinality for unruly products, from $\omega$ to $\aleph_\omega$.
The contours of all but the contributing Theorem \ref{thm:nonzero_cohomology_on_opens} of its argument are due to Peter Scholze, along with the deduction of a main consequence in the condensed setting, establishing clause (3) of Theorem D \cite{ScholzePersComm}: \begin{theorem} \label{thm:productsanima} Products of compact projective condensed anima are not, in general, compact. \end{theorem} We divide our account of these results into two subsections. In the first, after a brief review of the relevant sheaf theory, we establish Theorem \ref{thm:injdimconstantfield}; in the second, after a brief review of the category of condensed anima, we establish Theorem \ref{thm:productsanima}. \subsection{Sheaves on $\mathsf{ED}$ spaces and their products} \label{subsect:sheaves} Recall that a \emph{presheaf of $R$-modules} $\mathcal{F}$ on a topological space $X$ is a contravariant functor from the lattice $(\tau(X),\subseteq)$ of open sets of $X$ to the category $R$-$\mathsf{Mod}$. Write $r_{V,U}$ for the map $\mathcal{F}(U)\to\mathcal{F}(V)$ corresponding to the inclusion of $V$ into $U$, and recall that $\mathcal{F}$ is a \emph{sheaf} if it additionally satisfies the condition that \emph{for all open covers $\bigcup_{i\in I}U_i=U$ of elements $U$ of $\tau(X)$ and collections $\{s_i\in\mathcal{F}(U_i)\mid i\in I\}$ such that} $$r_{U_i\cap U_j,U_i}(s_i)=r_{U_i\cap U_j,U_j}(s_j)$$ \emph{for all $i,j\in I$, there exists exactly one $s\in\mathcal{F}(U)$ satisfying} $$r_{U_i,U}(s)=s_i$$ \emph{for all $i\in I$}. A fundamental class of examples is that of the so-called \emph{constant sheaves}, assigning to each $U\in\tau(X)$ the $R$-module of \emph{locally constant functions} from $U$ to some fixed $R$-module $M$. Within this subsection, the role of $R$ will tend to be played by a field.
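Two elementary extreme cases may help to fix the notion of a constant sheaf. If $X$ is discrete, then every function on an open $U\subseteq X$ is locally constant, so the constant sheaf associated to an $R$-module $M$ is simply $U\mapsto M^U$, with restriction maps given by the restriction of functions; if, at the other extreme, $U$ is connected, then the locally constant functions $U\to M$ are exactly the constant ones, so that the constant sheaf evaluates on $U$ to a copy of $M$ itself.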
\begin{notation} If $K$ is a field or vector space, then we write $\mathcal{K}$ for the associated constant sheaf on a topological space $X$; in cases of potential ambiguity, we append a subscript to indicate the base space, writing, for example, $\mathcal{K}_X$. A calligraphic font is standard for two other classes of objects arising below as well; to distinguish these, we reserve the letter $\mathcal{I}$ for ideals, and $\mathcal{J}$ for injective resolutions or their constituent modules. \end{notation} We recall a few further fundamental points before turning to our main argument; for a fuller review, see, for example, \cite[Chapter 2]{Iversen}. \begin{definition} The \emph{stalk} of a presheaf $\mathcal{F}$ at a point $x\in X$ is $$\mathcal{F}_x:=\varinjlim_{U\ni x} \mathcal{F}(U).$$ Recall that a sheaf $\mathcal{G}$ on $X$ is fully determined by its stalks $\mathcal{G}_x$ $(x\in X)$, so that the \emph{sheafification} of a presheaf $\mathcal{F}$ may be defined simply as that sheaf $\widetilde{\mathcal{F}}$ on $X$ with $\widetilde{\mathcal{F}}_x=\mathcal{F}_x$ for all $x\in X$. Given sheaves $\mathcal{F}$ and $\mathcal{G}$ on $X$ and $Y$, respectively, and a map $f:X\to Y$, the \emph{direct image} (or \emph{pushforward}) $f_*\mathcal{F}$ of $\mathcal{F}$ is the sheaf on $Y$ mapping each $U\in\tau(Y)$ to $\mathcal{F}(f^{-1}(U))$. Left adjoint to $f_*$ is the \emph{inverse image} (or \emph{pullback}) functor $f^*$ sending $\mathcal{G}$ to the sheafification of the presheaf on $X$ given by $$U\mapsto\varinjlim_{V\supseteq f[U]} \mathcal{G}(V).$$ When $f$ is the inclusion of a locally closed subspace $X$ into $Y$, we have as well an $f_!$ sending $\mathcal{F}$ to the sheaf $f_!\mathcal{F}$ on $Y$ with stalks \begin{equation*} (f_!\mathcal{F})_y= \begin{cases} \mathcal{F}_y & \text{if } y \in X\\ 0 & \text{if } y \notin X. \end{cases} \end{equation*} In this case, $f^*$ is left-inverse to $f_!$.
Suppose now that both $\mathcal{F}$ and $\mathcal{G}$ are sheaves of $K$-modules for some fixed field $K$ and that $Y$ is a point. In this case, $\mathcal{G}$ naturally identifies with a $K$-vector space $W$ and $f^*\mathcal{G}$ is simply the constant sheaf $\mathcal{W}_X$, while $f_*\mathcal{F}$ is the vector space $\Gamma(X;\mathcal{F}):=\mathcal{F}(X)$ of \emph{global sections} of $\mathcal{F}$ over $X$. The \emph{sheaf cohomology $K$-modules} $\mathrm{H}^q(X;\mathcal{F})$ are, by definition, the evaluations at $\mathcal{F}$ of the right derived functors of this $\Gamma(X;\,\cdot\,)$ viewed as a functor to $K$-$\mathsf{Mod}$ from the abelian category of sheaves of $K$-modules on $X$. By way of the aforementioned adjunction, we have as well the equation $$\mathrm{Ext}_K^q(\mathcal{K},\mathcal{F})=\mathrm{H}^q(X;\mathcal{F})$$ with the left-hand side computed, of course, in the category concluding the previous sentence \cite[2.7.5]{Iversen}. Therein, injective objects admit the standard definition; there are enough of them, and the injective dimension $\mathrm{injdim}_X(\mathcal{F})$ of any sheaf $\mathcal{F}$ is the minimal length of an injective resolution of $\mathcal{F}$. \end{definition} \begin{lemma} \label{lemma:fininjdim_pushforward} Let $K$ be a field, let $S$ and $T$ be $\mathsf{ED}$ spaces, and let $\pi:S\times T\to S$ denote the projection map. Let $w(T)$ denote the least cardinality of a basis of $T$, also known as its topological weight. If the constant sheaf $\mathcal{K}$ on $S\times T$ is of finite injective dimension then so too is its direct image $\pi_{*}\mathcal{K}$, and the latter is in any case isomorphic to the constant sheaf $\mathcal{W}$ on $S$, where $W$ is the $K$-vector space $\bigoplus_{w(T)} K$. \end{lemma} \begin{proof} For any $s\in S$, consider the stalk $$(\pi_*\mathcal{K})_s=\varinjlim_{s\in U}\mathcal{K}(U\times T)$$ of $\pi_*\mathcal{K}$ over $s$.
By the compactness of $T$, for every locally constant $f:U\times T\to K$ there exist a neighborhood $V$ of $s$ and a locally constant $g:T\to K$ such that $f(s',t)=g(t)$ for all $(s',t)\in V\times T$. It follows that $(\pi_*\mathcal{K})_s$ equals the $K$-vector space of continuous functions $T\to K$ (with discrete codomain); moreover, the latter is isomorphic to $W=\bigoplus_{w(T)} K$ by the argument of \cite[Corollary 97.7]{Fuchs_Infinite_73}. The lemma's concluding claim is now immediate: since the natural map from the constant sheaf $\mathcal{W}$ on $S$ to $\pi_*\mathcal{K}$ induces isomorphisms of stalks, it is an isomorphism of sheaves. The preceding stalk computation holds, in fact, much more generally: it is a $q=0$ instance of the fact that for any $s\in S$ and sheaf $\mathcal{F}$ on $S\times T$, the stalk over $s$ of the right derived functor of the pushforward of $\mathcal{F}$ $$(\mathrm{R}^q\pi_*\mathcal{F})_s= \mathrm{H}^q(\pi^{-1}(s);\iota^{*}\mathcal{F})$$ for any $q\geq 0$, where $\iota^*$ denotes the pullback of the inclusion $\pi^{-1}(s)\hookrightarrow S\times T$ \cite[Theorem 17.2]{MilneLEC}. Since pullback functors are exact and $\mathrm{H}^q(T;-)=0$ for any $q>0$ and profinite $T$ \cite[Theorem 5.1]{Wiegand}, this in turn implies that $\pi_*$ is exact. The lemma's first claim now follows from the general fact that pushforwards preserve injectives \cite[II.4.1.3]{Iversen}, since this, when combined with the foregoing, implies that the image $\pi_*\mathcal{J}$ of a finite injective resolution $\mathcal{J}$ of $\mathcal{K}$ is a finite injective resolution of $\pi_*\mathcal{K}$. \end{proof} By way of this and the following lemma, assertions about the injective dimension of $\mathcal{K}$ on products of $\mathsf{ED}$ spaces --- and, hence, about the productivity of compact projective condensed anima --- convert to cohomology computations on open subsets of single $\mathsf{ED}$ spaces.
\begin{lemma} \label{lemma:eq_conds} For every field $K$ and topological space $S$ and $K$-vector space $W$, the following conditions are equivalent: \begin{enumerate} \item The constant sheaf $\mathcal{W}$ on $S$ is of finite injective dimension. \item There exists an $n\in\mathbb{N}$ such that $\mathrm{H}^q(U;\mathcal{W})=0$ for any open subset $U$ of $S$ and $q>n$. \end{enumerate} \end{lemma} \begin{proof} $(1)\Rightarrow (2)$: Much as in the argument of the preceding lemma, the point is that the pullback functor $\iota^*$ associated to the inclusion $\iota:U\hookrightarrow S$ is both exact and preserves injectives \cite[II.6.10]{Iversen}. Hence $\iota^*$ applied to a length-$n$ injective resolution of the constant sheaf $\mathcal{W}$ over $S$ is a length-$n$ injective resolution of the constant sheaf $\mathcal{W}=\iota^*\mathcal{W}$ over $U$; this implies (2). $(2)\Rightarrow (1)$: First we establish the following: \begin{claim} \label{clm:injective_criterion} A sheaf $\mathcal{F}$ of $K$-vector spaces is injective if and only if $\mathrm{Ext}_K^1(\iota_{!}\mathcal{K},\mathcal{F})=0$ for every inclusion $\iota$ of an open subset $U$ into $S$. \end{claim} \begin{proof} The forward implication is standard. For the reverse implication, suppose that $\mathrm{Ext}_K^1$ vanishes in the manner described and let $f:\mathcal{A}\to\mathcal{F}$ and $g:\mathcal{A}\hookrightarrow\mathcal{B}$ be morphisms of sheaves over $S$; we need to show that $f$ extends to an $h:\mathcal{B}\to\mathcal{F}$. To that end, write $\mathcal{A}'\leq\mathcal{A}''$ if $\mathcal{A}'$ is a subsheaf of $\mathcal{A}''$, regard $\mathcal{A}$ as a subsheaf of $\mathcal{B}$, and consider the natural partial ordering of pairs $(\mathcal{A}',h')$ for which $\mathcal{A}\leq\mathcal{A}'\leq\mathcal{B}$ and $h':\mathcal{A}'\to\mathcal{F}$ satisfies $f=h'\circ g$. 
Since this order satisfies the conditions of Zorn's Lemma, our task reduces to showing that if there exists for some such $(\mathcal{A}',h')$ an $x\in S$ with $\mathcal{A}_x'\lneq\mathcal{B}_x$ then there exists an $\mathcal{A}''$ with $\mathcal{A}'\lneq\mathcal{A}''\leq\mathcal{B}$ and an $h'':\mathcal{A}''\to\mathcal{F}$ properly extending $h'$. To that end, fix such an $x$ and a nonzero $b\in\mathcal{B}_x\backslash\mathcal{A}_x'$ and a representative $s\in\mathcal{B}(U)$ of $b$ for some open $U\subseteq S$ containing $x$. Observe (by checking stalks) that $s$ generates a sheaf on $U$ which is isomorphic to, and which we are therefore justified in identifying with, $\mathcal{K}$. Let $\iota$ denote the inclusion $U\to S$ and let $\mathcal{A}''=\mathcal{A}'+\iota_{!}\mathcal{K}$ and apply $\mathrm{Hom}_K(-,\mathcal{F})$ to the short exact sequence $$0\to\mathcal{A}'\to\mathcal{A}'+\iota_{!}\mathcal{K}\to\iota_{!}\mathcal{K}\to 0$$ of sheaves on $S$. Our premise, applied to the fragment of the induced long exact sequence $$\mathrm{Hom}_K(\mathcal{A}'+\iota_{!}\mathcal{K},\mathcal{F})\xrightarrow{e}\mathrm{Hom}_K(\mathcal{A}',\mathcal{F})\to\mathrm{Ext}_K^1(\iota_{!}\mathcal{K},\mathcal{F}),$$ implies that $e$ is surjective, and hence that $h'$ extends to an element $h'':\mathcal{A}'+\iota_{!}\mathcal{K}\to\mathcal{F}$ of $e^{-1}(h')$. This $h''$ and $\mathcal{A}''=\mathcal{A}'+\iota_{!}\mathcal{K}$ conclude the argument. \end{proof} Fix now an $n$ as in (2) and a resolution $$0\to\mathcal{W}\to\mathcal{J}^0\to\cdots\to\mathcal{J}^n\to\mathcal{F}\to 0$$ of the sheaf $\mathcal{W}$ over $S$ in which each of the $\mathcal{J}^i$ is injective.
By assumption, for every inclusion $\iota$ of an open subset $U$ into $S$, \begin{align*} 0 = \mathrm{H}^{n+1}(U;\mathcal{W}) & = \mathrm{H}^{n+1}(U;\iota^*\mathcal{W}) \\ & = \mathrm{Ext}_K^{n+1}(\mathcal{K},\iota^*\mathcal{W}) \\ & = \mathrm{Ext}_K^{n+1}(\iota_{!}\mathcal{K},\mathcal{W}) \\ & = \mathrm{Ext}_K^1(\iota_{!}\mathcal{K},\mathcal{F}), \end{align*} hence by Claim \ref{clm:injective_criterion}, $\mathcal{F}$ is injective, completing the argument. Here the penultimate equality follows from the $\iota_{!}$--$\iota^*$ adjunction together with the fact that such $\iota^*$ are exact and preserve injectives \cite[II.6.9(ii)]{Iversen}, and the last equality is via a standard dimension-shifting argument, such as appears, for example, in \cite[Exercise 2.4.3]{Weibel}. \end{proof} Against this background, we turn to the groups $\mathrm{H}^q(U;\mathcal{W})$ for such $W$ and open subsets $U \subseteq S$ as appear in Lemmas \ref{lemma:fininjdim_pushforward} and \ref{lemma:eq_conds} above; since such $U$ admit a convenient characterization when $S$ is the \v{C}ech-Stone compactification $\beta X$ of some discrete space $X$, we will focus our attention on $S$ of this sort. Note that there is no loss of generality in doing so, since every $\mathsf{ED}$ space $T$ is a retract of some such $\beta X$.\footnote{\label{ftnt:retracts} More precisely, such a retract descends to one of open sets $r^{-1}(U)\to U$; applying $\mathrm{H}^q(-;\mathcal{W})$ to the identity map $U\hookrightarrow r^{-1}(U)\to U$ then shows that if $n$ witnesses item 2 of Lemma \ref{lemma:eq_conds} for $\beta X$ then it does for $T$ as well. 
Thus for deriving a contradiction, via Lemma \ref{lemma:eq_conds}, from the premises of Lemma \ref{lemma:fininjdim_pushforward}, we may indeed restrict attention to the free $\mathsf{ED}$ spaces $\beta X$ for $X$ discrete.} The characterization just alluded to is the correspondence, given by Stone duality, between open subsets $U$ of $\beta X$ and ideals $\mathcal{I}$ on $X$ \cite[Lemma 19.1]{Halmos}. More precisely, any such $U$ equals $\bigcup_{I\in\mathcal{I}_U} N_I$ for some ideal $\mathcal{I}_U\subseteq P(X)$; note next that each such $N_I$ corresponds, via the map $\mathcal{U}\mapsto\mathcal{U}\cap P(I)$, to the set of ultrafilters on $I$, and that this map in fact defines a homeomorphism $N_I\to\beta I$. In consequence, for any field $K$ and vector space $W=\bigoplus_\kappa K$ and open $U\subseteq\beta X$, we have the following: \begin{align} \mathrm{H}^q(U;\mathcal{W}) & = \mathrm{H}^q(\mathrm{colim}_{I\in\mathcal{I}_U} \,\beta I;\mathcal{W}) \label{eq:Rqlim0} \\ & = \mathrm{R}^q\mathrm{lim}_{I\in\mathcal{I}_U}\,\mathrm{H}^0(\beta I;\mathcal{W}) \label{eq:Rqlim1} \\ & = \mathrm{R}^q\mathrm{lim}_{I\in\mathcal{I}_U}\,\bigoplus_\kappa \mathrm{H}^0(\beta I;\mathcal{K}) \label{eq:Rqlim2} \\ & = \mathrm{R}^q\mathrm{lim}_{I\in\mathcal{I}_U}\,\bigoplus_\kappa \prod_I K \label{eq:Rqlim3} \end{align} Here the fourth and third equalities follow from the definition and the compactness of $\beta I$, respectively; the first is immediate from our discussion above. The second may be viewed as an instance of \cite[Prop. 
3.6.3]{Prosmans}; more precisely (since directed colimits are exact functors on sheaf categories), the right-hand side of line \ref{eq:Rqlim0} is the $q^{\mathrm{th}}$ cohomology group of what may be written as \begin{align*} \mathrm{RHom}(\mathrm{colim}_{I\in\mathcal{I}_U}\,\iota_{!}\mathcal{K}_{\beta I},\mathcal{W}_U) & = \mathrm{Rlim}_{I\in\mathcal{I}_U}(\mathrm{RHom}(\iota_{!}\mathcal{K}_{\beta I},\mathcal{W}_U))\\ & = \mathrm{Rlim}_{I\in\mathcal{I}_U}(\mathrm{RHom}(\mathcal{K}_{\beta I},\iota^*\mathcal{W}_U)), \end{align*} where $\iota:\beta I\to U$ is the inclusion, the first equality is by way of the aforementioned reference, and the second is just as in the argument of Lemma \ref{lemma:eq_conds}. As noted above, the cohomology groups $\mathrm{Ext}^r(\mathcal{K}_{\beta I},\iota^*\mathcal{W}_U)=\mathrm{H}^r(\beta I;\mathcal{W})$ of the last line's interior term vanish in all degrees $r>0$. We arrive in this way at the expression in line \ref{eq:Rqlim1} above. The reward for these analyses is the expression appearing in line \ref{eq:Rqlim3}. Observe, for example, that if $\kappa=1$ in the sequence (\ref{eq:Rqlim0})--(\ref{eq:Rqlim3}) then for any open $U\subseteq\beta X$ the $\mathcal{I}_U$-indexed inverse system of line \ref{eq:Rqlim3} is plainly flasque, implying that the overall expression is zero for any $q>0$, from which it follows by the argument of Lemma \ref{lemma:eq_conds} that the injective dimension of $\mathcal{K}$ over any $\mathsf{ED}$ space $S$ is at most one. Together with Lemma \ref{lem:finitefieldsinjective} below, this shows the first part of Theorem \ref{thm:injdimconstantfield}. The second part derives from Lemmas \ref{lemma:fininjdim_pushforward} and \ref{lemma:eq_conds} (or, more precisely, from the argument of its (1)$\Rightarrow$(2)) and the following, which we will argue via line \ref{eq:Rqlim3} above as well.
\begin{theorem} \label{thm:nonzero_cohomology_on_opens} Fix a field $K$ and $K$-vector space $W=\bigoplus_\kappa K$ for some $\kappa\geq\aleph_\omega$. For any set $X$ with $|X|\geq\aleph_\omega$ and $n\in\mathbb{N}$, there exists an open $U\subseteq\beta X$ such that $\mathrm{H}^n(U;\mathcal{W})\neq 0$. \end{theorem} The remainder of this section will principally be taken up with a proof and partial converse of this theorem (see also Remark \ref{rmk:betternontrivs} below for some refinements), in which several of the preceding sections' themes reappear. Our argument will revolve around a further instance of the generalized $n$-coherence appearing in Definition \ref{def:coh2}; this is the instance, for any cardinal $\kappa$, field $K$, set $X$, and ideal $\tilde{\mathcal{I}}$ on $X$, given by letting the definition's \begin{itemize} \item $Y=\kappa\times X$, \item $H=K$, \item $\mathcal{I}$ be the ideal on $Y$ generated by $\{\kappa\times I\mid I\in\tilde{\mathcal{I}}\}$, and \item $\mathcal{J}$ be the ideal on $Y$ generated by $\{s\times X\mid s\in [\kappa]^{<\omega}\}.$ \end{itemize} For any $n>0$, the associated notion of $n$-coherence is then that of a family $$\Phi=\left\langle\varphi_{\vec{I}}:\kappa\times\bigcap\vec{I}\to K\mid\vec{I}\in\tilde{\mathcal{I}}^n\right\rangle$$ such that for any $\vec{I}\in \tilde{\mathcal{I}}^{n+1}$, the function $$\sum_{i\leq n}(-1)^i\varphi_{\vec{I}^i}(\xi,\,\cdot\,)\big|_{\bigcap\vec{I}}$$ is nonzero for only finitely many $\xi\in\kappa$. Triviality is similarly defined, and it is immediate from Lemma \ref{lem:nontrivfamsarenontrivlimsagain} that for all $n>0$ and $\kappa$ and $K$ and $\tilde{\mathcal{I}}$ as above, $$\mathrm{R}^n\mathrm{lim}_{I\in\tilde{\mathcal{I}}}\,\bigoplus_\kappa \prod_I K = 0$$ if and only if the associated $n$-coherent families of functions are all trivial. 
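To fix intuitions, observe that for $n=1$ the coherence condition just described asserts simply that for any $I_0,I_1\in\tilde{\mathcal{I}}$ the difference $$\left(\varphi_{(I_1)}-\varphi_{(I_0)}\right)(\xi,\,\cdot\,)\big|_{I_0\cap I_1}$$ is nonzero for only finitely many $\xi\in\kappa$; in other words, any two elements of $\Phi$ agree on the intersection of their domains, modulo functions supported on only finitely many of the ``columns'' $\{\xi\}\times X$ of $\kappa\times X$.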
\begin{proof}[Proof of Theorem \ref{thm:nonzero_cohomology_on_opens}] By lines \ref{eq:Rqlim0} through \ref{eq:Rqlim3}, it will suffice to show that for any $K$, $\kappa$, $X$, and $n$ as in the statement of the theorem, there exists an ideal $\mathcal{I}_n$ on $X$ such that $\mathrm{R}^n\mathrm{lim}_{I\in\mathcal{I}_n}\,\bigoplus_\kappa \prod_I K\neq 0$. For this it will suffice to exhibit for each $n>0$ a nontrivial $n$-coherent family of functions $$\Phi=\left\langle\varphi_{\vec{\alpha}}:\kappa\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in (\omega_n)^n\right\rangle;$$ to see this, identify $X$ with its cardinality $|X|=\lambda\geq\aleph_\omega$, let $\mathcal{I}_n$ denote the bounded ideal on $\omega_n\subset\lambda$, and apply Lemma \ref{lem:nontrivfamsarenontrivlimsagain} and Remark \ref{rmk:cofinality}. Much as in the proof of item 5 of Theorem \ref{thm:limsofAkappalambda}, such families $\Phi$ may be efficiently derived from existing constructions in the set theoretic literature;\footnote{Functions $\varphi_{\vec{\alpha}}$ as desired may be read off from the functions $\mathtt{f}_n$ of Section 6 of \cite{TFOA} (with $K$ replacing $\mathbb{Z}$ in the definition of the latter) or, relatedly, from the walks functions $r_2^n$ of Sections 7 and 9 of \cite{IHW}.} since in the present case, however, the relevant machinery is both more involved and less well-known, it seems preferable here to construct such families directly. We will do so by recursion on $n$. Henceforth fix a field $K$. For the base case $n=1$, the functions $\tau_\alpha$ appearing in the proof of item 5 of Theorem \ref{thm:limsofAkappalambda} furnish a nontrivial coherent $\Phi=\langle\varphi_\alpha:\kappa\times\alpha\to K\mid\alpha\in\omega_1\rangle$ with $\kappa=\aleph_0$, as the reader may verify. In order to better fix both our notation and a template for higher $n$, though, we will record a more hands-on construction of such a $\Phi$, one in which $\kappa=\omega_1$.
For this purpose, fix a sequence $\langle C_\alpha\subseteq\alpha\mid\alpha\in\mathrm{Cof}(\omega)\cap\omega_1\rangle$ in which $\mathrm{otp}(C_\alpha)=\omega$ and $\mathrm{sup}(C_\alpha)=\alpha$ for every countable limit ordinal $\alpha$; for each such $\alpha$, let $\chi_\alpha(0,-):\alpha\to K$ be the characteristic function of $C_\alpha$ and let $\chi_\alpha:\omega_1\times\alpha\to K$ otherwise equal $0$. For any $\psi:\kappa\times\alpha\to K$ and $\gamma<\kappa$, let $\psi^{(\gamma)}(\gamma+\eta,\xi)=\psi(\eta,\xi)$ for all $(\eta,\xi)\in\kappa\times\alpha$ and let $\psi^{(\gamma)}$ otherwise equal $0$. We proceed now to construct by recursion on $\alpha$ a nontrivial coherent $\Phi=\langle\varphi_\alpha:\omega_1\times\alpha\to K\mid\alpha\in\omega_1\rangle$, as desired. Our recursive assumption at stage $\beta$ is that $\Phi\restriction\beta:=\langle\varphi_\alpha:\omega_1\times\alpha\to K\mid\alpha\in\beta\rangle$ is defined and, moreover, that $$\{\eta<\omega_1\mid\varphi_\alpha(\eta,\xi)\neq 0\text{ for some }\xi<\alpha<\beta\}\subseteq\beta;$$ henceforth we reserve the notation $\mathrm{supp}(-)$ (or here, $\mathrm{supp}(\Phi\restriction\beta)$) for a family of self-explanatory variations on the bracketed expression above. There are two possibilities: \begin{enumerate} \item $\beta=\alpha+1$ for some $\alpha$. In this case, let $\varphi_\beta(\eta,\xi)=\varphi_\alpha(\eta,\xi)$ for $\xi<\alpha$ and otherwise equal $0$. \item $\beta$ is a limit ordinal. In this case, by Goblot's Theorem, there exists a trivialization $\psi:\omega_1\times\beta\to K$ of $\Phi\restriction\beta$; since $\mathrm{supp}(\Phi\restriction\beta)\subseteq\beta$ we may assume $\mathrm{supp}(\psi)\subseteq\beta$ as well. Let $\varphi_\beta=\psi+\chi^{(\beta)}_\beta$. \end{enumerate} This completes our account of stage $\beta$; note that our recursive assumption is conserved. 
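Since both notions figure in the verification that follows, it may help to record what coherence and triviality of $\Phi$ amount to in this concrete setting; this is just the $n=1$ instance of the general definition above, with $\kappa=\omega_1$, with $\tilde{\mathcal{I}}$ the bounded ideal, and with the intersection of the domains indexed by $\alpha<\beta$ reducing to $\alpha$:

```latex
\begin{align*}
&\textit{coherence:}  && \varphi_\beta(\eta,\,\cdot\,)\restriction\alpha
   =\varphi_\alpha(\eta,\,\cdot\,)\ \text{for all but finitely many }
   \eta<\omega_1,\ \text{whenever }\alpha<\beta<\omega_1;\\
&\textit{triviality:} && \text{for some single }\psi:\omega_1\times\omega_1\to K,\
   \psi(\eta,\,\cdot\,)\restriction\alpha=\varphi_\alpha(\eta,\,\cdot\,)\\
&  && \text{for all but finitely many }\eta<\omega_1,
   \ \text{for every }\alpha<\omega_1.
\end{align*}
```

It is in this form that the nontriviality of $\Phi$ is refuted via Fodor's Lemma in the argument below.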
The $\Phi$ so constructed is plainly coherent, so suppose for contradiction that some $\psi:\omega_1\times\omega_1\to K$ trivializes $\Phi$. Then for each $\beta\in\mathrm{Cof}(\omega)\cap\omega_1$ there exists $\xi_\beta<\beta$ with $\psi(\beta,\xi_\beta)\neq 0$; by Fodor's Lemma, $\xi_\beta$ equals some fixed $\xi$ for uncountably many $\beta$. For any $\alpha>\xi$, though, $\mathrm{supp}(\varphi_\alpha)\subseteq\alpha+1$ implies that $\varphi_\alpha\neq^*\psi$, and this is the desired contradiction. We now show how the existence of nontrivial $n$-coherent families $$\Psi=\left\langle\psi_{\vec{\alpha}}:\omega_n\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in (\omega_n)^n\right\rangle$$ implies the existence of nontrivial $(n+1)$-coherent families $$\Phi=\left\langle\varphi_{\vec{\alpha}}:\omega_{n+1}\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in (\omega_{n+1})^{n+1}\right\rangle.$$ As above, we construct $\Phi$ in stages $\Phi\restriction\beta=\langle\varphi_{\vec{\alpha}}:\omega_{n+1}\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in \beta^{n+1}\rangle$ ranging through $\beta<\omega_{n+1}$, maintaining the condition that $\mathrm{supp}(\Phi\restriction\beta)\subseteq\beta$ for all $\beta\in\mathrm{Cof}(\omega_n)\cap\omega_{n+1}$, and $\mathrm{supp}(\Phi\restriction\beta)\subseteq\beta + \omega_n$ for all other $\beta \in \omega_{n+1}$, as we go. Again we distinguish two cases: \begin{enumerate} \item $\mathrm{cf}(\beta)\neq\aleph_n$. By Goblot's Theorem, there exists an alternating trivialization $\Theta=\langle\theta_{\vec{\alpha}}:\omega_{n+1}\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in\beta^n\rangle$ of $\Phi\restriction\beta$ with $\mathrm{supp}(\Theta)=\mathrm{supp}(\Phi\restriction\beta)$.
Extend $\Phi\restriction\beta$ to $\Phi\restriction\beta+1$ by letting $\varphi_{\vec{\alpha}^\frown\langle\beta\rangle}=(-1)^{n+1}\theta_{\vec{\alpha}}$ for all $\vec{\alpha}\in\beta^n$; the requirement that $\Phi$ be alternating then determines $\Phi\restriction\beta+1$ on any remaining indices. \item $\mathrm{cf}(\beta)=\aleph_n$. In this case, our assumptions together with Remark \ref{rmk:cofinality} ensure the existence of a nontrivial $n$-coherent $\Psi=\langle\psi_{\vec{\alpha}}:\omega_n\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in (\beta)^n\rangle$ with $\mathrm{supp}(\Psi)\subseteq\omega_n$, as well as a $\Theta$ trivializing $\Phi\restriction\beta$ as before. Extend $\Phi\restriction\beta$ to $\Phi\restriction\beta+1$ by letting $\varphi_{\vec{\alpha}^\frown\langle\beta\rangle}=(-1)^{n+1}\theta_{\vec{\alpha}}+\psi^{(\beta)}_{\vec{\alpha}}$ for all $\vec{\alpha}\in\beta^n$. \end{enumerate} This completes our account of stage $\beta$; note just as above that our condition on $\mathrm{supp}(\Phi\restriction\beta)$ is conserved. The $n$-coherence of $\Phi$ is again straightforward to verify, so suppose for contradiction that $$\Upsilon=\left\langle\upsilon_{\vec{\alpha}}:\omega_{n+1}\times\bigwedge\vec{\alpha}\to K\mid\vec{\alpha}\in (\omega_{n+1})^n\right\rangle$$ trivializes $\Phi$. Much as in the $n=1$ case, the nontriviality of the families $\Psi$ arising in the course of our construction ensures that, for each $\beta\in\mathrm{Cof}(\omega_n)\cap\omega_{n+1}$, there exists an $f(\beta)=\vec{\alpha}\in(\beta)^{n+1}$ and $\eta\in [\beta,\beta+\omega_n)$ such that \begin{equation} \label{eq:sum} \sum_{i=0}^n(-1)^i\upsilon_{\vec{\alpha}^i}(\eta,\xi)\neq 0 \end{equation} for some $\xi<\alpha_0$ (if not, then $\Upsilon\restriction (\beta)^n$ would itself be trivial on the domain $[\beta,\beta+\omega_n)\times\beta$, implying that the associated $\Psi$ is).
By Fodor's Lemma, $f$ is constantly $\vec{\alpha}$ on some cofinal $B\subseteq\omega_{n+1}$, hence the witnesses $\eta$ to equation \ref{eq:sum} for this $\vec{\alpha}$ are cofinal in $\omega_{n+1}$ as well. Since $\mathrm{supp}(\varphi_{\vec{\alpha}})\subseteq\mathrm{min}\,B$, though, this implies that $\Upsilon$ does not trivialize $\Phi$. This contradiction completes the argument. \end{proof} \begin{remark} \label{rmk:betternontrivs} Observe that any nontrivial abelian group $K$ would suffice for the above argument (with some nonzero $a$ taking the place of $1$), and hence for the statement of Theorem \ref{thm:nonzero_cohomology_on_opens} as well. A deeper question is whether the dimension $\kappa$ of $W$ need really be at least $\aleph_\omega$ in Theorem \ref{thm:nonzero_cohomology_on_opens}; note that this translates in Theorem \ref{thm:injdimconstantfield} to a question of whether \emph{both} factors $S$ and $T$ need be $\aleph_\omega$-large for their product to carry infinite injective dimension. Let us briefly note that \begin{itemize} \item by the discussion preceding Theorem \ref{thm:nonzero_cohomology_on_opens}, $\kappa$ must be infinite; \item in the case of $n=1$, as noted, $\kappa=\aleph_0$ suffices; \item it is consistent with the $\mathsf{ZFC}$ axioms that $\kappa=\aleph_0$ suffices for all $n>0$ (it suffices in the constructible universe $L$, by straightforward modifications of \cite[\S 3.3]{BLHCoOI}, for example); \item but whether $\kappa=\aleph_0$ suffices for all $n>0$ in $\mathsf{ZFC}$ alone is a subtle question, one again closely related to that of the groups $\mathrm{H}^n(\omega_n;\mathbb{Z})$ referenced in Section \ref{sec:additivity}. \end{itemize} On the other hand, by the following lemma, the conditions on $X$ in Theorem \ref{thm:nonzero_cohomology_on_opens} are essentially sharp. 
\end{remark} \begin{lemma} \label{lem:reverse_implication} If $S=\beta X$ for a discrete space $X$ with $|P(X)|<\aleph_\omega$ then for any field $K$ and $K$-vector space $W$, the constant sheaf $\mathcal{W}$ on $S$ is of finite injective dimension. \end{lemma} \begin{proof} Let $|P(X)|=\aleph_n$; any $\mathcal{I}_U$ as in equation \ref{eq:Rqlim3} above then determines a $\mathrm{Cof}(\leq\aleph_n)$-indexed inverse system with surjective bonding maps whose $\mathrm{Rlim}$ computes $\mathrm{H}^q(U;\mathcal{W})$. By equations \ref{eq:Rqlim0} through \ref{eq:Rqlim3} together with Goblot's Theorem, the second, and hence the first, of Lemma \ref{lemma:eq_conds}'s conditions therefore holds. \end{proof} \begin{corollary} \label{cor:413} If $\aleph_\omega$ is a strong limit cardinal then for any field $K$ and $K$-vector space $W$ and $\mathsf{ED}$ space $S$ of cardinality less than $\aleph_\omega$, the constant sheaf $\mathcal{W}$ on $S$ is of finite injective dimension. \end{corollary} \begin{proof} For $S=\beta X$ for a discrete space $X$, this is immediate from Lemma \ref{lem:reverse_implication}. Under our cardinal arithmetic assumptions, any other $\mathsf{ED}$ space $S$ of cardinality less than $\aleph_\omega$ is a retract of such a $\beta X$ with $|\beta X|<\aleph_\omega$ (simply let $X$ equal the underlying set of $S$), so the conclusion follows by the reasoning of footnote \ref{ftnt:retracts}. \end{proof} Just before Theorem \ref{thm:nonzero_cohomology_on_opens}, we noted that for any field $K$ and $\mathsf{ED}$ space $S$ the injective dimension of the sheaf $\mathcal{K}$ on $S$ is at most one. The following refinement completes the proof of Theorem \ref{thm:injdimconstantfield}. \begin{lemma} \label{lem:finitefieldsinjective} For any finite field $K$ and $\mathsf{ED}$ space $S$, the sheaf $\mathcal{K}$ on $S$ is injective. 
\end{lemma} \begin{proof} Observe first that since $K$ is compact, any continuous map from an open $U\subseteq S$ to $K$ uniquely extends to a continuous map $\beta U\to K$. Similarly, since $S$ is $\mathsf{ED}$, the inclusion $U\hookrightarrow S$ uniquely extends to an embedding of $\beta U$ as an open and closed subspace of $S$. Together these facts imply that the restriction map $\mathcal{K}(S)\to\mathcal{K}(U)$ is surjective. When the latter holds, as here, for all open $U\subseteq S$, we say that the sheaf in question is \emph{flasque} (the term's application to inverse systems above was a special instance); the interest of this property is that flasque resolutions are acyclic and therefore convenient for cohomology computations. It is easy to see also that $\iota^*$ preserves this property for any such $\iota: U\hookrightarrow S$. Putting these facts together, we have that $0=\mathrm{H}^1(U;\iota^*\mathcal{K})=\mathrm{Ext}^1(\iota_{!}\mathcal{K},\mathcal{K})$ for all such $\iota$, and conclude, by Claim \ref{clm:injective_criterion}, that $\mathcal{K}$ is injective. \end{proof} \subsection{Products of compact projective anima} \label{subsect:products} Recall this section's organizing concern: \emph{When is a category's class of compact projective objects closed under the formation of finite products?} Among condensed categories, we considered three main possibilities, which we may abbreviate as follows: \begin{enumerate}[label=(\Alph*)] \item $\mathsf{Cond(Set)}^{\mathrm{cp}}$ is closed with respect to binary products; \item $\mathsf{Cond(Ab)}^{\mathrm{cp}}$ is closed with respect to tensor products; \item $\mathsf{Cond(Ani)}^{\mathrm{cp}}$ is closed with respect to binary products. 
\end{enumerate} We noted the failures of (A) and (B) and considered, in their wake, the following weakening of (B): \begin{enumerate}[label=(\Alph*)] \setcounter{enumi}{3} \item Tensor products of compact projective objects in $\mathsf{Cond(Ab)}$ are of finite projective dimension. \end{enumerate} In this subsection, we record two main results --- namely, that (C) implies (D), but also that by the work of the previous subsection, (C), in general, fails to hold. In fact it will be most natural to argue these points in reverse order, after a brief review of $\mathsf{Cond(Ani)}$ and the phenomenon of \emph{animation}. To that end, let us continue with the quoted passage with which this section began: \begin{quote} Taken together, an object $X\in\mathcal{C}$ is compact projective if $\mathrm{Hom}(X,-)$ commutes with all filtered colimits and reflexive coequalizers: equivalently, it commutes with all (so-called) $1$-sifted colimits. \cite[p.\ 74]{CS2} \end{quote} This suffices, for our purposes, to describe the latter class of colimits, although a more intuitive characterization is the following: filtered colimits are precisely those which commute with finite limits; $(1$-)sifted colimits are precisely those which commute with finite products. The prefix $1$-, of course, signals that we're discussing the ordinary category theoretic version of what we'll soon upgrade to an $\infty$-category theoretic notion. Write $\mathsf{sInd}(\mathcal{C})$ for the free cocompletion of $\mathcal{C}$ with respect to $1$-sifted colimits. If $\mathcal{C}$ is cocomplete then we have a fully faithful embedding $\mathsf{sInd}(\mathcal{C}^{\mathrm{cp}})\to\mathcal{C}:\textnormal{``}\mathrm{colim}\textnormal{''}\,X_i\mapsto\mathrm{colim}\,X_i$, and ``if $\mathcal{C}$ is generated under small colimits by $\mathcal{C}^{\mathrm{cp}}$ then that functor is an equivalence'' (\cite[p.\ 74]{CS2}; for fuller argument, see \cite[\S 5.1.1]{CesnaviciusScholze}). 
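To make the earlier slogan about finite products precise: a nonempty small category $I$ is $1$-sifted exactly when $I$-indexed colimits commute with finite products in $\mathsf{Set}$, i.e.\ when the canonical comparison map

```latex
\mathrm{colim}_{i\in I}\,(X_i\times Y_i)
  \longrightarrow
  \Big(\mathrm{colim}_{i\in I}X_i\Big)\times\Big(\mathrm{colim}_{i\in I}Y_i\Big)
```

is an isomorphism for all diagrams $X,Y:I\to\mathsf{Set}$; equivalently, when the diagonal functor $I\to I\times I$ is final. Filtered categories and the category indexing reflexive coequalizers are the basic examples.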
In particular, we have $\mathsf{sInd}(\mathcal{C}^{\mathrm{cp}})\cong\mathcal{C}$ for each of the enumerated examples of the section's introduction: the category $\mathsf{Set}$ is the $1$-sifted ind-completion $\mathsf{sInd}(\mathsf{Set}^{\mathrm{cp}})$ of the category $\mathsf{Set}^{\mathrm{cp}}$ of finite sets; similarly for $\mathsf{Ab}$, for $\mathsf{Cond(Ab)}$, and so on. Animation is simply the $\infty$-category theoretic version of this operation;\footnote{For sifted diagrams in $\infty$-categories, see \cite[\S 5.5.8]{LurieHTT} or \cite[Tag 02QD]{kerodon}; these are again, intuitively, those whose colimits preserve finite products.} continuing with \cite{CS2}, we have: \begin{definition} Let $\mathcal{C}$ be a category that admits all small colimits and is generated under small colimits by $\mathcal{C}^{\mathrm{cp}}$. The \emph{animation of $\mathcal{C}$} is the $\infty$-category $\mathsf{Ani}(\mathcal{C})$ freely generated under sifted colimits by $\mathcal{C}^{\mathrm{cp}}$. \end{definition} For example: \begin{enumerate} \item The animation $\mathsf{Ani(Set)}$ of the category of sets is nothing other than the $\infty$-category of ``spaces'': it is, in other words, that higher category theoretic incarnation of the homotopy category of topological spaces playing within $\infty$-category theory a role broadly analogous to that of $\mathsf{Set}$ within ordinary category theory (see \cite[\S 1.2.16]{LurieHTT} for other presentations). Its objects are termed \emph{anima}, and the category itself is frequently abbreviated $\mathsf{Ani}$. \item The animation $\mathsf{Ani(Ab)}$ of the category of abelian groups is equivalent (via the Dold-Kan equivalence) to the $\infty$-derived category of abelian groups $\mathsf{D}_{\geq 0}(\mathsf{Ab})$ in non-negative homological degrees. 
\end{enumerate} Similarly for the categories $\mathsf{Cond(Set)}$ and $\mathsf{Cond(Ab)}$; for these and the above examples and the following definition and lemma, see again \cite[\S 11]{CS2}, where the terminology \emph{nonabelian derived category} for the animation construction is also noted. As it happens, the $\infty$-categorical condensation and animation operations commute: \begin{definition} \label{def:inftycatcondensed} Let $\mathcal{C}$ be an $\infty$-category that admits all small colimits. For any uncountable strong limit cardinal $\kappa$, the $\infty$-category $\mathsf{Cond}_\kappa(\mathcal{C})$ of $\kappa$-condensed objects of $\mathcal{C}$ is the category of contravariant functors from $\mathsf{ED}_\kappa$ to $\mathcal{C}$ which take finite coproducts to products. One then lets $\mathsf{Cond}(\mathcal{C})=\mathrm{colim}_{\kappa\in S}\,\mathsf{Cond}_\kappa(\mathcal{C})$ for $S$ the class of strong limit cardinals, just as in Definition \ref{def:condset}. \end{definition} \begin{lemma} Let $\mathcal{C}$ be a category that is generated under small colimits by $\mathcal{C}^{\mathrm{cp}}$. Then there is a natural equivalence of $\infty$-categories $\mathsf{Cond(Ani(}\mathcal{C}))\cong\mathsf{Ani(Cond(}\mathcal{C}))$. \end{lemma} It follows that $\mathsf{Cond(Ani)}$, the category of condensed anima, embraces within a single setting both topological and homotopical domains, in the sense that the natural embeddings $e:\mathsf{Cond(Set)}\to\mathsf{Cond(Ani)}$ and $\mathsf{Ani(Set)}\to\mathsf{Ani(Cond(Set))}$ are each fully faithful. For more on these matters, see \cite{Mair}; the argument therein that $\mathsf{ED}$ spaces define the compact projective objects of $\mathsf{Cond(Set)}$ also readily adapts to show that their $e$-images form a generating class of compact projective objects in $\mathsf{Cond(Ani)}$. This, briefly, is the context for Theorem \ref{thm:productsanima}. For its argument, we recall one further notion, that of a \emph{hypercover}. 
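As a touchstone for the definition to come, recall the classical construction that hypercovers generalize: for an open cover $\mathcal{U}=\{U_i\}$ of a space $X$ and a sheaf $\mathcal{F}$, the \v{C}ech complex is built from the higher-order intersections of the $U_i$,

```latex
\check{C}^{\,n}(\mathcal{U},\mathcal{F})
  =\prod_{i_0,\dots,i_n}\mathcal{F}\big(U_{i_0}\cap\cdots\cap U_{i_n}\big),
\qquad
(d\sigma)_{i_0\dots i_{n+1}}
  =\sum_{k=0}^{n+1}(-1)^k\,
   \sigma_{i_0\dots\widehat{i_k}\dots i_{n+1}}
   \big|_{U_{i_0}\cap\cdots\cap U_{i_{n+1}}};
```

a hypercover, by contrast, allows the covering family itself to be refined anew at each simplicial level, rather than being fixed once and for all by the level-$0$ data.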
Much as the notion of a site generalizes that of a topological space, hypercovers may be regarded as simultaneously generalizing the notions both of topological covers and of projective resolutions: a hypercover of an object $X$ of $\mathcal{C}$ is an augmented simplicial object $$\cdots\substack{\longrightarrow\\[-1em] \longrightarrow \\[-1em] \longrightarrow}R_1\substack{\longrightarrow\\[-1em] \longrightarrow} R_0\longrightarrow X$$ in $\mathcal{C}$ with $X$ in degree $-1$. (Above, we've left degeneracy maps from lower to higher degrees undepicted simply for typographical reasons, and we will tend in what follows to denote such a hypercover simply by $R_\bullet$, i.e., to identify it with its non-negatively graded portion, so that a more scrupulous notation would be $R_\bullet\to X$.) Like covers or resolutions, hypercovers facilitate cohomology computations by decomposing their target into more convenient objects, and the induced hypercovering system of higher-order intersections $\bigcap_{i\in I} U_i$ of elements of an open cover $\mathcal{U}$ of a topological space $X$ which underlies the \v{C}ech cochain complex of $(X,\mathcal{U})$ is among the most fundamental of examples. The more general notion of a hypercover is often motivated as allowing for further refinements of the cover at each level $n$ of the complex, a prospect technically managed by an $n$-coskeleton condition for which we refer the reader to any standard reference (\cite[\S 8]{ArtinMazur}, \cite[\S 4]{DuggerIsak}, \cite[\S 4]{Conrad}). The important points for our purposes are the following: \begin{enumerate}[label=(\alph*)] \item In the present context, the ``convenient objects'' are the $\mathsf{ED}$ spaces; in particular, any $X\in\mathsf{CHaus}$ admits a hypercovering $R_\bullet$ by $\mathsf{ED}$ spaces, and even a standard such ``free resolution'' \cite[2.2.5]{pyknotic}.
For hypercoverings of topological spaces $R_\bullet\to X$, we have moreover that $X$ is weakly homotopy equivalent to the homotopy colimit of $R_\bullet$ \cite{DuggerIsakRealizations}. \item Just as was the case for ordinary categories (Definition \ref{def:condset}), the sheaf condition defining condensed $\infty$-categories considerably simplifies when we restrict the underlying site to $\mathsf{ED}$ spaces, a simplicity on display in Definition \ref{def:inftycatcondensed}. In the $\infty$-categorical setting, though, that definition is equivalent not to the category of \emph{sheaves} on the profinite site, but of \emph{hypersheaves} (see \cite[\S 9]{Masterclass}, \cite[\S 2.2.2--2.2.7]{pyknotic} for this point in the pyknotic setting, and \cite[\S A.4]{Mann} for a concise review of the relevant notions). In other words, writing $\mathsf{StrLim}$ for the class of strong limit cardinals, a condensed anima may equivalently be characterized as an element of $\mathrm{colim}_{\kappa\in\mathsf{StrLim}}\,\mathrm{Fun}(\mathsf{ProFin}_\kappa^{\mathrm{op}},\mathsf{Ani})$ satisfying the $\infty$-categorical analogues of conditions (1) through (3) of Definition \ref{def:condset}; the equalizer condition $$F(S)=\mathrm{lim}\,(F(T)\rightrightarrows F(T\times_S T))\textnormal{ for any surjection }T\twoheadrightarrow S$$ of item (3), in particular, translates precisely to \begin{align} \label{eq:hypersheaf} F(S)=\mathrm{lim}\,F(T_\bullet) \end{align} for any hypercover $T_\bullet$ of $S$ in the category of profinite spaces (cf.\ \cite[A.4.20]{Mann} and the references therein). Note that this condition in turn implies that $S = \mathrm{colim}\,T_\bullet$ when each is interpreted in the category $\mathsf{Cond(Ani)}$. This follows from the Yoneda lemma, together with the fact that $$\mathrm{Hom}(S,F)=F(S)=\mathrm{lim}\,F(T_\bullet)=\mathrm{lim}\,\mathrm{Hom}(T_\bullet,F)=\mathrm{Hom}(\mathrm{colim}\,T_\bullet,F),$$ by (\ref{eq:hypersheaf}), for all condensed anima $F$. 
More colloquially, the hypersheaf condition ensures that the embedding $\mathsf{ProFin}\to\mathsf{Cond(Ani)}$ conserves the (weak) equivalences noted in item (a) above. \end{enumerate} We turn now to the derivation of Theorem \ref{thm:productsanima} from Theorem \ref{thm:injdimconstantfield}. The task essentially reduces to proving the following lemma. \begin{lemma} \label{lem:43to44} For all fields $K$ and $\mathsf{ED}$ spaces $S$ and $T$, if $S\times T$, viewed as a product of condensed anima, is compact, then the sheaf $\mathcal{K}$ on the topological space $S\times T$ is of finite injective dimension. \end{lemma} The point, of course, is that for any large $S$ and $T$ as in Theorem \ref{thm:injdimconstantfield}, the sheaf $\mathcal{K}$ on the topological space $S\times T$ is \emph{not} of finite dimension, hence the product $S\times T$ of compact projective anima is not compact, and this proves Theorem \ref{thm:productsanima}. \begin{proof}[Proof of Lemma \ref{lem:43to44}] Fix $K,S$, and $T$ as in the statement of the lemma, so that the product $S\times T$ of anima is compact. Fix as in item (a) above a hypercover $R_\bullet$ of the topological space $S\times T$ by $\mathsf{ED}$ spaces. For each $n\in\omega$ write $f_n$ for the induced map $R_n\to S\times T$ and $X_n$ for $\mathrm{colim}\,(R_\bullet)_{\leq n}$, that is, for the colimit in $\mathsf{Cond(Ani)}$ of the truncation of the simplicial hypercover $R_\bullet$ to its first $n+1$ terms. By item (b) above, the sequential colimit $\mathrm{colim}_{n\in\omega}\,X_n$ over the system of natural maps $X_m\to X_n$ $(m\leq n)$ is equal in $\mathsf{Cond(Ani)}$ to $S\times T$; from this it follows by our compactness assumption that the identity map on $S\times T$ factors through some $X_n$. Write $g$ for the associated retraction map $X_n\to S\times T$. 
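Concretely, the compactness of $S\times T$ enters via the equivalence

```latex
\mathrm{Hom}\big(S\times T,\ \mathrm{colim}_{n\in\omega}X_n\big)
  \simeq \mathrm{colim}_{n\in\omega}\,\mathrm{Hom}(S\times T,\,X_n),
```

so that the identity of $S\times T=\mathrm{colim}_{n\in\omega}X_n$, viewed as a point of the left-hand side, already appears at some finite stage, furnishing the section $S\times T\to X_n$ and the retraction $g$ just named.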
For any sheaves $\mathcal{A}$ and $\mathcal{B}$ on $S\times T$, then, $\mathcal{B}$ is a retract of $g_*g^*\mathcal{B}$ and hence $\mathrm{RHom}(\mathcal{A},\mathcal{B})$ is a retract of $\mathrm{RHom}(\mathcal{A},g_*g^*\mathcal{B})$ as well. Our proof will conclude by justifying the claim that $g_*g^*\mathcal{B}$ is, in the derived $\infty$-category of sheaves on $S\times T$, the limit, over a finite diagram $I$, of the sheaves ${f_i}_*f_i^*\mathcal{B}$; let us first see, though, what the claim gives us: it supplies the first equality in the sequence \begin{align*}\mathrm{RHom}(\mathcal{A},g_*g^*\mathcal{B}) & =\mathrm{RHom}(\mathcal{A},\mathrm{lim}_I\,{f_i}_*f_i^*\mathcal{B}) \\ & =\mathrm{lim}_I\,\mathrm{RHom}(\mathcal{A},{f_i}_*f_i^*\mathcal{B}) \\ & =\mathrm{lim}_I\,\mathrm{RHom}(f_i^*\mathcal{A},f_i^*\mathcal{B}).\end{align*} For the second equality, recall that $\mathrm{RHom}$ preserves $\infty$-categorical limits in the second coordinate; the third is just the pushforward-pullback adjunction. When $\mathcal{B}=\mathcal{K}$, though, we know from Theorem \ref{thm:injdimconstantfield} that $f_i^*\mathcal{B}$ is of injective dimension at most $1$. Hence as $\mathcal{A}$ ranges over sheaves of $K$-modules on $S\times T$, the complex $\mathrm{RHom}(\mathcal{A},g_*g^*\mathcal{K})$ is a finite limit of uniformly bounded complexes and therefore itself uniformly bounded. So too, in consequence, are its retracts $\mathrm{RHom}(\mathcal{A},\mathcal{K})$, a conclusion amounting precisely to our assertion that the sheaf $\mathcal{K}$ on $S\times T$ is of finite injective dimension. It remains only to justify our claim; in essence, it's the sum of two subclaims. The first is that the functor $D$ which sends each profinite set $T$ to the $\infty$-category of sheaves on $T$, and morphism $f:S\to T$ to $f^*:D(T)\to D(S)$, satisfies descent, i.e., satisfies the condition (\ref{eq:hypersheaf}) for the functor $F=D$ \cite[Lem.\ 3.5.12]{Mann}. 
The second is that coupling this first subclaim with Lemma D.4.7(ii) of \cite{Mann} implies the claim: within the lemma, let $I$ denote the diagram $\Delta_{\leq n}^{\mathrm{op}}$ defining $X_n$, let $\mathcal{D}_i=D(R_i)$, let $\mathcal{C}=D(S\times T)$, let $F_i=f_i^*$, and let $F=g^*$. Then by descent, $D(X_n)=\mathrm{lim}_I\,D(R_i)$, and the lemma exhibits the right adjoint $g_*$ of $g^*$ as a limit of the right adjoints ${f_i}_*$ of $f_i^*$ in the manner, precisely, of our claim. \end{proof} A similar argument shows the following: \begin{lemma} Let $S$ and $T$ be $\mathsf{ED}$ spaces. If $S\times T$, viewed as a product of condensed anima, is compact, then $\mathbb{Z}[S\times T]$ is of finite projective dimension. \end{lemma} \begin{proof} Just as above, one fixes a hypercover $R_\bullet$ of $S\times T$ by $\mathsf{ED}$ spaces, lets $X_n=\mathrm{colim}\,(R_\bullet)_{\leq n}$ for each $n\in\omega$, and deduces that $S\times T$ is a retract of some $X_n$. It follows that $\mathbb{Z}[S\times T]$ is a retract of $\mathbb{Z}[X_n]$, and as the latter is a finite complex of projective condensed abelian groups, this concludes the proof. \end{proof} \section{Conclusion} \label{sect:conclusion} As Corollary \ref{cor:413} and Remark \ref{rmk:betternontrivs} respectively underscore, nothing in our work so far rules out either of the following possibilities: \begin{question} Is it consistent with the $\mathsf{ZFC}$ axioms that for all $\mathsf{ED}$ spaces $S$ and $T$ of cardinality less than $\aleph_\omega$, $S\times T$, viewed as a product of condensed anima, is compact? \end{question} \begin{question} Is it consistent with the $\mathsf{ZFC}$ axioms that a product $S\times T$ as above is compact whenever one of the factors is of cardinality less than $\aleph_\omega$?
\end{question} Left even more conspicuously open is item (D) of Section \ref{subsect:products}: \begin{question} \label{ques:condabprods} Is it consistent with the $\mathsf{ZFC}$ axioms that tensor products of compact projective objects in $\mathsf{Cond(Ab)}$ are of finite projective dimension? \end{question} This might be loosely thought of as the derived version of the question of whether such products are projective (recall that by Proposition \ref{prop:projnotprods} the answer is no). Just as for that question, instances of this one admit formulations in functional analytic terms (cf.\ \cite[Appendix to III]{CS3}). For example: \begin{question} Is it consistent with the $\mathsf{ZFC}$ axioms that either of the Banach spaces $C(\beta\mathbb{N}\times\beta\mathbb{N})$ or $c_0$ is of finite injective dimension? \end{question} If not, then the answer to Question \ref{ques:condabprods} is negative as well. Worth noting is one further question in this line, listed as Question 3.6 of \cite{CS3}: \emph{Are all compact projective condensed abelian groups isomorphic to $\mathbb{Z}[S]$ for some $\mathsf{ED}$ space $S$?} We turn next to a more overtly combinatorial series of questions, beginning with the following: \begin{question} Fix $n>1$. Is it a \textsf{ZFC} theorem that there exist cardinals $\kappa$ and $\lambda$ and an abelian group $H$ such that $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\lambda}[H]\neq 0$? \end{question} Note that a nonvanishing $\mathrm{lim}^s\,\mathbf{A}_{\kappa,\lambda}$ would imply a negative answer to the $n=s+1$ instance of the following question, by way of the exact sequence in the proof of Corollary \ref{cor:nonadditive}. \begin{question} \label{ques:add_of_lims} For any fixed $n>2$, is it consistent with the \textsf{ZFC} axioms that the functor $\mathrm{lim}^n:\mathsf{Pro(Ab)}\to\mathsf{Ab}$ is additive?
\end{question} Even if, as suspected, the answer is no, $\mathrm{lim}^n$ might consistently be additive for collections of \textit{towers}, as discussed in Section \ref{sec:additivity}. A good test for this prospect is the following question; variations on it appear in the works \cite{Be17,Bergfalk_simultaneously_23,Bannister_additivity_23} as well. \begin{question} Is it consistent with the \textsf{ZFC} axioms that $\mathrm{lim}^n\,\mathbf{A}_{\kappa,\omega}=0$ for all $n>0$ and cardinals $\kappa$? \end{question} As indicated, questions like these seem to form the essential content of standing questions falling more purely within the field of algebraic topology: \begin{question} Is it consistent with the \textsf{ZFC} axioms that strong homology is additive on the class of all locally compact metric spaces, or even on all metric spaces outright? \end{question} \section*{Acknowledgements} A portion of this work was carried out during its authors' research visits to the Fields Institute's spring 2023 Thematic Program on Set Theoretic Methods in Algebra, Dynamics and Geometry; we thank both the institute and the program's organizers for their hospitality and support. We reiterate our thanks to Dustin Clausen and Peter Scholze for their shaping role in this paper's results, as well as to Lucas Mann for his generosity in clarifying both the details and significance of several of its arguments, and thank both Catrin Mair and Michael Hru\v{s}\'{a}k for discussions benefiting the paper as well. \bibliographystyle{plain} \bibliography{condensedbib} \end{document}
\pdfoutput=1 \documentclass[a4paper, sumlimits, intlimits, titlepage]{amsart} \usepackage{latexsym} \usepackage{simplewick} \usepackage{amssymb,amsmath,amsthm,amsfonts} \usepackage{bbm} \numberwithin{equation}{section} \usepackage{mathtools} \usepackage{simpler-wick} \usepackage{epsf} \usepackage{color} \usepackage{graphicx} \usepackage{subfig} \usepackage{dsfont} \usepackage{tensor} \usepackage{physics} \usepackage{bm} \usepackage[backend=biber,style=numeric,sorting=none]{biblatex} \addbibresource{main.bib} \usepackage[export]{adjustbox} \usepackage{hyperref} \definecolor{darkred}{rgb}{0.8,0.1,0.1} \hypersetup{colorlinks=true, linkcolor=darkred, citecolor=blue, linktoc=page} \graphicspath{{figures/}} \usepackage[utf8]{inputenc} \def\tr{{\rm tr}} \newcommand{\Ham}{\mathcal{H}} \newcommand{\1}{1} \newcommand{\RR}{\ensuremath{\mathbb R}} \newcommand{\NN}{\ensuremath{\mathbb N}} \newcommand{\ZZ}{\ensuremath{\mathbb Z}} \newcommand{\CC}{\ensuremath{\mathbb C}} \newcommand{\ra}{\ensuremath{\rightarrow}} \newcommand{\bpm}{\ensuremath{\begin{pmatrix}}} \newcommand{\epm}{\ensuremath{\end{pmatrix}}} \newcommand{\expt}[1]{\left< #1 \right> } \newcommand{\innn}[3]{\left< #1 \left| #2 \right| #3 \right>} \newcommand{\inn}[2]{\left< #1 \left| #2 \right. \right>} \newcommand{\subn}[1]{\vspace{.4 cm} \noindent \textbf{#1. 
} } \newcommand{\sub}[1]{\vspace{.4 cm} \noindent \textbf{(#1) } } \DeclareMathOperator{\intinf}{\int_{-\infty}^{\infty}} \DeclareMathSymbol{:}{\mathord}{operators}{"3A} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\GP}{GP} \newcommand{\Le}{\mathrm{L}} \newcommand{\Ri}{\mathrm{R}} \renewcommand{\labelenumi}{(\roman{enumi})} \usepackage{comment} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \begin{document} \title{Casimir Radial Parts via Matsuki Decomposition} \author[Schl\"osser]{Philip Schl\"osser} \email{[email protected]} \address{Radboud University, IMAPP-Mathematics, Heyendaalseweg 135, 6525 AJ NIJMEGEN, the Netherlands} \author[Isachenkov]{Mikhail Isachenkov} \email{[email protected]} \address{Universiteit van Amsterdam: Korteweg-de Vries Institute for Mathematics, Science Park 107, 1090 GE AMSTERDAM\\ and Institute of Physics, Science Park 904, 1098 XH AMSTERDAM, the Netherlands} \subjclass[2020]{33C55, 33C67, 33C80, 43A90, 81T40} \keywords{matrix-spherical function, symmetric pair, Cartan decomposition, conformal blocks, Calogero--Sutherland model, degenerate double-affine Hecke algebra, Cherednik--Dunkl operator, Heckman--Opdam hypergeometric function} \begin{abstract} We use Matsuki's decomposition for symmetric pairs $(G,H)$ of (not necessarily compact) reductive Lie groups to construct the radial parts for invariant differential operators acting on matrix-spherical functions. 
As an application, we employ this machinery to formulate an alternative, mathematically rigorous approach to obtaining radial parts of Casimir operators that appear in the theory of conformal blocks, which avoids poorly defined analytical continuations from the compact quotient cases. To exemplify how this works, after reviewing the presentation of conformal 4-point correlation functions via matrix-spherical functions for the corresponding symmetric pair, we for the first time provide a complete analysis of the Casimir radial part decomposition in the case of Lorentzian signature. As another example, we revisit the Casimir reduction in the case of conformal blocks for two scalar defects of equal dimension. We argue that Matsuki's decomposition thus provides a proper mathematical framework for analysing the correspondence between Casimir equations and the Calogero--Sutherland-type models, first discovered by one of the authors and Schomerus. \end{abstract} \setcounter{footnote}{0} \maketitle \tableofcontents \setcounter{equation}{0} \section{Introduction} A matrix-spherical function for two (topological, real Lie, complex Lie, algebraic) groups $(G,H)$ with $H\le G$ and finite-dimensional $H$-representations $V,W$ is a (continuous, smooth, holomorphic, regular) function $f:\, G\to\Hom(V,W)$ satisfying \[ \forall g\in G, h,h'\in H:\quad f(hgh') = \pi_W(h) f(g) \pi_V(h'). \] The theory of matrix-spherical functions, and the matrix-valued orthogonal polynomials they give rise to, has been well studied, with references including \cite{warner,CM,mvop}. Considerable attention has been given to matrix-spherical functions for symmetric pairs $(G,K)$, i.e. when there is an involutive automorphism $\theta$ of $G$ and $K=G^\theta$ is compact. However, the literature becomes more sparse when we allow $K$ to be non-compact.
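As a quick sanity check of the definition in the simplest compact example (our own illustration, not drawn from the references above): for the pair $(SO(3),SO(2))$, with $SO(2)$ embedded as rotations about $e_3$ and $V=W$ the trivial one-dimensional representation, the functions $g\mapsto P_\ell(g_{33})$ (a Legendre polynomial evaluated at the bottom-right matrix entry) are bi-$SO(2)$-invariant and hence matrix-spherical. The bi-invariance can be verified numerically:

```python
import numpy as np

def rz(t):
    """Element of H = SO(2), embedded in G = SO(3) as rotations about e_3."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def f(ell, g):
    """Candidate spherical function f(g) = P_ell(g_33) for trivial V, W."""
    return np.polynomial.legendre.legval(g[2, 2], [0.0] * ell + [1.0])

rng = np.random.default_rng(1)
q, r = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
g = q @ np.diag(np.sign(np.diag(r)))              # fix column sign ambiguity
if np.linalg.det(g) < 0:
    g[:, [0, 1]] = g[:, [1, 0]]                   # land in SO(3), not just O(3)

h, hp = rz(0.4), rz(-1.1)
for ell in range(6):
    # f(h g h') = f(g), since h and h' both fix the third basis vector
    assert np.isclose(f(ell, h @ g @ hp), f(ell, g))
print("bi-invariance of g -> P_ell(g_33) verified")
```

Here the invariance is immediate from $e_3^{\mathsf T}hgh'e_3 = e_3^{\mathsf T}ge_3$; the snippet merely confirms it on a random group element.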
The reason for this is that the well-studied cases correspond to the situation (and slight variations thereof) that the reductive group $(G,K,\theta,B)$ (see e.g. \cite[Section~VII.2]{knapp}) is quotiented by its maximal compact subgroup $K$, and thus there is an Abelian subgroup $A$ such that $G=KAK$ \cite[Section VII.3]{knapp}. This allows one to reconstruct a $(G,K)$-matrix-spherical function uniquely from its restriction to $A$. Leveraging this fact, invariant differential operators can be decomposed into matrix-valued differential operators on $A$, their \emph{radial parts}, as is done in \cite{CM}, following the earlier works of Harish-Chandra. The proof that $G$ decomposes as $KAK$ quite crucially relies on the fact that $K$ is compact, and such a statement at face value is usually wrong otherwise. Actually, as is shown in \cite{matsuki}, the best we can hope for with a general symmetric pair $(G,H)$ is \[ G = \overline{\bigcup_i HC_iH} \] for finitely many ``affine tori'' $C_i$, i.e. cosets with respect to an Abelian subgroup. Fortunately, it turns out that in this case matrix-spherical functions are still determined by their restrictions to all of the $C_i$. In the present work we will take this story one step further and explore how invariant differential operators, in particular the quadratic Casimir element, decompose in such a setting. This has a direct application to conformal field theory (CFT), more specifically the theory of conformal blocks. As is explained in \cite[Section~3]{harmony}, conformal blocks for (Euclidean) 4-point correlators can be described as matrix-spherical functions for $(SO(d+1,1)_0, SO(1,1)_0\times SO(d))$.\footnote{Notice, however, that a rigorous treatment of correlation functions in a CFT also needs to take into account their distributional nature. In the present paper we choose not to work in this generality. See e.g.
\cite{KQR-I, KQR} for the recent relevant work in this direction and a review of earlier physics literature.} In the scalar case (i.e. $SO(d)$ acting trivially, and $SO(1,1)_0\cong\RR_{>0}$ acting by characters), the action of the quadratic Casimir operator was first determined in \cite{dolanOsborn}. The authors of \cite{superintegrability} then showed that a change of coordinates transforms the resulting differential operator into the Hamiltonian of the Calogero--Sutherland model for $BC_2$, with parameters that refer not only to multiplicities of a relevant root system, but also to the characters by which $SO(1,1)_0$ acts from the left and from the right. The Calogero--Sutherland model is a super-integrable model with a long history. In particular, it has been extensively studied by Heckman and Opdam (see e.g. \cite{heckmanSchlichtkrull} and references therein) and has been known to capture the Casimir action on zonal spherical functions for the more conventionally studied symmetric pairs $(G,K)$, where $K$ has to be compact. This connection, however, is not enough to explain the result from \cite{superintegrability}, as $SO(1,1)_0\times SO(d)$ is non-compact, and as the conformal blocks in question are not zonal spherical functions. What we will show in the present paper is that, with Matsuki's decomposition in hand, the steps from \cite{heckmanSchlichtkrull} can be reproduced very closely to yield parallel results in the vastly generalised setting, including for example the case of Lorentzian kinematics, where the configuration space of points on the group has richer structure compared to the Euclidean situation.
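The mechanism behind such statements can be illustrated in a rank-one toy model (a sketch of ours, schematically mimicking the radial Casimir of a rank-one pair; it is not the $BC_2$ computation of \cite{superintegrability}): conjugating the model radial operator $L=\partial_t^2+2k\coth(t)\,\partial_t$ by the half-density $\delta^{1/2}=\sinh^k t$ yields a Schr\"odinger operator with a $1/\sinh^2$ (P\"oschl--Teller, i.e. rank-one Calogero--Sutherland) potential, $\sinh^k t\circ L\circ\sinh^{-k}t=\partial_t^2+k(1-k)\sinh^{-2}t-k^2$. This identity can be checked symbolically:

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
k = sp.Symbol('k', positive=True)
g = sp.Function('g')

def L(u):
    # model radial operator (rank one): L = d^2/dt^2 + 2k coth(t) d/dt
    return sp.diff(u, t, 2) + 2 * k * sp.coth(t) * sp.diff(u, t)

w = sp.sinh(t) ** k                      # half-density delta^{1/2}
conjugated = sp.expand(w * L(g(t) / w))  # (w L w^{-1}) applied to g

# expected Schroedinger form: g'' + (k(1-k)/sinh^2 t - k^2) g
target = sp.diff(g(t), t, 2) + (k * (1 - k) / sp.sinh(t) ** 2 - k ** 2) * g(t)

residual = sp.simplify((conjugated - target).rewrite(sp.exp))
print(residual)
```

The gauge factor $\delta^{1/2}$ plays the role of the square root of the density appearing in the radial part computations below; the left-over constant $-k^2$ is the analogue of the $\rho$-shift.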
As a matter of fact, we will develop most of the theory for the general symmetric pair context, then specialize to a general indefinite orthogonal group $G=SO(p+1, q+1)$, and only further down the road specialize to Euclidean and Lorentzian signatures ($q=0,1$), once the real rank of $G$ starts to affect the amount of calculations needed in an essential way. Lastly, conformal blocks for two defects of the same dimension can also be described as spherical functions, namely for $(SO(d+1,1)_0, SO(d-p)\times SO(p+1,1)_0)$. We will see that this setup also falls within the domain of validity for Matsuki's decomposition, so that we will be able to similarly use it in the radial part decomposition of the quadratic Casimir element, clarifying the appearance of the Calogero--Sutherland model in this context, which was argued earlier in \cite{defect}. Let us stress that the issue of having a properly defined Cartan decomposition in the non-compact case is also important from the physics perspective, perhaps contrary to what might seem at first glance. Being sometimes perceived more as a mathematical nuisance, it is usually glossed over in the physics literature by declaring it to be a more or less straightforward ``analytical continuation'' from the compact case. However, as is frequently the case with going from compact spaces to non-compact ones, the ``boundary questions'' actually cause trouble and turn out to be rather non-trivial mathematically, especially in treating lower-dimensional group orbits. Notice that, despite the possible lower dimensionality of such strata, good analytical control over them is of relevance, since various such regions might carry information about singularities of the physical correlation functions.
As we have just reviewed above, bits and pieces of this radial part analysis, mostly but not exclusively in the Euclidean setting, have appeared in the physics literature before, see, in particular, \cite{Mack-book, harmony, Isachenkov:2017qgn, BSI, Buric:2022ucg}. Spherical functions relevant for Lorentzian analysis in the scalar case were also recently discussed in \cite{Agarwal:2023xwl}. However, the mathematically rigorous description of the Casimir reduction in the setting pertinent to the four-point (spinning) conformal blocks, including the case of Lorentzian signature, was, as far as we are aware, missing from the literature up to now. In particular, a careful analysis of the subtleties of the $KAK$-type matrix decomposition in the non-compact case that we provide in this paper using Matsuki's theory is, to our knowledge, new.\footnote{For example, in \cite{Buric:2022ucg} the existence of such a decomposition is attributed to the Gel'fand pair property of $(G,H)$. However, we are not aware of any general statements yielding such a group factorisation for a given non-compact Gel'fand pair.} This, besides the wish to streamline and clean up mathematical details, might be seen as the main motivation behind this work on the physics side. It puts the $KAK$-type decompositions used in the Calogero--Sutherland approach to conformal blocks on a firm mathematical ground. The plan of the paper is as follows. In Section~\ref{sec:msf} we review some generalities on matrix-spherical functions, especially with respect to group actions on them. We then go on, in Section~\ref{sec:matsuki}, to review Matsuki's theory of decompositions \cite{matsuki} in the special case where the two involutions are equal, $\sigma=\tau$, as are the two subgroups, $H=L$. In Section~\ref{sec:radial-parts} we develop this into a theory of radial part decompositions and compute the radial part of the quadratic Casimir element.
In Section~\ref{sec:cb} we then pivot to applications in the theory of conformal blocks and explain how conformal blocks for 4-point correlation functions can be viewed as matrix-spherical functions in the sense defined earlier. In particular, in Section~\ref{sec:casimir-eq} we carry out a thorough analysis of Matsuki's theory for the group $SO(p+1,q+1)_0$, derive the expression of the quadratic Casimir element and match it with the scalar (Heckman--Opdam) expression from \cite{superintegrability} and the spinor expression from \cite{BSI}. In Section~\ref{sec:defect} we then provide a brief discussion of defect blocks and match our results with \cite{defect}. We close this paper with a short summary and discussion of possible future directions. A word about notation: most of the notation is chosen in such a way that it is compatible with \cite{matsuki}. The notation of Section~\ref{sec:cb} that pertains to $G,\mathfrak{g}$ and various subgroups and subalgebras is more in line with the standard treatment of the structure theory of simple Lie groups (as presented in \cite{knapp}) and will, due to collisions with \cite{matsuki}'s notation, not be used in the subsequent sections. In Subsection~\ref{sec:cb-scalar} (and examples beforehand), we introduce the scalar parameters $\alpha,\beta$, so roots will from then on be called $\gamma$ rather than $\alpha,\beta$. \section{Matrix-Spherical Functions (MSFs)}\label{sec:msf} \begin{definition} Let $(G,H)$ be a pair of groups, i.e. $G$ is a group and $H\le G$ is a subgroup. Let $(V,\pi_V),(W,\pi_W)$ be representations of $H$. A function $f:\, G\to\Hom(V,W)$ is said to be a \emph{matrix-spherical function} (MSF) if \[ \forall g\in G, h,h'\in H:\quad f(hgh') = \pi_W(h) f(g) \pi_V(h'). \] In the case where $G,H$ are real Lie groups (complex Lie groups, algebraic groups), we usually also require $f$ to be smooth (holomorphic, regular). Write $E^{V,W}(G,H)$ for the set of matrix-spherical functions for $(G,H)$ with the representations $V,W$.
In the more general case, where $W$ is an $H$-bimodule, we can make the same definition: a function $f:\, G\to W$ is an MSF if \[ \forall g\in G,h,h'\in H:\quad f(hgh') = h\cdot f(g)\cdot h'. \] However, as we cannot interpret this function as being matrix-valued anymore, the name MSF is a misnomer. Write $E^W(G,H)$ for the functions thus described. \end{definition} We now consider the case where $(G,H)$ are Lie groups and $W$ an $H$-bimodule with two smooth actions. \begin{lemma}\label{sec:lem-msf-action-diffops} $E^W(G,H)$ has actions by $U(\mathfrak{g})^H$ in terms of left-invariant and right-invariant differential operators. Furthermore, these two actions are pointwise related: for $f\in E^W(G,H)$ (or $C^\infty(G,W)$ more generally), $g\in G, p\in U(\mathfrak{g})^H$ with $\Ad(g)(p)\in U(\mathfrak{g})^H$ we have \[ (p\cdot f)(g) = (f\cdot \Ad(g)(p))(g). \] \end{lemma} \begin{proof} We first show that $C^\infty(G,W)$ is a $(U(\mathfrak{g}), H)$-bimodule (which is derived from the left and right regular representations). For a function $f\in C^\infty(G,W)$, and elements $g\in G, h,h'\in H,X\in\mathfrak{g}$ we define \begin{align*} (h\cdot f\cdot h')(g) & := (h')^{-1}\cdot f(h'gh)\cdot h^{-1}\\ (X\cdot f)(g) & := \dv{t}\eval{f(g\exp(tX))}_{t=0}\\ (f\cdot X)(g) & := \dv{t}\eval{f(\exp(tX)g)}_{t=0}. \end{align*} Since multiplication in $G$ is associative, the left and right actions commute. Furthermore, for $h\in H, p\in U(\mathfrak{g})$ we have \begin{align*} h\cdot p\cdot h^{-1}\cdot f &= \Ad(h)(p)\cdot f\\ f\cdot h\cdot p \cdot h^{-1} &= f\cdot\Ad(h)(p), \end{align*} so that the left actions (resp. the right actions) are compatible with each other, which establishes the $(U(\mathfrak{g}),H)$-bimodule structure. Therefore, the invariants under the left action of $H$ still have a left action of $U(\mathfrak{g})^H$, analogously for the right actions. In particular, the elements that are left and right invariant under $H$ (i.e.
elements of $E^W(G,H)$) have a left and a right action of $U(\mathfrak{g})^H$. As can be seen from the definition, the left action of $U(\mathfrak{g})$ involves the right regular representation on $C^\infty(G)$ and therefore commutes with the left regular representation, so that it acts by left-invariant differential operators. Similarly, the right action of $U(\mathfrak{g})$ is an action by means of right-invariant differential operators. Lastly, we show the pointwise relation between the left and right actions without the $H$-invariance assumption, i.e. for $C^\infty(G,W)$ and $U(\mathfrak{g})$. Let $f\in C^\infty(G,W)$ and $X\in\mathfrak{g}$, then \[ (X\cdot f)(g) = \dv{t}\eval{f(g\exp(tX))}_{t=0} = \dv{t}\eval{f(\exp(t\Ad(g)(X))g)}_{t=0} = (f\cdot \Ad(g)(X))(g). \] Now assume that the claim already holds for $p\in U(\mathfrak{g})$; then we have \begin{align*} (Xp\cdot f)(g) &= (X\cdot (p\cdot f))(g) = ((p\cdot f)\cdot\Ad(g)(X))(g)\\ &= (p\cdot (f\cdot\Ad(g)(X)))(g) = ((f\cdot\Ad(g)(X))\cdot\Ad(g)(p))(g)\\ &= (f\cdot \Ad(g)(Xp))(g).\qedhere \end{align*} \end{proof} \begin{corollary} The actions from Lemma~\ref{sec:lem-msf-action-diffops} can be restricted to $U(\mathfrak{g})^G$. If $G$ is connected, this yields two actions of $Z(U(\mathfrak{g}))$ that coincide. \end{corollary} \begin{proof} If $G$ is connected, $G$-invariance is the same as $\mathfrak{g}$-invariance, so $U(\mathfrak{g})^G=Z(U(\mathfrak{g}))$. Let $z\in Z(U(\mathfrak{g})),g\in G$ and $f\in E^W(G,H)$. By Lemma~\ref{sec:lem-msf-action-diffops}, we have \[ (z\cdot f)(g) = (f\cdot\Ad(g)(z))(g) = (f\cdot z)(g).\qedhere \] \end{proof} \begin{corollary}\label{sec:cor-pull-out-h} For $p\in U(\mathfrak{g})$, $q\in U(\mathfrak{h})$ and $f\in E^W(G,H)$ we have \[ (pq\cdot f)(g) = (p\cdot f)(g)\cdot q,\qquad (f\cdot qp)(g) = q\cdot (f\cdot p)(g). \] \end{corollary} \begin{proof} We show the claim for $q$ being a monomial and then use linearity. Let $X_1,\ldots,X_r\in\mathfrak{h}$ and $q=X_1\cdots X_r$.
Let $g\in G$, then \begin{align*} (q\cdot f)(g) &= \dv{t_1}\cdots\dv{t_r}\eval{f(g\exp(t_1X_1)\cdots \exp(t_rX_r))}_{t_1=\cdots=t_r=0}\\ &= \dv{t_1}\cdots\dv{t_r} \eval{f(g)\cdot \exp(t_1X_1)\cdots \exp(t_rX_r)}_{t_1=\cdots=t_r=0}\\ &= f(g)\cdot q, \end{align*} whence $(pq\cdot f)(g) = (p\cdot (q\cdot f))(g) = (p\cdot f)(g)\cdot q$ as claimed. The result for the right action can be obtained analogously. \end{proof} Suppose now that we can decompose a dense subset $G'$ of $G$ as follows: \begin{equation}\label{eq:matsuki-decomposition} G' = \bigsqcup_{i\in I} HC_i H \end{equation} where $C_i\subseteq G$ are weakly embedded submanifolds (in our story these will turn out to be tori or right cosets of tori), i.e. the inclusion map $C_i\hookrightarrow G$ is an immersion. Parallel to the usual treatment, a spherical function is then determined by its restrictions to these tori $C_i$. \begin{lemma}\label{sec:lem-restriction-injective} The map $|_C:\,\, E^W(G,H)\to \prod_{i\in I} C^\infty(C_i,W)$ given by \[ f\mapsto (f|_{C_i})_{i\in I} \] is injective. \end{lemma} \begin{proof} Let $f,f'$ be such that $f|_C=f'|_C$, i.e. $f|_{C_i}=f'|_{C_i}$ for all $i\in I$. Since $f,f'$ are matrix-spherical, they agree on the corresponding $H\times H$-orbits as well, i.e. on $HC_i H$ for all $i\in I$, so in particular on \[ \bigcup_{i\in I} HC_i H = G'. \] Since both are continuous and $G'$ is dense in $G$, we have $f=f'$. \end{proof} \begin{definition} For $i\in I$ define \begin{align*} N_{C_i} &:= \{(h,h')\in H^2\mid hC_i h^{\prime-1}=C_i\}\\ Z_{C_i} &:= \{(h,h')\in H^2\mid \forall x\in C_i:\quad hxh^{\prime-1}=x\} \end{align*} and $J_{C_i}:= N_{C_i}/Z_{C_i}$. They are all groups. \end{definition} \begin{lemma}\label{sec:lem-msf-kak-restriction} Let $f\in E^W(G,H)$ and $i\in I$, then \[ f|_{C_i}\in C^\infty(C_i,W^{Z_{C_i}})^{J_{C_i}}. \] \end{lemma} \begin{proof} Let $g\in C_i$ and $(h,h')\in N_{C_i}$, then \[ f(hgh^{\prime-1}) = h\cdot f(g)\cdot h^{\prime-1} = (h,h')\cdot f(g).
\] For the case where $(h,h')\in Z_{C_i}$ we have $hgh^{\prime-1}=g$ and hence $f(g)\in W^{Z_{C_i}}$. Thus we see that $f|_{C_i}$ intertwines the actions of $N_{C_i}$, which pass to the quotient $J_{C_i}$. \end{proof} \section{Matsuki's Double Coset Decomposition}\label{sec:matsuki} We are now going to quote some results from \cite{matsuki} that will be relevant later. Note that Matsuki considers two (generally) different involutions $\sigma,\tau$ on $G$, whereas in this paper we shall always assume $\sigma=\tau$. Let $(G,H)$ be a symmetric pair of Lie groups, i.e. $H\le G$ a Lie subgroup and $\sigma\in\Aut(G)$ an involution such that $(G^\sigma)_0\le H\le G^\sigma$, where we assume that $HG_0H=G$. We shall also assume that $(G,K,\theta,B)$ is a reductive Lie group (see e.g. \cite[Section VII.2]{knapp}), where $\theta$ and (the derivative of) $\sigma$ commute. We shall also assume that $B$ is invariant under $\sigma$, which is no restriction by the following observation: \begin{lemma} Let $(G,K,\theta,B)$ be a reductive Lie group and let $S\le\Aut(G)$ be a finite subgroup whose (derived) action commutes with $\theta$. Then $B$ can without loss of generality be chosen to be $S$-invariant. \end{lemma} \begin{proof} If we choose \[ B'(X,Y):=\sum_{\sigma\in S} B(\sigma(X),\sigma(Y)) \] instead of $B$, we obtain an $\Ad(G)$-, $\theta$- and $S$-invariant symmetric bilinear form on $\mathfrak{g}$ that is still positive-definite on $\mathfrak{p}$ and negative-definite on $\mathfrak{k}$, and under which $\mathfrak{p}$ and $\mathfrak{k}$ are still orthogonal. In other words: $(G,K,\theta,B')$ is still a reductive Lie group. \end{proof} \begin{definition} Let $\mathfrak{t}$ be a maximal commutative subalgebra of $\mathfrak{k}^{-\sigma}$ and extend $\mathfrak{t}$ to a maximal commutative subalgebra $\mathfrak{c}=\mathfrak{t}\oplus\mathfrak{a}$ of $\mathfrak{g}^{-\sigma}$. The subgroup $C:=\exp(\mathfrak{c})=:\exp(\mathfrak{a})T$ is called a \emph{fundamental Cartan subset}.
\end{definition} \begin{definition} Given a fundamental Cartan subset $C$ as above, a subset $C'=\exp(\mathfrak{c}')t$ of $G$ is called a \emph{standard Cartan subset} if \begin{enumerate} \item $t\in T$ and $\mathfrak{c}'\le\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$ is a commutative subalgebra; \item $\mathfrak{c}'=\mathfrak{t}'\oplus\mathfrak{a}'$ where $\mathfrak{t}'\le\mathfrak{t}$ and $\mathfrak{a}\le\mathfrak{a}'\subseteq\mathfrak{p}$; \item $\dim(\mathfrak{c}')=\dim(\mathfrak{c})$. \end{enumerate} Two standard Cartan subsets $\exp(\mathfrak{a}'_1)T'_1$ and $\exp(\mathfrak{a}'_2)T'_2$ are said to be \emph{conjugate} if there are $h,h'\in K\cap H$ such that $hT'_1h^{\prime-1}=T'_2$. In particular, any two Cartan subsets with the same $T'$ are conjugate. \end{definition} \begin{example} For the case of $\sigma=\theta$ and $H=K$, i.e. the case of non-compact symmetric spaces, we pick $\mathfrak{t}:=0$ and $\mathfrak{a}$ as a maximal commutative subalgebra of $\mathfrak{p}$. Then the fundamental Cartan subset $C$ is the analytic subgroup $A$ corresponding to $\mathfrak{a}$, i.e. the torus one finds in the usual treatment for this case, e.g. \cite[Theorem~6.46]{knapp}. There are also no other standard Cartan subsets (with respect to $C$) as $\mathfrak{a}$ is already a maximal commutative subalgebra of $\mathfrak{g}^{-\sigma}$. \end{example} From now on, we shall fix a fundamental Cartan subset $C\le G$. \begin{lemma}\label{sec:lem-Adt-involution} Let $C'=\exp(\mathfrak{c}')t$ be a standard Cartan subset, then $\Ad(t)$ is an involution of $\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$. In particular, $C'$ and $tC't^{-1}$ are conjugate standard Cartan subsets. \end{lemma} \begin{proof} Let $X\in\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$, so that $X,\Ad(t^{-1})(X)\in\mathfrak{g}^{-\sigma}$. Note that $t\in\exp(\mathfrak{k}^{-\sigma})$, whence $\sigma(t^{-1})=t$.
Consequently, we have \[ -\Ad(t^{-1})(X) = \sigma(\Ad(t^{-1})(X)) = \Ad(t)(\sigma(X)) = -\Ad(t)(X), \] which implies $\Ad(t^{-2})(X)=X$. Therefore we have $\Ad(t)(X),X\in\mathfrak{g}^{-\sigma}$, so that \[ \Ad(t)(X)\in\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma}). \] Now, note that $tC't^{-1}=t\exp(\mathfrak{c}')=\exp(\Ad(t)(\mathfrak{c}'))t$ and that $\Ad(t)(\mathfrak{c}')$ decomposes as \[ \mathfrak{t}'\oplus\Ad(t)(\mathfrak{a}') \] since $t\in\exp(\mathfrak{t})$ commutes with $\mathfrak{t}'\le\mathfrak{t}$. Therefore, $tC't^{-1}$ is also a standard Cartan subset that shares its ``compact part'' $\exp(\mathfrak{t}')t$ with $C'$ and is therefore conjugate to $C'$. \end{proof} \begin{definition} The set $G_{\mathrm{ss}}$ of all \emph{semisimple elements} of $G$ is the set of all $g\in G$ such that $\sigma\circ\Ad(g)\circ\sigma\circ\Ad(g)^{-1}$ is a semisimple Lie algebra automorphism of $\mathfrak{g}$. Let \[ G_{\mathrm{rs}} := \{g\in G_{\mathrm{ss}}\mid \mathfrak{g}^{-\sigma}\cap\Ad(g)(\mathfrak{g}^{-\sigma})\text{ is commutative}\} \] be the set of all \emph{regular semisimple elements}. Both are dense in $G$. \end{definition} \begin{lemma}\label{sec:lem-regular-intersection-c} Let $C'=\exp(\mathfrak{c}')t$ be a standard Cartan subset and let $x\in C'$. Then \[ \mathfrak{g}^{-\sigma}\cap\Ad(x)(\mathfrak{g}^{-\sigma}) = \mathfrak{c}' \] if and only if $x\in G_{\mathrm{rs}}$. \end{lemma} \begin{proof} ``$\Leftarrow$'': ``$\supseteq$'': Write $x=\exp(X)t$ and let $Y\in\mathfrak{c}'$. Then \[ \Ad(x^{-1})(Y) = \Ad(t^{-1})\exp(-\ad(X))(Y) = \Ad(t^{-1})(Y)\in\mathfrak{g}^{-\sigma} \] since $\mathfrak{c}'\le\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$ by definition. ``$\subseteq$'': Since $x=\exp(X)t$ is regular, $\mathfrak{s}:=\mathfrak{g}^{-\sigma}\cap\Ad(x)(\mathfrak{g}^{-\sigma})$ is commutative and contains $\mathfrak{c}'$ by the observation above.
For $Y\in\mathfrak{s}$ we thus have $\ad(X)(Y)=0$, so that \[ \Ad(t^{-1})(Y) = \Ad(x^{-1})\exp(\ad(X))(Y) = \Ad(x^{-1})(Y)\in\mathfrak{g}^{-\sigma}. \] Consequently, \[ \mathfrak{c}'\le\mathfrak{s}\subseteq\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma}), \] where $\mathfrak{s}$ is commutative and $\mathfrak{c}'$ is maximally commutative. Hence $\mathfrak{s}\le\mathfrak{c}'$. ``$\Rightarrow$'': If $x$ is not regular, then $\mathfrak{g}^{-\sigma}\cap\Ad(x)(\mathfrak{g}^{-\sigma})$ is not commutative. Therefore, it cannot be $\mathfrak{c}'$. \end{proof} \begin{theorem}[{\cite[Theorem 3(i--iii)]{matsuki}}]\label{sec:thm-matsuki} Let $G,H,K,\sigma,\theta$ be as before. Fix a fundamental Cartan subset and let $(C_i)_{i\in I}$ be representatives of all the conjugacy classes of standard Cartan subsets. Then \begin{align*} G_{\mathrm{ss}} &= \bigcup_{i\in I} HC_i H\\ G_{\mathrm{rs}} &= \bigsqcup_{i\in I} H (C_i\cap G_{\mathrm{rs}}) H. \end{align*} This establishes a decomposition as in \eqref{eq:matsuki-decomposition}. Every element of the group $J_{C_i}$ we defined for such a case can be represented by a coset $(h,h')Z_{C_i}$ where $h,h'\in K\cap H$. \end{theorem} As a consequence, we see that any matrix-spherical function for $(G,H)$ is entirely determined by its restrictions to some standard Cartan subsets (Theorem~\ref{sec:thm-matsuki} and Lemma~\ref{sec:lem-restriction-injective}). This defines for us an appropriate setting in which we can now start computing radial parts of the differential operators encountered in Lemma~\ref{sec:lem-msf-action-diffops}, and especially of the quadratic Casimir element. \section{Radial Parts}\label{sec:radial-parts} \subsection{Root Space Decomposition} \begin{proposition}\label{sec:prop-root-spaces} Let $C'=\exp(\mathfrak{c}')t$ be a standard Cartan subset of $G$. For $\alpha\in(\mathfrak{c}'_\CC)^*$ define \[ \mathfrak{g}_\alpha := \{X\in\mathfrak{g}_\CC\mid\forall Z\in\mathfrak{c}'_\CC: \quad\ad(Z)(X) = \alpha(Z)X\}.
\] Then \begin{enumerate} \item $\mathfrak{g}_\CC = \bigoplus_\alpha \mathfrak{g}_\alpha$; \item $\comm{\mathfrak{g}_\alpha}{\mathfrak{g}_\beta}\subseteq\mathfrak{g}_{\alpha+\beta}$; \item $\sigma(\mathfrak{g}_\alpha)\subseteq\mathfrak{g}_{-\alpha}$; \item $B(\mathfrak{g}_\alpha,\mathfrak{g}_\beta)=0$ unless $\alpha+\beta=0$. \end{enumerate} Write $\Sigma(\mathfrak{g}:\mathfrak{c}')$ for the set of $\alpha$ with $\dim(\mathfrak{g}_\alpha)>0$. \end{proposition} \begin{proof} \begin{enumerate} \item Since $\mathfrak{c}'$ is commutative, so is $\ad(\mathfrak{c}')$. For simultaneous diagonalisability and hence the existence of the claimed root space decomposition, it therefore suffices to show that each $\ad(Z)$, $Z\in\mathfrak{c}'$, is a semisimple operator. We show that there is a (sesquilinear, positive-definite) inner product on $\mathfrak{g}_\CC$ with respect to which $\ad(\mathfrak{c}')$ acts normally. Recall that since $(G,K,\theta,B)$ is a reductive Lie group, the bilinear form $B_\theta$ is a positive-definite inner product on $\mathfrak{g}$. If we extend it sesquilinearly, we therefore obtain an inner product on $\mathfrak{g}_\CC$. For any $X,Y\in\mathfrak{g}_\CC$ and $Z\in\mathfrak{c}'$ we have \begin{align*} B_\theta(\ad(Z)(X),Y) &= -B(\ad(Z)(X), \theta(Y)) = B(X, \ad(Z)(\theta(Y)))\\ &= B(X, \theta(\ad(\theta(Z))(Y))) = -B_\theta(X, \ad(\theta(Z))(Y)). \end{align*} Consequently, the adjoint of $\ad(Z)$ (with respect to $B_\theta$) is $-\ad(\theta(Z))$. Since, by definition, $\mathfrak{c}'$ is $\theta$-invariant, the element $-\ad(\theta(Z))$ lies in the commutative algebra $\ad(\mathfrak{c}')$ and therefore commutes with $\ad(Z)$. \item Let $X\in\mathfrak{g}_\alpha,Y\in\mathfrak{g}_\beta$ and $Z\in\mathfrak{c}'_\CC$, then \begin{align*} \ad(Z)(\comm{X}{Y}) &= \comm{\ad(Z)(X)}{Y} + \comm{X}{\ad(Z)(Y)}\\ &= \comm{\alpha(Z)X}{Y} + \comm{X}{\beta(Z)Y}\\ &= (\alpha+\beta)(Z)\comm{X}{Y}. \end{align*} \item Let $X\in\mathfrak{g}_\alpha,Z\in\mathfrak{c}'_\CC$.
By definition of $\mathfrak{c}'$, we have $\sigma(Z)=-Z$, hence \[ \ad(Z)(\sigma(X)) = \sigma(\ad(\sigma(Z))(X)) = -\sigma(\ad(Z)(X)) = -\alpha(Z) \sigma(X). \] \item Let $X\in\mathfrak{g}_\alpha,Y\in\mathfrak{g}_\beta$ and $Z\in\mathfrak{c}'_\CC$, then \begin{align*} 0 &= B(\ad(Z)(X), Y) + B(X, \ad(Z)(Y))\\ &= \alpha(Z) B(X,Y) + \beta(Z) B(X,Y)\\ &= (\alpha+\beta)(Z) B(X,Y) \end{align*} due to $B$'s $\ad$-invariance. If $B(X,Y)\ne0$, the above implies that $(\alpha+\beta)(Z)=0$ for all $Z$, and hence $\alpha+\beta=0$. \end{enumerate} \end{proof} \begin{lemma}\label{sec:lem-root-spaces-automorphism} Let $\exp(\mathfrak{c}_1)t_1,\exp(\mathfrak{c}_2)t_2$ be two Cartan subsets and suppose there exists $\phi\in\Aut(\mathfrak{g}_{\CC})$ with $\phi(\mathfrak{c}_1)=\mathfrak{c}_2$. Then the reduced root systems satisfy $\Sigma(\mathfrak{g}:\mathfrak{c}_1) = \phi^*(\Sigma(\mathfrak{g}:\mathfrak{c}_2))$ and \[ \phi(\mathfrak{g}_{\phi^*(\alpha)}) = \mathfrak{g}_{\alpha} \] for all $\alpha\in(\mathfrak{c}_{2, \CC})^*$, meaning that the root systems and root multiplicities are the same. \end{lemma} \begin{proof} Let $\alpha\in (\mathfrak{c}_{2,\CC})^*$, then $\phi^*(\alpha)=\alpha\circ\phi\in (\mathfrak{c}_{1,\CC})^*$. Let $X\in\mathfrak{g}_{\phi^*(\alpha)}$ and $Z\in\mathfrak{c}_1$, then \[ \ad(Z)(\phi(X)) = \phi(\ad(\phi^{-1}(Z))(X)) =\phi(\phi^*(\alpha)(\phi^{-1}(Z)) X) = \alpha(Z) \phi(X), \] which shows that $\phi(X)\in\mathfrak{g}_{\alpha}$. Consequently, $\phi$ restricts to isomorphisms $\mathfrak{g}_{\phi^*(\alpha)}\cong \mathfrak{g}_{\alpha}$ for all $\alpha\in (\mathfrak{c}_{2,\CC})^*$, which in turn also shows that the root systems are isomorphic via $\phi^*$. \end{proof} \subsection{Decomposition}\label{sec:general-decomposition} From now on we shall fix a standard Cartan subset $C'=\exp(\mathfrak{c}')t$.
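Before imposing further structure, a minimal numerical illustration of the root space decomposition just established may be helpful (a sketch of ours in the smallest example, outside Matsuki's setting proper): for $\mathfrak{g}_\CC=\mathfrak{sl}(2,\CC)$ with $\mathfrak{c}'_\CC$ spanned by $Z=\mathrm{diag}(1,-1)$, one has $\mathfrak{g}_\CC=\mathfrak{g}_{-2}\oplus\mathfrak{g}_0\oplus\mathfrak{g}_2$, with $\mathfrak{g}_{\pm2}$ spanned by the raising and lowering matrices $E,F$ and $\mathfrak{g}_0=\CC Z$; property (ii) appears as $[E,F]=Z\in\mathfrak{g}_0$. This can be checked directly:

```python
import numpy as np

# basis of sl(2): Z (spanning the Cartan subalgebra), E, F
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
basis = [Z, E, F]

def ad(X):
    """Matrix of ad(X) = [X, .] in the basis (Z, E, F)."""
    cols = []
    for B in basis:
        C = X @ B - B @ X
        # expand C = a*Z + b*E + c*F: a = C[0,0], b = C[0,1], c = C[1,0]
        cols.append([C[0, 0], C[0, 1], C[1, 0]])
    return np.array(cols).T

print(sorted(np.linalg.eigvals(ad(Z)).real))  # roots: -2, 0, 2

# [g_alpha, g_beta] <= g_{alpha+beta}: here [E, F] = Z lies in the 0-root space
assert np.allclose(E @ F - F @ E, Z)
```

The diagonalisability of $\ad(Z)$ is exactly statement (i) of the proposition; in this example $\ad(Z)$ is already diagonal in the chosen basis.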
Assume that $\Ad(t)$ leaves $\mathfrak{c}'$ invariant and can be decomposed as follows: there is an involution $\phi\in O(\mathfrak{g}_\CC,B)$ that commutes with $\sigma$, and $\epsilon:\Sigma(\mathfrak{g}:\mathfrak{c}')\to\CC^\times$ such that \[ \forall \alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}'),X\in\mathfrak{g}_\alpha:\quad \Ad(t)(X) = \epsilon_\alpha \phi(X). \] By Lemma~\ref{sec:lem-Adt-involution}, $\Ad(t)$ is an involution of $\mathfrak{c}'_\CC$ and hence also of $\Sigma(\mathfrak{g}:\mathfrak{c}')$. Unless this leads to ambiguity, we shall denote this involution by $t$. This definition implies two facts about the function $\epsilon$: \begin{lemma}\label{sec:lem-properties-epsilon} For $\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}')$ we have \begin{enumerate} \item $\epsilon_\alpha\epsilon_{-\alpha}=1$ and \item $\epsilon_\alpha = \epsilon_{t\alpha}$. \end{enumerate} In particular, $\epsilon_0=\pm1$, which can be absorbed into $\phi$. \end{lemma} \begin{proof} \begin{enumerate} \item Let $0\ne X\in\mathfrak{g}_\alpha$, then due to the non-degeneracy of $B$ there exists $Y\in\mathfrak{g}_{-\alpha}$ such that $B(X,Y)\ne0$. Then due to $B$'s invariance under $\Ad(t)$ and $\phi$ we have \begin{align*} B(X,Y) &= B(\Ad(t)X,\Ad(t)Y) = B(\epsilon_\alpha\phi(X),\epsilon_{-\alpha}\phi(Y))\\ &= \epsilon_\alpha\epsilon_{-\alpha} B(\phi(X),\phi(Y)) = \epsilon_\alpha \epsilon_{-\alpha} B(X,Y), \end{align*} hence $1=\epsilon_\alpha\epsilon_{-\alpha}$ since $B(X,Y)\ne0$. \item Let $0\ne X\in\mathfrak{g}_{\alpha}$, then we leverage that $\sigma(t)=t^{-1}$ and that $\sigma(\Ad(t)(X))\in\mathfrak{g}_{-t\alpha}$: \begin{align*} \sigma(X) &= \Ad(t)\Ad(t)^{-1}\sigma(X) = \Ad(t)\sigma\Ad(t)(X)\\ &= \epsilon_{-t\alpha}\phi\sigma\Ad(t)(X)\\ &=\epsilon_\alpha\epsilon_{-t\alpha} \phi\sigma\phi(X) = \epsilon_\alpha\epsilon_{-t\alpha} \sigma(X), \end{align*} which shows with (i) that $\epsilon_\alpha=\epsilon_{t\alpha}$. 
\end{enumerate} \end{proof} We now investigate what $Z_{C'}$ and $N_{C'}$ look and act like. \begin{proposition}\label{sec:prop-structure-Z-N} \begin{enumerate} \item $(h,h')\in Z_{C'}$ if and only if $h'=t^{-1}ht$ and $\Ad(h)$ acts trivially on $\mathfrak{c}'$. \item $(h,h')\in N_{C'}$ if and only if $\Ad(h)\mathfrak{c}'\subseteq \mathfrak{c}'$ and $hth^{\prime-1}t^{-1}\in\exp(\mathfrak{c}')$. \item If $(h,h')\in N_{C'}$, say $hth^{\prime-1}t^{-1}=\exp(Y)$, then $(h,h')$ acts on $C'$ as follows: \[ (h,h')\cdot \exp(X)t = \exp(\Ad(h)(X) + Y)t \] \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item ``$\Rightarrow$'': Since $t\in C'$, we have $hth^{\prime-1}=t$, whence $h'=t^{-1}ht$. Let $X\in\mathfrak{c}'$, then \begin{align*} X &= \dv{s}\eval{\exp(sX)tt^{-1}}_{s=0} = \dv{s}\eval{h\exp(sX)th^{\prime-1}t^{-1}}_{s=0}\\ &= \dv{s}\eval{\exp(s\Ad(h)(X))}_{s=0} = \Ad(h)(X). \end{align*} ``$\Leftarrow$'': Let $(h,h')\in H\times H$ satisfy the conditions above. Every element of $C'$ can be written as $\exp(X)t$, so that \[ (h,h')\cdot\exp(X)t = h\exp(X)th^{\prime-1} = \exp(\Ad(h)(X)) hth^{\prime-1} = \exp(X)t, \] so that $(h,h')\in Z_{C'}$. \item ``$\Rightarrow$'': We apply the definition first to $t$, which yields \[ hth^{\prime-1} = \exp(Y)t \] for some $Y\in\mathfrak{c}'$. Let now $X\in\mathfrak{c}'$ and $s\in\RR$, then \[ h\exp(sX)th^{\prime-1} = \exp(\Ad(h)(sX) + Y) t. \] Multiplying by $t^{-1}$ and taking the $s$-derivative at $s=0$ shows that $\Ad(h)(X)\in\mathfrak{c}'$. The reverse implication is clear. \item Follows from (ii). \end{enumerate} \end{proof} \begin{lemma}\label{sec:lem-zero-space} Let $\mathfrak{m}':=Z_{\mathfrak{h}}(\mathfrak{c}')$. Then \[ \mathfrak{g}_0 = \mathfrak{c}'_\CC\oplus\mathfrak{m}'_\CC. \] This direct sum is orthogonal with respect to $B$ and $B_\sigma$. \end{lemma} \begin{proof} The sum is evidently direct as $\mathfrak{c}'$ and $\mathfrak{m}'$ lie in different eigenspaces of $\sigma$.
The inclusion ``$\supseteq$'' is clear. For ``$\subseteq$'', note that $\sigma$ leaves $\mathfrak{g}_0$ invariant by Proposition~\ref{sec:prop-root-spaces}(iii), hence we can decompose \[ \mathfrak{g}_0 = \mathfrak{g}_0^+\oplus\mathfrak{g}_0^-, \] where $\sigma$ acts as $\pm1$ on $\mathfrak{g}_0^\pm$. Since $\sigma$ is orthogonal with respect to both $B$ and $B_\sigma$, these two eigenspaces are orthogonal with respect to both bilinear forms. It just remains to show that $\mathfrak{g}_0^+\subset \mathfrak{m}'_\CC$ and $\mathfrak{g}_0^-\subset \mathfrak{c}'_\CC$ (then these inclusions are equalities and we get the claimed orthogonality). For the first inclusion, let $X+iY\in\mathfrak{g}_0^+$, then both $X$ and $Y$ commute with $\mathfrak{c}'$ and satisfy $\sigma(X)=X,\sigma(Y)=Y$, so that $X,Y\in\mathfrak{h}$, and hence $X,Y\in\mathfrak{m}'$. For the second inclusion, let $X+iY\in\mathfrak{g}_0^-$, then $X,Y\in \mathfrak{g}^{-\sigma}$. Furthermore, we have \begin{align*} \sigma(\Ad(t^{-1})(X)) &= \sigma(\epsilon_0^{-1}\phi(X)) = \epsilon_0^{-1}\phi\sigma(X) = -\epsilon_0^{-1}\phi(X)\\ &= -\Ad(t^{-1})(X), \end{align*} so $X\in\Ad(t)(\mathfrak{g}^{-\sigma})$ as well. Consequently, $X\in\mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$ and commutes with $\mathfrak{c}'$. Since $\mathfrak{c}'$ is maximal commutative, we have $X\in\mathfrak{c}'$. Similarly, we also see $Y\in\mathfrak{c}'$. \end{proof} \begin{definition} Let $x = \exp(X)t\in C'$ and $\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}')$. Define \[ x^\alpha := \epsilon_\alpha \exp(\alpha(X))\in\CC^\times. \] \end{definition} \begin{proposition}\label{sec:prop-properties-power} \begin{enumerate} \item For $x\in C'$ and $Y\in\mathfrak{g}_\alpha$ we have \[ \Ad(x)(Y)=x^{t\alpha} \phi(Y)\qquad \Ad(x^{-1})(Y) = x^{-\alpha} \phi(Y), \] so in particular $x\mapsto x^\alpha$ is well-defined. \item For all $x\in C',\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}')$ we have $(x^\alpha)^{-1}=x^{-\alpha}$.
\item With respect to the group homomorphism \[ \exp(\mathfrak{c}')\to\CC^\times,\qquad \exp(X')\mapsto\exp(\alpha(X')), \] the map $x\mapsto x^\alpha$ intertwines the (left) group actions of $\exp(\mathfrak{c}')$ on $C'$ and of $\CC^\times$ on $\CC^\times$ (in particular, it is a homomorphism of torsors). \item Let $(h,h')\in N_{C'}$ and let $hth^{\prime-1}=x=\exp(Y)t$. Then \[ \qty(\epsilon_\alpha x^{-\Ad^*(h)(\alpha)})^2=1. \] \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item We have \[ \Ad(x)(Y) = \exp(\ad(X))\Ad(t)(Y) = \epsilon_\alpha \exp(\ad(X)) \phi(Y) = \epsilon_\alpha \exp((t\alpha)(X)) \phi(Y), \] which by Lemma~\ref{sec:lem-properties-epsilon}(ii) equals $x^{t\alpha} \phi(Y)$ as claimed. Furthermore, we have \[ \Ad(x^{-1})(Y) = \Ad(t^{-1}) \exp(\ad(-X))(Y) = \epsilon_{t\alpha}^{-1} \exp(-\alpha(X)) \phi(Y) = x^{-\alpha} \phi(Y) \] by Lemma~\ref{sec:lem-properties-epsilon}(i,ii). \item Follows from Lemma~\ref{sec:lem-properties-epsilon}(i). \item Follows from the definition and the fact that $\mathfrak{c}'$ is commutative. \item Rearranging the definition of $x$ we obtain $h = xh't^{-1}$. Applying $\sigma$, this equals $tx^{-1}t^{-1}h't$. Writing $x=\exp(Y)t$, this equation becomes \[ \exp(Y)th't^{-1} = \exp(-Y)t^{-1}h't. \] Let $X\in\mathfrak{g}_\alpha$, then $\Ad(th't^{-1})(X)$ and $\Ad(t^{-1}h't)(X)$ only differ by a constant factor: \begin{align*} \Ad(th't^{-1})(X) &= \frac{\epsilon_{\Ad^*(th't)(\alpha)}}{\epsilon_\alpha} \phi(\Ad(h')(\phi(X)))\\ \Ad(t^{-1}h't)(X) &= \frac{\epsilon_\alpha}{\epsilon_{\Ad^*(th't)(\alpha)}} \phi(\Ad(h')(\phi(X))). \end{align*} Consequently, we have \[ \exp(\Ad^*(th't)(\alpha)(Y)) \frac{\epsilon_{\Ad^*(th't)(\alpha)}}{\epsilon_\alpha} = \exp(-\Ad^*(th't)(\alpha)(Y)) \frac{\epsilon_\alpha}{\epsilon_{\Ad^*(th't)(\alpha)}}, \] which implies the claim using the fact that $\Ad^*(th't)$ and $\Ad^*(h)$ coincide on $(\mathfrak{c}'_\CC)^*$.
\end{enumerate} \end{proof} \begin{lemma}\label{sec:lem-characterisation-regular} We can characterise $C'\cap G_{rs}$ as follows: \[ C'\cap G_{rs} = \{ x\in C' \mid \forall \alpha\in\Sigma:\quad x^\alpha\ne x^{-\alpha}\}. \] \end{lemma} \begin{proof} Let $x\in C'$. By Lemma~\ref{sec:lem-regular-intersection-c} we have $x\in C'\cap G_{rs}$ iff $\mathfrak{c}' = \mathfrak{g}^{-\sigma}\cap \Ad(x)(\mathfrak{g}^{-\sigma})$. So we are going to show that this condition holds iff $x^\alpha\ne x^{-\alpha}$ for all roots $\alpha\in\Sigma$. ``$\Rightarrow$'': Assume that $\mathfrak{g}^{-\sigma}\cap\Ad(x)(\mathfrak{g}^{-\sigma})=\mathfrak{c}'$. Let $\alpha\in\Sigma$ and $0\ne E\in\mathfrak{g}_\alpha$. Then $E-\sigma(E) \in\mathfrak{g}^{-\sigma}$ is a nontrivial linear combination of root vectors for two distinct $\mathfrak{c}'$-root spaces and hence cannot be a root vector itself, in particular not an element of $\mathfrak{c}'$. This implies that $E-\sigma(E)\not\in\Ad(x)(\mathfrak{g}^{-\sigma})$. We have \[ \Ad(x^{-1})(E-\sigma(E)) = x^{-\alpha} \phi(E) - x^\alpha \phi(\sigma(E)) = x^{-\alpha} \phi(E) - x^\alpha \sigma(\phi(E)). \] That this element is not $\sigma$-antiinvariant shows that $x^\alpha\ne x^{-\alpha}$. ``$\Leftarrow$'': Assume $x^\alpha\ne x^{-\alpha}$ for all roots $\alpha\in\Sigma$ and let $Y\in\mathfrak{g}^{-\sigma}\cap\Ad(x)(\mathfrak{g}^{-\sigma})$. We can decompose $Y$ according to \[ \mathfrak{g}_\CC = \mathfrak{m}'_\CC \oplus \mathfrak{c}'_\CC \oplus\bigoplus_{\alpha\in\Sigma} \mathfrak{g}_\alpha \] as \[ Y = Y_{0,+} + Y_{0,-} + \sum_{\alpha\in\Sigma} Y_\alpha. \] That $Y$ is $\sigma$-antiinvariant shows that $Y_{0,+}=0$ and $Y_{-\alpha} = -\sigma(Y_\alpha)$ for all $\alpha\in\Sigma$.
Furthermore, \begin{align*} \Ad(x^{-1})(Y) &= \epsilon_0 \phi(Y_{0,-}) + \sum_{\alpha\in\Sigma^+} \qty(x^{-\alpha} \phi(Y_\alpha) - x^\alpha \phi(\sigma(Y_\alpha)))\\ &= \epsilon_0 \phi(Y_{0,-}) + \sum_{\alpha\in\Sigma^+} \qty(x^{-\alpha} \phi(Y_\alpha) - x^\alpha \sigma(\phi(Y_\alpha))). \end{align*} Since this is $\sigma$-antiinvariant as well, we obtain $x^{-\alpha}\phi(Y_\alpha) = x^{\alpha} \phi(Y_\alpha)$. Since $x^\alpha\ne x^{-\alpha}$, this forces $\phi(Y_\alpha)=0$, or equivalently $Y_\alpha=0$. Consequently, we have $Y=Y_{0,-}\in\mathfrak{c}'_\CC$. \end{proof} \begin{proposition}\label{sec:prop-es-ito-hs} Let $x\in C'\cap G_{rs}$ and $0\ne E\in\mathfrak{g}_\alpha$. Let $H:= E+\sigma(E)$, then \begin{align*} E &= \frac{\Ad(x)(\phi(H)) - x^{-\alpha}H}{x^\alpha-x^{-\alpha}}\\ \sigma(E) &= \frac{x^\alpha H - \Ad(x)(\phi(H))}{x^\alpha-x^{-\alpha}}\\ \phi(E) &= \frac{x^\alpha \phi(H) - \Ad(x^{-1})(H)}{x^\alpha-x^{-\alpha}}\\ \sigma(\phi(E)) &= \frac{\Ad(x^{-1})(H) - x^{-\alpha}\phi(H)}{x^\alpha-x^{-\alpha}} \end{align*} \end{proposition} \begin{proof} From Proposition~\ref{sec:prop-properties-power} we get \begin{align*} \Ad(x)(\phi(H)) &= x^\alpha E + x^{-\alpha}\sigma(E)\\ \Ad(x^{-1})(H) &= x^{-\alpha}\phi(E) + x^{\alpha}\sigma(\phi(E)), \end{align*} which leads to the following linear systems of equations: \begin{align*} \mqty(H\\\Ad(x)(\phi(H))) &= \mqty(1 & 1\\x^\alpha & x^{-\alpha}) \mqty(E\\\sigma(E))\\ \mqty(\phi(H)\\\Ad(x^{-1})(H)) &= \mqty(1 & 1\\x^{-\alpha} & x^\alpha)\mqty(\phi(E)\\\sigma(\phi(E))). \end{align*} Solving these systems yields the claimed expressions. Note that by Lemma~\ref{sec:lem-characterisation-regular}, we always have $x^\alpha\ne x^{-\alpha}$, whence the determinant is nonzero.
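For concreteness, inverting the first system explicitly (the second is analogous) gives \[ \mqty(E\\\sigma(E)) = \frac{1}{x^{-\alpha}-x^{\alpha}} \mqty(x^{-\alpha} & -1\\ -x^{\alpha} & 1) \mqty(H\\\Ad(x)(\phi(H))), \] and expanding the matrix product reproduces the claimed formulas for $E$ and $\sigma(E)$.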
\end{proof} \begin{lemma}\label{sec:lem-decomposition-g} For each $x\in C'\cap G_{rs}$ we can decompose $\mathfrak{g}_\CC$ as follows: \begin{align*} \mathfrak{g}_\CC &= \mathfrak{c}'_\CC \oplus (\mathfrak{h}_\CC + \Ad(x)(\mathfrak{h}_\CC))\\ &= \mathfrak{c}'_\CC \oplus (\mathfrak{h}_\CC + \Ad(x^{-1})(\mathfrak{h}_\CC)), \end{align*} where the intersection of the last two summands is $\mathfrak{m}'_\CC$. \end{lemma} \begin{proof} We show the claims for the first decomposition; the others follow by applying $\Ad(x^{-1})$. Note that we assumed that $\Ad(t)$ leaves $\mathfrak{c}'$ (and its complexification) invariant. Since $\Ad(t)$ is orthogonal with respect to $B$, and because of Lemma~\ref{sec:lem-zero-space}, it therefore also leaves $\mathfrak{m}'$ (and its complexification) invariant. We evidently have the inclusion ``$\supseteq$'', and the inclusion ``$\subseteq$'' follows from Proposition~\ref{sec:prop-es-ito-hs}. So it remains to show that \[ \mathfrak{c}'_\CC\cap (\mathfrak{h}_\CC+\Ad(x)(\mathfrak{h}_\CC)) = 0 \] and that \[ \mathfrak{h}_\CC\cap\Ad(x)(\mathfrak{h}_\CC) = \mathfrak{m}'_\CC. \] For the first, note that $\mathfrak{c}'_{\CC}$ and $\mathfrak{h}_\CC$ are orthogonal with respect to $B$ because they lie in different eigenspaces of $\sigma$, which is orthogonal with respect to $B$. Since $B$ restricted to $\mathfrak{c}'_\CC$ is also non-degenerate, we conclude that $\mathfrak{c}'_\CC\cap\mathfrak{h}_\CC =0$. Applying $\Ad(x)$ and the fact that $\mathfrak{c}'_\CC$ is invariant under $\Ad(x)$ yields the desired directness of the sum. Let $Y\in\mathfrak{h}_\CC\cap \Ad(x)(\mathfrak{h}_\CC)$. We expand \[ Y = Y_{0,+} + Y_{0,-} + \sum_{\alpha\in\Sigma} Y_\alpha. \] Then due to $\sigma$-invariance we have $Y_{0,-}=0$ and $Y_{-\alpha}=\sigma(Y_\alpha)$. Applying $\Ad(x^{-1})$ yields \[ \Ad(x^{-1})(Y) = \epsilon_0^{-1} \phi(Y_{0,+}) + \sum_{\alpha\in\Sigma^+} \qty(x^{-\alpha} \phi(Y_\alpha) + x^\alpha \sigma(\phi(Y_\alpha))).
\] The fact that this is also $\sigma$-invariant and $x^\alpha\ne x^{-\alpha}$ (by Lemma~\ref{sec:lem-characterisation-regular}) yields that $Y_\alpha=0$, hence also $Y_{-\alpha}=0$, so that $Y=Y_{0,+}\in\mathfrak{m}'_\CC$. \end{proof} \begin{corollary}\label{sec:cor-decomposition-Ug} For $x\in C'\cap G_{rs}$ we can decompose \[ U(\mathfrak{g}) = \Ad(x^{-1})(U(\mathfrak{h})) S(\mathfrak{c}') U(\mathfrak{h}) = U(\mathfrak{h})S(\mathfrak{c}')\Ad(x)(U(\mathfrak{h})), \] with the only ambiguity being $U(\mathfrak{m}')$ acting on $\Ad(x^{-1})(U(\mathfrak{h}))$ (resp.\ $U(\mathfrak{h})$) from the right and on $U(\mathfrak{h})$ (resp.\ $\Ad(x)(U(\mathfrak{h}))$) from the left. \end{corollary} \begin{proof} From Lemma~\ref{sec:lem-decomposition-g} we obtain that if $\mathfrak{q}$ is a $B$-orthogonal complement of $\mathfrak{m}'$ in $\mathfrak{h}$, we have $\mathfrak{g}_\CC = \mathfrak{c}'_\CC \oplus \mathfrak{m}'_\CC \oplus \mathfrak{q}_\CC \oplus \Ad(x)(\mathfrak{q}_\CC)$ (and the same expression with $x$ inverted). By the Poincar\'e--Birkhoff--Witt theorem the multiplication map generates isomorphisms \[ U(\mathfrak{g}) \cong \Ad(x^{-1})(U(\mathfrak{q})) \otimes U(\mathfrak{m}')\otimes S(\mathfrak{c}') \otimes U(\mathfrak{q}) \cong U(\mathfrak{q}) \otimes U(\mathfrak{m}')\otimes S(\mathfrak{c}')\otimes \Ad(x)(U(\mathfrak{q})).
\] Applying the same to $U(\mathfrak{h})$ we see that $U(\mathfrak{h}) = U(\mathfrak{q})\otimes U(\mathfrak{m}') = U(\mathfrak{m}')\otimes U(\mathfrak{q})$ and \[ \Ad(x^{-1})(U(\mathfrak{h})) = \Ad(x^{-1})(U(\mathfrak{q}))\otimes U(\mathfrak{m}'),\qquad \Ad(x)(U(\mathfrak{h})) = U(\mathfrak{m}')\otimes\Ad(x)(U(\mathfrak{q})), \] so that the above decompositions become \begin{align*} U(\mathfrak{g}) &= \Ad(x^{-1})(U(\mathfrak{h}))\otimes_{U(\mathfrak{m}')} \qty(S(\mathfrak{c}')\otimes U(\mathfrak{h}))\\ U(\mathfrak{g}) &= \qty(U(\mathfrak{h})\otimes S(\mathfrak{c}'))\otimes_{U(\mathfrak{m}')} \Ad(x)(U(\mathfrak{h})), \end{align*} where $U(\mathfrak{m}')$ acts only on the $U(\mathfrak{h})$ components. \end{proof} \begin{corollary} There are maps $$\Pi,\widetilde{\Pi}:\, U(\mathfrak{g}) \to C^\infty(C'\cap G_{rs})\otimes S(\mathfrak{c}')\otimes (U(\mathfrak{h})\otimes_{U(\mathfrak{m}')} U(\mathfrak{h}))$$ such that for \[ \Pi(p) = \sum_i f_i\otimes p_i\otimes u_i\otimes v_i\qquad \widetilde{\Pi}(p) = \sum_j \tilde{f}_j\otimes \tilde{p}_j \otimes\tilde{u}_j \otimes\tilde{v}_j \] we have \[ p = \sum_i f_i(x) \Ad(x^{-1})(u_ip_i)v_i = \sum_j \tilde{f}_j(x) \tilde{u}_j \tilde{p}_j \Ad(x)(\tilde{v}_j) \] for all $x\in C'\cap G_{rs}$. Here, $U(\mathfrak{m}')$ acts on the two $U(\mathfrak{h})$-tensor factors as follows: the right action (on the 1st factor) is twisted by $\Ad(t)$, and the left action (on the 2nd factor) is just the usual multiplication. \end{corollary} \begin{proof} Follows from Corollary~\ref{sec:cor-decomposition-Ug} and Proposition~\ref{sec:prop-es-ito-hs}.
For the $U(\mathfrak{m}')$-actions note that we can write \[ p = \sum_i f_i(x) \Ad(x^{-1})(u_im_ip_i)v_i \] where $u_i,v_i$ are written entirely in terms of $\mathfrak{q}$, and $m_i\in U(\mathfrak{m}')$, then this equals \[ =\sum_i f_i(x) \Ad(x^{-1})(u_ip_i) \Ad(t^{-1})(m_i)v_i, \] meaning that $\Pi$ maps $p$ to both \[ \sum_i f_i \otimes p_i\otimes u_im_i\otimes v_i \] and \[ \sum_i f_i \otimes p_i\otimes u_i\otimes \Ad(t^{-1})(m_i)v_i, \] which indeed agree in the tensor product over $U(\mathfrak{m}')$ with the twisted right action. Similarly for $\widetilde{\Pi}$. \end{proof} \begin{theorem} Let $W$ be a finite-dimensional $H$-bimodule and write $\pi_{\Le},\pi_{\Ri}$ for the left and right actions, respectively ($\pi_\Ri$ is then a representation of $H^{\operatorname{op}}$). There are maps \[ R^W,\widetilde{R}^W:\: U(\mathfrak{g}) \to C^\infty(C'\cap G_{rs}) \otimes S(\mathfrak{c}') \otimes \Hom(W^{\mathfrak{m}'},W) \] such that for every $f\in E^W(G,H)$, $x\in C'\cap G_{rs}$, and $p\in U(\mathfrak{g})$ we have \begin{align*} (p\cdot f)(x) &= R^W(p)(x) (f|_{C'})(x)\\ (f\cdot p)(x) &= \widetilde{R}^W(p)(x) (f|_{C'})(x). \end{align*} In particular, for $p\in U(\mathfrak{g})^{\mathfrak{m}'}$, the matrix parts of $R^W,\widetilde{R}^W$ lie in $\End(W^{\mathfrak{m'}})$. Here, $Y\in \mathfrak{m}'$ acts as follows on $v\in W$: \[ Y\cdot_{\mathfrak{m}'} v = \Ad(t)(Y)\cdot v - v\cdot Y. \] \end{theorem} \begin{proof} Define $R^W,\widetilde{R}^W$ by post-composing $\Pi,\widetilde{\Pi}$ with the representations $\pi_\Le,\pi_\Ri$: the last two tensor legs, $U(\mathfrak{h})\otimes_{U(\mathfrak{m}')} U(\mathfrak{h})$, act on $W^{\mathfrak{m}'}$ by \[ (a\otimes b)\cdot \psi = a\cdot \psi \cdot b. \] This is well-defined precisely because every $\psi\in W^{\mathfrak{m}'}$ is invariant under the twisted $\mathfrak{m}'$-action. We now show that $R^W,\widetilde{R}^W$ indeed yield the radial parts of the left and right actions on matrix-spherical functions.
Let $f\in E^W(G,H)$ and $p\in U(\mathfrak{g})$, say \[ \Pi(p) = \sum_i f_i\otimes p_i\otimes u_i\otimes v_i, \] so that \[ \forall x\in C'\cap G_{rs}:\quad p = \sum_i f_i(x) \Ad(x^{-1})(u_ip_i) v_i. \] For any $x\in C'\cap G_{rs}$ we then have \[ (p\cdot f)(x) = \sum_i f_i(x) \qty(\Ad(x^{-1})(u_ip_i)v_i\cdot f)(x) = \sum_i f_i(x) \qty(v_i\cdot f\cdot u_ip_i)(x) \] by Lemma~\ref{sec:lem-msf-action-diffops}. Corollary~\ref{sec:cor-pull-out-h} then allows us to write this as \begin{align*} (p\cdot f)(x) &= \sum_i f_i(x) u_i\cdot (f\cdot p_i)(x) \cdot v_i\\ &= R^W(p)(x) f(x), \end{align*} where we interpret $S(\mathfrak{c}')$ as differential operators acting on $C^\infty(G,W)$ via right-invariant vector fields. To see that $R^W(p)$ for $p\in U(\mathfrak{g})^{\mathfrak{m}'}$ preserves pointwise $\mathfrak{m}'$-invariance, let $Y\in\mathfrak{m}'$, and note that \begin{align*} \Ad(t)(Y)\cdot (p\cdot f)(x) &= \sum_i f_i(x) \Ad(t)(Y)u_i\cdot (f\cdot p_i)(x)\cdot v_i\\ &= \qty(\sum_i f_i(x) Y\Ad(x^{-1})(u_i)p_iv_i\cdot f)(x)\\ &= \qty(Y p\cdot f)(x). \end{align*} Now, since $p$ commutes with $\mathfrak{m}'$, this equals \begin{align*} \qty(pY\cdot f)(x) &= \qty(\sum_i f_i(x) \Ad(x^{-1})(u_i) p_iv_iY\cdot f)(x) = \sum_i f_i(x) u_i\cdot (f\cdot p_i)(x)\cdot v_iY\\ &= (p\cdot f)(x)\cdot Y. \end{align*} The statements about $\widetilde{R}^W$ follow analogously. \end{proof} Recall from Lemma~\ref{sec:lem-msf-kak-restriction} that the restriction of a MSF maps not only to $W^{\mathfrak{m}'}$, but even to $W^{Z_{C'}}$. If we take that invariance into account, we get more out of the previous result. \begin{definition} Define \[ M' := Z_H(\mathfrak{c}') \cap t^{-1}Ht. \] This is a closed subgroup of $H$ with Lie algebra $\mathfrak{m}'$, since $\Ad(t^{-1})$ leaves $\mathfrak{m}'$ invariant.
\end{definition} \begin{proposition}\label{sec:prop-Mprime-Z} The map $h\mapsto (tht^{-1}, h)$ is a (well-defined) isomorphism $M'\cong Z_{C'}$ that intertwines the partially twisted action of $M'$ on $W$ with the action of $Z_{C'}$ on $W$. In particular, $W^{Z_{C'}}$ is a subspace of $W^{\mathfrak{m}'}$. \end{proposition} \begin{proof} Note that by definition of $M'$, both $tht^{-1}$ and $h$ are elements of $H$, so the map is well-defined. Furthermore, by Proposition~\ref{sec:prop-structure-Z-N}, it is an isomorphism. The partially twisted action of $M'$ on $W$ is defined so as to differentiate to the one of $\mathfrak{m}'$: \[ h\cdot_{M'} \psi = tht^{-1}\cdot \psi \cdot h^{-1}, \] which is exactly how $(tht^{-1},h)\in Z_{C'}$ acts on $\psi\in W$. This shows that $W^{Z_{C'}}=W^{M'}$ is a subspace of $W^{\mathfrak{m}'}$, with equality if $M'$ is the analytic subgroup generated by $\mathfrak{m}'$ (i.e.\ is connected). \end{proof} \begin{corollary} $R^W$ and $\widetilde{R}^W$ restrict to maps \[ U(\mathfrak{g})^{M'} \to C^\infty(C'\cap G_{rs})\otimes S(\mathfrak{c}') \otimes \End(W^{M'}). \] \end{corollary} \begin{proof} Let $f\in E^W(G,H)$ and $p\in U(\mathfrak{g})^{M'}$. Let $m\in M'$ and $x\in C'\cap G_{rs}$, then \begin{align*} tmt^{-1}\cdot (p\cdot f)(x)\cdot m^{-1} &= \sum_i f_i(x) tmt^{-1}\cdot u_i \cdot (f\cdot p_i)(x) \cdot v_i \cdot m^{-1}\\ &= \sum_i f_i(x) (v_i\cdot m^{-1}\cdot f\cdot tmt^{-1}\cdot u_i p_i)(x)\\ &= \sum_i f_i(x) \qty(m^{-1}\cdot\Ad(m)(v_i) \cdot f\cdot \Ad(tmt^{-1})(u_ip_i)\cdot tmt^{-1})(x)\\ &= \sum_i f_i(x) \qty(\Ad(m)(v_i) \cdot f\cdot \Ad(tmt^{-1})(u_ip_i))(tmt^{-1}xm^{-1}). \end{align*} By Lemma~\ref{sec:lem-msf-action-diffops}, this equals \[ = \sum_i f_i(x) \qty(\Ad(mx^{-1})(u_ip_i)\Ad(m)(v_i)\cdot f)(tmt^{-1}xm^{-1}).
\] By Propositions~\ref{sec:prop-Mprime-Z} and \ref{sec:prop-structure-Z-N}, this equals \begin{align*} &= \sum_i f_i(x) \qty(\Ad(m)(\Ad(x^{-1})(u_ip_i)v_i)\cdot f)(x)\\ &= (\Ad(m)(p)\cdot f)(x) = (p\cdot f)(x) \end{align*} since $p$ is invariant under $M'$. \end{proof} \begin{corollary} Let $V,W$ be finite-dimensional $H$-modules and assume that $V,\overline{W}$ ($\overline{W}$ is the same as $W$, but with the $\mathfrak{m}'$-action twisted by $\Ad(t)$) are semisimple $\mathfrak{m}'$-modules. Then \[ (\Hom(V,W))^{\mathfrak{m}'} = \Hom_{\mathfrak{m}'}(V,\overline{W}) = \bigoplus_{\rho\in\widehat{\mathfrak{m}'}} \CC^{[V:\rho][\overline{W}:\rho]} =: \mathcal{V}. \] In that case, the maps $R^W,\widetilde{R}^W$ restrict to maps \[ U(\mathfrak{g})^{\mathfrak{m}'} \to C^\infty(C'\cap G_{rs}) \otimes S(\mathfrak{c}')\otimes\End(\mathcal{V}). \] An analogous statement holds when we define $\mathcal{V}$ in terms of the group $M'$ instead of $\mathfrak{m}'$. \end{corollary} \subsection{Radial Part of the Quadratic Casimir Element $\Omega_{\mathfrak{g}}$ } In this subsection we compute $\Pi$ of the quadratic Casimir element ($\widetilde{\Pi}$ will turn out to give the same result). The application to MSF for a concrete $H$-bimodule $W$ is then straightforward. \begin{definition} Let $\alpha\in(\mathfrak{c}'_\CC)^*$. Write $C_\alpha\in\mathfrak{c}'_\CC$ for the unique element $X$ such that \[ \forall Y\in\mathfrak{c}'_\CC:\quad \alpha(Y) = B(X,Y). \] This exists since, by Proposition~\ref{sec:prop-root-spaces}(iv) and Lemma~\ref{sec:lem-zero-space}, $B$ is non-degenerate when restricted to $\mathfrak{c}'_\CC$. \end{definition} \begin{lemma}\label{sec:lem-commutator-hs} Let $E\in\mathfrak{g}_\alpha$ and $H:=E + \sigma(E)$. \begin{enumerate} \item Then $\comm{E}{\sigma(E)} = -B_\sigma(E,E) C_\alpha$. \item For $x\in C'$ we have $\comm{H}{\Ad(x)(\phi(H))} = \qty(x^\alpha-x^{-\alpha})B_\sigma(E,E) C_\alpha$.
\item For $x\in C'$ we have $\comm{\Ad(x^{-1})(H)}{\phi(H)}=\qty(x^\alpha-x^{-\alpha})B_\sigma(E,E)C_{t\alpha}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item From Proposition~\ref{sec:prop-root-spaces}(iii,ii) we see that $\comm{E}{\sigma(E)}\in\mathfrak{g}_0$. Furthermore, since $\sigma$ is an involutive Lie algebra homomorphism, we have $\sigma(\comm{E}{\sigma(E)})=-\comm{E}{\sigma(E)}$, so that by Lemma~\ref{sec:lem-zero-space}, we have $\comm{E}{\sigma(E)}\in\mathfrak{c}'_\CC$. Let $Y\in\mathfrak{c}'_\CC$, then \[ B(Y, \comm{E}{\sigma(E)}) = B(\comm{Y}{E},\sigma(E)) = \alpha(Y) B(E,\sigma(E)) = -\alpha(Y)B_\sigma(E,E), \] hence $\comm{E}{\sigma(E)} = -B_\sigma(E,E) C_\alpha$. \item From the proof of Proposition~\ref{sec:prop-es-ito-hs} we know that \[ \comm{H}{\Ad(x)(\phi(H))} = \comm{E + \sigma(E)}{x^\alpha E + x^{-\alpha}\sigma(E)} = -\qty(x^\alpha - x^{-\alpha})\comm{E}{\sigma(E)}, \] which together with (i) implies the claim. \item From the proof of Proposition~\ref{sec:prop-es-ito-hs} we know that \begin{align*} \comm{\Ad(x^{-1})(H)}{\phi(H)} &= \comm{x^{-\alpha}\phi(E) + x^\alpha\sigma(\phi(E))}{\phi(E) + \sigma(\phi(E))}\\ &= -\qty(x^\alpha-x^{-\alpha})\comm{\phi(E)}{\sigma(\phi(E))}. \end{align*} Together with (i) applied to $\phi(E)$, the fact that $B_\sigma(\phi(E),\phi(E))=B_\sigma(E,E)$, and the fact that $\phi(E)\in\mathfrak{g}_{t\alpha}$, the claim follows. \end{enumerate} \end{proof} \begin{corollary}\label{sec:cor-anticommutator} Let $E\in\mathfrak{g}_\alpha$ and $x\in C'\cap G_{rs}$. Write $H:= E+\sigma(E)$. 
Then \begin{align*} \acomm{E}{\sigma(E)} &=- 2 \frac{H^2 + \Ad(x)(\phi(H)^2) - \qty(x^\alpha+x^{-\alpha}) H\Ad(x)(\phi(H))}{\qty(x^\alpha-x^{-\alpha})^2}\\ &\qquad - B_\sigma(E,E)\frac{x^\alpha+x^{-\alpha}}{x^\alpha-x^{-\alpha}} C_\alpha\\ \acomm{\phi(E)}{\sigma(\phi(E))} &= -2\frac{\phi(H)^2 + \Ad(x^{-1})(H^2) - \qty(x^\alpha+x^{-\alpha})\Ad(x^{-1})(H)\phi(H)}{\qty(x^\alpha-x^{-\alpha})^2}\\ &\qquad - B_\sigma(E,E)\frac{x^\alpha+x^{-\alpha}}{x^\alpha-x^{-\alpha}} C_{t\alpha} \end{align*} \end{corollary} \begin{proof} By Proposition~\ref{sec:prop-es-ito-hs} we have \begin{align*} \qty(x^\alpha-x^{-\alpha})^2\acomm{E}{\sigma(E)} &= \acomm{\Ad(x)(\phi(H)) - x^{-\alpha}H}{x^\alpha H - \Ad(x)(\phi(H))}\\ &= -2 H^2 - 2\Ad(x)(\phi(H)^2) + \qty(x^\alpha+x^{-\alpha})\acomm{H}{\Ad(x)(\phi(H))}. \end{align*} We can write the anticommutator as $2H\Ad(x)(\phi(H)) - \comm{H}{\Ad(x)(\phi(H))}$, so that by Lemma~\ref{sec:lem-commutator-hs} we have \[ = - 2H^2 - 2\Ad(x)(\phi(H)^2) + 2\qty(x^\alpha+x^{-\alpha}) H\Ad(x)(\phi(H)) - B_\sigma(E,E) \qty(\qty(x^\alpha)^2 - \qty(x^{-\alpha})^2) C_\alpha. \] Similarly, we have \begin{align*} \qty(x^\alpha-x^{-\alpha})^2\acomm{\phi(E)}{\sigma(\phi(E))} &= \acomm{x^\alpha\phi(H) - \Ad(x^{-1})(H)}{\Ad(x^{-1})(H)-x^{-\alpha}\phi(H)}\\ &= -2\phi(H)^2 - 2\Ad(x^{-1})(H^2) + \qty(x^\alpha + x^{-\alpha})\acomm{\Ad(x^{-1})(H)}{\phi(H)}. \end{align*} The anticommutator equals $2\Ad(x^{-1})(H)\phi(H) - \comm{\Ad(x^{-1})(H)}{\phi(H)}$, so that by Lemma~\ref{sec:lem-commutator-hs}, we have \begin{align*} &= -2\phi(H)^2 - 2\Ad(x^{-1})(H^2) + 2\qty(x^\alpha+x^{-\alpha})\Ad(x^{-1})(H)\phi(H)\\ &\qquad - B_\sigma(E,E) \qty(\qty(x^\alpha)^2-\qty(x^{-\alpha})^2) C_{t\alpha}.\qedhere \end{align*} \end{proof} \begin{proposition}\label{sec:prop-operator-A} Let $\alpha\in\Sigma$, let $X_1,\dots,X_r$ be an orthonormal basis of $\mathfrak{g}_\alpha$ with respect to $B_\sigma$.
Define \[ A_\alpha := \sum_{i=1}^r(X_i+\sigma(X_i))\otimes (X_i+\sigma(X_i)) \in U(\mathfrak{h})\otimes U(\mathfrak{h}). \] Then $A_\alpha$ does not depend on the choice of basis. For $(h,h')\in N_{C'}$ we have \begin{align*} (\Ad(h)\otimes\Ad(h))(A_\alpha) &= A_{\Ad^*(h)(\alpha)}\\ (\Ad(h')\otimes\Ad(h'))(A_\alpha) &= A_{\Ad^*(h')(\alpha)} \end{align*} and \[ (\Ad(h)\otimes\Ad(h')\phi)(A_\alpha) = \pm (1\otimes\phi)A_{\Ad^*(h)(\alpha)} \] depending on whether \[ \epsilon_\alpha \qty(hth^{\prime-1})^{-\Ad^*(h)(\alpha)} = \pm1. \] \end{proposition} \begin{proof} Let $X_1,\dots,X_r$ and $Y_1,\dots,Y_r$ be orthonormal bases, say \[ X_i = \sum_{j=1}^r a_{ij}Y_j, \] then $a_{ij}=B_\sigma(X_i,Y_j)$ and in particular also $Y_i = \sum_{j=1}^r a_{ji}X_j$, so that \[ \sum_{i=1}^r a_{ij}a_{ik} = \sum_{i=1}^r B_{\sigma}(a_{ik}X_i,Y_j) = B_\sigma(Y_k,Y_j)=\delta_{jk}. \] Consequently, \begin{align*} \sum_{i=1}^r\qty(X_i + \sigma(X_i))\otimes\qty(X_i + \sigma(X_i)) &= \sum_{i=1}^r\sum_{j,k=1}^ra_{ij} a_{ik}\qty(Y_j + \sigma(Y_j))\otimes (Y_k + \sigma(Y_k))\\ &= \sum_{j,k=1}^r \sum_{i=1}^ra_{ij}a_{ik} \qty(Y_j + \sigma(Y_j))\otimes\qty(Y_k+ \sigma(Y_k))\\ &= \sum_{i=1}^r \qty(Y_i + \sigma(Y_i))\otimes\qty(Y_i +\sigma(Y_i)). \end{align*} For any Lie algebra automorphism $\psi$ that leaves $\mathfrak{c}'$ and $B$ invariant and commutes with $\sigma$ (e.g. $\Ad(h)$ or $\Ad(h')$ for $(h,h')\in N_{C'}$), the family $\psi(X_1),\dots,\psi(X_r)$ is a $B_\sigma$-orthonormal basis of $\mathfrak{g}_{\psi^*\alpha}$, so \[ (\psi\otimes\psi)(A_\alpha) = A_{\psi^*\alpha}.
\] Lastly, for $(\Ad(h)\otimes\Ad(h')\phi)(A_\alpha)$ note that by Proposition~\ref{sec:prop-properties-power}(iv) we have \[ \epsilon_\alpha (hth^{\prime-1})^{-\Ad^*(h)(\alpha)} = \pm1, \] and in particular \begin{align*} \Ad(h)(X_i) &= \pm \phi(\Ad(h')(\phi(X_i)))\\ \Ad(h)(\sigma(X_i)) &= \pm \sigma(\phi(\Ad(h')(\phi(X_i)))), \end{align*} so that we have \begin{align*} \qty(\Ad(h)\otimes\Ad(h')\phi)A_\alpha &= \pm \sum_{i=1}^r (\phi\Ad(h')\phi\otimes\Ad(h')\phi)\qty((X_i+\sigma(X_i))\otimes(X_i+\sigma(X_i)))\\ &= \pm (1\otimes\phi)A_{\Ad^*(h)(\alpha)} \end{align*} seeing as $\phi\Ad(h')\phi$ is orthogonal with respect to $B_\sigma$ and commutes with $\sigma$, so that the expression in the sum becomes the expression we used to define $A_\alpha$, except that \[ \phi\Ad(h')\phi(X_1),\dots,\phi\Ad(h')\phi(X_r) \] is now a $B_\sigma$-orthonormal basis of $\mathfrak{g}_{\Ad^*(h)(\alpha)}$. \end{proof} \begin{theorem}\label{sec:thm-casimir-decomposition} The decompositions of the quadratic Casimir element $\Omega_{\mathfrak{g}}$ with respect to the standard Cartan subset $C'$ are as follows: \begin{align*} \Pi(\Omega_{\mathfrak{g}}) = \widetilde{\Pi}(\Omega_{\mathfrak{g}}) &= \Omega_{\mathfrak{c}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2} \coth_\alpha C_\alpha + \Omega_{\mathfrak{m}'}\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{\csch_\alpha^2}{4}\qty(m(A_\alpha)\otimes 1 + 1\otimes m(A_{t\alpha}) + 2 (1\otimes\phi)A_\alpha)\\ &\qquad -\sum_{\alpha\in\Sigma} \frac{\csch_{\alpha/2}^2}{4} (1\otimes\phi)A_\alpha, \end{align*} where $m: U(\mathfrak{h})\otimes U(\mathfrak{h})\to U(\mathfrak{h})$ is the multiplication map and $n_\alpha := \dim(\mathfrak{g}_\alpha)$. \end{theorem} \begin{proof} Let $C_1,\dots,C_r\in\mathfrak{c}'_\CC$ and $M_1,\dots,M_s\in\mathfrak{m}'_\CC$ be orthonormal bases (with respect to $B$). For $\alpha\in\Sigma$ let $E_{\alpha,1},\dots,E_{\alpha,n_\alpha}\in\mathfrak{g}_\alpha$ be an orthonormal basis with respect to $B_\sigma$.
Without loss of generality assume that $E_{-\alpha,i}=\sigma(E_{\alpha,i})$. Then the following bases are dual to each other with respect to $B$: \begin{align*} &C_1,\dots,C_r,M_1,\dots,M_s,(E_{\alpha,1},\dots,E_{\alpha,n_\alpha})_{\alpha\in\Sigma}\\ &C_1,\dots,C_r,M_1,\dots,M_s,(-\sigma(E_{\alpha,1}),\dots,-\sigma(E_{\alpha,n_\alpha}))_{\alpha\in\Sigma}. \end{align*} Thus, \[ \Omega_{\mathfrak{g}} = \sum_{i=1}^r C_i^2 + \sum_{i=1}^s M_i^2 - \sum_{\alpha\in\Sigma}\sum_{i=1}^{n_\alpha} E_{\alpha,i}\sigma(E_{\alpha,i}). \] The first two sums are just $\Omega_{\mathfrak{c}'}$ and $\Omega_{\mathfrak{m}'}$, respectively. Symmetrising the sum over $\alpha$ we obtain \[ \Omega_{\mathfrak{g}} = \Omega_{\mathfrak{c}'}+\Omega_{\mathfrak{m}'} - \frac{1}{2}\sum_{\alpha\in\Sigma}\sum_{i=1}^{n_\alpha} \acomm{E_{\alpha,i}}{\sigma(E_{\alpha,i})}. \] Let $x\in C'\cap G_{rs}$. Substituting in the expression from Corollary~\ref{sec:cor-anticommutator} and setting $H_{\alpha,i} := E_{\alpha,i}+\sigma(E_{\alpha,i}) = E_{\alpha,i}+E_{-\alpha,i}$ we get \begin{align*} \Omega_{\mathfrak{g}} &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \frac{1}{2}\sum_{\alpha\in\Sigma} \sum_{i=1}^{n_\alpha} B_\sigma(E_{\alpha,i},E_{\alpha,i}) \frac{x^\alpha+x^{-\alpha}}{x^\alpha-x^{-\alpha}} C_\alpha\\ &\qquad + \sum_{\alpha\in\Sigma}\sum_{i=1}^{n_\alpha} \frac{H_{\alpha,i}^2 + \Ad(x)(\phi(H_{\alpha,i})^2) - (x^\alpha+x^{-\alpha}) H_{\alpha,i}\Ad(x)(\phi(H_{\alpha,i}))}{\qty(x^\alpha-x^{-\alpha})^2}\\ &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2} \frac{x^\alpha + x^{-\alpha}}{x^\alpha-x^{-\alpha}} C_\alpha\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{m(A_\alpha) + \Ad(x)(m(A_{t\alpha})) - (x^\alpha+x^{-\alpha})m(1\otimes\Ad(x)\phi)(A_\alpha)}{\qty(x^\alpha-x^{-\alpha})^2}.
\end{align*} Similarly, we have \begin{align*} \Omega_{\mathfrak{g}} &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \frac{1}{2}\sum_{\alpha\in\Sigma}\sum_{i=1}^{n_\alpha} B_\sigma(\phi(E_{\alpha,i}),\phi(E_{\alpha,i})) \frac{x^{t\alpha}+x^{-t\alpha}}{x^{t\alpha}-x^{-t\alpha}} \Ad(x^{-1})(C_{t\alpha})\\ &\qquad + \sum_{\alpha\in\Sigma}\sum_{i=1}^{n_\alpha} \frac{H_{\alpha,i}^2 + \Ad(x^{-1})(\phi(H_{\alpha,i})^2) - (x^{t\alpha} + x^{-t\alpha})\Ad(x^{-1})(\phi(H_{\alpha,i}))H_{\alpha,i}}{\qty(x^{t\alpha}-x^{-t\alpha})^2}\\ &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \sum_{\alpha\in\Sigma}\frac{n_\alpha}{2} \frac{x^\alpha+x^{-\alpha}}{x^{\alpha}-x^{-\alpha}} \Ad(x^{-1})(C_\alpha)\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{m(A_{t\alpha}) + \Ad(x^{-1})(m(A_\alpha)) - (x^\alpha+x^{-\alpha})m\qty(\Ad(x^{-1})\phi\otimes 1)A_{t\alpha}}{\qty(x^\alpha-x^{-\alpha})^2}. \end{align*} Note that these expressions are both of a shape from which we can read off $\Pi(\Omega_{\mathfrak{g}})$ and $\widetilde{\Pi}(\Omega_{\mathfrak{g}})$.
Writing $\coth_\alpha(x) := \frac{x^\alpha+x^{-\alpha}}{x^\alpha-x^{-\alpha}}$ and $\csch_\alpha(x) := 2(x^\alpha-x^{-\alpha})^{-1}$, we thus obtain \begin{align*} \widetilde{\Pi}(\Omega_{\mathfrak{g}}) &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2}\coth_\alpha C_\alpha\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{\csch^2_\alpha}{4} \qty(m(A_\alpha)\otimes 1 + 1\otimes m(A_{t\alpha}))\\ &\qquad- \sum_{\alpha\in\Sigma} \frac{\csch_\alpha \coth_\alpha}{2} (1\otimes\phi)A_\alpha\\ \Pi(\Omega_{\mathfrak{g}}) &= \Omega_{\mathfrak{c}'} + \Omega_{\mathfrak{m}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2}\coth_\alpha C_\alpha\\ &\qquad+ \sum_{\alpha\in\Sigma} \frac{\csch^2_\alpha}{4} \qty(1\otimes m(A_{t\alpha}) + m(A_\alpha)\otimes1)\\ &\qquad -\sum_{\alpha\in\Sigma} \frac{\csch_\alpha\coth_\alpha}{2} (\phi\otimes1)A_{t\alpha}, \end{align*} which equals $\widetilde{\Pi}(\Omega_{\mathfrak{g}})$ because $(\phi\otimes\phi)A_\alpha = A_{t\alpha}$ (by Proposition~\ref{sec:prop-operator-A}). Lastly, note that \begin{align*} \frac{\csch_\alpha(x)\coth_\alpha(x)}{2} &= \frac{x^\alpha + x^{-\alpha} + 2}{\qty(x^\alpha-x^{-\alpha})^2} - \frac{\csch_\alpha^2(x)}{2}\\ &= \frac{\qty(1+x^\alpha)(1+x^{-\alpha})}{\qty(1+x^\alpha)(1-x^{-\alpha})\qty(1+x^{-\alpha})\qty(x^{\alpha}-1)} - \frac{\csch_\alpha^2(x)}{2}\\ &= \frac{\csch_{\alpha/2}^2(x)}{4} - \frac{\csch_\alpha^2(x)}{2}, \end{align*} where the square of $\csch_{\alpha/2}$ is a well-defined quantity obtained by multiplying out the product.
In light of this we can also rewrite $\Pi(\Omega_{\mathfrak{g}})=\widetilde{\Pi}(\Omega_{\mathfrak{g}})$ as \begin{align*} \Pi(\Omega_{\mathfrak{g}}) = \widetilde{\Pi}(\Omega_{\mathfrak{g}}) &= \Omega_{\mathfrak{c}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2} \coth_\alpha C_\alpha + \Omega_{\mathfrak{m}'}\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{\csch_\alpha^2}{4}\qty(m(A_\alpha)\otimes 1 + 1\otimes m(A_{t\alpha}) + 2 (1\otimes\phi)A_\alpha)\\ &\qquad -\sum_{\alpha\in\Sigma} \frac{\csch_{\alpha/2}^2}{4} (1\otimes\phi)A_\alpha. \end{align*} \end{proof} \begin{corollary} In the case where $\Ad(t)=1$, we have $(\exp(X)t)^\alpha = \exp(\alpha(X))$, and the radial part of $\Omega_{\mathfrak{g}}$ simplifies to \begin{align*} \Pi(\Omega_{\mathfrak{g}}) &= \Omega_{\mathfrak{c}'} + \sum_{\alpha\in\Sigma} \frac{n_\alpha}{2} \coth_\alpha C_\alpha + \Omega_{\mathfrak{m}'}\\ &\qquad + \sum_{\alpha\in\Sigma} \frac{\csch_\alpha^2}{4}\qty(m(A_\alpha)\otimes 1 + 1\otimes m(A_{\alpha}) + 2 A_\alpha)\\ &\qquad -\sum_{\alpha\in\Sigma} \frac{\csch_{\alpha/2}^2}{4} A_\alpha \end{align*} where $\coth_\alpha,\csch_\alpha$ are related to the usual hyperbolic functions: if $x=\exp(X)$, then \[ \coth_\alpha(x) = \coth(\alpha(X)),\qquad \csch_\alpha(x) = \csch(\alpha(X)). \] \end{corollary} \section{Mathematical Setup for Conformal Blocks}\label{sec:cb} Let $p$ and $q$ be natural numbers with $d:=p+q>2$ and $p\ge q$. Most commonly, we will encounter $q=0$ (\emph{Euclidean}) or $q=1$ (\emph{Lorentzian}). Let furthermore $\eta$ denote the standard bilinear form of signature $(p,q)$ on $\RR^{p,q}:=\RR^d$ and by abuse of notation also the one of signature $(p+1,q+1)$ on $\RR^{p+1,q+1}=\RR^{d+2}$. We will use lower case Greek letters to denote indices pertaining to $\RR^{p+1,q+1}$ and lower case Latin letters for $\RR^{p,q}$.
For both we shall use the usual index raising and lowering conventions of physics (with the pseudo-inner product $\eta$), such that $A_\mu = A^\mu$ if $\mu\le p$ and $A_\mu=-A^\mu$ if $\mu>p$ (analogously for $A_i$). \subsection{Conformal Compactification} \begin{definition} Write $q:\, \RR^{p+1,q+1}\setminus\{0\}\to\RR\mathbb{P}^{d+1}$ for the projectivisation map. The real variety \[ \widehat{\RR^{p,q}} := \{q(v)\mid v\in\RR^{p+1,q+1}\setminus\{0\}, \eta(v,v)=0\} \] is called the \emph{conformal compactification of $\RR^{p,q}$}. \end{definition} Note that since $\widehat{\RR^{p,q}}$ is Zariski closed, it is also closed in the Euclidean topology. Since $\RR\mathbb{P}^{d+1}$ is compact, this entails that $\widehat{\RR^{p,q}}$ is also compact, as the name would imply. We first establish that $\widehat{\RR^{p,q}}$ is a smooth manifold in a useful sense and that it is indeed a compactification of $\RR^{p,q}$. \begin{lemma}\label{sec:lem-comp-diffeo} $\widehat{\RR^{p,q}}$ is an embedded submanifold of $\RR\mathbb{P}^{d+1}$. Furthermore, the map \[ \iota:\, \RR^{p,q}\to\widehat{\RR^{p,q}},\qquad v\mapsto (1-\eta(v,v)\,:\,2v\,:\,1+\eta(v,v)) \] is a diffeomorphism onto its image $\iota(\RR^{p,q})$, which is dense in $\widehat{\RR^{p,q}}$. \end{lemma} \begin{proof} Note that $\widehat{\RR^{p,q}}$ is a regular variety since $\eta$ is nondegenerate. Next, we define $f: \widehat{\RR^{p,q}}\cap\{(v_0:\dots:v_{d+1})\mid v_0+v_{d+1}\ne0\}\to\RR^{p,q}$ by \[ (v_0\,:\,v\,:\,v_{d+1})\mapsto \frac{v}{v_0+v_{d+1}}. \] This map is evidently well-defined and smooth. Note also that \[ f(\iota(v)) = v \] for all $v\in\RR^{p,q}$. If $(v_0:v:v_{d+1})\in\widehat{\RR^{p,q}}$, we have $\eta(v,v)=-(v_0+v_{d+1})(v_0-v_{d+1})$, so that \begin{align*} \iota(f(v_0:v:v_{d+1})) &= \iota\qty(\frac{v}{v_0+v_{d+1}})\\ &= \qty(1 - \frac{\eta(v,v)}{(v_0+v_{d+1})^2} : \frac{2v}{v_0+v_{d+1}} : 1 + \frac{\eta(v,v)}{(v_0+v_{d+1})^2})\\ &= \qty(v_0+v_{d+1} + v_0 - v_{d+1}: 2v: v_0+v_{d+1}-v_0 + v_{d+1})\\ &= (v_0:v:v_{d+1}).
\end{align*} Consequently, $\iota$ is a diffeomorphism onto its image, with inverse $f$. To see that $\iota(\RR^{p,q})$ is dense, note that any element of the form \[ (v_0\,:\, v\,:\, -v_0) \] can be reached as follows: if $v=0$, then the point $(1:0:-1)=\infty$ can be reached as \begin{align*} \lim_{t\to\infty} \iota(tw) &= \lim_{t\to\infty} (1-t^2\eta(w,w): 2tw: 1+t^2\eta(w,w))\\ &= \lim_{t\to\infty} (\eta(w,w)-t^{-2}: -2t^{-1}w: -\eta(w,w)-t^{-2})\\ &= (1:0:-1) \end{align*} for any $w\in\mathbb{R}^{p,q}$ with $\eta(w,w)\ne0$. If $v\ne0$, there is a vector $w$ such that $\eta(v,w)\ne0$. By rescaling, we can choose that inner product to be $-v_0$ (if $v_0=0$, we may instead take $w=0$, in which case $\iota(tv)=(1:2tv:1)\to(0:v:0)$). Note that $\eta(v,v)=0$. Then \begin{align*} \lim_{t\to\infty}\iota(w+vt) &= \lim_{t\to\infty} (1-\eta(w,w)+2tv_0: 2w+2vt: 1+\eta(w,w)-2tv_0)\\ &= \lim_{t\to\infty} ((1-\eta(w,w))t^{-1}+2v_0: 2t^{-1}w+2v: (1+\eta(w,w))t^{-1} - 2v_0)\\ &= (v_0:v:-v_0).\qedhere \end{align*} \end{proof} \subsection{Conformal Group $G$ and its Structure} \begin{lemma} $G:= SO(p+1,q+1)_0$ is the largest classical connected Lie group acting on $\widehat{\RR^{p,q}}$ by projective transformations. \end{lemma} \begin{proof} The group of projective transformations of $\RR\mathbb{P}^{d+1}$ is $PGL(d+2)$, i.e. $GL(d+2)/\RR^\times$. Evidently, those that leave $\widehat{\RR^{p,q}}$ invariant are \begin{align*} &\{g\in GL(d+2)\mid\forall v\in\RR^{p+1,q+1}:\quad \eta(v,v)=0\Rightarrow \eta(gv, gv)=0\}/\RR^{\times}\\ =&\{g\in GL(d+2)\mid\forall v\in\CC^{p+1,q+1}:\quad \eta(v,v)=0\Rightarrow \eta(gv, gv)=0\}/\RR^{\times}\\ =&\{g\in GL(d+2)\mid \mathbb{V}(\eta)\subseteq\mathbb{V}(\eta\circ g)\}/\RR^\times, \end{align*} where we used that we can also complexify $\eta$ and $\RR^{p+1,q+1}$ and the fact that the real restriction of $\eta$'s zero set is Zariski dense. By Hilbert's Nullstellensatz, $\mathbb{V}(\eta)\subseteq\mathbb{V}(\eta\circ g)$ implies that the polynomial $\eta\circ g$ is a multiple of $\eta$ (as the ideals generated by both are radical).
Since they are both homogeneous of degree 2, this multiple must be a nonzero scalar, and since $\eta,g$ have real coefficients, it must be a nonzero real, i.e. \[ \{g\in GL(d+2)\mid\exists C\in\RR^\times: \eta\circ g = C\eta\}/\RR^{\times} \] is the group of projective transformations leaving $\widehat{\RR^{p,q}}$ invariant. Since every positive real number has a positive square root, we can rescale each representative so that this group becomes \[ \{g\in GL(d+2)\mid \eta\circ g=\pm \eta\}/\{\pm1\}, \] thus this group is doubly covered by an extension of $O(p+1,q+1)$ by $\{\pm1\}$. If we take the identity component, we get a group that is (potentially) doubly covered by $SO(p+1,q+1)_0=G$. \end{proof} We therefore see that it is reasonable to study $G$. \begin{lemma}[{\cite[\S VII.2, example 2]{knapp}}] Together with $K=SO(p+1)\times SO(q+1)$ and the maps \begin{alignat*}{2} \theta: \mathfrak{g}&\to\mathfrak{g},\qquad &X&\mapsto -X^T = \eta X \eta\\ B: \mathfrak{g}\otimes\mathfrak{g}&\to\RR, &X\otimes Y&\mapsto \tr(XY), \end{alignat*} $G$ is a reductive Lie group. \end{lemma} \begin{definition} For $\mu,\nu,\sigma,\rho=0,\dots,d+1$ define \[ \tensor{(E_{\mu\nu})}{^\sigma_\rho} = \delta^\sigma_\mu \eta_{\nu\rho} \] (so that $\tensor{(\tensor{E}{_\mu^\nu})}{^\sigma_\rho} = \delta^\sigma_\mu\delta^\nu_\rho$) and \[ F_{\mu\nu} := E_{\mu\nu} - E_{\nu\mu}. \] \end{definition} The set $(F_{\mu\nu})_{0\le \mu < \nu \le d+1}$ is a basis of $\mathfrak{g}$ and a short calculation will convince the reader that \[ \comm{F_{\mu\nu}}{F_{\rho\sigma}} = \eta_{\nu\rho} F_{\mu\sigma} + \eta_{\mu\sigma}F_{\nu\rho} - \eta_{\mu\rho}F_{\nu\sigma} - \eta_{\nu\sigma} F_{\mu\rho}. \] Note in addition that $F_{\mu\nu}=-F_{\nu\mu}$ and that $\theta(F_{\mu\nu})=F^{\mu\nu}$, which equals $F_{\mu\nu}$ if $\mu,\nu\le p$ or $p<\mu,\nu$, and which equals $-F_{\mu\nu}$ otherwise.
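The commutation relation can also be verified numerically; the following Python sketch (sample signature $p=2$, $q=1$, assumed for illustration only) builds the matrices $E_{\mu\nu}$ and $F_{\mu\nu}$ directly from the definition and checks the bracket for all index combinations.

```python
import numpy as np

# Sample signature: p = 2, q = 1, so d = 3 and g = so(p+1, q+1) acts on R^{d+2}.
p, q = 2, 1
d = p + q
n = d + 2
eta = np.diag([1.0] * (p + 1) + [-1.0] * (q + 1))

def E(mu, nu):
    # (E_{mu nu})^sigma_rho = delta^sigma_mu eta_{nu rho}
    M = np.zeros((n, n))
    M[mu, :] = eta[nu, :]
    return M

def F(mu, nu):
    return E(mu, nu) - E(nu, mu)

# [F_{mu nu}, F_{rho sigma}] = eta_{nu rho} F_{mu sigma} + eta_{mu sigma} F_{nu rho}
#                              - eta_{mu rho} F_{nu sigma} - eta_{nu sigma} F_{mu rho}
for mu in range(n):
    for nu in range(n):
        for rho in range(n):
            for sigma in range(n):
                lhs = F(mu, nu) @ F(rho, sigma) - F(rho, sigma) @ F(mu, nu)
                rhs = (eta[nu, rho] * F(mu, sigma) + eta[mu, sigma] * F(nu, rho)
                       - eta[mu, rho] * F(nu, sigma) - eta[nu, sigma] * F(mu, rho))
                assert np.allclose(lhs, rhs), (mu, nu, rho, sigma)
```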
\begin{proposition} $\mathfrak{k}$ is spanned by the $F_{\mu\nu}$ where $\mu,\nu$ are either both $\le p$ or both $>p$, and $\mathfrak{p}$ is spanned by the $F_{\mu\nu}$ with $0\le \mu\le p <\nu \le d+1$. For $a=0,\dots,q$ define $D_a := F^{a,d+1-a} = - F_{a,d+1-a}$. Then \[ \mathfrak{a}_{\mathfrak{p}} :=\operatorname{span}\{D_0,\dots,D_q\} \] is a maximal commutative subspace of $\mathfrak{p}$. \end{proposition} \begin{proof} By previous observations, the $F_{\mu\nu}$ are contained in $\mathfrak{k},\mathfrak{p}$, respectively. Since together they form a basis, and since $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$, the appropriate $F_{\mu\nu}$ span $\mathfrak{k},\mathfrak{p}$, respectively. Note that two $F_{\mu\nu}$ commute automatically if their indices don't collide, so that $\mathfrak{a}_{\mathfrak{p}}$ as defined is indeed commutative. Let now \[ X = \sum_{\mu\le p<\nu} A^{\mu\nu}F_{\mu\nu} \in\mathfrak{p} \] commute with $\mathfrak{a}_{\mathfrak{p}}$. In particular, \begin{align*} \comm{D_a}{X} &= \sum_{\mu\le p<\nu} A^{\mu\nu} \comm{F^{a,d+1-a}}{F_{\mu\nu}}\\ &= \sum_{\mu\le p<\nu} A^{\mu\nu} \qty(-\delta^a_\mu \tensor{F}{^{d+1-a}_\nu} - \delta^{d+1-a}_\nu \tensor{F}{^a_\mu})\\ &= -\sum_{\substack{\nu=p+1\\\nu\ne d+1-a}}^{d+1} A^{a,\nu} \tensor{F}{^{d+1-a}_\nu} - \sum_{\substack{\mu=0\\\mu\ne a}}^p A^{\mu,d+1-a} \tensor{F}{^a_\mu}. \end{align*} The first sum contains elements of $0\oplus \mathfrak{so}(q+1)$ and the second sum contains elements of $\mathfrak{so}(p+1)\oplus 0$; the two sums are therefore linearly independent. Since the bracket is zero, we obtain $A^{\mu\nu}=0$ for all $0\le \mu\le p < \nu\le d+1$ satisfying $\mu+\nu\ne d+1$. Consequently, $X\in\mathfrak{a}_{\mathfrak{p}}$. \end{proof} \begin{proposition}\label{sec:prop-rrs-decomposition} Define $\epsilon_0,\dots,\epsilon_q\in\mathfrak{a}_{\mathfrak{p}}^*$ by $\epsilon_a(D_b) := \delta_{a,b}$.
Then $\mathfrak{g}$ has the following restricted root space decomposition with respect to $\mathfrak{a}_{\mathfrak{p}}$: \[ \mathfrak{g} = \mathfrak{m}_{\mathfrak{p}} \oplus \mathfrak{a}_{\mathfrak{p}} \oplus\bigoplus_{\alpha\in R} \mathfrak{g}_\alpha \] where $\mathfrak{m}_{\mathfrak{p}}$ is spanned by $F_{\mu\nu}$ for $q<\mu<\nu\le p$ and \[ R = \{\pm\epsilon_a\mid 0\le a\le q\} \cup\{\epsilon_a\pm\epsilon_b,-\epsilon_a\pm\epsilon_b\mid 0\le a<b\le q\} \] is its restricted root system, which is of type $B_{q+1}$. The root multiplicities are $p-q$ for the short roots and $1$ for the long roots. \end{proposition} \begin{proof} From \cite[\S VI.4]{knapp} it is known that such a decomposition exists (for $R$ the set of restricted roots) and is direct. It therefore suffices to find the roots and root spaces and $\mathfrak{m}_{\mathfrak{p}}$. \begin{description} \item[Short roots] Let $0\le a,b\le q$ and $q<\mu\le p$, then \begin{align*} \comm{D_a}{F^{b,\mu}\pm F^{d+1-b,\mu}} &= -\eta^{a,b} F^{d+1-b,\mu} \pm \eta^{d+1-a,d+1-b} F^{b,\mu}\\ &= \mp\delta_{a,b} \qty(F^{b,\mu}\pm F^{d+1-b,\mu})\\ &= (\mp\epsilon_b)(D_a)\qty(F^{b,\mu}\pm F^{d+1-b,\mu}), \end{align*} so that $F^{b,\mu}\pm F^{d+1-b,\mu}\in\mathfrak{g}_{\mp\epsilon_b}$. The root multiplicity of $\pm\epsilon_b$ is therefore at least $p-q$. \item[Long roots] Let $a,b,c\in \{0,\dots,q\}$ with $a\ne b$, then \begin{align*} &\comm{D_c}{F^{a,b} \pm F^{a,d+1-b} + F^{d+1-a,b} \pm F^{d+1-a,d+1-b}}\\ =& (-\epsilon_a\mp\epsilon_b)(D_c) \qty(F^{a,b} \pm F^{a,d+1-b} + F^{d+1-a,b} \pm F^{d+1-a,d+1-b}), \end{align*} so that we have found an element of $\mathfrak{g}_{-\epsilon_a\mp\epsilon_b}$. By applying $\theta$, we also find an element of $\mathfrak{g}_{\epsilon_a\pm\epsilon_b}$. The root multiplicity of $\epsilon_a\pm\epsilon_b,-\epsilon_a\mp\epsilon_b$ is therefore at least 1. \item[Zero spaces] For $q<\mu<\nu\le p$ the indices of $F_{\mu\nu}$ are disjoint from $(a,d+1-a)$ for $a=0,\dots,q$, therefore $\comm{D_a}{F_{\mu\nu}}=0$.
The dimension of $\mathfrak{m}_{\mathfrak{p}}$ is therefore at least $\frac{(p-q)(p-q-1)}{2}$. \end{description} We therefore see that the roots from the claim are indeed roots of $\mathfrak{g}$. To see that we have indeed found all roots and fully described the root spaces, note that \begin{align*} \frac{(d+2)(d+1)}{2} &= \dim(\mathfrak{g}) = \dim(\mathfrak{m}_{\mathfrak{p}}) + \dim(\mathfrak{a}_{\mathfrak{p}}) + \sum_{\alpha\in R} \dim(\mathfrak{g}_\alpha)\\ &\ge \frac{(p-q)(p-q-1)}{2} + (q+1) + (q+1)\cdot 2\cdot (p-q) + \frac{q(q+1)}{2}\cdot 4\\ &= \frac{(d-2q)(d-2q-1) + 2(q+1) + 4(q+1)(d-2q) + 4q(q+1)}{2}\\ &= \frac{(d+2)(d+1)}{2}, \end{align*} so that the inequality is in fact an equality, and we are done. \end{proof} \begin{lemma} $G$ acts transitively on $\widehat{\RR^{p,q}}$. Let $Q$ be the stabiliser of $(1:0:1)=\iota(0)$. It is a closed subgroup. In particular $\widehat{\RR^{p,q}}\cong G/Q$. \end{lemma} \begin{proof} Let $q(v)\in\widehat{\RR^{p,q}}$ and decompose $v\in\RR^{p+1,q+1}$ under $\RR^{p+1,q+1}=\RR^{p+1}\oplus\RR^{q+1}$ as $v=v_1\oplus v_2$. Since $v$ is a null vector, we have $\norm{v_1}=\norm{v_2}$ (Euclidean norms). Since $SO(p+1)$ and $SO(q+1)$ act transitively on the respective unit spheres, there are $g\in SO(p+1),g'\in SO(q+1)$ such that \[ ge_0 = \frac{v_1}{\norm{v_1}},\qquad g'e_q = \frac{v_2}{\norm{v_2}} \] (where $e_0,\dots,e_p$ are the respective standard unit vectors). Let $h:= g\oplus g'$, then $h\in SO(p+1,q+1)_0$ and \[ h\mqty(e_0 + e_{d+1}) = \mqty(g & 0\\0 & g') \mqty(e_0\\e_q) = \frac{1}{\norm{v_1}}\mqty(v_1\\v_2), \] so that $h\cdot(1:0:1)=q(v)$ (where $(1:0:1)$ is split up as $1+d+1$). The stabiliser $Q$ is the preimage of the closed set $\{\iota(0)\}$ under the continuous function $G\to\widehat{\RR^{p,q}},g\mapsto g\cdot\iota(0)$, and hence closed. \end{proof} We are now interested in what the corresponding Lie subalgebra $\mathfrak{q}$ looks like.
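Before turning to $\mathfrak{q}$, here is a quick numerical confirmation of the dimension count in the proof of Proposition~\ref{sec:prop-rrs-decomposition} (a Python sketch, using the short-root multiplicity $p-q$ found in the proof):

```python
# Dimension count for g = so(p+1, q+1): dim g = (d+2)(d+1)/2 should equal
# dim m_p + dim a_p + (short-root spaces) + (long-root spaces).
for q in range(0, 6):
    for p in range(q, q + 7):
        d = p + q
        if d <= 2:
            continue
        dim_g = (d + 2) * (d + 1) // 2
        dim_mp = (p - q) * (p - q - 1) // 2   # span of F_{mu nu}, q < mu < nu <= p
        dim_ap = q + 1                        # span of D_0, ..., D_q
        dim_short = 2 * (q + 1) * (p - q)     # 2(q+1) roots ±eps_a, multiplicity p-q
        dim_long = 4 * (q * (q + 1) // 2)     # 4·q(q+1)/2 roots ±eps_a ± eps_b, mult. 1
        assert dim_g == dim_mp + dim_ap + dim_short + dim_long, (p, q)
```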
\begin{lemma} Let \[ \Gamma = R\setminus\{-\epsilon_0, -\epsilon_0\pm\epsilon_a\mid 1\le a\le q\}, \] then \[ \mathfrak{q} = \mathfrak{m}_{\mathfrak{p}} \oplus \mathfrak{a}_{\mathfrak{p}} \oplus\bigoplus_{\alpha\in\Gamma}\mathfrak{g}_\alpha. \] \end{lemma} \begin{proof} We have $g\in Q$ iff $g(e_0+e_{d+1})\in\RR(e_0+e_{d+1})$. Correspondingly, $X\in\mathfrak{q}$ iff $X(e_0+e_{d+1})\in\RR(e_0+e_{d+1})$. We shall first show $\mathfrak{a}_{\mathfrak{p}}\le\mathfrak{q}$. Let $1\le a\le q$, then $D_a$ has zeroes in the 0th and $d+1$-st columns, so that \[ D_a(e_0 + e_{d+1}) = 0, \] whence $D_a\in\mathfrak{q}$. Moreover, \[ D_0(e_0+e_{d+1}) = F^{0,d+1}(e_0+e_{d+1}) = e_0 + e_{d+1}, \] so also $D_0\in\mathfrak{q}$. Consequently $\mathfrak{a}_{\mathfrak{p}}\le\mathfrak{q}$. Now, since $\mathfrak{q}$ is a Lie algebra, the algebra $\ad(\mathfrak{a}_{\mathfrak{p}})$ leaves $\mathfrak{q}$ invariant, so that \[ \mathfrak{q} = (\mathfrak{a}_{\mathfrak{p}}\oplus \mathfrak{m}_{\mathfrak{p}})\cap\mathfrak{q} \oplus\bigoplus_{\alpha\in R} (\mathfrak{g}_\alpha\cap \mathfrak{q}). \] We just need to see what those intersections are. \begin{description} \item[Zero] Let $q<\mu<\nu\le p$, then $F_{\mu\nu}$ has zeroes in the 0th and $d+1$-st column, hence \[ F_{\mu\nu}(e_0 + e_{d+1}) = 0, \] so that $F_{\mu\nu}\in\mathfrak{q}$. Consequently, $\mathfrak{m}_{\mathfrak{p}}\oplus\mathfrak{a}_{\mathfrak{p}}\le\mathfrak{q}$. \item[$\pm\epsilon_0$] For $q<\mu\le p$ we have \[ (F^{0,\mu} \pm F^{d+1,\mu})(e_0+e_{d+1}) = (-1\mp 1)e_\mu, \] which lies in $\RR(e_0+e_{d+1})$ only in the $-$-case (where it vanishes). From Proposition~\ref{sec:prop-rrs-decomposition} we conclude that $\mathfrak{g}_{\epsilon_0}\subseteq\mathfrak{q}$ and $\mathfrak{g}_{-\epsilon_0}\cap\mathfrak{q}=0$. \item[Other short roots] For $a=1,\dots,q$ and $q<\mu\le p$ the matrix $F^{a,\mu} \pm F^{d+1-a,\mu}$ has zeroes in the 0th and $d+1$-st columns, and is hence contained in $\mathfrak{q}$. Consequently, $\mathfrak{g}_{\mp\epsilon_a}\subseteq\mathfrak{q}$.
\item[$\pm\epsilon_0\pm\epsilon_a$] Let $1\le a\le q$, then \begin{align*} (F^{0,a} \pm F^{0,d+1-a} + F^{d+1,a} \pm F^{d+1,d+1-a})(e_0+e_{d+1}) &= -2e_a \pm 2e_{d+1-a}\ne0\\ (F^{0,a} \mp F^{0,d+1-a} - F^{d+1,a} \pm F^{d+1,d+1-a})(e_0+e_{d+1}) &= 0, \end{align*} so that $\mathfrak{g}_{-\epsilon_0\mp\epsilon_a}\cap\mathfrak{q}=0$ and $\mathfrak{g}_{\epsilon_0\pm\epsilon_a}\subseteq\mathfrak{q}$. \item[Other long roots] Let $1\le a<b\le q$. The matrices contained in $\mathfrak{g}_{\epsilon_a\pm\epsilon_b}$ and $\mathfrak{g}_{-\epsilon_a\pm\epsilon_b}$ only have nonzero entries in the $a$-th, $b$-th, $d+1-a$-th, $d+1-b$-th columns, so in particular only zeroes in the 0th and $d+1$-st column. Consequently, all these weight spaces are contained in $\mathfrak{q}$. \end{description} We found that $\mathfrak{g}_\alpha\cap\mathfrak{q} = \mathfrak{g}_\alpha$ unless $\alpha=-\epsilon_0$ or $\alpha=-\epsilon_0\pm\epsilon_a$ for $1\le a\le q$, as claimed. \end{proof} We pick the positive subsystem \[ R^+ := \{\epsilon_a\mid 0\le a\le q\} \cup\{\epsilon_a\pm\epsilon_b\mid 0\le a<b\le q\} \] and the corresponding set $S$ of simple roots: \[ S := \{\epsilon_0-\epsilon_1,\dots,\epsilon_{q-1}-\epsilon_q, \epsilon_q\}. \] Let $S':= S\setminus\{\epsilon_0-\epsilon_1\}$. Note that \[ \Gamma = R^+ \cup (R \cap\operatorname{span}_\RR(S')). \] \begin{corollary} $\mathfrak{q}\le\mathfrak{g}$ is maximal parabolic with respect to our choices of $\mathfrak{a}_{\mathfrak{p}}$ and positive subsystem. \end{corollary} \begin{proof} $\mathfrak{q}$ is parabolic since the minimal parabolic subalgebra \[ \mathfrak{m}_{\mathfrak{p}} \oplus \mathfrak{a}_{\mathfrak{p}} \oplus \bigoplus_{\alpha\in R^+}\mathfrak{g}_\alpha \] is contained therein (recall that $R^+\subseteq\Gamma$).
By \cite[Proposition~VII.7.76]{knapp}, there is an inclusion-preserving 1-1 correspondence between subsets of $S$ and the parabolic subalgebras containing the minimal parabolic subalgebra corresponding to our choices of $\mathfrak{a}_{\mathfrak{p}}$ and $R^+$; this correspondence associates our $S'$ to $\mathfrak{q}$. Any Lie algebra lying between $\mathfrak{q}$ and $\mathfrak{g}$ must therefore correspond to a subset lying between $S'$ and $S$. Since these sets differ by only one element, there is no subset properly between $S'$ and $S$. Consequently, there are no Lie algebras properly between $\mathfrak{q}$ and $\mathfrak{g}$. \end{proof} With this result in hand we shall now make some definitions in the usual parlance of parabolic subalgebras. \begin{definition}\label{sec:def-parabolic-subalgebras} We define the subalgebras \begin{align*} \mathfrak{a}&:=\RR D_0\\ \mathfrak{a}_M &:= \operatorname{span}_\RR\{D_1,\dots,D_q\}\\ \mathfrak{m}&:=\operatorname{span}_\RR\{F_{\mu\nu}\mid 1\le\mu<\nu\le d\}\\ \mathfrak{n}&:=\operatorname{span}_\RR\{K^i \mid 1\le i\le d\}\\ \mathfrak{n}_M &:= \operatorname{span}_\RR \{F^{a,\nu}-F^{d+1-a,\nu}\mid 1\le a\le q, 1\le\nu\le d\}\\ \overline{\mathfrak{n}} &:= \operatorname{span}_\RR\{P^i \mid 1\le i\le d\} \end{align*} where \[ K^i := F^{0,i} - F^{d+1,i},\qquad P^i := -F^{0,i} - F^{d+1,i} \] ($i=1,\dots,d$). Let furthermore $A,A_M,N,N_M$ be the analytic subgroups corresponding to $\mathfrak{a},\mathfrak{a}_M,\mathfrak{n},\mathfrak{n}_M$ and let $M:=\: ^0Z_G(\mathfrak{a})$ (notation from e.g. \cite[\S VII.7.2]{knapp}). \end{definition} \begin{lemma} $Q=MAN$, which is the parabolic subgroup associated to $\mathfrak{q}$. \end{lemma} \begin{proof} ``$\subseteq$'': By \cite[Proposition~VII.7.83(b)]{knapp}, $MAN=N_G(\mathfrak{q})$, so it suffices to show that every $g\in Q$ normalises $\mathfrak{q}$. Let $X\in\mathfrak{q}$, then \[ \exp(t\Ad(g)(X))\iota(0) = g\exp(tX)g^{-1}\iota(0) = \iota(0), \] for all $t\in\RR$.
Taking the derivative at $t=0$, we see that $\Ad(g)(X)\in\mathfrak{q}$ as well.\\ ``$\supseteq$'': Since $\mathfrak{a},\mathfrak{n}$ fix $\iota(0)$, so do their analytic subgroups, i.e. $A,N\le Q$. By \cite[Proposition~VII.7.82(c)]{knapp} we have $M=(M\cap K)A_MN_M$. Now, both $\mathfrak{a}_M,\mathfrak{n}_M$ fix $\iota(0)$, hence so do their analytic subgroups. It therefore remains to check that $K\cap M\le Q$. Assume \[ g=\mqty(A & 0\\0 & B)\in K\cap M \] (written as a $p+1$-block and a $q+1$-block). Then $\Ad(g)(D_0)=D_0$. Block decomposing $D_0$, this means \[ \mqty(A & 0\\0 & B) \mqty(0 & e_1e_{q+1}^T\\e_{q+1}e_1^T & 0) \mqty(A^T & 0 \\ 0 & B^T) = \mqty(0 & Ae_1 (Be_{q+1})^T\\ Be_{q+1} (Ae_1)^T & 0), \] i.e. $Ae_1(Be_{q+1})^T = e_1e_{q+1}^T$. That is, there exists $\lambda\in\RR$ such that $Ae_1=\lambda e_1$ and $Be_{q+1}=\lambda^{-1}e_{q+1}$. Since $\pm1$ are the only two possible real eigenvalues an orthogonal matrix could have, we have $\lambda=\lambda^{-1}$. Consequently, \[ g\iota(0) = q\qty(\mqty(A & 0\\0 & B)\mqty(e_1\\e_{q+1})) = q\qty(e_1\\e_{q+1}) = \iota(0). \] Thus $g\in Q$. Consequently, $K\cap M\le Q$. \end{proof} Lastly, for future reference, we shall also establish what exactly the group $M$ looks like. \begin{lemma} For every $A\in SO(p,q)_0$, there exists $g\in M$ such that \[ \forall x\in \RR^{p,q}:\quad g\cdot \iota(x) = \iota(Ax). \] \end{lemma} \begin{proof} Since $M\le G$ is a closed Lie subgroup with Lie algebra $\mathfrak{m}$ (which is isomorphic to $\mathfrak{so}(p,q)$), it contains the corresponding analytic subgroup, which is isomorphic to $SO(p,q)_0$. Let $g\in SO(p,q)_0$; the corresponding element of $G$ is \[ \mqty(1 & 0 & 0\\0 & g & 0\\0 & 0 & 1), \] which transforms $\iota(x)$ into $\iota(gx)$. \end{proof} \subsection{Point Configurations} When considering the action of $G$ not on a single point of $\widehat{\RR^{p,q}}$ but on tuples, we quickly run into singular strata, i.e.
``thin'' sets of orbits that prevent $\widehat{\RR^{p,q}}^n/G$ from becoming a manifold. Part of these can be eliminated by considering $\operatorname{Conf}(\widehat{\RR^{p,q}},n)$, the \emph{configuration space of $n$ points} in $\widehat{\RR^{p,q}}$, which consists of tuples of \emph{distinct} points. However, since $G$ acts by projective transformations rather than by arbitrary homeomorphisms, this is not sufficient, and we have to resort to terminology from projective geometry. \begin{definition} In projective geometry, a tuple $(q(v_1),\dots, q(v_m))\in(\RR\mathbb{P}^n)^m$ is said to be \emph{in general position} if the vectors $v_1,\dots,v_m\in\RR^{n+1}$ are linearly independent. (Here, $q:\, \RR^{n+1}\setminus\{0\}\to\RR\mathbb{P}^n$ is the projectivisation map.) \end{definition} Since our transformation group is not all of $PGL(d+2)$ either, but involves the inner product, we have to add even more conditions to obtain our smooth stratum. \begin{definition} For the purposes of this paper, a tuple $(q(v_1),\dots,q(v_m))\in (\widehat{\RR^{p,q}})^m$ is said to be \emph{in general position} if the vectors $v_1,\dots,v_m\in\RR^{d+2}$ are linearly independent and no pair of them is orthogonal. Write $\GP(\widehat{\RR^{p,q}},m)=\GP(G/Q,m)$ for the set of all $m$-tuples of points in general position. This is an open dense subset: the map $(v_1,\dots,v_m)\mapsto (q(v_1),\dots,q(v_m))$ is a quotient map, and the conditions of being linearly independent and of being pairwise non-orthogonal with respect to $\eta$ are Zariski-open (and hence define a dense open subset). \end{definition} Another interpretation of the non-orthogonality condition can be found if we consider two embedded points from $\RR^{p,q}$ and check when they are in general position. \begin{lemma}\label{sec:lem-gp-lightlike-separation} Let $x,y\in\RR^{p,q}$. Then $(\iota(x),\iota(y))$ is in general position iff $x$ and $y$ are neither light-like separated nor coincident, i.e.
iff $\eta(x-y,x-y)\ne0$. \end{lemma} \begin{proof} We have $\iota(x)=q(v)$ and $\iota(y)=q(w)$ for \[ v = \mqty(1-\eta(x,x)\\2x\\1+\eta(x,x)),\qquad w = \mqty(1-\eta(y,y)\\2y\\1+\eta(y,y)). \] ``$\Rightarrow$'': Let $(\iota(x),\iota(y))$ be in general position. Note that \begin{align*} \eta(v,w) &= (1-\eta(x,x))(1-\eta(y,y)) + 4\eta(x,y) - (1+\eta(x,x))(1+\eta(y,y))\\ &= 4\eta(x,y) - 2\eta(x,x) - 2\eta(y,y)\\ &= -2\eta(x-y,x-y). \end{align*} Since we know that $\eta(v,w)\ne0$, we conclude that $\eta(x-y,x-y)\ne0$, so that $x,y$ are not light-like separated.\\ ``$\Leftarrow$'': Since $x,y$ are not light-like separated, the above calculation implies that $\eta(v,w)\ne0$. Furthermore, we know $x\ne y$, so that $\iota(x)\ne\iota(y)$ and in particular, $v,w$ are not multiples of each other. Since they are also both nonzero, they are linearly independent and consequently, $(q(v),q(w))$ are in general position. \end{proof} \begin{lemma} $G$'s action on $\widehat{\RR^{p,q}}$ extends naturally to $\GP(\widehat{\RR^{p,q}},n)$. \end{lemma} \begin{proof} Let $g\in G$ and $(q(v_1),\dots,q(v_n))$ be in general position. Since $g\in GL(d+2)$, the vectors $gv_1,\dots,gv_n$ are also linearly independent. Furthermore, for any $i\ne j\in\{1,\dots,n\}$ we have \[ \eta(gv_i,gv_j)=\eta(v_i,v_j)\ne0. \] So $(q(gv_1),\dots,q(gv_n))=(g\cdot q(v_1),\dots,g\cdot q(v_n))$ is again in general position. \end{proof} \begin{lemma}\label{sec:lem-g-action-pairs} The action of $G$ on $\GP(G/Q,2)$ is transitive with $MA$ being a typical stabiliser. \end{lemma} \begin{proof} Let $(x,y)\in\GP(G/Q,2)$ and pick $g\in G$ such that $g\cdot y=\infty=(1:0:-1)$. Write $g\cdot x=: q(v)$ with say $v=(v_0,\underline{v},v_{d+1})$ (split as $1+d+1$). Since $(g\cdot x,g\cdot y)$ are in general position, we have \[ 0\ne \eta(v,(1,0,-1)^T) = v_0+v_{d+1}. \] Recalling the proof of Lemma~\ref{sec:lem-comp-diffeo}, we can thus conclude that $g\cdot x = \iota(u)$ where $u:=\frac{\underline{v}}{v_0+v_{d+1}}$.
If we let the matrix $u^\bullet$ equal the column vector $u$ and correspondingly let $u_\bullet$ be the diagonal matrix $\eta$ multiplied with $u$ (standard index raising/lowering notation), we obtain \[ \exp(-u\cdot P) = \exp(-\sum_{i=1}^d u_i P^i) = \mqty(1-\frac{u^2}{2} & u^T_\bullet & -\frac{u^2}{2}\\ -u^\bullet & 1 & -u^\bullet\\ \frac{u^2}{2} & -u^T_\bullet & 1 + \frac{u^2}{2}) \] ($u^2 = \eta(u,u)$). Then we have \begin{align*} \exp(-u\cdot P)\cdot q(v) &= q\mqty(1\\0\\1) = \iota(0)\\ \exp(-u\cdot P)\cdot \infty &= q\mqty(1\\0\\-1) = \infty. \end{align*} Thus we have found an element $\exp(-u\cdot P)g\in G$ that maps $(x,y)$ to $(\iota(0),\infty)$. This shows transitivity. For a typical stabiliser we consider the point $(\iota(0),\infty)$ and let $H\le G$ be its stabiliser. Let $w\in G$ be an element satisfying $w\cdot\iota(0)=\infty$. Then \[ H=\{g\in G\mid g\in Q,\ w^{-1}gw\in Q\} = Q\cap wQw^{-1}. \] In particular, because of what $\iota(0)$ and $\infty$ look like, we can choose $w$ diagonal with its diagonal entries being $\pm1$, say $w=\operatorname{diag}(w_0,\dots,w_{d+1})$ with $w_0=-w_{d+1}$. Then $w^2=1$ and $w\in K$. As a consequence, we have $\Ad(w)(D_0)=-D_0$ and $\Ad(w)(D_a)=\pm D_a$ for $a=1,\dots,q$. This shows that $\Ad^*(w)(\Gamma)=-\Gamma$ and thus $w Nw^{-1}=\overline{N}$. Furthermore, by \cite[Proposition VII.7.82(a)]{knapp}, $MA$ is the centraliser of $\mathfrak{a}$ in $G$, thus if $g\in MA$ we have \[ \Ad(wgw^{-1})(D_0) = -\Ad(wg)(D_0) = -\Ad(w)(D_0)=D_0, \] so that $wgw^{-1}\in MA$ as well, hence $wMAw^{-1}=MA$. Consequently, we have \[ Q\cap wQw^{-1} = MAN \cap wMAw^{-1} wNw^{-1} = MAN\cap MA\overline{N}. \] Let $g\in H$, say $g=man = m'a'\overline{n}'$, then $\overline{n}' = a^{\prime-1}m^{\prime-1}man\in\overline{N}\cap Q$, which is trivial by \cite[Proposition VII.7.83(e)]{knapp}. Consequently, $g\in MA$. \end{proof} We can thus conclude that $\GP(G/Q,2)\cong G/MA$ as smooth $G$-sets.
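The closed form for $\exp(-u\cdot P)$ and its action on $q(v)$ and $\infty$ can be checked numerically. The following Python sketch (sample signature $p=2$, $q=1$, chosen for illustration) exploits that $u\cdot P$ is nilpotent of order three, so the exponential series terminates after the quadratic term:

```python
import numpy as np

# Sample signature p = 2, q = 1, so d = 3; indices 0, ..., d+1 on R^{d+2}.
p, q = 2, 1
d = p + q
n = d + 2
eta = np.diag([1.0] * (p + 1) + [-1.0] * (q + 1))
eta_d = eta[1:d + 1, 1:d + 1]      # the metric on the middle R^{p,q} block

def F_low(mu, nu):
    # F_{mu nu} = E_{mu nu} - E_{nu mu}, (E_{mu nu})^sigma_rho = delta^sigma_mu eta_{nu rho}
    return np.outer(np.eye(n)[mu], eta[nu]) - np.outer(np.eye(n)[nu], eta[mu])

def F_up(mu, nu):                  # raise both indices with the diagonal metric
    return eta[mu, mu] * eta[nu, nu] * F_low(mu, nu)

def P(i):                          # P^i = -F^{0,i} - F^{d+1,i}, i = 1, ..., d
    return -F_up(0, i) - F_up(d + 1, i)

u = np.array([0.4, -1.3, 0.7])     # arbitrary test vector in R^{p,q}
u2 = u @ eta_d @ u                 # u^2 = eta(u, u)
u_low = eta_d @ u                  # lowered components u_i

# N = -sum_i u_i P^i is nilpotent of order 3, so exp(N) = 1 + N + N^2/2.
N = -sum(u_low[i - 1] * P(i) for i in range(1, d + 1))
assert np.allclose(N @ N @ N, 0)
g = np.eye(n) + N + N @ N / 2

# Closed form for exp(-u.P) from the text.
g_expected = np.block([
    [np.array([[1 - u2 / 2]]), u_low[None, :],  np.array([[-u2 / 2]])],
    [-u[:, None],              np.eye(d),       -u[:, None]],
    [np.array([[u2 / 2]]),     -u_low[None, :], np.array([[1 + u2 / 2]])],
])
assert np.allclose(g, g_expected)

# exp(-u.P) maps iota(u) to iota(0) = (1:0:1) and fixes infinity = (1:0:-1).
iota_u = np.concatenate(([1 - u2], 2 * u, [1 + u2]))
e_plus = np.concatenate(([1.0], np.zeros(d), [1.0]))
e_minus = np.concatenate(([1.0], np.zeros(d), [-1.0]))
assert np.allclose(g @ iota_u, e_plus)
assert np.allclose(g @ e_minus, e_minus)
```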
\begin{lemma}\label{sec:lem-conf-frame} Let $n\ge2$ and $(x_1,\dots,x_n)\in\GP(G/Q,n)$. Then there exists $g\in G$ such that \[ g\cdot x_1 = \iota(0),\qquad g\cdot x_2 = \infty,\qquad g\cdot x_i = \iota(p_i)\quad (i=3,\dots,n), \] where $p_3,\dots,p_n\in\RR^{p,q}$ are linearly independent and none of $p_i$ ($i=3,\dots,n$) and $p_i-p_j$ ($i\ne j\in\{3,\dots,n\}$) is isotropic (i.e. light-like). \end{lemma} \begin{proof} Write $x_i=q(v_i)$ for $v_1,\dots,v_n\in\RR^{d+2}\setminus\{0\}$. We have $(x_1,x_2)\in\GP(G/Q,2)$, hence by Lemma~\ref{sec:lem-g-action-pairs}, there exists $g\in G$ mapping $(x_1,x_2)$ to $(\iota(0),\infty)$. Then since none of $gv_3,\dots,gv_n$ is orthogonal to $gv_2=(1,0,-1)^T$, we can again employ the same argument as in the transitivity proof of Lemma~\ref{sec:lem-g-action-pairs}, so that we find $p_3,\dots,p_n$ such that $g\cdot x_i = \iota(p_i)$ ($i=3,\dots,n$). By definition, the vectors \[ \mqty(1\\0\\1),\mqty(1\\0\\-1),\mqty(1-p_3^2\\2p_3^\bullet\\1+p_3^2),\dots, \mqty(1-p_n^2\\2p_n^\bullet\\ 1+p_n^2) \] are linearly independent, which implies that $p_3,\dots,p_n$ are linearly independent. Furthermore, by applying Lemma~\ref{sec:lem-gp-lightlike-separation} to $(g\cdot x_i,g\cdot x_j)$ for $i,j\in\{1,3,\dots,n\}$ in turn, we obtain that none of the vectors $p_i-p_j$ ($i\ne j\in\{3,\dots,n\}$) and $p_i-0$ ($i=3,\dots,n$) is isotropic. \end{proof} For the rest of this paper we shall be concerned with four-point configurations, i.e. with $n=4$. \begin{definition}\label{sec:def-uv} Define $u,v:\, \GP(G/Q,4)\to\RR$ by \begin{align} u(q(v_1),\dots,q(v_4)) &:= \frac{\eta(v_1,v_2)\eta(v_3,v_4)}{\eta(v_1,v_3)\eta(v_2,v_4)}\\ v(q(v_1),\dots,q(v_4)) &:= \frac{\eta(v_1,v_4)\eta(v_2,v_3)}{\eta(v_1,v_3)\eta(v_2,v_4)}. \end{align} They are both well-defined, smooth, and $G$-invariant, and for arguments of the form $\iota(x_1),\dots,\iota(x_4)$ they reduce to the well-known expressions for the cross-ratios from e.g. \cite[\S III.C.3]{bootstrapReview}.
\end{definition} \begin{proof} Firstly, note that both right-hand sides are homogeneous in $v_1,\dots,v_4$ of degree $(0,0,0,0)$, and by definition neither $\eta(v_1,v_3)$ nor $\eta(v_2,v_4)$ is zero. Thus, the functions are well-defined and continuous. Since $q$ is a smooth quotient map and our definition of $u,v$ in terms of $v_1,\dots,v_4$ is also smooth, $u,v$ are smooth maps on $\GP(G/Q,4)$. Since $G$ leaves $\eta$ invariant, we have $u(q(gv_1),\dots,q(gv_4))=u(q(v_1),\dots,q(v_4))$ (same for $v$), so that $u,v$ are $G$-invariant. To see that our definition coincides with the usual one, recall that as shown in the proof of Lemma~\ref{sec:lem-gp-lightlike-separation}, for $q(v_i)=\iota(p_i)$ (for the usual choice of $v_i$) we have $\eta(v_i,v_j)=-2(p_i-p_j)^2$, so that \[ u(\iota(p_1),\dots,\iota(p_4)) = \frac{(p_1-p_2)^2(p_3-p_4)^2}{(p_1-p_3)^2(p_2-p_4)^2} \] and similarly for $v$. \end{proof} \begin{corollary}\label{sec:cor-uv-conf-frame} Let $(x_1,\dots,x_4)$ be in the same $G$-orbit as $(\iota(0),\infty,\iota(x),\iota(y))$ as in Lemma~\ref{sec:lem-conf-frame}. Then \[ u(x_1,\dots,x_4) = \frac{(x-y)^2}{x^2},\qquad v(x_1,\dots,x_4) = \frac{y^2}{x^2}. \] \end{corollary} \begin{lemma}\label{sec:lem-configuration-homeomorphism} The map $\psi:\, G\to \GP(G/Q,2)^{\times 2}$ mapping \[ g\mapsto (\iota(0),\infty, g\iota(0),g\infty) \cong (Q, wQ, gQ, gwQ) \] ($w\in K$ as in the proof for the stabiliser in Lemma~\ref{sec:lem-g-action-pairs}) descends, when composed with the quotient map to $\GP(G/Q,2)^{\times 2}/G$, to a homeomorphism $MA\backslash G/MA\cong \GP(G/Q,2)^{\times 2}/G$. \end{lemma} \begin{proof} Define \begin{align*} f:\quad\GP(G/Q,2)^{\times 2}/G&\to MA\backslash G/MA,\\ G(\iota(0),\infty, \iota(x),\iota(y))&\mapsto MA \exp(x\cdot P) \exp(\frac{y-x}{(y-x)^2}\cdot K) MA.
\end{align*} This is well-defined since every element of $\GP(G/Q,2)^{\times 2}/G$ can be written in the form $G(\iota(0),\infty,\iota(x),\iota(y))$ and since any other choice of $x,y$ would be related by an element of $MA$: say there are $m\in M$ and $\alpha\in\RR$ such that $m\exp(\alpha D_0)\cdot\iota(x)=\iota(x')$ and the same for $y,y'$. Then \[ \Ad(m\exp(\alpha D_0))(x\cdot P) = x'\cdot P \] and \[ \Ad(m\exp(\alpha D_0))\frac{y-x}{(y-x)^2}\cdot K = \frac{y'-x'}{(y'-x')^2}\cdot K \] due to $\exp(\alpha D_0)$ acting on $\iota(x)$ and the $P^i$ as scaling by $\exp(-\alpha)$, and on the $K^i$ as scaling by $\exp(\alpha)$. Consequently, \begin{align*} &MA \exp(x'\cdot P)\exp(\frac{y'-x'}{(y'-x')^2}\cdot K) MA\\ =& MA m\exp(\alpha D_0) \exp(x\cdot P)\exp(\frac{y-x}{(y-x)^2}\cdot K) m^{-1}\exp(-\alpha D_0) MA\\ =& MA \exp(x\cdot P)\exp(\frac{y-x}{(y-x)^2}\cdot K) MA. \end{align*} The map $f$ is continuous by definition. Write $\tilde{\Psi}: G\to\GP(G/Q,2)^{\times 2}/G$ for the composition of $\psi$ with the quotient map. Then $\tilde{\Psi}$ is $MA$-biinvariant: \begin{align*} \tilde{\Psi}(magm'a') &= G(\iota(0),\infty, magm'a'\cdot\iota(0), magm'a'\cdot\infty)\\ &= G(a^{-1}m^{-1}\iota(0),a^{-1}m^{-1}\cdot\infty, gm'a'\cdot\iota(0),gm'a'\cdot\infty) \end{align*} Since $MA$ fixes $\iota(0),\infty$, this equals $\tilde{\Psi}(g)$. It therefore descends to a continuous map $\Psi: MA\backslash G/MA\to\GP(G/Q,2)^{\times 2}/G$. We shall now show that $\Psi$ and $f$ are inverses of each other. For this we consider \begin{align*} \Psi(f(G(\iota(0),\infty,\iota(x),\iota(y)))) &= \Psi\qty(MA \exp(x\cdot P)\exp(\frac{y-x}{(y-x)^2}\cdot K) MA)\\ &= G(\iota(0),\infty, \iota(x), \iota(y)).
\end{align*} Here we used that for $b:=\frac{y-x}{(y-x)^2}$ we can compute the action of $g=\exp(x\cdot P)\exp(b\cdot K)$ on $\iota(0),\infty$ as follows: \begin{align*} g\mqty(1\\0\\1) &= \mqty(1-\frac{x^2}{2} & -x^T_\bullet & -\frac{x^2}{2}\\ x^\bullet & 1 & x^\bullet\\ \frac{x^2}{2} & x^T_\bullet & 1 + \frac{x^2}{2}) \mqty(1\\0\\1) = \mqty(1-x^2\\2x\\1+x^2)\\ g\mqty(1\\0\\-1) &= \mqty(1-\frac{x^2}{2} & -x^T_\bullet & -\frac{x^2}{2}\\ x^\bullet & 1 & x^\bullet\\ \frac{x^2}{2} & x^T_\bullet & 1 + \frac{x^2}{2}) \mqty(1-\frac{b^2}{2} & b^T_\bullet & \frac{b^2}{2}\\ -b^\bullet & 1 & b^\bullet\\ -\frac{b^2}{2} & b^T_\bullet & 1+\frac{b^2}{2}) \mqty(1\\0\\-1)\\ &= \mqty(1-b^2+2\eta(b,x)+b^2x^2\\-2(b + b^2x)\\-1-b^2-2\eta(b,x)-b^2x^2) \end{align*} the former of which projectivises to $\iota(x)$ and the latter to $\iota(y)$ (indeed, since $b/b^2 = y-x$, the latter vector equals $-b^2$ times the vector representing $\iota(y)$). Let now $g\in G$ and let $MAhMA=f(\Psi(MAgMA))$, then $\Psi(MAgMA)=\Psi(f(\Psi(MAgMA)))$, i.e. \[ G(\iota(0),\infty,g\cdot\iota(0),g\cdot\infty) = G(\iota(0),\infty,h\cdot\iota(0),h\cdot\infty), \] so there is $k\in G$ such that \[ k\cdot\iota(0) = \iota(0),\qquad k\cdot\infty = \infty,\qquad kg\cdot\iota(0) = h\cdot\iota(0),\qquad kg\cdot\infty = h\cdot\infty. \] This shows that both $k$ and $h^{-1}kg$ fix $\iota(0),\infty$, whence by Lemma~\ref{sec:lem-g-action-pairs}, $k,h^{-1}kg\in MA$. In other words: \[ g = k^{-1}h(h^{-1}kg) \in MA h MA. \] Consequently, $f\circ\Psi$ is also the identity. \end{proof} Using this homeomorphism, we can identify $\GP(G/Q,4)/G$ with a dense open subset of $MA\backslash G/MA$, say $MA\backslash \tilde{G}/MA$ where $\tilde{G}\subseteq G$ is open, dense, and satisfies $MA\tilde{G}MA\subseteq \tilde{G}$ (more on this in Corollary~\ref{sec:cor-characterisation-g-tilde}). \subsection{4-Point Functions as MSF} \begin{definition} Let $(V_i,\pi_i)$ ($i=1,\dots,4$) be finite-dimensional $Q$-modules.
A smooth function $f:\, G^{\times 4}\to V_1\otimes\cdots\otimes V_4$ is said to \emph{satisfy the Ward identities} if for all $g,g_1,\dots,g_4\in G$ and $q_1,\dots,q_4\in Q$ we have \[ f(gg_1q_1,\dots,gg_4q_4) = \pi_1(q_1)^{-1}\otimes\cdots\otimes \pi_4(q_4)^{-1} f(g_1,\dots,g_4), \] i.e. if $f$ is a $G\times Q^{\times 4}$-intertwiner for the following $G\times Q^{\times 4}$-actions: \begin{align*} \text{on }G^{\times 4}:&\quad (g,q_1,\dots,q_4) \cdot (g_1,\dots,g_4) := (gg_1q_1^{-1},\dots,gg_4q_4^{-1})\\ \text{on }V_1\otimes\cdots\otimes V_4:&\quad (g,q_1,\dots,q_4)\cdot (v_1\otimes\cdots\otimes v_4) := (q_1\cdot v_1\otimes\cdots\otimes q_4\cdot v_4). \end{align*} Let $U\subseteq G^{\times 4}$ (also open and dense) be the preimage of $\GP(G/Q,4)$ under the quotient map $G^{\times 4}\to (G/Q)^{\times 4}$. It is invariant under the action of $G\times Q^{\times 4}$, so we can define an analogous notion of \emph{satisfying the Ward identities} for functions defined on $U$.\footnote{This definition can also be naturally put in the framework of generalised spherical functions associated to moduli spaces of principal connections on a corresponding star-shaped graph, see \cite{RS-2}. We thank Jasper Stokman for explaining this point of view to us.} \end{definition} Fix an element $w\in G$ that maps $\iota(0)$ to $\infty$ and vice versa. For simplicity, we can choose $w$ to be a diagonal matrix that squares to $1$. Then conjugation with $w$ is an automorphism of $MA$ (which inverts elements of $A$). \begin{theorem}\label{sec:thm-injection-msf} Let $f:\, U\to V_1\otimes\cdots\otimes V_4$ solve the Ward identities. Write $W:=V_1\otimes\cdots\otimes V_4$ and define $F:\, \tilde{G}\to W$ as $g\mapsto f(1,w,g,gw)$.
Then $F$ is a matrix-spherical function for the symmetric subgroup $MA$ and the following left and right actions: \begin{align*} (ma)\cdot (v_1\otimes\cdots\otimes v_4) &:= (ma\cdot v_1)\otimes (wmaw\cdot v_2)\otimes v_3\otimes v_4\\ (v_1\otimes\cdots\otimes v_4)\cdot (ma) &:= v_1\otimes v_2\otimes (m^{-1}a^{-1}\cdot v_3)\otimes (wm^{-1}a^{-1}w\cdot v_4). \end{align*} The map $f\mapsto F$ is injective. \end{theorem} \begin{proof} We begin by checking the biequivariance. Let $ma,m'a'\in MA$ and $g\in \tilde{G}$, then \begin{align*} F(magm'a') &= f(1,w,magm'a', magm'a'w) = f(a^{-1}m^{-1}, a^{-1}m^{-1}w, gm'a', gm'a'w)\\ &= \pi_1(ma)\otimes\pi_2(wmaw)\otimes \pi_3(m'a')^{-1}\otimes \pi_4(wm'a'w)^{-1} f(1,w,g,gw)\\ &= ma\cdot F(g)\cdot m'a'. \end{align*} For injectivity let $F$ and $F'$ arise from $f,f':\, U\to V_1\otimes\cdots\otimes V_4$ and $F=F'$. Let $(g_1,\dots,g_4)\in U$, then $(g_1Q,\dots, g_4Q)\in\GP(G/Q, 4)$. Then by Lemma~\ref{sec:lem-configuration-homeomorphism} and the definition of $\tilde{G}$, there is $h\in \tilde{G}$ such that \[ G\psi(h) = G(Q, wQ, hQ, hwQ) = G(g_1Q, g_2Q, g_3Q, g_4Q). \] This implies that there are $g\in G$ and $q_1,\dots,q_4\in Q$ such that \[ (1,w,h,hw) = (gg_1q_1,\dots,gg_4q_4). \] Then \begin{align*} f(g_1,\dots,g_4) &= f(g^{-1}q_1^{-1}, g^{-1}wq_2^{-1}, g^{-1}hq_3^{-1}, g^{-1}hwq_4^{-1})\\ &= \pi_1(q_1)\cdots\pi_4(q_4) f(1,w,h,hw)\\ &= \pi_1(q_1)\cdots\pi_4(q_4) F(h) = \pi_1(q_1)\cdots\pi_4(q_4) F'(h) \\ &= f'(g_1,\dots,g_4). \end{align*} Lastly, we need to show that $MA\le G$ is a symmetrising subgroup. Define $\sigma:\, G\to G$ to be conjugation with the matrix $\operatorname{diag}(-1,1,\dots,1,-1)$; then $G^\sigma$ consists of the elements of $G$ that can be written as a block matrix of the following shape: \[ \mqty(a & 0 & b\\0 & c & 0\\d & 0 & e) \] for $a,b,d,e\in\RR$ and $c\in\RR^{d\times d}$. Every element of $MA$ can be written this way; consequently, $MA\le G^\sigma$.
Furthermore, the unit component of $G^\sigma$ is the analytic subgroup for the subalgebra $\mathfrak{m}\oplus\mathfrak{a}$, which is contained in $MA$ since $MA$ has Lie algebra $\mathfrak{m}\oplus\mathfrak{a}$. Consequently, $(G^\sigma)_0\le MA\le G^\sigma$. \end{proof} Using the injection from Theorem~\ref{sec:thm-injection-msf}, we can view solutions to the Ward identities as matrix-spherical functions for $(G,MA)$. Since we can no longer assume that these functions are defined on all of $G$, we shall write $E^W(\tilde{G},MA)$. \begin{lemma} The set of functions $f:\, U\to V_1\otimes\cdots\otimes V_4$ satisfying the Ward identities carries an action of $Z(U(\mathfrak{g}))$ that the injection from Theorem~\ref{sec:thm-injection-msf} intertwines with the right action described in Lemma~\ref{sec:lem-msf-action-diffops}. \end{lemma} \begin{proof} Let $X$ be the set of functions $f:\, U\to V_1\otimes\cdots\otimes V_4$ satisfying \[ \forall q_1,\dots,q_4\in Q:\quad f(g_1q_1,\dots,g_4q_4) = \pi_1(q_1)^{-1}\cdots\pi_4(q_4)^{-1} f(g_1,\dots,g_4). \] $X$ is acted upon from the right by $G$, via \[ (f\cdot h)(g_1,\dots,g_4) = f(hg_1,\dots,hg_4), \] and by four copies of $\mathfrak{g}$, via the corresponding infinitesimal actions on each of the four inputs. This gives rise to an action of $(U(\mathfrak{g})^{\otimes 4})^G$ (where $G$ acts diagonally) on the $G$-invariants of $X$, i.e. on the solutions to the Ward identities. An algebra contained therein is $1\otimes 1\otimes\Delta(Z(U(\mathfrak{g})))$ (where $\Delta$ is the comultiplication). Write $\Psi$ for the injection from Theorem~\ref{sec:thm-injection-msf} (extended to all of $X$ using the same formula).
To see that $\Psi$ intertwines these actions of $Z(U(\mathfrak{g}))$, note that for $X\in\mathfrak{g}$ and $f$ satisfying the Ward identities we have \begin{align*} \Psi(f\cdot X)(g) &= (f\cdot X)(1,w,g,gw)\\ &= (f\cdot (1\otimes1\otimes X\otimes1 + 1\otimes1\otimes1\otimes X))(1,w,g,gw)\\ &= \dv{t} \qty(f(1,w,\exp(tX)g, gw) + f(1,w,g,\exp(tX)gw))_{t=0}\\ &= \dv{t} \eval{f(1,w,\exp(tX)g, \exp(tX)gw)}_{t=0}\\ &= \dv{t} \eval{\Psi(f)(\exp(tX)g)}_{t=0}\\ &= \Psi(f)\cdot X.\qedhere \end{align*} \end{proof} \begin{definition} Let $\chi:\, Z(U(\mathfrak{g}))\to\CC$ be a central character of $\mathfrak{g}$. A smooth function $f:\, U\to V_1\otimes\cdots\otimes V_4$ (where $V_1,\dots,V_4$ are finite-dimensional $Q$-modules) or more generally $f\in E^W(\tilde{G},MA)$ (for any finite-dimensional $MA$-bimodule $W$) is said to be a \emph{conformal block} for $\chi$ if \[ \forall z\in Z(U(\mathfrak{g})):\quad f\cdot z = \chi(z)f.\footnote{In physics literature, one usually distinguishes between conformal blocks and conformal partial waves, depending on the boundary conditions and monodromy properties that one imposes on the (linear combinations of) eigenfunctions of the considered invariant differential operators. Since we do not discuss the solution theory in this paper, we also don't distinguish between the two in our terminology.} \] \end{definition} Conformal blocks are the fundamental building blocks used to decompose and put constraints on the correlation functions in conformal field theory. In order to find them, it is necessary to know what the above eigenvalue equation looks like for functions $f$ satisfying the Ward identities, or more generally for $(G,MA)$-matrix-spherical functions. Since $(G,MA)$ is a symmetric pair and $G$ is a reductive Lie group, we can now employ the theory established in the first half of this paper. 
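For orientation, let us recall a standard fact from the Harish-Chandra theory of central characters (general representation theory, not specific to our setup): if $\chi=\chi_\lambda$ is the central character of an irreducible highest-weight module with highest weight $\lambda$, then the quadratic Casimir element $\Omega_{\mathfrak{g}}$ acts by the scalar
\[
\chi_\lambda(\Omega_{\mathfrak{g}}) = \langle\lambda,\lambda+2\rho\rangle,
\]
where $\rho$ denotes the half-sum of the positive roots. For a conformal block for such a $\chi_\lambda$, the eigenvalue equation for $z=\Omega_{\mathfrak{g}}$ therefore reads $f\cdot\Omega_{\mathfrak{g}} = \langle\lambda,\lambda+2\rho\rangle\, f$; this is the equation whose radial part we derive next.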
\section{Obtaining the Casimir Equation}\label{sec:casimir-eq} In this section we shall focus on the eigenvalue equation for the quadratic Casimir element $\Omega_{\mathfrak{g}}$. In order to apply the results from Section~\ref{sec:radial-parts}, we first need to re-examine the structure of $G$. This time we will focus more on Cartan subsets and, more generally, on the role that the involution $\sigma$ plays for $G$ and $\mathfrak{g}$. \subsection{Cartan Subsets $(C_i)_{i\in I}$ of $G$} Our first goal here will be to choose a fundamental Cartan subset $C$. Let us first describe the decomposition of $\mathfrak{g}$ with respect to $\theta,\sigma$. For that we will need to introduce a bit more notation. \begin{lemma} We have $\mathfrak{g} = \mathfrak{k}\oplus\mathfrak{p} = \mathfrak{h}\oplus\mathfrak{g}^{-\sigma} = \mathfrak{k}^\sigma \oplus \mathfrak{k}^{-\sigma} \oplus \mathfrak{p}^\sigma \oplus\mathfrak{p}^{-\sigma}$ with \begin{align*} &\mathfrak{k}^\sigma =\operatorname{span}\{F_{i,j}\mid 1\le i,j\le p\quad\text{or}\quad p<i,j\le d\}\\ &\mathfrak{k}^{-\sigma} = \operatorname{span}\{F_{0,i}, F_{j,d+1}\mid 1\le i\le p < j\le d\}\\ &\mathfrak{p}^\sigma = \mathfrak{a}\oplus \operatorname{span}\{F_{i,j}\mid 1\le i\le p < j\le d\}\\ &\mathfrak{p}^{-\sigma} = \operatorname{span}\{F_{0,j}, F_{i,d+1}\mid 1\le i\le p < j\le d\}. \end{align*} Here $\mathfrak{h}=\mathfrak{g}^\sigma = \mathfrak{k}^\sigma\oplus\mathfrak{p}^\sigma=\mathfrak{m}\oplus\mathfrak{a}$ in previous notation. \end{lemma} \begin{proof} The vector $F_{\mu,\nu}$ is $\sigma$-invariant if none or both of $\mu,\nu$ lie in $\{0,d+1\}$, and $\sigma$-antiinvariant if exactly one of them does. Similarly, $F_{\mu,\nu}$ is $\theta$-invariant if none or both of $\mu,\nu$ are $>p$, and $\theta$-antiinvariant if exactly one of them is. This shows ``$\supset$'' for all four claimed equations.
For ``$\subseteq$'' we recall that the sum \[ \mathfrak{g} = \mathfrak{k}^\sigma \oplus \mathfrak{k}^{-\sigma} \oplus \mathfrak{p}^\sigma \oplus \mathfrak{p}^{-\sigma} \] is direct and that the spaces that make up the right-hand sides add up to $\mathfrak{g}$. \end{proof} \begin{definition} If $q=0$, define \[ \mathfrak{t} := \RR F_{0,1},\qquad \mathfrak{a} := \RR F_{d,d+1}; \] if $q>0$, define \[ \mathfrak{t} := \operatorname{span}\{F_{0,1},F_{d,d+1}\},\qquad \mathfrak{a} := 0. \] Let $\mathfrak{c}:= \mathfrak{t}\oplus\mathfrak{a}$. \end{definition} \begin{lemma} $\mathfrak{t}\subseteq\mathfrak{k}^{-\sigma}$ is maximally commutative, as is $\mathfrak{c}\subseteq\mathfrak{g}^{-\sigma}$. \end{lemma} \begin{proof} For the first claim, we distinguish between $q=0$ and $q>0$. If $q=0$, suppose $X\in\mathfrak{k}^{-\sigma}$ commutes with $F_{0,1}$, say \[ X = \sum_{i=1}^d a_i F_{0,i} \] (note that there is no $p<j\le d$). Then \[ 0 = \comm{F_{0,1}}{X} = -\sum_{i=1}^d a_i F_{1,i} = -\sum_{i=2}^d a_i F_{1,i}. \] Since all the $F_{1,i}$ appearing in the last sum are linearly independent, we have $a_i=0$ for $i=2,\dots,d$, showing that $X=a_1F_{0,1}$. This shows that $\mathfrak{t}\subseteq\mathfrak{k}^{-\sigma}$ is maximally commutative. The $q>0$ case is covered by the second claim, so we will show that one instead. Let $X\in\mathfrak{g}^{-\sigma}$ commute with $F_{0,1},F_{d,d+1}$, say \[ X = \sum_{i=1}^d (a_i F_{0,i} + b_i F_{i,d+1}). \] Then we have \begin{align*} 0 &= [F_{0,1}, X] = \sum_{i=1}^d (-a_i F_{1,i} + \delta_{i,1} b_i F_{0,d+1}) = b_1 F_{0,d+1} - \sum_{i=2}^d a_i F_{1,i},\\ 0 &= [F_{d,d+1}, X] = \sum_{i=1}^d (a_i \delta_{i,d} F_{0,d+1} - b_i F_{i,d}) = a_d F_{0,d+1} - \sum_{i=1}^{d-1} b_i F_{i,d}. \end{align*} Since all $F_{\mu,\nu}$ appearing in the final expressions of these two equations are linearly independent, we conclude that all coefficients except for $a_1$ and $b_d$ are zero, whence $X\in\mathfrak{c}$.
\end{proof} \begin{definition} Write $C:=\exp(\mathfrak{c})$ for the corresponding Cartan subset and $T:=\exp(\mathfrak{t})$ for its compact torus. The elements of $T$ are denoted by \[ t_\phi := \exp(\phi F_{0,1}) = \mqty(\cos(\phi) & \sin(\phi) & 0\\-\sin(\phi) & \cos(\phi) & 0\\0 & 0 & 1) \] (written as a $1+1+d$-block matrix) for $q=0$ and \[ t_{\phi,\psi} := \exp(\phi F_{0,1} + \psi F_{d,d+1}) = \mqty(\cos(\phi) & \sin(\phi) & 0 & 0 & 0\\-\sin(\phi) & \cos(\phi) & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & \cos(\psi) & -\sin(\psi)\\ 0 & 0 & 0 & \sin(\psi) & \cos(\psi)) \] (written as a $1+1+(d-2)+1+1$-block matrix). \end{definition} Since Cartan subsets (with respect to $C$) are of the shape $\exp(\mathfrak{c}')t$ for $\mathfrak{c}'\subseteq \mathfrak{g}^{-\sigma}\cap\Ad(t)(\mathfrak{g}^{-\sigma})$ of dimension 2, it is useful to know what this intersection looks like. \begin{proposition}\label{sec:prop-intersection-adjoint} \begin{enumerate} \item For $q=0$ and $\phi\in\RR$ the space $\mathfrak{g}^{-\sigma}\cap\Ad(t_\phi)(\mathfrak{g}^{-\sigma})$ is spanned by $\mathfrak{c}$ and \[ \begin{cases} F_{2,d+1},\dots,F_{d-1,d+1} & \phi\not\in \pi\ZZ,\\ F_{0,2},\dots, F_{0,d},F_{1,d+1},\dots,F_{d-1,d+1} & \phi\in\pi\ZZ. \end{cases} \] \item For $q>0$ and $\phi,\psi\in\RR$, the space $\mathfrak{g}^{-\sigma}\cap\Ad(t_{\phi,\psi})(\mathfrak{g}^{-\sigma})$ is spanned by $\mathfrak{c}$ and \[ \begin{cases} 0 & \phi,\psi,\psi+\phi,\psi-\phi\not\in \pi\ZZ,\\ F_{0,d}\mp F_{1,d+1} & \phi,\psi,\psi\pm\phi\not\in\pi\ZZ, \psi\mp \phi\in\pi\ZZ,\\ F_{0,d},F_{1,d+1} & \phi,\psi\in \frac{\pi}{2} + \pi\ZZ,\\ F_{2,d+1},\dots,F_{d-1,d+1} & \phi\not\in\pi\ZZ, \psi\in\pi\ZZ,\\ F_{0,2},\dots, F_{0,d-1} & \psi\not\in\pi\ZZ,\phi\in\pi\ZZ,\\ F_{0,2},\dots,F_{0,d},F_{1,d+1},\dots,F_{d-1,d+1} & \phi,\psi\in\pi\ZZ.
\end{cases} \] \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $X\in\mathfrak{g}^{-\sigma}$, say \[ X = \sum_{i=1}^d (a_i F_{0,i} + b_i F_{i,d+1}), \] then \begin{align*} \Ad(t_\phi)(X) &= a_1 F_{0,1} + b_1(\cos(\phi)F_{1,d+1} + \sin(\phi)F_{0,d+1})\\ &\quad + \sum_{i=2}^d \qty( a_i (\cos(\phi)F_{0,i} - \sin(\phi)F_{1,i}) + b_i F_{i,d+1}), \end{align*} which is an element of $\mathfrak{g}^{-\sigma}$ iff $b_1\sin(\phi) = a_i \sin(\phi) = 0$ ($i=2,\dots,d$). If $\phi\not\in\pi\ZZ$, we have $\sin(\phi)\ne0$, so that $b_1=a_2=\cdots=a_d=0$, and thus $X$ lies in the span of $F_{0,1}$ and $F_{i,d+1}$ ($i=2,\dots,d$). \item Let $X\in\mathfrak{g}^{-\sigma}$, say \[ X = \sum_{i=1}^d (a_i F_{0,i} + b_i F_{i,d+1}), \] then \begin{align*} \Ad(t_{\phi,\psi})(X) &= \sum_{i=2}^{d-1} \qty(a_i (\cos(\phi)F_{0,i} - \sin(\phi)F_{1,i}) + b_i (\cos(\psi) F_{i,d+1} - \sin(\psi)F_{i,d}))\\ &\quad + a_1 F_{0,1} + b_d F_{d,d+1}\\ &\quad + a_d(\cos(\psi)\cos(\phi) F_{0,d} - \cos(\psi)\sin(\phi) F_{1,d}\\ &\qquad + \sin(\psi)\cos(\phi) F_{0,d+1} - \sin(\psi)\sin(\phi) F_{1,d+1})\\ &\quad + b_1(\cos(\psi)\cos(\phi) F_{1,d+1} + \cos(\psi)\sin(\phi) F_{0,d+1}\\ &\qquad - \sin(\psi)\cos(\phi) F_{1,d} - \sin(\psi)\sin(\phi) F_{0,d})\\ &= \sum_{i=2}^{d-1} \qty(a_i (\cos(\phi)F_{0,i} - \sin(\phi)F_{1,i}) + b_i (\cos(\psi) F_{i,d+1} - \sin(\psi)F_{i,d}))\\ &\quad +\qty(a_d \cos(\psi)\cos(\phi) - b_1 \sin(\psi)\sin(\phi))F_{0,d}\\ &\quad +\qty(a_d \sin(\psi)\cos(\phi) + b_1 \cos(\psi)\sin(\phi))F_{0,d+1}\\ &\quad -\qty(a_d \cos(\psi)\sin(\phi) + b_1 \sin(\psi)\cos(\phi)) F_{1,d}\\ &\quad +\qty(-a_d \sin(\psi)\sin(\phi) + b_1 \cos(\psi)\cos(\phi)) F_{1,d+1}, \end{align*} which is contained in $\mathfrak{g}^{-\sigma}$ iff \begin{align*} 0 &= a_i \sin(\phi) \qquad (i=2,\dots,d-1)\\ 0 &= b_i \sin(\psi) \qquad (i=2,\dots,d-1)\\ 0 &= \mqty(\sin(\psi)\cos(\phi) & \cos(\psi)\sin(\phi)\\ \cos(\psi)\sin(\phi) & \sin(\psi)\cos(\phi))\mqty(a_d\\b_1). 
\end{align*} If none of $\phi,\psi,\psi+\phi,\psi-\phi$ are contained in $\pi\ZZ$, we have $\sin(\phi),\sin(\psi)\ne0$ and the matrix is regular. This shows that $a_i=b_i=0$ ($i=2,\dots,d-1$), as well as $a_d=b_1=0$, so that $X\in\mathfrak{c}$. If $\phi,\psi,\psi\pm\phi\not\in\pi\ZZ$ but $\psi\mp\phi\in\pi\ZZ$, the third condition reads \[ 0 = \sin(\psi)\cos(\phi) \mqty(1 & \pm1\\\pm1 & 1)\mqty(a_d\\b_1), \] where $\sin(\psi)\cos(\phi)\ne0$ (by assumption $\sin(\psi)\ne0$; if $\cos(\phi)=0$, we would have $2\phi\in \pi\ZZ$, but then $\psi\pm\phi = (\psi\mp\phi) \pm 2\phi \in \pi\ZZ$ as well). This shows that $a_i=b_i=0$ ($i=2,\dots,d-1$) and $a_d\pm b_1=0$. Consequently, $X$ lies in the span of $\mathfrak{c}$ and $F_{0,d}\mp F_{1,d+1}$. If $\phi,\psi\not\in\pi\ZZ$ but $\psi+\phi,\psi-\phi\in\pi\ZZ$, we have $\phi,\psi\in\frac{\pi}{2} + \pi\ZZ$ and thus the conditions read \[ a_i=b_i=0 \qquad (i=2,\dots,d-1), \] so that $X$ lies in the span of $\mathfrak{c}$ and $F_{0,d},F_{1,d+1}$. If $\phi\not\in\pi\ZZ$ but $\psi\in\pi\ZZ$, the conditions read \[ 0 = a_i\qquad (i=2,\dots,d-1)\qquad 0 = \cos(\psi)\sin(\phi)\mqty(0 & 1\\1 & 0)\mqty(a_d\\b_1), \] where neither $\cos(\psi)$ nor $\sin(\phi)$ is zero. Thus we have $a_i=0$ ($i=2,\dots,d-1$) and $a_d=b_1=0$, so that $X$ lies in the span of $\mathfrak{c}$ and $F_{2,d+1},\dots,F_{d-1,d+1}$. If conversely $\psi\not\in\pi\ZZ$ and $\phi\in\pi\ZZ$, we get $b_i=0$ ($i=2,\dots,d-1$) and $a_d=b_1=0$, so that $X$ lies in the span of $\mathfrak{c}$ and $F_{0,2},\dots,F_{0,d-1}$. Lastly, if $\psi,\phi\in\pi\ZZ$, we also have $\psi+\phi,\psi-\phi\in\pi\ZZ$. All conditions are then satisfied, and $X$ lies in the span of $\mathfrak{c}$ and $F_{0,2},\dots,F_{0,d},F_{1,d+1},\dots,F_{d-1,d+1}$. \end{enumerate} \end{proof} We now first classify the Cartan subsets of $G$ for the case $q=0$. \begin{proposition} If $q=0$, $C$ is the only Cartan subset of $G$ (relative to $C$).
\end{proposition} \begin{proof} Let $C'=\exp(\mathfrak{c}')t_\phi$ be a Cartan subset, with $\mathfrak{c}'=\mathfrak{t}'\oplus\mathfrak{a}'$. If $\mathfrak{t}'=\mathfrak{t}$, we have $\mathfrak{c}'=\mathfrak{c}$, and $t_\phi$ can be absorbed into $\exp(\mathfrak{c}')$. Consequently, $C'=C$. Otherwise, $\mathfrak{t}'=0$ and $\mathfrak{a}'$ is a two-dimensional commutative subspace of $\mathfrak{p}^{-\sigma}\cap\Ad(t_\phi)(\mathfrak{g}^{-\sigma})$, spanned, say, by $F_{d,d+1}$ and $X$. Expand $X\in\mathfrak{p}^{-\sigma}$ as \[ X = \sum_{i=1}^d a_i F_{i,d+1}. \] Note that \[ 0 = \comm{F_{d,d+1}}{X} = - \sum_{i=1}^d a_i F_{i,d} = -\sum_{i=1}^{d-1} a_i F_{i,d}, \] which implies that $a_1=\cdots=a_{d-1}=0$, whence $X$ is a multiple of $F_{d,d+1}$ and $\mathfrak{a}'$ is not two-dimensional. Thus, there is no Cartan subset with $\mathfrak{t}'=0$. \end{proof} Interestingly, we will now demonstrate that there are more options for Cartan subsets in the case $q>0$: \begin{proposition}\label{sec:prop-cartan-subsets-Lorentzian} For $q>0$, the Cartan subsets of $G$ are $C'=\exp(\mathfrak{c}')t$ for \begin{enumerate} \item $\mathfrak{c}'=\mathfrak{c}$ and $t=1$; \item $\mathfrak{c}'$ is spanned by $F_{0,1}\pm F_{d,d+1}$ and $F_{0,d}\mp F_{1,d+1}$ and $t$ is $1$ or $t_{0,\pi}$; \item $\mathfrak{t}'=\RR F_{0,1}$ and $\mathfrak{a}'$ is a 1-dimensional subspace of the span of $F_{2,d+1},\dots, F_{p,d+1}$, and $t$ is either $1$ or $t_{0,\pi}$; \item $\mathfrak{t}'=\RR F_{d,d+1}$ and $\mathfrak{a}'$ is a 1-dimensional subspace of the span of $F_{0,p+1},\dots, F_{0,d-1}$ and $t$ is either $1$ or $t_{\pi,0}$; \item $\mathfrak{c}'=\RR F_{0,d}\oplus\RR F_{1,d+1}$ and $t=t_{\phi,\psi}$ for $\phi,\psi\in\frac{\pi}{2}+\pi\ZZ$; \item $\mathfrak{t}'=0$ and $\mathfrak{a}'$ is a two-dimensional commutative subalgebra of the span of $F_{0,p+1},\dots,F_{0,d},F_{1,d+1},\dots,F_{p,d+1}$ with $t=t_{\phi,\psi}$ for $\phi,\psi\in\pi\ZZ$.
\end{enumerate} \end{proposition} \begin{proof} $\mathfrak{t}'$ is a subspace of $\mathfrak{t}$, so it can have dimension 2, 1, or 0. If $\dim(\mathfrak{t}')=2$, then $\mathfrak{c}'=\mathfrak{t}'=\mathfrak{t}=\mathfrak{c}$. Furthermore, $t$ can be absorbed into $\exp(\mathfrak{c}')$, whence $C'=C$, which is the 1st case. If $\dim(\mathfrak{t}')=1$, say $\mathfrak{t}'$ is spanned by $X=aF_{0,1}+bF_{d,d+1}$, then $\mathfrak{a}'$ is spanned by some $Y$, which lies in $\mathfrak{p}^{-\sigma}$. We can expand $Y$ as \[ Y = \sum_{i=p+1}^d c_i F_{0,i} + \sum_{i=1}^p d_i F_{i,d+1}, \] then \begin{align*} \comm{X}{Y} &= -\sum_{i=p+1}^d a c_i F_{1,i} + ad_1 F_{0,d+1} - \sum_{i=1}^p bd_i F_{i,d} + bc_d F_{0,d+1}\\ &= - (ac_d + bd_1) F_{1,d} + (ad_1 + bc_d)F_{0,d+1} -\sum_{i=p+1}^{d-1} a c_i F_{1,i} - \sum_{i=2}^p bd_i F_{i,d}. \end{align*} All basis vectors in the last equation are linearly independent, so we require \begin{align*} 0 &= a c_i \qquad (i=p+1,\dots,d-1)\\ 0 &= b d_i \qquad (i=2,\dots,p)\\ 0 &= \mqty(a & b\\b & a)\mqty(c_d\\d_1). \end{align*} If the matrix is regular, i.e. $a^2-b^2\ne0$, and $a,b\ne0$ as well, then $Y=0$, which contradicts $\dim(\mathfrak{a}')=1$. Consequently, at least one of $a,b,a+b,a-b$ must be zero. If more than one is zero, all others must be zero as well, which implies $X=0$, which contradicts $\dim(\mathfrak{t}')=1$. In case $a\pm b=0$, i.e. $X$ is a multiple of $F_{0,1}\mp F_{d,d+1}$, $Y$ must be a multiple of $F_{0,d}\pm F_{1,d+1}$. By Proposition~\ref{sec:prop-intersection-adjoint}(ii), this is an element of $\mathfrak{g}^{-\sigma}\cap\Ad(t_{\phi,\psi})(\mathfrak{g}^{-\sigma})$ iff $\psi\pm\phi\in\pi\ZZ$. Then $t_{\phi,\psi} = t_{\phi,\mp\phi} t_{0,\psi\pm\phi}$, where $t_{\phi,\mp\phi}\in\exp(\mathfrak{c}')$ and $t_{0,\psi\pm\phi}=1$ or $t_{0,\pi}$. This is the 2nd case with signs reversed. In case $b=0$, we can take $X=F_{0,1}$ without loss of generality. Then $Y$ lies in the span of $F_{2,d+1},\dots, F_{p,d+1}$.
By Proposition~\ref{sec:prop-intersection-adjoint}(ii), such an element is contained in the intersection in question iff $\psi\in\pi\ZZ$. Furthermore, $t=t_{\phi,\psi}=t_{\phi,0} t_{0,\psi}$ with $t_{\phi,0}\in\exp(\mathfrak{c}')$ and $t_{0,\psi}=1$ or $t_{0,\pi}$. This is the 3rd case. In case $a=0$, we can take $X=F_{d,d+1}$ without loss of generality. Then $Y$ lies in the span of $F_{0,p+1},\dots, F_{0,d-1}$. By Proposition~\ref{sec:prop-intersection-adjoint}(ii), such an element is contained in the intersection in question iff $\phi\in\pi\ZZ$. Furthermore, $t=t_{\phi,\psi}=t_{0,\psi} t_{\phi,0}$ with $t_{0,\psi}\in\exp(\mathfrak{c}')$ and $t_{\phi,0}=1$ or $t_{\pi,0}$. This is the 4th case. This covers all the cases for $\dim(\mathfrak{t}')=1$; we now turn our attention to $\mathfrak{t}'=0$. In this case, $\mathfrak{a}'$ has to be a two-dimensional commutative subalgebra of $\mathfrak{p}^{-\sigma}\cap\Ad(t_{\phi,\psi})(\mathfrak{g}^{-\sigma})$, which is spanned by \[ \begin{cases} 0 & \phi,\psi,\psi+\phi,\psi-\phi\not\in \pi\ZZ\\ F_{0,d}\mp F_{1,d+1} & \phi,\psi,\psi\pm\phi\not\in\pi\ZZ, \psi\mp \phi\in\pi\ZZ\\ F_{0,d},F_{1,d+1} & \phi,\psi\in \frac{\pi}{2} + \pi\ZZ,\\ F_{2,d+1},\dots,F_{p,d+1} & \phi\not\in\pi\ZZ, \psi\in\pi\ZZ,\\ F_{0,p+1},\dots, F_{0,d-1} & \psi\not\in\pi\ZZ,\phi\in\pi\ZZ,\\ F_{0,p+1},\dots,F_{0,d},F_{1,d+1},\dots,F_{p,d+1} & \phi,\psi\in\pi\ZZ. \end{cases} \] This already excludes the first two lines. In case $\phi,\psi\in\frac{\pi}{2}+\pi\ZZ$, we can take $\mathfrak{a}'=\RR F_{0,d}\oplus\RR F_{1,d+1}$, which is a commutative subalgebra. This is the 5th case. In case one of $\phi,\psi$ is contained in $\pi\ZZ$ but the other is not, there exists no 2-dimensional commutative subalgebra, so the only remaining possible case is $\phi,\psi\in\pi\ZZ$, which is the 6th case. \end{proof} Next, we are interested in which of these Cartan subsets are conjugate to each other.
For this we need to investigate the action of \[ N_K^T := \{(h,h')\in (H\cap K)^2\mid hTh^{\prime-1}=T\} \] on $T$. \begin{lemma}\label{sec:lem-normaliser-action-torus} If $q>1$, $N_K^T$ acts on $T$ as an affine reflection group generated by \begin{align*} s_0 : t_{\phi,\psi} &\mapsto t_{\pi-\phi,\pi-\psi}\\ s_1 : t_{\phi,\psi} &\mapsto t_{\phi,-\psi}\\ s_2 : t_{\phi,\psi} &\mapsto t_{-\phi,\psi}. \end{align*} \end{lemma} \begin{proof} Let $h\in K\cap H$; then $h$ can be written as a $1+p+q+1$-block matrix as \[ \mqty(a & 0 & 0 & 0\\0 & B & 0 & 0\\0 & 0 & C & 0\\0 & 0 & 0 & d) \] for $a,d\in\RR$ and $B\in\RR^{p\times p},C\in\RR^{q\times q}$, with $a^2=d^2=1$ and $B\in O(p), C\in O(q)$ with $\det(B)=a$ and $\det(C)=d$. Since $H$ is not all of $G^\sigma$, but only consists of those elements that stabilise $D_0$, we also have $a=d$. Consequently, \[ h = \mqty(\pm 1 & 0 & 0 & 0\\0 & A & 0 & 0\\0 & 0 & B & 0\\0 & 0 & 0 & \pm1) \] with $A\in O(p),B\in O(q)$ with $\det(A)=\det(B)=\pm1$. Furthermore, we can write $t_{\phi,\psi}$ in the same $1+p+q+1$-block matrix form as \[ \mqty(\cos(\phi) & \sin(\phi)e_1^T & 0 & 0\\-\sin(\phi)e_1 & 1 + (\cos(\phi)-1)e_1e_1^T & 0 & 0\\ 0 & 0 & 1+(\cos(\psi)-1)e_q e_q^T & -\sin(\psi)e_q\\ 0 & 0 & \sin(\psi)e_q^T & \cos(\psi)). \] If $h,h'\in K\cap H$ are block diagonal matrices with $\epsilon,A,B,\epsilon$ and $\zeta,C,D,\zeta$, we have \[ ht_{\phi,\psi}h^{\prime-1} = \mqty(X & 0\\0 & Y) \] with \begin{align*} X &= \mqty(\epsilon\zeta\cos(\phi) & \epsilon\sin(\phi) (Ce_1)^T\\ -\zeta\sin(\phi) Ae_1 & AC^T + (\cos(\phi)-1)Ae_1(Ce_1)^T)\\ Y &= \mqty(BD^T + (\cos(\psi)-1)Be_q (De_q)^T & -\zeta\sin(\psi)Be_q\\ \epsilon\sin(\psi) (De_q)^T & \epsilon\zeta\cos(\psi)). \end{align*} Since this lies in $T$ again, we have that $Ae_1, Ce_1, e_1$ are proportional, as are $Be_q, De_q, e_q$, say \[ Ae_1 = \alpha e_1,\quad Be_q = \beta e_q,\quad Ce_1 = \gamma e_1,\quad De_q = \delta e_q.
\] Since $A,C,B,D$ are orthogonal, we have $\alpha,\beta,\gamma,\delta\in\{\pm1\}$. Then \[ \epsilon\gamma = \alpha\zeta,\qquad \epsilon\zeta = \alpha\gamma,\qquad \epsilon\delta = \beta\zeta,\qquad \epsilon\zeta = \beta\delta. \] If $\epsilon\zeta=1$, then $\alpha=\gamma$ and $\beta=\delta$, and $ht_{\phi,\psi}h^{\prime-1}=t_{\epsilon\gamma\phi,\epsilon\delta\psi}$. All four possible combinations of signs of $\epsilon\gamma,\epsilon\delta$ yield an action on $T$ that is generated by $s_1,s_2$. If $\epsilon\zeta=-1$, we have $\alpha=-\gamma$ and $\beta=-\delta$, and then $ht_{\phi,\psi}h^{\prime-1} = t_{\pi-\gamma\epsilon\phi, \pi-\delta\epsilon\psi}$. This transformation is contained in the group generated by $s_0,s_1,s_2$. Conversely, $s_0$ is effected by \[ h = \operatorname{diag}(1,-1,-1,1,\dots,1,-1,-1,1),\qquad h' = \operatorname{diag}(-1,1,-1,1,\dots,1,-1,1,-1). \] Since we assumed $q>1$ (and hence also $p>1$), all these matrices are elements of $K$. Furthermore, they commute with $D_0$, so that they are elements of $K\cap H$. The transformation $s_1$ is effected by \[ h=h' = \operatorname{diag}(1,\dots,1,-1,-1,1) \] and $s_2$ by \[ h=h' = \operatorname{diag}(1,-1,-1,\dots,1). \] \end{proof} \begin{lemma}\label{sec:lem-normaliser-action-torus-q1} For $q=1$, the action of $N_K^T$ on $T$ is generated by the reflection $t_{\phi,\psi}\mapsto t_{-\phi,\psi}$ and the translation $t_{\phi,\psi}\mapsto t_{\phi+\pi,\psi+\pi}$. \end{lemma} \begin{proof} An element $h\in H\cap K$ has the following $1+p+1+1$-block matrix shape: \[ h = \mqty(\epsilon & 0 & 0 & 0\\0 & A & 0 & 0\\0 & 0 & \epsilon & 0\\0 & 0 & 0 & \epsilon) \] for $A\in SO(p)$ and $\epsilon=\pm1$. Two such elements (with variables $\epsilon,A$ and $\delta, B$) give rise to an element of $N_K^T$ if $Ae_1=\alpha e_1,Be_1=\beta e_1$ and $\epsilon\delta=\alpha\beta$ and $\epsilon\beta = \alpha\delta$.
If $\epsilon\delta=1$, we have $\alpha=\beta$, which corresponds to $t_{\phi,\psi}\mapsto t_{-\phi,\psi}$; and if $\epsilon\delta=-1$, we have $\alpha=-\beta$, which corresponds to $t_{\phi,\psi}\mapsto t_{\pi-\epsilon\beta\phi,\psi+\pi}$. Conversely, the reflection is effected by \[ h=h' = \operatorname{diag}(1,-1,-1,1,\dots,1), \] which exists because $p=d-1>1$ (by the original assumption that $d>2$), and the translation by \[ h = 1,\qquad h'=\operatorname{diag}(-1,-1,1,\dots,1,-1,-1). \] \end{proof} \begin{corollary}\label{sec:cor-cartan-subsets} For $q>0$, the equivalence classes of Cartan subsets can be represented by: \begin{enumerate} \item $C$; \item $\exp(\RR (F_{0,1}+F_{d,d+1}) \oplus \RR (F_{0,d}-F_{1,d+1}))$; \item $\exp(\RR (F_{0,1}+F_{d,d+1}) \oplus \RR (F_{0,d}-F_{1,d+1}))t_{0,\pi}$; \item $\exp(\RR F_{0,1}\oplus \RR F_{2,d+1})$; \item $\exp(\RR F_{d,d+1}\oplus \RR F_{0,d-1})$ (this does not exist for $q=1$); \item $\exp(\RR F_{0,d}\oplus\RR F_{1,d+1})$; \item $\exp(\RR F_{0,d}\oplus\RR F_{1,d+1})t_{\pi/2,\pi/2}$; \item $\exp(\RR F_{0,d}\oplus\RR F_{1,d+1})t_{0,\pi}$. \end{enumerate} \end{corollary} \begin{proof} The compact parts of the Cartan subsets described in Proposition~\ref{sec:prop-cartan-subsets-Lorentzian} consist of $t_{\phi,\psi}$ with \begin{enumerate} \item no conditions; this yields the first equivalence class. \item $\phi\equiv\pm\psi\pmod{2\pi\ZZ}$ or $\phi\equiv\pi\pm\psi\pmod{2\pi\ZZ}$. The first two cases ($\pm$) and cases three and four are related by $\phi\mapsto-\phi$, which is a transformation that can be enacted using $N_K^T$ (cf.\ Lemmas~\ref{sec:lem-normaliser-action-torus} and~\ref{sec:lem-normaliser-action-torus-q1}). This yields the second and third equivalence class, respectively. \item $\psi\in\pi\ZZ$, i.e. $\psi\equiv 0\pmod{2\pi\ZZ}$ or $\psi\equiv \pi\pmod{2\pi\ZZ}$. Both cases are related via the shift $(\phi,\psi)\mapsto (\phi+\pi,\psi+\pi)$, which can be effected using $N_K^T$. This yields the fourth equivalence class.
\item The same for $\phi$, which yields the fifth equivalence class. \item $(\phi,\psi)$ being congruent (modulo $2\pi\ZZ\oplus 2\pi\ZZ$) to one of \[ \qty(\frac{\pi}{2},\frac{\pi}{2}),\qty(\frac{3\pi}{2},\frac{3\pi}{2}), \qty(\frac{3\pi}{2},\frac{\pi}{2}),\qty(\frac{\pi}{2},\frac{3\pi}{2}). \] The first and second (and third and fourth) elements are related via the shift $(\phi,\psi)\mapsto(\phi+\pi,\psi+\pi)$, and the first and third elements are related using the reflection $(\phi,\psi)\mapsto(-\phi,\psi)$; thus they are all equivalent, which yields the seventh equivalence class. \item $(\phi,\psi)$ being congruent (modulo $2\pi\ZZ\oplus 2\pi\ZZ$) to one of \[ (0,0),(\pi,\pi),(0,\pi),(\pi,0). \] Of these, the first and second (and third and fourth) are related via the translation, and no others are related via $N_K^T$, which yields the sixth and eighth equivalence class, respectively. \end{enumerate} \end{proof} \subsection{Coordinates}\label{sec:coords} We shall now introduce unified coordinates for all Cartan subsets and answer the question to what extent the functions $u,v$ define a smooth manifold structure on $\GP(G/Q,4)/G$ (or $MA\backslash\tilde{G}/MA$ via $\psi$). \begin{definition} Let \[ D:= \left\{(\chi_1,\chi_2)\in\CC^2\mid \chi_1,\chi_2,\frac{\chi_1+\chi_2}{2},\frac{\chi_1-\chi_2}{2}\not\in i\pi\ZZ\right\}. \] Define $f,g: D\to\CC^2$ by \begin{align*} f: (\chi_1,\chi_2) &\mapsto \mqty(\sech^2(\chi_1/2)\sech^2(\chi_2/2)\\ \tanh^2(\chi_1/2)\tanh^2(\chi_2/2))\\ g: (\chi_1,\chi_2) &\mapsto \mqty(\sinh^2(\chi_1/2)\sinh^2(\chi_2/2)\\ \cosh^2(\chi_1/2)\cosh^2(\chi_2/2)). \end{align*} \end{definition} Note that $\chi_i\not\in i\pi\ZZ$ implies that $\frac{\chi_i}{2}\not\in \qty(\frac{i\pi}{2} + i\pi\ZZ)$, so that $\cosh(\frac{\chi_i}{2})\ne0$ and $f$ is well-defined. \begin{lemma}\label{sec:lem-local-diff} $f$ is a local diffeomorphism.
\end{lemma} \begin{proof} Note that \[ f(\chi_1,\chi_2) = \mqty(\frac{1}{g_2(\chi_1,\chi_2)}\\\frac{g_1(\chi_1,\chi_2)}{g_2(\chi_1,\chi_2)}), \] so that $f$ is a local diffeomorphism iff $g$ is. We now compute the Jacobian of $g$: \[ g'(\chi_1,\chi_2) = \mqty(\sinh(\frac{\chi_1}{2})\cosh(\frac{\chi_1}{2})\sinh[2](\frac{\chi_2}{2}) & \sinh[2](\frac{\chi_1}{2})\sinh(\frac{\chi_2}{2})\cosh(\frac{\chi_2}{2})\\ \sinh(\frac{\chi_1}{2})\cosh(\frac{\chi_1}{2})\cosh[2](\frac{\chi_2}{2}) & \cosh[2](\frac{\chi_1}{2})\sinh(\frac{\chi_2}{2})\cosh(\frac{\chi_2}{2})), \] whose determinant is \begin{align*} \det(g'(\chi_1,\chi_2)) &= \sinh(\frac{\chi_1}{2})\cosh(\frac{\chi_1}{2}) \sinh(\frac{\chi_2}{2})\cosh(\frac{\chi_2}{2})\\ &\qquad \cdot \qty(\sinh[2](\frac{\chi_2}{2})\cosh[2](\frac{\chi_1}{2}) - \cosh[2](\frac{\chi_2}{2})\sinh[2](\frac{\chi_1}{2}))\\ &= \frac{1}{4} \sinh(\chi_1)\sinh(\chi_2)\sinh(\frac{\chi_2-\chi_1}{2})\sinh(\frac{\chi_2+\chi_1}{2}). \end{align*} Since none of $\chi_1,\chi_2,\frac{\chi_1+\chi_2}{2},\frac{\chi_1-\chi_2}{2}$ lies in the zero locus of $\sinh$, we have $\det(g'(\chi_1,\chi_2)) \neq 0$. By the inverse function theorem, $g$ is a local diffeomorphism, hence so is $f$. \end{proof} Since the complement of $D$ is a union of (locally finitely many) real affine subspaces of codimension 2, $D$ is still connected and therefore a covering space of $f(D)$. In particular, $D$ carries an action of the fundamental groupoid of $f(D)$. A way of approaching this is to first consider some obvious symmetries of $f$ and $g$, and then see what remains. \begin{lemma} Define $s_0,s_1,s_2: D\to D$ by \begin{align*} s_0 : (\chi_1,\chi_2)&\mapsto(\chi_1, 2\pi i-\chi_2)\\ s_1 : (\chi_1,\chi_2)&\mapsto(\chi_2,\chi_1)\\ s_2 : (\chi_1,\chi_2)&\mapsto(-\chi_1,\chi_2). \end{align*} These three transformations generate a Coxeter group $\tilde{W}$ of type $\tilde{C}_2$ of symmetries of $f$, acting on $\CC^2$ by a (scaled) affine action.
\end{lemma} \begin{proof} As before, $f\circ s_i=f$ iff $g\circ s_i=g$, so we check for $g$. Since $\sinh^2,\cosh^2$ are even and periodic with period $\pi i$, we see that $s_0,s_2$ are symmetries. Furthermore, $g$ is symmetric with respect to exchanging $\chi_1,\chi_2$, so that $s_1$ is also a symmetry. To see that $s_0,s_1,s_2$ generate a Coxeter group of type $\tilde{C}_2$, note that they are all involutions, that \begin{align*} s_0s_1: (\chi_1,\chi_2)&\mapsto(\chi_2,2\pi i-\chi_1),\\ s_0s_2: (\chi_1,\chi_2)&\mapsto(-\chi_1,2\pi i-\chi_2),\\ s_1s_2: (\chi_1,\chi_2)&\mapsto(\chi_2,-\chi_1) \end{align*} have orders $4, 2, 4$, respectively, whence we see that they generate an affine Coxeter group of type $\tilde{C}_2$. \end{proof} \begin{proposition}\label{sec:prop-fundamental-domain} Consequently, it suffices to study $D/\tilde{W}$, of which a fundamental domain is given by \[ X = \bigsqcup_{I\subseteq\{0,1,2\}} X_I \] with \begin{alignat*}{2} X_\emptyset &=\RR^2+\{(a,b)\mid 0<a<b<\pi\}i\qquad &X_{\{0\}} &= \RR\times\RR_{>0} + \{(a,\pi)\mid 0<a<\pi\}i\\ X_{\{1\}} &= \{(a,b)\mid a<b\} + (0,\pi)i(1,1) & X_{\{2\}} &= \RR_{>0}\times\RR + \{(0,a)\mid 0<a<\pi\}i\\ X_{\{0,1\}} &= \{(a,b)\mid 0<a<b\} + (\pi,\pi)i &X_{\{0,2\}} &= \RR_{>0}^2 + (0,\pi)i\\ X_{\{1,2\}} &= \{(a,b)\mid 0< a< b\}, \end{alignat*} where every $X_I$ is fixed by $\tilde{W}_I$, the parabolic subgroup generated by $s_i$ ($i\in I$). \end{proposition} \begin{proof} We first focus on the imaginary parts. Since the action is just a rescaled version of the affine Weyl group of a $B_2$ root system, a fundamental domain is given by the fundamental alcove of said root system. 
This shows that the set \[ \tilde{X}:= \{(\chi_1,\chi_2)\in D\mid 0\le\Im(\chi_1)\le\Im(\chi_2)\le\pi\} \] (the preimage of a rescaled fundamental alcove under the projection onto the imaginary parts) touches every orbit, but not necessarily that it touches every orbit once: let $(\chi_1,\chi_2)\in\tilde{X}$ and $w\in\tilde{W}$, then we have either $w\cdot(\chi_1,\chi_2)\not\in\tilde{X}$ or $\Im(w\cdot(\chi_1,\chi_2)) = \Im(\chi_1,\chi_2)$. However, since $\tilde{W}$ also acts on the real parts, this is not enough to conclude that $w\cdot(\chi_1,\chi_2)=(\chi_1,\chi_2)$. In particular, if $\Im(\chi_1,\chi_2)$ lies in the face stabilised by the parabolic subgroup $\tilde{W}_I$, we need to consider $\tilde{W}_I$'s action on the real parts and restrict to a fundamental domain for that, too. We now proceed through the preimages of the faces of the fundamental alcove, indexed by their stabiliser subgroups $\tilde{W}_I$. \begin{description} \item[$I=\emptyset$] We have $0<\Im(\chi_1)<\Im(\chi_2)<\pi$, i.e. $(\chi_1,\chi_2)\in X_\emptyset$. Since the stabiliser subgroup is the trivial group, a fundamental domain is given by allowing all real values. We consequently end up with $X_{\emptyset}$. \item[$I=\{0\}$] We have $0<\Im(\chi_1)<\Im(\chi_2)=\pi$. The stabiliser subgroup is generated by $s_0$, which acts like the reflection $s_1s_2s_1$ on the real parts. Consequently, a fundamental domain is given by requiring \[ 0<\Im(\chi_1)<\Im(\chi_2)=\pi,\qquad \Re(\chi_2)\ge0. \] Since $\chi_2\ne\pi i$, the inequality is in fact strict: $\Re(\chi_2)>0$. This defines $X_{\{0\}}$. \item[$I=\{1\}$] We have $0<\Im(\chi_1)=\Im(\chi_2)<\pi$. The stabiliser subgroup is generated by $s_1$, which acts on the real parts by swapping them. A fundamental domain is therefore given by requiring \[ 0<\Im(\chi_1)=\Im(\chi_2)<\pi,\qquad \Re(\chi_1)\le\Re(\chi_2).
\] Since $\frac{\chi_1-\chi_2}{2}\ne0$, we additionally know that $\Re(\chi_1)<\Re(\chi_2)$, which defines $X_{\{1\}}$ ($\subseteq D$). \item[$I=\{2\}$] We have $0=\Im(\chi_1)<\Im(\chi_2)<\pi$. The stabiliser subgroup is generated by $s_2$, which acts on the real parts by negating the first. Consequently, a fundamental domain is given by \[ 0=\Im(\chi_1)<\Im(\chi_2)<\pi,\qquad \Re(\chi_1)\ge0. \] Since $\chi_1\ne0$ (as required by $D$), we also know that $\Re(\chi_1)>0$, which defines $X_{\{2\}}$. \item[$I=\{0,1\}$] We have $\Im(\chi_1)=\Im(\chi_2)=\pi$. The stabiliser subgroup is generated by $s_0,s_1$, which act on the real parts like the Weyl group of $B_2$. Consequently, a fundamental domain is given by requiring \[ \Im(\chi_1)=\Im(\chi_2)=\pi,\qquad 0\le\Re(\chi_1)\le\Re(\chi_2). \] Since $\chi_1\ne\pi i$, we can in particular choose $0<\Re(\chi_1)$, and since $\frac{\chi_1-\chi_2}{2}\ne0$, we can also choose $\Re(\chi_1)<\Re(\chi_2)$, which defines $X_{\{0,1\}}$. \item[$I=\{0,2\}$] We have $0=\Im(\chi_1)<\Im(\chi_2)=\pi$. The stabiliser subgroup is generated by $s_0,s_2$, which act on the real parts by negating one or the other. Consequently, a fundamental domain is given by \[ 0 = \Im(\chi_1)<\Im(\chi_2)=\pi,\qquad 0\le\Re(\chi_1),\Re(\chi_2). \] Since $\chi_1\ne 0,\chi_2\ne\pi i$, we in particular have $0<\Re(\chi_1),\Re(\chi_2)$, which describes $X_{\{0,2\}}$. \item[$I=\{1,2\}$] We have $0=\Im(\chi_1)=\Im(\chi_2)$. The stabiliser subgroup is generated by $s_1,s_2$, which act on the real parts like the Weyl group of $B_2$, so that a fundamental domain is given by \[ 0=\Im(\chi_1)=\Im(\chi_2),\qquad 0\le\Re(\chi_1)\le\Re(\chi_2). \] Since $\chi_1,\frac{\chi_1-\chi_2}{2}\ne0$, we additionally require $0<\Re(\chi_1)<\Re(\chi_2)$, which describes $X_{\{1,2\}}$. \item[$I=\{0,1,2\}$] There are no elements stabilised by $s_0,s_1,s_2$. 
\end{description} Consequently, the union of all $X_I$ touches every orbit exactly once, and every element of $\tilde{X}$ (and hence of $D$) is related by $\tilde{W}$ to an element of one of the $X_I$. \end{proof} \begin{lemma}\label{sec:lem-f-injective} $f$ is injective on $X$. In particular, $\tilde{W}$ is the group of all symmetries of $f$. \end{lemma} \begin{proof} We show the result for $g$. Assume $(\chi_1,\chi_2),(\chi'_1,\chi'_2)\in D$ with $g(\chi_1,\chi_2)=g(\chi'_1,\chi'_2)$. We are going to show that $(\chi_1,\chi_2),(\chi'_1,\chi'_2)$ lie in the same $\tilde{W}$-orbit. We have \begin{align*} \frac{1}{4}(\cosh(\chi_1)-1) (\cosh(\chi_2)-1) &= \sinh[2](\frac{\chi_1}{2}) \sinh[2](\frac{\chi_2}{2})\\ &= \sinh[2](\frac{\chi'_1}{2}) \sinh[2](\frac{\chi'_2}{2})\\ &= \frac{1}{4}(\cosh(\chi'_1)-1) (\cosh(\chi'_2)-1),\\ \frac{1}{4}(\cosh(\chi_1)+1)(\cosh(\chi_2)+1) &= \cosh[2](\frac{\chi_1}{2}) \cosh[2](\frac{\chi_2}{2})\\ &= \cosh[2](\frac{\chi'_1}{2}) \cosh[2](\frac{\chi'_2}{2})\\ &= \frac{1}{4}(\cosh(\chi'_1)+1)(\cosh(\chi'_2)+1), \end{align*} which shows that $\cosh(\chi_1)+\cosh(\chi_2)=\cosh(\chi'_1)+\cosh(\chi'_2)$ and $\cosh(\chi_1)\cosh(\chi_2)=\cosh(\chi'_1)\cosh(\chi'_2)$. By standard algebra this shows that either $(\cosh(\chi_1),\cosh(\chi_2))=(\cosh(\chi'_1),\cosh(\chi'_2))$ or $=(\cosh(\chi'_2),\cosh(\chi'_1))$. In both cases, $(\chi'_1,\chi'_2)$ is at most an application of $s_1$ removed from satisfying the first equation, i.e. $\cosh(\chi_1)=\cosh(\chi'_1),\cosh(\chi_2)=\cosh(\chi'_2)$. So without loss of generality, we can assume that the equation is satisfied. The equation can be rewritten as $\exp(\chi_i)+\exp(-\chi_i) = \exp(\chi'_i) + \exp(-\chi'_i)$ for $i=1,2$. By the same algebra argument we conclude $\exp(\chi_i)=\exp(\chi'_i)$ or $=\exp(-\chi'_i)$ for $i=1,2$. This yields four possibilities, all of which can be related to $\exp(\chi_1)=\exp(\chi'_1),\exp(\chi_2)=\exp(\chi'_2)$ using the sign flips $s_2$ and $s_1s_2s_1$.
Without loss of generality assume therefore that this equation holds. Since $\exp: (\CC,+)\to(\CC^\times,\cdot)$ is a group homomorphism with kernel $2\pi i\ZZ$, this implies that $(\chi'_1,\chi'_2)-(\chi_1,\chi_2)\in (2\pi i\ZZ)^2$. Such a translation can be effected using the elementary translations $s_0s_1s_2s_1$ and $s_1 s_0 s_1 s_2$. This shows that $(\chi_1,\chi_2),(\chi'_1,\chi'_2)$ lie in the same $\tilde{W}$-orbit. \end{proof} Next, we investigate the preimage of $\RR^2$ under $f$ (equivalently, $g$). \begin{lemma} The preimage of $\RR^2$ under $f$ consists of all the $\tilde{W}$-orbits of \[ Y = \bigsqcup_{I\subseteq\{0,1,2\}} Y_I \] where \begin{alignat*}{2} Y_\emptyset &=i\{(a,b)\mid 0<a<b<\pi\}\qquad & Y_{\{0\}} &= \{0\}\times\RR_{>0} + i\{(a,\pi)\mid 0<a<\pi\}\\ Y_{\{1\}} &= \RR_{>0}(-1,1) + (0,\pi)i (1,1) & Y_{\{2\}} &= \RR_{>0}\times\{0\} + i\{(0,a)\mid 0<a<\pi\}\\ Y_{\{0,1\}} &= \{(a,b)\mid 0<a<b\} + i(\pi,\pi) &Y_{\{0,2\}} &= \RR_{>0}^2 + i(0,\pi)\\ Y_{\{1,2\}} &= \{(a,b)\mid 0< a< b\}. \end{alignat*} \end{lemma} \begin{proof} Let $(\chi_1,\chi_2)\in D$, without loss of generality $(\chi_1,\chi_2)\in X_I$ for some $I\subseteq\{0,1,2\}$, and write \begin{align*} c_1 &:= \sinh(\frac{\chi_1}{2})\sinh(\frac{\chi_2}{2})\\ c_2 &:= \cosh(\frac{\chi_1}{2})\cosh(\frac{\chi_2}{2}). \end{align*} We now need to investigate when $c_1^2,c_2^2$ are real numbers. For this to happen, $c_i$ needs to be purely real or purely imaginary, making for four cases: \begin{description} \item[Both real] In this case, $c_1\pm c_2$ are both also real numbers. We have \[ c_2\pm c_1 = \cosh(\frac{\chi_1\pm\chi_2}{2}). \] If $\chi_1 = 2a + 2i\phi$ and $\chi_2=2b+2i\psi$, this reads \[ c_2\pm c_1 = \cosh(a\pm b)\cos(\phi\pm\psi) + i\sinh(a\pm b)\sin(\phi\pm\psi). \] For this to be real, the imaginary part must vanish, i.e. we need $a\pm b=0$ or $\phi\pm\psi\in\pi\ZZ$. If both $\phi+\psi,\phi-\psi\not\in\pi\ZZ$, we need $a+b=a-b=0$, which implies $a=b=0$.
Furthermore, the inequalities imply that $0<\phi<\psi<\frac{\pi}{2}$, which in turn implies that $(\chi_1,\chi_2)\in X_\emptyset$, with zero real part, hence $(\chi_1,\chi_2)\in Y_\emptyset$. If $\phi-\psi\in\pi\ZZ$ and $\phi+\psi\not\in\pi\ZZ$, we need $a+b=0$. Furthermore, we have $0<\phi=\psi<\frac{\pi}{2}$, which implies that $(\chi_1,\chi_2)\in X_{\{1\}}$. Adding in the fact that $a=-b$, we obtain $(\chi_1,\chi_2)\in Y_{\{1\}}$. If $\phi+\psi\in\pi\ZZ$, we can have either $\phi=\psi=0$ or $\phi=\psi=\frac{\pi}{2}$. In both cases we also have $\phi-\psi\in\pi\ZZ$. In both cases there are no further restrictions on $a,b$, which implies $(\chi_1,\chi_2)\in X_{\{1,2\}}=Y_{\{1,2\}}$ or $X_{\{0,1\}}=Y_{\{0,1\}}$, respectively. \item[Both imaginary] In this case, $c_1\pm c_2$ are purely imaginary numbers, which equal \[ c_2\pm c_1 = \cosh(a\pm b)\cos(\phi\pm\psi) + i\sinh(a\pm b)\sin(\phi\pm\psi). \] Since the $\cosh$ of any real number is nonzero, we obtain $\cos(\phi+\psi)=\cos(\phi-\psi)=0$, meaning that $\phi+\psi,\phi-\psi\in\frac{\pi}{2} + \pi\ZZ$. Since we chose $0\le\phi\le\psi\le\frac{\pi}{2}$, the only possible combinations are $\phi=0,\psi=\frac{\pi}{2}$, which implies that $(\chi_1,\chi_2)\in X_{\{0,2\}}=Y_{\{0,2\}}$. \item[$c_1$ imaginary, $c_2$ real] In this case $c_2\pm c_1$ are complex conjugates of each other, meaning that \[ \cosh(\frac{\chi_1+\chi_2}{2}) = \overline{\cosh(\frac{\chi_1-\chi_2}{2})} = \cosh(\frac{\overline{\chi_1-\chi_2}}{2}), \] which shows that one of \begin{align*} 2\pi i\ZZ \ni &\frac{\chi_1+\chi_2}{2}-\frac{\overline{\chi_1}-\overline{\chi_2}}{2}\\ =& a + b + i\phi + i\psi - (a - b - i\phi + i\psi) = 2b + 2i\phi\\ 2\pi i\ZZ\ni &\frac{\chi_1+\chi_2}{2}+\frac{\overline{\chi_1}-\overline{\chi_2}}{2}\\ =& a+b+i\phi + i\psi + (a-b-i\phi+i\psi) = 2a + 2i\psi. \end{align*} If the first is true, we need $b=0$ and $\phi\in \pi\ZZ$. 
$b=0$ is only allowed for $X_\emptyset, X_{\{1\}}, X_{\{2\}}$, of which only $X_{\{2\}}$ allows $\phi\in\pi\ZZ$ (namely $\phi=0$). Since $b=0$, we in particular also have $(\chi_1,\chi_2)\in Y_{\{2\}}$. If the second is true, we need $a=0$ and $\psi\in\pi\ZZ$, which means that $(\chi_1,\chi_2)$ is not contained in any $X_I$, which is a contradiction. \item[$c_1$ real, $c_2$ imaginary] In this case, $c_1\pm c_2$ are complex conjugates of each other, meaning that \[ \cosh(\frac{\chi_1+\chi_2}{2}) = -\overline{\cosh(\frac{\chi_1-\chi_2}{2})} = -\cosh(\frac{\overline{\chi_1-\chi_2}}{2}), \] which shows that one of \begin{align*} \pi i + 2\pi i\ZZ \ni & 2b + 2i\phi\\ \pi i + 2\pi i\ZZ \ni & 2a + 2i\psi. \end{align*} In the first case, we have $b=0$ and $\phi\in\frac{\pi}{2} + \pi\ZZ$, i.e. $\phi=\frac{\pi}{2}$, which implies that $(\chi_1,\chi_2)$ is not contained in any $X_I$, which is a contradiction. In the second case, we have $a=0$ and $\psi=\frac{\pi}{2}$, which implies that $(\chi_1,\chi_2)\in Y_{\{0\}}\subseteq X_{\{0\}}$. \end{description} We thus obtain the following:\\ \begin{tabular}{l|l|l} $c_1$ & $c_2$ & Real Faces \\\hline re & re & $Y_\emptyset, Y_{\{1\}}, Y_{\{0,1\}}, Y_{\{1,2\}}$\\ re & im & $Y_{\{0\}}$\\ im & re & $Y_{\{2\}}$\\ im & im & $Y_{\{0,2\}}$. \end{tabular} \end{proof} \begin{corollary} Restricted to $Y$, $f$ is a diffeomorphism onto $f(Y)$, which is given by \[ \{(u,v)\in\RR^2\mid u,v,1+u^2+v^2-2u-2v-2uv\ne0\}. \] \end{corollary} \begin{proof} Note that the submanifolds $Y_I$ are real slices of $D$, meaning that infinitesimally, there is always one degree of freedom in $\chi_1$-direction and one in $\chi_2$-direction remaining. More concretely, this means that at any point $p\in Y$ we have $T_p Y \otimes\CC = T_p D$. In particular, up to scalar factors, the Jacobian is the same as when we consider all of $D$. Consequently, by the definition of $D$, both $f$ and $g$ are local diffeomorphisms on $Y_I$. 
$Y$ is now the disjoint union of (real) submanifolds of $D$ of dimension 2, consequently it is itself a submanifold on which $f$ is a local diffeomorphism. Furthermore, by Lemma~\ref{sec:lem-f-injective}, $f$ is injective on $X$, hence in particular on $Y$, hence it is a diffeomorphism onto $f(Y)$. To see that $f(Y)$ is contained in the set indicated, note that $\chi_1,\chi_2\not\in i\pi\ZZ$ implies that $\sech(\frac{\chi_i}{2}),\tanh(\frac{\chi_i}{2})$ ($i=1,2$) are finite and nonzero. \end{proof} We now take a closer look at what the Cartan subsets look like when viewed through the lens of $\psi:\, \tilde{G}\to \GP(G/Q,4)$ and the coordinate functions $u,v$. But first we need a shorthand to determine $(u,v)(\psi(x))$ from the entries of $x$ for $x\in\tilde{G}$. \begin{lemma}\label{sec:lem-cross-ratios-corners} Let $x\in\tilde{G}$ be written as a $1+d+1$-block matrix as follows: \[ x = \mqty(A & B & C\\D & E & F\\G & H & I). \] Then \begin{align*} u(\psi(x)) &= \frac{4}{(A-I)^2 - (C-G)^2}\\ v(\psi(x)) &= \frac{(A+I)^2-(C+G)^2}{(A-I)^2-(C-G)^2}. \end{align*} \end{lemma} \begin{proof} We have \[ \psi(x) = \qty(\iota(0),\infty, q\mqty(A + C\\D + F\\G+I), q\mqty(A - C\\D-F\\G-I)), \] so by the definition of $u,v$ we have \begin{align*} u(\psi(x)) &= 2 \frac{A^2-C^2-G^2+I^2 + \eta(D,D) - \eta(F,F)}{(A-I)^2-(C-G)^2}\\ v(\psi(x)) &= \frac{(A+I)^2-(C+G)^2}{(A-I)^2-(C-G)^2}. \end{align*} Note that since $x\in O(p+1,q+1)$, the first and last columns of $x$ are vectors with square length $1$ and $-1$, respectively, whence \[ A^2 + \eta(D,D) - G^2 = 1\qquad C^2 + \eta(F,F) - I^2 = -1, \] so that \[ u(\psi(x)) = \frac{4}{(A-I)^2 - (C-G)^2}. \] We can thus infer $u(\psi(x)),v(\psi(x))$ from the four corner entries $A,C,G,I$. \end{proof} \begin{corollary}\label{sec:cor-characterisation-g-tilde} $\tilde{G}$ consists of those matrices in $G$ with corners $A,C,G,I$ such that \[ (A-I)^2-(C-G)^2 \ne 0\ne (A+I)^2 - (C+G)^2, \] so it is indeed a dense open subset.
\end{corollary} \begin{proof} Let $x\in G$, then $\psi(x) = (\iota(0),\infty, x\cdot \iota(0), x\cdot \infty)=(\iota(0),\infty,\iota(a),\iota(b))$ is in general position iff none of $\eta(a,a),\eta(b,b),\eta(a-b,a-b)$ is zero. By Corollary~\ref{sec:cor-uv-conf-frame} this is the case iff $u(\psi(x)),v(\psi(x))$ are both finite and nonzero. By Lemma~\ref{sec:lem-cross-ratios-corners}, this is the case iff $(A-I)^2-(C-G)^2, (A+I)^2-(C+G)^2\ne0$. \end{proof} \begin{lemma}\label{sec:lem-cartan-chi} Let $q>0$. The (representative) Cartan subsets $C_I$ from Corollary~\ref{sec:cor-cartan-subsets} can be labelled as follows \begin{enumerate} \item $C_\emptyset = C$; \item $C_{\{0\}} = \exp(\RR(F_{0,1}+F_{d,d+1})\oplus\RR(F_{0,d}-F_{1,d+1}))$; \item $C_{\{2\}} = \exp(\RR(F_{0,1}+F_{d,d+1})\oplus\RR(F_{0,d}-F_{1,d+1}))t_{0,\pi}$; \item $C_{\{1\}} = \exp(\RR F_{0,1}\oplus \RR F_{2,d+1})$; \item $C_{\{1\}'} = \exp(\RR F_{d,d+1}\oplus\RR F_{0,d-1})$; \item $C_{\{0,1\}} = \exp(\RR F_{0,d}\oplus\RR F_{1,d+1})$; \item $C_{\{0,2\}} = \exp(\RR F_{0,d}\oplus\RR F_{1,d+1})t_{\pi/2,\pi/2}$; \item $C_{\{1,2\}} = \exp(\RR F_{0,d}\oplus\RR F_{1,d+1})t_{0,\pi}$. \end{enumerate} Then, for every $I$ there exists a homeomorphism (``parametrisation'') from $\overline{Y_I}$ to a subset of $C_I$ that maps $Y_I$ to (a subset of) $C_I\cap\tilde{G}$, in a way that $(\chi_1,\chi_2)\in Y_I$ is mapped to $x$ with $(u,v)(\psi(x)) = f(\chi_1,\chi_2)$. \end{lemma} \begin{proof} In the following, we use the same numbering as in Corollary~\ref{sec:cor-cartan-subsets}. For each Cartan subset $C_I$ we compute $(u,v)\circ\psi$ for a typical element and match this with $f^{-1}$ of an element of $\tilde{W}\overline{Y_I}$. \begin{enumerate} \item For $x\in C$, say $x=t_{\phi,\psi}$ we have \[ \mqty(A & C\\G & I) = \mqty(\cos(\phi) & 0\\0 & \cos(\psi)). \] By Corollary~\ref{sec:cor-characterisation-g-tilde}, we have $x\in\tilde{G}$ iff \[ \cos[2](\phi)-\cos[2](\psi) =-\sin(\phi+\psi)\sin(\phi-\psi)\ne0, \] i.e.
iff $\phi\pm\psi\not\in \pi\ZZ$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{4}{(\cos(\phi)-\cos(\psi))^2}\\ &= \csc[2](\frac{\phi+\psi}{2}) \csc[2](\frac{\phi-\psi}{2})\\ v(\psi(x)) &= \frac{(\cos(\phi)+\cos(\psi))^2}{(\cos(\phi)-\cos(\psi))^2}\\ &= \cot[2](\frac{\phi+\psi}{2}) \cot[2](\frac{\phi-\psi}{2}), \end{align*} such that $(u(\psi(x)),v(\psi(x)))=f\qty(i(\phi+\psi+\pi), i(\phi-\psi+\pi))$. It is therefore $\overline{Y_\emptyset}$, which we can use to parametrise $C_\emptyset$ as follows: \[ \overline{Y_\emptyset} \ni (\chi_1,\chi_2)\mapsto \exp(\frac{\chi_1+\chi_2-2\pi i}{2i}F_{0,1} + \frac{\chi_1-\chi_2}{2i}F_{d,d+1}). \] In particular, if $(\chi_1,\chi_2)\in Y_\emptyset$, we have $\chi_1,\chi_2\not\in \pi i\ZZ$, which corresponds to $\phi\pm\psi$ not being elements of $\pi \ZZ$, so that we obtain an element of $\tilde{G}$. \item For $x=\exp(a(F_{0,d}-F_{1,d+1}))t_{\phi,\phi}$ we have \[ \mqty(A & C\\G & I) = \mqty(\cosh(a)\cos(\phi) & \sinh(a)\sin(\phi)\\ -\sinh(a)\sin(\phi) & \cosh(a)\cos(\phi)). \] We thus have $x\in\tilde{G}$ iff $\sinh(a)\sin(\phi)\ne0$ and $\cosh(a)\cos(\phi)\ne0$, i.e. iff $a\ne0$ and $\phi\not\in \frac{1}{2}\pi\ZZ$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{1}{-\sinh[2](a)\sin[2](\phi)} = -\csch[2](a)\csc[2](\phi)\\ v(\psi(x)) &= -\coth[2](a)\cot[2](\phi), \end{align*} such that $(u(\psi(x)),v(\psi(x)))=f\qty(2i\phi+i\pi, 2a + i\pi)$. It is therefore $\overline{Y_{\{0\}}}$, which we can use to parametrise $C_{\{0\}}$ as follows: \[ \overline{Y_{\{0\}}}\ni (\chi_1,\chi_2) \mapsto \exp(\frac{\Re(\chi_2)}{2}(F_{0,d}-F_{1,d+1}) + \frac{\chi_1-i\pi}{2i}(F_{0,1}+F_{d,d+1})). \] In particular, for $(\chi_1,\chi_2)\in Y_{\{0\}}$, we obtain an element of $\tilde{G}$. \item For $x=\exp(a(F_{0,d}-F_{1,d+1}))t_{\phi,\phi+\pi}$, the corners are \[ \mqty(A & C\\G & I) = \mqty(\cosh(a)\cos(\phi) & -\sinh(a)\sin(\phi)\\ -\sinh(a)\sin(\phi) & -\cosh(a)\cos(\phi)).
\] We thus have $x\in\tilde{G}$ iff $\cosh(a)\cos(\phi),\sinh(a)\sin(\phi)\ne0$, i.e. iff $a\ne0$ and $\phi\not\in\frac{\pi}{2}\ZZ$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{1}{\cosh[2](a)\cos[2](\phi)}\\ v(\psi(x)) &= - \tanh[2](a)\tan[2](\phi), \end{align*} such that $(u(\psi(x)), v(\psi(x))) = f(2a, 2i\phi)$. It is therefore $\overline{Y_{\{2\}}}$ that we can use to parametrise $C_{\{2\}}$ as follows: \[ \overline{Y_{\{2\}}}\ni (\chi_1,\chi_2)\mapsto \exp(\frac{\Re(\chi_1)}{2}(F_{0,d}-F_{1,d+1}) + \frac{\chi_2}{2i}(F_{0,1}+F_{d,d+1}))t_{0,\pi}. \] In particular, for $(\chi_1,\chi_2)\in Y_{\{2\}}$, we obtain an element of $\tilde{G}$. \item For $x=\exp(\phi F_{0,1} + a F_{2,d+1})$ we have \[ \mqty(A & C\\G & I) = \mqty(\cos(\phi) & 0\\0 & \cosh(a)). \] We thus have $x\in\tilde{G}$ iff \[ \cos[2](\phi)-\cosh[2](a) = -\sin(\phi+ia)\sin(\phi-ia)\ne0, \] i.e. iff $\phi\pm ia\not\in \pi\ZZ$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{4}{(\cos(\phi)-\cosh(a))^2} = \frac{4}{(\cosh(i\phi)-\cosh(a))^2}\\ &= \frac{1}{\sinh[2](\frac{a+i\phi}{2})\sinh[2](\frac{a-i\phi}{2})}\\ v(\psi(x)) &= \frac{(\cos(\phi)+\cosh(a))^2}{(\cos(\phi)-\cosh(a))^2} = \coth[2](\frac{a+i\phi}{2})\coth[2](\frac{a-i\phi}{2}) \end{align*} such that \[ (u(\psi(x)),v(\psi(x))) = f(a+i\phi + i\pi, -a+i\phi + i\pi). \] It is therefore $\overline{Y_{\{1\}}}$ that we can use to parametrise $C_{\{1\}}$ as follows: \[ \overline{Y_{\{1\}}}\ni(\chi_1,\chi_2)\mapsto \exp(\frac{\chi_1+\chi_2-2i\pi}{2i} F_{0,1} + \frac{\Re(\chi_1-\chi_2)}{2}F_{2,d+1}). \] In particular, for $(\chi_1,\chi_2)\in Y_{\{1\}}$, we obtain an element of $\tilde{G}$. \item For $x=\exp(\phi F_{d,d+1} + aF_{0,d-1})$, the four corners are \[ \mqty(A & C\\G & I) = \mqty(\cosh(a) & 0\\0 & \cos(\phi)), \] which leads to the same conditions for $x\in\tilde{G}$ and the same cross-ratios $u,v$ as the previous case.
Consequently, we call this Cartan subset $C_{\{1\}'}$ and parametrise it as \[ \overline{Y_{\{1\}}}\ni(\chi_1,\chi_2)\mapsto \exp(\frac{\chi_1+\chi_2-2i\pi}{2i} F_{d,d+1} + \frac{\Re(\chi_1-\chi_2)}{2}F_{0,d-1}), \] where we get an element of $\tilde{G}$ if $(\chi_1,\chi_2)\in Y_{\{1\}}$. \item For $x=\exp(a F_{0,d}+ b F_{1,d+1})$ the corners are \[ \mqty(\cosh(a) & 0\\0 & \cosh(b)). \] We thus have $x\in\tilde{G}$ iff \[ \cosh[2](a)-\cosh[2](b) = \sinh(a+b)\sinh(a-b)\ne0, \] i.e. iff $a\pm b\ne0$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{4}{(\cosh(a)-\cosh(b))^2} = \frac{1}{\sinh[2](\frac{a+b}{2})\sinh[2](\frac{a-b}{2})}\\ v(\psi(x)) &= \frac{(\cosh(a)+\cosh(b))^2}{(\cosh(a)-\cosh(b))^2} = \coth[2](\frac{a+b}{2})\coth[2](\frac{a-b}{2}), \end{align*} such that \[ (u(\psi(x)),v(\psi(x))) = f(a+b+i\pi, a-b+i\pi). \] It is thus $\overline{Y_{\{0,1\}}}$ that we can use to parametrise $C_{\{0,1\}}$ as follows: \[ \overline{Y_{\{0,1\}}}\ni(\chi_1,\chi_2)\mapsto \exp(\frac{\Re(\chi_1+\chi_2)}{2} F_{0,d} + \frac{\Re(\chi_1-\chi_2)}{2}F_{1,d+1}), \] where we obtain an element of $\tilde{G}$ if $(\chi_1,\chi_2)\in Y_{\{0,1\}}$. \item For $x=\exp(a F_{0,d}+ b F_{1,d+1})t_{\pi/2,\pi/2}$ the corners are \[ \mqty(A & C\\G & I) = \mqty(0 & \sinh(a)\\\sinh(b) & 0). \] We thus have $x\in\tilde{G}$ iff \[ \sinh[2](a)-\sinh[2](b) = \sinh(a+b)\sinh(a-b)\ne0, \] i.e. iff $a\pm b\ne0$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{4}{-(\sinh(a)-\sinh(b))^2} = - \csch[2](\frac{a-b}{2})\sech[2](\frac{a+b}{2})\\ v(\psi(x)) &= \frac{-(\sinh(a)+\sinh(b))^2}{-(\sinh(a)-\sinh(b))^2} = \coth[2](\frac{a-b}{2})\tanh[2](\frac{a+b}{2}), \end{align*} which shows that \[ (u(\psi(x)),v(\psi(x))) = f(a+b, a-b+i\pi). 
\] It is therefore $\overline{Y_{\{0,2\}}}$ that we can use to parametrise $C_{\{0,2\}}$ as follows: \[ \overline{Y_{\{0,2\}}} \ni (\chi_1,\chi_2)\mapsto \exp(\frac{\Re(\chi_1+\chi_2)}{2}F_{0,d} + \frac{\Re(\chi_1-\chi_2)}{2}F_{1,d+1})t_{\pi/2,\pi/2}. \] In particular, we obtain an element of $\tilde{G}$ for $(\chi_1,\chi_2)\in Y_{\{0,2\}}$. \item For $x=\exp(a F_{0,d}+ b F_{1,d+1})t_{0,\pi}$ the corners are \[ \mqty(A & C\\G & I) = \mqty(\cosh(a) & 0\\0 & -\cosh(b)). \] We thus have $x\in\tilde{G}$ iff \[ \cosh[2](a)-\cosh[2](b) = \sinh(a+b)\sinh(a-b)\ne0, \] i.e. iff $a\pm b\ne0$. Assume that is the case, then \begin{align*} u(\psi(x)) &= \frac{4}{(\cosh(a)+\cosh(b))^2} = \sech[2](\frac{a+b}{2})\sech[2](\frac{a-b}{2})\\ v(\psi(x)) &= \frac{(\cosh(a)-\cosh(b))^2}{(\cosh(a)+\cosh(b))^2} = \tanh[2](\frac{a+b}{2})\tanh[2](\frac{a-b}{2}). \end{align*} This shows that \[ (u(\psi(x)),v(\psi(x))) = f(a+b, a-b). \] It is therefore $\overline{Y_{\{1,2\}}}$ that we can use to parametrise $C_{\{1,2\}}$ as follows: \[ \overline{Y_{\{1,2\}}}\ni(\chi_1,\chi_2)\mapsto \exp(\frac{\Re(\chi_1+\chi_2)}{2}F_{0,d} + \frac{\Re(\chi_1-\chi_2)}{2}F_{1,d+1})t_{0,\pi}. \] In particular, we obtain an element of $\tilde{G}$ for $(\chi_1,\chi_2)\in Y_{\{1,2\}}$. \end{enumerate} \end{proof} \begin{lemma}\label{sec:lem-cartan-chi-euclidean} Let $q=0$. There is a homeomorphism (``parametrisation'') from $\overline{Y_{\{1\}}}$ to a subset of $C$ that maps $Y_{\{1\}}$ to (a subset of) $C\cap\tilde{G}$ in such a way that $(\chi_1,\chi_2)\in Y_{\{1\}}$ is mapped to $x\in C$ with $(u,v)(\psi(x)) = f(\chi_1,\chi_2)$. \end{lemma} \begin{proof} We use the same techniques as in the proof of Lemma~\ref{sec:lem-cartan-chi}. Let $x\in C$, say \[ x = \exp(aF_{d,d+1})t_{\phi}, \] then the four corners of the matrix $x$ are \[ \mqty(\cos(\phi) & 0 \\ 0 & \cosh(a)). \]
This leads to the same cross-ratios as in Lemma~\ref{sec:lem-cartan-chi}(iv,v), which shows the claim and suggests the following parametrisation: \[ \overline{Y_{\{1\}}}\ni (\chi_1,\chi_2)\mapsto \exp(\frac{\chi_1+\chi_2-2i\pi}{2i}F_{0,1} + \frac{\chi_1-\chi_2}{2}F_{d,d+1}).\qedhere \] \end{proof} If we compare these Cartan subsets and the associated parameter regions $Y_I$ with the causal configurations mentioned in \cite[\S5.1]{qiao} (see also \cite[\S6.6.7]{KQR}), we note that our parameters $\chi_1,\chi_2$ are related to the standard bootstrap variables $z,\overline{z}$ used there as follows: \[ z = \sech[2](\frac{\chi_1}{2}),\qquad \overline{z} = \sech[2](\frac{\chi_2}{2}) \] up to potentially exchanging $z,\overline{z}$. This can be seen most easily from the fact that (see e.g. \cite{bootstrapReview}, (31)) $u,v$ can be written as $u=z\overline{z},v=(1-z)(1-\overline{z})$, combined with Lemma~\ref{sec:lem-cartan-chi}, where we related $u$ and $v$ of a certain four-point configuration to the following functions of $\chi_{1/2}$: \[ u = \sech[2](\frac{\chi_1}{2})\sech[2](\frac{\chi_2}{2}),\qquad v = \qty(1-\sech[2](\frac{\chi_1}{2}))\qty(1-\sech[2](\frac{\chi_2}{2})). \] We can thus read off the claimed relations as one possible way to realise these equations. After some arithmetic we arrive at the following correspondence between $Y_I$ and causal regions from Qiao:\\ \begin{tabular}{c|c} $I$ & Causal region\\\hline $\emptyset$ & $E_{tu}$\\ $\{0\}$ & $U$\\ $\{1\}$ & $E_{stu}$\\ $\{2\}$ & $T$\\ $\{0,1\}$ & $E_{su}$\\ $\{0,2\}$ & $S$\\ $\{1,2\}$ & $E_{st}$ \end{tabular} In particular, we see that the Euclidean case $q=0$ can be completely described using the assumption that $z,\overline{z}$ are non-real complex numbers that are conjugate to each other, which is how $z,\overline{z}$ are usually introduced in the Euclidean setting, see e.g. \cite[\S3.2.1]{qiao}.
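As a quick numerical cross-check of this relation (a sketch, not part of the formal development; the function names are ours), one can verify that the coordinate map $f$ and the bootstrap substitution $u=z\overline{z}$, $v=(1-z)(1-\overline{z})$ with $z=\sech^2(\chi_1/2)$, $\overline{z}=\sech^2(\chi_2/2)$ agree for generic complex arguments:

```python
import cmath

def sech(w):
    # complex hyperbolic secant
    return 1 / cmath.cosh(w)

def f(chi1, chi2):
    # the coordinate map f from the definition of D above
    u = (sech(chi1 / 2) * sech(chi2 / 2)) ** 2
    v = (cmath.tanh(chi1 / 2) * cmath.tanh(chi2 / 2)) ** 2
    return u, v

def uv_from_z(chi1, chi2):
    # bootstrap variables z, zbar and the standard relations
    # u = z*zbar, v = (1 - z)*(1 - zbar)
    z = sech(chi1 / 2) ** 2
    zbar = sech(chi2 / 2) ** 2
    return z * zbar, (1 - z) * (1 - zbar)

# generic sample points away from the excluded locus i*pi*Z
for chi1, chi2 in [(0.7 + 0.3j, 1.1 + 2.0j), (2.0 + 1.0j, -0.5 + 0.9j)]:
    u1, v1 = f(chi1, chi2)
    u2, v2 = uv_from_z(chi1, chi2)
    assert abs(u1 - u2) < 1e-9 and abs(v1 - v2) < 1e-9
```

The check rests on the identity $1-\sech^2 = \tanh^2$, which holds for all complex arguments away from the poles.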
\begin{remark} Note that the ``parametrisations'' from Lemmas~\ref{sec:lem-cartan-chi}, \ref{sec:lem-cartan-chi-euclidean} are not surjective. This corresponds to the fact that $(u,v)\circ\psi$ is not injective on $C_I\cap\tilde{G}$. For example, for $I=\emptyset$ we have $(u,v)(\psi(t_{\phi,\psi}))=(u,v)(\psi(t_{\phi',\psi'}))$ iff \[ (\cos(\phi)+\cos(\psi))^2, (\cos(\phi)-\cos(\psi))^2 \] equal the corresponding quantities for $\phi',\psi'$, which is the case iff \[ (a,b) \in \{(a',b'),(b',a'),(-a', -b'),(-b',-a')\} \] ($a=\cos(\phi),b=\cos(\psi)$, and similarly for primed quantities). Since the cosine is even, each of these four possibilities contributes four possibilities (one sign choice for each of the two corresponding angles), making for (generically) sixteen elements in the same fibre. Nevertheless, we can extend all of these parametrisations to larger domains (consisting of several $\tilde{W}$-translates of $\overline{Y_I}$) such that they do become surjective. \end{remark} \subsection{Root Spaces in the Euclidean Setting} We now apply the method of Section~\ref{sec:radial-parts} to compute the radial part of $\Omega_{\mathfrak{g}}$ in the Euclidean setting, i.e. for $q=0$. In that case, we have just one Cartan subset $C$, meaning that we essentially obtain a $KAK$-decomposition $G_{s} = HCH$ and $G_{rs}=H(C\cap G_{rs})H$. We first establish a reduced root space decomposition of $\mathfrak{g}_{\CC}$ with respect to $\mathfrak{c}$. \begin{proposition}\label{sec:prop-euclidean-root-spaces} The root system $\Sigma(\mathfrak{g}:\mathfrak{c})$ is of type $C_2$, where the short roots have multiplicity $d-2$, the long roots $1$, and $0$ has multiplicity $\frac{(d-2)(d-3)}{2}+2$. The root system is given by \[ \left\{\pm\epsilon_1,\pm\epsilon_2,\pm\frac{\epsilon_1+\epsilon_2}{2},\pm\frac{\epsilon_1-\epsilon_2}{2}\right\}, \] where $\epsilon_{1/2}(aF_{0,1}+bF_{d,d+1})=ai\pm b$.
Moreover we have \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \mp \exp(\frac{\chi_1\pm\chi_2}{2}) \] when $x\in C$ is parametrised as in Lemma~\ref{sec:lem-cartan-chi-euclidean}. Furthermore, the elements parametrised by $Y_{\{1\}}$ (i.e. not the boundary) lie in $G_{rs}$. \end{proposition} \begin{proof} Let $X:=aF_{0,1}+bF_{d,d+1}\in\mathfrak{c}_{\CC}$. Let $i,j=2,\dots,d-1$, then \begin{align*} \ad(X)(F_{ij}) &= 0\\ \ad(X)(F_{0,i} \pm i F_{1,i}) &= \pm ai (F_{0,i} \pm i F_{1,i})\\ \ad(X)(F_{i,d}\pm F_{i,d+1}) &= \mp b (F_{i,d}\pm F_{i,d+1})\\ \ad(X)(F_{0,d}\pm F_{0,d+1}+iF_{1,d}\pm i F_{1,d+1}) &= (\mp b + ai)(\cdots)\\ \ad(X)(F_{0,d}\pm F_{0,d+1}-iF_{1,d}\mp i F_{1,d+1}) &= (\mp b - ai)(\cdots). \end{align*} If we define $\epsilon_1(X):= ai + b$ and $\epsilon_2(X):= ai-b$, we obtain that $\pm\frac{\epsilon_1+\epsilon_2}{2},\pm\frac{\epsilon_1-\epsilon_2}{2},\pm\epsilon_1,\pm\epsilon_2$ are roots with multiplicity at least $d-2, d-2, 1,1$, respectively. In addition the multiplicity of 0 is at least $\frac{(d-2)(d-3)}{2}+2$. All of these vector spaces we have already found together have dimension $\frac{(d+2)(d+1)}{2}=\dim(\mathfrak{g})$. Therefore, these are all roots, and we have identified the full root spaces. Moreover, \begin{align*} x^{\frac{\epsilon_1+\epsilon_2}{2}} &= \exp(i\frac{\chi_1+\chi_2-2\pi i}{2i}) = - \exp(\frac{\chi_1+\chi_2}{2})\\ x^{\frac{\epsilon_1-\epsilon_2}{2}} &= \exp(\frac{\chi_1-\chi_2}{2}). \end{align*} From these formulae we can see that $x^{2\alpha}\ne1$ for all $\alpha\in\Sigma$ because that condition is exactly equivalent to none of $\chi_1,\chi_2,\frac{\chi_1\pm\chi_2}{2}$ being contained in $i\pi\ZZ$, which is how we defined $D\supset Y_{\{1\}}$. By Lemma~\ref{sec:lem-characterisation-regular}, this suffices to show $x\in G_{rs}$. 
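The dimension count in this argument can also be checked mechanically (a small sketch, not part of the proof):

```python
# Sanity check: the root multiplicities found above sum to
# dim(g) = (d+1)(d+2)/2 for every d >= 3.
for d in range(3, 50):
    zero_mult = (d - 2) * (d - 3) // 2 + 2   # multiplicity of the zero weight
    short_roots = 4 * (d - 2)                # the four roots ±(e1±e2)/2
    long_roots = 4                           # the four roots ±e1, ±e2
    total = zero_mult + short_roots + long_roots
    assert total == (d + 1) * (d + 2) // 2, d
```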
\end{proof} \begin{lemma}\label{sec:lem-euclidean-cs} Let $W$ be a finite-dimensional $H$-bimodule and let $\Psi:E^W(\tilde{G},H)\to C^\infty(Y_{\{1\}},W^{Z_C})$ be the map obtained by restricting to $C$ and then parametrising as described in Lemma~\ref{sec:lem-cartan-chi-euclidean}. Let $C_{\epsilon_i}\in\mathfrak{c}_\CC$ be the element dual (with respect to $B$) to $\epsilon_i$ ($i=1,2$). Then \[ \Psi(C_{\epsilon_i}\cdot f) = \Psi(f\cdot C_{\epsilon_i}) = \partial_{\chi_i}\Psi(f).\qquad (i=1,2) \] \end{lemma} \begin{proof} We have \[ C_{\epsilon_{1/2}} = \frac{1}{2i} F_{0,1} \pm \frac{1}{2} F_{d,d+1}, \] so that \begin{align*} \Psi(f\cdot C_{\epsilon_1})(\chi_1,\chi_2) &= \dv{t} \eval{f\qty(\exp(tC_{\epsilon_1})\exp(\frac{\chi_1+\chi_2-2\pi i}{2i} F_{0,1} + \frac{\chi_1-\chi_2}{2}F_{d,d+1}))}_{t=0}\\ &= \dv{t} \eval{\Psi(f)(\chi_1+t, \chi_2)}_{t=0}\\ &= \partial_{\chi_1}\Psi(f)(\chi_1,\chi_2)\\ \Psi(f\cdot C_{\epsilon_2})(\chi_1,\chi_2) &= \dv{t}\eval{\Psi(f)(\chi_1,\chi_2+t)}_{t=0}\\ &= \partial_{\chi_2}\Psi(f)(\chi_1,\chi_2). \end{align*} \end{proof} \begin{corollary}\label{sec:prop-euclidean-as} The $A_\alpha$ operators from Proposition~\ref{sec:prop-operator-A} in the Euclidean case are as follows: \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= \sum_{i=2}^{d-1} F_{1,i}\otimes F^{1,i}\\ A_{\frac{\epsilon_1-\epsilon_2}{2}} &= \sum_{i=2}^{d-1} F_{i,d}\otimes F^{i,d}\\ A_{\epsilon_{1/2}} &= \frac{1}{2}(F_{0,d+1}\mp i F_{1,d})\otimes (F^{0,d+1}\pm i F^{1,d}) \end{align*} (where $A_{-\alpha}=A_\alpha$). \end{corollary} \begin{corollary}\label{sec:cor-euclidean-scalar-as} Let $W=\CC$ be an $H$-bimodule as follows: the group $M$ from Definition~\ref{sec:def-parabolic-subalgebras} acts trivially, and the Lie algebra $\mathfrak{a}$ acts as: \[ \pi_\Le(D_0)=\alpha,\pi_\Ri(D_0)=\beta. 
\] Then \[ \pi_\Le(m(A_\gamma))=-\frac{\alpha^2}{2},\qquad \pi(A_\gamma) = -\frac{\alpha\beta}{2},\qquad \pi_\Ri(m(A_\gamma)) = -\frac{\beta^2}{2} \] for $\gamma\in\{\pm\epsilon_1,\pm\epsilon_2\}$, and 0 otherwise. \end{corollary} \subsection{Root Spaces in the Lorentzian Setting} We now want to use the method of Section~\ref{sec:radial-parts} to compute the radial part of $\Omega_{\mathfrak{g}}$, at least acting on $HC_iH$ for some Cartan subsets. For that we need to know which Cartan subsets satisfy the condition that $\Ad(t)$ acts as a $B$-orthogonal involution times weight-space dependent constants. This is trivially true for cases (i), (ii), (iv), (v), (vi) from Corollary~\ref{sec:cor-cartan-subsets} (i.e. the Cartan sets we called $C_{\emptyset},C_{\{0\}},C_{\{1\}},C_{\{1\}'},C_{\{0,1\}}$). For the remaining ones, we refer to Lemma~\ref{sec:lem-remaining-cartan-subsets-nice}. We first establish the (reduced) root space decompositions with respect to our five subalgebras. We shall do this by treating $C_\emptyset$ explicitly and showing that there is an automorphism of $\mathfrak{g}_{\CC}$ that maps $\mathfrak{c}_\emptyset$ to all others. For that we use the following lemma. \begin{lemma}\label{sec:lem-eigenvalue-structure-adjoint} Let $\mathfrak{c}_1,\mathfrak{c}_2$ be commutative Lie subalgebras of $\mathfrak{g}_{\CC}$. Any Lie algebra isomorphism $\phi:\, \mathfrak{c}_1\to\mathfrak{c}_2$ that pulls back the character of the defining representation of $\mathfrak{c}_2$ to that of $\mathfrak{c}_1$ can be effected by $\Ad(g)$ for some $g\in O(\eta,\CC)$. \end{lemma} \begin{proof} Write $V=\CC^{d+2}$ for the defining representations of $\mathfrak{c}_1,\mathfrak{c}_2$. Let $\lambda,\mu\in\mathfrak{c}_1^*$ and let $v\in V_\lambda,w\in V_\mu$ (weight spaces). For $Z\in \mathfrak{c}_1$ we then have \[ (\lambda+\mu)(Z)\eta(v,w) = \eta(\lambda(Z)v,w) + \eta(v,\mu(Z)w) = \eta(Zv, w) + \eta(v,Zw) = 0, \] as $\mathfrak{g}_{\CC}$ consists of matrices that are antisymmetric with respect to $\eta$.
Since this holds for all $Z\in\mathfrak{c}_1$, we conclude that $\eta(v,w)$ can only be nonzero if $\lambda+\mu=0$. This shows that $\eta$ is a nondegenerate pairing between the $\lambda$ and the $-\lambda$ weight spaces for every $\lambda$. We now choose bases \[ (v_\mu^1,\dots,v_\mu^{n_\mu}), \qquad (u_\lambda^1,\dots,u_\lambda^{n_\lambda}) \] of $V_\mu$ (and $V_\lambda$) for every nonzero weight $\mu\in \mathfrak{c}_1^*$ (and $\lambda\in\mathfrak{c}_2^*$) such that $(v_\mu^1,\dots,v_\mu^{n_\mu})$ and $(v_{-\mu}^1,\dots,v_{-\mu}^{n_{-\mu}})$ are dual with respect to $\eta$ (note that $\eta$'s nondegeneracy, together with the fact that $V_\mu\oplus V_{-\mu}$ is $\eta$-orthogonal to all other weight spaces, implies that $n_\mu = n_{-\mu}$), similarly for the $u$. Choose furthermore an orthonormal basis $h_1,\dots,h_n\in V$ of the $0$-weight space with respect to $\mathfrak{c}_1$, and $k_1,\dots,k_m\in V$ of the $0$-weight space with respect to $\mathfrak{c}_2$. Since the defining representation is semisimple, these define two bases of $V$. Since $\phi$ pulls back the character of the defining representation, we find that $n_\lambda = n_{\phi^*(\lambda)}$ for all $\lambda\in\mathfrak{c}_2^*$, and hence in particular also that $n=m$. Define now $g\in\End(V)$ by mapping $v_{\phi^*(\lambda)}^i\mapsto u_\lambda^i$ for every nonzero weight $\lambda$ of the $\mathfrak{c}_2$-module $V$, and by mapping $h_i\mapsto k_i$. Since the linear map $g$ maps two bases of $V$ to each other, it is regular. Furthermore, we have \begin{align*} \eta(g(v_\lambda^i),g(v_\mu^j)) &= \eta(u_{\phi^{-*}(\lambda)}^i, u_{\phi^{-*}(\mu)}^j)\\ &= \delta_{\phi^{-*}(\lambda+\mu),0}\delta_{i,j} = \delta_{\lambda+\mu,0}\delta_{i,j}\\ &= \eta(v_\lambda^i, v_\mu^j)\\ \eta(g(v_\lambda^i), g(h_j)) &= \eta(u_{\phi^{-*}(\lambda)}^i, k_j)\\ &= 0 = \eta(v_\lambda^i, h_j)\\ \eta(g(h_i), g(h_j)) &= \eta(k_i,k_j) = \delta_{i,j}\\ &= \eta(h_i,h_j), \end{align*} so that $g$ is orthogonal, i.e. $g\in O(V)$.
Let $\lambda\in\mathfrak{c}_2^*$ be a weight of the defining representation and $v\in V_\lambda$, then $g^{-1}v\in V_{\phi^*(\lambda)}$ and for every $X\in\mathfrak{c}_1$ we have \[ gXg^{-1}v = \lambda(\phi(X)) v = \phi(X)v. \] By linearity this is true for all $v\in V$. Since the defining representation is faithful, we thus obtain $\phi(X) = \Ad(g)(X)$. \end{proof} \begin{proposition}\label{sec:prop-root-systems} For all $I$, the root system $\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ is of type $C_2$ where the short roots have multiplicity $d-2$, the long roots $1$, and 0 has multiplicity $\frac{(d-2)(d-3)}{2} + 2$. In every case, we can choose the root system to be \[ \left\{\pm\epsilon_1,\pm\epsilon_2,\frac{\epsilon_1\pm\epsilon_2}{2},-\frac{\epsilon_1\pm\epsilon_2}{2}\right\} \] such that \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \mp\exp(\frac{\chi_1\pm\chi_2}{2}) \] when $x\in C_I$ is parametrised as in Lemma~\ref{sec:lem-cartan-chi} and when $I\subset\{0,1\}$. For the remaining three choices of $I$, when $C_I$ is not a subgroup, we can pick \begin{align*} \{2\},\{1,2\}:\qquad x^{\frac{\epsilon_1\pm\epsilon_2}{2}} &= \epsilon^I_{\frac{\epsilon_1+\epsilon_2}{2}} \exp(\frac{\chi_1\pm\chi_2}{2})\\ \{0,2\}:\qquad x^{\frac{\epsilon_1\pm\epsilon_2}{2}} &= \mp i \epsilon^{\{0,2\}}_{\frac{\epsilon_1+\epsilon_2}{2}} \exp(\frac{\chi_1\pm\chi_2}{2}) \end{align*} (which determines all $x^\alpha$ if we can choose $\epsilon_I$ to be additive, i.e. so that it extends to a character of the root lattice). Furthermore, those elements parametrised by $Y_I$ (i.e. not the rest of the closure) lie in $G_{rs}$. \end{proposition} \begin{proof} We are going to start with finding a root space decomposition of $C_\emptyset$. 
Note that if we manage to show the existence of root spaces as claimed with dimension at least what is claimed, we have already described a subspace of $\mathfrak{g}$ of dimension \begin{align*} \frac{(d-2)(d-3)}{2} + 2 + 4(d-2) + 4 &= \frac{d^2 - 5d + 6 + 4 + 8d - 8}{2}\\ &= \frac{d^2 + 3d + 2}{2} = \frac{(d+2)(d+1)}{2}\\&= \dim(\mathfrak{g}), \end{align*} so we're done. Let $a,b\in\CC$ and $i,j=2,\dots,d-1$. Then we have \begin{align*} \ad(aF_{0,1}+bF_{d,d+1})(F_{i,j}) &= 0\\ \ad(aF_{0,1}+bF_{d,d+1})(F_{0,i} \pm iF_{1,i}) &= \pm ai (F_{0,i}\pm iF_{1,i})\\ \ad(aF_{0,1}+bF_{d,d+1})(F_{i,d}\pm iF_{i,d+1}) &= \mp bi (F_{i,d}\pm iF_{i,d+1})\\ \ad(aF_{0,1}+bF_{d,d+1})(F_{0,d}\pm i F_{0,d+1} \mp i F_{1,d} + F_{1,d+1}) &= \mp i(a+b) (\cdots)\\ \ad(aF_{0,1}+bF_{d,d+1})(F_{0,d}\pm i F_{0,d+1} \pm i F_{1,d} - F_{1,d+1}) &= \pm i(a-b) (\cdots), \end{align*} showing that if we define $\epsilon_1(aF_{0,1}+bF_{d,d+1}):= ai+bi$ and $\epsilon_2(aF_{0,1}+bF_{d,d+1}):= ai-bi$, we obtain the claim. Note further that $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ for $x\in C_\emptyset$ have the claimed form. Now, we want to employ Lemmas~\ref{sec:lem-eigenvalue-structure-adjoint},\ref{sec:lem-root-spaces-automorphism}. For that note that \[ aF_{0,1}+bF_{d,d+1} = \mqty(0 & a & 0 & 0 & 0\\-a & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0& 0\\ 0 & 0 & 0 & 0 & -b\\0 & 0 & 0 & b & 0) \] (viewed as $1+1+(d-2)+1+1$-blocks) which (generically) has eigenvalues $\pm ia$ and $\pm ib$ and $0$ with multiplicities $1,1,1,1,d-2$, respectively. This means that the weights are the linear maps $\pm\frac{\epsilon_1+\epsilon_2}{2},\pm\frac{\epsilon_1-\epsilon_2}{2},0$, where $\epsilon_1,\epsilon_2$ are linearly independent. 
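The displayed block matrix and its claimed spectrum are easy to confirm numerically. The following sketch (with arbitrary illustrative values for $d,a,b$) builds $aF_{0,1}+bF_{d,d+1}$ exactly as displayed and checks that its eigenvalues are $\pm ia$, $\pm ib$ and $0$ with multiplicity $d-2$:

```python
import numpy as np

d, a, b = 6, 1.3, 0.7
X = np.zeros((d + 2, d + 2))
X[0, 1], X[1, 0] = a, -a          # the F_{0,1} block
X[d, d + 1], X[d + 1, d] = -b, b  # the F_{d,d+1} block

eig = np.linalg.eigvals(X)
eig = eig[np.argsort(eig.imag)]   # all eigenvalues are purely imaginary
expected = sorted([-a, -b] + [0.0] * (d - 2) + [b, a])
assert np.allclose(eig.real, 0) and np.allclose(eig.imag, expected)
```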
Since we are dealing with finite-dimensional vector spaces, all other two-dimensional commutative subalgebras of $\mathfrak{g}_{\CC}$ for which the weights of the defining representation are of this form (for linearly independent $\epsilon_1,\epsilon_2$) with weight space dimensions $1,1,1,1,d-2$ are conjugate via $O(\eta,\CC)$ by Lemma~\ref{sec:lem-eigenvalue-structure-adjoint} and therefore have the same root system and root multiplicities by Lemma~\ref{sec:lem-root-spaces-automorphism}. We now check the other subalgebras. In particular, we construct $\epsilon_1,\epsilon_2\in\mathfrak{c}_I^*$ such that the weights of the defining representation are $\pm\frac{\epsilon_1+\epsilon_2}{2},\pm\frac{\epsilon_1-\epsilon_2}{2},0$ (with multiplicities $1,1,1,1,d-2$), and such that $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ have the claimed forms, for $x\in C_I$ parametrised as in Lemma~\ref{sec:lem-cartan-chi}. \begin{description} \item[$I=\{0\}$] A generic element $X$ of $\mathfrak{c}_{\{0\}}$ looks like \[ X = a(F_{0,1}+F_{d,d+1}) + b(F_{0,d}- F_{1,d+1}) = \mqty(0 & a & 0 & -b & 0\\ -a & 0 & 0 & 0 & b\\ 0 & 0 & 0 & 0 & 0\\ -b & 0 & 0 & 0 & -a\\ 0 & b & 0 & a & 0 ), \] which has eigenvalues $b\pm ia, -b\pm ia$ and $0$ with multiplicities $1,1,1,1,d-2$, respectively. Define $\epsilon_1(X):= 2ia$ and $\epsilon_2(X):=2b$, then the weights of the defining representation are as claimed and $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ have the desired expressions. For future reference, we note that $\mathfrak{c}_\emptyset$ is mapped to $\mathfrak{c}_I$ by $\Ad(g)$ with \[ g = \frac{1}{\sqrt{2}}\mqty(1 & 0 & 0 & 0 & -i\\ 0 & 1 & 0 & -i & 0\\ 0 & 0 & \sqrt{2} & 0 & 0\\ 0 & i & 0 & -1 & 0\\ i & 0 & 0 & 0 & -1) \] written as a $1+1+(d-2)+1+1$ block matrix. \item[$I=\{2\}$] The algebra $\mathfrak{c}_I$ is the same as for $I=\{0\}$, so all these notions carry over. In order to obtain the claimed expressions for $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$, however, we need to exchange $\epsilon_1,\epsilon_2$.
\item[$I=\{1\}$] A generic element $X$ of $\mathfrak{c}_{\{1\}}$ looks like \[ X = aF_{0,1} + bF_{2,d+1} = \mqty(0 & a & 0 & 0 & 0\\ -a & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -b\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -b & 0 & 0) \] (written as a $1+1+1+(d-2)+1$-block matrix) with eigenvalues $\pm ia, \pm b, 0$ and multiplicities $1,1,1,1,d-2$, respectively. Define $\epsilon_1(X):= ia+b$ and $\epsilon_2(X):=ia-b$, then the weights of the defining representation are as claimed and $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ have the desired expression. For future reference, we note that $\mathfrak{c}_\emptyset$ is mapped to $\mathfrak{c}_I$ by $\Ad(g)$ with \[ g = \mqty(1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -i\\0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 1 & 0) \] written as a $(1+1+1+(d-2)+1)\times(1+1+(d-2)+1+1)$ block matrix. \item[$I=\{1\}'$] A generic element $X$ of $\mathfrak{c}_{\{1\}'}$ looks like \[ X = aF_{d,d+1} + bF_{0,d-1} = \mqty( 0 & 0 & -b & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ -b & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -a\\ 0 & 0 & 0 & a & 0) \] (written as a $1+(d-2)+1+1+1$-block matrix), which has eigenvalues $\pm ia,\pm b,0$ with multiplicities $1,1,1,1,d-2$, respectively. Let $\epsilon_1(X):=ia+b,\epsilon_2(X):=ia-b$, then the weights of the defining representation are as claimed and $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ have the desired expressions. For future reference, we note that $\mathfrak{c}_\emptyset$ is mapped to $\mathfrak{c}_I$ by $\Ad(g)$ with \[ g = \mqty(0 & 0 & 0 & 0 & -i\\0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\0 & i & 0 & 0 & 0\\i & 0 & 0 & 0 & 0) \] written as a $(1+(d-2)+1+1+1)\times(1+1+(d-2)+1+1)$ block matrix.
\item[$I=\{0,1\},\{0,2\},\{1,2\}$] A generic element $X$ of $\mathfrak{c}_{\{0,1\}}$ looks like \[ X = aF_{0,d} + b F_{1,d+1} = \mqty( 0 & 0 & 0 & -a & 0\\ 0 & 0 & 0 & 0 & -b\\ 0 & 0 & 0 & 0 & 0\\ -a & 0 & 0 & 0 & 0\\ 0 & -b & 0 & 0 & 0 ) \] (written as a $1+1+(d-2)+1+1$-block matrix), which has eigenvalues $\pm a, \pm b, 0$ with multiplicities $1,1,1,1,d-2$, respectively. Defining $\epsilon_1(X):=a+b,\epsilon_2(X):=a-b$, the weights of the defining representation are as claimed and $x^{\frac{\epsilon_1\pm\epsilon_2}{2}}$ have the desired expressions. For future reference, we note that $\mathfrak{c}_\emptyset$ is mapped to $\mathfrak{c}_I$ by $\Ad(g)$ with \[ g = \mqty(1 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & -i\\0 & 0 & 1 & 0 & 0\\ 0 & i & 0 & 0 & 0\\0 & 0 & 0 & 1 & 0) \] written as a $1+1+(d-2)+1+1$ block matrix. \end{description} Since the adjoint representation on $\mathfrak{g}_\CC$ is the second exterior power of the defining representation, all roots in $\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ can be expressed using the $\epsilon_1,\epsilon_2$ we defined (in the way claimed). As in the proof of Proposition~\ref{sec:prop-euclidean-root-spaces}, we note that due to the expressions of $x^\alpha$ we arrived at, $x^{\alpha}\ne x^{-\alpha}$ ($\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$) is exactly equivalent to none of $\chi_1,\chi_2,\frac{\chi_1\pm\chi_2}{2}$ being elements of $\pi i\ZZ$, which is equivalent to $(\chi_1,\chi_2)\in D$ (and hence in $Y_I$ instead of its closure). \end{proof} Having established the root systems for the different Cartan subsets, we can now work out the corresponding root spaces.
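The eigenvalue claims in the case distinction above, as well as the claim that the displayed $g$ for $I=\{0,1\}$ conjugates $\mathfrak{c}_\emptyset$ into $\mathfrak{c}_{I,\CC}$, can be spot-checked numerically. The script below is illustrative only (arbitrary test values $d,a,b$; the $g$-check is done for $d=3$, where the middle block is one-dimensional):

```python
import numpy as np

def spec_sorted(M):
    e = np.linalg.eigvals(np.asarray(M, dtype=complex))
    return sorted(e, key=lambda z: (round(z.real, 8), round(z.imag, 8)))

def has_spectrum(M, expected):
    x = sorted(map(complex, expected),
               key=lambda z: (round(z.real, 8), round(z.imag, 8)))
    return np.allclose(spec_sorted(M), x)

d, a, b = 6, 1.1, 0.4
zeros = [0.0] * (d - 2)

# I = {0}: claimed eigenvalues b +- ia, -b +- ia, 0^{d-2}
X = np.zeros((d + 2, d + 2))
X[0, 1], X[1, 0], X[d, d+1], X[d+1, d] = a, -a, -a, a
X[0, d], X[d, 0], X[1, d+1], X[d+1, 1] = -b, -b, b, b
assert has_spectrum(X, [b + 1j*a, b - 1j*a, -b + 1j*a, -b - 1j*a] + zeros)

# I = {1}: claimed eigenvalues +-ia, +-b, 0^{d-2}
Y = np.zeros((d + 2, d + 2))
Y[0, 1], Y[1, 0], Y[2, d+1], Y[d+1, 2] = a, -a, -b, -b
assert has_spectrum(Y, [1j*a, -1j*a, b, -b] + zeros)

# I = {0,1}: claimed eigenvalues +-a, +-b, 0^{d-2}
Z = np.zeros((d + 2, d + 2))
Z[0, d], Z[d, 0], Z[1, d+1], Z[d+1, 1] = -a, -a, -b, -b
assert has_spectrum(Z, [a, -a, b, -b] + zeros)

# For d = 3: the displayed g for I = {0,1} conjugates
# c_empty = span{F_{0,1}, F_{3,4}} into span{F_{0,3}, F_{1,4}}.
g = np.array([[1, 0, 0, 0, 0],
              [0, 0, 0, 0, -1j],
              [0, 0, 1, 0, 0],
              [0, 1j, 0, 0, 0],
              [0, 0, 0, 1, 0]])
E = lambda i, j: np.eye(5)[:, [i]] @ np.eye(5)[[j], :]
F01, F34 = E(0, 1) - E(1, 0), -E(3, 4) + E(4, 3)   # entries as displayed
F03, F14 = -(E(0, 3) + E(3, 0)), -(E(1, 4) + E(4, 1))
gi = np.linalg.inv(g)
assert np.allclose(g @ F01 @ gi, 1j * F03)
assert np.allclose(g @ F34 @ gi, 1j * F14)
```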
\begin{proposition} The weight spaces of $\mathfrak{g}$ with respect to $\mathfrak{c}_I$ are spanned by \begin{description} \item[$I=\emptyset$] \begin{align*} 0 :& F_{i,j}\qquad (i,j=2,\dots,d-1)\\ \pm\frac{\epsilon_1+\epsilon_2}{2} : & F_{0,i} \pm i F_{1,i}\qquad (i=2,\dots,d-1)\\ \pm\frac{\epsilon_1-\epsilon_2}{2} : & F_{i,d} \mp i F_{i,d+1}\qquad (i=2,\dots,d-1)\\ \pm\epsilon_1 : & F_{0,d} + F_{1,d+1} \pm i(-F_{0,d+1} + F_{1,d})\\ \pm\epsilon_2 : & F_{0,d} - F_{1,d+1} \pm i(F_{0,d+1} + F_{1,d}). \end{align*} \item[$I=\{0\},\{2\}$] ($\{2\}$ with $\epsilon_1,\epsilon_2$ exchanged) \begin{align*} 0 :& F_{i,j}\qquad (i,j=2,\dots,d-1)\\ \pm\frac{\epsilon_1+\epsilon_2}{2}: & F_{0,i} - i F_{i,d+1} \pm i (F_{1,i} - i F_{i,d})\qquad (i=2,\dots,d-1)\\ \pm\frac{\epsilon_1-\epsilon_2}{2}: & F_{1,i} + iF_{i,d} \mp i(F_{0,i} + i F_{i,d+1})\qquad (i=2,\dots,d-1)\\ \pm\epsilon_1 : &F_{0,d} + F_{1,d+1} \pm i(F_{0,d+1} + F_{1,d})\\ \pm\epsilon_2 :& F_{0,d+1} + F_{1,d} \pm (F_{0,1} - F_{d,d+1}). \end{align*} \item[$I=\{1\}$] \begin{align*} 0 :& F_{i,j}\qquad (i,j=3,\dots,d)\\ \pm\frac{\epsilon_1+\epsilon_2}{2}: & F_{0,i}\pm i F_{1,i}\qquad (i=3,\dots,d)\\ \pm\frac{\epsilon_1-\epsilon_2}{2}: & F_{i,d+1} \pm F_{2,i}\qquad (i=3,\dots,d)\\ \pm\epsilon_1: &F_{0,d+1} - iF_{1,2} \pm (-F_{0,2}+iF_{1,d+1})\\ \pm\epsilon_2: &F_{0,d+1} + i F_{1,2} \pm (F_{0,2} + i F_{1,d+1}). \end{align*} \item[$I=\{1\}'$] \begin{align*} 0:& F_{i,j}\qquad (i,j=1,\dots,d-2)\\ \pm\frac{\epsilon_1+\epsilon_2}{2}: &F_{i,d} \mp i F_{i,d+1}\qquad (i=1,\dots,d-2)\\ \pm\frac{\epsilon_1-\epsilon_2}{2}: &F_{0,i} \pm F_{i,d-1}\qquad (i=1,\dots,d-2)\\ \pm\epsilon_1: &F_{0,d} \mp F_{d-1,d} + i F_{d-1,d+1} \mp i F_{0,d+1}\\ \pm\epsilon_2: &F_{0,d} \pm F_{d-1,d} - i F_{d-1,d+1} \mp i F_{0,d+1}.
\end{align*} \item[$I=\{0,1\},\{0,2\},\{1,2\}$] \begin{align*} 0:& F_{i,j}\qquad (i,j=2,\dots,d-1)\\ \pm\frac{\epsilon_1+\epsilon_2}{2}: &F_{0,i} \pm F_{i,d}\qquad (i=2,\dots,d-1)\\ \pm\frac{\epsilon_1-\epsilon_2}{2}: &F_{1,i} \pm F_{i,d+1}\qquad (i=2,\dots,d-1)\\ \pm\epsilon_1: &F_{0,1}+F_{d,d+1}\mp (F_{0,d+1} - F_{1,d})\\ \pm\epsilon_2: &F_{0,1}-F_{d,d+1}\pm (F_{0,d+1} + F_{1,d}). \end{align*} \end{description} \end{proposition} \begin{proof} We shall use the weight spaces of $\mathfrak{c}_\emptyset$ from the proof of Proposition~\ref{sec:prop-root-systems} and apply $\Ad(g_I)$ for the $g_I\in O(p+1,q+1;\CC)$ that were found in that same proof, which satisfy $\Ad(g_I)(\mathfrak{c}_{\emptyset, \CC})=\mathfrak{c}_{I, \CC}$. An argument similar to the proof of Proposition~\ref{sec:prop-operator-A} can be employed to show that $\Ad(g_I)(\mathfrak{g}_\alpha)=\mathfrak{g}_{\Ad^*(g_I)(\alpha)}$ for $\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}_\emptyset)$. \end{proof} With these weight space decompositions in hand, we can now ascertain that the Cartan subsets $C_{\{2\}},C_{\{0,2\}},C_{\{1,2\}}$ indeed satisfy the technical condition of Section~\ref{sec:general-decomposition} and that we can even choose $\epsilon^I$ to extend to group homomorphisms of the root lattice (so that $\alpha\mapsto x^\alpha$ also extends to a group homomorphism). \begin{lemma}\label{sec:lem-remaining-cartan-subsets-nice} For $I=\{2\},\{0,2\},\{1,2\}$, we can write \[ \Ad(t_I)|_{\mathfrak{g}_\alpha} = \epsilon^I_\alpha \phi_I \] for $\alpha\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ such that $C_I$ satisfies the condition at the beginning of Section~\ref{sec:general-decomposition}. Furthermore, we can choose $\epsilon^I$ such that for $\alpha,\beta,\alpha+\beta\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ we have $\epsilon^I_\alpha\epsilon^I_\beta = \epsilon^I_{\alpha+\beta}$.
In particular, we can choose \begin{align*} \epsilon^{\{2\}}_{\frac{\epsilon_1\pm\epsilon_2}{2}} &= \mp 1\\ \epsilon^{\{0,2\}}_{\frac{\epsilon_1\pm\epsilon_2}{2}} &= \pm i\\ \epsilon^{\{1,2\}}_{\frac{\epsilon_1\pm\epsilon_2}{2}} &= 1. \end{align*} \end{lemma} \begin{proof} We begin with the case of $I=\{0,2\}$ or $\{1,2\}$. We construct $B_\sigma$-orthonormal bases of the weight spaces as is done in the proof of Theorem~\ref{sec:thm-casimir-decomposition}: \begin{align*} E_{\pm\frac{\epsilon_1+\epsilon_2}{2},i} &:= \frac{1}{2\sqrt{\eta_{i,i}}}(\pm F_{0,i} + F_{i,d})\\ E_{\pm\frac{\epsilon_1-\epsilon_2}{2},i} &:= \frac{1}{2i\sqrt{\eta_{i,i}}} (F_{1,i} \pm F_{i,d+1})\\ E_{\pm\epsilon_1} &= \frac{1}{2\sqrt{2}}(\mp(F_{0,1}+F_{d,d+1}) + F_{0,d+1} - F_{1,d})\\ E_{\pm\epsilon_2} &= \frac{1}{2\sqrt{2}} (\pm(F_{0,1}-F_{d,d+1})+F_{0,d+1} + F_{1,d}). \end{align*} These bases satisfy $\sigma(E_{\alpha,i})=E_{-\alpha,i}$. For $I=\{1,2\}$, the action of $\Ad(t_I)$ on these basis elements is as follows: \begin{align*} E_{\pm\frac{\epsilon_1+\epsilon_2}{2}, i} &\mapsto - E_{\mp\frac{\epsilon_1+\epsilon_2}{2},i}\\ E_{\pm\frac{\epsilon_1-\epsilon_2}{2},i} &\mapsto E_{\mp\frac{\epsilon_1-\epsilon_2}{2},i}\\ E_{\pm\epsilon_1}&\mapsto -E_{\mp\epsilon_1}\\ E_{\pm\epsilon_2}&\mapsto -E_{\mp\epsilon_2}. \end{align*} Having $\phi_{\{1,2\}}$ permute the basis (and be the identity map on $\mathfrak{m}_{I,\CC}$), we can write this action in the desired way such that $\epsilon^{\{1,2\}}$ has the claimed form. For $I=\{0,2\}$, the action of $\Ad(t_I)$ on these basis elements is as follows: \begin{align*} \Ad(t)(E_{\pm\frac{\epsilon_1+\epsilon_2}{2},i}) &= \mp i E_{\mp\frac{\epsilon_1-\epsilon_2}{2},i}\\ \Ad(t)(E_{\pm\frac{\epsilon_1-\epsilon_2}{2},i}) &= \pm i E_{\mp\frac{\epsilon_1+\epsilon_2}{2},i}\\ \Ad(t)(E_{\pm\epsilon_1}) &= -E_{\mp\epsilon_1}\\ \Ad(t)(E_{\pm\epsilon_2}) &= E_{\pm\epsilon_2}.
\end{align*} Having $\phi_{\{0,2\}}$ map \[ E_{\pm\frac{\epsilon_1+\epsilon_2}{2},i} \mapsto -E_{\mp\frac{\epsilon_1-\epsilon_2}{2},i},\qquad E_{\pm\frac{\epsilon_1-\epsilon_2}{2},i} \mapsto -E_{\mp\frac{\epsilon_1+\epsilon_2}{2},i},\qquad E_{\pm\epsilon_1}\mapsto -E_{\mp\epsilon_1},\qquad E_{\pm\epsilon_2}\mapsto -E_{\pm \epsilon_2}, \] and be the identity on $\mathfrak{m}_{I,\CC}$, we obtain a decomposition of $\Ad(t_I)$ with $\epsilon^I$ having the desired form. For $I=\{2\}$ we note that $t_I^2=1$, so that $\Ad(t_I)$ is a $B$-orthogonal involution. Furthermore, we have \[ \sigma\circ\Ad(t_I)\circ\sigma=\Ad(\sigma(t_I))=\Ad(t_I^{-1})=\Ad(t_I), \] so that $\Ad(t)$ also commutes with $\sigma$. Consequently, we can choose $\epsilon^I_\alpha:=1$ for all roots $\alpha$ and $\phi_I:=\Ad(t_I)$. \end{proof} \begin{corollary}\label{sec:cor-correspondence-x-powers-e-function} For $I=\{2\},\{0,2\}$ and $x\in C_I$ parametrised as in Lemma~\ref{sec:lem-cartan-chi} we have \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \exp(\frac{\chi_1\pm\chi_2}{2}). \] In all other cases we have \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \mp \exp(\frac{\chi_1\pm\chi_2}{2}). \] In every case, the function \[ \Sigma(\mathfrak{g}:\mathfrak{c}_I)\ni \alpha\mapsto x^\alpha \] can be extended to a group homomorphism $\ZZ\Sigma\to\CC^\times$. Furthermore, we always have \begin{align*} \coth_{\frac{\epsilon_1\pm\epsilon_2}{2}}(x) &= \coth(\frac{\chi_1\pm\chi_2}{2})\\ \coth_{\epsilon_{1/2}}(x) &= \coth(\chi_{1/2}). \end{align*} \end{corollary} \begin{proof} Follows from Proposition~\ref{sec:prop-root-systems} and Lemma~\ref{sec:lem-remaining-cartan-subsets-nice}. 
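The sign-independence asserted for $\coth_\gamma$ in the corollary above can be checked directly, assuming the convention $\coth_\gamma(x) = (x^\gamma+x^{-\gamma})/(x^\gamma-x^{-\gamma})$ (our reading of the notation; the point is only that a common sign $s=\pm1$ in $x^{\pm\gamma}$ cancels):

```python
import numpy as np

# If x^gamma = s * exp(t) and x^{-gamma} = s * exp(-t) with s = +-1,
# the quotient below is coth(t) regardless of the sign s.
for s in (1.0, -1.0):
    for t in np.linspace(0.1, 2.0, 8):
        quot = (s*np.exp(t) + s*np.exp(-t)) / (s*np.exp(t) - s*np.exp(-t))
        assert np.isclose(quot, 1/np.tanh(t))
```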
\end{proof} \begin{proposition}\label{sec:prop-aalpha} The operators $A_\alpha$ look as follows: \begin{description} \item[$I=\emptyset$] \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= \sum_{i=2}^{d-1} F_{1,i}\otimes F^{1,i},\qquad A_{\frac{\epsilon_1-\epsilon_2}{2}} = \sum_{i=2}^{d-1} F_{i,d}\otimes F^{i,d},\\ A_{\epsilon_{1/2}} &= \frac{1}{2}(F_{0,d+1} \mp F_{1,d})\otimes(F^{0,d+1} \mp F^{1,d}) \end{align*} \item[$I=\{0\},\{2\}$] (for $I=\{2\}$ we exchange $\epsilon_1,\epsilon_2$) \begin{align*} A_{\frac{\epsilon_1\pm \epsilon_2}{2}} &= \frac{1}{2} \sum_{i=2}^{d-1} (F_{1,i}\mp i F_{i,d})\otimes(F^{1,i} \pm i F^{i,d})\\ A_{\epsilon_{1/2}} &= \frac{1}{2} (F_{0,d+1}\mp F_{1,d})\otimes (F^{0,d+1}\mp F^{1,d}) \end{align*} \item[$I=\{1\}$] \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= \sum_{i=3}^d F_{1,i}\otimes F^{1,i}\qquad A_{\frac{\epsilon_1-\epsilon_2}{2}} = \sum_{i=3}^d F_{2,i}\otimes F^{2,i}\\ A_{\epsilon_{1/2}} &= \frac{1}{2}(F_{0,d+1}\mp iF_{1,2})\otimes(F^{0,d+1}\pm iF^{1,2}) \end{align*} \item[$I=\{1\}'$] \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= \sum_{i=1}^{d-2} F_{i,d}\otimes F^{i,d}\qquad A_{\frac{\epsilon_1-\epsilon_2}{2}} = \sum_{i=1}^{d-2} F_{i,d-1}\otimes F^{i,d-1}\\ A_{\epsilon_{1/2}} &= \frac{1}{2}(F_{0,d+1}-iF_{d-1,d})\otimes( F^{0,d+1}+iF^{d-1,d}) \end{align*} \item[$I=\{0,1\},\{0,2\},\{1,2\}$] \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= \sum_{i=2}^{d-1} F_{i,d}\otimes F^{i,d}\qquad A_{\frac{\epsilon_1-\epsilon_2}{2}} = \sum_{i=2}^{d-1} F_{1,i}\otimes F^{1,i}\\ A_{\epsilon_{1/2}} &= \frac{1}{2}(F_{0,d+1}\mp F_{1,d})\otimes (F^{0,d+1}\mp F^{1,d}).
\end{align*} \end{description} \end{proposition} \begin{corollary}\label{sec:cor-lorentzian-scalar-as} Let $W$ be scalar as in Corollary~\ref{sec:cor-euclidean-scalar-as}, then \[ \pi_\Le(m(A_\gamma))=-\frac{\alpha^2}{2},\qquad \pi(A_\gamma) = -\frac{\alpha\beta}{2},\qquad \pi_\Ri(m(A_\gamma)) = -\frac{\beta^2}{2} \] if $\gamma\in\{\pm\epsilon_1,\pm\epsilon_2\}$, and zero otherwise. \end{corollary} \begin{lemma}\label{sec:lem-lorentzian-cs} Let $W$ be a finite-dimensional $H$-bimodule, let $I$ be the index of a Cartan subset, and let $\Psi_I: E^W(\tilde{G},H)\to C^\infty(Y_I, W^{Z_{C_I}})$ be the map obtained by restricting to $C_I$ and then parametrising as described in Lemma~\ref{sec:lem-cartan-chi}. Let $C_{\epsilon_i}\in\mathfrak{c}_{I,\CC}$ be the dual element (with respect to $B$) to $\epsilon_i\in\mathfrak{c}_{I,\CC}^*$. Then \[ \Psi_I(\Ad(t_I^{-1})(C_{\epsilon_i})\cdot f)=\Psi_I(f\cdot C_{\epsilon_i}) = \partial_{\chi_i}\Psi_I(f) \] for $i=1,2$. \end{lemma} \begin{proof} The elements $C_{\epsilon_i}$ ($i=1,2$) are \begin{alignat*}{2} I=\emptyset:\qquad C_{\epsilon_1} &= \frac{F_{0,1}+F_{d,d+1}}{ 2i}, &C_{\epsilon_2} &=\frac{F_{0,1}-F_{d,d+1}}{2i}\\ I=\{0\}:\qquad C_{\epsilon_1} &= \frac{F_{0,1}+F_{d,d+1}}{2i} \quad &C_{\epsilon_2} &= \frac{F_{0,d}-F_{1,d+1}}{2}\\ I=\{2\}:\qquad C_{\epsilon_1} &= \frac{F_{0,d}-F_{1,d+1}}{2} & C_{\epsilon_2} &= \frac{F_{0,1}+F_{d,d+1}}{2i}\\ I=\{1\}:\qquad C_{\epsilon_1} &= \frac{F_{0,1}+iF_{2,d+1}}{2i} &C_{\epsilon_2} &=\frac{F_{0,1}-iF_{2,d+1}}{2i}\\ I=\{1\}':\qquad C_{\epsilon_1} &= \frac{F_{d,d+1}+iF_{0,d-1}}{2i}, &C_{\epsilon_2} &= \frac{F_{d,d+1}-iF_{0,d-1}}{2i}\\ I=\{0,1\},\{0,2\},\{1,2\}:\qquad C_{\epsilon_1} &= \frac{F_{0,d}+F_{1,d+1}}{2} , &C_{\epsilon_2} &= \frac{F_{0,d}-F_{1,d+1}}{2}. 
\end{alignat*} Consulting the parametrisations from Lemma~\ref{sec:lem-cartan-chi} and ignoring issues of complexification, we obtain \begin{align*} \Psi_I(t_I^{-1}\exp(sC_{\epsilon_1})t_I\cdot f)(\chi_1,\chi_2) = \Psi_I(f\cdot\exp(sC_{\epsilon_1}))(\chi_1,\chi_2) &= \Psi_I(f)(\chi_1+s,\chi_2)\\ \Psi_I(t_I^{-1}\exp(sC_{\epsilon_2})t_I\cdot f)(\chi_1,\chi_2) = \Psi_I(f\cdot\exp(sC_{\epsilon_2}))(\chi_1,\chi_2) &=\Psi_I(f)(\chi_1,\chi_2+s). \end{align*} Taking the derivative with respect to $s$ at $s=0$ yields the claimed equality. \end{proof} \subsection{Matrix Case} If we combine Lemmas~\ref{sec:lem-euclidean-cs}, \ref{sec:lem-lorentzian-cs}, \ref{sec:lem-cartan-chi-euclidean}, \ref{sec:lem-cartan-chi} and Theorem~\ref{sec:thm-casimir-decomposition}, we obtain that \begin{align} \Psi_I(f\cdot\Omega_{\mathfrak{g}}) &= \Psi_I(\Omega_{\mathfrak{g}}\cdot f) = L(k)\Psi_I(f) + \pi_\Le(\Omega_{\mathfrak{m}_I})\Psi_I(f)\nonumber\\ &\qquad + \sum_{\gamma\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)} \frac{\pi_\Le(m(A_\gamma)) + \pi_\Ri(m(A_{\Ad^*(t_I)(\gamma)})) + 2\pi(1\otimes\phi_I)(A_\gamma)}{4\sinh^2_\gamma}\Psi_I(f)\nonumber\\ &\qquad - \sum_{\gamma\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)} \frac{\pi(1\otimes\phi_I)(A_\gamma)}{4\sinh^2_{\gamma/2}}\label{eq:casimir-matrix-diff-op} \Psi_I(f), \end{align} where \[ L(k) = \sum_{i=1}^2 \partial_{\xi_i}^2 + \sum_{\alpha\in R^+} k_\alpha \frac{1+e^{-\alpha}}{1-e^{-\alpha}} \partial_\alpha \] is the Laplacian from \cite[Proposition~1.2.3]{heckmanSchlichtkrull} for the root system $R=2\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ of type $C_2$ and the multiplicities $k_{2\gamma} := \frac{n_\gamma}{2}$, with $\xi_1,\xi_2$ being an orthonormal basis of the underlying Euclidean space. In our case that Euclidean space is the real span of $\epsilon_1,\epsilon_2$.
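With the multiplicities $k_{2\gamma}=n_\gamma/2$ just defined (so $k_{\pm2\epsilon_{1/2}}=\frac12$ and $k_{\pm\epsilon_1\pm\epsilon_2}=\frac{d-2}{2}$), the half-sum $\rho(k)=\frac12\sum_{\alpha\in R^+}k_\alpha\alpha$ and its norm, which appear below, can be checked by exact arithmetic; here $\epsilon_1,\epsilon_2$ are taken as the standard orthonormal basis:

```python
from fractions import Fraction as F

def rho_k(d):
    # positive roots of R = 2*Sigma (type C2) with k_{2 gamma} = n_gamma / 2
    pos = {(2, 0): F(1, 2), (0, 2): F(1, 2),           # 2e1, 2e2: n = 1
           (1, 1): F(d - 2, 2), (1, -1): F(d - 2, 2)}  # e1 +- e2: n = d - 2
    r1 = sum(F(a) * k for (a, _), k in pos.items()) / 2
    r2 = sum(F(b) * k for (_, b), k in pos.items()) / 2
    return r1, r2

for d in range(3, 20):
    r1, r2 = rho_k(d)
    assert (r1, r2) == (F(d - 1, 2), F(1, 2))          # rho(k) = ((d-1)e1 + e2)/2
    assert r1**2 + r2**2 == F(d*d - 2*d + 2, 4)        # ||rho(k)||^2
```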
This equality holds not only on a formal level, where we associate our $x^\gamma$ with their $e^\gamma$, but also on the level of functions, since by Proposition~\ref{sec:prop-euclidean-root-spaces} and Corollary~\ref{sec:cor-correspondence-x-powers-e-function} the parametrisations from Lemmas~\ref{sec:lem-cartan-chi-euclidean} and \ref{sec:lem-cartan-chi} interact in such a way with our definitions of $\epsilon_1,\epsilon_2$ that \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \exp(\frac{\chi_1\pm\chi_2}{2}) \] up to signs, and these signs turn out not to play a role for $\coth_\gamma$ ($\gamma\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$). More concretely, the multiplicity vector is \[ k_{\pm 2\epsilon_{1/2}} = \frac{1}{2},\qquad k_{\pm\epsilon_1\pm\epsilon_2} = \frac{d-2}{2}. \] Later it will be useful to embed $2\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ into a larger root system $R$ of type $BC_2$, by adding $\pm\epsilon_{1/2}$ and setting their $k$ to zero. If we set \begin{align} K_{2\gamma} &:= \frac{\pi_\Le(m(A_\gamma)) + \pi_\Ri(m(A_{\Ad^*(t_I)(\gamma)})) + 2\pi(1\otimes\phi_I)(A_\gamma)}{4},\nonumber\\ L_{\gamma} &:= - \frac{\pi(1\otimes\phi_I)(A_\gamma)}{4} \in\End(W^{Z_{C_I}})\label{eq:matrix-radial-part-to-match} \end{align} for $\gamma\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ we can write the above as \begin{align*} \Psi_I(f\cdot\Omega_{\mathfrak{g}}) &= \Psi_I(\Omega_{\mathfrak{g}}\cdot f) = L(k)\Psi_I(f) + \pi_\Le(\Omega_{\mathfrak{m}_I})\Psi_I(f)\\ &\qquad + \sum_{\gamma\in \Sigma(\mathfrak{g}:\mathfrak{c}_I)} \qty(\frac{K_{2\gamma}}{\sinh^2_\gamma} + \frac{L_\gamma}{\sinh^2_{\gamma/2}})\Psi_I(f). \end{align*} We now show that this matches \cite[Equation~1.1]{Buric:2022ucg}. For this, we consider the case $I=\{1\}$ and $q=1$ and apply the permutation $(d+1,d,\dots,1)$ to our indices to match conventions.
In the notation of \cite[Section~1.1]{Buric:2022ucg} we then compute the matrices $K_{2\gamma},L_\gamma$: \begin{align*} K_{2\epsilon_{1/2}} &= -\frac{1}{8}\qty(D_{\mp}^{\prime2} + D_{\mp}^2 - 2D_\mp'D_\mp)\\ K_{\epsilon_1+\epsilon_2} &= \frac{1}{4}\qty(M_{2a}'M_{2a}' + M_{2a}M_{2a} + 2M_{2a}'M_{2a})\\ K_{\epsilon_1-\epsilon_2} &= \frac{1}{4}\qty(M_{3a}'M_{3a}' + M_{3a}M_{3a} + 2M_{3a}'M_{3a})\\ L_{\epsilon_{1/2}} &= \frac{D_{\mp}'D_\mp}{8}\\ L_{\frac{\epsilon_1+\epsilon_2}{2}} &= -\frac{M_{2a}'M_{2a}}{4}\\ L_{\frac{\epsilon_1-\epsilon_2}{2}} &= -\frac{M_{3a}'M_{3a}}{4}. \end{align*} In particular, if we choose a parametrisation by $t_1,t_2$ in such a way that \[ x^{\frac{\epsilon_1\pm\epsilon_2}{2}} = \exp(\pm t_{1/2}), \] the sum \[ \sum_{\gamma\in \Sigma(\mathfrak{g}:\mathfrak{c}_I)} \qty(\frac{K_{2\gamma}}{\sinh^2_\gamma} + \frac{L_\gamma}{\sinh^2_{\gamma/2}}) \] corresponds to the matrix-valued function \begin{align*} &-\frac{D_-^{\prime2} + D_-^2 - 2\cosh(t_1-t_2)D_-'D_-}{4\sinh^2(t_1-t_2)} - (-\leftrightarrow+)\\ &+\frac{M_{2a}'M_{2a}' + M_{2a}M_{2a} - 2\cosh(t_1)M_{2a}'M_{2a}}{2\sinh^2(t_1)}\\ &+\frac{M_{3a}'M_{3a}' + M_{3a}M_{3a} - 2\cosh(t_2)M_{3a}'M_{3a}}{2\sinh^2(t_2)}. \end{align*} Consequently, we have \begin{align} H^{\rho_l,\rho_r} &= 2\sum_{\gamma\in \Sigma(\mathfrak{g}:\mathfrak{c}_I)} \qty(\frac{K_{2\gamma}}{\sinh^2_\gamma} + \frac{L_\gamma}{\sinh^2_{\gamma/2}})\nonumber\\ &\quad + \partial_{t_1}^2 + \partial_{t_2}^2 -\frac{(d-2)(d-4)}{4}\qty(\csch^2(t_1) + \csch^2(t_2)) + \frac{1}{2}\qty(\csch^2(t_1+t_2) + \csch^2(t_1-t_2)) \nonumber\\ &\quad - \frac{d^2-2d+2}{2} -\frac{1}{2}L^{ab}L_{ab}.\label{eq:buric-partially-matched} \end{align} Note that $(L_{ab})_{a<b}$ and $\qty(\frac{1}{2}L^{ab})_{a<b}$ are dual bases of $\mathfrak{m}_I$, so that $\Omega_{\mathfrak{m}_I}$ is given by $-\frac{1}{4}L^{ab}L_{ab}$. Note furthermore that \[ \rho(k) = \frac{1}{2}((d-1)\epsilon_1+\epsilon_2), \] whence \[ \norm{\rho(k)}^2 = \frac{d^2-2d+2}{4}.
\] Note lastly that the middle line of \eqref{eq:buric-partially-matched} is recognisable as twice the Hamiltonian form (see e.g. \cite[Equation~2.1.9]{heckmanSchlichtkrull}) of the modified Laplacian $L(k)+\norm{\rho(k)}^2$. This shows that \begin{align*} H^{\rho_l,\rho_r} &= 2\sum_{\gamma\in \Sigma(\mathfrak{g}:\mathfrak{c}_I)} \qty(\frac{K_{2\gamma}}{\sinh^2_\gamma} + \frac{L_\gamma}{\sinh^2_{\gamma/2}})\\ &\quad + 2\delta(k)^{1/2}L(k)\delta(k)^{-1/2} + 2\pi_{\Le}(\Omega_{\mathfrak{m}_I})\\ &= 2\delta(k)^{1/2} \qty(L(k) + \pi_{\Le}(\Omega_{\mathfrak{m}_I}) + \sum_{\gamma\in \Sigma(\mathfrak{g}:\mathfrak{c}_I)} \qty(\frac{K_{2\gamma}}{\sinh^2_\gamma} + \frac{L_\gamma}{\sinh^2_{\gamma/2}}))\delta(k)^{-1/2}, \end{align*} which is indeed twice the Hamiltonian form of the operator from \eqref{eq:matrix-radial-part-to-match}. \subsection{Scalar Case}\label{sec:cb-scalar} We now assume that $W$ is a scalar bimodule as in Corollaries~\ref{sec:cor-lorentzian-scalar-as}, \ref{sec:cor-euclidean-scalar-as}: \[ \pi_\Le(F_{0,d+1})=\alpha,\quad \pi_\Ri(F_{0,d+1})=\beta \] and all elements of $M$ being mapped to 1. From Corollaries~\ref{sec:cor-euclidean-scalar-as}, \ref{sec:cor-lorentzian-scalar-as}, we find that \begin{align*} \pi_\Le(m(A_\gamma)) &= -\frac{\alpha^2}{2}\\ \pi_\Ri(m(A_{\Ad^*(t_I)(\gamma)})) &= -\frac{\beta^2}{2}\\ \pi(1\otimes\phi_I)(A_\gamma) &= -\frac{\alpha\beta}{2} \end{align*} for $\gamma$ a long root, i.e. $\pm\epsilon_1,\pm\epsilon_2$, and zero in all other cases, for the Euclidean and all $C_I$ (for $I\subseteq\{0,1\}$ with and without prime, and $I=\{1,2\}$) of the Lorentzian case. For the remaining two Lorentzian cases $I=\{2\},\{0,2\}$, the first two scalars are the same, but the third changes sign. Similarly, $\pi_\Le(\Omega_{\mathfrak{m}'})=0$.
We thus obtain \[ K_{2\gamma} = -\frac{(\alpha\pm\beta)^2}{8},\qquad L_\gamma = \pm\frac{\alpha\beta}{8} \] for $\gamma\in\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ a long root and 0 otherwise, with ``$-$'' if $I=\{2\}$ or $\{0,2\}$ and ``$+$'' otherwise. \begin{lemma} If we write $e^{\frac{\epsilon_i}{2}}$ for the function mapping $x$ to $\exp(\frac{\chi_i}{2})$, and we set \begin{align*} l_{\pm\epsilon_1}^2 = l_{\pm\epsilon_2}^2 &= -\alpha\beta\\ l_{\pm(\epsilon_1+\epsilon_2)}^2 = l_{\pm(\epsilon_1-\epsilon_2)}^2 &= 0\\ l_{\pm2\epsilon_1}^2 = l_{\pm 2\epsilon_2}^2 &= -\qty(\frac{\alpha-\beta}{2})^2, \end{align*} we obtain \[ \Psi_I(\Omega_{\mathfrak{g}}\cdot f)= \Psi_I(f\cdot\Omega_{\mathfrak{g}}) = \qty(L(k) + \sum_{\gamma\in R^+}\frac{l_\gamma^2 B^*(\gamma,\gamma)}{(e^{\gamma/2}-e^{-\gamma/2})^2})\Psi_I(f) \] for all $f\in E^W(\tilde{G},H)$, where $R$ is now a root system of type $BC_2$ containing $2\Sigma(\mathfrak{g}:\mathfrak{c}_I)$ as its unmultipliable roots. \end{lemma} \begin{proof} By Corollary~\ref{sec:cor-correspondence-x-powers-e-function} and previous observations as to what $\pi(1\otimes\phi_I)(A_\gamma)$ is for the scalar representation, this is already the case for $I=\{2\}$ and $\{0,2\}$. For the others we have \[ \sinh_{\frac{\epsilon_i}{2}}^2(x) = \frac{x^{\epsilon_i} + x^{-\epsilon_i} - 2}{4} = \frac{- \exp(\chi_i) - \exp(-\chi_i) - 2}{4} = -\cosh(\frac{\chi_i}{2})^2, \] so that the constant term reads \[ -\frac{(\alpha+\beta)^2}{4\sinh[2](\chi_1)} - \frac{\alpha\beta}{4\cosh[2](\frac{\chi_1}{2})} + 1\leftrightarrow 2. \] Note that $\csch[2](2x) = \frac{\csch[2](x)-\sech[2](x)}{4}$, so that we have \begin{align*} -\frac{(\alpha+\beta)^2}{4\sinh[2](\chi_i)} - \frac{\alpha\beta}{4\cosh[2](\frac{\chi_i}{2})} &= -\frac{(\alpha-\beta)^2}{4\sinh[2](\chi_i)} - \alpha\beta \csch[2](\chi_i) - \frac{\alpha\beta}{4\cosh[2](\frac{\chi_i}{2})}\\ &= -\frac{(\alpha-\beta)^2}{4\sinh[2](\chi_i)} - \frac{\alpha\beta}{4\sinh[2](\frac{\chi_i}{2})}.
\end{align*} Consequently, the claim also holds for the other choices of $I$. \end{proof} By \cite[Corollary~2.1.2]{heckmanSchlichtkrull}, we can absorb these last terms into $L(k)$ by conjugating as follows: \[ L(k) + \sum_{\gamma\in R^+} \frac{l_\gamma^2 B^*(\gamma,\gamma)}{\qty(e^{\gamma/2}-e^{-\gamma/2})^2} = \delta (L(m) + \norm{\rho(m)}^2 - \norm{\rho(k)}^2)\delta^{-1}, \] where \begin{align*} m_{\pm\epsilon_1} = m_{\pm\epsilon_2} &= \alpha\\ m_{\pm(\epsilon_1+\epsilon_2)} = m_{\pm(\epsilon_1-\epsilon_2)} &= \frac{d-2}{2}\\ m_{\pm2\epsilon_1} = m_{\pm2\epsilon_2} &= \frac{1-\alpha+\beta}{2}, \end{align*} where \[ \norm{\rho(k)}^2 = \frac{d^2-2d+2}{4},\qquad \norm{\rho(m)}^2 = \frac{(d+\beta-1)^2+(d+1)^2}{4}, \] and where \[ \delta = \prod_{\gamma\in R^+} \qty(e^{\gamma/2}-e^{-\gamma/2})^{(m-k)_\gamma}, \] which up to constants and in the parametrisation of Lemma~\ref{sec:lem-cartan-chi} equals \[ \cosh[\frac{-\alpha+\beta}{2}](\frac{\chi_1}{2}) \cosh[\frac{-\alpha+\beta}{2}](\frac{\chi_2}{2}) \sinh[\frac{\alpha+\beta}{2}](\frac{\chi_1}{2}) \sinh[\frac{\alpha+\beta}{2}](\frac{\chi_2}{2}). \] In conclusion, we obtain \begin{equation} \Psi_I(\Omega_{\mathfrak{g}}\cdot f)= \Psi_I(f\cdot\Omega_{\mathfrak{g}}) = \delta \qty(L(m) + \frac{\beta(\beta+d)}{2})\delta^{-1} \Psi_I(f), \end{equation} which is exactly the observation in \cite{superintegrability}, thus explaining the appearance of a $BC_2$ root system and of multiplicity vectors that contain both information on multiplicities and on the $H$-bimodule $W$. Notice that the classical theory of spherical functions for a non-compact Riemannian symmetric space from \cite[Chapter~5]{heckmanSchlichtkrull} would have led one to expect a $C_2=B_2$ root system with only multiplicity information.
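The hyperbolic identities used in the proof of the lemma above are elementary; as a purely numerical sanity check (illustrative only, not part of the proof), they can be verified at random sample points:

```python
# Numerical check of the two identities used in the scalar-case proof:
#   csch^2(2x) = (csch^2(x) - sech^2(x)) / 4
# and the rewriting of the constant term
#   -(a+b)^2/(4 sinh^2 x) - ab/(4 cosh^2(x/2))
#     = -(a-b)^2/(4 sinh^2 x) - ab/(4 sinh^2(x/2)).
import math
import random

def csch2(t):
    return 1.0 / math.sinh(t) ** 2

def sech2(t):
    return 1.0 / math.cosh(t) ** 2

random.seed(0)
for _ in range(100):
    x = random.uniform(0.1, 3.0)
    a = random.uniform(-2.0, 2.0)
    b = random.uniform(-2.0, 2.0)
    # double-argument identity for csch^2
    assert abs(csch2(2 * x) - (csch2(x) - sech2(x)) / 4) < 1e-9
    # constant-term rewriting
    lhs = -(a + b) ** 2 / 4 * csch2(x) - a * b / 4 * sech2(x / 2)
    rhs = -(a - b) ** 2 / 4 * csch2(x) - a * b / 4 * csch2(x / 2)
    assert abs(lhs - rhs) < 1e-9
print("identities verified at 100 random points")
```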
\subsection{Euclidean Spinor Case} For the simplest vector case we consider the case $p=3,q=0$, where $V_2,V_3$ are scalar representations and $V_1,V_4$ are the spin-$\frac{1}{2}$ representation of $\mathfrak{m}=\mathfrak{so}(3)$ (note that strictly speaking, this does not lift to a representation of $M$), given by \[ F_{1,2} \cong \frac{i}{2}\mqty(0 & 1\\1 & 0),\qquad F_{1,3} \cong \frac{1}{2}\mqty(0 & 1\\-1 & 0),\qquad F_{2,3} \cong \frac{i}{2}\mqty(1 & 0\\0 & -1). \] This allows us to eventually describe spherical functions on $\operatorname{Spin}(4,1)$ without having to work on that group. By picking our Weyl group generator $w$ as the diagonal matrix $\operatorname{diag}(-1,1,-1,1,1)$, we can ensure that the $\mathfrak{h}$-bimodule $W=V_1\otimes\cdots\otimes V_4$ from Theorem~\ref{sec:thm-injection-msf} becomes isomorphic to the $\mathfrak{m}$-bimodule $\End(V)$ (with $V$ being the spin-$\frac{1}{2}$ representation), where the generator $F_{0,4}$ of the scaling group acts as $\Delta_1-\Delta_2=: -2\alpha$ from the left and $\Delta_4-\Delta_3=: -2\beta$ from the right. Let us for simplicity choose $\mathfrak{c}$ to be generated by $F_{0,2}$ and $F_{3,4}$, instead of $F_{0,1},F_{3,4}$ (in other words, we are exchanging $1$ and $2$). From Corollary~\ref{sec:prop-euclidean-as} we then have \begin{align*} A_{\frac{\epsilon_1+\epsilon_2}{2}} &= F_{1,2}\otimes F_{1,2}\\ A_{\frac{\epsilon_1-\epsilon_2}{2}} &= F_{1,3}\otimes F_{1,3}\\ A_{\epsilon_{1/2}} &= -\frac{1}{2}(F_{0,4}\mp i F_{2,3})\otimes(F_{0,4}\mp i F_{2,3}). \end{align*} We see that $A_{\epsilon_{1/2}}$ acts by means of diagonal matrices, which simplifies computations.
\begin{proposition} In the above setting we have \begin{alignat*}{2} K_{\epsilon_1+\epsilon_2} &= -\frac{1}{8}\mqty(1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\0 & 1& 1& 0\\1 & 0 & 0 & 1)\qquad & K_{\epsilon_1-\epsilon_2} &= -\frac{1}{8}\mqty(1 & 0 & 0 & 1\\0 & 1 & -1 & 0\\0 & -1 & 1& 0\\1 & 0 & 0 & 1)\\ K_{2\epsilon_{1/2}} &= -\frac{1}{8} \operatorname{diag}\mqty((2\alpha+2\beta\pm1)^2\\(2\alpha+2\beta)^2\\(2\alpha+2\beta)^2\\(2\alpha+2\beta\mp1)^2)\\ L_{\frac{\epsilon_1+\epsilon_2}{2}} &= \frac{1}{16}\mqty(0 & 0 & 0 & 1\\0 & 0 & 1 & 0\\0 & 1 & 0 & 0\\1 & 0 & 0 & 0) & L_{\frac{\epsilon_1-\epsilon_2}{2}} &= \frac{1}{16}\mqty(0 & 0 & 0 & 1\\0 & 0 & -1 & 0\\0 & -1 & 0 & 0\\1 & 0 & 0 & 0)\\ L_{\epsilon_{1/2}} &= \frac{1}{8}\operatorname{diag}\mqty(\qty(2\alpha\pm\frac{1}{2})\qty(2\beta\pm\frac{1}{2})\\ \qty(2\alpha\pm\frac{1}{2})\qty(2\beta\mp\frac{1}{2})\\ \qty(2\alpha\mp\frac{1}{2})\qty(2\beta\pm\frac{1}{2})\\ \qty(2\alpha\mp\frac{1}{2})\qty(2\beta\mp\frac{1}{2})). \end{alignat*} \end{proposition} This expression can indeed be matched with the differential operator obtained in \cite[\S4.2]{BSI}. In order to do this, we need to apply the gauge transformation sketched in \cite[Equation~4.18]{BSI}, which corresponds to conjugation with the matrix \[ \frac{1}{\sqrt{2}}\mqty(1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\0 & -1 & 1& 0\\-1 & 0 & 0 & 1).
\] \begin{proposition} The gauge transformation produces the following matrices: \begin{align*} \widetilde{K}_{\epsilon_1+\epsilon_2} &= -\frac{1}{4}\operatorname{diag}\mqty(1\\1\\0\\0),\qquad\widetilde{K}_{\epsilon_1-\epsilon_2} = -\frac{1}{4}\operatorname{diag}\mqty(1\\0\\1\\0),\\ \widetilde{K}_{2\epsilon_{1/2}} &= -\frac{1}{2} \mqty((\alpha+\beta)^2+\frac{1}{4} & 0 & 0 & \mp(\alpha+\beta)\\ 0 & (\alpha+\beta)^2 & 0 & 0 \\ 0 & 0 & (\alpha+\beta)^2 & 0\\ \mp(\alpha+\beta)& 0 & 0 & (\alpha+\beta)^2+\frac{1}{4}),\\ \widetilde{L}_{\frac{\epsilon_1+\epsilon_2}{2}} &= \frac{1}{16}\operatorname{diag}\mqty(1\\1\\-1\\-1), \qquad \widetilde{L}_{\frac{\epsilon_1-\epsilon_2}{2}} = \frac{1}{16}\operatorname{diag}\mqty(1\\-1\\1\\-1),\\ \widetilde{L}_{\epsilon_{1/2}} &= \frac{1}{8}\mqty(4\alpha\beta + \frac{1}{4} & 0 & 0 & \mp(\alpha+\beta)\\ 0 & 4\alpha\beta - \frac{1}{4} & \pm(\alpha-\beta) & 0\\ 0 & \pm(\alpha-\beta) & 4\alpha\beta - \frac{1}{4} & 0\\ \mp(\alpha+\beta) & 0 & 0 & 4\alpha\beta + \frac{1}{4}). \end{align*} \end{proposition} Defining \[ V^{PT}_{(\alpha,\beta)}(\gamma) := \frac{(\alpha+\beta)^2+\frac{1}{4}}{\sinh^2_\gamma} - \frac{\alpha\beta}{\sinh^2_{\gamma/2}}, \] we discover that $R(\Omega_{\mathfrak{g}})$'s 0th term has a diagonal term that amounts to \[ -V^{PT}_{(\alpha,\beta)}(\epsilon_1) - V^{PT}_{(\alpha,\beta)}(\epsilon_2). 
\] The remaining terms on the diagonal read \[ \frac{1}{16}\mqty(\csch^2_{\frac{\epsilon_1}{2}} + \csch^2_{\frac{\epsilon_2}{2}} + 2\sech^2_{\frac{\epsilon_1+\epsilon_2}{4}} + 2\sech^2_{\frac{\epsilon_1-\epsilon_2}{4}}\\ -\sech^2_{\frac{\epsilon_1}{2}} -\sech^2_{\frac{\epsilon_2}{2}} + 2\sech^2_{\frac{\epsilon_1+\epsilon_2}{4}} - 2\csch^2_{\frac{\epsilon_1-\epsilon_2}{4}}\\ -\sech^2_{\frac{\epsilon_1}{2}} -\sech^2_{\frac{\epsilon_2}{2}} - 2\csch^2_{\frac{\epsilon_1+\epsilon_2}{4}} + 2\sech^2_{\frac{\epsilon_1-\epsilon_2}{4}}\\ \csch^2_{\frac{\epsilon_1}{2}} + \csch^2_{\frac{\epsilon_2}{2}} - 2\csch^2_{\frac{\epsilon_1+\epsilon_2}{4}} - 2\csch^2_{\frac{\epsilon_1-\epsilon_2}{4}}), \] and the off-diagonal ones (top to bottom): \[ \mqty(\frac{\alpha+\beta}{4}\qty(-\sech^2_{\frac{\epsilon_1}{2}} + \sech^2_{\frac{\epsilon_2}{2}})\\ \frac{\alpha-\beta}{4}\qty(\csch^2_{\frac{\epsilon_1}{2}}-\csch^2_{\frac{\epsilon_2}{2}})\\ \frac{\alpha-\beta}{4}\qty(\csch^2_{\frac{\epsilon_1}{2}}-\csch^2_{\frac{\epsilon_2}{2}})\\ \frac{\alpha+\beta}{4}\qty(-\sech^2_{\frac{\epsilon_1}{2}} + \sech^2_{\frac{\epsilon_2}{2}})). \] Up to the introduction of coordinates $u_1,u_2$ that associate concrete hyperbolic functions with our $\sinh_\gamma,\cosh_\gamma$ as follows: \begin{align*} f_{\frac{\epsilon_i}{2}} &\mapsto f\qty(\frac{u_i}{2})\qquad (i=1,2)\\ f_{\frac{\epsilon_1\pm \epsilon_2}{4}} &\mapsto f\qty(\frac{u_1\mp u_2}{2}), \end{align*} we thus obtain as 0th-order term almost the (negative of the) potential described in \cite[Equations~4.16--17]{BSI}. The difference is a scalar function (i.e. a multiple of the identity matrix) \[ \frac{\csch^2_{\epsilon_1} + \csch^2_{\epsilon_2}}{4} + \frac{\csch^2_{\frac{\epsilon_1+\epsilon_2}{2}} + \csch^2_{\frac{\epsilon_1-\epsilon_2}{2}}}{8}, \] which we, in line with the original reference \cite{BSI}, get from turning $L(k)$ into Hamiltonian form. In conclusion, we arrive exactly at the operator that was obtained in \cite[Section~4.2]{BSI}.
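The gauge transformation in this subsection is a purely finite-dimensional linear-algebra step, so it can be checked numerically; the following sketch (assuming, as is consistent with the matrices listed above, that the conjugation acts as $K\mapsto UKU^{T}$ with the orthogonal matrix $U$ given above) reproduces the diagonalised matrices $\widetilde{K}_{\epsilon_1\pm\epsilon_2}$ and $\widetilde{L}_{(\epsilon_1\pm\epsilon_2)/2}$:

```python
# Check that conjugation by the orthogonal gauge matrix U diagonalises the
# off-diagonal K- and L-matrices exactly as stated in the two propositions.
import numpy as np

U = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, -1, 1, 0],
              [-1, 0, 0, 1]]) / np.sqrt(2)

K_plus  = -np.array([[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]]) / 8
K_minus = -np.array([[1, 0, 0, 1], [0, 1, -1, 0], [0, -1, 1, 0], [1, 0, 0, 1]]) / 8
L_plus  =  np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]) / 16
L_minus =  np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]]) / 16

assert np.allclose(U @ U.T, np.eye(4))  # U is orthogonal, so U^T = U^{-1}
assert np.allclose(U @ K_plus  @ U.T, np.diag([-1.0, -1.0, 0.0, 0.0]) / 4)
assert np.allclose(U @ K_minus @ U.T, np.diag([-1.0, 0.0, -1.0, 0.0]) / 4)
assert np.allclose(U @ L_plus  @ U.T, np.diag([1.0, 1.0, -1.0, -1.0]) / 16)
assert np.allclose(U @ L_minus @ U.T, np.diag([1.0, -1.0, 1.0, -1.0]) / 16)
print("gauge transformation reproduces the stated diagonal matrices")
```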
\section{Blocks of Conformal Defects}\label{sec:defect} As is detailed in \cite[Chapter~3]{defect}, (scalar) conformal blocks for two defects of dimension $p$ in $d$ Euclidean spacetime dimensions can be described as zonal spherical functions for the pair $(G,H):=(SO(d+1,1)_0, SO(d-p)\times SO(p+1,1)_0)$. This pair is symmetric: if we pick the involution $\sigma$ to be conjugation with the diagonal matrix \[ \mqty(1_{d-p} & 0\\0 & -1_{p+2}), \] then $H=G^\sigma$. Furthermore, $(G,K,\theta,B)$ is a reductive Lie group, where $K=SO(d+1)$, the involution $\theta$ consists of conjugation with the diagonal matrix \[ \mqty(1_{d+1} & 0\\0 & -1), \] and $B$ is the trace form from the defining representation. Evidently, $\theta$ and $\sigma$ commute and $B$ is $\sigma$-invariant. Consequently, we can apply Matsuki's theory. \begin{proposition} A fundamental Cartan subset $C$ is given by $C=\exp(\mathfrak{c})$ where \[ \mathfrak{c} = \begin{cases} \operatorname{span}\{F_{i+1,d-i}\mid i=0,\dots,p\} \oplus\RR F_{0,d+1} & 2p < d-1\\ \operatorname{span}\{F_{i,d-i}\mid i=0,\dots,d-p-1\} & 2p \ge d-1 \end{cases}. \] Here, $N$, the rank of $\mathfrak{c}$, is given by $\min(p+2,d-p)$. \end{proposition} \begin{proof} For $p+1\ge d-p$, note that the claimed maximal commutative subalgebra $\mathfrak{c}=\mathfrak{t}$ indeed is a commutative subalgebra of $\mathfrak{k}^{-\sigma}$. The vector space $\mathfrak{k}^{-\sigma}$ is spanned by $F_{\mu,\nu}$ for $0\le\mu< d-p\le\nu\le d$. Let $X$ be an element of it, say \[ X = \sum_{\mu=0}^{d-p-1} \sum_{\nu=d-p}^d a_{\mu,\nu}F_{\mu,\nu}. \] We then have \begin{align*} \comm{F_{i,d-i}}{X} &= \sum_{\mu,\nu} a_{\mu,\nu} \qty(-\delta_{i,\mu}F_{d-\mu,\nu} + \delta_{d-i,\nu} F_{\mu,d-\nu})\\ &= -\sum_{\nu\ne d-i} a_{i,\nu} F_{d-i,\nu} + \sum_{\mu\ne i} a_{\mu,d-i} F_{\mu,i}. \end{align*} This is zero iff (by linear independence) $a_{i,\nu}=0$ ($\nu\ne d-i$) and $a_{\mu,d-i}=0$ ($\mu\ne i$).
If we want this to hold true for all $i=0,\dots,d-p-1$, we obtain for each possible first index $\mu=0,\dots,d-p-1$ that $a_{\mu,\nu}$ is only allowed to be nonzero for $\mu+\nu=d$. In other words, $X\in\mathfrak{t}$. This shows that $\mathfrak{t}$ is indeed maximally commutative. Furthermore, by a similar argument there is no element in \[ \mathfrak{p}^{-\sigma}=\operatorname{span}\{F_{i,d+1}\mid i=0,\dots,d-p-1\} \] that commutes with $\mathfrak{t}$, so that the algebra $\mathfrak{a}$ is trivial. We thus see that $\mathfrak{c}$ has rank $N = d-p$. For $p+1<d-p$, consider $\mathfrak{t}$, the compact part of the claimed algebra $\mathfrak{c}$. Let $X\in \mathfrak{k}^{-\sigma}$, say \[ X = \sum_{\mu=0}^{d-p-1} \sum_{\nu=d-p}^d a_{\mu,\nu} F_{\mu,\nu}. \] Then \begin{align*} \comm{F_{i+1,d-i}}{X} &= \sum_{\mu,\nu} a_{\mu,\nu} \qty(-\delta_{i+1,\mu}F_{d+1-\mu,\nu} + \delta_{d-i,\nu} F_{\mu,d+1-\nu})\\ &=- \sum_{\nu\ne d-i} a_{i+1,\nu} F_{d-i,\nu} + \sum_{\mu\ne i+1} a_{\mu,d-i} F_{\mu,i+1}. \end{align*} Due to linear independence, this is zero iff $a_{i+1,\nu}=0$ and $a_{\mu,d-i}=0$ (for $\nu\ne d-i, \mu\ne i+1$). Consequently, $X$ commutes with all of $\mathfrak{t}$ if these conditions hold for all $i=0,\dots,p$. This ensures that for all possible choices of the second index $\nu$, we can only have $a_{\mu,\nu}\ne0$ for $\mu+\nu=d+1$. Consequently, $X\in\mathfrak{t}$. The centraliser of $\mathfrak{t}$ in $\mathfrak{p}^{-\sigma}$ is spanned by $F_{0,d+1}$ and $F_{p+2,d+1},\dots,F_{d-p-1,d+1}$. Since no two linearly independent elements of this span commute, any maximal commutative subalgebra is therefore one-dimensional, and so we can choose $\RR F_{0,d+1}$ as indicated. Furthermore, the rank of $\mathfrak{c}$ is $p+2$, which is $\le d-p$, hence $N=p+2$. \end{proof} Starting from this choice of fundamental Cartan subset, we need to figure out what are the other standard Cartan subsets. 
\begin{proposition}\label{sec:prop-defect-cartan-subsets} For $2p<d-1$, there are no other standard Cartan subsets. For $2p\ge d-1$, the remaining standard Cartan subsets are of the shape \[ C_i := \exp(\mathfrak{c}_i),\qquad \mathfrak{c}_i:= \operatorname{span}\{F_{j,d-j}\mid 0\le j\le d-p-1,j\ne i\} \oplus \RR F_{i,d+1} \] ($i=0,\dots,d-p-1$) or $C'_i := C_i \exp(\pi F_{i,d-i})$. \end{proposition} \begin{proof} Since no two linearly independent elements of $\mathfrak{p}^{-\sigma}$ commute, any commutative subalgebra $\mathfrak{a}$ is at most one-dimensional. Consequently, there are no other choices of Cartan subset for $2p<d-1$. For $2p\ge d-1$, let $C'=\exp(\mathfrak{a}')T'$ be a standard Cartan subset different from the fundamental Cartan subset $C$. Then $\mathfrak{a}'$ is one-dimensional, say spanned by \[ X = \sum_{i=0}^{d-p-1} a_i F_{i,d+1}\in\mathfrak{p}^{-\sigma}. \] For any $t\in T'$ we then have $\Ad(t^{-1})(X)\in\mathfrak{g}^{-\sigma}$ as well. If we write \[ t = \exp(\sum_{i=0}^{d-p-1} \phi_i F_{i,d-i}), \] we have \begin{align*} \Ad(t^{-1})(X) &= \sum_{i=0}^{d-p-1} a_i\qty(\cos(\phi_i) F_{i,d+1} + \sin(\phi_i) F_{d-i,d+1}). \end{align*} This lies in $\mathfrak{g}^{-\sigma}$ iff $a_i \sin(\phi_i)=0$ for all $i=0,\dots,d-p-1$. Since $\dim(\mathfrak{t}')=d-p-1$, all but one of these equations have to be satisfied independently of $\phi_i$. In other words: there is an $i=0,\dots,d-p-1$ such that $a_i\ne0$ (and $a_j=0$ for $j\ne i$). This then also implies $\sin(\phi_i)=0$. Thus, $C'=C_i$ or $C'_i$, depending on whether $\phi_i\in 2\pi\ZZ$ or in $\pi + 2\pi \ZZ$. \end{proof} Note that by conjugating with an orthogonal block matrix permuting the indices $i\leftrightarrow j$ and $d-i\leftrightarrow d-j$, we can immediately see that $C_i$ and $C_j$ are conjugate, as are $C'_i$ and $C'_j$.
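The adjoint-action formula in the proof above can be illustrated with explicit matrices. The sketch below assumes one concrete realisation of the generators (an assumption on conventions: $F_{i,j}=E_{i,j}-E_{j,i}$ for compact directions and $F_{i,d+1}=E_{i,d+1}+E_{d+1,i}$ for the boosts, with $E_{a,b}$ the matrix units on $\R^{d+2}$); other sign conventions change the formula only by signs.

```python
# Illustration of Ad(t^{-1})(F_{i,d+1}) = cos(phi) F_{i,d+1} + sin(phi) F_{d-i,d+1}
# for t = exp(phi F_{i,d-i}), using explicit (d+2) x (d+2) matrices.
import numpy as np

d, i, phi = 5, 1, 0.7          # sample values; any d and 0 <= i < d/2 work
n = d + 2                      # matrices act on R^{d+2}, indices 0..d+1

def E(a, b):
    M = np.zeros((n, n))
    M[a, b] = 1.0
    return M

def F_boost(a):                # F_{a,d+1} in the chosen convention
    return E(a, d + 1) + E(d + 1, a)

# t = exp(phi * F_{i,d-i}) is a plane rotation in the (i, d-i)-plane;
# we build it in closed form instead of calling a matrix exponential.
j = d - i
t = np.eye(n)
t[i, i] = t[j, j] = np.cos(phi)
t[i, j] = np.sin(phi)
t[j, i] = -np.sin(phi)

ad = np.linalg.inv(t) @ F_boost(i) @ t     # Ad(t^{-1})(F_{i,d+1})
expected = np.cos(phi) * F_boost(i) + np.sin(phi) * F_boost(j)
assert np.allclose(ad, expected)
print("Ad(t^{-1}) F_{i,d+1} = cos(phi) F_{i,d+1} + sin(phi) F_{d-i,d+1}")
```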
\begin{proposition}\label{sec:prop-defect-root-systems} Both with respect to $\mathfrak{c}$ and $\mathfrak{c}_0$, $\mathfrak{g}$ has a reduced root system of type $B_N$ (or $D_N$ in case $2p=d-2$) with root multiplicities: \[ n_{\mathrm{short}} = \abs{d-2-2p}, \quad n_{\mathrm{long}} = 1. \] \end{proposition} \begin{proof} For $2p\ge d-1$, we have for $0\le i,j,k\le d-p-1$: \begin{align*} \comm{F_{i,d-i}}{F_{j,k}\pm i F_{d-j,k} + iF_{j,d-k} \mp F_{d-j,d-k}} &= (\pm i\delta_{i,j} + i\delta_{i,k})(\cdots)\\ \comm{F_{i,d-i}}{F_{j,k}\pm i F_{d-j,k} - iF_{j,d-k} \pm F_{d-j,d-k}} &= (\pm i\delta_{i,j} - i\delta_{i,k})(\cdots)\\ \comm{F_{i,d-i}}{F_{j,m}\pm i F_{d-j,m}} &= \pm i\delta_{i,j} (\cdots)\\ \comm{F_{i,d-i}}{F_{m,n}} &= 0 \end{align*} for $m,n=d-p,\dots,p,d+1$, which establishes that the reduced root system with respect to $\mathfrak{c}$ is of type $B_{d-p}$ and has root multiplicities $2p-d+2$ and $1$ for short and long roots, respectively. Note that the dimensions of the root spaces we found add up to $\dim(\mathfrak{g})$, so that we have indeed found all of them. Similarly, we have \begin{align*} \comm{F_{0,d+1}}{F_{0,m} \pm F_{m,d+1}} &= \pm (F_{0,m} \pm F_{m,d+1})\\ \comm{F_{0,d+1}}{F_{m,n}} &= 0 \end{align*} for $m,n=1,\dots,d$, which yields a root system of type $B_{d-p}$ with multiplicities $2p-d+2$ and $1$. 
Lastly, for $2p<d-1$ we have for $i=0,\dots,p$: \begin{align*} \comm{F_{i+1,d-i}}{F_{j+1,k+1}\pm i F_{d-j,k+1} + iF_{j+1,d-k} \mp F_{d-j,d-k}} &= (\pm i\delta_{i,j} + i\delta_{i,k})(\cdots)\\ \comm{F_{0,d+1}}{F_{j+1,k+1}\pm i F_{d-j,k+1} + iF_{j+1,d-k} \mp F_{d-j,d-k}} &= 0\\ \comm{F_{i+1,d-i}}{F_{j+1,k+1}\pm i F_{d-j,k+1} - iF_{j+1,d-k} \pm F_{d-j,d-k}} &= (\pm i\delta_{i,j} - i\delta_{i,k})(\cdots)\\ \comm{F_{0,d+1}}{F_{j+1,k+1}\pm i F_{d-j,k+1} - iF_{j+1,d-k} \pm F_{d-j,d-k}} &= 0\\ \comm{F_{i+1,d-i}}{F_{0,j+1}\pm i F_{0,d-j} + F_{j+1,d+1} \pm i F_{d-j,d+1}} &= \pm i\delta_{i,j} (\cdots)\\ \comm{F_{0,d+1}}{F_{0,j+1}\pm i F_{0,d-j} + F_{j+1,d+1} \pm i F_{d-j,d+1}} &= (\cdots)\\ \comm{F_{i+1,d-i}}{F_{0,j+1}\pm i F_{0,d-j} - F_{j+1,d+1} \mp i F_{d-j,d+1}} &= \pm i\delta_{i,j} (\cdots)\\ \comm{F_{0,d+1}}{F_{0,j+1}\pm i F_{0,d-j} - F_{j+1,d+1} \mp i F_{d-j,d+1}} &= -(\cdots)\\ \comm{F_{i+1,d-i}}{F_{j+1,m}\pm iF_{d-j,m}}&= -i\delta_{i,j}(\cdots)\\ \comm{F_{0,d+1}}{F_{j+1,m}\pm iF_{d-j,m}} &= 0\\ \comm{F_{i+1,d-i}}{F_{0,m}\pm F_{m,d+1}} &= 0\\ \comm{F_{0,d+1}}{F_{0,m}\pm F_{m,d+1}} &= \pm (\cdots)\\ \comm{F_{i+1,d-i}}{F_{m,n}} &= 0\\ \comm{F_{0,d+1}}{F_{m,n}} &= 0 \end{align*} where $m,n=p+2,\dots,d-1-p$. This indeed establishes that we are dealing here with the root system of type $B_{p+2}$ or $D_{p+2}$ (in case $d-2=2p$), with root multiplicities $d-2p-2$ and $1$. \end{proof} \begin{lemma}\label{sec:lem-defect-blocks-satisfy-technical-cond} All standard Cartan subsets satisfy the technical condition of Section~\ref{sec:general-decomposition}. \end{lemma} \begin{proof} For $2p<d-1$ and the cases of $C$ and $C_i$ for $2p\ge d-1$ there is nothing to show, as the Cartan subsets in question are subgroups and are therefore covered. It then remains to show that $C'_i$ for $2p\ge d-1$ satisfies the technical condition. This is a coset with ``inhomogeneity'' $t' = \exp(\pi F_{i,d-i})$.
The map $\Ad(t')$ leaves $\mathfrak{c}_i$ invariant and squares to the identity, since $(t')^2=1$, so that it commutes with $\sigma$. Therefore it satisfies the conditions we impose on $\phi$ in the decomposition $\Ad(t')|_{\mathfrak{g}_\alpha} = \epsilon_\alpha \phi$. \end{proof} \begin{theorem} The quadratic Casimir element acts on conformal blocks for $p$-dimensional defects in $d$-dimensional Euclidean spacetime as the operator $L(k)$ from \cite[Proposition~1.2.3]{heckmanSchlichtkrull} for a root system of type $B_N$ (or $D_N$) with multiplicities \[ k_{\text{short}} = \frac{\abs{d-2-2p}}{2},\qquad k_{\text{long}} = \frac{1}{2}, \] where $N=\min(p+2,d-p)$. This exactly matches what was obtained in \cite[Section~3]{defect} in the case $p=q$. \end{theorem} \begin{proof} By Proposition~\ref{sec:prop-defect-cartan-subsets} we need to consider $C$ (for $2p<d-1$) or $C,C_i,C_i'$ (for $2p\ge d-1$). By Lemma~\ref{sec:lem-defect-blocks-satisfy-technical-cond} we may apply Theorem~\ref{sec:thm-casimir-decomposition}. Since our $H$-bimodule is trivial, we ignore all elements of $U(\mathfrak{h})\otimes_{U(\mathfrak{m}')}U(\mathfrak{h})$, so that the differential operator $R^{\CC}(\Omega_{\mathfrak{g}})$ reduces to the Laplacian $L(k)$ for the root system $2\Sigma(\mathfrak{g}:\mathfrak{c}_I)$, where $I$ is either empty or a number in $0,\dots,d-p-1$. By Proposition~\ref{sec:prop-defect-root-systems}, this root system is of type $B_N$ with multiplicities $\abs{d-2-2p}$ and $1$, respectively (in case the short multiplicity is 0, it is of type $D_N$), which explains the choice of parameter $k$. \end{proof} \section{Conclusion and Outlook} In this paper we have taken a step towards developing a systematic theory of matrix-spherical functions for symmetric pairs $(G,H)$, where neither $G$ nor $H$ need be compact.
Namely, we used results from \cite{matsuki} to construct a group decomposition that generalises the usual $KAK$-decomposition known from the standard theory of non-compact reductive Lie groups. We then used this new decomposition to explore the corresponding radial part reductions, akin to the theory presented in \cite{CM}, and in particular to establish a matrix-valued, general symmetric pair analogue of \cite[Theorem~5.1.7]{heckmanSchlichtkrull}. Afterwards we applied this theory to some of the most basic examples arising in the study of conformal blocks in CFT. Firstly, we elaborated in detail on conformal blocks for 4-point functions, in particular on how the connection between the (quadratic) Casimir equation and the Calogero--Sutherland model \cite{superintegrability} can be appropriately derived and on what the Casimir equation looks like for general non-scalar cases (both Euclidean and Lorentzian), paying special attention to mathematical subtleties on the way. Secondly, we checked our results against two simple benchmarks: the matrix Calogero--Sutherland Hamiltonians corresponding to the spinorial Casimir equation for three-dimensional spinning blocks \cite{BSI} and to the conformal blocks for two $p$-dimensional scalar defects in Euclidean conformal field theory \cite{defect}. Highlighting the added value of this paper on the physics side, we would like to emphasise once more the need for a thorough understanding of the group decompositions and the corresponding radial part computations that were considered here.
By now, the general form of the correspondence between conformal blocks, harmonic analysis and integrable models \cite{harmony, BSI} is rather clear; it has been extended to various physically interesting settings, most importantly to the multipoint case in \cite{multipoint}, and has started to bring practical results, providing the actual input for the machinery of the numerical conformal bootstrap, especially in interesting not-yet-explored setups. However, if one ultimately strives for an exhaustive analysis of the mathematics behind (the kinematics of) higher-dimensional conformal theories, it is fair to say that various pieces of this correspondence are not yet up to the standards of mathematical rigour. As a matter of fact, perhaps at the risk of slightly overstating, one can regard this state of affairs as one of the crucial stumbling blocks in the way of properly using representation-theoretic tools to (analytically) address questions about the dynamics of higher-dimensional CFTs. It is precisely one such point that we have elaborated upon in the present paper: we clarified the specifics of the $KAK$ decompositions used in computations of radial parts of conformal Casimirs and made the latter fully explicit in the present context. The elucidation of this specific issue, along with the proper analysis of the Casimir reduction in the Lorentzian setting, should be considered the main added value of the present paper on the physics side. Moving on to the outlook of possible further directions, the most interesting immediate development of our findings would be a clarification of the connection of the present setup to the works of J.\ Stokman and N.\ Reshetikhin \cite{RS-2,RS-0, RS}, who studied generalised spherical functions associated to moduli spaces of principal connections on a finite graph.
As we reviewed earlier, the CFT four-point functions are closely related to matrix-spherical functions for the symmetric pair $(G, MA)$, which from the above point of view can thus be regarded as a correspondence between such generalised spherical functions related to two different types of graphs. It would be interesting to understand correspondences of this type better and, in particular, to explore their possible analogues for multipoint conformal blocks. We also expect that the theory we developed here can be used in the Reshetikhin--Stokman setting to lift the restriction to compact subgroups that plays a role in \cite{RS-2}. On a slightly smaller scale, let us conclude by listing several concrete further directions that could additionally be addressed in the context of conformal block computations via Matsuki decompositions: \begin{itemize} \item It would be interesting to get explicit formulas also for the radial parts of other generators of the algebra of invariant differential operators in various setups. Conceptually this does not seem to be much less straightforward than for the quadratic Casimir; however, the amount of necessary calculations grows quite fast with the order of the differential operator. For example, in the simplest case of the scalar 4-point blocks there is a well-known Casimir element of degree four, whose action has been calculated in \cite[Equation~4.14]{dolanOsbornFurther}. \item Another simple matrix-valued case that is feasible to treat in complete generality is $V_1=V,V_2=\CC,V_3=V^*,V_4=\CC$ for the defining representation $V$ of $SO(p,q)$. $V$ restricts to a sum of two trivial representations and one defining representation of $M'$, which means that $\End_{M'}(V)$ is five-dimensional. \item Our discussion of the blocks for conformal defects from \cite{defect} in the last section, of course, has also just scratched the surface.
Most interestingly, Matsuki's decomposition naturally allows one to consider different (symmetric) involutions on the two sides of a double coset, which thus fits perfectly into the framework of conformal blocks for defects of different codimension $p \neq q$. In particular, we expect that it should be possible to rigorously reproduce and extend the results of \cite[Section~3.3]{defect} using Matsuki's theory. Needless to say, all these constructions can and, perhaps, should also be extended to other signatures (Lorentzian) and to the spinning case. Some of these setups have already been analysed in the physics literature, with omissions similar to what we have addressed in this paper. \item Getting the radial part decompositions for the other types of non-compact symmetric spaces under control by using Matsuki's decompositions is, of course, also an interesting, more distant goal. Some of these further setups also bear a clear relevance for physics; e.g. the group case is known to be related to finite-temperature conformal blocks. \end{itemize} \section{Acknowledgements} PS would like to thank his PhD supervisors Erik Koelink and Maarten van Pruijssen for allowing him to pursue this side project. MI thanks Edward Berengoltz and Jasper Stokman for numerous discussions on the topics of this paper. The research of PS was funded by grant \texttt{OCENW.M20.108} of the Dutch Research Council NWO. \printbibliography \end{document}
2412.19714v2
http://arxiv.org/abs/2412.19714v2
Fractional nonlinear Schrödinger and Hartree equations in modulation spaces
\documentclass[12pt,a4paper,oneside,reqno]{amsart} \usepackage{fancyhdr} \usepackage{amssymb,mathrsfs,amsmath,amsthm} \usepackage[numbers,sort&compress]{natbib} \usepackage{authblk} \usepackage{url} \usepackage{setspace} \usepackage{mathtools} \usepackage{comment} \usepackage[inline]{enumitem} \usepackage[ colorlinks=true, linkcolor=blue, citecolor=blue, urlcolor=blue, pagebackref, pdfauthor={Divyang G. Bhimani, Diksha Dhingra, Vijay Kumar Sohani}, pdftitle={Fractional nonlinear Schr\"odinger and Hartree equations in modulation spaces}, pdfkeywords={fractional nonlinear Schr\"odinger equation, Hartree equation, global well-posedness, modulation spaces} ]{hyperref} \usepackage[margin=1in]{geometry} \vfuzz2pt \hfuzz2pt \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheoremstyle{boldremark} {10pt} {10pt} {} {} {\bfseries} {.} { } {} \theoremstyle{boldremark} \newtheorem{Remark}[thm]{\textbf{Remark}} \numberwithin{equation}{section} \newenvironment{mathclass} {Mathematics Subject Classification (2010):} {} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\suchtHat}{\ensuremath{\, \vert \,}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\LL}{ {\mathcal L} } \newcommand{\psk}{ {\psi_k^\alpha} } \newcommand{\eps}{\varepsilon} \newcommand{\To}{\longrightarrow} \newcommand{\BX}{\mathbf{B}(X)} \newcommand{\B}{\mathcal{B}} \renewcommand{\keywords}[1]{ \par\noindent Keywords: #1 \par } \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\Bea}{\begin{eqnarray*}} \newcommand{\Eea}{\end{eqnarray*}} \newcommand{\bt}{\begin{Theorem}} \newcommand{\et}{\end{Theorem}} \newcommand{\bpr}{\begin{Proposition}} \newcommand{\epr}{\end{Proposition}}
\newcommand{\bl}{\begin{Lemma}} \newcommand{\el}{\end{Lemma}} \newcommand{\bi}{\begin{itemize}} \newcommand{\ei}{\end{itemize}} \newtheorem{Definition}{Definition}[section] \newtheorem{Theorem}[Definition]{Theorem} \newtheorem{Lemma}[Definition]{Lemma} \newtheorem{Proposition}[Definition]{Proposition} \newtheorem{Corollary}[Definition]{Corollary} \newtheorem{claim}{Claim} \title{Fractional nonlinear Schr\"odinger and Hartree equations in modulation spaces} \author{Divyang G. Bhimani} \address{Divyang G. Bhimani\\Department of Mathematics\\Indian Institute of Science Education and Research\\ Pune 411008\\India} \email{[email protected]} \author{Diksha Dhingra} \address{Diksha Dhingra \\Department of Mathematics\\ Indian Institute of Technology\\ Indore 452020\\ India} \email{[email protected]} \author{Vijay Kumar Sohani} \address{Vijay Kumar Sohani\\ Department of Mathematics\\ Indian Institute of Technology\\ Indore 452020\\ India} \email{[email protected]} \pagestyle{fancy} \fancyhf{} \setlength{\headheight}{14.0pt} \setlength{\footskip}{14.0pt} \fancypagestyle{plain}{\fancyhf{} \renewcommand{\headrulewidth}{0pt}} \fancyfoot[C]{\thepage} \begin{document} \date{} \maketitle \begin{abstract} We establish global well-posedness for the mass-subcritical nonlinear fractional Schr\"odinger equation $$iu_t - (-\Delta)^\frac{\beta}{2} u+F(u)=0$$ with radial initial data in the modulation spaces $M^{p,\frac{p}{p-1}}(\mathbb R^n)$ for $n \geq 2$, $p>2$ and $p$ sufficiently close to $2.$ The nonlinearity $F(u)$ is either of power type, $F(u)=\pm (|u|^{\alpha}u)\; (0<\alpha<2\beta / n)$, or of Hartree type, $(|x|^{-\nu} \ast |u|^{2})u \; (0<\nu<\min\{\beta,n\}).$ Our order of dispersion $\beta$ lies in $(2n/ (2n-1), 2).$ \end{abstract} \footnotetext{\begin{mathclass} Primary 35Q55, 35Q60, 35R11, 42B37; Secondary 35A01.
\end{mathclass} \keywords{Bourgain's high-low frequency decomposition method, Fractional nonlinear Schr\"odinger equation, Hartree equation, Global well-posedness, modulation spaces} } \section{Introduction} We study the Cauchy problem for the fractional nonlinear Schr\"odinger equation (FNLS), namely, \begin{equation}\label{FNLS} \begin{cases} iu_t(x,t) - (-\Delta)^\frac{\beta}{2} u(x,t)+F(u)=0\\ u(x,0)=u_0(x) \end{cases}(x,t) \in \mathbb R^n \times \mathbb R, \end{equation} where $u(x,t) \in \mathbb C$ and $(-\Delta)^{\beta/2}$ denotes the fractional Laplacian with $\beta>0$. Here, the nonlinearity is either of power type, \begin{equation*}\label{Powertype} F(u)=\pm (|u|^{\alpha}u) \quad (0<\alpha<\frac{2\beta}{n}), \end{equation*} or of Hartree type, \begin{equation*}\label{Hartree} F(u)=(|\cdot|^{-\nu} \ast |u|^{2})u \quad (0<\nu< \min \{\beta,n\}), \end{equation*} where $\ast$ denotes convolution in $\R^{n}$. The model \eqref{FNLS} was introduced into the theory of fractional quantum mechanics by Laskin through an expansion of the Feynman path integral, transitioning from Brownian-like to L\'evy-like quantum mechanical paths \cite{Laskin1,Laskin2}. Equation \eqref{FNLS} with Hartree-type nonlinearity, also known as the boson star equation, arises in the mean-field limit of a large system of bosons, see \cite{Enno}. It is also referred to as the Schr\"odinger--Newton equation or the Choquard equation in other physical contexts \cite{Lieb}. The goal of this article is to establish local well-posedness (LWP) and global well-posedness (GWP) for \eqref{FNLS} with initial data in modulation spaces. In the early 1980s Feichtinger \cite{Feih83} introduced a class of Banach spaces, the so-called modulation spaces. We briefly recall them here; see \cite{KassoBook, wang2011harmonic} for a thorough introduction.
Let $\rho: \mathbb R^n \to [0,1]$ be a smooth function satisfying $\rho(\xi)= 1$ if $|\xi|_{\infty}\leq \frac{1}{2}$ and $\rho(\xi)= 0$ if $|\xi|_{\infty}\geq 1$, where $|\xi|_{\infty}=\max\{ | \xi_i | : \xi= (\xi_1,\dots, \xi_n)\}$. Let $\rho_k$ be the translation of $\rho$ given by $ \rho_k(\xi)= \rho(\xi -k) \ (k \in \mathbb Z^n)$, and denote $$\sigma_{k}(\xi)= \frac{\rho_{k}(\xi)}{\sum_{l\in\mathbb Z^{n}}\rho_{l}(\xi)}\quad (k \in \mathbb Z^n).$$ The frequency-uniform decomposition operators are defined by $$\square_k = \mathcal{F}^{-1} \sigma_k \mathcal{F} \quad (k \in \mathbb Z^n),$$ where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier and inverse Fourier transform, respectively. The weighted modulation spaces $M^{p,q}_s \ (1 \leq p,q \leq \infty, s \in \R)$ are defined as follows: \begin{equation*} M^{p,q}_s= M^{p,q}_s(\R^n)= \left\{ f \in \mathcal{S}'(\R^n): \left|\left|f\right|\right|_{M^{p,q}_s}= \left\| \|\square_kf\|_{L^p_x} (1+|k|)^{s} \right\|_{\ell^q_k}< \infty \right\} . \end{equation*} For $s=0,$ we write $M^{p,q}_0= M^{p,q}.$ For $p=q=2,$ modulation spaces coincide with Sobolev spaces, i.e. $M^{2,2}_s= H^s \ (s \in \R). $ For $p\in [1, \infty]$, we denote by $p'$ the H\"older conjugate, i.e. $\frac{1}{p}+\frac{1}{p'}=1.$ Denote by $X_{rad}$ the set of all radial functions in $X$. We are now ready to state our main results in the next two subsections. \subsection{Fractional nonlinear Schr\"odinger equations} We consider \eqref{FNLS} with power-type nonlinearity: \begin{equation}\label{FNLSP} \begin{cases} iu_t(x,t) - (-\Delta)^\frac{\beta}{2} u(x,t) \pm (|u|^{\alpha}u)(x,t)=0\\ u(x,0)=u_0(x) \end{cases}(x,t) \in \mathbb R^n \times \mathbb R.
\end{equation} \begin{thm}[Local well-posedness]\label{lwp} Let $0 < \alpha < \frac{2\beta}{n}$ and $ \frac{2n}{2n-1} < \beta < 2$ for $n \geq 2.$ Assume $u_{0}\in L^2_{rad}+M^{\alpha+2,(\alpha+2)'}_{rad}.$ Then there exists $T^*=T^*(\|u_{0}\|_{L^2+M^{\alpha+2,(\alpha+2)'}},n,\alpha,\beta)>0$ and a unique maximal solution $u$ of \eqref{FNLSP} such that \begin{equation*} u\in \big(C([0,T^*),L^2_{rad})\cap L^{\frac{2\beta(\alpha+2)}{n\alpha}}([0,T^*),L^{\alpha+2}) \big)+C([0,T^*),M^{\alpha+2,(\alpha+2)'}_{rad}). \end{equation*} \end{thm} The local solution established in Theorem \ref{lwp} can be extended to a global one under a certain restriction on the exponent $p$\footnote{Refer to Remark \ref{Whypso} for a comment on the restriction on $p$.}. To this end, we denote \begin{equation}\label{pmax}p_{\max} := \begin{cases} 2+\frac{2}{\alpha+1}-\frac{n\alpha}{\beta(\alpha+1)}\quad &\text{if} \ \alpha-\frac{n\alpha^2}{2\beta(\alpha+2)}-1+\frac{n\alpha}{2\beta}>0 \\ \alpha+2 \quad & \text{otherwise}. \end{cases} \end{equation} \begin{thm}[Global well-posedness]\label{gwp} Let $0 < \alpha < \frac{2\beta}{n}$ and $ \frac{2n}{2n-1} < \beta < 2$ for $n \geq 2.$ Assume that $u_{0}\in M^{p,p'}_{rad}$ for $p\in (2,p_{\max}).$ Then the local solution of \eqref{FNLSP} constructed in Theorem \ref{lwp} extends globally and lies in \begin{equation*} \big(C(\R,L^2_{rad}) \cap L^{\frac{2\beta(\alpha+2)}{n\alpha}}_{loc}(\R,L^{\alpha+2})\big)+ C(\R,M^{\alpha+2,(\alpha+2)'}_{rad}). \end{equation*} \end{thm} The global well-posedness for \eqref{FNLS} has been studied by many authors in Sobolev spaces, see Subsection \ref{pw} below. Modulation spaces have played a crucial role in the long-running investigation of NLS \eqref{FNLS} ($\beta=2$) near Sobolev scaling criticality for the past two decades.
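For concreteness, the case distinction defining $p_{\max}$ in \eqref{pmax} can be evaluated numerically. The following is a minimal illustrative sketch (the helper name \texttt{p\_max} is ours and not part of the analysis):

```python
def p_max(n, alpha, beta):
    # Branch condition from the case split defining p_max:
    # alpha - n*alpha^2/(2*beta*(alpha+2)) - 1 + n*alpha/(2*beta) > 0
    cond = alpha - n * alpha**2 / (2 * beta * (alpha + 2)) - 1 + n * alpha / (2 * beta)
    if cond > 0:
        return 2 + 2 / (alpha + 1) - n * alpha / (beta * (alpha + 1))
    return alpha + 2

# Example: n = 2, beta = 3/2, alpha = 1 satisfies 0 < alpha < 2*beta/n and
# 2n/(2n-1) < beta < 2; the first branch applies and p_max = 7/3.
print(p_max(2, 1.0, 1.5))
```

For small $\alpha$ the branch condition fails and the formula returns $\alpha+2$, e.g. $p_{\max}=2.1$ for $n=2$, $\beta=3/2$, $\alpha=0.1$.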
We refer to, among others, the survey article by Ruzhansky--Sugimoto--Wang \cite{RuzSurvey} and \cite{wang2011harmonic, Ruz4NLS, oh2018global, Kasso2009, KassoBook, Wang2006,Bhimani2016, BhimaniJFA2, BhimaniNorm, BhimaniHartree-Fock, Wang25,bhimani2023mixed}. Theorem \ref{gwp} is the first global well-posedness result for \eqref{FNLS} $(\beta \neq 2)$ for large data in modulation spaces. It is known that $M^{p,p'} \neq H^{s}=M^{2,2}_s$ for $p \neq 2, s\in \mathbb R.$ It is interesting to note that Theorem \ref{gwp} is applicable to initial data of infinite $L^2-$norm, and also to data from $L^p_s-$Sobolev spaces, in view of the following sharp embeddings for $p>2$: \begin{equation*}\label{se} L^p_s\hookrightarrow M^{p,p'} \hookrightarrow L^p \quad \text{for}\quad s > n\left(1-\frac{2}{p}\right) \end{equation*} and \begin{equation}\label{HsMpq} H^s \hookrightarrow M^{p, p'} \quad \text{for}\quad s> n\left(\frac{1}{2}-\frac{1}{p}\right). \end{equation} \subsection{Fractional Hartree equations} We now consider \eqref{FNLS} with Hartree-type nonlinearity: \begin{equation}\label{FNLSH} \begin{cases} iu_t(x,t) - (-\Delta)^\frac{\beta}{2} u(x,t) \pm (|\cdot|^{-\nu} \ast |u|^{2})u=0\\ u(x,0)=u_0(x) \end{cases}(x , t ) \in \mathbb R^n \times \mathbb R. \end{equation} \begin{thm}[Local well-posedness]\label{lwpHar}Let $0<\nu<\min \{ \beta,n\}$ and $ \frac{2n}{2n-1} < \beta < 2$ for $n \geq 2.$ Assume $u_{0}\in L^2_{rad}+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}.$ Then there exists $T^*=T^*(\|u_{0}\|_{L^2+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}},n,\nu,\beta)>0$ and a unique maximal solution $u$ of \eqref{FNLSH} such that \begin{equation*} u\in \big(C([0,T^*),L^2_{rad})\cap L^{4\beta / {\nu}}([0,T^*),L^{4n/(2n-\nu)}) \big)+C([0,T^*),M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}). \end{equation*} \end{thm} The local solution established in Theorem \ref{lwpHar} can be extended to a global solution.
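The exponent pairs appearing in Theorems \ref{lwp} and \ref{lwpHar} are $\beta$-fractional admissible in the sense of Section \ref{np}, i.e. they satisfy $\beta/q = n(1/2 - 1/r)$ with $q, r \geq 2$. A quick numerical sanity check of this bookkeeping (an illustrative sketch; the helper name is ours):

```python
def is_beta_admissible(q, r, n, beta, tol=1e-12):
    # A pair (q, r) with q, r >= 2 is beta-fractional admissible when
    # beta/q = n(1/2 - 1/r); the endpoint exclusion is ignored here.
    return q >= 2 and r >= 2 and abs(beta / q - n * (0.5 - 1.0 / r)) < tol

n, beta, alpha, nu = 2, 1.5, 1.0, 1.0   # sample parameters in the admitted ranges

# Power case: the pair (2*beta*(alpha+2)/(n*alpha), alpha+2) from the LWP theorem.
q_pow = 2 * beta * (alpha + 2) / (n * alpha)
print(is_beta_admissible(q_pow, alpha + 2, n, beta))                     # True

# Hartree case: the pair (4*beta/nu, 4n/(2n - nu)) from the Hartree LWP theorem.
print(is_beta_admissible(4 * beta / nu, 4 * n / (2 * n - nu), n, beta))  # True
```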
\begin{thm}[Global well-posedness]\label{gwpHar} Let $0<\nu<\min \{ \beta,n\}$ and $ \frac{2n}{2n-1} < \beta < 2$ for $n \geq 2.$ Assume that $u_{0}\in M^{s,s'}_{rad}$ for $s\in (2,s_{\max})$ where \begin{equation}\label{qmax}s_{\max} := \frac{2n(4\beta-\nu)}{n(4\beta-\nu)-\nu(\beta-\nu)}. \end{equation}Then the local solution of \eqref{FNLSH} constructed in Theorem \ref{lwpHar} extends globally and lies in \begin{equation*} \big(C(\R,L^2_{rad}) \cap L^{4\beta / {\nu}}_{loc}(\R,L^{4n/(2n-\nu)})\big)+ C(\R,M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}). \end{equation*} \end{thm} The study of the Cauchy problem for Hartree equations has a long history and has appeared in various physical settings, e.g., white dwarfs and many-particle physics \cite{Enno, Lieb}, \cite[Section 1.1]{BhimaniHartree-Fock}. In fact, it is widely studied with Cauchy data in $H^s$. Theorem \ref{gwpHar} complements this, see Subsection \ref{pw} below. We now pursue further discussion on Theorem \ref{gwp} and Theorem \ref{gwpHar} in the following subsection. \subsection{Prior work for NLS and Hartree equations}\label{pw} FNLS \eqref{FNLS} exhibits a variety of rich structures, a few of which we highlight here. Formally, the solutions of \eqref{FNLSP} and \eqref{FNLSH} enjoy conservation of mass: \begin{align} M[u(t)] &= \int_{\mathbb{R}^n} |u(x, t)|^2 \, dx = M[u_0]. \label{mass} \end{align} Both the fractional NLS \eqref{FNLSP} and the Hartree equation \eqref{FNLSH} exhibit scaling symmetry: \begin{itemize} \item If $u(x,t)$ is a solution of \eqref{FNLSP}, then for $\lambda>0$ the rescaled function $u_{\lambda}(x,t)= \lambda^{-\frac{\beta}{\alpha}} u ( \lambda^{-1} x,\lambda^{\beta} t)$ is also a solution, with initial data $u_{\lambda}(x,0)=\lambda^{-\frac{\beta}{\alpha}} u ( \lambda^{-1} x,0)$. The $\dot{H}^{s}-$norm is invariant under this scaling for $$s=s_{c}=\frac{n}{2}- \frac{\beta}{ \alpha},$$ the so-called critical Sobolev index for \eqref{FNLSP}.
\item If $u(x,t)$ solves \eqref{FNLSH}, then $u_{\lambda}$ defined as $u_{\lambda} (x, t) = \lambda^{\frac{n-\nu + \beta}{2}} u( \lambda x, \lambda^\beta t)$ also solves \eqref{FNLSH} with scaled initial data. The $\dot{H}^{s}-$norm is invariant under this scaling for $$s=s_{c}=\frac{\nu-\beta}{2},$$ the so-called critical Sobolev index for \eqref{FNLSH}. \end{itemize} The scaling provides heuristics indicating that \eqref{FNLSP} and \eqref{FNLSH} are well-posed in $H^s$ for $s>s_c$ (sub-critical) and ill-posed in $H^s$ for $s<s_c$ (super-critical). Now one may ask whether there exists a class of data outside $H^{s_c}$ for which the problem is still well-posed. Modulation spaces turned out to be a very useful tool for answering this question. In fact, $M^{2,1}_{s_c}\subset H^{s_c}$, $M^{2,1}_s\not\subset H^{s_c}$ for $s<s_c$; and $H^{s_c}\not\subset M^{p,p'}$ for $p>2$, $s_c<0$. See e.g. \cite[Remark 2.1]{BhimaniJDE2}, \cite{Ruz4NLS, Wang2006} and the references therein. We note that the modulation space norm is not invariant under the scaling symmetry\footnote{In \cite{BhimaniNorm, oh2018global}, the authors considered almost scaling criticality.}. However, Bhimani et al. \cite{BhimaniNorm, BhimaniJFA2} proved that \eqref{FNLSP} and \eqref{FNLSH} are ill-posed in $M^{p,q}_s$ for certain negative regularities. We briefly mention some known results (among others) for power-type and Hartree nonlinearities in the subcritical regime with initial data in either Sobolev or modulation spaces:\\ $-$\textbf{Classical case $\beta=2$:} \begin{enumerate} \item Tsutsumi \cite{YTsutsumi} proved that \eqref{FNLSP} is GWP in $L^{2}$ for $\alpha \in (0,\frac{4}{n})$. Zhao, Wang and Guo \cite{Wang2006} proved that \eqref{FNLSP} is LWP in $M^{2,1}$ for $\alpha \in 2 \mathbb N.$ Later, this was extended to $M_s^{p,1} \ (1\leq p \leq \infty, s\geq 0)$; see \cite{Kasso2009}. Wang and Hudzik \cite{wango} proved that the classical NLS \eqref{FNLSP} is GWP for small data in $M^{2,1}$.
Oh and Wang \cite{oh2018global} proved GWP for the 1D cubic NLS \eqref{FNLSP} in $M^{2,q}$ for $2\leq q< \infty.$ Later, this was extended in \cite{Klaus} to $X$, where $X=M^{p,q}$ for $1/q \geq 1/p, 1\leq p \leq 2$, or $M^{p,q}_s$ for $s>1/q', 1\leq p \leq 2$, or $M_1^{p,1}$ for $2\leq p< \infty.$ Chaichenets et al. \cite{LeonidIn,leonidthesis} established GWP for the $1$D cubic NLS \eqref{FNLSP} in $M^{p, p'}$ for $p$ sufficiently close to 2. \item In the early 1970s, Chadam and Glassey proved global existence for the classical Hartree equation in the energy space $H^1$ in their seminal work \cite{ChadamGlassey}. Since then many authors have studied the Cauchy problem for the Hartree equation, see e.g. the influential papers \cite{MiaoJPDE, MiaoJFA} by Miao, Xu, and Zhao. Carles \cite{Carles} proved that \eqref{FNLSH} is GWP in $L^2$ for $0<\nu< \min\{2, n\}.$ Moreover, it is GWP in $L^2\cap \mathcal{F}L^1$ for $0<\nu< \min\{2, n/2\}$, where $\mathcal{F}L^1=\{f \in \mathcal{S'}: \hat{f}\in L^1\}$ (the Wiener algebra). Later, Bhimani \cite[Theorem 1.1]{bhimani2016cauchy} proved that it is GWP in $M^{1,1}$; this was further extended by Manna \cite[Theorem 1.1]{manna2017modulation} to $M^{p,p}$ for $1 \leq p < \frac{2n}{n+\nu}.$ Moreover, Bhimani \cite{DGBJDE} extended the range of $p$ and $q$; specifically, it is GWP in $M^{p,q}$ with $1 \leq p \leq 2, 1 \leq q < \frac{2n}{n+\nu}.$ In \cite{RameshManna}, Manna established global existence for the $1$D classical Hartree equation with $0 <\nu <1$ in $M^{p,p'}(\mathbb R)$ for $p>2$ sufficiently close to $2$. We also mention that Kim, Lee and Seo \cite{DiracHartree} established LWP for Dirac equations with a Hartree-type nonlinearity in modulation spaces, while Bhimani, Grillakis and Okoudjou \cite{BhimaniHartree-Fock} treated the Hartree equation \eqref{FNLSH} in the presence of a harmonic potential.
\end{enumerate} $-$\textbf{Smaller dispersion case $0<\beta<2$:} \begin{enumerate} \item Guo and Huo \cite{Bguo} proved that the 1D cubic \eqref{FNLSP} is GWP in $L^2$ for $ \beta \in (1,2).$ Hong and Sire \cite{HongSire} established LWP for \eqref{FNLSP} with $\beta \in (0,2) \setminus \{1\}$ in $X$, where $X= H^s(\mathbb R)$ when $\alpha \geq 4, s>s_c$, or $H^s(\mathbb R^n)$ when $n\geq 2, \alpha \geq 2, s>s_c$. They also established small data GWP and scattering in some critical Sobolev spaces, specifically for $X= H^{s_c}(\mathbb R)$ when $\alpha > 4$ or $H^{s_c}(\mathbb R^2)$ when $\alpha>3.$ Cho et al. \cite{Chodcds} proved LWP for the 1D cubic \eqref{FNLSP} with $1<\beta<2$ in $H^s$ for $s \geq \frac{2-\beta}{4}$; see also \cite{dinh2016}. B\'enyi and Okoudjou \cite[Remark 1]{Kasso2009} proved LWP for \eqref{FNLSP} with $\beta \in [0,2]$ in $M^{p,1}_s$ $(s\geq 0, 1\leq p \leq \infty)$ for $\alpha \in 2 \mathbb N.$ \item Cho, Hajaiej, Hwang, and Ozawa proved GWP for \eqref{FNLSH} in $H^s$ for $s \geq \nu /2$ in their seminal work \cite{Cho}. Bhimani, Grillakis and Okoudjou \cite{BhimaniHartree-Fock} established GWP for \eqref{FNLSH} for radial initial data in $M^{p,q}$ $(1 \leq p \leq 2, 1 \leq q \leq \frac{2n}{n+\nu})$ for $0<\nu < \min \{\beta,\frac{n}{2}\}$ and $ \frac{2n}{2n-1} < \beta < 2$ with $n \geq 2.$ \end{enumerate} $-$\textbf{Larger dispersion case $\beta> 2$:} \begin{enumerate} \item Dinh \cite[Corollary 4]{dinh2016} proved GWP for \eqref{FNLSP} in $L^2$ for $\alpha \in (0,\frac{2\beta}{n})$. Seong \cite{SeongK} proved that the 1D cubic \eqref{FNLSP} is GWP in $H^{s}(\mathbb R)$ for $s \geq -1/2$ when $\beta =4$. Ruzhansky, Wang and Zhang \cite{Ruz4NLS} established small data GWP for the fourth-order $(\beta=4)$ \eqref{FNLSP} in some weighted modulation spaces. Kato \cite[Theorem 1.1]{T.Kato} established small data GWP in some modulation spaces. Lu \cite[Theorems 6 and 7]{LuBnls} proved LWP for the fourth-order \eqref{FNLSP} in $M_{s}^{p,2} \ ( 10/3 \leq p \leq 6, s > 7/2 -2/p )$.
\item For the well-posedness theory in the realm of generalized modulation spaces, the so-called $\alpha$-modulation spaces, we refer to \cite{alphaMS3} and the references therein. We also note that Chaichenets and Pattakos \cite{NLSHOAID21,NLSHOAID23} established global solutions to the NLS with higher-order anisotropic dispersion for small initial data in certain modulation spaces $M_{s}^{p,q}$. \end{enumerate} Taking the aforementioned results into account, we note that GWP for \eqref{FNLSP} and \eqref{FNLSH} $(\beta\neq 2)$ with large initial data in the modulation spaces $M^{p,p'} \ (p>2)$ remained open. Theorems \ref{gwp} and \ref{gwpHar} resolve this for large radial initial data and complement the well-posedness theory developed in Sobolev spaces. \subsection{Proof techniques and novelties} The main ingredients in the well-posedness theory developed in several papers for \eqref{FNLSP} in modulation spaces are the following: \begin{itemize} \item The Schr\"odinger propagator $e^{-it(-\Delta)^{\beta/2}} $ is bounded on \( M^{p,q} \) and further enjoys time decay estimates. See Proposition \ref{uf} and cf. \cite{FabioPAMS}. \item The space $M^{p,1}_{s}$ is an algebra under pointwise multiplication. See e.g. \cite{Kasso2009,FeiStudia}. \item In \cite{RSJFA,alphaMS3,LuBnls}, the authors used $U^{p}$-$V^{p}$ spaces taking values in modulation spaces as iteration spaces to establish LWP in $M^{2,q}_{s}$ for certain values of $s$ and $q.$ \end{itemize} These properties help to obtain LWP and small data GWP in modulation spaces, see e.g. \cite{Kasso2009, wango, Wang2006, Bhimani2016,RSJFA, NLSHOAID21,NLSHOAID23}. However, there is no useful conservation law in modulation spaces. This raises mathematical challenges in establishing GWP for large data. In this paper, we aim to address this for \eqref{FNLSP} and \eqref{FNLSH}. Our method of proof is inspired by Bourgain's high-low frequency decomposition method.
It was introduced by Bourgain \cite{Bourgain1999} to establish GWP for the NLS with power-type nonlinearity in $H^{s}$ for $s>\frac{3}{5}$. See \cite[Section 3.2]{KenigonBourgain}, \cite{vargas2001global} and \cite[Section 3.9]{TaoBook} for more details. The main idea is to break the initial data into a low-frequency/smoother component and a high-frequency/rougher component. The choice of the cutoff between the high- and low-frequency components depends on the regularity of the data. The low-frequency component has finite energy after the cutoff, hence its evolution exists globally. On the other hand, the nonlinear evolution of the difference equation for the high-frequency component has small energy on its interval of existence. Such smallness of the Duhamel term in the smoother energy space allows the iteration to continue by merging this smoother component with the evolution of the low-frequency component. Chaichenets et al. \cite{LeonidIn,leonidthesis} and Manna \cite{RameshManna} have successfully adapted this method for the classical NLS \eqref{FNLSP} $(\beta = 2)$ and the classical Hartree equation \eqref{FNLSH} in modulation spaces, respectively. In this paper, we extend this work to the case $ \frac{2n}{2n-1} < \beta < 2$ for $n \geq 2.$ We note that due to the non-locality and non-smoothness of the fractional dispersion operator $(-\Delta)^{\beta/2}$, the problem becomes significantly more challenging. Therefore, our analysis involves new technicalities, and we need to impose a radial assumption on the initial data. We briefly outline the strategy for our proof of Theorem \ref{gwp}.
\begin{enumerate} \item[-] We decompose the initial data into two parts by making use of the interpolation result formulated in Lemma \ref{ipt} for radial functions: \[u_0=\phi_0 + \psi_0 \in L^2_{rad} + M^{(\alpha+2), (\alpha +2)'}_{rad}\] such that $\|\phi_0\|_{L^2} \lesssim_{p} N^{\gamma}$ for some $\gamma>0$ and $\|\psi_0\|_{M^{(\alpha+2), (\alpha+2)'}} \lesssim_{p} N^{-1}.$ See \eqref{dp} and \eqref{asi}. \item[-] We have a local solution to \eqref{FNLS} in $L^2_{rad} + M^{(\alpha+2), (\alpha +2)'}_{rad}$ by Theorem \ref{lwp}. Also, we have global existence for \eqref{FNLS} with initial data in $L^{2}_{rad}$ due to mass conservation \eqref{mass}, see Theorem \ref{GWPL2Rad}. \item[-] To address global well-posedness, we construct a solution $u$ of \eqref{FNLS} of the form \[u= (v_{0} + w_{0}) + U_{\beta}(t) \psi_0.\] Here $v_{0}$ is the $L^{2}_{rad}-$global solution of \eqref{FNLS} with data $\phi_0$, see \eqref{gwpl2}, while $U_{\beta}(t)\psi_0 \in M^{(\alpha +2), (\alpha+2)'}$ for $t\in \R$ is the linear evolution of $\psi_0$ and $w_{0}$ is the nonlinear interaction term; see \eqref{w01} and \eqref{solnlocal}. \item[-] Since $\|\psi_0\|_{M^{(\alpha+2), (\alpha+2)'}}\lesssim_{p} N^{-1}$ can be made small, $v_{0} + w_{0}$ stays close to $v_{0}$ in $L^2_{rad}.$ Even though $M(v_{0} + w_{0})$ is not conserved, it grows slowly enough to ensure the existence of a global solution. See Subsection \ref{secgwp} for the proof of Theorem \ref{gwp}. \end{enumerate} \begin{Remark} The overall proof strategy for Theorem \ref{gwpHar} is similar to that of Theorem \ref{gwp}. However, we note that the technicalities are quite different due to the presence of the non-local nonlinearity $(|x|^{-\nu}\ast |u|^2)u$. \end{Remark} \begin{Remark} The sign $+1$ (resp. $ -1$) corresponds to the focusing (resp. defocusing) case. However, the sign is irrelevant in our analysis.
\end{Remark} \begin{Remark} The proofs of Theorems \ref{lwp}--\ref{gwpHar} rely on the radial assumption, which removes the problematic behavior seen in the non-radial case (such as the Knapp counterexample). The Strichartz estimates as stated in Proposition \ref{fst} hold without loss of derivatives only in the radial case. Guo et al. \cite{guo2014improved} presented the Knapp counterexample to show that the general non-radial Strichartz estimates incur a loss of derivatives for $1 <\beta < 2.$ Moreover, without the radial estimates, we may not have GWP in $L^2$ with the desired property (cf. Theorem \ref{GWPL2Rad} and \cite[Proposition 3.4]{DGBJDE}), which is essential for proving Theorems \ref{gwp} and \ref{gwpHar}. On the other hand, it would be interesting to investigate \eqref{FNLS} for the remaining dispersion exponents $\beta$ and non-radial initial data. \end{Remark} This article is organized as follows: In Section \ref{np}, we introduce the basic notation and tools that will be used throughout the paper. This section includes definitions of function spaces, some of their key properties, and relevant lemmas. Section \ref{s3} is dedicated to proving a global well-posedness result for \eqref{FNLSP} in $L^2_{rad}$, which will serve as a fundamental tool for obtaining global well-posedness results in the context of modulation spaces (Theorem \ref{gwp}). Section \ref{s4} presents the proofs of Theorems \ref{lwp} and \ref{gwp}. Finally, in Section \ref{s5}, we discuss the proofs of Theorems \ref{lwpHar} and \ref{gwpHar}.
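The high-low decomposition at the heart of the strategy outlined above can be illustrated with a toy computation. The following is a minimal sketch using a sharp Fourier cutoff on a periodic 1D grid; this is our own simplified stand-in for the interpolation-based splitting of Lemma \ref{ipt}, not the splitting actually used in the proofs:

```python
import numpy as np

def high_low_split(u0, N):
    """Toy high-low frequency splitting on a periodic 1D grid: phi0 keeps
    the Fourier modes with |xi| <= N (the low-frequency, L^2-controlled
    part), and psi0 = u0 - phi0 carries the high frequencies."""
    u0_hat = np.fft.fft(u0)
    xi = np.fft.fftfreq(len(u0), d=1.0 / len(u0))  # integer frequencies
    phi0 = np.fft.ifft(np.where(np.abs(xi) <= N, u0_hat, 0.0))
    return phi0, u0 - phi0

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u0 = np.exp(np.sin(x))                 # a smooth toy datum
phi0, psi0 = high_low_split(u0, N=8)
# By construction u0 = phi0 + psi0, and psi0 carries no modes with |xi| <= 8.
```

Raising the cutoff $N$ shrinks the high-frequency remainder, mirroring the role of the parameter $N$ in \eqref{asi}.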
\section{Preliminaries}\label{np} \subsection{Notations}\noindent The symbol $X \lesssim_{\alpha} Y$ means $X \leq CY$ for some positive constant $C$ depending on the parameter $\alpha,$ while $X \approx Y $ means $C^{-1}X\leq Y \leq CX$ for some constant $C>0.$ The norm of the space-time Lebesgue space $L^{q}([0,T],L^{r}(\R^{n}))$ is defined as $$\|u\|_{L^{q}_{T}L^{r}}:=\|u\|_{L^{q}([0,T],L^{r}(\R^{n}))}=\left(\int_{0}^{T} \|u(\cdot,t)\|_{L^{r}(\R^{n})}^{q} dt\right)^{\frac{1}{q}}.$$ We simply write $\|u\|_{L^{q}L^{r}}$ in place of $\|u\|_{L^{q}(\R,L^{r}(\R^{n}))}.$ \subsection{Key Ingredients} For $f\in \mathcal{S}(\mathbb R^{n})$ and $\beta >0,$ we define the \textbf{fractional Schr\"odinger propagator $e^{-it(-\Delta)^{\beta/2}}$} for $t \in \mathbb R$ as follows: \begin{eqnarray} \label{sg}\label{-1} [U_{\beta}(t)f](x)=\left[e^{-it (-\Delta)^{\beta/2}}f\right](x):= \int_{\mathbb R^n} e^{-(2\pi)^\beta i |\xi|^{\beta}t}\, \widehat{f}(\xi) \, e^{2\pi i \xi \cdot x} \, d\xi. \end{eqnarray} \begin{Definition} A pair $(q,r)$ is a \textbf{$\beta$-fractional admissible pair} if $q\geq 2, r\geq 2$ and $$\frac{\beta}{q} = n \left( \frac{1}{2} - \frac{1}{r} \right), \quad (q,r,n)\neq(\infty,2,2).$$ \end{Definition} The set of all such admissible pairs is denoted by $$\mathcal{A}_{\beta}= \{(q,r):(q,r) \; \text{is a $\beta$-fractional admissible pair} \}.$$ \begin{prop}[Strichartz estimates for the fractional Schr\"odinger equation] \label{fst}Denote $$D(F)(x,t):= U_{\beta}(t)u_{0}(x) + i\int_0^t U_\beta(t-s)F(x,s) ds.$$ \begin{enumerate} \item \label{fst1} \cite{KeelTao1998} Let $u_{0} \in L^2,$ $n\in \mathbb N$ and $\beta=2.$ Then for any time interval $I\ni0$ and $2$-fractional admissible pairs $(q_j,r_j)$, $j=1,2,$ we have $$ \|D(F)\|_{L^{q_1}(I,L^{r_1})} \lesssim_{n,r_1,r_2} \|u_{0} \|_{L^2}+ \|F\|_{L^{q'_2}(I,L^{r'_2})}, \quad\forall F \in L^{q_2'} (I, L^{r_2'}(\R^n))$$ where $q_j'$ and $ r_j'$ are H\"older conjugates of $q_j$ and $r_j$ respectively.
\item \label{fst2} \cite[Corollary 3.10]{guo2014improved} Let $u_{0} \in L_{rad}^2$, $n\geq 2,$ and $\frac{2n}{2n-1} < \beta < 2.$ Then for any time interval $I\ni0$ and $(q_j, r_j) \in \mathcal{A}_{\beta}$, $j=1,2,$ we have $$ \|D(F)\|_{L^{q_1}(I,L^{r_1})} \lesssim_{n,\beta,r_1, r_2} \|u_{0}\|_{L^2}+ \|F\|_{L^{q'_2}(I,L^{r'_2})}, \quad \forall F \in L^{q'_2}(I,L_{rad}^{r'_2}(\R^n))$$ where $q_j'$ and $ r_j'$ are H\"older conjugates of $q_j$ and $r_j$ respectively. \end{enumerate} \end{prop} \begin{lem}[see e.g. Lemma 3.9 in {\cite{leonidthesis}} ] \label{eg} Denote \begin{equation*} G(u,v,w)=|u+v|^{\alpha}(u+v)-|u+w|^{\alpha}(u+w). \end{equation*} Then, for $\alpha>0$ and $u,v,w\in \C$, we have $$|G(u,v,w)| \lesssim_{\alpha} (|u|^{\alpha}+|v|^{\alpha}+|w|^{\alpha})|v-w|. $$ \end{lem} \begin{lem}[Hardy--Littlewood--Sobolev inequality]\label{HLS} Assume that $0<\nu <n$ and $1<p<q<\infty$ with $$\frac{1}{p}+\frac{\nu}{n}-1=\frac{1}{q}.$$ Then the map $f \to |\cdot|^{-\nu} \ast f$ is bounded from $L^p(\R^n )$ to $L^q(\R^n )$, with $$ \||\cdot|^{-\nu} \ast f\|_{L^q} \lesssim_{n,\nu,p} \|f\|_{L^p}.$$ \end{lem} Consider the identity for $\nu>0:$ \begin{equation}\label{haridentity} (|\cdot|^{-\nu} \ast |u_1|^{2})u_1 - (|\cdot|^{-\nu} \ast |u_2|^{2})u_2 = (|\cdot|^{-\nu} \ast |u_1|^{2})(u_1 - u_2) + (|\cdot|^{-\nu} \ast (|u_1|^2 - |u_2|^2))u_2. \end{equation} Define \begin{equation}\label{TildeG} \tilde{G}(v,w_{1},w_{2})=(|\cdot|^{-\nu}\ast|v+w_{1}|^{2})(v+w_{1})-(|\cdot|^{-\nu}\ast|v+w_{2}|^{2})(v+w_{2}). \end{equation} \begin{lem}[Basic properties, \cite{KassoBook,wang2011harmonic, kobayashi2011inclusion}] \label{srp} Let $p,q,p_{i},q_{i} \in [1,\infty] \ (i=0,1,2)$ and $s_1, s_2 \in \R.$ \begin{enumerate} \item $ \label{srp1} M_{s_1}^{p_{1},q_{1}}\hookrightarrow M_{s_2}^{p_{2},q_{2}}$ whenever $p_{1}\leq p_{2}, q_{1}\leq q_{2}, s_2 \leq s_1.$ \item \label{srp2} $ M_{s_1}^{p,q_{1}}\hookrightarrow M_{s_2}^{p,q_{2}}$ whenever $ q_{2}<q_{1}, s_1-s_2 > \frac{n}{q_2}-\frac{n}{q_1}$.
\item\label{embedding} $M^{p,q_1}\hookrightarrow L^{p} \hookrightarrow M^{p,q_2} $ for $q_1 \leq \min \{p, p'\}$ and $q_2 \geq \max \{p, p'\}.$ \item \label{srpa}If $\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{0}}$ and $\frac{1}{q_{1}}+\frac{1}{q_{2}}=1+\frac{1}{q_{0}},$ then $\|f g\|_ {M^{p_{0},q_{0}}} \lesssim \|f\|_ {M^{p_{1},q_{1}}} \|g\|_ {M^{p_{2},q_{2}}}.$ \item Denote $$\tau (p,q)= \max \left\{ 0, n\left( \frac{1}{q}- \frac{1}{p}\right), n\left( \frac{1}{q}+ \frac{1}{p}-1\right) \right\}.$$Then $L^{p}_{s_1} \subset M^{p,q}_{s_2}$ if and only if one of the following conditions is satisfied: \begin{gather*} (i) \ q\geq p>1, s_1\geq s_2 + \tau(p,q); \quad (ii) \ p>q, s_1>s_2+ \tau(p,q);\\ (iii) \ p=1, q=\infty, s_1\geq s_2 + \tau(1, \infty); \quad (iv) \ p=1, q\neq \infty, s_1>s_2+\tau (1, q). \end{gather*} \item For the completely sharp embeddings between $M^{p,q}_{s_1}$ and $H^s,$ see \cite{KassoBook}. \end{enumerate} \end{lem} \begin{prop}[Boundedness of $U_{\beta}(t),$ \cite{chendcds,wango}]\label{uf} The following inequalities hold: \begin{enumerate} \item For $\frac{1}{2} < \beta \leq 2$ and $ 1 \leq p,q\leq\infty,$ we have $$ \|U_{\beta}(t)f \|_{M^{p,q}}\leq (1+|t|)^{n\left| \frac{1}{p}-\frac{1}{2} \right|} \|f\|_{M^{p,q}}.$$ \item For $\beta \geq 2 , 1\leq q\leq \infty$ and $ 2 \leq p\leq \infty,$ we have $$ \|U_{\beta}(t)f \|_{M^{p,q}}\leq (1+|t|)^{- \frac{2n}{\beta}\left( \frac{1}{2}-\frac{1}{p} \right)} \|f\|_{M^{p',q}}.$$ \end{enumerate} \end{prop} By invoking interpolation theory \cite[Theorem 6.1 (D)]{Feih83}, we may split the initial data with the desired property. More specifically, we have the following lemma, which will play a crucial role in proving our main results.
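Since the symbol $e^{-(2\pi)^{\beta} i|\xi|^{\beta}t}$ in \eqref{sg} is unimodular, $U_{\beta}(t)$ is unitary on $L^{2}$; this is what makes mass conservation \eqref{mass} available alongside the boundedness properties above. A discrete sketch on a periodic 1D grid (our own illustrative conventions, not the $n\geq 2$ radial setting of the paper):

```python
import numpy as np

def U_beta(t, f, beta):
    """Discrete sketch of e^{-it(-Delta)^{beta/2}} on a periodic 1D grid:
    a Fourier multiplier with the unimodular symbol exp(-i t |2 pi xi|^beta),
    so the discrete l^2 norm is preserved up to roundoff."""
    n = len(f)
    xi = np.fft.fftfreq(n, d=1.0 / n)                      # integer frequencies
    symbol = np.exp(-1j * t * np.abs(2 * np.pi * xi) ** beta)
    return np.fft.ifft(symbol * np.fft.fft(f))

rng = np.random.default_rng(0)
f = rng.standard_normal(128)
g = U_beta(0.7, f, beta=1.6)   # a beta inside the regime (2n/(2n-1), 2)
# norm(g) == norm(f) up to roundoff, and U_beta(s) o U_beta(t) = U_beta(s+t).
```

The group property $U_{\beta}(s)U_{\beta}(t)=U_{\beta}(s+t)$ holds because the symbols multiply.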
\begin{lem}[see Proposition 5.1 in {\cite{leonidthesis}}]\label{ipt} Let $r>2$ and $p\in (2, r).$ Then there exists a constant $C=C(n,r,p)$ such that for any $u\in M^{p,p'}$ and $N>0$ there exist $v \in L^{2}$ and $w \in M^{r,r'}$ satisfying \begin{gather*} \begin{cases} u= v + w\\ \|v\|_{L^{2}}\leq C \|u\| _{M^{p,p'} }N^{\gamma}\\ \|w\|_{M^{r,r'}}\leq C \|u\|_{M^{p,p'}}/N \end{cases}, \quad \text{where} \quad \gamma = \frac{\frac{1}{2} - \frac{1}{p}}{\frac{1}{p} - \frac{1}{r}}. \end{gather*} \end{lem} \section{Global well-posedness in $L^{2}_{rad}$}\label{s3} In this section, we recall \cite[Corollary 4.8]{guo2014improved} in Theorem \ref{GWPL2Rad} below for the sake of completeness. This result will be crucial in establishing local and global well-posedness in modulation spaces. \\ Denote \begin{align} \label{exponent1} \omega&:=1-\frac{n\alpha}{2\beta},\\ \label{kappa} \kappa&:=\frac{n\alpha}{2\beta(\alpha+2)},\\ \label{YT} Y(T) &:= L^{1 / \kappa}_{T}L^{\alpha+2}_{rad}. \end{align} \begin{lem}\label{lemlwp}Let $n \geq 2, \alpha \in (0,\frac{2\beta}{n}), \beta \in (\frac{2n}{2n-1}, 2)$ and $(q_{1},r_{1})\in \mathcal{A}_{\beta}$. Then, for any $T > 0$ and any $u,v,w \in Y(T)$, the estimate \begin{align*} \hspace{-0.5cm}\left|\left| \int_0^t U_\beta(t-s)G(u,v,w)(s) ds \right|\right|_{L^{q_{1}}_{T}L^{r_{1}}} \lesssim_{n,\alpha,\beta,r_{1}} T^{\omega}\|v-w\|_{Y(T)}\left(\|u\|_{Y(T)}^{\alpha} +\|v\|_{Y(T)}^{\alpha} + \|w\|_{Y(T)}^{\alpha}\right) \end{align*} holds.
\end{lem} \begin{proof} Using Proposition \ref{fst}\eqref{fst2}, we have \begin{align*} \left|\left| \int_0^t U_\beta(t-s)G(u,v,w)(s) ds \right|\right|_{L^{q_{1}}_{T}L^{r_{1}}} \lesssim_{n,\beta,r_{1},r_{2}} \left|\left| G(u,v,w) \right|\right|_{L^{q'_{2}}_{T}L^{r'_{2}}} \end{align*} for $(q_j, r_j) \in \mathcal{A}_{\beta}$, $j=1,2.$ Applying Lemma \ref{eg} and H\"older's inequality twice with $(q_{2},r_{2})=(1 / \kappa,\alpha+2),$ we have \begin{align*} \left|\left| G(u,v,w) \right|\right|_{L^{q'_{2}}_{T}L^{r'_{2}}} &\lesssim_{\alpha} \||u|^{\alpha}|v-w|\|_{L^{q'_{2}}_{T}L^{r'_{2}}} + \||v|^{\alpha}|v-w|\|_{L^{q'_{2}}_{T}L^{r'_{2}}} + \||w|^{\alpha}|v-w|\|_{L^{q'_{2}}_{T}L^{r'_{2}}}\\ &\leq T^{\omega}\|v-w\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}\left( \|u\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}^{\alpha} +\|v\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}^{\alpha}+ \|w\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}^{\alpha}\right). \end{align*} \end{proof} \begin{thm}[Global well-posedness in $L^{2}_{rad}$]\label{GWPL2Rad}Let $n \geq 2, 0 < \alpha < \frac{2\beta}{n}$ and $ \frac{2n}{2n-1} < \beta < 2.$ If $u_{0}\in L^{2}_{rad}$, then \eqref{FNLSP} has a unique global solution $$u\in C(\R,L^2_{rad}) ~\cap~ L^{1 / \kappa}_{loc}(\R,L^{\alpha+2}).$$ Moreover, $$\|u(t)\|_{L^{2}}=\|u_{0}\|_{L^{2}} \; \forall t\in \R $$ and $u \in L^{q}_{loc}(\R,L^{r})$ for all $(q,r) \in \mathcal{A}_{\beta}.$ \end{thm} \begin{proof} By Duhamel's principle, \eqref{FNLSP} is formulated as the integral equation \begin{eqnarray}\label{DP} u(x,t)=U_{\beta}(t)u_{0}(x) \pm i \int_0^t U_\beta(t-s)G(0,u,0)(s) ds =:\Lambda (u)(x,t). \end{eqnarray} Let $R$ and $T$ be positive real numbers to be chosen later.
Define $$B(R,T)=\{u\in X_{1}(T):\|u\|_{X_{1}(T)}\leq R\}$$ where $$X_1(T)=C_{T}L^2_{rad} ~\cap~ L^{1 / \kappa}_{T}L^{\alpha+2}$$ equipped with the norm $$\|v\|_{X_1(T)}=\|v\|_{L^{\infty}_{T}L^2} + \|v\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}.$$ Using Proposition \ref{fst}\eqref{fst2} with $r_{1} \in \{2,\alpha+2\},$ we have \begin{equation}\label{RBound} \|U_{\beta}(t)u_{0}\|_{X_1(T)} \lesssim_{n,\alpha,\beta} \|u_{0}\|_{L^2}. \end{equation} This suggests the choice $$R=2C(n,\alpha,\beta)\|u_{0}\|_{L^2}.$$ Using Lemma \ref{lemlwp} with $r_{1} \in \{2,\alpha+2\},$ we have \begin{align} \left|\left| \int_0^t U_\beta(t-s)G(0,u,0)(s) ds \right|\right|_{X_{1}(T)} \nonumber &\lesssim_{n,\alpha,\beta} T^{\omega} \|u\|_{Y(T)}^{\alpha+1}\nonumber\\ & \label{TBound}\leq T^{\omega} R^{\alpha+1} \leq R, \end{align} provided $u\in B(R,T)$ and $$T \lesssim_{n,\alpha,\beta} \|u_{0}\|_{L^{2}}^{-\alpha / \omega}.$$ This shows that $\Lambda(u) \in B(R,T)$. Similarly, one can show that $\Lambda$ is a contraction map from $B(R,T)$ to $B(R,T)$. Using the Banach contraction principle, there exists a unique solution $u$ to \eqref{FNLSP} on the time interval $[0,T]$, with $T$ depending on $\|u_{0}\|_{L^{2}}$. Since \eqref{FNLSP} enjoys conservation of mass \eqref{mass}, we can extend the solution globally. \par The last property follows from the Strichartz estimates applied with an arbitrary $\beta$-fractional admissible pair on the left-hand side and the same pairs as in \eqref{RBound} and \eqref{TBound} on the right-hand side. \end{proof} \section{Proof of Theorems \ref{lwp} and \ref{gwp}}\label{s4} \subsection{Local well-posedness in $L^2_{rad}+M^{\alpha+2,(\alpha+2)'}_{rad}$}\label{seclwp}In this subsection, we shall prove Theorem \ref{lwp}. First, we introduce some notation before proceeding with the main proof.
We work in the Banach space $X(T)$ expressed as \begin{equation} \begin{aligned}\label{X(T)} X(T)&:= X_{1}(T)+X_{2}(T) \end{aligned} \end{equation} where $$X_1(T):=C_{T}L^2_{rad} ~\cap~ L^{1 / \kappa}_{T}L^{\alpha+2}$$ equipped with the norm $$\|v\|_{X_1(T)}=\max \left\{\|v\|_{L^{\infty}_{T}L^2},\|v\|_{L^{1 / \kappa}_{T}L^{\alpha+2}}\right\} $$ and $$ X_2(T):=C_{T}M^{\alpha+2,(\alpha+2)'}_{rad}.$$\\ The norm on $X(T)$ is given by \begin{equation*} \|u\|_{X(T)} =\inf_{\substack{u=v+w \\ v \in X_1(T) \\ w \in X_2(T)}} \left(\|v\|_{X_1(T)} + \|w\|_{X_2(T)} \right). \end{equation*} \begin{Remark}\label{lemlwp2} For $T\leq 1$, by H\"older's inequality in the time variable and Lemma \ref{srp}\eqref{embedding}, we have $\|\cdot\|_{Y(T)} \lesssim_{n} \|\cdot\|_{X(T)}.$ \end{Remark} \begin{proof}[\textbf{Proof of Theorem \ref{lwp}}] Let $a$ and $T$ be positive real numbers (to be chosen later). Define $$B(a,T)=\{u\in X(T):\|u\|_{X(T)}\leq a\}.$$ We shall show that $\Lambda$ (as defined in \eqref{DP}) is a contraction mapping on $B(a, T).$ Assume w.l.o.g. that $T\leq1.$ Consider an arbitrary decomposition $u_{0}=v_{0}+w_{0}\in L^{2}_{rad}+ M^{\alpha+2,(\alpha+2)'}_{rad}$ with $v_{0}\in L^{2}_{rad}$ and $w_{0}\in M^{\alpha+2,(\alpha+2)'}_{rad}.$ For the linear evolution of $u_{0},$ using Proposition \ref{fst}\eqref{fst2} and Proposition \ref{uf}, we have \begin{align*} \|U_{\beta}(t)u_{0} \|_{X(T)}&\leq \|U_{\beta}(t)v_{0} \|_{X_{1}(T)}+\|U_{\beta}(t)w_{0} \|_{X_{2}(T)} \\ &\lesssim_{n,\alpha,\beta} \|v_{0}\|_{L^{2}}+(1+T)^{n\left( \frac{1}{2}-\frac{1}{\alpha+2} \right)} \|w_{0}\|_{M^{\alpha+2,(\alpha+2)'}}\\ &\lesssim_{n}\|v_{0}\|_{L^{2}}+\|w_{0}\|_{M^{\alpha+2,(\alpha+2)'}}.
\end{align*} Since the decomposition is arbitrary, it follows that $$\|U_{\beta}(t)u_{0} \|_{X(T)} \lesssim_{n,\alpha,\beta} \|u_{0}\|_{L^2+M^{\alpha+2,(\alpha+2)'}}.$$ This suggests the choice $a=2C(n,\alpha,\beta)\|u_{0}\|_{L^{2}+M^{\alpha+2,(\alpha+2)'}}.$ The integral part is estimated using $X_{1}(T)\hookrightarrow X(T),$ Lemma \ref{lemlwp} (with $r_{1} \in \{\alpha+2,2\}$) and Remark \ref{lemlwp2} as follows: \begin{align} \left|\left| \int_{0}^{t} U_{\beta}(t-\tau)(|u|^{\alpha}u)(\tau)d\tau \right|\right|_{X(T)} &\lesssim \left|\left| \int_{0}^{t} U_{\beta}(t-\tau)(|u|^{\alpha}u)(\tau)d\tau \right|\right|_{X_{1}(T)} \nonumber\\ &\lesssim_{n,\alpha,\beta} T^{\omega} \|u\|_{Y(T)}^{\alpha+1}\nonumber\\ &\lesssim_{n} T^{\omega} \|u\|_{X(T)}^{\alpha+1} \leq T^{\omega}a^{\alpha+1} \nonumber \end{align} provided $u\in B(a,T)$. Taking \begin{equation}\label{Tchoice} T :=\min \left\{ 1,~C(n,\alpha,\beta) \|u_{0}\|_{L^2+M^{\alpha+2,(\alpha+2)'}}^{-\alpha /\omega } \right\}, \end{equation} we conclude that $\Lambda(u)\in B(a, T).$ Similarly, one can show that $\Lambda$ is a contraction mapping. Using the Banach contraction mapping principle, we obtain a unique fixed point $u$ of $\Lambda,$ which is a solution to \eqref{FNLSP} on the time interval $[0,T]$. \end{proof} \subsection{Global well-posedness in $M^{p,p'}_{rad}$}\label{secgwp} In this subsection, we shall prove that, when $u_{0} \in M^{p,p'}_{rad},$ the local solution $u$ obtained in Theorem \ref{lwp} can be extended globally in time. Suppose to the contrary that the solution established in Theorem \ref{lwp} is not global in time.
Thus, we have the maximal time $T^*< \infty.$ In this case, we shall produce a solution $u$ of \eqref{FNLS} (to be defined in \eqref{ss} below), which will exist on a larger interval $[0, T_1]$ for $T_1> T^*.$ This will lead to a contradiction with the maximality of the time interval $[0, T^*).$ We start by decomposing the initial data $u_0 \in M^{p, p'}_{rad} \subset L^2_{rad} + M^{(\alpha +2), (\alpha +2)'}_{rad} $ into two parts such that the size of the $M^{(\alpha +2), (\alpha +2)'}-$data can be controlled by an arbitrarily small quantity.\\ Using Lemma \ref{ipt} for radial functions, for any $N>1$ and $u_0 \in M^{p, p'}_{rad},$ there exist $\phi_0 \in L^2_{rad}$ and $ \psi_0 \in M^{(\alpha +2), (\alpha +2)'}_{rad}$ such that \begin{equation}\label{dp} u_0= \phi_0 + \psi_0 \end{equation} with \begin{eqnarray}\label{asi} \|\phi_0\|_{L^2} \lesssim N^{\gamma}, \quad \|\psi_0\|_{M^{(\alpha +2), (\alpha +2)'}} \lesssim 1/N \end{eqnarray} where \begin{equation}\label{betap} \gamma = \frac{\frac{1}{2} - \frac{1}{p}}{\frac{1}{p} - \frac{1}{\alpha+2}}. \end{equation} Now, consider \eqref{FNLSP} with initial data $\phi_0,$ namely \begin{eqnarray} \begin{cases} i \partial_t v_{0} + (-\Delta)^\frac{\beta}{2} v_{0}\pm |v_{0}|^{\alpha}v_{0}=0 \\ v_{0}(\cdot,0)=\phi_{0}\in L^2_{rad}. \end{cases} \end{eqnarray} By Theorem \ref{GWPL2Rad}, \eqref{FNLSP} has a unique solution $v_{0}$ in the space \begin{eqnarray}\label{gwpl2} C(\R, L^2_{rad}) \cap L_{loc}^{q}(\R,L^{r} ) \end{eqnarray} with $(q,r) \in \mathcal{A}_{\beta}$ and satisfies \begin{equation}\label{v0qr2} \sup_{(q,r)\in \mathcal{A}_{\beta}}\|v_{0}\|_{L_{loc}^{q} L^{r}} \lesssim_{n,r,\beta} \|\phi_{0}\|_{L^{2}}. \end{equation} Next, we consider the modified \eqref{FNLSP} corresponding to the evolution of $\psi_{0}$: \begin{eqnarray}\label{ivpMod} \begin{cases} i \partial_t w + (-\Delta)^\frac{\beta}{2} w \pm (|w+v_{0}|^{\alpha}(w+v_{0})-|v_{0}|^{\alpha}v_{0})=0 \\ w(\cdot,0)=\psi_{0}\in M^{\alpha+2, (\alpha+2)'}_{rad} .
\end{cases} \end{eqnarray} The solution to the above I.V.P. \eqref{ivpMod} is given as \begin{equation} U_{\beta}(t)\psi_{0}+w_{0}, \end{equation} where $w_0$ is the nonlinear interaction associated with $\psi_0,$ expressed as \begin{align} w_{0} &= \pm i \int_{0}^{t} U_\beta(t-\tau)\left( \big| U_{\beta}(\tau)\psi_{0}+w_{0}+v_{0}\big|^{\alpha} (U_{\beta}(\tau)\psi_{0}+w_{0}+v_{0}) - |v_{0}|^{\alpha} v_{0}\right) \, d\tau \nonumber\\ &=\pm i \int_{0}^{t} U_\beta(t-\tau)G(v_{0} + U_{\beta}(\tau)\psi_{0}, w_{0},0) \, d\tau \pm i \int_{0}^{t} U_\beta(t-\tau)G(v_{0}, U_{\beta}(\tau)\psi_{0},0)\, d\tau.\label{w01} \end{align} Thus, the solution to \eqref{FNLSP} with initial data $u_{0}$ in \eqref{dp} can be written as follows \begin{eqnarray}\label{solnlocal} v_{0}+U_{\beta}(t)\psi_{0}+w_{0}. \end{eqnarray} We remark that $v_0$ and $ U_{\beta}(t)\psi_0$ are globally defined in appropriate spaces as a result of \eqref{gwpl2} and Proposition \ref{uf}. To establish the desired global existence, we first examine the time interval of existence for $w_0$. To this end, we prove local existence of solutions to the integral equation \eqref{w01} for general $\phi$ and $\psi$, as it is required at each stage of the iteration. \begin{prop}\label{w0exist} Let $\phi \in L^2_{rad}$, $\psi\in M^{(\alpha +2), (\alpha +2)'}_{rad}$ and $Y(T)$ be as in \eqref{YT}. Assume that $\beta$ and $\alpha$ are as in Theorem \ref{gwp}. Denote by $v$ the $L^2-$global solution for initial value $\phi$.
Then there exists a constant $C=C(n, \alpha,\beta)>0$ such that the integral equation $$w= \pm i \int_{0}^{t} U_\beta(t-\tau) G(v + U_{\beta}(\tau)\psi, w,0) \, d\tau \pm i \int_{0}^{t} U_\beta(t-\tau) G(v, U_{\beta}(\tau)\psi,0) \, d\tau$$ has a unique solution $w \in Y(T)$ provided $T$ satisfies \begin{align} \label{c1} T &\leq 1 \\ \label{c2} T &\leq C \left( \|\phi\|_{L^{2}} + \|\psi\|_{M^{\alpha+2,(\alpha+2)'}} \right)^{-\alpha /\omega} \\ \label{c3} T &\leq C \left( \|\psi\|_{M^{\alpha+2,(\alpha+2)'}} \right)^{-\alpha / (\omega + \alpha \kappa)}. \end{align} \end{prop} \begin{proof} Define $$B(A,T)=\{w\in Y(T):\|w\|_{Y(T)}\leq A\}$$ with $A>0$ (to be chosen later). Let $T$ be the minimum of the right-hand sides of the conditions \eqref{c1}, \eqref{c2} and \eqref{c3}. Define the operator $$\Gamma(w):= \pm i \int_{0}^{t} U_\beta(t-\tau) G(v + U_{\beta}(\tau)\psi, w,0) \, d\tau \pm i \int_{0}^{t} U_\beta(t-\tau) G(v, U_{\beta}(\tau)\psi,0) \, d\tau.$$ Using Lemma \ref{lemlwp} and the embedding $L^{\infty}_{T} \hookrightarrow L^{1 / \kappa}_{T}$ under the assumption \eqref{c1}, for any $v_{1},v_{2} \in Y(T),$ we have \begin{align} \left\| \int_{0}^{t} U_\beta(t-\tau)G(v_{1},v_{2},0) \, d\tau \right\|_{Y(T)} \lesssim_{n,\alpha,\beta}\label{before4.10} & T^{\omega} \left(\|v_{1}\|^{\alpha}_{Y(T)} \|v_{2}\|_{Y(T)}+ \|v_{2}\|^{\alpha + 1}_{Y(T)}\right) \end{align}\ignorespaces \begin{align} \label{4.10}\lesssim & \left( T^{\omega + \kappa} \right) \|v_{1}\|^{\alpha}_{Y(T)} \|v_{2}\|_{L^{\infty}_{T}L^{\alpha+2}} + \left( T^{\omega + \kappa(\alpha+1)} \right) \|v_{2}\|^{\alpha + 1}_{L^{\infty}_{T}L^{\alpha+2}}.
\end{align} Using the estimate \eqref{4.10} for $v_{1}=v$, $v_{2}=U_{\beta}(\tau)\psi$ along with \eqref{v0qr2}, Lemma \ref{srp} \eqref{embedding} and Proposition \ref{uf} under the assumption \eqref{c1}, we obtain \begin{align*} \hspace{-7cm}\left|\left| \displaystyle\int_{0}^{t} U_\beta(t-\tau)G(v,U_{\beta}(\tau)\psi,0) \, d\tau \right|\right|_{Y(T)} \end{align*} \begin{align*} \lesssim_{\alpha,n,\beta}& \left( T^{\omega + \kappa} \right) \|v\|_{Y(T)}^{\alpha} \| U_{\beta}(\tau)\psi \|_{L^{\infty}_{T}L^{\alpha+2}} + \left( T^{\omega + \kappa(\alpha+1)} \right) \| U_{\beta}(\tau)\psi \|_{L^{\infty}_{T}L^{\alpha+2}}^{\alpha+1} \\ \lesssim_{\alpha,n,\beta} & \left( T^{\omega + \kappa} \right) \| \phi \|_{L^{2}}^{\alpha} \| \psi\|_{M^{\alpha+2, (\alpha+2)'}}+\left( T^{\omega + \kappa(\alpha+1)} \right) \| \psi \|_{M^{\alpha+2, (\alpha+2)'}}^{\alpha+1} \\ =& T^{\kappa} \| \psi \|_{M^{\alpha+2, (\alpha+2)'}}\left( T^{\omega} \right. \| \phi \|_{L^{2}}^{\alpha} +\left. T^{\omega + \alpha \kappa}\| \psi \|_{M^{\alpha+2, (\alpha+2)'}}^{\alpha} \right) \\ \lesssim_{\alpha,n,\beta} & T^{\kappa} \| \psi \|_{M^{\alpha+2, (\alpha+2)'}}. \end{align*} The last inequality follows due to our assumptions \eqref{c2} and \eqref{c3}. This suggests the choice of \begin{equation}\label{AA} A= \frac{3}{C(n,\alpha,\beta)}T^{\kappa}\|\psi\|_{M^{\alpha+2,(\alpha+2)'}}, \end{equation} where $C=C(n,\alpha,\beta)$ is the same constant that appears in \eqref{c2} and \eqref{c3}. Thus, \begin{equation}\label{gammaw1} \left|\left| \displaystyle\int_{0}^{t} U_\beta(t-\tau)G(v,U_{\beta}(\tau)\psi,0)\, d\tau \right|\right|_{Y(T)}\leq \frac{A}{3}.
\end{equation} Using the estimate \eqref{before4.10} for $v_{1}=v + U_{\beta}(\tau)\psi$ and $ v_{2}=w$ along with \eqref{v0qr2}, Lemma \ref{srp} \eqref{embedding} and Proposition \ref{uf} under the assumption \eqref{c1}, we have \begin{align*} \hspace{-7cm} \left\| \int_{0}^{t} U_\beta(t-\tau)G(v + U_{\beta}(\tau)\psi, w,0) \, d\tau \right\|_{Y(T)} \end{align*}\ignorespaces \begin{align*} & \lesssim_{\alpha,n,\beta} T^{\omega} \left( \|v + U_{\beta}(\tau)\psi \|_{Y(T)}^{\alpha} \|w\|_{Y(T)} + \|w\|_{Y(T)}^{\alpha+1} \right) \\ & \lesssim_{\alpha,n,\beta} \|w\|_{Y(T)}\left\{T^{\omega} \left( \left( \|\phi\|_{L^{2}} + \|\psi\|_{M^{\alpha+2, (\alpha+2)'}} \right)^{\alpha} +\|w\|_{Y(T)}^{\alpha} \right)\right\} \\ & \lesssim_{\alpha,n,\beta}\|w\|_{Y(T)} \left\{\frac{1}{3} +T^{\omega+\alpha \kappa}\|\psi\|^{\alpha}_{M^{\alpha+2,(\alpha+2)'}} \right\}. \end{align*} In the last inequality, we have used \eqref{c2} in the first summand and replaced $\|w\|_{Y(T)}^{\alpha}$ by $A^{\alpha}$ (with $A$ as in \eqref{AA}) in the second summand. Considering the second summand of the last inequality under the assumption \eqref{c3}, we get \begin{equation}\label{gamma2} \left\| \int_{0}^{t} U_\beta(t-\tau)G(v + U_{\beta}(\tau)\psi, w,0)\, d\tau \right\|_{Y(T)} \leq \frac{2A}{3}. \end{equation} Combining \eqref{gammaw1} and \eqref{gamma2}, we can say that $\Gamma(w)$ belongs to $B(A, T).$ Contractivity of $\Gamma$ follows similarly\footnote{$C(n,\alpha,\beta)$ is chosen small enough that all requirements are fulfilled.}. Thus, by the Banach fixed-point theorem, we get a unique fixed point $w$ to the integral equation \eqref{w01} on the time-interval $[0,T].$ \end{proof} \begin{Remark} \label{nexphar} Note that the exponents on the right-hand sides of conditions \eqref{c2} and \eqref{c3}, involving the $L^{2}$ and $M^{\alpha+2,(\alpha+2)'}$ norms, are negative since $0<\alpha<\frac{2\beta}{n}$.
\end{Remark} \begin{cor}\label{winfty2} Under the hypothesis of Proposition \ref{w0exist}, we have \begin{equation*} \|w\|_{L^{\infty}_{T}L^{2}}\lesssim_{n,\alpha,\beta} T^{\kappa}\|\psi\|_{M^{\alpha+2,(\alpha+2)'}}. \end{equation*} \end{cor} \begin{proof} Applying the Strichartz estimates (Proposition \ref{fst}) with the $L^{\infty}_{T}L^{2}$ norm in place of the $Y(T)$ norm, i.e. using the admissible pair $(\infty,2)$ on the left-hand side and the same pairs on the right-hand side of \eqref{before4.10} and \eqref{4.10} in the proof of Proposition \ref{w0exist}, we obtain the desired result. Specifically, \begin{align*} \|w\|_{L^{\infty}_{T}L^{2}} \leq & \left|\left|\int_{0}^{t} U_\beta(t-\tau)G(v + U_{\beta}(\tau)\psi, w,0) \, d\tau \right|\right|_{L^{\infty}_{T}L^{2}}\\ &+\left|\left|\int_{0}^{t} U_\beta(t-\tau)G(v, U_{\beta}(\tau)\psi,0) \, d\tau \right|\right|_{L^{\infty}_{T}L^{2}}\\ &\lesssim_{n,\alpha,\beta} T^{\kappa}\|\psi\|_{M^{\alpha+2,(\alpha+2)'}}. \end{align*} \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{gwp}}] Denote the constant from Proposition \ref{w0exist} by $C=C(n,\alpha,\beta)$ and put \begin{equation}\label{TN} T=T(N)=(3CN^{\gamma})^{-\alpha /\omega}.
\end{equation} Applying Proposition \ref{w0exist} for $\phi=\phi_{0}$ and $\psi=\psi_{0}$, we can say that the solution $u =v_0 + U_{\beta}(t) \psi_0 + w_0$ to \eqref{FNLSP} corresponding to the data $u_{0}$ in \eqref{dp} exists in the interval $[0, T(N)]$.\\ We aim to extend our solution to the interval $[T(N), 2T(N)]$ by using a similar procedure but with the new initial data as the sum of the following two functions: \[ \phi_{1}=v_{0}(T)+w_{0}(T) \quad \text{and} \quad \psi_{1} =U_{\beta}(T)\psi_{0}.\] More generally, we aim to extend our solution further by using a similar procedure: \begin{itemize} \item[--]We define $\phi_{k}$ and $\psi_k$ for $k \geq 1$ (for $k=0,\; \phi_{0}$ and $\psi_0$ are defined in \eqref{dp}) as follows: \begin{equation}\label{iter1} \phi_{k}= v_{k-1}(kT) + w_{k-1}(kT) \quad \text{and} \quad \psi_{k}= U_{\beta}(kT)\psi_{0}, \end{equation} where \begin{eqnarray} \begin{aligned} w_{k-1}(kT)=\pm i \displaystyle \int_{0}^{kT} U_{\beta}(kT-\tau) G(v_{k-1}+U_{\beta}(k\tau)\psi_{0},w_{k-1},0) d\tau \nonumber \\ \pm i \displaystyle \int_{0}^{kT} U_{\beta}(kT-\tau)G(v_{k-1},U_{\beta}(k\tau)\psi_{0},0) d\tau. \end{aligned} \end{eqnarray} \item[--] Assume that, for each $k \in \{0, 1,\cdots, K-1\}$ with $kT\leq T^*$,\footnote{Note that $(K-1)T\leq T^*.$ Since $T\leq 1,$ we have $KT\leq T^{*}+1.$} $T$ satisfies all three conditions \eqref{c1}, \eqref{c2} and \eqref{c3} of Proposition \ref{w0exist} with $\phi=\phi_{k}$ and $\psi=U_{\beta}(kT)\psi_{0}$. \item[--]Let $v_{k}$ be the \eqref{FNLSP} evolution of $\phi_k$, and by construction \begin{equation}\label{ss} u(\cdot, t)= v_{k} (\cdot, t-kT ) + w_{k}(\cdot, t-kT) + U_{\beta}(t) \psi_0, \quad \text{if} \ t\in [kT, (k+1)T] \end{equation} defines a solution of \eqref{FNLS} for $k \in \{0, 1,\cdots, K-1\}$.
\end{itemize} We shall show that the iterative process terminates with $KT >T^*.$ Since $v_{K}$ and $U_{\beta}(t)\psi_0$ are globally defined in appropriate spaces, it remains to extend the nonlinear interaction term $w_{K}$ in the $K$th iteration. To do this, we shall use Proposition \ref{w0exist} with $\phi=\phi_{K}$ and $\psi=U_{\beta}(KT)\psi_{0}.$ Considering Remark \ref{nexphar} and \eqref{TN}, $T=T(N)\to 0$ as $N\to \infty$. Thus, the smallness condition \eqref{c1} is satisfied independently of $k$ for large $N.$\\ Using Proposition \ref{uf} and \eqref{asi}, we have \begin{align}\label{ref1} \|U_{\beta}(t)\psi_{0}\|_{L^{\infty}([0,T^{*}+1],M^{\alpha+2,(\alpha+2)'}) } &\lesssim_{n,T^*}\|\psi_{0}\|_{M^{\alpha+2,(\alpha+2)'}} \lesssim_{n} 1/N \xrightarrow{N \to \infty} 0. \end{align} Inserting $U_{\beta}(kT)\psi_{0}$ into the right-hand side of \eqref{c3}, we have \begin{eqnarray*} \begin{aligned} \left( \|U_{\beta}(kT)\psi_{0}\|_{M^{\alpha+2,(\alpha+2)'}} \right)^{-\alpha /(\omega + \alpha \kappa)} \gtrsim_{n} N^{\alpha /(\omega + \alpha \kappa)} \xrightarrow{N \to \infty} \infty. \end{aligned} \end{eqnarray*} Since the lower bound does not depend on $k$ and $T \xrightarrow{N \to \infty} 0,$ the condition \eqref{c3} holds for sufficiently large $N.$ \noindent Thus, we either have $KT> T^*$ or the condition \eqref{c2} fails in the last iterative step $k=K,$ i.e. \begin{equation}\label{aim11} 3CN^{\gamma}< \|\phi_{K}\|_{L^2}+ \|U_{\beta}(KT)\psi_{0}\|_{M^{\alpha+2,(\alpha+2)'}}. \end{equation} Considering \eqref{ref1}, \eqref{aim11} implies \begin{equation}\label{aim} 3CN^{\gamma}< \|\phi_{K}\|_{L^2}+CN^{\gamma}. \end{equation} We claim that even under condition \eqref{aim}, we still have $KT>T^*$.
This clearly leads to a contradiction with the definition of $T^*.$ Considering the construction of $\phi_k$ and Corollary \ref{winfty2}, we note that $\phi_k \in L^2$ for $k\in \{0, 1,\cdots, K\}.$ Now exploiting the conservation of mass \eqref{mass} and Corollary \ref{winfty2} (for $w=w_{k}$ and $\psi=U_{\beta}(kT)\psi_{0}, \; 0\leq k \leq K-1$), we have \begin{align} \|\phi_{K}\|_{L^{2}} &\leq \|v_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}}+\|w_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}} \nonumber\\ &= \|\phi_{K-1}\|_{L^2} + \|w_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^2}\nonumber\\ & \leq \|v_{K-2}\|_{L^{\infty}_{[(K-2)T,(K-1)T]}L^{2}}+\|w_{K-2}\|_{L^{\infty}_{[(K-2)T,(K-1)T]}L^{2}} +\|w_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}} \nonumber\\ &= \|\phi_{K-2}\|_{L^2}+\|w_{K-2}\|_{L^{\infty}_{[(K-2)T,(K-1)T]}L^{2}} +\|w_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}} \nonumber\\ & \leq \cdots \leq \|\phi_{0}\|_{L^{2}}+\sum_{k=0}^{K-1}\|w_{k}\|_{L^{\infty}_{[kT,(k+1)T]}L^{2}} \nonumber \\ & \lesssim_{n, \alpha,\beta} N^{\gamma}+T^{\kappa}\sum_{k=0}^{K-1}\|U_{\beta}(kT)\psi_{0}\|_{M^{\alpha+2,(\alpha+2)'}}\nonumber\\ &\lesssim_{n,T^{*}} N^{\gamma}+T^{\kappa}K\frac{1}{N}.\label{long} \end{align} In the last two inequalities, we have used \eqref{asi} and \eqref{ref1}. Now \eqref{aim} together with \eqref{long} forces $N^{\gamma}\lesssim T^{\kappa}K/N$, that is, $K\gtrsim N^{1+\gamma}T^{-\kappa}$; hence, using \eqref{TN}, \begin{align} KT &\gtrsim_{n,\alpha,\beta,T^*} N^{1+\gamma}T^{1-\kappa} \approx N^{1+\gamma \left(1-\frac{\alpha (1-\kappa)} {\omega}\right)}\nonumber\\ &\label{Npower}= N^{1-\gamma \left(-1+\frac{\alpha (1-\kappa)}{\omega}\right)}. \end{align} Note that $N$ can be chosen arbitrarily large. For any $\gamma$ satisfying \begin{align}\label{betarange} 0 < \gamma < \begin{cases} \frac{\omega}{\alpha (1-\kappa) -\omega} \quad &\text{if}\quad \alpha (1-\kappa)-\omega>0\\ \infty \quad &\text{otherwise,} \end{cases} \end{align} the exponent of $N$ in \eqref{Npower} is positive, and hence $KT>T^*$ for sufficiently large $N$. This concludes the proof of Theorem \ref{gwp}.
\end{proof} \begin{Remark}\label{Whypso} Recall $\gamma$ as defined in \eqref{betap}. The range of $\gamma$ in \eqref{betarange} determines the range of $p.$ Thus, we have $p \in (2,p_{max})$ with $p_{max}$ as given in \eqref{pmax}. This validates the choice of $p$ in Theorem \ref{gwp}. \end{Remark} \section{Proof of Theorem \ref{lwpHar} and Theorem \ref{gwpHar}}\label{s5} \subsection{Local well-posedness in $L^2_{rad}+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}$}\label{seclwp2} In this subsection, we shall prove Theorem \ref{lwpHar}. First, we introduce some notation and a few lemmas before proceeding with the main proof.\\ Denote \begin{equation}\label{theta} \theta:=\nu /\beta. \end{equation} We define the Banach space $\tilde{X}(T)$ as \begin{equation} \begin{aligned}\label{tildeXT} \tilde{X}(T)&:= X_{3}(T)+X_{4}(T) \end{aligned} \end{equation} where $$X_3(T):=C_{T}L^2_{rad} ~\cap~ L^{4/ {\theta}}_{T}L^{4n/(2n-\nu)}$$ equipped with the norm $$\|v\|_{X_3(T)}:=\max \left\{\|v\|_{L^{\infty}_{T}L^2},\|v\|_{L^{4/ {\theta}}_{T}L^{4n/(2n-\nu)}}\right\} $$ and $$ X_4(T):=C_{T}M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}.$$\\ The norm on $\tilde{X}(T)$ is given as \begin{equation*} \|u\|_{\tilde{X}(T)} :=\inf_{\substack{u=v+w \\ v \in X_3(T) \\ w \in X_4(T)}} \left(\|v\|_{X_3(T)} + \|w\|_{X_4(T)} \right). \end{equation*} Denote $$Z(T):=L^{4/ {\theta}}_{T}L^{4n/(2n-\nu)}.$$ \begin{Remark}\label{lemHarlwp2} Using H\"older's inequality for the time variable and Lemma \ref{srp}\eqref{embedding}, we have $\|\cdot\|_{Z(T)} \lesssim_{n} \|\cdot\|_{\tilde{X}(T)}$ for $T\leq 1.$ \end{Remark} \begin{lem}\label{lemlwpHar} Let $n \geq 2$, $0< \nu < \min \{\beta,n\}$, $\beta \in (\frac{2n}{2n-1}, 2)$ and $(q_{1},r_{1})\in \mathcal{A}_{\beta}$.
Then, there exists a constant $C=C(n,\nu,\beta,r_{1})>0$ such that for any $T > 0$ and $v,w_{1},w_{2} \in Z(T)$, the following estimate holds: \begin{align*} \hspace{-0.5cm}\left|\left| \int_0^t U_\beta(t-s)\tilde{G}(v,w_{1},w_{2})(s) ds \right|\right|_{L^{q_{1}}_{T}L^{r_{1}}} &\lesssim_{n,\nu,\beta,r_{1}} T^{1-\theta}\|w_{1}-w_{2}\|_{Z(T)}\left(\|v\|_{Z(T)}^{2} + \|w_{1}\|_{Z(T)}^{2} + \|w_{2}\|_{Z(T)}^{2}\right.\nonumber\\ +&\left. \|w_{1}\|_{Z(T)}\|w_{2}\|_{Z(T)}+\|v\|_{Z(T)}\|w_{1}\|_{Z(T)}+\|v\|_{Z(T)}\|w_{2}\|_{Z(T)}\right). \end{align*} \end{lem} \begin{proof} Using Proposition \ref{fst}\eqref{fst2} for $(q_j, r_j) \in \mathcal{A}_{\beta}$, $j=1,2$, we have \begin{align*} \left|\left| \int_0^t U_\beta(t-s)\tilde{G}(v,w_{1},w_{2})(s) ds \right|\right|_{L^{q_{1}}_{T}L^{r_{1}}} \lesssim_{n,\beta,r_{1},r_{2}} \left|\left| \tilde{G}(v,w_{1},w_{2})\right|\right|_{L^{q'_{2}}_{T}L^{r'_{2}}} . \end{align*} Taking $(q_{2},r_{2})=(\frac{4}{\theta},\frac{4n}{2n-\nu})$ and using \eqref{haridentity}, H\"older's inequality twice and Lemma \ref{HLS}, we have \begin{align*} \left|\left| \tilde{G}(v,w_{1},w_{2})\right|\right|_{L^{q'_{2}}_{T}L^{r'_{2}}} \lesssim_{n,\nu,\beta,r_{1}} &T^{1-\theta}\|w_{1}-w_{2}\|_{Z(T)}\left(\|v\|_{Z(T)}^{2} + \|w_{1}\|_{Z(T)}^{2} + \|w_{2}\|_{Z(T)}^{2}\right.\nonumber\\ &+\left. \|w_{1}\|_{Z(T)}\|w_{2}\|_{Z(T)}+\|v\|_{Z(T)}\|w_{1}\|_{Z(T)}+\|v\|_{Z(T)}\|w_{2}\|_{Z(T)}\right). \end{align*} \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{lwpHar}}] By Duhamel's principle, the equation \eqref{FNLSH} can be expressed as an integral equation: \begin{eqnarray}\label{DPhar} u(x,t)=U_{\beta}(t)u_{0}(x) + i \int_0^t U_\beta(t-s)\tilde{G}(0,u,0)(s) ds =:\Pi (u)(x,t). \end{eqnarray} Let $a$ and $T$ be positive real numbers (to be chosen later).
Define $$B(a,T)=\{u\in \tilde{X}(T):\|u\|_{\tilde{X}(T)}\leq a\}.$$ We shall show that $\Pi$ (as defined in \eqref{DPhar}) is a contraction mapping on $B(a, T).$ Assume, without loss of generality, that $T\leq1.$ Consider an arbitrary decomposition of $u_{0}$ as $$u_{0}=v_{0}+w_{0}\in L^{2}_{rad}+ M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad},$$ where $v_{0}\in L^{2}_{rad}$ and $w_{0}\in M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}.$\\ For the linear evolution of $u_{0},$ using Proposition \ref{fst}\eqref{fst2} and Proposition \ref{uf}, we have \begin{align*} \|U_{\beta}(t)u_{0} \|_{\tilde{X}(T)}&\leq \|U_{\beta}(t)v_{0} \|_{X_{3}(T)}+\|U_{\beta}(t)w_{0} \|_{X_{4}(T)} \\ &\lesssim_{n,\nu,\beta} \|v_{0}\|_{L^{2}}+(1+T)^{n\left( \frac{1}{2}-\frac{2n-\nu}{4n} \right)} \|w_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}\\ &\lesssim \|v_{0}\|_{L^{2}}+\|w_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}. \end{align*} Since the decomposition is arbitrary, it follows that $$\|U_{\beta}(t)u_{0} \|_{\tilde{X}(T)} \lesssim_{n,\nu,\beta} \|u_{0}\|_{L^2+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}.$$ This suggests the choice of $a=2C(n,\nu,\beta)\|u_{0}\|_{L^{2}+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}.$ The integral part is estimated using $X_{3}(T)\hookrightarrow \tilde{X}(T),$ Lemma \ref{lemlwpHar} (with $r_{1} \in \{\frac{4n}{2n-\nu},2\}$) and Remark \ref{lemHarlwp2} as follows: \begin{align} \left|\left| \int_{0}^{t} U_{\beta}(t-\tau)\tilde{G}(0,u,0)(\tau)d\tau \right|\right|_{\tilde{X}(T)} &\lesssim \left|\left| \int_{0}^{t} U_{\beta}(t-\tau)\tilde{G}(0,u,0)(\tau)d\tau \right|\right|_{X_{3}(T)} \nonumber\\ &\lesssim_{n,\nu,\beta} T^{1-\theta} \|u\|_{Z(T)}^{3}\nonumber\\ &\lesssim_{n} T^{1-\theta} \|u\|_{\tilde{X}(T)}^{3} \leq T^{1-\theta}a^{3}, \nonumber \end{align} provided $u\in B(a,T)$.
Taking \begin{equation}\label{tildeTchoice} T :=\min \left\{ 1,~C(n,\nu,\beta) \|u_{0}\|_{L^2+M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}^{-\frac{2}{1-\theta }} \right\}~, \end{equation} we conclude $\Pi(u)\in B(a, T).$ Similarly, we can show that $\Pi$ is a contraction mapping. Using the Banach contraction mapping principle, we obtain a unique fixed point $u$ for $\Pi,$ which is a solution to \eqref{FNLSH} on the time interval $[0,T]$. \end{proof} \subsection{Global well-posedness in $M^{s,s'}_{rad}$}\label{secgwpHar} In this subsection, we shall prove that the local solution $u$ obtained in Theorem \ref{lwpHar} can be extended globally in time when $u_{0} \in M^{s,s'}_{rad}$. We employ the same strategy used in the proof of Theorem \ref{gwp}. Suppose to the contrary that the solution established in Theorem \ref{lwpHar} is not global in time, so that the maximal time $T^*$ is finite. We first decompose the initial data $u_0 \in M^{s,s'}_{rad} \subset L^2_{rad} + M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad} $ into two parts using Lemma \ref{ipt} for radial functions. \\ For any $\tilde{N}>1$ and $u_0 \in M^{s,s'}_{rad},$ there exist $\phi_0 \in L^2_{rad}$ and $ \psi_0 \in M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}$ such that \begin{equation}\label{dphar} u_0= \phi_0 + \psi_0 \end{equation} with \begin{eqnarray}\label{asihar} \|\phi_0\|_{L^2} \lesssim \tilde{N}^{\tilde{\gamma}}, \quad \|\psi_0\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \lesssim 1/ \tilde{N} \end{eqnarray} where \begin{equation}\label{betaq} \tilde{\gamma} = \frac{\frac{1}{2} - \frac{1}{s}}{\frac{1}{s} - \frac{2n-\nu}{4n}}. \end{equation} Now, consider \eqref{FNLSH} with initial data $\phi_0,$ namely \begin{eqnarray} \begin{cases} i \partial_t v_{0} - (-\Delta)^\frac{\beta}{2} v_{0}+(|\cdot|^{-\nu} \ast |v_{0}|^2)v_{0}=0 \\ v_{0}(\cdot,0)=\phi_{0}\in L^2_{rad}.
\end{cases} \end{eqnarray} By \cite[Proposition 3.4]{DGBJDE}, \eqref{FNLSH} has a unique solution $v_{0}$ in the space \begin{eqnarray}\label{gwpHarl2} C(\R, L^2_{rad}) \cap L_{loc}^{q}(\R,L^{r} ) \end{eqnarray} with $(q,r) \in \mathcal{A}_{\beta}$ and satisfies \begin{equation}\label{v0qr2Har} \sup_{(q,r)\in \mathcal{A}_{\beta}}\|v_{0}\|_{L_{loc}^{q} L^{r}} \lesssim_{n,r,\beta} \|\phi_{0}\|_{L^{2}}. \end{equation} Next, we consider the modified \eqref{FNLSH} corresponding to the evolution of $\psi_{0}$: \begin{eqnarray}\label{ivpModHar} \begin{cases} i \partial_t w -(-\Delta)^\frac{\beta}{2} w +(|\cdot|^{-\nu} \ast |w+v_{0}|^2) (w+v_{0})-(|\cdot|^{-\nu} \ast |v_{0}|^2) v_{0}=0 \\ w(\cdot,0)=\psi_{0}\in M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}_{rad} . \end{cases} \end{eqnarray} The solution to the above I.V.P. \eqref{ivpModHar} is given as \begin{equation} U_{\beta}(t)\psi_{0}+w_{0}, \end{equation} where $w_0$ is the nonlinear interaction associated with $\psi_0,$ expressed as \begin{align} w_{0} &= i\int_{0}^{t} U_\beta(t-\tau)\left\{(|\cdot|^{-\nu} \ast |U_{\beta}(\tau)\psi_{0}+w_{0}+v_{0}|^2)(U_{\beta}(\tau)\psi_{0}+w_{0}+v_{0}) - (|\cdot|^{-\nu} \ast |v_{0}|^2) v_{0}\right\} \, d\tau \nonumber\\ &=i\int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v_{0} + U_{\beta}(\tau)\psi_{0}, w_{0},0) \, d\tau + i \int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v_{0}, U_{\beta}(\tau)\psi_{0},0) \, d\tau.\label{w01har} \end{align} Thus, the solution to \eqref{FNLSH} with initial data $u_{0}$ (as given in \eqref{dphar}) can be written as \begin{eqnarray*} v_{0}+U_{\beta}(t)\psi_{0}+w_{0}. \end{eqnarray*} Since $v_0$ and $ U_{\beta}(t)\psi_0$ are globally defined in appropriate spaces as a result of \eqref{gwpHarl2} and Proposition \ref{uf}, we are left to examine the time interval of existence for $w_0$ (given in \eqref{w01har}). For this purpose, we establish local existence of solutions to \eqref{w01har} for general $\phi$ and $\psi$.
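\begin{Remark} In the spirit of Remark \ref{nexphar}, we record an elementary sign check which will be used implicitly below. Since $0<\nu<\min\{\beta,n\}$, the parameter $\theta=\nu/\beta$ defined in \eqref{theta} satisfies \begin{equation*} 0<\theta<1, \qquad \text{and hence} \qquad 1-\theta>0, \quad -\frac{2}{1-\theta}<0, \quad -\frac{4}{2-\theta}<0. \end{equation*} Consequently, the time factors $T^{1-\theta}$, $T^{1-\frac{3\theta}{4}}$, $T^{1-\frac{\theta}{2}}$ and $T^{1-\frac{\theta}{4}}$ appearing in the estimates below vanish as $T\to 0$, while the exponents on the right-hand sides of the smallness conditions in the next proposition are negative. \end{Remark}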
\begin{prop}\label{w0existhar} Let $\phi \in L^2_{rad}$ and $ \psi\in M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}_{rad}$. Assume that $\beta$ and $\nu$ are as in Theorem \ref{gwpHar}. Denote by $v$ the $L^2-$global solution for initial value $\phi$. Then there exists a constant $C=C(n, \nu,\beta)>0$ such that the integral equation $$w= i\int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v + U_{\beta}(\tau)\psi, w,0) \, d\tau + i \int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v, U_{\beta}(\tau)\psi,0) \, d\tau$$ has a unique solution $w \in Z(T)$ provided $T$ satisfies \begin{align} \label{D1} T &\leq 1 \\ \label{D2} T &\leq C \left( \|\phi\|_{L^{2}} + \|\psi\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \right)^{-2 /(1-\theta)} \\ \label{D3} T &\leq C \left( \|\psi\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \right)^{-4/(2-\theta)}. \end{align} \end{prop} \begin{proof} Define $$B(\tilde{A},T)=\{w\in Z(T):\|w\|_{Z(T)}\leq \tilde{A}\}$$ with $\tilde{A}>0$ (to be chosen later). Let $T$ be the minimum of the right-hand sides of the conditions \eqref{D1}, \eqref{D2} and \eqref{D3}. Define the operator \begin{align*} \tilde{\Gamma}(w)=i\int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v + U_{\beta}(\tau)\psi, w,0) \, d\tau + i \int_{0}^{t} U_\beta(t-\tau) \tilde{G}(v, U_{\beta}(\tau)\psi,0) \, d\tau.
\end{align*} Using Lemma \ref{lemlwpHar} and the embedding $L^{\infty}_{T} \hookrightarrow L^{4/ {\theta}}_{T}$ under the assumption \eqref{D1}, for any $v_{1},v_{2} \in Z(T),$ we have \begin{equation*} \left\| \int_{0}^{t} U_\beta(t-\tau)\tilde{G}(v_{1},v_{2},0) \, d\tau \right\|_{Z(T)} \end{equation*} \begin{align} \lesssim_{n,\nu,\beta}\label{beforehar4.10} & T^{1-\theta}\|v_{2}\|_{Z(T)} \left( \|v_{1}\|^{2}_{Z(T)}+\|v_{1}\|_{Z(T)} \|v_{2}\|_{Z(T)}+ \|v_{2}\|^{2}_{Z(T)}\right)\\ \label{har4.10}\lesssim T^{1-\frac{3\theta}{4}}&\|v_{1}\|^{2}_{Z(T)}\|v_{2}\|_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}}+T^{1-\frac{\theta}{2}}\|v_{1}\|_{Z(T)} \|v_{2}\|_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}}^{2}+T^{1-\frac{\theta}{4}} \|v_{2}\|^{3}_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}}. \end{align} Using the estimate \eqref{har4.10} for $v_{1}=v$, $v_{2}=U_{\beta}(\tau)\psi$ along with \eqref{v0qr2Har}, Lemma \ref{srp} \eqref{embedding} and Proposition \ref{uf} under the assumption \eqref{D1}, we obtain \begin{align*} \hspace{-7cm}\left|\left| \displaystyle\int_{0}^{t} U_\beta(t-\tau)\tilde{G}(v,U_{\beta}(\tau)\psi,0) \, d\tau \right|\right|_{Z(T)} \end{align*} \begin{align*} &\hspace{-0.5cm}\lesssim_{\nu,n,\beta} T^{1-\frac{3\theta}{4}}\|v\|^{2}_{Z(T)}\|U_{\beta}(\tau)\psi\|_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}}+T^{1-\frac{\theta}{2}}\|v\|_{Z(T)} \|U_{\beta}(\tau)\psi\|_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}}^{2}+T^{1-\frac{\theta}{4}} \|U_{\beta}(\tau)\psi\|^{3}_{L^{\infty}_{T}L^{\frac{4n}{2n-\nu}}} \\ &\hspace{-0.5cm} \lesssim_{\nu,n,\beta} T^{1-\frac{3\theta}{4}} \| \phi \|_{L^{2}}^{2} \| \psi\|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}+ T^{1-\frac{\theta}{2}} \| \phi \|_{L^{2}} \| \psi \|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}^{2}+T^{1-\frac{\theta}{4}} \| \psi\|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}^{3}\\ & \hspace{-0.5cm}\leq T^{\frac{\theta}{4}} \| \psi \|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}\left( T^{1-\theta} (\| \phi \|_{L^{2}}+\| \psi \|_{M^{\frac{4n}{2n-\nu},
\frac{4n}{2n+\nu}}})^{2} + T^{1-\frac{\theta}{2}}\| \psi \|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}^{2} \right) \\ &\hspace{-0.5cm} \lesssim_{\nu,n,\beta} T^{\frac{\theta}{4}} \| \psi \|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}}. \end{align*} The last inequality follows due to our assumptions \eqref{D2} and \eqref{D3}. This suggests the choice of \begin{equation}\label{AB} \tilde{A}= \frac{5}{C(n,\nu,\beta)}T^{\frac{\theta}{4}}\|\psi\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}, \end{equation} where $C=C(n,\nu,\beta)$ is the same constant that appears in \eqref{D2} and \eqref{D3}.\\ Thus, \begin{equation}\label{gammaw1har} \left|\left| \displaystyle\int_{0}^{t} U_\beta(t-\tau)\tilde{G}(v,U_{\beta}(\tau)\psi,0)\, d\tau \right|\right|_{Z(T)}\leq \tilde{A}/5. \end{equation} Using the estimate \eqref{beforehar4.10} for $v_{1}=v + U_{\beta}(\tau)\psi$ and $ v_{2}=w$ along with \eqref{v0qr2Har}, Lemma \ref{srp} \eqref{embedding} and Proposition \ref{uf} under the assumption \eqref{D1}, we have \begin{align*} \hspace{-7cm} \left\| \int_{0}^{t} U_\beta(t-\tau)\tilde{G}(v + U_{\beta}(\tau)\psi, w,0) \, d\tau \right\|_{Z(T)} \end{align*}\ignorespaces \begin{align*} & \lesssim_{\nu,n,\beta} T^{1-\theta} \left\{ \|v + U_{\beta}(\tau)\psi \|_{Z(T)}^{2} \|w\|_{Z(T)} +\|v + U_{\beta}(\tau)\psi \|_{Z(T)} \|w\|_{Z(T)}^{2}+ \|w\|_{Z(T)}^{3} \right\} \\ & \lesssim_{\nu,n,\beta} \|w\|_{Z(T)}\left\{2T^{1-\theta} \left( \|\phi\|_{L^{2}} + \|\psi\|_{M^{\frac{4n}{2n-\nu}, \frac{4n}{2n+\nu}}} \right)^{2}+2T^{1-\theta} \|w\|_{Z(T)}^{2} \right\} \\ & \lesssim_{\nu,n,\beta}\|w\|_{Z(T)} \left\{\frac{2}{5} +2T^{1-\frac{\theta}{2}}\|\psi\|^{2}_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \right\}. \end{align*} In the last inequality, we have used \eqref{D2} in the first summand and replaced $\|w\|_{Z(T)}^{2}$ by $\tilde{A}^{2}$ (with $\tilde{A}$ as in \eqref{AB}) in the second summand.
\\ Considering the second summand of the last inequality under the assumption \eqref{D3}, we get \begin{equation}\label{gamma2har} \left\| \int_{0}^{t} U_\beta(t-\tau)\tilde{G}(v + U_{\beta}(\tau)\psi, w,0)\, d\tau \right\|_{Z(T)} \leq 4\tilde{A} /5. \end{equation} Combining \eqref{gammaw1har} and \eqref{gamma2har}, we can say that $\tilde{\Gamma}(w)$ belongs to $B(\tilde{A}, T).$ Contractivity of $\tilde{\Gamma}$ follows similarly\footnote{$C(n,\nu,\beta)$ is chosen small enough that all requirements are fulfilled.}. Thus, by the Banach fixed-point theorem, we get a unique fixed point $w$ to the integral equation \eqref{w01har} on the time-interval $[0,T].$ \end{proof} \begin{cor}\label{winfty2har} Under the hypothesis of Proposition \ref{w0existhar}, we have \begin{equation*} \|w\|_{L^{\infty}_{T}L^{2}}\lesssim_{n,\nu,\beta} T^{\frac{\theta}{4}}\|\psi\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}. \end{equation*} \end{cor} \begin{proof} The proof follows in a manner similar to that of Corollary \ref{winfty2}. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{gwpHar}}] Denote the constant from Proposition \ref{w0existhar} by $C=C(n,\nu,\beta)$ and put \begin{equation}\label{TM} T=T(\tilde{N})=(3C\tilde{N}^{\tilde{\gamma}})^{-2/(1-\theta)}. \end{equation} Applying Proposition \ref{w0existhar} for $\phi=\phi_{0}$ and $\psi=\psi_{0}$, we can say that the solution $u =v_0 + U_{\beta}(t) \psi_0 + w_0$ to \eqref{FNLSH} corresponding to the data $u_{0}$ in \eqref{dphar} exists in the interval $[0, T(\tilde{N})]$.\\ We aim to extend our solution further by introducing the same iterative procedure as in the proof of Theorem \ref{gwp}, see \eqref{iter1}, but with the Hartree nonlinearity.
However, the nonlinear interaction term $w_{k-1}$ is given as: \begin{eqnarray} \begin{aligned} w_{k-1}= i \displaystyle \int_{0}^{t} U_{\beta}(t-\tau) \tilde{G}(v_{k-1}+U_{\beta}(k\tau)\psi_{0},w_{k-1},0) d\tau \nonumber \\ + i \displaystyle \int_{0}^{t} U_{\beta}(t-\tau)\tilde{G}(v_{k-1},U_{\beta}(k\tau)\psi_{0},0) d\tau. \end{aligned} \end{eqnarray} We shall show that the iterative process terminates with $KT >T^*$ by extending the solution to the $K$th iteration. We shall use Proposition \ref{w0existhar} with $\phi=\phi_{K}$ and $\psi=U_{\beta}(KT)\psi_{0}.$ Taking into account \eqref{TM}, $T=T(\tilde{N})\to 0$ as $\tilde{N}\to \infty$. Thus, \eqref{D1} holds independently of $k$ for large $\tilde{N}.$ \\ Using Proposition \ref{uf} and \eqref{asihar}, we have \begin{align}\label{ref1har} \|U_{\beta}(t)\psi_{0}\|_{L^{\infty}([0,T^{*}+1],M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}) } &\lesssim_{n,T^*}\|\psi_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \lesssim_{n} 1 / \tilde{N} \xrightarrow{\tilde{N} \to \infty} 0. \end{align} Inserting $U_{\beta}(kT)\psi_{0}$ into the right-hand side of \eqref{D3}, we have \begin{eqnarray*} \begin{aligned} \left( \|U_{\beta}(kT)\psi_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}} \right)^{-4/(2-\theta)} \gtrsim_{n} \tilde{N}^{4/ (2-\theta)} \xrightarrow{\tilde{N} \to \infty} \infty. \end{aligned} \end{eqnarray*} Thus, the condition \eqref{D3} holds for sufficiently large $\tilde{N}$ as the lower bound does not depend on $k$ and $T \xrightarrow{\tilde{N} \to \infty} 0.$ \\ Now, we are left with two cases. We either have $KT> T^*$ or the condition \eqref{D2} fails in the last iterative step $k=K,$ i.e. \begin{equation}\label{aim11Har} 3C\tilde{N}^{\tilde{\gamma}}< \|\phi_{K}\|_{L^2}+ \|U_{\beta}(KT)\psi_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}.
\end{equation} Considering \eqref{ref1har}, \eqref{aim11Har} can be written as \begin{equation}\label{aimHar} 3C\tilde{N}^{\tilde{\gamma}}< \|\phi_{K}\|_{L^2}+C\tilde{N}^{\tilde{\gamma}}. \end{equation} We shall prove that even under condition \eqref{aimHar} we have $KT>T^*$, which contradicts the definition of $T^*.$ Using conservation of mass \eqref{mass} and Corollary \ref{winfty2har} (for $w=w_{k}$ and $\psi=U_{\beta}(kT)\psi_{0}, \; 0\leq k \leq K-1$), we have \begin{align} \|\phi_{K}\|_{L^{2}} &\leq \|v_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}}+\|w_{K-1}\|_{L^{\infty}_{[(K-1)T,KT]}L^{2}} \nonumber\\ & \lesssim_{n, \nu,\beta} \tilde{N}^{\tilde{\gamma}}+T^{\frac{\theta}{4}}\sum_{k=0}^{K-1}\|U_{\beta}(kT)\psi_{0}\|_{M^{\frac{4n}{2n-\nu},\frac{4n}{2n+\nu}}}\nonumber\\ &\lesssim_{n,T^{*}} \tilde{N}^{\tilde{\gamma}}+T^{\frac{\theta}{4}}K\frac{1}{\tilde{N}}. \end{align} In the last two inequalities, we have used \eqref{asihar} and \eqref{ref1har}. Thus, using \eqref{TM}, \eqref{aimHar} can be expressed as \begin{align} KT &\gtrsim_{n,\nu,\beta,T^*} \tilde{N}^{1+\tilde{\gamma}}T^{1-\frac{\theta}{4}} \approx \tilde{N}^{1+\tilde{\gamma} \left(1-\frac{4-\theta}{2(1-\theta)}\right)}\nonumber\\ &\label{Mpower}= \tilde{N}^{1 - \tilde{\gamma} \left( \frac{2 + \theta}{2(1 - \theta)} \right)}. \end{align} By choosing the exponent of $\tilde{N}$ to be positive in \eqref{Mpower} and $\tilde{N}$ to be arbitrarily large, we get $KT>T^*$. This concludes the proof of Theorem \ref{gwpHar}. Note that the exponent of $\tilde{N}$ determines the range of $\tilde{\gamma}$, which in turn determines the range of $s$, see \eqref{betaq}. Thus, we have $s \in (2,s_{max})$ with $s_{max}$ as given in \eqref{qmax}. This validates the choice of \(s\) in Theorem \ref{gwpHar}. \end{proof} {\bf Acknowledgments:} The authors would like to express their gratitude to Professor H.G. Feichtinger for his valuable comments and suggestions on an earlier version of this paper.
The second author acknowledges the financial support from the University Grants Commission (UGC), India (file number 201610135365). The third author gratefully acknowledges the financial support from the Matrics Project of DST (file number 2018/001166). \bibliographystyle{plain} \bibliography{finls.bib} \end{document}
2412.19752v1
http://arxiv.org/abs/2412.19752v1
A random walk among random graphs
\documentclass[12pt]{report} \usepackage{amsmath,amsfonts,amssymb,amsthm,mathrsfs} \usepackage{wasysym} \usepackage{minitoc} \usepackage{endnotes} \usepackage[dvipsnames]{xcolor} \usepackage[a4paper,vmargin={3.5cm,3.5cm},hmargin={2.5cm,2.5cm}]{geometry} \usepackage{graphicx,graphics} \usepackage{epsfig}\usepackage{latexsym}\usepackage[applemac]{inputenc} \linespread{1.2} \usepackage{ae,aecompl} \newcommand{\cev}[1]{\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}} \newcommand{\ER}[2]{ \mathcal{G}(#1, #2)} \usepackage[english]{babel} \usepackage[colorlinks=true]{hyperref} \usepackage{pstricks} \usepackage{enumerate} \newcommand{\subgraph}{\sqsubset} \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \usepackage{baskervald} \usepackage[baskervaldx]{newtxmath} \usepackage[font=sf, labelfont={sf,bf}, margin=1cm]{caption} \usepackage{titlesec} \usepackage{titletoc} \usepackage{color} \titleformat{\part}[display]{\beaupetit}{\Huge\textsf{Part \textsc{\Roman{part}}:} }{0pt}{}[] \DeclareFixedFont{\chapterNumberFont}{OT1}{fsk}{b}{n}{5cm} \DeclareFixedFont{\classik}{OT1}{ptm}{b}{sc}{1cm} \definecolor{gris75}{gray}{0.75} \definecolor{gris25}{gray}{0.15} \titleformat{\section}[block]{\Large\sffamily}{\noindent \bfseries \thesection}{1em}{} \titleformat{\subsection}[block]{\large\sffamily}{\thesubsection}{1em}{} \titleformat{\subsubsection}[block]{\sffamily}{}{}{} \titleformat{\paragraph}[runin]{\sffamily}{}{}{} \titlespacing{\section} {0pc}{3.5ex plus .1ex minus .2ex}{1.5ex minus .1ex} \titleformat{\chapter}[hang]{\bfseries\Large}{\textsf{\textsc{\Roman{chapter}}:} }{0pt}{}[\titlerule ] \newcommand{\crefrangeconjunction}{--} \newcommand{\op}{\operatorname} \newcommand{\one}{{\triangle^{1}}} \newcommand{\W}{\mathcal{W}} \newcommand{\two}{{\triangle^{2}}} \newcommand{\Z}{\mathbb{Z}} \renewcommand{\P}{\mathbb{P}} \newcommand{\E}{\mathbb{E}}
\newcommand{\noeud}[1]{\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$#1$}}}} \DeclareFixedFont{\beaupetit}{T1}{ftp}{b}{n}{2cm} \usepackage{thmtools} \declaretheorem[thmbox=M,name=Theorem,parent=chapter]{theorem} \declaretheorem[name=Proposition,sibling=theorem]{proposition} \declaretheorem[name=Lemma,sibling=theorem]{lemma} \declaretheorem[name=Corollary,sibling=theorem]{corollary} \declaretheorem[numbered=no,name=Question]{question} \declaretheorem[shaded={bgcolor= {rgb}{0.8,0.85,1}},parent=chapter]{definition} \declaretheorem[parent=chapter,name=Definition-Proposition,shaded={bgcolor= {rgb}{0.8,0.85,1}}]{defprop} \declaretheorem[parent=chapter,parent=chapter,shaded={bgcolor= {rgb}{0.9,0.9,0.8}},name=Open Question]{open} \declaretheorem[parent=chapter,style=remark,name=Remark,parent=chapter]{remark} \declaretheorem[parent=chapter,style=remark,name=Example,parent=chapter]{example} \declaretheorem[parent=chapter,style=remark,name=Exercise,parent=chapter]{exo} \newcommand{\note}[1]{{\textcolor{red}{[#1]}}} \def\llbracket{[\hspace{-.10em} [ } \def\rrbracket{ ] \hspace{-.10em}]} \def\llbrack{\{\hspace{-.25em} \{ } \def\rrbrack{ \} \hspace{-.25em}\}} \def\f{\mathcal{F}} \def\ve{\varepsilon} \def\la{{\longrightarrow}} \def\build#1_#2^#3{\mathrel{ \mathop{\kern 0pt#1}\limits_{#2}^{#3}}} \def\wt{\widetilde} \newcommand{\keywords}[1]{ \noindent {\footnotesize {\small \em Keywords and phrases.} {\sc #1} } } \newcommand{\ams}[2]{ \noindent {\footnotesize {\small \em AMS {\rm 2000} subject classifications. 
{\rm Primary {\sc #1}; secondary {\sc #2}} } } } \title{\beaupetit A random Walk among \\ random Graphs} \author{\textsc{Nicolas CURIEN}} \begin{document} \maketitle \clearpage \section*{A random walk among random graphs} The theory of random graphs is now ubiquitous in probability theory, and there are already many comprehensive textbooks (to name just a few \cite{RemcoRGI,RemcoRGII,bollobas2001random,janson2011random,drmota2009random,durrett2010random}) dealing with the numerous models of random graphs invented over the last decades. The goal of these lecture notes is to give a glimpse of a few models of random graphs together with some of the probabilistic tools used to study them. They are intended for master's or PhD students in probability theory. I chose the models of random graphs mainly by taste and by the desire to cover different types of probabilistic arguments. This document should not be seen as an authoritative reference but rather as a recreational (random) walk in the wonderland of random graph theory. Several exercises of varying difficulty (most of them being non trivial) are scattered along the text, and each chapter ends with bibliographical pointers. Here are the main topics covered in the lecture notes together with the \textit{mathematical tools they introduce}:\medskip \begin{itemize} \item \textsc{Chapter I:} Basics of (bond) percolation. Phase transition. The Rado graph. \\ \indent \textit{Graph theory, First and second moment, duality}. \item \textsc{Chapter II:} One-dimensional random walk, Recurrence/transience, Oscillation/drift. \\ \indent \textit{Law of large numbers and its converse, Fourier transform}. \item \textsc{Chapter III:} Skip-free random walk, duality and cycle lemma. Applications: Kemperman formula, Ballot theorem, parking on the line.\\ \noindent \textit{Feller combinatorial cyclic lemma}. \item \textsc{Chapter IV:} Bienaym\'e--Galton--Watson trees, {\L}ukasiewicz encoding, Enumeration.
\\ \noindent \textit{Formal series, Neveu's plane tree formalism}. \item \textsc{Chapter V:} Sharp threshold for graph properties on the Erd{\H{o}}s--R\'enyi random graph: connectedness, clique number, diameter, cycles. Convergence of the spectrum. \\ \noindent \textit{First and second moment method, method of moments, Poisson paradigm}. \item \textsc{Chapter VI:} Phase transition for the giant component I. \\ \noindent \textit{$ \varepsilon$-cut, first moment method, sprinkling, multiplicative coalescent}. \item \textsc{Chapter VII:} Phase transition for the giant component II. \\ \noindent \textit{Markovian exploration, differential equation method}. \item \textsc{Chapter VIII:} Phase transition for the giant component III. \\ \noindent \textit{Poissonization, Bin counting processes, Brownian asymptotics}. \item \textsc{Chapter IX:} (Uniform) random permutations. Poisson--Dirichlet distribution and Dickman function for large cycles. Poisson limit for small cycle counts. \\ \noindent \textit{Feller's coupling, Randomization, Stick breaking construction}. \item \textsc{Chapter X:} Random recursive tree (and random permutations). \\ \noindent \textit{Chinese restaurant process, Recursive distributional equation, P\'olya urn scheme}. \item \textsc{Chapter XI:} Continuous-time embedding and applications. \\ \noindent \textit{Athreya--Karlin embedding of Markov chains, convergence of Yule processes and links between exponential and Poisson processes}. \item \textsc{Chapter XII:} Spine decomposition and applications. \\ \noindent \textit{Martingale transform, spine decomposition, many-to-one formulas}. \item \textsc{Chapter XIII:} Barab\'asi--Albert random tree. \\ \noindent \textit{Preferential attachment mechanism, scale-free random networks}. \end{itemize} Many thanks go to the students who attended the ``random graph'' master course I gave in 2019--2025 at Orsay. They contributed to the development of the material and spotted many typos.
I am particularly grateful to Alice Contat, Baojun Wu (promotion 2019), Guillaume Blanc, Maude Bellugeon, Elie Khalfallah (promotion 2020), Tanguy Lions, Francisco Calvillo (promotion 2021), Corentin Correia, Lo\"ic Gassmann (promotion 2022), Nathan de Montgolfier, Laureline Legros, Emile Averous (promotion 2023), Remi Bernard, Simone Maria Giancola (promotion 2024). Special thanks go to Damian Cid for spotting (so) many typos and inaccuracies and for his participation in Chapter \ref{chap:poissonER}. I am also grateful to Serte Donderwinkel for many useful comments. \clearpage \section*{Notations} We list here the (perhaps non-standard) notation we use throughout the lecture notes:\\ \noindent \begin{tabular}{cl} e.g. & for example (\textit{exempli gratia})\\ i.e. & namely (\textit{id est})\\ a.s. & almost surely\\ i.o. & infinitely often\\ $ \mathbb{Z}_{>0}$ & $=\{1,2,3,\cdots\}$\\ $ \mathbb{Z}_{\geq0}$ & $=\{0,1,2,3,\cdots\}$\\ $ \mathbb{Z}_{<0}$ & $=\{\cdots,-3,-2,-1\}$\\ $ \mathbb{Z}_{\leq0}$ & $=\{\cdots,-3,-2,-1,0\}$\\ $ \equiv$ & \mbox{gives a shorter and temporary notation for an object}\\ $\#E$ & cardinality of the set $E$\\ $ [z^{n}]f(z)$ & $=f_{n}$ when $f(z) = \sum_{i \geq0} f_{i}z^{i} \in \mathbb{C}[[X]]$ \end{tabular} \medskip \noindent For an asymptotically positive function $f(n)$ and random variables $X_{n} : n \geq 0$ we write \noindent \begin{tabular}{cl} $ X_{n} \sim_{ \mathbb{P}} f(n)$ & if $\frac{X_{n}}{f(n)} \xrightarrow[n\to\infty]{( \mathbb{P})} 1$\\ $ X_{n} = o_{ \mathbb{P}}( f(n))$ & if $\frac{X_{n}}{f(n)} \xrightarrow[n\to\infty]{( \mathbb{P})} 0$\\ $ X_{n} = O_{ \mathbb{P}}( f(n))$ & if $(X_{n}/f(n) : n \geq 1)$ is tight\\ \end{tabular} \medskip \noindent If furthermore the variables $X_{n}$ are coupled and form a sequence $(X_{n} : n \geq 0)$ then we write \noindent \begin{tabular}{cl} $ X_{n} \sim_{ a.s.} f(n)$ & if $\frac{X_{n}}{f(n)} \xrightarrow[n\to\infty]{a.s.} 1$\\ $ X_{n} = o_{ a.s.} (f(n))$ & if $\frac{X_{n}}{f(n)}
\xrightarrow[n\to\infty]{a.s.} 0$\\ $ X_{n} = O_{ a.s.}( f(n))$ & if $(X_{n}/f(n) : n \geq 1)$ is bounded above \end{tabular} \medskip \noindent We use standard notation for several (laws of) random variables: \medskip \noindent \begin{tabular}{ll} $ \mathcal{N}(m, \sigma^2)$ & real Gaussian law with mean $m$ and variance $\sigma^2$\\ $ \mathcal{E}(\alpha)$ & exponential variable with mean $1/\alpha$\\ $ (B_t : t \geq 0)$& standard linear Brownian motion issued from $0$\\ $ ( \mathfrak{P}(t) : t \geq 0)$& unit rate Poisson counting process, \\ & in particular $ \mathfrak{P}(t)$ is a Poisson random variable with mean $t$\\ $G(n, p)$ & Erd{\H{o}}s--R\'enyi random graph with $n$ vertices and edge parameter $p$\\ $(S_n : n \geq 0)$ & random walk with i.i.d.~increments (see context for the law of increments)\\ \end{tabular} \medskip \noindent Graph notation: \noindent \begin{tabular}{ll} $ \mathrm{V}(\mathfrak{g}),\mathrm{E}(\mathfrak{g})$& vertex and edge sets of a graph $ \mathfrak{g}$\\ $x \sim y$ & vertices $x,y$ are neighbors in the underlying graph $ \mathfrak{g}$\\ $x \leftrightarrow y$ & vertices $x,y$ are in the same connected component in the underlying graph $ \mathfrak{g}$\\ $ \mathrm{deg}_{ \mathfrak{g}}(x)$ or $ \mathrm{deg}(x)$ & degree of the vertex $x \in \mathrm{V}( \mathfrak{g})$\\ $\mathfrak{g}' \sqsubset \mathfrak{g}$ & $ \mathfrak{g'}$ is a subgraph of $ \mathfrak{g}$\\ $ \mathfrak{g}[V]$ & graph induced by $ \mathfrak{g}$ on the vertices $V$\\ $ \mathfrak{g}\simeq \mathfrak{g}'$ & two isomorphic graphs\\ $ \mathrm{d}_{ \mathrm{gr}}^\mathfrak{g}$ or $ \mathrm{d_{gr}}$ & graph distance on $ \mathrm{V}(\mathfrak{g})$\\ $ \mathbb{G}_n$ & set of all simple graphs on the vertex set $\{1,2, \dots , n \}$ \end{tabular} \medskip \noindent Tree notation: \medskip \noindent \begin{tabular}{ll} $ (T_n : n \geq 0)$ & is the random recursive tree or uniform attachment chain \\ $ ( \mathsf{T}_n : n \geq 1)$ & is the Barab\'asi--Albert or linear preferential 
attachment chain \\ $( [\mathbb{T}]_t : t \geq 0)$ & is a standard Yule tree (rate $1$ and usually order $2$) process\\ $ \mathcal{T}$ & is a Bienaym\'e--Galton--Watson tree \\ & whose offspring distribution should be clear from the context\\ \end{tabular} \tableofcontents \bigskip \chapter{Basics of percolation} \hfill An appetizer. \bigskip In this introductory chapter we present the model of Bernoulli bond \textbf{percolation}. This is a way to generate a random graph from a deterministic graph by keeping some of its edges at random. The random graphs studied in part I (Bienaym\'e--Galton--Watson trees) and in part II (Erd{\H{o}}s--R\'enyi random graphs) can be seen as percolation models on some special graphs. Our goal here is only to present the main features of the Bernoulli percolation model focusing on the \textbf{phase transition} for the existence of an infinite cluster. \begin{figure}[!h] \begin{center} \includegraphics[width=3.7cm]{arbre1} \includegraphics[width=3.7cm]{arbre2} \includegraphics[width=3.7cm]{arbre3} \includegraphics[width=3.7cm]{arbre4} \caption{ Increasing Bernoulli percolation on a complete binary tree up to level $10$ with parameters $p=0.2$, $0.4$, $0.5$ and $p=0.6$ from left to right.} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=3.7cm]{perco035} \includegraphics[width=3.7cm]{perco045} \includegraphics[width=3.7cm]{perco055} \includegraphics[width=3.7cm]{perco065} \caption{Increasing Bernoulli percolation on a $50\times 50$ grid with parameters $p=0.35$, $p=0.45$, $p=0.55$ and $p=0.65$. 
Notice the appearance of a ubiquitous cluster between the second and the third picture.} \end{center} \end{figure} \section{Basics on graphs} A \textbf{graph}\footnote{more formally, a non-oriented multi-graph} $ \mathfrak{g}$ is a pair $ \mathfrak{g}=(\mathrm{V}(\mathfrak{g}), \mathrm{E}(\mathfrak{g}))$, where $ V= \mathrm{V}(\mathfrak{g})$ is the set of \textbf{vertices} of $\mathfrak{g}$ and $E=\mathrm{E}(\mathfrak{g})$ is the set of \textbf{edges} of $\mathfrak{g}$, which is a multiset (i.e.~where repetitions are allowed) over the set $\{V^{2}\}$ of all unordered pairs of elements of $V$. The graph is \textbf{simple} if there are no multiple edges or loops (an edge whose two endpoints coincide). \begin{figure}[!h] \begin{center} \includegraphics[height=4cm]{graph-ex} \caption{(Left) An example of a graph $\mathfrak{g}=(V,E)$ with vertex set $ \displaystyle V= \{1,2,3,4,5\}$ and edge set $ \displaystyle E = \{\hspace{-1mm}\{\{1,1 \},\{1,2 \},\{ 1,2\},\{1,3 \},\{3,2 \},\{2,5 \},\{3,5 \}\}\hspace{-1mm}\} $. The vertex degrees of $1,2,3,4$ in $ \mathfrak{g}$ are respectively $5,4,3,0$. (Center) An example of a subgraph $ \mathfrak{g}'\subgraph \mathfrak{g}$ and (Right) the subgraph induced on the vertices $2,3,4,5$.} \end{center} \end{figure} If $x,y \in V$ and $\{x,y\} \in E$ we say that $x$ and $y$ are \textbf{neighbors} and we write $ x \sim y$. We say that an edge is \textbf{adjacent} to a vertex if it is one of its endpoints, and two edges are adjacent if they are adjacent to a common vertex. The \textbf{degree} of a vertex $x \in V$, denoted by $ \mathrm{deg}_{ \mathfrak{g}}(x)$ (or $ \mathrm{deg}(x)$ if there is no ambiguity), is the number of half-edges adjacent to $x$; in other words, it is the number of edges adjacent to $x$, where loops are counted twice.
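The half-edge convention for degrees can be checked mechanically. The following Python sketch (our own illustration on a small hypothetical multigraph, not the one from the figure) computes degrees by counting half-edges, so that a loop contributes twice to its vertex:

```python
from collections import Counter

# A tiny hypothetical multigraph: a loop at 1, an edge {1,2} and an edge {2,3}.
# Edges are stored as pairs, a loop being a pair (x, x).
vertices = [1, 2, 3]
edges = [(1, 1), (1, 2), (2, 3)]

def degrees(vertices, edges):
    """Degree = number of half-edges at each vertex, so a loop counts twice."""
    deg = Counter({v: 0 for v in vertices})
    for x, y in edges:
        deg[x] += 1
        deg[y] += 1  # when x == y this adds 2 to the same vertex
    return dict(deg)

print(degrees(vertices, edges))  # vertex 1 has degree 3 thanks to its loop
```

Isolated vertices keep degree $0$ because every vertex is initialized before the half-edges are counted.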
A \textbf{subgraph} of $ \mathfrak{g}$ is a graph $ \mathfrak{g'}$ such that $ \mathrm{V}( \mathfrak{g}') \subset \mathrm{V}( \mathfrak{g})$ and where $ \mathrm{E} ( \mathfrak{g}') \subset \mathrm{E}( \mathfrak{g})$. We shall write $ \mathfrak{g}' \sqsubset \mathfrak{g}$ in this case. If $ V' \subset \mathrm{V}( \mathfrak{g})$ the subgraph \textbf{induced} by $ \mathfrak{g}$ on $V'$ is the graph with vertex set $V'$ obtained by keeping only the edges of $ \mathrm{E}( \mathfrak{g})$ whose endpoints are in $V'$. It is denoted by $ \mathfrak{g}[V']$; note that $ \mathfrak{g}[V'] \subgraph \mathfrak{g}$. \paragraph{Graph equivalence.} If $\mathfrak{g}$ and $\mathfrak{g}'$ are two graphs we say that $\mathfrak{g}$ and $\mathfrak{g}'$ are \textbf{equivalent} if they represent the same graph up to renaming the vertex set. Formally this means that there exists a bijection $\phi : \mathrm{V}(\mathfrak{g}) \to \mathrm{V}(\mathfrak{g}')$ which maps the multi-set $ \mathrm{E}(\mathfrak{g})$ to $ \mathrm{E}(\mathfrak{g}')$: such a map is called a graph isomorphism (an automorphism if $ \mathfrak{g}= \mathfrak{g}'$) and we write $ \mathfrak{g} \simeq \mathfrak{g}'$. In this course we shall often implicitly identify two equivalent\footnote{although the space of equivalence classes of all finite connected countable graphs is monstrous, see \cite{vershik1998universal}} graphs. \clearpage \begin{center} \hrulefill \textit{Convention} \hrulefill \end{center} Unless explicitly specified, we shall always suppose that $ \mathrm{E}( \mathfrak{g})$ is finite or countable and that $ \mathfrak{g}$ is \textbf{locally finite}, i.e.~that the vertex degrees are all finite (no vertices of infinite degree). \begin{center} \hrulefill \end{center} \paragraph{Connected graphs.} A \textbf{path} $\gamma = (e_1, e_2, \dots)$ is a sequence of adjacent edges in the graph; its \textbf{length} is the number of edges it contains.
If the starting and ending points of $\gamma$ are the same it is called a \textbf{cycle}. The path $\gamma$ is \textbf{self-avoiding} if $e_i$ and $e_j$ are not adjacent when $|i-j|>1$. The \textbf{graph distance} on $\mathfrak{g}$ is denoted by $ \mathrm{d}_{ \mathrm{gr}}^\mathfrak{g}$ or $ \mathrm{d_{gr}}$ when there is no ambiguity, and is defined for $x,y \in \mathrm{V}( \mathfrak{g})$ by $$ \mathrm{d_{gr}}(x,y) = \mbox{minimal length of a path $\gamma$ going from }x \mbox{ to }y.$$ By convention we put $ \mathrm{d_{gr}}(x,y)=\infty$ if there is no path linking $x$ to $y$ in $\mathfrak{g}$. The equivalence classes from the relation $ x \leftrightarrow y \iff \mathrm{d_{gr}}(x,y) < \infty$ are the connected components of $\mathfrak{g}$. If the connected component of $v_0 \in \mathrm{V}( \mathfrak{g})$ is infinite we write $ v_0\leftrightarrow \infty$. We say that $\mathfrak{g}$ is \textbf{connected} if it has only one connected component. The connected graphs with a minimal number of edges are famously called \textbf{trees}: \begin{proposition}[Tree] \label{prop:tree} Let $\mathfrak{g}=(V,E)$ be a connected graph on $n$ vertices. Then we must have $ \# E \geq n-1$. If $\# E=n-1$ then $\mathfrak{g}$ is a \textbf{tree}, meaning that it has no nontrivial cycle. \end{proposition} \noindent \textbf{Proof.} We can suppose that the vertex set of $\mathfrak{g}$ is $\{1,2,3,\dots ,n\}$. We start with the vertex $1$. Since $\mathfrak{g}$ is connected there exists an edge adjacent to $1$ of the form $\{1, i_{1}\}$. If $i_{1} =1$ this edge is a loop; otherwise it connects $1$ to the new vertex $i_{1} \ne 1$. We then throw this edge away and pick a new edge adjacent to either $1$ or $i_{1}$. Iteratively, after having explored $k$ edges, we have discovered a part of the connected component of $1$ which has at most $k+1$ vertices. Since $\mathfrak{g}$ is connected, all $n$ vertices must eventually be discovered, and it follows that $ \# E \geq n-1$.
In case of equality this means that during the exploration process we have never found an edge linking two vertices already explored; in other words, no nontrivial cycle has been created and $\mathfrak{g}$ is thus a tree. \qed \medskip We record here a useful property (whose proof is left as an exercise) known as K\"onig's lemma, which characterizes infinite connected components via the existence of infinite self-avoiding paths: \begin{lemma}[K\"onig's lemma] \label{lem:konig} Let $ \mathfrak{g}$ be a locally finite graph and let $v_0 \in \mathrm{V}( \mathfrak{g})$. Then the following propositions are equivalent: \begin{enumerate}[(i)] \item The connected component of $v_0$ is infinite, i.e.~$v_0 \leftrightarrow \infty$, \item There is a self-avoiding infinite path starting from $v_0$, \item For every $ n \geq 1$, there is a self-avoiding path starting from $v_0$ and of length $n$. \end{enumerate} \end{lemma} \section{Percolation} \label{sec:percoabstrait} \begin{definition}[Bernoulli bond percolation] Fix a countable graph $ \mathfrak{g}$ and a parameter $p \in [0,1]$. The Bernoulli bond percolation on $ \mathfrak{g}$ with parameter $p$ is the random graph $$ \mathrm{Perc}( \mathfrak{g},p)$$ whose vertex set is $ \mathrm{V}( \mathfrak{g})$ and where each edge $e \in \mathrm{E}( \mathfrak{g})$ is kept independently of the others with probability $p$. The edges kept are called ``open" and those discarded are called ``closed". \end{definition} Obviously, for each $p \in [0,1]$, the random graph $\mathrm{Perc}( \mathfrak{g},p)$ is a subgraph of $ \mathfrak{g}$ which is bigger and bigger as $p$ increases. To make this statement formal, it is useful to \textbf{couple}, i.e.~to realize on the same probability space, all the graphs $\mathrm{Perc}( \mathfrak{g},p)$ for $p \in [0,1]$.
A natural way to do this is to consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ which supports i.i.d.~random variables $ (U_e : e \in \mathrm{E}( \mathfrak{g}))$ uniformly distributed on $[0,1]$ --this is possible since we supposed that $ \mathrm{E}( \mathfrak{g})$ is at most countable--. It is now clear that if we set $$ \mathrm{Perc}( \mathfrak{g},p) = \Big( \mathrm{V}( \mathfrak{g}) ; \left\{ e \in \mathrm{E}( \mathfrak{g}) : U_e \leq p \right\} \Big),$$ then for each $p$, the random graph $\mathrm{Perc}( \mathfrak{g},p)$ is indeed distributed as a percolation on $ \mathfrak{g}$ with parameter $p$, and furthermore $p \mapsto \mathrm{Perc}( \mathfrak{g},p)$ is increasing (for the inclusion of edges). The connected components of $\mathrm{Perc}( \mathfrak{g},p)$ are called \textbf{clusters}. \begin{remark}[History, see \cite{Mendes}] Percolation was designed to model the porosity of coal (used for gas masks during the Second World War). In 1942, Rosalind Franklin (later famous for her part in the discovery of the structure of DNA), working for the British Coal Utilisation Research Association, remarked that the porosity of coal depends on the size of the molecules of the gas and on the temperature at which the coal was formed. Later on, in the 1950s, Simon Broadbent, also working at BCURA as a statistician, together with the mathematician John Hammersley, introduced Bernoulli bond percolation on the grid to model these phenomena. \end{remark} \section{Phase transition} In the rest of the chapter, we focus on graphs $ \mathfrak{g}$ which are infinite and connected. Much of the theory of percolation is focused on the existence of large clusters in $\mathrm{Perc}( \mathfrak{g},p)$.
More precisely, if $ \mathfrak{g}$ is an infinite connected graph, one can ask whether for some parameter $p$, the random graph $\mathrm{Perc}( \mathfrak{g},p)$ has an infinite cluster\footnote{using Lemma \ref{lem:konig} one can prove that this event is indeed measurable with respect to the $\sigma$-field generated by the variables $ \mathbf{1}_{e \mathrm{ \ is \ open }}$ for $e \in \mathrm{E}( \mathfrak{g})$}. Indeed, the function $$ p \mapsto \mathbb{P}( \mathrm{Perc}( \mathfrak{g},p) \mbox{ contains an infinite cluster} ),$$ is easily seen to be increasing using the coupling of Section \ref{sec:percoabstrait}. Since the existence of an infinite cluster in $\mathrm{Perc}( \mathfrak{g},p)$ is an event which is independent of the status of any finite number of edges, it has probability $0$ or $1$ by Kolmogorov's $0$--$1$ law. We say that there is a phase transition if this probability does not depend trivially on $p$: \begin{definition}[$p_c$ and phase transition] We define the \textbf{critical parameter} $p_c( \mathfrak{g})$ as $$ p_c( \mathfrak{g})= \inf \{ p \in [0,1] : \mathbb{P}( \mathrm{Perc}( \mathfrak{g},p) \mbox{ has an infinite cluster})=1 \}.$$ If $p_c( \mathfrak{g}) \in (0,1)$ we say that there is a non trivial phase transition for percolation on $ \mathfrak{g}$. \end{definition} For example, the line graph $ \mathfrak{z}$ whose vertex set is $\mathbb{Z}$ with the edges $\{\{i,i+1\} : i \in \mathbb{Z}\}$ has no phase transition since $p_c( \mathfrak{z}) = 1$. Similarly, the (non-locally finite) graph made of a star with infinite degree has $p_c( \mathrm{star})=0$. We will see in Proposition \ref{prop:lowerperco} that having vertices with large degrees is the only way to achieve $p_c=0$. \\ Knowing whether or not there is an infinite cluster at the critical threshold $p_{c}$ is one of the main open questions in the area: it is widely believed that for ``homogeneous'' graphs there is no infinite cluster at the critical point.
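The monotone coupling of Section \ref{sec:percoabstrait} can also be illustrated numerically. The following Python sketch (our own illustration, not part of the notes) draws one uniform per edge of a finite window of the grid and reuses the same uniforms for every value of $p$, so the size of the largest cluster is nondecreasing in $p$ by construction; the window size and the values of $p$ are arbitrary choices.

```python
import random

def grid_edges(n):
    """Edges of the n x n square grid (a finite window of the lattice)."""
    edges = []
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                edges.append(((x, y), (x + 1, y)))
            if y + 1 < n:
                edges.append(((x, y), (x, y + 1)))
    return edges

class DSU:
    """Union-find structure to merge clusters as open edges are added."""
    def __init__(self, items):
        self.parent = {v: v for v in items}
        self.size = {v: 1 for v in items}
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster(n, p, U):
    """Largest cluster of Perc(grid, p) under the coupling: e open iff U[e] <= p."""
    verts = [(x, y) for x in range(n) for y in range(n)]
    dsu = DSU(verts)
    for e in grid_edges(n):
        if U[e] <= p:
            dsu.union(*e)
    return max(dsu.size[dsu.find(v)] for v in verts)

rng = random.Random(0)
n = 20
U = {e: rng.random() for e in grid_edges(n)}  # one uniform per edge, shared by all p
sizes = [largest_cluster(n, p, U) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(sizes)  # nondecreasing in p, from isolated vertices to the full window
```

Because all values of $p$ share the same uniforms, the open edge sets are nested and no Monte Carlo averaging is needed to see the monotonicity.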
\begin{remark} The terminology ``phase transition" comes from the fact that around the critical parameter $p_c$, a slight variation of the parameter $p$ induces dramatic changes in the large scale geometry of the random graph $ \mathrm{Perc}( \mathfrak{g},p)$. This can be used to model physical phase transitions (such as the transformation of water into ice when the temperature drops below $0^{\circ}$C). \end{remark} \section{Two examples} In the rest of this section we shall prove the existence of a non-trivial phase transition for percolation on two infinite graphs: the infinite binary tree and the cubic planar lattice. We shall use the so-called first and second moment methods, which are a (sometimes subtle) application of the Markov\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{markov}} Andrei Andreevich Markov (1856--1922), Russian} and Cauchy--Schwarz\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{cauchy}} Augustin Louis Cauchy (1789--1857), French \raisebox{-5mm}{\includegraphics[width=1cm]{schwarz}} Hermann Amandus Schwarz (1843--1921), German} inequalities and which will accompany us all along this course. Our first proposition shows that the critical parameter must be positive as long as the underlying graph $ \mathfrak{g}$ has bounded degree. \begin{proposition} \label{prop:lowerperco} Let $ \mathfrak{g}$ be an (infinite connected countable) graph such that $$ \max_{v\in \mathrm{V}( \mathfrak{g})} \mathrm{deg}(v) \leq M.$$ Then we have $ \mathbb{P}( \exists \mbox{ infinite cluster in }\mathrm{Perc}( \mathfrak{g},p)) =0$ as long as $p (M-1)< 1$. \end{proposition} The proof of this proposition is our first application of the \textbf{first moment method}, which we single out as a lemma: \begin{lemma}[First moment method]\label{def:first-moment} Let $X \in \{0,1,2,\dots\}$ be a non-negative integer valued random variable.
Then we have $$ \mathbb{P}(X \geq 1) \leq \mathbb{E}[X].$$ \end{lemma} \noindent \textbf{One-line proof:} Since $X \in \mathbb{Z}_{\geq0}$ we have $ \mathbb{P}(X\geq 1) =\mathbb{E}[ \mathbf{1}_{X>0}]\leq \mathbb{E}[X \mathbf{1}_{X>0}] = \mathbb{E}[X]$. \qed \medskip \noindent \textbf{Proof of Proposition \ref{prop:lowerperco}.} Let us consider a reference vertex $v_0$ in $ \mathfrak{g}$ and let $X(p)= \mathbf{1}_{v_0 \leftrightarrow \infty}$. Our goal is to show that $X(p)=0$ almost surely for $p$ small. For this, we shall use the proxy random variables $X_n(p)$ counting the number of self-avoiding paths starting from $v_0$ of length $n$ and made of open edges in $\mathrm{Perc}( \mathfrak{g},p)$. Clearly, since the degree of each vertex in $ \mathfrak{g}$ is bounded above by $M$, there are at most $ M\cdot (M-1)^{n-1}$ non-backtracking paths of length $n$ starting from $v_0$ in $ \mathfrak{g}$. Since there are more non-backtracking paths than self-avoiding paths, by independence of the status of the edges we have $$ \mathbb{E}[ X_n(p)] \leq M \cdot (M-1)^{n-1} \cdot p^n.$$ Lemma \ref{lem:konig} shows that $v_{0}$ is in an infinite cluster if and only if there is a self-avoiding path of arbitrary length starting from $v_{0}$. We deduce that \begin{eqnarray*} \mathbb{P}( v_0 \leftrightarrow \infty \mbox{ in }\mathrm{Perc}( \mathfrak{g},p)) & \underset{ \mathrm{Lem.\ } \ref{lem:konig} }{=}& \mathbb{P}( X_n(p) \geq 1, \forall n \geq 1)\\ & \leq & \inf_{n\geq 1} \mathbb{P}(X_n(p) \geq 1)\\ & \underset{ \mathrm{First\ Moment }}{\leq} & \inf_{n \geq 1} \mathbb{E}[ X_n(p)] \\ & \leq & \inf_{n \geq 1 }M \cdot (M-1)^{n-1} \cdot p^n. \end{eqnarray*} Hence if $p (M-1)<1$ the above probability is $0$. By countable union over all $v_0 \in \mathrm{V}( \mathfrak{g})$, the probability that there exists an infinite cluster (at all) is also zero in this regime. \qed \subsection{Regular $d$-ary tree} Fix $d \geq 3$.
Let us suppose in this section that $ \mathfrak{g}$ is the infinite $(d-1)$-ary tree $ \mathfrak{t}_d$ where all vertices have degree $d$ except for the origin vertex $v_0$ which has degree $d-1$ (so that there are exactly $(d-1)^n$ vertices at distance $n$ from $v_0$). By Proposition \ref{prop:lowerperco} we have $p_c( \mathfrak{t}_d) \geq 1/ (d-1)$ and in fact this lower bound is sharp: \begin{proposition} \label{prop:binarythreshold} We have $p_c( \mathfrak{t}_d)= \frac{1}{d-1}$. \end{proposition} To prove the proposition we shall now use the \textbf{second moment method}: \begin{lemma}[Second moment method]\label{def:second-moment} Let $X \in \{0,1,2,\dots\}$ be a non-negative integer valued random variable which is not identically $0$. Then we have $$ \mathbb{P}(X \geq 1) \geq \frac{\mathbb{E}[X]^{2}}{ \mathbb{E}[X^{2}]}.$$ \end{lemma} \noindent \textbf{One-line proof:} Use Cauchy--Schwarz: $\mathbb{E}[X]^{2}= \mathbb{E}[X \mathbf{1}_{X>0}]^{2} \leq \mathbb{E}[X^{2}] \mathbb{P}(X>0)$. \qed \medskip \noindent \textbf{Proof of Proposition \ref{prop:binarythreshold}.} Let us focus on the case $d=3$ to ease notation. Recall from the proof of Proposition \ref{prop:lowerperco}, applied with $v_0$ the origin of $ \mathfrak{t}_3$, that $X_n(p)$ is the number of open paths in $\mathrm{Perc}( \mathfrak{t}_3,p)$ starting at $v_0$ and reaching level $n$. When $p > 1/2$ we know that $ \mathbb{E}[X_n(p)] = (2p)^n$ tends to infinity, but \textbf{that does not imply} that $X_n(p) \geq 1$ with large probability.
To ensure this, we shall compute the second moment of $X_n(p)$: \begin{eqnarray*}\mathbb{E}[\left(X_n(p)\right)^2]&=& \mathbb{E}\left[\left(\sum_{x : \mathrm{d_{gr}}(x,v_0)=n} \mathbf{1}_{v_0 \leftrightarrow x \mbox{ in } \mathrm{Perc}( \mathfrak{t}_3,p)} \right)^2\right]\\ &=& \sum_{x,y : \mathrm{d_{gr}}(x,v_0)=\mathrm{d_{gr}}(y,v_0)=n } \mathbb{P}(v_0 \leftrightarrow x \mbox{ and } v_0 \leftrightarrow y \mbox{ in } \mathrm{Perc}( \mathfrak{t}_3,p))\\ &=& (2p)^n \left( 1 + p+ 2p^2 + 4 p^3+\dots +2^{n-1}p^n \right) \sim \frac{p}{2p-1} (2p)^{2n}, \end{eqnarray*} as $n \to \infty$ for $p >1/2$. We thus find that the second moment of $X_n(p)$ is of the same order as the square of the first moment. Applying Lemma \ref{def:second-moment} we deduce that $ \mathbb{P}(X_n(p)>0) \geq \mathbb{E}[X_n(p)]^2/\mathbb{E}[X_n(p)^2] \geq \frac{2p-1}{p}$ asymptotically. We deduce as in the proof of Proposition \ref{prop:lowerperco} that $$ \mathbb{P}( v_0 \leftrightarrow \infty \mbox{ in } \mathrm{Perc}( \mathfrak{t}_3,p)) = \inf_{n \geq 1} \mathbb{P}(X_n(p)\geq 1) \geq \frac{2p-1}{p}.$$ By the $0$--$1$ law there is an infinite cluster in $\mathrm{Perc}( \mathfrak{t}_3,p)$ with probability $1$ when $p >1/2$. \qed \medskip \begin{exo} Show that there is no infinite cluster in $ \mathrm{Perc}( \mathfrak{t}_{d},p)$ at $p= \frac{1}{d-1}$. \label{exo:11} \end{exo} Of course, the knowledgeable reader may have noticed that the open subtree of the origin in $ \mathrm{Perc}( \mathfrak{t}_d,p)$ is a Bienaym\'e--Galton--Watson tree with offspring distribution $ \mathrm{Bin}(d-1,p)$. The phase transition for the existence of an infinite cluster happens when $p(d-1)$, the mean number of children in the Bienaym\'e--Galton--Watson tree, becomes larger than $1$. We shall study Bienaym\'e--Galton--Watson trees in more detail in Part \ref{part:trees} and in particular get a new proof of the above proposition.
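The branching-process viewpoint already gives a numerical handle on the phase transition: the survival probability of the open cluster of $v_0$ is $1-q$, where $q$ is the smallest fixed point of the generating function $f(s)=(1-p+ps)^{d-1}$ of the $\mathrm{Bin}(d-1,p)$ offspring law, obtained by iterating $f$ from $0$. A minimal Python sketch of this classical computation (our own illustration; the iteration count is an arbitrary choice):

```python
def survival_probability(d, p, iters=2000):
    """Survival probability of the open cluster of v0 in Perc(t_d, p).

    The open subtree below v0 is a Bienaymé-Galton-Watson tree with
    Bin(d-1, p) offspring; its extinction probability q is the smallest
    fixed point of f(s) = (1 - p + p*s)**(d-1), reached by iterating f
    starting from 0.
    """
    q = 0.0
    for _ in range(iters):
        q = (1.0 - p + p * q) ** (d - 1)
    return 1.0 - q

# Phase transition at p_c = 1/(d-1): for d = 3 the threshold is 1/2.
for p in (0.4, 0.5, 0.6):
    print(p, survival_probability(3, p))
```

For $d=3$ one can solve the fixed-point equation by hand: at $p=0.6$ it reads $q=(0.4+0.6q)^2$, giving $q=4/9$ and survival probability $5/9$, while for $p\leq 1/2$ the iteration converges to $1$ and the cluster dies out, in accordance with Proposition \ref{prop:binarythreshold}.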
\subsection{Cubic lattice $ \mathbb{Z}^2$} Let us now focus on the case when $ \mathfrak{g}$ is the standard Manhattan lattice i.e.~the cubic lattice in dimension $2$. This is the usual grid graph, whose vertex set is $ \mathbb{Z}^2$ and where an edge joins the point $x$ to the point $x+ e$ for $e \in \{(0,\pm1), (\pm1,0)\}$. Let us denote this graph by $ \mathfrak{z}_2$. We know from Proposition \ref{prop:lowerperco} that $p_c \geq 1/3$, but the second moment method does not work well in this setting since two paths of length $n$ may have a very complicated structure. To show that $p_c <1$, we shall rely on another argument specific to planar lattices. \begin{proposition} \label{prop:Z2} We have $ 0 < p_c( \mathfrak{z}_2) <1.$ \end{proposition} \noindent\textbf{Proof.} The idea is to use plane duality. More precisely, if the cluster of the origin vertex $(0,0)$ is finite in $ \mathrm{Perc}( \mathfrak{z}_2,p)$ this forces the existence of a \textbf{blocking self-avoiding cycle} in the dual graph, see Figure \ref{fig:blocking}. \begin{figure}[!h] \begin{center} \includegraphics[width=7cm]{blockingpath} \caption{ \label{fig:blocking} If the cluster of the origin is finite, then it is surrounded by a blocking dual self-avoiding cycle of length at least $4$.} \end{center} \end{figure} Since the dual graph of $ \mathfrak{z}_2$ is $ \mathfrak{z}_2$ itself, there are at most $4\cdot3^{n-1}$ dual cycles of length $n$ starting from the origin, and at most $n \cdot 4 \cdot 3^{n-1}$ such cycles blocking the origin (re-root such a cycle at its first intersection with the positive horizontal axis, which must be at distance less than $n$ from the origin). We can now use the first-moment method on these blocking cycles: the probability that there exists a blocking cycle in the dual graph is upper bounded by the expected number of such cycles, so that $$ \mathbb{P}( \exists \mbox{ blocking cycle}) \leq \sum_{n \geq 4} n 4^{n} (1-p)^n.$$ The above sum can be made smaller than $1$ if $p$ is close enough to $1$.
In this case we get $ \mathbb{P}( (0,0) \leftrightarrow \infty \mbox{ in } \mathrm{Perc}( \mathfrak{z}_2,p))>0$ and so $p_c( \mathfrak{z}_2) <1$. \qed \begin{remark} In essence, the duality argument shows that the percolation on $ \mathfrak{z}_2$ is self-dual at $p=1/2$ and this is one of the key ingredients to prove that $p_c( \mathfrak{z}_2)= 1/2$ (a result due to Kesten). \end{remark} Since the two-dimensional cubic lattice is included in its higher dimensional analog in a trivial fashion, we deduce that there is a non-trivial phase transition in $ \mathfrak{z}_{d}$ for any $d \geq 2$. The nature of the phase transition in low dimensions $3,4,5,6\dots$ (and also in dimension $2$ to some extent) is still elusive. \section{Mean-field regime} In Part \ref{part:ER}, we will study the case when the underlying graph $ \mathfrak{g}$ is the complete graph $ \mathbb{K}_n$ on $n$ vertices. This is the graph made of the vertices $1,2, \dots , n$ and where there is an edge between any two distinct vertices (and no loops). One of the main objects in this course is obtained by studying $ \mathrm{Perc}( \mathbb{K}_{n},p)$ when $p$ may vary with $n$. This random graph, usually referred to as the \textbf{Erd{\H{o}}s--R\'enyi random graph}, will be denoted by $G(n,p)$. This model was introduced\footnote{Actually this definition of random graph is not really due to Erd{\H{o}}s and R\'enyi who considered a random graph on $n$ vertices with a fixed number $m \leq {n \choose 2}$ of edges. However, once conditioned on the number of edges the two models are equivalent and we shall use the name Erd{\H{o}}s--R\'enyi instead of Edgar Gilbert who introduced this variant.} by Erd{\H{o}}s and R\'enyi\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{erdos}} Paul Erd{\H{o}}s (1913--1996), Hungarian and \raisebox{-5mm}{\includegraphics[width=1cm]{renyi}} Alfr\'ed R\'enyi (1921--1970), Hungarian} in 1959 who wanted to randomly probe a graph with $n$ (labeled) vertices.
This random graph model has become ubiquitous in probability and is commonly referred to as the ``\textbf{mean field model}''. This means that the initial geometry of the model is trivial: one could permute all the vertices and get the same model. \medskip There is a convenient way to couple all these realizations including the case $n= \infty$: consider the \textbf{complete} graph $ \mathbb{K}_{\infty}$ whose vertex set is $ \mathbb{Z}_{>0} = \{1,2,3, \dots \}$ and whose edge set is $ \{ \{i,j\} : i \ne j \in \mathbb{Z}_{>0}\}$ (hence, an edge between any possible pair of distinct vertices). This graph is connected and countable although it is not locally finite. We can then consider for each edge $e= \{i,j\}$ an independent uniform random variable $U_{ e} \in [0,1]$ and set for each $n \in \{1,2, \dots\}$ and $p \in [0,1]$ \begin{eqnarray} \label{def:erdosrenyicoupled} G(n, p) = \Big( \underbrace{\big\{ i \in \mathbb{Z}_{>0} : i \leq n\big\}}_{ \mathrm{vertex\ set}}, \underbrace{\big\{ \{i,j\} : i,j \leq n \mbox{ and } U_{\{i,j\}} \leq p\big\}}_{ \mathrm{edge\ set}} \Big). \end{eqnarray} Once again, studying the properties of $G(n, p_{n})$ for finite $n$ and varying parameter $p \equiv p_{n}$ will be the subject of the whole Part \ref{part:ER}. To conclude this chapter, let us focus on the case when $n = \infty$ and $p>0$. It should be clear to the reader that $G(\infty, p)$ is almost surely connected, but the following result might come as a surprise: \begin{theorem}[Erd{\H{o}}s--R\'enyi (1963)] \label{thm:rado} For any $p,p' \in (0,1)$ almost surely $G( \infty,p)$ and $ G(\infty,p')$ are equivalent. In particular, the equivalence class of non-trivial Bernoulli bond-percolation on $ \mathbb{K}_\infty$ is almost surely constant: this is the \textbf{Rado graph}. \end{theorem} Recall that two graphs $ \mathfrak{g}, \mathfrak{g}'$ are equivalent if there is a bijection of their vertex sets which preserves the adjacency properties.
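The coupling \eqref{def:erdosrenyicoupled} is straightforward to implement: a single family of uniforms $(U_{\{i,j\}})$ serves every $n$ and every $p$ simultaneously, and the realizations are then automatically nested in both parameters. A small sketch (in Python; the sizes and parameters below are arbitrary illustrative choices):

```python
import random

rng = random.Random(0)
N = 40  # largest n we shall look at
# one uniform U_{i,j} per edge of K_N, shared by every realization below
U = {(i, j): rng.random() for i in range(1, N + 1) for j in range(i + 1, N + 1)}

def G(n, p):
    """Edge set of the coupled Erdos--Renyi graph G(n, p) as in
    (def:erdosrenyicoupled): keep the edge {i, j} exactly when U_{i,j} <= p."""
    return {(i, j) for (i, j) in U if j <= n and U[(i, j)] <= p}

# the coupling makes the realizations nested in p and in n, for free
assert G(30, 0.2) <= G(30, 0.5)  # more edges as p grows
assert G(30, 0.2) <= G(40, 0.2)  # G(n, p) sits inside G(n + 1, p)
```

Monotone statements about $G(n,p)$ (in $p$ or in $n$) are conveniently proved along this coupling, since increasing either parameter only adds edges.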
The proof of the result is easy once we know the following characteristic property of the Rado\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{rado}} Richard Rado (1906--1989), German } graph (over the vertex set $ \mathbb{ Z}_{>0}$): it is the only (equivalence class of) countable simple graph such that for any finite disjoint subsets $U,V \subset \mathbb{Z}_{>0}$, there exists $v$ outside of $U$ and $V$ such that $v$ is a neighbor of all vertices in $U$ and of none in $V$. The previous property is called the \textbf{extension property} and can be used to prove by induction that any finite or countable graph $ \mathfrak{g}$ can be embedded inside the Rado graph. See the excellent Wikipedia article on the Rado graph for more details. \medskip \noindent \textbf{Proof of Theorem \ref{thm:rado}.} Let us check that $G( \infty,p)$ with $p \in (0,1)$ almost surely satisfies the extension property. Fix finite disjoint subsets $U,V \subset \mathbb{Z}_{>0}$. For $v \in \mathbb{Z}_{>0} \backslash (U \cup V)$ the probability that $v$ is connected to all vertices of $U$ and none of $V$ is $ p^{\# U} (1-p)^{\# V}>0$. By independence and the Borel--Cantelli lemma, there exists $v$ connected to all the vertices of $U$ and none of $V$ in $G( \infty,p)$ with probability one. The property holds true for all finite disjoint subsets $U,V \subset \mathbb{Z}_{>0}$ simultaneously by countable union. \qed \bigskip \noindent\textbf{Bibliographical notes.} Percolation theory is a very broad and lively area of modern probability theory, \cite{grimmett1997percolation,werner2007lectures,duminil2017sixty}. When the underlying graph has strong geometric constraints (e.g.~the cubic lattices in $ \mathbb{Z}^{d}$ for $d \geq 2$) then the study of the phase transition and in particular of the critical behavior is still a challenge for mathematicians. Theorem \ref{thm:rado} is proved in \cite{erdos1963asymmetric}.
For more about the links between the geometry of the graph and the behavior of Bernoulli percolation we advise the reading of the influential paper \cite{benjamini1996percolation}. \medskip \noindent{\textbf{Hints for Exercises.}}\ \\ \noindent Exercise \ref{exo:11}: With the notation above the exercise, show that $ \mathbb{E}[X_{n}(p) \mid X_{n}(p) \geq 1] \to \infty$ as $n \to \infty$ when $p = \frac{1}{d-1}$. \part[Bienaym\'e--Galton--Watson trees]{Bienaym\'e-Galton-Watson trees \\ \\ \label{part:trees} \begin{center} \begin{minipage}[l]{15cm} \normalsize In this part we study the model of Bienaym\'e--Galton--Watson (BGW) tree, or discrete branching process, modeling the genealogy of an asexual population where individuals reproduce independently of each other according to the same offspring distribution. The main tool to study such objects is their encodings by one-dimensional random walks. \end{minipage} \end{center} \vspace{1cm} \begin{figure}[!h] \begin{center} \includegraphics[height=8cm]{arbreCRT} \caption{A large critical Bienaym\'e--Galton--Watson tree with finite variance \label{fig:CRT}} \end{center} \end{figure} } \chapter{One-dimensional random walks} \hfill Back to basics. \label{chap:generalRW} In this chapter we consider the following object: \begin{definition}[One-dimensional random walk] Let $\mu = ( \mu_k : k \in \mathbb{Z})$ be a probability distribution on $ \mathbb{Z}$ with $\mu( \mathbb{Z}_{>0})>0$ as well as $\mu( \mathbb{Z}_{<0}) >0$. Consider $X_{1}, X_{2}, \ldots$ i.i.d.\,copies of law $\mu$ which we see as the increments of the process $(S) \equiv (S_{n}: n \geq 0)$ on $ \mathbb{Z}$ defined as follows : $S_{0}=0$ and for $n \geq 1$ $$ S_{n} = X_{1}+ \cdots + X_{n}.$$ We say that $(S)$ is a one-dimensional random walk with step distribution $\mu$ (or $\mu$-random walk for short). 
\end{definition} \begin{figure}[!h] \begin{center} \includegraphics[height=4cm]{normalpoint} \hspace{1cm} \includegraphics[height=4cm]{stablepoint} \caption{Two samples of one-dimensional random walks with different step distributions. The first one seems continuous at large scales whereas the second one displays macroscopic jumps.} \end{center} \end{figure} Notice that we restrict (for simplicity) to the \textbf{lattice case} by demanding that the support of $\mu$ be included in $ \mathbb{Z}$, and that we exclude the monotone situation by demanding that the support of $\mu$ contain both positive and negative integers. Of course, the behavior of a one-dimensional random walk depends on the step distribution $ \mu$ in a non-trivial way as we will see. We first recall the general background on such objects before moving to \textbf{skip-free random walks} which can only make negative jumps of size $-1$ and which will be used in the next chapters to study random trees and graphs. \section{General theory} In this section we gather a few general results on one-dimensional random walks and start with the applications of discrete Markov chain theory since a one-dimensional random walk is clearly a very particular case of Markov chain in discrete time with a discrete state space. \subsection{Reminder on Markov chains} We start with periodicity considerations: \begin{proposition} The Markov chain $(S_n: n \geq 0)$ is \begin{itemize} \item \textbf{irreducible} if $ \mathrm{Supp}(\mu)$ is not included in $ a \mathbb{Z}$ for some $a>1$, \item furthermore \textbf{aperiodic} if $ \mathrm{Supp}(\mu)$ is not included in $ b + a \mathbb{Z}$ for some $a>1$ and $ b \in \mathbb{Z}$. \end{itemize} \end{proposition} \noindent \textbf{Proof.} Using the fact that the walk is not monotone, it is an exercise to check that the chain can come back to $0$ with positive probability and so the set of integers accessible by the chain starting from $0$ is a subgroup of $ \mathbb{Z}$.
Writing a B\'ezout relation we can find non-negative integers $\alpha_{1}, \dots , \alpha_{k}, \alpha'_{1}, \dots , \alpha'_{k'}$ and $\ell_{1}, \dots , \ell_{k},\ell'_{1}, \dots , \ell'_{k'} \in \mathrm{Supp}( \mu)$ so that $$ \alpha_{1}\ell_{1} + \dots + \alpha_{k} \ell_{k }= \alpha'_{1}\ell'_{1} + \dots + \alpha'_{k'} \ell'_{k' } +\mathrm{gcd}( \mathrm{Supp}(\mu)).$$ Hence, by the above consideration $\mathrm{gcd}( \mathrm{Supp}(\mu))$ is an accessible value for the walk and the first point is proved. For the second point, notice that if $\mathrm{Supp}(\mu) \subset b + a \mathbb{Z}$ then $$S_n \equiv bn \quad [a]$$ and so the chain cannot be aperiodic if $|a|>1$. If $\mathrm{Supp}(\mu)$ is not included in $b + a \mathbb{Z}$ for $|a|>1$, then we have $ \mathrm{gcd} (\mathrm{Supp}( \mu) - k_0) =1$ where $k_0$ is any integer in the support of $ \mu$. We pick $k_0 \in \mathrm{Supp}( \mu)$ such that the measure $\tilde{\mu}(\cdot) = \mu(k_0 + \cdot)$ does not put all its mass on $ \mathbb{Z}_{\geq0}$ nor on $ \mathbb{Z}_{\leq 0}$. This is possible since otherwise $ \mathrm{Supp}( \mu) = \{ \alpha,\beta\}$ with $\alpha < 0 <\beta$ and so $ \mathrm{Supp}(\mu) \subset \alpha +(\beta-\alpha) \mathbb{Z}$ which is excluded by hypothesis. Then, by the first point of the proposition, we can find $n_0\geq 0$ large enough so that a $\tilde{\mu}$-random walk satisfies $ \mathbb{P}( \widetilde{S}_{n_0} = k_0) >0$ which means that $$ \mathbb{P}(S_{n_0} = (n_{0}+1) k_0) >0.$$ Combining this with the trivial point $ \mathbb{P}(S_{n_0+1} = (n_0+1) k_0) \geq (\mu_{k_{0}})^{{n_{0}+1}} >0$ we deduce that the integer $(n_0+1) k_0$ is accessible both at time $n_0$ and time $n_0+1$ for the chain. By standard results on Markov chains this implies aperiodicity. \qed \medskip \begin{example} Simple random walk on $ \mathbb{Z}$ with $ \mathbb{P}(S_1= \pm 1)= \frac{1}{2}$ is irreducible but not aperiodic.
\end{example} The counting measure on $ \mathbb{Z}$ is clearly an invariant measure for any $\mu$-random walk (beware, it is not usually reversible, and it might not be the only invariant measure up to multiplicative constant in the transient case). Due to homogeneity of the process, the Markov property takes a nice form in our setup: as usual $ \mathcal{F}_{n} = \sigma ( X_{1}, \dots , X_{n})$ is the natural filtration generated by the walk $(S)$ up to time $n$ and a \textbf{stopping time} is a random variable $\tau \in \{0,1,2, \dots \} \cup\{\infty\}$ such that for each $n \geq 0$ the event $\{ \tau =n \}$ is measurable with respect to $ \mathcal{F}_{n}$. \begin{proposition}[Strong Markov property] \label{prop:strongmarkov}If $\tau$ is a stopping time then conditionally on $\{\tau < \infty\}$ (implicitly of positive probability) the process $(S^{(\tau)}_{n})_{n \geq 0}=(S_{\tau+n}- S_{\tau})_{n \geq 0}$ is independent of $( S_{n})_{ 0 \leq n\leq \tau}$ and is distributed as the initial walk $(S_{n})_{n \geq 0}$. 
\end{proposition} \noindent \textbf{Proof.} Let $f,g$ be two positive measurable functions and let us compute \begin{eqnarray*} \mathbb{E}\left[f\left((S_{n})_{ 0 \leq n \leq \tau}\right)g\left((S^{(\tau)}_{n})_{n \geq 0}\right) \mathbf{1}_{\tau < \infty}\right] &\underset{\tau < \infty}{=}& \sum_{ k=0}^{\infty} \mathbb{E}\left[ \mathbf{1}_{\tau=k} f\left((S_{n})_{ 0 \leq n \leq k}\right)g\left((S^{(k)}_{n})_{n \geq 0}\right) \right] \\ & \underset{\mathrm{indep}.}{=} & \sum_{ k=0}^{\infty} \mathbb{E}\left[ \mathbf{1}_{\tau=k} f\left((S_{n})_{ 0 \leq n \leq k}\right)\right] \mathbb{E}\left[g\left((S^{(k)}_{n})_{n \geq 0}\right) \right]\\ & \underset{\mathrm{stat}.}{=} & \sum_{ k=0}^{\infty} \mathbb{E}\left[ \mathbf{1}_{\tau=k} f\left((S_{n})_{ 0 \leq n \leq k}\right)\right] \mathbb{E}\left[g\left((S_{n})_{n \geq 0}\right) \right]\\ &=& \mathbb{E}\left[ f\left((S_{n})_{ 0 \leq n \leq \tau}\right) \mathbf{1}_{\tau < \infty}\right] \mathbb{E}\left[g\left((S_{n})_{n \geq 0}\right) \right]. \end{eqnarray*} This proves the proposition. \qed \subsection{$0-1$ laws} In the study of random walks, one often uses $0-1$ laws when dealing with asymptotic events such as $\{S_{n} \to \infty\}$. The most well-known of such laws is Kolmogorov's\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{kolmogorov}}Andre\"i Nikola\"ievitch Kolmogorov (1903--1987), Russian} $0-1$ law which states that if $(X_{i})_{i \geq 0}$ are independent random variables (not necessarily identically distributed), then any event $ \mathcal{A}$ measurable with respect to $\sigma( X_{i}: i \geq 0)$ and which is independent of $(X_{1}, \dots , X_{n_{0}})$ for any $n_{0}$ has measure $ \mathbb{P}( \mathcal{A}) \in \{0,1\}$. Let us present a stronger version of Kolmogorov $0-1$ law in the case of \textbf{i.i.d.}\ increments. 
This $0-1$-law, due to Hewitt \& Savage\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{hewitt}} Edwin Hewitt (1920--1999), \raisebox{-5mm}{\includegraphics[width=1cm]{savage}} Leonard Savage (1917--1971), American}, has many applications in the random walk setting: \clearpage \begin{theorem}[Hewitt--Savage exchangeable $0-1$ law]\label{thm:hewitt-savage}Let $(X_{i})_{ i \geq 1}$ be a sequence of independent and \underline{identically distributed} random variables with values in a Polish space $(E,d)$. Suppose that $ \mathcal{A}$ is a measurable event with respect to $ \sigma (X_{i}: i \geq 1)$ which is invariant (up to negligible events) by any permutation of the $(X_{i} : i \geq 1)$ with finite support. Then $ \mathbb{P}( \mathcal{A}) \in \{0,1 \}$. \end{theorem} \noindent \textbf{Proof.} Let $ \mathcal{A} \in \sigma (X_{i} : i \geq 1)$ be invariant by any permutations of the $X_{i}$ with finite support (i.e.~only finitely many terms are permuted). By a standard measure-theory argument (see \cite[Lemma 3.16]{Kal07}) one can approximate $ \mathcal{A}$ by a sequence of events $ \mathcal{A}_{{n}} \in \sigma( X_{1}, \dots , X_{n})$ in the sense that $$ \mathbb{P}( \mathcal{A} \Delta \mathcal{A}_{n}) \xrightarrow[n\to\infty]{} 0.$$ By definition, any event $ \mathcal{E} \in \sigma (X_{i} : i \geq 1)$ can be written $ \mathcal{E} = \mathbf{1}_{ (X_{i} : i \geq 1) \in \tilde{\mathcal{E}}}$ where $ \tilde{\mathcal{E}}$ is an event of the Borel cylindric $\sigma$-field on $ E^{ \mathbb{Z}_{> 0}}$. 
We can thus consider the function $\psi_{n}$ acting on events $ \mathcal{E} \in \sigma( X_{i}: i \geq 1)$ by swapping $X_{1}, \dots , X_{n}$ with $ X_{n+1}, \dots , X_{2n}$~i.e.~ $$ \psi_{n}( \mathcal{E}) = \mathbf{1}_{X_{n+1}, \dots ,X_{2n}, X_{1}, \dots , X_{n}, X_{2n+1}, \dots \in \tilde{ \mathcal{E}}} \in \sigma( X_{i}: i \geq 1).$$ Since the $X_{i}$ are i.i.d.\, we have $ \mathbb{P}( \psi_{n} (\mathcal{E})) = \mathbb{P} ( \mathcal{E})$ for any event $ \mathcal{ E}$ and also $ \psi_{n}( \mathcal{A}_{n})$ is independent of $ \mathcal{A}_{n}$. Using this we have $$0 \xleftarrow[n \to \infty]{} \mathbb{P}(\mathcal{A} \Delta \mathcal{A}_{n})=\mathbb{P}(\psi_{n}( \mathcal{A} \Delta \mathcal{A}_{n})) = \mathbb{P}( \psi_{n}(\mathcal{A}) \Delta \psi_{n}( \mathcal{A}_{n})) = \mathbb{P}( \mathcal{A} \Delta \psi_{n} (\mathcal{A}_{n})).$$ We deduce that $ \mathcal{A}$ is very well approximated both by $ \mathcal{A}_{n}$ and by $ \psi_{n}( \mathcal{A}_{n})$. Since the last two events are independent we deduce that $ \mathbb{P}( \mathcal{A}) \in \{0,1\}$ because $$ \mathbb{P}( \mathcal{A}) = \lim_{n \to \infty} \mathbb{P}( \mathcal{A}_{n} \cap \psi_{n} ( \mathcal{A}_{n})) \underset{ \mathrm{indept.}}{=} \lim_{n \to \infty} \mathbb{P}( \mathcal{A}_{n}) \mathbb{P}( \psi_{n}( \mathcal{A}_{n})) \underset{ \mathrm{i.d.}}{=} \lim_{n \to \infty}\mathbb{P}( \mathcal{A}_{n})^{2} = \mathbb{P}( \mathcal{A})^{2}. $$ \qed \medskip \begin{example} \label{ex:O1} If $A \subset \mathbb{Z}$ is a subset and $(S)$ a one-dimensional random walk with i.i.d.~increments, we write \begin{eqnarray} \label{eq:IA} \mathcal{I}_{A}:= \sum_{n = 0}^{\infty} \mathbf{1}_{S_{n} \in A}. \end{eqnarray} Then the commutativity of $ \mathbb{Z}$ (sic!) shows that the event $ \{ \mathcal{I}_{A} = \infty\}$ is invariant under finite permutations of the $X_{i}$'s (indeed any finite permutation leaves $S_{n}$ invariant for large $n$); hence it has probability $0$ or $1$.
Notice that this cannot be deduced directly from Kolmogorov's $0-1$ law. \end{example} \subsection{Asymptotic behavior} Let us denote $$ \overline{S} := \limsup_{n \to \infty} S_n\quad \mbox{ and } \quad \underline{S} := \liminf_{n \to \infty} S_n.$$ For any $k \in \mathbb{Z}$, the probability that $ \overline{S}$ or $\underline{S}$ is equal to $k$ is null, since otherwise the walk would take the value $k$ an infinite number of times with positive probability: by the classification of states, the walk would be recurrent and so would visit the whole subgroup $ \mathrm{gcd}( \mathrm{Supp}(\mu)) \cdot \mathbb{Z}$ almost surely which is incompatible with a finite $\limsup$ or $\liminf$ (recall that $\mu( \mathbb{Z}_{>0})$ and $\mu( \mathbb{Z}_{<0})$ are positive). This motivates the following definition: \begin{definition} \label{def:prop:oscillates} A (non-trivial) one-dimensional random walk $(S)$ with i.i.d.\ increments falls into exactly one of the three following categories: \begin{enumerate}[(i)] \item Either $\overline{S} = \underline{S} = \infty$, that is $ S_{n} \xrightarrow[n\to\infty]{} \infty$ in which case $(S)$ is said to \textbf{drift towards $\infty$}, \item Or $\overline{S} = \underline{S} = -\infty$, that is $ S_{n} \xrightarrow[n\to\infty]{} - \infty$ in which case $(S)$ is said to \textbf{drift towards $-\infty$}, \item Or $(S)$ \textbf{ oscillates} i.e.~$\limsup_{n \to \infty} S_{n}= +\infty$ and $\liminf_{n \to \infty} S_{n} = -\infty$ almost surely. \end{enumerate} \end{definition} When a random walk drifts, it is obviously transient, but in the oscillating case, it may be transient or recurrent, see Theorem \ref{prop:discretetransient} for examples. \begin{remark} \label{rek:>0pos>0}If the random walk $(S)$ drifts towards $+\infty$, then $$ \mathbb{P}( S_i >0 : \forall i \geq 1)>0.$$ Indeed, if we had $\mathbb{P}( S_i >0 : \forall i \geq 1) =0$ then the stopping time $ \theta = \inf\{ i \geq 1 : S_i \leq 0\}$ would be almost surely finite. 
Using (iterations of) the Markov property this would imply that $(S)$ visits $ \mathbb{Z}_{\leq 0}$ infinitely often a.s.\ which contradicts the fact that $(S)$ drifts to $+ \infty$. \end{remark} \section{Walks with finite mean and the law of large numbers} In this section we examine the particular case when $\mu$ has finite mean and show that the walk is recurrent whenever it is centered, otherwise it is transient and drifts. It will be a good opportunity to play with the strong and weak laws of large numbers. We will see in the next chapter a quick proof (Lemma \ref{lem:LLN}) of the strong law of large numbers based on a path transformation called \textbf{duality}. \subsection{Recurrence/transience} Recall that a random walk $(S)$ is recurrent iff one of the following equivalent conditions is satisfied \begin{eqnarray} \label{eq:cnsrecurrence}\mathbb{P}(\exists n >0: S_{n}=0)=1 \iff \mathbb{E}\left[ \sum_{n = 0}^{\infty} \mathbf{1}_{S_{n}=0}\right] =\infty \iff \liminf_{ n \to \infty} |S_{n}| < \infty \quad a.s. \end{eqnarray} \begin{theorem}[Dichotomy for walks with finite mean] \label{thm:rec0}Suppose $ \mathbb{E}[|X_{1}|] < \infty$ then \begin{enumerate}[(i)] \item If $\mathbb{E}[X_{1}] \ne 0$ then $(S)$ is transient and drifts, \item otherwise if $ \mathbb{E}[X_{1}]=0$ then $(S)$ is recurrent. \end{enumerate} \end{theorem} \noindent \textbf{Proof.} The first point $(i)$ is easy since by the strong law of large numbers we have $ n^{{-1}} S_{n} \to \mathbb{E}[X_{1}]$ almost surely: when $ \mathbb{E}[X_{1}] \ne 0$ this automatically implies that $(S)$ drifts towards $\pm \infty$ depending on the sign of $ \mathbb{E}[X_{1}]$. In the second case we still use the law of large numbers to deduce that $S_{n}/n \to 0$ almost surely as $n \to \infty$. This implies that for any $\varepsilon >0$ we have $ \mathbb{P}(|S_{n}| \leq \varepsilon n) \to 1$ as $n$ tends to infinity.
In particular, we have \begin{eqnarray} \mathbb{E}\left[\sum_{i=0}^{\infty} \mathbf{1}_{|S_{i}| \leq \varepsilon n} \right] \underset{ i \leq n}{\geq} \sum_{i=0}^{n} \mathbb{P} \left(|S_{i}| \leq \varepsilon i\right ) \underset{ \mathrm{Ces\`aro}}{ \geq} \frac{n}{2}, \label{eq:contradict} \end{eqnarray} eventually. We claim that this inequality is not compatible with transience. Indeed, according to \eqref{eq:cnsrecurrence}, if the walk $(S)$ is transient then for some constant $C>0$ we have $$ \mathbb{E}\left[ \sum_{i=0}^{\infty} \mathbf{1}_{S_{i}=0} \right] \leq C.$$ If $ k \in \mathbb{Z}$, applying the strong Markov property at the stopping time $\tau_k = \inf\{i \geq 0 : S_{i} = k\}$ we deduce that $$ \mathbb{E}\left[ \sum_{i=0}^{\infty} \mathbf{1}_{S_{i} = k} \right] = \mathbb{P}( \tau_k < \infty) \mathbb{E}\left[ \sum_{i=0}^{\infty} \mathbf{1}_{S_{i}=0} \right] \leq C.$$ Hence, if the walk were transient we would have $$\mathbb{E}\left[\sum_{i=0}^{\infty} \mathbf{1}_{|S_{i}| \leq \varepsilon n}\right ] \leq \sum_{ - \varepsilon n \leq k \leq \varepsilon n} \mathbb{E}\left[ \sum_{i =0}^{\infty} \mathbf{1}_{S_{i} = k}\right] \leq 2 \varepsilon n C,$$ which contradicts \eqref{eq:contradict} for $\varepsilon>0$ small enough. Hence the walk cannot be transient. \qed \medskip Notice that we only use the weak law of large numbers to deduce recurrence: any one-dimensional random walk $(S)$ for which $S_{n}/n \to 0$ in probability is recurrent. There are examples where the step distribution is not integrable, see Exercise \ref{exo:nolfgn}. The theorem above can be seen as a particular example of the Kesten--Spitzer--Whitman theorem (see \cite[Chapter I]{Spi76}) saying that a random walk with independent increments on a group is transient if and only if its range (i.e.~the number of visited vertices) grows linearly with time. \subsection{Wald's equality} \begin{theorem}[Wald equality] \label{thm:wald}\noindent Suppose $ \mathbb{E}[|X_{1}|] < \infty$. 
Let $\tau$ be a stopping time with finite expectation. Then we have $$ \mathbb{E}[\tau] \cdot \mathbb{E}[X_{1}] = \mathbb{E}[S_{\tau}].$$ \end{theorem} \noindent \textbf{Proof with martingales.} We present a first proof based on martingale techniques. If we denote by $m$ the mean of $X_{1}$ then clearly the process $( S_{n}- nm)_{n \geq 0}$ is a martingale for the canonical filtration $ \mathcal{F}_{n} = \sigma ( X_{1}, \dots, X_{n})$. By the optional sampling theorem we deduce that \begin{eqnarray} \label{eq:butbut} \mathbb{E}[S_{n \wedge \tau}] = m \mathbb{E}[n \wedge \tau]. \end{eqnarray} Since $\tau$ is almost surely finite, we can let $n \to \infty$ and get by monotone convergence that the right hand side tends to $ m\mathbb{E}[\tau]$. However, to deduce that the left hand side also converges towards $ \mathbb{E}[S_{\tau}]$ one would need a domination... To get this, the trick is to reproduce the argument with the process $$ Y_{n}=\sum_{i =1}^{n}|X_{i}| - \tilde{m} n,$$ where $\tilde{m} = \mathbb{E}[|X_{1}|]$. Then $(Y_{n})_{n \geq 0}$ is again a martingale for the filtration $( \mathcal{F}_{n})$. Notice that $Y_{n}$ is also a martingale for its own filtration but the previous statement is stronger. We can then apply the optional sampling theorem again for $n \wedge \tau$ and use \textit{monotone} convergence on both sides to get that $ \mathbb{E}[\sum_{i=1}^{\tau}|X_{i}|] = \tilde{m} \mathbb{E}[\tau]$. Clearly the variable $\sum_{i=1}^{\tau}|X_{i}|$ dominates all the variables $|S_{n \wedge \tau}|$ for $n \geq 0$. One can then use this domination and dominated convergence to prove convergence of the left-hand side in \eqref{eq:butbut}. \qed \medskip We now give a second proof of Wald's identity based on the less well-known \emph{converse} to the strong law of large numbers (Lemma \ref{lem:recilfgn}): \noindent \textbf{Proof of Wald's identity with the law of large numbers.} The idea is to iterate the stopping rule.
Let $0 = \tau_{0} \leq \tau=\tau_{1} \leq \tau_{2} \leq \tau_{3} \leq \cdots$ be the successive stopping times obtained formally as $$ \tau_{i+1}=\tau_{i+1}\left(S_{n} : n \geq 0\right) = \tau_{i}(S_{n} : n \geq 0) + \tau(S_{n+\tau_{i}}-S_{\tau_i} : {n \geq 0}),$$ for $i \geq 1$ where we see here $\tau$ as a measurable function of the underlying walk\footnote{In particular, if $\tau$ were a stopping time of a larger filtration $ (\mathcal{G}_n : n \geq 0)$ than the filtration $ (\mathcal{F}_n : n \geq 0)$ generated by the walk, then we could not write the previous display in full generality.}. In particular since $\tau < \infty$ a.s., we deduce by successive applications of the strong Markov property (Proposition \ref{prop:strongmarkov}) that $\tau_i < \infty$ for all $i \geq 0$ a.s. and that $$(\tau_{i+1}-\tau_{i} ; S_{\tau_{i+1}}-S_{\tau_{i}})_{i \geq 0} \quad \mbox{ are i.i.d.~of law} \quad (\tau, S_{\tau}).$$ Since $ \tau$ has finite expectation by assumption, the law of large numbers gives $$ \frac{\tau_{i}}{i} \xrightarrow[i\to\infty]{a.s.} \mathbb{E}[\tau].$$ In particular $\tau_i \to \infty$ almost surely and by the law of large numbers applied on the walk $(S)$ (recall that $ \mathbb{E}[|X|] < \infty$) we deduce that $$ \frac{S_{\tau_{i}}}{i} = \frac{S_{\tau_{i}}}{\tau_{i}} \cdot \frac{\tau_{i}}{i} \xrightarrow[i\to\infty]{a.s.} \mathbb{E}[X_{1}] \cdot \mathbb{E}[\tau].$$ We then use the \textit{converse} to the law of large numbers (Lemma \ref{lem:recilfgn}) to deduce that $ S_{\tau}$ has a finite expectation, equal to $\mathbb{E}[\tau] \cdot \mathbb{E}[X_{1}]$ as claimed by Wald\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{wald}} Abraham Wald (1902--1950), American}.\qed \begin{lemma} \label{lem:recilfgn} Let $(S_{i})_{i \geq 0}$ be a one-dimensional random walk with i.i.d.~increments $X_{i}$ of law $\mu$ on $ \mathbb{R}$.
Suppose that $$ \frac{S_{n}}{n} \xrightarrow[n\to\infty]{a.s.} \mathcal{X},$$ for some finite (a priori random) variable $ \mathcal{X} \in \mathbb{R}$. Then $\mu$ has a first moment and $ \mathcal{X} = \mathbb{E}[X]$ a.s. \end{lemma} \noindent \textbf{Proof of the lemma.} Suppose that $ n^{-1}S_{n}$ converges almost surely as $n$ goes to infinity to an a priori random but finite variable $ \mathcal{X}$. In particular, we have the almost sure convergence $$ \frac{X_{n}}{n} = \frac{S_{n}}{n} - \frac{n-1}{n}\cdot \frac{S_{n-1}}{n-1} \xrightarrow[n\to\infty]{a.s.} \mathcal{X}- \mathcal{X} = 0.$$ We deduce that the event $\{ |X_{n}| \geq n\}$ happens only finitely many times a.s., and since those events are independent, the second Borel--Cantelli lemma yields $$ \infty > \sum_{n \geq 1} \mathbb{P}(|X_{n}| \geq n) \underset{ \mathrm{i.d.}}{=} \sum_{n \geq 1} \mathbb{P}(|X| \geq n) \geq \mathbb{E}[|X|]-1,$$ where the last inequality is a standard exercise using Fubini. We deduce that $ \mathbb{E}[|X|]< \infty$ and by the strong law of large numbers we have $ \mathcal{X} = \mathbb{E}[X]$ a.s. \qed \medskip Beware, the converse of the weak law of large numbers does not hold: \begin{exo}[No converse to the weak law of large numbers] \label{exo:nolfgn}Let $(S_{n})_{n \geq 0}$ be a one-dimensional random walk with symmetric step distribution $\mu_{k} = \mu_{-k}$ satisfying $$\mu_{k} \sim \frac{1}{k^{2} \log k}, \quad \mbox{ as } k \to \infty.$$ Show that $ n^{-1}S_{n} \to 0$ in probability, but not almost surely, as $n \to \infty$. \end{exo} In words, if $(S)$ is a random walk as in the previous exercise, then for most scales we have $S_{n} =o(n)$ whereas there exist exceptional scales where $|S_{n}| \gg n$ due to an unlikely event of a large jump during this scale.
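The ``standard exercise using Fubini'' invoked in the proof of Lemma \ref{lem:recilfgn} is the layer-cake computation: for an integer-valued variable one has exactly $ \mathbb{E}[|X|]=\sum_{n\geq 1} \mathbb{P}(|X|\geq n)$, and in general the sum differs from the mean by at most $1$. A quick exact check on a toy distribution (in Python; the law below is an arbitrary illustrative choice):

```python
from fractions import Fraction

# an arbitrary toy law for |X| on {0, 1, 2, 3, 4}
law = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4),
       3: Fraction(1, 8), 4: Fraction(1, 8)}

mean = sum(k * w for k, w in law.items())            # E[|X|]
tail_sum = sum(sum(w for k, w in law.items() if k >= n)
               for n in range(1, 5))                 # sum_{n >= 1} P(|X| >= n)
assert mean == tail_sum == Fraction(13, 8)
```

Exchanging the two summations $\sum_{n} \sum_{k \geq n} \mu_k = \sum_k k\, \mu_k$ is exactly the Fubini step.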
In fact, an almost sure convergence can always be realized as a ``uniform'' convergence in probability in the following sense: \begin{exo}[Almost sure convergence is a uniform convergence in probability] Let $ \mathcal{X}_{n}, \mathcal{X}$ be random variables taking values in a Polish space $(E, \mathrm{d})$. Show that \begin{eqnarray*} \mathcal{X}_{n} \xrightarrow[n\to\infty]{( \mathbb{P})} \mathcal{X} &\iff& \mathrm{d}( \mathcal{X}_{n}, \mathcal{X}) \xrightarrow[n\to\infty]{( \mathbb{P})} 0\\ \mathcal{X}_{n} \xrightarrow[n\to\infty]{a.s.} \mathcal{X} &\iff& \sup_{k \geq n} \mathrm{d}( \mathcal{X}_{k}, \mathcal{X}) \xrightarrow[n\to\infty]{( \mathbb{P})} 0. \end{eqnarray*} \end{exo} \begin{exo}[Independence is crucial!] \label{exo:22} Construct a random walk $(S_{n} : n \geq 0)$ whose increments all have law $ \frac{2}{3}\delta_{1} + \frac{1}{3} \delta_{-1}$ (but are not independent) and such that $(S)$ is recurrent. \end{exo} \section{Heavy-tailed random walks} We will see below (Theorem \ref{thm:recfourier1}) a powerful recurrence criterion based on the Fourier transform, but let us use a probabilistic argument to construct transient yet oscillating (symmetric) random walks with heavy tails. \begin{theorem} \label{prop:discretetransient}Let $\mu$ be a symmetric step distribution (i.e.~$\mu_k = \mu_{-k}$ for $k \in \mathbb{Z}$) satisfying \begin{eqnarray} \mu_k \sim \mathrm{c} \ k^{-\alpha}, \quad \mbox{ as }k \to \infty, \label{eq:tailsymmetric}\end{eqnarray} for some $ \mathrm{c} >0$ and with $\alpha \in (1,\infty)$. Then the walk is recurrent if and only if $\alpha \geq 2$. \end{theorem} \begin{remark} Notice that since $\mu$ is symmetric, the walk $(S)$ automatically oscillates. The case $\alpha=2$ is critical as already hinted in Exercise \ref{exo:nolfgn}. \end{remark} \noindent \textbf{Proof.} \textsc{When $\alpha >2$} the increments have a finite mean and $ \mathbb{E}[X_1]=0$ by symmetry so the result follows from Theorem \ref{thm:rec0}.
\\ \textsc{ Let us treat the case $\alpha \in (1,2)$} and show that $(S)$ is transient, i.e. \begin{eqnarray} \label{eq:goalsumtrans}\sum_{n \geq 0 }\mathbb{P}( S_{n} = 0) < \infty. \end{eqnarray} The idea is to use the randomness produced by a \textit{single big jump} of the walk to produce an upper bound on $ \mathbb{P}(S_n =0)$. More precisely, let us introduce the stopping time $ \tau_{n} = \inf \{ i \geq 1: |X_{i}| > n^{1 + \varepsilon} \}$ where $ \varepsilon>0$ will be chosen small enough later on. We can write \begin{eqnarray*} \mathbb{P}( S_{n} = 0) &\leq& \mathbb{P}( \tau_{n} > n) + \mathbb{P}(S_{n}=0 \mbox{ and } \tau_{n} \leq n)\\ & \leq & \mathbb{P}( \tau_{n} > n) + \mathbb{P}(S_{n}=0 \mid \tau_{n} \leq n). \end{eqnarray*} The first term of the right-hand side is easy to evaluate: $$ \mathbb{P}(\tau_{n} > n) = \big (1 - \mathbb{P}(|X|\geq n^{1 + \varepsilon})\big)^{n} \underset{ \eqref{eq:tailsymmetric}}{=} \exp\left( - \frac{\mathrm{c}}{\alpha-1} \cdot n \cdot n^{(1+ \varepsilon)(1-\alpha)} (1+ o(1))\right) \leq \exp(-n^{\delta}),$$ for some $\delta >0$ provided that $ \varepsilon < \frac{2- \alpha}{\alpha-1}$ is small enough. On the other hand, conditionally on $\{ \tau_{n} \leq n\}$, the increment $X_{\tau_{n}}$ is independent of $\tau_{n}$ and of the increments $\{X_1, \dots , \widehat{X_{\tau_{n}}}, \dots , X_n\}$ (beware, those are not i.i.d.~anymore) and its law $\nu_n$ is the law of $X$ conditioned on being of absolute value larger than $ n^{1+ \varepsilon}$; in particular $$ \forall k \in \mathbb{Z},\qquad \mathbb{P}(X_{\tau_{n}} = k \mid \tau_{n} \leq n) = \nu_{n}(k) = \mathbf{1}_{|k| > n^{1+ \varepsilon}} \frac{\mathbb{P}(X =k)}{ \mathbb{P}(|X| > n^{1 + \varepsilon} )} $$ so that by \eqref{eq:tailsymmetric} we have \begin{eqnarray} \label{eq:suploi} \sup_{k \in \mathbb{Z}} \nu_{n}(k) \leq \mathrm{C}\, n^{-1- \varepsilon}, \end{eqnarray} for some constant $ \mathrm{C}>0$ for all $ n \geq 1$.
Hence we can write \begin{eqnarray*}\mathbb{P}(S_{n}=0 \mid \tau_{n} \leq n) &=& \mathbb{P}(X_{\tau_{n}} = -(X_{1}+ \dots + \widehat{X_{\tau_{n}} }+ \dots + X_{n}) \mid \tau_{n} \leq n )\\ & \underset{ \mathrm{indept}}{=}& \mathbb{E}\left[\nu_{n}(-(X_{1}+ \dots + \widehat{X_{\tau_{n}} }+ \dots + X_{n})) \mid \tau_{n} \leq n \right]\\ & \leq& \sup_{k \in \mathbb{Z}} \nu_{n}(k) \underset{ \eqref{eq:suploi}}{\leq} \mathrm{C}\, n^{-1 - \varepsilon}. \end{eqnarray*} Gathering up the pieces, we deduce that $ \mathbb{P}(S_{n}=0) \leq \exp(-n^{\delta}) + \mathrm{C} n^{-1- \varepsilon}$ for some $\delta >0$ provided that $ \varepsilon>0$ is small enough. This implies the summability of the series \eqref{eq:goalsumtrans} and ensures transience of the walk. \\ We now use the same idea to treat the borderline \textsc{case $\alpha =2$} and show that $(S)$ is recurrent by providing the lower bound \begin{eqnarray} \label{eq:goalsumrec}\mathbb{P}( S_{n} = 0) \geq \frac{c}{n}, \end{eqnarray} for some $c>0$, thus ensuring the divergence of the expected number of visits to the origin \eqref{eq:cnsrecurrence}. We use the same idea as above but with $ \varepsilon=0$. Let us consider the good event $$ \mathcal{G}_{n} = \{ |X_{i}| \leq n \mbox{ for all } 1 \leq i \leq n \mbox{ except for two indices } \tau_{n}^{1} \mbox{ and } \tau_{n}^{2} \}.$$ The probability of $ \mathcal{G}_{n}$ is easily computed and we have \begin{eqnarray} \label{eq:asymGn} \mathbb{P}( \mathcal{G}_{n}) = {n \choose 2} \mathbb{P}(|X|>n)^{2} \cdot \left(1- \mathbb{P}(|X|>n)\right) ^{n-2} \xrightarrow[n\to\infty]{ \eqref{eq:tailsymmetric}} \frac{ \mathrm{c}^{2}}{2} \mathrm{e}^{- \mathrm{c}}>0, \end{eqnarray} where $ \mathrm{c}>0$ appears in \eqref{eq:tailsymmetric}. In particular, this event is of asymptotically positive probability.
Conditionally on $ \mathcal{G}_{n}$, the two values $ X_{\tau_{n}^{1}}, X_{\tau_{n}^{2}}$ are independent of $\{X_1, \dots , \widehat{X_{\tau^{1}_{n}}}, \dots ,\widehat{X_{\tau^{2}_{n}}}, \dots, X_n\}$ and their common law is $\nu_{n}$, the law of $X$ conditioned on $\{|X|>n\}$. In particular, the variable $ \mathrm{J}_n := X_{\tau_{n}^{1}} + X_{\tau_{n}^{2}}$ is independent of $ \tilde{S}_n := X_1+ \dots + \widehat{X_{\tau^{1}_{n}}} + \dots + \widehat{X_{\tau^{2}_{n}}} + \dots + X_n$. If we denote by $\nu^{\otimes 2}_n$ the law of $\mathrm{J}_n$ (the convolution of $\nu_{n}$ with itself), then we clearly have $ \inf_{k \in \mathbb{Z}} \nu^{ \otimes 2}_n (k) =0$, but a moment's thought shows that for any $A >0$, there exists $c_A>0$ so that \begin{eqnarray} \label{eq:infavecA} \inf_{\begin{subarray}{c} k \in \mathbb{Z} \\ |k| \leq A n \end{subarray}} \nu^{ \otimes 2}_n (k) \geq \frac{c_A}{n}. \end{eqnarray} On the other hand, conditionally on $ \mathcal{G}_n$, the variables $X_i$ for $i \notin \{\tau_n^1, \tau_n^2\}$ are independent and have the law of $X$ conditioned on $\{|X| \leq n\}$. In particular, they are centered (by symmetry), and their variance is equal to $$ \frac{1}{\mu([-n,n])} \sum_{k=-n}^n k^2 \mu_k \underset{\eqref{eq:tailsymmetric}}{\sim} n.$$ We deduce that the variance of $ \tilde{S}_n$ is asymptotic to $n^2$ and by Markov's inequality that \begin{eqnarray} \label{eq:markovA} \mathbb{P}( |\tilde{S}_n| \geq An \mid \mathcal{G}_n ) \leq \frac{ \mathrm{Var}( \tilde{S}_n \mid \mathcal{G}_n) }{(An)^2} \leq \frac{2}{A^2}, \end{eqnarray} eventually as $n \to \infty$.
Taking $A>2$ we can thus write \begin{eqnarray*} \mathbb{P}(S_n = 0) &\geq& \mathbb{P}(S_n =0 \ \&\ \mathcal{G}_{n} \ \&\ |\tilde{S}_n| \leq An) \\ &=& \mathbb{P}( \mathcal{G}_n) \cdot \mathbb{P}( J_n = - \tilde{S}_n \ \& \ |\tilde{S}_n| \leq An \mid \mathcal{G}_n)\\ & \underset{ \mathrm{indept}}{=}& \mathbb{P}( \mathcal{G}_n) \cdot \mathbb{E}\left[ \nu^{\otimes 2}_{n}(- \tilde{S}_n) \mathbf{1}_{|\tilde{S}_n| \leq An} \mid \mathcal{G}_n\right]\\ & \geq & \mathbb{P}( \mathcal{G}_n) \cdot \big( \inf_{\begin{subarray}{c} k \in \mathbb{Z} \\ |k| \leq A n \end{subarray}} \nu^{ \otimes 2}_n (k) \big) \cdot \mathbb{P}\left( |\tilde{S}_n| \leq An \mid \mathcal{G}_n\right) \\ & \underset{ \eqref{eq:asymGn},\eqref{eq:infavecA}, \eqref{eq:markovA} }{\geq}& \frac{ \mathrm{c}^2 \mathrm{e}^{- \mathrm{c}}}{4} \cdot \frac{c_A}{n} \cdot \left(1- \frac{2}{A^2}\right), \end{eqnarray*} for $n$ large enough. This shows \eqref{eq:goalsumrec} and completes the proof.\qed \medskip The above result is originally due to Shepp\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{shepp}} Lawrence Alan Shepp (1936--2013), American} with a proof based on Theorem \ref{thm:recfourier1} below. He also showed the disturbing fact that there exist recurrent one-dimensional random walks with arbitrarily fat tails (but not regularly varying): \begin{theorem}[Shepp (1964)]\label{thm:shepp}For any positive function $ \epsilon(x) \in (0,1)$ tending to $0$ as $x \to \infty$, there exists a symmetric step distribution $\mu$ such that $\mu( \mathbb{R} \backslash [-x,x]) \geq \epsilon(x)$ for any $x \geq 0$ and such that the associated random walk $(S)$ is recurrent. \end{theorem} \section{Fourier transform} \label{sec:fourier} In this section, we use the Fourier transform to give a recurrence criterion as well as a local version of the central limit theorem.
Recall that if $\mu$ is the step distribution of a random walk $(S)$ on $ \mathbb{Z}$, then the Fourier\footnote{\raisebox{-3mm}{\includegraphics[width=1cm]{fourier}} Jean Baptiste Joseph Fourier (1768--1830), French} transform of the measure $\mu$ is defined by $$ \hat{\mu}(\xi) = \mathbb{E}[\mathrm{e}^{{ \mathrm{i}\xi X_{1}}}]= \sum_{k \in \mathbb{Z}} \mathrm{e}^{ \mathrm{i} \xi k} \mu_k, \quad \mbox{ for } \xi \in \mathbb{R}.$$ To get information on the walk $(S)$ from $\hat{\mu}$, the main idea is of course to use Cauchy's formula to relate probabilities to integrals of powers of the Fourier transform: \begin{eqnarray} \label{eq:cauchy} \forall x \in \mathbb{Z}, \quad \mathbb{P}( S_{n}=x) &=& \sum_{k \in \mathbb{Z}} \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}\xi \, \mathrm{e}^{- \mathrm{i} \xi x} \mathrm{e}^{{\mathrm{i} \xi k}} \mathbb{P}(S_{n}=k)\\ &=& \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}\xi \, \mathrm{e}^{- \mathrm{i} \xi x} \mathbb{E}[\mathrm{e}^{{\mathrm{i} \xi S_{n}}}] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}\xi \, \mathrm{e}^{- \mathrm{i} \xi x} \left( \hat{\mu}(\xi) \right)^{n}, \nonumber \end{eqnarray} where we used the fact that $ \mathbb{E}[\mathrm{e}^{\mathrm{i}\xi S_{n}}] = (\hat{\mu}(\xi))^{n}$ by independence of the increments and where the interchange of series and integral is easily justified by dominated convergence. \subsection{Chung--Fuchs} In this section, we give a criterion for recurrence of a one-dimensional random walk based on its Fourier transform. The criterion is valid mutatis mutandis for more general random walks with values in $ \mathbb{R}^d$. \label{sec:fourierrec1} \begin{theorem}[Easy version of Chung--Fuchs]\label{thm:recfourier1}\noindent The one-dimensional walk $(S)$ is recurrent if and only if we have $$ \lim_{r \uparrow 1} \int_{-\pi}^{\pi} \mathrm{d}\xi \ \mathfrak{Re} \left( \frac{1}{1- r \hat{\mu}(\xi)} \right) = \infty.
$$ \end{theorem} \noindent \textbf{Proof.} By \eqref{eq:cnsrecurrence}, the walk $(S)$ is recurrent if and only if the series $\sum_{n \geq 0} \mathbb{P}(S_{n}=0)$ diverges. Recall from \eqref{eq:cauchy} that $$ \mathbb{P}( S_{n}=0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \mathbb{E}[\mathrm{e}^{{ \mathrm{i}tS_{n}}}] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \left( \hat{\mu}(t) \right)^{n}.$$ We are led to sum the last equality for $n\geq 0$, but before that we first multiply by $r^{n}$ for some $r \in [0,1)$ in order to be sure that we can exchange series, expectation and integral. One gets $$ \sum_{n \geq 0} r^{n} \mathbb{P}( S_{n}=0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \sum_{n \geq 0} r^{n} \left( \hat{\mu}(t) \right)^{n} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{\mathrm{d}t}{1-r \hat{\mu}(t)}.$$ Since the left-hand side is real, one can take the real part in the integral. Letting $r \uparrow 1$, by monotone convergence the left-hand side diverges if and only if $\sum_{n \geq 0} \mathbb{P}(S_{n}=0)= \infty$. This completes the proof of the theorem. \qed \medskip In fact, there is a stronger version of Theorem \ref{thm:recfourier1} which is obtained by formally interchanging the limit and the integral in the last theorem: the random walk $(S)$ is transient or recurrent according to whether the real part of $(1- \hat{\mu}(t))^{{-1}}$ is integrable or not near $0$ (we do not give the proof). This can easily be proved when the law $\mu$ is \textbf{symmetric} (i.e.\,$\mu_k = \mu_{-k}$): in this case, $\hat{\mu}$ is real-valued and notice that when $\hat{\mu}(t) \geq 0$ the function $r \mapsto (1- r \hat{\mu}(t))^{-1}$ is increasing, whereas if $\hat{\mu}(t) \leq 0$ we have $(1- r \hat{\mu}(t))^{-1} \leq 1$.
Splitting the integral according to the sign of $\hat{\mu}(t)$ and using monotone and dominated convergence theorems on the respective parts shows that $$ \lim_{r \uparrow 1} \int_{-\pi}^{\pi} \mathrm{d}\xi \ \frac{1}{1- r \hat{\mu}(\xi)} = \int_{-\pi}^{\pi} \mathrm{d}\xi \ \frac{1}{1- \hat{\mu}(\xi)}.$$ This criterion can be used to give Fourier proofs of some of the preceding results: \begin{exo} \label{exo:25} Suppose $\mu$ is centered. \begin{enumerate}[(i)] \item Show that $\hat{\mu}(\xi) = 1 + o(\xi)$ as $\xi \to 0$. \item Give another proof of Theorem \ref{thm:rec0} (ii) using Theorem \ref{thm:recfourier1}. \item Give another proof of Theorem \ref{prop:discretetransient}. \end{enumerate} \end{exo} \begin{exo}[Sums of random walks] \label{exo:edouard}Let $(S_{n})_{n \geq 0}$ and $(S'_{n})_{ n\geq 0}$ be two independent one-dimensional random walks with independent increments of law $\mu$ and $\mu'$ on $ \mathbb{R}$. \begin{enumerate}[(i)] \item Give an example where $(S)$ and $(S')$ are transient and yet $(S+S')$ is recurrent. \item We suppose that $\mu$ and $\mu'$ are both symmetric. Show that as soon as $(S)$ or $(S')$ is transient then so is $(S+S')$. \item Give an example where $(S)$ is recurrent, $(S')$ transient and yet $(S+S')$ is recurrent. \item (*) Can we have both $(S)$ and $(S')$ recurrent and $(S+S')$ transient? \end{enumerate} \end{exo} \subsection{Local central limit theorem} The central limit theorem is one of the most important theorems in probability theory and says in our context that the rescaled random walk $S_{n}/ \sqrt{n}$ converges in distribution towards a normal law provided that $\mu$ is centered and has finite variance.
There are many proofs of this result, the most standard being through the use of the Fourier transform and L\'evy's criterion for convergence in law\footnote{ Here are a couple of other proofs: Lindeberg swapping trick, method of moments, Stein's method, Skorokhod embedding theorem, approximation by discrete variables and de Moivre-Laplace, contraction method and Zolotarev metric... See the beautiful page by Terence Tao on this subject: https://terrytao.wordpress.com/2010/01/05/254a-notes-2-the-central-limit-theorem/ or the recent note \cite{CCW21CLT}}. We will see below that the central limit theorem can be ``disintegrated'' to get a more powerful \textbf{local} version of it. The proof is again based on \eqref{eq:cauchy}.\medskip By saying that a one-dimensional random walk with mean $m$ and variance $\sigma^{2}$ satisfies a central limit theorem we mean that for any $a<b$ we have $$ \mathbb{P}\left(\frac{S_{n}- n m}{ \sqrt{n }} \in [a,b]\right) \xrightarrow[n\to\infty]{} \int_{a}^{b} \frac{ \mathrm{d}x}{ \sqrt{2 \pi \sigma^{2}}} \mathrm{e}^{{- x^{2}/(2 \sigma^{2})}}.$$ We say that we have a local central limit theorem if we can reduce the interval $[a,b]$ as a function of $n$ until it contains just \textbf{one point} of the lattice, that is if for $x \in \mathbb{Z}$ we have $$ \mathbb{P}( S_{n} = x) =\mathbb{P}\left(\frac{S_{n}- n m}{ \sqrt{n }} \in \left[ \frac{x-nm}{ \sqrt{n}},\frac{x-nm +1}{ \sqrt{n}}\right)\right) \approx \frac{1}{\sqrt{2 \pi \sigma^{2}}} \mathrm{e}^{{- \frac{(x-nm)^{2}}{2 n \sigma^{2}}}} \frac{1}{ \sqrt{n}}.$$ It turns out that aperiodicity and finite variance are already sufficient to get the local central limit theorem (the result extends to higher dimensions and to the case of random walks converging towards stable L\'evy processes with \textit{mutatis mutandis} the same proof): \begin{theorem}[Local central limit theorem, Gnedenko]\label{thm:local}Let $\mu$ be a distribution supported on $ \mathbb{Z}$, aperiodic, with mean $m \in \mathbb{R}$ and with a finite
variance $\sigma^{2}>0$. If we denote by $\gamma_{\sigma}(x) = \frac{1}{ \sqrt{2\pi \sigma^2}} \mathrm{e}^{{-x^{2}}/(2 \sigma^{2})}$ the density of the centered normal law of variance $\sigma^{2}$ then we have $$ \lim_{n \to \infty} \sup_{ x \in \mathbb{Z}} n^{{1/2}} \left| \mathbb{P}(S_{n}=x) - n^{-{1/2}} \gamma_{\sigma}\left(\frac{x-nm}{\sqrt{n}}\right)\right| = 0.$$ \end{theorem} The usual central limit theorem follows from its local version. Indeed, if we consider the random variable $ \tilde{S}_{n}=S_{n} + U_{n} $ where $U_{n}$ is uniform over $[0,1]$ and independent of $S_{n}$, then the local central limit theorem shows that the law of $(\tilde{S}_{n} - nm)/ \sqrt{n}$ is absolutely continuous with respect to the Lebesgue measure on $ \mathbb{R}$ whose density $f_{n}$ converges pointwise towards $\gamma_{\sigma}$. Scheff\'e's\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{scheffe}} Henry Scheff\'e (1907--1977), American} lemma (see Exercise \ref{exo:scheffe} below) then implies that $(\tilde{S}_{n}-nm)/ \sqrt{n}$ converges in law towards $ \gamma_{ \sigma}(x) \mathrm{d}x$ and similarly after removing the tilde. \begin{exo}[Scheff\'e lemma] \label{exo:scheffe}Let $X_{n},X$ be random variables taking values in a Polish space $(E,d)$ and whose distributions have densities $f_{n},f$ with respect to a background measure $ \mathrm{m}$ on $E$. We suppose that $f_{n}\to f$ pointwise $ \mathrm{m}$-almost everywhere. Prove that \begin{enumerate}[(i)] \item $f_{n}\to f$ in $ \mathbb{L}^{1}( \mathrm{m})$. \item $ \mathrm{d_{TV}}( f_{n} \mathrm{d} \mathrm{m} , f \mathrm{d} \mathrm{m}) \to 0$ as $n \to \infty$, where $ \mathrm{d_{TV}}$ is the total variation distance, \item deduce that $X_{n} \to X$ in distribution as $n \to \infty$.
\end{enumerate} \end{exo} Before moving to the proof of the local CLT, let us translate the aperiodicity condition into a property of the Fourier transform: \begin{lemma} \label{ref:lem<1} When $\mu$ is aperiodic, we have \label{eq:<1} \begin{eqnarray*} |\hat{\mu}(\xi)| < 1, \quad \mbox{ for }\xi \in (0,2\pi). \end{eqnarray*} \end{lemma} \noindent \textbf{Proof.} If $ |\hat{\mu}(\xi)| = |\mathbb{E}[\mathrm{e}^{{ \mathrm{i}\xi X_{1}}}]|=1$, then we also have $|\big(\hat{\mu}(\xi)\big)^{n}| = |\mathbb{E}[\mathrm{e}^{{ \mathrm{i}\xi S_{n}}}]|=1$. This implies by the equality case in the triangle inequality that all $ \mathrm{e}^{{ \mathrm{i}\xi x}}$ for $x \in \mathrm{Supp}( \mathcal{L}(S_{n}))$ are (positively) aligned. Using the aperiodicity assumption, one can choose $n$ large enough so that the support of the law of $S_{n}$ contains $0$ and $1$. This shows that $\xi \equiv 0$ modulo $2 \pi$. \qed \medskip \noindent \textbf{Proof of the local central limit theorem.} The starting point is again Cauchy's formula relating probabilities to the Fourier transform: $$ \mathbb{P}(S_{n}=x) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \mathrm{e}^{-\mathrm{i}xt} \mathbb{E}[\mathrm{e}^{ \mathrm{i} S_{n}t}] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \mathrm{e}^{-\mathrm{i}xt}( \hat{\mu}(t))^{n}.$$ Since $|\hat \mu (t)| <1$ when $t \ne 0$, the main contribution of the integral comes from the integration near $0$: we shall apply Laplace's method. Since we want to use the series expansion of the Fourier transform near $0$, it is natural to introduce the image measure $\nu$ of $\mu$ after translation by $-m$ so that $\nu$ is centered and has finite variance: we can write $\hat{\nu}(t) = 1 - \frac{\sigma^{2}}{2} t^{2} + o(t^{2})$ for $t$ small.
The last display then becomes \begin{eqnarray*} \mathbb{P}(S_{n}=x) &=& \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \mathrm{e}^{- \mathrm{i}xt} \mathrm{e}^{{ \mathrm{i} nm t}}( \hat{\nu}(t))^{n} \\ &=& \frac{1}{2\pi} \int_{-\pi}^{\pi} \mathrm{d}t \, \mathrm{e}^{-\mathrm{i}xt} \mathrm{e}^{{ \mathrm{i} nmt}}\left( 1 - \frac{\sigma^{2}}{2}t^{2} + o(t^{2})\right)^{n}\\ &\underset{u =t \sqrt{n}}{=}& \frac{1}{ \sqrt{n}}\frac{1}{2\pi} \int_{-\pi \sqrt{n}}^{\pi \sqrt{n}} \mathrm{d}u \, \mathrm{e}^{-\mathrm{i} u x/ \sqrt{n}} \mathrm{e}^{{ \mathrm{i} \sqrt{n} m u}}\underbrace{\left( 1 - \frac{\sigma^{2}}{2n}u^{2} + o(u^{2}/n)\right)^{n}}_{\approx \frac{ \sqrt{2\pi}}{\sigma}\gamma_{1/\sigma}(u)}. \end{eqnarray*} We can then approximate the last integral by $$ \frac{ \sqrt{2\pi}}{\sigma}\frac{1}{ \sqrt{n}}\frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{d}u \, \mathrm{e}^{-\mathrm{i} u x/ \sqrt{n}} \mathrm{e}^{{ \mathrm{i} \sqrt{n} m u}} \gamma_{1/\sigma}(u) = \frac{1}{ \sigma \sqrt{2 \pi n}}\mathbb{E}\left[\exp\left( \mathrm{i} \left(\sqrt{n}m - \frac{x}{ \sqrt{n}}\right) \frac{ \mathcal{N}}{ \sigma}\right)\right],$$ where $ \mathcal{N}$ denotes a standard normal variable. Using the identity $ \mathbb{E}[\mathrm{e}^{{ \mathrm{i}t \mathcal{N}}}] = \mathrm{e}^{{-t^{2}/2}}$ the last display is indeed equal to $ \gamma_{\sigma} \left( \frac{x- nm}{ \sqrt{n}}\right) / \sqrt{n}$ as desired. It remains to quantify the last approximation. The error made in the approximation is clearly bounded above by the sum of the two terms: $$ A= \frac{1}{ \sqrt{n}}\frac{1}{2\pi} \int_{|u| > \pi \sqrt{n}} \mathrm{d}u \, \gamma_{1/\sigma}(u),$$ $$ B= \frac{1}{ \sqrt{n}}\frac{1}{2\pi} \int_{|u| < \pi \sqrt{n} } \mathrm{d}u \, \left| \gamma_{1/\sigma}(u) - \left(\hat{\nu}\left( \frac{u}{ \sqrt{n}}\right)\right)^{n}\right|.$$ The first term $A$ causes no problem since it is exponentially small (of the order of $\mathrm{e}^{-n}$) hence negligible in front of $ 1/ \sqrt{n}$. 
The second term may be further bounded above by the sum of three terms \begin{eqnarray*} B &\leq& \frac{1}{ \sqrt{n}} \int_{|u| < n^{{1/4}}} \mathrm{d}u \, \left| \gamma_{1/\sigma}(u) - \left(\hat{\nu}\left( \frac{u}{ \sqrt{n}}\right)\right)^{n}\right| \\ & +& \int_{n^{{1/4}}<|u| < \pi \sqrt{n}} \mathrm{d}u \, \gamma_{1/\sigma}(u) \\ &+& \int_{n^{{1/4}}<|u| < \pi \sqrt{n}} \mathrm{d}u \, \left|\hat{\nu}\left( \frac{u}{ \sqrt{n}}\right)\right|^{n}. \end{eqnarray*} The first of these terms is shown to be $o( n^{{-1/2}})$ using dominated convergence: in the region considered for $u$, the integrand converges pointwise to $0$; for the domination we may use the fact that for $|u| < \varepsilon \sqrt{n}$ we have by the expansion of $\hat{\nu}$ that $\left|\hat{\nu}\left( \frac{u}{ \sqrt{n}}\right)\right|^{n} \leq ( 1 - \frac{ \sigma^{2} u^{2}}{4n})^{n} \leq \mathrm{e}^{- \sigma^{2}u^{2}/4}$. The second term of the sum is handled as above and seen to be of order $\mathrm{e}^{- \sqrt{n}}$. For the third term, we bound the integrand by $\left|\hat{\nu}\left( \frac{u}{ \sqrt{n}} \right)\right|^{n} \leq \mathrm{e}^{- \sigma^{2}u^{2}/4}$ for $|u| < \varepsilon \sqrt{n}$, while for $ \varepsilon \sqrt{n} < |u|< \pi \sqrt{n}$ we use the fact that $|\hat{\mu}(x)|<c<1$ for all $x \in [\varepsilon, \pi]$ by aperiodicity (Lemma \ref{ref:lem<1}). The sum of the three terms is then of negligible order compared to $n^{{-1/2}}$ as desired. \qed \medskip \noindent \textbf{Bibliographical notes.} The material in this chapter is standard and can be found in many textbooks, see e.g.~ \cite[Chapters I,II]{Spi76}, \cite[Chapter 8]{Chung74} or \cite{Kal07,LalCours}. The proof of Theorem \ref{prop:discretetransient} is due to Yuval Peres (personal communication) while Shepp's original proof \cite{shepp1962symmetric} is based on the Fourier transform. Theorem \ref{thm:shepp} can be found in \cite{Shepp64}.
The Fourier transform is a remarkable tool (whose efficiency is sometimes a bit mysterious) to study random walks with independent increments. The local central limit theorem is valid in the much broader context of random walks converging towards stable L\'evy processes, see Gnedenko's local limit theorem in \cite[Theorem 4.2.1]{IL71}, and can be sharpened when we have further moment assumptions, see \cite{LL10}. It also applies when the variables are not exactly i.i.d.~\cite{davis1995elementary}. \medskip \noindent \textbf{Hints for Exercises.}\ \\ Exercise \ref{exo:nolfgn}: Use the truncated increments $X_{i} \mathbf{1}_{|X_{i}| < A n/\log n}$ for $1 \leq i \leq n$ for some large $A>0$.\\ Exercise \ref{exo:22}: Couple the increments $X_{i}$ so that they are the same for $2^{k} \leq i < 2^{k+1}$.\\ Exercise \ref{exo:edouard}: (i) is easy. (ii) Use the Fourier criterion. (iii) Use the $1$-stable Cauchy distribution (or a discrete version thereof). (iv) Edouard Maurel-Segala solved it here: \\ \textit{https://mathoverflow.net/questions/314312/sum-of-independent-random-walks} \chapter{Skip-free random walks} \label{chap:WH} \hfill Two simple but powerful observations. \bigskip In this chapter we still consider a one-dimensional random walk $(S)$ based on i.i.d.\;increments of law $\mu$ (whose support is contained neither in $ \mathbb{Z}_{\geq0}$ nor in $ \mathbb{Z}_{\leq 0}$). But compared to the previous chapter, we furthermore suppose that the walk is \textbf{skip-free}, which means that $$ \mathrm{Supp}( \mu) \subset \{-1, 0, 1, 2, 3, \dots \}.$$ In other words, the only negative steps of $(S)$ are steps of size $-1$. We shall see that some combinatorial magic happens for such walks. Let us start by drawing a consequence of the last chapter: the expectation $$m = \sum_{k \geq -1} k \mu_k$$ is always well-defined and belongs to $ [-1, \infty]$ and so by Theorem \ref{thm:rec0} the walk is recurrent if $m=0$ and drifts otherwise.
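As a concrete illustration of this dichotomy, consider the hypothetical skip-free step law $\mu_{k} = (1-p)p^{k+1}$ for $k \geq -1$ with $p \in (0,1)$ (a toy example, not one from the text): writing $j=k+1$, the drift is $m = \frac{p}{1-p} - 1 = \frac{2p-1}{1-p}$, so the walk is recurrent exactly when $p = 1/2$. A short numerical sketch:

```python
def drift(p, cutoff=2000):
    """Truncated drift m = sum_{k >= -1} k * mu_k for the toy skip-free
    law mu_k = (1 - p) * p**(k + 1), k = -1, 0, 1, 2, ..."""
    return sum(k * (1 - p) * p ** (k + 1) for k in range(-1, cutoff))

# Compare with the closed form m = (2p - 1)/(1 - p) and classify the walk.
for p in (0.3, 0.5, 0.7):
    m, closed = drift(p), (2 * p - 1) / (1 - p)
    status = "recurrent" if abs(m) < 1e-9 else ("drifts up" if m > 0 else "drifts down")
    print(f"p = {p}: m = {m:.6f} (closed form {closed:.6f}) -> {status}")
```

Truncating the series at a fixed cutoff is harmless here since the geometric tail is summable; for $p = 1/2$ the computed drift vanishes up to floating-point error.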
We will now perform two simple combinatorial operations on paths (reversal and cycle shift) and explore their distributional consequences. \section{Duality lemma} We begin with a simple but surprisingly important observation called \textbf{duality}. This is valid for any random walk, not necessarily skip-free and not necessarily integer-valued. \subsection{Duality} \begin{proposition}[Duality] \label{lem:duality}For each fixed $n \geq 0$, we have the following equality in distribution $$ (0=S_{0},S_{1}, \dots , S_{n}) \overset{(d)}{=} (S_{n}-S_{n},S_{n}-S_{n-1}, S_{n}-S_{n-2}, \dots , S_{n}-S_{1}, S_{n}-S_0).$$ \end{proposition} \begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{duality} \caption{Geometric interpretation of the duality: the rotation by an angle $\pi$ of the first $n$ steps of the walk $(S)$ leaves its distribution invariant.} \end{center} \end{figure} \noindent \textbf{Proof.} It suffices to notice that the increments of the walk $(S_{n}-S_{n-1}, S_{n}-S_{n-2}, \dots , S_{n}-S_{1}, S_n-S_0)$ are just given by $(X_{n}, X_{n-1}, \dots , X_{1})$, which obviously has the same law as $(X_{1}, \dots ,X_{n})$ since the $(X_{i})_{i \geq 1}$ are i.i.d.\ hence exchangeable (i.e.~their law is invariant under any fixed permutation). \qed \medskip Beware, the duality lemma can only be applied for $n$ fixed and not for all $n$ simultaneously, yet it can be useful to deduce asymptotic properties of the walk: \begin{corollary} \label{cor:duality} Suppose that $(S)$ is a one-dimensional random walk (not necessarily skip-free nor integer-valued). We denote by $ \overline{S}_{n} = \sup \{ S_{k} : 0 \leq k \leq n \}$ and $ \underline{S}_{n} = \inf \{ S_{k} : 0 \leq k \leq n \}$ the running supremum and infimum processes. We suppose that $(S)$ drifts towards $+\infty$ so that $ \min S = \underline{S}_{\infty} > -\infty$ a.s.
Then we have $$ \overline{S}_{n} - S_{n} \xrightarrow[n\to\infty]{(d)} - \underline{S}_{\infty} < \infty.$$ \end{corollary} \noindent \textbf{Proof.} By duality we have for each $n \geq 0$ $$ \overline{S}_{n} - S_{n} \quad \overset{(d)}{ =} \quad - \underline{S}_{n}$$ whereas since $(S)$ drifts towards $+\infty$ we have $$- \underline{S}_{n} \quad \xrightarrow[n\to\infty]{a.s.} \quad - \underline{S}_{\infty} < \infty.$$\qed \bigskip One of the main applications of duality is the following interpretation of hitting times of half-spaces. For $A \subset \mathbb{Z}$, we denote by $ {T}_{A} = \inf \{i \geq 0 : S_i \in A\}$ the hitting time of $A$ by the walk $(S)$. Then the previous proposition shows (see Figure \ref{fig:dualitylln}) that for $n \geq 0$ \begin{eqnarray*} \mathbb{P}( T_{ \mathbb{Z}_{<0}} > n) &=& \mathbb{P}( S_{0}=0, S_{1} \geq 0, \dots , S_{n}\geq 0)\\ & \underset{ \mathrm{duality}}{=} & \mathbb{P}( S_{n}-S_{n}= 0, S_{n}-S_{n-1} \geq 0, S_n-S_{n-2} \geq 0, \dots , S_{n} \geq 0)\\ & =& \mathbb{P}(S_{n} \geq S_{n-1}, S_{n}\geq S_{n-2}, \dots , S_{n} \geq S_{0})\\ &=& \mathbb{P}(n \mbox{ is a new (weak) ascending record time for the walk}), \end{eqnarray*} where an \textbf{ascending/descending (resp.\ weak) record time} is a time at which the walk attains (or equals) a new maximum/minimum value so far, i.e.~such that $S_{i} > \max\{S_{j} : 0 \leq j < i\}$ for a strict ascending record, with $\geq$ for a weak ascending record, and with $\leq \min$ resp.\ $< \min$ for weak resp.\ strict descending records. Summing over $n \geq 0$ we deduce that \begin{eqnarray} \sum_{n \geq 0}\mathbb{P}(T_{ \mathbb{Z}_{<0}}>n) = \mathbb{E}[T_{ \mathbb{Z}_{<0}}]= \mathbb{E}[ \#\mbox{ weak ascending record times}]. \label{eq:duality*} \end{eqnarray} However, it is easy to see using the Markov property that the number of (weak) ascending record times is a geometric random variable, which is finite almost surely if and only if the walk is bounded from above.
In particular, we deduce that $T_{ \mathbb{Z}_{<0}}$ has finite expectation iff $m <0$, and since for a skip-free random walk we have $ S_{T_{ \mathbb{Z}_{<0}}} = -1$, we deduce from Wald's identity that $$ \mathbb{E}[ T_{ \mathbb{Z}_{<0}}] \underset{ \mathrm{Thm.} \ref{thm:wald}}{=} \frac{1}{|m|}.$$ \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{duality-lln} \caption{ \label{fig:dualitylln} Duality shows that $\mathbb{P}(T_{ \mathbb{Z}_{<0}}>n) = \mathbb{P}(n \mbox{ is a weak ascending record time})$.} \end{center} \end{figure} \begin{exo} \label{exo:legallcornell} Let $(S)$ be a centered skip-free random walk. Show using duality that for any $k \geq 1$ we have $$ \mathbb{P}(S_{T_{ \mathbb{Z}_{>0}}} = k) = \frac{1}{\mu_{-1}} \sum_{i \geq k} \mu_i.$$ \end{exo} \subsection{A proof of the law of large numbers} To illustrate the power of the duality lemma, let us use it to give a short proof of the law of large numbers. In this section only, let $X_{1}, X_{2}, \dots $ be i.i.d.~random variables not necessarily integer-valued with finite expectation and let $S_{n} = X_{1}+ \dots + X_{n}$ for $n \geq 0$ be the corresponding random walk. Kolmogorov's strong law of large numbers says that $ n^{-1}S_{n} \to \mathbb{E}[X]$ almost surely as $n \to \infty$. Clearly, it is a consequence of the following lemma (applied, for every $ \varepsilon>0$, to the increments $X_{i}- \mathbb{E}[X]- \varepsilon$ and $-X_{i}+ \mathbb{E}[X]- \varepsilon$): \begin{lemma} \label{lem:LLN} Let $X_{1}, X_{2}, \dots $ be i.i.d.~r.v.~with $ \mathbb{E}[X]<0$. Then $ \sup_{n \geq 0} (X_{1} + \cdots + X_{n})$ is finite a.s. \end{lemma} \noindent \textbf{Proof.} \textsc{Step 1. Bounding the increments from below.} Choose $C>0$ large enough so that by dominated convergence $ \mathbb{E}[X \mathbf{1}_{X>-C}] < 0$. We will show that the random walk $\tilde{S}_{n} = X_{1} \mathbf{1}_{X_{1}>-C} + \dots + X_{n} \mathbf{1}_{ X_{n}>-C}$ is a.s.~bounded from above, which is sufficient to prove the lemma since $S_{n} \leq \tilde{S}_{n}$ for all $n$.\\ \textsc{Step 2.
Duality.} Consider $\tilde{T}_{ \mathbb{Z}_{<0}} = \inf\{i \geq 0 : \tilde{S}_{i}<0\}$ and recall from \eqref{eq:duality*} that $$ \mathbb{E}[\tilde{T}_{ \mathbb{Z}_{<0}}] = \mathbb{E}[\# \mbox{ weak ascending record times of } \tilde{S}]$$ and the proof is complete if we prove that $ \mathbb{E}[\tilde{T}_{ \mathbb{Z}_{<0}}]< \infty$ since this implies that almost surely there is a finite number of weak ascending records for $ \tilde{S}$, hence the walk is bounded from above a.s. \\ \textsc{Step 3. Optional sampling theorem.} To prove $ \mathbb{E}[\tilde{T}_{ \mathbb{Z}_{<0}}]<\infty$, consider the same martingale as in the proof of Wald's identity (Theorem \ref{thm:wald}) namely $$ M_{n}~=~\tilde{S}_{n} - \mathbb{E}[X \mathbf{1}_{X>-C}]n, \quad \mbox{ for }n \geq0$$ (for the filtration generated by the $X_{i}$'s) and apply the optional sampling theorem to the stopping time $n \wedge \tilde{T}_{ \mathbb{Z}_{<0}}$ to deduce that $$ 0=\mathbb{E}[M_{n\wedge \tilde{T}_{ \mathbb{Z}_{<0}}}] \quad \mbox{ or in other words } \quad \mathbb{E}[X \mathbf{1}_{X>-C}] \mathbb{E}[n \wedge \tilde{T}_{ \mathbb{Z}_{<0}}] = \mathbb{E}[\tilde{S}_{n \wedge \tilde{T}_{ \mathbb{Z}_{<0}}}].$$ Since the increments of $\tilde{S}$ are bounded below by $-C$, the right-hand side of the last display is bounded from below by $-C$ as well. Recalling that $\mathbb{E}[X \mathbf{1}_{X>-C}] $ is negative we deduce that $$ \mathbb{E}[n \wedge \tilde{T}_{ \mathbb{Z}_{<0}}] = \frac{\mathbb{E}[\tilde{S}_{n \wedge \tilde{T}_{ \mathbb{Z}_{<0}}}]}{\mathbb{E}[X \mathbf{1}_{X>-C}]} \leq \frac{C}{ |\mathbb{E}[X \mathbf{1}_{X>-C}]|} < \infty.$$ Letting $n \to \infty$, by monotone convergence we deduce that the expectation of $\tilde{T}_{ \mathbb{Z}_{<0}}$ is finite. Et voil\`a.
\qed \medskip \section{Cycle lemma} \label{sec:fellerskip} The following theorem has many names, equivalent forms and ramifications in the probabilistic and combinatorial literature (Kemperman's formula, Otter--Dwass formula, Feller\footnote{\raisebox{-3mm}{\includegraphics[width=1cm]{feller2}} William Feller (1906--1970), born Vilibald Sre\'cko Feller, Croatian and American} combinatorial lemma, D\'esir\'e Andr\'e cycle lemma, Lagrange inversion formula...). We shall start with the following deterministic statement: Let $x_{1}, x_{2}, \dots , x_{n} \in \{-1, 0,1, \dots\}$ be integers which we consider as the increments of the skip-free walk $(s)$ defined by $$s_{0}=0, s_{1}=x_{1}, s_{2}=x_{1}+x_{2}, \quad \dots, \quad s_{n}= x_{1}+ \dots + x_{n}.$$ If $ \ell \in \{ 0,1,2, \dots , n-1\}$ we consider $(s^{{(\ell)}})$ the $\ell$-th cyclic shift of the walk obtained by cyclically shifting its increments $\ell$ times, that is $$ s^{{(\ell)}}_{0}=0, s^{{(\ell)}}_{1}=x_{\ell+1}, \quad \dots \quad, s^{{(\ell)}}_{n-\ell}=x_{\ell+1}+ \dots + x_{n},\quad \dots \quad, s_{n}^{{(\ell)}} = x_{\ell+1}+ \dots + x_{n}+ x_{1}+ \dots + x_{\ell}.$$ \begin{lemma}[Feller]\label{lem:feller}Suppose that $s_{n}=-k$ for $k \geq 1$. Then there are exactly $k$ cyclic shifts $(s^{{(\ell)}})$ with $ \ell \in \{ 0,1,2, \dots , n-1\}$ for which time $n$ is the hitting time of $-k$ by the walk $(s^{{(\ell)}})$.\end{lemma} \noindent \textbf{Proof.} Let us first prove that there is at least one such cyclic shift. For this, consider the first time $\ell\in \{1,2,\dots , n\}$ such that the walk $(s)$ reaches its overall minimum $\min\{s_{i} : 0 \leq i \leq n\}$. Then clearly, after performing the cycle shift at that time, the new walk stays strictly above $-k$ over $\{0,1, \dots , n-1\}$ and thus first hits $-k$ at time $n$, see Figure \ref{fig:rerootmin} below.
\begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{reroot} \caption{ \label{fig:rerootmin} Re-rooting the walk at the hitting time of the minimum gives a walk reaching its minimum at time $n$.} \end{center} \end{figure} We can thus suppose without loss of generality that time $n$ is the hitting time of $-k$ by the walk. It is now clear (again see Figure \ref{fig:reroot} below) that the only possible cyclic shifts of the walk such that the resulting walk first hits $-k$ at time $n$ correspond to the hitting times of $ 0,-1, -2, \dots , -(k-1)$ by the walk (we use the skip-free property here to ensure that all those hitting times indeed appear). \begin{figure}[!h] \begin{center} \includegraphics[width=8cm]{rerootk} \caption{ \label{fig:reroot} The only possible cyclic shifts (starting points in blue) for which the walk first hits $-k$ at time $n$ correspond to the hitting times of $0, -1, -2, \dots , -k+1$.} \end{center} \end{figure} \qed \medskip \begin{remark} \label{rem:shift}Beware, Feller's combinatorial lemma \textit{does not} say that the cyclic shifts $(s^{(\ell)})$ are distinct. Indeed, in the action of $ \mathbb{Z}/ n \mathbb{Z}$ on $\{ (s^{(\ell)}) : \ell \in \{0,1, \dots , n-1\}\}$ by cyclic shift, the size of the orbit is equal to $n / j$ where $j$ (which divides $n$) is the cardinality of the subgroup stabilizing $ (s^{(0)})$. In our case, it is easy to see that $j$ must also divide $k$ and in this case there are only $k/j$ distinct cyclic shifts having $n$ as the hitting time of $-k$. In particular, when $k=1$ the $n$ cyclic shifts are pairwise distinct.\end{remark} \subsection{Kemperman's formula and applications} Notice that Lemma \ref{lem:duality} does not require that the random walk has i.i.d.~increments: it holds as soon as the increments $( \mathcal{X}_k : k \geq 1)$ of a random process $ ( \mathcal{S}_k : k \geq 0)$ are invariant under time reversal, i.e.
$$ ( \mathcal{X}_1, \dots, \mathcal{X}_{n_0}) \overset{(d)}{=} ( \mathcal{X}_{n_0}, \dots , \mathcal{X}_1),$$ for all $n_0 \geq 1$. In the application of Feller's combinatorial lemma below, we shall use another invariance, by cyclic shift, which amounts to asking that $$ ( \mathcal{X}_1, \dots, \mathcal{X}_{n_0}) \overset{(d)}{=} (\mathcal{X}_2, \mathcal{X}_3, \dots , \mathcal{X}_{n_0}, \mathcal{X}_1) \overset{(d)}{=} (\mathcal{X}_3, \mathcal{X}_4, \dots , \mathcal{X}_{n_0}, \mathcal{X}_1,\mathcal{X}_2) \overset{(d)}{=} \dots,$$ for all $n_0 \geq 1$. Those properties are in particular satisfied as soon as the increments $( \mathcal{X})$ are exchangeable in the sense that $$ ( \mathcal{X}_1, \dots, \mathcal{X}_{n_0}) \overset{(d)}{=} (\mathcal{X}_{\sigma(1)}, \mathcal{X}_{\sigma(2)}, \dots , \mathcal{X}_{\sigma(n_0)})$$ for any $n_0$ and any permutation $\sigma$ of $\{1,2, \dots , n_0\}$. For the connoisseur, De Finetti's theorem (not discussed in these pages) shows that those processes are mixtures of random walks with i.i.d.~increments. \subsection{Kemperman's formula} As usual, for $k \in \mathbb{Z}$, we denote by $T_{k} = \inf \{ i \geq 0 : S_i=k\}$ the hitting time of $k$ by the random walk $(S)$. An easy corollary of the cycle lemma is Kemperman's\footnote{\raisebox{-3mm}{\includegraphics[width=1cm]{kemperman}} Johannes Henricus Bernardus Kemperman (1924--2011), Dutch} formula: \begin{proposition}[Kemperman's formula]\label{prop:kemperman} Let $(0=S_0,S_1, \dots , S_n)$ be a skip-free process with cyclically exchangeable increments. Then for every $n \geq 1$ and every $k \geq 1$ we have $$ \frac{1}{n}\mathbb{P}(S_{n}=-k) = \frac{1}{k} \mathbb{P}(T_{-k}=n).$$ \end{proposition} \noindent \textbf{Proof}. Let us first rewrite Lemma \ref{lem:feller} in a single equation \begin{eqnarray} \label{eq:equivfeller} \mathbf{1}_{s_{n}=-k} \quad =\quad \frac{1}{k} \sum_{\ell=0}^{{n-1}} \mathbf{1}_{T_{-k}(s^{(\ell)})=n}.
\end{eqnarray} Indeed, if the walk $(s)$ is such that $s_{n}=-k$ for $k \geq 1$, then there are exactly $k$ shifts for which the indicator functions on the right-hand side do not vanish. Since we divide by $k$, the total sum is one. We take expectation when $(s) = (S_{0}, S_{1}, \dots , S_{n})$ is the path made up of the first $n$ steps of our random walk. Using exchangeability of the increments, for all $0 \leq \ell \leq n-1$ we have $(S^{(\ell)}_{j})_{0 \leq j \leq n} = (S_{j})_{0 \leq j \leq n}$ in distribution. We deduce Kemperman's formula. \qed \medskip \begin{remark} \label{rek:kemp+local} Combining Kemperman's formula with the local central limit theorem (Theorem \ref{thm:local}), we deduce that if $(S)$ is an aperiodic skip-free random walk with centered increments having finite variance $\sigma^2$ then we have $$ \mathbb{P}(T_{-1}=n) \sim \frac{1}{\sqrt{2 \pi \sigma^{2}}} \cdot \frac{1}{n^{3/2}}, \quad \mbox{ as }n \to \infty.$$ \end{remark} \begin{exo}\label{exo:shiftunif} Let $(S)$ be an integer-valued one-dimensional random walk, not necessarily skip-free. For $n \geq 0$, let $K_{n} = \inf\{ 0 \leq k \leq n : S_{k}= \sup_{0 \leq i \leq n}S_{i}\}$ be the first time at which the walk achieves its maximum over $\{0,1, \dots ,n\}$. Show that conditionally on $S_{n}=1$, the variable $K_{n}$ is uniformly distributed over $\{1,2, \dots , n\}$. Compare with Proposition \ref{prop:arcsine1st}. \end{exo} \subsection{Simple symmetric random walk} Let us give a first application of this formula in the case of the symmetric simple random walk whose step distribution is $ \mu= \frac{1}{2}( \delta_{1} + \delta_{-1})$.
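Kemperman's formula is easy to check numerically for this walk. Here is a minimal Python sketch (the helper \texttt{first\_passage\_prob} is ours, not notation from the text): it computes $\mathbb{P}(T_{-1}=n)$ by dynamic programming over paths staying non-negative before time $n$, and compares it with $\frac{1}{n}\mathbb{P}(S_{n}=-1)$.

```python
from math import comb

def first_passage_prob(n):
    """P(T_{-1} = n) for the simple symmetric walk: the walk must stay >= 0
    up to time n-1 and then step from 0 down to -1 at time n."""
    # p[x] = P(S_k = x and S_j >= 0 for all j <= k)
    p = {0: 1.0}
    for _ in range(n - 1):
        q = {}
        for x, w in p.items():
            for step in (1, -1):
                if x + step >= 0:
                    q[x + step] = q.get(x + step, 0.0) + w / 2
        p = q
    return p.get(0, 0.0) / 2  # final step 0 -> -1 has probability 1/2

# Kemperman: P(T_{-1} = n) = P(S_n = -1) / n, non-zero only for odd n
for n in (1, 3, 5, 7, 9, 11):
    kemperman = comb(n, (n - 1) // 2) / 2**n / n
    assert abs(first_passage_prob(n) - kemperman) < 1e-12
```

The direct dynamic-programming count on the left agrees with the cyclic-shift count on the right, illustrating the proposition for small values of $n$.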
Due to parity reasons, $T_{-1}$ must be odd, and by Kemperman's formula we have for $n \geq 1$ \begin{eqnarray} \label{eq:T>ss} \mathbb{P}( T_{-1}=2n-1) &=& \frac{1}{2n-1} \mathbb{P}(S_{2n-1}=-1) = \frac{1}{2n-1} 2^{{-(2n-1)}} { 2n-1 \choose n} \\ &=& 2^{-2n+1} \frac{(2n-2)!}{n!(n-1)!} = \frac{1}{2} \cdot {4^{-(n-1)}} \mathrm{Cat}(n-1), \end{eqnarray} where for $n \geq 0$ we have put $ \mathrm{Cat}(n) = \frac{1}{n+1} {2n \choose n}$ for the $n$th Catalan\footnote{\raisebox{-3mm}{\includegraphics[width=1cm]{catalan}} Eug\`ene Charles Catalan (1814--1894), French and Belgian} number. As an application of this formula, we can prove the famous arcsine law\footnote{there are at least three arcsine laws in the theory of random walk...}: \begin{proposition}[1st Arcsine law] \label{prop:arcsine1st}Let $(S)$ be the simple symmetric random walk on $ \mathbb{Z}$. We put $K_{n} = \inf\{ 0 \leq k \leq n : S_{k} = \sup_{0 \leq i \leq n}S_{i}\}$. Then $$ \frac{K_{n}}{n} \xrightarrow[n\to\infty]{(d)} \frac{ \mathrm{d}x}{ \pi \sqrt{x (1-x)}} \mathbf{1}_{ x \in (0,1)}.$$ \end{proposition} \begin{figure}[!h] \begin{center} \includegraphics[width=8cm]{arcsine} \caption{The arcsine distribution} \end{center} \end{figure} \begin{remark} The name arcsine comes from the cumulative distribution function of the right-hand side which is $\frac{2}{\pi} \mathrm{arcsin}(\sqrt{x})$. Quoting Feller: ``Contrary to intuition, the maximum accumulated gain is much more likely to occur towards the very beginning or the very end of a coin-tossing game than somewhere in the middle.'' \end{remark} \noindent \textbf{Proof.} Putting $T_0^+ = \inf\{n >0 : S_n=0\}$ for the first \textbf{return} time at $0$, using duality we can compute exactly for $k \in \{1,2, \dots,n\}$ \begin{eqnarray*} \mathbb{P}(K_{n}=k) &=& \mathbb{P}(T_{-1} > n-k) \cdot \mathbb{P}( {T}_0^+ > k \mbox{ and } S_{1} >0) \\ &\underset{ \mathrm{symm}}{=}& \mathbb{P}(T_{-1} > n-k) \cdot \frac{1}{2} \cdot \mathbb{P}( {T}_{-1} > k-1).
\end{eqnarray*} For $k=0$ we simply have $ \mathbb{P}(K_n=0)= \mathbb{P}(T_{-1}>n)$. Using \eqref{eq:T>ss} and Stirling's formula, the last display is seen to be equivalent to $\frac{1}{\pi} \frac{1}{ \sqrt{k(n-k)}}$ as $k$ and $n-k$ tend to $\infty$. Let us now add a little blur to $K_{n}$ and consider $\tilde{K}_{n} = K_{n}+U_{n}$ where $U_{n}$ is independent of $K_{n}$ and uniformly distributed over $[0,1]$. Then clearly $ \tilde{K}_{n}/n$ has a density with respect to Lebesgue measure which converges pointwise towards the density of the arcsine law. It follows from Scheff\'e's lemma (Exercise \ref{exo:scheffe}) that $\tilde{K}_{n}/n$ converges in total variation towards the arcsine law and consequently $K_{n}/n$ converges in distribution towards the arcsine law since $U_{n}/n \to 0$ in probability. \qed \subsection{Poisson random walk} \label{sec:poissonRW} Another explicit application of Kemperman's formula is obtained by considering a random walk $(S)$ with step distribution given by the law of $ \mathfrak{P}({\alpha})-1$ where $ \mathfrak{P}({\alpha})$ is a Poisson random variable of parameter $\alpha$, namely $$ \mathbb{P}(X=k-1) = \mathrm{e}^{-\alpha} \frac{\alpha^k}{ k!}, \quad \mbox{ for } k \geq 0.$$ Clearly, if $\alpha > 1$ then the walk is transient and drifts towards $+\infty$. Using the additivity property of independent Poisson variables and Kemperman's formula we have: \begin{eqnarray*} \mathbb{P}(T_{-1}=n) &\underset{ \mathrm{Kemperman}}{=} & \frac{1}{n} \mathbb{P}( S_n = -1)\\ &\underset{ \sum \ \mathrm{id\ Poisson}}{=}& \frac{1}{n} \mathbb{P}( \mathfrak{P}({n \alpha}) = n-1) = \mathrm{e}^{-\alpha n} \frac{(\alpha n)^{n-1}}{n!}.
\end{eqnarray*} This law is named after Borel:\footnote{ \raisebox{-5mm}{\includegraphics[width=1cm]{borel}}F\'elix \'Edouard Justin \'Emile Borel (1871--1956), French.} \begin{definition} \label{def:borel-tanner} For $\alpha \in [0,1]$, the Borel--Tanner distribution $\xi_\alpha$ is the law on $ \mathbb{Z}_{>0}$ given by $$ \xi_\alpha(n) = \mathrm{e}^{-\alpha n} \frac{(\alpha n)^{n-1}}{n!}, \quad \mbox{ for } n \geq 1.$$ \end{definition} \begin{exo} Do you have an elementary way to see that the above display defines a probability distribution? \end{exo} \section{Ballot theorems} Let us now turn our attention to ballot theorems when we require a positivity constraint on the walk. In the following we say that $(S)$ is \textbf{skip-free ascending} if $(-S)$ is a skip-free random process. \subsection{Ballot theorem} \begin{lemma} \label{lem:>0} Let $(S)$ be a skip-free ascending random walk. Then for every $n \geq 1$ and every $k \geq 1$ we have $$ \mathbb{P}( S_{i}>0, \forall 1 \leq i \leq n \mid S_{n}=k) = \frac{k}{n}.$$ \end{lemma} \noindent \textbf{Proof.} Notice that the walk $(-S)$ is skip-free descending. So by time reversal (but not space reversal as in Lemma \ref{lem:duality}) and Kemperman's formula we have \begin{eqnarray*} \mathbb{P}(S_{i}>0, \forall 1 \leq i \leq n \mbox{ and } S_{n}=k) \underset{ \mathrm{time-rev.}}{=} \mathbb{P}(T_{-k}(-S) = n) \underset{ \mathrm{Prop.}\ \ref{prop:kemperman}}{=} \frac{k}{n} \mathbb{P}(-S_{n}=-k)= \frac{k}{n} \mathbb{P}(S_{n}=k).\end{eqnarray*} \qed \medskip Let us give an immediate application due to Bertrand\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{bertrand}} Joseph Bertrand (1822--1900), French} which is useful during election days: \begin{theorem}[Ballot theorem]\label{the:ballot}During an election, candidates $A$ and $B$ respectively have $a$ and $b$ votes with $a >b$. Suppose that votes are spread uniformly in the urn. What is the chance that during the counting of votes, candidate $A$ is always strictly ahead?
$$ \mbox{ answer: } \quad \frac{a-b}{a+b}.$$ \end{theorem} \noindent \textbf{Proof.} Let us model the scenario by a uniform path making only $+1$ or $-1$ steps which starts at $(0,0)$ and ends at $(a+b, a-b)$. The $+1$ steps correspond to votes for candidate $A$ and the $-1$ steps to votes for $B$. This path can be seen as the trajectory of a symmetric random walk conditioned to be equal to $a-b$ at time $a+b$. The conclusion is given by the previous lemma. \qed \subsection{Staying positive forever} \label{sec:stayposfacile} Let $(S)$ be a one-dimensional random walk with integrable increments having positive mean. Recall from Remark \ref{rek:>0pos>0} that the probability that the walk stays positive after time $1$ is strictly positive. We compute this probability below in the case of skip-free ascending and skip-free (descending) walks: \begin{corollary} If $(S)$ is skip-free ascending such that $ \mathbb{E}[S_{1}] >0$ then we have $$ \mathbb{P}( S_{i}>0: \forall i \geq 1) = \mathbb{E}[S_{1}].$$ \end{corollary} \noindent \textbf{Proof.} We have \begin{eqnarray*} \mathbb{P}( S_{i}>0: \forall i \geq 1) &=& \lim_{n \to \infty} \mathbb{P}( S_{i}>0: \forall 1 \leq i \leq n)\\ &=&\lim_{n \to \infty} \mathbb{E}\left[ \mathbb{P}( S_{i}>0: \forall 1 \leq i \leq n \mid S_{n})\right]\\ & \underset{ \mathrm{Lem.} \ref{lem:>0}}{=} & \lim_{n \to \infty} \mathbb{E}\left[ \frac{S_{n}}{n} \mathbf{1}_{ S_{n} >0}\right] = \mathbb{E}[S_{1}], \end{eqnarray*} where for the last convergence we used the fact that $\frac{S_n}{n} \mathbf{1}_{S_n>0} \to \mathbb{E}[S_{1}]$ almost surely by the strong law of large numbers together with the fact that $|\frac{S_n}{n} \mathbf{1}_{S_n>0}| \leq 1$ (recall that $S_n \leq n$ since the walk is skip-free ascending), which enabled us to invoke dominated convergence.
\qed \begin{proposition} \label{prop:GWdisguise}If $(S)$ is skip-free descending (with $\mu \ne \delta_{0}$) then $ \mathbb{P}(S_{n} \geq 0, \forall n \geq 0)=1- \alpha$ where $\alpha$ is the smallest solution in $[0,1]$ to the equation: \begin{eqnarray} \label{eq:GWdisguise} \alpha = \sum_{k=-1}^{\infty} \mu_{k} \alpha^{k+1}. \end{eqnarray} \end{proposition} \noindent \textbf{Proof.} Since $ \mu$ is supported by $\{-1,0,1,\dots\}$ its mean $m$ is well-defined and belongs to $[-1, \infty]$. We already know from the previous chapter that $ \mathbb{P}(S_{n} \geq 0, \forall n \geq 0)>0 $ if and only if $m >0$ (we use here the fact that the walk is not constant since $\mu \ne \delta_{0}$). We denote by $ T_{<0}$ the hitting time of $\{\dots,-3,-2,-1\}$ by the walk $(S)$. Since $(S)$ is skip-free descending, if $T_{<0}$ is finite then necessarily $ S_{T_{<0}}=-1$. To get the equation of the proposition we perform one step of the random walk $S$: if $S_{1}=-1$ then $ T_{<0}< \infty$. Otherwise if $S_{1}\geq 0$ then consider the stopping times $$\theta_{0}=1,\quad \theta_{1}= \inf\{ k\geq 1 : S_{k} = S_{1}-1\},\quad \theta_{2}= \inf\{ k\geq \theta_{1} : S_{k} = S_{1}-2\}, \dots .$$ If $I = \sup\{ i \geq 0 : \theta_{i}< \infty\}$, the strong Markov property shows that $(\theta_{i+1}-\theta_{i})_{0 \leq i \leq I}$ has the same law as i.i.d.\ samples of law $T_{ <0}$ until the first hit of $+\infty$. Furthermore, on the event $\{S_{1}\geq 0\}$ we have $$ \{ T_{ <0} < \infty\} = \bigcap_{n=0}^{S_{1}} \{ \theta_{n+1}-\theta_{n} < \infty\}.$$ Taking expectation, we deduce that $ \mathbb{P}( T_{ < 0}< \infty)$ is indeed a solution of \eqref{eq:GWdisguise}. Now, notice that $F:\alpha \mapsto \sum_{k=-1}^{\infty} \mu_{k} \alpha^{k+1}$ is a convex function on $[0,1]$ which always admits $1$ as a fixed point. Since $F'(1) = m+1$ we deduce that $F$ admits two fixed points in the case $m>0$.
But when $m>0$ we already know that $\alpha <1$ and so $\alpha$ must be equal to the smallest solution of \eqref{eq:GWdisguise}. \qed \begin{exo} \label{exo:32} Let $(S)$ be a skip-free descending random walk which drifts towards $+ \infty$. Compute the law of $\inf\{ S_{k} : k \geq 0\}$. Relate to Corollary \ref{cor:duality}. \end{exo} \subsection{Parking on the line} We conclude our applications of the cycle lemma with a last, but nice, application of skip-free random walks to the parking problem on the line. Imagine an oriented discrete line with $n$ vertices, \textbf{the parking spots} (each vertex can accommodate at most one car). The cars are labeled from $1$ up to $m$ and they arrive one after the other on some of the $n$ vertices. When arriving, they try to park at their arrival node, and, if the parking spot is occupied, the cars drive towards the left of the line and park on the first available spot. If they do not find a free spot, they exit from the parking lot. \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{parking} \caption{Illustration of the parking of $6$ cars on a line with $6$ spots. Notice that car number $6$ did not manage to find a spot and exited the parking lot. Below, the encoding of the parking configuration by a skip-free ascending walk. \label{fig:parking}} \end{center} \end{figure} An Abelian property shows that the unlabeled final configuration as well as the number of cars exiting the parking lot do not depend on the order in which we try to park the cars (exercise!). In our random model, each of the $m$ cars picks its arrival vertex in $\{1,2, \dots , n\}$ independently and uniformly at random. Of course, if $m >n$, it is impossible that all cars park.\bigskip We shall prove the following theorem due to Konheim and Weiss: \begin{theorem}[Konheim \& Weiss (1966)]\noindent Imagine that $m$ cars try to park uniformly and independently on $n$ vertices.
The probability that they all manage to park is equal to $$ \frac{n+1-m}{n+1} \left( 1+\frac{1}{n}\right)^m.$$ In particular, if $m= [\alpha n]$ with $\alpha \in (0,1)$ the above probability converges to $(1-\alpha) \mathrm{e}^{\alpha}$ as $n \to \infty$. \label{thm:parking} \end{theorem} \noindent \textbf{Proof}. The idea is to encode the parking situation by a walk $(S)$. Specifically, each vertex receiving $k$ cars corresponds to an increment of the walk of $1-k$, see Figure \ref{fig:parking}. The path we obtain this way is clearly skip-free ascending. By construction of the coding, for any $i \in \{1, \dots , n\}$ the value of the walk at time $i$ is equal to $i$ minus the number of cars arriving on vertices on the left of it. It is easy to see that full parking for the cars corresponds to the fact that the walk stays non-negative until time $n$. In our probabilistic model where the $m$ cars choose independently and uniformly their arrival vertices, the increments of the walk $(S)$ are not independent. However, we clearly have $S_n = n-m$ and the increments of this walk are exchangeable, so that we can apply Lemma \ref{lem:>0}. The slight problem is that Lemma \ref{lem:>0} evaluates the probability that the walk stays positive, and we need the probability that it stays \textit{non-negative}. To get around this problem, we imagine that we add an $(n+1)$-th vertex at the left extremity of the line. Clearly, each successful parking configuration on $\{1, \dots , n\}$ corresponds to a single configuration where the $m$ cars choose to park on vertices in $\{1,\dots , n\}$ and where the vertex $n+1$ is empty at the end. In terms of the random walk, we precisely ask that it stays positive. Hence, by Lemma \ref{lem:>0}, the number of successful parking configurations with $m$ drivers and $n$ spots is equal to $$ \frac{n+1-m}{n+1} \cdot (n+1)^m.$$ The theorem follows immediately after dividing by the number of configurations in the initial model, i.e.~by $n^{m}$.
\qed \section{Wiener--Hopf factorization} In this section we extend the theory to the case of a random walk with arbitrary step distribution $\mu$ which is not necessarily integer-valued nor skip-free. We still denote by $(S)$ a one-dimensional random walk starting from $0$ and with independent increments of law $\mu$ supported by $ \mathbb{R}$. We first need to introduce the so-called \textbf{ladder variables}. \subsection{Ladder variables} \label{sec:ladder} \begin{definition}[Ladder heights and epochs] \label{def:prop:ladder} We define by induction $ T_{0}^{>}=T_{0}^{<}={T}_{0}^{\geq}= {T}_{0}^{\leq}=0$ as well as $ H_{0}^{>}=H_{0}^{<}={H}_{0}^{\geq}= {H}_{0}^{\leq}=0$ and for $i \geq 1$ we put \begin{eqnarray*} T_{i}^{>} &=& \inf\left\{ k > T_{i-1}^{>}: S_{k} > H_{i-1}^{>} \right\} \quad \mbox{ and } \quad H_{i}^{>} = S_{T_{i}^{>}}, \\ {T}_{i}^{\geq} &=& \inf\left\{ k > {T}_{i-1}^{\geq}: S_{k} \geq {H}_{i-1}^{\geq} \right\} \quad \mbox{ and } \quad {H}_{i}^{\geq} = S_{{T}_{i}^{\geq}}, \\ T_{i}^{<} &=& \inf\left\{ k > T_{i-1}^{<}: S_{k} < H_{i-1}^{<} \right\} \quad \mbox{ and } \quad H_{i}^{<} = S_{T_{i}^{<}}, \\ {T}_{i}^{\leq} &=& \inf\left\{ k > {T}_{i-1}^{\leq}: S_{k} \leq {H}_{i-1}^{\leq} \right\} \quad \mbox{ and } \quad {H}_{i}^{\leq} = S_{{T}_{i}^{\leq}}. \end{eqnarray*} If $T_{i}^{*}$ is not defined (i.e.~we take the infimum over the empty set) then we put $T_{j}^{*}= H_{j}^{*} = \pm \infty$ for all $j \geq i$. The variables $(T^{>}/{T}^{\geq})$ (resp.\ $(T^{<}/{T}^{\leq})$) are called the strict/weak ascending (resp.\ descending) ladder epochs. The associated $H$ processes are called the (strict/weak ascending/descending) ladder heights. \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{ladder} \caption{Illustration of the definition of the ladder heights and epochs.} \end{center} \end{figure} When $\mu$ has no atoms, the walk $S$ does not take twice the same value a.s.
so the weak and strict ladder variables are the same. In the following we write $H$ and $T$ generically for one of the four couples $(T^{\geq}, H^{\geq}),({T}^{>}, {H}^{>}),(T^{<}, H^{<}) \mbox{ or }(T^{\leq}, H^{\leq}).$ Since the ladder epochs are stopping times for the natural filtration generated by the walk, the strong Markov property then shows that $N=\inf\{ i \geq 0: T_{i} = \infty\}$ is a geometric random variable with distribution $$ \mathbb{P}( N = k ) = \mathbb{P}( T_{1} = \infty)\mathbb{P}( T_{1} < \infty)^{{k-1}},$$ and that conditionally on $N$ the random variables $((T_{i}-T_{i-1}), (H_{i}-H_{i-1}))_{1 \leq i \leq N-1}$ are i.i.d.\ with law $(T_{1}, H_{1}) \mbox{ conditioned on } T_{1}< \infty$. In particular, $\limsup_{n \to \infty}S_n = +\infty$ a.s.~if and only if $ \mathbb{P}(T_1^> = \infty) = \mathbb{P}(T_1^\geq =\infty)=0$. \bigskip One can now extend Feller's cycle lemma (Lemma \ref{lem:feller}) to this setup. The main difference is that when the walk is not skip-free, the number of records cannot easily be tied to the value of the walk; that is why the ladder epochs and heights are needed. With the same notation as in Section \ref{sec:fellerskip}, we have the following extension of \eqref{eq:equivfeller} (with mutatis mutandis the same proof): for every $n \geq 1$ and any measurable subset $A \subset \mathbb{R}_{+}^{*}$ we have \begin{eqnarray*} \mathbf{1}_{s_{n} \in A} = \sum_{i=0}^{{n-1}} \sum_{k=1}^{\infty} \frac{1}{k} \mathbf{1}_{T_{k}^{>}(s^{(i)})=n} \mathbf{1}_{H_{k}^{>}(s^{(i)})\in A}. \end{eqnarray*} Taking expectation and using the invariance of the walk by cyclic shift we deduce the following equality of measures generalizing Kemperman's formula: \begin{eqnarray} \label{eq:kempgen}\mathbf{1}_{x >0} \frac{ \mathbb{P}(S_{n} \in \mathrm{d}x)}{n}= \sum_{k=1}^{\infty} \frac{1}{k} \mathbb{P}( H_{k}^{>} \in \mathrm{d}x, T_{k}^{>} =n) \mathbf{1}_{x >0}.
\end{eqnarray} \subsection{Wiener--Hopf factorization} The following result is an analytic translation of our findings. \begin{theorem}[Spitzer--Baxter formula; Wiener--Hopf factorization]\label{thm:WH}For $r \in [0,1)$ and $\mu \in \mathbb{C}$ such that $ \mathfrak{Re}(\mu)\geq0$ we have $$\left( 1 - \mathbb{E}\left[ r^{T_{1}^{>}} \mathrm{e}^{-\mu H_{1}^{>}} \right]\right) = \exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \mathbb{E}\left[ \mathrm{e}^{-\mu S_{n}} \mathbf{1}_{S_{n}>0} \right] \right),$$ $$\left( 1 - \mathbb{E}\left[ r^{{T}_{1}^{\leq}} \mathrm{e}^{\mu {H}_{1}^{\leq}} \right]\right) = \exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \mathbb{E}\left[ \mathrm{e}^{\mu S_{n}} \mathbf{1}_{S_{n}\leq 0} \right] \right).$$ \end{theorem} \noindent \textbf{Proof.} First, since $r \in [0,1)$ and $ \mathfrak{Re}(\mu) \geq0$ all the quantities in the last two displays are well defined. We only prove the first display since the calculation is similar for the second one. Let us start from the right-hand side of the theorem and write \begin{eqnarray*}\exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \mathbb{E}\left[ \mathrm{e}^{-\mu S_{n}} \mathbf{1}_{S_{n}>0} \right] \right) &\underset{ \eqref{eq:kempgen}}{=}& \exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \sum_{k=1}^{\infty} \frac{n}{k} \mathbb{E}\left[ \mathrm{e}^{-\mu H_{k}^{>}} \mathbf{1}_{T_{k}^{>}=n} \right] \right) \\ &=& \exp \Big( - \sum_{k=1}^{\infty} \frac{1}{k} \underbrace{\mathbb{E}\left[ \mathrm{e}^{-\mu H_{k}^{>}} r^{T_{k}^{>}} \right]}_{\left(\mathbb{E}\left[ \mathrm{e}^{-\mu H_{1}^{>}} r^{T_{1}^{>}} \right]\right)^{k} } \Big) \\ &=& 1- \mathbb{E}\left[ \mathrm{e}^{-\mu H_{1}^{>}} r^{T_{1}^{>}} \right], \end{eqnarray*} where in the last line we used the equality $\sum_{k =1}^{\infty} \frac{x^{k}}{k} = -\log (1-x)$ valid for $ |x|<1$. Note that we implicitly used the fact that $r<1$ by putting $r^{T_{k}^{>}}=0$ when $T_{k}^{>}= \infty$.
This proves Spitzer's\footnote{\raisebox{-3mm}{\includegraphics[width=0.7cm]{spitzer}} Frank Ludvig Spitzer (1926--1992), Austrian \& American} formula. \qed \begin{remark}[Explanation of the terminology of Wiener--Hopf factorization] If we write $$ \omega_{r}^{>}(\mu) = \exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \mathbb{E}\left[ \mathrm{e}^{-\mu S_{n}} \mathbf{1}_{S_{n}>0} \right] \right) \quad \mbox{ and }\omega_{r}^{\leq}(\mu) = \exp \left( - \sum_{n=1}^{\infty} \frac{r^{n}}{n} \mathbb{E}\left[ \mathrm{e}^{-\mu S_{n}} \mathbf{1}_{S_{n}\leq0} \right] \right),$$ then $\omega_{r}^{>}$ is analytic on the half-space $ \mathfrak{Re}(\mu) \geq 0$ whereas $\omega_{r}^{\leq}$ is analytic on $ \mathfrak{Re}(\mu) \leq 0$. On the imaginary line where the two functions are well defined we have \begin{eqnarray} \omega_{r}^{>}( it)\omega_{r}^{\leq}( it) = 1-r \mathbb{E}[\mathrm{e}^{-it X_{1}}]. \label{eq:wiener-hopf}\end{eqnarray} Hence, the characteristic function of the increment of the walk (or a slight modification thereof) has been written as a product of two analytic functions, each defined on a different half-space. The idea of writing a function on a line as a product of two functions defined on half-spaces goes back to Wiener \& Hopf and is often useful since we can use the tools of complex analysis on each of the factors. \end{remark} \medskip There are many applications of the previous formula; we just mention two surprising ones: \begin{corollary}\label{cor:law return} Let $(S)$ be a one-dimensional random walk with symmetric and diffuse step distribution. Then the law of $T_{1}^{>}$ is given by $$ \mathbb{E}[r^{T_{1}^{>}}] = 1 - \sqrt{1-r}, \ r \in[0,1), \quad \mbox{ or equivalently } \quad \mathbb{P}(T_{1}^{>}= n) = \frac{(2n-2)!}{2^{2n-1} n! (n-1)!}, \ n \geq 1.$$ \end{corollary} \noindent \textbf{Proof.} It suffices to take the first display of Theorem \ref{thm:WH} and to plug in $\mu=0$.
By symmetry of the increments and the lack of atoms we have $ \mathbb{P}(S_{n}>0)= \mathbb{P}(S_{n}\geq 0)= \frac{1}{2}$. It follows that \begin{eqnarray*} 1 - \mathbb{E}[r^{T_{1}^{>}}] = \exp\left( - \sum_{n \geq1} \frac{r^{n}}{n} \mathbb{P}(S_{n}>0)\right) = \exp\left( - \sum_{n \geq1} \frac{r^{n}}{n} \frac{1}{2}\right) = \exp\left( \frac{1}{2} \log(1-r)\right) = \sqrt{1-r}. \end{eqnarray*} To get the exact values of $ \mathbb{P}(T_{1}^{>}= n)$ it suffices to expand $1 - \sqrt{1-r}$ in a power series and to identify the coefficients. \qed \begin{corollary}[Back to the law of large numbers, again!] The random walk $(S)$ drifts towards $-\infty$ if and only if $$ \log \mathbb{E}[{T}_{1}^{<}] = \sum_{n \geq 1} \frac{\mathbb{P}(S_{n}\geq0)}{n} < \infty.$$ \end{corollary} \noindent \textbf{Proof.} From Theorem \ref{thm:WH} with $\mu=0$ we get for $r \in [0,1)$ $$ 1- \mathbb{E}[r^{T_{1}^{\geq}}] = \exp \left( - \sum_{n\geq 1} \frac{r^{n}}{n} \mathbb{P}(S_{n}\geq0)\right).$$ Letting $r \uparrow 1$ the left-hand side converges towards $1 - \mathbb{E}[ \mathbf{1}_{T_{1}^{\geq}<\infty}] = \mathbb{P}(T_{1}^{\geq}=\infty)$ whereas the right-hand side converges towards $\exp(-\sum_{n \geq 1} \frac{\mathbb{P}(S_{n}\geq0)}{n})$. But clearly $(S)$ drifts towards $-\infty$ if and only if $T_{1}^{\geq}$ is infinite with positive probability. In this case, recall from \eqref{eq:duality*} and the fact that the increments of the ladder variables are independent that we have $ \mathbb{E}[T_{1}^{<}] = 1/ \mathbb{P}(T_{1}^{\geq}=\infty)$, which immediately implies the second claim. \qed \bigskip \noindent \textbf{Bibliographical notes.} The study of skip-free random walks may be seen as a particular case of fluctuation theory for random walks, see e.g.~\cite{kyprianou2010wiener} for a more trajectorial approach.
The combinatorial approach taken here and based on the cycle lemma is adapted from \cite[Chapter XII]{Fel71} and \cite[Section 8.4]{Chung74}; it has many ramifications in the combinatorial literature, see \cite{ABR08} for much more about ballot theorems and \cite{diaconis2017probabilizing} for parking functions. Theorem \ref{thm:parking} can be found in \cite{konheim1966occupancy}. The proof of the law of large numbers based on duality is taken from \cite{CurLLN}. In general, path transformations are very useful tools in fluctuation theory for random walks (Spitzer--Baxter or Wiener--Hopf factorization). In particular, we mention the Sparre Andersen identity relating the position of the maximum and the time spent on the positive half-line for a random walk of length $n$, see \cite[Chapter XII]{Fel71} for more details. More recent applications of fluctuation theory for random walks can be found e.g.~in \cite{alili2005fluctuation,marchal2001two,kwasnicki2020random}. \medskip \noindent \textbf{Hints for exercises:}\ \\ Exercise \ref{exo:legallcornell}: is \cite[Lemma 1.9]{LG05} (but the proof there is different). This is a baby example of the Spitzer--Baxter or Wiener--Hopf factorization.\\ Exercise \ref{exo:shiftunif}: the $n$ distinct cycle shifts are such that $G_{n} = \{1, 2,\dots , n\}$.\\ Exercise \ref{exo:32}: it is a geometric distribution. \chapter{Bienaym\'e--Galton--Watson trees} \label{chap:GW} \hfill I will survive. \bigskip In this chapter we use our knowledge of one-dimensional random walks to study random trees coding the genealogy of a population where individuals reproduce independently of each other according to the same offspring distribution. These are the famous Bienaym\'e--Galton--Watson (BGW) trees.
\begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{contour.pdf} \caption{A large Bienaym\'e--Galton--Watson tree and its contour function} \end{center} \end{figure} \section{Plane trees and Bienaym\'e--Galton--Watson processes} \subsection{Plane trees}\label{sec:plane trees} Throughout this chapter we will use the standard formalism for plane trees as found in \cite{Nev86}. Let \begin{eqnarray*} \mathcal{U}& = &\bigcup_{n=0}^{\infty} ( \mathbb{Z}_{>0})^n \end{eqnarray*} where we recall that $\mathbb{Z}_{>0} = \{ 1,2, \ldots \}$ and $(\mathbb{Z}_{>0})^0 = \{ \varnothing \}$ by convention. An element $u$ of $\mathcal{U}$ is thus a finite sequence of positive integers which we interpret as a \textbf{word} whose letters are positive integers. We let $|u|$ be the length of the word $u$. If $u, v \in \mathcal{U}$, $uv$ denotes the concatenation of $u$ and $v$. If $v$ is of the form $uj$ with $j \in \mathbb{Z}_{>0}$, we say that $u$ is the \textbf{parent} of $v$ or that $v$ is a \textbf{child} of $u$. More generally, if $v$ is of the form $uw$, for $u,w \in \mathcal{U}$, we say that $u$ is an \textbf{ancestor} of $v$ or that $v$ is a \textbf{descendant} of $u$. \begin{definition}\label{def:planetree} A \textbf{plane tree} $\tau$ is a (finite or infinite) subset of $\mathcal{U}$ such that \begin{enumerate} \item $\varnothing \in \tau$, the point $\varnothing$ is called the \textbf{root} of $\tau$, \item if $v \in \tau$ and $v \neq \varnothing$ then the parent of $v$ also belongs to $\tau$, \item for every $u \in \tau$ there exists $k_u(\tau)\in \{ 0,1,2, \dots \} \cup \{ \infty\}$ such that $uj \in \tau$ if and only if $j \leq k_u(\tau)$. The number $k_u( \tau)$ is then \textbf{the number of children} of $u$ in $\tau$. 
\end{enumerate} \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{plane} \caption{A (representation of a) finite plane tree.} \end{center} \end{figure} Since every $u \in \tau \backslash \{ \varnothing \}$ has a unique parent, we deduce that for finite plane trees $\tau$ we have \begin{eqnarray} \label{eq:enfantparent} \#\tau -1= \sum_{u \in \tau}k_{u}(\tau). \end{eqnarray} A plane tree can be seen as a graph, in which an edge links two vertices $u,v$ such that $u$ is the parent of $v$ or vice-versa. Notice that with our definition, vertices of infinite degree are allowed since $k_{u}(\tau)$ may be infinite. When all degrees are finite, the tree is said to be \textbf{locally finite}. In this case, this graph is of course a tree in the graph-theoretic sense (see Proposition \ref{prop:tree}), and we can draw it in the plane $ \mathbb{R}^2$ so that its edges are non-crossing and such that the edges from a vertex $u$ to its children $u\,1,\ldots,u\,k_u(\tau)$ and to its parent if $u \ne \varnothing$ are ordered in a clockwise fashion. Equivalently, a plane tree can be seen as a genealogical tree where the children of each vertex are ranked from the oldest to the youngest one. Unless explicitly mentioned, all the trees considered in this chapter are plane trees. \begin{definition} \label{def:ulam} The set $ \mathcal{U}$ is a plane tree where $k_{u}( \mathcal{U})= \infty, \forall u \in \mathcal{U}$. It is called Ulam's tree. \end{definition} The integer $\#\tau$ denotes the number of vertices of $\tau$ and is called the \textbf{size} of $\tau$. For any vertex $u \in \tau$, we denote the shifted tree at $u$ by $\sigma_u( \tau) :=\{v \in \mathcal{U} : uv \in \tau\}$. 
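The word formalism is convenient to manipulate on a computer: a finite plane tree is just a set of tuples of positive integers, with the empty tuple playing the role of the root. The following Python sketch (the helper names \texttt{kids} and \texttt{is\_plane\_tree} are ours, not notation from the text) checks the three conditions of Definition \ref{def:planetree} and the identity \eqref{eq:enfantparent}.

```python
# A finite plane tree encoded as a set of tuples of positive integers;
# the empty tuple () is the root. (Sketch with our own helper names.)

def kids(u, tree):
    """Number of children k_u(tau) of the vertex u in the tree."""
    j = 0
    while u + (j + 1,) in tree:
        j += 1
    return j

def is_plane_tree(tree):
    """Check the three defining conditions of a finite plane tree."""
    if () not in tree:                      # (1) the root belongs to tau
        return False
    for u in tree:
        if u == ():
            continue
        if u[:-1] not in tree:              # (2) the parent of every vertex is in tau
            return False
        if u[-1] > 1 and u[:-1] + (u[-1] - 1,) not in tree:
            return False                    # (3) the children of u are u1, ..., u k_u
    return True

# root with children 1 and 2, where vertex 1 itself has two children
tau = {(), (1,), (2,), (1, 1), (1, 2)}
assert is_plane_tree(tau)
# every non-root vertex has a unique parent: #tau - 1 = sum of the k_u
assert len(tau) - 1 == sum(kids(u, tau) for u in tau)
```

The last assertion is exactly \eqref{eq:enfantparent}: summing the number of children over all vertices counts every vertex except the root once.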
The \textbf{height} of the tree $\tau$ is the maximal length of its words, $$ \mathrm{Height}(\tau) = \max \{ |u| : u \in \tau\} \in \{0,1,2,\dots\} \cup \{\infty\}.$$ The truncation at level $n$ of $\tau$ is denoted by $ [\tau]_n = \{ u \in \tau : |u| \leq n\}$ which is again a plane tree. Its boundary $\partial [\tau]_n$ is made of the individuals at generation exactly $n$ in the genealogical interpretation $$ \partial [\tau]_n = \{ u \in \tau : |u|=n \}.$$ \subsection{Bienaym\'e--Galton--Watson trees} \label{sec:GWtrees} Let $\mu$ be a distribution on $\{0,1,2,\dots \}$ which we usually suppose to be different from $\delta_{1}$. Informally speaking, a Bienaym\'e--Galton--Watson\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{bienayme}} \begin{tabular}{l}Ir\'en\'ee-Jules Bienaym\'e\\ (1796--1878), French \end{tabular} $\quad$ \raisebox{-5mm}{\includegraphics[width=1cm]{galton}} \begin{tabular}{l} Francis Galton \\ (1822--1911), English \end{tabular} \ \ and \ \ \raisebox{-5mm}{\includegraphics[width=1cm]{watson}} \begin{tabular}{l}Henry William Watson\\ (1827--1903), English \end{tabular}} (BGW in short) tree with \textbf{offspring distribution} $\mu$ is a random (plane) tree coding the genealogy of a population starting with one individual and where all individuals reproduce independently of each other according to the distribution $\mu$. Here is the proper definition: \begin{definition}[BGW tree] Let $(K_{u}:u \in \mathcal{U})$ be independent and identically distributed random variables of law $\mu$. We let $\mathcal{T}$ be the random plane tree made of all words $u = j_{1}j_{2}\dots j_{n} \in \mathcal{U}$ such that $j_{i} \leq K_{j_{1}\dots j_{i-1}}$ for all $1 \leq i \leq n$. In particular we have $k_u( \mathcal{T}) = K_u$ for all $u \in \mathcal{T}$. Then the law of $\mathcal{T}$ is the $\mu$-BGW distribution.
\end{definition} Equivalently, the law of a $\mu$-BGW tree $\mathcal{T}$ is characterized by the following \textbf{branching property}: Conditionally on the event $\{k_{ \varnothing}(\mathcal{T}) = \ell\}$ of probability $\mu_\ell$, the $\ell$ random trees $\sigma_{i}(\mathcal{T})$ for $1 \leq i \leq \ell$ are independent and distributed as $\mathcal{T}$. Notice also that the $\mu$-BGW probability of a \textit{finite} plane tree is explicit: \begin{eqnarray} \label{exo:GWprod} \mathbb{P}(\mathcal{T} = \tau_{0}) = \prod_{u \in \tau_{0}} \mu_{k_{u}(\tau_{0})}, \end{eqnarray} but the previous display does not characterize the distribution since the random tree $ \mathcal{T}$ may very well be infinite. We now link the BGW tree to the well-known BGW process. We first recall its construction. Let $(\xi_{i,j} : i \geq 0, j \geq 1)$ be i.i.d.\ random variables of law $\mu$. The $\mu$-Bienaym\'e--Galton--Watson process is defined by setting $Z_{0}=1$ and for $i \geq 0$ $$ Z_{i+1} = \sum_{j=1}^{Z_{i}} \xi_{i,j}.$$ It is then clear from the above construction that if $\mathcal{T}$ is a $\mu$-Bienaym\'e--Galton--Watson tree, then the process $X_{n} = \# \{ u \in \mathcal{T} : |u|=n\}$ has the law of a $\mu$-Bienaym\'e--Galton--Watson process. \section{{\L}ukasiewicz walk and direct applications} In this section we will encode (finite) trees via one-dimensional walks. This will enable us to get information on random BGW trees from our previous study of one-dimensional random walks. \subsection{{\L}ukasiewicz walk} The \textbf{lexicographical or depth first} order $<$ on $ \mathcal{U}$ is defined as the reader may imagine: if $u=i_{1}i_{2}\dots i_{n}$ and $v= j_{1}j_{2}\dots j_{m}$ are two words, then $u < v$ if $i_{\ell}< j_{\ell}$ where $\ell$ is the first index at which $i_{ \ell} \ne j_{\ell}$, or if $n <m$ and $i_{1}i_{2}\dots i_{n} = j_{1}j_{2}\dots j_{n}$.
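For words stored as Python tuples of positive integers, the lexicographical order just defined coincides with the language's built-in tuple comparison: the first differing letter decides, and a strict prefix is smaller than its extensions. A quick sanity check on made-up words (illustrative only, not part of the text):

```python
# Words of positive integers as tuples; Python's tuple comparison implements
# precisely the lexicographical (depth first) order defined above.
u, v, w = (1, 2), (1, 2, 1), (1, 3)

assert u < v   # u is a strict prefix of v (the case n < m with equal prefix)
assert v < w   # first differing letter: 2 < 3
assert sorted([w, v, u]) == [u, v, w]

# Sorting a tree's vertices hence lists them in depth-first order.
tau = [(2,), (1, 1), (), (1,)]
assert sorted(tau) == [(), (1,), (1, 1), (2,)]
```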
The \textbf{breadth first} order on $ \mathcal{U}$ is defined by $ u \prec v$ if $|u| < |v|$, or if $|u|=|v|$ and $u < v$ for the lexicographical order. \begin{definition} Let $\tau$ be a locally finite tree (i.e.~$k_u( \tau) < \infty$ for every $u \in \tau$). Write $u_{0}, u_{1}, \ldots$ for its vertices listed in the breadth first order. The \textbf{{\L}ukasiewicz walk} $ \mathcal{W}( \tau)= ( \mathcal{W}_n( \tau), 0 \leq n \leq \#\tau)$ associated to $\tau$ is given by $ \mathcal{W}_0( \tau)=0$ and for $0 \leq n \leq \#\tau-1$: $$ \mathcal{W}_{n+1}( \tau)= \mathcal{W}_n( \tau)+k_{u_{n}}( \tau)-1.$$ \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{lukabis} \caption{Left: a finite plane tree and its vertices listed in breadth-first order. Right: its associated {\L}ukasiewicz walk.} \end{center} \end{figure} In words, the {\L}ukasiewicz\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{lukasi}} Jan {\L}ukasiewicz (1878--1956), Polish logician.} walk consists of listing the vertices in breadth first order and keeping a running stack: for each vertex we add its number of children and subtract one (accounting for the exploration of the current vertex). In the case of a finite plane tree $\tau$, since the total number of children is equal to the number of vertices minus one, the following properties of $\mathcal{W}_{\cdot}( \tau)$ are easily checked: \begin{itemize} \item the {\L}ukasiewicz walk starts at $0$, i.e.~$$\mathcal{W}_{0}( \tau)=0,$$ \item it stays non-negative until all vertices have been explored, i.e. $$\mathcal{W}_{i}( \tau)\geq 0 \quad \mbox{ for } 0 \leq i \leq \#\tau-1,$$ \item it ends up at $-1$, i.e.
$$\mathcal{W}_{\#\tau}(\tau) =-1,$$ \item the walk is skip-free in the sense of Chapter \ref{chap:WH}, i.e.~$$ \mathcal{W}_{i+1}( \tau) - \mathcal{W}_{i}(\tau) \geq -1, \quad \mbox{ for any } 0 \leq i \leq \#\tau-1.$$ \end{itemize} When the tree is infinite but locally finite, every vertex of the tree will appear in the breadth first ordering\footnote{this would not be true had we chosen to explore the tree in the lexicographical (i.e.~depth first) order.} and the {\L}ukasiewicz path stays non-negative forever. We leave the proof of the following as an exercise for the reader: \begin{proposition} \label{exo:bijection}Let $ \mathbf{T}_{\ell oc }$ be the set of all finite or infinite but locally finite plane trees. Let $ \mathbf{W}_{\ell oc }$ be the set of all finite or infinite paths $ (w_{0},w_{1}, \dots , w_{n})$ with $n \in \{1,2, \dots \} \cup \{ \infty\}$ which start at $w_{0}=0$, end at $w_{n}=-1$, and are such that $w_{i+1}-w_{i} \geq -1$ as well as $w_{i} \geq 0$ for any $0 \leq i \leq n-1$. Then taking the {\L}ukasiewicz walk creates a bijection between $ \mathbf{T}_{\ell oc }$ and $ \mathbf{W}_{\ell oc }$. \end{proposition} \begin{remark}[Different types of exploration] The {\L}ukasiewicz path encodes the information gathered when we discover a tree using the breadth first search. Although we shall only use this exploration in these notes, one can similarly discover the tree using the depth first search (i.e.~using the lexicographical total order to enumerate the vertices of a plane tree) or using more exotic types of exploration. In particular, the exploration of the Erd{\H{o}}s--R\'enyi graph (Chapter \ref{chap:poissonER}) will be based on a depth-first exploration. This flexibility in the exploration algorithm is at the core of many nice results in random tree theory, see e.g.~\cite{broutin2016new,CurStFlour,krivelevich2013phase}. See also the next chapters where the idea of discovering the underlying geometry with a given algorithm plays a key role.
\end{remark} \subsection{{\L}ukasiewicz walk of a Bienaym\'e--Galton--Watson tree} As it turns out, the {\L}ukasiewicz walk associated to a $\mu$-BGW tree is, roughly speaking, a random walk. Recall that the offspring distribution $\mu$ is supported by $\{0,1,2, \dots \}$ so a $\mu$-BGW tree is locally finite a.s. \begin{proposition}\label{prop:luka}Let $ \mathcal{T}$ be a $\mu$-BGW tree, and let $(S_{n})_{n \geq0}$ be a random walk with i.i.d.\ increments of law $ \mathbb{P}(S_{1}=k) = \mu_{k+1}$ for $k \geq -1$. If $T_{-1}$ is the first hitting time of $-1$ by the walk $S$ (we may have $T_{-1}= \infty$) then we have $$ \left( \W_{0}( \mathcal{T}), \W_{1}( \mathcal{T} ), \ldots, \W_{ \#\mathcal{T} }( \mathcal{T}) \right) \ \overset{(d)}{=} \ (S_{0},S_{1}, \ldots, S_{ T_{-1}}). $$ \end{proposition} \noindent \textbf{Proof.} Let $ (\omega^{0}_i: 0 \leq i \leq n)$ be the first $n$ steps of a skip-free path such that $n$ is less than or equal to the hitting time of $-1$ by this path. By reversing the {\L}ukasiewicz construction, we see that the first $n$ steps of the {\L}ukasiewicz walk of the tree $ \mathcal{T}$ match $ (\omega^{0}_i: 0 \leq i \leq n)$ if and only if the subtree $\tau_0$ made of the first $n$ vertices of $ \mathcal{T}$ in breadth first order, as well as their numbers of children, are those prescribed by $(\omega^{0}_i: 0 \leq i \leq n)$, see Figure \ref{fig:partialuka}. \begin{figure}[!h] \begin{center} \includegraphics[width=12cm]{partialuka} \caption{ \label{fig:partialuka} Fixing the first $n$ vertices explored (in red) during the breadth first exploration of a BGW tree.
The black vertices and their subtrees (in gray) have not been explored yet.} \end{center} \end{figure} The probability under the $\mu$-BGW measure of this event is equal to $$\prod_{u \in \tau_{0}} \mu_{k_{u}( \mathcal{T})} = \prod_{i=0}^{n-1} \mu_{\omega^{0}_{i+1}-\omega^0_i+1} = \mathbb{P}\left( (S_{i})_{0 \leq i \leq n} = (\omega^{0}_i)_{0 \leq i \leq n}\right).$$ The proposition follows. \qed \medskip Combining the previous proposition with Remark \ref{rek:kemp+local} we deduce that if the offspring distribution is critical, aperiodic and has finite variance $\sigma^{2}$, then we have $$ \mathbb{P}( \#\mathcal{T}=n)\sim \frac{1}{\sqrt{2 \pi \sigma^{2}}} \cdot \frac{1}{n^{3/2}}, \quad \mbox{ as }n \to \infty.$$ \paragraph{Extinction probability.} As a direct application of the previous proposition let us give a random walk proof of the following well-known criterion for survival of a Bienaym\'e--Galton--Watson process: \begin{theorem}[Extinction probability]\label{thm:extinction}Let $\mu$ be an offspring distribution of mean $m \geq 0$ such that $\mu \ne \delta_{1}$. The probability that $ \mathcal{T}$ is finite is equal to the smallest solution $\alpha \in [0,1]$ to the equation \begin{eqnarray} \alpha = \sum_{k\geq 0} \mu_{k} \alpha^k, \label{eq:GWfixed}\end{eqnarray} in particular it is equal to $1$ if $m \leq 1$. \end{theorem} \noindent \textbf{Proof.} With the same notation as in Proposition \ref{prop:luka} we have that $ \mathbb{P}( \# \mathcal{T} = \infty) = \mathbb{P}( T_{-1} = \infty)$. Since the walk $S$ is non-trivial (i.e.~non constant) and skip-free, Proposition \ref{prop:GWdisguise} yields the statement. \qed \bigskip Let us also recall the more ``standard'' proof of the previous theorem which is useful in Exercise \ref{exo:dekking}. Let $ g(z) = \sum_{ k \geq 0} \mu_{k} z^{k}$ be the generating function of the offspring distribution $\mu$. In particular, if $ \mathcal{T}$ is a $\mu$-BGW tree then $g$ is the generating function of $ \#\partial [ \mathcal{T}]_1$.
More generally, if $g_n$ is the generating function of $ \# \partial [ \mathcal{T}]_n$, then by the branching property of BGW trees and standard operations on generating functions we have that $g_{n+1} = g \circ g_n$ for $n \geq 1$ so that $$g_n = g \circ g \circ \cdots \circ g,$$ ($n$-fold composition). Specializing at $z=0$ we deduce that $ u_{n} = \mathbb{P}( \mathrm{Height}( \mathcal{T}) < n)$ follows the recurrence relation $ u_{0}=0$ and $u_{n+1} = g(u_{n})$. This recursive system is easily studied and $u_{n}$ converges towards the first fixed point of $g$ in $[0,1]$, which is strictly less than $1$ if and only if $g'(1) >1$ by convexity of $g$. We conclude using the fact that $ \{\# \mathcal{T} = \infty\}$ is the decreasing limit of the events $\{ \mathrm{Height}( \mathcal{T})\geq n\}$ as $n \to \infty$. \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{GWclassic} \caption{Illustration of the ``standard'' proof of Theorem \ref{thm:extinction}. The extinction probability is computed as the limit of the recursive system defined by $u_0 =0$ and $u_{n+1} = g(u_n)$. \label{fig:GWclassic}} \end{center} \end{figure} \begin{exo}[A theorem of Dekking \cite{Dek91b} and a \textbf{discontinuous phase transition}] \label{exo:dekking} We say that an infinite tree $\tau$ contains an infinite binary tree (starting at the root) if it is possible to find a subset $S$ of vertices of $\tau$ containing the origin $\varnothing$ and such that each vertex in $S$ has exactly two children in $ S$. Let $ g(z) = \sum_{ k \geq 0} \mu_{k} z^{k}$ be the generating function of the offspring distribution $\mu$.
\begin{enumerate} \item Show that the probability that a $\mu$-BGW tree $ \mathcal{T}$ contains no infinite binary tree (starting at the root) is the smallest solution $z \in [0,1]$ to $$ z = g(z) + (1-z) g'(z).$$ \item Application: in the case $\mu_{1}= 1-p$ and $\mu_{3}=p$ with $p \in [0,1]$, show that almost surely there is no infinite binary tree in $ \mathcal{T}$ if and only if $p < \frac{8}{9}$, and that in the critical case $p= \frac{8}{9}$ the probability that $ \mathcal{T}$ contains an infinite binary tree is in fact positive (in contrast with the survival of the tree itself in Theorem \ref{thm:extinction}). \begin{figure}[!h] \begin{center} \includegraphics[width=8cm]{dekking} \caption{Plot of the function $g(z) + (1-z) g'(z)$ against the first bisector (in blue) where $g(z) = (1-p)z+p z^{3}$ for the values $ p = \frac{1}{2}, \frac{2}{3}, \frac{4}{5}$ in (yellow, green, red), the critical case $p= \frac{8}{9}$ in purple and $p = 1$ in brown.} \end{center} \end{figure} \end{enumerate} \end{exo} \begin{remark}[A historical remark] We usually attribute to Galton and Watson the introduction and study of the so-called Galton--Watson process in 1873 in order to study the survival of family names among British lords. However, in their initial paper devoted to the calculation of the extinction probability they concluded hastily that the extinction is almost sure whatever the offspring distribution! This is even more surprising since almost thirty years before, in 1845, Bienaym\'e considered the very same model and correctly derived the extinction probability. This is yet another illustration of Stigler's law of eponymy! \end{remark} \subsection{Lagrange inversion formula} The Lagrange inversion formula is a closed formula for the coefficients of the reciprocal (composition inverse) of a power series. More precisely, imagine that $f(z) = \sum_{i \geq 0} f_{i} z^{i} \in \mathbb{C}[[z]]$ is a formal power series in the indeterminate $z$ (no convergence conditions are assumed) so that $f_{0}=0$ and $f_{1} \ne 0$. We recall the notation $[z^i]f(z) = f_i$.
One would like to invert $f$ i.e.~to find a power series $\phi \in \mathbb{C}[[z]]$ such that $ z = \phi( f(z)) = f(\phi(z))$. In combinatorics, the above equation is usually written in the ``Lagrange formulation'' by supposing that $f(z) = \frac{z}{R(z)}$ where $ R(z) \in \mathbb{C}[[z]]$ satisfies $R(0) \ne 0$, so that the equation becomes \begin{eqnarray} \phi(z) = z \cdot R( \phi( z)). \label{eq:lagrange} \end{eqnarray} \begin{theorem}[Lagrange inversion formula] \label{thm:lagrange}Let $ R \in \mathbb{C}[[z]]$ be a formal power series in $z$ such that $[z^{0}]R \ne 0$. Then there exists a unique formal power series $\phi$ satisfying \eqref{eq:lagrange} and we have for all $k\geq0$ and all $n \geq 1$ $$ [z^{n}] \big(\phi(z)\big)^{k} = \frac{k}{n}[z^{n-1}] \left(z^{k-1} R(z)^{n}\right),$$ where $[z^{n}] f(z)$ is the coefficient in front of $z^{n}$ in the formal power series $f \in \mathbb{C}[[z]]$. \end{theorem} \noindent \textbf{Proof.} The idea is to interpret combinatorially the weights in the formal expansion $z \cdot R( \phi( z))$, where $R(z) = \sum_{i \geq 0} r_{i}z^{i}$. Indeed, using \eqref{eq:lagrange}, it is easy to prove by induction on $n \geq 1$ that the coefficient in front of $z^{n}$ in $\phi$ can be interpreted as a sum over all plane trees with $n$ vertices where the weight of a tree $\tau$ is given by \begin{eqnarray*} \mathrm{w}( \tau) &=& \prod_{u \in \tau} r_{ k_{u}(\tau)}.
\label{eq:weightwalk}\end{eqnarray*} This is true for $n=1$, and for $ n \geq 1$, writing $\phi_{n} = [z^{n}] \phi(z)$ and $r_{n} = [z^{n}] R(z)$, we find using \eqref{eq:lagrange} \begin{eqnarray*} \phi_{n+1}&=& r_{1} [z^{n}] \phi(z) + r_{2} [z^{n}] \phi^{2}(z) + r_{3} [z^{n}] \phi^{3}(z) + \dots\\ &=& \sum_{\ell \geq 1} r_{ \ell} \sum_{k_{1} + \dots + k_{\ell} = n} \prod_{i=1}^{\ell} \phi_{k_{i}}\\ &\underset{ \mathrm{Induc.}}{=}& \sum_{\ell \geq 1} r_{ \ell} \sum_{k_{1} + \dots + k_{\ell} = n} \prod_{i=1}^{\ell} \left( \sum_{ \tau \mathrm{ \ plane\ tree\ size\ } k_{i} } \mathrm{w}( \tau)\right)\\ &=& \sum_{ \tau \mbox{ plane tree size } n+1 } \mathrm{w}( \tau), \end{eqnarray*} where the last equality comes from the decomposition of a plane tree of size $n+1$ at its root vertex. \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{feynmannlagrange} \caption{Interpretation of $[z^{4}]\phi(z)$ in diagrammatic form.} \end{center} \end{figure} Similarly for $k \geq 1$, the coefficient of $z^{n}$ in $\phi^{k}$ is the total weight of forests of $k$ trees having $n$ vertices in total. Now, using the {\L}ukasiewicz encoding, such a forest can be encoded by a skip-free descending path $(S)$ with $n$ steps and reaching $-k$ for the first time at time $n$, where the weight of such a path becomes $\mathrm{w}(S) = \prod_{i = 0}^{n-1} r_{S_{i+1}-S_{i} +1}$. By Feller's combinatorial lemma, for a skip-free descending walk $(S)$ of length $n$ such that $S_{n}=-k$ there are exactly $k$ cyclic shifts so that $n$ is the $k$-th strict descending ladder time.
So if we partition the set of all walks of length $n$ such that $S_{n}=-k$ using the cyclic shift as an equivalence relation, we know that in each equivalence class the proportion of walks such that $T_{-k}=n$ is $ \frac{k}{n}$ (most of the classes actually have $n$ elements, but it could be the case that the subgroup of cyclic shifts fixing the walk is non-trivial and has order $\ell | k$, in which case there are $n/\ell$ elements in the orbit and $k/\ell$ are such that $T_{-k}=n$). Since the weight $ \mathrm{w}(\cdot)$ is constant over all equivalence classes we deduce that: $$ \sum_{ \begin{subarray}{c}(S) \mathrm{\ walks \ of \ length\ }n \\ S_{n}=-k \end{subarray}} \mathrm{w}(S) = \frac{n}{k} \sum_{ \begin{subarray}{c}(S) \mathrm{\ walks \ of \ length \ }n \\ S_{n}=-k \mathrm{\ and \ }T_{-k}=n \end{subarray}} \mathrm{w}(S).$$ It remains to notice that $$ [z^{n-1}] \left(z^{k-1} R(z)^{n}\right)$$ is exactly the weight of all paths $(S)$ of length $n$ such that $S_{n} = -k$. \qed \medskip Here are two recreational (but surprising) applications of the Lagrange inversion formula taken from the post ``What is Lagrange inversion formula good for?'' on \textit{MathOverflow}: \begin{exo} Let $F(x)$ be the unique power series such that for all $n \geq 0$ the coefficient of $x^n$ in $F^{n+1}(x)$ is equal to $1$. Show that $F(x) = \frac{x}{1- \mathrm{e}^{-x}}$.\end{exo} \begin{exo} \label{exo:lagrangebis} For $a \in (0,1/2)$ show that the solution $x=x(a)$ near $0$ of $x^5-x-a=0$ can be written as $$ x = - \sum_{k\geq 0} {5k \choose k} \frac{a^{4k+1}}{4k+1},$$ i.e.~we can ``solve'' quintic equations (any quintic equation can be put into this form, see ``Bring radical'' or ``Bring--Jerrard'' on Wikipedia). \end{exo} \section{Probabilistic counting of trees} In this section we illustrate how to enumerate certain classes of trees using our knowledge of (random) walks.
One underlying idea is to design a random variable which is uniformly distributed on the set we wish to count. \subsection{Prescribed degrees} \begin{theorem}[Harary \& Prins \& Tutte (1964)]\label{thm:prescribed} \noindent The number of plane trees with $d_{i}$ vertices having $i \geq 0$ children, and with $n = 1+\sum i d_{i}= \sum d_{i}$ vertices, is equal to $$ \frac{(n-1)!}{d_{0}! d_{1}! \cdots d_{i}! \cdots } = \frac{1}{n} {n \choose d_{0}, d_{1}, d_{2}, \dots }.$$ \end{theorem} \noindent \textbf{Proof.} Fix $(d_{i})_{i \geq 0}$ and $n$ as in the theorem. Notice that from \eqref{eq:enfantparent} we must have $n= 1 + \sum i d_{i} = \sum d_{i}$. By the encoding of plane trees into their {\L}ukasiewicz paths, it suffices to count the paths starting from $0$, ending at $-1$ at time $n$, with $d_{i}$ steps equal to $i-1$, and which stay non-negative until time $n-1$. Clearly, if one removes the last condition there are $$ {n \choose d_{0}, \dots, d_{i}, \dots} = \frac{n!}{ d_{0}! d_{1}!\cdots}$$ such paths. If we partition those paths according to the cyclic shift equivalence relation, then by Lemma \ref{lem:feller} (see also Remark \ref{rem:shift}) we know that each equivalence class has cardinality $n$ and has a unique element which stays non-negative until time $n-1$. Hence the quantity we wanted to enumerate is equal to $$ \frac{1}{n} {n \choose d_{0}, \dots, d_{i}, \dots} = {(n-1)!} \prod_{i} \frac{1}{d_{i}!}.$$ \qed \begin{corollary}[Catalan's counting]\label{cor:catalan} For $n \in \{0,1,2,\dots\}$ we have $$ \# \big\{ \mathrm{plane\ trees \ with \ }n \mbox{ edges}\big\} = \# \big\{ \mathrm{plane\ trees \ with \ }n+1 \mbox{ vertices}\big\}= \frac{1}{n+1} {2n \choose n}.$$ \end{corollary} \noindent \textbf{Proof.} With the same notation as in the preceding theorem, the number of trees with $n \geq 1$ vertices is equal to $$ \sum_{\begin{subarray}{c}d_{0},d_{1}, d_{2}, \dots \\ 1+\sum i d_{i} =n = \sum d_{i} \end{subarray}} \frac{(n-1)!}{d_{0}!d_{1}!
\cdots} = \frac{1}{n} \sum_{\begin{subarray}{c}d_{0},d_{1}, d_{2}, \dots \\ 1+\sum i d_{i} =n = \sum d_{i} \end{subarray}} { n \choose d_{0}, d_{1}, \dots } = \frac{1}{n} [z^{n-1}] \left( 1 + z+z^{2}+z^{3}+ \cdots \right)^{n}. $$ Using the Lagrange inversion formula (Theorem \ref{thm:lagrange}) the last quantity can be expressed as $[z^{n}] \phi(z)$ where $\phi(z)$ is the formal power series solution to $\phi(z) = \frac{z}{1-\phi(z)}$ (i.e.~with $R(z) = \frac{1}{1-z}$). Solving explicitly we get $ \phi(z) = \frac{1}{2}(1- \sqrt{1-4z})$ and a coefficient extraction yields the desired formula. Alternatively, if we put $\phi(z) = z + z \psi(z)$, we find that $\psi$ satisfies the Lagrange equation $\psi(z) = z (1+ \psi(z))^2$ so that $\psi$ is amenable to an easy Lagrange inversion: we get that the number of plane trees with $n+1$ vertices is $$ [z^{n+1}] \phi(z) = [z^n]\psi(z) = \frac{1}{n} [z^{n-1}]\left((1+z)^{2}\right)^n = \frac{1}{n} {2n \choose n-1}=\frac{1}{n+1} {2n \choose n}. $$ \qed \subsection{Uniform geometric BGW plane trees} \label{sec:uniform} We denote by $ \mathbf{T}_{n}$ the set of all plane trees with $n$ edges and by $ \mathcal{T}_{n}$ a uniform plane tree taken in $ \mathbf{T}_{n}$. As we shall see, $ \mathcal{T}_{n}$ can be interpreted as a conditioned version of a BGW tree: \begin{proposition} Let $ \mathcal{T}$ be a Bienaym\'e--Galton--Watson tree with geometric offspring distribution of parameter $1/2$, i.e.~$\mu_{k} = \left(\frac{1}{2}\right)^{k+1}$ for $k \geq 0$. Then $ \mathcal{T}_{n}$ has the law of $ \mathcal{T}$ conditioned on having $n$ edges. \end{proposition} \noindent \textbf{Proof.} Let $\tau_{0}$ be a tree with $n$ edges. Then by \eqref{exo:GWprod} we have $$ \mathbb{P}( \mathcal{T} = \tau_{0}) = \prod_{u \in \tau_{0}} 2^{-k_{u}(\tau_{0})-1}.$$ However, from \eqref{eq:enfantparent} we have $\sum_{u \in \tau_{0}} k_{u}(\tau_{0}) = \#\tau_{0}-1 = n$ so that the last display is equal to $ \frac{1}{2}4^{-n}$.
The point is that this probability does not depend on $\tau_{0}$ as long as it has $n$ edges. Hence, the conditional law of $ \mathcal{T}$ on $ \mathbf{T}_{n}$ is the uniform law. \qed \medskip Notice that the above proposition and its proof hold for any non-trivial parameter of the geometric offspring distribution. However, we chose $1/2$ because in this case the offspring distribution is critical, i.e.\ it has mean $1$. We can give another proof of Corollary \ref{cor:catalan}:\medskip \noindent \textbf{Proof of Corollary \ref{cor:catalan} (bis).} Combining the previous proposition with Proposition \ref{prop:luka} and Kemperman's formula yields $$ \mathbb{P}( \# \mathcal{T} =n+1) = \mathbb{P}( T_{-1} = n+1) \underset{ \mathrm{Prop.} \ref{prop:kemperman}}{=} \frac{1}{n+1} \mathbb{P}(S_{n+1}=-1), $$ where $(S)$ is the random walk whose increments are distributed as $ \mathbb{P}(S_{1}= k) = 2^{-k-2}$ for $k \geq -1$, or equivalently as $G-1$ where $G$ has the geometric offspring distribution of parameter $1/2$. Recall that $G$ is also the number of failures before the first success in a series of independent coin flips: this is the negative Binomial distribution with parameter $(1,1/2)$. Hence $ \mathbb{P}(S_{n+1}=-1) = \mathbb{P}( \mathrm{Binneg}(n+1,1/2) = n)$ where $ \mathrm{Binneg}(n,p)$ is the negative Binomial distribution with parameter $(n,p)$, the discrete analog of the Gamma laws. This distribution is explicit and we have $\mathbb{P}( \mathrm{Binneg}(n,p) = k) = {n+k-1 \choose n-1} p^{n} (1-p)^{k}$, which in our case reduces to $$ \frac{1}{n+1} \mathbb{P}(S_{n+1}=-1) = \frac{1}{n+1}\mathbb{P}( \mathrm{Binneg}(n+1,1/2) = n) = \frac{1}{2} 4^{-n}\frac{1}{n+1} { 2n \choose n}.$$ By the previous proposition (and its proof) we have on the other hand $$ \mathbb{P}( \# \mathcal{T} =n+1) = \# \{ \mathrm{plane\ trees \ with \ }n+1 \mbox{ vertices}\} \cdot \frac{1}{2}4^{-n}.$$ The result follows by comparing the previous two displays.
\qed \medskip \begin{exo}[Enumeration of plane forests] \label{exo:forest} Extend the above proof to show that the number of forests of $f \geq 1$ trees (i.e.~ordered sequences of $f$ trees) whose total number of edges is $n$ is equal to $$ \frac{f}{2n+f} {2n +f \choose n}.$$ Give another proof of the last display using the Lagrange inversion formula (Theorem \ref{thm:lagrange}). \end{exo} The above exercise is useful to show that the typical height of $ \mathcal{T}_{n}$ converges in law towards the Rayleigh\footnote{\raisebox{-3mm}{\includegraphics[width=1cm]{rayleigh}} John William Strutt, 3rd Baron Rayleigh (1842--1919), English} distribution $ \mathcal{R}$ which is the law of the norm of a standard two-dimensional normal vector: \begin{eqnarray} \label{def:rayleigh} \mathcal{R} \sim r \exp(-r^{2}/2) \mathbf{1}_{r>0} \mathrm{d}r. \end{eqnarray} \begin{corollary}[Typical height of uniform plane trees] \label{cor:heightplane} Let $ \mathcal{T}_{n}$ be a uniform plane tree with $n$ edges. Conditionally on $ \mathcal{T}_{n}$, let $\delta_{n}$ be a uniformly chosen vertex of $ \mathcal{T}_{n}$ and denote its height by $H_{n}$. Then we have $$ \mathbb{P}(H_{n} =h ) = \frac{2h+1}{2n+1} \frac{ {2n+1 \choose n-h}}{{2n \choose n}}.$$ In particular, we have the following convergence in distribution towards a scaled Rayleigh distribution $$\frac{H_{n}}{ \sqrt{n}} \xrightarrow[n\to\infty]{(d)} \frac{ \mathcal{R}}{ \sqrt{2}}.$$ \end{corollary} \noindent \textbf{Proof.} We compute exactly the probability that the point $\delta_{n}$ is located at height $h\geq 0$. \\ \begin{minipage}{4cm} \includegraphics[width=2.5cm]{hauteur} \end{minipage} \begin{minipage}{11.5cm} If so, the tree $ \mathcal{T}_{n}$ is obtained from the line joining $\varnothing$ to $\delta_{n}$ by grafting $h$ plane trees on its left, $h$ plane trees on its right and one on $\delta_{n}$, see the figure on the left. Obviously, the total number of edges of these trees must be equal to $n-h$.
Using Exercise \ref{exo:forest} we deduce that $$ \mathbb{P}(H_{n}= h) = \frac{ \frac{2h+1}{2n+1} { 2n +1 \choose n-h}}{ {2n \choose n}}. $$ The second item of the theorem follows after applying Stirling's formula and using Exercise \ref{exo:scheffe}. \qed \end{minipage} \medskip \begin{exo} For any $p \geq 2$ a $p$-tree is a plane tree such that the number of children of each vertex is either $0$ or $p$. When $p=2$ we speak of binary trees. In particular, the number of edges of a $p$-tree must be a multiple of $p$. Show that for any $k \geq 1$ we have $$ \# \{ p- \mbox{trees with }kp \mbox{ edges }\} = \frac{1}{kp+1} {k p+1 \choose k},$$ in three ways: using a direct application of Theorem \ref{thm:prescribed}, using a probabilistic approach via a certain class of random BGW trees, or via the Lagrange inversion formula (Theorem \ref{thm:lagrange}). \end{exo} \subsection{Cayley and Poisson BGW trees} In this section we focus on a different type of tree first studied by Cayley:\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{cayley}} Arthur Cayley (1821--1895) receiving a phone call, English} \begin{definition} A \textbf{Cayley tree} of size $n$ is a tree over the $n$ vertices $\{1,2, \dots , n\}$ without any orientation or distinguished point. In other words, it is a spanning tree of $\mathbb{K}_{n}$, the complete graph over $n$ vertices. See Figure~\ref{fig:cayleytree}. \end{definition} \begin{figure}[h!] \begin{center} \includegraphics[height=3cm]{spanning} \caption{ \label{fig:cayleytree} A Cayley tree over $\{1,2,3,4,\dots, 11\}$.} \end{center} \end{figure} Let $ \mathcal{T}$ be a BGW (plane) tree with Poisson offspring distribution of parameter $1$ (in particular, the mean number of children is $1$ and we are in the critical case). As in the previous subsection (but with vertices instead of edges) we denote by $ \mathcal{T}_{n}$ the random tree $ \mathcal{T}$ conditioned on having $n$ vertices.
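Before relating $ \mathcal{T}_{n}$ to Cayley trees, note that the Poisson(1)-BGW tree is easy to simulate: by Proposition \ref{prop:luka} its total size is distributed as the first hitting time of $-1$ by a walk with i.i.d.\ steps distributed as $\mathrm{Poisson}(1)-1$, and the conditioning on the size can then be performed by rejection. Here is a hedged Python sketch; the function names, the size cap and the sample count are illustrative choices, not from the text.

```python
import random

def poisson1(rng):
    # Sample Poisson(1): count how many uniform factors can be multiplied
    # before the running product drops below e^{-1} (Knuth-style method).
    k, p, e_inv = 0, 1.0, 0.36787944117144233
    while True:
        p *= rng.random()
        if p < e_inv:
            return k
        k += 1

def bgw_total_size(rng, cap=2_000):
    """Total progeny of a Poisson(1)-BGW tree, i.e. the first hitting time of -1
    by the Lukasiewicz walk with steps Poisson(1) - 1 (None if it exceeds cap)."""
    walk = 0
    for n in range(1, cap + 1):
        walk += poisson1(rng) - 1
        if walk == -1:
            return n
    return None  # at criticality, very large trees occur with small probability

rng = random.Random(2024)
sizes = [bgw_total_size(rng) for _ in range(5_000)]

# P(#T = 1) = P(Poisson(1) = 0) = e^{-1} ~ 0.368; the empirical frequency agrees.
freq1 = sum(s == 1 for s in sizes) / len(sizes)
assert abs(freq1 - 0.3679) < 0.04

# Keeping only the runs with a prescribed total size n samples the conditioned
# tree T_n by rejection; here we merely record how often n = 3 occurs.
freq3 = sum(s == 3 for s in sizes) / len(sizes)
```

Enriching the exploration to record the whole offspring sequence (and not just the size) would reconstruct the tree itself via the bijection of Proposition \ref{exo:bijection}.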
\begin{proposition} \label{prop:cayleyGW} Consider $ \mathcal{T}_{n}$ and assign the labels $\{1, \dots ,n\}$ uniformly at random to the vertices of $ \mathcal{T}_{n}$. After forgetting the plane ordering of $ \mathcal{T}_{n}$, this produces a Cayley tree which we denote by $ \mathscr{T}_{n}$. Then $ \mathscr{T}_{n}$ is uniformly distributed over all Cayley trees of size $n$. \end{proposition} \noindent \textbf{Proof.} Let us first compute the probability that $ \mathcal{T}$ has $n$ vertices. Using the {\L}ukasiewicz walk and the cyclic lemma we get that $ \mathbb{P}(\# \mathcal{T} =n) = \frac{1}{n} \mathbb{P}(S_{n}=-1),$ where $S$ is the random walk whose increments are centered and distributed as i.i.d.~$ \mathrm{Poisson}(1)-1$ variables. Recalling Section \ref{sec:poissonRW}, it follows that $$ \mathbb{P}(\# \mathcal{T} =n) = \frac{1}{n} \mathbb{P}( \mathrm{Poisson}(n) = n-1) = \mathrm{e}^{-n} \frac{n^{n-2}}{(n-1)!}.$$ Fix a Cayley tree $ \mathfrak{t}$ and let us study the possible ways to obtain $ \mathfrak{t}$ by the above process. We first choose the root of the tree among the $n$ possible vertices and obtain a rooted Cayley tree $ \mathfrak{t}^{\bullet}$. Once the origin is distinguished, there are $\prod_{ u \in \mathfrak{t}^{\bullet}} k_{u}( \mathfrak{t}^{\bullet}) !$ possible ways to give a planar orientation to the tree, where $k_{u} ( \mathfrak{t}^{\bullet})$ is the number of children of the vertex $u$ in $ \mathfrak{t}^{\bullet}$ (this number depends only on the rooted tree $ \mathfrak{t}^{\bullet}$, not on any planar ordering).
After these operations, each of the labeled, rooted, plane trees $(\tau, \ell)$ obtained appears with a probability (under the Poisson(1)-BGW measure) equal to $$ \frac{1}{n!} \mathrm{e}^{-n} \prod_{ u \in \tau} \frac{1}{k_{u}(\tau)!} = \frac{1}{n!} \mathrm{e}^{-n} \prod_{ u \in \tau} \frac{1}{k_{u}( \mathfrak{t}^{\bullet})!}.$$ Performing the summation, the symmetry factors involving the $k_{u}!$ conveniently disappear and we get $$ \mathbb{P}( \mathcal{T}_{n} \to \mathfrak{t}) = n \times \frac{\mathrm{e}^{-n}}{n!} \left(\mathrm{e}^{-n} \frac{n^{n-2}}{(n-1)!}\right)^{-1} = \frac{1}{n^{n-2}}.$$ Since the result of the last display does not depend on the shape of $ \mathfrak{t}$, the induced law is indeed uniform over all Cayley trees and we have even proved: \begin{corollary}[Cayley's formula]\label{cayley}The number of Cayley trees of size $n$ is $ n^{n-2}$.\end{corollary} As a short application of the above corollary, we propose: \begin{exo}[Pick a tree -- any tree, \cite{chin2015pick}] \label{exo:pickatree} Let $T_{n}$ be a random labeled subtree (no planar ordering) of the complete graph $ \mathbb{K}_{n}$ over the vertices $\{1,2, \dots , n\}$. Show that $$ \lim_{n \to \infty}\mathbb{P}(T_{n} \mathrm{ \ spans\ all\ vertices \ of \ } \mathbb{K}_{n}) = \mathrm{e}^{- \mathrm{e}^{{-1}}}.$$ \end{exo} \begin{exo}[Lagrange meets Cayley] \label{exo:lagrangecayley} Let $T(z)$ be the (formal) exponential generating series of Cayley trees with a distinguished vertex, i.e. $$ T(z) = \sum_{n \geq 1} n \cdot \# \{ \mathrm{Cayley \ trees \ of \ size\ }n\} \cdot \frac{z^n}{n!}.$$ Show using a recursive decomposition at the root that $T(z) = z \mathrm{e}^{T(z)}$. Apply the Lagrange inversion formula (Theorem \ref{thm:lagrange}) to recover Corollary \ref{cayley}.
\end{exo} We have the following generalization similar to Exercise \ref{exo:forest}: \begin{exo}[Cayley forests] \label{exo:cayleyforest} Show that the number of (non-plane) forests on $\{1,2, \dots , n\}$ with $k$ trees with roots $1,2, \dots, k$ is given by $$ \mathcal{F}(k,n) = \frac{k}{n} n^{n-k}. $$ \end{exo} The previous exercise can be used to prove the same Rayleigh limit (recall \eqref{def:rayleigh} and Corollary \ref{cor:heightplane}) for the typical height in a large uniform Cayley tree: \begin{corollary}[Typical height of uniform Cayley trees] \label{cor:heightcayley} Let $ \mathscr{T}_{n}$ be a uniform Cayley tree of size $n$. Conditionally on $ \mathscr{T}_{n}$, let $\delta_{n}$ be a uniform vertex of $\{1,2, \dots , n\}$. Then the distance $D_{n}$ in $ \mathscr{T}_{n}$ between the vertices $1$ and $\delta_{n}$ has the following distribution $$ \mathbb{P}( D_{n} = k-1) = \left( 1- \frac{1}{n}\right)\left( 1- \frac{2}{n}\right) \cdots \left( 1- \frac{k-1}{n}\right) \frac{k}{n}, \quad \mbox{ for }1 \leq k \leq n.$$ In particular we have $$\frac{ D_{n}}{ \sqrt{n}} \xrightarrow[n\to\infty]{(d)} \mathcal{R}. $$ \end{corollary} \noindent \textbf{Proof.} By symmetry, $D_{n}$ has the same law as the distance between two uniform vertices $U_{n},V_{n}$ of $ \mathscr{T}_{n}$ (possibly equal). For $k=1$, the probability that $D_n=0$ is the probability that $U_n = V_n$, which is indeed $1/n$. Otherwise, for $k \geq 2$, the event $\{D_n =k-1\}$ happens if $ \mathscr{T}_n$ is obtained from an ordered line of $k$ vertices on which we graft a forest of $k$ Cayley trees with prescribed roots, and so that the selected vertices are the two endpoints of this line. Dividing by the obvious symmetry factors, the previous exercise shows that this probability is given by $$ \frac{1}{2} \times \frac{2}{n^2} \cdot n(n-1)\dots (n-(k-1)) \cdot \frac{\mathcal{F}(k,n)}{ n^{n-2}} = \frac{n(n-1)\dots (n-(k-1))}{n^k} \frac{k}{n},$$ as desired.
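As a sanity check, the law displayed in Corollary \ref{cor:heightcayley} is easy to evaluate numerically; the following Python sketch (an illustration of ours, not part of the proof) verifies that it sums to one and that its mean is of order $\sqrt{n}$:

```python
def cayley_height_law(n):
    """Law of D_n: returns [P(D_n = 0), ..., P(D_n = n-1)] where
    P(D_n = k-1) = (1 - 1/n)...(1 - (k-1)/n) * k/n."""
    probs, tail = [], 1.0          # tail = (1 - 1/n)...(1 - (k-1)/n)
    for k in range(1, n + 1):
        probs.append(tail * k / n)
        tail *= 1 - k / n
    return probs
```

For $n = 10^4$ the mean of this law divided by $\sqrt{n}$ is close to $\sqrt{\pi/2} \approx 1.25$, the mean of the Rayleigh distribution.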
We recognize the law of the \textbf{first collision in the birthday paradox} on a year with $n$ days. In particular, for $k_n = [x \sqrt{n}]$ with $ x >0$ we have $$ \mathbb{P}(D_n \geq k_n-1) = \prod_{i=1}^{k_n-1} \left(1- \frac{i}{n}\right) \sim \exp \left( - \frac{1}{n} \sum_{i=1}^{k_n-1} i\right) \xrightarrow[n\to\infty]{} \mathrm{e}^{-x^2/2},$$ entailing the convergence to the Rayleigh distribution. \qed \begin{exo}[Random mapping] \label{exo:mapping} Let $M_{n} : \{1,2, \dots, n \} \to \{1, 2, \dots , n\}$ be a mapping chosen uniformly at random among the $n^{n}$ possibilities. We represent $M_{n}$ as an oriented graph where an arrow goes from $i$ to $M_n(i)$, see Fig.~\ref{fig:mapping}. We denote by $ \mathcal{C}_{n} \subset \{1,2, \dots , n \}$ the cyclic points, i.e.~the integers $i$ such that there exists $m \geq 1$ with $(M_{n})^{\circ m}(i) = i$. \begin{figure}[h!] \begin{center} \includegraphics[width=11cm]{mapping-simple} \caption{Illustration of the graph of the mapping $1 \to 6, 2\to 9, 3 \to 4, 4 \to 6, 5 \to 10, 6 \to 3, 7 \to 7, 8 \to 7, 9 \to 1, 10 \to 13, 11 \to 1, 12 \to 8, 13 \to 5$. \label{fig:mapping}} \end{center} \end{figure} \begin{enumerate} \item Prove that $ \# \mathcal{C}_{n}$ has the same law as $D_n$ in Corollary \ref{cor:heightcayley}. \item Show that $$ \mathbb{P}(\mbox{the (unoriented) graph of }M_{n} \mbox{ is connected}) = \frac{1}{n^{n-2}}\sum_{k=1}^{n} {n \choose k} (k-1)! \cdot \frac{k}{n} n^{n-k}$$ and give its asymptotic when $n \to \infty$. \end{enumerate} \end{exo} \subsection{Contour function} \label{sec:contourfunction} We finish this section by mentioning another more geometrical encoding of plane trees which is probabilistically less convenient in the general BGW case but very useful in the case of geometric BGW trees.\medskip Let $\tau$ be a finite plane tree.
The contour function $ \mathcal{C}_{\tau}$ associated with $\tau$ is heuristically obtained by recording the height of a particle that climbs the tree and makes its contour at unit speed. More formally, to define it properly one needs the definition of a corner: We view $\tau$ as embedded in the plane, then a \textbf{corner} of a vertex in $\tau$ is an angular sector formed by two consecutive edges in clockwise order around this vertex. Note that a vertex of degree $k$ in $\tau$ has exactly $k$ corners. If $c$ is a corner of $\tau$, $\mathrm{Ver}(c)$ denotes the vertex incident to $c$, see Figure \ref{fig:contour}. The corners are ordered clockwise cyclically around the tree in the so-called {\em contour order}. If $\tau$ has $n \geq 2$ vertices we index the corners by letting $(c_{0},c_{1},c_{2}, \ldots,c_{2n-3})$ be the sequence of corners visited during the contour process of $\tau$, starting from the corner $c_0$ incident to $\varnothing$ that is located to the left of the oriented edge going from $\varnothing$ to $1$ in $\tau$. \begin{definition} Let $\tau$ be a finite plane tree with $n \geq 2$ vertices and let $(c_{0},c_{1},c_{2}, \ldots,c_{2n-3})$ be the sequence of corners visited during the contour process of $\tau$. We put $c_{2n-2}=c_{0}$ for notational convenience. The contour function of $\tau$ is the walk defined by $$ \mathcal{C}_{\tau}(i) = \# \mathrm{Ver}(c_{i}), \quad \mbox{ for }0 \leq i \leq 2n-2.$$ \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{contour} \caption{The contour function associated with a plane tree. \label{fig:contour}} \end{center} \end{figure} Clearly, the contour function of a finite plane tree is a finite non-negative walk of length $2 (\#\tau-1)$ which only makes $\pm 1$ jumps. 
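As an illustration, the contour function is straightforward to compute recursively; in the following Python sketch (ours, with a representation of our choosing) a plane tree is a nested list, each vertex being the list of its subtrees in planar order:

```python
def contour(tree, h=0):
    """Contour function C(0), ..., C(2n-2) of a plane tree with n vertices.
    A tree is a nested list: a vertex is the list of its subtrees,
    so [] is a single leaf and [[], [[]]] has 4 vertices."""
    heights = [h]                           # first visit of the current vertex
    for subtree in tree:
        heights += contour(subtree, h + 1)  # contour of the subtree
        heights.append(h)                   # come back to the current vertex
    return heights
```

For instance the tree $[[\,],[[\,]]]$ (a root with two children, the second of which has one child) has contour $(0,1,0,1,2,1,0)$: a non-negative walk with $\pm 1$ steps, of length $2(4-1)=6$ as expected.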
Here as well, the encoding of a tree into its contour function is invertible: \begin{exo} Show that taking the contour function creates a bijection between the set of all finite plane trees and the set of all non-negative finite walks with $\pm 1$ steps which start and end at $0$. \end{exo} Now, we give a probabilistic description of the law of the contour function of $ \mathcal{T}$ when $ \mathcal{T}$ is distributed as a geometric(1/2)-BGW tree (i.e.~has the same law as in Section \ref{sec:uniform}). \begin{proposition}[Contour function of Catalan trees] \label{prop:contourgeo} Let $ \mathcal{T}$ be as above. Then its contour function $ \mathcal{C}_{ \mathcal{T}}$ has the same law as $$ (S_{0}, S_{1}, \dots , S_{T_{-1}-1}),$$ where $(S)$ is a simple symmetric random walk and $T_{-1}$ is the first hitting time of $-1$. \end{proposition} \noindent \textbf{Proof.} Notice first that $ \mathcal{T}$ is almost surely finite by Theorem \ref{thm:extinction} and so all the objects considered above are well defined. Let $\tau_{0}$ be a plane tree with $n$ edges. We have seen in the previous proposition that $ \mathbb{P}( \mathcal{T} = \tau_{0}) = \frac{1}{2} 4^{-n}$. On the other hand, the contour function of $\tau_{0}$ has length $2n$ and the probability that the first $2n$ steps of $(S)$ coincide with this function and that $T_{-1}= 2n+1$ is equal to $ 2^{-2n} \cdot \frac{1}{2} = \frac{1}{2}4^{-n}$. This concludes the proof.\qed \medskip \begin{exo} \label{exo:contourcat}Give a new proof of Corollary \ref{cor:catalan} using the contour function. \end{exo} \begin{exo} \label{exo:height} Let $ \mathcal{T}$ be a BGW tree with geometric(1/2) offspring distribution. The height of $ \mathcal{T}$ is the maximal height of one of its vertices.
Prove that $$ \mathbb{P}( \mathrm{Height}(\mathcal{T}) \geq n) = \frac{1}{n+1}.$$ \end{exo} \section{The Brownian continuum random tree} The reader might be puzzled by the appearance of the Rayleigh distribution as the typical height in both uniform plane trees (Corollary \ref{cor:heightplane}) and uniform Cayley trees (Corollary \ref{cor:heightcayley}) of large size. This is only the tip of a much larger iceberg: many classes of random trees converge in the scaling limit towards a universal Continuum Random Tree (CRT) called the Brownian CRT. We briefly describe this fascinating object. We first describe a way to control globally the geometry of a random graph. \subsection{Gromov--Hausdorff topology} The idea is to see a finite graph once endowed with its graph distance as a finite metric space, i.e.~a point in the space $$ \mathbb{K} = \left\{ \begin{array}{c} \mbox{isometry classes of compact metric spaces} \end{array}\right\},$$ since from the geometric point of view, it is impossible to distinguish two isometric metric spaces (in particular, in the following when we speak of a metric space, the reader should think of its isometry class). One might think that this set is monstrous and that its very definition could pose a problem. In reality, thanks to the compactness condition imposed on its points (i.e.~on the isometry classes of metric spaces), the space $ \mathbb{K}$ is quite ``small''; for example, any compact metric space can be seen as a closed subset of $\ell^{\infty}( \mathbb{N})$. We will now equip $ \mathbb{K}$ with a distance, known as the Gromov--Hausdorff distance and denoted $ \mathrm{d_{GH}}$.
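The Hausdorff distance on which $ \mathrm{d_{GH}}$ is built can be sketched very concretely for finite subsets of a common metric space; the Python fragment below is an illustration of ours (names and representation are our choices):

```python
def hausdorff(A, B, d):
    """Hausdorff distance between two finite subsets A, B of a common
    metric space with distance function d: the farthest any point of one
    set can be from the other set."""
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))
```

For instance on the real line, the sets $\{0,1\}$ and $\{0,1,3\}$ are at Hausdorff distance $2$, the distance from the extra point $3$ to the first set.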
Let $(E, \mathrm{d_E})$ and $(F, \mathrm{d_F})$ be two points of $\mathbb{K}$, i.e.~two (isometry classes of) compact metric spaces. The Gromov--Hausdorff distance between $E$ and $F$ is then $$ \mathrm{d_{GH}}((E, \mathrm{d_E}),(F, \mathrm{d_F})) = \inf\{ \mathrm{d}_{ \mathrm{Haus},G}( E',F')\}$$ where $\mathrm{d}_{ \mathrm{Haus},G}(E',F')$ is the Hausdorff distance between $E' \subset G$ and $F' \subset G$, two compact subsets of the same ambient space $G$ that are respectively isometric to $E$ and $F$. \begin{figure}[!h] \begin{center} \includegraphics[width=12cm]{GHbis.pdf} \caption{Illustration of the Gromov--Hausdorff distance: to compare two metric spaces, first embed them in a common metric space and use the Hausdorff distance.} \end{center} \end{figure} \begin{theorem} The space $( \mathbb{K},\mathrm{d_{GH}})$ is a Polish metric space (i.e.~separable and complete). \end{theorem} We refer the reader to \cite[Chapter 7]{BBI01} for details concerning this space. This formalism is very convenient and allows us to define the Brownian continuum random tree as the ``scaling limit'' of renormalized random discrete trees. Indeed, if $\mu$ is a critical aperiodic offspring distribution with finite variance $\sigma^2 \in (0, \infty)$, one can consider $\mathcal{T}_{n}$, a $\mu$-BGW tree conditioned to have $n$ edges, endowed with its graph distance as a random metric space. We have the following invariance principle: \begin{theorem}[Reformulation of Aldous by Le Gall] We have the following convergence in distribution for the Gromov--Hausdorff topology $$ \left( \mathcal{T}_{n} , \frac{1}{ \sqrt{n}} \mathrm{d_{gr}} \right) \quad \mathop{\longrightarrow}^{(d)}_{n \rightarrow \infty} \quad \left( \mathfrak{T}, \frac{2}{ \sigma}\mathrm{d}\right),$$ where $( \mathfrak{T}, \mathrm{d})$ is a random compact continuous tree, called the Brownian continuum random tree, whose distribution does not depend on $\mu$.
\label{thm:aldousCRT} \end{theorem} See Figure \ref{fig:CRT} for (an approximation of) a sampling of $ \mathfrak{T}$. The Brownian continuum random tree $ \mathfrak{T}$, frequently called CRT (for ``continuum random tree'') in the literature, is therefore a random metric space (for example, its diameter is random) but it has ``almost sure'' properties, i.e.~true with probability $1$: \begin{itemize} \item $ \mathfrak{T}$ is a.s.~a continuous tree, i.e.~a compact metric space, geodesic (in which any two points are connected by a single geodesic) and cycle-free. \item for any $x \in \mathfrak{T}$, the space $ \mathfrak{T}\backslash \{x\}$ has at most $3$ connected components. \item the fractal dimension of $ \mathfrak{T}$ is equal to $2$. \end{itemize} \subsection{Brownian excursion as continuous contour function} At first glance, there is not much Brownian about the definition of $ \mathfrak{T}$. To understand where the name comes from, let us take a look at the contour function $C(\mathcal{T}_{n})=(C_{s}(\mathcal{T}_{n}))_{0 \leq s \leq 2n}$ of the conditioned BGW trees. As an intermediate step in the proof of the previous theorem, one usually shows the following convergence: $$ \left( \frac{C_{2nt}(\mathcal{T}_{n})}{ \sqrt{n}} : 0 \leq t\leq 1\right) \quad \mathop{\longrightarrow}^{(d)}_{n \rightarrow \infty} \quad \left( \frac{2}{\sigma}\mathbf{e}_{t} : 0 \leq t \leq 1\right)$$ for the uniform topology on $( \mathcal{C}([0,1]), \| \cdot \|_\infty)$ and where $\mathbf{e}$ is a random continuous function, called the \emph{Brownian excursion}, which can informally be seen as a Brownian motion that starts from $0$ at time $0$, remains positive over the time interval $(0,1)$ and returns to $0$ at time $1$ (see Figure \ref{fig:broexc} for a simulation).
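A discrete approximation of the Brownian excursion is easy to sample: by Vervaat's transform, cyclically shifting a uniform $\pm 1$ bridge at its first minimum produces a non-negative excursion-type path, which after rescaling time by $2n$ and space by $\sqrt{2n}$ approximates $\mathbf{e}$. The Python sketch below is our illustration, not part of the notes:

```python
import random

def discrete_excursion(n, rng=random):
    """Sample a non-negative +-1 path of length 2n from 0 back to 0 (a
    discrete excursion) via Vervaat's transform: shuffle n up-steps and
    n down-steps into a bridge, then cyclically shift the increments
    at the first minimum of the bridge."""
    steps = [1] * n + [-1] * n
    rng.shuffle(steps)
    bridge = [0]
    for s in steps:
        bridge.append(bridge[-1] + s)
    m = bridge.index(min(bridge))       # first time the minimum is hit
    shifted = steps[m:] + steps[:m]     # cyclic (Vervaat) shift
    excursion = [0]
    for s in shifted:
        excursion.append(excursion[-1] + s)
    return excursion
```

Since every value of the bridge is at least its minimum, the shifted path stays non-negative and vanishes exactly at its two endpoints of index $0$ and $2n$.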
\begin{figure}[!h] \begin{center} \includegraphics[width=6cm]{excursion} \caption{\label{fig:broexc} A simulation of a Brownian excursion.} \end{center} \end{figure} The reason why Brownian motion appears is that, although in general the contour function is not a random walk (except in the case of Catalan trees, i.e.\ when the reproduction law is geometric, see Proposition \ref{prop:contourgeo}), it can nevertheless be approximated by a random walk, so that the above convergence is an application (in a conditional setting) of Donsker's theorem, according to which suitably renormalized random walks converge to Brownian motion (this is the functional extension of the central limit theorem). In particular, the renormalization factor $\sqrt{n}$ is the same as in the central limit theorem, thanks to the finite variance assumption. It is then natural to expect the Brownian excursion to encode, in some sense, the Brownian continuous tree. This intuition was formalized by Duquesne \& Le Gall \cite{DLG05}, who mimicked the construction of a tree from its contour function in the discrete setting. More precisely, to any continuous function $f : [0,1] \to \mathbb{R}_{+}$ such that $f(0)=f(1)=0$, we associate a pseudo-distance on $[0,1]$, denoted $ \mathrm{d_{f}}$, and defined by $$\mathrm{d_{f}}(s,t) = f(s)+f(t) - 2 \min_{u \in [\min(s,t), \max(s,t)]} f(u).$$ It is easy to check that $ \mathrm{d_{f}}$ is a pseudo-distance and that the points $s,t \in [0,1]$ with zero distance are those that face each other under the graph of $f$. We can then consider the equivalence relation on $[0,1]$ obtained by putting $ s \sim t$ if $ \mathrm{d_{f}}(s,t)=0$. On the quotient space $[0,1]/\sim$ the (projection of) pseudo-distance $ \mathrm{d_{f}}$ is now a distance and $([0,1]/\sim, \mathrm{d_{f}})$ is a compact metric space, denoted $ \mathfrak{T}_{f}$, which is a continuous tree. 
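In a discrete setting this pseudo-distance is immediate to evaluate; here is a small Python sketch (ours) for a function sampled at finitely many grid points:

```python
def d_f(f, i, j):
    """Pseudo-distance d_f(i, j) = f(i) + f(j) - 2 min_{[i, j]} f for a
    non-negative function f given by its values on the grid 0, ..., len(f)-1
    with f[0] = f[-1] = 0."""
    i, j = min(i, j), max(i, j)
    return f[i] + f[j] - 2 * min(f[i:j + 1])
```

On the contour-like function $f = (0,1,2,1,2,1,0)$, the grid points $1$ and $5$ are at pseudo-distance $0$: they face each other under the graph of $f$ and are identified in the quotient tree $\mathfrak{T}_{f}$.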
When the previous construction is performed starting from the Brownian excursion, the random tree $ \mathfrak{T}_{ \mathbf{e}}$ is the Brownian continuum random tree $( \mathfrak{T}, \mathrm{d})$ that appears in Theorem \ref{thm:aldousCRT}. \bigskip \noindent \textbf{Bibliographical notes.} The material about Bienaym\'e--Galton--Watson trees is rather classical. The coding of trees and the formalism for plane trees (the so-called Neveu's notation \cite{Nev86}) can be found in \cite{LG05}. The lecture notes of Igor Kortchemski \cite{Kor16} are a very good introduction, accessible from the first years of undergraduate studies in mathematics. The interested reader can also consult \cite{AD15}. Beware that some authors prefer to take the lexicographical order rather than the breadth first order to define the {\L}ukasiewicz walk (in the finite case this causes no problem, but this is not a bijection if the trees can be infinite). The two exercises illustrating the Lagrange inversion formula are taken from the MathOverflow post ``What is Lagrange inversion good for?''. Exercise \ref{exo:pickatree} is taken from \cite{chin2015pick}. Theorem \ref{thm:prescribed} is proved in \cite{harary1964number}. The idea of the Gromov--Hausdorff topology was first discovered in theoretical physics by Edwards \cite{edwards1975structure} and later popularized by Gromov \cite{Gro07} in geometry. It was brought to the probability community mainly by Evans \cite{Eva08} and Le Gall \cite{LG06}. We refer to \cite{BBI01} for background. The theory of scaling limits of random trees is by now one of the pillars of random geometry. The pioneering papers of Aldous \cite{Ald91a,Ald91,Ald91c} are still the best references for background on the Brownian Continuum Random Tree. We refer to \cite{LG05} for a nice introductory course and to \cite{LGMi10} for its applications in the theory of random planar maps.
\bigskip \noindent \textbf{Hints for Exercises:}\\ Exercise \ref{exo:dekking} is taken from \cite{Dek91b}.\\ Exercise \ref{exo:lagrangebis}: Put $\tilde{x} = ax-1$ to recover a Lagrangian formulation.\\ Exercise \ref{exo:forest}: After concatenating their {\L}ukasiewicz paths, such forests are coded by a skip-free walk of $n+f$ steps starting at $0$ and reaching $-f$ for the first time at $n$.\\ Exercise \ref{exo:lagrangecayley}: If $C_n$ is the number of Cayley trees on $[[1,n]]$ with a distinguished vertex, prove that for $n \geq 2$ we have $$ C_n = n \cdot \sum_{k \geq 1} \frac{1}{k!}\sum_{\ell_1+\dots+ \ell_k = n-1} {n-1 \choose \ell_1, \dots , \ell_k} C_{\ell_1} C_{\ell_2}\dots C_{\ell_k}.$$ Exercise \ref{exo:mapping}: Once the cyclic points have been chosen, the rest of the graph is obtained by grafting Cayley trees, then use Exercise \ref{exo:forest}.\\ Exercise \ref{exo:contourcat}: Using the contour, the number of plane trees with $n$ edges is also the number of $\pm 1$ paths going from $0$ to $0$ in $2n$ steps while staying non-negative.\\ Exercise \ref{exo:height}: Using the contour function, the probability that the height is larger than $n$ is the probability that a simple random walk started at $0$ reaches $n$ before $-1$.\\ \part[Erd{\H{o}}s--R\'enyi random graph]{Erd{\H{o}}s--R\'enyi random graph \\ \\ \label{part:ER} \begin{center} \begin{minipage}[l]{15cm} \normalsize In this part we study the famous model of random graph due to Erd{\H{o}}s and R\'enyi: \begin{definition}[$ G(n,p)$ model] \label{def:erdosrenyi}The Erd{\H{o}}s--R\'enyi random graph $ G(n,p)$ with parameters $n~\geq1$ and $p \in [0,1]$ is the (distribution of a) random graph whose vertex set is $\{1,2, \dots , n\}$ and where for each pair $ i \ne j$ the edge $ i \leftrightarrow j$ is present with probability $p$ independently of all the other pairs.
\end{definition} This is the most natural random graph model since conditionally on its number of edges $m$, the variable $ G(n, p)$ is uniformly distributed over the set $ \mathbb{G}_{n,m}$ of all labeled simple graphs on $\{1,2, \dots, n\}$ with $m$ edges. For convenience we shall use $ \mathbb{G}_n = \bigcup_{m\geq 1} \mathbb{G}_{n,m}$ for the set of all simple graphs on the vertex set $\{1,2, \dots , n\}$. For a fixed $n \geq 1$, we shall consider all Erd{\H{o}}s--R\'enyi graphs $(G(n,p ) : p \in [0,1])$ as coupled as in Section \ref{sec:percoabstrait}: for each $n$ we sample i.i.d.~uniform variables $(U_{\{i,j\}} : 1 \leq i \ne j \leq n)$ and declare that $\{i,j\}$ is present in $G(n,p)$ if $U_{\{i,j\}} \leq p$. \end{minipage} \end{center} \vspace{0cm} \begin{figure}[!h] \begin{center} \includegraphics[width=0.68\linewidth]{1044.jpg} \caption{A list of the 1044 simple graphs on 7 vertices up to isomorphism.} \end{center} \end{figure} } \chapter{Local properties} \hfill A tribute to the first and second moment method. \hfill \bigskip In this chapter, we study ``local properties'' of $G(n,p)$, focusing mostly on the presence of certain subgraphs in $G(n,p)$ as $p$ varies with $n$. We shall see that the presence of some subgraph sometimes exhibits a \textbf{phase transition} and governs interesting global properties of the graph such as connectedness or the spectral measure. Many proofs are based on the first and second moment methods together with combinatorics and basic analysis. \bigskip In the following, a \textbf{graph property} $A_{n}$ is just a subset of all simple graphs on $\{1,2, \dots , n \}$. We say that $A_{n}$ is \textbf{increasing} (resp. \textbf{decreasing}) if for any $ \mathfrak{g}, \mathfrak{g}' \in \mathbb{G}_{n}$ satisfying $ \mathfrak{g} \subgraph \mathfrak{g}'$, we have $ \mathfrak{g} \in A_n \Rightarrow \mathfrak{g}' \in A_n$ (resp. with $ \mathfrak{g}' \subgraph \mathfrak{g}$). In words, adding (resp.
removing) edges only helps satisfying $A_{n}$. \begin{example}[Appearance of subgraph or graph induced] Fix a graph $ \mathfrak{g}_{0}$, then the graph property $\{ \mathfrak{g} \in \mathbb{G}_n:\exists \mathfrak{g}' \subgraph \mathfrak{g} \mbox{ with } \mathfrak{g'} \simeq \mathfrak{g}_{0}\}$ is an increasing graph property, whereas $\{ \mathrm{Cayley\ trees \ of \ } \mathbb{G}_n\}$ is not an increasing graph property as soon as $n \geq 2$. \end{example} \begin{exo}[Erd{\H{o}}s--R\'enyi is Cayley] For which $p\equiv p_{n}$ is $ \mathbb{P}(G( n, p) \mbox{ is a Cayley tree})$ maximal? \label{exo:ercayley}\end{exo} If $(A_{n} : n \geq 1)$ is a sequence of graph properties and if the edge density $p\equiv p_{n}$ may depend on $n$, we say that $A_{n}$ holds for $G(n, p_{n})$ \textbf{with high probability} (abbreviated w.h.p.) if $$\mathbb{P}( G(n,p_{n}) \in A_{n}) \xrightarrow[n\to\infty]{} 1.$$ When whether $ G(n,p_{n}) \in A_{n}$ holds or not depends on $p_{n}$ in a drastic way (as $n \to \infty$), we speak of sharp threshold phenomena. In what follows we shall only focus on increasing graph properties: \begin{definition}[Sharp thresholds for graph properties] \label{def:sharpthres}Let $(A_{n} : n \geq 1)$ be a sequence of increasing properties of $ \mathbb{G}_n$. We say that $(A_{n})_{n\geq 1}$ has a \textbf{sharp threshold transition} for $(G(n,p))_{ p \in [0,1]}$ at $p\equiv p_{n}$ if for every $ \varepsilon>0$ we have $$ \mathbb{P}( G(n, (1- \varepsilon) p_{n})\in A_n) \to 0 \qquad \mbox{ whereas } \qquad \mathbb{P}( G(n,(1+ \varepsilon) p_{n})\in A_n) \to 1 \quad \mbox{ as }n \to \infty.$$ \end{definition} Notice that the location of the edge density threshold $p_{n}$ is unique up to asymptotic equivalence.
An alternative ``dynamic'' way of speaking of sharp threshold is to consider the Erd{\H{o}}s--R\'enyi graphs $(G(n,p ) : p \in [0,1])$ as naturally coupled via uniform labelings on the edges as in Section \ref{sec:percoabstrait}; then we write $$ \tau_{n} := \inf\{ p >0 : G(n,p ) \in A_{n}\}.$$ For increasing graph properties, if $p < \tau_n$ then $ G(n, p) \notin A_n$ whereas if $p > \tau_n$ then $G(n,p) \in A_n$. Definition \ref{def:sharpthres} is then equivalent to the following concentration $$ \frac{\tau_{n}}{p_{n}} \xrightarrow[n\to\infty]{( \mathbb{P})}1.$$ \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{sharpthreshold} \caption{Illustration of the sharp threshold transition for an increasing graph property: the functions $x \mapsto \mathbb{P}( G(n, x \cdot p_{n}) \in A_{n})$ converge pointwise on $[0,\infty) \backslash\{1\}$ towards the step function $ \mathbf{1}_{x > 1}$.} \end{center} \end{figure} \begin{exo} \label{exo:weakthreshold} Let $A_n$ be a non-empty increasing graph property. Show that there exists $p_{n} \in (0,1)$ so that $A_{n}$ has a \textbf{weak threshold} at $p_{n}$ in the sense that for any sequence $\alpha_n \to \infty$ we have $$\mathbb{P}\left( G(n, p_{n}/\alpha_{n}) \in A_n\right) \to 0 \qquad \mbox{ whereas } \qquad \mathbb{P}( G(n,\alpha_{n} \cdot p_{n})\in A_n) \to 1 \quad \mbox{ as }n \to \infty.$$ \end{exo} \section{Connectivity} \label{sec:connectivity} Probably the most natural question is to ask when the graph $G(n,p)$ becomes connected, i.e.~to consider the increasing graph property $$ \mathrm{Connected}_n = \{ \mathfrak{g} \in \mathbb{G}_n : \mathfrak{g} \mbox{ is connected}\}.$$ As we will see, with high probability this \textit{global} property is in fact ruled by \textit{local} properties, namely the degrees of the vertices in $G(n,p)$.
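In the dynamic coupling, the hitting time $\tau_{n}$ for connectivity is simply the largest edge label used by a minimum spanning tree of the complete graph weighted by the uniforms. The Python sketch below (an illustration of ours, with names of our choosing) samples the labels and computes $\tau_{n}$ by a Kruskal-type sweep:

```python
import random
from itertools import combinations

def tau_connected(n, rng=random):
    """tau_n = inf{p : G(n,p) is connected} in the dynamic coupling:
    sample i.i.d. uniform edge labels U_{i,j}, add edges in increasing
    order of their label (union-find) and return the label of the edge
    that makes the graph connected."""
    U = {e: rng.random() for e in combinations(range(n), 2)}
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    components = n
    for (i, j), u in sorted(U.items(), key=lambda item: item[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:
                return u
```

Averaging over independent samples for moderate $n$ gives values close to $\log n/n$, consistent with the sharp threshold established below.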
As far as one given vertex $i$ is concerned, the situation is quite trivial since for every $i \in \{1, 2, \dots , n \}$ fixed, we have $$ \mathrm{deg}_{G(n,p)}(i) \overset{(d)}{=} \mathrm{Binomial}(n-1,p),$$ and so the expected degree of a given vertex is $(n-1)p$. But these degrees are not independent! \subsection{Isolated vertices} We shall focus on \textbf{isolated} vertices (i.e.\ of degree $0$) in the Erd{\H{o}}s--R\'enyi random graph. Consider the following increasing graph property $$ \mathrm{NoIso}_n = \{ \mathfrak{g} \in \mathbb{G}_n: \forall 1 \leq i \leq n, \mathrm{deg}_ \mathfrak{g}(i)>0\},$$ and notice that we trivially have $ \mathrm{Connected}_n \subset \mathrm{NoIso}_n$ for every $n \geq 2$. \begin{proposition} \label{prop:isolated}\noindent The sequence $( \mathrm{NoIso}_n)_{n\geq 1}$ has a sharp threshold transition for $(G(n, p))_{p \in [0,1]}$ at $$p_{n} = \frac{\log n}{n}.$$ \end{proposition} \noindent \textbf{Proof.} We use the first and second moment methods. Since the degree of any single vertex in $ G(n, p_{n})$ follows a $ \mathrm{Bin}(n-1,p_{n})$ distribution, the first moment method shows that if $X(n,p)$ is the number of isolated vertices in $ G(n,p)$ then \begin{eqnarray*} \mathbb{P}( G(n, p_{n}) \mbox{ has an isolated vertex}) &=& \mathbb{P}( X(n, p_{n})>0)\\ &\leq& \mathbb{E}[X(n,p_{n})]\\ &=& n \mathbb{P}( \mathrm{Bin}(n-1,p_{n}) = 0) = n (1-p_{n})^{n-1}. \end{eqnarray*} If $ p_{n} \geq (1 + \varepsilon) \frac{\log n}{n}$ then the right-hand side clearly tends to $0$ as $n \to \infty$ and this shows that $ G(n, p_{n})$ has no isolated vertices w.h.p.\,in this regime. If now $ p_{n} \leq (1 - \varepsilon) \frac{\log n}{n}$, we deduce from the last display that the expected number of isolated vertices diverges.
To guarantee that $\mathbb{P}(G(n,p_{n}) \mbox{ has an isolated vertex}) \to 1$, we use the second moment method (Lemma \ref{def:second-moment}) and compute \begin{eqnarray*}\mathbb{E}[X(n,p_{n})^{2}] &=& \sum_{1 \leq i,j \leq n} \mathbb{P}\left(i \mbox{ and }j \mbox{ are isolated in } G(n,p_{n}) \right)\\ &=& n \mathbb{P}(1 \mbox{ is isolated}) + n(n-1) \mathbb{P}(1 \mbox{ and } 2 \mbox{ are isolated})\\ & = & n(1-p_{n})^{n-1} + n(n-1)(1-p_{n})^{2n-3}. \end{eqnarray*} Notice that the factor $(1-p_{n})^{2n-3}$ in the last display is not exactly equal to $\mathbb{P}(1 \mbox{ is isolated})^{2}$ since we only count the edge between the vertices $1$ and $2$ \textit{once}. However, in the regime $p_{n} \leq (1- \varepsilon) \frac{\log n}{n}$ it is easy to see that $ \mathbb{E}[X(n,p_{n})]^{2} \sim \mathbb{E}[X(n,p_{n})^{2}]$ and so by Lemma \ref{def:second-moment} there are isolated vertices with high probability. \qed \subsection{Hitting time theorem} Perhaps surprisingly, as soon as the graph $G(n,p)$ has no isolated vertices, it becomes instantaneously connected with high probability: \begin{theorem}[Erd{\H{o}}s--R\'enyi (1959)]\label{thm:connectedness}\noindent The sequence $( \mathrm{Connected}_n)_{n\geq 1}$ has a sharp threshold transition for $(G(n, p))_{p\in [0,1]}$ at $p_{n} = \frac{\log n}{n}.$ More precisely, in the coupled version of the Erd{\H{o}}s--R\'enyi random graphs, if we set $$ \tau_{n} = \inf \{ p>0 : G(n,p) \mbox{ is connected}\}\quad \mbox{ and }\quad \theta_{n} = \inf \{ p>0 : G(n,p) \mbox{ has no isolated vertices}\}$$ then we have $ \mathbb{P}(\tau_{n} = \theta_{n}) \to 1$ as $n \to \infty$. \label{thm:connectedfin} \end{theorem} We will see another proof of the first part of this result in the next chapter (Proposition \ref{prop:connectedness}). \\ Given Proposition \ref{prop:isolated}, it remains to understand whether the graph $G(n,p)$ can have several components which are not made of isolated vertices.
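The first and second moment bounds used in the proof of Proposition \ref{prop:isolated} are completely explicit and can be evaluated numerically; the sketch below (an illustrative Python fragment, with naming of our choosing) returns lower and upper bounds on the probability that $G(n,p)$ contains an isolated vertex:

```python
import math

def isolated_vertex_bounds(n, p):
    """Bounds on P(G(n,p) has an isolated vertex): the upper bound is the
    first moment E[X], the lower bound is E[X]^2 / E[X^2] (second moment
    method), with X the number of isolated vertices."""
    EX = n * (1 - p) ** (n - 1)
    EX2 = EX + n * (n - 1) * (1 - p) ** (2 * n - 3)
    return EX * EX / EX2, min(EX, 1.0)

# below the threshold log(n)/n the lower bound is close to 1,
# above it the upper bound is close to 0:
n = 10 ** 5
low, _ = isolated_vertex_bounds(n, 0.7 * math.log(n) / n)
_, high = isolated_vertex_bounds(n, 1.3 * math.log(n) / n)
```

The two regimes of the sharp threshold are thus already visible at $n = 10^5$.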
We say that a graph $ \mathfrak{g}$ has the \textbf{core property} if it is made of a (usually large, but possibly small) connected component of size at least $2$ together with isolated vertices. We denote by $ \mathrm{Core}_n$ the associated set of graphs of $ \mathbb{G}_n$ satisfying the core property. Notice that this property \textit{is not increasing} (nor decreasing) and so some care is needed. \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{er35} \caption{A graph having the core property: a single component (in red in the figure) together with isolated vertices (in blue). This property is however not stable under addition of edges.} \end{center} \end{figure} \begin{lemma}[Core property] \label{lem:core} For any $ \frac{2}{3} \frac{\log n}{n} \leq p_n \leq 2 \frac{\log n}{n}$ we have $G(n, p_{n}) \in \mathrm{Core}_n$ with high probability. \end{lemma} \noindent \textbf{Proof of the lemma.} Actually, the proof will hint at the fact that the (sharp) phase transition for this property appears at $ \frac{1}{2} \frac{\log n}{n}$ (we put $2/3$ to be on safe ground). Let us denote by $ \mathrm{Cut} \equiv \mathrm{Cut}(n,p)$ the number of ways to partition the vertices $\{1,2, \dots , n \}$ in two subsets $A \coprod B = \{1,2, \dots , n\}$ such that in $G(n,p)$ we have $$ \left\{ \begin{array}{l} 2 \leq \# A \leq \# B, \\ A \mbox{ is connected},\\ \mbox{there is no edge between } A \mbox{ and }B.\end{array} \right.$$ Notice that if $G(n, p)$ does not have the core property, then (apart from the event that $G(n,p)$ has no edge at all, whose probability $(1-p)^{{n \choose 2}}$ is negligible in our regime) it contains two disjoint connected components of size at least $2$: taking for $A$ the smaller of these two components and for $B$ its complement splits the vertices into two subsets $A,B$ as above.
By the first moment method (applied twice) we have \begin{eqnarray*} \mathbb{P}(G(n,p_n) \mbox{ has no Core}) &\leq& \mathbb{P}( \mathrm{Cut}(n,p_n) \geq 1) \\ &\leq& \mathbb{E}[ \mathrm{Cut}(n,p_n)] \\ &=& \sum_{k=2}^{ \lfloor n/2 \rfloor}{n \choose k} \left((1-p_{n})^{n-k}\right)^{k} \mathbb{P}( G(k, p_{n}) \mbox{ is connected})\\ &=& \sum_{k=2}^{ \lfloor n/2 \rfloor}{n \choose k} \left((1-p_{n})^{n-k}\right)^{k} \mathbb{P}( \exists \mbox{ spanning tree in } G(k, p_{n}))\\ &\leq& \sum_{k=2}^{ \lfloor n/2 \rfloor}{n \choose k} \left((1-p_{n})^{n-k}\right)^{k} \mathbb{E}\left[ \# \mbox{ spanning trees in } G(k, p_{n})\right]\\ &\underset{ \mathrm{Cor}.\ \ref{cayley}}{=}& \sum_{k=2}^{ \lfloor n/2 \rfloor}{n \choose k} \left((1-p_{n})^{n-k}\right)^{k} k^{k-2} (p_{n})^{k-1}, \end{eqnarray*} where the factor $ (1-p_{n})^{k(n-k)}$ accounts for the probability that no edge is present between a subset of size $k$ and its complement in $ G(n,p_{n})$. Take $ \varepsilon>0$ small enough so that if $k \leq \varepsilon n$ we have $ \frac{2}{3}k(n-k) \geq \frac{7}{12} n k$. For those small $k$ we use the bound \begin{eqnarray*} {n \choose k} \left((1-p_{n})^{n-k}\right)^{k} k^{k-2} (p_{n})^{k-1} &\underset{\frac{2}{3} \frac{\log n}{n} \leq p_n \leq 2 \frac{\log n}{n}}{\leq}& \mathrm{Cst}^k\ \frac{n^{k}}{k!} \exp\left( - \frac{7}{12} k n \frac{\log n}{n}\right) k^{k} \left( \frac{\log n}{n} \right)^{k-1}\\ &\underset{ k^{k}/ k! \leq \mathrm{Cst} \cdot \mathrm{e}^{k}}{\leq}& \left( \mathrm{Cst} \cdot n \cdot n^{-7/12} \cdot \mathrm{e} \cdot \frac{\log n}{n} \right)^{k} \frac{n}{\log n}\\ & \leq & \left(\mathrm{Cst} \cdot n^{-69/120} \right)^{k} n, \end{eqnarray*} where $ \mathrm{Cst}$ is a universal constant that may vary from line to line. Since $k \geq 2$ and $2 \times \frac{69}{120}>1$, those bounds are summable in $2 \leq k \leq \varepsilon n$ and are in fact dominated by the term $k=2$ which tends to $0$.
For the large $k$, since $ n-k \geq n/2$ and $k \geq \varepsilon n $, we have $$ {n \choose k} \left((1-p_{n})^{n-k}\right)^{k} \leq 2^{n} \exp\left( - \varepsilon n \log n / 100\right) \leq ( 2 n^{ - \varepsilon/100})^{n},$$ and this bound can also be summed over the possible values of $k$ to get a vanishing quantity. The lemma is proved. \qed \medskip \noindent \textbf{Proof of Theorem \ref{thm:connectedness}.} The combination of Proposition \ref{prop:isolated} with Lemma \ref{lem:core} already shows that connectedness has a sharp threshold at $p_n = \frac{\log n}{n}$: w.h.p.\ there are still isolated vertices (and so the graph is not connected) at $ p = (1- \varepsilon)\frac{\log n}{n}$ by Proposition \ref{prop:isolated}, whereas there are no isolated vertices at $ p_{n} = (1+ \varepsilon)\frac{\log n}{n}$ and by Lemma \ref{lem:core} the graph has the core property at this value of $p_{n}$: it must be connected w.h.p. We cannot directly derive that $\tau_n = \theta_n$ because we cannot apply Lemma \ref{lem:core} at the random time $\theta_n$. However, the following strengthening of the lemma holds and enables us to conclude that $\tau_n = \theta_n$ w.h.p.: \begin{eqnarray} \mathbb{P}\left( G(n,p) \in \mathrm{Core}_n \mbox{\ simultaneously for all }\frac{2}{3} \frac{\log n}{n} \leq p \leq 2 \frac{\log n}{n} \right) \xrightarrow[n\to\infty]{}1. \label{eq:coreuniform}\end{eqnarray} To see this, let us start from $ p_{n} = \frac{2}{3} \frac{\log n}{n}$ where we know that the graph has the core property with high probability by Lemma \ref{lem:core}. Denote by $ \mathcal{C}_{n}$ its core and by $\ell_{1}, \dots , \ell_{X(n,p_n)}$ its isolated vertices. Recall from the proof of Proposition \ref{prop:isolated} that we have $$ \mathbb{E}[X(n,p_n)] = n (1-p_{n})^{{n-1}} \sim n \mathrm{e}^{- \frac{2}{3} \log n} = n^{1/3},$$ so that by Markov's inequality the event $ {H}_{n} = \{ X(n,p_n) \leq n^{5/12}\}$ happens with high probability as $n \to \infty$.
Conditionally on $G(n,p_{n})$, consider for each isolated vertex $\ell_{i}$ the next edge $(\ell_{i} \leftrightarrow x_{i})$ adjacent to $\ell_{i}$ to be added to $G(n,p_{n})$. These edges are not independent, but conditionally on $G(n,p_{n})$, for each $1 \leq i \leq X(n,p_n)$, the other extremity $x_{i}$ is uniform on $\{1,2, \dots , n\} \backslash \{ \ell_{i}\}$. In particular, the probability that $x_{i}$ does not belong to the core of $G(n,\frac{2}{3} \frac{\log n}{n})$ is $$ \mathbb{P}(x_{i} \notin \mathcal{C}_{n}) = 1-\frac{ \# \mathcal{C}_{n}}{n-1} = \frac{X(n,p_n)-1}{n-1} \leq \frac{X(n,p_n)}{n} \underset{ \mathrm{ on \ } H_{n}}{\leq} n^{-7/12}.$$ Conditionally on $G(n,p_{n})$ and on $H_{n}$, the expected number of $x_{i} \notin \mathcal{C}_{n}$ is bounded above by $n^{5/12}~\cdot~n^{-7/12}~=n^{-1/6}$ and by the first moment method we deduce that with high probability, for all $1 \leq i \leq X(n,p_n)$ the first edge connected to each isolated vertex $\ell_{i}$ after time $p_{n}$ will link it to the core $ \mathcal{C}_{n}$. In particular, no two isolated vertices of $G(n,\frac{2}{3} \frac{\log n}{n})$ get connected to each other and this entails \eqref{eq:coreuniform}. \qed \medskip We saw above that the variation of the individual degrees in $ G(n,p_{n})$ governs some large-scale geometric properties. This variation disappears when $p_{n}\gg \frac{\log n}{n}$ and we leave the following as an exercise for the reader (after a look at Lemma \ref{lem:LDpoisson}): \begin{exo}[Range of degrees] \label{exo:asympdegree} For $\beta>0$ set $p_{n} = \beta \log n/n$. In $G(n, p_{n})$ let $m_{n}$ and $M_{n}$ respectively be the minimal and maximal vertex degrees.
Show that we have $$ \frac{m_{n}}{\beta \log n} \xrightarrow[n\to\infty]{( \mathbb{P})} h(\beta), \quad \mbox{ and } \quad \frac{M_{n}}{\beta \log n} \xrightarrow[n\to\infty]{( \mathbb{P})} H(\beta),$$ where $h(\beta)$ and $H(\beta)$ are the two solutions to the equation $$ \beta \cdot I(x) = 1, \quad \mbox{ with } I(a) := a \log a -(a-1),$$ with the convention $h(\beta) = 0$ for $\beta < 1$ (in which case there is only one solution). In particular $0=h(1)<H(1)= \mathrm{e}$ and $\beta^{-1}(H( \beta)- h( \beta)) \to 0$ as $\beta \to \infty$. \end{exo} \section{Other thresholds via first and second moments} We present a few other sharp thresholds for the appearance or disappearance of certain (induced) subgraphs in $G(n,p)$ whose proofs are also based on the first and second moment methods. \subsection{Diameter} \label{sec:diamER} In this section, let us focus on the diameter of $G(n, p_{n})$, that is the maximal (graph) distance between any pair of vertices of the graph. Of course, by the results of the last section, the diameter is finite only in the regime $p_{n} \geq \frac{\log n}{n}(1+o(1))$. In a graph with maximum degree $d \geq 3$, by the same argument as in the proof of Proposition \ref{prop:lowerperco}, the number of vertices at distance less than or equal to $r$ from a given origin vertex is at most $1+ d + d(d-1) + d(d-1)^{2} + \cdots + d (d-1)^{r-1} = 1+ \frac{d}{d-2} ((d-1)^{r}-1)$. If the graph is connected and has $n$ vertices, maximal degree $d$ and diameter $r$, we deduce the crude bound $$ 1+ \frac{d}{d-2}(d-1)^{r} \geq n.$$ Combining this with the rule of thumb ``$ \mbox{degrees} \approx \mbox{mean degree} \approx np_{n}$'' (valid as soon as $p_{n} \gg \frac{\log n}{n}$ by Exercise \ref{exo:asympdegree}) leads us to postulate that the diameter of $G(n, p_{n})$ is roughly $ \frac{\log n}{\log np_{n}}$. Let us prove this fact in detail for the case $ \mathrm{Diameter} =2$, which is the smallest non-trivial diameter.
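Before doing so, the reader may check the announced threshold numerically. The following short Python sketch (not part of the proof; the values $n=400$ and the two choices of $p$ on either side of $\sqrt{2\log n/n}\approx 0.17$ are ours) samples $G(n,p)$ and tests whether its diameter is at most $2$:

```python
import random

def diameter_at_most_two(n, p, seed=0):
    """Sample G(n, p) and test whether every pair of distinct vertices
    is adjacent or shares a common neighbor (i.e. diameter <= 2)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    for i in range(n):
        for j in range(i + 1, n):
            if j not in adj[i] and adj[i].isdisjoint(adj[j]):
                return False  # found a pair at distance > 2
    return True

n = 400  # threshold: sqrt(2 * log(400) / 400) ~ 0.17
print(diameter_at_most_two(n, 0.40))  # above the threshold: True w.h.p.
print(diameter_at_most_two(n, 0.05))  # below the threshold: False w.h.p.
```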
We denote by $$ \mathrm{Diam}^{2}_{n} = \{ \mathfrak{g} \in \mathbb{G}_{n} \mbox{ with diameter } \leq 2\}$$ the associated increasing graph property. \begin{proposition} The sequence $( \mathrm{Diam}^{2}_{n})_{n \geq 1}$ has a sharp threshold transition for $(G(n , p))_{p \in [0,1]}$ at $$p_{n} = \sqrt{ \frac{2 \log n}{n}}.$$ \end{proposition} \noindent \textbf{Proof.} We again use the first and second moment methods. The number $ D_n$ of pairs $\{i,j\}$ with $i \ne j \in \{1,2, \dots ,n\}$ so that $ \mathrm{d}_{ G(n,p_{n})}(i,j) > 2$ has an expectation which is easily computed, since $ \mathrm{d}(i,j) > 2$ just means that $i$ and $j$ are not neighbors and do not share a neighbor: \begin{eqnarray} \mathbb{E}[D_n] &=& {n \choose 2} \mathbb{P}(1 \mbox{ and }2 \mbox{ are not neighbors and do not share a neighbor})\\ &=& {n \choose 2} (1-p_{n})\cdot \mathbb{P}( \mbox{3 is not a common neighbor of 1 and 2})^{n-2}\\ & =& {n \choose 2}(1-p_{n})( 1-p_{n}^{2})^{n-2} \sim \frac{n^{2}}{2} \exp(- n p_{n}^{2}), \end{eqnarray} when $p_{n} \to 0$. Hence if $ p_{n} \geq (1 + \varepsilon) \sqrt{2 \log n /n}$ the expected number of pairs of vertices at distance strictly larger than $2$ vanishes as $n \to \infty$. By the first moment method, this implies that w.h.p.~the diameter of $G(n, p_{n})$ is less than or equal to $2$ in this regime (it is then equal to $2$ unless $p_{n} = 1 - o(n^{-2})$; the diameter being $1$ if and only if all edges are present).\\ We now suppose that $p_{n} = (1- \varepsilon) \sqrt{2 \log n/n}$ so that $n^2 \mathrm{e}^{-n p_n^2} \to \infty$ and the expectation of $D_n$ diverges.
To prove that w.h.p.~there are pairs of vertices which are not neighbors and do not share a neighbor, we compute the second moment of $D_n$: $$ \mathbb{E}[D_n^{2}] = \sum_{i,j,k,l} \mathbb{P}\left(\begin{array}{c} i \mbox{ and }j \mbox{ have no common neighbor}\\ i \mbox{ and }j \mbox{ are not neighbors}\\ k \mbox{ and }l \mbox{ have no common neighbor}\\ k \mbox{ and }l \mbox{ are not neighbors}\end{array}\right).$$ In the case when $i,j,k,l$ are all distinct, the possibilities for the induced subgraph $G(n,p_{n})[\{i,j,k,l\}]$ on $i,j,k,l$ are displayed in the following figure; together they account for a probability $(1-p_{n})^{6}+4p_{n}(1-p_{n})^{5}+2p_{n}^{2}(1-p_{n})^{4}$, which tends to $1$ as $n \to \infty$, and then the status of the other $n-4$ vertices contributes a factor $(1-p_{n}^{2})^{2(n-4)}$. \begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{possibilitiesdiameter} \caption{The possibilities for the induced subgraph on two pairs of vertices (here in red and blue) so that the distance between elements of each pair is at least $3$.} \end{center} \end{figure} The contribution of this case to the sum is then asymptotic to $$ \frac{n^4}{4} \mathrm{e}^{-2np_n^2} \sim \mathbb{E}[D_n]^2.$$ Similarly, the contribution of overlapping pairs, i.e.~the cases when there are only three (resp.~two) distinct vertices among $i,j,k,l$, is negligible compared with the main term (we leave the details to the fearless reader). Since $ \mathbb{E}[D_n]$ diverges, only the first case prevails and $ \mathbb{E}[D_n^{2}] \sim \mathbb{E}[D_n]^{2}$. By the second moment method (Lemma \ref{def:second-moment}) we conclude that $ \mathbb{P}(D_n>0) \to 1$ as $n \to \infty$.
\qed \subsection{Ramsey theory, max-clique, $ \mathrm{P=NP}$ and alien invasion} A consequence of Ramsey's\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{ramsey}} Frank Ramsey (1903--1930), English} theorem (for two colors) is that for any $k \geq 1$, there exists a number $ \mathcal{R}(k)$ such that every (simple) graph with more than $\mathcal{R}(k)$ vertices contains either a \textbf{clique} (an induced subgraph equal to the complete graph) of size $k$ or an \textbf{independent set} (induced subgraph with only isolated vertices) of size $k$. Perhaps surprisingly, even the value of the Ramsey number $ \mathcal{R}(5)$ is unknown, although it must lie in $\{43,44,45, \dots, 48\}$ (see Wikipedia). The bounds on $\mathcal{R}(k)$ are exponential in $k$ and random Erd{\H{o}}s--R\'enyi graphs actually come close to achieving the best possible: \begin{proposition} \label{prop:clique} Let $K_{n}$ and $ I_{n}$ respectively be the maximal size of a clique and of an independent set in $G(n,{\textstyle \frac{1}{2}})$. Then we have $$ \frac{K_{n}}{ \log_{2}n} \xrightarrow[n\to\infty]{( \mathbb{P})} 2, \qquad \frac{I_{n}}{ \log_{2}n} \xrightarrow[n\to\infty]{( \mathbb{P})} 2.$$ \end{proposition} \noindent \textbf{Proof.} Notice that $I_{n} = K_{n}$ in distribution since $ G(n,{\textstyle \frac{1}{2}})$ is self-complementary in the sense that if we switch the status of all edges we obtain a random graph with the same law. As the reader may have foreseen, we first compute the expected value of $ \chi_k$, the number of induced $k$-cliques in our random graph: \begin{eqnarray} \label{eq:probaclique} \mathbb{E}[ \chi_k]=\mathbb{E}\left[\# \left\{k-\mathrm{cliques\ } \mbox{ in }G(n,{\textstyle \frac{1}{2}})\right\}\right] = {n \choose k} 2^{- {k \choose 2}}. \end{eqnarray} It is easy to see that this tends to $0$ if $k\equiv k_n > (2 + \varepsilon) \log_{2} n$ as $ n \to \infty$. So by the first moment method, the size of the largest clique is less than $(2+ \varepsilon) \log_2 n$ w.h.p.
We now compute the second moment of the number of $k$-cliques and obtain \begin{eqnarray*} \mathbb{E}[ \left(\chi_k \right)^2] &=& \sum_{\begin{subarray}{c}S,S' \subset \{1, \dots , n \}\\ \#S=\#S'=k \end{subarray}} \mathbb{P}( \mbox{the induced graphs on }S,S' \mbox{ are cliques})\\ &=& {n \choose k} 2^{-{k \choose 2}} \sum_{\ell=0}^{k}{ k \choose \ell}{n-k \choose k-\ell} \cdot 2^{-{k \choose 2}+{\ell \choose 2}}. \end{eqnarray*} To get the second line, we pick the first clique $S \subset \{1,2, \dots ,n \}$ and then partition according to the intersection $\ell = \#(S \cap S')$. If $S'$ shares some vertices with $S$, it is easier for it to be a clique since we only need to open ${ k \choose 2} -{\ell \choose 2}$ edges, because the ${\ell \choose 2}$ edges between common vertices of $S$ and $S'$ are already present in (the induced subgraph on) $S$. Notice that when $\ell=0$ or $\ell=1$ the edges between vertices of $S$ and $S'$ are pairwise distinct. We then leave to the reader the tedious task of checking that the above sum is dominated by the term corresponding to $\ell=0$ when $k= k_n \leq (2- \varepsilon) \log_2 n$, i.e.~that $ \mathbb{E}[\chi_{k_n}^2] \sim \mathbb{E}[\chi_{k_n}]^2$. By the second moment method (Lemma \ref{def:second-moment}), we deduce that indeed w.h.p.\ there are $k$-cliques for $k \leq (2- \varepsilon) \log_2 n$. \qed \bigskip \begin{remark}[Ramsey and Erd{\H{o}}s] Recall the definition of $ \mathcal{R}(k)$ as the smallest integer such that any graph with more than $ \mathcal{R}(k)$ vertices must contain a clique or an independent set of size $k$. Proving that $ \mathcal{R}(k) < \infty$ is not trivial and is in fact Ramsey's theorem.
However, from \eqref{eq:probaclique} we deduce that if $ {n \choose k} 2^{-{k \choose 2}}<1/2$ then the expectation of $ \chi_k + \mathcal{I}_{k}$ is less than $1$, where $ \mathcal{I}_{k}$ is the number of independent sets of size $k$, and this implies that $ \chi_k+ \mathcal{I}_{k}$ cannot be almost surely greater than or equal to $1$, or equivalently that there \textbf{exists} a graph on $n$ vertices which has neither a clique nor an independent set of size $k$. In other words, $$ \mathcal{R}(k) >n.$$ Although this reasoning (one of the first instances of the \textbf{probabilistic method}) might appear simplistic, finding such a graph explicitly is a very difficult problem. Quoting Spencer \cite{spencer1994ten}: \begin{quote}``For the Ramsey function $ \mathcal{R}(k)$ no construction is known that gives nearly the lower bound that can be derived from the [above] proof... Erd{\H{o}}s asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of $ \mathcal{R}(5)$ or they will destroy our planet. In that case, he claims, we should marshall all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for $ \mathcal{R}(6)$. In that case, he believes, we should attempt to destroy the aliens.'' \end{quote} \end{remark} \begin{remark}[Very sharp threshold] A careful inspection of the proof (and precise estimates) enables us to strengthen Proposition \ref{prop:clique} as follows: there exists an integer $k_n \sim 2 \log_2 n$ so that we have $$ \mathbb{P}(K_n \in \{k_n, k_n+1\}) \xrightarrow[n\to\infty]{}1,$$ in other words, the maximal size of a clique is concentrated on only two values! \end{remark} \begin{remark}[Finding cliques] Although the previous result entails the existence of cliques of size $\approx 2 \log_2 n$ in $G(n, {\textstyle \frac{1}{2}})$, finding them is a very difficult task.
Indeed, given a graph of size $n$, say by its adjacency matrix, an exhaustive search for a clique of size $\log_2 n$ costs $$ {n \choose \log_2 n} \approx n^{\log_2 n} \mbox{ which is superpolynomial in }n.$$ It is known that finding the max clique in a (deterministic) graph is an NP-complete task, but more surprisingly it is open as of today whether we can find a clique of size $(1 + \varepsilon) \log_2 n$ in $G(n,{\textstyle \frac{1}{2}})$ -- hence a bit above half of the maximal size -- in polynomial time! See the exercise below to find a clique of size approximately $\log_2 n$. \end{remark} \begin{exo}[Greedy construction of a clique] In $G(n, 1/2)$, whose vertex set is $ \{1,2, \dots , n \}$, consider the following construction of a clique: Start with the vertex $X_0=1$. By induction, if $X_0<X_1<\dots < X_k$ have been constructed so that all edges $X_i \leftrightarrow X_j$ are present in $G(n, 1/2)$ for $0 \leq i < j \leq k$, let $X_{k+1}$ be the smallest vertex larger than $X_k$ which is connected to all of $X_0, \dots , X_k$ in $G(n,1/2)$. If there is no such vertex the construction stops and outputs a complete induced subgraph, i.e.~a clique with $K_n$ vertices. Let $G_1, G_2, \dots $ be independent geometric variables, where $G_i$ has success parameter $2^{-i}$, i.e. $$ \mathbb{P}(G_i = \ell) = 2^{-i}(1-2^{-i})^{\ell-1}.$$ \begin{enumerate} \item Show that $ K_n = \min \{ k \geq 1 : 1+ \sum_{i=1}^k G_i > n\}$ in law. \item Deduce that $$ \frac{K_n}{\log_2 n} \xrightarrow[n\to\infty]{ ( \mathbb{P})}1.$$ \end{enumerate} \end{exo} \section{Higher moments} So far, we have established sharp thresholds for graph properties in $G(n,p)$ using only the first and second moments. When there is no sharp threshold, or for more refined probabilistic estimates such as convergence in distribution, we need to control higher moments. We shall exhibit two examples where we need to do so: the \textbf{Poisson paradigm} and the convergence of the spectral measure.
Let us recall the classic method of moments: \begin{lemma}[Method of moments] \label{lem:metmom} Let $(\mu_{n} : n \geq 0)$ be probability measures on $ \mathbb{R}$ (resp.~random real variables $X_{n}$) such that for any $k \geq 0$, there exists $C_k \in \mathbb{R}$ such that we have $$ \int_{ \mathbb{R}} \mu_{n} ( \mathrm{d}x) \cdot x^k \xrightarrow[n\to\infty]{} C_k, \qquad \left( \mbox{resp.} \quad \mathbb{E}[X_{n}^{k}] \xrightarrow[n\to\infty]{}C_{k} \right),$$ in particular the above moments all exist. We suppose furthermore that for some $M >0$ we have $|C_k| \leq M^k k!$ for all $k \geq 0$. Then there exists a probability measure $\mu $ on $ \mathbb{R}$ (resp.~a random variable $X$) such that $\mu_{n} \to \mu$ in distribution as $n \to \infty$ (resp. $ X_{n} \to X$ in law). \end{lemma} \noindent\textbf{Proof of the lemma:} Since the $\mu_{n}$ have bounded second moments, the sequence $(\mu_n)_{n \geq 0}$ is tight, and by uniform integrability (granted by the boundedness of the higher moments) any subsequential limit $\mu$ has the same moments $C_k$ for $k \geq 0$. However the growth condition $|C_k| \leq M^k k!$ implies that the moment generating function of (any possible limit) $\mu$ has a positive radius of convergence. By Fubini, the Laplace transform $ \mathcal{L}_\mu$ of $\mu$ also has a positive radius of convergence and, like every analytic function, $ \mathcal{L}_\mu$ is determined by its derivatives at $0$: it follows that $\mu$ is determined by its moments. In particular $\mu$ is unique, and $\mu_n \to \mu$ weakly as $n \to \infty$. The translation in terms of random variables is straightforward.\qed \medskip \begin{remark}[A trivial case: $ \sigma_n^2 \to 0$] When we have $ \mathbb{E}[X_n] \to C_1$ and $\mathbb{E}[X_n^2]\to C_1^2$ as $n \to \infty$ then we automatically have $ X_n \to C_1$ in distribution (and in probability). This was the case in most of the results of the previous section.
\end{remark} \subsection{The Poisson paradigm} \label{sec:poissonparadigm} In this section, we explain informally why the property $$ \mathrm{Cycle}_{n} = \{ \mathfrak{g} \in \mathbb{G}_{n}: \mathfrak{g} \mbox{ contains a simple cycle as subgraph}\},$$ actually has \textbf{no} {sharp threshold} transition for $G(n,p)$. To fix ideas, let us look at the smallest non-trivial cycle and let $\Delta(n, p)$ be the number of triangles in $G(n, p)$. One straightforwardly computes: $$ \mathbb{E}[\Delta(n, p)]= {n \choose 3} p^{3},$$ and so when $p=p_{n}=o(1/n)$, by the first moment method, there is no triangle inside $ G(n, p)$ with high probability. When $p \gg 1/n$ then the expectation of the number of triangles blows up and we can show that the variance of the number of triangles is negligible compared to its squared mean (exercise!), so that by the second moment method, there is a triangle inside $G(n, p)$ with high probability. However when $p = \frac{c}{n}$, the last expectation converges towards $c^{3}/6$ and actually $ \Delta(n, {\textstyle \frac{c}{n}})$ converges towards a Poisson variable of parameter $c^{3}/6$; this is the \textbf{Poisson paradigm}: a sum of many roughly independent indicators, each of small probability, yields a Poisson random variable in the limit. One way to prove it is to show that all moments of $ \Delta( n , {\textstyle \frac{c}{n}})$ converge towards the moments of $ \mathfrak{P}(c^{3}/6)$ and use the method of moments (Lemma \ref{lem:metmom}). This requires a careful but not unbearable analysis which we will not do in this course. \medskip In particular, the existence of a triangle in $G(n, p)$ does not have a sharp threshold transition: its probability goes from $0$ to $1$ when $p$ ranges in the scale $1/n$ but does not jump from $0$ to $1$ abruptly ``in one scale''.
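The Poisson limit is easy to probe by simulation. Here is a minimal Monte Carlo sketch in Python (the parameters $n=100$, $c=2$ and the number of trials are our own choices): it compares the empirical mean of $\Delta(n,{\textstyle \frac{c}{n}})$ with $c^{3}/6 \approx 1.33$ and the empirical probability of having no triangle with $\mathrm{e}^{-c^{3}/6} \approx 0.27$.

```python
import random

def triangle_count(n, p, rng):
    """Sample G(n, p) and count its triangles."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    t = 0
    for i in range(n):
        for j in adj[i]:
            if j > i:  # each triangle counted once, with i < j < k
                t += sum(1 for k in (adj[i] & adj[j]) if k > j)
    return t

n, c, trials = 100, 2.0, 1000
rng = random.Random(1)
samples = [triangle_count(n, c / n, rng) for _ in range(trials)]
mean_triangles = sum(samples) / trials
frac_no_triangle = sum(s == 0 for s in samples) / trials
print(mean_triangles, frac_no_triangle)  # close to 1.33 and 0.27
```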
The Poisson paradigm can actually be extended to cover cycles of length $3,4,5,\dots$ simultaneously: the numbers of such cycles behave as independent Poisson variables with parameters $c^{k}/(2k)$ as $n \to \infty$ (each equivalence class of $k$ ordered points representing the same cycle has $2k$ members). In particular, for $c \in (0,1)$ we have $$ \mathbb{P}( G(n, {\textstyle \frac{c}{n}}) \mbox{ has no simple cycle}) \xrightarrow[n\to\infty]{} \exp\left( - \sum_{k \geq 3} \frac{c^{k}}{2k} \right) = \sqrt{1-c}\ \mathrm{e}^{ \frac{1}{4}c (c+2)}.$$ \begin{figure}[!h] \begin{center} \includegraphics[width=7cm]{proba-cycle} \caption{A plot of $\lim_{n \to \infty} \mathbb{P}( G(n, {\textstyle \frac{c}{n}}) \mbox{ has no simple cycle})$ for $ c \in [0,1]$. In particular the appearance of a simple cycle has no sharp threshold, but such a cycle typically appears before $c=1$.} \end{center} \end{figure} Recalling the first section of this chapter: although the presence of isolated vertices obeys a sharp threshold, the Poisson paradigm also appears if we refine the scale, and it is known that the number of isolated vertices in $ G\left(n, \frac{\log n +c}{n}\right)$ for $c \in \mathbb{R}$ converges in distribution towards a Poisson variable of parameter $ \mathrm{e}^{-c}$. In particular, we have the ``double exponential limit of Erd{\H{o}}s--R\'enyi'' \begin{eqnarray} \label{eq:erconnectedpoisson} \mathbb{P}\left( G\left(n, \frac{\log n}{n} + \frac{c}{n}\right) \mbox{ has no isolated vertex}\right) \xrightarrow[n\to\infty]{} \mathrm{e}^{- \mathrm{e}^{-c}}, \end{eqnarray} see Theorem \ref{prop:connectedness} in Chapter \ref{chap:poissonER} for a proof of it. \begin{exo}[An application to random matrices]\label{exo:rangmatrix} Consider i.i.d.~vectors $X_1, \dots , X_k, \dots \in \{0,1\}^n$ such that each $X_i$ is zero everywhere except at two positions chosen uniformly at random among the ${n \choose 2}$ possibilities.
Evaluate $ \mathbb{P}(X_1, \dots, X_k \mbox{ are linearly independent over } \mathbb{Z}/2 \mathbb{Z})$ as a function of $k$ (for $n$ fixed but large). \label{exo:matrix2} \end{exo} \subsection{Spectrum} In this section we shall study $G (n, p)$ from a spectral point of view. As the reader will see, this boils down to computing the (expected) number of (possibly backtracking) cycles in $G(n, p)$. In this section we focus on the case when \begin{center} $ \displaystyle p = \frac{c}{n}, \quad$ with $c >0$ fixed. \end{center} Let $A=A^{(n)}_c$ be the adjacency matrix of $ G(n, {\textstyle \frac{c}{n}})$. More precisely, it is a symmetric square $n \times n$ matrix where $A_{i,j}=1$ if $i$ and $j$ are neighbors in $ G(n, p)$ and $A_{i,j}=0$ otherwise. Notice that the entries of $A$ are not centered and that by convention we put $A_{i,i}= 0$. Like any real symmetric matrix, $A^{(n)}_c$ admits a spectral decomposition and has $n$ real eigenvalues $$ \lambda_{1}^{(n)} \leq \lambda_{2}^{(n)} \leq \cdots \leq \lambda_{n}^{(n)}.$$ We shall be interested in the empirical spectral measure $$ \Lambda^{(n)}_c= \frac{1}{n}\sum_{i=1}^{n} \delta_{\lambda_{i}^{(n)}},$$ which is then a random probability measure (hence its law is an element of $ \mathcal{M}_{1}( \mathcal{M}_{1}( \mathbb{R}))$ where $ \mathcal{M}_{1}( \mathcal{X})$ is the set of all probability measures on $ \mathcal{X}$).
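The empirical spectral measure is easy to simulate. The following NumPy sketch (the values $n=500$, $c=2$ are our own choices) samples $A^{(n)}_c$, computes its spectrum, and checks numerically the trace identity $\frac{1}{n}\sum_{i} (\lambda_{i}^{(n)})^{2} = \frac{1}{n}\mathrm{Tr}(A^{2}) = \frac{2\, \#\mathrm{edges}}{n}$, whose expectation is $\approx c$; this identity is the starting point of the moment computation below.

```python
import numpy as np

n, c = 500, 2.0
rng = np.random.default_rng(0)
# Bernoulli(c/n) entries above the diagonal, symmetrized, zero diagonal
upper = np.triu(rng.random((n, n)) < c / n, k=1)
A = (upper | upper.T).astype(float)

eig = np.linalg.eigvalsh(A)  # the n real eigenvalues
m = A.sum() / 2              # number of edges

moment2 = np.sum(eig**2) / n  # second moment of the empirical measure
print(moment2, 2 * m / n)     # equal, and both close to c = 2
```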
The following theorem shows that this measure converges towards a deterministic measure: \begin{theorem}[Convergence of the spectral measure]\noindent With the above notation, for $c >0$ we have the following convergence in probability $$ \Lambda^{(n)}_c \xrightarrow[n\to\infty]{( \mathbb{P})} \mathcal{L}_{c},$$ where $ \mathcal{L}_{c}$ is a (deterministic) probability measure on $ \mathbb{R}$.\end{theorem} The above convergence in probability just means that for any function $f \in \mathcal{C}_c( \mathbb{R})$ with compact support we have $$ \int_{ \mathbb{R}} \Lambda^{(n)}_c( \mathrm{d}x) f(x) \xrightarrow[n\to\infty]{ (\mathbb{P})} \int_{ \mathbb{R}} \mathcal{L}_{c}( \mathrm{d}x) f(x).$$ It is possible to properly speak of convergence (in probability or in distribution) for random measures by endowing the set of probability measures on $ \mathbb{R}$ with a suitable topology. We refer to the authoritative reference \cite{Kal86} for details about convergence of random measures. The limiting (deterministic) measure $ \mathcal{L}_{c}$ is poorly understood as of today (e.g.~the decomposition of $ \mathcal{L}_c$ into atomic and continuous parts...). \begin{figure}[!h] \begin{center} \includegraphics[height=4cm]{spectre1} \hspace{1cm} \includegraphics[height=4cm]{spectre2} \caption{Simulations of $\Lambda^{(n)}_{c}$ for $c=2$ and $c=10$.} \end{center} \end{figure} \noindent \textbf{Partial proof.} We shall only prove a weak version of the theorem, namely that the \textbf{expected} empirical measure converges. To prove this, we shall use the method of moments (Lemma \ref{lem:metmom}) and prove convergence of the moments, i.e.
$$ \mathbb{E}\left[ \int \Lambda^{(n)}_c (\mathrm{d}x) \cdot x^k\right] = \int \underbrace{\mathbb{E}[\Lambda^{(n)}_c]}_{(*)} (\mathrm{d}x) \cdot x^k \xrightarrow[n\to\infty]{} \mathbb{E}\left[\int_{ \mathbb{R}} \mathcal{L}_c ( \mathrm{d}x) \cdot x^k\right]=\int_{ \mathbb{R}} \underbrace{\mathbb{E}\left[ \mathcal{L}_c \right]}_{(**)}( \mathrm{d}x) \cdot x^k,$$ where $(*)$ and $(**)$ are the expected measures, which are deterministic probability measures on $ \mathbb{R}$. The convergence in probability of the random measure $\Lambda^{(n)}_c$ is obtained by further establishing concentration of the empirical moments (e.g.~by computing second moments), which we shall skip in these notes; see \cite{zakharevich2006generalization,khorunzhy2004eigenvalue} for details. Even the convergence of the moments of $ \mathbb{E}[ \Lambda^{(n)}_c]$ might seem complicated to establish since the eigenvalues of $ A^{(n)}_c$ depend on the matrix in a very intricate way. The idea is to use the spectral decomposition and to take the expected trace of the powers of the matrix $A^{(n)}_c$: indeed by invariance of the trace under change of basis we have $$n \cdot \int_{ \mathbb{R}} \Lambda^{(n)}_c( \mathrm{d}x) \cdot x^{k} = \sum_{i=1}^{n} (\lambda_{i}^{(n)}) ^{k} = \mathrm{Tr}\left(\left(A^{(n)}_c\right)^k\right)= \sum_{i_{1}, \dots, i_{k}=1}^{n}A_{i_{1},i_{2}}A_{i_{2},i_{3}}\dots A_{i_{k},i_{1}}.$$ After taking the expectation, all we need is to isolate the contribution of order $n$ in the above sum. We will gather the terms in $\sum_{i_{1}, \dots, i_{k}} \mathbb{E}[A_{i_{1},i_{2}}A_{i_{2},i_{3}}\dots A_{i_{k},i_{1}}]$ which share the same combinatorial structure for the cycle $i_1 \to i_2 \to \cdots \to i_k \to i_1$ and represent it by a diagram.
More precisely, we shall partition this sum according to the underlying multigraph $ \mathfrak{g}$ obtained by identifying in the ``free'' cycle $i_{1} \to i_{2} \to \cdots \to i_{k} \to i_{1}$ the indices $i_j$ corresponding to the same vertex $\in \{1, \dots , n\}$. Those graphs are usually called Feynman\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{feynman}} Richard Feynman (1918--1988), American} diagrams in the physics literature. See Figure \ref{fig:graphdecomp} for examples. \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{feynmann} \caption{ \label{fig:graphdecomp} Expanding the expectation using a sum over Feynman diagrams. The only non-zero asymptotic contribution comes from the trees with possibly several edges.} \end{center} \end{figure} Once a multigraph $ \mathfrak{g}$ with a rooted oriented spanning path is fixed, if $v$ is its number of vertices and $e$ its number of edges after collapsing the possible multi-edges, the corresponding contribution in the above sum is equal to $$ {n \choose v} \left( \frac{c}{n} \right)^e \underset{n \to \infty}{\sim} \frac{c^e}{v!} n^{v-e}.$$ Hence, the main contribution to the above expectation is provided by Feynman diagrams for which $v-e$ is maximal: those are finite trees (for which we have $v-e=1$). More precisely, in this case $k= 2 \ell$ must be even and those objects are finite (non-plane) trees with $e \leq \ell$ edges together with a surjective image of the rooted $k$-cycle $1\to 2 \to \dots \to 2\ell \to 1$.
If we denote by $ \mathrm{Fey}(2 \ell,e)$ the number of such combinatorial objects then we can summarize the discussion by \begin{eqnarray*} \lim_{n \to \infty} \mathbb{E}\left[ \int \Lambda^{(n)}_c (\mathrm{d}x) \cdot x^k\right] &=& \lim_{n \to \infty} \frac{1}{n} \sum_{i_{1}, \dots, i_{k}=1}^{n} \mathbb{E}[A_{i_{1},i_{2}}A_{i_{2},i_{3}}\dots A_{i_{k},i_{1}}] \\ &=& \mathbf{1}_{ k= 2 \ell \mathrm{\ is \ even}} \sum_{e=1}^{\ell} \mathrm{Fey}( 2 \ell ,e) \frac{c^{e}}{(e+1)!} := C_{k}. \end{eqnarray*} To check the growth condition on $C_{k}$ needed in Lemma \ref{lem:metmom}, notice that $$\sum_{e=1}^{\ell} \mathrm{Fey}( 2 \ell ,e) \frac{c^{e}}{(e+1)!} \leq \max(1,c)^{\ell} \sum_{e=1}^{\ell} \mathrm{Fey}( 2 \ell, e) \leq \max(1,c)^{\ell} \underbrace{\#\{ \mbox{partitions of }\{1,2, \dots , 2 \ell \}\}}_{ \mbox{ Bell number}} \leq \max(1,c)^{\ell} (2 \ell)^{ 2 \ell} \leq M^{k} k!,$$ for some $M >0$. However, the number of diagrams corresponding to the moment of order $k$ grows faster than exponentially: considering the case when the underlying tree is a star with $ \sqrt{k}$ vertices decorated by a walk of length $k$, it is easy to see that there are at least $( \sqrt{k})^{k- \sqrt{k}}$ diagrams, and this in particular implies that the limiting measure $ \mathcal{L}_{c}$ has unbounded support. \qed \paragraph{A warning to conclude: Thresholds and expectation thresholds.} Before closing this chapter, let us warn the reader that the first (and second) moment method, although powerful, does not always yield the correct threshold for the typical appearance of an induced subgraph. Consider the following example of the ``pan graph'' \begin{center} \includegraphics[width=4cm]{casserole} \end{center} By the first moment, the mean number of such induced graphs in $G(n, p)$ is $ {n \choose 7} p^{9}$ and this blows up when $p \gg n^{-7/9}$. The naive guess is then $p_{n} \approx n^{-7/9}$ for the (weak) threshold of appearance of the pan graph in $G(n, p_n)$.
But if we focus on the appearance of the square with diagonals, we would guess a threshold of order $p_{n} \approx n^{-2/3}$. How could the second threshold be larger than the first one, given that we are considering a smaller subgraph? The reason is that the first guess $p_{n} \approx n^{-7/9}$ based on the first moment method is incorrect: with small probability, a square with diagonals may appear in $G(n, n^{-7/9 + \varepsilon})$, but it then has many different ``tails'', causing a blow-up of the expectation of such induced subgraphs... The expectation threshold conjecture of Kahn \& Kalai \cite{kahn2007thresholds} states that up to a multiplicative factor of $\log n$ the location of the (weak) threshold (see Exercise \ref{exo:weakthreshold}) is given by the above first moment method applied to all subgraphs and taking the maximum. This far-reaching conjecture was recently proved \cite{park2022proof}. \bigskip \noindent \textbf{Bibliographical notes.} The Erd{\H{o}}s--R\'enyi model is probably the simplest and the most studied random graph model. It is a wonderful playground for combinatorics and probability. It has many variations and descendants such as the stochastic block model, the rank $1$ model, the configuration model... which are more realistic models for real-life networks. The literature on this topic is vast; see e.g.~the recent monograph \cite{van2009random} or the classic books \cite{bollobas2001random,janson2011random}. There are also lecture notes available on the web such as \cite{Bor15,blaszczyszyn2017lecture} and \cite{wustatistical} for applications in statistics. Reading the original papers \cite{erdds1959random,erdHos1960evolution,erdos1963asymmetric} of Erd{\H{o}}s \& R\'enyi is still very inspiring. Theorem \ref{thm:connectedness} is proved in \cite{erdds1959random}. See the nice note \cite{cooper2019rank} for an application of random graph theory to sparse random matrices (as in Exercise \ref{exo:matrix2}).
\bigskip \noindent \textbf{Hints for exercises.}\\ Exercise \ref{exo:ercayley}: By Cayley's formula the probability is equal to $n^{n - 2}p^{n - 1}(1 - p)^{{n \choose 2} - (n- 1)}$ and is maximal at $p = \frac{2}{n}$.\\ Exercise \ref{exo:weakthreshold}: Put $p_n$ such that $ \mathbb{P}(G(n,p_n) \in A_n) = \frac{1}{2}$. See Bollob\'as \& Thomason \cite{bollobas1987threshold}.\\ Exercise \ref{exo:asympdegree}: Use the first and second moment methods.\\ Exercise \ref{exo:rangmatrix}: Consider the graph whose vertices are the vectors and where there is an edge between two vectors if they have a non-zero coordinate in common. Then there is a non-trivial $ \mathbb{Z}/2 \mathbb{Z}$ relation iff the graph contains a cycle. \chapter{Birth of a giant $1$, via $ \varepsilon$-cut} \hfill Comment Gargantua nasquit en fa\c con bien estrange. (Rabelais)\bigskip \vspace{2cm} \begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{er05} \includegraphics[height=5cm]{er1} \includegraphics[height=5cm]{er12} \includegraphics[height=5cm]{er2} \includegraphics[height=5cm]{er35} \includegraphics[height=5cm]{er10} \caption{A large $G(n,{\textstyle \frac{c}{n}})$ graph with $n=1000$ and $c$ equal (from left to right) to $0.1\quad 0.5\quad 1 \quad 1.1 \quad \frac{\log n}{2} \quad 2 \log n.$ We see the emergence of a giant component around $c \approx 1$ and that the graph becomes connected around $c \approx \log n$ (see Theorem \ref{thm:connectedfin}).} \end{center} \end{figure} \clearpage We continue the study of the geometry of $G(n, p)$ as $p$ increases and prove a phase transition for the size of the clusters in $G(n, p)$: suddenly a ``giant component'' carrying a positive proportion of the vertices appears around $p_{n} = \frac{1}{n}$. More precisely, for a (finite) graph $ \mathfrak{g}$ and a vertex $v \in \mathrm{V}(\mathfrak{g})$ we denote by $ \mathcal{C}^{v}( \mathfrak{g})$ the connected component of $v$ inside $ \mathfrak{g}$.
We also denote by $ \mathrm{C}^{\max}_{1}( \mathfrak{g}), \mathrm{C}^{\max}_{2}( \mathfrak{g}),\dots$ the sizes (number of vertices) of the connected components of $ \mathfrak{g}$ in non-increasing order. We write $ \mathcal{C}_{\max}$ for a connected component of maximal size (with ties broken using the labelings of the vertices). Sometimes we drop the notation $ ( \mathfrak{g})$ when the underlying graph is clear from the context. The main theorem of this part is the following: \begin{theorem}[Birth of a giant]\label{thm:erdosrenyi}\noindent There is a sharp threshold transition for the existence of a giant connected component at $p_{n} = \frac{1}{n}$. More precisely, if $p_{n} = \frac{c}{n}$ then inside $G(n, {\textstyle \frac{c}{n}})$: \begin{itemize} \item \textbf{Subcritical: If $ \mathbf{c<1}$} then there exists $A>0$ depending on $c>0$ such that w.h.p. we have $ \mathrm{C}^{\max}_{1}(G(n,{\textstyle \frac{c}{n}})) \leq A \log n$. \item \textbf{Supercritical: If $ \mathbf{c>1}$} then there exists $A>0$ depending on $c>0$ such that w.h.p. we have $ \mathrm{C}^{\max}_{2}(G(n,{\textstyle \frac{c}{n}}))~\leq~A\log n$ whereas $ n^{-1} \mathrm{C}^{\max}_{1}(G(n,{\textstyle \frac{c}{n}})) \to (1-\alpha(c))$ in probability, where $\alpha(c)$ is the smallest solution in $(0,1]$ to the equation \begin{eqnarray} \label{def:alphac} \alpha(c) = \mathrm{e}^{-c(1-\alpha(c))}. \end{eqnarray} \item \textbf{Critical: If $ \mathbf{c=1}$} then the vector $( n^{ -2/3}\mathrm{C}^{\max}_{i}(G(n,{\textstyle \frac{1}{n}})))_{i \geq 1}$ converges in law in the finite-dimensional sense towards a random infinite vector with positive entries belonging to $\ell^2$. This vector is in fact the ordered version of the lengths of the excursions of the function $t \mapsto B_{t} - \frac{t^{2}}{2}$ above its running infimum (sic!). \end{itemize} \end{theorem} The goal of the following three chapters is to prove the above result (multiple times).
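The fixed point $\alpha(c)$ in \eqref{def:alphac} is easily evaluated numerically. The following Python sketch (the function name and the tolerance are our own choices) computes the smallest solution in $(0,1]$ by fixed-point iteration started from $0$, which converges monotonically to the smallest fixed point:

```python
import math

def alpha(c, tol=1e-12):
    """Smallest solution in (0,1] of a = exp(-c(1-a)): the extinction
    probability of a BGW tree with Poisson(c) offspring.  Iterating the
    map a -> exp(-c(1-a)) from 0 converges to the smallest fixed point."""
    a = 0.0
    while True:
        b = math.exp(-c * (1.0 - a))
        if abs(b - a) < tol:
            return b
        a = b

print(alpha(0.5))   # ~1: subcritical, no giant component
print(alpha(2.0))   # ~0.2032, so the giant has density 1 - alpha(2) ~ 0.797
```

For $c \leq 1$ the iteration climbs to $1$, while for $c>1$ it stops at the nontrivial fixed point.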
We will actually only prove the subcritical and supercritical statements, and shall only provide an upper bound on the size of the largest component in the critical case (see also Proposition \ref{prop:aldous} in a slightly different model). The intuition behind Theorem \ref{thm:erdosrenyi} is that the local neighborhood around a given vertex in $G(n, {\textstyle \frac{c}{n}})$ looks like a BGW tree with offspring distribution $ \mathrm{Poisson}(c)$, see Proposition \ref{prop:cvlocpoisson}. When $c<1$ such a random tree dies out almost surely (in fact very quickly) and all the connected components in $G(n, {\textstyle \frac{c}{n}})$ are small. On the contrary, if $ c>1$ then the BGW process survives with positive probability equal to $1- \alpha(c)$ (see Proposition \ref{prop:GWdisguise} and Theorem \ref{thm:extinction}): in the finite setting this means that the component of the vertex in question is very large (a giant component). It turns out that this giant component is unique, so that its density is asymptotically $1- \alpha(c)$, and the remaining components are small. We prove a weaker version of the above theorem using this sketch in this chapter, then turn to a more modern proof using an exploration technique and estimates on random skip-free walks similar to those used in Chapter \ref{chap:GW}. This proof is shortened in Chapter \ref{chap:poissonER} by slightly tweaking the graph. \bigskip \label{sec:firstmomentgiant} In this chapter we give a ``first moment'' proof of a weak version of Theorem \ref{thm:erdosrenyi} which is close in spirit to the historical proof of Erd{\H{o}}s \& R\'enyi \cite{erdHos1960evolution}. We first study the law of the connected components in $G(n,{\textstyle \frac{c}{n}})$ and show that they are indeed described by BGW trees with Poisson$(c)$ increments.
We then use a first moment argument on $ \varepsilon$-cuts together with a ``sprinkling'' idea to deduce the existence of a giant component in the supercritical regime: \begin{theorem}[The giant, weak version] \noindent For $c>0$ denote by $\alpha(c)$ the smallest solution in $(0,1]$ to the equation $\alpha(c) = \mathrm{e}^{-c(1-\alpha(c))}$; in particular $\alpha(c) =1$ when $c \leq 1$. \label{thm:weak-giant} Then we have $$ \frac{\mathrm{C}^{\max}_1}{n} \xrightarrow[n\to\infty]{( \mathbb{P})} 1 - \alpha(c) \quad \mbox{ and } \quad \frac{\mathrm{C}^{\max}_2}{n} \xrightarrow[n\to\infty]{( \mathbb{P})} 0.$$ \end{theorem} \section{The local limit} We denote by $ \mathcal{C}(n, p) \equiv \mathcal{C}^{1}(G(n,p))$ the cluster of the vertex $1$ in $G(n,p)$, which we view as a random labeled graph whose vertices have been relabeled in increasing order by $1,2, \dots , \# \mathcal{C}(n,p)$. We write $\mathrm{T}(c)$ for a Bienaym\'e--Galton--Watson plane tree with offspring distribution Poisson$(c)$ and denote by $ \mathcal{T}(c)$ the Cayley tree obtained by labeling its root by $1$ and the rest of its vertices by $2,3, \dots , \# \mathrm{T}(c)$ uniformly at random. We put $ \mathcal{T}(c)= \dagger$ (a cemetery point) if $ \mathrm{T}(c)$ is infinite. \begin{proposition} \label{prop:cvlocpoisson}Fix $c >0$ and suppose that $p_{n} \sim \frac{c}{n}$ as $n \to \infty$.
For $k \geq 1$ and for any connected labeled graph $ \mathfrak{g} \in \mathbb{G}_k$ we have $$\lim_{n \to \infty }\mathbb{P}( \mathcal{C}(n,p_{n}) = \mathfrak{g}) = \mathrm{e}^{-k \cdot c } \frac{c^{k-1}}{(k-1)!} \mathbf{1}_{ \displaystyle \mathfrak{g} \mbox{ is a Cayley tree}} = \mathbb{P}( \mathcal{T}(c) = \mathfrak{g}).$$ \end{proposition} \noindent \textbf{Proof.} If the connected labeled graph $ \mathfrak{g}$ with $k$ vertices and $\ell$ edges is fixed, we have $$\mathbb{P}( \mathcal{C}(n,{\textstyle \frac{c}{n}}) = \mathfrak{g}) \quad = \quad {n-1 \choose k-1} (1-p_{n})^{k(n-k) + {k \choose 2} -\ell} p_{n}^{\ell}.$$ Taking limits as $n \to \infty$, the above display tends to $0$ if $\ell \geq k$ and towards $\mathrm{e}^{-k \cdot c } \frac{c^{k-1}}{(k-1)!}$ if $\ell = k-1$. In the latter case, this formula coincides with the probability that $ \mathcal{T}(c)$ lands on the tree $\mathfrak{g}$, as seen in the proof of Proposition \ref{prop:cayleyGW}. \qed \medskip In particular, when $c \leq 1$, since $ \mathrm{T}(c)$ is almost surely finite we have that $$ \sum_{k \geq 1} k^{k-2} \mathrm{e}^{-k \cdot c } \frac{c^{k-1}}{(k-1)!} =1,$$ so that $ \# \mathcal{C}(n,{\textstyle \frac{c}{n}}) \to \# \mathrm{T}(c)$ in distribution as $ n \to \infty$. We recall from Definition \ref{def:borel-tanner} that the latter is distributed according to the Borel--Tanner distribution with parameter $c$.
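This convergence is easy to test by simulation. The Python sketch below (our own naming; a numerical check, not part of the argument) explores the cluster of vertex $1$ one stack vertex at a time, each revealing a $\mathrm{Bin}(\#\text{untouched vertices}, p)$ number of new neighbors, and compares the empirical law of $\# \mathcal{C}(n,{\textstyle \frac{c}{n}})$ with the Borel--Tanner probabilities:

```python
import math
import random

def cluster_size(n, p, rng):
    """Size of the cluster of vertex 1 in G(n,p): each vertex popped from
    the stack reveals Binomial(#untouched, p) new neighbours."""
    untouched, stack, size = n - 1, 1, 1
    while stack > 0:
        new = sum(rng.random() < p for _ in range(untouched))
        untouched -= new
        stack += new - 1
        size += new
    return size

def borel_tanner(k, c):
    """P(#T(c) = k) = e^{-ck} (ck)^{k-1} / k!, computed via logarithms."""
    return math.exp(-c * k + (k - 1) * math.log(c * k) - math.lgamma(k + 1))

rng = random.Random(0)
c, n, trials = 0.5, 500, 2000
samples = [cluster_size(n, c / n, rng) for _ in range(trials)]
for k in (1, 2, 3):
    print(k, samples.count(k) / trials, borel_tanner(k, c))
```

(For $c=0.5$ the Borel--Tanner probabilities for $k=1,2,3$ are approximately $0.607$, $0.184$ and $0.084$, and the empirical frequencies should land close to these values.)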
As a quick corollary we can deduce that there is no giant component when $c \leq 1$: if $ \mathcal{C}_{\max}$ is a largest connected component in $G(n, {\textstyle \frac{c}{n}})$ we have \begin{eqnarray*}0 \xleftarrow[n\to\infty]{} \mathbb{P}( \#\mathcal{C}(n,{\textstyle \frac{c}{n}}) \geq \varepsilon n) &=& \frac{1}{n}\sum_{i=1}^{n} \mathbb{P}( \# \mathcal{C}^{i}(G(n, {\textstyle \frac{c}{n}})) \geq \varepsilon n)\\ &\geq& \frac{1}{n} \sum_{i=1}^{n}\mathbb{E} \left[ \mathbf{1}_{ \mathrm{C}^{\max}_1 \geq \varepsilon n} \mathbf{1}_{i \in \mathcal{C}_{\max}}\right] \geq \frac{ \varepsilon n}{n} \mathbb{P}( \mathrm{C}^{\max}_1 \geq \varepsilon n). \end{eqnarray*} More precisely, we will see below that the proportion of vertices belonging to ``big clusters'' is concentrated. But before that let us state an easy lemma whose proof is straightforward: \begin{lemma} \label{lem:condcluster}Conditionally on $\mathcal{C}^{1}( G( n ,p ))$ the remaining graph\footnote{with vertices relabeled in increasing order} $ G(n,p) \backslash \mathcal{C}^{1} (G (n, p ))$ has law $G\big( n - \# \mathcal{C}^{1}( G( n ,p )), p\big)$.
\end{lemma} \begin{corollary} \label{cor:propotionfini} With $\alpha (c)$ as defined in Theorem \ref{thm:weak-giant}, for all $ A \in \{0,1,2, \dots \}$ we have $$ n^{-1} \, \#\left\{ 1 \leq i \leq n : \# \mathcal{C}^{i}\big({\textstyle G(n, \frac{c}{n})}\big) \leq A\right\} \xrightarrow[n\to\infty]{ ( \mathbb{P})} \mathbb{P}( \#\mathrm{T}(c) \leq A) \xrightarrow[A\to\infty]{} \alpha(c).$$ \end{corollary} \noindent \textbf{Proof.} If $ N_{n}(A) = \#\left\{ 1 \leq i \leq n : \# \mathcal{C}^{i}\big({\textstyle G(n, \frac{c}{n})}\big) \leq A\right\}$, by the previous proposition, we have the asymptotic of the expectation: $$ \mathbb{E}[N_{n}(A)] = \sum_{i=1}^{n} \mathbb{P}( \# \mathcal{C}^{i} \leq A) = n \mathbb{P}( \#\mathcal{C}(n, {\textstyle \frac{c}{n}}) \leq A) \underset{n \to \infty}{\sim} n \cdot \mathbb{P}( \# \mathrm{T}(c) \leq A),$$ and this asymptotic actually holds as soon as $p_{n} \sim \frac{c}{n}$. Using $ \mathcal{C}^{i} = \mathcal{C}^{i} (G (n , c/n))$ as a shorthand notation, the second moment is easily bounded: \begin{eqnarray*} \mathbb{E}[N_{n}(A)^{2}]&=&\sum_{1 \leq i , j \leq n} \mathbb{P}( \# \mathcal{C}^{i} \leq A \mathrm{ \ and \ } \# \mathcal{C}^{j} \leq A) \\ &=& \sum_{1 \leq i , j \leq n} \left(\mathbb{P}( \# \mathcal{C}^{i} \leq A \mbox{ and } j \in \mathcal{C}^{i}) + \mathbb{P}( \# \mathcal{C}^{i} \leq A \mbox{ and } \# \mathcal{C}^{j} \leq A \mbox{ and } j \notin \mathcal{C}^{i})\right)\\ &=& n \mathbb{E}\left[ \# \mathcal{C}^{1} \mathbf{1}_{\# \mathcal{C}^{1} \leq A}\right] + n(n-1) \mathbb{P}\left( \# \mathcal{C}^{1} \leq A \mbox{ and } \# \mathcal{C}^{2} \leq A \mbox{ and } 2 \notin \mathcal{C}^{1}\right)\\ & \leq & n A + n(n-1) \mathbb{E}\Big[ \mathbf{1}_{ \#\mathcal{C}^{1} \leq A} \mathbb{E}\left[ \mathbf{1}_{\# \mathcal{C}^{2} \leq A \mbox{ and } 2 \notin \mathcal{C}^{1} } \mid \mathcal{C}^{1} \right] \Big] \\ & \underset{ \mathrm{Lemma\ \ref{lem:condcluster}}}{=}& n A + n(n-1) \mathbb{E}\left[ \mathbf{1}_{ \#\mathcal{C}^{1} \leq
A} \mathbb{E}\Big[ \mathbf{1}_{2 \notin \mathcal{C}^{1}} \underbrace{\mathbb{E}\big[ \mathbf{1}_{\# \mathcal{C}^{2} \leq A} \big | \{ 2 \notin \mathcal{C}^{1} \} \mbox{ and } \mathcal{C}^{1} \big]}_{ \mathbb{P}( \# \mathcal{C}( N, \frac{c}{n}) \leq A) } \Big | \mathcal{C}^{1} \Big] \right] \\ & =& n A + n(n-1) \mathbb{E}\left[ \frac{N}{n-1} \cdot \mathbb{P}\left( \# \mathcal{C}\left( N, \frac{c}{n}\right) \leq A\right)\right], \end{eqnarray*} where we used Lemma \ref{lem:condcluster} to argue that once conditioned on the cluster $ \mathcal{C}^{1}$, the remaining graph is distributed as $G(N, {\textstyle \frac{c}{n}})$ where $N = n - \# \mathcal{C}^{1}$, so that by symmetry $ \mathbb{P}( 2 \notin \mathcal{C}^1 \mid \mathcal{C}^1) = \frac{N}{n-1}$ and $\mathbb{E}\left[ \mathbf{1}_{\# \mathcal{C}^{2} \leq A} \mid \{ 2 \notin \mathcal{C}^{1} \} \mbox{ and } \mathcal{C}^{1} \right] = \mathbb{P}( \# \mathcal{C}( N, c/n) \leq A)$. Since $N \sim n$, we can apply the first moment asymptotic once more and deduce that the previous display is asymptotic to $n^{2} \mathbb{P}( \#\mathrm{T}(c) \leq A)^{2}$. We deduce that $ \mathbb{E}[n^{-1} N_{n}(A)] \to \mathbb{P}( \#\mathrm{T}(c) \leq A)$ and $ \mathrm{Var}( n^{-1} N_{n}(A)) \to 0$ as $n \to \infty$, which by Chebyshev's inequality entails the convergence in probability in the corollary. The convergence $\mathbb{P}( \#\mathrm{T}(c) \leq A) \to \alpha(c)$ as $A \to \infty$ follows from Theorem \ref{thm:extinction}. \qed \section{An easy giant via $ \varepsilon$-cut} Fix a graph $ \mathfrak{g}$ with $n$ vertices and $ \varepsilon>0$. An \textbf{$ \varepsilon$-cut} is a partition of $\{1,2, \dots , n\}$ into two subsets $A$ and $B$ of size (number of vertices) $ \varepsilon n$ and $(1- \varepsilon)n$ so that there is no edge between $A$ and $B$ in $ \mathfrak{g}$. That notion was already used in the proof of Lemma \ref{lem:core}.
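As a concrete illustration (in Python, with hypothetical component sizes and our own naming), one can always produce an $ \varepsilon$-cut by grouping whole connected components, with $ \varepsilon$ as close to $\frac{1}{2}$ as half the relative size of the largest component:

```python
def epsilon_cut(sizes):
    """Group whole components, largest first, until the running total
    first reaches n/2; the closer of the last two partial sums to n/2
    gives the side A of a cut with |eps - 1/2| <= max(sizes)/(2n)."""
    n = sum(sizes)
    s = 0
    for x in sorted(sizes, reverse=True):
        if s + x >= n / 2:
            return (s + x if (s + x) - n / 2 <= n / 2 - s else s) / n
        s += x

sizes = [30, 25, 20, 15, 10]   # hypothetical component sizes, n = 100
eps = epsilon_cut(sizes)
print(eps)                      # 0.55: within 30/200 = 0.15 of 1/2
assert abs(eps - 0.5) <= max(sizes) / (2 * sum(sizes))
```

This is exactly the strategy of the proof of the lemma below.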
The following deterministic lemma relates the existence of $ \varepsilon$-cuts to the size of the largest component: \begin{lemma}[Giants make cutting difficult] Recall that $ \mathrm{C}^{\max}_{1}( \mathfrak{g})$ is the size of the largest component in $ \mathfrak{g}$. Then one can find an $ \varepsilon$-cut in $ \mathfrak{g}$ with $$ \left| \varepsilon - \frac{1}{2} \right| \leq \frac{1}{2} \frac{ \mathrm{C}^{\max}_{1}( \mathfrak{g})}{n}.$$ \end{lemma} \noindent \textbf{Proof.} Let us put $x_i = n^{-1} \mathrm{C}^{\max}_i ( \mathfrak{g})$ for the renormalized cluster sizes in $ \mathfrak{g}$ ranked in non-increasing order, so that $\sum_{i \geq 1} x_i =1$. Let $$\ell = \inf\left\{ j \geq 1 : \sum_{i=1}^j x_i \geq \frac{1}{2}\right\}.$$ Since $x_\ell \leq x_1 = \frac{ \mathrm{C}^{\max}_1 ( \mathfrak{g})}{n}$ we deduce that either $\sum_{i=1}^{\ell-1} x_i$ or $\sum_{i=1}^{\ell} x_i$ is $ \frac{x_1}{2}$-close to $1/2$. Regrouping the vertices of the corresponding components, we get the desired $ \varepsilon$-cut. \qed \paragraph{An easy giant.} Let us use this lemma to quickly prove that there is a large component in $G(n,{\textstyle \frac{c}{n}})$ when $c > 4 \log 2$ (this is a much weaker statement than Theorem \ref{thm:erdosrenyi}). Indeed, one can upper-bound the expected number of $ \varepsilon$-cuts in $G(n,p)$ by \begin{eqnarray} \mathbb{E}\left[\# \varepsilon-\mbox{cuts in }G(n,p) \right] \leq {n \choose \varepsilon n} (1-p)^{ \varepsilon(1- \varepsilon) n^2} \underset{p= \frac{c}{n}}{\leq} 2^n \mathrm{exp}(- c \varepsilon(1- \varepsilon) n). \label{eq:cutgiant}\end{eqnarray} When $c > 4 \log 2$ the right-hand side tends exponentially fast to $0$ as soon as $ \varepsilon \in ( \frac{1-\delta_c}{2},\frac{1+\delta_c}{2} )$ where $(\delta_c)^2 = 1 - \frac{4 \log 2}{c}$.
Summing over all the at most $n$ possible values of $ \varepsilon$ in this range, we deduce by the first moment method that for all $ \eta >0$ we have $$ \mathbb{P}\left( \exists\ \varepsilon- \mbox{cut in } G(n,{\textstyle \frac{c}{n}}) \mbox{ with } \left| \frac{1}{2} - \varepsilon \right | \leq \frac{\delta_c- \eta}{2} \right) \xrightarrow[n\to\infty]{} 0,$$ hence by the above lemma $$ \quad \mathbb{P}\left( \exists \mbox{ a cluster of size at least } (\delta_c -\eta) \cdot n \mbox{ in } G(n,{\textstyle \frac{c}{n}})\right) \xrightarrow[n\to\infty]{} 1.$$ The preceding reasoning becomes very useful when we already start from a graph having large clusters: suppose that $ \mathfrak{g}\in \mathbb{G}_N$ is a graph having only clusters of size at least $A$ and denote by $ G(N,p) \cup \mathfrak{g}$ the graph obtained by superimposing it with an independent Erd{\H{o}}s--R\'enyi random graph (and deleting the possible multiple edges). Then we have: \begin{lemma}[Sprinkling] \label{lem:sprinkling}Fix $ \delta, \varepsilon >0$ and $A\in \{1,2, \dots \}$. The graph $G(N, \frac{\delta}{N}) \cup \mathfrak{g}$ has a giant component of size at least $(1 - 2\varepsilon)N$ with high probability as $N \to \infty$ as soon as we have $$ - \delta \varepsilon(1- \varepsilon) + \frac{\log 2 }{A} < 0.$$ \end{lemma} \noindent \textbf{Proof.} As above, we compute the expected number of $ \varepsilon$-cuts in $G(N, \frac{\delta}{N}) \cup \mathfrak{g}$. Since those cuts have to be compatible with the initial structure of $ \mathfrak{g}$, there are at most $2^{K}$ choices, where $K \leq N/A$ is the number of connected components of $ \mathfrak{g}$.
Hence, the expected number of $ \varepsilon$-cuts is upper bounded by $$ \mathbb{E}\left[\# \varepsilon-\mbox{cuts in }G(N, {\textstyle \frac{\delta}{N}}) \cup \mathfrak{g} \right] \leq 2^{ N/A} \left(1- \frac{\delta}{N}\right)^{ \varepsilon(1- \varepsilon) N^{2}},$$ and we conclude as above by the first moment method after summing over the at most $N$ possible values of $ \varepsilon$. \qed \medskip \begin{exo} \label{exo:bissection} Suppose $n$ is even. Use Theorem \ref{thm:weak-giant} to prove that the existence of a $ \frac{1}{2}$-cut (i.e.~a partition of the vertices into two subsets of the same cardinality without edges between them) in $G(n, p)$ has a sharp threshold at $$p_{n} = \frac{\log 4}{n}.$$ \end{exo} \section{Sprinkling} We now gather Corollary \ref{cor:propotionfini} and Lemma \ref{lem:sprinkling} and prove Theorem \ref{thm:weak-giant}. The idea is to remark that the superimposition of two independent Erd{\H{o}}s--R\'enyi random graphs is again an Erd{\H{o}}s--R\'enyi graph: for $c>1$ and $\delta>0$ we have that \begin{eqnarray} G\big(n, \frac{c}{n}\big) \underset{ \mathrm{indpt}}{\bigcup} G(n, \frac{\delta}{n}) \quad \overset{(d)}{=}\quad G\Big(n, 1- \big( 1- \frac{c}{n}\big)\big( 1- \frac{\delta}{n}\big)\Big) \quad \subgraph \quad G\big(n, \frac{c + 2 \delta}{n}\big) \label{eq:couplage2delta}\end{eqnarray} for $n$ large enough. \medskip \noindent \textit{Proof of Theorem \ref{thm:weak-giant}.} Fix $c>1$, fix $ \varepsilon,\delta >0$ small and $A>0$ large. Denote by $ \mathfrak{g}$ the subgraph of $G(n, { \textstyle \frac{c}{n}})$ spanned by the vertices in components of size larger than $A$. We know from Corollary \ref{cor:propotionfini} that $\mathfrak{g}$ is of size $N = n \mathbb{P}( \#\mathrm{T}(c) \geq A) +o_{ \mathbb{P}}(n)$ and we assume that $A$ is large enough so that $$\mathbb{P}( \#\mathrm{T}(c) \geq A) \geq (1- \varepsilon) (1- \alpha(c));$$ in particular we used here that $c>1$, so that $1-\alpha(c)>0$.
Conditionally on $G(n, { \textstyle \frac{c}{n}})$, and in particular on $ \mathfrak{g}$ and $N$, when $N \geq (1- \varepsilon)^{2}(1-\alpha(c))\, n$ we can apply Lemma \ref{lem:sprinkling} and deduce that in the graph $ G(n, {\textstyle \frac{\delta}{n}}) \cup \mathfrak{g}$ restricted to the vertices of $ \mathfrak{g}$, there is w.h.p.~a component of size at least $ (1- \varepsilon)N$ as soon as $$ \delta (1- \varepsilon)^{2}(1-\alpha(c))\, \varepsilon(1- \varepsilon) - \log 2 /A >0.$$ Up to further increasing $A$, we can suppose that the former inequality is satisfied. We deduce that w.h.p.~there is a component of size $ (1- \varepsilon)N \geq (1- \varepsilon)^{3}(1-\alpha(c))\, n + o_{ \mathbb{P}}(n)$ inside $G\big(n, \frac{c}{n}\big) {\bigcup} G(n, \frac{\delta}{n}) \subgraph G\big(n, \frac{c + 2 \delta}{n}\big)$ by \eqref{eq:couplage2delta}. Letting $A \to \infty$ while $ \varepsilon \to 0$, this shows the existence of a connected component of size \textit{at least} $(1-\alpha(c))n + o_{ \mathbb{P}}(n)$ in $ G(n, { \textstyle \frac{c'}{n}})$ for any $c'>c$. By continuity of $c \mapsto \alpha(c)$ we deduce the existence of a connected component of size \textit{at least} $(1 - \alpha(c))n + o_ \mathbb{P}(n)$ in $ G(n, { \textstyle \frac{c}{n}})$, whereas Corollary \ref{cor:propotionfini} entails that $ \alpha(c) n + o_{ \mathbb{P}}(n)$ of its vertices are in components of size bounded independently of $n$. This proves Theorem \ref{thm:weak-giant}.\qed \medskip \paragraph{Bibliographical notes.} The analysis of the phase transition for the emergence of the giant component is a classic of modern probability theory, see \cite{erdHos1960evolution} for the initial paper and \cite{janson1993birth} and \cite{aldous1997brownian} for a detailed analysis. The proof of Section \ref{sec:firstmomentgiant} is directly inspired by the original proof of Erd{\H{o}}s and R\'enyi.
The local limit paradigm is quite recent \cite{BS01} and has been a fruitful idea applied in the realm of random graphs, see \cite{AL07,BCstationary} for references. \medskip \noindent{\textbf{Hints for Exercises.}}\ \\ Exercise \ref{exo:bissection}: At $p_{n} = \frac{\log 4}{n}$ the giant component in $G(n,p_{n})$ is of size $ n/2 + o_{ \mathbb{P}}(n)$. A sharper result is even proved in \cite{luczak2001bisecting}.\\ \chapter{Birth of a giant $2$, exploration and fluid limit} We now turn to a more modern and powerful way of proving Theorem \ref{thm:erdosrenyi} based on exploration techniques and stochastic analysis. We define an exploration process of $G(n,p)$ which discovers its connected components one after the other in a Markovian way, revealing its vertices one by one at times $k=0,1,2, \dots , n$, and we study the scaling limits of associated $ \mathbb{R}$-valued Markov processes. This will be the occasion to introduce the differential equation method, or fluid limit method, whose applications are numerous. \section{Exploration process as a Markov chain} To properly define the exploration, we shall split the vertices $\{1,2, \dots ,n \}$ of $G(n,p)$ into three categories: the \textbf{untouched} vertices $ \mathcal{U}_{k}$, the \textbf{explored} vertices $ \mathcal{E}_{k}$ and the vertices in the current \textbf{stack} $ \mathcal{S}_{k}$ whose neighborhoods remain to be explored. The algorithm evolves as follows: \begin{itemize} \item at time $k=0$ we have $ \mathcal{E}_{0} = \varnothing$, the untouched vertices are $ \mathcal{U}_{0} = \{2,3,\dots,n\}$ and the only vertex in the stack is $ \mathcal{S}_{0}=\{1\}$. \item suppose that $k \geq 0$ is such that $ \mathcal{S}_{k} \ne \varnothing$. We then select the vertex $x \in \mathcal{S}_{k}$ with minimal label (recall that the vertex set of $G(n,p)$ is $\{1,2, \dots, n\}$) and reveal all the neighbors $y_{1}, \dots , y_{j}$ of $x$ among $ \mathcal{U}_{k}$ (this could be an empty set!).
We then put $$ \mathcal{U}_{k+1} = \mathcal{U}_{k} \backslash \{ y_{1}, \dots , y_{j}\}, \quad \mathcal{S}_{k+1} = \big( \mathcal{S}_{k} \backslash \{x\} \big) \cup \{ y_{1}, \dots , y_{j}\}, \quad \mathcal{E}_{k+1} = \mathcal{E}_{k} \cup \{x\}.$$ \item When the current stack is empty $ \mathcal{S}_{k} = \varnothing$ then the \textbf{first stage} of the algorithm ends. \end{itemize} It should be clear from the above exploration that at time $\tau_{1}$ when the first stage ends, the set of explored vertices $ \mathcal{E}_{\tau_1}$ is precisely the connected component of $1$ in $ G(n,p)$. If the graph is not yet entirely discovered, we shall continue the exploration in the remaining graph starting from the vertex with minimal label and consider the \textbf{{\L}ukasiewicz path} $$ (\mathbb{S}_{k}: 0 \leq k \leq n)$$ obtained by starting from $0$ and whose increments are equal to the number of neighbors discovered in the untouched part minus $1$, see Figure \ref{fig:lukaER} for an illustration. In terms of the stack process $ \mathcal{S}_k$, this consists in immediately adding the yet untouched vertex with minimal label (as long as there are untouched vertices left) when $ \mathcal{S}_k$ becomes empty (without performing a time step). In particular, the stack becomes empty if and only if the graph has been entirely explored. \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{luka-ER} \caption{The {\L}ukasiewicz exploration of a random graph. The edges revealed during the exploration are in thick lines: they form a spanning tree of each component.
The concatenation (ordered by the minimal label of their component) of the {\L}ukasiewicz paths associated to those trees (explored by order of their labels) is the {\L}ukasiewicz path of the graph.\label{fig:lukaER}} \end{center} \end{figure} Note that the excursions above the running infimum of $ \mathbb{S}$ correspond to the explorations of the different connected components of the graph, and in particular, if we introduce the running infimum process $ \underline{\mathbb{S}}_{k} = \inf_{0 \leq j \leq k} \mathbb{S}_{j}$ then we can recover the size of the current stack $ \# \mathcal{S}_{k}$ as being \begin{eqnarray} \label{eq:sizestack} \# \mathcal{S}_{k} = \mathbb{S}_{k} - \underline{\mathbb{S}}_{k}+1, \quad \mbox{for } 0 \leq k \leq n-1. \end{eqnarray} We shall denote by $ \mathcal{F}_k$ for $k=0,\dots , n$ the filtration generated by the first $k$ steps of this exploration. \begin{proposition}[Markov property of the exploration] \label{prop:markov} For any $0 \leq k \leq n$, conditionally on $ (\mathcal{U}_{k}, \mathcal{E}_{k}, \mathcal{S}_k)$, each edge in $G(n,p)$ between $x$ and $y$ where $x, y \in \mathcal{U}_{k}$ or $x \in \mathcal{U}_{k}$ and $y \in \mathcal{S}_{k}$ is present independently with probability $p$. \end{proposition} \noindent \textbf{Proof.} Fix $k \geq 0$ and notice that given the status of the edges and vertices revealed by time $k$, one could deterministically change the status of all the edges between $ \mathcal{S}_{k}$ and $ \mathcal{U}_{k}$ or in-between vertices of $ \mathcal{U}_{k}$ and this would not have affected the exploration up to time $k$ (because these edges have not been explored by the algorithm). It is easy to see from this that those edges are indeed i.i.d.~present with probability $p$. 
\\ An alternative, and more ``algorithmic'' way to see this is to imagine that all the edges of the graph $ G(n,p)$ carry a question mark ``?'', meaning that their status is currently unknown: present with probability $p$ and absent with probability $1-p$. When performing the exploration of the successive clusters, we reveal the status of certain edges (the question marks then disappear). The key point is to notice that since we are not allowed to use the randomness of unrevealed edges, at time $k$, conditionally on the past exploration, all the edges in question in the proposition still carry their ``?'' and so they are i.i.d.~present with probability $p$ and absent otherwise.\qed \bigskip We deduce from the above that the process $ \mathbb{S}$ evolves in a Markovian fashion, if one also records its running infimum process: \begin{proposition} \label{prop:markovER} For $0 \leq k \leq n-1$, conditionally on $ \mathbb{S}_{0}, \dots , \mathbb{S}_{k}$ the increment $ \Delta \mathbb{S}_{k}:=\mathbb{S}_{k+1}- \mathbb{S}_{k}$ is distributed as $$ \Delta \mathbb{S}_{k} \overset{(d)}{=} \mathrm{Bin}(\#\mathcal{U}_{k}, p) -1 = \mathrm{Bin}(n-k- ( \mathbb{S}_{k} - \underline{\mathbb{S}}_{k} +1 ), p) -1.$$ \end{proposition} \noindent \textbf{Proof.} This follows from the previous proposition, since the size of the stack $ \#\mathcal{S}_k = \mathbb{S}_k - \underline{ \mathbb{S}}_k+1$ is given by \eqref{eq:sizestack} and since the number of untouched vertices is $n-k$ minus the size of the current stack. \qed \medskip \section{Differential equation method or fluid limit} \label{sec:fluidER} Fix $c >0$.
In the rest of this section we take $$p = \frac{c}{n}.$$ Taking expectations in Proposition \ref{prop:markovER}, according to a general principle that goes under the name of ``\textbf{fluid limit}'' or ``\textbf{differential equation method}'', we anticipate that the process $ \mathbb{S}$ behaves in the large scale limit as a deterministic function $f_{c}$ which satisfies the differential equation \begin{eqnarray} \label{eq:diffinf} f'_{c}(t) = c\left(1-t - \big(f_c(t) - \underline{f_{c}}(t)\big)\right)-1, \end{eqnarray} and starts at $f_{c}(0)=0$, where we used the notation $\underline{g}(s) = \inf \{ g(u) : 0 \leq u \leq s\}$ for a continuous function $g$. This is not a standard differential equation due to the seemingly awkward dependence on the function $ \underline{f_{c}}$, but it is easy to convince oneself that the equation indeed has a unique solution and that this solution is either decreasing (when $c \leq 1$) or unimodal (when $c>1$). More precisely:\\ \noindent \textbf{For $\mathbf{c >1}$} the function $f_{c}$ is equivalently defined as the solution to the differential equation $$ f_{c}'(t) = \left\{\begin{array}{lcl} c(1-t-f_{c}(t))-1 & \mbox{for } & 0 \leq t < \inf\{ s >0 : f_c(s)=0\} \\ c(1-t)-1 & \mbox{for } & \inf\{ s >0 : f_c(s)=0\} \leq t \leq 1. \end{array} \right.$$ In particular $f_{c}(t) = 1-\mathrm{e}^{-ct}-t$ until it comes back to $0$ at time $1- \alpha(c)$, where we recall from \eqref{def:alphac} that $\alpha\equiv \alpha(c)$ is the solution in $(0,1)$ to $ \alpha = \mathrm{e}^{-c(1-\alpha)}.$ For $t \geq (1- \alpha)$ the function follows the parabola $$ f_c(t) = \frac{1}{2} (c (1 + \alpha - t)-2) (t-1+\alpha).$$ Although $f_c$ is $ \mathcal{C}^1$ over $[0,1]$, it is not $ \mathcal{C}^2$ at the point $1- \alpha(c)$ since its second derivative jumps from $-\alpha c^2$ to $-c$, see Figure \ref{fig:fc}. \noindent\textbf{For $\mathbf{c\leq1}$}, since $f_c$ is decreasing we have $ \underline{f_{c}}(t) = f_c(t)$.
It follows that $\alpha(c)=1$ and that $f'_c(t)=c(1-t)-1$ for all $t$, so that $$f_{c}(t) = \frac{t}{2}(2c-2-ct), \quad \forall t \in [0,1].$$ \begin{figure}[!h] \begin{center} \includegraphics[width=12cm]{lukaERc=2} \caption{Plot of the function $f_2$: it follows the orange curve from $0$ to $1-\alpha(2) \approx 0.797$ and then the blue curve from $1-\alpha(2)$ to $1$. In particular, the function is not smooth at $t=1- \alpha(2)$.\label{fig:fc}} \end{center} \end{figure} The above heuristic is indeed correct and we have: \begin{theorem}[Fluid limit for the exploration]\noindent\label{thm:fluid}Fix $c>0$ and let $p= \frac{c}{n}$. Consider the {\L}ukasiewicz exploration $ (\mathbb{S}_{k} : 0 \leq k \leq n)$ of the random graph $G(n,p)$. Then we have the following convergence in probability $$ \big(n^{-1} \cdot \mathbb{S}_{\lfloor nt \rfloor} : t \in [0,1] \big) \xrightarrow[n\to\infty]{} \big(f_{c}(t) : t \in [0,1] \big),$$ for the uniform norm. \end{theorem} \noindent \textbf{Proof.} Fix $c >0$. The idea of the fluid limit theorem is to argue that $ \mathbb{S}_k$ evolves as a stochastic Euler scheme based on the equation \eqref{eq:diffinf}. To be more precise, we shall compare $ \mathbb{S}_k $ with the discrete function $ \mathcal{L}_{k} = n f_c( \frac{k}{n})$ for $k=0,1,2, \dots , n$. We shall write $ \underline{ \mathbb{S}}_k:=\inf\{ \mathbb{S}_{j} : 0 \leq j \leq k \}$ and $ \underline{ \mathcal{L}}_{k} := \inf\{ \mathcal{L}_{j} : 0 \leq j \leq k \}$ for the running infimum processes of $ \mathbb{S}$ and $ \mathcal{L}$ respectively.
First, from \eqref{eq:diffinf} and the fact that $f'_c$ is Lipschitz, it follows that we have the following Taylor approximation: \begin{eqnarray} \label{diffF} \mathcal{L}_{k+1}- \mathcal{L}_{k} = \int_{k}^{k+1} \mathrm{d}s \, f'_c\left( \frac{s}{n}\right) \underset{ \eqref{eq:diffinf}}{=} c\left(1- \frac{k}{n} - \frac{ \mathcal{L}_{k}}{n} + \frac{\underline{\mathcal{L}}_{k}}{n}\right)-1 + \Theta(1/n), \end{eqnarray} where $\Theta(1/n)$ is a function bounded in absolute value by $ \mathrm{cst}/n$ independently of $0 \leq k \leq n$. We now analyse the process $$ X_k = \mathcal{L}_{k}-\mathbb{S}_k.$$ Writing $ (\mathcal{F}_k : 0 \leq k \leq n)$ for the filtration generated by the exploration, we first compute the expected conditional increment of the process $X$: \begin{eqnarray*} \mathbb{E}[X_{k+1}-X_k \mid \mathcal{F}_k] &\underset{ \eqref{diffF} \ \& \ \mathrm{Prop.} \ref{prop:markovER}}{=}& c\left(1- \frac{k}{n} - \frac{ \mathcal{L}_{k}}{n} + \frac{\underline{ \mathcal{L}}_{k}}{n}\right)-1 + \Theta(1/n)\\ & & - \left( \mathbb{E}\left[ \mathrm{Bin}\left(n-k- \mathbb{S}_k +\underline{ \mathbb{S}}_k -1, \frac{c}{n}\right) \mid \mathcal{F}_{k}\right]-1\right)\\ &=& c\left(\frac{\underline{ \mathcal{L}}_k- \underline{\mathbb{S}}_k}{n}-\frac{ \mathcal{L}_k- \mathbb{S}_k}{n}\right) + \Theta(1/n). \end{eqnarray*} Remark that $| \underline{ \mathcal{L}}_{k}-\underline{\mathbb{S}}_k| \leq \sup_{0 \leq i \leq k}| { \mathcal{L}}_{i}-{\mathbb{S}}_i|$ so that taking absolute values in the last display we deduce that for all $ 0 \leq k \leq n-1$ \begin{eqnarray*} \Big| \mathbb{E}[X_{k+1}-X_k \mid \mathcal{F}_k] \Big| & \leq & \frac{C}{n}\left( 1 + \sup_{0 \leq i \leq k}|X_i|\right), \end{eqnarray*} for some constant $C>0$. 
Furthermore, since the increments of $ \mathbb{S}$ are always stochastically dominated by $ \mathrm{Bin}(n, \frac{c}{n})$ it is plain to see that, up to increasing $C$, we have $$ \forall 0 \leq k \leq n-1, \quad \mathbb{E}[(X_{k+1}-X_k)^2] \leq C.$$ We are thus in a position to apply the following ``stochastic'' version of the Gr\"onwall lemma to deduce that $ n^{-1}\sup_{0 \leq k \leq n}|X_k| \to 0$ in probability. This entails the theorem. \subsection{Stochastic Gr\"onwall lemma} \begin{lemma}[Stochastic Gr\"onwall lemma] Let $(X_k : 0 \leq k \leq n)$ be an adapted process with $X_0=0$. We define its supremum absolute value process $X_{k}^{*} = \sup \{ |X_{j}| : 0 \leq j \leq k \}$ for $0 \leq k \leq n$ and suppose that there exists $C >0$ satisfying for all $0 \leq k \leq n-1$ \begin{itemize} \item $ |\mathbb{E}[X_{k+1}-X_k\mid \mathcal{F}_k]| \leq \frac{C}{n}\left(1+ X_k^{*}\right)$ almost surely, \item $ \mathbb{E}[(X_{k+1}-X_k)^2] \leq C$. \end{itemize} Then we have $ n^{-1} \cdot X^{*}_{n} \to 0$ in probability as $n \to \infty$. \end{lemma} \noindent \textbf{Proof.} We decompose $X_k$ into its predictable and its martingale part by putting for $0 \leq k \leq n-1$ $$ X_{k+1}-X_k = \underbrace{\mathbb{E}[X_{k+1}-X_k \mid \mathcal{F}_k]}_{=: \ D_k} + \underbrace{( (X_{k+1}-X_k) - \mathbb{E}[X_{k+1}-X_k \mid \mathcal{F}_k])}_{=: \ M_{k+1}-M_k},$$ so that if $M_0=0$ then $(M_k : 0 \leq k \leq n)$ is a martingale and \begin{eqnarray} \label{eq:decompositionDoob}X_k = \sum_{i=0}^{k-1} D_i + M_k. \end{eqnarray} Let us first take care of the martingale part: since the conditional variance is bounded above by the conditional second moment (by the conditional Jensen's inequality), we have $$ \mathbb{E}[(M_{k+1}-M_k)^2] = \mathbb{E}[\mathrm{Var}(X_{k+1}-X_k \mid \mathcal{F}_k)] \leq \mathbb{E}[\mathbb{E}[ (X_{k+1}-X_k)^2 \mid \mathcal{F}_k]] = \mathbb{E}[(X_{k+1}-X_k)^2].$$ Since the increments of a martingale are orthogonal in $L^2$, by the above calculation we deduce that $ \mathbb{E}[M_n^2] \leq C n$.
By Doob's maximal inequality we have $$ \mathbb{P}( \sup_{0 \leq k \leq n} |M_k| \geq A) \leq 4 \frac{\mathbb{E}[|M_n|^2]}{A^2}$$ and it follows that $$ \frac{\sup_{0 \leq k \leq n} |M_k|}{n} \xrightarrow[n\to\infty]{( \mathbb{P})} 0.$$ The rest of the argument is purely deterministic. By the hypothesis in the lemma we have $|D_{i}| \leq \frac{C}{n}( X_{i}^{*}+1)$ and so \eqref{eq:decompositionDoob} combined with the fact that the martingale part is negligible in front of $n$ yields that for any $ \varepsilon >0$ on the event $\{ \sup_{0 \leq k \leq n}|M_{k}| \leq \varepsilon n\}$, whose probability tends to $1$ as $n \to \infty$, we have for all $ t \in [0,n]$ $$ X_{[t]}^{*} \underset{ \eqref{eq:decompositionDoob}}{\leq} \sum_{i=0}^{[t]-1}\frac{C}{n} X^{*}_{i} + \left(\varepsilon + \frac{C}{n}\right) n \leq 2 \varepsilon n + \frac{C}{n} \int_0^t \mathrm{d}s \, X^*_{[s]}.$$ On this event, by the usual (deterministic) Gr\"onwall\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{gronwall}}Thomas Hakon Gr\"onwall (1877--1932), Swedish} lemma we have $ X^*_{[t]} \leq 2 \varepsilon n \cdot \mathrm{exp}( \frac{C}{n} t),$ and in particular $ n^{-1} \cdot X^*_n \to 0$ as $n \to \infty$ in probability. \qed \bigskip The above strategy, called the ``differential equation method'' by Wormald \cite{wormald1995differential}, has found many applications in the realm of random graphs. Rather than giving an abstract convergence theorem, we propose to apply the strategy in the following exercise in order to estimate the size of the (random) greedy independent set on an Erd{\H{o}}s--R\'enyi random graph: \begin{exo}[Greedy independent set on $G(n,p)$] \label{exo:greedyER}Consider the graph $G(n,p)$ over the vertices $\{1,2, \dots , n\}$ with parameter $p = \frac{c}{n}$ for some constant $c>0$. We will inductively build a random subset $ \mathbf{I}$ of the vertices of $G(n,p)$ so that no two vertices of $ \mathbf{I}$ are neighbors. To do this we put initially $ \mathcal{U}_{0}= \{1,2,3, \dots
, n\}$ (untouched) and $ \mathbf{I}_{0} = \varnothing$. Iteratively, for $i \geq 0$, as long as $ \mathcal{U}_{i} \ne \varnothing$, we select the vertex $x_{i}$ of smallest label in $ \mathcal{U}_{i}$ and denote its neighbors in $ G(n, p)$ by $\{y_{1}, \dots , y_{j}\}$; we then put $$ \mathbf{I}_{i+1}= \mathbf{I}_{i} \cup \{ x_{i}\} \quad \mbox{ and }\quad \mathcal{U}_{i+1}= \mathcal{U}_{i} \backslash\{x_{i}, y_{1}, \dots , y_{j}\}.$$ We denote by $ ( \mathcal{F}_{k} : k \geq 0)$ the canonical filtration generated by this process and consider the stopping time $$\tau = \inf\{ k \geq 0 : \mathcal{U}_{k} = \varnothing\}.$$ \begin{enumerate} \item Show that $ \mathbf{I}_{\tau}$ is an independent set, that is, no two vertices of $ \mathbf{I}_{\tau}$ are neighbors. \item Show that conditionally on $ \mathcal{F}_{k}$, the graph induced by $G(n,p)$ on $ \mathcal{U}_{k}$ is an Erd{\H{o}}s--R\'enyi random graph with parameter $p$. That is, all edges between vertices of $\mathcal{U}_{k}$ are independent and present with probability $p$. \item Deduce that on the event $\{\tau > k\}$ we have $$ \mathbb{E}[ \#\mathcal{U}_{k+1}- \# \mathcal{U}_{k} \mid \mathcal{F}_{k}] = - 1 - ( \# \mathcal{U}_{k}-1)p.$$ \item Recall that $p= p_{n} = \frac{c}{n}$. Use the differential equation method to prove that $$ (n^{-1} \# \mathcal{U}_{\lfloor nt \rfloor })_{t \in [0,1]} \to \left(\frac{(1+c- \mathrm{e}^{{ct}}) \mathrm{e}^{{-ct}}}{c} \vee 0 : 0 \leq t \leq 1\right).$$ Hint: $f(t) = \frac{(1+c- \mathrm{e}^{{ct}}) \mathrm{e}^{{-ct}}}{c}$ satisfies $f'(t) =-1-c f(t)$ and $f(0)=1$. \item Deduce and explain why $$ n^{-1}\tau \xrightarrow[n\to\infty]{( \mathbb{P})} \frac{\log(1+c)}{c}.$$ \end{enumerate} \end{exo} \section{Corollaries and refined estimates} \label{sec:fluidgiant1} Let us deduce some geometric consequences of the convergence of the rescaled process $ \mathbb{S}$ towards $f_c$ (Theorem \ref{thm:fluid}).
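The greedy algorithm of Exercise \ref{exo:greedyER} above is also easy to test numerically. The following sketch (Python with \texttt{numpy} assumed; the function name is ours, not from the text) reveals edges lazily, which is legitimate by question (2) of the exercise:

```python
import numpy as np

def greedy_tau(n, c, rng):
    # Greedy independent set of Exercise exo:greedyER on G(n, c/n).
    # Edges are revealed lazily: by question (2), the untouched graph is
    # again Erdos-Renyi, so the neighbours of the selected vertex among
    # the remaining untouched vertices are Bin(#U - 1, c/n).
    p = c / n
    untouched, steps = n, 0
    while untouched > 0:
        untouched -= 1                            # remove the selected vertex x_i
        untouched -= rng.binomial(untouched, p)   # ... and its untouched neighbours
        steps += 1                                # x_i joins the independent set
    return steps                                  # = tau = #I_tau

rng = np.random.default_rng(0)
n, c = 200_000, 2.0
tau = greedy_tau(n, c, rng)
print(tau / n)   # should be close to log(1+c)/c ~ 0.5493 for c = 2
```

For $c=2$ the printed ratio should be close to $\log(1+c)/c \approx 0.549$, in agreement with question (5).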
Recall from the definition of the exploration of the connected components of $G(n,p)$ that: \begin{enumerate} \item The number of components in $G(n,p)$ is exactly $ -\underline{\mathbb{S}}_{n}$, \item The sizes of the components in $G(n,p)$ correspond to the lengths of the excursions of $ \mathbb{S}$ above its running infimum $ \underline{\mathbb{S}}$. \end{enumerate} As a direct corollary of Theorem \ref{thm:fluid} and the first item above we deduce: \begin{corollary} The number of components in $G(n,p)$ satisfies \begin{eqnarray*} \frac{\# \mathrm{ConnComp}(G(n, {\textstyle \frac{c}{n}}))}{n} &\xrightarrow[n\to\infty]{ (\mathbb{P})}& f_c(1) = \frac{\alpha(c)( 2- c \alpha(c)) }{2}, \quad \mbox{ for }c>0. \end{eqnarray*} \end{corollary} If $f : [0,1] \to \mathbb{R}$ is a continuous function with $f(0)=0$ and running infimum process $\underline{f}(t) = \inf_{0 \leq s \leq t} f(s)$, we denote by $ \mathrm{Exc}(f)$ the (at most countably many) excursion intervals of $ f - \underline{f}$ away from $0$. We write $\|\mathrm{Exc}(f)\| \in \ell^1$ for the lengths of those excursions ranked in decreasing order. We deduce the weak-giant property (Theorem \ref{thm:weak-giant}) from Exercise \ref{exo:continuityt} and the continuous mapping theorem. In particular, as in Section \ref{sec:firstmomentgiant}, we have established the existence of the unique giant component, but we have not yet given the logarithmic upper bounds for the second largest component stated in Theorem \ref{thm:erdosrenyi}. To prove these, we will use large deviation estimates in the next section. \begin{exo} \label{exo:continuityt} Consider the mapping $$ \mathcal{E} : f \in \big(\mathcal{C}([0,1], \mathbb{R}), \|\cdot \|_\infty\big) \mapsto \| \mathrm{ Exc}(f)\| \in (\ell^1, \|\cdot \|_1).$$ \begin{enumerate} \item Show that $ \mathcal{E}$ is not continuous in general. \item However, show that $ \mathcal{E}$ is continuous at points $f$ where $f$ has no two-sided local minima.
In particular, $ \mathcal{E}$ is continuous at $f= f_{c}$ for $ c \geq 0$. \end{enumerate} \end{exo} In the rest of this section, we prove most of the refined estimates on the cluster size stated in Theorem \ref{thm:erdosrenyi}, especially the logarithmic upper bound in the subcritical case and for the second largest cluster in the supercritical case. We start with a stochastic domination of the typical cluster size coming from the exploration process. \medskip By Proposition \ref{prop:markovER}, since we always have $ \# \mathcal{U}_{k} \leq n$, we deduce that the increments of $ \mathbb{S}$ are stochastically dominated by independent $ \mathrm{Bin}(n,{\textstyle \frac{c}{n}})$ variables minus $1$. We denote by $(S^{(n)}_{t} : t \geq 0)$ a random walk starting from $0$ with i.i.d.~increments of law $ \mathrm{Bin}(n,{\textstyle \frac{c}{n}})-1$. In particular, the size of the cluster of $1$ in $G(n, {\textstyle \frac{c}{n}})$ is stochastically dominated by $T_{-1}( S^{(n)})$, the hitting time of $-1$ by that random walk. 
This can be evaluated via Kemperman's formula (Proposition \ref{prop:kemperman}) and an explicit computation: \begin{eqnarray} \mathbb{P}( T_{-1}( S^{(n)}) = k) &\underset{ \mathrm{Kemperman}}{=}& \frac{1}{k} \mathbb{P}( \mathrm{Bin}(k\cdot n,{\textstyle \frac{c}{n}}) = k-1) \nonumber \\ &=& \frac{1}{k}{nk \choose k-1} \left( \frac{c}{n}\right)^{k-1} \left( 1-\frac{c}{n}\right)^{nk- (k-1)} \nonumber \\ & \underset{ \mathrm{Stirling}}{\leq}& \mathrm{Cst} \cdot \frac{n}{k} \frac{ \sqrt{nk}(nk)^{nk}}{ \sqrt{k} k^{k-1} \sqrt{nk} ((n-1)k)^{(n-1)k+1}}\left(\frac{c}{n} \left(1- \frac{c}{n}\right)^{n-1}\right)^k \nonumber \\ &\leq &\mathrm{Cst} \cdot \frac{1}{k^{3/2}} \cdot \frac{(nk)^{nk}}{k^k((n-1)k)^{(n-1)k}} \left(\frac{c}{n} \left(1- \frac{c}{n}\right)^{n-1}\right)^k\nonumber \\ &\leq &\mathrm{Cst} \cdot \frac{1}{k^{3/2}} \cdot \left(\frac{n^{n}}{(n-1)^{(n-1)}} \left(\frac{c}{n} \left(1- \frac{c}{n}\right)^{n-1}\right)\right)^k\nonumber \\ & \leq& \mathrm{Cst}\cdot \frac{1}{k^{3/2}} \cdot \left( c \left(\frac{n-c}{n-1}\right)^{n-1}\right)^k , \label{eq:tailexact}\end{eqnarray} for some constant $ \mathrm{Cst}>0$ that may vary from line to line but which is independent of $k\geq 1$ and $n\geq 2$. When $c<1$ the term in the parenthesis tends to $ c \mathrm{e}^{1-c} < 1$ as $n \to \infty$, whereas for $c=1$ this term is equal to $1$. \subsection{Subcritical case} \label{sec:logarithmicsubcritical} We can now prove the logarithmic upper bound on the size of the clusters in the subcritical regime in Theorem \ref{thm:erdosrenyi}: Suppose that $c<1$ and recall that $ \mathcal{C}^i \equiv \mathcal{C}^i( G(n, \frac{c}{n}))$ is the cluster of the vertex $i \in \{1,2,3, \dots ,n\}$ inside $G(n,{\textstyle \frac{c}{n}})$. 
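The first line of \eqref{eq:tailexact} can be sanity-checked numerically: since $T_{-1}(S^{(n)})$ is almost surely finite when $c \leq 1$, the probabilities given by Kemperman's formula must sum to $1$. A sketch using only Python's standard library (the helper names are ours), computing in logarithms to avoid overflow:

```python
from math import exp, lgamma, log

def log_binom(a, b):
    # log of the binomial coefficient (a choose b), via log-Gamma
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def p_hit(k, n, c):
    # Kemperman: P(T_{-1} = k) = (1/k) P(Bin(kn, c/n) = k - 1)
    p = c / n
    return exp(-log(k) + log_binom(k * n, k - 1)
               + (k - 1) * log(p) + (k * n - (k - 1)) * log(1 - p))

n, c = 100, 0.5
total = sum(p_hit(k, n, c) for k in range(1, 2001))
print(total)   # ~ 1: for c < 1 the hitting time T_{-1} is a.s. finite
```

The truncation at $k=2000$ is harmless here since the tail decays geometrically at rate $c\mathrm{e}^{1-c}<1$.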
Using the above bound, we deduce that for any $A >0$, we have in $G(n,{\textstyle \frac{c}{n}})$: $$ \mathbb{P}( \# \mathcal{C}^1 \geq A) \leq \mathbb{P}( T_{-1}( S^{(n)}) \geq A) \underset{ \eqref{eq:tailexact}}{\leq} \mathrm{Cst} \cdot \eta^A,$$ where $\eta<1$ is independent of $A$ and of $n$. Taking $A = \frac{2}{\log(1/\eta)} \log n$, we deduce using the union bound that \begin{eqnarray*} \mathbb{P}\left( \mathrm{C}_{1}^{\max} \geq \frac{2}{\log(1/\eta)} \log n\right) &\underset{ \mathrm{union \ bd}}{\leq}& n \mathbb{P}\left ( \# \mathcal{C}^1 \geq \frac{2}{\log(1/\eta)} \log n\right) \\ &\leq& n \mathbb{P}\left( T_{-1}( S^{(n)}) \geq \frac{2}{\log(1/\eta)} \log n\right) \leq \mathrm{Cst}\cdot n \exp(- 2 \log n) \to 0. \end{eqnarray*} \subsection{Critical case} The same strategy can be used in the critical case $c=1$ (together with a little size-biasing trick). More precisely, imagine that we pick (independently of $ G(n,{\textstyle \frac{1}{n}})$) a vertex $U_{n}$ uniformly in $\{1,2, \dots , n\}$. The size of the cluster of $U_{n}$ has the same law as that of the vertex $1$ and so is stochastically dominated by $T_{-1}(S^{(n)})$. We can thus write \begin{eqnarray*} \mathbb{P}( T_{-1}(S^{(n)}) \geq A) &\geq& \mathbb{P}( \# \mathcal{C}^{ U_{n}} \geq A)\\ & \geq &\mathbb{P}( \#\mathcal{C}_{\max} \geq A \mbox{ and }U_{n} \in\mathcal{C}_{\max}) \geq \frac{A}{n} \mathbb{P}(\mathrm{C}^{\max}_{1} \geq A). \end{eqnarray*} Now taking $A = \lambda n^{2/3}$ and using \eqref{eq:tailexact} where the exponential factor disappears when $c=1$, we find $$ \mathbb{P}( \mathrm{C}^{\max}_{1} \geq \lambda n^{2/3}) = O( \lambda^{-3/2}),$$ which already gives the right order of magnitude of the largest cluster in $ G( n, {\textstyle \frac{1}{n}})$. Getting the full distributional convergence of $(n^{-2/3} \mathrm{C}^{\max}_{i})_{i \geq 1}$ requires a much finer understanding of the exploration process. See the next chapter for such a result (Proposition \ref{prop:aldous}) in a slight variant of the Erd{\H{o}}s--R\'enyi random graph.
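The orders of magnitude obtained in these two subsections are already visible on simulations of moderate size. The sketch below (Python with \texttt{numpy} assumed; names are ours) explores the clusters one after the other, as in the previous chapter, and returns the largest one:

```python
import numpy as np

def largest_cluster(n, p, rng):
    # Explore the clusters of G(n, p) one after the other: a new cluster
    # starts each time the active set dies out (a new minimal record of S).
    best, unseen = 0, n
    while unseen > 0:
        unseen -= 1
        active, size = 1, 1
        while active > 0:
            k = rng.binomial(unseen, p)
            unseen -= k
            size += k
            active += k - 1
        best = max(best, size)
    return best

rng = np.random.default_rng(0)
n = 10_000
sub = [largest_cluster(n, 0.5 / n, rng) for _ in range(10)]   # c = 1/2
crit = [largest_cluster(n, 1.0 / n, rng) for _ in range(10)]  # c = 1
print(np.median(sub), np.median(crit), n ** (2 / 3))
```

At $c=1/2$ the largest cluster is of order $\log n$, while at $c=1$ it is of order $n^{2/3} \approx 464$ for $n=10^4$.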
\subsection{Supercritical case} For the supercritical case, we shall establish a common phenomenon in statistical physics: in the supercritical regime, the complement of the giant behaves as a \textbf{subcritical} system. See Exercise \ref{exo:superfini} for an instance of this phenomenon in BGW trees. \medskip Let $c >1$. By Theorem \ref{thm:fluid} we know that after the giant component of $G(n, {\textstyle \frac{c}{n}})$ has been explored we are left with a graph over $\approx \alpha(c)n$ vertices with edge density $ \frac{c}{n}$. This graph is close to being an Erd{\H{o}}s--R\'enyi random graph: \begin{lemma} Conditionally on $ \mathrm{C}^{\max}_{1}$ and on $ \mathrm{C}^{\max}_{1} > \mathrm{C}^{\max}_{2}$, the remaining graph\footnote{with vertices relabeled in increasing order} $ G(n,p) \backslash \mathcal{C}_{\max}$ has law $G( n - \mathrm{C}^{\max}_{1},p)$ conditioned on having clusters of size strictly less than $\mathrm{C}^{\max}_{1}$. \label{lem:condmax} \end{lemma} \noindent \textbf{Proof.} Fix a connected graph $ \mathfrak{g}_{\max}$ on a subset of the vertices $\{1,2, \dots , n\}$ and a given graph $ \mathfrak{g}_{ \mathrm{rem}}$ on the remaining vertices so that no component of $ \mathfrak{g}_{ \mathrm{rem}}$ has size larger than or equal to $ \# \mathrm{V}(\mathfrak{g}_{\max})$.
Then we have (with a slight abuse of notation) \begin{eqnarray*} &&\mathbb{P}(\mathcal{C}^{\max} = \mathfrak{g}_{\max} \mbox{ and } G(n,p) \backslash \mathcal{C}^{\max} = \mathfrak{g}_{ \mathrm{rem}})\\ &=& {p}^{ \# \mathrm{E}(\mathfrak{g}_{\max})} p^{\# \mathrm{E}(\mathfrak{g}_{ \mathrm{rem}})} (1-p)^{ {n \choose 2} - \# \mathrm{E}(\mathfrak{g}_{\max}) - \# \mathrm{E}(\mathfrak{g}_{ \mathrm{rem}})} \\ &=& \mathbb{P}\Big(G\big(n - \# \mathrm{V}( \mathfrak{g}_{\max}),p\big) = \mathfrak{g}_{ \mathrm{rem}}\Big) \cdot \mathrm{Cst}( \mathfrak{g}_{\max}), \end{eqnarray*} where the constant $\mathrm{Cst}( \mathfrak{g}_{\max})$ only depends on $ \mathfrak{g}_{\max}$ and not on $ \mathfrak{g}_{ \mathrm{rem}}$ as long as its components have size strictly less than $\# \mathrm{V}( \mathfrak{g}_{\max})$. This proves the lemma. \endproof \medskip When $c >1$, notice that the complement of the giant component is \textbf{subcritical} since $$\forall c >0,\ c \neq 1, \qquad c\, \alpha(c) <1,$$ see Figure \ref{fig:calphac}. \begin{figure}[!h] \begin{center} \includegraphics[width=8cm]{calphac} \caption{ \label{fig:calphac} Plot of $c \mapsto c\, \alpha(c)$ displaying the subcriticality of the remaining graph once the giant has been removed.} \end{center} \end{figure} We can thus prove point (ii) in Theorem \ref{thm:erdosrenyi}: Fix $c>1$ and $ \alpha (c)/2 > \varepsilon>0$. By Theorem \ref{thm:weak-giant}, the event $$ \{| \mathrm{C}^{\max}_{1} - (1-\alpha (c)) n| \leq \varepsilon n\} \cap \{ \mathrm{C}^{\max}_{2} < \varepsilon n \}$$ has a probability tending to $1$ and conditionally on it, the complement of the giant $G(n, \frac{c}{n}) \backslash \mathcal{C}^{\max}$ is an Erd{\H{o}}s--R\'enyi random graph with $N \leq ( \alpha(c) + \varepsilon)n$ vertices and edge density $ \frac{c}{n}$, conditioned on having no cluster of size larger than $ \mathrm{C}^{\max}_{1}$.
If $ \varepsilon>0$ is small enough so that $c(\alpha(c) + \varepsilon)<1$, we know from Section \ref{sec:logarithmicsubcritical} that $G(N, \frac{c}{n})$ has no cluster of size larger than $A \log n$ for some $A >0$, so the previous conditioning does not affect its law asymptotically and we deduce (ii) in Theorem \ref{thm:erdosrenyi}. \begin{exo}[Supercritical BGW conditioned to be finite are subcritical BGW] \label{exo:superfini} Let $\mathcal{T}$ be a BGW tree with a \textit{supercritical} offspring distribution $\mu$ with generating function $g$. We denote by $\tilde{\mathcal{T}}$ the tree $ \mathcal{T}$ conditioned on the event $\{ \#\mathcal{T} < \infty\}$ whose probability is equal to the unique solution $\alpha \in (0,1)$ to $g(\alpha)= \alpha$, see Figure \ref{fig:GWclassic}. Show that $ \tilde{ \mathcal{T}}$ is a BGW with offspring distribution $\tilde{\mu}$ whose generating function $ \tilde{g}$ is given by $$ \tilde{g}(z) = \frac{1}{\alpha} g (\alpha \cdot z), \quad \mbox{ for } z \in [0,1].$$ \end{exo} \paragraph{Bibliographical notes.} Although the exploration process of $G(n,p)$ is well known (see e.g.~\cite[(11.12)]{alon2016probabilistic} for Proposition \ref{prop:markovER}), the existence of the giant component using fluid limit for the exploration process seems to be new, although it is inspired by the much more precise analysis made in Aldous \cite{aldous1997brownian} and in \cite{nachmias2010critical}. More generally, the differential equation method has been used widely in random graph theory, see \cite{wormald1995differential}. Many formulations of the fluid limit paradigm, with increasing level of generality, can be found in the literature, see e.g.~ \cite{wormald1995differential,warnke2019wormald,darling2002fluid,darling2008differential}. 
Studying the emergence and the structure of the giant component in $G(n,p)$ is still an active subject in probability theory, see e.g.~\cite{ABBGM13} for very recent results relating the critical Erd{\H{o}}s--R\'enyi graph to the minimal spanning tree or \cite{schramm2005compositions} for a connection with the mixing time of the composition of transpositions on the symmetric group. We refer to \cite{van2009random} for extensions and more references. \noindent{\textbf{Hints for Exercises.}}\ \\ Exercise \ref{exo:greedyER}: The result was first proved in \cite{pittel1982probable}.\\ Exercise \ref{exo:continuityt}: The mapping $ \mathcal{E}$ is not continuous at the function $f : x \mapsto |x||x-1/2||x-1|$ as limit of the functions $ f_{ \varepsilon} : x \mapsto |x|\big(|x-1/2| + \varepsilon\big) |x-1|$.\\ Exercise \ref{exo:superfini}: Use \eqref{exo:GWprod} and massage it. See Corollary 2.7 in \cite{abraham2015introduction} for a proof. \chapter{Birth of giant $3$, Poissonized} \label{chap:poissonER} \hfill Pas frais mon poisson ? (Ordralfab\'etix)\bigskip \begin{figure}[!h] \begin{center} \includegraphics[height=8cm]{poissonalpha} \caption{Ordralfab\'etix (\copyright\ Goscinny et Uderzo).} \end{center} \end{figure} We introduce a variant of the Erd{\H{o}}s--R\'enyi random graph where infinitely many ``stack'' vertices are added on the side. A very simple Markov property of the model entails that the {\L}ukasiewicz exploration is made of simple increments related to the empirical distribution function of i.i.d.~uniforms. Using the standard Glivenko--Cantelli theorem, this enables us to give very short proofs of classical results such as the phase transition for the giant component (Theorem \ref{thm:erdosrenyi}) or the connectedness of the standard Erd{\H{o}}s--R\'enyi model (Theorem \ref{thm:connectedness}). \section{The stacked model and its exploration} We shall consider a variant of the Erd{\H{o}}s--R\'enyi model where we add infinitely many additional vertices ``in a stack on the side''.
Formally, for $n \geq 1$ and $p$ fixed we consider the graph $ \mathrm{G}^{ \mathrm{stack}}(n,p)$ on the vertex set $\{1,2, \dots , n\} \cup \{1^{*},2^{*}, 3^{*}, \dots \}$: the vertices of $\{1,2, \dots , n \}$ form the \textbf{core} of the graph, whereas the vertices $\{1^{*}, 2^{*}, \dots \}$ form the \textbf{stack}. Then, each pair of vertices involving at least one core vertex is connected by an edge with probability $p$ independently. \emph{There are no edges between vertices of the stack}. See Figure \ref{fig:ERP}. \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{ERPfixed} \caption{A stacked Erd{\H{o}}s--R\'enyi random graph and one step of exploration. The stack is made of the white vertices on the left part while the core is represented by the gray part. After one step of exploration, the explored vertex (in red) is deleted as well as the edges linking the discovered vertices between each other or to the former stack. Conditionally on the number $n'$ of vertices remaining in the core after this exploration, the resulting graph (after relabeling of its vertices) is distributed as $ G^{ \mathrm{stack}}(n',p)$. \label{fig:ERP}} \end{center} \end{figure} \paragraph{Markov property.} A step of \textbf{exploration} in $ \mathrm{G}^{ \mathrm{stack}}(n,p)$ is the following: Fix a vertex $\rho$ of the stack (independently of the core) and reveal its neighbors $ y_1, \dots , y_K$ with $K\geq 0$ inside the core. Then, view those vertices $y_1, \dots , y_K$ as new vertices of the stack; in particular, erase all possible edges between $y_1,\dots,y_K$ themselves and between $y_1,\dots,y_K$ and other vertices of the stack. Denote by $ \mathrm{Explo}( \mathrm{G}^{\mathrm{stack}}(n,p))$ the resulting random graph whose vertices are relabeled by $\{1,2, \dots , n-K\}$ and $\{1^{*}, 2^{*}, \dots \}$ accordingly.
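Since each exploration step removes from the core a $\mathrm{Bin}(\mathrm{core\ size},p)$ number of vertices, the expected core size is multiplied by $(1-p)$ at each step, so the expected number of core vertices discovered after $k$ steps is $n(1-(1-p)^{k})$. A minimal Monte Carlo check of this (Python with \texttt{numpy} assumed; names are ours):

```python
import numpy as np

def discovered_after(k_steps, n, p, rng):
    # Iterate the exploration step of G^stack(n, p): at each step the
    # explored stack vertex has Bin(core size, p) neighbours in the core,
    # and all of them leave the core (they join the stack).
    core = n
    for _ in range(k_steps):
        core -= rng.binomial(core, p)
    return n - core

rng = np.random.default_rng(0)
n, p, k = 10_000, 0.0005, 200
mean = np.mean([discovered_after(k, n, p, rng) for _ in range(2_000)])
print(mean, n * (1 - (1 - p) ** k))   # Monte Carlo mean vs exact expectation
```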
The following is trivially verified: \begin{lemma}[Markov property of $ \mathrm{G}^{\mathrm{stack}}(n,p)$] \label{lem:Markov} \label{lem:markov} Let $K \geq 0$ be the number of neighbors in the core of $ \mathrm{G}^{\mathrm{stack}}(n,p)$ of the stack vertex $\rho$. Then $ K \sim \mathrm{Bin}(n,p)$ and conditionally on $K$, we have the equality in law $$ \mathrm{Explo}( \mathrm{G}^{\mathrm{stack}}(n,p)) \quad \overset{(d)}{=} \mathrm{G}^{\mathrm{stack}}(n-K,p).$$\end{lemma} We shall now consider successive exploration steps and denote by $K \equiv K_{1}, K_{2}, \dots$ the number of vertices of the remaining core discovered at each step. In the rest of the chapter, we shall focus on a specific exploration of the graph: we shall assume that iteratively, the discovered vertices are placed on top of the stack and that we successively explore the first vertex of the stack. We get the so-called \textbf{{\L}ukasiewicz exploration} of the graph $ \mathrm{G^{stack}}(n,p)$ similar to the one used in the previous chapter, see Figure \ref{fig:lukaERpoi}. We encode it in a process $$ (\mathbb{S}_{k}^{(n, p)} : k \geq 0)$$ or in short $(\mathbb{S}_{k} : k \geq 0)$, the {\L}ukasiewicz walk, defined by $ \mathbb{S}^{(n, p)}_{0}=0$ and where $ \Delta \mathbb{S}^{(n, p)}_{i} = \mathbb{S}^{(n, p)}_{i}- \mathbb{S}^{(n, p)}_{i-1} = K_{i}-1$ is the number of neighbors discovered at step $i$ minus one. \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{lukapoisson} \caption{{\L}ukasiewicz exploration of the graph $ \mathrm{G^{stack}}(n,p)$: the numbering reflects the order in which the vertices have been explored. The thick edges are kept whereas the thin red edges are discarded in the exploration. The thick (and very thick) edges form $ \mathrm{F^{stack}}(n, p)$ and the very thick ones form $ \mathrm{F'^{stack}}(n, p)$ . The process $ \mathbb{S}^{(n, p)}$ on the right is obtained by concatenating the successive number of neighbors $-1$. 
\label{fig:lukaERpoi}} \end{center} \end{figure} \paragraph{Relation to components.} Since $ \mathrm{G^{stack}}(n, p)$ has an infinite stack of vertices linked to each vertex of the core independently with probability $p$, as soon as $p >0$, the graph is a.s.~connected and in fact all vertices of the core have infinite degree almost surely. However, if we only consider the edges that are truly used in the {\L}ukasiewicz exploration (i.e.~not the edges between stack and revealed vertices, nor edges between revealed vertices) we obtain a spanning forest $$ \mathrm{F^{ \mathrm{stack}}}(n,p) \subgraph \mathrm{G^{ \mathrm{stack}}}(n,p),$$ whose {\L}ukasiewicz walk is precisely $ \mathbb{S}$, see Figure \ref{fig:lukaERpoi}. In particular, new minimal records of $ \mathbb{S}$ correspond to the discovery of a new tree component in $ \mathrm{F^{ \mathrm{stack}}}(n,p)$. If we further remove all vertices of the initial stack (together with the adjacent edges) we split $\mathrm{F^{ \mathrm{stack}}}(n, p)$ into a finer forest $\mathrm{F^{ \mathrm{stack},'}}(n,p)$ which spans the core and we can check the following graph inclusions \begin{eqnarray} \label{eq:inclusion} \mathrm{F^{ \mathrm{stack},'}}(n,p) \subgraph \underbrace{G(n,p)}_{ \mathrm{Core}} \subgraph \mathrm{F^{ \mathrm{stack}}}(n,p) \subgraph \mathrm{G^{ \mathrm{stack}}}(n,p). \end{eqnarray} \section{Law of the increments} The advantage of the stacked version compared to the standard Erd{\H{o}}s--R\'enyi model studied in the previous chapter is that the law of the increments of $ \mathbb{S}$ is simpler as it does not involve the running infimum process (compare Lemma \ref{prop:cid} with Proposition \ref{prop:markovER}). To make it even simpler, it is useful to randomize the size of the core. We start with the description of $ (\Delta \mathbb{S}^{(n, p)}_{k} : k \geq 1)$ in the fixed-size case.
\subsection{Fixed size} Consider the unit interval $[0,1)$ which we split into infinitely many subintervals $$ [0,1) = \bigsqcup_{k \geq 1} \underbrace{\big[x^{(p)}_{k-1},x^{(p)}_{k}\big[}_{:= I^{(p)}_{k}}, \quad \mbox{ where } x^{(p)}_{k} = 1 - \left( 1-p\right)^{k} \mbox{ for }k \geq 0,$$ so that for each $k \geq 1$, the length of $I^{(p)}_{k}$ is exactly $p$ times the total length of $I^{(p)}_{k},I^{(p)}_{k+1}, \dots$. We then throw $(U_{i} : 1 \leq i \leq n)$ independent identically distributed uniform random variables on $[0,1)$. The key observation is the following: \begin{lemma} \label{prop:cid}The law of $(\Delta \mathbb{S}^{(n, p)}_{k} +1 : k \geq 1)$ is equal to the law of $$ \big(\#\{ 1 \leq i \leq n : U_{i} \in I_{k}^{(p)}\}\big)_{k \geq 1}.$$ \end{lemma} \noindent \textbf{Proof}. Set $ \tilde{K}_{j} = \#\{ 1 \leq i \leq n : U_{i} \in I_{j}^{(p)}\}$. Clearly $\tilde{K}_{1} \sim \mathrm{Bin}(n,p)$ in law. Furthermore, using the fact that the variables are uniform, we see that conditionally on $ \tilde{K}_{1}$, the sequence $ \tilde{K}_{2}, \tilde{K}_{3},\dots$ has the law of $(\tilde{K}_{1}, \tilde{K}_{2},\dots)$ where $n$ has been replaced by $n'=n- \tilde{K}_{1}$. Comparing with Lemma \ref{lem:markov}, this suffices to prove equality of the laws recursively. \qed \medskip If we write $F_{n}(x) = \# \{ 1\leq i \leq n : U_{i} \leq x\}$ for the (unnormalized) empirical distribution function of the $n$ i.i.d.~uniforms, using the above lemma we can write \emph{simultaneously} for all $k \geq 0$ \begin{eqnarray} \label{eq:walkcid} \mathbb{S}_{k} &=& F_{n}(x_{k}^{(p)}) -k.
\label{eq:couplagecid} \end{eqnarray} For our application, we recall the classical Glivenko--Cantelli\footnote{ \raisebox{-5mm}{\includegraphics[width=1cm]{glivenko}} Valery Ivanovich Glivenko (1897--1940), Ukrainian \raisebox{-5mm}{\includegraphics[width=1cm]{cantelli}}Francesco Paolo Cantelli (1875--1966), Italian} theorem: \begin{eqnarray} \label{eq:glivenko} \left( \frac{F_{n}(x)}{n} : x \in [0,1]\right) & \xrightarrow[n\to\infty]{ (\mathbb{P})} & ( x : x \in [0,1]), \end{eqnarray} for the $L^{\infty}$ metric. Before drawing probabilistic consequences of the above observations, let us consider the model where the size of the core is itself random, which yields further simplifications (and which gave its name to the chapter). \subsection{Poissonized version} Fix $\alpha >0$ and suppose that $n\equiv N$ is first sampled at random with law $\mathfrak{P}(\alpha)$, and conditionally on $N$ we perform the above construction. The resulting stacked graph will be denoted by $\mathrm{G_{Poi}^{stack}}(\alpha,p)$ and we denote the resulting {\L}ukasiewicz walk by $ \mathbb{S}^{[\alpha,p]}$. By the classical Poisson thinning observation, in Lemma \ref{lem:markov} we then have $K \sim \mathrm{Bin} (N,p) \sim \mathfrak{P}(\alpha p)$ and furthermore $K$ is independent of $N-K \sim \mathfrak{P}((1-p) \alpha)$.
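The thinning property invoked here is classical: if $N \sim \mathfrak{P}(\alpha)$ and, conditionally on $N$, $K \sim \mathrm{Bin}(N,p)$, then $K \sim \mathfrak{P}(\alpha p)$ and is independent of $N-K \sim \mathfrak{P}(\alpha(1-p))$. A quick numerical check (Python with \texttt{numpy} assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, p, trials = 30.0, 0.2, 200_000
N = rng.poisson(alpha, size=trials)     # Poisson number of core vertices
K = rng.binomial(N, p)                  # binomial thinning of N
# Poisson thinning: K ~ Poisson(alpha p) (mean = variance = alpha p),
# and K is uncorrelated with (in fact independent of) N - K.
print(K.mean(), K.var(), np.cov(K, N - K)[0, 1])
```

The empirical mean and variance of $K$ should both be close to $\alpha p = 6$, and the empirical covariance between $K$ and $N-K$ close to $0$.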
Iterating the above lemma, we deduce that in the Poissonized version the increments $ \Delta \mathbb{S}^{[\alpha,p]}_{k}+1$ of the {\L}ukasiewicz walk form a sequence of \emph{independent} Poisson random variables with expectations $ \alpha p, \alpha p (1-p), \dots , \alpha p (1-p)^{k}, \dots$ whose total sum is just a Poisson variable of parameter $ \alpha p \sum_{i \geq 0} (1-p)^{i} = \alpha$, recovering the total number of vertices $N$ in the core as expected.\medskip As in \eqref{eq:walkcid} we can write in this case \emph{simultaneously} for all $k \geq 0$ \begin{eqnarray} \mathbb{S}^{[\alpha,p]}_{k} &= &(\mathfrak{P}(\alpha p )-1) + ( \mathfrak{P}(\alpha p (1-p))-1) + \dots + ( \mathfrak{P}(\alpha p (1-p)^{k-1})-1)\nonumber \\ &=& \mathfrak{P}\left( \alpha p \cdot \sum_{i=0}^{k-1} (1-p)^{i} \right)-k = \mathfrak{P}\left( \alpha (1-(1-p)^{k})\right) -k, \label{eq:lukapoisson} \end{eqnarray} where all the Poisson random variables written above are independent and where $ (\mathfrak{P}(t): t \geq 0)$ is a standard unit-rate Poisson counting process on $ \mathbb{R}_{+}$. We shall only use the following standard limit theorems on the Poisson counting process: \begin{eqnarray} \label{eq:lawll} \frac{ \mathfrak{P}(t)}{t} \xrightarrow[t\to\infty]{a.s.} 1, \quad \mbox{ and } \quad \left(\frac{ \mathfrak{P}(tn)-tn}{ \sqrt{n}} : t \geq 0\right) \xrightarrow[n\to\infty]{(d)} (B_{t} : t \geq 0), \end{eqnarray} where $(B_{t} : t \geq 0)$ is a standard linear Brownian motion. The first convergence follows from the law of large numbers and the second from Donsker's invariance principle. \section{Phase transition for the giant} Let us use the {\L}ukasiewicz exploration of the stacked version of the Erd{\H{o}}s--R\'enyi random graph to give a straightforward proof of Theorem \ref{thm:weak-giant}. \subsection{Existence of the giant component} Fix $c>0$.
Let $p \equiv p_n = \frac{c}{n}$ and recall the notation $ \mathbb{S}^{(n,p)}$ for the {\L}ukasiewicz walk encoding the fixed-size stacked Erd{\H{o}}s--R\'enyi random graph. Since we have $$x_{\lfloor nt \rfloor }^{(\frac{c}{n})} = 1- \left(1- \frac{c}{n}\right)^{ \lfloor nt\rfloor } \to 1-\mathrm{e}^{-ct}, \quad \mbox{ as $n \to \infty$ uniformly over $ t \in \mathbb{R}_{+}$},$$ using \eqref{eq:walkcid} and the Glivenko--Cantelli theorem \eqref{eq:glivenko}, we immediately deduce the analog of Theorem \ref{thm:fluid}: \begin{proposition}[Fluid limit]\label{prop:fluid}We have the following convergence in probability $$ \sup_{t \geq 0} \left|n^{-1} \cdot {\mathbb{S}^{(n, \frac{c}{n})}_{\lfloor nt\rfloor}} - \left( 1- \mathrm{e}^{{-ct}}-t\right)\right| \xrightarrow[n\to\infty]{( \mathbb{P})}0.$$ \end{proposition} \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{courbesERP} \caption{Graphs of the functions $( 1- \mathrm{e}^{{-ct}}-t)_{t \geq 0}$ for different values of $c$: in blue $c=1/2$, in orange $c=1$, in green $c=2$ and in red $c=3$. Notice the root $1-\alpha(c)$ and compare with Figure \ref{fig:fc}.\label{fig:graphsfunctions}} \end{center} \end{figure} Notice that, contrary to Theorem \ref{thm:fluid}, the above convergence is not restricted to a compact time interval. When $c >1$, the function $ t \mapsto 1- \mathrm{e}^{-ct}-t$ coincides with the function $f_{c}$ defined in Section \ref{sec:fluidER} up to its first root at time $t = 1- \alpha(c)$, where we recall that $\alpha(c)$ is the smallest root of $ \alpha(c) = \mathrm{e}^{-c(1- \alpha(c))}$ and in particular $\alpha(c)=1$ if and only if $c \in [0,1]$.
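Proposition \ref{prop:fluid} is particularly pleasant to check by simulation because of the coupling \eqref{eq:walkcid}: the whole walk is a deterministic function of the $n$ i.i.d.~uniforms. A sketch (Python with \texttt{numpy} assumed; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 100_000, 2.0
p = c / n
U = np.sort(rng.random(n))                      # the n i.i.d. uniforms
ks = np.arange(2 * n)                           # explore up to time 2n
xk = 1.0 - (1.0 - p) ** ks                      # the points x_k^{(p)}
S = np.searchsorted(U, xk, side="right") - ks   # S_k = F_n(x_k) - k
t = ks / n
err = np.max(np.abs(S / n - (1.0 - np.exp(-c * t) - t)))
print(err)   # sup-norm distance to the fluid limit; small for large n
```

By the Glivenko--Cantelli theorem the printed sup-norm error is of order $n^{-1/2}$, here a few thousandths.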
We give a proof of the existence of the giant component in the stacked Erd{\H{o}}s--R\'enyi random graph (that is, Theorem \ref{thm:weak-giant}) using the same lines as in Section \ref{sec:fluidgiant1}: \begin{corollary}[Phase transition for $G(n, \frac{c}{n})$] \label{cor:giant} If $c <1$ then the largest connected component \textbf{in the core} of $ \mathrm{G^{stack}}(n,{\textstyle \frac{c}{n}})$ has size $o_{ \mathbb{P}}(n)$, whereas if $ c >1$ the core contains a unique giant component of size $ (1-\alpha(c))n + o_ \mathbb{P}(n)$, and the second largest component has size $o_{ \mathbb{P}}(n)$. \end{corollary} \noindent \textbf{Proof.} Using the sandwiching of \eqref{eq:inclusion} it suffices to prove the similar statements for $ \mathrm{F^{stack}}$ and $ \mathrm{F^{stack,'}}$. The sizes of the connected components in $\mathrm{F^{stack}}(n, \frac{c}{n})$ are given by the lengths of the excursions of $ \mathbb{S}^{(n, \frac{c}{n})}$ above its running infimum process $$ \underline{ \mathbb{S}}^{(n, \frac{c}{n})}_k := \inf \{ \mathbb{S}^{(n, \frac{c}{n})}_j : 0 \leq j \leq k\}.$$ We denote by $(L^{(n, \frac{c}{n})}_{i} : i \geq 1)$ those excursion lengths ranked in decreasing order. Notice that the excursion lengths above the running infimum of the function $ t \mapsto 1- \mathrm{e}^{-ct}-t$ are given by $( 1-\alpha(c), 0, 0, \dots)$. Combining Proposition \ref{prop:fluid} with (a variation on) Exercise \ref{exo:continuityt} shows that $$\left( \frac{L^{(n, \frac{c}{n})}_{i}}{n} : i \geq 1\right) \xrightarrow[n\to\infty]{( \mathbb{P})} ( 1-\alpha(c), 0, 0, \dots)$$ for the $\ell^{\infty}$ norm. This proves the statement of the corollary for the random graph $\mathrm{F ^{ \mathrm{stack}}}(n, {\textstyle \frac{c}{n}})$. In the case $c \leq 1$, since $\mathrm{F^{stack,'}} \subgraph \mathrm{F^{stack}}$ and $1-\alpha(c)=0$ there is nothing more to prove.
However, when $c>1$ the removal of the initial stack vertices may split the giant component of $\mathrm{F^{stack}}(n, \frac{c}{n})$ of size $(1-\alpha(c))n +o_{ \mathbb{P}}(n)$ into several components, but a moment's thought using the {\L}ukasiewicz walk and Proposition \ref{prop:fluid} again shows that one component of size $(1-\alpha(c))n + o_{ \mathbb{P}}(n)$ must remain.\qed \subsection{Critical case} In this section we turn to refined estimates on the cluster sizes in the case $ \alpha=n$ and $p\equiv p_n =\frac{1}{n}$. For technical simplicity, we focus on the Poissonized version $ \mathrm{G^{stack}_{Poi}}$ for which we can use the Brownian limit in \eqref{eq:lawll}. This is an analog of point (iii) in Theorem \ref{thm:erdosrenyi} (where we take $\lambda=0$ below). Getting from those results the analogs for the fixed-size Erd{\H{o}}s--R\'enyi via depoissonization is doable, but is not covered in these pages. \begin{proposition}[Near critical case] \label{prop:aldous} Fix $\lambda \in \mathbb{R}$. For $ p\equiv p_n = \frac{1}{n} + \frac{\lambda}{n^{{4/3}}}$, the {\L}ukasiewicz walk $ \mathbb{S}^{[n, \frac{1}{n} + \frac{\lambda}{n^{4/3}}]}$ of the Poissonized version satisfies $$ \left(n^{-1/3} \cdot {\mathbb{S}^{[n,\frac{1}{n} + \frac{\lambda}{n^{4/3}}]}_{\lfloor n^{2/3}t \rfloor }}\right)_{t \geq 0} \xrightarrow[n\to\infty]{(d)} \left( B_{t} + \lambda t - \frac{t^{2}}{2}\right)_{ t \geq 0},$$ where the convergence holds in distribution for the uniform norm over every compact subset of $ \mathbb{R}_{+}$. \end{proposition} \noindent \textbf{Proof.} Fix $A>0$. Putting $k = \lfloor n^{2/3}t\rfloor $ for $t \in [0,A]$ in the equation \eqref{eq:lukapoisson}, we have \begin{eqnarray} \label{eq:dse}n \left(1-(1- \frac{1}{n}- \frac{\lambda}{n^{4/3}})^{\lfloor n^{2/3}t\rfloor}\right) = tn^{2/3} + \lambda t n^{1/3} - \frac{t^{2}}{2} n^{1/3} + o(n^{1/3}), \end{eqnarray} as $n \to \infty$ and where the little $o$ is uniform in $t \in [0,A]$.
The second item of \eqref{eq:lawll} together with the Skorokhod representation theorem shows that on a common probability space we can build for each $m \geq 1$ a Poisson counting process $ \mathfrak{P}^{(m)}$ and a Brownian motion $B$ so that we have the \emph{almost sure} convergence: \begin{eqnarray} \label{eq:skorokhod} \left(\frac{ \mathfrak{P}^{(m)}(tm)-tm}{ \sqrt{m}}\right)_{t \geq 0} \xrightarrow[m\to\infty]{a.s.} (B_{t} : t \geq 0) \end{eqnarray} for the uniform norm over every compact of $ \mathbb{R}_{+}$. Recalling \eqref{eq:lukapoisson}, those observations yield for $m = \lfloor n^{2/3} \rfloor $ \begin{eqnarray*} \left(\frac{\mathbb{S}^{[n,p_{n}]}_{ \lfloor n^{2/3}t \rfloor }}{n^{1/3}} \right)_{0 \leq t \leq A} & \overset{(d)}{\underset{ \mathrm{ for\ each\ }n}{=}}& \left(\frac{\mathfrak{P}^{(m)}\left( n \left(1-\left(1- \frac{1}{n}- \frac{\lambda}{n^{4/3}}\right)^{ \lfloor n^{2/3}t \rfloor}\right)\right) - \lfloor n^{2/3}t \rfloor}{n^{1/3}}\right)_{0 \leq t \leq A}\\ & \underset{ \eqref{eq:dse}}{=}& \left(\frac{\mathfrak{P}^{(m)}\left( tm + \lambda t \sqrt{m} - \frac{t^{2}}{2} \sqrt{m} + o( \sqrt{m}) \right)- t m +o( \sqrt{m})}{ \sqrt{m} + o(1)}\right)_{0 \leq t \leq A} \\ &\underset{ \eqref{eq:skorokhod}}{\xrightarrow[n\to\infty]{a.s.}}& \left( B_{t} + \lambda t - \frac{t^{2}}{2} \right)_{0 \leq t \leq A}, \end{eqnarray*} and this proves the proposition.
\qed \section{Connectedness} As another application of our modification of the Erd{\H{o}}s--R\'enyi random graph, let us give a short proof of the (very) sharp phase transition for connectedness in the fixed-size Erd{\H{o}}s--R\'enyi random graph which was mentioned in \eqref{eq:erconnectedpoisson}: \begin{theorem}[Critical window for connectedness \cite{erdds1959random}] For $c \in \mathbb{R}$ we have \label{prop:connectedness} $$ \mathbb{P}\left( G\left(n, \frac{\log n +c}{n}\right) \mbox{ is connected}\right) \xrightarrow[n\to\infty]{} \mathrm{e}^{- \mathrm{e}^{-c}}.$$ \end{theorem} \noindent \textbf{Proof.} Let $p\equiv p_n = \frac{\log n +c}{n}$. Connectedness of the core $ G(n,p_{n})$ is equivalent to the fact that $ \mathrm{F^{stack}}(n,p_{n})$ has only one non-trivial component (the others being isolated vertices of the stack), or equivalently that the {\L}ukasiewicz walk $( \mathbb{S}^{(n,p_{n})})$ starts with a (large) excursion and, once it has reached level $-1$, makes only jumps of $-1$ afterwards. That is, $ \mathbb{S}^{(n,p_{n})}_{n+1} =-1$ and time $n+1$ is the first hitting time of $-1$. In particular, in the notation \eqref{eq:walkcid} we must have $F_{n}( x^{(p_{n})}_{n+1}) = n$ or equivalently, that no uniform $U_{i}$ for $1 \leq i \leq n$ falls after the point $x^{(p_{n})}_{n}$, where $$ 1- x^{(p_{n})}_{n} = \left( 1- \frac{\log n +c}{n}\right)^{n} \sim \frac{ \mathrm{e}^{-c}}{n}, \quad \mbox{ as }n \to \infty.$$ Computing this probability is routine and we have $$ \mathbb{P}\left( \max_{1 \leq i \leq n} U_{i} \leq x^{(p_{n})}_{n}\right) = \left(x^{(p_{n})}_{n}\right)^{n} \sim \left(1- \frac{ \mathrm{e}^{-c}}{n}\right)^{n} \xrightarrow[n\to\infty]{} \mathrm{e}^{- \mathrm{e}^{-c}}.$$ To finish the proof, one shows that on this event the core is connected with high probability.
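As an aside, the theorem is already visible at moderate sizes; here is a quick Monte Carlo sanity check (an illustration only, helper names are ours) comparing the empirical probability of connectedness of $G(n,\frac{\log n + c}{n})$ with $\mathrm{e}^{-\mathrm{e}^{-c}}$.

```python
import math
import random

def is_connected(n, p, rng):
    """Sample G(n, p) and test connectedness with union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    components -= 1
    return components == 1

rng = random.Random(7)
n, c, trials = 300, 1.0, 500
p = (math.log(n) + c) / n
estimate = sum(is_connected(n, p, rng) for _ in range(trials)) / trials
# to be compared with the limit exp(-exp(-1)) ≈ 0.692
```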
In terms of the {\L}ukasiewicz walk this boils down to: \begin{lemma} For $p\equiv p_n = \frac{\log n +c}{n} $ we have $$ \mathbb{P}\left( \mathbb{S}^{(n,p_{n})}_{k} \geq 0 : \forall 1 \leq k \leq n \mid F_{n}(x^{(p_{n})}_{n})=n\right) \xrightarrow[n\to\infty]{}1.$$ \end{lemma} \noindent \textbf{Proof of the lemma.} Notice that the event on which we are conditioning has asymptotically positive probability, so it suffices to show that $\mathbb{P}( \exists 1 \leq k \leq n-1: \mathbb{S}^{(n,p_{n})}_{k}=0 \mbox{ and }\mathbb{S}^{(n,p_{n})}_{n} = 0)$ tends to $0$. We perform a union bound over all such $k$'s and compute \begin{eqnarray*} && \mathbb{P}( \exists 1 \leq k \leq n-1 : \mathbb{S}^{(n,p_{n})}_{k} = 0 \ \& \ \mathbb{S}^{(n,p_{n})}_{n} = 0) \\ &\leq& \sum_{k=1}^{n-1}\mathbb{P}\left( \begin{array}{c} \# \{ 1 \leq i \leq n : U_{i} \in I_{1}\cup I_{2} \cup \dots \cup I_{k}\} =k \\ \# \{ 1 \leq i \leq n : U_{i} \in I_{k+1}\cup I_{k+2} \cup \dots \cup I_{n}\} = n-k \end{array}\right) \\ &\leq & \sum_{k=1}^{n/2} \mathbb{P}( \mathrm{Bin}(n, x^{(p_{n})}_{k}) \leq k) + \sum_{k=n/2}^{n-1} \mathbb{P}( \mathrm{Bin}(n,x^{(p_{n})}_{n} - x^{(p_{n})}_{k}) \geq n- k). \end{eqnarray*} For $ \varepsilon_{n}, \delta_{n}$ tending to $0$ such that $ n \varepsilon_{n} \to \infty$ as $n \to \infty$ we use the Chernoff bounds $$ \mathbb{P}( \mathrm{Bin}( n, \varepsilon_{n}) \leq \delta_{n}\, n \varepsilon_{n}) \leq \mathrm{e}^{ - \mathrm{c} \, n \varepsilon_{n}} \quad \mbox{ and } \quad \mathbb{P}( \mathrm{Bin}( n, \varepsilon_{n}) \geq n \varepsilon_{n}/\delta_{n}) \leq \mathrm{e}^{- \mathrm{c} \, n \varepsilon_{n}},$$ for some $\mathrm{c}>0$.
Since for $k \leq 10\frac{ n}{\log n}$ we have $k = o (n x^{(p_{n})}_{k})$ we can apply the above bound and get for some $ \mathrm{c}'>0$ $$\sum_{k=1}^{10 n/\log n} \mathbb{P}( \mathrm{Bin}(n, x^{(p_{n})}_{k}) \leq k) \leq \sum_{k =1}^{10 n/ \log n} \exp( - \mathrm{c}\ n x^{(p_{n})}_{k}) \leq \sum_{k =1}^{10 n/ \log n} \exp( - \mathrm{c'}\ k \log n) = o(1).$$ The case when $ 10\frac{ n}{\log n} \leq k \leq n/2$ is even easier since we have \begin{eqnarray*}\mathbb{P}( \mathrm{Bin}(n, x^{(p_{n})}_{k}) \leq k) &\leq& \mathbb{P}\left( \mathrm{Bin}(n, x^{(p_{n})}_{10 \frac{n}{ \log n}}) \leq \frac{n}{2}\right)\\ & \leq & \mathbb{P}( \mathrm{Bin}(n, 1- 2\mathrm{e}^{-10}) \leq n/2) \leq \mathrm{e}^{-{c}'' n}, \end{eqnarray*} for some $c''>0$ by a large deviation estimate since $1- 2 \mathrm{e}^{-10}> 1/2$. Summing up these estimates, we deduce that $ \sum_{k=1}^{n/2} \mathbb{P}( \mathrm{Bin}(n, x^{(p_{n})}_{k}) \leq k) \to 0$ as $n \to \infty$. A similar reasoning shows that $\sum_{k=n/2}^{n-1} \mathbb{P}( \mathrm{Bin}(n,x^{(p_{n})}_{n} - x^{(p_{n})}_{k}) \geq n- k) \to 0$ as well, and we leave the verification as an exercise for the reader. \qed \bigskip \noindent \textbf{Bibliographical notes.} The content of this chapter is adapted from the author's paper \cite{curien2022erd} and from the master's thesis of Damian Cid (class of 2023--2024) who elegantly depoissonized the initial arguments. Various modifications of the Erd{\H{o}}s--R\'enyi random graph with nicer probabilistic properties have been used in the literature, see e.g.~the Poisson cloning model \cite{kim2007poisson}. \part[Random tree growth]{Random tree growth \\ \\ \begin{center} \begin{minipage}[l]{15cm} \normalsize In this part, we study several models of random growing trees where vertices are attached to the preceding structure according to some rule.
The prototype is the random recursive tree process $(T_{n} : n \geq 0)$ where $T_{n+1}$ is obtained from $T_{n}$ by attaching a new vertex labeled $n+1$ onto a uniform vertex of $T_{n}$. We will study this process both from a static point of view (statistics of uniform random permutations) and from a dynamical point of view as $n$ increases (P\'olya urns and continuous-time embedding). \end{minipage} \end{center} \vspace{1cm} \begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{recursive10bis} \includegraphics[height=5cm]{recursive100bis}\\ \includegraphics[width=14.5cm,height=5cm]{recursive10000} \caption{A random recursive tree at stages $10, 100$ and $10000$.} \end{center} \end{figure} } \chapter{Random permutations} \hfill Many points of view on $n!$\bigskip \label{chap:permu} In this chapter, we study the law of the cycle decomposition of a random permutation $\boldsymbol{\sigma}_{n}$ chosen uniformly in the symmetric group $ \mathfrak{S}_{n}$ over $n$ elements $\{1,2, \dots , n\}$. In particular, we shall establish Poisson statistics for the number of short cycles and the Poisson--Dirichlet limit for the large cycles. \section{Feller coupling} In 1945, Feller (the author of Lemma \ref{lem:feller}) introduced a coupling between the cycle structure of a uniform permutation $\boldsymbol{\sigma}_n$ and the spacings between successes in a sequence of $n$ independent Bernoulli variables of parameters $ \frac{1}{n}$, $ \frac{1}{n-1}$, \dots, $ \frac{1}{2}$, $1$. This will be the main tool used in this chapter. The key idea of this representation is to explore a given permutation along its cycles ordered by their minimal element, a construction which is sometimes called the \textbf{Foata\footnote{ \raisebox{-5mm}{\includegraphics[width=1cm]{foata.jpg}} Dominique Foata (1934--), French} correspondence}.
\subsection{Foata correspondence} A permutation $\sigma_{n} \in \mathfrak{S}_{n}$ can be described by a sequence $(i_{1}, i_{2}, \dots , i_{n})$ listing the $n$ values $\{1,2, \dots,n\}$ in several ways: the most obvious one is to prescribe the permutation by its images $ \sigma_{n}(1) = i_{1}, \sigma_{n}(2) = i_{2}, \dots , \sigma_{n}(n)= i_{n}$. Another way is to imagine that $(i_{1}, i_{2}, \dots , i_{n})$ is the sequence of values we discover when exploring the cycles of $\sigma_{n}$ ordered by their minimal values, see Figure \ref{fig:foata}. Specifically, let us denote by \begin{eqnarray} \label{eq:foata} \big(a_1^{(1)}, \dots , a_{k_1}^{(1)}\big)\big(a_1^{(2)}, \dots , a_{k_2}^{(2)}\big) \cdots \big(a_1^{(\ell)}, \dots , a_{k_\ell}^{(\ell)}\big), \end{eqnarray} the decomposition of $\sigma_n$ into $\ell$ cycles with disjoint supports of length $k_1, \dots , k_\ell \geq 1$. We suppose that those cycles are ranked according to their minimal element, which is placed at the end of each cycle in this representation: $$ 1=a_{k_1}^{(1)} = \min_{1 \leq i \leq k_1} a_i^{(1)} < a_{k_2}^{(2)} = \min_{1 \leq i \leq k_2} a_i^{(2)} < \cdots < a_{k_\ell}^{(\ell)} = \min_{1 \leq i \leq k_\ell} a_i^{(\ell)}.$$ Then, the Foata encoding of $\sigma_n$ is the permutation we obtain by reading the numbers in \eqref{eq:foata} from left to right, namely $$ \mathrm{Foata}(\sigma_n) = \big(a_1^{(1)}, \dots , a_{k_1}^{(1)},a_1^{(2)}, \dots , a_{k_2}^{(2)}, \cdots ,a_1^{(\ell)}, \dots , a_{k_\ell}^{(\ell)}\big).$$ It is then clear that $ \mathrm{Foata} : \mathfrak{S}_n \to \mathfrak{S}_n$ is a bijection, that the number of cycles of $\sigma_n$ is equal to the number of minimal records of $ \mathrm{Foata}(\sigma_n)$, i.e.~the indices $k$ such that $ \mathrm{Foata}(\sigma_n)_k = \min \{ \mathrm{Foata}(\sigma_n)_i : i \geq k \}$, and that the lengths of the cycles correspond to the spacings between those records.
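The Foata encoding is straightforward to implement; the sketch below (helper names are ours) writes each cycle with its minimum last, cycles ordered by increasing minima, and recovers the number of cycles as the number of minimal records.

```python
from itertools import permutations

def foata(sigma):
    """Foata encoding: sigma is a sequence with sigma[i-1] = sigma(i) on {1,...,n}.
    Each cycle is written as (sigma(m), sigma^2(m), ..., m), with its minimum m
    last, and cycles are listed by increasing minima."""
    n, seen, word = len(sigma), set(), []
    for m in range(1, n + 1):
        if m in seen:
            continue
        x = sigma[m - 1]
        while x != m:
            word.append(x)
            seen.add(x)
            x = sigma[x - 1]
        word.append(m)
        seen.add(m)
    return word

def num_cycles(sigma):
    """Number of cycles in the disjoint-cycle decomposition."""
    seen, count = set(), 0
    for i in range(1, len(sigma) + 1):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = sigma[i - 1]
    return count

def num_min_records(word):
    """Indices k with word[k] = min(word[k:])."""
    return sum(1 for k in range(len(word)) if word[k] == min(word[k:]))
```

For instance, the permutation $(1\,2\,3)(4\,5)$, given by its images as `[2, 3, 1, 5, 4]`, has Foata word $(2,3,1,5,4)$ with two minimal records, matching its two cycles.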
\begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{foata} \caption{Foata correspondence: on the left a description of a permutation via its images, on the right the description of a permutation by exploration of its cycles ranked in increasing order of their minimal element. This bijection transforms the number of cycles into the number of minimal records (in red on the left). \label{fig:foata}} \end{center} \end{figure} \begin{exo}[Law of a typical cycle] \label{exo:taillecycle}Show using the Foata correspondence that the size of the cycle containing $1$ in a uniform permutation is uniform on $\{1,2, \dots , n \}$. \end{exo} \subsection{Feller coupling} \label{sec:fellercoupling} Keeping in mind the Foata encoding of a permutation, we now present the famous result of Feller. We consider $n$ independent Bernoulli variables $ \mathrm{Ber}( \frac{1}{k})$ of success parameters $$ \frac{1}{n}; \quad \frac{1}{n-1}; \quad \dots \quad \frac{1}{2}; \quad \frac{1}{1}.$$ Denote by $ n \geq I_1 > \dots > I_\ell = 1$ the indices (the reciprocals of the parameters) of the variables equal to $1$ and consider the $\ell$ spacings $ \mathcal{S}_{n} = ((n+1)-I_1 , I_1-I_2, \dots , I_{\ell-1}-I_\ell)$ between the points $n+1 > I_1 > \dots > I_\ell$. The sum of those spacings is equal to $n$. \clearpage \begin{figure}[h] \begin{center} \includegraphics[width=14cm]{feller-couplingbis} \caption{Constructing the law of the length of the cycles (in orange above) in a random permutation via the spacings in Bernoulli trials with parameters $1/(n-i)$ for $i\in \{0,1,2, \dots , n-1\}$. The red dots correspond to successes.
Notice that we start with a space of length $1$ just before the first trial of parameter $1/n$.} \end{center} \end{figure} \begin{theorem}[Feller] The spacings $ \mathcal{S}_{n}$ between successes of the above Bernoulli variables have the same law as the cycle lengths of a uniform permutation $ \boldsymbol{\sigma}_{n} \in \mathfrak{S}_{n}$ when ordered as in the Foata construction \eqref{eq:foata}.\label{thm:fellercoupling} \end{theorem} \noindent \textbf{Proof.} Let us explore the cycle structure of $\boldsymbol{\sigma}_{n}$ step by step. Consider first the cycle containing $1$ in $ \boldsymbol{\sigma}_{n}$. Then, $1$ is a fixed point with probability $1/n$ --this corresponds to the success of $ \mathrm{Ber}(1/n)$-- otherwise, it is sent via $\boldsymbol{\sigma}_{n}$ to a value $\boldsymbol{\sigma}_{n}(1)$ uniformly distributed over $\{2,3, \dots , n\}$. Conditionally on $\boldsymbol{\sigma}_n(1)\ne 1$, a simple calculation shows that we have $\boldsymbol{\sigma}_n( \boldsymbol{\sigma}_{n}(1))=1$ with probability $ \frac{1}{n-1}$ --this corresponds to the success of $ \mathrm{Ber}(1/(n-1))$-- or it is sent to a value $\boldsymbol{\sigma}_{n}^{2}(1) \notin\{ 1, \boldsymbol{\sigma}_{n}(1)\}$. Iteratively, for $k \geq 2$, conditionally on $\boldsymbol{\sigma}_n^{j}(1) \ne 1$ for all $1 \leq j \leq k-1$, we have $\boldsymbol{\sigma}^{k}_{n}(1)=1$ with probability $ \frac{1}{n-k+1}$ --corresponding to the success of $ \mathrm{Ber}(1/(n-k+1))$-- otherwise the cycle continues. Hence, the length of the cycle containing $1$ indeed has the same law as the first spacing $(n+1)-I_1$ in the Bernoulli trials.
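This first step can be checked by an exact computation: scanning the Bernoulli variables from parameter $1/n$ downwards, the first success occurs at the $k$th trial with probability $\prod_{j=0}^{k-2}(1-\frac{1}{n-j})\cdot\frac{1}{n-k+1}$, which telescopes to $\frac{1}{n}$. A short verification in exact rational arithmetic (helper name is ours):

```python
from fractions import Fraction

def first_spacing_law(n):
    """Law of the first spacing (n+1) - I_1 in Feller's coupling: trials have
    success probabilities 1/n, 1/(n-1), ..., 1 in this order, and the spacing
    equals k exactly when the first success happens at the k-th trial."""
    law, no_success_yet = [], Fraction(1)
    for k in range(1, n + 1):
        p_k = Fraction(1, n - k + 1)
        law.append(no_success_yet * p_k)
        no_success_yet *= 1 - p_k
    return law
```

Each entry equals $\frac1n$ exactly, matching Exercise \ref{exo:taillecycle}.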
Once the cycle of $1$, of length $k_1$, has been entirely explored, if $k_1 < n$ we can relabel the remaining values in increasing order by $\{1,2, \dots , n-k_1\}$ and it is easy to see that the permutation $\tilde{\boldsymbol{\sigma}}_{n-k_1}$ induced by $\boldsymbol{\sigma}_{n}$ on these values is, conditionally on the exploration of the first cycle, uniform over $ \mathfrak{S}_{n-k_1}$, so that we can iterate the procedure. \qed \medskip A direct consequence of the above theorem is that the law of the length of the cycle containing the point $i_0 \in \{1,2, \dots , n\}$ in the random permutation $\boldsymbol{\sigma}_n$ is the uniform distribution over $\{1,2, \dots , n\}$ (see Exercise \ref{exo:taillecycle}). Also, the number of cycles $ \mathcal{C}_n$ of $ \boldsymbol{\sigma}_n$ can be expressed as $\sum_{1 \leq k \leq n} B_{k}$ where $B_{k} \sim\mathrm{Ber}(1/k)$ are independent, which is easily handled: \begin{proposition}[Law of the number of cycles in a uniform permutation]\label{thm:numbercycles}\noindent For any $z \in \mathbb{C}$ we have $$ \mathbb{E}[z^{ \mathcal{C}_{n}}] = \prod_{j=0}^{n-1} \frac{z+j}{j+1}.$$ As a result, its expectation and variance satisfy $ \mathbb{E}[ \mathcal{C}_{n}] = \sum_{k=1}^{n} \frac{1}{k} \sim \log n$ and $ \mathrm{Var}( \mathcal{C}_{n}) \sim \log n$ as $n \to \infty$, and we have a central limit theorem $$\frac{ \mathcal{C}_{n}- \log n}{ \sqrt{\log n}} \xrightarrow[n\to\infty]{(d)} \mathcal{N}(0,1).$$ \end{proposition} \noindent \textbf{Proof.} The formula for the generating function is easily proven using the equality in law $ \mathcal{C}_n =\sum_{1 \leq k \leq n} \mathrm{Ber}(1/k)$ where the Bernoulli variables are independent as in Theorem \ref{thm:fellercoupling}. Taking expectation yields the harmonic sum, while the variance is the sum of the variances, which is $ \sum_{k=1}^{n} ( \tfrac1k - \tfrac1{k^{2}}) \sim \log n$.
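The generating function of Proposition \ref{thm:numbercycles} can be expanded exactly in rational arithmetic; the sketch below recovers the full law of $\mathcal{C}_n$ (the unsigned Stirling numbers of the first kind divided by $n!$) and checks the mean and the extreme values.

```python
from fractions import Fraction
from math import factorial

def cycle_count_law(n):
    """Expand E[z^{C_n}] = prod_{j=0}^{n-1} (z + j)/(j + 1) and return the
    list of coefficients [P(C_n = 0), P(C_n = 1), ..., P(C_n = n)]."""
    coeffs = [Fraction(1)]
    for j in range(n):
        new = [Fraction(0)] * (len(coeffs) + 1)
        for d, c in enumerate(coeffs):
            new[d] += c * j          # the 'j' part of the factor (z + j)
            new[d + 1] += c          # the 'z' part
        coeffs = [c / (j + 1) for c in new]
    return coeffs

law = cycle_count_law(6)
```

For $n=6$: $\mathbb{P}(\mathcal{C}_6=1)=\frac{5!}{6!}=\frac16$ (the $6$-cycles), $\mathbb{P}(\mathcal{C}_6=6)=\frac1{6!}$ (the identity), and the mean is the harmonic number $H_6$.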
The central limit theorem can be proved by evaluating the Fourier transform and using L\'evy's theorem (but we shall see another estimation-free route in Proposition \ref{prop:degreeroot}). \qed \medskip \begin{exo} \label{exo:heiseiberg} Show that $ \mathbb{E}[2^{ \mathcal{C}_n}] = n+1$. Do you have a combinatorial interpretation? \end{exo} \section{Large cycles and Poisson--Dirichlet distribution} In this section, we use Theorem \ref{thm:fellercoupling} to compute the law of the large cycles of a uniform permutation in the scaling limit. Perhaps surprisingly, the law of the random partition of $1$ we obtain pops up in other contexts such as the factorization of large random integers. \subsection{Stick breaking construction} Let $U_1, U_2, \dots $ be a sequence of independent and identically distributed uniform variables on $[0,1]$. We use these variables to perform a ``stick breaking'' of the interval $[0,1]$ by setting $$ X_1 = (1-U_1), \quad X_2 = U_1(1-U_2), \quad X_3 = U_1 U_2(1-U_3)\dots.$$ By the law of large numbers we have $$ \prod_{i=1}^{n} U_i = \exp \Big( \underbrace{\sum_{i=1}^{n} \log U_{i}}_{ = -n + o_{a.s.}(n)} \Big) \xrightarrow[n\to\infty]{a.s.} 0,$$ and since $X_{1} + \cdots + X_{n} = 1 - U_{1}U_{2}\cdots U_{n}$, we have $ \sum_{i \geq 1} X_i =1$ with probability one. \begin{definition}[Poisson--Dirichlet] \label{def:PD} The Poisson--Dirichlet distribution is the law of the lengths $X_1, X_2, \dots$ in the above stick-breaking construction. \end{definition} \paragraph{Ranked version.} Although the variables $X_i$ are stochastically decreasing in $i$, the sequence $(X_i : i \geq 1)$ is not decreasing in general. Sometimes the law of $(X_i : i \geq 1)$ is called the GEM (Griffiths, Engen, McCloskey) law and the Poisson--Dirichlet is its version $(X_i^\downarrow : i \geq 1)$ ranked in decreasing order. In these notes, we shall use the name Poisson--Dirichlet for both laws, the context making it clear which one is meant.
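Sampling the stick-breaking sequence is immediate; the sketch below (helper name is ours) truncates once the remaining mass is below a threshold, and checks numerically that the pieces sum to $1$ and that $X_1 = 1-U_1$ has mean $\frac12$.

```python
import random

def stick_breaking(rng, eps=1e-12):
    """Sample X_1 = 1 - U_1, X_2 = U_1 (1 - U_2), ... ; stop once the
    unbroken remainder U_1 U_2 ... U_k falls below eps."""
    pieces, rest = [], 1.0
    while rest > eps:
        u = rng.random()
        pieces.append(rest * (1.0 - u))
        rest *= u
    return pieces

rng = random.Random(2024)
sample = stick_breaking(rng)
mean_X1 = sum(stick_breaking(rng)[0] for _ in range(20000)) / 20000
```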
The ranked version may seem more appropriate (at least to state convergence results), but actually the initial version is much more convenient from a probabilistic point of view. \begin{figure}[!h] \begin{center} \includegraphics[height=3cm]{PD1} \includegraphics[height=3cm]{PD2} \includegraphics[height=3cm]{PD3} \includegraphics[height=3cm]{PD4} \includegraphics[height=3cm]{PD5} \caption{Five simulations of the Poisson--Dirichlet (unranked) partition.} \end{center} \end{figure} A corollary of Theorem \ref{thm:fellercoupling} is the following: \begin{theorem}[Poisson--Dirichlet as limit of cycle lengths] \label{thm:PDforcycles}\noindent For $n \geq 0$ we denote by $K_{1}( \boldsymbol{\sigma}_{n}), K_{2}( \boldsymbol{\sigma}_{n}), \dots$ the cycle lengths appearing in the Foata encoding of a uniform permutation $\boldsymbol{\sigma}_n \in \mathfrak{S}_n$ as in \eqref{eq:foata}. Then we have the following convergence in distribution \begin{eqnarray} \label{eq:cvPD} \left(\frac{K_{i}(\boldsymbol{\sigma}_{n})}{n} : i \geq 1 \right ) \xrightarrow[n\to\infty]{(d)} ( X_{i} : i \geq 1), \end{eqnarray} for the $\ell_1$-distance on the space of sequences $ \ell_1^{(1)} = \{(x_i)_{i \geq 1} : x_i >0 \mbox{ and } \sum_i x_i =1 \}$. Consequently, if $K_1^\downarrow(\boldsymbol{\sigma}_n) \geq K_2^\downarrow(\boldsymbol{\sigma}_n) \geq \cdots$ are the cycle lengths of $\boldsymbol{\sigma}_n$ ranked in non-increasing order, then we have \begin{eqnarray} \label{eq:cvPDbis} \left(\frac{K_{i}^\downarrow(\boldsymbol{\sigma}_{n})}{n} : i \geq 1 \right ) \xrightarrow[n\to\infty]{(d)} ( X^\downarrow_{i} : i \geq 1). \end{eqnarray} \end{theorem} \noindent \textbf{Proof.} In Feller's coupling, it is straightforward to compute the law of the first spacing, namely of $ N = (n+1)- \sup\{ k \leq n : \mathrm{Ber}(1/k) = 1\}$.
As already remarked (see Exercise \ref{exo:taillecycle}), this law is uniform over $\{1,2, \dots , n\}$ and conditionally on $N$, the remaining spacings have the law of $ \mathcal{S}_{n-N}$. It follows that if $ \mathcal{S}_{n}(1), \mathcal{S}_{n}(2), \dots$ denote the spacings in their order of appearance (when read from the parameter $1/n$ down to $1$), then $ n^{-1} \cdot \mathcal{S}_{n}(1)$ converges in law towards $X_{1} = 1-U_{1}$ (a uniform variable) and recursively $$ \left(\frac{ \mathcal{S}_{n}(i)}{n} : i \geq 1\right) \xrightarrow[n\to\infty]{(d)} (X_{i} : i \geq 1),$$ in terms of finite-dimensional convergence. Actually, since we know that $ n^{-1} \mathcal{S}_n$ and $(X_i : i \geq 1)$ belong to $\ell_1^{(1)}$ (they sum up to $1$), the finite-dimensional convergence implies the $\ell_1$ convergence in law. The last convergence follows by the mapping theorem since reordering of a sequence is a continuous operation on $\ell_1^{(1)}$. \qed \medskip \begin{remark}[Size-biasing and split-merge dynamics] Let us give two distributional properties of the ranked Poisson--Dirichlet partition $(X_{i}^{\downarrow} : i \geq 1)$ which are not easy to prove in the continuous setting, but whose analogs in the discrete setting are obvious. Let us imagine $(X_{i}^{\downarrow} : i \geq 1)$ as a stick breaking of the interval $[0,1]$ into countably many intervals, and let $V\sim \mathrm{Unif}[0,1]$ be a uniform point chosen independently of this stick breaking. Then the size of the interval containing the point $V$ (there is almost surely no tie) is uniformly distributed on $[0,1]$. This can be shown by considering the cycle length of a uniform point $V_{n} \in \{1,2, \dots , n \}$ in $ \boldsymbol{\sigma}_{n}$. \\ Similarly, there is a natural dynamic on random permutations $\boldsymbol{\sigma}_{n}$ of $ \mathfrak{S}_{n}$ which preserves the uniform distribution: just compose (to the left or to the right) $\boldsymbol{\sigma}_{n}$ by a transposition $\tau_{i,j}$ where $i,j \in \{1,2, \dots , n\}$ are i.i.d.~uniform.
In terms of the cycle structure, this gives rise to a split-merge transform. In the continuous setup, this boils down to sampling $V,V'$ independently of the stick breaking $(X_{i}^{\downarrow} : i \geq 1)$: if the two points fall into two distinct intervals, then those two pieces are merged. Otherwise, the interval containing both $V$ and $V'$ is split into two intervals uniformly. The Poisson--Dirichlet law is an invariant measure for this dynamic (and is in fact the only one, see \cite{diaconis2004}). \end{remark} \bigskip Perhaps surprisingly, the Poisson--Dirichlet law appears in many other ``logarithmic combinatorial structures'' such as factorization of random polynomials over finite fields or prime factorization of large random integers: \begin{theorem}[Billingsley]\noindent Let $N \in \{1,2, \dots , n\}$ be a uniform integer less than or equal to $n$ and let $p^\downarrow_1(N) \geq p^{\downarrow}_2(N) \geq \dots$ be its prime factors (with possible repetitions). Then we have $$ \left( \frac{\log p^{\downarrow}_i(N)}{\log n} ; i \geq 1\right) \xrightarrow[n\to\infty]{(d)} (X_i^\downarrow: i \geq 1).$$ \end{theorem} We refer to \cite{arratia2003logarithmic} for details. \subsection{Dickman function and prisoners} In this section, we present Dickman's\footnote{ \raisebox{-5mm}{\includegraphics[width=1cm]{dickman.jpg}} Karl Dickman (1861--1947), Swedish. He was an actuary and published only one article in mathematics \cite{dickman1930frequency} introducing this function when he was around $70$.} function, which essentially encodes the distribution function of $X_1^\downarrow$, the scaling limit of the longest cycle in a random permutation. This function pops up in various places in analytic number theory and has intriguing properties. \begin{proposition}[Dickman function] Consider $(X_i^\downarrow : i \geq 1)$ the ranked version of a Poisson--Dirichlet distribution.
Then for $x \geq 0$ we have $$ \mathbb{P}( X_1^\downarrow \leq x) = \rho(1/x),$$ where $x \mapsto \rho(x)$ is Dickman's function defined by $$\left\{ \begin{array}{lc} \rho(x)=1 & \mbox{ for } 0 \leq x \leq 1\\ x \rho'(x) = - \rho(x-1)& \mbox{ for } x \geq 1. \end{array}\right.$$ \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{dickman} \caption{Dickman's function} \end{center} \end{figure} \end{proposition} \noindent \textbf{Proof.} We use the notation $\mathbb{P}( X_1^\downarrow \leq x) = \rho(1/x)$ extended to $\rho(u)=1$ for $u \in [0,1]$. In the unranked version $(X_{i} : i \geq 1)$ of the Poisson--Dirichlet partition we can write, after conditioning on the first uniform variable $U_{1}$, \begin{eqnarray*} \mathbb{P}( X_1^\downarrow \leq x) &=& \mathbb{P}( X_{i} \leq x, \ \forall i \geq 1)\\ &=& \mathbb{E}\left[ \mathbf{1}_{1-U_{1} \leq x} \mathbb{P}\left( \tilde{X}_{1}^{\downarrow} \leq \frac{x}{U_{1}}\right)\right], \end{eqnarray*} where $ \tilde{X}_{1}^{\downarrow}$ denotes the largest piece of an independent copy of the partition (built from $U_2, U_3, \dots$ and rescaled by the remaining mass $U_1$). This gives the following integral equation \begin{eqnarray*} \rho(1/x) =\mathbb{P}( X_1^\downarrow \leq x) &=& \int_0^x \mathrm{d}u \, \mathbb{P}( X_1^\downarrow \leq \frac{x}{1-u})\\ &=& \int_0^x \mathrm{d}u \, \rho\left( \frac{1-u}{x} \right)\\ \rho(y) & \underset{\begin{subarray}{c}v=u/x\\ y=1/x \end{subarray}}{=}& \frac{1}{y} \int_0^1 \mathrm{d}v \, \rho(y-v). \end{eqnarray*} Differentiating the equality $y \rho(y) = \int_0^1 \mathrm{d}v \, \rho(y-v)$ with respect to $y$, we recover the delayed differential equation of the proposition. \qed \bigskip Related to the Dickman function, let us state a famous riddle: \begin{quote} The director of a prison offers 100 death row prisoners, who are numbered from 1 to 100, a last chance. A room contains a cupboard with 100 drawers. The director randomly puts one prisoner's number in each closed drawer. The prisoners enter the room, one after another. Each prisoner may open and look into 50 drawers in any order.
The drawers are closed again afterwards. If, during this search, every prisoner finds their number in one of the drawers, all prisoners are pardoned. If even one prisoner does not find their number, all prisoners die. Before the first prisoner enters the room, the prisoners may discuss strategy but may not communicate once the first prisoner enters to look in the drawers. What is the prisoners' best strategy? \end{quote} Opening $50$ drawers at random (independently for each prisoner) is a hopeless strategy since the probability that they all manage to find their numbers is $(1/2)^{100} \approx 0$. However, they can correlate their searches: the $i$th prisoner starts with the $i$th drawer, looks at the discovered label and successively follows the cycle of the underlying permutation of the labels. The probability of success is then the probability that no cycle of the permutation of the labels has a length larger than $50$, which is approximately $ \mathbb{P}(X_1^\downarrow \leq 1/2) = \rho(2) = 1- \log 2 \approx 30 \%$ (for $n=100$ the exact value is $1 - \sum_{k=51}^{100} \frac{1}{k} \approx 0.3118$, since a permutation has at most one cycle of length larger than $n/2$). \paragraph{Formulas without words.} \begin{eqnarray*} \mbox{Euler's constant} & = & \int_1^\infty\left( \frac{1}{\lfloor x \rfloor } -\frac{1}{x} \right) \mathrm{d}x = \log \int_0^\infty \mathrm{d}x\, \rho(x).\\ \mbox{Golomb--Dickman constant} &=& \int_0^1 \mathrm{d}x\,\mathrm{exp} \left( \int_0^x \frac{\mathrm{d}t}{ \ln t} \right) = \mathbb{E}[X_1^\downarrow] = \int_0^\infty \mathrm{d}t\frac{ \rho(t) }{(t+1)^2}.\\ \mbox{Law of } \sum_{k\geq 1} \prod_{j=1}^k U_j &=& \mathrm{e}^{-\gamma} \rho( x)\, \mathrm{d}x, \quad \mbox{where the $U_i$ are i.i.d.~uniform on $[0,1]$.} \end{eqnarray*} \section{Poisson count for short cycles} In the previous section, we saw that the Poisson--Dirichlet law is the limit law of the large cycles in a random uniform permutation. However, the information about the small cycles is lost in this limit and we will see below that they are ruled by the Poisson paradigm already encountered in Section \ref{sec:poissonparadigm}.
\subsection{An abstract limit from the Feller coupling} Recall the setup of Theorem \ref{thm:fellercoupling}: let $( B_{k} \sim \mathrm{Ber}(1/k) : k \geq 1)$ be independent Bernoulli variables of parameter $1/k$ and denote by $1 = \tilde{I}_{1} < \tilde{I}_{2} < \cdots$ the indices of the variables equal to $1$ (beware: those variables are now indexed ``in the other direction'' compared to the previous section). In a sense, the spacings between consecutive points of $\mathcal{S}_{\infty} := ( \tilde{I}_k: k \geq 1)$ could be seen as the cycle structure of an ``infinite permutation''. Down to earth, we have $$ \forall A \geq 1, \quad \sum_{k=1}^\infty \mathbb{P}( \tilde{I}_{k+1}-\tilde{I}_{k} = A) = \sum_{i=1}^\infty \sum_{k=1}^\infty \mathbb{P}( \tilde{I}_k =i, \tilde{I}_{k+1} = i+A) \leq \sum_{i=1}^\infty \frac{1}{i(i+A)} < \infty,$$ so that the Borel--Cantelli lemma shows that $ \tilde{I}_{k+1}- \tilde{I}_{k} \to \infty$ almost surely as $k \to \infty$. In particular, we can define the increasing rearrangement of the spacings between consecutive points of $( \tilde{I}_k: k \geq 1)$ as well as their counts $$ \mathcal{N}_{i} := \# \{ k \geq 1 : \tilde{I}_{k+1}- \tilde{I}_{k} =i\} < \infty.$$ Below we write $N_i(\boldsymbol{\sigma}_n)$ for the \textbf{number of cycles} of length $i$ in the decomposition of the random uniform permutation $\boldsymbol{\sigma}_n$ into product of cycles with disjoint supports. Given Theorem \ref{thm:fellercoupling}, it is rather straightforward to show that $(N_{i}( \boldsymbol{\sigma}_{n}) : i \geq 1)$ converges in law as $n \to \infty$: \begin{proposition} \label{prop:limitfellereasy} We have the convergence in law (in the sense of finite-dimensional marginals) \begin{eqnarray} ( N_{i}( \boldsymbol{\sigma}_{n}) : i \geq 1) \xrightarrow[n\to\infty]{(d)} ( \mathcal{N}_{i} : i \geq 1).
\label{eq:limitPoisson?}\end{eqnarray} \end{proposition} \noindent \textbf{Proof.} Feller's coupling (Theorem \ref{thm:fellercoupling}) provides a way to couple uniform permutations $( \boldsymbol{\sigma}_n^{ \mathrm{fel}}: n \geq 1)$ on a common probability space so that $\boldsymbol{\sigma}_n^{ \mathrm{fel}} = \boldsymbol{\sigma}_n$ in law and such that the cycle structure of $ \boldsymbol{\sigma}_n^{ \mathrm{fel}}$ coincides with the spacings between the points $ 1=\tilde{I}_{1} < \tilde{I}_{2} < \cdots < \tilde{I}_{\ell_n} < (n+1)$ where $\tilde{I}_{\ell_n}$ is the last index strictly before $(n+1)$. In this coupling we \textit{nearly} have the almost sure convergence $ N_i( \boldsymbol{\sigma}_n^{\mathrm{fel}}) \to \mathcal{N}_i$ as $n \to \infty$. The reason that the coupling falls short of proving this point-wise convergence is that if $(n+1)$ is large and located precisely $i_0$ units after a point of $ \mathcal{S}_\infty$ (with no other point in-between) then we have $N_{i_0}(\boldsymbol{\sigma}_n^{\mathrm{fel}}) = \mathcal{N}_{i_0} + 1$. However, for any positive function $f$ bounded by $C>0$ and any $k_0 \geq 1$ we have \begin{eqnarray*} && \Big|\mathbb{E}[f( N_i( \boldsymbol{\sigma}_n^{ \mathrm{fel}}) : 1 \leq i \leq k_0)] - \mathbb{E}[f( \mathcal{N}_i: 1 \leq i \leq k_0)]\Big| \\ &\leq& C \Big(\mathbb{P}( \mathcal{S}_\infty \cap \{n-k_0,\dots, n-1,n\} \ne \varnothing) + \mathbb{P}( \exists \tilde{I}_{\ell} \geq n \mbox{ with } \tilde{I}_{\ell+1} - \tilde{I}_\ell \leq k_0)\Big)\\ & \xrightarrow[n\to\infty]{} &0. \end{eqnarray*} The desired convergence in law follows. \qed \bigskip We will see in Theorem \ref{thm:poissoncycle} below that the law of $ ( \mathcal{N}_{i} : i \geq 1)$ is actually super simple! A simple way to see this is to take a small detour using Cauchy's formula and to randomize the permutation's length. This operation, usually called Poissonization, will be made crystal clear in Chapter \ref{chap:AK}.
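As a numerical teaser of how simple the limit is, one can simulate the number of fixed points $N_1(\boldsymbol{\sigma}_n)$ of a uniform permutation (helper names are ours): empirically it behaves like a Poisson variable of mean $1$, in line with the Poisson paradigm.

```python
import math
import random

def fixed_points(n, rng):
    """Number of fixed points N_1 of a uniform permutation of {0, ..., n-1}."""
    perm = list(range(n))
    rng.shuffle(perm)
    return sum(1 for i, v in enumerate(perm) if i == v)

rng = random.Random(5)
n, trials = 200, 30000
counts = [fixed_points(n, rng) for _ in range(trials)]
mean = sum(counts) / trials                          # E[N_1] = 1 exactly for every n
p_zero = sum(1 for c in counts if c == 0) / trials   # close to exp(-1)
```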
\subsection{Cauchy's formula and interpretation} The starting point is a famous formula due to Cauchy giving the exact law of the cycle-counting function. With the notation above we have: \begin{proposition}[Cauchy] For any $c_1,c_2, \dots , c_n \in \mathbb{Z}_{\geq 0}$ so that $\sum_{i=1}^n i c_i =n$ we have $$ \mathbb{P}(N_i(\boldsymbol{\sigma}_n) = c_i, \forall 1 \leq i \leq n) = \prod_{i=1}^n\frac{(1/i)^{c_i}}{(c_i) !}.$$ \end{proposition} \noindent \textbf{Proof.} Once the cycle structure $(c_{i} : 1 \leq i \leq n)$ of the permutation $ \boldsymbol{\sigma}_{n}$ has been fixed (with the obvious constraint), the number of possible candidates is obtained by: \begin{itemize} \item distributing the $n$ numbers $1,2, \dots , n$ into the $\sum c_{i}$ boxes of sizes $1,2, \dots , i, \dots, n$: since the $c_{i}$ boxes of size $i$ are indistinguishable, there are $$ \frac{n!}{ \prod_{i=1}^{n} (i!)^{c_{i}}}\cdot \prod_{i=1}^{n} \frac{1}{ c_{i} !} \quad \mbox{ such choices};$$ \item then constructing an $i$-cycle with the numbers in each box of size $i$: there are $(i-1)!$ possibilities each. \end{itemize} We deduce that the probability in the proposition is given by $$ \frac{1}{ n!} \cdot \frac{n!}{ \prod_{i=1}^{n} (i!)^{c_{i}}} \prod_{i=1}^{n} \frac{1}{ c_{i} !} \prod_{i=1}^{n} \big((i-1)!\big)^{c_{i}} = \prod_{i=1}^n\frac{(1/i)^{c_i}}{(c_i) !}.$$ \qed \bigskip Let us put our probabilist's glasses on and interpret the previous formula as follows: \begin{eqnarray*} \mathbb{P}(N_i(\boldsymbol{\sigma}_n) = c_i, \forall 1 \leq i \leq n) &=& \prod_{i=1}^n\frac{(1/i)^{c_i}}{(c_i) !}\\ &=& \mathrm{e}^{1+ \frac{1}{2} + \cdots + \frac{1}{n}} \cdot \prod_{i=1}^n \mathrm{e}^{-1/i} \frac{(1/i)^{c_i}}{(c_i) !} \\ &=& \mathrm{e}^{ \mathrm{h}_n} \prod_{i=1}^n \mathbb{P}(Z_i = c_i),\end{eqnarray*} where $(Z_i : i \geq 1)$ are independent Poisson random variables with mean $1/i$, and where $ \mathrm{h}_n$ is the $n$th harmonic sum.
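Before pushing the interpretation further, note that Cauchy's formula can be verified by brute-force enumeration for small $n$ (helper names are ours):

```python
from fractions import Fraction
from itertools import permutations
from collections import Counter
from math import factorial

def cycle_type(perm):
    """Sorted tuple of cycle lengths of a permutation of {0,...,n-1} given by its images."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

def cauchy_probability(typ):
    """Right-hand side of Cauchy's formula for a given cycle type."""
    c = Counter(typ)                     # c[i] = number of cycles of length i
    prob = Fraction(1)
    for i, ci in c.items():
        prob *= Fraction(1, i ** ci * factorial(ci))
    return prob

n = 6
counts = Counter(cycle_type(p) for p in permutations(range(n)))
all_match = all(Fraction(cnt, factorial(n)) == cauchy_probability(typ)
                for typ, cnt in counts.items())
```

For $n=6$ there are $5!=120$ full cycles and a single identity, and every cycle type matches the formula exactly.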
In other words, the vector $ (N_i(\boldsymbol{\sigma}_n) : 1 \leq i \leq n)$ has the same law as $ (Z_i : 1 \leq i \leq n)$ conditioned on the event $\{ \sum_{1 \leq i \leq n} i Z_i =n \}$. This observation, due to Kolchin, can actually be pushed a little further as remarked by Lloyd \& Shepp. Denote by $\sigma_0$ the permutation with $0$ cycles so that $N_i(\sigma_0) =0$ for all $i \geq 1$. For $x \in (0,1)$, for any sequence $(c_i : i\geq 1)$ of integers which is eventually the null sequence, if we denote by $N = \sum i c_i$ then we have \begin{eqnarray*} (1-x) \sum_{n =0}^\infty x^n \mathbb{P}\big(N_i(\boldsymbol{\sigma}_n) = c_i, \ \forall i \geq 1\big) &=& (1-x) x^N \prod_{i=1}^\infty \frac{(1/i)^{c_i}}{(c_i) !} \\ &=& \underbrace{(1-x) \mathrm{e}^{x+ \frac{x^2}{2} + \cdots + \frac{x^n}{n} + \cdots}}_{1} \prod_{i=1}^\infty \mathrm{e}^{- x^i /i } \frac{(x^i/i)^{c_i}}{(c_i) !}. \end{eqnarray*} This means: \begin{lemma} If $ \mathbf{n}_x \in \{0,1,2, \dots\}$ is a geometric random variable with mean $ \frac{x}{1-x}$ and if, conditionally on $ \mathbf{n}_x$, we let $\boldsymbol{\sigma}_{\mathbf{n}_x}$ be a uniform permutation on $ \mathfrak{S}_{\mathbf{n}_x}$, then the cycle counts $(N_i( \boldsymbol{\sigma}_{\mathbf{n}_x}) : i \geq 1)$ have the same law as independent Poisson random variables with means $ \frac{x^i}{i}$ for $i \geq 1$. \label{lem:shepplloyd} \end{lemma} We will see in Chapter \ref{chap:AK} that the above lemma follows from combining the construction of the random recursive tree from a Yule process in continuous time and the Chinese restaurant process (sic!).
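Lemma \ref{lem:shepplloyd} is easy to test by simulation. In the sketch below (all names are ours) we sample the geometric length by inverse transform, shuffle, and compare the mean number of fixed points with the predicted Poisson mean $x$:

```python
import math
import random

random.seed(2024)

def geometric(x):
    # P(n) = (1-x) x^n on {0,1,2,...}: inverse-transform sampling,
    # since P(geometric >= k) = x^k
    u = random.random()
    return int(math.log(u) / math.log(x)) if u > 0 else 0

x, trials, total = 0.5, 20_000, 0
for _ in range(trials):
    n = geometric(x)
    perm = list(range(n))
    random.shuffle(perm)
    total += sum(1 for i, v in enumerate(perm) if v == i)  # fixed points = N_1
# the lemma predicts N_1 ~ Poisson(x), hence mean x
mean_fixed_points = total / trials
```

The same experiment applied to $N_i$ for $i \geq 2$ would give means close to $x^i/i$.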
\begin{exo}[Random $\zeta$-number] \label{exo:zeta} For $s>1$, consider $ \mathbf{N}_s \in \{1,2,\dots\}$ a random number sampled according to $$ \mathbb{P}( \mathbf{N}_s = n) = \frac{1}{\zeta(s)} n^{-s}.$$ Show that the $p$-valuations $(\nu_p( \mathbf{N}_s) : p \in \mathcal{P})$ are independent geometric random variables with success parameters $(1/p)^s$ for all prime numbers $p \in \mathcal{P}$. \end{exo} \subsection{Poisson limit} We are now armed to prove the following: \begin{theorem}[Goncharov, Kolchin] Recall that $N_i(\boldsymbol{\sigma}_n)$ is the number of cycles of length $i\geq 1$ in the decomposition of the uniform permutation $\boldsymbol{\sigma}_n$ into a product of cycles with disjoint supports. Then we have the following convergence in law for the finite-dimensional marginals $$ \big( N_i(\boldsymbol{\sigma}_n) : i \geq 1\big) \xrightarrow[n\to\infty]{(d)} \big( \mathrm{Poi}( 1/i) : i \geq 1 \big),$$ where the Poisson random variables on the right-hand side are independent and of mean $1/i$ for $i \geq 1$. \label{thm:poissoncycle} \end{theorem} \begin{remark}[Derangements] We recover the famous asymptotics of the number of derangements (permutations without fixed points) since the last theorem implies in particular that as $n \to \infty$ we have $$ \frac{\# \{ {\sigma_n} \in \mathfrak{S}_n : {\sigma}_n \mbox{ has no fixed points}\}}{n!} = \mathbb{P}(N_1(\boldsymbol{\sigma}_n) =0) \xrightarrow[n\to\infty]{} \mathbb{P}( \mathrm{Poi}(1) =0) = \mathrm{e}^{-1}.$$ In fact, the inclusion-exclusion principle shows that we have the explicit series representation $ \sum_{k=0}^{n} (-1)^k \frac{n!}{k!}$ for the number of derangements of $ \mathfrak{S}_n$. \end{remark} \noindent \textbf{Proof.} We already know from \eqref{eq:limitPoisson?} that $( N_i( \boldsymbol{\sigma}_n) : i \geq 1)$ converges in law towards some limiting vector $( \mathcal{N}_i : i \geq 1)$ as $n \to \infty$.
On the other hand, if we let $ x \to 1$ in Lemma \ref{lem:shepplloyd} then $ \mathbf{n}_x \to \infty$ in probability. Since conditionally on $ \mathbf{n}_x$ the permutation $\boldsymbol{\sigma}_{ \mathbf{n}_x}$ is uniform, we deduce that $$ \begin{array}{rcl} ( N_i(\boldsymbol{\sigma}_{ \mathbf{n}_x}) : i \geq 1) & \xrightarrow[x\to1]{(d)} & ( \mathcal{N}_i : i \geq 1)\\ \rotatebox{90}{=} \quad \mbox{in law}&& \quad \rotatebox{90}{=} \quad \mbox{in law} \\ \big( \mathrm{Poi}( x^i/i) : i \geq 1\big) & \xrightarrow[x\to1]{(d)} & \big( \mathrm{Poi}( 1/i) : i \geq 1\big), \end{array}$$ where all the Poisson variables are independent. \qed \begin{remark}[Direct calculation] It can be seen directly that the variables in \eqref{eq:limitPoisson?} are independent Poisson variables with mean $1/i$ without referring to random permutations. In fact, once the limit has been re-interpreted as the spacings between records of i.i.d.~uniforms on $[0,1]$, it is a consequence of a more general theorem due to Ignatov on the Poissonian structure of record values of a Markov process. We refer the interested reader to \cite{najnudel2020feller} and \cite{resnick2008extreme} for details. \end{remark} \paragraph{Bibliographical notes.} There are many references on the popular subject of random permutations, see e.g. the Saint-Flour lectures of Pitman \cite{Pit06} in particular Section 3.1 or the Bible in combinatorics \cite{Flajolet:analytic}. Various sets of lecture notes are also available on the web such as \cite{feray2019random,gerinmini} and more recent results about ``logarithmic combinatorial structures'' can be found in \cite{arratia2003logarithmic}. Feller's coupling is proved in \cite{feller1945fundamental}, and the Poisson counting limit is due to Goncharov and Kolchin, but our proof based on Lemma \ref{lem:shepplloyd} is inspired by Lloyd and Shepp \cite{shepp1966ordered}.
For more about the appearance of Dickman's function in probabilistic and analytic number theory, see \cite{tenenbaum2015introduction} and \cite{chamayou1973probabilistic}. We also refer to \cite{bureaux2015methodes} for other applications of the randomization technique to random partitions. \bigskip \noindent{\textbf{Hints for Exercises.}}\ \\ Exercise \ref{exo:taillecycle}: The size of the cycle containing $1$ in $\sigma_n$ is equal to the value of the pre-image of $1$ in $ \mathrm{Foata}(\sigma_n)$.\\ Exercise \ref{exo:heiseiberg}: A random permutation sampled with probability proportional to $2^{\# \mathrm{cycles}}$ appears in T\'oth's representation of the quantum Heisenberg ferromagnet on the complete graph (sic!), see \cite{toth1993improved}.\\ Exercise \ref{exo:zeta}: Re-interpret the Euler product formula $ \displaystyle \prod_{p \in \mathcal{P}} \left( \frac{1}{1- \frac{1}{p^s}}\right) = \sum_{n=1}^\infty \frac{1}{n^s}.$ \chapter{Random recursive tree} \label{chap:RRT} \hfill L'arbre, c'est cette puissance qui lentement \'epouse le ciel. \medskip \hfill A. de Saint-Exup\'ery \bigskip In this chapter we study the following random tree growth model: \begin{definition}[RRT] The \textbf{random recursive tree} (RRT) is the Markov chain with values in the set of all unoriented labeled trees such that $T_0 = $ \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {0}}} and so that for $n \geq 1$, conditionally on $T_{n-1}$, the labeled tree $T_{n}$ is obtained by attaching the new vertex \raisebox{.5pt}{\textcircled{\raisebox{-.6pt} {$n$}}} onto a uniform vertex of $T_{n-1}$. \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{recursive10000} \caption{A simulation of $T_{10\,000}$ where the root vertex $ \noeud{0}$ is placed at the top.
Clearly, the random recursive tree seems ``short and fat''.} \end{center} \end{figure} Obviously there are $n!$ possible values for $T_n$: these are all \textbf{increasing labeled trees with $n+1$ vertices} i.e.~unoriented trees labeled from $0$ up to $n$ and so that the labels along each branch starting from \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {0}}} are increasing. For each $n\geq 0$, the RRT $T_{n}$ is a uniform random variable over this set. We shall start with a link between this model of random tree and random permutations of the symmetric group $ \mathfrak{S}_{n}$ over $n$ elements. \section{Chinese Restaurant process} \label{sec:RRTandpermut} Since there are $n!$ equiprobable values for $T_n$, the RRT stopped at time $n$ can be seen as an encoding of a uniform permutation $ \boldsymbol{ \sigma}_n$ of $ \mathfrak{S}_{n}$. Moreover, it is possible to couple these encodings in a particularly nice way so that it is coherent for all $n \geq 1$ simultaneously: this is the so-called \textit{Chinese restaurant process} (CRP). This coupling is different from Feller's coupling seen in the previous chapter. \medskip \subsection{Coupling CRP-RRT} \label{sec:CRP-RRT} Let $\sigma_{n} \in \mathfrak{S}_{n}$ be a permutation over $\{1,2, \dots , n\}$. If $ n \geq 2$, we can canonically associate with $\sigma_n$ a permutation $[\sigma_{n}]_{n-1} \in \mathfrak{S}_{n-1}$ as follows: it is the permutation defined for $ k \in \{1, \dots , n-1\}$ by $$ \left\{ \begin{array}{cl} [\sigma_{n}]_{n-1}(k) = \sigma_{n}(k) & \mbox{ if } \sigma_{n}(k) \ne n,\\ [\sigma_{n}]_{n-1}(k) = \sigma_{n}(n) & \mbox{ if } \sigma_{n}(k) = n.\\ \end{array}\right. $$ The effect of removing the value $n$ from $ \sigma_n$ is better understood on the cycle decomposition: the permutation $[\sigma_n]_{n-1}$ is obtained by removing the value $n$ in the cycle of $ \sigma_n$ which contains it. 
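The restriction $\sigma_n \mapsto [\sigma_{n}]_{n-1}$ is straightforward to implement, and on small cases one can verify that it maps the uniform law on $\mathfrak{S}_n$ to the uniform law on $\mathfrak{S}_{n-1}$. A minimal Python sketch (helper names are ours):

```python
import itertools
from collections import Counter

def restrict(sigma):
    # [sigma]_{n-1}: remove the value n from the cycle of sigma containing it;
    # sigma is a dict k -> sigma(k) on {1,...,n}
    n = len(sigma)
    return {k: (sigma[k] if sigma[k] != n else sigma[n]) for k in range(1, n)}

# example: removing 4 from the cycle (1 3 4) of (1 3 4)(2) leaves (1 3)(2)
sigma = {1: 3, 2: 2, 3: 4, 4: 1}
tau = restrict(sigma)

# each permutation of S_3 is reached by exactly 4!/3! = 4 permutations of S_4,
# so uniformity is preserved by the restriction
images = Counter(
    tuple(sorted(restrict(dict(enumerate(p, start=1))).items()))
    for p in itertools.permutations(range(1, 5))
)
```

The count of preimages ($4$ for each element of $\mathfrak{S}_3$) reflects the fact that the value $4$ can be inserted either as a fixed point or just after any of the $3$ existing values.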
By extending the restriction step by step we can define $[\sigma_{n}]_{k}$ for all $k \leq n$ and it is easy to see that if $\boldsymbol{\sigma}_{n}$ is uniform over $ \mathfrak{S}_{n}$ then $[ \boldsymbol{\sigma}_{n}]_{k}$ is also uniformly distributed over $ \mathfrak{S}_{k}$. Actually, it is easy to reverse the procedure and construct a sequence of random permutations $(\boldsymbol{\sigma}_{n}^{ \mathrm{cr}} \in \mathfrak{S}_{n} : n \geq 1)$ as a Markov chain. Specifically, let $\boldsymbol{\sigma}^{ \mathrm{cr}}_{1}= (1)$ and for $n \geq 2$, conditionally on $\boldsymbol{\sigma}_{n-1}^{ \mathrm{cr}}$, the permutation $\boldsymbol{\sigma}_{n}^{ \mathrm{cr}}$ is obtained with probability $1/n$ by just declaring $\boldsymbol{\sigma}_n^{ \mathrm{cr}}(n)=n$ and with probability $1- \frac{1}{n}$ by picking a uniform integer $k \in \{1, \dots , n-1\}$ and declaring that $$\boldsymbol{\sigma}^{ \mathrm{cr}}_{n}(k) = n\quad \mbox{and }\quad \boldsymbol{\sigma}^{ \mathrm{cr}}_{n}(n) = \boldsymbol{\sigma}^{ \mathrm{cr}}_{n-1}(k),$$ the other values being unchanged between $\boldsymbol{\sigma}_{n}^{ \mathrm{cr}}$ and $\boldsymbol{\sigma}_{n-1}^{ \mathrm{cr}}$. With the above notation we have $[\boldsymbol{\sigma}^{ \mathrm{cr}}_{n}]_{k}= \boldsymbol{\sigma}_{k}^{ \mathrm{cr}}$ and this Markov chain produces a coupling of permutations uniformly distributed over $ \mathfrak{S}_{n}$ for each $n$. \medskip The evolution of the cycle structure of $ \boldsymbol{\sigma}^{ \mathrm{cr}}_{n}$ in the previous Markov chain is described by the following mechanism called the \textbf{Chinese restaurant process}\footnote{ \raisebox{-5mm}{\includegraphics[width=1cm]{pitman}} Jim Pitman (1949--), Australian}: In this process, customers $1,2,3,\dots$ arrive sequentially in an imaginary (Chinese) restaurant. At step $n=1$, the customer \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$1$}}} arrives and sits at a new table.
Inductively at step $n \geq 2$, the customer \raisebox{.5pt}{\textcircled{\raisebox{-.6pt} {$n$}}} sits to the right of any given one of the $n-1$ previous customers with probability $\tfrac1n$ each, or creates a new table with probability $\tfrac1n$. It should be clear from the above construction that the tables in the Chinese restaurant process describe the cycle structure of the growing sequence of permutations $(\boldsymbol{\sigma}^{ \mathrm{cr}}_n : n \geq 1)$. \medskip The Chinese restaurant process is canonically coupled with the RRT $(T_n : n \geq 0)$ by declaring that the new customer $n$ corresponds to the vertex \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$n$}}} and it attaches in $T_n$ to the vertex corresponding to the customer on its left, or to the vertex \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}} if this customer creates a new table. See Figure \ref{fig:CRP}. Thanks to this coupling, we deduce in particular that the degree of $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$ in $T_{n}$ is equal to the number of cycles in the cycle decomposition of $\boldsymbol{\sigma}_{n}^{ \mathrm{cr}}$. \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{restochinois} \caption{ \label{fig:CRP} Illustration of the coupling between growing permutations $( \boldsymbol{\sigma}_n^{ \mathrm{cr}} : n \geq 1)$ on the left, the Chinese restaurant process in the middle and the random recursive tree $(T_n : n \geq 1)$ starting with initial state $0$ on the right.} \end{center} \end{figure} \subsection{P\'olya urn and almost sure convergence towards Poisson--Dirichlet} \label{sec:polyaurn} The Chinese restaurant coupling $( \boldsymbol{\sigma}_n^{ \mathrm{cr}} : n \geq 1)$ is a different coupling from Feller's coupling $( \boldsymbol{\sigma}_n^{ \mathrm{fel}} : n \geq 1)$ used in the proof of Proposition \ref{prop:limitfellereasy}.
Roughly speaking, in the Chinese restaurant coupling, the structure of large cycles converges almost surely (see below), whereas in Feller's coupling the structure of small cycles (nearly) converges almost surely. Recalling Theorem \ref{thm:PDforcycles}, we have here: \begin{theorem}[Almost sure convergence of the Chinese restaurant process] Let $ (K_i(\boldsymbol{\sigma}_n^{ \mathrm{cr}}) : i \geq 1)$ be the cycle lengths of $\boldsymbol{\sigma}_n^{ \mathrm{cr}}$ in the Foata encoding \eqref{eq:foata} i.e.~the table sizes ranked by order of creation in the Chinese restaurant process. If $(X_i : i \geq 1)$ is the (unranked) Poisson--Dirichlet random partition of $[0,1]$ (see Definition \ref{def:PD}) then we have the following almost sure convergence in $\ell_1^{(1)}$ \label{thm:CVPDps} $$ \left( \frac{ K_i(\boldsymbol{\sigma}_n^{ \mathrm{cr}})}{n} : i \geq 1\right) \xrightarrow[n\to\infty]{a.s.} (X_i: i \geq 1).$$ \end{theorem} To prove the theorem let us first focus on the behavior of the process $$ (R_n,B_n) := \left( K_1( \boldsymbol{\sigma}_n^{ \mathrm{cr}}) , 1+\sum_{i \geq 2} K_i(\boldsymbol{\sigma}_n^ { \mathrm{cr}})\right),$$ for $n \geq 1$. It is clear from the definition of the Chinese restaurant process that this is a Markov chain starting from $R_1=1, B_1=1$ and with transition probabilities given by \begin{eqnarray} \mathbb{P}(R_{n+1} = R_n + 1 \mid R_n,B_n) = \frac{R_n}{R_n+B_n}, \quad \mathbb{P}(B_{n+1} = B_n + 1 \mid R_n,B_n) = \frac{B_n}{R_n+B_n}. \label{eq:polya}\end{eqnarray} We recognize here the (law of the) famous \textbf{P\'olya\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{polya}} George (Gy\"orgy) P\'olya (1887--1985), Hungarian} urn}, which is the stochastic system informally described as follows: initially at time $n=1$ an urn contains one red ball and one blue ball. At each step, a ball is drawn from the urn uniformly at random and is replaced in the urn \textbf{together} with a new ball of the same color (reinforcement).
Then the number $(R_n,B_n)$ of red and blue balls at step $n$ is clearly a Markov chain with transitions \eqref{eq:polya}. \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{polyaurnfig} \caption{Mechanism of the standard P\'olya urn: a ball is drawn uniformly at random and is replaced together with a ball of the same color (reinforcement). On the right, the transitions for the number of balls of each color over the first steps of the process.} \end{center} \end{figure} \begin{proposition}[Convergence of proportions] \label{prop:polyaunif} In the standard P\'olya urn started with one ball of each color, the proportion of red balls converges towards a uniform random variable on $[0,1]$. \end{proposition} \noindent \textbf{Proof.} It is straightforward to check that $R_n/(R_n+B_n)$ is a bounded martingale (for the canonical filtration) which thus converges almost surely towards a limiting proportion $U \in [0,1]$. An easy induction on $n \geq 1$ shows that $R_{n}$ is uniformly distributed over $\{1,2, \dots , n\}$ and so $$ \frac{R_{n}}{R_{n}+B_{n}} = \frac{R_{n}}{n+1} \xrightarrow[n\to\infty]{a.s.} U \sim \mathrm{Unif}[0,1].$$ In the next chapter, we will see another proof of this result based on continuous-time techniques. \qed \bigskip \begin{exo}[Asymmetric starting configuration] \label{exo:polyaassy} Compute the law of the limiting proportion of red balls when the P\'olya urn starts from $R_{0}=a$ and $B_{0} = N_{0}-a$ balls. \end{exo} \noindent \textbf{Proof of Theorem \ref{thm:CVPDps}.} The above discussion, together with Proposition \ref{prop:polyaunif} translated in the framework of the theorem, shows the almost sure convergence $ n^{-1} \cdot K_{1}( \boldsymbol{\sigma}_{n}^{ \mathrm{cr}}) \to U_{1}$ where $U_{1}$ is uniform over $[0,1]$.
However, it is easy to see that conditionally on the values $$ u_{k} = \inf \left\{ t \geq 0 : 1+ \sum_{i\geq 2} K_{i}( \boldsymbol{\sigma}_{t}^{ \mathrm{cr}}) = k\right\},$$ the restricted process $( K_{i}( \boldsymbol{\sigma}_{u_{k}}^{ \mathrm{cr}}) : i \geq 2)_{k \geq 1}$ has the law of a Chinese restaurant process (thus independent of $U_{1}$). By successive applications of the above reasoning we deduce that $$ \frac{K_{i}( \boldsymbol{\sigma}_{n}^{ \mathrm{cr}})}{n} \xrightarrow[n\to\infty]{a.s.} (1-U_{1}) \cdots (1 - U_{i-1}) U_{i},$$ for independent random variables $(U_{i} : i \geq 1)$ uniformly distributed on $[0,1]$ as desired. \qed \medskip \section{Degrees} \label{sec:degreeRRT} In this section, we study the degrees of the vertices in $T_n$. More precisely, for $0 \leq i \leq n$ the \textbf{outdegree} (number of children) of \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}} in $T_{n}$ will be denoted by $$\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}}) = \#\{ i < j \leq n : \noeud{i} \sim \noeud{j} \mbox{ in } T_{n}\}.$$ \begin{figure}[!h] \begin{center} \includegraphics[width=8cm]{rrtdegree} \caption{Simulation of $T_{100}$ where the size and color of vertices illustrate their degrees. The first $20$ vertices have their labels displayed.} \end{center} \end{figure} \subsection{Degree of fixed vertices} By construction, for any $i \geq 0$ fixed, we have \begin{eqnarray}\label{eq:sumbernoulli} \left(\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}}) : n \geq 0 \right) = \left( \sum_{k=i+1}^n B_{k} : n \geq 0 \right), \end{eqnarray} where the Bernoulli random variables $ B_{k} \sim \mathrm{Ber}(1/k)$ are independent and of parameter $1/k$ for $k \geq 1$. Since $\sum_{k \geq 1} \frac{1}{k} = \infty$, the Borel--Cantelli lemma implies that the (out)degree of any vertex $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}}$ in $T_n$ tends to $\infty$ a.s.\ as $n \to \infty$.
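The Bernoulli representation \eqref{eq:sumbernoulli} can be checked by simulation: for the root, the events $\{$vertex $k$ attaches to $\noeud{0}\}$ are independent Bernoulli$(1/k)$ variables, so the mean out-degree is the harmonic sum. A minimal sketch (names are ours):

```python
import random

random.seed(5)

def root_outdegree(n):
    # vertex i attaches to a uniform vertex of {0,...,i-1}; the event
    # {attaches to 0} is an independent Bernoulli(1/i) variable for each i
    return sum(1 for i in range(1, n + 1) if random.randrange(i) == 0)

n, trials = 200, 20_000
mean_deg = sum(root_outdegree(n) for _ in range(trials)) / trials
harmonic = sum(1 / k for k in range(1, n + 1))  # expected root out-degree
```

For $n = 200$ the harmonic sum is about $5.88$ and the Monte-Carlo mean agrees to within a few standard errors.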
Also, by the coupling of the preceding section (or using Theorem \ref{thm:fellercoupling}) we deduce that for any $n \geq 1$ $$\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}) \overset{(d)}{=} \mathcal{C}_n,$$ where we recall from Proposition \ref{thm:numbercycles} that $ \mathcal{C}_{n}$ has the law of the number of cycles in a random uniform permutation $ \boldsymbol{\sigma}_{n} \in \mathfrak{S}_{n}$ (with the CRP coupling, we have $\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}) = \# \mathrm{Cycles}(\boldsymbol{\sigma}_n^{ \mathrm{cr}})$). In particular, we deduce from Proposition \ref{thm:numbercycles} that for each $i_0\geq 0$ fixed we have \begin{eqnarray} \label{eq:smalldegreelog} \frac{\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i_0$}}})}{\log n} \xrightarrow[n\to\infty]{( \mathbb{P})} 1, \end{eqnarray} and we will see later (Proposition \ref{prop:degreeroot}) that the convergence actually holds almost surely. \subsection{Empirical degree distribution} \label{sec:empiricalRRT} Let us now focus on the \textbf{empirical degree distribution} in $T_{n}$: we know from \eqref{eq:smalldegreelog} above that the vertices with small labels typically have a logarithmic degree, but as in any tree with $n+1$ vertices, the mean degree in $T_{n}$ is equal to $ \frac{2n}{n+1} \to 2$ as $n \to \infty$. So there must be a lot of vertices with small degrees.
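Indeed, a quick simulation (sketch below, variable names are ours) shows that about half the vertices of a large $T_n$ are leaves, about a quarter have one child, and so on:

```python
import random
from collections import Counter

random.seed(3)

n = 20_000
outdeg = [0] * (n + 1)
for i in range(1, n + 1):
    outdeg[random.randrange(i)] += 1  # vertex i attaches to a uniform earlier vertex

freq = Counter(outdeg)
leaf_fraction = freq[0] / (n + 1)       # empirically close to 1/2
one_child_fraction = freq[1] / (n + 1)  # empirically close to 1/4
```

This geometric pattern is exactly the content of the proposition below.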
More precisely, we let $ \mu_{n} $ be the (random) empirical distribution of the out-degrees defined by $$ \mu_{n} = \frac{1}{n+1}\sum_{i=0}^{n} \delta_{ \mathrm{deg}_{T_{n}}^{+}( {\footnotesize \noeud{i}})}.$$ It turns out that for large $n$ the empirical degree distribution converges towards a deterministic distribution (a stronger version will be proved in Section \ref{sec:rrtfromyule}): \begin{proposition}[Convergence of the empirical degree distribution] \label{prop:empirical}The empirical distribution of the out-degrees in $T_{n}$ converges in probability towards the critical geometric distribution of parameter $1/2$, i.e.~for each $k_{0} \geq 0$ we have $$ \mu_{n}(\{k_0\}) \xrightarrow[n\to\infty]{( \mathbb{P})} 2^{-k_{0}-1}.$$ \end{proposition} \begin{exo} Prove the above proposition by computing the first and second moments of $\mu_{n}( \{k_{0}\})$. \end{exo} \subsection{Maximal degree} By \eqref{eq:smalldegreelog}, the typical degree of vertices with fixed label is of order $ \log n$. Actually, the \textbf{largest} degree is much larger and is close to what would be the maximum of $n$ i.i.d.~critical geometric random variables, or in other words, as if we were sampling $n$ i.i.d.\ degrees distributed according to the limiting empirical degree distribution computed in Proposition \ref{prop:empirical}: \begin{theorem}[Devroye \& Lu] Let $ \mathrm{MaxDegree}(T_n) = \max\{ \mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}}) : 1 \leq i \leq n\}$ be the largest vertex (out)-degree in $T_{n}$. \label{prop:maxdegreeRRT} Then we have $$ \frac{\mathrm{MaxDegree}(T_n)}{ \log n} \xrightarrow[n\to\infty]{a.s.} \frac{1}{\log 2} \approx 1.44\dots$$ \end{theorem} \noindent \textbf{Teasing for the proof.} The convergence in probability can be approached using the first and second moment method, but the computations are really technical...
A neat proof goes through a representation of the RRT in continuous time (a.k.a.~the Rubin--Athreya construction) via a Yule process, see Chapter \ref{chap:AK}. \qed \section{Height} We now turn to the study of \textbf{heights} in $T_{n}$, i.e.~the distances of the vertices to the root $\noeud{0}$ in $T_{n}$. More precisely, for $ n \geq 0$, we denote by $H_{n}$ the height (or generation) of the vertex \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$n$}}} in the random recursive tree $T_{m}$ for $m \geq n$ (the definition does not depend on $m \geq n$ since the vertex $\noeud{n}$, once attached, is fixed in $T_{m}$ for $m \geq n$). \subsection{Typical height} Clearly, the heights of the first few vertices $H_{1}, H_{2}, \dots$ are small and given by the first stages in the construction of $(T_{n} : n \geq 0)$. We shall prove below the surprising fact that $H_{n}$ has the same law as $ \mathrm{deg}^{+}_{T_{n}}( \noeud{0})$, which is the law $ \mathcal{C}_{n}$ of the number of cycles in a uniform permutation $\boldsymbol{\sigma}_{n}$: \begin{proposition} For any $n \geq 0$ we have $H_{n} = \mathcal{C}_{n}$ in law. \end{proposition} \begin{remark} The above proposition shows that for \textbf{fixed} $n \geq 0$, we have $H_{n} = \mathcal{C}_{n} = \mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}})$ in law, but the previous equality does not hold as processes in $n \geq 0$: $$ ( H_{n} : n \geq 0) \quad \ne \quad \left(\mathrm{deg}^{+}_{T_{n}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}) : n \geq 0\right).$$ Indeed, the process on the right-hand side is non-decreasing and tends to $+\infty$ a.s. (see \eqref{eq:smalldegreelog}), while the first one does not: since the degree of \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}} tends to $+\infty$ as $n \to \infty$, infinitely many vertices are grafted onto \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}} and so $H_{n} =1$ for infinitely many values of $n$.
\end{remark} \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{heightsdyn} \caption{Plot of a simulation of the successive heights $(H_{i} : 0 \leq i \leq 1000)$ against the $\log$ function (in red).} \end{center} \end{figure} \noindent \textbf{Proof.} Since \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$n$}}} is grafted to a uniform node with label $ <n$ we have the following recursive distributional equation: $H_{0}=0$ and for $n \geq 1$ \begin{eqnarray} \label{eq:recursiveheight} H_{n} \overset{(d)}{=} 1 + H_{U_{n-1}}, \end{eqnarray} where on the right-hand side $U_{n-1}$ is uniformly distributed over $\{0,1,2, \dots , n-1\}$ and independent of the RRT defining $(H_{n} : n \geq 0)$. This type of equality is called a \textbf{recursive distributional equation}. Actually, we saw in the proof of Theorem \ref{thm:PDforcycles} that in a uniform permutation $ \boldsymbol{\sigma}_{n}$, the size $V_{n}$ of the cycle containing $1$ is uniformly distributed over $\{1,2, \dots , n\}$ and conditionally on it the remaining (relabeled) permutation is uniform over $ \mathfrak{S}_{n-V_{n}}$. In particular, $ \mathcal{C}_{n}$ satisfies the same recursive distributional equation as in \eqref{eq:recursiveheight}: $$ \mathcal{C}_{n} \overset{(d)}{=} 1 + \tilde{\mathcal{C}}_{n-V_{n}}, $$ where on the right-hand side $ ( \tilde{\mathcal{C}}_{i} : i \geq 0)$ are independent variables of law $ \mathcal{C}_{i}$ and also independent of the uniform variable $V_{n} \in \{1,2, \dots , n\}$. With the convention $ \mathcal{C}_{0}=0$, this is sufficient to show that $ \mathcal{C}_{n}$ and $H_{n}$ have the same law since those recursive equations \eqref{eq:recursiveheight} characterize their laws. \qed \bigskip Proposition \ref{thm:numbercycles} directly implies a central limit theorem: $$ \frac{H_{n}- \log n}{ \sqrt{\log n}} \xrightarrow[n\to\infty]{(d)} \mathcal{N}(0,1),$$ and a weak law of large numbers $H_{n} / \log n \to 1$ in probability (but not almost surely).
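The identity in law $H_n = \mathcal{C}_n$ can also be probed numerically: the mean height of vertex $n$ should match the harmonic sum $\mathrm{h}_n = \mathbb{E}[\mathcal{C}_n]$. A minimal sketch (names are ours):

```python
import random

random.seed(13)

def height_of_last_vertex(n):
    # grow the RRT while tracking heights: vertex i sits one level below
    # its uniformly chosen parent in {0,...,i-1}
    height = [0]
    for i in range(1, n + 1):
        height.append(height[random.randrange(i)] + 1)
    return height[n]

n, trials = 100, 20_000
mean_height = sum(height_of_last_vertex(n) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))  # = E[C_n] = E[H_n]
```

For $n = 100$ the harmonic sum is about $5.19$, in agreement with the Monte-Carlo mean.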
\subsection{Maximal height} As in the case of vertex degrees, the maximal height $$ \mathrm{Height}(T_n) := \max\{ H_i : 0 \leq i \leq n\}$$ of $T_n$ is much larger than the typical height and is also asymptotically the same as if the heights of different points were independent, that is comparable to $\sup\{ \mathcal{C}_{n}^{(k)} : 1 \leq k \leq n \}$ for independent random variables $ \mathcal{C}_{n}^{(k)}$ of the law described in Proposition \ref{prop:degreeroot}.\medskip \begin{theorem}[Pittel] \label{thm:heightRRT} We have $$ \frac{ \mathrm{Height}( T_n) }{\log n} \xrightarrow[n\to\infty]{ a.s.} \mathrm{e}.$$ \end{theorem} \noindent \textbf{Proof.} See Exercise \ref{exo:upperheight} below for the upper bound using the first moment method. The lower bound can in principle be approached by the second moment method but yields a very intricate proof. We shall prove this theorem using the continuous time embedding technique in Chapter \ref{chap:AK}. \qed \medskip \begin{exo}[Upper bound using Poisson approximation] \label{exo:upperheight} For $p \in (0,1)$ and $\alpha >0$ denote by $ \mathrm{Ber}(p)$ a Bernoulli variable with expectation $p$ and by $ \mathfrak{P}( \alpha)$ a Poisson variable with expectation $\alpha$. \begin{enumerate} \item Show that $ \mathrm{Ber}(p) \leq \mathfrak{P}(- \log (1-p))$ for the stochastic order and that $$ \mathrm{d_{TV}} ( \mathrm{Ber}(p), \mathfrak{P}(p)) = O (p^2), \quad \mbox{as } p \to 0,$$ where $ \mathrm{d_{TV}}$ is the total variation distance. \item Deduce that $ H_{n}$ is stochastically dominated by $1$ plus a Poisson variable with expectation $- \sum_{i=2}^{n}\log \left(1- \frac{1}{i} \right)$. \item Use \eqref{lem:LDpoisson} to conclude that for all $ \varepsilon>0$ we have $ \mathbb{P}( \mathrm{Height}(T_{n}) > (\mathrm{e} + \varepsilon) \log n) \to 0$ as $n \to \infty$. \item Prove that $\mathrm{Height}( T_n) \leq ( \mathrm{e} + \varepsilon) \log n$ eventually, a.s.
\end{enumerate} \end{exo} \paragraph{Bibliographical notes.} The random recursive tree and random uniform permutations over the symmetric group are both very well studied in probability theory. Standard references are Feller \cite{feller1945fundamental} and the Saint-Flour lectures of Pitman \cite{Pit06} in particular Section 3.1 or the renowned \cite{Flajolet:analytic}. Theorem \ref{prop:maxdegreeRRT} is due to Devroye \& Lu \cite{devroye1995strong} and Theorem \ref{thm:heightRRT} to Pittel \cite{pittel1994note}. See \cite{smythe1995survey,goldschmidt2005random,baur2014cutting} for more results about the random recursive tree. \medskip \noindent{\textbf{Hints for Exercises.}}\ \\ \noindent Exercise \ref{exo:polyaassy}: Show that $Z_n\,=\, \frac{R_n(R_n+1)\ldots(R_n+k-1)}{(n+N_0-1)(n+N_0 )\ldots(n+N_0+k-2)}\,$ is a martingale for the canonical filtration and deduce the moments of the limiting proportion of red balls. See Exercise \ref{exo:afresh} for a calculus-free approach.\\ Exercise \ref{exo:upperheight}: For the last question, use the polynomial decay of $ \mathbb{P}( \mathrm{Height}(T_{n}) \geq ( \mathrm{e}+ \varepsilon) \log n)$ obtained in $3)$ along a subsequence $ n = c^{k}$ for $c >1$. Conclude using the fact that $ n \mapsto \mathrm{Height}(T_{n})$ is non-decreasing. \chapter{Continuous-time branching processes} \label{chap:AK} \hfill Randomize to make it simpler!\bigskip In this chapter, we theorize the Poissonization technique which amounts to transforming a discrete-time process into a continuous-time version which possesses more independence properties. This will be particularly useful for urn processes and random tree growth mechanisms. \section{Continuous-time branching trees} Let us first recall the memorylessness property of exponential variables, which will be the crux of the continuous-time embedding technique.
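Before the formal definitions, the memorylessness property can be illustrated with a two-line experiment; the sketch below (variable names are ours) checks that the shifted tail of an exponential variable has the same tail probability as the unshifted one:

```python
import math
import random

random.seed(17)

samples = [random.expovariate(1.0) for _ in range(200_000)]  # E(1) variables
p_tail = sum(x > 1.0 for x in samples) / len(samples)  # estimates P(X > 1) = e^{-1}
# law of X - 0.5 conditionally on {X > 0.5}: memorylessness says it is again E(1)
survivors = [x - 0.5 for x in samples if x > 0.5]
p_cond = sum(x > 1.0 for x in survivors) / len(survivors)
```

Both empirical probabilities are close to $\mathrm{e}^{-1}$, as predicted.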
\subsection{Properties of exponential laws} In the following, for $ \alpha >0$ we denote by $ \mathcal{E}(\alpha)$ the exponential distribution of expectation $1/ \alpha$, i.e.~given by $$ \mathcal{E}( \alpha) = \alpha \cdot \mathrm{e}^{- \alpha x} \mathbf{1}_{x >0} \mathrm{d}x.$$ We shall say that $\alpha$ is the \textbf{rate} of the exponential: by the \textbf{memorylessness} property of the exponential distribution, if $X \sim \mathcal{E}(\alpha)$ we have \begin{eqnarray} \label{eq:expomemory} \mathbb{P}( X \in [x , x + \mathrm{d}x] \mid X \geq x) = \alpha \mathrm{d}x, \end{eqnarray} or equivalently, conditionally on $\{ X \geq t\}$ the variable $X-t$ has distribution $ \mathcal{E}(\alpha)$. Recall also that the memorylessness property is characteristic of the exponential and geometric laws: \begin{exo}[Memorylessness] \label{exo:memory} Let $X$ be a random variable with values in $ \mathbb{R}_+$ so that for every $a,b \in \mathrm{Supp}( \mathcal{L}(X))$ we have $$ \mathbb{P}(X >a+b \mid X >b)= \mathbb{P}(X >a).$$ Show that $X$ is either an exponential or a multiple of a geometric random variable. \end{exo} \paragraph{Choosing using clocks.} Consider $X_{1} \sim \mathcal{E}( \alpha_{1}), \dots , X_{k}\sim\mathcal{E}( \alpha_{k})$ a family of $k$ independent exponential variables of rates $ \alpha_{1}, \dots , \alpha_{k}$ and denote by $ \mathscr{M} = \min\{X_{i} : 1 \leq i \leq k\}$ and by $ \mathscr{J}\in \{1, \dots , k \}$ the index at which this minimum is attained.
Then we have: \begin{proposition}[Choosing with clocks]\label{prop:expomem} The index $\mathscr{J}$ is almost surely well-defined (there is no tie) and we have $$ \mathscr{M} \sim \mathcal{E}( \alpha_{1}+ \dots + \alpha_{k}), \quad \mathbb{P}(\mathscr{J} = i) = \frac{\alpha_{i}}{\alpha_{1} + \dots + \alpha_{k}},$$ and conditionally on $\{\mathscr{J}, \mathscr{M}\}$, the remaining variables $ X_{1} - \mathscr{M}, \dots , \widehat{ X_{\mathscr{J}}- \mathscr{M}}, \dots , X_{k}- \mathscr{M}$ are independent and of laws $ \mathcal{E}( \alpha_{1}) , \dots ,\widehat{ \mathcal{E}( \alpha_{\mathscr{J}})}, \dots , \mathcal{E}( \alpha_{k})$. \end{proposition} \noindent \textbf{Proof.} This can heuristically be explained as follows: by the memorylessness property of the exponential laws \eqref{eq:expomemory}, the variable $ \mathscr{M}$ must follow an exponential law with rate $\alpha_{1} + \dots + \alpha_{k}$ and given that $ \mathscr{M} \in [x, x+ \mathrm{d}x]$, the probability that $ \mathcal{E}( \alpha_{i})$ is the smallest is proportional to its rate, i.e. $$ \mathbb{P}(\mathscr{J} = i) = \frac{\alpha_{i}}{\alpha_{1} + \dots + \alpha_{k}}.$$ The remaining statement follows by the memorylessness property. More formally, since the variables are independent and have a density with respect to the Lebesgue measure, they are a.s.~pairwise distinct and so $\mathscr{J}$ is well-defined.
Furthermore, for any positive function $ \phi : \mathbb{R}_+ \times (\mathbb{R}_+)^{k-1} \to \mathbb{R}_+$ we have \begin{eqnarray*} && \mathbb{E}\left[ \phi\Big( \mathscr{M} ; X_{1} - \mathscr{M}, \dots , \widehat{ X_{j}- \mathscr{M}}, \dots , X_{k}- \mathscr{M}\Big) \mathbf{1}_{\mathscr{J} = j}\right] \\ &=& \int_0^\infty \mathrm{d} s_j \alpha_j \mathrm{e}^{- \alpha_j s_j} \int_{s_j}^\infty \left(\prod_{i \ne j}\mathrm{d}s_i \alpha_i \mathrm{e}^{-\alpha_i s_i}\right) \phi( s_j ; (s_1-s_j), \dots , \widehat{(s_j-s_j)}, \dots ,( s_k -s_j))\\ &\underset{ \begin{subarray}{c} m = s_j \\ \tilde{s}_i = s_i-s_j \end{subarray}}{=}& \frac{\alpha_j}{ \sum_i \alpha_i} \int_0^{\infty} \mathrm{d}m \ (\sum_i \alpha_i) \mathrm{e}^{- m ( \sum_i \alpha_{i})} \prod_{i \ne j} \int_0^\infty \mathrm{d} \tilde{s}_i \ \alpha_i \mathrm{e}^{- \alpha_i \tilde{s}_i} \ \phi (m ; \tilde{s}_1, \dots , \widehat{\tilde{s}_j}, \dots , \tilde{s}_k), \end{eqnarray*} and this proves the claim. \qed \medskip A consequence of the above proposition is that if we want to sample from $\{1,2, \dots, k \}$ proportionally to some weights $\alpha_{1}, \dots , \alpha_{k}$, one way, which may seem strange at first glance, is to sample independent exponential clocks $ (X_{i} \sim \mathcal{E}(\alpha_{i}) : 1 \leq i \leq k)$ and consider the index of the first clock that rings. The advantage of this point of view is that by Proposition \ref{prop:expomem}, the exponential clocks that have not rung can be further used (after subtracting the minimum) to sample the remaining items according to $(\alpha_{i})$ as well! We shall use many times the well-known extremal statistics of the exponential distribution: \begin{lemma}[Gumbel distribution] \label{lem:gumbel} Let $ X_1, \dots , X_n$ be i.i.d.~variables of law $ \mathcal{E}(1)$. We denote their maximum by $ \mathrm{M}_n = \max\{ X_i : 1 \leq i \leq n\}$.
Then we have the following convergence in distribution towards the Gumbel\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{gumbel.jpg}} Emil Julius Gumbel (1891-1966), German} distribution: $$ \mathrm{M}_n-\log n \xrightarrow[n\to\infty]{(d)} G \overset{(d)}{=} \mathrm{e}^{- \mathrm{e}^{-x}-x} \mathrm{d}x.$$ \end{lemma} Let us make two useful observations: First, if $G$ has the Gumbel distribution then $ \mathbb{P}(G \leq x) = \mathrm{e}^{- \mathrm{e}^{-x}}$ so that $ \mathrm{e}^{-G}$ has law $ \mathcal{E}(1)$. Second, by iterating Proposition \ref{prop:expomem} the variable $ \mathrm{M}_{n}$ has the same law as $$ \mathrm{M}_{n} \overset{(d)}{=} \mathcal{E}(n) \overset{\coprod}{+} \mathcal{E}(n-1) \overset{\coprod}{+}\dots \overset{\coprod}{+} \mathcal{E}(1),$$ where the variables are independent. We deduce that the right-hand side of the last display satisfies the same convergence as stated in the lemma. \noindent \textbf{Proof.} For $x \in \mathbb{R}$, if $X \sim \mathcal{E}(1)$ we have $$ \mathbb{P}( \mathrm{M}_n \leq \log n +x) = ( \mathbb{P}( X \leq \log n +x))^n= \left(1- \frac{\mathrm{e}^{-x}}{n}\right)^n \xrightarrow[n\to\infty]{} \mathrm{e}^{- \mathrm{e}^{-x}}.$$ \qed \begin{exo}[Hide and seek] \label{exo:hideseek}We sample i.i.d.~random variables in a finite set $ \mathbb{X}=\{x_1, \dots , x_n\}$ according to the weights $\alpha_1= \alpha_2= \dots= \alpha_{n-1}=1$ and $\alpha_n=2$ until all elements of $ \mathbb{X}$ have been seen in the sequence. What is the probability that the last element unseen is $x_n$? \end{exo} \subsection{Continuous branching trees and their discrete associated Markov chains} \label{sec:AK} Let $p \in \{1,2, \dots \} \cup \{\infty\}$ and denote by $ \mathfrak{y} = \{1,2, \dots , p\}$ if $p< \infty$, or $ \mathfrak{y} = \mathbb{Z}_{> 0}$ if $p = \infty$, the set of \textbf{discrete types}.
To ease notation, we shall identify the space $(\mathbb{Z}_{\geq 0})^{ \mathfrak{y}}$ with the space of discrete measures $\sum_{i \in \mathfrak{y}} x_i \delta_i$ with $x_i \in \mathbb{Z}_{\geq 0}$; for example $(2,0,0,\dots)$ will be written $2\delta_1$. For each type $ i \in \mathfrak{y}$, we are given a positive \textbf{weight} $\alpha_i >0$ and an \textbf{offspring distribution} $ \mu_{i}$ over $(\mathbb{Z}_{\geq 0})^{ \mathfrak{y}}$. Finally, let us fix $ \mathbf{x} \in (\mathbb{Z}_{\geq 0})^{ \mathfrak{y}}$ a non-zero \textbf{starting configuration}. We now create a random genealogical tree, more precisely a forest of trees, as follows. Under $ \mathbb{P}_{\sum_{i \in \mathfrak{y}} x_i \delta_i}$ the random forest $ \mathbb{F}$ (we shall write $ \mathbb{T}$ if there is a single tree, i.e.~if $ \mathbf{x} = \delta_{i_0}$ for some $ i_0 \in \mathfrak{y}$) is the genealogical forest of a cloud of particles starting with $x_j$ particles of type $j$, and where subsequently each particle of type $i \in \mathfrak{y}$ behaves independently of the others and lives an exponential time $ \mathcal{E}(\alpha_i)$ of rate $\alpha_i$ before dying and giving birth to a cloud of particles sampled according to $\mu_{i}$ (independently of the past and of the other particles). The trees in $ \mathbb{F}$ are locally finite random rooted (but non-planar) trees with edge lengths as depicted on Figure \ref{fig:mtgw}. \begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{mtgw} \caption{Illustration of the construction of the random tree $ \mathbb{T}$ starting from a single blue individual (the colors represent types of particles): each particle of type $i$ lives for an exponential time of expectation $1/ \alpha_i$, then dies and gives birth to new particles according to the distribution $\mu_i$. All those samplings are made independently of each other. The red crosses represent deaths with no birth.
\label{fig:mtgw}} \end{center} \end{figure} In the case of a single ancestor, it is possible to make a formal definition of $ \mathbb{T}$ as a plane tree with edge lengths, by ordering the children of each particle from left-to-right, so that each particle alive at some time corresponds to a vertex of Ulam's tree. The type and the life time of particles are then additional decorations. We will however not bother to make such a construction in general and mostly rely on the intuition of the reader. Several limit theorems are available in the literature for the number of particles of each type living at time $t$ in $ \mathbb{T}$, but for the purpose of these lecture notes we shall only deal with the most basic examples, namely Poisson processes and Yule trees, see Section \ref{sec:yules}. But before that, let us connect those random continuous trees to discrete Markov chains using properties of the exponential distributions. If $ \mathbb{F}$ is a random forest of law $ \mathbb{P}_{ \mathbf{x}}$ as above, consider $0 = \tau_0 < \tau_1 < \tau_2 < \dots$ the jump times\footnote{since the exponential distribution has a density and since all particles' life times are independent, it is easy to see that the jump times are a.s. distinct. But we do not exclude the possibility that the jump times accumulate.}, i.e.~the times when a particle dies in $ \mathbb{F}$ and gives birth to a new cloud of particles (possibly empty). Let us also introduce $( \mathbb{X}_k : k \geq 0)$ the $ (\mathbb{Z}_{\geq 0})^{ \mathfrak{y}}$-valued process made of the number of particles of each type $i \in \mathfrak{y}$ at time $ \tau_k$.
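To fix ideas, the construction of the embedded chain $( \mathbb{X}_{k})$ can be sketched in a few lines of code. This is only an illustration of the clock mechanism described above (the function names and the dictionary encoding of configurations are ours, not from the text):

```python
import random

def jump_chain(x0, alpha, mu, n_steps, rng):
    """Simulate the forest F via independent exponential clocks and
    record the embedded chain (X_k) of particle counts per type.

    x0    : dict type -> initial number of particles
    alpha : dict type -> rate of the exponential lifetime
    mu    : dict type -> function(rng) returning offspring counts
    """
    # a particle is encoded by (type, death time); all clocks start at 0
    particles = [(i, rng.expovariate(alpha[i]))
                 for i, n in x0.items() for _ in range(n)]
    chain = [dict(x0)]
    for _ in range(n_steps):
        if not particles:
            break  # the whole forest is extinct
        # the next death is the particle whose clock rings first
        j = min(range(len(particles)), key=lambda m: particles[m][1])
        i0, t = particles.pop(j)
        # the dead particle gives birth according to mu_{i0}; by
        # memorylessness the surviving clocks need not be refreshed
        for i, n in mu[i0](rng).items():
            particles += [(i, t + rng.expovariate(alpha[i]))
                          for _ in range(n)]
        counts = {}
        for i, _ in particles:
            counts[i] = counts.get(i, 0) + 1
        chain.append(counts)
    return chain

# pure-death toy example: mu_1 = delta_emptyset and three initial
# particles, so the chain must lose exactly one particle per jump
rng = random.Random(2024)
chain = jump_chain({1: 3}, {1: 1.0}, {1: lambda rng: {}}, 10, rng)
print([c.get(1, 0) for c in chain])  # [3, 2, 1, 0]
```

With the Yule-type offspring `lambda rng: {1: 2}` the same routine instead gains one particle per jump, in accordance with the description of $\mathcal{Y}^{(k)}$ below.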
\begin{lemma}[Athreya--Karlin] \label{lem:AK} Under $ \mathbb{P}_{ \sum_i x_i \delta_i}$, the process $( \mathbb{X}_k: k \geq 0)$ is a Markov chain starting from $ (x_i : i \in \mathfrak{y})$ and with transitions described informally as follows: conditionally given $ \mathbb{X}_{n} = (x_{n}^{(i)} : i \in \mathfrak{y})$ we choose a particle of type $i_{0}$ (uniformly among the particles of that type) with probability $$ \frac{x_{n}^{{(i_{0})}} \cdot \alpha_{i_{0}}}{ \sum_{j \in \mathfrak{y}} x_{n}^{(j)} \alpha_{j}},$$ then this particle dies and creates new particles $( z^{(i)} : i \in \mathfrak{y})$ with law $\mu_{i_{0}}$. More formally, for any positive function $f : ( \mathbb{Z}_{\geq 0})^p \to \mathbb{R}_+$ we have $$ \mathbb{E}[f( \mathbb{X}_{n+1}) \mid \mathcal{F}_{n}] = \sum_{ i_{0}=1}^{p} \frac{\alpha_{i_{0}} \mathbb{X}_{n}^{(i_{0})}}{ \sum_{j } \mathbb{X}_{n}^{(j)} \alpha_{j}} \sum_{ z \in (\mathbb{Z}_{\geq 0})^{p}} \mu_{i_{0}}(z) \cdot f( \mathbb{X}_{n} -\delta_{i_0} + z).$$ \end{lemma} \noindent \textbf{Proof.} Let us prove by induction on $k \geq 0$ that at time $\tau_k$, conditionally on the past up to time $ \tau_k$, the particles alive at time $\tau_k$ all carry independent exponential clocks of rate $\alpha_i$ for a particle of type $i$. This is true for $k =0$ and propagates easily by Proposition \ref{prop:expomem}. In particular, by Proposition \ref{prop:expomem} again, conditionally on the types of the particles at time $\tau_k$, the next particle to die is chosen proportionally to the rate $\alpha_i$ of its type $i$ and reproduces according to $\mu_i$. \qed\medskip We shall see in Section \ref{sec:examplesAK} several examples of discrete chains which are more efficiently studied via their continuous-time analogs, but before that, let us study the most fundamental cases where particles reproduce at constant rate into a fixed number of new particles.
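In concrete terms, the transition probability of Lemma \ref{lem:AK} can be checked against a direct simulation of the clocks. The sketch below (a toy configuration of our own choosing, not from the text) compares the empirical frequency of the type of the first particle to die with the formula $x_{1}\alpha_{1}/\sum_{j} x^{(j)}\alpha_{j}$:

```python
import random

rng = random.Random(0)
x = {1: 3, 2: 1}          # three particles of type 1, one of type 2
alpha = {1: 1.0, 2: 5.0}  # rates attached to each type

# empirical frequency of "a particle of type 1 dies first"
n_runs, hits = 20000, 0
for _ in range(n_runs):
    # every particle carries an independent E(alpha_i) clock;
    # the first clock to ring designates the dying particle
    clocks = [(i, rng.expovariate(alpha[i]))
              for i, cnt in x.items() for _ in range(cnt)]
    if min(clocks, key=lambda c: c[1])[0] == 1:
        hits += 1

# Lemma (Athreya--Karlin): type 1 dies first with probability
# x_1 alpha_1 / (x_1 alpha_1 + x_2 alpha_2) = 3/8
p_theory = x[1] * alpha[1] / (x[1] * alpha[1] + x[2] * alpha[2])
print(p_theory)  # 0.375
print(abs(hits / n_runs - p_theory) < 0.02)
```

The agreement (up to Monte-Carlo error) is exactly the content of the first induction step in the proof above.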
\section{Yule trees} \label{sec:yules} In this section, we shall focus on a very special case of continuous-time branching process where there is only one type of particle, which reproduces at rate $1$ into exactly $k \geq 1$ particles. When $k=1$ this corresponds to a vanilla constant rate \textbf{Poisson process} on $ \mathbb{R}_+$ and when $k \geq 2$ we speak of (random) \textbf{Yule} trees. \subsection{$k=1$ and Poisson process} \label{sec:poissoncounting} Fix here $p=1$ (monotype) and $\mu_{1} = \delta_{\delta_1}$, i.e.~when a particle dies, it gives rise to a single particle. In terms of the set of particles, nothing happens. But the temporal death counting process gives the link between exponential variables and Poisson processes. More precisely, consider $ X_{1}, X_{2}, \dots $ a sequence of i.i.d.~exponential variables of rate $1$ and build the counting process for $t \geq 0$ $$ \mathfrak{P}(t) = \sup\{i \geq 0 : X_{1} + \dots + X_{i} \leq t\}.$$ This random c\`adl\`ag process turns out to be a Poisson counting process and this connection is the standard way to prove \eqref{eq:lawll}: \begin{proposition}[Standard Poisson] \label{prop:standardpoisson} For any $0=t_{0}\leq t_{1} \leq \cdots \leq t_{k}$, the variables $ \mathfrak{P}({t_{i+1}})-\mathfrak{P}({t_{i}})$ for $0 \leq i \leq k-1$ are independent and of law $$ \mathfrak{P}({t_{i+1}})-\mathfrak{P}({t_{i}}) \sim \mathrm{Poisson}(t_{i+1}-t_{i}).$$ \end{proposition} \noindent \textbf{Proof.} This is a very classical result whose proof can be found in many textbooks. Let us however sketch the arguments: The independence and stationarity of the increments follow from the memorylessness property applied recursively at times $t_{i-1}, t_{i-2}, \dots , t_{1}$.
To prove that $\mathfrak{P}({t})$ follows a Poisson distribution one can notice that from Proposition \ref{prop:expomem} we can write $$ \forall t \geq 0,\quad \mathfrak{P}(t) = \sum_{i=1}^{n} \mathfrak{P}^{(i)}({t/n}),$$ where $\mathfrak{P}^{(i)}(\cdot)$ are i.i.d.~copies of $\mathfrak{P}(\cdot /n)$ i.e.~of the process $ \mathfrak{P}$ constructed with exponentials of mean $n$. For fixed $t>0$, when $n \to \infty$ notice that we have $$\mathbb{P}(\mathfrak{P}(t /n) = 1) \sim \frac{t}{n} \quad \mbox{ and }\quad \mathbb{P}(\mathfrak{P}(t /n) \geq 2) \leq \frac{C_{t}}{n^{2}},$$ where $C_{t}>0$ is a positive constant. In particular, the total variation distance between $\mathfrak{P}(t /n)$ and $ \mathrm{Bern}(t/n)$ is less than $ 2 C_{t}/n^{2}$ and we deduce that $$ \mathrm{d_{TV}}(\mathfrak{P}({t}), \mathrm{Bin}(n, t/n)) \leq \frac{2 C_{t}}{n^{2}} \cdot n \xrightarrow[n\to\infty]{}0,$$ and since $\mathrm{Bin}(n, t/n) \to \mathrm{Poisson}(t)$ in distribution we are done. \qed \bigskip These two visions of the standard Poisson process are already very useful: \begin{exo}For $n \geq 1$, let $(U_i : 1 \leq i \leq n-1)$ be i.i.d. uniform on $[0,1]$ and denote by $(V_i : 1 \leq i \leq n-1)$ their increasing rearrangement and put $V_0=0$ and $V_n=1$. Let $X_1, \dots , X_{n}$ be i.i.d.~r.v.~of law $ \mathcal{E}(1)$ and denote by $ \mathbb{X} = X_1 + \dots + X_{n}$. Show that $$ \left( V_{i}-V_{i-1} : 1 \leq i \leq n\right) \quad \overset{(d)}{=} \quad \left( \frac{X_i}{ \mathbb{X}} : 1 \leq i \leq n \right).$$ \end{exo} \subsection{$k\geq 2$ and Yule trees} Another special example of multi-type branching tree is given by setting $p=1$ (monotype), $\alpha_{1}=1$ to fix ideas, and $\mu_{1} = \delta_{ k\delta_1}$ for some $k \geq 2$, i.e.~each particle creates $k$ new particles when dying. We then speak of the \textbf{Yule\footnote{\raisebox{-5mm}{\includegraphics[width=1cm]{yule.jpg}} George Udny Yule (1871--1951) British} tree} of order $k$.
In other words, the discrete tree underlying $ \mathbb{T}$ under $ \mathbb{P}_{\delta_1}$ is the full $k$-ary tree whose edge lengths are i.i.d.~distributed according to $ \mathcal{E}(1)$. For later purposes, it will be useful to have a plane ordering of the tree. This can be obtained by starting with the infinite $k$-ary tree whose vertex set is $\bigcup_{n \geq 0}\{0,1, \dots , k-1\}^{n}$ and equipping each of its vertices with an independent exponential r.v.\ with rate $1$ (the vertex lengths). In this correspondence, the vertices of $\bigcup_{n \geq 0}\{0,1, \dots , k-1\}^{n}$ are associated with the edges of the plane Yule tree $ \mathbb{T}$. For each $t \geq 0$, we denote by $ [ \mathbb{T}]_{t}$ the finite plane tree obtained by cutting $ \mathbb{T}$ at height $t$. By the same procedure as before, it can be seen as a finite plane tree whose vertices have either $0$ or $k$ children and are decorated with positive lengths, see Figure \ref{fig:RRTbinary}. In the following, we shall always make such identification without further notice. \begin{figure}[!h] \begin{center} \includegraphics[width=15cm]{yule-plane} \caption{Illustration of the encoding of (a restriction of) the Yule tree as a (finite) plane $k$-ary tree whose vertices carry positive numbers (in pink on the right). \label{fig:RRTbinary}} \end{center} \end{figure} We denote by $\# \partial [ \mathbb{T}]_{t}$ the number of leaves of $ [\mathbb{T}]_{t}$ and use $$ \mathcal{Y}_{t}^{(k)} \equiv \# \partial [ \mathbb{T}]_{t},$$ as a short-hand notation. In this case, the growth of the tree is very well understood since $( \mathcal{Y}^{(k)}_t : t \geq 0)$ is a Markov chain which makes positive jumps of size $(k-1)$ with rate $ \mathcal{Y}^{(k)}_{t}$.
We deduce that $ \mathrm{m}(t) := \mathbb{E}_{\delta_1}[ \mathcal{Y}^{(k)}_{t}]$ satisfies $ \mathrm{m}'(t) = (k-1) \mathrm{m}(t)$ and $ \mathrm{m}(0) =1$ under $ \mathbb{P}_{\delta_1}$ so that $$ \mathbb{E}_{\delta_1}\left[ \mathcal{Y}^{(k)}_{t} \right] = \mathrm{e}^{(k-1) t}, \quad \mbox{ for }t \geq 0.$$ Combined with the Markov property, it follows that \begin{eqnarray} \label{eq:martingaleyule} \Big(\mathrm{e}^{-(k-1)t}\cdot \mathcal{Y}^{(k)}_{t} : t \geq 0\Big), \quad \mbox{ is a martingale} \end{eqnarray} for the filtration made of the information up to time $t$ and so converges almost surely (this will be reproved in the following proposition). We can even identify its limit: \begin{proposition} \label{prop:yule} Let $( \mathcal{Y}^{(k)}_t : t \geq 0)$ be the counting process in a Yule tree of order $ k\geq 2$ under $ \mathbb{P}_{\delta_1}$. Then we have $$ \mathrm{e}^{- (k-1)t} \cdot \mathcal{Y}^{(k)}_t \xrightarrow[t\to\infty]{a.s.} \boldsymbol{\Gamma}\left (\frac{1}{k-1}, \frac{1}{k-1}\right),$$ where $\boldsymbol{\Gamma}(\alpha, \beta)$ is a scaled Gamma random variable, i.e.~with law $ \frac{1}{\Gamma(\alpha)} x^{\alpha-1} \mathrm{e}^{-\beta x} \beta^\alpha \, \mathrm{d}x \mathbf{1}_{x>0}$ (in particular a standard exponential when $\alpha=\beta=1$). \end{proposition} \noindent \textbf{Proof.} Let us first prove the proposition in the case $k=2$ for simplicity. Consider the jump times $0 = \tau_0 < \tau_1 < \tau_2 < \dots$ of the process $\mathcal{Y}^{(2)}$ so that we have $ \mathcal{Y}_{\tau_i} ^{(2)} = i+1$ deterministically. By the properties of exponential variables we know that $ (\tau_{i+1}-\tau_i : i \geq 0)$ are independent exponential random variables with rate $i+1\geq 1$. We write $ \mathrm{h}_n = \sum_{k=1}^n \frac{1}{k}$ for the $n$th harmonic number. 
Clearly $( \tau_{n}- \mathrm{h}_{n}: n \geq 1)$ is a martingale bounded in $L^{2}$ since $$ \sum_{i=1}^{\infty} \mathbb{E}\left[\left(\tau_{i}-\tau_{i-1} - \frac{1}{i}\right)^{2}\right] = \sum_{i=1}^{\infty} \frac{1}{i^2} \mathrm{Var}( \mathcal{E}(1)) < \infty.$$ Hence, since $ \mathrm{h}_{n}- \log n$ converges, $( \tau_{n}- \log n: n \geq 1)$ converges almost surely (and in $L^{2}$) towards some random variable $ \mathcal{X}_{\infty}$. To compute the law of this variable, recall from Lemma \ref{lem:gumbel} and the discussion following it that we have $$\tau_{n} - \log n \xrightarrow[n\to\infty]{(d)} G,$$ where $G$ has the Gumbel distribution. We deduce that $$ (n+1) \mathrm{e}^{-\tau_n} \xrightarrow[n\to\infty]{a.s.} \exp(- \mathcal{X}_\infty) \overset{(d)}{=} \mathrm{e}^{-G} \overset{(d)}{=}\mathcal{E}(1),$$ and this proves the statement of the proposition for times $t$ of the form $\tau_n$. Assuming for a moment that $\Delta \tau_n = \tau_{n+1}-\tau_{n} \to 0$ almost surely as $n \to \infty$, a sandwiching argument for times $\tau_n \leq t < \tau_{n+1}$ concludes the proof. To prove that $\Delta \tau_n \to 0$, we use the Borel--Cantelli lemma since for $n \geq 1$ $$ \mathbb{P} \left(\tau_{n+1} - \tau_n \geq \frac{1}{ \sqrt{n}}\right) = \mathrm{e}^{- (n+1)/\sqrt{n}} \leq \mathrm{e}^{- \sqrt{n}},$$ which is summable in $n \geq 1$.\\ The case $k \geq 3$ is similar: the only trick is to consider the sum of $k-1$ independent Yule trees, so that the jump times of the forest are separated by independent variables of law $ \mathcal{E}(k-1), \mathcal{E}(2(k-1)), \mathcal{E}(3(k-1)), \dots$ and we can reduce to the above problem (using the fact that the sum of $k-1$ independent r.v.~of law $\boldsymbol{\Gamma}( \frac{1}{k-1},\frac{1}{k-1})$ is an exponential of expectation $k-1$). \qed \medskip Actually, in the case $k=2$ (and $\alpha_{1}=1$) the distribution of $ \mathcal{Y}^{(2)}_{t}$ is explicitly given for each $t\geq0$ by a geometric distribution with parameter $ \mathrm{e}^{-t}$, i.e.
\begin{eqnarray} \label{eq:yuleexplicit} \mathbb{P}( \mathcal{Y}^{(2)}_{t}=k) = \mathrm{e}^{-t}(1- \mathrm{e}^{-t})^{k-1}, \quad \mbox{ for }k \in \{1,2,3, \dots\}. \end{eqnarray} Taking the limit as $t \to \infty$, this recovers the form of the limiting law in the above proposition. Once the formula is guessed, the claim is easily proved by solving the differential equations satisfied by the probabilities $ p_{k}(t) := \mathbb{P}( \mathcal{Y}^{(2)}_{t}=k)$, namely for $k \geq 2$ $$ \frac{ \mathrm{d}}{ \mathrm{d}t} p_{k}(t) = - k p_{k}(t) + (k-1)p_{k-1}(t),$$ together with $ p_{1}(t)= \mathrm{e}^{-t}$. See \cite[Chapter III.5 ]{AN72} for analogs when $k \geq 3$. \section{Examples} \label{sec:examplesAK} We now give a few examples of discrete Markov chains which are easily studied via their continuous time analogs. This includes the classical coupon collector problem, the pill problem, the O.K. Corral model and the random recursive tree! We shall start with a new look at the Polya urn studied in Section \ref{sec:polyaurn} before moving to the more challenging examples that will require a few results useful to perform the continuous-time $\to$ discrete-time or ``depoissonization'' operation. \subsection{Polya Urn, reloaded} \label{sec:revolution} Let us interpret the classical Polya urn scheme (Section \ref{sec:polyaurn}) as the counting process of a continuous time branching process using Lemma \ref{lem:AK}. For this we consider the case when $p=2$, i.e.~we have two types of particles (red and blue say) and the offspring mechanisms are deterministic $\mu_{1} = \delta_{2\delta_1}$ and $\mu_{2} = \delta_{2\delta_2}$: each particle reproduces at rate $1$ into two particles of the same color independently of the others. Then the branching forest $ \mathbb{F}$ under $ \mathbb{P}_{\delta_1+\delta_2}$ is made of two trees, one red and one blue, describing the genealogy of the two initial particles.
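This two-tree forest is easy to simulate. The minimal sketch below (our own toy code, not from the text) grows two independent binary Yule populations through their jump chains and records which color is in the majority at a fixed large time; by symmetry of the two colors, a proportion of blue particles close to uniform should put blue in the majority in about half of the runs:

```python
import random

rng = random.Random(7)

def yule2_population(t, rng):
    """Size at time t of a rate-1 binary Yule tree, grown through its
    jump chain: with m particles alive, the next split occurs after an
    independent E(m) waiting time."""
    m, clock = 1, rng.expovariate(1)
    while clock <= t:
        m += 1
        clock += rng.expovariate(m)
    return m

t, runs = 6.0, 4000
blue_majority = 0
for _ in range(runs):
    blue = yule2_population(t, rng)  # the blue tree
    red = yule2_population(t, rng)   # an independent red tree
    if blue > red:
        blue_majority += 1

# close to 1/2 if the proportion of blue particles is nearly uniform
print(blue_majority / runs)
```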
By Lemma \ref{lem:AK}, the discrete Markov chain describing the number of blue and red particles at each jump time is simply given by \eqref{eq:polya}, that is, if we start initially with one particle of each color, we are facing the dynamics of the standard Polya urn! \\ Now, the magic of continuous time is that, since particles of different colors do not interact, the two trees of $ \mathbb{F}$ under $\mathbb{P}_{\delta_1+\delta_2}$ are independent copies of the standard Yule tree of order $2$ (started with a single particle). If $ \mathcal{B}_{t}$ and $ \mathcal{R}_{t}$ respectively denote the number of blue and red particles alive at time $ t \geq 0$ then from Proposition \ref{prop:yule} we have $$ \mathrm{e}^{-t} (\mathcal{B}_{t}, \mathcal{R}_{t}) \xrightarrow[t\to\infty]{a.s.}( X, X'),$$ where $ X$ and $ X'$ are two independent exponential variables of expectation $1$. In particular, we re-deduce Proposition \ref{prop:polyaunif} on the asymptotic proportion of blue balls: $$ \frac{ \mathcal{B}_t}{ \mathcal{R}_t + \mathcal{B}_t} \xrightarrow[t\to\infty]{a.s.} \frac{X}{X+X'} \overset{(d)}{=} \mathrm{Unif}[0,1].$$ \begin{exo} \label{exo:afresh} Contemplate Exercise \ref{exo:polyaassy} afresh. \end{exo} \subsection{Depoissonization tools} We now present two lemmas that we will use repeatedly below. The first one is a probabilistic variation on Dini's lemma: \begin{lemma}[Dini] \label{lem:dini} Let $(D^{{(n)}}_{t} : t \in \mathbb{R})$ be random non-decreasing c\`adl\`ag processes, i.e.~such that $ D^{{(n)}}_{s} \leq D^{{(n)}}_{t}$ for every $s \leq t$ and $n \geq 0$. We suppose that $D^{(n)}$ converges point-wise in probability, that is for any $t \in \mathbb{R}$ we have $$ D^{(n)}_{t} \xrightarrow[n\to\infty]{( \mathbb{P})} f(t),$$ where $f : \mathbb{R} \to \mathbb{R}$ is a non-decreasing continuous function.
Then we also have the stronger convergence $$ ( D_{t}^{{(n)}} : t \in \mathbb{R}) \xrightarrow[n\to\infty]{( \mathbb{P})} (f(t) : t \in \mathbb{R}),$$ for the topology of uniform convergence over every compact subset of $ \mathbb{R}$. \end{lemma} \noindent \textbf{Proof.} Fix a dense sequence $(t_{i} : i \geq 0)$ in $ \mathbb{R}$. Since $ D^{{(n)}}_{t_{i}} \to f(t_{i})$ in probability as $n \to \infty$ for each $i$, we have $( D_{t_{i}}^{{(n)}} : i \geq 0) \to ( f(t_{i}) : i \geq 0)$ in probability for the topology of point-wise convergence on $ \mathbb{R} ^{ \mathbb{Z}_{\geq 0}}$. By the Skorokhod representation theorem, we can construct a probability space $( \Omega, \mathcal{F}, \mathbb{P})$ and a sequence of processes $( \tilde{D}^{(n)})$ so that $ \tilde{D}^{(n)} {=} D^{{(n)}}$ in law for each $n$, and so that we have $$ \forall i \geq 0, \quad \tilde{D}^{(n)}_{t_{i}} \xrightarrow[n\to\infty]{} f(t_{i})\quad \mbox{ \textbf{almost surely}}.$$ Since $\tilde{D}^{(n)} \overset{(d)}{=} D^{{(n)}}$, the processes $\tilde{D}^{(n)}$ are non-decreasing, and it follows from (classical) Dini's theorem that we actually have the stronger convergence $(\tilde{D}^{(n)}_{t} : t \in \mathbb{R}) \to ( f(t) : t \in \mathbb{R})$ for the topology of uniform convergence over every compact subset of $ \mathbb{R}$. We deduce the same convergence, but in probability, for $ D^{(n)}$ by equality in law. \qed \medskip The same result holds true (with the same proof) if we replace convergence in probability by almost sure convergence. Let us see how we can use such convergences: \begin{lemma}[Slutsky] \label{lem:slut} Suppose that $(D^{(n)}_{t})_{t \in \mathbb{R}}$ is a sequence of random processes and $T_{n}$ is a sequence of random times (which might not be stopping times). Suppose that $$ D^{{(n)}} \xrightarrow[n\to\infty]{(d)} F \quad \mbox{ and } \quad T_{n} \xrightarrow[n\to\infty]{(d)} \theta,$$ where $F$ is a random continuous function and $\theta$ is a random variable.
The first convergence is in the sense of uniform convergence over every compact of $ \mathbb{R}$. We suppose that either $F \equiv f$ is a fixed continuous function \textbf{or} that $\theta \equiv C$ is a constant (in which case the corresponding convergence in distribution actually holds in probability). Then we have \begin{eqnarray*} D^{{(n)}}_{T_{n}} \xrightarrow[n\to\infty]{ (d)} F(\theta). \end{eqnarray*} \end{lemma} \noindent \textbf{Proof.} Since one of the limiting variables is deterministic, Slutsky's lemma entails that $( D^{{(n)}}, T_{n})$ converges in distribution towards $((F(t) : t \in \mathbb{R}), \theta)$. We can then use Skorokhod representation again to obtain versions $( \tilde{D}^{{(n)}}, \tilde{T}_{n})$ so that $( \tilde{D}^{{(n)}}, \tilde{T}_{n}) \overset{(d)}{=}( {D}^{{(n)}}, {T}_{n})$ for each $n$ but satisfying $$ (\tilde{D}^{{(n)}}_t : t \in \mathbb{R}) \xrightarrow[n\to\infty]{a.s.}(F(t) : t \in \mathbb{R}) \quad \mbox{ and }\quad \tilde{T}_{n} \xrightarrow[n\to\infty]{a.s.} \theta,$$ where the first arrow holds for the uniform convergence on every compact of $ \mathbb{R}$. We deduce the desired convergence in law since $$D^{{(n)}}_{T_{n}} \underset{ \mathrm{for\ each \ }n }{\overset{(d)}{ = }} \tilde{D}^{{(n)}}_{\tilde{T}_{n}} \xrightarrow[n\to\infty]{a.s.} F(\theta).$$\qed \subsection{Coupon collector} The famous coupon collector problem is the following. Fix $n \geq 1$ and let $(X_{k})_{k \geq 1}$ be i.i.d.~uniform variables over $\{1,2, \dots , n\}$. We interpret each $X_{k}$ as a ``coupon'' among a collection of all $n$ possible ones, and we ask how many coupons we should buy to get the full collection, i.e.
$$ \mathcal{T}_{n} := \inf\big\{ k \geq 1 : \{X_{1}, \dots , X_{k}\} = \{1,2, \dots , n\}\big\}.$$ Using our continuous time embedding technique we shall prove: \begin{proposition}[Coupon collector] \label{prop:coupon}We have the following convergence in law $$ \frac{ \mathcal{T}_{n} - n \log n}{n} \xrightarrow[n\to\infty]{(d)} G,$$ where $G$ has the Gumbel distribution. \end{proposition} \noindent \textbf{Proof.} We pass in continuous time and consider for each $i \in \{1,\dots , n\}$ an independent Poisson process $ \mathfrak{P}^{{(i)}}$ of unit rate. This is equivalent to considering $p=1$, $\mu_{1} =\delta_{\delta_1}$ and $\alpha_{1}=1$ under $ \mathbb{P}_{n \delta_1}$ in Lemma \ref{lem:AK}. We let $0 < \tau_{1}< \tau_{2} < \dots$ be the jump times of the union of those processes, so that by an application of Lemma \ref{lem:AK} the indices of the corresponding Poisson processes are distributed as $(X_{k})_{ k \geq 1}$. The continuous time analog of $ \mathcal{T}_{n}$ in this setting is thus $$ T_{n} + \log n := \max_{1 \leq i \leq n} \min \big\{ {t \geq 0} : \mathfrak{P}^{(i)}(t) \geq 1\big\},$$ which by Proposition \ref{prop:standardpoisson} has the law of the maximum of $n$ independent exponential variables of rate $1$. This is given by Lemma \ref{lem:gumbel} and we have $ T_{n}\to G$ in distribution. Coming back to the discrete setting, the number of coupons bought by time $ T_{n} + \log n$ is thus $$ \mathcal{T}_{n} \overset{(d)}{=} \sum_{i=1}^{n} \mathfrak{P}^{{(i)}}( T_{n} + \log n).$$ The sum $\sum_{i}\mathfrak{P}^{{(i)}}(\cdot)$ has the same distribution as $ \mathfrak{P}(n \cdot)$, but beware, in this writing $ \mathfrak{P}$ is \textit{not independent} from $ T_{n}$.
To circumvent this problem, notice that for any $t \in \mathbb{R}$ we have the convergence in probability $$ D^{{(n)}}_{t} \xrightarrow[n\to\infty]{ (\mathbb{P})} t, \quad \mbox{ where }\quad D^{{(n)}}_{t}:=\frac{ \mathfrak{P}( n \log n + n t)- n \log n}{n}.$$ This weak law of large numbers is easily seen since $ \mathbb{E}[ D^{{(n)}}_{t}]=t$ and $ \mathrm{Var}(D^{{(n)}}_{t}) \leq \mathrm{Cst} \cdot n \log n /n^2$ for some $ \mathrm{Cst}>0$. We deduce from Lemma \ref{lem:dini} the stronger version: $$ \left( D^{{(n)}}_{t}\right)_{t \in \mathbb{R}} \xrightarrow[n\to\infty]{ (\mathbb{P})} (t)_{t \in \mathbb{R}},$$ for the topology of uniform convergence over every compact of $ \mathbb{R}$ and by Lemma \ref{lem:slut} we get $$ \frac{ \mathcal{T}_{n} - n \log n}{n} \quad \overset{(d)}{=} \quad D^{{(n)}}_{ T_{n}} = \frac{ \mathfrak{P}( n \log n + n T_{n})- n \log n}{n} \quad \xrightarrow[n\to\infty]{(d)} \quad G.$$ \qed \subsection{Balls in bins} The above approach (with the same continuous time process!) can be used to address the balls-in-bins problem. Let again $(X_k)_{k\geq 1}$ be i.i.d.~r.v.~uniformly distributed over $\{1,2, \dots , n \}$. We interpret this time the $X_k$ as ``balls'' that are thrown uniformly at random in the $n$ ``bins'' numbered $1,2, \dots , n$. The question is: After throwing $n$ balls, what is the maximal load of a bin, i.e. $$ \mathrm{ML}_n = \max \{ B_i : 1 \leq i \leq n\}\quad \mbox{where} \quad B_i = \#\{1 \leq k \leq n: X_k =i\}.$$ \begin{proposition}[Balls in bins] We have $$ \mathrm{ML}_n \sim_{ \mathbb{P}} \frac{\log n}{ \log \log n}, \quad \mbox{ as $n \to \infty$}.$$ \end{proposition} \noindent \textbf{Proof.} We use the same notation as in the proof of Proposition \ref{prop:coupon} and in particular $ \mathfrak{P}^{(i)}$ are independent unit rate Poisson processes carried by each bin, and $\tau_n$ is the time at which $n$ balls have been thrown.
We deduce that we have $$ \mathrm{ML}_n \overset{(d)}{=} \max_{1\leq i \leq n} \mathfrak{P}^{(i)}( \tau_n).$$ As before, the problem is that $\tau_n$ is not independent from the $ \mathfrak{P}^{(i)}$. However, on the one hand, recalling that the sum $\sum_{i}\mathfrak{P}^{{(i)}}(\cdot)$ has the same distribution as $ \mathfrak{P}(n \cdot)$, we clearly have by the law of large numbers that $$ \tau_n \xrightarrow[n\to\infty]{ ( \mathbb{P})} 1.$$ On the other hand, for \textit{fixed} $t_0 >0$, the variables $ (\mathfrak{P}^{(i)}(t_0) : 1 \leq i \leq n )$ are i.i.d.~$ \mathrm{Poisson}(t_0)$ random variables, so that if we let $ \mathcal{ML}_{n}(t_0) = \max_{1\leq i \leq n} \mathfrak{P}^{(i)}(t_0)$, we have for any $m\geq 1$ $$ \mathbb{P}\left(\mathcal{ML}_{n}(t_0) < m \right) = \big(1- \mathbb{P}( \mathfrak{P}(t_0) \geq m)\big)^n.$$ It is easy to see that $\mathbb{P}( \mathfrak{P}(t_0) \geq m)$ is actually equivalent to $\mathrm{e}^{-t_0}\frac{t_0^m}{m!}$ as $m \to \infty$. So, for any $ \varepsilon>0$ the above display goes to $0$ for $m \leq (1- \varepsilon) \log n/\log\log n$ and to $1$ for $m \geq (1+ \varepsilon) \log n / \log\log n$ as $n \to \infty$. We deduce that for any $t_0>0$ we have $$ D^{{(n)}}_{t_{0}} \xrightarrow[n\to\infty]{( \mathbb{P})} 1 \quad \mbox{ where }\quad D^{{(n)}}_{t_{0}} :=\frac{\mathcal{ML}_{n}(t_0)}{\log n / \log \log n}.$$ This convergence is reinforced using monotonicity and Lemma \ref{lem:dini} into $$ \left(D^{{(n)}}_{t}\right)_{ t \in [0.9,1.1]} \xrightarrow[n\to\infty]{(\mathbb{P})} (1)_{t \in [0.9, 1.1]},$$ for the topology of uniform convergence over $[0.9, 1.1]$.
Since $ \tau_n \to 1$ in probability, we can then apply Lemma \ref{lem:slut} to deduce as desired that $$ \frac{\mathrm{ML}_n}{\log n / \log \log n} \overset{(d)}{=} D^{{(n)}}_{ \tau_n} \xrightarrow[n\to\infty]{( \mathbb{P})}1.$$ \qed \subsection{Pill problem} \label{sec:pills} From Wikipedia: \begin{quote} The pill jar puzzle is a probability puzzle, which asks the value of the number of half-pills remaining when the last whole pill is popped from a jar initially containing $n$ whole pills and the way to proceed is by removing a pill from the bottle at random. If the pill removed is a whole pill, it is broken into two half pills. One half pill is consumed and the other one is returned to the jar. If the pill removed is a half pill, then it is simply consumed and nothing is returned to the jar.\end{quote} This problem (attributed to Knuth and McCarthy) can be approached using the Athreya--Karlin embedding. Indeed, suppose we have two types of particles: those of type $1$ corresponding to half-pills and those of type $2$ corresponding to whole pills. We set the rates $\alpha_{1}= \alpha_{2}=1$ and suppose that when a particle of type $2$ dies, it gives rise to a single particle of type $1$, whereas particles of type $1$ have no descendants. Formally $\mu_{2} = \delta_{\delta_1}$ and $\mu_{1} = \delta_{\varnothing}$. If we start initially from $ \mathbb{P}_{n \delta_2}$ i.e.~a forest $ \mathbb{F}$ with $n$ particles of type $2$ (whole pills) then by Lemma \ref{lem:AK} the evolution of the underlying discrete time Markov chain corresponds to the evolution of the content of the jar in the pill puzzle above. If $L_{n}$ is the number of half-pills remaining when all whole pills have been consumed we can then easily prove: \begin{proposition}[Pill problem] \label{prop:pills}Under $ \mathbb{P}_{n \delta_2}$ the random variable $ \frac{L_{n}}{\log n}$ converges in law towards an exponential variable of mean $1$.
\end{proposition} \noindent \textbf{Proof.} Under $ \mathbb{P}_{n \delta_2}$ the evolutions of the $n$ genealogies starting from the $n$ particles of type $2$ are independent and are described by a sequence $( X_{2}^{(i)}, X_{1}^{(i)} : 1 \leq i \leq n)$ of i.i.d.~r.v.~of law $ \mathcal{E}(1)$ giving the life time of the particles of type $2$ and of their only child of type $1$. If for every $t >0$ we introduce the renormalized number of particles still alive at time $t + \log n$ $$ D_{t}^{{(n)}} = \frac{1}{\log n} \sum_{i=1}^{n} \mathbf{1}_{ X_{2}^{(i)} + X_{1}^{(i)} > t + \log n}, $$ then by the Athreya--Karlin embedding we have $$ \frac{L_{n}}{\log n} \overset{(d)}{=} D_{T_{n}}^{{(n)}}\quad \mbox{ where } \quad T_{n} = \max_{1 \leq i \leq n} X_{2}^{(i)}-\log n.$$ By Lemma \ref{lem:gumbel} we have the convergence to a Gumbel distribution $T_{n} \xrightarrow{(d)} G$ as $n \to \infty$. On the other hand, since $ \mathbb{P}(X_{2}^{(1)} + X_{1}^{(1)}>t) = \mathrm{e}^{-t}(t+1)$, an easy law of large numbers (proved using first and second moments for example) shows that for deterministic times $\log n +t$ for $t \in \mathbb{R}$ we have $$ D_{t}^{{(n)}} \xrightarrow[n\to\infty]{ (\mathbb{P})} \mathrm{e}^{-t}.$$ This convergence is as usual reinforced using Lemma \ref{lem:dini} and monotonicity into a functional one. We can then couple the previous three displays to deduce using Lemma \ref{lem:slut} that $$ \frac{L_{n}}{\log n} \overset{(d)}{=} D^{(n)}_{T_{n}} \xrightarrow[n\to\infty]{(d)} \mathrm{e}^{-G} \overset{(d)}{=} \mathcal{E}(1).$$ \qed \subsection{O.K. Corral} Imagine two groups of $n$ people facing each other. At each time step, one individual is chosen uniformly and shoots a person from the other group. The question is: ``How many people are still standing when one of the groups dies out?''. This riddle is usually named the O.K. Corral\footnote{The gunfight at the O.K. Corral took place on October 26, 1881. Four lawmen were facing five outlaws.
During that brief battle (less than a minute), three men were killed, three were wounded, two ran away, and one was unharmed.} problem. Formally, let $( O_1(k),O_2(k) : k \geq 0)$ be a Markov chain on $ \{0,1,\dots,n\}^2$ starting from $O_1(0)=O_2(0) =n$ and with transition probabilities $$ \mathbb{P}\left( \displaystyle \begin{subarray}{l} O_1(k+1)\\ O_2(k+1) \end{subarray} = \begin{subarray}{l} O_1(k)-1\\ O_2(k) \end{subarray} \left| \begin{subarray}{l} O_1(k)\\ O_2(k) \end{subarray} \right)\right. = 1- \mathbb{P}\left( \begin{subarray}{c} O_1(k+1)\\ O_2(k+1) \end{subarray} = \begin{subarray}{l} O_1(k)\\ O_2(k)-1 \end{subarray} \left| \begin{subarray}{l} O_1(k)\\ O_2(k) \end{subarray} \right)\right. = \frac{ O_2(k)}{O_1(k)+O_2(k)}.$$ We then let $\theta_n = \inf\{ k \geq 0 : O_1(k)=0 \mbox{ or }O_2(k)=0\}.$ \begin{theorem} We have the following convergence in distribution $$ n^{-3/4}\left(O_1(\theta_n) + O_2(\theta_n)\right) \xrightarrow[n\to\infty]{(d)} \left( \frac{8}{3}\right)^{1/4} \sqrt{| \mathcal{N}|},$$ where $ \mathcal{N}$ is a standard normal variable. \end{theorem} \noindent \textbf{Proof.} We shall embed the discrete Markov chain in continuous time using the Athreya--Karlin lemma. Specifically, suppose that we start from two particles of type $n$. Each particle of type $i \in \{1,2,\dots ,n\}$ behaves independently of the others and lives for an exponential time of parameter $ \frac{1}{i}$ (or mean $i$) and then gives rise to a particle of type $i-1$. If $i=1$, the lineage dies out when the particle of type $1$ dies. Formally, this is obtained by taking an infinite number of types $p=\infty$, with rates $\alpha_{i} = \frac{1}{i}$ and offspring distribution $\mu_{i} = \delta_{\delta_{i-1}}$ for $i \geq 2$ and $\mu_{1}= \delta_{\varnothing}$, see Figure \ref{fig:okcorral}. Then under $ \mathbb{P}_{2 \delta_n}$, we have two independent lineages of particles of type $n \to n-1 \to \dots \to 2 \to 1$.
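Before analyzing these lineages, let us note that the discrete chain itself is immediate to simulate, and a quick Monte Carlo run is consistent with the $n^{3/4}$ scaling of the theorem. The following sketch (plain Python; the helper name \texttt{ok\_corral} is ours) runs the chain to absorption:

```python
import random

def ok_corral(n, rng):
    """Run the O.K. Corral chain from O_1(0) = O_2(0) = n until one
    group is wiped out; return the number of survivors O_1 + O_2."""
    o1, o2 = n, n
    while o1 > 0 and o2 > 0:
        # group 1 loses a gunman with probability o2 / (o1 + o2)
        if rng.randrange(o1 + o2) < o2:
            o1 -= 1
        else:
            o2 -= 1
    return o1 + o2

rng = random.Random(1)
n, trials = 400, 200
samples = [ok_corral(n, rng) for _ in range(trials)]
mean_scaled = sum(samples) / trials / n ** 0.75
# in the limit, the mean of (8/3)^{1/4} |N|^{1/2} is of order 1
```

We now return to the proof of the theorem.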
We denote by $ \mathcal{L}_1, \mathcal{L}_2$ the lengths of the lineages and put $ \mathcal{L} = \mathcal{L}_1 \wedge \mathcal{L}_2$. By Lemma \ref{lem:AK}, the discrete evolution of the types of particles at the jump times $0=\tau_0<\tau_1<\dots$ has the same law as $(O_1(k), O_2(k) : 0 \leq k \leq \theta_n)$. The quantity we are looking for is the type of the remaining particle at time $ \mathcal{L}$ and we shall observe this through its remaining life time: $$ \mathcal{L}_{1} + \mathcal{L}_{2} - \mathcal{L} = | \mathcal{L}_{1}- \mathcal{L}_{2}|.$$ We have $$ \mathbb{E}[\mathcal{L}_{1}- \mathcal{L}_{2}] =0, \quad \mathrm{Var}(\mathcal{L}_{1}- \mathcal{L}_{2}) = 2 \sum_{i=1}^{n} i^{2} \mathrm{Var}( \mathcal{E}(1)) \sim \frac{2 n^{3}}{3},$$ and we leave it to the reader to verify (using e.g.~the Lindeberg CLT, or characteristic functions) that we have $ n^{-3/2}(\mathcal{L}_{1}- \mathcal{L}_{2}) \to (2/3)^{1/2} \mathcal{N}$ in law so that $$ \mathcal{R}_{n} = n^{-3/2}|\mathcal{L}_{1}- \mathcal{L}_{2}| \xrightarrow[n\to\infty]{(d)} \sqrt{\frac{2}{3}} | \mathcal{N}|,$$ where $ \mathcal{N}$ is a standard normal. \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{okcorral} \caption{Illustration of the proof: the two independent lineages of particles of types $n \to n-1 \to \dots \to 2 \to 1$. The type of the particle still standing at the death of the other lineage (here $4$) is studied through $ \mathcal{R}_n$. \label{fig:okcorral}} \end{center} \end{figure}We therefore know that the remaining life time of the lineage of the particle still standing at time $ \mathcal{L}$ is of order $ \sqrt{\frac{2}{3}} |\mathcal{N}| n^{3/2}$. To connect this variable with the type of the particle in question, we use the following: let $ {D}^{(n)}_{t}$ be the type of the particle still alive in the first lineage at time $ \mathcal{L}_{1} - n^{3/2}t$, renormalized by $n^{3/4}$.
We will show that $$ (D^{{(n)}}_{t})_{t \geq 0} \xrightarrow[n\to\infty]{ (\mathbb{P})} ( \sqrt{2 t})_{t \geq 0}.$$ Once this is done, since the same result holds for the second lineage where the process is denoted $(\tilde{D}^{{(n)}}_{t})_{t \geq 0}$, the result is again a consequence of Lemma \ref{lem:AK} since we have $$n^{-3/4}\left(O_1(\theta_n) + O_2(\theta_n)\right) \overset{(d)}{=} D^{(n)}_{ \mathcal{R}_{n}} \mathbf{1}_{ \mathcal{L}_{2}< \mathcal{L}_{1}} + \tilde{D}^{(n)}_{ \mathcal{R}_{n}} \mathbf{1}_{ \mathcal{L}_{1}< \mathcal{L}_{2}} \xrightarrow[n\to\infty]{(d)} \sqrt{2 \sqrt{2/3} |\mathcal{N}|}.$$ To prove the penultimate display, we shall rather focus on the inverse function of $D^{(n)}_{\cdot}$ and consider for $ x \geq 0$ the remaining time $ n^{3/2}\cdot \mathcal{H}^{{(n)}}_x$ in the lineage starting from a particle of type $\lfloor x n^{{3/4}} \rfloor$. It is thus sufficient to show that $ (\mathcal{H}^{{(n)}}_{ x })_{x\geq 0} \to (\frac{x^{2}}{2})_{x \geq 0}$, or by monotonicity and Lemma \ref{lem:dini} that for each $x\geq 0$ we have \begin{eqnarray} \mathcal{H}^{{(n)}}_x \xrightarrow[n\to\infty]{( \mathbb{P})} \frac{x^{2}}{2}. \label{eq:hnxlgn}\end{eqnarray} Since $\mathcal{H}^{{(n)}}_x \overset{(d)}{=} n^{-3/2}\cdot \sum_{i=1}^{\lfloor x n^{3/4} \rfloor} X_{i} $ where the variables $X_{i}$ are independent and of law $ \mathcal{E}( 1/i)$, the expectation and variance of $ \mathcal{H}^{(n)}_{x}$ are easily estimated: $$ \mathbb{E}[\mathcal{H}^{{(n)}}_x ] = n^{-3/2}\sum_{i=1}^{\lfloor x n^{3/4} \rfloor} i \xrightarrow[n\to\infty]{} \frac{x^{2}}{2}, \quad \mbox{ and }\quad \mathrm{Var}(\mathcal{H}^{{(n)}}_x) = n^{-3}\sum_{i=1}^{\lfloor x n^{3/4} \rfloor} i^{2} \xrightarrow[n\to\infty]{}0.$$ Our goal \eqref{eq:hnxlgn} then follows by Markov's inequality. \qed \section{Back to the Random Recursive Tree} Our last example is the random recursive tree process (Chapter \ref{chap:RRT}) which we will construct from a standard Yule tree of order $2$. 
This will enable us to give quick proofs of (stronger) results about the geometry of the RRT. As we will see in the next chapter, the Athreya--Karlin embedding will give independence properties that make life much simpler when proving the deep Theorems \ref{prop:maxdegreeRRT} and \ref{thm:heightRRT}. \subsection{Construction of the RRT from a Yule process} \label{sec:rrtfromyule} Let us consider the plane version of the Yule tree $ \mathbb{T}$ of order $2$ started from a single particle and recall the notation $( [\mathbb{T}]_{t} : t \geq 0)$ for the tree cut at height $t$. In the plane version of $ [ \mathbb{T}]_{t}$ we contract all the edges going to the left: we obtain a plane genealogical tree whose vertices are labeled by $0,1, 2, \dots$ by order of appearance in the Yule tree, see Figure \ref{fig:yuleRRT}. We denote by $ \{ \mathbb{T}\}_t$ the increasing tree obtained after forgetting the plane ordering. The following is easily proved using the same techniques as in the proof of Lemma \ref{lem:AK}: \begin{proposition}[From Yule to RRT] \label{prop:RRTyule} If $0= \tau_0 < \tau_{1}< \dots < \tau_{n}< \dots$ are the first times at which $\# \partial [ \mathbb{T}]_{\tau_n} = n+1$, then conditionally on $( \tau_{n} : n \geq 0)$ the process $( \{ \mathbb{T}\}_ {\tau_{n}} : n \geq 0)$ is a random recursive tree. \end{proposition} \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{yuleRTT} \caption{Constructing the random recursive tree (Right) from a standard Yule process (Left): each particle gives rise to a particle of a new type at an exponential rate and this is interpreted as an attachment in the RRT. \label{fig:yuleRRT}} \end{center} \end{figure} \noindent \textbf{Proof.} Let us prove by induction on $n \geq 0$ that at time $\tau_n$, conditionally on the past up to time $ \tau_n$, the Yule tree has $n+1$ alive particles carrying independent exponential clocks, the first one that rings inducing a splitting into two particles.
This is true for $n =0$ and propagates easily by the memorylessness property of the exponential laws \eqref{eq:expomemory}. In particular, by Proposition \ref{prop:expomem}, conditionally on the past up to time $\tau_n$, the next particle to split is a uniform particle of $\partial [ \mathbb{T}]_{\tau_n}$. Translating the dynamics in terms of $\{\mathbb{T}\}_{\tau_n}$ directly shows that this chain evolves as a random recursive tree. \qed \medskip \subsection{Degree statistics} Let us use Proposition \ref{prop:RRTyule} to give streamlined proofs of basic results on degree distribution in the RRT. Recall in particular from Proposition \ref{prop:yule} that we have \begin{eqnarray} \label{eq:taynb} \frac{\tau_{n}}{ \log n} \xrightarrow[n\to\infty]{a.s.} 1, \end{eqnarray} and more precisely $ \tau_n - \log n \to G$ where $ G$ has the Gumbel distribution. By the above construction and Proposition \ref{prop:standardpoisson}, for all $t \geq 0$, the degree of the root vertex $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$ in $ \{\mathbb{T}\}_{t} $ is given by $ \mathfrak{P}(t)$ where $( \mathfrak{P}(t) : t \geq 0)$ is a unit-rate Poisson counting process. 
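As a quick sanity check, the discrete chain can also be simulated with no reference to continuous time: in the RRT, vertex $k$ attaches to a uniform vertex among $0,\dots,k-1$, so the degree of the root is a sum of independent Bernoulli$(1/k)$ variables with mean the harmonic number $H_n \approx \log n$, in line with the Poisson description above. A minimal sketch (plain Python; the helper name is ours):

```python
import random

def rrt_root_degree(n, rng):
    """Degree of vertex 0 in a random recursive tree on {0, 1, ..., n}:
    vertex k >= 1 attaches to a uniform vertex among 0..k-1, hence to
    the root with probability 1/k."""
    deg0 = 0
    for k in range(1, n + 1):
        if rng.randrange(k) == 0:  # uniform attachment hits the root
            deg0 += 1
    return deg0

rng = random.Random(0)
n, trials = 10_000, 200
avg = sum(rrt_root_degree(n, rng) for _ in range(trials)) / trials
# E[deg(0)] is the harmonic number H_n = 1 + 1/2 + ... + 1/n (about 9.79 here)
```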
This enables us to deduce a stronger version of \eqref{eq:smalldegreelog} given in the last chapter: \begin{proposition} \label{prop:degreeroot}We have the following convergences $$ \frac{\mathrm{deg}_{\{ \mathbb{T}\}_{\tau_{n}}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}})}{\log n} \xrightarrow[n\to\infty]{a.s.} 1 \quad \mbox{ and }\quad \frac{\mathrm{deg}_{\{ \mathbb{T}\}_{\tau_{n}}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}})- \log n}{ \sqrt{\log n}} \xrightarrow[n\to\infty]{(d)} \mathcal{N}(0,1).$$ \end{proposition} \noindent \textbf{Proof.} Since the degree of the root $ \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}} $ in $ \{ \mathbb{T}\}_t$ is given by the Poisson counting process $ \mathfrak{P}(t)$ along the left-most branch, using \eqref{eq:lawll} we deduce that $$ \frac{\mathrm{deg}_{ \{ \mathbb{T} \}_{t}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}})}{t} \xrightarrow[t\to\infty]{a.s.} 1 \quad \mbox{ and }\quad \left(\frac{\mathrm{deg}_{ \{\mathbb{T}\}_{tu}}( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}})- tu}{ \sqrt{t}}\right)_{u \geq 0} \xrightarrow[t\to\infty]{(d)} (B_{u})_{u \geq 0},$$ where $B$ is a standard linear Brownian motion and where the convergence on the right-hand side holds with respect to the topology of uniform convergence on every compact subset of $ \mathbb{R}_{+}$. From Proposition \ref{prop:yule} it follows that $\tau_{n} \sim \log n$ a.s. as $ n\to \infty$ and the desired statement follows by combining those observations and using Lemma \ref{lem:slut}.\qed \medskip \subsection{A new look at Goncharov \& Kolchin's result} \label{sec:goncharovback} Let us now use the link between uniform permutations and the RRT, and the construction of the latter from a standard Yule tree, to give a fresh look at Goncharov \& Kolchin's result (Theorem \ref{thm:poissoncycle}) on the Poisson statistics of small cycle counts.
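The Poisson behaviour is easy to observe numerically before proving it: in a uniform permutation of $\{1,\dots,n\}$ with $n$ large, the number of cycles of length $k$ has mean exactly $1/k$ and is approximately Poisson. A minimal Monte Carlo sketch (plain Python; the helper names are ours):

```python
import random
from collections import Counter

def cycle_type(perm):
    """Return Counter {cycle length: number of cycles} of a permutation
    given as a list with perm[i] = image of i."""
    seen = [False] * len(perm)
    counts = Counter()
    for i in range(len(perm)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] += 1
    return counts

rng = random.Random(7)
n, trials = 200, 4000
mean_fixed = 0.0     # average number of fixed points (1-cycles)
mean_2cycles = 0.0   # average number of 2-cycles
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)
    c = cycle_type(perm)
    mean_fixed += c[1]
    mean_2cycles += c[2]
mean_fixed /= trials    # exact expectation is 1
mean_2cycles /= trials  # exact expectation is 1/2
```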
More precisely, we shall give a direct proof of Lemma \ref{lem:shepplloyd} due to Lloyd \& Shepp without relying on Cauchy's formula: \medskip \noindent \textbf{Proof of Lemma \ref{lem:shepplloyd}, second version.} Consider the increasing tree $\{ \mathbb{T}\}_t$ and let us denote by $\sigma$ the random permutation associated with it thanks to Section \ref{sec:CRP-RRT}. In particular, conditionally on its size, the permutation $\sigma$ is uniformly distributed. Recall also that the cycle lengths of $\sigma$ correspond to the sizes of the subtrees above $\noeud{0}$ in $\{ \mathbb{T}\}_t$, the latter corresponding via the construction of Figure \ref{fig:yuleRRT} to the sizes (numbers of individuals living at time $t$) of the subtrees branching off from the left-most branch in $[ \mathbb{T}]_t$. By \eqref{eq:yuleexplicit}, the point process on the left-most branch, identified with $[0,t]$, of locations at which a subtree reaching $k \in \{1,2, \dots \}$ individuals at time $t$ branches off is a Poisson process with intensity $$ \mathrm{e}^{-(t-s)}(1- \mathrm{e}^{-(t-s)})^{k-1} \mathbf{1}_{s \in [0,t]} \mathrm{d}s,$$ and furthermore, by Poisson thinning, those processes are independent for different values of $k$. We deduce that the numbers of cycles of length $k \in \{1,2, \dots \}$ in $ \sigma$ are given by independent Poisson variables with mean $$ \int_0^t \mathrm{d}s\ \mathrm{e}^{-(t-s)}(1- \mathrm{e}^{-(t-s)})^{k-1} = \frac{(1- \mathrm{e}^{-t})^{k}}{k}.$$ This is exactly the statement of Lemma \ref{lem:shepplloyd} with $x = (1- \mathrm{e}^{-t})$. \qed \bigskip \subsection{Concentration of local statistics} The continuous-time embedding and its independence properties can also be used to efficiently prove concentration of local statistics in the RRT.
Let us focus on the degree to illustrate the method: for $k \geq 0$ and $t \geq 0$ introduce the variable $$ \mathrm{D}_k([ \mathbb{T}]_t) := \# \Big\{ u \in \{ \mathbb{T}\}_t \backslash \noeud{0} : \mathrm{deg}^{+}_{\{ \mathbb{T}\}_t}(u) =k\Big\},$$ which counts the number of vertices (except the root) in the contraction of $[ \mathbb{T}]_t$ whose out-degree is $k$. Then we have \begin{proposition}[Concentration of local statistics] \label{prop:concentrationlocal} We have $$ \frac{\mathrm{D}_k([ \mathbb{T}]_t)}{ \# \partial [ \mathbb{T}]_t} \xrightarrow[t\to\infty]{ ( \mathbb{P})} \lim_{t \to \infty} \mathrm{e}^{-t} \mathbb{E}[ \mathrm{D}_k([ \mathbb{T}]_t)],$$ where the limit exists. \end{proposition} It will follow from the forthcoming Theorem \ref{prop:manyto1} that the limit above is equal to $2^{-k-1}$, thus proving Proposition \ref{prop:empirical}, see Section \ref{sec:maxdegreeyule}. A little more effort in the proof enables one to prove an almost sure convergence. \medskip \noindent \textbf{Proof.} The proof crucially relies on the Markov property of the Yule tree: recall that conditionally on $[ \mathbb{T}]_{t}$ the tree $ [\mathbb{T}]_{t+s}$ is obtained by grafting $\# \partial [ \mathbb{T}]_t$ i.i.d.~copies of $[ \mathbb{T}]_s$ on the leaves of $[ \mathbb{T}]_t$. This enables us to write for any $s,t \geq 0$ the stochastic inequalities \begin{eqnarray}\sum_{i=1}^{\# \partial [ \mathbb{T}]_t} \mathrm{D}_k( [ \mathbb{T}^{(i)}]_{s})-\# \partial [ \mathbb{T}]_t \leq \mathrm{D}_k( [ \mathbb{T}]_{t+s}) \leq \sum_{i=1}^{\# \partial [ \mathbb{T}]_t} \mathrm{D}_k( [ \mathbb{T}^{(i)}]_{s}) + \# \partial [ \mathbb{T}]_t, \label{eq:stosandwhich}\end{eqnarray} where $ \mathbb{T}^{(i)}$ are i.i.d.~standard Yule trees of order $2$ independent of $ \mathbb{T}$.
Taking expectation and dividing by $ \mathrm{e}^{t+s}$ we deduce with the shorthand notation $ {d}_{k}(t) = \mathbb{E}[D_{k}( [ \mathbb{T}]_{t})] \mathrm{e}^{ -t}$ that $$ d_{k}(s) - \mathrm{e}^{-s}\leq d_{k}(t+s) \leq d_{k}(s) + \mathrm{e}^{-s}.$$ Taking $t \gg s \gg 1$, this shows that ${d}_{k}(t)$ converges as $t \to \infty$ and we denote its limit by $ d_{k}(\infty)$. Since $ \mathrm{e}^{-t}\# \partial [ \mathbb{T}]_{t} \to X$ almost surely where $X \sim \mathcal{E}(1)$, for any $ \varepsilon>0$, the weak law of large numbers applied twice in \eqref{eq:stosandwhich} shows that with a probability tending to $1$ as $ t \to \infty$ we have \begin{eqnarray} X ( d_{k}(s) - \mathrm{e}^{-s})(1 - \varepsilon) \leq \frac{ \mathrm{D}_k( [ \mathbb{T}]_{t+s})}{ \mathrm{e}^{t+s}} \leq (1+ \varepsilon) ( d_{k}(s) + \mathrm{e}^{-s}) X, \end{eqnarray} and taking again $ t \gg s \gg 1$ large, this implies the convergence in probability claimed in the proposition. \qed \bigskip \noindent \textbf{Bibliographical notes.} Passing discrete processes into continuous time to get more independence properties is usually called ``randomization'', ``Poissonization'' or ``continuous time embedding'' \cite{athreya1968embedding}. Background on Yule processes can be found in \cite{AN72}. Actually, Proposition \ref{prop:yule} is stated there but with a wrong limit law. This has been corrected in \cite[Lemma 3]{bertoin2004dual} with a proof different from the one presented here. The continuous-time embedding of the O.K. Corral model is taken from \cite{levine2007internal}. The connection between Yule trees and random growing trees has already been exploited many times in the literature, see e.g.~\cite[Section 3]{janson2021convergence} and the references therein. The pill problem (Proposition \ref{prop:pills}) has been solved in \cite{kuba2012limiting} using analytic combinatorics. Our solution based on continuous time seems to be new.
Proposition \ref{prop:concentrationlocal} (in a more general local version) implies that the random recursive tree converges in the Benjamini--Schramm sense (quenched), see \cite{Ald91c} or \cite[Example 6.1]{holmgren2015limit} for details. \medskip \noindent{\textbf{Hints for Exercises.}}\ \\ \noindent Exercise \ref{exo:memory}: The survival function $g(s) = \mathbb{P}(X > s)$ satisfies $g(s/n)^n = g(s)$ for any $s/n$ in the support of the law of $X$. Since $ \mathrm{Supp}( \mathcal{L}(X)) = \mathbb{R}_+$ and $g$ is decreasing, this forces $g(s) = \mathrm{e}^{-\alpha s}$ for some $\alpha > 0$.\\ \noindent Exercise \ref{exo:hideseek}: If $ X_1, \dots , X_n$ are independent exponential variables of parameter $1$, the probability is given by $$ \mathbb{P}\left( \frac{1}{2} \cdot X_n > \max (X_1, \dots , X_{n-1})\right) = \int_0^\infty \mathrm{d}t \, \mathrm{e}^{-t}(1- \mathrm{e}^{-t/2})^{n-1} = \frac{2}{n(n+1)}.$$ \chapter{Spine decomposition and applications} \label{chap:spine} \hfill Grow a spine!\bigskip We describe in this chapter the spine decomposition of Yule trees, which will be a key ingredient in our forthcoming applications to the random recursive and Barab\'asi--Albert trees. In particular, it will enable us to prove Theorems \ref{prop:maxdegreeRRT} and \ref{thm:heightRRT} on the max degree and max height in a random recursive tree of size $n$. \section{Spine decomposition of Yule trees} We fix $k \geq 2$ and consider under $ \mathbb{P} \equiv \mathbb{P}_{\delta_1}$ the plane Yule tree $ \mathbb{T} $ of order $k$ started from a single particle with rates equal to $1$ (see Section \ref{sec:yules}). Recall that for any $t\geq 0$ we denote by $ [\mathbb{T}]_{t}$ the tree $ \mathbb{T}$ cut at level $t$ and write $ \partial [\mathbb{T}]_{t}$ for the boundary of $ [\mathbb{T}]_{t}$ made of all particles alive at time $t$.
If $u \in \partial [\mathbb{T}]_t$ is a particle living at time $t$ on the Yule tree, we denote by $[\mathbb{T}]_t^u$ the tree obtained from $[\mathbb{T}]_t$ by distinguishing the branch going from the root to the particle $u$ living at height $t$. We also use the notation $ \# \partial [ \mathbb{T}]_{t}$ for the number of particles alive at time $t$ in $ \mathbb{T}$ (this was abbreviated by $ \mathcal{Y}^{(k)}_{t}$ in the previous chapter). \subsection{Martingale transform} This section, rather abstract, can be skipped at first reading. It presents the spine construction in a broader context, that of \textbf{martingale transformation}. We do not aim at the same level of rigor as in the rest of these pages and just hope to pique the reader's interest. Those willing to proceed with the applications should take Theorem \ref{prop:manyto1} (the many-to-one formula) for granted. \medskip In general, a positive martingale $ (M_{n} : n \geq 0)$ on a filtered probability space with filtration $( \mathcal{F}_{n} : n \geq 0)$ enables us to change the underlying measure $ \mathbb{P}$ by biasing it with the martingale $(M_n)$, see Exercise \ref{exo:martingalebiais} for a toy model. This is the essence of the famous ``Girsanov transformation'' in continuous stochastic calculus; let us see the effect of this transformation when applied to Yule trees with the martingale identified in the previous chapter. Recall from \eqref{eq:martingaleyule} that the process $ M_{t} := \mathrm{e}^{-(k-1)t} \# \partial [\mathbb{T}]_{t}$ is a martingale starting from $1$ for the filtration $ \mathcal{F}_{t} = \sigma ( [\mathbb{T}]_{s} : 0 \leq s \leq t)$. When in possession of such a positive martingale, one can perform a change of measure by biasing the underlying random variables by this martingale. Specifically, this is obtained by considering the probability $ \mathbb{Q}_{t}$ whose Radon--Nikodym derivative with respect to the underlying probability $ \mathbb{P}$ is $$ \left.
\frac{ \mathrm{d} \mathbb{Q}_{t}}{ \mathrm{d} \mathbb{P}}\right|_{ \mathcal{F}_{t}} = M_{t}.$$ Actually, since $ M_{t}$ is a martingale, this change of measure is coherent in the sense that for $0 \leq s \leq t$ we have $ \left.\mathbb{Q}_{t}\right|_{ \mathcal{F}_{s}} = \mathbb{Q}_{s}$. This can be checked by a one-line calculation using the martingale property: for any positive measurable function $F$ we have $$ \mathbb{E}[M_{t} F( [[\mathbb{T}]_{t}]_{s})] = \mathbb{E}[ \mathbb{E}[M_{t} F( [\mathbb{T}]_{s}) \mid \mathcal{F}_{s}]] = \mathbb{E}[M_{s} F( [\mathbb{T}]_{s})].$$ By coherence of the restrictions (and leaving the details of the topology, restriction ... to the courageous reader) one can thus define a probability measure $ \mathbb{Q}$ under which the random infinite tree $ \mathbb{T}$ has the property that $$ [ \mathbb{T}]_{t}\quad \mbox{ under } \mathbb{Q} \qquad \overset{(d)}{=} \qquad [ \mathbb{T}]_{t} \quad \mbox{ under } M_{t} \cdot \left. \mathrm{d} \mathbb{P}\right|_{ \mathcal{F}_{t}} = \mathbb{Q}_{t}.$$ Now, if $ \mathbb{T}_{t}^{\bullet}$ is obtained under $ \mathbb{Q}_{t}$ by distinguishing a particle of $\partial [ \mathbb{T}]_{t}$ uniformly at random (this actually distinguishes a branch in $[ \mathbb{T}]_t$), the same calculation as above enables us to see that the tree with distinguished branch obtained by restricting up to height $s$ has the same law as $ \mathbb{T}_{s}^{\bullet}$. By coherence of the restriction (and again leaving the details to the courageous reader) one can thus define a probability measure $ \mathbb{Q}$ and a random infinite tree $ \mathbb{T}^{\bullet}$ with an \textbf{infinite line of descent} so that for each $t$ the finite tree $ [ \mathbb{T}^{\bullet}]_{t}$ obtained by restricting to height $t$ and keeping the distinguished branch, has the distribution of $ \mathbb{T}_{t}^{\bullet}$ under $ \mathbb{Q}_{t}$. 
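To see such a martingale biasing at work on a finite example, one can carry out the change of measure by exact path enumeration for the stopped random walk of the toy model of Exercise \ref{exo:martingalebiais}. The sketch below (plain Python with exact rational arithmetic; all names are ours) checks that the biased weights sum to $1$ and recovers the transition probability $\frac{i+1}{2i}$ for an upward step from $i=2$:

```python
from fractions import Fraction
from itertools import product

def stopped_path(steps):
    """Path of the walk started at 1, frozen once it hits 0."""
    path = [1]
    for s in steps:
        path.append(path[-1] if path[-1] == 0 else path[-1] + s)
    return path

m = 4
# Q-weight of a length-m path: (uniform P-weight 2^{-m}) times M_m = S_{m ∧ τ0}
weight = {}
for steps in product((+1, -1), repeat=m):
    path = tuple(stopped_path(steps))
    weight[path] = weight.get(path, Fraction(0)) + Fraction(1, 2 ** m) * path[-1]

total = sum(weight.values())
assert total == 1  # M is a martingale with M_0 = 1, so Q is a probability

# Q(S_2 = 3 | S_1 = 2): the claimed kernel gives (i+1)/(2i) with i = 2, i.e. 3/4
num = sum(w for p, w in weight.items() if p[1] == 2 and p[2] == 3)
den = sum(w for p, w in weight.items() if p[1] == 2)
assert num / den == Fraction(3, 4)
```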
\begin{exo}[An example of martingale transform] \label{exo:martingalebiais} Let $(S_n : n \geq 0)$ be a simple symmetric random walk \textit{started from $1$}. We denote by $ \mathcal{F}_n$ its canonical filtration, and we let $\tau_0 = \inf \{ k \geq 0 : S_k=0\}$, so that the process $M_n = S_{n \wedge \tau_0}$ is a non-negative martingale. As above define the law $ \mathbb{Q}$ so that $$ \left. \frac{ \mathrm{d} \mathbb{Q}}{ \mathrm{d} \mathbb{P}}\right|_{ \mathcal{F}_{n}} = M_{n}.$$ Show that under $ \mathbb{Q}$ the process $(S_n : n \geq 0)$ is a Markov chain with transition probabilities $$ \mathbb{Q}(S_{n+1}= S_n +1\mid S_n = i ) = \frac{i+1}{2i}, \quad \mathbb{Q}(S_{n+1}= S_n -1\mid S_n = i ) = \frac{i-1}{2i}, \quad i \geq 1.$$ \end{exo} \subsection{Spine decomposition} The law of $ \mathbb{T}^{\bullet}$ under $ \mathbb{Q}$ is actually quite simple to describe. Consider a continuous time branching tree as in Section \ref{sec:AK} with \textbf{two types} of particles: standard particles of type $1$ which reproduce at rate $1$ and mutant particles of type $2$ which reproduce at rate $k\geq 2$. When a standard particle dies, it gives rise to $k$ standard particles, but when a mutant particle dies it gives rise to $k-1$ standard particles (type $1$) and a single mutant particle (type $2$). Actually, since we shall consider them as plane trees, we need to prescribe an ordering in the case of reproduction of a mutant, by placing the mutant descendant uniformly among its children. We then consider the random plane tree $ \mathbb{T}$ under the measure $ \mathbb{P}_{\delta_2}$ started with only one mutant: it is clear that there is a single line of descent composed of mutant particles and this defines a random tree with a distinguished ray $ \mathbb{T}^{ \bullet}$.
\begin{proposition}[Description of the law of $ \mathbb{T}^\bullet$] \label{prop:spinedecomp}The law of $ \mathbb{T}^{\bullet}$ under $\mathbb{Q}$ is that of $ \mathbb{T}^{\bullet}$ under $ \mathbb{P}_{\delta_2}$. \end{proposition} \begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{spine} \caption{The law of the pointed tree $[\mathbb{T}^{\bullet}]_{t}$ under $ \mathbb{Q}$ is the same as that of the Yule tree started with a mutant particle. In particular, when $k=2$ the ancestral line (Right on the figure) from the distinguished point to the root in $[\mathbb{T}^{\bullet}]_{t}$ under $ \mathbb{Q}$ is obtained by superimposing two independent Poisson processes of intensity $1$ for each side of the spine.} \end{center} \end{figure} Before giving the proof, let us provide the reader with an equivalent formulation, the so-called ``\textbf{Many-to-one formula}'', which can be read without reference to the measure $ \mathbb{Q}$. It will be very practical for applications as it enables us to perform first-moment calculation over all branches: \begin{theorem}[Many-to-one formula] \label{prop:manyto1}\noindent For any positive and measurable function $F$ we have $$ \mathbb{E}_{\delta_1}\left[ \sum_{u \in \partial [\mathbb{T}]_{t}} F( [\mathbb{T}]_{t}^{u})\right] = \mathrm{e}^{(k-1)t} \cdot \mathbb{E}_{\delta_2}\left[ F([\mathbb{T}^{\bullet}]_{t}) \right].$$ \end{theorem} \noindent \textbf{Proof.} By the definition of the objects we have with $ \mathbb{E}\equiv \mathbb{E}_{\delta_1}$ \begin{eqnarray*}\mathbb{E}\left[ \sum_{u \in \partial [\mathbb{T}]_{t}} F( [\mathbb{T}]_{t}^{u})\right] &=& \frac{\mathbb{E}[\# \partial [ \mathbb{T}]_t]}{ \mathbb{E}[\# \partial [ \mathbb{T}]_t]} \cdot \mathbb{E}\left[ \frac{\# \partial [ \mathbb{T}]_t}{\# \partial [ \mathbb{T}]_t} \sum_{u \in \partial [\mathbb{T}]_{t}} F( [\mathbb{T}]_{t}^{u})\right] \\ &=& \mathbb{E}[\# \partial [ \mathbb{T}]_t] \cdot \mathbb{E}_{ \mathbb{Q}}[F(\mathbb{T}_t^\bullet)] \\ &=& \mathrm{e}^{(k-1)t} 
\mathbb{E}_{\delta_2}\left[ F([\mathbb{T}^{\bullet}]_{t}) \right]. \end{eqnarray*}\qed \medskip \noindent \textbf{Proof of Proposition \ref{prop:spinedecomp}.} We consider the set $ \mathcal{S}_{t}$ (resp.\ $ \mathcal{S}^\bullet_{t}$) of all plane trees $\tau$ (resp.\ $\tau^\bullet$) where each vertex has $0$ or $k$ children and endowed with vertex lengths $ (\ell(u) : u \in \mathrm{Vertices}(\tau))$ so that the $\ell$-height (the sum of the vertex lengths from a vertex to the root) of all its leaves is exactly $t$ (resp.\ with a distinguished leaf $\bullet$). Recall that $[ \mathbb{T}]_t$ (resp.\ $[ \mathbb{T}]_t^u$ or $[ \mathbb{T}^\bullet]_t$) can be seen as an element of $ \mathcal{S}_t$ (resp.\ $ \mathcal{S}_t^\bullet$), see Figure \ref{fig:RRTbinary}. There is a natural measure on $ \mathcal{S}_{t}$ (resp.\ $ \mathcal{S}_{t}^\bullet$) obtained as the sum, for each finite plane tree $\tau$ as above, of the product of the Lebesgue measure for each $\ell(u)\geq 0$ for all non-leaf vertices $u \in \tau$, subject to the condition that the sum of all $\ell(u)$ for all $u$ from the root to a leaf stays below $t$ (the length of a leaf $v$ is then obtained as $t - \sum \ell(u)$ where the sum runs over all ancestors of $v$).
By construction of the (plane) Yule tree, the law of $[\mathbb{T}]_{t}$ under $ \mathbb{P}_{\delta_1}$ is absolutely continuous with respect to the above measure on $ \mathcal{S}_t$ with density given by $$ \prod_{u \in \tau \backslash \mathrm{Leaves}(\tau)} \mathrm{e}^{-\ell(u)} \prod_{v \in \mathrm{Leaves}(\tau)} \mathbb{P}( \mathcal{E} \geq \ell(v)) = \exp\left(- \sum_{ u \in \tau} \ell(u)\right),$$ so that the law of $ [\mathbb{T}^\bullet]_t$ under $ \mathbb{Q}$ has density with respect to the above measure on $ \mathcal{S}_t^\bullet$ given by $$ \exp\left(- \sum_{ u \in \tau^\bullet} \ell(u)\right) \times \mathrm{e}^{-(k-1)t} \times \frac{ \# \mathrm{Leaves}(\tau^\bullet)}{\# \mathrm{Leaves}(\tau^\bullet)} = \exp\left(- \sum_{ u \in \tau^\bullet} \ell(u)\right) \mathrm{e}^{-(k-1)t}.$$ On the other hand, the law of $[ \mathbb{T}^\bullet]_t$ under the two-type measure $ \mathbb{P}_{\delta_2}$ is also absolutely continuous with respect to the above measure: taking separately the behavior of the mutant particles along $ \mathrm{Spine}(\tau^\bullet)$, the path going from the root to the distinguished leaf, this density is seen to be \begin{eqnarray*} &=& \exp\left(- \sum_{u \in \tau^\bullet \backslash \mathrm{Spine}(\tau^\bullet) } \ell(u)\right) \times \left(\prod_{u \in \mathrm{Spine} \backslash \bullet} k \mathrm{e}^{- k \ell(u)} \cdot \frac{1}{k}\right) \times \underbrace{ \mathbb{P}( \mathcal{E}(k) \geq \ell(\bullet))}_{ \mathrm{e}^{-k \ell( \bullet)}}\\ &=&\exp\left(- \sum_{ u \in \tau^\bullet} \ell(u)\right) \mathrm{e}^{-(k-1)t}. \end{eqnarray*} Since the last two displays agree we have proved the proposition.
\qed \medskip \section{Application to extreme geometric properties of the RRT} Recall the construction of the random recursive tree $(T_n : n \geq 0)$ from the plane Yule tree $ \mathbb{T}$ described in Proposition \ref{prop:RRTyule}: in this section we shall suppose that $T_n = \{ \mathbb{T}\}_{\tau_n}$ where $(\tau_i : i \geq 0)$ are the jump times of the particle counting process and where $\{ \mathbb{T}\}_t$ is the increasing labeled tree obtained from $[ \mathbb{T}]_t$ by ``contracting'' the edges going to the left and numbering the vertices by order of appearance. We use the spinal decomposition to give quick proofs of the two results that were left unproven in Chapter \ref{chap:RRT}. \subsection{Maximal Height in RRT} \label{sec:heightRRT} We recall Theorem \ref{thm:heightRRT} here: For the random recursive tree $(T_{n} : n \geq 0)$ we have $$ \frac{\mathrm{Height}( T_{n})}{\log n} \xrightarrow[n\to\infty]{ a.s.} \mathrm{e}.$$ \noindent \textbf{Proof of Theorem \ref{thm:heightRRT}.} From Proposition \ref{prop:RRTyule} we can write $(T_n : n \geq 0) = (\{ \mathbb{T}\}_{\tau_n}: n \geq 0)$ where $\tau_n$ is the first time when there are $n+1$ particles alive in the Yule tree. Recall from Proposition \ref{prop:yule} and Eq.\ \eqref{eq:taynb} that $\tau_{n} \sim \log n$ almost surely as $n \to \infty$. Hence, by Lemma \ref{lem:slut}, the above theorem is a consequence of the previous two remarks provided that we prove $$ \left(\frac{\mathrm{Height}(\{ \mathbb{T}\}_{x \log n})}{\log n} : x \geq 0\right) \xrightarrow[n\to\infty]{a.s.} (x \cdot \mathrm{e} : x \geq 0), $$ for the uniform convergence over every compact subset of $ \mathbb{R}_+$. Since the height of $\{ \mathbb{T} \}_t$ is increasing with $t$, by Lemma \ref{lem:dini} it suffices to prove that \begin{eqnarray} \label{eq:goalRRTheight} \frac{\mathrm{Height}(\{ \mathbb{T}\}_t)}{t} \xrightarrow[t\to\infty]{ a.s.} \mathrm{e}.
\end{eqnarray} Now, recall from the construction of Section \ref{sec:rrtfromyule} that each particle $u \in \partial [ \mathbb{T}]_{t} $ is associated with a vertex in $\{ \mathbb{T}\}_{t}$, which we still denote $u$ by abuse of notation, whose distance to the root $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$ of $\{ \mathbb{T}\}_t$ satisfies \begin{eqnarray} \label{eq:distancespine} \mathrm{Dist}_{\{ \mathbb{T}\}_{t}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}, u) = \begin{array}{l} \mbox{ number of ancestral lineages pointing to the left} \\ \mbox{ in the spine between } u \mbox{ and the root in } [\mathbb{T}]_t. \end{array} \end{eqnarray} Let us start with the easy upper bound for \eqref{eq:goalRRTheight}.\\ \noindent \texttt{Upper bound}. By the many-to-one formula (Theorem \ref{prop:manyto1}) we have \begin{eqnarray*} \mathbb{P}\left( \mathrm{Height}(\{ \mathbb{T}\}_t) \geq x\right) &\leq & \mathbb{E}\left[ \# \{ u \in \partial [\mathbb{T}]_{t} : \mathrm{Dist}_{\{ \mathbb{T}\}_{t}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}, u) \geq x\}\right]\\ &\underset{ \mathrm{Thm.} \ref{prop:manyto1}}=& \mathrm{e}^{t} \cdot \mathbb{P}_{\delta_2}( \mathrm{Dist}_{\{ \mathbb{T}^{\bullet}\}_{t}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}, \bullet) \geq x) \\ &\underset{\eqref{eq:distancespine}}{=}& \mathrm{e}^{t} \cdot \mathbb{P}( \mathfrak{P}(t) \geq x), \end{eqnarray*} where $( \mathfrak{P}(t) : t \geq 0)$ is a standard Poisson counting process. When $x = ( \mathrm{e}+ \varepsilon)t$ for $ \varepsilon>0$ small, Lemma \ref{lem:LDpoisson} entails that the above probability decays to $0$ exponentially fast in $t$. By Markov's inequality and the Borel--Cantelli lemma we deduce that $\mathrm{Height}(\{\mathbb{T}\}_{n}) \leq ( \mathrm{e}+ \varepsilon) n$ eventually for $n \in \mathbb{Z}_{ >0}$ large enough, $ \mathbb{P}$-a.s.
Since $ t \mapsto \mathrm{Height}(\{\mathbb{T} \}_{t})$ is increasing, the same holds true when the integer $n$ is replaced by $t >0$.\\ \noindent \texttt{Lower bound}. By the previous calculation, we know that the expected number of branches $u \in \partial [\mathbb{T}]_{t}$ corresponding to a vertex $u$ at height $ \geq ( \mathrm{e} - \varepsilon)t$ in $\{ \mathbb{T}\}_t$ tends to $\infty$ exponentially fast with $t$. As usual, this does not imply right away that the number of such branches is nonzero with high probability. However, this fact can be used together with the branching property of $ \mathbb{T}$: Fix $t_{0}>0$ large enough so that \begin{eqnarray} \label{eq:meangeq2} \mathbb{E}\left[ \# \big\{ u \in \partial [\mathbb{T}]_{t_0} : \mathrm{Dist}_{\{ \mathbb{T}\}_{t_0}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}, u) \geq ( \mathrm{e} - \varepsilon)t_0\big\}\right] =\mathrm{e}^{t_0} \cdot \mathbb{P}( \mathfrak{P}(t_0) \geq ( \mathrm{e}- \varepsilon)t_0) \geq 2. \end{eqnarray} We now consider the branching process obtained by restricting the Yule tree to times $k \cdot t_{0}$ for $ k \in \{1,2, \dots\}$ and considering those particles $ u \in \partial [ \mathbb{T}]_{k t_{0}}$ for which there are at least $( \mathrm{e}- \varepsilon)t_0$ ancestral lineages pointing to the left between time $k t_{0}$ and time $ (k-1)t_{0}$ in $ \mathbb{T}$. By the Markov property of the Yule tree, those ``particles'' form a Bienaym\'e--Galton--Watson tree in discrete time $k \geq 0$ whose mean offspring is larger than $2$ by \eqref{eq:meangeq2}, so it survives with positive probability. Hence, there exists a random generation $0 \leq M < \infty$ from which on the branching process survives.
For $k \geq M$, a particle $u \in \partial [\mathbb{T}]_{k t_{0}}$ in this branching process has the property that $$ \mathrm{Dist}_{\{ \mathbb{T}\}_{kt_0}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}, u) \geq ( \mathrm{e}- \varepsilon) (k-M) t_{0},$$ which easily entails the lower bound $\mathrm{Height}(\{ \mathbb{T}\}_t) \geq ( \mathrm{e}-2 \varepsilon)t$ for $t$ large enough a.s. \qed \subsection{Maximal degree in RRT} \label{sec:maxdegreeyule} We now prove Theorem \ref{prop:maxdegreeRRT} on the maximum degree in $T_n$, which we also recall for the reader's convenience: Let $ \mathrm{MaxDegree}(T_n)$ be the largest vertex (out)-degree in the random recursive tree $T_{n}$. Then as $n \to \infty$ we have $$ \frac{ \mathrm{MaxDegree}(T_n)}{ \log_{2} n} \xrightarrow[n\to\infty]{a.s.} 1.$$ \noindent \textbf{Proof of Theorem \ref{prop:maxdegreeRRT}.} As in the previous section, since $ t \mapsto \mathrm{MaxDegree}(\{ \mathbb{T}\}_t)$ is increasing in $t$, and by virtue of \eqref{eq:taynb}, it is sufficient to prove that \begin{eqnarray} \label{eq:goalddegreeyule} \frac{\mathrm{MaxDegree}( \{ \mathbb{T}\}_t)}{t} \xrightarrow[t\to\infty]{a.s.} \frac{1}{\log(2)}. \end{eqnarray} As for the height, if $u \in \partial [\mathbb{T}]_{t}$, we can read off from $[ \mathbb{T}]_t$ the degree of $u$ inside $\{ \mathbb{T}\}_t$: it is easy to convince oneself by looking at Figure \ref{fig:yuleRRT} that we have \begin{eqnarray} \label{eq:degreespine} \mathrm{deg}^+_{\{ \mathbb{T}\}_{t}}( u) = \begin{array}{l} \mbox{ number of ancestral lineages pointing to the right} \\ \mbox{ in the spine between } u \mbox{ and the root in } \mathbb{T} \\ \mbox{ before the first ancestral lineage pointing to the left} . \end{array} \end{eqnarray} We now proceed separately with the upper and lower bounds for \eqref{eq:goalddegreeyule}. We set $$ \beta = \frac{1}{\log 2}$$ to ease notation. \noindent \texttt{Upper bound}.
In the two-type tree $ [ \mathbb{T}^\bullet]_t$ under $ \mathbb{P}_{\delta_2}$, the branching events to the left and right of the mutant branch are independent and appear as Poisson processes with intensity $1$. The number of lineages branching to the right before encountering a lineage branching to the left is then stochastically bounded from above by a geometric random variable with parameter $1/2$. By the many-to-one formula we thus have for $x \geq 1$ \begin{eqnarray*} \mathbb{P}\left( \exists u \in [ \mathbb{T}]_{t}: \mathrm{deg}^+_{\{ \mathbb{T} \}_t}(u) \geq x \right) &\leq& \mathbb{E}\left[ \# \{ u \in \partial [\mathbb{T}]_{t} : \mathrm{deg}^+_{\{ \mathbb{T} \}_t}(u) \geq x\}\right] \\ & \underset{ \mathrm{Thm.} \ref{prop:manyto1}}= & \mathrm{e}^{t} \mathbb{P}( \mathrm{deg}^+_{\{ \mathbb{T}^\bullet \}_t}(\bullet) \geq x ) \\ & \underset{ \eqref{eq:degreespine}}{\leq} & \mathrm{e}^{t} \mathbb{P}( \mathrm{Geo}(1/2) \geq x) = \mathrm{e}^{t} \,2^{-x}. \end{eqnarray*} If $x = (\beta +\varepsilon)t$ the above display goes to $0$ exponentially fast in $t$. We conclude using the Borel--Cantelli lemma and monotonicity as in the previous proof that $\mathrm{MaxDegree}(\{ \mathbb{T} \}_t) \leq ( \frac{1}{\log 2} + \varepsilon) t$ for all $t$ large enough a.s.\\ \texttt{Lower bound}. Let us consider all particles alive at time $t' = t (1- \frac{\beta}{2}) \approx t \times 0.27\dots$ inside $ [\mathbb{T}]_t$. Using the independence property of the Yule tree, and by considering only the monochromatic branches going from time $t'$ to time $t$ in $ [\mathbb{T}]_t$ (always turning left) we deduce that $$ \mathrm{MaxDegree}( \{ \mathbb{T} \}_t) \geq \max_{1 \leq i \leq \# \partial [ \mathbb{T}]_{t'}} X_i,$$ where conditionally on $ \# \partial [ \mathbb{T}]_{t'}$ the variables $X_i$ are independent and of law $ \mathfrak{P}(t-t') = \mathfrak{P}(t \beta/2)$.
By Proposition \ref{prop:yule} we have $\# \partial [ \mathbb{T}]_{t'} \sim_{a.s.} \mathcal{E}(1) \mathrm{e}^{ ( 1-\beta/2)t}$. In the notation of Lemma \ref{lem:LDpoisson}, an easy computation shows that with $\beta = \frac{1}{\log 2}$ we have $$ (1- \frac{\beta}{2}) - \frac{\beta}{2} I \left( \frac{\beta}{\beta/2}\right) = 0,$$ so that for $\varepsilon>0$ there exists $\delta >0$ with $(1- \frac{\beta}{2}-\delta) - \frac{\beta}{2} I \left( \frac{\beta - \varepsilon}{\beta/2}\right)>0$. In particular \begin{eqnarray*} &&\mathbb{P}\Big(\mathrm{MaxDegree}(\{ \mathbb{T} \}_t) \leq t( \beta- \varepsilon) \ \big| \ \# \partial [ \mathbb{T}]_{t'} \geq \mathrm{e}^{ (1- \frac{\beta}{2}-\delta) t}\Big) \\ & \leq & \Big(1- \mathbb{P}( \mathfrak{P}( t\beta/2) > (\beta- \varepsilon) t)\Big)^{\mathrm{e}^{ (1- \frac{\beta}{2}-\delta) t}} \\ & \underset{ \mathrm{Lem.} \ref{lem:LDpoisson}}{\leq} & \exp \left( - \mathrm{e}^{ (1- \frac{\beta}{2}-\delta) t} \mathrm{e}^{ -\frac{\beta}{2} I \left( \frac{\beta - \varepsilon}{\beta/2}\right) t}\right) \leq \exp( - \mathrm{e}^{ \mathrm{cst}\, t}),\end{eqnarray*} for some $ \mathrm{cst}>0$. Since the right-hand side is summable for $t \in \mathbb{Z}_{>0}$ and since eventually $\# \partial [ \mathbb{T}]_{t'} \geq \mathrm{e}^{ (1- \frac{\beta}{2}-\delta) t}$ with probability one, we deduce from the Borel--Cantelli lemma that $\mathrm{MaxDegree}( \{ \mathbb{T} \}_t) \geq t(\beta - \varepsilon)$ eventually along integer values of $t$. By monotonicity the same holds for all $t$ large enough and this concludes the proof.\qed \begin{remark} \label{rek:ouestgros} The proof of Theorem \ref{prop:maxdegreeRRT} actually shows that the maximal degree in the random recursive tree $T_n$ is attained by a vertex $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$i$}}}$ with $i \approx n^{(1-\beta/2) + o(1)} = n^{0.27\dots}$. 
This may seem counterintuitive since the vertex $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$ clearly has the largest degree for the stochastic order. \end{remark} The many-to-one formula and Equation \eqref{eq:degreespine} directly show that the limit appearing in Proposition \ref{prop:concentrationlocal} is equal to $2^{-k-1}$ as announced after the proposition. \bigskip \noindent \textbf{Bibliographical notes.} The spinal decomposition (and the associated many-to-one formula) is a very important tool in the theory of branching processes. Although it had precursors, e.g.\ \cite{chauvin1988kpp}, this method was popularized by Lyons, Pemantle and Peres \cite{LPP95b}. See also \cite{shi2015branching} for its numerous applications to branching random walks or \cite{AD15} for discrete Bienaym\'e--Galton--Watson trees. In general, martingale changes of measure are frequently encountered in probability theory ($h$-transforms, Girsanov formula, exponential tiltings...). See \cite{addario2018high} and \cite{addario2013poisson} for recent results about the maximal degree and height of random recursive trees. \medskip \noindent{\textbf{Hints for Exercises.}}\ \\ \noindent Exercise \ref{exo:martingalebiais}: Biasing by the martingale is equivalent to performing an $h$-transform with the function $h : i \mapsto i$ which is harmonic for the walk killed at $\tau_0$.
See \cite[Appendix A.3]{CurStFlour} for more.\\ \chapter{Barab\'asi-Albert preferential attachment tree} \hfill Rich get richer.\bigskip In this chapter we modify the RRT construction using a preferential attachment rule: \begin{definition}[BA] The \textbf{Barab\'asi--Albert (BA) preferential attachment tree} is the Markov chain with values in the set of unoriented labeled trees such that $ \mathsf{T}_1 =$ \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}--\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$1$}}} and so that for $n \geq 2$, conditionally on $ \mathsf{T}_{n-1}$, the labeled tree $ \mathsf{T}_{n}$ is obtained by attaching the new vertex \raisebox{.5pt}{\textcircled{\raisebox{-.6pt} {$n$}}} onto the vertex \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$k$}}} of $ \mathsf{T}_{n-1}$ with probability $$ \mathbb{P}\left( \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$n$}}} \to \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$k$}}} \big| \mathsf{T}_{n-1}\right) = \frac{ \mathrm{deg}_{ \mathsf{T}_{n-1}}(\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$k$}}})}{2(n-1)}.$$ \end{definition} \begin{figure}[!h] \begin{center} \includegraphics[height=3cm]{test14-6/T1.pdf} \includegraphics[height=2.95cm]{test14-6/T2bis.pdf} \includegraphics[height=3cm]{test14-6/T2.pdf} \includegraphics[height=3cm]{test14-6/T3.pdf} \includegraphics[height=3cm]{test14-6/T4.pdf} \includegraphics[height=3cm]{test14-6/T5.pdf} \includegraphics[height=3cm]{test14-6/T6.pdf} \includegraphics[height=3cm]{test14-6/T7.pdf} \includegraphics[height=3cm]{test14-6/T8.pdf} \includegraphics[height=3cm]{test14-6/T9.pdf} \includegraphics[height=3cm]{test14-6/T10.pdf} \includegraphics[height=3cm]{test14-6/T11.pdf} \includegraphics[height=3cm]{test14-6/T12.pdf} \includegraphics[height=3cm]{test14-6/T13.pdf} \includegraphics[height=3cm]{test14-6/T14.pdf} \caption{A sampling of the process $ \mathsf{T}_n$ for $n=1,2,3,4,8,16,32, \dots , 2^{14}$. 
The colors and sizes of the vertices indicate their degrees.\label{fig:BA}} \end{center} \end{figure} Since $ \mathsf{T}_{n}$ has $n$ edges, the sum of its vertex degrees is equal to $2n$, so that the normalization in the above definition indeed produces probability transitions. Compared to the random recursive tree, the preferential attachment model exhibits a reinforcement of large degrees, the ``rich get richer'' paradigm. This mechanism has been popularized by Barab\'asi \& Albert \footnote{\raisebox{-3mm}{\includegraphics[height=1.5cm]{barabasi-albert.jpg}} Albert-L\'aszl\'o Barab\'asi (1967--) and R\'eka Albert (1972--), Romanian} as a tractable model for real-world networks. It is possible to analyze this random tree growth using combinatorics as we did in Chapter \ref{chap:RRT}, but we shall rather use the convenient tools developed in the previous two chapters. \section{Equivalent constructions} As in Section \ref{sec:rrtfromyule} we shall see that the Barab\'asi--Albert tree process $( \mathsf{T}_n : n \geq 1)$ can be constructed from a Yule process. But before that, let us reinterpret it as a random \textbf{plane} recursive tree. \subsection{Plane recursive tree} Let us consider a plane variant of the random recursive tree construction, namely a Markov chain $( {T}^{ \mathrm{plan}}_{n} : n \geq 1)$ of labeled \textit{plane} trees where $T^{\mathrm{plan}}_{1} = \noeud{0}-\noeud{1}$ and where for $n \geq 2$, conditionally on $T^{\mathrm{plan}}_{n-1}$, the tree $T^{\mathrm{plan}}_{n}$ is obtained by grafting $ -\noeud{n}$ in one of the $2(n-1)$ corners (an angular sector made by two consecutive edges around a vertex) of $T^{\mathrm{plan}}_{n-1}$ uniformly at random. \begin{figure}[!h] \begin{center} \includegraphics[width=14cm]{RRTplan} \caption{Illustration of the construction of $( {T}^{ \mathrm{plan}}_{n} : n \geq 1)$.
The corners are represented by dots and the one selected for the grafting at the next step is in red.} \end{center} \end{figure} The tree $ {T}^{ \mathrm{plan}}_{n} $ is thus a plane tree (the root edge being the oriented edge $\noeud{0} \to \noeud{1}$) whose $n+1$ vertices are labeled by $\{0,1,2, \dots , n \}$ and such that the labels are increasing along branches starting from $\noeud{0}$. There are exactly $2^{n-1} (n-1)!$ such discrete tree structures and $ {T}^{ \mathrm{plan}}_{n}$ is, for each $n$, uniformly distributed over them. The following should then be clear: \begin{proposition}[Random plane recursive tree] The sequence of unlabeled non-plane trees obtained from $({T}^{ \mathrm{plan}}_{n} : n \geq 1)$ by forgetting the plane ordering is distributed as $ ( \mathsf{T}_{n} : n \geq 1)$.\end{proposition} It is also possible to obtain (a small variant of the) Barab\'asi--Albert tree process $( \mathsf{T}_n : n \geq 1)$ by modifying the uniform attachment rule: \begin{exo}[From RRT to BA] \label{exo:pawel} Consider the following attachment mechanism for labeled increasing trees starting with $ \mathfrak{T}_1 = \noeud{0}-\noeud{1}$: for $n \geq 2$ pick a uniform node $ \noeud{i}$ of $ \mathfrak{T}_{n-1}$ and attach $\noeud{n}$ with probability $1/2$ to $\noeud{i}$ or with probability $1/2$ to the first ancestor of $\noeud{i}$ (when going back towards $\noeud{0}$). If $\noeud{i} = \noeud{0}$, just attach $\noeud{n}$ to $\noeud{0}$. Show that the chain $ (\mathfrak{T}_n : n \geq 1)$ is very close to $( \mathsf{T}_n : n \geq 1)$. \end{exo} \subsection{Construction via Yule tree of order $3$}\label{sec:yuletoBA} Consider a forest $ \mathbb{F} = \mathbb{T}_0 \cup \mathbb{T}_1$ made of two independent plane Yule trees of order $3$ with rates equal to $1$, that is, in Section \ref{sec:AK} take $p=1$, $\alpha_1 =1$ and $\mu_1 = \delta_{3 \delta_1}$ and work under $ \mathbb{P}_{2 \delta_1}$.
To ease notation in the rest of this section, under $ \mathbb{P}$ the forest $ \mathbb{F}$ has law $ \mathbb{P}_{2 \delta_1}$ whereas $ \mathbb{T}, \mathbb{T}_0, \mathbb{T}_1$ have law $ \mathbb{P}_{\delta_1}$. As in the previous chapter, we shall suppose that those trees are obtained by labeling the vertices of the full ternary tree $ \bigcup_{n \geq 0} \{0,1,2\}^n$ with i.i.d.~random exponential variables with mean $1$. For $t \geq 0$, we shall perform a contraction operation on $[ \mathbb{F}]_t = [\mathbb{T}_0]_t \cup [\mathbb{T}_1]_t$ similar to that introduced in Section \ref{sec:rrtfromyule}: at each branch point of $[ \mathbb{F}]_t$, we shall separate the right-most particle created from its two brothers. This creates a partition of $[ \mathbb{F}]_t$ into smaller ``Yule trees of order $2$''. Contracting each of these smaller subtrees into a single node and labeling them in their order of appearance\footnote{with the convention that the subtree associated to the root of the first tree of $ \mathbb{F}$ corresponds to $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$} yields an increasing (non-plane) labeled tree which we denote by $\llbrack \mathbb{F}\rrbrack_t$. \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{barabasicontinuous} \caption{Constructing the increasing labeled tree $\llbrack \mathbb{F} \rrbrack_t$ by contracting all ``sub Yule trees of order $2$'' obtained by forgetting the right-most particle at each branch point.} \end{center} \end{figure} We then have the analog of Proposition \ref{prop:RRTyule}, which is proved using the same techniques: \begin{proposition}[From Yule to BA] \label{prop:BAyule} If $0= \tau_1 < \tau_{2}< \dots < \tau_{n}< \dots$ are the first times at which $\# \partial [ \mathbb{F}]_{\tau_n} = 2n$ then conditionally on $( \tau_{n} : n \geq 1)$ the process $( \llbrack \mathbb{F} \rrbrack_ {\tau_{n}} : n \geq 1)$ is a Barab\'asi--Albert preferential attachment tree.
\end{proposition} As in the preceding chapter, we will use the above construction together with our knowledge of Yule processes to deduce interesting geometric properties of the Barab\'asi--Albert tree, in particular on its maximal degree and its height. \section{Degrees} We denote by $\mathrm{deg}_{\llbrack \mathbb{F} \rrbrack_{t}}( \noeud{i})$ the degree of the $i$th vertex in the contraction of $[ \mathbb{F}]_t$ so that by Proposition \ref{prop:BAyule} we have the equality in terms of processes $$\left(\mathrm{deg}_{\llbrack \mathbb{F} \rrbrack_{\tau_n}}( \noeud{i}), \mbox{ for } i \leq n\right)_{n \geq 1} = \left(\mathrm{deg}_{ \mathsf{T}_n}( \noeud{i}), \mbox{ for } i \leq n\right)_{n \geq 1}.$$ \subsection{Almost sure convergence} Let us focus first on the degree of the root vertex $\noeud{0}$ inside $ \llbrack \mathbb{F} \rrbrack_{t}$. On the one hand, the variable $ \mathrm{deg}_{\llbrack \mathbb{F} \rrbrack_{t}}( \noeud{0})$ is equal to the number $ \mathcal{Y}^{{(2)}}_{t}$ of particles alive at time $t$ in the ``sub Yule process'' of order $2$ obtained by keeping only the first two children at each branching point. On the other hand, the total number of particles $\mathcal{Y}^{{(3)}}_{t} = \# \partial [ \mathbb{F}]_{t}$ alive at time $t$ in the forest is the sum of two independent Yule processes of order $3$. We deduce from Proposition \ref{prop:yule} the following almost sure convergences $$ \mathrm{e}^{-t} \mathcal{Y}^{{(2)}}_{t} \xrightarrow[t\to\infty]{a.s.} \mathcal{E} \quad \mbox{ and } \quad \mathrm{e}^{-2t} \mathcal{Y}^{{(3)}}_{t} \xrightarrow[t\to\infty]{a.s.} \frac{1}{2} \cdot \mathcal{E}',$$ where $ \mathcal{E}$ and $ \mathcal{E}'$ are two exponential variables of mean $1$ which are \textbf{not independent}.
In particular, it follows from the last display together with Proposition \ref{prop:BAyule} and Proposition \ref{prop:yule} that $ n^{-1/2} \cdot \mathrm{deg}_{\llbrack \mathbb{F} \rrbrack_{\tau_n}}( \noeud{0})$ converges almost surely towards $ \mathcal{E}/ \sqrt{ \mathcal{E}'/4}$ and more generally that: \begin{proposition}[Almost sure convergence of degrees] \label{prop:asdegree}There exists a vector of almost surely positive and finite random variables $( \mathcal{X}_{i} : i \geq 0)$ so that for each $i \geq 0$ we have the following almost sure convergence $$ \frac{\mathrm{deg}_{ \mathsf{T}_{n}}( \noeud{i})}{n^{1/2}} \xrightarrow[n\to\infty]{a.s.} \mathcal{X}_{i}.$$ Moreover the $( \mathcal{X}_i : i \geq 0)$ are almost surely distinct. \end{proposition} \noindent \textbf{Proof.} Recall that $\tau_i$ is the first time at which the vertex $\noeud{i}$ appears in $ \llbrack \mathbb{F} \rrbrack_t$. By the Markov property of the Yule process, for $t \geq \tau_i$ the degree $ \mathrm{deg}_{\llbrack \mathbb{F} \rrbrack_t}( \noeud{i})$ can be expressed as a counting process in a Yule tree of order $2$, whereas the total number of corners is given by the sum of two independent Yule processes of order $3$ (the number of vertices being half of it). Using Proposition \ref{prop:yule} three times, we deduce the almost sure convergence towards a positive random variable $ \mathcal{X}_i$. Let us now explain why $ \mathcal{X}_0 \ne \mathcal{X}_1$ with probability one, leaving the general case $ \mathcal{X}_i \ne \mathcal{X}_j$ to the reader. For $ a \in \{0,1\}$, denote by $ \mathcal{D}_a$ (resp. $ \mathcal{M}_a$) the limit of the renormalized size of the Yule tree of order $2$ (resp. of order $3$) obtained by keeping the first two children at each branching (resp. keeping all children) in the tree $ \mathbb{T}_a$.
By the above discussion, we have $$ \mathcal{X}_0 = \frac{ \mathcal{D}_0}{ \sqrt{ (\mathcal{M}_0 + \mathcal{M}_1)/2}} \quad \mbox{ and }\quad \mathcal{X}_1 = \frac{ \mathcal{D}_1}{ \sqrt{ (\mathcal{M}_0 + \mathcal{M}_1)/2}}.$$ Remark now that $ ( \mathcal{D}_0, \mathcal{M}_0)$ and $( \mathcal{D}_1, \mathcal{M}_1)$ are independent and that the $ \mathcal{D}_a$ have no atoms (they are exponentially distributed). Hence the probability that $ \mathcal{D}_0 = \mathcal{D}_1$ is $0$, implying that $ \mathcal{X}_0 \ne \mathcal{X}_1$ a.s. \qed \medskip \begin{exo}[A martingale approach] \label{exo:martingale} Here is a way to prove the almost sure convergence of renormalized degrees without the continuous-time embedding. Let $D_{n} = \mathrm{deg}_{ \mathsf{T}_{n}}( \noeud{0})$ for $n \geq 1$. Show that we have $$ \mathbb{E}[D_{n+1} \mid \sigma( \mathsf{T}_{k} : 1 \leq k \leq n)] = D_{n} \cdot \left( 1 + \frac{1}{2n}\right).$$ Conclude that $D_{n} \cdot \prod_{k=1}^{n-1} (1+ \frac{1}{2k})^{-1}$ is a positive martingale which converges almost surely and recover the first part of the previous proposition. \end{exo} \begin{figure}[!h] \begin{center} \includegraphics[width=13cm]{test14-6/courbes.pdf} \caption{\textsc{Degrees' race in the evolution of Figure \ref{fig:BA}.} Curves of the renormalized vertex degrees in the scale $ \sqrt{n}$ (on the $y$-axis) against a logarithmic scale for $n$ (on the $x$-axis). The renormalized degree of \noeud{0} is in red, that of $\noeud{1}$ in orange and that of $\noeud{2}$ (the vertex which asymptotically has the largest degree) in dark yellow. \label{fig:degrees}} \end{center} \end{figure} \subsection{Maximal degree} As in the case of the random recursive tree, one can wonder about the maximal degree in the Barab\'asi--Albert tree process. In the RRT, the largest degree after $n$ steps turned out not to be among the first nodes of the network but among the nodes that arrived at time $ \approx n^{0.27\dots}$, see Remark \ref{rek:ouestgros}.
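The race of renormalized degrees pictured in Figure \ref{fig:degrees} is easy to reproduce in simulation. The following sketch (the function name and variable names are ours, not from the text) grows $\mathsf{T}_n$ using the classical edge-endpoint trick: picking a uniform entry of a list in which every edge contributes both of its endpoints selects a vertex with probability $\mathrm{deg}_{\mathsf{T}_{n-1}}(\noeud{k})/(2(n-1))$, which is exactly the preferential attachment rule.

```python
import random

def grow_ba(n_max, seed=0):
    """Grow a Barabasi-Albert tree from T_1 = 0 -- 1 up to n_max + 1
    vertices and return the list of vertex degrees.  A uniform pick
    from `ends` (each edge listed by both endpoints) is degree-biased."""
    rng = random.Random(seed)
    deg = [1, 1]            # degrees in T_1
    ends = [0, 1]           # endpoints of the unique edge of T_1
    for n in range(2, n_max + 1):
        k = rng.choice(ends)        # attachment vertex, chosen with prob deg/(2(n-1))
        deg.append(1)               # new vertex n has degree 1
        deg[k] += 1
        ends.extend([k, n])
    return deg

deg = grow_ba(100_000)
n = len(deg) - 1
# renormalized degrees of the first vertices, approximating the X_i
print([round(deg[i] / n**0.5, 2) for i in range(3)])
```

Even at these moderate sizes one can watch the renormalized degrees $\mathrm{deg}_{\mathsf{T}_n}(\noeud{i})/\sqrt{n}$ settle down, in accordance with Proposition \ref{prop:asdegree}.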
Here, the fast decay of the degrees enables us to show that the largest degree actually belongs to the first few nodes of the network. More precisely we have: \clearpage \begin{theorem}[M\'ori] \noindent With the notation of Proposition \ref{prop:asdegree}, the random vector $ ( \mathcal{X}_i : i \geq 0)$ almost surely satisfies $ \mathcal{X}_i \to 0$ as $i \to \infty$ and the pointwise a.s.\ convergence can be reinforced into an almost sure convergence for the $\ell^\infty$ metric: \label{thm:mori} $$ \left( \frac{\mathrm{deg}_{ \mathsf{T}_{n}}( \noeud{i})}{n^{1/2}} : i \geq 0 \right) \xrightarrow[n\to\infty]{a.s. \mathrm{ \ for \ } \ell^\infty} (\mathcal{X}_{i} : i \geq 0).$$ \end{theorem} Combining the previous result with the fact (proved in Proposition \ref{prop:asdegree}) that the $ \mathcal{X}_i$ are positive and almost surely distinct, we deduce that the relative position $ \mathrm{Pos}( \noeud{i}, n)$ of the degree of node $ \noeud{i}$ among $\{ \noeud{0}, \dots , \noeud{n}\}$ converges almost surely as $n \to \infty$ towards $ \mathfrak{S}_i$ where $ \mathfrak{S} : \{0,1,2, \dots \} \to \{1,2,3, \dots \}$ is a bijection. This implies in particular the convergence of the index of the vertex with the largest degree in $ \mathsf{T}_n$. \medskip The main technical input for the proof of Theorem \ref{thm:mori} is a maximal inequality based on Proposition \ref{prop:yule}: \begin{lemma} \label{lem:techdeg} Let $ (\mathcal{Y}^{(2)}_t : t \geq 0)$ be the counting process of a standard Yule tree of order $2$, rate $1$, starting from $1$ particle. For all $x \geq 2$ we have $$ \mathbb{P}\left( \sup_{t \geq 0} \mathrm{e}^{-t} \mathcal{Y}^{(2)}_t \geq x\right) \leq 2 \,\mathrm{exp}(- x/2).$$ \end{lemma} \noindent \textbf{Proof.} Fix $x \geq 2$ and set $\theta = \inf\{ t \geq 0 : \mathrm{e}^{-t} \mathcal{Y}^{(2)}_t \geq x\}$.
On the event where the stopping time $\theta$ is finite, the strong Markov property entails that conditionally on $[ \mathbb{T}]_\theta$, the $\mathcal{Y}^{(2)}_\theta = \# \partial[ \mathbb{T}]_\theta=:N$ particles alive at time $\theta$ generate independent offspring trees distributed as standard Yule trees of order $2$. Recalling from Proposition \ref{prop:yule} that $\mathrm{e}^{-t} \mathcal{Y}^{(2)}_t \to \mathcal{E}$ a.s., on the event $\{ \theta < \infty\}$ we can write $$ \mathcal{E} = \lim_{t \to \infty} \mathrm{e}^{-t} \mathcal{Y}^{(2)}_t = \mathrm{e}^{-\theta} \sum_{i=1}^N \lim_{t \to \infty } \mathrm{e}^{-t} \mathcal{Y}^{(2),i}_t = \mathrm{e}^{-\theta} \sum_{i=1}^N \mathcal{E}_i,$$ where on the right-hand side, the variables $ (\mathcal{E}_i : i \geq 1)$ are i.i.d.~exponential variables of rate $1$ independent of $N$. Using the easy fact that $\inf_{k \geq 1} \mathbb{P}( \sum_{i=1}^k \mathcal{E}_i \geq k/2) \geq \frac{1}{2}$ we have \begin{eqnarray*} \mathrm{e}^{-x/2} = \mathbb{P}( \mathcal{E} \geq x/2) &\geq& \mathbb{E}\left[ \mathbf{1}_{\theta< \infty} \mathbb{P}\left( \mathrm{e}^{-\theta} \sum_{i=1}^N \mathcal{E}_i \geq \frac{x}{2} \right) \right]\\ & \underset{N \mathrm{e}^{-\theta} \geq x}{\geq} & \mathbb{E}\left[ \mathbf{1}_{\theta< \infty} \mathbb{P}\left( \sum_{i=1}^N \mathcal{E}_i \geq \frac{N}{2}\right) \right]\\ & \geq & \inf_{k \geq 1} \mathbb{P}\left( \sum_{i=1}^k \mathcal{E}_i \geq k/2\right) \cdot \mathbb{P}(\theta < \infty) \geq \frac{1}{2} \mathbb{P}( \theta < \infty).
\end{eqnarray*}\qed \medskip \noindent \textbf{Proof of Theorem \ref{thm:mori}.} Given the work done in the proof of Proposition \ref{prop:asdegree}, the convergence for the $\ell^\infty$ metric follows if we can show that $$ \lim_{m \to \infty }\sup_{i \geq m} \sup_{n \geq 1} \frac{ \mathrm{deg}_{ \mathsf{T}_n}(\noeud{i})}{ \sqrt{n}} =0, \quad a.s.$$ or via the continuous time representation that \begin{eqnarray} \lim_{T \to \infty }\sup_{{\footnotesize \noeud{i}} \mathrm{ \ created \ after\ }T} \sup_{t \geq T} \frac{ \mathrm{deg}_{ \llbrack \mathbb{F} \rrbrack_t}(\noeud{i})}{ \mathrm{e}^t} =0, \quad a.s. \label{eq:goalt0}\end{eqnarray} Fix $A>0$. For $t > A$, a new splitting appears in $ \mathbb{F}$ with intensity $ \# \partial [ \mathbb{F}]_t \ \mathrm{d}t$; this creates a new vertex in $ \llbrack \mathbb{F} \rrbrack_t$ and the probability that such a vertex gets a degree larger than $ \varepsilon \mathrm{e}^u$ at some later time $u = s+t \geq t$ is upper bounded by $$ \mathbb{P}(\sup_{s \geq 0} \mathrm{e}^{-s} \mathcal{Y}^{(2)}_s \geq \varepsilon \mathrm{e}^{t}) \underset{ \mathrm{Lem.\ } \ref{lem:techdeg} }{\leq} 2 \exp(- \varepsilon \mathrm{e}^{t} /2).$$ We deduce that \begin{eqnarray*} && \mathbb{E}\left[ \sum_{{\footnotesize \noeud{i}} \mathrm{ \ created \ after\ }A} \mathbf{1}\left\{\sup_{t \geq A} \frac{ \mathrm{deg}_{ \llbrack \mathbb{F} \rrbrack_t}(\noeud{i})}{ \mathrm{e}^t} \geq \varepsilon \right\}\right] \\ &\leq& \int_{A}^\infty \mathrm{d}t\ \mathbb{E}[ \# \partial [ \mathbb{F}]_t ] \cdot 2 \exp(- \varepsilon \mathrm{e}^{t} /2) \\ &=& \int_{A}^\infty \mathrm{d}t\ 2 \mathrm{e}^{2t} \cdot 2 \exp(- \varepsilon \mathrm{e}^{t} /2). \end{eqnarray*} For $ \varepsilon>0$ fixed, the above integral can be made arbitrarily small provided that $A>0$ is chosen large enough. This implies \eqref{eq:goalt0}.
\qed \subsection{Empirical degree distribution} As in Section \ref{sec:empiricalRRT} we can also study the \textbf{empirical degree distribution} in $ \mathsf{T}_{n}$: We let $ \nu_{n} $ be the (random) empirical distribution of the out-degrees defined by $$ \nu_{n} = \frac{1}{n+1}\sum_{i=0}^{n} \delta_{ \mathrm{deg}_{ \mathsf{T}_{n}}^+( {\footnotesize \noeud{i}})}.$$ As for Proposition \ref{prop:empirical}, the empirical degree distribution converges towards a deterministic distribution which now has an interesting polynomial tail behavior: \begin{theorem}[Convergence of the empirical degree distribution] \label{prop:empiricalBA}The empirical distribution of the out-degrees in $ \mathsf{T}_{n}$ converges in probability towards an explicit deterministic law: for each $k \geq 0$ we have $$ \nu_{n}(\{k\}) \xrightarrow[n\to\infty]{( \mathbb{P})} \frac{4}{(k+1)(k+2)(k+3)}.$$ \end{theorem} \noindent \textbf{Proof.} We naturally use the construction $ \mathsf{T}_n = \llbrack \mathbb{F} \rrbrack_{\tau_n}$ valid for all $n \geq 1$ simultaneously. The same proof as for Proposition \ref{prop:concentrationlocal} shows that $$ \frac{\mathsf{D}_k([ \mathbb{F}]_t)}{ \# \partial [ \mathbb{F}]_t} \xrightarrow[t\to\infty]{ ( \mathbb{P})} \lim_{t \to \infty} \mathrm{e}^{-2t} \mathbb{E}[ \mathsf{D}_k([ \mathbb{F}]_t)],$$ where $$ \mathsf{D}_k([ \mathbb{F}]_t) := \# \Big\{ u \in \llbrack \mathbb{F} \rrbrack_t \backslash \noeud{0} : \mathrm{deg}^{+}_{ \llbrack \mathbb{F}\rrbrack_t}(u) =k\Big\},$$ and where the limit exists. We compute the expectation of $ { \mathsf{D}}_k( [ \mathbb{T}]_t)$, the number of vertices different from $ \noeud{0}$ and of out-degree $k\geq 0$ in $\llbrack \mathbb{T} \rrbrack_t$, in a single contracted Yule tree of order $3$.
As in Section \ref{sec:goncharovback}, recall that a new vertex is created at time $s$ with intensity $ \# \partial [ \mathbb{T}]_s$ and by \eqref{eq:yuleexplicit}, this vertex has out-degree $k$ at time $t$ (that is, its contracted sub-Yule tree of order $2$ consists of $k+1$ particles) with probability $ \mathrm{e}^{-(t-s)}(1- \mathrm{e}^{-(t-s)})^{k}$. Recalling that $ \mathbb{E}[\# \partial [ \mathbb{T}]_t]= \mathrm{e}^{2t}$ we have \begin{eqnarray*} \mathrm{e}^{-2t} \cdot \mathbb{E}\left[{ \mathsf{D}}_k( [ \mathbb{T}]_t) \right] &=& \mathrm{e}^{-2t} \int_0^t \mathrm{d}s \, \mathbb{E}[\# \partial [ \mathbb{T}]_s] \cdot \mathrm{e}^{-(t-s)}(1- \mathrm{e}^{-(t-s)})^{k}\\ &\underset{u=t-s}{=}& \int_0^t \mathrm{d}u \, \mathrm{e}^{-3u} (1- \mathrm{e}^{-u})^{k} \\ & \xrightarrow[t\to\infty]{} & \int_0^\infty \mathrm{d}u \, \mathrm{e}^{-3u} (1- \mathrm{e}^{-u})^{k}\\ & \underset{x= \mathrm{e}^{-u}}{ = }& \int_0^1 \mathrm{d}x\, x^2(1-x)^{k}= \frac{2}{(k+1)(k+2)(k+3)}. \end{eqnarray*} In the case of two trees, we also have for any $k \geq 0$ $$ \frac{ { \mathsf{D}}_k( [ \mathbb{F}]_t)}{ \# \partial [ \mathbb{F}]_t} \xrightarrow[t\to\infty]{( \mathbb{P})}\frac{2}{(k+1)(k+2)(k+3)},$$ which proves the result since the number of vertices in $ \llbrack \mathbb{F} \rrbrack_t$ is half of $\# \partial [ \mathbb{F}]_t$. \qed \begin{remark}[Scale-free property] The fact that the empirical degree distribution $\nu_n$ converges towards a limiting law $\nu_\infty$ with a polynomial tail behavior $\nu_\infty(\{k\}) \approx k^{-\alpha}$ with $\alpha > 0$ is usually referred to as the \textbf{scale-free} property. In the case of Barab\'asi--Albert trees, the tail with exponent $\alpha=3$ is coherent with the fact that the largest degree in $ \mathsf{T}_n$ is of order $ \sqrt{n} = n^{1/(\alpha-1)}$, which is the order of magnitude of the maximum of $n$ i.i.d.~samplings according to $\nu_\infty$. \end{remark} \section{Height} We finish by studying the maximal height in $ \mathsf{T}_n$.
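The logarithmic growth of the height is easy to observe numerically before proving it. The following is a rough simulation sketch (the helper name is ours, not from the text): it grows the tree with the degree-biased attachment rule, again via the edge-endpoint trick, and tracks the depth of each new vertex.

```python
import math
import random

def ba_height(n_max, seed=0):
    """Height of a Barabasi-Albert tree grown to n_max + 1 vertices.
    A uniform pick from `ends` (each edge listed by both endpoints)
    selects a degree-biased vertex, i.e. the preferential attachment rule."""
    rng = random.Random(seed)
    depth = [0, 1]          # T_1 = 0 -- 1, rooted at vertex 0
    ends = [0, 1]
    height = 1
    for n in range(2, n_max + 1):
        k = rng.choice(ends)
        depth.append(depth[k] + 1)
        height = max(height, depth[n])
        ends.extend([k, n])
    return height

n = 100_000
print(ba_height(n) / math.log(n))
```

At accessible sizes the ratio $\mathrm{Height}(\mathsf{T}_n)/\log n$ is only a crude approximation of the limiting constant, since the lower-order corrections decay slowly.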
The preferential attachment mechanism does yield smaller trees compared to the uniform attachment case, but their heights remain of logarithmic order: \begin{theorem}[Pittel] We have \label{thm:heightBA} $$ \frac{ \mathrm{Height}( \mathsf{T}_n)}{\log n} \xrightarrow[n\to\infty]{a.s.} c\approx 1.79\dots,$$ where $c=(2 \gamma)^{-1}$ for $\gamma$ the solution to $ \gamma \mathrm{e}^{1+\gamma}= 1$. \end{theorem} \noindent \textbf{Sketch of proof.} The proof follows the same strategy as that of Theorem \ref{thm:heightRRT} presented in Section \ref{sec:heightRRT}. Similarly to \eqref{eq:distancespine}, a particle $u \in \partial [ \mathbb{F}]_{t} $ is associated with a vertex in $\llbrack \mathbb{F}\rrbrack_{t}$ whose distance to the root $\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {$0$}}}$ in $\llbrack \mathbb{F} \rrbrack_t$ is equal to the number of branch points along the spine for which the lineage to $u$ is the right-most. When $u = \bullet$ is the distinguished particle of $[ \mathbb{T}^\bullet]_t$ under $ \mathbb{P}_{\delta_2}$, branchings happen at rate $3$ and a third of them are of the above form. By the many-to-one formula (Theorem \ref{prop:manyto1}) we then have $$ \mathbb{E}\left[\sum_{u \in \partial [ \mathbb{T}]_t} \mathbf{1}\{\mathrm{dist}_{\llbrack \mathbb{T} \rrbrack_t}(u, \noeud{0}) \geq \alpha t \}\right] = \mathrm{e}^{2t} \mathbb{P}( \mathfrak{P}(t) \geq \alpha t).$$ By Lemma \ref{lem:LDpoisson}, when $\alpha = \gamma^{-1} + \varepsilon$ the previous display converges to $0$ exponentially fast as $t \to \infty$ (notice that $a=\gamma^{-1}$ is a solution to $a \log a - (a-1)=2$). Since there are roughly $ n \approx \mathrm{e}^{2t}$ vertices at time $t$ in $ \llbrack \mathbb{F} \rrbrack_t$, we deduce using the same arguments as in Section \ref{sec:heightRRT} that the height of $ \mathsf{T}_n$ is eventually less than $ \frac{\gamma^{-1}+\varepsilon}{2} \log n$ as $n \to \infty$ a.s.
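As a sanity check on the constants just used (a side computation of ours, not part of the original text), one can solve $\gamma \mathrm{e}^{1+\gamma}=1$ numerically by bisection and confirm that $c=(2\gamma)^{-1}\approx 1.7956$ and that $a=\gamma^{-1}$ solves $a\log a-(a-1)=2$:

```python
from math import exp, log

def f(g):
    # Root of f gives the constant gamma with gamma * e^(1+gamma) = 1.
    return g * exp(1.0 + g) - 1.0

# Bisection on (0, 1): f(0+) < 0 and f(1) = e^2 - 1 > 0.
lo, hi = 1e-9, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
gamma = 0.5 * (lo + hi)
c = 1.0 / (2.0 * gamma)

a = 1.0 / gamma
# gamma e^(1+gamma) = 1 gives log a = 1 + gamma, hence
# a log a - (a - 1) = a(1 + gamma) - a + 1 = a*gamma + 1 = 2 exactly.
assert abs(a * log(a) - (a - 1.0) - 2.0) < 1e-9
assert abs(c - 1.7956) < 1e-3   # c = 1.79..., as in the theorem
```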
The lower bound follows mutatis mutandis the same lines as in Section \ref{sec:heightRRT} and we leave it as an exercise for the reader. \qed \paragraph{Bibliographical notes.} Although generally attributed to Barab\'asi \& Albert \cite{AB99}, which is one of the most cited papers in mathematics with more than 45 000 citations up to 2023, the model of linear preferential attachment tree has been studied before (at least) by Szymanski \cite{szymanski1987nonuniform} and Mahmoud \cite{mahmoud1992distances}. This is a very good example of Stigler's law of eponymy. Exercise \ref{exo:pawel} was suggested by Pavel Krapivsky. The almost sure convergence of the largest degrees (Theorem \ref{thm:mori}) is due to M\'ori \cite{mori2005maximum}. Theorem \ref{thm:heightBA} was first proved in \cite{pittel1994note} using the continuous time embedding technique.\medskip \noindent{\textbf{Hints for Exercises.}}\ \\ \noindent Exercise \ref{exo:pawel}: A new vertex attaches to $\noeud{i}$ with probability proportional to $ \mathrm{deg}^+_{ \mathfrak{T}_n}(\noeud{i})+1$, which is equal to $ \mathrm{deg}_{ \mathfrak{T}_n}(\noeud{i})$ except for the root $\noeud{0}$ which has a small bias. \chapter*{Appendix} \hfill So the last shall be first.\\ \hfill (Matthew 20:16) \bigskip \section*{Large deviations for Poisson random variables} Let us state a simple lemma on Poisson random variables which we used many times in these lecture notes. Recall that $ \mathfrak{P}(a)$ is a Poisson variable of mean $a >0$.
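Concretely, the Chernoff-type bound $\mathbb{P}(\mathfrak{P}(t)>at)\leq \mathrm{e}^{-t(a\log a-(a-1))}$ for $a>1$, recorded in the lemma below, can be compared with the exact Poisson tail; the following short computation (ours, not part of the notes) confirms it on a few values:

```python
from math import exp, log, factorial

def I(a):
    # Large deviation rate function I(a) = a log a - (a - 1).
    return a * log(a) - (a - 1.0)

def poisson_upper_tail(t, threshold):
    # Exact P(Poisson(t) > threshold), computed as 1 - CDF at floor(threshold).
    cdf = sum(exp(-t) * t**k / factorial(k) for k in range(int(threshold) + 1))
    return 1.0 - cdf

# The exact tail never exceeds the exponential bound exp(-t I(a)) for a > 1.
for t in [5.0, 10.0, 20.0]:
    for a in [1.5, 2.0, 3.0]:
        assert poisson_upper_tail(t, a * t) <= exp(-t * I(a))
```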
\begin{lemma}[Large deviations and maximum of i.i.d.~Poisson random variables] \label{lem:LDpoisson} For $a >0$ let $I(a) := a\log a-(a-1)$; then for all $t \geq 0$ we have $$\mathbb{P}( \mathfrak{P}(t) > at) \leq \mathrm{e}^{-t I(a)} \quad \mbox{ if } a>1 \quad \mbox{ and } \quad \mathbb{P}( \mathfrak{P}(t) < at) \leq \mathrm{e}^{-t I(a)} \quad \mbox{ if } 0<a<1.$$ Fix $c \geq 0$ and let $ X_1,\dots , X_{ \lfloor n^c \rfloor}$ be i.i.d.~random variables with Poisson law of expectation $\log n$. Then we have $$ \frac{\max_{1 \leq i \leq n^c} X_i }{\log n} \xrightarrow[n\to\infty]{( \mathbb{P})} x_c, \quad \mbox{ with $x_c \geq 1$ the solution to } I(x_c) = c,$$ and furthermore $ \mathbb{P}(\max_{1 \leq i \leq n^c} X_i \leq (x_c - \varepsilon)\log n)$ tends to $0$ stretched-exponentially fast.\end{lemma} \noindent \textbf{Proof.} Suppose $a >1$ and apply the standard exponential Markov inequality: for $\lambda >0$ $$ \mathbb{P}( \mathfrak{P}(t) > at) \leq \frac{ \mathbb{E}[ \mathrm{e}^{\lambda \mathfrak{P}(t) }]}{ \mathrm{e}^{\lambda a t}} = \mathrm{exp}(t(( \mathrm{e}^\lambda -1)-\lambda a)) \underset{\lambda = \log a}{\leq} \exp(-t I(a)).$$ The case $a<1$ is dealt with similarly using negative $\lambda$. For the second point notice that for $x \geq 1$ we have, using the first point, $$ \mathbb{P}( \max_{1 \leq i \leq n^c} X_i \leq x \log n) = (1- \mathbb{P}( \mathfrak{P}(\log n) > x \log n))^{\lfloor n^c \rfloor} = \exp\left(- n^{c -I(x) + o(1)}\right),$$ and so the above probability tends to $0$ stretched-exponentially fast if $I(x) < c$ and to $1$ if $I(x)>c$. \qed \medskip The first part of Lemma \ref{lem:LDpoisson} is known under the name of ``Bennett's inequality'' (see Terence Tao's blog for a nice sharpening of it). \bibliographystyle{siam} \bibliography{bibli} \end{document}
\documentclass{amsproc} \usepackage{amsmath} \usepackage{enumerate} \usepackage{amsmath,amsthm,amscd,amssymb} \usepackage{latexsym} \usepackage{upref} \usepackage{verbatim} \usepackage[mathscr]{eucal} \usepackage{dsfont} \usepackage{graphicx} \usepackage[colorlinks,hyperindex,hypertex]{hyperref} \usepackage{hhline} \usepackage[OT2,OT1]{fontenc} \newcommand\cyr { \renewcommand\rmdefault{wncyr} \renewcommand\sfdefault{wncyss} \renewcommand\encodingdefault{OT2} \normalfont \selectfont } \DeclareTextFontCommand{\textcyr}{\cyr} \def\cprime{\char"7E } \def\cdprime{\char"7F } \def\eoborotnoye{\char'013} \def\Eoborotnoye{\char'003} \newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{hypothesis}[theorem]{Hypothesis} \chardef\bslash=`\\ \newcommand{\ntt}{\normalfont\ttfamily} \newcommand{\cn}[1]{{\protect\ntt\bslash#1}} \newcommand{\pkg}[1]{{\protect\ntt#1}} \newcommand{\fn}[1]{{\protect\ntt#1}} \newcommand{\env}[1]{{\protect\ntt#1}} \hfuzz1pc \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\secref}[1]{\S\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\fA}{\mathfrak{A}} \newcommand{\fB}{\mathfrak{B}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\st}{\sigma} \newcommand{\XcY}{{(X,Y)}} \newcommand{\SX}{{S_X}} \newcommand{\SY}{{S_Y}} \newcommand{\SXY}{{S_{X,Y}}} \newcommand{\SXgYy}{{S_{X|Y}(y)}} \newcommand{\Cw}[1]{{\hat C_#1(X|Y)}} \newcommand{\G}{{G(X|Y)}} \newcommand{\PY}{{P_{\mathcal{Y}}}} \newcommand{\X}{\mathcal{X}} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\dA}{{\dot A}} \newcommand{\dtU}{{\dot U}} \newcommand{\bbN}{{\mathbb{N}}} \newcommand{\bbR}{{\mathbb{R}}} \newcommand{\bbP}{{\mathbb{P}}} \newcommand{\bbZ}{{\mathbb{Z}}} \newcommand{\bbC}{{\mathbb{C}}} 
\newcommand{\supp}{\text{\rm{supp}}} \newcommand{\linspan}{\mathrm{lin\ span}} \newcommand{\ran}{\text{\rm{Ran}}} \newcommand{\f}{\frac} \newcommand{\ul}{\underline} \newcommand{\ol}{\overline} \newcommand{\ti}{\tilde } \newcommand{\wht}{\hat} \newcommand{\dom}{\text{\rm{Dom}}} \newcommand{\spec}{\text{\rm{spec}}} \newcommand{\calA}{{\mathcal A}} \newcommand{\calB}{{\mathcal B}} \newcommand{\calC}{{\mathcal C}} \newcommand{\calD}{{\mathcal D}} \newcommand{\calE}{{\mathcal E}} \newcommand{\calF}{{\mathcal F}} \newcommand{\calG}{{\mathcal G}} \newcommand{\calH}{{\mathcal H}} \newcommand{\calI}{{\mathcal I}} \newcommand{\calJ}{{\mathcal J}} \newcommand{\calK}{{\mathcal K}} \newcommand{\calL}{{\mathcal L}} \newcommand{\calM}{{\mathcal M}} \newcommand{\calN}{{\mathcal N}} \newcommand{\calO}{{\mathcal O}} \newcommand{\calP}{{\mathcal P}} \newcommand{\calQ}{{\mathcal Q}} \newcommand{\calR}{{\mathcal R}} \newcommand{\vecJ}{{\vec{J}}} \newcommand{\scrR}{\boldsymbol{\mathscr R}} \newcommand{\scrP}{{\mathscr P}} \newcommand{\romR}{{\mathrm R}} \newcommand{\sanR}{{\mathsf R}} \newcommand{\calS}{{\mathcal S}} \newcommand{\calT}{{\mathcal T}} \newcommand{\calU}{{\mathcal U}} \newcommand{\calV}{{\mathcal V}} \newcommand{\calW}{{\mathcal W}} \newcommand{\calZ}{{\mathcal Z}} \newcommand{\lb}{\label} \newcommand{\mR}{\mathfrak R} \newcommand{\mA}{\mathfrak A} \newcommand{\mL}{\mathfrak L} \newcommand{\mN}{\mathfrak N} \newcommand{\mM}{\mathfrak M} \newcommand{\mB}{\mathfrak B} \newcommand{\DdA}{\dom(\dA)} \newcommand{\DAst}{\dom(\dA^*)} \newcommand{\whA}{T} \newcommand{\whB}{T_{\cB}^\kappa} \newcommand{\whBo}{T_{\cB_0}} \newcommand{\Nl}{\mathfrak N_\lambda} \newcommand{\Nlb}{\mathfrak N_{\bar\lambda}} \newcommand{\Ml}{\mathfrak M_\lambda} \newcommand{\Mlb}{\mathfrak M_{\bar\lambda}} \newcommand{\Bl}{\mathfrak B_\lambda} \newcommand{\Blb}{\mathfrak B_{\bar\lambda}} \newcommand{\Cl}{C_\lambda} \newcommand{\dott}{\,\cdot\,} \newcommand{\bi}{\bibitem} \newcommand{\Oh}{O} 
\newcommand{\oh}{o} \newcommand{\rank}{\text{\rm{rank}}} \renewcommand{\Im}{\text{\rm Im}} \newcommand{\loc}{\text{\rm{loc}}} \newcommand{\Ree}{\text{\rm Re}} \def\sA{{\mathfrak A}} \def\sB{{\mathfrak B}} \def\sC{{\mathfrak C}} \def\sD{{\mathfrak D}} \def\sE{{\mathfrak E}} \def\sF{{\mathfrak F}} \def\sG{{\mathfrak G}} \def\sH{{\mathfrak H}} \def\sI{{\mathfrak I}} \def\sJ{{\mathfrak J}} \def\sK{{\mathfrak K}} \def\sL{{\mathfrak L}} \def\sM{{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}} \def\sP{{\mathfrak P}} \def\sQ{{\mathfrak Q}} \def\sR{{\mathfrak R}} \def\sS{{\mathfrak S}} \def\sT{{\mathfrak T}} \def\sU{{\mathfrak U}} \def\sV{{\mathfrak V}} \def\sW{{\mathfrak W}} \def\sX{{\mathfrak X}} \def\sY{{\mathfrak Y}} \def\sZ{{\mathfrak Z}} \def\bA{{\mathbb A}} \def\dB{{\mathbb B}} \def\dC{{\mathbb C}} \def\dD{{\mathbb D}} \def\dE{{\mathbb E}} \def\dF{{\mathbb F}} \def\dG{{\mathbb G}} \def\dH{{\mathbb H}} \def\dI{{\mathbb I}} \def\dJ{{\mathbb J}} \def\dK{{\mathbb K}} \def\dL{{\mathbb L}} \def\dM{{\mathbb M}} \def\dN{{\mathbb N}} \def\dO{{\mathbb O}} \def\dP{{\mathbb P}} \def\dQ{{\mathbb Q}} \def\dR{{\mathbb R}} \def\dS{{\mathbb S}} \def\dT{{\mathbb T}} \def\dU{{\mathbb U}} \def\dV{{\mathbb V}} \def\dW{{\mathbb W}} \def\dX{{\mathbb X}} \def\dY{{\mathbb Y}} \def\dZ{{\mathbb Z}} \def\cA{{\mathcal A}} \def\cB{{\mathcal B}} \def\cC{{\mathcal C}} \def\cD{{\mathcal D}} \def\cE{{\mathcal E}} \def\cF{{\mathcal F}} \def\cG{{\mathcal G}} \def\cH{{\mathcal H}} \def\cI{{\mathcal I}} \def\cJ{{\mathcal J}} \def\cK{{\mathcal K}} \def\cL{{\mathcal L}} \def\cM{{\mathcal M}} \def\cN{{\mathcal N}} \def\cO{{\mathcal O}} \def\cP{{\mathcal P}} \def\cQ{{\mathcal Q}} \def\cR{{\mathcal R}} \def\cS{{\mathcal S}} \def\cT{{\mathcal T}} \def\cU{{\mathcal U}} \def\cV{{\mathcal V}} \def\cW{{\mathcal W}} \def\cX{{\mathcal X}} \def\cY{{\mathcal Y}} \def\cZ{{\mathcal Z}} \def\mbf{{\mathbf f}} \def\mbg{{\mathbf g}} \def\mbh{{\mathbf h}} \def\mbA{{\mathbf A}} \def\mbB{{\mathbf B}} 
\def\mbK{{\mathbf K}} \def\bTheta{\boldsymbol{\theta}} \def\RE{{\rm Re\,}} \def\Ker{{\rm Ker\,}} \def\wt{\widetilde} \def\wh{\hat} \def\fS{\bf S} \def\f{\varphi} \def\bl{\bigl} \def\br{\bigr} \def\uphar{{\upharpoonright\,}} \def\ovl{\overline} \def\half{{\frac{1}{2}}} \newcommand{\cmr}{\dC \setminus \dR} \DeclareMathOperator{\per}{per} \DeclareMathOperator{\cov}{cov} \DeclareMathOperator{\non}{non} \DeclareMathOperator{\cf}{cf} \DeclareMathOperator{\add}{add} \DeclareMathOperator{\Cham}{Cham} \DeclareMathOperator{\IM}{Im} \DeclareMathOperator{\esssup}{ess\,sup} \DeclareMathOperator{\meas}{meas} \DeclareMathOperator{\seg}{seg} \DeclareMathOperator{\Ext}{Ext} \newcommand{\interval}[1]{\mathinner{#1}} \newcommand{\eval}[2][\right]{\relax #2#1\rvert} \newcommand{\envert}[1]{\left\lvert#1\right\rvert} \let\abs=\envert \newcommand{\enVert}[1]{\left\lVert#1\right\rVert} \let\norm=\enVert \newcommand{\Du}{\big|{\widetilde D}u \big|} \newcommand{\Duy}{\big|{\widetilde D}u_y \big|} \begin{document} \title{The c-Entropy optimality of Donoghue classes} \author{S. Belyi} \address{Department of Mathematics\\ Troy University\\ Troy, AL 36082, USA\\ } \curraddr{} \email{[email protected]} \author[K. A. Makarov]{K. A. Makarov} \address{Department of Mathematics\\ University of Missouri\\ Columbia, MO 63211, USA} \email{[email protected]} \author{E. Tsekanovskii} \address{Department of Mathematics, Niagara University, Lewiston, NY 14109, USA} \email{\tt [email protected]} \subjclass{Primary 47A10; Secondary 47N50, 81Q10} \date{DD/MM/2004} \keywords{L-system, transfer function, impedance function, Herglotz-Nevan\-linna function, Donoghue class, c-entropy, dissipation coefficient, perturbation} \begin{abstract} In this note we evaluate c-Entropy of perturbed L-systems introduced in \cite{BMkT-3}. Explicit formulas relating the c-Entropy of the L-systems and the perturbation parameter are established. 
We also show that c-Entropy attains its maximum value (finite or infinite) whenever the perturbation parameter vanishes, so that the impedance function of such an L-system belongs to one of the generalized (or regular) Donoghue classes. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{s1} This paper is devoted to the study of the connections between various subclasses of Herglotz-Nevanlinna functions and their realizations as the impedance functions of conservative L-systems (see \cite{ABT,BMkT,BMkT-2,BMkT-3,BT-21,Lv2}). Recall the concept of a conservative L-system. Let $T$ be a non-symmetric, densely defined, closed, dissipative linear operator in a Hilbert space $\cH$. We also assume that the lineal $$\dom (\dot A)=\dom(T)\cap \dom(T^*)$$ is dense in $\cH$ and that the restriction $\dot A=T|_{\dom(\dot A)}$ is a closed symmetric operator with deficiency indices $(1,1)$. Let $\calH_+\subset\calH\subset\calH_-$ be the rigged Hilbert space associated with the symmetric operator $\dot A$ (see the next section for details). By an \textit{L-system} we mean the array \begin{equation} \label{col0} \Theta = \left(\begin{array}{ccc} \bA & K & 1 \\ \calH_+\subset\calH\subset\calH_- & & \dC \\ \end{array}\right), \end{equation} where the \textit{state-space operator} $\bA$ is a bounded linear operator from $\calH_+$ into $\calH_-$ such that $\dA \subset T\subset \bA$, $\dA \subset T^* \subset \bA^*$, and $K$ is a bounded linear operator from $\dC$ into $\calH_-$ such that $\IM\bA=KK^*$. In the framework of the approach in question, the operator-valued function \begin{equation*}\label{W1} W_\Theta(z)=I-2iK^*(\bA-zI)^{-1}K,\quad z\in \rho(T), \end{equation*} is called the \textit{transfer function} of the L-system $\Theta$ and \begin{equation*}\label{real2} V_\Theta(z)=i[W_\Theta(z)+I]^{-1}[W_\Theta(z)-I] =K^*(\RE\bA-zI)^{-1}K,\quad z\in\rho(T)\cap\dC_{\pm}, \end{equation*} is called the \textit{impedance function} of $\Theta$.
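In the one-dimensional input-output setting considered here, the two formulas above are mutually inverse M\"obius transformations; the following scalar computation (an illustration of ours, not part of the original text) confirms this numerically and shows that values with $\IM V>0$ correspond to $|W|>1$:

```python
def W_from_V(v):
    # Scalar form of W = (I + iV)^{-1}(I - iV).
    return (1.0 - 1j * v) / (1.0 + 1j * v)

def V_from_W(w):
    # Scalar form of V = i (W + I)^{-1}(W - I).
    return 1j * (w - 1.0) / (w + 1.0)

# The two maps are mutually inverse Mobius transformations.
for v in [0.3 + 0.8j, -1.2 + 0.1j, 2.5 + 3.0j]:
    assert abs(V_from_W(W_from_V(v)) - v) < 1e-12
    # A direct computation gives |W|^2 - 1 proportional to Im V,
    # so Herglotz values (Im V > 0) correspond to |W| > 1.
    assert abs(W_from_V(v)) > 1.0
```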
The formal definition of L-systems is presented in Section \ref{s2}. From the analytic standpoint, the main role in our considerations is played by the generalized Donoghue classes introduced and discussed in \cite{BMkT}, \cite{BMkT-2}, \cite{BT-16}, \cite{BT-21}. Recall that the standard Donoghue class $\sM$ consists of all analytic functions $M(z)$ that admit the representation \begin{equation}\label{murep} M(z)=\int_\bbR \left (\frac{1}{\lambda-z}-\frac{\lambda}{1+\lambda^2}\right ) d\mu(\lambda), \quad z\in \bbC_+, \end{equation} for some infinite Borel measure $\mu$ such that \begin{equation}\label{norm} \int_\bbR \frac{d\mu(\lambda)}{1+\lambda^2}=1 \end{equation} (see, e.g., \cite{MT-S}). Given that, the {\it generalized} Donoghue classes accommodate the functions from $\sM$ composed with the action of the ``$ax+b$'' group, the group of orientation-preserving affine transformations of $\bbR$. Namely, for $a>0 $ and $ Q\in \bbR$ introduce the class of analytic mappings from the upper half-plane into itself \begin{equation}\label{e-4-NR} \calN_{a,Q}=\{a M+Q, M\in \sM\}, \quad a>0, \quad Q\in \bbR. \end{equation} As follows from \cite{BMkT} (see also \cite{BMkT-2,BT-16,BT-21}), the mappings from $\calN_{a,Q}$ can be realized as the impedance functions of L-systems of the form \eqref{col0}. One easily notices as well that the generalized Donoghue classes $\sM_\kappa$ and $\sM^{-1}_\kappa$ discussed in \cite{BMkT}, \cite{BMkT-2}, \cite{BT-16}, \cite{BT-21}, as well as the classes $\sM^Q$, $\sM^Q_\kappa$, $\sM^{-1,Q}_\kappa$ introduced in \cite{BMkT-3} by two of the authors, coincide with the class $\calN_{a,Q}$ defined by \eqref{e-4-NR} for a certain choice of $a$ and $Q$.
For instance, $$\sM_\kappa =\calN_{\frac{1-\kappa}{1+\kappa}, 0}\quad \text{and}\quad \sM_\kappa^Q =\calN_{\frac{1-\kappa}{1+\kappa}, Q}.$$ We refer to the publication list above for a detailed description of the L-systems of the form \eqref{col0} whose impedance functions fall into a particular generalized Donoghue class $\sM$, $\sM_\kappa$, or $\sM^{-1}_\kappa$. We also refer to \cite[Section 10]{BMkT-3}, where the concept of a \textit{perturbed L-system} was introduced and the membership of the corresponding impedance functions in the perturbed classes $\sM^Q$, $\sM^Q_\kappa$, or $\sM^{-1,Q}_\kappa$ was established. (Notice that in the framework of the traditional theory of self-adjoint extensions of symmetric operators the representation theorems for the functions from the standard Donoghue class $\sM$ are also discussed in \cite{MT-S}.) The main goal of this note is to show that the c-Entropy introduced in \cite{BT-16,BT-21} of an L-system with impedance function from the classes $\sM^Q$, $\sM^Q_\kappa$, or $\sM^{-1,Q}_\kappa$ (i) attains a maximum whenever the perturbation parameter $Q$ is zero and (ii) vanishes as $|Q|\to \infty$. Notice that if the perturbation parameter $Q=0$, the classes $\sM^Q$, $\sM^Q_\kappa$, and $\sM^{-1,Q}_\kappa$ coincide with their canonical ``unperturbed'' counterparts $\sM$, $\sM_\kappa$, and $\sM^{-1}_\kappa$, which, taking into account the above, yields the optimality of the c-Entropy for L-systems with impedance functions from the unperturbed classes $\sM$, $\sM_\kappa$, or $\sM^{-1}_\kappa$. The paper is organized as follows. Section \ref{s2} contains the necessary information on L-systems theory. In Section \ref{s3} we recall the formal definition and basic properties of the regular and generalized Donoghue classes. Section \ref{s4} provides a detailed explanation of the L-system perturbation concept.
Here we also present the formulas for the von Neumann parameters of the main operator of a perturbed L-system. In Section \ref{s5} we recall the definition of c-Entropy and relate the c-Entropy of a perturbed L-system to the perturbation parameter. In Section \ref{s6} we recap the definition of the dissipation coefficient introduced in \cite{BT-16,BT-21} and study its behavior as a function of the perturbation parameter $Q$ and of the c-Entropy of the corresponding unperturbed L-system. We remark that in the case $Q=0$ the obtained results generalize those obtained in \cite{BT-21}. The main results of Sections \ref{s5} and \ref{s6} are summarized in Table \ref{Table-1}. We conclude the note with examples illustrating the main results. For the convenience of the reader, an explicit construction of an L-system with a given state-space operator is presented in Appendix \ref{A1}. \section{Preliminaries}\label{s2} For a pair of Hilbert spaces $\calH_1$, $\calH_2$ denote by $[\calH_1,\calH_2]$ the set of all bounded linear operators from $\calH_1$ to $\calH_2$. Given a closed, densely defined, symmetric operator $\dA$ in a Hilbert space $\calH$ with inner product $(f,g)$, $f,g\in\calH$, introduce the rigged Hilbert space (see \cite{ABT,Ber}) $\calH_+\subset\calH\subset\calH_- ,$ where $\calH_+ =\dom(\dA^*)$ is the Hilbert space equipped with the inner product \begin{equation}\label{108} (f,g)_+ =(f,g)+(\dA^* f, \dA^*g),\;\;f,g \in \dom(\dA^*), \end{equation} and $\cH_-$ is its dual, the space of continuous linear functionals on $\calH_+$ with respect to the norm $\|\cdot \|_+$. Denote by $\calR$ the \textit{Riesz-Berezansky operator} (see \cite{ABT}, \cite{Ber}), which maps $\mathcal H_-$ onto $\mathcal H_+$ in such a way that $(f,g)=(f,\calR g)_+$ ($\forall f\in\calH_+$, $g\in\calH_-$) and $\|\calR g\|_+=\| g\|_-$.
Thus, \begin{equation}\label{e3-4} \aligned (f,g)_-=(f,\calR g)=(\calR f,g)=(\calR f,\calR g)_+,\qquad (f,g\in \mathcal H_-),\\ (u,v)_+=(u,\calR^{-1} v)=(\calR^{-1} u,v)=(\calR^{-1} u,\calR^{-1} v)_-,\qquad (u,v\in \mathcal H_+). \endaligned \end{equation} Note that, identifying the space conjugate to $\calH_\pm$ with $\calH_\mp$, we get that if $\bA\in[\calH_+,\calH_-]$, then $\bA^*\in[\calH_+,\calH_-]$ as well. We will be mostly interested in the following type of quasi-self-adjoint bi-extensions. \textit{In what follows we assume that $\dA$ has deficiency indices $(1,1)$.} \begin{definition}[Definition 4.3.1 of \cite{ABT}]\label{star_ext} Suppose that $T$ is a quasi-self-adjoint extension of $\dA$, that is, $$ \dA\subset T\subset\dA^*. $$ An operator $\bA\in[\calH_+,\calH_-]$ is called a \textit{($*$)-extension} of $T$ if $$\dA \subset T\subset \bA \quad \text{and}\quad \dA \subset T^*\subset \bA^*$$ and the restriction $\widehat A$ of $\RE\bA$ to \[ \dom(\widehat A)=\{f\in\cH_+:(\RE\bA) f\in\cH\}, \] the quasi-kernel of $\RE\bA$, is a self-adjoint extension of $\dA$. \end{definition} Recall that an operator $\bA\in[\calH_+,\calH_-]$ is said to be a \textit{self-adjoint bi-extension} of a symmetric operator $\dA$ if $\bA=\bA^*$ and $\bA \supset \dA$. For an operator $\bA\in[\calH_+,\calH_-]$, the restriction $\hat A=\bA\uphar\dom(\hat A)$ of $\bA$ to \[ \dom(\hat A)=\{f\in\cH_+:\bA f\in\cH\} \] will be called the \textit{quasi-kernel} of $\bA$ (see \cite[Section 2.1]{ABT}, \cite{TSh1}). In this case, according to the von Neumann Theorem (see \cite[Theorem 1.3.1]{ABT}), the domain of $\wh A$, which is a self-adjoint extension of $\dA$, can be represented as \begin{equation}\label{DOMHAT} \dom(\hat A)=\dom(\dA)\oplus(I+U)\sN_{i}, \end{equation} where the von Neumann parameter $U$ is an operator from $\sN_i$ into $\sN_{-i}$ that is both $(\cdot)$-isometric and $(+)$-isometric, with $$\sN_{\pm i}=\Ker (\dA^*\mp i I)$$ the deficiency subspaces of $\dA$.
The description of all $(*)$-extensions via the Riesz-Berezansky operator $\calR$ can be found in \cite[Section 4.3]{ABT}. The following definition is a ``lite'' version of the definition of an L-system given for a scattering L-system with one-dimensional input-output space. It is tailored for the case when the symmetric operator of an L-system has deficiency indices $(1,1)$. (The general definition of an L-system can be found in \cite[Definition 6.3.4]{ABT}.) \begin{definition}\label{defs} Given a symmetric operator $\dot A$ with deficiency indices $(1,1)$, its quasi-self-adjoint dissipative extension $T$, and the rigged Hilbert space $\calH_+\subset\calH\subset\calH_-$ associated with $\dot A$, an array \begin{equation}\label{e6-3-2} \Theta= \begin{pmatrix} \bA&K&\ 1\cr \calH_+ \subset \calH \subset \calH_-& &\dC\cr \end{pmatrix} \end{equation} is called an \textbf{L-system} if $\mathbb A$ is a ($\ast $)-extension of $ T$ with $$\IM\bA= KK^*,$$ where $K\in [\dC,\calH_-]$ and $K^*\in [\calH_+,\dC].$ \end{definition} For the dissipative operator in Definition \ref{defs} we reserve the notation $T$ and call it the \textit{main operator} of the system, while the operator $\bA$ is called the \textit{state-space operator} of the system $\Theta$. The operator $K$ will be traditionally called the \textit{channel operator} of the system $\Theta$. It is easy to see that the operator $\bA$ of the system \eqref{e6-3-2} can be chosen in such a way that $$\IM\bA=(\cdot,\chi)\chi \quad \text{for some}\quad \chi\in\calH_-$$ and $$K c=c\cdot\chi,\quad c\in\dC.$$ A system $\Theta$ in \eqref{e6-3-2} is called \textit{minimal} if the operator $\dA$ is a prime operator in $\calH$, i.e., there exists no non-trivial reducing invariant subspace of $\calH$ on which it induces a self-adjoint operator. Notice that minimal L-systems of the form \eqref{e6-3-2} with one-dimensional input-output space were also discussed in \cite{BMkT}.
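To make the rank-one structure $\IM\bA=(\cdot,\chi)\chi$, $Kc=c\chi$ more tangible, here is a finite-dimensional toy model (a sketch of ours; a genuine L-system involves unbounded operators and a rigged Hilbert space) in which the transfer and impedance functions, defined as in the Introduction, satisfy the relation $V=i(W+I)^{-1}(W-I)$:

```python
def solve2(M, b):
    # Solve a 2x2 complex linear system M x = b by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

chi = [1.0, 2.0]                      # channel vector: K c = c * chi
ReA = [[0.0, 1.0], [1.0, -0.5]]       # an arbitrary real symmetric 2x2 matrix
# State-space matrix with Im A = chi chi^T, the rank-one structure above.
A = [[ReA[i][j] + 1j * chi[i] * chi[j] for j in range(2)] for i in range(2)]

def resolvent_form(M, z):
    # K^*(M - zI)^{-1}K for the scalar channel K c = c*chi (chi real).
    x = solve2([[M[i][j] - (z if i == j else 0.0) for j in range(2)]
                for i in range(2)], chi)
    return chi[0] * x[0] + chi[1] * x[1]

def W(z):
    return 1.0 - 2j * resolvent_form(A, z)   # transfer function

def V(z):
    return resolvent_form(ReA, z)            # impedance function

# The relation V = i (W + 1)^{-1}(W - 1) holds identically (z in the
# lower half-plane, where both resolvents certainly exist here).
for z in [0.7 - 1.3j, -2.0 - 0.4j]:
    assert abs(V(z) - 1j * (W(z) - 1.0) / (W(z) + 1.0)) < 1e-10
```

The identity follows from the resolvent identity applied to $\bA=\RE\bA+i\chi\chi^{\top}$, which in the scalar channel case gives $W=(1-iV)/(1+iV)$.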
We associate with an L-system $\Theta$ two analytic functions: the \textbf{transfer function} of the L-system $\Theta$, \begin{equation}\label{e6-3-3} W_\Theta (z)=I-2iK^\ast (\mathbb A-zI)^{-1}K,\quad z\in \rho (T), \end{equation} and the \textbf{impedance function} given by the formula \begin{equation}\label{e6-3-5} V_\Theta (z) = K^\ast (\RE\bA - zI)^{-1} K, \quad z\in \rho (\RE\bA). \end{equation} Recall that the impedance function $V_\Theta(z)$ admits the integral representation \begin{equation}\label{hernev-real} V_\Theta(z)=Q+\int_\bbR \left(\frac{1}{\lambda-z}-\frac{\lambda}{1+\lambda^2}\right)d\sigma(\lambda), \end{equation} where $Q$ is a real number and $\sigma$ is an infinite Borel measure such that $$ \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}<\infty. $$ The transfer function $W_\Theta (z)$ of the L-system $\Theta $ and the function $V_\Theta (z)$ of the form (\ref{e6-3-5}) are connected by the following relations, valid for $\IM z\ne0$, $z\in\rho(T)$: \begin{equation}\label{e6-3-6} \begin{aligned} V_\Theta (z) &= i [W_\Theta (z) + I]^{-1} [W_\Theta (z) - I],\\ W_\Theta(z)&=(I+iV_\Theta(z))^{-1}(I-iV_\Theta(z)). \end{aligned} \end{equation} In this context we refer to \cite{ABT,BMkT,GT} and references therein for the description of the class of all Herglotz-Nevanlinna functions that admit realizations as impedance functions of an L-system. \section{Donoghue classes and L-systems}\label{s3} Denote by $\calN$ (see \cite{BMkT-3}) the class of all Herglotz-Nevanlinna functions $M(z)$ that admit the representation \begin{equation}\label{hernev-0} M(z)=\int_\bbR \left(\frac{1}{\lambda-z}-\frac{\lambda}{1+\lambda^2}\right)d\sigma(\lambda), \end{equation} where $\sigma$ is an infinite Borel measure such that $$ \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}<\infty.
$$ Following our earlier developments in \cite{BMkT,BMkT-3,MT10,MT2021}, denote by $\sM$, $\sM_\kappa$, and $\sM_\kappa^{-1}$ ($0\le\kappa<1$) the subclasses of $\calN$ with the properties \begin{equation}\label{e-42-int-don} \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}=1\,,\quad\text{equivalently,}\quad M(i)=i, \end{equation} \begin{equation}\label{e-38-kap} \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}=\frac{1-\kappa}{1+\kappa}\,,\quad\text{equivalently,}\quad M(i)=i\,\frac{1-\kappa}{1+\kappa}, \end{equation} and \begin{equation}\label{e-39-kap} \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}=\frac{1+\kappa}{1-\kappa}\,,\quad\text{equivalently,}\quad M(i)=i\,\frac{1+\kappa}{1-\kappa}, \end{equation} respectively. Clearly, $$\sM=\sM_0=\sM_0^{-1}.$$ Recall (see \cite{D,GMT97,GT,MT-S}) that $M\in \sM$ if and only if $M(z)$ can be realized as the Weyl-Titchmarsh function $M_{(\dot A, A)}(z)$ associated with a pair $(\dot A, A)$, where $\dA$ is a closed prime densely defined symmetric operator with deficiency indices $(1,1)$, $A$ is its self-adjoint extension, and \begin{equation}\label{e-DWT} M_{(\dot A, A)}(z)=((Az+I)(A-zI)^{-1}g_+,g_+), \quad z\in \bbC_+, \end{equation} $$g_+\in \Ker( \dA^*-iI)\quad \text{with }\quad \|g_+\|=1.$$ If $M(z)$ is an arbitrary function from the class $\calN$ and the normalization condition \begin{equation}\label{e-66-L} \int_\bbR\frac{d\sigma(\lambda)}{1+\lambda^2}=a \end{equation} holds for some $a>0$, then it is easy to see that $M\in\sM$ if and only if $a=1$. The membership of $M\in \cN$ in the other generalized Donoghue classes $ \sM_\kappa $ and $\sM_\kappa^{-1}$ can also be easily described as follows: \begin{enumerate} \item[] if $a<1$, then $M\in \sM_\kappa$ with \begin{equation}\label{e-45-kappa-1} \kappa=\frac{1-a}{1+a}, \end{equation} \item[]and \item[]if $a>1$, then $M\in \sM_\kappa^{-1}$ with \begin{equation}\label{e-45-kappa-2} \kappa=\frac{a-1}{1+a}.
\end{equation} \end{enumerate} Throughout this note we adopt the following hypothesis. \begin{hypothesis}\label{setup} Suppose that $\whA \ne\whA^*$ is a maximal dissipative extension of a symmetric operator $\dot A$ with deficiency indices $(1,1)$. Assume, in addition, that the deficiency elements $g_\pm\in \Ker (\dA^*\mp iI)$ are normalized, $\|g_\pm\|=1$, and chosen in such a way that \begin{equation}\label{domT} g_+-\kappa g_-\in \dom (\whA )\,\,\,\text{for some } \,\,\, 0\le \kappa<1. \end{equation} Assume that $A$ is a self-adjoint extension of $\dot A$ such that either \begin{equation}\label{ddoomm14} g_+- g_-\in \dom ( A) \end{equation} or \begin{equation}\label{ddoomm14-1} g_++ g_-\in \dom ( A). \end{equation} \end{hypothesis} \begin{remark}\label{r-12} If $T \ne T^*$ is a maximal dissipative extension of $\dot A$, $$ \Im(T f,f)\ge 0, \quad f\in \dom(T ), $$ then $T$ is automatically quasi-self-adjoint \cite{ABT, MT-S, MTBook} and therefore \begin{equation}\label{parpar-1} g_+-\kappa g_-\in \dom (T )\quad \text{for some } |\kappa|<1. \end{equation} In particular (see, e.g., \cite{MT-S}), if $\kappa=0$, then the quasi-self-adjoint extension $\whA $ coincides with the restriction of the adjoint operator $\dot A^*$ to $$ \dom(\whA )=\dom(\dot A)\dot + \Ker (\dA^*-iI). $$ The requirement in \eqref{domT} that $0\le \kappa<1$ does not really restrict the choice of the main operator $T$ of the system (if $\kappa=|\kappa|e^{i\theta}$, change the basis element $g_-$ to $e^{i\theta}g_-$ in the deficiency subspace $\Ker (\dA^*+ i I)$ to see that \eqref{domT} is satisfied in the new basis); rather, it imposes additional requirements (relative to $T$) on the self-adjoint reference operator $\widehat A$.
\end{remark} \noindent As far as the generalized classes $\sM_\kappa$ and $\sM_\kappa^{-1}$ are concerned, recall that if the main operators $T$ and the quasi-kernels $\hat A$ of $\RE\bA$ of L-systems $\Theta_1$ and $\Theta_2$ of the form \eqref{e6-3-2} satisfy Hypothesis \ref{setup} with \eqref{ddoomm14} and \eqref{ddoomm14-1}, respectively, then the impedance functions $V_{\Theta_1}(z)$ and $V_{\Theta_2}(z)$ belong to the classes $\sM_\kappa$ and $\sM_\kappa^{-1}$, respectively (see \cite{BMkT-2}). \section{Perturbations of Donoghue classes and the related L-systems}\label{s4} In this section we recall the definition of the ``perturbed'' versions $\sM^Q$, $\sM^Q_\kappa$, and $\sM^{-1,Q}_\kappa$ of the generalized Donoghue classes $\sM$, $\sM_\kappa$, and $\sM^{-1}_\kappa$ discussed in Section \ref{s3} and briefly revisit the concept of a ``perturbed'' L-system introduced in \cite{BMkT-3}. Given $Q\in \bbR\setminus\{0\}$, we say that $V(z)\in\sM^Q$ if $V(z)$ admits the representation \begin{equation}\label{e-52-M-q} V(z)= Q+\int_\bbR\left (\frac{1}{\lambda-z}-\frac{\lambda}{1+\lambda^2}\right )d\mu(\lambda),\end{equation} with $$ \int_\bbR\frac{d\mu(\lambda)}{1+\lambda^2}=1. $$ If, along with \eqref{e-52-M-q}, the normalization condition \eqref{e-38-kap} (respectively, \eqref{e-39-kap}) holds, we say that $V(z)$ belongs to the class $\sM^Q_{\kappa}$ (respectively, $\sM^{-1,Q}_{\kappa}$). \begin{figure} \begin{center} \includegraphics[width=90mm]{Fig1-3.eps} \caption{Class $\sM^Q$: Parameter $\kappa$ as a function of $Q$}\label{fig-1} \end{center} \end{figure} The following was shown in \cite[Theorem 10.1]{BMkT-3}. Let $\Theta_0$ be an L-system of the form \eqref{e6-3-2} satisfying Hypothesis \ref{setup} with \eqref{ddoomm14} and such that its impedance function $V_{\Theta_0}(z)$ belongs to the class $\sM$.
Then for any real number $Q\ne0$ there exists another L-system $\Theta(Q)$ with the same symmetric operator $\dA$ as in $\Theta_0$ and such that \begin{equation}\label{impshift1} V_{\Theta(Q)}(z)=Q+V_{\Theta_0}(z) \end{equation} belongs to the class $\sM^Q$. In this case, the von Neumann parameter $\kappa(Q)$ of its main operator $T(Q)$ is determined by \begin{equation}\label{e-53-kappa'} \kappa(Q)=\frac{|Q|}{\sqrt{Q^2+4}},\quad Q\ne0, \end{equation} while the quasi-kernel $\hat A(Q)$ of $\RE\bA(Q)$ of the L-system $\Theta(Q)$ is defined by \eqref{DOMHAT} with \begin{equation}\label{e-54-U-M-q} U(Q)=\frac{Q}{|Q|}\cdot\frac{-Q+2i}{\sqrt{Q^2+4}},\quad Q\ne0. \end{equation} For the graph of $\kappa$ as a function of $Q$ see Figure \ref{fig-1}. We note that $\kappa(Q)$ is an even function whose derivative for $Q>0$ is $$ \kappa'(Q)=\frac{4}{(Q^2+4)^{3/2}},\quad Q>0, $$ giving the slope of the graph at $Q=0$ as $\kappa'(0+)=1/2$. The graph of the function is symmetric with respect to the $\kappa$-axis. A similar result (see \cite[Theorem 10.2]{BMkT-3}) takes place for the class $\sM_{\kappa}^Q$: let $\Theta_{\kappa}$ be an L-system of the form \eqref{e6-3-2} such that its impedance function $V_{\Theta_\kappa}(z)$ belongs to the class $\sM_{\kappa}$. Then for any real number $Q\ne0$ there exists another L-system $\Theta_\kappa(Q)$ with the same symmetric operator $\dA$ as in the system $\Theta_{\kappa}$ and such that its impedance function is obtained from $V_{\Theta_{\kappa}}(z)$ by shifting by the constant $Q$, that is, \begin{equation}\label{impshift2} V_{\Theta_{\kappa}(Q)}(z)=Q+V_{\Theta_{\kappa}}(z). \end{equation} Notice that $V_{\Theta_{\kappa}(Q)}\in \sM_{\kappa}^Q$.
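The formulas \eqref{e-53-kappa'} and \eqref{e-54-U-M-q} admit quick numerical sanity checks (a sketch of ours, not part of the paper): $U(Q)$ is unimodular, $\kappa(Q)$ is even with $\kappa(Q)\to1$ as $|Q|\to\infty$, and the slope at the origin is $1/2$:

```python
from math import sqrt

def kappa(Q):
    # von Neumann parameter of the main operator T(Q), Q != 0.
    return abs(Q) / sqrt(Q * Q + 4.0)

def U(Q):
    # von Neumann parameter defining the quasi-kernel, Q != 0.
    return (Q / abs(Q)) * (-Q + 2j) / sqrt(Q * Q + 4.0)

for Q in [0.1, -0.1, 1.0, -3.7, 25.0]:
    assert abs(abs(U(Q)) - 1.0) < 1e-12       # U(Q) is unimodular
    assert abs(kappa(Q) - kappa(-Q)) < 1e-12  # kappa is even in Q

# Slope at the origin: kappa(h)/h -> kappa'(0+) = 1/2 as h -> 0+.
h = 1e-6
assert abs(kappa(h) / h - 0.5) < 1e-6

assert kappa(1e6) > 0.999                     # kappa -> 1 as |Q| -> infinity
```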
\begin{figure} \begin{center} \includegraphics[width=90mm]{Fig2-3.eps} \caption{Class $\sM^Q_\kappa$ $(0<a<1)$: Parameter $\kappa$ as a function of $Q$}\label{fig-2} \end{center} \end{figure} In this case, the von Neumann parameter $\kappa(Q)$ of the main operator $T(Q)$ of the system $\Theta_\kappa(Q)$ is determined by the formula \begin{equation}\label{e-53-kappa-prime} \kappa(Q)=\frac{\left(b-2Q^2-\sqrt{b^2+4Q^2}\right)^2-a\left(b-\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a-1)}{\left(b-2Q^2-\sqrt{b^2+4Q^2}\right)^2+a\left(b-\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a+1)}. \end{equation} Here \begin{equation}\label{e-78-b} b=Q^2+a^2-1 \end{equation} with $$ a=\frac{1-\kappa}{1+\kappa}, $$ while the quasi-kernel $\hat A(Q)$ of $\RE\bA(Q)$ of the L-system $\Theta_\kappa(Q)$ is defined by \eqref{DOMHAT} with \begin{equation}\label{e-75-U} U(Q)=\frac{(a+Qi)(1-\kappa^2(Q))-1-\kappa^2(Q)}{2\kappa(Q)},\quad Q\ne0. \end{equation} The graph of $\kappa$ as a function of $Q$ for this case is shown in Figure \ref{fig-2}. Note that the vertex of the graph is located at $$\kappa=\kappa_0=\frac{1-a}{1+a}.$$ Moreover, if $a\rightarrow 1^-$, then $\kappa_0\rightarrow 0$, as indicated by the dashed lines in the figure. Finally (see \cite[Theorem 10.3]{BMkT-3}), for any L-system $\Theta_{\kappa}$ of the form \eqref{e6-3-2} with $V_{\Theta_\kappa}(z)\in\sM_{\kappa}^{-1}$ and any real number $Q\ne0$ there exists another L-system $\Theta_\kappa(Q)$ with the same symmetric operator $\dA$ as in $\Theta_{\kappa}$ and such that \begin{equation}\label{impshift3} V_{\Theta_{\kappa}(Q)}(z)=Q+V_{\Theta_{\kappa}}(z).
\end{equation} In this case, the von Neumann parameter $\kappa(Q)$ of its main operator $T(Q)$ is determined for $Q\ne0 $ by the formula \begin{equation}\label{e-85-kappa-prime} \kappa(Q)=\frac{a\left(b+\sqrt{b^2+4Q^2}\right)^2-\left(b-2Q^2+\sqrt{b^2+4Q^2}\right)^2-4Q^2a(a-1)}{\left(b-2Q^2+\sqrt{b^2+4Q^2}\right)^2+a\left(b+\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a+1)}, \end{equation} with $$ b=Q^2+a^2-1 $$ and $$ a=\frac{1+\kappa}{1-\kappa}, $$ while the quasi-kernel $\hat A(Q)$ of $\RE\bA(Q)$ of the L-system $\Theta_\kappa(Q)$ is defined by \eqref{DOMHAT} with $U(Q)$ given by the same formula \eqref{e-75-U}, the only difference being that $\kappa(Q)$ is now given by \eqref{e-85-kappa-prime}. Figure \ref{fig-3} shows the graph of $\kappa$ as a function of $Q$. Note that the vertex of the graph is located at $\kappa=\kappa_0=\frac{a-1}{1+a}$. Moreover, if $a\rightarrow+\infty$, then $\kappa_0\rightarrow 1$, as indicated by the dashed lines in the figure. \begin{figure} \begin{center} \includegraphics[width=90mm]{Fig3-3.eps} \caption{Class $\sM^{-1,Q}_\kappa $ ($a>1$): Parameter $\kappa$ as a function of $Q$ }\label{fig-3} \end{center} \end{figure} We remark that the ``perturbed" L-system $\Theta(Q)$ whose construction is based on a given L-system $\Theta$ (subject to either of Hypotheses \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1}) and described in detail in \cite[Theorems 10.1-10.3]{BMkT-3} is called the \textbf{perturbation} of the L-system $\Theta$. The perturbation of a given L-system relies on a fixed choice of the deficiency vectors of the symmetric operator of $\Theta$ and on a $Q$-dependent pair of von Neumann parameters $\kappa$ and $U$ (see Appendix \ref{A1} for the exact construction). It is important to mention that the impedance functions of the perturbed and original L-systems are always related by the {\textbf{impedance shift}} formula (cf.
\eqref{impshift1}, \eqref{impshift2} and \eqref{impshift3}) $$V_{\Theta(Q)}(z)=Q+V_{\Theta}(z).$$ \section{c-Entropy of a perturbed L-system}\label{s5} In this section we study how the perturbation affects the c-Entropy of an L-system that initially satisfies the conditions of Hypothesis \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1}. We begin by recalling the definition of the c-Entropy of an L-system introduced in \cite{BT-16}. \begin{definition} Let $\Theta$ be an L-system of the form \eqref{e6-3-2}. The quantity \begin{equation}\label{e-80-entropy-def} \calS=-\ln (|W_\Theta(-i)|),\end{equation} where $W_\Theta(z)$ is the transfer function of $\Theta$, is called the \textbf{coupling entropy} (or \textbf{c-Entropy}) of the L-system $\Theta$. \end{definition} As mentioned in \cite{BT-16}, there is an alternative operator-theoretic way to define the c-Entropy. If $T$ is the main operator of the L-system $\Theta$ and $\kappa$ is the von Neumann parameter of $T$ in some basis $g_\pm$, then, as shown in \cite{BMkT-2}, $$|W_\Theta(-i)|=|\kappa|$$ and hence \begin{equation}\label{e-70-entropy} \calS=-\ln (|W_\Theta(-i)|)=-\ln(|\kappa|).\end{equation} We emphasize that the c-Entropy defined by \eqref{e-70-entropy} does not depend on the choice of the deficiency basis $g_\pm$ and, moreover, is an additive function with respect to the coupling of L-systems (see \cite{BMkT-2}). Note that if, in addition, the point $z=i$ belongs to $\rho(T)$, then we also have that \begin{equation}\label{e-80-entropy} \calS=\ln (|W_\Theta(i)|)=\ln (1/|\kappa|)=-\ln(|\kappa|). \end{equation} This follows from the known property of the transfer functions of L-systems (see \cite{ABT}) that $W_\Theta(z)\overline{W_\Theta(\bar z)}=1$ and the fact that $|W_\Theta(i)|=1/|\kappa|$ (see \cite{BMkT}). Now we are going to find the c-Entropy of an L-system whose impedance function belongs to the class $\sM^Q$.
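Before stating the corresponding theorem, we give a quick numerical illustration (added here; the identities themselves are from \cite{BT-16,BMkT-2}) of the agreement of \eqref{e-70-entropy} and \eqref{e-80-entropy}, and of the additivity under coupling, encoded below as multiplicativity of the von Neumann parameters:

```python
import math

def c_entropy(kappa):
    # eqs. (e-70-entropy)/(e-80-entropy): S = -ln|kappa| = ln(1/|kappa|)
    return -math.log(abs(kappa))

for k in (0.1, 1/3, 0.95):
    # |W_Theta(-i)| = |kappa| and |W_Theta(i)| = 1/|kappa| give the same entropy
    assert math.isclose(c_entropy(k), math.log(1 / abs(k)), rel_tol=1e-12)

# additivity with respect to coupling: if the transfer functions (and hence
# the parameters |kappa|) multiply under coupling, the c-Entropies add
k1, k2 = 0.4, 0.25
assert math.isclose(c_entropy(k1 * k2), c_entropy(k1) + c_entropy(k2), rel_tol=1e-12)
print("ok")
```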
\begin{theorem}\label{t-12}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1} with $\kappa=0$. Then for any real $Q\ne0$, the c-Entropy $\calS(Q)$ of the perturbed L-system $\Theta(Q)$ is finite and given by the formula \begin{equation}\label{e-45-entropy} \calS(Q)=\frac{1}{2}\ln (Q^2+4)-\ln|Q|. \end{equation} \end{theorem} \begin{proof} We have shown in \cite[Theorem 10.1]{BMkT-3} that if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1} with $\kappa=0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-53-kappa'}. Thus, in order to find the c-Entropy of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-70-entropy} to the value of $\kappa(Q)$ in \eqref{e-53-kappa'}. We get $$ \calS(Q)=-\ln(|\kappa(Q)|)=\ln (1/|\kappa(Q)|)=\ln\frac{\sqrt{Q^2+4}}{|Q|}=\frac{1}{2}\ln (Q^2+4)-\ln|Q|, $$ which is the desired formula \eqref{e-45-entropy}. \end{proof} The graph of $\calS(Q)$ as a function of $Q$ for the perturbed class $\sM^{Q}$ is shown in Figure \ref{fig-4}. We note that the c-Entropy $\calS(Q)$ is infinite when $Q=0$ and tends to zero as $Q\rightarrow\pm\infty$. \begin{figure} \begin{center} \includegraphics[width=60mm]{Fig1-22.eps} \caption{c-Entropy of the perturbed class $\sM^{Q}$}\label{fig-4} \end{center} \end{figure} A similar result holds for the class $\sM_{\kappa}$. \begin{theorem}\label{t-14}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} with finite c-Entropy $\calS$.
Then for any real $Q\ne0$, the c-Entropy $\calS(Q)$ of the perturbed L-system $\Theta(Q)$ is finite and given by the formula \begin{equation}\label{e-46-entropy} \calS(Q)=\ln\frac{\left(b-2Q^2-\sqrt{b^2+4Q^2}\right)^2+a\left(b-\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a+1)}{\left(b-2Q^2-\sqrt{b^2+4Q^2}\right)^2-a\left(b-\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a-1)}, \end{equation} where \begin{equation}\label{e-47-b} a=\tanh\left(\frac{\calS}{2}\right)\textrm{ and }\;b=Q^2+a^2-1. \end{equation} \end{theorem} \begin{proof} Our requirement of finite c-Entropy $\calS$ implies (via \eqref{e-70-entropy}) that $\kappa\ne0$. Also, Hypothesis \ref{setup} \eqref{ddoomm14} yields that $a=\frac{1-\kappa}{1+\kappa}$ is such that $0<a<1$. It follows from \eqref{e-70-entropy} that $\kappa=e^{-\calS}$ and hence $$ a=\frac{1-\kappa}{1+\kappa}=\frac{1-e^{-\calS}}{1+e^{-\calS}}=\tanh\left(\frac{\calS}{2}\right). $$ It was shown in \cite[Theorem 10.2]{BMkT-3} that if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} with $\kappa\ne0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-53-kappa-prime} with $0<a<1$. Consequently, in order to find the c-Entropy of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-70-entropy} to the value of $\kappa(Q)$ in \eqref{e-53-kappa-prime}. This clearly yields \eqref{e-46-entropy}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=70mm]{Fig2-22.eps} \caption{c-Entropy of the classes $\sM^{Q}_\kappa$ (solid graph) and $\sM^{-1,Q}_\kappa$ (dashed graph)}\label{fig-5} \end{center} \end{figure} Now we state and prove an analogous result for the class $\sM_{\kappa}^{-1}$.
\begin{theorem}\label{t-15}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14-1} with finite c-Entropy $\calS$. Then for any real $Q\ne0$, the c-Entropy $\calS(Q)$ of the perturbed L-system $\Theta(Q)$ is finite and given by the formula \begin{equation}\label{e-47-entropy} \calS(Q)=\ln\frac{\left(b-2Q^2+\sqrt{b^2+4Q^2}\right)^2+a\left(b+\sqrt{b^2+4Q^2}\right)^2+4Q^2a(a+1)}{a\left(b+\sqrt{b^2+4Q^2}\right)^2-\left(b-2Q^2+\sqrt{b^2+4Q^2}\right)^2-4Q^2a(a-1)}, \end{equation} where \begin{equation}\label{e-48-b} a=\coth\left(\frac{\calS}{2}\right)\textrm{ and }\;b=Q^2+a^2-1. \end{equation} \end{theorem} \begin{proof} As in the proof of Theorem \ref{t-14}, we note that the requirement of finite c-Entropy $\calS$ implies (via \eqref{e-70-entropy}) that $\kappa\ne0$. Also, Hypothesis \ref{setup} \eqref{ddoomm14-1} yields that $a=\frac{1+\kappa}{1-\kappa}$ is such that $a>1$. It follows from \eqref{e-70-entropy} that $\kappa=e^{-\calS}$ and hence $$ a=\frac{1+\kappa}{1-\kappa}=\frac{1+e^{-\calS}}{1-e^{-\calS}}=\coth\left(\frac{\calS}{2}\right). $$ It was shown in \cite[Theorem 10.3]{BMkT-3} that if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14-1} with $\kappa\ne0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-85-kappa-prime} with $a>1$. Consequently, in order to find the c-Entropy of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-70-entropy} to the value of $\kappa(Q)$ in \eqref{e-85-kappa-prime}. This clearly yields \eqref{e-47-entropy}. \end{proof} The graphs of $\calS(Q)$ as a function of $Q$ for the perturbed classes $\sM^{Q}_\kappa$ (solid curve) and $\sM^{-1,Q}_\kappa$ (dashed curve) are shown in Figure \ref{fig-5}.
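The entropy formulas \eqref{e-46-entropy} and \eqref{e-47-entropy} can be spot-checked numerically. The sketch below (an added illustration, not part of the proofs) confirms that for both perturbed classes the c-Entropy $-\ln\kappa(Q)$ is even in $Q$, recovers the unperturbed value $\calS$ in the limit $Q\to0$, and vanishes as $Q\to\pm\infty$:

```python
import math

def kappa_direct(Q, a):
    # eq. (e-53-kappa-prime) with b as in (e-78-b); direct class M^Q_kappa, 0 < a < 1
    b = Q**2 + a**2 - 1
    r = math.sqrt(b**2 + 4*Q**2)
    num = (b - 2*Q**2 - r)**2 - a*(b - r)**2 + 4*Q**2*a*(a - 1)
    den = (b - 2*Q**2 - r)**2 + a*(b - r)**2 + 4*Q**2*a*(a + 1)
    return num / den

def kappa_inverse(Q, a):
    # eq. (e-85-kappa-prime); inverse class M^{-1,Q}_kappa, a > 1
    b = Q**2 + a**2 - 1
    r = math.sqrt(b**2 + 4*Q**2)
    num = a*(b + r)**2 - (b - 2*Q**2 + r)**2 - 4*Q**2*a*(a - 1)
    den = (b - 2*Q**2 + r)**2 + a*(b + r)**2 + 4*Q**2*a*(a + 1)
    return num / den

# at Q -> 0 the c-Entropy -ln kappa(Q) recovers the unperturbed value S,
# with a = tanh(S/2) as in (e-47-b) and a = coth(S/2) as in (e-48-b)
for S in (0.5, math.log(3), 2.0):
    assert math.isclose(-math.log(kappa_direct(1e-8, math.tanh(S/2))), S, rel_tol=1e-6)
    assert math.isclose(-math.log(kappa_inverse(1e-8, 1/math.tanh(S/2))), S, rel_tol=1e-6)

# both perturbed c-Entropies are even in Q and vanish as Q -> +-infinity
for f, a in ((kappa_direct, 0.5), (kappa_inverse, 2.0)):
    for Q in (0.5, 1.0, 4.0):
        assert math.isclose(f(Q, a), f(-Q, a), rel_tol=1e-12)
        assert 0 < f(Q, a) < 1
    assert -math.log(f(1e6, a)) < 1e-5

# spot check against Example 2 below: a = 1/2, Q = 1 gives kappa(1) = sqrt(65)/13
assert math.isclose(kappa_direct(1.0, 0.5), math.sqrt(65)/13, rel_tol=1e-9)
print("all checks passed")
```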
We note that the c-Entropy $\calS(Q)$ attains its maximum value $\calS$ when $Q=0$ and tends to zero as $Q\rightarrow\pm\infty$. \section{Dissipation coefficient of a perturbed L-system}\label{s6} Let us recall the definition of the dissipation coefficient of an L-system. \begin{definition}[{cf. \cite{BT-16}}, \cite{BT-21}]\label{d-10} Let $T$ be the main operator of an L-system $\Theta$ of the form \eqref{e6-3-2} and $\kappa$ be its von Neumann parameter according to a fixed $(\cdot)$-normalized deficiency basis $g'_\pm$ such that $0\le\kappa\le1$. If \begin{equation}\label{e-76-ty} \ti y=g'_+-\kappa g'_-, \end{equation} then the quantity $\calD= \IM (T \ti y,\ti y)$ is called the \textbf{coefficient of dissipation} (or dissipation coefficient) of the L-system $\Theta$. \end{definition} It was shown in \cite{BT-21} that the c-Entropy $\calS$ and the coefficient of dissipation $\calD$ of an L-system are related as \begin{equation}\label{e-69-ent-dis} \calD=1-e^{-2\cS}. \end{equation} We are going to find the dissipation coefficient of an L-system whose impedance function belongs to the class $\sM^Q$. \begin{theorem}\label{t-16}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1} with $\kappa=0$. Then for any real $Q\ne0$, the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta(Q)$ is given by the formula \begin{equation}\label{e-50-dcy} \calD(Q)=\frac{4}{Q^2+4}. \end{equation} \end{theorem} \begin{proof} As we did in the proof of Theorem \ref{t-12}, we use the fact that if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} or \eqref{ddoomm14-1} with $\kappa=0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-53-kappa'}.
Consequently, in order to find the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-70-entropy} and \eqref{e-69-ent-dis} to the value of $\kappa(Q)$ in \eqref{e-53-kappa'}. We get $$ \calD(Q)=1-\kappa^2(Q)=1-\frac{Q^2}{Q^2+4}=\frac{4}{Q^2+4}, $$ which confirms \eqref{e-50-dcy}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=70mm]{Fig3-22.eps} \caption{Dissipation coefficient of the perturbed class $\sM^{Q}$}\label{fig-6} \end{center} \end{figure} The graph of $\calD(Q)$ as a function of $Q$ for the perturbed class $\sM^{Q}$ is shown in Figure \ref{fig-6}. Note that the dissipation coefficient $\calD(Q)$ equals $1$ when $Q=0$ and tends to zero as $Q\rightarrow\pm\infty$. A result similar to Theorem \ref{t-16} holds for the class $\sM_{\kappa}$. \begin{theorem}\label{t-17}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} with finite c-Entropy $\calS$. Then for any real $Q\ne0$, the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta_\kappa(Q)$ is given by the formula \begin{equation}\label{e-51-dcy} \calD(Q)=\frac{4(Y+Z)(X+aZ)}{(X+Y+Z(a+1))^2}, \end{equation} where \begin{equation}\label{e-52-b} \begin{aligned} a&=\tanh\left(\frac{\calS}{2}\right),\;b=Q^2+a^2-1,\; X=\left(b-2Q^2-\sqrt{b^2+4Q^2}\right)^2,\\ Y&=a\left(b-\sqrt{b^2+4Q^2}\right)^2,\; Z=4aQ^2. \end{aligned} \end{equation} \end{theorem} \begin{proof} As we established in the proof of Theorem \ref{t-14}, the requirement of finite c-Entropy $\calS$ implies (via \eqref{e-70-entropy}) that $\kappa\ne0$. Also, Hypothesis \ref{setup} \eqref{ddoomm14} yields that $a=\frac{1-\kappa}{1+\kappa}$ is such that $0<a<1$. We have shown in the proof of Theorem \ref{t-14} that in this case $ a=\tanh\left(\frac{\calS}{2}\right).
$ According to Section \ref{s4} (see also \cite[Theorem 10.2]{BMkT-3}), if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14} with $\kappa\ne0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-53-kappa-prime} with $0<a<1$. Writing $\kappa(Q)$ from \eqref{e-53-kappa-prime} in terms of $X$, $Y$, and $Z$ gives us \begin{equation}\label{e-52-kappa} \kappa(Q)=\frac{X-Y+(a-1)Z}{X+Y+(a+1)Z}. \end{equation} Therefore, in order to find the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-69-ent-dis} with \eqref{e-80-entropy-def} to the value of $\kappa(Q)$ in \eqref{e-52-kappa}. We get, after performing some basic algebra manipulations, $$ \begin{aligned} \calD(Q)&=1-\kappa^2(Q)=1-\frac{(X-Y+(a-1)Z)^2}{(X+Y+(a+1)Z)^2}=\frac{4XY+4XZ+4aZ^2+4aYZ}{(X+Y+(a+1)Z)^2}\\ &=\frac{4(Y+Z)(X+aZ)}{\left(X+Y+(a+1)Z\right)^2}, \end{aligned} $$ which confirms \eqref{e-51-dcy}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=70mm]{Fig4-22.eps} \caption{Dissipation coefficient of $\sM^{Q}_\kappa$ (solid graph) and $\sM^{-1,Q}_\kappa$ (dashed graph)}\label{fig-7} \end{center} \end{figure} An analogue of Theorem \ref{t-17} for the class $\sM_{\kappa}^{-1}$ is the following. \begin{theorem}\label{t-18}Let $\dA$ be a symmetric densely defined closed operator with deficiency indices $(1, 1)$ and $(+)$-normalized deficiency vectors $g_+$ and $g_-$ and let $\Theta$ be an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14-1} with finite c-Entropy $\calS$.
Then for any real $Q\ne0$, the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta_\kappa(Q)$ is given by the formula \begin{equation}\label{e-54-dcy} \calD(Q)=\frac{4(X'+aZ)(Y'+Z)}{(X'+Y'+Z(a+1))^2}, \end{equation} where \begin{equation}\label{e-55-b} \begin{aligned} a&=\coth\left(\frac{\calS}{2}\right),\;b=Q^2+a^2-1,\; X'=\left(b-2Q^2+\sqrt{b^2+4Q^2}\right)^2,\\ Y'&=a\left(b+\sqrt{b^2+4Q^2}\right)^2,\; Z=4aQ^2. \end{aligned} \end{equation} \end{theorem} \begin{proof} Following the steps of the proof of Theorem \ref{t-17}, we confirm again that the requirement of finite c-Entropy $\calS$ implies (via \eqref{e-70-entropy}) that $\kappa\ne0$. Also, Hypothesis \ref{setup} \eqref{ddoomm14-1} yields that $a=\frac{1+\kappa}{1-\kappa}$ is such that $a>1$. We have shown in the proof of Theorem \ref{t-15} that in this case $ a=\coth\left(\frac{\calS}{2}\right). $ According to Section \ref{s4}, if an L-system containing $\dA$ and satisfying Hypothesis \ref{setup} \eqref{ddoomm14-1} with $\kappa\ne0$ is perturbed by any real $Q\ne0$, then the parameter $\kappa(Q)$ of the perturbed L-system $\Theta(Q)$ is determined by the formula \eqref{e-85-kappa-prime} with $a>1$. Putting $\kappa(Q)$ from \eqref{e-85-kappa-prime} in a simpler form in terms of $X'$, $Y'$, and $Z$ defined in \eqref{e-55-b} gives us \begin{equation}\label{e-56-kappa} \kappa(Q)=\frac{Y'-X'-(a-1)Z}{X'+Y'+(a+1)Z}. \end{equation} Therefore, in order to find the dissipation coefficient $\calD(Q)$ of the perturbed L-system $\Theta(Q)$ we apply \eqref{e-69-ent-dis} with \eqref{e-80-entropy-def} to the value of $\kappa(Q)$ in \eqref{e-56-kappa}. We get, after performing some basic algebra manipulations, $$ \begin{aligned} \calD(Q)&=1-\kappa^2(Q)=1-\frac{(Y'-X'-(a-1)Z)^2}{(X'+Y'+(a+1)Z)^2}=\frac{4X'Y'+4X'Z+4aZ^2+4aY'Z}{(X'+Y'+(a+1)Z)^2}\\ &=\frac{4(X'+aZ)(Y'+Z)}{\left(X'+Y'+(a+1)Z\right)^2}, \end{aligned} $$ which confirms \eqref{e-54-dcy}.
\end{proof} \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline & & &\\ \textbf{Class}& \textbf{c-Entropy} & \textbf{Dissipation} & \textbf{Theorems} \\ & & \textbf{coefficient} &\\ \hline& & &\\ $\sM^Q$ & $\calS(Q)=\frac{1}{2}\ln (Q^2+4)-\ln|Q|$ & $\calD(Q)=\frac{4}{Q^2+4}$ &Theorems \ref{t-12}\\ & & & and \ref{t-16}\\ \hline & & &\\ $\sM^Q_\kappa$& Formula \eqref{e-46-entropy}& Formula \eqref{e-51-dcy} &Theorems \ref{t-14}\\ & & & and \ref{t-17}\\ \hline & & &\\ $\sM^{-1,Q}_\kappa$ & Formula \eqref{e-47-entropy}& Formula \eqref{e-54-dcy} &Theorems \ref{t-15}\\ & & & and \ref{t-18}\\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \end{tabular} \caption{c-Entropy and Dissipation coefficient of perturbed L-systems} \label{Table-1} \end{table} The results of Sections \ref{s5} and \ref{s6} are summarized in Table \ref{Table-1}. We would also like to note that when $Q=0$ all the formulas for $\calS(Q)$ and $\calD(Q)$ match their ``unperturbed" versions described in \cite{BT-21}. For example, using \eqref{e-46-entropy} and \eqref{e-51-dcy} with $Q=0$ and $0<a<1$ one obtains $$ \calS(0)=\ln(1+a)-\ln(1-a) \quad\textrm{ and }\quad \calD(0)=\frac{4a}{(1+a)^2}. $$ \section{Examples} In this section we present two examples that illustrate the construction of a perturbed L-system. We also show how the c-Entropy of a perturbed L-system compares to that of an unperturbed one. \subsection*{Example 1}\label{ex-1} This example is designed to explain the construction of a perturbed L-system starting with an L-system whose impedance function belongs to the class $\sM$. We will also find the c-Entropy of both L-systems. In the space $\calH=L^2_{\dR}=L^2_{(-\infty,0]}\oplus L^2_{[0,\infty)}$ we consider a prime symmetric operator \begin{equation}\label{e-87-sym} \dA x=i\frac{dx}{dt} \end{equation} on $$ \begin{aligned} \dom(\dA)&=\left\{x(t)=\left[ \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right]\,\Big|\,x(t) -\text{abs.
cont.},\right.\\ &\qquad\left. x'(t)\in L^2_{\dR},\, x_1(0-)=x_2(0+)=0\right\}.\\ \end{aligned} $$ This operator $\dA$ is a model operator (see \cite{AG93}) for any prime symmetric operator with deficiency indices $(1, 1)$ that admits a dissipative extension with the point spectrum filling the entire open upper half-plane. Its deficiency vectors are easy to find (see \cite{AG93}): \begin{equation}\label{e-87-def} g_z=\left( \begin{array}{c} e^{-izt} \\ 0 \\ \end{array} \right),\; \IM z>0,\qquad g_z=\left( \begin{array}{c} 0 \\ e^{-izt} \\ \end{array} \right),\; \IM z<0. \end{equation} In particular, for $z=\pm i$ the (normalized in the $(+)$-norm) deficiency vectors are \begin{equation}\label{e-88-def} g_+=\left( \begin{array}{c} e^t \\ 0 \\ \end{array} \right) \in \sN_i,\, (t<0),\qquad g_-=\left( \begin{array}{c} 0 \\ e^{-t} \\ \end{array} \right)\in \sN_{-i},\,(t>0). \end{equation} Consider also \begin{equation}\label{e-89-ext} \begin{aligned} A x&=i\frac{dx}{dt},\\ \dom(A)&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,x_1(0-)=-x_2(0+)\right\}.\\ \end{aligned} \end{equation} Clearly, $g_+-g_-\in\dom(A)$ and hence $A$ is a self-adjoint extension of $\dA$ satisfying the conditions of Hypothesis \ref{setup} \eqref{ddoomm14}. Furthermore, \begin{equation}\label{e-90-T} \begin{aligned} T x&=i\frac{dx}{dt},\\ \dom(T)&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,x_2(0+)=0\right\}\\ \end{aligned} \end{equation} is a quasi-self-adjoint extension of $\dA$ parameterized by the von Neumann parameter $\kappa=0$ that satisfies the conditions of Hypothesis \ref{setup} \eqref{ddoomm14}.
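Here the $(+)$-norm is understood as the graph norm $\|x\|_+^2=\|x\|^2+\|\dA^*x\|^2$. Under this assumption, a short numerical check (an added illustration, not part of the example) confirms that the vectors \eqref{e-88-def} are $(+)$-normalized and that $\|g_\pm\|_+^2=2\|g_\pm\|^2$, as used in Remark \ref{r-1}:

```python
import math

def l2_norm_sq(f, lo, hi, n=100_000):
    # composite trapezoidal approximation of the squared L^2 norm of f on [lo, hi]
    h = (hi - lo) / n
    s = 0.5 * (f(lo)**2 + f(hi)**2) + sum(f(lo + k*h)**2 for k in range(1, n))
    return s * h

# g_+ = (e^t, 0) lives on (-infty, 0]; dA* g_+ = i g_+' has the same modulus e^t,
# and the tail of e^{2t} beyond t = -40 is negligible
g_plus_sq = l2_norm_sq(math.exp, -40.0, 0.0)
assert math.isclose(g_plus_sq, 0.5, rel_tol=1e-6)        # ||g_+||^2 = 1/2
assert math.isclose(2 * g_plus_sq, 1.0, rel_tol=1e-6)    # ||g_+||_+^2 = ||g_+||^2 + ||dA* g_+||^2 = 1

# same computation for g_- = (0, e^{-t}) on [0, infty)
g_minus_sq = l2_norm_sq(lambda t: math.exp(-t), 0.0, 40.0)
assert math.isclose(g_minus_sq, 0.5, rel_tol=1e-6)
print("ok")
```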
Using direct check we obtain \begin{equation}\label{e-91-T-star} \begin{aligned} T^* x&=i\frac{dx}{dt},\\ \dom(T^*)&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,x_1(0-)=0\right\}.\\ \end{aligned} \end{equation} Similarly one finds \begin{equation}\label{e-92-adj} \begin{aligned} \dot A^* x&=i\frac{dx}{dt},\\ \dom(\dot A^*)&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)}\right\}.\\ \end{aligned} \end{equation} Then $\calH_+=\dom(\dA^\ast)=W^1_2(-\infty,0]\oplus W^1_2[0,\infty)$, where $W^1_2$ is a Sobolev space. Construct a rigged Hilbert space \begin{equation}\label{e-139-triple} \begin{aligned} &\calH_+ \subset \calH \subset\calH_-\\ &=W^1_2(-\infty,0]\oplus W^1_2[0,\infty)\subset L^2_{(-\infty,0]}\oplus L^2_{[0,\infty)}\subset (W^1_2(-\infty,0]\oplus W^1_2[0,\infty))_- \end{aligned} \end{equation} and consider operators \begin{equation}\label{e-93-bA} \begin{aligned} \bA x&=i\frac{dx}{dt}+i x(0+)\left[\delta(t+)-\delta(t-)\right],\\ \bA^\ast x&=i\frac{dx}{dt}+i x(0-)\left[\delta(t+)-\delta(t-)\right], \end{aligned} \end{equation} where $x(t)\in W^1_2(-\infty,0]\oplus W^1_2[0,\infty)$, $\delta(t+)$, $\delta(t-)$ are delta-functions and elements of $(W^1_2(-\infty,0]\oplus W^1_2[0,\infty))_-=(W^1_2(-\infty,0])_-\oplus (W^1_2[0,\infty))_-$ such that $$ \delta(t+)=\left( \begin{array}{c} 0 \\ \delta_2(t+) \\ \end{array} \right), \qquad \delta(t-)=\left( \begin{array}{c} \delta_1(t-) \\ 0 \end{array} \right), $$ and generate functionals by the formulas $$x(0+)=(x,\delta(t+))=(x_1,0)+(x_2,\delta_2(t+))=x_2(0+),$$ and $$ x(0-)=(x,\delta(t-))=(x_1,\delta_1(t-))+(x_2,0)=x_1(0-). 
$$ It is easy to see that $\bA\supset T\supset \dA$, $\bA^\ast\supset T^\ast\supset \dA,$ and \begin{equation}\label{e-140-RbA} \RE\bA x=i\frac{dx}{dt}+\frac{i }{2}(x(0+)+x(0-))\left[\delta(t+)-\delta(t-)\right]. \end{equation} Clearly, $\RE\bA$ has its quasi-kernel equal to $A$ in \eqref{e-89-ext}. Moreover, $$ \IM\bA =\left(\cdot,\frac{1}{\sqrt 2}[\delta(t+)-\delta(t-)]\right) \frac{1}{\sqrt 2}[\delta(t+)-\delta(t-)]=(\cdot,\chi)\chi, $$ where $\chi=\frac{1}{\sqrt 2}[\delta(t+)-\delta(t-)]$. Now we can build \begin{equation}\label{e6-125-mom} \Theta= \begin{pmatrix} \bA &K &1\\ &&\\ \calH_+ \subset \calH \subset\calH_- &{ } &\dC \end{pmatrix}, \end{equation} that is an L-system with $\calH_+ \subset \calH \subset\calH_-$ of the form \eqref{e-139-triple}, \begin{equation}\label{e7-62-new} \begin{aligned} Kc&=c\cdot \chi=c\cdot \frac{1}{\sqrt 2}[\delta(t+)-\delta(t-)], \quad (c\in \dC),\\ K^\ast x&=(x,\chi)=\left(x, \frac{1}{\sqrt 2}[\delta(t+)-\delta(t-)]\right)=\frac{1}{\sqrt 2}[x(0+)-x(0-)],\\ \end{aligned} \end{equation} and $x(t)\in \calH_+= W^1_2(-\infty,0]\oplus W^1_2[0,\infty)$. It was shown in \cite{BMkT-3} that $V_{\Theta}(z)=i$ for all $z\in\dC_+$. Thus $V_{\Theta}(z)$ is a constant function of the class $\sM$. Also, clearly the c-Entropy of the L-system $\Theta$ in \eqref{e6-125-mom} is infinite. The corresponding dissipation coefficient of the L-system $\Theta$ is found according to \eqref{e-69-ent-dis} and is $\calD=1$. Now let us consider \begin{equation}\label{e-001-ex} V(z)=1+V_{\Theta}(z)=1+i, \quad z\in\dC_+. \end{equation} Obviously, by construction $V(z)\in\sM^1$. We are going to construct a perturbed L-system $\Theta(1)$ that realizes $V(z)$. This construction was thoroughly described in \cite[Example 1]{BMkT-3} and uses the same symmetric operator $\dA$ and state-space as in L-system $\Theta$. Taking $Q=1$ in \eqref{e-53-kappa'} we obtain \begin{equation}\label{e-002-ex'} \kappa(1)=\frac{1}{\sqrt5}. 
\end{equation} Then applying \eqref{e-54-U-M-q} yields \begin{equation}\label{e-158-U} U(1)=\frac{-1+2i}{\sqrt5}. \end{equation} The L-system $\Theta(1)$ constructed in \cite{BMkT-3} with parameters $\kappa=\kappa(1)$ in \eqref{e-002-ex'} and $U=U(1)$ in \eqref{e-158-U} out of the L-system $\Theta$ is such that $V_{\Theta(1)}(z)=V(z)\equiv 1+i$, $(z\in\dC_+)$. Its main operator $T(1)$ is given by \begin{equation}\label{e-90-T1} \begin{aligned} T(1) x&=i\frac{dx}{dt},\\ \dom(T(1))&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,5 x_2(0+)=(-1+2i)x_1(0-)\right\}\\ \end{aligned} \end{equation} and the von Neumann parameter of $T(1)$ is $\kappa=\frac{1}{\sqrt5}$. The operator \begin{equation}\label{e-161-ext} \begin{aligned} A(1) x&=i\frac{dx}{dt},\\ \dom(A(1))&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,{5}x_2(0+)=(3+4i)x_1(0-)\right\}\\ \end{aligned} \end{equation} is a self-adjoint extension of $\dA$ with von Neumann's parameter $U$ from \eqref{e-158-U}. Finally, the state-space operator $\bA(1)$ is given by the formula $$ \begin{aligned} \bA(1) x&=i\frac{dx}{dt}-\frac{i}{2\sqrt5}\Big({5} x(0+)+(1-2i) x(0-)\Big)\big((5+5i)\delta(t-)+(7+i)\delta(t+)\big). \end{aligned} $$ The perturbed L-system $\Theta(1)$ we desire is \begin{equation*}\Theta(1)= \begin{pmatrix}\bA(1)&K(1) &1\\ &&\\ \calH_+ \subset \calH \subset\calH_- &{ } &\dC \end{pmatrix}, \end{equation*} where $\calH_+ \subset \calH \subset\calH_-$ is of the form \eqref{e-139-triple}, $K(1) c=c\cdot \chi(1)$, $(c\in \dC)$, $K^*(1) x=(x,\chi(1))$, and $x(t)\in \calH_+$, where \begin{equation}\label{e-160-chi1} \chi(1)=\frac{1+i}{\sqrt{2}}\delta(t-)+\frac{7+i}{5\sqrt{2}}\delta(t+).
\end{equation} This L-system $\Theta(1)$ has the impedance function $V_{\Theta(1)}(z)=1+i$, ($z\in\dC_+$). The c-Entropy of this perturbed L-system $\Theta(1)$ is (see \eqref{e-70-entropy}) \begin{equation}\label{e-68-entr} \calS(1)=-\ln|\kappa|=-\ln\frac{1}{\sqrt5}=\frac{1}{2}\ln5\approx 0.8047, \end{equation} while the c-Entropy of the unperturbed L-system $\Theta$ is infinite. The dissipation coefficient of the L-system $\Theta(1)$ is found according to \eqref{e-50-dcy}: \begin{equation}\label{e-76-dcy} \calD(1)=\frac{4}{1^2+4}=\frac{1}{5}. \end{equation} \subsection*{Example 2}\label{ex-2} In this example we construct a perturbed L-system based on a given one with finite c-Entropy. We will rely on some objects presented in Example 1 but with certain changes. Consider an L-system \begin{equation}\label{e-154-mom_0} \Theta= \begin{pmatrix} \bA &K &1\\ &&\\ \calH_+ \subset \calH \subset\calH_- &{ } &\dC \end{pmatrix}. \end{equation} The state space of $\Theta$ is $\calH_+ \subset \calH \subset\calH_-$ of the form \eqref{e-139-triple} and its symmetric operator $\dA$ is given by \eqref{e-87-sym} as in Example 1. The main operator $T$ of $\Theta$ is defined as follows: \begin{equation}\label{e-149-T} \begin{aligned} T x&=i\frac{dx}{dt},\\ \dom(T)&=\left\{x(t)=\left( \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right) \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.},\right.\\ &\left. x'_1(t)\in L^2_{(-\infty,0]},\, x'_2(t)\in L^2_{[0,\infty)},\,3 x_2(0+)=-x_1(0-)\right\}.\\ \end{aligned} \end{equation} It follows from \eqref{e-88-def} that $g_+-\frac{1}{3}g_-\in\dom(T)$ and hence $\kappa=\frac{1}{3}$ is the von Neumann parameter of $T$ corresponding to the deficiency vectors \eqref{e-88-def}.
The state-space operator of $\Theta$ in the rigged Hilbert space \eqref{e-139-triple} is (see \cite[Example 2]{BMkT-3}) \begin{equation}\label{e-152-bA} \begin{aligned} \bA x&=i\frac{dx}{dt}+\frac{i}{2} (3 x(0+)+x(0-))\left[\delta(t+)-\delta(t-)\right],\\ \bA^* x&=i\frac{dx}{dt}+\frac{i}{2} ( x(0+)+3 x(0-))\left[\delta(t+)-\delta(t-)\right], \end{aligned} \end{equation} where all the components are defined in Example 1. Finally, the channel operator of the L-system $\Theta$ is \begin{equation*}\label{e-155-new_0} \begin{aligned} Kc&=c\cdot {\frac{1}{\sqrt2}}[\delta(t+)-\delta(t-)], \quad (c\in \dC),\\ K^* x&={\frac{1}{\sqrt2}}(x(0+)-x(0-)),\\ \end{aligned} \end{equation*} where $x(t)\in \calH_+= W^1_2(-\infty,0]\oplus W^1_2[0,\infty)$. It was shown in \cite[Example 2]{BMkT-3} that $$ V_{\Theta}(z)\equiv \frac{3-1}{3+1}i=\frac{1}{2}i,\quad z\in\dC_+. $$ Observe that $V_{\Theta}$ belongs to the class $\sM_{1/3}$ (here $\kappa=\frac{1}{3}$) and that $a$ given by \eqref{e-66-L} is $$ a=\frac{1-\kappa}{1+\kappa}=\frac{1-1/3}{1+1/3}=\frac{1}{2}. $$ The c-Entropy of this L-system $\Theta$ is (see \eqref{e-70-entropy}) \begin{equation}\label{e-72-entr} \calS=-\ln|\kappa|=-\ln\frac{1}{3}=\ln3\approx 1.0986. \end{equation} The corresponding dissipation coefficient of the L-system $\Theta$ is found according to \eqref{e-69-ent-dis} and is $\calD=\frac{8}{9}$.
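The numerical data of this example can be verified directly; the following sketch (an added illustration) checks the values of $a$, $\calS$, and $\calD$ obtained above:

```python
import math

kappa = 1/3                          # von Neumann parameter of T in Example 2
a = (1 - kappa) / (1 + kappa)        # parameter of the class M_kappa
assert math.isclose(a, 0.5)

S = -math.log(kappa)                 # c-Entropy, eq. (e-70-entropy)
assert math.isclose(S, math.log(3))
assert math.isclose(a, math.tanh(S / 2))   # a = tanh(S/2), eq. (e-47-b)

D = 1 - math.exp(-2 * S)             # dissipation coefficient, eq. (e-69-ent-dis)
assert math.isclose(D, 8/9)
assert math.isclose(D, 1 - kappa**2)
print("ok")
```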
\begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline & & &\\ \textbf{Class}& \textbf{c-Entropy} & \textbf{Dissipation} & \textbf{Example} \\ & & \textbf{coefficient} &\\ \hline & & &\\ $\sM$ & $\calS=\infty$ & $\calD=1$ &Example 1\\ & & & \\ \hline & & &\\ $\sM^1$ & $\calS(1)=\frac{1}{2}\ln5\approx 0.8047$ & $\calD(1)=\frac{1}{5}$ & Example 1\\ & & & \\ \hline & & &\\ $\sM_{1/3}$& $\calS=\ln3\approx 1.0986$& $\calD=\frac{8}{9}\approx0.8889$ &Example 2\\ & & & \\ \hline & & &\\ $\sM_{1/3}^1$ & $\calS(1)=\frac{1}{2}\ln\frac{13}{5}\approx 0.4778$& $\calD(1)=\frac{104}{169}\approx0.6154$ &Example 2\\ & & & \\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \end{tabular} \caption{Numerical values of c-Entropy and Dissipation coefficient of perturbed L-systems} \label{Table-2} \end{table} Now we are going to construct a perturbed L-system $\Theta(1)$ out of the elements of the L-system $\Theta$ such that $$V_{\Theta(1)}(z)= 1+\frac{1}{2}i,\quad z\in\dC_+.$$ Clearly, by construction $V_{\Theta(1)}(z)\in\sM^1_{1/3}$. As we have shown in \cite[Example 2]{BMkT-3}, this construction requires the values of $\kappa(Q)$ of the form \eqref{e-53-kappa-prime} and $U(Q)$ of the form \eqref{e-75-U} for $Q=1$, which yield \begin{equation}\label{e-161-k-U} \kappa(1)=\frac{\sqrt{65}}{13}\quad \textrm{and} \quad U(1)=\frac{-7+4i}{\sqrt{65}}. \end{equation} Then the main operator of the constructed L-system is \begin{equation*}\label{e-162-T1} \begin{aligned} &\quad\quad T(1) x=i\frac{dx}{dt},\\ &\dom(T(1))=\left\{x(t)=\left[ \begin{array}{c} x_1(t) \\ x_2(t) \\ \end{array} \right] \,\Big|\,x_1(t),\,x_2(t) -\text{abs. cont.}, x'_1(t)\in L^2_{(-\infty,0]},\right.\\ &\left.
x'_2(t)\in L^2_{[0,\infty)},\,\sqrt{65} x_2(0+)=-13\,x_1(0-)\right\}.\\ \end{aligned} \end{equation*} Also, as it was shown in \cite[Example 2]{BMkT-3} the state-space operator of this L-system $\Theta(1)$ is $$ \begin{aligned} \bA(1) x&=i\frac{dx}{dt}-\frac{i}{20}\Big(\sqrt{65} x(0+)+13x(0-)\Big)\left(\sqrt{65}(4+3i) \delta(t-)+ (20+35 i)\delta(t+)\right). \end{aligned} $$ and the composed L-system is \begin{equation}\label{e6-125-11} \Theta(1)= \begin{pmatrix}\bA(1)&K(1) &1\\ &&\\ \calH_+ \subset \calH \subset\calH_- &{ } &\dC \end{pmatrix}, \end{equation} where $\calH_+ \subset \calH \subset\calH_-$ is of the form \eqref{e-139-triple}, $K(1) c=c\cdot \chi(1)$, $(c\in \dC)$, $K^\ast(1) x=(x,\chi(1))$, with \begin{equation}\label{e-164-chi1} \chi(1)=\frac{1}{2\sqrt{65}}\left(\sqrt{65}(1+2i) \delta(t-)+ (1+18 i)\delta(t+)\right), \end{equation} and $x(t)\in \calH_+$. This L-system $\Theta(1)$ is such that $V_{\Theta(1)}(z)= 1+\frac{1}{2}i$ for all $z\in\dC_+$. The c-Entropy of this perturbed L-system $\Theta(1)$ is (see \eqref{e-70-entropy}) \begin{equation}\label{e-77-entr} \calS(1)=-\ln|\kappa(1)|=-\ln\frac{\sqrt{65}}{13}=\frac{1}{2}\ln\frac{13}{5}\approx 0.4778. \end{equation} Note that $\calS(1)<\calS$ where $\calS$ is given by \eqref{e-72-entr}. The corresponding dissipation coefficient of the L-system $\Theta(1)$ is found according to \eqref{e-69-ent-dis} (or \eqref{e-51-dcy}) and is $$\calD(1)=1-\kappa^2(1)=1-\frac{65}{169}=\frac{104}{169}.$$ The numerical values for c-Entropy and Dissipation coefficient of perturbed L-systems constructed in the examples are summarized in Table \ref{Table-2}. \appendix \section{Inclusion into an L-system}\label{A1} In this appendix, following \cite{BMkT-3,T69}, we provide an explicit construction of an L-system based upon the following operator theoretic setting. Assume that $\dA$ is a densely defined closed symmetric operator with finite deficiency indices $(1,1)$. 
Given $$ (\kappa,U)\in [0,1)\times \mathbb{T}, \quad \text{with} \quad \mathbb{T}=\{z\in \bbC\,\mid\, |z|=1\}, $$ and $(+)$-normalized deficiency elements $g_\pm\in\sN_{\pm i}=\Ker (\dA^*\mp i I)$, $\|g_\pm\|_+=1$, assume that $T$ is a quasi-selfadjoint extension of $\dot A$ such that $$ g_+-\kappa g_-\in \dom (T). $$ Also assume that $A$ is a reference self-adjoint extension of $\dot A$ with \begin{equation}\label{e-78-ex} g_++Ug_-\in \dom (A). \end{equation} Introduce the L-system (see \cite{BMkT-3,BT-21}) \begin{equation}\label{e-215} \Theta= \begin{pmatrix} \bA&K&\ 1\cr \calH_+ \subset \calH \subset \calH_-& &\dC\cr \end{pmatrix}, \end{equation} where $\bA$ is the unique $(*)$-extension of $T$ (see \cite[Theorem 4.4.6]{ABT}) and $$K\,c=c\cdot\chi,\quad (c\in\dC).$$ In this case, the state-space operator $\bA$ is given by \begin{equation}\label{e-205-A} \begin{aligned} \bA&=\dA^*+\frac{\sqrt2 i(\kappa+\bar U)}{|1+\kappa U|\sqrt{1-\kappa^2}}\Big(\cdot\,\,, \kappa\varphi+\psi\Big)\chi, \end{aligned} \end{equation} with \begin{equation}\label{e-212} \chi=\frac{\kappa^2+1+2\kappa U}{\sqrt2|1+\kappa U|\sqrt{1-\kappa^2}}\varphi+ \frac{\kappa^2 U+2\kappa+ U}{\sqrt2|1+\kappa U|\sqrt{1-\kappa^2}}\psi. \end{equation} Here \begin{equation}\label{e-23-phi-psi} \varphi=\calR^{-1}(g_+),\quad \psi=\calR^{-1}(g_-), \end{equation} with $\calR$ the Riesz-Berezansky operator. \begin{remark}\label{r-1} Notice that since by the hypothesis $ \|g_\pm\|_+=1, $ we have $$\|\varphi\|_-=\|\psi\|_-=1.$$ Indeed, by \eqref{e3-4}, $$ \|\varphi\|_-^2=\|\cR\varphi\|_+^2=\|g_+\|_+^2=1. $$ Analogously, $$ \|\psi\|_-^2=1. $$ Moreover, since obviously $$ \|g_\pm\|_+^2=2\|g_\pm\|^2, $$ we also see that the deficiency elements $g_\pm'\in\sN_{\pm i}$ given by \begin{equation}\label{e-34-conv} g_+'=\sqrt2\calR\varphi=\sqrt2\, g_+,\qquad g_-'=\sqrt2\calR\psi=\sqrt2\, g_- \end{equation} are $(\cdot)$-normalized.
\end{remark} {Given all that, it is also worth mentioning that all the results are formulated in terms of the $(+)$-normalized deficiency elements $g_\pm$.} Observe that the constructed $L$-system $\Theta$ of the form \eqref{e-215} is in one-to-one correspondence with a parametric pair $(\kappa,U)\in [0,1)\times \mathbb T$. Also recall that (see \cite{BMkT-3,BT-21}) \begin{equation}\label{e-26-Im} \IM\bA =(\cdot,\chi)\chi, \end{equation} and \begin{equation}\label{e-214} \begin{aligned} \RE\bA&=\dA^*-\frac{i\sqrt{1-\kappa^2}}{\sqrt2|1+\kappa U|}(\cdot,\varphi-U\psi)\chi, \end{aligned} \end{equation} where $\chi$ is given by \eqref{e-212}. If the reference self-adjoint extension $A$ is such that $U=-1$ in \eqref{e-78-ex}, then for the corresponding L-system \begin{equation}\label{e-62-1-1} \Theta_1= \begin{pmatrix} \bA_1&K_1&\ 1\cr \calH_+ \subset \calH \subset \calH_-& &\dC\cr \end{pmatrix} \end{equation} we have \begin{equation}\label{e-29-bA1} \bA_1=\dA^*-\frac{\sqrt2 i}{\sqrt{1-\kappa^2}} \Big(\cdot, \kappa\varphi+\psi\Big)\chi_1, \end{equation} where \begin{equation}\label{e-18} \chi_1=\sqrt{\frac{1-\kappa}{2+2\kappa}}\,(\varphi- \psi)=\sqrt{\frac{1-\kappa}{1+\kappa}}\left(\frac{1}{\sqrt2}\,\varphi- \frac{1}{\sqrt2}\,\psi\right). \end{equation} Also, \eqref{e-26-Im} gives us \begin{equation}\label{e-17} \begin{aligned} \IM\bA_1&=\left(\frac{1}{2}\right)\frac{1-\kappa}{1+\kappa}(\cdot,\varphi-\psi)(\varphi- \psi)=(\cdot,\chi_1)\chi_1, \end{aligned} \end{equation} and, according to \eqref{e-214}, \begin{equation}\label{e-17-real} \begin{aligned} \RE\bA_1&=\dA^*-\frac{i}{2}(\cdot,\varphi+\psi)(\varphi-\psi). 
\end{aligned} \end{equation} If in \eqref{e-78-ex} we have $U=1$, then the entries of the corresponding L-system \begin{equation}\label{e-62-1-3} \Theta_2= \begin{pmatrix} \bA_2&K_2&\ 1\cr \calH_+ \subset \calH \subset \calH_-& &\dC\cr \end{pmatrix} \end{equation} are given by \begin{equation}\label{e-29-bA2} \bA_2=\dA^*+\frac{\sqrt2 i}{\sqrt{1-\kappa^2}} \Big(\cdot, \kappa\varphi+\psi\Big)\chi_2, \end{equation} where \begin{equation}\label{e-18-1} \chi_2=\sqrt{\frac{1+\kappa}{2-2\kappa}}\,(\varphi+ \psi)=\sqrt{\frac{1+\kappa}{1-\kappa}}\left(\frac{1}{\sqrt2}\,\varphi+ \frac{1}{\sqrt2}\,\psi\right). \end{equation} Also, \eqref{e-26-Im} yields \begin{equation}\label{e-17-1} \IM\bA_2= \left(\frac{1}{2}\right)\frac{1+\kappa}{1-\kappa}\Big((\cdot,\varphi+\psi)(\varphi+\psi)\Big)=(\cdot,\chi_2)\chi_2, \end{equation} and, according to \eqref{e-214}, \begin{equation}\label{e-32-real} \begin{aligned} \RE\bA_2&=\dA^*-\frac{i}{2}(\cdot,\varphi-\psi)(\varphi+ \psi). \end{aligned} \end{equation} Note that the two L-systems $\Theta_1$ and $\Theta_2$ in \eqref{e-62-1-1} and \eqref{e-62-1-3} are constructed in a way that the quasi-kernels $\hat A_1$ of $\RE\bA_1$ and $\hat A_2$ of $\RE\bA_2$ satisfy the conditions \eqref{ddoomm14} or \eqref{ddoomm14-1}, respectively, as follows from \eqref{e-17-real} and \eqref{e-32-real}. {Also, we would like to emphasize that formulas \eqref{e-215}--\eqref{e-212} allow us to construct an L-system $\Theta$ that is completely based on a given triple $(\dot A, \whA, A)$ and fixed $(+)$-normalized deficiency vectors $g_\pm$. Moreover, in this construction the operators $\dA$ and $T$ become the symmetric and main operators of $\Theta$, respectively, while the self-adjoint reference extension $A$ of the triple matches $\hat A$, the quasi-kernel of $\RE\bA$.} \begin{thebibliography}{1} \bibitem{AG93} N.~I.~Akhiezer, I.~M.~Glazman, \textit{Theory of linear operators}. Pitman Advanced Publishing Program, 1981.
\bibitem{ABT} Yu.~Arlinskii, S.~Belyi, E.~Tsekanovskii, \textit{Conservative Realizations of Herglotz-Nevanlinna functions}, {Oper. Theory Adv. Appl.}, {vol. \textbf{217}}, {Birkh\"auser Verlag}, {2011}. \bibitem{BMkT} S.~Belyi, K.~A.~Makarov, E.~Tsekanovskii, \textit{Conservative L-systems and the Liv\v{s}ic function}. Methods of Functional Analysis and Topology, \textbf{21}, no. 2, (2015), 104--133. \bibitem{BMkT-2} S.~Belyi, K.~A.~Makarov, E.~Tsekanovskii, \textit{A system coupling and Donoghue classes of Herglotz-Nevanlinna functions}, Complex Analysis and Operator Theory, \textbf{10} (4), (2016), 835--880. \bibitem{BMkT-3} S.~Belyi, E.~Tsekanovskii, \textit{Perturbations of Donoghue classes and inverse problems for L-systems}, Complex Analysis and Operator Theory, vol. \textbf{13} (3), (2019), 1227--1311. \bibitem{BT-16} S.~Belyi, K.~A.~Makarov, E.~Tsekanovskii, \textit{On the c-Entropy of L-systems with Schr\"odinger operator}, Complex Analysis and Operator Theory, \textbf{16} (107), (2022), 1--59. \bibitem{BT-21} S.~Belyi, K.~A.~Makarov, E.~Tsekanovskii, \textit{The L-system representation and c-Entropy}, Pure and Applied Functional Analysis, vol. \textbf{9} (4), (2024), pp. 935--961. \bibitem{Ber} Yu.~Berezansky, \textit{Expansion in eigenfunctions of self-adjoint operators}, {vol. \textbf{17}}, {Transl. Math. Monographs}, {AMS}, {Providence}, {1968}. \bibitem{D} W.~F.~Donoghue, \textit{On perturbation of spectra}, Commun. Pure and Appl. Math., {\bf 18}, (1965), 559--579. \bibitem{GMT97} F.~Gesztesy, K.~A.~Makarov, E.~Tsekanovskii, \textit{An addendum to Krein's formula}, {J. Math. Anal. Appl.}, {\bf 222}, (1998), 594--606. \bibitem{GT} F.~Gesztesy, E.~Tsekanovskii, \textit{On Matrix-Valued Herglotz Functions}. Mathematische Nachrichten, \textbf{218}, (2000), 61--138. \bibitem{Lv1} M.~Liv\v{s}ic, \textit{On spectral decomposition of linear non-self-adjoint operators}. Mat. Sbornik (76) {\bf 34} (1954), 145--198 (Russian); English transl.: Amer. Math. Soc.
Transl. (2) {\bf 5}, 67--114 (1957). \bibitem{Lv2} M.~S.~Liv\v{s}ic, \textit{Operators, oscillations, waves}. Moscow, Nauka, 1966. \bibitem{MT-S} K.~A.~Makarov, E.~Tsekanovskii, \textit{On the Weyl-Titchmarsh and Liv\v{s}ic functions}. Proceedings of Symposia in Pure Mathematics, vol. \textbf{87}, American Mathematical Society, (2013), 291--313. \bibitem{MT10} K.~A.~Makarov, E.~Tsekanovskii, \textit{On the addition and multiplication theorems}. {Oper. Theory Adv. Appl.}, {vol. \textbf{244}}, (2015), 315--339. \bibitem{MT2021} {K.~A.~Makarov, E.~Tsekanovskii}, \textit{Representations of commutation relations in Dissipative Quantum Mechanics}. arXiv:2101.10923 [math-ph]. \bibitem{MTBook} K.~A.~Makarov, E.~Tsekanovskii, \textit{The Mathematics of Open Quantum Systems, Dissipative and Non-Unitary Representations and Quantum Measurements}, World Scientific, 2022. \bibitem{T69} E.~Tsekanovski\u i, \textit{The description and the uniqueness of generalized extensions of quasi-Hermitian operators}. (Russian) Funkcional. Anal. i Prilozen., \textbf{3}, no. 1, (1969), 95--96. \bibitem{TSh1} E.~Tsekanovskii, Yu.~L.~\u Smuljan, \textit{The theory of bi-extensions of operators on rigged Hilbert spaces. Unbounded operator colligations and characteristic functions}, {Russ. Math. Surv.}, {\bf 32}, {(1977)}, {73--131}. \end{thebibliography} \end{document}
2501.00047v3
http://arxiv.org/abs/2501.00047v3
$σ$-Sets and $σ$-Antisets
\documentclass{article} \usepackage[english]{babel} \usepackage[letterpaper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} \usepackage{amsmath} \usepackage{amssymb} \usepackage{dsfont} \usepackage{graphicx} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage{amsthm} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \newtheorem{assumption}[theorem]{Assumption} \title{$\sigma$-Sets and $\sigma$-Antisets} \author{ {\bf Ivan Gatica and Alfonso Bustamante } \\ \\ Institute of Mathematics, Pontifical Catholic University of Valparaiso,\\ Valparaiso, Chile\\ {\small [email protected]}\\ Department of Informatics, Technological University of Chile,\\ Apoquindo 7282, Santiago, Chile,\\ {\small [email protected]}\\ } \begin{document} \maketitle \begin{abstract} In this paper we present a brief study of the $\sigma$-set-$\sigma$-antiset duality that occurs in $\sigma$-set theory, and we develop the integer space $3^{A}=\left\langle 2^{A}, 2^{A^{-}} \right\rangle$ for the cardinals $|A|=2,3$, together with its algebraic properties. We also present some properties of the fusion of $\sigma$-sets, and finally we develop and define a type of equation in one $\sigma$-set variable. \end{abstract} \section{$\sigma$-Sets and $\sigma$-Antisets} As we have seen in \cite{Gatica}, a $\sigma$-antiset is defined as follows: \begin{definition} Let $A$ be a $\sigma$-set, then $B$ is said to be the $\sigma$-antiset of $A$ if and only if $A\oplus B=\emptyset$, where $\oplus$ is the fusion of $\sigma$-sets.
\end{definition} We must observe that, given the definition of the fusion operator $\oplus$ in \cite{Gatica}, it is clear that $\oplus$ is commutative; therefore, if $B$ is a $\sigma$-antiset of $A$, then $A$ is also the $\sigma$-antiset of $B$. On the other hand, following the Blizard notation (\cite{Blizard}, p. 347), we will denote the $\sigma$-antiset $B$ of $A$ by $B=A^{-}$; in this way we will have $A=(A^{-})^{-}$. Continuing with the development of the $\sigma$-sets, we have constructed three primary $\sigma$-sets, which are: \begin{center} \begin{tabular}{|l|l|} \hline Natural Numbers & $\mathds{N}=\{1,2,3,4,5,6,7,8,9,10,\ldots\}$ \\ \hline $0$-Natural Numbers & $\mathds{N}^{0}=\{1_{0},2_{0},3_{0},4_{0},5_{0},6_{0},7_{0},8_{0},9_{0},10_{0},\ldots\}$ \\ \hline Antinatural Numbers & $\mathds{N}^{-}=\{1^{\ast},2^{\ast},3^{\ast},4^{\ast},5^{\ast},6^{\ast},7^{\ast},8^{\ast},9^{\ast},10^{\ast},\ldots\}$\\ \hline \end{tabular} \end{center} where $1=\{\alpha\}$, $1_{0}=\{\emptyset\}$ and $1^{\ast}=\{\omega\}$. We must clarify that we have changed the letter $\beta$ to the letter $\omega$ for symmetry reasons. We must also remember that $$\ldots\in\alpha_{-2}\in\alpha_{-1}\in\alpha\in\alpha_{1}\in\alpha_{2}\in\ldots$$ and $$\ldots\in\omega_{-2}\in\omega_{-1}\in\omega\in\omega_{1}\in\omega_{2}\in\ldots$$ where both $\epsilon$-chains have the linear $\epsilon$-root property and are totally different, i.e. they do not have a link-intersection. These definitions can be found in \cite{Gatica}, Definitions 3.13, 3.14 and 3.16. On the other hand, we must remember the definition of the space generated by two $\sigma$-sets $A$ and $B$, which is: \begin{definition}\label{Df CG} Let $A$ and $B$ be two $\sigma$-sets. The \textbf{Generated space by $A$ and $B$ } is given by $$ \left\langle 2^{A},2^{B}\right\rangle=\{x\oplus y : x\in 2^{A}\wedge y\in 2^{B}\},$$ where $\oplus$ is the fusion operator.
\end{definition} Let us recall a few things about the fusion operator $\oplus$. In this brief analysis, we must observe that, given two $\sigma$-sets $x$ and $y$, if $\{x\}\cup\{y\}=\emptyset$ then it will be said that $y$ is the antielement of $x$ and $x$ the antielement of $y$. Here we use the union of pairs $\cup$ axiomatized within the theory of $\sigma$-sets, in particular in the completion axioms A and B, which we will call annihilation axioms from now on. \begin{notation} Let $x$ be an element of some $\sigma$-set, then we will denote by $x^{\ast}$ the antielement of $x$, if it exists. \end{notation} Now we move on to define the new operations with $\sigma$-sets which will help us define the fusion of $\sigma$-sets $\oplus$. \begin{definition} Let $A$ and $B$ be two $\sigma$-sets, then we define the $\ast$-intersection of $A$ with $B$ by $$A\widehat{\cap} B =\{x\in A : x^{\ast}\in B \}.$$ \end{definition} \begin{example} Let $A=\{1,2,3^{\ast},4\}$ and $B=\{2,3,4^{\ast}\}$ be two $\sigma$-sets, then we have that: $$A\widehat{\cap}B=\{3^{\ast},4\}$$ and $$B\widehat{\cap}A=\{3,4^{\ast}\},$$ so it is clear that the $\ast$-intersection operator is not commutative. \end{example} \begin{theorem}\label{T IA} Let $A$ be a $\sigma$-set, then $A\widehat{\cap} A=\emptyset$.
\end{theorem} \begin{proof} Let $A$ be a $\sigma$-set, by definition we will have that $$A\widehat{\cap}A=\{x\in A: x^{\ast}\in A\}.$$ Suppose now that $A\widehat{\cap}A\neq\emptyset $, then there exists an $x\in A$ such that $x^{\ast}\in A$, therefore we will have that $x,x^{\ast}\in A$, which contradicts Theorem 3.39 (Exclusion of inverses) from \cite{Gatica}; so if $A$ is a $\sigma$-set then $$A\widehat{\cap}A=\emptyset.$$ \end{proof} \begin{example} Let $A=\{1,2,3^{*},4\}$, then $$A\widehat{\cap}A=\{1,2,3^{*},4\}\widehat{\cap}\{1,2,3^{*},4\},$$ $$A\widehat{\cap}A=\{x\in\{1,2,3^{*},4\}: x^{*}\in\{1,2,3^{*},4\}\}, $$ $$A\widehat{\cap}A=\emptyset.$$ \end{example} Regarding Theorem \ref{T IA}, we can observe that given a $\sigma$-set $A$, the $\sigma$-set theory does not allow the coexistence of a $\sigma$-element $x$ and its $\sigma$-antielement in the same $\sigma$-set $A$, precisely because $A$ is a $\sigma$-set. However, since $\sigma$-set theory is a $\sigma$-class theory, one can find the $\sigma$-elements together with the $\sigma$-antielements coexisting without problems in what we call a proper $\sigma$-class; in this way, one will have that $\{x,x^{\ast}\}$ is a proper $\sigma$-class and not a $\sigma$-set. \begin{theorem}\label{T IV} Let $A$ be a $\sigma$-set, then $A\widehat{\cap}\emptyset=\emptyset$ and $\emptyset\widehat{\cap}A=\emptyset$. \end{theorem} \begin{proof} Let $A$ be a $\sigma$-set, by definition we will have that $$A\widehat{\cap}\emptyset=\{x\in A: x^{\ast}\in \emptyset\}.$$ Now suppose that $A\widehat{\cap}\emptyset\neq\emptyset $, then there exists an $x\in A$ such that $x^{\ast}\in\emptyset$, which is a contradiction, hence $A\widehat{\cap}\emptyset=\emptyset $. On the other hand, $\emptyset\widehat{\cap}A\subseteq\emptyset $, thus we will have that $\emptyset\widehat{\cap}A=\emptyset$.
\end{proof} On the other hand, we will define the $\ast$-difference between $\sigma$-sets, a fundamental operation to be able to define the fusion between $\sigma$-sets. \begin{definition} Let $A$ and $B$ be two $\sigma$-sets, then we define the $\ast$-difference between $A$ and $B$ by $$ A\divideontimes B =A - (A\widehat{\cap} B),$$ where $A-B=\{x\in A : x\notin B\}.$ \end{definition} \begin{example} Let $A=\{1,2,3^{\ast},4\}$ and $B=\{2,3,4^{\ast}\}$, then we have that: $$A\widehat{\cap}B=\{3^{\ast},4\},$$ therefore $$ A\divideontimes B =A - (A\widehat{\cap} B)=\{1,2,3^{\ast},4\}-\{3^{\ast},4\}=\{1,2\}$$ $$ A\divideontimes B =\{1,2\}.$$ We also have that $$B\widehat{\cap}A=\{3,4^{\ast}\}$$ therefore $$ B\divideontimes A =B - (B\widehat{\cap} A)=\{2,3,4^{\ast}\}-\{3,4^{\ast}\}=\{2\}$$ $$ B\divideontimes A =\{2\}.$$ \end{example} \begin{corollary}\label{C DA} Let $A$ be a $\sigma$-set. Then $A\divideontimes A=A$. \end{corollary} \begin{proof} Let $A$ be a $\sigma$-set, then by Theorem \ref{T IA} we will have that $A\widehat{\cap}A=\emptyset$, therefore $$A\divideontimes A=A - (A\widehat{\cap} A)=A-\emptyset =A.$$ \end{proof} \begin{corollary}\label{C DV} Let $A$ be a $\sigma$-set. Then $A\divideontimes\emptyset=A$ and $\emptyset\divideontimes A=\emptyset$.
\end{corollary} \begin{proof} Let $A$ be a $\sigma$-set, then by Theorem \ref{T IV} we will have that $A\widehat{\cap}\emptyset=\emptyset\widehat{\cap}A=\emptyset$, therefore $$A\divideontimes\emptyset=A - (A\widehat{\cap} \emptyset)=A-\emptyset =A$$ and $$\emptyset\divideontimes A=\emptyset - (\emptyset\widehat{\cap} A)=\emptyset-\emptyset =\emptyset.$$ \end{proof} Now, after defining the $\ast$-intersection and the $\ast$-difference, we can define the fusion of $\sigma$-sets as follows: \begin{definition} Let $A$ and $B$ be two $\sigma$-sets, then we define the fusion of $A$ and $B$ by $$ A\oplus B =\{ x : x\in A\divideontimes B \vee x\in B\divideontimes A\}.$$ \end{definition} It is clear that the fusion of $\sigma$-sets is commutative by definition. Now let us show an example. \begin{example} Let $A=\{1,2,3^{\ast},4\}$ and $B=\{2,3,4^{\ast}\}$, then we have that: $$ A\oplus B =\{ x : x\in A\divideontimes B \vee x\in B\divideontimes A\},$$ $$A\oplus B =\{x : x\in\{1,2\}\vee x\in\{2\} \}, $$ $$A\oplus B=\{1,2\}, $$ therefore we have that $$\{1,2,3^{\ast},4\}\oplus\{2,3,4^{\ast}\}=\{2,3,4^{\ast}\}\oplus \{1,2,3^{\ast},4\} =\{1,2\}.$$ \end{example} \begin{corollary}\label{C SA} Let $A$ be a $\sigma$-set, then $A\oplus A=A$. \end{corollary} \begin{proof} Let $A$ be a $\sigma$-set, by definition we have that $$A\oplus A=\{x: x\in A\divideontimes A \vee x\in A\divideontimes A\}.$$ Now by Corollary \ref{C DA}, we have that $$A\oplus A=\{x: x\in A \vee x\in A\},$$ $$A\oplus A=\{x: x\in A\},$$ therefore it is clear that $A\subset A\oplus A$ and that $A\oplus A\subset A$, hence $A\oplus A=A$. \end{proof} \begin{corollary}\label{C SV} Let $A$ be a $\sigma$-set, then $A\oplus\emptyset=\emptyset\oplus A=A$. \end{corollary} \begin{proof} First we will show that $A\oplus\emptyset=A$.
By definition we will have that $$A\oplus\emptyset=\{x: x\in A\divideontimes\emptyset \vee x\in\emptyset\divideontimes A\}.$$ Now by Corollary \ref{C DV}, we will have that $$A\oplus\emptyset=\{x: x\in A \vee x\in\emptyset\},$$ $$A\oplus\emptyset=\{x: x\in A\},$$ from this it is clear that $A\subset A\oplus\emptyset$ and that $A\oplus\emptyset\subset A$, in this way $A\oplus\emptyset=A$. Second, we will show that $\emptyset\oplus A=A$. By definition we will have that $$\emptyset\oplus A =\{x: x\in\emptyset\divideontimes A \vee x\in A\divideontimes\emptyset \}.$$ Now by Corollary \ref{C DV}, we will have that $$\emptyset\oplus A =\{x: x\in\emptyset \vee x\in A\},$$ $$\emptyset\oplus A=\{x: x\in A\},$$ from this it is clear that $A\subset \emptyset\oplus A$ and that $\emptyset\oplus A\subset A$, in this way $\emptyset\oplus A=A$. \end{proof} \begin{theorem}\label{T fusion-union} Let $X$ be a $\sigma$-set, then for all $A,B\in 2^{X}$, we have that: $$A\oplus B = A\cup B,$$ where $A\cup B=\{x: x\in A \vee x\in B\}.$ \end{theorem} \begin{proof} Let $X$ be a $\sigma$-set and $A,B\in 2^{X}$. Then, by Theorem 3.39 of \cite{Gatica} we have that $$ A\widehat{\cap} B = B\widehat{\cap} A =\emptyset , $$ in this way $$A\divideontimes B=A \wedge B\divideontimes A=B. $$ Finally $A\oplus B=\{x: x\in A \vee x\in B\}=A\cup B.$ \end{proof} \begin{example} Let $X=\{1,2,3\}$, $A=\{1,2\}$ and $B=\{2,3\}$, it is clear that $A,B\in 2^{X}$. Now we apply the fusion operator $\oplus$. $$A\oplus B=\{x: x\in A\divideontimes B \vee x\in B\divideontimes A\},$$ $$A\oplus B=\{x: x\in A\vee x\in B\},$$ $$A\oplus B=A\cup B=\{1,2,3\}.$$ \end{example} \begin{corollary}\label{C fusion-union} Let $X$ be a $\sigma$-set, then for all $A\in 2^{X}$, we have that: $$A\oplus X = X.$$ \end{corollary} \begin{proof} Let $X$ be a $\sigma$-set and $A\in 2^{X}$.
Then by Theorem \ref{T fusion-union} we have that $$A\oplus X = A\cup X.$$ Now, as $A\subset X$, then $A\cup X=X$, therefore $$A\oplus X=X.$$ \end{proof} \begin{example} Let $X=\{1,2,3,4\}$ and $A=\{1,2,3\}$, it is clear that $A\in 2^{X}$. Now we apply the fusion operator $\oplus$. $$A\oplus X=\{x: x\in A\divideontimes X \vee x\in X\divideontimes A\},$$ $$A\oplus X=\{x: x\in A\vee x\in X\},$$ $$A\oplus X=A\cup X=\{1,2,3,4\}=X.$$ \end{example} As we said before, the fusion of $\sigma$-sets $\oplus$ is commutative by definition but, as we demonstrated in \cite{Gatica,Bustamante16,Bustamante11}, this operation is not associative. \begin{example}\label{Ej no asociativo} Let $A=\{1^{\ast},2^{\ast}\}$, $B=\{1,2\}$ and $C=\{1\}$, then $$(A\oplus B)\oplus C= \emptyset\oplus C=C $$ and $$A\oplus (B\oplus C)= A\oplus B=\emptyset, $$ therefore we have that $$(A\oplus B)\oplus C\neq A\oplus(B\oplus C).$$ \end{example} \section{Generated space} As we have already indicated in Definition \ref{Df CG}, the space generated by two $\sigma$-sets $A$ and $B$ is: $$ \left\langle 2^{A},2^{B}\right\rangle=\{x\oplus y : x\in 2^{A}\wedge y\in 2^{B}\}.$$ Now, taking into account the $\sigma$-set-$\sigma$-antiset duality, we can consider the following example. \begin{example} We consider the $\sigma$-set $A=\{1,2,3\}$ and its $\sigma$-antiset $A^{-}=\{1^{\ast},2^{\ast},3^{\ast}\}$, then we obtain the integer space $3^{A}$ where $$3^{A}= \left\langle 2^{A}, 2^{A^{-}} \right\rangle.$$ It is important to observe that $$2^{A}=\{\emptyset, \{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},A \}$$ and $$2^{A^{-}}=\{\emptyset^{-}, \{1^{\ast}\},\{2^{\ast}\},\{3^{\ast}\},\{1^{\ast},2^{\ast}\},\{1^{\ast},3^{\ast}\},\{2^{\ast},3^{\ast}\},A^{-} \}.$$ It is also important to observe that $\emptyset=\emptyset^{-}$, which is very important for the construction of $3^{A}$.
Now considering the definition of generated space, $$3^{A}=\left\langle 2^{A}, 2^{A^{-}} \right\rangle=\{X\oplus Y : X\in 2^{A} \wedge Y \in 2^{A^{-}} \},$$ where the operator $\oplus$ is the fusion of $\sigma$-sets, we will obtain the following matrix: \begin{table} \centering \begin{tabular}{||c||c|c|c|c|c|c|c|c||} \hline \hline $\oplus$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $A$ \\\hline\hline $\emptyset^{-}$ & $\emptyset^{0}_{0}$ & {\color{green}$\{1\}$} & {\color{green}$\{2\}$} & {\color{green}$\{3\}$} & {\color{red}$\{1,2\}$} & {\color{red}$\{1,3\}$} & {\color{red}$\{2,3\}$} & $A$ \\\hline $\{1^{\ast}\}$ & {\color{green}$\{1^{\ast}\}$} & $\emptyset^{1}_{1}$ & {\color{cyan}$\{1^{\ast},2\}$} & {\color{cyan}$\{1^{\ast},3\}$} & {\color{green}$\{2\}$} & {\color{green}$\{3\}$} & {\color{blue}$\{1^{\ast},2,3\}$} & {\color{red}$\{2,3\}$} \\\hline $\{2^{\ast}\}$ & {\color{green}$\{2^{\ast}\}$} & {\color{cyan}$\{1,2^{\ast}\}$} & $\emptyset^{2}_{1}$ & {\color{cyan}$\{2^{\ast},3\}$} & {\color{green}$\{1\}$} & {\color{yellow}$\{1,2^{\ast},3\}$} & {\color{green}$\{3\}$} & {\color{red}$\{1,3\}$} \\\hline $\{3^{\ast}\}$ & {\color{green}$\{3^{\ast}\}$} & {\color{cyan}$\{1,3^{\ast}\}$} & {\color{cyan}$\{2,3^{\ast}\}$} & $\emptyset^{3}_{1}$ & {\color{magenta}$\{1,2,3^{\ast}\}$} & {\color{green}$\{1\}$} & {\color{green}$\{2\}$} & {\color{red}$\{1,2\}$} \\\hline $\{1^{\ast},2^{\ast}\}$ & {\color{red}$\{1^{\ast},2^{\ast}\}$} & {\color{green}$\{2^{\ast}\}$} & {\color{green}$\{1^{\ast}\}$} & {\color{magenta}$\{1^{\ast},2^{\ast},3\}$} & $\emptyset^{4}_{2}$ & {\color{cyan}$\{2^{\ast},3\}$} & {\color{cyan}$\{1^{\ast},3\}$} & {\color{green}$\{3\}$} \\\hline $\{1^{\ast},3^{\ast}\}$ & {\color{red}$\{1^{\ast},3^{\ast}\}$} & {\color{green}$\{3^{\ast}\}$} & {\color{yellow}$\{1^{\ast},2,3^{\ast}\}$} & {\color{green}$\{1^{\ast}\}$} & {\color{cyan}$\{2,3^{\ast}\}$} & $\emptyset^{5}_{2}$ & {\color{cyan}$\{1^{\ast},2\}$} & {\color{green}$\{2\}$} \\\hline
$\{2^{\ast},3^{\ast}\}$ & {\color{red}$\{2^{\ast},3^{\ast}\}$} & {\color{blue}$\{1,2^{\ast},3^{\ast}\}$} & {\color{green}$\{3^{\ast}\}$} & {\color{green}$\{2^{\ast}\}$} & {\color{cyan}$\{1,3^{\ast}\}$} & {\color{cyan}$\{1,2^{\ast}\}$} & $\emptyset^{6}_{2}$ & {\color{green}$\{1\}$} \\\hline $A^{-}$ & $A^{-}$ & {\color{red}$\{2^{\ast},3^{\ast}\}$} & {\color{red}$\{1^{\ast},3^{\ast}\}$} & {\color{red}$\{1^{\ast},2^{\ast}\}$} & {\color{green}$\{3^{\ast}\}$} & {\color{green}$\{2^{\ast}\}$} & {\color{green}$\{1^{\ast}\}$} & $\emptyset^{7}_{3}$ \\\hline\hline \end{tabular} \caption{\label{tab:matrix1}Integer Space.} \end{table} \end{example} It is important to note that from the perspective of $\sigma$-sets we have that $\emptyset=\emptyset^{-}=\emptyset^{i}_{j}$ with $i\in\{0,1,2,3,4,5,6,7\}$ and $j\in\{0,1,2,3\}$, where the difference between the $\sigma$-emptysets $\emptyset^{i}_{j}$ is given by annihilation, which comes from the equation $A\oplus A^{-}=\emptyset$. From the matrix representation of the integer space $3^{A}$, we can present another representation of the same integer space. This representation of the integer space $3^{A}$ is a graphical one, which we show in Figure \ref{fig:Integer1}. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{figura1.jpg} \caption{\label{fig:Integer1} Integer Space $3^{A}$.} \end{figure} Finally, as a theoretical result, we have a cardinal theorem: \begin{theorem} Let $A=\{1,2,3\}$, then $\left|3^{A}\right|=\left|\left\langle 2^{A}, 2^{A^{-}}\right\rangle\right|=3^{3}=27$. \end{theorem} \begin{proof} Let $A=\{1,2,3\}$; the proof is given by the fusion matrix for this $\sigma$-set, represented in Table \ref{tab:matrix1}. \end{proof} We should also note that we have obtained other cardinal results for the integer space $3^{A}$ with $\left|A \right|\in\{0,1,2,3,4,5\}$.
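The fusion matrix and the cardinal theorem above can be checked mechanically. The following is a minimal sketch under an encoding of our own: a $\sigma$-element $n$ is represented by the integer $n$ and its $\sigma$-antielement $n^{\ast}$ by $-n$; the annihilation labels $\emptyset^{i}_{j}$ are not tracked, so every $\sigma$-emptyset collapses to the same empty set.

```python
from itertools import combinations

def star_intersection(A, B):
    # *-intersection: elements of A whose antielement (-x) lies in B
    return {x for x in A if -x in B}

def fusion(A, B):
    # fusion A (+) B = (A * B) union (B * A), where A * B = A - (A *-cap B)
    return (A - star_intersection(A, B)) | (B - star_intersection(B, A))

def powerset(S):
    # all subsets of S, i.e. the sigma-power set 2^S
    S = sorted(S)
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def generated_space(A, B):
    # <2^A, 2^B> = { x (+) y : x in 2^A, y in 2^B }
    return {frozenset(fusion(x, y)) for x in powerset(A) for y in powerset(B)}

A = {1, 2, 3}
A_minus = {-a for a in A}                        # the sigma-antiset of A
assert len(generated_space(A, A_minus)) == 27    # |3^A| = 3^3

# the same count for smaller cardinals: |3^A| = 3^|A| for |A| = 0, 1, 2
for n in range(3):
    An = set(range(1, n + 1))
    assert len(generated_space(An, {-a for a in An})) == 3 ** n
```

The same routine reproduces every entry of the fusion matrix; for instance, $\{1^{\ast},3^{\ast}\}\oplus\{2,3\}$ evaluates to $\{1^{\ast},2\}$, since the pair $3^{\ast},3$ annihilates.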
The cardinal results are as follows: \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\sigma$-Set & $\sigma$-Antiset & Generated & Cardinal \\ \hline \hline $A=\emptyset$ & $A^{-}=\emptyset^{-}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{0}=1$ \\ \hline $A=\{1\}$ & $A^{-}=\{1^{\ast}\}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{1}=3$ \\ \hline $A=\{1,2\}$ & $A^{-}=\{1^{\ast},2^{\ast}\}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{2}=9$ \\ \hline $A=\{1,2,3\}$ & $A^{-}=\{1^{\ast},2^{\ast},3^{\ast}\}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{3}=27$ \\ \hline $A=\{1,2,3,4\}$ & $A^{-}=\{1^{\ast},2^{\ast},3^{\ast},4^{\ast}\}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{4}=81$ \\ \hline $A=\{1,2,3,4,5\}$ & $A^{-}=\{1^{\ast},2^{\ast},3^{\ast},4^{\ast},5^{\ast}\}$ & $\left\langle 2^{A},2^{A^{-}} \right\rangle$ & $3^{5}=243$ \\ \hline \end{tabular} \end{center} From these calculations made with the fusion matrix we can obtain the following conjecture. \begin{conjecture} Let $A$ be a $\sigma$-set such that $|A|=n$, then $\left|3^{A}\right|=\left|\left\langle 2^{A}, 2^{A^{-}}\right\rangle\right|=3^{n}$. \end{conjecture} On the other hand, as we have already said, we are going to change the notation of $1_{\Theta}$ to $1_{0}$, in this way we will have the $\sigma$-set of $0$-natural numbers defined as follows: \\ $1_{0}=\{\emptyset\}$\\ $2_{0}=\{\emptyset,1_{0}\}$\\ $3_{0}=\{\emptyset,1_{0},2_{0}\}$\\ $4_{0}=\{\emptyset,1_{0},2_{0},3_{0}\}$\\ and so on, forming the $0$-natural numbers $$\mathds{N}^{0}=\{1_{0},2_{0},3_{0},4_{0},5_{0},6_{0},7_{0},8_{0},9_{0},10_{0},\ldots\},$$ where one of the important properties of this $\sigma$-set is that it does not annihilate with the natural numbers $\mathds{N}$ nor with the antinatural numbers $\mathds{N}^{-}$, in this way we can consider the following example for the generated space. 
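The non-annihilation property of the $0$-natural numbers can be illustrated computationally. The following is a minimal sketch under an encoding of our own: a $\sigma$-element is a pair (value, kind), where only matching 'nat'/'anti' pairs annihilate and 'zero' elements (the $n_{0}$) have no antielement.

```python
def anti(x):
    # antielement of a sigma-element, if it exists: a natural n and the
    # antinatural n* annihilate pairwise; 0-naturals have no antielement
    n, kind = x
    if kind == 'nat':
        return (n, 'anti')
    if kind == 'anti':
        return (n, 'nat')
    return None  # kind == 'zero': never annihilates

def fusion(A, B):
    # A (+) B: drop from each side the elements whose antielement
    # lies on the other side (the *-intersections), keep the rest
    return ({x for x in A if anti(x) not in B}
            | {y for y in B if anti(y) not in A})

one_0 = (1, 'zero')   # 1_0 in N^0
one   = (1, 'nat')    # 1   in N
one_a = (1, 'anti')   # 1*  in N^-

assert fusion({one_0}, {one}) == {one_0, one}      # no annihilation with N
assert fusion({one_0}, {one_a}) == {one_0, one_a}  # nor with N^-
assert fusion({one}, {one_a}) == set()             # 1 and 1* annihilate
```

Under this encoding, the meta-space of the next example gets its $2^{|A|}\cdot 3^{|B|}$ count because each $0$-natural contributes two states (present or absent) while each natural contributes three (present, present as antielement, or absent).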
\begin{example} We consider the $\sigma$-sets $A=\{1_{0},2_{0}\}$ and $B=\{1,2\}$, therefore the space generated by $A\oplus B$ and $A\oplus B^{-}$ will be: $\left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle=\{ x\oplus y : x\in 2^{A\oplus B}\wedge y\in 2^{A\oplus B^{-}}\} $ $\left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle=\{\emptyset, \{1_{0}\}, \{1\}, \{1^{\ast}\}, \{2_{0}\}, \{2\}, \{2^{\ast}\}, \{1_{0},2_{0}\}, \{1_{0},1\}, \{1_{0}, 1^{\ast}\}, \{1_{0},2\}, \{1_{0},2^{\ast}\}, \\ \{2_{0},1\}, \{2_{0}, 1^{\ast}\}, \{2_{0},2\}, \{2_{0},2^{\ast}\}, \{1,2\}, \{1, 2^{\ast}\}, \{1^{\ast},2\}, \{1^{\ast},2^{\ast}\}, \{1_{0},1,2\}, \{1_{0},1, 2^{\ast}\}, \{1_{0},1^{\ast},2\},\\ \{1_{0},1^{\ast},2^{\ast}\}, \{2_{0},1,2\}, \{2_{0},1, 2^{\ast}\}, \{2_{0},1^{\ast},2\}, \{2_{0},1^{\ast},2^{\ast}\}, \{1_{0},2_{0},1\}, \{1_{0},2_{0},1^{\ast}\}, \{1_{0},2_{0},2\}, \{1_{0},2_{0},2^{\ast}\},\\ \{1_{0},2_{0},1,2\}, \{1_{0},2_{0},1, 2^{\ast}\}, \{1_{0},2_{0},1^{\ast},2\}, \{1_{0},2_{0},1^{\ast},2^{\ast}\} \}$\\ \\ In this case, the generated space becomes a meta-space generated by $A=\{1_{0},2_{0}\}$ and $B=\{1,2\}$ which can be ordered graphically as shown in figure \ref{fig:metaspace}. 
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{figura2.jpg} \caption{\label{fig:metaspace} Meta-space $\left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle$.} \end{figure} \end{example} Now, if we count the number of elements that the meta-space generated by $A=\{1_{0},2_{0}\}$ and $B=\{1,2\}$ has, we will find that there are 36, where the prime decomposition of this number is $36=2^{2}\cdot 3^{2}$, which is equivalent to the following multiplication of cardinals $36=2^{|A|}\cdot 3^{|B|}$, from which we can obtain the following conjecture: \begin{conjecture} For all $A\in 2^{\mathds{N}^{0}}$ and $B\in 2^{\mathds{N}}$, we have that $\left| \left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle \right| = 2^{|A|}\cdot 3^{|B|} .$ \end{conjecture} \begin{example} We consider $A=\{1_{0}\}$ and $B=\{1,2\}$, then we obtain that\\ $\left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle=\{\emptyset, \{1_{0}\}, \{1\}, \{1^{\ast}\}, \{2\}, \{2^{\ast}\}, \{1_{0},1\}, \{1_{0},2\}, \{1_{0},1^{\ast}\}, \{1_{0},2^{\ast}\}, \{1,2\}, \{1,2^{\ast}\}, \\ \{1^{\ast},2\}, \{1^{\ast},2^{\ast}\}, \{1_{0},1,2\}, \{1_{0},1,2^{\ast}\}, \{1_{0},1^{\ast},2\}, \{1_{0},1^{\ast},2^{\ast}\} \} $\\ Thus, we have that $|A|=1$ and $|B|=2$ and $\left| \left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle \right| = 2^{|A|}\cdot 3^{|B|}= 2^{1}\cdot 3^{2}=18.$ \end{example} \begin{example} We consider $A=\emptyset$ and $B=\{1,2\}$, then we obtain that\\ $3^{B} =\{\emptyset, \{1\}, \{1^{\ast}\}, \{2\}, \{2^{\ast}\}, \{1,2\}, \{1,2^{\ast}\}, \{1^{\ast},2\}, \{1^{\ast},2^{\ast}\} \} $\\ Thus, we have that $|A|=0$ and $|B|=2$ and $\left| \left\langle 2^{A\oplus B}, 2^{A\oplus B^{-}}\right\rangle \right| = 2^{|A|}\cdot 3^{|B|}= 2^{0}\cdot 3^{2}=9.$ \end{example} \section{Algebraic structure of integer space $3^{A}$} With respect to the algebraic structure of the Integer Space $3^{A}$ for all $A\in 2^{\mathds{N}}$, we think that these structures are related to structures called NAFILs
(non-associative finite invertible loops). \begin{theorem}\label{T Alg} Let $A=\{1,2\}$. Then $(3^{A},\oplus)$ satisfies the following conditions: \begin{enumerate} \item $(\forall X,Y\in 3^{A})(X\oplus Y\in 3^{A}),$ \item $(\exists! \emptyset\in 3^{A})(\forall X\in 3^{A})(X\oplus \emptyset=\emptyset\oplus X=X),$ \item $(\forall X\in 3^{A})(\exists! X^{-}\in 3^{A})(X\oplus X^{-}=X^{-}\oplus X=\emptyset),$ \item $(\forall X,Y\in 3^{A})(X\oplus Y=Y\oplus X).$ \end{enumerate} \end{theorem} \begin{proof} Let $A=\{1,2\}$ and consider the fusion matrix for $3^{\{1,2\}}$ displayed in Table \ref{tab:matrix2}. From it, conditions $(1)$, $(2)$, and $(3)$ of Theorem \ref{T Alg} can be verified directly, while condition $(4)$ is immediate from the definition. We must clarify that, as $\sigma$-sets, $\emptyset=\emptyset^{-}$ and $\emptyset=\emptyset^{0}_{0}=\emptyset^{1}_{1}=\emptyset^{2}_{1}=\emptyset^{3}_{2}$; this yields condition $(2)$, and the distinction between these empty $\sigma$-sets lies in another dimension, the dimension of annihilation. We also emphasize that the fusion operation $\oplus$ is not associative. Let $X=\{1^{\ast},2^{\ast}\}$, $Y=\{1,2\}$ and $Z=\{1\}$. Then $(\{1^{\ast},2^{\ast}\}\oplus\{1,2\})\oplus\{1\}=\emptyset\oplus\{1\}=\{1\}$, while on the other hand $\{1^{\ast},2^{\ast}\}\oplus (\{1,2\}\oplus\{1\})=\{1^{\ast},2^{\ast}\}\oplus\{1,2\}=\emptyset$; therefore $$(X\oplus Y)\oplus Z\neq X\oplus (Y\oplus Z), $$ which shows that the structure $(3^{A},\oplus)$ is non-associative.
\begin{table} \centering \begin{tabular}{||c||c|c|c|c||} \hline $\oplus$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{1,2\}$ \\ \hline \hline $\emptyset^{-}$ & $\emptyset^{0}_{0}$ & $\{1\}$ & $\{2\}$ & $\{1,2\}$ \\ \hline $\{1^{\ast}\}$ & $\{1^{\ast}\}$ & $\emptyset^{1}_{1}$ & $\{1^{\ast},2\}$ & $\{2\}$ \\ \hline $\{2^{\ast}\}$ & $\{2^{\ast}\}$ & $\{1,2^{\ast}\}$ & $\emptyset^{2}_{1}$ & $\{1\}$ \\ \hline $\{1^{\ast},2^{\ast}\}$ & $\{1^{\ast},2^{\ast}\}$ & $\{2^{\ast}\}$ & $\{1^{\ast}\}$ & $\emptyset^{3}_{2}$ \\ \hline \end{tabular} \caption{\label{tab:matrix2}Integer Space $3^{\{1,2\}}$.} \end{table} \end{proof} We now present a new conjecture. \begin{conjecture}\label{CJ BUCLE} Let $A\in 2^{\mathds{N}}$. Then $(3^{A},\oplus)$ satisfies the following conditions: \begin{enumerate} \item $(\forall X,Y\in 3^{A})(X\oplus Y\in 3^{A}),$ \item $(\exists! \emptyset\in 3^{A})(\forall X\in 3^{A})(X\oplus \emptyset=\emptyset\oplus X=X),$ \item $(\forall X\in 3^{A})(\exists! X^{-}\in 3^{A})(X\oplus X^{-}=X^{-}\oplus X=\emptyset),$ \item $(\forall X,Y\in 3^{A})(X\oplus Y=Y\oplus X).$ \end{enumerate} \end{conjecture} \section{$\sigma$-Sets Equations} Continuing the analysis of $\sigma$-sets, we now develop equations in one $\sigma$-set variable, which play a very important role in this theory. Let us first define $\sigma$-set variables and study them in more depth. Recall that for any $\sigma$-sets $A$ and $B$, the fusion of both is defined as: $$A\oplus B=\{x: x\in A\divideontimes B \vee x\in B\divideontimes A \}.$$ \begin{definition} Let $A$ be a $\sigma$-set. Then $A$ is said to be an integer $\sigma$-set if the $\sigma$-antiset $A^{-}$ exists.
\end{definition} \begin{example} Let $A=\{1_{0},2_{0},3_{0}\}$. This $\sigma$-set is not integer, since $A^{-}$ does not exist. On the other hand, the $\sigma$-set $A=\{1,2,3,4\}$ is an integer $\sigma$-set, since its $\sigma$-antiset $A^{-}=\{1^{\ast},2^{\ast},3^{\ast},4^{\ast}\}$ exists. \end{example} It is clear that if a $\sigma$-set $A$ is integer, then by definition the integer space $3^{A}$ exists. We should also note that if $A$ is an integer $\sigma$-set, then $\left[A\cup A^{-}\right]$ is a proper $\sigma$-class; for example, for $A=\{1,2\}$ we have that $\left[A\cup A^{-}\right]=\left[1,2,1^{\ast},2^{\ast}\right]$ is a proper $\sigma$-class. We must observe that $\sigma$-set theory \cite{Gatica} is a theory of $\sigma$-classes, in which $\sigma$-sets are characterized by axioms. We must also note that a proper $\sigma$-class is a $\sigma$-class that is not a $\sigma$-set. This difference is essential to give rise to the existence of antielements along with their respective elements. \begin{definition} Let $A$ be an integer $\sigma$-set such that $|A|=n$. Then $X$ is said to be a $\sigma$-set variable of $3^{A}$ if and only if $$X=\{x_{1},x_{2},x_{3},\ldots ,x_{m}\},$$ where $m\leq n$ and each $x_{i}$ is a variable ranging over the proper $\sigma$-class $\left[A\cup A^{-}\right]$. \end{definition} \begin{example} Let $A=\{1,2,3\}$ be a $\sigma$-set. Clearly $A$ is an integer $\sigma$-set, since $A^{-}=\{1^{\ast},2^{\ast},3^{\ast}\}$ exists, and therefore so does $3^{A}$. In this way, $$X=\emptyset,$$ $$X=\{x\},$$ $$X=\{x_{1},x_{2}\},$$ $$X=\{x_{1},x_{2},x_{3}\},$$ are $\sigma$-set variables of $3^{A}$, where $x,x_{1},x_{2},x _{3}\in \left[1,2,3,1^{\ast},2^{\ast},3^{\ast}\right]$. \end{example} \begin{lemma}\label{L VC} Let $A$ be an integer $\sigma$-set and $X$ a $\sigma$-set variable of $3^{A}$. Then $A\oplus X=A\cup X$, with $A\subset A\cup X$ and $X\subset A\cup X$.
\end{lemma} \begin{proof} Let $A$ be an integer $\sigma$-set and $X$ a $\sigma$-set variable of $3^{A}$. Then $$A\oplus X=\{x: x\in A\divideontimes X \vee x\in X\divideontimes A \}. $$ Now, since $X$ is a $\sigma$-set variable, we have that $$A\divideontimes X=A$$ and $$X\divideontimes A=X,$$ therefore $$A\oplus X=\{x: x\in A \vee x\in X \}=A\cup X.$$ We can also observe that $A\cap X=\emptyset$, again because $X$ is a $\sigma$-set variable, and therefore $A\subset A\cup X$ and $X\subset A\cup X$. \end{proof} \begin{example} Let $A=\{1,2,3\}$, and let $X$ be a $\sigma$-set variable of $3^{A}$; that is, $$X=\emptyset,$$ $$X=\{x\},$$ $$X=\{x_{1},x_{2}\},$$ $$X=\{x_{1},x_{2},x_{3}\},$$ are $\sigma$-set variables of $3^{A}$, where $x,x_{1},x_{2},x _{3}\in \left[1,2,3,1^{\ast},2^{\ast},3^{\ast}\right]$. Then $$A\oplus X=\{1,2,3\},$$ $$A\oplus X=\{1,2,3,x\},$$ $$A\oplus X=\{1,2,3,x_{1},x_{2}\},$$ $$A\oplus X=\{1,2,3,x_{1},x_{2},x_{3}\}.$$ \end{example} After Lemma \ref{L VC}, we proceed to analyze some equations in one $\sigma$-set variable and their solutions. Let $A$ be an integer $\sigma$-set, $X$ a $\sigma$-set variable, and $M$ and $N$ two $\sigma$-sets of the integer space $3^{A}$; then an equation in one $\sigma$-set variable has the form $$X\oplus M=N.$$ If $M=N$, the equation becomes $$X\oplus M=M,$$ and by Corollary \ref{C fusion-union} the solutions are all $X\in 2^{M}$, among which we naturally count $X=\emptyset$; hence we have an equation in one $\sigma$-set variable with multiple solutions. Now consider $M\neq N$, so that the $\sigma$-set equation is $$X\oplus M=N.$$ We must remember that the structure is in general non-associative, so we cannot use associativity freely; to solve the equation we must first establish a preliminary theorem.
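The behavior of the fusion operation can also be explored computationally. The following Python sketch models $\sigma$-sets as frozensets of signed symbols; this encoding is ours and not part of the theory: an element $n$ is \texttt{(n, 1)}, its antielement $n^{\ast}$ is \texttt{(n, -1)}, and a ground element $n_{0}$ without antielement is \texttt{(n, 0)}. The function \texttt{fuse} cancels element/antielement pairs, matching the fusion table and the examples in the text.

```python
from itertools import combinations

def fuse(X, Y):
    """Fusion X (+) Y: element/antielement pairs annihilate, the rest unite."""
    cancelled_in_Y = {(n, -s) for (n, s) in X if s != 0} & set(Y)
    cancelled_in_X = {(n, -s) for (n, s) in cancelled_in_Y}
    return frozenset((set(X) - cancelled_in_X) | (set(Y) - cancelled_in_Y))

def powerset(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def generated_space(generators):
    """Close a family of sigma-sets under fusion (fuse is commutative)."""
    space = set(generators)
    frontier = set(generators)
    while frontier:
        new = {fuse(X, Y) for X in frontier for Y in space}
        frontier = new - space
        space |= frontier
    return space

# Non-associativity, as in the proof above: X={1*,2*}, Y={1,2}, Z={1}.
X = frozenset({(1, -1), (2, -1)})
Y = frozenset({(1, 1), (2, 1)})
Z = frozenset({(1, 1)})
assert fuse(fuse(X, Y), Z) == frozenset({(1, 1)})   # (X (+) Y) (+) Z = {1}
assert fuse(X, fuse(Y, Z)) == frozenset()           # X (+) (Y (+) Z) = empty

# Every W in 2^M solves W (+) M = M.
M = frozenset({(1, 1), (2, 1)})
assert all(fuse(W, M) == M for W in powerset(M))

# Conjectured cardinality: A={1_0}, B={1,2} gives 2^|A| * 3^|B| = 18 elements.
A, B, Bm = {(1, 0)}, {(1, 1), (2, 1)}, {(1, -1), (2, -1)}
gens = powerset(A | B) + powerset(A | Bm)
assert len(generated_space(gens)) == 18
```

In this model the closure of $2^{A\oplus B}$ and $2^{A\oplus B^{-}}$ under fusion reproduces the cardinality $2^{|A|}\cdot 3^{|B|}$ predicted by the conjecture of the previous section.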
To develop this theorem we will assume that for every integer $\sigma$-set $A$ the generated space is $\left\langle 2^{A}, 2^{A^{-}} \right\rangle=3^{A}$, and also that $3^{A}$ satisfies Conjecture \ref{CJ BUCLE}. \begin{theorem}\label{T IG} Let $A$ be an integer $\sigma$-set, $X$ be a $\sigma$-set variable of $3^{A}$ and $M\in 3^{A}$. Then $$(X\oplus M)\oplus M^{-}=X.$$ \end{theorem} \begin{proof} Let $A$ be an integer $\sigma$-set, $X$ be a $\sigma$-set variable of $3^{A}$ and $M\in 3^{A}$. Then by Lemma \ref{L VC} we have that $X\oplus M=X\cup M$, with $X\cap M=\emptyset$. Therefore $$ \circledast \ (X\oplus M)\oplus M^{-}=\{a: a\in(X\oplus M)\divideontimes M^{-}\vee a\in M^{-}\divideontimes (X\oplus M)\}$$ $$ =\{a: a\in(X\cup M)\divideontimes M^{-}\vee a\in M^{-}\divideontimes (X\cup M)\}, $$ where $(X\cup M)\divideontimes M^{-}= (X\cup M)-(X\cup M)\widehat{\cap} M^{-}=(X\cup M)-M=X,$ and $M^{-}\divideontimes(X\cup M)= M^{-}-M^{-}\widehat{\cap} (X\cup M)=M^{-}-M^{-}=\emptyset.$ Substituting these calculations into $(\circledast)$, we obtain $$(X\oplus M)\oplus M^{-}=\{a: a\in X \vee a\in\emptyset\},$$ $$(X\oplus M)\oplus M^{-}=\{a: a\in X\},$$ $$(X\oplus M)\oplus M^{-}=X.$$ \end{proof} Now that Theorem \ref{T IG} has been proved, we can solve some $\sigma$-set equations for the integer $\sigma$-set $A=\{1,2\}$, since its generated space is effectively equal to $3^{A}$, that is, $\left\langle 2^{A}, 2^{A^{-}} \right\rangle=3^{A}$, and moreover $3^{A}$ is a non-associative abelian loop. Let $A=\{1,2\}$ be an integer $\sigma$-set and $M,N\in 3^{A}$, with $M\widehat{\cap} N=\emptyset$; then the equation $$X\oplus M=N$$ is solved as follows: $$X\oplus M=N \setminus \oplus M^{-},$$ $$(X\oplus M)\oplus M^{-}=N\oplus M^{-},$$ and by Theorem \ref{T IG} we obtain $$X=N\oplus M^{-}.$$ Let us now show a concrete example for $A=\{1,2\}$. 
\begin{example} Let $A=\{1,2\}$ be an integer $\sigma$-set, $M=\{1,2^{\ast}\}$ and $N=\{1\}$, with $M\widehat{\cap}N=\emptyset$. Then the equation in one $\sigma$-set variable $$X\oplus \{1,2^{\ast}\}=\{1\}$$ has the following solution: $$X\oplus \{1,2^{\ast}\}=\{1\} \setminus \oplus \{1^{\ast},2\},$$ $$(X\oplus \{1,2^{\ast}\})\oplus \{1^{\ast},2\}=\{1\}\oplus \{1^{\ast},2\},$$ $$X=\{2\}.$$ Here we see that the equation has the $\sigma$-set $S_{1}=\{2\}$ as a solution, since $$\{2\}\oplus \{1,2^{\ast}\}=\{1\}.$$ However, as with the equation $X\oplus M=M$, the solution is not unique, since the $\sigma$-set $S_{2}=\{1,2\}$ is also a solution of the equation: $$\{1,2\}\oplus\{1,2^{\ast}\}=\{1\}.$$ In this way, we have two solutions for our equation in one $\sigma$-set variable, namely $$ S =\{S_{1},S_{2}\}=\{ \{2\}, \{1,2\}\}.$$ \end{example} Note that if $M\widehat{\cap}N=\emptyset$, then the $\sigma$-set equation has a solution; otherwise, the $\sigma$-set equation has no solution. \begin{example} Let $A=\{1,2\}$ be an integer $\sigma$-set, $M=\{1^{\ast}\}$ and $N=\{1\}$, with $M\widehat{\cap}N=\{1^{*}\}$. Then the equation in one $\sigma$-set variable $$X\oplus \{1^{\ast}\}=\{1\}$$ has no solution. Indeed, $$X\oplus \{1^{\ast}\}=\{1\} \setminus \oplus \{1\},$$ $$(X\oplus \{1^{\ast}\})\oplus \{1\}=\{1\}\oplus \{1\},$$ $$X=\{1\},$$ which is a contradiction, because $$\{1\}\oplus \{1^{\ast}\}=\emptyset,$$ so the equation would give $$\emptyset=\{1\}.$$ \end{example} \begin{definition} \label{fusionableDef} A \(\sigma\)-set equation \(X \oplus M=N\) is said to be {\bf fusionable} if \(M \widehat{\cap} N=\emptyset\). \end{definition} With this in mind, let us conclude with a theorem that exhibits some solutions of the \(\sigma\)-set equation. 
\begin{theorem} \label{theoremSolutionsS1S2} Let $A$ be an integer $\sigma$-set, $X$ a $\sigma$-set variable of $3^{A}$, and $M,N\in 3^{A}$. Then two possible solutions \(S=\{S_1,S_2\}\) of the fusionable equation $$X\oplus M=N$$ are $S_1=N \oplus R^{-}$ and $S_2=R^{-}$, where $R:=M \oplus N^{-}.$ \end{theorem} \begin{proof} For the first solution \(S_1\) we have that \begin{eqnarray*} S_1 &=& N \oplus R^{-} \\ &=& N \oplus (M \oplus N^{-})^{-} \\ &=& N \oplus (N \oplus M^{-}), \end{eqnarray*} while for the second solution, by the same identity, \(S_2=R^{-}=(M \oplus N^{-})^{-}=N \oplus M^{-}\). Hence both are solutions of the equation \(X \oplus M=N\), where \(S_2=R^{-}\) is an exact solution and \(S_1=N \oplus R^{-}\) is a solution carrying the remainder \(N\). Since \(M \widehat{\cap}N=\emptyset\) (Definition \ref{fusionableDef}), the equation \(X \oplus M=N\) being fusionable, both \(S_1 \oplus M\) and \(S_2 \oplus M\) fuse into the \(\sigma\)-set \(N\). \end{proof} As seen above, the solution space reduces so that the solutions are built from \(N \oplus M^{-}\); both are therefore possible solutions for the fusionable equation \(X \oplus M=N\). 
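The two solutions produced by the theorem can be verified mechanically. The Python sketch below models fusion with signed symbols ($n \mapsto$ \texttt{(n, 1)}, $n^{\ast}\mapsto$ \texttt{(n, -1)}; the encoding is ours, not from the text) and checks the formulas $S_1=N\oplus R^{-}$, $S_2=R^{-}$ on the data of the worked example below.

```python
def fuse(X, Y):
    """Fusion X (+) Y: element/antielement pairs annihilate, the rest unite."""
    cancelled_in_Y = {(n, -s) for (n, s) in X} & set(Y)
    cancelled_in_X = {(n, -s) for (n, s) in cancelled_in_Y}
    return frozenset((set(X) - cancelled_in_X) | (set(Y) - cancelled_in_Y))

def anti(X):
    """The sigma-antiset X^- of an integer sigma-set X."""
    return frozenset((n, -s) for (n, s) in X)

# Data of the example below: A = {1,...,6}, M = {1,2,3*,4*,5,6*}, N = {1,2}.
M = frozenset({(1, 1), (2, 1), (3, -1), (4, -1), (5, 1), (6, -1)})
N = frozenset({(1, 1), (2, 1)})

R = fuse(M, anti(N))        # R = M (+) N^-
S1 = fuse(N, anti(R))       # S1 = N (+) R^-
S2 = anti(R)                # S2 = R^-

assert S2 == frozenset({(3, 1), (4, 1), (5, -1), (6, 1)})   # {3,4,5*,6}
assert fuse(S1, M) == N and fuse(S2, M) == N                # both solve X (+) M = N
```

Both candidate solutions indeed satisfy $S_1\oplus M=S_2\oplus M=N$, as the hand computation in the example confirms.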
\begin{example} Let \(A=\{1,2,3,4,5,6\}\) be an integer $\sigma$-set, $M=\{1,2,3^{*},4^{*},5,6^{*}\}$ and $N=\{1,2\}$. Then the equation in one $\sigma$-set variable \[ X \oplus \{1,2,3^{*},4^{*},5,6^{*}\}=\{1,2\} \] is fusionable, because \(M \widehat{\cap}N=\{1,2,3^{*},4^{*},5,6^{*}\} \widehat{\cap}\{1,2\}=\emptyset.\) Now, using Theorem \ref{theoremSolutionsS1S2}, let us first compute \begin{eqnarray*} R^{-} &=& (M \oplus N^{-})^{-} \\ &=& (\{1,2,3^{*},4^{*},5,6^{*}\} \oplus \{1,2\}^{-})^{-} \\ &=& (\{1,2,3^{*},4^{*},5,6^{*}\} \oplus \{1^{*},2^{*}\})^{-} \\ &=& (\{3^{*},4^{*},5,6^{*}\})^{-} \\ &=& \{3,4,5^{*},6\}, \end{eqnarray*} so we get \(S_1=N \oplus R^{-}=\{1,2,3,4,5^{*},6\}\) and \(S_2=R^{-}=\{3,4,5^{*},6\}\), and one easily checks that both solutions give \(S_1 \oplus M=S_2 \oplus M=N\). Hence \(S=\{\{1,2,3,4,5^{*},6\},\{3,4,5^{*},6\}\}\) is a solution set for the fusionable equation \(X \oplus M=N\). \end{example} \section{Conclusions} A first conclusion we can draw is that the fusion operator $\oplus$ for $\sigma$-sets coincides with the union operator for sets within the context of the power set $2^{A}$, which allows us to deduce that the fusion of $\sigma$-sets is an extension of the union to the generated space. The fact that the integer space $3^{A}$ has cardinality a power of 3 is very important for the development of the theory of transfinite numbers, since in general the power set $2^{A}$, with cardinality a power of 2, is used; in this way our results can serve as an impetus for the development of the theory of transfinite numbers. We can also conclude that the algebraic structure of the integer space $3^{\{1,2\}}$ is a loop, which leads us to conjecture that the integer space in general carries a loop structure. This fact is relevant to $\sigma$-set theory since, if it were so, it would show that the fusion operator $\oplus$ is not associative, which is relevant for solving set equations. 
As a final conclusion, we can state that $\sigma$-set equations can be formulated thanks to the existence of inverses for the fusion operator $\oplus$ in the integer space; in the general case, however, solutions are not guaranteed, so a condition must be imposed on the $\sigma$-sets of the equation. We have not yet conducted a detailed study of the number of solutions of each set equation, leaving this study for future research. For further works in which antisets or $\sigma$-antisets are used, or in which the equation $A\cup B=\emptyset$ is discussed, see the references \cite{Bustamante11}, \cite{Bustamante16}, \cite{Chunlin}, \cite{Gatica}, \cite{Sengupta}. \begin{thebibliography}{0} \bibitem{Bustamante11} A. Bustamante, {\it Link Algebra: a new approach to graph theory}, (2011), arXiv:1103.3539v2. \bibitem{Bustamante16} A. Bustamante, {\it Associativity of $\sigma$-sets for non-antielement $\sigma$-set group}, (2016), arXiv:1701.02993v1. \bibitem{Blizard} W. D. Blizard, {\it The Development of Multiset Theory}, Modern Logic, vol. 1, no. 4 (1991), pp. 319-352. \bibitem{Chunlin} C. Kuang, G. Kuang, {\it Construction of Smart Sensor Networks Data System Based on Integration Formalized BCCS Model}, Sensors and Transducers, vol. 155, no. 8, August 2013, pp. 98-106. \bibitem{Gatica} I. Gatica A., {\it $\sigma$-Set Theory: introduction to the concepts of $\sigma$-antielement, $\sigma$-antiset and integer space}, (2010), arXiv:0906.3120v8. \bibitem{Sengupta} A. Sengupta, {\it ChaNoXity: The nonlinear dynamics of nature}, (2004), arXiv:nlin/0408043v2. \end{thebibliography} \end{document}
\documentclass[11pt,a4paper,reqno]{amsart} \usepackage{version} \usepackage{fancyhdr} \pagestyle{plain} \usepackage{color} \usepackage{hyperref} \hypersetup{ colorlinks=true, linktoc=all, linkcolor=blue, citecolor=red, } \usepackage{romannum} \AtBeginDocument{\pagenumbering{arabic}} \usepackage{xcolor} \newcommand*{\dd}{\mathop{}\!\mathrm{d}} \usepackage{amsmath,amsthm,amssymb,latexsym,epsfig,graphicx,subfigure} \usepackage{color} \usepackage{parskip} \usepackage{hyperref} \usepackage{mathabx} \setlength{\textheight}{22 cm} \setlength{\textwidth}{15 cm} \setlength{\oddsidemargin}{0.5cm}\setlength{\evensidemargin}{0.5cm} \setlength{\topmargin}{0cm} \setlength{\headheight}{1cm} \setlength{\marginparwidth}{6.5cm} \renewcommand{\baselinestretch}{1.15} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{claim}{Claim}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}{Conjecture} \newtheorem{corollary}{Corollary}[section] \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{definition} \newtheorem{remark}{Remark}[section] \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} \newcommand{\D}{\mathcal{D}} \newcommand{\cI}{\mathcal{I}} \renewcommand{\a}{\alpha} \newcommand{\e}{\varepsilon} \newcommand{\Int}{\textrm{Int}} \newcommand{\diam}{\textrm{diam}} \newcommand{\dist}{\textrm{dist}} \newcommand{\spt}{\textrm{spt }} \usepackage{tikz} \usepackage{graphicx} \def\donggeun#1{\noindent \textcolor{blue} {\textsc{(Donggeun:} \textsf{#1})}} \def\shengze#1{\noindent \textcolor{green} {\textsc{(Shengze:} \textsf{#1})}} \def\quy#1{\noindent \textcolor{red} {\textsc{(Quy:} \textsf{#1})}} \usepackage{comment} 
\newcommand{\norm}[1]{{\left\Vert#1\right\Vert}} \allowdisplaybreaks \begin{document} \title{\texorpdfstring{$L^{p}$}{Lp}-integrability of functions with Fourier supports on fractal sets on the moment curve} \author{Shengze Duan} \address{Department of Mathematics, University of Rochester, USA.} \email{[email protected]} \author{Minh-Quy Pham} \address{Department of Mathematics, University of Rochester, USA.} \email{[email protected]} \author{Donggeun Ryou} \address{Department of Mathematics, Indiana University Bloomington, USA.} \email{[email protected] } \begin{abstract} For $0 < \alpha \leq 1$, let $E$ be a compact subset of the $d$-dimensional moment curve such that $N(E,\e) \lesssim \e^{-\alpha}$ for $0 <\e <1$, where $N(E,\e)$ is the smallest number of $\e$-balls needed to cover $E$. We prove that if $f \in L^p(\R^d)$ with \begin{align*} 1 \leq p\leq p_\alpha:= \begin{cases} \frac{d^2+d+2\alpha}{2\alpha} & d \geq 3,\\ \frac{4}{\alpha} &d =2, \end{cases} \end{align*} and $\widehat{f}$ is supported on the set $E$, then $f$ is identically zero. We also prove that the range of $p$ is optimal by considering random Cantor sets on the moment curve. We extend the result of Guo, Iosevich, Zhang and Zorin-Kranich \cite{Guoetal23}, including the endpoint. We also consider applications of our results to the failure of restriction estimates and to the Wiener Tauberian Theorem.\\ \noindent \textbf{Keywords.} supports of Fourier transform, covering number, moment curve, restriction estimates, Wiener Tauberian theorems \\ \noindent \textbf{Mathematics Subject Classification.} primary: 42B10; secondary: 42B20, 28A75 \end{abstract} \maketitle \tableofcontents \section{Introduction} This paper concerns the following question: Assume that $f \in L^p(\R^d)$ and its Fourier transform is supported on a set $E$ in $\R^d$. What is the optimal range of $p$ such that $f$ is identically zero? 
Agranovsky and Narayanan \cite{Agranovsky04} proved that if $E$ is a $C^1$ manifold of dimension $0 < \alpha < d$ and $1 \leq p \leq \frac{2d}{\alpha}$, then $f$ is identically zero. Their result is optimal if $\alpha \geq d/2$. A typical example is when the manifold is the $(d-1)$-dimensional paraboloid. If $\mu$ is the surface measure on the paraboloid, we let $f = \widehat{\mu}$. Then $f$ is nonzero and $f \in L^p(\R^d)$ for $p > \frac{2d}{d-1}$. Their result was extended to fractal sets by Senthil Raani \cite{Raani14}. If the $\alpha$-dimensional packing pre-measure of $E$ is finite, the optimal range of $p$ where $f\equiv 0$ is $1 \leq p \leq \frac{2d}{\alpha}$ (see Remark \ref{premeas}). If we assume that $E$ is a compact set with finite $\alpha$-dimensional Hausdorff measure, then the optimal range of $p$ such that $f \equiv 0$ is $1 \leq p < \frac{2d}{\alpha}$; thus, it does not include the endpoint $\frac{2d}{\alpha}$. Salem \cite{Salem51} proved the sufficiency of the range of $p$ when $d=1$, and Edgar and Rosenblatt \cite{ER79} proved it when $\alpha \geq d-1$. For $0 < \alpha <d$, the proof of the sufficiency of the range of $p$ follows from the work of Kahane \cite{Kahane84} (see \cite{Dob24}). The necessity of the range of $p$ was recently proved by Dobronravov \cite{Dob24}. However, the threshold $2d/\alpha$ is not always optimal. The moment curve $$\Gamma_d=\{\gamma_d(t)=(t,t^2,\dots, t^d): t\in [0,1]\}$$ is such an example. Recently, Guo, Iosevich, Zhang and Zorin-Kranich \cite{Guoetal23} proved that if $E = \Gamma_d$ and if $$1 \leq p < \frac{d^2+d+2}{2},$$ then $f \equiv 0$. The range of $p$ is optimal except for the endpoint due to the work of Arkhipov, Chubarikov, and Karatsuba \cite[Theorem 1.3]{ACK_book}. The main aims of this paper are the following. First, we extend the result of \cite{Guoetal23} to fractal sets on $\Gamma_d$, including the endpoint. 
Second, we apply our result to the necessity condition of restriction estimates on the moment curve and to the Wiener Tauberian Theorem. Throughout this paper, we denote by $A \lesssim B$ when $A \leq CB$ for some constant $C>0$, and we denote by $A \sim B$ when $A \lesssim B$ and $B \lesssim A$. If the constant $C$ depends on a parameter such as $\e$, we write $A(\e) \lesssim_\e B(\e)$. The Lebesgue measure of a set $A$ in $\R^d$ is denoted by $|A|$, and the cardinality of a finite set $B$ is denoted by $\#B$. \subsection{\texorpdfstring{$L^p$}{Lp}-integrability of functions with Fourier supports on fractal sets on the moment curve} Let $E\subset \R^d$ be a non-empty bounded set. For $0<\e<1$, let the {\it $\e$-covering number} $N(E,\e)$ be the smallest number of $\e$-balls needed to cover $E$: \begin{align*} N(E,\e)=\min\{ k: E\subset \bigcup\limits_{i=1}^kB(x_i,\e), \text{ for $x_i\in \R^d$}\}. \end{align*} Let us state our first main result. \begin{theorem}\label{thm_main_Lp_moment} Let $d\geq 2$ and $0< \alpha\leq 1$. Let $E$ be a set on $\Gamma_d$ such that $N(E,\e)\lesssim \e^{-\alpha}$ uniformly in $0 < \e <1$. If $f\in L^p(\R^d)$ and $\spt \widehat{f}\subseteq E$, then $f \equiv 0$ when \begin{align}\label{eq_thm_1} 1\leq p\leq p_\alpha:= \begin{cases} \frac{d^2+d+2\alpha}{2\alpha} & \text{if} \ d \geq 3,\\ \frac{4}{\alpha} &\text{if} \ d =2. \end{cases} \end{align} \end{theorem} The range of $p$ in Theorem \ref{thm_main_Lp_moment} is optimal in the following sense. \begin{theorem}\label{thm_main_Lp_optimal} Let $d\geq 2$, and $0< \alpha\leq 1$. If $p > p_\alpha$, there exists a nonzero $f \in L^p(\R^d)$ such that $N(\spt {\widehat{f}}, \e) \lesssim \e^{-\alpha} $ for all $ 0< \e <1$. \end{theorem} \begin{remark}\label{premeas} Theorem 2.3 in \cite{Raani14} was stated with an assumption on the $\alpha$-dimensional packing measure. However, the author clarified that it should be the $\alpha$-dimensional packing pre-measure \cite{Raani_pri}. 
For $\alpha \geq 0$ and $\delta >0$, let \[ \mathcal{P}_\delta^\alpha(E) = \sup\ \sum_{i=1}^\infty |B_i|^\alpha, \] where the supremum is taken over all collections of disjoint balls $\{B_i\}$ of radii at most $\delta$ with centers in $E$. The $\alpha$-dimensional packing pre-measure $\mathcal{P}_0^\alpha(E)$ is defined by \[ \mathcal{P}_0^\alpha(E) = \lim_{\delta \rightarrow 0} \mathcal{P}_\delta^{\alpha}(E). \] Then, Lemma 2.2 in \cite{Raani14} holds when $\mathcal{P}_0^\alpha(E) < \infty $ and so does Theorem 2.3. \end{remark} \begin{remark}\label{re_covnum} Lemma 2.2 in \cite{Raani14} is still valid under the slightly weaker condition that $N(E,\e) \lesssim \e^{-\alpha}$ for all $0 < \e < 1$ instead of $\mathcal{P}_0^\alpha(E) <\infty$, since $|E(\e)| \lesssim \e^{d} N(E,\e) \lesssim \e^{d-\alpha}$ where $E(\e ) = \{x \in \R^d: d(x,E) < \e \} $. Therefore, Theorem 2.3 in \cite{Raani14} can be restated as follows: Let $f \in L^p(\R^d)$ be a function such that $\widehat{f}$ is supported on a set $E$ with $N(E,\e) \lesssim \e^{-\alpha}$ for all $0< \e <1$. Then $f \equiv 0$, provided that $p \leq \frac{2d}{\alpha}$. We will use this when $d=2$. \end{remark} \begin{remark} In fact, the proof of Theorem \ref{thm_main_Lp_moment} for $d \geq 3$ works even when $d=2$, and Senthil Raani's result can be applied in higher dimensions. By choosing the better of the two, $p_\alpha$ can also be written as \[ p_\alpha = \max\left\{ \frac{2d}{\alpha} , \frac{d^2+d+2\alpha}{2\alpha} \right\}. \] If $d=2$, then $\frac{2d}{\alpha}$ is larger, and if $d \geq 3$, then $\frac{d^2+d+2\alpha}{2\alpha}$ is larger. \end{remark} \subsection{Restriction estimates on the moment curve} In this section, we consider the extension estimate \begin{equation}\label{restriction1} \norm{\widehat{fd\mu}}_{L^p(\R^d)} \lesssim \norm{f}_{L^q(\mu)} \end{equation} where $\mu$ is a measure supported on $\Gamma_d$. 
Since its dual version is called the restriction estimate, we will describe our results by using the extension estimate. If $\mu$ is the arc length measure on $\Gamma_d$, i.e., \[ \int f d\mu = \int f(t, t^2, \cdots, t^d) dt, \] then the optimal range of $p$ and $q$ where the extension estimate holds is already known. \begin{theorem}\label{restric_thm_moment} Let $\mu$ be the arc length measure on $\Gamma_d$ defined above. Then \eqref{restriction1} holds if and only if \[ p \geq q'\frac{d(d+1)}{2} \qquad \text{and} \qquad p > \frac{d^2+d+1}{2}, \] where $q'$ is the conjugate of $q$, i.e. $1/q+1/q'=1$. \end{theorem} When $d=2$ and $d=3$, the result was proved by Zygmund \cite{Zygmund} and Prestini \cite{Prestini} respectively. In higher dimensions, it was shown by Drury \cite{Drury}. The necessity of the first restriction is due to the Knapp-type example. For example, see Example 1.8 in \cite{Dem_book}. The necessity of the second restriction follows by using a constant function on the support of $\mu$, which can be found in Theorem 1.3 in \cite{ACK_book}. For more general discussion, see \cite{BGGIST07}. As an analog of the necessity of the range of $p$ and $q$ in Theorem \ref{restric_thm_moment}, we have the following. \begin{theorem}\label{restriction_cant_any} Assume that $\mu$ is a nonzero Borel probability measure with $\spt \mu \subseteq \Gamma_d$. If $N(\spt \mu ,\e) \lesssim \e^{-\alpha}$ for all $0 < \e <1$, then \eqref{restriction1} cannot hold when \begin{equation}\label{necessity_2} 1 \leq p \leq p_\alpha. \end{equation} If $\mu(B(x,r)) \lesssim r^\alpha$ for all $x \in \R^d$ and $r>0$, then \eqref{restriction1} cannot hold when \begin{equation}\label{necessity_1} p < q' \frac{d(d+1)}{2\alpha}. \end{equation} \end{theorem} Note that if the measure $\mu$ is AD-regular, i.e. $\mu(B(x,r)) \sim r^\alpha$ for all $x \in \spt\mu$ and $0<r<\diam(\spt \mu)$, then the measure also satisfies that $N(\spt \mu,\e) \lesssim \e^{-\alpha}$ for all $ 0 < \e <1$. 
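To make the covering hypothesis concrete: the middle-thirds Cantor set $C$ satisfies $N(C,\e)\sim \e^{-\alpha}$ with $\alpha=\log 2/\log 3$, and since $\gamma_d$ is Lipschitz on $[0,1]$, the image $\gamma_d(C)\subset\Gamma_d$ obeys the same covering bound up to a constant. The short numerical check below is our own illustration, not part of the paper's argument.

```python
import math

def cantor_intervals(k):
    """Level-k intervals (left endpoint, length 3**-k) of the middle-thirds Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        intervals = [piece for (a, l) in intervals
                     for piece in ((a, l / 3), (a + 2 * l / 3, l / 3))]
    return intervals

alpha = math.log(2) / math.log(3)   # box (and Hausdorff) dimension of C

for k in range(1, 11):
    eps = 3.0 ** -k
    N = len(cantor_intervals(k))    # 2**k intervals of length eps cover C
    # N(C, eps) * eps**alpha stays bounded; here it equals 1 up to rounding,
    # so N(C, eps) is comparable to eps**(-alpha).
    assert abs(N * eps ** alpha - 1.0) < 1e-9
```

Since $\alpha=\log 2/\log 3\leq 1$, such sets fall within the hypotheses of Theorem \ref{thm_main_Lp_moment}.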
The necessity of the first restriction \eqref{necessity_2} follows from a stronger result. \begin{prop} Let $\mu$ be a nonzero Borel measure with $\spt \mu\subseteq \Gamma_d$. If $N(\spt \mu,\e)\lesssim \e^{-\alpha}$ for all $ 0 <\e <1 $ and $p \leq p_\alpha$, then \eqref{restriction1} cannot be satisfied for any nonzero $f \in L^q(\mu)$. \end{prop} \begin{proof} If there exists such $f \in L^q(\mu)$, then $\widehat{f{d\mu} } (-x) \in L^p(\R^d)$. This contradicts Theorem \ref{thm_main_Lp_moment} if we replace $f$ in Theorem \ref{thm_main_Lp_moment} by $\widehat{fd\mu}(-x)$. \end{proof} Thus, it suffices to prove the necessity of the second restriction \eqref{necessity_1}, which can be shown by a Knapp-type example. \begin{definition}\label{dualbox} Let $R$ be a rectangular box in $\R^d$ with side lengths $l_1, \cdots, l_d$. The \textit{dual rectangular box $\widetilde{R}$} of the box $R$ is the rectangular box with side lengths $l_1^{-1}, \cdots, l_d^{-1} $ centered at the origin such that the $l_1$ side of $R$ is parallel to the $l_1^{-1}$ side of $\widetilde{R}$. \end{definition} \begin{proof}[Proof of Theorem \ref{restriction_cant_any}] The argument in Proposition 3.1 in \cite{Mitsis02} shows that if $\mu(B(x,r)) \lesssim r^\alpha$ for all $x\in\R^d$ and $r>0$, then there exist sequences $\{x_k\}_{k \in \N}$ and $\{r_k\}_{k \in \N}$ such that $r_k \rightarrow 0$ as $k \rightarrow \infty$ and $$ \mu(B(x_k,r_k)) \sim r_k^\alpha. $$ For each $k$, let $f_k$ be the indicator function of $B(x_k,r_k)$. Since $\mu$ is supported on $\Gamma_d$, $B(x_k,r_k) \cap \Gamma_d$ is contained in a rectangular box $B_k$ with side lengths $\sim r_k, r_k^2, \cdots, r_k^{d}$. Then, let $\widetilde{B}_k$ be its dual rectangular box. Note that $\widetilde{B}_k $ has side lengths $r_k^{-1}$, $r_k^{-2}, \cdots, r_k^{-d}$. 
Since $B(x_k, r_k) \cap \Gamma_d \subseteq B_k$, if $\xi \in \widetilde{B}_k/100$, then $e(\xi\cdot x) \sim 1 $, i.e., $\text{Re} (e(\xi\cdot x)) \sim 1 $ and $\text{Im}(e(\xi\cdot x)) \leq 1/10 $. Therefore, we obtain that \begin{align*} \norm{\widehat{f_kd\mu}}_{L^p(\R^d)}^p&\geq \int_{\widetilde{B}_k/100} \left|\int_{B(x_k,r_k)}e(\xi \cdot x) d\mu(x)\right|^pd\xi\\ &\gtrsim \int_{\widetilde{B}_k/100} \left| \mu(B(x_k, r_k))\right|^pd\xi\\ &\gtrsim r_k^{\alpha p - \frac{d(d+1)}{2}}. \end{align*} Also, we have that $$ \norm{f_k}_{L^q(\mu)} \lesssim r_k^{\frac{\alpha}{q}}.$$ If we assume \eqref{restriction1}, we combine the estimates above and obtain that \[ r_k^{\alpha -\frac{d(d+1)}{2p}} \lesssim r_k^{\frac{\alpha}{q}}. \] Since $r_k \rightarrow 0$ as $k \rightarrow \infty$, we have the desired relation between $p$ and $q$. \end{proof} \begin{figure}[ht] \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [draw=none][fill={rgb, 255:red, 155; green, 155; blue, 155 } ,fill opacity=1 ] (331.5,289.56) -- (181,217.56) -- (331.5,217.56) -- cycle ; \draw (109,301.56) -- (109,11.06) ; \draw [shift={(109,9.06)}, rotate = 90] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (97,289.56) -- (386.5,289.07) ; \draw [shift={(388.5,289.06)}, rotate = 179.9] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. 
(10.93,3.29) ; \draw [fill={rgb, 255:red, 155; green, 155; blue, 155 } ,fill opacity=1 ] (109,67.06) -- (331.5,67.06) -- (331.5,218.06) -- (109,218.06) -- cycle ; \draw (331.5,218.06) -- (331.5,289.56) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (181,218.06) -- (181,289.56) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (331.5,289.56) -- (181,217.56) ; \draw (84.5,4.4) node [anchor=north west][inner sep=0.75pt] {$\frac{1}{p}$}; \draw (406,266.9) node [anchor=north west][inner sep=0.75pt] {$\frac{1}{q}$}; \draw (325.5,300.9) node [anchor=north west][inner sep=0.75pt] {$1$}; \draw (90,59.4) node [anchor=north west][inner sep=0.75pt] {$1$}; \draw (81,195.4) node [anchor=north west][inner sep=0.75pt] {$\frac{1}{p_{\alpha }}$}; \draw (170,299.4) node [anchor=north west][inner sep=0.75pt] {$\frac{1}{q_{\alpha }}$}; \draw (210,136) node [anchor=north west][inner sep=0.75pt] [align=left] {$A$}; \draw (281.5,231.5) node [anchor=north west][inner sep=0.75pt] [align=left] {$B$}; \end{tikzpicture} \caption{Failure range of the extension estimate: $q_\alpha= \begin{cases} p_\alpha & d \geq 3\\ 4 &d =2. \end{cases}$} \label{fig1} \end{figure} Therefore, if the measure $\mu$ is AD-regular, Theorem \ref{restriction_cant_any} implies that we can divide the range of $(p,q)$ where \eqref{restriction1} fails into two parts; when \eqref{necessity_2} holds and when \eqref{necessity_1} holds but \eqref{necessity_2} does not hold. They correspond to the region $A$ and the region $B$ respectively in Figure \ref{fig1}. In the region $A$, the estimate \eqref{restriction1} fails for any nonzero $f \in L^q(\mu)$ and in the region $B$, the estimate \eqref{restriction1} fails for some $f \in L^q(\mu)$. It would be interesting to find for which $f \in L^q(\mu)$ the estimate \eqref{restriction1} holds or fails in the region $B$. 
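The exponent bookkeeping behind the Knapp example above can be sanity-checked with exact arithmetic. In the sketch below (our own illustration, not part of the paper), \texttt{lhs} is the exponent of $r_k$ in $\norm{\widehat{f_k d\mu}}_{L^p}$ after taking $p$-th roots, and \texttt{rhs} is the exponent of $r_k$ in $\norm{f_k}_{L^q(\mu)}$; as $r_k \to 0$ the estimate fails exactly when \texttt{lhs < rhs}, i.e. when $p < q'\,d(d+1)/(2\alpha)$ as in \eqref{necessity_1}.

```python
from fractions import Fraction as F

def knapp_fails(d, alpha, p, q):
    """True iff the Knapp example contradicts the extension estimate as r_k -> 0."""
    lhs = alpha - F(d * (d + 1), 2) / p   # exponent of r_k on the left-hand side
    rhs = alpha / q                       # exponent of r_k on the right-hand side
    return lhs < rhs                      # r^lhs <= C r^rhs fails iff lhs < rhs

def below_threshold(d, alpha, p, q):
    q_conj = F(q, q - 1)                  # conjugate exponent q'
    return p < q_conj * F(d * (d + 1), 2) / alpha

# The two conditions agree exactly on a grid of parameters.
for d in (2, 3, 4):
    for q in (2, 3, 4):
        for alpha in (F(1, 4), F(1, 2), F(1, 1)):
            for twice_p in range(2, 120):
                p = F(twice_p, 2)
                assert knapp_fails(d, alpha, p, q) == below_threshold(d, alpha, p, q)
```

Working with `fractions.Fraction` keeps the comparison exact, so the boundary case $p = q'\,d(d+1)/(2\alpha)$ is classified consistently by both conditions.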
Also, when $\alpha =1$, the arc length measure on $\Gamma_d$ is an example such that \eqref{restriction1} holds if and only if \begin{equation}\label{sufficient_1} p \geq q'\frac{d(d+1)}{2 \alpha} \qquad \text{and} \qquad p > p_\alpha. \end{equation} However, when $0< \alpha <1$, no such example is known yet. When $d=2$, Ryou \cite{Ryou23} constructed a measure such that \eqref{restriction1} holds when $p > 6/\alpha$ and $q=2$. Even with interpolation, this result only covers a partial range of \eqref{sufficient_1}. Thus, it would be interesting to construct a measure $\mu$ on $\Gamma_d$ with $0 < \alpha <1$ such that \eqref{restriction1} holds if and only if $p$ and $q$ satisfy \eqref{sufficient_1}. \subsection{Wiener Tauberian Theorem} Consider a function $f \in L^p(\R^d)$. Let $\mathcal{M}_f$ denote the space of finite linear combinations of translates of $f$. The Wiener Tauberian theorem concerns necessary and sufficient conditions on $f$ for $\mathcal{M}_f$ to be dense in $L^p(\R^d)$. Wiener \cite{Weiner} proved the following result. See also p. 234 in \cite{Dono}. \begin{theorem} If $f \in L^1(\R^d)$, then $\mathcal{M}_f$ is dense in $L^1(\R^d)$ if and only if the zero set of $\widehat{f}$ is empty.\\ If $f \in L^2(\R^d)$, then $\mathcal{M}_f$ is dense in $L^2(\R^d)$ if and only if the zero set of $\widehat{f}$ has Lebesgue measure zero. \end{theorem} Wiener conjectured that a similar result would hold when $1 < p<2$. However, this was disproved by Lev and Olevskii \cite{LevOl}. They showed that, if $1 < p < 2$, then there exist two functions $f$ and $g$ in $L^1(\R) \cap C_0 (\R)$ such that $\mathcal{M}_f$ is dense in $L^p(\R)$ but $\mathcal{M}_g $ is not, while the zero sets of $\widehat{f}$ and $\widehat{g}$ are the same. Nevertheless, there are some results on sufficient conditions for $\mathcal{M}_f$ to be dense.
Beurling \cite{Beur} proved that if $f \in L^p(\R)$ and the zero set of $\widehat{f}$ is contained in a closed set of Hausdorff dimension $\alpha$ with $0< \alpha< 1 $, then $\mathcal{M}_f$ is dense in $L^p(\R)$ for $ 2/(2-\alpha) <p <\infty$. Herz \cite{Herz} and Agranovsky and Narayanan \cite{Agranovsky04} proved similar results in higher dimensions with additional assumptions. Senthil Raani \cite{Raani14} improved their work under an additional hypothesis on the zero set of $\widehat{f}$ as follows. \begin{prop}\label{Winer_Raani} Assume that $f \in L^{p'} \cap L^1(\R^d) $ and $1 \leq p < \infty$ where $1/p +1/p'=1$. If the zero set of $\widehat{f}$ is contained in $E \subseteq \Gamma_d$ with $p \leq 2d/\alpha$ and $N(E,\e) \lesssim \e^{-\alpha}$ for $0<\e< 1$, then $\mathcal{M}_f$ is dense in $L^{p'}(\R^d)$. \end{prop} Using Theorem \ref{thm_main_Lp_moment}, we can prove an analog of the above result. \begin{prop}\label{Wiener_moment} Assume that $f\in L^{p'}\cap L^1$. If the zero set of $\hat{f}$ is contained in $ E\subseteq\Gamma_d$ with $p\leq p_{\alpha}$ and $N(E,\e)\lesssim \e^{-\alpha}$ for all $0 < \e <1$, then $\mathcal{M}_f$ is dense in $L^{p'}(\R^d)$. \end{prop} The proof of Proposition \ref{Wiener_moment} is the same as that of Proposition \ref{Winer_Raani}, except that we use Theorem \ref{thm_main_Lp_moment}. Since $p_\alpha > 2d/\alpha $ for $d \geq 3$, the range of $p'$ in Proposition \ref{Wiener_moment} is larger than the range of $p'$ in Proposition \ref{Winer_Raani}. \subsection{Outline of the Paper} The rest of the paper is devoted to the proofs of Theorems \ref{thm_main_Lp_moment} and \ref{thm_main_Lp_optimal}. In Section \ref{sec2}, we prove Theorem \ref{thm_main_Lp_moment}. We follow the argument in the paper of Senthil Raani \cite{Raani14} with some modifications. We use rectangular boxes with side lengths $\e, \e^{2}, \cdots, \e^d$ adapted to the moment curve instead of balls of radius $\e$.
In Section \ref{sec3}, we prove Theorem \ref{thm_main_Lp_optimal} by showing the $L^p$ integrability of a random Cantor set on the moment curve. The Fourier decay of a random Cantor set was studied by Shmerkin and Suomala \cite{ShSu17} (see also \cite{Shsu18}). Ryou \cite{Ryou23} combined the ideas from \cite{ShSu17} with estimates on oscillatory integrals to obtain an estimate on the Fourier decay of a random Cantor set constructed on the parabola. We will use this result when $d=2$. When $d \geq 3$, we will use the result of Arkhipov, Chubarikov and Karatsuba \cite{ACK_book}, in addition to the ideas in \cite{Ryou23} and \cite{ShSu17}. \subsection{Acknowledgements} We would like to thank Alex Iosevich for many discussions that inspired this work. We would also like to thank Allan Greenleaf and K.S. Senthil Raani for helpful conversations. \section{Proof of Theorem \ref{thm_main_Lp_moment}}\label{sec2} \ In this section, we will give the proof of Theorem \ref{thm_main_Lp_moment}. The proof is a modification of the arguments in \cite{Guoetal23} and \cite{Raani14}. For $t\in \R$, let $(\Vec{e_1}(t),\dots, \Vec{e_d}(t))$ denote the Frenet coordinates along the moment curve at the point $\gamma(t)$. For $\e>0$, define an anisotropic $\e$-neighborhood of the moment curve at the point $\gamma(t)$ by \begin{align}\label{def_n_gamma} \Gamma_{\e,t}:=\{\gamma(t)+\e_1\Vec{e_1}(t)+\cdots+\e_d\Vec{e_d}(t): |\e_k|\leq \e^k, 1\leq k\leq d\}. \end{align} Observe that, by the assumption, for $\e>0$ small enough, we can then cover $ E$ by a finitely overlapping family of rectangular boxes $\{B_{\e,i}\}_{i=1}^{M_\e}$ with $M_\e \lesssim \e^{-\alpha}$ such that $B_{\e,i} = \Gamma_{\e,t_i}$ and the points $t_i$ are $\sim \e$-separated. Note that each $B_{\e,i}$ has side lengths $\e, \e^2, \cdots, \e^d$.
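The Frenet frame $(\Vec{e_1}(t),\dots,\Vec{e_d}(t))$ used in \eqref{def_n_gamma} can be obtained by applying Gram--Schmidt orthonormalization to the derivatives $\gamma'(t),\dots,\gamma^{(d)}(t)$, which are linearly independent for the moment curve. The following minimal numerical sketch (added for illustration only; the helper names `moment_curve_derivs` and `frenet_frame` are ours) computes this frame for $d=3$ and checks its orthonormality:

```python
import math

def moment_curve_derivs(t, d=3):
    # k-th derivative of gamma(t) = (t, t^2, ..., t^d), for k = 1..d;
    # the n-th coordinate of gamma^(k) is n!/(n-k)! * t^(n-k) for n >= k
    derivs = []
    for k in range(1, d + 1):
        row = []
        for n in range(1, d + 1):
            if n < k:
                row.append(0.0)
            else:
                coeff = math.factorial(n) / math.factorial(n - k)
                row.append(coeff * t ** (n - k))
        derivs.append(row)
    return derivs

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def frenet_frame(t, d=3):
    # Gram-Schmidt on gamma'(t), ..., gamma^(d)(t) yields e_1(t), ..., e_d(t)
    frame = []
    for v in moment_curve_derivs(t, d):
        w = list(v)
        for e in frame:
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        frame.append([wi / norm for wi in w])
    return frame

frame = frenet_frame(0.5)
# the resulting frame is orthonormal
for i, ei in enumerate(frame):
    for j, ej in enumerate(frame):
        target = 1.0 if i == j else 0.0
        assert abs(dot(ei, ej) - target) < 1e-9
print(frame)
```

A point of the box $\Gamma_{\e,t}$ is then $\gamma(t) + \sum_k \e_k \Vec{e_k}(t)$ with $|\e_k| \leq \e^k$, which is why the box has side lengths comparable to $\e, \e^2, \cdots, \e^d$.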
Let $\{\phi_i^{(1)}\}_{i=1}^{M_\e}$ be a partition of unity in the first variable $x_1$ such that each $\phi_i^{(1)}$ is supported on an interval of length $\sim \e$ whose center coincides with the first coordinate of the center of $B_{\e,i}$. Then, we choose a smooth function $\phi^{(2)}(x')$, where $x' \in \R^{d-1}$, such that $\phi^{(2)}(x') =1$ for $x' \in B(0,2)$, and let $\phi_i(x) = \phi_i^{(1)}(x_1)\phi^{(2)}(x')$. For each $i$, we define \begin{align*} f_{\e,i}=\mathcal{F}^{-1}\big(\widehat{f}\phi_i\big), \end{align*} where $\mathcal{F}^{-1}$ is the inverse Fourier transform. In other words, $\widehat{f_{\e,i}}=\widehat{f}\phi_{i}$. We multiply by the smooth function $\phi_i$ since $\widehat{f}$ is well defined as a tempered distribution. Let $CB_{\e,i}$ be the rectangular box with side lengths $C\e, C\e^2, \cdots, C\e^d$ and the same center as $B_{\e,i}$. Note that $\widehat{f_{\e,i}}$ is supported on $C B_{\e,i}$ for some constant $C$ independent of $i $ and $\e$, and also we have $f = \sum_{i}f_{\e,i}$. Let $\phi$ be a normalized smooth function supported on $B(0,1)$. We also let $D_\e$ be the diagonal matrix with diagonal entries $\e, \cdots, \e^d$ and $M_t$ be the matrix whose columns are $\Vec{e_1}(t),\cdots,\Vec{e_d}(t)$. For each $i$, define \begin{align*} \eta_{\e,i}(x)=\e^{-\frac{d^2+d}{2}}\phi(D_\e^{-1} M_{t_i}^{-1} x). \end{align*} Denoting $u=\widehat{f}$, we then define \begin{align*} u_\e=\sum\limits_{i=1}^{M_\e} \widehat{f_{\e,i}}\ast \eta_{\e,i}=\sum\limits_{i=1}^{M_\e} \left(u{\phi_{i}}\right)\ast \eta_{\e,i}. \end{align*} To prove the theorem, we need the following lemmas. \begin{lemma}\label{lemma_1} Assume that $f \in L^{p_\alpha}(\R^d)$.
For $0 < \e <1$, there exist $\{a_j\}_{j\in \Z}$ and $\{b_{j,\e}\}_{j\in \Z}$ such that \begin{align*} \Vert u_\e\Vert_2^2 \lesssim \e^{-\frac{d^2+d-2\alpha}{2}}\sum\limits_{j=-\infty}^{\infty} a_jb_{j,\e}, \end{align*} where $\sum_j|a_j|<\infty$, and for any fixed $j\in\Z$, $|b_{j,\e} | \lesssim \norm{f}_{p_\alpha}^2$ and $b_{j,\e}\to 0$, as $\e\to 0$. \end{lemma} \begin{lemma}\label{lemma_2} Assume that $f \in L^p(\R^d)$ for some $p \geq 2$. For any compactly supported smooth function $\psi:\R^d\to \R$, we have \begin{align*} |\langle u, \psi\rangle|^2 &=\lim\limits_{\e\to 0}|\langle u_\e, \psi\rangle|^2. \end{align*} \end{lemma} Assuming the above lemmas, let us prove Theorem \ref{thm_main_Lp_moment} first. \begin{proof}[Proof of Theorem \ref{thm_main_Lp_moment} using Lemmas \ref{lemma_1} and \ref{lemma_2}] As pointed out in Remark \ref{re_covnum}, the case $d=2$ is a direct consequence of the result of Senthil Raani \cite{Raani14}. Thus, we only consider the case $d \geq 3$. Also, we will only consider the case $p = p_\alpha$. If $f \in L^p(\R^d)$ for $p < p_\alpha$, we can convolve $f$ with a compactly supported smooth function and use Young's convolution inequality and Theorem \ref{thm_main_Lp_moment} with $p=p_\alpha$. For $\e>0$, let \begin{align*} E_\e:=\bigcup_{i=1}^{M_\e} \Gamma_{C\e, t_i}. \end{align*} We can choose a sufficiently large $C$ such that the support of $u_\e$ is contained in $E_\e$. Note that $|E_\e| \lesssim \e^{\frac{d^2+d-2\alpha}{2}}$. Let $\psi$ be a compactly supported smooth function on $\R^d$. Applying Lemmas \ref{lemma_1} and \ref{lemma_2}, we have \begin{align*} |\langle u, \psi\rangle|^2 &=\lim\limits_{\e\to 0}|\langle u_\e, \psi\rangle|^2\\ &\leq \lim\limits_{\e\to 0} \Vert u_\e\Vert_2^2\int_{E_{\e}}|\psi|^2\\ &\lesssim\Vert \psi\Vert_\infty^2\lim_{\e\to 0}\e^{-\frac{d^2+d-2\alpha}{2}}|E_{\e}|\sum_{j=-\infty}^\infty a_jb_{j,\e}\\ &=\Vert \psi\Vert_\infty^2\lim_{\e\to 0}\sum_{j=-\infty}^\infty a_jb_{j,\e}\\ &=0.
\end{align*} In the last equality, we used the dominated convergence theorem. Since this holds for any compactly supported smooth function $\psi$, we conclude that $u \equiv 0$. Thus, $f\equiv 0$ as desired. \end{proof} \subsection{ \texorpdfstring{$L^2$}{L2} estimate for \texorpdfstring{$u_\e$}{u epsilon}} In this section, we will prove Lemma \ref{lemma_1}. First, we will show that \begin{equation}\label{lem1_eq1} \norm{u_\e}_2^2 \lesssim_N \e^{-\frac{d^2+d-2\alpha}{2}} \sum_{j \in \Z} a_j b_{j,\e} \end{equation} for $ 0 < \e <1$, where $$ a_j=\min\{2^{-jN},1\} 2^{jd\cdot \frac{p_\alpha-2}{p_\alpha}} \qquad \text{and} \qquad b_{j,\e}=\bigg(\int_{A_{\e,j}}\bigg(\sum\limits_i|f_{\e,i}(x)|^{2}\bigg)^{\frac{p_\alpha}{2}} dx\bigg)^{\frac{2}{p_\alpha}}. $$ The regions $A_{\e,j}$ will be specified later. Then, we will show that $a_j$ and $b_{j,\e}$ satisfy the desired properties. \begin{proof}[Proof of \eqref{lem1_eq1}] Observe that by the construction, for each $i$, the support of $\widehat{f_{\e,i}}\ast \eta_{\e,i}$ is contained in $ CB_{\e,i}$ for some large constant $C$ which does not depend on $\e$ and $i$. Hence, the supports of $\widehat{f_{\e,i}}\ast \eta_{\e,i}$ have bounded overlaps. By Plancherel's theorem, we have \begin{align}\label{sum1} \norm{u_\e}_2^2 &=\int \left|\sum\limits_i \widehat{f_{\e,i}}\ast \eta_{\e,i}(\xi)\right|^2d\xi \nonumber\\ &\lesssim\sum\limits_i\int \big| \widehat{f_{\e,i}}\ast \eta_{\e,i}(\xi)\big|^2d\xi\nonumber \\ &=\sum\limits_i\int |f_{\e,i}(x)|^2|\widehat{\eta_{\e,i}}(x)\big|^2dx. \end{align} Recall that $\widetilde{B}_{\e,i}$ is the dual rectangular box of $B_{\e,i}$ (see Definition \ref{dualbox}). For each $1\leq i\leq M_\e$ and each $j\in \mathbb{Z}$, we let $A_{\e,i,j}=2^j\widetilde{B}_{\e,i} \setminus 2^{j-1}\widetilde{B}_{\e,i}$. Note that $\widehat{\eta_{\e,i}}$ decays rapidly outside of the dual box $\widetilde{B}_{\e,i}$ with side lengths $(\e^{-1},\dots, \e^{-d})$.
Hence, \eqref{sum1} is bounded by \begin{align*} &\lesssim\sum\limits_i\sum_{j\in \Z} \int_{A_{\e,i,j}} |f_{\e,i}(x)|^2|\widehat{\eta_{\e,i}}(x)\big|^2dx\\ &\lesssim\sum\limits_i\sum_{j\in \Z} \sup\limits_{y\in A_{\e,i,j}} \big|\widehat{\eta_{\e,i}}(y)\big|^2\int_{A_{\e,i,j}} |f_{\e,i}(x)|^2dx\\ &\lesssim_N\sum\limits_i\sum_{j\in \Z} \min\{2^{-jN},1\}\int_{A_{\e,i,j}} |f_{\e,i}(x)|^2dx, \end{align*} with $N\in \N$ large enough. For each $j\in\Z$, we let $$ A_{\e,j}=\bigcup_{i=1}^{M_\e} A_{\e,i,j}. $$ Invoking H\"older's inequality with exponent $\frac{p_\alpha}{2}$, $\Vert u_\e\Vert_2^2$ is dominated by \begin{align} &\sum\limits_{i}\sum_{j\in \Z} \min\{2^{-jN},1\}\int_{A_{\e,j}} |f_{\e,i}(x)|^2dx \nonumber \\ &=\sum_{j\in \Z}\min\{2^{-jN},1\} \int_{{A_{\e,j}}}\sum\limits_i|f_{\e,i}(x)|^2dx \nonumber \\ &\lesssim \sum_{j\in \Z}\min\{2^{-jN},1\} \bigg(\int_{A_{\e,j}}\bigg(\sum\limits_i|f_{\e,i}(x)|^{2}\bigg)^{\frac{p_\alpha}{2}} dx\bigg)^{\frac{2}{p_\alpha}} |{A_{\e,j}}|^{1-\frac{2}{p_\alpha}}. \label{eq_u_epsilon} \end{align} Since $M_\e\lesssim\e^{-\alpha}$, we have \begin{align*} |{A_{\e,j}}|\lesssim \e^{-\alpha}\prod_{l=1}^d2^j(\e^{-1})^{l}=\e^{-\frac{d^2+d}{2}-\alpha}2^{jd}, \end{align*} which implies that \begin{align*} |{A_{\e,j}}|^{1-\frac{2}{p_\alpha}}\lesssim \e^{-\frac{d^2+d-2\alpha}{2}}2^{jd\cdot \frac{p_\alpha-2}{p_\alpha}}. \end{align*} Combining with \eqref{eq_u_epsilon}, we obtain that \begin{align*} \Vert u_\e\Vert_2^2 &\lesssim_N \e^{-\frac{d^2+d-2\alpha}{2}} \sum_{j\in \Z}\min\{2^{-jN},1\} 2^{jd\cdot \frac{p_\alpha-2}{p_\alpha}}\bigg(\int_{A_{\e,j}}\bigg(\sum\limits_i|f_{\e,i}(x)|^{2}\bigg)^{\frac{p_\alpha}{2}} dx\bigg)^{\frac{2}{p_\alpha}}\\ &=\e^{-\frac{d^2+d-2\alpha}{2}}\sum\limits_{j\in \Z}a_j b_{j,\e}. \end{align*} \end{proof} In order to prove the properties of $b_{j,\e}$, we need the following two lemmas.
\begin{lemma}\label{lemma_2.5} If $p \geq 2$, we have \begin{equation*} \norm{\bigg(\sum_{i=1}^{M_\e} |f_{\e,i}|^2\bigg)^{1/2}}_{p} \lesssim \norm{f}_{p} \end{equation*} uniformly in $0 < \e< 1$. \end{lemma} \begin{lemma}\label{lemma_3} Assume that $f \in L^p(\R^d)$ for some $p >2$. For fixed $j \in \Z$, we have \begin{equation*} \bigg(\int_{|x| \geq 2^j \e^{-1} }\bigg(\sum\limits_i|f_{\e,i}(x)|^{2}\bigg)^{\frac{p}{2}} dx\bigg)^{\frac{1}{p}} \longrightarrow 0, \qquad \text{as}\ \e \rightarrow 0. \end{equation*} \end{lemma} Lemma \ref{lemma_2.5} is a consequence of Rubio de Francia's inequality, which is Theorem 1.2 in \cite{Rubio85}. \begin{proof}[Proof of Lemma \ref{lemma_2.5}] Observe that the Fourier transform of each $f_{\e,i}$ in the first variable is supported in an interval of length $\sim \e$, for all $1\leq i\leq M_\e$. In \cite{Rubio85}, the proof was reduced to the case of the smooth operator $G$ associated to a sequence of finitely overlapping intervals (see Definition 3.3 in \cite{Rubio85}). Thus, we can apply Rubio de Francia's result to our setting in the first variable. Then, we get \begin{align*} \int \bigg(\sum_i|f_{\e,i}(x_1,x')|^2\bigg)^\frac{p}{2} dx_1 \leq C_p \int |f(x_1,x')|^p dx_1, \qquad \forall x'\in \R^{d-1}. \end{align*} Integrating in the variables $x'$, we obtain that \begin{align*} \bigg\Vert\bigg(\sum_{i=1}^{M_\e} |f_{\e,i}|^2\bigg)^{1/2}\bigg\Vert_{p}\lesssim ||f||_{p}. \end{align*} \end{proof} To prove Lemma \ref{lemma_3}, we will use the weighted version of Rubio de Francia's inequality, which is Theorem 6.1 in \cite{Rubio85}. Recall that a \textit{weight} is a nonnegative locally integrable function on $\R^d$ that takes values in $(0,\infty)$ almost everywhere. \begin{definition} Let $1<p<\infty$. A weight $w$ is said to be of class $A_p$ if the $A_p$ Muckenhoupt characteristic constant $[w]_{A_p}$ of $w$ is finite, i.e.
\begin{align*} [w]_{A_p}=\sup_{Q \text{ cubes in }\R^d}\bigg(\frac{1}{|Q|}\int_Qw(x)dx\bigg)\bigg(\frac{1}{|Q|}\int_Qw(x)^{-\frac{1}{p-1}}dx\bigg)^{p-1}<\infty. \end{align*} \end{definition} For the properties of the Muckenhoupt weights, see Chapter 7 in \cite{Grafakosbook}. \begin{proof}[Proof of Lemma \ref{lemma_3}] Fix $j\in \Z$, and let $\delta$ be a number such that $0< \delta<\frac{p}{2}-1$. For $x\in \R^d$, we write $x=(x_1,x')$ where $x'=(x_2,\dots, x_d)\in \R^{d-1}$. Let $w:\R^d\to (0,\infty)$ be a weight in the variable $x_1$ defined by \begin{align*} w(x_1,x') = \left\{ \begin{matrix} \left( \frac{|x_1|}{ 2^j \e^{-1}} \right)^{p/2 - 1 -\delta } & \text{if} \ |x_1 | \leq 2^j \e^{-1} \ \text{and} \ |x'| \leq 2^{j} \e^{-1},\\ 1 & \text{otherwise}. \end{matrix}\right. \end{align*} We can show that $[w(\cdot,x')]_{A_{p/2}}\lesssim 1$ uniformly in $x'$, $j$, and $\e$. Indeed, if $|x'|\geq 2^j\e^{-1}$, then $w(x_1,x')=1$ and $[w(\cdot,x')]_{A_{p/2}}=1$. If $|x'|\leq 2^j\e^{-1}$, then \begin{align*} w(x_1,x')=\min \bigg\{ \left(\frac{|x_1|}{ 2^j \e^{-1}} \right)^{p/2 - 1 -\delta },1\bigg\}. \end{align*} Observe that $w_1(x_1)=|x_1|^{p/2 - 1 -\delta }$ is an $A_{p/2}$ weight, and $[w_1(\lambda x_1)]_{A_{p/2}} = [w_1]_{A_{p/2}}$ for any $\lambda>0$. For example, see Proposition 7.1.5 and Example 7.1.7 in \cite{Grafakosbook}. Since $\delta < p/2-1$, $w(\cdot, x')$ is also an $A_{p/2}$ weight, with \begin{align*} [w(\cdot,x')]_{A_{p/2}}=[\min\{ w_1,1\}]_{A_{p/2}}\lesssim_p [w_1]_{A_{p/2}} \lesssim 1. \end{align*} Recall that the Fourier transform of $f_{\e,i}$ in the first variable is supported in an interval of length $\sim \e$, for all $1\leq i\leq M_\e$, and those intervals overlap at most finitely many times.
Since $p > 2$ and $[w (\cdot, x')]_{A_{p/2}} \lesssim 1$ uniformly in $x'$, applying the weighted version of Rubio de Francia's inequality, we obtain that \[ \int \bigg(\sum\limits_{i=1}^{M_\e}|f_{\e,i}(x_1, x')|^{2}\bigg)^{\frac{p}{2}} w(x_1, x') dx_1\lesssim \int |\sum_if_{\e,i}(x_1,x')|^{p} w(x_1, x') dx_1. \] Integrating the inequality above with respect to $x'$, we get \[ \bigg(\int_{|x| \gtrsim 2^j \e^{-1} }\bigg(\sum\limits_i|f_{\e,i}(x)|^{2}\bigg)^{\frac{p}{2}} dx\bigg)^{\frac{1}{p}}\lesssim \norm{\left( \sum_i |f_{\e,i} |^2\right)^{1/2}}_{L^{p}(w)} \lesssim \norm{f}_{L^{p}(w)}. \] Now, we have \begin{align*} \norm{f}_{L^{p}(w)}^{p} &= \int_{|x| \geq (2^j \e^{-1})^{1/2}} |f|^{p}w(x)dx+\int_{|x| \leq (2^j \e^{-1})^{1/2}} |f|^{p}w(x)dx\\ &\leq \int_{|x| \geq (2^j \e^{-1})^{1/2}} |f|^{p}dx + (2^{-j}\e)^{\frac{1}{2}(\frac{p}{2}-1-\delta)} \int |f|^{p} dx. \end{align*} Since $\norm{f}_p < \infty$, we establish the result by letting $\e \rightarrow 0$. \end{proof} Now, we are ready to prove Lemma \ref{lemma_1}. \begin{proof}[Proof of Lemma \ref{lemma_1}] We have established \eqref{lem1_eq1}. Thus, it suffices to show that $a_j$ and $b_{j,\e}$ satisfy the desired properties. For sufficiently large $N$, we have $\sum_{j} |a_j| < \infty$. By Lemma \ref{lemma_2.5}, we get $|b_{j,\e} | \lesssim \norm{f}_{p_\alpha}^2$. Now we observe that the ball of radius $2^{j-2} \e^{-1}$ centered at the origin is contained in each dilated dual box $2^{j-1}\widetilde{B}_{\e,i}$. Thus for each $j\in \Z$, we have \begin{align*} {A}_{\e,j}\subseteq \R^d\setminus B(0,2^{j-2}\e^{-1}). \end{align*} Therefore, Lemma \ref{lemma_3} implies that $b_{j,\e} \rightarrow 0$ as $\e \rightarrow 0$. \end{proof} \subsection{Weak convergence of \texorpdfstring{$\{u_\e\}_{\e>0}$}{u epsilon}} In this section we will prove Lemma \ref{lemma_2}. \begin{proof}[Proof of Lemma \ref{lemma_2}] We will show that \[ \lim_{\e \rightarrow 0} \langle u - u_\e , \psi \rangle =0.
\] Note that by Plancherel's theorem, we can write \begin{align*} |\langle u - u_\e , \psi \rangle| &=|\langle \widehat{f}-\sum_i\widehat{f_{\e,i}}\ast\eta_{\e,i} , \psi \rangle|=\big|\langle \sum_if_{\e,i}(1 -\widehat{\eta_{\e,i}} ), \widehat{\psi} \rangle\big|. \end{align*} Thus, we have \begin{align*} |\langle u - u_\e , \psi \rangle| &\leq \int_{\{|x|\leq \e^{-\frac{1}{4d}}\}} \bigg(\sum_i |f_{\e,i}(x) (\widehat{\eta_{\e,i}}(x) -1)| \bigg)|\widehat{\psi}(x)|dx\\ &\hspace{1cm}+\int_{\{|x|>\e^{-\frac{1}{4d}}\}}\bigg(\sum_i |f_{\e,i}(x) (\widehat{\eta_{\e,i}}(x) -1)| \bigg)|\widehat{\psi}(x)|dx\\ &=I+II. \end{align*} First, we estimate $I$. By the Cauchy--Schwarz inequality, we can bound $I$ by \begin{align}\label{eq_estimate_I} I\lesssim \int_{\{|x|\leq \e^{-\frac{1}{4d}}\}}\bigg(\sum_i|f_{\e,i}(x)|^2\bigg)^{1/2}\bigg(\sum_i|\widehat{\eta_{\e,i}}(x)-1|^2\bigg)^{1/2}|\widehat{\psi}(x)|dx. \end{align} Observe that $\widehat{\eta_{\e,i}}(0)=1$. Since $|x|\leq \e^{-\frac{1}{4d}}$, we have that for each $1\leq i\leq M_\e$, \begin{align*} |\widehat{\eta_{\e,i}}(x)-1| =|\widehat{\eta_{\e,i}}(x)-\widehat{\eta_{\e,i}}(0)| \lesssim \e^{1- \frac{1}{4d}}. \end{align*} Therefore, using the fact that $M_\e\lesssim \e^{-\alpha } \leq \e^{-1}$, we have \begin{align*} \bigg(\sum_{i=1}^{M_\e}|\widehat{\eta_{\e,i}}(x)-1|^2\bigg)^{1/2}\lesssim \e^{\frac{1}{2} -\frac{1}{4d}}. \end{align*} We plug this into \eqref{eq_estimate_I}, and then apply H\"older's inequality with exponent $p$.
Then we get \begin{align*} I&\lesssim \e^{\frac{1}{2} -\frac{1}{4d}}||\widehat{\psi}||_{\infty}\,\int_{\{|x|\leq \e^{-\frac{1}{4d}}\}}\bigg(\sum_i|f_{\e,i}(x)|^2\bigg)^{1/2}dx\\ &\lesssim \e^{\frac{1}{2} -\frac{1}{4d}}||\widehat{\psi}||_{\infty}\,\bigg(\int \bigg(\sum_i|f_{\e,i}(x)|^2\bigg)^{p/2}dx\bigg)^{1/p}\cdot \bigg(\int_{\{|x|\leq \e^{-\frac{1}{4d}}\}} 1 dx\bigg)^{1-\frac{1}{p}}\\ &=\e^{\frac{1}{4} ( 1-\frac{1}{d}+\frac{1}{p})}||\widehat{\psi}||_{\infty}\,\bigg\Vert\bigg(\sum_{i=1}^{M_\e} |f_{\e,i}|^2\bigg)^{1/2}\bigg\Vert_{p}. \end{align*} By Lemma \ref{lemma_2.5}, we obtain that \begin{align}\label{eq_estimate_I_result} I \lesssim \e^{\frac{1}{4} ( 1-\frac{1}{d}+\frac{1}{p})} \, ||\widehat{\psi}||_{\infty}\,||f||_{p}. \end{align} Now we bound the second integral. Applying H\"older's inequality with exponent $p$, we have \begin{align*} II &\lesssim \sum_{i=1}^{M_\e} \int_{\{|x|>\e^{-\frac{1}{4d}}\}} |f_{\e,i}(x)||\widehat{\psi}(x)|dx\\ &\lesssim \sum_{i=1}^{M_\e} \bigg(\int |f_{\e,i}(x)|^{p}dx\bigg)^{1/p}\cdot \bigg(\int_{\{|x|>\e^{-\frac{1}{4d}}\}} |\widehat{\psi}(x)|^{p'}dx\bigg)^{1/p'}. \end{align*} Since $\widehat{\psi}$ is a Schwartz function, we have \begin{align*} |\widehat{\psi}(x)|\lesssim (1+|x|)^{-8d},\qquad \forall x\in \R^d. \end{align*} Also, Lemma \ref{lemma_2.5} implies that $||f_{\e,i}||_{L^{p}(\R^d)}\lesssim||f||_{L^{p}(\R^d)}$ uniformly in $\e$ and $i$.
Thus, we can bound $II$ by \begin{align}\label{eq_estimate_II_result} II&\lesssim \sum_{i=1}^{M_\e} ||f_{\e,i}||_{p}\cdot \bigg(\int_{\{|x|>\e^{-\frac{1}{4d}}\}} |x|^{-8d p'}dx\bigg)^{1/p'} \nonumber \\ &\lesssim \sum_{i=1}^{M_\e} ||f_{\e,i}||_{p} \e^{2- \frac{d-1}{4d}(1-\frac{1}{p})} \nonumber \\ &\lesssim \e^{1- \frac{d-1}{4d}(1-\frac{1}{p})} \,||f||_{p} \nonumber \\ &\lesssim \e^{\frac{1}{4}(3+\frac{1}{p})} \,||f||_{p}. \end{align} Combining \eqref{eq_estimate_I_result} and \eqref{eq_estimate_II_result}, we have \begin{align*} |\langle u-u_\e,\psi\rangle| \lesssim \e^{\frac{1}{4} ( 1-\frac{1}{d}+\frac{1}{p})}\,\max(||\widehat{\psi}||_{\infty},1)\,||f||_{p}. \end{align*} Thus, $\lim\limits_{\e\to 0} \langle u-u_\e,\psi\rangle=0$ as desired. \end{proof} \section{Optimality of the range of \texorpdfstring{$p$}{p} in \texorpdfstring{$L^p$}{Lp}-integrability}\label{sec3} We will prove Theorem \ref{thm_main_Lp_optimal} by a probabilistic method. An example with $\alpha =1$ was considered in Theorem 1.3 in \cite{ACK_book}. Thus, we assume that $0 < \alpha <1$ throughout this section. We abbreviate ``almost surely'' as a.s. Let us consider a nondecreasing sequence $\{m_j\}_{j \in \N}$ such that $m_j \geq 2$ and \begin{equation*} m_{j+1} \lesssim_\e (m_1 m_2 \cdots m_j)^{\e}, \qquad \forall \e >0. \end{equation*} For each $j$, we write $M_j = m_1 m_2 \cdots m_j$. The collection of $M_j^{-1}$-intervals $\cI_j$ is defined by \[ \{ M_j^{-1} (m + [0,1)) : 0 \leq m \leq M_j-1 \}. \] We consider a sequence of random functions $\mu_j$ which satisfies the following conditions for some deterministic nondecreasing sequence $\{\beta_j \}_{j \in \N}$: \begin{itemize} \item $\mu_0 = \mathbf{1}_{[0,1]}$. \item $\mu_j = \beta_j \mathbf{1}_{E_j}$ where $E_j$ is a union of intervals in $\cI_j$. \item $\mathbb{E}(\mu_{j+1} (x) |E_j) = \mu_j(x)$ for all $x \in [0,1]$. \item The sets $I_j \cap E_{j+1}$ are chosen independently for each $I_j \in \mathcal{I}_j$ conditioned on $E_j$.
\end{itemize} We identify the functions $\mu_j$ with the measures $\mu_j dx$ and write $\norm{\mu_j} = \mu_j([0,1])$. Let $\alpha$ be a number such that \begin{equation}\label{cond_b_jM_j} 1-\alpha = \lim_{j \rightarrow \infty} \frac{\log(\beta_j)}{\log(M_j)}. \end{equation} Note that this implies that $M_j^{1-\alpha-\e} \lesssim_\e \beta_j \lesssim_\e M_j^{1-\alpha +\e}$. Recall that $\gamma_d (t) = (t, t^2, \cdots, t^d)$ and $t \in [0,1]$. We define a measure $\nu_j$ on $\Gamma_d$ by \[ \int f(x_1, x_2, \cdots, x_d) d\nu_j = \int f(\gamma_d(t)) d\mu_j(t). \] Then a.s. the measure $\mu_j$ converges weakly to a measure $\mu$ supported on a subset of $[0,1]$ and a.s. $\nu_j$ converges weakly to a measure $\nu $ supported on a subset of $\Gamma_d$. For more properties of the measure $\mu$, see \cite{ShSu17}. \begin{prop}\label{Prop_optimal} If $p > p_\alpha$, then a.s. $\norm{\widehat{\nu}}_p \lesssim 1$. \end{prop} Throughout this section, when we say an inequality holds a.s., the corresponding implicit constant may depend on the limiting measure $\nu$. However, such an implicit constant exists with probability $1$. Theorem \ref{thm_main_Lp_optimal} follows from Proposition \ref{Prop_optimal}. \begin{proof}[Proof of Theorem \ref{thm_main_Lp_optimal} using Proposition \ref{Prop_optimal}] For a sufficiently large number $m \in \N $, let $m_j =m$ for each $j \in \N$ so that $M_j = m^j$ and let $\beta_j = M_j^{1-\alpha}$. For each $j$, we choose a subset $S_j$ of $\{ 1, 2, \cdots, m\}$ of size $\sim m^\alpha$ such that $|S_{j_1}| \cdot |S_{j_1+1}|\cdots |S_{j_2}| \sim m^{(j_2-j_1+1)\alpha}$ for any $j_2 >j_1$. We choose each $I_j \cap E_{j+1}$ by a random translation of $M_{j+1}^{-1}(S_{j+1}+[0,1))$ as in Section 8 of \cite{LW18}. More specifically, we randomly choose a number $n(I_j)$ such that $\mathbb{P}(n(I_j) = n) =m^{-1}$ for each $I_j$ and $1 \leq n \leq m$.
We let $$S_{j+1,n(I_j)} = S_{j+1}+n(I_j) \pmod m.$$ Then, $I_j \cap E_{j+1} = \ell(I_j) + M_{j+1}^{-1} (S_{j+1,n(I_j)} + [0,1))$ where $\ell(I_j)$ is the left endpoint of $I_j$. Note that $\nu$ is an Ahlfors--David regular $\alpha$-measure, i.e. $ \nu(B(x,r)) \sim r^\alpha $ for all $ x \in \spt \nu$ and $ 0< r < \diam(\spt \nu)$. Thus, $N(\spt {\nu}, \e) \sim \e^{-\alpha}$ for all $ 0< \e <1$. By Proposition \ref{Prop_optimal}, there exists $\nu$ such that $\norm{\widehat{\nu}}_p \lesssim 1$. Therefore, $\widehat{\nu}$ is the desired function $f$. \end{proof} When $d=2$, an estimate on the Fourier decay of the measure $\nu$ was studied in \cite{Ryou23}. \begin{proof}[Proof of Proposition \ref{Prop_optimal} when $d =2$] By (2) in Theorem 1.2 of \cite{Ryou23}, a.s. we have \[ |\widehat{\nu}(\xi)| \lesssim_\e (1+|\xi|)^{-\alpha/2+\e}, \qquad \forall \xi \in \mathbb{R}^2. \] Hence, $\norm{\widehat{\nu}}_p \lesssim 1$ for $p > p_\alpha$. \end{proof} Let us turn to the proof of Proposition \ref{Prop_optimal} in higher dimensions. First, we will consider the case $d \geq 4$. For $s_1\geq 0$ and $1 \leq 2^{s_2} \leq M_j2^{s_1}$, we define the sets \[ \Omega_{j} (s_1):=\{\xi \in \R^d : \exists t \in E_{j} \ \text{such that} \ |\langle \gamma^{(k)} (t), \xi \rangle | \leq (M_j2^{s_1})^k, \ \forall 1 \leq k \leq d \}, \] and \begin{align*}\label{def_Os1s2} \Omega_{j} (s_1,s_2):=\{\xi \in \Omega_j(s_1) : \exists t \in E_{j} \ \text{such that} \ |\langle \gamma^{(k)} (t), \xi \rangle | \leq (M_j2^{s_1}) 2^{s_2(k-1)}, \ \forall 2 \leq k \leq d \}. \end{align*} \begin{lemma}\label{lem_mup_meas} For any $\e >0$, a.s., we have the following estimates. \begin{itemize} \item[(i)] If $1\leq 2^{s_2 } \leq (2^{s_1} M_j)^{\frac{1}{d-1}}$, then \begin{equation}\label{eq_Omega_meas1} |\Omega_j(s_1,s_2) | \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2(\frac{d(d-1)}{2}+\alpha d)} 2^{s_1\frac{d(1-\alpha)}{d-1}}.
\end{equation} \item[(ii)] If $(2^{s_1} M_j)^{\frac{1}{d-1}} \leq 2^{s_2 } \leq 2^{s_1 } M_j $, then \begin{equation}\label{eq_Omega_meas2} |\Omega_j(s_1,s_2)| \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2} } M_j^\alpha 2^{s_1} ( M_j 2^{s_1-s_2})^{\frac{\alpha}{d-2}}2^{s_1\frac{1-\alpha}{d-1}}. \end{equation} \end{itemize} \end{lemma} Let \[ \widetilde{\Omega}_{j+1}(s_1,s_2) = \begin{cases} \Omega_{j+1}(s_1,s_2) \backslash (\Omega_{j+1}(s_1,s_2-1) \cup \Omega_{j+1}(s_1-1))& \text{if } \ s_1, s_2 \neq 0,\\ \Omega_{j+1}(s_1,0) \backslash \Omega_{j+1}(s_1-1)& \text{if } \ s_1 \neq 0, s_2=0,\\ \Omega_{j+1}(0,s_2) \backslash \Omega_{j+1}(0,s_2-1)& \text{if } \ s_1 = 0, s_2\neq 0,\\ \Omega_{j+1}(0,0) & \text{if } \ s_1= s_2=0. \end{cases} \] \begin{lemma}\label{lem_mup_osci} If $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$, for any $\e >0$, a.s. we have \begin{equation}\label{Omega_osci} |\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) | \lesssim_\e(M_j 2^{s_1})^\e M_{j}^{-\frac{\alpha}{2}} 2^{-s_1(1-\frac{\alpha }{2})-s_2\frac{\alpha}{2}}. \end{equation} \end{lemma} \begin{remark} When $\alpha=1$, for each fixed $\xi \in \R^d$, it suffices to consider one oscillatory integral $$ \int_0^1 e(\xi \cdot \gamma_d(t))dt. $$ However, when $0 < \alpha <1$, $\widehat{\nu}_j (\xi)$ is a summation of oscillatory integrals of the form $$ \beta_{j}\int_{I_j} e(\xi \cdot \gamma_d(t))dt $$ where $I_j \in \mathcal{I}_j $ and $I_j \subseteq E_j $. Therefore, we need to show that the summation of oscillatory integrals has a good upper bound; this is why we decompose $\Omega_j(s_1)$ further into $\widetilde{\Omega}_j(s_1,s_2)$. \end{remark} We first prove Proposition \ref{Prop_optimal} using Lemmas \ref{lem_mup_meas} and \ref{lem_mup_osci}. The proofs of the lemmas will be given later. \begin{proof}[Proof of Proposition \ref{Prop_optimal} when $d \geq 4$] It suffices to show that $\norm{\widehat{\nu}_n}_p \lesssim 1 $ uniformly in $n \in \N$.
By letting $n \rightarrow \infty$, we have $\norm{\widehat{\nu}}_p \lesssim 1$. By the triangle inequality, we get \[ \norm{\widehat{\nu}_n }_p \lesssim \sum_{ 1 \leq j \leq n-1 } \norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p +1. \] For each $j$, we have \begin{equation} \label{eq_v_j} \begin{split} \norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p^p &\leq \sum_{s_1\geq 0}\bigg(\sum_{ 1 \leq 2^{s_2} \leq (2^{s_1}M_{j+1})^{\frac{1}{d-1}} }\int_{\widetilde{\Omega}_{j+1}(s_1,s_2)} |\widehat{\nu}_{j+1} - \widehat{\nu}_j|^p d\xi\bigg)\\ &\hspace{1cm}+ \sum_{s_1\geq 0}\bigg(\sum_{ (2^{s_1}M_{j+1})^{\frac{1}{d-1}} \leq 2^{s_2} \leq 2^{s_1}M_{j+1}}\int_{\widetilde{\Omega}_{j+1}(s_1,s_2)} |\widehat{\nu}_{j+1} - \widehat{\nu}_j|^p d\xi\bigg)\\ &= \mathcal{J}_1+\mathcal{J}_2. \end{split} \end{equation} First, we estimate the sum $\mathcal{J}_1$. Recall that $m_{j+1} \lesssim_\e M_j^\e$. For fixed $s_1\geq 0$, since $1 \leq 2^{s_2} \leq (2^{s_1} M_{j+1})^{\frac{1}{d-1}}$, Lemmas \ref{lem_mup_meas} and \ref{lem_mup_osci} imply that each integral in the sum $\mathcal{J}_1$ can be bounded by \begin{align*} (M_j 2^{s_1})^\e M_j^{d-\frac{\alpha p}{2}}2^{s_1(\frac{\alpha-2}{2}p+d+\frac{d(1-\alpha)}{d-1})}2^{s_2(-\frac{\alpha p}{2}+\frac{d(d-1)}{2}+\alpha d)}. \end{align*} Denote by $p(s_2,\mathcal{J}_1)$ the exponent of $2^{s_2}$. Then, we can check that the sum over $1 \leq 2^{s_2} \leq (2^{s_1} M_{j+1})^{\frac{1}{d-1}}$ is bounded by \begin{align}\label{sum_s2_1} &\lesssim_\e \begin{cases} (M_j 2^{s_1})^\e M_{j}^{-\frac{\alpha p}{2}+d}2^{s_1(\frac{\alpha-2}{2}p+d+\frac{d(1-\alpha)}{d-1})} &\text{ if } p(s_2,\mathcal{J}_1)>0,\\ (M_j 2^{s_1})^\e M_{j}^{ \frac{d}{2(d-1)}(-\alpha p +3(d-1) +2\alpha)}2^{\frac{s_1}{2(d-1)}(3d(d-1)+2d-\alpha p -(2-\alpha)(d-1)p)}&\text{ if } p(s_2,\mathcal{J}_1)\leq 0. \end{cases} \end{align} In either case, the exponent of $2^{s_1}$ is negative for sufficiently small $\e>0$.
When $p(s_2,\mathcal{J}_1) >0$, the exponent of $2^{s_1}$ in \eqref{sum_s2_1} except $\e$ is \[ \frac{(\alpha-2)p}{2} + d +\frac{d(1-\alpha)}{d-1} < -\frac{p}{2} +d + \frac{d}{d-1}<\frac{d^2}{d-1}-\frac{d(d+1)}{2}<0, \] since $p >p_\alpha\geq \frac{d(d+1)}{2}$. When $p(s_2,\mathcal{J}_1) \leq 0$, we can rewrite the exponent of $2^{s_1}$ without $\e$ as \[ \begin{split} \frac{d}{2(d-1)}(-\alpha p + 3(d-1)+2\alpha) + \frac{1-\alpha}{d-1} (-p(d-1)+d)={\Romannum{1}}+ {\Romannum{2}}. \end{split} \] It is easy to see that ${\Romannum{2}}$ is negative. For ${\Romannum{1}}$, by the assumption that $d \geq 4$, $p >p_\alpha$, and $0<\alpha<1$, we can check that \begin{equation}\label{eq_rom2} {\Romannum{1}}< \frac{d}{4(d-1)} (-d^2+5d-6+2\alpha)\leq -\frac{1}{4}d(d-4) \leq 0. \end{equation} Since the exponents of $2^{s_1}$ in \eqref{sum_s2_1} are negative for sufficiently small $\e >0$, we have \begin{align}\label{eq_sum_J1} \mathcal{J}_1\lesssim_\e M_{j}^{\e-\frac{\alpha p}{2}+d}+M_{j}^{\e+ \frac{d}{2(d-1)}(-\alpha p +3(d-1) +2\alpha)}. \end{align} To estimate the sum $\mathcal{J}_2$, we fix $s_1\geq 0$. Since $ (2^{s_1} M_{j+1})^{\frac{1}{d-1}} \leq 2^{s_2} \leq 2^{s_1} M_{j+1}$, we apply Lemmas \ref{lem_mup_meas} and \ref{lem_mup_osci} to bound each integral in the sum by a constant multiple of \begin{align}\label{eq_est_sumJ2} (M_j 2^{s_1})^\e M_{j}^{-\frac{\alpha p}{2}+d+\frac{\alpha (d-1)}{(d-2)}}2^{s_1(\frac{(\alpha-2)p}{2}+d+1+\frac{\alpha}{d-2}+\frac{1-\alpha}{d-1})}2^{s_2(-\frac{\alpha p}{2}+\frac{d(d-1)}{2}-\frac{\alpha}{d-2})}. \end{align} Let $p(s_2,\mathcal{J}_2)$ be the exponent of $2^{s_2}$ in \eqref{eq_est_sumJ2}.
The sum over $s_2$ with $ (2^{s_1} M_{j+1})^{\frac{1}{d-1}} \leq 2^{s_2} \leq 2^{s_1} M_{j+1}$ is bounded by
\begin{align}\label{sum_s2_2}
&\lesssim_\e
\begin{cases}
(M_j 2^{s_1})^\e M_{j}^{ \frac{d}{2(d-1)}(-\alpha p +3(d-1) +2\alpha)}2^{\frac{s_1}{2(d-1)}(3d(d-1)+2d-\alpha p -(2-\alpha)(d-1)p)} &\text{ if } p(s_2,\mathcal{J}_2)\leq 0,\\
(M_j 2^{s_1})^\e M_{j}^{-\alpha p+\frac{d(d+1)}{2}+\alpha}2^{s_1(-p+\frac{d(d+1)}{2}+1+\frac{1-\alpha}{d-1})} &\text{ if } p(s_2,\mathcal{J}_2)> 0.
\end{cases}
\end{align}
When $p(s_2,\mathcal{J}_2)\leq 0$, the exponent of $2^{s_1}$, apart from $\e$, in \eqref{sum_s2_2} is negative by the previous calculation. If $p(s_2, \mathcal{J}_2)> 0$, the exponent of $2^{s_1}$, apart from $\e$, in \eqref{sum_s2_2} is
\[
-p+\frac{d(d+1)}{2} + 1+\frac{1-\alpha}{d-1} < (1-\alpha)\left( \frac{1}{d-1} -\frac{d(d+1)}{2\alpha} \right) <0.
\]
Therefore, we can bound $\mathcal{J}_2$ by
\begin{align}\label{eq_sum_J2}
\lesssim_\e M_{j}^{\e+ \frac{d}{2(d-1)}(-\alpha p +3(d-1) +2\alpha)}+M_{j}^{\e-\alpha p+\frac{d(d+1)}{2}+\alpha}.
\end{align}
Plugging \eqref{eq_sum_J1} and \eqref{eq_sum_J2} into \eqref{eq_v_j}, we obtain that
\[
\norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p^p\lesssim_\e M_j^{\e -\frac{\alpha p}{2}+d } + M_j^{\e -\alpha p + \frac{d(d+1)}{2}+\alpha} +M_j^{\e+ \frac{d}{2(d-1)}(-\alpha p +3(d-1) +2\alpha)} .
\]
Since $p >p_\alpha$, all the exponents of $M_j$ are negative, provided that $\e>0$ is small enough. In particular, the exponent of $M_j$ in the last term is negative since \eqref{eq_rom2} holds when $ d\geq 4$. By summing over $j$, we have
\[
\norm{\widehat{\nu}_n }_p \lesssim \sum_{ 1 \leq j \leq n-1 } \norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p +1 \lesssim_p 1.
\]
\end{proof}
When $d=3$, \eqref{eq_rom2} does not hold. Thus, we will consider a slightly different decomposition of $\Omega_j(s_1)$.
For each $1\leq 2^{s_2}\leq M_j 2^{s_1}$, we define
\[
\Omega_{3,j} (s_1,s_2):=\{\xi \in \Omega_j(s_1) : \exists t \in E_{j} \ \text{such that} \ |\langle \gamma^{(k)} (t), \xi \rangle | \leq (M_j2^{s_1}) 2^{s_2(k-1)}, \ \forall 1 \leq k \leq 3 \}.
\]
The definition of $\Omega_{3,j}(s_1,s_2)$ requires the existence of $t$ such that
\begin{equation}\label{Mj2s1s2}
|\langle \gamma^{(k)} (t), \xi \rangle | \leq (M_j 2^{s_1}) 2^{s_2(k-1)} ,
\end{equation}
for all $1 \leq k \leq 3$. However, in the definition of $\Omega_j(s_1,s_2)$, the number $t$ such that \eqref{Mj2s1s2} holds with $k=1$ and the number $t'$ such that \eqref{Mj2s1s2} holds with $k=2,3$ do not need to be the same. Hence, $\Omega_{3,j}(s_1,s_2)$ could be strictly contained in $\Omega_j(s_1,s_2)$ for some $s_2$. Now let us introduce the estimates for the case $d=3$.
\begin{lemma}\label{lem_mup_meas_d=3}
For any $\e >0$, a.s., we have
\begin{equation}\label{Omega_meas_3}
|\Omega_{3,j}(s_1,s_2)| \lesssim_\e M_j^\e(M_j2^{s_1})^3 2^{s_1(1-\alpha)+s_2( 3 +\alpha)} .
\end{equation}
\end{lemma}
Let
\[
\widetilde{\Omega}_{3,j+1}(s_1,s_2) =
\begin{cases}
\Omega_{3,j+1}(s_1,s_2) \backslash (\Omega_{3,j+1}(s_1,s_2-1) \cup \Omega_{j+1}(s_1-1))& \text{if } \ s_1, s_2 \neq 0,\\
\Omega_{3,j+1}(s_1,0) \backslash \Omega_{j+1}(s_1-1)& \text{if } \ s_1 \neq 0, s_2=0,\\
\Omega_{3,j+1}(0,s_2) \backslash \Omega_{3, j+1}(0,s_2-1)& \text{if } \ s_1 = 0, s_2 \neq 0,\\
\Omega_{3,j+1}(0,0) & \text{if } \ s_1= s_2=0.
\end{cases}
\]
\begin{lemma}\label{lem_mup_osci_d=3}
For any $\e >0$, if $\xi \in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$, a.s. we have
\begin{equation}\label{Omega_osci2}
|\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) | \lesssim_\e (M_j 2^{s_1})^\e M_j^{-\frac{\alpha}{2}} 2^{-s_1(1-\frac{\alpha}{2})} 2^{-s_2\frac{\alpha}{2}}.
\end{equation}
\end{lemma}
Equipped with these two lemmas, we can prove Proposition \ref{Prop_optimal} when $d=3$.
\begin{proof}[Proof of Proposition \ref{Prop_optimal} when $d = 3$]
Similarly, it suffices to show that for any $n\geq 1$,
\[
\norm{\widehat{\nu}_n }_p \lesssim \sum_{ 1 \leq j \leq n-1 } \norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p +1\lesssim 1.
\]
For fixed $j\geq 1$, we have
\begin{align}\label{eq_v_j_p}
\norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p^p \leq \sum_{\substack{s_1 \geq 0 \\ 1 \leq 2^{s_2} \leq 2^{s_1} M_j }}\int_{\widetilde{\Omega}_{3,j+1}(s_1,s_2)} |\widehat{\nu}_{j+1} - \widehat{\nu}_j|^p d\xi .
\end{align}
Applying Lemmas \ref{lem_mup_meas_d=3} and \ref{lem_mup_osci_d=3}, we get
\begin{equation*}
\begin{split}
\int_{\widetilde{\Omega}_{3,j+1}(s_1,s_2)} |\widehat{\nu}_{j+1} - \widehat{\nu}_j|^p d\xi \lesssim_\e (M_j 2^{s_1})^\e M_j^{3-\frac{\alpha p}{2}}2^{s_1(\frac{\alpha p}{2}-p+4-\alpha)}2^{s_2(3+\alpha-\frac{\alpha p}{2})}.
\end{split}
\end{equation*}
For fixed $s_1$, denote the exponent of $2^{s_2}$ by $p(s_2)$. Then we can check that the sum over $1 \leq 2^{s_2} \leq 2^{s_1} M_j$ is bounded by
\begin{align}\label{sum_s2_3}
&\lesssim_\e
\begin{cases}
(M_j 2^{s_1})^\e M_{j}^{-\alpha p +6+\alpha}2^{s_1(7-p)} &\text{ if } p(s_2)>0,\\
(M_j 2^{s_1})^\e M_{j}^{-\frac{\alpha p }{2} +3 }2^{s_1(\frac{\alpha p}{2}-p+4-\alpha)}&\text{ if } p(s_2)\leq 0.
\end{cases}
\end{align}
Since $p > p_\alpha$, the exponents of $2^{s_1}$ and $M_j$ in \eqref{sum_s2_3} are all negative for sufficiently small $\e >0$. Thus, we get
\[
\norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p^p \lesssim_\e M_j^\e ( M_j^{-\alpha p +6+ \alpha} + M_j^{- \frac{\alpha p}{2} +3}).
\]
Summing over $j$, we have the desired estimate, which is
\[
\begin{split}
\norm{\widehat{\nu}_n }_p &\lesssim_\e \sum_{ 1 \leq j \leq n-1 } \norm{\widehat{\nu}_{j+1} - \widehat{\nu}_j }_p +1\lesssim_\e 1.
\end{split}
\]
\end{proof}
\subsection{Estimates on the measures of sets}
Before we prove Lemmas \ref{lem_mup_meas} and \ref{lem_mup_meas_d=3}, we will introduce some necessary results.
\begin{lemma}\label{lem_est_mu_j}
A.s. we have $\norm{\mu_j} \lesssim 1$ uniformly in $j$.
\end{lemma}
Lemma \ref{lem_est_mu_j} follows easily from the martingale convergence theorem. For example, see \cite[p. 28]{Ryou23}.
\begin{lemma}[Lemma 5.10 in \cite{Ryou23}]\label{lem_est_mu_ball_r}
For any $\e>0$, a.s. we have
\[
\mu_j(B(t,r)) \lesssim_\e M_j^\e r^\alpha, \qquad \forall t\in [0,1], \forall r\geq M_j^{-1}.
\]
\end{lemma}
We also need an estimate of the volume of a parallelotope.
\begin{lemma}\label{lem_vol_P}
Assume that the set of vectors $\{b_k \in \R^d : 1 \leq k \leq d\}$ is linearly independent. Let $P$ be the parallelotope defined by
\[
\{ \xi \in \R^d : |\langle b_k, \xi \rangle | \leq 1/2 , 1 \leq k \leq d \}.
\]
Then, $|P|= |\det(b_1,\cdots, b_d)|^{-1}$ where $\det(b_1, \cdots, b_d)$ is the determinant of a matrix whose columns are $b_1, \cdots, b_d$.
\end{lemma}
\begin{proof}
Let $A$ be the matrix whose rows are $b_1^{T}, \cdots, b_d^{T}$, so that $|\det(A)| = |\det(b_1,\cdots,b_d)|$. By the change of variables $\eta = A\xi$, we obtain that
$$
|P| = \int_{P} d\xi = \int_{|\eta_i|\leq 1/2,\ 1 \leq i \leq d} |\det(A)|^{-1}d\eta = |\det(A)|^{-1}.
$$
\end{proof}
Lastly, we will frequently use the Taylor expansion of $\gamma(t)$, which we record here. For fixed $\xi$,
\begin{equation}\label{eq_taylor}
\langle \gamma^{(k)} (t),\xi \rangle = \sum_{n=0}^{d-k}\frac{1}{n!}\langle\gamma^{(k+n)}(s),\xi\rangle (t-s)^n, \qquad \forall t,s\in [0,1]\,, 1\leq k\leq d.
\end{equation}
Let us explain the idea of the proof of Lemma \ref{lem_mup_meas} first. We define a parallelotope $P(t_1,t_2)$ by
\begin{align*}
\{ \xi \in \R^d : |\langle \gamma^{(1)} (t_1), \xi \rangle | \leq 10(M_j2^{s_1}), \ |\langle \gamma^{(k)} (t_2), \xi \rangle | \leq 10(M_j2^{s_1}) 2^{s_2(k-1)}, \ \forall 2 \leq k \leq d \}.
\end{align*}
The set $\Omega_j(s_1,s_2)$ is contained in the union of $P(t_1,t_2)$ where $t_1, t_2 \in E_j$. When $s_2$ is small, i.e.
$2^{s_2} \leq (2^{s_1} M_j)^{\frac{1}{d-1}}$, estimating the measure of the union of $P(t_1,t_2)$ gives a good upper bound on $|\Omega_j(s_1,s_2)|$, which is \eqref{eq_Omega_meas1}. However, if $s_2$ is large, i.e. $2^{s_2} \geq (2^{s_1} M_j)^\frac{1}{d-1}$, and $|t_1-t_2|$ is large, then only a small portion of $P(t_1,t_2)$ intersects $\Omega_j(s_1)$. Thus, we need an additional argument for this case. Therefore, we will prove \eqref{eq_Omega_meas1} and \eqref{eq_Omega_meas2} in Lemma \ref{lem_mup_meas} separately.
\begin{proof}[Proof of \eqref{eq_Omega_meas1} in Lemma \ref{lem_mup_meas}]
Recall that our goal is to prove that a.s. we have
\begin{equation*}
|\Omega_j(s_1,s_2) | \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2(\frac{d(d-1)}{2}+\alpha d)} 2^{s_1\frac{d(1-\alpha)}{d-1}}.
\end{equation*}
We will consider three cases:
\begin{itemize}
\item[(a)] $2^{s_2} \leq M_j^\frac{1}{d-1}$,
\item[(b)] $M_j^{\frac{1}{d-1}} \leq 2^{s_2} \leq \min \{M_j, (2^{s_1 }M_j)^{\frac{1}{d-1}}\}$,
\item[(c)] $M_j \leq 2^{s_2} \leq (2^{s_1} M_j)^{\frac{1}{d-1}}$.
\end{itemize}
Case (a): Let $i_1, i_2$ be two integers such that
\begin{align}\label{eq_i1_i2}
M_{i_1-1} \leq 2^{s_2(d-1)} \leq M_{i_1}\,,\qquad
M_{i_2-1} \leq 2^{s_2} \leq M_{i_2}.
\end{align}
For each $\xi\in \Omega_j(s_1,s_2)$, recall that there exist $t_1,t_2\in E_j$ satisfying
\begin{align}\label{eq_property_xi_Omega}
|\langle \gamma^{(1)}(t_1), \xi \rangle| \leq M_j2^{s_1},
\end{align}
and
\begin{align}\label{eq_property_xi_Omega_2}
\ |\langle\gamma^{(k)}(t_2), \xi \rangle| \leq M_j 2^{s_1} 2^{s_2(k-1)} \,, \qquad \forall 2 \leq k \leq d.
\end{align}
Let $T_1$ and $T_2$ be the sets of such points $t_1$ and $t_2$ respectively, i.e., $T_1$ is the set of $t_1 \in E_j$ such that there is $\xi \in \Omega_j(s_1,s_2)$ which satisfies \eqref{eq_property_xi_Omega} and $T_2$ is the set of $t_2 \in E_j$ such that there is $\xi \in \Omega_j(s_1,s_2)$ which satisfies \eqref{eq_property_xi_Omega_2}. Observe that $i_1,i_2\leq j$.
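For completeness, this observation follows from the monotonicity of the sequence $(M_i)$, once $i_1$ and $i_2$ are taken to be the smallest integers satisfying \eqref{eq_i1_i2}: in case (a),
\[
2^{s_2(d-1)} \leq M_j \qquad \text{and} \qquad 2^{s_2} \leq M_j^{\frac{1}{d-1}} \leq M_j,
\]
so the minimality of $i_1$ and $i_2$ forces $i_1 \leq j$ and $i_2 \leq j$.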
We will consider levels $i_1$ and $i_2$ of our set, namely $E_{i_1}$ and $E_{i_2}$, respectively. Recall that $E_{i_1}$ is the union of intervals $I_1$ of length $ M_{i_1}^{-1}$, and $E_j\subseteq E_{i_1}$. Denote by $\widetilde{T}_1$ the set of the left endpoints of intervals $I_{1}$ which intersect $T_1$. Similarly, $E_{i_2}$ is the union of intervals $I_{2}$ of length $ M_{i_2}^{-1}$ with $E_j\subseteq E_{i_2}$. Denote by $\widetilde{T}_2$ the set of the left endpoints of intervals $I_2 $ which intersect $T_2$. For each $\tilde{t}_1\in \widetilde{T}_1$ and $ \tilde{t}_2 \in \widetilde{T}_2$, we define
\[
\begin{split}
\Omega_j(s_1,s_2,\tilde{t}_1 , \tilde{t}_2)=\{ \xi \in \R^d : &|\langle \gamma^{(1)}(\tilde{t}_1), \xi \rangle| \leq 100dM_j2^{s_1},\\
&|\langle \gamma^{(k)}(\tilde{t}_2), \xi \rangle| \leq 100dM_j 2^{s_1} 2^{s_2(k-1)} \,, \forall 2 \leq k \leq d \}.
\end{split}
\]
We claim that
\begin{align}\label{eq_union_Omegajs1s2}
\Omega_j(s_1,s_2) \subseteq \bigcup\limits_{\tilde{t}_1 \in \widetilde{T}_1, \tilde{t}_2 \in \widetilde{T}_2} \Omega_j (s_1,s_2, \tilde{t}_1, \tilde{t}_2)\,.
\end{align}
Indeed, given $\xi \in \Omega_j(s_1,s_2)$, the point $t_1 = t_1(\xi)$ lies in an interval $I_1$ with left endpoint $\tilde{t}_1 \in \widetilde{T}_1$, so that $|\tilde{t}_1 -t_1| \leq M_{i_1}^{-1} \leq 2^{-s_2(d-1)} $. By Taylor expansion \eqref{eq_taylor} with $t=\tilde{t}_1$ and $s=t_1(\xi)$, we get
\[
|\langle \gamma^{(1)}(\tilde{t}_1), \xi \rangle | \leq dM_j 2^{s_1}.
\]
Similarly, using the fact that $|\tilde{t}_2 -t_2| \leq M_{i_2}^{-1} \leq 2^{-s_2}$ and \eqref{eq_taylor}, we have
\[
|\langle\gamma^{(k)}(\tilde{t}_2), \xi \rangle | \leq dM_j2^{s_1+s_2(k-1)},\qquad \forall 2 \leq k \leq d.
\]
Thus, we have \eqref{eq_union_Omegajs1s2}. Now, note that
\[
\det(\gamma^{(1)}(\tilde{t}_1), \gamma^{(2)}(\tilde{t}_2), \cdots ,\gamma^{(d)}(\tilde{t}_2)) \gtrsim 1
\]
uniformly in $\tilde{t}_1\in \widetilde{T}_1$ and $\tilde{t}_2\in \widetilde{T}_2$.
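Concretely, Lemma \ref{lem_vol_P} applies to the rescaled vectors $b_1 = (200dM_j2^{s_1})^{-1}\gamma^{(1)}(\tilde{t}_1)$ and $b_k = (200dM_j2^{s_1}2^{s_2(k-1)})^{-1}\gamma^{(k)}(\tilde{t}_2)$ for $2 \leq k \leq d$, and the lower bound on the determinant gives
\[
|\Omega_j(s_1,s_2,\tilde{t}_1,\tilde{t}_2)| = \frac{(200d)^d (M_j2^{s_1})^d\, 2^{s_2(1+2+\cdots+(d-1))}}{|\det(\gamma^{(1)}(\tilde{t}_1), \gamma^{(2)}(\tilde{t}_2), \cdots, \gamma^{(d)}(\tilde{t}_2))|} \lesssim (M_j 2^{s_1})^d 2^{s_2\frac{d(d-1)}{2}},
\]
since $1+2+\cdots+(d-1) = \frac{d(d-1)}{2}$.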
Thus it follows from Lemma \ref{lem_vol_P} that
\begin{align}\label{eq_est_Omega_meas_a}
|\Omega_j(s_1,s_2,\tilde{t}_1, \tilde{t}_2)| \lesssim (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}}.
\end{align}
Since $M_{i_1} \lesssim_\e M_{i_1-1}^{1+\e} \lesssim 2^{s_2(d-1)(1+\e)}$ and $M_{i_2} \lesssim_\e M_{i_2-1}^{1+\e} \lesssim 2^{s_2(1+\e)} $, using \eqref{cond_b_jM_j} and Lemma \ref{lem_est_mu_j}, a.s. we obtain that
\begin{align}\label{eq_est_T1_a}
\# \widetilde{T}_1 \lesssim \norm{\mu_{i_1}} \beta_{i_1}^{-1}M_{i_1} \lesssim_\e 2^{s_2(d-1)\alpha +\e},
\end{align}
and
\begin{align}\label{eq_est_T2_a}
\# \widetilde{T}_2 \lesssim \norm{\mu_{i_2}} \beta_{i_2}^{-1}M_{i_2} \lesssim_\e 2^{s_2\alpha +\e}.
\end{align}
Combining the above estimates, we obtain that a.s. \eqref{eq_Omega_meas1} holds.

Case (b): We can prove \eqref{eq_Omega_meas1} in a similar way with some modifications. Since $2^{s_2} \leq M_j \leq 2^{s_2(d-1)}$, there exist integers $i_1, i_2$ satisfying \eqref{eq_i1_i2} with $i_2\leq j\leq i_1$. We consider the same sets $E_{i_2}$ and $\widetilde{T}_2$ as in case $(a)$. However, we cannot use the same $\widetilde{T}_1$ as in case (a). Since $E_{i_1} \subseteq E_j$, there could exist $t_1 \in T_1$ that lies in $ E_j$ but not in $E_{i_1}$. Therefore, we divide $E_j$ into intervals $I_{1}$ of length $M_{i_1}^{-1}$, and we use this new $I_1$ to define $\widetilde{T}_1$, i.e., $\widetilde{T}_1$ is the set of left endpoints of the intervals $I_1 \subset E_j $ of length $M_{i_1}^{-1}$ which intersect $T_1$. Then, a.s. we get
\begin{equation}\label{eq_est_T1_b}
\#\widetilde{T}_1 \lesssim \norm{\mu_j} \beta_j^{-1} M_{i_1} \lesssim_\e M_j^{\alpha-1+\e } 2^{s_2(d-1)}.
\end{equation}
Combining \eqref{eq_est_Omega_meas_a}, \eqref{eq_est_T2_a} and \eqref{eq_est_T1_b}, a.s. we have
$$
|\Omega_j(s_1,s_2)| \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}} M_j^{\alpha-1} 2^{s_2(d-1+\alpha)}.
$$
Due to the range of $s_2$, we have $M_j^{-1} 2^{s_2(d-1)} \leq 2^{s_1} \leq 2^{s_1 \frac{d}{d-1}}$, which implies \eqref{eq_Omega_meas1}.

Case (c): Since $M_j \leq 2^{s_2} $ and $M_j \leq 2^{s_2(d-1)}$, there exist $i_1,i_2\geq j$ such that \eqref{eq_i1_i2} holds. Instead of $E_{i_1}$ and $E_{i_2}$ in case (a), we divide $E_j$ into intervals $I_1$ of length $M_{i_1}^{-1}$, and into intervals $I_2$ of length $M_{i_2}^{-1}$ respectively. The sets $\widetilde{T}_1$ and $\widetilde{T}_2$ are then defined in the same way as in (a) by using these new $I_1$ and $I_2$. These decompositions a.s. give us \eqref{eq_est_Omega_meas_a}, \eqref{eq_est_T1_b} and
\[
\#\widetilde{T}_2 \lesssim \norm{\mu_j} \beta_j^{-1} M_{i_2} \lesssim_\e M_j^{\alpha-1+\e } 2^{s_2}.
\]
Thus, a.s. we have
$$
|\Omega_j(s_1,s_2)| \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}} M_j^{-2(1-\alpha)} 2^{s_2d}.
$$
Due to the range of $s_2$, we have $M_j^{-2}2^{s_2d} \leq 2^{s_1 \frac{d}{d-1}} M_j^{-\frac{d-2}{d-1}} \leq 2^{s_1 \frac{d}{d-1}}$. Thus, we conclude that \eqref{eq_Omega_meas1} holds a.s.
\end{proof}
Now we turn to the proof of \eqref{eq_Omega_meas2}.
\begin{proof}[Proof of \eqref{eq_Omega_meas2} in Lemma \ref{lem_mup_meas}]
Recall that our goal is to prove that a.s. we have
\begin{equation*}
|\Omega_j(s_1,s_2)| \lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2} } M_j^\alpha 2^{s_1} ( M_j 2^{s_1-s_2})^{\frac{\alpha}{d-2}}2^{s_1\frac{1-\alpha}{d-1}}.
\end{equation*}
We will consider three cases:
\begin{itemize}
\item[(a)] $(2^{s_1} M_j)^{\frac{1}{d-1}} \leq 2^{s_2} \leq M_j$,
\item[(b)] $M_j \leq 2^{s_2} \leq 2^{s_1/(d-1)} M_j$,
\item[(c)] $ 2^{s_1/(d-1)} M_j \leq 2^{s_2} \leq 2^{s_1} M_j$.
\end{itemize}
Case (a): For each $\xi\in \Omega_j(s_1,s_2)$, recall that there exist $t_1,t_2\in E_j$ satisfying
$$
|\langle \gamma^{(1)}(t_1), \xi \rangle| \leq M_j2^{s_1}\,,\qquad |\langle \gamma^{(2)}(t_1), \xi \rangle| \leq (M_j2^{s_1})^2
$$
and
$$
|\langle\gamma^{(k)}(t_2), \xi \rangle| \leq M_j 2^{s_1} 2^{s_2(k-1)} \,, \forall 2 \leq k \leq d.
$$
Let $T_1$ and $T_2$ be the sets of such points $t_1$ and $t_2$ respectively. By the assumption on $s_2$, there is an integer $i$ such that $M_{i-1} \leq 2^{s_2} \leq M_i$. Decompose $E_{j}$ into intervals $I_{1}$ of length $ (M_j 2^{s_1})^{-1}$, and define $\widetilde{T}_1$ as in the previous argument, i.e., $\widetilde{T}_1$ is the set of left endpoints of intervals $I_1$ which intersect $T_1$. Next, we consider level $i$ of our set, namely $E_i$, which is the union of intervals $I_2$ of length $M_i^{-1}$, and we define the corresponding set $\widetilde{T}_2$ in the same way. For each $\tilde{t}_1\in \widetilde{T}_1$ and $ \tilde{t}_2 \in \widetilde{T}_2$, we define
\begin{align*}
\Omega_j(s_1,s_2,\tilde{t}_1 , \tilde{t}_2)=\{\xi \in \R^d : |\langle \gamma^{(1)}(\tilde{t}_1), \xi \rangle| \leq 100dM_j2^{s_1}\,, |\langle \gamma^{(2)}(\tilde{t}_1), \xi \rangle| \leq 100d(M_j2^{s_1})^2\,,&\\
|\langle\gamma^{(k)}(\tilde{t}_2), \xi \rangle| \leq 100dM_j 2^{s_1} 2^{s_2(k-1)} \,,\qquad \forall 2 \leq k \leq d \}.&
\end{align*}
Since $|I_1| = (M_j2^{s_1})^{-1}$ and $|I_2| = M_i^{-1} \leq 2^{-s_2}$, we apply Taylor expansion \eqref{eq_taylor} and we can check that
\begin{align*}
\Omega_j(s_1,s_2) \subseteq \bigcup\limits_{\tilde{t}_1 \in \widetilde{T}_1, \tilde{t}_2 \in \widetilde{T}_2} \Omega_j (s_1,s_2, \tilde{t}_1, \tilde{t}_2).
\end{align*}
To estimate the measure of the set on the right-hand side, we decompose it further. Let $A_0$ be the union of those $\Omega_j(s_1,s_2,\tilde{t}_1, \tilde{t}_2)$ such that $|\tilde{t}_1 - \tilde{t}_2| \lesssim 2^{-s_2} ( M_j 2^{s_1-s_2} )^{\frac{1}{d-2}}$.
For $l \geq 1$, let $A_l$ be the union of those $\Omega_j(s_1,s_2,\tilde{t}_1, \tilde{t}_2)$ such that $|\tilde{t}_1 - \tilde{t}_2| \sim 2^{l-s_2} ( M_j 2^{s_1-s_2} )^{\frac{1}{d-2}}$. Therefore, we have
$$\Omega_j (s_1,s_2) \subseteq \bigcup_{l \geq 0 } A_l.$$
Let us first estimate $|A_0|$. Since
$$
\det(\gamma^{(1)}(\tilde{t}_1), \gamma^{(2)}(\tilde{t}_2), \cdots ,\gamma^{(d)}(\tilde{t}_2)) \gtrsim 1
$$
uniformly in $\tilde{t}_1$ and $\tilde{t}_2$, Lemma \ref{lem_vol_P} implies that
\[
|\Omega_j(s_1,s_2,\tilde{t}_1, \tilde{t}_2)| \lesssim (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}},\qquad \forall \tilde{t}_1\in \widetilde{T}_1,\tilde{t}_2\in \widetilde{T}_2.
\]
Here, we did not use the inequality $|\langle \gamma^{(2)}(\tilde{t}_1), \xi \rangle| \leq 100d(M_j2^{s_1})^2$. We will use it to estimate $|A_l|$ for $l \geq 1$. For fixed $\tilde{t}_1\in \widetilde{T}_1$, since $M_i^{-1} \leq 2^{-s_2}$ and $M_j^{-1} \leq 2^{-s_2} (M_j 2^{s_1-s_2})^{\frac{1}{d-2}}$, we apply Lemma \ref{lem_est_mu_ball_r}. Then, a.s. we have
\[
\begin{split}
\#\{\tilde{t}_2 \in \widetilde{T}_2: |\tilde{t}_1 - \tilde{t}_2| \lesssim 2^{-s_2} (M_j 2^{s_1-s_2})^\frac{1}{d-2}\} &\lesssim \beta_i^{-1} M_i \mu_i \big(B(\tilde{t}_1, 2^{-s_2} ( M_j 2^{s_1-s_2})^\frac{1}{d-2})\big)\\
&\lesssim_\e M_i^{\alpha+\e } \big(2^{-s_2} (M_j 2^{s_1-s_2})^\frac{1}{d-2}\big)^\alpha\\
&\lesssim_\e M_i^{\e} (M_j 2^{s_1-s_2})^\frac{\alpha}{d-2}.
\end{split}
\]
Additionally, a.s. we have
\[
\#\widetilde{T}_1 \lesssim \norm{\mu_{j}} \beta_{j}^{-1}M_{j} 2^{s_1} \lesssim_\e M_j^{\alpha +\e }2^{s_1}.
\]
Therefore, we obtain that
\begin{equation}\label{Omega_meas_A1}
|A_0|\lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}} M_j^\alpha 2^{s_1} (M_j 2^{s_1-s_2})^\frac{\alpha}{d-2}.
\end{equation}
Next, we estimate $|A_l|$ for $l \geq 1$.
Observe that $\Omega_j(s_1,s_2,\tilde{t}_1, \tilde{t}_2)$ is contained in the parallelotope
\[
\begin{split}
\{ \xi \in \R^d : &|\langle \gamma^{(1)}(\tilde{t}_1), \xi \rangle| \leq 100dM_j2^{s_1}, |\langle \gamma^{(2)}(\tilde{t}_1), \xi \rangle| \leq 100d(M_j2^{s_1})^2,\\
&\hspace{2cm}|\langle\gamma^{(k)}(\tilde{t}_2), \xi \rangle| \leq 100dM_j 2^{s_1} 2^{s_2(k-1)} \,, \forall 2 \leq k \leq d-1 \}.
\end{split}
\]
By Lemma \ref{lem_vol_P}, we can bound $|\Omega_j(s_1,s_2, \tilde{t}_1 , \tilde{t}_2)| $ by a constant multiple of
\begin{equation} \label{eq_Omega_meas_A2_est}
\begin{split}
(M_j 2^{s_1})^{d+1} 2^{s_2 \frac{(d-2)(d-1)}{2}}|\det (\gamma^{(1)} (\tilde{t}_1),\gamma^{(2)} (\tilde{t}_1), \gamma^{(2)} (\tilde{t}_2), \cdots, \gamma^{(d-1)} (\tilde{t}_2))|^{-1}.
\end{split}
\end{equation}
By the Taylor expansion \eqref{eq_taylor}, we have
\[
\gamma^{(2)} (\tilde{t}_1) = \gamma^{(2)} (\tilde{t}_2 ) + \gamma^{(3)} (\tilde{t}_2 ) (\tilde{t}_1 - \tilde{t}_2 ) + \cdots +\frac{1}{(d-2)!} \gamma^{(d)} (\tilde{t}_2 ) (\tilde{t}_1 - \tilde{t}_2 )^{d-2}.
\]
Thus, the determinant in \eqref{eq_Omega_meas_A2_est} is bounded from below by a constant multiple of
\[
\begin{split}
|\det (\gamma^{(1)} (\tilde{t}_1), \gamma^{(2)} (\tilde{t}_2), \cdots, \gamma^{(d-1)} (\tilde{t}_2), \gamma^{(d)} (\tilde{t}_2))| \cdot |\tilde{t}_1 -\tilde{t}_2 |^{d-2}\sim |\tilde{t}_1 -\tilde{t}_2|^{d-2}.
\end{split}
\]
Plugging this into \eqref{eq_Omega_meas_A2_est}, we get
\[
\begin{split}
|\Omega_j(s_1,s_2, \tilde{t}_1 , \tilde{t}_2)| \lesssim &(M_j 2^{s_1})^{d} 2^{s_2 \frac{d(d-1)}{2}} \cdot 2^{-l(d-2) } .
\end{split}
\]
By the same calculation as in the case of $A_0$, a.s.
we have that $\#\widetilde{T}_1 \lesssim_\e M_j^{\alpha +\e }2^{s_1}$, and \begin{align}\label{eq_card_T2_a} \#\{\tilde{t}_2 \in \widetilde{T}_2: |\tilde{t}_1 - \tilde{t}_2 | \lesssim 2^{l-s_2} (M_j 2^{s_1-s_2})^\frac{1}{d-2}\} \lesssim_\e M_i^{\e} 2^{l\alpha}(M_j 2^{s_1-s_2})^\frac{\alpha}{d-2}, \qquad \forall \tilde{t}_1\in \widetilde{T}_1. \end{align} Combining the estimates above, a.s. we have \begin{equation}\label{Omega_meas_A2} \begin{split} |A_{l}| \lesssim_\e 2^{-l(d-2-\alpha)} M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}} M_j^\alpha 2^{s_1} (M_j 2^{s_1-s_2})^\frac{\alpha}{d-2},\qquad \forall l\geq 1. \end{split} \end{equation} Since $d-2-\alpha>0$, using \eqref{Omega_meas_A1} and \eqref{Omega_meas_A2}, we obtain that \begin{align} \left|\bigcup_{l \geq 0 } A_l\right| \leq \sum_{l\geq 0}|A_{l}|\lesssim_\e M_j^\e (M_j 2^{s_1})^d 2^{s_2 \frac{d(d-1)}{2}} M_j^\alpha 2^{s_1} (M_j 2^{s_1-s_2})^\frac{\alpha}{d-2}. \end{align} Therefore, we have established \eqref{eq_Omega_meas2}. Case $(b)$: By the range of $s_2$, we have $2^{-s_2} \leq M_j^{-1} \leq 2^{-s_2} ( M_j 2^{s_1-s_2})^\frac{1}{d-2}$. Instead of $E_i$ in case (a), we decompose $E_j$ into intervals $I_2$ of length $ \sim 2^{-s_2}$ and define $\widetilde{T}_2$ in the same way. Then, we can follow the previous argument in case (a) line by line except that the estimate \eqref{eq_card_T2_a} is replaced by \[ \#\{\tilde{t}_2 \in \widetilde{T}_2: |\tilde{t}_1 - \tilde{t}_2 | \lesssim 2^{l-s_2} (M_j 2^{s_1-s_2})^\frac{1}{d-2}\} \lesssim_\e M_i^{\e } 2^{l\alpha} (M_j 2^{s_1-s_2})^\frac{\alpha}{d-2}(2^{s_2 } M_j^{-1})^{1-\alpha} ,\qquad \forall l\geq 0. \] Since $2^{s_2} M_j^{-1} \leq 2^{s_1/(d-1)}$, a.s. we have \eqref{eq_Omega_meas2}. Case $(c)$: Since $2^{-s_2} \leq M_j^{-1} $, as in case (b), we divide $E_j$ into intervals $I_2$ of length $\sim 2^{-s_2}$ and define $\widetilde{T}_2$ in the same way. 
Again, we can follow the same argument for case (a) except that \eqref{eq_card_T2_a} is replaced by
\[
\#\{\tilde{t}_2 \in \widetilde{T}_2: |\tilde{t}_1 - \tilde{t}_2 | \lesssim 2^{l-s_2} (M_j 2^{s_1-s_2})^\frac{1}{d-2}\}\lesssim 2^l(M_j 2^{s_1-s_2})^\frac{1}{d-2},\qquad \forall l\geq 0.
\]
Here, we used that $2^{-s_2} ( M_j 2^{s_1-s_2})^\frac{1}{d-2 } \leq M_j^{-1}$ since $ 2^{s_1/(d-1)} M_j \leq 2^{s_2} $. We also use the fact that $(M_j 2^{s_1-s_2})^{\frac{1}{d-2}} \leq 2^{\frac{s_1}{d-1}}$, which follows from the same lower bound on $2^{s_2}$. Then, a.s. \eqref{eq_Omega_meas2} holds.
\end{proof}
Let us turn to the proof of Lemma \ref{lem_mup_meas_d=3}.
\begin{proof}[Proof of Lemma \ref{lem_mup_meas_d=3}]
Recall that we want to show that a.s. we have
\begin{equation*}
|\Omega_{3,j}(s_1,s_2)| \lesssim_\e M_j^\e(M_j2^{s_1})^3 2^{s_1(1-\alpha)+s_2( 3 +\alpha)}.
\end{equation*}
We will consider two cases:
\begin{itemize}
\item[(a)] $1\leq 2^{s_2}\leq M_j$,
\item[(b)] $M_j\leq 2^{s_2}\leq 2^{s_1}M_j$.
\end{itemize}
Case (a): Let $i$ be an integer such that $M_{i-1} \leq 2^{s_2} \leq M_i$. We consider level $i$ of our set, namely $E_i$, which is the union of intervals $I_{i}$ of length $ M_{i}^{-1}$. For each $\xi\in \Omega_{3,j}(s_1,s_2)$, there exists $t \in E_j$ such that
\[
|\langle \gamma^{(k) }(t), \xi \rangle | \leq M_j 2^{s_1} 2^{s_2(k-1)}, \qquad \forall 1 \leq k \leq 3.
\]
Let $T$ be the set of such points $t$ and let $\widetilde{T}$ be the set of left endpoints of intervals $I_{i}$ which intersect $T$. For each $\tilde{t} \in \widetilde{T}$, define
\[
\Omega_{3,j}(s_1, s_2, \tilde{t}) = \{ \xi \in \R^d : |\langle \gamma^{(k)}(\tilde{t}), \xi \rangle | \leq 10 d(M_j 2^{s_1}) 2^{s_2(k-1)} \ \forall 1 \leq k \leq 3 \}.
\]
We can check that
\[
\Omega_{3,j} (s_1,s_2) \subseteq \bigcup\limits_{\tilde{t} \in \widetilde{T}} \Omega_{3,j}(s_1,s_2,\tilde{t}).
\]
By Lemma \ref{lem_vol_P}, we have
\[
|\Omega_{3,j} (s_1,s_2, \tilde{t} )| \lesssim (M_j 2^{(s_1+s_2)})^3.
\]
Also note that $\#\widetilde{T} \lesssim\norm{\mu_i} \beta_i^{-1} M_i$, since $2^{s_2} \leq M_j$. Combining the above estimates with Lemma \ref{lem_est_mu_j} and \eqref{cond_b_jM_j}, a.s. we get
\begin{equation*}
\begin{split}
|\Omega_{3,j}(s_1,s_2)| &\lesssim (M_j 2^{(s_1+s_2)})^3\norm{\mu_i} \beta_{i}^{-1} M_i \\
&\lesssim_\e M_j^\e (M_j2^{s_1})^3 2^{s_2(3+\alpha)}.
\end{split}
\end{equation*}
Therefore, we obtain \eqref{Omega_meas_3}.

Case (b): Since $M_j \leq 2^{s_2} $, we divide $E_j$ into intervals of length $\sim 2^{-s_2}$ and define the set $\widetilde{T}$ in the same way. Note that $\#\widetilde{T}\lesssim \norm{\mu_j} \beta_j^{-1} 2^{s_2}$. Following the same argument as in case (a), a.s. we obtain \eqref{Omega_meas_3}.
\end{proof}

\subsection{Estimates on the oscillatory integrals}
In this section, we prove Lemmas \ref{lem_mup_osci} and \ref{lem_mup_osci_d=3}. To begin with, we introduce Hoeffding's inequality.
\begin{lemma} [Hoeffding's inequality \cite{Ho63}]
Let $X_1, \cdots, X_n$ be independent real random variables such that $a_i \leq X_i \leq b_i$ and $S_n := X_1 + \cdots + X_n$. For $t>0$,
\begin{equation}\label{eq_hoeff}
\mathbb{P} \left( \big|S_n - \mathbb{E}(S_n) \big|>t \right) \leq 2 \exp{\left(\frac{-2t^2}{\sum_{i=1}^n (b_i-a_i)^2} \right)}.
\end{equation}
\end{lemma}
For an interval $I \subseteq [0,1]$, we write
$$X_{I} = \int_{I} \beta_{j+1} \mathbf{1}_{E_{j+1}} e(-\langle \gamma(t), \xi \rangle ) dt.$$
Recall that $\mathcal{I}_j$ is the collection of intervals of length $M_j^{-1}$ which cover $[0,1]$. We can check that
\[
\widehat{\nu}_{j+1}(\xi)= \sum_{I_j \in \mathcal{I}_j} X_{I_j} \qquad \text{and}\qquad \widehat{\nu}_j (\xi)= \mathbb{E}\left(\sum_{I_j \in \mathcal{I}_j} X_{I_j} | E_j \right),
\]
which implies that
\begin{align*}
|\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) |=\left|\sum_{I_j \in \mathcal{I}_j} X_{I_j} - \mathbb{E}\left(\sum_{I_j \in \mathcal{I}_j} X_{I_j} |E_j \right) \right|.
\end{align*}
Thus, we need an upper bound of $\sum_{I_j \in \mathcal{I}_j} |X_{I_j}|^2$ to use Hoeffding's inequality.
\begin{lemma} \label{claim_1osc}
If $d\geq 4$, for each $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$ and $E_j$, there exists a quantity $C(E_j,s_1,s_2)$ such that
\begin{equation}\label{sum_X^2}
\sum_{I_j\in\mathcal{I}_j}|X_{I_j}|^2 \leq C(E_j,s_1,s_2)
\end{equation}
conditioned on $E_j$ and a.s. we have
\begin{align}\label{eq_sum_Xj}
C(E_j,s_1,s_2) \lesssim_\e M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)} 2^{-s_2\alpha}
\end{align}
uniformly in $\xi$ and $E_j$.
\end{lemma}
The proof of Lemma \ref{lem_mup_osci_d=3} is the same as the proof of Lemma \ref{lem_mup_osci} except that we use the following lemma instead of Lemma \ref{claim_1osc}.
\begin{lemma} \label{claim_2osc}
If $d= 3$, for each $\xi \in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$, there exists a quantity $C(E_j,s_1,s_2)$ such that
$$
\sum_{I_j \in \mathcal{I}_j} |X_{I_j}|^2 \leq C(E_j,s_1,s_2)
$$
conditioned on $E_j$ and a.s. we have
\begin{align}\label{eq_sum_Xj_d=3}
C(E_j,s_1,s_2) \lesssim_\e M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)} 2^{-s_2\alpha}
\end{align}
uniformly in $\xi$ and $E_j$.
\end{lemma}
\begin{proof}[Proof of Lemmas \ref{lem_mup_osci} and \ref{lem_mup_osci_d=3} using Lemmas \ref{claim_1osc} and \ref{claim_2osc}]
First we prove Lemma \ref{lem_mup_osci}. We claim that it is enough to show that \eqref{Omega_osci} holds for all $\xi \in \widetilde{\Omega}_{j+1}^\#(s_1,s_2)$, where
\begin{align*}
\widetilde{\Omega}_{j+1}^\#(s_1,s_2)= (C_d2^{s_1}M_{j+1}^2)^{-1} \mathbb{Z}^d \cap \widetilde{\Omega}_{j+1}(s_1,s_2),
\end{align*}
for a sufficiently large constant $C_d>0$. Indeed, let us assume that \eqref{Omega_osci} holds for all $\xi \in \widetilde{\Omega}^\#_{j+1}(s_1,s_2)$, namely,
\begin{align*}
|\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) |\lesssim_\e M_j^{-\frac{\alpha}{2}+\e}2^{-s_1(1-\frac{\alpha}{2})-s_2\frac{\alpha}{2}}, \qquad \forall \xi \in \widetilde{\Omega}^\#_{j+1}(s_1,s_2).
\end{align*}
For $h\in \R^d$, observe that
\[
\begin{split}
|\widehat{\nu}_{j+1} (\xi +h) -\widehat{\nu}_{j+1}(\xi)| &= \bigg|\int \beta_{j+1} \mathbf{1}_{E_{j+1}} ( e(-\langle \gamma (t), \xi+h \rangle )-e(-\langle \gamma (t), \xi \rangle )) dt\bigg|\\
&\lesssim \beta_{j+1} |h| \lesssim M_{j+1} |h|.
\end{split}
\]
Similarly, we have $|\widehat{\nu}_{j} (\xi +h) -\widehat{\nu}_{j}(\xi)|\lesssim M_{j}|h|$. If $|h| \lesssim 2^{-s_1}M_{j+1}^{-2}$, then we have
\begin{align*}
|\widehat{\nu}_{j+1}(\xi+h) - \widehat{\nu}_{j} (\xi +h)| &\lesssim |\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) | + 2^{-s_1}M_j^{-1}\\
&\lesssim_\e M_j^{-\frac{\alpha}{2}+\e}2^{-s_1(1-\frac{\alpha}{2})-s_2\frac{\alpha}{2}}+2^{-s_1}M_j^{-1}\\
&\lesssim_\e M_j^{-\frac{\alpha}{2}+\e}2^{-s_1(1-\frac{\alpha}{2})-s_2\frac{\alpha}{2}}.
\end{align*}
This means that \eqref{Omega_osci} holds for all $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$.

Now assume that $\xi \in \widetilde{\Omega}_{j+1}^\#(s_1,s_2)$. The inequality \eqref{sum_X^2} in Lemma \ref{claim_1osc} guarantees the existence of $a_{I_j} $ and $b_{I_j}$ such that $a_{I_j} \leq X_{I_j}\leq b_{I_j}$ and $\sum_{I_j \in \mathcal{I}_j} (b_{I_j}-a_{I_j})^2 \leq 4C(E_j,s_1,s_2)$. Thus, we can use $4C(E_j,s_1,s_2)$ instead of $\sum_{I_j \in \mathcal{I}_j} (b_{I_j}-a_{I_j})^2$ in Hoeffding's inequality. Using Hoeffding's inequality with \eqref{sum_X^2} in Lemma \ref{claim_1osc}, we obtain that
\begin{equation}\label{eq_prob_1}
\mathbb{P} \left( |\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) |>C(E_j,s_1,s_2)^{1/2} (2^{s_1}M_j)^\e \big|E_j \right) \lesssim \exp(-(2^{s_1}M_j)^{2\e}/2).
\end{equation}
Note that $\widetilde{\Omega}_{j+1}(s_1,s_2) \subseteq \Omega_{j+1}(s_1)$, that $ \Omega_{j+1}(s_1)$ is contained in a ball of radius $\lesssim(M_{j+1} 2^{s_1})^{d}$, and that the elements in $\widetilde{\Omega}_{j+1}^\#(s_1,s_2)$ are $\sim 2^{-s_1} M_{j+1}^{-2}$ separated.
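For the record, \eqref{eq_prob_1} is Hoeffding's inequality \eqref{eq_hoeff}, applied conditionally on $E_j$ with $t = C(E_j,s_1,s_2)^{1/2}(2^{s_1}M_j)^{\e}$ and with $\sum_{I_j \in \mathcal{I}_j}(b_{I_j}-a_{I_j})^2 \leq 4C(E_j,s_1,s_2)$, so that the right-hand side of \eqref{eq_hoeff} becomes
\[
2\exp\left(\frac{-2\,C(E_j,s_1,s_2)\,(2^{s_1}M_j)^{2\e}}{4\,C(E_j,s_1,s_2)}\right) = 2\exp\left(-\frac{1}{2}(2^{s_1}M_j)^{2\e}\right).
\]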
Thus,
\begin{align*}
\# \widetilde{\Omega}_{j+1}^\#(s_1,s_2) \lesssim M_{j+1}^{d(d+2)} 2^{s_1d(d+1)}.
\end{align*}
Since \eqref{eq_prob_1} holds uniformly in $E_j$ and $\xi \in \widetilde{\Omega}_{j+1}^\#(s_1,s_2)$, we have
\[
\begin{split}
\mathbb{P} \bigg( |\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) |&>C(E_j,s_1,s_2)^{1/2}(2^{s_1}M_j)^\e, \text{ for some } \xi \in\widetilde{\Omega}_{j+1}^\#(s_1,s_2) \bigg)\\
&\hspace{3cm}\lesssim M_{j+1}^{d(d+2)}2^{s_1d(d+1)}\exp(-(2^{s_1}M_j)^{2\e}/2).
\end{split}
\]
Since $2^{s_2} \leq 2^{s_1} M_{j+1}$, summing both sides over $j$, $s_1$, and $s_2$, we can check that the sum of the right-hand sides is bounded. By the Borel--Cantelli lemma, a.s. we have
\begin{equation}\label{eq_prob_2}
|\widehat{\nu}_{j+1}(\xi) - \widehat{\nu}_j(\xi) | \leq C(E_j,s_1,s_2)^{1/2} (2^{s_1}M_j)^{\e}, \qquad \forall \xi\in \widetilde{\Omega}_{j+1}^\#(s_1,s_2).
\end{equation}
Combining with \eqref{eq_sum_Xj} in Lemma \ref{claim_1osc}, we have established \eqref{Omega_osci}.

Next, let us turn to the proof of Lemma \ref{lem_mup_osci_d=3}. The proof is the same as the previous argument except that $\widetilde{\Omega}_{j+1}^\#(s_1,s_2)$ is replaced by
$$\widetilde{\Omega}_{3,j+1}^\#(s_1,s_2):=(C_d2^{s_1}M_{j+1}^2)^{-1}\mathbb{Z}^d\cap \widetilde{\Omega}_{3,j+1}(s_1,s_2)$$
and that we use Lemma \ref{claim_2osc} instead of Lemma \ref{claim_1osc}.
\end{proof}
To establish Lemmas \ref{claim_1osc} and \ref{claim_2osc}, we use the following lemma on oscillatory integrals.
\begin{lemma}[Theorem 1.1 in \cite{ACK_book}]\label{lem_osci_est}
Suppose that $d \geq 1$, and $I$ is an interval in $[0,1]$. Then,
\[
\left|\int_I e(\langle \gamma(t), \xi \rangle ) dt \right| \lesssim \min \{ |I|, H_I(\xi)^{-1}\} , \qquad \forall \xi\in \R^d,
\]
where
\[
H_I(\xi) = \inf_{t \in I} \sum_{k=1}^d \bigg|\frac{1}{k!}\langle \gamma^{(k)}(t),\xi \rangle \bigg|^{1/k}.
\]
\end{lemma}
The following estimate holds regardless of whether $d\geq 4$ or $d = 3$.
\begin{lemma}\label{lem_XI_s_2=0} For each $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$, we have \begin{align*} |X_{I_j}| \lesssim \beta_{j+1} M_j^{-1} 2^{-s_1}. \end{align*} \end{lemma} \begin{proof} If $s_1=0$, we use that the length of $I_j$ is $M_j^{-1}$. Thus, we obtain that \begin{align*} |X_{I_j} | \lesssim \beta_{j+1} M_j^{-1}. \end{align*} If $s_1 \neq 0$, then $\xi \in \widetilde{\Omega}_{j+1}(s_1,0)$ implies that $\xi \notin \Omega_{j+1}(s_1-1)$. Thus, for any $t \in E_{j+1}$, there exists $1 \leq k \leq d$ such that \[ |\langle \gamma^{(k)} (t), \xi \rangle | > (M_{j+1}2^{s_1})^k. \] This yields that for each interval $I_{j+1} \in \mathcal{I}_{j+1}$ contained in $ E_{j+1}$, we have \begin{align*} H_{I_{j+1}}(\xi)\gtrsim M_{j+1}2^{s_1}. \end{align*} For each $I_j \in \mathcal{I}_j$, at most $m_{j+1}$ such intervals are contained in $I_j \cap E_{j+1}$. Thus, Lemma \ref{lem_osci_est} implies that \begin{align*} |X_{I_j} | &\leq \beta_{j+1}\sum_{I_{j+1}\subset I_j\cap E_{j+1}}\bigg|\int_{I_{j+1} }e(-\langle \gamma(t),\xi\rangle)dt\bigg| \\ &\lesssim \beta_{j+1}m_{j+1}\min\{M_{j+1}^{-1},H_{I_{j+1}}(\xi)^{-1}\} \\ &\lesssim \beta_{j+1} M_j^{-1} 2^{-s_1}. \end{align*} \end{proof} Let $I \subseteq [0,1]$ be an interval such that $I_j \cap I $ is not empty. If we replace $I_j$ in Lemma \ref{lem_XI_s_2=0} by $I_j \cap I$, the proof of Lemma \ref{lem_XI_s_2=0} yields the following corollary, which will be used in the proofs of Lemmas \ref{claim_1osc} and \ref{claim_2osc}. \begin{corollary}\label{X_I_itrsc} For each $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$, we have \begin{align} |X_{I_j \cap I}| \lesssim \beta_{j+1} M_j^{-1} 2^{-s_1}. \end{align} \end{corollary} Let us turn to the proof of Lemma \ref{claim_1osc}.
Assume that $\xi\not\in \Omega_{j+1}(s_1,s_2-1)$; then for any $t \in E_{j+1}$, there exists $2 \leq k \leq d$ such that \begin{equation}\label{osc_cond1} |\langle \gamma^{(k)} (t), \xi \rangle | \geq M_{j+1} 2^{s_1} 2^{(s_2-1)(k-1)}. \end{equation} Therefore, for each fixed $\xi \not\in \Omega_{j+1}(s_1,s_2-1)$, we can decompose $[0,1]$ into a finite collection $\mathcal{J}$ of closed intervals, pairwise disjoint except possibly at their endpoints, such that for each $I\in \mathcal{J}$, there exists $2\leq k_I\leq d$ which satisfies \begin{equation}\label{eq_I_cond_1} |\langle \gamma^{(k_I)} (t), \xi \rangle | \geq M_{j+1} 2^{s_1 + (s_2-1)(k_I-1)}, \qquad \forall t \in I , \end{equation} and \begin{equation}\label{eq_I_cond_2} |\langle \gamma^{(k)} (t), \xi \rangle | \leq M_{j+1} 2^{s_1 + (s_2-1)(k-1)}, \qquad \forall t \in I , \forall 2 \leq k < k_I. \end{equation} Since $\langle \gamma(t) ,\xi \rangle$ is a polynomial in $t$ of degree at most $d$, $\#\mathcal{J}$ is bounded uniformly in $j$, $s_1$, $s_2$, and $\xi$ by a constant depending only on $d$. For each $I \in \mathcal{J}$, let $t_I\in I$ be a point at which $|\langle \gamma^{(1)} (t), \xi \rangle|$ attains its minimum on $I$. \begin{lemma}\label{lem_lb_1st_der} Assume that $s_2 \neq 0 $, $\xi \in \widetilde{\Omega}_{j+1}(s_1,s_2)$, and $I \in \mathcal{J}$. Let $m$ be an integer such that $|m| \gtrsim \max\{ M_{j+1}2^{-s_2}, 1\}$. Let $I_{j+1}(t_I+mM_{j+1}^{-1})$ be the interval in $\cI_{j+1}$ which contains $t_I+mM_{j+1}^{-1}$. Then we have \[ |\langle \gamma^{(1)} (t), \xi \rangle| \gtrsim |m| 2^{s_1+s_2}\,,\qquad \forall t\in I_{j+1}(t_I+mM_{j+1}^{-1})\cap I.
\] \end{lemma} \begin{proof} For each $t\in I_{j+1}(t_I+mM_{j+1}^{-1})\cap I$, using Taylor expansion up to order $k_I$, we have \[ \begin{split} \langle \gamma^{(1)} (t) ,\xi \rangle - \langle \gamma^{(1)} (t_I), \xi \rangle =\sum_{n=2}^{k_I-1}\langle \gamma^{(n)}(t_I),\xi\rangle\frac{(t-t_I)^{n-1}}{(n-1)!}+\langle \gamma^{(k_I)}(t'),\xi\rangle\frac{(t-t_I)^{k_I-1}}{(k_I-1)!}, \end{split} \] for some $t'\in I$. Since $t_I$ and $t'$ are both in $I$, they satisfy \eqref{eq_I_cond_1} and \eqref{eq_I_cond_2}. Combining with the fact that $|t-t_I|\sim |m|M_{j+1}^{-1}\gtrsim\max \{2^{-s_2},M_{j+1}^{-1}\}$, we obtain that \begin{align*} |\langle \gamma^{(1)} (t) ,\xi \rangle - \langle \gamma^{(1)} (t_I), \xi \rangle|\sim M_{j+1}2^{s_1} (2^{s_2} |m|M_{j+1}^{-1})^{k_I-1} \end{align*} for sufficiently large $|m| \gtrsim \max\{ M_{j+1}2^{-s_2}, 1 \}$. Since $|\langle \gamma^{(1)} (t_I), \xi \rangle|$ is minimal on $I$ and $k_I \geq 2$, the above estimate implies that \[ \begin{split} |\langle \gamma^{(1)} (t), \xi \rangle | \gtrsim |m| 2^{s_1+s_2}. \end{split} \] In particular, if the signs of $\langle \gamma^{(1)} (t), \xi \rangle$ and $\langle \gamma^{(1)} (t_I), \xi \rangle$ differ, we use that $$ |\langle \gamma^{(1)} (t),\xi\rangle| \geq \frac{1}{2} |\langle \gamma^{(1)} (t) ,\xi \rangle - \langle \gamma^{(1)} (t_I), \xi \rangle|. $$ \end{proof} Now we are ready to prove Lemma \ref{claim_1osc}. \begin{proof}[Proof of Lemma \ref{claim_1osc}] First, let us consider the case $s_2=0$. Lemma \ref{lem_XI_s_2=0} implies that \begin{align} \sum_{I_j\in\cI_j}|X_{I_j}|^2 &\lesssim \beta_{j+1}^2 M_j^{-2}2^{-2s_1} \#\{I_j \in \cI_j : I_j \subset E_j\} \nonumber \\ &\lesssim \beta_{j+1}^2\beta_j^{-1} M_j^{-1}2^{-2s_1} \norm{\mu_j}. \label{sum_XI_s2=0} \end{align} Therefore, we let $C(E_j,s_1,s_2)$ be a constant multiple of \eqref{sum_XI_s2=0}. Since $\beta_{j+1}^2 \beta_j^{-1} M_j^{-1} \lesssim_\e M_j^{-\alpha+\e}$, Lemma \ref{lem_est_mu_j} implies that a.s.
we have \eqref{eq_sum_Xj}, which is \begin{align*} C(E_j,s_1,s_2) \lesssim_\e M_j^{-\alpha+\e} 2^{-2s_1}\leq M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)}. \end{align*} Next, we assume that $s_2 \neq 0$. By the Cauchy--Schwarz inequality, we have \begin{align*} \sum_{I_j\in \mathcal{I}_j} |X_{I_j}|^2 =\sum_{I_j\in \mathcal{I}_j} |\sum_{\substack{I\in \mathcal{J}\\I\cap I_j\neq \emptyset}}X_{I_j\cap I}|^2 \leq \#\mathcal{J}\cdot\bigg(\sum_{I_j\in \mathcal{I}_j}\sum_{\substack{I\in \mathcal{J}\\I\cap I_j\neq \emptyset}} |X_{I_j\cap I}|^2\bigg). \end{align*} Since $\# \mathcal{J}$ is bounded, we obtain that \begin{align}\label{eq_Xestimate_0} \sum_{I_j\in \mathcal{I}_j} |X_{I_j}|^2 \lesssim \sum_{I\in \mathcal{J}}\sum_{\substack{I_j\in \cI_j\\ I\cap I_j\neq \emptyset}}|X_{I_j\cap I}|^2=\sum_{I \in \mathcal{J}}\sum_{|n|\leq M_j} |X_{I_j(t_I+nM_{j}^{-1})\cap I}|^2, \end{align} where $I_j(t_I+nM_j^{-1})$ is the interval in $\cI_j$ containing $t_I+nM_j^{-1}$. If $|n| \lesssim \max \{M_j2^{-s_2}, 1\}$, Corollary \ref{X_I_itrsc} implies that \begin{align}\label{X_n_small} |X_{I_j(t_I + nM_j^{-1}) \cap I} | \lesssim \beta_{j+1} M_j^{-1}2^{-s_1}. \end{align} If $|n | \gtrsim \max\{ M_j2^{-s_2},1\}$, we use the fact that $$I_j(t_I+nM_j^{-1})\subset \bigcup\limits_{|m-m_{j+1}n|\leq m_{j+1}}I_{j+1}(t_I+mM_{j+1}^{-1}),$$ to get \[ |X_{I_j(t_I + nM_j^{-1})\cap I}| \lesssim \beta_{j+1} \sum_{|m-m_{j+1} n| \leq m_{j+1}} \left| \int_{I_{j+1}(t_I + mM_{j+1}^{-1})\cap I} e(\langle \gamma(t), \xi \rangle ) dt \right| . \] Since $|m-m_{j+1}n|\leq m_{j+1}$ and $|n| \gtrsim \max\{ M_j 2^{-s_2}, 1\}$ imply that $|m|\gtrsim \max\{M_{j+1}2^{-s_2},1\}$, we apply Lemma \ref{lem_lb_1st_der} to get \begin{align*} |\langle\gamma^{(1)}(t),\xi\rangle|\gtrsim |m|2^{s_1+s_2}\,,\qquad \forall t\in I_{j+1}(t_I+mM_{j+1}^{-1})\cap I, \end{align*} which yields that $H_{I_{j+1}(t_I+mM_{j+1}^{-1})\cap I}(\xi)\gtrsim |m|2^{s_1+s_2}$.
By Lemma \ref{lem_osci_est}, we obtain that \begin{align*} \left| \int_{I_{j+1}(t_I + mM_{j+1}^{-1})\cap I} e(\langle \gamma(t), \xi \rangle ) dt \right| \lesssim |m|^{-1}2^{-s_1-s_2}. \end{align*} Hence, we have \begin{equation}\label{eq_X_estimate_3} |X_{I_j(t_I + nM_j^{-1})\cap I} | \lesssim \beta_{j+1} |n|^{-1} 2^{-s_1} 2^{-s_2}. \end{equation} Now we are ready to establish an upper bound for $\sum_{I_j\in \mathcal{I}_j}|X_{I_j}|^2$. It suffices to sum over the intervals $I_j\subset E_j$ and note that for $r \geq M_j^{-1}$, \begin{align*} \#\{I_j\subset E_j : I_j\subset B(x,r) \}\lesssim \beta_j^{-1}M_j \mu_j(B(x,r)). \end{align*} Using \eqref{X_n_small} and \eqref{eq_X_estimate_3}, we obtain that \begin{align} &\sum_{I\in \mathcal{J}}\sum_{|n|\lesssim \max \{M_j2^{-s_2},1\}} |X_{I_j(t_I+nM_{j}^{-1})\cap I}|^2 \nonumber\\ & \hspace{2cm}\lesssim \beta_{j+1}^2 M_j^{-2} 2^{-2s_1} \#\{I_j\subset E_j: I_j\subset B(t_I, 2\max \{2^{-s_2},M_j^{-1}\})\}\nonumber\\ &\hspace{2cm} \leq \beta_{j+1}^2\beta_j^{-1} M_j^{-1} 2^{-2s_1} \mu_j(B(t_I, 2\max \{2^{-s_2},M_j^{-1}\}))\label{sum_C_11} \end{align} and that \begin{align} &\sum_{I\in \mathcal{J}}\sum_{|n|\gtrsim \max \{M_j2^{-s_2},1\}} |X_{I_j(t_I+nM_{j}^{-1})\cap I}|^2 \nonumber\\ &\hspace{2cm} \lesssim \beta_{j+1}^2 2^{-2s_1-2s_2} \sum_{2^t \gtrsim \max\{ M_j 2^{-s_2},1\} } 2^{-2t}\#\{I_j\subset E_j: I_j\subset B(t_I, 2^{t+1}M_{j}^{-1})\}\nonumber\\ &\hspace{2cm} \leq \beta_{j+1}^2 \beta_j^{-1} M_j 2^{-2s_1-2s_2} \sum_{2^t \gtrsim \max\{ M_j 2^{-s_2},1\} } 2^{-2t} \mu_j(B(t_I, 2^{t+1} M_j^{-1})).\label{sum_C_22} \end{align} By \eqref{eq_Xestimate_0}, we can let $C(E_j,s_1,s_2) $ be a constant multiple of the sum of \eqref{sum_C_11} and \eqref{sum_C_22}. By Lemma \ref{lem_est_mu_ball_r}, a.s.
we obtain that \begin{align*} \beta_{j+1}^2\beta_j^{-1} M_j^{-1} 2^{-2s_1} \mu_j(B(t_I, 2\max \{2^{-s_2},M_j^{-1}\})) \lesssim_\e M_j^{-\alpha+\e}2^{-2s_1} (\max\{2^{-s_2}, M_j^{-1}\})^\alpha \end{align*} and that \begin{align*} \beta_{j+1}^2 \beta_j^{-1} M_j 2^{-2s_1-2s_2} &\sum_{2^t \gtrsim \max\{ M_j 2^{-s_2},1\} } 2^{-2t} \mu_j(B(t_I, 2^{t+1} M_j^{-1}))\\ &\hspace{2cm}\lesssim_\e M_j^{2-2\alpha+\e} 2^{-2s_1-2s_2} \sum_{2^t \gtrsim \max\{ M_j 2^{-s_2},1\} } 2^{ -(2-\alpha)t}\\ &\hspace{2cm}\lesssim_\e M_j^{2-2\alpha+\e} 2^{-2s_1-2s_2} (\max\{ M_j 2^{-s_2},1\} )^{ -2+\alpha}\\ &\hspace{2cm}\lesssim_\e M_j^{-\alpha+\e} 2^{-2s_1-2s_2} (\max\{ 2^{-s_2},M_j^{-1}\} )^{ -2+\alpha}. \end{align*} Combining the two estimates with the fact that $ 2^{s_2}\leq 2^{s_1}M_j$, a.s. we get \begin{align*} C(E_j,s_1,s_2) &\lesssim_\e M_j^{-\alpha+\e} 2^{-2s_1} \left( \max \{2^{-s_2\alpha} , M_j^{-\alpha}\}+ 2^{-2s_2} (\max\{ 2^{-s_2},M_j^{-1}\} )^{ -2+\alpha}\right) \\ &\lesssim_\e M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)} 2^{-s_2\alpha}. \end{align*} \end{proof} In order to prove Lemma \ref{claim_2osc}, we need to divide $[0,1]$ into the set where Lemma \ref{lem_lb_1st_der} is applicable and the set where it is not. For fixed $\xi\in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$, we define \begin{align*} \Lambda_1&=\big\{ t\in [0,1]: \exists k\geq 2 \text{ such that } |\langle \gamma^{(k)}(t) , \xi \rangle | \geq 10^{-8}(M_{j} 2^{s_1}) 2^{s_2 (k-1)}\big\} \end{align*} and $\Lambda_2 := [0,1] \backslash \Lambda_1$. Assume that $\xi \in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$ for $s_2 \neq 0$. Observe that since $\xi \in \Omega_{3,j+1}(s_1,s_2)$, there exists $t_0$ such that \begin{align}\label{cond_t_00} |\langle \gamma^{(k)}(t_0), \xi \rangle | \leq M_{j+1} 2^{s_1} 2^{s_2(k-1)},\qquad \forall 1 \leq k \leq 3.
\end{align} Also, since $\xi \notin \Omega_{3,j+1}(s_1,s_2-1)$, we have \begin{equation}\label{cond_t_0} |\langle \gamma^{(2)}(t_0), \xi \rangle | > M_{j+1} 2^{s_1} 2^{(s_2-1)} \qquad \text{or} \qquad |\langle \gamma^{(3)}(t_0), \xi \rangle | > M_{j+1} 2^{s_1} 2^{2(s_2-1)}. \end{equation} Note that $ |\langle \gamma^{(1)}(t_0), \xi \rangle | > M_{j+1} 2^{s_1} $ cannot hold because of \eqref{cond_t_00}. Hence, $t_0 \in \Lambda_1$ and $\Lambda_1$ is nonempty. Since we cannot use Lemma \ref{lem_lb_1st_der} on $\Lambda_2$, we need an additional lemma. \begin{lemma}\label{I_2_est} For $\xi \in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$ with $s_2 \neq 0$, assume that $$M_{j+1}2^{s_1+s_2+s_3-1} \leq |\xi_3|\leq M_{j+1} 2^{s_1+s_2+s_3}$$ where $ 2^{s_3} \leq 10^{-8} 2^{s_2}$. If $2^{s_3} < 100^{-1}$, then $\Lambda_2$ is empty. If $100^{-1} \leq 2^{s_3}$, then for $t_0 \in \Lambda_1$ defined above and for any $t \in \Lambda_2$, we have $$ |t-t_0| \sim 2^{-s_3} $$ and \begin{equation}\label{gamma1_lb_d=3} |\langle \gamma^{(1)} (t), \xi \rangle | \gtrsim M_{j+1}2^{s_1+s_2-s_3}. \end{equation} \end{lemma} \begin{proof} Since $\xi_3 \neq 0$, $\langle \gamma(t), \xi\rangle$ is a polynomial in $t$ of degree $3$, which implies that \begin{equation}\label{est_xi3} |\langle \gamma^{(3)} (t), \xi \rangle | =6 |\xi_3| \sim M_{j+1}2^{s_1+s_2+s_3} \qquad \forall t \in [0,1]. \end{equation} Since $2^{s_3} \leq 10^{-8} 2^{s_2}$, \eqref{cond_t_0} implies that \begin{equation}\label{est_t_0} M_{j+1} 2^{s_1} 2^{(s_2-1)} < |\langle \gamma^{(2)}(t_0), \xi \rangle | \leq M_{j+1} 2^{s_1} 2^{s_2}. \end{equation} Also, for any $t \in \Lambda_2$, we have \begin{equation}\label{est_t} |\langle \gamma^{(2)} (t), \xi \rangle| \leq 10^{-8}M_{j+1} 2^{s_1+s_2}. \end{equation} For each such $t$, observe that \[ |\langle \gamma^{(2)} (t), \xi \rangle - \langle \gamma^{(2)} (t_0), \xi \rangle | = |\langle \gamma^{(3)} (t_0), \xi \rangle (t-t_0)| =6 |\xi_3| |t-t_0|.
\] Combining \eqref{est_xi3}, \eqref{est_t_0} and \eqref{est_t}, we obtain that \begin{align}\label{range_t} 100^{-1} 2^{-s_3} \leq |t-t_0 | \leq 100\cdot 2^{-s_3}. \end{align} Since $\Lambda_2 \subseteq [0,1]$ and \eqref{range_t} forces $|t-t_0|>1$ when $2^{s_3} < 100^{-1}$, the set $\Lambda_2$ is empty if $2^{s_3} < 100^{-1}$. If $100^{-1} \leq 2^{s_3}$, then we have $|t-t_0| \sim 2^{-s_3}$. For the lower bound of $|\langle \gamma^{(1)} (t), \xi \rangle |$, we use the Taylor expansion \eqref{eq_taylor} and obtain that \[ \langle \gamma^{(1) } (t_0) , \xi \rangle = \langle \gamma^{(1)} (t),\xi \rangle + \frac{\langle \gamma^{(2)} (t),\xi \rangle}{2} (t_0-t) + \frac{\langle \gamma^{(3)} (t),\xi \rangle}{6} (t_0-t)^2. \] Using \eqref{est_xi3}, \eqref{est_t} and \eqref{range_t}, we get \[ \begin{split} |\langle \gamma^{(1)} (t_0 ), \xi \rangle| &\leq M_{j+1}2^{s_1},\\ |\langle \gamma^{(2)} (t), \xi \rangle (t_0-t)| &\leq 10^{-6} M_{j+1} 2^{s_1+s_2-s_3}, \end{split} \] while \[ |\langle \gamma^{(3)} (t), \xi \rangle (t_0-t)^2| \geq 10^{-4}M_{j+1} 2^{s_1+s_2-s_3}. \] Thus, we obtain \eqref{gamma1_lb_d=3} as desired. \end{proof} \begin{remark} The proof of Lemma \ref{I_2_est} works only when $d=3$, since we used that $|\langle \gamma^{(3)} (t), \xi \rangle |$ is constant in $t$. \end{remark} Finally, we prove Lemma \ref{claim_2osc}. \begin{proof}[Proof of Lemma \ref{claim_2osc}] If $s_2=0$, the proof is the same as that of Lemma \ref{claim_1osc}. Thus, it suffices to assume that $s_2 \neq 0$. First, we divide $\Lambda_1$ into finitely many intervals $I$, for each of which there exists $k_I\in\{2,3\}$ satisfying \begin{equation}\label{eq_I_cond_1_d=3} |\langle \gamma^{(k_I)} (t), \xi \rangle | \geq 10^{-8} M_j 2^{s_1 + (s_2-1)(k_I-1)}, \qquad \forall t \in I , \end{equation} and \begin{equation}\label{eq_I_cond_2_d=3} |\langle \gamma^{(k)} (t), \xi \rangle | \leq 10^{-8} M_j 2^{s_1 + (s_2-1)(k-1)}, \qquad \forall t \in I , \forall 2 \leq k < k_I. \end{equation} We denote the set of such intervals by $\mathcal{J}_1$.
Note that $\# \mathcal{J}_1$ is uniformly bounded in $\xi$, $s_1$ and $s_2$. The set $\Lambda_2$ is a connected interval since $\langle \gamma(t),\xi \rangle $ is a polynomial in $t$ of degree at most $3$. Thus, we do not need to divide $\Lambda_2$ further. Since $\# \mathcal{J}_1 \lesssim 1$, we have \[ \begin{split} \sum_{I_j\in \mathcal{I}_j} |X_{I_j}|^2 &\lesssim\sum_{I_j\in \mathcal{I}_j} |\sum_{\substack{I\in \mathcal{J}_1\\I\cap I_j\neq \emptyset}}X_{I_j\cap I}|^2 + \sum_{I_j\in \mathcal{I}_j} |X_{I_j\cap \Lambda_2}|^2. \end{split} \] For the first sum, we can use the same argument from Lemma \ref{claim_1osc} with \eqref{eq_I_cond_1} and \eqref{eq_I_cond_2} replaced by \eqref{eq_I_cond_1_d=3} and \eqref{eq_I_cond_2_d=3} respectively. Thus, there exists $C_1(E_j,s_1,s_2)$ such that $$ \sum_{I_j\in \mathcal{I}_j} |\sum_{\substack{I\in \mathcal{J}_1\\I\cap I_j\neq \emptyset}}X_{I_j\cap I}|^2 \leq C_1(E_j,s_1,s_2) $$ conditioned on $E_j$, and a.s. we have $$ C_1(E_j,s_1,s_2) \lesssim_\e M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)} 2^{-s_2\alpha} $$ uniformly in $\xi$ and $E_j$. For the given $\xi \in \widetilde{\Omega}_{3,j+1}(s_1,s_2)$, let us assume that $|\xi_3| \sim M_{j+1} 2^{s_1+s_2+s_3}$ where $2^{s_3} \leq 2^{s_2} $. We will find an estimate on the second sum which is uniform in $s_3$. If $2^{s_3} < 100^{-1} $, $\Lambda_2$ is empty by Lemma \ref{I_2_est}. If $2^{s_3} \sim 2^{s_2}$, then \begin{equation*} |\langle \gamma^{(3)} (t), \xi \rangle | \sim M_j 2^{s_1 + 2s_2}, \qquad \forall t \in [0,1] , \end{equation*} and \begin{equation*} |\langle \gamma^{(2)} (t), \xi \rangle | \lesssim M_j 2^{s_1 + s_2}, \qquad \forall t \in \Lambda_2. \end{equation*} These correspond to \eqref{eq_I_cond_1} and \eqref{eq_I_cond_2} respectively with $k_I=3$ and $I=[0,1]$. Thus, we can use the argument from Lemma \ref{claim_1osc} again. Therefore, it suffices to consider the case $100^{-1} \leq 2^{s_3} \leq 10^{-8} 2^{s_2}$.
If $I_j(t_0+nM_j^{-1}) \cap \Lambda_2 \neq \emptyset$ for $|n| \gtrsim 1$, Lemmas \ref{lem_osci_est} and \ref{I_2_est} imply that $ |n| \sim M_j2^{-s_3}$ and \[ |X_{I_j(t_0+nM_j^{-1})\cap \Lambda_2}| \lesssim \beta_{j+1} M_j^{-1} 2^{-s_1-s_2+s_3}. \] Since $|n|M_j^{-1} \sim 2^{-s_3} \gtrsim M_j^{-1}$, the number of intervals $I_j(t_0+nM_j^{-1})$ with $I_j(t_0+nM_j^{-1}) \cap \Lambda_2 \neq \emptyset$ is bounded by $\beta_j^{-1} M_j \mu_j( B(t_0, C2^{-s_3})) $ for sufficiently large $C$, say $200$. If $|n| \lesssim 1 $, we use Corollary \ref{X_I_itrsc} and get \[ |X_{I_j(t_0+nM_j^{-1})\cap \Lambda_2}| \lesssim \beta_{j+1} M_j^{-1} 2^{-s_1}. \] Therefore, \begin{align*} \sum_{I_j\in \cI_j} |X_{I_j\cap \Lambda_2}|^2 \lesssim &\beta_{j+1}^2 M_j^{-2} 2^{-2s_1}\\ &+ \beta_{j+1}^2\beta_j^{-1} M_j^{-1} 2^{-2s_1-2s_2+2s_3} \mu_j( B(t_0, C2^{-s_3}))w_{j}(s_3), \end{align*} where $w_j(s_3) = 1 $ if $2^{s_3} \leq C^{-1} M_j$ and $w_j(s_3) =0$ otherwise. Thus, we let $C_2(E_j,s_1,s_2)$ be a constant multiple of the right-hand side. Now, we find an upper bound of $C_2(E_j,s_1,s_2)$. Since $2^{s_2} \leq M_j 2^{s_1}$, we get \begin{align*} \beta_{j+1}^2 M_j^{-2} 2^{-2s_1} \lesssim_\e M_j^{-2\alpha+\e}2^{-2s_1}\leq M_j^{-\alpha+\e}2^{-2s_1+(s_1-s_2)\alpha}. \end{align*} Using Lemma \ref{lem_est_mu_ball_r} and the fact that $s_3 \leq s_2$, a.s. we have \begin{align*} \beta_{j+1}^2\beta_j^{-1} M_j^{-1} 2^{-2s_1-2s_2+2s_3} \mu_j( B(t_0, C2^{-s_3}))w_j(s_3) &\lesssim_\e M_j^{-\alpha +\e} 2^{-2s_1-2s_2+(2-\alpha)s_3}\\ &\lesssim_\e M_j^{-\alpha +\e} 2^{-2s_1- s_2\alpha}. \end{align*} Combining the two estimates, a.s. we obtain that \begin{align*} C_2(E_j,s_1,s_2) \lesssim_\e M_j^{-\alpha+\e} 2^{-s_1(2-\alpha)} 2^{-s_2\alpha}. \end{align*} Let $C(E_j,s_1,s_2) := C_1(E_j,s_1,s_2)+C_2(E_j,s_1,s_2)$. Then, $C(E_j,s_1,s_2)$ satisfies the desired properties. \end{proof} \bibliographystyle{abbrv} \bibliography{ref} \end{document}
2412.19960v1
http://arxiv.org/abs/2412.19960v1
Numerical Linear Algebra: Least Squares, QR and SVD
\documentclass[12pt]{article} \input{headlines} \title{\large\sffamily\bfseries Lecture Notes \\ \LARGE{Numerical Linear Algebra}\\ \large{Least Squares, QR and SVD}} \author{\Large\sffamily Davoud Mirzaei\\ Uppsala University} \date{\sffamily September 1, 2023} \begin{document} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \setlength{\abovedisplayshortskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \maketitle \definecolor{contcol1}{HTML}{72E094} \definecolor{contcol2}{HTML}{24E2D6} \definecolor{convcol1}{HTML}{C0392B} \definecolor{convcol2}{HTML}{8E44AD} \begin{tcolorbox}[title=Contents, fonttitle=\huge\sffamily\bfseries\selectfont,interior style={left color=contcol1!10!white,right color=contcol2!10!white},frame style={left color=contcol1!30!white,right color=contcol2!30!white},coltitle=black,top=2mm,bottom=2mm,left=2mm,right=2mm,drop fuzzy shadow,enhanced,breakable] \makeatletter \@starttoc{toc} \makeatother \end{tcolorbox} \vspace*{10mm} \thispagestyle{empty} \newpage \pagenumbering{arabic} \input{lec2_part1} \input{lec2_part2} \input{lec2_part3} \input{workouts_2} \begin{thebibliography}{9} \bibitem{Datta:2010} B. N. Datta, {\em Numerical Linear Algebra and Applications}, 2nd edition, SIAM, Philadelphia, PA, 2010. \bibitem{Elden:2019} L. Eld\'{e}n, {\em Matrix Methods in Data Mining and Pattern Recognition}, 2nd edition, SIAM, Philadelphia, PA, 2019. \bibitem{Heath:2018} M. T. Heath, {\em Scientific Computing, an Introductory Survey}, revised 2nd edition, SIAM, Philadelphia, PA, 2018. \bibitem{Trefethen-Bau:1997} L. N. Trefethen, D. Bau III, {\em Numerical Linear Algebra,} SIAM, 1997.
\end{thebibliography} \end{document} \usepackage[english]{babel} \usepackage{graphicx,caption} \usepackage{framed} \usepackage[normalem]{ulem} \usepackage{indentfirst} \usepackage{amsmath,amsthm,amssymb,amsfonts} \usepackage[italicdiff]{physics} \usepackage[T1]{fontenc} \usepackage{mathtools,extarrows} \usepackage{setspace} \doublespacing \usepackage{wrapfig} \usepackage{lmodern,mathrsfs} \usepackage[inline,shortlabels]{enumitem} \setlist{topsep=2pt,itemsep=2pt,parsep=3pt,partopsep=2pt} \renewcommand{\baselinestretch}{1.25} \usepackage[dvipsnames]{xcolor} \usepackage[utf8]{inputenc} \usepackage[a4paper, top=0.8in,bottom=0.8in, left=0.8in, right=0.8in, footskip=0.3in, includefoot]{geometry} \usepackage[most]{tcolorbox} \usepackage{tikz,tikz-3dplot,tikz-cd,tkz-tab,tkz-euclide,pgf,pgfplots} \pgfplotsset{compat=newest} \usepackage{multicol} \usepackage[bottom,multiple]{footmisc} \usepackage[backend=bibtex,style=numeric]{biblatex} \renewcommand*{\finalnamedelim}{\addcomma\addspace} \addbibresource{bibliography} \usepackage[nameinlink]{cleveref} \usepackage{graphicx} \graphicspath{{figures/}} \numberwithin{equation}{section} \usepackage{framed,color} \definecolor{shadecolor}{rgb}{0.9412,0.9725, 1} \setlength{\FrameSep}{4pt} \usepackage{caption} \captionsetup[figure]{font=small} \usepackage{tcolorbox} \usepackage{multirow} \usepackage{epstopdf} \usepackage{listings} \usepackage[]{mcode} \usepackage{algorithm} \usepackage{latexsym} \usepackage{showidx} \renewcommand{\vec}{{\rm vec}} \newcommand{\eqnref}[1]{(\ref{#1})} \newcommand{\ignore}[1]{} \usepackage{todonotes} \newcommand{\bhtodo}{\todo[color=yellow]} \newcommand{\yntodo}{\todo[color=green]} \usepackage{tikz} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=1pt] (char) {#1};}} \newcommand{\remind}[1]{\textcolor{red}{\textbf{#1}}} \newcommand{\hide}[1]{} \renewcommand{\qed}{\hfill\blacksquare} \newcommand{\qedwhite}{\hfill \ensuremath{\Box}}
\newcommand{\ep}{\varepsilon} \newcommand{\vp}{\varphi} \newcommand{\lam}{\lambda} \newcommand{\Lam}{\Lambda} \renewcommand{\ip}[1]{\ensuremath{\left\langle#1\right\rangle}} \newcommand{\floor}[1]{\ensuremath{\left\lfloor#1\right\rfloor}} \newcommand{\ceil}[1]{\ensuremath{\left\lceil#1\right\rceil}} \newcommand{\C}{\mathbb{C}} \newcommand{\R}{\mathbb{R}} \newcommand{\As}{\mathcal{A}} \newcommand{\Bs}{\mathcal{B}} \newcommand{\Cs}{\mathcal{C}} \newcommand{\Ds}{\mathcal{D}} \newcommand{\Es}{\mathcal{E}} \newcommand{\Fs}{\mathcal{F}} \newcommand{\Gs}{\mathcal{G}} \newcommand{\Hs}{\mathcal{H}} \newcommand{\Is}{\mathcal{I}} \newcommand{\Js}{\mathcal{J}} \newcommand{\Ks}{\mathcal{K}} \newcommand{\Ls}{\mathcal{L}} \newcommand{\Ms}{\mathcal{M}} \newcommand{\Ns}{\mathcal{N}} \newcommand{\Os}{\mathcal{O}} \newcommand{\Ps}{\mathcal{P}} \newcommand{\Qs}{\mathcal{Q}} \newcommand{\Rs}{\mathcal{R}} \newcommand{\Ss}{\mathcal{S}} \newcommand{\Ts}{\mathcal{T}} \newcommand{\Us}{\mathcal{U}} \newcommand{\Vs}{\mathcal{V}} \newcommand{\Ws}{\mathcal{W}} \newcommand{\Xs}{\mathcal{X}} \newcommand{\Ys}{\mathcal{Y}} \newcommand{\Zs}{\mathcal{Z}} \newcommand{\ab}{\textbf{a}} \newcommand{\cb}{\textbf{c}} \newcommand{\db}{\textbf{d}} \newcommand{\ub}{\textbf{u}} \newcommand{\wb}{\textbf{w}} \newcommand{\xb}{\textbf{x}} \newcommand{\yb}{\textbf{y}} \newcommand{\zb}{\textbf{z}} \newcommand{\Ab}{\textbf{A}} \newcommand{\Bb}{\textbf{B}} \newcommand{\Cb}{\textbf{C}} \newcommand{\Db}{\textbf{D}} \newcommand{\eb}{\textbf{e}} \newcommand{\abar}{\overline{a}} \newcommand{\bbar}{\overline{b}} \newcommand{\cbar}{\overline{c}} \newcommand{\dbar}{\overline{d}} \newcommand{\ubar}{\overline{u}} \newcommand{\vbar}{\overline{v}} \newcommand{\wbar}{\overline{w}} \newcommand{\xbar}{\overline{x}} \newcommand{\ybar}{\overline{y}} \newcommand{\zbar}{\overline{z}} \newcommand{\Abar}{\overline{A}} \newcommand{\Bbar}{\overline{B}} \newcommand{\Cbar}{\overline{C}} \newcommand{\Dbar}{\overline{D}} 
\newcommand{\Ubar}{\overline{U}} \newcommand{\Vbar}{\overline{V}} \newcommand{\Wbar}{\overline{W}} \newcommand{\Xbar}{\overline{X}} \newcommand{\Ybar}{\overline{Y}} \newcommand{\Zbar}{\overline{Z}} \newcommand{\Aint}{A^\circ} \newcommand{\Bint}{B^\circ} \newcommand{\limk}{\lim_{k\to\infty}} \newcommand{\limm}{\lim_{m\to\infty}} \newcommand{\limn}{\lim_{n\to\infty}} \newcommand{\limx}[1][a]{\lim_{x\to#1}} \newcommand{\liminfm}{\liminf_{m\to\infty}} \newcommand{\limsupm}{\limsup_{m\to\infty}} \newcommand{\liminfn}{\liminf_{n\to\infty}} \newcommand{\limsupn}{\limsup_{n\to\infty}} \newcommand{\sumkn}{\sum_{k=1}^n} \newcommand{\emp}{\varnothing} \newcommand{\exc}{\backslash} \newcommand{\sub}{\subseteq} \newcommand{\sups}{\supseteq} \newcommand{\capp}{\bigcap} \newcommand{\cupp}{\bigcup} \newcommand{\kupp}{\bigsqcup} \newcommand{\cappkn}{\bigcap_{k=1}^n} \newcommand{\cuppkn}{\bigcup_{k=1}^n} \newcommand{\kuppkn}{\bigsqcup_{k=1}^n} \newcommand{\cappa}{\bigcap_{\alpha\in I}} \newcommand{\cuppa}{\bigcup_{\alpha\in I}} \newcommand{\kuppa}{\bigsqcup_{\alpha\in I}} \newcommand{\fl}{\mathrm{fl}} \newcommand{\Rx}{\overline{\mathbb{R}}} \newcommand{\dx}{\,dx} \newcommand{\dy}{\,dy} \newcommand{\dt}{\,dt} \newcommand{\dax}{\,d\alpha(x)} \newcommand{\dbx}{\,d\beta(x)} \DeclareMathOperator{\glb}{\text{glb}} \DeclareMathOperator{\lub}{\text{lub}} \newcommand{\wh}{\widehat} \newcommand{\wt}{\widetilde} \newcommand{\xh}{\widehat{x}} \newcommand{\yh}{\widehat{y}} \newcommand{\zh}{\widehat{z}} \newcommand{\sh}{\widehat{s}} \newcommand{\spann}{\mathrm{span}} \newcommand{\range}{\mathrm{range}} \newcommand{\nul}{\mathrm{null}} \newcommand{\cond}{\mathrm{cond}} \newcommand{\rankk}{\mathrm{rank}} \newcommand{\diagg}{\mathrm{diag}} \newcommand{\varr}{\mathrm{var}} \newcommand{\covv}{\mathrm{cov}} \def\bfm#1{\protect{\makebox{\boldmath $#1$}}} \def\bell {\bfm{\ell}} \def\btimes {\bfm{\times}} \def\b {\bfm{b}} \def\a {\bfm{a}} \def\c {\bfm{c}} \def\y {\bfm{y}} \def\z {\bfm{z}} \def\q {\bfm{q}} 
\def\v{\bfm{v}} \def\u {\bfm{u}} \def\w {\bfm{w}} \def\r {\bfm{r}} \def\bep {\bfm{\varepsilon}} \def\bdelta {\bfm{\delta}} \newcommand{\bb}{\textbf{\emph{b}}} \newcommand{\x}{\textbf{\emph{x}}} \newcommand{\cc}{\textbf{\emph{c}}} \newcommand{\<}{\langle} \renewcommand{\>}{\rangle} \renewcommand{\iff}{\Leftrightarrow} \DeclareMathOperator{\im}{\text{im}} \let\spn\relax\let\Re\relax\let\Im\relax \DeclareMathOperator{\spn}{\text{span}} \DeclareMathOperator{\Re}{\text{Re}} \DeclareMathOperator{\Im}{\text{Im}} \DeclareMathOperator{\diag}{\text{diag}} \newtheoremstyle{mystyle}{}{}{}{}{\sffamily\bfseries}{.}{ }{} \newtheoremstyle{cstyle}{}{}{}{}{\sffamily\bfseries}{.}{ }{\thmnote{#3}} \makeatletter \renewenvironment{proof}[1][\proofname] {\par\pushQED{\qed}{\normalfont\sffamily\bfseries\topsep6\p@\@plus6\p@\relax #1\@addpunct{.} }}{\popQED\endtrivlist\@endpefalse} \makeatother \renewcommand{\qedsymbol}{\coolqed{0.32}} \theoremstyle{mystyle}{\newtheorem{definition}{Definition}[section]} \theoremstyle{mystyle}{\newtheorem{theorem}[definition]{Theorem}} \theoremstyle{mystyle}{\newtheorem{lemma}[definition]{Lemma}} \theoremstyle{mystyle}{\newtheorem{corollary}[definition]{Corollary}} \theoremstyle{mystyle}{\newtheorem*{atten}{}} \theoremstyle{mystyle}{\newtheorem*{remarks}{Remarks}} \theoremstyle{mystyle}{\newtheorem*{examples}{Examples}} \theoremstyle{mystyle}{\newtheorem{workout}[definition]{Workout}} \theoremstyle{mystyle}{\newtheorem{miniproject}[definition]{PY programming}} \theoremstyle{mystyle}{\newtheorem{labexercise}[definition]{Lab Exercise}} \usepackage{amsmath} \newtheorem{remark}{Remark}[section] \newtheorem{example}{Example}[section] \newtheorem{exercise}{Exercise}[section] \newtheorem{proposition}{Proposition}[section] \theoremstyle{cstyle}{\newtheorem*{cthm}{}} \newtheoremstyle{warn}{}{}{}{}{\normalfont}{}{ }{} \theoremstyle{warn} \newtheorem*{warning}{\warningsign{0.2}\relax} \newcommand{\warningsign}[1]{\tikz[scale=#1,every node/.style={transform 
shape}]{\draw[-,line width={#1*0.8mm},red,fill=yellow,rounded corners={#1*2.5mm}] (0,0)--(1,{-sqrt(3)})--(-1,{-sqrt(3)})--cycle; \node at (0,-1) {\fontsize{48}{60}\selectfont\bfseries!};}} \tcolorboxenvironment{definition}{boxrule=0pt,boxsep=0pt,colback={red!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{red},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{proposition}{boxrule=0pt,boxsep=0pt,colback={Orange!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{Orange},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{theorem}{boxrule=0pt,boxsep=0pt,colback={blue!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{blue},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{lemma}{boxrule=0pt,boxsep=0pt,colback={blue!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{blue},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{corollary}{boxrule=0pt,boxsep=0pt,colback={violet!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{violet},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{proof}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{CadetBlue!80!white},left=8pt,right=8pt,sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{remark}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Green},left=8pt,right=8pt,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{remarks}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Green},left=8pt,right=8pt,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{example}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Black},left=8pt,right=8pt,sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{examples}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Black},left=8pt,right=8pt,sharp 
corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{cthm}{boxrule=0pt,boxsep=0pt,colback={gray!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{gray},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{workout}{boxrule=0pt,boxsep=0pt,colback={Cyan!0},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{Cyan},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{miniproject}{boxrule=0pt,boxsep=0pt,colback={gray!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{gray},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{labexercise}{boxrule=0pt,boxsep=0pt,colback={green!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{green},sharp corners,before skip=10pt,after skip=10pt,breakable} \newenvironment{talign}{\let\displaystyle\textstyle\align}{\endalign} \newenvironment{talign*}{\let\displaystyle\textstyle\csname align*\endcsname}{\endalign} \usepackage[explicit]{titlesec} \titleformat{\section}{\fontsize{18}{30}\sffamily\bfseries}{\thesection}{18pt}{#1} \titleformat{\subsection}{\fontsize{15}{18}\sffamily\bfseries}{\thesubsection}{15pt}{#1} \titleformat{\subsubsection}{\fontsize{10}{12}\sffamily\large\bfseries}{\thesubsubsection}{8pt}{#1} \titlespacing*{\section}{0pt}{5pt}{5pt} \titlespacing*{\subsection}{0pt}{5pt}{5pt} \titlespacing*{\subsubsection}{0pt}{5pt}{5pt} \newcommand{\Disp}{\displaystyle} \newcommand{\qe}{\hfill\(\bigtriangledown\)} \DeclareMathAlphabet\mathbfcal{OMS}{cmsy}{b}{n} \setlength{\parindent}{0.2in} \setlength{\parskip}{0pt} \setlength{\columnseprule}{0pt} \usepackage{tkz-euclide} \usetikzlibrary{angles,quotes} \usetikzlibrary{shapes, calc, decorations} \newcommand{\vsp}{\vspace{0.3cm}} \newcommand\colortxt[2][]{\tikz[baseline=(char.base)]\node[minimum width=2em,text height=2ex, fill=yellow!100,text=black,#1](char){#2};} These lecture notes focus on some numerical linear algebra 
algorithms in scientific computing. We assume that students are familiar with elementary linear algebra concepts such as vector spaces, systems of equations, matrices, norms, eigenvalues, and eigenvectors. In the numerical part, we do not pursue Gaussian elimination and other LU factorization algorithms for square systems. Instead, we mainly focus on overdetermined systems, least squares solutions, orthogonal factorizations, and some applications to data analysis and other areas. The reference textbooks \cite{Datta:2010}, \cite{Heath:2018}, and \cite{Trefethen-Bau:1997} are our main sources in this lecture. \section{Least squares problem} Let \( A \in \mathbb{R}^{m \times n} \), where \( m > n \). The system \[ Ax = b \] for a given vector \( b \in \mathbb{R}^m \) and solution \( x \in \mathbb{R}^n \) is termed {\bf overdetermined} because it contains more equations than unknowns. This system essentially asks whether \( b \) can be expressed as a linear combination of the columns of \( A \): $$ b = x_1a_{\cdot 1} + x_2a_{\cdot 2} + \cdots + x_na_{\cdot n}. $$ For the square system (the case \( m = n \)), the answer is ``yes'' provided that the column vectors \( \{a_{\cdot 1}, a_{\cdot 2}, \ldots, a_{\cdot n}\} \) are linearly independent (or equivalently for a nonsingular matrix \( A \)). However, for overdetermined systems (\( m > n \)), the answer is usually ``no'' unless \( b \) happens to lie in the span of \( \{a_{\cdot 1}, a_{\cdot 2}, \ldots, a_{\cdot n}\} \) (often denoted \( \text{span}(A) \) or \( \text{range}(A) \)), which is highly unlikely in most applications. Therefore, in general, such a system has no solution. To illustrate this, consider the case where \( m = 3 \) and \( n = 2 \). In this scenario, \( a_{\cdot 1} \) and \( a_{\cdot 2} \) represent two vectors in \( \mathbb{R}^3 \). If \( a_{\cdot 1} \) and \( a_{\cdot 2} \) are linearly independent, then their span forms a plane (a \( 2 \)-dimensional subspace) in \( \mathbb{R}^3 \). 
The system \( Ax = b \) has a solution if \( b \) lies in that plane; otherwise, the system has no solution. The probability of a vector \( b \in \mathbb{R}^3 \) lying in a given plane is zero. In such situations, one obvious alternative to ``solving the linear system exactly'' is to minimize the length of the residual vector \[ r = b - (x_1a_{\cdot 1} + x_2a_{\cdot 2} + \cdots + x_na_{\cdot n}) = b - Ax . \] The solution to the problem depends on how we measure the length of the residual vector. We prefer the \( 2 \)-norm, although any norm could be used. The \( 2 \)-norm is induced by the inner product, thus is related to the notion of orthogonality, and it is smooth and strictly convex. These properties make the theory and computation with this norm much easier than with other norms. With the \( 2 \)-norm, the solution is the vector \( x \) that minimizes the sum of squares of the differences between the components of \( b \) and \( Ax \). This method is known as {\bf least squares}. In the least squares method, we seek an optimal vector that solves the minimization problem: \begin{equation}\label{leastsquaredef} \min_{x \in \mathbb{R}^n} \|Ax - b\|_2. \vsp \end{equation} As we pointed out, the solution \( x \) of this problem (which always exists) may not exactly satisfy \( Ax = b \). To reflect the lack of exact equality, we may write the linear least squares problem as \[ Ax \cong b, \] where the approximation is understood in the least squares sense. \vsp \begin{example}[\cite{Heath:2018}] \label{ex:height_hills} A surveyor tries to measure the heights of three hills. Sighting first, his/her initial measurements are $x_1 = 1237$ ft, $x_2=1941$ ft, and $x_3=2417$ ft. To confirm these measurements, the surveyor climbs to the top of the first hill, measures the heights of the second and third hills above the first, and obtains $x_2-x_1=711$ ft and $x_3-x_1=1177$ ft. Then he/she climbs to the top of the second hill and measures $x_3-x_2 = 475$ ft.
It is obvious that the measurements are inconsistent. They can be written as an overdetermined linear system of equations: \begin{equation*} Ax = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\\-1&1&0\\-1&0&1\\0&-1&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2\\ x_3 \end{bmatrix} \cong\begin{bmatrix} 1237 \\ 1941 \\ 2417\\ 711\\ 1177\\ 475 \end{bmatrix} = b. \end{equation*} The least squares solution of this system (as we will see soon) is $x = [1236, 1943, 2416]^T$, which differs slightly from the initial measurements. The last three observations helped to obtain a better measurement. \end{example} \vsp \begin{example} Data fitting (or curve fitting) is a procedure for finding the curve of best fit to a given set of data points. Finding the best curve by minimizing the sum of the squares of the residuals of the points from the curve leads to a linear least squares problem of the form \eqref{leastsquaredef}. Given data $(t_k,y_k)$, $k=1,2,\ldots,m$, we wish to find a function $p\in \spann\{\phi_1(t),\phi_2(t),\ldots,\phi_n(t)\}$ such that $p$ is the best fit to the data values $y_k$ in the sense that $$ \sum_{k=1}^{m}(y_k-p(t_k))^2 \rightarrow \min. \vsp $$ By expanding $p$ in terms of basis functions $\phi_j$ with coefficients $c_j$, i.e., $$ p(t) = c_1\phi_1(t) + c_2\phi_2(t) + \cdots + c_n\phi_n(t), $$ the problem is equivalent to finding a vector $c=(c_1,\ldots,c_n)^T\in \R^n$ that solves the minimization problem \begin{equation*} \min_{c\in\R^n} \sum_{k=1}^{m} \big( y_k - (c_1\phi_1(t_k) + \cdots + c_n\phi_n(t_k)) \big)^2. \end{equation*} If we define the $m\times n$ matrix $A$ with entries $a_{kj} = \phi_j(t_k)$ and the $m$-vector $b=(y_1,\ldots,y_m)^T$, then the above data-fitting problem takes the form $$ \min_{c\in\R^n}\|Ac-b\|_2. $$ In the case where the approximation space is the space of polynomials (with basis $\{1,t,\ldots,t^{n-1}\}$, or any other basis), the problem is known as {\em polynomial curve fitting}.
A schematic of a cubic curve fitting ($n=4$) is illustrated in Figure \ref{fig:curve_fit}. \begin{center} \includegraphics[scale=1]{curvefit} \captionof{figure}{A least squares polynomial fit to a given data set}\label{fig:curve_fit} \end{center} The special case with basis $\{1,t\}$ is referred to as {\em linear curve fitting} or {\em linear regression}. As an example, to find a cubic fit $p_3(t)=c_1+c_2t+c_3t^2+c_4t^3$ to six points $(t_1,y_1),\ldots,(t_6,y_6)$ the matrix $A$ is $6\times 4$ and the problem has the form \begin{equation*} Ac = \begin{bmatrix}1&t_1&t_1^2 & t_1^3\\1&t_2&t_2^2& t_2^3\\1&t_3&t_3^2& t_3^3\\1&t_4&t_4^2& t_4^3\\1&t_5&t_5^2 & t_5^3\\ 1&t_6&t_6^2 & t_6^3 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2\\ c_3 \\ c_4 \end{bmatrix} \cong\begin{bmatrix} y_1 \\ y_2 \\ y_3\\ y_4\\ y_5 \\ y_6 \end{bmatrix} = b.\vsp \end{equation*} The matrix $A$ here is a {\em Vandermonde} matrix, which becomes severely ill-conditioned for higher-degree polynomial curve fitting. \end{example} \vsp \subsection{Existence and uniqueness} Assume that $y=Ax $, so $y\in \range(A)$. The function $f(y)=\|b-y\|_2$ is continuous and coercive on $\R^m$, so it attains a minimum on the closed and unbounded set $\mathrm{range}(A)$. Moreover, $f$ is strictly convex on the convex set $\mathrm{range}(A)$, so the minimizing vector $y$ is unique. This does not mean that the solution $x $ of the least squares problem \eqref{leastsquaredef} is unique in general. Suppose $x $ and $\widetilde{x }$ are two solutions of the least squares problem and let $z=x -\widetilde x $. Then $Az = 0$ because $Ax =A\widetilde x $. If the columns of $A$ are linearly independent then $z = 0$ and $x =\widetilde{x }$. We conclude that if $A$ has full column rank, i.e., $\mathrm{rank}(A) = n$, then the solution of the least squares problem is unique (the converse is also true). If $\mathrm{rank}(A) < n$, then $A$ is said to be {\em rank-deficient}.
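To see existence and uniqueness in action, the full-rank hill-measurement system of Example \ref{ex:height_hills} can be checked with NumPy's built-in least squares routine, which also reports the numerical rank. This is a minimal sketch using the library solver \texttt{numpy.linalg.lstsq} (an SVD-based method), not an algorithm developed in these notes.

```python
import numpy as np

# Surveyor system: three direct height measurements plus three differences
A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [-1, 1, 0], [-1, 0, 1], [0, -1, 1]], dtype=float)
b = np.array([1237, 1941, 2417, 711, 1177, 475], dtype=float)

x, res, rank, sing = np.linalg.lstsq(A, b, rcond=None)
print('rank(A) =', rank)               # 3 = n, so the solution is unique
print('x =', np.round(x).astype(int))  # [1236 1943 2416]
```

Since $\mathrm{rank}(A)=n=3$, the computed solution is the unique minimizer and agrees with the values quoted in Example \ref{ex:height_hills}.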
\vsp \subsection{Normal equations} To find the solution of the least squares problem \eqref{leastsquaredef}, we define the \( n \)-variate function \( f:\mathbb{R}^n \to \mathbb{R} \) by \[ f(x ) := \|Ax -b\|_2^2 = (Ax -b)^T(Ax -b) = x ^TA^TAx -2x ^TA^Tb + b^Tb \] and aim to minimize it on \( \mathbb{R}^n \). The function \( f \) is quadratic, and a necessary condition for a minimizer \( x \) is that \( \nabla f(x ) = 0 \), i.e., \( 2A^TAx -2A^Tb = 0 \). Therefore, any minimizer of \( f \) should satisfy \begin{equation}\label{LS-normaleq} A^TAx = A^Tb. \end{equation} The {\em Hessian} of \( f \) is \( 2A^TA \), which is positive semidefinite in general. However, if the columns of \( A \) are linearly independent (i.e., \( A \) has full column rank), then \( A^TA \) is positive definite (you should prove this!). In this case, \eqref{LS-normaleq} becomes a sufficient condition as well. The linear system \eqref{LS-normaleq} is known as the system of {\bf normal equations} and suggests a method for solving the least squares problem. However, it is advisable to avoid it in favor of other, computationally more stable algorithms, as we will describe later.
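As a quick numerical illustration of \eqref{LS-normaleq}, a sketch on a small random full-column-rank problem: we solve the normal equations directly and compare with a library least squares solver. (For the stability reasons just mentioned, this is not the recommended approach for large or ill-conditioned problems.)

```python
import numpy as np

# Small random overdetermined problem; a random 8x3 matrix has full
# column rank with probability one
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

x = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations A^T A x = A^T b
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x, x_ref))               # True
print(np.allclose(A.T @ (b - A @ x), 0))   # True: the minimizer satisfies A^T r = 0
```

The second check restates \eqref{LS-normaleq}: at the minimizer, the residual $r=b-Ax$ satisfies $A^Tr=0$.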
\begin{figure}[!th] \begin{center} \begin{tikzpicture} \tkzDefPoint(0,0){A} \tkzDefPoint(6,0){B} \tkzDefPoint(10,2.5){C} \tkzDefPoint(4,2.5){D} \tkzDefPoint(5,5){E} \tkzDefPoint(5,1){F} \tkzDrawPolygon[thick,fill=gray!10](A,B,C,D) \draw[->,color=blue,thick] (0,0)--(5,5)node[above] {$b$}; \draw[->,color=red,thick] (0,0)--(5,1)node[black, right] {$Ax$}; \draw[dashed,color=red,thick] (5,5)--(5,1)node[right] {}; \tkzLabelSegment[above,pos=.4,sloped](E,F){$b-Ax $} \draw[->,color=black,thick] (0,0)--(5,0)node[black, below] {$a_{\cdot 1}$}; \draw[->,color=black,thick] (0,0)--(1.6,1)node[black, right] {$~a_{\cdot 2}$}; \draw[color=red,thick] (4.75,.98)--(4.75,1.25) node[black, right] {}; \draw[color=red,thick] (4.75,1.25)--(5,1.30) node[black, right] {}; \end{tikzpicture} \end{center} \caption{Geometric interpretation of the least squares problem.}\label{fig:bestapp} \end{figure} Another approach, equivalent to the one derived above, is geometric. The vector $Ax =x_1a_{\cdot 1}+\cdots+x_na_{\cdot n}$ in the subspace $\range(A)$ closest to $b$ in the Euclidean norm is the one for which the residual vector $b-Ax $ is perpendicular to $\range(A)$. See Figure \ref{fig:bestapp}. Thus, the inner product of $b-Ax $ with any column of $A$ should be zero, equivalently $$ A^T(b-Ax )=0, $$ which is the same system of normal equations \eqref{LS-normaleq}. \begin{example}\label{ex:height_hills2} Returning to Example \ref{ex:height_hills}, since $A$ has full rank, we can obtain the least squares solution by solving the normal system $A^TAx=A^Tb$. We have $$ A^TA = \begin{bmatrix} 3&-1&-1\\ -1&3&-1\\-1&-1&3\end{bmatrix}, \quad A^Tb = \begin{bmatrix} -651\\ 2177\\ 4069 \end{bmatrix}. $$ Since the matrix $A^TA$ is positive definite, the solution of the normal system can be obtained by the Cholesky factorization $A^TA = LL^T$ where $L$ is a lower triangular matrix.
We therefore need to solve two triangular systems $Lz = A^Tb$ (lower triangular) and $L^Tx = z$ (upper triangular) using forward and backward substitutions to obtain the final solution $x $. In Python we write \begin{shaded} \vspace*{-3mm} \begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1,0,0],[0,1,0],[0,0,1],[-1,1,0],[-1,0,1],[0,-1,1]])
b = np.array([1237,1941,2417,711,1177,475])
L = np.linalg.cholesky(A.T@A)               # Cholesky factorization
z = solve_triangular(L, A.T@b, lower=True)  # forward substitution
x = solve_triangular(L.T, z, lower=False)   # backward substitution
print('Hill heights =', x)
\end{verbatim} \vspace*{-3mm} \end{shaded} The final solution will be $x = [1236, 1943, 2416]^T$. From a computational point of view, the normal equations approach is effective for solving this small least squares system. However, in practical scenarios and for larger matrix sizes, solving through the normal equations is strongly discouraged, as we will discuss later. \end{example} \vsp \subsection{Conditioning of the least squares problem} The conditioning of the linear system $Ax =b$ for a square and nonsingular matrix $A$ is measured by the condition number $$ \cond(A) = \|A\|\|A^{-1}\|. $$ For the overdetermined system $Ax \cong b$, however, the inverse of $A$ cannot be defined in the conventional sense, but it is possible to define a {\bf pseudoinverse} or {\bf Moore–Penrose inverse}, denoted by $A^{+}$, that behaves like an inverse in many respects\footnote{Such a definition of an inverse matrix was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903.}. The pseudoinverse $A^{+}$ exists for any matrix $A$; in particular, when $A$ has linearly independent columns then $$ A^{+} = (A^TA)^{-1}A^T, $$ which is indeed a {\em left inverse} because $A^{+}A=I$.
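For a full-column-rank matrix, the left-inverse formula above can be verified numerically. A small sketch with the surveyor matrix of Example \ref{ex:height_hills}, comparing against NumPy's SVD-based \texttt{numpy.linalg.pinv}:

```python
import numpy as np

A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [-1, 1, 0], [-1, 0, 1], [0, -1, 1]], dtype=float)

# Left-inverse formula, valid when A has linearly independent columns
A_plus = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True: agrees with SVD-based pinv
print(np.allclose(A_plus @ A, np.eye(3)))      # True: A+ A = I (left inverse)
```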
On the other hand, $P:=AA^{+}$ is an orthogonal projector onto $\range(A)$, so that the solution of the least squares problem can be written as $$ x = A^{+}b. $$ In Section \ref{sect:svd}, we explore how the pseudoinverse can be computed for any matrix $A$ using the SVD. Now, for a matrix $A\in\R^{m\times n}$ with $m\geqslant n$ and $\rank(A)=n$, the condition number is defined as \begin{equation*} \cond_2(A):= \|A\|_2\|A^{+}\|_2. \end{equation*} This definition remains valid even if $\text{rank}(A) < n$, provided that we can compute $A^{+}$. While the conditioning of a square system depends solely on $A$, the conditioning of a least squares problem $Ax \cong b$ relies on both the coefficient matrix $A$ and the right-hand side vector $b$. In fact, the closeness of $b$ to $\text{range}(A)$ will affect the conditioning. From Figure \ref{fig:bestapp} we observe $$ \frac{\|Ax \|_2}{\|b\|_2} = \cos\theta, $$ where $\theta$ is the angle between $Ax $ and $b$. To measure the sensitivity of the least squares solution to input perturbations, we consider perturbations in $b$ and $A$ separately. For the perturbed vector $b+\delta$ the least squares solution $x +\varepsilon$ is given by the normal equations as $A^TA(x +\varepsilon) = A^T(b+\delta)$. Combining with $A^TAx = A^Tb$ yields $A^TA\varepsilon = A^T\delta$, or $\varepsilon = A^{+}\delta$. This gives $$ \|\varepsilon\|_2\leqslant \|A^{+}\|_2 \|\delta\|_2. $$ Dividing both sides by $\|x \|_2$ then yields \begin{align*} \frac{\|\varepsilon \|_2}{\|x \|_2} \leqslant & \;\|A^{+}\|_2 \frac{\|\delta \|_2}{\|x \|_2} \\ =& \;\cond(A) \frac{\|b\|_2}{\|A\|_2\|x \|_2}\frac{\|\delta \|_2}{\|b\|_2} \\ \leqslant &\; \cond(A) \frac{\|b\|_2}{\|Ax \|_2}\frac{\|\delta \|_2}{\|b\|_2} \\ =&\; \cond(A) \frac{1}{\cos\theta}\frac{\|\delta \|_2}{\|b\|_2}.
\end{align*} We observe that the condition number for the least squares solution $x $ with respect to perturbations in $b$ depends on both $\cond(A)$ and the angle $\theta$ between $b$ and $Ax $. In particular, the condition number is approximately $\cond(A)$ when the residual is small ($\cos\theta \approx 1$); however, it can be arbitrarily worse than $\cond(A)$ when the residual is large ($\cos\theta \approx 0$). \vsp \begin{workout} Assume that the perturbed least squares solution $x + \varepsilon $ is obtained by perturbing the input matrix $A + E$, while the right-hand side vector $b$ remains unchanged. Show that \begin{equation}\label{cond_ls_AE} \frac{\|\varepsilon \|_2}{\|x \|_2} \leqslant \left([\cond(A)]^2\tan\theta + \cond(A) \right)\epsilon_A + \mathcal O(\epsilon_A^2) \vsp \end{equation} where $\epsilon_A = \frac{\|E\|_2}{\|A\|_2}$. Conclude that the condition number is approximately $\text{cond}(A)$ when the residual is small. However, the condition number is squared for a moderate residual, and it becomes arbitrarily large when the residual is large. \end{workout} \vsp The most striking feature of \eqref{cond_ls_AE} is that it depends on the square of $\text{cond}(A)$. This implies that even if $A$ is only mildly ill-conditioned, a small perturbation in $A$ can cause a large change in $x $. An exception occurs in problems where the least squares solution fits the data very well, i.e., $\tan\theta\approx 0$, so that the factor $[\text{cond}(A)]^2$ is suppressed. \vsp \begin{example} Let us again consider the height measurements of hills in Example \ref{ex:height_hills} with the least squares solution $x = [1236, 1943, 2416]^T$. The pseudoinverse of $A$ is given by $$ A^+ = (A^TA)^{-1}A^T=\frac{1}{4} \begin{bmatrix} 2 & 1 & 1 & -1 & -1 & 0 \\ 1 & 2 & 1 & 1 & 0 & -1 \\ 1 & 1 & 2 & 0 & 1 & 1 \end{bmatrix}. $$ We have $\|A\|_2= 2$ and $\|A^+\|_2=1$, so that $$ \cond(A) = \|A\|_2\|A^+\|_2 = 2.
$$ On the other hand, the ratio is computed as $$ \cos\theta = \frac{\|Ax \|_2}{\|b\|_2} \doteq \frac{3640.8761}{3640.8809} \doteq 0.99999868. $$ Hence, the angle between $Ax $ and $b$ is the small value $\theta \doteq 0.001625$ radians, indicating that the norm of the residual $r=b-Ax $ is very small. Given the small condition number and the small angle $\theta$, we can conclude that this particular least squares problem is well-conditioned. \end{example} \vsp \begin{example} Consider a least squares problem with coefficient matrix $$ A = \begin{bmatrix} 1 & 1 \\ \epsilon & -\epsilon \\ 0 & 0 \end{bmatrix} $$ and right-hand side vector $b = [1,0,\epsilon]^T$, where $\epsilon>0$ is a small parameter. The pseudoinverse of $A$ is computed as $$ A^+ = \frac{1}{2}\begin{bmatrix} 1 & \frac{1}{\epsilon} & 0 \\ 1 & -\frac{1}{\epsilon} & 0 \end{bmatrix}. \vsp $$ The condition number of $A$ is $$ \cond(A) = \|A\|_2\|A^+\|_2 = \sqrt{2} \frac{1}{\epsilon \sqrt 2} = \frac{1}{\epsilon}, $$ and the least squares solution is given by $x =A^+b = [1/2,1/2]^T$. Assume a tiny perturbation $$ E = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ -\epsilon & \epsilon \end{bmatrix} $$ of the matrix $A$. The perturbed solution then is obtained as $$ x +\varepsilon = (A+E)^+ b = \frac{1}{2}\begin{bmatrix} 1 & \frac{1}{2\epsilon} & -\frac{1}{2\epsilon} \\ 1 & -\frac{1}{2\epsilon} & \frac{1}{2\epsilon} \end{bmatrix} \begin{bmatrix} 1 \\ 0\\ \epsilon \end{bmatrix} =\begin{bmatrix} \frac{1}{4} \\ \frac{3}{4} \end{bmatrix}, $$ which shows that $\varepsilon = [-1/4,1/4]^T$ and $\frac{\|\varepsilon\|_2}{\|x \|_2}= 1/2$. We can explain this error from the stability bound \eqref{cond_ls_AE}. We have $\frac{\|E\|_2}{\|A\|_2}=\frac{\epsilon\sqrt{2}}{\sqrt{2}}=\epsilon$, and $\theta = \cos^{-1}\frac{\|Ax \|_2}{\|b\|_2}= \cos^{-1}\frac{1}{\sqrt{1+\epsilon^2}}\approx 0$ for small values of $\epsilon$. Therefore, the term with the squared condition number in the right-hand side of \eqref{cond_ls_AE} becomes negligible.
Consequently, the bound on the output perturbation $\frac{\|{\varepsilon} \|_2}{\|x \|_2}$ is essentially determined by $\cond(A)\frac{\|E\|_2}{\|A\|_2} = 1$, which is consistent with the exact value of $\frac{\|{\varepsilon} \|_2}{\|x \|_2}$. Now, let us change the right-hand side to $b = [1,0,1]^T$. For this case, we have $$ x = A^{+}b = \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \end{bmatrix}, \quad x +\varepsilon = (A+E)^+ b = \begin{bmatrix} \frac{1}{2}-\frac{1}{4\epsilon} \\ \frac{1}{2}+\frac{1}{4\epsilon} \end{bmatrix}, $$ and $\frac{\|{\varepsilon} \|_2}{\|x \|_2}=\frac{1}{2\epsilon}$. The relative perturbation in the solution is approximately $[\cond(A)]^2\frac{\|E\|_2}{\|A\|_2}$. In this case, $\theta = \cos^{-1}\frac{1}{\sqrt 2}=\frac{\pi}{4}$, and $\tan\theta = 1$, indicating that the condition-squared term in the perturbation bound is not suppressed. As a result, the solution is highly sensitive to perturbations. \end{example} \vsp \input{projectors} \section{Orthogonality and QR factorization} From several points of view, it is advantageous to use orthogonal vectors as basis vectors in a vector space. As an application, we will obtain a stable algorithm for solving the least squares problem once a suitable orthogonal basis is obtained for the subspace $\range(A)$. Recall that two nonzero vectors $x$ and $y$ are called orthogonal if $x^T y = 0$. \begin{workout} Show that if the nonzero vectors $q_j$, $j=1,2,\ldots,m$, are mutually orthogonal, i.e., $q_j^Tq_k=0$ for $j\neq k$, then they are linearly independent. \end{workout} If the orthogonal vectors $q_j\in\R^m$, $j = 1, 2, \ldots , m$, are normalized so that $\|q_j\|_2=1$, then they are called {\bf orthonormal} and form an orthonormal basis for $\R^m$. \begin{definition} A square matrix whose columns are orthonormal is called an orthogonal matrix. \end{definition} It is clear that an orthogonal matrix $Q$ satisfies $Q^TQ = I$, has full rank, and its inverse is $Q^{-1} = Q^T$.
The rows of an orthogonal matrix are orthonormal as well, i.e., $QQ^T = I$. It is not difficult to show that the product of two orthogonal matrices is orthogonal. One of the most important properties of orthogonal matrices is that they preserve the length (Euclidean norm) of a vector: $$ \|Qx\|_2^2 = (Qx)^TQx=x^TQ^TQx = x^Tx = \|x\|_2^2. $$ This means that $Q$ rotates or reflects $x$ but does not change its length. From a numerical point of view, this norm-preserving property means that orthogonal matrices do not amplify errors. \begin{workout} Show that orthogonal matrices preserve the $2$-norm and the Frobenius norm of matrices. \end{workout} Here we introduce two classes of elementary orthogonal matrices that will be used in the sequel to transform the columns of an arbitrary matrix $A$ into a set of orthonormal vectors. \vsp \subsection{Householder transformations} In this section, we consider Householder transformations, a class of orthogonal matrices which are particularly useful for performing reflections in matrix computations. Then we will use them to compute the QR factorization of a matrix. \begin{definition} A matrix of the form \begin{equation}\label{def:housh} H = I -\frac{2}{u^Tu}uu^T \end{equation} where $u$ is a non-zero vector in $\R^n$ is called a {\bf Householder matrix} or a {\bf Householder transformation}\footnote{After the celebrated numerical analyst Alston Scott Householder (5 May 1904 – 4 July 1993)}. The vector $u$ determining the Householder matrix $H$ is called the Householder vector.
\end{definition} \begin{center} \begin{center} \begin{tikzpicture}[scale = 0.8] \tkzDefPoint(0,0){A} \tkzDefPoint(7,0){B} \tkzDefPoint(10,2){C} \tkzDefPoint(3,2){D} \tkzDefPoint(4,5){E} \tkzDefPoint(4,3){U} \tkzDefPoint(4,1){O} \tkzDefPoint(6.5,5){X} \tkzDefPoint(6.5,-3){W} \tkzDefPoint(6.5,1){P} \tkzDefPoint(6.5,0){Q} \tkzDefPoint(4.6,0){Z} \tkzDefPoint(4,-3){Y} \tkzDrawPolygon[thick,fill=gray!10](A,B,C,D) \node[color=black] at (1.5,.5) {$U$}; \tkzDrawSegments[red,dashed,->](O,E) \tkzDrawSegments[red,->,thick](O,U) \tkzDrawSegments[blue,->,thick](O,X) \tkzLabelSegment[right,pos=.9](O,U){$u$} \tkzLabelSegment[above,pos=1](O,X){$x$} \tkzDrawSegments[red](X,P) \tkzDrawSegments[red,dashed](X,E) \tkzDrawSegments[red,dashed](P,Q) \tkzDrawSegments[red,->](Q,W) \tkzDrawSegments[blue,dashed,thick](O,Z) \tkzDrawSegments[blue,->,thick](Z,W) \tkzDrawSegments[red,dashed](O,Y) \tkzLabelSegment[below,pos=.2,sloped](E,O){$u(u^Tx)$} \tkzLabelSegment[above,pos=.5,sloped](Q,W){$-2u(u^Tx)$} \tkzLabelSegment[right,pos=1](Z,W){$w=x-2uu^Tx=Hx$} \end{tikzpicture} \end{center} \captionof{figure}{Geometric interpretation of Householder transformation}\label{fig:househ} \end{center} \vsp See Figure \ref{fig:househ} for a geometric interpretation of a Householder transformation. In the figure the vector $u$ is assumed to be a normal vector ($u^Tu=1$) of plane $U$. The vector $w=Hx$ is the reflection of $x$ with respect to plane $U$. The plane acts as a {\bf mirror}; the reason why the Householder transformation is also known as {\em elementary reflector}. The following properties can be simply proved for a Householder transformation. \begin{theorem}\label{thm:househ_peroperties} Let $H=I-2uu^T/u^Tu$ be a Householder matrix with vector $u\in\R^n$. Then \begin{enumerate} \item $H$ is symmetric and orthogonal. \item $H^2 = I$ \item $Hu=-u$ \item $Hv=v$ if $u^Tv=0$ \item If $x,y\in\R^n$ are such that $x\neq y$ and $\|x\|_2=\|y\|_2$, and $u$ is chosen parallel to $x-y$ then $Hx=y$. 
\end{enumerate} \end{theorem} \proof We only prove item (5); the other items are easy to prove. Let $u = c(x-y)$ for a constant $c\neq 0$, and write $x = \frac{1}{2}(x+y)+\frac{1}{2}(x-y)$. Then $$ Hx = \frac{1}{2}H(x+y) + \frac{1}{2}H(x-y). $$ By property (3) we have $H(x-y)=-(x-y)$. On the other hand, $(x+y)$ is orthogonal to $x-y$ because $(x+y)^T(x-y)=x^Tx-x^Ty + y^Tx - y^Ty = \|x\|_2^2-\|y\|_2^2=0$. Thus by property (4) we have $H(x+y)=x+y$. All these together give $Hx=y$. $\qed$ \vsp \begin{remark} Properties (3) and (4) show that $H$ has the eigenvalue $1$ with multiplicity $n-1$ and the simple eigenvalue $-1$ (corresponding to the eigenvector $u$). \end{remark} \vsp \begin{theorem}\label{thm:househ_zero} Given a nonzero vector $x$ with $x \neq \mp\|x\|_2e_{\cdot 1}$, where $e_{\cdot 1}:=[1,0,\ldots,0]^T$, the Householder matrix $H$ defined by $u = x \pm \|x\|_2e_{\cdot 1}$ is such that $Hx = \mp \|x\|_2e_{\cdot 1}$. (Take care of the signs $\pm$ and $\mp$.) \end{theorem} \proof This is a simple consequence of item (5) of Theorem \ref{thm:househ_peroperties} by letting $y = \mp \|x\|_2e_{\cdot 1}$. $\qed$ Here is an illustration ($\alpha=\mp \|x\|_2$): $$ x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} \quad \Longrightarrow \quad Hx = \begin{pmatrix} \alpha \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} $$ \vsp Given the vector $x$, we find a reflection matrix $H$ (the {mirror}) such that $Hx = \alpha e_{\cdot 1}$ (reflecting $x$ onto the $x_1$-axis). See Figure \ref{fig:mirror} below.
\begin{center} \begin{center} \begin{tikzpicture}[scale = 0.7] \tkzDefPoint(0,0){O} \tkzDefPoint(0,5){Z} \tkzDefPoint(4,-1){Y} \tkzDefPoint(-5,-1){X} \tkzDefPoint(-5/1.3,-1/1.3){Hx} \tkzDefPoint(-1,3.5){x} \tkzDefPoint(-5,2.5){A} \tkzDefPoint(-1,.8){B} \tkzDefPoint(1.3,-1){C} \tkzDefPoint(-3,1){D} \tkzDefPoint(-2.6,1.1){u1} \tkzDefPoint(-2.1,1.9){u2} \tkzDrawPolygon[fill=gray!10](A,B,C,D) \tkzDrawSegments[black,->](O,X) \tkzDrawSegments[black,->](O,Y) \tkzDrawSegments[black,->](O,Z); \tkzDrawSegments[blue,->,thick](O,x) \tkzLabelSegment[blue,above,pos=1](O,x){$x$} \tkzDrawSegments[blue,->,thick](O,Hx) \tkzLabelSegment[blue,below,pos=.6](O,Hx){$\; Hx = \alpha e_{\cdot 1}=\begin{bmatrix} \alpha\\ 0\\0 \end{bmatrix}$} \tkzDrawSegments[red,->](u1,u2); \tkzLabelSegment[red,above,pos=1](u1,u2){$u$} \end{tikzpicture} \end{center} \captionof{figure}{The mirror $H$ reflects $x$ onto the $x_1$-axis. The normal vector $u$ is chosen parallel to $x-\alpha e_{\cdot 1}$.}\label{fig:mirror} \end{center} Theorem \ref{thm:househ_zero} works with both negative and positive signs. However, to avoid cancellation errors in computing the first component of $u$, one can avoid subtraction altogether by choosing $\mathrm{sign}(x_1)$ in place of $\pm$. We always form the matrix $H$ via the vector $$ u = x + \mathrm{sign}(x_1)\|x\|_2e_{\cdot 1}, \quad \mbox{with }\;\mathrm{sign}(0)=+. $$ Here we discuss how the computational costs of matrix-vector and matrix-matrix multiplications can be reduced when the matrix is a Householder reflection. The standard matrix-vector multiplication $Hx$ requires $2n^2$ flops for an $n\times n$ matrix $H$ and an $n$-vector $x$. The matrix-matrix multiplication $HA$ for an $n\times m$ matrix $A$ costs $2mn^2$ flops. However, if $H$ is a Householder matrix there is no need to form $H$ explicitly; we work with $u$ directly.
In this case, letting $\beta ={2}/{u^Tu}$ one can write \begin{align*} Hx & = (I-\beta uu^T)x = x - \beta u(u^Tx)=x - \beta \gamma u, \quad \gamma = u^Tx,\\ HA & = (I-\beta uu^T)A = A - \beta u(u^TA)=A - \beta uw^T, \quad w = A^Tu,\\ AH & = A(I-\beta uu^T) = A - \beta (Au)u^T=A - \beta wu^T, \quad w = Au. \end{align*} \begin{workout} For a Householder matrix $H\in \R^{n\times n}$ verify that the flop count for the matrix-vector product $Hx$ is about $6n$, and for the matrix-matrix product $HA$ with $A\in\R^{n\times m}$ (or $AH$ with $A\in\R^{m\times n}$) is about $4mn$. Compare with explicit multiplications. \end{workout} \vsp The following Python code computes a Householder vector $u$ for a given vector $x$. \begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
def HouseVec(x):
    # HouseVec(x) computes the Householder vector u such that
    # (I-2uu'/u'u)x = -sign(x_1)|x|_2 e_1
    n = len(x); u = np.zeros(n)
    u[1:] = x[1:]
    s = np.sign(x[0])
    if s == 0: s = 1
    u[0] = x[0] + s*np.linalg.norm(x,2)
    return u
\end{verbatim} \vspace*{-0.3cm} \end{shaded} Multiplication by a Householder transformation is implemented in the following code: \begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
def HouseProd(u,A):
    # HouseProd(u,A) computes the product of the Householder matrix
    # (I-2uu'/u'u) with matrix A
    b = 2/np.dot(u,u); w = np.matmul(np.transpose(A),u)
    HA = A - b*np.outer(u,w)
    return HA
\end{verbatim} \vspace*{-0.3cm} \end{shaded} \vsp \begin{example}\label{ex:householder_zeros} The following script transforms the first column of the matrix $$A=\begin{bmatrix} 2 & 3 & 5 \\ 1 & 2 & -1 \\ 2 & 5 & 3 \\ 1 & -1 & 0 \end{bmatrix}$$ to a multiple of $e_{\cdot 1}$.
We write \begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
A = np.array([[2,3,5],[1,2,-1],[2,5,3],[1,-1,0]])
u = HouseVec(A[:,0])
A = HouseProd(u,A)
print ("Transferred A = \n", np.round(A,4))
\end{verbatim} \vspace*{-0.3cm} \end{shaded} \noindent and get the following output (rounded to 4 decimal places): \begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
Transferred A =
 [[-3.1623 -5.3759 -4.7434]
 [ 0.      0.3775 -2.8874]
 [ 0.      1.755  -0.7749]
 [ 0.     -2.6225 -1.8874]]
\end{verbatim} \vspace*{-0.3cm} \end{shaded} \noindent \end{example} \vsp \subsection{Plane rotations} Householder transformations are efficient for dense matrices because of the few flops they require. In this section we introduce {\em plane rotations}, which are more flexible and can be used efficiently for sparse matrices because they produce zeros entry by entry. The $2\times 2$ matrix \begin{equation}\label{rotmat22} J = \begin{bmatrix} c & s \\ -s & c \end{bmatrix}, \quad c^2+s^2=1 \end{equation} is an orthogonal matrix. If $c = \cos\theta$ then $Jx$ is a clockwise rotation of $x$ by the angle $\theta$. So the matrix $J$ is a rotation matrix in the $(1,2)$-plane. Sometimes $J$ is called a Givens rotation after Wallace Givens, who used such rotations for eigenvalue computations around 1960. However, they had been used long before that by Jacobi for the same purpose. Let $x = (x_1,x_2)^T\neq 0$, $c = x_1/\|x\|_2$, and $s = x_2/\|x\|_2$; then $$ Jx = \frac{1}{\|x\|_2} \begin{bmatrix} x_1 & x_2 \\ -x_2 & x_1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \|x\|_2 \\ 0 \end{bmatrix}. $$ In this case, the rotation matrix $J$ rotates $x$ and puts it on the $x_1$-axis, i.e., zeros its second component. By embedding a $2\times2$ rotation in a larger identity matrix, one can manipulate vectors and matrices of arbitrary dimensions. \begin{example} Let $x=(x_1,x_2,x_3,x_4,x_5)^T$ be given such that $\alpha = \sqrt{x_3^2+x_5^2}\neq0$. Let $c = {x_3}/{\alpha}$ and $s = {x_5}/{\alpha}$.
Then we have $$ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & c & 0 & s \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 &-s & 0 & c \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}= \begin{bmatrix} x_1 \\ x_2 \\ \alpha \\ x_4 \\ 0 \end{bmatrix}. $$ In fact this matrix is a rotation matrix in the ($3,5$)-plane which changes $x_5$ to $0$, changes $x_3$ to $\alpha$, and leaves the other components unchanged. \end{example} \vsp Using this idea we can construct a sequence of plane rotations to transform an arbitrary vector $x\in\R^m$ into a multiple of the unit vector $e_{\cdot 1}=[1,0,\ldots,0]^T\in\R^m$. Let $J(j,k)$ denote the rotation matrix in the ($j,k$)-plane that, when applied to the current vector $x$, uses $c = {x_j}/{\alpha}$ and $s = {x_k}/{\alpha}$ with $\alpha = \sqrt{x_j^2+x_k^2}$. Then it is easy to show that $$ \underbrace{J(1,2)J(1,3)\cdots J(1,m)}_{G}x = \|x\|_2 e_{\cdot 1}. $$ We note that any rotation matrix $J(j,k)$ with $j<k$ can be used instead of the $J(1,k)$ matrices. \begin{remark} Possible overflow and underflow in computing $c=x_1/\sqrt{x_1^2+x_2^2}$ and $s=x_2/\sqrt{x_1^2+x_2^2}$ can be avoided by an appropriate scaling. If $|x_1|\geq |x_2|$ we put $$ t=x_2/x_1, \quad c = 1/\sqrt{1+t^2},\quad s = c\cdot t, $$ and if $|x_1| < |x_2|$, $$ t = x_1/x_2, \quad s = 1/\sqrt{1+t^2},\quad c = s\cdot t. $$ In either case, we avoid squaring any magnitude larger than $1$. \end{remark} \vsp If an elementary plane rotation $J(j,k)$, with $c= a_{j\ell}/\sqrt{a_{j\ell}^2+a_{k\ell}^2}$ and $s= a_{k\ell}/\sqrt{a_{j\ell}^2+a_{k\ell}^2}$, is applied to a matrix $A$ with entries $a_{j\ell}$, then the $(k,\ell)$ entry of $A$ becomes zero if $k>j$. It is important to note that only the two rows $j$ and $k$ of the matrix are changed. This should be taken into account in programming.
Instead of explicitly embedding the $2 \times 2$ rotation matrix into a matrix of larger dimension, which would require unnecessary computations, we can save operations and storage by implementing the rotation more efficiently. Here are two Python functions that illustrate how to implement the rotation while minimizing computational costs. In the \texttt{GivensProd} function we only operate on two rows of the matrix $A$.
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
def GivensPar(x,y):
    # GivensPar(x,y) computes Givens parameters to make the second
    # component of [x,y] zero
    if abs(x) > abs(y):
        t = y/x; c = 1/np.sqrt(1+t**2); s = c*t
    else:
        t = x/y; s = 1/np.sqrt(1+t**2); c = s*t
    return c,s
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
def GivensProd(c,s,j,k,A):
    # GivensProd(c,s,j,k,A) applies a (j,k)-plane rotation to matrix A
    A[[j,k],:] = np.matmul([[c,s],[-s,c]],A[[j,k],:])
    return A
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\noindent
\begin{example}\label{ex:givens_zeros}
The following script transforms the first column of the matrix
$$A=\begin{bmatrix}
2 & 3 & 5 \\
1 & 2 & -1 \\
2 & 5 & 3 \\
1 & -1 & 0
\end{bmatrix}$$
of Example \ref{ex:householder_zeros} to a multiple of $e_{\cdot 1}$, i.e., we use Givens rotations to annihilate all components of the first column except the first one.
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
A = np.array([[2,3,5],[1,2,-1],[2,5,3],[1,-1,0]], dtype=float)
print("Original A = \n", A)
for k in range(3,0,-1):
    c,s = GivensPar(A[0,0],A[k,0])
    A = GivensProd(c, s, 0, k, A)
print("Transferred A = \n", np.round(A,4))
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\noindent The output with rounding to four decimal places:
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
Original A =
 [[ 2.  3.  5.]
 [ 1.  2. -1.]
 [ 2.  5.  3.]
 [ 1. -1.  0.]]
Transferred A =
 [[ 3.1623  5.3759  4.7434]
 [ 0.      0.3162 -2.6352]
 [ 0.      2.2361 -0.7454]
 [ 0.     -2.2361 -2.2361]]
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\noindent Comparing with the output of the Householder transformation (Example \ref{ex:householder_zeros}), we observe that $GA$ is not necessarily identical with $HA$. However, in both cases $A$ is transformed to a matrix with zeros below its $a_{11}$ entry.
\end{example}
\vsp
For a single call, the cost of function \verb+GivensPar+ is $6$ flops, and for a matrix $A\in\R^{m\times n}$ the total number of flops in \verb+GivensProd+ is $6n$ (multiplying a $2\times2$ matrix by a $2\times n$ matrix). To zero all off-diagonal entries of the first column of a matrix $A\in\R^{m\times n}$, both functions \verb+GivensPar+ and \verb+GivensProd+ should be called in a loop with $m-1$ iterations. The total cost is thus $(m-1)(6n+6)\approx 6mn$ flops.
\vsp
\subsection{QR factorization}
Matrix decompositions are essential, giving rise to fast and efficient algorithms in matrix computations. Students are typically familiar with the LU decomposition (Gaussian elimination), which factorizes a matrix $A$ into a lower triangular matrix $L$ multiplied by an upper triangular matrix $U$, or its Cholesky variant $LL^T$ for symmetric positive definite matrices. Here, we introduce another useful factorization with numerous applications in efficient linear algebra algorithms.
\begin{theorem}
Every matrix $A\in \R^{m\times n}$ with $m\geqslant n$ (overdetermined) can be factorized as
$$
A = QR
$$
where $Q$ is an $m\times m$ {\em orthogonal} matrix and $R$ is an $m\times n$ {\em upper triangular} matrix.
\end{theorem}
$$
\begin{array}{cccc}
\begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}&=&
\begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \end{pmatrix}&
\begin{pmatrix} \times & \times & \times\\ 0 & \times & \times\\ 0 & 0 & \times \\ \hline 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \\
A&& Q&R
\end{array}
$$
\vsp
The illustration above shows a case with $m=5$ and $n = 3$. In the next section we give a ``constructive'' proof of this theorem using Householder reflections. By constructive we mean that the proof also suggests an algorithm to compute the $Q$ and $R$ factors. As we observe, the last $m-n$ rows of $R$ are zero, so the last $m-n$ columns of $Q$ make no contribution to the product (but are still important!).
\vsp
$$
\begin{array}{cccc}
\begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}&=&
\begin{pmatrix} \times & \times & \times & \colortxt{$\times$~~~$\times$} \\ \times & \times & \times & \colortxt{$\times$~~~$\times$} \\ \times & \times & \times & \colortxt{$\times$~~~$\times$} \\ \times & \times & \times &\colortxt{$\times$~~~$\times$} \\ \times & \times & \times & \colortxt{$\times$~~~$\times$} \end{pmatrix}&
\begin{pmatrix} \times ~~~ \times ~~~ \times\\ 0 ~~~~ \times ~~~ \times\\ 0 ~~~~~ 0 ~~~~ \times \\ \hline \colortxt{$0$~~~~~$0$~~~~~$0$} \\ \colortxt{$0$~~~~~$0$~~~~~$0$} \end{pmatrix} \\
\\
A&=& [~~~Q_1~~~~~~~~~Q_2~~]& \begin{bmatrix} R_1\\ 0\end{bmatrix}
\end{array}
$$
This suggests the {\em reduced QR factorization}
$$
A = Q_1R_1
$$
which is often sufficient in applications where the $Q_2$ portion is not needed.
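The full and reduced factorizations can be compared directly with NumPy's built-in routine (a quick sketch; the random $5\times 3$ matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Full QR: Q is 5x5 orthogonal, R is 5x3 with two zero rows at the bottom
Q, R = np.linalg.qr(A, mode='complete')
# Reduced QR: Q1 is 5x3 with orthonormal columns, R1 is 3x3
Q1, R1 = np.linalg.qr(A, mode='reduced')

print(Q.shape, R.shape, Q1.shape, R1.shape)            # (5, 5) (5, 3) (5, 3) (3, 3)
print(np.allclose(A, Q @ R), np.allclose(A, Q1 @ R1))  # both reproduce A
print(np.allclose(R[3:, :], 0))                        # last m - n rows of R are zero
```

The constructive proof in the next section builds the same factors from Householder reflections.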
\vsp
\subsection{QR factorization using Householder transformations}
Now we show how the idea of introducing zeros in a vector using a Householder matrix can be used for computing a full QR factorization
$$
A = QR
$$
of a matrix $A\in\R^{m\times n}$, $m\geqslant n$, where $Q\in\R^{m\times m}$ is orthogonal and $R\in\R^{m\times n}$ is upper triangular (entries below the main diagonal are all zero). This process was first introduced by Householder in 1958. The idea is to reduce $A$ to an upper triangular matrix $R$ by successively premultiplying $A$ with a series of orthogonal Householder matrices. The product of these Householder matrices then constitutes the orthogonal matrix $Q$. The process is illustrated for $m=5$ and $n=3$:\\
Step 1:
\vsp
$$\;H_1A\;\, = H_1\begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix} = \begin{pmatrix} + & + & + \\ 0 & + & + \\ 0 & + & + \\ 0 & + & + \\ 0 & + & + \end{pmatrix} =:A^{(1)}
$$
Step 2:
$$H_2A^{(1)} = H_2\begin{pmatrix} + & + & + \\ 0 & + & + \\ 0 & + & + \\ 0 & + & + \\ 0 & + & + \end{pmatrix} = \begin{pmatrix} + & + & + \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & * \\ 0 & 0 & * \end{pmatrix} =:A^{(2)}
$$
Step 3:
$$\qquad H_3A^{(2)} = H_3\begin{pmatrix} + & + & + \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & * \\ 0 & 0 & * \end{pmatrix} = \begin{pmatrix} + & + & + \\ 0 & * & * \\ 0 & 0 & \star \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} =:A^{(3)} =: R
$$
\\
We have $R = A^{(3)}=H_3A^{(2)}=H_3H_2A^{(1)}=H_3H_2H_1A$. If we define $Q^T = H_3H_2H_1$ then $R = Q^TA$ or $A=QR$ where $Q=H_1H_2H_3$. Remember that the $H_k$ are symmetric and orthogonal. The general case $A\in \R^{m\times n}$ can be treated similarly. Let $A = (a_{ij})$.
In the first step, the Householder matrix $H_1\in \R^{m\times m}$ is built from the first column of $A$, i.e.,
$$
x = a_{\cdot 1}=\begin{bmatrix} a_{11} \\ a_{2,1}\\ \vdots \\ a_{m,1} \end{bmatrix}\in \R^m, \quad u_1 = a_{\cdot 1}+\mathrm{sign}(a_{11})\|a_{\cdot 1}\|_2\, e_{\cdot 1},\quad H_1 = I-\frac{2}{u_1^Tu_1}u_1u_1^T.
$$
Then $H_1A$ annihilates the components below $a_{11}$. Other entries of $A$ are changed as well. The new matrix is denoted by
$$H_1A=:A^{(1)}$$
with entries $a_{ij}^{(1)}$. In the second step, a Householder matrix $\wt H_2\in \R^{(m-1)\times (m-1)}$ is formed based on the entries $2$ through $m$ of the second column of $A^{(1)}$, i.e.,
$$
x = a^{(1)}_{2:m,2}=\begin{bmatrix} a_{22}^{(1)} \\ a_{3,2}^{(1)}\\ \vdots \\ a_{m,2}^{(1)} \end{bmatrix}\in \R^{m-1}, \quad u_2 = x+\mathrm{sign}(x_1)\|x\|_2\, e_{\cdot 1},\quad \wt H_2 = I-\frac{2}{ u_2^T u_2} u_2 u_2^T\in \R^{(m-1)\times (m-1)}.
$$
Then the Householder matrix $H_2$ is defined by
$$
H_2 = \begin{bmatrix} 1 & 0 \\ 0 & \wt H_2 \end{bmatrix}\in\R^{m\times m}.
$$
In the new matrix
$$H_2A^{(1)}=:A^{(2)}$$
the entries below $a_{22}^{(2)}$ become zero. The first row and first column of $A^{(2)}$ are identical with those of $A^{(1)}$ due to the special structure of $H_2$ in its first row and column. In particular, the zeros introduced in the previous step (in the first column) are not destroyed in the current step. Similarly, in step $k$ a Householder matrix $\wt H_k\in \R^{(m-k+1)\times (m-k+1)}$ is formed based on the entries $k$ through $m$ of the $k$-th column of $A^{(k-1)}$, i.e.,
$$
x =\begin{bmatrix} a_{k,k}^{(k-1)} \\ a_{k+1,k}^{(k-1)}\\ \vdots \\ a_{m,k}^{(k-1)} \end{bmatrix}\in \R^{m-k+1}, \quad u_k = x+\mathrm{sign}(x_1)\|x\|_2\, e_{\cdot 1},\quad \wt H_k = I-\frac{2}{ u_k^T u_k} u_k u_k^T\in \R^{(m-k+1)\times (m-k+1)}.
$$
Then the Householder matrix $H_k$ is defined by
$$
H_k = \begin{bmatrix} I_{k-1} & 0 \\ 0 & \wt H_k \end{bmatrix}\in\R^{m\times m},
$$
where $I_{k-1}$ is the identity matrix of size $k-1$.
In the new matrix
$$
H_kA^{(k-1)}=: A^{(k)}
$$
the entries below $a_{kk}^{(k)}$ are all zero. The first block of $H_k$ (the identity block) ensures that the first $k-1$ rows and columns of $A^{(k-1)}$ remain unchanged in $A^{(k)}$. This means that the zeros introduced in all previous steps are not destroyed. For an economic implementation, the matrix-matrix multiplication should only be done on the submatrix of $A^{(k-1)}$ of size $(m-k+1)\times (n-k+1)$ consisting of rows $k$ through $m$ and columns $k$ through $n$. This process is continued until step $n$. In the last step we make zeros below the diagonal in the last column of $A^{(n-1)}$ by multiplying with the Householder matrix $H_n$:
$$
H_nA^{(n-1)}=:A^{(n)}.
$$
The resulting matrix $A^{(n)}$ is an upper triangular matrix of size $m\times n$; let us denote it by $R$. We thus have
$$
R = A^{(n)} = H_nA^{(n-1)} = \cdots = \underbrace{H_nH_{n-1}\cdots H_2H_1}_{Q^T}A = Q^TA
$$
where $Q^T$ is an orthogonal matrix because it is the product of $n$ orthogonal Householder matrices. We simply have
$$
A = QR
$$
where $Q = (H_n\cdots H_2H_1)^T=H_1H_2\cdots H_n$.
\begin{remark}[space complexity]
To minimize the storage, $R$ is stored over $A$ in the upper triangular part. The Householder matrices are not required to be stored at all. Instead, the components $2$ through the end of each vector $u_k$ are stored in the respective positions of $A$ (in place of the zeros), and the first components of all $u_k$ vectors are stored in an auxiliary one-dimensional array. The matrix $Q$, if it is needed explicitly, can be formed cheaply in a postprocessing calculation using the stored $u_k$ vectors. See Workout \ref{wo:formQ}. We note that, in a majority of practical applications, it is sufficient to have $Q$ in this factored form, and in many applications $Q$ is not needed at all.
\end{remark}
\vsp
\begin{remark}
In step $k$ of the algorithm the entries of the submatrix of $A$ containing rows $k$ through $m$ and columns $k$ through $n$, denoted by $A(k : m, k : n )$, are updated and stored over the corresponding entries of $A$ via
\begin{equation}\label{QR:updatingH}
\begin{split}
A(k:m,k:n) =&\, (I-\frac{2}{u_k^Tu_k}u_ku_k^T)A(k:m,k:n) \\
=&\, A(k:m,k:n) - \beta u_k u_k^TA(k:m,k:n),
\end{split}
\end{equation}
where $\beta = 2/(u_k^Tu_k)$. In a Python code for the QR factorization, in step $k$ the subroutine \verb+HouseProd+ can be called with input arguments $u_k$ and $A(k:m,k:n)$.
\end{remark}
\vsp
\begin{workout}\label{wo:formQ}
Show that it requires about $2n^2(m - n/3)$ flops to compute $R$ in the QR factorization of $A\in\R^{m \times n}$, $m\geqslant n$, using Householder transformations. This cost does not include the explicit construction of $Q$. Show that about $\frac{4}{3}m^3$ flops are required to compute $Q$ explicitly.
\end{workout}
\vsp
The procedure is the same for $m<n$ but finishes after $m-1$ steps. In this case the upper triangular matrix $R$ is of the form $[R_1~~R_2]$ where $R_1$ is an $m\times m$ upper triangular matrix and $R_2$ is a full matrix of size $m\times (n-m)$. Here is an illustration for $m=3$ and $n = 5$:
$$
\begin{array}{cccc}
\begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \end{pmatrix}&=&
\begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}&
\begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \end{pmatrix}\\
A&&Q&R
\end{array}
$$
\ \\
The following Python function computes the QR factorization of an $m\times n$ matrix $A$, for either $m\geqslant n$ or $m<n$. The code handles the cases where only $R$, both $R$ and the $u_k$ vectors, or both $Q$ and $R$ are demanded.
In the second case the Householder vectors $u_k$ are stored in the lower triangular part of the output matrix and in the additional array \verb+u1+.
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
def qrfac(A, mode = 'Q&R'):
    # qrfac(A, mode) computes the QR factorization of a (m x n) matrix A
    # mode: 'R', 'R&u' and 'Q&R'. The default mode is 'Q&R'
    m,n = np.shape(A)
    s = min(m-1,n)
    if mode == 'R':
        for k in range(s):
            u = HouseVec(A[k:,k])
            A[k:,k:] = HouseProd(u,A[k:,k:])
        return np.triu(A)
    elif mode == 'R&u':
        u1 = np.zeros(s)
        for k in range(s):
            u = HouseVec(A[k:,k])
            A[k:,k:] = HouseProd(u,A[k:,k:])
            A[(k+1):,k] = u[1:]
            u1[k] = u[0]
        return A, u1
    elif mode == 'Q&R':
        A,u = qrfac(A,'R&u')
        Q = np.eye(m)
        for k in range(s):
            Q[k:,:] = HouseProd(np.append(u[k],A[(k+1):,k]),Q[k:,:])
        return np.transpose(Q), np.triu(A)
    else:
        print("Input mode types 'R', 'R&u' or 'Q&R' ")
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\subsection{QR factorization using Givens rotations}
The QR factorization of a matrix $A\in\R^{m\times n}$ can also be obtained using Givens rotations in $s = \min\{m-1,n\}$ steps as follows:
\begin{itemize}
\item[-] Step 1: form an orthogonal matrix $G_1 = J(1,m)J(1,m-1)\cdots J(1,2)$ such that $A^{(1)}=G_1A$ has zeros below its $(1,1)$ entry in the first column.
\item[-] Step 2: form an orthogonal matrix $G_2 = J(2,m)J(2,m-1)\cdots J(2,3)$ such that $A^{(2)}=G_2A^{(1)}$ has zeros below its $(2,2)$ entry in the second column.
\item[] $\vdots$
\item[-] Step $k$: form an orthogonal matrix $G_k = J(k,m)J(k,m-1)\cdots J(k,k+1)$ such that $A^{(k)}=G_kA^{(k-1)}$ has zeros below its $(k,k)$ entry in the $k$-th column.
\end{itemize}
The final matrix $A^{(s)}=:R$ is upper triangular, the matrix $Q = G_1^TG_2^T\cdots G_s^T$ is orthogonal, and
$$
A = QR.
$$
To minimize the storage, the matrix $R$ is stored over $A$, and $Q$ is kept implicitly in terms of the Givens parameters (and applied, for example, using the Python function \verb+GivensProd+).
\vsp
\begin{labexercise}
Implement a Python function for QR factorization using Givens rotations. Let the user choose among different modes. Call your function for some matrices and print the outputs.
\end{labexercise}
\vsp
\begin{remark}
The QR factorization with Givens rotations requires $3n^2(m-n/3)$ flops. This does not include the computation of $Q$. This algorithm is thus about $1.5$ times as expensive as the Householder method. However, when the matrix $A$ is sparse or has a special structure with many zeros in its lower triangle, the Givens approach is cheaper. For example, in several applications (such as eigenvalue computation) one needs to find the QR factorization of an upper {\bf Hessenberg matrix}. An upper Hessenberg matrix differs from an upper triangular matrix only by additional nonzero elements on the subdiagonal right below its main diagonal. Since an upper Hessenberg matrix $A\in \R^{n\times n}$ has at most $n-1$ nonzero subdiagonal entries, the QR factorization of $A$ can be obtained by only $n-1$ Givens rotations.
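The claim can be illustrated with a short self-contained sketch (the Givens parameters are computed inline with \verb+np.hypot+ rather than by calling \verb+GivensPar+, and the test matrix is arbitrary): exactly $n-1$ rotations triangularize an upper Hessenberg matrix.

```python
import numpy as np

def hessenberg_qr(H):
    # QR factorization of an upper Hessenberg matrix using n-1 Givens
    # rotations; only rows k and k+1 change at step k.
    R = np.array(H, dtype=float)
    n = R.shape[0]
    Q = np.eye(n)
    for k in range(n - 1):
        a, b = R[k, k], R[k+1, k]
        r = np.hypot(a, b)                # scale-safe sqrt(a^2 + b^2)
        c, s = a/r, b/r
        G = np.array([[c, s], [-s, c]])   # 2x2 rotation zeroing R[k+1,k]
        R[k:k+2, :] = G @ R[k:k+2, :]
        Q[:, k:k+2] = Q[:, k:k+2] @ G.T   # accumulate Q = J(1,2)^T J(2,3)^T ...
    return Q, R

H = np.array([[4., 1., 2., 3.],
              [2., 5., 1., 0.],
              [0., 3., 6., 1.],
              [0., 0., 1., 7.]])
Q, R = hessenberg_qr(H)
print(np.allclose(Q @ R, H))              # True: H = QR
print(np.allclose(np.tril(R, -1), 0))     # True: R is upper triangular
```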
See the following illustration for $n=4$:
$$
\begin{array}{lllllll}
\begin{pmatrix} \times & \times & \times & \times \\ \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \end{pmatrix} & \hspace{-.2cm} \xrightarrow{J(1,2)} & \hspace{-.2cm}
\begin{pmatrix} \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \end{pmatrix} & \hspace{-.2cm} \xrightarrow{J(2,3)} & \hspace{-.2cm}
\begin{pmatrix} \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \\ 0 & 0 & \times & \times \end{pmatrix} & \hspace{-.2cm} \xrightarrow{J(3,4)} & \hspace{-.2cm}
\begin{pmatrix} \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \\ 0 & 0 & 0 & \times \end{pmatrix}
\end{array}
$$
\end{remark}
\vsp
\begin{remark}
It was shown by Wilkinson in 1965 that the computed $\wh Q$ and $\wh R$ with Givens rotations satisfy
$$
\wh R = \wh Q^T(A+E)
$$
where
$$
\|E\|_F\leqslant c\, u\, \|A\|_F,
$$
with $u$ the unit roundoff and $c$ a modest constant. This shows that the algorithm is backward stable.
\end{remark}
\vsp
\subsection{Other algorithms}
The Gram-Schmidt algorithms (classical and modified versions) are alternative algorithms for computing the thin QR factorization of $A$. See \cite{Trefethen-Bau:1997,Heath:2018}.
\vsp
\subsection{QR factorization for solving the least squares problem}
Solving the linear least squares problem using the normal equations has two significant drawbacks: (1) forming $A^TA$ can lead to a loss of information; (2) the condition number of $A^TA$ is the square of that of $A$:
$$
\cond_2(A^TA) = [\cond_2(A)]^2.
$$
We illustrate these points in a couple of examples.
\begin{example}\label{ex:ATAdang}
Let
$$
A = \begin{bmatrix} 1 & 1 \\ \epsilon & 0 \\ 0 & \epsilon \end{bmatrix}
$$
where $\epsilon>0$ is a small real number.
Clearly $A$ has full rank and
$$A^TA = \begin{bmatrix} 1+\epsilon^2 & 1 \\ 1 & 1+\epsilon^2 \\ \end{bmatrix}.$$
In double precision floating point arithmetic, if we let $\epsilon$ be smaller than $10^{-8}$ then $\fl(1+\epsilon^2)=1$ and the computed matrix
$$\fl(A^TA) = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ \end{bmatrix}$$
is indeed singular.
\end{example}
\vsp
\begin{example}
In the matrix $A$ of Example \ref{ex:ATAdang} assume $\epsilon = 10^{-4}$. Then one can show that $\cond_2(A)=\sqrt{2}\times 10^4$ while $\cond_2(A^TA)=2\times 10^8$.
\end{example}
\vsp
In view of the potential numerical difficulties with the normal equations approach, we need an alternative that does not require the formation of the normal system. In this section we explain the use of the QR factorization for this purpose, while in the sequel an alternative approach through the SVD will be discussed. Consider again the least squares problem
$$
\min_{x\in\R^n}\|Ax-b\|_2
$$
with an overdetermined matrix $A\in\R^{m\times n}$. The QR factorization transforms this linear least squares problem into a triangular least squares problem having the same solution. First assume that $A$ has full rank, i.e., $\rankk(A)=n$. Let $A=QR$ be the QR factorization of $A$ and partition $Q=[Q_1\; Q_2]$, where $Q_1$ consists of the first $n$ columns of $Q$, and $R=\begin{bmatrix}R_1 \\ 0 \end{bmatrix}$, where $R_1\in \R^{n\times n}$ is an upper triangular matrix. In the reduced form $A=Q_1R_1$. Since $A$ has full rank, all diagonal entries of $R_1$ are nonzero, so $R_1$ is nonsingular. We can write
\begin{align*}
\|Ax-b\|_2^2 & = \|QR x - b\|_2^2 = \|R x - Q^Tb\|_2^2 = \left\| \begin{bmatrix}R_1 \\ 0 \end{bmatrix}x - \begin{bmatrix}Q_1^Tb \\ Q_2^Tb \end{bmatrix} \right\|_2^2\\
& = \| R_1 x - Q_1^Tb\|_2^2 + \|Q_2^Tb\|_2^2.
\end{align*}
The minimum is attained when the first norm on the right-hand side vanishes, i.e.,
$$R_1x = Q_1^Tb.$$
Since $R_1$ is upper triangular and nonsingular, a simple backward substitution with $\mathcal{O}(n^2)$ flops gives the least squares solution $x$. The residual then is
$$r = \|Ax-b\|_2=\|Q_2^Tb\|_2.$$
If the residual is not important to us, a reduced QR factorization is enough for obtaining the least squares solution.
\vsp
\begin{example}
Consider again Example \ref{ex:height_hills}. The least squares solution to the height measurements can be computed using the QR factorization as below.
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
import numpy as np
A = np.array([[1.,0,0],[0,1.,0],[0,0,1.],[-1.,1.,0],[-1.,0,1.],[0,-1.,1.]])
b = np.array([1237,1941,2417,711,1177,475])
Q,R = qrfac(A)
print('Q =', np.round(Q,4), '\n R =', np.round(R,4))
x = BackSub(R[0:3,:],Q[:,0:3].T@b)
print('x =',np.round(x,4))
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
Note that we also called a backward substitution algorithm for solving upper triangular systems through the Python function \texttt{BackSub}. The code for this function is left as an exercise for the reader. The outputs are
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
Q = [[-0.5774 -0.2041 -0.3536  0.5113  0.4878 -0.0235]
 [ 0.     -0.6124 -0.3536 -0.4878  0.0235  0.5113]
 [ 0.      0.     -0.7071 -0.0235 -0.5113 -0.4878]
 [ 0.5774 -0.4082 -0.      0.6664 -0.1786  0.1551]
 [ 0.5774  0.2041 -0.3536 -0.1551  0.6664 -0.1786]
 [ 0.      0.6124 -0.3536  0.1786 -0.1551  0.6664]]
R = [[-1.7321  0.5774  0.5774]
 [ 0.     -1.633   0.8165]
 [ 0.      0.     -1.4142]
 [ 0.      0.      0.    ]
 [ 0.      0.      0.    ]
 [ 0.      0.      0.    ]]
x = [1236. 1943. 2416.]
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}
\end{example}
\vsp
When $A$ is rank-deficient, i.e., $\rankk(A)<n$, the solution of the least squares problem is not unique; the problem has infinitely many solutions. In this case, the QR factorization of $A$ still exists, but the upper triangular factor $R$ is singular.
This situation usually arises from a poorly designed experiment, insufficient data, or an inadequate model. If one insists on forging ahead as is, a variation of the QR factorization with {\em column pivoting} (next section) can be used to find all the solutions. See Section \ref{sect-lssvd} for an alternative approach via the SVD. Dealing with rank deficiency also enables us to handle underdetermined problems, where $m < n$, since the columns of $A$ are necessarily linearly dependent in that case.
\vsp
\subsection{QR factorization with column pivoting}
In computing the QR factorization, in each step one can interchange the column of the current submatrix having the maximum Euclidean norm with the pivot column. This column pivoting moves the zero pivots to the lower right-hand corner of $R$. The resulting factorization then is suitable for solving rank-deficient least squares problems. Let us describe the algorithm step by step. In step 1, we compute the $2$-norms of all columns of $A$ and interchange the column having maximum norm with the first column by multiplying $A$ with a permutation matrix $P_1$ from the right. Then we apply the first step of the basic QR factorization on $AP_1$ to transform its first column to the form $[\alpha_1,0,\ldots,0]^T$. By either Householder or Givens transformations we have
\begin{equation}\label{qr:pivot-step1}
Q_1AP_1 = \left[\begin{array}{c|ccc}
\alpha_1 & \tilde a_{12} & \cdots & \tilde a_{1n} \\
\hline
0 & \tilde a_{22} & \cdots & \tilde a_{2n} \\
\vdots & \vdots & & \vdots \\
0 & \tilde a_{m2} & \cdots & \tilde a_{mn}
\end{array}\right]=:A^{(1)}
\end{equation}
In step 2, we compute the $2$-norms of all columns of the submatrix obtained by ignoring the first row and column of $A^{(1)}$ and interchange the column having maximum norm with the second column by multiplying $A^{(1)}$ with a permutation matrix $P_2$ from the right.
(Note that when the columns are interchanged, the full columns should be swapped, not just the portions that lie in the submatrix.) Then we apply the second step of the basic QR factorization on $A^{(1)}P_2$:
$$
Q_2A^{(1)}P_2 = \left[\begin{array}{cc|ccc}
\alpha_1 & \tilde a_{12} & \tilde a_{13} &\cdots & \tilde a_{1n} \\
0 & \alpha_2 & \hat a_{23} & \cdots & \hat a_{2n} \\
\hline
0 & 0 & \hat a_{33} & \cdots & \hat a_{3n} \\
\vdots &\vdots &\vdots & & \vdots \\
0 & 0 &\hat a_{m3} & \cdots & \hat a_{mn}
\end{array}\right]=:A^{(2)}
$$
We continue in a similar way for the subsequent steps. If the matrix has full rank $n$, the algorithm terminates after $n$ steps, where in the final step we have
$$
R = A^{(n)} = Q_nA^{(n-1)}P_n = Q_nQ_{n-1}A^{(n-2)}P_{n-1}P_n = \cdots = Q_nQ_{n-1}\cdots Q_1AP_1\cdots P_{n-1}P_n=: Q^TAP
$$
or $AP = QR$ where $Q = Q_1^T\cdots Q_n^T$ and $P = P_1\cdots P_n$. Here $AP$ is indeed a matrix obtained from $A$ by permuting some of its columns, and $R$ is upper triangular and nonsingular. However, if $A$ is rank-deficient there will come a step at which we are forced to take $\alpha_k=0$. In this step all of the entries of the remaining submatrix are zero. Suppose this occurs after $r$ steps have been completed and
$$
Q_rQ_{r-1}\cdots Q_1AP_1\cdots P_{r-1}P_r = \left[\begin{array}{cccc|ccc}
\alpha_1 & \times & \cdots & \times & \times & \cdots & \times \\
0 & \alpha_2 & \cdots & \times & \times & \cdots & \times \\
\vdots & & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \alpha_r & \times & \cdots & \times\\
\hline
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\end{array}\right]=: \begin{bmatrix} R_{11} & R_{12} \\ 0 & 0 \end{bmatrix}=R
$$
where $R_{11}\in \R^{r\times r}$ is upper triangular and nonsingular. Its main diagonal entries $\alpha_1,\alpha_2,\ldots,\alpha_r$ are all nonzero. Again we have $AP = QR$ where $Q = Q_1^T\cdots Q_r^T$ and $P = P_1\cdots P_r$.
Since $\rankk(R)=r$ we have $\rankk(A)=r$. This result is summarized in the following theorem.
\begin{theorem}
Given a matrix $A\in \R^{m\times n}$ with $m\geqslant n$ and $\rankk(A)= r\leqslant n$, there exist a permutation matrix $P\in \R^{n\times n}$, an orthogonal matrix $Q\in \R^{m\times m}$, and an upper triangular matrix $R\in \R^{m\times n}$ of the form $R = \begin{bmatrix} R_{11} & R_{12} \\ 0 & 0 \end{bmatrix}$ with $R_{11}\in \R^{r\times r}$ nonsingular, such that
$$
AP = QR.
$$
\end{theorem}
\vsp
Now, we address the question of how this factorization can be used to solve the least squares problem. First, we note that the permutation matrix $P$ is orthogonal, as it is obtained by interchanging columns of the identity matrix. Moreover, $P^{-1}=P^T$, and applying $P^T$ from the left to a vector $x$ interchanges the corresponding entries of $x$. Assume that $\rankk (A)=r<n$ and $AP = QR$ is the QR factorization of $A$ with column pivoting. Partition $Q=[Q_1\; Q_2]$ where $Q_1$ consists of the first $r$ columns of $Q$. Using the change of variables $y=P^Tx$ and letting $y = [\wt y,\; \wh y]$ with $\wt y\in \R^{r}$, we can write
\begin{align*}
\|Ax-b\|_2^2 & = \|APP^Tx-b\|_2^2 = \|QRy - b\|_2^2 = \|R y - Q^Tb\|_2^2 \\
& = \left\| \begin{bmatrix}R_{11} & R_{12} \\ 0 & 0 \end{bmatrix}\begin{bmatrix}\wt y \\ \wh y \end{bmatrix} - \begin{bmatrix}Q_1^Tb \\ Q_2^Tb \end{bmatrix} \right\|_2^2= \| R_{11} \wt y + R_{12}\wh y - Q_1^Tb\|_2^2 + \|Q_2^Tb\|_2^2.
\end{align*}
There are many choices of $y = [\wt y,\; \wh y]$ for which the first term on the right-hand side is zero. The second term is independent of $y$ (and thus of $x$) and determines the residual of the least squares solutions. Recall that $R_{11}$ is nonsingular. For any choice of $\wh y\in \R^{n-r}$ there exists a unique $\wt y\in \R^{r}$ such that
$$
R_{11}\wt y = Q_1^Tb - R_{12}\wh y.
$$
Since $R_{11}$ is upper triangular, $\wt y$ can be calculated using a backward substitution.
Finally,
$$
x = Py
$$
for $y = [\wt y,\; \wh y]$ is a solution to the least squares problem. Since $\wh y$ is arbitrary, we obtain infinitely many least squares solutions $x$.
\vsp
\begin{remark}
In practice we often do not know the rank of $A$ in advance. After $r$ steps of the QR factorization with column pivoting, $A$ will have been transformed to the form $\begin{bmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{bmatrix}$ where $R_{11}$ is nonsingular. If $\rankk(A) = r$, then in principle $R_{22} = 0$ and the algorithm terminates. However, in the presence of roundoff errors $R_{22}$ is not exactly zero. In a practical computation we might assign a {\bf numerical rank} $r$ to $A$ if the norm of the largest column of $R_{22}$ is less than a prescribed tolerance. The tolerance should be a small value depending on the accuracy of the entries and the scale of the original matrix. It is usually set to $\delta = 10^{-t}\| A \|_\infty$ when the entries of $A$ are correct to $t$ decimal digits. This approach generally works well, but unfortunately it is not $100\%$ reliable, because there exist nearly rank-deficient triangular matrices with relatively large diagonal entries; the well-known {\em Kahan matrix} is an example. We will address a more reliable approach to detecting rank deficiency using the SVD in a forthcoming section.
\end{remark}
\vsp
\begin{remark}
Another issue worth addressing is that if the norms of the columns are computed in the straightforward manner at each step, the total cost of the norm computations is about $mn^2 - n^3/3$ flops. This cost can be reduced substantially for steps $2,3,\ldots, r$ by using information from the previous steps. For example, let $\kappa_1,\kappa_2,\ldots,\kappa_n$ denote the squares of the norms of the columns of $AP_1$ in the first step (thus $\kappa_1$ is the largest value). Computing all $\kappa_j$ values costs $2mn$ flops. For the second step of the factorization recall \eqref{qr:pivot-step1}.
Since $Q_1$ is orthogonal, the Euclidean norms of the columns of $A^{(1)}$ are identical with those of $AP_1$. So the squares of the norms of the columns of the submatrix can be calculated as
$$
\kappa_j^{(1)} = \kappa_j - \tilde a_{1j}^2, \quad j=2,3,\ldots,n
$$
using $2(n-1)$ flops instead of the $2(m - 1)(n -1)$ flops required by the straightforward calculation. One can continue to the other steps similarly, but before starting a new step $k$, the corresponding column swapping should be applied to $\kappa^{(k-1)}$ as well. The total cost then is
$$
2mn + 2\sum_{k=2}^{r}(n-k) = 2mn + 2n(r-1) - r(r+1) + 2 \approx 2n(m+r) -r^2.
$$
For the case $r=n$ the total cost is about $2mn + n^2$, which shows a remarkable reduction compared with the cost of the straightforward algorithm.
\end{remark}
\vsp
\begin{workout}
Show that after the QR factorization with column pivoting, the main diagonal entries of $R_{11}$ satisfy $|\alpha_1|\geqslant |\alpha_2|\geqslant \cdots \geqslant |\alpha_r|$.
\end{workout}
\vsp
\begin{labexercise}
Develop a Python function for the QR factorization with column pivoting. Consider all the aspects described above. Then call your function to factorize some specific matrices, and verify your output by checking the equality $AP=QR$. Finally, use your function for solving the least squares problem $Ax\cong b$ for a given matrix $A$ and right-hand side vector $b$. The coefficient matrix may be either of full rank or rank-deficient. Assume that the matrix rank is unknown, so compute it numerically by considering the diagonal entries of the $R$ factor.
\end{labexercise}
\vsp
\section{Projectors*}\label{sect:projectors}
In Figure \ref{fig:bestapp} we observed that the vector $y = Ax\in \range(A)$ closest to $b$ in the $2$-norm is the {\em orthogonal projection} of $b$ onto $\range(A)$.
This observation leads to an algebraic characterization of least squares solutions via the notion of projectors\footnote{The reader can skip this section, as the following sections are independent of its content.}.
\begin{definition}
A projector is a square matrix $P$ that satisfies
\begin{equation*}\label{def:projector}
P^2 = P.
\end{equation*}
\end{definition}
\begin{center}
\begin{center}
\begin{tikzpicture}[scale = 0.8]
\tkzDefPoint(0,0){A} \tkzDefPoint(6,0){B} \tkzDefPoint(10,2.5){C} \tkzDefPoint(4,2.5){D}
\tkzDefPoint(4.5,5){E} \tkzDefPoint(7,1.75){F}
\tkzDrawPolygon[thick,fill=gray!10](A,B,C,D)
\node[color=black] at (2,0.5) {$\mathrm{range}(P)$};
\filldraw (4.5,5) circle (2pt);
\filldraw (7,1.75) circle (2pt);
\node[color=black] at (4.8,5) {$v$};
\node[color=black] at (7.5,1.75) {$Pv$};
\draw[->,dashed,color=red,thick] (4.5,5)--(7,1.75)node[right] {};
\tkzLabelSegment[above,pos=.4,sloped](E,F){$Pv-v$}
\end{tikzpicture}
\end{center}
\captionof{figure}{A non-orthogonal projector}\label{fig:non-orth-project}
\end{center}
This definition includes both orthogonal and nonorthogonal projectors. In Figure \ref{fig:non-orth-project}, the vector $v\in\R^3$ is projected (non-orthogonally) onto the two-dimensional subspace $\range(P)$. In this figure, $Pv$ is the shadow cast by the vector $v$ if one shines a light from the north-west direction onto the subspace $\range(P)$. It is clear that if $v\in \range(P)$ then $Pv=v$ (i.e., $v$ lies exactly on its own shadow). In fact, if $v\in \range(P)$ then there exists a vector $x$ such that $v = Px$, and $Pv = P^2x = Px = v$. We also observe from the figure that if $Pv\neq v$ then the direction in which the light shines is $Pv-v$. Applying $P$ to the light direction we obtain
$$
P(Pv-v)=P^2v-Pv = Pv-Pv = 0,
$$
which means $Pv-v\in\nul(P)$. The direction of the light depends on $v$, but it is always described by a vector in $\nul(P)$. If $P$ is a projector then $(I - P)^2=I-2P+P^2=I-P$, which means that $I-P$ is also a projector.
It is called the {\em complementary projector} to $P$. The matrix $I-P$ projects onto $\range(I-P)$ or equivalently onto $\nul(P)$ because: \begin{lemma}\label{lem:project1} If $P$ is a projector then $\range(I-P) = \nul(P)$ and $\nul(I-P) = \range(P)$. \end{lemma} \proof If $v\in\nul(P)$ then $Pv=0$ so $(I-P)v=v$ which means $v\in \range(I-P)$. Conversely, for any $v$, we have $(I -P)v = v - Pv \in \nul(P)$. By writing $P = I - (I - P)$ we derive the complementary fact $\nul(I-P) = \range(P)$. $\qed$ \begin{lemma} If $P$ is a projector then $\range(P) \cap \nul(P) = \{0\}$. \end{lemma} \proof Any vector $v$ in both sets $\nul(I-P)$ and $\nul(P)$ satisfies $v - Pv =0$ and $Pv=0$ which gives $v=0$. So, $\nul(I - P) \cap \nul(P) = \{0\}$. Then the result follows by Lemma \ref{lem:project1}. $\qed$ \vsp We conclude that any projector $P\in\R^{m\times m}$ separates $\R^m$ into two spaces $\range(P)$ and $\nul(P)$ in the sense that any vector $v\in \R^m$ can be decomposed to $v = x+y$ where $x\in\range(P)$ and $y\in\nul(P)$. Indeed $x=Pv$ and $y=(I-P)v$ because $v=Pv+(I-P)v$. In this scenario we may write $$ \R^m = \range(P)+\nul(P). $$ Conversely, if $S_1$ and $S_2$ are two subspaces of $\R^m$ such that $S_1\cap S_2=\{0\}$ and $\R^m=S_1+S_2$ then there is a projector $P$ such that $\range(P) = S_1$ and $\nul(P) = S_2$. We say that $P$ is the projector onto $S_1$ along $S_2$. \begin{definition} A projector $P$ is called an {\bf orthogonal projector} if it is symmetric, i.e. $P=P^T$. \end{definition} In Figure \ref{fig:orth-project} an orthogonal projection is illustrated. 
\begin{center} \begin{center} \begin{tikzpicture}[scale = 0.8] \tkzDefPoint(0,0){A} \tkzDefPoint(6,0){B} \tkzDefPoint(10,2.5){C} \tkzDefPoint(4,2.5){D} \tkzDefPoint(5,5){E} \tkzDefPoint(5,1){F} \tkzDrawPolygon[thick,fill=gray!10](A,B,C,D) \node[color=black] at (2,0.5) {$\mathrm{range}(P)$}; \filldraw (5,5) circle (2pt); \filldraw (5,1) circle (2pt); \node[color=black] at (5.3,5) {$v$}; \node[color=black] at (5.5,1) {$Pv$}; \draw[dashed,color=red,thick] (5,5)--(5,1)node[right] {}; \tkzLabelSegment[above,pos=.4,sloped](E,F){$Pv-v$} \draw[color=red,thick] (4.75,.98)--(4.75,1.25) node[black, right] {}; \draw[color=red,thick] (4.75,1.25)--(5,1.30) node[black, right] {}; \end{tikzpicture} \end{center} \captionof{figure}{An orthogonal projection}\label{fig:orth-project} \end{center} For an orthogonal projector $P$ the subspaces $\range(P)$ and $\nul(P)$ are orthogonal because the inner product between a vector $Px\in \range(P)$ and a vector $(I-P)y\in \range(I-P)=\nul(P)$ is zero: $$ (Px)^T(I-P)y = x^TP^T(I-P)y = x^TP(I-P)y = x^T(P-P^2)y = 0. $$ We use the notations $P_{\perp}:=(I-P)$ and $\range(P)^{\perp}:=\nul(P)$. Any vector $v\in\R^m$ can then be expressed as the sum $$ v = (P+(I-P))v = Pv + P_{\perp}v $$ of mutually orthogonal vectors, one in $\range(P)$ and the other in $\range(P)^{\perp}$. We also have the Pythagorean relation (prove it!) $$ \|v\|_2^2 = \|Pv\|_2^2 +\|P_\perp v\|_2^2. $$ This concept can be applied to find the solution of the least squares problem \eqref{leastsquaredef}. If $P$ is an orthogonal projector onto $\range(A)$ (find $P$ such that $\range(P)=\range(A)$) then we have \begin{equation}\label{LS_project_sol} \begin{split} \|b-Ax\|_2^2 &= \|P(b-Ax) + P_{\perp}(b-Ax)\|_2^2 \\ & = \|P(b-Ax)\|_2^2 + \|P_\perp(b-Ax)\|_2^2\\ & =\|Pb-Ax\|_2^2 + \|P_\perp b\|_2^2. \end{split} \end{equation} The last equality holds because $PA=A$ and $P_\perp A=0$.
Since the second norm on the right does not depend on $x$, the residual is minimized by minimizing the first norm. But the first norm is minimized by the ideal solution $x$ satisfying the overdetermined, but consistent, system \begin{equation}\label{LS-AxPb} Ax = Pb. \end{equation} In fact, such an $x$ exists because $Pb\in \range(P)=\range(A)$. If we multiply both sides by $A^T$ we have $A^TAx = A^TPb = A^TP^Tb = (PA)^Tb = A^Tb$, giving $$ A^TAx = A^Tb, $$ which is the normal equation we already derived. The norm $\|P_\perp b\|_2$ in \eqref{LS_project_sol} is the norm of the residual of the least squares solution. How can we construct the projector $P$ explicitly? If $A$ is a full-rank matrix then $A^TA$ is nonsingular and \begin{equation*} P:=A(A^TA)^{-1}A^T \end{equation*} is an orthogonal projector onto $\range(A)$, because it is symmetric and idempotent and $\range(P)=\range(A)$ (prove!). This means that the vector $y\in \range(A)$ closest to $b$ is $$ \tilde b = Pb = A(A^TA)^{-1}A^Tb=Ax $$ where $x$ is the solution of the least squares problem given by the normal equation. We can write $b$ as a sum $$ b = Pb + P_\perp b = Ax + (b-Ax) = \tilde b +r $$ of two mutually orthogonal vectors $\tilde b \in\range(A)$ and $r\in \range(A)^{\perp}$. \vsp \begin{example} Consider Examples \ref{ex:height_hills} and \ref{ex:height_hills2}. For the solution $x= [1236, 1943, 2416]^T$, the residual is $$ r = b-Ax = [1,-2,1,4,-3,2]^T $$ which is orthogonal to all columns of $A$, i.e., $A^Tr = 0$.
The orthogonal projector onto $\range(A)$ is $$ P = A(A^TA)^{-1}A^T = \frac{1}{4}\begin{bmatrix} 2 & 1 &1 &-1&-1& 0 \\ 1 & 2 &1 &1 &0 & -1\\ 1 & 1 &2 &0 &1 & 1 \\ -1 & 1 &0 &2 &1 & -1\\ -1 & 0 &1 &1 &2 & 1 \\ 0 &-1 &1 &-1&1 & 2 \end{bmatrix} $$ and the orthogonal projector onto $\range(A)^{\perp}$ is $$ P_{\perp}=I-P = \frac{1}{4}\begin{bmatrix} 2 & -1 &-1& 1 &1& 0 \\ -1 & 2 &-1 &-1 &0& 1\\ -1 & -1 & 2 &0 &-1& -1\\ 1 & -1 &0 &2 &-1& 1\\ 1 & 0 &-1 &-1 &2 &-1\\ 0 &1& -1 &1& -1& 2 \end{bmatrix}, $$ so that $b = Pb + P_\perp b = \tilde b + r$. \end{example} \vsp An alternative way to define $P$ is to let $Q\in\R^{m\times n}$ be a matrix whose columns form an orthonormal basis (i.e., $Q^TQ = I$) for $\range(A)$. Then $$P := QQ^T$$ is symmetric and idempotent, so it is an orthogonal projector onto $\range(Q) = \range(A)$. Then from \eqref{LS-AxPb} we have $Ax = QQ^Tb$. Multiplying both sides by $Q^T$ gives the square system $ Q^TAx = Q^T b. $ We will see later how to compute the matrix $Q$ in such a way that this system is upper triangular and therefore easy to solve. \vsp \section{Singular Value Decomposition (SVD)}\label{sect:svd} Let us continue with one of the most important and practical matrix decompositions in numerical linear algebra. Our geometric presentation here is motivated by \cite{Trefethen-Bau:1997}. \subsection{Geometric interpretation} The SVD of a matrix $A$ can be described by the following geometric fact: \begin{quote} The image of the unit sphere $S = \{x\in\R^n: \|x\|_2=1\}$ in $\R^n$ under any matrix $A\in \R^{m \times n}$ is the hyperellipsoid $E = \{Ax: \|x\|_2=1\}$ in $\R^m$. \end{quote} A hyperellipsoid is just the $m$-dimensional generalization of an ellipse, i.e., the surface obtained by stretching the unit sphere in $\R^m$ by some factors $\sigma_1,\ldots, \sigma_m$ (possibly zero) in some orthonormal directions $u_1,\ldots,u_m\in \R^m$.
The vectors $\{\sigma_ku_k\}$ are the {\em principal semiaxes} of the hyperellipsoid, with lengths $\sigma_1,\ldots,\sigma_m$. If $A$ has rank $r$, exactly $r$ of the lengths $\sigma_k$ will turn out to be nonzero, and in particular, if $m \geqslant n$, at most $n$ of them will be nonzero. \begin{figure}[!th] \begin{center} \begin{tikzpicture} \draw[black, thick] (-7,0) circle (1.5); \draw[rotate=30,thick] (0,0) ellipse (2.5 and 1); \draw (-7,-2)--(-7,2) node[right] {}; \draw (-9,0)--(-5,0) node[right] {}; \draw (0,-2)--(0,2) node[right] {}; \draw (-2,0)--(2,0) node[right] {}; \draw[->,blue,thick] (-7,0)--(-7+1.06,1.06) node[right] {}; \draw[->,blue,thick] (-7,0)--(-7-1.06,1.06) node[right] {}; \draw[->,blue,thick] (-7,0)--(-7+1.06,1.06) node[midway,right] {$v_1$}; \draw[->,blue,thick] (-7,0)--(-7-1.06,1.06) node[midway,left] {$v_2$}; \draw[->,red,thick] (0,0)--(-2.165,-1.25) node[pos=.6,right] {$\hspace{.1cm}\sigma_1u_1$}; \draw[->,red,thick] (0,0)--(0.5,-0.866) node[pos=.35,right] {$\sigma_2u_2$}; \node[color=black] at (-5.5,-1) {$S$}; \node[color=black] at (1.2,-1) {$AS$}; \end{tikzpicture} \end{center} \caption{SVD of a $2\times 2$ matrix $A$} \end{figure} Let $S$ be the unit sphere in $\R^n$ and $A\in \R^{m\times n}$ with $m\geqslant n$. For simplicity assume that $A$ has full rank $n$. The image $AS$ is a hyperellipsoid in $\R^m$. We define the $n$ {\bf singular values} of $A$ as the lengths of the $n$ principal semiaxes of $AS$, denoted by $\sigma_1,\sigma_2,\ldots,\sigma_n$. We assume that the singular values are numbered in decreasing order, $$ \sigma_1\geqslant \sigma_2 \geqslant \cdots \geqslant \sigma_n\geqslant 0. $$ Then we define the {\bf left singular vectors} of $A$ as the directions of the principal semiaxes of $AS$. These are orthonormal vectors $u_1,u_2,\ldots,u_n$ corresponding to the singular values $\sigma_1,\ldots,\sigma_n$. This means that the vector $\sigma_ku_k$ is the $k$-th largest semiaxis of $AS$.
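This geometric description is easy to check numerically. The following sketch (our own illustration; the $3\times 2$ test matrix is an assumption, chosen to match the worked example later in this section) samples the unit circle in $\R^2$, maps it through $A$, and compares the longest and shortest semiaxes of the image with the singular values computed by \verb+numpy.linalg.svd+.

```python
import numpy as np

# Sample the unit circle S in R^2 densely.
t = np.linspace(0, 2 * np.pi, 20000)
S = np.vstack([np.cos(t), np.sin(t)])                # columns are unit vectors

A = np.array([[3.0, 2.0], [2.0, 3.0], [2.0, -2.0]])  # test matrix (assumption)
AS = A @ S                                           # image of the circle: an ellipse in R^3

lengths = np.linalg.norm(AS, axis=0)                 # ||Ax|| over the sampled circle
sigma = np.linalg.svd(A, compute_uv=False)

# The largest and smallest stretches agree with sigma_1 and sigma_2.
print(lengths.max(), sigma[0])                       # both approximately 5
print(lengths.min(), sigma[1])                       # both approximately 3
```

The sample direction achieving the maximum also approximates the first right singular vector $v_1$.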
The {\bf right singular vectors} of $A$ are the orthonormal vectors $v_1,v_2,\ldots,v_n$ that are the preimages of the principal semiaxes of $AS$, numbered so that \begin{equation}\label{svd:expand} Av_k = \sigma_k u_k, \quad k=1,2,\ldots,n. \end{equation} Equations \eqref{svd:expand} can be written in the compact form $AV = U_1 \Sigma_1 $ where $V=[v_1\; v_2\cdots v_n]\in\R^{n\times n}$, $ U_1=[u_1\; u_2\cdots u_n]\in\R^{m\times n}$ and $\Sigma_1 = \mathrm{diag}(\sigma_1,\ldots,\sigma_n)\in \R^{n\times n}$. Since $V$ is an orthogonal matrix, we may write \begin{equation}\label{svd:reduce} A = U_1 \Sigma_1 V^T \end{equation} which is called the {\em reduced SVD} of $A$. Here is an illustration for the case $m=5$ and $n=3$: $$ \begin{array}{ccccc} \begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}&=& \begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}& \begin{pmatrix} \sigma_1 & 0 & 0\\ 0 &\sigma_2 & 0\\ 0 & 0 & \sigma_3 \end{pmatrix}& \begin{pmatrix} \times & \times & \times\\ \times & \times & \times\\ \times & \times & \times \end{pmatrix} \\ A&& U_1&\Sigma_1 &V^T \end{array} $$ \newline The term {\em reduced} and the subscript $1$ on $\Sigma_1$ and $U_1$ are used to distinguish the factorization \eqref{svd:reduce} from the more standard {\em full SVD}. Since the columns of $ U_1$ are $n$ orthonormal vectors in $\R^m$ and $m\geqslant n$, they do not form a basis for $\R^m$ unless $m=n$. We can adjoin an additional $m-n$ columns to $ U_1$ to extend it to an orthogonal matrix $U\in \R^{m\times m}$. For the product to remain unaltered, the last $m-n$ columns of $U$ should be multiplied by zeros. So we extend $ \Sigma_1$ by $m-n$ rows of zeros to get the $m\times n$ matrix $\Sigma$.
Now we can write the full SVD as \begin{equation}\label{svd:full} A = U\Sigma V^T \end{equation} where $U\in\R^{m\times m}$ and $V\in\R^{n\times n}$ are orthogonal matrices and $\Sigma\in\R^{m\times n}$ is a diagonal matrix that carries the nonnegative singular values $\sigma_1,\ldots,\sigma_n$ on its diagonal. The illustration for $m=5$ and $n=3$ is shown below. $$ \begin{array}{ccccc} \begin{pmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{pmatrix}&=& \begin{pmatrix} \times & \times & \times & \times& \times \\ \times & \times & \times & \times& \times \\ \times & \times & \times & \times& \times \\ \times & \times & \times & \times& \times \\ \times & \times & \times & \times& \times \end{pmatrix}& \begin{pmatrix} \sigma_1 & 0 & 0\\ 0 &\sigma_2 & 0\\ 0 & 0 & \sigma_3 \\ 0 & 0&0\\ 0& 0&0 \end{pmatrix}& \begin{pmatrix} \times & \times & \times\\ \times & \times & \times\\ \times & \times & \times \end{pmatrix} \\ A&& U& \Sigma &V^T \end{array} $$ \newline If $A$ is rank-deficient, say $\rankk(A)=r<n$, then the factorization \eqref{svd:full} is still valid. The only difference is that now $\sigma_{r+1} =\cdots = \sigma_n =0$ and the last $m-r$ columns of $U$ are `silent'. This means that only $r$ singular values and $r$ left singular vectors of $A$ are determined by the geometry of the hyperellipsoid. Likewise, the last $n-r$ columns of $V$ have no effect on the factorization. Let us now state the SVD formally and prove it mathematically. We prove the existence and uniqueness (under some conditions) of the SVD for any complex matrix $A$. \begin{theorem}\label{thm:svd} Let $m$ and $n$ be two arbitrary positive integers.
Every matrix $A\in\C^{m\times n}$ has a full SVD of the form $$ A = U\Sigma V^* $$ where $U\in\C^{m\times m}$ and $V\in\C^{n\times n}$ are unitary matrices and $\Sigma\in\R^{m\times n}$ is a real diagonal matrix that carries the singular values $\sigma_1\geqslant\sigma_2\geqslant \cdots \geqslant \sigma_n\geqslant 0$ on its diagonal. Furthermore, the singular values are uniquely determined, and if $A$ is square and the singular values are distinct, the left and right singular vectors are uniquely determined up to complex signs. \end{theorem} \proof \cite{Trefethen-Bau:1997} For the proof of existence, let $\sigma_1 := \|A\|_2$. There exists a vector $v_1\in\C^n$ with $\|v_1\|_2=1$ such that $\|Av_1\|_2 = \sigma_1$. If $\sigma_1=0$ then $A=0$ and the factorization is trivial, so assume $\sigma_1>0$ and let $u_1 := Av_1/\sigma_1\in \C^m$, which is a unit vector. Consider any extension of $v_1$ to an orthonormal basis $\{v_1,v_2,\ldots,v_n\}$ for $\C^n$ and any extension of $u_1$ to an orthonormal basis $\{u_1,u_2,\ldots,u_m\}$ for $\C^m$. Let $V_1$ and $U_1$ denote the unitary matrices with columns $v_j$ and $u_j$. We can write $$ U_1^*AV_1 = \begin{bmatrix} u_1^* \\ \wt U_1^* \end{bmatrix} A \begin{bmatrix} v_1 & \wt V_1 \end{bmatrix} = \begin{bmatrix} \sigma_1 & u_1^*A \wt V_1 \\ 0 & \wt U_1^* A\wt V_1 \end{bmatrix} =: \begin{bmatrix} \sigma_1 & w^* \\ 0 & B \end{bmatrix}=:S, $$ where $0$ is a zero column vector of dimension $m-1$, $w\in \C^{n-1}$ and $B\in \C^{(m-1)\times(n-1)}$. Now we show that $w$ is indeed zero. In fact, since the first entry of the product vector below is $\sigma_1^2+w^*w$, we have $$ \left\|\begin{bmatrix} \sigma_1 & w^* \\ 0 & B \end{bmatrix} \begin{bmatrix} \sigma_1 \\ w \end{bmatrix} \right\|_2 \geqslant \sigma_1^2+w^*w =(\sigma_1^2+w^*w)^{1/2}\left\| \begin{bmatrix} \sigma_1 \\ w \end{bmatrix} \right\|_2, $$ which shows that $\|S\|_2\geqslant (\sigma_1^2+w^*w)^{1/2}$. Since $S$ and $A$ are unitarily equivalent, we know that $\|S\|_2=\|A\|_2 = \sigma_1$. This shows that $w=0$, and $$ U_1^*AV_1 = \begin{bmatrix} \sigma_1 & 0 \\ 0 & B \end{bmatrix}. $$ The proof is now completed by induction; the case $m=n=1$ is obvious.
Let $B = U_2\Sigma_2 V_2^*$ be the full SVD of $B$, which exists by the induction hypothesis. We can write $$ U_1^*AV_1 = \begin{bmatrix} \sigma_1 & 0 \\ 0 & U_2\Sigma_2 V_2^* \end{bmatrix}= \begin{bmatrix} 1 & 0 \\ 0 & U_2 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & V_2^* \end{bmatrix} $$ which gives the full SVD of $A$ via $$ A = \underbrace{U_1 \begin{bmatrix} 1 & 0 \\ 0 & U_2 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} \sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & V_2^* \end{bmatrix}V_1^* }_{V^*}, $$ which completes the proof of existence. On the other hand, we have $$ A^*AV = V\Sigma^T\Sigma ,\quad \mbox{or}\quad A^*A v_k = \sigma_k^2 v_k, \quad k= 1,2,\ldots,n, $$ which shows that the $\sigma_k^2$ are eigenvalues of the Hermitian matrix $A^*A$. This proves the uniqueness of the singular values. $\qed$ For the remainder of this lecture we will assume, without loss of generality, that $m \geqslant n$, because if $m < n$ we may consider the SVD of $A^T$: if the SVD of $A^T$ is $U\Sigma V^T$, then the SVD of $A$ is $V\Sigma^TU^T$. Besides, we will assume that $A$ is a real matrix, although all results carry over to the complex case. \vsp \subsection{Properties of SVD} Several matrix properties, including the rank, norms and condition number, can be extracted from the SVD. In addition, the SVD provides orthonormal bases for $\range(A)$ and $\nul(A)$ and orthogonal projections onto $\range(A)$ and $\nul(A)$. \begin{theorem}\label{thm:svdper} Let $\sigma_1\geqslant \sigma_2\geqslant \cdots \geqslant \sigma_n\geqslant 0$ be the singular values of $A\in \R^{m\times n}$ with $m\geqslant n$. \begin{enumerate} \item $\|A\|_2 = \sigma_1$, \item $\|A\|_F = \sqrt{\sigma_1^2+\cdots + \sigma_n^2}$, \item $\|A^{-1}\|_2=\frac{1}{\sigma_n}$ when $m=n$ and $A$ is nonsingular, \item $\mathrm{cond}_2(A)=\frac{\sigma_1}{\sigma_n}$ if $A$ is nonsingular, \item $\rankk(A)=$ the number of nonzero singular values.
\end{enumerate} \end{theorem} \proof The proof is left as an exercise. Use the facts that orthogonal matrices preserve the $2$-norm and the Frobenius norm, as well as the rank of a matrix. See Workout \ref{wo:svdper}. \vsp \begin{workout}\label{wo:svdper} Prove Theorem \ref{thm:svdper}. \end{workout} \vsp The condition number of a square nonsingular matrix $A$ is defined by $ \cond_2(A)=\|A\|_2\|A^{-1}\|_2 $. We observed that it can be obtained by dividing the largest singular value by the smallest one. One can generalize this notion to rectangular matrices by defining $$ \cond_2(A) := \frac{\sigma_1}{\sigma_n}, \quad A\in \R^{m\times n}, $$ provided that $A$ has full rank ($\sigma_n\neq 0$). The condition number becomes large for nearly rank-deficient matrices, i.e., matrices with very small $\sigma_n$. When $A$ is rank-deficient, i.e., when $\sigma_n=0$, the condition number can be defined to be infinity. However, in practical applications (for example in least squares settings) a more useful definition of the condition number of a rank-deficient matrix $A\in\R^{m\times n}$, $m\geqslant n$, is \begin{equation}\label{svd:conddef} \cond_2(A) := \frac{\sigma_1}{\sigma_r}, \end{equation} where $\rankk(A)=r$ (i.e., $\sigma_{r+1}=\cdots=\sigma_n=0$). This generalizes the definition of the condition number to rank-deficient matrices. \begin{remark} In the presence of noisy data and roundoff errors it is more practical to talk about the {\bf numerical rank} rather than just the rank of a matrix. In such cases we should set a criterion for accepting a computed singular value as `zero'. For example, we may consider singular values of order $\texttt{eps}$ (machine epsilon) to be zero. However, the norm of the matrix and the errors in the data (entries) should also be taken into account.
Here is a practical criterion \cite{Datta:2010, Golub-VanLoan:2010}: \begin{quote} A computed singular value is accepted to be zero if it is less than or equal to $\delta=10^{-t}\|A\|_\infty$ where the entries of $A$ are correct to $t$ decimal digits. \end{quote} Using this criterion, $A$ has numerical rank $r$ if the computed singular values $\wh \sigma_1,\ldots,\wh \sigma_n$ satisfy $$ \wh\sigma_1\geqslant \wh\sigma_2 \geqslant \cdots \geqslant \wh\sigma_r > \delta \geqslant \wh\sigma_{r+1} \geqslant \cdots \geqslant \wh \sigma_n. $$ This means that to determine a numerical rank of a matrix one needs to count the large singular values only. \end{remark} \vsp As pointed out, SVD provides orthogonal bases for $\range(A)$ and $\nul(A)$ as well as projections to these subspaces. The following theorem reveals this. \vsp \begin{theorem} Let $A=U\Sigma V^T$ be the SVD of $A\in\R^{m\times n}$ with $m\geqslant n$, and let $\rankk(A)=r$. Partition $$ U = [U_1\; U_2],\quad V=[V_1\; V_2] $$ where $U_1$ and $V_1$ consist of first $r$ columns of $U$ and $V$, respectively. Then \begin{enumerate} \item The columns of $U_1$ form a basis for $\range(A)$. \item The columns of $V_2$ form a basis for $\nul(A)$. \item $U_1U_1^T$ is a projection onto $\range(A)$. \item $U_2U_2^T$ is a projection onto $\nul(A^T)$. \item $V_1V_1^T$ is a projection onto $\range(A^T)$. \item $V_2V_2^T$ is a projection onto $\nul(A)$. \end{enumerate} \end{theorem} \proof The proofs are straightforward; however, we refer to \cite{Datta:2010} or similar resources. You may skip items 3--6 if you have decided to skip section \ref{sect:projectors}. $\qed$ \vsp \subsection{Computation of SVD} SVD has a tight connection to the well-known eigendecomposition of symmetric matrices. As we know, if $B\in \R^{n\times n}$ is a symmetric matrix then all its eigenvalues $\lambda_k$ are real and the corresponding eigenvectors $v_k$ are orthonormal. 
From $Bv_k = \lambda_k v_k$ for $k=1,\ldots,n$, we can write $$ B = VDV^T $$ where $V = [v_1, v_2,\ldots, v_n]$ is orthogonal and $D=\text{diag}\{\lambda_1,\lambda_2,\ldots,\lambda_n\}$ is diagonal. This decomposition is called the {\bf eigendecomposition}, and it has limited applications in practice as it is only available for symmetric matrices\footnote{If $A$ is not symmetric but has a set of independent eigenvectors then it can be decomposed as $A = VDV^{-1}$ for a nonsingular matrix $V$. In this case $A$ is called {\em diagonalizable}.}. The connection to the SVD is obtained by computing $A^TA$ and $AA^T$ as below: $$ A^TA = (U\Sigma V^T)^T(U\Sigma V^T) = V\Sigma^T\underbrace{U^T U}_{I}\Sigma V^T = V \underbrace{\Sigma^T\Sigma}_{D} V^T = VDV^T $$ where $D$ is an $n\times n$ diagonal matrix $$ D= \Sigma^T\Sigma = \begin{bmatrix} \sigma_1^2 & & 0\\ & \ddots & \\ 0& & \sigma_n^2 \end{bmatrix}. $$ Since $V$ is orthogonal, $VDV^T$ is the {eigendecomposition} of the symmetric matrix $A^TA$. This means that $\sigma_1^2,\ldots, \sigma_n^2$ are the eigenvalues and the columns of $V$ are the corresponding eigenvectors of $A^TA$. The same computation for $AA^T$ shows that $$ AA^T = U\Sigma\Sigma^T U^T = UDU^T $$ where $D$ is now an $m\times m$ diagonal matrix $$ D= \Sigma\Sigma^T = \begin{bmatrix} \sigma_1^2 & & & & &0\\ & \ddots & & & &\\ & & \sigma_n^2 & & &\\ & & &0& & \\ & & & &\ddots& \\ 0& & & & &0 \end{bmatrix}. $$ The columns of $U$ are eigenvectors of $AA^T$ corresponding to the same eigenvalues $\sigma_1^2,\ldots, \sigma_n^2$ plus $m-n$ zero eigenvalues $\sigma_{n+1}^2=\cdots=\sigma_m^2=0$. We conclude that the singular values of $A$ are the square roots of the eigenvalues of $A^TA$. The columns of $V$ are eigenvectors of $A^TA$, and the columns of $U$ are eigenvectors of $AA^T$.
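This connection is easy to verify in \verb+numpy+. The short sketch below (our own illustration; the random test matrix is an assumption) compares the singular values of $A$ with the square roots of the eigenvalues of $A^TA$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # generic full-rank test matrix (assumption)

# Singular values of A, returned in decreasing order.
sigma = np.linalg.svd(A, compute_uv=False)

# Eigenvalues of the symmetric matrix A^T A, sorted decreasingly.
lam = np.linalg.eigvalsh(A.T @ A)[::-1]

print(np.allclose(sigma, np.sqrt(lam)))  # True
```

The same check with \verb+np.linalg.eigvalsh(A @ A.T)+ recovers the $\sigma_k^2$ together with the $m-n$ extra zero eigenvalues.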
\vsp \begin{example} To compute the SVD of $$ A = \begin{bmatrix} 3&2\\ 2&3\\ 2 & -2 \end{bmatrix} $$ we form $$ A^TA = \begin{bmatrix} 17 & 8\\ 8 &17 \end{bmatrix} $$ and compute its eigenvalues $\lambda_1 = 25$ and $\lambda_2 = 9$. These give $\sigma_1 = \sqrt{\lambda_1} = \sqrt{25}=5$ and $\sigma_2 = \sqrt{\lambda_2} = \sqrt{9}=3$. We can also show that the eigenvectors of $A^TA$ are $$ v_1 = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \; v_2 = \frac{1}{\sqrt 2} \begin{bmatrix} 1\\ -1 \end{bmatrix} $$ which result in $$ V = \frac{1}{\sqrt 2} \begin{bmatrix} 1& 1\\ 1 & -1 \end{bmatrix}. $$ On the other hand, the columns of $U$ are eigenvectors of $$ AA^T = \begin{bmatrix} 13 & 12 & 2 \\ 12 & 13 & -2\\ 2 & -2 & 8 \end{bmatrix}. $$ Without additional computations we know that the eigenvalues of this matrix are $\lambda_1=25$, $\lambda_2 = 9$ and $\lambda_3 = 0$ (why?). However, the eigenvectors of $AA^T$ are different from those of $A^TA$ and can be computed as $$ u_1 = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \; u_2 = \frac{1}{\sqrt{18}} \begin{bmatrix} 1 \\ -1\\ 4 \end{bmatrix}, \; u_3 = \frac{1}{3} \begin{bmatrix} 2\\-2\\ -1 \end{bmatrix}, $$ which give $$ U= \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{18}} & \frac{2}{3}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{18}} &-\frac{2}{3}\\ 0 & \frac{4}{\sqrt{18}}& -\frac{1}{3} \end{bmatrix}. $$ \end{example} \vsp However, in practice, a different, faster and computationally more stable algorithm is used to compute the SVD factors. The commonly used algorithm is the {\bf Golub-Kahan-Reinsch} algorithm, which consists of two phases. In the first phase the matrix $A$ is reduced to a bidiagonal matrix $B$ using Householder transformations. This step is usually known as the Golub-Kahan bidiagonalization procedure.
Then in the second phase the matrix $B$ is further reduced to the diagonal matrix $\Sigma$ using an iterative method that successively constructs a sequence of bidiagonal matrices $B_k$ with progressively smaller off-diagonal entries. The second phase is known as the Golub-Reinsch iterative procedure. It thus makes sense to call the combined two-stage procedure the Golub-Kahan-Reinsch algorithm. We refer the reader to Chapter 10 of \cite{Datta:2010} for details. This algorithm has been proved to be computationally stable. In Python the algorithm is implemented in the \verb+numpy.linalg+ (and also the \verb+scipy.linalg+) module, which provides both the reduced and the full SVD for either real or complex matrices. The default command \verb+numpy.linalg.svd(A)+ is equivalent to \begin{shaded} \vspace*{-0.5cm} \begin{verbatim} numpy.linalg.svd(A, full_matrices=True, compute_uv=True, hermitian=False) \end{verbatim} \vspace*{-0.5cm} \end{shaded} If \verb+full_matrices=False+, the output will be the reduced SVD. Additionally, in the case of \verb+compute_uv=False+, the unitary matrices $U$ and $V$ are not computed and the output is the vector of singular values only. Finally, with \verb+hermitian=True+, $A$ is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding the singular values. Here is an example.
\begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
import numpy as np
from numpy.linalg import svd

A = np.array([[1, 3, 2], [4, 0, -1], [0.5, 2, 1], [1, 1, 1], [2, 1, -2]])
U, S, Vt = svd(A)
print('Full SVD:', '\n U =\n', np.round(U, 4), '\n Vt =\n', np.round(Vt, 4),
      '\n sigma =\n', np.round(S, 4))
U, S, Vt = svd(A, full_matrices=False)
print('\n Reduced SVD:', '\n U =\n', np.round(U, 4), '\n Vt =\n', np.round(Vt, 4),
      '\n sigma =\n', np.round(S, 4))
\end{verbatim} \vspace*{-0.3cm} \end{shaded} The output of the above script is (note that Python returns $V^T$ instead of $V$): \begin{shaded} \vspace*{-0.3cm} \begin{verbatim} Full SVD: U = [[-0.4575 -0.6636 -0.0238 -0.4791 0.3468] [-0.6711 0.4798 0.5011 -0.1833 -0.186 ] [-0.2784 -0.4014 -0.2015 0.2431 -0.8134] [-0.2626 -0.2225 0.2953 0.8113 0.3689] [-0.4403 0.3446 -0.7877 0.1398 0.2176]] Vt = [[-0.8593 -0.5112 0.0186] [ 0.3474 -0.6099 -0.7123] [ 0.3755 -0.6056 0.7016]] sigma = [5.149 4.3804 1.5969] Reduced SVD: U = [[-0.4575 -0.6636 -0.0238] [-0.6711 0.4798 0.5011] [-0.2784 -0.4014 -0.2015] [-0.2626 -0.2225 0.2953] [-0.4403 0.3446 -0.7877]] Vt = [[-0.8593 -0.5112 0.0186] [ 0.3474 -0.6099 -0.7123] [ 0.3755 -0.6056 0.7016]] sigma = [5.149 4.3804 1.5969] \end{verbatim} \vspace*{-0.3cm} \end{shaded} We note that for the reduced SVD the command \verb+U,S,Vt = svd(A,0)+ can be used instead. \vsp \subsection{Solving the least squares problem using SVD}\label{sect-lssvd} The SVD provides a particularly flexible method for solving linear least squares problems of any shape or rank. Consider again the least squares problem $$ \min_{x\in\R^n}\|Ax-b\|_2 $$ with an overdetermined matrix $A\in\R^{m\times n}$. First, we assume that $A$ is full-rank, i.e., $\rankk(A)=n$. Let $A=U\Sigma V^T$ be the SVD of $A$ and partition $U=[U_1\; U_2]$ where $U_1$ consists of the first $n$ columns of $U$, and $\Sigma=\begin{bmatrix}\Sigma_1 \\ 0 \end{bmatrix}$ where $\Sigma_1\in \R^{n\times n}$ is a square diagonal matrix with $\sigma_k$ on its diagonal.
Since $A$ is full-rank, all singular values are positive and $\Sigma_1$ is nonsingular. Furthermore, we use the change of variables $y = V^Tx$. Then we can write \begin{align*} \|Ax-b\|_2^2 & = \|U\Sigma V^T x - b\|_2^2 = \|\Sigma V^T x - U^Tb\|_2^2 = \left\| \begin{bmatrix}\Sigma_1 \\ 0 \end{bmatrix}y - \begin{bmatrix}U_1^Tb \\ U_2^Tb \end{bmatrix} \right\|_2^2\\ & = \| \Sigma_1 y - U_1^Tb\|_2^2 + \|U_2^Tb\|_2^2. \end{align*} Since $V$ is nonsingular, the minimization over $x$ is equivalent to the minimization over $y$. The minimum is attained when the first norm on the right-hand side vanishes, i.e., for $y = \Sigma_1^{-1}U_1^Tb$. This gives the least squares solution \begin{equation}\label{eq:leastsquaressol_1} x = V\Sigma_1^{-1}U_1^Tb = \sum_{j=1}^{n} \frac{u_j^Tb}{\sigma_j}v_j, \end{equation} and the residual norm is $\|Ax-b\|_2=\|U_2^Tb\|_2$. Now, let $A$ be a rank-deficient matrix, say $\rankk(A)=r<n$. Partition $U=[U_1\; U_2]$, now with $U_1$ consisting of the first $r$ columns of $U$, and $$ \Sigma = \left[ \begin{array}{ccc|ccc} {\sigma_1} & & & & & \\ & \ddots & & & 0 &\ \\ & &{\sigma_r} & & & \\ \hline & & &\colorbox{yellow}{$0$} & & \\ & 0 & & &\ddots& \\ & & & & & \colorbox{yellow}{$0$} \\ \hline & & & & & \\ & 0 & & & 0 & \\ & & & & & \end{array}\right] = \begin{bmatrix} \wt \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} $$ where $\wt \Sigma_1\in \R^{r\times r}$ is the upper-left square matrix with the positive singular values $\sigma_1,\ldots,\sigma_r$ on its diagonal. Also, write $$ y = V^Tx =:\begin{bmatrix} \wt y \\ \wh y \end{bmatrix} $$ where $\wt y\in \R^r$. Similar to the first case, we have \begin{align*} \|Ax-b\|_2^2 & = \|U\Sigma V^T x - b\|_2^2 = \|\Sigma V^T x - U^Tb\|_2^2 = \left\| \begin{bmatrix}\wt\Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix}\wt y\\ \wh y \end{bmatrix} - \begin{bmatrix}U_1^Tb \\ U_2^Tb \end{bmatrix} \right\|_2^2\\ & = \| \wt \Sigma_1 \wt y - U_1^Tb\|_2^2 + \|U_2^Tb\|_2^2.
\end{align*} In this case the least squares solution is not unique. Solutions are obtained by putting $\wt y = \wt\Sigma_1^{-1}U_1^Tb$ and letting $\wh y$ be arbitrary, and then \begin{equation}\label{eq:leastsquaressol_2} x = Vy = \sum_{j=1}^{r} \frac{u_j^Tb}{\sigma_j}v_j + \sum_{j=r+1}^{n}y_j v_j . \end{equation} A least squares solution $x$ with the minimum $2$-norm among all solutions is called the {\bf norm-minimal} solution. Since $\|x\|_2^2=\|y\|_2^2 =\|\wt y\|_2^2 + \|\wh y\|_2^2 $, the norm-minimal solution of the rank-deficient least squares problem is obtained by setting $\wh y = 0$, i.e., by dropping the second summation in \eqref{eq:leastsquaressol_2}. \begin{labexercise} Write a Python function for solving the least squares problem in either the full-rank or the rank-deficient case using the SVD. Assume that the rank of the coefficient matrix is unknown in advance, so compute it numerically. \end{labexercise} \vsp \subsection{Pseudoinverse} Assuming $A \in \R^{m \times n} $ has rank $r $ with $r \leqslant n $, the SVD of $A$ can be represented as \begin{equation}\label{svd:rank_deficient} A = [U_1 \;\, U_2] \begin{bmatrix} \widetilde{\Sigma}_1 & 0 \\ 0 & 0 \end{bmatrix} [V_1 \;\, V_2]^T = U_1 \widetilde{\Sigma}_1 V_1^T \end{equation} where $U_1$ and $ V_1 $ consist of the first $ r$ columns of $U$ and $V$, respectively, and $\widetilde{\Sigma}_1 \in \R^{r \times r} $ is a diagonal matrix with the positive singular values $\sigma_1, \ldots, \sigma_r$ on its diagonal. When $m = n$ (i.e., $A$ is square) and $r = n$ (i.e., $ A $ is full-rank), we have the standard representation $A = U \Sigma V^T$ with all matrices square and $\Sigma$ nonsingular. In this case, the inverse of $ A $ can be computed as $$ A^{-1} = (U \Sigma V^T)^{-1} = V^{-T} \Sigma^{-1} U^{-1} = V \Sigma^{-1} U^T. $$ This suggests an algorithm for computing the inverse of square nonsingular matrices.
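As a quick sanity check of the formula $A^{-1} = V\Sigma^{-1}U^T$, the following sketch (our own illustration; the well-conditioned random test matrix is an assumption) compares it with \verb+numpy.linalg.inv+:

```python
import numpy as np

rng = np.random.default_rng(1)
# A square matrix, shifted along the diagonal so it is safely nonsingular (assumption).
A = rng.standard_normal((4, 4)) + 10 * np.eye(4)

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T    # V Sigma^{-1} U^T

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```

For ill-conditioned matrices this SVD-based route is the more reliable of the two, at a higher cost.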
However, this algorithm is more costly than the usual algorithm based on Gaussian elimination, although it is more stable for ill-conditioned matrices. When $A$ is rectangular, and even rank-deficient, we can generalize this approach to compute the pseudoinverse of $A$. In this case, we use the factorization \eqref{svd:rank_deficient} and define \begin{equation*} A^{+} := V_1 \wt\Sigma_1^{-1}U_1^T, \end{equation*} which is indeed the {\bf Moore–Penrose inverse} of the matrix $A$. As a result, we can see from \eqref{eq:leastsquaressol_1} that the least squares solution of a full-rank system $Ax \cong b$ is given by $$ x = A^{+} b. $$ This is the counterpart of the exact solution $ x = A^{-1} b $ of a square nonsingular linear system $Ax = b$. Additionally, from \eqref{eq:leastsquaressol_2}, we observe that for the rank-deficient problem, $x = A^{+} b$ is the norm-minimal solution. \vsp \begin{example} The SVD factors of a matrix $A$ are given by $$ U = \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{-1}{\sqrt 2} & 0 & 0 & 0\\ 0 & 0 & 0 &1 & 0\\ 0 & 0 & -1 & 0 & 0 \\ \frac{1}{\sqrt 2} & \frac{1}{\sqrt{2}} & 0 & 0 & 0\\ 0 & 0 & 0 &0 & 1 \end{bmatrix}, \; \Sigma = \begin{bmatrix} 2\sqrt 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}, \; V = \begin{bmatrix} \frac{\sqrt 6}{3} & 0 & 0 & \frac{-1}{\sqrt 3}\\ 0 & 0 & 1 & 0\\ \frac{1}{\sqrt 6} & \frac{-1}{\sqrt{2}} & 0 & \frac{1}{\sqrt 3}\\ \frac{1}{\sqrt 6} & \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt 3} \end{bmatrix}.
$$ The pseudoinverse of $A$ is computed as \begin{align*} A^+ = V_1\wt\Sigma_1^{-1}U_1^T = \begin{bmatrix} \frac{\sqrt 6}{3} & 0 \\ 0 & 0 \\ \frac{1}{\sqrt 6} & \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt 6} & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} \frac{1}{2\sqrt 3} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\ \frac{-1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}}& 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{6} & 0 & 0 &\frac{1}{6} & 0\\ 0 & 0 & 0 & 0 & 0 \\ \frac{1}{3} & 0 & 0 & \frac{-1}{6} & 0 \\ \frac{-1}{6} & 0 & 0 & \frac{1}{3} & 0 \\ \end{bmatrix}. \end{align*} In addition, if we aim to solve the least squares problem $\min \|Ax-b\|_2$ with $b = [1,1,1,1,1]^T$ for the norm-minimal solution, we write $$ x = A^+b = \begin{bmatrix} \frac{1}{6} & 0 & 0 &\frac{1}{6} & 0\\ 0 & 0 & 0 & 0 & 0 \\ \frac{1}{3} & 0 & 0 & \frac{-1}{6} & 0 \\ \frac{-1}{6} & 0 & 0 & \frac{1}{3} & 0 \\ \end{bmatrix} \begin{bmatrix} 1\\ 1\\ 1\\ 1\\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} \\ 0 \\ \frac{1}{6} \\ \frac{1}{6} \end{bmatrix}. $$ \end{example} \vsp \subsection{Low-rank approximation} Low-rank approximation is a fundamental concept in linear algebra and numerical analysis that plays a crucial role in various applications. In this approach, we seek to represent a given matrix with a lower-rank approximation that captures its essential structure while reducing its size and complexity. SVD enables us to achieve this kind of approximation. We start with the following theorem, which addresses the question of how far a rank-$r$ matrix is from a matrix of rank $k<r$. This theorem is generally known as the Eckart--Young theorem. \begin{theorem}\label{thm:svdAk} Let $A=U\Sigma V^T$ be the SVD of $A$, and let $k\leqslant r = \rankk(A)$. Define $A_k = U\Sigma_kV^T$ where $\Sigma_k=\diagg\{\sigma_1,\ldots,\sigma_k,0,\ldots,0\}$, where $\sigma_1\geqslant\sigma_2\geqslant \cdots \geqslant \sigma_k>0$.
Then the following statements hold: \begin{enumerate} \item $A_k$ has rank $k$, \item The distance between $A$ and $A_k$ is $\|A-A_k\|_2=\sigma_{k+1}$. \item Out of all rank $k$ matrices, $A_k$ is the closest to $A$, that is $$ \min_{\rankk(B)=k}\|A-B\|_2 = \|A-A_k\|_2. $$ \end{enumerate} \end{theorem} \proof The proof of item 1 is obvious because $\rankk(A_k) = \rankk(U\Sigma_k V^T) = \rankk(\Sigma_k)=k$. For the second item we have $$ \|A-A_k\|_2 = \|U(\Sigma-\Sigma_k)V^T\|_2 = \|\Sigma-\Sigma_k\|_2 = \sigma_{k+1}. $$ To prove item 3, we assume that $B$ is another $m\times n$ matrix of rank $k$. So the null space of $B$ has dimension $n-k$. Consider the space $S = \spann\{v_1,\ldots, v_{k+1}\}$ where the $v_j$ are right singular vectors of $A$. The intersection of $\nul(B)$ and $S$ must contain a nonzero vector because both spaces are subspaces of $\R^n$ and the sum of their dimensions is greater than $n$. Let $z$ be a unit vector ($\|z\|_2=1$) lying in this intersection. Since $z\in S$ there exist scalars $c_j$ such that $z = c_1v_1 + \cdots + c_{k+1}v_{k+1}$. Since $\{v_j\}$ are orthonormal we have $|c_1|^2+\cdots + |c_{k+1}|^2=1$. On the other hand, since $z\in \nul(B)$ we have $Bz = 0$. So $$ (A-B)z = Az = c_1Av_1 + \cdots + c_{k+1}Av_{k+1} = c_1\sigma_1u_1 + \cdots + c_{k+1}\sigma_{k+1}u_{k+1} $$ and since $\{u_j\}$ are orthonormal we have $$ \|(A-B)z\|_2^2 = |c_1\sigma_1|^2 + \cdots + |c_{k+1}\sigma_{k+1}|^2\geqslant \sigma_{k+1}^2(|c_1|^2+\cdots + |c_{k+1}|^2) = \sigma_{k+1}^2. $$ This shows that $$\|A-B\|_2^2=\max_{\|y\|_2=1} \|(A-B)y\|_2^2 \geqslant \|(A-B)z\|_2^2 \geqslant \sigma_{k+1}^2 = \|A-A_k\|_2^2 $$ which completes the proof. $\qed$ \vsp Another representation of SVD is \begin{equation*} A = U\Sigma V^T = \sigma_1 E_1 + \sigma_2 E_2 + \cdots + \sigma_n E_n, \quad E_k = u_kv_k^T. \end{equation*} Each $E_k$ is a rank $1$ matrix, which shows that SVD writes $A$ as a sum of rank $1$ matrices.
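Both the rank-$1$ expansion above and the error estimate of Theorem \ref{thm:svdAk} can be verified numerically; the following is a minimal sketch with a random matrix (the sizes and seed are illustrative only):
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
import numpy as np

# Sketch: check A = sum_j sigma_j u_j v_j^T and ||A - A_k||_2 = sigma_{k+1}.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
E = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(4))
print(np.allclose(E, A))                      # True: rank-1 expansion
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation
print(np.allclose(np.linalg.norm(A - A_k, 2), s[k]))  # True: error = sigma_{k+1}
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}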
Since $\sigma_1\geq \sigma_2\geq \cdots \geq \sigma_n$, matrix $A$ is expressed as a list of its ``ingredients'', ordered by ``importance''. Each elementary matrix $E_k$ can be stored using only $m+n$ storage locations. Moreover, the matrix-vector product $E_kx$ can be formed using only $2n+m$ flops. As was shown in Theorem \ref{thm:svdAk}, $$ A_k = U\Sigma_k V^T = \sigma_1 E_1 + \sigma_2 E_2 + \cdots + \sigma_k E_k $$ is a rank-$k$ approximation of $A$ with $2$-norm error $\sigma_{k+1}$. Such an approximation is useful in data dimensionality reduction, image processing, information retrieval, cryptography, and numerous other applications. \vsp \subsection{Some applications of SVD} We have already studied the use of SVD for computing some norms, the condition number, orthogonal bases for some subspaces related to a matrix, and an application to solving the linear least squares problem. In this section more concrete examples are given from different applications. \subsubsection*{Image compression} An image can be represented by an $m\times n$ matrix $A$ whose $(i,j)$-th entry corresponds to the brightness of the pixel $(i, j)$. The storage of this matrix requires $mn$ locations. See Figure \ref{fig:image_comp}. \begin{figure}[!th] \centering \includegraphics[scale=.55]{image_proc} \caption{Representation of an image with a matrix (from \texttt{pippin.gimp.org/image{\_}processing}).}\label{fig:image_comp} \end{figure} The idea of image compression is to compress the image represented by a very large matrix to one that corresponds to a lower-rank approximation of $A$. SVD provides a simple way if one stores $$ \sigma_1u_1v_1^T + \cdots + \sigma_ku_kv_k^T =:A_k $$ instead, in $(m+n+1)k$ locations (the first $k$ columns of $U$ and $V$ together with the first $k$ singular values). This results in considerable savings when $k$ is small.
On the other hand, $k$ should be large enough to keep the quality of the image acceptable to the user\footnote{ For image compression, more sophisticated methods like JPG that take human perception into account generally outperform compression using SVD.}. In Figure \ref{fig:Lena} different low-rank approximations of a cat image obtained from the following Python script are shown. \begin{shaded} \vspace*{-0.3cm} \begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

A = np.asarray(Image.open('cat.jpg').convert('L'), dtype=float)  # read as grayscale
sz = np.shape(A)
U, S, Vt = np.linalg.svd(A)                    # compute the SVD of A
k = [sz[1], 100, 50, 20]
plt.figure(figsize = (7, 7))
for i in range(4):
    Ak = U[:, :k[i]] @ np.diag(S[:k[i]]) @ Vt[:k[i], :]
    if i == 0:
        plt.subplot(2, 2, i+1), plt.imshow(Ak, cmap='gray'), plt.axis('off'),
        plt.title("Original Image with k = " + str(k[i]))
    else:
        plt.subplot(2, 2, i+1), plt.imshow(Ak, cmap='gray'), plt.axis('off'),
        plt.title("k = " + str(k[i]))
plt.savefig("cat1.jpg")
\end{verbatim} \vspace*{-0.3cm} \end{shaded} \begin{figure}[!th] \centering \includegraphics[scale=0.6]{cat1} \caption{Original cat image and compressed images with different values of $k$.}\label{fig:Lena} \end{figure} \subsubsection*{Image restoration} The aim in image restoration is to restore the original image from a blurry image contaminated by {\em noise}. It is known that noise corresponds to the small singular values. Thus a simple idea is to compute the SVD of the noisy image, eliminate its last $n-k$ small singular values (those smaller than a threshold), and take the low-rank approximation $$ A_k = U_k\Sigma_kV_k^T $$ as a noise-free image. Giving an example with a Python script is left as an exercise. Note that the literature contains a large number of more sophisticated denoising algorithms; some of them are based on SVD.
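As a toy illustration of this truncation idea (on a synthetic low-rank matrix rather than an image, which is left to the exercise), one can check that discarding the small singular values of a noisy matrix brings it closer to the clean one; the sizes, noise level, and threshold below are illustrative only:
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
import numpy as np

# A rank-3 "clean" matrix is perturbed by small noise; truncating the
# singular values below a threshold removes most of the noise.
rng = np.random.default_rng(2)
clean = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
noisy = clean + 1e-3 * rng.standard_normal((50, 40))
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = np.sum(s > 1e-1)                     # keep only the signal modes
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}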
\vsp \subsubsection*{Principal component analysis (PCA)} In PCA the objective is to reduce the dimensionality of a dataset in order to use the transformed data in applications that might not work well with high-dimensional data. The reduction is done by projecting each data point onto only the first few directions ({\em principal components}) to obtain lower dimensional data while preserving as much of the data's variation as possible. The first principal component is the direction that maximizes the {\em variance} of the projected data. This direction captures most of the information in the data. The second principal component is then calculated with the condition that it is uncorrelated with (i.e., perpendicular to) the first principal component and that it accounts for the next highest variance. We may proceed similarly through the remaining components. See Figure \ref{fig:pca2D}. \begin{figure}[!th] \centering \includegraphics[scale=0.55]{pca_2D} \caption{Cluster of points in $\R^2$ with principal components (PCs).}\label{fig:pca2D} \end{figure} Let us outline the fundamental principles of PCA within a mathematical framework\footnote{For a history, a review and some recent developments of PCA see the following sources: \newline Ian T. Jolliffe, Jorge Cadima, Principal component analysis: a review and recent developments, Phil. Trans. R. Soc. A 374: 20150202, (2016).\newline Ian T. Jolliffe, Principal component analysis, 2nd ed. New York, NY, Springer-Verlag, (2002).}. Assume that we are given a dataset with observations on $d$ variables (dimensions) for each of $n$ entities or individuals. These data values define $n$ vectors $x_{\cdot1}, \ldots , x_{\cdot n}$, each in $\R^d$, or, equivalently, a $d \times n$ {\em data matrix} $$ X=[x_{\cdot1},x_{\cdot2},\ldots,x_{\cdot n}]\in \R^{d\times n}. $$ We assume that $X$ is row-centred, i.e., the mean value of each row is zero. Otherwise, subtract its mean from each row.
We are looking for a new orthonormal basis $\{v_{\cdot 1},v_{\cdot2},\ldots,v_{\cdot d}\}$, $v_{\cdot j}\in\R^n$, for the column space of $X^T$ (the row space of $X$) which determines the directions of maximum variance in decreasing order of significance. Any element of the new basis is a linear combination of the columns of $X^T$. Such linear combinations can be written as \begin{equation}\label{pca-svdrep} \sigma_j v_{\cdot j} = \sum_{\ell=1}^{d} u_{\ell j} x_{\ell \cdot} =X^T u_{\cdot j}, \quad j=1,2,\ldots,d, \end{equation} where $u_{\cdot j}=[u_{1j},\ldots,u_{dj}]^T$ are assumed to be of unit norm, i.e. $\|u_{\cdot j}\|_2=1$. The normalization constants $\sigma_j$ are multiplied on the left to make this assumption meaningful. The variance of any such linear combination is given by $$ \varr( X^Tu) = \frac{1}{n-1}(X^Tu)^T(X^Tu) = u^TCu, $$ where $C=\frac{1}{n-1}XX^T\in \R^{d\times d}$ is the sample {\em covariance matrix} associated with the dataset. Hence, identifying the linear combination with maximum variance is equivalent to obtaining a $d$-dimensional vector $u$ with $u^Tu=1$ which maximizes the quadratic form $u^TCu$. By introducing a Lagrange multiplier $\lambda$, the problem is equivalent to maximizing $$ u^TCu - \lambda(u^Tu-1). $$ Differentiating with respect to $u$, and equating to the null vector, produces the equation \begin{equation*} Cu = \lambda u. \end{equation*} This means that $u$ is an eigenvector and $\lambda$ its corresponding eigenvalue of the covariance matrix $C$. Since $C$ is symmetric and positive semidefinite, all eigenvalues are real and nonnegative and the eigenvectors form an orthonormal basis for $\R^d$. The eigenvalue $\lambda$ corresponding to the eigenvector $u$ of $C$ is indeed the variance of the linear combination defined by $u$, i.e.
$X^Tu$, because $$\varr(X^Tu)=u^TCu=u^T(\lambda u)=\lambda u^Tu = \lambda.$$ Let us denote the pairs of eigenvalues and eigenvectors of $C$ by $(\lambda_j, u_{\cdot j})$ for $j=1,2,\ldots, d$ and sort the eigenvalues in such a way that $\lambda_1\geqslant \lambda_2\geqslant \cdots \geqslant \lambda_d\geqslant 0$. The maximum variance is attained at the dominant pair $(\lambda_1,u_{\cdot 1})$. Recalling \eqref{pca-svdrep}, this shows that the maximum variance of the data occurs along the direction $u_{\cdot 1}$ with variance $$ \lambda_1 = \varr(X^Tu_{\cdot1})=\varr(\sigma_1v_{\cdot1}) = \frac{\sigma_1^2}{n-1}v_{\cdot1}^Tv_{\cdot 1} = \frac{\sigma_1^2}{n-1}. $$ This means that $u_{\cdot1}$ is the first principal component of the data $X$ and $\frac{\sigma_1^2}{n-1}$ is the highest variance in the data, attained along the direction $u_{\cdot1}$. A Lagrange multipliers approach, with the added restriction of orthogonality between different vectors $u_{\cdot j}$, can be used to show that the full set $\{u_{\cdot 1},\ldots,u_{\cdot d}\}$ solves the problem of obtaining up to $d$ new linear combinations $X^Tu_{\cdot j}$ which successively maximize the variance, subject to uncorrelatedness with the previous linear combinations. Uncorrelatedness results from the fact that the covariance between two such linear combinations is zero: $$ \covv(X^Tu_{\cdot j}, X^Tu_{\cdot k}) = u_{\cdot j}^T C u_{\cdot k} = \lambda_k u_{\cdot j}^Tu_{\cdot k}=0, \quad j\neq k. $$ As the above formulation reveals, PCA is essentially connected to the SVD. From \eqref{pca-svdrep} we have \begin{equation}\label{pcr-svd1} X = U\Sigma V^T \end{equation} where $U=[u_{\cdot1}\; u_{\cdot2}\ldots u_{\cdot d}]\in \R^{d\times d}$ and $V=[v_{\cdot1}\; v_{\cdot2}\ldots v_{\cdot d}]\in \R^{n\times d}$ have orthonormal columns and $\Sigma = \diagg(\sigma_1,\sigma_2,\ldots,\sigma_d)$. Consequently, the reduced SVD \eqref{pcr-svd1} gives all principal components as columns of the factor $U$.
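The relation $\lambda_j = \sigma_j^2/(n-1)$ between the covariance eigenvalues and the singular values of the centred data matrix can be checked with a short script (a sketch with synthetic data; the sizes and scalings are illustrative only):
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
import numpy as np

# Sketch: principal component variances of a d x n data matrix via SVD.
rng = np.random.default_rng(3)
d, n = 3, 200
X = rng.standard_normal((d, n)) * np.array([[5.0], [2.0], [0.5]])
X = X - X.mean(axis=1, keepdims=True)     # row-centre the data
U, s, Vt = np.linalg.svd(X, full_matrices=False)
variances = s**2 / (n - 1)                # variances along the PCs
eigvals = np.linalg.eigvalsh(X @ X.T / (n - 1))[::-1]  # descending order
print(np.allclose(variances, eigvals))    # True
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}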
The variances in each direction are calculated as the squares of the singular values divided by $(n-1)$. \begin{figure}[!th] \centering \includegraphics[scale=0.5]{pca_2D_1} \includegraphics[scale=0.5]{pca_2D_2} \caption{A dataset in $\R^2$ (left) and its rank-$1$ approximation using SVD (right).}\label{fig:pca2Dapp} \end{figure} Up to here we have found all the principal components in order of significance. The next step is to choose whether to keep all these components or to discard those of lesser significance and form from the remaining ones a new dataset in a lower-dimensional subspace. Assume that we decide to keep only $k<d$ principal components. Our new data matrix will be $$ X_k = U_k\Sigma_k V_k^T = \sum_{j=1}^{k}\sigma_j u_{\cdot j}v_{\cdot j}^T $$ which is a rank-$k$ approximation of the original data matrix $X$, and is called the {\em feature matrix}. In fact, the new dataset is located on a $k$-dimensional subspace of $\R^d$ spanned by the orthonormal vectors $\{u_{\cdot1}, \ldots,u_{\cdot k}\}$. This is a {\bf dimensionality reduction} algorithm based on SVD. See Figures \ref{fig:pca2Dapp} and \ref{fig:pca3Dapp} for illustrations in $2$ and $3$ dimensions, respectively. \begin{figure}[!th] \centering \includegraphics[scale=0.5]{pca_3D_1} \includegraphics[scale=0.5]{pca_3D_2}\includegraphics[scale=0.5]{pca_3D_3} \caption{A dataset in $\R^3$ (top), its rank-$1$ approximation (bottom left), and its rank-$2$ approximation (bottom right) using SVD.}\label{fig:pca3Dapp} \end{figure} \subsubsection*{Keywords and key sentences extraction} The development of automatic procedures for {\em text summarization} is important because people are overwhelmed by the tremendous amount of online information and documents.
The goal of a text summarization algorithm is to {\em extract content from a text document and present the most important content to the user in a condensed form.} We follow \cite{Elden:2019} and briefly discuss an algorithm based on SVD for automatically extracting keywords and key sentences from a text. Assume that we are given a text document, for example an article or a chapter of a book. Assume that the text contains $m$ words and $n$ sentences\footnote{ Usually a couple of preprocessing steps should be performed before summarization; these are known as {\em stemming} and {\em stop word removal}. In the stemming step the same word stem with different endings is represented by one token only. For example, the words $\{$computation, computations, compute, computable, computing, computational$\}$ are represented by the stem ``comput''. Stop words such as $\{$a, an, and, or, if, then, any, as, about, able, \ldots$\}$ occur frequently in all texts and do not distinguish between different sentences. They should be removed. We assume that special symbols, e.g., mathematical symbols, are also removed. }. Normally $m$ is much larger than $n$. We form an $m\times n$ {\em term-sentence} matrix $A$ whose entry $a_{ij}$ is defined as the frequency of term (word) $i$ in sentence $j$. Let us give term $i$ the nonnegative {\em score} $u_i$ and sentence $j$ the nonnegative {\em score} $v_j$. The assignment of scores is made based on the following mutual reinforcement principle: \begin{quote} A term should have a high score if it appears in many sentences with high scores. A sentence should have a high score if it contains many words with high scores. \end{quote} Using this criterion, the score of term $i$ is proportional to the sum of the scores of the sentences in which it appears, weighted by frequencies, $$ \sigma u_i = \sum_{j=1}^{n}a_{ij} v_j, \quad i = 1,2,\ldots,m, $$ where $\sigma>0$ is the proportionality constant.
Similarly, the score of sentence $j$ is proportional to the sum of the scores of its words, weighted by their frequencies in the sentence, $$ \sigma v_j = \sum_{i=1}^{m}a_{ij} u_i, \quad j = 1,2,\ldots,n. $$ In matrix form we have $$ Av = \sigma u, \quad A^Tu = \sigma v $$ for $u=[u_1,\ldots,u_m]^T$ (the score vector of words) and $v = [v_1,\ldots,v_n]^T$ (the score vector of sentences). Combining both equations we have $$ AA^T u = \sigma^2 u, \quad A^TA v = \sigma^2v $$ which shows that $u$ is an eigenvector of $AA^T$ and $v$ is an eigenvector of $A^TA$, both corresponding to the eigenvalue $\sigma^2$. This means that $u$ is a left singular vector and $v$ is a right singular vector of $A$ corresponding to the same singular value $\sigma$. If we choose the largest singular value, then we are guaranteed that the components of $u$ and $v$ are nonnegative because the matrix $A$ has nonnegative entries. Consequently, a simple algorithm based on SVD consists of the following steps: \begin{itemize} \item {\bf Step 1}: Apply a stemming and a stop-word removal algorithm to the text, \item {\bf Step 2}: Construct the term-sentence matrix $A$, \item {\bf Step 3}: Compute the SVD of $A$ and report the leading vectors $u$ and $v$ (the first columns of $U$ and $V$, respectively) as the word and sentence scores. \end{itemize} \noindent The full SVD is not necessary for this algorithm since only the first singular vectors are required. Moreover, the term-sentence matrix is sparse, so an SVD function for sparse matrices is numerically more efficient. \vsp \subsection*{Classification of handwritten digits} Classification of handwritten digits is a standard problem in pattern recognition. A typical application is the automatic reading of zip codes on envelopes. Here we follow the presentation of \cite{Elden:2019} and give a classification technique using SVD. The problem is as follows: {\em Given a set of manually classified digits as a training set, classify a set of unknown digits as the test set}.
In Figure \ref{fig:handwritten0} a sample of handwritten digits $0,1,\ldots,9$ is illustrated. The images are downloaded from\footnote{\tt https://www.nist.gov/itl/products-and-services/emnist-dataset}. Each image is a $28 \times 28$ grayscale image, but we stack the columns of the image on top of each other so that each image is represented by a vector of size $784$. \begin{center} \includegraphics[scale=.8]{handwritten} \captionof{figure}{A sample of handwritten digits $0,1,\ldots,9$.}\label{fig:handwritten0} \end{center} The training set of each kind of digit $0,1,\ldots,9$ can be considered as a matrix $A$ of size $m\times n$ with $m = 784$ and $n =$ `the number of training digits of one kind'. Usually $n\geqslant m$, although the case $n<m$ is also possible. So, we have a total of 10 training matrices, each corresponding to one of the digits $0, 1, \ldots, 9$. Assume that $A$ is one of them. The columns of $A$ span a linear subspace of $\R^{784}$. However, this subspace cannot be expected to have a large dimension, because the columns of $A$ represent the same digit written in different hands. One can use the SVD $A=U\Sigma V^T$ to obtain an orthogonal basis for the column space of $A$ (or $\range(A)$) via the columns of the factor $U$. Then any test image of that kind can be well represented in terms of the orthogonal basis $\{u_{\cdot 1},u_{\cdot 2},\ldots,u_{\cdot m}\}$. Let us describe this in more detail. We have $$ A = U\Sigma V^T = \sum_{j=1}^{m}\sigma_j u_{\cdot j} v_{\cdot j}^T, $$ thus the $\ell$-th column of $A$ (the $\ell$-th training digit) can be represented as $$ a_{\cdot \ell} = \sum_{j=1}^{m} (\sigma_j v_{j\ell }) u_{\cdot j} $$ which means that the coordinates of the image $a_{\cdot \ell}$ in terms of the orthogonal basis $\{u_{\cdot 1},u_{\cdot 2},\ldots,u_{\cdot m}\}$ are $\sigma_j v_{j\ell }=:x_j$, i.e. $$ a_{\cdot \ell} = \sum_{j=1}^{m} x_j u_{\cdot j} = Ux.
$$ This means, from another point of view, that solving the least squares problem $$ \min_{x}\|Ux - a_{\cdot \ell}\|_2 $$ results in an optimal vector $x=[\sigma_1 v_{1\ell },\ldots,\sigma_m v_{m\ell }]^T$ with residual $0$. As we pointed out, most columns of $A$ are nearly linearly dependent as they represent different handwritten versions of the same digit. Translated to the columns of $U$, this means that a few leading columns of $U$ should capture the subspace well. For this reason the orthogonal vectors $u_{\cdot 1},u_{\cdot 2},\ldots$ are called {\em singular images}. For example, in Figure \ref{fig:handwritten1} the singular values and the first three singular images for the training set of $3$'s (containing the first $200$ images of the training set) are illustrated. \begin{center} \includegraphics[scale=.55]{handwritten2} \captionof{figure}{Rapid decay of singular values, and the first three singular images.}\label{fig:handwritten1} \end{center} We observe that the first singular image looks like a $3$, and the following singular images represent the dominating variations of the training set around the first singular image. Consequently, we can use a rank-$k$ approximation of $A$ where $k\ll m$. In this case each $a_{\cdot \ell}$ can be well represented by the orthogonal basis $\{u_{\cdot 1},u_{\cdot 2},\ldots,u_{\cdot k}\}$ in a least squares sense with a small residual. Now, let $d\in\R^m$, $m=784$, be a digit outside the training set (called a {\em test digit}). It is reasonable to assume that $d$ can be well represented in terms of the singular images $\{u_{\cdot 1},u_{\cdot 2},\ldots,u_{\cdot k}\}$ of its own type with a relatively small residual. We must compute how well $d$ can be represented in the $10$ different bases, each corresponding to one of the digits $0, 1, \ldots, 9$.
This can be done by computing the residual vector in least squares problems of the type \begin{equation*}\label{lstq0} \min_{x} \|U_kx-d\|_2 \end{equation*} where $d$ represents a test digit and $U_k$ is an ${m\times k}$ matrix consisting of the first $k$ columns of $U$. Since the columns of $U_k$ are orthonormal, the solution of this problem is $x = U_k^Td$ and the residual is \begin{equation}\label{handw_res} \|(I-U_kU_k^T)d\|_2. \end{equation} Altogether, an SVD-based classification algorithm for handwritten digits consists of two steps: \begin{itemize} \item {\bf Step 1 (training):} For the training set of known digits, compute the SVD of each set of digits of one kind (compute ten SVDs, one for each of the digits $0,1,\ldots,9$). \item {\bf Step 2 (classification):} For a given test digit $d$, compute its relative residual in all ten bases using equation \eqref{handw_res}. If the residual in one class is smaller than all the others, classify $d$ in that class. \end{itemize} \noindent To test the algorithm, we download the {\em training} set consisting of $240000$ images ($24000$ images per digit), and the {\em test} set consisting of $40000$ images ($4000$ images per digit) from the above-mentioned website. The document also contains two {\em label} files which are manual classifications of the training and test sets. The training labels will be used to learn and the test labels to verify the algorithm. For each digit we use only the first $200$ images out of the total $24000$ training images (ten matrices, each of size $784\times 200$). However, we test the algorithm on all $40000$ test images and compare our classifications with the exact test labels. The success percentages of this SVD-based algorithm for each digit with different values $k = 3,6,\ldots,10$ (the number of basis functions) are plotted in Figure \ref{fig:handwritten3}. Readers are invited to discuss their insights based on the observations from the figure.
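The residual criterion \eqref{handw_res} can be illustrated on synthetic data; the following toy sketch (random orthonormal bases, not the EMNIST training sets) classifies a vector built from one subspace plus small noise:
\begin{shaded}
\vspace*{-0.3cm}
\begin{verbatim}
import numpy as np

# Sketch: the least squares residual of d against an orthonormal basis U_k
# is ||(I - U_k U_k^T) d||_2; it is smallest for the best-fitting class.
rng = np.random.default_rng(4)
m, k = 784, 10
Q0, _ = np.linalg.qr(rng.standard_normal((m, k)))   # basis of "class 0"
Q1, _ = np.linalg.qr(rng.standard_normal((m, k)))   # basis of "class 1"
d = Q0 @ rng.standard_normal(k) + 1e-2 * rng.standard_normal(m)
res = [np.linalg.norm(d - Q @ (Q.T @ d)) for Q in (Q0, Q1)]
print(np.argmin(res))   # 0: classified into class 0
\end{verbatim}
\vspace*{-0.3cm}
\end{shaded}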
\begin{center} \includegraphics[scale=1]{Percentages} \captionof{figure}{Percentages of success ($y$-axis) in terms of the number of basis functions ($x$-axis) for different digits $0,1,\ldots,9$.}\label{fig:handwritten3} \end{center} \begin{labexercise} Implement your own Python code for the above classification algorithm. Test your algorithm for different numbers of basis functions. Note that in the statement above, the equations you need are formulated for a single test vector $d$ (one ``flattened'' test image). However, if you use that approach in your code, it will take a very long time to run. In contrast, matrix-matrix multiplications are much more efficient on modern computers. To take full advantage of that, simply replace the vector $d$ in equation \eqref{handw_res} with the entire test matrix. \end{labexercise} \vsp \section*{Additional workouts} \begin{workout}[Uniqueness of reduced QR factorization] Assume that $A\in \R^{m\times n}$ has full rank. Prove that $A$ possesses a unique reduced QR factorization $A= Q_1R_1$ with the diagonal entries of $R_1$ positive. \end{workout} \begin{workout} Assume that $A\in \R^{m\times n}$ has rank $r <\min\{n, m\}$. Show that for every $\epsilon > 0$ (no matter how small), there exists a full-rank matrix $A_\epsilon\in \R^{m\times n}$ such that $\|A-A_\epsilon\|_2\leqslant \epsilon$. \end{workout} \begin{workout} Let $A\in\R^{n\times n}$ be an arbitrary matrix. Find the closest orthogonal matrix $Q\in\R^{n\times n}$ to $A$ in the Frobenius norm, i.e. solve the following problem: $$ \min_{Q^TQ=I}\|A-Q\|_F. $$ Hint: use SVD. \end{workout} \begin{workout}[\cite{Trefethen-Bau:1997}] Let $P\in\R^{n\times n}$ be a nonzero projector. Show that $\|P\|_2\geqslant 1$ with equality if and only if $P$ is an orthogonal projector. \end{workout} \begin{workout}[\cite{Trefethen-Bau:1997}] Suppose that a square matrix $A$ has the SVD $A=U\Sigma V^T$.
Find the eigendecomposition of the symmetric matrix $$ B = \begin{bmatrix} 0 & A^T \\ A & 0 \end{bmatrix}. $$ \end{workout} \begin{workout}[\cite{Trefethen-Bau:1997}] Suppose that $A$ is an $m\times n$ matrix with $m>n$ of the form $$ A = \begin{bmatrix} B \\ C \end{bmatrix} $$ where $B$ is a nonsingular matrix of size $n\times n$ and $C$ is an arbitrary matrix of size $(m-n)\times n$. Prove that $ \|A^+\|_2 \leqslant \|B^{-1}\|_2. $ \end{workout} \begin{workout}[\cite{Datta:2010}] Let $A$ be an $m\times n$ matrix of full rank $r = \min\{m,n\}$. If $B$ is another $m \times n$ matrix such that $\|B- A\|_2 < \sigma_r$, then show that $B$ also has full rank. \end{workout} \begin{workout}[\cite{Datta:2010}] Show that the relative distance of a nonsingular matrix $A$ to the nearest singular matrix $B$ is $$ \frac{\|A-B\|_2}{\|A\|_2}= \frac{1}{\cond_2(A)}. $$ \end{workout} \vsp
\theoremstyle{mystyle}{\newtheorem{lemma}[definition]{Lemma}} \theoremstyle{mystyle}{\newtheorem{corollary}[definition]{Corollary}} \theoremstyle{mystyle}{\newtheorem*{atten}{}} \theoremstyle{mystyle}{\newtheorem*{remarks}{Remarks}} \theoremstyle{mystyle}{\newtheorem*{examples}{Examples}} \theoremstyle{mystyle}{\newtheorem{workout}[definition]{Workout}} \theoremstyle{mystyle}{\newtheorem{labexercise}[definition]{Lab Exercise}} \theoremstyle{mystyle}{\newtheorem{miniproject}[definition]{MATLAB Project}} \usepackage{amsmath} \newtheorem{remark}{Remark}[section] \newtheorem{example}{Example}[section] \newtheorem{exercise}{Exercise}[section] \newtheorem{proposition}{Proposition}[section] \theoremstyle{cstyle}{\newtheorem*{cthm}{}} \newtheoremstyle{warn}{}{}{}{}{\normalfont}{}{ }{} \theoremstyle{warn} \newtheorem*{warning}{\warningsign{0.2}\relax} \newcommand{\warningsign}[1]{\tikz[scale=#1,every node/.style={transform shape}]{\draw[-,line width={#1*0.8mm},red,fill=yellow,rounded corners={#1*2.5mm}] (0,0)--(1,{-sqrt(3)})--(-1,{-sqrt(3)})--cycle; \node at (0,-1) {\fontsize{48}{60}\selectfont\bfseries!};}} \tcolorboxenvironment{definition}{boxrule=0pt,boxsep=0pt,colback={red!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{red},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{proposition}{boxrule=0pt,boxsep=0pt,colback={Orange!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{Orange},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{theorem}{boxrule=0pt,boxsep=0pt,colback={blue!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{blue},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{lemma}{boxrule=0pt,boxsep=0pt,colback={blue!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{blue},sharp corners,before skip=10pt,after skip=10pt,breakable} 
\tcolorboxenvironment{corollary}{boxrule=0pt,boxsep=0pt,colback={violet!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{violet},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{proof}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{CadetBlue!80!white},left=8pt,right=8pt,sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{remark}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Green},left=8pt,right=8pt,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{remarks}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Green},left=8pt,right=8pt,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{example}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Black},left=8pt,right=8pt,sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{examples}{boxrule=0pt,boxsep=0pt,blanker,borderline west={2pt}{0pt}{Black},left=8pt,right=8pt,sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{cthm}{boxrule=0pt,boxsep=0pt,colback={gray!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{gray},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{workout}{boxrule=0pt,boxsep=0pt,colback={Cyan!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{Cyan},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{miniproject}{boxrule=0pt,boxsep=0pt,colback={gray!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{gray},sharp corners,before skip=10pt,after skip=10pt,breakable} \tcolorboxenvironment{labexercise}{boxrule=0pt,boxsep=0pt,colback={green!10},left=8pt,right=8pt,enhanced jigsaw, borderline west={2pt}{0pt}{green},sharp corners,before skip=10pt,after skip=10pt,breakable} \newenvironment{talign}{\let\displaystyle\textstyle\align}{\endalign} \newenvironment{talign*}{\let\displaystyle\textstyle\csname 
align*\endcsname}{\endalign} \usepackage[explicit]{titlesec} \titleformat{\section}{\fontsize{18}{30}\sffamily\bfseries}{\thesection}{18pt}{#1} \titleformat{\subsection}{\fontsize{15}{18}\sffamily\bfseries}{\thesubsection}{15pt}{#1} \titleformat{\subsubsection}{\fontsize{10}{12}\sffamily\large\bfseries}{\thesubsubsection}{8pt}{#1} \titlespacing*{\section}{0pt}{5pt}{5pt} \titlespacing*{\subsection}{0pt}{5pt}{5pt} \titlespacing*{\subsubsection}{0pt}{5pt}{5pt} \newcommand{\Disp}{\displaystyle} \newcommand{\qe}{\hfill\(\bigtriangledown\)} \DeclareMathAlphabet\mathbfcal{OMS}{cmsy}{b}{n} \setlength{\parindent}{0.2in} \setlength{\parskip}{0pt} \setlength{\columnseprule}{0pt} \newcommand{\vsp}{\vspace{0.3cm}} \usepackage{matlab-prettifier} \usepackage{textcomp,xcolor} \definecolor{MatlabCellColour}{RGB}{252,251,220} \lstdefinestyle{matlab}{ frame=single, numbers=left, basicstyle=\lstconsolas\small, escapechar=`, style=Matlab-editor, backgroundcolor=\color{MatlabCellColour} } \lstdefinestyle{matlab1}{ frame=single, numbers=right, basicstyle=\lstconsolas\small, escapechar=`, style=Matlab-editor, backgroundcolor=\color{MatlabCellColour} } Welcome to a {\em beautiful} subject in scientific computing: numerical solution of ordinary differential equations (ODEs) with initial conditions. More precisely, we consider $$ y'(t) = f(t,y(t)), \quad y(t_0)=y_0, \quad t\in[t_0,b] $$ where $t$ is an independent variable (usually plays the role of time), $y$ is the unknown solution of the problem (possibly a vector) which is sought, $f$ is a known function and $y_0$ is the initial condition at $t=t_0$. We first give a brief introduction to the theory of ODEs and existence and uniqueness of solutions. Then some standard numerical techniques are derived and their advantages and disadvantages for solving different differential equations are outlined. In some parts of this lecture we follow \cite{Atkinson-et-al:2009} and \cite{LeVeque:2007}. 
\section{An introduction to ODEs}
The simplest ordinary differential equation (ODE) has the form
\begin{equation}\label{ode:simplest}
y'(t) = g(t)
\end{equation}
for a given function $g$. The {\em general solution} of this equation is
$$
y(t) = \int g(\tau) d\tau + c
$$
with $c$ an arbitrary integration constant. Here $\int g(\tau)d\tau$ denotes any fixed antiderivative of $g$. In Figure \ref{fig:ode0}, for the case $g(t)=\cos 5t$, the plots of $y(t)$ for four different values of $c$ are shown. Such plots are sometimes called {\em integration curves}. For the simple ODE \eqref{ode:simplest}, the integration curves are just copies (shifts) of each other along the $y$-axis.
\begin{figure}[!th]
\centering
\includegraphics[scale=0.75]{ode0}
\caption{Integration curves}\label{fig:ode0}
\end{figure}
The constant $c$ can be obtained by specifying the value of $y$ at some given point
$$
y(t_0) = y_0,
$$
where $y_0$ is known. The {\em particular solution} of the differential equation can then be written as
$$
y(t) = y_0 + \int_{t_0}^{t}g(\tau)d\tau,
$$
provided that $g$ is (for example) a continuous function. Another simple differential equation is
$$
y'(t) = \lambda y(t),\quad y(t_0) = y_0, \quad \lambda\in \R
$$
which possesses the exponential solution
$$
y(t) = y_0 e^{\lambda (t-t_0)}.
$$
A more general form, which contains both of the above differential equations, reads
\begin{equation}\label{ivp:linear1}
y'(t) = \lambda y(t)+g(t), \quad y(t_0) = y_0.
\end{equation}
The general solution of this equation can be obtained by the so-called {\em method of integrating factors}. Multiplying the above linear equation by the integrating factor $e^{-\lambda t}$, the equation is reformulated as
$$
\frac{d}{dt}\big(e^{-\lambda t}y(t)\big)=e^{-\lambda t}g(t). \vsp
$$
Integrating both sides from $t_0$ to $t$, we obtain
$$
y(t)= e^{\lambda t}\left[ c + \int_{t_0}^{t}e^{-\lambda \tau}g(\tau) d\tau \right].
$$
Imposing the condition $y(t_0)=y_0$ gives $c = \exp(-\lambda t_0)y_0$, and therefore
\begin{equation}\label{ivp:linsol}
y(t) = y_0e^{\lambda (t-t_0)} + \int_{t_0}^{t}e^{\lambda (t-\tau)}g(\tau)d\tau.
\end{equation}
In many applications of differential equations the independent variable $t$ plays the role of time, $t_0$ can be interpreted as the initial time, and $y(t_0)=y_0$ is referred to as the {\em initial condition}. A differential equation together with an initial condition, as above, is called an {\bf initial value problem (IVP)}. More generally, an IVP has the form
\begin{equation}\label{ivp:form}
\begin{split}
\y'(t) =& \,\f(t,\y(t)) \\
\y(t_0) =&\, \y_0
\end{split}
\end{equation}
which is a system of IVPs for the vector $\y = [y_1,\ldots,y_n]^T$. Here $\f$ may represent a nonlinear relation between the independent variable $t$ and the dependent variable $\y$. For the linear IVP \eqref{ivp:linear1} the solution is given by \eqref{ivp:linsol}, and it exists and is unique on any open interval where the data function $g$ is continuous. But for the nonlinear IVP \eqref{ivp:form}, even if the right-hand side function $f(t,y)$ has derivatives of all orders, the solution may exist only on a smaller interval. In some cases the solution is not unique.
\vsp
\begin{example}
Consider the nonlinear equation
$$
y'(t) = -y(t)^2,\quad t\geqslant 0.
$$
This problem has the trivial solution $y(t)\equiv 0$ and the general solution
$$
y(t) = \frac{1}{t+c}
$$
with arbitrary constant $c$. Let the equation be accompanied by the initial condition $y(0)=y_0$. If $y_0=0$ then $y(t)\equiv 0$ is the solution of the IVP for all $t\geqslant 0$. If $y_0\neq 0$ then the solution of the IVP is
$$
y(t)=\frac{1}{t+y_0^{-1}}. \vsp
$$
For $y_0>0$ the solution exists for all $t\geqslant0$, while for $y_0<0$ the solution exists only on the interval $[0,-y_0^{-1})$.
\end{example}
\vsp
\begin{example}
Consider the IVP
$$
y'(t) = 2\sqrt{y(t)}, \quad t\geqslant 0, \quad y(0)=0.
$$
It is clear that both $y(t)\equiv 0$ and $y(t) = t^2$ are solutions of this IVP. In addition, any $C^1$ function $y(\cdot;\alpha)$ of the form
$$
y(t;\alpha) = (t-\alpha)_{+}^2 =
\begin{cases}
0, & 0\leqslant t\leqslant \alpha\\
(t-\alpha)^2, & t>\alpha
\end{cases}
$$
for any $\alpha \geqslant0$ is a solution of this IVP. In Figure \ref{fig:ode1} the solutions with $\alpha=0,1,2$ are plotted. This example reveals the non-uniqueness of solutions of the nonlinear IVP \eqref{ivp:form} for some right-hand side functions $f$.
\begin{center}
\includegraphics[scale=0.7]{ode1}
\captionof{figure}{Three sample solutions for IVP $y'=2\sqrt y$ with $y(0)=0$ (non-uniqueness).}\label{fig:ode1}
\end{center}
\end{example}
\vsp
To guarantee that there is a unique solution it is necessary to impose a certain amount of smoothness on the function $f(t,y)$. The following well-known theorem establishes the existence and uniqueness of a solution of the IVP \eqref{ivp:form}.
\begin{theorem}\label{thm:ivpexuq}
Let $D$ be an open and connected set in $\R^2$, let $f(t,y)$ be continuous in both $t$ and $y$ in $D$, and let $(t_0,y_0)$ be an interior point of $D$. Assume that $f$ is Lipschitz continuous in its second argument, i.e., there exists a constant $L\geqslant 0$ such that
\begin{equation*}
|f(t,y)-f(t,\tilde y)|\leqslant L|y-\tilde y|,\quad \forall (t,y),\,(t,\tilde y)\in D. \vsp
\end{equation*}
Then there exists a unique function $y(t)$ defined on an interval $[t_0-\beta,t_0+\beta]$ for some $\beta>0$ satisfying
$$
y'(t) = f(t,y(t)), \quad t_0-\beta\leqslant t\leqslant t_0+\beta, \quad y(t_0)=y_0.
$$
\end{theorem}
Lipschitz continuity is slightly stronger than mere continuity, which only requires that $|f(t,y)-f(t,\tilde y)|\to 0$ as $\tilde y\to y$. Lipschitz continuity requires that
$$
|f(t,y)-f(t,\tilde y)| = \mathcal{O}(|y-\tilde y|) \quad \mbox{as}\quad \tilde y\to y.
$$
If $f$ is differentiable with respect to $y$ in $D$ and this derivative $\frac{\partial f}{\partial y}(t,y)$ is bounded, then we can take
$$
L = \max_{(t,y)\in \overline D}\left|\frac{\partial f}{\partial y}(t,y)\right| \vsp
$$
because the mean value theorem gives
$$
f(t,y)=f(t,\tilde y) + (y-\tilde y)\frac{\partial f}{\partial y}(t,\eta),\; \mbox{for some $\eta$ between $\tilde y$ and $y$}.
$$
The number $\beta$ in the statement of the theorem depends on the IVP \eqref{ivp:form}. For some equations solutions exist for all $t$, so we can take $\beta$ to be infinity. However, for many nonlinear equations solutions exist only on a bounded interval.
\vsp
\begin{example}
For the IVP \eqref{ivp:linear1} we have $f(t,y)=\lambda y + g(t)$; hence $L=|\lambda|$. This problem of course has a unique solution for any initial value $y_0$ and for all $t$. In particular, if $\lambda=0$ then $f(t,y)=g(t)$. In this case $f$ is independent of $y$, and the solution is obtained by simply integrating the function $g(t)$.
\end{example}
\vsp
\begin{example}
Consider the IVP
$$
y'(t)=2ty(t)^2,\quad y(0)=1.
$$
For this equation $f(t,y)=2ty^2$ and $\frac{\partial f}{\partial y}(t,y)=4ty$. Both functions are continuous for all $(t,y)$. On any bounded domain $D=(a,b)\times(c,d)$ containing $(t_0,y_0)=(0,1)$ we can take $L=4\max(|a|,|b|)\max(|c|,|d|)$. According to Theorem \ref{thm:ivpexuq} there is a unique solution to this IVP for $t$ in some neighborhood of $t_0=0$. This solution is
$$
y(t)=\frac{1}{1-t^2}, \quad -1<t<1.
$$
As a side note, this example shows that the continuity of $f(t,y)$ and $\frac{\partial f}{\partial y}(t,y)$ for all $(t,y)$ does not imply the existence of a solution $y(t)$ for all $t$.
\end{example}
\vsp
\subsection{Modelling with ODEs: a funny example}
Imagine that you are jogging along a given path. Suddenly a dog in a nearby garden sees you and begins chasing you at a constant speed $w$.
What is the trajectory of the dog if we assume it is always running directly toward you?\footnote{This example is taken from: W. Gander, M. J. Gander, F. Kwok, Scientific Computing, An Introduction using Maple and MATLAB, Springer (2014).}
\begin{figure}[ht!]
\centering
\includegraphics[scale=.1]{dog10} \;\;
\begin{tikzpicture}[>=latex,scale=1]
\draw[->,thick] (-.5,0) -- (5,0) node[right] {$x$};
\draw[->,thick] (0,-.5) -- (0,4) node[above] {$y$};
\filldraw (3,1) circle (2.5pt) node[right] {$(\xi,\eta)$};
\filldraw (1,3) circle (2.5pt) node[right] {$~(x,y)$};
\draw[->] (1,3) -- (2,2) node[right] {$w$};
\node[color=black] at (.9,3.5) {Dog};
\node[color=black] at (3.4,.5) {Jogger};
\draw[samples=200, domain=0:3, color=red,thick] plot (\x,{\x-3/4*\x*(\x-1)+5/12*\x*(\x-1)*(\x-2)});
\end{tikzpicture}
\caption{Dog chasing a jogger. (left image from freepik.com)}\label{fig:dog}
\end{figure}
This situation is depicted in Figure \ref{fig:dog}. We assume that your trajectory is represented by $(\xi(t),\eta(t))$ and the trajectory of the dog by $(x(t),y(t))$. Since the dog is running with constant speed $w$, we have
\begin{equation}\label{dog-speed}
[x'(t)]^2 + [y'(t)]^2 = w^2, \quad \forall t\geqslant 0.
\end{equation}
Since the dog is always running toward you, the velocity vector of the dog is proportional to the difference vector between your position and that of the dog, i.e.,
\begin{equation}\label{dog-velocity}
\begin{bmatrix}
x'(t)\\
y'(t)
\end{bmatrix}
= \lambda(t)
\begin{bmatrix}
\xi(t)- x(t)\\
\eta(t)-y(t)
\end{bmatrix}
, \quad \forall t\geqslant 0,
\end{equation}
where $\lambda(t)>0$ is a proportionality factor. To find $\lambda(t)$ we can substitute \eqref{dog-velocity} into \eqref{dog-speed} to obtain
$$
\lambda^2 = \frac{w^2}{(\xi-x)^2 + (\eta-y)^2}.
\vsp
$$
The trajectory of the dog therefore satisfies the following system of ODEs:
\begin{equation}\label{dog-ode}
\begin{bmatrix}
x'(t)\\
y'(t)
\end{bmatrix}
= \frac{w}{\sqrt{(\xi(t)-x(t))^2 + (\eta(t)-y(t))^2}}
\begin{bmatrix}
\xi(t)- x(t)\\
\eta(t)-y(t)
\end{bmatrix}.
\end{equation}
The initial condition for this ODE is the initial position of the dog, i.e., $(x(0),y(0))=(x_0,y_0)$. To find the trajectory of the dog, it remains to solve this IVP. But there is no hope of finding a closed-form solution for a general jogging path $(\xi(t),\eta(t))$. Soon we will solve this equation using numerical methods, but let us first look at the dog's trajectories for different choices of the jogger's path in Figure \ref{trajdog_fig}. These results were obtained with Euler's method; see Section \ref{sect-numerical-methods} below.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.5]{dog_fig0} \includegraphics[scale=.5]{dog_fig1}\\
\includegraphics[scale=.5]{dog_fig4} \includegraphics[scale=.5]{dog_fig2}
\caption{Jogging path and the numerically computed trajectory of the dog: the jogger running along a straight path (top-left), the jogger noticing the dog and trying to run back (top-right), the jogger running on a circular track (bottom-left), the jogger running on a circular track but with a slow dog (bottom-right).}\label{trajdog_fig}
\end{figure}
\vsp
\subsection{Higher order ODEs}
The highest-order derivative appearing in an ODE determines the order of the ODE. A $k$-th order ODE in the most general form can be written as
\begin{equation*}
f(t,y,y',\ldots,y^{(k)}) = 0,
\end{equation*}
where $f:\R^{k+2}\to \R$ is a known function and $y(t)$ is to be determined. A $k$-th order ODE is said to be explicit if it can be written in the form
\begin{equation}\label{ode:kthorder}
y^{(k)}(t) = f(t,y,y',\ldots,y^{(k-1)}).
\end{equation}
As a necessary condition for a unique solution, this ODE should be accompanied by $k$ initial conditions
\begin{equation}\label{ode:kthorder_initial}
y(t_0)=y_0,\quad y'(t_0)=y'_0, \quad \ldots\quad y^{(k-1)}{(t_0)}=y^{(k-1)}_{0}.
\end{equation}
Many ODEs arise naturally in the form \eqref{ode:kthorder}, and many others can be transformed into it. So far we have considered the case $k=1$. This is not a real restriction because a higher-order ODE can always be reduced to a system of first-order equations as follows. For the explicit $k$-th order ODE \eqref{ode:kthorder} define $k$ new variables
$$
y_1(t) = y(t),\quad y_2(t) = y'(t),\quad \ldots \quad y_k(t) = y^{(k-1)}(t),
$$
so that the original $k$-th order equation becomes a system of $k$ first-order equations
$$
\y'(t) =
\begin{bmatrix}
y'_1(t) \\ y'_2(t) \\ \vdots \\ y'_{k-1}(t) \\ y'_{k}(t)
\end{bmatrix}
=
\begin{bmatrix}
y_2(t) \\ y_3(t) \\ \vdots \\ y_{k}(t) \\ f(t,y_1,y_2,\ldots,y_k)
\end{bmatrix}=: \g(t,\y),
$$
with initial conditions
$$
y_1(t_0)=y_0,\quad y_2(t_0)=y'_0, \quad \ldots\quad y_k{(t_0)}=y^{(k-1)}_{0},
$$
or simply
$$
\y(t_0) = \y_0, \vsp
$$
for $\y_0=[y_0,y'_0,\ldots,y_0^{(k-1)}]^T$. In general, we will focus on explicit first-order systems of ODEs with initial conditions of the form
\begin{equation}\label{ode:sysform}
\begin{split}
\y'(t) =&\, \f(t,\y) \\
\y(t_0) = &\, \y_0
\end{split}
\end{equation}
where $\f:\R^{n+1}\to \R^{n}$ and $\y_0\in \R^n$. If $\f$ does not explicitly depend on $t$, i.e., $\f(t,\y)=\f(\y)$, the system is called {\em autonomous} and can be written in the form $$\y' = \f(\y).$$ A nonautonomous ODE $\y' = \f(t, \y)$ can always be converted to autonomous form by introducing an additional dependent variable $y_{n+1}(t) = t$, so that $y'_{n+1}(t)=1$ and $y_{n+1}(t_0)=t_0$, yielding the autonomous ODE
$$
\begin{bmatrix}
\y'(t) \\ y'_{n+1}(t) \\
\end{bmatrix}
=
\begin{bmatrix}
\f(y_{n+1},\y) \\ 1 \\
\end{bmatrix}.
$$
\ \\
It is often convenient to assume $\f$ is of this form, since it simplifies the notation.
\vsp
\begin{example}
Consider the second order ODE
$$
\theta''(t) = -\frac{g}{\ell}\sin(\theta(t)),
$$
which models the motion of a pendulum with mass $m$ at the end of a rigid bar of length $\ell$, ignoring the mass of the bar and the forces of friction and air resistance. Here $\theta(t)$ is the angle of the pendulum from vertical at time $t$, and $g$ is the gravitational constant. The motion is independent of the mass of the pendulum. Let $v = \theta'$ be the velocity and define
$$
\y =
\begin{bmatrix}
\theta \\ v
\end{bmatrix}
$$
to obtain the following first-order system of equations
$$
\begin{bmatrix}
\theta' \\ v'
\end{bmatrix}
=
\begin{bmatrix}
v\\ -(g/\ell)\sin\theta
\end{bmatrix}=:
\begin{bmatrix}
f_1(\theta,v)\\ f_2(\theta,v)
\end{bmatrix}.
$$
\end{example}
\vsp
\begin{workout}
Convert the following third order initial value problem to a system of first order equations:
\begin{align*}
& u'''(t)+4u''(t)+5u'(t)+2u(t)=2t^2+10t+8 \\
& u(0)=1, \; u'(0)=-1,\; u''(0)=3.
\end{align*}
\end{workout}
\vsp
\begin{workout}
The following system of second order equations arises from studying the gravitational attraction of one mass by another. Convert it to a system of first order equations.
\begin{align*}
x''(t)=-\frac{cx(t)}{r(t)^3}, \quad y''(t)=-\frac{cy(t)}{r(t)^3}, \quad z''(t)=-\frac{cz(t)}{r(t)^3}.
\end{align*}
Here $c$ is a positive constant and $r(t)=\sqrt{x(t)^2+y(t)^2+z(t)^2}$, with $t$ denoting time.
\end{workout}
\vsp
\subsection{First order systems of ODEs}
The study of first-order systems of ODEs is important not only for solving a higher-order scalar equation using the techniques for first-order ODEs, but also in a variety of other applications in which the system is obtained directly from the problem model. Such systems also appear when solving parabolic and hyperbolic partial differential equations by the method of lines (MOL).
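The reduction of a higher-order equation to a first-order system is mechanical and easy to code. The following purely illustrative sketch (written in Python/NumPy for brevity, although the codes in these notes are in MATLAB; the function name \texttt{pendulum\_rhs} and the parameter values are our own choices) evaluates the right-hand side $\f(t,\y)=[v,\,-(g/\ell)\sin\theta]^T$ of the pendulum system from the example above:

```python
import numpy as np

def pendulum_rhs(t, y, g=9.81, ell=1.0):
    """Right-hand side of the first-order form of theta'' = -(g/ell) sin(theta).

    y[0] = theta (angle from vertical), y[1] = v = theta' (angular velocity)."""
    theta, v = y
    return np.array([v, -(g / ell) * np.sin(theta)])

# At the rest position theta = 0, v = 0 the right-hand side vanishes,
# so the pendulum hanging straight down is an equilibrium.
print(pendulum_rhs(0.0, np.array([0.0, 0.0])))
```

Once the equation is in this form, any solver for first-order systems can be applied to it directly; this is precisely why the reduction above is so useful in practice.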
The system of ODEs \eqref{ode:sysform} is linear if
$$
\f(t,\y)=A(t)\y + \g(t)
$$
where $A(t)\in \R^{n\times n}$ and $\g\in \R^n$. An important special case is the constant coefficient linear system
\begin{equation*}
\y'(t) = A\y(t) + \g(t) \vsp
\end{equation*}
where $A \in\R^{n\times n}$ is a constant matrix. If $\g(t)=0$, then the equation is homogeneous. The solution to the homogeneous system $\y'(t)=A\y(t)$ with initial data $\y(t_0)=\y_0$ is
\begin{equation*}
\y(t) = e^{A(t-t_0)}\y_0,
\end{equation*}
where $e^{A(t-t_0)}$ is the matrix exponential.
\vsp
\begin{example}[Chemical Reaction Kinetics]
Let $X$ and $Y$ represent chemical compounds and consider a reaction of the form
$$
X \xlongrightarrow{k_1}Y
$$
in which $X$ is transformed into $Y$ at rate $k_1 > 0$. If we let $y_1$ represent the concentration of $X$ and $y_2$ represent the concentration of $Y$ (often denoted by $y_1=[X]$ and $y_2=[Y]$), then the ODEs for $y_1$ and $y_2$ are
\begin{align*}
y'_1 =& \,-k_1y_1 \\
y'_2 =& \,+k_1 y_1.
\end{align*}
If there is also a reverse reaction at rate $k_2$, we write
$$
X \myrightleftarrows{\rule{0.8cm}{0cm}}_{{}_{{}_{\hspace{-.7cm}k_2}}}^{{}^{\hspace{-.7cm}k_1}} Y
$$
and we have the system of ODEs
\begin{align*}
y'_1 =& \,-k_1y_1+k_2y_2 \\
y'_2 =& \,+k_1 y_1-k_2y_2
\end{align*}
which, with given initial concentrations $y_1(0)$ and $y_2(0)$, forms a linear system of initial value problems with a constant coefficient matrix:
\begin{equation}\label{chem:eq1}
\begin{bmatrix} y'_1\\y'_2 \end{bmatrix}=
\begin{bmatrix} -k_1 & k_2 \\ k_1 & -k_2 \end{bmatrix}
\begin{bmatrix} y_1\\y_2 \end{bmatrix}, \quad
\begin{bmatrix} y_1(0)\\y_2(0) \end{bmatrix}=
\begin{bmatrix} y_{1,0}\\y_{2,0} \end{bmatrix}.
\end{equation}
\ \\
Another simple system arises from the decay process
$$
X \xlongrightarrow{k_1}Y\xlongrightarrow{k_2}Z.
$$
If $y_1=[X]$, $y_2=[Y]$ and $y_3=[Z]$ then we have the following equations
\begin{equation}\label{chem:eq2}
\begin{array}{rl}
y'_1 =& \,-k_1y_1 \\
y'_2 =& \,k_1 y_1 - k_2y_2\\
y'_3 = &\, k_2y_2
\end{array}, \quad \mbox{or} \quad
\begin{bmatrix} y'_1\\y'_2\\y'_3 \end{bmatrix}=
\begin{bmatrix} -k_1 & 0 & 0 \\ k_1 & -k_2 & 0 \\ 0 & k_2 & 0 \end{bmatrix}
\begin{bmatrix} y_1\\y_2\\y_3 \end{bmatrix}.
\end{equation}
\end{example}
\vsp
Consider a linear system $\y'=A\y$ with initial condition $\y(t_0)=\y_0$, where $A$ is a constant $n\times n$ matrix, and suppose for simplicity that $A$ is diagonalizable, i.e., $A$ has a complete set of $n$ linearly independent eigenvectors $\v_k$ corresponding to eigenvalues $\lambda_k$ for $k=1,\ldots,n$ such that
$$
A\v_k = \lambda_k \v_k, \quad k=1,\ldots,n
$$
or equivalently
$$
A = VDV^{-1},
$$
where $V=[\v_1\; \v_2\ldots \v_n]$ and $D = \diagg(\lambda_1,\ldots,\lambda_n)$. Applying the change of variables $\u(t) = V^{-1}\y(t)$, the linear system $\y'=A\y$ transforms into
$$
\u' = D\u, \quad \u(t_0) = V^{-1}\y_0=:\u_0.
$$
This is a decoupled system of ODEs because $D$ is diagonal. We may write
\begin{equation}\label{sys:solveAy}
u'_k = \lambda_k u_k, \quad u_k(t_0)=u_{k,0}, \quad k=1,2,\ldots,n,
\end{equation}
with solutions $u_k(t)=u_{k,0}e^{\lambda_k (t-t_0)}$, or in matrix form
$$
\u(t) = e^{D(t-t_0)}\u_0
$$
or equivalently
$$
V^{-1}\y(t) = e^{D(t-t_0)}V^{-1}\y_0
$$
which finally gives
$$
\y(t)=Ve^{D(t-t_0)}V^{-1}\y_0 = e^{A(t-t_0)}\y_0.
$$
Keep in mind that $e^{A(t-t_0)}$ is a matrix and $\y_0$ is a vector, so $e^{A(t-t_0)}\y_0$ is a matrix-vector multiplication.
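The recipe $\y(t)=Ve^{D(t-t_0)}V^{-1}\y_0$ is easy to check numerically. The following purely illustrative sketch (in Python/NumPy for brevity, although the codes in these notes are in MATLAB) applies it with $t_0=0$ to the $2\times 2$ matrix $A$ with rows $(-2,\,1)$ and $(2,\,-1)$ and initial vector $\y_0=[5,2]^T$:

```python
import numpy as np

# Solve y' = A y, y(0) = y0 via the eigendecomposition A = V D V^{-1},
# so that y(t) = V exp(D t) V^{-1} y0.
A = np.array([[-2.0, 1.0], [2.0, -1.0]])
y0 = np.array([5.0, 2.0])

lam, V = np.linalg.eig(A)       # eigenvalues lam and eigenvector matrix V
u0 = np.linalg.solve(V, y0)     # u0 = V^{-1} y0, without forming the inverse

def y(t):
    # exp(D t) is diagonal, so it acts componentwise on u0
    return V @ (np.exp(lam * t) * u0)

print(y(0.0))    # reproduces the initial condition [5, 2]
print(y(50.0))   # close to the steady state [7/3, 14/3]
```

Note the standard numerical-linear-algebra choice of computing $V^{-1}\y_0$ with a linear solve rather than forming $V^{-1}$ explicitly.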
\ \\
\begin{example}\label{ex:chem_solve}
Consider the linear system of ODEs \eqref{chem:eq1} with $k_1=2$ and $k_2=1$ and initial conditions $y_1(0)=5$ and $y_2(0)=2$:
\begin{equation*}
\begin{bmatrix} y'_1\\y'_2 \end{bmatrix}=
\begin{bmatrix} -2 & 1 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} y_1\\y_2 \end{bmatrix}, \quad
\begin{bmatrix} y_1(0)\\y_2(0) \end{bmatrix}=
\begin{bmatrix} 5\\2 \end{bmatrix}.
\end{equation*}
One can easily check that the eigenvalues and eigenvectors of $A$ are
\begin{align*}
\lambda_1 &= -3, \quad \v_1=[1,-1]^T\\
\lambda_2 &=0, \qquad \v_2=[1,2]^T,
\end{align*}
which gives
$$
V = \begin{bmatrix} 1 & 1 \\ -1 & 2 \end{bmatrix}, \quad
D = \begin{bmatrix} -3 & 0 \\ 0 & 0 \end{bmatrix}.\vsp
$$
Therefore, the solution can be written as
$$
\y(t) = Ve^{Dt}V^{-1}\y_0 =\frac{1}{3}
\begin{bmatrix} 1 & 1 \\ -1 & 2 \end{bmatrix}
\begin{bmatrix} e^{-3t} & 0 \\ 0 & e^{0t} \end{bmatrix}
\begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} 5 \\ 2 \end{bmatrix}
$$
which shows that
\begin{align*}
y_1(t) & \,= \frac{5}{3}\left[2e^{-3t}+1\right]+ \frac{2}{3}\left[-e^{-3t}+1\right]\\
y_2(t) & \,= \frac{5}{3}\left[-2e^{-3t}+2\right]+ \frac{2}{3}\left[e^{-3t}+2\right].
\end{align*}
The plots of the solutions are depicted in Figure \ref{fig:ode2}.
\begin{center}
\includegraphics[scale=0.75]{ode2}
\captionof{figure}{Solutions of a chemical reaction kinetics problem.}\label{fig:ode2}
\end{center}
Since, in this example, the reaction rate $k_1$ is larger than $k_2$, the amount $y_1$ decreases while $y_2$ increases. Both solutions approach the steady states $y_1(t)\to \frac{1}{3}(y_1(0)+y_2(0))=\frac{7}{3}$ and $y_2(t)\to \frac{2}{3}(y_1(0)+y_2(0))=\frac{14}{3}$ as $t\to\infty$.
\end{example}
\vsp
\begin{workout}
In the chemical reaction model \eqref{chem:eq2} let $k_1=2$ and $k_2=1$ and $[y_1(0), y_2(0), y_3(0)] = [1, 3, 2]$. Obtain the solution. Hint: Calculate the eigenvalues and eigenvectors of the coefficient matrix and put them into the formula.
\end{workout}
\vsp
The situation is more complicated for a nonlinear system of equations. In the sequel we study several numerical algorithms for solving different types of ODEs, both linear and nonlinear.
\subsection{Stability of solutions}\label{sect-stabsol}
Depending on the given data $f$ and $y_0$, solutions of an IVP may behave differently as $t\to\infty$. The Lipschitz constant measures how much $f(t,y)$ changes if $y$ is perturbed (at some fixed time $t$). Since $f(t,y)=y'(t)$ is the slope of the line tangent to the solution curve through the value $y$, this indicates how the slope of the solution curve varies when $y$ is perturbed.
\vsp
\begin{example}
The solutions of $y'(t)=g(t)$, with Lipschitz constant $L=0$, are parallel curves, one for each prescribed initial condition $y_0$. See Figure \ref{fig:ode0}. The ODE $y'(t)=\lambda y(t)$, with $L=|\lambda|$, possesses the solutions $y(t)=y_0e^{\lambda t}$. Depending on the sign of $\lambda$, the solutions decay exponentially to zero (if $\lambda <0$), grow exponentially to infinity (if $\lambda >0$), or remain parallel lines (if $\lambda=0$) for different values of $y_0$. See Figure \ref{fig:ode345}.
\begin{center}
\includegraphics[scale=0.67]{ode3}\includegraphics[scale=0.67]{ode4}\includegraphics[scale=0.67]{ode5}
\captionof{figure}{Solutions of IVP $y'=\lambda y$ with different initial values $y_0$ for $\lambda<0$ (left), $\lambda>0$ (middle), and $\lambda=0$ (right).}\label{fig:ode345}
\end{center}
For $\lambda>0$ any two solutions diverge from each other, i.e., a small perturbation in the initial condition results in a substantial difference between the solutions at later times $t>t_0$. For $\lambda<0$ the situation is different: any two solutions converge toward each other, and even a large perturbation in the initial data will eventually diminish in the solutions. For $\lambda=0$, perturbations in the data and in the solution are of the same order.
If $\lambda$ is a complex number, say $\lambda=a+ib$, then $$ y(t)=y_0e^{at}(\cos bt + i\sin bt). $$ The behaviour now depends on the sign of $\mathrm{Re}(\lambda)=a$. We have exponential decay for $\mathrm{Re}(\lambda)<0$, exponential growth for $\mathrm{Re}(\lambda)>0$ and oscillatory solutions (parallel curves) for $\mathrm{Re}(\lambda)=0$. \end{example} \vsp A solution of the ODE $\y'(t) = \f(t, \y)$ is said to be {\bf stable} if for every $\epsilon>0$ there is a $\delta>0$ such that if $\wh \y(t)$ satisfies the ODE and $\|\wh \y(t_0)-\y(t_0)\|\leqslant \delta$, then \begin{equation}\label{ivp-stab} \|\wh \y(t)-\y(t)\|\leqslant \epsilon, \quad \mbox{for all} \quad t \geqslant t_0. \end{equation} Hence, for a stable solution, if the initial value is perturbed, the perturbed solution remains close to the original solution. This rules out the exponentially diverging solutions allowed by the ODE. A stable solution is said to be {\bf asymptotically stable} if $$\|\wh \y(t)-\y(t)\| \to 0 \quad \mbox{as} \quad t\to \infty.$$ This stronger form of stability means that the original and perturbed solutions not only remain close to each other, they converge toward each other over time. As we will soon see in detail, the significance of these concepts for the numerical solution of ODEs is that any errors introduced during the computation can be either amplified or diminished over time, depending on the stability of the solution. \vsp \begin{workout} Consider the IVP $$ y'(t)=-[y(t)]^2, \quad y(0)=1. \vsp $$ Show that the solution is $y(t)=1/(1+t)$. Then solve the perturbed problem $$ \wh y'(t)=-[\wh y(t)]^2, \quad \wh y(0)=1+\delta $$ and show that this IVP is stable, and even asymptotically stable. Hint: This is a separable differential equation, so the analytical solution can be easily obtained.
\end{workout} \vsp \begin{example}\label{ex:eigenvalAnal} For a system of ODEs $$\y'=A\y$$ with an $n\times n$ constant diagonalizable matrix $A$, the stability of solutions depends on the signs of the real parts of the eigenvalues of $A$. See \eqref{sys:solveAy}. In this case eigenvalues with negative real parts yield exponentially decaying solution components, eigenvalues with positive real parts yield exponentially growing solution components, and eigenvalues with zero real parts give oscillatory solutions. This means that the solutions of this ODE are stable if $\mathrm{Re}(\lambda_k)\leqslant 0$ for all $k=1,2,\ldots,n$, asymptotically stable if $\mathrm{Re}(\lambda_k)< 0$ for all $k=1,2,\ldots,n$, but unstable if there is any eigenvalue with $\mathrm{Re}(\lambda_k)>0$. \end{example} \vsp For the general case $\y'(t)=A(t)\y(t)$, where $A(t)$ is a time-dependent matrix, or for the nonlinear equation $\y'(t)=\f(t,\y)$, the stability analysis is more complicated. \section{Basic numerical methods}\label{sect-numerical-methods} Although the exact solutions of some ODEs can be obtained by a few analytic methods, such as the method of integrating factors, the solutions of most ODEs arising in applications are so complicated that they can be computed only by numerical methods. Even when a solution formula is available, it may involve integrals that can be calculated only by numerical integration formulas. An analytical solution of an ODE is a continuous (and sometimes closed-form) function in an infinite-dimensional space, while a numerical solution is a table of approximate values of the solution function at a discrete set of points, which can be considered as a vector in a finite-dimensional space. In this section some basic numerical methods are given for solving \eqref{ivp:linear1}.
In all methods, starting from the initial condition $y_0$ at $t=t_0$, the approximate solutions at times $t_1,t_2, \ldots$ are obtained successively by solving an algebraic (system of) difference equations obtained by discretizing the differential equation. \subsection{Euler's method} The simplest technique is the {\bf Euler's method}, also called the {\em explicit} or {\em forward Euler's method}. For a given time step $h>0$ assume that $t_k = t_0 + kh$, $k=1,2,\ldots,N$, is a partitioning of the time domain $[t_0,b]$ with $b=t_N$. Let the derivative $y'(t)$ in the ODE $y'(t) = f(t, y)$ at $t=t_k$ be approximated by the first-order forward difference approximation $$ y'(t_k) = \frac{y(t_{k+1})-y(t_k)}{h} - \frac{h}{2}y''(\xi_k), \quad t_k\leqslant\xi_k\leqslant t_{k+1}. $$ By dropping the error term and using the approximate values $y_k$ instead of $y(t_k)$, we obtain \begin{shaded} \vspace*{-0.3cm} \begin{equation}\label{euler:method} \begin{split} & y_{k+1} = y_{k} + hf(t_k,y_k), \quad k=0,1,\ldots, N-1,\\ & y_0 = y(t_0) \end{split} \end{equation} \vspace*{-0.3cm} \end{shaded} We use $y_0 = y(t_0)$ or some close approximation of it. Formula \eqref{euler:method} gives a rule for computing $y_1,y_2,\ldots, y_N$ in succession. We borrow an example from \cite{Heath:2018}. \vsp \begin{example} Consider the simple IVP $y'=y$ with initial value $y_0$ at initial time $t_0=0$. The approximate value $y_1$ is obtained as $y_1 = y_0 + hy_0 = (1+h)y_0$. It is obvious that $y_1\neq y(t_1)$; thus the value $y_1$ lies on a different solution curve of the ODE from the one on which we started, as shown on the left side of Figure \ref{fig:euler1}. For this ODE the value $y_1$ is also the slope of the tangent line to the new (perturbed) solution curve at $t_1$. Now we continue to obtain the approximate value $y_2$ at $t=t_2$ by starting from $y_1$ with the Euler rule $y_2 = y_1+hy_1=(1+h)y_1$.
The approximate value $y_2$ carries both the previous approximation error in $y_1$ and a new error introduced in the current step of the Euler's method. In fact, in this step we have moved to still another solution of the ODE, as is again shown in Figure \ref{fig:euler1}. We can advance to future times $t_3,t_4,\ldots$ until reaching the final time $b=t_N$. In each step a new {\em local truncation error} is introduced and the approximate solution falls down (or rises up) to another solution curve. Since the solutions of this ODE are unstable, the errors are amplified with time. For an equation with stable solutions, on the other hand, the errors in the numerical solution do not grow, and for an equation with asymptotically stable solutions, such as $y'=-y$, the errors diminish with time, as shown on the right-hand side of Figure \ref{fig:euler1}. \begin{center} \includegraphics[scale=0.35]{euler1}\includegraphics[scale=0.25]{euler2} \captionof{figure}{Steps of Euler's method for $y'=y$ (left) and $y'=-y$ (right).}\label{fig:euler1} \end{center} \end{example} \vsp \begin{workout} Write down Euler's formula for the following IVPs. Compute one iteration for the third ODE with $h=0.1$. \begin{itemize} \item (a) $y'(t)=te^{-t}-y(t), \; y(0)=1$, \item (b) $y'(t)=[\cos (y(t))]^2,\; y(0)=0$, \item (c) $y'(t)=t^3/y(t),\; y(0)=1$. \end{itemize} \end{workout} If the ODE is a system, then $y$ and $f$ in formula \eqref{euler:method} are simply replaced by the vectors $\y$ and $\f$. The ODE algorithms are usually simple to program. Here is a function implementing the explicit Euler's method.
\begin{lstlisting}[style = matlab]
function [T,Y] = ExEuler(f,y0,tspan,h)
Y = y0; T = tspan(1);
for t = tspan(1):h:tspan(2)-h
    y = y0 + h*f(t,y0);
    Y = [Y y]; y0 = y; T = [T t+h];
end
\end{lstlisting}
The input \verb+f+ is a function of the $t$ and $y$ variables, and can be defined as a separate function in the main script.
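As a cross-check, the same loop can be transcribed into Python (this sketch and the name \texttt{ex\_euler} are ours, written only for illustration); for $y'=-y$, $y(0)=1$ on $[0,5]$ with $h=0.2$ it reproduces the value $y_N=(1-0.2)^{25}\doteq 3.778\times 10^{-3}$ listed later in Table \ref{tb:euler1}.

```python
def ex_euler(f, y0, t0, b, h):
    """Explicit Euler for y' = f(t, y): step from t0 to b with step h."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    while t < b - h / 2:        # same stepping as the MATLAB loop above
        y = y + h * f(t, y)     # Euler update
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

ts, ys = ex_euler(lambda t, y: -y, 1.0, 0.0, 5.0, 0.2)
print(ys[-1])   # (1 - 0.2)**25, approximately 3.778e-3
```

Each Euler step multiplies the current value by $(1-h)$, so the final value is $(1-h)^{N}y_0$ exactly.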
For example, we use the following commands for solving the chemical reaction kinetics ODE \eqref{chem:eq1} on the interval $[0,3]$ with $k_1=2$ and $k_2 = 1$ and initial conditions $y_1(0)=5$ and $y_2(0)=2$. We also set $h = 0.01$.
\begin{lstlisting}[style = matlab]
y0 = [5;2]; tspan = [0 3]; h = 0.01;
[t,y] = ExEuler(@func,y0,tspan,h);
plot(t,y(1,:),'-b',t,y(2,:),'--r')
set(gca,'TickLabelInterpreter','latex')
xlabel('Time $t$', Interpreter='latex');
ylabel('Solution $y$',Interpreter='latex');
leg = legend('$y_1$','$y_2$');
set(leg,Interpreter='latex');
function yprime = func(t,y)
    yprime = [-2*y(1) + y(2); 2*y(1)-y(2)];
end
\end{lstlisting}
The exact solution was given in Example \ref{ex:chem_solve} and Figure \ref{fig:ode2}. If you run this script on your computer you should obtain the same plot. \vsp \begin{labexercise} Solve the dog-jogger problem using the Euler's method and reproduce the plots of Figure \ref{trajdog_fig}. Use $(x(0),y(0))= (60,70)$ and \begin{itemize} \item $(\xi(t),\eta(t))=(8t,0)$ for $t\in[0,12]$ and $w=10$, \item $(\xi(t),\eta(t))=\begin{cases} (8t,0), & t\in[0,7)\\ (8(7-t),0), & t\in[7,12] \end{cases}$ and $w=10$, \item $(\xi(t),\eta(t)) = (30+20\cos t, 20 + 15\sin t)$ and $t\in[0,4\pi]$ and $w=10$, \item $(\xi(t),\eta(t)) = (30+20\cos t, 20 + 15\sin t)$ and $t\in[0,4\pi]$ and $w=18$. \end{itemize} Solve the problem with other jogger paths and dog speeds. \end{labexercise} \vsp \subsubsection{Error analysis of Euler's method}\label{sect:error_anal_euler} The analysis of Euler's method is useful to understand how it works, to predict the error when using it, and perhaps to accelerate its convergence. Moreover, it gives insight into how to analyze other, more efficient numerical methods. We analyze the scalar IVP $y' = f(t,y)$ with $y(t_0)=y_0$ by assuming that it has a unique solution $y(t)$ on $t_0\leqslant t\leqslant b$ and that this solution has a bounded second derivative $y''(t)$ over this interval.
Using the Taylor series formula we have $$ y(t_{k+1})=y(t_k) + h y'(t_k)+\frac{1}{2}h^2y''(\xi_k) $$ for some $t_k\leqslant \xi_k\leqslant t_{k+1}$. Using the fact that $y'(t)=f(t,y(t))$ we can write \begin{equation}\label{euler:tylorser} y(t_{k+1})=y(t_k) + h f(t_k,y(t_k))+\frac{1}{2}h^2y''(\xi_k). \end{equation} The term $$ \tau_{k+1} = \frac{1}{2}h^2y''(\xi_k) $$ is the {\em local truncation error} (or one-step error) of the Euler's method introduced in step $k+1$. Subtracting the Euler rule $$ y_{k+1} = y_k + hf(t_k,y_k) $$ from \eqref{euler:tylorser}, we obtain \begin{equation}\label{euler:errlocalglobal} y(t_{k+1})-y_{k+1}=(y(t_k)-y_k) + h [f(t_k,y(t_k))-f(t_k,y_k)]+\tau_{k+1}, \end{equation} which shows that the error in $y_{k+1}$ consists of two parts: \begin{itemize} \item (1) the newly introduced local truncation error $\tau_{k+1}$, \item (2) the {\em propagated error} $(y(t_k)-y_k) + h [f(t_k,y(t_k))-f(t_k,y_k)]$. \end{itemize} If we define $$ e_k = y(t_k)-y_{k}, \vsp $$ then $e_N$ is the {\em global error} at the final time $t=t_N$. The global error reflects not only the local error at the final step, but also the compounded effects of the local errors at all previous steps. Except in some specific situations where $f$ is independent of $y$, {\bf the global error is not simply the sum of the local errors.} For example, for the ODE $y'=y$, since the solutions are diverging, the local errors at each step are magnified over time, so that the global error is greater than the sum of the local errors, as shown in Figure \ref{fig:euler2}, where the local errors are indicated by small vertical bars between solutions and the global error is indicated by a bar at the end. On the other hand, for the ODE $y'=-y$, since the solutions are converging, the global error is less than the sum of the local errors.
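To see this numerically, the following short Python sketch (the helper \texttt{euler\_error\_split} is ours, written only for illustration) accumulates the local truncation errors $|\tau_{k+1}|=|y(t_{k+1})-y(t_k)-hf(t_k,y(t_k))|$ along the exact solution and compares their sum with the global error $|e_N|$ at $t=1$:

```python
import math

def euler_error_split(f, y_exact, t0, b, h):
    """Return (|global error|, sum of |local truncation errors|) for Euler."""
    n = round((b - t0) / h)
    y, local_sum = y_exact(t0), 0.0
    for k in range(n):
        t = t0 + k * h
        # local error committed in this step, measured along the exact solution
        local_sum += abs(y_exact(t + h) - y_exact(t) - h * f(t, y_exact(t)))
        y = y + h * f(t, y)          # Euler step
    return abs(y_exact(b) - y), local_sum

glob, loc = euler_error_split(lambda t, y: y, math.exp, 0.0, 1.0, 0.1)
assert glob > loc                    # y' = y: diverging curves amplify errors
glob, loc = euler_error_split(lambda t, y: -y, lambda t: math.exp(-t), 0.0, 1.0, 0.1)
assert glob < loc                    # y' = -y: converging curves damp errors
```

With $h=0.1$ the global error for $y'=y$ is about $0.125$ while the local errors sum only to about $0.084$; for $y'=-y$ the global error, about $0.019$, is smaller than the sum, about $0.032$.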
\begin{figure}[!th] \centering \includegraphics[scale=0.32]{euler4}\includegraphics[scale=0.35]{euler3} \caption{Local and global errors of Euler's method for $y'=y$ (left) and $y'=-y$ (right).}\label{fig:euler2} \end{figure} Only in the special case $f(t,y)=g(t)$, where the solutions are parallel curves, is the global error exactly the sum of the local errors, because in this case we have $[f(t_k,y(t_k))-f(t_k,y_k)]=g(t_k)-g(t_k)=0$ and \eqref{euler:errlocalglobal} reduces to $e_{k+1}=e_k + \tau_{k+1}$ for $k=0,1,\ldots,N-1$. We then simply have $e_N=\tau_1+\tau_2+\cdots+\tau_N$. However, for a general $f(t,y)$ we have to analyze the effect of $[f(t_k,y(t_k))-f(t_k,y_k)]$ in each step. Considering $f(t,y)$ as a function of $y$ and using the {\em mean value theorem}, we can write $$ f(t_k,y(t_k))-f(t_k,y_k) = \frac{\partial f}{\partial y}(t_k,\eta_k)(y(t_k)-y_k) $$ for some $\eta_k$ between $y(t_{k})$ and $y_k$. Then, \eqref{euler:errlocalglobal} yields \begin{equation}\label{euler:errlocalglobal2} e_{k+1}=\left(1 + h \frac{\partial f}{\partial y}(t_k,\eta_k) \right)e_k +\tau_{k+1}, \end{equation} which shows that the amplification (or damping) factor for the propagated error is $\left(1 + h \frac{\partial f}{\partial y}(t_k,\eta_k) \right)$, which is related to the stability of solutions via the Lipschitz constant of $f$. We assume that the function $f(t,y)$ satisfies the following stronger Lipschitz condition: there exists a constant $L>0$ such that \begin{equation*} |f(t,y)-f(t,\tilde y)|\leqslant L|y-\tilde y|,\quad \forall (t,y),\,(t,\tilde y)\in [t_0,b]\times \R. \end{equation*} Taking absolute values in \eqref{euler:errlocalglobal2}, the term $\frac{\partial f}{\partial y}(t_k,\eta_k)$ can be bounded by the Lipschitz constant $L$ to obtain \begin{equation}\label{euler:errlocalglobal3} |e_{k+1}|\leqslant(1 + h L)|e_k| +|\tau_{k+1}|, \quad k=0,1,\ldots,N-1. \end{equation}
Define $$ \tau(h) = \frac{1}{2}h\|y''\|_\infty = \frac{1}{2}h\max_{t_0\leqslant t\leqslant b}|y''(t)|. $$ Then we have $|\tau_k|\leqslant h\tau(h)$ for $k=1,\ldots,N$. Now, applying \eqref{euler:errlocalglobal3} recursively, we obtain $$ |e_k|\leqslant (1+hL)^k|e_0| + [1+(1+hL)+(1+hL)^2+\cdots+(1+hL)^{k-1}] h\tau(h). $$ Using the formula for the sum of a geometric series, $$ 1+r+r^2+\cdots+r^{n-1} = \frac{r^n-1}{r-1}, \quad r\neq 1, $$ we obtain $$ |e_k|\leqslant (1+hL)^k|e_0| +\left[ \frac{(1+hL)^k-1}{L} \right] \tau(h). $$ Now we use the standard inequality $$ 1+x \leqslant e^x, \quad x\geqslant 0, $$ with $x=hL$ to obtain $$ |e_k|\leqslant e^{khL}|e_0| +\left[ \frac{e^{khL}-1}{L} \right] \tau(h). $$ On the other hand $kh = t_k-t_0$, so we can write $$ |e_N|\leqslant e^{L(t_N-t_0)}|e_0| +\left[ \frac{e^{L(t_N-t_0)}-1}{L} \right] \tau(h). $$ If $y_0=y(t_0)$ or $|e_0|=|y(t_0)-y_0|\leqslant c_1 h$, then we have a global error of order $h$ for the Euler's method, i.e., \begin{equation}\label{euler:errorbound} |e_N| \leqslant Ch, \quad C = c_1e^{L(b-t_0)} + \frac{1}{2}\left[ \frac{e^{L(b-t_0)}-1}{L} \right]\|y''\|_\infty. \end{equation} Therefore, the Euler's method is said to converge with order $1$. This order of convergence is obtained by assuming $y$ to have a continuous second derivative $y''$ over the interval $[t_0,b]$. When this assumption fails, the error bound \eqref{euler:errorbound} no longer holds. See Workout \ref{wo_eulersmoothness}. Since the error bound \eqref{euler:errorbound} uses the Lipschitz constant $L$ instead of $\frac{\partial f}{\partial y}$ and ignores the sign of $\frac{\partial f}{\partial y}$, it sometimes produces a very pessimistic numerical bound for the error. If \begin{equation}\label{euler:negativefpar} \frac{\partial f}{\partial y}(t,y)\leqslant 0, \end{equation} then the amplification factor $\left(1 + h \frac{\partial f}{\partial y}(t_k,\eta_k) \right)$ may be smaller than $1$, in contrast to $(1+hL)$, which is always bigger than $1$.
In this case (negative partial derivative of $f$), if we assume that $$ L = \sup_{t\in[t_0,b],y\in\R}\left| \frac{\partial f}{\partial y}(t,y) \right| $$ and $h$ is chosen so small that $1-hL\geqslant -1$, then we have $$ 1\geqslant 1+h\frac{\partial f}{\partial y}(t_k,\eta_k)\geqslant 1-hL\geqslant -1, $$ and from \eqref{euler:errlocalglobal2} we can write $$ |e_{k+1}|\leqslant |e_k|+|\tau_{k+1}|,\quad k=0,1,\ldots,N-1. $$ Applying this bound recursively, we obtain \begin{equation}\label{euler:bound2} |e_N|\leqslant |e_0| + (b-t_0)\tau(h) = Ch, \quad C = c_1 + \frac{1}{2}(b-t_0)\|y''\|_\infty, \end{equation} where $|e_0|\leqslant c_1h$ is assumed. The constant $C$ multiplying $h$ in bound \eqref{euler:bound2} is much smaller than that in bound \eqref{euler:errorbound}, which contains exponential terms. But the error bound \eqref{euler:bound2} is valid only under the restrictive assumption \eqref{euler:negativefpar}. \vsp \begin{example} For the simple IVP $$ y'(t)=-y(t), \quad y(0)=1, \quad 0\leqslant t\leqslant b, $$ we have $\partial f(t,y)/\partial y=-1$ and $L=1$. The true solution is $y(t)=e^{-t}$, hence $\|y''\|_\infty=1$. With $y_0=y(0)=1$ we have $|e_0|=0$. From the bound \eqref{euler:errorbound} we get $$ |e_N|\leqslant \frac{1}{2}h(e^b-1), $$ which shows the convergence with order $h$. However, the constant in the bound grows exponentially in $b$. For example, with $b=5$ the bound becomes approximately $73.7 h$, which is far larger than the actual error in Table \ref{tb:euler1}. The error bound \eqref{euler:bound2}, on the other hand, gives $$ |e_N|\leqslant \frac{1}{2}bh, $$ which is very close to the actual error for the various $h$ in Table \ref{tb:euler1}. The fifth column of the table also confirms the theoretical order $1$ for the method.
\begin{center} \captionof{table}{Euler's method: numerical solutions, errors, orders, and error bounds for IVP $y'=-y$ with $y_0=1$ at time $t=b=5$.}\label{tb:euler1} \begin{tabular}{l|cccccc} \hline $h$ & $y_N$ & $|e_N|$ & $|e_N|/|y(b)|$ & order & $\frac{1}{2}h(e^b-1)$ & $\frac{1}{2}bh$ \\ \hline $0.2$ & $3.778\ee-3$ & $2.960\ee-3$ & $4.393\ee-1$& $-$ & $14.74$ & $0.5 $ \\ $0.1$ & $5.154\ee-3$ & $1.584\ee-3$ & $2.351\ee-1$& $0.90$ & $7.37 $ & $0.25$ \\ $0.05$ & $5.921\ee-3$ & $8.174\ee-4$ & $1.213\ee-1$& $0.95$ & $3.69 $ & $0.12$ \\ $0.025$ & $6.323\ee-3$ & $4.149\ee-4$ & $6.158\ee-2$& $0.98$ & $1.84 $ & $0.06$ \\ $0.0125$ & $6.529\ee-3$ & $2.090\ee-4$ & $3.102\ee-2$& $0.99$ & $0.92 $ & $0.03$ \\ $0.00625$ & $6.633\ee-3$ & $1.049\ee-4$ & $1.557\ee-2$& $0.99$ & $0.46 $ & $0.02$ \\ \hline \end{tabular} \end{center} The results of this table are obtained by executing the following code:
\begin{lstlisting}[style = matlab1]
ExactSol = @(t) exp(-t);
b = 5; h = 0.2;
for n = 1:6
  [t,y] = ExEuler(@(t,y) -y, 1, [0 b], h);
  AppSol(n) = y(end);
  ABSerr(n) = abs(AppSol(n)-ExactSol(b));
  RELerr(n) = ABSerr(n)/ExactSol(b);
  h = h/2;
end
Order = log2(ABSerr(1:5)./ABSerr(2:6));
fprintf('y_N = \n');     fprintf('%10.3e\n', AppSol);
fprintf('abs_err = \n'); fprintf('%10.3e\n', ABSerr);
fprintf('rel_err = \n'); fprintf('%10.3e\n', RELerr);
fprintf('order = \n');   fprintf('%6.2f\n', Order);
\end{lstlisting}
Try to understand why the numerical orders are computed using the logarithmic formula in the line defining \verb+Order+ in the script above!
\begin{center} \captionof{table}{Euler's method: numerical solutions, errors, orders, and error bounds for IVP $y'=+y$ with $y_0=1$ at time $t=b=5$.}\label{tb:euler2} \begin{tabular}{l|ccccc} \hline $h$ & $y_N$ & $|e_N|$ & $|e_N|/|y(b)|$ & order & $\frac{1}{2}h(e^b-1)\|y''\|_\infty$ \\ \hline $0.2$ & $9.540\ee+1$ & $5.302\ee+1$ & $3.572\ee-1$& $-$ & $2.187\ee+3$ \\ $0.1$ & $1.174\ee+2$ & $3.102\ee+1$ & $2.090\ee-1$& $0.77$ & $1.093\ee+3 $ \\ $0.05$ & $1.315\ee+2$ & $1.691\ee+1$ & $1.140\ee-1$& $0.88$ & $5.470\ee+2$ \\ $0.025$ & $1.396\ee+2$ & $8.849\ee+0$ & $5.963\ee-2$& $0.93$ & $2.735\ee+2 $ \\ $0.0125$ & $1.439\ee+2$ & $4.529\ee+0$ & $3.052\ee-2$& $0.97$ & $1.367\ee+2$ \\ $0.00625$ & $1.461\ee+2$ & $2.291\ee+0$ & $1.544\ee-2$& $0.98$ & $6.837\ee+1$ \\ \hline \end{tabular} \end{center} Now consider the IVP $$ y'(t)=y(t), \quad y(0)=1, \quad 0\leqslant t\leqslant b. \vsp $$ For this problem the error bound \eqref{euler:bound2} is not applicable because $\partial f(t,y)/\partial y=1>0$. However, the error bound \eqref{euler:errorbound} is nearly sharp for this IVP. The exact solution is $y(t)=e^t$ and $\|y''\|_\infty=e^b$. See the results in Table \ref{tb:euler2}. \end{example} \vsp \begin{labexercise}\label{py_ex_euler} Solve the following problems using Euler's method with stepsizes $h=0.2,0.1,0.05,0.025,0.0125,0.00625$. Compute the relative errors using the true solutions $y(t)$. In each case plot the error function in terms of $h$ in the log-log scale, and compute the computational order of convergence. \begin{itemize} \item (a) $y'(t)=te^{-t}-y(t), \; 0\leqslant t\leqslant 10,\; y(0)=1$, with exact solution $y(t)=(1+0.5t^2)e^{-t}$. \item (b) $y'(t)=[\cos (y(t))]^2, \; 0\leqslant t\leqslant 10,\; y(0)=0$, with exact solution $y(t)=\tan^{-1}(t)$. \item (c) $y'(t)=t^3/y(t), \; 0\leqslant t\leqslant 10,\; y(0)=1$, with exact solution $y(t)=\sqrt{0.5t^4+1}$.
\end{itemize} \end{labexercise} \vsp \subsection{General explicit one-step methods}\label{sect:general-explicit} The Euler's method is a one-step method (as opposed to multistep methods), since at each time level the approximate solution is obtained merely from the previous time level. A general explicit one-step method has the form \begin{equation}\label{onestep:general} y_{k+1} = y_k + h\psi(t_k,y_k,h) \end{equation} for a more general function $\psi$ instead of $f$. We will assume that $\psi(t,y,h)$ is continuous in $t$ and $h$ and Lipschitz continuous in $y$, with a Lipschitz constant $\tilde L$ that is generally related to the Lipschitz constant $L$ of $f$. \vsp \begin{example} The choice $\psi(t,y,h)=f\bigl(t+\frac{h}{2},\,y+\frac{h}{2}f(t,y)\bigr)$ results in the two-stage {\em Runge-Kutta method} that will be addressed later. For this scheme we can easily show that $\psi$ has Lipschitz constant $\tilde{L}=L+\frac{h}{2}L^2$, where $L$ is the Lipschitz constant of $f$. \end{example} \vsp A one-step method is said to be {\em consistent} if \begin{equation}\label{one-step:consistency} \psi(t,y,0) = f(t,y), \end{equation} for all $t,y$, and $\psi$ is continuous in $h$. Consistency, indeed, implies that the local truncation error of method \eqref{onestep:general} is at least of order $h^2$, because \begin{align*} \tau_{k+1}&=y(t_{k+1})-y(t_k)-h\psi(t_k,y(t_k),h)\\ &=hy'(t_k)-h\psi(t_k,y(t_k),h) + \mathcal{O}(h^2)\\ &=h\psi(t_k,y(t_k),0)-h[\psi(t_k,y(t_k),0)+\mathcal O(h)] + \mathcal{O}(h^2)\\ &=\mathcal{O}(h^2). \end{align*} The error analysis of general one-step methods can be obtained in a similar way as was done for the Euler's method. First, as in \eqref{euler:tylorser}, the truncation error is obtained as \begin{equation*} \tau_{k+1} = y(t_{k+1})-y(t_k)-h\psi(t_k,y(t_k),h), \end{equation*} and then \eqref{euler:errlocalglobal} is modified to \begin{equation*}e_{k+1}=e_k + h [\psi(t_k,y(t_k),h)-\psi(t_k,y_k,h)]+\tau_{k+1}.
\end{equation*} The remaining part of the analysis follows the same lines; only $L$ should be replaced by $\tilde L$ in the new error bounds. We can then conclude the following theorem. \begin{theorem}\label{thm:converg-onesteps} If $\psi(t,y,h)$ is continuous in all its arguments and is Lipschitz continuous in its second argument, and the consistency condition \eqref{one-step:consistency} holds, then the explicit one-step method \eqref{onestep:general} is convergent with a global error of at least order $h^1$. If the local truncation errors $\tau_{k+1}$ behave as $h^{p+1}$, then the global error is of order $h^p$. \end{theorem} \vsp \subsection{Zero-stability} In the convergence proof of the one-step methods we observed the effect of the amplification factor in propagating the local errors, which was finally absorbed into the factor $C$ in the error bound \eqref{euler:errorbound}. Although this factor grows with $b$, it is bounded independently of $h$ as $h\to 0$. Consequently the method is stable. This form of stability for a numerical method is often called {\bf zero-stability}, since it is concerned with the stability of the method in the limit as $h$ tends to zero. To present this observation in a form more relevant to the stability of the original IVP \eqref{ivp:form}, assume that the initial condition $y_0$ is perturbed by $\ep$ and define the numerical solutions of the perturbed problem by $$ z_{k+1}=z_k + hf(t_k,z_k), \quad z_0 = y_0+\ep. $$ To compare the two numerical solutions $y_k$ and $z_k$, let $e_k = z_k-y_k$. Then $e_0=\ep$, and subtracting $y_{k+1}=y_k+hf(t_k,y_k)$ from the perturbed recursion we obtain $$ e_{k+1}=e_k+h[f(t_k,z_k)-f(t_k,y_k)]. $$ This is exactly of the same form as \eqref{euler:errlocalglobal} with $\tau_{k+1}$ set to zero. Using the same procedure we obtain $$ |e_{k}|\leqslant (1+hL)^k|e_0|\leqslant e^{L(t_k-t_0)}|\ep|.
$$ Consequently, we can write \begin{equation*} \max_{0\leqslant k\leqslant N}|z_k-y_k| \leqslant e^{L(b-t_0)}|\ep|, \vsp \end{equation*} which is the analog of the stability result \eqref{ivp-stab} for the original IVP \eqref{ivp:form}. Zero-stability is different from other forms of stability, which are of equal importance in practice. The fact that a method is zero-stable (and converges as $h\to 0$) is no guarantee that it will give reasonable results on the particular grid with $h>0$ that we want to use in practice. Other stability issues of a different nature will be taken up in the next sections. \vsp \subsection{Absolute stability} Zero-stability is needed to guarantee convergence of a numerical method as $h\to 0$. In practice, however, we need to perform a single calculation using a given positive stepsize $h>0$. Moreover, to minimize the computational cost, an $h$ as large as possible (consistent with our desired accuracy) is usually preferred. A stronger form of stability than zero-stability is required in this case to force the method to work for this particular stepsize $h$. Let us illustrate the situation in three numerical experiments borrowed from \cite{LeVeque:2007}. \vsp \begin{example}\label{ex:absstab} We apply the Euler's method to a simple IVP of the form $$ y'(t) = -\sin(t), \quad 0\leqslant t\leqslant 2, \quad y(0)=1 $$ with exact solution $y(t)=\cos t$. Since $f(t,y)=-\sin(t)$ is independent of $y$ ($L=0$), the global error is the sum of the local errors $$ |\tau_k| = \frac{h^2}{2}|y''(\xi_k)|\leqslant \frac{h^2}{2}. $$ Indeed we have $$ |e_N|\leqslant (b-t_0)\tau(h) = 2\tau(h) = h. $$ Suppose we want to compute the solution at $t=2$ with a global error less than $0.001$. According to the error bound it suffices to take $h = 0.001$ and obtain the approximate solution after $2000$ time steps. The computed solution $y_{2000}\doteq -0.4156921$ has error $|e_{2000}|=|y_{2000}-\cos(2)|\doteq 0.45\times 10^{-3}$.
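These numbers are easy to reproduce; the following Python sketch (a transcription of the Euler loop, ours for illustration and not part of the accompanying MATLAB codes) performs the $2000$ steps:

```python
import math

# Euler's method for y' = -sin(t), y(0) = 1 on [0, 2] with h = 0.001
h, y = 0.001, 1.0
for k in range(2000):
    y += h * (-math.sin(k * h))   # Euler step from t_k = k*h
err = abs(y - math.cos(2.0))
print(y, err)   # y_2000 is about -0.4156921, err about 0.45e-3
```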
Now, we change the IVP to \begin{equation}\label{exeulerlam} y'(t) =\lambda(y-\cos t)-\sin(t), \quad 0\leqslant t\leqslant 2, \quad y(0)=1, \end{equation} for some constant $\lambda$. The exact solution is $y(t)=\cos t$, as before. Let $\lambda=-10$. The error bound \eqref{euler:bound2} again suggests the global error bound $|e_N|\leqslant h$. For this reason, we again choose $h=0.001$ for a global error less than $0.001$. The computed solution now is $y_{2000}\doteq -0.4161629$ with error $|e_{2000}|\doteq 0.16\times 10^{-4}$, which is even better than the previous one. Let us examine some larger (in magnitude) $\lambda$. Let $\lambda = -2100$. Executing the Euler's method gives $y_{2000}\doteq 0.15\times 10^{77}$, which is far from the exact solution and shows a blow-up in the computations. The method is zero-stable and we proved that it is convergent as $h\to 0$. Indeed, for sufficiently small stepsizes we achieve accurate results, as reported in Table \ref{tb:euler3}. \begin{center} \captionof{table}{Global errors for the Euler's method with different stepsizes.}\label{tb:euler3} \begin{tabular}{l|ccccc} \hline $h$ & $0.001$ & $0.00097$ & $0.00095$ & $0.0008$ & $0.0004$ \\ \hline $|e_N|$ & $0.15\ee+77$ & $0.77\ee+26$ & $0.40\ee-07$ & $0.79\ee-07$ & $0.40\ee-08$ \\ \hline \end{tabular} \end{center} Something dramatic happens for values of $h$ between $0.00095$ and $0.00097$. For smaller values of $h$ we get very good results, whereas for larger values of $h$ the solution blows up. To find the reason, we return to \eqref{euler:errlocalglobal2}, where for the linear IVP with $f(t,y)=\lambda(y-\cos t)-\sin(t)$ we have the recursion $$ e_{k+1} = (1+\lambda h) e_{k} + \tau_{k+1}. $$ This means that in each time step the previous error is multiplied by the factor $(1+\lambda h)$. With $\lambda=-2100$ and $h = 0.001$ we have $|1+\lambda h| = 1.1$.
After $2000$ steps the truncation error introduced in the first step has grown by a factor of roughly $(1.1)^{2000}\approx 10^{82}$, which is consistent with the error actually seen. Note that with $\lambda=-10$ we have $|1+\lambda h|=0.99$, causing a decay in the effect of previous errors in each step. For the first case, i.e., $\lambda=0$, the amplification factor is $1$, which is why we got a worse result in that case than for $\lambda=-10$. Consequently, we can argue that for values of $h$ satisfying $$ |1+\lambda h|\leqslant 1 $$ the Euler method produces stable and accurate results for IVP \eqref{exeulerlam}. In the case of $\lambda=-2100$ the above criterion suggests values of $h$ smaller than $2/2100\doteq 0.000952$. \end{example} \vsp \begin{remark} Note that the exponential growth of errors for some positive values of $h$ in the previous example does not contradict zero-stability or convergence of the method in any way. The method does converge as $h\to 0$. \end{remark} \vsp Example \ref{ex:absstab} shows that another notion of stability is needed to force a numerical method to produce stable and accurate results with a given step length $h>0$. There exists a wide variety of ``stability'' notions, but the most basic one is {\bf absolute stability}. This kind of stability is based on the linear {\em test equation} \begin{equation}\label{lineareq_test} y'(t)=\lambda y(t), \quad \lambda\in \C. \end{equation} The restriction on the step length $h>0$ under which the method works for the test ODE \eqref{lineareq_test} is called the absolute stability condition. For example, the Euler's method applied to \eqref{lineareq_test} yields $$ y_{k+1}=(1+\lambda h)y_k = (1+\lambda h)^2y_{k-1} = \cdots = (1+\lambda h)^{k+1}y_0. \vsp $$ To prevent a blow-up in the solution as $k\to\infty$ we should impose the condition \begin{equation*} |1+\lambda h|\leqslant 1.
\end{equation*} If $\lambda \in \R$, the stability condition becomes $-2\leqslant z\leqslant 0$ for $z\equiv \lambda h$. This implies that $$ \lambda \leqslant 0 \;\; \mbox{and}\;\; 0\leqslant h\leqslant \frac{2}{-\lambda}. $$ In the general case $\lambda\in\C$, the complex number $z$ should satisfy $|1+z|\leqslant 1$, which means that $z$ should lie inside or on the circle with center $(-1,0)$ and radius $1$ in the complex plane. This region is called the {\bf region of absolute stability} of the Euler's method. See the shaded part on the left-hand side of Figure \ref{fig:eulerregabs}. \begin{definition} By applying a one-step method to the test problem \eqref{lineareq_test} we get $$ y_{k+1}= R(z)y_k, \quad z = \lambda h, $$ for some function $R(z)$, and then the absolute stability region of the method is defined to be $$ S = \{z\in\C: |R(z)|\leqslant 1\}. $$ \end{definition} The stability region of the explicit Euler's method is rather small compared to other (usually implicit) methods. This imposes a serious restriction on the stepsize $h$. This restriction, together with the slow convergence rate of the Euler's method, convinces us to study and develop other, more efficient ODE solvers. For comparison, the absolute stability region of the implicit Euler's method (see the next section) is drawn on the right-hand side of Figure \ref{fig:eulerregabs}. It contains the whole complex plane except the interior of the unit circle centered at $(1,0)$.
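The criterion $|1+\lambda h|\leqslant 1$ is easy to check numerically. A small Python sketch (ours, for illustration, using the parameters of Example \ref{ex:absstab}) confirms that the Euler iterates for the test equation stay bounded exactly when $z=\lambda h$ lies in the stability region:

```python
def euler_growth(lam, h, steps=2000):
    """|y_steps| for Euler applied to y' = lam*y with y_0 = 1."""
    y = 1.0
    for _ in range(steps):
        y = (1.0 + lam * h) * y   # one Euler step multiplies by (1 + lam*h)
    return abs(y)

lam = -2100.0
# h = 0.00095: |1 + lam*h| = 0.995 <= 1, inside the stability region
assert euler_growth(lam, 0.00095) < 1.0
# h = 0.001: |1 + lam*h| = 1.1 > 1, the iterates blow up
assert euler_growth(lam, 0.001) > 1e80
```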
\begin{figure}[!th] \begin{center} \begin{tikzpicture}[scale=2] \node[color=black] at (-.5,1.2) {{\tiny Explicit Euler}}; \draw[rectangle] (-3/2,-1) rectangle (0.5cm,1cm); \draw[fill=blue!10] (-0.5,0) circle (0.5cm); \draw[-] (-3/2,0) -- (1/2,0) node[right] {} ; \draw[-] (0,-1) -- (0,1) node[right] {} ; \node[color=black] at (-3/2,-1.1) {{\tiny $-3$}}; \node[color=black] at (-1,-1.1) {{\tiny $-2$}}; \node[color=black] at (-1/2,-1.1) {{\tiny $-1$}}; \node[color=black] at (0,-1.1) {{\tiny $0$}}; \node[color=black] at (1/2,-1.1) {{\tiny $1$}}; \node[color=black] at (-1.7,-1) {{\tiny $-2$}}; \node[color=black] at (-1.7,-1/2) {{\tiny $-1$}}; \node[color=black] at (-1.6,0) {{\tiny $0$}}; \node[color=black] at (-1.6,1/2) {{\tiny $1$}}; \node[color=black] at (-1.6,1) {{\tiny $2$}}; \end{tikzpicture} \begin{tikzpicture}[scale=2] \node[color=black] at (-.5,1.2) {{\tiny Implicit Euler}}; \draw[fill=blue!10] (-3/2,-1) rectangle (0.5cm,1cm); \draw[fill=blue!0] (-0.5,0) circle (0.5cm); \draw[-] (-3/2,0) -- (1/2,0) node[right] {} ; \draw[-] (-1,-1) -- (-1,1) node[right] {} ; \node[color=black] at (-3/2,-1.1) {{\tiny $-1$}}; \node[color=black] at (-1,-1.1) {{\tiny $0$}}; \node[color=black] at (-1/2,-1.1) {{\tiny $1$}}; \node[color=black] at (0,-1.1) {{\tiny $2$}}; \node[color=black] at (1/2,-1.1) {{\tiny $3$}}; \node[color=black] at (-1.7,-1) {{\tiny $-2$}}; \node[color=black] at (-1.7,-1/2) {{\tiny $-1$}}; \node[color=black] at (-1.6,0) {{\tiny $0$}}; \node[color=black] at (-1.6,1/2) {{\tiny $1$}}; \node[color=black] at (-1.6,1) {{\tiny $2$}}; \end{tikzpicture} \end{center} \caption{Absolute stability regions of explicit and implicit Euler's methods}\label{fig:eulerregabs} \end{figure} Although the absolute stability region is determined by testing the method on simple linear ODE \eqref{lineareq_test}, it yields information that is typically useful in determining an appropriate step length in nonlinear problems as well. 
For a system of ODEs of the form \begin{equation}\label{y1Ay} \y'(t) = A\y(t), \quad A\in\R^{n\times n} \end{equation} where $A$ is diagonalizable with eigenvalues $\lambda_\ell$, $\ell=1,2,\ldots,n$, a numerical method is absolutely stable if the numbers $z_\ell = \lambda_\ell h$ all lie in the absolute stability region of the method in the scalar case. The proof is simple because, as we observed in \eqref{sys:solveAy}, the system can be decoupled into $n$ scalar ODEs $$ u'_\ell = \lambda_\ell u_\ell, \quad \ell=1,2,\ldots,n. $$ Now, we investigate the numerical solution of a simple partial differential equation (PDE) using the method of lines (MOL), which results in a linear system of ODEs. \ \\ \begin{example}\label{ex:heat} Consider the linear {\em diffusion} equation $$ \frac{\partial u(x,t)}{\partial t} = \frac{\partial^2 u(x,t) }{\partial x^2},\quad 0\leqslant x\leqslant 1, \quad t\geqslant 0 $$ with homogeneous Dirichlet boundary conditions $u(0,t)=u(1,t)=0$ and initial condition $u(x,0)=u^0(x)$. Applying the method of lines (MOL) to this problem with the central difference approximation $$ \frac{\partial^2 u}{\partial x^2}(x_k,t)\approx \frac{u(x_{k+1},t)-2u(x_k,t)+u(x_{k-1},t)}{(\Delta x)^2}, \quad \Delta x = \frac{1}{m+1}, $$ with $x_k=k\Delta x$, leads to a system of equations of the form \eqref{y1Ay} with $$ \y(t) = \begin{bmatrix} u(x_1,t) \\ u(x_2,t) \\ \vdots \\ u(x_{m-1},t) \\ u(x_m,t) \end{bmatrix}, \quad A = \frac{1}{(\Delta x)^2}\begin{bmatrix} -2 & 1 & & & \\ 1 & -2& 1 & & \\ &\ddots & \ddots & \ddots & \\ & & 1 & -2& 1 \\ & & & 1 & -2 \end{bmatrix}. $$ The matrix $A$ is symmetric and tridiagonal. There exists a closed formula for its eigenvalues: \begin{align*} \lambda_\ell = \frac{2}{(\Delta x)^2}(\cos(\pi\ell \Delta x)-1)= \frac{-4}{(\Delta x)^2}\sin^2(\frac{\pi}{2}\ell \Delta x), \quad \ell=1,\ldots,m.
\end{align*} The eigenvalue distributions for two different matrix sizes $m=10,20$ (or $\Delta x=1/11,1/21$) are displayed in Figure \ref{fig:eulereig}. \begin{center} \includegraphics[width=15cm]{eigen10.eps}\\ \includegraphics[width=15cm]{eigen20.eps} \captionof{figure}{Distribution of eigenvalues of matrix $A$.}\label{fig:eulereig} \end{center} All eigenvalues are real (because $A$ is symmetric) and lie in the left half of the complex plane. If one insists on applying the explicit Euler's method for solving this system, then the step length $h$ should be chosen small enough that all $\lambda_\ell h$ lie in the absolute stability region of the method. Since the largest (in magnitude) eigenvalue is $\lambda_m$, absolute stability is guaranteed if the step length is chosen equal to or less than $$ \frac{2}{-\lambda_m} = \frac{(\Delta x)^2}{2\sin^2(\frac{\pi}{2}m\Delta x)}. $$ Since $\sin^2(\frac{\pi}{2}m\Delta x)<1$, it is enough to take the step length equal to or less than $\frac{1}{2}(\Delta x)^2$. This is a serious restriction for the numerical solution of such PDEs. \end{example} \vsp \begin{remark} A method is zero-stable if the origin belongs to its region of absolute stability. \end{remark} \subsection{Implicit methods} Euler's method is an explicit method in that it uses the already known information at time $t_k$ to advance the solution to time $t_{k+1}$. However, this method has a rather small stability region. The {\bf implicit Euler's method} (backward Euler's method) is obtained by approximating $y'(t_k)$ by the first-order backward difference approximation $$ y'(t_k) = \frac{y(t_{k})-y(t_{k-1})}{h} + \frac{h}{2}y''(\xi_k), \quad t_{k-1}\leqslant\xi_k\leqslant t_{k}. $$ By dropping the error term and using the approximate values $y_k$ instead of $y(t_k)$, we obtain an algebraic equation $y_k = y_{k-1}+hf(t_k,y_k)$ for $k=1,2,\ldots$.
Shifting the index by $1$, the implicit Euler's method is obtained as \begin{shaded} \vspace*{-0.3cm} \begin{equation}\label{imeuler:method} \begin{split} &y_{k+1} = y_{k} + hf(t_{k+1},y_{k+1}), \quad k=0,1,\ldots, N-1, \\ & y_0=y(t_0). \end{split} \end{equation} \vspace*{-0.3cm} \end{shaded} This scheme is implicit because we must evaluate $f$ with the argument $y_{k+1}$ before we know its value. If $f$ is a nonlinear function in $y$ then a root-finding method such as fixed-point iteration or Newton's method can be used. A good starting guess for the iteration is the solution at the previous time step or the result of one step of the explicit Euler's method. If $f$ is Lipschitz continuous and $h$ is small enough, it can be proved that $y-y_k-hf(t_{k+1},y)=0$ has a unique root. Usually, a simple iteration method is efficient for solving the nonlinear equation in each step. In step $k+1$, given an initial guess $y_{k+1}^{(0)}$, we define $y_{k+1}^{(1)},y_{k+1}^{(2)},\ldots$ by \begin{equation}\label{imeuler:iter} y_{k+1}^{(j+1)}=y_k + hf(t_{k+1},y_{k+1}^{(j)}),\quad j=0,1,2,\ldots. \end{equation} Subtracting \eqref{imeuler:iter} from \eqref{imeuler:method}, we obtain $$ y_{k+1}-y_{k+1}^{(j+1)}=h\left[f(t_{k+1},y_{k+1})-f(t_{k+1},y_{k+1}^{(j)})\right]. $$ If we assume that $f$ is Lipschitz continuous with constant $L$ then we can write $$ |y_{k+1}-y_{k+1}^{(j+1)}|\leqslant hL |y_{k+1}-y_{k+1}^{(j)}|. $$ This means that if $h$ is chosen small enough such that \begin{equation}\label{imeuler:Lh} hL< 1 \end{equation} then the iteration error converges to zero for a sufficiently good initial guess $y_{k+1}^{(0)}$. In practice, usually one step of the explicit Euler's method, i.e., $$ y_{k+1}^{(0)} = y_k+hf(t_k,y_k) $$ is used as an initial guess in each step of the implicit Euler's method. This is called a {\em predictor formula}, as it predicts the root of the implicit equation.
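The predictor-corrector procedure just described can be sketched in a few lines (Python for illustration, although the lab exercises use MATLAB; the function name is ours):

```python
import math

def implicit_euler_step(f, t_k, y_k, h, n_iter=10):
    """One step of implicit Euler: solve y = y_k + h*f(t_{k+1}, y) by the
    fixed-point iteration, started from the explicit Euler predictor.
    Assumes f is Lipschitz in y with h*L < 1 so the iteration contracts."""
    t_next = t_k + h
    y = y_k + h * f(t_k, y_k)          # predictor: one explicit Euler step
    for _ in range(n_iter):            # corrector iterations
        y = y_k + h * f(t_next, y)
    return y

# Example: y' = -y, y(0) = 1 on [0, 1]; exact solution exp(-t).
h, y, t = 0.1, 1.0, 0.0
for _ in range(10):
    y = implicit_euler_step(lambda t, u: -u, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # first-order error, about 1.8e-2
```

For this linear example each step converges to the exact implicit update $y_{k+1}=y_k/(1+h)$, and the observed error decreases linearly with $h$, as expected from the first-order accuracy.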
In practice, $h$ is chosen so small that $hL$ in \eqref{imeuler:Lh} is much smaller than $1$, to have rapid fixed-point convergence. Often only a few iterations (sometimes just one) are needed to obtain a satisfactory result. Another practical way is to take $y_{k+1}^{(0)}=y_{k}$ and apply the iteration \eqref{imeuler:iter} twice. This two-iteration procedure is equivalent to the following two-step scheme \begin{align*} z & = y_k + hf(t_{k+1},y_k) \\ y_{k+1} & = y_k + hf(t_{k+1},z), \end{align*} or, written in one line, \begin{equation}\label{imeuler:1iter} y_{k+1} = y_k + hf(t_{k+1},y_k+hf(t_{k+1},y_k)). \end{equation} This method is still of first-order accuracy but has some absolute stability limitations. In contrast, the implicit Euler's method \eqref{imeuler:method}, when applied to the test equation \eqref{lineareq_test}, gives $$ y_{k} = \frac{1}{(1-\lambda h)^k}y_0. $$ Instability never occurs if $$ \frac{1}{|1-\lambda h|}\leqslant 1. \vsp $$ Therefore, the region of absolute stability of the method is $S = \{z\in \C: |1-z|\geqslant 1\}$, which is shown on the right hand side of Figure \ref{fig:eulerregabs}. One drawback of both explicit and implicit Euler's methods is the low convergence order. Before presenting higher order schemes, let us discuss another approach for obtaining Euler's formulas. If we integrate the equation $y'(t)=f(t,y(t))$ from $t_k$ to $t_{k+1}$, we obtain \begin{equation}\label{imeuler:integral} y(t_{k+1})=y(t_k)+\int_{t_k}^{t_{k+1}}f(\tau,y(\tau))d\tau.\vsp \end{equation} The explicit Euler's method results if the integral in \eqref{imeuler:integral} is approximated by the {\em box rule} $$ \int_{a}^{b}g(\tau) d\tau \approx (b-a)g(a) \vsp $$ while the implicit Euler's method follows from the box quadrature $$ \int_{a}^{b}g(\tau) d\tau \approx (b-a)g(b). $$ More accurate quadratures can be used to obtain more accurate methods.
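Returning to the two-evaluation scheme \eqref{imeuler:1iter}, it takes just one line per step to code. A minimal Python sketch (illustrative only; the lab exercises use MATLAB, and the function name is ours). Note that both evaluations of $f$ use the time level $t_{k+1}$:

```python
def two_stage_step(f, t_k, y_k, h):
    # y_{k+1} = y_k + h f(t_{k+1}, y_k + h f(t_{k+1}, y_k)),  t_{k+1} = t_k + h
    t_next = t_k + h
    z = y_k + h * f(t_next, y_k)
    return y_k + h * f(t_next, z)

# On the test equation y' = lambda*y this reduces to
# y_{k+1} = (1 + h*lambda + (h*lambda)**2) * y_k.
lam, h = -1.0, 0.1
y1 = two_stage_step(lambda t, y: lam * y, 0.0, 1.0, h)
print(y1)  # 1 - 0.1 + 0.01 = 0.91
```

The amplification factor $1+z+z^2$ on the test equation makes the bounded stability region of this scheme easy to study.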
For instance, we can implement the {\em trapezoidal rule} (with the error term) \begin{equation}\label{trap-rule} \int_{a}^{b}g(\tau) d\tau = \frac{1}{2}(b-a)[g(a)+g(b)] -\frac{1}{12}(b-a)^3g''(\xi), \end{equation} for some $a\leqslant \xi\leqslant b$. Applying \eqref{trap-rule} to \eqref{imeuler:integral}, we obtain \begin{equation}\label{trap1} y(t_{k+1})=y(t_k)+\frac{h}{2}[f(t_k,y(t_k))+f(t_{k+1},y(t_{k+1}))]-\frac{h^3}{12}y^{(3)}(\xi_k) \end{equation} for some $t_k\leqslant \xi_k\leqslant t_{k+1}$. The second derivative in the error term of the trapezoidal rule \eqref{trap-rule} becomes the third derivative of $y$ in \eqref{trap1} because the integrand is $g(\tau)=f(\tau,y(\tau))=y'(\tau)$. By dropping the error term in \eqref{trap1} and replacing $y(t_k)$ by the approximate values $y_k$, the {\bf trapezoidal method} is obtained as \begin{shaded} \vspace*{-0.2cm} \begin{equation}\label{trap-method} \begin{split} & y_{k+1}=y_k+\frac{h}{2}[f(t_k,y_k)+f(t_{k+1},y_{k+1})], \quad k=0,1,2,\ldots\\ & y_0=y(t_0). \end{split} \end{equation} \vspace*{-0.2cm} \end{shaded} The local truncation error for this method is \begin{equation}\label{trap-trunc} \tau_{k+1} = -\frac{h^3}{12}y^{(3)}(\xi). \end{equation} It can be proved that the trapezoidal method is of second-order accuracy and that its global error satisfies $$ |e_N|\leqslant Ch^2 \vsp $$ for all sufficiently small $h$. The proof follows the same outline as the proof for the explicit Euler's method. In addition, we can easily show that the region of absolute stability of the trapezoidal method is the left half plane, as shown in Figure \ref{fig:regionabstrap}.
\begin{figure}[!th] \begin{center} \begin{tikzpicture}[scale=2] \node[color=black] at (-.5,1.2) {{\tiny Trapezoidal}}; \draw[fill=blue!10] (-3/2,-1) rectangle (0.25cm,1cm); \draw[fill=blue!0] (-1/2,-1) rectangle (0.5cm,1cm); \draw[-] (-3/2,0) -- (1/2,0) node[right] {} ; \node[color=black] at (-3/2,-1.1) {{\tiny $-2$}}; \node[color=black] at (-1,-1.1) {{\tiny $-1$}}; \node[color=black] at (-1/2,-1.1) {{\tiny $0$}}; \node[color=black] at (0,-1.1) {{\tiny $1$}}; \node[color=black] at (1/2,-1.1) {{\tiny $2$}}; \node[color=black] at (-1.7,-1) {{\tiny $-2$}}; \node[color=black] at (-1.7,-1/2) {{\tiny $-1$}}; \node[color=black] at (-1.6,0) {{\tiny $0$}}; \node[color=black] at (-1.6,1/2) {{\tiny $1$}}; \node[color=black] at (-1.6,1) {{\tiny $2$}}; \end{tikzpicture} \end{center} \caption{Absolute stability region of the trapezoidal method}\label{fig:regionabstrap} \end{figure} \begin{workout} Show that the region of absolute stability of the trapezoidal method is the left half complex plane. \end{workout} \vsp The convergence order $2$ and the absolute stability of the trapezoidal method are two advantages that make this method an important tool for solving ordinary differential equations. When $f(t,y)$ is nonlinear in $y$, the discussion for the solution of the implicit Euler's method applies to the solution of the trapezoidal method \eqref{trap-method} with a slight variation. The iteration formula \eqref{imeuler:iter} is replaced by \begin{equation}\label{imeuler:itertrap} y_{k+1}^{(j+1)}=y_k + \frac{h}{2}[f(t_k,y_k)+f(t_{k+1},y_{k+1}^{(j)})],\quad j=0,1,2,\ldots. \end{equation} The convergence condition \eqref{imeuler:Lh} is replaced by $$ \frac{hL}{2}< 1. $$ The usual choice of the initial guess $y_{k+1}^{(0)}$ for \eqref{imeuler:itertrap} is the explicit Euler solution $$ y_{k+1}^{(0)} = y_k+hf(t_k,y_k). $$ With this choice the resulting global error will still be of order $h^2$, because the one-step error of Euler's method is of order $h^2$.
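The trapezoidal iteration \eqref{imeuler:itertrap} with the Euler predictor can be sketched as follows (Python for illustration; the lab exercises use MATLAB, and the function name is ours):

```python
def trapezoidal_step(f, t_k, y_k, h, n_iter=10):
    """One trapezoidal step: solve y = y_k + (h/2)[f(t_k,y_k) + f(t_{k+1},y)]
    by fixed-point iteration, started from the explicit Euler predictor."""
    t_next = t_k + h
    fk = f(t_k, y_k)                 # f(t_k, y_k) is reused in every iterate
    y = y_k + h * fk                 # Euler predictor
    for _ in range(n_iter):
        y = y_k + 0.5 * h * (fk + f(t_next, y))
    return y

# For y' = -y the exact trapezoidal update is y_{k+1} = y_k*(1 - h/2)/(1 + h/2).
h = 0.1
y1 = trapezoidal_step(lambda t, y: -y, 0.0, 1.0, h)
print(abs(y1 - (1 - h / 2) / (1 + h / 2)))  # tiny: the iteration has converged
```

With $hL/2=0.05$, each sweep of the corrector shrinks the iteration error by a factor of $0.05$, so a handful of iterations already reaches roundoff level.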
If only one iteration of \eqref{imeuler:itertrap} is used, the resulting new scheme is \begin{equation}\label{heun-method} y_{k+1} = y_k + \frac{h}{2}[f(t_k,y_k)+ f(t_{k+1},y_k+hf(t_k,y_k))], \vsp \end{equation} which is also known as {\em Heun's method}. This method is still of second-order accuracy, but with a more restricted region of absolute stability. Heun's method is identical to one of the second-order {\em Runge-Kutta methods}. We will address various types of Runge-Kutta methods in a forthcoming section. \vsp \begin{workout} Show that the absolute stability regions of schemes \eqref{imeuler:1iter} and \eqref{heun-method} are bounded. In particular, neither of them includes the whole negative real line. \end{workout} \vsp \begin{workout} Let $\theta\in[0,1]$ and consider the $\theta$-method $$ y_{k+1}=y_k + h[(1-\theta)f(t_k,y_k)+\theta f(t_{k+1},y_{k+1})]. $$ (a) Which values of $\theta$ correspond to the explicit Euler, implicit Euler, and trapezoidal methods? (b) Separate the two cases $\theta\in[0,1/2)$ and $\theta\in [1/2,1]$. In which case does the left half-plane lie in the absolute stability region of this method? Optional: answer the above questions for the {\em generalized midpoint} method $$ y_{k+1}=y_k+hf(t_{k+\theta},(1-\theta)y_k+\theta y_{k+1}) $$ where $t_{k+\theta}=(1-\theta)t_k+\theta t_{k+1}$. \end{workout} \vsp \begin{workout}\label{wo:leapfrog} Consider the scheme $$ y_{k+1}=y_{k-1}+2hf(t_{k},y_k), $$ for solving $y'(t) = f(t,y)$. Show that the local truncation error of this scheme is of order $h^3$. The absolute stability region for this scheme is $S = \{z=\alpha+i\beta\in\C: \alpha=0, -1\leqslant\beta\leqslant 1\}$, which is a marginal stability region (no need to prove this!). This method is known as the {\em midpoint} or {\em leapfrog method}. Is the leapfrog method $A$-stable? Hint: To obtain the truncation error, replace $y_k$ by the exact values $y(t_k)$ and leave the formula with an additional term $\tau_k$.
Then use Taylor expansion to determine $\tau_k$. \end{workout} \vsp \begin{labexercise} Write MATLAB functions for the implicit Euler and trapezoidal methods. In each case, use an iterative method and/or a predictor formula to handle the nonlinearity. Then solve the examples in Lab Exercise \ref{py_ex_euler} using your codes and compare the errors and orders. Comment on your results. \end{labexercise} \vsp \begin{labexercise}\label{wo_eulersmoothness} Consider the IVP $$ y'(t)=\frac{1}{t}y(t)+(\alpha-1) t^{\alpha-1}, \quad y(0)=0, \quad t\in[0,1], $$ with $\alpha>0$. The solution is $y(t)=t^\alpha$. To have $y$ twice continuously differentiable, we need $\alpha\geqslant 2$. Use your MATLAB codes for the explicit and implicit Euler and trapezoidal methods for $\alpha=2.5,1.5,1.1$ with stepsizes $h=0.2,0.1,0.05,0.025,0.0125$. Determine the computational convergence orders. Compare with the theoretical orders and give a reason for your observation. \end{labexercise} \subsection{Taylor series methods} Euler's methods can be formulated by using a Taylor series approximation in which $y'$ is replaced by $f(t,y)$ and higher order derivatives in the series are dropped. One can of course use higher order terms, but then $y''$, $y'''$, $\ldots$ should be obtained by differentiating the differential equation $$ y'(t)=f(t,y) $$ successively. From the Taylor series expansion of order $p$ we have \begin{equation}\label{tayl-ser} y(t_{k+1}) \approx y(t_k) + hy'(t_k)+\frac{h^2}{2!}y''(t_k)+\cdots + \frac{h^p}{p!}y^{(p)}(t_k) \end{equation} where the truncation error is \begin{equation}\label{tayl-trunc} \tau_{k+1} = \frac{h^{p+1}}{(p+1)!}y^{(p+1)}(\xi_k), \quad t_k\leqslant \xi_k\leqslant t_{k+1}. \end{equation} The term $y'(t)$ in \eqref{tayl-ser} can be replaced by $f(t,y)$ as we have done in Euler's methods.
For higher order derivatives we can write \begin{equation*} \begin{split} y''(t)& = f_t + f_yf\\ y^{(3)}(t)& = f_{tt}+2f_{ty}f + f_{yy}f^2+f_y(f_t+f_yf)\\ \vdots \end{split} \end{equation*} provided that the partial derivatives of $f(t,y)$ with respect to $t$ and $y$ exist. Substituting these formulas into \eqref{tayl-ser}, we obtain \begin{equation}\label{tayl-method} y_{k+1} =y_k + hy'_k + \frac{h^2}{2}y''_k + \cdots + \frac{h^p}{p!}y^{(p)}_k, \vsp \end{equation} which is called the {\bf Taylor series method}. The derivative formulas in \eqref{tayl-method} are $$ y'_k = f(t_k,y_k), \quad y''_k = (f_t+f_yf)(t_k,y_k), \; \mbox{and so on}. $$ The formulas for higher order derivatives rapidly become too complicated, so Taylor series methods of higher order have not often been used in practice. Recently, however, the availability of symbolic manipulation and automatic differentiation systems has made these methods more feasible. If the solution $y$ and the derivative function $f(t,y)$ are sufficiently differentiable then it can be proved that the global error for the scheme \eqref{tayl-method} satisfies \begin{equation*} |e_N|\leqslant Ch^p\|y^{(p+1)}\|_\infty, \end{equation*} which means that the method is of $p$-th order accuracy. The constant $C$ is similar to the one obtained for the explicit Euler's method (the first-order Taylor method). \begin{labexercise} Construct the Taylor series methods of orders $2$ and $3$ for the IVP $$ y'(t) = [\cos y(t)]^2, \quad 0\leqslant t\leqslant 10, \quad y(0) = 0. $$ Write a MATLAB code to compute the results for stepsizes $h=0.2$, $0.1$, $0.05$, $0.025$, $0.0125$, $0.00625$. Plot the error functions in each case and calculate numerical orders. Compare the results with the Euler and trapezoidal methods. The exact solution of the above IVP is $y(t)=\tan^{-1}(t)$. Use this information to calculate errors and orders.
\end{labexercise} \vsp \subsection{Runge-Kutta methods} The calculation of higher order partial derivatives of $f(t,y)$ makes the Taylor series methods complicated and time-consuming. {\bf Runge-Kutta} methods (abbreviated as RK methods) replace the higher derivatives by additional evaluations of $f(t,y)$, which act as finite difference approximations of the derivatives, while retaining the accuracy of Taylor series methods. The RK methods are {\em one-step} but {\em multi-stage} and are fairly easy to program, not only for a scalar ODE but also for a system of ODEs. To derive a second-order RK method, consider the second-order Taylor method \begin{equation}\label{rk-tayl} y_{k+1}=y_k + hy'_k + \frac{h^2}{2}y''_k \end{equation} where $y' = f(t,y)$ and $y''=f_t + f_yf$, both evaluated at $(t_k,y_k)$. We aim to approximate $f_t+f_yf$ by expanding $f$ in a bivariate Taylor series as $$ f(t+h,y+hf) = \big(f+ hf_t+hff_y\big){(t,y)} + \mathcal O(h^2). $$ This simply shows that $$ y''(t)=(f_t + f_yf)(t,y) = \frac{1}{h}[f(t+h,y+hf(t,y))-f(t,y)] + \mathcal O(h). $$ Dropping the $\mathcal O(h)$ term and substituting in \eqref{rk-tayl}, we obtain \begin{equation}\label{rk-heun} \begin{split} y_{k+1} & = y_k + hf(t_k,y_k) + \frac{h^2}{2}\frac{1}{h}[f(t_{k}+h,y_k+hf(t_k,y_k))-f(t_k,y_k)] \\ & = y_k + \frac{h}{2}[f(t_k,y_k) + f(t_{k}+h,y_k+hf(t_k,y_k))]. \end{split} \end{equation} This method was previously derived as Heun's method in \eqref{heun-method}. As an RK2 method it is usually written in the following two-stage pattern: \begin{shaded} \vspace*{-0.3cm} \begin{equation}\label{rk2} \begin{split} z_1 & = y_k\\ z_2 & =y_k+hf(t_k,z_1) \\ y_{k+1} & = y_k + \frac{h}{2}[f(t_k,z_1)+f(t_k+h,z_2)]. \end{split} \end{equation} \vspace*{-0.2cm} \end{shaded} This is not the only explicit RK method of order $2$. As we discussed in section \ref{sect:general-explicit}, a general explicit method can be written as \begin{equation*} y_{k+1} = y_k + h\psi(t_k,y_k,h), \quad y_0=y(t_0).
\end{equation*} In the RK2 method \eqref{rk-heun} we derived $\psi(t,y,h)$ as $$ \psi(t,y,h) = \frac{1}{2}f(t,y) + \frac{1}{2}f(t+h,y+hf(t,y)). $$ This formula can be generalized to the ansatz \begin{equation}\label{rk2:ansatz} \psi(t,y,h) = b_1 f(t,y) + b_2 f(t+\alpha h,y+\beta hf(t,y)), \end{equation} with unknown coefficients $b_1,b_2, \alpha, \beta$ that can be determined such that the local truncation error \begin{equation*} \tau_{k+1} = y(t_{k+1})-[y(t_k)+h\psi(t_k,y(t_k),h)] \vsp \end{equation*} will satisfy $\tau_{k+1} = \mathcal{O}(h^3)$, just as with the Taylor method of order $2$. After some manipulations with the bivariate Taylor expansion, we obtain the relations \begin{equation*} b_2\neq 0, \quad b_1=1-b_2, \quad \alpha = \beta = \frac{1}{2b_2}\vsp \end{equation*} between the coefficients, in order to have $\tau_{k+1} = \mathcal{O}(h^3)$. Depending on the choice of $b_2$, there exists a family of RK methods of order $2$. The case $b_2=1/2$ results in the RK method \eqref{rk-heun}. Another choice, $b_2=1$, $b_1=0$ and $\alpha=\beta = \frac{1}{2}$, results in \begin{equation}\label{rk2-2} y_{k+1} = y_k + hf(t_k+\tfrac{1}{2}h, y_k + \tfrac{1}{2}hf(t_k,y_k)), \end{equation} or, in a multi-stage format, \begin{shaded} \vspace*{-0.3cm} \begin{equation}\label{rk2_3} \begin{split} z_1 & = y_k\\ z_2 & = y_k+\frac{h}{2}f(t_k,z_1) \\ y_{k+1} & = y_k + hf(t_k+\tfrac{h}{2},z_2). \end{split} \end{equation} \vspace*{-0.2cm} \end{shaded} A general explicit RK method is defined as follows. \begin{definition} An explicit RK method with $s$ stages has the form \begin{equation}\label{rk:sstageform} \begin{split} z_1 &= y_k ,\\ z_2 &= y_k + ha_{2,1}f(t_k,z_1) ,\\ z_3 &= y_k + h\big[a_{3,1}f(t_k,z_1)+a_{3,2}f(t_k+c_2h,z_2)\big] ,\\ & \vdots \\ z_s& =y_k + h\big[a_{s,1}f(t_k,z_1)+a_{s,2}f(t_k+c_2h,z_2)+\cdots+a_{s,s-1}f(t_k+c_{s-1}h,z_{s-1})\big], \\ y_{k+1} &= y_k + h\big[b_{1}f(t_k,z_1)+b_{2}f(t_k+c_2h,z_2)+\cdots+b_{s}f(t_k+c_{s}h,z_{s})\big].
\end{split} \end{equation} \end{definition} Such an RK method is fully determined by the coefficients $\{c_\ell,a_{\ell,j},b_j\}$. These coefficients are usually displayed in a table called the {\bf Butcher's tableau}\footnote{After John Charles Butcher (1933--present), a New Zealand mathematician and a specialist in numerical methods for ODEs.} \begin{equation*} \begin{array}{r|ccccc} 0=c_1 & & & & & \\ c_2 & a_{2,1} & & & & \\ c_3 & a_{3,1} & a_{3,2} & & & \\ \vdots &\vdots & \vdots & \ddots & & \\ c_s & a_{s,1} & a_{s,2} & \cdots & a_{s,s-1} & \\ \hline & b_1 & b_2 & \cdots & b_{s-1} & b_s \end{array} \vsp \end{equation*} RK methods can be expressed in the general form \eqref{onestep:general} with $$ \psi(t,y,h) = \sum_{j=1}^{s}b_{j} f(t+c_jh,z_j), \quad z_j = y+h\sum_{\ell=1}^{j-1}a_{j\ell}f(t+c_\ell h,z_\ell). $$ The consistency condition $\psi(t,y,0)=f(t,y)$ holds if \begin{align}\label{rk:consistency} \sum_{j=1}^{s} b_{j} = 1. \end{align} According to Theorem \ref{thm:converg-onesteps}, if $f$ is continuous in $t$ and Lipschitz continuous in $y$, and the condition \eqref{rk:consistency} holds, then the RK method \eqref{rk:sstageform} is convergent. Moreover, in an RK method we always assume that \begin{align}\label{rk:stagecond} \sum_{j=1}^{\ell-1} a_{\ell j} =c_\ell, \quad \ell=1,2,\ldots, s, \end{align} which ensures that the intermediate values $z_\ell$ provide approximations of order at least $1$ to the exact values $y(t_k+c_\ell h)$. Conditions \eqref{rk:stagecond} are called the {\em stage conditions} for RK methods. \vsp \begin{example} The Butcher's tableau of the RK2 method \eqref{rk2} is \begin{equation*} \begin{array}{r|cc} 0 & & \\ 1 & 1 & \\\hline & 1/2 & 1/2 \end{array} \end{equation*} while the RK2 method \eqref{rk2_3} has the Butcher's tableau \begin{equation*} \begin{array}{r|cc} 0 & & \\ 1/2 & 1/2 & \\\hline & 0 & 1 \end{array} \end{equation*} \end{example} \vsp There also exists a family of third-order RK methods.
The Butcher's tableau of one of these schemes is: \begin{equation*} \begin{array}{r|ccc} 0 & & & \\ 1/2 & 1/2 && \\ 1 &-1 & 2& \\ \hline &1/6&2/3&1/6 \end{array}\vsp \end{equation*} \begin{workout} (a) Convert the above tableau into a 3-stage RK formula. (b) Search the internet to find another RK3 method and write down its stages. \end{workout} A very popular explicit RK method is the following fourth-order scheme (RK4): \begin{shaded} \vspace*{-0.3cm} \begin{equation}\label{rk4} \begin{split} z_1 &= y_k ,\\ z_2 &= y_k + \tfrac{1}{2}hf(t_k,z_1) ,\\ z_3 &= y_k + \tfrac{1}{2}hf(t_k+\tfrac{1}{2}h,z_2) ,\\ z_4 &= y_k + hf(t_k+\tfrac{1}{2}h,z_3) ,\\ y_{k+1} &= y_k + \frac{h}{6}\big[f(t_k,z_1)+2f(t_k+\tfrac{1}{2}h,z_2)+2f(t_k+\tfrac{1}{2}h,z_3)+f(t_k+h,z_4)\big]. \end{split} \end{equation} \vspace*{-0.2cm} \end{shaded} The Butcher's tableau for this method is \begin{equation*} \begin{array}{r|cccc} 0 & & && \\ 1/2 & 1/2 &&& \\ 1/2 & 0 & 1/2&& \\ 1 &0&0&1& \\ \hline &1/6&1/3&1/3&1/6 \end{array} \end{equation*} Using a calculation similar to, but more tedious than, the one done for method \eqref{rk2}, we can show that the local truncation error of this $4$-stage method is of order $h^5$. Then, by applying the framework given in section \ref{sect:general-explicit} for analysing general explicit methods, we can show that the global error of method \eqref{rk4} satisfies $$ |e_N|\leqslant Ch^4, $$ which shows that this method is of fourth-order accuracy; for this reason it is usually called RK4. This is not the only fourth-order explicit RK method; there exists a family of such methods with different Butcher's tableaus. \vsp \begin{workout} Show that the RK4 method \eqref{rk4}, when applied to the simple differential equation $y'(t)=f(t)$ (with no dependence of $f$ on $y$), reduces to Simpson's rule for numerical integration. \end{workout} \vsp Like other explicit methods, RK methods have restricted regions of absolute stability.
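Before examining the stability regions, note that the RK4 scheme \eqref{rk4} takes only a few lines to implement. A Python sketch (illustrative; the lab exercises use MATLAB, and the function name is ours), written via the slopes $k_i=f(\cdot,z_i)$:

```python
import math

def rk4_step(f, t, y, h):
    # Stages z_1..z_4 of the classical RK4 scheme, via the slopes k_i = f(., z_i)
    k1 = f(t, y)                       # f(t_k, z_1),        z_1 = y_k
    k2 = f(t + h/2, y + h/2 * k1)      # f(t_k + h/2, z_2)
    k3 = f(t + h/2, y + h/2 * k2)      # f(t_k + h/2, z_3)
    k4 = f(t + h, y + h * k3)          # f(t_k + h, z_4)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Solve y' = -y, y(0) = 1 on [0, 1] with h = 0.1; exact solution exp(-t).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, u: -u, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # fourth-order accurate: error well below 1e-5
```

On the test equation one RK4 step multiplies $y_k$ by $1+z+z^2/2+z^3/6+z^4/24$ with $z=\lambda h$, the degree-$4$ Taylor polynomial of $e^z$, which is the source of the fourth-order accuracy observed here.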
For example, applying the RK2 formula \eqref{rk-heun} to the test equation $y'(t)=\lambda y$ gives $$ y_{k+1} = (1+\lambda h + \frac{1}{2}(\lambda h)^2) y_k, $$ which shows that the region of absolute stability for this method is $$ S = \{z\in \C: |1+z+\tfrac{1}{2}z^2|\leqslant 1\}. $$ This region is shown in Figure \ref{fig:absstab-rk} (left panel). The stability regions of an RK3 method and the RK4 method \eqref{rk4}, obtained in a similar way, are shown in the middle and right panels of Figure \ref{fig:absstab-rk}. \begin{figure}[!th] \centering \includegraphics[scale=0.5]{RK2}\includegraphics[scale=0.5]{RK3}\includegraphics[scale=0.5]{RK4}\\ \caption{Absolute stability regions of Runge-Kutta methods of orders $2$, $3$ and $4$.}\label{fig:absstab-rk} \end{figure} \begin{labexercise} Write a MATLAB code to solve the IVP $$ y'(t) = \frac{1}{1+t^2}-2[y(t)]^2, \quad 0\leqslant t\leqslant 10, \quad y(0) = 0, $$ using the RK2 and RK4 formulas with stepsizes $h=0.2,0.1,0.05,0.025,0.0125,0.00625$. Plot the error functions in each case and determine the numerical orders of convergence. The exact solution of this IVP is $y(t)=t/(1+t^2)$. Use this information to calculate errors and orders. \end{labexercise} \vsp \begin{labexercise} Use the RK2 method to solve $$ y'(t) = -y(t) + t^{0.1}(1.1+t), \quad y(0)=0, $$ whose exact solution is $y(t)=t^{1.1}$. Solve the equation on the interval $[0,5]$ and compute the solution and errors at times $t=1,2,3,4,5$. Use the different stepsizes $h=0.1,0.05,0.025,0.0125,0.00625$. Compute the order of convergence and compare with the theoretical order $2$ of the RK2 method. Explain your results. \end{labexercise} \vsp \begin{workout} What difficulty arises in attempting to use a Taylor series method of order $\geqslant 2$ to solve the equation $$ y'(t) = -y(t) + t^{0.1}(1.1+t), \quad y(0)=0. $$ What does it tell us about the solution?
\end{workout} \vsp \begin{workout} (a) Write down the RK4 method for solving the linear system of equations $\y'(t)=A\y(t)$ for $A\in\R^{n\times n}$ with initial condition $\y(0)=\y_0$. (b) How many matrix-vector multiplications should be performed in each step? (c) Optional: estimate the overall complexity of the method for approximating $\y(t_N)$. \end{workout} \vsp \section{Stiff differential equations}\label{sect:stiff} At the beginning of the 1950s, a new difficulty was discovered in the numerical solution of some practical ODEs. It has come to be known as {\bf stiffness}, and it led to some new concepts of {stability} introduced by Germund Dahlquist\footnote{Germund Dahlquist (1925--2005) was a Swedish mathematician known primarily for his early contributions to the theory of numerical solution of ODEs.} and others. Numerical methods with finite absolute stability regions (such as explicit methods) all fail to produce accurate and stable solutions for stiff problems unless the step size $h$ is chosen excessively small, which is impractical and inefficient in many situations. \subsection{What is stiffness?} It is difficult to formulate a precise definition of stiffness. One may argue that a stiff equation includes some terms that can lead to rapid variation (fast transients) in the solution. However, there exist ODEs with smooth solutions\footnote{In this section, by a `smooth solution' we mean a function without any rapid transition.} that are nevertheless known as stiff problems. This means that stiffness is not determined by the solution alone; it is a property of the ODE itself. Indeed, even if a stiff problem has a smooth solution, a slight perturbation of the solution at any time results in another solution curve that has a rapid variation. The following example from \cite{LeVeque:2007} will make this clearer. \begin{example}\label{ex:stiff1} Consider the ODE \begin{equation}\label{eq:stiff1} y'(t) =\lambda(y-\cos t)-\sin t \vsp \end{equation} from Example \ref{ex:absstab}.
For the initial value $y(0)=1$, this problem has the smooth solution $y(t)=\cos t$, independent of the value of $\lambda$. If we change the initial condition to a value $y(t_0)=y_0$ that does not lie on this curve, then the solution is $$ y(t) = e^{\lambda(t-t_0)}(y_0-\cos t_0) + \cos t. $$ If $\Re(\lambda)<0$, this function approaches $\cos t$ exponentially quickly with decay rate $\Re(\lambda)$. In Figure \ref{fig:stiff2}, different solution curves for this equation with different choices of $t_0$ and $y_0$ are plotted for $\lambda=-2$ and $\lambda=-20$. \begin{center} \includegraphics[scale=0.68]{ode8}\includegraphics[scale=0.68]{ode9} \captionof{figure}{Solution curves of a stiff problem with different initial times and initial values for $\lambda=-2$ (left) and $\lambda=-20$ (right).}\label{fig:stiff2} \end{center} We observe rapid transients in the solution curves for $\lambda=-20$. The perturbed solutions quickly approach the particular solution $y(t)=\cos t$. This problem is known as a stiff problem for values of $\lambda$ whose real part is negative and large in magnitude. \end{example} \vsp The phenomenon observed in Example \ref{ex:stiff1} causes a serious numerical difficulty even if the initial condition is chosen such that the exact solution does not exhibit any rapid transient (for example $y(t)=\cos t$ with $y(0)=1$ in ODE \eqref{eq:stiff1}). {\em This is because any numerical method is subject to local truncation and roundoff errors, which act as perturbations of the solution and move us away from the smooth solution to a solution with a rapid transient.} Numerical methods with finite absolute stability regions are unstable unless the time step is small relative to the time scale of the rapid transient. In the case of a smooth true solution it seems that a reasonably large step length would work, but the numerical method must always deal with the rapid transients introduced by truncation and roundoff errors in every time step.
Consequently, a very small step length is needed to avoid the instability. \vsp \begin{example} Consider the ODE \eqref{eq:stiff1} on the interval $[0,10]$ with initial condition $y(0)=1$. Let $\lambda=-10^4$. The numerical solution using the explicit RK4 method \eqref{rk4} with step length $h=0.00028$ blows up, as shown on the left hand side of Figure \ref{fig:stiff3}. A smaller value of $h$ such as $h = 0.00025$ leads to a stable solution, shown on the right hand side. However, this stable calculation requires a large number of function evaluations in the RK procedure. Note that the complexity increases quickly for a system of differential equations. \begin{center} \includegraphics[scale=0.68]{ode11}\includegraphics[scale=0.68]{ode10} \captionof{figure}{Numerical solution of ODE \eqref{eq:stiff1} with $\lambda=-10^4$ and initial condition $y(0)=1$ using the RK4 method: unstable solution with $h=0.00028$ (left) and a stable solution with $h=0.00025$ (right). In the left panel, the values on the $y$-axis are multiplied by the huge factor $10^{303}$.}\label{fig:stiff3} \end{center} It is better to look for other efficient numerical methods that can solve such a stiff problem stably using a much larger step size. For instance, the implicit Euler's method will do this job perfectly (write and execute the code). The trapezoidal method works much better than explicit methods but still introduces some limitations in the presence of rapid transients in the solution. \end{example} \vsp After examining some alternative statements, Lambert suggested the following definition of stiff problems \cite{Lambert:1991}. \begin{definition}\label{def:stiffness} A stiff ODE is an equation for which certain numerical methods with finite absolute stability regions are numerically unstable, unless the step size is taken to be excessively small in relation to the smoothness of the exact solution.
\end{definition} \vsp Explicit methods such as forward Euler's method and explicit RK methods (and in fact all explicit methods) are examples of numerical methods with finite absolute stability regions. Definition \ref{def:stiffness} reveals that solving a stiff problem with an explicit method (or an implicit method with a finite absolute stability region) is very costly. We also note that the stiffness may vary over the total interval of integration. \subsection{Stiff systems} Now, let us look at a system of ODEs. Consider the linear system $\y'(t)=A\y(t)+\g(t)$ on the interval $[0,b]$ with a constant matrix $A$ of size $n\times n$, and initial condition $\y(0)=\y_0$. The exact solution of this ODE is $$ \y(t) = e^{At}{\y_0} + \int_{0}^{t}e^{A(t-\tau)}{\g(\tau)}d\tau. $$ If $A$ has distinct eigenvalues $\lambda_k\in \C$ and corresponding eigenvectors $\v_k\in \C^n$, then $$ \y(t) = \sum_{k=1}^{n}c_{k}e^{\lambda_k t}\v_k + \wt \g(t) $$ where $\c=V^{-1}\y_0$ for $V=[\v_1,\ldots,\v_n]$, and $\wt \g(t)$ is the integral term (particular solution) in the solution $\y(t)$. Let us assume that $$ \Re(\lambda_k)< 0 ,\quad k=1,2,\ldots,n, $$ which implies that all terms $e^{\lambda_k t}\v_k$ go to $0$ as $t\to\infty$, so that the solution $\y$ approaches $\wt \g(t)$ asymptotically as $t\to\infty$. The term $e^{\lambda_k t}\v_k$ decreases monotonically if $\lambda_k$ is real and sinusoidally if $\lambda_k$ is complex. Thus, the term $$ \sum_{k=1}^{n}c_{k}e^{\lambda_k t}\v_k \vsp $$ can be viewed as the {\em transient solution} and the term $\wt \g(t)$ as the {\em steady state} solution. If $|\Re(\lambda_k)|$ is large then the corresponding term $c_ke^{\lambda_k t}\v_k$ decays quickly as $t$ increases and is thus called a fast transient; if $|\Re(\lambda_k)|$ is small the corresponding term $c_ke^{\lambda_k t}\v_k$ decays slowly and is called a slow transient.
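The eigen-decomposition formula above is easy to check numerically. The following Python sketch (an illustrative check only; the matrix, initial vector, and final time are arbitrary choices, and we take the homogeneous case $\g=\mathbf 0$) reconstructs $\y(t)=\sum_k c_k e^{\lambda_k t}\v_k$ from the eigenpairs of $A$ and compares it against a fine RK4 integration of $\y'=A\y$:

```python
import numpy as np

# Illustrative sketch: homogeneous linear system y' = A y (g = 0).
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])   # eigenvalues -1 (slow) and -3 (fast)
y0 = np.array([2.0, 3.0])

# Eigen-decomposition: y(t) = sum_k c_k e^{lambda_k t} v_k with c = V^{-1} y0
lam, V = np.linalg.eig(A)
c = np.linalg.solve(V, y0)

def y_exact(t):
    # scale column k of V by c_k * exp(lambda_k t), then sum the columns
    return (V * (c * np.exp(lam * t))).sum(axis=1)

# Cross-check against a fine RK4 integration of y' = A y up to t = 2
h, y = 1e-4, y0.copy()
f = lambda v: A @ v
for _ in range(int(2.0 / h)):
    k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)

assert np.allclose(y, y_exact(2.0), atol=1e-8)
assert set(np.round(lam.real, 6)) == {-1.0, -3.0}
```

The $e^{-3t}$ mode here is the fast transient and $e^{-t}$ the slow one; after the fast mode has decayed, the solution is governed entirely by the slow mode.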
Let $\overline\lambda$ and $\underline \lambda$ be defined such that $$ |\Re (\underline \lambda)|\leqslant |\Re (\lambda_k)|\leqslant |\Re(\overline\lambda)|, \quad k=1,2,\ldots,n. $$ If our aim is to reach the steady-state solution, then we must keep integrating until the slowest transient is negligible. The smaller $|\Re (\underline \lambda)|$ is, the longer we must keep integrating. If, however, the method we are using has a finite region of absolute stability, we must ensure that the step size $h$ is small enough that $\lambda_kh\in S$ holds for $k = 1,2,\ldots,n$. Clearly a large value of $|\Re (\overline \lambda)|$ implies a small step size $h$. We therefore get into a difficult situation if $|\Re (\overline \lambda)|$ is very large and $|\Re (\underline \lambda)|$ is very small; we are forced to integrate for a very long time with an excessively small step size. It seems natural to call the ratio \begin{equation*} r_S:=\frac{|\Re (\overline \lambda)|}{|\Re (\underline \lambda)|} \end{equation*} the {\em stiffness ratio}, and to use it as a measure of the stiffness of the system \cite{Lambert:1991}. While $r_S$ is often a useful quantity, one should not rely entirely on this measure to determine whether a problem is stiff. For example, a scalar problem can be stiff although $r_S$ is always $1$, since there is only one eigenvalue. Or, in a system of equations, if one eigenvalue is zero then the contribution of this eigenvalue to the exact solution is a constant term. If the moduli of the real parts of the remaining eigenvalues are not particularly large, the system is not stiff, yet the stiffness ratio is now infinite.
\vsp \begin{example}\label{ex:stiff2} Consider the following systems of ODEs \cite{Lambert:1991} \begin{equation}\label{eq:stiff-sys1} \begin{bmatrix} y'_1 \\ y'_2 \end{bmatrix} = \begin{bmatrix} -2 &1 \\ 1 &-2 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} + \begin{bmatrix} 2\sin t \\ 2(\cos t-\sin t) \end{bmatrix} ,\quad \begin{bmatrix} y_1(0) \\ y_2(0) \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} \end{equation} and \begin{equation}\label{eq:stiff-sys2} \begin{bmatrix} y'_1 \\ y'_2 \end{bmatrix} = \begin{bmatrix} -2 &1 \\ 998 &-999 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} + \begin{bmatrix} 2\sin t \\ 999(\cos t-\sin t) \end{bmatrix} ,\quad \begin{bmatrix} y_1(0) \\ y_2(0) \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}.\vsp \end{equation} Both problems have the same exact solution $$ \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} =2e^{-t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}. $$ The plots of these solutions on the interval $[0,10]$ are given in Figure \ref{fig:stiff4}. Both solutions are smooth (without any rapid transients). \begin{center} \includegraphics[scale=0.7]{ode12} \captionof{figure}{Exact solutions of systems \eqref{eq:stiff-sys1} and \eqref{eq:stiff-sys2}}\label{fig:stiff4} \end{center} We employ the explicit Euler's method for both systems. Everything is perfect for system \eqref{eq:stiff-sys1}, but unstable results are obtained for system \eqref{eq:stiff-sys2} unless the step size is chosen smaller than $0.002$. System \eqref{eq:stiff-sys2} is stiff but system \eqref{eq:stiff-sys1} is non-stiff. The eigenvalues of the matrix in \eqref{eq:stiff-sys1} are $-1$ and $-3$, and if we consider a general initial condition $y(0)=y_0$ then the exact solution is $$ \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} =c_1e^{-t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2e^{-3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.
$$ where $c_1$ and $c_2$ are determined by imposing the initial value $y_0$. The eigenvalues of the matrix in \eqref{eq:stiff-sys2} are $-1000$ and $-1$ and the exact solution for an arbitrary initial value $y_0$ is $$ \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} =c_1e^{-t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2e^{-1000t} \begin{bmatrix} 1 \\ -998 \end{bmatrix} + \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.\vsp $$ In the second solution the exponential term $e^{-1000t}$ produces a rapid transient in the solution. Although the initial value $y_0=[2,3]^T$ gives $c_1=2$ and $c_2=0$ for both systems (and therefore annihilates the rapid transient in the second system), slight perturbations (truncation and roundoff errors) in the numerical solution at different time levels produce a component along the rapid transient term and introduce the aforementioned numerical difficulties. The same happens even if we choose initial conditions such that $c_1=c_2=0$; in that case the explicit Euler's and explicit RK methods are unable to integrate even the very smooth solution $y(t) = [\sin t, \cos t]^T$ unless very small stepsizes are used. Finally, we note that the stiffness ratio is $r_S=3$ for system \eqref{eq:stiff-sys1} while it is $r_S=1000$ for system \eqref{eq:stiff-sys2}. \end{example} \vsp The situation is more complicated for nonlinear systems of equations. For the scalar case the partial derivative $\frac{\partial f}{\partial y}$ or the Lipschitz constant of $f$ with respect to $y$, and for a nonlinear system the eigenvalues of the Jacobian matrix $J_f(t,\y)$, may give insight into the stiffness. As we pointed out at the end of subsection \ref{sect-stabsol}, the stability analysis through the Jacobian matrix is only locally valid. Nevertheless, the stiffness ratio $$ r_S = \max_{t\in[t_0,b]} \frac{\max_{k}|\Re(\lambda_k(t))|}{\min_{k}|\Re(\lambda_k(t))|}, $$ where $\lambda_k(t)$ are the eigenvalues of $J_f(t,\y)$, may give an indication of the stiffness of the system $\y'(t)=f(t,\y)$.
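As a quick illustration, the following Python sketch (illustrative only; `stiffness_ratio` is a hypothetical helper, and the two matrices are the constant Jacobians of systems \eqref{eq:stiff-sys1} and \eqref{eq:stiff-sys2}) computes $r_S$ from the Jacobian eigenvalues:

```python
import numpy as np

def stiffness_ratio(J):
    """Stiffness ratio max|Re(lambda)| / min|Re(lambda)| of a Jacobian J."""
    re = np.abs(np.linalg.eigvals(J).real)
    return re.max() / re.min()

A1 = np.array([[-2.0, 1.0], [1.0, -2.0]])      # eigenvalues -1 and -3
A2 = np.array([[-2.0, 1.0], [998.0, -999.0]])  # eigenvalues -1 and -1000

assert round(stiffness_ratio(A1)) == 3
assert round(stiffness_ratio(A2)) == 1000
```

For a nonlinear system one would evaluate $J_f(t,\y)$ along the computed trajectory and take the maximum of this ratio over the integration interval, keeping in mind the local validity of the analysis.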
\begin{remark}\label{remark_stiff0} Sometimes we can detect the stiffness of a system of ODEs $\y'(t)=f(t,\y)$ by looking at the coefficients on the right-hand side. When these coefficients vary significantly in magnitude, the system is likely stiff. For example, the following system of equations for Robertson's auto-catalytic chemical reaction \begin{align*} & y_1' = -0.04y_1+10^4y_2y_3\\ & y_2' = 0.04y_1-10^4y_2y_3-3\times 10^7y_2^2\\ &y_3'= 3\times 10^7 y_2^2 \end{align*} is a stiff system of ODEs, as it contains coefficients of widely different magnitudes, ranging from $0.04$ to $3\times 10^7$. This is also the case for system \eqref{eq:stiff-sys2} in Example \ref{ex:stiff2}. \end{remark} \vsp A practical way to detect the stiffness of an ODE is to attempt to solve it using a method with a finite absolute stability region and a moderate step size, and see whether or not the computed solution blows up. \vsp \begin{workout} Consider solving Robertson's auto-catalytic chemical system of ODEs, as described in Remark \ref{remark_stiff0}, over the time interval $[0,500]$ with initial conditions $y_1(0)=1$, $y_2(0)=y_3(0)=0$, using the explicit Euler and explicit RK4 methods. Ensure that the chosen step length $h$ is sufficiently small to yield stable results. Upon plotting the solutions, provide your observations. What distinguishes the behavior of the solution $y_2$ from the others? \end{workout} \vsp \subsection{A-stability} It is clear from the considerations of this section that methods with bounded stability regions are inappropriate for stiff problems. This class of methods includes all explicit methods and even some implicit methods such as the Adams-Moulton methods.
On the other hand, an implicit method whose region of absolute stability includes the whole of the left half-plane is efficient for stiff problems, because there will be no stability restriction on the step size $h$ provided that all the eigenvalues have negative real parts, as is often the case in practice. \begin{definition} A numerical ODE solver is called {\bf A-stable} if its absolute stability region contains the whole left-half plane $\{z\in\C:\Re(z)\leqslant0\}$. \end{definition} Among the methods we have already studied, the implicit Euler's method and the trapezoidal method are A-stable. As a negative result, {\em Dahlquist's second barrier theorem} states that any A-stable linear multistep method is at most second order accurate, and in fact the trapezoidal method is the A-stable linear multistep method with the smallest truncation error. However, higher order A-stable implicit RK methods (multi-stage methods) do exist. For many stiff problems the eigenvalues are located near the negative real axis or at most in a wedge $\pi-\alpha\leqslant \mathrm{arg}(z)\leqslant \pi+\alpha$ for an angle $\alpha\in [0,\pi/2]$. For such problems, we just need the stability region to contain such a wedge rather than the whole left-half plane. See Figure \ref{fig:Aalpha}. \begin{definition} A numerical ODE solver is called {\bf A($\alpha$)-stable}, for $\alpha\in[0,\pi/2]$, if its absolute stability region contains the wedge $\{z\in\C: \pi-\alpha\leqslant \mathrm{arg}(z)\leqslant \pi+\alpha\}$.
\end{definition} \definecolor{qqqqff}{rgb}{0.8,.8,1} \begin{center} \begin{tikzpicture}[>=latex,scale=0.7] \fill[fill=qqqqff] (-5, -4) rectangle (5,4); \fill[fill=white] plot[domain=-pi:pi] (xy polar cs:angle=\x r,radius= {5/2+2*cos(\x r)}); \draw[thick,color=blue,domain=0:2*pi,samples=500,smooth] plot (xy polar cs:angle=\x r,radius= {5/2+4/2*cos(\x r)}); \draw[dashed,thick] (-.5,0) -- (-2.8,-4); \draw[dashed,thick] (-.5,0) -- (-2.8,4); \draw[<->] (-1.2,1) arc (105:180:1) ; \draw[<->] (-1.95,0) arc (180:255:1) ; \node at (-2,0.8) {$\alpha$}; \node at (-2,-0.8) {$\alpha$}; \draw[->,thick] (-5,0) -- (5,0); \draw[->,thick] (-0.5,-4) -- (-0.5,4); \end{tikzpicture} \captionof{figure}{A region of A($\alpha$)-stability}\label{fig:Aalpha} \end{center} An A-stable method is A($\pi/2$)-stable. A method is A($0$)-stable if the negative real axis itself lies in the stability region. Several numerical methods with the A($\alpha$)-stability property, appropriate for stiff ODE problems, have been developed. For example, we can mention the {\bf implicit RK} methods and the {\bf backward differentiation formulas (BDF)}, which will be introduced in sections \ref{sect:implicit_rk} and \ref{sect:bdf}. To see how to solve stiff ODEs using MATLAB, refer to section \ref{sect:matlab} below. \vsp \subsection{L-stability} As we pointed out, both the implicit Euler and trapezoidal methods are A-stable, but there is a major difference between the stability regions of these methods. The trapezoidal method is stable only in the left half-plane, whereas the implicit Euler's method is stable not only in the left half-plane but also over much of the right half-plane. See Figures \ref{fig:eulerregabs} and \ref{fig:regionabstrap}. Let us solve the stiff ODE \eqref{eq:stiff1} on the interval $[0,10]$ for $\lambda=-10^4$ with both methods. First, we assume that $y(0)=1$, which corresponds to the smooth exact solution $y(t)=\cos t$. Let $h=0.2$.
Both methods provide satisfactory results, with infinity-norm error $9.998\times 10^{-6}$ for the Euler's method and $3.346\times 10^{-7}$ for the trapezoidal method. Remember that the explicit methods failed to solve this problem numerically even at much smaller step sizes. It is to be expected that the trapezoidal method is more accurate, because its convergence order is $2$ while that of the implicit Euler's method is only $1$. However, this is not the whole story. Let us change the initial value to $y(0)=1.5$, which corresponds to the exact solution $$ y(t)= \frac{1}{2}e^{-10000t} + \cos t $$ which includes a fast transient term. The solution rapidly (on a time scale of about $10^{-4}$) decreases from the initial value $y(0)=1.5$ toward the particular solution $\cos t$. The approximate solutions with $h=0.2$ are plotted in Figure \ref{fig:stiff5}. \begin{center} \includegraphics[scale=0.7]{ode13}\includegraphics[scale=0.7]{ode14} \captionof{figure}{Numerical solution of stiff problem \eqref{eq:stiff1} with $\lambda=-10^4$ and $y(0)=1.5$ with step size $h=0.2$, the implicit Euler's method (left) and the trapezoidal method (right).}\label{fig:stiff5} \end{center} Both methods are still absolutely stable, but the result of the trapezoidal method shows unsatisfactory oscillations. For absolute stability we test on the equation $y'=\lambda y$ and obtain $y_{k+1}=R(z)y_k$ with \begin{equation*} R(z) = \frac{1}{1-z}, \quad |R(z)|\to 0 \;\mbox{as}\; z\to\infty, \end{equation*} for the implicit Euler's method and \begin{equation*} R(z) = \frac{1+\frac{1}{2}z}{1-\frac{1}{2}z}, \quad |R(z)|\to 1 \;\mbox{as}\; z\to\infty, \end{equation*} for the trapezoidal method. For problems with rapid transients, we aim for a method that can damp the transient effectively in a single time step, because we intend to use a steplength much larger than the true decay time of the transient.
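The contrast between the two damping factors can be reproduced numerically. The following Python sketch (an illustrative experiment; since the ODE is linear in $y$, both implicit updates are solved in closed form) takes ten steps of each method with $\lambda=-10^4$, $h=0.2$, and $y(0)=1.5$:

```python
import numpy as np

# Sketch: implicit Euler vs. trapezoidal on y' = lam*(y - cos t) - sin t.
lam, h, y0 = -1e4, 0.2, 1.5

def ie_step(t, y):
    # implicit Euler step, solved in closed form for this linear ODE
    t1 = t + h
    return (y - lam*h*np.cos(t1) - h*np.sin(t1)) / (1 - lam*h)

def trap_step(t, y):
    # trapezoidal step, also in closed form
    t1 = t + h
    num = ((1 + lam*h/2)*y - lam*h/2*(np.cos(t) + np.cos(t1))
           - h/2*(np.sin(t) + np.sin(t1)))
    return num / (1 - lam*h/2)

y_ie, y_tr, t = y0, y0, 0.0
errs_ie, errs_tr = [], []
for _ in range(10):
    y_ie, y_tr = ie_step(t, y_ie), trap_step(t, y_tr)
    t += h
    errs_ie.append(abs(y_ie - np.cos(t)))
    errs_tr.append(abs(y_tr - np.cos(t)))

# implicit Euler damps the transient in the very first step ...
assert max(errs_ie) < 1e-2
# ... while the trapezoidal iterates keep a bounded oscillation of size ~0.5
assert all(0.4 < e < 0.6 for e in errs_tr)
```

Both iterations stay bounded, as absolute stability guarantees, but only the implicit Euler iterates land on the steady state solution $\cos t$ after one step.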
To illustrate this point, refer to Figure \ref{fig:stiff6}, which provides a close-up comparison of the exact and numerical solutions obtained with each method over the time interval $[0,2]$. \begin{center} \includegraphics[scale=0.8]{ode15}\includegraphics[scale=0.8]{ode16} \captionof{figure}{Closeup of solutions: the implicit Euler's method (left) and the trapezoidal method (right).}\label{fig:stiff6} \end{center} At the first time step, the implicit Euler's method damps very effectively toward the steady state solution $\cos t$ and continues to produce very accurate results thereafter. In fact, this method, when applied to the stiff ODE \eqref{eq:stiff1}, yields $$ y_{k+1} = \frac{1}{1-\lambda h} y_k-\frac{\lambda h}{1-\lambda h}\cos t_{k+1}-\frac{h}{1-\lambda h}\sin t_{k+1}, $$ which means $$ y_{k+1}\approx \cos{t_{k+1}}\vsp $$ because $\lambda h = -10^4\times 0.2=-2000$, so that \begin{equation*}\frac{1}{1-\lambda h} = \frac{1}{2001}\doteq 0.0005\approx 0 \quad\mbox{and}\quad \frac{-\lambda h}{1-\lambda h} = \frac{2000}{2001}\approx 1. \end{equation*} This implies that already after the first time step we approximately land on the steady state solution $\cos t$. The trapezoidal method is also stable and the results stay bounded; however, we have \begin{equation*}\frac{1+\frac{1}{2}\lambda h}{1-\frac{1}{2}\lambda h} = -\frac{999}{1001}\doteq -0.9980\approx -1. \end{equation*} Applied to the stiff ODE \eqref{eq:stiff1}, this method gives $$ y_{k+1} = \frac{1+\frac{1}{2}\lambda h}{1-\frac{1}{2}\lambda h}y_k - \frac{\frac{1}{2}\lambda h}{1-\frac{1}{2}\lambda h}[\cos t_{k+1}+\cos t_k]-\frac{\frac{1}{2} h}{1-\frac{1}{2}\lambda h}[\sin t_{k+1}+\sin t_k], $$ or, $$ y_{k+1}\approx -y_k + [\cos t_{k+1}+\cos t_k]. \vsp $$ At the first time step, we have $y_1\approx -1.5 + [\cos 0+\cos 0.2]\approx 0.48$, which falls below the steady state solution. Moving to the second time step, $y_2 \approx -0.48 + [\cos 0.2+\cos 0.4] \approx 1.42$, overshooting the steady-state solution.
This pattern persists in subsequent time steps, resulting in the zigzag-shaped solution observed on the right-hand side of Figure \ref{fig:stiff5}. The results of the trapezoidal method can be improved if a small enough steplength $h$ is used. In Figure \ref{fig:stiff7}, numerical solutions with $h=10^{-2}$ and $h=10^{-4}$ are shown. \begin{center} \includegraphics[scale=0.7]{ode17}\includegraphics[scale=0.7]{ode18} \captionof{figure}{Numerical solution of stiff problem \eqref{eq:stiff1} with $\lambda=-10^4$ and $y(0)=1.5$ using the trapezoidal method, with $h=10^{-2}$ (left) and $h=10^{-4}$ (right).}\label{fig:stiff7} \end{center} With step length $h=10^{-2}$ we still have oscillations near the initial time, while with step length $h=10^{-4}$ a perfect numerical solution is observed. We note that $h=10^{-4}$ matches the time scale of the transient term. This means that the trapezoidal method works perfectly provided that the step size is chosen equal to or smaller than the time scale of the transient terms in the solution. However, our primary aim was to develop efficient numerical methods that are able to integrate stiff problems with a rather large step length. What makes the implicit Euler's method different from the trapezoidal method is the following property, which the implicit Euler's method possesses while the trapezoidal method does not. \begin{definition} A one-step method is called {\bf L-stable} if it is A-stable and $\displaystyle \lim_{z\to\infty}|R(z)| = 0$. \end{definition} The implicit Euler method is L-stable. We will introduce some higher order L-stable methods in the forthcoming sections. \section{Adaptive time stepping} All solvers presented up to here use a single stepsize $h$ in all iterations.
It is desirable to have an algorithm that can adjust the stepsize $h_k$ in each step $k$ to ensure that the local truncation errors at all steps remain below a certain tolerance: \begin{equation}\label{adapt:localerr} |\tau_k| \leqslant \varepsilon,\; \mbox{for all }\; k. \end{equation} In certain parts of the domain, where the derivatives of $y$ have small amplitudes, larger stepsizes may suffice, resulting in a reduced computational expense due to a lower number of function evaluations. Since $y$ and its derivatives are unknown, the truncation error cannot be calculated analytically. Instead, we need an {\bf estimate} for $\tau_k$ at each step, and a way to select a new stepsize that ensures the estimated error is acceptably small. \begin{remark} It would be more reasonable to keep the {\bf global error} under control rather than the {\bf local errors} \eqref{adapt:localerr}, because bounding the $\tau_k$'s individually does not lead to a natural bound on the global error; it ignores the propagation of the error in each step. But local control is simple and good enough in practice if we take local tolerances somewhat smaller than what seems necessary. See the following. \end{remark} Now we address the question of how to estimate the local errors and compute the new time step. \subsection{Using two methods of different order} A good strategy is to use two methods of different orders. Informally, we use the more accurate one as an estimate for the exact solution, in order to estimate the error of the less accurate method. Assume that for a given stepsize $h$ we have two methods: \begin{itemize} \item Method A: with local order $p$ and truncation error $\tau_{k+1}(h)$ to compute $y_{k+1}$, \item Method B: with local order $p+1$ to compute a more accurate value $\tilde y_{k+1}$. \end{itemize} Then the estimated local truncation error is $$ e_{k+1}: = \tilde y_{k+1} - y_{k+1}. $$ If we suppose that the value at time step $k$ is exact, i.e.
$y_k = y(t_k)$, then by the definition of local truncation errors we have \begin{equation}\label{adapt:locloc} \begin{split} y(t_{k+1}) &= y_{k+1} + \tau_{k+1}(h)\\ y(t_{k+1}) &= \tilde y_{k+1} + \mathcal O(h^{p+1}). \end{split} \end{equation} Since the local error of Method A is $\mathcal O(h^{p})$, we can expand the truncation error $\tau_{k+1}(h)$ as $$ \tau_{k+1}(h) = C h^{p} + \mathcal O(h^{p+1}), $$ for a constant $C$. Subtracting the two equations in \eqref{adapt:locloc} and taking absolute values, we obtain $$ |e_{k+1}|= |\tilde y_{k+1}-y_{k+1}| = |C|h^p + \mathcal O(h^{p+1}). $$ Dropping the term $\mathcal O(h^{p+1})$, we approximately have \begin{equation}\label{adapt:estimtau} |\tau_{k+1}(h)|\approx |C|h^p \approx |e_{k+1}|. \end{equation} Consequently, if the error is small, i.e., \begin{equation}\label{adapt:errestimate} |e_{k+1}|\leqslant \varepsilon \end{equation} then the step is {\em accepted} and the more accurate solution $\tilde y_{k+1}$ is assigned as an approximation for $y(t_{k+1})$. The algorithm then goes to the next time step $t_{k+1}+h$. Otherwise the tentative value $\tilde y_{k+1}$ is {\em rejected} and the step must be redone with a smaller steplength $h_{new}$. To estimate $h_{new}$ we use the fact that it must satisfy $$ |\tau_{k+1}(h_{new})|\approx |C|h_{new}^p \leqslant \varepsilon. $$ Taking the ratio of this and the estimate \eqref{adapt:estimtau} we obtain $$ \left(\frac{h_{new}}{h}\right)^{p} \lesssim \frac{\varepsilon}{|e_{k+1}|} $$ which gives an estimate for $h_{new}$ as $$ h_{new} \lesssim h \left( \frac{\varepsilon}{|e_{k+1}|} \right)^{1/p}. $$ In practice, because this is just an estimate, one puts in an extra safety coefficient, typically something like \begin{equation}\label{adapt:hnewformula} h_{new} := 0.8 h \left( \frac{\varepsilon}{|e_{k+1}|} \right)^{1/p}.\vsp \end{equation} This formula decreases the stepsize if the error is large.
This process should be repeated, replacing $h$ by $h_{new}$, until \eqref{adapt:errestimate} is satisfied and the step is accepted. Sometimes other controls are added, like not decreasing or increasing $h$ by too much per time step. \vsp \subsection{Embedded RK methods} A disadvantage of the above adaptive scheme is that each step involving error estimation incurs a cost that is roughly twice that of a fixed-step method, because two methods are run. Nonetheless, by carefully selecting the methods, we can generate some computational overlap and thereby reduce the workload. One approach is to use an {\bf embedded pair} of RK formulas where most of the $f$ values are the same for both methods A and B. For example, suppose we wish to create a pair for estimating the error in Euler's method (RK1) as Method A. We also need a method of order $2$ (local order $3$) as Method B. Recall the RK2 method \eqref{rk2-2}, which can be written as \begin{align*} f_1& = f(t_k,y_k)\\ f_2& = f\Big(t_k+\frac{h}{2}, y_k + \frac{h}{2}f_1\Big)\\ \tilde y_{k+1} &= y_k + hf_2 \end{align*} Then the value $y_{k+1}$ is computed via the Euler's method by $$ y_{k+1} = y_k + hf_1, $$ which requires essentially no extra function evaluation; the value of $f_1$ is used for both. Then, following the rule \eqref{adapt:hnewformula}, the adapted stepsize is computed as $$ h_{new} = 0.8 h \left( \frac{\varepsilon}{|e_{k+1}|} \right)^{1/2} $$ where $e_{k+1}=\tilde y_{k+1}-y_{k+1} = h(f_2-f_1)$. The value $y_{k+1}$ does not actually need to be computed, because the error estimate $e_{k+1}$ is obtained directly from the $f$ values, and the more accurate value $\tilde y_{k+1}$ is selected as the approximation for $y(t_{k+1})$. The initial stepsize $h = \varepsilon^{1/3}$ can be used because the local error in the first step is supposed to be of order $h^3$. The MATLAB code for such a scheme is given here.
\begin{lstlisting}[style = matlab]
function [T,Y] = ODE12(f,y0,tspan,tol)
t = tspan(1); Y = y0; T = t;
h = tol^(1/3);                 % initial stepsize: local error is O(h^3)
while t <= tspan(2)
    f1 = f(t,y0);
    f2 = f(t+h/2,y0+h/2*f1);
    e = norm(h*(f2-f1),inf);   % error estimate |RK2 value - Euler value|
    if e < tol                 % accept the step; keep the RK2 value
        yt = y0 + h*f2;
        y0 = yt;
        t = t + h;
        Y = [Y yt];
        T = [T t];
        h = tol^(1/3);         % reset the stepsize
    else                       % reject the step; shrink the stepsize (p = 2)
        h = 0.8*h*(tol/e)^(1/2);
    end
end
\end{lstlisting} \begin{example} As an example, consider the IVP \begin{align*} &y' = \frac{1}{y^2+0.01}, \quad 0\leqslant t\leqslant 3, \\ & y(0)= 0 \end{align*} We solve this equation using the above MATLAB code with tolerance $\varepsilon = 10^{-3}$. \begin{lstlisting}[style = matlab1]
[t,y] = ODE12(@(t,y) 1/(y^2+0.01),0,[0 3],1e-3);
plot(t,y(1,:),'-ob','MarkerFaceColor','b')
\end{lstlisting} \begin{center} \includegraphics[scale=.7]{fig_adapt2} \captionof{figure}{Numerical solution using the adaptive scheme}\label{fig:adapt2} \end{center} A plot of the solution is given in Figure \ref{fig:adapt2}. The scheme adapts much smaller stepsizes at the beginning of the time interval (where the solution varies rapidly) but remarkably larger stepsizes in the remaining parts. This saves memory and time, especially when the underlying problem is a large system of equations over a long time interval. \end{example} \vsp Embedded methods of higher order can be constructed by the right choice of coefficients. One popular embedded pair is the {\bf Runge-Kutta-Fehlberg} method, which uses fourth-order and fifth-order formulas that share most of the $f$ values. A formula of this form with stepsize selected by \eqref{adapt:hnewformula} is the strategy employed, for instance, by MATLAB's \texttt{ode45}.
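For readers who prefer Python, the same embedded Euler/RK2 scheme can be transcribed as follows (an illustrative sketch mirroring the MATLAB function above, with the small addition of clipping the last step so the integration ends exactly at the final time):

```python
import numpy as np

# Python sketch of the embedded Euler/RK2 adaptive scheme (cf. ODE12 above).
def ode12(f, y0, t0, tend, tol):
    t, y = t0, float(y0)
    T, Y = [t], [y]
    h = tol ** (1/3)              # initial step: local error is O(h^3)
    while t < tend:
        h = min(h, tend - t)      # clip so we do not step past tend
        f1 = f(t, y)
        f2 = f(t + h/2, y + h/2 * f1)
        e = abs(h * (f2 - f1))    # = |y_RK2 - y_Euler|, the error estimate
        if e < tol:               # accept: keep the more accurate RK2 value
            y = y + h * f2
            t = t + h
            T.append(t); Y.append(y)
            h = tol ** (1/3)      # reset the step, as in the MATLAB code
        else:                     # reject: shrink the step (p = 2)
            h = 0.8 * h * (tol / e) ** 0.5
    return np.array(T), np.array(Y)

T, Y = ode12(lambda t, y: 1.0 / (y**2 + 0.01), 0.0, 0.0, 3.0, 1e-3)
steps = np.diff(T)
# steps are much smaller near t = 0, where the solution varies rapidly
assert steps.max() > 100 * steps[0]
# the computed y(3) is close to the true value (about 2.076)
assert 2.0 < Y[-1] < 2.15
```

Resetting $h$ after every accepted step, as in the MATLAB code, is wasteful but simple; production codes instead grow the step using the same formula \eqref{adapt:hnewformula}.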
\section{Implicit Runge-Kutta methods}\label{sect:implicit_rk} While high convergence order and ease of implementation are advantages of explicit RK methods and make them popular for solving various types of ODEs, their bounded stability regions render them impractical for a variety of important and challenging problems, such as stiff differential equations. Therefore, it is natural to develop, among other techniques, a class of implicit RK methods. An $s$-stage {\bf implicit Runge-Kutta} method has the form \begin{align*} z_\ell & =y_k + h\sum_{j=1}^{s}a_{\ell j} f(t_k+c_jh,z_j), \quad \ell=1,2,\ldots, s, \\ y_{k+1} & = y_k + h\sum_{j=1}^{s}b_{j} f(t_k+c_jh,z_j). \end{align*} The Butcher's tableau for this formula has the form \begin{equation*} \begin{array}{r|cccc} c_1 & a_{1,1} & a_{1,2} &\cdots & a_{1,s} \\ c_2 & a_{2,1} & a_{2,2} &\cdots & a_{2,s} \\ \vdots &\vdots & \vdots & & \vdots \\ c_s & a_{s,1} & a_{s,2} &\cdots & a_{s,s} \\ \hline & b_1 & b_2 & \cdots & b_s \end{array} \end{equation*} Here, the diagonal and upper triangular parts of the tableau may have nonzero values. To advance from time level $t_k$ to $t_{k+1}$ using an $s$-stage implicit RK method we should solve a system of $s$ nonlinear equations for the $s$ unknowns $z_1,z_2,\ldots,z_s$. For a system $\y'=f(t,\y)$ of $m$ differential equations we must solve a system of $sm$ equations in $sm$ unknowns at each time step. There are several ways to derive the coefficients $\{b_j,c_\ell,a_{\ell, j}\}$ for a given accuracy, provided the number of stages is sufficiently large. The dominant approach converts the IVP into the integral equation $$ y(t) = y(t_k) + \int_{t_k}^{t} f(\tau,y(\tau))d\tau, \quad t\in[t_k,t_{k+1}],\vsp $$ uses a polynomial interpolant of $f(\tau,y(\tau))$ on a predefined set of $s$ interpolation points $\tau_1,\ldots,\tau_s\in[t_k,t_{k+1}]$, and collocates the resulting equation at $t=\tau_1,\ldots,\tau_s$ to predict $z_1,z_2,\ldots,z_s$ and $y_{k+1}$.
This approach is called {\em collocation}. We omit the derivation details and instead present the Butcher's tableaus for some formulas. An extensive theory has been developed by Butcher for analyzing these methods. The tableau for a two-stage formula with good convergence and stability properties is \begin{equation*} \begin{array}{r|cc} (3-\sqrt 3)/6 & 1/4 & (3-2\sqrt 3)/12 \\ (3+\sqrt 3)/6 & (3+2\sqrt 3)/12 & 1/4 \\ \hline & 1/2 & 1/2 \end{array} \end{equation*} This method is also called the {\em two-stage Gauss method} because the transformed Gauss-Legendre points $\{-\tfrac{\sqrt 3}{3},\tfrac{\sqrt 3}{3}\}$ are used for the polynomial interpolation. It can be shown that the local truncation error of this method has size $\mathcal O(h^5)$, and the global error behaves like $\mathcal O(h^4)$. \begin{workout}\label{wo3-guassstab} Show that the absolute stability region of the two-stage Gauss method is $$ S= \left\{z\in \C: \left|\frac{1+\tfrac{1}{2}z+\tfrac{1}{12}z^2}{1-\tfrac{1}{2}z+\tfrac{1}{12}z^2}\right| \leqslant 1 \right\}. $$ Show that the negative part of the real line falls in $S$. More generally, show that the left-half plane falls in $S$. \end{workout} The nice stability feature of the two-stage Gauss method, as outlined in Workout \ref{wo3-guassstab}, comes at the cost of solving a nonlinear system of algebraic equations at each time step. In general, a fully implicit RK method, where each $z_\ell$ value depends on all the $z_j$ values, can be costly to implement for a system of ODEs. This is because a nonlinear system of $sm$ equations in $sm$ unknowns must be solved at each time step. However, a subclass of implicit methods that is simpler to implement is the class of {\bf diagonally implicit Runge–Kutta} methods (DIRK methods), in which $a_{\ell, j}=0$ for $j>\ell$, i.e., $z_\ell$ depends only on $z_j$ for $j=1,\ldots,\ell$.
An $s$-stage DIRK method, when applied to a system of $m$ nonlinear ODEs, requires solving a sequence of $s$ nonlinear systems, each of size $m$, rather than a coupled set of $sm$ equations. As an example of a $3$-stage DIRK method, we can mention the method with the following Butcher tableau: \begin{equation*} \begin{array}{r|ccc} 0&0 & & \\ 1/2& 1/4& 1/4 &\\ 1 & 1/3 & 1/3 & 1/3\\ \hline &1/3 &1/3 &1/3 \end{array} \end{equation*} \vsp This method is second-order accurate, and is also known as the TR-BDF2 method. \vsp \section{Multistep methods}\label{sect:adams} All methods we have considered so far are {\em single-step} or {\em one-step} methods, since at a typical step $y_{k+1}$ is determined solely from $y_k$. In this section we study {\em multistep methods}, where the solution $y_{k+1}$ depends on more than one previous value, i.e., on $y_k,y_{k-1},\ldots$. Three families of such methods are widely used: the {\bf Adams-Bashforth (AB)} and {\bf Adams-Moulton (AM)} methods and the {\bf backward differentiation formulas (BDF)}. Again, we reformulate the solution of the IVP $y'(t)=f(t,y)$ at $t=t_{k+1}$ as \begin{equation}\label{multi-eqintegral} y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}}f(t,y(t)) dt. \end{equation} A numerical scheme for computing $y(t_{k+1})$ can be obtained if the integral on the right-hand side of \eqref{multi-eqintegral} is approximated by a numerical quadrature. A numerical quadrature for computing a definite integral of the form $$ \int_{t_{k}}^{t_{k+1}}g(t)dt \vsp $$ can be developed by replacing $g$ with an interpolating polynomial $p$ of a certain degree. In a $q$-step AB method we assume that $p$ interpolates $g$ at the points $\{t_{k-q},\ldots, t_{k-1},t_k\}$, while in a $q$-step AM method at the points $\{t_{k-q+1},\ldots,t_k,t_{k+1}\}$. In AB methods the solution $y_{k+1}$ depends only on the already computed values $y_k,y_{k-1},\ldots, y_{k-q}$; thus AB methods are explicit.
In AM methods, on the other hand, $y_{k+1}$ depends on previous values and on $y_{k+1}$ itself; thus AM methods are implicit. \subsection{Adams-Bashforth methods} Let $q=1$. The linear interpolant of $g$ at the points $\{t_{k-1},t_{k}\}$ then is \begin{equation*} p_1(t) = \frac{1}{h}[(t_k-t)g(t_{k-1})+(t-t_{k-1})g(t_k)], \end{equation*} with error function \begin{equation*} e(t) = g(t)- p_1(t) = \frac{1}{2}(t-t_{k-1})(t-t_{k})g''(\zeta_k), \vsp \end{equation*} for some $t_{k-1}\leqslant \zeta_k\leqslant t_{k+1}$. Integrating over $[t_k,t_{k+1}]$ yields $$ \int_{t_{k}}^{t_{k+1}}g(t)dt = \int_{t_{k}}^{t_{k+1}}p_1(t)dt + \int_{t_{k}}^{t_{k+1}}e(t)dt = \frac{h}{2}[3g(t_k)-g(t_{k-1})] + \frac{5h^3}{12}g''(\xi_k) $$ for some $t_{k-1}\leqslant \xi_k\leqslant t_{k+1}$. Applying this to \eqref{multi-eqintegral} gives \begin{equation}\label{multi-ab1ex} y(t_{k+1})=y(t_k) + \frac{h}{2}\big[3f(t_k,y(t_k))-f(t_{k-1},y(t_{k-1}))\big] + \frac{5h^3}{12}y'''(\xi_k). \end{equation} Dropping the error term and replacing the exact values $y(t_k)$ by the approximate values $y_k$, we obtain the $2$-step AB formula \begin{shaded} \vspace*{-0.1cm} \begin{equation}\label{multi-ab2} y_{k+1}=y_k + \frac{h}{2}\big[3f(t_k,y_k)-f(t_{k-1},y_{k-1})\big], \quad k=1,2,\ldots. \end{equation} \vspace*{-0.3cm} \end{shaded} At the initial level $k=1$, computing $y_2$ requires both $y_0$ and $y_1$, yet we only have $y_0$ from the initial value. The value of $y_1$ must be computed using another method. If $y_1$ is obtained in such a way that $|y_1-y(t_1)|\leqslant c_1h^2$ then it can be proved that the global error of the method \eqref{multi-ab2} satisfies $$ |e_N|\leqslant Ch^2 \vsp $$ provided that $h$ is sufficiently small, $f(t,y)$ is Lipschitz continuous and $y$ is $3$ times continuously differentiable. See section \ref{sect:error_anal_multi}. To calculate $y_1$ we have at least two possibilities from the previous sections.
The explicit Euler's method gives $$ y_1 = y_0 + hf(t_0,y_0) $$ with error $|y(t_1)-y_1|\leqslant c_1h^2$, and the RK2 method $$ y_1 = y_0 + \frac{h}{2}[f(t_0,y_0) + f(t_{0}+h,y_0+hf(t_0,y_0))] $$ with error $|y(t_1)-y_1|\leqslant c_1h^3$, which is more than adequate. The $3$-step AB method is obtained by interpolating $g$ at the points $\{t_{k-2},t_{k-1},t_k\}$ and then integrating over $[t_k,t_{k+1}]$ as required in \eqref{multi-eqintegral}. The interpolant is $$ p_2(t)= \ell_0(t)g(t_{k})+\ell_1(t)g(t_{k-1})+\ell_2(t)g(t_{k-2}) $$ with Lagrange functions \begin{align*} \ell_0(t) & = \frac{1}{2h^2}(t-t_{k-1})(t-t_{k-2}), \\ \ell_1(t) & = -\frac{1}{h^2}(t-t_{k})(t-t_{k-2}), \\ \ell_2(t) & = \frac{1}{2h^2}(t-t_{k})(t-t_{k-1}), \end{align*} and interpolation error $$ e(t)=g(t)-p_2(t) = \frac{1}{6}(t-t_{k})(t-t_{k-1})(t-t_{k-2})g'''(\zeta_k) $$ for some $\zeta_k=\zeta_k(t)$ in the smallest interval containing $t_{k-2}$, $t_k$ and $t$. Exact integration of the polynomial $p_2$ and the error term $e$ reveals that $$ \int_{t_k}^{t_{k+1}}g(t)dt = \frac{h}{12}\left[23g(t_k)-16g(t_{k-1})+5g(t_{k-2}) \right]+\frac{3h^4}{8}g'''(\xi_k) \vsp $$ for some $t_{k-2}\leqslant \xi_k\leqslant t_{k+1}$. Using this quadrature for \eqref{multi-eqintegral} and dropping the error term, we obtain the $3$-step AB method \begin{shaded} \vspace*{-0.1cm} \begin{equation}\label{AB3} y_{k+1}=y_k + \frac{h}{12}\left[23y'_k -16y'_{k-1}+5y'_{k-2} \right], \quad k=2,3,4,\ldots, \end{equation} \vspace*{-0.2cm} \end{shaded} where $y'_k = f(t_k,y_k)$. In this formula the values $y_1$ and $y_2$ must be obtained separately by other methods with errors at most of order $h^3$ (such as the RK2 method) to keep the global error of \eqref{AB3} of order $h^3$. Higher order AB methods can be derived similarly. In Table \ref{tb:ABmethods} the AB methods of order $1$ through $4$ are listed. Local truncation errors are given in the last column. See section \ref{sect:error_anal_multi} for an error analysis.
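The construction above translates directly into code. Below is a short Python sketch of the $3$-step AB method \eqref{AB3} (the test problem $y'=-y$, $y(0)=1$ in the usage note is our own illustrative choice); the startup values $y_1,y_2$ are supplied by RK2, so their errors are $\mathcal{O}(h^3)$ as required:

```python
import numpy as np

def ab3(f, t0, y0, h, n):
    """3-step Adams-Bashforth; startup values y_1, y_2 are computed by RK2."""
    t = t0 + h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(2):                       # RK2 startup steps, local error O(h^3)
        s1 = f(t[k], y[k])
        s2 = f(t[k] + h, y[k] + h * s1)
        y[k + 1] = y[k] + 0.5 * h * (s1 + s2)
    fv = [f(t[k], y[k]) for k in range(3)]   # stored f-values
    for k in range(2, n):                    # only one new f-evaluation per step
        y[k + 1] = y[k] + h / 12 * (23 * fv[2] - 16 * fv[1] + 5 * fv[0])
        fv = [fv[1], fv[2], f(t[k + 1], y[k + 1])]
    return t, y
```

Running it on $y'=-y$ up to $t=1$ reproduces $e^{-1}$ with an error of order $h^3$; halving $h$ reduces the error by a factor of about $8$.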
\begin{table}[!th] \centering \caption{Adams-Bashforth methods. Here by $y'_k$ we mean $f(t_k,y_k)$.}\label{tb:ABmethods} \begin{tabular}{lclc} \hline $q$ & Order & $\qquad\qquad \qquad$ Method & $\tau_{k+1}$ \\ \hline \\ $0$ & $1$ & $y_{k+1}= y_k+hy'_k$& $\tfrac{1}{2}h^2y''(\xi_k)$ \\\\ $1$ & $2$ & $y_{k+1}= y_k+\tfrac{1}{2}h[3y'_k-y'_{k-1}]$& $\tfrac{5}{12}h^3y'''(\xi_k)$ \\\\ $2$ & $3$ & $y_{k+1}= y_k+\tfrac{1}{12}h[23y'_k-16y'_{k-1}+5y'_{k-2}]$&$ \tfrac{3}{8}h^4y^{(4)}(\xi_k)$ \\\\ $3$ & $4$ & $y_{k+1}= y_k+\tfrac{1}{24}h[55y'_k-59y'_{k-1}+37y'_{k-2}-9y'_{k-3}]$& $\tfrac{251}{720}h^5y^{(5)}(\xi_k)$ \\\\ \hline \end{tabular} \end{table} Compared to Runge-Kutta methods of the same order of accuracy, multistep methods require fewer evaluations of $f$ at each time step. For instance, the explicit RK4 method \eqref{rk4} needs $4$ function evaluations per time step, whereas the explicit $4$-step AB method needs only $1$ new function evaluation per time step, provided that the previous values of $f$ are reused. \subsection{Adams-Moulton methods} The implicit Euler's method can be obtained by setting $q=0$, i.e., by using the constant interpolation $p_0(t)=g(t_{k+1})$ in the AM process. Moreover, with $q=1$ the AM method is identical to the trapezoidal method \eqref{trap-method}, because the resulting quadrature on $[t_k,t_{k+1}]$ is just the well-known trapezoidal rule, obtained by linear interpolation at the points $\{t_k,t_{k+1}\}$. For $q=2$ the interpolant $p_2$ is set up on the points $\{t_{k-1},t_{k},t_{k+1}\}$ and the integration is applied on the interval $[t_k,t_{k+1}]$. The resulting method is the $2$-step AM method listed in the third row of Table \ref{tb:AMmethods}. Other AM methods up to order $4$ are given in this table with the corresponding truncation errors in the last column. As we observe, the powers of $h$ in the local truncation errors are the same as those of their counterparts in the table of AB methods.
However, the constants in front of the powers of $h$ are remarkably smaller in the AM methods. The error analysis is given in section \ref{sect:error_anal_multi} below. \begin{table}[!th] \centering \caption{Adams-Moulton methods. Here by $y'_k$ we mean $f(t_k,y_k)$.}\label{tb:AMmethods} \begin{tabular}{lclc} \hline $q$ & Order & $\qquad\qquad \qquad$ Method & $\tau_{k+1}$ \\ \hline \\ $0$ & $1$ & $y_{k+1}= y_k+hy'_{k+1}$& $-\tfrac{1}{2}h^2y''(\xi_k)$ \\\\ $1$ & $2$ & $y_{k+1}= y_k+\tfrac{1}{2}h[y'_{k+1}+y'_{k}]$& $-\tfrac{1}{12}h^3y'''(\xi_k)$ \\\\ $2$ & $3$ & $y_{k+1}= y_k+\tfrac{1}{12}h[5y'_{k+1}+8y'_{k}-y'_{k-1}]$&$ -\tfrac{1}{24}h^4y^{(4)}(\xi_k)$ \\\\ $3$ & $4$ & $y_{k+1}= y_k+\tfrac{1}{24}h[9y'_{k+1}+19y'_{k}-5y'_{k-1}+y'_{k-2}]$& $-\tfrac{19}{720}h^5y^{(5)}(\xi_k)$ \\\\ \hline \end{tabular} \end{table} The earlier discussion on using other methods of consistent accuracy to compute the first few initial values for the AB methods applies here to the AM methods as well. Since AM methods are implicit, an initial guess $y_{k+1}^{(0)}$ and an iteration on $y_{k+1}$ are needed in each time step. A natural choice for the initial guess is the solution of an AB method of the same order. For example, to implement the AM method of order $3$ we may write \begin{align*} y_{k+1}^{(0)} & =y_k+\tfrac{1}{12}h[23y'_k-16y'_{k-1}+5y'_{k-2}], \\ y_{k+1}^{(1)} & =y_k+\tfrac{1}{12}h[5f(t_{k+1},y_{k+1}^{(0)})+8y'_{k}-y'_{k-1}]. \end{align*} If $h$ is sufficiently small, we can accept $y_{k+1}^{(1)}$ as the solution $y_{k+1}$. Otherwise, we can proceed with more iterations at the expense of additional evaluations of $f(t,y)$. \vsp \subsection{Backward differentiation formulas (BDF)}\label{sect:bdf} Another class of efficient numerical methods with excellent stability properties is the class of backward differentiation formulas (BDF). As the name implies, they are backward (implicit) formulas.
For constructing a BDF of order $q$, we assume that $p$ is a polynomial of degree $q$ that interpolates $y(t)$ at the points $\{ t_{k+1}, t_k,\ldots, t_{k-q+1}\}$ for $k\geqslant q-1$: \begin{equation}\label{bdf:poly} p(t) = \sum_{j=-1}^{q-1} \ell_j(t) y(t_{k-j}) \end{equation} where $\ell_j(t)$, $j=-1,\ldots,q-1$, are the Lagrange polynomials on the points $\{ t_{k+1}, t_k,\ldots, t_{k-q+1}\}$. Then we use \begin{equation}\label{bdf:deriv} p'(t_{k+1})\approx y'(t_{k+1}) = f(t_{k+1},y(t_{k+1})). \end{equation} Since the interpolation points are equidistant with spacing $h$, we can use the change of variable $$ \theta = \frac{t-t_{k+1}}{h} \vsp $$ to simplify the notation. This change of variable maps the interpolation points to the integer set $\{0,-1,-2,\ldots,-q\}$. In particular, $t=t_{k+1}$ is mapped to $\theta=0$. If the Lagrange functions on these scaled points are denoted by $\tilde{\ell}_j(\theta)$, one can easily show that $$ \ell_j(t) = \tilde{\ell}_j(\theta) \quad \mbox{and}\quad \ell'_j(t)=\frac{1}{h}\tilde \ell'_j(\theta). $$ On the other hand, the Lagrange interpolation error is \begin{align*} e(t) =& \frac{(t-{t_{k+1}})(t-t_k)\cdots(t-t_{k-q+1})}{(q+1)!}y^{(q+1)}(\zeta_k(t))\\ = &\frac{h^{q+1}\theta(\theta+1)\cdots(\theta+q)}{(q+1)!}y^{(q+1)}(\zeta_k(t)) \end{align*} for some $t_{k-q+1}\leqslant \zeta_k(t)\leqslant t_{k+1}$. Since the derivative of $\theta(\theta+1)\cdots(\theta+q)$ at $\theta=0$ equals $q!$, the error of the differentiation in \eqref{bdf:deriv} at $t=t_{k+1}$ (or $\theta=0$) is \begin{equation}\label{bdf:errorderiv} e'(t_{k+1}) = \frac{1}{h}\frac{h^{q+1}\, q!}{(q+1)!}y^{(q+1)}(\xi_k) = \frac{h^{q}}{q+1}\,y^{(q+1)}(\xi_k) =:\tilde \tau_{k+1}, \end{equation} where $\xi_k=\zeta_k(t_{k+1})$. Combining \eqref{bdf:poly} and \eqref{bdf:deriv} and adding the error term give \begin{equation*} \frac{1}{h}\tilde \ell'_{-1}(0) y(t_{k+1}) + \frac{1}{h}\sum_{j=0}^{q-1} \tilde \ell'_j(0) y(t_{k-j}) = f(t_{k+1},y(t_{k+1}))-\tilde \tau_{k+1}. \end{equation*} By setting \begin{equation}\label{bdf:alphabeta} \beta = \frac{1}{\tilde \ell'_{-1}(0)}, \quad \alpha_j = -\frac{\tilde \ell'_j(0)}{\tilde \ell'_{-1}(0)}, \quad j=0,\ldots,q-1, \end{equation} we obtain \begin{equation*} y(t_{k+1}) = \sum_{j=0}^{q-1} \alpha_j y(t_{k-j}) +h\beta f(t_{k+1},y(t_{k+1}))-h\beta\tilde \tau_{k+1}. \end{equation*} By replacing the exact values $y(t_k)$ by the approximate values $y_k$ and dropping the truncation error \begin{equation*} \tau_{k+1} = -h\beta\tilde{\tau}_{k+1} = -\frac{\beta}{q+1}h^{q+1} y^{(q+1)}(\xi_k), \end{equation*} the $q$-step BDF method is obtained as \begin{shaded} \vspace*{-0.1cm} \begin{equation}\label{bdf-method} y_{k+1} = \sum_{j=0}^{q-1} \alpha_j y_{k-j} +h\beta f(t_{k+1},y_{k+1}), \quad k=q-1,q,q+1,\ldots. \end{equation} \vspace*{-0.2cm} \end{shaded} The coefficients $\beta$ and $\alpha_j$ can be easily computed (for example using a symbolic toolbox such as Maple) from \eqref{bdf:alphabeta}. In Table \ref{tb:BDFmethods0} the $q$-step BDF methods for $q=1,2,\ldots,5$ are given. \begin{table}[!th] \centering \caption{BDF methods.
Here by $y'_{k+1}$ we mean $f(t_{k+1},y_{k+1})$.}\label{tb:BDFmethods0} \begin{tabular}{llc} \hline $q$ & $\qquad\qquad \qquad\qquad\qquad$ Method & $\tau_{k+1}$ \\ \hline \\ $1$ & $y_{k+1}= y_k + hy'_{k+1}$ & $-\tfrac{1}{2}h^2y''(\xi_k)$ \\\\ $2$ & $y_{k+1}= \tfrac{4}{3}y_k-\tfrac{1}{3}y_{k-1} + \tfrac{2}{3}hy'_{k+1}$ & $-\tfrac{2}{9}h^3y'''(\xi_k)$ \\\\ $3$ & $y_{k+1}= \tfrac{18}{11}y_k-\tfrac{9}{11}y_{k-1} +\tfrac{2}{11}y_{k-2}+ \tfrac{6}{11}hy'_{k+1}$ &$ -\tfrac{3}{22}h^4y^{(4)}(\xi_k)$ \\\\ $4$ & $y_{k+1}= \tfrac{48}{25}y_k-\tfrac{36}{25}y_{k-1} +\tfrac{16}{25}y_{k-2}-\tfrac{3}{25}y_{k-3}+ \tfrac{12}{25}hy'_{k+1}$ & $-\tfrac{12}{125}h^5y^{(5)}(\xi_k)$ \\\\ $5$ & $y_{k+1}= \tfrac{300}{137}y_k-\tfrac{300}{137}y_{k-1} +\tfrac{200}{137}y_{k-2}-\tfrac{75}{137}y_{k-3}+\tfrac{12}{137}y_{k-4}+ \tfrac{60}{137}hy'_{k+1}$ & $-\tfrac{10}{137}h^6y^{(6)}(\xi_k)$ \\\\ \hline \end{tabular} \end{table} The case $q=1$ is simply the implicit Euler's method \eqref{imeuler:method}. All discussions concerning the initial values $y_1,\ldots,y_{q-1}$ and the iteration for nonlinearity that we outlined for the Adams-Moulton methods apply here to the BDF methods as well. As the local truncation error of a $q$-step BDF method behaves as $\mathcal{O}(h^{q+1})$, it is expected that the global error of such a method behaves as $\mathcal{O}(h^q)$. A general error analysis is given in section \ref{sect:error_anal_multi}. \vsp \begin{workout} \label{wo:BDF3forSys} Consider solving the linear system of equations $y'(t)=Ay(t) + f(t)$ with initial condition $y(0)=y_0$, given a constant matrix $A\in \R^{n\times n}$ and a known vector function $f$. Write down the BDF3 scheme for solving this system. \end{workout} \vsp \begin{labexercise} Let's revisit Example \ref{ex:heat} (MOL solution of the diffusion equation).
Impose the initial condition $$ u_0(x) = \sin(\pi x), \quad 0\leqslant x\leqslant 1.\vsp $$ With a spatial step size of $\Delta x = 0.001$, apply the BDF3 method on the resulting system of ordinary differential equations (ODEs) and compute the PDE solution with step lengths $h=0.1,0.05,0.025,0.0125,$ and $0.00625$ up to the final time $t = 0.5$. Plot the errors, compute the convergence orders, and compare them with the theoretical order $q=3$. To initiate the BDF3 iterations, in addition to the initial condition $\y_0$, we require $\y_1$ and $\y_2$, which should be computed using a one-step method with a truncation error of at least $\mathcal{O}(h^3)$. For instance, the trapezoidal or RK2 methods can be employed for this purpose. The exact solution of this equation with the given initial condition is $u(x,t) = e^{-\pi^2 t}\sin(\pi x)$, which we can use to compute the errors and orders. Explain the advantages of BDF3 over the explicit Euler, implicit Euler, and RK methods. Recall that when the spatial step size $\Delta x$ decreases, the size of the ODE system increases and the system becomes stiff. \end{labexercise} \vsp \subsection{Error analysis of multistep methods}\label{sect:error_anal_multi} In general, a multistep method, including the AB, AM, and BDF methods, can be formulated as \begin{equation}\label{multi-general} y_{k+1} = \sum_{j=0}^{q} a_jy_{k-j} + h \sum_{j=-1}^{q} b_j f(t_{k-j},y_{k-j}), \quad k\geqslant q. \vsp \end{equation} In both Adams families (AB and AM methods) $a_0=1$ and $a_j=0$ for $j=1,\ldots,q$. In AB methods $b_{-1}=0$ and in AM methods $b_q=0$. In BDF methods $b_{-1}=\beta$, $b_j=0$ for $j=0,\ldots,q$, and $a_q=0$. We present the analysis for the general form \eqref{multi-general} with only one restrictive condition on the coefficients $a_j$. The analysis is valid for all three classes of methods mentioned above. It also works for some one-step schemes such as the implicit Euler and trapezoidal methods, as they are special cases of AM methods.
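The explicit case ($b_{-1}=0$) of the general form \eqref{multi-general} can be implemented directly. The following Python sketch is written for clarity rather than efficiency: it re-evaluates $f$ at all $q+1$ points in each step instead of reusing stored values, and the caller supplies the coefficient vectors and the $q+1$ startup values:

```python
import numpy as np

def multistep_explicit(a, b, f, t0, y_start, h, n):
    """General explicit multistep method:
       y_{k+1} = sum_j a[j]*y_{k-j} + h * sum_j b[j]*f(t_{k-j}, y_{k-j}),
       with b_{-1} = 0; y_start supplies the q+1 startup values y_0,...,y_q."""
    q = len(a) - 1
    t = t0 + h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[: q + 1] = y_start
    for k in range(q, n):
        y[k + 1] = sum(a[j] * y[k - j] for j in range(q + 1)) \
                 + h * sum(b[j] * f(t[k - j], y[k - j]) for j in range(q + 1))
    return t, y
```

For instance, $a=(1,0)$ and $b=(\tfrac{3}{2},-\tfrac{1}{2})$ recover the $2$-step AB method \eqref{multi-ab2}.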
The truncation error of formula \eqref{multi-general} is defined as \begin{equation}\label{multi-truncationerr} \tau_{k+1} = y(t_{k+1})-\sum_{j=0}^{q} a_jy(t_{k-j}) - h \sum_{j=-1}^{q} b_j y'(t_{k-j}), \quad k\geqslant q. \end{equation} For the function $\tau(h)$ defined by $$ \tau(h) = \frac{1}{h}\max_{q\leqslant k\leqslant N} |\tau_k|, \vsp $$ if $\tau(h)\to 0$ as $h\to 0$ then we say the method \eqref{multi-general} is {\em consistent}. If $$ \tau(h) = \mathcal{O}(h^m) $$ for some $m\geqslant 1$, we say the order of consistency is $m$. The following theorem gives necessary and sufficient conditions on the coefficients in \eqref{multi-general} to achieve consistency. \begin{theorem}[\cite{Atkinson-et-al:2009}]\label{thm:multi_consistency} Let $m\geqslant 1$ be a given integer. To have $\tau(h)\to 0$ for all continuously differentiable functions $y$, that is, for the method \eqref{multi-general} to be consistent, it is necessary and sufficient that \begin{align} \sum_{j=0}^q a_j = & 1, \label{sum_a_j1} \\ -\sum_{j=0}^q ja_j + \sum_{j=-1}^{q}b_j= & 1 \label{sum_b_j1}. \end{align} Furthermore, to have consistency order $m$, i.e., $\tau(h)=\mathcal O(h^m)$ for all functions $y$ that are $m+1$ times continuously differentiable, it is necessary and sufficient that \eqref{sum_a_j1} and \eqref{sum_b_j1} hold and that \begin{equation}\label{sum_aj_bj} \sum_{j=0}^q (-j)^ia_j + i\sum_{j=-1}^{q}(-j)^{i-1}b_j= 1, \quad i=2,3,\ldots,m. \end{equation} \end{theorem} \proof For the proof just write the degree $m$ Taylor expansion of $y(t)$ around $t_k$ and manipulate the truncation error \eqref{multi-truncationerr}. It is left as an exercise. $\qed$ \vsp \begin{workout} Write the proof of Theorem \ref{thm:multi_consistency}. \end{workout} \vsp The following theorem proves the convergence of the multistep method \eqref{multi-general}. The theorem does not cover all multistep methods, but it includes all methods we have considered so far, such as the AB, AM, and BDF schemes.
\begin{theorem}[\cite{Atkinson-et-al:2009}] Consider solving the initial value problem $y'(t) = f(t,y(t))$ for $t\geqslant t_0$ with initial condition $y(t_0)=y_0$ using the multistep method \eqref{multi-general}. Assume that $f(t,y)$ is continuous and satisfies the Lipschitz condition with respect to variable $y$ with Lipschitz constant $L>0$. Assume that the initial errors satisfy \begin{equation*} \eta(h):= \max_{0\leqslant k\leqslant q} |y(t_k)-y_k|\to 0, \; \mbox{as} \; h\to0. \end{equation*} Moreover, assume that $y$ is continuously differentiable and the method is consistent, that is, $\tau(h)\to 0$ as $h\to0$. Finally, assume that all coefficients $a_j$ are nonnegative: \begin{equation}\label{multi:aj_pos} a_j\geqslant 0,\quad j=0,1,\ldots, q. \end{equation} Then the method \eqref{multi-general} is convergent and \begin{equation}\label{multi:conv1} \max_{0\leqslant k\leqslant N} |y(t_k)-y_k|\leqslant c_1 \eta(h) + c_2\tau(h) \end{equation} for some suitable constants $c_1$ and $c_2$. If the solution $y(t)$ is $m+1$ continuously differentiable, the method \eqref{multi-general} is of consistency order $m$, and the initial errors satisfy $\eta(h)=\mathcal O(h^m)$ then the convergence order is $m$, i.e., the error is of size $\mathcal{O}(h^m)$. \end{theorem} \proof Let $e_k=y(t_k)-y_k$. By subtracting \eqref{multi-general} from \eqref{multi-truncationerr}, we obtain \begin{equation*} e_{k+1}=\sum_{j=0}^{q} a_je_{k-j} + h \sum_{j=-1}^{q} b_j \big[f(t_{k-j},y(t_{k-j}))-f(t_{k-j},y_{k-j})\big] + \tau_{k+1}. \end{equation*} Taking absolute value, using the Lipschitz condition, and using assumption \eqref{multi:aj_pos} yield $$ |e_{k+1}|\leqslant \sum_{j=0}^{q} a_j|e_{k-j}| + hL \sum_{j=-1}^{q} |b_j|\, |e_{k-j}| + |\tau_{k+1}|. $$ By introducing the notation $$ E_k = \max_{0\leqslant i\leqslant k}|e_i|, \quad k=0,1,\ldots,N \vsp $$ we can write $$ |e_{k+1}|\leqslant E_k\sum_{j=0}^{q} a_j + hL E_{k+1} \sum_{j=-1}^{q} |b_j| + h\tau(h). 
\vsp $$ From \eqref{sum_a_j1} we know that the sum of the $a_j$'s is $1$, thus we have $$ |e_{k+1}|\leqslant E_k + hc E_{k+1} + h\tau(h), \quad c= L \sum_{j=-1}^{q} |b_j|. $$ Since the right-hand side is trivially a bound for $E_k$ and $E_{k+1}=\max\{|e_{k+1}|,E_k\}$, we simply have $$ E_{k+1}\leqslant E_k + hcE_{k+1}+ h\tau(h). \vsp $$ For $hc\leqslant \frac{1}{2}$, which holds as $h\to 0$, we obtain $$ E_{k+1}\leqslant \frac{E_k}{1-hc} +\frac{h}{1-hc}\tau(h)\leqslant (1+2hc)E_k + 2h \tau(h). $$ If we proceed as in the analysis of the Euler's method in section \ref{sect:error_anal_euler} we finally have \begin{equation*} E_{N}\leqslant e^{2c(b-t_0)}\eta(h) + \left[ \frac{e^{2c(b-t_0)}-1}{c} \right]\tau(h), \end{equation*} which completes the proof. $\qed$ To obtain the rate of convergence $\mathcal O(h^m)$ it is necessary that $\tau_{k+1}=\mathcal O(h^{m+1})$ in each step $k$ (i.e., $\tau(h)=\mathcal O(h^m)$, which needs relation \eqref{sum_aj_bj} to hold), and the initial values $y_0,y_1,\ldots,y_q$ need to be computed with an accuracy of only $\mathcal O(h^m)$, i.e., $\eta(h)=\mathcal O(h^m)$. \vsp \subsection{Stability regions of multistep methods} Again consider the general multistep method \eqref{multi-general}. If we apply it to the test equation $y'(t)=\lambda y(t)$ we obtain \begin{equation*} y_{k+1} = \sum_{j=0}^{q} a_jy_{k-j} + h\lambda \sum_{j=-1}^{q} b_j y_{k-j}. \vsp \end{equation*} Letting $z=\lambda h$ and rearranging the above equation give \begin{equation}\label{multi:abs-diff} (1-zb_{-1})y_{k+1} - \sum_{j=0}^{q} (a_j+zb_j)y_{k-j}=0. \end{equation} This is a homogeneous linear {\em difference equation} of order $q+1$. For absolute stability, we need to find the general solution $y_k$ and impose a condition on $z$ in order to have $|y_k|$ bounded as $k\to \infty$. The general theory for solving linear difference equations is given in Appendix A.
The general solution can be found by looking for solutions of the special form $$ y_k = r^k, \quad k\geqslant 0. $$ Substituting this special solution into \eqref{multi:abs-diff} and cancelling the factor $r^{k-q}$ yield \begin{equation}\label{multi:char-eq} (1-zb_{-1})r^{q+1} - \sum_{j=0}^{q} (a_j+zb_j)r^{q-j}=0. \end{equation} This equation is called the {\em characteristic equation}. For example, the characteristic equations for the $3$-step AB, AM, and BDF methods are \begin{align*} & r^{3} - (1+\tfrac{23}{12}z)r^2 +\tfrac{16}{12}z r - \tfrac{5}{12}z = 0,\\ & (1-\tfrac{9}{24}z)r^3-(1+\tfrac{19}{24}z)r^2 + \tfrac{5}{24}z r-\tfrac{1}{24}z = 0, \\ & (1-\tfrac{6}{11}z)r^3-\tfrac{18}{11}r^2 + \tfrac{9}{11}r-\tfrac{2}{11} = 0, \end{align*} respectively. Assume that the characteristic polynomial has roots $r_0,r_1,\ldots,r_q$. As the roots depend on $z$, we denote them by $r_0(z),r_1(z),\ldots,r_q(z)$. On the boundary of the stability region all roots have magnitude $1$ or less, and at least one root has magnitude exactly $1$. Roots of magnitude $1$ can be represented as $r=e^{i\theta}$ for $0\leqslant \theta<2\pi$, where $i$ is the imaginary unit. To obtain the boundary of the stability region we can therefore find all $z$ for which \eqref{multi:char-eq} holds with $r=e^{i\theta}$. One can separate the terms containing $z$ and write \eqref{multi:char-eq} in the form \begin{equation*} \rho(r)-z\sigma(r) = 0 \end{equation*} where $$ \rho(r) = r^{q+1}-\sum_{j=0}^{q}a_j r^{q-j},\quad \sigma(r) =b_{-1}r^{q+1}+ \sum_{j=0}^{q}b_jr^{q-j}. \vsp $$ Now, for all $\theta\in[0,2\pi)$ we compute the complex values $$ z = \frac{\rho(e^{i\theta})}{\sigma(e^{i\theta})} =: u+iv $$ and plot $v$ against $u$. The resulting curve contains the boundary of the stability region. With a little care we can identify the true stability region. Stability regions of the Adams-Bashforth and Adams-Moulton methods are plotted in Figures \ref{fig:AB-stab-reg} and \ref{fig:AM-stab-reg}.
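The boundary-locus recipe just described amounts to only a few lines of code. Here is a Python sketch for the $2$-step AB method, for which $\rho(r)=r^2-r$ and $\sigma(r)=\tfrac{1}{2}(3r-1)$ (the number of sample points is an arbitrary choice):

```python
import numpy as np

# Boundary locus z(theta) = rho(e^{i*theta}) / sigma(e^{i*theta})
# for the 2-step Adams-Bashforth method.
theta = np.linspace(0.0, 2.0 * np.pi, 721)   # includes theta = pi exactly
r = np.exp(1j * theta)
z = (r**2 - r) / (0.5 * (3.0 * r - 1.0))
# Plotting z.real against z.imag traces the boundary of the stability
# region, e.g. with matplotlib: plt.plot(z.real, z.imag).
```

The curve passes through $z=0$ (at $\theta=0$) and crosses the negative real axis at $z=-1$ (at $\theta=\pi$), recovering the well-known interval of absolute stability $(-1,0)$ of the $2$-step AB method.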
It can be seen that for AB and AM methods of equal order, the AM method has a larger absolute stability region. Except for the AM methods of orders $1$ and $2$, which are the implicit Euler and trapezoidal methods, the stability regions of the AB and AM methods neither contain the left half-plane nor even encompass the whole negative real line. Consequently, these methods are not adequate for solving stiff differential equations. \begin{figure}[ht!] \centering \includegraphics[scale = .05]{AB1}\includegraphics[scale = .05]{AB2} \includegraphics[scale = .05]{AB3}\includegraphics[scale = .05]{AB4} \caption{Stability regions of Adams-Bashforth methods of orders $1$ to $4$. }\label{fig:AB-stab-reg} \end{figure} \begin{figure} \centering \includegraphics[scale = .05]{AM1}\includegraphics[scale = .05]{AM2} \includegraphics[scale = .05]{AM3}\includegraphics[scale = .05]{AM4} \caption{Stability regions of Adams-Moulton methods of orders $1$ to $4$. Note the different scale on the axes compared to Figure \ref{fig:AB-stab-reg}.}\label{fig:AM-stab-reg} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale = .05]{BDF1}\includegraphics[scale = .05]{BDF2}\includegraphics[scale = .05]{BDF3} \includegraphics[scale = .05]{BDF4}\includegraphics[scale = .05]{BDF5}\includegraphics[scale = .05]{BDF6} \caption{Stability regions of BDF methods of orders $1$ to $6$. BDF1 and BDF2 are $A$-stable and BDF3-BDF6 are $A(\alpha)$-stable. } \label{fig:BDF-stab-reg} \end{figure} The stability regions of the BDF schemes are plotted in Figure \ref{fig:BDF-stab-reg}. Notably, BDF1 (the implicit Euler's method) and BDF2 are $A$-stable. As the order of the method increases, the region of absolute stability shrinks, although it still encompasses the entire negative real line, so the methods remain $A(\alpha)$-stable. The angle $\alpha$ decreases as the order increases from $1$ to $6$. Nonetheless, BDF1 to BDF6 are all suitable schemes for solving stiff differential equations.
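This difference is easy to observe numerically on the test equation $y'=\lambda y$. In the following Python sketch the values $\lambda=-50$ and $h=0.1$ are our own illustrative choice, so that $h\lambda=-5$ lies far outside the AB2 stability region: the explicit AB2 iterates explode while the implicit BDF2 iterates decay for the very same step size.

```python
import numpy as np

lam, h, n = -50.0, 0.1, 60
y1 = float(np.exp(lam * h))          # exact value used as the startup y_1

# 2-step Adams-Bashforth (explicit): unstable, since h*lam = -5
y_ab = [1.0, y1]
for k in range(1, n):
    y_ab.append(y_ab[k] + 0.5 * h * lam * (3 * y_ab[k] - y_ab[k - 1]))

# BDF2 (implicit, A-stable): (1 - (2/3) h lam) y_{k+1} = (4/3) y_k - (1/3) y_{k-1}
y_bdf = [1.0, y1]
for k in range(1, n):
    y_bdf.append((4 * y_bdf[k] - y_bdf[k - 1]) / (3 * (1 - 2 * h * lam / 3)))
```

After $60$ steps $|y_{\mathrm{AB2}}|$ has grown astronomically, while $|y_{\mathrm{BDF2}}|$ is essentially zero, in agreement with the exact solution $e^{-50t}$.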
However, BDF schemes of order $q\geqslant 7$ are not even zero-stable and hence are not adequate for solving stiff (or any other) problems. \vsp \subsection{One-step versus multistep methods} Finally, we compare one-step and multistep methods for the numerical solution of ODEs. Some advantages of one-step methods over multistep methods are listed below. \begin{itemize} \item One-step methods are self-starting: the available initial value $y_0$ is enough to start and compute the next values $y_1,y_2,\ldots$. \item Adaptivity is easier with one-step methods. The step length $h$ can be changed at any point in a one-step method, based on an error estimate. In multistep methods, however, more care is required, since the previous values are assumed to be equally spaced in the standard form of these methods given in Tables \ref{tb:ABmethods} and \ref{tb:AMmethods}. \item If the solution $y(t)$ is not smooth at some isolated point $t^*$ then with a one-step method it is often possible to get full accuracy simply by ensuring that $t^*$ is a grid point. This is not straightforward with multistep methods. \end{itemize} On the other hand, one-step methods have some disadvantages. The disadvantage of Taylor series methods is that they require differentiating the given equation and are cumbersome and often expensive to implement. RK methods use only evaluations of the function $f$, but a higher order RK method requires evaluating $f$ several times in each time step, while in multistep methods only one new evaluation of $f$ is required per time step. \section{MATLAB's ODE suite}\label{sect:matlab} A suite of ODE solvers was introduced in version 5 of MATLAB and has been extended in later versions. The suite contains several solvers; here we introduce some of them. \begin{itemize} \item \texttt{ode23} This is a low-order adaptive embedded solver for {\em nonstiff} ODEs. The `23' in the function name indicates that two simultaneous single-step formulas, one of second order and one of third order, are involved.
\item \texttt{ode45} This is a higher-order embedded Runge-Kutta solver based on a pair of fourth- and fifth-order formulas (the Dormand-Prince pair). For differential equations with smooth solutions, \texttt{ode45} is often more accurate than \texttt{ode23}, but it is likewise intended for {\em nonstiff} ODEs. The MATLAB documentation recommends \texttt{ode45} as the first choice, and Simulink blocks set \texttt{ode45} as the default solver. \item \texttt{ode23t} This solver is an implementation of the trapezoidal rule using a free interpolant. Use this solver if the problem is only {\em moderately stiff} and you need a solution without numerical damping. This solver can solve differential algebraic equations (DAEs) as well. \item \texttt{ode15s} This solver is suitable for solving {\em stiff} ODEs and DAEs. It is a variable order solver based on the numerical differentiation formulas. Optionally, it uses the backward differentiation formulas (BDFs, also known as Gear's method), which we studied in section \ref{sect:bdf}. Try \texttt{ode15s} when \texttt{ode45} fails or is very inefficient. \item See also \texttt{ode113}, \texttt{ode78}, \texttt{ode89}, etc. in MATLAB's help documentation. \end{itemize} You can explore other ODE and DAE solvers by referring to MATLAB's help documentation or other available sources. The syntax for calling these solvers is one of the following: \begin{lstlisting}[style = matlab] [t,y] = solver(odefun,tspan,y0) [t,y] = solver(odefun,tspan,y0,options) \end{lstlisting} where \texttt{solver} is one of \texttt{ode45}, \texttt{ode23}, etc. Here \texttt{odefun} is a function that evaluates $f(t,y)$, the right-hand side of the differential equations, \texttt{y0} is the initial condition, and \texttt{tspan} is a two-vector specifying the interval of integration, $[t_0,t_{final}]$. To obtain solutions at specific times (all increasing or all decreasing), use $\texttt{tspan} = [t_0,t_1,\ldots,t_{final}]$.
For example, the command \begin{lstlisting}[style = matlab] [t,y] = ode45(odefun,[0:0.01:0.5],[0 1]); \end{lstlisting} returns the function values at the points of the specified vector $\texttt{[0:0.01:0.5]}$. But if the values at specified points are not required, you can simply set $\texttt{tspan = [0 0.5]}$. Optional integration arguments are created using the \texttt{odeset} function. For example, we can customize the error tolerances using \begin{lstlisting}[style = matlab] options = odeset('RelTol',1e-6,'AbsTol',1e-10); [t,y] = ode45(@myfunc,[0 0.5],[0 1],options); \end{lstlisting} This requires the estimated error at each step to be less than the larger of \texttt{RelTol} times the magnitude of the solution at that step and \texttt{AbsTol}. More precisely, $$ |e_k|\leqslant \max\{\texttt{RelTol}\times |y_k|, \texttt{AbsTol}\}. $$ Decreasing the error tolerances can considerably slow down the solver. \vsp \begin{example} To solve the scalar equation $$y'(t) = t^3/y(t), \quad 0\leqslant t\leqslant 10, \quad y(0)=1$$ using \texttt{ode23} we can write (without additional options): \begin{lstlisting}[style = matlab1] [t,y] = ode23(@(t,y) t^3/y,[0 10],1); plot(t,y,'-ob') \end{lstlisting} The second example is a well-known nonstiff system of equations describing the motion of a rigid body without external forces: \begin{align*} y_1'(t) &= y_2(t)y_3(t)\\ y_2'(t) &= -y_1(t)y_3(t) \\ y_3'(t) & = -0.51 y_1(t)y_2(t) \end{align*} with initial conditions $y_1(0)=0$, $y_2(0)=1$ and $y_3(0)=1$. The time interval is $[0,12]$. With optional relative and absolute error tolerances, we can compute the solutions by writing the following script.
\begin{lstlisting}[style = matlab1] options = odeset('RelTol',1e-4,'AbsTol',[1e-4 1e-4 1e-5]); [t,y] = ode45(@rigid,[0 12],[0 1 1],options); plot(t,y(:,1),'-',t,y(:,2),'-.',t,y(:,3),'.'); \end{lstlisting} The \texttt{rigid} function can be written in a separate file as a function: \begin{lstlisting}[style = matlab1] function yprime = rigid(t,y) yprime = zeros(3,1); yprime(1) = y(2)*y(3); yprime(2) = -y(1)*y(3); yprime(3) = -0.51*y(1)*y(2); end \end{lstlisting} \end{example} \vsp All solvers solve systems of equations in the form $\y'= \f(t,\y)$ or problems that involve a mass matrix, $M(t,\y)\y' = \f(t,\y)$. \texttt{ode15s} and \texttt{ode23t} can solve problems with a singular mass matrix, i.e., DAEs. The mass matrix can be specified via \texttt{odeset}. See \texttt{doc odeset} in MATLAB's help for a list of options you can customize. \vsp \begin{labexercise} Repeat solving the examples in Lab Exercise \ref{py_ex_euler} using \texttt{ode23} and \texttt{ode45}. Try to test different options. In each case plot the exact and numerical solutions in the same figure. \end{labexercise} \vsp \begin{labexercise} The Van der Pol oscillator is a self-sustained electrical circuit composed of an inductor ($L$), an initially charged capacitor with capacitance ($C$), and a nonlinear resistor ($R$), all connected in series, as shown in the figure below. \begin{center} \includegraphics[scale=.85]{rlc} \end{center} Using an operational amplifier, the current-voltage characteristic of the nonlinear resistor ($R$) is given as \begin{equation}\label{Intensity} U_R = -R_0 i_0 \left[\frac{i}{i_0}-\frac{1}{3}\left(\frac{i}{i_0}\right)^3 \right] \vsp \end{equation} where $i$ is the current and $i_0$ and $R_0$ are the normalization current and resistance, respectively.
Applying Kirchhoff's voltage law to the above circuit gives \begin{equation}\label{law} U_L + U_R + U_C = 0 \end{equation} where $U_L$ and $U_C$ are the voltages across the inductor and the capacitor, respectively, defined as \begin{equation}\label{UL_UC} U_L = L \frac{\mathrm{d}i}{\mathrm{d}t}, \quad U_C = \frac{1}{C} \int i\, \mathrm{d} t. \vsp \end{equation} \begin{enumerate} \item Substitute \eqref{Intensity} and \eqref{UL_UC} into \eqref{law} to obtain an integro-differential equation. Then differentiate it to obtain the following second order ODE: \begin{equation}\label{second_order1} L\frac{\mathrm{d}^2i}{\mathrm{d}\tau^2} - R_0\left[1-\frac{i^2}{i_0^2} \right] \frac{\mathrm{d} i}{\mathrm{d} \tau } + \frac{i}{C} = 0. \end{equation} Then use the change of variables $u = i/i_0$ and $t = \omega \tau$, where $\omega = 1/\sqrt{LC}$ (the electric pulsation), to convert \eqref{second_order1} to the following equation: \begin{equation*}\frac{\mathrm{d}^2u}{\mathrm{d}t^2} - R_0\sqrt{\frac{C}{L}}\left(1-u^2 \right) \frac{\mathrm{d} u}{\mathrm{d} t} + u = 0. \end{equation*} \item By setting $\mu = R_0\sqrt{\frac{C}{L}}$ and adding initial conditions, obtain the well-known {\em Van der Pol} equation \begin{equation}\label{vdp} \begin{split} &u''-\mu u'(1-u^2)+u = 0, \quad 0\leqslant t\leqslant b\\ &u(0) = u_0, \quad u'(0)=u'_0, \end{split} \end{equation} and convert it to a system of first-order ODEs. \item Apply the RK2 and RK4 methods (your own codes) for solving the system obtained from equation \eqref{vdp} with the different values $\mu = 1,10,100,1000$, and with initial conditions $u_0=2$ and $u'_0=0$. For $\mu=1$ let $b=20$, for $\mu=10$ let $b=100$, for $\mu=100$ let $b = 500$ and for $\mu = 1000$ let $b = 5000$. For large values of $\mu$ use a very small stepsize $h$ to get accurate results. In each case plot the numerical solutions $u$ and $u'$ in terms of $t$ in the interval $[0,b]$ and report your observations. Also report the execution times (sec.)
in a table. \item Now use the ODE solvers \texttt{ode45}, \texttt{ode23t} and \texttt{ode15s} to solve this ODE again with the different values of $\mu$ and $b$ given in item (3). In each case produce the plot of $u$ and $u'$ in terms of $t$, and compute the CPU time (sec.) required for executing the codes. Compare with the results of item (3). \end{enumerate} \end{labexercise} \vsp \section{Appendix} \subsection*{A: Difference equations} Consider the following {\em homogeneous difference equation} of order $p$, \begin{equation}\label{diff-eq_n} c_{p}y_{k} + c_{p-1}y_{k-1} + \cdots + c_{0} y_{k-p} = 0, \quad k\geqslant p \end{equation} with given initial values $y_0,y_1,\ldots,y_{p-1}$. The general solution of this equation is obtained by looking for solutions of the special form $$ y_k = r^k, \quad k\geqslant 0. $$ If we can obtain $p$ linearly independent solutions, then an arbitrary linear combination of these solutions gives the general solution of \eqref{diff-eq_n}. Substituting $y_k=r^k$ into \eqref{diff-eq_n} and cancelling $r^{k-p}$, we obtain \begin{equation}\label{characteristic_eq} c_{p}r^{p} + c_{p-1}r^{p-1} + \cdots + c_1r+ c_0 = 0 \vsp \end{equation} which is called the {\em characteristic equation}; its left-hand side is the {\em characteristic polynomial}. If \eqref{characteristic_eq} possesses $p$ distinct roots $r_1,r_2,\ldots,r_p$ then the general solution of \eqref{diff-eq_n} is \begin{equation}\label{differ:generalsol} y_k = \sum_{j=1}^{p} \beta_j r_j^k, \quad k\geqslant 0. \end{equation} The coefficients $\beta_1,\beta_2,\ldots, \beta_{p}$ are obtained by imposing the known initial values $y_0,y_1,\ldots,y_{p-1}$ on the general solution \eqref{differ:generalsol}. It is clear that if $|r_j|\leqslant 1$ for all $j$ then the solution $y_k$ does not grow as $k\to\infty$.
However, if $r_j$ is a root of multiplicity $\nu$, i.e., $$ r_j = r_{j+1} = \cdots = r_{j+\nu-1}, $$ then the $\nu$ linearly independent solutions corresponding to these roots are $$ r_j^k ,\, kr_j^k, \ldots, k^{\nu-1}r_j^k $$ and in formula \eqref{differ:generalsol} the part $\beta_jr_j^k + \beta_{j+1}r_{j+1}^k+\cdots+\beta_{j+\nu-1}r_{j+\nu-1}^k$ should be replaced by \begin{equation*} [\beta_j + k\beta_{j+1}+\cdots + k^{\nu-1}\beta_{j+\nu-1}]r_j^k. \end{equation*} In this case the solution $y_k$ remains stable as $k\to\infty$ provided that $|r_j|\leqslant 1$ for simple roots and $|r_j|<1$ for repeated roots. \vsp \begin{example} The general solution of the difference equation $$ y_{k}-5y_{k-1}+6y_{k-2}=0 $$ is obtained in terms of the roots $r_1=2$ and $r_2=3$ of the characteristic polynomial $r^2-5r+6$, $$ y_k = \beta_1 2^k + \beta_2 3^k. $$ If $y_0=0$ and $y_1=2$ are given, then by solving $y_0=\beta_1+\beta_2$ and $y_1=2\beta_1+3\beta_2$ we simply obtain $\beta_1=-2$ and $\beta_2=2$. The solution then is $$ y_k = -2\times 2^k +2 \times 3^k,\quad k=0,1,2,3,\ldots. $$ As a second example, consider the following difference equation $$ y_{k}-5y_{k-1}+6y_{k-2}+4y_{k-3}-8y_{k-4}=0. $$ The characteristic equation is $r^4 - 5r^3 + 6r^2 + 4r - 8=0$ with roots $r_1=-1$ and $r_2=r_3=r_4=2$. The general solution then is $$ y_k = \beta_1(-1)^k + [\beta_2+k\beta_3+k^2\beta_4]2^k. $$ With the given initial values $y_0=-1$, $y_1=y_2=-7$ and $y_3=7$ we must solve \begin{align*} -1= & \beta_1+\beta_2 \\ -7= & -\beta_1+2[\beta_2+\beta_3+\beta_4] \\ -7 =& \beta_1+4[\beta_2+2\beta_3+4\beta_4] \\ 7 = & -\beta_1+8[\beta_2+3\beta_3+9\beta_4] \end{align*} to obtain the coefficients $\beta_j$. Solving this system gives $\beta_1=1,\beta_2=-2,\beta_3=-2$ and $\beta_4=1$.
\end{example}
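The closed-form solutions in the examples above can be checked against direct iteration of the recurrences. The following Python sketch (not part of the original lab material; the helper name \texttt{iterate} is ours) iterates a general linear recurrence $c_p y_k + c_{p-1} y_{k-1} + \cdots + c_0 y_{k-p} = 0$ and compares the result with the closed forms obtained above.

```python
from math import isclose

def iterate(coeffs, init, n):
    """Iterate c_p*y_k + c_{p-1}*y_{k-1} + ... + c_0*y_{k-p} = 0.
    coeffs = (c_p, c_{p-1}, ..., c_0); init = (y_0, ..., y_{p-1})."""
    y = list(init)
    for k in range(len(init), n):
        # y_k = -(c_{p-1}*y_{k-1} + ... + c_0*y_{k-p}) / c_p
        s = sum(c * y[k - j] for j, c in enumerate(coeffs) if j > 0)
        y.append(-s / coeffs[0])
    return y

# First example: characteristic roots 2 and 3, closed form y_k = -2*2^k + 2*3^k.
y = iterate((1, -5, 6), (0, 2), 10)
assert all(isclose(a, -2 * 2**k + 2 * 3**k) for k, a in enumerate(y))

# Second example: roots -1 and 2 (triple), y_k = (-1)^k + (k^2 - 2k - 2)*2^k.
y = iterate((1, -5, 6, 4, -8), (-1, -7, -7, 7), 12)
assert all(isclose(a, (-1)**k + (k**2 - 2*k - 2) * 2**k) for k, a in enumerate(y))
```

The same routine can serve as a sanity check for any homogeneous difference equation of the form considered in this appendix.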
2412.20001v2
http://arxiv.org/abs/2412.20001v2
Colouring signed Kneser, Schrijver, and Borsuk graphs
\documentclass[11pt, a4paper]{article} \usepackage{graphicx} \usepackage{epsfig} \usepackage{caption} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{amsthm} \usepackage{enumerate} \usepackage{graphics} \usepackage{pict2e} \usepackage{cite} \usepackage{subfigure} \usepackage[subnum]{cases} \usepackage{multirow} \usepackage{comment} \usepackage{xcolor} \usepackage{cleveref} \usepackage{authblk} \usepackage{color} \def \red {\textcolor{red} } \def \bl {\textcolor{blue} } \def \gr {\textcolor{green} } \def \ye {\textcolor{yellow} } \setlength{\textheight}{8.5in} \setlength{\textwidth}{6.2in} \setlength{\oddsidemargin}{0in} \setlength{\parindent}{1em} \makeatletter \newcommand{\Ss}{\mathcal{S}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\HB}{\hat{\B}} \newcommand{\C}{\mathcal{C}} \newcommand{\F}{\mathcal{F}} \newcommand{\D}{\mathcal{D}} \newcommand{\M}{\mathcal{M}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\eps}{\varepsilon} \newcommand{\spto}{\xrightarrow{\mathrm{\textit{sp}}}} \newcommand{\SB}{\mathrm{\textit{SB}}} \newcommand{\SK}{\mathrm{\textit{SK}}} \newcommand{\SSG}{\mathrm{\textit{SS}}} \newcommand{\HSK}{\widehat{\SK}} \newcommand{\HSS}{\widehat{\SSG}} \newcommand{\HC}{\widehat{C}} \newcommand{\cat}{\mathrm{cat}} \newcommand{\RP}{\R\mathrm{P}} \newcommand{\DS}{\mathrm{DS}} \newcommand{\DC}{\mathrm{DC}} \newcommand{\abs}{\mathrm{abs}} \newcommand{\Aut}{\mathrm{Aut}} \newcommand{\spAut}{\mathrm{Aut_{sp}}} \newcommand{\Sym}[1]{\mathrm{Sym}(#1)} \newcommand{\sgn}[1]{\mathrm{sgn}(#1)} \newcommand{\dist}{\mathrm{dist}} \newcommand{\Cat}[1]{\mathrm{Cat}(#1)} \newcommand{\notto}{\nrightarrow} \newcommand{\one}{\mathbf{1}} \newcommand{\Reza}[1]{{\color{purple}[Reza: #1]}} \begin{document} \newtheorem{theorem}{Theorem} \newtheorem{observation}[theorem]{Observation} \newtheorem{corollary}[theorem]{Corollary} 
\newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{definition}[theorem]{Definition} \newtheorem{guess}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{problem}[theorem]{Problem} \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{fact}[theorem]{Fact} \makeatletter \newcommand\figcaption{\def\@captype{figure}\caption} \newcommand\tabcaption{\def\@captype{table}\caption} \makeatother \newtheorem{case}[theorem]{Case} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{notation}{Notation} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \title{\textbf{Colouring signed Kneser, Schrijver, and Borsuk graphs}} \author[1]{Luis Kuffner} \author[2]{Reza Naserasr} \author[3]{Lujia Wang} \author[4]{\linebreak Xiaowei Yu} \author[2,3]{Huan Zhou} \author[3]{Xuding Zhu} \affil[1]{\small École normale supérieure, PSL University, Paris, France. {Email:\texttt{[email protected]}}} \affil[2]{\small Université Paris Cité, CNRS, IRIF, F-75013, Paris, France. {Emails: \texttt{\{reza, zhou\}@irif.fr}}} \affil[3]{\small Zhejiang Normal University, Jinhua, China. {Emails: \texttt{\{ljwang, huanzhou, xdzhu\}@zjnu.edu.cn}}} \affil[4]{\small Jiangsu Normal University, Xuzhou, China. {Email: \texttt{[email protected]}}} \maketitle \begin{abstract} The signed Kneser graph $\SK(n,k)$, $k\leq n$, is the graph whose vertices are signed $k$-subsets of $[n]$ (i.e. $k$-subsets $S$ of $\{ \pm 1, \pm 2, \ldots, \pm n\}$ such that $S\cap -S=\emptyset$). Two vertices $A$ and $B$ are adjacent with a positive edge if $A\cap -B=\emptyset$ and with a negative edge if $A\cap B=\emptyset$. We prove that the balanced chromatic number of $\SK(n,k)$ is $n-k+1$.
We then introduce the signed analogue of Schrijver graphs and show that they form vertex-critical subgraphs of $\SK(n,k)$ with respect to balanced colouring. Further connections to topological methods, in particular to signed Borsuk graphs, are also considered. \end{abstract} \section{Introduction} A \emph{signed graph} $(G,\sigma)$ is a graph $G=(V,E)$ endowed with a \emph{signature function} $\sigma: E(G)\rightarrow \{-1, +1\}$ which assigns to each edge $e$ a sign $\sigma(e)$. An edge $e$ is called a {\em positive edge} (or {\em negative edge}, respectively) if $\sigma(e) = +1$ (or $\sigma(e) = -1$, respectively). The graph $G$ is called the {\em underlying graph} of $(G,\sigma)$. \begin{definition} \label{def-switching} Assume $(G, \sigma)$ is a signed graph and $v$ is a vertex of $G$. The \emph{vertex switching} at $v\in V(G)$ results in a signature $\sigma'$ defined as \[ \sigma'(e) = \begin{cases} - \sigma(e), &\text{if $v$ is a vertex of $e$ and $e$ is not a loop}; \cr \sigma(e), &\text{ otherwise}. \end{cases} \] Two signatures $\sigma_1$ and $\sigma_2$ on the same underlying graph $G$ are said to be \emph{switching equivalent}, denoted by $\sigma_1\equiv\sigma_2$, if one is obtained from the other by a sequence of vertex switchings. \end{definition} Assume $(G, \sigma)$ is a signed graph and $X$ is a subset of $V(G)$. If we switch at the vertices of $X$ in any order, then the resulting signature $\sigma'$ is obtained from $\sigma$ by flipping the signs of all edges in the edge cut $(X, V(G)\setminus X)$ of $G$. We call this operation \emph{the switching at the cut} $(X, V(G)\setminus X)$. Thus $\sigma_1 \equiv \sigma_2$ if and only if the set $\{e: \sigma_1(e) \ne \sigma_2(e)\}$ is an edge cut. Given a graph $G$, we denote by $(G,+)$ ($(G,-)$, respectively) the signed graph whose signature function is constantly positive (negative, respectively) on $G$.
\begin{definition} \label{def-balanced} A signed graph $(G, \sigma)$ is {\em balanced} if $(G, \sigma) \equiv (G,+)$. A subset $X$ of vertices of a signed graph $(G, \sigma)$ is called balanced if $(G[X], \sigma)$ is balanced. \end{definition} Note that switching does not change the parity of the number of negative edges in a cycle, and a signed cycle $(C, \sigma)$ is balanced if and only if it has an even number of negative edges, or equivalently, $\prod_{e \in E(C)}\sigma(e) =1$. If a signed graph $(G, \sigma)$ is balanced then every cycle must be balanced. Harary \cite{Harary1953} proved that this necessary condition is also sufficient. \begin{definition} \label{def-balancedcolouring} Assume $(G, \sigma)$ is a signed graph and $p$ is a positive integer. A {\em balanced $p$-colouring} of $(G, \sigma)$ is a mapping $f: V(G) \to [p]$ such that for each colour $i$, the set $f^{-1}(i)$ is a balanced set of $V(G)$. The {\em balanced chromatic number} of $(G, \sigma)$ is defined as $$\chi_b(G, \sigma) = \min\{p:\text{ there is a balanced $p$-colouring of $(G, \sigma)$}\}.$$ \end{definition} A signed graph $(G,\sigma)$ admits a balanced $p$-colouring for some $p$ if and only if it has no negative loop. Thus $\chi_b(G, \sigma)$ is well-defined for signed graphs with no negative loop. On the other hand, the existence of a positive loop does not affect the balanced chromatic number. Thus in this work, negative loops are never considered and it is assumed that every vertex has a positive loop attached to it. A signed graph $(G, \sigma)$ is {\em simple} if there are no parallel edges of opposite signs (equivalently, no negative cycle of length 2). A balanced $p$-colouring of $(G, -\sigma)$ is equivalent to a 0-free $2p$-colouring of $(G, \sigma)$ introduced in \cite{Zaslavsky1982-DAM}, which is a mapping $c: V(G) \to \{\pm1, \pm2, \ldots, \pm p\}$ such that $c(x) \neq \sigma(xy)c(y)$ for each edge $xy$.
The colour set in a $0$-free $2p$-colouring consists of $p$ colour pairs $\{\{i,-i\}: i \in [p]\}$, and vertices coloured by a pair of colours in a $0$-free $2p$-colouring form a single colour class in a balanced $p$-colouring. The first reference to the parameter $\chi_b(G, \sigma)$ is \cite{Zaslavsky1987}, where the term ``balanced partition number" is used instead. The connection between the balanced chromatic number of signed graphs and the classic chromatic number of graphs can be presented in two ways. The first is by observing that in $(G,-)$ a set of vertices is balanced if and only if it induces a bipartite subgraph of $G$. \begin{proposition}\label{prop:Xb_NegativeGraph} For every graph $G$, we have $\chi_b(G,-)=\lceil\chi(G)/2\rceil$. \end{proposition} For the second connection, given a graph $G$, let $(G, \pm)$ be the signed graph obtained from $G$ by replacing each edge $e=xy$ with a pair of parallel edges of opposite signs. Then a subset $X$ of $V(G)$ is balanced in $(G, \pm)$ if and only if $X$ is an independent set of $G$. Thus a $p$-colouring of $G$ is equivalent to a balanced $p$-colouring of $(G, \pm)$. In this sense, colouring graphs is equivalent to colouring special signed graphs. Many classical results about graph colouring become challenging problems in the setting of colouring signed graphs, and conjectures about graph colouring become more profound in this setting. For example, as a generalization of the Four Colour Theorem to signed graphs, M\'{a}\v{c}ajov\'{a}, Raspaud and \v{S}koviera \cite{m2016chromatic} conjectured that every simple planar signed graph is 0-free 4-colourable. That is equivalent to claiming that every signed simple planar graph admits a balanced 2-colouring. The conjecture received a lot of attention and was refuted by Kardo\v{s} and Narboni \cite{KN21}.
The famous Hadwiger conjecture, which claims that $K_n$-minor free graphs are $(n-1)$-colourable, is naturally extended to signed graphs, where the minor theory implies richer structures (see \cite{JMNNQ24+}). For a positive integer $n$, let $[n] = \{1,2,\ldots, n\}$. Denote by $\binom{[n]}{k}$ the set of all $k$-subsets of $[n]$. For $n\ge 2k$, the Kneser graph $K(n,k)$ has vertex set $\binom{[n]}{k}$, in which two vertices are adjacent if they are disjoint $k$-subsets of $[n]$. It was conjectured by Kneser \cite{MR0093538} and proved by Lov\'{a}sz \cite{MR0514625} that the chromatic number of $K(n,k)$ is $n-2k+2$. The Schrijver graph $S(n,k)$ is the subgraph of $K(n,k)$ induced by the set of stable $k$-subsets, where a $k$-subset $A$ of $[n]$ is {\em stable} if $i \in A$ implies $i+1 \notin A$ for $i\in [n-1]$, and $n \in A$ implies that $1 \notin A$. It was proved by Schrijver \cite{Schrijver} that $S(n,k)$ is a vertex-critical subgraph of $K(n,k)$, i.e., $\chi(S(n,k)) = \chi(K(n,k))=n-2k+2$ and for any vertex $A$ of $S(n,k)$, $\chi(S(n,k)-A) =n-2k+1$. Lov\'{a}sz's proof of the Kneser conjecture initiated the application of topological methods in graph colouring. Presently, the study of topological bounds for graph parameters forms an important and elegant part of chromatic graph theory. The goal of this paper is to generalize the concepts of Kneser graphs and Schrijver graphs to signed Kneser graphs and signed Schrijver graphs and to explore applications of topological methods in the colouring of signed graphs. For a positive integer $n$, let $ \pm [n] =\{\pm 1, \pm 2, \ldots, \pm n\}$. A \emph{signed $k$-subset} of $[n]$ is a $k$-subset $A$ of $\pm [n]$ such that for any $i \in [n]$, $|A \cap \{i, -i\} | \le 1$. We denote by $\binom{[n]}{\pm k}$ the set of all signed $k$-subsets of $[n]$. For $A \in \binom{[n]}{\pm k}$, let $-A = \{ -a: a \in A\}$. Thus a $k$-subset $A$ of $\pm [n]$ is a signed $k$-subset of $[n]$ if and only if $A\cap -A=\emptyset$.
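As a quick sanity check on this definition, signed $k$-subsets can be enumerated directly: choosing $k$ absolute values and then a sign for each gives $\binom{n}{k}2^k$ of them. The following Python sketch (ours, not from the paper; the name \texttt{signed\_subsets} is illustrative) verifies this count and the condition $A\cap -A=\emptyset$ on a small case.

```python
from itertools import combinations, product
from math import comb

def signed_subsets(n, k):
    """All signed k-subsets of [n]: choose k absolute values from [n],
    then a sign for each chosen element."""
    for base in combinations(range(1, n + 1), k):
        for signs in product((1, -1), repeat=k):
            yield frozenset(s * a for s, a in zip(signs, base))

subs = list(signed_subsets(4, 2))
# There are C(n,k) * 2^k signed k-subsets, and each satisfies A ∩ -A = ∅.
assert len(subs) == comb(4, 2) * 2**2
assert all(S.isdisjoint({-a for a in S}) for S in subs)
```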
A signed $k$-subset can naturally be represented by a $\{-1,0,1\}$-vector of length $n$ whose coordinates are labeled by $[n]$ and whose number of nonzero coordinates is $k$. In the rest of this paper $k,n$ are integers satisfying $k\leq n$. \begin{definition} \label{def-signedKneser} The signed Kneser graph ${\SK}(n,k)$ has $\binom{[n]}{\pm k}$ as the vertex set, where $A,B$ are joined by a positive edge if $A \cap (-B) = \emptyset$, and $A, B$ are joined by a negative edge if $A \cap B = \emptyset$. \end{definition} Viewing vertices as vectors, vertices $A$ and $B$ are adjacent by a positive (respectively, negative) edge if the coordinatewise product is non-negative (respectively, non-positive). Analogous to the Kneser graph and its relation to the fractional chromatic number of graphs, signed Kneser graphs are homomorphism targets for the study of the fractional balanced chromatic number of signed graphs. For more details on this subject and the basic properties of signed Kneser graphs, we refer to \cite{KNWYZZ25}. In this paper, we study the balanced colouring of signed Kneser graphs and prove the following result: \begin{theorem}\label{thm-signedK} For any positive integers $n \ge k \ge 1$, $$\chi_b(\SK(n,k)) = n-k+1.$$ \end{theorem} \begin{definition} \label{def-signedSchijver} A signed $k$-subset $A$ of $[n]$ is said to be {\em alternating} if $A$ is of the form $$\{a_1, -a_2, \ldots, (-1)^{k-1}a_k\}\quad \text{ or }\quad \{-a_1, a_2, \ldots, (-1)^{k}a_k\},$$ where $1 \leq a_1 < a_2 < \ldots < a_k\leq n$. Denote by $\A(n,k)$ the family of alternating signed $k$-subsets of $[n]$. The {\em signed Schrijver graph} $\SSG(n,k)$ is the subgraph of $\SK(n,k)$ induced by the vertex set $\A(n,k)$. \end{definition} In terms of vectors, $\A(n,k)$ consists of those vertices of $\SK(n,k)$ whose nonzero entries are alternating. Let $\HSK(n,k)$ be the subgraph of $\SK(n,k)$ induced by the set of vertices whose first nonzero coordinate is positive. Define $\HSS(n,k)$ similarly.
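The vector characterisation of adjacency can be verified by brute force on small cases. The following Python sketch (ours, not from the paper; the helper names \texttt{vec} and \texttt{edge\_signs} are illustrative) checks that a positive edge corresponds to a coordinatewise product that is everywhere non-negative, and a negative edge to one that is everywhere non-positive.

```python
from itertools import combinations, product

def vec(A, n):
    """{-1,0,1}-vector of length n representing a signed subset A of [n]."""
    v = [0] * n
    for a in A:
        v[abs(a) - 1] = 1 if a > 0 else -1
    return v

def edge_signs(A, B):
    """Signs of the edge AB in SK(n,k): positive iff A ∩ (-B) = ∅,
    negative iff A ∩ B = ∅ (a pair may be joined by both)."""
    signs = set()
    if not A & {-b for b in B}:
        signs.add(+1)
    if not A & B:
        signs.add(-1)
    return signs

n, k = 4, 2
verts = [frozenset(s * a for s, a in zip(sg, base))
         for base in combinations(range(1, n + 1), k)
         for sg in product((1, -1), repeat=k)]
for A in verts:
    for B in verts:
        prods = [x * y for x, y in zip(vec(A, n), vec(B, n))]
        assert (+1 in edge_signs(A, B)) == all(p >= 0 for p in prods)
        assert (-1 in edge_signs(A, B)) == all(p <= 0 for p in prods)
```

Note that a vertex is joined to itself by a positive edge under this definition, matching the convention above that every vertex carries a positive loop.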
Observe that replacing $A$ with $-A$ in $\HSK(n,k)$ is the same as switching at $A$. Given a signed graph $(G,\sigma)$ and a vertex $u$ of it, adding a vertex $-u$ which is a switched copy of $u$, or deleting such a vertex if it already exists, does not affect the balanced chromatic number. Thus Theorem~\ref{thm-signedK} is equivalent to claiming that $\chi_b(\HSK(n,k)) = n-k+1.$ Next, we shall prove that $\HSS(n,k)$ is a vertex-critical subgraph of $\SK(n,k)$. \begin{theorem} \label{thm:signedS} For any positive integers $n \ge k \ge 1$, $$\chi_b(\HSS(n,k)) = n-k+1.$$ Moreover, for any vertex $A$ of $\HSS(n,k)$, $\HSS(n,k) - A$ admits a balanced $(n-k)$-colouring. \end{theorem} \section{Balanced colouring of signed Kneser graphs and signed Schrijver graphs}\label{sec:BalColKneser} For $i \in [n]$, let $\B_{i}(n,k)=\{A \in {[n] \choose \pm k }: A \cap \{i, -i\} \ne \emptyset\}$. Observe that $\B_{i}(n,k)$ is a balanced set in $\HSK(n,k)$. Furthermore, any collection of $n-k+1$ of these sets covers all the vertices of $\HSK(n,k)$, resulting in a balanced $(n-k+1)$-colouring of $\HSK(n,k)$. Hence $\chi_b(\HSK(n,k)) \le n-k+1$. We shall prove that $\chi_b(\HSS(n,k)) \ge n-k+1$, which would imply that $\chi_b(\HSS(n,k)) = \chi_b(\HSK(n,k)) = n-k+1$. Nevertheless, one can derive the lower bound $\chi_b(\HSK(n,k)) \ge n-k+1$ easily from the (classic) chromatic number of Schrijver graphs. {\bf Proof of the lower bound for Theorem \ref{thm-signedK}}: We order the elements of $\pm [n]$ in cyclic order as $(1,-1,2, -2, \ldots, n, -n)$. Then every stable $k$-subset of $\pm [n]$ with respect to this order is, in particular, a signed $k$-subset of $[n]$, and hence is a vertex of $\SK(n,k)$. In other words, every vertex $A$ of $S(2n,k)$ has an associated vertex $f(A)$ in $\SK(n,k)$. Two vertices $A$ and $B$ are joined by an edge in $S(2n,k)$ if they are disjoint. Hence $f(A)$ and $f(B)$ are adjacent by a negative edge in $\SK(n,k)$. Thus $(S(2n,k), -)$ is a subgraph of $\SK(n,k)$.
It follows from Proposition~\ref{prop:Xb_NegativeGraph} that $2n-2k+2 = \chi(S(2n,k)) \leq 2\chi_b(\SK(n,k))$. Hence, $\chi_b(\SK(n,k))\ge n-k+1$. \subsection{Proof of Theorem \ref{thm:signedS}} Observe that the only alternating $k$-sets contained in $A \cup -A$ are $A$ and $-A$ themselves. Therefore, the collection $\{\B_i(n,k): \{i, -i\} \cap A = \emptyset\}$ of $n-k$ balanced sets covers all vertices of $\HSS(n,k)$ except $A$. Hence $$\chi_b(\HSS(n,k)-A) \le (n-k).$$ It remains to show that $\chi_b(\HSS(n,k)) \ge n-k+1$. Let $S^d:=\{x\in \R^{d+1}: \Vert x\Vert_2=1\}$ be the $d$-dimensional sphere. We say a subset $C\subseteq S^d$ is \emph{(antipodally) symmetric} if $-C=C$. We need the following form of Ky Fan's theorem, see \cite{ST06} and references therein. \begin{theorem}\label{thm:KyFan} Let $\A$ be a system of open (or a finite system of closed) subsets of $S^d$ such that $A\cap -A=\emptyset$ for every element $A$ of $\A$, and $\bigcup_{A\in \A}(A\cup -A)=S^d$. For any linear order $<$ on $\A$ there are elements $A_1 <A_2 <\ldots<A_{d+1}$ of $\A$ and a point $x\in S^d$ such that $x\in \bigcap_{i=1}^{d+1} (-1)^i A_i$. \end{theorem} A subset $Y$ of $S^d$ is \emph{connected} if there are no disjoint open sets $A$ and $B$ of $S^d$ such that $Y \subseteq A\cup B$ and $ A \cap Y \ne \emptyset$, $ B \cap Y \ne \emptyset$. Given a subset $Y$ of $S^d$, a maximal connected subset of $Y$ is called a \emph{connected component} of $Y$, and two points of $Y$ are \emph{connected by $Y$} if they are in the same connected component. Two points $y_1$ and $y_2$ in a subset $Y$ are said to be \emph{path-connected} by $Y$ if there is a continuous mapping $f$ of $[0,1]$ to $Y$ such that $f(0)=y_1$ and $f(1)=y_2$. It is known \cite{Munkers2000} that if $Y$ is connected and open, then $Y$ is path-connected. 
\begin{theorem}\label{thm:signedBorsukUlam} For every open cover $C_1, C_2,\ldots, C_d$ of the sphere $S^d$, where each $C_i$ is an antipodally symmetric set, one of the $C_i$'s connects a pair of antipodal points. \end{theorem} \begin{proof} Let $\C_i$ be the collection of connected components of $C_i$. Assume $X \in \C_i$. Since $C_i$ is symmetric, $-X\in \C_i$. If $X\cap -X \neq \emptyset$ (and hence $X=-X$), then we are done. Thus we may assume that $X\cap -X = \emptyset$ for each element $X$ of $\C_i$. Let $\A=\bigcup_{1\leq i \leq d} \C_i$. Then $\A$ satisfies all the conditions of \Cref{thm:KyFan}. Thus there are distinct sets $X_1, X_2, \ldots, X_{d+1}$ in $\A$ and a point $x$ such that $$x\in \bigcap_{l=1}^{d+1}(-1)^lX_l.$$ By the pigeonhole principle, two of these sets are in the same $\C_i$, leading to a contradiction. \end{proof} Now we prove a Gale-Schrijver type theorem regarding the existence of a well-distributed arrangement of our ground set $\pm [n]$ in $S^d$. \begin{theorem}\label{thm:signedGaleSchrijver} There is an embedding of $\pm[n]$ in the sphere $S^{n-k}$, such that the images of $i$ and $-i$ are antipodal for each $i\in [n]$, and any open hemisphere contains an alternating $k$-set. \end{theorem} \begin{proof} Let $d=n-k$. We first embed $\pm[n]$ into $\R^{d+1}$ with the assistance of the \emph{odd moment curve}. More precisely, let $$v_i=(-1)^i(i,i^3,\ldots,i^{2d+1})\in \R^{d+1}$$ for each $i\in \pm[n]$ and let $V=\{v_i:i\in \pm[n]\}$. By definition, $v_{-i}=-v_i$ for all $i$. Let $V^+=\{v_{i}: i\in [n]\}$. By a property of the moment curve, no hyperplane that passes through the origin intersects $V^{+}$ in more than $d$ points (see Lemma~1.6.4 of \cite{M03}). We now claim that the mapping $i\mapsto w_i=v_i/|v_i|$ is the desired embedding of $\pm[n]$ in the sphere $S^{d}$. Let $a=(a_1,a_2,\ldots,a_{d+1})\in S^d$.
The hyperplane $h_a=\{x\in \R^{d+1}:x\cdot a=0\}$ passing through the origin and perpendicular to $a$ partitions $\R^{d+1}$ into three regions, namely $h_a$, $h_a^+=\{x\in \R^{d+1}:x\cdot a>0\}$, and $h_a^-=\{x\in \R^{d+1}: x\cdot a<0\}$. The open hemisphere centered at $a$ is $H_a=S^d\cap h_a^+$. We shall find an alternating $k$-set $X\in \A(n,k)$ whose image $\{w_i:i\in X\}$ is contained in $H_a$, equivalently $\{v_i:i\in X\} \subset h_a^+$. To do so, we first continuously move the vector $a\in S^{d}$ to increase the number of points of $V$ contained in the hyperplane $h_a$ while no points of $V$ get swept through by $h_a$, i.e., each $v_i$ stays in whichever of $h_a\cup h_a^+$ or $h_a\cup h_a^-$ it originally belonged to. Since no $d+1$ points of $V^+=\{v_{i}: i\in [n]\}$ are on $h_a$, and noting that $0\in h_a$, we can do this by gradually increasing (one at a time) the intersection $h_a\cap V^+$ while fixing the subspace generated by the vectors already in $h_a\cap V$, until we reach a vector $a'=(a'_1, a'_2, \ldots, a'_{d+1})$ such that $|h_{a'}\cap V^+|=d$. Furthermore, observe that $v_i\in h_{a'}$ if and only if $v_{-i} \in h_{a'}$, thus $|h_{a'}\cap V|=2d$ at the end of this process. Thus, $|V\setminus h_{a'}|=2k$, and, since $V$ is antipodally symmetric about the origin, we must have $|V\cap h_{a'}^+|=|V\cap h_{a'}^-|=k$. The process of obtaining $h_{a'}$ from $h_{a}$ guarantees that $V\cap h_{a'}^+\subseteq V\cap h_a^+$ and $V\cap h_{a'}^-\subseteq V\cap h_a^-$. Hence, to complete the proof, it suffices to show that $V\cap h_{a'}^+$ is the image of an alternating $k$-set. Let $p(x)=a'_1x+a'_2x^3+\cdots+a'_{d+1}x^{2d+1}$. By the choice of $a'$, $p(x)$ has $2d+1$ simple roots: $0$ and $d$ pairs of antipodal elements of $\pm[n]$. Observe that $v_i\in h_{a'}^+$ if and only if $(-1)^ip(i)>0$. Hence $X=\{i\in\pm[k+d]:v_i\in h_{a'}^+\}=\{i\in\pm[k+d]:(-1)^ip(i)>0\}$.
To complete the proof, it is enough to prove the following claim. \medskip {\bf Claim.} $X\in\A(n,k)$, that is, $X$ is an alternating $k$-set. {\bf Proof of Claim.} First of all, since $v_i$ and $v_{-i}$ are on opposite sides of $h_{a'}$, $X$ does not contain an antipodal pair of indices. Hence $X \in \binom{[n]}{\pm k}$. To see that $X$ is alternating, suppose, to the contrary, that $i_l$ and $i_{l+1}$ are two indices in $X$ of the same sign with adjacent absolute values, that is, there is no $j\in X$ with $|i_l|<|j|<|i_{l+1}|$. This implies that all the integers in $(i_l,i_{l+1})$ are (simple) roots of $p(x)$. If $i_l$ and $i_{l+1}$ are of the same parity, then $p(i_l)$ and $p(i_{l+1})$ are of the same sign. So the number of roots of $p(x)$ on $(i_l,i_{l+1})$ is even, contradicting the fact that the number of integral points in $(i_l,i_{l+1})$ is odd. Similarly, if $i_l$ and $i_{l+1}$ are of opposite parities, then $p(i_l)$ and $p(i_{l+1})$ are of opposite signs, and we again get a contradiction. Therefore, $X$ is alternating. \end{proof} {\bf Proof of the lower bound for Theorem~\ref{thm:signedS}.} Again, we write $d=n-k$ and suppose, to the contrary, that there is a balanced $d$-colouring $f$ of $\SSG(k+d,k)$. For each $A\in V(\SSG(k+d,k))=\A(k+d,k)$, let $c(A):=\{f(-A),f(A)\}$. Arrange $\pm[k+d]$ in $S^d$ as described in Theorem~\ref{thm:signedGaleSchrijver}. For each $i\in[d]$, let $$A_i:=\{x\in S^d: \text{there is an alternating $k$-set $X\subset H_x$ with $i\in c(X)$}\}.$$ The condition on $c$ implies that each $A_i$ is symmetric. From \Cref{thm:signedGaleSchrijver} we conclude that $\bigcup_{1\leq i \leq d} A_i=S^d$. As each $A_i$ is easily observed to be an open set, by \Cref{thm:signedBorsukUlam}, there is an $A_i$ connecting two antipodal points $x_0$ and $-x_0$ of $S^d$. Thus, there exists a (simple) path $\gamma:[0,1]\to S^d$ with $\gamma(0)=x_0, \gamma(1)=-x_0$ such that $\Gamma:=\gamma([0,1]) \subseteq A_i$.
By definition, $x\in A_i$ if and only if there is an alternating $k$-set $X\subset H_x$ with $i\in c(X)$. We denote such a $k$-set by $X_x$ (when there is more than one choice, pick one arbitrarily). Since $H_x$ is an open hemisphere and the image of $X_x$ is a finite subset of $S^d$, there is an $\eta=\eta_x>0$ such that the open neighbourhood $U_x:=\{y\in S^d: \dist(x,y)<\eta\}$ of $x$ satisfies $X_x\subset H_y$ for all $y\in U_x$, where $\dist(\cdot,\cdot)$ denotes the Euclidean distance in $\mathbb{R}^{d+1}$. Thus $\{U_x:x\in \Gamma\}$ covers $\Gamma$, and by compactness there exists a finite subcover. Further, we find a sequence $U_{x_l}$, $l\in[0,m]$, in this subcover such that $U_{x_l}\cap U_{x_{l+1}}\neq \emptyset$ for all $l\in [0,m-1]$, where $x_{m}:=-x_0$. We claim that the alternating $k$-sets $X_{x_l}$ and $X_{x_{l+1}}$ are joined by a positive edge. Suppose not; then there is an $i_0$ with $i_0\in X_{x_l}$ and $-i_0\in X_{x_{l+1}}$. But since $U_{x_l}\cap U_{x_{l+1}}\neq \emptyset$, by definition, this means that for any $y\in U_{x_l}\cap U_{x_{l+1}}$, $H_y$ contains both the images of $\pm i_0$, which is a contradiction. Since $X_{x_0}$ and $X_{-x_0}$ are separated by the hyperplane $h_{x_0}$, they have no common element and hence are adjacent with a negative edge in $\SK(n,k)$. Altogether $X_{x_0}, X_{x_1},\ldots,X_{x_{m}}=X_{-x_0}$ give an unbalanced cycle in the colour class $c^{-1}(i)$, a contradiction.\hfill{$\Box$} \subsection{A conjecture on the structure of signed Schrijver graphs} Given a signed graph $(G, \sigma)$, the subgraph of $G$ induced by the set of negative edges is denoted by $(G,\sigma)^-$. The following proposition, proved in \cite{Zaslavsky1987}, connects the balanced chromatic number of a signed graph to the chromatic numbers of the subgraphs induced by the set of negative edges among all switchings of it.
\begin{proposition}\label{prop:Xb_NegativeSubgraph} For every signed graph $(G,\sigma)$, $$\chi_b(G,\sigma)=\min_{\sigma'\equiv\sigma}\chi((G,\sigma')^-).$$ \end{proposition} Recall that $\HSK(n,k)$ and $\HSS(n,k)$ are the subgraphs of $\SK(n,k)$ and $\SSG(n,k)$, respectively, induced by the vertices whose first nonzero element is positive, with the inherited signature. We observe here that this standard signature is the one for which the equality of \Cref{prop:Xb_NegativeSubgraph} holds. We will need the following notation. For $i\in [n]$, let $\B_i^+(n,k)=\{A\in \binom{[n]}{\pm k}: i\in A\}$. It is easily observed that $\B_i^+(n,k)$ is an independent set of $\SK(n,k)^-$. \begin{theorem}\label{thm:negsubgraphHSKHSS} For all $n\geq k$, $$\chi(\HSK(n,k)^-)=\chi(\HSS(n,k)^-)=n-k+1.$$ \end{theorem} \begin{proof} The lower bound follows from \Cref{prop:Xb_NegativeSubgraph}, \Cref{thm-signedK}, and \Cref{thm:signedS}. Hence it is enough to give an $(n-k+1)$-colouring for $\HSK(n,k)^-$ (hence also for $\HSS(n,k)^-$). To that end, we observe that the sets $\B_i^+(n,k)$ for $i=1,2,\ldots, n-k+1$ cover all vertices of $\HSK(n,k)$: each vertex has $k$ nonzero coordinates, so its first nonzero coordinate, which is positive, occurs in a position of $[n-k+1]$. \end{proof} Now, we turn to the colouring of $\SK(n,k)^-$ and $\SSG(n,k)^-$. Since $\SK(n,k)$ and $\SSG(n,k)$ contain two copies of $\HSK(n,k)$ and $\HSS(n,k)$, respectively, the upper bound for the chromatic number of their negative subgraphs is $2n-2k+2$. We show that $\SK(n,k)$ reaches this bound while $\SSG(n,k)$ does not. \begin{theorem}\label{thm:negsubgraphSKSS} For all $n\geq k$, $$\chi(\SK(n,k)^-)=2n-2k+2,$$ $$\chi(\SSG(n,k)^-)=n-k+2.$$ \end{theorem} \begin{proof} The first claim is clear once we recall from the proof of \Cref{thm-signedK} that $\SK(n,k)^-$ contains $S(2n,k)$ as a subgraph. To see the second part, first notice that the sets $\B_i^+(n,k)$, $i=1,2,\ldots, n-k+2$, form an $(n-k+2)$-colouring of $\SSG(n,k)^-$, establishing the upper bound.
The lower bound is implicitly established in the proof of \Cref{thm:signedS}, so we only sketch it. Using \Cref{thm:signedGaleSchrijver}, we embed $\pm[n]$ in $S^{n-k}$ in such a way that $i$ and $-i$ are antipodal for each $i\in [n]$, and any open hemisphere contains an alternating $k$-set. Suppose there is an $(n-k+1)$-colouring $c$ of $\SSG(n,k)^-$, and let $$A_i:=\{x\in S^{n-k}: \exists X\in \A(n,k), X \subset H_x, c(X)=i\}$$ for $i\in [n-k+1]$. Since each hemisphere contains an alternating $k$-set, $A_1,A_2,\ldots,A_{n-k+1}$ give an open cover of $S^{n-k}$. By the Borsuk-Ulam theorem (cf. \Cref{thm:Borsuk-Ulam}), there is an $A_i$ that contains a pair of antipodal points of $S^{n-k}$. However, this gives a pair of disjoint alternating $k$-sets in the colour class $c^{-1}(i)$, a contradiction. \end{proof} By \Cref{prop:Xb_NegativeSubgraph}, Theorem \ref{thm:signedS} is equivalent to saying that for any switching of $\HSS(n,k)$, the set of negative edges induces a graph of chromatic number at least $n-k+1$. Nevertheless, it seems that all these induced subgraphs are highly structured. This is presented in the following conjecture. \begin{conjecture}\label{conj:signedSchrijverContainsNegativeSchrijver} In any switching equivalent copy of $\HSS(n,k)$, the graph induced by the set of negative edges contains $S(n-1,k/2)$ as a subgraph when $k$ is even and $S(n,(k+1)/2)$ when $k$ is odd. \end{conjecture} As $\chi(S(n-1,k/2)) =\chi(S(n, (k+1)/2)) = n-k+1$, Conjecture \ref{conj:signedSchrijverContainsNegativeSchrijver} would imply that $\chi_b(\HSS(n,k)) \ge n-k+1$. It is easily verified that Conjecture~\ref{conj:signedSchrijverContainsNegativeSchrijver} holds for $k=1,n-1,$ and $n$. Next, we prove that it holds when $k=2$. \begin{theorem}\label{thm:sinedSchrijverCase2} Any switching equivalent copy of $\HSS(n,2)$ contains $(S(n-1,1),-)$ as a subgraph. \end{theorem} \begin{proof} Note that $S(n-1,1)=K_{n-1}$.
We need to show that for any switching equivalent copy of $\HSS(n,2)$, its negative subgraph has clique number at least $n-1$. Let $B$ be the bipartite graph with parts $\{1, 2, \ldots, n\}$ and $\{-1, -2, \ldots, -n\}$, where $\{i, -j\}$ is an edge if $i< j$. The vertices of $\HSS(n,2)$ are the edges of $B$, where two vertices (that is, edges of $B$) are joined by a negative edge if they form a matching. Thus the clique number of the subgraph induced by the negative edges in $\HSS(n,2)$ is the maximum size of a matching in $B$. Switching at the vertex $\{i,-j\}$ means replacing this edge by $\{j,-i\}$ in $B$. So we need to prove that for any subset $S$ of $E(B)$, after replacing each edge $\{i,-j\} \in S$ with the edge $\{j,-i\}$, the resulting bipartite graph $B'$ has a matching of size $n-1$. By K\H{o}nig's theorem, it suffices to show that the minimum size of a vertex cover of $B'$ is at least $n-1$. Note that for any $i$, $d_B(i)=n-i$ and $d_B(-i) = i-1$. So $d_B(i)+ d_B(-i)=n-1$. Replacing edge $\{i,-j\}$ with edge $\{-i, j\}$ does not change the sum of the degrees of $i$ and $-i$. So $d_{B'}(i)+ d_{B'}(-i)=n-1$ for $i \in [n]$. Let $C$ be a vertex cover of $B'$. If for some $i$, $C \cap \{i,-i\} = \emptyset$, then all neighbours of $i$ and $-i$ in $B'$ must be in $C$. Thus $|C|\geq n-1$. Otherwise, $C \cap \{i,-i\} \ne \emptyset$ for $i=1,2,\ldots, n$, and hence $|C|\geq n$. This completes the proof of Theorem \ref{thm:sinedSchrijverCase2}. \end{proof} \section{Signed Borsuk graphs}\label{Signed Borsuk} One of the original versions of the Borsuk-Ulam theorem is the following. \begin{theorem}\label{thm:Borsuk-Ulam} For any open cover $A_1, A_2, \ldots, A_{d+1}$ of $S^{d}$, one of the $A_i$'s contains a pair of antipodal points.
\end{theorem} Given a positive integer $d$ and a positive real number $\eps< 2$, the \emph{Borsuk graph} $B(d,\eps)$ has as its vertex set the points of $S^d$, where a pair $x,y$ of points are adjacent if $\dist(x, -y) \le \eps$ (again, $\dist(\cdot,\cdot)$ denotes the Euclidean distance in $\mathbb{R}^{d+1}$). Deciding the chromatic number of the Borsuk graph for small values of $\eps$ turns out to be equivalent to the Borsuk-Ulam theorem, see \cite{MR0514625}. \begin{theorem}[Reformulation of Borsuk-Ulam]\label{thm:BorsukColoring} Given $d$, there exists an $\eps_d$ such that for every $\eps \leq \eps_d$ we have $\chi(B(d,\eps))=d+2$. \end{theorem} Lov\'asz mentions in his original proof of Kneser's conjecture that this equivalence was the motivation behind his work. It was shown in \cite{ST06} that for $d=n-2k$, there is a suitable choice of $\eps$ such that the Borsuk graph $B(d,\eps)$ admits a homomorphism to the Schrijver graph $S(n,k)$, implying that $\chi(S(n,k)) \ge n-2k+2$. Following this direction of thought, here we introduce signed Borsuk graphs and present the connection between their chromatic properties and various extensions of the Borsuk-Ulam theorem. \begin{definition} The \emph{signed Borsuk graph,} $\SB(d,\eps)$, is the signed graph on the vertex set $S^d$, where for any $x,y\in S^d$, there is a positive edge joining $x$ and $y$ if $\dist(x,y)\leq \eps$, and a negative edge joining them if $\dist(x,-y)\leq \eps.$ \end{definition} That $\SB(d,\eps)$ admits a balanced $(d+1)$-colouring for a small enough value of $\eps$ can be observed in various ways. One possible colouring is obtained as follows. For each element $\mathbf{e}_{i}$ of the standard basis, let ${E_i}=h_{\mathbf{e}_i} \cap S^d$, where $h_{\mathbf{e}_i}$ is the hyperplane perpendicular to $\mathbf{e}_i$ containing the origin. Then, for a small enough value of $\eps$, let ${B}_i$ be the subset of $S^d$ obtained by removing an $\eps$-neighbourhood of ${E}_i$.
Observe that each $B_i$ is a balanced set and that each point of $S^d$ belongs to at least one $B_i$. So $\SB(d,\eps)$ admits a balanced $(d+1)$-colouring. The following theorem explores the relations between chromatic properties of $\SB(d,\eps)$ and various extensions of the Borsuk-Ulam theorem. In particular, for sufficiently small $\eps$, the balanced chromatic number of $\SB(d, \eps)$ is determined. \begin{theorem}\label{thm-signedBorsuk} For every natural number $d$, the following statements are equivalent: \begin{itemize} \item [(a)] There exists an $\eps_d >0$ such that for any $\eps$, $0<\eps \leq \eps_d$, we have $\chi_b(\SB(d,\eps))=d+1$. \item [(b)] (\Cref{thm:signedBorsukUlam}, signed Borsuk-Ulam theorem (open form)) For every symmetric open cover $A_1, A_2,\ldots, A_d$ of $S^d$, one of the ${A_i}$'s connects (hence path-connects) a pair of antipodal points. \item[(c1)] (Signed Borsuk-Ulam theorem (closed form 1)) For every symmetric closed cover $F_1, F_2,\ldots, F_d$ of the sphere $S^d$, there is an $F_i$ such that any open neighbourhood $U$ of $F_i$ connects a pair of antipodal points. \item [(c2)] (Signed Borsuk-Ulam theorem (closed form 2)) For every symmetric closed cover $F_1, F_2,\ldots, F_d$ of $S^d$, one of the $F_i$'s connects a pair of antipodal points. \end{itemize} \end{theorem} \begin{proof}[Proof of $(a) \Rightarrow (c1)$.] Assume that the antipodally symmetric sets $F_1, F_2,\ldots, F_d$ form a closed cover of $S^d$. Further, suppose that for each $i\in[d]$ there is an open neighbourhood $U_i\supset F_i$ that does not connect any pair of antipodal points. By compactness of the sphere, there is an $\eps_i>0$ such that $F_i\subset F_{i,\eps_i}\subset U_i$, where $F_{i, \eps_i}$ is the $\eps_i$-neighbourhood of $F_i$. Being a subset of $U_i$, $F_{i,\eps_i}$ does not connect a pair of antipodal points either. Given any $\eps_0>0$, let $\eps=\min_{i\in\{0,1,2,\ldots,d\}}\eps_i.$ We claim that $F_i$ is a balanced set in $\SB(d, \eps)$.
If $x, y\in F_i$ belong to different connected components of $F_{i,\eps_i}$, then $\dist(x,y)\geq 2\eps_i>\eps.$ In that case $x$ and $y$ cannot be joined by a positive edge in $\SB(d, \eps)$. On the other hand, if $x, y\in F_i$ belong to the same component of $F_{i,\eps_i}$, since $-x$ is not in the same component as $x$, we have $\dist(-x,y)\geq 2\eps_i>\eps.$ This shows that vertices in the same component cannot be joined by a negative edge. We can now verify that $F_{i}$ is a balanced set of $\SB(d, \eps)$. Being symmetric, if there is a negative cycle in $F_{i}$, then there is a negative cycle, say $C$, with exactly one negative edge. In this cycle, the sequence of positive edges implies that all its vertices are in the same component of $F_{i,\eps_i}$. But the two ends of the only negative edge must lie in two different components, a contradiction. The collection of $F_{i}$, $1\leq i \leq d$, then gives a balanced $d$-colouring of $\SB(d, \eps)$, contradicting $(a)$. \end{proof} \begin{proof}[Proof of $(c1) \Rightarrow (b)$] This is a consequence of the fact that for every open cover $A_1, A_2,\ldots, A_d$ of $S^d$, there is a closed cover $F_1, F_2,\ldots, F_d$ of $S^d$ such that $F_i\subseteq A_i$ for each $i=1, 2, \ldots, d$ (see for example \cite{AH45}). \end{proof} \begin{proof}[Proof of $(b) \Rightarrow (a)$] Suppose that for every $\eps_0>0$ there is an $\eps\leq \eps_0$ such that $\SB(d,\eps)$ is balanced $d$-colourable. Let $F_1, F_2, \ldots, F_d$ be the colour classes of a balanced $d$-colouring of $\SB(d,\eps)$. We may assume that for each point $x$, $x$ and $-x$ are assigned the same colour. Thus each $F_i$ is a symmetric set. Let $A_i$ be the $(\eps /8)$-neighbourhood of $F_i$ for each $i$, $i=1, 2, \ldots, d$. By $(b)$, there is a pair $x$ and $-x$ of antipodal points connected in some $A_j$. As $A_j$ is an open set, these two points are path-connected.
Thus there is a sequence $x=x_1, x_2, \ldots, x_k=-x$ of vertices such that the distance between $x_l$ and $x_{l+1}$ is at most $\eps /4$. By the choice of $A_j$, for each $x_l$, there is a vertex $x'_l$ in $F_j$ at distance at most $\eps /8$ from $x_l$. Hence $x'_l$ and $x'_{l+1}$ are at distance at most $\eps /2$. Moreover, $x'_1$ and $-x'_k$ have distance at most $\eps/2$ as well. So the vertices $\{x'_1,x'_2,\ldots, x'_k\}$ induce a negative cycle, contradicting the fact that $F_j$ induces a balanced set. \end{proof} \begin{proof}[Proof of $(c1) \Leftrightarrow (c2)$] The statement $(c2)$ immediately implies $(c1)$. For the other direction, suppose every $\eps$-neighbourhood of $F_i$ connects a pair of antipodal points. Let $x_j$ and $-x_j$ be a pair of antipodal points connected in the $(1/j)$-neighbourhood of $F_i$. Since $F_i$ is a closed (and compact) set, there is a limit point $x\in F_i$ of $\{x_j\}$. Then the antipodal pair $x$ and $-x$ of points is connected by $F_i$. \end{proof} Similar to the relation between Borsuk graphs and Schrijver graphs, for $d=n-k$ and sufficiently small $\eps > 0$, $\SB(d, \eps)$ admits a homomorphism to $\SSG(n,k)$. One such homomorphism is described as follows. By Theorem \ref{thm:signedGaleSchrijver}, we may assume that $\pm [n]$ is embedded in $S^d$ such that $i$ and $-i$ are antipodal and any open hemisphere contains an alternating $k$-set of $\pm [n]$, which is a vertex of $\SSG(n,k)$. For each point $x$ of $S^d$, let $A_x$ be an alternating $k$-set of $\pm [n]$ contained in the open hemisphere centered at $x$, and let $f(x) = A_x$. By compactness, if $\eps > 0$ is small enough, then $f$ is a homomorphism from $\SB(d, \eps)$ to $\SSG(n,k)$ that preserves the signs of the edges. So Theorem \ref{thm-signedBorsuk} gives an alternative proof of the result that $\chi_b(\SSG(n,k))=n-k+1$. Below we show a connection to yet another formulation of the signed Borsuk-Ulam theorem.
Let $X$ be a topological space. The \emph{Lusternik-Schnirelmann category} of $X$ is the smallest integer $k$ such that there exists an open cover $U_0, U_1, \ldots, U_k$ of $X$ with each $U_i$ being a contractible open set in $X$. Such a cover $\{U_i\}_{i=0}^k$ is called a \emph{categorical} cover of $X$. We refer to \cite{Hatcher2001} for the definition of a contractible space. \begin{theorem}\label{thm:Lusternik-Schnirelmann} The Lusternik-Schnirelmann category of the real projective space $\R\mathrm{P}^d$ is $d$, that is, for every open cover $U_1, U_2,\ldots, U_d$ of $\R\mathrm{P}^d$, one of the $U_i$'s is non-contractible in $\R\mathrm{P}^d$. \end{theorem} Here we show that this theorem is also equivalent to any of the statements of \Cref{thm-signedBorsuk}. \begin{proof}[\Cref{thm:Lusternik-Schnirelmann} $\Leftrightarrow$ \Cref{thm-signedBorsuk} $(b)$] A symmetric open cover $A_1, \ldots, A_{d}$ of $S^d$ corresponds to a natural open cover $U_1, \ldots, U_{d}$ of $\RP^d$ through the quotient map $q: S^d\to \RP^d$ that identifies the antipodal points. Then, each of the $U_i$'s is non-contractible if and only if the corresponding $A_i$ connects a pair of antipodal points of $S^d$ (see \cite{Hatcher2001}, Example 1.43, for more details). \end{proof} The Lusternik-Schnirelmann category can be equivalently defined with closed categorical covers, establishing the equivalence between \Cref{thm:Lusternik-Schnirelmann} and \Cref{thm-signedBorsuk} $(c2)$. Finally, we remark that \Cref{thm:Lusternik-Schnirelmann} is one of the equivalent forms of the original Borsuk-Ulam theorem (cf. for example \cite{CTOT03,Steinlein93}). Therefore, we have an equivalence among all of \Cref{thm:Borsuk-Ulam,thm:BorsukColoring,thm-signedBorsuk,thm:Lusternik-Schnirelmann} in this section.
\section*{Acknowledgment} The second author has received support under the program ``Investissement d'Avenir" launched by the French Government and implemented by ANR, with the reference ``ANR‐18‐IdEx‐0001" as part of its program ``Emergence". The third and sixth authors are supported by grant NSFC 12371359. The sixth author is also supported by grant U20A2068. \begin{thebibliography}{10} \bibitem{AH45} P.~Alexandroff and H.~Hopf. \newblock {\em Topologie I}. \newblock Ann Arbor, Michigan, 1945. \bibitem{CTOT03} Octav Cornea, Gregory Lupton, John Oprea, and Daniel Tanr\'e. \newblock {\em Lusternik-Schnirelmann Category}. \newblock American Mathematical Society, 2003. \bibitem{Harary1953} Frank Harary. \newblock On the notion of balance of a signed graph. \newblock {\em Michigan Math. J.}, 2:143--146, 1953/54. \bibitem{Hatcher2001} Allen Hatcher. \newblock {\em Algebraic Topology}. \newblock Cambridge University Press, 2001. \bibitem{JMNNQ24+} Andrea Jimenez, Jessica McDonald, Reza Naserasr, Kathryn Nurse, and Daniel Quiroz. \newblock Balanced-chromatic number and {H}adwiger-like conjectures. \newblock {\em Preprint}, 2024+. \bibitem{KN21} Franti\v{s}ek Kardo\v{s} and Jonathan Narboni. \newblock On the 4-color theorem for signed graphs. \newblock {\em European J. Combin.}, 91:Paper No. 103215, 8, 2021. \bibitem{MR0093538} F.~Kasch, M.~Kneser, and H.~Kupisch. \newblock Unzerlegbare modulare {D}arstellungen endlicher {G}ruppen mit zyklischer {$p$}-{S}ylow-{G}ruppe. \newblock {\em Arch. Math. (Basel)}, 8:320--321, 1957. \bibitem{KNWYZZ25} L.~Kuffner, R.~Naserasr, L.~Wang, X.~Yu, H.~Zhou, and X.~Zhu. \newblock Fractional balanced coloring of signed graphs. \newblock {\em Manuscript}, 2025+. \bibitem{MR0514625} L.~Lov\'{a}sz. \newblock Kneser's conjecture, chromatic number, and homotopy. \newblock {\em J. Combin. Theory Ser. A}, 25(3):319--324, 1978. \bibitem{M03} Ji\v{r}\'i Matou\v{s}ek. \newblock {\em Using the {B}orsuk-{U}lam theorem}. \newblock Universitext. Springer-Verlag, Berlin, 2003. \newblock Lectures on topological methods in combinatorics and geometry, Written in cooperation with Anders Bj\"orner and G\"unter M. Ziegler. \bibitem{m2016chromatic} Edita M\'{a}\v{c}ajov\'{a}, Andr\'{e} Raspaud, and Martin \v{S}koviera. \newblock The chromatic number of a signed graph. \newblock {\em Electron. J. Combin.}, 23(1):Paper 1.14, 10, 2016. \bibitem{Munkers2000} James~R. Munkres. \newblock {\em Topology}. \newblock Prentice Hall, Inc., Upper Saddle River, NJ, second edition, 2000. \bibitem{Schrijver} Alexander Schrijver. \newblock Vertex-critical subgraphs of {K}neser graphs. \newblock {\em Nieuw Archief voor Wiskunde}, 3(26):454--461, 1978. \bibitem{ST06} G{\'{a}}bor Simonyi and G{\'{a}}bor Tardos. \newblock Local chromatic number, {K}y {F}an's theorem, and circular colorings. \newblock {\em Combinatorica}, 26(5):587--626, 2006. \bibitem{Steinlein93} H.~Steinlein. \newblock Spheres and symmetry: {B}orsuk's antipodal theorem. \newblock {\em Topological Methods in Nonlinear Analysis}, 1:15--33, 1993. \bibitem{Zaslavsky1982-DAM} Thomas Zaslavsky. \newblock Signed graphs. \newblock {\em Discrete Applied Mathematics}, 4(1):47--74, 1982. \bibitem{Zaslavsky1987} Thomas Zaslavsky. \newblock Balanced decompositions of a signed graph. \newblock {\em J. Combin. Theory Ser. B}, 43(1):1--13, 1987. \end{thebibliography} \end{document}
\documentclass{ws-ijbc} \usepackage{ws-rotating} \usepackage{graphicx} \usepackage{hyperref} \usepackage{xcolor} \usepackage{float} \usepackage{array} \usepackage[normalem]{ulem} \usepackage{cancel} \begin{document} \markboth{G. Yigit et al.}{Instability in a reaction-diffusion system} \title{Parameter spaces for cross-diffusive-driven instability in a reaction-diffusion system on an annular domain} \author{Gulsemay Yigit} \address{Department of Mathematics, Faculty of Engineering and Natural Sciences,\\ Bahcesehir University, Istanbul, T\"{u}rkiye\\ Mathematics Department, Mathematics Building, 1984 Mathematics Road,\\ The University of British Columbia, Vancouver, BC Canada V6T 1Z2.\\ [email protected]} \author{Wakil Sarfaraz} \address{Corndel Ltd., 410 Highgate Studio 53-79 Highgate Road, \\ London NW5 1TL, United Kingdom\\ ICT, Bahrain Polytechnic,\\ PO Box 33349, Isa Town, Kingdom of Bahrain\\ [email protected]} \author{Raquel Barreira} \address{Instituto Polit\'{e}cnico de Set\'{u}bal, Escola Superior de Tecnologia de Set\'{u}bal \\Campus do IPS Estefanilha, 2914-508 Set\'{u}bal, Portugal\\ Centro de Matem\'{a}tica, Aplicac\~{o}es Fundamentais e Investigac\~{a}o Operacional (CMAFcIO), \\ Universidade de Lisboa, Portugal\\ [email protected]} \author{Anotida Madzvamuse\footnote{Author for correspondence}} \address{Mathematics Department, Mathematics Building, 1984 Mathematics Road,\\ The University of British Columbia, Vancouver, BC Canada V6T 1Z2.\\ Department of Mathematics and Applied Mathematics, \\University of Pretoria, Pretoria 0132, South Africa\\ Department of Mathematics and Applied Mathematics, \\University of Johannesburg, \\PO Box 524 Auckland Park, 2006, South Africa\\ University of Zimbabwe, Department of Mathematics and Computational Science, \\ Mt Pleasant, Harare, Zimbabwe\\ [email protected]} \maketitle \begin{abstract} In this work, the influence of geometry and domain size on spatiotemporal pattern formation is investigated to establish
parameter spaces for a cross-diffusive reaction-diffusion model on an annulus. By applying linear stability theory, we derive conditions which can give rise to Turing, Hopf and transcritical types of diffusion-driven instabilities. We explore whether the selection of a sufficiently large domain size, together with the appropriate selection of parameters, can give rise to the development of patterns on non-convex geometries, e.g., an annulus. Hence, the key research methodology and outcomes of our studies include: a complete analytical exploration of the spatiotemporal dynamics in an activator-depleted reaction-diffusion system; a linear stability analysis to characterise the dual roles of cross-diffusion and domain size in pattern formation on an annular region; the derivation of the instability conditions through lower and upper bounds of the domain size; the full classification of the model parameters; and a demonstration of how cross-diffusion relaxes the general conditions for the reaction-diffusion system to exhibit pattern formation. To validate the theoretical findings and predictions, we employ the finite element method to reveal spatial and spatiotemporal patterns in the dynamics of the cross-diffusive reaction-diffusion system within a two-dimensional annular domain. These observed patterns resemble those found in ring-shaped cross-sectional scans of hypoxic tumours. Specifically, the cross-section of an actively invasive region in a hypoxic tumour can be effectively approximated by an annulus.
\end{abstract} \keywords{Reaction-diffusion systems; cross-diffusion; pattern formation; parameter space; spatiotemporal dynamics; annular domain; finite element method; standing wave.} \section{Introduction} \noindent Patterns varying in space and time are abundantly seen in biological systems, from the development of embryos to the patterning of animal skin pigmentation \cite{murray2001mathematical,baker2008partial,gierer1972theory,satnoianu2000turing,madzvamuse2000numerical}. The concept of pattern formation in reaction-diffusion systems originates from Turing's seminal work in the field of biological morphogenesis \cite{turing1990chemical}. A primary mechanism proposed to drive the formation of these patterns is the instability that arises from the effect of diffusion on the dynamical interplay of species undergoing the process of reaction-diffusion. Since the initial work of Alan Turing, the theory of reaction-diffusion has been significantly expanded and explored by subsequent researchers in many fields of science \cite{gierer1972theory,schnakenberg1979simple,murray2001mathematical,baker2008partial,satnoianu2000turing,madzvamuse2000numerical,landge2020pattern,diez2023turing}. Cross-diffusion has an immense effect in the study of pattern formation, biological systems, and chemical reactions, where the interactions between different components play a crucial role in the spatiotemporal dynamics of reaction-diffusion systems. Cross-diffusion refers to a special type of diffusion process in which one of the species influences the flux of the concentration gradient of the others. Cross-diffusion coefficients are positioned in the off-diagonal elements of the diffusion matrix. Empirical observations reveal that these coefficients can be positive, negative, or zero, and in some cases the diffusion coefficients can be of substantially different magnitudes.
A negative cross-diffusion coefficient indicates that one species is attracted towards higher concentrations of another species \cite{vanag2009cross}. Turing instabilities, emerging as a consequence of cross-diffusion, have been widely studied and presented in the literature within a substantial class of predator-prey systems involved in ecological or competitive interactions \cite{gambino2018cross,li2019cross, zhang2020characterizing, song2023cross, yang2023cross}. Studies include applications of cross-diffusive systems to pattern formation \cite{yi2021turing, kersner2022competition, ritchie2022turing, kumari2022cross, liu2022dynamics,yang2022cross}, bacterial chemotaxis \cite{bellomo2022chemotaxis, kim2023modeling,gaffney2023spatial}, epidemiology \cite{duan2019turing,chang2020cross, hu2021turing, mohan2021positive,hu2022dynamics}, and so forth. Most of these studies consider the linear case of cross-diffusion terms; in some instances the research also includes nonlinear cross-diffusion, which can be found in \cite{gambino2012turing,gambino2013pattern}. Another application relevant to cross-diffusive systems is illustrated by the process of electrodeposition \cite{lacitignola2018cross,lacitignola2021turing}. A study on spatiotemporal pattern formation for a cross-diffusive reaction-diffusion system is given by \cite{yang2022cross}. Compared to the study of \cite{yang2022cross}, our distinctive results show the effect of the ring-shaped domain size on the ability of the system to exhibit spatiotemporal pattern formation. Our research introduces a novel aspect in the field of cross-diffusive reaction-diffusion systems through the analysis and numerical simulation of spatiotemporal patterns, with a particular emphasis on the ring-shaped domain. To the best of our knowledge, to date, very little analysis and simulation of reaction-diffusion systems characterising the effects of domain shape, domain size, and cross-diffusion has been undertaken in the literature.
In this respect, we aim to contribute insights to the understanding of spatiotemporal patterns on the ring-shaped domain. By extending the idea of spatiotemporal pattern formation without cross-diffusion, with the relation between the domain size and system parameters presented in \cite{sarfaraz2017classification,sarfaraz2018domain,sarfaraz2020stability}, we provide novel results for cross-diffusive systems. The focus of the works \cite{sarfaraz2017classification,sarfaraz2018domain,sarfaraz2020stability} is primarily on spatiotemporal pattern formation without cross-diffusion. Therefore, the current work constitutes a substantial extension of the state-of-the-art analysis of reaction-diffusion systems by exploring the collective effects of cross-diffusion, geometry, and domain size on the dynamical properties of the reaction-diffusion system (RDS). On the other hand, understanding pattern formation in reaction-diffusion systems on growing domains is crucial for modeling dynamic biological processes. Studies such as \cite{cotterell2015local,krause2021introduction,madzvamuse2010stability,klika2017history,van2021turing,krause2023concentration,tsubota2024bifurcation} highlight the significant influence of domain growth on patterning. Our work is distinguished from such studies by deriving conditions on the domain size for studying instability in reaction-diffusion systems on a stationary domain. Extensions of this work to growing domains and evolving surfaces are the subject of our current studies and are beyond the scope of this manuscript. Obtaining closed-form solutions to reaction-diffusion systems is not practical or possible for the majority of cases involving nonlinear reaction kinetics. This complexity is overcome by using numerical approaches, which provide indispensable tools for gaining insights into the spatiotemporal dynamics of RDSs.
Therefore, the significance of numerical methods becomes evident in their ability to address, computationally, the nature of RDSs and facilitate their comprehensive understanding. Within this context, numerous computational works have explored the numerical simulation of RDSs. One such powerful numerical method, widely acknowledged in the literature, is the finite element method, specifically used for the spatial discretisation of RDSs. This numerical method enhances our ability to analyse and interpret the spatiotemporal dynamical behaviour of RDSs. Examples of studies which investigate the computational aspects of RDSs on different types of geometries, including stationary and growing domains, can be found in \cite{madzvamuse2007velocity,barreira2011surface,madzvamuse2014fully,lakkis2013implicit,tuncer2017projected,frittelli2021bulk,song2022efficient,frittelli2023bulk,frittelli2017lumped}. We have employed the finite element method to enable the visualisation of patterning structures resulting from Hopf and transcritical types of bifurcations as well as from Turing instabilities. This work introduces a set of novel theoretical findings by deriving bifurcation results through an analytical approach motivated by dynamical systems theory for a cross-diffusive system defined on a non-convex geometry. These results have the potential to offer profound insights into both the effects of domain size on the dynamics and the impact of cross-diffusion on annular regions in the context of pattern formation. Our distinctive contributions also include a comprehensive framework elucidating the influence of geometry on the development of spatiotemporal pattern formation for a cross-diffusive reaction-diffusion model.
Our preliminary studies \cite{yigit2024domain,sarfaraz2024understanding} leading to this study finally allowed us to fully understand and compare the geometric impact on spatiotemporal pattern formation of cross-diffusive systems on flat, circular and annular two-dimensional geometries. The focus of \cite{yigit2024domain,sarfaraz2024understanding} is primarily on the role of convex geometries in Turing's theory of pattern formation. The significant novelty in the present study lies in the expansion of the methodology, where we analytically provide new bifurcation results for a cross-diffusive reaction-diffusion system within a non-convex geometry. A key finding of this study is that the parameter spaces generated on non-convex geometries are substantially different from those of convex geometries. Therefore, this article is organised as follows: In Section \ref{Modeleqn}, we present the non-dimensional reaction-diffusion equations with linear cross-diffusion on the annular region, a non-convex geometry. We provide the necessary conditions for the well-posedness of the model equations. A detailed linear stability analysis of the cross-diffusive reaction-diffusion system is presented in Section \ref{sec:linearstability}, providing the geometric effect of the annular region on the eigenfunctions and eigenmodes of the corresponding eigenvalue problem. In this section, we also present new bifurcation results for the cross-diffusive system to exhibit Hopf/transcritical and Turing types of instabilities. Section \ref{sec:ParameterSpaces} provides a comprehensive classification of parameter spaces corresponding to Hopf/transcritical types of bifurcation regions and Turing regions. These regions are generated using the theoretical results obtained in Section \ref{sec:linearstability}.
In Section \ref{sec:FemSim}, we show the numerical simulations of cross-diffusive reaction-diffusion systems using the finite element method to confirm the proposed classification of parameter spaces and to validate the predicted dynamical behaviour based on theoretical considerations. In Section \ref{sec:Conclusion}, we provide the conclusive remarks and possible research directions of the present work. \subsection{Model equations}\label{Modeleqn} We consider an \emph{activator-depleted} reaction-diffusion system (RDS) with linear cross-diffusion modelling the dynamics of two chemical species $u(x,y,t)$ and $v(x,y,t)$ within a non-convex circular domain $\Omega = \{(x,y)\in\mathbb{R}^2: a^2<x^2+y^2<b^2 \}$. Here, the domain consists of an annulus with an inner radius $a$ and outer radius $b$. The boundary is represented as $\partial\Omega = \{(x,y)\in\mathbb{R}^2: x^2+y^2=a^2 \} \cup\{(x,y)\in\mathbb{R}^2: x^2+y^2=b^2 \}$. The RDS in the presence of linear cross-diffusion in non-dimensional form on the defined annulus under Neumann boundary conditions is given by, \begin{equation}\label{r1} \begin{cases} \begin{cases} \displaystyle{\frac{\partial u}{\partial t} =\Delta_r u + d_v \Delta_r v + \gamma f(u,v)}, & \smash{\raisebox{-1.6ex}{$(r,\theta)\in \Omega$, $t>0$ }} \\[6pt] \displaystyle{\frac{\partial v}{\partial t} =d_u \Delta_r u + d\Delta_r v + \gamma g(u,v),}\\ \end{cases} \\ \begin{cases} \displaystyle{\left(\frac{\partial u}{\partial r} + d_v \frac{\partial v}{\partial r}\right)\Big{|}_{r=a}} = \left(d_u \frac{\partial u}{\partial r} + d \frac{\partial v}{\partial r}\right)\Big{|}_{r=a} = 0, &\smash{\raisebox{-1.6ex}{$(r,\theta)\in \partial \Omega$, $t>0$, }} \\[6pt] \displaystyle{\left(\frac{\partial u}{\partial r} + d_v \frac{\partial v}{\partial r}\right)\Big{|}_{r=b}} = \left(d_u \frac{\partial u}{\partial r} + d \frac{\partial v}{\partial r}\right)\Big{|}_{r=b} = 0, \end{cases}\\ u(r,\theta,0)=u_0(r,\theta), ~v(r,\theta,0)=v_0(r,\theta),~ (r,\theta)\in 
\Omega,~ t=0, \end{cases} \end{equation} where $\Delta_r $ denotes the Laplace operator in $(r, \theta)$ coordinates, provided later by equation \eqref{Laplace}. The macroscopic dispersion, approximated by the mean-field effect of the random movements (Brownian motion) of the microscopic particles of the reacting and diffusing species, motivates the use of the diffusion operator, which, by incorporating the Fickian law of diffusion and conservation of mass (both for self-diffusive and cross-diffusive species), justifies the derivation of the cross-diffusive reaction-diffusion system given by \eqref{r1} \cite{murray2001mathematical,madzvamuse2015cross}. The parameter $d$ in (\ref{r1}) is a non-dimensional diffusion coefficient representing the ratio of the dimensional diffusion coefficients, defined as $d=\dfrac{D_v}{D_u}$. Similarly, the constants $d_u$ and $d_v$ are non-dimensional ratios of cross-diffusion coefficients defined as $d_u=\dfrac{D_{uv}}{D_u}$ and $d_v=\dfrac{D_{vu}}{D_u}$, respectively. The quantities $D_u>0$ and $D_v>0$ are the dimensional diffusion coefficients for the species $u$ and $v$, respectively. The cross-diffusion coefficient of $u$ influenced by the presence of the concentration of $v$ is denoted by $D_{uv}$, and that of $v$ influenced by the concentration presence of $u$ is represented by $D_{vu}$. A detailed calculation of the non-dimensionalization process resulting in system \eqref{r1} is presented in \cite{madzvamuse2015cross}. The functions $f(u,v)=\alpha-u+u^2v$ and $g(u,v)=\beta-u^2v$ are the nonlinear reaction kinetics coupling the two species, also known as the activator-depleted kinetics, with $\alpha>0$ and $\beta>0$ \cite{murray2001mathematical,gierer1972theory,schnakenberg1979simple}.
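For reference, the uniform steady state of these kinetics can be verified by a direct computation, standard for the activator-depleted model: adding the two kinetic equations eliminates the quadratic term,
\begin{align*}
f(u_s,v_s)+g(u_s,v_s)&=\alpha+\beta-u_s=0 &&\Longrightarrow\quad u_s=\alpha+\beta,\\
g(u_s,v_s)&=\beta-u_s^2v_s=0 &&\Longrightarrow\quad v_s=\frac{\beta}{(\alpha+\beta)^2}.
\end{align*}
The corresponding Jacobian entries of the kinetics, evaluated at $(u_s,v_s)$, are then
\begin{equation*}
f_u=\frac{\beta-\alpha}{\alpha+\beta},\qquad
f_v=(\alpha+\beta)^2,\qquad
g_u=-\frac{2\beta}{\alpha+\beta},\qquad
g_v=-(\alpha+\beta)^2.
\end{equation*}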
{\bf Remark.} Note that, in order for the cross-diffusive system to be well-posed, the main diffusion parameter $d$ and the cross-diffusion parameters $d_u$ and $d_v$ must be given such that the inequality $d - d_u d_v > 0$ holds \cite{madzvamuse2015cross}. Therefore, the condition on the positivity of the determinant of the diffusion matrix is satisfied, and this enables us to write the boundary conditions as, \begin{equation} \begin{cases} \displaystyle{{\frac{\partial u}{\partial r}\Big{|}_{r=a}}}=\displaystyle{{\frac{\partial v}{\partial r}\Big{|}_{r=a}}}=0, &\smash{\raisebox{-1.6ex}{$(r,\theta)\in \partial \Omega$, $t>0$, }} \\[6pt] \displaystyle{{\frac{\partial u}{\partial r}\Big{|}_{r=b}}}=\displaystyle{{\frac{\partial v}{\partial r}\Big{|}_{r=b}}}=0. \end{cases} \end{equation} \section{Linear stability analysis for the cross-diffusive system on a two-dimensional annular region}\label{sec:linearstability} System \eqref{r1} has a constant uniform steady state solution represented by $(u_s,v_s)=(\alpha+\beta,\frac{\beta}{(\alpha+\beta)^2})$ \cite{murray2001mathematical,madzvamuse2015cross, madzvamuse2015stability}. This steady state is the unique positive stationary point that satisfies the nonlinear algebraic equations obtained by setting $f(u_s,v_s)=g(u_s,v_s)=0$, along with the zero-flux boundary conditions imposed on System \eqref{r1}. Following the standard approach of linear stability theory, System \eqref{r1} is perturbed locally around the uniform steady state as $(u_s+\epsilon\bar{u},v_s+\epsilon\bar{v})$ with $0<\epsilon \ll 1$. The perturbation variables $(\bar{u},\bar{v})$ are assumed to be sufficiently smooth to admit a Taylor expansion in two variables. We proceed by neglecting $O(\epsilon^2)$ and higher order terms.
Therefore, we obtain the following linearized system in matrix-vector form, \begin{equation}\label{rr2} \bar{\textbf{w}}_t= \textbf{D}\Delta\bar {\textbf{w}} +\gamma \textbf{J}_\textbf{F} \bar{\textbf{w}}, \end{equation} where $\bar{\textbf{w}}$ is the solution vector, $\textbf{D}$ denotes the diffusion coefficient matrix, $\textbf{F}$ is the vector of reaction terms, and $\textbf{J}_\textbf{F}$ represents the Jacobian matrix. These can be written explicitly as, \begin{equation}\label{diffmatrix} \bar{\textbf{w}}= \begin{bmatrix} \bar{u}\\ \bar{v} \end{bmatrix}, \; \textbf{D}= \begin{bmatrix} 1& d_v\\ d_u & d \end{bmatrix}, \; \textbf{F}(u,v)= \begin{bmatrix} f(u_s,v_s)\\ g(u_s,v_s) \end{bmatrix}, \; \textbf{J}_{\textbf{F}}= \begin{bmatrix} f_u(u_s,v_s)& f_v(u_s,v_s)\\ g_u(u_s,v_s)& g_v(u_s,v_s) \end{bmatrix}. \end{equation} To obtain analytical results on the bifurcation properties, we employ the linearization procedure and seek the eigenfunctions of the Laplace operator that satisfy the homogeneous Neumann boundary conditions. These eigenfunctions are obtained by solving the following eigenvalue problem on the annular domain, \begin{equation}\label{eigen1} \begin{cases} \Delta_r w=-k^2 w, \qquad \quad k\in \mathbb{R},\qquad (r,\theta) \in \Omega, \\[10pt] \displaystyle{\frac{\partial w}{\partial r} \Big{|}_{r=a}={\frac{\partial w}{\partial r} \Big{|}_{r=b}=0}} \qquad \quad a,b \in \mathbb{R}_+\setminus \{0\}, \end{cases} \end{equation} where $\Delta_r $ represents the Laplace operator in polar coordinates, \begin{equation}\label{Laplace} \Delta_r u(r,\theta)=\frac{1}{r}\frac{\partial}{\partial r}\left( r \frac{\partial u}{\partial r}\right) +\frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}. \end{equation} The solution of the eigenvalue problem \eqref{eigen1} is sought in the separable form $w(r,\theta)=R(r)\Theta(\theta)$ under Neumann boundary conditions on the annular region.
The solution of the eigenvalue problem \eqref{eigen1} on a flat ring with Neumann boundary conditions follows from the results in \cite{sarfaraz2018domain} and is stated in the following theorem. \begin{theorem}\label{theoeigen} (Solution of the eigenvalue problem) Let $w(r,\theta)$ satisfy the eigenvalue problem \eqref{eigen1} under homogeneous Neumann boundary conditions on the annular domain, with $n$ the order of the Bessel function satisfying $n \in \mathbb{R}\setminus \dfrac{1}{2} \mathbb{Z}$. Then, for a fixed pair $(m,n)$, $m \in\mathbb{Z}^{+}$, there exists an infinite set of eigenfunctions of the Laplace operator, expressed as, \begin{equation} w_{m,n}(r,\theta)=[R_{m,n}^1(r)+R_{m,n}^2(r)]\Theta_n(\theta), \end{equation} where $R_{m,n}^1(r)$ and $R_{m,n}^2(r)$ represent the Bessel functions of the first kind of orders $n$ and $-n$, given by \begin{equation} \begin{cases} R_{m,n}^1(r) = \sum\limits_{j=0}^{\infty} \frac{(-1)^jC_0(k_{m,n}r)^{2j+n}}{4^jj!(n+j)(n+j-1)\cdots(n+1)} \\ R_{m,n}^2(r) = \sum\limits_{j=0}^{\infty} \frac{(-1)^jC_0(k_{m,n}r)^{2j-n}}{4^jj!(-n+j)(-n+j-1)\cdots(-n+1)} \end{cases}\label{besselsol} \end{equation} for $j=2m$, with $\Theta_n(\theta)=e^{in\theta}$. Here, $C_0$ is the first coefficient of the Bessel series. The eigenvalues $k_{m,n}^2$ of the eigenvalue problem \eqref{eigen1} are given by, \begin{equation} k_{m,n}^2=\frac{4(a^nb+ab^n)(2m+1)(n+2m+1)(n+4m)}{ab(a^{n+1}+b^{n+1})(n+4m+2)}. \end{equation} \end{theorem} \begin{proof} The detailed proof of Theorem \ref{theoeigen} is presented step by step in \cite{sarfaraz2020stability}. \end{proof} \textbf{Remark.} The eigenvalues $k_{m,n}^2$ can be represented using $b=\rho+a$ as follows, \begin{equation}\label{eigenvalueExpression} k_{m,n}^2=f(\rho,n)\frac{4(2m+1)(n+2m+1)(n+4m)}{(n+4m+2)}, \end{equation} where \begin{equation} f(\rho,n)=\dfrac{a^{n-1} +(\rho+a)^{n-1}}{a^{n+1}+ (\rho+a)^{n+1}}.
\end{equation} Before we proceed with the domain-size conditions to understand the dynamics of the cross-diffusive system, the properties of $f(\rho,n)$ need to be explored in the limiting cases with respect to the order of the Bessel functions. We also explore the properties of $f(\rho,n)$ under variation of the inner radius $a$ and of the thickness of the ring, denoted by $\rho$. This is accomplished by considering the upper bounds of $f(\rho,n)$ in the two possible cases, namely $n<0$ and $n>0$, \begin{equation}\label{eigenValueSup1} {\underset{0>n \in \mathbb{R}\setminus \frac{1}{2} \mathbb{Z} }\sup} f(\rho,n)= \lim_{n\rightarrow -\infty} f(\rho,n)= \dfrac{2}{a(\rho+a)}, \end{equation} and \begin{equation}\label{eigenValueSup2} {\underset{0<n \in \mathbb{R}\setminus \frac{1}{2} \mathbb{Z} }\sup} f(\rho,n)= \lim_{n\rightarrow 0} f(\rho,n)= \dfrac{1}{a(\rho+a)}. \end{equation} Therefore, the conditions are derived using the above bounds of the domain-weighting function $f(\rho,n)$ \cite{sarfaraz2020stability}. A numerical demonstration of these limiting cases for various combinations of $a$ and $\rho$ is shown in Fig. \ref{domainfactor}.
\begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{analysisOfDomainFactor.png} \\ {\small (a) $f(\rho,n)$ by the variation of $a$ and $\rho$.} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{domain-factor-variation-by-a.png} \\ {\small (b) $f(\rho,n)$ by the variation of $a$.} \end{tabular} \caption{(a)-(b) Analysis of the domain factor in the eigenvalues of the Laplace operator, demonstrated for the limiting cases with respect to the order of the Bessel function $n$, the inner radius $a$, and the thickness of the ring $\rho$.} \label{domainfactor} \end{figure} This demonstration verifies that, in the limiting case when the inner radius of the ring approaches zero, the domain factor in the eigenvalues of the Laplace operator on a ring coincides with that on the disc, i.e. $\lim_{a \rightarrow 0}f(\rho,n)=\frac{1}{\rho^2}$. Note that this is the case since \begin{align*} \lim_{a \rightarrow 0}\dfrac{a^{n-1} +(\rho+a)^{n-1}}{a^{n+1}+ (\rho+a)^{n+1}} = \frac{1}{\rho^2}. \end{align*} The solution to system \eqref{r1} can be expressed through the method of separation of variables, where the ansatz takes the form of the product of the eigenfunctions of the Laplacian $\Delta_r$ on the annulus and an exponential in the time variable $t$, written as, \begin{equation}\label{eigen2} {{u}}(r,\theta,t)=\sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty}U_{m,n}e^{-k^2_{m,n}t} R_{m,n}\Theta_n(\theta), \; \text{and} \; {{v}}(r,\theta,t)=\sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty}V_{m,n}e^{-k^2_{m,n}t} R_{m,n}\Theta_n(\theta), \end{equation} where $U_{m,n}$ and $V_{m,n}$ are the coefficients of the eigenfunctions in the eigenfunction expansion \eqref{eigen2}.
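The two limits used above, the $n\to 0$ supremum \eqref{eigenValueSup2} and the disc limit $\lim_{a\to 0}f(\rho,n)=1/\rho^2$, can be spot-checked numerically. The following is a minimal sketch with arbitrary illustrative values; for the $a\to 0$ check we take $n>1$, so that the $a^{n-1}$ term in the numerator vanishes as $a\to 0$.

```python
# Numerical check (sketch) of two limiting behaviours of the domain factor
# f(rho, n) = (a^(n-1) + (rho+a)^(n-1)) / (a^(n+1) + (rho+a)^(n+1)):
#   (i)  n -> 0:  f -> 1/(a(rho+a))  (the supremum over n > 0),
#   (ii) a -> 0 (with n > 1 here):  f -> 1/rho^2, recovering the disc case.

def domain_factor(a, rho, n):
    """Domain-weighting factor f(rho, n) for an annulus of inner radius a."""
    b = rho + a
    return (a**(n - 1) + b**(n - 1)) / (a**(n + 1) + b**(n + 1))

if __name__ == "__main__":
    a, rho = 0.7, 1.3
    print(abs(domain_factor(a, rho, 1e-8) - 1/(a*(rho + a))) < 1e-6)  # True
    print(abs(domain_factor(1e-12, 2.0, 1.3) - 1/2.0**2) < 1e-3)      # True
```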
Substitution of \eqref{eigen2} into the vectorised representation \eqref{rr2} provides the fully linearised form of \eqref{r1}, \begin{align}\label{smat} \begin{bmatrix} \bar{u}_t\\ \bar{v}_t \end{bmatrix} = \begin{bmatrix} -k_{m,n}^2+\gamma f_u(u_s,v_s) &-d_vk_{m,n}^2+\gamma f_v(u_s,v_s)\\ -d_uk_{m,n}^2+\gamma g_u(u_s,v_s)&-dk_{m,n}^2+\gamma g_v(u_s,v_s) \end{bmatrix} \begin{bmatrix} \bar{u}\\ \bar{v} \end{bmatrix}. \end{align} The characteristic polynomial is written in terms of the trace and determinant of the stability matrix as, \begin{equation}\label{polyn} \lambda^2-\mathcal{T}(\alpha,\beta)\lambda+\mathcal{D}(\alpha,\beta)=0. \end{equation} In \eqref{polyn} above, the symbols $\mathcal{T}(\alpha,\beta)$ and $\mathcal{D}(\alpha,\beta)$ denote the trace and the determinant of the stability matrix given by \eqref{smat}, respectively. Thus, the pair of eigenvalues of the stability matrix of the system is obtained by calculating the roots of \eqref{polyn}, which are given by \begin{equation}\label{eigen} \lambda_{1,2}=\frac{\mathcal{T}(\alpha,\beta)\mp\sqrt{\mathcal{T}^2(\alpha,\beta)-4\mathcal{D}(\alpha,\beta)}}{2}. \end{equation} When both eigenvalues $\lambda_{1,2}$ given in \eqref{eigen} are real, the stability of the uniform steady-state $(u_s,v_s)$ depends on their signs: the uniform steady state is unstable if at least one of the eigenvalues $\lambda_1$ or $\lambda_2$ is positive. On the other hand, if the eigenvalues form a complex-conjugate pair, then the stability of the uniform steady-state $(u_s,v_s)$ is determined by the sign of the real part of the eigenvalues. To predict the dynamics of the system \eqref{r1} with regard to each type of instability, we investigate the parameter regions on the top-right quadrant of the Cartesian plane, i.e. $(\alpha,\beta) \in \mathbb{R}_+^{2}$.
This is because the only permissible values for the parameters $\alpha$ and $\beta$, by definition of the model, are positive real values. \subsection{Spatiotemporal pattern formation on an annular region} To determine the relationship between the model system parameters and the domain size, represented by the thickness of the annulus $\rho=b-a$, we first analyse the roots of the characteristic polynomial \eqref{polyn}. The parameter plane $(\alpha, \beta) \in \mathbb{R}^2_+$ is partitioned by a curve that distinguishes the regions in which $\lambda_{1,2}$ form either a real pair or a complex-conjugate pair. The implicit equation governing this curve is \begin{equation} \label{partitionb} \mathcal{T}^2(\alpha,\beta) -4\mathcal{D}(\alpha,\beta)=0. \end{equation} Equation \eqref{partitionb} signifies that one side of the curve corresponds to the region where $\lambda_{1,2}\in\mathbb{R}$, while the other side corresponds to $\lambda_{1,2}\in\mathbb{C}\backslash\mathbb{R}$. It follows that the eigenvalues $\lambda_{1,2}$ form a complex-conjugate pair if the parameters $(\alpha, \beta)\in\mathbb{R}^2_+$ satisfy the inequality \begin{equation} \label{complexpair} \mathcal{T}^2(\alpha,\beta) -4\mathcal{D}(\alpha,\beta)<0. \end{equation} An immediate consequence of \eqref{complexpair} is the inequality $\mathcal{D}(\alpha,\beta)>0$. The trace of the stability matrix \eqref{smat} does not involve the cross-diffusion constants $d_u$ and $d_v$. Therefore, we deduce that the conditions on the domain size derived from the trace remain the same as in \cite{sarfaraz2018domain}. This leads us to explore the positivity of $\mathcal{D}(\alpha,\beta)$ to fully understand the effect of cross-diffusion in the reaction-diffusion system.
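To make the partition \eqref{partitionb} concrete, the following sketch (with arbitrary illustrative parameter values, not taken from the analysis above) assembles the stability matrix of \eqref{smat} for the activator-depleted kinetics, evaluates $\mathcal{T}$, $\mathcal{D}$, and the roots \eqref{eigen}, and verifies the root-coefficient relations $\lambda_1+\lambda_2=\mathcal{T}$ and $\lambda_1\lambda_2=\mathcal{D}$.

```python
import cmath

# Illustrative sketch (parameter values are arbitrary): build the stability
# matrix of \eqref{smat} for f = alpha - u + u^2 v, g = beta - u^2 v, and
# evaluate the trace T, determinant D, and the eigenvalue pair lambda_{1,2}.

def stability_matrix(alpha, beta, d, d_u, d_v, gamma, k2):
    u_s = alpha + beta                     # uniform steady state
    v_s = beta / (alpha + beta)**2
    fu, fv = -1.0 + 2.0*u_s*v_s, u_s**2    # Jacobian entries at (u_s, v_s)
    gu, gv = -2.0*u_s*v_s, -u_s**2
    return [[-k2 + gamma*fu,     -d_v*k2 + gamma*fv],
            [-d_u*k2 + gamma*gu, -d*k2 + gamma*gv]]

def trace_det_eigs(M):
    T = M[0][0] + M[1][1]
    D = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    root = cmath.sqrt(T*T - 4.0*D)         # discriminant decides real vs complex
    return T, D, ((T - root)/2.0, (T + root)/2.0)

if __name__ == "__main__":
    M = stability_matrix(0.1, 0.9, d=10.0, d_u=0.1, d_v=0.2, gamma=29.0, k2=1.0)
    T, D, (l1, l2) = trace_det_eigs(M)
    print(abs(l1 + l2 - T) < 1e-9 and abs(l1*l2 - D) < 1e-6)  # True
```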
Before we proceed to the analysis of the positivity of the determinant $\mathcal{D}(\alpha,\beta)$ of the stability matrix \eqref{smat}, we recall the following theorems, which rely on the analysis of the trace $\mathcal{T}(\alpha,\beta)$ of the stability matrix \cite{sarfaraz2018domain}. \begin{theorem}\label{theo1} (Condition for Hopf/transcritical bifurcation) Let $u$ and $v$ satisfy the cross-diffusive reaction-diffusion system \eqref{r1} with real positive parameters $\alpha>0$, $\beta>0$, $d>0$, and $\gamma>0$ on an annular domain of thickness $\rho=b-a$, where $a$ and $b$ represent the inner and outer radii of the domain, respectively. For the system to exhibit a Hopf and/or transcritical bifurcation, the domain-size controlling parameter $\rho$ must be sufficiently large to satisfy, \begin{equation}\label{Theo1} \rho \geq\dfrac{8(d+1)(2m+1)(n+2m+1)(n+4m)-\gamma a^2(n+4m+2)}{\gamma a(n+4m+2)}, \end{equation} where $n \in\mathbb{R}\setminus \dfrac{1}{2} \mathbb{Z}$ is the associated order of the Bessel equation and $m$ is any positive integer. \end{theorem} \begin{proof} The proof of this theorem focuses on examining the positivity of the trace $\mathcal{T}(\alpha,\beta)$ of the stability matrix. It is noteworthy that the trace $\mathcal{T}(\alpha,\beta)$ remains invariant when cross-diffusion coefficients are introduced to the system. Consequently, we infer that the (necessary) condition for the system to exhibit a Hopf/transcritical-type bifurcation remains unaltered compared to the standard reaction-diffusion system without cross-diffusion. It must be noted that only the self-diffusion coefficient $d$ enters the condition, through the definition of the trace of the stability matrix. The proof establishing the relation between the domain size and the system parameters without cross-diffusion can be found in \cite{sarfaraz2020stability}.
\end{proof} \begin{theorem}\label{theo2} (Turing diffusion-driven instability) Let $u$ and $v$ satisfy the cross-diffusive reaction-diffusion system \eqref{r1} with real parameters $\alpha>0$, $\beta>0$, $d>0$, and $\gamma>0$ on the ring $\Omega \subset \mathbb{R}^2$ of thickness $\rho=b-a$, where $a$ and $b$ represent the inner and outer radii of the ring domain, respectively. If the domain-size controlling parameter $\rho$ satisfies the condition, \begin{equation} \rho<\dfrac{4(d+1)(2m+1)(n+2m+1)(n+4m)-\gamma a^2(n+4m+2)}{\gamma a(n+4m+2)}, \end{equation} then the instability of the cross-diffusive system \eqref{r1} is restricted to Turing type only, implying that this condition forbids temporal periodicity in the dynamics. Here, $n \in\mathbb{R}\setminus \dfrac{1}{2} \mathbb{Z}$ represents the order of the Bessel equation and $m$ is a positive integer. \end{theorem} \begin{proof} The proof of this theorem involves examining the real part of the eigenvalues $\lambda_{1,2}$ when the discriminant $\mathcal{T}^2-4\mathcal{D}$ of the characteristic polynomial is negative. As in Theorem \ref{theo1}, it is important to highlight that the trace $\mathcal{T}(\alpha,\beta)$ of the stability matrix remains unaffected by the values of $d_u$ and $d_v$. Consequently, we conclude that the conditions outlined in Theorem \ref{theo2} remain unchanged from the case without cross-diffusion, as discussed and proven in \cite{sarfaraz2020stability}. Hence, we note again that only the self-diffusion coefficient $d$ enters the condition, through the definition of the trace of the stability matrix. \end{proof} \noindent \textbf{Remark.} It is important to note that Theorem \ref{theo1} represents the necessary condition for the system to exhibit Hopf/transcritical-type bifurcations; on the other hand, it allows for the possibility that the system may undergo a Turing-type instability.
However, Theorem \ref{theo2} precludes temporal periodicity, permitting only spatial pattern formation. Both Theorems \ref{theo1} and \ref{theo2} are independent of cross-diffusion terms, since their proofs exploit the sign of $\mathcal{T}(\alpha,\beta)$, which is independent of the cross-diffusion constants of System \eqref{r1}. The conditions in Theorems \ref{theo1} and \ref{theo2} remain valid as necessary requirements in the case of cross-diffusive systems. Our analysis extends to establishing the sufficient conditions on the thickness of the ring $\rho$ in relation to the parameters of the reaction-cross-diffusion system, namely $d$, $d_u$, $d_v$, and $\gamma$. This is achieved by examining the positivity of the determinant $\mathcal{D}(\alpha,\beta)$ of the stability matrix. The process involves expanding the determinant of the stability matrix \eqref{smat} and rearranging $\mathcal{D}(\alpha,\beta)$ into the form of an implicit cubic polynomial in $\beta$. Each term of the polynomial shares the common factor $\frac{1}{\alpha+\beta}$, with all other parameters consolidated into the coefficients.
This expression reads as \begin{equation}\label{dnew} \mathcal{D}(\alpha,\beta)=p_0 + p_1 \beta + p_2 \beta^2 + p_3 \beta^3, \end{equation} where \begin{align} p_0 = \frac{1}{\alpha+\beta}\kappa_0(\alpha), ~~ p_1 = \frac{1}{\alpha+\beta} \kappa_1(\alpha), ~~ p_2 = \frac{1}{\alpha+\beta}\kappa_2(\alpha), ~~ p_3 = \frac{1}{\alpha+\beta}\kappa_3(\alpha).\notag \end{align} Here, the terms $\kappa_i$ ($i=0,1,2,3$) are expressed in terms of all the remaining parameters, and are given by \begin{eqnarray} &&\kappa_0(\alpha)=\alpha^3\gamma^2 + \alpha^3\gamma k_{m,n}^2 + \alpha dk_{m,n}^4 - \alpha d_u d_vk_{m,n}^4+\alpha d \gamma k_{m,n}^2 +\alpha^3d_u\gamma k_{m,n}^2, \nonumber \\ &&\kappa_1(\alpha)=dk_{m,n}^4 - d_ud_vk_{m,n}^4- d\gamma k_{m,n}^2 + 3\alpha^2\gamma^2 + 3\alpha^2\gamma k_{m,n}^2 +3\alpha^2d_u\gamma k_{m,n}^2 - 2d_v\gamma k_{m,n}^2, \nonumber \\ &&\kappa_2(\alpha)= 3\alpha\gamma( k_{m,n}^2(d_u+1)+ \gamma), \nonumber \\ &&\kappa_3(\alpha)=\gamma k_{m,n}^2 + \gamma^2 + d_u\gamma k_{m,n}^2 =\gamma( k_{m,n}^2(d_u+1)+ \gamma). \nonumber \end{eqnarray} Note that $(\alpha,\beta)\in\mathbb{R}_+^2$ are by definition non-zero positive constants; therefore, $\mathcal{D}(\alpha,\beta)>0$ requires the positivity of the cubic polynomial in $\beta$ given in \eqref{dnew}. We proceed by normalising the polynomial so that the coefficient of $\beta^3$ is one, which gives \begin{equation}\label{standardCubic} \beta^3+\dfrac{\kappa_2(\alpha)}{\kappa_3(\alpha)}\beta^2+\dfrac{\kappa_1(\alpha)}{\kappa_3(\alpha)}\beta+\dfrac{\kappa_0(\alpha)}{\kappa_3(\alpha)}>0. \end{equation} Note that the domain-size controlling parameter $\rho$, representing the thickness of the flat ring, is implicitly embedded within the coefficients of the cubic polynomial \eqref{standardCubic}.
This is through the fact that the coefficients of \eqref{standardCubic} depend directly on $k_{m,n}^2$, the eigenvalues of the Laplace operator, and $\rho$ is a parameter of the expression for $k_{m,n}^2$, as shown in \eqref{eigenvalueExpression}. We consider the following result from \cite{qi2020positivity} to obtain the conditions required for \eqref{standardCubic} to be positive in terms of the ring thickness $\rho$ together with $d$, $d_u$, $d_v$, and $\gamma$. \begin{proposition}\label{prop1} Let $\mathcal{D}(\beta)=\beta^3+a\beta^2+b\beta+c$ be a non-degenerate cubic polynomial, written as \begin{equation} \mathcal{D}(\beta)=h(\beta)\beta+c, \end{equation} where $h(\beta)=\beta^2+a\beta+b$, with $c>0$ and $\beta\geq 0$. Then $\mathcal{D}(\beta)$ is strictly positive provided that the quadratic polynomial $h(\beta)$ satisfies either $a\geq 0$ and $b\geq 0$, or $b>0$ and $4b\geq a^2$. \end{proposition} \begin{proof} The proof of this proposition is presented in \cite{qi2020positivity,schmidt1988positivity}.
\end{proof} The conditions on $a$, $b$, and $c$ in Proposition \ref{prop1} are expressed in terms of the reaction-diffusion system parameters as follows, \begin{equation} \label{trivial} a=\dfrac{\kappa_2(\alpha)}{\kappa_3(\alpha)}=\dfrac{3\alpha\gamma^2 + 3\alpha\gamma k_{m,n}^2 + 3\alpha d_u\gamma k_{m,n}^2}{\gamma k_{m,n}^2 + \gamma^2 + d_u\gamma k_{m,n}^2 } \geq 0, \end{equation} \begin{eqnarray}\label{ntrivial2} b=&\dfrac{\kappa_1(\alpha)}{\kappa_3(\alpha)}=\dfrac{(d - d_ud_v)k_{m,n}^4- (d\gamma - 3\alpha^2\gamma -3\alpha^2d_u\gamma - 2d_v\gamma) k_{m,n}^2}{\gamma k_{m,n}^2 + \gamma^2 + d_u\gamma k_{m,n}^2 } \\ \nonumber &\hspace{2cm}+\dfrac{ 3\alpha^2\gamma^2}{\gamma k_{m,n}^2 + \gamma^2 + d_u\gamma k_{m,n}^2 } \geq 0 , \end{eqnarray} and \begin{eqnarray}\label{ntrivial1} c=\dfrac{\kappa_0(\alpha)}{\kappa_3(\alpha)}=\dfrac{(d-d_ud_v)\alpha k_{m,n}^4 + (\alpha^3\gamma + \alpha d \gamma +\alpha^3d_u\gamma) k_{m,n}^2+\alpha^3\gamma^2}{\gamma k_{m,n}^2 + \gamma^2 + d_u\gamma k_{m,n}^2 } > 0. \end{eqnarray} With this setup in mind, we state Theorem \ref{Maincond} and provide its proof subsequently. \begin{theorem}\label{Maincond} (Condition on $\rho$ for spatiotemporal pattern formation of the cross-diffusive reaction-diffusion system) Let $u$ and $v$ satisfy System \eqref{r1} with cross-diffusion constants $d_u$ and $d_v$ as presented therein, with real positive parameters $\alpha>0$, $\beta>0$, $d>0$, $\gamma>0$, on an annulus $\Omega$ with thickness $\rho$ and inner radius $a$. Then the thickness of the annulus $\rho$ satisfying \begin{equation}\label{Theo4} \rho> \dfrac{8(d-d_ud_v) (2m+1)(n+2m+1)(n+4m)-\gamma a^2(7d+8d_v)(n+4m+2)}{(7d+8d_v)(n+4m+2)\gamma a}, \end{equation} is a sufficient condition for the parameter space $(\alpha,\beta)\in\mathbb{R}_+^2$ to admit values for which System \eqref{r1} undergoes spatiotemporal dynamics, i.e. pattern formation through a Hopf and/or transcritical bifurcation.
\end{theorem} \begin{proof} The strategy of this proof involves analysing the conditions on $a$, $b$, and $c$ as provided in \eqref{trivial}-\eqref{ntrivial1}. Direct algebraic operations on \eqref{trivial} lead to the trivial requirement $3\alpha \geq 0$, which allows us to assert that \eqref{trivial} holds true. Conditions \eqref{ntrivial2} and \eqref{ntrivial1} in Proposition \ref{prop1}, which together ensure the positivity of $\mathcal{D}(\beta)$, are investigated, and a set of two inequalities of the form \begin{equation}\label{newtrivial1} \begin{cases} 3\alpha^2+\dfrac{(d-d_u d_v)k_{m,n}^4-(d+2d_v)\gamma k_{m,n}^2}{(1+d_u)\gamma k_{m,n}^2+\gamma^2} \geq \dfrac{9\alpha^2}{4}, \\[10pt] \alpha^2+\dfrac{(d-d_ud_v)k_{m,n}^4+d\gamma k_{m,n}^2}{(1+d_u)\gamma k_{m,n}^2+\gamma^2} > 0, \end{cases} \end{equation} is deduced. The inequalities provided by \eqref{newtrivial1} are identical to those derived for a convex domain investigated in \cite{yigit2024domain}. This entails a universality in obtaining sufficient conditions for spatiotemporal dynamics in reaction-diffusion systems across different geometries, regardless of the convexity of the domain. However, the domain-dependent distinction emerges when the explicit expression for the eigenvalues of the Laplace operator corresponding to the geometry and boundary conditions is substituted and the inequalities are solved simultaneously for the domain-size controlling parameter, which in this case is $\rho$. Note that the inequalities in \eqref{newtrivial1} can be rearranged into the form \begin{equation}\label{sysIneqnew} \begin{cases} \dfrac{4(d-d_ud_v)k_{m,n}^4-4(d+2d_v)\gamma k_{m,n}^2}{(1+d_u)\gamma k_{m,n}^2+\gamma^2}= -3\alpha^2 \\[10pt] \dfrac{3(d-d_ud_v)k_{m,n}^4+3d\gamma k_{m,n}^2}{(1+d_u)\gamma k_{m,n}^2+\gamma^2} > -3\alpha^2 .
\end{cases} \end{equation} Without loss of generality, we substitute the first relation of \eqref{sysIneqnew} for $-3\alpha^2$ in the second inequality to obtain \begin{align} \dfrac{3(d-d_ud_v)k_{m,n}^4+3d\gamma k_{m,n}^2}{(1+d_u)\gamma k_{m,n}^2 +\gamma^2} > \dfrac{4(d-d_ud_v)k_{m,n}^4-4(d+2d_v)\gamma k_{m,n}^2 }{(1+d_u)\gamma k_{m,n}^2+\gamma^2}. \label{combinednew} \end{align} Solving \eqref{combinednew} for $k_{m,n}^2$ leads to the desired condition on the domain-size controlling parameter $\rho$ that ensures the positivity of the cubic polynomial \eqref{standardCubic} and, in turn, that of the determinant of the stability matrix given by \eqref{dnew}. To derive the desired condition, the sign of the denominator on both sides of \eqref{combinednew} is exploited, which requires investigating independently the cases in which this denominator is either positive or negative. This requirement enforces two independent cases to explore, namely $d_u>-1$ or $d_u<-1$, subject to $d-d_u d_v >0$. Hence, the analysis admits the cross-diffusion of one of the species to be either negative or positive. However, the determinant of the diffusion matrix must be positive to guarantee the regularity of System \eqref{r1}. We exploit the experimental investigation presented in \cite{vanag2009cross} and the results therein, where negative and positive cross-diffusion give rise to Turing-type behaviour in the dynamics. Such experimental findings create the platform to explore the cross-diffusion parameter $d_u$ across the real line, for both negative and positive values. Therefore, exploiting such observations, we consider $d_u>-1$, which corresponds to the positivity of the denominator on both sides of \eqref{combinednew}.
As a result, we write \begin{equation}\label{combinednew1} 3(d-d_ud_v)k_{m,n}^2+3d\gamma >4(d-d_ud_v)k_{m,n}^2-4(d+2d_v)\gamma \end{equation} or equivalently \begin{equation}\label{combinednew2} 3d\gamma +4(d+2d_v)\gamma >(d-d_ud_v)k_{m,n}^2. \end{equation} Inequality \eqref{combinednew2} is rewritten by isolating $k_{m,n}^2$, resulting in \begin{equation}\label{combinednew3} k_{m,n}^2< \dfrac{(7d+8d_v)\gamma}{d-d_u d_v}. \end{equation} To guarantee \eqref{combinednew3}, we replace $k_{m,n}^2$ by the larger of the two suprema, accounting for the limiting cases of the domain-size factor $f(\rho,n)$ as outlined in \eqref{eigenValueSup1}: \begin{equation}\label{combinednew4} \dfrac{8(2m+1)(n+2m+1)(n+4m)}{a(\rho+a)(n+4m+2)}< \dfrac{(7d+8d_v)\gamma}{d-d_u d_v}. \end{equation} The proposed sufficient condition of Theorem \ref{Maincond} is attained by solving \eqref{combinednew4} for $\rho$, which leads to \begin{equation} \rho> \dfrac{8(d-d_ud_v) (2m+1)(n+2m+1)(n+4m)-\gamma a^2(7d+8d_v)(n+4m+2)}{(7d+8d_v)(n+4m+2)\gamma a}, \end{equation} thereby completing the proof. \end{proof} \section{Geometric comparison of cross-diffusion induced parameter spaces for convex and non-convex domains}\label{sec:ParameterSpaces} In the analysis of the parameter spaces indicative of Turing instability, Hopf bifurcation, and transcritical curves within the $(\alpha, \beta)\in\mathbb{R}_+^2$ bifurcation plane, emphasis is placed on the partitioning curve that classifies the bifurcation plane into regions of real and complex roots, which must satisfy the implicit equation $\mathcal{T}^2(\alpha,\beta) -4\mathcal{D}(\alpha,\beta)=0$. A numerical verification procedure was initially undertaken to ascertain the stable and unstable regions via analysis of the real part of the eigenvalues, conducted across distinct two-dimensional geometries encompassing rectangular, circular, and annular domains, i.e. convex and non-convex geometries. Fig.
\ref{fullclass1} presents a comprehensive classification of these stable and unstable regions, illustrated according to the conditions outlined in Table \ref{Tab1}. Notably, regions characterized by positive real parts of the eigenvalues, regardless of their real or complex nature, denote instability of the solution of System \eqref{r1}, which in turn makes the steady state $(u_s,v_s)$ asymptotically unstable. Conversely, regions featuring negative real parts of the eigenvalues represent dynamic stability of the uniform steady state of System \eqref{r1}. Subsequently, the parameter regions depicted in Fig. \ref{fullclass1} are color-coded to signify distinct characteristics: magenta represents regions with real, distinct, negative eigenvalues; green, regions with complex eigenvalues with negative real parts; red, regions with complex eigenvalues with positive real parts; and blue, regions with real eigenvalues of which at least one is positive. Both the magenta and green regions denote stability, while a Hopf bifurcation in the system dynamics occurs within the red region. Additionally, the emergence of Turing-type patterns is observed by selecting parameters from within the blue regions. Further analysis conducted with varying domain sizes reveals a variety of shifts in the unstable spaces across different geometries, elucidating the intricate interplay between geometry and domain-size parameters. Specifically, while the total area of the unstable spaces remains intact, the numerical procedure demonstrates that the Turing spaces diminish with increasing domain size, with the Hopf bifurcation regions expanding so as to conserve the total area corresponding to diffusion-driven instability. The Hopf bifurcation regions display variability contingent upon the geometry under consideration.
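The colour classification described above reduces to a decision on the signs of $\mathcal{T}$ and the discriminant $\mathcal{T}^2-4\mathcal{D}$. A minimal sketch of that decision rule follows (labels as in Fig. \ref{fullclass1}; the numerical values are illustrative only).

```python
# Minimal sketch of the colour-coded classification used for Fig.~\ref{fullclass1}:
# given the trace T and determinant D of the stability matrix, decide whether the
# eigenvalue pair is real or complex and whether the steady state is stable.

def classify(T, D):
    """Return the colour label of a point in the (alpha, beta) plane."""
    disc = T*T - 4.0*D
    if disc >= 0.0:                       # real eigenvalue pair
        lam_max = (T + disc**0.5) / 2.0
        return "blue" if lam_max > 0.0 else "magenta"
    # complex-conjugate pair: stability decided by Re(lambda) = T/2
    return "red" if T > 0.0 else "green"

if __name__ == "__main__":
    print(classify(-1.0, 0.1))   # real pair, both negative     -> magenta
    print(classify(-1.0, 2.0))   # complex pair, Re(lambda) < 0 -> green
    print(classify(1.0, 2.0))    # complex pair, Re(lambda) > 0 -> red (Hopf region)
    print(classify(1.0, -2.0))   # real pair, one positive      -> blue (Turing region)
```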
This comprehensive analysis contributes towards formalizing a universal understanding of the influence of geometry and domain size on the system dynamics within distinct parameter spaces. The unstable parameter spaces depicted in Figs. \ref{HopfTransParameter}, \ref{TuringTheo1}, and \ref{TuringTheo2} are explored through numerical representation, with a focus on the variations of the system parameters $d$, $d_u$, $d_v$, and $\gamma$ within the constraints presented in Section \ref{sec:linearstability}, specified for an annular domain. Fig. \ref{HopfTransParameter} illustrates the Hopf bifurcation regions alongside the transcritical curves, adhering to the conditions stated in Theorems \ref{theo1} and \ref{Maincond}. It is recognized from the detailed analysis presented in Figs. \ref{HopfTransParameter} (a)-(b) that augmenting the self-diffusion coefficient $d$, while keeping the cross-diffusion coefficient $d_u$ positive and negative, respectively, induces a reduction in the Hopf bifurcation regions, coupled with a shift of the transcritical curves from $t_4$ to $t_1$. An important point of verification associated with these findings is that including cross-diffusion terms compels System \eqref{r1} to manifest unstable spaces in the plane $(\alpha,\beta)\in\mathbb{R}_+^2$ even for $d=1$. This entails that the result in the existing literature precluding unstable spaces for $d=1$ in reaction-diffusion systems of self-diffusive species \cite{madzvamuse2015cross} is limited to the absence of cross-diffusion, and is violated when cross-diffusion is added to the system. Furthermore, Fig. \ref{HopfTransParameter} (c) illustrates a diminishing trend in the Hopf bifurcation regions with an increase in positive $d_u$, whereas the transcritical curves remain invariant.
Figures \ref{HopfTransParameter} (c)-(f) collectively illustrate that elevating the cross-diffusion parameters $d_u$ and $d_v$ while holding $d$ constant gives rise to progressively smaller Hopf bifurcation regions, while the limit cycle curves remain unchanged across each case. Fig. \ref{TuringTheo1} shows the regions where the eigenvalues are real with at least one positive value, corresponding to the Turing spaces in light of Theorems \ref{theo1} and \ref{Maincond}. For the parameters chosen according to Fig. \ref{TuringTheo1}, we expect the system dynamics to exhibit exclusively spatial patterns. Fig. \ref{TuringTheo1} (a) shows that an increase in the self-diffusion coefficient $d$ results in the enlargement of the Turing regions. In Fig. \ref{TuringTheo1} (b), we observe that increasing $d$ with a negative cross-diffusion coefficient $d_u$ entails the expansion of the Turing regions. It is important to note that the main difference between Fig. \ref{TuringTheo1} (a) and Fig. \ref{TuringTheo1} (b) is the choice of the cross-diffusion coefficient $d_u$ being positive or negative. Figs. \ref{TuringTheo1} (a)-(c) are generated by fixing $d$ and $d_v$ and considering $d_u$ being positive, zero and negative, respectively. Fig. \ref{TuringTheo1} (f) shows the effect of $d_v$ by fixing $d$ and $d_u$. We observe that an increase in the self- and cross-diffusion coefficients has a positive effect on the enlargement of the Turing spaces. Fig. \ref{TuringTheo2} presents the Turing spaces conditional on Theorems \ref{theo2} and \ref{Maincond}. It is essential to note that Fig. \ref{TuringTheo2} considers the fact that the condition of Theorem \ref{theo2} forbids the existence of Hopf and transcritical types of bifurcation, allowing only spatial patterns in the system dynamics. Fig. \ref{TuringTheo2} is generated considering that the eigenvalues are real with at least one of them being positive. Fig. \ref{TuringTheo2} (a) presents the effect of the self-diffusion coefficient $d$, and Fig.
\ref{TuringTheo2} (b) shows the effect of the cross-diffusion coefficient $d_v$. We observe that, unlike the Hopf bifurcation regions, an increase in self-diffusion and cross-diffusion coefficients results in an enlargement of the Turing spaces. \begin{figure}[H] \centering \begin{tabular}{cc} \includegraphics[width=0.28\textwidth]{Planar_k_eq} \\ {\small (a) } \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.31\textwidth]{Disc_k_Eq} \\ {\small (b)} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{Ring_k_Eq} \\ {\small (c)} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{PlanarL_1} \\ {\small (d) } \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{DiscL_1} \\ {\small (e) } \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{RingL_1} \\ {\small (f)} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{PlanarL_3_8} \\ {\small (g) } \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{DiscL_3_8} \\ {\small (h) } \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{RingL_3_8} \\ {\small (i)} \end{tabular} \begin{tabular}{cc} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.3\textwidth]{PlanarL_15} \\ {\small (j) } \end{tabular} \begin{tabular}{cc} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.3\linewidth]{DiscL_15}\\ {\small (k)} \end{tabular} \begin{tabular}{cc} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.3\textwidth]{RingL_15} \\ {\small (l)} \end{tabular} \caption{First column: rectangular domain; second column: disc-shape domain; third column: ring-shape domain. Full parameter classification for (a)-(c) the equal case $k^2(m,n)=1$; (d)-(f) equal domain sizes $L=1$ and $\rho=1$; (g)-(i) equal domain sizes $L=3.8$ and $\rho=3.8$; (j)-(l) equal domain sizes $L=15$ and $\rho=15$. 
Colour coding: magenta regions correspond to real, distinct negative eigenvalues, green regions correspond to a complex-conjugate pair with a negative real part, red regions correspond to a complex-conjugate pair with a positive real part, and blue regions correspond to a real, distinct pair with at least one positive eigenvalue.} \label{fullclass1} \end{figure} \begin{table}[ht] \tbl{Spatiotemporal pattern formation conditions for cross-diffusive reaction-diffusion systems on two-dimensional geometries} {\begin{tabular}{l c c}\\[-1pt] \toprule Type of geometry&Eigenmodes $k^2_{m,n}$&Sufficient condition on domain size\\[4pt] \hline\\[-1pt] Rectangular &$k^2_{m,n}=\frac{(m^2+n^2)\pi^2}{L^2}$ &$L^2> \frac{(d-d_ud_v) (m^2+n^2)\pi^2}{(7d+8d_v)\gamma}$ \\[6pt] Circular (disc) &$k^2_{m,n}=\frac{4(2m+1)(n+2m+1)(n+4m)}{\rho^2(n+4m+2)}$ & $ \rho^2> \frac{4(d-d_ud_v) (2m+1)(n+2m+1)(n+4m)}{(7d+8d_v)(n+4m+2)\gamma}$\\[6pt] Flat ring (annulus) &$k_{m,n}^2=\frac{8(2m+1)(n+2m+1)(n+4m)}{a(\rho+a)(n+4m+2)}$ &$\rho> \frac{8(d-d_ud_v) (2m+1)(n+2m+1)(n+4m)-\gamma a^2(7d+8d_v)(n+4m+2)}{(7d+8d_v)(n+4m+2)\gamma}$\\[2pt] \botrule \end{tabular}} \label{Tab1} \end{table} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Hopfdvarying} \\ {\small (a) Varying $d$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{hopfddunegative} \\ {\small (b) Varying $d$ with $d_u<0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Hopfdupositive} \\ {\small (c) Varying $d_u$ positively} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Hopfdu0} \\ {\small (d) Varying $d_v$ for $d_u=0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Hopfdunegative} \\ {\small (e) Varying $d_v$ for $d_u<0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Hopfdvvarying} \\ {\small (f) Varying $d_v$ for $d_u=1$} \end{tabular} \caption{(a)-(f) Parameter spaces illustrating 
Hopf bifurcation regions and limit cycle curves with domain-size $\rho$ restricted to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} for various system parameters. $n=0.1$, $m=1$, $\rho=14.8$, $\gamma=100$.} \label{HopfTransParameter} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{1Turing1} \\ {\small (a) Varying $d$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{2Turing1} \\ {\small (b) Varying $d$ with $d_u<0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{3Turing1} \\ {\small (c) Varying $d_u$ positively} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{4Turing1} \\ {\small (d) Varying $d_v$ for $d_u=0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{5Turing1} \\ {\small (e) Varying $d_v$ for $d_u<0$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{6Turing1} \\ {\small (f) Varying $d_v$ for $d_u=1$} \end{tabular} \caption{(a)-(f) Turing regions with domain-size $\rho$ restricted to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} for various system parameters. $n=0.1$, $m=1$, $\rho=14.8$, $\gamma=100$.} \label{TuringTheo1} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{2Turingd} \\ {\small (a) Varying $d$} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{2Turingdv} \\ {\small (b) Varying $d_v$} \end{tabular} \caption{(a)-(b) Turing regions with domain-size $\rho$ restricted to satisfy conditions of Theorems \ref{theo2} and \ref{Maincond} for various system parameters. 
$n=0.1$, $m=1$, $\rho=7.3$, $\gamma=100$.} \label{TuringTheo2} \end{figure} \section{Finite element simulations on ring-shape domain}\label{sec:FemSim} In this section, we employ the finite element method to obtain numerical simulations of the model system \eqref{r1}, aiming to validate the theoretical findings outlined in Section \ref{sec:linearstability}. The finite element method rests on the weak variational formulation of the problem \cite{madzvamuse2007velocity,barreira2011surface,madzvamuse2014fully,lakkis2013implicit,tuncer2017projected,frittelli2021bulk,song2022efficient,frittelli2023bulk,frittelli2017lumped}. The weak variational form is transformed into a semi-discrete form by triangulating the domain into a conforming finite element mesh \cite{evans2022partial,brenner2008mathematical}. This process results in a system of ordinary differential equations; the desired finite element solution is then obtained by applying a suitable time-stepping scheme, yielding a fully-discrete weak formulation. Details of time-stepping schemes for reaction-diffusion systems can be found in \cite{madzvamuse2006time1, ruuth1995implicit}. In all our simulations presented in this section, model parameters are given in the figures and their captions. The reaction-kinetic parameters $\alpha$ and $\beta$ are selected from the parameter spaces generated, while the self- and cross-diffusion coefficients are chosen to ensure that the necessary and sufficient conditions for the generation of the parameter spaces are fulfilled. Furthermore, we choose the scaling parameter appropriately. The shape of the patterns and the convergence rates remained unchanged under successive refinement of the time step and the spatial mesh. Since we are working on a non-convex domain, we use the open-source mesh generator Gmsh \cite{geuzaine2009gmsh}. 
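As an illustration of this pipeline, and not the finite element solver used in this paper, the following Python sketch advances a one-dimensional, periodic finite-difference analogue of the cross-diffusive activator-depleted system by one forward Euler step; all names and default values are for exposition only:

```python
import numpy as np

def euler_step(u, v, dx, dt, d=1.0, du_c=0.001, dv_c=0.46,
               gamma=730.0, alpha=0.09, beta=0.2):
    """One forward Euler step for a 1-D periodic finite-difference
    analogue of the cross-diffusive activator-depleted system
    (illustrative only; the paper uses finite elements on an annulus)."""
    lap = lambda w: (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / dx ** 2
    f = gamma * (alpha - u + u ** 2 * v)   # activator kinetics
    g = gamma * (beta - u ** 2 * v)        # inhibitor kinetics
    u_new = u + dt * (lap(u + du_c * v) + f)       # unit self-diffusion plus cross term
    v_new = v + dt * (lap(dv_c * u + d * v) + g)
    return u_new, v_new
```

A quick sanity check for any such implementation: the uniform steady state $u_0=\alpha+\beta$, $v_0=\beta/(\alpha+\beta)^2$ is an exact fixed point of this discrete map, since the kinetics vanish there and the Laplacian of a constant is zero.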
We use 7936 elements and 8292 degrees of freedom in all our simulations on the discretized domain. In all our simulations, initial conditions are taken as small perturbations around the uniform steady state \cite{madzvamuse2015stability,sarfaraz2017classification, sarfaraz2018domain,sarfaraz2020stability}. We have used two different forms of initial conditions to demonstrate the applicability of the numerical solver. First, initial conditions were taken to be small perturbations near the uniform steady state given by \begin{eqnarray*}\label{inits} u_0(x,y)= \alpha+\beta+ \epsilon_1 \cos (2\pi(x+y)) + \epsilon_2 \sum\limits_{n=0}^{8} \cos{(n\pi x)} ,\\ v_0(x,y)= \dfrac{\beta}{(\alpha+\beta)^2} + \epsilon_1 \cos(2\pi(x+y)) + \epsilon_2 \sum\limits_{n=0}^{8} \cos{(n\pi x)}, \end{eqnarray*} with $\epsilon_1=0.0016$ and $\epsilon_2 = 0.001$. Second, we also considered initial conditions prescribed by purely small random perturbations \begin{eqnarray*}\label{initsnew} u_0(x,y)= \alpha+\beta+ \epsilon \, rand , \\ v_0(x,y)= \dfrac{\beta}{(\alpha+\beta)^2} +\epsilon \, rand, \end{eqnarray*} where $rand$ is a random number drawn from a uniform distribution with mean one. The first approach allows us to validate the numerical solver when parameter values are selected to excite specific modes $m$ and $n$, for example. The second assumes no specific knowledge of the behaviour of the solution of the reaction-diffusion system close to bifurcation points. For mode selection and the role of initial conditions close to bifurcation points, the interested reader is referred to \cite{madzvamuse2015stability,madzvamuse2000numerical}. We begin our numerical experiments on the effects of parameter choice on spatial pattern formation using conditions on the thickness of the annulus $\Omega$, denoted by $\rho$. Fig. 
\ref{Turing1} shows the Turing-type pattern formation predicted analytically when system parameters are chosen in adherence to the conditions stated in Theorems \ref{theo1} and \ref{Maincond}. A time step of $\Delta t =0.0025$ and model parameter values $d=1$, $\gamma=730$, $d_u=0.001$, $d_v=0.46$, $\alpha=0.09$ and $\beta=0.2$ are selected for the finite element simulation of System \eqref{r1}. We observe the convergence of the numerical solution to a spatially inhomogeneous, time-independent solution. During this evolution to the spatially inhomogeneous steady state, patterns evolve from stripes to spots in the early stages. The shape of the spot-type pattern remains unchanged in the later stages of its evolution. This is validated by analysing the $L_2$ norm of the discrete time-derivatives of the numerical solutions $u$ and $v$, as shown in Fig. \ref{Turing1} (h). Fig. \ref{TuringNegativeCD} presents the Turing-type pattern formation in light of the conditions stated in Theorems \ref{theo1} and \ref{Maincond}, using the parameters $\Delta t =0.0025$, $d=1$, $\gamma=720$, $d_u=-0.1$, $d_v=0.5$, $\alpha=0.09$ and $\beta=0.2$. Here, we explore the impact of negative cross-diffusion on pattern formation. We observe the formation of a spatially inhomogeneous steady state which is time-independent in the later stages of its development. The important observation in Fig. \ref{TuringNegativeCD} is that negative cross-diffusion can give rise to Turing-type pattern formation. Once the patterns evolve from stripes to spots in the early stages, they remain unchanged as time progresses. This can be seen from the $L_2$ norm of the discrete time derivatives of the solutions $u$ and $v$, as shown in Fig. \ref{TuringNegativeCD} (h). 
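The convergence diagnostic used in panels (h) of these figures can be written compactly. The Python sketch below is a hypothetical post-processing helper: it computes the discrete $L_2$ norms $\|(u^{n+1}-u^n)/\Delta t\|_{2}$ from stored solution snapshots and flags convergence to a time-independent state once the last norm falls below a tolerance:

```python
import numpy as np

def time_derivative_norms(snapshots, dt):
    """L2 norms of the discrete time-derivative between consecutive
    solution snapshots (rows of `snapshots`)."""
    snaps = np.asarray(snapshots, dtype=float)
    return np.linalg.norm(np.diff(snaps, axis=0), axis=1) / dt

def has_converged(snapshots, dt, tol=1e-6):
    """True once the latest discrete time-derivative norm is below tol,
    i.e. the solution has reached a time-independent state."""
    return time_derivative_norms(snapshots, dt)[-1] < tol
```

A decaying sequence of snapshots is flagged as converged, while a persistently oscillating one (as in the limit cycle simulations below) is not.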
It is essential to note that, in the absence of cross-diffusion, a Turing-type pattern cannot be obtained with the self-diffusion coefficient $d=1$ alone for this two-component reaction-diffusion system. In the standard theory of diffusion-driven instability, one of the criticisms of Turing's theory is that it requires diffusion coefficients of significantly different magnitudes. In particular, it requires that the inhibitor diffuses much faster than the activator, resulting in what is known as {\it long-range inhibition, short-range activation}. In our case, by adding cross-diffusion to the system, a two-component reaction-diffusion system with equal self-diffusion has the capacity to give rise to pattern formation, as illustrated in the numerical simulations of Fig. \ref{Turing1} and Fig. \ref{TuringNegativeCD}. We note that the main difference between the simulations of Fig. \ref{Turing1} and Fig. \ref{TuringNegativeCD} is the sign of the cross-diffusion coefficient. Both Figs. \ref{Turing1} and \ref{TuringNegativeCD} show the formation of spatially inhomogeneous steady state solutions which are time-independent at later stages. Next, we demonstrate that if Theorems \ref{theo2} and \ref{Maincond} are simultaneously fulfilled, that is, if there exist model parameters and modes $m$ and $n$ such that the domain-size $\rho$ is bounded both from above and below, then Hopf and/or transcritical bifurcations are completely forbidden and only Turing patterns are allowed. In Fig. \ref{Turing2} we provide a numerical example illustrating this behaviour, showing the emergence of a Turing-type pattern when model parameters are selected to fulfil the conditions stated in Theorems \ref{theo2} and \ref{Maincond}. Model parameter values are selected as $\Delta t =0.0025$, $d=12$, $\gamma=270$, $d_u=1$, $d_v=1.7$, $\alpha=0.07$ and $\beta=0.45$ according to the conditions in Theorems \ref{theo2} and \ref{Maincond}. 
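Checking such parameter selections against the sufficient conditions is mechanical. The Python sketch below is an illustrative helper that transcribes the rectangular-domain entry of Table \ref{Tab1}: patterns for mode $(m,n)$ require $L^2$ to exceed the returned bound.

```python
import numpy as np

def rect_domain_bound(m, n, d, du, dv, gamma):
    """Right-hand side of the rectangular-domain condition of Table 1;
    the sufficient condition reads L^2 > rect_domain_bound(...)."""
    return (d - du * dv) * (m ** 2 + n ** 2) * np.pi ** 2 \
        / ((7 * d + 8 * dv) * gamma)
```

Increasing $\gamma$ or the cross-diffusion coefficient $d_v$ lowers the bound, so smaller domains can sustain patterns, consistent with the trends discussed above.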
Comparing the results of Figs. \ref{Turing1} and \ref{TuringNegativeCD} with Fig. \ref{Turing2}, we observe that spot-type patterns are less abundant on the annular region. The evolution profile of the temporal stability of the dynamics is well observed by visualising the $L_2$ norms of the discrete time-derivative of the numerical solutions of the components $u$ and $v$. As presented in the parameter regions of Fig. \ref{HopfTransParameter}, when the eigenvalues are a complex-conjugate pair with positive real parts, we expect the system to exhibit periodicity in the spatiotemporal pattern formation process. This behaviour is presented in Fig. \ref{Limitcycle1} as a series of snapshots showing how the shape of the spots changes periodically in time. The alternating behaviour of the spot radii, transitioning from smaller to larger, is well captured in Fig. \ref{Limitcycle1}. The system parameters are selected to satisfy the conditions stated in Theorems \ref{theo1} and \ref{Maincond} on $\rho$, with $\Delta t =0.0025$, $d=2.6$, $\gamma=375$, $d_u=1.6$, $d_v=0.5$, $\alpha=0.09$ and $\beta=0.1$. Spatiotemporal periodicity is shown by depicting the $L_2$ norm of the discrete time-derivative of the components $u$ and $v$. We observe the same temporal gap between the peaks of the discrete time-derivative of the solution, as shown in the $L_2$ norm of the time derivatives given in Fig. \ref{Limitcycle1} (k). Our next finite element simulation, given in Fig. \ref{Limitcycle2}, provides spatiotemporal {\it periodic} pattern formation with negative cross-diffusion together with the special case of self-diffusion coefficient $d=1$, which demonstrates the effect of the cross-diffusion. Fig. \ref{Limitcycle2} presents the finite element simulations when the eigenvalues are a complex-conjugate pair with a positive real part, as shown in Fig. \ref{HopfTransParameter}. 
The system parameters are selected to satisfy the conditions of Theorems \ref{theo1} and \ref{Maincond} on $\rho$, with $\Delta t =0.0025$, $d=1$, $\gamma=250$, $d_u=-0.9$, $d_v=0.55$, $\alpha=0.085$ and $\beta=0.1$. The alternating behaviour of spot-type standing waves, transitioning from smaller to larger, is well captured in Figs. \ref{Limitcycle1} and \ref{Limitcycle2}. Spatiotemporal periodicity is shown by the plot of the $L_2$ norm of the discrete time-derivative of the components $u$ and $v$. We observe the same temporal gap, with different amplitudes, in the $L_2$ norm of the time derivatives given in Fig. \ref{Limitcycle2} (k). In Fig. \ref{Limitcycle3}, we provide 3D views of the simulations presented in Fig. \ref{Limitcycle2}. With these 3D views, we can capture how the system dynamics evolve both in space and time, resulting in what are known as ``standing waves''. Lastly, Fig. \ref{NegativeCD} illustrates the relationship between the plot of the $L_2$ norm of the discrete time-derivative and the selected simulations, exhibiting the periodicity of the evolution of the spatiotemporal pattern. Note that Figs. \ref{Limitcycle2}, \ref{Limitcycle3} and \ref{NegativeCD} represent the same finite element simulation with the same parameter values, presented in different ways for better visualisation and interpretation. 
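The temporal gap between the peaks can be estimated directly from the sampled $L_2$-norm signal. The Python sketch below is a hypothetical post-processing helper: it locates strict local maxima of the signal and returns their mean spacing, an estimate of the oscillation period of the standing wave:

```python
import numpy as np

def peak_times(signal, dt):
    """Times of strict local maxima of a uniformly sampled signal."""
    s = np.asarray(signal, dtype=float)
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    return idx * dt

def estimate_period(signal, dt):
    """Mean spacing between consecutive peaks (oscillation period);
    NaN if fewer than two peaks are found."""
    t = peak_times(signal, dt)
    return float(np.mean(np.diff(t))) if len(t) > 1 else float('nan')
```

Applied to the $L_2$-norm traces of panels (k), equal peak spacings with possibly different amplitudes indicate the time-periodic dynamics described above.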
\begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{1Td1t0_04} \\ {\small (a) t=0.04} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{1Td1t0_075} \\ {\small (b) t=0.075} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{1Td1t025} \\ {\small (c) t=0.25} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{1Td1t08} \\ {\small (d) t=0.8} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{1Td1t1} \\ {\small (e) t=1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{1Td1t1_5} \\ {\small (f) t=1.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{1Td1t2} \\ {\small (g) t=2} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.64\textwidth]{1L2normd1} \\ {\small (h) Discrete time derivative of the solutions $u$ and $v$} \end{tabular} \caption{(a)-(g) Finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on an annular domain showing the transient process to a spatially inhomogeneous, time-independent pattern. Parameters are selected to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} with $\Delta t =0.0025$, $d=1$, $\gamma=730$, $d_u=0.001$, $d_v=0.46$, $\alpha=0.09$ and $\beta=0.2$, as shown in Fig. 
\ref{TuringTheo1}. (h) Plot of the $L_2$ norms showing the convergence of the discrete time-derivative of the numerical solutions $u$ and $v$.} \label{Turing1} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{NTurt0_075} \\ {\small (a) t=0.075} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{NTurt0_1} \\ {\small (b) t=0.1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{NTurt0_25} \\ {\small (c) t=0.25} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{NTurt0_5} \\ {\small (d) t=0.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{NTurt1} \\ {\small (e) t=1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{NTurt1_5} \\ {\small (f) t=1.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{NTurt2} \\ {\small (g) t=2} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.64\textwidth]{NL2} \\ {\small (h) Discrete time derivative of the solutions $u$ and $v$} \end{tabular} \caption{(a)-(g) Finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on an annular domain exhibiting the transient process to a spatially inhomogeneous, time-independent pattern. Parameters are selected to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} with $\Delta t =0.0025$, $d=1$, $\gamma=720$, $d_u=-0.1$, $d_v=0.5$, $\alpha=0.09$ and $\beta=0.2$, as shown in Fig. 
\ref{TuringTheo1}. (h) Plot of the $L_2$ norms showing the convergence of the discrete time-derivative of the numerical solutions $u$ and $v$.} \label{TuringNegativeCD} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{2Turt0_02} \\ {\small (a) t=0.02} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{2Turt0_05} \\ {\small (b) t=0.05} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{2turt0.1} \\ {\small (c) t=0.1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{2Turt0_25} \\ {\small (d) t=0.25} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{2Turt0_5} \\ {\small (e) t=0.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{2Turt1} \\ {\small (f) t=1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{2Turt2} \\ {\small (g) t=2} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.64\textwidth]{2TL2} \\ {\small (h) Discrete time derivative of the solutions $u$ and $v$} \end{tabular} \caption{(a)-(g) Finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on an annular domain illustrating the transient process to a spatially inhomogeneous, time-independent pattern. Parameters are selected to satisfy conditions of Theorems \ref{theo2} and \ref{Maincond} with $\Delta t =0.0025$, $d=12$, $\gamma=270$, $d_u=1$, $d_v=1.7$, $\alpha=0.07$ and $\beta=0.45$, as shown in Fig. \ref{TuringTheo2}. 
(h) Plot of the $L_2$ norms showing the convergence of the discrete time-derivative of the numerical solutions $u$ and $v$.} \label{Turing2} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{L1t0_1} \\ {\small (a) t=0.1} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{L1t0_3} \\ {\small (b) t=0.3} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t0_45} \\ {\small (c) t=0.45} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t0_509} \\ {\small (d) t=0.50} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t0_825} \\ {\small (e) t=0.825} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t1_27} \\ {\small (f) t=1.27} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t1_45} \\ {\small (g) t=1.45} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t1_982} \\ {\small (h) t=1.982} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t2_0} \\ {\small (i) t=2} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{L1t2_5} \\ {\small (j) t=2.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.64\textwidth]{LimitL2norm} \\ {\small (k) Discrete time derivative of the solutions $u$ and $v$} \end{tabular} \caption{(a)-(j) Finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on a disc-shape domain exhibiting spatiotemporal time-periodic pattern formation, also known as {\it standing waves}. Parameters are selected to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} on $\rho$ with $\Delta t =0.0025$, $d=2.6$, $\gamma=375$, $d_u=1.6$, $d_v=0.5$, $\alpha=0.09$ and $\beta=0.1$, as shown in Fig. \ref{HopfTransParameter}. 
(k) Plot of the $L_2$ norms of the discrete time-derivatives showing the periodicity of the solutions $u$ and $v$.} \label{Limitcycle1} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{Nt0_235} \\ {\small (a) t=0.235} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{Nt0_7} \\ {\small (b) t=0.7} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt0_945} \\ {\small (c) t=0.945} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt1_495} \\ {\small (d) t=1.495} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt1_7225} \\ {\small (e) t=1.725} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt1_8} \\ {\small (f) t=1.8} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt1_965} \\ {\small (g) t=1.965} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt2_12} \\ {\small (h) t=2.12} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt2_2675} \\ {\small (i) t=2.265} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{Nt2_5} \\ {\small (j) t=2.5} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.64\textwidth]{NegativeCD_LC} \\ {\small (k) Discrete time derivative of the solutions $u$ and $v$} \end{tabular} \caption{(a)-(j) Finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on disc shape domain exhibiting spatiotemporal time-periodic pattern formation. Parameters are selected to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} on $\rho$ with $\Delta t =0.0025$, $d=1$, $\gamma=250$, $d_u=-0.9$, $d_v=0.55$, $\alpha=0.085$ and $\beta=0.1$, as shown in Fig. \ref{HopfTransParameter}. 
(k) Plot of the $L_2$ norms showing the periodicity of the discrete time-derivative of the numerical solutions $u$ and $v$.} \label{Limitcycle2} \end{figure} \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{3Dt0_2} \\ {\small (a) t=0.2} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.28\linewidth]{3Dt0_55} \\ {\small (b) t=0.55} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt0_86} \\ {\small (c) t=0.86} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt0_9} \\ {\small (d) t=0.9} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt1_025} \\ {\small (e) t=1.025} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt1_495} \\ {\small (f) t=1.495} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt1_725} \\ {\small (g) t=1.725} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt1_775} \\ {\small (h) t=1.775} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt2_12} \\ {\small (i) t=2.12} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt2_2675} \\ {\small (j) t=2.2675} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt2_4325} \\ {\small (k) t=2.4325} \end{tabular} \begin{tabular}{cc} \includegraphics[width=0.3\linewidth]{3Dt2_5} \\ {\small (l) t=2.5} \end{tabular} \caption{(a)-(l) 3D views of the finite element simulations corresponding to the $u$-component of the cross-diffusive reaction-diffusion system on a disc-shape domain exhibiting spatiotemporal time-periodic pattern formation. Parameters are selected to satisfy conditions of Theorems \ref{theo1} and \ref{Maincond} on $\rho$ with $\Delta t =0.0025$, $d=1$, $\gamma=250$, $d_u=-0.9$, $d_v=0.55$, $\alpha=0.085$ and $\beta=0.1$, as shown in Fig. \ref{HopfTransParameter}. 
We observe the classical phenomenon of ``standing waves''.} \label{Limitcycle3} \end{figure} \begin{figure}[H] \centering \begin{tabular}{cc} \includegraphics[width=1.05\textwidth]{negativeCD_simulation} \\ \end{tabular} \caption{Evolution of the $L_2$ norm of the discrete time-derivative of the $u$ and $v$ finite element solutions with negative cross-diffusion. Model parameter values are selected as $\Delta t =0.0025$, $d=1$, $\gamma=250$, $d_u=-0.9$, $d_v=0.55$, $\alpha=0.085$ and $\beta=0.1$.} \label{NegativeCD} \end{figure} \section{Conclusion}\label{sec:Conclusion} Investigating cross-diffusive reaction-diffusion systems is a rapidly emerging area of research for pattern formation, with numerous applications to chemical and biological problems in developmental and cell biology, materials science, and the plant sciences. In this study, we demonstrate the importance of a proper parameter selection strategy through a rigorous understanding of the spatiotemporal behaviour of solutions, which depends critically on the size of the domain. An analytical linear stability method is applied to obtain the relationship between the domain size $\rho$ and the system parameters $d$, $d_u$, $d_v$, and $\gamma$ on a flat ring-shape domain. A key novelty of this work is the comparison of the parameter spaces, via the eigenvalues of the stability matrix, on convex and non-convex geometries. To the authors' knowledge, this work is the first of its kind where such comparisons are made and detailed conclusions are derived. The detailed parameter spaces have been classified, taking into account the Hopf/transcritical bifurcations and Turing instabilities on the annulus. It must be noted that most of the spaces generated with the self-diffusion coefficient $d=1$ exist only in the presence of cross-diffusion for a two-component reaction-diffusion system. 
We note that such a restriction does not hold for multi-component reaction-diffusion systems \cite{cotterell2015local,krause2021introduction,madzvamuse2010stability,klika2017history,van2021turing,krause2023concentration,tsubota2024bifurcation}; for two-component reaction-diffusion systems, such spaces do not exist in the absence of cross-diffusion. To confirm the predicted behaviour of the dynamics, model parameter values are hand-picked from the parameter regions, and the reaction-diffusion system with linear cross-diffusion is then solved with these values by the finite element method on an annular domain. Plots of the discrete $L_2$ norms of the discrete time-derivative of the solutions are generated to illustrate the temporal behaviour of the system as well as the formation of spatially inhomogeneous Turing patterns. For example, when parameter values are selected from the Turing parameter space, the $L_2$ norm decays rapidly at the early stages; this is followed by rapid exponential growth associated with the growing modes of the linear reaction-diffusion system when the real part of the eigenvalues is positive; finally, the growth plateaus and starts to decay monotonically due to the nonlinear reaction kinetics, which bound the exponentially growing modes. Another example is the periodic behaviour of the $L_2$ norm, associated with the periodicity of limit cycles and Hopf bifurcation types of dynamics; here, model parameters are selected from the spaces satisfying Hopf and transcritical instabilities. Our numerical simulations exhibit the expected dynamical behaviour of the reaction-diffusion system, accounting for the effects of cross-diffusion and domain size on the ring-shape domain. Hence, the key research methodology and outcomes of our studies can be summarised as follows. 
These comprise a complete analytical exploration of the spatiotemporal dynamics in an activator-depleted reaction-diffusion system; a linear stability analysis revealing the dual roles of cross-diffusion and the domain size of an annulus; the derivation of precise stability conditions through lower and upper bounds on the domain size; a full classification of the model parameters; and a demonstration of how cross-diffusion relaxes the general conditions for the reaction-diffusion system to exhibit pattern formation. Our current studies involve a multi-pronged research programme to (i) extend the analysis to stationary surfaces where analytical tractability can be established, (ii) understand the role of domain size during growth and development on planar and evolving surfaces in the presence of linear cross-diffusion, and (iii) explore weakly nonlinear analysis when nonlinear cross-diffusion is considered. \section*{Acknowledgement} GY would like to thank the Scientific and Technological Research Council of T\"{u}rkiye (T\"{U}BITAK) for their support. GY would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Mathematics of movement: an interdisciplinary approach to mutual challenges in animal ecology and cell biology, where partial work on this paper was undertaken and supported by EPSRC grant EP/R014604/1. This work (AM) was supported by the Canada Research Chair (Tier 1) in Theoretical and Computational Biology (CRC-2022-00147), the Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grants Program (RGPIN-2023-05231), the British Columbia Knowledge Development Fund (BCKDF), Canada Foundation for Innovation – John R. Evans Leaders Fund – Partnerships (CFI-JELF), the British Columbia Foundation for Non-Animal Research, and the UKRI Engineering and Physical Sciences Research Council (EPSRC: EP/J016780/1). 
RB was partially supported by National Funding from FCT -- Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia, Portugal, under the project UIDB/04561/2020: https://doi.org/10.54499/UIDB/04561/2020. \bibliographystyle{ws-ijbc} \bibliography{sample} \end{document}
\documentclass[10pt,reqno,a4paper]{amsart} \setcounter{tocdepth}{2} \let\oldtocsection=\tocsection \let\oldtocsubsection=\tocsubsection \let\oldtocsubsubsection=\tocsubsubsection \renewcommand{\tocsection}[2]{\hspace{0em}\oldtocsection{#1}{#2}} \renewcommand{\tocsubsection}[2]{\hspace{2em}\oldtocsubsection{#1}{#2}} \renewcommand{\tocsubsubsection}[2]{\hspace{2em}\oldtocsubsubsection{#1}{#2}} \addtolength{\hoffset}{0cm} \addtolength{\textwidth}{0cm} \addtolength{\voffset}{0cm} \addtolength{\textheight}{-0.5cm} \usepackage{amsmath,amsthm,amsfonts,amssymb} \usepackage[foot]{amsaddr} \usepackage{marginnote} \usepackage{fancyhdr} \usepackage{graphicx} \usepackage{pdfpages} \usepackage{tikz} \usetikzlibrary{matrix} \usetikzlibrary{cd} \usepackage{xcolor} \numberwithin{figure}{section} \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem*{thmnonum}{Theorem} \newtheorem{defn}[thm]{Definition} \newtheorem*{defnnonum}{Definition} \newtheorem{lmm}[thm]{Lemma} \newtheorem{prp}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem*{ntt}{Notation} \newtheorem{remark}[thm]{Remark} \newtheorem{example}[thm]{Example} \newcommand{\tei}{Teichm\"uller} \newcommand{\qc}{quasiconformal} \newcommand{\const}{\operatorname{const}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\id}{\operatorname{id}} \newcommand{\Id}{\operatorname{Id}} \newcommand{\SV}{\operatorname{SV}} \newcommand{\SVinfty}{\operatorname{SV}_\infty} \newcommand{\Crit}{\operatorname{Crit}} \newcommand{\ord}{\operatorname{ord}} \newcommand{\cyl}{\operatorname{cyl}} \renewcommand{\Re}{\operatorname{Re\,}} \renewcommand{\Im}{\operatorname{Im\,}} \renewcommand{\mod}{\operatorname{mod\,}} \DeclareMathOperator*{\esssup}{ess\,sup} \DeclareMathOperator*{\arsinh}{arsinh} \newcommand{\abs}[1]{\left| #1 \right|} \newcounter{reminder} \newcommand{\reminder}[1]{$\langle$ {\sf #1} $\rangle$ \stepcounter{reminder} \marginpar{$\rhd\rhd$ \thereminder$\lhd\lhd$}} \pagestyle{plain} \graphicspath{ 
{./images/}} \title{On convergence of Thurston's iteration for transcendental entire functions with infinite post-singular set} \author{Konstantin Bogdanov$^1$} \address{Institute of Mathematics of Polish Academy of Sciences, ul. Śniadeckich 8, 00-656 Warsaw, Poland} \address{Saarland University, Mathematics and Computer Science, Campus E2 4, 66123 Saarbr\"ucken, Germany} \email{[email protected], [email protected]} \begin{document} \begin{abstract} Given an entire function $f_0$ with finitely many singular values, one can construct a quasiregular function $f$ by post-composing $f_0$ with a \qc\ map equal to identity on some open set $U\ni\infty$. It might happen that the $f$-orbits of all singular values of $f$ are eventually contained in $U$. The goal of this article is to investigate properties of Thurston's pull-back map $\sigma$ associated to such $f$, especially in the case when $f$ is post-singularly infinite, that is, when $\sigma$ acts on an infinite-dimensional \tei\ space $\mathcal{T}$. The main result yields sufficient conditions for the existence of a $\sigma$-invariant set $\mathcal{I}\subset\mathcal{T}$ such that its projection to the subspace of $\mathcal{T}$ associated to marked points in $\mathbb{C}\setminus U$ is bounded in the \tei\ metric, while the projection to the subspace associated to the marked points in $U$ (generally there are infinitely many) is a small perturbation of the identity. The notion of a \emph{fat spider} is defined and used as a dynamically meaningful way to define coordinates in the \tei\ space. The notion of the \emph{asymptotic area property} for entire functions is introduced. Roughly, it requires that the complement of logarithmic tracts in $U$ degenerates fast as $U$ shrinks. A corollary of the main result is that for an entire function of finite order, if the degeneration is fast enough and the singular values of $f$ escape fast enough, then $f$ is Thurston equivalent to an entire function. 
\end{abstract} \maketitle \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} \footnotetext[1]{The author gratefully acknowledges partial support from National Science Centre, Poland, Grant OPUS21 ``Holomorphic dynamics, fractals, thermodynamic formalism'' 2021/41/B/ST1/00461, and from the ERC AdG grant 101097307.} \section{Introduction} In complex dynamics, one investigates the long-term behaviour of the sequence of iterates $f^n(z)=\overbrace{f\circ\dots \circ f}^{n \text{ times}}(z)$, $n=0,1,2,\dots$ of a holomorphic function $f$. In transcendental dynamics, the main focus is on the case when $f:\mathbb{C}\to\mathbb{C}$ is a transcendental entire function, i.e., an entire function which is not a polynomial. Elementary examples are $e^z$, $\sin z$, $\sinh z$, $e^{p(z)}$ where $p$ is a polynomial, etc. The set of singular values of $f$, denoted $\SV(f)$, is defined as the closure of the set of all critical or asymptotic values. In this article we mainly restrict to the case when $f$ is of \emph{finite type}, that is, $\abs{\SV(f)}<\infty$. The dynamical behaviour of the singular values determines, to a large extent, the global dynamics of $f$. One extreme of this behaviour is when all singular values escape (i.e., converge to $\infty$ under the iteration of $f$). As an analogue from polynomial dynamics one could think of the complement of the Mandelbrot set, which is the set of escaping parameters. Each point $c$ in the complement of the Mandelbrot set corresponds to a quadratic polynomial $z^2+c$ for which the critical value $c$ escapes. The mode of this escape is described by two parameters: potential and external angle. By analogy one can ask: \emph{how exactly can the singular set of an entire function escape?} And more generally: \emph{which post-singularly infinite behaviours appear for entire functions?} A global strategy for answering such questions is the following. 
\begin{enumerate} \item Choose some meaningful (for the class of functions under consideration) dynamical behaviour (e.g., in terms of orbit portraits, speed of escape, combinatorics, etc.) \item Construct a (topological) map whose ``singular values'' model the desired dynamical behaviour. \item Decide if this map is ``equivalent'' to an entire map. \end{enumerate} This article aims to address the third item in this list for some types of post-singular dynamics including escaping dynamics. More precisely, let $f_0:\mathbb{C}\to\mathbb{C}$ be an entire function of finite type and $\lambda:\mathbb{C}\to\mathbb{C}$ be a \qc\ map equal to identity in some neighbourhood of $\infty$. Consider the composition $f:=\lambda\circ f_0$. The singular values of $f$ are naturally defined as the images of the singular values of $f_0$ under $\lambda$. This map $f$ is a quasiregular (rather than just topological) model from the second item of the list above. Denote by $\mathcal{O}$ the union of singular orbits (under $f$) and assume that there is a Riemann domain $U\subset\hat{\mathbb{C}}$ containing $\infty$, such that \begin{enumerate} \item $\SV(f)\cap U=\emptyset$, \item $\mathcal{O}\cap U$ is forward invariant, \item every singular orbit has a non-empty intersection with $U$. \end{enumerate} Thus, $f$ models the post-singular dynamics which is absorbed by $U$ and eventually controlled by $f_0$. It turns out that in many natural cases the ``parameter space'' of $f_0$ contains an entire function $g$ with ``the same'' dynamical behaviour of singular values. 
On the formal level ``the sameness'' is described as \begin{defn}[Thurston equivalence] We say that $f$ is \emph{Thurston equivalent} to an entire map $g$ if there exist two homeomorphisms $\varphi,\psi:\mathbb{C}\to\mathbb{C}$ such that \begin{enumerate} \item $\varphi=\psi$ on $\overline{\mathcal{O}}$, \item the following diagram commutes \begin{tikzcd} \mathbb{C},\overline{\mathcal{O}} \arrow[r, "{\psi}"] \arrow[d, "f"] & \mathbb{C},\psi(\overline{\mathcal{O}}) \arrow[d, "g"] \\ \mathbb{C},\overline{\mathcal{O}} \arrow[r, "{\varphi}"] & \mathbb{C},\varphi(\overline{\mathcal{O}}) \end{tikzcd} \item $\varphi$ is isotopic to $\psi$ relative to $\overline{\mathcal{O}}$. \end{enumerate} \end{defn} Following \cite{ErLyu}, we say that an entire function $g$ belongs to the \emph{parameter space} of $f_0$ if there exist \qc\ homeomorphisms $\varphi_1, \psi_1:\mathbb{C}\to\mathbb{C}$ such that $g\circ\psi_1=\varphi_1\circ f_0$. It is easy to see that for a finite type function $f_0$, if $f$ is Thurston equivalent to an entire function $g$, then $g$ belongs to the parameter space of $f_0$. Thus, for the holomorphic realization of the model $f$ we must look in the parameter space of $f_0$. Note that the model map $f$ is quasiregular rather than just topological. On one hand, this is a strong restriction; on the other, in \cite{LasseParaSpace} it is shown in much greater generality that given two functions from the same parameter space, they are \qc ly conjugate on the set of points staying, under iteration, in some domain $U\ni\infty$. This gives hope that most of the models represented within the parameter space can be obtained with the help of the construction above. \subsection{Asymptotic area property} Let us introduce the class of entire functions we are going to work with. It is defined by the \emph{asymptotic area property}, which could be considered as a somewhat stronger version of the area property introduced in \cite{EpRe}. A related discussion also appears in \cite{ErLyu}. 
Let $g$ be a transcendental entire function of bounded type. For a compact set $\mathcal{C}$ contained in $\mathbb{C}\setminus\SV(g)$ denote $\mathcal{E}=\mathcal{E}(g,\mathcal{C}):=g^{-1}(\mathcal{C})$ and let $$I(\mathcal{C}):= \frac{1}{2\pi}\iint\displaylimits_{\{1\leq\abs{z}\}\bigcap\mathcal{E}}\frac{dx dy}{\abs{z}^2},$$ that is, $I(\mathcal{C})$ is the \emph{cylindrical measure} of the set $\{1\leq\abs{z}\}\bigcap\mathcal{E}$, which could be either finite or infinite. Then, according to the definition in \cite{EpRe}, $g$ has the \emph{area property} if $I(\mathcal{C})<\infty$ for every $\mathcal{C}$. However, we are interested in a parametrized version of this integral. Let $D\supset\SV(g)$ be an open bounded set, denote $\mathcal{E}_\rho:=g^{-1}(\overline{\mathbb{D}}_\rho\setminus D)$ and consider the parametrized integral $$I_1(\rho,D):= \frac{1}{2\pi}\iint\displaylimits_{\{\rho\leq\abs{z}\}\bigcap\mathcal{E}_\rho}\frac{dx dy}{\abs{z}^2}.$$ \begin{defn} [Asymptotic area property] \label{defn:as_area_property} We say that $f\in\mathcal{B}$ has the \emph{asymptotic area property (AAP) relative to an open set $D\supset\SV(f)$} if $$\limsup_{\rho\to\infty}I_1(\rho,D)<\infty.$$ We say that $f\in\mathcal{B}$ has \emph{AAP} if it has AAP relative to every open set $D\supset\SV(f)$. \end{defn} Elementary examples of functions with AAP are $e^z$, $\sin z$, $\sinh z$, $e^{p(z)}$ for a non-constant polynomial $p$. The asymptotic area property does not imply the area property: as $\rho$ grows, on the one hand, $\mathcal{E}_\rho$ becomes bigger; on the other, the cylindrical measure of $\mathcal{E}_\rho$ is taken into account only starting from a bigger radius. We will be mostly interested in the case when $I_1(\rho,D)$ tends to $0$ as $\rho$ tends to $\infty$. Let us restrict to this case. The asymptotics of $I_1(\rho, D)$ depends on the initial choice of the domain $D$. 
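Among these examples, the case of $e^z$ can be checked by a direct computation; we include a crude version of this estimate for orientation.

\begin{example}
Let $g(z)=e^z$, so $\SV(g)=\{0\}$, and let $D=\mathbb{D}_\epsilon$ for some $0<\epsilon<1$ (every open set containing $0$ contains such a disk, and shrinking $D$ only enlarges $\mathcal{E}_\rho$). The set $\mathcal{E}_\rho=g^{-1}(\overline{\mathbb{D}}_\rho\setminus\mathbb{D}_\epsilon)$ is the closed vertical strip $\{\log\epsilon\leq\Re z\leq\log\rho\}$ of width $\log(\rho/\epsilon)$. If additionally $\abs{z}\geq\rho$ and $\rho\geq 1/\epsilon$, then $(\Im z)^2\geq\rho^2-\log^2\rho$, so $\abs{\Im z}\geq\rho/2$ for all $\rho$ big enough. Hence, using $\abs{z}^{-2}\leq(\Im z)^{-2}$ and accounting for the two half-lines $\pm\Im z\geq\rho/2$,
$$I_1(\rho,\mathbb{D}_\epsilon)\leq\frac{1}{2\pi}\log\frac{\rho}{\epsilon}\cdot 2\int\limits_{\rho/2}^{\infty}\frac{dy}{y^2}=\frac{2}{\pi\rho}\log\frac{\rho}{\epsilon}\xrightarrow[\rho\to\infty]{}0,$$
so $e^z$ has AAP.
\end{example}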
However, in many cases (e.g.\ for finite type functions with bounded degrees of critical points) one can find a function $\chi:\mathbb{R}_+\to\mathbb{R}_+$ such that for every $D$, $I_1(\rho,D)=O(\chi(\rho))$. If this is the case, we say that $\chi$ is a \emph{degeneration function} for $g$. The next theorem is a corollary of the main result. \begin{thm}[Singular values with fast speed of escape] \label{thm:esc_singular_orbits} Let $f_0$ be a transcendental entire function of finite type having degeneration function $1/\rho^\epsilon$ for some $\epsilon>0$, and satisfying the inequality $\max_{z\in\partial\mathbb{D}_r}\abs{f_0(z)}<\exp^2\left((\log r)^d\right)$ for some constant $d>1$ and all $r>0$ big enough (it holds, in particular, for functions of finite order). For a \qc\ map $\lambda:\mathbb{C}\to\mathbb{C}$ equal to identity near $\infty$, consider the quasiregular map $f=\lambda\circ f_0$ with singular values $\{a_{i1}\}_{i=1}^m$ and corresponding escaping singular orbits $\{a_{ij}\}_{j=1}^\infty=\{f^{j-1}(a_{i1})\}_{j=1}^\infty$ such that: \begin{enumerate} \item for some $\delta>1$ and all $j$ big enough, $\abs{a_{i(j+1)}}>\exp\left(\left(\log\abs{a_{ij}}\right)^\delta\right)$, \item the set $\{d_{\cyl}(a_{ij},a_{kl}): 0\neq a_{ij}\neq a_{kl}\neq 0\}$, where $d_{\cyl}$ is cylindrical distance, has a positive lower bound. \end{enumerate} Then $f$ is Thurston equivalent to an entire function. \end{thm} By cylindrical distance we understand the distance with respect to the conformal metric $\abs{dz}/\abs{z}$ (whose area element is $dxdy/\abs{z}^2$). It is defined on $\mathbb{C}\setminus\{0\}$ and coincides with the Euclidean metric in the logarithmic coordinates. Note that generically the escaping points tend to escape even faster than in item $(1)$ (see e.g.\ \cite[Lemma~3.1]{RRRS}). \subsection{Infinite-dimensional Thurston theory} Given a quasiregular function $f=\lambda\circ f_0$, we need to be able to decide whether it is equivalent to an entire function. 
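To get a feeling for the speeds of escape admitted by condition $(1)$, the following model orbit of iterated-exponential growth (included for illustration only) satisfies it.

\begin{example}
Reading condition $(1)$ as $\log\abs{a_{i(j+1)}}>\left(\log\abs{a_{ij}}\right)^\delta$, fix $\mu>\delta>1$ and suppose $\abs{a_{ij}}=\exp\left(\exp\left(\mu^j\right)\right)$ for all big $j$. Then $\left(\log\abs{a_{ij}}\right)^\delta=\exp\left(\delta\mu^j\right)$, while $\log\abs{a_{i(j+1)}}=\exp\left(\mu^{j+1}\right)$, and $\mu^{j+1}>\delta\mu^j$ precisely because $\mu>\delta$. Hence condition $(1)$ holds for such orbits.
\end{example}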
A general approach for answering such types of questions was developed by Thurston and Douady--Hubbard \cite{DH, HubbardBook2}: Thurston's topological characterization of rational functions is a criterion allowing one to decide whether a post-critically finite branched covering of the $2$-sphere is equivalent to a rational map. The branched covering naturally defines a ``pull-back map $\sigma$'' acting on the \tei\ space of the complement to the post-critical set. The branched covering is Thurston equivalent to a rational map if and only if $\sigma$ has a fixed point. It is easy to see that $\sigma$ does not increase \tei\ distances, which makes the existence of fixed points plausible. Thus, the question of Thurston equivalence is reduced to the study of the properties of $\sigma$. There are two major directions for the generalization of this classical result. The first is to consider other classes of functions, for instance, transcendental entire and meromorphic functions. In this regard one should mention \cite{HSS}, generalizing the theory to the exponential family, and the PhD thesis of Sergey Shemyakov \cite{SergeyThesis}, encompassing more general families of entire functions. The second is to consider post-critically infinite dynamics. The corresponding generalization for hyperbolic rational functions is treated in \cite{Cui}. A subject of separate interest lies in the intersection of the two directions. The case of transcendental entire functions whose post-singular set is infinite ``near $\infty$'' (e.g., if all singular values escape) cannot be reduced to the two cases above. The reason is that $\infty$ is an essential singularity for transcendental entire functions, hence one cannot apply the techniques from the rational case. Markus F\"orster shows in his PhD thesis \cite{MarkusThesis} that every ``mode'' of escape in the exponential family can be realized as the post-singular ``mode''. 
The approach uses pull-backs of ``spiders'' with infinitely many ``legs'', which can be interpreted as a version of Thurston's pull-back map $\sigma$. These techniques were generalized in \cite{IDTT1,IDTT2,IDTT3} for the families $p(e^z)$ where $p$ is a polynomial. An important feature of the construction is that one is considering the pull-back map $\sigma$ defined on the infinite-dimensional \tei\ space (of the complement of the set of marked points). Unfortunately, there are only weaker versions of \tei's theorems in this setup, hence different techniques are required (than in the finite-dimensional case). We also note that in \cite{Cui}, the authors reduce the infinite-dimensional setting to the finite-dimensional one using \qc\ surgery, hence apparently one cannot reproduce this approach in a neighbourhood of an essential singularity. Following the strategy of \cite{MarkusThesis, IDTT1, IDTT2, IDTT3}, in order to prove the existence of a fixed point of $\sigma$, two major ingredients are required: \begin{itemize} \item a $\sigma$-invariant pre-compact subset $\mathcal{I}$ of the corresponding \tei\ space, \item $\sigma$ should be strictly contracting on $\overline{\mathcal{I}}$. \end{itemize} The two conditions imply the existence of a fixed point by an elementary argument. By strict contraction we mean that $\sigma$ decreases the distances, but not necessarily with a uniform contraction factor smaller than one (hence one cannot apply the Banach Fixed Point Theorem). To address this problem we might use \cite[Lemma~4.1]{IDTT1}, which says that if the $\sigma$-images of two \emph{asymptotically conformal} points are also asymptotically conformal (see Subsection~\ref{subsec:strict_contraction} for the definition), then $\sigma$ decreases the distances between them. Thus, the set $\mathcal{I}$ must contain only asymptotically conformal points. This is one of the reasons to consider entire functions satisfying the asymptotic area property. 
For instance, consider $f=\lambda\circ f_0$ whose singular orbits are very sparse near $\infty$, e.g., separated by round annuli around the origin with moduli tending to $\infty$. Then, if we apply $\sigma$ to a \tei\ equivalence class of a homeomorphism which is ``nearly identity'' near $\infty$, after pulling back its Beltrami coefficient via $f$ and integrating it, due to the \tei\--Wittich Theorem~\ref{thm:teich--wittich}, we obtain a \tei\ equivalence class of a homeomorphism which is also ``nearly identity'' near $\infty$. Roughly speaking, the main difficulty in the construction of $\mathcal{I}$ is to arrange that the former neighbourhood of $\infty$ is contained inside of the latter. This would imply the invariance of $\mathcal{I}$ under $\sigma$. The main result of the present article is Theorem~\ref{thm:invariant_structure}. At this point, we provide only a heuristic description of the result; for more details see Section~\ref{sec:invariant_structure}. \begin{itemize} \item [***] \emph{\textbf{Invariant set.}} Let $f_0$ be a finite type function and $U$ be a neighbourhood of $\infty$. Assume that $\lambda$ is such that the singular orbits of $f=\lambda\circ f_0$ are absorbed by $U=\hat{\mathbb{C}}\setminus\mathbb{D}_\rho$, $\rho>0$, in the sense described above, and let $D$ be a domain such that $\mathcal{O}\cap D=\SV(f)$. 
If the first points belonging to $U$ on each post-singular orbit are $\epsilon$-distant from each other (in the cylindrical metric), the set $\mathcal{O}\setminus U$ is separated from the boundary of $U$ by an annulus of big enough modulus, and, for some universal constant $\nu>1$ and a constant $K$ depending on $\lambda$ and $f_0$, the product $K^{\nu^{\#{(\mathcal{O}\setminus U)}}}I_1(\rho,D)$ is small enough, then the corresponding extended \tei\ space contains a $\sigma$-invariant set $\mathcal{I}$ such that: \begin{enumerate} \item if we ``forget'' the post-singular points in $U$, then the corresponding projection of $\mathcal{I}$ to the finite-dimensional \tei\ space is bounded in the \tei\ metric; \item every \tei\ equivalence class in $\mathcal{I}$ contains a homeomorphism which is $\epsilon/4$-distant from the identity on $U$ (in the $\sup$-cylindrical metric). \end{enumerate} \end{itemize} By an extended \tei\ space we understand the set of \tei\ equivalence classes after we relax the requirement for every homotopy class to contain a \qc\ map. The reason for doing this is that in the infinite-dimensional setup, a point in the \tei\ space and its $\sigma$-image do not necessarily belong to the same \tei\ space. However, $\sigma$ is well-defined as a map on the extended \tei\ space. By $\#(\mathcal{O}\setminus U)$ we denote the total number of points in the set $\mathcal{O}\setminus U$. In the proof of Theorem~\ref{thm:invariant_structure} it will be seen that $K^{\nu^{\#{(\mathcal{O}\setminus U)}}}$ roughly corresponds to the maximal dilatation of representatives in $\mathcal{I}$ after ``forgetting'' the points in $U$. The requirement for the product $K^{\nu^{\#{(\mathcal{O}\setminus U)}}}I_1(\rho,D)$ to be small is a precise way of saying that Thurston's pull-back of the representative, or, more precisely, of its restriction to $U$, is a \qc\ map with presumably very big maximal dilatation, but supported on a set of very small area. 
In this setting, it is possible to prove a special type of Koebe-like distortion bounds and to show that the representative will be uniformly close to the identity on $U$. The proof of Theorem~\ref{thm:invariant_structure} has three major ingredients depending on the type of marked points. The marked points contained in $U$ are controlled with the help of Koebe-like estimates obtained in Section~\ref{sec:Koebe}. To control the behaviour of the marked points in $\mathbb{C}\setminus U$, we define a special structure called a \emph{fat spider}. It has some similarity to the classical spider introduced in \cite{Spiders} for encoding the combinatorics of post-critically finite polynomials, but it functions quite differently. First, the ``body'' of the classical spider is the point $\infty$ while the ``body'' of a fat spider is a big disk around $\infty$ (hence ``fat''). The feet of a fat spider are finitely many marked points in the complement of the body, separated from it by an annulus of some definite modulus. Each foot is connected to the body by a homotopy class of paths (called ``legs'') in the complement of marked points. To every leg we associate the maximal dilatation of a \qc\ map which maps the underlying foot to the body along the leg and (via isotopy) relative to all other feet. Given a homeomorphism equal to identity on the body such that the images of all feet are separated from the body by the same annulus, we can consider the push-forward of the spider, whose legs are just the push-forwards of the legs of the initial fat spider. If we also know the maximal dilatation associated to the legs of the pushed-forward spider, we have an estimate on the maximal dilatation of the homeomorphism, see Proposition~\ref{prp:teich_metric_fat_spider_map}. 
On the other hand, it is not difficult to define a lift (with augmentation) of a spider leg which corresponds to the $\sigma$ map and for which it is convenient to compute the corresponding maximal dilatations. Note that we \emph{do not} have a standard spider in the sense of \cite{Spiders}, i.e., some invariant structure formed by external rays. Instead, the spider legs change at every iteration, on both the left- and right-hand sides of the commutative diagram for $\sigma$. Finally, there should be some ingredient which ``glues'' together these two very different ways of storing information about points in the \tei\ space in order to keep this decomposition invariant under $\sigma$. This ingredient is the property of $f_0$ which we call $(K,\delta)$-regularity of tracts. Roughly, this means the following. Consider some big $\rho>0$ and two punctured disks around $\infty$: $D:=\hat{\mathbb{C}}\setminus\mathbb{D}_\rho$ and $\hat{D}:=\hat{\mathbb{C}}\setminus\mathbb{D}_{\rho/2}$, and two logarithmic tracts $\hat{T}\supset T$ such that $f_0(\hat{T})=\hat{D}$ and $f_0(T)=D$, and assume that $T\cap\mathbb{D}_\rho\neq\emptyset$. Then we say that a pair of tracts $\hat{T}\supset T$ is $(K,\delta)$-regular if every point belonging to $\partial T\cap\mathbb{D}_\rho$ can be mapped to some point of the circle $\partial\mathbb{D}_\rho$ via a $K$-\qc\ map equal to identity outside of $\hat{T}\cap\mathbb{D}_{\rho e^\delta}$. This property allows one to define a dynamically meaningful pull-back of a fat spider. It will be shown that for $f_0$ of finite order the value of $K$ can be estimated in terms of $\log\rho$. For a more detailed version with marked points see Subsection~\ref{subsec:tracts_regularity}. \subsection{Structure of the article} In Section~\ref{sec:standard_notions} we briefly discuss some properties of entire functions and connections to the cylindrical metric. 
Afterwards we provide some basic notions from the theory of \qc\ maps together with a rather lengthy list of statements we are going to use. In Section~\ref{sec:Teich_spaces_and_Thurston_theory} we define the (extended) \tei\ space, formally introduce the $\sigma$-map and discuss its contraction properties. Finally, we prove a few semi-standard statements about different types of \qc\ representatives in the \tei\ equivalence classes. Section~\ref{sec:AAP} is devoted to the asymptotic area property. In particular, we investigate the dependence of the degeneration function on $D$. In Section~\ref{sec:Koebe} we discuss \qc\ maps with small ``total dilatation per area ratio'' and show that locally they can be approximated by the identity with the quantitative bounds depending only on the ratio (Proposition~\ref{prp:distortion of identity}). In the proof of the main result (Theorem~\ref{thm:invariant_structure}) this allows us to control the behaviour of marked points near $\infty$. Section~\ref{sec:shifts_and_spiders} consists of three parts. In Subsection~\ref{subsec:shifts_properties} we introduce homeomorphisms of a special type, called \emph{shifts}, and investigate their properties. In Subsection~\ref{subsec:tracts_regularity} we formally define $(K,\delta)$-regularity and compute bounds for $K$ for entire functions of finite order (Proposition~\ref{prp:log_regularity_finite_order}). Finally, in Subsection~\ref{subsec:spiders} we define fat spiders and show how the maximal dilatation of a homeomorphism can be bounded using the information about the underlying fat spiders (Proposition~\ref{prp:teich_metric_fat_spider_map}). In Section~\ref{sec:invariant_structure} we first introduce a \emph{separating structure}. This is in some sense an ``environment'' we are going to work in. It is needed in order to be able to prove Theorem~\ref{thm:invariant_structure} without fixing some particular $\lambda$ and for many different types of dynamics altogether. 
We construct a \emph{standard fat spider} and introduce the pull-back procedure (of standard spiders) corresponding to $\sigma$. Afterwards, we state and prove Theorem~\ref{thm:invariant_structure} and conclude the section with a rather simple Theorem~\ref{thm:fixed_point_existence} allowing one to deduce the existence of a fixed point in certain settings of Theorem~\ref{thm:invariant_structure}. In Section~\ref{sec:applications} we first prove Theorem~\ref{thm:esc_singular_orbits}, and then briefly discuss perturbations of the invariant structure of Theorem~\ref{thm:invariant_structure} and the case when $f$ models a polynomial with escaping critical orbits. \subsection{Acknowledgements} I would like to thank Kevin Pilgrim for a series of fruitful discussions on aspects of \tei\ theory during his visit to Saarbr\"ucken. I would like to thank Nikolai Prochorov and Dierk Schleicher for their feedback, especially for the discussion related to the parameter spaces of entire functions. I would also like to express my gratitude to Feliks Przytycki for his support during my stay at IMPAN. \subsection{Notations and agreements} We denote by $\mathbb{C},\hat{\mathbb{C}}$ the complex plane and the Riemann sphere, respectively. For $a\in\mathbb{C}$, we denote by $\mathbb{D}_r(a)$ the disk around $a$ of radius $r$. If we omit the index $r$, this means that $r=1$; if we omit the center $a$, this means that $a=0$. By $\mathbb{D}^\infty_r$ we denote the disk $\hat{\mathbb{C}}\setminus\overline{\mathbb{D}}_r$ with center at $\infty$, and for $x\in\mathbb{R}$, $\mathbb{H}_x$ is the right half-plane $\{z:\Re z>x\}$. For a subspace $U$ of a topological space we denote by $\partial U$ its boundary and by $U^\circ$ the interior of $U$. For $0<r<R<\infty$, by $\mathbb{A}_{r,R}$ we denote the open round annulus between the circles $\partial\mathbb{D}_r$ and $\partial\mathbb{D}_R$. 
When we say that an isotopy is relative to some set $X$, we imply that this isotopy is constant on $\overline{X}$. In particular, when it contains the identity map, this means that all maps along the isotopy are equal to identity on $\overline{X}$. We define the modulus of the annulus $\mathbb{A}_{r,R}$ as $\log(R/r)$, without the factor $1/2\pi$ in front. This is the convention of \cite{LehtoVirtanen}, and it suits us better, giving shorter formulas and simpler referencing to \cite{LehtoVirtanen}. \section{Standard notions and definitions} \label{sec:standard_notions} In this section we assemble definitions and results about transcendental entire functions and \qc\ maps. \subsection{Logarithmic coordinates and cylindrical metric} \label{subsec:log_coordinates} We are mainly interested in the class of entire functions of finite type (also called the Speiser class or class $\mathcal{S}$) and occasionally we consider the entire functions of bounded type (also called the Eremenko--Lyubich class or class $\mathcal{B}$). For the former ones, the singular set is finite, for the latter ones --- bounded. Recall that the set of singular values is defined as the closure of the set of critical and asymptotic values. A \emph{critical value} is the image of a critical point. We say that $a\in\hat{\mathbb{C}}$ is an \emph{asymptotic value} for $f\in\mathcal{B}$ if there exists a path $\gamma:[0,1]\to\mathbb{C}$ such that $\lim_{t\to 1}\gamma(t)=\infty$ and $\lim_{t\to 1}f\left(\gamma(t)\right)= a$. In particular, $\infty$ is always a singular value for $f$. However, when speaking about singular values, we always mean a \emph{finite} singular value and denote their set by $\SV(f)$. By the \emph{post-singular set} $P$ of $f\in\mathcal{S}$ we understand the closure of the union of forward orbits of singular values (including the singular values themselves). 
Note that very often we need to distinguish between the union of singular orbits (without taking the closure) and $P$. In this case we denote this union using the letter $\mathcal{O}$. For $f\in\mathcal{B}$, the \emph{logarithmic coordinates} are introduced as follows. Let $R$ be big enough, so that $\SV(f)\cup\{f(0)\}\subset\mathbb{D}_R$, and let $\mathcal{T}:=f^{-1}(\mathbb{C}\setminus\overline{\mathbb{D}}_R)$. Then $\mathcal{T}$ is a union of unbounded simply connected components called (logarithmic) \emph{tracts} of $f$. Then the restriction $f|_{\mathcal{T}}$ can be lifted via the exponential map. The resulting function is called the \emph{logarithmic transform} of $f$ and is denoted by $F$. \begin{center} \begin{tikzcd} \tilde{\mathcal{T}}\arrow[r, "{F}"] \arrow[d, "\exp"] & \mathbb{H}_{\log R} \arrow[d, "\exp"] \\ \mathcal{T} \arrow[r, "{f}"] & \mathbb{C}\setminus\overline{\mathbb{D}}_R \end{tikzcd} \end{center} \vspace{0.5cm} $F$ is defined on the $2\pi i$-periodic set $\tilde{\mathcal{T}}$ of pre-images of tracts (also called \emph{tracts}) under the exponential. The restriction of $F$ to each tract is a conformal map onto the right half-plane $\mathbb{H}_{\log R}$. The logarithmic coordinates are well suited for orbits staying ``near $\infty$'', in particular, for the escaping points. The most important feature of these coordinates is that for big enough $R$, $F$ is expanding, and in a quite strong way \cite[Lemma 1]{ErLyu}: $$\abs{F'(z)}\geq\frac{1}{4\pi}\left(\Re F(z)-\log R\right).$$ We will not require the explicit inequality in this article, but we will occasionally use the strong expansivity of $F$. The \emph{cylindrical area} is defined on $\mathbb{C}\setminus\{0\}$ by the area element $dxdy/\abs{z}^2$. For $z,w\in\mathbb{C}\setminus\{0\}$, we will denote by $d_{\cyl}(z,w)$ the distance between $z$ and $w$ in the cylindrical metric and call it \emph{cylindrical distance}. 
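As a concrete illustration of the logarithmic transform (included for orientation):

\begin{example}
Let $f(z)=e^z$ and $R>1$, so that $\SV(f)\cup\{f(0)\}=\{0,1\}\subset\mathbb{D}_R$. There is a single tract $\mathcal{T}=f^{-1}(\mathbb{C}\setminus\overline{\mathbb{D}}_R)=\mathbb{H}_{\log R}$. The components of $\tilde{\mathcal{T}}=\exp^{-1}(\mathbb{H}_{\log R})$ are $2\pi i$-translates of one another, and on each of them one can take $F(w)=e^w$ (any two lifts differ by an integer multiple of $2\pi i$): indeed, $\exp\left(F(w)\right)=e^{e^w}=f\left(\exp w\right)$, and $F$ maps each component conformally onto $\mathbb{H}_{\log R}$. In particular, $\abs{F'(w)}=\abs{F(w)}\geq\Re F(w)>\log R$, which illustrates the expansion of $F$.
\end{example}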
Note that its pull-back under the exponential coincides with the Euclidean metric --- we will use this property regularly throughout the article. \subsection{Quasiconformal maps} The most standard references are \cite{Ahlfors,BrannerFagella,LehtoVirtanen}, though the latter one will be used most intensively in this article. \begin{defn}[Quadrilateral] A \emph{quadrilateral} $Q(z_1,z_2,z_3,z_4)$ is a Jordan domain $Q$ together with a sequence $z_1,z_2,z_3,z_4$ of boundary points called the vertices of the quadrilateral. The order of the vertices agrees with the positive orientation with respect to $Q$. The arcs $z_1 z_2$ and $z_3 z_4$ are called $a$-sides, the arcs $z_2 z_3$ and $z_4 z_1$ are called $b$-sides. \end{defn} Every such quadrilateral $Q$ is conformally equivalent to a unique canonical rectangle with the length of the $b$-sides equal to 1. For a quadrilateral $Q$, the length of the $a$-sides of the canonical rectangle is called the \emph{modulus} of $Q$ and is denoted $\mod Q$. Analogously, every annulus $A$ is conformally equivalent to a unique round annulus $\mathbb{A}_{1,R}$ for some $R>1$. Then the modulus of $A$ is defined as $\mod A=\log R$. Note that it is more common to include a factor $1/2\pi$ in front of the logarithm. However, we will follow the convention of \cite{LehtoVirtanen}. The maximal dilatation can be defined both in terms of the moduli of quadrilaterals and in terms of the moduli of annuli; the two definitions yield the same quantity. \begin{defn}[Maximal dilatation] Let $U$ and $V$ be planar domains and $\psi:U\to V$ be an orientation-preserving homeomorphism. The \emph{maximal dilatation} of $\psi$ is the number \begin{center} $K_\psi=\sup_{\overline{Q}\subset U}\frac{\mod \psi(Q)}{\mod Q}$,\\ \end{center} where the supremum is taken over all quadrilaterals $Q$ (resp.\ annuli) contained in $U$ together with their boundaries. \end{defn} Using $K_\psi$ we can define \qc\ maps.
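Before doing so, we mention a standard example illustrating the maximal dilatation. For $K\geq 1$, consider the affine stretch $\psi(x+iy)=Kx+iy$. The rectangle $Q$ with vertices $0,\,a,\,a+i,\,i$ (so that the horizontal sides are the $a$-sides and $\mod Q=a$) is mapped onto the rectangle with vertices $0,\,Ka,\,Ka+i,\,i$, whence $$\frac{\mod\psi(Q)}{\mod Q}=\frac{Ka}{a}=K.$$ Hence $K_\psi\geq K$, and in fact $K_\psi=K$.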
\begin{defn}[Quasiconformal map] An orientation-preserving homeomorphism $\psi$ of a plane domain $U$ is called quasiconformal if its maximal dilatation $K_\psi$ is finite. If $K_\psi\leq K<\infty$, then $\psi$ is called $K$-quasiconformal. \end{defn} The inverse of a $K$-quasiconformal map is also $K$-quasiconformal, while the composition of a $K_1$-quasiconformal and a $K_2$-quasiconformal map is $K_1 K_2$-quasiconformal. Quasiconformal maps can also be defined analytically. \begin{defn}[Quasiconformal map] A homeomorphism $\psi$ of a plane domain $U$ is quasiconformal if there exists $k<1$ such that \begin{enumerate} \item $\psi$ has locally integrable, distributional derivatives $\psi_z$ and $\psi_{\overline{z}}$ on $U$, and \item $\abs{\psi_{\overline{z}}} \leq k\abs{\psi_z}$ almost everywhere. \end{enumerate} Such $\psi$ is called $K$-quasiconformal, where $K=\frac{1+k}{1-k}$. \end{defn} Each quasiconformal map is determined by its Beltrami coefficient up to post-composition with a conformal map. \begin{defn}[Beltrami coefficient] The function $\mu_\psi(z)=\psi_{\overline{z}}(z)/{\psi_z (z)}$ is called the \emph{Beltrami coefficient} of $\psi$. It is defined almost everywhere on $U$. \end{defn} Prescribing the Beltrami coefficient is almost the same as providing a quasiconformal map. Consider the Beltrami equation $$\psi_{\overline{z}}(z)=\mu (z) \psi_z (z)$$ where the partial derivatives $\psi_z (z)$ and $\psi_{\overline{z}}(z)$ are defined in the sense of distributions and are locally integrable. \begin{thm}[Measurable Riemann Mapping Theorem \cite{GardinerLakic}] The Beltrami equation gives a one-to-one correspondence between the set of quasiconformal homeomorphisms of $\hat{\mathbb{C}}$ that fix the points $0,1$ and $\infty$ and the set of measurable complex-valued functions $\mu$ on $\hat{\mathbb{C}}$ for which $\|\mu\|_{L^\infty}<1$.
\end{thm} We finish this subsection by providing a list of somewhat more advanced statements about \qc\ maps. \begin{thm}[Theorem 2.1, \cite{McMullen_book}] \label{thm:essential_round_annulus} Any annulus $A\subset\mathbb{C}$ of sufficiently large modulus contains an essential (i.e., separating the boundary components of $A$) round annulus $B$ with $\mod A = \mod B + O(1)$. \end{thm} \begin{lmm}\cite[Section I.4.4, Rengel's inequality]{LehtoVirtanen} \label{lmm:Rengel} Let $Q\subset\mathbb{C}$ be a quadrilateral with (Euclidean) area $m(Q)$ and let $s_a(Q), s_b(Q)$ be the Euclidean distances between its $a$-sides and $b$-sides respectively (measured along paths inside of $Q$). Then $$\frac{(s_b(Q))^2}{m(Q)}\leq \mod Q\leq \frac{m(Q)}{(s_a(Q))^2}.$$ Equality in each case holds if and only if $Q$ is a rectangle. \end{lmm} \begin{lmm}\cite[Chapter II, inequality (9.1)]{LehtoVirtanen} \label{lmm:max_over_min} Let $\varphi:\mathbb{D}\to U\subset\mathbb{C}$ be a $K$-\qc\ map such that $\varphi(0)=0$ and for some $\alpha,r>0$, $\varphi(\partial\mathbb{D}_\alpha)\subset\mathbb{D}_r\subset U$. Then $$\max\displaylimits_{z\in\partial\mathbb{D}_\alpha}\abs{\varphi(z)}\leq e^{\pi K}\min\displaylimits_{z\in\partial\mathbb{D}_\alpha}\abs{\varphi(z)}.$$ \end{lmm} For a \qc\ map $\varphi$, let $D_\varphi(z):=\frac{1+\abs{\mu_\varphi(z)}}{1-\abs{\mu_\varphi(z)}}$. \begin{lmm} \label{lmm:ineq_mod_rectangle} Let $Q$ be a quadrilateral such that its $a$-sides lie on different sides of a horizontal strip of height $h>0$ and let $\varphi$ be a \qc\ map of $Q$. Then $$\mod \varphi(Q)\leq\frac{1}{h^2}\iint\displaylimits_Q D_\varphi(z) dxdy.$$ \end{lmm} \begin{proof} The proof is the same as for the analogous inequality for rectangles in \cite[Chapter V, Section 6.3]{LehtoVirtanen}. Following the notation of the book, we have to assign $\rho(z)=1/h^2$ when $z$ belongs to the intersection of $Q$ with the strip, and $\rho(z)=0$ otherwise.
\end{proof} \begin{lmm}\cite[Chapter V, inequality (6.9)]{LehtoVirtanen} \label{lmm:ineq_mod_diff} Let $A$ be a round annulus around $0$ and $\varphi$ be a \qc\ map of $A$. Then $$\abs{\mod\varphi(A)-\mod A}\leq\frac{1}{2\pi}\iint\displaylimits_A \frac{D_\varphi(z)-1}{\abs{z}^2}dxdy.$$ \end{lmm} The following result is a part of the \tei--Wittich theorem. In this article, we follow the exposition of \cite{LehtoVirtanen}. An alternative proof of the theorem can be found in \cite{Mitsu_teich_wittich}. Another reference for results of a similar type is \cite{Bishop_thin}. \begin{thm}\cite[Satz 6.1, \tei--Wittich theorem]{LehtoVirtanen} \label{thm:teich--wittich} Let $\varphi:\mathbb{C}\to\mathbb{C}$ be a \qc\ map such that $\varphi(0)=0$ and $$\frac{1}{2\pi}\iint\displaylimits_{\abs{z}<1} \frac{D_\varphi(z)-1}{\abs{z}^2}dxdy<\infty.$$ Then $\varphi$ is complex differentiable (conformal) at $0$. \end{thm} \section{\tei\ spaces and Thurston's theory} \label{sec:Teich_spaces_and_Thurston_theory} \subsection{\tei\ spaces} \label{subsec:teich_spaces} Standard references for \tei\ theory are \cite{GardinerLakic, HubbardBook1}. \begin{defn}[\tei\ space of $\mathbb{C}\setminus P$] \label{defn:tei_space} For a set $P\subset\mathbb{C}$, the \emph{\tei\ space} $\mathcal{T}_P$ of $\mathbb{C}$ with the \emph{marked set} $P$ is the set of \emph{quasiconformal} homeomorphisms of $\mathbb{C}\setminus P$ modulo post-composition with an affine map and isotopy relative to $P$ (or, equivalently, relative to $\overline{P}$). By $\hat{\mathcal{T}}_P$ we denote the set of \emph{topological} homeomorphisms of $\mathbb{C}$ modulo post-composition with an affine map and isotopy relative to $P$. \end{defn} \begin{remark} A more standard definition of the \tei\ space on a Riemann surface involves isotopy relative to the ideal boundary rather than the topological boundary. For planar domains the two definitions are equivalent \cite{GardinerLakic}.
\end{remark} Every \tei\ space is equipped with the \tei\ metric, with respect to which $\mathcal{T}_P$ is a complete metric space. \begin{defn}[\tei\ distance] Let $[\varphi_0],[\varphi_1]\in\mathcal{T}_P$. The \tei\ distance $d_\mathcal{T}([\varphi_0],[\varphi_1])$ is defined as $$\inf\limits_{\psi\in [\varphi_1\circ (\varphi_0)^{-1}]} \log K_\psi.$$ \end{defn} Clearly, $\mathcal{T}_P$ is contained in $\hat{\mathcal{T}}_P$ and consists exactly of the equivalence classes containing a \qc\ map. If $P$ is finite, then $\hat{\mathcal{T}}_P=\mathcal{T}_P$. The points belonging to the set $\overline{P}$ are called \emph{marked points}. Since an isolated point is a removable singularity for a \qc\ map, our setting agrees with the more common one in which one considers the Riemann sphere $\hat{\mathbb{C}}$ instead of $\mathbb{C}$: the formal difference lies in either ``forgetting'' about $\infty$ (as we do) or making it a marked point (not having any dynamical meaning later on). For $P'\subset P$ and $[\psi]\in\hat{\mathcal{T}}_P$, we denote by $[\psi]_{P'}$ the projection of $[\psi]$ to $\hat{\mathcal{T}}_{P'}$, i.e.\ the \tei\ equivalence class $[\psi]_{P'}$ is defined as the image of the class $[\psi]$ under the \emph{forgetful} map which ``forgets'' the marked points $\overline{P}\setminus\overline{P'}$. \subsection{Setup of Thurston's iteration} \label{subsec:iteration_setup} The construction is described in \cite{IDTT1,IDTT2} for a more specific class of entire functions. Let $f_0$ be a transcendental entire function of bounded type, $\lambda:\mathbb{C}\to\mathbb{C}$ be a \qc\ map and $f=\lambda\circ f_0$. Further, let $P\subset\mathbb{C}$ be a forward invariant set containing $\SV(f):=\lambda(\SV(f_0))$. The most important example for us is when $f_0$ is of finite type and all singular values of $f$ escape. In this case the union of the orbits of the singular values can be chosen as $P$.
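To have a concrete instance in mind (an illustration only): let $f_0=\exp$ and let $\lambda$ be a \qc\ map with $\lambda(0)=b$, so that $f=\lambda\circ\exp$ and $$\SV(f)=\lambda(\SV(\exp))=\lambda(\{0\})=\{b\}.$$ If the orbit $b,\,f(b),\,f^2(b),\dots$ escapes to infinity, then one can take $P=\{f^n(b):n\geq 0\}$, which is forward invariant, contains $\SV(f)$ and consists of escaping points.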
The quasiregular map $f$ defines Thurston's pull-back map $$\sigma:[\varphi]\in\mathcal{T}_P\mapsto[\tilde{\varphi}]\in\mathcal{T}_P,$$ acting on the \tei\ space $\mathcal{T}_P$, via the following diagram: \begin{center} \begin{tikzcd} \mathbb{C},P \arrow[r, "{\tilde{\varphi}}"] \arrow[d, "f"] & \mathbb{C},\tilde{\varphi}(P) \arrow[d, "g"] \\ \mathbb{C},P \arrow[r, "{\varphi}"] & \mathbb{C},\varphi(P) \end{tikzcd} \end{center} \vspace{0.5cm} More precisely, due to the Measurable Riemann Mapping Theorem, applied to the pull-back of the Beltrami coefficient of $\varphi\circ\lambda$ via $f_0$, for every \qc\ map $\varphi:\mathbb{C}\to\mathbb{C}$ there is another \qc\ map $\tilde{\varphi}:\mathbb{C}\to\mathbb{C}$ such that $g=\varphi\circ f\circ\tilde{\varphi}^{-1}$ is an entire function. Thus, we define $\sigma[\varphi]:=[\tilde{\varphi}]$. The standard lifting argument shows that $\sigma$ is well defined. Note that, according to our definition of $\sigma$, we do not consider $\sigma[\varphi]$ for an arbitrary topological homeomorphism $\varphi$ because, generally speaking, if $P$ is infinite, the equivalence class $[\varphi]$ might not contain any \qc\ map at all. If this is the case, there is no Beltrami coefficient to pull back and integrate. Moreover, even if $\sigma$ is defined, $[\varphi]$ and $[\tilde{\varphi}]$ might belong to different \tei\ spaces (see \cite[Section 3.3]{IDTT1}), which makes it impossible to use the contracting properties of $\sigma$. However, this setup will still be useful if we restrict to finite type entire functions $f_0$. Then the domain of definition of $\sigma$ can be extended to $\hat{\mathcal{T}}_P$ as follows. Let $[\psi]\in\hat{\mathcal{T}}_P$ for some \emph{topological} homeomorphism $\psi:\mathbb{C}\to\mathbb{C}$.
There is a \qc\ map $\psi'\in[\psi]_{\SV(f)}\in\hat{\mathcal{T}}_{\SV(f)}$ such that $\psi'|_{\SV(f)}=\psi|_{\SV(f)}$, i.e., a \qc\ representative of the projection of $[\psi]\in\hat{\mathcal{T}}_P$ to $[\psi]_{\SV(f)}\in\hat{\mathcal{T}}_{\SV(f)}$. Moreover, there exists an isotopy $\psi_t:\mathbb{C}\to\mathbb{C}, t\in[0,1]$ between $\psi$ and $\psi'$ relative to $\SV(f)$. Thus, to obtain $\sigma[\psi]$, we first choose some $\tilde{\psi}'\in\sigma[\psi']_{\SV(f)}$ in the usual way and then lift the isotopy $\psi_t$ starting at $\tilde{\psi}'$. The terminal point will be $\tilde{\psi}$. The usual lifting argument shows that such an extension of $\sigma$ is well defined, and we obtain a map $$\sigma:[\psi]\in\hat{\mathcal{T}}_P\mapsto[\tilde{\psi}]\in\hat{\mathcal{T}}_P.$$ Analogously, if $P', P\subset\mathbb{C}$ are two sets (not necessarily forward invariant) such that $\SV(f)\subset P$ and $f(P')\subset P$, one can interpret $\sigma$ as a map $$\sigma:[\psi]\in\hat{\mathcal{T}}_P\mapsto[\tilde{\psi}]\in\hat{\mathcal{T}}_{P'}.$$ It will be clear from the context exactly which version of $\sigma$ is under consideration. Fixed points of $\sigma$ correspond to the entire functions which are Thurston equivalent to $f$. \begin{lmm}\cite[Proposition 2.3]{DH} The quasiregular function $f$ is Thurston equivalent to an entire function if and only if $\sigma|_{\hat{\mathcal{T}}_P}$ has a fixed point. \end{lmm} However, in order to apply the contraction properties and deduce the existence of a fixed point, we need $\sigma$ to act on some \tei\ space (which is not always the case if $P$ is infinite). Very often one can deal with this obstacle by switching to a certain $\sigma$-invariant subset of $\hat{\mathcal{T}}_P$.
\subsection{Strict contraction} \label{subsec:strict_contraction} It is easy to see that $\sigma$ is contracting on $\mathcal{T}_P$, i.e., for every pair of distinct $[\varphi],[\psi]\in\mathcal{T}_P$, $$d_\mathcal{T}(\sigma[\varphi],\sigma[\psi])\leq d_\mathcal{T}([\varphi],[\psi]),$$ where $d_\mathcal{T}$ is the \tei\ metric. For the proof, one pulls back their composition via one of the entire maps appearing on the right-hand side of the Thurston diagrams (either for $\psi$ or for $\varphi$); the maximal dilatation does not increase. However, this is not enough for deducing the existence of a fixed point of $\sigma$, and a stronger condition is required. In fact, $\sigma$ can be strictly contracting on a particular subset of $\mathcal{T}_P$. \begin{defn}[Asymptotically conformal points \cite{GardinerLakic}] \label{defn:as_conformal} A point $[\varphi]\in\mathcal{T}_P$ is called \emph{asymptotically conformal} if for every $\varepsilon>0$ there is a compact set $\mathcal{C}\subset\mathbb{C}\setminus \overline{P}$ and a \qc\ representative $\psi\in[\varphi]$ such that $\abs{\mu_\psi}<\varepsilon$ almost everywhere on $(\mathbb{C}\setminus \overline{P})\setminus \mathcal{C}$. \end{defn} Consider the quasiregular function $f=\lambda\circ f_0$ constructed in the previous subsection. Then the pull-back map $\sigma$ associated to $f$ tends to decrease distances between asymptotically conformal points if their $\sigma$-images are asymptotically conformal. \begin{lmm}[Strict contraction of $\sigma$] \label{lmm:strict_contraction} Assume that every singular value of $f=\lambda\circ f_0$ is either escaping or strictly pre-periodic. Let $\mathcal{A}\subset\mathcal{T}_P$ be a $\sigma$-invariant set containing only asymptotically conformal points.
Then some iterate $\sigma^n, n>0$ is strictly contracting on $\mathcal{A}$, i.e., if $[\varphi],[\psi]\in\mathcal{A}$ are distinct, then $$d_\mathcal{T}(\sigma^n[\varphi],\sigma^n[\psi])<d_\mathcal{T}([\varphi],[\psi]).$$ \end{lmm} \begin{proof}[Sketch of proof] The lemma is a simple upgrade of \cite[Lemma 4.1]{IDTT1} allowing orbits to merge. We now indicate what has to be modified, using the notation of \cite[Lemma 4.1]{IDTT1}. Recall that an entire function has at most one omitted value, which is necessarily an asymptotic value. Therefore, if $\sigma$ does not strictly contract the distance between $[\varphi]$ and $[\psi]$, the quadratic differential $q_0$ cannot have poles associated to marked points not having singular values on their orbits (in particular, at marked points belonging to cycles). This means that $q_0$ has finitely many poles and its pull-back coincides with $q_1$, but the indices of the marked points with an associated pole are decreased by one. This procedure can be repeated at most finitely many times, say $m$ (depending only on the orbit portrait): otherwise we would obtain an integrable quadratic differential without poles, hence equal to $0$. Thus, we can take $n=m!$. \end{proof} \subsection{Representatives of \tei\ equivalence classes} \label{subsec:teich_representatives} We prove a few rather technical statements about the existence of representatives of \tei\ equivalence classes that are suitable for our purposes. Similar statements appear in the literature, but often in a slightly different form (e.g., without explicit bounds for the dilatation). Therefore, we provide them in a form suitable for direct application, together with proofs. The lemma below, though elementary, will be used in multiple places throughout the article after trivial modifications (such as for disks of different radii).
\begin{lmm} \label{lmm:conformal_neighbourhood_expand} Let $\psi:\overline{\mathbb{D}}\to\overline{\mathbb{D}}$ be a $K$-\qc\ map such that $\psi(0)=0$ and $\psi|_{\mathbb{D}_r}$ is conformal for some $r\in (0,1)$. If $1>\rho>r$, then $\psi$ can be isotoped relative to $\partial\mathbb{D}\cup\{0\}$ to a $K'$-\qc\ map $\varphi$ so that it is conformal on $\mathbb{D}_\rho$ and $K'=K\log{r}/\log{\rho}$. Analogously, if $\psi|_{\mathbb{D}_r}=\id$ for some $r\in (0,1)$ and $1>\rho>r$, then $\psi$ can be isotoped relative to $\partial\mathbb{D}\cup\{0\}$ to a $K'$-\qc\ map $\varphi$ so that $\varphi|_{\mathbb{D}_\rho}=\id$ and $K'=K(\log{r}/\log{\rho})^2$. \end{lmm} \begin{proof} Construct an auxiliary isotopy $\psi_t, t\in [r,\rho]$ of $\overline{\mathbb{D}}$ as follows. On the annulus $A=\{z: 1>\abs{z}>\rho\}$ define $\psi_t(z):=z\abs{z}^{\frac{\log{t}}{\log{\rho}}-1}$, while on $\overline{\mathbb{D}}_\rho$ let $\psi_t(z):=z t/\rho$. Then the composition $\psi\circ\psi_t, t\in [r,\rho]$ is the desired isotopy between $\psi=\psi\circ\psi_\rho$ and the $K'$-\qc\ map $\psi\circ\psi_r$, which is conformal on $\mathbb{D}_\rho$. For the case $\psi|_{\mathbb{D}_r}=\id$ we consider the isotopy $\psi_t^{-1}\circ\psi\circ\psi_t, t\in [r,\rho]$ instead. \end{proof} The next two lemmas provide bounds on the maximal dilatation when we isotope a \qc\ map in a neighbourhood of a point in order to make it either conformal or equal to the identity there. It is clear that the constant $1/2$ can be replaced by any other constant smaller than $1$. \begin{lmm}[Conformality near puncture] \label{lmm:conformal_neighbourhood} Let $\psi:\overline{\mathbb{D}}\to\overline{\mathbb{D}}$ be a $K$-\qc\ map such that $\psi(0)=0$. Then $\psi$ is isotopic relative to $\partial\mathbb{D}\cup\{0\}$ to a $K'$-\qc\ map $\varphi$ such that $\varphi|_{\mathbb{D}_{1/2}}$ is conformal and $K'=O(K^2)$. \end{lmm} \begin{proof} Let $r\in (0,1)$: its particular value will be chosen later.
Consider a \qc\ map $\psi_1:\mathbb{D}\to\mathbb{D}$ fixing the origin such that its Beltrami coefficient is equal to $0$ in $\mathbb{D}_r$ and equal to the Beltrami coefficient of $\psi$ in $A_r:=\{z:1>\abs{z}>r\}$. Then the map $\psi_2:=\psi\circ\psi_1^{-1}$ is $K$-\qc\ and conformal on $\psi_1(A_r)$. Now, we construct a suitable isotopy of $\psi_2$ making it equal to the identity near the origin. Note first that since the modulus of $\psi_1(A_r)$ is at least $-\log r/K$, the image $\psi_1(\mathbb{D}_r)$ is contained in $\mathbb{D}_{r_2}$ where $r_2=\sqrt{-8K/\log r}$. Indeed, if $\epsilon=\sup_{z\in\mathbb{D}_r}\abs{\psi_1(z)}$, then $\mod \psi_1(A_r)\leq 2\pi^2/(2\epsilon)^2<8/\epsilon^2$ (use, for example, \cite[Formula I.6.1]{LehtoVirtanen} with the Euclidean metric $\rho$), hence $\epsilon<\sqrt{-8K/\log r}$. Assume that $r$ is chosen in such a way that $\mathbb{D}_{r_2}$ has hyperbolic diameter (in $\mathbb{D}$) not exceeding $\delta=1/2$, i.e., $r_2<(e^{\delta/2}-1)/(e^{\delta/2}+1)$. Then as in the proof of \cite[Lemma 4.2]{IDTT1} we see that the quasisymmetry constant of the function $\psi_2^{-1}|_{\partial\mathbb{D}}$ is bounded independently of $K$ (by $\lambda(1/(1-\delta))=\lambda(2)$ following the notation of \cite[Lemma 4.2]{IDTT1}). It follows from \cite[Proposition 2.28]{BrannerFagella} and the Alexander trick that $\psi_2$ can be isotoped relative to $\partial\mathbb{D}\cup\{0\}$ to a \qc\ map $\psi'_2$ such that $\psi'_2|_{\mathbb{D}_{r_2}}=\id$ and its maximal dilatation is universally bounded. Restrict to $r_2<1/8$ or, equivalently, $r<e^{-2^9 K}$. Then the composition $\psi'_2\circ\psi_1$ is conformal on $\mathbb{D}_r$ and has maximal dilatation not exceeding $K_0 K$ for some universal constant $K_0\geq 1$. Note that $1/2>e^{-2^9 K}$, but using Lemma~\ref{lmm:conformal_neighbourhood_expand} we can additionally isotope $\psi'_2\circ\psi_1$ to a map having maximal dilatation at most $K_0 K \log{e^{-2^{9}K}}/\log(1/2)=O(K^2)$.
\end{proof} \begin{lmm}[Identity near puncture] \label{lmm:id_near_puncture} Let $0\in U,V\subset\hat{\mathbb{C}}$ be two Riemann domains, let $r,R>0$ be the maximal radii such that $\mathbb{D}_r\subset U,\; \mathbb{D}_R\subset V$, and let $\psi: U\to V$ be a $K$-\qc\ map such that $\psi(0)=0$. Then $\psi$ is isotopic relative to $\partial U\cup\{0\}$ to a $K'$-\qc\ map $\varphi$ such that $\varphi|_{\mathbb{D}_{\min(r, R)/2}}=\id$ and $K'=O(K^4)\left(1+\abs{\log R/r}\right)$. \end{lmm} \begin{proof}[Sketch of proof] First, let us prove the lemma for $U=V=\mathbb{D}$. After applying Lemma~\ref{lmm:conformal_neighbourhood_expand} and Lemma~\ref{lmm:conformal_neighbourhood} we may assume that $\psi$ is conformal on $\mathbb{D}_{3/4}$ and has maximal dilatation $K_1=O(K^2)$. Let $A:=\mathbb{D}\setminus\overline{\mathbb{D}_{3/4}}$. From the Gr\"otzsch inequality, $\max_{z\in \partial\mathbb{D}_{3/4}}\abs{\psi(z)}\geq \left(3/4\right)^{K_1}$. Applying also Lemma~\ref{lmm:max_over_min} we obtain $$\min_{z\in \partial\mathbb{D}_{3/4}}\abs{\psi(z)}\geq e^{-\pi K_1}\max_{z\in \partial\mathbb{D}_{3/4}}\abs{\psi(z)}\geq e^{-\pi K_1}\left(3/4\right)^{K_1}=e^{-O(K^2)}.$$ That is, $\psi(\mathbb{D}_{3/4})$ contains a round disk around $0$ of radius $e^{-O(K^2)}$. After post-composing $\psi$ with the isotopy from Lemma~\ref{lmm:conformal_neighbourhood_expand}, we may additionally assume that $\psi(\mathbb{D}_{3/4})$ contains $\mathbb{D}_{3/4}$ and that $\psi$ has maximal dilatation $K_2=O(K^2) K_1=O(K^4)$. The Koebe $1/4$-Theorem implies that $\abs{\psi'(0)}$ is universally bounded from above and from below. The proof can be concluded by interpolation as in \cite[Proposition 2.28]{BrannerFagella} together with the standard Koebe distortion argument applied to the conformal map $\psi|_{\mathbb{D}_{3/4}}$ (note that the ``quasisymmetric'' constants are universally bounded on $\mathbb{D}_{r}$ if $r<3/4$ is close enough to $0$).
More generally, if $U,V$ are such that $r=R=1$, pre- and post-composition of $\psi$ with the respective Riemann maps fixing $0$ reduces the problem to the case solved above. On the other hand, for the Riemann maps the derivatives at $0$ are universally bounded from above and below, hence one can isotope them to the identity on $\mathbb{D}_{1/2}$ with a universal bound on the maximal dilatation. Thus, as in the case of the unit disks, $K'=O(K^4)$. Finally, let $U,V$ be arbitrary Riemann domains. Consider the map $z\mapsto\psi(rz)/R$ defined on the rescaled domain. Applying the conclusion of the previous paragraph we see that $\psi$ can be isotoped relative to $\partial U\cup \{0\}$ to a map which is equal to $z\mapsto zR/r$ on $\mathbb{D}_{r/2}$ and has maximal dilatation $O(K^4)$. This new map can be isotoped to a map $\varphi$ equal to the identity on $\mathbb{D}_{\min(r, R)/4}$ and having maximal dilatation $O(K^4)\left(1+\abs{\log R/r}\right)$. Using Lemma~\ref{lmm:conformal_neighbourhood_expand} and increasing the $O(\cdot)$ bounds, we may extend the domain on which $\varphi=\id$ to $\mathbb{D}_{\min(r, R)/2}$. \end{proof} The next two statements answer the following question. Given a Riemann domain with two marked points in it, how big is the maximal dilatation needed to move one point to the other by an isotopy relative to the boundary of the domain? \begin{lmm} \label{lmm:move_inside_annulus_bounds} Let $A\subset \mathbb{C}$ be an annulus and let the points $x,y$ be contained in the bounded component of $\mathbb{C}\setminus A$. Then there exists a $K$-\qc\ map $\psi:\mathbb{C}\to\mathbb{C}$ equal to the identity on the unbounded component of $\mathbb{C}\setminus A$ such that $\psi(x)=y$ and $K=O(1+1/(\mod A)^2)$. \end{lmm} \begin{proof} Let $D$ be the union of $A$ with the bounded component of $\mathbb{C}\setminus A$. Without loss of generality we may assume that $D$ is the unit disk and $x=0, y=r$ for some $r\in[0,1)$.
The annulus of the biggest modulus separating $0$ and $r$ from $\partial D$ is the Gr\"otzsch extremal domain $D\setminus[0,r]$ (see \cite[Section II.1]{LehtoVirtanen}). Therefore, we may assume that $A=D\setminus[0,r]$. Next, after changing coordinates by applying a M\"obius transformation, we assume that $A=D\setminus [-r_1,r_1]$ for some $r_1<r$. From the central symmetry of $A$ one sees that, in order to provide a \qc\ map exchanging $-r_1$ and $r_1$, it is enough to apply a half-twist that interchanges $-r_1$ and $r_1$ and leaves the interval $[-r_1,r_1]$ invariant. This is most easily done for the round annulus of modulus equal to $\mod A$: a direct computation shows that the induced maximal dilatation is $O(1+1/(\mod A)^2)$. \end{proof} The lemma above can be restated in a form which is more convenient for us. \begin{cor} \label{cor:moving_inside_round_disk_bounds} Let $\delta\in (0,1/2)$ and $x\in\overline{\mathbb{D}}_{1-\delta}$. Then $x$ can be mapped to any other point of $\overline{\mathbb{D}}_{1-\delta}$ by a $K$-\qc\ map equal to the identity on $\partial\mathbb{D}$ and with $K=O(\log^2\delta)$. \end{cor} \begin{proof} We can restrict to the case when $\delta\to 0$. As in the lemma above, if $x,y\in\mathbb{D}_{1-\delta}$, then the annulus of maximal modulus in $\mathbb{D}$ separating them from $\mathbb{D}_1^\infty$ is the complement in $\mathbb{D}$ of the hyperbolic geodesic segment joining $x$ to $y$. Thus, this modulus is smallest if $\abs{x}=1-\delta$ and $y=-x$, i.e., when the hyperbolic distance between them is equal to $2\log\left(\frac{1+\abs{x}}{1-\abs{x}}\right)=2\log\left(\frac{2-\delta}{\delta}\right)$. After a holomorphic change of coordinates we may assume that $y=0$ and $\abs{x}=1-\delta'$ where $\delta'=\left(1/2+O(\delta)\right)\delta^2$.
The annulus of the largest modulus separating $x$ and $0$ from $\partial\mathbb{D}$ is the Gr\"otzsch extremal domain $\mathbb{D}\setminus [0,x]$ having modulus $\mu(\abs{x})=\mu(1-\delta')$ (see \cite[Section II.1]{LehtoVirtanen} for the definition of the function $\mu$). From \cite[Section 2.1, Equation (2.7)]{LehtoVirtanen} and \cite[Section 2.1, Equation (2.11)]{LehtoVirtanen}, $\mu(1-\delta')\log\delta'$ converges to a negative constant as $\delta'\to 0$. Using the estimate of Lemma~\ref{lmm:move_inside_annulus_bounds}, we obtain the required bound. \end{proof} We finish this subsection with a short computation needed to bound the maximal number of twists happening under a $K$-\qc\ automorphism of the thrice punctured sphere. The bound is quite rough, but it will be sufficient for our needs. \begin{lmm}[Twist angle in thrice punctured sphere] \label{lmm:twist_in_3_punctured_sphere} Let $p\in\mathbb{D}_{2}^\infty\setminus\{\infty\}$ and $\psi:\mathbb{C}\to\mathbb{C}$ be a $K$-\qc\ homeomorphism isotopic relative to $\{0,1,p\}$ to an $n$-twist of the annulus $\mathbb{A}_{2,\abs{p}}$ (in particular, $\psi(p)=p$). Then for some universal constant $C>0$, $$n<\frac{\log\abs{p}}{2\pi} K^{1/C}.$$ \end{lmm} \begin{proof} Let $\psi_t, t\in[0,1]$ be an isotopy relative to $\{0,1\}$ such that $\psi_0=\id$ and $\psi_1=\psi$. Since the \tei\ metric on the $4$-punctured sphere coincides with the hyperbolic metric, we have to bound from below the length of the geodesic segment in the homotopy class of the curve $\psi_t(p), t\in[0,1]$. This length is commensurable with the length $d$ of the geodesic segment in $\mathbb{D}_1^\infty\setminus\{\infty\}$, hence $\log K\geq Cd/2$ for some universal constant $C>0$. Lifting this segment to the right half-plane via $e^{z}$ we obtain a geodesic segment between points with real parts equal to $\log\abs{p}$ and with the difference between their imaginary parts equal to $2\pi n$.
Then $$d=2\arsinh\frac{\pi n}{\log\abs{p}}$$ and $$K\geq (e^{d/2})^C>(2\sinh{(d/2)})^C=\left(\frac{2\pi n}{\log\abs{p}}\right)^C.$$ \end{proof} \section{Asymptotic area property} \label{sec:AAP} In this section we discuss in more detail functions having the asymptotic area property (AAP). Let $f\in\mathcal{B}$, let $D\supset\SV(f)$ be an open bounded set and denote $\mathcal{E}_r:=f^{-1}(\overline{\mathbb{D}}_r\setminus D)$. Consider the function $$I_1(\rho,D):= \frac{1}{2\pi}\iint\displaylimits_{\{\rho\leq\abs{z}\}\bigcap\mathcal{E}_\rho}\frac{dx dy}{\abs{z}^2}.$$ Recall from the Introduction (Definition~\ref{defn:as_area_property}) that $f$ has AAP relative to an open set $D\supset\SV(f)$ if $$\limsup_{\rho\to\infty}I_1(\rho,D)<\infty,$$ and $f$ has AAP if it has AAP relative to every open set $D\supset\SV(f)$. It is easy to see from this definition that it is enough to check AAP only for bounded $D$. Further, in the case when $f$ has finite type, one can restrict to sets $D$ that are unions of arbitrarily small disjoint disks around the singular values. It is convenient to have some bound for $I_1(\rho, D)$ which is independent of $D$. Proposition~\ref{prp:AAP_for_finite_type} justifies this approach at least in some generality. \begin{defn}[Degeneration function] \label{defn:area_degeneration_function} Let $f\in\mathcal{B}$ have AAP. We will say that $\chi:\mathbb{R}_+\to\mathbb{R}_+$ is an (area) \emph{degeneration function} for $f$ if for every open $D\supset\SV(f)$, $$\limsup_{\rho\to\infty}\frac{I_1(\rho,D)}{\chi(\rho)}<\infty.$$ \end{defn} We are mainly interested in the setup when the degeneration function tends to $0$ as $\rho\to\infty$. Sometimes it is possible to provide very precise asymptotics for $I_1$. A particular example should be enlightening here. \begin{example} \label{eg:exp_area_property} The exponential function $f(z)=e^z$ has AAP with a degeneration function equal to $\log\rho/\rho$.
\end{example} \begin{proof} It is enough to prove the statement for $D=\mathbb{D}_R$ where $R>0$. If $\rho>R$, then $\mathcal{E}_\rho$ is a vertical strip bounded by the straight vertical lines $\Re z=\log R$ and $\Re z=\log\rho$. Therefore, it is sufficient to prove the bound for $R=1$. For $r\geq\rho$, the angular measure $\theta(r)$ of $\mathcal{E}_\rho$ in $\mathbb{S}_r$ is equal to $2\arcsin(\log\rho/r)$. Since $\log\rho$ is much smaller than $\rho$, and hence much smaller than $r$, as $\rho\to\infty$, we have $\theta(r)\sim 2\log\rho/r$. Finally, $$I_1(\rho,\mathbb{D})= \frac{1}{2\pi}\iint\displaylimits_{\{\rho\leq\abs{z}\}\bigcap\mathcal{E}_\rho}\frac{dx dy}{\abs{z}^2}=\frac{1}{2\pi}\int\limits_{\rho\leq r}\frac{\theta(r)dr}{r}\sim\frac{1}{2\pi}\int\limits_{\rho\leq r}\frac{2\log\rho\, dr}{r^2}=\frac{\log\rho}{\pi\rho}.$$ \end{proof} Similar computations show that any function $p\circ\exp$, where $p$ is a non-constant polynomial, as well as $\cos z$ and $\sin z$, has a degeneration function equal to $\log\rho/\rho$, while for $\exp\circ p$ it is $\log\rho/\rho^{\deg p}$. Along with the integral $I_1(\rho,D)$ we will need to consider its more general version depending on a parameter $\alpha>0$: $$I_\alpha(\rho,D):= \frac{1}{2\pi}\iint\displaylimits_{\{\alpha\rho\leq\abs{z}\}\bigcap\mathcal{E}_\rho}\frac{dx dy}{\abs{z}^2}.$$ Clearly, the value of $I_\alpha$ at $\rho$ coincides with the value of $I_1$ at $\rho$ computed for the function $g(z):=f(\alpha z)$ (substitute $z=\alpha w$ and note that the cylindrical area element is invariant under scaling). It is natural to expect that $g$ and $f$ have AAP with commensurable degeneration functions. This is indeed the case. \begin{lmm} \label{lmm:AAP_for_scaling} Let $f\in\mathcal{B}$ have AAP with respect to $D$. Then for every $\alpha>0$, $$I_\alpha(\rho,D)<I_1(\alpha\rho,D)+2I_1(\alpha^2\rho,D)$$ whenever $\rho$ is big enough. \end{lmm} \begin{proof} If $\alpha\geq 1$, then trivially $I_\alpha(\rho,D)\leq I_1(\alpha\rho, D)$. Thus, we focus on the case $\alpha<1$.
Denote $\beta:=-\log\alpha$ and fix some $\rho_0$ such that $\mathbb{D}_{\rho_0}\supset\SV(f)$. Let us switch to the logarithmic coordinates with $F$ being some logarithmic transform of $f$. Consider a parametrized nested family $\{T_x\}_{x\geq\log\rho_0}$ of tracts $T_x$ such that $F(T_x)=\mathbb{H}_x$ and let $\mathcal{T}$ be the set of all such families modulo vertical translation by $2\pi$. Recall that the pull-back of the cylindrical metric under the exponential map is the Euclidean metric, and denote by $\nu(S,a)$ the area of a measurable set $S$ inside of the right half-plane $\mathbb{H}_a$. Then we can write $$I_\alpha(e^x,D)=I_1(e^{x-\beta},D)+\sum\limits_{\mathcal{T}}\nu(T_{x-\beta}\setminus T_x,x-\beta).$$ Therefore, in order to prove the lemma, it is enough to show that for all big enough $x$ (independently of the choice of the family in $\mathcal{T}$), the following inequality holds: \begin{equation} \label{eqn:ineq_area_of_tracts_pieces} \nu(T_{x-\beta}\setminus T_x,x-\beta)<2\nu(T_{x-3\beta}\setminus T_{x-2\beta},x-2\beta). \end{equation} Indeed, after summing up the right hand side of (\ref{eqn:ineq_area_of_tracts_pieces}) over all families in $\mathcal{T}$, we obtain $$2\sum\limits_{\mathcal{T}}\nu(T_{x-3\beta}\setminus T_{x-2\beta},x-2\beta)<2I_1(e^{x-2\beta},D).$$ Let us prove the inequality~(\ref{eqn:ineq_area_of_tracts_pieces}). By a small abuse of notation we assume that $F=F|_{T_{\log\rho_0}}$, i.e., $F$ is univalent, and for every $y\in\mathbb{R}$, consider three horizontal segments $s_y^1:=[x-\beta,x]\times\{y\}$, $s_y^2:=[x-3\beta,x-2\beta]\times\{y\}$ and $s_y:=[x-3\beta,x]\times\{y\}$. Let $Y$ be the set of all $y\in\mathbb{R}$ such that $F^{-1}(s_y^1)$ has a non-empty intersection with the strip $\{x-\beta<\Re z<x\}$. By making $x$ big enough, we might assume that $\abs{(F^{-1})'(w)}<1/3$ when $\Re w>x-3\beta$.
Then the length of $F^{-1}(s_y)$ is smaller than $\beta$, hence for every $y\in Y$, the curve $F^{-1}(s_y^2)$ is contained in $\mathbb{H}_{x-2\beta}$. On the other hand, if $x$ is much bigger than $\log\rho_0$, due to the Koebe distortion theorem applied to $F^{-1}$ and a big disk centered at $x+iy$, the derivatives $\abs{(F^{-1})'(w)}$ are uniformly commensurable along every $s_y$ (e.g., up to a multiplier $\sqrt{2}$). This provides the desired bound on the area and finishes the proof of the lemma. \end{proof} More simply, if $f$ has AAP for $D$, then for every $b\in\mathbb{C}$, the function $f_b(z):=f(z-b)$ has AAP for $D$, and for big enough $\rho$, the values $I_1(\rho,D)$ computed for $f_b$ do not exceed $MI_\alpha(\rho,D)$ computed for $f$, where $M>1,\alpha<1$ are some constants. If $f$ is of finite type, using similar techniques we can prove an even stronger result. \begin{prp}[AAP for finite type functions] \label{prp:AAP_for_finite_type} Let $f$ be a finite type entire function with bounded degrees of critical points and $D=\cup_{v\in\SV(f)}D_v$ where $D_v$'s are bounded Riemann domains with pairwise disjoint closures. If $f$ has AAP relative to $D$, then it also has AAP relative to any other open set $D'\subset D$ containing all singular values. Moreover, for big enough $\rho$ and some constants $M>1$, $\alpha<1$, $$I_1(\rho, D')<MI_\alpha(\rho,D).$$ \end{prp} \begin{proof}[Sketch of proof] Choose some pairwise disjoint bounded Riemann domains $\hat{D}_v$ such that $\overline{D}_v\subset\hat{D}_v$ and let $R_v:\hat{D}_v\to\mathbb{D}$ be a Riemann map of $\hat{D}_v$ mapping $v$ to $0$. Denote $D'_v:=D_v\cap D'$ and fix some numbers $\beta<\alpha<1$ such that for every singular value $v$, $R_v(D'_v)\subset\mathbb{D}_\beta$ and $R_v(D_v)\subset\mathbb{D}_\alpha$. Without loss of generality we might assume that $D_v=R_v^{-1}(\mathbb{D}_\alpha)$ and $D'_v=R_v^{-1}(\mathbb{D}_\beta)$.
In the setting as above we can switch to the ``semi-logarithmic coordinates'' in the sense that for every $v\in\SV(f)$, there is a map $F_v:=R_v\circ f\circ\exp$ defined on a disjoint union $\mathcal{T}_v$ of Riemann domains. The setup is well-defined because we are only interested in the pre-images (or their parts) of $\mathbb{D}$ under $R_v\circ f$ which are far from the origin. We need to consider three cases depending on the type of the branched covering $F_v|_U$, where $U$ is a connected component of $\mathcal{T}_v$. We are going to work with each $v$ separately, so let us suppress $v$ from the indices and locally use the notation $F=F_v|_U$. \begin{enumerate} \item[(Regular value)] Let $F$ be a conformal map. Then it follows from the Koebe 1/4-theorem and the $2\pi i$-periodicity of the exponential map that $\abs{(F^{-1})'(0)}$ is universally bounded. Therefore, due to the Koebe distortion theorem, the diameters of $F^{-1}(\mathbb{D}_\alpha)$ are uniformly bounded and the area of $F^{-1}(\mathbb{A}_{\beta,\alpha})$ is commensurable with the area of $F^{-1}(\mathbb{A}_{\alpha,\sqrt{\alpha}})$. This means that the space in $D'$ (compared to $D$) added inside of the regular pre-image domains has area commensurable to the area already included into $I_\alpha(\rho,D)$ for some $\alpha<1$ and corresponding to a regular pre-image. \item[(Critical value)] Let $F$ be a branched covering of degree $d$ with the only critical point $p:=F^{-1}(0)$. Then there exists a lift of $F$ of the form $F^{1/d}$ and its inverse is a conformal map of $\mathbb{D}$. Since the degrees of critical points are bounded, we can proceed as in the previous case. \item[(Asymptotic value)] Let $F:U\to\mathbb{D}\setminus\{0\}$ be a covering of infinite degree. We can switch to the genuine logarithmic coordinates by considering a conformal map $\tilde{F}:U\to\mathbb{H}_0$ defined by the relation $\tilde{F}=-\log F$.
Then, similarly as in the proof of Lemma~\ref{lmm:AAP_for_scaling}, by a Koebe distortion argument, the $\tilde{F}$-pre-images of segments $[-\log\alpha/2,-\log\beta]\times\{y\}$ have lengths bounded independently of $y\in\mathbb{R}$. For the same reason, the area distortion near such segments is bounded, and the claim follows. \end{enumerate} \end{proof} Note that AAP generically behaves well under composition of functions. That is, if $f$ and $g$ have AAP, then it is natural to expect that $f\circ g$ also has AAP. Indeed, if we switch to the logarithmic coordinates $F,G$, then the pull-back of the cylindrical measure is the Euclidean measure. So, if the tracts of $F$ ``fill'' almost all space near $+\infty$, their preimage under $G$ should ``fill'' most of the area in the tracts of $F$. For example, a very rough estimate shows that $e^{e^z}$ has AAP with a degeneration function of magnitude $O(\log^2\rho/\rho)$. \section{Koebe-like estimates for \qc\ maps with small dilatation per area} \label{sec:Koebe} We will prove two quantitative estimates for \qc\ maps possibly having a very big maximal dilatation which is, however, supported on a set of small area. The computations rely heavily on the techniques from the proof of the \tei--Wittich Theorem~\ref{thm:teich--wittich} as presented in \cite[Chapter V.6]{LehtoVirtanen}.
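Before turning to the distortion estimates, a short numerical aside on Section~\ref{sec:AAP}: the asymptotics of Example~\ref{eg:exp_area_property} is easy to sanity-check by direct quadrature. The following sketch is an illustration only and is not used in any of the arguments; it assumes the parametrization $\theta(r)=2\arcsin(\log\rho/r)$ computed in the example and evaluates $I_1(\rho,\mathbb{D})$ after the substitution $u=\log\rho/r$.

```python
import math

def I1_exp(rho, n=100_000):
    # I_1(rho, D) for f = exp and D the unit disk:
    #   (1/2pi) * int_rho^infty 2*arcsin(log(rho)/r) dr/r.
    # Substituting u = log(rho)/r turns this into
    #   (1/pi) * int_0^{log(rho)/rho} arcsin(u)/u du,
    # where the integrand extends continuously by 1 at u = 0.
    a = math.log(rho) / rho
    h = a / n
    # midpoint rule on [0, a]
    return (h / math.pi) * sum(
        math.asin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(n)
    )

for rho in (1e2, 1e3, 1e4):
    print(rho, I1_exp(rho) * rho / math.log(rho))  # stabilizes near 1/pi
```

Only the order $\log\rho/\rho$ matters for Definition~\ref{defn:area_degeneration_function}; the limiting constant depends on the normalization of $I_1$.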
\begin{lmm}[Conditional Koebe distortion] \label{lmm:conditional_Koebe_I} Let $\varphi:\mathbb{D}\to U\subset\mathbb{C}$ be a \qc\ map such that $\varphi(0)=0$ and $$I= \frac{1}{2\pi}\iint\displaylimits_{0<\abs{z}< 1}\frac{D_\varphi(z)-1}{\abs{z}^2}dx dy< \infty.$$ If we restrict to $\varphi$ such that $I\leq\kappa$ for some parameter $\kappa>0$, then: \begin{enumerate} \item for every $z\in \mathbb{D}$, $\abs{\varphi(z)}/\abs{z\varphi'(0)}$ is bounded from below by a constant depending only on $\kappa$; \item there exists a radius $0<r_\kappa<1$ such that for every $z\in\mathbb{D}_{r_\kappa}(0)$, $\abs{\varphi(z)}/\abs{z\varphi'(0)}$ is bounded from above by a constant depending only on $\kappa$. \end{enumerate} Moreover, as $\kappa\to 0$, the radii $r_\kappa$ can be chosen in such a way that for every $z\in\mathbb{D}_{r_\kappa}(0)$, $$\left|\frac{\abs{\varphi(z)}}{\abs{z\varphi'(0)}}-1\right|<C_\kappa$$ where $C_\kappa\to 0$ as $\kappa\to 0$. \end{lmm} \begin{proof} From the Teichm\"uller--Wittich Theorem~\ref{thm:teich--wittich}, we know that $\varphi$ is conformal at $0$, hence after rescaling we may assume that $\varphi'(0)=1$, i.e., $\abs{\varphi(z)}/\abs{z}\to 1$ as $\abs{z}\to 0$. Let $0<\delta<\rho<1$. From Lemma~\ref{lmm:ineq_mod_diff}, we obtain \begin{equation} \label{eqn:lmm_conditional_Koebe} \abs{\mod\varphi(\mathbb{A}_{\delta,\rho})-\log\frac{\rho}{\delta}}\leq\frac{1}{2\pi}\iint\displaylimits_{\mathbb{A}_{\delta,\rho}} \frac{D(z)-1}{\abs{z}^2}dxdy\leq \kappa. \end{equation} From Theorem~\ref{thm:essential_round_annulus}, if $\delta$ is small enough, there exists a round annulus $B$ centered at $0$ so that $\mod\varphi(\mathbb{A}_{\delta,\rho})=\mod B+O(1)$. We may assume that $B$ is the maximal such annulus, i.e., its outer radius is equal to $R=\min_{z\in \partial\mathbb{D}_\rho}\abs{\varphi(z)}$ and its inner radius to $r=\max_{z\in \partial\mathbb{D}_\delta}\abs{\varphi(z)}$. 
Due to the conformality at $0$, by making $\delta$ small we have $(1-\varepsilon)\delta<r<(1+\varepsilon)\delta$ for any initially chosen $\varepsilon>0$. Thus, as $\varepsilon\to 0$ the inequality~(\ref{eqn:lmm_conditional_Koebe}) rewrites as $$\log\rho-\kappa+O(1)\leq\log R\leq\log\rho+\kappa+O(1),$$ which implies that $\min_{z\in \partial\mathbb{D}_\rho}\abs{\varphi(z)}/\rho$ is bounded from below by a constant depending only on $\kappa$ (but note that it is also bounded from above by $e^{\kappa+O(1)}$). This proves the first part of the statement. Choose (if possible) some $\rho_1>\rho$ such that $\mod \mathbb{A}_{\rho,\rho_1}=I+C$ where $C$ is the universal constant from Theorem~\ref{thm:essential_round_annulus}. Then from the inequality $$\abs{\mod\varphi(\mathbb{A}_{\rho,\rho_1})-\mod \mathbb{A}_{\rho,\rho_1}}\leq\frac{1}{2\pi}\iint\displaylimits_{\mathbb{A}_{\rho,\rho_1}} \frac{D(z)-1}{\abs{z}^2}dxdy\leq I,$$ we see that $\mod\varphi(\mathbb{A}_{\rho,\rho_1})>C$, hence there exists a circle centered at $0$ and contained in $\overline{\varphi(\mathbb{A}_{\rho,\rho_1})}$. This means that $$\max_{z\in \partial\mathbb{D}_\rho}\abs{\varphi(z)}\leq\min_{z\in \partial\mathbb{D}_{\rho_1}}\abs{\varphi(z)}=\rho_1 e^{I+O(1)}=\rho e^{C+I}e^{I+O(1)}.$$ The estimate for $\kappa\to 0$ follows almost immediately from the proof of \cite[Hilfssatz V.6.1]{LehtoVirtanen} if we upgrade the input data. More precisely, we no longer need the maximal dilatation $K$ to estimate the quantity $\psi(r):=\max_{\abs{z}=r}\abs{\varphi(z)}/\min_{\abs{z}=r}\abs{\varphi(z)}$ (notation of \cite{LehtoVirtanen}). Instead, use the uniform bounds obtained from the first part of the lemma, i.e., $\psi(r)<C_1$ for $r>r_\kappa$ and some constant $C_1$ depending only on $\kappa$. 
Then it follows from \cite[Chapter V, inequality (6.21)]{LehtoVirtanen}, together with the discussion in the subsequent paragraph, that $\psi(r)$ is smaller than $\epsilon=\epsilon(\kappa)$ (tending to $0$ as $\kappa\to 0$) if $r<r_\kappa$ where $r_\kappa$ depends only on $\kappa$. Then the required statement can be proved exactly as the first part of the lemma: the universal constant from Theorem~\ref{thm:essential_round_annulus} can be replaced by a constant arbitrarily close to $0$. \end{proof} Before proving a similar statement for angular distortion, we need a short preparatory lemma. For (a small) $d>0$, denote by $R_d$ the rectangle $[0,d]\times[0,1]$ and consider the situation when such a rectangle is divided into two quadrilaterals by an injective path $\gamma:[0,1]\to R_d$ contained in the interior of $R_d$ except for its endpoints, which belong to different vertical sides of $R_d$. The upper and the lower quadrilaterals are denoted by $Q_1$ and $Q_2$, respectively. We assume that the orientation is chosen in such a way that $\gamma$ and the horizontal sides of $R_d$ are the corresponding $a$-sides of $Q_1,Q_2$ and $R_d$. \begin{lmm} \label{lmm:splitted_rectangle} Fix some $0<\tau<1$. For every $\varepsilon>0$, there exists $\delta>0$ such that if simultaneously $\mod Q_1<d(1+\delta)/(1-\tau)$, $\mod Q_2<d(1+\delta)/\tau$ and $d<\delta$, then the path $\gamma$ is contained inside of a horizontal strip of height at most $\varepsilon$. \end{lmm} \begin{proof} Let us consider the annulus $A$ obtained by gluing $R_d$ together with its mirror copy along the vertical sides. Then the union $\Gamma$ of $\gamma$ with its mirror copy is a topological circle dividing $A$ into two annuli $A_1$ and $A_2$, each of them being one of the quadrilaterals $Q_1$ and $Q_2$, respectively, glued with its mirror copy.
Then $\mod A=\pi/\mod R_d$ and $\mod A_i=\pi/\mod Q_i$, $i=1,2$ (the relations follow immediately from \cite[Hilfssatz 6.5]{LehtoVirtanen} after noticing that $A_1$ and $A_2$ have an axis of symmetry). Due to Theorem~\ref{thm:essential_round_annulus}, if $\mod A_i$ is big enough (that is, when $d$ is small enough), $A_i$ contains a round annulus $B_i$ such that $\mod A_i-\mod B_i<C$ for some universal constant $C>0$. Then the curve $\Gamma$ is contained inside of the round annulus $B'$ between $B_1$ and $B_2$. However, by superadditivity of the modulus, $$\begin{aligned}\mod B'&\leq\mod A-\mod B_1-\mod B_2\\ &<2C+\mod A-\mod A_1-\mod A_2\\ &=2C+\pi\left(\frac{1}{\mod R_d}-\frac{1}{\mod Q_1}-\frac{1}{\mod Q_2}\right)\\ &<2C+\frac{\pi}{d}\left(1-\frac{1}{1+\delta}\right)<2C+\frac{\pi\delta}{d}.\end{aligned}$$ Therefore $$\frac{\mod B'}{\mod A}<\frac{2Cd+\pi\delta}{\pi}\to 0$$ as $\delta\to 0$. Since $B'$ and $A$ are concentric round annuli, the claim follows. \end{proof} Now, we state and prove a key result that allows us to maintain and reproduce an ``invariant structure'' in Theorem~\ref{thm:invariant_structure}. It says that if the cylindrical integral is small for a \qc\ map fixing $0$, then this map is predictably close to the identity on a neighbourhood of $0$ which is independent of the maximal dilatation, and depends only on the value of the integral. \begin{prp}[Distortion of identity] \label{prp:distortion of identity} For every $\varepsilon>0$, there exist $0<\kappa<\infty$ and a radius $0<r<1$, so that the following statement holds. If $\varphi:\mathbb{D}\to U\subset\mathbb{C}$ is a \qc\ map such that $\varphi(0)=0$, $\varphi'(0)=1$ and $$\frac{1}{2\pi}\iint\displaylimits_{0<\abs{z}< 1}\frac{D_\varphi(z)-1}{\abs{z}^2}dx dy<\kappa,$$ then for $z\in\mathbb{D}_r\setminus\{0\}$, $$d_{\cyl}(\varphi(z),z)<\varepsilon.$$ \end{prp} \begin{proof} The first part of the proposition, about the radial distortion, is already proven in Lemma~\ref{lmm:conditional_Koebe_I}.
To provide bounds for the angular distortion, let us switch to the logarithmic coordinates. This can be described by the following diagram, where by $\mathbb{H}$ we denote the left half-plane $\{z:\Re z<0\}$. The map $\xi$ is defined up to a vertical translation by $2\pi$, so we fix some choice of it. \begin{center} \begin{tikzcd} \mathbb{H} \arrow[r, "{\xi}"] \arrow[d, "\exp"] & \mathbb{C} \arrow[d, "\exp"] \\ \mathbb{D}\setminus\{0\} \arrow[r, "{\varphi}"] & \mathbb{C}\setminus\{0\} \end{tikzcd} \end{center} \vspace{0.5cm} If we denote $$I(\rho):= \frac{1}{2\pi}\iint\displaylimits_{0<\abs{z}< \rho}\frac{D(z)-1}{\abs{z}^2}dx dy,$$ then in the logarithmic coordinates the integral inequality rewrites as $$I^{\log}(\chi):=I(e^\chi)=\frac{1}{2\pi}\iint\displaylimits_{S_{\chi}}\left(D(e^z)-1\right)dx dy\leq I^{\log}(0)\leq\kappa,$$ where $z=x+iy$, $\chi\in[-\infty,0]$ and $S_\chi=[-\infty,\chi]\times[0,2\pi]$. Clearly, $S_{\chi}$ can be translated vertically without changing $I^{\log}(\chi)$. Let us agree that for a curve, the difference between the supremum and the infimum of the imaginary parts of points belonging to the curve is called the \emph{height} of the curve. Thus, in order to prove the proposition, we need to show that for every $\epsilon>0$ there exist $\kappa>0$ and $\chi>-\infty$ such that if $I^{\log}(0)<\kappa$ and $x_1<x_2<\chi$, then for every $y$, the height of $\xi\left([x_1,x_2]\times\{y\}\right)$ is smaller than $\epsilon$. First, given $\epsilon>0$, we show the existence of $\kappa$ and $\chi$ such that if $I^{\log}(0)<\kappa$ and $x<\chi$, then $\abs{\Im\xi(x+iy)-y}<\varepsilon$ for at least one $y=y(x)\in[0,2\pi]$. Consider a (very long) rectangle $[x_1,x]\times[0,2\pi]$ such that $(x-x_1)/2\pi\in\mathbb{N}$ and recall that from Lemma~\ref{lmm:conditional_Koebe_I}, we have the upper and the lower bounds on $\Re\xi(x+iy)$ if $x$ is close enough to $-\infty$.
At the same time, we can choose $x_1$ even closer to $-\infty$, in the region where we have precise estimates for the map $\xi$ due to the conformality of $\varphi$ at $0$; that is, by increasing $\abs{x_1}$ we can make $\abs{\Im\xi(x_1+iy)-y}$ arbitrarily small (but here $x_1$ depends also on a particular map $\varphi$). If $\Im\xi(x+iy)-y=0$ for some $y\in[0,2\pi]$, the claim is proven. Otherwise, $\Im\xi(x+iy)-y$ has the same sign for all $y\in[0,2\pi]$. Then we can literally repeat the computations in \cite[p.241-242]{LehtoVirtanen} for the skewed (by translating the side $\{x\}\times[0,2\pi]$ vertically by $x-x_1$) quadrilateral and obtain the upper bound on $\min_{y\in[0,2\pi]}\abs{\Im\xi(x+iy)-y}$ which depends only on $\kappa$ and tends to $0$ as $\kappa\to 0$ and $\chi\to-\infty$. Thus, more generally, we have shown that if $x<\chi$, there exists $y\in[0,2\pi]$ (depending on $x$) so that $\abs{\xi(x+iy)-x-iy}<2\epsilon$. Fix a number $d>0$ and consider a rectangle $R:=[x,x+d]\times[0,2\pi]$. Then, if $x$ is smaller than some $\chi$, due to Lemma~\ref{lmm:conditional_Koebe_I} and the $2\pi i$-periodicity of $\xi$, the area of $\xi(R)$ does not exceed $2\pi(d+\epsilon_1)$, where $\epsilon_1$ depends only on $\kappa$ and tends to $0$ as $\kappa\to 0$ and $\chi\to-\infty$. Let us subdivide $R$ into $n>0$ equal ``horizontal'' rectangles with horizontal sides equal to $d$ and vertical sides equal to $2\pi/n$. Let $R'$ be one of them, such that its $\xi$-image has area not exceeding $2\pi(d+\epsilon_1)/n$. Let $s_b=s_b(\xi(R'))$ be the distance between the $b$-sides of the quadrilateral $\xi(R')$.
Applying the left side of Rengel's inequality~(\ref{lmm:Rengel}) to $\xi(R')$, we obtain $$\mod \xi(R')\geq\frac{s_b^2\left(\xi(R')\right)}{m\left(\xi(R')\right)}\geq\frac{ns_b^2}{2\pi(d+\epsilon_1)}.$$ On the other hand, from Lemma~\ref{lmm:ineq_mod_rectangle}, we have $$\begin{aligned}\mod \xi(R')&\leq\frac{n^2}{(2\pi)^2}\iint\displaylimits_{R'} D_\xi(z) dxdy\\ &\leq\frac{n^2}{2\pi}\left(I^{\log}(x+d)-I^{\log}(x)\right)+\frac{nd}{2\pi}<\frac{n}{2\pi}\left(d+n\kappa\right).\end{aligned}$$ Combining the two inequalities above, we obtain an estimate on $s_b$: $$s_b< \sqrt{(d+\epsilon_1)(d+n\kappa)}\leq d+\max\{n\kappa,\epsilon_1\}.$$ Note that the distance between the images under $\xi$ of the lines $\Re z=x$ and $\Re z=x+d$ is bigger than $d-\epsilon_2$ for some $\epsilon_2>0$. That is, there exists a curve $\gamma$ joining the $b$-sides of $\xi(R')$ such that its height is smaller than some $\epsilon_3$ which can be made arbitrarily small by adjusting $\kappa$ and $\chi$. On the other hand, $\xi^{-1}(\gamma)$ is contained in $R'$ and hence has height not exceeding $2\pi/n$. Note that $\gamma$ can be parametrized by the interval $[0,d]$ in such a way that $\abs{\gamma(t)-\gamma(0)-t}<\epsilon_4$, where $\epsilon_4$ can be arbitrarily close to $0$. After translating $R$ vertically, we assume that $\xi^{-1}(\gamma)$ is in a small neighbourhood of the lower side of $R'$. Due to the $2\pi i$-periodicity, the analogous statement holds for the upper side of $R$. Now, let $s_a=s_a(\xi(R))$ be the distance between the $a$-sides of the quadrilateral $\xi(R)$. Applying the right side of Rengel's inequality~(\ref{lmm:Rengel}) to $\xi(R)$, we obtain $$\mod \xi(R'')=\frac{1}{\mod\xi(R)}\geq\frac{s_a^2\left(\xi(R)\right)}{m\left(\xi(R)\right)}\geq\frac{s_a^2}{2\pi(d+\epsilon_1)}$$ where $R''$ is the quadrilateral $R$ with the reversed orientation of sides and $\epsilon_1$ is small compared to $d$.
Again, from Lemma~\ref{lmm:ineq_mod_rectangle} applied to $\xi(R'')$, we obtain $$\begin{aligned}\mod \xi(R'')&\leq\frac{1}{d^2}\iint\displaylimits_R D_\xi(z) dxdy\\ &\leq\frac{2\pi}{d^2}\left(I^{\log}(x+d)-I^{\log}(x)\right)+\frac{2\pi}{d}<\frac{2\pi}{d}\left(1+\frac{\kappa}{d}\right).\end{aligned}$$ Combining the two inequalities, we obtain: $$s_a<2\pi \sqrt{(1+\epsilon_1/d)(1+\kappa/d)}\leq 2\pi+2\pi\max\{\kappa,\epsilon_1\}/d.$$ Exactly as for $s_b$, this means that inside of the quadrilateral $\xi(R)$ there is a curve $\gamma$, parametrized by the interval $[0,2\pi]$, such that $\abs{\gamma(t)-\gamma(0)-it}<\epsilon_5$, where $\epsilon_5$ can be arbitrarily close to $0$. Let us summarize what we have shown up to this moment. Given $d>0$ and $\epsilon>0$, there are $\kappa$ and $\chi$ such that the following statement holds. If $x<\chi$, there exists $y\in[0,2\pi]$ such that the rectangle $R$ of width $d$ and height $2\pi$ with the lower left vertex $x+iy$ can be $\epsilon$-approximated by a quadrilateral $Q$ (i.e., each side of $Q$ is in the $\epsilon$-neighbourhood of the corresponding side of $R$) such that $\xi(Q)$ is an $\epsilon$-approximation of a translated copy $R_1$ of $R$. Moreover, the sides of $\xi(Q)$ can be parametrized by the sides of $R_1$ via a function $\Pi:\partial R_1\to\partial\xi(Q)$, respecting the sides, in such a way that $\abs{\Pi(z)-z}<\epsilon$. Now, we want to improve the statement above by replacing the height $2\pi$ by an arbitrarily small height $h$ (subject to good enough $\kappa$ and $\chi$). Pick some $0<\tau<1$ and cut $Q$ by a segment $L$ of the horizontal straight line $\Im z=y+2\pi\tau$ into two quadrilaterals $Q_{1-\tau}$ and $Q_\tau$, containing the ``upper'' and the ``lower'' side of $Q$, respectively. Since $Q$ is an $\epsilon$-approximation of $R$, the modulus of $Q_\tau$ is close to $d/(2\pi\tau)$ (see e.g.\ \cite[p. 29, Satz \"uber die Modulkonvergenz]{LehtoVirtanen}).
Applying Lemma~\ref{lmm:ineq_mod_rectangle} to $Q_\tau$, we obtain $$\begin{aligned}\mod\xi(Q_\tau)&\leq\frac{1}{(2\pi\tau-\epsilon)^2}\iint\displaylimits_{Q_\tau} D_\xi(z) dxdy\\ &<\frac{1}{(2\pi\tau-\epsilon)^2}\left(2\pi\kappa+(2\pi\tau+\epsilon)(d+2\epsilon)\right)<(1+\epsilon_6)\mod Q_\tau,\end{aligned}$$ where $\epsilon_6\to 0$ as $\epsilon\to 0$. The same estimate holds for $Q_{1-\tau}$. Let $\zeta:\xi(Q)\to \tilde{R}$ be the canonical conformal map from $\xi(Q)$ to the rectangle $\tilde{R}$ having height $2\pi$. Then $\mod \zeta\circ\xi(Q_\tau)$ and $\mod \zeta\circ\xi(Q_{1-\tau})$ are smaller than $d/(2\pi\tau)+\epsilon_7$ and $d/(2\pi(1-\tau))+\epsilon_7$, respectively, and $\mod\zeta\circ\xi(Q)$ is close to $d/(2\pi)$. It follows from Lemma~\ref{lmm:splitted_rectangle} that $\zeta\circ\xi(L)$ is contained in a horizontal strip of height at most $\epsilon_8>0$ which can be made arbitrarily small by adjustments of $\kappa$ and $\chi$. On the other hand, $\zeta\to\id$ uniformly as $\epsilon\to 0$ (see, e.g., \cite[Theorem 2.11]{Pommerenke_book}). Therefore, $\xi(L)$ is contained in a strip of a small height tending to $0$ as $\epsilon\to 0$ and containing the straight line $\Im z=y+2\pi\tau$. If we fix some $m>0$, this procedure can be performed for $\tau=1/m,2/m,\dots,(m-1)/m$ if $\kappa$ and $\chi$ are good enough. That is, up to a small error, $\xi$ translates each $R$ vertically together with the subdivision into $m$ smaller rectangles. Recall that we have shown that on every straight vertical line there is a point $p$ (hence also $2\pi i$ translates of $p$) such that $\abs{\xi(p)-p}$ is small. It belongs to at least one of the smaller rectangles in the subdivision. Hence $\abs{\Im\xi(z)-\Im z}<2\pi/m+\epsilon_9$. By adjusting $\kappa$ and $\chi$, we can make $m$ arbitrarily big and $\epsilon_9$ arbitrarily small. This finishes the proof of the proposition.
\end{proof} \section{Shifts and fat spiders} \label{sec:shifts_and_spiders} In this section we introduce all remaining tools needed to prove the main result. In Subsection~\ref{subsec:shifts_properties} we define shifts and discuss their properties. $(K,\delta)$-regularity of tracts and fat spiders are defined in Subsection~\ref{subsec:tracts_regularity} and Subsection~\ref{subsec:spiders}, respectively. \subsection{Shifts and their properties} \label{subsec:shifts_properties} To simplify the notation, we make use of the following language. \begin{defn}[Shift] \label{defn:shift} Let $U\subset\mathbb{C}$ have a non-empty interior $U^\circ$, let a point $x$ either belong to $ U^{\circ}$ or be a puncture (hence belonging to $\partial U$), and let $[\gamma]$ be a (fixed endpoints) homotopy class of paths $\gamma:[0,1]\to U^\circ\cup\{x\}$ such that $\gamma(0)=x$. We say that a homeomorphism $\psi:\mathbb{C}\to\mathbb{C}$ is a \emph{shift} (from $x$ to $\gamma(1)$) along $[\gamma]$ in $U$ if there exists an isotopy $\psi_t:\mathbb{C}\to\mathbb{C}, t\in[0,1]$ such that $\psi_t=\id$ on $\mathbb{C}\setminus(U^\circ\cup\{x\})$, $\psi_0=\id$, $\psi_1=\psi$ and the path $t\mapsto\psi_t(x)$ belongs to $[\gamma]$. Additionally, introduce the following notation. \begin{enumerate} \item Denote by $K_U(x,[\gamma])$ the extremal maximal dilatation in the \tei\ isotopy class of the shift along $[\gamma]$. We say that $\psi$ is a \emph{$K$-shift} along $[\gamma]$ if $K_U(x,[\gamma])\leq K$. \item For $y\in U^{\circ}\cup\{x\}$, $$K_U(x,y):=\inf_{\{\gamma:\gamma(1)=y\}} K_U(x,[\gamma])$$ (hence for a fixed $[\gamma]$, $K_U(x,\gamma(1))\leq K_U(x,[\gamma])$).
\item For sets $Y_1\subset U^\circ$, $Y_2\subset\mathbb{C}$, $$K_U(Y_1\gg Y_2):=\sup_{x\in Y_1}\inf_{y\in Y_2\cap U^{\circ}} K_U(x,y).$$ If for some $x\in Y_1$ its path-connected component of $U^{\circ}$ does not contain a point of $Y_2$, we define $$K_U(Y_1\gg Y_2):=\infty.$$ \end{enumerate} \end{defn} Note that, unlike in the definition of $K_U(x,[\gamma])$, there is no initially chosen homotopy class along which $x$ is moved in $(2)$--$(3)$. In other words, $K_U(x,[\gamma])$ is based on the \tei\ distance in the \tei\ space of $U\setminus \{x\}$, while in $(2)$--$(3)$ the \tei\ distance on the moduli space of $U\setminus \{x\}$ is involved. We use the symbol ``$\gg$'' to emphasize that $Y_1,Y_2$ play asymmetric roles and to avoid confusion between them. We prove a few statements about properties of shifts which we are going to use actively in the rest of the article. \begin{prp} \label{prp:K_shifts_imply_distance_bounds} Fix a real number $0<q<1$. Let $X\subset\mathbb{D}_q$ be a set containing at least $3$ points, let $x_1,x_2\in X$, $x_1\neq x_2$, be isolated points of $X$ and let $\psi_i$, $i=1,2$, be a $K$-shift of $x_i$ in $\mathbb{C}\setminus X$ such that $\abs{\psi_i(x_i)}>1$. Then $\psi_1$ is isotopic relative to $X$ to a $K'$-shift $\psi'_1$ in $\mathbb{D}_{2\abs{\psi_1(x_1)}}\setminus X$ with $K'=O(K^{\beta})$ for a universal constant $\beta>0$. If, additionally, for $p>1$ and a set $Y\subset\mathbb{D}_p^\infty\setminus\{\infty\}$, $\psi_1$ is a $K$-shift along $[\gamma]$ in $\mathbb{C}\setminus(X\cup Y)$, then for the ``semi-projected'' path $\gamma_\pi:[0,1]\to\mathbb{D}\setminus X$ defined by the formula $\gamma_\pi(t):=\min\{\abs{\gamma(t)},1\}e^{i\arg\gamma(t)}$, we have $$K_{\mathbb{C}\setminus(X\cup Y)}(\gamma(0),[\gamma_\pi])=O(K^{\beta+4}).$$ The bounds $O(.)$ depend only on $q$ and $p$. \end{prp} The maps $\psi_1$ and $\psi_2$ play symmetric roles in the proposition, hence all statements are valid for $\psi_2$ as well.
\begin{proof} Without loss of generality we might assume that $0\in X$ and $x_i\neq 0, i=1,2$. Otherwise one can apply a \qc\ change of coordinates equal to the identity on $\mathbb{C}\setminus\mathbb{D}$ in such a way that this will increase $K$ and $K'$ at most by a multiplicative constant (depending only on $q$). Let us first show that $\abs{\psi_i(x_i)}=e^{O(K^2)}$. It is enough to do this for $X=\{0,x_1,x_2\}$. Assume that $\abs{x_1}\leq \abs{x_2}$. By Lemma~\ref{lmm:max_over_min} applied to the circle $\partial\mathbb{D}_{\abs{x_2}}$ and the map $\psi_1$ we have $$\abs{\psi_1(x_1)}<\max_{z\in\partial\mathbb{D}_{\abs{x_2}}}\abs{\psi_1(z)}\leq\abs{x_2}e^{\pi K}<e^{\pi K}.$$ To obtain the upper bound for $\abs{\psi_2(x_2)}$, consider first the round annulus $\mathbb{A}_{\abs{x_1},\abs{x_2}}$. Its image under $\psi_1$ is an annulus separating $0$ and $\psi_1(x_1)$ from $x_2$ and $\infty$. Since $\abs{\psi_1(x_1)}>1$, its modulus is bounded from above by a constant depending only on $q$. This implies that $\abs{x_2}=\abs{x_1}e^{O(K)}$. Next, consider the annulus $\mathbb{A}_{1,\abs{\psi_2(x_2)}}$, having modulus $\log\abs{\psi_2(x_2)}$. Its image under the map $\psi_2^{-1}$ separates $0$ and $x_1$ from $x_2$ and $\infty$. Therefore, by \tei's theorem about the separating annulus \cite[Section II.1.3]{LehtoVirtanen}, we have $$\frac{\log\abs{\psi_2(x_2)}}{K}\leq\mod\psi_2^{-1}(\mathbb{A}_{1,\abs{\psi_2(x_2)}})\leq 2\mu\left(\sqrt{\frac{1}{1+\frac{\abs{x_2}}{\abs{x_1}}}}\right)=2\mu\left(\sqrt{\frac{1}{1+e^{O(K)}}}\right),$$ where $\mu:(0,1)\to(0,+\infty)$ is a decreasing function. From \cite[Chapter II, Equation (2.14)]{LehtoVirtanen}, $$\mu\left(\sqrt{\frac{1}{1+e^{O(K)}}}\right)=O\left(\log\left(4\sqrt{1+e^{O(K)}}\right)\right)=O(K),$$ hence $\abs{\psi_2(x_2)}=e^{O(K^2)}$. Note also that we have shown along the way that $$\abs{x_1}=\abs{x_2}e^{-O(K)}\geq\abs{\psi_1(x_1)}e^{-O(K)}\geq e^{-O(K)}.$$ We will need this bound a bit later. We are finally ready to prove the estimates for $\psi'_1$.
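(As a numerical aside, the growth rate of $\mu$ used in the last displayed estimate can be checked directly. The sketch below is an illustration only and is not part of the proof; it assumes the classical identity $\mu(r)=\frac{\pi}{2}K(\sqrt{1-r^2})/K(r)$ for the Gr\"otzsch modulus, where $K$ denotes the complete elliptic integral of the first kind, computed here via the arithmetic--geometric mean.)

```python
import math

def agm(a, b):
    # arithmetic-geometric mean of a, b > 0
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellip_K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 * AGM(1, k')),
    # where k' = sqrt(1 - k^2) is the complementary modulus
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def mu(r):
    # Groetzsch modulus: mu(r) = (pi/2) * K(sqrt(1 - r^2)) / K(r)
    return (math.pi / 2) * ellip_K(math.sqrt(1.0 - r * r)) / ellip_K(r)

for r in (1e-1, 1e-2, 1e-3):
    print(r, mu(r), math.log(4 / r))  # the two columns agree as r -> 0
```

The check confirms that $\mu(r)-\log(4/r)\to 0$ as $r\to 0$, which is exactly the estimate behind $\mu\left(\sqrt{1/(1+e^{O(K)})}\right)=O(K)$ above.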
To simplify the notation, let $x:=x_1$, $\psi:=\psi_1$, $\psi':=\psi'_1$ (we will not use the inequality $\abs{x_1}\leq\abs{x_2}$) and prove the proposition for $\psi$. Applying Lemma~\ref{lmm:id_near_puncture} to the disk $\mathbb{D}_{\abs{\psi(x)}}^\infty$ around $\infty$ and the map $\psi^{-1}$, we obtain a map $\varphi$ isotopic to $\psi$ relative to $X$ and equal to the identity on $\mathbb{D}_r^\infty$ where $r=2\max\{\abs{\psi(x)},e^{\pi K}\}$ (as earlier, the bound $e^{\pi K}$ is obtained from Lemma~\ref{lmm:max_over_min} applied to the circle $\partial\mathbb{D}_{\abs{\psi(x)}}$). The maximal dilatation of $\varphi$ is equal to $$O(K^4)\left(1+\left|\log e^{O(K^2)}\right|\right)=O(K^6).$$ If $\abs{\psi(x)}\geq e^{\pi K}$, then $\varphi=\id$ on $\mathbb{D}_{2\abs{\psi(x)}}^\infty$. Otherwise, after applying Lemma~\ref{lmm:conformal_neighbourhood_expand} to the disk $\mathbb{D}_{\abs{\psi(x)}}^\infty$, we might assume that $\varphi=\id$ on $\mathbb{D}_{2\abs{\psi(x)}}^\infty$ and its maximal dilatation is equal to $$O(K^6)(\log e^{O(K)})^2=O(K^8).$$ The map $\varphi$ cannot yet be taken as $\psi'$ because their isotopy classes might differ by an additional $n$-twist of the annulus $\mathbb{A}_{\abs{\psi(x)},2\abs{\psi(x)}}$. The multiplicity $n$ of this twist can be estimated by applying Lemma~\ref{lmm:twist_in_3_punctured_sphere} to the map $\varphi\circ\psi^{-1}$ having maximal dilatation $O(K^9)$. For this, rescale the coordinates to make $x_2=1$. Then the bound on $\abs{\psi(x)}$ stays the same: $$\abs{\psi(x)}=\abs{\psi_1(x_1)}/\abs{x_2}=e^{O(K^2)}e^{O(K)}=e^{O(K^2)}.$$ Thus, $$n<\frac{\log\abs{\psi(x)}}{2\pi} K^{9/C}=O(K^{2+9/C}).$$ After post-composing $\varphi$ with a \qc\ $n$-twist on the annulus $\mathbb{A}_{\abs{\psi(x)},2\abs{\psi(x)}}$, we obtain the desired map $\psi'$ having dilatation $K'=O(K^8)(1+O(n^2))=O(K^\beta)$.
If $\psi$ is a $K$-shift along a path $\gamma$ in $\mathbb{C}\setminus(X\cup Y)$, then the (properly adjusted) conjugacy from the proof of Lemma~\ref{lmm:conformal_neighbourhood_expand} (applied to $\mathbb{D}_q^\infty$ and the map $\psi'$) will replace $[\gamma]$ by $[\gamma_\pi]$ and yield a shift along $[\gamma_\pi]$ in $\mathbb{D}$ having maximal dilatation not exceeding $$O(K^{\beta})\log^2 e^{O(K^2)}=O(K^{\beta+4}).$$ \end{proof} \begin{remark} \label{remark:semi-projected} The homotopy type $[\gamma_\pi]$ of the ``semi-projected'' path $\gamma_\pi$ defined in Proposition~\ref{prp:K_shifts_imply_distance_bounds} is independent of the choice of the representative in $[\gamma]$. Moreover, the notion is useful in the following context. Assume that $\varphi:\mathbb{C}\to\mathbb{C}$ is a homeomorphism equal to the identity on $\mathbb{D}_1^\infty$. Then $\left(\varphi(\gamma)\right)_\pi=\varphi(\gamma_\pi)$ and the maximal dilatation of the corresponding shift along $[\varphi(\gamma_\pi)]$ might be estimated using solely the maximal dilatation of the shift along $[\varphi(\gamma)]$ (and without using any information about $\varphi$). Moreover, if $\varphi$ is an ``almost identity'' on $\mathbb{D}_q^\infty$ in the sense that for every $z\in\mathbb{D}_q^\infty$, $d_{\cyl}(\varphi(z),z)<\delta$ for some small enough $\delta>0$, then $[\varphi(\gamma_\pi)]$ differs from $[(\varphi(\gamma))_\pi]$ solely by concatenation with a short straight interval. Therefore, the corresponding dilatations differ only by a constant depending only on $\delta$: $$K_{\mathbb{C}\setminus\varphi(X\cup Y)}\left(\varphi\circ\gamma(0),[(\varphi\circ\gamma)_\pi]\right)=O\left(K_{\mathbb{C}\setminus\varphi(X\cup Y)}(\varphi\circ\gamma(0),\varphi_*[\gamma_\pi])\right),$$ where $O(.)$ depends on $p,q$ and $\delta$. \end{remark} The next two lemmas provide bounds on how $K_U$ changes after an application of a \qc\ map and after a lift under a branched covering map.
\begin{lmm}[Quasiconformal change of coordinates] \label{lmm:qc_change_of_coordinates} Let $U, W\subset\mathbb{C}$ be open sets, $\gamma:[0,1]\to U$ be a path and $\varphi:U\to W$ be a $K$-\qc\ map. Then $$K_W\left(\varphi\circ\gamma(0),[\varphi\circ\gamma]\right)\leq K^2 K_U\left(\gamma(0),[\gamma]\right).$$ \end{lmm} \begin{proof} For a shift $\psi$ along $[\gamma]$ in $U$, we can consider the induced shift $\psi'$ along $[\varphi\circ\gamma]$ in $W$. It is given by the formula $\psi':=\varphi\circ\psi\circ\varphi^{-1}$. Since maximal dilatations are submultiplicative under composition, $K_{\psi'}\leq K_\varphi K_\psi K_{\varphi^{-1}}\leq K^2 K_\psi$, and the estimate on $K_W$ follows. \end{proof} \begin{lmm}[Lifting properties] \label{lmm:K_U_for_lifts} Let $U$ be a Riemann domain, $f:\tilde{U}\to U$ be a holomorphic branched covering of degree $d<\infty$ with the only branching value $x\in U$, $\gamma:[0,1]\to U$ be a path starting at $x$ and $\tilde{\gamma}:[0,1]\to \tilde{U}$ be one of its (homeomorphic) lifts under $f$ starting at $\tilde{x}\in f^{-1}(x)$. Then $K_{\tilde{U}}(\tilde{x},[\tilde{\gamma}])\leq d K_U(x,[\gamma])$. \end{lmm} \begin{proof} After a conformal change of coordinates we may assume that $\tilde{U}=U=\mathbb{D}$, $\tilde{x}=x=0$ and $f=z^d$ (this will not change the values of $K_U$ and $K_{\tilde{U}}$). Then for $t\in[0,1]$, we have $\abs{\tilde{\gamma}(t)}=\abs{\gamma(t)}^{1/d}$. It is enough to show for every $t\in[0,1]$ that if $\psi$ is a $K$-\qc\ shift mapping $0$ to $\gamma(t)$, then there is a $dK$-\qc\ shift $\tilde{\psi}$ mapping $0$ to $\tilde{\gamma}(t)$. Without loss of generality we may assume that $\arg\gamma(t)=\arg\tilde{\gamma}(t)$. Then the map $\tilde{\psi}=\abs{\psi}^{1/d}e^{i\arg\psi}$ (with a continuous choice of the argument) is the required shift. \end{proof} Lemma~\ref{lmm:K_U_for_lifts} is suited for situations where we lift a path containing a critical point. However, it is applicable only when $U$ is a Riemann domain.
The last definition in this subsection allows us to use Lemma~\ref{lmm:K_U_for_lifts} in greater generality: we split the path homotopy type into a concatenation of two, so that the shift along the first one happens inside of some Riemann domain. \begin{defn}[$K$-decomposability] \label{defn:K_decomposability} Let $X\subset\mathbb{C}$ be a closed set and $x\in X$ be an isolated point. For a path $\gamma:[0,1]\to\mathbb{C}\setminus X\cup\{x\}$ such that $\gamma(0)=x$, we say that $\gamma$ is \emph{$K$-decomposable} for $X$ if there exists $\tau\in(0,1]$ and a Riemann domain $D\subset\mathbb{C}\setminus X\cup\{x\}$ containing $\gamma([0,\tau))$ such that $$K_D\left(x,[\gamma|_{(0,\tau]}]\right)K_{\mathbb{C}\setminus X}\left(\gamma(\tau),[\gamma|_{(\tau,1]}]\right)<K.$$ The homotopy class $[\gamma]$ is called \emph{$K$-decomposable} for $X$ if it contains a $K$-decomposable path. \end{defn} It is clear that $K$-decomposability yields a shift (along $[\gamma]$) of a special form: first, the puncture is shifted relative to the boundary of a Riemann domain; afterwards it is shifted relative to $X$ (note that $x$ is also included as a puncture for the second shift). \subsection{$(K,\delta)$-regularity of tracts} \label{subsec:tracts_regularity} This subsection discusses the notion of $(K,\delta)$-regularity, which serves in Theorem~\ref{thm:invariant_structure} as a glue between two different ways of storing the information about the marked points: the fact that a homeomorphism is ``almost'' the identity near $\infty$ is encoded in terms of the cylindrical metric, while the homotopy and moduli type information for marked points near the origin is encoded using ``fat spiders'' (see the next subsection).
\begin{defn}[$(K,\delta)$-regularity] \label{defn:K_regularity} For an entire function $f\in\mathcal{B}$, consider a triple $(\hat{T}\supset T, X)$ consisting of tracts $\hat{T}\supset T$ such that $f(\hat{T})=\mathbb{C}\setminus\overline{\mathbb{D}}_{\hat{\rho}}$, $f(T)=\mathbb{C}\setminus\overline{\mathbb{D}}_\rho$ for some $0<\hat{\rho}<\rho$, and of a set $X\subset \mathbb{C}$. For $K>1,\delta>0$, we say that the triple $(\hat{T}\supset T, X)$ is \emph{$(K,\delta)$-regular} if the following conditions are satisfied. \begin{enumerate} \item $$K_{\hat{T}\cap\mathbb{D}_{\rho e^{\delta}}\setminus X}\left(\partial T\cap \mathbb{D}_\rho\gg \partial \mathbb{D}_\rho\right)\leq K,$$ \item for every $x\in X\cap T\cap\mathbb{D}_{\rho}$, there exists a Riemann domain $U_x\subset\hat{T}\cap\mathbb{D}_{\rho e^{\delta}}\setminus X\cup\{x\}$ such that $$K_{U_x}(\{x\}\gg \partial \mathbb{D}_\rho)\leq K.$$ \end{enumerate} The triple $(\hat{\rho},\rho, X)$ is called \emph{$(K,\delta)$-regular} if $(1)-(2)$ hold for every pair of tracts $\hat{T}\supset T$. \end{defn} Condition $(1)$ of Definition~\ref{defn:K_regularity} means that we can shift any point of the boundary of $T$ which lies in $\mathbb{D}_\rho$ to the circle $\partial\mathbb{D}_\rho$ with dilatation at most $K$ and relative to $X$, $\partial\hat{T}$ and $\partial\mathbb{D}_{\rho e^{\delta}}$. Condition $(2)$ means the same for points of the set $X$, except that the corresponding point $x\in X$ has to be ``unmarked'' and the shift ``lives'' on a Riemann domain without other marked points. \begin{remark} One could replace condition $(2)$ by a simpler version: $\forall x\in X\cap T$, $$K_{\hat{T}\cap\mathbb{D}_{\rho e^{\delta}}\setminus X}(\{x\}\gg \partial \mathbb{D}_\rho)\leq K.$$ However, this would make it more difficult to treat singular portraits in which a singular value is not the first point on a marked orbit. \end{remark} The next proposition holds for all functions having finite (and some ``modest'' infinite) order.
\begin{prp}[$\log$-regularity for finite order] \label{prp:log_regularity_finite_order} Fix constants $\varepsilon>1$ and $m\in\mathbb{N}$. Let $f\in\mathcal{B}$ satisfy the inequality $$\max_{z\in\partial\mathbb{D}_r}\abs{f(z)}<e^{e^{\log^d r}}$$ for some constant $d>0$ and all $r>0$ big enough. Consider two radii $\hat{\rho}<\rho$, two tracts $\hat{T}\supset T$ such that $f(\hat{T})=\mathbb{C}\setminus\overline{\mathbb{D}}_{\hat{\rho}}, f(T)=\mathbb{C}\setminus\overline{\mathbb{D}}_{\rho}$ and a finite set $X=\{x_1,x_2,\dots,x_m\}\subset T\cap \mathbb{D}_{\rho}$. Assume further that $\log(\rho/\hat{\rho})>\varepsilon$ and that the set $F\circ\log|_T X$ (where $F$ is the logarithmic transform of $f$) satisfies \begin{enumerate} \item $F\circ\log|_T X\subset\mathbb{H}_{\log\rho +\varepsilon}$, \item all points in $F\circ\log|_T X$ have Euclidean distance at least $\varepsilon$ from each other. \end{enumerate} If $\hat{\rho}$ is big enough, then $(\hat{T}\supset T, X)$ is $(K,\varepsilon)$-regular with $K=O(\log^{2d(m+1)}\rho)$ and $O(.)$ depending only on $\varepsilon$ and $m$. \end{prp} \begin{proof} After choosing $\hat{\rho}$ bigger we may assume that $\max_{z\in\partial\mathbb{D}_\rho}\abs{f(z)}<e^{e^{\log^d\rho}}$. In the logarithmic coordinates this implies $\Re F(z)<e^{\log^d\rho}$ on the vertical line $\Re z=\log\rho$. Instead of working with $\hat{T},T$ and the set $X$, we switch to their conformal images under $F\circ\log|_{\hat{T}}$. First, we focus on the statement for points in $\partial T$. Let us show that there exist $K$-\qc\ shifts $\psi=\psi_w$ of every point $w$ of the line $\Re z=\log\rho$ to the line $\Re z=e^{\log^d\rho}$ in $\mathbb{H}_{\log\hat{\rho}}\setminus F\circ\log|_T X$ such that $K=O(\log^{2d(m+1)}\rho)$. This can be done easily by at most $m+2$ applications of Corollary~\ref{cor:moving_inside_round_disk_bounds} together with intermediate ``corrections''.
More precisely, let $Y=\{y_1,\dots,y_m\}$ be the set $F\circ\log|_T X$ ordered by increasing real parts. Assume first that $\abs{\Re (y_{j+1}-y_j)}\geq 4$ and let $v$ be a point on the vertical line $\Re z =\Re y_{j}+1$. Consider a round disk $D_j$ with center at $\Re(y_{j}+y_{j+1})/2+i\Im v$ and radius $r_j=\Re(y_{j+1}-y_{j})/2$. The closed disk $\overline{D'_j}$ with the same center and radius $r_j-1$ intersects the vertical line $\Re z =\Re y_{j}+1$ at $v$ and the line $\Re z=\Re y_{j+1}-1$ at some other point $v_1$. Due to Corollary~\ref{cor:moving_inside_round_disk_bounds} we can map $v$ to $v_1$ via a \qc\ map equal to the identity outside of $D_j$ and having maximal dilatation $$O\left(\log^2\frac{1}{r_j}\right)=O\left(\log^2e^{\log^d\rho}\right)=O(\log^{2d}\rho).$$ Thus, any point of the line $\Re z =\Re y_{j}+1$ can be mapped to the line $\Re z=\Re y_{j+1}-1$ in such a way. If there is a cluster $\{y_i,y_{i+1},\dots,y_{i+k}\}$ with consecutive differences of real parts less than $4$, we can move through such a cluster with some bounded dilatation depending only on $\varepsilon$ and $m$, which are initially fixed. In any case, there are at most $m$ such clusters. Composing all consecutive maps ``between'' and ``through'' the clusters, we conclude that the maximal dilatation of the resulting \qc\ map is at most $O(\log^{2d(m+1)}\rho)$. The real part of points in the set $F\circ\log|_T (T\cap \mathbb{D}_{\rho})$ is generally smaller than $e^{\log^d\rho}$. In order to obtain shifts equal to the identity outside of $F\circ\log|_T (T\cap \mathbb{D}_{e^\varepsilon\rho})$, we simply need to ``truncate'' the construction above and recall that $F$ is expanding on tracts if $\hat{\rho}$ is big enough. The proof for points $x\in X$ is identical, except that one replaces $m$ by $m-1$. The additional property that the corresponding shift is equal to the identity outside of some Riemann domain $U_x$ is evident from the construction.
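Summing up the composition count behind the final bound: there are at most $m+1$ moves ``between'' the clusters, each of dilatation $O(\log^{2d}\rho)$, interleaved with at most $m$ moves ``through'' the clusters, each of dilatation $O(1)$ (with the constant depending only on $\varepsilon$ and $m$); hence
$$K\leq\left(O(\log^{2d}\rho)\right)^{m+1}\cdot O(1)=O\left(\log^{2d(m+1)}\rho\right).$$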
\end{proof} \subsection{Fat spiders} \label{subsec:spiders} In this subsection we introduce a special description for a certain type of points in (finite-dimensional) \tei\ spaces. \begin{defn}[Fat spider $S(A,\mathcal{L}, Y)$] \label{defn:fat_spider} Let $A\subset\mathbb{C}$ be an open annulus of finite modulus with $B\ni\infty$ (``body'') and $G$ (``ground'') being the unbounded and bounded components of $\hat{\mathbb{C}}\setminus A$, respectively, and let $Y\ni\infty$ be a subset of $B\cup\{\infty\}$. Also, let $L_{i}:[0,1]\to\mathbb{C}, i=\overline{1,n}$ be continuous paths such that all $L_i(0)$'s are distinct and for every $i$: \begin{enumerate} \item $L_i(0)\in G$, \item the range of $L_i|_{(0,1)}$ is contained in $\mathbb{C}\setminus\left(\overline{Y}\cup\{L_1(0),\dots,L_{i-1}(0),L_{i+1}(0),\dots,L_n(0)\}\right)$, \item $L_i(1)\in B\setminus \overline{Y}$. \end{enumerate} Then we say that the triple $(A,\{[L_{i}]\}_{i=1}^n,Y)$ is a \emph{fat spider} with \emph{separating annulus} $A$, \emph{removed set} $Y$ and \emph{legs} $\{[L_i]\}_{i=1}^n$ (the homotopy type of the leg $[L_i]$ is defined in $\mathbb{C}\setminus\left(\overline{Y}\cup\{L_1(0),\dots,L_{i-1}(0),L_{i+1}(0),\dots,L_n(0)\}\right)$). \end{defn} \begin{figure}[h] \includegraphics[width=\textwidth-15em]{Fat_spider.pdf} \caption{Example of a fat spider on 3 legs. The boundary of a separating annulus is depicted in blue.} \label{pic:fat_spider} \end{figure} Introduce also the notation $\mathcal{L}:=\{[L_1],\dots,[L_n]\}$, $\mathcal{L}(0):=\{[L_1](0),\dots,[L_n](0)\}$ and $\mathcal{L}(1):=\{[L_1](1),\dots,[L_n](1)\}$.
By $K^S_i$ we denote the maximal dilatation of the shifts in $\mathbb{C}\setminus\left(\overline{Y}\cup\mathcal{L}(0)\right)$ mapping $L_i(0)$ along $[L_i]$, i.e., $$K^S_i:=K_{\mathbb{C}\setminus\left(\overline{Y}\cup\mathcal{L}(0)\right)}(L_i(0),[L_i]).$$ \begin{defn}[Fat spider map] \label{defn:fat_spider_map} Let $S^1(A,\mathcal{L}^1,\{\infty\}),S^2(A,\mathcal{L}^2,\{\infty\})$ be two fat spiders on $n$ legs with the same separating annulus $A$ (and body $B$). We say that a homeomorphism $\varphi:\mathbb{C}\to\mathbb{C}$ is a \emph{fat spider map} $\varphi:S^1\to S^2$ if $\varphi|_B=\id$ and for every $i$, $\varphi_*[L_i^1]=[L_i^2]$. \end{defn} \begin{figure}[h] \includegraphics[width=\textwidth+4em]{Spider_map.pdf} \caption{Fat spider map equivalent to a counterclockwise twist by $\pi/2$. Note that the map respects the homotopy type of legs.} \label{pic:fat_spider_map} \end{figure} The fat-spider language allows a compact formulation of the following proposition, which will play one of the key roles in the construction of the invariant subset of the \tei\ space later. \begin{prp}[\tei\ metric for fat spider maps] \label{prp:teich_metric_fat_spider_map} Fix a constant $M>0$ and a natural number $n>2$. Let $S^1(A,\mathcal{L}^1,\{\infty\}),S^2(A,\mathcal{L}^2,\{\infty\})$ be two fat spiders on $n$ legs such that $A$ is a round annulus around the origin, $\mod A\geq M$ and $K_i^j<K$ for all pairs $i,j$. If $\varphi:S^1\to S^2$ is a fat spider map, then $\varphi$ is isotopic relative to $\mathcal{L}^1(0)\cup B$ to a $K'$-\qc\ map $\varphi':\mathbb{C}\to\mathbb{C}$ with $$K'<\left(C K\right)^{\nu n},$$ where $C>0$ depends only on $M$ and $\nu>0$ is a universal constant. \end{prp} \begin{remark} The cases $n=1,2$ are rather special. For $n=2$ the proposition is false, but luckily this will not cause any difficulties later. For $n=1$ one can immediately see that $K'=O(1)$.
\end{remark} \begin{proof} Without loss of generality we may assume that the outer radius of $A$ is equal to $1$. Otherwise apply a linear change of coordinates. If $\psi_i^j, j=1,2$ is a $K$-shift of $L_i^j(0)$ along $[L_i^j]$ in $\mathbb{C}\setminus\mathcal{L}^j(0)$, then due to Proposition~\ref{prp:K_shifts_imply_distance_bounds}, there exists a $K_1$-shift $\chi_i^j$ of $L_i^j(0)$ along $[L_i^j]$ in $\mathbb{D}_{2\abs{L_i^j(1)}}\setminus\mathcal{L}^j(0)$ such that $\chi_i^j$ is isotopic relative to $\mathcal{L}^j(0)$ to $\psi_i^j$ and $\chi_i^j=\id$ on $\mathbb{C}\setminus\mathbb{D}_{2\abs{L_i^j(1)}}$, where $K_1=O(K^\beta)$ for a universal constant $\beta>0$. We use an inductive argument. Consider the homeomorphism $$\chi_n^2\circ\varphi\circ(\chi_n^1)^{-1}.$$ Note that it is isotopic relative to $\left(\mathbb{C}\setminus\mathbb{D}_{2\abs{L_n^1(1)}}\right)\cup\left(\mathcal{L}(0)\setminus\{L_n(0)\}\right)\cup\{L_n(1)\}$ to a homeomorphism $\varphi_1:\mathbb{C}\to\mathbb{C}$ equal to the identity on $B$. This implies that the maximal dilatation $K_\varphi$ induced by $\varphi$ is bounded by the maximal dilatation $K_{\varphi_1}$ induced by $\varphi_1$ times the maximal dilatations of $\chi_n^j$, i.e., $K_\varphi\leq (K_1)^2 K_{\varphi_1}$. At the same time, $\varphi_1$ is isotopic to $\varphi$ relative to $B\cup\mathcal{L}(0)\setminus\{L_n(0)\}$. We repeat this procedure for $\varphi_1$ (but without the leg $[L_n]$) and proceed inductively. After noticing that if $n=1$, the corresponding induced maximal dilatation is equal to $O(1)$, we see that $K_\varphi=O\left((K_1)^{2(n-1)}\right)$.
Since all isotopies are relative to $\mathbb{D}^\infty_{2\max_{1\leq i\leq n}\abs{L_i(1)}}$ and, by the proof of Proposition~\ref{prp:K_shifts_imply_distance_bounds}, $\abs{L_i(1)}=e^{O(K^2)}$, an application of Lemma~\ref{lmm:conformal_neighbourhood_expand} provides the desired map $\varphi'$ with maximal dilatation $$K'<O(K_1)^{2(n-1)}O(K^4)<(C K)^{2(n-1)\beta+4}=(C K)^{\nu n}.$$ \end{proof} \section{Invariant structure} \label{sec:invariant_structure} In this section, after a few preparatory definitions, we finally state and prove the main result: Theorem~\ref{thm:invariant_structure}. Let $f:\mathbb{C}\to\mathbb{C}$ be a quasiregular function and let $\{a_{ij}\}$ with $i=1,...,m$ and $j=1,2,...$ be $m$ marked orbits. \begin{defn}[$N(\rho), N_i(\rho)$] By $N(\rho)$ we denote the number of pairs $i,k$ such that $a_{ik}\in\mathbb{D}_\rho$ (in particular, $N(\rho)$ can be equal to $\infty$). Analogously, by $N_i(\rho)$ we denote the number of $k$'s such that $a_{ik}\in\mathbb{D}_\rho$. \end{defn} Let us return to the context of Thurston's iteration, i.e., consider a captured quasiregular map $f=\lambda\circ f_0$ as constructed in Subsection~\ref{subsec:iteration_setup} with $m$ marked orbits containing all singular values. We will need the following rather lengthy definition. \begin{defn}[Separating structure] \label{defn:separating_structure} Let $f_0$ be a transcendental entire function of finite type and $U\supset\SV(f_0)$ be a bounded domain.
By a \emph{separating structure} $\mathbb{S}[\rho, q, K_0, K_1, \varepsilon]$ for $f_0$ and $U$, where $\rho>0$, $0<q<1/2$, $K_0,K_1\geq 1$, $0<\varepsilon<1$, we understand the following list of interdependent objects and conditions on them: \begin{enumerate} \item a $K_0$-\qc\ map $\lambda$ so that $\lambda|_{\mathbb{C}\setminus\mathbb{D}_{q\rho}}=\id$; \item $m$ marked orbits $\mathcal{O}:=\{a_{ij}\}$ of the quasiregular map $f=\lambda\circ f_0$ such that \begin{enumerate} \item $\SV(f)\subset\mathcal{O}\cap\mathbb{D}_{q\rho}$, \item $\mathcal{O}\cap U=\SV(f_0)$, \item $\mathcal{O}\cap \mathbb{A}_{q\rho, \rho e^{\varepsilon}}=\emptyset$, \item $\mathcal{O}\setminus\mathbb{D}_{\rho}$ is forward invariant, \item $N(\rho)<\infty$; \end{enumerate} \item the triple $(\rho/2,\rho,\mathcal{O})$ is $(K_1,\varepsilon/2)$-regular; \item if $N_i(\rho), N_k(\rho)>0, i\neq k$ and $a_{iN_i},a_{kN_k}$ belong to the same asymptotic tract $f^{-1}(\mathbb{C}\setminus\mathbb{D}_{\rho})$, then $d_{\cyl}(a_{i(N_i+1)},a_{k(N_k+1)})>\varepsilon$. \end{enumerate} \end{defn} Clearly, a separating structure is not defined uniquely by its parameters but rather describes an ``environment'' to work in. Note that the definition of a separating structure forbids marked cycles inside of $\mathbb{D}_\rho$, but allows them in general. The coefficient $1/2$ at $\rho$ for the triple $(\rho/2,\rho,\mathcal{O})$ can be replaced by any other positive constant smaller than $1$, the only condition is that it has to remain bigger than $q$. The conclusions of all subsequent theorems will remain valid. Denote by $\omega_{ij}$ the local degree of $f$ at $a_{ij}$ (i.e., $\omega_{ij}=1$ unless $a_{ij}$ is a critical point), denote $$\Omega_{ij}(\beta):=\prod_{k=j}^\infty\omega_{ik}^{\beta^{k-j+1}}$$ and let $\Omega(\beta):=\max_{ij}\Omega_{ij}(\beta)$. 
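To illustrate the notation: if the $i$-th marked orbit contains a single critical point $a_{ik_0}$ with $k_0\geq j$ and local degree $\omega>1$, while $\omega_{ik}=1$ for all $k\neq k_0$, then
$$\Omega_{ij}(\beta)=\prod_{k=j}^\infty\omega_{ik}^{\beta^{k-j+1}}=\omega^{\beta^{k_0-j+1}}.$$
In particular, $\Omega(\beta)$ is finite whenever each marked orbit contains only finitely many critical points.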
\begin{lmm}[Initial fat spider] \label{lmm:initial_fat_spider} Let $\mathbb{S}[\rho, q, K_0, K_1, \varepsilon]$ be a separating structure for $f_0$ and let $N(\rho)\neq 2$. Then there exists a fat spider $S(\mathbb{A}_{q\rho,\rho},\{[L_{ij}]\}_{j\leq N_i},\mathcal{O})$ with $N(\rho)$ legs $[L_{ij}]$ for $j\leq N_i(\rho)$ such that \begin{enumerate} \item $[L_{ij}]$ joins $[L_{ij}](0)=a_{ij}\in \mathbb{D}_{q\rho}$ to a point $[L_{ij}](1)\in\partial\mathbb{D}_{\rho}$, \item $[L_{ij}]$ is $K_{ij}$-decomposable for $\mathcal{O}$ with $$K_{ij}<(B K_1K_0^2)^{\beta^{N_i-j}}\Omega_{ij}(\beta),$$ where $\beta>1$ and $B>1$ are universal constants. \end{enumerate} \end{lmm} \begin{proof} For $N(\rho)=1$ the statement is obvious, so let us assume that $N(\rho)>2$. The proof uses an inductive argument. Consider a point $a_{i N_i}\in T\cap \mathbb{D}_\rho$, where $T=f^{-1}(\mathbb{C}\setminus\mathbb{D}_{\rho})$ is an asymptotic tract. From the $(K_1,\varepsilon/2)$-regularity, it follows that there exists a leg $[L_{iN_i}]$ from $a_{iN_i}$ to $\partial\mathbb{D}_{\rho}$ which is $K_1$-decomposable for $\mathcal{O}$. That is, $K_{iN_i}\leq K_1$. To construct $[L_{i(N_i-1)}]$, consider the pre-image $[\gamma]$ of $[L_{iN_i}]$ under $f=\lambda\circ f_0$ which starts at $a_{i(N_i-1)}$. By Lemma~\ref{lmm:K_U_for_lifts} and Lemma~\ref{lmm:qc_change_of_coordinates}, it is $K_{\gamma}$-decomposable where $$K_{\gamma}=K_{iN_i}K_0^2\omega_{i(N_i-1)}.$$ There are two different types of prolongation of $[\gamma]$ depending on the position of $[\gamma](1)$. \begin{enumerate} \item ($[\gamma](1)\notin\mathbb{D}_\rho$) Assign $[L_{i(N_i-1)}]$ to be equal to $[\gamma_\pi]$ where $\gamma_\pi$ is the ``semi-projection'' to the circle $\partial\mathbb{D}_\rho$ defined as in Proposition~\ref{prp:K_shifts_imply_distance_bounds}.
The endpoint of $[\gamma_\pi]$ belongs to $\partial\mathbb{D}_\rho$ and, due to Proposition~\ref{prp:K_shifts_imply_distance_bounds} (note that there is at least one leg $[L_{iN_i}]$), $$K_{\gamma_\pi}<C_1K_{\gamma}^{\beta_1+4}$$ for some universal constants $C_1$ and $\beta_1$ (universality is due to the inequalities $q<1/2$ and $\varepsilon<1$). Note that $[L_{i(N_i-1)}]$ can be written as the concatenation of the three homotopy classes in the provided order: $[\gamma]$, reversed $[\gamma]$ and $[\gamma_\pi]$. Thus, $[L_{i(N_i-1)}]$ is $K_{i(N_i-1)}$-decomposable with $$K_{i(N_i-1)}<K_{[\gamma]}K_{[\gamma]} C_1K_{[\gamma]}^{\beta_1+4}=C_1(K_{iN_i}K_0^2\omega_{i(N_i-1)})^{\beta_1+6}.$$ The decomposability takes place because we have a concatenation starting with a decomposable homotopy type. \item ($[\gamma](1)\in\mathbb{D}_\rho$) Note that $[\gamma](1)\in\partial T$ where $T=f^{-1}(\mathbb{C}\setminus\overline{\mathbb{D}}_{\rho})$ is an asymptotic tract. Exactly as in the case above, we see that $[\gamma]$ is $K_{[\gamma]}$-decomposable for $\mathcal{O}$. Let $[L_{i(N_i-1)}]$ be the concatenation of $[\gamma]$ with a homotopy class of paths from $[\gamma](1)$ to $\partial\mathbb{D}_{\rho}$ existing due to $(K_1,\varepsilon/2)$-regularity. Then, since $\mathcal{O}\cap \mathbb{A}_{q\rho, \rho e^{\varepsilon}}=\emptyset$, $[L_{i(N_i-1)}]$ is $K_{i(N_i-1)}$-decomposable with $$K_{i(N_i-1)}\leq K_{[\gamma]}K_1=K_{iN_i}K_1K_0^2\omega_{i(N_i-1)}.$$ \end{enumerate} Proceeding by induction and ``unifying'' the two cases, one can show the existence of all $[L_{ij}]$ such that $$K_{iN_i}<K_1<BK_1K_0^2$$ and $$K_{ij}< K_{i(j+1)}^{\beta_1+6} (BK_1K_0^2\omega_{ij})^{\beta_1+6}\leq K_{i(j+1)}^{2(\beta_1+6)}\omega_{ij}^{2(\beta_1+6)}=K_{i(j+1)}^\beta\omega_{ij}^\beta.$$ The claim of the lemma follows.
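For completeness, the induction unrolls as follows: since all $\omega_{ik}\geq 1$,
$$K_{ij}<K_{i(j+1)}^{\beta}\,\omega_{ij}^{\beta}<K_{i(j+2)}^{\beta^{2}}\,\omega_{i(j+1)}^{\beta^{2}}\,\omega_{ij}^{\beta}<\dots<(BK_1K_0^2)^{\beta^{N_i-j}}\prod_{k=j}^{N_i-1}\omega_{ik}^{\beta^{k-j+1}}\leq(BK_1K_0^2)^{\beta^{N_i-j}}\,\Omega_{ij}(\beta),$$
which is exactly the asserted bound.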
\end{proof} \begin{defn}[Standard spiders] \label{defn:standard_spiders} Let $\mathbb{S}[\rho, q, K_0, K_1, \varepsilon]$ be a separating structure for a function $f_0$ (i.e., with some fixed choice of parameters). Denote by $\mathcal{S}_0$ the set of fat spiders satisfying the conditions of Lemma~\ref{lmm:initial_fat_spider}. We call it \emph{the set of standard (fat) spiders associated to the separating structure $\mathbb{S}[\rho, q, K_0, K_1, \varepsilon]$}. \end{defn} The main benefit of Definition~\ref{defn:standard_spiders} is that one can define a dynamically meaningful pull-back (via $f$) keeping the set $\mathcal{S}_0$ invariant. In other words, there is a procedure (by a slight abuse of terminology called \emph{pull-back}) producing from every standard spider $S\in\mathcal{S}_0$ a new standard spider $\tilde{S}\in\mathcal{S}_0$ by literally repeating the algorithm in the proof of Lemma~\ref{lmm:initial_fat_spider}: we take a pre-image of a leg under $f$ and choose its prolongation. From the proof it is clear that all bounds are invariant. Note that this procedure \emph{does not} define the spider $\tilde{S}$ uniquely. However, instead of artificially making some precise choice, we should rather think that the pull-back gives as its output \emph{some} spider $\tilde{S}\in\mathcal{S}_0$; a particular choice is irrelevant for us. We are finally ready to describe the situation in which the existence of a separating structure with ``good'' bounds implies the existence of a certain invariant structure associated to Thurston's pull-back map. \begin{thm}[Invariant structure] \label{thm:invariant_structure} Let $f_0$ be a finite type entire function, $D$ be a union of Riemann domains with pairwise disjoint closures, each containing exactly one singular value, and $U\supset\overline{D}$ be a bounded domain. Fix a real number $0<\varepsilon<1$.
There exist universal constants $C>1,\nu>1$ and (non-universal) constants $\rho_0(f_0, U, \varepsilon)>0$, $q_0(\varepsilon)<1/2$, $\Delta(U,D)>0$ such that the existence of a separating structure $\mathbb{S}[\rho, q, K_0, K_1, \varepsilon]$ for $f_0$ and $U$ satisfying the inequalities $\rho\geq\rho_0$, $q\leq q_0$ and \begin{equation} \label{eqn:dilatation_per_area_condition} \left(CK_1K_0\right)^{\nu^{N(\rho)}}\Omega^{\nu N(\rho)}(\beta)I_q(\rho,D)<\Delta \end{equation} implies that there is a nonempty set $\mathcal{I}\subset\hat{\mathcal{T}}_{\mathcal{O}}$ of \tei\ equivalence classes $[\varphi]$ of (topological) homeomorphisms such that \begin{enumerate} \item $\mathcal{I}$ is $\sigma$-invariant; \item the projection of $\mathcal{I}$ to $\mathcal{T}_{\{a_{ij}: j\leq N_i(\rho)+1\}}$ is a bounded set; \item every equivalence class $[\varphi]\in\mathcal{I}$ contains a homeomorphism $\varphi$ such that for every $z\in\mathbb{D}_\rho^\infty$, we have $d_{\cyl}(\varphi(z),z)<\varepsilon/4$. \end{enumerate} \end{thm} \begin{proof} First, without loss of generality we may assume that $N=N(\rho)\neq 2$. If $N=2$, we just add an additional marked point (for example, a non-marked pre-image of some $a_{i(N_i+1)}$). Let $\alpha>1$ be some universal constant and $0<\delta<\varepsilon/4$ be a parameter depending only on $\varepsilon$; we will determine their precise values later. We will require a more elaborate description for $\mathcal{I}$ than in the statement of the theorem.
It is defined as the set of equivalence classes $[\varphi]$ containing a homeomorphism $\varphi:\mathbb{C}\to\mathbb{C}$ satisfying the following list of conditions: \begin{enumerate} \item $\varphi|_{\partial\mathbb{D}_\rho}=\id$, \item for $z\in\mathbb{D}_\rho^\infty$, $d_{\cyl}(\varphi(z),z)<\delta$, \item there is a standard fat spider $S$ with $K_{ij}$-decomposable (for $\mathcal{O}$) legs $[L_{ij}]$ so that $S':=\varphi(S)$ is a fat spider with the separating annulus $\mathbb{A}_{\alpha q\rho,\rho/2}$ and $K'_{ij}$-decomposable (for $\varphi(\mathcal{O})$) legs $[L'_{ij}]=\varphi_*[L_{ij}]$ so that $$K'_{ij}<(8BK_1K_0^2)^{\beta^{N_i-j}}\Omega_{ij}(\beta)$$ where $B$ and $\beta$ are universal constants from the definition of the standard spider. \end{enumerate} Due to Proposition~\ref{prp:teich_metric_fat_spider_map}, it is clear that for any $\mathcal{I}$ of this form, its projection to $\mathcal{T}_{\{a_{ij}: j\leq N_i(\rho)+1\}}$ is bounded, hence it will suffice to prove $\sigma$-invariance in terms of the second description. $\mathcal{I}$ is non-empty because the identity trivially satisfies the conditions for $\varphi$. In order to prove invariance of $\mathcal{I}$ under $\sigma$ we have to show that if (\ref{eqn:dilatation_per_area_condition}) holds for big enough $\rho$ and small enough $q$, then there is a homeomorphism $\hat{\varphi}\in\sigma[\varphi]$ satisfying the same conditions. So, let $\varphi$ be a homeomorphism satisfying $(1)-(3)$. Note that from the definition of the standard spider and since $N_i\leq N$, for every $a_{ij}\in\mathbb{D}_\rho$, we have $$K_{ij}<(BK_1K_0^2)^{\beta^{N_i-j}}\Omega_{ij}(\beta)<(8BK_1K_0^2)^{\beta^N}\Omega(\beta)$$ and $$K'_{ij}<(8BK_1K_0^2)^{\beta^N}\Omega(\beta).$$ In order to work with \qc\ maps, let us consider the isotopy class of $\varphi$ relative to $\mathcal{O}\cap\mathbb{D}_\rho\cup\partial\mathbb{D}_\rho$. It contains a homeomorphism $\varphi_1$ equal to the identity on $\mathbb{D}_{\rho/2}^\infty$.
The fat spiders and the homotopy types of their legs project correspondingly and the maximal dilatation coefficients can only decrease, hence we keep denoting them $S$ and $S'$. Assuming that $\mod\mathbb{A}_{\alpha q\rho,\rho/2}>\log 2$ (i.e., $q\alpha<1/4$) and applying Proposition~\ref{prp:teich_metric_fat_spider_map} to the spider map $\varphi_1$ between the fat spiders $S$ (with a new smaller separating annulus $\mathbb{A}_{\alpha q\rho,\rho/2}$ rather than the default $\mathbb{A}_{q\rho,\rho}$) and $S'$, we obtain a $K'$-\qc\ map $\varphi_2$ isotopic to $\varphi$ relative to $\mathcal{O}\cap\mathbb{D}_\rho\cup\partial\mathbb{D}_\rho$ such that $\varphi_2|_{\mathbb{D}_{\rho/2}^\infty}=\id$ and \begin{equation} \label{eq:main_thm_K'} K'<\left(C_1\left(8BK_1K_0^2\right)^{\beta^N}\Omega(\beta)\right)^{\nu_1 N}<\left(CK_1K_0^2\right)^{\nu_1\beta^NN}\Omega^{\nu_1 N}(\beta), \end{equation} where $C_1,C$ and $\nu_1$ are universal constants (the constant $C_1$ from Proposition~\ref{prp:teich_metric_fat_spider_map} is now universal because the modulus of the separating annulus is bounded from below by $\log 2$). Note that the constant $C$ is defined at this point. Further, if $\rho$ is big enough, then due to Lemma~\ref{lmm:conformal_neighbourhood}, $\varphi_2\circ\lambda$ can be isotoped relative to $\lambda^{-1}(\mathcal{O})\cap\mathbb{D}_{q\rho}\cup\mathbb{D}_{\rho/2}^\infty$ to a $C_3 K_0^2 K'^2$-\qc\ map $\chi:\mathbb{C}\to\mathbb{C}$ which is conformal on $D$ and equal to the identity on $\mathbb{D}_{\rho/2}^\infty$, where the constant $C_3$ depends only on $U$. Therefore, the maximal dilatation of $\chi$ satisfies $$C_3 K_0^2 K'^2<C_3K_0^2\left(CK_1K_0^2\right)^{\nu_1\beta^NN}\Omega^{\nu_1 N}(\beta)<C_3\left(CK_1K_0\right)^{\nu^{N(\rho)}}\Omega^{\nu N}(\beta)$$ where $\nu>1$ is a universal constant. Note that it is defined at this point. From inequality~(\ref{eqn:dilatation_per_area_condition}), $C_3 K_0^2K'^2 I_q(\rho,D)<C_3\Delta$.
Let $\tilde{\chi}$ be the pull-back of $\chi$ under $f_0$ normalized so that $\tilde{\chi}(0)=0$ and $\tilde{\chi}(z)/z\to 1$ as $z\to\infty$. The normalization is well defined due to the \tei--Wittich theorem. Applying Proposition~\ref{prp:distortion of identity} to the map $\tilde{\chi}$ and the round disk $\mathbb{D}_{q\rho}^{\infty}$ centered at $\infty$, we see that there exists $q_0=q_0(\delta)$ such that if $q<q_0$ and $\Delta$ is small enough, then: \begin{enumerate} \item for every $z\in\mathbb{D}_{\rho/2}^{\infty}$, we have $d_{\cyl}(\tilde{\chi}(z),z)<\delta/3$, \item $\tilde{\chi}(\overline{\mathbb{D}}_{q\rho+\varepsilon/4})\subset\mathbb{D}_{\alpha q\rho}$. \end{enumerate} Note that if we assume that $\Delta<1$, then $\alpha$ can be chosen as a universal constant. So, $\alpha$ is defined at this point. Denote $g:=\chi\circ f_0\circ\tilde{\chi}^{-1}$. It is clear that $g=\varphi\circ f\circ\tilde{\varphi}^{-1}$ for some $\tilde{\varphi}\in\sigma[\varphi]$ (however, we ``forgot'' the marked points outside of $\mathbb{D}_\rho$). The normalization of $\tilde{\varphi}$ is uniquely determined by the one of $\tilde{\chi}$. Therefore, we can recover a large part of the information about the isotopy class of $[\tilde{\varphi}]$ by lifting the isotopy (relative to $\mathcal{O}\cap\mathbb{D}_\rho\cup\partial\mathbb{D}_\rho$) between $\chi$ and $\varphi\circ\lambda$. By construction, we have $\tilde{\varphi}=\tilde{\chi}$ on $f^{-1}(\partial\mathbb{D}_\rho)$. Let us define the desired map $\hat{\varphi}\in[\tilde{\varphi}]$ by prescribing $\hat{\varphi}:=\tilde{\chi}$ on $f^{-1}(\mathbb{D}_\rho)$ and $\hat{\varphi}:=\tilde{\varphi}$ otherwise. For the moment we ignore the necessary condition that $\hat{\varphi}=\id$ on $\partial\mathbb{D}_\rho$. This flaw will be easily corrected later. We show that for $z\in\mathbb{D}_\rho^\infty$, $d_{\cyl}(\hat{\varphi}(z),z)<2\delta/3$.
Indeed, if $z\in\mathbb{D}_\rho^\infty\cap f^{-1}(\mathbb{D}_\rho)$, then $$d_{\cyl}(\hat{\varphi}(z),z)=d_{\cyl}(\tilde{\chi}(z),z)<\delta/3.$$ Otherwise, if $z\in\mathbb{D}_\rho^\infty\setminus f^{-1}(\mathbb{D}_\rho)$, consider the shortest (cylindrical) geodesic interval between $\chi\circ f_0(z)$ and $\varphi\circ\lambda\circ f_0(z)$. On the one hand, its lift under $g$ is a curve joining $\tilde{\chi}(z)$ to $\tilde{\varphi}(z)=\hat{\varphi}(z)$. On the other hand, the expansion property of $F_0$ on logarithmic tracts shows that the lift of the interval under $\chi\circ f_0$ starting at $z$ and ending at $\tilde{\chi}^{-1}\circ\hat{\varphi}(z)$ has cylindrical length smaller than $\delta/3$ if $\rho$ is big enough. Thus, due to the triangle inequality, $$d_{\cyl}(\hat{\varphi}(z),z)\leq d_{\cyl}\left(\tilde{\chi}\left(\tilde{\chi}^{-1}\circ\hat{\varphi}(z)\right),\tilde{\chi}^{-1}\circ\hat{\varphi}(z)\right) +d_{\cyl}(\tilde{\chi}^{-1}\circ\hat{\varphi}(z),z)<2\delta/3.$$ One can see similarly that $\hat{\varphi}(a_{iN_i})\in\mathbb{D}_{\alpha q\rho}$. Consider the shortest (cylindrical) geodesic interval between $a_{i(N_i+1)}=\chi\circ f_0(a_{iN_i})$ and $\varphi(a_{i(N_i+1)})$. Again, due to the expansion property of $F_0$, its lift under $\chi\circ f_0$ starting at $a_{iN_i}$ and ending at $\tilde{\chi}^{-1}\circ\hat{\varphi}(a_{iN_i})$ has cylindrical length smaller than $\delta$ if $\rho$ is big enough. Thus, $\tilde{\chi}^{-1}\circ\hat{\varphi}(a_{iN_i})\in\mathbb{D}_{q\rho+\delta}\subset\mathbb{D}_{q\rho+\varepsilon/4}$ and property $(2)$ of $\tilde{\chi}$ yields the estimate. Thus, if $\tilde{S}$ is a pulled-back spider of $S$, then its image under $\hat{\varphi}$ is also a fat spider $\tilde{S}'\left(\mathbb{A}_{\alpha q\rho,\rho e^{-\delta}},\{[\tilde{L}'_{ij}]\}_{j\leq N_i},\hat{\varphi}(\mathcal{O})\right)$, though generally with a smaller separating annulus.
Note that according to our definitions we cannot say that $\tilde{S}'$ is obtained from $S'$ by a pull-back via $g$. However, as it is a homotopic image of $\tilde{S}$, the bounds on the maximal dilatation induced by its legs can be computed with the help of $g$. The following diagram illustrates the relations we have just described. \begin{center} \begin{tikzcd} \tilde{S},[\tilde{L}_{ij}] \arrow[r, "{\hat{\varphi}}"] \arrow[d, "f"] & \tilde{S}', [\tilde{L}_{ij}'] \arrow[d, "g"] \\ S, [L_{ij}] \arrow[r, "{\varphi}"] & S', [L_{ij}'] \end{tikzcd} \end{center} \vspace{0.5cm} We will show that the legs $[\tilde{L}_{ij}']=[\hat{\varphi}(L_{ij}')]$ are $\tilde{K}_{ij}'$-decomposable for $\hat{\varphi}(\mathcal{O})$ and compute upper bounds for the $\tilde{K}_{ij}'$. Recall the construction algorithm for a pulled-back spider: to obtain $\tilde{S}_{ij}$, first, we take a pre-image of a leg $L_{i(j+1)}$ under $f$ starting at $a_{ij}$ and, second, extend this pre-image in a way depending on whether its endpoint belongs to $\mathbb{D}_\rho$ or to its complement. However, there is an initial step when $j=N_i$: a leg $[L_{iN_i}]$ exists due to the $(K_1,\varepsilon/2)$-regularity of tracts. More precisely, if $\hat{T}\supset T$ are some tracts of $f_0$ corresponding to the radii $\rho/2$ and $\rho$, and $a_{iN_i}\in T$, then there exists a Riemann domain $D_i\subset f_0^{-1}(\mathbb{D}_{\rho/2}^\infty)\cap\mathbb{D}_{\rho e^{\varepsilon/2}}\setminus\mathcal{O}$, such that $K_{D_i}(\{a_{iN_i}\}\gg\partial\mathbb{D}_\rho)\leq K_1$. We will show that there exists a Riemann domain $D_i'\subset\mathbb{D}_{\rho e^{\varepsilon/2+\delta}}\setminus\hat{\varphi}(\mathcal{O})\subset\mathbb{C}\setminus\hat{\varphi}(\mathcal{O})$ such that $$K_{\hat{\varphi}(D_i')}\left(\hat{\varphi}(a_{iN_i}),\hat{\varphi}_*[L_{ij}']\right)<4K_1.$$ Consider the isotopy type of $\varphi$ relative to $\mathcal{O}\cap\mathbb{D}_{\rho/2}$ and to those marked points $a_{k(N_k+1)}$ such that $a_{kN_k}\in T$.
It contains a map $\xi$ which coincides with $\chi\circ\lambda^{-1}$ on $\mathbb{D}_\rho$ and is equal to the identity on $\mathbb{D}_\rho^\infty$ except on small disjoint (cylindrical) disks around the $a_{k(N_k+1)}$'s (assuming $\delta$ is small). We can assume that the maximal dilatation of $\xi$ on $\mathbb{D}_\rho^\infty$ is smaller than two. Let $\tilde{\xi}$ be the pull-back of $\xi$ normalized in the same way as $\tilde{\varphi}$. Then the Riemann domain $D_i':=\tilde{\xi}(D_i)$ is contained in $\mathbb{D}_{\rho e^{\varepsilon/2+\delta}}\setminus\hat{\varphi}(\mathcal{O})$ and, due to Lemma~\ref{lmm:qc_change_of_coordinates}, $$K_{\hat{\varphi}(D_i')}\left(\hat{\varphi}(a_{iN_i}),\hat{\varphi}_*[L_{ij}']\right)<4K_1.$$ In other words, $[\tilde{L}_{iN_i}]$ is $\tilde{K}'_{iN_i}$-decomposable, where $\tilde{K}'_{iN_i}<4K_1<\frac{1}{2}(8BK_1K_0^2)^{\beta^0}$. Now, assume that $j<N_i$ and consider the case when the lift $[\gamma]$ of $[L_{i(j+1)}]$ under $f$ starting at $[\gamma](0)=a_{ij}$ terminates at $[\gamma](1)\in\mathbb{D}_\rho\cap f^{-1}(\partial\mathbb{D}_\rho)$. Then, as in the construction of a standard spider, $[\gamma]$ can be concatenated with some $[\gamma_1]$ such that $[\gamma_1](1)\in\partial\mathbb{D}_\rho$ and the concatenation forms the leg $[\tilde{L}_{ij}]$. Since $[\gamma']:=\hat{\varphi}_*[\gamma]=g^*\left(\varphi_*[L_{i(j+1)}']\right)$ for a holomorphic map $g$, $[\gamma']$ is $K_{ij}'\omega_{ij}$-decomposable by Lemma~\ref{lmm:K_U_for_lifts} and Lemma~\ref{lmm:qc_change_of_coordinates}.
As in the last paragraph for the case $j=N_i$ (except that additionally $\xi$ must fix $f\left([\gamma](1)\right)$), one shows that $$K_{\mathbb{C}\setminus\mathcal{O}}\left([\gamma'](0),[\gamma']\right)<4K_1.$$ Since the concatenation of a decomposable path with any other path remains decomposable, after taking the product of the obtained bounds we get $$\tilde{K}_{ij}'<4K_1\omega_{ij}K_{ij}'<4K_1\omega_{ij}(8BK_1K_0^2)^{\beta^{N_i-j-1}}\Omega_{i(j+1)}<\frac{1}{2}(8BK_1K_0^2)^{\beta^{N_i-j}}\Omega_{ij}.$$ Assume now that $[\gamma](1)\in\mathbb{D}_{\rho}^{\infty}$. In this case $[\tilde{L}_{ij}]$ is formed by concatenating $[\gamma]$, reversed $[\gamma]$ and $[\gamma_\pi]$. After taking $\delta$ small enough, due to Proposition~\ref{prp:K_shifts_imply_distance_bounds} and Remark~\ref{remark:semi-projected} after it, we see that $[\tilde{L}_{ij}]$ is $\tilde{K}_{ij}'$-decomposable, where $$\tilde{K}_{ij}'<O(\omega_{ij}K_{ij}')^{\beta_1+6},$$ and the constant in $O(\cdot)$ can be made arbitrarily small by making $\delta$ small. Thus $$\tilde{K}_{ij}'<\frac{1}{2}(8BK_1K_0^2)^{\beta^{N_i-j}}\Omega_{ij}.$$ Finally, recall that $\hat{\varphi}$ is not the identity on $\partial\mathbb{D}_\rho$ but rather $2\delta/3$-close to it. We may isotope it to be the identity on $\partial\mathbb{D}_\rho$ at the price of moving points in $\mathbb{D}_\rho^\infty$ by at most $\delta/3$ and changing the $\tilde{K}_{ij}'$'s by at most a multiplicative factor of $2$. This yields the upgraded homeomorphism $\hat{\varphi}$ satisfying properties $(1)-(3)$ and finishes the proof of the theorem. \end{proof} The existence of such a $\sigma$-invariant set $\mathcal{I}$ implies, in many cases, the existence of a fixed point of $\sigma$. This is summarized in the following theorem. We stay in the setup of Theorem~\ref{thm:invariant_structure}.
\begin{thm}[Fixed point in $\overline{\mathcal{I}}$] \label{thm:fixed_point_existence} Let $\mathcal{I}\neq\emptyset$ be the invariant set constructed in Theorem~\ref{thm:invariant_structure}. If, additionally, \begin{enumerate} \item $d_{\cyl}(a_{ij},a_{kl})>\varepsilon$ for $a_{ij},a_{kl}\in\mathcal{O}\cap\mathbb{D}_\rho^{\infty}$, \item $d_{\cyl}(a_{ij},a_{kl})\to\infty$ for $a_{ij},a_{kl}\in\mathcal{O}\cap\mathbb{D}_r^{\infty}$ as $r\to\infty$, \end{enumerate} then $\overline{\mathcal{I}}$ contains a fixed point of $\sigma$, and it is the unique fixed point in $\overline{\mathcal{I}}$. \end{thm} \begin{proof} Condition $(1)$ implies that every point of $\mathcal{I}$ contains a \qc\ representative, and the same is true for the closure $\overline{\mathcal{I}}$ (which is also $\sigma$-invariant). Condition $(2)$ together with the definition of a separating structure implies that all the marked orbits are either (pre-)periodic or escaping, and the orbits of singular values are either strictly pre-periodic or escaping. Moreover, it follows from condition $(2)$ that every point of $\overline{\mathcal{I}}$ is asymptotically conformal. Therefore, due to Lemma~\ref{lmm:strict_contraction}, there exists an iterate $\sigma^n$, $n>0$, whose restriction to $\overline{\mathcal{I}}$ is strictly contracting. Since $\overline{\mathcal{I}}$ is compact in the \tei\ metric (by a sequential compactness argument), $\sigma^n$ must have a fixed point $[\psi]\in\overline{\mathcal{I}}$. From the strict contraction of $\sigma^n$ it follows that $[\psi]$ is the unique fixed point of $\sigma^n$ in $\overline{\mathcal{I}}$; since $\sigma([\psi])\in\overline{\mathcal{I}}$ is also fixed by $\sigma^n$, uniqueness gives $\sigma([\psi])=[\psi]$, so $[\psi]$ is a fixed point of $\sigma$, unique in $\overline{\mathcal{I}}$. \end{proof} Theorem~\ref{thm:fixed_point_existence} has somewhat restricted applicability in the generality in which it is stated.
The reason is that in practice, in order to obtain the bounds needed for the existence of an invariant structure, one might need to assume that $d_{\cyl}(a_{ij},a_{kl})$ tends to $\infty$ at some particular, rather high ``speed''. However, the full generality might be useful if we already have an invariant structure and want to perturb it, e.g., by changing the map $\lambda$ while keeping the bounds on the parameters $\rho,q,K_0,K_1,\varepsilon$ and $N(\rho)$ approximately the same. The invariant structure will still exist, but the new map $f$ will have a quite different dynamical behaviour near $\infty$ (e.g., a slower speed of escape). In some situations it is possible to upgrade Theorem~\ref{thm:fixed_point_existence} by allowing marked points near $\infty$ to come arbitrarily close to each other. In this setting the nearby points might be a source of non-compactness if they either rotate around each other or nearly collide. This kind of behaviour needs to be controlled separately. See \cite{IDTT3} for the corresponding constructions in the case when $f_0$ is the composition of a polynomial with the exponential map. \section{Applications} \label{sec:applications} \subsection{Escaping singular values} \label{subsec:escaping_singular_values} We can finally prove Theorem~\ref{thm:esc_singular_orbits} from the Introduction. Recall that we have a quasiregular function $f=\lambda\circ f_0$ with $\mathcal{O}$ being the union of the singular orbits $\{a_{ij}\}_{j=1}^\infty$, $i=\overline{1,m}$ (all of which escape), and we want to show that it is Thurston equivalent to an entire function. \begin{proof}[Proof of Theorem~\ref{thm:esc_singular_orbits}] Let $D$ be a union of small disks, one around each singular value, so that their closures are pairwise disjoint. Make their radii small enough so that there exists a domain $U\supset\overline{D}$ such that $\mathcal{O}\cap U=\emptyset$.
Let $\varepsilon>0$ be the lower bound of the pairwise cylindrical distances between marked points in $\mathbb{D}_1^\infty$. We want to apply Theorem~\ref{thm:invariant_structure} for some big enough $\rho$. Note from the statement of Theorem~\ref{thm:invariant_structure} that $q_0$ depends only on $\varepsilon$. Since for every $i$, $\abs{a_{i(j+1)}}/\abs{a_{ij}}\to\infty$ as $j\to\infty$, for arbitrarily big $\rho$ there is a separating structure $\mathbb{S}[\rho,q,K_0,K_1,\varepsilon]$ where $q<q_0$ and $K_0,K_1$ are some numbers greater than or equal to $1$. In order to turn $\mathbb{S}[\rho,q,K_0,K_1,\varepsilon]$ into an invariant structure from Theorem~\ref{thm:invariant_structure}, it is enough to provide upper bounds on $K_1$ and $N(\rho)$ so that for big enough $\rho$, the product (\ref{eqn:dilatation_per_area_condition}) can be arbitrarily small. Note that in our setting, since we have a fixed $\lambda$, $K_0$ and $\Omega$ are constant and both can be absorbed by $C$ in (\ref{eqn:dilatation_per_area_condition}). From Proposition~\ref{prp:log_regularity_finite_order}, we obtain the bound $K_1<C_1(\log\rho)^{2d(m+1)}$ for some constant $C_1>1$, and we can assume that $C_1$ is also absorbed by $C$. The inequality $\abs{a_{i(j+1)}}>\exp\left(\left(\log\abs{a_{ij}}\right)^\delta\right)$ implies that for big enough $\rho$ and some constant $A>0$, we have the bound $N(\rho)<A\log^3(\rho)$. Thus, for big enough $\rho$ and a constant $B>0$, $$(CK_1)^{\nu^{N(\rho)}}<\left(C(\log\rho)^{2d(m+1)}\right)^{\nu^{N(\rho)}}<e^{(\log^2\rho)^B}.$$ Therefore, due to Lemma~\ref{lmm:AAP_for_scaling}, the product $$(CK_1)^{\nu^{N(\rho)}}I_q(\rho, D)=O\left(e^{(\log^2\rho)^B}\rho^{-\varepsilon}\right)$$ tends to $0$ as $\rho$ tends to $\infty$.
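The final convergence claim can be sanity-checked numerically. In the sketch below we read $\log^2\rho$ as the iterated logarithm $\log\log\rho$ (which is what the derivation of $N(\rho)$ above suggests); the values $B=3$ and $\varepsilon=0.1$ are arbitrary illustrations, not the constants of the theorem, and the computation is carried out on the scale $L=\log\rho$ to avoid floating-point overflow for astronomically large $\rho$.

```python
import math

def log_of_bound(L: float, B: float, eps: float) -> float:
    """Logarithm of e^{(log log rho)^B} * rho^{-eps}, written as a
    function of L = log(rho); working in log scale avoids overflow."""
    return math.log(L) ** B - eps * L

# illustrative parameters (B and eps are NOT the constants of the theorem)
B, eps = 3.0, 0.1
vals = [log_of_bound(10.0 ** k, B, eps) for k in range(3, 9)]
# the sequence decreases without bound, i.e. the product in the
# displayed estimate tends to 0 as rho tends to infinity
```

Since $(\log L)^B=o(L)$, the linear term $\varepsilon L$ eventually dominates, which is exactly why the product tends to $0$.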
Condition~\ref{eqn:dilatation_per_area_condition} is satisfied, hence $\mathbb{S}[\rho,q,K_0,K_1,\varepsilon]$ is an invariant structure, and there exists a $\sigma$-invariant set $\mathcal{I}\subset\hat{\mathcal{T}}_{\mathcal{O}}$ from Theorem~\ref{thm:invariant_structure}. A simple application of Theorem~\ref{thm:fixed_point_existence} finishes the proof. \end{proof} \subsection{Further applications} \label{subsec:further_applications} In the last subsection of the article we give a few heuristic explanations of less direct ways to use our constructions. \subsubsection{Perturbations of the orbit} As was mentioned after Theorem~\ref{thm:fixed_point_existence}, in order to construct the invariant set $\mathcal{I}$ of Theorem~\ref{thm:invariant_structure} in practice, we often need $\lambda$ to be fixed in advance so that the singular orbits of $f=\lambda\circ f_0$ behave well, for instance, escape ``fast'' and form a ``sparse'' set. However, our constructions also allow us to construct some functions with a ``slow'' speed of escape of singular values. Let us discuss this using the example of the exponential family. Let $f_0(z)=e^z$, $\lambda_0=\id$ and $\varepsilon=1$. Of course, the quasiregular function $f=\lambda_0\circ f_0$ is Thurston equivalent (and simply equal) to the entire function $f_0$, but at this point we are interested in perturbing the singular orbit. Denote $a_n:=f^{n-1}(0)$, $n>0$, and consider $\rho=(a_{n+1}+a_n)/2$ for big $n$. From the proof of Theorem~\ref{thm:esc_singular_orbits}, the value of $K_1$ corresponding to the pair $\rho,\varepsilon$ will be expressed as some power of $\log\rho$, and we have a separating structure $\mathbb{S}[\rho,q,1,K_1,\varepsilon]$ for $f$ with some fixed $q<1/2$ (and with $\lambda=\lambda_0$). Also, it will be invariant in the sense of Theorem~\ref{thm:invariant_structure}.
We can perturb this structure by changing $\lambda_0$ to a different $\lambda$, having maximal dilatation $K_0<2$, which is equal to the identity outside a small neighbourhood of the singular value $0$. It is clear that the corresponding new separating structure will still be invariant if $\rho$ is big enough and the new singular orbit is absorbed by $\mathbb{D}_\rho^\infty$ and is $\varepsilon$-sparse on it. Let us now look more closely at how we can perturb $\lambda_0$. Denote the new singular orbit by $b_n$. Since the derivative of $f_0$ along the orbit $\{a_n\}_{n=1}^\infty$ tends to $\infty$, we might assume that for $N(\rho)<n<k$, $b_n$ is contained in a ``very small'' neighbourhood of $a_n$, while for $n=k$, $b_k$ can be any point in a square with horizontal sides, center at $a_k$ and side length $2\pi$. The image of this square under $f_0$ is the annulus $\mathbb{A}_{e^{a_k-\pi},e^{a_k+\pi}}$. If, for example, some ``slowly escaping'' orbit of $f_0$ passes through this annulus and is $\varepsilon$-sparse, we can assume that $b_{k+1}$ is a point on it. The obtained separating structure will still be invariant, and the existence of the fixed point of the new $\sigma$ will follow by Theorem~\ref{thm:fixed_point_existence}. In a similar way we can obtain finite singular orbits, though with cycles contained in $\mathbb{D}_\rho^{\infty}$. \subsubsection{Captured polynomials} Even though the techniques developed in this article are suited mainly for dealing with transcendental entire functions, as a byproduct we can also model polynomials with escaping critical values. Note that the corresponding result is a rather simple subcase of \cite{Cui}, though obtained differently, without considering iterations on an infinite-dimensional \tei\ space. Let $f_0$ be a polynomial of degree $d$ and $\lambda$ be a \qc\ map equal to $\id$ near $\infty$ so that all critical values of $f=\lambda\circ f_0$ escape.
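Returning to the exponential example above: the claim that $f_0(z)=e^z$ maps the square with horizontal sides, center $a_k$ and side length $2\pi$ onto the annulus bounded by the circles of radii $e^{a_k-\pi}$ and $e^{a_k+\pi}$ is elementary, but easy to verify numerically; in this sketch the center $a_k=3$ and the sampling density are arbitrary choices for illustration.

```python
import cmath
import math

a_k = 3.0   # illustrative center; in the text a_k is a (large, real) orbit point
n = 500
r_in, r_out = math.exp(a_k - math.pi), math.exp(a_k + math.pi)

# sample the boundary of the square with horizontal sides,
# center a_k and side length 2*pi
pts = []
for i in range(n):
    s = -math.pi + 2 * math.pi * i / n
    pts += [complex(a_k - math.pi, s), complex(a_k + math.pi, s),   # vertical sides
            complex(a_k + s, -math.pi), complex(a_k + s, math.pi)]  # horizontal sides

moduli = [abs(cmath.exp(z)) for z in pts]
# the vertical sides map onto the two boundary circles of the annulus,
# so all boundary moduli lie in [r_in, r_out], with both extremes attained
```

The two horizontal sides both land on the negative-real-axis segment between the two circles, so the image of the boundary is exactly the annulus boundary plus that slit, consistent with the square covering the full annulus.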
Existence of B\"ottcher coordinates for polynomials implies that for every escaping orbit $\{a_n\}_{n=1}^\infty$ of $f_0$, we have asymtotical equality $\abs{a_{n+1}}\sim\abs{a_n}^d$ and $\arg a_{n+1}\sim d\arg a_n$. We can find arbitrarily big $\rho$ so that the first points (on each critical orbit of $f$) which are outside of $\mathbb{D}_\rho$ are $\varepsilon$ distant from each other (in the cylindrical metric). Since $q$ depends only on $\varepsilon$, and $I_q(\rho, D)=0$ whenever we consider $[\varphi]\in\mathcal{T}_{\mathcal{O}}$ with $\varphi|_{\mathbb{D}_{q\rho}^{\infty}}$ being conformal, there is an invariant structure with the given $\lambda$. Note, that the orbits outside of $\mathbb{D}_\rho$ are not necessarily sparse, but we anyway can apply Theorem~\ref{thm:fixed_point_existence} because all $[\varphi]$ as above are asymptotically conformal (because $\varphi$ is conformal near $\infty$). That is, $f$ is Thurston equivalent to a polynomial. \begin{thebibliography}{RSSS} \bibitem[A]{Ahlfors} Lars Ahlfors, \emph{Lectures on Quasiconformal Mappings}. American Mathematical Society (2006) \bibitem[Bi]{Bishop_thin} Christopher J. Bishop, \emph{Quasiconformal maps with thin dilatations}. Publ. Mat. \textbf{66/2} (2022), 715--727 \bibitem[Bo1]{IDTT1} Konstantin Bogdanov, \emph{Infinite-dimensional Thurston theory and transcendental dynamics I: infinite-legged spiders}. Fund. Math. \textbf{261} (2023), 157--200. \bibitem[Bo2]{IDTT2} Konstantin Bogdanov, \emph{Infinite-dimensional Thurston theory and transcendental dynamics II: classification of entire functions with escaping singular orbits}. Adv. Math. \textbf{452} (2024). \bibitem[Bo3]{IDTT3} Konstantin Bogdanov, \emph{Infinite-dimensional Thurston theory and transcendental dynamics III: entire functions with escaping singular orbits in the degenerate case}. arXiv:2104.12206 \bibitem[BF]{BrannerFagella} Bodil Branner and N\'uria Fagella. \emph{Quasiconformal Surgery in Holomorphic Dynamics}. 
Cambridge Studies in Advanced Mathematics. Cambridge: Cambridge University Press (2014). \bibitem[CT]{Cui} Guizhen Cui and Lei Tan, \emph{A Characterization of Hyperbolic Rational Maps}. Inventiones Mathematicae \textbf{183}.3 (2011), 451--516. \bibitem[DH]{DH} Adrien Douady and John Hubbard, \emph{A proof of Thurston's topological characterization of rational functions}. Acta Mathematica \textbf{171} (1993), 263--297. \bibitem[ER]{EpRe} Adam Epstein and Lasse Rempe, \emph{On invariance of order and the area property for finite-type entire functions}. Ann. Acad. Sci. Fenn. Math. \textbf{40} (2015), no. 2, 573--599. \bibitem[EL]{ErLyu} Alexandre Eremenko and Mikhail Lyubich, \emph{Dynamical properties of some classes of entire functions}. Annales de l’institut Fourier \textbf{42} (1992), no. 4, 989--1020. \bibitem[F]{MarkusThesis} Markus F\"orster, \emph{Exponential maps with escaping singular orbits}. PhD Thesis, International University Bremen, 2006. \bibitem[GL]{GardinerLakic} Frederick P. Gardiner and Nikola Lakic, \emph{Quasiconformal \tei\ theory}. American Mathematical Society (2000). \bibitem[H1]{HubbardBook1} John Hubbard, \emph{\tei\ theory and applications to geometry, topology, and dynamics}. Volume 1: \tei\ theory. Ithaca, NY: Matrix Editions (2006). \bibitem[H2]{HubbardBook2} John Hubbard, \emph{\tei\ theory and applications to geometry, topology, and dynamics}, Volume 2: Surface homeomorphisms and rational functions. Ithaca, NY: Matrix Editions (2016). \bibitem[HS]{Spiders} John Hubbard and Dierk Schleicher, \emph{The spider algorithm}. In: R. Devaney (ed.), \emph{Complex dynamics: the mathematics behind the Mandelbrot and Julia sets}. Proc Symp Pure Math \textbf{49}, Amer Math Soc (1994), 155--180. \bibitem[HSS]{HSS} John Hubbard, Dierk Schleicher, and Mitsuhiro Shishikura, \emph{Exponential Thurston maps and limits of quadratic differentials}. Journal Amer Math Soc \textbf{22} (2009), 77--117. \bibitem[LV]{LehtoVirtanen} O. Lehto and K. I.
Virtanen, \emph{Quasikonforme Abbildungen}. Springer-Verlag Berlin Heidelberg (1965). \bibitem[McM]{McMullen_book} Curt McMullen, \emph{Complex dynamics and renormalization}, Annals of Mathematics Studies \textbf{135}. Princeton: Princeton University Press (1995). \bibitem[P]{Pommerenke_book} Christian Pommerenke, \emph{Boundary behaviour of conformal maps}. Springer-Verlag Berlin Heidelberg (1992). \bibitem[Re]{LasseParaSpace} Lasse Rempe, \emph{Rigidity of escaping dynamics for transcendental entire functions}. Acta Mathematica \textbf{203} (2009), 235--267. \bibitem[RRRS]{RRRS} G\"unter Rottenfu{\ss}er, Johannes R\"uckert, Lasse Rempe, and Dierk Schlei\-cher, \emph{Dynamic rays of bounded-type entire functions}. {Annals of Mathematics} \textbf{173} (2011), 77--125. \bibitem[She]{SergeyThesis} Sergey Shemyakov, \emph{A topological characterization of certain postsingularly finite entire functions: transcendental dynamics and Thurston theory}. PhD Thesis, Aix-Marseille University, 2022. \bibitem[Shi]{Mitsu_teich_wittich} Mitsuhiro Shishikura, \emph{Conformality of quasiconformal mappings at a point, revisited}. Ann. Acad. Sci. Fenn., Math. \textbf{43/2} (2018), 981--990. \end{thebibliography} \end{document}
2412.20174v1
http://arxiv.org/abs/2412.20174v1
Explicit bounds on common projective torsion points of elliptic curves
\documentclass[11pt]{amsart} \usepackage{amssymb,amsthm,amsmath,amstext} \usepackage{mathrsfs} \usepackage{bm} \usepackage{mathtools} \usepackage{color} \usepackage[bbgreekl]{mathbbol} \usepackage{cite} \usepackage[left=1.5in,top=1in,right=1.5in,bottom=1in]{geometry} \usepackage{graphicx} \usepackage[all]{xy} \usepackage{tikz} \usepackage{tikz-cd} \usepackage{enumitem} \newcommand{\xycenter}[1]{ \begin{center} \mbox{\xymatrix{#1}} \end{center} } \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem*{expectation}{Expectation} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{problem}[theorem]{Problem} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{theoremintro}{Theorem} \newcommand{\theHtheoremintro}{\Alph{theoremintro}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{convention}[theorem]{Convention} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{maintheorem}{Theorem \ref{tSaturatedPrelogY}} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newcommand{\io}[2]{\iota_{\{#1\}>\{#2\}}} \newcommand{\ioUpper}[2]{\iota^*_{\{#1\}>\{#2\}}} \newcommand{\ioLower}[2]{\iota_{\{#1\}>\{#2\}*}} \newcommand{\sheaf}[1]{\mathscr{#1}} \newcommand{\PP}{\sheaf{P}} \newcommand{\DD}{\sheaf{D}} \newcommand{\LL}{\sheaf{L}} \newcommand{\OO}{\sheaf{O}} \newcommand{\QQ}{\sheaf{Q}} \newcommand{\MM}{\sheaf{M}} \newcommand{\EE}{\sheaf{E}} \newcommand{\FF}{\sheaf{F}} \newcommand{\GG}{\sheaf{G}} \newcommand{\II}{\sheaf{I}} \newcommand{\HH}{\sheaf{H}} \newcommand{\NN}{\sheaf{N}} \newcommand{\VV}{\sheaf{V}} \newcommand{\ZZ}{\sheaf{Z}} \newcommand{\WW}{\sheaf{W}} \newcommand{\BB}{\sheaf{B}} \newcommand{\CC}{\sheaf{C}} \newcommand{\YY}{\sheaf{Y}} \newcommand{\XX}{\sheaf{X}} \newcommand{\sA}{\sheaf{A}} \newcommand{\sS}{\sheaf{S}} \newcommand{\UU}{\sheaf{U}} 
\newcommand{\perfect}[1]{{\color{black} #1}} \newcommand{\formalbase}{\mathrm{Spec}\,\mathbb{C}[[t]]} \newcommand{\valuations}[1]{\mathsf{#1}} \newcommand{\residue}{\partial} \newcommand{\divisor}{\mathrm{div}} \newcommand{\Brtwo}{{}_2\mathrm{Br}} \newcommand{\Pictwo}{{}_2\mathrm{Pic}} \newcommand{\isom}{\cong} \newcommand{\isomto}{\simto} \newcommand{\isometry}{\cong} \newcommand{\F}{\mathbb F} \newcommand{\Z}{\mathbb Z} \newcommand{\N}{\mathbb N} \newcommand{\A}{\mathbb A} \newcommand{\C}{\mathbb C} \newcommand{\R}{\mathbb R} \renewcommand{\P}{\mathbb P} \newcommand{\Q}{\mathbb Q} \newcommand{\Gm}{\mathbb{G}_{\mathrm{m}}} \newcommand{\Gms}[1]{\mathbb{G}_{\mathrm{m},#1}} \DeclareMathOperator{\AAut}{\mathbf{Aut}} \DeclareMathOperator{\Aut}{\mathrm{Aut}} \DeclareMathOperator{\Br}{\mathrm{Br}} \DeclareMathOperator{\Isom}{\mathrm{Isom}} \DeclareMathOperator{\IIsom}{\Group{Isom}} \DeclareMathOperator{\PPic}{\sheaf{P}\!\mathit{ic}} \DeclareMathOperator{\Pic}{\mathrm{Pic}} \DeclareMathOperator{\rk}{\mathrm{rk}} \DeclareMathOperator{\SSpec}{\mathbf{Spec}} \DeclareMathOperator{\Spec}{\mathrm{Spec}} \DeclareMathOperator{\EExt}{\sheaf{E}\!\mathit{xt}} \DeclareMathOperator{\Ext}{\mathrm{Ext}} \DeclareMathOperator{\coker}{\mathrm{coker}} \DeclareMathOperator{\Chow}{\mathrm{CH}} \DeclareMathOperator{\Num}{\mathrm{Num}} \DeclareMathOperator{\Chownum}{\mathrm{Num}} \DeclareMathOperator{\Chowprelog}{\mathrm{CH}^*_{\mathrm{prelog}}} \DeclareMathOperator{\Chowprelognum}{\mathrm{Num}^*_{\mathrm{prelog}}} \DeclareMathOperator{\Chowprelogsat}{\mathrm{Chow}_{\mathrm{prelog, sat}}} \DeclareMathOperator{\Numsatprelog}{\mathrm{Num}_{\mathrm{prelog, sat}}} \DeclareMathOperator{\grad}{grad} \newcommand{\inv}{^{-1}} \newcommand{\sep}{^{\mathrm{s}}} \newcommand{\mult}{^{\times}} \newcommand{\dual}{^{\vee}} \newcommand{\tensor}{\otimes} \newcommand{\bslash}{\smallsetminus} \newcommand{\mapto}[1]{\xrightarrow{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\ul}[1]{\underline{#1}} 
\newcommand{\wt}[1]{\widetilde{#1}} \newcommand{\et}{\mathrm{\acute{e}t}} \newcommand{\linedef}[1]{\textsl{#1}} \newcommand{\Het}{H_{\et}} \newcommand{\ur}{\mathrm{nr}} \newcommand{\Hur}{H_{\ur}} \newcommand{\Brur}{\Br_{\ur}} \newcommand{\merk}{\mathrm{r}} \newcommand{\Hr}{H_{\merk}} \newcommand{\Pfister}[1]{\ll\!{#1}\gg} \newcommand{\quadform}[1]{<\! #1 \!>} \newcommand{\Local}{\mathsf{Local}} \newcommand{\Ab}{\mathsf{Ab}} \newcommand{\Var}{\mathsf{Var}} \newcommand{\Frac}{\mathrm{Frac}} \newcommand{\im}{\mathrm{im}} \newcommand{\CH}{\mathrm{CH}} \newcommand{\CM}{\mathsf{CM}} \newcommand{\res}{\mathrm{res}} \newcommand{\cores}{\mathrm{cor}} \newcommand{\ord}{\mathrm{ord}} \newcommand{\Norm}{\mathrm{N}} \newcommand{\vp}{\pi} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\id}{\mathrm{id}} \newcommand{\Pfad}{\mathrm{adj}^{\mathrm{Pf}}} \newcommand{\Pf}{\mathrm{Pf}} \newcommand{\adj}{\mathrm{adj}} \newcommand{\wMM}{\widetilde{\MM}} \newcommand{\GLsix}{\mathrm{GL}_6(\C)} \newcommand\WHY{{\color{red}\textsf{WHY?}}~} \renewcommand\theenumi{\it\alph{enumi}} \renewcommand\labelenumi{\theenumi)} \usepackage[backref=page]{hyperref} \newcommand*{\DashedArrow}[1][]{\mathbin{\tikz [baseline=-0.25ex,-latex, dashed,#1] \draw [#1] (0pt,0.5ex) -- (2.3em,0.5ex);}} \newcommand{\smallAffineGroup}{\mathcal U} \newcommand{\koszul}{Koszul\,} \newcommand{\intR}{R} \DeclareMathOperator{\berk}{Berk} \DeclareMathOperator{\spec}{spec} \begin{document} \title[Explicit bounds on common projective torsion points]{Explicit bounds on common projective torsion points of elliptic curves} \author[B\"ohning]{Christian B\"ohning} \address{Christian B\"ohning, Mathematics Institute, University of Warwick\\ Coventry CV4 7AL, England} \email{[email protected]} \author[Bothmer]{Hans-Christian Graf von Bothmer} \address{Hans-Christian Graf von Bothmer, Fachbereich Mathematik der Universit\"at Hamburg\\ Bundesstra\ss e 55\\ 20146 Hamburg, Germany} \email{[email protected]} \author[Hubbard]{David Hubbard} 
\address{David Hubbard, Mathematics Institute, University of Warwick\\ Coventry CV4 7AL, England} \email{[email protected]} \begin{abstract} Suppose $E_1, E_2$ are elliptic curves (over the complex numbers) together with double coverings $\pi_i \colon E_i \to \P^1$ ramified in the two-torsion points of $E_i$. Let $E_i[\infty]$ be the torsion points on $E_i$. In \cite{BFT18}, Bogomolov, Fu and Tschinkel ask whether the number of points in $\pi_1 (E_1[\infty]) \cap \pi_2 (E_2[\infty])$ is uniformly bounded in the case when the branch loci of the $\pi_i$ do not coincide. Very recently this was answered affirmatively in \cite{DKY20, Kueh21, Gao21, DGH21, GGK21} and \cite{Poi22-1, Poi22-2}, but realistic effective bounds are unknown. In this article we obtain such bounds for common projective torsion points on elliptic curves under some mild extra assumptions on the reduction type of the input data at given primes. The method is based on Raynaud's original groundbreaking work on the Manin-Mumford conjecture \cite{Ray83-1, Ray83-2}. In particular, we generalise several of his results to cases of bad reduction using techniques from logarithmic algebraic geometry. \end{abstract} \maketitle \tableofcontents \section{Introduction and basic setup}\label{sIntroduction} The following question in the theory of unlikely intersections, which was raised in \cite{BFT18} and is closely related to the uniform Manin-Mumford conjecture, has recently attracted a lot of attention: suppose $E_i$, $i=1,2$, are elliptic curves over the complex numbers (i.e., one-dimensional abelian varieties), together with \emph{standard projections} to $\P^1$, $\pi_i \colon E_i \to \P^1$. Here and in the following, by a standard projection we mean a degree $2$ morphism $\pi_i \colon E_i \to \P^1$ that identifies each point on $E_i$ with its inverse, hence is ramified in the four $2$-torsion points $E_i[2]$ of $E_i$.
Suppose furthermore that the branch points in $\P^1$ of these two double covers do not coincide as subsets of points in $\P^1$. Write $E_i[\infty]$ for the torsion points on $E_i$ (of arbitrary order). What is the smallest $C$ such that under these hypotheses one can conclude \[ \left| \pi_1(E_1[\infty]) \cap \pi_2 (E_2[\infty]) \right| \le C \quad ? \] It is not too difficult to deduce that for any given $(E_i, \pi_i)$, the set $\pi_1(E_1[\infty]) \cap \pi_2 (E_2[\infty])$ is finite. In fact, this already follows from Raynaud's result \cite{Ray83-1} that only finitely many torsion points of a complex abelian variety $A$ lie on a curve $C\subset A$ that is not elliptic. Indeed, consider the four-to-one covering \[ \pi_1\times \pi_2 \colon E_1 \times E_2 \to \P^1 \times \P^1 \] and the preimage of the diagonal \[ X = (\pi_1\times \pi_2)^{-1} (\Delta ). \] Then it is easy to see that under the assumption that the sets of branch points of $\pi_1, \pi_2$ do not coincide, this curve is irreducible and not elliptic. \medskip Recently, several authors \cite{DKY20, Kueh21, Gao21, DGH21, GGK21} finally managed to show, as a corollary of their work, that one can choose one constant $C$ that works for all pairs $(E_i, \pi_i)$ above at once (i.e., \emph{uniformity} holds). Poineau in \cite{Poi22-1, Poi22-2} also proved this by a different technique, using Berkovich spaces over the integers and the dynamics of Latt\`{e}s maps. To the best of our knowledge, these approaches have so far failed to determine the minimal possible $C$ above and have not yielded \emph{effective realistic} bounds. However, one knows pairs $(E_i, \pi_i)$ where $|\pi_1(E_1[\infty]) \cap \pi_2 (E_2[\infty])|$ is comparatively large \cite{BF17,FS19}. The current record (in \cite{FS19}), as far as we are aware, is $34$.
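To make the objects above concrete, the following sketch computes the images under the standard projections of the nontrivial $3$-torsion for two sample curves in short Weierstrass form and checks that they share no common projective $3$-torsion point. The two curves are arbitrary illustrations (they are not curves considered in this paper); we use the classical $3$-division polynomial, whose roots are the $x$-coordinates of the nontrivial $3$-torsion, and the fact that the standard projection of $y^2=x^3+ax+b$ is the $x$-coordinate map.

```python
import numpy as np

def proj_three_torsion(a: float, b: float) -> np.ndarray:
    """x-coordinates of the nontrivial 3-torsion of E: y^2 = x^3 + a*x + b,
    i.e. the roots of the 3-division polynomial
        psi_3(x) = 3x^4 + 6a x^2 + 12b x - a^2.
    Since the standard projection identifies P with -P, these four roots
    are exactly the image of E[3] minus the origin in the affine chart of P^1."""
    return np.roots([3.0, 0.0, 6.0 * a, 12.0 * b, -a * a])

# two illustrative curves (chosen arbitrarily, not taken from the text)
r1 = proj_three_torsion(-1.0, 0.0)   # y^2 = x^3 - x
r2 = proj_three_torsion(-2.0, 1.0)   # y^2 = x^3 - 2x + 1
min_gap = min(abs(x - y) for x in r1 for y in r2)
# min_gap > 0: these two curves share no projective 3-torsion point
```

For these two curves the quartics $3x^4-6x^2-1$ and $3x^4-12x^2+12x-4$ have no common root, so the $3$-torsion parts of $\pi_1(E_1[\infty])\cap\pi_2(E_2[\infty])$ are disjoint; the question in the text is about bounding such intersections over torsion of all orders.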
\medskip In this work, we propose to obtain such effective realistic bounds for the common torsion points $\pi_1(E_1[\infty]) \cap \pi_2 (E_2[\infty])$, or some large subset of this set, under some mild extra assumptions on the curves $E_i$, taking our point of departure from the methods used by Raynaud in \cite{Ray83-1}, \cite{Ray83-2}. In particular, we generalise several arguments of Raynaud to the log smooth setting. \medskip The road map of the paper is as follows: in Section \ref{sGoodReduction} we obtain explicit bounds on common projective torsion points of order coprime to $p$ for two elliptic curves together with standard projections that have good reduction at a given place of some number field lying over a given prime $p$. This is the content of Theorem \ref{tCoarseBounds}. We refine these bounds in Section \ref{sRefinements} in Proposition \ref{pBothSupersingular} and Proposition \ref{pRefinementFrobLiftability}. In Section \ref{pBoundsTwoPrimes} we show how one can obtain explicit bounds on common projective torsion points for curves with good reduction at two given primes. In Section \ref{sGoodMult}, we generalise the preceding results to the case when one or two of the elliptic curves are allowed to have bad multiplicative reduction at a given place. This is done in Theorem \ref{tMainMixedReductionTypes}, which is valid under Assumption \ref{aGoodBadReduction}. Part b) of that Assumption is less geometric, but we expect it to be implied by part a). In fact, we show that this is true in special cases in Theorem \ref{tFinitenessGeneralised}. The proof is longer and occupies the remaining sections of the paper. It involves ideas from logarithmic algebraic geometry and generalises an argument in \cite{Ray83-2}.
\begin{remark}\label{rReductionToNumberFields} To obtain effective bounds of the type mentioned above, it is no essential restriction to assume that both $E_1$ and $E_2$ are defined over a number field; indeed, if $E_i, \pi_i$ are initially defined over $\mathbb{C}$, there exists a $\mathbb{Z}$-algebra $A$ of finite type contained in $\mathbb{C}$ such that all these data are already defined over $S =\mathrm{Spec}\, (A)$. Replacing $S$ by some nonempty open subset if necessary, we can assume that there are \begin{enumerate} \item One-dimensional abelian schemes $\EE_i \to S$ with geometric generic fibres the $E_i$, with morphisms $\pi_{i, S} \colon \EE_i \to \P^1_S$ that are standard projections on each geometric fibre and the given standard projections on the geometric generic fibres. \item The scheme $\XX = (\pi_{1, S} \times \pi_{2, S} )^{-1} (\Delta_{\P^1_S \times_S \P^1_S} )$ is a proper flat $S$-curve with geometric generic fibre $X$. \item For each geometric fibre, the set of common branch points of the standard projections has the same cardinality as on the geometric generic fibre. \end{enumerate} Let $s$ be a closed point of $S$ lying above the generic point of $\mathrm{Spec}\, \Z$. The torsion points of the geometric generic fibre $E_1\times E_2$ that are contained in $X$ specialise injectively (since we are in equal characteristic zero) to torsion points of $\EE_{1, s}\times \EE_{2,s}$ lying on $\XX_s$. In any case, if $t_{E_1\times E_2, X}$ denotes the number of torsion points of $E_1\times E_2$ that are contained in $X$, then \[ \left| \pi_1(E_1[\infty]) \cap \pi_2 (E_2[\infty]) \right| \le \frac{t_{E_1\times E_2, X}}{4} + 8 \] (since the covering $\pi_1\times \pi_2 \colon X \to \Delta \simeq \P^1$ is \'{e}tale of degree $4$ away from the points that coincide with one of the branch points of $\pi_1$ or $\pi_2$, which are at most eight).
Thus a bound on the number of torsion points of $\EE_{1, s}\times \EE_{2,s}$ lying on $\XX_s$ will in general yield a very good bound for our original problem. \end{remark} In view of the preceding remark, we usually assume in the sequel that the data $E_i, \pi_i$ are defined over some number field $K$, with ring of integers $\mathcal{O}_K$. In that case, using the same spread construction as in Remark \ref{rReductionToNumberFields}, we can assume that the $E_i$ extend to abelian $U$-schemes for some nonempty open subset $U\subset \mathrm{Spec}\, \mathcal{O}_K$, and the standard projections $\pi_i$ extend to $U$-morphisms that induce standard projections on every geometric fibre, with the number of common branch points of the standard projections being constant in the family. Moreover, we can assume $U$ is unramified over $\mathrm{Spec}\, \mathbb{Z}$. We can then choose a closed point $v$ of $U$ lying over a prime $p$, and identifying $v$ with the corresponding extension of the $p$-adic valuation to $K$, we can pass to the completion of the maximal unramified extension \[ \widehat{K_v^{\mathrm{ur}}} \] with valuation ring $R \supset \mathcal{O}_{K, v}$ isomorphic to the ring of Witt vectors $W(\overline{\mathbb{F}}_p)$ with coefficients in the algebraic closure of the finite field $\mathbb{F}_p$. This shows that there are always plenty of prime numbers $p$ satisfying the following assumptions. \begin{assumption}\label{aBasic} There exists a prime $p$ and a place $v$ of $K$ unramified over $p$ with the following properties. Let $R = W(\overline{\mathbb{F}}_p)$ be the Witt vectors over $k:=\overline{\mathbb{F}}_p$ with fraction field $F= \widehat{K^{\mathrm{ur}}_v} \supset K$. \begin{enumerate} \item There are abelian schemes \[ \EE_i \to \mathrm{Spec}\, R , \quad i=1, 2 \] with geometric generic fibres equal to the base change to $F$ of the given elliptic curves $E_i$ defined over $K$.
\item For $i=1, 2$, there are $R$-morphisms \[ \xymatrix{ \EE_i \ar[rd] \ar[rr]^{\pi_{i, R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] inducing standard projections on every geometric fibre and the given standard projections (base-changed to $F$) on the geometric generic fibre. Sometimes, by slight abuse of notation, we will also simply write $\pi_i$ for $\pi_{i, R}$ if there is no risk of confusion. \item The number of common branch points in $\P^1$ of the $\pi_{i, R}$ is the same on the special fibre as on the geometric generic fibre. We write \[ \XX = \left( \pi_{1, R} \times \pi_{2, R} \right)^{-1} \left( \Delta_{\P^1_R \times_R \P^1_R} \right) \] for the preimage of the diagonal, which is a proper flat $R$-curve. \end{enumerate} \end{assumption} Of course, the prime numbers in question depend on the data $(E_i, \pi_i)$ and can be very large in special cases. \section{Bounds on common torsion points if both curves have good reduction}\label{sGoodReduction} Here we assume we are given elliptic curves $E_i$ and standard projections $\pi_i$, $i=1, 2$, defined over a number field $K$ satisfying Assumption \ref{aBasic} above, and we wish to show how a method pioneered by Raynaud in \cite{Ray83-1} yields very realistic bounds on \[ \left| \pi_1\left( E_1[\infty]^{(p')} \right) \cap \pi_2 \left( E_2[\infty]^{(p')} \right) \right| \] where we denote by $E_i[\infty]^{(p')}$ the coprime-to-$p$ torsion on $E_i$. \medskip We denote the abelian $R$-scheme $\EE_1 \times_{\mathrm{Spec}\, R}\EE_2$ by $\sA$. We denote its special fibre over $\mathrm{Spec}\, k$ by $\sA_0$. \begin{lemma}\label{lDefCoprimeTorsion} All torsion points in $(E_1\times E_2)(\bar{K})$ of order not divisible by $p$ are defined over $K_v^{\mathrm{ur}}$, hence can be viewed as sections of $ \sA \to \mathrm{Spec}\, R$.
Moreover, the reduction map $\sA (\mathrm{Spec}\, R) \to \sA_0(k)$ gives an isomorphism from the $n$-torsion points in $(E_1\times E_2)(\bar{K})$ onto the $n$-torsion points of $\sA_0(k)$ as long as $p$ does not divide $n$. \end{lemma} \begin{proof} Indeed, given an abelian scheme over a discrete valuation ring of mixed characteristic $(0,p)$, the subgroup scheme of $n$-torsion points is finite and \'{e}tale over the base provided $p$ does not divide $n$ \cite[Prop. 1.34]{Sai13}. \end{proof} \begin{lemma}\label{lInImageOfMultByP} If $p$ does not divide $n$, every $n$-torsion point in $(E_1\times E_2)(\bar{K})$ can be written as $p$ times another such $n$-torsion point. Thus every section of $ \sA \to \mathrm{Spec}\, R$ corresponding to such a torsion point is in the image of another $R$-valued point in $\sA$ under the multiplication by $p$ map $[p] \colon \sA \to \sA$ on the abelian scheme $\sA$. \end{lemma} \begin{proof} This is simply because multiplication by $p$ is an automorphism of $(\Z/n)^2\times (\Z/n)^2$, the group of $n$-torsion points of $E_1\times E_2$, when $p$ does not divide $n$. \end{proof} Taken together, these two lemmas directly imply \begin{proposition}\label{pSimpleBounds} A bound on \[ t (E_1, \pi_1, E_2, \pi_2, p'):= \left| \pi_1\left( E_1[\infty]^{(p')} \right) \cap \pi_2 \left( E_2[\infty]^{(p')} \right) \right| \] is given by \[ \frac{1}{4} \left| \mathrm{im} \left( p \sA (R) \cap \XX (R) \to \XX_0(k) \right) \right| + 8 \] where the arrow in the displayed formula is the specialisation map and $\XX_0$ denotes the central fibre of the curve $\XX$.
Moreover, putting $R_1= R/p^2$, and denoting by $\sA_1$, $\XX_1$ the base change of $\sA, \XX$ to $\mathrm{Spec}\, R_1$, a bound on $t (E_1, \pi_1, E_2, \pi_2, p')$ is also obtained by \[ \frac{1}{4} \left| \mathrm{im} \left( p \sA_1 (R_1) \cap \XX_1 (R_1) \to \XX_0(k) \right) \right| + 8 \] \end{proposition} To explain the key ideas in a simple context, in the sequel of this section we will, in addition to Assumption \ref{aBasic}, make the following \begin{assumption}\label{aSimplest} The branch loci of the standard projections on the special fibres $(\pi_1)_k \colon \EE_{1}\times_{\mathrm{Spec}\, R} \mathrm{Spec} k \to \P^1_k$ and $(\pi_2)_k \colon \EE_{2}\times_{\mathrm{Spec}\, R} \mathrm{Spec} k \to \P^1_k$ are disjoint. (Hence the same is true for the generic fibres). \end{assumption} This implies that $\XX$ is a smooth $R$-curve. \begin{theorem}\label{tCoarseBounds} Let $(E_1, \pi_1)$, $(E_2, \pi_2)$ satisfy Assumptions \ref{aBasic} and \ref{aSimplest}. Then \[ t (E_1, \pi_1, E_2, \pi_2, p') \le 2p^3 + 8. \] \end{theorem} \begin{proof} By Proposition \ref{pSimpleBounds}, it suffices to show \[ \left| \mathrm{im} \left( p \sA_1 (R_1) \cap \XX_1 (R_1) \to \XX_0(k) \right) \right| \le 8p^3. \] For this, it is convenient, following ideas in \cite{Ray83-1}, to pass to a structure defined over $k$ to encode information about first-order infinitesimal deformations. 
We recall how this is done, following \cite{Ray83-1}: writing $\sA_1 (R_1, \XX_0)$ for the set of $R_1$-points of $\sA_1\to \mathrm{Spec}\, R_1$ that specialise to a point in $\XX_0$, we note that there is a factorisation \[ \xymatrix{ \sA_1 (R_1, \XX_0) \ar[r]^{\tau} \ar[rd]_{\mathrm{specialization}} & V_0 (k)\ar[d]^{\varpi} \\ & \XX_0(k) } \] where $V_0 \to \XX_0$ is a certain \emph{affine bundle} over $\XX_0$, obtained as follows: consider the normal bundle $\NN_{\XX_0/\sA}$ with subbundle $\NN_{\XX_0/\sA_0}$ and form \[ V_0 = \P \bigl( \NN_{\XX_0/\sA} \bigr) \setminus \P \bigl( \NN_{\XX_0/\sA_0} \bigr) \] which is naturally an affine bundle over $\XX_0$. Since sections of $\sA_1\to \mathrm{Spec}\, R_1$ that specialise to a point $x$ in $\XX_0$ have a normal direction at $x$ that is not contained in $\sA_0$, we get a factorisation as claimed in the diagram above. Write $\XX'_0$ for the curve in $V_0$ whose $k$-points are the image of $\XX_1(R_1)$ under $\tau$. It lies isomorphically over $\XX_0$ via $\varpi$. Now if $f\colon \sA_1\to \sA_1$ is any $R_1$-morphism whose base change to the central fibre has zero differential (such as, for example, the multiplication by $p$ map), we get a factorization \[ \xymatrix{ \sA_1(R_1) \ar[rr]^f \ar[rd] & & \sA_1 (R_1)\\ & \sA_0(k) \ar[ru] & } \] Denote by $\YY_0 \subset \sA_0$ the \emph{reduced} preimage of $\XX_0$ under the multiplication by $p$ map on $\sA_0$. Then, in particular, all points in $\YY_0(k)$ give in this way unique points in $p \sA_1 (R_1)$ which we can specialise again to $V_0(k)$: it turns out, \cite[Prop. 3.3.1]{Ray83-1}, that the resulting set $\YY_0'(k) \subset V_0(k)$ is the set of $k$-points of another projective curve $\YY_0'\subset V_0$, and we are interested in computing the intersection number $\XX_0'.\YY_0'$ in $V_0$ (or better its compactification $\P \bigl( \NN_{\XX_0/\sA} \bigr) = \P \bigl( \NN_{\XX_0/\sA_0}\oplus \NN_{\XX_0/\XX}\bigr) = \P \bigl( \NN_{\XX_0/\sA_0}\oplus \OO_{\XX_0}\bigr)$).
Indeed, the $k$-points of $\XX_0'\cap \YY_0'$ map, by construction, surjectively onto the set $\mathrm{im} \left( p \sA_1 (R_1) \cap \XX_1 (R_1) \to \XX_0(k) \right)$ whose cardinality we are trying to bound. Here it is also essential to notice that $\XX_0'\cap \YY_0'$ is actually finite under Assumption \ref{aSimplest}; indeed, under that assumption, $\XX_0'$ is irreducible of genus $5$, in particular, does not contain any elliptic component. Then the finiteness follows from \cite[proof of Thm. 4.4.1]{Ray83-1} as well as, in this special case, from \cite{Ray83-2}. \medskip The Picard group of the projective bundle $ \P \bigl( \NN_{\XX_0/\sA_0}\oplus \OO_{\XX_0}\bigr)$ can be generated by the zero section $\XX_0'=\P \bigl( \OO_{\XX_0}\bigr)$ and the class of a fibre, and since $\YY_0'$ is contained in the finite part (the complement of the infinity section), intersecting with the infinity section tells us that $\YY_0'$ is a multiple of $\XX_0'$. Moreover, intersecting with a fibre, we see that \[ \YY_0' \equiv \delta \XX_0' \] where $\delta$ is the degree of $\varpi \colon \YY_0' \to \XX_0$. Since the normal bundle of $\XX_0'$ in $\P \bigl( \NN_{\XX_0/\sA_0}\oplus \OO_{\XX_0}\bigr)$ is nothing but $\NN_{\XX_0/\sA_0}$, we get \[ \XX_0'.\YY_0' = \delta (\XX_0.\XX_0)_{\sA_0}= 8 \delta . \] Recall that $\XX_0$ is the preimage in $\sA_0$ of the diagonal in $\P^1_k\times \P^1_k$ under a $4:1$ covering map, whence the factor $8$ in the preceding formula. \medskip Thus to finish the proof we need to bound $\delta = \deg \varpi$; more precisely, we need to show \[ \delta \le p^3. \] For this, remark that by construction there is a commutative diagram \[ \xymatrix{ \YY_0 \ar[rd]_{(\cdot p)\mid_{\YY_0}} \ar[r]^{\theta} & \YY_0' \ar[d]^{\varpi}\\ & \XX_0 } \] Thus $\delta$ is bounded from above by the degree of $(\cdot p)\mid_{\YY_0}$. We will show that the latter is less than or equal to $p^3$.
Indeed, the multiplication by $p$-map on the abelian surface $\sA_0$ has degree $p^4$, but it factors over the relative Frobenius. Since $\YY_0$ is defined to be the reduced preimage of $\XX_0$ under this map, we get the desired bound. \end{proof} \section{Refinements according to the reduction type and Frobenius liftability}\label{sRefinements} In certain cases, the bounds obtained in Theorem \ref{tCoarseBounds} can be substantially refined. First recall the following definition (cf.\@ \cite[V.3 Theorem 3.1]{Sil09}). \begin{definition}\label{dSupersingularOrdinary} Let $E_0$ be an elliptic curve over $k=\overline{\mathbb{F}}_p$, and denote by $E_0(k)[p]$ the group that is the kernel of the multiplication by $p$-map $E_0(k) \to E_0(k)$. Then $E_0$ is called \emph{ordinary} if $E_0(k)[p]\simeq \Z/p\Z$ and \emph{supersingular} if $E_0(k)[p]= \{0\}$. \end{definition} If $E_0$ is an elliptic curve over $k$, then $E_0$ is ordinary if and only if one has a factorisation \[ \xymatrix{ E_0 \ar[rd]^{\mathrm{Fr}} \ar[rr]^{\cdot p} & & E_0 \\ & E_0' \ar[ru]^{g} & } \] where $\mathrm{Fr}$ is the relative Frobenius of degree $p$, $E_0'$ the Frobenius twist of $E_0$, and $g$ is \'{e}tale of degree $p$. An elliptic curve $E_0$ as above is supersingular if and only if the multiplication by $p$ map factors as \[ \xymatrix{ E_0 \ar[rd]^{\mathrm{Fr}^2} \ar[rr]^{\cdot p} & & E_0 \\ & E_0'' \ar[ru]^{g} & } \] where $g$ is an isomorphism. \begin{proposition}\label{pBothSupersingular} Keeping all the assumptions of Theorem \ref{tCoarseBounds} and assuming in addition that the reductions $\EE_{1, 0}$ and $\EE_{2, 0}$ of the curves $E_1$ and $E_2$ are both supersingular, we have \[ t (E_1, \pi_1, E_2, \pi_2, p') \le 2p^2 + 8. \] \end{proposition} \begin{proof} Indeed, this will follow if we can show that the quantity $\delta$ appearing in the proof of Theorem \ref{tCoarseBounds} is bounded by $p^2$ in this case. It suffices to show that this is so for the degree of $(\cdot p)\mid_{\YY_0}\colon \YY_0 \to \XX_0$.
In this case, by definition, $\YY_0$ is isomorphic to the second Frobenius twist of $\XX_0$ and $(\cdot p)\mid_{\YY_0}$ is the second power of the Frobenius, so it has degree $p^2$. \end{proof} As the previous proof illustrates, improving the bounds is closely connected to improving the bounds on $\delta = \deg \varpi$. Under certain conditions, one can get such better bounds also in the case when $\sA_0$ is ordinary. So we will now consider the case when $E_1, E_2$ have good ordinary reduction. The idea is to look at the connected-\'{e}tale exact sequence for the finite flat group scheme of $p$-torsion points $\GG_p$ on $\sA_1\to \mathrm{Spec}\, R_1$ \[ \xymatrix{ 0 \ar[r] & \GG_p^0 \ar[r] & \GG_p \ar[r] & \GG_p^{et} \ar[r] & 0 } \] (here $\GG_p^0 \simeq \bbmu_p\times \bbmu_p$ and $\GG_p^{et} \simeq (\Z/p\Z)^2$ under the assumption that both $\EE_{i, 0}=\EE_i\times_{\mathrm{Spec}\, R} \mathrm{Spec}(k)$ are ordinary elliptic curves). If $\HH\subset \GG_p^{et}$ is an \'{e}tale subgroup scheme over which the preceding sequence splits, i.e. if there exists a subgroup scheme $\widetilde{\HH}$ of $\GG_p $ mapping isomorphically onto $\HH$, then the multiplication by $p$-map factors \[ \xymatrix{ \sA_1 \ar[rr]_{\cdot p} \ar[rd]^q & & \sA_1\\ & \BB=\sA_1/\widetilde{\HH} \ar[ru]^r & } \] where $q$ is \'{e}tale and $r$ restricted to the central fibre has differential zero, whence letting $\ZZ_0$ be the reduced preimage of $\XX_0$ under the map \[ \BB_0 = \BB\times_{\mathrm{Spec}\, R_1} \mathrm{Spec}\, k \to \sA_1 \times_{\mathrm{Spec}\, R_1} \mathrm{Spec}\, k =\sA_0 \] we get a factorisation $\theta= \theta''\circ \theta'$ \[ \xymatrix{ \YY_0 \ar[rrd]_{(\cdot p)\mid_{\YY_0}} \ar[r]^{\theta'} & \ZZ_0 \ar[r]^{\theta''} \ar[rd]& \YY_0' \ar[d]^{\varpi}\\ & & \XX_0 } \] Here $\theta'$ has degree equal to the order of $\widetilde{\HH}$. Thus, in this case, $\delta$, the degree of $\varpi$, is bounded by \[ \frac{p^3}{| \widetilde{\HH} |}.
\] \begin{proposition}\label{pRefinementFrobLiftability} Keep the assumptions of Theorem \ref{tCoarseBounds} and assume in addition that the reductions $\EE_{1, 0}$ and $\EE_{2, 0}$ of the curves $E_1$ and $E_2$ are both ordinary. Suppose that the connected-\'{e}tale exact sequence \[ \xymatrix{ 0 \ar[r] & \bbmu_p \ar[r] & (\EE_i \times_{\mathrm{Spec}\, R} \mathrm{Spec}\, R_1 )_p \ar[r] & \Z/p\Z \ar[r] & 0 } \] for the finite flat group scheme of $p$-torsion points $(\EE_i\times_{\mathrm{Spec}\, R} \mathrm{Spec}\, R_1)_p$ splits for \emph{one of the curves} $\EE_i$. Then \[ t (E_1, \pi_1, E_2, \pi_2, p') \le 2p^2 + 8. \] If this sequence splits for \emph{both curves} we have \[ t (E_1, \pi_1, E_2, \pi_2, p') \le 2p + 8. \] \end{proposition} \begin{proof} This is immediate from the preceding reasoning and the proof of Theorem \ref{tCoarseBounds}. \end{proof} Hence it becomes interesting to ascertain when, given elliptic curves $\EE_i\times_{\mathrm{Spec}\, R} \mathrm{Spec}\, R_1 \to \mathrm{Spec}\, R_1$ with ordinary reduction, the connected-\'{e}tale exact sequence for the finite flat group scheme of $p$-torsion points splits. \medskip Recall that $R_1=W_2(k)$, $W_2(k) = W(k)/p^2$, and that, as a \emph{set}, $W_2(k) = k \times k$ with addition and multiplication defined explicitly by \begin{align*} (x_0,x_1)+ (y_0,y_1) &:= \bigl( x_0+y_0, x_1+y_1 - \sum_{i=1}^{p-1} \frac{(p-1)!}{i! (p-i)!} x_0^i y_0^{p-i} \bigr)\\ & =\bigl( x_0+y_0, x_1+y_1 - \frac{(x_0+y_0)^p -x_0^p-y_0^p}{p} \bigr) \\ (x_0,x_1)\cdot (y_0,y_1) &:= \bigl( x_0y_0 , x_0^py_1 + y_0^px_1 + px_1y_1 \bigr) = \bigl( x_0y_0 , x_0^py_1 + y_0^px_1 \bigr). \end{align*} (The formulas defining addition and multiplication, including the term $px_1y_1$, work more generally for any ring $A$ to give $W_2 (A)$; the last equality in the formula for multiplication uses that $k$ has characteristic $p$.) The Frobenius induces a homomorphism \[ \mathrm{Fr}\colon W_2 (k) \to W_2 (k), \quad (x_0, x_1) \mapsto (x_0^p, x_1^p).
\] \begin{lemma}\label{lSplittingPTorsion} Let $\EE_1 \to \mathrm{Spec}\, R_1$ be an elliptic curve with ordinary reduction $E_0/k$. The following are equivalent: \begin{enumerate} \item the connected-\'{e}tale exact sequence \[ \xymatrix{ 0 \ar[r] & \bbmu_p \ar[r] & (\EE_1)_p \ar[r] & \Z/p\Z \ar[r] & 0 } \] for the finite flat group scheme of $p$-torsion points $(\EE_1)_p$ splits. \item The (relative) Frobenius morphism $\mathrm{Fr}\colon E_0 \to E_0'$ lifts to a morphism \[ \xymatrix{ F\colon \EE_1 \ar[rr]\ar[rd] & & \EE_1' \ar[ld]\\ & \mathrm{Spec}\, R_1 & } \] Here $\EE_1'$ is the pull-back of $\EE_1$ under the Witt vector Frobenius $\mathrm{Fr}\colon R_1 \to R_1$. \end{enumerate} \end{lemma} \begin{proof} The properties in the statement are equivalent to $\EE_1 \to \mathrm{Spec}\, R_1$ being the canonical lift of $E_0/k$ in the sense of Serre-Tate, and the proof requires some background concerning Serre-Tate canonical lifts, compare \cite{Ka78}, \cite[Section 2.10]{Hi12}, and the Appendix by M.V. Nori and V. Srinivas to \cite{MS87}. \medskip Suppose first that a) holds, so this exact sequence splits. Now b) is equivalent to $\EE_1 \to \mathrm{Spec}\, R_1$ being the canonical lift of $E_0/k$ in the sense of Serre-Tate (which is unique) by the Appendix by M.V. Nori and V. Srinivas to \cite{MS87}, Theorem 1) (and its proof, compare also Proposition 1 ibidem). So we have to prove that the splitting of the exact sequence tells us that $\EE_1 \to \mathrm{Spec}\, R_1$ is the canonical lift. Let \[ T_p E_0 = \varprojlim_n E_0[p^n] \] be the Tate module of $E_0$. One knows that for a local artinian $k$-algebra $A$ there is an isomorphism, functorial in $A$, between infinitesimal deformations of $E_0$ over $A$ and $\Z_p$-bilinear maps \[ q\colon T_p(E_0) \times T_p(E_0) \to 1+\mathfrak{m}_A \] where $\mathfrak{m}_A$ is the maximal ideal of $A$ (``Serre-Tate coordinates''), see \cite[Thm. 2.10.5]{Hi12} or \cite[Thm. 2.1]{Ka78}.
(Actually it is neater to think of the second factor in the source of the pairing as $T_p(E_0^t)$, the Tate module of the dual abelian variety, which in our case is again isomorphic to $E_0$.) So we need to check that under our hypothesis on the splitting of the sequence, the $q$-pairing is trivial (the canonical lift corresponds to the trivial pairing). In our case, the target $1+\mathfrak{m}_{R_1} = 1+ (p)$ is annihilated by $p$, so the pairing already factors over a pairing \[ T_p(E_0)/p \times T_p(E_0)/p \simeq E_0[p] \times E_0[p] \to 1+\mathfrak{m}_{R_1}. \] The construction of $q$ is described in \cite[p. 151/152]{Ka78} or \cite[pp. 218--221]{Hi12}: in our case, for the pairing to be trivial, we only need to check that the composite \[ \xymatrix{ T_p(E_0) \ar@{->>}[r] & E_0[p] \ar[r]^{``p"\quad\quad\quad\quad\quad} & \mathrm{Hom}_{\Z_p} (T_p(E_0), 1+\mathfrak{m}_{R_1}) } \] is trivial, where the homomorphism $``p"$ is defined as follows: for $x\in E_0[p]$, pick a lift $\tilde{x} \in \EE_1(R_1)$ of $x$; then $p\tilde{x}$ does not depend on the chosen lift, and can be identified with an element in $\mathrm{Hom}_{\Z_p} (T_p(E_0), 1+\mathfrak{m}_{R_1})$; however, if the sequence in a) splits we can choose a lift in $\EE_1(R_1)$ of order $p$, whence $p\tilde{x}$ is trivial. \medskip Now suppose that b) holds, i.e.\@ the Frobenius lifts. Then again by the Appendix by M.V. Nori and V. Srinivas to \cite{MS87}, Theorem 1), $\EE_1 \to \mathrm{Spec}\, R_1$ is the canonical lift of $E_0/k$. We can extend it to the canonical lift $\EE \to \mathrm{Spec}\, R$ over the entire Witt vectors (not just the first order truncation).
But by Serre-Tate theory, lifts of $E_0$ to $\mathrm{Spec}\, R$ correspond to lifts of the $p$-divisible group scheme of torsion points of order a power of $p$ on $E_0$ to $\mathrm{Spec}\, R$, and the Serre-Tate canonical lift is precisely characterised by the fact that this lift splits into the unique lift of the \'{e}tale height $1$ group and the group of multiplicative type. Thus in particular, the exact sequence in a) splits. \end{proof} It is interesting and necessary for applications to have a way to test when b) of Lemma \ref{lSplittingPTorsion} holds for a concretely given $\EE_1 \to \mathrm{Spec}\, R_1$. By Thm. 1, 3) of the Appendix to \cite{MS87}, if we let $L$ be a degree $1$ line bundle on $E_0$, associated to the given origin of $E_0$, it lifts uniquely to a line bundle $\LL$ on $\EE_1 \to \mathrm{Spec}\, R_1$ such that $F^* \LL' \simeq \LL^{\otimes p}$ (where $\LL'$ is the line bundle on the Frobenius twist $\EE_1'$ induced by pull-back of $\LL$). If we use $\LL^{\otimes 3}$ and $(\LL')^{\otimes 3}$ to embed $\EE_1 \to \mathrm{Spec}\, R_1$ and $\EE_1' \to \mathrm{Spec}\, R_1$ into $\P^2_{R_1}$ (with homogeneous coordinates $x,y,z$), the Frobenius lift $F$ is given by a triple of homogeneous polynomials of degree $p$ that reduce to $(x^p, y^p, z^p)$ on the central fibre. This gives a way to decide algorithmically if a given lift $\EE_1 \to \mathrm{Spec}\, R_1$ is the canonical lift or not. In fact, it is advantageous to work with all possible lifts at once. \medskip We will illustrate the algorithm in a simple case. Suppose we are given a homogeneous degree $3$ polynomial $e \in \Z[x,y,z]$ such that its reduction $e_p \in \F_p[x,y,z]$ is the equation of a smooth plane cubic. Also write $f = (x^p, y^p, z^p)$. Then \[ e (f) - e^p \equiv 0 \, \mod \, p \] and thus $e (f) - e^p = pd$ for some homogeneous polynomial $d$ of degree $3p$. Let $f+pf'$, where $f'$ is another triple of homogeneous degree $p$ polynomials, be a candidate lift of the Frobenius modulo $p^2$.
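The divisibility $e(f)-e^p\equiv 0 \, \mod \, p$, and the resulting polynomial $d$, can be checked and computed in any computer algebra system. The following minimal Python/sympy sketch verifies the congruence and extracts $d$ (the cubic $e$ and the prime $p$ below are hypothetical choices for illustration, not taken from the text):

```python
# Verify e(x^p, y^p, z^p) - e^p ≡ 0 (mod p) and compute the quotient d,
# for a sample cubic e with integer coefficients and a small prime p.
# (e and p are hypothetical choices for illustration.)
import sympy as sp

x, y, z = sp.symbols('x y z')
p = 3
e = y**2 * z - x**3 - x * z**2 - z**3   # a homogeneous cubic in Z[x,y,z]

# f = (x^p, y^p, z^p), the coordinatewise p-th power map
ef = e.subs({x: x**p, y: y**p, z: z**p})
diff = sp.expand(ef - e**p)

# every coefficient of e(f) - e^p is divisible by p
coeffs = sp.Poly(diff, x, y, z).coeffs()
assert all(c % p == 0 for c in coeffs)

# d = (e(f) - e^p)/p is homogeneous of degree 3p
d = sp.expand(diff / p)
assert sp.Poly(d, x, y, z).total_degree() == 3 * p
```

The subsequent condition on $f'$ and $c$ is linear in their unknown coefficients, so it can be set up over $\F_p$ in the same way and solved by linear algebra or, as indicated below, a Gr\"obner basis computation.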
Taylor expansion gives \[ e(f+pf') = e(f) + p\, (\grad e)(f) f' + p^2 \cdot (\mathrm{remainder}) \] and thus \[ e(f+pf') - e^p \equiv p (d + (\grad e)(f) f' ) \, \mod \, p^2 . \] Then $f +pf'$ is a lift of the Frobenius if and only if the last expression is a multiple of $e$ modulo $p^2$, meaning one can write it as $p\, c \cdot e$ for another unknown polynomial $c$. Thus we have to solve \[ p (d + (\grad e)(f) f' - c e)\equiv 0 \, \mod \, p^2 \] or \[ d + (\grad e)(f) f' - c e\equiv 0 \, \mod \, p \] for $f'$ and $c$. This can be done by a Gr\"obner basis calculation. \section{Bounds for curves with good reduction at two given primes}\label{pBoundsTwoPrimes} So far we have only discussed how to obtain a bound on the image of coprime-to-$p$ torsion \[ t (E_1, \pi_1, E_2, \pi_2, p'):= \left| \pi_1\left( E_1[\infty]^{(p')} \right) \cap \pi_2 \left( E_2[\infty]^{(p')} \right) \right| \] Naively, one can take another prime $q\neq p$ satisfying Assumption \ref{aBasic} and obtain a bound on $p$-primary torsion \[ t (E_1, \pi_1, E_2, \pi_2, p):= \left| \pi_1\left( E_1[\infty]^{(p)} \right) \cap \pi_2 \left( E_2[\infty]^{(p)} \right) \right| \] by simply noting that $E_i[\infty]^{(p)}\subseteq E_i[\infty]^{(q')}$. In \cite{Ray83-1}, Raynaud describes how to combine the two bounds into a total torsion bound. For this, it is equivalent and more convenient to work with the abelian scheme $\sA$ and the curve $\XX$ and instead discuss how to obtain bounds on $t_{E_{1}\times E_{2},X}$, the number of (geometric) torsion points of $A=E_1 \times E_2$ that lie on $X$. For this we denote by $t_{A,X,p'}$ and $t_{A,X,p}$ the number of coprime-to-$p$ and of $p$-primary torsion points lying on $X$, respectively. It will be necessary for us to make the following definition. \begin{definition}\label{dLargeGaloisActions} Let $G_{K}$ denote the absolute Galois group of $K$ and let $M$ be a $p$-divisible $G_{K}$-module.
We say that the action of $G_{K}$ on $M$ is \textit{large} if the size of the orbit $G_{K}\cdot x$ of an element $x\in M$ tends to infinity as the order of $x$ tends to infinity. Specifically, for any integer $N>0$ there exists an integer $r$ such that for any element $x\in M$ of order $>p^r$ we have \[|G_{K}\cdot x | > N\] \end{definition} Of course, we are interested in the case when $M=A[\infty]^{(p)}$ is the $p$-primary torsion of $A$. It is immediate that the torsion of $A$ may be decomposed into $p$-primary and coprime-to-$p$ parts \[A[\infty] = A[\infty]^{(p)}\oplus A[\infty]^{(p')}\] We will combine the bounds on the two types of torsion to obtain a total bound. \begin{proposition}\label{pTotalTorsionBounds} If the action of $G_{K}$ on $A[\infty]^{(p)}$ is large then there exists a constant $c$ such that \[t_{A,X} \leq c\] \end{proposition} To do this, we are required to strengthen Theorem \ref{tCoarseBounds} and show that the bounds obtained are invariant under translation (cf.\@ \cite[Theorem 4.4.1]{Ray83-1}). \begin{lemma}\label{lCoprimeTorsionBoundsforTranslate} Let $a\in A(\bar{K})$. Then if $(X+a) \hookrightarrow A$ denotes the translation of the curve $X$ by $a$ we have \[t_{A,X+a,p'} \leq 8p^3\] \end{lemma} \begin{proof} Of course, what we have shown already is the case $a=0$. First consider the case that $a\in A(K)$. Then we may repeat the methods of Theorem \ref{tCoarseBounds}. Namely, we can take $a\in \sA(R)\cong A(K)$ and prove that \[ \left| \mathrm{im} \left( p \sA_1 (R_1) \cap (\XX_1+a_1) (R_1) \to \XX_0(k) \right) \right| \le 8p^3. \] where $a_1 \in \sA_1(R_1)$ is the reduction mod $p^2$ of $a$. We write \[\Lambda(a_1)=\mathrm{im} \left( p \sA_1 (R_1) \cap (\XX_1+a_1) (R_1) \to \XX_0(k) \right)\] As $k$ is algebraically closed, there exists an element $b_{0}\in \sA(k)$ such that $pb_0=a_0$. Picking any lifting $b_1$ of $b_0$ we have that $a_1=pb_1+c_1$ where $c_1$ is in the kernel of reduction.
\begin{enumerate}[label=(\roman*)] \item First we assume that $a_1=pb_1$. Then we see that $\Lambda(pb_1)=\Lambda(0)+pb_0$ and so these sets are of the same cardinality. \item Now assume that $a_1=c_1$ is in the kernel of reduction. Then, as in Theorem \ref{tCoarseBounds}, the bundle $V_0$ and curve $\YY_0'$ remain unchanged (as they depend only on the special fibre of the translation) whereas the zero section $\XX_0'$ of $V_0$ changes, corresponding to $\XX_1+c_1$ being a different choice of lifting for $\XX_0 $. However, this new lifting still has normal bundle $\NN_{\XX_0/\sA_0}$ in $\P \bigl( \NN_{\XX_0/\sA} \bigr)$ and so we still have the same bound of $8\delta$ for $\Lambda(c_1)$, using the notation of the Theorem. \end{enumerate} Now assume that $a\in A(\bar{K}) \setminus A(K)$. Consider the $R$-group scheme $\GG$ of automorphisms of $\XX$ that come from translations by elements of $\sA$. As $\XX$ is smooth and irreducible with fibres of genus $5$, this is a finite group scheme. Moreover, $\XX$ has no infinitesimal automorphisms, so $\GG$ is unramified. Thus, $X\neq X+a$ and there exists $\sigma \in G_{K}$ such that \[X+a^\sigma \neq X+a\] On the other hand, $A[\infty]^{(p')}$ is unramified so that \[(X+a)(\bar{K})\cap A[\infty]^{(p')} \subseteq (X+a^\sigma)(\bar{K}) \cap (X+a)(\bar{K})\] As this is the intersection of two irreducible curves which are not equal, it can be bounded by $(X+a)\cdot (X+a) = X\cdot X = 8$. \end{proof} Similarly, for any $a\in A(\bar{K})$, we repeat the above argument for a second prime $q\neq p$ satisfying Assumption \ref{aBasic} and obtain the following. \begin{lemma}\label{lpPrimaryTorsionBounds} For any $a\in A(\bar{K})$, we have that \[t_{A,X+a,p} \leq 8q^3\] \end{lemma} We can now prove Proposition \ref{pTotalTorsionBounds}. \begin{proof} Suppose that $x \in X(\bar{K}) \cap A[\infty]$.
Then there exists a unique decomposition $x=x'+x''$ where $x'\in (X-x'')(\bar{K})\cap A[\infty]^{(p')}$ and $x''\in (X-x')(\bar{K})\cap A[\infty]^{(p)}$. As $A[\infty]^{(p')}$ is unramified, $x'\in A(K)$, so $X-x'$ is a $K$-curve and thus \[G_{K}\cdot x'' \subseteq (X-x')(\bar{K}) \cap A[\infty]^{(p)}\] Thus, using Lemma \ref{lpPrimaryTorsionBounds}, we have that $|G_{K}\cdot x''| \leq 8q^3$. As the Galois action is large, there exists some $r>0$ (depending on $q$) such that the order of $x''$ is at most $p^r$. As $\textnormal{dim}(A)=2$, there are at most $|A[p^r]|=p^{4r}$ possibilities for $x''$. For each such possibility, the number of possibilities for $x' $ is bounded by \[|(X-x'')(\bar{K})\cap A[\infty]^{(p')}| \leq 8p^3\] and so in total \[t_{A,X} = |X(\bar{K}) \cap A[\infty]| \leq c:=8p^{4r+3}\] \end{proof} From the proposition it is clear that one has to understand when the Galois action on the $p$-primary torsion of an elliptic curve is large. Moreover, to obtain tractable bounds for the whole torsion, one needs to understand how the size of the Galois orbits $G_{K}\cdot x$ grows with the order of $x$. \begin{remark}\label{rGaloisOrbitsandFieldExtensions} As $K$ is maximally unramified, $G_{K}$ coincides with the inertia group $I_{K}$. It is immediate that for a $p$-primary torsion point $x\in E(\bar{K})$ we have the following relationship \[[K(x):K]=|I_{K}\cdot x|\] where $K(x)$ is the smallest extension of $K$ over which the point $x$ is defined. \end{remark} The answers to both questions raised are given by the following. \begin{proposition}\label{pGaloisActionsGoodReduction} Let $E/K$ be an elliptic curve with good reduction where $K$ is a maximally unramified extension of $\mathbb{Q}_{p}$ with valuation $v$ (normalised so that $v(p)=1$) and valuation ring $R$. \begin{enumerate}[label=(\roman*)] \item If $E$ has ordinary reduction with Serre-Tate coordinate $\lambda \in 1+pR$ then the action on the $p$-primary torsion of $E$ is large if and only if $\lambda \neq 1$.
When $\lambda=1$, i.e.\@ $E$ is the canonical lifting of its reduction in the sense of Serre-Tate, there is a splitting \[E[\infty]^{(p)}\cong \mu_{p^\infty} \oplus \mathbb{Q}_p/\mathbb{Z}_p\] as $I_K$-modules, where $\mu_{p^\infty}$ is the $p$-divisible group formed from the $p^n$-th roots of unity. One may repeat the methods of Theorem \ref{tCoarseBounds} to bound common torsion points in \[E[\infty]^{(p')}\oplus \mathbb{Q}_p/\mathbb{Z}_p\] as these points are unramified and the group is $p$-divisible. Then, using the new decomposition \[E[\infty] \cong \left(E[\infty]^{(p')}\oplus \mathbb{Q}_p/\mathbb{Z}_p\right) \oplus \mu_{p^\infty}\] one may now apply Proposition \ref{pTotalTorsionBounds}, noting that the action on $\mu_{p^\infty}$ is large. \item If $E$ has ordinary reduction with Serre-Tate coordinate $\lambda \neq 1$ then, with respect to the short exact sequence \[0 \rightarrow \mu_{p^r} \hookrightarrow E[p^r] \twoheadrightarrow \mathbb{Z}/p^r \mathbb{Z} \rightarrow 0,\] the representation \[I_{K} \rightarrow \textnormal{Aut}(E[p^r])\] is given by the matrix \[\begin{bmatrix} \chi_{p^r} & \theta_{p^r}\\ 0 & 1 \end{bmatrix}\] where $\theta_{p^r}$ is defined, for $\sigma \in I_K$, by \[ \sigma(\lambda^{\frac{1}{p^r}})=\zeta_{p^r}^{\theta_{p^r}(\sigma)} \cdot \lambda^{\frac{1}{p^r}} \] for a fixed $p^r$-th root $\lambda^{\frac{1}{p^r}}$ of $\lambda$; here $\zeta_{p^r}$ is a primitive $p^r$-th root of unity and $\chi_{p^r}$ is the cyclotomic character. Thus, if $x\in E[p^r]$ is of exact order $p^r$ then either: \begin{enumerate} \item $\lambda^{\frac{1}{p^r}} \in K$; then the exact sequence above splits and \[ |I_K \cdot x|= \begin{cases} 1& \text{if } x\in \mathbb{Z}/p^r \mathbb{Z}\\ p^r-p^{r-1} & \text{otherwise} \end{cases} \] \item $\lambda^{\frac{1}{p^r}} \notin K$; then \[|I_K \cdot x|= p^{r}-p^{r-1}\] \end{enumerate} \item If $E$ has good supersingular reduction, then the Galois action on the $p$-primary torsion is large.
Moreover, if $x$ is of exact order $p^r$ then \[[K(x):K]=p^{2r}-p^{2r-2}\] \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label=(\roman*)] \item As discussed in \cite[5.3]{Ray83-1}, to bound the torsion of $A=E_1\times E_2$ lying on $X$ we are required to define a splitting $A[\infty]=T'\oplus T''$ where $T'$ has trivial $G_K$ action and is $p$-divisible and $T''$ has large Galois action. Clearly, it is sufficient to find such a decomposition for each curve $E_i$. In general, this will mean splitting into coprime-to-$p$ and $p$-primary torsion respectively. However, if $E_i$ is a canonical lifting of an ordinary elliptic curve, then there is a height $1$, $p$-divisible subgroup $H \subseteq E_i[\infty]^{(p)}$ which is $G_K$-trivial. Writing $T''_i$ for the complement of $H$ in $E_i[\infty]^{(p)}$ and setting $T'_i=H \oplus E_i[\infty]^{(p')}$, we obtain the desired decomposition for $E_i$. \item See \cite[Appendix]{Kr97}. \item The case $r=1$ is \cite[1.9 Prop.\@ 9]{Se72} and this result is extended to all $p^r$-torsion in \cite[Cor.\@ 5.2]{Sm23}. \end{enumerate} \end{proof} \begin{remark}\label{rValuationofSerreTateCoordinate} To obtain bounds on torsion in the ordinary reduction case, it is important to determine the integer $r$ such that $\lambda^{\frac{1}{p^r}} \in K$ but $\lambda^{\frac{1}{p^{r+1}}} \notin K$, i.e.\@ \[\lambda \in (1+pR)^r=1+p^r R\] and so $v(\lambda-1)=r$. In \cite[Appendix, Prop.\@ 3]{Kr97}, Kraus determines formulae for this value. If $E_0$ is the reduction of $E$, then for the special values $j(E_0)=0$ or $1728$, the valuation is determined by a minimal Weierstrass equation for $E$. Otherwise, we have the equality \[v(\lambda-1)=v(j(E)-j_{can}(E_0))\] where $j_{can}(E_0)$ is the $j$-invariant of the canonical lifting of $E_0$. The question of determining $j_{can}(E_0)$ as a function of $j(E_0)$ has been explored in \cite{Fin10} and one may compute an approximation of this $j$-invariant for small primes.
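To illustrate how the orbit-size formulae feed into Proposition \ref{pTotalTorsionBounds}, here is a rough numerical sketch of the counting in its proof for the supersingular case, pretending that the single-curve formula $[K(x):K]=p^{2r}-p^{2r-2}$ of part (iii) governs the orbit of the $p$-primary part $x''$; the primes below are hypothetical choices for illustration.

```python
# Rough sketch of the total-torsion bound t_{A,X} <= 8 p^(4r+3) from the
# proof of Proposition pTotalTorsionBounds, assuming (hypothetically) that
# the p-primary part x'' has Galois orbit of size p^(2s) - p^(2s-2) when it
# has exact order p^s (the supersingular orbit formula for a single curve).

def total_torsion_bound(p: int, q: int) -> int:
    # Lemma lpPrimaryTorsionBounds bounds the orbit of x'' by 8q^3, so the
    # order of x'' is at most p^r for the smallest r such that points of
    # order p^(r+1) already have orbits larger than 8q^3.
    r = 0
    while p**(2 * (r + 1)) - p**(2 * r) <= 8 * q**3:
        r += 1
    # at most |A[p^r]| = p^(4r) choices for x'' (dim A = 2), and at most
    # 8p^3 choices of x' for each, by Lemma lCoprimeTorsionBoundsforTranslate
    return 8 * p**(4 * r + 3)

# illustrative primes
bound = total_torsion_bound(5, 7)   # here r = 2, so the bound is 8 * 5^11
```

The point of the sketch is only that the constant $c$ of Proposition \ref{pTotalTorsionBounds} is effectively computable once the orbit growth is explicit.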
\end{remark} Let $E/K$ be an elliptic curve with good ordinary reduction. Let $E[p]$ be the $p$-torsion subgroup (viewed as a Galois module), $\EE$ the N\'eron model of $E$ and $\GG$ the $p$-torsion group scheme of $\EE$. In view of the above remarks, it is useful to crystallise the relationship between the finite flat group scheme $\GG$ and the Galois module $\GG_K = E[p]$. \begin{lemma}\label{lFiniteFlatGroupSchemesandGKModules} Let $\GG$ and $\HH$ be finite flat group schemes and view $\GG_K$ and $\HH_K$ as $G_K=\mathrm{Gal}(\bar{K}/K)$-modules. Then the natural map \[\mathrm{Hom}_R(\GG,\HH)\longrightarrow \mathrm{Hom}_{G_K}(\GG_K,\HH_K)\] is an isomorphism, i.e.\@ the functor from the category of finite flat group schemes over $R$ to the category of $G_K$-modules is fully faithful. \end{lemma} \begin{proof} As $K$ has absolute ramification index $e=1$, this is a special case of \cite[4.5 Corollary]{Ta97}. \end{proof} \begin{corollary}\label{cExactSequencesofFFGSs} Let \[0 \to \GG' \to \GG \to \GG'' \to 0\] be an exact sequence of finite flat group schemes over $R$. Then this sequence splits if and only if the corresponding exact sequence of $G_K$-modules is split. \end{corollary} \section{Bounds for the cases when one or both of the curves have bad multiplicative reduction}\label{sGoodMult} We now wish to relax the conditions in Assumption \ref{aBasic} so that we can also say something about curves that have bad multiplicative reduction at the given prime $p$. We modify that Assumption now. \begin{definition}\label{dGoodModel} Suppose we are given a pair $(E, \pi)$ where $E$ is an elliptic curve defined over a number field $K$ and $\pi \colon E \to \P^1$ is a standard projection to $\P^1$, also defined over $K$. We say that $(E, \pi)$ has a \emph{nice model} if the following is true.
There exists a prime $p$ and a place $v$ of $K$ unramified over $p$ such that, letting $R = W(\overline{\mathbb{F}}_p)$ be the Witt vectors over $k:=\overline{\mathbb{F}}_p$ with fraction field $F= \widehat{K^{\mathrm{ur}}_v} \supset K$, one of the two scenarios below holds. \begin{enumerate} \item There is an abelian scheme \[ \EE \to \mathrm{Spec}\, R , \] whose geometric generic fibre is equal to the base change to $F$ of the given elliptic curve $E$ defined over $K$, whose central fibre $\EE_0$ is ordinary, and there is an $R$-morphism \[ \xymatrix{ \EE \ar[rd] \ar[rr]^{\pi_{R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] that is the composition of the quotient morphism $\EE \to \EE/\iota$, where $\iota$ is the fibrewise involution induced by the inversion map on the generic fibre, with an $R$-isomorphism $\EE / \iota \simeq \P^1_R$, and $\pi_{R}$ induces the given standard projection (base-changed to $F$) on the geometric generic fibre. \item There is a minimal Weierstrass model for $E \times_{\mathrm{Spec}\, K}\mathrm{Spec}\, F$: \[ \WW \to \mathrm{Spec}\, R \] with nodal rational central fibre $\WW_{0}$ and there is an $R$-morphism \[ \xymatrix{ \WW \ar[rd] \ar[rr]^{\pi_{R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] that is the composition of the quotient morphism $\WW \to \WW/\iota$, where $\iota$ is again the fibrewise involution induced by the inversion map on the generic fibre, and an $R$-isomorphism $\WW/\iota \simeq \P^1_R$ such that $\pi_{ R}$ induces the given standard projection (base-changed to $F$) on the geometric generic fibre. \end{enumerate} In case (a) we say $(E, \pi)$ has \emph{a nice model with good reduction} and in case (b) \emph{a nice model with bad reduction}. \end{definition} \begin{assumption}\label{aGoodBadReduction} Suppose now that $(E_1, \pi_1)$ and $(E_2, \pi_2)$ are two elliptic curves together with standard projections defined over a number field $K$.
We assume that each of them has a nice model \[ \xymatrix{ \EE_i \ar[rd] \ar[rr]^{\pi_{i, R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] with either good or bad reduction (note that from now on we will also use $\EE$ to denote a Weierstrass model to simplify notation in the sequel). We also assume the following: \begin{enumerate} \item The sets of branch points in $\P^1$ of the morphisms induced by the models of the standard projections $\pi_{1, R}$ and $\pi_{2, R}$ on the geometric generic fibres are disjoint. In addition, the sets of branch points in $\P^1$ of the morphisms induced by $\pi_{1, R}$ and $\pi_{2, R}$ on the normalisations of the special fibres are disjoint, too, and disjoint from the images in $\P^1_k$ of the nodes of the special fibres, which are themselves also required to be distinct. \item We will write $\sA \to \mathrm{Spec}\, R$ for the fibre-product of the two given models of the elliptic curves. Moreover, we will write \[ \XX = \left( \pi_{1, R} \times \pi_{2, R} \right)^{-1} \left( \Delta_{\P^1_R \times_R \P^1_R} \right) \] for the preimage of the diagonal, which is a proper flat $R$-curve. We will also denote by $\sA^{\circ} \subset \sA$ the largest open subscheme that is smooth over $\mathrm{Spec}\, R$, which is a group scheme, and by $\XX^{\circ}$ the restriction of $\XX$ to $\sA^{\circ}$. In analogy with notation used earlier, we will then also denote by $\sA^{\circ}_1$ the base change of $\sA^{\circ}$ to $\mathrm{Spec}\, R_1$, similarly define $\XX^{\circ}_1$ and denote by $\XX^{\circ}_0$ the central fibre of $\XX^{\circ}$. We will then assume that \[ \mathrm{im} \left( p \sA_1^{\circ} (R_1) \cap \XX^{\circ}_1 (R_1) \to \XX^{\circ}_0(k) \right) \] is finite.
\end{enumerate} \end{assumption} \begin{remark}\label{rImplicationsAssumptions} We will show below in the sections following Section \ref{sExtension} that under extra assumptions on the nice models, part a) of Assumption \ref{aGoodBadReduction} implies part b), but we are not able to show this in complete generality although it may be true. \end{remark} Note that one problem in carrying over the arguments used in the proof of Theorem \ref{tCoarseBounds} to the present context, where we assume $(E_1, \pi_1), (E_2, \pi_2)$ to be subject to Assumption \ref{aGoodBadReduction}, is that the central fibre of $\sA \to \mathrm{Spec}\, R$ is no longer necessarily nonsingular, so it is more subtle to do intersection theory on it. This problem can partially be circumvented by noting that the multiplication-by-$p$ map is still a rational map on the models that \emph{commutes} with the fibrewise involutions $\iota_1, \iota_2$, hence descends to a rational map from $\P^1_R \times_R \P^1_R$ to itself. In short, it is convenient, in the presence of these involutions, to transfer the entire argument based on the ideas in \cite{Ray83-1} to the product of projective lines over $R$. \medskip We first need to introduce some further notation and definitions, and prove auxiliary results. Everywhere below we suppose from now on that we are in the setup of Assumption \ref{aGoodBadReduction}. \begin{definition}\label{dMultiplicationByP} For $i=1,2$, we denote by \[ \xymatrix{ \EE_i \ar@{-->}[rr]^{\mathrm{mult}_{p, i}} \ar[rd] && \EE_i \ar[ld] \\ & \mathrm{Spec}\, R & } \] the multiplication by $p$ map, which is in general only a rational map. It is defined on the largest open subscheme of $\EE_i$ that is smooth over $\mathrm{Spec}\, R$, which is a group scheme.
Since the multiplication by $p$ map commutes with the fibrewise involutions given by taking inverses for the group law, we obtain an induced rational map, which we will denote by \[ \xymatrix{ \P^1_R \ar@{-->}[rr]^{\overline{\mathrm{mult}}_{p, i}} \ar[rd] && \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] For the sake of brevity, we will write \[ \mathrm{mult}_{p} = \mathrm{mult}_{p, 1} \times \mathrm{mult}_{p, 2} \] which is thus a rational map \[ \xymatrix{ \sA \ar@{-->}[rr]^{\mathrm{mult}_{p}} \ar[rd] && \sA \ar[ld] \\ & \mathrm{Spec}\, R & } \] and \[ \overline{\mathrm{mult}}_{p} =\overline{ \mathrm{mult}}_{p, 1} \times \overline{\mathrm{mult}}_{p, 2} \] for \[ \xymatrix{ \P^1_R \times_R \P^1_R \ar@{-->}[rr]^{\overline{\mathrm{mult}}_{p}} \ar[rd] && \P^1_R \times_R \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] Note that on the central fibres, all the rational maps extend uniquely to morphisms, and we denote these by a suffix $0$, so, for example, we obtain \[ \xymatrix{ \P^1_k \times_k \P^1_k \ar[rr]^{(\overline{\mathrm{mult}}_{p})_0} \ar[rd] && \P^1_k \times_k \P^1_k \ar[ld] \\ & \mathrm{Spec}\, k & } \] and \[ \xymatrix{ \sA_0 \ar[rr]^{(\mathrm{mult}_{p})_0} \ar[rd] && \sA_0 \ar[ld] \\ & \mathrm{Spec}\, k & . } \] Finally, we denote by $\YY_0 \subset \sA_0$ the reduced preimage of $\XX_0$ under $(\mathrm{mult}_{p})_0$, which is a curve lying over the reduced preimage $\overline{\YY}_0 \subset \P^1_k\times \P^1_k$ of the diagonal $\Delta_0 \subset \P^1_k \times_k \P^1_k$ under $(\overline{\mathrm{mult}}_{p})_0$. \end{definition} \begin{definition}\label{dSpecialPointInP1} Under our standing Assumption \ref{aGoodBadReduction} we define a point in $\P^1_k$ to be a \emph{special point} for $(\pi_{i, R})_0$, $i=1, 2$ if it is the image in $\P^1_k$ under \[ \xymatrix{ (\EE_i )_0 \ar[rr]^{(\pi_{i, R})_0} & & \P^1_k } \] of either a node on $(\EE_i )_0$, or a ramification point of the covering of $\P^1_k$ induced by $(\pi_{i, R})_0$ on the normalisation of $(\EE_i )_0$. 
\medskip We call a point $(x, y)\in \P^1_k \times \P^1_k$ \emph{special} if $x$ is special for $(\pi_{1, R})_0$ or $y$ is special for $(\pi_{2, R})_0$. \end{definition} \begin{remark}\label{rMeaningAss} Note that Assumption \ref{aGoodBadReduction} a) precisely amounts to saying that if $(x, y)\in \P^1_k \times \P^1_k$, then at most one of $x$ and $y$ can be special for a projection $(\pi_{i, R})_0$, but not both at the same time. \end{remark} \begin{definition}\label{dNodalPoint} We call a point $(x, y)\in \P^1_k \times \P^1_k$ \emph{nodal} if either $x$ or $y$ is the image of a node under $(\pi_{1, R})_0$ or $(\pi_{2, R})_0$. \end{definition} \begin{lemma}\label{lY0Nonsingular} The curve $\overline{\YY}_0$ is nonsingular. \end{lemma} \begin{proof} Let $\widetilde{(\EE_i )}_0$ be the normalisation of $(\EE_i )_0$ and \[ \xymatrix{ \widetilde{(\EE_i )}_0 \ar[rr]^{\widetilde{(\pi_{i, R})}_0} & & \P^1_k } \] the induced double coverings. The preimage $\widetilde{\XX_0}$ of $\XX_0$ in $\widetilde{(\EE_1 )}_0\times \widetilde{(\EE_2 )}_0$ is nonsingular under our assumption that if $(x, y)\in \P^1_k\times \P^1_k$, then at most one of $x$ and $y$ can be special for a projection $(\pi_{i, R})_0$, but not both at the same time. The preimage $\widetilde{\YY}_0$ of $\YY_0$ under the product of normalisation maps is nonsingular because it is the Frobenius twist of the preimage of $\widetilde{\XX_0}$ under an \'{e}tale map. Now $\overline{\YY}_0$ is a quotient of $\widetilde{\YY}_0$ by an action of $\Z/2\times \Z/2$ with at most $\Z/2$ stabilisers, hence nonsingular. \end{proof} \begin{theorem}\label{tMainMixedReductionTypes} Let $(E_1, \pi_1)$, $(E_2, \pi_2)$ satisfy Assumption \ref{aGoodBadReduction}. Let $\mathbb{M}$ be the set of pairs of torsion points $(t_1, t_2) \in E_1(\overline{K}) \times E_2 (\overline{K})$ with the following properties: \begin{enumerate} \item $\pi_1 (t_1) =\pi_2 (t_2)$; \item $t_1, t_2$ have order coprime to $p$.
\end{enumerate} Then \[ \left| \mathbb{M} \right| \le 2p^3 + 2. \] \end{theorem} \begin{proof} We consider the set $\mathbb{T}$ of pairs of torsion points $(t_1, t_2) \in E_1(\overline{K}) \times E_2 (\overline{K})$ with the following properties: \begin{enumerate} \item $\pi_1 (t_1) =\pi_2 (t_2)$; \item $t_1, t_2$ have order coprime to $p$; \item for $i=1, 2$, $t_i$ does not specialise to a node of $(\EE_i)_0$. \end{enumerate} Elements of the set $\mathbb{T}$ can be identified with sections/$R$-valued points of $\sA^{\circ} \to \mathrm{Spec}\, R$, and their images in the central fibre $(\sA^{\circ})_0\simeq \mathbb{G}_m$ are distinct. \medskip Since $t_1, t_2$ have order coprime to $p$, every element in the set $\mathbb{T}$ is equal to $p$ times another element in that set. Therefore, $\mathbb{T}$ injects into the set \[ \mathbb{S}= \mathrm{im} \left( p \sA^{\circ} (R) \cap \XX^{\circ} (R) \to \XX^{\circ}_0(k) \right), \] and also into \[ \mathbb{S}_1= \mathrm{im} \left( p \sA_1^{\circ} (R_1) \cap \XX^{\circ}_1 (R_1) \to \XX^{\circ}_0(k) \right), \] which we have assumed to be finite in Assumption \ref{aGoodBadReduction}, b). \medskip Let now \[ \overline{\mathbb{T}} =\left\{ t \in \P^1(\overline{K}) \mid \, \exists (t_1, t_2) \in \mathbb{T} \, :\, t = \pi_1 (t_1)=\pi_2 (t_2) \right\} . \] Let $\PP^{\circ}$ be the open subscheme of $\PP = \P^1_R \times_R \P^1_R$ that is the complement of the points of the central fibre $(x, y)$ with $x$ or $y$ nodal. Since $\overline{\mathbb{T}}$ is obtained from $\mathbb{T}$ by dividing out by the fibrewise involution, it follows that $\overline{\mathbb{T}}$ injects into the set \[ \overline{\mathbb{S}}_1= \mathrm{im} \left( \overline{\mathrm{mult}}_p \left( \PP_1^{\circ} (R_1) \right) \cap \Delta^{\circ}_1 (R_1) \to \Delta^{\circ}_0(k) \right), \] where $\Delta^{\circ}$ is the complement of the nodal points in $\Delta$. 
Moreover, the finiteness of $\mathbb{S}_1$, which we assumed, implies the finiteness of $\overline{\mathbb{S}}_1$. To derive the desired bound for $\overline{\mathbb{S}}_1$, hence for $\overline{\mathbb{T}}$, we now apply Proposition \ref{pComputInt} below and obtain $|\mathbb{T}| \le 2p^3$. To complete the proof, it remains to notice that (a) our assumption that nodes of $(\EE_1)_0$ and $(\EE_2)_0$ map to distinct points in $\P^1_k$ and (b) the fact that torsion points of order coprime to $p$ that do not specialise to a node specialise injectively into the central fibre of the model, taken together imply that $\mathbb{M}$ has at most two more elements than $\mathbb{T}$. This proves the Theorem. \end{proof} \section{Some computations}\label{sComputations} \newcommand{\Fpbar}{{\overline{\F}_p}} \newcommand{\Witt}{{R_1}} \begin{notation}\label{nWitt} We work over the field $k= \Fpbar$ and denote by $R$ the Witt vectors and by $\Witt$ the Witt vectors of length $2$ over $k$. We write elements $a \in \Witt$ as $(a_0,a_1)$ with $a_i \in \Fpbar$. The operations in $\Witt$ are \begin{align*} a + b &= (a_0,a_1) + (b_0,b_1) = \left(a_0+b_0, a_1+b_1-\frac{(a_0+b_0)^p-a_0^p-b_0^p}{p}\right) \\ a \cdot b & = (a_0,a_1) \cdot (b_0,b_1) = \left(a_0b_0,a_0^pb_1+a_1b_0^p\right) \end{align*} where the first formula is interpreted formally. There is a natural quotient ring homomorphism $\Witt \to \Fpbar$ sending a Witt vector $a = (a_0,a_1)$ to $a_0$. Notice that due to the nontrivial addition law the inclusion map $\Fpbar \to \Witt$ sending $a_0$ to $(a_0,0)$ is not a ring homomorphism (it is multiplicative, but not additive). \end{notation} \begin{lemma}\label{lFormulaWitt} Let $a = (a_0,a_1)$ represent a point in $\P^1_\Witt$, i.e. $a_i = (a_{i,x} , a_{i,y})$, and let $\varphi \colon \P^1_\Witt \dashrightarrow \P^1_\Witt$ be a rational map such that $\varphi_0$ is defined at $a_0$ (thus extends to a morphism) and with $(d \varphi)_0 = 0$.
Then we have \begin{enumerate} \item $(a_0,0) + (0,a_1) = (a_0,a_1)$ even though we use the nontrivial addition in the Witt vectors. \item $\varphi ((a_0, a_1)) =\varphi ((a_0, 0))$ is independent of $a_1$. \end{enumerate} \end{lemma} \begin{proof} For the first formula we compute \[ (a_0,0) + (0,a_1) = \left(a_0+0,0+a_1-\frac{(a_0+0)^p-a_0^p-0^p}{p}\right) = (a_0,a_1) \] where $a_0^p = (a_{0,x}^p, a_{0,y}^p)$ and the term divided by $p$ vanishes. For part b), using a) and Taylor expansion we compute \begin{align*} \varphi((a_0, a_1)) &= \varphi\bigl((a_0,0) + (0,a_1)\bigr) \\ &= \varphi\bigl((a_0,0)\bigr) + (0,a_1)d\varphi\bigl((a_0,0)\bigr) \\ & = \varphi\bigl((a_0,0)\bigr) + \left(0,a_1 \left(d\varphi\bigl((a_0,0)\bigr)\right)_0^p \right) \\ &= \varphi\bigl((a_0,0)\bigr) \end{align*} where the last equality holds because $(d\varphi)_0 = 0$. \end{proof} \begin{proposition}\label{pComputInt} Let $U_1, U_2 \subset \P^1_{R}$ be open subsets that contain the generic point of the central fibre. Let $\psi_i \colon U_i \to U_i$ be morphisms representing rational maps which we will denote by the same letters. Assume that $\psi_i$ has degree $p^2$ and $(d\psi_i)_0=0$. Consider $\psi = (\psi_1,\psi_2)$ and let $U=U_1\times U_2$. Assume that $\psi_0$, the morphism induced by $\psi$ on the central fibre, is of bidegree $(pd, pe)$. Let furthermore $\Delta_R \subset \P^1_R \times \P^1_R$ be the diagonal. We denote by $Y_0$ the reduced support of $\psi_0^{-1}(\Delta_0)$ and assume it is nonsingular. Assume that the number $N$ of points in \[ \mathrm{im} \left( \psi (U (R_1)) \cap (\Delta \cap U)(\Witt) \to \Delta_0 (k) \right) \] is finite. Then \[ N \le (d+e)p^2. \] \end{proposition} \begin{proof} If $a = (a_0,a_1)\in U(\Witt)$ is an $\Witt$-valued point such that $\psi(a)=b=(b_0, b_1) \in (\Delta_\Witt \cap U)(\Witt )$, we must have \[ \psi_0(a_0) \in \Delta_0 \subset \P^1_\Fpbar \times \P^1_\Fpbar . \] Therefore $a_0$ must lie in the support of the preimage of $\Delta_0$ and of course in $U$. Let $F_0=0$ be a bihomogeneous equation defining $Y_0$.
Let also \[ Y_R \subset \P^1_R \times \P^1_R \] be the curve defined by the equation $F=0$ where $F$ is obtained from $F_0$ by lifting all coefficients $f_{i,0} \in \Fpbar$ to $(f_{i,0},0, \dots ) \in R$. This is a non-canonical lift of $F_0$; any other lift would also work for our purpose. Since $d \psi_0 = 0$, the morphism $\psi_0$ factors through the Frobenius, and every component of the preimage $\psi_0^{-1}(\Delta_0)$ occurs with multiplicity at least $p$. Therefore $Y_0$ is of bidegree at most $(d,e)$. Similarly $Y_R$ has bidegree at most $(d,e)$. We now try to find $a_1' \in \Fpbar$ such that $a' = (a_0,a_1') \in (Y_R \cap U)(\Witt)$ and $\psi(a') = \psi(a)$. Using Taylor expansion we calculate \[ 0 = F(a') = F\bigl((a_0,a_1')\bigr) = F\bigl((a_0,0)\bigr)+ (0,a_1') dF\bigl((a_0,0)\bigr) \] which can be solved for $a_1'$ if $dF\bigl((a_0,0)\bigr) \not= 0$. This is the case iff $dF_0(a_0) \not= 0$, which holds because $Y_0$ is smooth at $a_0$. Using Lemma \ref{lFormulaWitt} we also have \[ \psi(a) = \psi\bigl((a_0,a_1) \bigr) = \psi\bigl((a_0,0) \bigr) = \psi\bigl((a_0,a_1') \bigr) = \psi(a') . \] Now consider the scheme-theoretic image $X_{R_1}$ of $\psi \colon Y_{R_1} \cap U \to \P^1_{R_1} \times \P^1_{R_1}$. Recall that by definition this is the smallest closed subscheme of $\P^1_{R_1} \times \P^1_{R_1}$ through which this morphism $\psi$ factors, or equivalently, the closed subscheme defined by the sheaf of ideals \[ \II = \mathrm{ker} \left( \OO_{ \P^1_{R_1} \times \P^1_{R_1}} \to \psi_* \OO_{Y_{R_1} \cap U} \right) . \] Then $\psi \colon Y_{R_1} \cap U \to X_{R_1}$ is dominant. The closed subscheme $X_{R_1} \subset \P^1_{R_1} \times \P^1_{R_1}$ has no embedded points (otherwise it would not be the smallest closed subscheme through which $\psi$ factors since $Y_{R_1}$ has no embedded points and the preimage under $\psi$ of the pure one-dimensional component of $X_{R_1}$ has to equal $Y_{R_1}\cap U$), and the support of $X_{R_1}$ contains the diagonal $\Delta_0$. Moreover, by \cite[Thm.
11.10.9, Prop. 11.10.1 b)]{EGAIV}, the smallest closed subscheme containing all sections in $\psi ( (Y_{R_1}\cap U)(\Witt))$ equals $X_{R_1}$: indeed, this follows because $(Y_{R_1}\cap U)(\Witt)$ is scheme-theoretically dense in $Y_{R_1} \cap U$ and $\psi \colon Y_{R_1} \cap U \to X_{R_1}$ is dominant. Note that the scheme-theoretic image $X_{R}$ of $\psi \colon Y_{R} \cap U \to \P^1_{R} \times \P^1_{R}$ is flat over $\mathrm{Spec}\, R$ because every irreducible component dominates $\mathrm{Spec}\, R$. \medskip Consider the ideal $I$ defining $X_{R_1}$ and its reduction $I_0$ to $k$. This reduction defines a curve without embedded points and is therefore generated by a polynomial $G_0 \in I_0$. Since $I \to I_0$ is surjective, we can choose a lift $G$ of $G_0$ in $I$. Let now $G' \in I$ be another polynomial, and $G_0'\in I_0$ its reduction to $k$. Now $I_0$ is generated by $G_0$ and therefore there exists an $L_0$ such that $G'_0 = G_0L_0$. Let $L$ be any lift of $L_0$ to $R_1$. Then \[ G' - LG = pG'' \in I \] for some $G''$. Now since $p \not\in I$ this implies $G''_0 \in I_0$. But then $G''_0 = G_0M_0$ for some $M_0$. If $M$ is any lift of $M_0$ we have that \[ G' -(LG+pMG) \] is zero modulo $p^2$. But then $G' = G(L+pM)$ in $R_1$. Therefore $G$ generates the ideal of $X_{R_1}$. \medskip The polynomial $G$ has bidegree at most $(dp^2, ep^2)$ because the curve $G=0$ is contained in the flat limit of $\psi (Y_K)$ where $K= \mathrm{Quot}(R)$. We parametrise $\Delta_R$ by $X_0 =T_0^2, X_1=T_0T_1, Y_0= T_1T_0, Y_1=T_1^2$. We put \[ \widetilde{G}(T_0, T_1) = G (T_0^2, T_0T_1, T_1T_0, T_1^2). \] If $\widetilde{G}$ is not identically zero, then the degree of $\widetilde{G}$ is at most $dp^2 + ep^2$, which gives the bound of the Proposition. Assume to the contrary that $\widetilde{G}$ is identically zero. Then the equation of $\Delta_{R_1}$ is a factor of $G$. This is only possible if infinitely many elements of $\psi ( (Y_{R}\cap U)(\Witt))$ lie on $\Delta_{R_1}$.
In that case, \[ \mathrm{im} \left( \psi (U (R_1)) \cap (\Delta \cap U)(\Witt) \to \Delta_0 (k) \right) \] is infinite, contrary to our assumption in the statement of the Proposition. \end{proof} \section{Extending the multiplication-by-$p$ map to proper models}\label{sExtension} In this section we will keep most of the notation from Section \ref{sGoodMult}, except that standard projections from elliptic curves to $\P^1$ will usually be denoted by the letter $\sigma$ instead of $\pi$ from now on, since here we will need to give names to a number of structure morphisms to $\mathrm{Spec}\, R$ and will reserve the letter $\pi$ for those. We will prove below that the finiteness hypothesis in Assumption \ref{aGoodBadReduction}, b) is implied by Assumption \ref{aGoodBadReduction}, a) under certain extra assumptions on the models of $(E_i, \sigma_i)$, $i=1, 2$. More precisely: \begin{theorem}\label{tFinitenessGeneralised} Suppose that $(E_1, \sigma_1)$ and $(E_2, \sigma_2)$ are two elliptic curves together with standard projections defined over a number field $K$. Assume that each of them has a nice model \[ \xymatrix{ \WW_1 \ar[rd] \ar[rr]^{\sigma_{1, R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] respectively \[ \xymatrix{ \EE_2 \ar[rd] \ar[rr]^{\sigma_{2, R}} & & \P^1_R \ar[ld] \\ & \mathrm{Spec}\, R & } \] in the sense of Definition \ref{dGoodModel}, and further assume that \begin{enumerate} \item The model $\pi_{\WW_1}\colon \WW_1 \to \mathrm{Spec}\, R$ is a minimal Weierstrass model with nodal rational central fibre and the elliptic curve $E_1$ over $K$ has a Tate uniformisation $K^*/q^{\mathbb{Z}}$ with a parameter $q\in K^*$ that is a $p$-th power of a uniformiser of $K$. \item The model $\pi_{\EE_2}\colon \EE_2 \to \mathrm{Spec}\, R$ is smooth with central fibre an ordinary elliptic curve. \end{enumerate} Then the statement in part a) of Assumption \ref{aGoodBadReduction} implies the finiteness statement in part b).
\end{theorem} \begin{remark}\label{rRaynaudProof} Raynaud in \cite{Ray83-2} describes a method to prove the analogue of Theorem \ref{tFinitenessGeneralised} in the case when both curves have good ordinary reduction. The punchline of the argument is that if the finiteness statement in Assumption \ref{aGoodBadReduction}, b) is false then the relative Frobenius morphism on some smooth proper curve of genus $\ge 2$ would lift infinitesimally to first order, which gives a contradiction. To prove Theorem \ref{tFinitenessGeneralised} we will follow the structure of Raynaud's argument and generalise it to log smooth curves, using logarithmic algebraic geometry. \end{remark} The proof of Theorem \ref{tFinitenessGeneralised} needs a number of preparations and will occupy this and the next three sections. The non-liftability of the Frobenius used in Raynaud's argument only holds if one works with proper curves, so as a first step of the proof of Theorem \ref{tFinitenessGeneralised}, we will extend the multiplication by $p$ map for certain elliptic curves with bad multiplicative reduction to some proper models of these curves over $\mathrm{Spec}\, R$. \medskip We start by recalling a few general facts about models of elliptic curves needed in the sequel. We retain the previous notation $k=\overline{\mathbb{F}}_p$, $R=W(k)$ the ring of Witt vectors with coefficients in $k$, and $K$ its field of fractions (the completion of the maximal unramified extension of $\mathbb{Q}_p$). Let $E=E_K$ be an elliptic curve defined over $K$. Of course, $E$ being elliptic, it comes with a privileged rational point $o\in E(K)$, the origin for the group-law. Denote by $\pi_{\EE}\colon \EE \to \mathrm{Spec}\, R$ the minimal proper regular model of $E$. The vertical prime divisors of $\EE$ that do not meet $\overline{\{o\}}$ can be contracted to obtain the minimal Weierstrass model of $E$, which we denote by $\pi_{\WW} \colon \WW \to \mathrm{Spec}\, R$ \cite[Thm. 4.35]{Liu02}.
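Before the group structure of these models is recalled in more detail below, it may help to see the multiplication by $p$ map on the smooth locus of a N\'{e}ron $p$-gon in explicit coordinates. The following toy model is illustrative only and not part of the proofs: it represents points of the smooth locus over a ring $A$ as pairs in $(\mathbb{Z}/N\mathbb{Z}) \times A^*$ with componentwise group law, and takes $A=\mathbb{Z}/p^2\mathbb{Z}$ (Witt vectors of length $2$ over $\mathbb{F}_p$) so that $\mu_p$ has nontrivial $A$-points, which it would not have over a field of characteristic $p$; the choice $p=5$ is arbitrary.

```python
# Toy model, for illustration only: the smooth locus of a Neron N-gon
# over a ring A has points (Z/N) x A^*, with group law
# (i,t) + (j,s) = (i+j mod N, t*s), so multiplication by p sends
# (i,t) to (p*i mod N, t^p).  Over A = Z/p^2 (Witt vectors of length 2
# over F_p) the kernel of multiplication by p on the p-gon has order
# p^2, and splits visibly into mu_p (identity component, label i = 0)
# and an etale quotient Z/p (the component labels).

p = 5          # an arbitrary odd prime, chosen for illustration
N = p          # a Neron p-gon, as in the multiplicative reduction case
A = p * p      # we take points over A = Z/p^2

units = [t for t in range(A) if t % p != 0]   # A^* = (Z/p^2)^*

def mult_by_p(point):
    i, t = point
    return (p * i % N, pow(t, p, A))

# the kernel K_p of multiplication by p on (Z/N) x A^*
K_p = [(i, t) for i in range(N) for t in units
       if mult_by_p((i, t)) == (0, 1)]

# the part of the kernel on the identity component (label i = 0)
mu_p = sorted(t for (i, t) in K_p if i == 0)

print(len(K_p))                               # 25 = p^2
print(mu_p == [1 + p * a for a in range(p)])  # True: mu_p(Z/p^2) = 1 + p(Z/p^2)
print(len({i for (i, t) in K_p}))             # 5 = p
```

The output matches the exact sequence $0 \to \mu_p \to K_p \to \mathbb{Z}/p\mathbb{Z} \to 0$ for the $p$-gon discussed in the sequel: the kernel has order $p^2$, its part on the identity component is $\mu_p$, and the component labels give an \'{e}tale quotient $\mathbb{Z}/p\mathbb{Z}$.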
The largest subschemes $\EE^{\circ}$ and $\WW^{\circ}$ that are smooth over $\mathrm{Spec}\, R$ are $R$-group schemes in a natural way \cite[Prop. 2.7]{De-Ra73}. In particular, for every integer $n$, the multiplication by $n$ maps $[n] \colon \EE^{\circ} \to \EE^{\circ}$ and $[n] \colon \WW^{\circ} \to \WW^{\circ}$ are well-defined. More precisely, there is a morphism $+ \colon \EE^{\circ} \times_{R} \EE \to \EE$ making $\EE \to \mathrm{Spec}(R)$ into a generalised elliptic curve in the sense of \cite[Def. 1.12]{De-Ra73} or \cite[Def. 1.29]{Sai13}, and in the multiplicative reduction case the central fibre of $\EE \to \mathrm{Spec}(R)$ is a N\'{e}ron $N$-gon $P_{N, k}$ over $k$ with the action of the smooth locus $P_{N, k}^{\circ} \simeq \mathbb{G}_m \times \mathbb{Z}/N\mathbb{Z}$ on $P_{N, k}$ being explicitly given as in \cite[\S 1.5, p. 29 ff.]{Sai13}. In a nutshell, $P_{N,k}$ consists of $N$ projective lines, labelled by $\mathbb{Z}/N\mathbb{Z}$, and glued cyclically in such a way that $\infty$ on the $\mathbb{P}^1_k$ with label $i$ gets identified with $0$ on the copy of $\mathbb{P}^1_k$ with label $i+1$, and the action of $P_{N, k}^{\circ} \simeq \mathbb{G}_m \times \mathbb{Z}/N\mathbb{Z}$ on the N\'{e}ron $N$-gon is given by adding corresponding labels and letting $\mathbb{G}_m$ act naturally on $\mathbb{P}^1_k$ with fixed points $0, \infty$. The kernel $K_n=\mathrm{ker}([n])$ of multiplication by $n$ on $\EE^{\circ}$ is an $R$-group scheme that acts on $\EE$ by the above construction. If $n$ divides $N$, it is a finite flat commutative $R$-group scheme, of degree $n^2$, \'{e}tale if $n$ is invertible in $R$ \cite[Prop. 1.34, Cor. 1.35]{Sai13}. \begin{definition}\label{dAdmissibleFactorisation} An \emph{admissible factorisation} of the multiplication by $p$ map consists of \begin{enumerate} \item A projective model $\pi_{\UU} \colon \UU \to \mathrm{Spec}\, R$ of $E$. \item An $R$-morphism $f_p \colon \EE \to \UU$ whose restriction to the generic fibre is the multiplication by $p$ map $[p]\colon E \to E$.
\item A flat, projective $R$-scheme $\pi_{\FF} \colon \FF \to \mathrm{Spec}\, R$ with $R$-morphisms \[ \xymatrix{ \EE \ar[r]^{\alpha_{\EE}} & \FF \ar[r]^{\beta_{\FF}} & \UU } \] such that $f_p = \beta_{\FF} \circ \alpha_{\EE}$ and the morphism $\alpha_{\EE, k} \colon \EE_k \to \FF_k$ induced on the central fibres is the relative Frobenius morphism; in particular, $\FF_k$ is the Frobenius twist of $\EE_k$; and the morphism $\beta_{\FF}$ is \'{e}tale. \end{enumerate} \end{definition} \begin{proposition}\label{pExistenceFactorisation} Suppose that the central fibre $\EE_k$ of $\EE\to \mathrm{Spec}(R)$ is either a nonsingular ordinary elliptic curve or a N\'{e}ron $N$-gon with $p$ dividing $N$. Then an admissible factorisation exists. \end{proposition} \begin{proof} In the case when $\EE_k$ is a nonsingular ordinary elliptic curve, this has already been observed in \cite[p. 5/6]{Ray83-2}: indeed, in this case, we can let $\UU =\EE$ and denoting by $K_p$ the kernel of multiplication by $p$ on $\EE^{\circ}$, $K_p^{\circ}$ its identity component, one can define $\FF := \EE/ K_p^{\circ}$ (the quotient of $\EE$ by the action of the finite group scheme $K_p^{\circ}$). So we consider the case when $\EE_k=P_{N, k}$ is a N\'{e}ron $N$-gon in the sequel, with $N=p\cdot m$. The main point now is that, since $p$ divides $N$, the kernel of multiplication by $p$, $K_p$, is a \emph{finite flat} $R$-group subscheme of $\EE^{\circ}$ that acts on $\EE$ by restricting the morphism $+ \colon \EE^{\circ} \times_{R} \EE \to \EE$ to $K_p$. Then by \cite{Ray66} or \cite[Chapter 4, Thm. 4.16, p. 55]{EGM}, we obtain that there exists a geometric quotient $\UU := \EE / K_p$ that is an integral, projective, flat $R$-scheme (by part (i) of the Theorem in loc. cit.), and the quotient morphism $\EE \to \UU$ is given by multiplication by $p$ on the generic fibre (for example by \cite[Chapter 4, Thm. 4.16, part (ii), p. 55]{EGM}, compatibility with flat base change).
We can perform the same construction with any finite flat $R$-subscheme of $K_p$. Now by \cite[A.1.2, IV-31, (1)]{Se88} $K_p$ sits in an exact sequence of finite flat $R$-group schemes \begin{gather}\label{eTorsionE1} 0 \to \bbmu_p \to K_p \to \mathbb{Z}/p\mathbb{Z} \to 0. \end{gather} Here $K_p^{\circ}=\bbmu_p$ is the connected component of the identity, and the quotient is \'{e}tale. If we let $\FF = \EE / \bbmu_p$, all the desired properties of the proposition hold. \end{proof} \begin{remark}\label{rQuotientSings} \'{E}tale locally around a singular point of the special fibre, $\EE$ is isomorphic to the subscheme of $\mathbb{A}^2_R$ given by $XY-\pi =0$ for a uniformiser $\pi$ of $R$, cf. \cite[I. Thm. 5.3]{De-Ra73}. The $\bbmu_p=\mathrm{Spec}\, R[T]/(T^p -1)$-action is given locally around this $\bbmu_p$-fixed point by \begin{align*} R[X, Y]/(XY-\pi) & \to R[T]/(T^p-1) \otimes_R R[X, Y]/(XY-\pi)\\ X & \mapsto T \otimes X\\ Y & \mapsto T^{-1} \otimes Y \end{align*} and the quotient $\FF$ (and hence also $\UU$) can be described \'{e}tale locally around the image of that singular point as $UV - \pi^p=0$ in $\mathbb{A}^2_R$. \end{remark} \section{The geometry of preimages of the diagonal under certain covering maps}\label{sGeometryCovering} We work over $k=\overline{\mathbb{F}}_p$ in this section, assume $p\neq 2$ from now on, and consider \begin{enumerate} \item A nodal rational cubic $C_0$ with a degree $2$ covering $\sigma \colon C_0 \to \mathbb{P}^1$. Precomposing with the normalisation morphism of $C_0$ we get a degree $2$ covering $\widetilde{\sigma} \colon \widetilde{C}_0 \simeq \mathbb{P}^1 \to \mathbb{P}^1$ branched in two points $p_1, p_2\in \mathbb{P}^1$. Let $\gamma \colon P_{N, k} \to C_0$ be the \'{e}tale $N:1$ cover of $C_0$ by the N\'{e}ron $N$-gon.
\item An elliptic curve $E_0$ over $k$ with a double covering $\tau \colon E_0 \to \mathbb{P}^1$ branched in four points $q_1, \dots , q_4$, identifying a point and its inverse for the group law on $E_0$ in each fibre. We assume the sets $\{q_1, \dots , q_4\}$ and $\{ p_1, p_2\}$ are disjoint. We also assume each $q_i$ is different from the image of the node on $C_0$ under $\sigma$. \end{enumerate} Let $\Delta \subset \mathbb{P}^1\times \mathbb{P}^1$ be the diagonal. We wish to determine the geometry of the preimage curve \[ \Gamma = \left( (\gamma\circ \sigma) \times \tau \right)^{-1} (\Delta ) \subset P_{N, k} \times E_0. \] This can be reduced to determining the geometry of \[ \overline{\Gamma}= \left( \widetilde{\sigma} \times \tau \right)^{-1} (\Delta ) \subset \widetilde{C}_0 \times E_0. \] It is easy to see that since the sets of branch points for $\widetilde{\sigma}$ and $\tau$ are disjoint, the curve $\overline{\Gamma}$ is nonsingular and irreducible (nonsingularity can be checked \'{e}tale/analytically locally, and irreducibility holds because, again looking \'{e}tale locally, one sees that if $\overline{\Gamma}$ were reducible, it would split into two components permuted by the covering group $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, but again since the sets of branch points for $\widetilde{\sigma}$ and $\tau$ are disjoint, a local argument shows that no subgroup $\mathbb{Z}/2\mathbb{Z}$ of $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ acts trivially on the set of components, a contradiction). Denoting by $F_1, F_2$ a fibre of the first and second projection of $\widetilde{C}_0 \times E_0$ onto its factors, we see that $\overline{\Gamma}$ is numerically equivalent to $2F_1 + 2F_2$. The canonical class $K_S$ of $S:=\widetilde{C}_0 \times E_0$ being $-2F_1$, we get for the genus of $\overline{\Gamma}$ \[ g (\overline{\Gamma}) = \frac{1}{2} \overline{\Gamma}\cdot (\overline{\Gamma} + K_S) +1 = 3. 
\] Let $\nu_1, \nu_2\in \widetilde{C}_0$ be the points mapping to the node of $C_0$ under the normalisation morphism; since we assumed that each $q_i$ is different from the image of the node on $C_0$ under $\sigma$, it follows that $\overline{\Gamma}$ intersects $\{\nu_1\} \times E_0$ and $\{\nu_2\} \times E_0$ transversely in $S$. Thus in summary we get \begin{proposition}\label{pGeometryCurve} The curve $\Gamma$ is connected with $N$ irreducible components, each of which is a nonsingular curve of genus $3$. These components intersect in points that are nodes of $\Gamma$. \end{proposition} \section{The connection to torsion points}\label{sConnectionTorsion} Suppose now that we are given two elliptic curves $E_1, E_2$ over $K$ with standard projections $\sigma_i$ as in Theorem \ref{tFinitenessGeneralised}. Then, with the hypotheses and notation of Section \ref{sExtension} (adding indices $1$ and $2$ to $\EE$ etc.), the minimal proper regular model $\EE_1 \to \mathrm{Spec}\, R$ of $E_1$ has central fibre a N\'{e}ron $p$-gon, whereas $\EE_2 \to \mathrm{Spec}\, R$ has central fibre an ordinary elliptic curve. Proposition \ref{pExistenceFactorisation} and its proof then produce admissible factorisations \[ \xymatrix{ \EE_1 \ar[r]^{\alpha_{\EE_1}} & \FF_1 \ar[r]^{\beta_{\FF_1}} & \UU_1, \\ \EE_2 \ar[r]^{\alpha_{\EE_2}} & \FF_2 \ar[r]^{\beta_{\FF_2}} & \UU_2 } \] and $\UU_1 =\WW_1$ is the minimal Weierstrass model and $\UU_2 =\EE_2$. We put \begin{enumerate} \item $\sA = \EE_1 \times_{\mathrm{Spec}\, R} \EE_2$, $\BB = \FF_1 \times_{\mathrm{Spec}\, R} \FF_2$, $\CC = \UU_1 \times_{\mathrm{Spec}\, R} \UU_2$ with structural morphisms to $\mathrm{Spec}\, R$ denoted by $\pi_{\sA}, \pi_{\BB}, \pi_{\CC}$. (Note that this is a slight departure from the notation used in Section \ref{sGoodMult} inasmuch as there the letter $\sA$ was used for what is denoted by $\CC$ here and in the sequel.
However, we hope that the notation we now adopt will make the following arguments, somewhat heavy on notation anyway, more transparent and readable.) \item $\alpha = \alpha_{\EE_1} \times_R \alpha_{\EE_2}$, $\beta = \beta_{\FF_1} \times_R \beta_{\FF_2}$. \end{enumerate} So we get a sequence of morphisms of $R$-schemes \[ \xymatrix{ \sA \ar[r]^{\alpha} & \BB \ar[r]^{\beta} & \CC } \] and $\beta\circ \alpha$ restricted to the generic fibre is the multiplication by $p$ map on the abelian surface $E_1 \times E_2$, $\beta$ is \'{e}tale, and $\alpha$ restricts to the relative Frobenius on the central fibre of $\pi_{\sA}\colon \sA \to \mathrm{Spec}\, R$. By the assumptions made in Theorem \ref{tFinitenessGeneralised}, we are also given standard double covers \[ \sigma_{E_i}\colon E_i \to \mathbb{P}^1_K, \: i=1, 2, \] extending to double covers \[ \sigma_{\UU_i}\colon \UU_i \to \mathbb{P}^1_R, \: i=1, 2 \] of the minimal Weierstrass models. Given some model over $\mathrm{Spec}\, R$, we denote the largest open subscheme of it that is smooth over $\mathrm{Spec}\, R$ by an upper $\circ$, as in $\sA^{\circ}$ for example. \medskip With $\Delta_R \subset \mathbb{P}^1_R \times \mathbb{P}^1_R$ the diagonal, we introduce the further notation \[ \Gamma_{\CC, R} = \left( \sigma_{\UU_1}\times_R\sigma_{\UU_2} \right)^{-1} \left( \Delta_R \right) \subset \CC , \quad \Gamma_{\BB, R} := \beta^{-1}\left( \Gamma_{\CC, R} \right) \subset \BB , \] and denote by $\Gamma_{\CC, k} \subset \CC_k$, $\Gamma_{\BB, k} \subset \BB_k$ the special fibres of these $R$-schemes. Furthermore, we write $\overline{\Gamma}_{\sA, k}$ for the \emph{reduced} preimage of $\Gamma_{\BB, k}$ under $\alpha_k \colon \sA_k \to \BB_k$. \medskip Note that $\Gamma_{\CC, R}$ was denoted by $\XX$ previously in Section \ref{sGoodMult}. \medskip We also denote $R_j :=R/ p^{j+1} R$, and by an upper index in round brackets the pullback of the various $R$-schemes to $\mathrm{Spec}\, R_j$.
So, for example, $(\sA^{\circ})^{(1)} (R_1)$ denotes the set of $R_1$-valued points of $(\sA^{\circ})^{(1)}$, the pullback of $\sA^{\circ}$ to $\mathrm{Spec}\, (R/p^2 R)$. \medskip Let $\Sigma$ be the set of points in $(\sA^{\circ})^{(1)} (R_1)$ that lift points of $\overline{\Gamma}_{\sA, k}^{\circ}$, and set $\Lambda = \alpha^{(1)} \left( \Sigma \right) \subset (\BB^{\circ})^{(1)} (R_1)$. Then $p\Sigma = \beta^{(1)}(\Lambda)$ is the subset of points in $p \left( (\sA^{\circ})^{(1)} (R_1) \right) $ that lift points of $\Gamma_{\CC, k}^{\circ}$. Moreover, \begin{gather*} \beta^{(1)} \left( \Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1) \right) = \beta^{(1)} \left( \Lambda \cap (\beta^{(1)})^{-1}\left( (\Gamma_{\CC, R}^{\circ})^{(1)} (R_1) \right) \right)\\ = \beta^{(1)} \left( \Lambda \right) \cap \left( (\Gamma_{\CC, R}^{\circ})^{(1)} (R_1) \right) = p\Sigma \cap \left( (\Gamma_{\CC, R}^{\circ})^{(1)} (R_1) \right). \end{gather*} Thus we obtain \begin{lemma}\label{lFiniteCheck} If the image of $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ in $\Gamma_{\BB, k}^{\circ}(k)$ is finite, then the image of $p\Sigma \cap \left( (\Gamma_{\CC, R}^{\circ})^{(1)} (R_1) \right)$ in $\Gamma_{\CC, k}^{\circ}(k)$ is finite. \end{lemma} In Theorem \ref{tFinitenessGeneralised} we assumed that the elliptic curve $E_1$ over $K$ has a Tate uniformisation $K^*/q^{\mathbb{Z}}$ with a parameter $q\in K^*$ that is a $p$-th power of a uniformiser $\varpi$ in $K$: $q=\varpi^p$. We will now use that assumption to prove \begin{proposition}\label{pRotationalSymmetry} If the image of $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ in $\Gamma_{\BB, k}^{\circ}(k)$ is infinite, then this image is infinite in every irreducible component of $\Gamma_{\BB, k}$.
\end{proposition} \begin{proof} This follows from the rotational symmetry of the situation, more precisely: choose a $p$-torsion point $t$ of $E_1(\overline{K})$ such that $(t, \mathrm{id}_{E_2})$ defines an $R$-valued point $x_t$ of $\sA^{\circ}$ intersecting the central fibre $\sA^{\circ}_k$ in a point not lying on the identity component of $\sA^{\circ}_k$. Such $t$ exist: for example, one may take the torsion point corresponding to $\varpi \in K^*$ under the Tate uniformisation. Moreover, $x_t$ induces an $R_1$-valued point of $(\sA^{\circ})^{(1)}$, which we denote by the same symbol. Suppose now that we are given an $R_1$-valued point $y$ in $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ that specialises to a point on a certain component of $\Gamma_{\BB, k}^{\circ}(k)$. Adding the $R_1$-valued point $\alpha^{(1)}(x_t)$ to $y$ multiple times, using the structure of $\BB^{\circ}$ as an $R$-group scheme, we obtain from $y$ points in $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ specialising to points on all the other components. \end{proof} \section{Log deformation theory and Frobenius liftings}\label{sLogFrobenius} We start by noticing that $\pi_{\sA}\colon \sA \to \mathrm{Spec}\, R$ becomes log smooth if we endow $\sA$ with the divisorial log structure determined by the central fibre $\sA_k \subset \sA$ and $\mathrm{Spec}\, R$ with the divisorial log structure given by its closed point, cf. \cite[Thm. 4.1]{Kato96} or \cite[IV., Thm. 3.1.18]{Ogus18}. We denote the resulting morphism of log schemes \[ \pi_{\sA}^{\dagger} \colon \sA^{\dagger} \to (\mathrm{Spec}\, R)^{\dagger}, \] and will adhere to the same practice of denoting log schemes by an added dagger in other instances below.
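For the reader's convenience we make the two log structures on the base explicit in this simplest case; this is standard material from the references just cited, recorded here only to fix ideas. Since $R$ is a discrete valuation ring with uniformiser $\pi$ and fraction field $K$, the divisorial log structure on $\mathrm{Spec}\, R$ given by its closed point is \[ \mathcal{M}_R = \mathcal{O}_R \cap j_* \mathcal{O}_{\mathrm{Spec}\, K}^* = \{ u \pi^n \: : \: u \in R^*, \: n \geq 0 \}, \] where $j \colon \mathrm{Spec}\, K \hookrightarrow \mathrm{Spec}\, R$ is the inclusion of the generic point; a chart is given by $\mathbb{N} \to R$, $1 \mapsto \pi$. Its restriction to the closed point is the standard log point $(\mathrm{Spec}\, k)^{\dagger}$, with chart $\mathbb{N} \to k$ sending $0 \mapsto 1$ and $n \mapsto 0$ for $n > 0$.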
In fact, $\pi_{\BB}\colon \BB \to \mathrm{Spec}\, R$ and $\pi_{\CC}\colon \CC \to \mathrm{Spec}\, R$ also become log smooth over $(\mathrm{Spec}\, R)^{\dagger}$ if we endow the total spaces with the divisorial log structures determined by the central fibres, and $\alpha$, $\beta$ naturally determine morphisms of log schemes, which we denote $\alpha^{\dagger}$, $\beta^{\dagger}$; indeed, it suffices to check this \'{e}tale locally around singular points of the central fibres, where these fibrations are given by \[ (xy - \pi^p =0 ) \subset \mathbb{A}_R^2 \] (where we denote a uniformiser of $R$ by $\pi$). By \cite[Ex. 3.27, 3.28]{Gross11}, the log morphism down to $(\mathrm{Spec}\, R)^{\dagger}$ can be described, using charts, by the diagram \[ \xymatrix{ S_p = \mathbb{N}^2 \oplus_{\mathbb{N}} \mathbb{N} \ar[r] & R [x,y]/(xy - \pi^p) \\ \mathbb{N}\ar[u] \ar[r] & R \ar[u] } \] where $S_p$ is the quotient of $\mathbb{N}^2 \oplus \mathbb{N}$, with generators $$\alpha_1 = ((1,0),0), \quad \alpha_2 = ((0,1),0), \quad \varrho = ((0,0),1),$$ by the one relation $\alpha_1 + \alpha_2 = p\varrho$, and denoting by $1$ the standard generator of $\mathbb{N}$ (the copy in the lower left-hand corner of the diagram), the maps are given as follows: \begin{enumerate} \item $\mathbb{N} \to S_p$ maps $1\mapsto \varrho$; \item $S_p \to R[x,y]/(xy-\pi^p)$ sends $\alpha_1\mapsto x, \alpha_2\mapsto y, \varrho \mapsto \pi$; \item $\mathbb{N}\to R$ sends $1\mapsto \pi$ (and $0 \mapsto 1$); \item $R \to R [x,y]/(xy - \pi^p)$ is the natural inclusion. \end{enumerate} Thus the toroidal characterisation of log smoothness \cite[Thm. 4.1]{Kato96} applies. \medskip Restricting the log structure from $\pi_{\sA}^{\dagger}$ to the subscheme $\overline{\Gamma}_{\sA, k}$, we obtain a log scheme $\overline{\Gamma}_{\sA, k}^{\dagger}$ log smooth over the standard log point $(\mathrm{Spec}\, k)^{\dagger}$, which one checks \'{e}tale locally as before. Using \cite[Prop.
3.40, 3.28]{Gross11}, we can lift $\overline{\Gamma}_{\sA, k}^{\dagger}$ to a log smooth curve $\ZZ^{\dagger} \to (\mathrm{Spec}\, R)^{\dagger}$. Note that $\Gamma_{\BB, R}\to \mathrm{Spec}\, R$ also becomes log smooth if we endow the total space and the base with the divisorial log structures determined by the central fibre and the closed point, respectively, yielding $\Gamma_{\BB, R}^{\dagger} \to (\mathrm{Spec}\, R)^{\dagger}$. Our goal now is to show that under the assumptions of Proposition \ref{pRotationalSymmetry}, the morphism $\alpha$ induces a first order infinitesimal lifting of the relative Frobenius \[ \xymatrix{ (\ZZ^{(1)})^{\dagger} \ar[rr]^{\Phi}\ar[rd] & & (\Gamma_{\BB, R}^{(1)})^{\dagger} \ar[ld] \\ & (\mathrm{Spec}\, R_1)^{\dagger} & } \] which is a morphism of log schemes that are log smooth over $(\mathrm{Spec}\, R_1)^{\dagger}$. This will yield a contradiction as in \cite[Lemma I.5.4]{Ray83-2}, using a log version of the Cartier operator and log differential forms. Then by Proposition \ref{pRotationalSymmetry} and Lemma \ref{lFiniteCheck} we conclude that Theorem \ref{tFinitenessGeneralised} holds. \medskip To start, we have \begin{lemma}\label{lLocalLifting} There exists a canonical morphism of log schemes \[ \xymatrix{ (\ZZ^{(1)})^{\dagger} \ar[rr]^{\varphi}\ar[rd] & & (\BB^{(1)})^{\dagger} \ar[ld] \\ & (\mathrm{Spec}\, R_1)^{\dagger} & } \] that lifts $\alpha_k^{\dagger}$ on $\overline{\Gamma}_{\sA, k}^{\dagger}$ and satisfies $\varphi ((\ZZ^{\circ})^{(1)} (R_1)) =\Lambda$. \end{lemma} \begin{proof} We wish to mimic \cite[Lemma I.5.2]{Ray83-2} in the present log setting. Since $(\sA^{(1)})^{\dagger} \to (\mathrm{Spec}\, R_1)^{\dagger}$ is log smooth, we can lift the inclusion of $\overline{\Gamma}_{\sA, k}^{\dagger}$ into the central fibre \'{e}tale locally, using the categorical characterisation, or rather definition, of log smoothness \cite[Definition 3.7]{Kato96}. Two different such lifts differ by a derivation \cite[Section.
1.1]{Ser06}, but since the differential of $\alpha_k \colon \sA_k \to \BB_k$ is zero, we get a well-defined map to $\BB^{(1)}$ if we compose with $\alpha^{(1)}$. Since morphisms can be defined \'{e}tale-locally on the source, these local lifts glue to a morphism $\varphi\colon (\ZZ^{(1)})^{\dagger} \to (\BB^{(1)})^{\dagger}$. The property $\varphi ((\ZZ^{\circ})^{(1)} (R_1)) =\Lambda$ is clear by construction. \end{proof} \begin{lemma}\label{lFrobeniusLifting} Suppose that the assumptions of Proposition \ref{pRotationalSymmetry} are satisfied, in particular, that the image of $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ in $\Gamma_{\BB, k}^{\circ}(k)$ is infinite. Then the morphism $\alpha$ induces a first order infinitesimal lifting of the relative Frobenius \[ \xymatrix{ (\ZZ^{(1)})^{\dagger} \ar[rr]^{\Phi}\ar[rd] & & (\Gamma_{\BB, R}^{(1)})^{\dagger} \ar[ld] \\ & (\mathrm{Spec}\, R_1)^{\dagger} & } \] which is a morphism of log schemes that are log smooth over $(\mathrm{Spec}\, R_1)^{\dagger}$. \end{lemma} \begin{proof} This is the analogue in the log setting of \cite[Lemma I.5.3]{Ray83-2}. We wish to show that the morphism $\varphi$ of Lemma \ref{lLocalLifting} factors through the closed subscheme $(\Gamma_{\BB, R}^{(1)})^{\dagger}$ of $(\BB^{(1)})^{\dagger}$. We denote by $\widetilde{\ZZ^{(1)}}$ the closed subscheme of $\ZZ^{(1)}$ that we obtain when we pull back $\Gamma_{\BB, R}^{(1)}$ via $\varphi$. We want to show that $\widetilde{\ZZ^{(1)}} = \ZZ^{(1)}$, and for that it suffices to show that $\widetilde{\ZZ^{(1)}}$ is schematically dense in $\ZZ^{(1)}$. Since we assume that the image of $\Lambda \cap (\Gamma_{\BB, R}^{\circ})^{(1)} (R_1)$ in $\Gamma_{\BB, k}^{\circ}(k)$ is infinite, this image is infinite in every irreducible component of $\Gamma_{\BB, k}$ by Proposition \ref{pRotationalSymmetry}.
Therefore there is a set of sections in $\widetilde{\ZZ^{(1)}} (R_1)$ with Zariski dense image in every irreducible component of the special fibre of $\ZZ$, which is $\overline{\Gamma}_{\sA, k}$. Then $\widetilde{\ZZ^{(1)}}$ is schematically dense in $\ZZ^{(1)}$ by \cite[11.10.9]{EGAIV}. \end{proof} We now want to show that there is no lifting of Frobenius as in Lemma \ref{lFrobeniusLifting}; this will establish the finiteness of the image of $p\Sigma \cap \left( (\Gamma_{\CC, R}^{\circ})^{(1)} (R_1) \right)$ in $\Gamma_{\CC, k}^{\circ}(k)$ under our assumptions. \begin{lemma}\label{lFrobDoesNotLift} Suppose $C^{\dagger} \to (\mathrm{Spec}\, R_1)^{\dagger}$ and $D^{\dagger} \to (\mathrm{Spec}\, R_1)^{\dagger}$ are log smooth curves, and denote by $C_0^{\dagger} \to (\mathrm{Spec}\, k)^{\dagger}$ and $D_0^{\dagger} \to (\mathrm{Spec}\, k)^{\dagger}$ their central fibres, which are the base changes to the standard log point. Assume $D_0$ is the Frobenius twist of $C_0$. Suppose there is a nonsingular component $D_0'$ of $D_0$ on which \[ \omega_{D_0}\left( \sum_{i=1}^n x_i\right) \] has positive degree, where $x_1, \dots , x_n$ are the double points of $D_0$ or log marked points lying on $D_0'$ as in \cite[Example 3.26 and Examples 3.36 (6)]{Gross11}. Suppose also that on the corresponding component $C_0'$ of $C_0$ there is a matching number of double points or log marked points $y_1, \dots , y_n$. Then there is no first order infinitesimal lifting of the relative Frobenius \[ \xymatrix{ C^{\dagger} \ar[rr]^{\Phi}\ar[rd] & & D^{\dagger} \ar[ld] \\ & (\mathrm{Spec}\, R_1)^{\dagger} & . } \] \end{lemma} \begin{proof} We argue similarly to \cite[Lemma I.5.4]{Ray83-2}.
First, since $C^{\dagger} \to (\mathrm{Spec}\, R_1)^{\dagger}$ and $D^{\dagger} \to (\mathrm{Spec}\, R_1)^{\dagger}$ are log smooth curves, the sheaves of log differentials $\Omega^1_{C^{\dagger} / (\mathrm{Spec}\, R_1)^{\dagger}}$ and $\Omega^1_{D^{\dagger} / (\mathrm{Spec}\, R_1)^{\dagger}}$ are locally free of rank $1$, and in any event we have a natural functorial morphism of these line bundles \[ \Phi^* \colon \Phi^* \Omega^1_{D^{\dagger} / (\mathrm{Spec}\, R_1)^{\dagger}} \to \Omega^1_{C^{\dagger} / (\mathrm{Spec}\, R_1)^{\dagger}}, \] cf. \cite[p. 115, 116]{Gross11}. Since the differential of the restriction of $\Phi$ to the central fibre, $\Phi_0$, is zero, this morphism of line bundles $\Phi^*$ factors through $p \Omega^1_{C^{\dagger} / (\mathrm{Spec}\, R_1)^{\dagger}}$ and, dividing by $p$, we get a morphism \[ \tau \colon \Phi^*_0 \Omega^1_{D_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}} \to \Omega^1_{C_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}} \] or, what is the same thing by adjunction, a morphism \[ \tau' \colon \Omega^1_{D_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}} \to (\Phi_0)_*\Omega^1_{C_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}}. \] Now both of these maps are nonzero because away from the log marked or double points of $C_0$, the Cartier operator furnishes an inverse to $\tau'$ as in \cite[p. 8, proof of Lemma I.5.4]{Ray83-2}. But now \cite[Examples 3.36 (6)]{Gross11} tells us that $\Omega^1_{D_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}}$ restricted to $D_0'$ is nothing but $\omega_{D_0}\left( \sum_{i=1}^n x_i\right)$, which we assumed to have positive degree $d>0$, say. Then $\Phi^*_0 \Omega^1_{D_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}}$ will have degree $pd$ on the corresponding component $C_0'$ of $C_0$ (which is just a Frobenius twist of $D_0'$).
This is a contradiction because $\Omega^1_{C_0^{\dagger} / (\mathrm{Spec}\, k)^{\dagger}}$ has the same degree $d$ when restricted to $C_0'$, but there cannot be a nonzero morphism from a line bundle of degree $pd$ to one of degree $d$ for $d>0$. \end{proof} We can now finally put everything together and give the \begin{proof}[Proof of Theorem \ref{tFinitenessGeneralised}] If the conclusion of the theorem were false, then in particular, by Proposition \ref{pRotationalSymmetry} and Lemma \ref{lFiniteCheck}, we would be in the case where Lemma \ref{lFrobeniusLifting} applies. But by Proposition \ref{pGeometryCurve}, each component of $\Gamma_{\BB , k}$ is a nonsingular curve of genus $3$. This contradicts Lemma \ref{lFrobDoesNotLift}. \end{proof} \begin{thebibliography}{9999999999} \bibitem[BF16]{BF16} Bogomolov, F., Fu, H., \emph{Division polynomials and intersection of projective torsion points}, Eur. J. Math. \textbf{2} (2016), no. 3, 644--660 \bibitem[BF17]{BF17} Bogomolov, F., Fu, H., \emph{Elliptic curves with large intersection of projective torsion points}, Eur. J. Math. \textbf{4} (2018), 555--560 \bibitem[BFT18]{BFT18} Bogomolov, F., Fu, H., Tschinkel, Y., \emph{Torsion of elliptic curves and unlikely intersections}, Geometry and physics. Vol. I, Oxford Univ. Press, Oxford, (2018), 19--37 \bibitem[Col87]{Col87} Coleman, R.F., \emph{Torsion points on curves}, Advanced Studies in Pure Mathematics \textbf{12}, in: Galois Representations and Arithmetic Algebraic Geometry (1987), 235--247 \bibitem[De-Ra73]{De-Ra73} Deligne, P., Rapoport, M., \emph{Les sch\'{e}mas de modules de courbes elliptiques}, in: ``Modular Functions of One Variable II", W. Kuyk, P. Deligne (eds.), Lecture Notes in Math. \textbf{349}, Springer (1973) \bibitem[DKY20]{DKY20} DeMarco, L., Krieger, H., Ye, H., \emph{Uniform Manin-Mumford for a family of genus 2 curves}, Ann. of Math. (2) \textbf{191} (2020), no.
3, 949--1001 \bibitem[DGH21]{DGH21} Dimitrov, V., Gao, Z., Habegger, P., \emph{Uniformity in Mordell-Lang for curves}, Ann. of Math. (2) \textbf{194} (2021), no. 1, 237--298 \bibitem[EGM]{EGM} Edixhoven, B., van der Geer, G., Moonen, B., \emph{Abelian Varieties}, online book preprint, available at \href{http://van-der-geer.nl/~gerard/AV.pdf}{http://van-der-geer.nl/~gerard/AV.pdf} \bibitem[EV92]{EV92} Esnault, H., Viehweg, E., \emph{Lectures on Vanishing Theorems}, DMV Seminar Band \textbf{20}, Springer Basel AG (1992) \bibitem[Fin10]{Fin10} Finotti, L., \emph{Lifting the j-invariant: Questions of Mazur and Tate}, Journal of Number Theory \textbf{130} (2010), no. 3, 620--638 \bibitem[FS19]{FS19} Fu, H., Stoll, M., \emph{Elliptic curves with common torsion x-coordinates and hyperelliptic torsion packets}, Proc. Amer. Math. Soc. \textbf{150} (2022), no. 12, 5137--5149 \bibitem[Gao21]{Gao21} Gao, Z., \emph{Recent developments of the uniform Mordell-Lang conjecture}, (2021), \href{https://arxiv.org/abs/2104.03431v4}{https://arxiv.org/abs/2104.03431v4} \bibitem[GGK21]{GGK21} Gao, Z., Ge, T., K\"uhne, L., \emph{The uniform Mordell-Lang conjecture}, (2021), \href{https://arxiv.org/abs/2105.15085v2}{https://arxiv.org/abs/2105.15085v2} \bibitem[Gro72]{Gro72} Grothendieck, A., \emph{Mod\`{e}les de N\'{e}ron et monodromie}, in: A. Grothendieck (Ed.), Groupes de Monodromie en G\'{e}ometrie Alg\'{e}brique, SGA7 I, Lecture Notes in Math., vol. 288, Springer (Berlin) (1972), 313--523 \bibitem[EGAIV]{EGAIV} Grothendieck, A., Dieudonn\'{e}, J., \emph{\'{E}l\'{e}ments de g\'{e}ometrie alg\'{e}brique IV}, Publ. Math. IHES (1966) \bibitem[Gross11]{Gross11} Gross, M., \emph{Tropical Geometry and Mirror Symmetry}, CBMS Regional Conference Series in Mathematics, Number \textbf{114}, AMS (2011) \bibitem[Hart10]{Hart10} Hartshorne, R., \emph{Deformation Theory}, Graduate Texts in Math.
\textbf{257}, Springer-Verlag (2010) \bibitem[Hi12]{Hi12} Hida, H., \emph{Geometric Modular Forms and Elliptic Curves}, Second Edition, World Scientific (2012) \bibitem[Kato96]{Kato96} Kato, F., \emph{Log smooth deformation theory}, T\^{o}hoku Math. J. \textbf{48} (1996), 317--354 \bibitem[Ka78]{Ka78} Katz, N.M., \emph{Serre-Tate local moduli}, in: ``Surfaces Alg\'{e}briques", Lecture Notes in Math. \textbf{868} (1978), 138--202 \bibitem[Kr97]{Kr97} Kraus, A., \emph{D{\'e}termination du poids et du conducteur associ{\'e}s aux repr{\'e}sentations des points de p-torsion d'une courbe elliptique}, Polska Akademia Nauk, Instytut Matematyczny \bibitem[Kueh21]{Kueh21} K\"uhne, L., \emph{Equidistribution in families of abelian varieties and uniformity}, (2021), \href{https://arxiv.org/abs/2101.10272v2}{https://arxiv.org/abs/2101.10272v2} \bibitem[Liu02]{Liu02} Liu, Q., \emph{Algebraic Geometry and Arithmetic Curves}, Oxford Graduate Texts in Mathematics \textbf{6}, Oxford University Press (2002), reprint 2010 \bibitem[MS87]{MS87} Mehta, V.B., Srinivas, V., \emph{Varieties in positive characteristic with trivial tangent bundle}, Compositio Mathematica, tome \textbf{64}, no. 2 (1987), 191--212 \bibitem[Milne17]{Milne17} Milne, J.S., \emph{Algebraic Groups: the theory of group schemes of finite type over a field}, Cambridge Studies in Advanced Mathematics \textbf{170}, Cambridge University Press (2017) \bibitem[Ogus18]{Ogus18} Ogus, A., \emph{Lectures on Logarithmic Algebraic Geometry}, Cambridge Studies in Advanced Mathematics \textbf{178}, Cambridge University Press (2018) \bibitem[Poi22-1]{Poi22-1} Poineau, J., \emph{Dynamique analytique sur $\mathbb{Z}$. I : Mesures d'\'{e}quilibre sur une droite projective relative}, (2022), \href{https://arxiv.org/abs/2201.08480}{https://arxiv.org/abs/2201.08480} \bibitem[Poi22-2]{Poi22-2} Poineau, J., \emph{Dynamique analytique sur $\mathbb{Z}$.
II : \'{E}cart uniforme entre Latt\`{e}s et conjecture de Bogomolov-Fu-Tschinkel}, (2022), \href{https://arxiv.org/abs/2207.01574}{https://arxiv.org/abs/2207.01574} \bibitem[Ray66]{Ray66} Raynaud, M., \emph{Passage au quotient par une relation d'\'{e}quivalence plate}, in: Proceedings of a Conference on Local Fields, Driebergen, 1966, Springer (1967), 79--85 \bibitem[Ray83-1]{Ray83-1} Raynaud, M., \emph{Courbes sur une vari\'{e}t\'{e} ab\'{e}lienne et points de torsion}, Invent. Math. \textbf{71} (1983), 207--233 \bibitem[Ray83-2]{Ray83-2} Raynaud, M., \emph{Around the Mordell conjecture for function fields and a conjecture of Serge Lang}, in: ``Algebraic Geometry", Raynaud, M., Shioda, T. (eds.), Lecture Notes in Mathematics \textbf{1016}, Springer (Berlin, Heidelberg) (1983) \bibitem[Sai13]{Sai13} Saito, T., \emph{Fermat's Last Theorem, Basic Tools}, Translations of Math. Monographs, Vol. \textbf{243}, AMS (2013) \bibitem[Ser06]{Ser06} Sernesi, E., \emph{Deformations of algebraic schemes}, Grundlehren der math. Wissenschaften \textbf{334}, Springer (2006) \bibitem[Se72]{Se72} Serre, J-P., \emph{Propri\'{e}t\'{e}s galoisiennes des points d'ordre fini des courbes elliptiques}, Invent. Math. \textbf{15} (1972), 259--331 \bibitem[Se79]{Se79} Serre, J-P., \emph{Local Fields}, Graduate Texts in Math. \textbf{67}, Springer (1979) \bibitem[Se88]{Se88} Serre, J-P., \emph{Abelian $l$-adic Representations and Elliptic Curves}, Research Notes in Mathematics, Volume \textbf{7}, A K Peters, Ltd. (1998) \bibitem[Sil09]{Sil09} Silverman, J.H., \emph{The Arithmetic of Elliptic Curves}, Graduate Texts in Math. \textbf{106}, 2nd Edition, Springer-Verlag (2009) \bibitem[Sil94]{Sil94} Silverman, J.H., \emph{Advanced Topics in the Arithmetic of Elliptic Curves}, Graduate Texts in Math. \textbf{151}, 2nd Edition, Springer-Verlag (1994) \bibitem[Sm23]{Sm23} Smith, H., \emph{Ramification in division fields and sporadic points on modular curves}, Research in Number Theory
volume \textbf{9}, article no. 17, Springer (2023) \bibitem[TV10]{TV10} Talpo, M., Vistoli, A., \emph{Deformation theory from the point of view of fibered categories}, (2010), \href{https://arxiv.org/abs/1006.0497}{arXiv:1006.0497} \bibitem[Ta97]{Ta97} Tate, J., \emph{Finite flat group schemes}, in: ``Modular Forms and Fermat's Last Theorem", G. Cornell, J.H. Silverman, G. Stevens (eds.), Springer (1997) \bibitem[Water79]{Water79} Waterhouse, W.C., \emph{Introduction to Affine Group Schemes}, Graduate Texts in Math. \textbf{66}, Springer-Verlag (1979) \bibitem[Za12]{Za12} Zannier, U., \emph{Some Problems of Unlikely Intersections in Arithmetic and Geometry}, Annals of Math. Studies \textbf{181}, Princeton University Press (2012) \end{thebibliography} \end{document}
2412.20282v1
http://arxiv.org/abs/2412.20282v1
Invariance of intrinsic hypercontractivity under perturbation of Schrödinger operators
\documentclass[12pt]{article} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{amsmath} \usepackage{hyperref} \usepackage{color} \usepackage[mathscr]{eucal} \def\func{\text} \newtheorem{theorem}{Theorem}[section] \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{sublemma}[theorem]{Sublemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary} \newtheorem{properties}[theorem]{Properties} \newtheorem{terminology}[theorem]{Terminology} \newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}} \def \<{\langle} \def \>{\rangle} \def \al{\alpha} \def \ab{{\boldsymbol{\alpha}}} \def \as{\mathsf{a}} \def \bs{\mathsf{b}} \def \bb{{\boldsymbol{\beta}}} \def \A{{\cal A}} \def \Ab{{\bf A}} \def \As{\mathsf{A}} \def \At{\boldsymbol{\it A}} \def \B{{\mathcal B}} \def \Bb{{\bf B}} \def \Bs{\mathsf{B}} \def \Bt{\boldsymbol{\it B}} \def \c{\text{curv}\ } \def \C{\mathbb C} \def \ds{\mathsf{d}} \def \D{{\mathcal D}} \def \Db{{\bf D}} \def \Ds{\mathsf{D}} \def \Dt{\boldsymbol{\it D}} \def \ep{\epsilon} \def \E{{\mathcal E}} \def \Eb{{\bf E}} \def \Es{\mathsf{E}} \def \Et{\boldsymbol{\it E}} \def \fsk{{\mathsf{f}_\kappa}} \def \F{{\mathcal F}} \def \hs{\mathsf {h}} \def \Hb{{\bf H}} \def 
\Hs{\mathsf{H}} \def \Ht{\boldsymbol{\it H}} \def \Pb{{\bf P}} \def \Pbs{{\boldsymbol{\mathsf P}}} \def \R{\mathbb R} \def \Ra{\mathcal R} \def \H{{\mathcal H}} \def \hel{{\mathcal Hel}} \def \Hoo{{\cal H}or^0} \def \Hz{{\cal H}oriz} \def \Hzo{{\cal H}oriz^0} \def \G{{\cal G}} \def \Ga{\Gamma} \def \M{{\cal M}} \def \P{{\cal P}} \def \S{{\cal S}} \def \T{{\cal T}} \def \cg{{\cal C}} \def \ac{a{\cal C}} \def \Y{{\cal Y}} \def \a{{\bf a}} \def \ad{{\mathcal a}} \def \hf{{\mathfrak h}} \def \Hf{{\mathfrak H}} \def \k{{\bf k}} \def \kf{\frak k} \def \gf{\frak g} \def \kfc{\frak k^\C} \def \e{{\bf e}} \def \b{{\bf b}} \def \ga{\gamma} \def \ka{\kappa} \def \kh{\hat k} \def \l{\lambda} \def \L{\Lambda} \def \op{\oplus} \def \pb{{\bf p}} \def \r{\rho} \def \sd{\boldsymbol{\mathsf{sd}}} \def \t{\otimes} \def \ua{\mathsf{u}} \def \ub{{\bf u}} \def \va{\mathsf{v}} \def \vb{{\bf v}} \def \wb{{\bf w}} \def \wa{{\mathsf w}} \def \w{\omega} \def \xb{{\bf x}} \def \wc{\small{\mathcal W} \normalsize} \def \Wo{{\mathcal W}} \def \x{\xi} \def \t{\theta} \def \y{\eta} \def \z{\zeta} \def \zb{\bar\zeta} \def \p{\partial} \def \beq{\begin{equation}} \def \eeq{\end{equation}} \def \ben{\begin{align}} \def \een{\end{align}} \def \enda{\end{align}} \def \u{\text{curl}} \def \en{{\bf n}} \def \d{{\text div}_A} \def \g{{\text grad}_A} \def \n{\nabla} \def \nl{ {\Large $\n$} } \def \nm{ \text{{\Large $\n$}}} \def \no{ {\large $\n$} } \def \np{ \text{{\large $\n$}}} \def \eref{\eqref} \def \V{\mathcal V} \def \W{\Omega} \def \s{\sigma} \def \lrc{\lrcorner\,} \def \({\Big(} \def \){\Big)} \numberwithin{equation}{section} \def \ws{\text{\scriptsize{$\mathcal W$} \normalsize}} \def \wf{\text{\footnotesize{$\mathcal W$} \normalsize}} \def \Us{\mathsf{U}} \def \Ws{\mathsf{W}} \begin{document} \title{Invariance of intrinsic hypercontractivity under perturbation of Schr\"odinger operators \footnote{\emph{Key words and phrases.} Perturbation of Schr\"odinger operators, intrinsic hypercontractivity, 
logarithmic Sobolev inequalities. \newline \indent \emph{2020 Mathematics Subject Classification.} Primary 81Q15, 47D08; Secondary 35J10, 35B20, 60J46.} \author{Leonard Gross \\ Department of Mathematics\\ Cornell University\\ Ithaca, NY 14853-4201\\ {\tt [email protected]}} } \maketitle \begin{abstract} A Schr\"odinger operator that is bounded below and has a unique positive ground state can be transformed into a Dirichlet form operator by the ground state transformation. If the resulting Dirichlet form operator is hypercontractive, Davies and Simon call the Schr\"odinger operator ``intrinsically hypercontractive". I will show that if one adds a suitable potential onto an intrinsically hypercontractive Schr\"odinger operator, it remains intrinsically hypercontractive. The proof uses a fortuitous relation between the WKB equation and logarithmic Sobolev inequalities. All bounds are dimension independent. The main theorem will be applied to several examples. \end{abstract} \tableofcontents \newpage \section{Introduction} An operator whose quadratic form is a Dirichlet form has some particularly nice properties. Suppose that $m$ is a measure on a Riemannian manifold $X$ and $A$ is a self-adjoint operator, densely defined in $L^2(X,m)$, such that \begin{align} (Af, g)_{L^2(X,m)} = \int_X \<\n f, \n g\> dm \label{I1} \end{align} for an appropriate set of functions $f$ and $g$. Here $\<\cdot, \cdot\>$ denotes the Riemannian metric. Such operators have been studied systematically for many years \cite{BD58,BD59,Sil74,Fu80,BH91,MR92,FOT94,ChF2012}. Some divergence form operators are included in this class \cite{GT}. The semigroup $e^{-tA}$ associated to such an operator is positivity preserving, is the generator of a Markov process, is a contraction in all $L^p$ spaces, and frequently has useful smoothing properties.
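A basic example, standard and recorded here only for orientation (it is not specific to this paper): for the Gaussian measure on $\R^n$, the operator in \eref{I1} is the Ornstein--Uhlenbeck operator. Take $X = \R^n$ and $dm = (2\pi)^{-n/2} e^{-|x|^2/2}\, dx$. For smooth compactly supported $f$ and $g$, integration by parts gives
\[
\int_{\R^n} \<\n f, \n g\> dm = \int_{\R^n} \big( -\Delta f + x\cdot \n f \big)\, g \, dm,
\]
since $\n \big(e^{-|x|^2/2}\big) = -x\, e^{-|x|^2/2}$. Thus $A = -\Delta + x\cdot \n$ satisfies \eref{I1} for this choice of $m$.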
There is an equivalence between hypercontractivity properties of the semigroup $e^{-tA}$ and coercivity properties of its Dirichlet form generator $A$ \cite{G1}. The latter take the form of logarithmic Sobolev inequalities. A Schr\"odinger operator with a non-zero potential is not a Dirichlet form operator, but can often be unitarily transformed into one: Suppose that the operator $H:= -\Delta +V$ acts in $L^2(\R^n, dx)$ and has an eigenvalue $\l_0$ at the bottom of its spectrum with multiplicity one. The corresponding normalized eigenfunction $\psi$ may typically be chosen to be strictly positive almost everywhere. The measure $dm_\psi := \psi^2 dx$ is then a probability measure on $\R^n$ and the map $U: f\mapsto f\psi$ is a unitary operator from $L^2(m_\psi)$ onto $L^2(\R^n, dx)$, as is easily verified. A simple computation shows that the operator $\hat H :=U^{-1}(H - \l_0)U$, which acts in $L^2(\R^n, m_\psi)$, will then be the Dirichlet form operator for $m_\psi$. That is, $(\hat H f,g)_{L^2(m_\psi)} = \int_{\R^n} \<\n f(x), \n g(x)\> dm_\psi(x)$, where $\<\cdot,\cdot\>$ is the inner product on $\R^n$. The semigroups $e^{-t(H-\l_0)}$ and $e^{-t\hat H}$ are unitarily equivalent via $U$, but differ in very important respects. The transformation of the Schr\"odinger operator $H$ into the Dirichlet form operator $\hat H$ is nowadays called the ground state transformation. An early incarnation of this transformation goes back to an 1837 paper of Jacobi, \cite{J1837}, whose interest was to remove the zeroth order term from an ordinary differential operator. Indeed $\hat H$ has no zeroth order term. The potential $V$ is now encoded in the measure $m_\psi$. The ground state transformation was used in \cite{G1} to produce Dirichlet form operators from Schr\"odinger operators.
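For the reader's convenience, here is the simple computation behind the last two assertions (it is classical, stated here for $\psi$ smooth and strictly positive). Since $-\Delta \psi + V\psi = \l_0 \psi$, expanding $-\Delta(\psi f) = -\psi \Delta f - 2\<\n \psi, \n f\> - f \Delta \psi$ gives
\begin{align*}
\hat H f = \psi^{-1}(H - \l_0)(\psi f) = -\Delta f - 2\<\n \log \psi, \n f\>,
\end{align*}
an operator with no zeroth order term. Integrating against $g\, dm_\psi = g \psi^2 dx$ and using
\[
\int_{\R^n} \<\n f, \n (g\psi^2)\> dx = \int_{\R^n} \big( \<\n f, \n g\> \psi^2 + 2 g \psi \<\n f, \n \psi\> \big)\, dx,
\]
the cross terms cancel and one recovers $(\hat H f, g)_{L^2(m_\psi)} = \int_{\R^n} \<\n f, \n g\> dm_\psi$.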
The notions of {\it intrinsic hypercontractivity} and {\it intrinsic ultracontractivity} were introduced by Davies and Simon in their paper \cite{DS84}: Suppose that the operator $\hat H$ above is hypercontractive or ultracontractive in the sense that the semigroup $e^{-t\hat H}$ is hypercontractive, resp. ultracontractive in $L^2(\R^n, m_\psi)$. They then call $H$ itself {\it intrinsically hypercontractive} (resp. {\it intrinsically ultracontractive$)$}. They showed that intrinsic ultracontractivity is invariant under perturbation of the potential $V$ by a bounded potential but left open the question as to whether intrinsic hypercontractivity is also invariant under perturbation by bounded potentials. The goal of this paper is to show that intrinsic hypercontractivity for semigroups generated by Schr\"odinger operators is invariant under perturbation of the potential by a class of unbounded potentials, including all bounded potentials in particular. We will do this in a dimension independent way over arbitrary Riemannian manifolds. We also show, by examples, how to combine this perturbation theorem with the convexity techniques of the Bakry-Emery method to produce a large class of Dirichlet forms satisfying a logarithmic Sobolev inequality. A proof of invariance of intrinsic hypercontractivity requires showing that if $-\Delta + V_1$ is intrinsically hypercontractive then $-\Delta + V_1 + V$ is also intrinsically hypercontractive under suitable conditions on $V$. The ground state transformation for $-\Delta + V_1 + V$ can be realized as the composition of two successive ground state transformations, one for $-\Delta + V_1$, giving a Dirichlet form operator $\hat H_1$, and a second one for the Schr\"odinger operator $\hat H_1 + V$. We will elaborate on this composition property of the ground state transformation in Section \ref{secgst}. By hypothesis, the Dirichlet form operator $\hat H_1$ is hypercontractive. 
Using this and the known equivalence of hypercontractivity to logarithmic Sobolev inequalities, the invariance of intrinsic hypercontractivity can be phrased directly in terms of the perturbation of a Dirichlet form as follows: Suppose that $m$ is a probability measure on a Riemannian manifold $X$ and that the logarithmic Sobolev inequality \begin{align} Ent_m(u^2) \le 2c \int_X |\n u|^2 dm \label{I4} \end{align} holds for some constant $c$. (Here $m$ plays the role of the ground state measure for $-\Delta + V_1$ in the example of the preceding paragraph.) Denote by $\n^*\n$ the Dirichlet form operator associated to $m$. It is defined, as in \eref{I1}, by $(\n^*\n u, v)_{L^2(m)} = \int_X \<\n u, \n v\> dm$. (Thus $\n^*\n = \hat H_1$ in the example.) Let $V$ be a potential on $X$. If the Schr\"odinger operator $\n^*\n + V$ has an eigenvalue $\l_0$ of multiplicity one at the bottom of its spectrum with a normalized, a.e. strictly positive eigenfunction $\psi$ then the ground state transformation for $\n^*\n + V$ associates to $\psi$ the new ground state measure $m_\psi = \psi^2 m$ and its corresponding Dirichlet form $\int_X |\n f|^2 dm_\psi$. The problem of invariance of intrinsic hypercontractivity asks for conditions on $V$ that will ensure that the new Dirichlet form also satisfies a logarithmic Sobolev inequality. We will prove that if \eref{I4} holds and if there are constants $\ka >0$ and $\nu > 2c$ such that \beq M:= \|e^V\|_{L^\ka(m)}\| e^{-V}\|_{L^\nu(m)} < \infty \label{I5} \eeq then the operator $\n^*\n + V$ is bounded below, the bottom of its spectrum is an eigenvalue of multiplicity one, there is a normalized ground state $\psi >0$ a.e. and there is a constant $c_1$ such that \begin{align} Ent_{m_\psi}(f^2) \le 2c_1 \int_X |\n f|^2 dm_\psi. \label{I6} \end{align} Moreover there are constants $a$ and $b$ depending only on $c,\ka$ and $\nu$ such that $c_1 \le a M^b$. 
In particular, the Schr\"odinger operator $\n^*\n +V$ has a gap at the bottom of its spectrum of at least $2M^{-b}/a$. All bounds are dimension independent. This is the main theorem of the present paper. \bigskip There is a large literature on a related problem: Suppose that $F:X \to \R$ is measurable and $\int_X e^{-2F} dm =1$ for some probability measure $m$. Then $m^F := e^{-2F} dm$ is another probability measure and one can ask for conditions on $F$ which ensure that the Dirichlet form for $m^F$ satisfies a logarithmic Sobolev inequality when $m$ does. If, given a potential $V$ with its ground state $\psi$, one puts $F = -\log \psi$ then $m_\psi = m^F$ and the desired conclusion is the same for the two perturbation problems. But the hypotheses are very different. For us it is essential to impose conditions only on the potential $V$ and deduce from them any properties of $\psi$ that may be needed for proving \eref{I6}. If, on the other hand, one takes $F$ as the primary data rather than $V$, then it is natural to impose conditions directly on $F$. This is the case for the application of logarithmic Sobolev inequalities to classical statistical mechanics, and is frequently used in the application of logarithmic Sobolev inequalities to large deviations, concentration of measure and optimal transport. An early perturbation theorem taking $F$ as the given data is the Deuschel-Holley-Stroock (DHS) theorem, \cite{HS1987,DS1990}, which asserts that boundedness of $F$ is a sufficient condition. One may take $c_1 = c\exp(\mathrm{osc}\, 2F)$ in \eref{I6}. (cf. also \cite[Proposition 3.1.18]{Roy07} or \cite[Proposition 5.1.6]{BGL} for a proof of this.) The two papers \cite{HS1987,DS1990} link logarithmic Sobolev inequalities with classical statistical mechanics. See also e.g. Royer, \cite{Roy99,Roy07}, Guionnet and Zegarlinski, \cite{GZ03}, Helffer \cite{Helffer2002}, and Ledoux, \cite{Led2001u}, for further early expositions of this connection with classical statistical mechanics.
See Ledoux, \cite{Led1999, Led2000, Led2001} for expositions of the connection with concentration of measure, and see Villani, \cite{Vil2003,Vil2009} and Gigli-Ledoux \cite{LedG2013} for expositions of the connection with optimal transport. Whether one perturbs the measure $m$ directly, by inserting a density $e^{-2F}$, or perturbs $m$ indirectly, via the Schr\"odinger equation, the identities that accompany the ground state transformation play a central role, as will be explained in Section \ref{secgs}. Even if $F$ is the primary object, these identities suggest the use of hypotheses on $F$ that include its relation to an artificial potential $V_F$, constructed from $F$, for which the ground state of $\n^*\n + V_F$ is exactly $e^{-F}$. Many works hypothesize conditions on $F$, which are in fact conditions on a combination of $F$ and $V_F$. Further historical discussion of this will be given in Section \ref{secgs} after more details of the ground state transformation are described and also in Section \ref{secwp}, which contains some comparisons of results. \bigskip Several papers aimed at developing techniques for proving spectral gaps and logarithmic Sobolev inequalities directly over infinite dimensional spaces led to some of the methods that we will be building on. S. Kusuoka, \cite{Kus1991}, \cite{Kus1992}, seeking an infinite dimensional analog of the Hodge-deRham theorem for an open subset of an abstract Wiener space, developed a method for proving a weak kind of spectral gap for a Dirichlet form over an infinite dimensional manifold. Aida, Masuda and Shigekawa \cite{AMS94}, \cite{AS94} proved a perturbation theorem for Gaussian measure on an abstract Wiener space that imposed hypotheses on the perturbing density $e^{-2F}$. They replaced the hypothesis that $F$ be bounded, required in the DHS theorem, by a size condition on the gradient of $F$.
The notions of spectral gap and positivity improving were themselves better understood through various kinds of weaker or stronger versions developed further by M. Hino, \cite{Hino1997}, \cite{Hino2000}, S. Aida, \cite{Aida1998}, \cite{Aida2001}, Gong and Ma, \cite{GongMa1998a}, Liming Wu, \cite{Wu2000}, P. Mathieu, \cite{Mat98}, M. Rockner and F-Y Wang, \cite{RW2001}, and culminating in the resolution, by Gong and Wu \cite{GongWu2000} and F. Gong, M. R\"ockner and L. Wu in \cite{GRW2001}, of a spectral gap conjecture for loop groups made in \cite{G93}, which was itself aimed at proving a Hodge-deRham type theorem over loop groups. See the introductions to \cite{GRW2001} and \cite{Aida2001} for histories of these techniques up to that time and in particular see Remark 4.13 in \cite{Aida2001} for an illuminating comparison of some of the historical conditions on the log density $F$. See \cite{CLW2011} for later historical perspective and development of more quantitative bounds on the rate function for the weak Poincar\'e inequality over loop spaces. \bigskip This paper depends heavily on techniques developed by Aida, \cite{Aida2001}. Aida derived a lower bound on the spectral gap of the perturbed operator largely in terms of information about the distribution of the ground state wave function $\psi$. We will build on his techniques. We will first derive bounds on $\|\psi^{-1}\|_{L^s(m)}$, for some $s >0$, that depend only on $c,\kappa, \nu$ and $M$. We will use these bounds to derive a defective logarithmic Sobolev inequality and then use them again to derive information needed about the distribution of $\psi$ for producing a spectral gap via Aida's method. Rothaus' theorem \cite{Rot5} in the form of \cite[Proposition 5.1.3]{BGL}, then yields \eref{I6}. All bounds are quantitatively dependent on the input data $c,\ka, \nu$ and $ M$. 
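The passage from entropy bounds to moment bounds will be made with Herbst's method, whose mechanism can be sketched in two lines. For a measurable function $G$ on $X$ put $h(t) = t^{-1}\log \int_X e^{tG}\, dm$ for $t>0$. A direct computation gives
\begin{align*}
h'(t) = \frac{Ent_m(e^{tG})}{t^2 \int_X e^{tG}\, dm},
\end{align*}
so an entropy bound of the form $Ent_m(e^{tG}) \le \beta(t)\int_X e^{tG}\, dm$, together with $h(t) \to \int_X G\, dm$ as $t \downarrow 0$, integrates to the moment bound $\log\|e^{G}\|_{L^t(m)} \le \int_X G\, dm + \int_0^t s^{-2}\beta(s)\, ds$.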
\bigskip \section{Statements} \subsection{The main theorem} \label{secmainthm} \begin{notation} \label{notso} {\rm (Schr\"odinger operator in its ground state representation). Denote by $X$ a Riemannian manifold, by $dx$ its Riemann-Lebesgue measure and by $\n$ the gradient operator. $m$ will denote a measure on $X$ with a density: $dm = \rho^2 dx$ with $\rho >0$ and $\n \rho \in L^2_{loc}(dx)$. The adjoint of the gradient operator with respect to $m$ is defined on smooth vector fields over $X$ by \begin{align} \int_X (\n^* v)h\, dm = \int_X v\cdot (\n h) dm \ \ \text{for all}\ \ h \in C_c^\infty(X). \label{div1} \end{align} Here we have written $v\cdot u = g(v, u)$, where $g$ is the Riemannian metric and $v$ and $u$ are vector fields. The technical condition on $\rho$ ensures that $\n^* v \in L^2_{loc}(m)$ for every smooth vector field $v$ on $X$, (cf. \cite[Theorem 3.1.3]{FOT94}). Then \begin{align} (\n^* \n f, g)_{L^2(m)} = \int_X \n f \cdot \n g\, dm\ \ \text{for all}\ f,g \in C_c^\infty(X). \label{div2} \end{align} The Dirichlet form on the right is closable in $L^2(m)$, (cf. \cite[Theorem 3.1.3]{FOT94}). Its closure is associated to a non-negative self-adjoint operator, which we refer to as the Dirichlet form operator associated to $m$ and $g$. We denote it by $\n^*\n$. For example if $m$ is Lebesgue measure on $\R^n$ and $g$ is the Euclidean metric then $\n^*\n = - \Delta$ with its usual self-adjoint domain in $L^2(\R^n, dx)$. Let $V$ be a real valued function on $X$. The Schr\"odinger operator we are interested in is given informally by \beq H = \n^*\n + V. \eeq We will impose conditions on $V$ which ensure that this expression is essentially self-adjoint, that $\l \equiv \inf(\text{spectrum}\, H)$ is an eigenvalue with multiplicity one and that $H$ has a corresponding normalized eigenfunction $\psi$ which is strictly positive a.e. on $X$. The corresponding ground state measure $m_\psi$ is given by \beq d m_\psi = \psi^2 dm. 
\eeq $m_\psi$ is a probability measure on $X$ and has its own Dirichlet form operator $\n_\psi^* \n$ acting in $L^2(m_\psi)$ and given by \begin{align} (\n_\psi^*\n f, g)_{L^2(m_\psi)} = \int_X \n f \cdot \n g\, dm_\psi. \end{align} The map $U: L^2(m_\psi) \to L^2(m)$ defined by \beq U f = f \psi \eeq is clearly unitary. It is a standard computation, which we will repeat in Section \ref{secgs}, to show that $U$ intertwines $H -\l$ with $\n_\psi^*\n$: \beq U^{-1}(H - \l) U = \n_\psi^*\n. \eeq Thereby the ground state transformation $U$ converts the Schr\"odinger operator $H-\l$ to another Dirichlet form operator. In case $m$ is a probability measure we define the $m$ entropy of a non-negative integrable function $f$ by \begin{align} Ent_m(f) = \int_X f \log f dm - \(\int_X f dm\)\(\log \int_X f dm\). \label{E1} \end{align} } \end{notation} \bigskip \noindent \begin{theorem} \label{thmM} $($Main theorem$)$. Assume that $m(X) =1$ and that \begin{align} &1. \ \ Ent_m (u^2) \le 2c \int_X |\n u|^2 dm. \label{mt1}\\ &2. \ \ \|e^V\|_\kappa < \infty \ \text{and}\ \|e^{-V}\|_\nu < \infty\ \text{for some}\ \ \ka >0\ \text{and}\ \ \nu > 2c. \label{mt2} \end{align} Then a. $ \n^*\n + V$ is essentially self-adjoint on $\D(\n^*\n) \cap L^\infty$. Let $H =$ closure of $\n^*\n + V$. b. $\l_0 \equiv \inf$ spectrum $H$ is an isolated eigenvalue of multiplicity one. It has an eigenfunction $\psi > 0\ a.e.$ with $ \int_X \psi^2 dm =1$. c. Let \begin{align} M =\|e^V\|_\kappa \|e^{-V}\|_\nu. \label{mt3} \end{align} There is a constant $c_1$ depending only on $c, \kappa, \nu$ and $M$, such that \beq Ent_{m_\psi}(f^2) \le 2c_1 \int_X |\n f|^2 dm_\psi. \label{mt5} \eeq d. In particular $H$ has a spectral gap of at least $1/c_1$ above the eigenvalue $\l_0$. e. There are constants $\alpha$ and $\beta$, depending only on $c, \ka, \nu$, such that $c_1 \le \alpha M^\beta$ and therefore $H$ has a spectral gap above $\l_0$ of at least $\alpha^{-1}M^{-\beta}$. 
\end{theorem} \begin{remark}\label{remsg1} {\rm (Spectral gap). Our procedure for proving \eref{mt5} requires proving both a Poincar\'e inequality for $m_\psi$ and a defective logarithmic Sobolev inequality. The spectral gap associated to this Poincar\'e inequality is typically larger than the one listed in item d. See Remark \ref{remsg2} for more details. } \end{remark} \begin{remark} \label{remsum} {\rm (Overview). The main ingredient in the proof of Theorem \ref{thmM} is the derivation of $L^p(m)$ bounds for the ground state $\psi$ and for its inverse $1/\psi$. Bounds on $\|\psi\|_{L^p(m)}$ can be derived from hyperboundedness estimates for the Schr\"odinger operator $\n^*\n +V$ by techniques that were initially developed in the early 1970's for the purposes of constructive quantum field theory. In addition to the logarithmic Sobolev inequality \eref{mt1} the key hypothesis needed for this step is the assumption that $\| e^{-V}\|_\nu < \infty$ for some $ \nu >2c$, but not the drastic condition $\|e^{V}\|_\ka < \infty$. The proofs of essential self-adjointness of $\n^*\n +V$ and the existence and uniqueness of its ground state also depend only on these two hypotheses and not on the condition $\| e^V\|_\ka < \infty$. The proofs and bounds on $\| \psi\|_{L^p(m)}$ are given in Section \ref{sechyperp}. The techniques needed to establish bounds on $\|(1/\psi)\|_p$ are very different. They have their origin partly in the work of Aida, \cite{Aida2001}, which was itself motivated by attempts to prove a Hodge-deRham theorem over certain infinite dimensional loop spaces. Aida derived information about the distribution of $\log \psi$, which he needed to prove a spectral gap, from an identity related to the WKB equation. We will see that Aida's identity also bears a fortuitous relation to logarithmic Sobolev inequalities. We will use this relation to derive bounds on the entropy of $\psi^{-s}$ for small positive $s$. 
From this, using Herbst's method, we will derive bounds on $\|\psi^{-s}\|_{L^1(m)}$ for such $s$. These bounds make use of the strong condition $\|e^{V}\|_\ka < \infty$ assumed in \eref{mt2}. These steps are carried out in Sections \ref{secpm} and \ref{seclp}. Our bounds on $\|\psi\|_p$ and $\|\psi^{-1}\|_s$ allow us to derive a defective logarithmic Sobolev inequality for the ground state measure $m_\psi$. The final step in proving \eref{mt5} consists in removing the defect by proving a spectral gap for $\n^*\n + V$ (or equivalently, for the Dirichlet form operator for $m_\psi$) and then applying Rothaus' theorem. Our technique for proving a spectral gap is largely due to Aida, \cite{Aida2001}. We are able to make some simplifications of his method by using our $L^p(m)$ bounds for $\psi^{\pm 1}$. These bounds will allow us to derive the quantitative bounds on $c_1$ given in item e. of Theorem \ref{thmM}. } \end{remark} \subsection{Non-standard hyperboundedness in $L^p(m)$} \label{secns} We will establish logarithmic Sobolev inequalities for the operator $\n^*\n +V$ in the spaces $L^p(m)$ and derive corresponding hyperboundedness in these spaces in order to prove existence, uniqueness and properties of its ground state. This must be done before transforming to the ground state representation. Since $\n^*\n + V$ is not a Dirichlet form operator the minimum time to boundedness from $L^q(m)$ to $L^p(m)$ of $e^{-t(\n^*\n +V)}$ takes a different form from Nelson's classical time. Moreover $q$ and $p$ must be restricted to a small neighborhood of 2 for any such boundedness to hold. We will see by example in Section \ref{secgp} that the peculiar restrictions on $q$ and $p$ in Corollary \ref{corhb2} are not artifacts of the proof. \begin{notation} {\rm The quadratic equation \beq p\frac{p}{p-1} = 2\nu/c \label{L313m} \eeq is self-conjugate in the sense that it is invariant under the map $ p \to p/(p-1)$. 
If $\nu > 2c$ then it has two solutions, which are conjugate exponents as we will see in Section \ref{sechyperp}. Denote them by $q_0, p_0$ with $1 < q_0 <2 < p_0 < \infty$. } \end{notation} \begin{theorem} \label{thmns2} Assume that \eref{mt1} holds. Suppose that $\nu > 2c$ and that $\|e^{-V}\|_{L^\nu(m)} < \infty$. Suppose also that $V \in L^{p_1}(m)$ for some $p_1 \ge 2p_0/(p_0-2)$. Then $\n^*\n +V$ is essentially self-adjoint. Its closure $H$ is bounded below. The semigroup $e^{-tH}$ that it generates extends uniquely to a strongly continuous semigroup of bounded operators on $L^q$ for $q \in [q_0,2]$ and restricts to a strongly continuous semigroup of bounded operators on $L^q$ for $q \in [2, p_0]$. If $q_0 < p <p_0$ then \begin{align} &Ent_m(|u|^p) \le pc_\nu(p) \<(H+ \log \|e^{-V}\|_\nu) u, u_p\>_{L^2(m)}\ \ \label{L325} \end{align} for $u$ in the $L^p$ domain of $H$, where $u_p = (\text{sgn}\, u) |u|^{p-1}$ and \begin{align} c_\nu (p) = \frac{\nu p}{(p_0-p)(p-q_0)}\ \ \ \text{for}\ \ \ p \in (q_0, p_0). \label{L329} \end{align} In particular, at $p=2$ the defective logarithmic Sobolev inequality \begin{align} &Ent_m(u^2) \le 2c_\nu \<(H+ \log \|e^{-V}\|_\nu) u, u\>_{L^2(m)}\ \ \label{L325h} \end{align} holds with \begin{align} c_\nu = \frac{c}{1- (2c/\nu)}. \label{L329a} \end{align} \end{theorem} \begin{corollary} \label{corhb2} $($Non-standard hyperboundedness$)$. Continuing the notation and assumptions of Theorem \ref{thmns2}, let \begin{align} a_\nu &= \sqrt{1 - (2c/\nu)} \ \ \ \ \ \text{and} \label{L341d}\\ \tau(p) &= \frac{c}{2a_\nu} \log\frac{q_0^{-1} - p^{-1}}{p^{-1} - p_0^{-1}},\ \ \ q_0 < p < p_0. \label{L505} \end{align} Then \begin{align} \|e^{-tH} \|_{q \to p} \le \| e^{-V}\|_\nu^t\ \ \ \ \text{for}\ \ t \ge \tau(p) - \tau(q) \ \ \text{ if}\ \ \ q_0 < q \le p <p_0. \label{L289} \end{align} Moreover, if $ q \in [q_0, p_0]$ then \beq \|e^{-tH} \|_{q\to q} \le \| e^{-V}\|_\nu^t\ \ \ \ \ \ \text{for all}\ \ t \ge 0. 
\label{L291} \eeq For fixed $q$ and $p$ in $(q_0, p_0)$ with $q\le p$ the function $t_{q,p}\equiv \tau(p) - \tau(q)$ decreases as $\nu$ increases. \end{corollary} \begin{remark} {\rm The function $\tau(p)$ does not give the standard Nelson time to contraction in \eref{L289}. The Nelson time is determined by $\tau_0(p) = (c/2) \log(p-1)$. (See e.g. \cite{G1}.) But if $V$ is bounded below, then we may let $\nu \uparrow \infty$ and, as we will see in Section \ref{secbelow}, $\tau(p)-\tau(q) \downarrow \tau_0(p) - \tau_0(q)$. } \end{remark} \begin{corollary} \label{corEU} Under the assumptions of Theorem \ref{thmns2}, $\l_0 \equiv$ inf spectrum $H$ is an eigenvalue of multiplicity one. It has an eigenvector $\psi$ which is strictly positive a.e. \end{corollary} The proofs will be given in Section \ref{sechyperp}. We will also establish upper bounds on $\|\psi\|_p$ for $2 < p < p_0$ and lower bounds on $\|\psi\|_r$ for $0 < r < 2$. \subsection{A product of moments} \label{secpms} The Schr\"odinger equation for the ground state $\psi$ can be written simply in WKB form. Let $F = - \log \psi$. Since $\psi$ is strictly positive almost everywhere, $F$ is real valued almost everywhere. A computation, which will be sketched in Remark \ref{remWKB}, yields \begin{align} \n^*\n F + |\n F|^2 = V - \l_0.\ \ \ \text{WKB} \label{wkb} \end{align} Suppose that $v$ is a real valued function on $\R$. Multiply \eref{wkb} by the composed function $v\circ F$ and, using $\n (v\circ F) = v'(F) \n F$, integrate over $X$ to find informally, after an integration by parts, \begin{align} \int_X (v'(F) + v(F)) |\n F|^2 dm = \int_X v(F) (V -\l_0) dm \ \ \ \ \text{Aida's identity} \label{W30} \end{align} A more precise derivation will be given in Theorem \ref{thmA3}. Aida used this identity (cf. \cite[Eq. (3.26) in Lemma 3.3]{Aida2001}) to derive information about the distributions of $|\n F|$, $F$ and $\psi$, which was crucial for his proof of a spectral gap.
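For example, taking $v(t) = e^{2st}$ in \eref{W30} for a real parameter $s$, so that $v\circ F = e^{2sF} = \psi^{-2s}$ and $v' + v = (2s+1)v$, the identity becomes
\begin{align*}
(2s+1)\int_X \psi^{-2s} |\n F|^2\, dm = \int_X \psi^{-2s}(V - \l_0)\, dm,
\end{align*}
which, for $s > -1/2$, bounds the $\psi^{-2s}$-weighted Dirichlet integral of $F$ by a moment of the potential.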
We will exploit Aida's identity in a different way. Suppose that $\phi$ is a real valued function on $\R$. We may apply the logarithmic Sobolev inequality \eref{mt1} to the composed function $\phi \circ F$ to find \begin{align} Ent_m((\phi\circ F)^2) \le 2c \int_X (\phi'\circ F)^2 |\n F|^2 dm, \label{pm5} \end{align} wherein we have used $\n (\phi\circ F) = (\phi'\circ F) \n F$. If $\phi$ and $v$ are chosen in \eref{pm5} and \eref{W30} so that the two integrands involving $|\n F|^2$ are equal then Aida's identity, together with \eref{pm5}, gives a bound on $Ent_m((\phi\circ F)^2)$ in terms of the potential. In this way the quadratic nonlinearity in the WKB equation meshes well with the use of logarithmic Sobolev inequalities. We will show that this procedure can be carried out for several different kinds of functions $\phi$. In particular, taking $\phi(s) = e^{ts}$ (giving $\phi\circ F = \psi^{-t}$), we will derive entropy bounds on $\psi^{-t}$ for $t$ in an open interval containing zero. We will then derive moment bounds from these entropy bounds using Herbst's method. The interval of $t$ for which this procedure works depends on $\ka$ in the condition \eref{mt2}, and on the solutions to the quadratic equation \eref{W752}. We will show by example in Section \ref{secpp} that the peculiar interval of $t$ for which this procedure works is not an artifact of the proof. The moment bounds that we arrive at take the form of a bound on a product of moments, as in the following simplified theorem. \begin{theorem} \label{thmmp0} Suppose that the hypotheses of Theorem \ref{thmns2} hold. Let $\ka >0$. Assume that \begin{align} \|e^V\|_\ka < \infty. \label{W751} \end{align} Let $s_0$ and $-r_0$ be the positive and negative roots of the quadratic equation \begin{align} t^2 - (2\ka/c)(t+1) = 0.
\label{W752} \end{align} Then there is a function $f:(0,r_0)\times (0, s_0) \to [0, \infty)$ such that \begin{align} \|\psi\|_r \| \psi^{-1}\|_s \le \| e^{V-\l_0}\|_\kappa^{f(r,s)}, \ \ \ 0 < r < r_0,\ \ 0 < s <s_0. \label{W753h} \end{align} The function $f(r,s)$ will be given explicitly in Theorem \ref{thmmp1}. \end{theorem} \begin{remark} {\rm(Upper bound on $\|\psi^{-1}\|_s$). Typical perturbation proofs of a defective LSI for the ground state measure $m_\psi$ rely on some information about the behavior of $\psi$ in the regions where $\psi$ is large or where $\psi$ is close to zero. For example the classical condition of Deuschel-Holley-Stroock \cite{HS1987,DS1990} requires that $F \equiv - \log \psi$ be bounded both above and below; equivalently, $0< \ep \le \psi \le K < \infty$ on all of $X$ for some $\ep, K$. Aida relaxed the condition that $\psi$ be bounded away from zero by assuming instead that $\psi^{-1} \in L^p(m)$ for some $p >0$, cf. \cite[Lemma 4.12]{Aida2001}. We will derive an upper bound on $\| \psi^{-1}\|_s$, depending only on $c,\ka, \nu$ and $M$, by combining \eref{W753h} with the lower bound on $\|\psi\|_r$ derived in Section \ref{seculb}. The upper bound on $\|\psi^{-1}\|_s$ is the key input to the derivation of a DLSI. } \end{remark} \subsection{A defective LSI for $m_\psi$.} \begin{theorem} Assume that \eref{mt1} and \eref{mt2} hold. Let $b_\ka = \sqrt{1 +(2c/\ka)}$ and define $c_\nu$ as in \eref{L329a}. Let $a > c_\nu b_\ka$. Then there exists a number $D$, depending only on $c,\nu,\ka, M$ and the choice of $a$, such that \begin{align} Ent_{m_\psi}(u^2) \le 2a \int_X |\n u|^2 dm_\psi + D \|u\|_{L^2(m_\psi)}^2 \label{D1} \end{align} \end{theorem} A more detailed version of this theorem, showing the dependence of $D$ on the various parameters, and in particular its dependence on our bounds of the norms $\|\psi^{-1}\|_s$, is given in Section \ref{secdlsi}. 
\subsection{Spectral gap} To complete the proof of Theorem \ref{thmM} we will show that the Dirichlet form operator for $m_\psi$ has a spectral gap. A theorem of Rothaus then shows that the defect in \eref{D1} can be removed at the cost of increasing the Sobolev coefficient $2a$. In case the defect in \eref{D1} is sufficiently small, a theorem of F-Y. Wang can be used to show that there is a spectral gap. In general, a proof that $m_\psi$ has a spectral gap depends on data encoded in $\psi$ and not just on the size of $D$ and $a$ in \eref{D1}. We adapt a method of Aida, \cite{Aida2001}, which produces a spectral gap dependent on the distribution of $\psi$ and its gradient. With the help of quantitative bounds on $D$ and $a$ in \eref{D1} we then obtain quantitative bounds on the spectral gap, and, by Rothaus' theorem, a quantitative bound on the Sobolev constant $c_1$ in \eref{mt5}. This will be carried out in Section \ref{secsg}. \section{Hyperboundedness of $\n^*\n +V$ in $L^p(m)$} \label{sechyperp} \subsection{Interval of validity } \label{seciv1} \begin{lemma} \label{lemiv1} $($Interval of validity$)$. Suppose that $1 < \nu/(2c) < \infty$. Define $a_\nu$ by \eref{L341d}. Then the quadratic equation \beq p^2 - (2\nu/c) (p-1) =0 \label{L313} \eeq has two real roots, $q_0 < p_0$, which are given by \begin{align} &p_0 = (\nu/c)\( 1+ a_\nu\),\ \ \ \ \ q_0 = (\nu/c)\( 1- a_\nu\). \label{L313q} \end{align} They satisfy the following identities. \begin{align} &(2\nu/c) (p-1) -p^2 = (p_0 -p)(p - q_0)\ \ \forall\ \ p \in \R. \label{L313a}\\ &p_0^{-1} = (1/2)\(1-a_\nu\), \ \ \ q_0^{-1} =(1/2)\(1+ a_\nu\) \label{L313qi} \\ &(1/p_0) + (1/q_0) = 1. \label{L313b}\\ &1 < q_0 < 2 < p_0 < \infty . \label{L313d}\\ &(p_0 -2) (2-q_0) = (2\nu/c) a_\nu^2. \label{L313c} \\ & \nu/(p_0 -q_0) =c/(2a_\nu). \label{L313j}\\ &(1/2) - (1/p_0) =(a_\nu/2) = (1/q_0) - (1/2) . \label{L325p} \end{align} In particular $q_0 $ and $p_0$ are conjugate indices. Define $\tau(p)$ by \eref{L505}. 
Then \begin{align} &a) \ \tau\ \text{is a strictly increasing function on}\ (q_0, p_0). \label{L325s} \\ &b) \lim_{p\uparrow p_0} \tau(p) = + \infty, \ \ \ \lim_{p\downarrow q_0}\tau(p) = - \infty \label{L325v} \\ &c) \ \tau(2) = 0. \label{L325t} \\ &d) \ \tau(p') = - \tau(p)\ \ \text{if} \ \ p' =p/(p-1). \label{L325u} \end{align} \end{lemma} \begin{proof} By the quadratic formula the quadratic equation \eref{L313} has two positive real roots given by $p = (\nu/c)\( 1\pm \sqrt{1 -2c/\nu}\)$. The roots are therefore correctly given by \eref{L313q}, in view of the definition \eref{L341d}. The inverses of the roots are therefore given by $1/p = (1/2)\(1\mp \sqrt{1 -2c/\nu}\)$, from which follows \eref{L313qi}. \eref{L313b} and \eref{L313d} follow from \eref{L313qi} while \eref{L313a} just restates that $q_0, p_0$ are the roots of \eref{L313}. Insert $p=2$ in \eref{L313a} to find $(p_0 -2) (2-q_0) = 2\nu/c - 4= (2\nu/c) a_\nu^2$, which is \eref{L313c}. \eref{L313q} shows that $p_0 -q_0 = (2\nu/c)a_\nu$, which is \eref{L313j}. \eref{L325p} follows from \eref{L313qi}. That $q_0$ and $p_0$ are conjugate indices follows from \eref{L313b}, but also from writing the equation \eref{L313} in the form \eref{L313m}, which exhibits the equation as self-conjugate. Concerning the function $\tau$ defined in \eref{L505}, the properties \eref{L325s} and \eref{L325v} are clear from the definition, \eref{L505}. \eref{L325t} follows from \eref{L325p}. Replacing $p^{-1}$ by $1- p^{-1}$ in the numerator and denominator of \eref{L505} interchanges the numerator and denominator, in view of \eref{L313b}. This proves \eref{L325u}. \end{proof} \subsection{Proof of non-standard hyperboundedness for \linebreak bounded $V$} \label{secnsh} We assume in this subsection that $V$ is bounded. $\n^*\n$ denotes the self-adjoint Dirichlet form operator for $m$. The Schr\"odinger operator $H \equiv \n^*\n + V$ is then self-adjoint on the domain of $\n^*\n$ and there are no serious domain issues.
We will prove all of the inequalities of Section \ref{secns} in this case. In Section \ref{secesa} we will remove the boundedness assumption for $V$ and show that $\n^*\n + V$ is essentially self-adjoint and that its closure, $H$, also satisfies the inequalities of Section \ref{secns}. Section \ref{secesa} has a technical character. \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmns2} for bounded $V$] By \cite[Lemma 6.1]{G1}, the logarithmic Sobolev inequality \eref{mt1} implies \begin{align} Ent_m(|u|^p) \le c \frac{p^2}{2(p-1)} \<\n^*\n u, u_p\> , \ 1 < p < \infty , \ \ \ \ (LSp) \label{L320p} \end{align} where $u_p = (\text{sgn}\, u) |u|^{p-1}$. We will frequently use Young's inequality in the form \begin{align} E(gu) \le Ent(g) +\(\log E(e^u)\) E(g), \label{BG500c} \end{align} where $g$ and $u$ are real valued measurable functions on some probability space, $g \ge 0$ and $E(g) < \infty$. In particular, if $v \in L^p(m)$ then, choosing $u = -\nu V$ and $g = |v|^p$ in \eref{BG500c}, we find \begin{align} \int_X (-V)|v|^p dm &\le \nu^{-1} \Big\{Ent_m(|v|^p) + \(\log E(e^{-\nu V})\) E(|v|^p) \Big\} \\ &= \nu^{-1} Ent_m(|v|^p) + \(\log \|e^{-V}\|_\nu\) E(|v|^p) \label{L48} \end{align} It follows from \eref{L320p}, \eref{L48} and from the definition $H = \n^*\n + V$ that \begin{align} - \<Hv, v_p\> &=-\<\n^*\n v, v_p\> + \int (-V) |v|^p dm \notag\\ &\le - \frac{2(p-1)}{cp^2} Ent_m(|v|^p) + \nu^{-1} Ent_m(|v|^p) + \alpha \int |v|^p dm \notag\\ &= \(\nu^{-1} -\frac{2(p-1)}{cp^2} \) Ent_m(|v|^p) + \alpha \int |v|^p dm, \label{L49a} \end{align} where $\alpha = \log \|e^{-V}\|_\nu$. Rearrange to find \begin{align} \( \frac{2(p-1)}{cp^2} - \nu^{-1}\)Ent_m(|v|^p)&\le \<Hv, v_p\> + \alpha \int |v|^p dm \notag\\ &= \<(H+\alpha)v, v_p\> . \label{L327} \end{align} With the help of \eref{L313a} we find \begin{align} \frac{2(p-1)}{cp^2} - \nu^{-1} =\frac{(2\nu/c)(p-1) - p^2}{\nu p^2} =\frac{(p_0 -p)(p - q_0)}{\nu p^2}. 
\end{align} For $q_0 < p <p_0$ the last expression is strictly positive. We may therefore divide \eref{L327} by it to find \eref{L325}. Put $p=2$ in \eref{L329} and use \eref{L313c} to arrive at \eref{L325h}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Corollary \ref{corhb2} for bounded $V$] For $q <p$ the time $t_{q,p}$ that it takes for $e^{-tH}$ to map $L^q(m)$ into $L^p(m)$ is determined by the equation (cf. \cite[Equation (2.4) of Theorem 1]{G1}) \beq \hat c(p) dp/dt = p(t), \ \ \ \ \ \ p(0,q) = q, \label{L330} \eeq where $\hat c(p)$ is determined by the definition $Ent_m(|u|^p) \le p \hat c(p) \< (H + \alpha)u, u_p\>$ and $\alpha$ is the ``local norm'' at index $p$. In our case, \eref{L325}, $\alpha = \log \| e^{-V}\|_\nu$ and $\hat c(p) = c_\nu(p)$, which is given by \eref{L329}. Upon separating variables in \eref{L330} the equation becomes \begin{align} \nu \frac{dp}{(p_0-p)(p-q_0)} = dt. \label{L332a} \end{align} Using \eref{L313j} in the second line below, we have \begin{align} \frac{\nu}{ (p_0-p) (p-q_0)} &= \frac{\nu}{(p_0-q_0)} \{(p_0-p)^{-1} + (p- q_0)^{-1}\} \\ &= \frac{c}{2 a_\nu}\{(p_0-p)^{-1} + (p- q_0)^{-1}\}. \end{align} The solution to \eref{L332a} is therefore given by \begin{align} \frac{c}{2 a_\nu}\int_q^p\{(p_0-r)^{-1} + (r- q_0)^{-1}\} dr = \int_0^{t_{q,p}} dt = t_{q,p}. \label{L333a} \end{align} Thus \begin{align} t_{q,p} &= \frac{c}{2 a_\nu} \log\frac{r-q_0}{p_0-r}\Big|_q^p = \frac{c}{2 a_\nu} \(\log\frac{(r/q_0) -1}{1- (r/p_0)} +\log(q_0/p_0)\)\Big|_q^p \notag \\ &= \frac{c}{2 a_\nu} \(\log\frac{q_0^{-1} -r^{-1} }{r^{-1} - p_0^{-1}}\)\Big|_q^p = \tau(p) - \tau(q). \ \ \ \end{align} This proves that the minimum assured time to boundedness of $e^{-tH}$ from $L^q(m)$ to $L^p(m)$ is correctly given in \eref{L289}.
From \cite[Equation (2.5)]{G1} we find that $\|e^{-t_{q,p}H}\|_{q\to p} \le e^{t_{q,p} \alpha}$, where $\alpha = \log \| e^{-V}\|_\nu$, because the integrand in \cite[Equation (2.5)]{G1} is just the constant $\log \| e^{-V}\|_\nu$ that appears in \eref{L325}. This proves \eref{L289} in case $t = t_{q,p}$. If $t > t_{q,p}$ then there exists $p_1 \in (p, p_0)$ such that $t = t_{q,p_1}$ because $ \tau(p)$ is a continuous and strictly increasing function of $p$ by \eref{L325s}, and goes to $\infty$ as $p \uparrow p_0$ by \eref{L325v}. Therefore $\|e^{-tH}\|_{q\to p} \le \|e^{-tH}\|_{q\to p_1} \le e^{t_{q, p_1} \alpha} = e^{t\alpha}$. This proves \eref{L289} for all $t \ge t_{q,p}$. The representation \eref{L333a} shows that $t_{q,p}$ is decreasing as a function of $\nu$, as asserted in the corollary, because, as $\nu$ increases, $a_\nu$ increases, as we see from \eref{L341d}, while $p_0$ increases, as we see from \eref{L313q}, and consequently $q_0$ decreases, implying that the integrand in \eref{L333a} decreases. This proves the last line of Corollary \ref{corhb2}. For the proof of \eref{L291} set $p = q$ in \eref{L289}. Since $t_{q,q} =0$ it follows that \eref{L291} holds for all $t \ge 0$, provided $q \in (q_0, p_0)$. (A short, abstract, but less illuminating proof of \eref{L291} for $q \in (q_0, p_0)$, which just uses the Hille-Yosida theorem, is given in \cite[Remark 3.5]{G1993}.) To prove \eref{L291} for $q \in \{ q_0, p_0\}$ choose first $v \in L^{p_0}$. Then $v \in L^q$ for all $q \in (q_0, p_0)$ and $\|v\|_q \to \|v\|_{p_0}$ as $q \uparrow p_0$. By \eref{L291} for $q < p_0$ we have \begin{align} \int_X |e^{-tH} v|^q dm \le \|e^{-V}\|_\nu^{tq} \| v\|_q^q . \label{L335} \end{align} Choose a sequence $q_n\uparrow p_0$ and apply Fatou's lemma on the left side of \eref{L335} to find \eref{L291} for $q = p_0$. To prove \eref{L291} for $q = q_0$ observe that the previous argument shows that \eref{L335} holds for $q = q_0$ if $v$ is bounded.
Now the semigroup $e^{-tH}$ is positivity preserving. So it suffices to prove \eref{L335} for $0 \le v\in L^{q_0}$. For such a function $v$, let $v_n = \min(v,n)$ for each positive integer $n$. Then each function $v_n$ is bounded, so \eref{L335} holds for $q = q_0$ with $v_n$ in place of $v$. We can now apply the monotone convergence theorem to find that \eref{L335} holds for $v$. \end{proof} \begin{remark} \label{remtpf} {\rm The Trotter product formula offers a good heuristic for understanding the inequality \eref{L289}. If, putting $H_0 = \n^*\n$, one writes $e^{-tH} = \lim_{n\to \infty}\(e^{-tV/n} e^{-t H_0/n} \)^n$ and if $f \in L^q$ then $e^{-t H_0/n}f$ will be in $L^{q_1}$ for some $q_1 > q$ by hypercontractivity. Then $e^{-tV/n} e^{-t H_0/n} f$ will be in $L^{q_2}$ for some $q_2 < q_1$ by H\"older's inequality. The exponents $q_1$ and $q_2$ are explicitly computable. Continuing in this way $n$ times one can bound the product and take the limit as $n \to \infty$ to derive a version of \eref{L289}. This procedure was carried out by I.E. Segal in \cite[Lemma 2.1]{Seg70}, though not with Nelson's shortest time to contraction, which was not known at that time. Segal's method for showing boundedness of $e^{-tH}: L^q\to L^p$, based on the Trotter product formula, was refined in \cite[Chapter 2]{SHk72} and in \cite[Theorem X.58]{RS2}. Our forced confinement of $q, p$ to the interval $(q_0, p_0)$ in \eref{L289} does not show up in these three sources because it was always assumed that $\|e^{-V}\|_\nu < \infty$ for all $\nu <\infty$. Our proof may be considered to be an infinitesimal version of Segal's method. } \end{remark} \begin{remark} \label{remfed5} {\rm (Federbush's semi-boundedness theorem).
If $H_0$ is a non-negative self-adjoint operator on $L^2(\text{probability measure}\ m)$ satisfying a logarithmic Sobolev inequality \beq Ent_m(u^2) \le 2c (H_0 u, u)_{L^2(m)} \label{L337} \eeq and such that $H_0 1 =0$ then for any real-valued measurable function $V$ there holds \begin{align} ((H_0 + V) u, u)_{L^2(m)} \ge \(-\log \|e^{-V}\|_{L^{2c}(m)}\) \|u\|_2^2 \label{L338} \end{align} for all $u \in D(H_0) \cap D(V)$. This is the Federbush semi-boundedness theorem, \cite{Fe, G1, G1993}. $H_0$ need not be a Dirichlet form operator. Inequalities \eref{L337} and \eref{L338} are in a sense equivalent. See \cite[Theorem 2.1]{G1993}. However, if $H_0$ is a Dirichlet form operator then \eref{L338} can be regarded as a limiting form of the hyperboundedness inequality \eref{L291}: Taking $V$ bounded for simplicity and $q=2$ in \eref{L291} we have $ \|e^{-tH} \|_{2\to 2} \le \| e^{-V}\|_\nu^t\ \text{for all}\ \ t \ge 0$. We may apply the spectral theorem to find $\text{inf spectrum}\ H \ge - \log \|e^{-V}\|_\nu$. Let $\nu\downarrow 2c$ to find \eref{L338}. Notice that as $\nu \downarrow 2c$ the interval of validity in Corollary \ref{corhb2}, $(q_0, p_0)$, collapses to the one-point set $\{2\}$, as one can see from \eref{L313q}, since $a_\nu \downarrow 0$ as $\nu \downarrow 2c$. For $\nu = 2c$, hyperboundedness inequalities such as \eref{L291}, involving the exponential $e^{-tH}$, can fail. See for example Theorem \ref{thmgp1} for the modes of such failure. } \end{remark} \begin{example} \label{ex2p}{\rm The case $q=2$ and $2 < p < p_0$ will be important for us. Suppose that $t$ is the minimum time for \eref{L289} to hold when $q=2$. That is, $t = \tau(p)$ because $\tau(2) =0$ by \eref{L325t}. Then by \eref{L505} we have \begin{align} 2a_\nu t/c = \log\frac{q_0^{-1} - p^{-1}}{p^{-1} - p_0^{-1}},\ \ \ q_0 < p < p_0. \label{L340} \end{align} Let $b= e^{2a_\nu t/c}$ and take the exponential of \eref{L340} to find $q_0^{-1} - p^{-1} = b ( p^{-1} - p_0^{-1})$.
Therefore $(1+b)p^{-1} = q_0^{-1} + b p_0^{-1}$. Hence the function \begin{align} p(t) \equiv \frac{1+e^{2a_\nu t/c}}{q_0^{-1} + e^{2a_\nu t/c} p_0^{-1}} \label{L340a} \end{align} gives the maximum Lebesgue index for boundedness from $L^2(m)$ to $L^p(m)$ predicted by \eref{L289}. That is, \begin{align} \| e^{-tH}\|_{2 \to p(t)} \le \|e^{-V}\|_\nu^t\ \ \ \text{for all} \ \ t \ge 0. \label{L289a} \end{align} It is instructive to observe that $p(0) = 2$, since $q_0^{-1} + p_0^{-1} = 1$, and that as $t\uparrow \infty$ the index $p(t) \uparrow p_0$. In Section \ref{secbelow} we will show that if $V$ is bounded below then we may take $\nu = \infty$ and $t_{2,p}$ reduces exactly to Nelson's shortest time to contraction. } \end{example} \subsection{Essential self-adjointness} \label{secesa} The results in Section \ref{secnsh} were proven in the case that the potential $V$ is bounded. If $V$ is unbounded the operator $H$, defined as the closure of the operator $\n^*\n+ V$, must be shown to be self-adjoint before inequalities such as \eref{L289} can be given meaning. We will show in this section that $\n^*\n + V$ is essentially self-adjoint and that Theorem \ref{thmns2} and Corollary \ref{corhb2} hold in the generality stated. We will also prove that the self-adjoint operator $H$ has an eigenvalue of multiplicity one at the bottom of its spectrum which belongs to a unique positive eigenfunction $\psi$. The methods of this section are based on techniques that have their origin in the early attempts to prove the internal consistency of quantum field theory. The problem there, as here, was to prove that a particular operator of the form $H_0 + V$ is essentially self-adjoint and that its closure has a unique ground state. The operator $H_0$, of interest at that time, was similar in many ways to our operator $\n^*\n$, but had additional special structure. All three properties (essential self-adjointness, existence of a ground state, and its uniqueness) were first proved by Glimm and Jaffe \cite{GJ68}, \cite{GJ70}.
Their proofs made use of some of the special structures of $H_0$ available in that setting, which are not shared by our operators $\n^*\n$. I.E. Segal, \cite{Seg1970}, subsequently removed the need for the special structure in the proof of essential self-adjointness and replaced it by a hypercontractivity argument. The present author, \cite{G1972}, subsequently removed the need for the special structure in the proof of existence of a ground state, replacing it again by hypercontractive notions. The proofs we will give here are modifications of the latter proofs. They depend only on the positivity preserving character of the operators $e^{-tH}$ and the hypercontractivity bounds that are already available to us. Simon and Hoegh-Krohn, \cite[Section 2]{SHk72}, developed the methods of Segal, \cite{Seg1970}, further. We will make use of their techniques also. The statements and techniques of proof of essential self-adjointness and existence and uniqueness of a ground state are dimension-independent. Although the underlying manifold in this paper is assumed to be finite dimensional, these results can be formulated and the proofs carried out directly in infinite dimensions once a suitable notion of differentiation is available. See, for example, \cite{AKR1995} or \cite{MR92} for a systematic exposition of Dirichlet forms over infinite-dimensional spaces. \begin{theorem}\label{thmesa2} $($Essential self-adjointness$)$. Assume that \eref{mt1} holds. Suppose that \beq \int_X e^{-\nu V} dm < \infty \ \ \ \text{for some} \ \ \ \nu > 2c. \label{EU6} \eeq Define $p_0$ as in Lemma \ref{lemiv1} and assume that \beq V \in L^{p_1}(m) \ \ \text{for}\ \ p_1 = 2\frac{p_0}{p_0 -2} . \label{EU7} \eeq Then $\n^*\n + V$ is essentially self-adjoint on \beq D(\n^*\n) \cap L^{p_0}(m) \label{EU8} \eeq and is bounded below. Denote by $H$ its closure.
The semigroup $e^{-tH}$ extends to a strongly continuous semigroup of bounded operators on $L^q$ for $q \in [q_0,2]$ and restricts to a strongly continuous semigroup of bounded operators on $L^q$ for $q \in [2,p_0]$. For these extensions we have \begin{align} \|e^{-tH} f\|_q \le \|e^{-V}\|_\nu^t\ \|f\|_q \ \ \text{for}\ \ q_0 \le q \le p_0\ \ \text{and} \ \ t \ge 0. \label{EU9} \end{align} Moreover \begin{align} \|e^{-tH} f\|_p \le \|e^{-V}\|_\nu^t\ \|f\|_q \ \ \text{for} \ \ q_0 < q \le p <p_0\ \ \text{if} \ \ t \ge \tau(p) - \tau(q). \label{EU10} \end{align} $e^{-tH}$ is positivity preserving for all $t \ge 0$. \end{theorem} The proof depends on the following three lemmas. \begin{lemma}\label{lemEU1a} Let $V_1$ and $V_2$ be bounded potentials. Let $H_i = \n^*\n + V_i$, $i = 1,2$. Let $p_1 =2\frac{p_0}{p_0 -2}$. Then \begin{align} \|(e^{-t H_1} - e^{-tH_2})f\|_2 &\le \(\int_0^t\|e^{-V_1}\|_\nu^{t-u} \|e^{-V_2}\|_\nu^u du\) \ \|V_1-V_2\|_{p_1} \|f\|_{p_0} . \label{EU19} \end{align} \end{lemma} \begin{proof} If $H_1$ and $H_2$ are two self-adjoint operators on $L^2(m)$ which have a common domain $\D$ and are both bounded below then the Duhamel formula \begin{align} (e^{-tH_2} - e^{-tH_1}) f = \int_0^t e^{-(t-u)H_1}(H_1 - H_2) e^{-uH_2} f\ du, \ \ \ f \in \D \label{EU20} \end{align} follows by integrating from $0$ to $t$ the identity $(d/du)\(e^{-(t-u)H_1} e^{-uH_2}\) f =\(e^{-(t-u)H_1}(H_1 - H_2) e^{-uH_2}\) f$, which is valid for $f \in \D$. If $H_1 - H_2$ is a bounded (albeit only densely defined) operator then \eref{EU20} extends by continuity to all $f \in L^2(m)$. Thus if $H_i = \n^*\n + V_i$ we have \begin{align} (e^{-tH_2} - e^{-tH_1}) f = \int_0^t e^{-(t-u)H_1}(V_1-V_2) e^{-uH_2} f \ du\ \ \ \forall\ \ f \in L^2(m).
\label{EU21} \end{align} Since \eref{L291} has been proven for bounded $V$ in Section \ref{secnsh}, we may use it to find that $ \|e^{-(t-u)H_1}\|_{2\to 2} \le \|e^{-V_1}\|_\nu^{t-u}$ for all $t-u \ge 0$, while \beq \|e^{-uH_2} f\|_{p_0} \le \| e^{-V_2}\|_\nu^u\ \|f\|_{p_0} \label{EU22} \eeq for all $u \ge 0$. Since $p_1^{-1} + p_0^{-1} = 1/2$ we have \begin{align} \|(V_1-V_2) e^{-uH_2} f\|_2 \le \|V_1 - V_2\|_{p_1} \| e^{-V_2}\|_\nu^u\ \|f\|_{p_0}\ \ \forall\ \ u \ge 0. \label{EU23} \end{align} Hence \begin{align} \|(e^{-tH_2} - e^{-tH_1}) f\|_2 &\le \int_0^t \|e^{-(t-u)H_1}\|_{2\to 2}\|(V_1-V_2) e^{-uH_2} f\|_2 \ du \notag \\ &\le \int_0^t \|e^{-V_1}\|_\nu^{t-u} \|V_1 - V_2\|_{p_1} \| e^{-V_2}\|_\nu^u\ \|f\|_{p_0} du, \notag \end{align} which is \eref{EU19}. \end{proof} \begin{lemma}\label{lemesa2} Suppose that $\|e^{-V}\|_\nu < \infty$ and that $V \in L^{p_1}(m)$. For \linebreak $k = 0,1,2, \dots$ let \begin{align} V_k = (-k\vee V)\wedge k. \label{EU40} \end{align} Define $H_k = \n^*\n + V_k$ and let \begin{align} S_k(t) = e^{-tH_k}, \ \ \ t \ge 0. \label{EU41} \end{align} Then the sequence $S_k(t)$ converges strongly in $L^2$ to a bounded positive operator $S(t)$ for each $t \ge 0$. $S(\cdot)$ is a strongly continuous semigroup of bounded positivity preserving operators on $L^2(m)$. For each $t \ge 0$, $S(t)$ extends uniquely to a bounded operator on $L^q$ for $q \in [q_0, 2]$ and restricts to a bounded operator on $L^q(m)$ for $q \in [2,p_0]$. The extensions and restrictions form strongly continuous semigroups in these spaces. Denoting the extensions and restrictions by $S(t)$ we have, for all $f \in L^q$, \begin{align} \|S(t) f\|_q &\le \|e^{-V}\|_\nu^t\ \|f\|_q \ \ \text{for}\ \ q_0 \le q \le p_0\ \ \text{and} \ \ t \ge 0\ \ \text{and} \label{EU45}\\ \|S(t) f\|_p &\le \|e^{-V}\|_\nu^t\ \|f\|_q \ \ \text{for} \ \ q_0 < q \le p <p_0\ \ \text{if} \ \ t \ge \tau(p) - \tau(q).
\label{EU46} \end{align} \end{lemma} \begin{proof} By the monotone convergence theorem on $\{ V\le 0\}$ and the dominated convergence theorem on $\{V>0\}$ we have $\|e^{-V_k}\|_\nu \to \|e^{-V}\|_\nu$ as $k \to \infty$. Moreover, since $-V_k \le -V$ wherever $V \le 0$, it follows that $0 \le e^{-V_k} \le e^{-V} + 1$ everywhere and therefore $\|e^{-V_k}\|_\nu \le \|e^{-V}\|_\nu +1$ for all $k$. Hence \begin{align} \int_0^t \|e^{-V_k}\|_\nu^{t-u} \|e^{-V_n}\|_\nu^u du \le t\(\|e^{-V}\|_\nu +1\)^t. \end{align} Apply \eref{EU19} to the potentials $V_k$ and $V_n$ to find \begin{align} \|(S_k(t) - S_n(t))f\|_2 \le t\(\|e^{-V}\|_\nu +1\)^t \|V_k - V_n\|_{p_1} \|f\|_{p_0}. \end{align} Since $\|V_k - V_n\|_{p_1} \to 0$ as $k,n \to \infty$, it follows that for fixed $f \in L^{p_0}$ and $T < \infty$ the sequence $S_k(t)f$ converges in $L^2(m)$ uniformly on $[0, T]$ as $k\to \infty$. Denote the limit by $\hat S(t)f$. $\hat S(t)$ is a linear operator from $L^{p_0}(m) \to L^2(m)$ for each $t \in [0, T]$ and $\hat S(t)f$ is continuous in $t \in [0,T]$ into $L^2$ for each $f \in L^{p_0}$ and each $T>0$. For fixed $t>0$ and $f \in L^{p_0}$ there is a subsequence $k_j$ such that $S_{k_j}(t) f$ converges to $\hat S(t)f$ pointwise almost everywhere. By \eref{L291}, which has already been proven for bounded $V$ in Section \ref{secnsh}, we have $\|S_{k_j}(t) f\|_q \le \|e^{-V_{k_j}}\|_\nu^t\ \|f\|_q$. Apply Fatou's lemma on the left to find \beq \|\hat S(t) f\|_q \le \|e^{-V}\|_\nu^t\ \|f\|_q,\ \ q \in [q_0, p_0],\ \ f \in L^{p_0}. \label{EU47} \eeq The same argument also shows that \eref{EU46} holds for $\hat S(t) f$ when $f \in L^{p_0}$. Note for later use the uniform (in $k$) bound \begin{align} \|S_{k}(t) f\|_q \ \le (\|e^{-V}\|_\nu +1)^t\ \|f\|_q\ \ \text{for all}\ \ q \in [q_0, p_0] \label{EU48} \end{align} and all $f \in L^q$, which follows from \eref{L291}, with $V$ replaced by the bounded function $V_k$, and using the bound $\|e^{-V_k}\|_\nu \le \|e^{-V}\|_\nu +1$.
Since $L^{p_0}$ is dense in $L^q$ for each $q \in [q_0, p_0]$ we may, by virtue of \eref{EU47}, extend $\hat S(t)$ by continuity in $L^q$ norm to a bounded linear operator from $L^q$ into $L^q$, which we denote by $S_q(t)$. \eref{EU45} and \eref{EU46} hold for all $f \in L^q$ for this extended operator. The extensions are easily seen to be consistent in the sense that if $q_0 \le q_1 \le q_2 \le p_0$ then $S_{q_1}$, restricted to $L^{q_2}$, is $S_{q_2}$. We will drop the subscript $q$ and just write $S(t)$, which now acts on each space $L^q$ as a bounded operator satisfying \eref{EU45} and \eref{EU46}. In case $q\in [q_0, 2]$, the extended operator $S(t)$ is also a strong limit on all of $L^q$ of the operators $S_k(t)$. Indeed, if $g \in L^q$ and $\|f_n - g\|_q \to 0$ for some sequence $f_n \in L^{p_0}$ then, \begin{align*} &\|S(t)g - S_k(t)g\|_q \\ &\le \| S(t)g - \hat S(t)f_n\|_q + \| \hat S(t)f_n -S_k(t) f_n\|_q + \|S_k(t)f_n - S_k(t)g\|_q \\ &\le \| S(t)g - \hat S(t)f_n\|_q + \| \hat S(t)f_n -S_k(t) f_n\|_2 + \|S_k(t) \|_{q\to q} \|f_n - g\|_q. \end{align*} The first term on the right goes to zero as $n\to \infty$ by the definition of $S(t)g$. The second goes to zero for each $n$ as $k\to \infty$ by the definition of $\hat S(t)f_n$. The third term goes to zero as $n\to \infty$ by the uniform bound \eref{EU48}. A standard argument completes the proof. The semigroup property $S(t+s) = S(t) S(s)$ follows from the fact that the operators $S_k(t)$ form semigroups and, for $q\in [q_0, 2]$, converge strongly in $L^q$, boundedly in $k$, to $S(t)$ for each $t >0$. The semigroup equation also holds of course when restricted to the spaces $L^p; p \in [2, p_0]$. Each operator $S_k(t)$ is positivity preserving because $e^{-t\n^*\n}$ is positivity preserving while the Trotter product formula \begin{align} e^{-t(\n^*\n + V_k)} = \text{strong limit}_{n\to \infty}\( e^{-(t/n)(\n^*\n)} e^{-(t/n)V_k}\)^n \end{align} is applicable in $L^2$, since $V_k$ is bounded. 
$ S(t)$, being a strong limit in $L^2$ of the operators $S_k(t)$, is therefore also positivity preserving. For the proof of strong continuity we must go back to the operators $\hat S(t)$. For each $f \in L^{p_0}$, $\hat S(t)f$ is continuous in $t$ as a function into $L^2$ and therefore as a function into $L^q$ if $q_0 \le q \le 2$. Hence if $g \in L^q$ with $q_0 \le q \le 2$ and $f_n \to g$ in $L^q$ for some sequence $f_n \in L^{p_0}$ then, since $S(t) f_n = \hat S(t) f_n$, we have $S(t)g = \lim\ \text{in}\ L^q\ S(t)f_n$ and the convergence is uniform for $t \in [0,T]$ by \eref{EU45}. Therefore $S(\cdot)g$ is continuous into $L^q$ for each $g \in L^q$. $S(t)$ is therefore a strongly continuous semigroup on the spaces $L^q: q \in [q_0,2]$ and therefore weakly continuous on the dual spaces $L^p, \, p \in [2,p_0]$ because the operators $S(t)$ are symmetric. But these spaces are reflexive. Consequently $S(\cdot)$ is strongly continuous on $L^p: 2 \le p \le p_0$ by the general theorem \cite[Theorem 1.6]{EN}. \end{proof} $S(t)$ has been constructed as a limit of the semigroups $e^{-tH_k}$. We should expect that $S(t) = e^{-tH}$ with $H = \text{closure of}\ \n^*\n +V$. The next lemma proves that this is the case. \begin{lemma} \label{lemesa3} Suppose that $\|e^{-V}\|_\nu < \infty$ and that $V \in L^{p_1}(m)$. Denote by $H$ the infinitesimal generator of the strongly continuous semigroup $S(t)$ in $L^2(m)$. Denote by $H_0$ the self-adjoint Dirichlet form operator $\n^*\n$ for $m$. Then \begin{align} D(H)\cap L^{p_0} = D(H_0) \cap L^{p_0} \label{EU55} \end{align} and \begin{align} H= \text{closure of}\ (H_0 +V)\ \ \text{in}\ \ L^2(m). \label{EU56a} \end{align} \end{lemma} \begin{proof} Choose $H_1 = H_k$ and $H_2 = H_0$ in \eref{EU21}. Since $V_0 =0$ we have \begin{align} (e^{-tH_0} -e^{-tH_k})f = \int_0^t e^{-(t-u)H_k} (V_k - 0) e^{-uH_0} f \ du, \ \ \ f \in L^{p_0}. \label{EU49} \end{align} Fix $f \in L^{p_0}(m)$. 
Since $p_1^{-1} + p_0^{-1} = 1/2$ and $V \in L^{p_1}(m)$ we see that $\| (V_k -0)e^{-uH_0}f - Ve^{-uH_0}f\|_2 \le \|V_k - V\|_{p_1} \| e^{-uH_0}f\|_{p_0} \to 0$ uniformly for $0 \le u \le t$ as $ k \to \infty$. Moreover, by Lemma \ref{lemesa2}, $e^{-(t-u)H_k}$ converges strongly in $L^2$ and boundedly. Therefore $e^{-(t-u)H_k} (V_k - 0) e^{-uH_0} f$ converges in $L^2$ boundedly for $u\in [0,t]$ to $e^{-(t-u)H} V e^{-uH_0} f$. We may take the limit as $k \to \infty$ in \eref{EU49} to find \begin{align} (e^{-tH_0} -e^{-tH})f = \int_0^t e^{-(t-u)H} V e^{-uH_0} f \ du, \ \ \ f \in L^{p_0}, \label{EU50c} \end{align} because, by the definition of $H$ in this lemma, $S(t) = e^{-tH}$ on $L^2$ and therefore on $L^{p_0}$. The integrand is a continuous function of $u$ into $L^2$. Multiply \eref{EU50c} by $t^{-1}$ and rearrange to find, for $f \in L^{p_0}$, \begin{align} t^{-1}(I -e^{-tH_0})f = t^{-1}(I -e^{-tH})f - t^{-1}\int_0^t e^{-(t-u)H} V e^{-uH_0} f \ du. \label{EU51} \end{align} If $f \in D(H) \cap L^{p_0}$ then both terms on the right side of \eref{EU51} converge in $L^2$ as $t\downarrow 0$, and therefore the limits on the right and left both exist. The limit on the right is $Hf - Vf$. Since the limit on the left exists we know that $f \in D(H_0)$ and moreover the limit is $H_0f$. Thus $D(H) \cap L^{p_0} \subset D(H_0)$ and $(H-V)f= H_0 f$ for $f \in D(H)\cap L^{p_0}$. Therefore \begin{align} H = H_0+V \ \ \text{on}\ \ D(H) \cap L^{p_0}. \label{EU56} \end{align} Similarly, if $f\in D(H_0) \cap L^{p_0}$ then the left side and second term on the right side of \eref{EU51} converge in $L^2$ and therefore $f \in D(H)$. This proves \eref{EU55}. Let $K$ be the closure of $H_0 +V$ in $L^2$. $K$ is a closed symmetric operator on $L^2$. We wish to prove that $H= K$. It suffices to show that there is a set $\D \subset D(H_0) \cap D(V)$ which is a core for $H$ and such that $H = H_0 + V$ on $\D$.
For then $K \supset \text{closure of}\ \(\{H_0+V\}\Big|\D\) =H$ and therefore $K^*\subset H \subset K$. Since $K^* \supset K$ it will then follow that $K = K^*= H$. Let $a = 1 +\log \|e^{-V}\|_\nu$. Then \eref{EU45} shows that, for all $q \in [q_0, p_0]$, we have $\|S(t) f\|_q \le e^{(a-1)t}\|f\|_q$. For $q\ge 2$ we may write this as $ \|e^{-tH}f\|_q \le e^{(a-1)t} \|f\|_q$ because $L^q \subset L^2$. Therefore $ \|e^{-t(H+a)}f\|_q \le e^{-t} \|f\|_q$ for $2 \le q \le p_0$. Hence \begin{align} \| (H+a)^{-1} f\|_q &= \| \int_0^\infty e^{-t(H+a)} f dt\|_q \le \int_0^\infty e^{-t}dt \|f\|_q = \|f\|_q. \label{EU58} \end{align} For $q=2$ this shows that $\|(H+a)^{-1}\|_{2\to 2} \le 1$. Let $\D = (H+a)^{-1} L^{p_0}$. Then $\D \subset D(H)$ because $(H+a)^{-1} L^{p_0} \subset (H+a)^{-1} L^{2}$. Further, from \eref{EU58} with $q = p_0$ we see that $\D \subset L^{p_0}$. Hence \beq \D \subset D(H) \cap L^{p_0} = D(H_0)\cap L^{p_0} \subset D(H_0) \cap D(V). \eeq From this and \eref{EU56} it follows that $H = H_0 + V$ on $\D$. It remains to show that $\D$ is a core for $H$. Suppose that $\phi \in D(H)$. Since $(H+a) \phi \in L^2$ there is a sequence $f_n \in L^{p_0}$ which converges to $(H+a) \phi $ in $L^2$. The functions $g_n \equiv (H+a)^{-1}f_n$ are in $\D$ and satisfy $(H+a)g_n = f_n \to (H+a) \phi$ in $L^2$. But also $g_n \to \phi$ because $\|(H+a)^{-1}\|_{2\to 2} \le 1$. Therefore $\D$ is a core for $H$. This completes the proof of \eref{EU56a}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmesa2}] It was shown in Lemma \ref{lemesa3} that the closure, $H$, of $\n^*\n +V$ is the infinitesimal generator of the semigroup $S(t)$ and that this semigroup extends and restricts to $L^q$ spaces as asserted in the statement of the theorem. The asserted inequalities \eref{EU9} and \eref{EU10} are restatements of \eref{EU45} and \eref{EU46} respectively.
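The remaining assertions of the theorem have also been established: $e^{-tH} = S(t)$ is positivity preserving by Lemma \ref{lemesa2}, while taking $q = 2$ in \eref{EU9} and applying the spectral theorem, as in Remark \ref{remfed5}, gives
\begin{align*}
\inf\ \text{spectrum}\ H \ \ge\ -\log \|e^{-V}\|_\nu \ >\ -\infty,
\end{align*}
so that $H$ is bounded below.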
\end{proof} \begin{corollary} \label{cordom} Under the hypotheses of Theorem \ref{thmesa2} there holds \begin{align} Q(H) = Q(\n^*\n) \cap Q(|V|), \label{EU60} \end{align} where $Q(A)$ denotes the form domain of a closed semi-bounded operator $A$. \end{corollary} \begin{proof} If $\int_X |\n u|^2 dm < \infty$ and $\int_X |V|u^2 dm < \infty$ then $u$ is clearly in the form domain of $H$. Therefore $Q(H) \supset Q(\n^*\n) \cap Q(|V|)$. The reverse containment requires specific information about $V$. Let $V_- = \sup(-V, 0)$. Choose $\nu_1 \in (2c, \nu)$ and let $\ep = (\nu/\nu_1) - 1$. Define a new potential by $W = V - \ep V_-$. On the set $\{ V\le 0\}$ we have $V_- = -V$. Therefore, on this set \begin{align*} -\nu_1 W & = -\nu_1 V + \ep \nu_1 V_-\\ &= \nu_1(1+\ep) \(-V \) \\ & = -\nu V. \end{align*} Since $W = V$ where $V >0$, the hypothesis \eref{EU6} ensures that $\int_X e^{-\nu_1 W} dm < \infty$. By the Federbush semi-boundedness theorem, \eref{L338}, applied to the potential $W$, we therefore have \begin{align} \n^*\n +(V -\ep V_-) \ge -b \equiv -\log \|e^{-(V-\ep V_-)}\|_{\nu_1} >-\infty. \notag \end{align} So $ \ep V_- \le H + b$. Hence \begin{align} V_- \le \ep^{-1}(H + b). \label{EU65} \end{align} Therefore \begin{align} H_0 + |V| = H_0 + V + 2V_- = H + 2V_- \le (1 + 2\ep^{-1})H + 2\ep^{-1} b. \notag \end{align} All of these inequalities hold upon taking the inner products $(\cdot\ u, u)$ with $u$ in the operator core given by \eref{EU8}. Since the operator core is also a form core it follows that $Q(H) \subset Q(H_0+ |V|)$, which is the right-hand side of \eref{EU60} because $H_0$ and $|V|$ are both non-negative. \end{proof} \begin{remark} {\rm The equality \eref{EU60} will be needed in the proof of Theorem \ref{thmA3}. It was needed in similar contexts in \cite{GRW2001} and \cite{Aida2001}, where it was taken as a natural hypothesis.
} \end{remark} \subsubsection{Proof of non-standard hyperboundedness} \label{secnsh2} The proofs of Theorem \ref{thmns2} and Corollary \ref{corhb2} under the general conditions on $V$ specified there are largely consequences of the results in Sections \ref{secnsh} and \ref{secesa}: The qualitative assertions in Theorem \ref{thmns2} concerning essential self-adjointness of $\n^*\n +V$ and extendability of $e^{-tH}$ are proved in Theorem \ref{thmesa2}. The logarithmic Sobolev inequalities \eref{L325} follow, by \cite[Theorem 2]{G1}, from the hyperboundedness inequalities \eref{L289} asserted in Corollary \ref{corhb2} once the relation between the Sobolev coefficients $c_\nu(p)$ in Theorem \ref{thmns2} and the minimum time to boundedness $\tau(p) - \tau(q)$ in Corollary \ref{corhb2} is established, as it is in Section \ref{secnsh} on bounded potentials. The hyperboundedness inequalities \eref{L289} and \eref{L291} were proved in Theorem \ref{thmesa2} for the desired class of potentials. \subsection{Existence and uniqueness of a ground state} \label{secEU} \begin{theorem} \label{thmE1} $($Existence of a ground state$)$. Suppose that $m$ satisfies a logarithmic Sobolev inequality \eref{mt1}. Assume that $\|e^{-V}\|_\nu < \infty$ for some $\nu > 2c$. Define $p_1$ as in \eref{EU7} and assume that $V\in L^{p}$ for some $p \ge p_1$. Then the closure, $H$, of $\n^*\n + V$ is self-adjoint, bounded below, and the bottom of its spectrum is an eigenvalue of finite multiplicity. \end{theorem} \begin{proof} We already know from Theorem \ref{thmesa2} that $H$ is self-adjoint and bounded below, and that $e^{-tH}$ is positivity preserving for all $t \ge 0$. We need only show that the bottom of the spectrum of $H$ is an eigenvalue of finite multiplicity. Referring to the notation in Lemma \ref{lemiv1}, choose a number $p_2 \in (2, p_0)$ and a number $t \ge \tau(p_2)$. By \eref{L289}, with $q = 2$, $e^{-tH}$ is bounded from $L^2$ to $L^{p_2}$.
By \cite[Theorem 1]{G1972} the operator norm $\|e^{-tH}\|_{2\to 2} =: \mu$ is an eigenvalue of $e^{-tH}$ of finite multiplicity. The spectral theorem shows that $\l_0 \equiv -t^{-1}\log \mu$ is an eigenvalue of $H$ of finite multiplicity and $\l_0 = \inf\ \text{spectrum}\ H$. \end{proof} The techniques in the following theorem and lemma are distilled from \cite{GJ68,GJ70,Seg1970,G1972,SHk72}. \begin{theorem}\label{thmU2} $($Uniqueness of the ground state$)$. Let $m$ be a probability measure on some space $X$. Suppose that $H_0$ is a non-negative self-adjoint operator on $L^2(m)$ and that $V$ is a potential in $L^2(m)$. Suppose that $H_0+V$ is essentially self-adjoint. Denote its closure by $H$. Assume that \begin{align} &a.\ \text{The nullspace of $H_0$ is spanned by the constant functions.} \label{U4} \\ &b.\ D(H)\cap L^\infty = D(H_0)\cap L^\infty. \label{U5} \\ &c.\ \text{$e^{-tH}$ is positivity preserving for all $t >0$.} \label{U6}\\ &d.\ \text{$\l_0 \equiv \inf\ \text{spectrum}\ H$ is an eigenvalue.} \label{U7} \end{align} \noindent Then $\l_0$ has multiplicity 1 and belongs to an a.e. strictly positive eigenfunction. \end{theorem} The proof depends on the following lemma. \begin{lemma} \label{lemU2} Let $m$ be a probability measure on some space $X$. Suppose that $H_0$ is a non-negative self-adjoint operator on $L^2(m)$ and that $V$ is a potential in $L^2(m)$. Suppose that $H_0 +V$ is essentially self-adjoint with closure $H$ and that conditions a. and b. of Theorem \ref{thmU2} hold. If $f$ is a bounded measurable function such that its multiplication operator, $M_f \equiv $ multiplication by $f$, commutes with $e^{-tH}$ for some $t >0$ then there is a real constant $C$ such that $f = C$ a.e. \end{lemma} \begin{proof} If $M_f$ commutes with $e^{-tH}$ for some $t>0$ then it commutes for all $t >0$ by the spectral theorem. Since $M_f 1 = f$ we have \begin{align} M_fe^{-tH} 1 = e^{-tH}f\ \ \ \forall t \ge0.
\label{U9} \end{align} Since $1$ is in the domains of $H_0$ and $V$ it is in $D(H)$. Hence the left side is differentiable at $t = 0$ and therefore so also is the right side. Thus $L^\infty \ni f \in D(H)$. By \eref{U5} we have then $f \in D(H_0)\cap L^\infty$. Differentiating \eref{U9} at $t=0$ gives $M_fH1= Hf$. That is, \begin{align} f(H_0 + V) 1= (H_0 + V) f. \label{U10} \end{align} Since $H_0 1=0$ it follows from \eref{U10} that $H_0f = 0$. The lemma now follows from \eref{U4}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmU2}] Pick $t >0$. Let $\psi$ be an eigenfunction for $H$ belonging to the eigenvalue $\l = \l_0$; it satisfies $e^{-tH}\psi = e^{-t\l}\psi$. Let $A = e^{-tH}$ and $\mu = e^{-t\l}$. Since $A$ is positivity preserving we have $\mu |\psi| = |A\psi| \le A|\psi|$. Therefore $\mu (|\psi|, |\psi|) \le (A|\psi|, |\psi|)$ and, since $\mu = \sup \text{spectrum}\ A$, $|\psi|$ is also an eigenfunction of $A$ belonging to $\mu$. So also are $|\psi| \pm \psi$. At least one of these is nonzero. So $(A - \mu)\phi =0$ for some almost everywhere non-negative function $\phi$ which is not identically zero. Let $E = \{ x: \phi(x) =0\}$ and let $f = \chi_E$. If $L^2 \ni g \ge 0$ then \begin{align} (A(fg), \phi) = (f g, A\phi) = (f g, \mu \phi) =0\ \text{because} \ \ f\phi =0. \end{align} Since $A(f g) \ge 0$ and is orthogonal to $\phi$ it must be supported in $E$. Therefore $A(f g) = f A(f g)$ for all non-negative $g \in L^2$ and hence for all $g \in L^2$. Thus $AM_f = M_f AM_f $, where $M_f$ is the Hermitian projection consisting of multiplication by $f$. Take adjoints to find $M_fA = M_f AM_f = AM_f$. So $M_f$ commutes with $A$. By Lemma \ref{lemU2}, $f$ is constant and therefore a.e. equal to 0 or 1. It can't be equal to 1 a.e. because $\phi > 0$ on a set of strictly positive measure. Therefore $E$ is the empty set (up to a set of measure $0$). So $\phi > 0 $ a.e.
Thus either $|\psi| - \psi >0$ a.e., in which case $\psi = - |\psi|$ a.e., or else $|\psi| + \psi >0$ a.e., in which case $\psi = |\psi|$ a.e. Thus any eigenfunction belonging to $\l_0$ is either strictly positive a.e. or strictly negative a.e. Since two such functions cannot be orthogonal, the eigenspace has dimension one. \end{proof} \bigskip \noindent \begin{proof}[Proof of Corollary \ref{corEU}] Since the hypotheses of Theorem \ref{thmns2} imply the hypotheses of Theorem \ref{thmE1}, the latter theorem ensures that $H$ has an eigenvalue at the bottom of its spectrum of finite multiplicity. Theorem \ref{thmU2} will show that the eigenvalue has multiplicity one and belongs to an a.e. positive eigenfunction once we verify the four conditions \eref{U4}--\eref{U7}. We already know that \eref{U7} holds by Theorem \ref{thmE1}. \eref{U6} was proven in Theorem \ref{thmesa2}. \eref{U5} follows from \eref{EU55} by intersecting both sides with $L^\infty$. For the proof of \eref{U4} observe that in our case $H_0 =\n^*\n$ and the measure $m$ satisfies the logarithmic Sobolev inequality \eref{mt1}. The constant functions therefore span the nullspace, and in fact there is a spectral gap, by the Rothaus-Simon theorem \cite{Rot1}, \cite{Simon1976}. For a direct proof of the Rothaus-Simon theorem see \cite[Theorem 2.5]{G1993} or \cite[Proposition 5.1.3]{BGL}. This completes the proof of Corollary \ref{corEU}. \end{proof} \subsection{Upper and lower bounds on $\|\psi\|_p$ for $p>0$} \label{seculb} \begin{theorem} \label{thmulb} Assume that \eref{mt1} holds and that $\|e^{-V}\|_{L^\nu(m)} < \infty$ for some $\nu > 2c$. Suppose also that $V \in L^{p_1}$ as in \eref{EU7}. Denote by $\psi$ the ground state of $\n^*\n +V$.
Then \begin{align} \int_X \psi^2 \log\psi\ dm &\le c_\nu\log \|e^{\l_0 - V}\|_\nu \label{L510}\\ \|\psi\|_p &\le \|e^{\l_0 - V}\|_\nu^{\tau(p)},\ \ 2 \le p <p_0 \label{L511} \\ \|\psi\|_r &\ge \|e^{\l_0-V}\|_\nu^{-\sigma} ,\ \ \ 0 < r <2, \label{L512} \end{align} where $\tau(p)$ is given by \eref{L505}, $c_\nu$ is given by \eref{L329a}, and \begin{align} \sigma = c_\nu(2r^{-1} -1). \label{L513} \end{align} \end{theorem} \begin{proof} Since $\|\psi\|_2 = 1$ we have $Ent_m(\psi^2) = 2 \int \psi^2 \log \psi\ dm$. Choosing $u = \psi$ in \eref{L325h} we therefore find \begin{align} \int_X \psi^2 \log \psi\ dm &\le c_\nu\<(H+\log \|e^{-V}\|_\nu)\psi, \psi\> \\ &= c_\nu(\l_0 + \log \|e^{-V}\|_\nu), \end{align} which proves \eref{L510}. For the proof of \eref{L511} we apply \eref{L289} with $q=2$. Since $\tau(2) = 0$ and $H\psi = \l_0 \psi$ it follows from \eref{L289} that \begin{align} e^{-t\l_0} \|\psi\|_p = \|e^{-tH}\psi\|_p \le \|e^{-V}\|_\nu^t \|\psi\|_2 \ \text{if}\ \ \ t \ge \tau(p). \label{L294a} \end{align} Hence $\|\psi\|_{p} \le \| e^{(\l_0-V)}\|_\nu^t \ \ \ \text{if}\ \ \ t \ge \tau(p)$. In particular \eref{L511} holds. Since $(2-r)c_\nu = \sigma r$ it follows from \eref{L510} that $ (2-r) \int_X \psi^2 \log\psi\ dm \le \log \|e^{\l_0 - V}\|_\nu^{\sigma r}$ and therefore \begin{align} (r-2) \int_X \psi^2 \log\psi\ dm &\ge \log \|e^{\l_0 - V}\|_\nu^{-\sigma r}. \label{L514} \end{align} Now $m_\psi \equiv \psi^2 m$ is a probability measure. Using Jensen's inequality we find \begin{align} \exp\((r-2) \int_X \psi^2 \log \psi\, dm\) &=\exp\(\int_X (\log\psi^{r-2})\, dm_\psi\) \notag\\ &\le\int_X \exp(\log\psi^{r-2})\, dm_\psi \notag\\ &= \int_X \psi^{r} dm. \label{L515} \end{align} Combine this with \eref{L514} to find \eref{L512}.
\footnote{The author thanks Barry Simon for a remark in a letter of August 5, 1973 that led to the simple proof of \eref{L512}.} \footnote{Inequalities similar to those in the proof of Theorem \ref{thmulb} can be found in \cite[page 33]{Mat98}.} \end{proof} \begin{remark} {\rm Theorem \ref{thmulb} relies on the hypercontractivity hypothesis to obtain a lower bound on $\|\psi\|_r$ in terms of $\|\psi\|_2$. Without some extra condition on $\psi$ beyond $\|\psi\|_2 =1$ such a lower bound cannot hold. For example, if $\psi =\epsilon^{-1} \chi_{[0,\epsilon^2]}$ in $L^2([0,1])$ then $\int_0^1\psi^2 dx = 1$ while $\int_0^1 \psi(x) dx = \epsilon$. So the $L^2$ norm does not control the $L^1$ norm in the absence of some further condition, such as hypercontractivity. } \end{remark} \section{The product of moments $\|\psi\|_r \|\psi^{-1}\|_s$} \label{secpm} \subsection{The moment product theorem} \label{secmpt} In the previous section we were concerned with the existence and uniqueness of the ground state $\psi$ and its growth properties, as measured by the $L^p$ norms $\|\psi\|_{L^p(m)}$ with $ p >0$. The key determiner of these properties was the behavior of the negative part of the potential, as measured by $\|e^{-V}\|_{L^\nu(m)}$. In the present section we will be concerned with the decay behavior of $\psi$ where it is small, as measured by the norms $\|\psi^{-1}\|_{L^s(m)}$. The key determiner of this behavior will be the positive part of the potential, as measured by $\|e^{V}\|_{L^\kappa(m)}$. The maximum value of $s$ for which we can establish such bounds is given in the following notation and theorem. \begin{notation}\label{notkappa3}{\rm For $\ka >0$ let \beq b_\ka = \sqrt{1 +(2c/\kappa)}.
\label{s98f} \eeq The quadratic equation \begin{align} t^2 -(2\kappa/c) (t+1) = 0 \label{W850} \end{align} has two solutions of opposite sign, $s_0>0$ and $-r_0<0$, given by \begin{align} s_0 &=(\kappa/c)\( b_\ka +1 \)\ \ \text{and} \ \ r_0= (\kappa/c)\( b_\ka -1 \) <1, \label{W851g} \end{align} in accordance with the quadratic formula: \beq t =(1/2)\{2\ka/c \pm \( (2\ka/c)^2+4(2\ka/c)\)^{1/2} \}= (\ka/c)(1 \pm b_\ka). \eeq The assertion that $r_0<1$ in \eref{W851g} follows directly from \eref{W850}, which shows that $t+1 >0$ for any solution, and in particular for $t= -r_0$. For later use note that \eref{W851g} is equivalent to \begin{align} s_0 = \frac{2}{b_\ka -1},\ r_0 = \frac{2}{b_\ka +1}\ \ \text{and also to} \ \ 2s_0^{-1} +1 = b_\ka = 2r_0^{-1} -1. \label{W851k} \end{align} The quadratic function in \eref{W850} factorizes as \begin{align} (2\ka/c)(t+1) - t^2 = (s_0-t)(t+r_0). \label{W851f} \end{align} With $c_\nu$ defined as in \eref{L329a}, define \begin{align} \ell(t) &= \frac{c}{2b_\ka}\log \frac{t + b_\ka c_\nu}{t- b_\ka c_\nu}, \ \ \text{for}\ \ t > b_\ka c_\nu. \label{W852} \end{align} For later use note that \begin{align} &\ell(t) - t \ \ \text{has a unique zero point in} \ \ (b_\ka c_\nu, \infty)\ \ \text{and} \label{W854}\\ &\ell(t) + t\ \ \text{has a unique minimum point in} \ \ (b_\ka c_\nu, \infty). \label{W854a} \end{align} The statement \eref{W854} follows from the fact that $\ell(t)$ is strictly decreasing from $\infty$ down to zero on the given interval while $t$ strictly increases to $\infty$ on the interval. \eref{W854a} follows from the computation \beq \ell'(t) = \frac{-c c_\nu }{t^2 - (b_\ka c_\nu)^2},\ \ t > b_\ka c_\nu, \label{W854c} \eeq which shows that $\ell'(t) + 1$, the derivative of $\ell(t)+t$, is strictly increasing from $-\infty$ to $1$ on the interval $\{t > b_\ka c_\nu\}$. } \end{notation} \begin{theorem} \label{thmmp1} $($Moment product theorem$)$. Assume the hypotheses of Theorem \ref{thmesa2} hold.
Let $\ka >0$ and define $r_0$ and $s_0$ as in Notation \ref{notkappa3}. Assume further that $\|e^V\|_\ka < \infty$. Suppose that \begin{align} 0<r< r_0\ \ \ \text{and}\ \ \ 0 < s < s_0. \label{W852a} \end{align} Let \begin{align} \sigma &=(2r^{-1} - 1) c_\nu \ \ \ \text{and}\ \ \ a = (2s^{-1} +1) c_\nu. \label{W852b} \end{align} Then \begin{align} \|\psi\|_r \| \psi^{-1}\|_s \le \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma)}. \label{W753g} \end{align} \end{theorem} The proof will be given in the next four subsections. \begin{remark} \label{rempr} {\rm We will see by example in Section \ref{secgp} that the peculiar restriction on $r$ and $s$ arising from the quadratic equation \eref{W850} is not an artifact of the proof. } \end{remark} \subsection{Aida's identity} \begin{theorem}\label{thmA3} {\rm $($Aida, \cite[Equ. (3.26)]{Aida2001}$)$.} Assume the hypotheses of Theorem \ref{thmesa2}. Denote by $\psi$ the ground state for $H \equiv \n^*\n +V$ $($closure$)$. Informally, \begin{align} \n^*\n \psi + V \psi = \l_0 \psi. \label{sch} \end{align} Let \begin{align} F = - \log \psi \label{W6} \end{align} and let $v :\R \to \R$ be a bounded $C^1$ function with bounded derivative. Then, writing $v(F)$ for the composition of $v$ with $F$, we have \begin{align} \int_X (v'(F) + v(F)) |\n F|^2 dm = \int_X v(F) (V -\l_0) dm. \ \ \text{Aida's identity.} \label{W30a} \end{align} In particular, \begin{align} \ \ \ \int_X |\n F|^2 dm = \int_X (V - \l_0) dm. \label{W31} \end{align} \end{theorem} \begin{proof} Since $\psi$ is in the domain of $H$ it is also in the form domain of $H$. By \eref{EU60} it is therefore in the form domain of $\n^*\n$. That is, $\n \psi \in L^2(m)$. Let $\ep >0$ and define $\psi_\ep = \psi +\ep$ and $F_\ep = - \log \psi_\ep$. Since $\psi \ge 0$, $\psi_\ep^{-1}$ is bounded and $\psi/\psi_\ep \le 1$. Moreover $\n F_\ep = -\psi_\ep^{-1} \n \psi$, which is in $L^2(m)$. Suppose that $v:\R \to \R$ is $C^1$ and $v$ and $v'$ are bounded.
Let \begin{align} w =v(F_\ep) e^{F_\ep} = v(F_\ep) \psi_\ep^{-1}. \end{align} Then $w$ is bounded and also \begin{align} \n w &= v'(F_\ep) e^{F_\ep} \n F_\ep +v e^{F_\ep} \n F_\ep \notag \\ &= (v' +v) e^{F_\ep} \n F_\ep, \end{align} which is in $L^2(m)$ because $(v'+v)$ and $e^{F_\ep}$ are bounded. So $w\in Q(\n^*\n)\cap Q(|V|)$. We may therefore compute the inner product of $w$ with both sides of the Schr\"odinger equation $H\psi = \l_0\psi$ as follows. \begin{align*} \l_0\int_X \psi w\, dm &= \int_X (H\psi) w\, dm \\ &=\int_X(\n^*\n \psi) w\, dm + \int_X V \psi w \, dm \\ &=\int_X(\n \psi)\cdot (\n w)\, dm + \int_X V\psi w \, dm\\ &=\int_X (\n \psi)\cdot \((v' +v) \psi_\ep^{-1} \n F_\ep \) dm+ \int _X V\psi w \, dm\\ &= -\int_X (v' +v) \n F_\ep\cdot \n F_\ep dm + \int_X V\psi w \, dm. \end{align*} Therefore \begin{align} \int_X (v' +v) |\n F_\ep|^2 dm = \int_X (V-\l_0)v(F_\ep) (\psi/\psi_\ep) dm, \label{W40} \end{align} where we have written $v$ for $v \circ F_\ep$. Consider first the case $v \equiv 1$. Since $|\n F_\ep|^2 = \psi_\ep^{-2} |\n \psi|^2 \uparrow \psi^{-2}|\n \psi|^2 = |\n F|^2$ as $\ep \downarrow 0$, we may apply the monotone convergence theorem on the left and the dominated convergence theorem on the right to find \begin{align} \int_X |\n F|^2 dm = \int_X (V-\l_0) dm. \end{align} This proves \eref{W31}. Furthermore, since $|(v' +v) |\n F_\ep|^2| \le \(\sup |v' + v|\) | \n F|^2$, we can apply the dominated convergence theorem to the left side of \eref{W40} (as well as on the right) to find \eref{W30a}. \end{proof} \begin{remark} {\rm We will need different regularizations to carry out computations based on Aida's identity. The regularization of $F$ given by $\psi \to \psi+\ep$, which we used in the preceding proof, was already used by Aida \cite{Aida2001}. } \end{remark} \begin{remark}\label{remWKB} {\rm (WKB equation).
Aida's identity \eref{W30a} can be informally derived, as in Remark \ref{remAWKB}, from the WKB equation, \begin{align} \n^*\n F+ | \n F|^2 = V - \l_0, \ \ \ \text{ (WKB)} \label{W7} \end{align} where $F$ is given by \eref{W6}. \eref{W7} itself follows from our form of the Schr\"odinger equation \eref{sch} with the help of the product rule for the $m$ divergence operator $\n^*$, defined in \eref{div1}, namely, \begin{align} \n^*(f \alpha ) &= f\n^* \alpha - \alpha \cdot \n f \ \ \ \ \text{$($product rule$)$}, \label{gs5} \end{align} where $\alpha$ is a vector field on $X$ and $f$ is a real valued function. The product rule follows readily from the definition \eref{div1}. To derive \eref{W7} observe that \eref{W6} gives $\n \psi = - \psi \n F$ and therefore $\n^* \n \psi = - \n^* (\psi \n F) = \n \psi \cdot \n F - \psi \n^*\n F$. It follows then from \eref{sch} that $ (\l_0 - V)\psi = \n \psi \cdot \n F - \psi \n^*\n F$. Divide this equation by $\psi$ to find \eref{W7}. } \end{remark} \begin{remark}\label{remAWKB} {\rm (Aida's identity from WKB). Proceeding informally, let $g$ be a ``differentiable'' real valued function on $X$. Multiply \eref{W7} by $g$ and integrate over $X$ to find \begin{align} \int_X g(x)(V(x) - \l_0) dm(x) &= \int_X\( ( \n^*\n F)g + g|\n F|^2\) dm(x) \notag \\ &= \int_X\( (\n F)\cdot (\n g) + g|\n F|^2\) dm(x). \label{W26} \end{align} Let $g(x) = v(F(x))$. Then $\n g (x) = v'(F(x)) \n F(x)$. Insert this into \eref{W26} to find \eref{W30a}. The integration by parts in \eref{W26} needs to be justified. This was done in our proof of Theorem \ref{thmA3}. } \end{remark} \subsubsection{Examples} \label{secA2} We will derive information about the $L^p$ norms $\|\psi^{\pm 1}\|_p$ by first applying the logarithmic Sobolev inequality \eref{mt1} to compositions $w\circ F$, with $F = -\log \psi$. For this we will need bounds for integrals of the form $\int_X u(F(x)) |\n F|^2 dm$ because they appear as the energy term in \eref{mt1}. 
Aida's identity, \eref{W30a}, allows us to express such an integral directly in terms of the potential $V$. To carry this procedure out, it is necessary to find a function $v$ such that \beq v'(s) + v(s) = u(s), \ \ \ s \in \R \label{W41b} \eeq when $u$ is given, as we can see from Aida's identity. Depending on the choice of $u$, we will derive different entropy bounds in Section \ref{secEbound} and then, via Herbst's method, norm bounds in Section \ref{secmombd}. In the following examples for Theorem \ref{thmA3} we ignore the previous boundedness restrictions on $v$ and $v'$ because these restrictions can be removed once more information about $V$ is available. \begin{example} \label{Ex1} {\rm Suppose that $u(s) = e^{as}$ for some real $a\ne -1$. Then we may take $v(s) =(1+a)^{-1} e^{as}$ as a solution to \eref{W41b}. \eref{W30a} then shows that \beq \int_X e^{aF(x)} |\n F|^2 dm(x) = (1+a)^{-1} \int_X e^{aF} (V -\l_0) dm. \label{W35} \eeq This simple example, with $a+1 >0$, underlies our main estimates. We will need to truncate this function $v$ at first to justify some technical steps. } \end{example} \begin{example} \label{Ex10} {\rm (A general class of examples). Suppose that $u:\R \to \R$ is a non-negative continuous function such that \begin{align} \int_{-\infty}^0 e^{r} u(r) dr <\infty. \label{W39} \end{align} Define \begin{align} v(s) = e^{-s}\int_{-\infty}^s e^r u(r) dr,\ \ \ -\infty < s < \infty . \label{W40d} \end{align} Then \beq v'(s) + v(s) = u(s) \label{W41} \eeq and, by \eref{W30a}, \begin{align} \int_X u(F) |\n F|^2 dm = \int_X v(F) (V -\l_0) dm. \label{W42} \end{align} Of course in each application of this identity one must verify the integrability of both sides for the given functions $u$ and $v$. } \end{example} \begin{example} \label{Ex3.6c} {\rm $($Aida, \cite[Equ. (3.27)]{Aida2001}$)$.
For any real number $a$ there holds \begin{align} \int_{F \ge a} |\n F|^2 dm \le \int_{F\ge a} (V - \l_0) dm \qquad \text{(Aida)} \label{W43a} \end{align} and \begin{align} \int_{F > a} |\n F|^2 dm \le \int_{F> a} (V - \l_0) dm. \qquad \text{(Aida)} \label{W43b} \end{align} \begin{proof} Choose $\ep >0$ and let $v$ be a smooth non-decreasing function on $\R$ which is zero on $(-\infty, a -\ep]$ and one on $[a,\infty)$. Then \begin{align} \int_{F \ge a} |\n F|^2 dm &\le \int_X v(F(x)) |\n F|^2 dm \notag \\ &\le \int_X (v + v') |\n F|^2 dm \notag \\ &=\int_X v(F(x)) (V -\l_0)dm \notag\\ &=\int_{a-\ep < F < a} v(F(x))(V- \l_0) dm + \int_{F\ge a} (V - \l_0) dm. \label{W43c} \end{align} Since $|v| \le 1$ and $V$ is integrable we can let $\ep\downarrow 0$ and find that the first term on the right of \eref{W43c} goes to zero. This proves \eref{W43a}. Use \eref{W43a} and the dominated convergence theorem twice to find \begin{align*} \int_{F> a}|\n F|^2 dm &=\lim_{n\to \infty} \int_{F\ge a+(1/n)}|\n F|^2 dm \\ &\le \lim_{n\to \infty} \int_{F\ge a+(1/n)}(V-\l_0) dm \\ &=\int_{F >a} (V-\l_0)dm. \end{align*} \end{proof} } \end{example} Note: The set $\{ F= a\}$ could be a set of strictly positive measure. But, interestingly, one always has $\int_{F= a} |\n F|^2 dm =0 $. We will not need this fact. \subsection{Entropy bound from Aida's identity} \label{secEbound} The simple identity in Example \ref{Ex1} underlies the method of this section. We will combine variants of it with the logarithmic Sobolev inequality \eref{mt1} to derive bounds on $L^p(m)$ norms of $1/\psi$. To avoid technical problems we first need to use a bounded truncation of $F$, denoted $\hat F$ in the next theorem. We will remove the truncation in Section \ref{secpfmp}. If, for some real-valued function $f$ on $X$, one puts $u = e^{f/2}$ into the logarithmic Sobolev inequality \eref{mt1}, one finds \begin{align} Ent_m(e^f) \le (c/2) \int_X |\n f|^2 e^f dm. \label{W60} \end{align} This is actually equivalent to \eref{mt1}.
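The substitution leading from \eref{mt1} to \eref{W60} rests on the pointwise identity $|\n e^{f/2}|^2 = \tfrac14 |\n f|^2 e^f$. As a quick numerical sanity check, entirely outside the formal development, the following sketch verifies this identity in one variable, with $d/dx$ standing in for $\n$; the sample function $f$ is an arbitrary illustrative choice.

```python
import math

# Illustrative check (not part of the formal development): for
# u = e^{f/2} one has, pointwise, (u')^2 = (1/4)(f')^2 e^f, which is
# the substitution converting the logarithmic Sobolev inequality
# Ent(u^2) <= 2c |grad u|^2-energy into its exponential form (W60).
def f(x):
    # arbitrary smooth sample function
    return math.sin(x) + 0.3 * x * x

def deriv(fun, x, h=1e-6):
    # centered difference approximation to fun'(x)
    return (fun(x + h) - fun(x - h)) / (2.0 * h)

def u(x):
    return math.exp(f(x) / 2.0)

for x in (-1.3, 0.0, 0.7, 2.1):
    lhs = deriv(u, x) ** 2
    rhs = 0.25 * deriv(f, x) ** 2 * math.exp(f(x))
    assert abs(lhs - rhs) < 1e-6 * (1.0 + abs(rhs))
```

Integrating this pointwise identity against $m$ is exactly what converts the energy term $2c\int_X |\n u|^2 dm$ into $(c/2)\int_X |\n f|^2 e^f dm$.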
It is convenient to use this form of the logarithmic Sobolev inequality. \begin{theorem} \label{thmeb2} $($Entropy bound$)$. Assume that the logarithmic Sobolev inequality \eref{W60} holds for $m$. Assume that the hypotheses of Theorem \ref{thmesa2} hold and also that $\|e^V\|_\ka < \infty$ for some $\ka >0$. Denote by $s_0$ and $-r_0$ the roots of the quadratic equation \eref{W850} in Notation \ref{notkappa3}. Let \beq \eta = \log \int_X e^{\kappa(V-\l_0)} dm. \label{W746} \eeq Suppose that $\phi:\R \to \R$ is bounded, smooth and \beq 0 \le \phi' \le 1. \label{W53} \eeq Let \beq \hat F(x) = \phi (F(x)). \label{W54} \eeq Then \begin{align} Ent_m(e^{t\hat F} )&\le \frac{t^2}{(s_0-t)(t+r_0)} \eta E(e^{t\hat F}) \ \ \ \text{if} \ t \in (-r_0, s_0). \label{W801f} \end{align} \end{theorem} Note: $\phi$ is intended to be a bounded approximation to the identity function $s \mapsto s$. It will later be taken to be a smooth approximation to the truncation $f_n(s) := (-n)\vee(s\wedge n)$. \begin{lemma} \label{lemeb3} If $\phi: \R \to \R$ is smooth, bounded and $0 \le \phi' \le 1$ then, for $\hat F = \phi \circ F$, we have \begin{align} Ent_m(e^{t\hat F}) \le \frac{ct^2}{2(1+t)} \int_X e^{t\hat F} (V- \l_0) dm \ \ \text{if}\ \ 1+t >0. \label{W59g} \end{align} \end{lemma} \begin{proof} Insert $f(x) = t\phi(F(x))$ into the logarithmic Sobolev inequality \eref{W60} to find \begin{align} Ent_m(e^{t \hat F}) &\le (c/2)\int_X e^{t\hat F} |\n (t \phi\circ F)|^2 dm \notag\\ &=(ct^2/2) \int_X e^{t\hat F} \phi' (F(x))^2 |\n F|^2 dm. \label{W73} \end{align} Let \beq u(s) = e^{t\phi(s)} \phi'(s)^2\ \ \ \text{ and} \ \ v(s) = (1+t)^{-1} e^{t\phi(s)}. \label{W74} \eeq We will show that \beq u(s) \le v(s) + v'(s).
\label{W59} \eeq Since $1+t >0$ and $(1+t)(v(s) + v'(s)) =e^{t\phi(s)}\( 1+ t \phi'(s)\)$, we have \begin{align*} (1+t) u(s) &= e^{t\phi(s)}\((1+t) \phi'(s)^2\) \\ & \le e^{t\phi(s)}\((1+t) \phi'(s)\) \\ & = e^{t\phi(s)} \( \phi'(s) + t \phi'(s)\) \\ & \le e^{t\phi(s)} \( 1+ t\phi'(s)\) \\ &= (1+t) (v(s) + v'(s)). \end{align*} Divide by $1+t$ to find \eref{W59}. From \eref{W59} and Aida's identity \eref{W30a} we find \begin{align} \int_X e^{t\phi(F)} \phi'(F)^2 |\n F|^2 dm &= \int_X u(F) |\n F|^2 dm \notag\\ &\le \int_X(v(F) + v'(F)) |\n F|^2 dm \notag\\ &= \int_X v(F) (V- \l_0) dm \notag\\ &=\frac{1}{(1+t)} \int_X e^{t\phi(F)} (V- \l_0) dm. \label{W59f} \end{align} Combine this with \eref{W73}, using $\hat F = \phi \circ F$, to find \eref{W59g}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmeb2}] From Young's inequality \eref{BG500c} we have \begin{align} \int_X e^{t\hat F} \kappa(V- \l_0) dm &\le Ent_m(e^{t\hat F}) + \(\log \int_X e^{\kappa(V-\l_0)} dm \) E(e^{t\hat F}) \notag\\ &= Ent_m(e^{t\hat F}) + \eta E(e^{t\hat F}). \label{W59y} \end{align} Note that $t+1 >0$ if $t \in (-r_0, s_0)$ because $r_0 <1$, by \eref{W851g}. From \eref{W59g} and \eref{W59y} we find \begin{align} Ent_m(e^{t\hat F}) &\le \frac{ct^2}{2\kappa(1+t)}\( Ent_m(e^{t\hat F}) + \eta E(e^{t\hat F}) \) \notag \end{align} and therefore \begin{align} \(1- \frac{ct^2}{2\kappa(1+t)}\)Ent_m(e^{t\hat F}) \le \frac{t^2}{2(\kappa/c)(1+t)}\eta E(e^{t\hat F}). \label{W801a} \end{align} But \begin{align} \(1- \frac{ct^2}{2\kappa(1+t)}\) &= 1 -\frac{t^2}{2(\kappa/c)(1+t)} = \frac{2(\kappa/c)(1+t) - t^2}{2(\kappa/c)(1+t)} . \notag \end{align} Insert this into \eref{W801a} and cancel denominators to find \begin{align} \( 2(\kappa/c)(1+t) - t^2\)Ent_m(e^{t\hat F}) \le t^2\eta E(e^{t\hat F}). \label{W801b} \end{align} The coefficient of $Ent_m(e^{t\hat F}) $ factorizes by \eref{W851f} into $(s_0 - t)(t+r_0)$, which is strictly positive for $ t \in (-r_0, s_0)$. 
We may therefore divide by it to find \eref{W801f}. \end{proof} \subsection{Moment bound from entropy: Herbst's method} \label{secmombd} Herbst's method for deriving bounds on the moments $E(\psi^{-s})$ consists first in deriving bounds on the ratios $Ent_m(\psi^{-s})/ E(\psi^{-s})$, which we have already done in Section \ref{secEbound} for the truncated versions of $\psi^{-s}$. (Recall $\psi^{-s} = e^{sF}$.) Second, one expresses the derivative $(d/ds)\(s^{-1}\log E(\psi^{-s})\)$ in terms of this ratio and then integrates the resulting differential inequality. In the many applications of this method \cite{Car79,AMS94,ASt94, AS94, BG99,DS84,Hino1997,Led1995,Rot8,GR98,Ust92,Ust93,Ust96,vanHandel}, however, one needs information about the initial condition at $s=0$ in order to derive information at time $s$ from the differential inequality. In our setting this initial condition takes the form of an assumption on $E(\log \psi)$, which we cannot use because the only size hypothesis available to us is the normalization condition $E(\psi^2) = 1$. Instead, we will continue the differential inequality through the apparent singularity at $s=0$ and use as initial condition the value of $E(\psi^r)$ for some $r >0$. We will thereby derive an upper bound for $\int_X \psi^{-s} dm$ in terms of a lower bound for $\int_X \psi^r dm$. The lower bound has already been derived in Section \ref{seculb}. Further discussion of the impracticality of using the initial condition at $s =0$ is given in Remark \ref{reminit2}. The next lemma carries out Herbst's method in the form we need for passing through the apparent singularity. We abstract this step in Herbst's method by replacing the truncated function $\hat F$ by a general bounded measurable function $g$. Various forms of the identity \eref{H2} figure in many of the applications of Herbst's method in the papers listed above. \begin{lemma}\label{lemH2} Let $g:X\to \R$ be a bounded measurable function.
Define \begin{align} \|e^{g}\|_t = E(e^{tg})^{1/t} \ \ \text{for}\ \ \ t\ne0, \ \ \ t \in \R. \label{H1} \end{align} Then \begin{align} (d/dt)\log \|e^g\|_t =\frac{Ent_m(e^{tg})}{t^2 E(e^{tg})}, \ \ \ t \ne0. \label{H2} \end{align} Moreover \begin{align} \lim_{t\downarrow 0} \log \|e^g\|_t &=\int_X g\, dm =\lim_{t\uparrow 0} \log \|e^g\|_t . \label{H3} \end{align} The singularity on the right hand side of \eref{H2} at $t=0$ is removable in the sense that the right side extends to a continuous function on $\R$. Suppose that $\beta$ is a continuous function on an interval $(-r_0, s_0)$ containing $0$ such that \begin{align} \frac{Ent_m(e^{tg})}{t^2 E(e^{tg})} \le \beta(t),\ \ \ 0 \ne t \in (-r_0, s_0). \label{H4} \end{align} Then \begin{align} \|e^{-g}\|_r \|e^g\|_s \le e^{\int_{-r}^s \beta(t) dt}\ \ \ \text{when}\ 0 <r < r_0\ \text{and}\ 0 < s < s_0. \label{H5} \end{align} \end{lemma} \begin{proof} Let $w(t) = E(e^{tg})$. If $t \ne0$ then \begin{align} w'(t) &= E(e^{tg} g) = (1/t)E(e^{tg} \log e^{tg}) \notag\\ &=(1/t) \(Ent_m(e^{tg}) + w(t) \log w(t)\). \label{H6} \end{align} Therefore \begin{align*} &(d/dt) \log \|e^g\|_t = (d/dt) \((1/t) \log w(t)\) \\ &=(1/t)w^{-1}w' -(1/t^2)\log w(t) \\ & = (1/t^2) w(t)^{-1} \(Ent_m(e^{tg}) + w(t) \log w(t)\) -(1/t^2)\log w(t) \\ &= (1/t^2)w(t)^{-1} Ent_m(e^{tg}). \end{align*} This proves \eref{H2}. If $t >0$ then $\|e^g\|_t = E((e^g)^t)^{1/t} \to \exp(\int_X g\, dm)$ as $t\downarrow 0$ by \cite[Page 71, Problem 5]{Ru}. This proves the first equality in \eref{H3}. If $t<0$ let $s = -t$. Then $\|e^g\|_t = E\((e^{-g})^s\)^{-1/s} \to \(\exp \int (-g) dm\)^{-1} = \exp \int g\, dm$ as $s \downarrow 0$. This proves the second equality.
Concerning the removability of the singularity in \eref{H2} observe that for small $t$ we have \begin{align} &Ent_m(e^{tg}) =E(e^{tg} tg) - E(e^{tg}) \log E(e^{tg}) \notag \\ &= E( tg + t^2g^2 +o(t^2)) \notag\\ &\ \ \ \ \ \ - \(1 + E(tg) +O(t^2)\)\log\(1+E(tg) +(t^2/2)E(g^2) +o(t^2)\) \notag \\ & = tE(g) + t^2E(g^2) +o(t^2) \notag\\ &\ \ \ \ \ \ - \(1 + E(tg) +O(t^2)\)\(E(tg) +(t^2/2)E(g^2) -E(tg)^2/2 +o(t^2)\) \notag \\ &= t^2E(g^2) +o(t^2) - \((t^2/2)E(g^2) -E(tg)^2/2 +o(t^2)\) \notag\\ &\ \ \ \ \ \ - \( E(tg) +O(t^2)\)\(E(tg) +(t^2/2)E(g^2) -E(tg)^2/2 +o(t^2)\) \notag\\ & = (t^2/2) E(g^2) -(t^2/2) E(g)^2 +o(t^2). \notag \end{align} Divide by $t^2$ to see that the right hand side of \eref{H2} has the common limit $(1/2)\(E(g^2) - E(g)^2\)$ from the left and the right. Now suppose that \eref{H4} holds. Writing $u(t) = \log E(e^{tg})^{1/t}$ for $t \ne 0$ we find from \eref{H2} and \eref{H4} that $du/dt \le \beta(t)$ for $t \ne 0$. Therefore, taking into account \eref{H3}, we have $u(s) - u(-r) = u(s) - u(0_+) +\(u(0_-) - u(-r)\) \le \int_0^s\beta(t) dt + \int_{-r}^0 \beta(t) dt = \int_{-r}^s \beta(t)dt$. Take the exponential of this inequality to find \begin{align} e^{u(s)} e^{-u(-r)} \le \exp \int_{-r}^s \beta(t)dt. \notag \end{align} Since $e^{u(s)} = \|e^{g}\|_s$ and $e^{-u(-r)} = \|e^{-g}\|_r$ the inequality \eref{H5} is proved. \end{proof} \begin{remark} \label{reminit} {\rm The proof of Lemma \ref{lemH2} shows that one can bound $\|e^g\|_s$ and $\|e^{-g}\|_r$ separately: One has $u(s) - u(0_+) \le \int_0^s \beta(t) dt$, which gives \beq \|e^g\|_s \le e^{E(g)} e^{\int_0^s \beta(t) dt}, \ \ \ 0 < s < s_0. \label{H10} \eeq Similarly $u(0_-) - u(-r) \le \int_{-r}^0 \beta(t) dt$, which gives $e^{E(g)} \|e^{-g}\|_r \le e^{\int_{-r}^0 \beta(t) dt}$. \eref{H5} follows by multiplying these two inequalities and canceling $e^{E(g)}$. It is the inequality \eref{H10}, with untruncated $g$, which is usually used in the application of Herbst's method.
Information about $E(g)$ is available in these applications and is sometimes taken as a hypothesis. But in our application $g$ is a truncated version of $\log\psi$. We have no useful information about $E(\log \psi)$. The usefulness of the product inequality \eref{H5} relies on the fact that $E(g)$ does not appear. See Remark \ref{reminit2} for further discussion of our case. } \end{remark} \begin{remark} \label{remHerbsthist} {\rm In many of the classical applications of the inequality \eref{H2} one assumes that a logarithmic Sobolev inequality, such as \eref{mt1}, holds and that $C\equiv \sup_X |\n g| < \infty$. In this case one has the simple entropy bound \begin{align*} Ent_m(e^{tg}) &\le 2c \int |\n e^{tg/2}|^2 dm \\ &=2c(t/2)^2 \int e^{tg} |\n g|^2 dm \\ & \le (ct^2/2) E(e^{tg}) C^2, \end{align*} from which it follows that one can take $\beta(t) = cC^2/2$ for all $t \ne0$ in \eref{H4}. By \eref{H10} we then have \begin{align} E(e^{sg}) \le e^{sE(g)} e^{s^2cC^2/2}\ \ \text{for}\ s >0\ \text{and therefore for all}\ \ s \in \R. \end{align} One can remove the boundedness assumption on $g$ in this inequality while maintaining the bound $|\n g| \le C$ on its gradient. Such knowledge of the Laplace transform of $g$ can be used to deduce other bounds on functions of $g$. See for example \cite[page 100]{AMS94}. The identity \eref{H6}, which is equivalent to \eref{H2}, was the form originally used by Herbst (cf. \cite[Corollary 3.4]{AMS94}). It works well for the case $g(x) = a x^2$ on $\R$. \eref{H2} was already used by van Handel \cite{vanHandel} in the study of sub-Gaussian measures. For our application of \eref{H2} we will need to use the entropy bound \eref{W801f}, which produces the choice of $\beta(t)$ given in \eref{W756} and which is singular at the endpoints of the interval $(-r_0, s_0)$. } \end{remark} \subsection{Proof of the moment product theorem} \label{secpfmp} The next lemma proves Theorem \ref{thmmp1} for a truncated version of $\psi$.
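The identity \eref{H2}, which drives the whole argument, can be spot-checked numerically. The sketch below (illustrative only, outside the formal development) takes an arbitrary three-point probability space for $m$ and compares a centered difference of $\log \|e^g\|_t$ with $Ent_m(e^{tg})/\(t^2 E(e^{tg})\)$; the weights and the values of $g$ are arbitrary sample choices.

```python
import math

# Numerical sanity check of (H2) on a three-point probability space:
#   (d/dt) log ||e^g||_t  =  Ent_m(e^{tg}) / (t^2 E(e^{tg})),  t != 0.
p = [0.2, 0.5, 0.3]          # probability weights of m (arbitrary sample)
g = [1.0, -0.5, 2.0]         # a bounded function g (arbitrary sample)

def E(vals):
    # expectation with respect to m
    return sum(pi * v for pi, v in zip(p, vals))

def ent(t):
    # Ent_m(h) = E(h log h) - E(h) log E(h), with h = e^{tg}
    h = [math.exp(t * gi) for gi in g]
    Eh = E(h)
    return E([hi * math.log(hi) for hi in h]) - Eh * math.log(Eh)

def u(t):
    # u(t) = log ||e^g||_t = (1/t) log E(e^{tg}),  t != 0
    return math.log(E([math.exp(t * gi) for gi in g])) / t

t, dt = 0.7, 1e-5
numeric = (u(t + dt) - u(t - dt)) / (2.0 * dt)       # centered difference
analytic = ent(t) / (t ** 2 * E([math.exp(t * gi) for gi in g]))
assert abs(numeric - analytic) < 1e-8
```

The same computation with $t<0$ illustrates why the singularity at $t=0$ in \eref{H2} is only apparent: both sides approach $\(E(g^2)-E(g)^2\)/2$ as $t \to 0$.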
As before, we write $\psi = e^{-F}$. \begin{lemma}\label{lemptm} $($Product of truncated moments$)$. Assume the hypotheses and notation of Theorem \ref{thmmp1}. Denote by $\hat F$ the truncated function defined in \eref{W54}. Then \begin{align} \|e^{-\hat F}\|_r \| e^{\hat F}\|_s \le \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma)} . \label{W753ga} \end{align} \end{lemma} \begin{proof} Choose $g = \hat F$ in Lemma \ref{lemH2}. By \eref{W801f} we have \begin{align} \frac{Ent_m(e^{t\hat F})}{t^2 E(e^{t\hat F})} \le \beta(t),\ \ 0 \ne t \in (-r_0, s_0), \label{W755} \end{align} where \begin{align} \beta(t) = \frac{\eta}{(s_0 -t)(t+r_0)}, \ \ \ t \in (-r_0, s_0). \label{W756} \end{align} It follows from \eref{H5} that \begin{align} \|e^{-\hat F}\|_r \| e^{\hat F}\|_s \le e^{\int_{-r}^s \beta(t) dt} \ \ \text{when}\ 0 <r < r_0\ \text{and}\ 0 < s < s_0. \label{W550c} \end{align} It remains only to compute that the right side of \eref{W550c} is equal to the right side of \eref{W753ga}. We isolate this computation in the following sublemma. \end{proof} \begin{sublemma} \label{sublem5} \begin{align} \exp\(\int_{-r}^s \frac{\eta}{(s_0 -t)(t+r_0)} dt\) = \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma)} \end{align} \end{sublemma} \begin{proof} From the definition \eref{W746} of $\eta$ we have $e^{\eta y} = \|e^{V - \l_0}\|_\ka^{\ka y}$ for any real number $y$. Thus we need to show that \begin{align} \int_{-r}^s \frac{\ka}{(s_0-t)(t+r_0)} dt = \ell(a) + \ell(\sigma). \label{W853a} \end{align} From \eref{s98f} and \eref{W851g} we see that $s_0+ r_0= 2(\ka/c) b_\ka$. Therefore \begin{align} \int_{-r}^s \frac{\ka}{(s_0-t)(t+r_0)} dt &= \ka (s_0+ r_0)^{-1} \int_{-r}^s \(\frac{1}{(s_0-t)} + \frac{1}{(t+r_0)} \)dt \notag\\ &= (c/2b_\ka)\log\frac{t+r_0}{s_0-t} \big|_{-r}^s . 
\label{W853b} \end{align} We want to rewrite this in terms of the quantities $a$ and $\sigma$ defined in \eref{W852b} because they will appear explicitly in the defective logarithmic Sobolev inequality \eref{gs805}--\eref{gs807b}. To this end we have, using \eref{W851k}, \begin{align} \log\frac{t+r_0}{s_0-t} \Big|_{-r}^s & = \log \(\frac{ r_0^{-1} +t^{-1} }{t^{-1} -s_0^{-1} }\ \frac{r_0}{s_0}\) \Big|_{-r}^s \notag \\ &= \log \(\frac{ r_0^{-1} +t^{-1} }{t^{-1} -s_0^{-1} } \) \Big|_{-r}^s \notag \\ &= \log \(\frac{ (2r_0^{-1} -1) +(2t^{-1} +1)}{(2t^{-1} +1) -(2s_0^{-1} +1) } \) \Big|_{-r}^s \notag \\ &=\log \(\frac{ b_\ka +(2t^{-1} +1)}{(2t^{-1} +1) -b_\ka } \) \Big|_{-r}^s \notag \\ &= \log \(\frac{ b_\ka +(2s^{-1} +1)}{(2s^{-1} +1) -b_\ka } \) - \log \(\frac{ b_\ka +(-2r^{-1} +1)}{(-2r^{-1} +1) -b_\ka } \) \notag \\ &= \log \(\frac{ b_\ka +(2s^{-1} +1)}{(2s^{-1} +1) -b_\ka } \) + \log \(\frac{ (2r^{-1} -1) + b_\ka}{-b_\ka +(2r^{-1} -1) } \) \notag \\ & = \log\frac{a + b_\ka c_\nu}{a - b_\ka c_\nu} + \log\frac{\sigma + b_\ka c_\nu}{\sigma - b_\ka c_\nu}. \label{W853c} \end{align} This, together with \eref{W853b} and the definition \eref{W852}, proves \eref{W853a}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmmp1}] We will choose a sequence $\phi_n$ of functions, each of which satisfies the conditions in Theorem \ref{thmeb2} for $\phi$ and such that $\phi_n(s)$ converges to $s$ in a suitable sense. Taking $\hat F = \phi_n\circ F$ in \eref{W753ga}, we will show that the limit yields \eref{W753g}. For an integer $n \ge 1$ the function $\R \ni y \mapsto f_n(y) \equiv (-n)\vee(y \wedge n)$ is linear on $[-n,n]$ and constant outside this interval. Choose a smooth nondecreasing function $\phi_n$ which agrees with $f_n$ outside the two intervals $\{|y - (\pm n)| < 1/2\}$, satisfies $0 \le \phi_n' \le 1$ and lies below $f_n$ for positive $y$ and above $f_n$ for negative $y$. Clearly such functions exist.
Then $0 \le \phi_n(y) \uparrow y$ for $y \ge0$ and $0 \ge \phi_n(y)\downarrow y$ for $y \le 0$. The functions $ F_n(x) := \phi_n(F(x))$ then converge monotonically upward on $\{x: F(x) \ge 0\}$ and downward on $\{ x: F(x) <0\}$. For $s >0$ the sequence $\int_Xe^{s F_n} dm$ therefore converges to $\int_X e^{s F} dm$ by applying the monotone convergence theorem over the first set and the dominated convergence theorem over the second set. Similarly, for $r >0$ the sequence $\int_X e^{-r F_n} dm$ converges by applying these two theorems to the opposite sets. Choose $\phi$ in Theorem \ref{thmeb2} to be $\phi_n$. The left side of \eref{W753ga} is then $\| e^{-F_n}\|_r \|e^{F_n}\|_s$, which converges to $\| e^{-F}\|_r \|e^{F}\|_s$ as $n\to \infty$. The right side of \eref{W753ga} is independent of $n$ and the inequality therefore holds in the limit. Since $\psi = e^{-F}$ and $\psi^{-1} = e^F$, \eref{W753g} follows. \end{proof} \bigskip \begin{remark}\label{reminit2} {\rm Remark \ref{reminit}, together with the limiting procedure of the previous proof, shows, informally, that \begin{align} \|\psi^{-1}\|_s &\le \| e^{V-\l_0}\|_\ka^{\ell(a)} \exp\(\int_X F\, dm\), \ \ \ 0< s< s_0 \ \ \text{and} \label{W860}\\ \|\psi\|_r &\le \| e^{V-\l_0}\|_\ka^{\ell(\sigma)} \exp\(-\int_X F\, dm\), \ \ \ 0<r < r_0. \label{W861} \end{align} On the one hand, the two exponential factors are finite because \begin{align} 2\int (-F) dm &= \int \log \psi^2 dm \le \int \psi^2 dm =1\ \ \ \text{and} \notag \\ s\int F dm &= \int \log \psi^{-s} dm \le \int \psi^{-s} dm < \infty \end{align} if $0 < s < s_0$, by Theorem \ref{thmmp1}. On the other hand, these inequalities are not useful for us because we do not have good control over the size of the exponential factor in \eref{W860}. Some bounds on $\pm \int_X F dm$ are derived in \cite[Lemma 3, Part (4)]{Aida2001}.
} \end{remark} \begin{remark}\label{remnonu} {\rm The bound in the moment product inequality \eref{W753g} depends on $\|e^V\|_\ka , \ka$ and $\l_0$, but only uses the condition $\|e^{-V}\|_\nu < \infty$ for the purpose of showing $\n^*\n + V$ is essentially self-adjoint and that a unique ground state exists. The boundary values $r_0, s_0$ depend only on $\ka$. The inequality \eref{W753g} therefore holds without any specific assumption on $e^{-V}$ if the essential self-adjointness and existence of a unique ground state can be shown by some other method. The equation \eref{W853c} shows that the exponent of $\|e^{V-\l_0}\|_\ka$ in \eref{W753g} depends only on $c, \ka$ and on $r,s$ but not on $\nu$. } \end{remark} \section{$L^p$ bounds on the inverse of the ground state} \label{seclp} \subsection{The controlling functional of $V$} \label{secM} The upper bound \eref{W753g} on the product of moments is dominated by a power of $\|e^{V-\l_0}\|_\ka$ while a lower bound on $\|\psi\|_r$ is dominated by a power of $\|e^{\l_0-V}\|_\nu$, as in \eref{L512}. The ground state eigenvalue appears in both sets of estimates. We will see that when combining these estimates so as to get a bound on $\|\psi^{-1}\|_s$ it is possible to arrange these two factors in a product so that the eigenvalue $\l_0$ cancels. As a result the following functional of $V$ appears naturally in almost all of the estimates. \begin{notation} \label{notM} {\rm Let \begin{align} M = \|e^V\|_{L^\ka(m)} \| e^{-V}\|_{L^\nu(m)}. \label{M1} \end{align} $M$ depends on $\ka, \nu$ and $V$. $M$ has the following general properties for any $\ka >0, \nu >0$ and $a \in \R$. \begin{align} \|e^{V-a}\|_\kappa \|e^{a - V}\|_\nu &=M. \label{M2} \\ M &\ge 1. \label{M3} \end{align} \eref{M2} holds because the constant factors $e^{-a}$ and $e^a$ cancel. For the proof of \eref{M3} observe that for any $p >0$ we have \begin{align} 1 = \( \int e^{pV/2} e^{-pV/2} dm\)^2 \le \int e^{pV} dm \int e^{-pV} dm. 
\notag \end{align} Therefore $\|e^V\|_p \|e^{-V}\|_p \ge 1$. Choose $ p = \min (\ka, \nu)$. If, say, $p=\nu$ then we have $1 \le \|e^V\|_\nu \|e^{-V}\|_\nu \le \|e^V\|_\ka \|e^{-V}\|_\nu$ by H\"older's inequality. A similar argument holds if $p =\ka$. This proves \eref{M3}. } \end{notation} \begin{lemma} $($Upper and lower bounds on $\l_0)$. Assume that the logarithmic Sobolev inequality \eref{mt1} holds. Then \begin{align} e^{-\l_0} &\le \|e^{-V}\|_\nu, \ \ \ \nu \ge 2c \ \ \ (\text{Federbush})\ \ \label{s1} \\ e^{\l_0 }&\le \|e^V\|_\kappa , \ \ \ \ \ \kappa >0\ \ \ \ \ (\text{Aida}) \label{s2} \\ e^{t \l_0} &\le e^{t\int_X V dm}, \ \ \ t \ge 0 \label{s2a} \\ \|e^{V-\l_0}\|_\kappa &\le M, \ \ \ \ \ \ \ \ \ \ \nu \ge 2c \label{s3} \\ \|e^{\l_0 - V}\|_\nu & \le M, \ \ \ \ \ \ \ \ \ \ \kappa >0\ \label{s4} \\ \|e^{V-\l_0}\|_\kappa \|e^{\l_0 - V}\|_\nu &=M, \qquad\ \ \forall\ \ka >0, \nu > 0 \label{s5} \end{align} \end{lemma} \begin{proof} The Federbush semi-boundedness theorem, see Remark \ref{remfed5}, asserts that $e^{-\l_0} \le \|e^{-V}\|_{2c}$ because $\l_0 = \inf \{((H_0 +V)u,u): \|u\|_2 =1\}$. \eref{s1} now follows from H\"older's inequality. From \eref{W31} we find that $\l_0 = \int_X V dm -\int_X|\n F|^2 dm \le \int_X V dm$. Therefore \begin{align} \l_0 \le \int_X V dm, \label{s6} \end{align} from which \eref{s2a} follows. But also $\kappa \l_0 \le \int \kappa V dm \le \log \int e^{\kappa V} dm$ by Jensen's inequality. Hence $\l_0 \le \log \| e^V\|_\kappa$, from which \eref{s2} follows. In view of \eref{s1} we have $\|e^{V -\l_0}\|_\ka = \|e^V\|_\ka e^{-\l_0} \le \|e^V\|_\ka \|e^{-V}\|_\nu = M$, giving \eref{s3}. \eref{s4} follows similarly from \eref{s2}. The identity \eref{s5} is a special case of \eref{M2}. \end{proof} \subsection{Upper bound on $\int \psi^{-s} dm$ for $s > 0$} \label{secub} The following is a corollary of Theorem \ref{thmmp1}. \begin{corollary} \label{corub1} $($Upper bound on $\|\psi^{-1}\|_s)$.
Assume the hypotheses and notation of Theorem \ref{thmmp1}. Suppose that $\sigma > b_\ka c_\nu$. Then \begin{align} \|\psi^{-1}\|_{L^s(m)} \le \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma)} \|e^{\l_0-V}\|_\nu^{\sigma}, \ \ \ 0 < s < s_0, \label{W710a} \end{align} where $a$ is given by \eref{W852b}. In particular \begin{align} \|\psi^{-1}\|_{L^s(m)} \le M^{\ell(a) + \ell(\sigma) +\sigma} . \label{W710b} \end{align} If $ 0< s <\min\{s_0, 2\}$ then \begin{align} \int_X e^{s|\log \psi|} dm < \infty \ \ \text{and}\ \ Ent_m(e^{s|\log \psi|}) < \infty . \label{W712} \end{align} \end{corollary} \begin{proof} Given $\sigma >b_\ka c_\nu$, define $r$ by \eref{W852b}. Combine \eref{W753g} and \eref{L512} to find \eref{W710a}. Use \eref{s3} and \eref{s4} to derive \eref{W710b} from \eref{W710a}. For the proof of \eref{W712} observe that for $0 < s < s_0$, \eref{W710a} implies that $\int e^{s(-\log \psi)}dm < \infty$. On the other hand if $0 < s \le 2$ then $\int e^{s\log\psi} dm = \int \psi^s dm \le \(\int_X \psi^2 dm\)^{s/2} =1$ by Jensen's inequality. Since $\int e^{s|\log \psi|} dm \le \int e^{s\log \psi} dm + \int e^{s(-\log \psi)} dm $ the first assertion in \eref{W712} follows. The second assertion follows by choosing a slightly larger $s$ in the first assertion. \end{proof} \begin{remark} \label{remsigmas} {\rm The bound \eref{W710b} arises from bounding each of the two factors in \eref{W710a} by $M$ to a power, using \eref{s3} and \eref{s4}. But there is a loss in using \eref{s3} and \eref{s4} separately instead of using the combined product, as in \eref{s5}, where possible. If, given $s$, one chooses $\sigma$ suitably then the two powers on the right side of \eref{W710a} can be made equal and the $\l_0$ independent bound \eref{s5} can be used. For the proof of existence of such a $\sigma$ observe that the definition \eref{W852} shows that $\ell$ is strictly decreasing on the interval $(b_\ka c_\nu, \infty)$ and $-\ell$ is strictly increasing with range $(-\infty, 0)$.
Consequently the function $\sigma \to \sigma - \ell(\sigma)$ is strictly increasing on this interval and has range $(-\infty, \infty)$. Given $s \in (0, s_0)$, there is therefore a unique number $\sigma_s$ in this interval such that $ \sigma_s - \ell(\sigma_s) = \ell(a)$. With this choice of $\sigma$ we then have \begin{align} \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma_s)} \|e^{\l_0-V}\|_\nu^{\sigma_s} =\(\| e^{V-\l_0}\|_\kappa\|e^{\l_0-V}\|_\nu\)^{\sigma_s} = M^{\sigma_s} \label{W710c} \end{align} and therefore \begin{align} \|\psi^{-1}\|_{L^s(m)} \le M^{\sigma_s}. \label{W710d} \end{align} Although this is a sharper bound than \eref{W710b} when $a$ and $\sigma$ do not have to be specified, it may be difficult in applications to control $\sigma_s$. } \end{remark} \subsection{$V$ is large where $\psi$ is small} \label{secbig} Suppose that $\ka >0$ and $s_0$ is defined as in Notation \ref{notkappa3}. Theorem \ref{thmmp1} shows that if $\|e^V\|_\ka < \infty$ then $\|\psi^{-1}\|_s <\infty$ for all $s < s_0$. Contrapositively, if $\|\psi^{-1}\|_s =\infty$ for some $s < s_0$ then $\|e^V\|_\ka =\infty$. We will show that a stronger contrapositive holds. Namely, if $\|\psi^{-1}\|_s =\infty$ for some $s < s_0$ then $\int_{\psi \le \delta} e^{\ka V} dm = \infty$ for all $\delta >0$. This is a quantitative version of the statement that $V$ is large where $\psi$ is small. A qualitative version, such as ``$V$ is unbounded where $\psi^{-1}$ is unbounded'', does not hold in our context, nor for a Schr\"odinger operator $-\Delta + V$ acting in $L^2(\R^n, dx)$. The latter is well known. We will describe in Example \ref{exunb} a bounded potential in our context for which $\psi$ and $\psi^{-1}$ are both unbounded. The proof of the strong contrapositive inequality is a consequence of the following local moment product theorem. \begin{theorem} \label{thmlocmp2} $($A local moment product theorem$)$. Suppose that the hypotheses of Theorem \ref{thmesa2} hold.
Let $\ka >0$ and define $r_0$ and $s_0$ as in Notation \ref{notkappa3}. Let $\delta >0$. Define \begin{align} \psi_\delta(x) = \min(\psi(x), \delta). \label{big40} \end{align} If $0 < r < r_0$ and $0<s < s_0$ then \begin{align} \|\psi_\delta\|_r \| \psi_\delta^{-1}\|_s \le \| e^{(V-\l_0)\chi_{\psi \le \delta}}\|_\ka^{\ell(a) + \ell(\sigma)}. \label{big42} \end{align} \end{theorem} The proof depends on the following lemma, which is a small variant of Lemma \ref{lemeb3}. \begin{lemma} \label{lemeb5} Let $\phi: \R \to \R$ be a smooth bounded function which is zero on $(-\infty, b]$ and such that $0 \le \phi' \le 1$ everywhere. Let $F = - \log \psi$ and define $\hat F =\phi\circ F$. Then \begin{align} Ent_m(e^{t\hat F})\le\frac{ct^2}{2(1+t)} \int_{F \ge b} e^{t\hat F} (V- \l_0) dm. \label{W59k} \end{align} Define $r_0$ and $s_0$ as in Notation \ref{notkappa3} and let \begin{align} \eta_b = \log \int_X e^{\kappa (V- \l_0)\chi_{F\ge b}} dm. \label{big54} \end{align} Then \begin{align} Ent_m(e^{t\hat F} )&\le \frac{t^2}{(s_0-t)(t+r_0)} \eta_b E(e^{t\hat F}) \ \ \ \text{if} \ t \in (-r_0, s_0). \label{W801g} \end{align} \end{lemma} \begin{proof} Let $h$ be a non-decreasing smooth function on $\R$ which is $0$ on $ (-\infty, b-\ep)$ and $1$ on $[b, \infty)$. Define \beq u(s) = e^{t\phi(s)} \phi'(s)^2\ \ \ \text{ and} \ \ v_1(s) = (1+t)^{-1} e^{t\phi(s)}h(s). \label{W74b} \eeq Then \begin{align} u(s) \le v_1(s) + v_1'(s) \label{W74c} \end{align} because on $[b, \infty)$, $u$ and $v_1$ are equal to the functions $u$ and $v$, respectively, given in \eref{W74}. Therefore \eref{W74c} holds over this interval by virtue of \eref{W59}. For $s\le b-\ep$ both sides of \eref{W74c} are zero, while for $b-\ep < s < b$, $u(s)$ is zero while, since $\phi$ and $\phi'$ vanish there, $(1+t) (v_1(s) + v_1'(s)) = e^{t\phi(s)}(h(s) + h'(s)) \ge 0$. So \eref{W74c} holds everywhere.
As in the derivation of \eref{W59f}, we then find \begin{align} \int_X e^{t\phi(F)} \phi'(F)^2 |\n F|^2 dm &\le \int_X v_1(F) (V- \l_0) dm \notag\\ &=\frac{1}{1+t}\int_X e^{t\phi(F)} h(F) (V-\l_0) dm. \notag \end{align} From \eref{W73} it follows that $Ent_m(e^{t\hat F})\le\frac{ct^2}{2(1+t)} \int_X e^{t\hat F} h(F) (V- \l_0) dm$. Since $e^{t\hat F}$ is bounded and $V - \l_0$ is integrable we can let $\ep \downarrow 0$ and conclude from the dominated convergence theorem that \begin{align} Ent_m(e^{t\hat F})\le\frac{ct^2}{2(1+t)} \int_X e^{t\hat F} (V- \l_0) \chi_{F \ge b} dm, \label{big55} \end{align} which is \eref{W59k}. The proof of \eref{W801g} follows from \eref{big55} the same way that \eref{W801f} follows from \eref{W59g}. One need only replace $\eta$ by $\eta_b$ in \eref{W59y}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmlocmp2}] Let \begin{align} \beta_b(t) = \frac{\eta_b}{(s_0 -t)(t+r_0)}, \ \ \ t \in (-r_0, s_0). \end{align} Then, from \eref{W801g}, we find \begin{align} \frac{Ent(e^{t \hat F})}{t^2 E(e^{t\hat F})} \le \beta_b(t). \end{align} As in the derivation of \eref{W550c} it follows that $\| e^{-\hat F}\|_r \| e^{\hat F}\|_s \le e^{\int_{-r}^s \beta_b(t) dt }$. Since $e^{\eta_b} = \|e^{(V-\l_0)\chi_{F \ge b}}\|_\ka^\ka$, we find, by Sublemma \ref{sublem5}, \begin{align} \| e^{-\hat F}\|_r \| e^{\hat F}\|_s \le \| e^{(V - \l_0)\chi_{F \ge b}}\|_\ka^{\ell(a) + \ell(\sigma) }. \label{big58} \end{align} Let $g(y) = 0\vee (y-b)$ for $ y \in \R$. Choose a sequence $\phi_n$ of smooth functions, each of which is bounded, such that $\phi_n = 0$ on $(-\infty, b]$, $0 \le \phi_n' \le 1$, and such that $\phi_n(y) \uparrow g(y)$. Such a sequence is easily seen to exist. In Lemma \ref{lemeb5} choose $\phi = \phi_n$. Let $F_b = g \circ F$. Then $\phi_n \circ F \uparrow F_b$ on $X$.
Replacing $\hat F$ by $\phi_n\circ F$ in \eref{big58} we can apply the dominated convergence theorem on the first factor on the left and the monotone convergence theorem on the second factor to find \begin{align} \|e^{-F_b}\|_r \|e^{F_b}\|_s \le \|e^{(V- \l_0)\chi_{F\ge b}}\|_\ka^{\ell(a) + \ell(\sigma)}. \label{big44} \end{align} Given $\delta >0$ choose $b$ so that $e^{-b} = \delta$. We claim that \begin{align} e^{-F_b(x)} = \delta^{-1}\psi_\delta(x)\ \ \forall x \in X. \label{big60} \end{align} Indeed $F_b(x) = 0 \vee(F(x) - b)$. So if $F(x) < b$ then $F_b(x) = 0$. So $e^{-F_b(x)} = 1$. But $\psi(x) =e^{-F(x)} > e^{-b} = \delta$. So \eref{big60} holds by the definition \eref{big40}. On the other hand, if $F(x) \ge b$ then $\psi(x) = e^{-F(x)}\le e^{-b} = \delta$. So $\psi_\delta(x) = \psi(x)= e^{-F(x)} = e^{-F_b(x)}e^{-b} = \delta e^{-F_b(x)}$. This proves \eref{big60}. Moreover $\{\psi \le \delta\} = \{F\ge b\}$. Therefore we may write \eref{big44} as $\| \delta^{-1} \psi_\delta\|_r \|\delta \psi_\delta^{-1}\|_s \le \| e^{(V- \l_0)\chi_{\psi \le \delta} }\|_\ka^{\ell(a) + \ell(\sigma)}$, which is \eref{big42} after canceling $\delta$. \end{proof} \begin{corollary} \label{corvlps2} $(V$ is large where $\psi$ is small$)$. Given $\kappa >0$, define $s_0$ as in Notation \ref{notkappa3}. Suppose that $0<s < s_0$. If \begin{align} \int_X \psi^{-s} dm = \infty \label{big71} \end{align} then \begin{align} \int_{\psi \le\delta} e^{\ka V} dm = \infty\ \ \ \text{for all} \ \ \delta >0. \label{big72} \end{align} \end{corollary} \begin{proof} Let $\delta >0$. Choose a number $r \in (0, r_0)$. Then $\|\psi_\delta\|_r > 0$ because $\psi_\delta > 0$ a.e.. From \eref{W852} we see that $\ell(t) >0$ for all allowed $t$ and therefore $\ell(a) + \ell(\sigma) >0$. Since $\psi_\delta^{-1} - \psi^{-1}$ is bounded, it follows from \eref{big71} that $\|\psi_\delta^{-1}\|_s = \infty$. 
The local moment product formula \eref{big42} shows then that $ \|e^{(V - \l_0)\chi_{\psi \le\delta}}\|_\ka^\ka =\infty$. That is, $\int_{\psi > \delta} 1 dm + \int_{\psi \le \delta} e^{\ka(V-\l_0)} dm = \infty$. \eref{big72} follows. \end{proof} \begin{remark} {\rm If one takes $s$ as given in the condition \eref{big71} then the condition on $\ka$ that ensures ``largeness'' in the sense of \eref{big72} is \beq \ka/c > \frac{ s^2}{ 2(s+1)} . \label{big70} \eeq Indeed $\ka/c = s_0^2/(2(s_0+1))$ by \eref{W850}. The condition \eref{big70} is therefore equivalent to $s < s_0$ for $s >0$ because the right side of \eref{big70} is increasing in $s$. } \end{remark} \bigskip Even if the potential is bounded, neither the ground state $\psi$ nor its inverse $1/\psi$ need be bounded, even in the presence of \eref{mt1}. Here is a simple example. \begin{example}\label{exunb} {\rm (Bounded $V$ but unbounded $\psi$ and $\psi^{-1}$). Take $m$ to be the Gauss measure $dm= (2\pi c)^{-1/2} e^{-x^2/(2c)} dx$. It is known that $m$ satisfies the logarithmic Sobolev inequality \eref{mt1}, \cite{G1}. Let \begin{align} \psi(x) = Z^{-1} \begin{cases} (1+x^2), & x >1 \\ (1+x^2)^{-1}, & x < -1 \\ \text{smooth and $>0$}, & -1 \le x \le 1 \end{cases} \end{align} with the middle piece chosen so that $\psi$ is smooth and strictly positive on all of $\R$. Let $F = - \log \psi$. Then outside the interval $[-1,1]$ we have $F(x) = \log Z -\delta \log (1+x^2)$, where $\delta = \sgn x$. Therefore, for $|x| >1$ we find $F'(x) =-2\delta x/(1+x^2)$ and $F''(x) = -2\delta \(\frac{1-x^2}{1+x^2}\)/(1+x^2)$. The definition \eref{div1} shows that for our measure $m$ we have $\n^*v(x) = -v'(x) + c^{-1}x v(x)$ for any smooth vector field $v$ on $\R$. Therefore $|\n F|^2 + \n^*\n F = (F')^2 - F'' + c^{-1} xF'$. We take this to be our potential. Explicitly, we have then, for $|x| >1$, \begin{align} V= 4 x^2/(1+x^2)^2 +2\delta \(\frac{1-x^2}{1+x^2}\)/(1+x^2) -2\delta c^{-1}x^2/(1+x^2). \end{align} By the WKB equation \eref{W7} the ground state for $\n^*\n +V$ is $\psi$.
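The claims of this example can be checked numerically. The following sketch (ours, not part of the paper) takes the illustrative normalization $c=1$, evaluates the displayed formula for $V$ on both branches $|x|>1$, compares it against a direct recomputation of $(F')^2 - F'' + xF'$, and confirms that $V$ stays bounded while $\psi$ and $1/\psi$ grow without bound.

```python
import numpy as np

# Numerical sanity check of the example (c = 1, an illustrative choice).
# Outside [-1, 1]: psi is proportional to (1+x^2)^delta, delta = sgn(x),
# F = -log psi up to a constant, and V = (F')^2 - F'' + x F'.

def V_formula(x, delta):
    # the displayed formula for V on |x| > 1 (with c = 1)
    return (4*x**2/(1 + x**2)**2
            + 2*delta*(1 - x**2)/(1 + x**2)**2
            - 2*delta*x**2/(1 + x**2))

def V_from_F(x, delta):
    # recompute V = (F')^2 - F'' + x*F' from F' and F'' of F = -delta*log(1+x^2)
    Fp = -2*delta*x/(1 + x**2)
    Fpp = -2*delta*(1 - x**2)/(1 + x**2)**2
    return Fp**2 - Fpp + x*Fp

xr = np.linspace(1.001, 200.0, 100000)   # right branch, delta = +1
xl = -xr                                 # left branch,  delta = -1

# the two expressions for V agree on both branches
assert np.max(np.abs(V_formula(xr, 1.0) - V_from_F(xr, 1.0))) < 1e-10
assert np.max(np.abs(V_formula(xl, -1.0) - V_from_F(xl, -1.0))) < 1e-10

# V stays bounded (V -> -2 as x -> +infinity, V -> +2 as x -> -infinity) ...
assert np.max(np.abs(V_formula(xr, 1.0))) < 3.0
assert np.max(np.abs(V_formula(xl, -1.0))) < 3.0

# ... while psi ~ 1+x^2 at +infinity and 1/psi ~ 1+x^2 at -infinity,
# so psi and 1/psi are both unbounded.
assert (1 + xr[-1]**2) > 1e4
```

On the right branch the formula collapses to $V = 2(1-x^2)/(1+x^2)$, which makes the boundedness evident by inspection.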
$V$ is bounded on $\R$ but $\psi$ and $\psi^{-1}$ are both unbounded. Theorem \ref{thmM} is applicable to this example and therefore the ground state measure $\psi^2 dm$ satisfies a logarithmic Sobolev inequality. } \end{example} \section{Defective LSI for the ground state measure} \label{secDLSI} \subsection{The ground state transformation} \label{secgs} In the previous sections we established properties of the Schr\"odinger operator $\n^*\n +V$ and its heat semigroup in the spaces $L^p(m)$. We also established properties of the ground state $\psi$ and its inverse $1/\psi$ in the spaces $L^p(m)$. The ground state measure associated to $\psi$ is the probability measure $m_\psi$ defined by \begin{align} dm_\psi : = \psi^2 dm. \end{align} In the present section we will relate the Schr\"odinger operator $\n^*\n + V$ to the Dirichlet form operator $\hat H$ for $m_\psi$. $\hat H$ acts densely in $L^2(m_\psi)$. Define \begin{align} U: L^2(m_\psi) &\to L^2(m)\ \ \ \ \ \text{by} \\ Uu &= u \psi, \ \ \ \ u \in L^2(m_\psi) \end{align} The identity $\int_X |u\psi|^2 dm = \int_X |u|^2 dm_\psi$ shows that the map $U$ is unitary. Denote by $H$ the closure of $\n^*\n +V$ in $L^2(m)$. In the next lemma we will make a computation, frequently made in this context, which shows that $U^{-1}(H - \l_0)U = \hat H$, and which at the same time exhibits the quantities which need to be estimated for proving invariance of intrinsic hypercontractivity. This computation is sketched in \cite[Section 4]{G1}, derived and used in \cite{KS85} and derived again in many similar contexts. We make no effort to identify the domains of operators in this partly informal computation or to justify some of the technical steps because in the cases of interest the final identities will be easily justifiable. \begin{lemma} \label{lemgs1} Let $\psi =e^{-F}$ be a strictly positive $($a.e.$)$ function with $\|\psi\|_{L^2(m)} = 1$. 
The adjoint of $\n$ with respect to the measure $m$, defined in \eref{div1}, is denoted $\n^*$. If $u$ is bounded and $|\n u|$ is in $L^2(m_\psi)$ then \begin{align} \int_X |\n (u\psi)|^2 dm =\int_X|\n u|^2\, dm_\psi -\int_X u^2(\n^*\n F + |\n F|^2) dm_\psi. \label{gs709F} \end{align} In particular, if $\psi$ is the ground state of $\n^*\n+V$ then \begin{align} \int_X |\n (u\psi)|^2 dm =\int_X|\n u|^2\, dm_\psi + \int_X u^2(\l_0 -V) dm_\psi \label{gs709G} \end{align} and \begin{align} U^{-1}(H- \l_0) U = \hat H. \label{gs709H} \end{align} \end{lemma} \begin{proof} Since $\n\psi = -\psi \n F$ the product rule gives $\n (u\psi) = (\n u)\psi + u \n \psi = (\n u - u\n F)\psi$. The product rule \eref{gs5} for $\n^*$ implies $\n^* (e^{-2F} \n F)= (\n^*\n F +2 |\n F|^2)e^{-2F}$. Hence \begin{align} \int_X |\n (u\psi)|^2 dm &=\int_X |\n u - u\n F |^2 \psi^2 dm \notag\\ & = \int_X \(|\n u|^2 + u^2 |\n F|^2 - 2 u\n u \cdot \n F\)\psi^2 dm \label{gs1000} \\ &= \int_X\(|\n u|^2 + u^2 |\n F|^2\) dm_\psi -\int_X \n u^2 \cdot e^{-2F} \n F dm \notag \\ &= \int_X\(|\n u|^2 + u^2 |\n F|^2\) dm_\psi -\int_X u^2 \n^*\( e^{-2F} \n F\) dm \notag\\ & = \int_X\(|\n u|^2 + u^2 |\n F|^2\) dm_\psi -\int_X u^2\( \n^* \n F + 2 |\n F|^2\) dm_\psi, \notag \end{align} which proves \eref{gs709F}. If $\psi$ is the ground state of $\n^*\n +V$ then \eref{gs709G} follows from \eref{W7}. The left side of \eref{gs709G} is $((\n^*\n) Uu, Uu)_{L^2(m)}$. The right side is $(\hat H u, u)_{L^2(m_\psi)} + ((\l_0 - V) Uu, Uu)_{L^2(m)}$. Therefore \beq ((\n^*\n) Uu, Uu)_{L^2(m)} + ((V- \l_0) Uu, Uu)_{L^2(m)} =(\hat H u, u)_{L^2(m_\psi)}. \notag \eeq Hence $(U^{-1}(H- \l_0) U u, u)_{L^2(m_\psi)} = (\hat H u, u)_{L^2(m_\psi)}$. Since $H$ and $\hat H$ are both symmetric \eref{gs709H} holds.
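As a side check (ours, not part of the proof), the identity \eref{gs709F} can be verified numerically in the one-dimensional Gaussian setting, where $dm = (2\pi)^{-1/2}e^{-x^2/2}dx$ (so $c=1$) and $\n^*\n F = -F'' + xF'$. The choices $F(x) = x^2/8$ and $u(x) = 1/(1+x^2)$ below are purely illustrative; since the identity is homogeneous in $\psi$, normalization of $\psi$ is not needed for the test.

```python
import numpy as np

# Check int |grad(u psi)|^2 dm = int |grad u|^2 dm_psi
#                                - int u^2 (nabla*nabla F + |grad F|^2) dm_psi
# for dm = standard Gaussian on R, F = x^2/8, u = 1/(1+x^2)  (illustrative).
x = np.linspace(-15.0, 15.0, 600001)
dx = x[1] - x[0]
w = np.exp(-x**2/2)/np.sqrt(2*np.pi)   # density of m; tails beyond |x|=15 negligible

F, Fp, Fpp = x**2/8, x/4, np.full_like(x, 0.25)
psi = np.exp(-F)
u = 1/(1 + x**2)
up = -2*x/(1 + x**2)**2

def integral(g):
    # equally spaced Riemann sum; very accurate for smooth decaying integrands
    return float(np.sum(g*w))*dx

lhs = integral((up - u*Fp)**2 * psi**2)                    # int |grad(u psi)|^2 dm
rhs = (integral(up**2 * psi**2)                            # int |grad u|^2 dm_psi
       - integral(u**2*(-Fpp + x*Fp + Fp**2)*psi**2))      # - int u^2(n*nF + |nF|^2) dm_psi
assert lhs > 0 and abs(lhs - rhs) < 1e-6
```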
\end{proof} \begin{corollary} \label{corgs2} Suppose that $m$ satisfies the logarithmic Sobolev inequality \eref{mt1}: \begin{align} Ent_m(f^2) \le 2c \int_X |\n f|^2 dm. \label{LSc3} \end{align} Then the ground state measure satisfies the inequality \begin{align} Ent_{m_\psi}(u^2) \le 2c\int_X |\n u|^2 dm_\psi + \int_X u^2(2c(\l_0 -V) + 2F) dm_\psi. \label{gs720} \end{align} \end{corollary} \begin{proof} Putting $f = u\psi =ue^{-F}$ we have \begin{align} \int_X u^2 \log u^2 dm_\psi &= \int f^2 \log (f^2e^{2F}) dm \notag\\ & = \int f^2 \log f^2 dm + \int f^2 2F dm. \notag \end{align} Since $\|f\|_{L^2(m)}^2 = \|u\|_{L^2(m_\psi)}^2 $ we therefore have \begin{align} Ent_{m_\psi}(u^2) = Ent_m(f^2) + \int f^2 2F dm. \label{gs724} \end{align} Combine this with \eref{LSc3} and then use \eref{gs709G} to find \begin{align} Ent_{m_\psi}(u^2) &\le 2c\int_X|\n(u\psi)|^2 dm +\int u^2 2F dm_\psi \label{gs725a} \\ &= 2c\int_X|\n u|^2\, dm_\psi + \int_X u^2(2c(\l_0 -V)+2F) dm_\psi. \label{gs725} \end{align} \end{proof} \begin{remark} \label{remearly} {\rm Many of the early approaches to the derivation of a DLSI from a perturbation of either $V$ or $F$ hinge on estimating the last integral in \eref{gs720}. We compare some of these approaches in Section \ref{secwp}. } \end{remark} \begin{remark} \label{remearly2}{\rm If one assumes only that a defective logarithmic Sobolev inequality holds, namely \beq Ent_m(f^2) \le 2c \int_X |\n f|^2 dm + D \|f\|_{L^2(m)}^2, \eeq instead of \eref{LSc3}, then the ground state transformation yields, instead of \eref{gs720}, the inequality \begin{align} Ent_{m_\psi}(u^2) &\le 2c\int |\n u|^2 dm_\psi \notag \\ &+ \int_X u^2(2c(\l_0 -V) + 2F) dm_\psi + D\|u\|_{L^2(m_\psi)}^2 . \label{gs720b} \end{align} If one knew that $F- cV$ were bounded above then \eref{gs720b} would show that $m_\psi$ also satisfies a DLSI. The discussion in Section \ref{secwp} includes a history of conditions that relate $F$ and $V$ in such a pointwise manner.
Such pointwise conditions do not fall within the purview of this paper. } \end{remark} \begin{remark}\label{remrosen2}{\rm We can borrow a bit of the kinetic energy from \eref{gs720} and shift it to the last term in \eref{gs720} to derive a condition on $\log \psi$ ensuring a DLSI: using the relation $f = u\psi$ and the identity \eref{gs709G} we find \begin{align*} &2c \int_X |\n (u\psi)|^2 dm = 2(c+a) \int_X |\n (u\psi)|^2 dm - 2a \int_X |\n f|^2 dm\\ &=2(c+a)\(\int_X|\n u|^2\, dm_\psi + \int_X (\l_0 -V)f^2 dm\) - 2a \int |\n f|^2 dm\\ & =2(c+a)\int_X|\n u|^2\, dm_\psi -2\int_X\( a |\n f|^2 +(c+a)(V - \l_0)f^2\) dm. \end{align*} Since, by \eref{gs725a}, $Ent_{m_\psi}(u^2) \le 2c\int_X |\n (u\psi)|^2 dm + 2\int_X F f^2 dm$, we have \begin{align} Ent_{m_\psi}(u^2) & \le 2(c+a) \int_X|\n u|^2\, dm_\psi \notag \\ & + 2\int_X\Big\{ F f^2- a |\n f|^2 -(c+a)(V-\l_0) f^2\Big\}dm. \label{gs740b} \end{align} Suppose then that there is a number $b$ such that $-\log \psi$ satisfies the form inequality \beq -\log \psi \le \{a \n^*\n +(c+a)(V-\l_0)\} +b \label{gs741b} \eeq in $L^2(m)$. Then the second line of \eref{gs740b} is at most $2b\| f\|_{L^2(m)}^2$ and we have the DLSI \beq Ent_{m_\psi}(u^2) \le 2(c+a) \int_X|\n u|^2\, dm_\psi + 2b \|u\|_{L^2(m_\psi)}^2. \label{gs742} \eeq This is a perturbation version of Rosen's lemma, \cite{DS84}. In practice one proves or assumes that $-\log \psi \le (c+a)(V-\l_0) +b$, which implies \eref{gs741b} and is slightly more general than the condition in Remark \ref{remearly2} but is still a pointwise condition. Example \ref{exunb} shows how easily this condition can fail even though the perturbed measure is hypercontractive. In that example $-\log \psi$ is unbounded above and below while $V$ is bounded.
} \end{remark} \subsection{The defective logarithmic Sobolev inequality} \label{secdlsi} In the following two theorems we derive a defective logarithmic Sobolev inequality for the ground state measure $m_\psi$ using progressively stronger conditions on the potential $V$. In the first theorem we assume that $\|e^{-V}\|_{L^\nu(m)} < \infty$ under the usual condition that $\nu > 2c$. We describe the defect partly in terms of $\|\psi^{-1}\|_{L^s(m)}$ to illustrate how this quantity plays a central role. In the second theorem we add on the hypothesis that $\|e^{V}\|_{L^\ka(m)} < \infty$ and use the bounds on $\|\psi^{-1}\|_{L^s(m)}$ derived in Section \ref{seclp}. The constants $c_\nu $ and $b_\ka$ that occur repeatedly are defined in \eref{L329a} and \eref{s98f} respectively. \begin{theorem} \label{thmDLSI2} Assume the hypotheses of Theorem \ref{thmesa2}. Suppose that \begin{align} a > c_\nu\ \ \ \text{and let}\ \ \ s = \frac{2 c_\nu}{a - c_\nu} .\ \ \ \text{Equivalently,} \ \ \ a = (1 +\frac{2}{s})c_\nu. \label{gs803} \end{align} Assume that $\|\psi^{-1}\|_s < \infty$. Then \begin{align} Ent_{m_\psi} (f^2) &\le 2a \int_X |\n f|^2 dm_\psi \notag \\ & \ \ \ \ \ +2 \| f\|_{L^2(m_\psi)}^2\Big\{ \log \(\|\psi^{-1}\|_s\|e^{\l_0 - V}\|_\nu^a\) \Big\} \label{gs805} \end{align} \end{theorem} \begin{theorem} \label{thmDLSI3} In addition to the hypotheses of Theorem \ref{thmesa2} assume that $\|e^V\|_\ka < \infty$ for some $\ka >0$. Suppose that \begin{align} a > c_\nu b_\ka \ \ \ \text{and}\ \ \ \sigma > c_\nu b_\ka. \label{gs803b} \end{align} Then \begin{align} Ent_{m_\psi} (f^2) &\le 2a \int_X |\n f|^2 dm_\psi \notag \\ & \ \ \ \ \ +2 \| f\|_{L^2(m_\psi)}^2 \log\(\|e^{\l_0 - V}\|_\nu^{a+\sigma} \, \| e^{V-\l_0}\|_\kappa^{\ell(a) + \ell(\sigma)} \). \label{gs806} \end{align} In particular, if $a = \sigma = t$, the unique point at which $\ell(t) = t$, $($cf. 
\eref{W854}$)$ then the right side is independent of $\l_0$ and there holds \begin{align} Ent_{m_\psi} (f^2) &\le 2a \int_X |\n f|^2 dm_\psi +2 \| f\|_{L^2(m_\psi)}^2 \log M^{2a}, \label{gs807} \end{align} with $M$ defined in \eref{M1}. For arbitrary $a$ and $\sigma$ in the allowed range $(c_\nu b_\ka, \infty)$ there holds the $\l_0$ independent bound \begin{align} Ent_{m_\psi} (f^2) &\le 2a \int_X |\n f|^2 dm_\psi +2 \| f\|_{L^2(m_\psi)}^2 \log M^{a+ \ell(a) +\sigma+\ell(\sigma)} . \label{gs807b} \end{align} \end{theorem} Note that the lower bound, $c_\nu b_\ka$, required of $a$ and $\sigma$ in \eref{gs803b} depends only on $c, \nu$ and $ \ka$. The proofs depend on the following lemma. \begin{lemma} \label{lemDLS5} If $\|u\|_{L^2(m_\psi)} < \infty$ and $Ent_{m_\psi}(u^2) < \infty$ then \begin{align} &\int_X u^2 F dm_\psi \le \frac{1}{2+s}\(Ent_{m_\psi}(u^2) + s\|u\|_{L^2(m_\psi)}^2 \log \|\psi^{-1}\|_{L^s(m)}\). \label{gs609b1}\\ & \int_X u^2\(\nu(\l_0 -V) +2F\) dm_\psi \notag \\ &\qquad \qquad \ \ \ \ \le \(Ent_{m_\psi}(u^2) + \nu\|u\|_{L^2(m_\psi)}^2 \log\| e^{\l_0 - V}\|_{L^\nu(m)}\). \label{gs607b1} \\ \ \notag \\ & \int_X u^2\(2c(\l_0 -V) + 2F\) dm_\psi \le\(1 -(c/a)\) Ent_{m_\psi}(u^2) \label{gs605b1a}\\ &\ \ \ \ \ \ \ \ \ +(2c/a)\Big\{a\log\| e^{\l_0 - V}\|_{L^\nu(m)} + \log \|\psi^{-1}\|_{L^s(m)} \Big\} \|u\|_{L^2(m_\psi)}^2 . \notag \end{align} \end{lemma} \begin{proof} Apply Young's inequality \eref{BG500c} to find \begin{align} (2+s)\int_X u^2 F dm_\psi &=\int_X u^2 \{(2+s)F\} dm_\psi \notag\\ &\le Ent_{m_\psi}(u^2) + \|u\|_{L^2(m_\psi)}^2 \log \int_X e^{(2+s)F} dm_\psi \notag\\ &= Ent_{m_\psi}(u^2) + \|u\|_{L^2(m_\psi)}^2 \log \int_X e^{s F} dm \notag\\ &= Ent_{m_\psi}(u^2) + s\|u\|_{L^2(m_\psi)}^2 \log \|\psi^{-1}\|_{L^s(m)}, \notag \end{align} proving \eref{gs609b1}.
To prove \eref{gs607b1} apply Young's inequality again to find \begin{align} \int_X &u^2\(\nu(\l_0 -V) +2F\) dm_\psi \notag \\ & \le Ent_{m_\psi}(u^2) + \|u\|_{L^2(m_\psi)}^2 \log \int e^{\nu(\l_0 -V) +2F} dm_\psi \notag \\ &= Ent_{m_\psi}(u^2) + \|u\|_{L^2(m_\psi)}^2 \log\int_X e^{\nu(\l_0 -V)} dm \notag\\ &= Ent_{m_\psi}(u^2) + \nu\|u\|_{L^2(m_\psi)}^2 \log \|e^{\l_0 -V}\|_\nu. \end{align} For the proof of \eref{gs605b1a} we can apply \eref{gs609b1} and \eref{gs607b1} after decomposing the left side of \eref{gs605b1a} as \begin{align} &\int_X u^2\(2c(\l_0 -V) + 2F\) dm_\psi \notag \\ &= \frac{2c}{\nu}\int_X u^2\(\nu(\l_0 -V) +2F\) dm_\psi +(1- (2c/\nu))\int_X u^2\, 2F dm_\psi \notag \\ & \le \frac{2c}{\nu}\(Ent_{m_\psi}(u^2) + \nu\|u\|_{L^2(m_\psi)}^2 \log\| e^{\l_0 - V}\|_{L^\nu(m)}\) \notag \\ &+ (1- (2c/\nu)) \frac{2}{2+s}\(Ent_{m_\psi}(u^2) + s\|u\|_{L^2(m_\psi)}^2 \log \|\psi^{-1}\|_{L^s(m)}\). \label{gs606b} \end{align} The definition \eref{gs803} gives $s/(2+s) = c_\nu a^{-1}$ and therefore $2/(2+s) = 1 - c_\nu a^{-1}$. The combined coefficient of $Ent_{m_\psi}(u^2) $ in the last two lines is therefore $(2c/\nu) + (1- (2c/\nu)) (1 - c_\nu a^{-1}) =1-ca^{-1}$, since $(1-(2c/\nu)) c_\nu = c$. This agrees with the coefficient of $Ent_{m_\psi}(u^2)$ in \eref{gs605b1a}. The coefficient of $\log \|e^{\l_0- V}\|_\nu$ in \eref{gs606b} is clearly in agreement with that in \eref{gs605b1a}. The coefficient of $\|u\|_{L^2(m_\psi)}^2 \log \|\psi^{-1}\|_{L^s(m)}$ in \eref{gs606b} is $2 (1- (2c/\nu)) s/(2+s)= 2 (1- (2c/\nu)) c_\nu a^{-1} = 2c/a$, giving agreement with \eref{gs605b1a}. 
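The workhorse in all three estimates is the entropy form of Young's inequality \eref{BG500c}: for a probability measure $\mu$ and $u^2 > 0$, $\int u^2 G\, d\mu \le Ent_\mu(u^2) + \|u\|_{L^2(\mu)}^2 \log \int e^{G} d\mu$. The following sketch (ours, not part of the proof) tests this inequality on random finite probability spaces, which stand in for $(X, m_\psi)$.

```python
import numpy as np

# Test of the entropy duality used three times above:
#   int u^2 G dmu  <=  Ent_mu(u^2) + ||u||_{L^2(mu)}^2 * log int e^G dmu,
# on random finite probability spaces (all choices here are illustrative).
rng = np.random.default_rng(0)

def entropy(f, mu):
    # Ent_mu(f) = int f log f dmu - (int f dmu) log(int f dmu),  f > 0
    m1 = np.dot(f, mu)
    return np.dot(f*np.log(f), mu) - m1*np.log(m1)

for _ in range(1000):
    n = int(rng.integers(2, 20))
    mu = rng.random(n); mu /= mu.sum()      # probability weights
    u2 = rng.random(n) + 1e-3               # values of u^2 > 0
    G = rng.normal(0.0, 3.0, n)             # values of the exponent, e.g. (2+s)F
    lhs = np.dot(u2*G, mu)
    rhs = entropy(u2, mu) + np.dot(u2, mu)*np.log(np.dot(np.exp(G), mu))
    assert lhs <= rhs + 1e-9
```

Equality forces $e^{G}$ to be proportional to $u^2$, so random draws typically exhibit a strictly positive gap.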
\end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmDLSI2}] Combining \eref{gs720} with \eref{gs605b1a} we find \begin{align*} &Ent_{m_\psi}(u^2) \le 2c\int |\n u|^2 dm_\psi + \int_X u^2(2c(\l_0 -V) + 2F) dm_\psi \\ &\qquad \qquad \ \ \ \le 2c\int |\n u|^2 dm_\psi +\(1 -(c/a)\) Ent_{m_\psi}(u^2) \\ & \ \ \ \ \ \ \ \ \ +\(2c \log\| e^{\l_0 - V}\|_{L^\nu(m)} + (2c/a) \log \|\psi^{-1}\|_{L^s(m)} \) \|u\|_{L^2(m_\psi)}^2 . \end{align*} Transfer the term $\(1 -(c/a)\) Ent_{m_\psi}(u^2)$ to the left side and multiply by $a/c$ to find \begin{align*} &Ent_{m_\psi}(u^2) \le 2a\int |\n u|^2 dm_\psi \\ &+2\(a \log\| e^{\l_0 - V}\|_{L^\nu(m)} + \log \|\psi^{-1}\|_{L^s(m)} \) \|u\|_{L^2(m_\psi)}^2, \end{align*} which is \eref{gs805}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmDLSI3}] We will bound the factor $ \|\psi^{-1}\|_{L^s(m)}$ in \eref{gs805} using the bound \eref{W710a}, given in Corollary \ref{corub1}. To apply Corollary \ref{corub1} we must show that $s$, defined in \eref{gs803} is at most $s_0$. Using \eref{gs803b} and \eref{W851k}, we find \begin{align} s&<\frac{2c_\nu}{c_\nu b_\ka - c_\nu}= \frac{2}{b_\ka - 1} = s_0. \end{align} We also need to verify that $r$, defined in \eref{W852b}, lies in $(0, r_0)$. But the condition \eref{gs803b} for $\sigma$ gives $(2r^{-1} - 1) > b_\ka$ and therefore $r < 2/(b_\ka +1) = r_0$, by \eref{W851k}. Insert the bound on $\|\psi^{-1}\|_{L^s(m)}$ from \eref{W710a} into \eref{gs805} to find \eref{gs806}. By \eref{W854} there is a unique point $t \in (b_\ka c_\nu, \infty)$ such that $t= \ell(t)$. If $a= t$ then $ \|e^{\l_0 - V}\|_\nu^{a} \,\| e^{V-\l_0}\|_\kappa^{\ell(a)} = \(\|e^{\l_0 - V}\|_\nu \,\| e^{V-\l_0}\|_\kappa\)^a = M^a$. Choosing $a = \sigma = t$ we see then that \eref{gs807} follows from \eref{gs806}. If $a$ and $\sigma$ are chosen arbitrarily in the allowed range $(c_\nu b_\ka, \infty)$ then \eref{gs807b} follows from \eref{s3}, \eref{s4} and \eref{gs806}. 
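The $\l_0$ independence of \eref{gs807} and \eref{gs807b} rests on the two properties \eref{M2}--\eref{M3} of the functional $M$. As a quick numeric confirmation (ours, on a finite probability space; all parameter choices illustrative), shifting $V$ by a constant leaves $M$ unchanged, and $M \ge 1$ for every $\ka, \nu > 0$:

```python
import numpy as np

# Check properties (M2) and (M3) of M = ||e^V||_kappa * ||e^{-V}||_nu
# on random finite probability spaces.
rng = np.random.default_rng(1)

def lp_norm(g, p, mu):
    # ||g||_{L^p(mu)} for g > 0; also valid for 0 < p < 1, as in the paper
    return float(np.dot(g**p, mu))**(1.0/p)

for _ in range(500):
    n = int(rng.integers(2, 30))
    mu = rng.random(n); mu /= mu.sum()
    V = rng.normal(0.0, 2.0, n)
    kappa = float(rng.random())*5 + 0.1
    nu = float(rng.random())*5 + 0.1
    M = lp_norm(np.exp(V), kappa, mu)*lp_norm(np.exp(-V), nu, mu)
    a = float(rng.normal())                 # plays the role of lambda_0
    M_shift = lp_norm(np.exp(V - a), kappa, mu)*lp_norm(np.exp(a - V), nu, mu)
    assert abs(M_shift - M) <= 1e-9*M       # property (M2): shift invariance
    assert M >= 1.0 - 1e-12                 # property (M3): M >= 1
```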
\end{proof} \begin{remark} \label{remparam} {\rm The parameters $a$ and $\sigma$ in \eref{gs806} are at our disposal as long as both are chosen greater than $c_\nu b_\ka$. We saw that if $a = \sigma = t$, with $t$ chosen to make $\ell(t) = t$ as in \eref{W854}, then the bound \eref{gs806} reduces to the $\l_0$ independent bound \eref{gs807}. But if we choose $a = \sigma = t$, with $t$ chosen to minimize $\ell(t) + t$, as in \eref{W854a}, then the $\l_0$ independent defect in \eref{gs807b} would be minimized. The choice of some special values of $a$ and $\sigma$, as well as the behavior of these special values as $\ka\downarrow 0$ and $\nu \downarrow 2c$, may be of significance in some applications. } \end{remark} \subsection{Cases: $V$ is bounded below, or above, or both} \label{seccases} If $V$ is bounded below or above then one can let $\nu\uparrow \infty$ or $\ka \uparrow \infty$, respectively, in the formulas of the preceding sections, giving some clarifying simplifications. Throughout this section we assume that the logarithmic Sobolev inequality \eref{mt1} holds. \subsubsection{$V$ bounded below} \label{secbelow} If $V$ is bounded below then all of the significant quantities in Theorem \ref{thmns2} and Corollary \ref{corhb2} have limits as $\nu \uparrow \infty$. The following corollary shows that the Sobolev coefficient $p\, c_\nu(p)$ in \eref{L325} converges to the classical one, the interval of validity $(q_0,p_0)$ converges to $(1,\infty)$ and the minimum time $\tau(p) - \tau(q)$ to boundedness converges to Nelson's shortest time. \begin{corollary} \label{corbelow} $(V$ bounded below$)$. Suppose that the logarithmic Sobolev inequality \eref{mt1} holds. Assume that $V$ is bounded below and that $V \in L^{p_1}(m)$ for some $p_1 > 2$. Then $\n^*\n +V$ is essentially self-adjoint. Its closure $H$ has a unique positive $($a.e.$)$ ground state $\psi$.
There holds \begin{align} &Ent_m(|u|^p) \le \frac{c p^2}{2(p-1)} \int_X \<(H -(\inf V)) u, u_p\> dm\ \ \text{if}\ \ p\in (1, \infty). \label{L325below} \end{align} Furthermore \beq \| e^{-tH}\|_{q\to p} \le e^{- t\inf\, V} \ \text{if} \ \ \ e^{-t/c} \le \sqrt{\frac{q-1}{p-1}}, \ \ \ 1 < q < p < \infty. \label{L289a} \eeq In particular the time to boundedness, \eref{L505}, reduces to Nelson's classical time to contraction, which is determined by $\tau_0(p) = (c/2) \log (p-1), 1 < p <\infty$. \end{corollary} \begin{proof} If $V$ is bounded below we have $\lim_{\nu\uparrow \infty} \|e^{-V}\|_\nu \to e^{-\inf V}$. Moreover $p_0\uparrow \infty$ and $q_0\downarrow 1$ when $\nu \uparrow \infty$, as we see from \eref{L341d} and \eref{L313qi}. From \eref{L505} we see that $\tau(p)$ converges to $\tau_0(p) :=(c/2)\log (p-1)$. If $t >(c/2)\log (p-1) -(c/2)\log (q-1)$ then, by \eref{L289}, $ \| e^{-tH}\|_{q\to p} \le \|e^{-V}\|_\nu^t $ holds for large enough $\nu$, leaving aside for a moment the technical issue of self-adjointness of $H$. Therefore $ \| e^{-tH}\|_{q\to p} \le e^{- t\inf\, V}$ if $t> (c/2)\log\frac{p-1}{q-1}$ and also if $t\ge (c/2)\log\frac{p-1}{q-1}$ by strong continuity of $e^{-tH}$. This proves \eref{L289a}. It may be of use to note monotonicity: $\tau(p)- \tau(q) \downarrow \tau_0(p) - \tau_0(q)$ as $\nu \uparrow \infty$ by Corollary \ref{corhb2}. Concerning the Sobolev coefficient in \eref{L325below} observe first that $\nu/p_0 \to c/2$ as $\nu \to \infty$, as we see from \eref{L313q}. In view of \eref{L329} we therefore have $ c_\nu(p) = \frac{\nu}{p_0 - p} \frac{p}{p-q_0} \to (c/2) \frac{p}{p-1}$. If $1 < p < \infty$ then $ q_0 < p < p_0$ for sufficiently large $\nu$. Keeping $p$ and $u$ fixed, we may take the limit in the inequality \eref{L325} as $\nu \to \infty$. Since $p c_\nu(p) \to \frac{cp^2}{2(p-1)}$ while $\log\|e^{-V}\|_\nu \to - \inf V$, the inequality \eref{L325} goes over to \eref{L325below} as $\nu \to \infty$. 
Since $V \in L^{p_1}$ for some $p_1 > 2$ the hypothesis \eref{EU7} of Theorem \ref{thmesa2} holds for some large enough finite $p_0$. $\n^*\n + V$ is essentially self-adjoint on its domain and all the conclusions of Theorem \ref{thmesa2} hold. \end{proof} \begin{corollary} \label{corpolbelow} $(Polynomial\ growth\ of\ \|\psi\|_p)$. If the hypotheses of Corollary \ref{corbelow} hold then \begin{align} \|\psi\|_p \le (p-1)^{(c/2) \sup (\l_0 - V)}, \ \ \ \ p \ge 2. \label{L804} \end{align} In particular \beq \psi \in \cap_{p < \infty} L^p(X,m). \label{L804b} \eeq Further, \begin{align} \|\psi\|_r \ge e^{-\sigma \sup(\l_0 - V)},\ \ 0 < r < 2, \ \ \ \ \sigma = c(2r^{-1} - 1). \label{L808} \end{align} \end{corollary} \begin{proof} Using \eref{L289a} with $q = 2$ and $p>2 $ we find that \begin{align} \|\psi\|_p &= \|e^{t\l_0}e^{-tH} \psi\|_p \notag\\ & \le e^{t\l_0} \| \psi\|_2 e^{-t\inf V}\ \ \ \text{if}\ \ e^{-t/c} \le \sqrt{\frac{1}{p-1}} \label{L805}\\ &= \|\psi\|_2 e^{t\sup(\l_0 - V)} . \notag \end{align} Take \beq t = \tau_0(p) := (c/2) \log (p-1), \label{L806} \eeq which is the optimal value allowed in \eref{L805}. Since $\|\psi\|_2 = 1$ we find \eref{L804}. For the proof of \eref{L808} observe that as $\nu\uparrow \infty$, $c_\nu \to c$ by \eref{L329a} while the right side of \eref{L512} converges to the right side of \eref{L808}. \end{proof} \subsubsection{$V$ bounded above} \label{secabove} \begin{corollary} \label{corpolabove} $($Polynomial growth of $\|\psi^{-1}\|_s$ $).$ Assume that \eref{mt1} holds, that $\|e^{-V}\|_\nu < \infty$ for some $\nu > 2c$ and that $V$ is bounded above. Then $\|\psi^{-1}\|_s$ has at most polynomial growth as $ s\uparrow \infty$. In particular \begin{align} \| \psi^{-1}\|_s \le (1+s)^{(c/2)\sup (V - \l_0)} \(\|e^{\l_0-V}\|_\nu^{3c_\nu} 2^{(c/2)\sup (V - \l_0)}\), \ 0 < s <\infty. \label{L819} \end{align} \end{corollary} \begin{proof} Since $V$ is bounded above we have $\| e^V\|_\ka < \infty$ for $0 < \ka \le \infty$. 
In Theorem \ref{thmmp1} all of the significant quantities, $b_\ka$, $\ell_\ka$, $r_0$ and $s_0$ have limits as $\ka \uparrow \infty$, which we can summarize as follows. \begin{align} b_\infty &= 1, \ \ r_0= 1,\ \ s_0 =\infty, \ \ \ell_\infty(t) = (c/2) \log \frac{t+ c_\nu}{t-c_\nu},\ t > c_\nu . \label{L820} \end{align} The first of these identities follows from the definition \eref{s98f}, the second and third from \eref{W851g}, while the fourth follows from the definition \eref{W852}. Since $s_0 = \infty$ the moment product theorem, Theorem \ref{thmmp1}, shows that $\|\psi^{-1}\|_s < \infty$ for all $s < \infty$. To make use of the moment product inequality \eref{W753g} observe that $a$ is given in terms of $s$ by \eref{W852b}. Thus \begin{align} \ell_\infty(a) &=(c/2) \log \frac{(2s^{-1} +1)c_\nu+ c_\nu}{(2s^{-1} +1)c_\nu-c_\nu},\ \ \ 0 < s < \infty \notag\\ &=(c/2) \log (1+s), \ \ \ 0 < s <\infty. \end{align} Similarly we have \begin{align} \ell_\infty(\sigma) &= (c/2) \log\frac{(2r^{-1} - 1) + 1}{(2r^{-1} - 1) -1} \notag\\ &= -(c/2) \log(1-r). \end{align} Inserting these values into \eref{W753g} we find \begin{align} \|\psi\|_r \| \psi^{-1}\|_s &\le \| e^{V-\l_0}\|_\infty^{(c/2) (\log (1+s) - \log(1-r))} \notag \\ &= \frac{(1+s)^{(c/2)\log\| e^{V-\l_0}\|_\infty}}{(1-r)^{(c/2)\log\| e^{V-\l_0}\|_\infty}}, \ \ 0 < s < \infty, \ 0 < r <1 \notag\\ & =\frac{(1+s)^{(c/2)\sup (V - \l_0)}}{(1-r)^{(c/2)\sup (V - \l_0)}}, \ \ 0 < s < \infty, \ 0 < r <1 \label{L822} \end{align} Choose $r = 1/2$ for simplicity. \eref{L512} and \eref{L513} show that $\|\psi\|_{1/2} \ge \|e^{\l_0-V}\|_\nu^{-3c_\nu}$. Therefore \begin{align} \| \psi^{-1}\|_s &\le (1+s)^{(c/2)\sup (V - \l_0)} \(\frac{ \|e^{\l_0-V}\|_\nu^{3c_\nu}}{(1/2)^{(c/2)\sup (V - \l_0)}}\) . \end{align} This proves \eref{L819}. \end{proof} \subsubsection{$V$ bounded.} When $V$ is bounded several expressions take a simpler form. Observe first that \begin{align} \inf V \le \l_0 &\le \sup V. 
\label{L830}\\ \sup (V - \l_0) &\le \sup V - \inf V = Osc (V). \label{L831}\\ \sup (\l_0 - V) &\le \sup V - \inf V = Osc (V). \label{L832} \end{align} \begin{corollary} \label{corallp} $($Bounds on $\|\psi^{\pm1}\|_p$ when $V$ is bounded$)$. Assume that $V$ is bounded. Then \begin{align} \|\psi\|_p &\le (p-1)^{(c/2) Osc(V)}, \ \ \ \ p \ge 2. \label{L833a} \\ \|\psi\|_r &\ge e^{-\sigma Osc (V)}, \ \ \ 0<r< 2, \ \ \sigma = c(2r^{-1} -1). \label{L834a}\\ \|\psi\|_r \|\psi^{-1}\|_s &\le \(\frac{1+s}{1-r}\)^{(c/2)Osc(V)} , \ \ \ 0 < s <\infty, \ \ 0 < r <1 . \label{L835a}\\ \|\psi^{-1}\|_s &\le \(\frac{1+s}{1-r}\)^{(c/2)Osc(V)} e^{\sigma Osc (V)} , \ \ \ 0 < s < \infty, \ \ 0 < r <1 . \label{L836a} \end{align} \end{corollary} \begin{proof} \eref{L833a} follows from \eref{L804} and \eref{L832}. \eref{L834a} follows from \eref{L808} and \eref{L832}. \eref{L835a} follows from \eref{L822} and \eref{L831}. \eref{L836a} follows from \eref{L835a} and \eref{L834a}. \end{proof} \begin{remark} {\rm We saw in Example \ref{exunb} that $\psi$ and $\psi^{-1}$ can be unbounded even if $V$ is bounded. But Corollary \ref{corallp} shows that the $L^p$ norms of $\psi$ and $\psi^{-1}$ can only grow polynomially in $p$ when $V$ is bounded. } \end{remark} \begin{corollary} \label{coreF2} If $V$ is bounded then \begin{align} \int_X e^{b F^2} dm < \infty\ \ \ \forall\ \ \ b< \infty, \label{L841} \end{align} where $F = - \log \psi$. \end{corollary} \begin{proof} The polynomial growth conditions in Corollary \ref{corallp} imply that there are constants $C_1, C_2$, independent of $t$, such that \begin{align} \|\psi^{\pm 1} \|_t \le C_1(1+t)^{C_2}, \ \ \ \ \ \ t > 0. \label{L840} \end{align} Indeed, put $r = 1/2$ in \eref{L836a} to derive \eref{L840} for $\| \psi^{-1}\|_t$, while \eref{L833a} gives \eref{L840} for $\|\psi\|_t$ in case $t \ge 2$. Use $\|\psi\|_t \le \|\psi\|_2$ for $0 < t <2$.
We can write \eref{L840} in terms of $F$ in the equivalent form \beq \int_X e^{tF(x)} dm(x) \le C_1^{|t|} e^{C_2|t| \log (1+|t|)}, \ \ \ t \in \R. \eeq Suppose that $b >0$. In the identity $ e^{by^2/2} = (2\pi b)^{-1/2} \int_{-\infty}^\infty e^{ty} e^{-t^2/(2b)} dt $ insert $y = F(x)$ and take expectation to find \begin{align} \int_X e^{bF(x)^2/2} dm(x) &= (2\pi b)^{-1/2} \int_{-\infty}^\infty \int_Xe^{tF(x)}dm(x)e^{-t^2/(2b)} dt \notag\\ &\le (2\pi b)^{-1/2} \int_{-\infty}^\infty C_1^{|t|} e^{C_2|t| \log(1+ |t|)} e^{-t^2/(2b)} dt \notag\\ & < \infty. \notag \end{align} \end{proof} \begin{remark}\label{remtv} {\rm (Two variants of Corollary \ref{coreF2}). The two inequalities in \eref{L840} correspond to the two conditions, $V$ bounded below and $V$ bounded above, from which they were derived. If just one of these two conditions holds then \eref{L841} can be replaced by \begin{align} \int_{F \ge 0} e^{bF^2/2} dm &< \infty\ \ \ \text{if $V$ is bounded above.} \label{L801}\\ \int_{F \le 0} e^{bF^2/2} dm &< \infty\ \ \ \text{if $V$ is bounded below.} \label{L801a} \end{align} One need only start with the inequality $ e^{by^2/2} \le 2 (2\pi b)^{-1/2} \int_0^\infty e^{t|y|} e^{-t^2/(2b)} dt $ and proceed as in the proof. For example, on the set $\{ F \le 0\}$ we have $e^{t|F(x)|} = e^{-tF(x)} = \psi(x)^t$ for $t >0$ and therefore, using \eref{L840} with $\|\psi^{+1}\|_t$, we have \begin{align} \int_{ F \le 0} e^{bF(x)^2/2} dm(x) &\le 2(2\pi b)^{-1/2} \int_0^\infty C_1^{|t|} e^{C_2|t| \log(1+ |t|)} e^{-t^2/(2b)} dt < \infty. \notag \end{align} A similar argument, using the bound \eref{L840} for $\|\psi^{-1}\|_t$, gives the same bound for $\int_{F \ge 0} e^{bF^2/2} dm$. } \end{remark} \begin{corollary} \label{corDLSIbV} $($DLSI for bounded $V$ $)$. Assume that \eref{mt1} holds and that $V$ is bounded. Let $\psi$ denote the normalized ground state for $\n^*\n +V$.
Then \begin{align} Ent_{m_\psi}(f^2) \le 2a \int_X |\n f|^2 dm_\psi + \{2D_{a,\sigma} Osc(V)\} \| f\|_{L^2(m_\psi)}^2 \label{L855} \end{align} for any $a > c$ and $\sigma > c$, where \begin{align} D_{a, \sigma} &= a + \sigma + \ell_0(a) + \ell_0(\sigma)\ \ \ \text{and} \label{L856}\\ \ell_0(t) &= (c/2)\log \frac{t+c}{t-c},\ \ \ t > c . \label{L857} \end{align} In particular, choosing $a = \sigma =2c$ we have \begin{align} Ent_{m_\psi}(f^2) \le 4c \int_X |\n f|^2 dm_\psi + \{2c (4 + \log 3) Osc(V)\} \|f\|_{L^2(m_\psi)}^2. \label{L858} \end{align} \end{corollary} \begin{proof} Since $V$ is bounded we can let $\nu$ and $\ka$ increase to infinity in \eref{gs806}. First observe that $c_\nu\downarrow c, a_\nu \uparrow 1$ as $\nu\uparrow \infty$ and $b_\ka \downarrow 1$ as $\ka \uparrow \infty$ by their definitions \eref{L329a}, \eref{L341d} and \eref{s98f}. Thus if $a>c$ and $\sigma >c$ then \eref{gs803b} holds for large enough $\nu$ and $\ka$ and we may apply \eref{gs806}. We have $\|e^{\l_0 - V}\|_\infty = e^{\sup(\l_0 - V)}$ and $ \|e^{V- \l_0}\|_\infty = e^{\sup(V-\l_0)}$. Further, $\ell(t) $, defined in \eref{W852} goes over to $\ell_0(t)$, defined in \eref{L857}, as $\nu\uparrow \infty$ and $\ka \uparrow \infty$. Therefore \eref{gs806} goes over to \begin{align} &Ent_{m_\psi} (f^2) \le 2a \int_X |\n f|^2 dm_\psi \notag \\ & +2 \| f\|_{L^2(m_\psi)}^2 \( (a+\sigma) \sup(\l_0 - V) +(\ell_0(a) + \ell_0(\sigma))\sup (V - \l_0)\). \label{gs806a} \end{align} \eref{L855} now follows from \eref{gs806a}, \eref{L831} and \eref{L832}. Now $\ell_0(2c) = (c/2)\log 3$. Hence, for $a = \sigma = 2c$ we have $D_{a,\sigma} = 4c + c\log 3$. This proves \eref{L858}. \end{proof} \begin{remark}\label{remrec} {\rm (Recovery of \eref{mt1}.) In case $V=0$ (or constant) the ground state for $ \n^*\n + V$ is the constant $1$. Therefore $m_\psi = m$. Moreover $Osc(V)= 0$. Hence \eref{L855} reduces to $Ent_m (f^2) \le 2a \int_X |\n f|^2 dm $, which is valid for all $a >c$. 
Taking the limit $a\downarrow c$ yields the original LSI, \eref{mt1}, again. } \end{remark} \begin{remark}\label{remhb} {\rm (Invariance of DLSI). In their fundamental paper \cite{DS84}, Davies and Simon were interested mainly in intrinsic ultracontractivity. They proved that intrinsic ultracontractivity is invariant under perturbation of the Schr\"odinger potential by a bounded potential. They raised the question as to whether intrinsic hyperboundedness (referred to as intrinsic hypercontractivity at that time) was also invariant under perturbation by a bounded potential. At the infinitesimal level this amounts to asking whether, for a bounded potential $V_1$, the ground state measure for $-\Delta + V_0 + V_1$ satisfies a DLSI when the ground state measure $\psi_0^2 dx$ for $-\Delta + V_0$ does. We have answered this affirmatively in this paper, but only when the defect for $\psi_0^2 dx$ is zero. In this case the perturbed ground state measure also has defect zero. Whether a DLSI is preserved under perturbation, under some conditions on the perturbing potential, remains an open question. } \end{remark} \section{Spectral gap} \label{secsg} \subsection{Small perturbations: Wang's method} Feng-Yu Wang, \cite[Corollary 1.2]{Wang2004}, showed that if the defect in a defective logarithmic Sobolev inequality is sufficiently small then there is a spectral gap that can be quantitatively estimated. In our context his theorem shows that if, for some probability measure $\mu$ on a Riemannian manifold $X$, there holds \begin{align} Ent_\mu(f^2) \le 2C_1 \int_X|\n f|^2 d\mu + C_2 \|f\|_2^2 \label{T12} \end{align} with $C_2 < \log 2$ then $\n^*\n$ has a spectral gap that can be estimated by \begin{align} Gap\ \n^*\n \ge \frac{ \log(3 - 4b)}{C_1\log3},\ \ \ \text{where}\ b =\sqrt{(1 - e^{-C_2})/2}. \label{T14} \end{align} Wang first proved an equivalent, exponentiated version of this corollary.
It asserts that if \begin{align} \|e^{-t\n^*\n}\|_{L^2 \to L^4}^4 \le A \label{T15} \end{align} with $1 \le A <2$ for some $t >0$ then $\n^*\n$ has a spectral gap. This extends a theorem of B. Simon \cite[Theorem 2]{Simon1976}, who showed that $\n^*\n$ has a spectral gap if $A=1$ for some $t >0$. Miclo, \cite[Proposition 11]{Mic2015}, gave an example of a very similar form showing that if one knows only that \eref{T15} holds for some $A\ge2$ then, although there is still a spectral gap, there can be no quantitative bound on the spectral gap depending only on $A$ and $t$ (or equivalently, on $C_1,C_2$ and $t$). His example strongly suggests that the same is true in our context. \bigskip In our setting, the number $M\equiv \|e^V\|_\ka \|e^{-V}\|_\nu $, introduced in Section \ref{secM}, controls the defect, as we see in \eref{gs807b}. Combining this estimate of the defect with Wang's theorem gives the following consequence. \begin{theorem} \label{thmws} Suppose that \beq \(a + \sigma +\ell(a) + \ell(\sigma)\)\log M< (1/2) \log 2 \label{T5} \eeq for some $a$ and $\sigma$ in the allowed range $(c_\nu b_\ka, \infty)$. Then there is a constant $C_3$ such that \begin{align} Ent_{m_\psi}(f^2) \le C_3\int_X|\n f|^2 dm_\psi . \end{align} \end{theorem} \begin{proof} The inequality \eref{gs807b} shows that the measure $m_\psi$ satisfies a defective logarithmic Sobolev inequality with defect $2 (a + \sigma +\ell(a) + \ell(\sigma))\log M$ for any choice of $a$ and $\sigma$ in the allowed range $(c_\nu b_\ka, \infty)$. The condition \eref{T5} therefore shows that Wang's condition on the defect is satisfied. From Wang's gap, \eref{T14}, and \eref{gs807b} we can compute $C_3$ via Rothaus' tightening theorem. See \cite[Proposition 5.1.3]{BGL} for an efficient exposition of this method, or Proposition \ref{propbgl} below. \end{proof} \begin{remark} {\rm For any choice of $a$ and $\sigma$ there are clearly potentials $V$ for which \eref{T5} is satisfied because $\log M =0$ when $V=0$.
} \end{remark} \begin{example}\label{exwang}{\rm In case $V$ is bounded we can use the estimate \eref{L858} for the defect. We find that Wang's criterion for a spectral gap holds if \begin{align} Osc(V)< \frac{\log 2}{ 2c (4 + \log 3) }. \end{align} A lower bound for the spectral gap can be computed from \eref{T14} with $C_1 = 2c$ and $C_2 = \{2c(4 + \log 3)\} Osc(V)$. } \end{example} \subsection{General perturbations: Aida's method} \label{seclargeV} We will prove that the Dirichlet form operator for the probability measure $\psi^2 dm$ has a spectral gap at zero. This will allow us, in Section \ref{sectightening}, to remove the defect in \eref{gs807b}. The Deuschel-Holley-Stroock theorem, \cite{HS1987,DS1990}, asserts that if the logarithmic Sobolev inequality \eref{mt1} holds for a probability measure $m$ and if $w$ is a strictly positive weight, which is bounded and bounded away from zero, then the logarithmic Sobolev inequality \begin{align} Ent_{wm}(u^2) \le 2c_1\int_X |\n u|^2 wdm\ \ \text{holds with}\ \ c_1 = c\frac{\sup w}{\inf w} . \label{sg202} \end{align} See e.g. \cite[Proposition 5.1.6]{BGL} for an efficient proof. Similarly, if $m$ satisfies a Poincar\'e inequality, $Var_m(f) \le \gamma \int |\n f|^2 dm$, then \begin{align} Var_{wm}(f) \le \gamma_1\int |\n f|^2 wdm, \ \ \ \gamma_1 = \gamma \frac{\sup w}{\inf w}.\label{sg203} \end{align} The latter follows easily from the inequalities $Var_{wm}(f) \le \int (f - \int f dm)^2 wdm \le (\sup w) Var_m(f)$ and $\int |\n f|^2 dm \le (1/\inf w)\int |\n f|^2 wdm$. The density $\psi^2$ for the ground state measure is typically neither bounded nor bounded away from zero. The DHS theorem was consequently inapplicable in Section \ref{secDLSI} for proving a DLSI. The inequality \eref{sg203} is similarly inapplicable for proving that $m_\psi$ satisfies a Poincar\'e inequality, even though $m$ does.
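The two elementary inequalities behind \eref{sg203} are pure measure-theoretic algebra and can be checked mechanically. The following sketch verifies them numerically on a toy finite probability space standing in for $(X,m)$, with an arbitrary nonnegative surrogate for $|\n f|^2$; it illustrates only the algebra, not the Riemannian setting of this paper.

```python
# Toy check (assumption: finite state space standing in for (X, m)) of the two
# inequalities behind (sg203):
#   Var_{wm}(f) <= int (f - <f>_m)^2 w dm <= (sup w) Var_m(f)
#   int |grad f|^2 dm <= (1/inf w) int |grad f|^2 w dm
import numpy as np

rng = np.random.default_rng(0)
n = 200
m = rng.random(n); m /= m.sum()                  # base probability measure m
w = 0.1 + rng.random(n); w /= (w * m).sum()      # density w, normalized so wm is a probability measure
f = rng.standard_normal(n)                       # arbitrary test function
g2 = rng.random(n)                               # surrogate for |grad f|^2 (any nonnegative field)

mean_m  = (f * m).sum()
mean_wm = (f * w * m).sum()
var_m   = ((f - mean_m) ** 2 * m).sum()
var_wm  = ((f - mean_wm) ** 2 * w * m).sum()
mid     = ((f - mean_m) ** 2 * w * m).sum()      # int (f - <f>_m)^2 w dm

assert var_wm <= mid + 1e-12                     # the variance is minimized at the wm-mean
assert mid <= w.max() * var_m + 1e-12            # pull out sup w
assert (g2 * m).sum() <= (g2 * w * m).sum() / w.min() + 1e-12
```

Combining the three assertions with a Poincar\'e inequality $Var_m(f) \le \gamma \int |\n f|^2 dm$ yields \eref{sg203} with $\gamma_1 = \gamma \sup w/\inf w$, exactly as in the text.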
Aida developed a method in \cite{Aida2001} for proving that $\psi^2 dm$ satisfies a Poincar\'e inequality even when $\psi$ is neither bounded nor bounded away from zero. He decomposed the space $X$ into the three regions $\{ \psi < \ep\}, \{ \ep \le \psi \le K\}$ and $ \{\psi >K\}$ and used the idea behind the DHS theorem on the middle region. He established bounds associated to the remaining two regions in terms of the quantities defined below in Notation \ref{notA3}. In this section we will derive Aida's lower bound on the spectral gap in terms of these quantities. We will also make use of the already established defective logarithmic Sobolev inequality \eref{gs807b}, which was unavailable to Aida at the time of his paper \cite{Aida2001}. In the next section we will show how our assumptions on the potential $V$ allow us to make quantitative estimates of these quantities, thereby producing a quantitative estimate of the spectral gap. $\psi$ need not be a solution to the Schr\"odinger equation in this section. \begin{notation} \label{notA3} {\rm Suppose that $\psi$ is an a.e. strictly positive function in $L^2(m)$ with $\|\psi\|_{L^2(m)} = 1$. Define \begin{align} dm_\psi = \psi^2 dm\ \ \ \ \text{and}\ \ \ F = - \log \psi. \label{505} \end{align} Let $0 < \ep < 1 < K < \infty$. Define \begin{align} A_\ep &= m(\psi \le \ep). \label{sg10}\\ B_\ep & = \int_{\psi \le\ep}|\n F|^2 dm.\label{sg11}\\ C_K &= \int_{\psi > K} \psi^2 dm. \label{sg12} \end{align} } \end{notation} \begin{theorem} \label{thmsg2} $($Aida's Theorem$)$. Referring to Notation \ref{notA3}, assume that \beq \int_X |\n F|^2 dm < \infty. \label{506} \eeq Suppose that for some numbers $ \gamma >0, B >0, D \ge 0$ and for all real valued functions $u$ of finite energy there holds \begin{align} \int_X\(u^2 - \<u\>_m^2\)dm &\le \gamma \int_X|\n u|^2 dm\ \ \ \ \ \label{503f} \ \ \ \text{and} \\ Ent_{m_\psi}(u^2) &\le B \int_X |\n u|^2 dm_\psi + \|u\|_2^2 D.
\label{504} \end{align} Then there is a number $\gamma_1$ such that \begin{align} \int_X\(u^2 - \<u\>_{m_\psi}^2\)dm_\psi &\le \gamma_1 \int_X|\n u|^2 dm_\psi. \label{A363} \end{align} If, for some $\ep > 0$ and $K > 1$, there holds \begin{align} \(2K^2\( 2\gamma B_\ep +A_\ep \)+4C_K\) e^{ 12(D + e^{-1})} &\le 1/3 \label{A360} \end{align} then one may choose \begin{align} \gamma_1 \le B + 8\gamma\(\frac{K}{\epsilon}\)^2 . \label{A365b} \end{align} Such an $\ep$ and $K$ always exist. \end{theorem} The proof depends on the following lemmas. The invariance of a weak Poincar\'e inequality under perturbation of a measure by insertion of a density was proven by R\"ockner and Wang \cite[Theorem 6.1]{RW2001} and by Aida \cite[Lemma 2.2]{Aida2001}. The next lemma is a form of Aida's weak Poincar\'e inequality in the case that the unperturbed measure satisfies a Poincar\'e inequality, which is the only case that we need. \begin{lemma} \label{lemA1} $($Weak Poincar\'e inequality$)$. Referring to Notation \ref{notA3} again, assume that \eref{506} and \eref{503f} hold. Suppose that $u$ is bounded and has finite energy. Let $0 < \epsilon < 1 < K < \infty$. Then \begin{align} \| u - \<u\>_{m_\psi} \|_{L^2(m_\psi)}^2 &\le \gamma\(\frac{2K}{\epsilon}\)^2 \int_X |\n u|^2 dm_\psi +\zeta \|u\|_\infty^2 \label{520e} \end{align} where \begin{align} \zeta =2K^2\( 2\gamma B_\ep +A_\ep \)+4C_K. \label{422e1} \end{align} For any $\delta >0$ there exist $\epsilon$ and $K$ such that $\zeta < \delta$. \end{lemma} \begin{sublemma}\label{lemA2b} Referring to Notation \ref{notA3} again, for any bounded real \linebreak valued function $u$ on $X$ we have \begin{align} \| u - \<u\>_{m_\psi} \|_{L^2(m_\psi)}^2 \le K^2 \| u - \<u\>_m\|_{L^2(m)}^2 + 4C_K \|u\|_\infty^2.
\label{A100} \end{align} \end{sublemma} \begin{proof} For any real number $a$ we have \begin{align*} \| u - \<u\>_{m_\psi} \|_{L^2(m_\psi)}^2 &\le \| u - a\|_{L^2(m_\psi)}^2 \\ &\le K^2\int_{\psi \le K} (u-a)^2 dm + \(\int_{\psi >K} \psi^2 dm \)\|u-a\|_\infty^2 \\ &\le K^2\int_X (u-a)^2 dm + C_K \|u-a\|_\infty^2. \end{align*} Choose $a = \<u\>_m$. Then $ \|u-a\|_\infty \le 2\|u\|_\infty$ and \eref{A100} follows. \end{proof} \begin{sublemma} \label{lemA3} Assume that the hypotheses of Lemma \ref{lemA1} hold. Then \begin{align} \|u - \<u\>_m\|_{L^2(m)}^2 \le \frac{4\gamma}{\epsilon^2}\int_X |\n u|^2 \psi^2 dm + \Big\{4\gamma B_\ep +2 A_\ep\Big\} \|u\|_\infty^2 . \label{191} \end{align} \end{sublemma} \begin{proof} We need to use a regularized version of the function $[0,\infty) \ni t \to \min(t, 1)$. Let $ 0 < \delta < 1/2$ and let $f$ be a smooth non-decreasing real valued function on $[0,\infty)$ such that $f(t)= t$ for $0\le t \le 1-\delta$, $f(t) = 1$ for $ t \ge 1+\delta$, and $f'(t) \le 1$ everywhere. In the end we will let $\delta \downarrow 0$. Let $\phi(t) = f(t/\ep)$. Then $\phi'(t) \le \ep^{-1}$ everywhere, $\phi(t) \le t/\ep$ everywhere, and, when $t\ge\ep(1+\delta)$, we have $\phi(t)=1$ and $ \phi'(t) =0$. Let \beq \chi(x) = \phi(\psi(x)).
\label{201e} \eeq Since $\|u - \<u\>_m\|_2^2 \le \|u - a\|_2^2$ for any real number $a$ we have \begin{align} \|u - \<u\>_m\|_2^2 &\le \|u - \<u\chi\>_m\|_2^2 \notag\\ &=\| (u- u\chi) + (u\chi - \<u\chi\>_m) \|_2^2 \notag \\ &\le 2 \|u\chi - \<u\chi\>_m\|_2^2 + 2\|u(1-\chi)\|_2^2 \notag\\ &\le 2 \|u\chi - \<u\chi\>_m\|_2^2 + 2m(\psi <\epsilon(1+\delta)) \|u\|_\infty^2, \label{205} \end{align} wherein we have used in the last line the fact that $1- \chi =0$ wherever $\psi \ge\ep(1+\delta).$ From the Poincar\'e inequality \eref{503f} we find \begin{align*} \|u\chi - \<u\chi\>_m\|_2^2 &\le \gamma \int_X |\n (u \chi)|^2 dm \\ &= \gamma \int_X |u \n \chi+ \chi \n u|^2 dm \\ &\le 2\gamma \int_X |\n u|^2 \chi^2 dm +2 \gamma\int_X u^2 |\n \chi|^2 dm \\ &\le 2\gamma \int_X |\n u|^2 (\psi/\ep)^2 dm + 2\gamma \|u\|_\infty^2 \int_X |\n \chi|^2 dm. \end{align*} Now $\n \chi = \phi'(\psi)\n \psi = -\phi'(\psi) \psi \n F$. Therefore $|\n \chi|^2 = \phi'(\psi)^2 \psi^2 |\n F|^2 \le \ep^{-2} \psi^2 |\n F|^2$ wherever $\psi < \ep(1+\delta)$ and is zero elsewhere. Therefore \begin{align} \|u\chi - \<u\chi\>_m\|_2^2 \le \frac{2\gamma}{\ep^2} \int_X |\n u|^2 \psi^2 dm + 2\gamma \|u\|_\infty^2 (1+\delta)^2 \int_{\psi < \ep(1+\delta)} | \n F|^2 dm \notag. \end{align} Insert this bound into \eref{205} to find \begin{align} \|u - \<u\>_m\|_2^2 &\le \frac{4\gamma}{\ep^2} \int_X |\n u|^2 \psi^2 dm + 4\gamma \|u\|_\infty^2 (1+\delta)^2 \int_{\psi < \ep(1+\delta)} | \n F|^2 dm \notag\\ &+ 2m(\psi <\epsilon(1+\delta)) \|u\|_\infty^2 .\notag \end{align} We can now let $\delta\downarrow 0$ and use the dominated convergence theorem on the second term to find \eref{191}. \end{proof} \bigskip \noindent \begin{proof}[Proof of Lemma \ref{lemA1}] Insert \eref{191} into \eref{A100} to find \eref{520e}. To prove the last assertion of the lemma, choose $K$ so large that $4C_K < \delta/2$. Then choose $\ep$ so small that the first term in $\zeta$ is also $< \delta/2$.
These choices can be made because $\psi \in L^2(m)$ while $\psi >0$ a.e. and \eref{506} holds. \end{proof} \begin{lemma} \label{lemA4} $($Truncation of $u)$. Let $\psi$ be a non-negative function satisfying $\int_X\psi^2 dm = 1$. Let $u \in L^2(m_\psi)$ and assume that $\int_X u\, dm_\psi =0$. For all $R > 0$ define $u_R = (u\wedge R)\vee (-R)$. Then \begin{align} \|u\|_{L^2(m_\psi)}^2 \le \|u_R -\<u_R\>_{m_\psi} \|_{L^2(m_\psi)}^2 + 2 \int_{|u| > R} u^2 dm_\psi. \ \ \label{605b} \end{align} \end{lemma} \begin{proof} Writing $m_\psi = \mu$ for ease in reading, we have \begin{align} \Big| \|u\|_{L^2(\mu)}^2 -&\|u_R - \< u_R\>_\mu \|_{L^2(\mu)}^2 \Big| \notag = \Big| \int_X (u^2 - u_R^2) d\mu + \(\int_X u_R d\mu\)^2 \Big|. \end{align} But $u - u_R =0$ wherever $|u| \le R$ and $|u - u_R| \le |u|$ everywhere. Therefore $ \Big|\int_X (u^2 - u_R^2) d\mu\Big| = \int_{|u| >R} (u^2 -R^2) d \mu \le \int_{|u| > R} u^2 d\mu$. Further, since $\int_X u d\mu = 0$, it follows that $\(\int_X u_R d\mu\)^2 = \(\int_X (u_R -u) d\mu\)^2 = \(\int_{|u| >R} (u_R -u) d\mu\)^2 \le \int_{|u| >R} |u|^2 d\mu$. \end{proof} \begin{lemma} \label{lemA5} Assume that the hypotheses of Theorem \ref{thmsg2} hold. Suppose that $\|u\|_{L^2(m_\psi)} =1$ and that $R >1$. Then \begin{align} \int_{|u| \ge R} u^2 dm_\psi \le \frac{1}{\log R^2}\(B \int_X |\n u|^2 dm_\psi + D +e^{-1}\). \label{gs216d} \end{align} \end{lemma} \begin{proof} Since $s\log_+ s \le s\log s + e^{-1}$ for all $s \ge 0$ we have, in case $\|u\|_{L^2(m_\psi)} =1$, \begin{align} \int_X u^2 \log_+ u^2 dm_\psi &\le Ent_{m_\psi}(u^2) + e^{-1} \le B \int_X |\n u|^2 dm_\psi + D +e^{-1} . \notag \end{align} Therefore if $R >1$ then \begin{align} \log R^2 \int_{|u| \ge R} u^2 dm_\psi &\le \int_X u^2\log_+ u^2 dm_\psi \le B \int_X |\n u|^2 dm_\psi + D +e^{-1} . \notag \end{align} \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmsg2}] Suppose that $\|u\|_{L^2(m_\psi)}^2=1$ and $\int_X u\ dm_\psi = 0$.
Using first \eref{605b}, then \eref{520e} and \eref{gs216d}, we find \begin{align} 1&=\|u\|_{L^2(m_\psi)}^2 \notag \\ &\le \|u_R -\<u_R\>_{m_\psi}\|_{L^2(m_\psi)}^2 +2\int_{|u| > R} u^2 dm_\psi \notag \\ &\le \Big\{\gamma\(\frac{2K}{\epsilon}\)^2 \int_X |\n u|^2 dm_\psi +\zeta R^2\Big\} \notag\\ &\qquad \qquad\qquad+ \Big\{ \frac{2}{\log R^2}\(B \int_X |\n u|^2 dm_\psi + D +e^{-1}\)\Big\} \notag \\ &= \Big\{\gamma\(\frac{2K}{\epsilon}\)^2 +\frac{B}{\log R} \Big\} \int_X |\n u|^2 dm_\psi + \Big\{\zeta R^2 +\frac{D+e^{-1}}{\log R} \Big\}. \label{gs280b} \end{align} It suffices to show that the second expression in braces in \eref{gs280b} can be made less than $1/2$ by choosing $R, K$ and $\ep$ suitably. We may choose $R>1$ so that \beq \frac{(D+e^{-1})}{\log R} = 1/6. \label{gs281} \eeq From \eref{422e1} we find $\zeta R^2 =2K^2\( 2\gamma B_\ep +A_\ep \)R^2 +4C_K R^2$. Choose $K \ge 1$ so large that $4C_K R^2 \le 1/6$. Then choose $\ep$ so small that \begin{align} 2K^2\( 2\gamma B_\ep +A_\ep \)R^2 \le 1/6. \notag \end{align} Then $\zeta R^2 \le 1/3$. From the definitions of $C_K, A_\ep$ and $B_\ep$ it is clear that such $\ep$ and $K$ exist. Since $R^2 = e^{12(D+ e^{-1})}$, \eref{A360} is satisfied. Inserting these bounds into \eref{gs280b} we find \begin{align} 1 \le & \Big\{\gamma\(\frac{2K}{\epsilon}\)^2 + \frac{B}{6(D + e^{-1})} \Big\} \int_X |\n u|^2 dm_\psi + 1/2 \notag \end{align} and therefore \begin{align} 1 \le \Big\{2\gamma\(\frac{2K}{\epsilon}\)^2 + \frac{B}{3(D + e^{-1})} \Big\} \int_X |\n u|^2 dm_\psi. \notag \end{align} Thus we have a spectral gap and we may take \begin{align} \gamma_1 = \Big\{2\gamma\(\frac{2K}{\epsilon}\)^2 + \frac{B}{3(D + e^{-1})} \Big\} \label{A340} \end{align} in \eref{A363}. But $3(D + e^{-1}) >1$ because $D \ge 0$. The second term in \eref{A340} is therefore at most $B$. \eref{A365b} now follows.
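The budget arithmetic just used (the choice \eref{gs281} of $R$, the bound $\zeta R^2\le 1/3$, and the inequality $3(D+e^{-1})>1$) can be checked mechanically. The following sketch, with arbitrary illustrative values of $D \ge 0$, is only a numerical sanity check of those three facts, not a computation in the paper's setting.

```python
# Sanity check of the arithmetic above (illustrative values only).
# R is chosen by (D + 1/e) / log R = 1/6, i.e. R^2 = exp(12 (D + 1/e)).
import math

for D in [0.0, 0.3, 2.0]:                      # any defect D >= 0
    R = math.exp(6.0 * (D + math.exp(-1)))
    assert R > 1
    assert abs((D + math.exp(-1)) / math.log(R) - 1 / 6) < 1e-9
    # with zeta * R^2 <= 1/3, the constant bracket in (gs280b) is <= 1/2
    assert 1 / 3 + (D + math.exp(-1)) / math.log(R) <= 1 / 2 + 1e-9
    # 3 (D + 1/e) > 1, so the second term of (A340) is at most B
    assert 3.0 * (D + math.exp(-1)) > 1
```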
\end{proof} \subsection{Bounds on Aida's spectral gap} \label{secbasg} \subsubsection{The distribution of $\psi$} \label{secdist} The three quantities that determine most of the estimates needed in Aida's bound on the spectral gap are given in Notation \ref{notA3}. We will give bounds on these three quantities in terms of the given data $c,\nu, \kappa, \|e^{-V}\|_\nu$ and $\|e^{V}\|_\kappa$. There are parameters at our disposal whose choice of values may be significant in some applications. But we will use values which keep our bounds simple and serve the purposes of this paper. In order to apply Theorem \ref{thmDLSI3} we need to choose $a$ and $\sigma$ satisfying \eref{gs803b}. We will take $a = \sigma = 2c_\nu b_\ka$, which will simplify some formulas. \begin{theorem} \label{thmdist} Let \begin{align} a = \sigma = 2c_\nu b_\ka, \ \ s_1 = (b_\ka - (1/2))^{-1} \ \ \text{and}\ \ \ \alpha_1 &= a + (c \log 3)/b_\ka. \label{Adist0} \end{align} Then \begin{align} A_\ep &\le (\ep M ^{\alpha_1})^{s_1}, \label{Adist1c} \\ B_\ep&\le A_\ep^{1/2} \Big\| V + \log \|e^{-V}\|_{2c}\Big\|_2\ \ \ \ \ \ \ \text{and} \label{Adist2c} \\ B_\ep&\le A_\ep^{1/2} M, \label{Adist2c3}\\ C_K &\le \(M^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} }/K^2\)^{a_\nu/(2-a_\nu)} . \label{Adist3c} \end{align} $($Recall that $a_\nu = \sqrt{1 - (2c/\nu)}$ by \eref{L341d}, $c_\nu = c/a_\nu^2$ by \eref{L329a}, and $b_\ka = \sqrt{1 + (2c/\ka)}$ by \eref{s98f}.$)$ \end{theorem} \begin{proof} [Proof of \eref{Adist1c}] If $\epsilon = e^{-b}$ then $m(\psi \le \epsilon) = m(F \ge b)$.
From Chebyshev's inequality we find $m(F\ge b) e^{sb} \le \int_X e^{sF} dm$ for all $s \ge 0$ and therefore, by \eref{W710b}, we have \begin{align} m(\psi \le \epsilon) &\le e^{-sb} \int e^{sF} dm = \epsilon^s \|\psi^{-1}\|_s^s \notag \\ &\le \epsilon^s M^{s\(\ell(a) + \ell(\sigma) +\sigma\)}\ \ \ \text{for}\ \ s \in (0,s_0) \label{Adist1d} \end{align} whenever $a > c_\nu b_\ka$, $\sigma > c_\nu b_\ka$ and $a = (2s^{-1} +1) c_\nu$, as in \eref{W852b}. The choice for $a$ and $s_1$ given in \eref{Adist0} is consistent with this link between $a$ and $s_1$ because $(2s_1^{-1} +1) c_\nu =(2b_\ka - 1 +1)c_\nu = 2 b_\ka c_\nu =a$. Put $t =a$ in \eref{W852} to find \begin{align} \ell(a) =(c\log 3)/(2b_\ka) \label{Adist5} \end{align} when $a = 2c_\nu b_\ka$. Therefore $\ell(a) + \ell(\sigma) + \sigma = (c\log 3)/b_\ka + 2c_\nu b_\ka= \alpha_1$, which, inserted into \eref{Adist1d}, gives \eref{Adist1c}. \end{proof} \bigskip \noindent \begin{proof}[Proof of \eref{Adist2c} and \eref{Adist2c3}] For the proof of \eref{Adist2c} use the Federbush semi-boundedness theorem \eref{s1} to find $- \l_0 \le \log \|e^{-V}\|_{2c}$ and insert this into \eref{W43a}. We get \begin{align*} \int_{F \ge b} |\n F|^2 dm &\le \int_{F \ge b} (V + \log \|e^{-V}\|_{2c}) dm \\ &\le m(F \ge b)^{1/2} \Big\| V + \log \|e^{-V}\|_{2c}\Big\|_2. \end{align*} Choose $b$ so that $\epsilon = e^{-b}$ again. Since $\{ F \ge b\} = \{ \psi \le \ep\}$, \eref{Adist2c} follows. The inequality \eref{W43a} together with Young's inequality gives \begin{align} \int_{F \ge b} |\n F|^2 dm &\le \int_{F \ge b} (V-\l_0) dm \notag \\ &\le Ent_m(\chi_{F\ge b}) + m(F \ge b) \ka^{-1}\log\int_X e^{\ka(V-\l_0)} dm. \label{Adist2c2} \end{align} Since $\chi_{F\ge b} \log \chi_{F\ge b} = 0$ we have $Ent_m(\chi_{F\ge b}) = -m(F \ge b)\log(m(F \ge b)) \le m(F \ge b)^{1/2} $ because $-t^{1/2} \log t \le 2/ e <1$ for $0 \le t <1$, the maximum being attained at $t = e^{-2}$.
The second term in \eref{Adist2c2} is $ m(F \ge b)\log\|e^{V-\l_0}\|_\ka$, which, in view of \eref{s3}, is at most $m(F \ge b)^{1/2} \log M$. Choose $b$ again so that $e^{-b} = \ep$ to find \begin{align} B_\ep&\le A_\ep^{1/2} (1 + \log M). \label{Adist2c1} \end{align} Since $M \ge 1$ (cf. \eref{M3}) we have $1 + \log M \le M$. This proves \eref{Adist2c3}. \end{proof} The proof of \eref{Adist3c} depends on the following lemma, which implements a standard method of getting $L^p$ bounds from hyperboundedness. \begin{lemma} \label{lemdistK} Let $K >0$ and $2 <p <p_0$. Then \begin{align} C_K &\le \|e^{\l_0 -V}\|_\nu^{p \tau(p)}/K^{p-2}. \label{Adist3b} \end{align} \end{lemma} \begin{proof} For any $K>0$ and $p >2$ we have $\psi^2 \le \psi^p/K^{p-2}$ wherever $\psi \ge K$. Apply the hyperboundedness inequality \eref{L289} with $q=2$ to find \begin{align*} \int_{\psi\ge K} \psi^2 dm & \le \int_{\psi \ge K} K^{2-p} \psi^p dm \\ &\le K^{2-p} \int_X \psi^p dm \\ &= K^{2-p} \|e^{t\l_0}e^{-tH} \psi\|_p^p \\ &\le K^{2-p} \(e^{t\l_0}\|e^{-tH}\|_{2\to p} \|\psi\|_2\)^p\ \ \\ &\le K^{2-p} \(e^{t\l_0} \| e^{-V}\|_\nu^t\)^p \qquad \qquad \ \ \forall t \ge \tau(p) \\ &= K^{2-p} \|e^{\l_0-V}\|_\nu^{tp} \qquad\qquad \qquad \ \ \ \forall t \ge \tau(p). \end{align*} Now choose $t=\tau(p)$ to find \eref{Adist3b}. \end{proof} \bigskip \noindent \begin{proof}[Proof of \eref{Adist3c}] We will choose a special value of $p$ in \eref{Adist3b} that makes the dependence of the exponents on $\nu$ simple and explicit. Define $p$ by \beq p^{-1} = (1/4) + (1/2) p_0^{-1}. \label{A424c} \eeq Since $p_0 >2$ we have $(1/2)p_0^{-1} < 1/4$ and therefore \begin{align} p_0^{-1} = (1/2)p_0^{-1} + (1/2)p_0^{-1} < 1/4 +(1/2)p_0^{-1} = p^{-1} < 1/2. \notag \end{align} Hence $p_0> p >2$. To evaluate $\tau(p)$, observe that, in view of \eref{L313b}, one has \begin{align*} q_0^{-1} - p^{-1} &= 1-p_0^{-1} - \(1/4 +(1/2)p_0^{-1} \) = 3\(1/4 -(1/2)p_0^{-1}\) \\ &= 3\(p^{-1} - p_0^{-1}\).
\end{align*} It follows from \eref{L505a} that $\tau(p) = \frac{c}{2a_\nu} \log 3 .$ From the expression \eref{L313qi} for $p_0^{-1}$ we find $p^{-1} = (1/4)(2- a_\nu)$. Therefore \begin{align} p\tau(p) = \frac{2c\log 3}{(2-a_\nu)a_{\nu}}, \end{align} while $p-2 = \frac{4}{2-a_\nu} - 2 = \frac{2a_\nu}{2-a_\nu}$. Inserting these values into \eref{Adist3b} we find \begin{align} C_K &\le \|e^{\l_0 -V}\|_\nu^{ \frac{2c\log 3}{(2-a_\nu)a_{\nu}}}/K^{\frac{2a_\nu}{2-a_\nu}} .\label{A425} \end{align} It will be useful to write this in terms of $K^2$. From \eref{L341d} we see that \begin{align} \frac{2c}{a_\nu^2} = \frac{1}{(2c)^{-1} - \nu^{-1}} . \notag \end{align} So we may rewrite \begin{align} C_K \le \(\| e^{\l_0 -V}\|_\nu^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} }/K^2\)^{a_\nu/(2-a_\nu)} . \end{align} Finally, the bound \eref{s4} gives \eref{Adist3c}. \end{proof} \begin{remark} \label{remCK} {\rm The choice of $p$ given by \eref{A424c} simplifies $\tau(p)$ in \eref{Adist3b} and gives the simple form \eref{Adist3c} of the bound on $C_K$. But one can also simplify $\tau(p)$ by choosing $p$ so that $(1/2) - p^{-1} = y^{-1}((1/2) - p_0^{-1})$, for some $y >1$. (This reduces to \eref{A424c} when $y=2$.) In this case one finds $\tau(p) = (c/(2a_\nu)) \log x$, where $x=(y+1)/(y-1) $. The resulting bound on $C_K$ is given by \begin{align} C_K &\le \(\|e^{\l_0 -V}\|_\nu^{\frac{cy\log x}{2a_\nu^{2}} }/K\)^{2 \frac{a_\nu}{y-a_\nu}} \notag \\ &= \(\|e^{\l_0 -V}\|_\nu^{\frac{cy\log x}{a_\nu^{2}} }/K^2\)^{ \frac{a_\nu}{y-a_\nu}} . \label{A471} \end{align} In some applications it might be useful to choose $y$ large. But in this paper the estimate \eref{Adist3c} serves our purposes. } \end{remark} \begin{remark} \label{remA3.3,4} {\rm The estimates of $A_\ep, B_\ep$ and $C_K$ that we gave in Theorem \ref{thmdist} depend on $\|e^{-V}\|_\nu$ and $\|e^V\|_\ka$. 
It may be desirable for future applications to avoid use of $\|e^V\|_\ka$, because a bound on this, although almost necessary for bounds on $\|\psi^{-s}\|_{L^1(m)}$ (as we see in Corollary \ref{corvlps2}), does not seem to be anywhere near necessary for establishing a defective logarithmic Sobolev inequality, as examples show. It is possible, however, to get bounds on $A_\ep, B_\ep$ and $C_K$ just in terms of $c,\nu,\|e^{-V}\|_\nu$ and $\|V\|_p$ for any $p > 1$. The key steps in one such procedure have been carried out by Aida in \cite[Lemma 3.3, Part (4)]{Aida2001}. } \end{remark} \subsubsection{Aida's spectral gap} \label{secAsg} Theorem \ref{thmsg2} gives a bound, \eref{A365b}, on the coefficient $\gamma_1$ in Poincar\'e's inequality \eref{A363} for the ground state measure $m_\psi$. The bound depends on the choice of a region $\{ \ep < \psi < K\}$, outside of which the contributions to the Poincar\'e inequality are well controlled by the energy. The numbers $\ep$ and $K$ must be chosen so as to satisfy the inequality \eref{A360}. In this section we will use the bounds on the distribution of $\psi$, derived in Theorem \ref{thmdist}, to make a choice of $\ep$ and $K$ satisfying \eref{A360}, from which we can derive a quantitative bound on the Poincar\'e coefficient $\gamma_1$ in terms of the given data $c,\nu, \kappa$ and $M\equiv \|e^{-V}\|_\nu \|e^V\|_\kappa$. \begin{theorem} \label{thmspecbd} Under the hypotheses of Theorem \ref{thmM} there exists a number $\gamma_1$ such that \begin{align} \int_X\(u^2 - \<u\>_{m_\psi}^2\)dm_\psi &\le \gamma_1 \int_X|\n u|^2 dm_\psi. \label{A363a} \end{align} $\gamma_1$ may be chosen so as to satisfy the bound \begin{align} \gamma_1 \le d_1 M^{e_1} \label{A363b} \end{align} for constants $d_1, e_1$ depending only on $c, \nu, \kappa$. \end{theorem} \begin{proof} With the goal of implementing the procedure of Theorem \ref{thmsg2}, we first choose $K$ so large that $4C_K e^{12(D + e^{-1})} \le 1/6$.
For this it suffices by \eref{Adist3c} to take $K$ so that \begin{align} 4 \(M^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} }/K^2\)^{a_\nu/(2-a_\nu)} e^{12(D + e^{-1})} \le 1/6 . \label{A480b} \end{align} Define $K$ by equality in \eref{A480b}. Then \begin{align} K^2 = M^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} } \(24 e^{12(D + e^{-1})} \)^{\frac{2- a_\nu}{a_\nu}}. \label{A480} \end{align} Second, we choose $\ep$ so small that \begin{align} 2K^2\( 2\gamma B_\ep +A_\ep \) e^{ 12(D + e^{-1})} &\le 1/6. \end{align} That is, $\( 2\gamma B_\ep +A_\ep \) \le \frac{ e^{ -12(D + e^{-1})}}{12 K^2}.$ For this it suffices, by \eref{Adist2c3}, to take $\ep$ such that \begin{align} 2\gamma A_\ep^{1/2}M+ A_\ep \le \frac{ e^{ -12(D + e^{-1})}}{12 K^2}. \label{A482} \end{align} Since, by its definition \eref{sg10}, we have $A_\ep \le 1$, we also have $A_\ep \le A_\ep^{1/2}$. Thus it suffices to choose $\ep$ such that \begin{align} A_\ep^{1/2} \le \frac{ e^{ -12(D + e^{-1})}}{12 K^2\(1 + 2\gamma M\)}. \label{A483} \end{align} From \eref{Adist1c}, we see that \eref{A483} will hold if \begin{align} (\ep M^{\alpha_1})^{s_1/2} \le \frac{ e^{ -12(D + e^{-1})}}{12 K^2\(1 + 2\gamma M\)}. \label{A484} \end{align} Define $\ep$ by equality in \eref{A484}. Then \begin{align} \ep^2 = \Big\{ \frac{ e^{ -12(D + e^{-1})}}{12 K^2\(1 + 2\gamma M\)} \Big\}^{4/s_1} M^{-2\alpha_1} . \label{A485} \end{align} The values of $K$ and $\ep$ defined in \eref{A480} and \eref{A485} satisfy \eref{A360}. We may therefore use them to bound $\gamma_1$ by \eref{A365b}. We find \begin{align} &K^2/\ep^2 = K^2 \Big\{\frac{12 K^2\(1 + 2\gamma M\)} { e^{ -12(D + e^{-1})}}\Big\}^{4/s_1} M^{2\alpha_1} \notag \\ &=\Big\{K^{2\{1+(4/s_1)\}}\(12e^{ 12(D + e^{-1})}\)^{4/s_1}\Big\} \(1 + 2\gamma M\)^{4/s_1} M^{2\alpha_1}. 
\label{A488a} \end{align} In view of \eref{A480} the factor in braces is \begin{align} &\Big\{K^{2\{1+(4/s_1)\}}\(12e^{ 12(D + e^{-1})}\)^{4/s_1}\Big\} \notag \\ & = \Big\{M^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} } \(24 e^{12(D + e^{-1})} \)^{\frac{2- a_\nu}{a_\nu}}\Big\}^{1 +(4/s_1)} \(12e^{ 12(D + e^{-1})}\)^{4/s_1} \notag\\ & =M^{ \frac{\log 3}{(2c)^{-1} - \nu^{-1}} (1 +(4/s_1))} \(24 e^{12(D+e^{-1})}\)^{\(\frac{2- a_\nu}{a_\nu}(1 +(4/s_1))+ 4/s_1\) } 2^{-4/s_1}. \label{A488b} \end{align} From \eref{Adist0} we see that $4/s_1= 4b_\ka - 2$, and therefore $$ \frac{2- a_\nu}{a_\nu}(1 +(4/s_1))+ 4/s_1 = \frac{2(4b_\ka -1)}{a_\nu} -1. $$ Inserting this into \eref{A488b}, we find from \eref{A488a} \begin{align} K^2/\ep^2 = M^{\beta_1} e^{\beta_2 D} \(1 + 2\gamma M\)^{\beta_3} \beta_4, \label{A488c} \end{align} where \begin{align} \beta_1 & = \frac{\log 3}{(2c)^{-1} - \nu^{-1}} (4b_\ka -1) + 2\alpha_1 \label{A489a}\\ \beta_2 & =12\(\frac{2(4b_\ka -1)}{a_\nu} -1\) \label{A489b}\\ \beta_3 &= 4b_\ka -2 \label{A489c}\\ \beta_4 &= \(24 e^{12/e}\)^{\(\frac{2(4b_\ka -1)}{a_\nu} -1\)} 2^{2- 4b_\ka}, \label{A489d} \end{align} where $a_\nu =\sqrt{1 - (2c/\nu)}$ and $b_\ka = \sqrt{1 + (2c/\ka)}$, by \eref{L341d} and \eref{s98f}, and $\alpha_1$ is given in \eref{Adist0}. All four constants $\beta_j$ depend only on $c, \nu$ and $\ka$ and are non-negative. In Theorem \ref{thmsg2}, the constants $B$ and $D$ are arbitrary. To apply our bounds from Theorem \ref{thmDLSI3} we use the form of the defective logarithmic Sobolev inequality given in \eref{gs807b}. Thus we take \beq B= 2a \ \ \ \text{and} \ \ D = 2 \log M^{a + \ell(a) + \sigma + \ell(\sigma)}. \label{A490} \eeq For our choices, $a = \sigma = 2c_\nu b_\ka$, we see from \eref{Adist5} that \begin{align} e^D = M^{2(2a + (c\log 3)/b_\ka)} \label{A490a} \end{align} and therefore $ e^{\beta_2 D} = M^{2\beta_2(2a + (c\log 3)/b_\ka)}$.
Combining the first two factors in \eref{A488c} we find \begin{align} K^2/\ep^2 = M^{\beta_5} \(1 + 2\gamma M\)^{\beta_3} \beta_4, \label{A488d} \end{align} where \begin{align} \beta_5 = \beta_1 + 2\beta_2(2a + (c\log 3)/b_\ka). \label{A489e} \end{align} Our assumed logarithmic Sobolev inequality \eref{mt1} implies that the Poincar\'e inequality \eref{503f} in Aida's hypothesis holds in our case with $\gamma = c$. See \cite[Theorem 2.5]{G93a} or \cite[Proposition 5.1.3]{BGL} for a proof of this. Thus the bound \eref{A365b} yields in our case \begin{align} \gamma_1 \le 2a + 8c M^{\beta_5} \(1 + 2c M\)^{\beta_3} \beta_4 \label{A489f} \end{align} with $a = 2c_\nu b_\ka$. To reach the simple-looking form \eref{A363b} we can use the overestimate $1 \le M$, as in \eref{M3}, from which it follows that $1+2c M \le (1+2c) M$ and $2a \le 2a M^{\beta_5 + \beta_3}$. Inserting these two bounds into \eref{A489f} we find \begin{align} \gamma_1 \le d_1M^{e_1}, \label{A489h} \end{align} where \begin{align} d_1 = 2a + 8c (1+2c)^{\beta_3} \beta_4 \ \ \ \ \text{and}\ \ e_1 =\beta_5+ \beta_3 \label{A489g} \end{align} and $a = 2c_\nu b_\ka$ as usual. This proves Theorem \ref{thmspecbd}. \end{proof} \subsection{Tightening: Proof of the main theorem} \label{sectightening} If the generator, $H$, of a hyperbounded semigroup has a spectral gap at the bottom of its spectrum then the semigroup is in fact hypercontractive. This was first proven by J. Glimm, \cite[Lemma 5.1]{Glimm68}, and later amplified by I. E. Segal \cite[Section 1]{Seg70}. In view of the equivalence of hyperboundedness with logarithmic Sobolev inequalities, \cite{G1}, one can restate this at an infinitesimal level: If a Dirichlet form satisfies both a defective logarithmic Sobolev inequality and a Poincar\'e inequality then it also satisfies a logarithmic Sobolev inequality (without defect). The initial form of this theorem was given by O.
Rothaus, \cite{Rot5}, wherein he proved a key lemma for this theorem, \cite[Lemma 9]{Rot5}, and then applied it to a specific geometric circumstance in the context of isoperimetric inequalities, \cite[Theorem 10]{Rot5}, to remove the defect. Deuschel and Stroock, \cite{DS1989}, gave another proof of Rothaus' theorem, and Carlen and Loss, \cite{Ca2004}, gave yet another. We will use the form of this theorem given in \cite[Proposition 5.1.3]{BGL}, which we quote here. \begin{proposition} \label{propbgl} {\rm (\cite[Proposition 5.1.3]{BGL}). } Suppose that $\mu$ is a probability measure on a Riemannian manifold. If \begin{align} Ent_\mu (f^2) \le 2C \int |\n f|^2 d\mu + D \int f^2 d\mu \label{sls5} \end{align} and \begin{align} Var_\mu (f) \le C' \int |\n f|^2 d\mu \label{sls6} \end{align} then \begin{align} Ent_\mu (f^2) \le 2\(C +C'((D/2) + 1)\) \int |\n f|^2 d\mu. \label{sls7} \end{align} \end{proposition} We will apply this proposition to the defective logarithmic Sobolev inequality derived in Theorem \ref{thmDLSI3} in combination with Aida's spectral gap estimate derived in Section \ref{secbasg}. As in both of those inequalities, there are parameters that can be chosen according to the needs of applications. We will use the choices we made before to arrive at bounds of the simple form described in Theorem \ref{thmM}. \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmM}] Items a. and b. in Theorem \ref{thmM} have been proved in Section \ref{sechyperp}. For the proof of item c. we take $\mu = m_\psi$ in Proposition \ref{propbgl}. Choose $a = \sigma = 2c_\nu b_\ka$ as we did in \eref{Adist0} (in Section \ref{secdist}) and take $C= a$ in \eref{sls5}. Then \eref{sls5} holds with \begin{align} D = \log M^{2(2a + (c\log 3)/b_\ka)} \end{align} by \eref{A490a} and Section \ref{secAsg}. In \eref{sls6} we may, by Theorem \ref{thmspecbd}, take $C' = \gamma_1$, where $\gamma_1 \le d_1 M^{e_1}$ and $d_1, e_1$ are given by \eref{A489g}.
Proposition \ref{propbgl} then assures that \begin{align} Ent_{m_\psi}(f^2) \le 2c_1 \int_X |\n f|^2 dm_\psi \end{align} with \begin{align} c_1 \le a + \gamma_1( 1 + \log M^{(2a + (c\log 3)/b_\ka)}). \label{T10} \end{align} Item d. of Theorem \ref{thmM} follows from item c. and the Rothaus-Simon theorem \cite{Rot1}, \cite{Simon1976}. A direct proof of the Rothaus-Simon theorem may be found in \cite[Theorem 2.5]{G1993} or \cite[Proposition 5.1.3]{BGL}. In item e. of Theorem \ref{thmM} the form of the bound on $c_1$ can be derived from \eref{T10} by overestimating again, using $M \ge 1$, to find $1 + \log M^{(2a + (c\log 3)/b_\ka)} \le M^{(2a + (c\log 3)/b_\ka)}$ and therefore \begin{align} c_1& \le a + d_1M^{e_1} M^{(2a + (c\log 3)/b_\ka)} \notag \\ &\le (a+d_1) M^{e_1} M^{(2a + (c\log 3)/b_\ka)} \notag\\ &=\alpha M^\beta, \end{align} where $ \alpha = a + d_1$ and $ \beta = e_1 + (2a + (c\log 3)/b_\ka)$. The constants $d_1$ and $e_1$ are defined in \eref{A489g} and depend only on $c,\nu$ and $\kappa$. This proves item e. of Theorem \ref{thmM}. \end{proof} \begin{remark} \label{remsg2} {\rm The spectral gap for $m_\psi$ listed in item d. of Theorem \ref{thmM} is derived from the logarithmic Sobolev inequality \eref{mt5} described in item c. But our procedure for deriving \eref{mt5} includes deriving first the Poincar\'e inequality \eref{A363a} for $m_\psi$. The constant $\gamma_1$ in \eref{A363a} is much smaller than the Sobolev constant $c_1$, as one can see from \eref{T10}. Therefore we actually have a smaller Poincar\'e constant than that derived from $c_1$. In particular the spectral gap is at least $d_1^{-1}M^{-e_1}$. } \end{remark} \section{Examples and Applications} \subsection{Consecutive ground state transforms} \label{secgst} If the potential in a Schr\"odinger operator is a sum of two potentials then the ground state transformation may factor into two ground state transformations, one for each potential, in the following sense.
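As a simple illustration, anticipating the computation carried out in Section \ref{secgp}: take $m$ to be Lebesgue measure on $\R$, $V_1 = \w^2 x^2$ and $V_2 = a x^2$ with $\w, a >0$. The oscillator $-d^2/dx^2 + \w^2 x^2$ has ground state $\psi_1(x) = (\w/\pi)^{1/4} e^{-\w x^2/2}$, the relative ground state of $V_2$ in $L^2(m_{\psi_1})$ is $\psi_2(x) = (\alpha/\w)^{1/4} e^{(\w -\alpha)x^2/2}$ with $\alpha = \sqrt{\w^2 + a}$, and the product
\begin{align*}
\psi_1(x)\psi_2(x) = (\alpha/\pi)^{1/4} e^{-\alpha x^2/2}
\end{align*}
is precisely the ground state of $-d^2/dx^2 + (\w^2 + a)x^2$, in accordance with the factorization asserted in the lemma below.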
\begin{lemma} \label{lemcgst} $($Consecutive ground state transformations$)$. Suppose that $m$ is a smooth measure on a Riemannian manifold and that $V_1$ and $V_2$ are two potentials. Assume that $H\equiv \n^*\n + V_1 + V_2$ has a unique ground state $\psi \in L^2(m)$. Denote by $U:L^2(m_\psi) \to L^2(m)$ the ground state transformation. Assume further that $H_1 \equiv \n^*\n +V_1$ has a unique ground state $\psi_1 \in L^2(m)$. Let $U_1:L^2(m_{\psi_1}) \to L^2(m)$ be the ground state transformation for $H_1$. Further, suppose that the Schr\"odinger operator $\n_{m_{\psi_1}}^* \n + V_2$ has a unique ground state $\psi_2 \in L^2(m_{\psi_1})$. Denote by $U_2: L^2(\psi_2^2 m_{\psi_1}) \to L^2(m_{\psi_1})$ the ground state transformation. Then \begin{align} \psi &= \psi_1 \psi_2. \label{E5}\\ m_\psi &= \psi_2^2 m_{\psi_1}. \label{E6} \\ U &= U_1U_2. \label{E7} \end{align} That is, the first line below factors as the second line. \begin{align} &L^2(m) \xleftarrow{\ \ \ \ \ \ \ \ U\ \ \ \ \ \ \ \ \ } L^2( m_{\psi}) \label{E8}\\ &L^2(m)\xleftarrow{U_1} L^2(m_{\psi_1})\xleftarrow{U_2}L^2(\psi_2^2 m_{\psi_1}) \label{E9} \end{align} \end{lemma} \begin{proof} Let us write $m_1 = \psi_1^2 m$ and $m_2 = \psi_2^2m_1 = \psi_2^2\psi_1^2 m$. If $\l_1$ is the bottom of the spectrum of $\n^*\n + V_1$ then \begin{align} U_1^{-1}(\n^*\n + V_1 - \l_1) U_1 = (\n)_{m_1}^*\n \notag \end{align} by the definition of the ground state transformation for $\n^*\n + V_1$. Since $U_1$ is a multiplication operator it commutes with multiplication by $V_2$. Therefore \begin{align} U_1^{-1}(\n^*\n + V_1+V_2- \l_1) U_1 = (\n)_{m_1}^*\n + V_2. \label{E20} \end{align} By the definition of $U_2$ we have \begin{align} U_2^{-1}\((\n)^*_{m_1} \n +V_2-\l_2\)U_2 = (\n)_{m_2}^* \n, \label{E12} \end{align} where $\l_2$ is the bottom of the spectrum of $(\n)^*_{m_1} \n +V_2$. Insert \eref{E20} into \eref{E12} to find \begin{align} (U_1U_2)^{-1} \(\n^*\n + (V_1 + V_2) - (\l_1 + \l_2)\) U_1U_2 = (\n)_{m_2}^* \n.
\notag \end{align} Apply this identity to the function identically one, which is a unit vector in $L^2(m_2)$. The right hand side is zero while $U_1U_2 1 = \psi_1\psi_2$. Therefore \begin{align} \(\n^*\n + (V_1 + V_2) - (\l_1 + \l_2)\) \psi_1\psi_2 =0. \label{E15} \end{align} Since $\n^*\n + (V_1 + V_2)$ has a unique ground state $\psi$, and $\psi_1\psi_2$ is a positive normalized function in $L^2(m)$ satisfying \eref{E15}, it follows that $\psi_1 \psi_2 = \psi$, that $\l_1 + \l_2 = \l$, the bottom of the spectrum of $H$, and that $U_1U_2 f= \psi_1\psi_2 f = \psi f = Uf$. This proves \eref{E5} - \eref{E9}. \end{proof} \begin{remark} {\rm If $X = \R^n$ and $m$ is Lebesgue measure then $\n^*\n = -\Delta$ and we have the usual Schr\"odinger operator in the hypothesis of this lemma. } \end{remark} \begin{remark} \label{remKS} {\rm The use of consecutive ground state transforms is implicit in \cite[Theorem 1.4]{KS87}. } \end{remark} \bigskip \begin{example} \label{expos} Let $m$ be a smooth measure $($not necessarily finite$)$ on a Riemannian manifold $X$. Let $V$ be a measurable potential and suppose that $ V= V_0 + V_1$ with $V_1 \ge 0$. Assume that the Schr\"odinger operator $\n^*\n + V_0$ has a unique (positive) ground state $\psi_0$ whose ground state measure $m_{\psi_0}$ satisfies a LSI. Assume also that \begin{align} \int_Xe^{\ka V_1} dm_{\psi_0} < \infty\ \ \ \text{for some}\ \ \ka >0. \label{sls20} \end{align} Then the Schr\"odinger operator $\n^*\n +V$ has a unique (positive) ground state $\psi$ and the ground state measure $m_\psi$ satisfies a LSI. \end{example} \begin{proof} Since $V_1 \ge 0$ and \eref{sls20} holds, the condition \eref{mt2} holds for $V_1$ and the measure $m_{\psi_0}$. We can apply Theorem \ref{thmM} to find a ground state $\psi_1$ for the Schr\"odinger operator $\n^*_{m_{\psi_0}} \n + V_1$ and the ground state measure $\psi_1^2 dm_{\psi_0}$ satisfies a LSI.
By Lemma \ref{lemcgst} the function $\psi \equiv \psi_1 \psi_0$ is the ground state of the Schr\"odinger operator $\n^*\n + V$. Moreover $m_\psi = \psi^2 dm= \psi_1^2 \psi_0^2 dm = \psi_1^2 dm_{\psi_0}$. Therefore $m_\psi$ satisfies a LSI. \end{proof} \subsection{Gaussian precision} \label{secgp} The two quadratic equations \eref{W850} and \eref{L313} determine intervals of Lebesgue indices for which various moment bounds and hypercontractive bounds hold. \eref{W850} is key in case the potential is positive and \eref{L313} is key in case the potential is negative. In this section we will show that the intervals of validity of these bounds are exact for Gaussians and therefore the intervals determined by these peculiar quadratic equations are not just artifacts of the proof. \subsubsection{Negative potentials} \label{secnp} Corollary \ref{corhb2} shows that the semigroup $e^{-t(\n^*\n +V)}$ is bounded from $L^q(m)$ to $L^p(m)$ if the Dirichlet form for $m$ satisfies a logarithmic Sobolev inequality and if $t, q, p$ and $\|e^{-V}\|_{L^\nu(m)}$ are suitably related. There is, in addition, a surprising restriction on the allowed range of $q$ and $p$, unlike in the usual hyperboundedness theorems. The restriction is determined by the quadratic equation \eref{L313m}, whose two roots $q_0, p_0$ are conjugate indices. Boundedness, $\|e^{-t(\n^*\n + V)}\|_{L^q(m)\to L^p(m)} < \infty$, is assured by the corollary for large $t$, but only in case $q_0 \le q \le p \le p_0$. In particular, the corollary shows that $e^{-t(\n^*\n +V)}$ is a strongly continuous semigroup in $L^p(m)$ if $q_0 \le p \le p_0$. We will give an example in which the latter fails if $p \notin [q_0,p_0]$. Let \begin{align} m = \gamma = \pi^{-1/2} e^{-x^2} dx, \ \ V(x) = - ax^2,\ \ a >0, \ \ \text{and}\ \ H = \n^*\n + V \label{gp3a} \end{align} Then \eref{mt1} holds with $ c = 1/2$, \cite{G1}. \begin{theorem} \label{thmgp1} Let $\nu > 1$. Define $q_0$ and $p_0$ by \eref{L313q} with $c = 1/2$. 
Let $p_1 > p_0$. Then there exists a real number $a >0$ such that \begin{align} &\int_\R e^{-\nu V} d\gamma < \infty\ \ \ \ \ \ \ \text{and} \label{gp4} \\ &e^{-tH} g \notin L^{p_1}(\gamma)\ \ \text{for some}\ \ g \in L^{p_1}(\gamma) \text{ and some}\ t >0. \label{gp5} \end{align} In particular $e^{-tH}$ does not operate as a strongly continuous semigroup in $L^{p_1}(\gamma)$. \end{theorem} For the proof, we will first show, in the next lemma, that the family of functions \begin{align} f(t, x) =e^{b(t) + s(t) x^2} ,\ \ t \ge0 \label{gp3} \end{align} includes functions of the form $e^{-tH} g$. \begin{lemma} \label{lemnp1} Define $\gamma$ as in \eref{gp3a}. Let $H= \n^*\n -ax^2$. Then \begin{align} a. \ & \n^*\n g (x)= - g''(x) +2x g'(x),\ \ \ g \in C^{\infty}(\R)\cap \D(\n^*\n). \label{gp6}\\ b.\ & (H f)(t,x) = \( -2s + \{-4s^2 +4s -a\}x^2\) f(t,x)\ \ \text{if}\ \ s(t) < 1/2. \label{gp7}\\ c.\ &\dot f = (\dot b +\dot s x^2) f . \label{gp8} \end{align} In particular, if $s(t) < 1/2$ for $t$ in an interval $[0, t_1]$ and \begin{align} \dot s &= 4s^2 -4s +a\ \ \text{on} \ \ [0, t_1] \ \ \ \text{and}\ \ \label{gp9}\\ \dot b &= 2s \label{gp10} \end{align} then $ \dot f = -Hf $ and \begin{align} e^{-tH} f(0) = f(t) \ \ \ \text{on}\ \ \ [0, t_1]. \label{gp12} \end{align} \end{lemma} \begin{proof} If $g \in C^{\infty}(\R)$ then the definition $(\n^*\n g, h)_{L^2(\gamma)} = \int_\R g'(x) h'(x) d \gamma(x)$, valid for $h \in C_c^\infty(\R)$, together with an integration by parts proves \eref{gp6}. With $f$ given by \eref{gp3}, the identities \eref{gp7} and \eref{gp8} follow from straightforward computations. The technical issue as to whether $f(t,\cdot)$ is actually in the $L^2$ domain of $H$ is easily deduced from the fact that $f(t,\cdot)$ and $x\to x^2f(t,x) $ are in $L^2(\gamma)$ when $s(t) < 1/2$. The identity $\dot f = -Hf$ now follows from \eref{gp7} - \eref{gp10}.
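To spell out the computation behind \eref{gp7}: with $f = e^{b+sx^2}$ one has $f' = 2sxf$ and $f'' = (2s + 4s^2x^2)f$, so that, by \eref{gp6},
\begin{align*}
Hf = -f'' + 2xf' - ax^2 f = \big(-2s +\{-4s^2 +4s -a\}x^2\big) f.
\end{align*}
Comparing $-Hf$ with \eref{gp8} then produces exactly the two equations \eref{gp9} and \eref{gp10}.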
The exponentiated version of this, \eref{gp12}, follows by observing that, for $0 < t_2 < t_1$ and $h \in C_c^{\infty}(X)$, the identity \begin{align*} (d/dt)(f(t), e^{(t - t_2)H}h) &= (-H f(t), e^{(t - t_2)H}h) + (f(t), H e^{(t - t_2)H}h) \\ &= 0\ \ \ \text{for} \ \ 0 < t <t_2, \end{align*} implies that $ (f(t), e^{(t - t_2)H}h)$ is constant on $(0, t_2)$ and by strong continuity on $[0, t_2]$. Therefore $(e^{-t_2 H} f(0), h) =(f(0), e^{-t_2H}h) = (f(t_2), h)$ for all $h \in C_c^{\infty}(X)$. Hence \eref{gp12} holds for all $t \in [0, t_1)$ and by strong continuity for all $t \in [0, t_1]$. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmgp1}] We will construct a function $s(t)$ satisfying \eref{gp9}, which increases on $[0, t_1]$ and such that $f(0, \cdot) \in L^{p_1}(\gamma)$ but $f(t_1, \cdot) \notin L^{p_1}(\gamma)$. The theorem will then follow from \eref{gp12}. Since, by \eref{L313q}, $p_0$ is the larger zero of the upward opening parabola $p\mapsto p^2 -4\nu(p-1)$ and $p_1 > p_0$ it follows that $p_1^2 - 4\nu(p_1 -1) >0$. Let $s_1 = 1/p_1$. Then \begin{align} 4s_1^2 - 4s_1 +1/\nu = \{s_1^2/\nu\}\(4\nu - 4\nu p_1 + p_1^2\) >0. \end{align} Choose $\ep >0 $ so small that \begin{align} a \equiv \nu^{-1} - \ep &>0\ \ \ \text{and} \\ 4s_1^2 - 4s_1 + \nu^{-1} - \ep &>0. \end{align} Then $\int_\R e^{-\nu V} d\gamma = \pi^{-1/2}\int_\R e^{(\nu a -1)x^2} dx < \infty$ and therefore \eref{gp4} holds. Choose $s_2 < s_1$ such that $4s^2 - 4s + \nu^{-1} - \ep >0$ on the interval $[s_2, s_1]$. Continuity of the quadratic polynomial ensures the existence of such a point $s_2$. Denote by $s(t)$ the solution to \eref{gp9} with initial condition $s(0) = s_2$. Since $a = \nu^{-1} - \ep$, the right side of \eref{gp9} is strictly positive for $s \in [s_2, s_1]$. The solution will therefore increase and reach $s_1$ in a finite time $t_1 >0$. In particular $s(t)\le s_1 =1/p_1 < 1/2$ for $0\le t \le t_1$. Define $b(t) = \int_0^t 2s(t') dt'$. 
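(Quantitatively: the right side of \eref{gp9} is continuous and strictly positive on the compact interval $[s_2, s_1]$, so
\begin{align*}
\dot s \ge \delta \equiv \min_{s_2 \le s \le s_1}\big(4s^2 -4s + \nu^{-1} - \ep\big) > 0
\qquad \text{while} \ \ s(t) \in [s_2, s_1],
\end{align*}
which gives the explicit bound $t_1 \le (s_1 - s_2)/\delta$ on the time at which $s$ reaches $s_1$.)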
Then \eref{gp9} and \eref{gp10} are both satisfied on the interval $[0, t_1]$ and \eref{gp12} holds on this interval. Let $g(x) = f(0, x)$. Now \begin{align} \int_\R (e^{b +sx^2})^{p} d\gamma(x) &= \pi^{-1/2}\ e^{pb} \int_\R e^{(ps -1)x^2} dx <\infty\ \ \text{iff}\ \ ps <1. \label{gp15} \end{align} Since $p_1s(0) = p_1s_2 <p_1s_1 = 1$ we see from \eref{gp15} that $g \in L^{p_1}(\gamma)$. On the other hand $p_1 s(t_1) = p_1 s_1 = 1$. From \eref{gp15} we therefore find that $f(t_1, \cdot) \notin L^{p_1}(\gamma)$. That is, $e^{-t_1 H} g \notin L^{p_1}(\gamma)$. \end{proof} \subsubsection{Positive potentials}\label{secpp} Theorem \ref{thmmp1} and Corollary \ref{corub1} give a sufficient condition on the growth of $V$ that ensures that $\psi$ stays away from zero well enough that $\int_X \psi^{-s} dm < \infty$ for all $s$ in an interval $(0, s_0)$. (cf \eref{W710a}.) We will show here, by example, that this interval, determined by the quadratic equation \eref{W850}, is not just an artifact of the proof, and is close to necessary in the sense that if $s >s_0$ then there is a potential $V$ such that $\|e^V\|_\ka < \infty$ while $\|\psi^{-1}\|_s = \infty$. \begin{notation} {\rm For a real number $\w >0$ the Hamiltonian \begin{align} H_\w = - d^2/dx^2 + \w^2 x^2 \label{gp160} \end{align} has the normalized ground state and, respectively, ground state measure \begin{align} \phi_\w(x) = (\w/\pi)^{1/4} e^{-\w x^2/2}, \ \ \ \ m_\w = (\w/\pi)^{1/2} e^{-\w x^2} dx, \label{gp161} \end{align} as can be easily verified. The Gaussian measure $m_\w$ satisfies, by \cite{G1}, the logarithmic Sobolev inequality \begin{align} Ent_{m_\w}(f^2) \le \w^{-1} \int_\R |\n f|^2 dm_\w. 
\label{gp163} \end{align} } \end{notation} In the preceding subsection we perturbed the potential $\w^2 x^2$ (for $\w = 1$) by adding a negative quadratic potential $V \equiv -a x^2$ to $H_\w$ and found that in the ground state representation of $H_\w$ the resulting perturbed semigroup $e^{-t(\hat H_\w +V)}$ had pathological behavior outside the interval of validity $[q_0, p_0]$ allowed by Corollary \ref{corhb2}. In the present section we will add a positive quadratic potential $V \equiv ax^2$ onto $H_\w$ to show that, for any $\ka >0$, \eref{gp170} and \eref{gp171} can both hold for suitable $a$. We should note in this example that the new ground state measure associated to the perturbed Hamiltonian $-d^2/dx^2 +(\w^2 +a)x^2$ has a smaller Sobolev coefficient, $(\w^2 + a)^{-1/2}$, than the unperturbed ground state measure, whereas the perturbation method we are using will always produce a bigger Sobolev coefficient than the unperturbed one. \begin{theorem} Let $\w >0$. Denote by $\n^*\n$ the Dirichlet form operator for $m_\w$. Let $\ka >0$ and define $s_0$ as in \eref{W851g}. Suppose that $s > s_0$. Then there is a potential $V \equiv a x^2$ such that \begin{align} \|e^V\|_{L^\ka(m_\w)} < \infty \label{gp170} \end{align} while the ground state $\psi$ for $\n^*\n +V$ in $L^2(m_\w)$ satisfies \begin{align} \|\psi^{-1}\|_{L^s(m_\w)} = \infty. \label{gp171} \end{align} \end{theorem} \begin{proof} For $a >0$ let $\alpha =\sqrt{\w^2 +a}$. Using the consecutive ground state theorem of Section \ref{secgst} we can compute the ground state $\psi$ for $\n^*\n + V$ by the ratio $\psi = \phi_\alpha/\phi_\w$. Thus \begin{align} \psi(x) = (\alpha/\w)^{1/4} e^{(\w - \alpha)x^2/2}. \label{gp172} \end{align} Hence \begin{align} \int_\R \psi^{-s} dm_\w = const. \int_\R e^{\(\frac{s(\alpha - \w)}{2}\ -\w\)x^2} dx. \label{gp140a} \end{align} This integral will be infinite if and only if the coefficient of $x^2$ is non-negative. That is, if and only if $s\alpha \ge (s+2)\w$. 
Squaring, we find the equivalent condition $s^2( \w^2 +a) \ge (s^2 + 4s + 4)\w^2$, and, equivalently, $s^2 a \ge 4(s+1) \w^2$, and, equivalently, $a\w^{-2} \ge 4(s^{-1} + s^{-2})$. Thus the integrals in \eref{gp140a} are infinite if and only if $a\w^{-2} \ge 4(s^{-1} + s^{-2})$. Now $\int_\R e^{\ka V} dm_\w = const. \int_\R e^{(\ka a - \w) x^2} dx$, which is finite if and only if $a <\ka^{-1} \w$. That is, if and only if $ a\w^{-2} < (\w\ka)^{-1}$. Therefore \eref{gp170} and \eref{gp171} both hold for some $a >0$ if and only if $4(s^{-1} + s^{-2}) < (\w\ka)^{-1}$. Comparing \eref{gp163} with \eref{mt1} we see that $\w^{-1} = 2c$. Hence the equation \eref{W850} for $s_0$ may be written $t^2 - 4\w\ka(t+1) = 0$. Therefore $(\w\ka)^{-1} = 4(s_0^{-1} +s_0^{-2})$. Since $s >s_0$ it follows that $(\w\ka)^{-1} > 4(s^{-1} +s^{-2})$. \end{proof} \begin{remark}\label{remA4.13} {\rm In \cite[Remark 4.13 and Lemma 5.5]{Aida2001} Aida showed that when $m$ is Gaussian, the condition $\int e^{\ep V} dm < \infty$ for some $\ep >0$, together with some conditions on the negative part of $V$, is sufficient for $\psi^{-1}$ to belong to $L^{p}(m)$ for some $p > 0$. } \end{remark} \subsection{Eckmann's theorem} \label{secEck} We apply our techniques in this section to prove intrinsic hypercontractivity for the one-dimensional Schr\"odinger operator \begin{align} H\equiv - d^2/dx^2 + V. \label{E200} \end{align} J.-P. Eckmann, \cite{Eck74}, described a class of potentials $V$ on $\R$ for which the ground state measure of $H$ satisfies a defective logarithmic Sobolev inequality. We will derive a version of Eckmann's theorem by a method that illustrates how to combine use of the Bakry-Emery criterion with Theorem \ref{thmM}, the main perturbation theorem of this paper. Suppose that $F$ is a continuous real-valued function on $\R$ such that $dm \equiv e^{-2F} dx$ is a probability measure.
The Bakry-Emery criterion, \cite[Corollary 5.7.2]{BGL}, assures that $m$ satisfies a logarithmic Sobolev inequality if $F$ is uniformly convex on $\R$. Given a potential $V$, we will construct a uniformly convex function $F$ such that $e^{-F}$ is an approximate ground state, in some sense, for $H$. We then use the WKB identity \eref{W7} to produce a potential $W$ from $F$, whose ground state is exactly $e^{-F}$. The probability measure $m \equiv e^{-2F} dx$ is therefore hypercontractive, but is the ground state measure for $W$, not $V$. We then apply our perturbation theorem, Theorem \ref{thmM}, to the pair $m, V-W$ to find another ground state measure satisfying a LSI and which, by the consecutive transformation method in Section \ref{secgst}, is exactly the ground state measure for $V$. The notions of intrinsic supercontractivity, \cite{Ros}, and intrinsic ultracontractivity, \cite{DS84}, are closely related to intrinsic hypercontractivity and have a large literature using very different techniques from Eckmann's and this paper's. See Remark \ref{reminap} for further discussion. \bigskip The following theorem is stated for an even potential for ease of reading. Its minor extension to more general potentials is explained in Remark \ref{rem7.1}. \begin{theorem} \label{thmeck4} $($Eckmann's Theorem$)$. Let $V \in C^1(\R)$ and assume that $V$ is even. Suppose that there are constants $a >0$ and $k >0$ and a number $x_0 >0$ such that $V(x) > 0$ when $x \ge x_0$ and \begin{align} &a.\ \ (d/dx) \sqrt{V(x)} \ge a\ \ \ \ \ \text{when}\ \ x \ge x_0 \ \ (\text{Eckmann's\ condition}) \label{E316}\\ &b.\ \ (d/dx) V(x) \le k V(x)\ \ \text{when}\ \ x \ge x_0. \label{E311b} \end{align} Then \begin{align} -(d^2/dx^2) + V \label{E312} \end{align} is bounded below. The bottom of the spectrum belongs to a unique positive ground state $\psi$. The ground state measure $m_\psi = \psi^2 dx$ satisfies a logarithmic Sobolev inequality.
\end{theorem} The proof depends on the following construction of an intermediate ground state, which will be computationally useful in applications. \begin{lemma}\label{leminterm} $($Intermediate state$)$. Let $V \in C^1(\R)$ and assume that $V$ is even. Suppose that there is a constant $a >0$ such that Eckmann's condition \eref{E316} holds. Let \beq F_0(x) = \int_{x_0}^x \sqrt{V(s)} ds\ \ \ \ \ \text{for}\ x \ge x_0. \label{E314} \eeq Let \beq b = \sqrt{V(x_0)} /x_0 \label{E314b} \eeq and define \begin{align} F(x) = \begin{cases} &F_0(x) + bx_0^2/2 , \ \ \ x\ge x_0 \\ & b x^2/2, \qquad \ \ \ 0\le x < x_0. \end{cases} \label{E173a} \end{align} Extend $F$ to be even on $\R$. Then $F$ and $F'$ are continuous on $\R$ and \begin{align} \int_\R e^{-pF} dx < \infty\ \ \text{for all}\ \ p >0. \label{E315e} \end{align} Let $\psi_0 \equiv Z^{-1} e^{-F}$ be normalized in $L^2(\R, dx)$ and define $m^F = \psi_0^2 dx$. Then $m^F$ satisfies the logarithmic Sobolev inequality \beq Ent_{m^F}(f^2) \le \frac{1}{\min(b,a)} \int_\R (f')^2 dm^F. \label{E116} \eeq In particular the Sobolev constant $c_F$ for $m^F$ satisfies $2c_F \le 1/\min(b,a)$. \end{lemma} For the proof we need the following sublemma. \begin{sublemma} \label{lemqg2} If Eckmann's condition \eref{E316} holds then \beq F_0(x) \ge \sqrt{V(x_0)}(x-x_0) +(a/2)(x-x_0)^2 \ \ \text{for all}\ x \ge x_0. \label{E315c} \eeq In particular \beq \int_{x_0}^\infty e^{-pF_0(x)} dx < \infty,\ \ \text{for all}\ \ p >0. \label{E315d} \eeq \end{sublemma} \begin{proof} Let $u(s) = \sqrt{V(s)}$ for $s \ge x_0$. Then $u'(s) \ge a$ by \eref{E316} and therefore $u(s) \ge u(x_0) + a(s-x_0)$. Hence $F_0(x) = \int_{x_0}^x u(s) ds \ge u(x_0)(x-x_0) + (a/2)(x-x_0)^2$ for $x \ge x_0$. \eref{E315d} follows. \end{proof} \bigskip \noindent \begin{proof}[Proof of Lemma \ref{leminterm}] $F$ is clearly continuous on $\R$. It will suffice to make the following computations just for $x \ge 0$. 
Since $F$ is bounded on $[0, x_0]$, \eref{E315e} follows from \eref{E315d}. Thus $e^{-F}$ is normalizable in $L^2(\R, dx)$ and $m^F$ is a probability measure. The first two derivatives of $F$ are given by \begin{align} F'(x) &= \begin{cases} &\sqrt{V(x)}, \ \ \ \ \ x > x_0 \\ &b x, \qquad 0\le x < x_0. \end{cases} \label{E110a}\\ F''(x) &= \begin{cases} &(d/dx)\sqrt{V(x)} \ \ \ \ x > x_0 \\ & b \qquad \ \ \ 0\le x < x_0 \end{cases} \label{E111a} \end{align} $F'$ extends continuously to $[0, \infty)$ by the definition of $b$. Moreover $F''(x) \ge \min(b,a)$ everywhere except possibly at $x_0$. Since $F'$ is continuous and $ F''$ is bounded away from zero we can apply the Bakry-Emery theorem (see e.g. \cite[Corollary 5.7.2]{BGL}) to the normalized measure $m^F = Z^{-2} e^{-2F} dx$. The Bakry-Emery theorem assures that \beq Ent_{m^F}(f^2) \le 2c \int_\R (f')^2 dm^F \eeq with $c= 1/\rho$ if $2F'' \ge \rho$. In view of \eref{E111a} and Eckmann's condition, \eref{E316}, we have $2F'' \ge 2\min (b,a)$. So we may take $\rho = 2\min (b,a)$ and \eref{E116} follows. \end{proof} \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmeck4}] Define a potential $W$ on $\R$ by applying the WKB equation \eref{W7} to the function $F$ defined in Lemma \ref{leminterm}, putting \begin{align} W = - F''(x) + | F'(x)|^2. \label{E112} \end{align} At $x = \pm x_0$ this should be interpreted as a weak derivative. Then the state $\psi_0$, defined in Lemma \ref{leminterm}, is the ground state for the Schr\"odinger operator $-d^2/dx^2 + W$ and $m^F$ is the ground state measure. $W$ can be computed explicitly with the help of \eref{E110a} and \eref{E111a} as follows. \begin{align} W &= - F''(x) + | F'(x)|^2 \notag\\ &= \begin{cases}&-(d/dx)\sqrt{V(x)} +V(x), \ \ \ \ x \ge x_0 \\ &-b +b^2 x^2 , \qquad \qquad \ \ \ \ 0 \le x < x_0.
\end{cases} \label{E182a} \end{align} Therefore \begin{align} V -W &= \begin{cases}&(d/dx)\sqrt{V(x)} , \qquad \ \ \ \ \ x \ge x_0 \\ & b -b^2 x^2 + V(x), \ \ \ 0 \le x < x_0. \end{cases} \label{E185} \end{align} In accordance with the consecutive ground state procedure of Section \ref{secgst}, the ground state, $\psi$, for $-d^2/dx^2 +V$ in $L^2(\R, dx)$ is the product of the ground state $\psi_0$ with the relative ground state $\psi_1 \in L^2(m^F)$, defined as the ground state of $\n^*\n + V-W$, where $\n^*\n$ is the Dirichlet form operator for $m^F$. It suffices therefore to show that the ground state measure $(m^F)_{\psi_1}$ satisfies a logarithmic Sobolev inequality. For this we only need to verify the two hypotheses \eref{mt2} of Theorem \ref{thmM} for the perturbation $V-W$ with $m = m^F$, since we already know that $\n^*\n$ satisfies the LSI \eref{E116}. $V- W$ is bounded on $[0,x_0]$ and, by \eref{E316}, is positive on $[x_0, \infty)$. Therefore $V-W$ is bounded below and \beq \int_\R e^{-\nu (V-W)} dm^F = Z^{-2}\int_\R e^{-\nu (V-W)} e^{-2F} dx < \infty\ \ \ \text{for all}\ \ \ \nu >0. \label{E187} \eeq This verifies the second of the two conditions \eref{mt2}. To verify the first condition we need to show that $\int_{x_0}^\infty e^{\kappa(V-W) - 2F} dx < \infty$ for some $\kappa >0$, since $V,W$ and $F$ are all even. We see from \eref{E311b} that $(d/dx) \sqrt{V(x)} \le (k/2) \sqrt{V(x)} $ for $x \ge x_0$ and therefore, by \eref{E185}, we have $V- W \le (k/2) \sqrt{V(x)} $ for $x \ge x_0$. Hence $\kappa (V-W) - 2F \le \ka (k/2) \sqrt{V(x)} - 2F(x)$ on $[x_0, \infty)$. But \begin{align*} (d/dx) \(\ka (k/2) \sqrt{V(x)} - 2F\) &\le \ka (k/2)^2 \sqrt{V(x)} - 2\sqrt{V(x)} \\ &=\{\ka (k/2)^2 -2\} \sqrt{V(x)} . \end{align*} Therefore, for some constant $C_1$ we have \begin{align*} &\kappa (V- W) - 2F \le \int_{x_0}^x \{\ka (k/2)^2-2\} \sqrt{V(y)} dy +C_1 \\ & = \{2-\ka (k/2)^2\} (-F_0(x)) + C_1. 
\end{align*} Hence, if $ \kappa (k/2)^2 < 2$ then, by \eref{E315d}, we find \begin{align} \int_{x_0}^\infty e^{ \kappa (V- W) - 2F} dx &\le \int_{x_0}^\infty e^{-\{2-\ka (k/2)^2\} F_0(x) + C_1} dx \notag\\ &<\infty. \label{E190} \end{align} By Theorem \ref{thmM}, $\n^*\n +(V-W)$ is bounded below, has a unique ground state $\psi_1 \in L^2(m^F)$, and the ground state measure for $\n^*\n +(V-W)$ satisfies a logarithmic Sobolev inequality. Since, by the consecutive ground state procedure of Section \ref{secgst}, this is the ground state measure for $-d^2/dx^2 + V$, the theorem is proved. \end{proof} \begin{corollary} \label{correlgs} Denote by $m^F$ the intermediate measure defined in Lemma \ref{leminterm} and by $\n^*\n$ its Dirichlet form operator. Let $\psi_1$ be the ground state for $\n^*\n +(V-W)$ in $L^2(m^F)$. Then $\psi_1$ is in $L^p(m^F)$ for all $ p < \infty$. In particular if $f \ge 0$ then \begin{align} \int_\R f dm_\psi < \infty \ \ \text{if} \ \ \ \int_\R f^q dm^F < \infty \ \ \text{for some}\ \ q >1. \end{align} \end{corollary} \begin{proof} As noted in the proof of \eref{E187}, the potential $V-W$ for the relative ground state $\psi_1$ is bounded below. It follows from Corollary \ref{corpolbelow} that $\psi_1 \in \cap_{p<\infty}L^p(m^F)$. Therefore, if $q >1$ and $1/q + 1/p = 1$ then \begin{align} \int_\R f dm_\psi &=\int_\R f \psi_1^2 dm^F \notag \\ &\le \|\psi_1^2\|_{L^p(m^F)} \(\int_\R f^q dm^F\)^{1/q} . \notag \end{align} \end{proof} \begin{remark} \label{rem7.1} {\rm The restriction to an even potential can easily be removed. Suppose that $V \in C^1(\R)$ and that there are constants $a >0$ and $k >0$ and a number $x_0 >0$ such that \begin{align} &a.\ \ (\mathrm{sgn}\, x)\, (d/dx) \sqrt{V(x)} \ge a\ \ \text{when}\ \ |x| \ge x_0 \ \ (\text{Eckmann's\ condition}) \label{E316s}\\ &b.\ \ (\mathrm{sgn}\, x)\, (d/dx) V(x) \le k V(x)\ \ \text{when}\ \ |x| \ge x_0. \label{E311s} \end{align} Then the conclusion of Theorem \ref{thmeck4} holds. 
The proof is the same if one takes into account the change in signs on the negative half-line. } \end{remark} \bigskip Consider a Schr\"odinger operator of the form \beq -d^2/dx^2 + V + V_1 \ \ \text{on}\ \ \R, \label{E699} \eeq in which $V$ satisfies the conditions of Eckmann's theorem, while $V_1$ is merely measurable. If $m$ is the ground state measure for $-d^2/dx^2 +V$ then $m$ is hypercontractive by Eckmann's theorem. The consecutive ground state transformation method together with Theorem \ref{thmM} can in principle be used to show that the ground state measure for the full operator \eref{E699} will be hypercontractive if $V_1$ satisfies exponential bounds of the form \eref{mt2}. But Eckmann's theorem gives only indirect information about the Sobolev coefficient of the measure $m$. In the next corollary we will establish the hypercontractivity of the ground state measure of the operator \eref{E699}, but by replacing $m$ by the explicit intermediate measure $m^F$, thereby getting conditions on $V_1$ which are easily verified in applications. \begin{corollary} \label{corpp5} Suppose that $V$ is a potential that satisfies the conditions of Eckmann's theorem, \eref{E316} and \eref{E311b}. Let $e^{-F}$ be the intermediate ground state for $V$, constructed in Lemma \ref{leminterm}. Denote by $c_F$ the Sobolev constant for the measure $m^F$. Let $V_1$ be a measurable potential such that \begin{align} \int_\R e^{-\nu_1 V_1} dm^F &< \infty\ \ \text{for some}\ \ \nu_1 > 2c_F \ \ \text{and} \label{E701a}\\ \int_\R e^{\ka_1V_1} dm^F &< \infty\ \ \text{for some}\ \ \ka_1 >0. \label{E700a} \end{align} Then the Schr\"odinger operator $-(d/dx)^2 + V+V_1 $ is bounded below, has a unique positive ground state $\psi \in L^2(\R, dx)$ and the ground state measure $\psi^2 dx$ satisfies a logarithmic Sobolev inequality. \end{corollary} \begin{proof} Let $W$ be the intermediate potential defined in \eref{E182a}. 
Writing $V+ V_1 = W + (V+ V_1 - W)$, we may apply the consecutive ground state transformation method to realize the ground state measure for $-d^2/dx^2 + V + V_1$ as the ground state measure for $\n^*\n + (V+V_1 -W)$, where $\n^*\n$ is the Dirichlet form operator for $m^F$. By Theorem \ref{thmM} we then need to show that \begin{align} \int_\R e^{-\nu(V_1 +V-W)} dm^F &< \infty \ \ \text{for some}\ \nu > 2c_F\ \ \text{and} \label{E705}\\ \int_\R e^{\ka(V_1 +V-W)} dm^F &< \infty \ \ \text{for some}\ \ka >0. \label{E706} \end{align} But $V-W$ is bounded below, by \eref{E185} and \eref{E316}. Therefore \eref{E705}, with $\nu =\nu_1$, follows from \eref{E701a}. For the proof of \eref{E706} suppose that $\ka_2>0$ and that \eref{E190} holds with $\ka = \ka_2$; the proof of \eref{E190} shows that any $\ka_2 \in (0, 8/k^2)$ will do. Let $\ka = (1/2)\min(\ka_1, \ka_2)$. Then \begin{align} \int_\R e^{\ka(V_1 +V-W)} dm^F \le\(\int_\R e^{2\ka V_1} dm^F\)^{1/2} \(\int_\R e^{2\ka(V-W)} dm^F\)^{1/2}. \end{align} Since $2\ka \le \ka_1$ the first factor on the right is finite by \eref{E700a}. Since $2\ka \le \ka_2$ the second factor is finite by \eref{E190}. \end{proof} \subsubsection{Second order intermediate state} The next theorem further illustrates the use of Bakry-Emery convexity followed by our perturbation theorem. An additional derivative is assumed for $V$ but a wider variety of growth conditions is permitted. This modification of Eckmann's theorem is based on a second order approximation in the WKB method which is discussed by A. Dicke in Appendix IV of \cite{Simon1970}. We use the second order WKB approximation to construct an intermediate ground state whose potential is closer to the given potential $V$ than that in the preceding method. \begin{theorem} \label{thmeck5} Let $V$ be an even function in $C^1(\R)$. Suppose that there is a number $x_0 >0$ such that, on $[x_0, \infty)$, $V \in C^2$ and $V >0$. Define $F_0$ by \eref{E314} again. 
Let \begin{align} g(x) &= (1/4) (d/dx) \log V(x) \ \ \ \text{for}\ \ x \ge x_0. \label{E340d} \end{align} Suppose that there is a constant $a >0$ such that Eckmann's condition \eref{E316} holds and also, with $F_0$ given by \eref{E314}, assume that \begin{align} &a.\ \ (d/dx) \(\sqrt{V(x)} + g(x) \)\ge a\ \ \text{when}\ \ x \ge x_0 \ \ \label{E360} \\ &b.\ \ g(x)^2 =o(F_0(x)) \ \ \text{as}\ x \to \infty \label{E317c} \\ &c. \ \ |g'(x)| = o(F_0(x))\ \ \text{as}\ x \to \infty. \label{E318c} \end{align} Then, over $\R$, \begin{align} -(d^2/dx^2) + V \label{E320} \end{align} is bounded below. The bottom of the spectrum belongs to a unique positive ground state $\psi$. The ground state measure $m_\psi := \psi^2 dx$ satisfies a logarithmic Sobolev inequality. \end{theorem} Note that the condition \eref{E317c} is weaker than \eref{E311b} since the latter assumes that $g$ is bounded on $[x_0, \infty)$ while the former allows $g$ to be unbounded by virtue of Sublemma \ref{lemqg2}. \bigskip \noindent \begin{proof}[Proof of Theorem \ref{thmeck5}] Let \begin{align} b &= \(\sqrt{ V(x_0)} + g(x_0)\)/x_0 \label{E341c} \end{align} and define \begin{align} F(x) = \begin{cases} &\int_{x_0}^x \(\sqrt{V(s)} +g(s)\) ds + bx_0^2/2 , \ \ \ x\ge x_0 \\ & b x^2/2, \ \ \ \ 0\le x < x_0. \end{cases} \label{E173b} \end{align} Then on $[x_0, \infty)$ we have \begin{align} F'(x) &= \sqrt{V(x)} + g(x) \label{E350} \\ F'^2 &= V(x) + g(x)^2 + 2 g(x) \sqrt{V(x)} \label{E351} \\ -F''(x) &= -(d/dx)\sqrt{V(x)} - g'(x). \label{E352} \end{align} Notice that $2g(x) \sqrt{V(x)} = V'(x)/(2\sqrt{V(x)}) = (d/dx)\sqrt{V(x)}$. The last term in \eref{E351} therefore cancels with the first term in \eref{E352} in the expression for the intermediate potential $W$ to give \begin{align} W&= - F'' + (F')^2 \\ &= - g'(x) + V(x) + g(x)^2 \end{align} over the interval $[x_0, \infty)$. Hence over this interval we have \beq V-W = g'(x) - g(x)^2. 
\label{E353} \eeq It follows from \eref{E317c} and \eref{E318c} that \begin{align} |V-W| = o(F_0(x)) \ \ \text{as}\ \ x \to \pm \infty . \label{E354} \end{align} On the interval $[x_0, \infty)$ we have $g(x)=(1/4)V'(x)/V(x) =$ \linebreak $(1/2)\((d/dx)\sqrt{V(x)}\) / \sqrt{V(x)} \ge 0$. Therefore $ F \ge F_0 +bx_0^2/2$ on this interval. Hence, for any real number $p$ we have, in view of \eref{E354}, \begin{align} p(V-W) -2F &\le p(V-W) -2F_0 - bx_0^2\ \ \ \text{for}\ x\ge x_0 \\ &\le -F_0 - bx_0^2 \ \ \ \text{for large $x$ depending on $p$}. \end{align} Therefore, since $V -W$ is locally bounded, we have \begin{align} \int_\R e^{p(V-W) - 2F} dx < \infty \label{E361} \end{align} for any $p \in \R$. From \eref{E352} and \eref{E360} we see that $F''\ge a$ on $[x_0, \infty)$. From \eref{E173b} we then find that $F'' \ge \min(a,b)$ everywhere except at $x = \pm x_0$. $m^F$ is therefore hypercontractive by the Bakry-Emery theorem. In view of \eref{E361} Theorem \ref{thmeck5} now follows from Theorem \ref{thmM}. \end{proof} \subsubsection{Examples of Eckmann's theorem} \label{secExE} In each of the following examples we consider the one dimensional Schr\"odinger operator \eref{E200}. We take $V$ to be an even function for simplicity. It suffices then to compute derivatives for $x >0$. In the first five examples we will apply Eckmann's Theorem in the form of Theorem \ref{thmeck4}. But in the sixth example the potential grows too rapidly and we must use the more refined theorem, Theorem \ref{thmeck5}, which is based on a second order WKB approximation. \begin{example} \label{exE1} {\rm {\bf(Potential with power growth).} Let $V(x) = \lambda |x|^{2r}$ for some $r \ge 1$ and $\lambda >0$. Choose $x_0 \ge 1$. Then \begin{align} &\ \ (d/dx)\sqrt{V(x)} = \lambda^{1/2}r x^{r-1}\ge \lambda^{1/2} r x_0^{r-1}\ \ \text{when}\ \ x \ge x_0. \label{E370} \end{align} So \eref{E316} holds with $a = \lambda^{1/2} r x_0^{r-1}$. Moreover for $x \ge 1$ we have $(d/dx)V(x) \le 2r V(x)$. 
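The growth bound just stated is immediate from the logarithmic derivative of $V$; for completeness:

```latex
% Worked verification of (d/dx)V(x) <= 2r V(x) for x >= 1,
% where V(x) = \lambda x^{2r} with r >= 1 and \lambda > 0:
\begin{align*}
\frac{d}{dx}V(x) = 2r\lambda x^{2r-1} = \frac{2r}{x}\,V(x) \le 2r\,V(x),
\qquad x \ge 1.
\end{align*}
```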
So \eref{E311b} holds with $k = 2r$ and Theorem \ref{thmeck4} applies. Thus $H$ is bounded below, the bottom of the spectrum is an eigenvalue of multiplicity one belonging to a positive ground state $\psi$, and the ground state measure $\psi^2 dx$ satisfies a logarithmic Sobolev inequality. In this example the intermediate ground state is $e^{-F}$, where \beq F(x) = \lambda^{1/2} \int_{x_0}^x s^r ds + \text{const.} = \lambda^{1/2} x^{r+1}/ (r+1) + \text{const.}\ \ \text{for large}\ x. \label{E371} \eeq } \end{example} \begin{remark}\label{remsuper} {\rm In case $r >1$, \eref{E370} shows that $a$ can be chosen large by choosing $x_0$ large. Moreover $b$, defined in \eref{E314b}, also increases to $\infty$ as $x_0 \uparrow \infty$. Consequently the intermediate measure $m^F$ can be chosen to have an arbitrarily small Sobolev coefficient by \eref{E116}. } \end{remark} \begin{example} \label{exE2} {\rm {\bf (Perturbation of power growth).} Let $V = |x|^{2r} + V_1$ for some $r\ge 1$ and some locally bounded, even, measurable function $V_1$ such that \begin{align} |V_1(x)| = o(|x|^{r+1})\ \ \ \text{as}\ \ |x| \to \infty, \label{E450a} \end{align} or, in case $r>1$, \begin{align} |V_1(x)| = O(|x|^{r+1})\ \ \ \text{as}\ \ |x| \to \infty . \label{E450b} \end{align} Then $-(d/dx)^2 + V$ is bounded below, has a unique positive ground state $\psi$, and the ground state measure $\psi^2 dx$ satisfies a logarithmic Sobolev inequality. \begin{proof} We apply Corollary \ref{corpp5} with $V = |x|^{2r}$. Then \eref{E371}, with $\lambda =1$, determines the behavior near $\infty$ of the density of the intermediate measure $m^F$. If \eref{E450a} holds then we see that $\int_\R e^{ p |V_1|} dm^F < \infty$ for all real $p$. Therefore \eref{E701a} and \eref{E700a} both hold. If only \eref{E450b} holds then the integral in \eref{E701a} is only finite for some $\nu_1 >0$. 
But by Remark \ref{remsuper} a change in $F$ locally (by increasing $x_0$) can produce an $F$ such that $m^F$ has arbitrarily small Sobolev coefficient $c_F$, while \eref{E371} still holds. Choose $x_0$ such that $c_F < (1/2) \nu_1$. Then \eref{E701a} holds. \eref{E700a} holds because $\ka_2$ can be chosen arbitrarily small. \end{proof} } \end{example} \begin{example}\label{exE3} {\rm {\bf (Polynomial potential).} Let $V(x) = \sum_{j=0}^n a_jx^{2j}$ be an even polynomial with $a_n >0$. Choose $x_0 >0$ so that $V(x) \ge 1$ for $x \ge x_0$. Since $(d/dx) \sqrt{V(x)} =(1/2) V'(x)/\sqrt{V(x)}$ behaves, for large $x$, like $n a_nx^{2n-1}/(\sqrt{a_n} x^n) = n\sqrt{a_n} x^{n-1}$, we can choose $x_0$ so that $(d/dx) \sqrt{V(x)} \ge a$ on $[x_0, \infty)$ for some $a >0$. Moreover $V'(x)/V(x) \to 0$ as $x \to \infty$. So Theorem \ref{thmeck4} applies. } \end{example} \begin{example} \label{exE4} {\rm {\bf (Potential with slow growth).} Suppose that $v:[0,\infty) \to (0, \infty)$ is $C^1$ and $0 < v' \le c < \infty$. Let $V(x) =x^2 v(x)^2$ for $x \ge 0$ and extend $V$ to be even on $\R$. Then, for $x >0$ we have \begin{align} (d/dx) \sqrt{V(x)} = (d/dx)\(x v(x)\) = v(x) + x v'(x). \end{align} Let $a = v(1)$. Then $(d/dx) \sqrt{V(x)} \ge a$ when $x \ge 1$ because $v$ is increasing. Moreover $V'(x) =2xv(x)^2 + 2x^2 v(x)v'(x) \le 2x^2 v(x)^2 + 2cx^2v(x) \le 2(1 +(c/a))x^2 v(x)^2$ for $x \ge 1$. Therefore Theorem \ref{thmeck4} applies. The ground state measure is hypercontractive. In particular this example includes the cases $v(x) = (\log(3+x))^{b}$ with $b >0$ since $0 < v' \le c$ on $[0,\infty)$ for some constant $c<\infty$. Therefore the potentials $V(x) = x^2(\log(3+|x|))^{2b}$ have hypercontractive ground state measures, $m_\psi$. Davies and Simon, \cite{DS84}, have shown that $m_\psi$ is ultracontractive if $2b >1$ but not if $2b \le 1$. Our method does not distinguish these two cases. 
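For the logarithmic family the hypothesis on $v'$ can be checked directly; the computation is recorded here for illustration:

```latex
% For v(x) = (\log(3+x))^b with b > 0:
\begin{align*}
v'(x) = \frac{b\,(\log(3+x))^{b-1}}{3+x} > 0 \qquad \text{on}\ [0,\infty).
\end{align*}
```

Since $v'$ is continuous on $[0,\infty)$ and tends to $0$ as $x \to \infty$, it is bounded above by some constant $c < \infty$, so $0 < v' \le c$ as required.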
} \end{example} \begin{example} \label{exE5}{\rm {\bf (Potential with exponential growth).} Let $c >0$, let $V(x) = e^{2c|x|}$ for $|x| \ge 1$, and let $V$ be smooth and even on $[-2,2]$. Choose $x_0 >1$. Then \begin{align} &\qquad (d/dx)\sqrt{V(x)} = ce^{cx} \ge ce^{cx_0} \ \ \text{when}\ \ x \ge x_0. \end{align} So \eref{E316} holds with $a = ce^{cx_0}$. Moreover \eref{E311b} holds because $V'(x) = 2c V(x)$ for $x \ge x_0$. Theorem \ref{thmeck4} therefore applies. } \end{example} \begin{example}\label{exE6} {\rm {\bf (Potential with very rapid growth).} In the following example the potential grows too rapidly to satisfy the growth condition \eref{E311b}. But the refined version of Eckmann's theorem, Theorem \ref{thmeck5}, applies: Let $\alpha >0$ and let $V(x) = e^{2\alpha x^2}$. Choose $x_0 >0$. Then, for $x \ge x_0$, we have \begin{align} &\qquad (d/dx)\sqrt{V(x)} = 2\alpha x e^{\alpha x^2} \ge 2\alpha x_0 e^{\alpha x_0^2} \\ &\qquad g(x) = \alpha x,\ g'(x) = \alpha.\\ &\qquad F_0(x) = \int_{x_0}^x e^{\alpha s^2} ds \end{align} Eckmann's condition, \eref{E316}, holds with any choice of $x_0 >0$. But the growth limitation \eref{E311b} does not hold because $g$ is unbounded. Nevertheless \eref{E360}, \eref{E317c} and \eref{E318c} hold and Theorem \ref{thmeck5} applies. } \end{example} \begin{remark} \label{reminap} {\rm (Inapplicability to intrinsic ultracontractivity). Davies and Simon introduced in \cite{DS84} the terminology ``intrinsically ultracontractive'' to refer to a Schr\"odinger operator $H = -\Delta + V$ on $\R^n$ for which $e^{-t\hat H}:L^2(\psi^2 dx) \to L^\infty$ is bounded for all $t >0$, where $\hat H$ is the ground state transform of $H$. Since the Schr\"odinger operators of interest for ultracontractivity typically have a mass gap, an intrinsically ultracontractive Schr\"odinger operator will also be intrinsically hypercontractive by virtue of the Glimm-Segal-Rothaus theorem. 
An intrinsically hypercontractive Schr\"odinger operator, however, need not be intrinsically ultracontractive, as we see in the harmonic oscillator. There is a large literature proving and exploiting intrinsic ultracontractivity, both for Schr\"odinger operators and for Dirichlet Laplacians on open subsets of $\R^n$. The proofs usually depend on dimension dependent estimates of the defect in the defective LSI for the ground state measure. These in turn depend on delicate pointwise estimates of the ground state, near infinity in the case of Schr\"odinger operators, or near the boundary in the case of the Dirichlet Laplacian. Intrinsic ultracontractivity is qualitatively stronger than intrinsic hypercontractivity because hypercontractivity only yields bounds on $\|e^{-t\hat H}\|_{2\to p}$ for $p < \infty$. There is a trade-off between dimension independence and sup-norm bounds. Our techniques in this paper are aimed at dimension independence. A qualitative distinction between intrinsic hypercontractivity and intrinsic ultracontractivity is already apparent in Example \ref{exunb}, which describes a bounded potential whose addition to a hypercontractive Schr\"odinger operator $\n^*\n$ yields a ground state $\psi$ for which $\psi$ and $\psi^{-1}$ are both unbounded. But \cite[Theorem 3.4]{DS84} shows that the ground state for the perturbation of an intrinsically ultracontractive Schr\"odinger operator by a bounded potential is always bounded and bounded away from zero. In the literature on intrinsic ultracontractivity there are parallel versions of our Example \ref{exE2}. Their assumptions on the potential $V$ take the form $f(x) \le V(x) \le g(x)$, where $f$ and $g$ are specified. See e.g. \cite[Proposition 5.5]{Car79}, \cite[Theorem 6.3]{DS84}, \cite[Lemma 4.5.1]{Da89} and \cite{Ci2}. 
For an avenue into this large literature on intrinsic ultracontractivity see the early papers Rosen, \cite{Ros}, Davies and Simon, \cite{DS84}, Davies, \cite{Da83,Da85,Da89}, Carmona, \cite{Car78,Car79}, Banuelos, \cite{Ban1991}, Murata, \cite{Mu1993}, Cowling and Meda, \cite{CoMe1993}, Lianantonakis, \cite{Lian1993}, Cipriani, \cite{Ci1,Ci2,CiG95}, Z-Q. Chen and R. Song, \cite{ChenZQ1997}, Tomisaki, \cite{Tom2007} and the citation lists for these papers in Mathematical Reviews. } \end{remark} \bigskip \begin{remark}\label{reminfdim}{\rm The bounds we have obtained in the general theory are dimension independent. This reflects the fact that it was not necessary to use the classical Sobolev inequalities. Eckmann's theorem is essentially one dimensional, although Eckmann also applied his methods to radial potentials over $\R^n$. We will see later, in the toy model for the quantum field $\phi^4_2$, how dimension independence can be expected to be used. But to emphasize the dimension independence in a simple, though artificial, example, consider the Dirichlet form operator over an abstract Wiener space $(H, B, m)$, where $H$ is a real separable Hilbert space densely embedded in a Banach space $B$ and $m$ is the centered Gaussian measure on $B$ with covariance given by the inner product of $H$, \cite{G1967a}. Denoting by $\n$ the gradient of functions on $B$ associated to differentiation only in $H$ directions, referred to as the $H$ derivative in \cite{G1967b}, the Dirichlet form operator $\n^*\n$ is densely defined in $L^2(B,m)$ and is the well-known number operator of quantum field theory. The logarithmic Sobolev inequality $Ent_{m}(f^2) \le 2 \int_B |\n f|^2 dm$ holds on $B$ because it reduces to the $n$ dimensional Gaussian LSI in case $f$ is a cylinder function based on some $n$ dimensional subspace of $H$, while these functions form a core for $\n^*\n$. 
The arguments in Sections \ref{secesa} and \ref{secEU}, showing that if \eref{mt2} holds for some potential $V$ on $B$ then the Schr\"odinger operator $\n^*\n + V$ has a unique ground state $\psi$ which is strictly positive almost everywhere, apply with no essential change even though $B$ is not finite dimensional. Theorem \ref{thmM} also applies, from which we can conclude that $Ent_{m_\psi}(f^2) \le 2c_1 \int_B |\n f|^2 dm_\psi$ for a constant $c_1$ computed as in Section \ref{sectightening}. For example, denoting by $\|\cdot\|$ the $B$ norm, the potentials $V(x) = \|x\|^{\beta}, 0\le \beta \le 2$ and $V(x) = \|x\| \sin \|x\|$ both satisfy the condition \eref{mt2}. One need only use Fernique's theorem on the distribution of $\|\cdot\|$, \cite[Theorem 3.1]{Kuo}, or \cite[Theorem 3.6]{AMS94}, for both examples. } \end{remark} \begin{remark} {\rm In his paper, \cite{Eck74}, Eckmann also allowed potentials which have a strong singularity at zero. The techniques that we have been exploiting are not appropriate for such singularities. } \end{remark} \subsection{Irregular potential over $\R$} \label{secwp} \begin{remark}\label{remcombined}{\rm (Conditions on $\xi F - V$). We have been concerned with the probability measure $m^F \equiv e^{-2F}m$ only when $F = - \log \psi$, where $\psi$ is the ground state of a given Schr\"odinger operator $\n^*\n +V$. But, as mentioned in the Introduction, there is a large literature in which $F$ is given and is the primary object of interest, rather than the potential $V$. In that case one is interested in conditions on $F$ itself which ensure that $m^F$ is hypercontractive. The relation between these two problems was first discussed by Kusuoka and Stroock, \cite{KS85}, which appeared about the same time as the paper of Davies and Simon that introduced intrinsic hypercontractivity, \cite{DS84}. 
Kusuoka and Stroock explained that if, given $F$, one defines an artificial potential by $V_F = \n^*\n F + |\n F|^2$ then, taking Davies and Simon's given potential $V$ to be $V_F$, the hypotheses in both papers are very similar. In particular, they both depend on information about the combination $\xi F - V$ for various values of a real number $\xi$. It can already be seen from \eref{gs720b} that if $F - cV$ is bounded above then $m_\psi$ satisfies a defective logarithmic Sobolev inequality if $m$ does. Conditions which impose bounds on $\xi F - V$ from above figure prominently in either hypotheses or intermediate steps in the early papers Rosen \cite{Ros}, Carmona \cite{Car79}, Davies and Simon, \cite{DS84}, Kusuoka and Stroock, \cite{KS85}. Davies's book, \cite{Da89}, also gives a self-contained exposition of parts of this material. Cattiaux, \cite[Section 5]{Cat2005}, takes $F$ as the primary object and imposes upper bounds on $\xi F- V_F$ as well as various integral bounds. $V_F$ arises naturally in his paper because the Girsanov formula for change of density by $e^{-F}$ relates closely to the Feynman-Kac formula for $V_F$. See Carmona \cite{Car79a} for a discussion of this relation. Cattiaux also gives another kind of mixed condition which shows how close our condition $\|\psi^{-1}\|_{L^p(m)} < \infty$ for proving a DLSI comes to being necessary. In \cite[Theorem 2.5]{Cat2005} he shows that if $V_F$ is bounded below then for a DLSI to hold it is necessary and sufficient that $\psi^{-1} e^{-t(\n^*\n + V_F)} 1 \in L^p(m^F)$ for some $t >0$ and $ p > 2$. For $t = 0$ this reduces to Aida's condition. Carlen and Loss, \cite{Ca2004}, also assume $F - cV_F$ is bounded above to show that $e^{-2F} d^nx$ satisfies a logarithmic Sobolev inequality. Their method is based on a perturbation of the known Euclidean logarithmic Sobolev inequality. 
Just such a combination of $F$ and $V$ is also used in the book of Bakry, Gentil and Ledoux, \cite[Section 7.3]{BGL}, to determine a growth function for a general class of entropy-energy inequalities. Bartier and Dolbeault \cite{BaDo2006}, in their perturbation theorem for a measure $m \equiv e^{-W}d^nx$ by a density $e^{-2F}$, assume that $V_F$ is bounded below and that $F$ is bounded above to perturb a logarithmic Sobolev inequality, and they also show that one can perturb the inequalities of Beckner, \cite{Beckner1989}, that are intermediate between Poincar\'e and LSI by assuming again that $V_F$ is bounded below and that $F\vee 0$ is in a suitable $L^p(m)$ class. Another kind of hypothesis involving a combination of $F$ and $V_F$ is given by F.Y. Wang in \cite[Equ. (5.4)]{Wang2001}. He assumes that $\int \exp(\epsilon F -c_\epsilon V_F) dm < \infty$ for suitable $\epsilon$ and $c_\epsilon$. A change of density by a given factor $e^{-2F}$ arises also for purely geometric reasons over Riemannian manifolds. In the papers \cite{CLR2015}, \cite{CR2019}, Charalambous, Lu and Rowlett use the ground state transformation to gain information about the spectrum of the Dirichlet form operator for the measure with density $e^{-2F}$ with respect to Riemann-Lebesgue measure. They transform the problem into the study of the Schr\"odinger operator $-\Delta + V_F$ and impose a uniform bound on $V_F$. This is a very natural condition in this context. } \end{remark} Theorem \ref{thmM} shows that only conditions involving means of the perturbing potential $V$ are required to produce a hypercontractive ground state measure. The following one dimensional example of an irregular potential emphasizes this fact and at the same time shows that $V$ can be so badly unbounded below that a combined condition such as $\sup (\xi F - V) < \infty$ can fail over every interval, even though the ground state measure for $-d^2/dx^2+V$ is hypercontractive. \begin{example} {\rm (Irregular potential). 
We will construct a potential $V$ over $\R$ which is unbounded below on every interval but such that the Schr\"odinger operator $H \equiv -d^2/dx^2 + V$ is bounded below, has an eigenvalue at the bottom of its spectrum, and has a unique continuous ground state $\psi > 0$ a.e. for which the ground state measure $\psi^2 dx$ satisfies a logarithmic Sobolev inequality. In particular $\xi F - V$ is not bounded above on any interval for any real number $\xi$. Let $r_1, r_2, \dots$ be an enumeration of the rational numbers in $[0,1)$. Define $f(x) =\sum_{j=1}^\infty 2^{-j} |x- r_j|^{-1/2}$ for $0\le x < 1$ and $f(x) = 0$ elsewhere. Then $f$ is unbounded above on every open set in $ [0,1)$. But $\int_0^1 f(x)dx \le 4$ because $\int_0^1 |x-r|^{-1/2} dx \le 2 \int_0^1 x^{-1/2} dx = 4$. So $f$ is finite a.e. Let $b >0$ and define \begin{align} V_1(x) = -(1/6)\sum_{n= -\infty}^\infty \log (1 + b2^{-|n|} f(x-n)). \label{Ir5} \end{align} The terms in the sum have disjoint supports. Clearly $V_1$ is non-positive and is unbounded below on every interval. Define \beq V(x) = x^2/4 +V_1(x). \eeq $V$ is unbounded below on every interval. We will use Theorem \ref{thmM} and the consecutive ground state transformation procedure of Section \ref{secgst} to prove that $-d^2/dx^2 +V$ has the properties claimed above. Let $m \equiv (2\pi)^{-1/2} e^{-x^2/2} dx$ be the standard Gaussian measure on $\R$ of variance $1$. Then $m$ satisfies the LSI \eref{mt1} with $c =1$, \cite{G1}. Moreover $m$ is the ground state measure for the Schr\"odinger operator $-d^2/dx^2 + x^2/4$ because $\sqrt{dm/dx}$ is the ground state, as one can easily compute. By the consecutive ground state procedure of Section \ref{secgst} it therefore suffices to check the two conditions \eref{mt2} for $m$ and $V_1$. 
In view of the disjoint supports of the terms in \eref{Ir5} we find, first for any $\nu > 0$ and then taking $\nu =6$, that \begin{align*} \int_\R &e^{-\nu V_1(x)} dm(x) = \sum_{-\infty}^\infty \int_n^{n+1}(1+b2^{-|n|}f(x-n))^{\nu/6} dm(x)\\ &=\sum_{-\infty}^\infty \int_n^{n+1} 1 dm + \sum_{-\infty}^\infty b 2^{-|n|}\int_n^{n+1} f(x-n) dm(x) \ \ \text{if}\ \nu =6\\ &\le 1 + b\sum_{-\infty}^\infty 2^{-|n|} \int_n^{n+1} f(x-n) dx \\ &\le 1 + 12b . \end{align*} Since $6 > 2 = 2c$ the second condition in \eref{mt2} is satisfied for $m, V_1$. The first condition is automatic because $V_1 \le 0$. Therefore $H$ has a unique ground state $\psi$ whose ground state measure $\psi^2 dx$ satisfies a logarithmic Sobolev inequality. Moreover $\psi$ is given by $\psi(x) = (2\pi)^{-1/4} e^{-x^2/4} \psi_1(x)$ where $\psi_1$ is the ground state for $\n^*\n + V_1$ and $\n^*\n$ is the Dirichlet form operator of $m$. In particular $\int_\R \psi_1'(x)^2 dm(x) < \infty$. So $\psi_1 \in H_{1, loc}(\R)$ and one can take $\psi_1$ to be continuous. Consequently $\psi >0$ a.e. and is continuous. Hence $F \equiv - \log \psi$ is continuous on some neighborhood of any point where $\psi(x) >0$ and bounded on some smaller neighborhood. Since $V$ is unbounded below on every interval, $\xi F - V$ is therefore unbounded above on any open interval. The same argument shows that in the intermediate space $L^2(\R, dm)$, the combination $\xi F_1 - V_1$ is also unbounded above on any open interval for any real number $\xi$, where $F_1 = - \log \psi_1$. 
Hebisch and Zegarlinski, \cite[Proposition A.1]{HZ1}, have given a dramatic example showing that even if a density on $\R$ is sandwiched between two densities that give a Poincar\'e inequality, the sandwiched density need not give one. Bakry, Ledoux and Wang, \cite{BLW2007}, have explored pure growth conditions on $F$ which ensure that $m^F$ satisfies a slightly weaker functional inequality than $m$ does, within a scale of inequalities interpolating between a Poincar\'e inequality and a logarithmic Sobolev inequality. But their results suggest that invariance of LSI under some reasonable class of unbounded pointwise perturbations of $F$ may not hold. The perturbation theorem of Barthe and Milman \cite{Barthe13} does not fall into any of these three categories of perturbation theorems. They consider, e.g., two measures on $\R^n$, $\mu_i = e^{-2F_i} dx$, $i = 1,2$, and assume that $\mu_1$ satisfies a logarithmic Sobolev inequality while $\mathrm{Hess}(F_2) \ge -\kappa$ for some $\kappa \ge 0$. The latter condition says, roughly, that $F_2$ is not too badly non-convex. They show that even though the Bakry-Emery condition (which requires $\kappa <0$) fails, nevertheless $\mu_2$ is hypercontractive if $e^{-2F_2} - e^{-2F_1}$ is small in a suitable $L^p$ sense. In this perturbation theorem a part of the hypothesis is placed directly on the perturbed measure $\mu_2$. Their paper contains a good recent exposition of the use of logarithmic Sobolev inequalities in classical statistical mechanics and gives references to related work in this large literature on spin systems. In this context it is natural to impose conditions directly on $F$, which is the Hamiltonian for a finite lattice spin system. There are several other works imposing conditions directly on $F$, which assume some differentiability of $F$ but are quite different from the conditions discussed in Remark \ref{remcombined} in that the artificial potential $V_F$ is not used. 
Typical of these are theorems imposing integrability with respect to $m$ of $\exp(|\n F|^2)$. See Aida's Remark 4.13 in \cite{Aida2001} for a comparison of these conditions. Royer, \cite[Theorem 3.2.7]{Roy07}, shows by a simple proof that $m^F$ is hypercontractive whenever $|\n F|$ is bounded and $m$ itself is a measure on $\R^n$ of the form $e^{-2F_1} d^n x$ for some uniformly convex function $F_1$. } \end{remark} \subsection{Non-convexity} We combined the convexity techniques of the Bakry-Emery method and the perturbation theorem of this paper to deduce Eckmann's theorem over $\R$ in Section \ref{secEck}. But the density of the ground state measure for a Schr\"odinger operator over $\R$ need not be log-concave in order for the ground state measure to be hypercontractive. Malrieu and Roberto \cite[Theorem 6.4.3]{Ane} have given an example of a density on the line which is far from log-concave but whose measure is hypercontractive. Cattiaux, \cite[Example 5.5]{Cat2005}, further illuminated this example. Here we show how the hypercontractivity in their example can be deduced from Theorem \ref{thmM}. \begin{example} \label{MRo-Cat}{\rm (The example of Malrieu-Roberto, \cite[Theorem 6.4.3]{Ane}). Let \begin{align} F(x) &= x^2 + \beta x \sin x + C, \ \ \ x, \beta \in \R, \label{C5} \end{align} and define $\mu = e^{-2F} dx$. It is clear that $\mu$ is a normalizable measure on $\R$. We will ignore the normalization constant because it drops out in all of our calculations. We have $F' = 2x + \beta x \cos x + \beta \sin x$ and $F'' = 2 + 2\beta \cos x - \beta x \sin x$. Clearly $\liminf_{x \to \infty} F''(x) = -\infty$. So $F$ is not convex outside of any bounded set. Malrieu-Roberto \cite[Theorem 6.4.3]{Ane} and Cattiaux \cite[Example 5.5]{Cat2005} have shown that $\mu$ is hypercontractive if and only if $|\beta| < 2$. We illustrate how our methods show that $|\beta| < 2$ is sufficient for hypercontractivity.
According to the WKB equation, \eref{W7}, $e^{-F}$ is the ground state for the potential \begin{align} V_F &\equiv -F'' + (F')^2 \notag\\ & = \beta x\sin x -\( 2 + 2\beta \cos x \) + \( x(2 + \beta \cos x) + \beta \sin x\)^2 \notag \\ & =x^2(2 + \beta \cos x)^2 +W \label{C8} \end{align} where $W$ grows at most linearly: $|W(x)| \le c_1 |x| + c_2$. Suppose that $|\beta| < 2$. Let $V_0 =x^2(2- |\beta|)^2$ and let $U = x^2(2 + \beta \cos x)^2 -x^2(2- |\beta|)^2$. Then $V_F = V_0 + U + W$. Moreover $0\le U \le x^2 (2+|\beta|)^2$. Let $V_1 = U+W$. Then \begin{align} V_F = V_0 + V_1. \label{C10} \end{align} The ground state measure for the quadratic potential $V_0$ is the Gaussian measure $dm = (2\pi c)^{-1/2} e^{-x^2/(2c)} dx$ where $c^{-1} = 2 - |\beta|$. By the consecutive ground state transformation procedure of Section \ref{secgst} we need only show that $V_1$ satisfies the two conditions \eref{mt2} for some $\nu > 2c$ and some $\ka >0$. But since $U \ge 0$ and $W$ grows at most linearly we have $\int_\R e^{-\nu V_1} dm < \infty$ for all $\nu >0$. Moreover $\int_\R e^{\ka V_1} dm < \infty$ whenever $\ka (2+|\beta|)^2 < (2c)^{-1}$. Therefore we may apply Theorem \ref{thmM} to find that $\mu$ is hypercontractive. } \end{example} \begin{remark} {\rm (Inapplicability of Eckmann's method in this example). Eckmann's method relies in part on non-oscillation of the given potential. In our example the potential whose ground state measure is $\mu$ is given by \eref{C8} and its derivative, $V_F'$, contains the highly oscillatory, quadratically growing term $-2\beta x^2(2 + \beta \cos x) \sin x$, while $\sqrt{V_F}$ increases at most like $|x|$. The condition \eref{E316} therefore fails.
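The oscillatory term can be checked directly: differentiating the leading term of \eref{C8} gives (a routine computation; the contribution of $W$ to $V_F'$ is omitted since it grows at most linearly)

```latex
% Derivative of the leading term x^2 (2 + beta cos x)^2 of V_F in (C8);
% the terms coming from W contribute at most linear growth to V_F'.
\begin{align*}
\frac{d}{dx}\, x^2(2 + \beta \cos x)^2
= 2x(2 + \beta \cos x)^2 - 2\beta x^2(2 + \beta \cos x)\sin x .
\end{align*}
```

The second term on the right oscillates with amplitude growing like $x^2$, while $V_F(x)$ itself grows only like $x^2$, so that $\sqrt{V_F}$ grows only like $|x|$.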
} \end{remark} \begin{remark} {\rm Since the example can be presented as the Gaussian measure $dm_0 \equiv Z^{-1}e^{-2x^2}dx$ with an additional density $e^{-2\beta x\sin x}$, as suggested by \eref{C5}, it is possible to identify the measure in the example as the ground state measure for the Schr\"odinger operator $\n^*\n + V_G$ in $L^2(m_0)$, where $\n^*$ is computed in $L^2(m_0)$ and $V_G = \n^*\n G + |\n G|^2$ with $G = \beta x \sin x$, in accordance with \eref{W7} and Lemma \ref{lemcgst}. A computation shows that this approach will work only if $|\beta| <1$. The choice of decomposition of the potential $V_F$ given in \eref{C10}, for the application of the method of consecutive transformations, therefore greatly affects the outcome. } \end{remark} \subsection{A Toy Model} Take $X= \R^n$. Define \begin{align} H = -\Delta +(Ax,x) +\l \sum_{j=1}^n x_j^4, \ \ \ \l >0. \label{tm5} \end{align} Here $(Ax,x)$ is any quadratic form, not necessarily positive. \begin{theorem} \label{thmtm1} The Hamiltonian \eref{tm5} is bounded below. The bottom of its spectrum is an eigenvalue of multiplicity one. It has a unique strictly positive ground state $\psi$. Let \begin{align} m_\psi = \psi^2 d^nx \ \ \ \text{on}\ \ \R^n. \label{tm7} \end{align} Then there is a constant $c_1 < \infty$ such that \begin{align} Ent_{m_\psi}(u^2) \le 2c_1 \int_{\R^n} |\n u|^2 dm_\psi. \label{tm9} \end{align} \end{theorem} \begin{proof} By Example \ref{exE1} the Hamiltonian $-(d/dx)^2 + \l x^4$ has a unique positive ground state $\psi_0$ and the measure $dm_{\psi_0} \equiv \psi_0^2 dx$ satisfies an LSI \begin{align} Ent_{m_{\psi_0}} (u^2) \le 2c \int_\R u'(x)^2 dm_{\psi_0}(x). \end{align} Let $m(dx_1,\dots, dx_n) = m_{\psi_0}(dx_1) \cdots m_{\psi_0}(dx_n)$. By the additivity property of logarithmic Sobolev inequalities, \cite{G1}, \cite{F75} or \cite[Theorem 2.3]{G1993}, the measure $m$ satisfies \begin{align} Ent_{m} (f^2) \le 2c \int_{\R^n} |\n f(x)|^2 dm(x).
\end{align} We will apply the perturbation theorem, Theorem \ref{thmM}, to $m$ with potential $V(x) = (Ax,x)$. The consecutive ground state procedure of Section \ref{secgst} then shows that the ground state of the Hamiltonian $H$ is hypercontractive. To verify the hypotheses \eref{mt2} for $e^{\pm V}$ we make the crude estimate $|(Ax,x)| \le b |x|^2$, where $b$ is the operator norm of $A$ over $\R^n$. For any real number $\alpha >0$ we have \begin{align} \int_{\R^n} e^{\alpha |(Ax,x)| }dm &\le \int_{\R^n} e^{\alpha b |x|^2} dm \\ &= \(\int_{\R} e^{\alpha b x_1^2} dm_{\psi_0}(x_1)\)^n. \end{align} To show that this is finite it suffices, by Corollary \ref{correlgs}, to show that $\int_\R e^{\alpha b x_1^2} dm^F(x_1) < \infty$ for all positive $\alpha$, where $m^F$ is the intermediate ground state measure in the construction of $m_{\psi_0}$. According to \eref{E371}, $m^F$ has an even density proportional to $e^{-2\sqrt{\lambda}x^3/3}$ for large positive $x$. Since $\int^{\infty} e^{\alpha b x^2 -2\sqrt{\lambda} x^3/3} dx < \infty$ for all real $\alpha$, the integral $\int_\R e^{\alpha b x_1^2} dm^F(x_1)$ is finite, and therefore $\int_{\R^n}e^{\alpha |(Ax,x)| }dm <\infty$ for all real $\alpha$. We can therefore apply Theorem \ref{thmM} since \eref{mt2} holds for all real $\nu$ and $\kappa$. \end{proof} \bigskip For the significance of this example to $\phi^4$ models, note that the Hamiltonians \eref{tm5} include Hamiltonians of the form $H = -\Delta +(Bx,x) +\l \sum_{j=1}^n (x_j^4- a x_j^2), \ \ \ \l >0, a >0 $. \section{Bibliography} \bibliographystyle{amsplain} \bibliography{ymh1} \end{document}
2412.20353v1
http://arxiv.org/abs/2412.20353v1
Rigidity and regularity for almost homogeneous spaces with Ricci curvature bounds
\documentclass[twoside,a4paper,11pt]{amsart} \usepackage[twoside,asymmetric,bottom=3.5cm,bindingoffset=0pt,nomarginpar]{geometry} \usepackage{mathtools} \usepackage{amsmath} \allowdisplaybreaks[4] \usepackage{amsthm} \usepackage{amssymb} \usepackage{mathrsfs} \usepackage[all]{xy} \usepackage{tikz,tikz-cd} \usepackage[bottom]{footmisc} \usepackage{bm}\usepackage{fancyhdr}\pagestyle{plain} \usepackage{geometry} \usepackage{hyperref} \usepackage{cite} \usepackage{enumitem} \usepackage{verbatim} \usepackage{makecell} \usepackage{fancyhdr} \setlist[enumerate]{label=(\arabic*)} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{fact}[theorem]{Fact} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{lemdef}[theorem]{Lemma and Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{claim}[theorem]{Claim} \newtheorem{assumption}[theorem]{Assumption} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark}\newtheorem{example}[theorem]{Example} \newtheorem{remarks}[theorem]{Remarks} \newtheorem*{notation}{Notation} \newtheorem{convention}[theorem]{Convention} \newtheorem*{jjj}{Remark} \newtheorem*{ak}{Acknowledgements} \DeclareMathOperator{\md}{md} \newcommand{\hU}{\hat{U}} \global\long\def\G{\mathsf{G}} \global\long\def\H{\mathsf{H}} \global\long\def\ord{\operatorname{ord}} \newcommand{\R}{\mathbb{R}} \newcommand{\RR}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\vphi}{\varphi} \DeclareMathOperator{\Int}{Int} \DeclareSymbolFont{AMSb}{U}{msb}{m}{n} \DeclareMathSymbol{\Z}{\mathalpha}{AMSb}{"5A} \DeclareMathSymbol{\D}{\mathalpha}{AMSb}{"44} \DeclareMathSymbol{\s}{\mathalpha}{AMSb}{"53} \numberwithin{equation}{section} \allowdisplaybreaks \newcommand{\nn}{\mathcal{N}} \newcommand{\sfd}{\mathsf{d}} \newcommand{\orb}{\mathcal{O}} 
\newcommand{\Or}{\mathcal{O}} \newcommand{\defeq}{\mathrel{\mathop:}=} \newcommand{\Ker}{{\text{Ker}}} \newcommand{\rank}{{\text{rank}}} \newcommand{\inj}{{\text{inj}}} \newcommand{\Lip}{{\text {Lip}}} \newcommand{\diam}{{\text {diam}}} \newcommand{\dist}{\mathsf d} \newcommand{\vol }{{\text {Vol}}} \newcommand{\supp}{{\text {supp}}} \newcommand{\esssdim}{{\text {essdim}}} \newcommand{\bet}{\text{b}_1} \newcommand{\what}{\textcolor{red}{(What?)}} \newcommand{\why}{\textcolor{red}{(Why?)}} \newcommand{\how}{\textcolor{red}{(How?)}} \newcommand{\Id}{\text{Id}} \newcommand{\iso}{\text{Iso}} \newcommand{\vv}{\text{v}} \newcommand{\ww}{\text{w}} \newcommand{\dd}{\mathsf d} \newcommand{\di}{\text{d}} \newcommand{\thi}{\text{thick}} \newcommand{\chee}{\textbf{Ch}} \newcommand{\sobo}{H^{1,2}} \newcommand{\loc}{\text{loc}} \newcommand{\eps}{\epsilon} \newcommand{\rcd}{\text{RCD}} \newcommand{\prob}{\text{Prob}} \newcommand{\cdkn}{\text{CD}(K,N)} \newcommand{\haus}{\mathcal{H}} \newcommand{\meas}{\mathfrak{m}} \newcommand{\LIP}{\text{LIP}} \newcommand{\RCD}{\text{RCD}} \newcommand{\Ch}{\text{Ch}} \newcommand{\Ric}{\text{Ric}} \newcommand{\mm}{\mathfrak{m}} \newcommand{\C}{\mathbb{C}} \newcommand{\pghd}{\mathsf{d}_{pGH}} \newcommand{\ghd}{\mathsf{d}_{GH}} \DeclareMathOperator{\lip}{lip} \newcommand{\op}[1]{\operatorname{#1}} \setlength{\headheight}{14.0pt} \pagestyle{fancy} \fancyhf{} \fancyhead{} \renewcommand{\headrulewidth}{0pt} \fancyhead[LO]{Rigidity and regularity for almost homogeneous spaces with Ricci curvature bounds} \fancyhead[RE]{Xin Qian} \fancyhead[RO]{\thepage} \fancyhead[LE]{\thepage} \begin{document} \title[Rigidity and regularity for almost homogeneous spaces with Ricci curvature bounds] {Rigidity and regularity for almost homogeneous spaces with Ricci curvature bounds} \author{Xin Qian} \address{School of Mathematical Sciences, Fudan University, Shanghai China} \curraddr{} \email{[email protected]} \begin{abstract} We say that a metric space $X$ is 
$(\epsilon,G)$-homogeneous if $G\leq\op{Iso}(X)$ is a discrete group of isometries with $\op{diam}(X/G)\leq\epsilon$.\ A sequence of $(\epsilon_i,G_i)$-homogeneous spaces $X_i$ with $\epsilon_i\to0$ is called a sequence of almost homogeneous spaces. In this paper we show that the Gromov-Hausdorff limit of a sequence of almost homogeneous RCD$(K,N)$ spaces must be a nilpotent Lie group with $\Ric \geq K$.\ We also obtain a topological rigidity theorem for $(\epsilon,G)$-homogeneous RCD$(K,N)$ spaces, which generalizes a recent result by Wang.\ Indeed, if $X$ is an $(\epsilon,G)$-homogeneous RCD$(K,N)$ space and $G$ is an almost-crystallographic group, then $X/G$ is bi-H\"older to an infranil orbifold.\ Moreover, we study $(\epsilon,G)$-homogeneous spaces in the smooth setting and prove rigidity and $\epsilon$-regularity theorems for Riemannian orbifolds with Einstein metrics and bounded Ricci curvatures, respectively. \end{abstract} \maketitle \tableofcontents \vspace{-5pt} \section{Introduction}\label{section-0} A classical result of Gromov \cite{gromov1978almost} (refined by Ruh \cite{ruh1982almost}) on almost flat manifolds states that for any integer $n\geq2$, there exist $\epsilon(n), C(n)>0$ such that if a closed $n$-manifold $(M,g)$ satisfies $\op{diam}(M,g)\leq\epsilon(n)$ and $|\op{sec}_g|\leq 1$, then $M$ is diffeomorphic to an infranilmanifold $\nn/\Gamma$, where $\nn$ is a simply connected nilpotent Lie group, and $\Gamma$ is a torsion free discrete subgroup of the affine group $\nn\rtimes\op{Aut}(\nn)$ such that $[\Gamma:\Gamma\cap \nn]\le C(n)$.\ This topological control fails if one works on manifolds with bounded Ricci curvature, since even the topology of compact Ricci-flat manifolds can exhibit considerable complexity.
However, the nilpotent structure still occurs at the level of the fundamental group for manifolds with a lower Ricci curvature bound.\ In fact, it was proved by Kapovitch-Wilking \cite{kapovitch2011structure} that there exist $\epsilon(n), C(n)>0$ such that for any closed $n$-manifold $(M,g)$ with $\op{diam}(M,g)\leq\epsilon(n)$ and $\op{Ric}_g\geq-(n-1)$, its fundamental group $\pi_1(M)$ contains a nilpotent subgroup $N$ with index $[\pi_1(M):N]\le C(n)$ and $\op{rank}(N)\leq n$.\ This theorem is now called the generalized Margulis lemma and it has recently been generalized to the non-smooth setting (i.e., to RCD spaces) in \cite{deng2023margulis}.\ It is known that any finitely generated nilpotent group is polycyclic, and $\op{rank}(N)$ is defined as the number of $\mathbb Z$ factors in a polycyclic series of $N$.\ In general, following \cite{zamora2024topological}, we can define the rank of any finitely generated group $G$ as the infimum of $\op{rank}(N)$ among all finite index polycyclic subgroups $N\leq G$ (see Definition \ref{def-rank}).\ In particular, for a closed $n$-manifold $(M,g)$ with $\op{diam}(M,g)\leq\epsilon(n)$ and $\op{Ric}_g\geq-(n-1)$, $\op{rank}(\pi_1(M))\leq n$. By Naber-Zhang's results in \cite{naber2016topology}, if $\pi_1(M)$ attains the maximal rank $n$, then the universal cover $\widetilde{M}$ is volume non-collapsing; Rong describes this by saying that the manifold $M$ satisfies bounded covering geometry (see \cite{huang2020collapsed,rong2022collapsed}).\ It was proved in \cite{huang2020collapsed} that bounded covering geometry forces $M$ to be an infranilmanifold, which generalizes the Gromov-Ruh theorem on almost flat manifolds.\ Combining the results in \cite{naber2016topology,huang2020collapsed}, we have the following theorem, which was also mentioned in a recent work by Si-Xu \cite{si2024rigidity}.
\begin{theorem}[\citeonline{naber2016topology,huang2020collapsed}]\label{thm-almostflat} There exist $\epsilon(n)>0, v(n)>0$ such that for any $n$-manifold $(M,g)$ with $\op{Ric}_g\geq -(n-1)$ and $\op{diam}(M)\leq\epsilon(n)$, the following are equivalent: \begin{enumerate} \item $M$ is diffeomorphic to an infranilmanifold; \item $\op{rank}(\pi_1(M))$ is equal to $n$; \item $(M,g)$ satisfies $(1,v(n))$-bounded covering geometry, i.e., $\op{vol}(B_1(\tilde x))\geq v(n)$, where $\tilde x$ is a point in the Riemannian universal cover $\widetilde{M}$. \end{enumerate} \end{theorem} In \cite{si2024rigidity}, Si-Xu proved that if, in addition, $(M,g)$ in Theorem \ref{thm-almostflat} is Einstein, then $(M,g)$ must be flat.\ Moreover, based on the results of the two most recent works by Zamora-Zhu \cite{zamora2024topological} and Wang \cite{wang2024non}, which will be reviewed in Section \ref{preliminaries}, Theorem \ref{thm-almostflat} can actually be extended to the RCD setting as follows. \begin{theorem}[\citeonline{wang2024non,zamora2024topological}]\label{thm-rcd-almostflat} For each $K\in\mathbb{R}$ and $N\geq1$, there exist $\epsilon=\epsilon(K,N), v=v(K,N)$ such that for any RCD($K,N$) space $(X,d,\mm)$ with $\op{diam}(X)\leq\epsilon$, the following are equivalent: \begin{enumerate} \item $X$ is bi-H\"older homeomorphic to an $N$-dimensional infranilmanifold $\mathcal{N}/\Gamma$, where $\mathcal{N}$ is a simply connected nilpotent Lie group endowed with a left invariant metric; \item $\op{rank}(\pi_1(X))$ is equal to $N$; \item $\mathcal{H}^N(B_1(\tilde x))\geq v$, where $\tilde x$ is a point in the universal cover $\widetilde{X}$. \end{enumerate} \end{theorem} In this paper, we aim to study such questions in a more general situation.\ Accordingly, we first introduce the following definition (see \cite{zamora2024limits}).
\begin{definition}\label{def-almosthomogeneous} We say a proper geodesic space $X$ is $(\epsilon,G)$-homogeneous if $G\leq\op{Iso}(X)$ is a discrete group of isometries with $\op{diam}(X/G)\leq\epsilon$.\ A sequence of $(\epsilon_i,G_i)$-homogeneous spaces $X_i$ with $\epsilon_i\to0$ is called a sequence of almost homogeneous spaces. \end{definition} For example, if $X$ is a compact geodesic space with $\op{diam}(X)\leq\epsilon$, then any normal cover $\hat{X}$ is $(\epsilon,G(\hat{X}))$-homogeneous where $G(\hat{X})$ is the group of deck transformations.\ In particular, the universal cover $\widetilde{X}$ is $(\epsilon,\pi_1(X))$-homogeneous.\ In general, the action of $G$ may not be free.\ Notice that we require discreteness for the group $G$ in Definition \ref{def-almosthomogeneous} and hence, a homogeneous space is not necessarily an $(\epsilon,G)$-homogeneous space. The goal of this paper is to study the structure of $(\epsilon,G)$-homogeneous spaces with Ricci curvature bounds.\ In the case of lower Ricci curvature bounds, we investigate this in the general non-smooth setting (i.e., the RCD setting).\ However, for two-sided Ricci curvature bounds or Einstein metrics, we are limited to a smooth setting, focusing on Riemannian manifolds and orbifolds.\ Our first main result is to classify the pointed measured GH-limit of a sequence of almost homogeneous RCD($K,N$) spaces. 
\begin{theorem}\label{thm-almost homogeneous rcd} Let $K\in\mathbb{R}, N\in[1,\infty)$ and $(X_i,d_i,\mm_i,p_i)$ be a sequence of $(\epsilon_i,G_i)$-homogeneous pointed RCD$(K,N)$ spaces converging to $(X,d,\mm,p)$ in the pmGH-sense with $\epsilon_i\to0$.\ Assume that $X$ is not a point.\ Then the following hold: \begin{enumerate} \item $(X,d)$ is isometric to an $n$-dimensional nilpotent Lie group with a left invariant Riemannian metric for some $n\leq N$; \item if the actions of $G_i$ are measure-preserving, then $\mm=c\mathcal{H}^n$ for some $c>0$ and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ In particular, $\op{Ric}_X\geq K$ if $n\geq2$; \item if $X_i$ are compact (or equivalently, $G_i$ are finite), then $\mm$ and $\mathcal{H}^n$ are mutually absolutely continuous and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ In particular, $\op{Ric}_X\geq K$ if $n\geq2$; \item if $X$ is compact, then $X$ is isometric to a flat torus $\mathbb{T}^n$. \end{enumerate} \end{theorem} For a metric measure space $(Y,d,\mm)$, we say an isometry $g\in\op{Iso}(Y)$ is measure preserving if $g_{\#}\mm=\mm$.\ Notice that when $\mm$ is the Hausdorff measure, any isometric action is measure preserving.\ Also, if we consider a normal covering $(\hat{Y},\hat{d},\hat{\mm})\to(Y,d,\mm)$, then the construction of the lifted measure $\hat{\mm}$ trivially implies that the deck transformations are measure preserving. Theorem \ref{thm-almost homogeneous rcd} implies that the pGH-limit of a sequence of pointed almost homogeneous manifolds with $\Ric\geq K$ must be isometric to one of the following: a point, $S^1(r)$, $\mathbb{R}$, or a nilpotent Lie group with left invariant Riemannian metric and $\Ric\geq K$. Based on Gigli's splitting theorem on RCD($0,N$) spaces (see Theorem \ref{gigli}), we also obtain the following result.
\begin{theorem}\label{thm-almost nonnegative ricci-limit} Let $(X_i,d_i,\mm_i,p_i)$ be a sequence of almost homogeneous RCD$(-\delta_i,N)$ spaces converging to $(X,d,\mm,p)$ in the pmGH-sense with $\delta_i\to0$.\ Then $X$ is isometric to $\mathbb{R}^{k}\times\mathbb{T}^{n-k}$, where $\mathbb{T}^{n-k}$ is a flat torus and $0\leq k\leq n\leq N$. \end{theorem} We can also prove the following corollary for non-collapsed RCD spaces.\ Recall that an RCD($K,N$) space $(X,d,\mm)$ is called non-collapsed if $\mm=\mathcal{H}^N$ (see \cite{de2018non}).\ In this case, $N$ must be an integer. \begin{corollary}\label{cor-homeomorphic to tori} Given $v>0,D>0,K\in\mathbb{R}$ and $N\geq1$, there exists $\epsilon=\epsilon(K,N,v,D)$ such that if $(X,d,\mathcal{H}^N)$ is an $(\epsilon,G)$-homogeneous RCD($K,N$) space with $\mathcal{H}^N(X)\geq v$ and $\diam(X)\leq D$, then $X$ is bi-H\"older homeomorphic to a flat torus $\mathbb{T}^N$.\ Moreover, if $X$ is a Riemannian manifold, then $X$ is diffeomorphic to $\mathbb{T}^N$. \end{corollary} More generally, one may consider those $(X,d,\mathcal{H}^N)$ satisfying local $(r,v)$-bounded covering geometry; that is, for any $x\in X$, $\mathcal{H}^N(B_{r/2}(\tilde{x}))\geq v$, where $\tilde{x}$ is a pre-image of $x$ in the (incomplete) universal covering $\widetilde{B_r(x)} \to B_r(x)$.\ Notice that on an RCD$(K,N)$ space, any ball $B_r(x)$ admits a universal cover by Wang's result \cite{wang2024rcd} (see Theorem \ref{jikang}).\ Employing Wang's latest results \cite{wang2024non}, we have the following fibration theorem for almost homogeneous RCD$(K,N)$ spaces satisfying local bounded covering geometry. 
\begin{theorem}\label{thm-covering and almost homogeneous} Given $v>0,D>0,K\in\mathbb{R}$ and $N\geq1$, there exists $\eps=\eps(K,N,v,D)$ such that if $(X,d,\mathcal{H}^N)$ is an $(\epsilon,G)$-homogeneous RCD($K,N$) space satisfying local $(1,v)$-bounded covering geometry and $\diam(X)\leq D$, then there is a flat torus $\mathbb{T}^k$ with $0\leq k\leq N$ and a continuous fiber bundle map $f: X\to\mathbb{T}^k$, which is also a $\Phi(\eps|K,N,v,D)$-GHA, where $\Phi(\eps|K,N,v,D)\to0$ as $\epsilon\to0$ while $K,N,v$ and $D$ are fixed.\ Moreover, the $f$-fiber with the induced metric is bi-H\"older to an $(N-k)$-dimensional infranilmanifold and in particular, $X$ is homeomorphic to an infranilmanifold. In addition, if $X$ is a Riemannian manifold, then $f$ is a smooth bundle map and the $f$-fiber is diffeomorphic to an $(N-k)$-dimensional infranilmanifold.\ In particular, $X$ is diffeomorphic to an infranilmanifold. \end{theorem} Notice that $\diam(X)\leq\eps$ is equivalent to $X$ being $(\eps,\left\lbrace e\right\rbrace)$-homogeneous.\ In this case, the flat torus $\mathbb{T}^k$ in Theorem \ref{thm-covering and almost homogeneous} will be a point, and thus this theorem can be seen as a generalization of \cite[Theorem A]{huang2020collapsed} and \cite[Theorem A]{wang2024non}, which are the ``(3) implies (1)'' parts of Theorem \ref{thm-almostflat} and Theorem \ref{thm-rcd-almostflat}, respectively. For a proper geodesic space $X$, $G\leq\op{Iso}(X)$ is discrete if and only if its action on $X$ has discrete orbits and is almost free (any isotropy group is finite).\ If $X$ is a manifold, then $X/G$ admits a natural orbifold structure \cite{thurston1997three}.\ Recall that an infranil orbifold is the quotient of a simply connected nilpotent Lie group $\mathcal{N}$ by an almost-crystallographic group (see Definition \ref{def-almost crystal}).\ We can then generalize Theorem \ref{thm-rcd-almostflat} as follows.
\begin{theorem}\label{thm-rcd-maximal rank} For each $K\in\mathbb{R}$ and $N\geq1$, there exist $\epsilon=\epsilon(K,N), v=v(K,N)$ such that for any $(\eps,G)$-homogeneous RCD($K,N$) space $(X,d,\mm)$, the following are equivalent: \begin{enumerate} \item $X$ is homeomorphic to $\mathbb{R}^N$; \item $X$ is a contractible topological $N$-manifold without boundary; \item $\op{rank}(G)$ is equal to $N$; \item $X$ is simply connected and $\mathcal{H}^N(B_1(x))\geq v$ for some $x\in X$; \item $\pi_1(X)$ is finite and $\mathcal{H}^N(B_1(x))\geq v$ for some $x\in X$. \end{enumerate} Moreover, if $G$ does not contain a non-trivial finite normal subgroup, then the above conditions are equivalent to the following: \begin{enumerate} \item[(6)] $X/G$ is bi-H\"older homeomorphic to an $N$-dimensional infranil orbifold $\mathcal{N}/\Gamma$, where $\mathcal{N}$ is a simply connected nilpotent Lie group endowed with a left invariant metric and $G$ is isomorphic to $\Gamma$. \end{enumerate} \end{theorem} If $X\to X/G$ is a covering map, then $G$ is torsion free when $X$ is a contractible manifold.\ Thus, Theorem \ref{thm-rcd-maximal rank} will indeed yield Theorem \ref{thm-rcd-almostflat} (see Remark \ref{cor-rcd}).\ In general, we need to assume that $G$ does not contain a non-trivial finite normal subgroup to identify $G$ as an almost-crystallographic group (see Theorem \ref{thm-almost crystallographic}). The construction of the bi-H\"older homeomorphism in Theorem \ref{thm-rcd-maximal rank} (6) is based on the recent work of Wang \cite{wang2024non} and can be adapted to prove diffeomorphism in the smooth case.\ In fact, the proof of Theorem \ref{thm-rcd-maximal rank} allows us to obtain the following orbifold version of Theorem \ref{thm-almostflat}.\ Basic notions and terminology regarding smooth (Riemannian) orbifolds will be reviewed in Section \ref{subsec:orb}.
\begin{theorem}\label{thm-orbifold almost flat} There exist $\epsilon(n)>0, v(n)>0$ such that if $(\mathcal{O},g)$ is a Riemannian $n$-orbifold with $\op{Ric}\geq-(n-1)$ and $\op{diam}(\mathcal{O})\leq\epsilon(n)$, then the following are equivalent: \begin{enumerate} \item $|\tilde \orb|$ is homeomorphic to $\mathbb{R}^n$, where $|\tilde \orb|$ is the underlying topological space of the universal orbifold cover $\tilde{\orb}$; \item $\op{rank}(\pi_1^{orb}(\mathcal{O}))=n$, where $\pi_1^{orb}(\orb)$ is the orbifold fundamental group of $\orb$; \item $\op{vol}(B_1(\tilde x))\geq v(n)$, where $\tilde x$ is a point in the universal orbifold covering $\widetilde{\mathcal{O}}$. \end{enumerate} Moreover, if $\orb$ is good and $\pi_1^{orb}(\orb)$ does not contain a non-trivial finite normal subgroup, then the above conditions are equivalent to the following: \begin{enumerate} \item[(4)] $\orb$ is diffeomorphic to an infranil orbifold. \end{enumerate} \end{theorem} Recall that a smooth orbifold is called good if it is the global quotient of a smooth manifold by some discrete group.\ In the last statement of Theorem \ref{thm-orbifold almost flat}, without assuming $\orb$ is good, we know from Theorem \ref{thm-rcd-maximal rank} that the underlying space $|\orb|$ is homeomorphic to an infranil orbifold; with this assumption, we obtain diffeomorphism. For two-sided Ricci curvature bounds or Einstein metrics, there is currently no comprehensive and rigorous synthetic theory on metric measure spaces.\ So we are limited to a smooth setting, and the following theorem concerns the rigidity case of Theorem \ref{thm-orbifold almost flat} for Einstein orbifolds; it can be seen as an orbifold version of \cite[Theorem 0.2]{si2024rigidity}.
\begin{theorem}[Rigidity for Einstein orbifolds]\label{thm-einstein orbifold} There exist $\epsilon(n)>0, v(n)>0$ such that for any closed Einstein $n$-orbifold $(\mathcal{O},g)$ satisfying $\op{Ric}\equiv\lambda g$ with $\lambda\geq-(n-1)$ and $\op{diam}(\mathcal{O})\leq\epsilon(n)$, the following are equivalent: \begin{enumerate} \item $\mathcal{O}$ is a closed flat orbifold; \item $\orb$ is diffeomorphic to an infranil orbifold; \item $\op{rank}(\pi_1^{orb}(\mathcal{O}))=n$; \item $\op{vol}(B_1(\tilde x))\geq v(n)$, where $\tilde x$ is a point in the universal orbifold covering $\widetilde{\mathcal{O}}$. \end{enumerate} \end{theorem} Compared with Theorem \ref{thm-orbifold almost flat}, we have removed the assumptions that $\orb$ is good and that $\pi_1^{orb}(\mathcal{O})$ does not contain a non-trivial finite normal subgroup, since we can get a flat metric in the Einstein case.\ Recall that by a result of Thurston \cite{thurston1997three}, a closed flat orbifold must be the quotient orbifold $\mathbb{R}^n/\Gamma$, where $\Gamma$ is a crystallographic group. It is known that almost flat orbifolds are infranil orbifolds (see \cite[Proposition 1.4]{ding2011restriction}).\ Then it follows from Theorem \ref{thm-einstein orbifold} that almost flat Einstein orbifolds must be flat. \begin{corollary} There exists $\epsilon(n)>0$ such that if $(\orb,g)$ is an Einstein $n$-orbifold satisfying $\op{diam}(\orb)^2 |\sec_g|\leq \epsilon(n)$, then $(\orb,g)$ is flat. \end{corollary} For the case of bounded Ricci curvature, we have the following $\epsilon$-regularity theorem.
\begin{theorem}\label{thm-regularity} Given $v>0$ and $p\in(1,\infty)$, there exist $C=C(n,v,p), \epsilon=\epsilon(n,v,p)$ such that if an $(\epsilon,G)$-homogeneous pointed $n$-orbifold $(\mathcal{O},g,x)$ satisfies $|\op{Ric}_g|\leq n-1$ and $\op{vol}(B_1(x))\geq v$, then $\int_{B_1(x)}|Rm|^p\leq C$.\ If, in addition, $(\orb,g)$ is a manifold, then there exists $r_0=r_0(n,v)$ such that $r_h(y)\geq r_0$ for any $y\in B_1(x)$, where $r_h(y)$ is the $C^1$-harmonic radius at $y$. \end{theorem} For a manifold with bounded Ricci curvature, a lower bound on the $C^1$-harmonic radius implies a lower bound on the $C^{1,\alpha}$-harmonic radius and a local $L^p$-bound on curvature, due to elliptic estimates.\ For Einstein manifolds, this will further lead to higher-order control on curvature.\ However, under our conditions, the Einstein metric exhibits very strong rigidity and will, in fact, be flat by Theorem \ref{thm-einstein orbifold}. Moreover, for manifolds with bounded Ricci curvature, we obtain the following result analogous to Theorem \ref{thm-almostflat}. \begin{theorem}\label{thm-bounded Ricci almostflat} Given $p>n/2$, there exist $C(n,p), \epsilon(n,p), v(n)$ such that for any $n$-manifold $(M,g)$ with $|\op{Ric}_g|\leq n-1$ and $\op{diam}(M)\leq\epsilon(n,p)$, the following are equivalent: \begin{enumerate} \item $||Rm||_{L^p(M)}\leq C(n,p)$; \item $M$ is diffeomorphic to an $n$-dimensional infranilmanifold; \item $\op{rank}(\pi_1(M))$ is equal to $n$; \item $(M,g)$ satisfies $(1,v(n))$-bounded covering geometry, i.e., $\op{vol}(B_1(\tilde x))\geq v(n)$, where $\tilde x$ is a point in the Riemannian universal cover $\widetilde{M}$.
\end{enumerate} \end{theorem} The paper is organized as follows.\ In Section \ref{preliminaries}, we cover the preliminary material.\ In Section \ref{sec-3}, we prove Theorem \ref{thm-almost homogeneous rcd}, Theorem \ref{thm-almost nonnegative ricci-limit} and derive a series of consequences.\ In Section \ref{sec-4}, the topological rigidity theorems (Theorem \ref{thm-rcd-maximal rank} and Theorem \ref{thm-orbifold almost flat}) will be proved using results in \cite{zamora2024topological} and \cite{wang2024non}.\ In Section \ref{sec-5}, we study the rigidity and $\epsilon$-regularity for Einstein orbifolds and orbifolds with bounded Ricci curvature, respectively. \textbf{Acknowledgements:} The author would like to thank Ruobing Zhang for bringing Jikang Wang's recent paper \cite{wang2024non} to his attention. \section{Preliminaries}\label{preliminaries} In this paper, a metric measure space is a triple $(X,d,\mathfrak{m})$, where $(X,d)$ is a complete, separable and proper metric space and $\mathfrak{m}$ is a locally finite non-negative Borel measure on $X$ with $\supp\mathfrak{m}=X$.\ We will also always assume $(X,d)$ to be geodesic, i.e., any pair of points is joined by a length minimizing geodesic. Throughout this paper, we assume the reader is familiar with the notion and basic theory of $\rcd (K,N)$ spaces ($N<\infty$).\ We refer the reader to \cite{lott2009ricci,sturm2006geometry,sturm2006geometry2,ambrosio2014metric,ambrosio2014calculus,gigli2015differential} for the relevant notions.\ Notice that a large body of literature has studied the so-called RCD$^*(K,N)$ spaces, which are now known to be equivalent to RCD$(K,N)$ spaces by the work of Cavalletti-Milman \cite{cavalletti2021globalization} and Li \cite{li2024globalization}.\ We refer to \cite{erbar2015equivalence} for an overview of equivalent definitions of $\rcd (K,N)$ spaces.
\subsection{Gromov--Hausdorff topology} \begin{definition} Let $(X,p),(Y,q)$ be pointed geodesic spaces.\ A map $f:X\to Y$ is called a pointed $\epsilon$-Gromov-Hausdorff approximation (or pointed $\epsilon$-GHA) if \begin{equation}\label{pgh1} d(f(p),q)\leq\eps, \end{equation} \begin{equation}\label{pgh2} \sup_{x_1,x_2 \in B_{\eps^{-1}}(p)} \vert d(f(x_1),f(x_2)) - d(x_1,x_2) \vert \leq\eps, \end{equation} \begin{equation}\label{pgh3} \sup_{y \in B_{\eps^{-1}}(q) } \inf_{x \in B_{\eps^{-1}}(p)} d(f(x),y) \leq\eps. \end{equation} \end{definition} \begin{definition}\label{pmgh} \rm Let $(X_i,p_i)$ be a sequence of pointed proper geodesic spaces.\ We say that it \textit{converges in the pointed Gromov--Hausdorff sense} (or pGH-sense) to a pointed proper geodesic space $(X,p)$ if there is a sequence of pointed $\epsilon_i$-GHAs $\phi_i : X_i \to X$ with $\epsilon_i \to 0 $ as $i\to\infty$. If, in addition, $(X_i,d_i,\mathfrak{m}_i)$, $(X,d,\mathfrak{m})$ are metric measure spaces, the maps $\phi_i$ are Borel measurable and $\int_X f \cdot d(\phi_i)_{\#}\mathfrak{m}_i \to \int_X f \cdot d\mathfrak{m}$, for all continuous $f: X \to \mathbb{R}$ with compact support, then we say $(X_i,d_i, \mathfrak{m}_i,p_i)$ converges to $(X,d,\mathfrak{m},p)$ in the \textit{pointed measured Gromov--Hausdorff sense} (or pmGH-sense). \end{definition} \begin{remark} \rm The maps $\phi_i:X_i\to X$ above are called Gromov-Hausdorff approximations and for any $x\in X$, there is a sequence $x_i\in X_i$ such that $\phi_i(x_i)\to x$. \end{remark} \begin{remark} If there is a sequence of groups $\Gamma_i$ acting on $X_i$ by (measure preserving) isometries with $\op{diam}(X_i/\Gamma_i) \leq D$ for some $D>0$, then one may ignore the points $p_i$ when speaking of p(m)GH convergence, as any pair of limits will be isomorphic as metric (measure) spaces.\ In particular, if $\op{diam}(X_i)\leq D$ for some $D>0$, then we simply say that $X_i$ converges to $X$ in the (m)GH-sense.
\end{remark} One of the main features of the RCD($K,N$) condition is that it is stable under pmGH convergence, i.e., the pmGH-limit of a sequence of RCD($K,N$) spaces is also an RCD($K,N$) space (see \cite{gigli2015convergence}).\ Combined with Gromov's precompactness criterion and Prokhorov's compactness theorem (see \cite[Chapter 27]{villani2009optimal} for instance), this shows that the class of RCD($K,N$) spaces with normalized measure is compact under the pmGH-topology. \begin{definition}\label{def-tangent} We say that a pointed metric measure space $(Y,d_{Y},m_{Y},y)$ is a tangent cone of $(X,d,\mm)$ at $x$ if there exists a sequence $r_{i}\to0^{+}$ such that \[(X,r_{i}^{-1}d,\mm(B_{r_{i}}(x))^{-1}\mm,x)\xrightarrow{pmGH}(Y,d_{Y},m_{Y},y).\] The collection of all tangent cones of $(X,d,\mm)$ at $x$ is denoted by $\text{Tan}_x(X,d,\mm)$. \end{definition} For an RCD$(K,N)$ space $(X,d,\mm)$, the compactness yields that $\text{Tan}_x(X,d,\mm)$ is non-empty and any tangent cone is RCD($0,N$).\ We are now in a position to introduce the notions of $k$-regular set and essential dimension as follows. \begin{definition}[$k$-regular set]\label{def-regular} For any integer $k\in[1,N]$, we denote by $\mathcal{R}_{k}$ the set of all points $x\in X$ such that $\text{Tan}_x(X,d,\mm)=\left\lbrace(\mathbb{R}^{k},d_{\mathbb{R}^{k}},(\omega_{k})^{-1}\mathcal{H}^{k},0^{k})\right\rbrace$, where $\omega_{k}$ is the volume of the unit ball in $\mathbb{R}^{k}$.\ We call $\mathcal{R}_{k}$ the $k$-regular set of $X$. \end{definition} \indent The following result was proved by Bru\`e-Semola in \cite{brue2020constancy}. \begin{theorem}[\citeonline{brue2020constancy}]\label{thm-essdim} Let $(X,d,\mm)$ be an RCD$(K,N)$ space with $K\in\mathbb{R}$ and $N\in(1,\infty)$.\ Then there exists a unique integer $k\in[1,N]$, called the essential dimension of $(X,d,\mm)$, denoted by $\dim_{ess}(X)$, such that $\mm(X\setminus\mathcal{R}_{k})=0$.
\end{theorem} \subsection{Equivariant Gromov--Hausdorff convergence} There is a well-studied notion of convergence of group actions in this setting, first introduced by Fukaya-Yamaguchi in \cite{fukaya1992fundamental}.\ For a pointed proper metric space $(X,p)$, we equip its isometry group $\op{Iso}(X)$ with the metric $d_{0}^{p}$ given by \begin{equation}\label{d0} d_0^p (h_1, h_2) : = \inf_{r > 0 } \left\{ \frac{1}{r} + \sup_{x \in B_r(p)} d(h_1x, h_2x) \right\} \end{equation} for $h_1,h_2\in\op{Iso}(X)$.\ This defines a left invariant metric that induces the compact-open topology and makes $\op{Iso}(X)$ a proper metric space. Recall that if a sequence of pointed proper metric spaces $(X_i, p_i)$ converges in the pGH sense to the pointed proper metric space $(X,p)$, one has a sequence of pointed $\epsilon_i$-GHAs $\phi_i : X_i \to X$ with $\epsilon_i\to0$. \begin{definition}\label{def:equivariant} \rm Consider a sequence of pointed proper metric spaces $(X_i,p_i)$ that converges in the pGH sense to a pointed proper metric space $(X,p)$, a sequence of closed groups of isometries $G_i \leq \op{Iso}(X_i)$ and a closed group $G \leq \op{Iso}(X)$.\ Equip $G_i$ with the metric $d^{p_i}_{0}$ and $G$ with the metric $d^{p}_{0}$.\ We say that the sequence $G_i$ \textit{converges equivariantly} to $G$ if there is a sequence of Gromov-Hausdorff approximations $f_i:G_i\to G$ such that for each $R>0$, one has \[\lim\limits_{i\to\infty}\sup_{g \in B^{G_i}_R(Id_{X_i})}\sup_{x \in B^{X_i}_R(p_i)}d(\phi_i(gx),f_i(g)\phi_i(x))=0.\] \end{definition} Let us recall two basic properties of equivariant convergence proved in \cite{fukaya1992fundamental}.
\begin{lemma}[\citeonline{fukaya1992fundamental}]\label{equivariant} Let $(Y_i,q_i)$ be a sequence of proper geodesic spaces that converges in the pGH sense to $(Y,q)$, and $\Gamma_i \leq \op{Iso}(Y_i)$ a sequence of closed groups of isometries that converges equivariantly to a closed group $\Gamma \leq \op{Iso}(Y)$.\ Then the sequence $(Y_i/\Gamma_i, [q_i])$ converges in the pGH sense to $(Y/\Gamma , [q])$. \end{lemma} \begin{lemma}[\citeonline{fukaya1992fundamental}]\label{equivariant-compactness} Let $(Y_i,q_i) $ be a sequence of proper geodesic spaces that converges in the pGH sense to $(Y,q)$, and take a sequence $\Gamma_i \leq \op{Iso}(Y_i)$ of closed groups of isometries.\ Then there is a subsequence $(Y_{i_k}, q_{i_k}, \Gamma_{i_k})_{k \in \mathbb{N}}$ such that $\Gamma_{i_k}$ converges equivariantly to a closed group $\Gamma \leq \op{Iso}(Y)$. \end{lemma} A sequence of groups that converges equivariantly to the trivial group is called a sequence of small groups.\ The explicit definition is the following. \begin{definition} Let $(X_i, p_i)$ be a sequence of proper geodesic spaces. We say a sequence of groups $W_i \leq \iso (X_i) $ consists of \emph{small subgroups} if for each $R > 0 $ we have \[ \lim _{i \to \infty } \, \sup _{g \in W_i}\, \sup _{x \in B_R(p_i)} d(gx,x) = 0 . \] Equivalently, the groups $W_i$ are small if $d_0^{p_i}(g_i,Id_{X_i})\to0$ for any choice of $g_i\in W_i$. \end{definition} Note that if $W_i \leq \iso (X_i) $ is a sequence of discrete small subgroups, then $W_i$ is a finite group for any large $i$.\ It was proved in \cite[Theorem 93]{santos2023fundamental} that a non-collapsing sequence of RCD$(K,N)$ spaces cannot have small groups of measure preserving isometries.
\begin{theorem}[\citeonline{santos2023fundamental}]\label{nss} Let $(X_i, d_i, \mathfrak{m}_i,p_i) $ be a sequence of pointed RCD$(K,N)$ spaces of essential dimension $n$ and let $H_i \leq \op{Iso} (X_i)$ be a sequence of small subgroups acting by measure preserving isometries.\ Assume the sequence $(X_i, d_i, \mathfrak{m}_i,p_i)$ converges in the pmGH sense to an RCD$(K,N)$ space $(X, d, \mathfrak{m},p) $ of essential dimension $n$. Then $H_i$ is trivial for large $i$. \end{theorem} \subsection{Properties of RCD(K,N) spaces} In this subsection, we review some properties of RCD($K,N$) spaces that we will need in this paper. Combining Theorem \ref{thm-essdim} and \cite[Theorem 4.1]{ambrosio2018short}, we have the following theorem. \begin{theorem}\label{thm-measure rectifiable} Let $(X,d,\mm)$ be an RCD$(K,N)$ space with $k=\dim_{ess}(X)$.\ Set \[\mathcal{R}_k^*=\left\lbrace x\in\mathcal{R}_k :\ \lim\limits_{r\to0+}\frac{\mm(B_r(x))}{\omega_{k}r^k} \text{ exists and lies in }(0,\infty)\right\rbrace. \] Then we have the following: \begin{enumerate} \item $\mm(X\setminus\mathcal{R}_k^*)=0$; \item $\mm\llcorner\mathcal{R}_k^*$ and $\mathcal{H}^k\llcorner\mathcal{R}_k^*$ are mutually absolutely continuous; \item $\lim\limits_{r\to0+}\dfrac{\mm(B_r(x))}{\omega_{k}r^k}=\dfrac{d\mm\llcorner\mathcal{R}_k^*}{d\mathcal{H}^k\llcorner\mathcal{R}_k^*}(x)$, for $\mm$-a.e.\ $x\in\mathcal{R}_k^*$. \end{enumerate} \end{theorem} The well-known Cheeger--Gromoll splitting theorem \cite{cheeger1971splitting} was extended by Gigli to the setting of RCD$(0,N)$ spaces \cite{gigli2014overview}. \begin{theorem}[\citeonline{gigli2014overview}]\label{gigli} Let $(X,d,\mathfrak{m})$ be an RCD$(0,N)$ space.\ Then there is a metric measure space $(Y, d_Y, \nu )$ where $(Y,d_Y)$ contains no line, such that $(X,d,\mathfrak{m})$ is isomorphic to the product $(\mathbb{R}^k\times Y, d_{\mathbb{R}^k} \times d_Y, \mathcal{H}^k \otimes \nu)$.
Moreover, if $N-k \in [0,1)$ then $Y$ is a point, and in general, $(Y,d_Y,\nu)$ is an RCD$(0,N-k)$ space. \end{theorem} \begin{remark}\label{rem-splitting} If $(X,d,\mm)$ is an RCD$(0,N)$ space and $G$ is a discrete subgroup of $\op{Iso}(X)$ with $\diam(X/G)<\infty$, then in the above splitting theorem, the space $Y$ can be taken to be compact (see \cite{mondino2019universal}).\ In addition, $\op{Iso}(X)=\op{Iso}(\mathbb{R}^k)\times \op{Iso}(Y)$. \end{remark} Let $(X,d,\mathfrak{m})$ be an RCD$(K,N)$ space and $\rho : Y \to X$ be a covering space.\ The space $Y$ has a natural geodesic structure such that for any curve $\gamma : [0,1] \to Y$ one has \[ \text{length}(\rho \circ \gamma) = \text{length}(\gamma).\] Set \[\mathcal{W} : = \{ W \subset Y \text{ open bounded }\vert \text{ } \rho_{\vert_{W}} : W \to \rho (W) \text{ is an isometry} \} \] and define a measure $\mathfrak{m}_{Y}$ on $Y$ by setting $\mathfrak{m}_Y(A) : = \mathfrak{m}(\rho (A))$ for each Borel set $A$ contained in some element of $\mathcal{W}$.\ The measure $\mathfrak{m}_Y$ makes $\rho : Y \to X$ a local isomorphism of metric measure spaces, so by the local-to-global property of RCD$(K,N)$ spaces \cite{erbar2015equivalence}, $(Y,d_Y,\mathfrak{m}_Y)$ is an RCD$(K,N)$ space, and its group of deck transformations acts by measure-preserving isometries (see \cite{mondino2019universal} for more details). Wang proved the following in \cite{wang2024rcd}. \begin{theorem}[\citeonline{wang2024rcd}]\label{jikang} Let $(X,d,\mathfrak{m})$ be an RCD$(K,N)$ space.\ Then for any $x\in X$ and $R>0$, there exists $r>0$ so that any loop in $B_r(x)$ is contractible in $B_R(x)$.\ In particular, $X$ is semi-locally-simply-connected and its universal cover $\tilde{X}$ is simply connected. \end{theorem} Due to Theorem \ref{jikang}, for an RCD$(K,N)$ space $(X,d,\mathfrak{m})$ we can think of its fundamental group $\pi_1(X)$ as the group of deck transformations of the universal cover $\tilde{X}$.
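As a simple illustration of this construction, consider the flat torus $X=\mathbb{R}^n/\mathbb{Z}^n$ with $\mm=\mathcal{H}^n$: its universal cover is $\tilde{X}=\mathbb{R}^n$, the lifted measure $\mm_{\tilde X}$ defined above is the Lebesgue measure, and $\pi_1(X)\cong\mathbb{Z}^n$ acts on $\tilde{X}$ by translations, which are measure preserving isometries.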
Recall that an RCD$(K,N)$ space $(X,d,\mm)$ is called non-collapsed if $\mm=\mathcal{H}^N$ (see \cite{de2018non}).\ There are some equivalent conditions for the non-collapse condition up to a scaling of the measure (see \cite[Theorem 2.20]{brena2023weakly}, \cite[Theorem 2.3]{zamora2024topological} and references therein). \begin{theorem} \label{thm:dim} Let $(X,\mathsf{d},\mm)$ be an RCD$(K,N)$ space.\ Then the following five conditions are equivalent. \begin{enumerate} \item $X$ has essential dimension $N$. \label{thm:dim-1} \item $X$ has topological dimension $N$. \label{thm:dim-3} \item $X$ has Hausdorff dimension $N$. \label{thm:dim-2} \item $\mm = c \mathcal{H}^N$ for some constant $c>0$.\label{thm:dim-4} \item $N\in\mathbb{N}$ and $X$ has Hausdorff dimension greater than $N-1$. \end{enumerate} \end{theorem} De Philippis-Gigli \cite{de2018non} studied the GH-limit of non-collapsed RCD spaces and obtained the following result, generalizing Cheeger-Colding's result on Ricci-limit spaces \cite{cheeger1997structure}. \begin{theorem}[\citeonline{de2018non}]\label{thm:volume-continuity} Let $(X_i, d_i, \mathcal{H}^N, p_i )$ be a sequence of pointed $\rcd (K,N)$ spaces and $(X_i,d_i,p_i)\xrightarrow{pGH}(X, d, p)$. Then precisely one of the following holds: \begin{enumerate} \item $\limsup_{i \to \infty}\mathcal{H}^N( B_1 (p_i) )>0$.\ In this case, $(X_i,d_i,\mathcal{H}^N,p_i)\xrightarrow{pmGH}(X, d,\mathcal{H}^N,p)$ and the limit $\lim_{i \to \infty}\mathcal{H}^N( B_1 (p_i) )$ exists and is equal to $\mathcal{H}^N (B_1 (p))$. \label{eq:non-collapsed} \item $\lim_{i \to \infty}\mathcal{H}^N( B_1 (p_i) )=0$.\ In this case, $\dim_{\mathcal{H}}(X)\leq N-1$. \end{enumerate} \end{theorem} Let us recall the following topological stability theorem for non-collapsed RCD$(K,N)$ spaces, proved by Kapovitch-Mondino \cite[Theorem 3.3]{kapovitch2021topology}, based on Cheeger-Colding's Reifenberg type theorem \cite{cheeger1997structure}.
\begin{theorem}[\citeonline{kapovitch2021topology}]\label{thm:reifenberg-weak} Let $(X_i,d_i,\mathcal{H}^N,p_i)$ be a sequence of pointed $\rcd (K,N)$ spaces such that the sequence $(X_i, p_i)$ converges in the pGH-sense to $(M^N,p)$ where $M^N$ is a smooth Riemannian manifold.\ Then for any $R>0$, there is a sequence of pointed $\epsilon_i$-GHAs $f_i : (X_i , p_i) \to ( M^N , p ) $ with $\epsilon_i\to0$, such that for all $i$ large enough depending on $R$, the restriction of $f_i$ to $B_R(p_i)$ is a bi-H\"older homeomorphism onto its image, and \[ B_{R - 4\epsilon_i}(p) \subset f_i (B_R(p_i)) . \] In particular, if $M$ is compact then $X_i$ is bi-H\"older homeomorphic to $M$ for all large $i$. \end{theorem} The bi-H\"older homeomorphism can be constructed via harmonic splitting maps.\ We refer to \cite{brue2022boundary} for the notion of harmonic $(k,\eps)$-splitting map on RCD spaces. \begin{theorem}[\citeonline{cheeger2021rectifiability,honda2023note}]\label{Rei} Assume that $(X,d,\mathcal{H}^N,p)$ is a pointed RCD$(-\epsilon,N)$ space and $u: B_2(p) \to \mathbb{R}^N$ is a harmonic $(N,\epsilon)$-splitting map.\ Then for any $x,y \in B_1(p)$ we have $$(1-\Phi(\epsilon|N))d(x,y)^{1+\Phi(\epsilon|N)} \le d(u(x),u(y)) \le (1+\Phi(\epsilon|N))d(x,y),$$ where $\Phi(\eps|N)\to0$ as $\eps\to0$ while $N$ is fixed.\ Moreover, if $X$ is a smooth $N$-manifold with $\Ric\geq-\epsilon$, then for any $x \in B_1(p)$, $du:T_x X \to \mathbb{R}^N$ is nondegenerate.
\end{definition} \begin{definition}\label{def-polycyclic} A finitely generated group $\Lambda$ is said to be \emph{polycyclic} if there is a finite subnormal series \[ \Lambda= \Lambda _m \trianglerighteq \cdots \trianglerighteq \Lambda_0 = \{1\} \] with $\Lambda _ j / \Lambda _ {j-1} $ cyclic for each $j$.\ Such a subnormal series is called a polycyclic series.\ The polycyclic rank is defined as the number of $j$'s for which $\Lambda _ j / \Lambda _{j-1}$ is isomorphic to $\Z$, which is independent of the choice of the polycyclic series and denoted by $\rank (\Lambda )$. \end{definition} From the definition, we immediately know that any finite index subgroup of a polycyclic group is a polycyclic group with the same rank. It is well-known that any finitely generated nilpotent group is polycyclic (see \cite{kargapolov1979fundamentals} for instance), and the rank of a finitely generated nilpotent group is defined to be the polycyclic rank.\ The following lemma (see \cite[Lemma 2.22 and Lemma 2.24]{naber2016topology}) gives the definition of the rank of a finitely generated virtually nilpotent group. \begin{lemma}\label{lem:rank of virtually nilpotent} Let $G$ be a finitely generated virtually nilpotent group.\ Then: \begin{enumerate} \item Every nilpotent subgroup $N\leq G$ of finite index has the same rank.\ The common rank is called the rank of $G$ and also denoted by $\op{rank} (G)$. \item If $\Gamma$ is a finite index subgroup of $G$, then $\op{rank} (\Gamma)=\op{rank}(G)$. \end{enumerate} \end{lemma} Following \cite{zamora2024topological}, we can define the rank for an arbitrary finitely generated group. \begin{definition}\label{def-rank} For a finitely generated group $G$, we define \[\rank (G):=\inf\left\lbrace \rank (\Lambda ):\ \Lambda \text{ is a finite index polycyclic subgroup of } G \right\rbrace. \] The infimum of the empty set is defined to be $+\infty$.
\end{definition} By Definition \ref{def-polycyclic} and Lemma \ref{lem:rank of virtually nilpotent}, if $G$ is polycyclic or finitely generated virtually nilpotent, there is no conflict between the distinct definitions of $\rank ( G )$.\ Also, if $\Lambda$ is a finite index subgroup of $G$, then $\op{rank}(\Lambda)=\op{rank}(G)$. We also note that if $G$ is a discrete group of isometries on a proper geodesic space $X$ with $\diam(X/G)\in(0,\infty)$, then by \cite[Lemma 2.5]{zamora2024limits}, $G$ is finitely generated. \subsection{Riemannian orbifolds}\label{subsec:orb} In this subsection, we review the basic theory of orbifolds.\ An orbifold $\orb$ is, roughly speaking, a topological space that is locally homeomorphic to a quotient of $\mathbb{R}^n$ by some finite group.\ We recall the definitions from \cite{kleiner2011geometrization} (see also \cite{galaz2018quotients}). \begin{definition} \label{D:local_model} A \emph{local model of dimension $n$} is a pair $(\hat{U}, G)$, where $\hat{U}$ is an open, connected subset of a Euclidean space $\RR^n$, and $G$ is a finite group acting smoothly and effectively on $\hat{U}$. A \emph{smooth map} $(\hat{U}_1,G_1)\to (\hat{U}_2,G_2)$ between local models $(\hat{U}_i,G_i)$, $i=1,2$, is a homomorphism $f_{\sharp}: G_1\to G_2$ together with an $f_{\sharp}$-equivariant smooth map $\hat{f}:\hat{U}_1\to \hat{U}_2$, i.e., $\hat f(\gamma\cdot \hat u) = f_\sharp(\gamma) \cdot \hat f(\hat u)$, for all $\gamma \in G_1$ and $\hat u \in \hat U_1$. \end{definition} Given a local model $(\hat{U},G)$, denote by $U$ the quotient $\hat{U}/G$.\ A smooth map between local models is called an \emph{embedding} if $\hat{f}$ is an embedding. In this case, the effectiveness of the actions in the local models implies that $f_{\sharp}$ is injective.
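A basic example of a local model is $(\mathbb{R}^2,\mathbb{Z}/k\mathbb{Z})$, where the generator acts by rotation through the angle $2\pi/k$ about the origin: the quotient $U=\mathbb{R}^2/(\mathbb{Z}/k\mathbb{Z})$ is a cone of cone angle $2\pi/k$, which models a neighborhood of a cone point in a $2$-orbifold.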
\begin{definition} An \emph{$n$-dimensional orbifold local chart} $(U_x, \hat U_x, G_x, \pi_x)$ around a point $x$ in a topological space $X$ consists of: \begin{enumerate} \item A neighborhood $U_x$ of $x$ in $X$; \item A local model $(\hat{U}_x, G_x)$ of dimension $n$; \item A $G_x$-equivariant projection $\pi_x:\hat{U}_x\to U_x$ that induces a homeomorphism $\hat{U}_x/G_x\to U_x$. \end{enumerate} If $\pi_x^{-1}(x)$ consists of a single point, $\hat{x}$, then $(U_x, \hat U_x, G_x, \pi_x)$ is called a \emph{good local chart} around $x$.\ In particular, $\hat x$ is fixed by the action of $G_x$ on $\hat{U}_x$. \end{definition} \begin{definition} An \emph{$n$-dimensional orbifold atlas} for a topological space $X$ is a collection of $n$-dimensional local charts $\mathcal{A}=\{U_{\alpha}\}_\alpha$ such that the local charts $U_\alpha \in \mathcal{A}$ give an open covering of $X$ and for any $x\in U_{\alpha}\cap U_{\beta}$, there is a local chart $U_\gamma \in \mathcal{A}$ with $x\in U_{\gamma}\subset U_{\alpha}\cap U_{\beta}$ and embeddings $(\hat{U}_{\gamma}, G_{\gamma})\to(\hat{U}_{\alpha}, G_{\alpha})$, $(\hat{U}_{\gamma}, G_{\gamma})\to (\hat{U}_{\beta}, G_{\beta})$. Two $n$-dimensional atlases are called \emph{equivalent} if they are contained in a third atlas. \end{definition} \begin{definition} An \emph{$n$-dimensional (smooth) orbifold}, denoted by $\orb^n$ or simply $\orb$, is a second-countable, Hausdorff topological space $|\orb|$, called the \emph{underlying topological space} of $\orb$, together with an \emph{equivalence class of $n$-dimensional orbifold atlases}. \end{definition} Given an orbifold $\orb$ and any point $x\in |\orb|$, one can always find a good local chart $U_x$ around $x$. Moreover, the corresponding group $G_x$ does not depend on the choice of good local chart around $x$, and is referred to as the \emph{local group at $x$}.\ From now on, only good local charts will be considered. 
Each point $x\in |\orb|$ with $G_{x}=\{e\}$ is called a \emph{regular point}.\ The subset $|\orb|_{reg}$ of regular points is called the \emph{regular part}; it is a smooth manifold that forms an open dense subset of $|\orb|$. A point which is not regular is called \emph{singular}. If a discrete group $\Gamma$ acts properly discontinuously on a manifold $M$, then the quotient space $M/\Gamma$ can be naturally endowed with an orbifold structure.\ For simplicity, we still use the notation $M/\Gamma$ to mean $M/\Gamma$ as an orbifold.\ An orbifold $\Or$ is {\em good} if $\Or = M/\Gamma$ for some manifold $M$ and some discrete group $\Gamma$. Similarly, suppose that a discrete group $\Gamma$ acts by diffeomorphisms on an orbifold $\Or$.\ We say that it acts {\em properly discontinuously} if the action of $\Gamma$ on $|\Or|$ is properly discontinuous. Then there is a quotient orbifold $\Or/\Gamma$, with $|\Or/\Gamma| = |\Or|/\Gamma$. A {\em smooth map} $f : \Or_1 \rightarrow \Or_2$ between orbifolds is given by a continuous map $|f| : |\Or_1| \rightarrow |\Or_2|$ with the property that for each $p \in |\Or_1|$, there are local models $(\hU_1, G_1)$ and $(\hU_2, G_2)$ for $p$ and $f(p)$ respectively, and a smooth map $\hat{f} : (\hU_1, G_1) \rightarrow (\hU_2, G_2)$ between local models so that the diagram \begin{equation} \begin{matrix} \hU_1 & \stackrel{\hat{f}}{\rightarrow} & \hU_2 \\ \downarrow & & \downarrow \\ U_1 & \stackrel{|f|}{\rightarrow} & U_2 \end{matrix} \end{equation} commutes.\ A {\em diffeomorphism} $f \: : \: \Or_1 \rightarrow \Or_2$ is a smooth map with a smooth inverse.\ In this case, $G_p$ is isomorphic to $G_{f(p)}$.
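For example, the quotient of the flat torus $T^2=\mathbb{R}^2/\mathbb{Z}^2$ by the involution induced by $x\mapsto -x$ is a good $2$-orbifold (the so-called pillowcase) with four singular points, each with local group $\mathbb{Z}/2\mathbb{Z}$; on the other hand, the teardrop, a $2$-sphere with a single cone point, is a standard example of an orbifold which is not good.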
An {\em orbifold covering} $\pi : \hat\Or \rightarrow \Or$ is a surjective smooth map such that \begin{enumerate} \item for each $x\in|\orb|$, there is an orbifold local chart $(U,\tilde U,H,\phi)$ around $x$ such that $|\pi|^{-1}(U)$ is a disjoint union of open subsets $V_i\subset |\hat{\Or}|$; \item each $V_i$ admits an orbifold local chart of the type $(V_i,\tilde U,H_i,\phi_i)$ where $H_i<H$, such that $|\pi|$ locally lifts to the identity $\tilde{U}\to\tilde{U}$ with inclusion $H_i\to H$. \end{enumerate} A universal orbifold covering of $\Or$ is an orbifold covering $\pi : \tilde\Or \rightarrow \Or$ such that for every orbifold covering $\psi: \hat\Or \rightarrow \Or$, there is an orbifold covering $\phi: \tilde\Or \rightarrow \hat\Or$ so that $\psi\circ\phi=\pi$.\ It is due to Thurston \cite{thurston1997three} that any connected orbifold $\Or$ admits a universal orbifold covering $\pi : \tilde\Or \rightarrow \Or$.\ The {\em orbifold fundamental group} of $\Or$, denoted by $\pi_1^{orb}(\Or)$, is defined to be the deck transformation group of its universal orbifold covering.\ The universal orbifold covering $\pi : \tilde\Or \rightarrow \Or$ induces a diffeomorphism $\tilde\Or /\pi_1^{orb}(\Or) \to \Or$. In general, an orbifold covering is not a local homeomorphism and hence not a covering map in the usual sense.\ In addition, $\pi_1^{orb}(\Or)$ is different from $\pi_1(|\Or|)$ and there is actually an epimorphism $\pi_1^{orb}(\Or)\to \pi_1(|\Or|)$ (see \cite{haefliger1984appendice}). \begin{definition} [Riemannian metric on an orbifold] A Riemannian metric $g$ on an orbifold $\orb$ is given by a collection of Riemannian metrics on the local models $\hat{U}_\alpha$ so that the following conditions hold: \begin{enumerate} \item The local group $G_\alpha$ acts isometrically on $\hat{U}_\alpha$. \item The embeddings $(\hat{U}_3,G_3)\to (\hat{U}_1,G_1)$ and $(\hat{U}_3,G_3)\to (\hat{U}_2,G_2)$ in the definition of orbifold atlas are isometries (with respect to the Riemannian metric).
\end{enumerate} \end{definition} Note that the Riemannian metric $g$ induces a natural metric $d$ on $|\orb|$ that is locally isometric to the quotient metric of $(\hat{U}_{x},\hat{d}_{x})$ by $G_{x}$, where $\hat{d}_{x}$ is induced by the Riemannian metric $\hat g_x$ on $\hat{U}_{x}$.\ In the absence of ambiguity, we sometimes directly treat $(\orb,g)$ as the metric space $(|\orb|,d)$ and apply the terminology from metric spaces to $(\orb,g)$. For any Riemannian orbifold $(\orb,g)$, there is a natural volume measure $\operatorname{vol}_{g}$ given on the local orbifold charts $(U_x, \hat U_x, G_x, \pi_x)$ by $\operatorname{vol}_{g}|_{U_{x}}:= \frac{1}{|G_x|}(\pi_{x})_{\#}\operatorname{vol}_{\hat g_{x}}$, where $\operatorname{vol}_{\hat g_{x}}$ is the Riemannian volume measure on $(\hat{U}_{x}, \hat g_{x})$. The regular part $|\Or|_{reg}$ inherits a Riemannian metric.\ The corresponding volume form equals the $n$-dimensional Hausdorff measure on $|\Or|_{reg}$. In particular, $\op{vol}_g(\Or)$ coincides with the volume of the Riemannian manifold $|\Or|_{reg}$, which equals the $n$-dimensional Hausdorff measure of the metric space $|\Or|$. The Levi-Civita connection on $(\orb,g)$ can be defined via the local models.\ We can then define the curvature tensor $Rm$ on $\orb$; derived curvature notions, such as the sectional and Ricci curvatures, are defined accordingly.\ We adopt the same notation for corresponding geometric quantities as is used on Riemannian manifolds. Letting $g_{reg} = g|_{|\orb|_{reg}}$, we have that $(|\orb|_{reg},g_{reg})$ is a smooth open Riemannian manifold.\ By density, it is clear that $(|\orb|_{reg},g_{reg})$ satisfies $\Ric_{g_{reg}}\geq K$ if and only if $(\orb, g)$ satisfies $\Ric_{g}\geq K$. Also, the following result was proved by Galaz-Kell-Mondino-Sosa \cite[Theorem 7.10]{galaz2018quotients}.
\begin{theorem}[\citeonline{galaz2018quotients}]\label{thm-orb rcd} Let $(\orb,g)$ be an $n$-dimensional Riemannian orbifold.\ Then $\Ric_g\geq K$ if and only if $(\orb,g,\op{vol}_g)$ is an RCD$(K,n)$ space. \end{theorem} Finally, let us review some facts about closed flat orbifolds.\ Recall that a group $\Gamma\leq \op{Iso}(\R^n)$ is called \emph{crystallographic} if it is discrete and cocompact, so that $\R^n/\Gamma$ is a closed flat orbifold.\ Conversely, by a result of Thurston \cite{thurston1997three}, if $(\orb,g)$ is a closed flat orbifold, then it is good, its universal orbifold cover is $\R^n$ and $\pi_1^{orb}(\orb)$ is isomorphic to a crystallographic group. \subsection{Almost-crystallographic groups and infranil orbifolds} In this subsection, we review some basic notions of almost-crystallographic groups and infranil orbifolds. \begin{definition}\label{def-almost crystal} Let $\mathcal{N}$ be a connected and simply connected nilpotent Lie group, and consider a maximal compact subgroup $C$ of $\op{Aut}(\mathcal{N})$.\ A cocompact and discrete subgroup $\Gamma$ of $\mathcal{N}\rtimes C$ is called an almost-crystallographic group (modeled on $\mathcal{N}$).\ The dimension of $\Gamma$ is defined to be that of $\nn$.\ If, moreover, $\Gamma$ is torsion free, then $\Gamma$ is called an almost-Bieberbach group. An infranil orbifold (resp.\ infranilmanifold) is a quotient space $\mathcal{N}/\Gamma$, where $\Gamma$ is an almost-crystallographic (resp.\ almost-Bieberbach) group modeled on $\mathcal{N}$.\ If further $\Gamma\subset\mathcal{N}$ (so $\Gamma$ acts freely on $\mathcal{N}$), then we say that $\mathcal{N}/\Gamma$ is a nilmanifold.
\end{definition} Let $\Gamma$ be an almost-crystallographic group modeled on $\mathcal{N}$.\ Due to the generalized first Bieberbach Theorem proved by Auslander \cite{auslander1960bieberbach}, $G=\nn\cap\Gamma$ is a lattice of $\nn$ and $\Gamma/G$ is finite.\ Therefore, an infranil orbifold is actually the quotient of a nilmanifold by a finite group of affine diffeomorphisms.\ In addition, $\op{rank}(\Gamma)=\op{rank}(G)=\dim(\nn)$. Let us recall a well-known algebraic characterization of almost-crystallographic groups (see \cite[Theorem 4.2]{dekimpe2016users} for instance). \begin{theorem}\label{thm-almost crystallographic} Let $E$ be a finitely generated virtually nilpotent group.\ Then the following are equivalent. \begin{enumerate} \item $E$ is isomorphic to an almost-crystallographic group. \item $E$ contains a torsion free nilpotent normal subgroup $G$, such that $G$ is maximal nilpotent in $E$ and $[E:G]<\infty$. \item $E$ does not contain a non-trivial finite normal subgroup. \end{enumerate} \end{theorem} To prove the ``(3) implies (1)'' part in the above theorem, one may first find a normal subgroup $N$ of finite index in $E$ such that $N$ is a finitely generated torsion free nilpotent group.\ It was shown by Mal'cev \cite{malcev1949} that such $N$ can always be embedded as a lattice in a simply connected nilpotent Lie group $\nn$, which is unique up to isomorphism and now called the Mal'cev completion of $N$.\ Then $E$ can be identified with an almost-crystallographic group modeled on $\nn$.\ Specifically, we have the following proposition (see \cite{dekimpe2016users} for the detailed proof). \begin{proposition}\label{prop-Malcev} Let $E$ be a finitely generated virtually nilpotent group which does not contain a non-trivial finite normal subgroup.\ Let $N$ be a torsion free nilpotent normal subgroup of finite index in $E$.\ Then $E$ is isomorphic to an almost-crystallographic group modeled on $\nn$, where $\nn$ is the Mal'cev completion of $N$.
\end{proposition} \subsection{Limits of almost homogeneous spaces} Let $(X_i,p_i)$ be a sequence of pointed almost homogeneous spaces, which converges in the pGH sense to $(X,p)$.\ By Lemma \ref{equivariant} and Lemma \ref{equivariant-compactness}, there is a closed group $G\leq \iso(X)$ acting transitively on $X$; that is, $X$ is $G$-homogeneous.\ Indeed, the limit of almost homogeneous spaces was specifically studied by Zamora in \cite{zamora2024limits}, where he utilized the results of Breuillard-Green-Tao \cite{breuillard2012structure} and proved the following theorem. \begin{theorem}[\citeonline{zamora2024limits}]\label{thm-zamora} Let $(X_i,p_i)$ be a sequence of pointed almost homogeneous spaces, which converges in the pGH sense to $(X,p)$.\ If $X$ is semi-locally-simply-connected, then $X$ is a nilpotent Lie group equipped with a sub-Finsler invariant metric, and $\pi_1(X)$ is a torsion free subgroup of a quotient of $\pi_1(X_i)$ for sufficiently large $i$. \end{theorem} Indeed, the fundamental group of any connected nilpotent Lie group is finitely generated torsion free abelian (see \cite[Corollary 2.11]{zamora2024limits}).\ In the above theorem, $X$ will be simply connected when $\pi_1(X_i)$ are finite groups. It is well-known that any compact connected nilpotent Lie group is abelian (see \cite[Corollary 2.13]{zamora2024limits} for instance).\ Thus, if the above limit space $X$ is compact, then it must be a torus.\ We note that this result on compact limits of almost homogeneous spaces can also be obtained in the finite dimensional case by using an old theorem of Turing \cite{turing1938finite}, and Gelander \cite{gelander2013limits} proved a more general result which covers the infinite dimensional case (see also \cite[Theorem 1.4]{zamora2024limits}). 
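A toy example illustrating the compact case: let $X_i=S^1$ be the unit circle acted on by the cyclic rotation group $G_i=\mathbb{Z}/i\mathbb{Z}$, so that $\op{diam}(X_i/G_i)=\pi/i\to0$; the (constant) sequence $X_i$ converges in the GH-sense to $X=S^1$, a one-dimensional torus, in accordance with the discussion above.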
A key to proving Theorem \ref{thm-zamora} lies in finding a nilpotent group of isometries acting transitively on $X$, which was further applied in the recent work of Zamora-Zhu \cite{zamora2024topological}.\ The following three lemmas stem from \cite{zamora2024topological}. \begin{lemma}[\citeonline{zamora2024topological}]\label{lem:almost-homogeoeus-subgroups} Let $(X_i, p_i)$ be a sequence of pointed proper geodesic spaces that converges to a pointed proper semi-locally-simply-connected geodesic space $(X,p)$ in the pointed Gromov--Hausdorff sense, and $G_i \leq \iso (X_i)$ a sequence of discrete groups with $\diam (X_i / G_i ) \to 0$.\ Then there exists $s \in \mathbb{N}$ and a sequence of finite index normal subgroups $ G_i ^{\prime} \leq G_i $ with $$\lim\limits_{i \to \infty } \, \sup _{x \in X_i} \, \sup _{g \in (G_i ^{\prime } ) ^{(s)}} d(gx, x ) = 0 \text{ and } \limsup\limits_{i \to \infty} \, [G_i : G_i ^{\prime}] < \infty. $$ \end{lemma} Note the basic fact that any subgroup of bounded index contains a normal subgroup of bounded index (see \cite[Lemma 4.8]{wang2024non} for instance).\ Thus, based on \cite[Lemma 2.23]{zamora2024topological}, we can further assume that $G_i^{\prime}$ is a normal subgroup. \begin{lemma}[\citeonline{zamora2024topological}]\label{lem:almost-translations} Let $(X_i, p_i)$, $(X,p)$, $G_i$, $G_i^{\prime}$ be as in Lemma \ref{lem:almost-homogeoeus-subgroups}.\ Then after passing to a subsequence the groups $G_i^{\prime}$ converge equivariantly to a connected nilpotent group $ G\leq \iso (X)$ acting freely and transitively. \end{lemma} \begin{lemma}[\citeonline{zamora2024topological}]\label{lem:free} Let $(X_i, p_i)$, $(X,p)$, $G_i$, $G_i^{\prime}$ be as in Lemma \ref{lem:almost-homogeoeus-subgroups}.\ If every sequence of small subgroups of $G_i$ is trivial for large $i$, then $G_i^{\prime}$ acts freely for large $i$.
\end{lemma} We further introduce the following definition and lemma from \cite{breuillard2012structure,zamora2024limits,zamora2024topological}, which gives an explicit description of the groups $G_i^{\prime}$. \begin{definition} Let $G$ be a group, $u_1, u_2, \ldots , u_r \in G$, and $N_1, N_2, \ldots , N_r $ $\in \mathbb{R}^+$. The set $P(u_1, \ldots , u_r ; N_1, \ldots , N_r) \subset G$ is defined to be the set of elements that can be expressed as words in the $u_i$'s and their inverses such that the number of appearances of $u_i$ and $u_i^{-1}$ is not more than $N_i$. We then say that $P(u_1, \ldots , u_r ; N_1, \ldots , N_r)$ is a \emph{nilprogression in $C$-normal form} for some $C>0$ if it also satisfies the following properties: \begin{enumerate} \item For all $1 \leq i \leq j \leq r$, and all choices of signs, we have \begin{center} $ [ u_i^{\pm 1} , u_j^{\pm 1} ] \in P \left( u_{j+1} , \ldots , u_r ; \dfrac{CN_{j+1}}{N_iN_j} , \ldots , \dfrac{CN_r}{N_iN_j} \right). $ \end{center}\label{condition:n1} \item The expressions $ u_1 ^{n_1} \ldots u_r^{n_r} $ represent distinct elements as $n_1, \ldots , n_r$ range over the integers with $\vert n_1 \vert \leq N_1/C , \ldots , \vert n_r \vert \leq N_r/C$.\label{condition:n2} \item One has \[ \frac{1}{C} (2\lfloor N_1 \rfloor + 1 ) \cdots ( 2\lfloor N_r \rfloor + 1 ) \leq \vert P \vert \leq C (2 \lfloor N_1 \rfloor + 1 ) \cdots ( 2\lfloor N_r \rfloor + 1 ).\] \label{condition:n3} \end{enumerate} For a nilprogression $P$ in $C$-normal form, and $\varepsilon \in (0,1)$, the set $P( u_1, \ldots , u_r ; $ $ \varepsilon N_1, \ldots , \varepsilon N_r )$ also satisfies conditions (1) and (2), and we denote it by $\varepsilon P$.\ We define the \emph{thickness} of $P$ as the minimum of $N_1, \ldots , N_r$ and we denote it by $ \thi (P)$.\ The set $\{ u_1 ^{n_1 }\ldots u_r^{n_r} \vert \vert n_i \vert \leq N_i/C \}$ is called the \emph{grid part of }$P$, and is denoted by $G(P)$. 
\end{definition} \begin{lemma}\label{lem:malcev-construction} Let $(X_i, p_i)$, $(X,p)$, $G_i$, $G_i^{\prime}$ be as in Lemma \ref{lem:almost-homogeoeus-subgroups} and $n=\dim_{top}(X)$.\ Then for $i$ large enough, there are $N_{1,i}, \ldots , N_{n,i} \in \mathbb{R}^+$ and torsion free nilpotent groups $\tilde{\Gamma}_i$ generated by elements $\tilde{u}_{1,i}, \ldots , \tilde{u}_{n,i} \in \tilde{\Gamma}_i$ with the following properties: \begin{enumerate} \item There are polynomials $Q_i : \mathbb {R}^n \times \mathbb{R}^n\to \mathbb{R}^n$ of degree $\leq d(n)$ giving the group structures on $\R^n$ by $x_1 \cdot x_2 = Q_i (x_1, x_2)$ such that for each $i$, $\tilde{\Gamma}_i$ is isomorphic to the group $(\Z^n,Q_i|_{\Z^n\times\Z^n})$ and the group $(\R^n,Q_i)$ is isomorphic to the Mal'cev completion of $\tilde{\Gamma}_i$. \label{conclusion:malcev-1} \item There are small normal subgroups $W_i \trianglelefteq G_i^{\prime }$ and surjective group morphisms \[\Phi _i : \tilde {\Gamma }_ i \to \Gamma_i : = G_i^{\prime } / W_i\] such that $\op{Ker} (\Phi_i)$ is a quotient of $\pi_1(X_i)$ and contains an isomorphic copy of $\pi_1(X)$ for $i$ large enough. \label{conclusion:malcev-2} \item There is $C > 0 $ such that if $u_{j,i} : = \Phi _i (\tilde{u}_{j,i}) $ for each $j \in \{ 1, \ldots , n\}$, the set \[ P_i : = P ( u_{1,i}, \ldots , u_{n,i} ; N_{1,i}, \ldots , N_{n,i} ) \subset \Gamma_i \] is a nilprogression in $C$-normal form with $\thi (P_i ) \to \infty $. \label{conclusion:malcev-3} \item \label{conclusion:malcev-4} For each $\varepsilon > 0 $ there is $\delta > 0 $ such that \begin{align*} G(\delta P_i ) \subset \{ g \in \Gamma_i \, \vert \, &d(g[p_i], [p_i] ) \leq \varepsilon \} , \\ \{ g \in \Gamma_i \, \vert \, d(g [p_i ], [p_i&] ) \leq \delta \} \subset G( \varepsilon P_i ) \end{align*} for $i$ large enough, where we are considering the action of $\Gamma _ i$ on $X_i / W_i $. 
\end{enumerate} \end{lemma} \begin{proof} Note that \cite[Lemma 2.30]{zamora2024topological} provides (2), (3), (4), except that in (2) we additionally obtain that $\op{Ker} (\Phi_i)$ is a quotient of $\pi_1(X_i)$.\ This is due to \cite[Proposition 8.4]{zamora2024limits}.\ Moreover, (1) is just the Mal'cev Embedding Theorem and the construction of the polynomial $Q_i$ can be found in \cite[Section 5.1]{buser1981gromov} (see also \cite[Section 8]{zamora2024limits}). \end{proof} \begin{remark}\label{rem:rank-inequality} As noted in \cite[Remark 2.31]{zamora2024topological}, one obtains that \begin{equation}\label{eq:rank-string} \rank (G_i^{\prime} ) = \rank (\Gamma _i ) = \rank (\tilde {\Gamma}_i) - \rank (\Ker (\Phi _ i)) \leq n. \end{equation} If $\op{rank}(G_i^{\prime})=n$, then $\Ker (\Phi _ i ) $ is trivial and hence, $X$ is simply connected and $\tilde {\Gamma }_i = \Gamma_i$. \end{remark} \begin{remark}\label{rem-simply connected} If $\pi_1(X_i)$ are finite groups, then $\Ker(\Phi_i)$ is also trivial, so $X$ is simply connected and $\tilde {\Gamma }_i = \Gamma_i= G_i^{\prime } / W_i$.\ In addition, $\rank(G_i)=\rank (G_i^{\prime})=\rank(\tilde {\Gamma}_i)=n$. \end{remark} \subsection{Topological rigidity for RCD spaces with bounded covering geometry} As we have noted in Section \ref{section-0}, by the recent works of Zamora-Zhu \cite{zamora2024topological} and Wang \cite{wang2024non}, Theorem \ref{thm-almostflat} can be extended to the RCD setting as the following theorem.
\begin{theorem}[Theorem \ref{thm-rcd-almostflat}]\label{yyc} For each $K\in\mathbb{R}$ and $N\geq1$, there are $\epsilon=\epsilon(K,N)$ and $v=v(K,N)$ such that for any RCD($K,N$) space $(X,d,\mathcal{H}^N)$ with $\op{diam}(X)\leq\epsilon$, the following are equivalent: \begin{enumerate} \item $X$ is bi-H\"older homeomorphic to an $N$-dimensional infranilmanifold $\mathcal{N}/\Gamma$, where $\mathcal{N}$ is a simply connected nilpotent Lie group endowed with a left invariant metric; \item $\op{rank}(\pi_1(X))$ is equal to $N$; \item $\mathcal{H}^N(B_1(\tilde x))\geq v$, where $\tilde x$ is a point in the universal cover $\widetilde{X}$. \end{enumerate} \end{theorem} In the above theorem, (1) trivially implies (2).\ It was proved by Zamora-Zhu \cite{zamora2024topological} that (2) implies that $X$ is homeomorphic to an $N$-dimensional infranilmanifold, and their proof implicitly shows that (2) implies (3).\ In \cite{wang2024non}, Wang proved that (3) implies (1). We say that a non-collapsed RCD$(K,N)$ space $(X,d,\mathcal{H}^N)$ satisfies (global) $(1,v)$-bounded covering geometry if condition (3) of Theorem \ref{yyc} holds. Indeed, a local version of this notion was proposed by Rong for Riemannian manifolds.\ Specifically, let $M$ be a compact $n$-manifold with $\Ric_M\geq-(n-1)$.\ We say that $M$ satisfies local $(r,v)$-bounded covering geometry if for any $x\in M$, $\op{vol}(B_{r/2}(\tilde{x}))\geq v$, where $\tilde{x}$ is a pre-image of $x$ in the (incomplete) Riemannian universal covering \[\pi: (\widetilde{B_r(x)},\tilde{x})\to (B_r(x),x).\] If $M$ has small diameter, then local bounded covering geometry is equivalent to (global) bounded covering geometry.\ We refer to \cite{huang2020collapsed,rong2022collapsed} and the survey paper \cite{huang2020collapsing} for more detailed descriptions of this terminology.
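To illustrate condition (3) of Theorem \ref{yyc} and the notion of bounded covering geometry, we mention two standard examples.\ For the flat torus $X=\mathbb{R}^N/\eps\mathbb{Z}^N$, the universal cover is $\mathbb{R}^N$ and $\mathcal{H}^N(B_1(\tilde x))=\omega_N$ for every $\tilde x\in\widetilde{X}$, so $X$ satisfies $(1,\omega_N)$-bounded covering geometry for every $\eps>0$, while $\op{rank}(\pi_1(X))=\op{rank}(\mathbb{Z}^N)=N$, in accordance with Theorem \ref{yyc}.\ In contrast, for the rescaled spherical space form $X=\eps(S^3/\mathbb{Z}_p)$ one has $\diam(X)\leq\pi\eps$, while the universal cover $\eps S^3$ satisfies \[\mathcal{H}^3(B_1(\tilde x))\leq\mathcal{H}^3(\eps S^3)=2\pi^2\eps^3\to0\] as $\eps\to0$, and $\pi_1(X)\cong\mathbb{Z}_p$ has rank $0<3$; thus all three conditions of Theorem \ref{yyc} fail.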
Naturally, one can define a non-collapsed RCD$(K,N)$ space $(X,d,\mathcal{H}^N)$ satisfying local bounded covering geometry in a similar fashion.\ Note that by Theorem \ref{jikang}, any $r$-ball $B_r(x)\subset X$ is semi-locally-simply-connected and hence, admits a universal cover. The following fibration theorem summarizes the contributions from \cite{huang2020fibrations} and \cite{rong2022collapsed}. \begin{theorem}[\citeonline{huang2020fibrations,rong2022collapsed}]\label{thm-local covering geometry} Let $(M_i^n,g_i)$ converge to $(N^k,g)$ in the pGH-sense, where $N^k$ is a compact smooth manifold with $k\leq n$.\ Suppose that $(M_i^n,g_i)$ satisfy $\Ric_{g_i}\geq-(n-1)$ and local $(1,v)$-bounded covering geometry for some $v>0$.\ Then for all large $i$, there exist smooth fiber bundle maps $f_i: M_i\to N$ which are also $\epsilon_i$-GHAs with $\epsilon_i\to0$.\ Moreover, any $f_i$-fiber is diffeomorphic to an $(n-k)$-dimensional infranilmanifold. \end{theorem} In \cite{huang2020fibrations}, Huang constructed the smooth fiber bundle map and Rong \cite{rong2022collapsed} further identified the fibers to infranilmanifolds.\ This theorem is a generalization of Fukaya's fibration theorem in \cite{fukaya1987collapsing} on collapsed manifolds with bounded sectional curvature (see also \cite{cheeger1992nilpotent}).\ Indeed, any $n$-manifold with $|\sec|\leq1$ satisfies local $(r,v)$-bounded covering geometry with $r$ and $v$ depending only on $n$ (see \cite{cheeger1992nilpotent}). Recently, Wang \cite{wang2024non} generalized Theorem \ref{thm-local covering geometry} to RCD$(K,N)$ spaces $(X,d,\mathcal{H}^N)$. 
\begin{theorem}[\citeonline{wang2024non}]\label{wang-fibration} Let $(X_i,d_i,\mathcal{H}^N)$ be a sequence of RCD$(K,N)$ spaces and $(X_i,d_i)$ converge in the GH-sense to a closed Riemannian manifold $N^k$ with $k\leq N$.\ Suppose that all $(X_i,d_i,\mathcal{H}^N)$ satisfy local $(1,v)$-bounded covering geometry for some $v>0$.\ Then for large enough $i$, there are fiber bundle maps $f_i: X_i\to N^k$ which are also $\eps_i$-GHAs with $\eps_i\to0$ and any $f_i$-fiber with the induced metric is bi-H\"older homeomorphic to an $(N-k)$-dimensional infranilmanifold. \end{theorem} \section{Limits of almost homogeneous RCD spaces and applications}\label{sec-3} In this section, we will prove Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{thm-almost nonnegative ricci-limit} and derive a series of consequences.\ The following lemma is obvious but key to our proof of Theorem \ref{thm-almost homogeneous rcd} (2). \begin{lemma}\label{lem-measure converge} Let $(X_i,d_i,m_i,q_i)$ be a sequence of metric measure spaces that converges in the pmGH sense to $(X,d,m,q)$, and $G_i \leq \op{Iso}(X_i)$ a sequence of closed groups of measure preserving isometries that converges equivariantly to a closed group $G \leq \op{Iso}(X)$.\ Then $G$ acts on $X$ by measure preserving isometries. \end{lemma} \begin{proof} Let $g$ be an arbitrary element of the group $G$.\ We need to show that $g_{\#}m=m$. Notice that by Definition \ref{def:equivariant}, there is a sequence of Gromov-Hausdorff approximations $f_i: G_i\to G$ and $g_i\in G_i$, such that $f_i(g_i)\to g$.\ Thus, $f_i(g_i)_{\#}m\to g_{\#}m$ in $C_{c}(X)^*$.\ Also, there is a sequence of Gromov-Hausdorff approximations $\phi_i: X_i\to X$ such that $(\phi_i)_{\#}m_i\to m$ in $C_{c}(X)^*$.\ Then we have $(f_i(g_i)\circ\phi_i)_{\#}m_i\to g_{\#}m$ in $C_{c}(X)^*$. 
By Definition \ref{def:equivariant}, $(\phi_i\circ g_i)_{\#}m_i\to g_{\#}m$ in $C_{c}(X)^*$.\ Since $(g_i)_{\#}m_i=m_i$, we obtain that $(\phi_i)_{\#}m_i\to g_{\#}m$ in $C_{c}(X)^*$.\ This leads to $g_{\#}m=m$ and we complete the proof. \end{proof} We say that a metric measure space $(X,d,m)$ is metric measure homogeneous if for all $x,y\in X$, there exists a measure preserving isometry $h\in \op{Iso}(X)$ such that $h(x)=y$. \begin{lemma}\label{lem-mm homogeneous} Let $(X,d,m)$ be a metric measure homogeneous RCD$(K,N)$ space of essential dimension $n$ for some $K\in\mathbb{R}$ and $N\in [1,\infty)$.\ Then $X$ is isometric to a Riemannian $n$-manifold and $m=c\mathcal{H}^n$ for some $c>0$.\ In particular, $(X,d,\mathcal{H}^n)$ is a non-collapsed RCD$(K,n)$ space. \end{lemma} \begin{proof} By \cite[Proposition 5.14]{santos2020invariant}, $X$ is isometric to a Riemannian $n$-manifold.\ Due to homogeneity, $X=\mathcal{R}_n^*$ (see Theorem \ref{thm-measure rectifiable}) and the limit \[\lim\limits_{r\to0^+}\frac{m(B_r(x))}{\omega_{n}r^n}\] is a constant denoted by $c$.\ Then by Theorem \ref{thm-measure rectifiable}, we have $m=c\mathcal{H}^n$ and hence, $(X,d,\mathcal{H}^n)$ is RCD$(K,N)$.\ Since $X$ is a Riemannian $n$-manifold, $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ This completes the proof. \end{proof} \begin{remark} In a recent work \cite{honda2024locally}, Honda-Nepechiy arrived at the same conclusion as Lemma \ref{lem-mm homogeneous} by assuming only that $(X,d,m)$ is \textit{locally} metric measure homogeneous.\ For our purposes, Lemma \ref{lem-mm homogeneous} is sufficient, and the proof is considerably simpler. \end{remark} Now, we can prove Theorem \ref{thm-almost homogeneous rcd} and for the convenience of readers, we rewrite it here. 
\begin{theorem}[Theorem \ref{thm-almost homogeneous rcd}] Let $K\in\mathbb{R}$, $N\in(1,\infty)$ and $(X_i,d_i,m_i,p_i)$ be a sequence of $(\epsilon_i,G_i)$-homogeneous pointed RCD$(K,N)$ spaces converging to $(X,d,m,p)$ in the pmGH-sense with $\epsilon_i\to0$.\ Assume that $X$ is not a point.\ Then the following hold: \begin{enumerate} \item $(X,d)$ is isometric to an $n$-dimensional nilpotent Lie group with a left invariant Riemannian metric for some $n\leq N$; \item if the actions of $G_i$ are measure-preserving, then $m=c\mathcal{H}^n$ for some $c>0$ and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ In particular, $\op{Ric}_X\geq K$ if $n\geq2$; \item if $X_i$ are compact (or equivalently, $G_i$ are finite), then $m$ and $\mathcal{H}^n$ are mutually absolutely continuous and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ In particular, $\op{Ric}_X\geq K$ if $n\geq2$; \item if $X$ is compact, then $X$ is isometric to a flat torus $\mathbb{T}^n$. \end{enumerate} \end{theorem} \begin{proof} (1) By Lemma \ref{equivariant} and Lemma \ref{equivariant-compactness}, $(X,d,m)$ is a homogeneous RCD$(K,N)$ space.\ Then by \cite[Proposition 5.14]{santos2020invariant}, $X$ is a Riemannian $n$-manifold, where $n=\dim_{ess}(X)\leq N$.\ Combining this with Theorem \ref{thm-zamora}, we see that $(X,d)$ is isometric to an $n$-dimensional nilpotent Lie group with a left invariant Riemannian metric. (2) By Lemma \ref{equivariant-compactness} and Lemma \ref{lem-measure converge}, there is a closed group $G\leq\op{Iso}(X)$ acting transitively on $X$ by measure preserving isometries.\ Then by Lemma \ref{lem-mm homogeneous}, $m=c\mathcal{H}^n$ for some $c>0$ and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ Since $X$ is a Riemannian $n$-manifold, $\op{Ric}_X\geq K$ if $n\geq2$.
(3) Since $G_i$ is a finite group, we can apply \cite[Theorem A]{santos2020invariant} to obtain a $G_i$-invariant measure $m_{G_i}$ (so $G_i$ acts on $(X_i,d_i,m_{G_i})$ by measure preserving isometries) such that $(X_i,d_i,m_{G_i})$ is also RCD$(K,N)$.\ After normalizing the measure $m_{G_i}$ and passing to a subsequence, we can assume $(X_i,d_i,m_{G_i},p_i)$ converges in the pmGH-sense to $(X,d,m^*,p)$.\ By (2), $m^*=c\mathcal{H}^n$ for some $c>0$ and $(X,d,\mathcal{H}^n)$ is an RCD$(K,n)$ space.\ Also, $m$ and $\mathcal{H}^n$ are mutually absolutely continuous due to \cite{kell2017transport}. (4) By (1), $X$ is a compact connected nilpotent Lie group and hence, a torus.\ Since the metric is invariant and Riemannian, $X$ is a flat torus. \end{proof} \begin{remark} One may prove (4) without using the nilpotency obtained in (1).\ In fact, if $X$ is compact, then $G_i$ are finite groups and the orbits $G_i\cdot p_i$ are finite homogeneous metric spaces.\ Notice that $G_i\cdot p_i$ converges in the GH-sense to $X$.\ Then by \cite[Theorem 1.1]{gelander2013limits} and \cite[Theorem 2.2.4]{benjamini2017scaling}, $X$ is a torus with an invariant metric.\ Since $X$ is a Riemannian manifold, $X$ is a flat torus. \end{remark} Below, we will present a number of results that follow from Theorem \ref{thm-almost homogeneous rcd}.\ The following is readily obtained from Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{thm-zamora} (see also Remark \ref{rem-simply connected}). \begin{proposition}\label{prop-simply connected} Let $K\in\mathbb{R}$, $N\in(1,\infty)$ and $(X_i,d_i,m_i,p_i)$ be a sequence of almost homogeneous pointed RCD$(K,N)$ spaces converging to $(X,d,m,p)$ in the pmGH-sense.\ Assume that $\pi_1(X_i)$ are finite groups.\ Then $X$ is isometric to a simply connected nilpotent Lie group with a left invariant Riemannian metric.\ In particular, $X$ is diffeomorphic to $\mathbb{R}^n$, where $n=\dim(X)$. 
\end{proposition} If we additionally assume that $X_i$ are compact in Proposition \ref{prop-simply connected}, then the sequence $X_i$ must be collapsed. \begin{proposition}\label{prop-collapse} Let $(X_i,d_i,m_i,p_i)$ and $(X,d,m,p)$ be as in Proposition \ref{prop-simply connected}.\ Additionally, assume $X_i$ are compact and not single points.\ Then $\dim(X) \leq \liminf\limits_{i\to\infty} \dim_{ess}(X_i)-1 $. \end{proposition} \begin{proof} We can assume without loss of generality that $\dim_{ess}(X_i)=k \geq1$ for all $i$.\ Then by \cite[Theorem 1.5]{kitabeppu2019sufficient}, $\dim(X)\leq k$.\ Let $(X_i,d_i)$ be $(\eps_i,G_i)$-homogeneous with $\eps_i\to0$.\ Since $G_i$ are finite groups, we can apply \cite[Theorem A]{santos2020invariant} to obtain $G_i$-invariant measures $m_{G_i}$ such that $(X_i,d_i,m_{G_i})$ are also RCD$(K,N)$.\ Recall that the essential dimension is invariant under changes of measure (see \cite[Remark 2.12]{brena2023weakly} for instance). Let us now argue by contradiction.\ Suppose that $\dim(X)=k$.\ Then it follows from Theorem \ref{nss} that any sequence of small subgroups $W_i\leq\op{Iso}(X_i)$ will be trivial for large enough $i$.\ Then by Lemma \ref{lem:almost-homogeoeus-subgroups} and Lemma \ref{lem:free}, there exist subgroups $G_i^{\prime}\leq G_i$ such that $\diam(X_i/G_i^{\prime})\to0$ and $G_i^{\prime}$ acts freely on $X_i$ for large $i$.\ So $X_i\to X_i/G_i^{\prime}$ is a covering.\ Let $\tilde{X}_i$ be the universal cover of $X_i$ and $X_i/G_i^{\prime}$.\ Note that $\tilde{X}_i$ are compact. By \cite[Theorem 7.24]{galaz2018quotients}, $X_i/G_i^{\prime}$ endowed with the quotient metric and quotient measure is an RCD$(K,N)$ space.\ Then it follows from \cite[Theorem 2]{santos2023fundamental} that $\diam(\tilde{X}_i)\to0$.\ Hence $\diam(X_i)\to0$ and this leads to a contradiction. 
\end{proof} For an RCD$(K,N)$ space with $K>0$, the Bonnet-Myers theorem on RCD spaces \cite{sturm2006geometry2} yields a uniform upper diameter bound and the finiteness of the fundamental group.\ So the following corollary follows from Proposition \ref{prop-simply connected}. \begin{corollary}\label{cor-bonnet myers} Let $(X_i,d_i,m_i)$ be a sequence of almost homogeneous RCD$(K,N)$ spaces for some $K>0$ and $N\in(1,\infty)$.\ Then $\diam(X_i)\to0$. \end{corollary} \begin{remark} Corollary \ref{cor-bonnet myers} can also be obtained from (3) and (4) in Theorem \ref{thm-almost homogeneous rcd}. \end{remark} Recall that if an RCD$(0,N)$ space $(X,d,m)$ admits a discrete cocompact group $G\leq\op{Iso}(X)$, then $X$ splits as $\mathbb{R}^k\times Y$, where $Y$ is a compact RCD$(0,N-k)$ space (see Theorem \ref{gigli} and Remark \ref{rem-splitting}).\ We can derive the following corollary directly from Proposition \ref{prop-simply connected} and a proof by contradiction. \begin{corollary}\label{cor-pan rong} Let $(X,d,m)$ be an RCD$(0,N)$ space for some $N\geq1$.\ Assume that $\pi_1(X)$ is finite and $G$ is a discrete subgroup of $\op{Iso}(X)$ with $\diam(X/G)<\infty$.\ Then $(X,d,m)$ is isomorphic to $(\mathbb{R}^k\times Y, d_{\mathbb{R}^k} \times d_Y, \mathcal{H}^k \otimes \nu)$, where $(Y,d_Y,\nu)$ is a compact RCD$(0,N-k)$ space with $\op{diam}(Y)\leq C(N)\cdot\op{diam}(X/G)$. \end{corollary} \begin{proof} Without loss of generality, we can assume that $\diam(X/G)=1$.\ Let us argue by contradiction.\ Suppose that there is a sequence of $(1,G_i)$-homogeneous RCD$(0,N)$ spaces $(X_i,d_i,m_i)$ which are isomorphic to $\mathbb{R}^{k_i}\times Y_i$, where $Y_i$ are compact RCD$(0,N-k_i)$ spaces with finite fundamental groups and $\diam(Y_i)\to\infty$.\ We may also assume $k_i\equiv k$.
Let $r_i=\diam(Y_i)$.\ Then $r_i^{-1}(\mathbb{R}^k\times Y_i)\to \R^k\times Y$ in the pGH-sense for some space $Y$ with $\diam(Y)=1$.\ Due to Proposition \ref{prop-simply connected}, this is a contradiction. \end{proof} The above corollary was also obtained by Pan-Rong in \cite{pan2018ricci}, where they considered Riemannian manifolds with $\Ric\geq0$. Theorem \ref{thm-almost nonnegative ricci-limit} is a direct consequence of Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{gigli}. \begin{proof}[Proof of Theorem \ref{thm-almost nonnegative ricci-limit}] It follows from Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{gigli} that $X$ is isometric to $\mathbb{R}^k\times N^{n-k}$ for some $0\leq k\leq n\leq N$, where $N^{n-k}$ is a nilpotent Lie group with a left invariant Riemannian metric which contains no lines.\ From the proof of Theorem \ref{thm-almost homogeneous rcd}, we know that $X$ is homogeneous, which implies that $N^{n-k}$ is homogeneous.\ Then $N^{n-k}$ is compact (a non-compact homogeneous geodesic space contains a line) and hence is a torus.\ Since the metric is Riemannian and invariant, $N^{n-k}$ is a flat torus $\mathbb{T}^{n-k}$. \end{proof} \begin{remark} We note that Theorem \ref{thm-almost nonnegative ricci-limit} can also be derived from Theorem \ref{thm-almost homogeneous rcd} and \cite[Theorem 1.1]{huang2020non}. \end{remark} Theorem \ref{thm-almost nonnegative ricci-limit} and Proposition \ref{prop-simply connected} directly imply the following corollary. \begin{corollary}\label{cor-almost ricci positive} Let $(X_i,d_i,m_i,p_i)$ be a sequence of almost homogeneous RCD$(-\delta_i,N)$ spaces converging to $(X,d,m,p)$ in the pmGH-sense with $\delta_i\to0$.\ Suppose that $\pi_1(X_i)$ are finite groups.\ Then $X$ is isometric to $\mathbb{R}^{n}$ for some $n\leq N$. \end{corollary} The proofs of Corollary \ref{cor-homeomorphic to tori} and Theorem \ref{thm-covering and almost homogeneous} are immediate.
\begin{proof}[Proof of Corollary \ref{cor-homeomorphic to tori}] We argue by contradiction.\ Suppose that there is a sequence of almost homogeneous RCD$(K,N)$ spaces $(X_i,d_i,\mathcal{H}^N)$ with $\mathcal{H}^N(X_i)\geq v$ and $\diam(X_i)\leq D$ such that no $X_i$ is bi-H\"older homeomorphic to the flat torus $\mathbb{T}^N$.\ Up to a subsequence, $X_i$ converges in the GH-sense to $Y$ and by Theorem \ref{thm-almost homogeneous rcd} (4) and Theorem \ref{thm:volume-continuity}, $Y$ is isometric to a flat torus $\mathbb{T}^N$.\ Then by Theorem \ref{thm:reifenberg-weak}, $X_i$ is bi-H\"older homeomorphic to $\mathbb{T}^N$ for sufficiently large $i$, which leads to a contradiction. Moreover, if $X$ is a Riemannian manifold, then the same argument shows that $X$ is diffeomorphic to $\mathbb{T}^N$, applying \cite[Theorem A.1.12]{cheeger1997structure} instead of Theorem \ref{thm:reifenberg-weak}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-covering and almost homogeneous}] The proof follows easily from Theorem \ref{thm-almost homogeneous rcd} (4), Theorem \ref{wang-fibration} and an argument by contradiction.\ For the smooth case, substitute Theorem \ref{thm-local covering geometry} for Theorem \ref{wang-fibration} and the conclusion follows from the same argument. \end{proof} \section{Topological rigidity of almost homogeneous non-collapsed RCD spaces}\label{sec-4} The main purpose of this section is to prove Theorem \ref{thm-rcd-maximal rank}.\ The proof will be divided into two parts (Theorem \ref{thm-rcd rigidity} and Theorem \ref{thm-rcd biHolder}), based on Zamora-Zhu's results in \cite{zamora2024topological} and Wang's arguments in \cite{wang2024non}, respectively.\ In addition, we will simultaneously obtain a proof of Theorem \ref{thm-orbifold almost flat}. Let us first review the following theorem from \cite{zamora2024topological}.
\begin{theorem}[\citeonline{zamora2024topological}]\label{thm-zamora zhu} For each $K\in\mathbb{R}$ and $N\geq1$, there is $\eps=\eps(K,N)$ such that if $(X,d,m)$ is an $(\eps,G)$-homogeneous RCD$(K,N)$ space, then $\op{rank}(G)\leq N$ and in the case of equality, $X$ is homeomorphic to $\mathbb{R}^N$. \end{theorem} The contractibility of $X$ is particularly powerful when paired with the following theorem, which is an observation by Kapovitch in \cite{kapovitch2021mixed}.\ We refer to \cite{kapovitch2021mixed} (see also \cite{zamora2024topological}) for the proof. \begin{theorem}[\citeonline{kapovitch2021mixed}]\label{thm-Borel conj} Let $M$ be a closed aspherical topological manifold with $\pi_1(M)$ virtually nilpotent.\ Then $M$ is homeomorphic to an infranilmanifold. \end{theorem} By the definition of $\op{rank}(G)$ (see Definition \ref{def-rank}), the group $G$ in Theorem \ref{thm-zamora zhu} is virtually polycyclic.\ Indeed, due to Breuillard-Green-Tao's result \cite{breuillard2012structure}, $G$ is virtually nilpotent. \begin{lemma}\label{lem-nilpotent group} For each $K\in\mathbb{R}$ and $N\geq1$, there is $\eps=\eps(K,N)$ such that if $(X,d,m)$ is an $(\eps,G)$-homogeneous RCD$(K,N)$ space, then $G$ is finitely generated virtually nilpotent with $\op{rank}(G)\leq N$. \end{lemma} \begin{proof} Fix $p\in X$.\ By \cite[Lemma 2.5]{zamora2024limits}, $G$ is generated by the set \[S:=\left\lbrace g\in G :\ d(gp,p)\leq 3\cdot\diam(X/G)\right\rbrace. \] Then by \cite[Corollary 1.15]{breuillard2012structure}, there is a small $\eps=\eps(K,N)$ such that $G$ is finitely generated virtually nilpotent.\ It follows from Theorem \ref{thm-zamora zhu} that $\op{rank}(G)\leq N$.
\end{proof} When $\rank(G)$ attains its maximum value $N$, $X$ is homeomorphic to $\mathbb{R}^N$ (Theorem \ref{thm-zamora zhu}) and in fact, the converse also holds.\ In addition, we can prove the first part of Theorem \ref{thm-rcd-maximal rank}, which demonstrates a set of conditions equivalent to maximal rank.\ Also, note that the first statement in Theorem \ref{thm-orbifold almost flat} is just a corollary. \begin{theorem}\label{thm-rcd rigidity} For each $K\in\mathbb{R}$ and $N\geq1$, there are $\epsilon=\epsilon(K,N)$ and $v=v(K,N)$ such that for any $(\eps,G)$-homogeneous RCD($K,N$) space $(X,d,m)$, the following are equivalent: \begin{enumerate} \item $X$ is homeomorphic to $\mathbb{R}^N$; \item $X$ is a contractible topological $N$-manifold without boundary; \item $\op{rank}(G)$ is equal to $N$; \item $X$ is simply connected and $\mathcal{H}^N(B_1(x))\geq v$ for some $x\in X$; \item $\pi_1(X)$ is finite and $\mathcal{H}^N(B_1(x))\geq v$ for some $x\in X$. \end{enumerate} \end{theorem} \begin{proof} We will prove that (3)$\Rightarrow$(4)$\Rightarrow$(5)$\Rightarrow$(3) and (1)$\Rightarrow$(2)$\Rightarrow$(3)$\Rightarrow$(1). (3)$\Rightarrow$(4): By Theorem \ref{thm-zamora zhu} and Theorem \ref{thm:dim}, $X$ is simply connected and $m=c\mathcal{H}^N$ for some $c>0$.\ We only need to show $\mathcal{H}^N(B_1(x))\geq v$ for some $x\in X$.
By contradiction, we assume that there is a sequence of $(\eps_i,G_i)$-homogeneous RCD$(K,N)$ spaces $(X_i,d_i,\mathcal{H}^N)$ with $\eps_i\to0$, such that $\rank(G_i)=N$ and $\mathcal{H}^N(B_1(x_i))\to0$ for some sequence $x_i\in X_i$.\ By compactness, Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{thm:volume-continuity}, we can assume $(X_i,x_i)$ converges in the pGH-sense to $(X,x)$, where $X$ is a nilpotent Lie group of dimension $n\leq N-1$.\ Let $G_i^{\prime}$ be as in Lemma \ref{lem:almost-homogeoeus-subgroups}.\ Then by Remark \ref{rem:rank-inequality}, $\rank(G_i^{\prime})\leq N-1$ which implies $\rank(G_i)\leq N-1$.\ This leads to a contradiction. (4)$\Rightarrow$(5): This is trivial. (5)$\Rightarrow$(3): Notice that $\dim_{\mathcal{H}}(X)=N$ and by Theorem \ref{thm:dim}, $m=c\mathcal{H}^N$ for some $c>0$.\ We then argue by contradiction.\ Assume that there is a sequence of $(\eps_i,G_i)$-homogeneous RCD$(K,N)$ spaces $(X_i,d_i,\mathcal{H}^N)$ with $\eps_i\to0$, such that $\pi_1(X_i)$ are finite, $\rank(G_i)<N$ and $\mathcal{H}^N(B_1(x_i))\geq v>0$ for some sequence $x_i\in X_i$.\ Due to compactness, Theorem \ref{thm-almost homogeneous rcd} and Theorem \ref{thm:volume-continuity}, we can assume $(X_i,x_i)$ converges in the pGH-sense to $(X,x)$, where $X$ is a nilpotent Lie group of dimension $N$.\ Then by Lemma \ref{lem:malcev-construction} and Remark \ref{rem-simply connected}, $\rank(G_i)=N$ which leads to a contradiction. (1)$\Rightarrow$(2): This is trivial. 
(2)$\Rightarrow$(3): By Lemma \ref{lem-nilpotent group}, $G$ is a finitely generated virtually nilpotent group.\ Then $G$ contains a torsion free nilpotent subgroup of finite index (see \cite{kargapolov1979fundamentals}), denoted by $\Gamma$.\ Notice that $\Gamma\leq\op{Iso}(X)$ is a discrete group acting freely on $X$.\ Since $X$ is a contractible topological $N$-manifold, $X/\Gamma$ is a closed aspherical topological manifold.\ Then by Theorem \ref{thm-Borel conj}, $X/\Gamma$ is homeomorphic to an $N$-dimensional nilmanifold.\ So $\Gamma=\pi_1(X/\Gamma)$ has rank $N$ and thus, $\rank(G)=N$. (3)$\Rightarrow$(1): This follows from Theorem \ref{thm-zamora zhu}. \end{proof} We now proceed to prove the last statement in Theorem \ref{thm-rcd-maximal rank} and Theorem \ref{thm-orbifold almost flat}.\ Notice that we only need to work on non-collapsed RCD$(K,N)$ spaces $(X,d,\mathcal{H}^N)$.\ Also, if $G$ is isomorphic to an almost-crystallographic group of dimension $N$, then $\op{rank}(G)=N$.\ Therefore, we only need to show the following theorem. \begin{theorem}\label{thm-rcd biHolder} For each $K\in\mathbb{R}$ and $N\geq1$, there is $\epsilon=\epsilon(K,N)$ such that for any $(\eps,G)$-homogeneous RCD($K,N$) space $(X,d,\mathcal{H}^N)$, if $G$ does not contain a non-trivial finite normal subgroup and $\op{rank}(G)=N$, then $X/G$ is bi-H\"older homeomorphic to an $N$-dimensional infranil orbifold $\mathcal{N}/\Gamma$, where $\mathcal{N}$ is a simply connected nilpotent Lie group endowed with a left invariant metric and $G$ is isomorphic to $\Gamma$. Furthermore, if $X$ is a smooth Riemannian manifold, then the Riemannian orbifold $X/G$ is diffeomorphic to an $N$-dimensional infranil orbifold. \end{theorem} First note that Theorem \ref{thm-almost crystallographic} and Lemma \ref{lem-nilpotent group} will imply that for a small $\eps$, the group $G$ in Theorem \ref{thm-rcd biHolder} is isomorphic to an almost-crystallographic group of dimension $N$. 
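Before turning to the proof, we record a simple example indicating why orbifolds, rather than manifolds, are unavoidable in Theorem \ref{thm-rcd biHolder}.\ Let $X=\mathbb{R}^2$ with the Euclidean metric and let $G_\eps=\eps\mathbb{Z}^2\rtimes\mathbb{Z}_2$, where $\mathbb{Z}_2$ acts by $v\mapsto-v$.\ Then $(X,d,\mathcal{H}^2)$ is $(\eps,G_\eps)$-homogeneous and $G_\eps$ is a crystallographic group of dimension $2$ with $\op{rank}(G_\eps)=2$.\ Moreover, $G_\eps$ contains no non-trivial finite normal subgroup: the fixed-point set of a finite normal subgroup is a $G_\eps$-invariant affine subspace of $\mathbb{R}^2$, which by cocompactness must be all of $\mathbb{R}^2$, so the subgroup is trivial.\ The quotient $X/G_\eps$ is the flat ``pillowcase'', a $2$-sphere with four cone points of angle $\pi$; it is a flat (in particular infranil) orbifold but not an infranilmanifold.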
Assume that Theorem \ref{thm-rcd biHolder} does not hold.\ Then after rescaling the metrics, there is a sequence of $(\eps_i,G_i)$-homogeneous RCD$(-\eps_i,N)$ spaces $(X_i,d_i,\mathcal{H}^N)$ with $\eps_i\to0$, such that each $G_i$ is isomorphic to an almost-crystallographic group of dimension $N$ and $X_i/G_i$ is not bi-H\"older homeomorphic to any infranil orbifold of the form $\nn_i/G_i$. Let $G_i^{\prime}$ be the bounded index normal subgroups of $G_i$ in Lemma \ref{lem:almost-homogeoeus-subgroups}.\ Then $\op{rank}(G_i^{\prime})=N$ and it follows from \cite[Lemma 2.6]{zamora2024limits} that $\diam(X_i/G_i^{\prime})\to0$.\ Due to Remark \ref{rem:rank-inequality} and Corollary \ref{cor-almost ricci positive}, we have the following diagram: \[\xymatrix{ (X_{i},p_{i},G_{i}^{\prime}) \ar[r]^{eqGH} \ar[d] & (\R^N,0,G) \ar[d] \\ X_{i}/G_{i}^{\prime} \ar[r]^{GH} & \text{ pt }.} \] By Lemma \ref{lem:almost-translations}, $G$ acts freely and transitively on $\R^N$ and hence, $G=\R^N$. By Theorem \ref{nss}, the groups $G_i$ admit no non-trivial small subgroups.\ Then by Lemma \ref{lem:malcev-construction} and Remark \ref{rem:rank-inequality}, $G_i^{\prime}$ are torsion free nilpotent groups.\ Let $\nn_i$ be the Mal'cev completion of $G_i^{\prime}$.\ It follows from Proposition \ref{prop-Malcev} that $G_i$ is an almost-crystallographic group modeled on $\nn_i$ for each $i$. Our goal is to find a left invariant metric on $\mathcal{N}_i$ so that $X_i/G_i$ is bi-H\"older homeomorphic to $\nn_i/G_i$.\ The proof is essentially the same as in \cite{wang2024non}.\ For the convenience of the reader, we recall the construction of the left invariant metric on $\mathcal{N}_i$ from \cite[Lemma 4.5]{wang2024non}.
\begin{lemma}[\citeonline{wang2024non}]\label{metric} Let $(X_i,d_i,p_i,G_i^{\prime})$ and $\nn_i$ be as above.\ For any $\epsilon\in (0,1)$ and large $i$, $\mathcal{N}_i$ admits a left invariant metric $g_{\mathcal{N}_i}$ with $\mathrm{inj}_{\mathcal{N}_i} \ge \frac{1}{\epsilon}$.\ Moreover, there is $\epsilon_i \to 0$ so that $\forall g \in \mathcal{N}_i$, $B_{\frac{1}{\epsilon}}(g) \subset \mathcal{N}_i$ is $\epsilon_i$-$C^4$-close, by $\mathrm{exp}_g^{-1}$, to the $\frac{1}{\epsilon}$-ball in $T_{g}\mathcal{N}_i$ with the flat metric. \end{lemma} \begin{proof} Note that the groups $G_i^{\prime}$ admit no non-trivial small subgroups and $\op{rank}(G_i^{\prime})=N$.\ Then by Lemma \ref{lem:malcev-construction} and Remark \ref{rem:rank-inequality}, for $i$ large enough there are generators $u_{1,i}, \ldots , u_{N,i} \in G_ i ^{\prime }$, and $C_{1,i}, \ldots , C_{N,i} \in \mathbb{R}^+$ with the following properties: \begin{enumerate} \item There are polynomials $Q_i : \mathbb {R}^N \times \mathbb{R}^N\to \mathbb{R}^N$ of degree $\leq d(N)$ giving the group structures on $\R^N$ by $x_1 \cdot x_2 = Q_i (x_1, x_2)$ such that for each $i$, $G_ i ^{\prime}$ is isomorphic to the group $(\Z^N,Q_i|_{\Z^N\times\Z^N})$ and the group $(\R^N,Q_i)$ is isomorphic to $\nn_i$. \item There is $C > 0 $ such that the set \[ P_i : = P ( u_{1,i}, \ldots , u_{N,i} ; C_{1,i}, \ldots , C_{N,i} ) \subset G_ i ^{\prime} \] is a nilprogression in $C$-normal form with $\thi (P_i ) \to \infty $. \label{property:zamora-3} \item \label{property:zamora-4} For each $\varepsilon > 0 $ there is $\delta > 0 $ such that \begin{align} G( \delta P_ i ) \subset \{ & g \in G_i ^{\prime} \, \vert \, d_i(g p_i, p_i ) \leq \varepsilon \} , \label{eq:grid-small} \end{align} for $i$ large enough. 
\end{enumerate} By \eqref{eq:grid-small}, there is $\delta_1 > 0 $ with $G(\delta_1 P_i) \subset \{ g \in G_ i ^{\prime} \vert d_i(gp_i,p_i) \leq 1 \} $.\ Hence there is an integer $D \in \mathbb{N} $ so that \begin{equation} \label{eq:grid-small-2} G(P_i) \subset G(\delta_1 P_i) ^{D} \subset \{ g \in G_ i ^{\prime} \vert d_i(gp_i, p_i) \leq D \} . \end{equation} Let $g_{j,i}:=u_{j,i}^{\lfloor \frac{C_{j,i}}{C}\rfloor }$ and $v_{j,i} : = \log (g_{j,i})\in T_e \nn_i $.\ Then $\{ v_{1,i}, \ldots , v_{N,i}\} $ is a strong Mal'cev basis of the Lie algebra $T_{e}\nn_i$ (see \cite{zamora2024limits}). Notice that the groups $G_i^{\prime}$ converge equivariantly to the group of translations in $\mathbb{R}^N$.\ By \eqref{eq:grid-small-2}, after passing to a subsequence, for each $j \in \{ 1, \ldots , N \}$ we can assume $g_{j,i}$ converges equivariantly to some $v_j \in \mathbb{R}^N$.\ We may identify $\R^N$ with its Lie algebra and $\{ v_1, \ldots , v_N \}$ is a basis of $\mathbb{R}^N$. Define the left invariant metric $g_{\mathcal{N}_i}$ by the inner product on $T_e\nn_i$ as follows: $$g_{\mathcal{N}_i}(v_{j_1,i},v_{j_2,i}) = \langle v_{j_1}, v_{j_2} \rangle,$$ where $1 \le j_1,j_2 \le N$ and the right-hand side is the inner product in $\mathbb{R}^N$. Since $\{ v_{j,i} , 1 \le j \le N\}$ is a strong Mal'cev basis of $T_e\nn_i$, for any $1 \le j_1 < j_2 \le N$, \begin{align} [v_{j_1,i},v_{j_2,i}]= \sum_{j=j_2+1}^N a_{j_1j_2,i}^j\ v_{j,i}.
\end{align} It is proven in \cite[Lemma 2.64 and Proposition 8.2]{zamora2024limits} that the structure coefficients of $T_e\nn_i$ with respect to the basis $\{ v_{1,i}, \ldots , v_{N,i}\} $ converge to the structure coefficients in $\mathbb{R}^N$ with respect to $\{ v_1, \ldots , v_N \}$ as $i \to \infty$.\ Thus $a_{j_1j_2,i}^j \to 0$ as $i \to \infty$, since the limit group is abelian.\ Define $a_{j_1j_2,i}^j=0$ if $j \le j_1$ or $j \le j_2$.\ Then for any $ 1 \le j_1,j_2,j_3 \le N$, $$g_{\mathcal{N}_i}(\nabla_{v_{j_1,i}} v_{j_2,i} , v_{j_3,i}) = \frac{1}{2} (a_{j_1j_2,i}^{j_3} - a_{j_2j_3,i}^{j_1} + a_{j_3j_1,i}^{j_2}).$$ Note that all terms on the right-hand side are constant (depending on $i$) and converge to $0$ as $i \to \infty$.\ In particular, the covariant derivatives of the Riemannian curvature tensor of $g_{\mathcal{N}_i}$ satisfy $$|(\nabla^{g_{\mathcal{N}_i}})^k Rm_{g_{\mathcal{N}_i}}| \le \epsilon_i, \ 0 \le k \le 3,$$ where $\epsilon_i \to 0$.\ Then one can easily verify that this metric fulfills the conditions. \end{proof} \begin{remark} In \cite{wang2024non}, Wang used the results in \cite{breuillard2012structure,zamora2024limits} to embed $G_i'$ as a lattice in a simply connected nilpotent Lie group $\nn_i$.\ In fact, since $G_i'$ is torsion free nilpotent, one can directly use its Mal'cev completion.\ See Remark \ref{cor-rcd} for the reason why the groups $G_i$ and $G_i'$ are torsion free in Wang's theorem. \end{remark} From now on, $\mathcal{N}_i$ is always endowed with the metric $g_{\mathcal{N}_i}$ constructed in Lemma \ref{metric}. Define $G_i^{\prime}(p_i,D):=\{g \in G_i ^{\prime}\vert d_i(gp_i, p_i)\leq D\}$.\ Assume that a pseudo-group $G$ acts on two metric spaces $X_1,X_2$ separately by isometries.\ Following \cite{wang2024non}, we say a map $h:X_1 \to X_2$ is $\epsilon$-almost $G$-equivariant if $d(h(gx),gh(x)) < \epsilon$ for any $ x \in X_1, g \in G$. The following two lemmas come from \cite{wang2024non,wang2023limit}.
\begin{lemma}[\citeonline{wang2024non}]\label{localeg} For any $ \epsilon > 0$, let $B_{\frac{1}{\epsilon}}(p_i) \subset X_i$ and $B_{\frac{1}{\epsilon}}(e) \subset \mathcal{N}_i$.\ Then there exists an $\epsilon_i$-GHA $h_i': B_{\frac{1}{\epsilon}}(p_i) \to B_{\frac{1}{\epsilon}}(e)$ which is $\epsilon_i$-almost $G_i^{\prime}(p_i,{\frac{1}{\epsilon}})$-equivariant if it is well-defined, where $\epsilon_i \to 0$ as $i \to \infty$. \end{lemma} \begin{lemma}[\citeonline{wang2023limit,wang2024non}]\label{extension} The map $h_i^{\prime}$ in Lemma \ref{localeg} can be extended to a global map $h_i:X_i \to \mathcal{N}_i$, which is an $\epsilon_i$-GHA on any $\frac{1}{\epsilon}$-ball and $\epsilon_i$-almost $G_i'$-equivariant with $\eps_i\to0$. \end{lemma} Now we can follow the arguments in \cite{wang2024non} to complete the proof of Theorem \ref{thm-rcd biHolder}. \begin{proof}[Proof of Theorem \ref{thm-rcd biHolder}] Let us argue by contradiction.\ There is a sequence of $(\eps_i,G_i)$-homogeneous RCD$(-\eps_i,N)$ spaces $(X_i,d_i,\mathcal{H}^N)$ with $\eps_i\to0$ and $\op{rank}(G_i)=N$.\ Also, we have already established the following diagram: \[\xymatrix{ (X_{i},p_{i},G_{i}^{\prime}) \ar[r]^{eqGH} \ar[d] & (\R^N,0,\R^N) \ar[d] \\ X_{i}/G_{i}^{\prime} \ar[r]^{GH} & \text{ pt },} \] where for each $i$, $G_i'$ is a normal subgroup of bounded index in $G_i$, embedded as a lattice in an $N$-dimensional simply connected nilpotent Lie group $\mathcal{N}_i$ and $G_i$ is an almost-crystallographic group modeled on $\nn_i$.\ We also assumed that none of $X_i/G_i$ is bi-H\"{o}lder homeomorphic to the infranil orbifold $\nn_i/G_i$. By the construction of the metric $g_{\nn_i}$ in Lemma \ref{metric}, the lattice $G_i'$ is $\epsilon_i$-dense in $\mathcal{N}_i$.\ Since $\diam(X_i/G_i')\to0$, the map $h_i$ in Lemma \ref{extension} is also $\epsilon_i$-almost $G_i$-equivariant. 
By the same arguments as in the proof of \cite[Theorem A]{wang2024non}, we can assume that $G_i$ acts on $\mathcal{N}_i$ by isometries and for any small $\eps>0$, there is a normal subgroup $G_i''$ in $G_i'$ of finite index, which is also normal in $G_i$, so that $G_i'' \cap B_{\frac{1}{\epsilon}}(e) = \{e\}$. Since $G_i'' \cap B_{\frac{1}{\epsilon}}(e) = \{e\}$, we can apply Lemma \ref{metric} to conclude that the injectivity radius of $\mathcal{N}_i/G_i''$ is at least ${\frac{1}{\epsilon}}$.\ For any $y \in \mathcal{N}_i/G_i''$, $B_{\frac{1}{\epsilon}}(y) \subset \mathcal{N}_i/G_i''$ is $\epsilon_i$-$C^4$-close to the $\frac{1}{\epsilon}$-ball in the tangent space $T_{y}(\mathcal{N}_i/G_i'')$ with the flat metric. Since $h_i$ is $\epsilon_i$-almost $G_i$-equivariant, we can reduce $h_i$ to a map $$\bar{h}_i: X_i/G_i''\to \mathcal{N}_i/G_i'',$$ which is an $\epsilon_i$-GHA on any ${\frac{1}{\epsilon}}$-ball and $\epsilon_i$-almost $G_i/G_i''$-equivariant. Since $G_i/G_i''$ is finite, we can apply \cite[Theorem 3.5]{wang2024non} to $$\bar{h}_i: (X_i/G_i'',G_i/G_i'')\longrightarrow (\mathcal{N}_i/G_i'',G_i/G_i'').$$ Thus there is a $(G_i/G_i'')$-equivariant map $f_{G_i/G_i''}: X_i/G_i'' \to \mathcal{N}_i/G_i''$, which is a harmonic $(N,\Phi(\epsilon|N))$-splitting on any $\frac{1}{5\epsilon}$-ball.\ Then by Theorem \ref{Rei}, $$(1-\Phi(\epsilon|N))d_i(x,y)^{1+\Phi(\epsilon|N)} \le d(f_{G_i/G_i''}(x),f_{G_i/G_i''}(y)) \le (1+\Phi(\epsilon|N))d_i(x,y),$$ for any $x,y \in X_i/G_i''$ with $d_i(x,y) \le \frac{1}{10\epsilon}$. Since $f_{G_i/G_i''}$ is $(G_i/G_i'')$-equivariant, it can be reduced to a bi-H\"{o}lder map $f: X_i/G_i \to \mathcal{N}_i/G_i$ between the quotient spaces.\ This contradicts our assumption.
Furthermore, if $X_i$ is a Riemannian manifold, then $X_i/G_i''$ is also a Riemannian manifold, since the group $G_i''$ acts freely on $X_i$.\ So by Theorem \ref{Rei}, the $(G_i/G_i'')$-equivariant map $f_{G_i/G_i''}: X_i/G_i'' \to \mathcal{N}_i/G_i''$, restricted to any $\frac{1}{10\epsilon}$-ball, is a diffeomorphism onto its image.\ Notice that $X_i/G_i''\to X_i/G_i$ and $\nn_i/G_i''\to\nn_i/G_i$ are orbifold coverings.\ Hence, the reduced map $f: X_i/G_i \to \mathcal{N}_i/G_i$ is a diffeomorphism between orbifolds.\ This completes the proof. \end{proof} Combining Theorem \ref{thm-rcd rigidity} and Theorem \ref{thm-rcd biHolder}, both Theorem \ref{thm-rcd-maximal rank} and Theorem \ref{thm-orbifold almost flat} are readily obtained. \begin{remark} In the last statement of Theorem \ref{thm-orbifold almost flat}, it is expected that the assumptions requiring a good orbifold and an orbifold fundamental group without non-trivial finite normal subgroups can be eliminated.\ Due to \cite[Proposition 1.4]{ding2011restriction}, any almost flat orbifold is an infranil orbifold.\ So it might be more natural to seek a nearby almost flat metric under the conditions in Theorem \ref{thm-orbifold almost flat}.\ This is achieved in the manifold case via Ricci flow smoothing techniques (see \cite{huang2020collapsed}). \end{remark} \begin{remark}\label{cor-rcd} Although in Theorem \ref{thm-rcd-maximal rank} we assume that the group $G$ does not contain a non-trivial finite normal subgroup, Theorem \ref{thm-rcd-almostflat} is still a corollary of Theorem \ref{thm-rcd-maximal rank}.\ This is due to the fact that if $Y$ is a closed aspherical topological manifold, then $\pi_1(Y)$ is torsion free.\ Notice that any closed topological manifold is homotopy equivalent to a CW complex \cite{kirby1969triangulation} and the fundamental group of an aspherical finite-dimensional CW complex is torsion free \cite{luck2012aspherical}.
\end{remark} \section{Rigidity and regularity of almost homogeneous Einstein metrics}\label{sec-5} In this section, we mainly focus on the rigidity and $\eps$-regularity for almost homogeneous Riemannian orbifolds and manifolds with bounded Ricci curvature.\ We first give the proof of Theorem \ref{thm-einstein orbifold}, which is an orbifold version of \cite[Theorem 0.2]{si2024rigidity}. \begin{proof}[Proof of Theorem \ref{thm-einstein orbifold}] Since the implications (1)$\Rightarrow$(2)$\Rightarrow$(3) are trivial and (3) is equivalent to (4) by Theorem \ref{thm-rcd-maximal rank}, it suffices to show that (3) and (4) together imply (1). Let us argue by contradiction.\ Suppose that there is a sequence of non-flat Einstein $n$-orbifolds $(\mathcal{O}_i,g_i)$ such that $\op{Ric}_{g_i}\equiv\lambda_i$ with $\lambda_i\ge -(n-1)$, $\op{diam}(\mathcal{O}_i,g_i)\to 0$, and satisfying (3) and (4) in Theorem \ref{thm-einstein orbifold}. Consider the universal orbifold covers $(\tilde{\Or}_i,\tilde{g}_i)$.\ Due to Corollary \ref{cor-bonnet myers} and Corollary \ref{cor-pan rong}, we can assume that $\lambda_i< 0$.\ Then up to a rescaling of the metrics, we can further assume that $\lambda_i=-(n-1)$.\ Note that $\orb_i$ still converges to a point and $\op{rank}(\pi_1^{orb}(\orb_i))=n$.\ By Theorem \ref{thm-rcd-maximal rank}, $\op{vol}_{\tilde{g}_i}(B_1(\tilde{x}_i))\geq v^{\prime}(n)>0$ for some $\tilde{x}_i\in\tilde{\Or}_i$.\ Up to a subsequence, we have the following pmGH-convergence by Theorem \ref{thm:volume-continuity}, \[(\tilde{\Or}_i,\tilde{g}_i,\op{vol}_{\tilde{g}_i},\tilde{x}_i) \xrightarrow{pmGH} (\tilde{X},\tilde{g},\mathcal{H}^n,\tilde{x}),\] where by Proposition \ref{prop-simply connected}, $\tilde X $ is isometric to a simply connected nilpotent Lie group with a left invariant Riemannian metric, denoted by $\tilde{g}$.
On the other hand, for any $i$, $(|\tilde \orb_i|_{reg},g_{i, reg})$ is a smooth open Riemannian manifold with $\Ric\equiv-(n-1)$ and $\op{vol}_{\tilde{g}_i}(B_r(\tilde y_i)\cap|\tilde \orb_i|_{reg}) = \op{vol}_{\tilde{g}_i}(B_r(\tilde y_i))$ for any $\tilde{y}_i\in \tilde{\Or}_i$ and $r>0$.\ By the standard Schauder estimate, $(|\tilde \orb_i|_{reg},g_{i, reg})$ converges in the $C^{\infty}_{loc}$-norm to a full measure subset of $(\tilde{X},\tilde{g})$ (see \cite{cheeger1997structure}).\ Since $\tilde{X}$ is a Riemannian manifold, $\Ric_{\tilde{X}}\equiv-(n-1)$. By \cite[Theorem 2.4]{milnor1976curvatures}, any left invariant Riemannian metric on a nilpotent, non-abelian Lie group has directions of both strictly negative and strictly positive Ricci curvature.\ Thus, $(\tilde X, \tilde g)$ must be isometric to $\mathbb{R}^n$, which leads to a contradiction. \end{proof} Recall that if $(M,g)$ is a Riemannian manifold and $G$ is a discrete subgroup of $\op{Iso}(M)$, then $M/G$ admits a natural orbifold structure.\ Let $\tilde{M}$ be the universal cover of $M$.\ Then $\tilde{M}$ is also the universal orbifold cover of the good orbifold $M/G$.\ Moreover, if $M$ is simply connected, then $\pi_1^{orb}(M/G)=G$.\ Therefore, the following corollary is readily derived from Theorem \ref{thm-rcd-maximal rank} and Theorem \ref{thm-einstein orbifold}. \begin{corollary}\label{cor-rigidity} There are $\epsilon=\epsilon(n)>0$ and $v=v(n)>0$ such that if an $(\epsilon,G)$-homogeneous Einstein $n$-manifold $(M,g)$ satisfies $\op{Ric}_g=\lambda g$ with $\lambda\geq -(n-1)$, then the following are equivalent: \begin{enumerate} \item $\op{vol}(B_1(x))\geq v$ for some $x\in M$, and $M$ is simply connected; \item $\op{rank}(G)=n$; \item $M$ is diffeomorphic to $\mathbb{R}^n$; \item $M$ is isometric to $\mathbb{R}^n$. \end{enumerate} In particular, if we only assume $\op{vol}(B_1(x))\geq v$ for some $x\in M$, then $M$ is flat.
\end{corollary} The following proposition is a quantitative rigidity version of Corollary \ref{cor-rigidity}. \begin{proposition} Given $v>0$ and $p\in(1,\infty)$, for any $\delta>0$, there is $\epsilon=\epsilon(n,v,p,\delta)$ such that if an $(\epsilon,G)$-homogeneous $n$-manifold $(M,g)$ satisfies $|\op{Ric}-\lambda g|\leq\eps$ with $\lambda\geq -(n-1)$ and $\op{vol}(B_1(x))\geq v$, then $\int_{B_1(x)}|Rm|^p\leq \delta$. \end{proposition} \begin{proof} Argue by contradiction.\ Suppose that there exist $\delta_0>0$, a sequence $\eps_i\to0$, and $(\eps_i,G_i)$-homogeneous pointed $n$-manifolds $(M_i,g_i,x_i)$ satisfying $|\op{Ric}-\lambda_i g_i|\leq\eps_i$ with $\lambda_i \geq -(n-1)$, $\op{vol}_{g_i}(B_1(x_i))\geq v$ and $\int_{B_1(x_i)}|Rm|^p > \delta_0$. By Corollary \ref{cor-bonnet myers}, we can assume that $\lambda_i$ converges to some $\lambda_\infty \leq0$.\ Up to a subsequence, we have the following pGH-convergence \[(M_i,g_i,x_i)\xrightarrow{pGH}(X,d,x),\] where $(X,d)$ is isometric to a Riemannian manifold with $\Ric\geq\lambda_\infty$ by Theorem \ref{thm-almost homogeneous rcd}.\ Note that $(M_i,g_i,x_i)$ converges in the pointed $C^{1,\alpha}\cap W^{2,q}$-topology to $(X,g_X,x)$, where the metric $g_X$ is a weak solution of the Einstein equation $$\Delta g + Q(g,\partial g)=\lambda_\infty g,$$ in harmonic coordinate charts (see \cite{anderson1990convergence,cheeger1997structure}).\ Hence, $g_X$ is a smooth metric and $X$ is an Einstein manifold with $\Ric_{g_X}=\lambda_\infty$.\ By the same arguments as in the proof of Theorem \ref{thm-einstein orbifold}, $X$ is a flat manifold.\ Since $(M_i,g_i,x_i)$ converges in the pointed $C^{1,\alpha}\cap W^{2,q}$-topology to $(X,g_X,x)$ for any $0<\alpha<1$ and $1<q<\infty$, we have $\int_{B_1(x_i)}|Rm|^p \to 0$.\ This leads to a contradiction.
\end{proof} Let $(M,g)$ be a Riemannian manifold.\ Recall that the $C^k$-harmonic radius at $x\in M$ is defined to be the largest $r>0$ such that there exists a harmonic coordinate system on $B_r(x)$ with $C^k$-control on the metric tensor.\ Harmonic coordinates have an abundance of good properties when it comes to regularity issues.\ We refer to \cite{petersen1997convergence} for a nice introduction.\ In particular, if the Ricci curvature is uniformly bounded, then in harmonic coordinates, the metric $g_{ij}$ has \emph{a priori} $C^{1,\alpha}\cap W^{2,q}$-bounds for any $\alpha\in(0,1)$ and $q\in(1,\infty)$. Let us now proceed to prove the $\eps$-regularity theorem (Theorem \ref{thm-regularity}). \begin{proof}[Proof of Theorem \ref{thm-regularity}] Let us argue by contradiction.\ Suppose that there exists a sequence of almost homogeneous pointed $n$-orbifolds $(\orb_i,g_i,x_i)$ satisfying $|\Ric_{g_i}|\leq n-1$, $\op{vol}(B_1(x_i))\geq v$ and $\int_{B_1(x_i)}|Rm|^p\to\infty$.\ Then up to a subsequence, $(\orb_i,g_i,x_i)$ converges in the pGH-sense to $(X,d,p)$.\ By Theorem \ref{thm-almost homogeneous rcd}, $X$ is a Riemannian manifold.\ Hence, $B_1(x_i)\cap|\orb_i|_{reg}$ converges to an open subset of $(X,d,p)$ in the $C^{1,\alpha}\cap W^{2,q}$-norm for any $\alpha\in(0,1)$ and $q\in(1,\infty)$, which implies that $\int_{B_1(x_i)}|Rm|^p=\int_{B_1(x_i)\cap|\orb_i|_{reg}}|Rm|^p$ is bounded.\ This leads to a contradiction. For the last statement, notice that the $C^1$-harmonic radius $r_h$ is continuous under the $C^{1,\alpha}$-topology and hence, a similar proof applies. \end{proof} The proof of Theorem \ref{thm-bounded Ricci almostflat} is readily obtained.
\begin{proof}[Proof of Theorem \ref{thm-bounded Ricci almostflat}] By \cite[Corollary 1.2]{dai2000integral}, (1) implies (2).\ It follows from Theorem \ref{thm-almostflat} that (2), (3) and (4) are equivalent.\ Then Theorem \ref{thm-regularity} shows that (4) implies (1).\ One can adjust the constants to make (1), (2), (3) and (4) equivalent. \end{proof} \bibliographystyle{plain} \bibliography{almosthomogeneous} \end{document}
2412.20339v2
http://arxiv.org/abs/2412.20339v2
On the formal ribbon extension of a quasitriangular Hopf algebra
\documentclass[12pt,reqno]{amsart} \usepackage{graphicx} \usepackage{enumerate} \usepackage{amsmath, amsthm} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{fullpage} \usepackage{xcolor} \newtheorem{theorem}{Theorem}[subsection] \newtheorem{thm}[theorem]{Theorem} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}{Definition}[subsection] \newtheorem{example}[definition]{Example} \newtheorem{remark}[definition]{Remark} \usepackage{mathabx,amsmath,amscd, latexsym, amssymb, bbold} \newcommand{\qk}{\textcolor{red}} \usepackage{amsmath, amssymb, tikz} \DeclareMathOperator{\Hig}{Hig} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\coev}{coev} \def\bR{\mathbb{R}} \def\bC{\mathbb{C}} \def\bZ{\mathbb{Z}} \def\cZ{\mathcal{Z}} \def\cC{\mathcal{C}} \def\SL{\mathrm{SL}} \def\spa{\mathrm{span}} \def\NK{{\mathcal{K}}} \def\Kn{{\NK_n}} \def\RKn{\widetilde{D\Kn}} \def\H{\tilde H} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Rep}{Rep} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\spec}{spec} \renewcommand{\Vec}{\mathrm{Vec}} \def\Vecm{\Vec_{\bZ_2}^-} \DeclareMathOperator{\sVec}{sVec} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Irr}{Irr} \DeclareMathOperator{\Id}{Id} \newcommand{\TODO}[1]{\ClassWarning{main}{TODO: #1}\textcolor{red}{#1}} \def\bF{\mathbb{F}} \def\incl{\hookrightarrow} \def\rib{\tilde{\mathbf{v}}} \DeclareMathOperator{\irRC}{\operatorname{Irr}(\mathcal C)} \DeclareMathOperator{\inRC}{\operatorname{Ind}(\mathcal C)} \DeclareMathOperator{\fuRC}{\operatorname{Full}(\mathcal C)} \DeclareMathOperator{\StRC}{\operatorname{St}(\mathcal C)} 
\DeclareMathOperator{\HigRC}{\operatorname{Hig}(\mathcal C)} \DeclareMathOperator{\irRB}{\operatorname{Irr}(\mathcal B)} \DeclareMathOperator{\inRB}{\operatorname{Ind}(\mathcal B)} \DeclareMathOperator{\fuRB}{\operatorname{Full}(\mathcal B)} \DeclareMathOperator{\StRB}{\operatorname{St}(\mathcal B)} \DeclareMathOperator{\HigRB}{\operatorname{Hig}(\mathcal B)} \DeclareMathOperator{\lirr}{\operatorname{I}_{\textrm{irr}}} \DeclareMathOperator{\lind}{\operatorname{I}_{\textrm{ind}}} \DeclareMathOperator{\lfull}{\operatorname{I}_{\textrm{full}}} \DeclareMathOperator{\ccenterB}{Nat(\textrm{Id}_{\mcB})} \DeclareMathOperator{\ccenterC}{Nat(\textrm{Id}_{\mcC})} \newcommand{\mcA}{\mathcal{A}} \newcommand{\mcB}{\mathcal{B}} \newcommand{\mcC}{\mathcal{C}} \newcommand{\mcD}{\mathcal{D}} \newcommand{\mcE}{\mathcal{E}} \newcommand{\mcF}{\mathcal{F}} \newcommand{\mcG}{\mathcal{G}} \newcommand{\mcH}{\mathcal{H}} \newcommand{\mcI}{\mathcal{I}} \newcommand{\mcJ}{\mathcal{J}} \newcommand{\mcK}{\mathcal{K}} \newcommand{\mcL}{\mathcal{L}} \newcommand{\mcM}{\mathcal{M}} \newcommand{\mcN}{\mathcal{N}} \newcommand{\mcO}{\mathcal{O}} \newcommand{\mcP}{\mathcal{P}} \newcommand{\mcQ}{\mathcal{Q}} \newcommand{\mcR}{\mathcal{R}} \newcommand{\mcS}{\mathcal{S}} \newcommand{\mcT}{\mathcal{T}} \newcommand{\mcU}{\mathcal{U}} \newcommand{\mcV}{\mathcal{V}} \newcommand{\mcW}{\mathcal{W}} \newcommand{\mcX}{\mathcal{X}} \newcommand{\mcY}{\mathcal{Y}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\mbbA}{\mathbb{A}} \newcommand{\mbbB}{\mathbb{B}} \newcommand{\mbbC}{\mathbb{C}} \newcommand{\mbbD}{\mathbb{D}} \newcommand{\mbbE}{\mathbb{E}} \newcommand{\mbbF}{\mathbb{F}} \newcommand{\mbbG}{\mathbb{G}} \newcommand{\mbbH}{\mathbb{H}} \newcommand{\mbbI}{\mathbb{I}} \newcommand{\mbbJ}{\mathbb{J}} \newcommand{\mbbK}{\mathbb{K}} \newcommand{\mbbL}{\mathbb{L}} \newcommand{\mbbM}{\mathbb{M}} \newcommand{\mbbN}{\mathbb{N}} \newcommand{\mbbO}{\mathbb{O}} \newcommand{\mbbP}{\mathbb{P}} \newcommand{\mbbQ}{\mathbb{Q}} 
\newcommand{\mbbR}{\mathbb{R}} \newcommand{\mbbS}{\mathbb{S}} \newcommand{\mbbT}{\mathbb{T}} \newcommand{\mbbU}{\mathbb{U}} \newcommand{\mbbV}{\mathbb{V}} \newcommand{\mbbW}{\mathbb{W}} \newcommand{\mbbX}{\mathbb{X}} \newcommand{\mbbY}{\mathbb{Y}} \newcommand{\mbbZ}{\mathbb{Z}} \newcommand{\mfA}{\mathfrak{A}} \newcommand{\mfB}{\mathfrak{B}} \newcommand{\mfC}{\mathfrak{C}} \newcommand{\mfD}{\mathfrak{D}} \newcommand{\mfE}{\mathfrak{E}} \newcommand{\mfF}{\mathfrak{F}} \newcommand{\mfG}{\mathfrak{G}} \newcommand{\mfH}{\mathfrak{H}} \newcommand{\mfI}{\mathfrak{I}} \newcommand{\mfJ}{\mathfrak{J}} \newcommand{\mfK}{\mathfrak{K}} \newcommand{\mfL}{\mathfrak{L}} \newcommand{\mfM}{\mathfrak{M}} \newcommand{\mfN}{\mathfrak{N}} \newcommand{\mfO}{\mathfrak{O}} \newcommand{\mfP}{\mathfrak{P}} \newcommand{\mfQ}{\mathfrak{Q}} \newcommand{\mfR}{\mathfrak{R}} \newcommand{\mfS}{\mathfrak{S}} \newcommand{\mfT}{\mathfrak{T}} \newcommand{\mfU}{\mathfrak{U}} \newcommand{\mfV}{\mathfrak{V}} \newcommand{\mfW}{\mathfrak{W}} \newcommand{\mfX}{\mathfrak{X}} \newcommand{\mfY}{\mathfrak{Y}} \newcommand{\mfZ}{\mathfrak{Z}} \newcommand{\mfc}{\mathfrak{c}} \newcommand{\mfd}{\mathfrak{d}} \newcommand{\mfe}{\mathfrak{e}} \newcommand{\mff}{\mathfrak{f}} \newcommand{\mfg}{\mathfrak{g}} \newcommand{\mfh}{\mathfrak{h}} \newcommand{\mfi}{\mathfrak{i}} \newcommand{\mfj}{\mathfrak{j}} \newcommand{\mfk}{\mathfrak{k}} \newcommand{\mfl}{\mathfrak{l}} \newcommand{\mfm}{\mathfrak{m}} \newcommand{\mfn}{\mathfrak{n}} \newcommand{\mfo}{\mathfrak{o}} \newcommand{\mfp}{\mathfrak{p}} \newcommand{\mfq}{\mathfrak{q}} \newcommand{\mfr}{\mathfrak{r}} \newcommand{\mfs}{\mathfrak{s}} \newcommand{\mft}{\mathfrak{t}} \newcommand{\mfu}{\mathfrak{u}} \newcommand{\mfv}{\mathfrak{v}} \newcommand{\mfw}{\mathfrak{w}} \newcommand{\mfx}{\mathfrak{x}} \newcommand{\mfy}{\mathfrak{y}} \newcommand{\mfz}{\mathfrak{z}} \title{On the formal ribbon extension of a quasitriangular Hopf algebra} \date{} \author[ucsb]{Quinn T.
Kolt} \email{[email protected]} \address{Department of Mathematics, University of California, Santa Barbara, CA 93106, USA} \begin{document} \begin{abstract} Any finite-dimensional quasitriangular Hopf algebra $H$ can be formally extended to a ribbon Hopf algebra $\H$ of twice the dimension. We investigate this extension and its representations. We show that every indecomposable $H$-module has precisely two compatible $\H$-actions. We investigate the behavior of simple, projective, and M\"uger central $\H$-modules in terms of these $\H$-actions. We also observe that, in the semisimple case, this construction agrees with the pivotalization/sphericalization construction introduced by Etingof, Nikshych, and Ostrik (2003). As an example, we investigate the formal ribbon extension of odd-index doubled Nichols Hopf algebras $D\Kn$. \end{abstract} \maketitle \tableofcontents \section{Introduction} Reshetikhin and Turaev \cite{ReshetikhinTuraev1990} introduced the notion of a ribbon Hopf algebra and showed that any quasitriangular Hopf algebra $H$ can be formally extended to a ribbon Hopf algebra $\H$ of twice the dimension. These formal ribbon extensions are further discussed in \cite{Andruskiewitsch2014hopftensor} wherein Sommerh\"auser remarks that $\H$ can be factored as a cocycled crossed product of $H$ and $\bC[\bZ_2]$. This article seeks to study these extensions and their representations. Specifically, we seek a concrete understanding of the tensor category $\Rep(\H)$ in terms of $\Rep(H)$. Our interest lies in generating examples of both semisimple and non-semisimple ribbon categories, as well as better understanding braided tensor categories which are not ribbon. The question of whether every braided fusion category is ribbon remains open. This is a special case of the question of whether every fusion category has a spherical structure \cite{eno2005fusion}. 
Etingof, Nikshych, and Ostrik \cite{eno2005fusion} showed that every fusion category $\cC$ embeds into a spherical fusion category $\tilde\cC$ of twice the dimension. By Theorem \ref{pivotalization-agrees}, this construction agrees with the formal ribbon extension explored here, i.e., $\Rep(\H)\cong\widetilde{\Rep(H)}$ for semisimple $H$. Consequently, we may interpret the formal ribbon extension as a generalization of sphericalization to the non-semisimple braided case. Topological quantum field theories (TQFTs) provide another motivation for studying ribbon Hopf algebras. Reshetikhin-Turaev and Crane-Yetter-Kauffman TQFTs are semisimple (2+1)- and (3+1)-TQFTs which come from certain ribbon Hopf algebras (and more generally ribbon fusion categories) \cite{Crane1997statesum,Reshetikhin1991invariants,turaev1992modular}. Recently, there has been much interest in non-semisimple TQFTs. One potential advantage of studying non-semisimple TQFTs is that semisimple (3+1)-TQFTs are known to be unable to distinguish exotic smooth structure \cite{Reutter2023semisimple}. It is unknown whether this is also true for non-semisimple (3+1)-TQFTs. As in the semisimple case, one can build non-semisimple (2+1)- and (3+1)-TQFTs from non-semisimple ribbon categories \cite{Costantino:2023bjb,DeRenzi2022tqft,kerler2003homology}. Consequently, generating examples of ribbon Hopf algebras, as we do here, may have interesting applications to both topology and physics. The present article is organized as follows: in Section \ref{formal-ribbon-sec}, we study the basic properties of $\H$, including expanding on two known factorizations of $\H$ in terms of $H$ and $\bF[\bZ_2]$, where $\bF$ is the underlying field \cite{Andruskiewitsch2014hopftensor,ReshetikhinTuraev1990}. It follows from Sommerh\"auser's factorization that $\Rep(\H)$ fits into an exact sequence $\Vec_{\bZ_2}\to\Rep(\H)\to\Rep(H)$ of braided finite tensor categories.
However, we see in Section \ref{rep-section} that $\Rep(\H)$ can be described more precisely. In Section \ref{rep-section}, we show that every $H$-module has a compatible $\H$-action, using the holomorphic functional calculus and the model completeness of the theory of algebraically closed fields. We spend the majority of this section investigating this $\H$-action. As mentioned, we also show that, if $H$ is semisimple, $\Rep(\H)$ is isomorphic to the pivotalization/sphericalization $\widetilde{\Rep (H)}$ of $\Rep (H)$ introduced in \cite{eno2005fusion}. In Section \ref{DKn-section}, we investigate the formal ribbon extension of the Drinfeld doubles of Nichols Hopf algebras\footnote{Nichols Hopf algebras are distinct from Nichols algebras of a braided vector space. However, Nichols Hopf algebras may be constructed from Nichols algebras. The terminology ``Nichols Hopf algebra'' appears in \cite{EGNO}. The algebras $D\Kn$ for even $n$ are also known as symplectic fermion ribbon Hopf algebras \cite{farsad2022symplectic}.} of odd index. Doubled Nichols Hopf algebras $D\Kn$ are always quasitriangular but ribbon if and only if $n$ is even. Thus, for odd $n$, the algebras $D\Kn$ provide a nontrivial example of our theory. For simplicity, we build our theory for Hopf algebras. However, most of the theory (in particular, all of Section \ref{rep-section}) explored here can easily be extended to quasitriangular weak Hopf algebras, and, hence, to braided fusion categories. \section{Formal ribbon extensions}\label{formal-ribbon-sec} \subsection{Notations and terminology} We fix an algebraically closed field $\bF$ of characteristic 0. Throughout this article, we make use of sum-less Sweedler's notation (e.g., $\Delta(x) = x^{(1)}\otimes x^{(2)}$). Similarly, if $R\in H\otimes H$ is an $R$-matrix of a quasitriangular Hopf algebra $H$, we write $R = R^{(1)}\otimes R^{(2)}$. By an $H$-module, we always mean a left $H$-module. 
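As a small illustration of these conventions (a toy computation of ours, not taken from the text), let $g$ denote a grouplike element, e.g.\ the generator of $\bZ_2$ in $\bF[\bZ_2]$, and write $\bar R = \bar R^{(1)}\otimes\bar R^{(2)}$ for a second copy of $R$:

```latex
% Sum-less Sweedler notation suppresses the summation in \Delta(x)=\sum_i x_i'\otimes x_i'':
\Delta(g) = g \otimes g
\quad\Longrightarrow\quad
g^{(1)} \otimes g^{(2)} = g \otimes g.
% Leg notation places the tensor factors of R in the indicated slots of H^{\otimes 3}, e.g.
R_{13}R_{23}
  = (R^{(1)} \otimes 1 \otimes R^{(2)})(1 \otimes \bar R^{(1)} \otimes \bar R^{(2)})
  = R^{(1)} \otimes \bar R^{(1)} \otimes R^{(2)}\bar R^{(2)}.
```

In particular, the hexagon-type axiom $(\Delta\otimes\id_H)(R) = R_{13}R_{23}$ is an equality of such three-fold tensors, with the implicit sums over both copies of $R$ suppressed.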
We denote the category of finite-dimensional representations of a Hopf algebra $H$ by $\Rep(H)$ and identify it with the category of finite-dimensional $H$-modules in the standard way. For an element $h\in H$ and an $H$-module $M\in\Rep(H)$, we denote by $h\cdot -$ the map $M\to M$ given by $m\mapsto h\cdot m$. Let $H=(H, m, 1, \Delta, \epsilon, S)$ be a Hopf algebra. Then, $H$ is: \begin{enumerate} \item \textit{quasitriangular} if there is an invertible \textit{$R$-matrix} $R\in H\otimes H$ such that \begin{align*} R\Delta(x)R^{-1} &= \Delta^{\mathrm{op}}(x), & (\Delta\otimes\id_H)(R) &= R_{13}R_{23}, & (\id_H\otimes\Delta)(R) &= R_{13}R_{12}, \end{align*} where $R_{12} = R^{(1)}\otimes R^{(2)}\otimes 1$, $R_{13} = R^{(1)}\otimes 1\otimes R^{(2)}$, and $R_{23} = 1\otimes R^{(1)}\otimes R^{(2)}$; \item \textit{ribbon} if $H$ is quasitriangular and there is an invertible central \textit{ribbon element} ${\mathbf{v}}\in Z(H)$ such that \begin{align*} {\mathbf{v}}^2 &= uS(u), & \Delta({\mathbf{v}}) &= (R_{21}R)^{-1}({\mathbf{v}}\otimes{\mathbf{v}}), & S({\mathbf{v}}) &= {\mathbf{v}}, & \epsilon({\mathbf{v}}) &= 1, \end{align*} where $u = S(R^{(2)})R^{(1)}$ is the \textit{Drinfeld element} and $R_{21} = R^{(2)}\otimes R^{(1)}$; \item \textit{unimodular} if the space of left integrals $\{\Lambda\in H|h\Lambda = \epsilon(h)\Lambda,\forall h\in H\}$ and the space of right integrals $\{\Lambda\in H|\Lambda h = \epsilon(h)\Lambda,\forall h\in H\}$ coincide; \item \textit{factorizable} if $H$ is quasitriangular and the Drinfeld map $f_Q:H^*\to H$ given by $f_Q(\beta) = (\beta\otimes\id_H)(R_{21}R)$ is a linear isomorphism. \end{enumerate} The Drinfeld element $u$ has a handful of nice properties that we exploit throughout this article. \begin{proposition}\label{Drinfeld-props} Let $H$ be a quasitriangular Hopf algebra. 
Then, the Drinfeld element $u$ satisfies \begin{itemize} \item $uS(u)\in Z(H)$, \item $\Delta(uS(u)) = (R_{21}R)^{-2}(uS(u)\otimes uS(u))$, \item $uS(u)^{-1}$ is grouplike, \item $S^2(h) = uhu^{-1}$ for all $h\in H$, \item $u^{-1} = R^{(2)} S^2(R^{(1)})$. \end{itemize} \end{proposition} \begin{theorem} Let $H$ be a finite-dimensional Hopf algebra. \begin{enumerate} \item Quasitriangular structures on $H$ are in one-to-one correspondence with braidings on $\Rep(H)$. Given an $R$-matrix $R\in H\otimes H$, the corresponding braiding $\beta:\otimes\to \otimes^{\mathrm{op}}$ on $\Rep(H)$ is given, for $H$-modules $M, M'$, by $\beta_{M,M'}:m\otimes m'\mapsto [R^{(2)}\cdot m']\otimes [R^{(1)}\cdot m]$. \item Ribbon elements in $H$ are in one-to-one correspondence with ribbon structures on $\Rep(H)$. Given a ribbon element $\mathbf{v}\in H$, the corresponding ribbon structure $\theta:\Id_{\Rep(H)}\to \Id_{\Rep(H)}$ on $\Rep(H)$ is given, for an $H$-module $M$, by $\theta_{M}:m\mapsto \mathbf{v}^{-1}\cdot m$. \item $H$ is unimodular if and only if $\Rep(H)$ is unimodular. \item $H$ is factorizable if and only if $\Rep(H)$ is factorizable. \end{enumerate} \end{theorem} We denote by $\Vecm$ the ribbon fusion category whose underlying fusion category is the category $\Vec_{\bZ_2}$ of $\bZ_2$-graded vector spaces with simple representatives $V^+$ and $V^-$, whose braiding $\beta:\otimes\to \otimes^{\mathrm{op}}$ is trivial, and whose nontrivial twist $\theta:\Id_{\Vecm}\to\Id_{\Vecm}$ satisfies $\theta_{V^-} = -\id_{V^-}$. In particular, if $\bF[\bZ_2]$ is the ribbon Hopf algebra with $R$-matrix $R=1\otimes 1$ and ribbon element $\mathbf{v}=g$, where $\langle g\rangle = \bZ_2$, then $\Vecm\cong \Rep(\bF[\bZ_2])$. \subsection{The formal ribbon extension $\H$} Ribbon Hopf algebras were introduced in \cite{ReshetikhinTuraev1990} to construct invariants of links.
To demonstrate the versatility of their construction, Reshetikhin and Turaev showed that any quasitriangular Hopf algebra $H$ embeds into a ribbon Hopf algebra with the same $R$-matrix. In this paper, we term this ribbon Hopf algebra the \textit{formal ribbon extension of $H$}. As seen in Definition \ref{formal-ribbon}, the construction is quite straightforward. As we will see throughout the present article, it is also quite well-behaved. \begin{definition}\label{formal-ribbon} Let $H$ be a quasitriangular Hopf algebra with $R$-matrix $R\in H\otimes H$. Let $u = (m\circ(S\otimes \id))(R_{21})$ be the Drinfeld element. The \textit{formal ribbon extension} of $(H, R)$ is the ribbon Hopf algebra $\H$ defined by adjoining a formal ribbon element $\rib\in \H$ as follows: as a vector space, $\H = H\oplus H\rib$, and $\rib\in Z(\H)$ is defined to satisfy the following identities: \begin{align*} \rib^2 &= uS(u), & \Delta(\rib) &= (R_{21}R)^{-1}(\rib\otimes\rib), & S(\rib) &= \rib, & \epsilon(\rib) &= 1. \end{align*} \end{definition} We now review some elementary properties of this construction. \begin{proposition} Let $H$ be a finite-dimensional, quasitriangular Hopf algebra. Then, $H$ is unimodular if and only if $\H$ is unimodular. \end{proposition} \begin{proof} Let $\mathrm{int}_L(H)$ and $\mathrm{int}_R(H)$ be the spaces of left and right integrals of $H$, respectively. We claim $\mathrm{int}_L(\H) = (1+\rib)\mathrm{int}_L(H)$ and $\mathrm{int}_R(\H)=(1+\rib)\mathrm{int}_R(H)$. Let $\Lambda$ be a nonzero left integral of $H$. Then, $(1+\rib)\Lambda$ is also nonzero since $H\cap H\rib=\{0\}$. Moreover, since $1+\rib\in Z(\H)$, we have $$(a+b\rib)(1+\rib)\Lambda = \epsilon(a)(1+\rib)\Lambda+\epsilon(b)(\epsilon(\rib^2)+\rib)\Lambda = (\epsilon(a)+\epsilon(b))(1+\rib)\Lambda=\epsilon(a+b\rib)(1+\rib)\Lambda.$$ Thus, $(1+\rib)\Lambda$ is a left integral.
Since $\mathrm{int}_L(H)$ is one-dimensional for any finite-dimensional Hopf algebra $H$, it follows that $\mathrm{int}_L(\H) = (1+\rib)\mathrm{int}_L(H)$. The proof of the statement for right integrals is similar. It follows that, if $H$ is unimodular, so is $\H$. By the linear independence of $H$ and $H\rib$, the converse is also true. \end{proof} While the formal ribbon extension preserves unimodularity, it cannot preserve factorizability, as a consequence of Proposition \ref{not-factorizable}. \begin{proposition}\label{not-factorizable} Let $(H, R)$ be a quasitriangular Hopf algebra, and let $H\subsetneq H'$ be a Hopf algebra extension for which $(H', R)$ is quasitriangular. Then, $(H', R)$ is not factorizable. \end{proposition} \begin{proof} Since $R\in H\otimes H$, the Drinfeld map is zero on any functional which vanishes on $H$; since $H\subsetneq H'$, nonzero such functionals exist, so the Drinfeld map is not injective. \end{proof} \begin{remark}\label{build-modular} While $\H$ cannot be factorizable, we may build factorizable, ribbon Hopf algebras using the formal ribbon extension and the Drinfeld double constructions. If $H$ is a finite-dimensional, quasitriangular, unimodular Hopf algebra, then $D\H$ is a ribbon Hopf algebra \cite{cohen2008characters}. In particular, if $H$ is any finite-dimensional Hopf algebra, then $D\widetilde{DH}$ is a ribbon Hopf algebra. This implies that, from any finite-dimensional Hopf algebra $H$, we may construct a (possibly non-semisimple) modular category $\Rep(D\widetilde{DH})$, which has dimension $4(\dim H)^4$. \end{remark} \subsection{Decompositions of $\H$} We consider two decompositions of $\H$ in terms of $H$ and $\bF[\bZ_2]$ and the implications of these decompositions on their representation categories. First, we show that $\H$ factors as a tensor product of $H$ and $\bF[\bZ_2]$ if and only if $H$ is ribbon. This tensor product decomposition was first noted in Reshetikhin and Turaev's original paper \cite[Rmk. 3.5]{ReshetikhinTuraev1990}, though this equivalence of conditions appears to be new.
Sommerh\"auser also provides a decomposition for any quasitriangular Hopf algebra $H$, which says that $\H$ always factors as a cocycled crossed product of $H$ and $\bF[\bZ_2]$. It is worth noting that, even if $H$ is ribbon, the cocycle may be nontrivial. This is seen in Example \ref{decompositions-differ}. \begin{proposition}\label{ribbon-decomp} Let $(H, R, {\mathbf{v}})$ be a ribbon Hopf algebra and $(\H, R, \rib)$ be the formal ribbon extension of $(H, R)$. Then, $\H \cong H\otimes \bF[\bZ_2]$ as ribbon Hopf algebras. Moreover, if $H$ is a finite-dimensional, quasitriangular Hopf algebra and $\H\cong H\otimes \bF[\bZ_2]$ as quasitriangular Hopf algebras, then $H$ has a compatible ribbon element. \end{proposition} \begin{proof} Denote the generator of $\bZ_2$ by $g$, and define $\phi:\H\to H\otimes \bF[\bZ_2]$ by $$\phi(a + b\rib) := a\otimes 1 + b{\mathbf{v}}\otimes g.$$ This is clearly a linear isomorphism (since ${\mathbf{v}}$ is invertible) and preserves $1$. Under this isomorphism, $g$ can be identified with $1\otimes g = \phi({\mathbf{v}}^{-1}\rib)$. Moreover, $$\phi((a+b\rib)(a'+b'\rib)) = (aa'+bb'uS(u))\otimes 1 + (ab'+ba'){\mathbf{v}}\otimes g = \phi(a+b\rib)\phi(a'+b'\rib).$$ As a result of $\phi$ being an algebra homomorphism, it suffices to check that $\phi$ preserves the counit, comultiplication, and antipode on just $\rib$: \begin{align*} (\phi\otimes \phi)(\Delta(\rib)) &= (\phi\otimes \phi)((R_{21}R_{12})^{-1}(\rib\otimes\rib)) \\ &= ((R_{21}R_{12})^{-1})({\mathbf{v}}\otimes 1\otimes{\mathbf{v}}\otimes 1)(1\otimes g\otimes 1\otimes g)\\ &= \Delta(\phi(\rib)),\\ \phi(S(\rib)) &= \phi(\rib) = {\mathbf{v}}\otimes g = S({\mathbf{v}}\otimes g) = S(\phi(\rib)),\\ \epsilon(\phi(\rib)) &= \epsilon(g)\epsilon({\mathbf{v}}) = 1 = \epsilon(\rib). \end{align*} Clearly, $(R^{(1)}\otimes 1)\otimes (R^{(2)}\otimes 1)$ is an $R$-matrix for $H\otimes \bF[\bZ_2]$, and ${\mathbf{v}}\otimes g$ is a ribbon element for $H\otimes \bF[\bZ_2]$.
For the converse, note that, if $\Rep(H\otimes \bF[\bZ_2])$ has a ribbon structure $\theta:\Id_{\Rep(H\otimes \bF[\bZ_2])}\to \Id_{\Rep(H\otimes \bF[\bZ_2])}$, then we could restrict $\theta$ to a ribbon structure on modules of the form $M\otimes \bF_{\text{triv}}$, which is precisely $\Rep(H)$. In particular, the ribbon element $\mathbf{v}\in H$ can be recovered from the equality $\theta_{H\otimes\bF_{\text{triv}}}(1\otimes 1) = \mathbf{v}^{-1}\otimes 1$. \end{proof} \begin{corollary}\label{ribbon-reps-factor} Let $(H, R, {\mathbf{v}})$ be a finite-dimensional ribbon Hopf algebra and $(\H, R, \rib)$ be the formal ribbon extension of $(H, R)$. Then, $$\Rep(\H)\cong \Rep(H)\boxtimes \Vecm$$ as ribbon finite tensor categories. \end{corollary} Corollary \ref{ribbon-reps-factor} is reminiscent of the property that, for a finite-dimensional factorizable Hopf algebra $H$, representations of the Drinfeld double $DH$ factor as a Deligne product in terms of the representations of $H$ (see \cite{shimizu2019non}): $$\Rep(DH)\cong \Rep(H)\boxtimes\Rep(H)^{\mathrm{op}}.$$ The notion of a cocycled crossed product algebra comes from \cite{BlattnerCohen1986}. Conditions under which this construction yields a Hopf algebra were studied in \cite{Agore2013}. This construction can be used to describe formal ribbon extensions in general \cite{Andruskiewitsch2014hopftensor}. \begin{definition}\label{weak} Let $K$ be a Hopf algebra and $H$ a unital associative algebra. A \textit{weak action of $K$ on $H$} is a linear map $\cdot: K\otimes H\to H$ such that, for all $h, h_1, h_2\in H$ and $k, k_1, k_2\in K$, \begin{enumerate} \item $k\cdot (h_1 h_2) = (k^{(1)}\cdot h_1)(k^{(2)}\cdot h_2)$, \item $k\cdot 1 = \epsilon(k)1$, \item $1\cdot h = h$. 
\end{enumerate} A weak action is \textit{symmetric} if, for all $h\in H$ and $k\in K$, $$k^{(1)}\otimes k^{(2)}\cdot h = k^{(2)}\otimes k^{(1)}\cdot h.$$ \end{definition} \begin{definition}\label{cocycle} Suppose $K$ acts weakly on $H$ via $\cdot:K\otimes H\to H$. A linear map $\sigma:K\otimes K\to H$ \begin{enumerate} \item is \textit{normal} if, for all $k\in K$, $\sigma(k\otimes 1) = \sigma(1\otimes k) = \epsilon(k)1$; \item is a \textit{cocycle} if, for all $k_1, k_2, k_3\in K$, $$k_1^{(1)}\cdot \sigma(k_2^{(1)}\otimes k_3^{(1)})\sigma(k_1^{(2)}\otimes (k_2^{(2)}k_3^{(2)})) = \sigma(k_1^{(1)}\otimes k_2^{(1)})\sigma((k_1^{(2)}k_2^{(2)})\otimes k_3);$$ \item satisfies the \textit{twisted module condition} if, for all $k_1, k_2\in K$ and $h\in H$, $$k_1^{(1)}\cdot (k_2^{(1)}\cdot h)\sigma(k_1^{(2)}\otimes k_2^{(2)}) = \sigma(k_1^{(1)}\otimes k_2^{(1)})((k_1^{(2)}k_2^{(2)})\cdot h);$$ \item is \textit{symmetric} if, for all $k_1, k_2\in K$, $$k_1^{(1)}k_2^{(1)}\otimes \sigma(k_1^{(2)}\otimes k_2^{(2)})=k_1^{(2)}k_2^{(2)}\otimes \sigma(k_1^{(1)}\otimes k_2^{(1)}).$$ \end{enumerate} \end{definition} \begin{lemma}[{\cite[Lem. 4.4--5]{BlattnerCohen1986}}]\label{crossed-algebra} Suppose $\cdot: K\otimes H\to H$ is a weak action of a Hopf algebra $K$ on a unital associative algebra $H$. Let $\sigma:K\otimes K\to H$ be a linear map and denote by $H\#_\sigma K$ the vector space $H\otimes K$ with simple tensors denoted $h\# k$. Define a multiplication on $H\#_\sigma K$ as follows: for any $h_1, h_2\in H$ and $k_1, k_2\in K$, $$(h_1\# k_1)(h_2\# k_2) = [h_1 (k_1^{(1)}\cdot h_2)\sigma(k_1^{(2)}\otimes k_2^{(1)})]\# [k_1^{(3)}k_2^{(2)}].$$ Then, this multiplication defines a unital algebra structure on $H\#_\sigma K$ if and only if $\sigma$ is a normal cocycle with the twisted module condition. \end{lemma} In the case where the cocycle is trivial (i.e.
$\sigma(k_1\otimes k_2) = \epsilon(k_1 k_2)1_H$) and the weak action is an action, this agrees with the more classical notion of a \textit{crossed product algebra} $H\rtimes K=H\#_\sigma K$. \begin{lemma}[{\cite[Ex. 2.5(2)]{Agore2011extending}}]\label{crossed-hopf} Suppose $\cdot: K\otimes H\to H$ is a weak action of a Hopf algebra $K$ on another Hopf algebra $H$. Let $\sigma:K\otimes K\to H$ be a normal cocycle with the twisted module condition. Then, $H\#_\sigma K$ is a Hopf algebra if and only if $\cdot$ and $\sigma$ are both coalgebra homomorphisms and symmetric. The coalgebra structure is given by the tensor product of the coalgebras $H$ and $K$ and the antipode is given by $$S(h\# k) = [S_H(\sigma(S_K(k^{(2)})\otimes k^{(3)}))\# S_K(k^{(1)})][S_H(h)\# 1].$$ \end{lemma} Note that $H\cong H\#_\sigma 1$ always appears as a normal subalgebra of $H\#_\sigma K$. However, when the cocycle $\sigma$ is nontrivial, $K$ need not be a subalgebra of $H\#_\sigma K$. Nevertheless, these algebras fit into a cleft exact sequence of Hopf algebras as $H\incl H\#_\sigma K\twoheadrightarrow K$. In fact, all cleft exact sequences arise this way. With this notion of cocycled crossed product, Sommerh\"auser provides a general decomposition for $\H$. However, to the author's knowledge, a proof has not been publicly shared in the literature. We present a proof here for completeness. \begin{theorem}[{\cite[Rmk. 3.14 due to Sommerh\"auser]{Andruskiewitsch2014hopftensor}}] \label{sommer} Let $H$ be a finite-dimensional quasitriangular Hopf algebra and let $\bZ_2=\langle g\rangle$. Then, $\H\cong H\#_\sigma \bF[\bZ_2]$ as quasitriangular Hopf algebras, where the weak action is generated by $g\cdot h = S^2(h)$ and the cocycle $\sigma$ is generated by $\sigma(g\otimes g) = uS(u)^{-1}$. \end{theorem} \begin{proof} It is clear that $\cdot$ is a weak action and a coalgebra homomorphism since $S^2$ is a bialgebra homomorphism.
First, we verify that $\sigma$ satisfies all the conditions of Definition \ref{cocycle}. Note that normality holds by definition of $\sigma$. When verifying the cocycle and twisted module conditions, we need only check the case where $k_1=k_2=k_3=g$: for any $h\in H$, \begin{align*} g\cdot (g\cdot h)\sigma(g\otimes g)=S^4(h)uS(u)^{-1} &= uS(u)^{-1}h = \sigma(g\otimes g)(g^2\cdot h),\\ g\cdot \sigma(g\otimes g)\sigma(g\otimes g^2) = S^2(uS(u)^{-1}) &= uS(u)^{-1} = \sigma(g\otimes g)\sigma(g^2\otimes g). \end{align*} Moreover, $\sigma$ is indeed a coalgebra homomorphism because $uS(u)^{-1}$ is grouplike. Finally, $\cdot$ and $\sigma$ are symmetric because $\bF[\bZ_2]$ is cocommutative. By Lemma \ref{crossed-hopf}, $H\#_\sigma \bF[\bZ_2]$ is a well-defined Hopf algebra. Observe that $\H=H\oplus H\rib^{-1}$. Let $\phi:\H\to H\#_\sigma \bF[\bZ_2]$ be given by $$\phi(a + b\rib^{-1}) := a\# 1 + bu^{-1}\# g$$ for $a, b\in H$. This is clearly a linear isomorphism. It is easy to verify that this is a coalgebra homomorphism and preserves $1$. Moreover, \begin{align*} \phi(\rib^{-1})\phi(\rib^{-1}) &= (u^{-1}\# g)(u^{-1}\# g) = [u^{-1}(g\cdot u^{-1})\sigma(g\otimes g)]\# g^2 = (uS(u))^{-1}\# 1 = \phi(\rib^{-2}),\\ S(\phi(\rib^{-1})) &= S(u^{-1}\# g) = [S_H(\sigma(S_{\bF[\bZ_2]}(g)\otimes g))\# S_{\bF[\bZ_2]}(g)][S(u^{-1})\# 1]\\ &= [S_H(u)u^{-1}(g\cdot S_H(u)^{-1})]\# g = u^{-1}\# g = \phi(S(\rib^{-1})). \end{align*} From here, it is easy to see that $\phi$ is a Hopf algebra isomorphism. Clearly, $(R^{(1)}\# 1)\otimes (R^{(2)}\# 1)=(\phi\otimes\phi)(R)$ is an $R$-matrix for $H\#_\sigma \bF[\bZ_2]$. The result follows. \end{proof} We now consider an example which shows that the decompositions given in Proposition \ref{ribbon-decomp} and Theorem \ref{sommer} are truly distinct.
\begin{example}\label{decompositions-differ} Consider the 4-dimensional Sweedler's Hopf algebra $H_4$ generated by $K, \xi$ subject to \begin{align*} K^2 &= 1, & \xi^2 &= 0, & K\xi&=-\xi K \end{align*} with Hopf algebra structure described by \begin{align*} \Delta(K) &= K\otimes K, & \epsilon(K) &= 1, & S(K) &= K,\\ \Delta(\xi) &= K\otimes \xi + \xi\otimes 1, & \epsilon(\xi) &= 0, & S(\xi) &= -K\xi. \end{align*} By \cite{panaite1999quasitriangular}, $H_4$ is ribbon with \begin{align*} R &= \frac{1}{2}(1\otimes 1 + K\otimes 1 + 1\otimes K - K\otimes K)(1\otimes 1 + \xi\otimes K\xi) \end{align*} and $\mathbf{v} = 1$. $H_4$ is the first Nichols Hopf algebra $\mcK_1$; these algebras are discussed further in Section \ref{DKn-section}. Proposition \ref{ribbon-decomp} implies $\tilde H_4\cong H_4\otimes \bF[\bZ_2]$. However, $S^2(\xi)=-\xi$, so, in Sommerh\"auser's decomposition $\tilde H_4\cong H_4\#_\sigma \bF[\bZ_2]$, the weak action of $\bF[\bZ_2]$ on $H_4$ is nontrivial (meanwhile, the cocycle is trivial in this case). In particular, $(\xi\# 1)(1\# g) = -(1\#g)(\xi\#1)$ in $H_4\#_\sigma \bF[\bZ_2]$. Thus, Sommerh\"auser's decomposition does not reduce to the tensor product decomposition of Proposition \ref{ribbon-decomp}. \end{example} \begin{corollary}\label{exact} Let $H$ be a finite-dimensional quasitriangular Hopf algebra. Then, the sequence $H\to \H\to \bF[\bZ_2]$ is a strictly exact sequence of quasitriangular Hopf algebras. In particular, there is an exact sequence of braided finite tensor categories: $$\Vec_{\bZ_2}\to \Rep(\H)\to \Rep(H).$$ \end{corollary} \begin{proof} By the correspondence between cleft extensions and cocycled crossed products (and as noted in \cite[Rmk. 3.14]{Andruskiewitsch2014hopftensor}), there is a cleft exact sequence $H\incl \H\twoheadrightarrow \bF[\bZ_2]$. This sequence is, in particular, strictly exact. By \cite[Prop.
2.9]{BruguieresNatale11}, this gives rise to an exact sequence of tensor categories: $$\Rep(\bF[\bZ_2])\cong\Vec_{\bZ_2}\to \Rep(\H)\to \Rep(H).$$ Since the maps $H\incl \H$ and $\H\twoheadrightarrow \bF[\bZ_2]$ preserve the $R$-matrix, the functors are braided. \end{proof} \section{Representations of $\H$}\label{rep-section} Throughout this section, $H$ is a finite-dimensional quasitriangular Hopf algebra over an algebraically closed field $\bF$ of characteristic 0, and $M$ is a finite-dimensional $H$-module. For an $\H$-module $N$, we denote by $N|_H$ the $H$-module obtained by restricting the action on $N$ to $H$. \subsection{Upgrading $H$-actions to $\H$-actions}\label{extension-props} By dominance, Corollary \ref{exact} shows that every $H$-module arises as an $H$-submodule of some $\H$-module (with its action restricted to $H$). Theorem \ref{action-extends} shows that, in fact, every $H$-module has an $\H$-action. Moreover, we obtain a characterization of simple and projective $\H$-modules in terms of simple and projective $H$-modules in Proposition \ref{simples-double} and Theorem \ref{indec-projectives-restrict}. Modules of this form are of particular interest in finite tensor categories, as the finiteness property can be characterized in terms of such objects \cite{etingof2004finite}. The results of Subsection \ref{extension-props} are much more general than presented. Indeed, the coalgebra and antipode structures are not relevant here. All results hold identically for extensions of algebras $\mcA\subset \mcA[a]$ for which $a$ commutes with $\mcA$ and $a^2=b$ for some invertible $b\in \mcA$. They may be further generalized to the case where $a^n=b$. As our primary interest is the formal ribbon extension, we phrase all results using the special case of $H\subset \H$, where $a = \rib$ and $b=uS(u)$.
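As a simple illustration of this more general setting (a sanity check, not needed in the sequel), take $\mcA = \bF$ and $b = 1$, so that $\mcA[a]\cong \bF[\bZ_2]$. Since $\bF$ has characteristic 0, the elements $$e_\pm = \tfrac{1}{2}(1\pm a),\qquad e_\pm^2 = e_\pm,\qquad e_+e_- = 0,\qquad e_+ + e_- = 1,$$ are orthogonal idempotents, and every $\mcA[a]$-module decomposes as the direct sum of the $(\pm 1)$-eigenspaces of $a$. The modules $M^\pm$ appearing in Theorem \ref{action-extends} may be viewed as the analogue of this eigenspace decomposition for general $(\mcA, a, b)$.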
Lemma \ref{sqrt-uSu} is a technical result that we employ to show that $H$-modules can always be made into $\H$-modules and that there is a precise description in the indecomposable case. The proof presented here applies the holomorphic functional calculus to prove the result for $\bC$ and then uses the completeness of the theory of algebraically closed fields of characteristic 0 to extend it to more general fields. The overlap in the applicability of these two techniques seems quite small. Moreover, there is a more direct proof of Lemma \ref{sqrt-uSu} which relies on neither of these techniques; it is described in Remark \ref{std-proof}. \begin{lemma}\label{sqrt-uSu} Suppose $V$ is a finite-dimensional vector space over an algebraically closed field $\bF$ of characteristic 0 and that $B\in \Aut_{\bF}(V)$. Then, $B$ has a square root $A\in\Aut_{\bF}(V)$ with the following properties: \begin{enumerate} \item $A$ commutes with the centralizer of $B$ in $\End_{\bF}(V)$; \item if $\lambda$ is an eigenvalue of $A$, then $-\lambda$ is not an eigenvalue of $A$. \end{enumerate} \end{lemma} \begin{proof} We can formulate Lemma \ref{sqrt-uSu} in terms of logical sentences in the language of rings. For each $n\geq 1$, let $\phi_n$ be the sentence \begin{align*} \phi_n := ``&\forall b_{11},\dots, b_{nn}, (\det[b_{ij}]\neq 0\to\exists a_{11},\dots, a_{nn}, (\det[a_{ij}]\neq 0\wedge [a_{ij}]^2 = [b_{ij}] \\&\wedge\forall c_{11},\dots, c_{nn}, ([c_{ij}][b_{ij}]=[b_{ij}][c_{ij}]\to [c_{ij}][a_{ij}]=[a_{ij}][c_{ij}])\\ &\wedge \forall \lambda (\det([a_{ij}] - \lambda I)=0\to \det([a_{ij}] + \lambda I)\neq 0)))". \end{align*} where $[a_{ij}]$ denotes the matrix whose entries are $a_{ij}, i,j=1,\dots, n$.
That is, $\phi_n$ is the sentence ``for every $n\times n$ matrix $B$ with nonzero determinant, there is an $n\times n$ matrix $A$ which has nonzero determinant, squares to $B$, commutes with every $n\times n$ matrix $C$ that commutes with $B$, and, for any $\lambda$, satisfies that if $\det(A-\lambda I)=0$, then $\det(A+\lambda I)\neq 0$.'' Observe that Lemma \ref{sqrt-uSu} is equivalent to $\phi_n$ being true for all $n$. By completeness of the theory of algebraically closed fields of characteristic 0, $\phi_n$ is true for $\bC$ if and only if $\phi_n$ is true for all algebraically closed fields of characteristic 0. Thus, it suffices to prove the result for $\bC$ only. Since $B$ has a finite spectrum $\sigma(B)$ and is invertible (so $0\notin \sigma(B)$), there is a holomorphic branch of the square root function $\sqrt{\cdot}:U\to\bC$ in a neighborhood $U$ of the spectrum of $B$. The holomorphic functional calculus gives an element $A=\sqrt{B}$ in the Banach algebra generated by $B$ such that $A^2 = B$. Moreover, since $V$ is finite-dimensional, the operator $A$ is a polynomial in $B$. In particular, $A\in Z_{\{B\}}(\End_\bF(V))$. Moreover, $A$ is invertible with $A^{-1} = AB^{-1}$. Finally, note that if $\delta\in \sigma(B)$, then $\sqrt{\delta}\in \sigma(A)$ (where this is the chosen branch of the square root) and $-\sqrt{\delta}\notin \sigma(A)$. All eigenvalues of $A$ arise this way. \end{proof} It is worth noting that extending results obtained by (holomorphic, continuous, or Borel) functional calculi on $\bC$ to arbitrary algebraically closed fields of characteristic 0 using model theory is quite difficult in general. Sentences in the language of rings may only involve polynomial equations, greatly limiting the applicability of model-theoretic techniques. However, general holomorphic functions need not send, for example, $\bar\mbbQ$ to itself.
Thus, we should not expect most results obtained by functional calculi to be applicable to arbitrary $\bF$ anyway. \begin{remark}\label{std-proof} Here is a more constructive proof of Lemma \ref{sqrt-uSu}. Let $b(x)=(x-\lambda_1)^{n_1}\dots(x-\lambda_k)^{n_k}$ be the minimal polynomial of $B$, where the $\lambda_i\in\bF^\times$ are distinct. For each $i=1,\dots,k$, using the fact that $\bF$ is algebraically closed of characteristic 0, let $a_i(x)$ be the Taylor polynomial of (a branch of) $\sqrt{x}$ about $x=\lambda_i$ of degree $n_i-1$. Then, $a_i(x)$ satisfies $a_i(x)^2\equiv x\pmod{(x-\lambda_i)^{n_i}}$. By the Chinese remainder theorem, there is a polynomial $a(x)$ such that $a(x)\equiv a_i(x)\pmod{(x-\lambda_i)^{n_i}}$ for all $i$. In particular, $a(x)$ satisfies $a(x)^2 \equiv x\pmod{b(x)}$, so $a(x)^2 = x + p(x)b(x)$ for some polynomial $p(x)$. Therefore, if $A = a(B)$, then $A^2 = B + p(B)b(B) = B$. It follows that $A$ commutes with the centralizer of $B$ because it is a polynomial in $B$. Moreover, $a(\sigma(B)) = \sigma(a(B))$, so each $\delta\in\sigma(B)$ may correspond to at most one $\lambda\in \sigma(a(B))$. \end{remark} \begin{theorem}\label{action-extends} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra and $M$ is a finite-dimensional $H$-module. Then, there is an $\H$-module $\tilde M$ such that, when the action is restricted to $H$, $\tilde M|_H\cong M$ as $H$-modules. Moreover, if $M$ is indecomposable, then there are precisely two such $\H$-modules, up to isomorphism. \end{theorem} \begin{proof} By Lemma \ref{sqrt-uSu}(1), there is a linear map $A\in \End_\bF(M)$ such that $A^2=uS(u)\cdot -$ and $A(h\cdot v) = h\cdot Av$ for all $h\in H$ and $v\in M$. As a vector space, set $\tilde M := M$ and define the action $$(a + b\rib)\cdot_{\tilde M} v := a\cdot_M v + b\cdot_M Av$$ for $a, b\in H$ and $v\in \tilde M$. It is clear that this action does indeed define an $\H$-module and that $\tilde M|_H = M$.
Set $M^+=\tilde M$, and let $M^-$ be the $\H$-module with the conjugate action: $$(a + b\rib)\cdot_{M^-} v := a\cdot_M v - b\cdot_M Av$$ for any $a,b\in H$ and $v\in M$. Note that indeed $M^-|_H=M$. Note that $M^-\not\cong M^+$ as $\H$-modules, as the spectra of $\rib\cdot -$ differ by Lemma \ref{sqrt-uSu}(2). Consider $M\oplus \rib M$ as an $\H$-module in the obvious way. Define $F:M\oplus \rib M\to M^+\oplus M^-$ by $$m_1\oplus \rib m_2\mapsto (m_1+Am_2)\oplus (m_1-Am_2).$$ It is easily shown that this map is an $\H$-linear isomorphism. Thus, if $M$ is indecomposable, the result follows by the uniqueness of the decomposition of $M\oplus \rib M$ into indecomposable $\H$-modules. \end{proof} \begin{corollary} Let $H$ be a finite-dimensional quasitriangular Hopf algebra and $M$ be a finite-dimensional indecomposable $H$-module. Then, $uS(u)\cdot -:M\to M$ has precisely one eigenvalue. \end{corollary} \begin{remark} Note that $\rib\cdot -$ need not be a polynomial in $uS(u)\cdot -$ for general $\H$-modules. Let $H=\bC$ be the trivial quasitriangular complex Hopf algebra (with $R=1\otimes 1$). Consider the possible ways to make $\bC^2$ into a $\tilde\bC=\bC[\bZ_2]$-module. $\bC$ acts by scalar multiplication on $\bC^2$, so any choice of square root of $uS(u)\cdot - = \id_{\bC^2}$ suffices for the action of $\rib\cdot -$. For example, we may set $\rib\cdot - = \left(\begin{smallmatrix} 1 & 0\\ 0 & -1 \end{smallmatrix}\right)$, which is certainly not a polynomial in $\id_{\bC^2}$. In this case, $\rib\cdot -$ is diagonalizable with eigenvalues in $\{1, -1\}$. However, in the case of simple $H$-modules, $\rib\cdot -$ is just a scalar multiple of the identity. \end{remark} As in the proof of Theorem \ref{action-extends}, given an indecomposable $H$-module $M$, we will denote by $M^+$ and $M^-$ the two $\H$-modules which restrict to $M$. Note that, over $\bC$, there is no canonical notion of a positive and a negative square root.
With the exception of the trivial $H$-module $V_1$, $M^+$ and $M^-$ can be interchanged throughout this work without loss of generality. For $V_1$, the $\H$-module $V^+_1$, where $\rib\cdot - = \id_{V_1^+}$, is the trivial $\H$-module, while the $\H$-module $V^-_1$, where $\rib\cdot - = -\id_{V_{1}^-}$, is a sort of sign module. In particular, the ribbon category tensor-generated by $V_1^-$ is $\Vecm$. \begin{proposition}\label{simples-double} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra and $N$ is a simple $\H$-module. Then, $N|_H$ is a simple $H$-module. \end{proposition} \begin{proof} Suppose $M\subseteq N|_H$ is an $H$-submodule of $N$. Note that $\rib\cdot -:N\to N$ is an $\H$-linear map since $\rib\in Z(\H)$, so there is some $c\in\bF^\times$ such that $\rib\cdot - = c\id_N$ by Schur's lemma. In particular, $\rib\cdot m = cm$ for all $m\in M\subseteq N$. Therefore, $M$ is an $\H$-submodule of $N$, so $M=N$ or $M=0$. Thus, $N|_H$ is a simple $H$-module. \end{proof} \begin{corollary}\label{semisimple-iff} Let $H$ be a finite-dimensional quasitriangular Hopf algebra. Then, $H$ is semisimple if and only if $\H$ is semisimple. \end{corollary} \begin{proof} Suppose $H$ is semisimple. Note that $$\dim\left(\bigoplus_{\substack{V\text{ simple }\\\text{$H$-module}}} \dim(V^+)V^+\oplus \dim(V^-)V^-\right) = 2\dim\left(\bigoplus_{\substack{V\text{ simple }\\\text{$H$-module}}} \dim(V)V\right) = 2\dim H = \dim \H.$$ Thus, all simple $\H$-modules must be their own projective covers. The same equality of dimensions shows the converse implication. \end{proof} \begin{theorem}\label{indec-projectives-restrict} Let $V$ be a finite-dimensional simple module over a finite-dimensional quasitriangular Hopf algebra $H$. Let $P$ be the projective cover of $V$ as $H$-modules and $P_\pm$ be the projective covers of $V^\pm$ as $\H$-modules. Then, $P_\pm|_H\cong P$ as $H$-modules.
\end{theorem} \begin{proof} For a simple $H$-module $V$, let $P(V)$ denote its ($H$-module) projective cover. Note that $$\H = H\oplus H\rib\cong \bigoplus_{V\text{ simple}} (\dim V)(P(V)\oplus \rib P(V)).$$ Thus, $P(V)\oplus P(V)\rib$ is projective for every $V$. Let $p:P\to V$ be an $H$-module covering map. Without loss of generality, $V^\pm|_H = V$ as $H$-modules. Define the maps $p_\pm:P\oplus \rib P\to V^{\pm}$ given by $p_\pm(h_1 + h_2\rib) = p(h_1)+\rib\cdot p(h_2)$ for $h_1, h_2\in P$. Note that these maps are clearly $\H$-linear. Since $p$ is nonzero, both are nonzero. That is, $P\oplus \rib P$ has nonzero maps into both $V^+$ and $V^-$. Given a simple module $W$, $P(W)$ is the only projective indecomposable module (up to isomorphism) with a nonzero map into $W$. Since $(P\oplus \rib P)|_H\cong P\oplus P$ as $H$-modules, the decomposition of $P\oplus \rib P$ into a direct sum of $\H$-modules may contain at most two indecomposable summands. It follows that $P\oplus \rib P\cong P_+\oplus P_-$ as $\H$-modules. The isomorphism can be considered as an $H$-module isomorphism so that $$P\oplus P\cong (P\oplus \rib P)|_H\cong P_+|_H\oplus P_-|_H$$ as $H$-modules. The result follows by the uniqueness of decomposition of $P\oplus \rib P$ into indecomposable $H$-modules. \end{proof} \begin{corollary}\label{projectives-restrict} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra and $Q$ is a projective $\H$-module. Then, $Q|_H$ is a projective $H$-module. Moreover, every projective $H$-module arises this way. \end{corollary} The question of whether indecomposable $\H$-modules restrict to indecomposable $H$-modules remains unclear to the author. However, Proposition \ref{simples-double} and Corollary \ref{ribbon-reps-factor} give a positive answer in the simplest cases. Thus, we conjecture that this is true in general.
This would, in particular, imply that, if $P$ is a projective $H$-module, then any $\tilde P$ is also projective, which is not implied by Corollary \ref{projectives-restrict}. \begin{conj} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra and $N$ is a finite-dimensional indecomposable $\H$-module. Then, $N|_H$ is also indecomposable as an $H$-module. \end{conj} \subsection{The tensor category $\Rep(\H)$} \begin{lemma}\label{tensors-restrict} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra and $N, N'$ are finite-dimensional $\H$-modules. Then, $(N\otimes N')|_H\cong N|_H\otimes N'|_H$ as $H$-modules. \end{lemma} For a braided category $\cC$, let $\cC'$ denote the M\"uger center of $\cC$, defined by $$\cC' = \{M\in \cC~|~\beta_{M',M}\circ\beta_{M,M'}=\id_{M\otimes M'} \text{ for all }M'\in \cC\},$$ where $\beta:\otimes\to\otimes^{\mathrm{op}}$ is the braiding on $\cC$. \begin{theorem} Suppose $H$ is a finite-dimensional quasitriangular Hopf algebra. Then, the following are equivalent for an $H$-module $M$: \begin{enumerate} \item $M\in \Rep(H)'$, \item $\tilde M\in \Rep(\H)'$ for some $\tilde M$ such that $\tilde M|_H\cong M$, \item $\tilde M\in \Rep(\H)'$ for all $\tilde M$ such that $\tilde M|_H\cong M$. \end{enumerate} \end{theorem} \begin{proof} By Theorem \ref{action-extends}, every $H$-module has such an $\H$-module $\tilde M$, and the $R$-matrix acts identically on $M\otimes M'$ and $\tilde M\otimes \tilde M'$. \end{proof} For any finite-dimensional quasitriangular Hopf algebra $H$, $V_1^-$ is always in the M\"uger center of $\Rep(\tilde H)$. The ribbon category generated by $V_1^-$ is not $\sVec$. The category, which we have denoted $\Vecm$, has a nontrivial ribbon twist $\theta_{V^-}=-1$ on $V^-$ but a trivial braiding $\beta_{V^-, V^-} = \id_{V^-}$. In particular, $\Rep(\H)$ can never be a modular category nor be condensed to a modular category. 
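For concreteness, this distinction can be checked against the balancing axiom $\theta_{X\otimes Y} = \beta_{Y,X}\circ\beta_{X,Y}\circ(\theta_X\otimes\theta_Y)$ of a ribbon category. In $\Vecm$, where the braiding is trivial, $$\theta_{V^-\otimes V^-} = \theta_{V^-}\otimes\theta_{V^-} = (-1)(-1)\id = \id = \theta_{V^+},$$ consistent with $V^-\otimes V^-\cong V^+$. In $\sVec$, the same axiom is instead satisfied with $\beta_{V^-,V^-} = -\id$ and trivial twist; the two categories share their fusion rules and are distinguished only by which of the braiding and the twist is nontrivial.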
\begin{corollary}\label{muger-factorizable} If $H$ is a finite-dimensional factorizable Hopf algebra, then $\Rep(\H)'\cong \Vecm$ and is tensor generated by $V_1^-$. \end{corollary} \begin{proof} Suppose $N\in\Rep(\H)'$ is indecomposable. By factorizability of $H$, $N|_H\cong V_1^{\oplus n}$ as an $H$-module for some $n\geq 0$. In particular, $h\cdot v = \epsilon(h)v$ for all $h\in H$ and $v\in N$. The action $g\cdot -$ of the finite-order grouplike $g = u\rib^{-1}$ on $N$ must be diagonalizable. Therefore, $g\cdot -$ has an eigenspace decomposition so that $N=N^+\oplus N^-$ and $g\cdot v^\pm = \pm v^\pm$ for $v^\pm\in N^\pm$. Since $H$ acts via scalar multiples of the identity, any subspace of $N|_H$ is a direct summand of $N|_H$ as an $H$-module. By indecomposability of $N$, this implies that either $N^+$ is one-dimensional and $N^-=0$ or vice versa. These correspond to $N\cong V_1^+$ or $N\cong V_1^-$ respectively. \end{proof} A notion of pivotalization $\tilde\cC$ of a fusion category $\cC$ is introduced in \cite[Rmk. 3.1]{eno2005fusion}, where it is later shown (Prop. 5.14) that the pivotal structure on $\tilde\cC$ is spherical. We show that $\Rep(\H)\cong \widetilde{\Rep(H)}$ as braided fusion categories, when these two constructions both apply. In this sense, the formal ribbon extension is a generalization of pivotalization to the non-semisimple braided case. The notion of pivotalization is well-defined for any finite tensor category $\cC$ with a monoidal natural isomorphism $\Phi:\Id_{\cC}\to (-)^{****}$. By Radford's formula for $S^4$, pivotalization is, in particular, possible for representation categories of finite-dimensional Hopf algebras (i.e., finite tensor categories with a fiber functor to $\Vec$) or finite-dimensional semisimple weak Hopf algebras (i.e., fusion categories). In general, however, such a $\Phi$ need not exist. For example, a general weak Hopf algebra does not have such a formula.
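For reference, Radford's formula for a finite-dimensional Hopf algebra $H$ states (in one common convention) that $$S^4(h) = g(\alpha\rightharpoonup h\leftharpoonup \alpha^{-1})g^{-1}, \qquad h\in H,$$ where $g\in H$ and $\alpha\in H^*$ are the distinguished grouplike elements; conjugation by this grouplike data induces the monoidal natural isomorphism $\Phi:\Id\to (-)^{****}$ on $\Rep(H)$.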
There is a generalization of Radford's formula to finite tensor categories \cite{ENO2004Radford}, which leads to a weaker natural isomorphism $\Phi:\Id_{\cC}\to ((-)^{****})^N$ for some $N\geq 1$, which depends on the order of the distinguished invertible object. \begin{definition} Let $\cC$ be a finite tensor category with a monoidal natural isomorphism $\Phi:\Id_\cC\to (-)^{****}$. The pivotalization of $(\cC, \Phi)$ is the category $\tilde\cC$ with objects $(M, \phi)$ where $M\in\cC$ and $\phi:M\to M^{**}$ satisfies $\phi^{**}\circ \phi = \Phi_M$ and morphisms $$\Hom_{\tilde\cC}((M, \phi)\to (M',\psi)):=\{f\in\Hom_{\cC}(M\to M')| \psi\circ f = f^{**}\circ \phi\}.$$ We give the pivotalization a tensor category structure by $$(M, \phi)\otimes (M',\psi):=(M\otimes M', \phi\otimes \psi).$$ \end{definition} For a fusion category or representation category of a finite-dimensional Hopf algebra, $\Phi$ is assumed to be the Radford isomorphism induced by Radford's formula for $S^4$. In this case, we speak of the pivotalization of $\cC$ rather than of $(\cC, \Phi)$. \begin{theorem}\label{pivotalization-agrees} If $H$ is a finite-dimensional semisimple quasitriangular Hopf algebra, then $\Rep(\H)\cong \widetilde{\Rep(H)}$ as braided fusion categories. \end{theorem} \begin{proof} In this proof, we will use the fact that $\H = H\oplus H\rib^{-1}$. Given a finite-dimensional $\H$-module $N$, set $M = N|_H$. Let $\{f_i\}$ be a basis of $M^*$ and $\{f^i\}\subset M^{**}$ be the dual basis. For $x\in M$ and $g\in M^*$, define the map $\phi_N:M\to M^{**}$ by $$\phi_N(x)(g) := \sum_i f_i(R^{(2)}\rib^{-1} \cdot x)[S(R^{(1)})\cdot f^i](g) = \sum_i f_i(R^{(2)}\rib^{-1} \cdot x)f^i(g(S^3(R^{(1)})\cdot -)).$$ Let $M$ be a finite-dimensional simple $H$-module and $\phi:M\to M^{**}$ an $H$-linear isomorphism such that $\phi^{**}\circ\phi = \Phi_M$. Let $\{e_i\}$ be a basis for $M$ and $\{e^i\}$ be the dual basis. Let $M^\phi = M$ as a vector space.
For $m\in M$ and $a,b\in H$, we define the following explicit $\H$-action on $M^\phi$: $$(a+b\rib^{-1})\cdot m := a\cdot m + \sum_i [R^{(1)}\cdot \phi(m)](e^i)(bR^{(2)}\cdot e_i) = a\cdot m + \sum_i \phi(m)(e^i(S^2(R^{(1)})\cdot -))(bR^{(2)}\cdot e_i).$$ Diagrammatically, the maps $\phi_N:N|_H\to (N|_H)^{**}$ and $\rib^{-1}\cdot -:M^\phi\to M^\phi$ may be written as follows: $$ \phi_N = \begin{tikzpicture}[baseline=1cm] \draw (0, -2) node[below] {$N|_H$} -- (0, -0.35) (-2.2, 0.35) .. controls (-2.2, 1) and (0, 1) .. (0, 1.35) -- (0, 3) node[above] {$(N|_H)^{**}$} (-3.4, 0.35) -- node[left] {$(N|_H)^*$} (-3.4, 1.35); \draw[line width=2mm, white] (0, 0.35) .. controls (0, 1) and (-2.2, 1) .. (-2.2, 1.35); \draw (0, 0.35) node[above right] {$N|_H$} .. controls (0, 1) and (-2.2, 1) .. (-2.2, 1.35) ; \draw[rounded corners] (-1, -0.35) rectangle node {$\rib^{-1}\cdot_N -$} (1, 0.35) (-3.9, -0.35) rectangle node {$\coev_{(N|_H)^*}$} (-1.5, 0.35) (-3.9, 1.35) rectangle node {$\ev_{N|_H}$} (-1.5, 2.05); \end{tikzpicture},\hspace{10mm}\rib^{-1}\cdot - = \begin{tikzpicture}[baseline=1cm] \draw (0, -2) node[below] {$M^\phi$} -- (0, -0.35) (2, 0.35) .. controls (2, 1) and (0, 1) .. (0, 1.35) -- (0, 3) node[above] {$M^\phi$} (3, 0.35) -- node[right] {$(M^\phi)^*$} (3, 1.35); \draw[line width=2mm, white] (0, 0.35) .. controls (0, 1) and (2, 1) .. (2, 1.35); \draw (0, 0.35) node[above left] {$(M^\phi)^{**}$} .. controls (0, 1) and (2, 1) .. (2, 1.35); \draw[rounded corners] (0.5, -0.35) rectangle node {$\phi$} (-0.5, 0.35) (3.5, -0.35) rectangle node {$\coev_{M^\phi}$} (1.5, 0.35) (3.5, 1.35) rectangle node {$\ev_{(M^\phi)^*}$} (1.5, 2.05); \end{tikzpicture}. $$ This construction is well-known (see, for instance, \cite{Henriques2015CategorifiedTF}). As a composition of $H$-linear maps, $\phi_N$ and $\rib^{-1}\cdot -$ are $H$-linear. These constructions are inverse to each other in the sense that $\phi_{M^{\phi}} = \phi$ and $(N|_H)^{\phi_N} = N$. 
Consider the following functors $F:\Rep(\H)\to \widetilde{\Rep(H)}$ and $G:\widetilde{\Rep(H)}\to \Rep(\H)$: for an $H$-module $M$ and $\H$-module $N$, we define \begin{align*} F(N) &= (N|_H, \phi_{N}),\\ G((M, \phi)) &= M^\phi. \end{align*} The functors are defined on morphisms in the obvious way. Moreover, we have $G(F(N)) = N$ and $F(G((M, \phi))) = (M, \phi)$. Thus, $F$ and $G$ form a linear isomorphism of categories. By construction, $F$ and $G$ are indeed braided tensor functors. Hence, the result follows. \end{proof} \section{The formal ribbon extension of doubled Nichols Hopf algebras}\label{DKn-section} \subsection{Doubled Nichols Hopf algebras $D\Kn$} Given how nicely $\H$-modules can be described in relation to $H$-modules, one might wonder if anything nontrivial is being studied here. In particular, is it always true that the fusion rules of $\Rep(\H)$ and $\Rep(H)\boxtimes\Vecm$ agree? Proposition \ref{not-deligne} shows that the answer is no, using the doubled Nichols Hopf algebra $D\Kn$ for odd $n$, even for the full subcategories generated by just simple and projective $\RKn$-modules. We recall the study of Nichols Hopf algebras $\Kn$ and their doubles $D\Kn$ from \cite{modulardata2024}. Let $n$ be a positive integer. The Nichols Hopf algebra $\Kn$ is the $2^{n+1}$-dimensional complex unital algebra which has generators $K, \xi_1,\dots,\xi_n$ subject to the following relations: for all $i,j=1,\dots, n$, \begin{align*} K^2 &= 1, & \xi_i^2 &= 0, & \xi_i\xi_j &= -\xi_j\xi_i, & K\xi_i &= -\xi_i K. \end{align*} In particular, $\Kn$ is a crossed product $\bigwedge^* \bC^n\rtimes \bC[\bZ_2]$ of an exterior algebra on an $n$-dimensional space by the $\bZ_2$-group algebra. The algebra $\Kn$ has a natural Hopf algebra structure with the following operations: for all $i=1,\dots, n$, \begin{align*} \Delta(K) &= K\otimes K,&\epsilon(K) &= 1,&S(K)&=K,\\ \Delta(\xi_i) &= K\otimes \xi_i + \xi_i\otimes 1,&\epsilon(\xi_i) &= 0,&S(\xi_i)&=-K\xi_i.
\end{align*} Let $W = \{\xi_1^{a_1}\xi_2^{a_2}\dots\xi_n^{a_n} | a_i\in\{0,1\}\}$. Then, $\Kn$ has a natural basis given by $W\cup KW$. Set $\mathbf{I}(w) = \{i | a_i=1\}$ for $w=\xi_1^{a_1}\xi_2^{a_2}\dots\xi_n^{a_n}\in W$ and $|w| = |\mathbf{I}(w)| = \sum_{i=1}^n a_i$. By \cite[Prop. 11]{panaite1999quasitriangular}, there is a large family of $R$-matrices for $\Kn$ and each has a unique compatible ribbon element. The Drinfeld double $D\Kn$ of $\Kn$ is a $2^{2n+2}$-dimensional complex Hopf algebra. $D\Kn$ is generated by $K, \bar K, \xi_1,\dots,\xi_n, \bar\xi_1,\dots, \bar\xi_n$, where the sets $\{K, \xi_1, \dots, \xi_n\}$ and $\{\bar K, \bar\xi_1, \dots, \bar\xi_n\}$ each generate a copy of the Hopf algebra $\Kn$ with the operations and relations described above. Moreover, they have the following additional relations: for $i,j=1,\dots, n$, \begin{align*} K\bar K &= \bar KK, & K\bar\xi_i &= -\bar\xi_i K, & \bar K\xi_i &= -\xi_i\bar K, & \xi_i\bar\xi_j &= \delta_{i=j}(1 - K\bar K) - \bar\xi_j\xi_i. \end{align*} If we set $\overline{\xi_1^{a_1}\xi_2^{a_2}\dots\xi_n^{a_n}} = \bar\xi_1^{a_1}\bar\xi_2^{a_2}\dots\bar\xi_n^{a_n}$ for $\xi_1^{a_1}\xi_2^{a_2}\dots\xi_n^{a_n}\in W$ and set $\bar W = \{\bar w|w\in W\}$, then $W\bar W\cup KW\bar W\cup \bar KW\bar W\cup K\bar KW\bar W$ is a basis of $D\Kn$. By the construction of Drinfeld doubles, $D\Kn$ has a natural $R$-matrix which makes the algebra factorizable: $$R = \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}(w\otimes \bar w(1+\bar K) + Kw\otimes \bar w(1-\bar K)).$$ When $n$ is even, $D\Kn$ has two compatible ribbon elements ${\mathbf{v}}_0$ and ${\mathbf{v}}_1$ given by $${\mathbf{v}}_i = \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}((-1)^{|w|+i}(K-\bar K)+1+K\bar K)w\bar w.$$ \subsection{A presentation of $\RKn$} \begin{theorem} $D\Kn$ with its canonical $R$-matrix is not ribbon for odd $n$. \end{theorem} \begin{proof} Note that, in $\Kn$, $S^{4} = \id_{\Kn}$. 
When $n$ is odd, the distinguished grouplikes in $\Kn$ and $\Kn^*$ are $K$ and $\bar K$ respectively. By \cite[Thm. 3]{kauffman1993necessary}, it suffices to show that $K\in\Kn$ has no grouplike square-root. We show that $1$ and $K$ are the only grouplikes in $\Kn$. Suppose $x = \sum_{w\in W} (c_{1}(w)1 + c_{K}(w)K)w\in \Kn$ is a grouplike. Then, expanding $\Delta(x)=x\otimes x$ in the following two ways: \begin{align*} \Delta(x) &= \sum_{w_{1},w_{2}\in W} (c_{1}(w_{1})1 + c_{K}(w_{1})K)w_{1}\otimes(c_{1}(w_{2})1 + c_{K}(w_{2})K)w_{2},\\ &= \sum_{w\in W}(c_{1}(w)1\otimes 1 + c_{K}(w)K\otimes K)\prod_{i\in \mathbf{I}(w)} (K\otimes \xi_i + \xi_i\otimes 1). \end{align*} Note that in the first expansion, for any $w\in W$, there is a term of the form $c_1(w)^2 w\otimes w$. However, in the second expansion, every summand of the form $w_1\otimes w_2$ necessarily satisfies $\mathbf{I}(w_1)\cap \mathbf{I}(w_2)=\varnothing$. Thus, $c_1(w) = 0$ for any $w\neq 1$. Similarly, $c_K(w)=0$ for any $w\neq 1$. It is straightforward from here to see that only $1$ and $K$ are grouplikes for $\Kn$. In particular, neither is a square root of $K$. \end{proof} The following lemma provides a formula for converting sums of the form $\sum_{w\in W} h_w\bar w w$, where $h_w$ depends only on the length of $w$, into sums of the form $\sum_{w\in W} k_w w\bar w$. Sums of this form are extremely common in our calculations. The lemma is symmetric, in that swapping all $w$'s with $\bar w$'s (and vice versa) does not change the validity of the formulas. \begin{lemma}\label{sum-by-length} Let $f:\{0, 1,\dots, n\}\to D\Kn$ be any function. Then, \begin{align*} \sum_{w\in W} f(|w|)\bar w w&=\sum_{w\in W} \left((-1)^{|w|}f(|w|) \right.\\&\left.+ (-1)^{\lfloor (|w|+1)/2\rfloor}\sum_{\ell=1}^{n-|w|} \binom{n-|w|}{\ell}(-1)^{\lfloor (\ell+|w|)/2\rfloor}2^{\ell-1}f(\ell+|w|)(1 - K\bar K)\right)w\bar w.
\end{align*} In particular, \begin{align*} \sum_{w\in W} f(|w|)(1+K\bar K)\bar w w&=\sum_{w\in W} (-1)^{|w|}f(|w|)(1+K\bar K)w\bar w,\\ \sum_{w\in W} f(|w|)(1-K\bar K)\bar w w&=\sum_{w\in W}(-1)^{\lfloor (|w|+1)/2\rfloor}\sum_{\ell=0}^{n-|w|} \binom{n-|w|}{\ell}(-1)^{\lfloor (\ell+|w|)/2\rfloor}2^{\ell}f(\ell+|w|)(1 - K\bar K)w\bar w. \end{align*} \end{lemma} Note: the index of $\ell$ starts at 1 in the general case, but 0 in the case when the sum has a $(1-K\bar K)$-coefficient. \begin{proof} First, we note the following formula for any $w\in W$: $$\bar ww = (-1)^{\lfloor |w|/2\rfloor}\sum_{\mathbf{I}(w')\subseteq \mathbf{I}(w)} (-1)^{\lfloor (|w'|+1)/2\rfloor}(1 - K\bar K)^{|w|-|w'|} w'\bar w'.$$ Then, \begin{align*} \sum_{w\in W} f(|w|)\bar w w &= \sum_{w\in W} f(|w|)(-1)^{\lfloor |w|/2\rfloor}\sum_{\mathbf{I}(w')\subseteq \mathbf{I}(w)} (-1)^{\lfloor (|w'|+1)/2\rfloor}(1 - K\bar K)^{|w|-|w'|} w'\bar w' \intertext{Note that, for each $\ell\geq 0$, there are $\binom{n-|w'|}{\ell}$ choices of $w\in W$ with $\mathbf{I}(w')\subseteq \mathbf{I}(w)$ and $|w| = |w'|+\ell$. Thus, by indexing the sum by $\ell$ and relabeling $w'$ by $w$, we obtain the following:} &= \sum_{w\in W} (-1)^{\lfloor (|w|+1)/2\rfloor}\sum_{\ell=0}^{n-|w|} \binom{n-|w|}{\ell}(-1)^{\lfloor (\ell+|w|)/2\rfloor}f(\ell+|w|)(1 - K\bar K)^{\ell} w\bar w. \end{align*} Finally, if $\ell > 0$, then $(1 - K\bar K)^{\ell} = 2^{\ell-1}(1 - K\bar K)$, so the sum can be broken into the two sums given in the lemma statement. The two special cases follow directly. \end{proof} \begin{lemma}\label{Drinfeld-calculations} Let $n$ be a positive integer.
Then, the Drinfeld element $u\in D\Kn$ satisfies \begin{align*} u &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}((-1)^{n+|w|}(1-K\bar K)+K+\bar K)w\bar w,\\ u^{-1} &= \sum_{w\in W} \frac{(-1)^{\lfloor(|w|+1)/2\rfloor}}{2}((-1)^{n}(1-K\bar K)+K+\bar K)w\bar w,\\ S(u) &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}((-1)^{|w|}(1-K\bar K)+K+\bar K)w\bar w,\\ uS(u) &= \frac{(-1)^{n}}{2}(1 - K\bar K) + (1+K\bar K)\sum_{w\in W}(-1)^{\lfloor|w|/2\rfloor}2^{|w|-1}w\bar w,\\ uS(u)^{-1} &= (K\bar K)^n. \end{align*} \end{lemma} We see that $S(u)=u$ only when $n$ is even, in which case $uK$ and $u\bar K$ are both ribbon elements in $D\Kn$. When $n$ is odd, $K$ and $\bar K$ still implement the antipode, but it is no longer the case that $uK$ and $u\bar K$ are ribbon elements since $\bar K^2\neq uS(u)^{-1}\neq K^2$. \begin{proof} By definition, \begin{align} u &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}( (1+\bar K)(-\bar K)^{|w|}\bar w w + (1-\bar K)(-\bar K)^{|w|}\bar w Kw)\nonumber\\ &= \sum_{w\in W} \frac{(-1)^{\lfloor(|w|+1)/2\rfloor}}{2}(1+K+\bar K-K\bar K)\bar w w.\label{u-barww} \end{align} By applying Lemma \ref{sum-by-length} to the functions $f(k) = \frac{(-1)^{\lfloor (k+1)/2\rfloor}}{2}K(1+K\bar K)$ and $f(k) = \frac{(-1)^{\lfloor (k+1)/2\rfloor}}{2}(1 - K\bar K)$, we get, by the binomial theorem, \begin{align*} u&=\sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}(K+\bar K)w\bar w +\sum_{w\in W}(-1)^{\lfloor (|w|+1)/2\rfloor}\sum_{\ell=0}^{n-|w|} \binom{n-|w|}{\ell}(-1)^{\ell+|w|}2^{\ell-1}(1 - K\bar K)w\bar w,\\ &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}((-1)^{n+|w|}(1-K\bar K) + K + \bar K)w\bar w. \end{align*} By Proposition \ref{Drinfeld-props}, we see that \begin{align*} u^{-1} &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}(\bar w(1+\bar K)S^2(w) + \bar w (1-\bar K)S^2(Kw)),\\ &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}(((-1)^{|w|}(1-K\bar K)+K+\bar K)\bar ww).
\intertext{Again, we apply the two special cases of Lemma \ref{sum-by-length} to get} &= \sum_{w\in W} \frac{(-1)^{\lfloor (|w|+1)/2\rfloor}}{2}\left((K+\bar K) + \sum_{\ell=0}^{n-|w|} \binom{n-|w|}{\ell}(-1)^{\ell + |w|}2^\ell (1-K\bar K)\right)w\bar w,\\ &= \sum_{w\in W} \frac{(-1)^{\lfloor (|w|+1)/2\rfloor}}{2}\left((K+\bar K) + (-1)^{n} (1-K\bar K)\right)w\bar w. \end{align*} Now, we compute $S(u)$ using Equation \eqref{u-barww}: \begin{align*} S(u) &= \sum_{w\in W} \frac{(-1)^{\lfloor(|w|+1)/2\rfloor}}{2}(1+K+\bar K-K\bar K)(-K)^{|w|}w(-\bar K)^{|w|}\bar w,\\ &= \sum_{w\in W} \frac{(-1)^{\lfloor|w|/2\rfloor}}{2}((-1)^{|w|}(1-K\bar K)+K+\bar K)w\bar w. \end{align*} Next, we compute the product $uS(u)$. First, let $\hat w = \prod_{i\in\mathbf{I}(w)} \xi_i\bar\xi_i$. Note that $\xi_i\bar\xi_i\xi_j\bar\xi_j = \xi_j\bar\xi_j\xi_i\bar\xi_i$ for $i\neq j$, so this product does not need a specified ordering. Then, we can write $u$ and $S(u)$ as \begin{align*} u &= \sum_{w\in W} \frac{1}{2}((-1)^{n+|w|}(1-K\bar K)+K+\bar K)\hat w,\\ S(u) &= \sum_{w\in W} \frac{1}{2}((-1)^{|w|}(1-K\bar K)+K+\bar K)\hat w. \end{align*} We compute $uS(u)$ as follows: \begin{align*} uS(u) &= \sum_{w_1,w_2\in W} \frac{1}{4}((-1)^{|w_1|+n}(1-K\bar K)+K+\bar K)\hat w_1((-1)^{|w_2|}(1-K\bar K)+K+\bar K)\hat w_2,\\ &= \sum_{w_1,w_2\in W} \frac{1}{2}((-1)^{|w_1|+|w_2|+n}(1 - K\bar K) + 1 + K\bar K)\prod_{i\in \mathbf{I}(w_1)\cap \mathbf{I}(w_2)}\xi_i\bar \xi_i\xi_i\bar \xi_i\prod_{i\in \mathbf{I}(w_1)\vartriangle \mathbf{I}(w_2)} \xi_i\bar \xi_i,\\ &= \sum_{w_1,w_2\in W} \frac{1}{2}((-1)^{|w_1|+|w_2|+n}(1 - K\bar K) + 1 + K\bar K)(1 - K\bar K)^{|\mathbf{I}(w_1)\cap \mathbf{I}(w_2)|}\hat{\mathbf{w}}(\mathbf{I}(w_1)\cup \mathbf{I}(w_2)), \end{align*} where $\vartriangle$ denotes symmetric difference and, if $I\subseteq \{1,\dots, n\}$, then $\hat{\mathbf{w}}(I)=(\xi_1\bar\xi_1)^{a_1}\dots(\xi_n\bar\xi_n)^{a_n}$, where $a_i = 1$ if $i\in I$ and $a_i = 0$ otherwise.
We can reindex the sum to take advantage of the $(1-K\bar K)$ multiplier as follows: \begin{align*} &= \sum_{\mathbf{I}(w_1)\cap \mathbf{I}(w_2)=\varnothing} \frac{1}{2}(K + \bar K)\hat w_1\hat w_2 + \sum_{w_1, w_2\in W} 2^{|\mathbf{I}(w_1)\cap \mathbf{I}(w_2)|-1}(-1)^{|w_1|+|w_2|+n}(1 - K\bar K)\hat{\mathbf{w}}(\mathbf{I}(w_1)\cup \mathbf{I}(w_2)). \intertext{Given $w\in W$, consider all summands of the form $(1-K\bar K)\hat w$. For each $\ell_1=0,\dots,|w|$, there are $\binom{|w|}{\ell_1}$ different $w_1\in W$ such that $\mathbf{I}(w_1)\subseteq \mathbf{I}(w)$ and $|w_1|=\ell_1$. Fix such a $w_1$. Then, for each $\ell_2=|w|-\ell_1,\dots,|w|$, there are $\binom{\ell_1}{\ell_1+\ell_2-|w|}$ different $w_2\in W$ with $|w_2|=\ell_2$ and $\hat{\mathbf{w}}(\mathbf{I}(w_1)\cup \mathbf{I}(w_2)) = \hat w$. In this case, $|\mathbf{I}(w_1)\cap \mathbf{I}(w_2)|=\ell_1+\ell_2-|w|$. Next, we consider all summands of the form $(1+K\bar K)\hat w$. For fixed $w_1$ with $\mathbf{I}(w_1)\subseteq \mathbf{I}(w)$, there is exactly one $w_2$ such that $(K+\bar K)\hat w_1\hat w_2=\hat w$. There are $2^{|w|}$ such $w_1$. Moreover, the summand $\frac{1}{2}(1 + K\bar K)\hat w_1\hat w_2$ depends only on $w$. Indexing by $m=|w|-\ell_2$, the above sum can be rewritten as} &= \sum_{w\in W}2^{|w|-1}(1 + K\bar K)\hat w + \sum_{w\in W}\sum_{\ell_1=0}^{|w|}\sum_{m=0}^{\ell_1}\binom{|w|}{\ell_1}\binom{\ell_1}{m}2^{\ell_1-m-1}(-1)^{\ell_1+|w|-m+n}(1 - K\bar K)\hat{w}\\ &= \sum_{w\in W}2^{|w|-1}(1 + K\bar K)\hat w + \sum_{w\in W}\sum_{\ell_1=0}^{|w|}\frac{\binom{|w|}{\ell_1}(-1)^{\ell_1+|w|+n}}{2}(1 - K\bar K)\hat{w}\\ &= \sum_{w\in W}2^{|w|-1}(1 + K\bar K)\hat w + \frac{(-1)^{n}}{2}(1-K\bar K). \end{align*} The equation for $uS(u)$ follows. Finally, note that $$(K\bar K)^n ((-1)^\ell(1-K\bar K) + K + \bar K) = (-1)^{n+\ell}(1-K\bar K)+K+\bar K,$$ so $u = (K\bar K)^n S(u)$, and the last equation follows.
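As a numerical sanity check of the collapse of the double sum above (illustrative only and not part of the proof; the helper names \texttt{double\_sum} and \texttt{collapsed} are ours), one may verify that the inner sum over $m$ contributes exactly $\tfrac{1}{2}$, and that the resulting single sum vanishes unless $|w|=0$:

```python
from math import comb

def double_sum(k, n):
    """Double sum over l1 and m for fixed |w| = k, with sign (-1)^(l1 + k - m + n)."""
    return sum(
        comb(k, l1) * comb(l1, m) * 2.0 ** (l1 - m - 1) * (-1) ** (l1 + k - m + n)
        for l1 in range(k + 1)
        for m in range(l1 + 1)
    )

def collapsed(k, n):
    """Single sum obtained after the inner binomial sum evaluates to 1/2."""
    return sum(comb(k, l1) * (-1) ** (l1 + k + n) for l1 in range(k + 1)) / 2

for n in (1, 2, 3):
    for k in range(7):
        assert double_sum(k, n) == collapsed(k, n)
        # only |w| = 0 contributes to the (1 - K*Kbar)-part, with value (-1)^n / 2
        assert collapsed(k, n) == ((-1) ** n / 2 if k == 0 else 0)
```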
\end{proof} By Theorem \ref{sommer}, for odd $n$, the ribbon extension $\RKn$ is obtained by formally adding a grouplike $\tilde k$ to $D\Kn$ such that $\tilde k^2 = K\bar K$ and $S^2(x) = \tilde kx\tilde k^{-1}$. The latter is equivalent to requiring that $\tilde k$ commutes with $K, \bar K$ and anticommutes with each $\xi_i, \bar\xi_i$. Note that since $\bar K = K\tilde k^2$, it need not be included in the generators. Thus, we can provide a simple presentation of $\RKn$. \begin{corollary} Let $n$ be odd. The Hopf algebra $\RKn$ is generated (as an algebra) by the elements $K, \tilde k, \xi_1, \dots, \xi_n, \bar\xi_1,\dots,\bar\xi_n$ which satisfy the following relations: for $i\neq j$, \begin{align*} K^2 &= 1, & \xi_i^2 &= 0, & \xi_i\xi_j &= -\xi_j\xi_i, & K\xi_i &= -\xi_i K,\\ \tilde k^4 &= 1, & \bar\xi_i^2 &= 0, & \bar\xi_i\bar\xi_j &= -\bar\xi_j\bar\xi_i, & \tilde k\bar\xi_i &= -\bar\xi_i \tilde k,\\ K\tilde k &= \tilde k K, & \bar\xi_i\xi_i &= 1 - \tilde k^2 - \xi_i\bar\xi_i, & \xi_i\bar\xi_j &= -\bar\xi_j\xi_i, & K\bar\xi_i&=-\bar\xi_i K, & \tilde k\xi_i &= -\xi_i \tilde k. \end{align*} \end{corollary} \subsection{Representations of $\RKn$} For even $n$, by Corollary \ref{ribbon-reps-factor}, $\Rep(\RKn)\cong \Rep(D\Kn)\boxtimes \Vecm$. For odd $n$, the behavior is quite similar, though the category does not factor so nicely. Throughout this section, we assume $n$ is odd. In \cite{modulardata2024}, it is shown that there are four simple $D\Kn$-modules, two of which are their own projective covers. There are two one-dimensional simple $D\Kn$-modules $V_1$ and $V_{K\bar K}$. The projective covers $P_1$ and $P_{K\bar K}$ of these modules are each of dimension $2^{2n}$. There are two $2^n$-dimensional projective simple $D\Kn$-modules $V_K$ and $V_{\bar K}$. It follows from Theorems \ref{simples-double} and \ref{indec-projectives-restrict} that all of these counts double. There are eight simple $\RKn$-modules. 
There are four one-dimensional simple $\RKn$-modules $V_1^\pm$ and $V_{K\bar K}^\pm$. The projective covers $P_1^\pm$ and $P_{K\bar K}^\pm$ of these modules are each of dimension $2^{2n}$. There are four $2^n$-dimensional projective simple $\RKn$-modules $V_K^\pm$ and $V_{\bar K}^\pm$. We now give explicit descriptions of these modules in terms of our presentation of $\RKn$. Aligning with the notation established in \cite{modulardata2024}, the one-dimensional simple $\RKn$-modules $V_{1}^+, V_{1}^-, V_{K\bar K}^+, V_{K\bar K}^-$ are described by \begin{alignat*}{5} \forall \lambda\in V_{1}^+&:& \hspace{1cm}K\cdot\lambda &=& \lambda, \hspace{1cm}\tilde k\cdot \lambda &=& \lambda, \hspace{1cm}\xi_i\cdot\lambda=\bar\xi_i\cdot\lambda &=&\, 0,\\ \forall \lambda\in V_{1}^-&:&\hspace{1cm}K\cdot\lambda &=& \lambda, \hspace{1cm}\tilde k\cdot \lambda &=&\, -\lambda, \hspace{1cm}\xi_i\cdot\lambda=\bar\xi_i\cdot\lambda &=&\, 0,\\ \forall \lambda\in V_{K\bar K}^+&:&\hspace{1cm}K\cdot\lambda &=&\, -\lambda, \hspace{1cm}\tilde k\cdot \lambda &=& \lambda, \hspace{1cm}\xi_i\cdot\lambda=\bar\xi_i\cdot\lambda &=&\, 0,\\ \forall \lambda\in V_{K\bar K}^-&:&\hspace{1cm}K\cdot\lambda &=&\, -\lambda, \hspace{1cm}\tilde k\cdot \lambda &=&\, -\lambda,\hspace{1cm} \xi_i\cdot\lambda=\bar\xi_i\cdot\lambda &=&\, 0. \end{alignat*} Observe that $V_1^{\pm}|_{D\Kn}=V_1$, $V_{K\bar K}^{\pm}|_{D\Kn}=V_{K\bar K}$. The projective covers $P_1^{\pm}$ and $P_{K\bar K}^\pm$ are given by \begin{align*} P_1^+ &= \RKn(1 + K)(1 + \tilde k + \tilde k^2 + \tilde k^3),\\ P_1^- &= \RKn(1 + K)(1 - \tilde k + \tilde k^2 - \tilde k^3),\\ P_{K\bar K}^+ &= \RKn(1 - K)(1 + \tilde k + \tilde k^2 + \tilde k^3),\\ P_{K\bar K}^- &= \RKn(1 - K)(1 - \tilde k + \tilde k^2 - \tilde k^3). \end{align*} They are indeed projective since $P_1^+ \oplus P_1^-\oplus P_{K\bar K}^+\oplus P_{K\bar K}^-\oplus \RKn(1 - \tilde k^2) = \RKn$. 
The covering maps $P\to V$ are all generated by $(1\pm K)(1\pm \tilde k + \tilde k^2 \pm \tilde k^3)\mapsto 1$. The dimensions of the $P_{1}^\pm$ and $P_{K\bar K}^\pm$ are $2^{2n}$. Set, as vector spaces, $V_{K}^{\pm}=V_{\bar K}^{\pm}=(\bC^2)^{\otimes n}$. Define the matrices $\Xi = \begin{bmatrix} 0 & \sqrt{2}\\ 0 & 0 \end{bmatrix}$ and $\sigma_Z = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$. Set $\Xi_j = \sigma_Z^{\otimes j-1}\otimes \Xi\otimes I_2^{\otimes n-j}$ and $\bar\Xi_j = \sigma_Z^{\otimes j-1}\otimes \Xi^T\otimes I_2^{\otimes n-j}$. Then, we define the following $\RKn$-module actions: for $j=1,\dots, n$, \begin{alignat*}{6} \forall v\in V_{K}^+&:& \hspace{1cm}K\cdot v &=& \sigma_Z^{\otimes n} v, \hspace{1cm}\tilde k\cdot v &=& i\sigma_Z^{\otimes n} v, \hspace{1cm}\xi_j\cdot v &=&\, \Xi_j v, \hspace{1cm}\bar\xi_j\cdot v &=&\, \bar\Xi_j v,\\ \forall v\in V_{K}^-&:& \hspace{1cm}K\cdot v &=& \sigma_Z^{\otimes n} v, \hspace{1cm}\tilde k\cdot v &=& -i\sigma_Z^{\otimes n} v, \hspace{1cm}\xi_j\cdot v &=&\, \Xi_j v, \hspace{1cm}\bar\xi_j\cdot v &=&\, \bar\Xi_j v,\\ \forall v\in V_{\bar K}^+&:& \hspace{1cm}K\cdot v &=& -\sigma_Z^{\otimes n} v, \hspace{1cm}\tilde k\cdot v &=& i\sigma_Z^{\otimes n} v, \hspace{1cm}\xi_j\cdot v &=&\, \Xi_j v, \hspace{1cm}\bar\xi_j\cdot v &=&\, \bar\Xi_j v,\\ \forall v\in V_{\bar K}^-&:& \hspace{1cm}K\cdot v &=& -\sigma_Z^{\otimes n} v, \hspace{1cm}\tilde k\cdot v &=& -i\sigma_Z^{\otimes n} v, \hspace{1cm}\xi_j\cdot v &=&\, \Xi_j v, \hspace{1cm}\bar\xi_j\cdot v &=&\, \bar\Xi_j v. \end{alignat*} Observe that again $V_K^{\pm}|_{D\Kn}=V_K$ and $V_{\bar K}^{\pm}|_{D\Kn}=V_{\bar K}$. Thus, these modules are necessarily simple and are, in particular, singly generated by $|0\rangle^{\otimes n}$. Let $f:V_K^+\to V_K^-$ be a $\RKn$-linear map. By considering the action of $\frac{\xi_1\bar\xi_1}{2}\dots\frac{\xi_n\bar\xi_n}{2}$, we see that $f(|0\rangle^{\otimes n}) = c|0\rangle^{\otimes n}$ for some $c\in \bC$.
Therefore, $$-if(|0\rangle^{\otimes n}) = \tilde k\cdot f(|0\rangle^{\otimes n}) = f(\tilde k\cdot|0\rangle^{\otimes n}) = i f(|0\rangle^{\otimes n}).$$ It follows that $f$ is the zero map, so $V_K^+$ and $V_K^-$ must be distinct. The same argument distinguishes $V_{\bar K}^+$ and $V_{\bar K}^-$. \begin{proposition}\label{jordan-decomp} The Jordan decomposition of $P^{+}_{1}$ and $P^-_{K\bar K}$ consists of $2^{2n-1}$ copies of $V_1^+$ and $2^{2n-1}$ copies of $V_{K\bar K}^-$. The Jordan decomposition of $P^{-}_{1}$ and $P^+_{K\bar K}$ consists of $2^{2n-1}$ copies of $V_1^-$ and $2^{2n-1}$ copies of $V_{K\bar K}^+$. In particular, the Cartan matrix is $$\begin{pmatrix} 2^{2n-1} & 0 & 0 & 2^{2n-1} & 0 & 0 & 0 & 0\\ 0 & 2^{2n-1} & 2^{2n-1} & 0 & 0 & 0 & 0 & 0\\ 0 & 2^{2n-1} & 2^{2n-1} & 0 & 0 & 0 & 0 & 0\\ 2^{2n-1} & 0 & 0 & 2^{2n-1} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$ \end{proposition} \begin{proof} This follows from the proof of \cite[Thm. 7.2.4]{modulardata2024}, noting that $\tilde k$ anticommutes with $\xi_i$ for all $i$. \end{proof} As expected, $uS(u)$ acts as $\id_M$ on the $D\Kn$-modules $M=V_1, V_{K\bar K}$ and $-\id_M$ on $M=V_K, V_{\bar K}$. However, $uS(u)$ does not act as a scalar multiple of $\id_M$ on $M=P_1$ or $M=P_{K\bar K}$. \begin{proposition}\label{fusion-rules} Let $\cdot:\{+, -\}^2\to\{+, -\}$ be given by $\pm\cdot\pm = +$ and $\pm\cdot\mp = -$, and set $-(\pm) = \mp$. Then, for $s, t\in \{+, -\}$, the following fusion rules hold. 
\begin{align} V_{1}^s V_{1}^t \cong V_{K\bar K}^s V_{K\bar K}^t&\cong V_{1}^{s\cdot t},& V_{K\bar K}^s V_{1}^t &\cong V_{K\bar K}^{s\cdot t},\label{fusion:SS}\\ V_K^s V_{1}^t&\cong V_{K}^{s\cdot t}, & V_{\bar K}^s V_{1}^t&\cong V_{\bar K}^{s\cdot t},\label{fusion:SMa}\\ V_K^s V_{K\bar K}^t&\cong V_{\bar K}^{s\cdot t}, & V_{\bar K}^s V_{K\bar K}^t&\cong V_{K}^{s\cdot t},\label{fusion:SMb}\\ V_{1}^s P_{1}^t\cong V_{K\bar K}^s P_{K\bar K}^t&\cong P_{1}^{s\cdot t}, & V_{K\bar K}^s P_{1}^t\cong V_{1}^s P_{K\bar K}^t&\cong P_{K\bar K}^{s\cdot t},\label{fusion:SB}\\ V_K^s V_K^t\cong V_{\bar K}^s V_{\bar K}^t&\cong P_{K\bar K}^{s\cdot t}, & V_K^s V_{\bar K}^t&\cong P_{1}^{s\cdot t},\label{fusion:MM} \end{align} \begin{align} V_{K}^{s} P_1^{t}\cong V_{K}^{s} P_{K\bar K}^{-t}\cong V_{\bar K}^{s} P_1^{-t}\cong V_{\bar K}^{s} P_{K\bar K}^{t}&\cong 2^{2n-1}V_K^{s\cdot t}\oplus 2^{2n-1}V_{\bar K}^{-s\cdot t},\label{fusion:MB}\\ P_1^s P_{1}^t \cong P_{K\bar K}^s P_{K\bar K}^t\cong P_1^s P_{K\bar K}^{-t}&\cong 2^{2n-1}P_1^{s\cdot t}\oplus 2^{2n-1}P_{K\bar K}^{-s\cdot t}.\label{fusion:BB} \end{align} \end{proposition} \begin{proof} Isomorphisms \eqref{fusion:SS}-\eqref{fusion:MB} almost follow from \cite[Thm. 7.2.5]{modulardata2024} (and the comment after). However, care should be taken in verifying the sign of the modules. For example, in Isomorphisms \eqref{fusion:MM}, there is an isomorphism $P_{K\bar K}^s\to V_K^+V_K^+$ generated by $(1-K)(1+\sigma\tilde k+\tilde k^2+\sigma\tilde k^3)\mapsto |0\rangle^{\otimes n}\otimes |1\rangle^{\otimes n}$ for some $\sigma\in\{1, -1\}$, where $s=\sgn\sigma$, but $s$ must be determined by the action of $\tilde k$.
In this case, since $\tilde k$ is group-like, it acts as $$\tilde k\cdot (|0\rangle^{\otimes n}\otimes |1\rangle^{\otimes n}) = (\tilde k |0\rangle^{\otimes n}\otimes \tilde k |1\rangle^{\otimes n}) = (i|0\rangle^{\otimes n}\otimes (-1)^n i|1\rangle^{\otimes n}) = |0\rangle^{\otimes n}\otimes |1\rangle^{\otimes n}.$$ Thus, $\tilde k$, by the isomorphism, must act by the identity on the generator $(1-K)(1+\sigma\tilde k+\tilde k^2+\sigma\tilde k^3)$ of $P_{K\bar K}^s$, meaning $\sigma=1$ and $s=+$. Isomorphisms \eqref{fusion:MB} follow from Proposition \ref{jordan-decomp} and Isomorphisms \eqref{fusion:SMa} and \eqref{fusion:SMb}. Isomorphisms \eqref{fusion:BB} follow from Isomorphisms \eqref{fusion:MM} and \eqref{fusion:MB}. \end{proof} Propositions \ref{jordan-decomp} and \ref{fusion-rules} provide some experimental evidence for the following general conjecture, which is also true when $H$ is ribbon. \begin{conj}\label{tensor-only-one} Let $H$ be a finite-dimensional quasitriangular Hopf algebra. Let $M_1, M_2$ be indecomposable $H$-modules and $\tilde M_1, \tilde M_2$ be $\H$-modules for which $\tilde M_1|_H\cong M_1$ and $\tilde M_2|_H\cong M_2$ as $H$-modules. \begin{enumerate} \item Suppose the simple $H$-module $V$ is a composition factor of $M_1\otimes M_2$. Then, there is exactly one $\H$-module $\tilde V$ (up to isomorphism) for which $\tilde V|_H\cong V$ and $\tilde V$ is a composition factor of $\tilde M_1\otimes \tilde M_2$. \item Suppose the indecomposable $H$-module $M$ is a direct summand of $M_1\otimes M_2$ as $H$-modules. Then, there is exactly one $\H$-module $\tilde M$ (up to isomorphism) for which $\tilde M|_H\cong M$ and $\tilde M$ is a direct summand of $\tilde M_1\otimes \tilde M_2$ as $\H$-modules.
\end{enumerate} \end{conj} As a special case of Conjecture \ref{tensor-only-one}, we could set $M_1$ to be the trivial $H$-module and $M_2 = M$ in (1) to obtain the following: if the simple $H$-module $V$ is a composition factor of $M$ as $H$-modules, then there is exactly one $\H$-module $\tilde V$ (up to isomorphism) for which $\tilde V|_H\cong V$ and $\tilde V$ is a direct summand of $\tilde M$ as $\H$-modules. This is again supported by Proposition \ref{jordan-decomp}. \begin{remark} Despite having modules which seem to pair up, neither $D\Kn$ nor $\Kn$ is of the form $\H$ (as Hopf algebras) for some other quasitriangular Hopf algebra $H$. By factorizability, $D\Kn$ for even $n$ cannot be written as $D\Kn\cong \H$ for some $H$. If $\Kn\cong \H$, then by Proposition \ref{simples-double}, $\Rep(H)$ would have precisely one simple object up to isomorphism. This implies $H$ is semisimple. By Corollary \ref{semisimple-iff}, this would imply $\H\cong \Kn$ is semisimple, which is a contradiction. \end{remark} \begin{proposition}\label{not-deligne} For odd $n$, $\Rep(\RKn)\not\cong \Rep(D\Kn)\boxtimes\Vecm$ as monoidal categories. \end{proposition} \begin{proof} One can verify that such an equivalence cannot simultaneously preserve both Isomorphisms \eqref{fusion:MM} and \eqref{fusion:BB}. \end{proof} \section*{Acknowledgements} The author is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2139319. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. The author thanks Y. Sommerh\"auser for a helpful discussion on cocycled crossed products. The author also thanks B. R. Jones for illustrating the proof of Lemma \ref{sqrt-uSu} with the holomorphic functional calculus in the complex case. Finally, the author thanks K. Goodearl, Z. Wang, and Q. Zhang for many useful comments.
\bibliographystyle{abbrv} \bibliography{zbib} \end{document}
2412.20443v4
http://arxiv.org/abs/2412.20443v4
Monogenic trinomials and class numbers of related quadratic fields
\documentclass[12]{amsart} \usepackage{amsmath,amssymb,amsthm,color,enumerate,comment,centernot,enumitem,url,cite} \usepackage{graphicx,relsize,bm} \usepackage{mathtools} \usepackage{array} \makeatletter \newcommand{\tpmod}[1]{{\@displayfalse\pmod{#1}}} \makeatother \newcommand{\ord}{\operatorname{ord}} \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{sch}[thm]{Scholium} \newtheorem{guess}[thm]{Guess} \newtheorem{ex}[thm]{Example} \newtheorem{exs}[thm]{Examples} \newtheorem{question}[thm]{Question} \theoremstyle{remark} \newtheorem*{remark}{{\bf Remark}} \newtheorem*{rems}{{\bf Remarks}} \newtheorem*{note}{Note} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{hyp}[thm]{Hypothesis} \newtheorem{rem}[thm]{Remark} \newtheorem{Con}{Conditions}[section] \theoremstyle{THM} \newtheorem*{THM}{Theorem} \DeclareMathOperator{\st}{\left\vert\vphantom{\frac{1}{1}}\right.} \DeclareMathOperator{\lcm}{lcm} \newcommand{\abs}[1]{\left|{#1}\right|} \def\ds{\displaystyle} \def\UU {{\widehat{\mathcal U}}} \def\g {{\widehat{g}}} \def\uu {{\widehat{u}}} \def\VV {{\widehat{\mathcal V}}} \def\vv {{\widehat{v}}} \def\P {{\mathcal P}} \def\O {{\mathcal O}} \def\p {{\mathfrak p}} \def\FF {{\mathcal F}} \def\GG {{\mathcal G}} \def\TT {{\mathcal T}} \def\R {{\mathbb R}} \def\Z {{\mathbb Z}} \def\ZZ {{\mathcal Z}} \def\NN {{\mathcal N}} \def\QQ {{\mathcal Q}} \def\N{{\mathbb N}} \def\Q {{\mathbb Q}} \def\T {{\mathcal T}} \def\E {{\mathcal E}} \def\B {{\mathcal B}} \def\C {{\mathcal C}} \def\S {{{\mathcal S}}} \def\A {{\mathcal A}} \def\D {{\mathcal D}} \def\L {{\mathcal L}} \def\Norm{{\mathcal N}} \def\M {{\mathcal M}} \def\h {{\mathfrak h}} \def\X {{\mathcal X}} \def\U {{\mathcal U}} \def\V {{\mathcal V}} \def\K{{\mathcal K}} \def\PP {{\widehat{\mathcal P}}} \def\d {{\rm det}} \def\PPP {{{\mathfrak P}}} \newcommand{\im}{\operatorname{im}} 
\def\k {k} \def\GG {{\mathcal G}} \def\UU {{\widehat{\mathcal U}}} \def\uu {{\widehat{u}}} \def\VV {{\widehat{\mathcal V}}} \def\vv {{\widehat{v}}} \def\P {{\mathcal P}} \def\F {{\mathbb F}} \def\D {{\mathcal D}} \def\Z {{\mathbb Z}} \def\Q {{\mathbb Q}} \def\C {{\mathbb C}} \def\S {{{\mathcal S}}} \def\U {{\mathcal U}} \def\V {{\mathcal V}} \def\W {{\mathcal W}} \def\PP {{\widehat{\mathcal P}}} \def\RR {{{\mathcal R}}} \def\PPP {{{\mathfrak P}}} \def\CC {{\mathcal C}} \def\Gal{{\mbox{{\rm{Gal}}}}} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \def\n{{\mathbf n}} \def\tr{\mathrm{tr}} \def\red#1 {\textcolor{red}{#1 }} \def\blue#1 {\textcolor{blue}{#1 }} \numberwithin{equation}{section} \def\ds{\displaystyle} \def\Z {{\mathbb Z}} \newcommand{\Mod}[1]{\ (\mathrm{mod}\enspace #1)} \newcommand{\mmod}[1]{\ \mathrm{mod}\enspace #1} \renewcommand{\arraystretch}{1.25} \begin{document} \title[Monogenic trinomials and class numbers of related quadratic fields]{Monogenic trinomials and class numbers of related quadratic fields} \author{Lenny Jones} \address{Professor Emeritus, Department of Mathematics, Shippensburg University, Shippensburg, Pennsylvania 17257, USA} \email[Lenny~Jones]{[email protected]} \date{\today} \begin{abstract} We say that a monic polynomial $f(x)\in \Z[x]$ of degree $N\ge 2$ is \emph{monogenic} if $f(x)$ is irreducible over $\Q$ and $\{1,\theta,\theta^2,\ldots ,\theta^{N-1}\}$ is a basis for the ring of integers of $\Q(\theta)$, where $f(\theta)=0$. In this article, we investigate the divisibility of the class numbers of quadratic fields $\Q(\sqrt{\delta})$ for certain families of monogenic trinomials $f(x)=x^N+Ax+B$, where $\delta\ne \pm 1$ is a squarefree divisor of the discriminant of $f(x)$. 
\end{abstract} \subjclass[2020]{Primary 11R09, 11R29}\keywords{monogenic, trinomial, class number, quadratic field} \maketitle \section{Introduction}\label{Section:Intro} We say that a monic polynomial $f(x)\in \Z[x]$ is \emph{monogenic} if $f(x)$ is irreducible over $\Q$ and $\{1,\theta,\theta^2,\ldots ,\theta^{\deg{f}-1}\}$ is a basis for the ring of integers $\Z_K$ of $K:=\Q(\theta)$, where $f(\theta)=0$. Hence, $f(x)$ is monogenic if and only if $\Z_K=\Z[\theta]$. For the minimal polynomial $f(x)$ of an algebraic integer $\theta$ over $\Q$, it is well known \cite{Cohen} that \begin{equation} \label{Eq:Dis-Dis} \Delta(f)=\left[\Z_K:\Z[\theta]\right]^2\Delta(K), \end{equation} where $\Delta(f)$ and $\Delta(K)$ are the respective discriminants over $\Q$ of $f(x)$ and the number field $K$. Thus, from \eqref{Eq:Dis-Dis}, $f(x)$ is monogenic if and only if $\Delta(f)=\Delta(K)$. Many authors have investigated divisibility properties of class numbers of quadratic fields. We present a few such results from the literature that are germane to our investigations in this paper. The first theorem is from 1955 and is due to Ankeny and Chowla \cite{AC}. \begin{thm}\label{Thm:AC} Let $n$ and $M$ be positive integers with $M\ge 5$. If $\delta:=M^{2n}+1$ is squarefree, then $n$ divides the class number $h_K$ of the real quadratic field $K=\Q(\sqrt{\delta})$. \end{thm} The next theorem, published in 1998, is due to M. Ram Murty \cite{Murty}. \begin{thm}\label{Thm:Murty} Suppose that $\delta:=1-M^n$ is squarefree with $M\ge 5$ and odd. Then the class group of $\Q(\sqrt{\delta})$ has an element of order $n$. \end{thm} \begin{rem}\label{Rem:Murty} We point out that Murty's proof of Theorem \ref{Thm:Murty} is also valid when $M=3$ and $n$ is odd.
\end{rem} In 2000, Kishi and Miyake \cite{KM} gave the following parameterization of the quadratic fields whose class numbers are divisible by 3: \begin{thm}\label{Thm:KM} Let $u,w\in \Z$ with $\gcd(u,w)=1$, such that $d:=4uw^3-27u^2$ is not a square and one of the following sets of conditions holds: \begin{align} \label{C1} &3\nmid w;\\ \label{C2} &3\mid w, \quad uw\not \equiv 3 \pmod{9}, \quad u\equiv w \pm 1 \pmod{9};\\ \label{C3} &3\mid w, \quad uw \equiv 3 \pmod{9}, \quad u\equiv w\pm 1 \pmod{27}. \end{align} Define \begin{equation}\label{Eq:f} f_{u,w}(x):=x^3-uwx-u^2. \end{equation} If $f_{u,w}(x)$ is irreducible over $\Q$, then the roots of $f_{u,w}(x)=0$ generate an unramified cyclic cubic extension of the quadratic field $\Q(\sqrt{d})$. Conversely, every quadratic field $K$ whose class number is divisible by {\rm 3} and every unramified cyclic cubic extension of $K$ are given in this way by a suitable pair of integers $u$ and $w$. \end{thm} The next theorem is from 2009 and due to Kishi \cite{K}. \begin{thm}\label{Thm:K} For any positive integers $k$ and $n$ with $2^{2k}<3^n$ and $(k,n)\ne (2,3)$, the class number of the imaginary quadratic field $\Q(\sqrt{2^{2k}-3^n})$ is divisible by $n$. \end{thm} In this paper, for certain monogenic trinomials $f(x)=x^N+Ax+B$, we use Theorems \ref{Thm:Murty}, \ref{Thm:KM} and \ref{Thm:K} to investigate divisibility properties of the class numbers of quadratic fields $\Q(\sqrt{\delta})$, where $\delta\ne \pm 1$ is a divisor of $\Delta(f)$. One motivation for such investigations is to determine whether there is some possible connection between the monogenicity of $f(x)$ and the class number of $\Q(\sqrt{\delta})$. This notion is not completely outside the realm of possibility since there are well-known connections between $\Delta(f)$ and the monogenicity of $f(x)$. For example, if $f(x)$ is monic and irreducible over $\Q$ such that $\Delta(f)$ is squarefree, then $f(x)$ is monogenic. 
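To make the squarefree-discriminant criterion just quoted concrete, the following short Python sketch (our own illustration, not part of the original arguments) computes the discriminant $\Delta(f)=-4A^3-27B^2$ of a cubic $f(x)=x^3+Ax+B$ and tests it for squarefreeness; for $f(x)=x^3-x-1$ it gives $\Delta(f)=-23$, which is squarefree, so that cubic is monogenic.

```python
def cubic_disc(A, B):
    """Discriminant of the cubic x^3 + A*x + B."""
    return -4 * A**3 - 27 * B**2

def is_squarefree(n):
    """Trial-division squarefree test for a nonzero integer n."""
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        while n % d == 0:
            n //= d
        d += 1
    return True

# x^3 - x - 1 is irreducible over Q with squarefree discriminant -23,
# so it is monogenic by the criterion quoted above.
assert cubic_disc(-1, -1) == -23
assert is_squarefree(cubic_disc(-1, -1))
```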
A second motivation is to prove that there exist infinitely many distinct monogenic polynomials such that the class number of the related quadratic field can be arbitrarily large. We summarize our approach to this study. Our natural starting point is to construct and examine families of polynomials $f(x)$ such that $\Delta(f)$ has a divisor that is of the same form as the discriminants for the quadratic fields found in one of the known results: Theorem \ref{Thm:Murty}, Theorem \ref{Thm:KM} or Theorem \ref{Thm:K}. For the construction of such polynomials, we focus on trinomials $f(x)=x^N+Ax+B$ because of the relative simplicity of their discriminants. Then, for each of these constructed families of trinomials, we provide necessary and sufficient conditions for the monogenicity of a trinomial in the family, and we show that no two such degree-$N$ monogenic trinomials generate the same extension over $\Q$. Next, we confirm that these monogenic trinomials do satisfy the conditions of one of Theorems \ref{Thm:Murty}, \ref{Thm:KM} and \ref{Thm:K}, so that the divisibility conditions for the related quadratic field found in these known theorems do indeed hold. Finally, we examine the data for any obvious patterns. Although our investigations did not yield any apparent connections between the monogenicity of these trinomials and the divisibility of the class numbers of the related quadratic fields, we have shown that there exist infinitely many monogenic trinomials that fall within the context of each of Theorems \ref{Thm:Murty}, \ref{Thm:KM} and \ref{Thm:K}. Moreover, as mentioned above, we also show that no two degree-$N$ monogenic trinomials generate the same extension over $\Q$. In doing so, we have established the goal stated in our second motivation. In the statement of each new theorem below, we indicate explicitly its association with the relevant one of Theorems \ref{Thm:Murty}, \ref{Thm:KM} and \ref{Thm:K}.
More precisely, our main results are: \begin{thm}\label{Thm:Main1}(associated with Theorem \ref{Thm:KM}) Let $u$, $w$ and $d$ be integers that satisfy the conditions set forth in Theorem \ref{Thm:KM}, and let $f_{u,w}(x)$ be as defined in \eqref{Eq:f}. Suppose that $f_{u,w}(x)$ is irreducible over $\Q$. Then \begin{equation}\label{Eq:Mono1} f_{u,w}(x)\ \mbox{is monogenic if and only if $u=1$ and $d$ is squarefree,} \end{equation} the set \begin{equation}\label{FF1} \FF_1=\{f_{1,w}(x): \mbox{$d$ is squarefree}\} \end{equation} is infinite, every element of $\FF_1$ is such that $\Gal_{\Q}(f_{1,w})\simeq S_3$, the class number of the unique quadratic subfield $\Q(\sqrt{d})$ of the splitting field of $f_{1,w}(x)$ is divisible by $3$, and no two elements of $\FF_1$ generate the same cubic extension over $\Q$. \end{thm} \begin{thm}\label{Thm:Main2}(associated with Theorem \ref{Thm:KM}) Let $a$ and $b$ be positive integers with $b\ge 2$ and $(a,b)\ne (1,2)$. Define \[f_{a,b}(x):=x^3-2^{2a}x-3^{b-1} \quad \mbox{and} \quad \delta:=2^{6a+2}-3^{2b+1}.\] Then \begin{equation}\label{Eq:Mono2} f_{a,b}(x)\ \mbox{is monogenic if and only if $\delta$ is squarefree,} \end{equation} every element of the set \begin{equation}\label{FF2} \FF_2=\{f_{a,b}(x): \mbox{$\delta<0$ is squarefree}\} \end{equation} is such that $\Gal_{\Q}(f_{a,b})\simeq S_3$, and the class number of the unique imaginary quadratic subfield $\Q(\sqrt{\delta})$ of the splitting field of $f_{a,b}(x)$ is divisible by $(6b+3)/\gcd(3,2b+1)$. Moreover, $\FF_1\cap \FF_2=\varnothing$, and no two elements of $\FF_2$ generate the same cubic extension of $\Q$. \end{thm} \begin{thm}\label{Thm:Main3}(associated with Theorem \ref{Thm:AC}) Let $b,N\in \Z$ such that $b\ge 1$, $N\ge 6$ and $N\equiv 2 \pmod{4}$. Define \[f_{N,b}(x):=x^N-N(N-1)^bx-(N-1) \quad \mbox{and} \quad \delta:=(N-1)^{bN}+1.\] Suppose that $f_{N,b}(x)$ is irreducible over $\Q$.
Then \begin{equation}\label{Eq:Mono3} f_{N,b}(x)\ \mbox{is monogenic if and only if $N(N-1)$ and $\delta$ are squarefree.} \end{equation} Furthermore, no two such monogenic trinomials $f_{N,b}(x)$ generate the same $N$th-degree extension of $\Q$, and $bN/2$ divides the class number of the real quadratic field $\Q(\sqrt{\delta})$. \end{thm} \begin{thm}\label{Thm:Main4}(associated with Theorem \ref{Thm:Murty}) Let $b,N\in \Z$ with $b\ge 1$, $N\ge 3$ and $N\equiv 3 \pmod{4}$. Define \[f_{N,b}(x)=x^N-x-(N-1)N^b\quad \mbox{and} \quad \delta:=1-N^{(b+1)N-b}.\] Suppose that $f_{N,b}(x)$ is irreducible over $\Q$. Then \begin{equation}\label{Eq:Mono4} f_{N,b}(x)\ \mbox{is monogenic if and only if $\delta$ is squarefree.} \end{equation} Furthermore, no two such monogenic trinomials $f_{N,b}(x)$ generate the same $N$th-degree extension of $\Q$, and the ideal class group of the imaginary quadratic field $\Q(\sqrt{\delta})$ contains an element of order $(b+1)N-b$. \end{thm} \section{Preliminaries}\label{Section:Prelims} The first theorem is a special case of a result due to Swan \cite{Swan}. \begin{thm}\label{Thm:Swan} Let $f(x)=x^N+Ax+B\in \Q[x]$. Then \[ \Delta(f)=(-1)^{N(N-1)/2}\left(N^{N}B^{N-1}-(-1)^{N}(N-1)^{N-1}A^{N}\right). \] \end{thm} The theorem below, which we use to determine the monogenicity of an irreducible trinomial, is the special case of a result due to Jakhar, Khanduja and Sangwan \cite[Theorem 1.1]{JKS2} applied to trinomials of the form $f(x)=x^N+Ax+B$, where $N\ge 3$. We use the notation $q^e\mid \mid n$ to mean that $q^e$ is the exact power of the prime $q$ that divides the integer $n$. \begin{thm}{\rm \cite{JKS2}}\label{Thm:JKS} Let $N\ge 3$ be an integer. Let $K=\Q(\theta)$ be an algebraic number field with $\theta\in \Z_K$, the ring of integers of $K$, having minimal polynomial $f(x)=x^{N}+Ax+B$ over $\Q$.
A prime factor $q$ of $\Delta(f)$ does not divide $\left[\Z_K:\Z[\theta]\right]$ if and only if $q$ satisfies one of the following conditions: \begin{enumerate} \item \label{JKS:I1} when $q\mid A$ and $q\mid B$, then $q^2\nmid B$; \item \label{JKS:I2} when $q\mid A$ and $q\nmid B$, then \[\mbox{either } \quad q\mid A_2 \mbox{ and } q\nmid B_1 \quad \mbox{ or } \quad q\nmid A_2\left(-BA_2^{N}-\left(-B_1\right)^{N}\right),\] where $A_2=A/q$ and $B_1=\frac{B+(-B)^{q^j}}{q}$ with $q^j\mid\mid N$; \item \label{JKS:I3} when $q\nmid A$ and $q\mid B$, then \[\mbox{either } \quad q\mid A_1 \mbox{ and } q\nmid B_2 \quad \mbox{ or } \quad q\nmid A_1\left(-AA_1^{N-1}-\left(-B_2\right)^{N-1}\right),\] where $A_1=\frac{A+(-A)^{q^\ell}}{q}$ with $q^\ell\mid\mid (N-1)$, and $B_2=B/q$; \item \label{JKS:I5} when $q\nmid AB$, then $q^2\nmid \Delta(f)$. \end{enumerate} \end{thm} \section{Proof of Theorem \ref{Thm:Main1}} \begin{proof} Note that replacing $u$ with $-u$ and $w$ with $-w$ in \eqref{Eq:f} yields the same trinomial $f_{u,w}(x)=x^3-uwx-u^2$. Consequently, we can assume without loss of generality that $u>0$. A straightforward calculation shows that \[\Delta(f_{u,w})=4u^3w^3-27u^4.\] We begin with the proof of \eqref{Eq:Mono1}, and we assume first that $f_{u,w}(x)$ is monogenic. Let $q$ be a prime divisor of $\Delta(f_{u,w})$, and suppose that $f_{u,w}(\theta)=0$. If $u>1$, then condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} fails for each prime divisor $q$ of $u$; in that case, $q\mid \left[\Z_K:\Z[\theta]\right]$ and $f_{u,w}(x)$ is not monogenic. Thus, $u=1$ and we can focus on \begin{equation}\label{Eq:Newf} f_{1,w}(x)=x^3-wx-1\quad \mbox{and} \quad \Delta(f_{1,w})=d=4w^3-27. \end{equation} Observe then from \eqref{Eq:Newf} that $f_{1,w}(x)$ is irreducible over $\Q$ if and only if $w\not \in \{0,2\}$ by the Rational Root Theorem. If $q=3$, then $3\mid w$ from \eqref{Eq:Newf}.
From condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS}, we see that $B_1=0$, which implies that $3\nmid A_2$ since $f_{1,w}(x)$ is monogenic. Hence, we have that $w\equiv \pm 3 \pmod{9}$. However, with $u=1$, examining the two possibilities for $w$ reveals that neither condition \eqref{C2} nor condition \eqref{C3} of Theorem \ref{Thm:KM} can be satisfied. For example, if $w\equiv -3 \pmod{9}$, then $uw\equiv -3\not \equiv 3\pmod{9}$, but $u-w\equiv 4\not \equiv \pm 1 \pmod{9}$. Thus, we must have $3\nmid \Delta(f_{1,w})$ and $3\nmid w$, so that condition \eqref{C1} of Theorem \ref{Thm:KM} holds. As a consequence, we have shown that no cubic trinomial $f_{1,w}(x)$ in \eqref{Eq:Newf} satisfying the hypotheses of Theorem \ref{Thm:KM} is monogenic when $3\mid w$. Suppose next that $q$ is a prime divisor of $d$ such that $q^2\mid d$. By the previous discussion, we have that $q\ne 3$ and $q\nmid w$. But then condition \eqref{JKS:I5} of Theorem \ref{Thm:JKS} is not satisfied, contradicting the fact that $f_{1,w}(x)$ is monogenic. Hence, $d$ is squarefree, which completes the proof in this direction. Conversely, suppose that $u=1$ and $d$ as in \eqref{Eq:Newf} is squarefree. Let $q$ be a prime divisor of $\Delta(f_{1,w})=d$. Then, from \eqref{Eq:Newf}, conditions \eqref{JKS:I1}, \eqref{JKS:I3} and \eqref{JKS:I5} of Theorem \ref{Thm:JKS} are easily seen to be satisfied. If $q\mid w$, then $q=3$ from \eqref{Eq:Newf}, and $3^3\mid d$, contradicting the fact that $d$ is squarefree, which completes the proof of \eqref{Eq:Mono1}. Next, let $\FF_1$ be the set as defined in \eqref{FF1}. First, note that the fact that every element of $\FF_1$ is such that $\Gal_{\Q}(f_{1,w})\simeq S_3$, and that the class number of the unique quadratic subfield $\Q(\sqrt{d})$ of the splitting field of $f_{1,w}(x)$ is divisible by $3$, follows immediately from Theorem \ref{Thm:KM}. Let $G(t):=4t^3-27\in \Z[t]$. It is easy to verify that $G(t)$ is irreducible over $\Q$.
Then, it follows from a result of Erd\H{o}s \cite{Erdos} that there exist infinitely many integers $z$ such that $G(z)$ is squarefree. (In fact, there exist infinitely many primes $p$ such that $G(p)$ is squarefree \cite{Helfgott}.) Consequently, there exist infinitely many integers $w$ such that $d$ is squarefree, and hence, $\FF_1$ is infinite. Finally, to see that no two elements of $\FF_1$ generate the same cubic field, we assume, by way of contradiction, that $f_{1,w_1}(x)\ne f_{1,w_2}(x)$ are elements of $\FF_1$ such that $K_1=\Q(\theta_1)$ and $K_2=\Q(\theta_2)$ are equal, where $f_{1,w_i}(\theta_i)=0$. Then, since $f_{1,w_i}(x)$ is monogenic by \eqref{Eq:Mono1}, we have that $\Delta(f_{1,w_i})=\Delta(K_i)$, and so it follows from \eqref{Eq:Dis-Dis} that \begin{equation}\label{Eq:Disequation} 4w_1^3-27=4w_2^3-27. \end{equation} However, \eqref{Eq:Disequation} forces $w_1=w_2$, contradicting the fact that $f_{1,w_1}(x)\ne f_{1,w_2}(x)$, which completes the proof of the theorem. \end{proof} \section{Proof of Theorem \ref{Thm:Main2}} \begin{proof} Since $f_{a,b}(x):=x^3-2^{2a}x-3^{b-1}$, we have from Theorem \ref{Thm:Swan} that \[\Delta(f_{a,b})=\delta=2^{6a+2}-3^{2b+1}.\] We first determine the values of $(a,b)$ for which $f_{a,b}(x)$ is irreducible over $\Q$. If $f_{a,b}(x)$ is reducible over $\Q$, then $f_{a,b}(r)=0$ for some $r\in \Z$, so that $r=\pm 3^c$ for some integer $c\ge 0$ by the Rational Root Theorem. Suppose that $r=-3^c$. Then \begin{align} \nonumber f_{a,b}(-3^c)=0 \Longrightarrow \ & (-3^c)\left(3^{2c}-2^{2a}\right)=3^{b-1}\\ \nonumber\Longrightarrow \ & 3^{2c}-2^{2a}=-3^{b-1-c}\\ \nonumber\Longrightarrow \ & 3^{2c}+3^{b-1-c}=2^{2a}\\ \label{Root} \Longrightarrow \ & 3^{b-1-c}(3^{3c-b+1}+1)=2^{2a} \quad \mbox{or} \quad 3^{2c}(1+3^{b-1-3c})=2^{2a}. \end{align} From the first possibility in \eqref{Root}, we deduce that $c=b-1$. Thus, we get the equation $2^{2a}-3^{2(b-1)}=1$, which has no solutions by Mih\u{a}ilescu's theorem \cite{M}.
For the second possibility in \eqref{Root}, we conclude that $c=0$. Hence, this possibility yields the equation $2^{2a}-3^{b-1}=1$, which has only the solution $(a,b)=(1,2)$, again by Mih\u{a}ilescu's theorem \cite{M}. Note then that $r=-1$ and \[f_{1,2}(x)=(x+1)(x^2-x-3).\] A similar argument shows that no solutions arise from assuming that $r=3^c$. Consequently, $f_{a,b}(x)$ is irreducible over $\Q$ if and only if $(a,b)\ne (1,2)$. We now prove \eqref{Eq:Mono2}. With $A=-2^{2a}$, $B=-3^{b-1}$ and $q$ a prime divisor of $\Delta(f_{a,b})=\delta$, we see that only condition \eqref{JKS:I5} is applicable in Theorem \ref{Thm:JKS}. Hence, we conclude that $f_{a,b}(x)$ is monogenic if and only if $q^2\nmid \delta$ for all prime divisors $q$ of $\delta$. That is, $f_{a,b}(x)$ is monogenic if and only if $\delta$ is squarefree, which completes the proof of \eqref{Eq:Mono2}. Consider next the set $\FF_2$, as defined in \eqref{FF2}. Observe that $\FF_1\cap \FF_2=\varnothing$ since $b\ge 2$. To see that no two elements of $\FF_2$ can generate the same cubic extension, we assume, by way of contradiction, that \[K_1=\Q(\theta_1)=\Q(\theta_2)=K_2,\] for some $(a_1,b_1)\ne (a_2,b_2)$, where $f_{a_i,b_i}(\theta_i)=0$. Since each $f_{a_i,b_i}(x)$ is monogenic, then we have from \eqref{Eq:Dis-Dis} that $\delta_1=\delta_2$, where \[\delta_i:=2^{6a_i+2}-3^{2b_i+1}.\] Without loss of generality, we can assume that $a_1>a_2$. Then, by rearranging the equation $\delta_1=\delta_2$, we see that \[2^{6a_2+2}\left(2^{6(a_1-a_2)}-1\right)=3^{2b_2+1}\left(3^{2(b_1-b_2)}-1\right),\] which implies that \begin{equation}\label{Eq:2equations} 3^{2(b_1-b_2)}-2^{6a_2+1}=1 \quad \mbox{and} \quad 2^{6(a_1-a_2)}-3^{2b_2+1}=1. \end{equation} It is then easy to see that neither equation in \eqref{Eq:2equations} has a solution by Mih\u{a}ilescu's theorem \cite{M}. Hence, no two trinomials in $\FF_2$ generate the same cubic field. 
For each $f_{a,b}(x)\in \FF_2$, we define \begin{gather} \label{uwd} u:=3^{2b-2}, \quad w:=2^{2a}, \quad d:=4uw^3-27u^2=u\delta\\ \nonumber \mbox{and}\\ \label{g} g_{a,b}(x):=x^3-uwx-u^2=x^3-2^{2a}3^{2b-2}x-3^{4b-4}. \end{gather} Note that \begin{align*} 3^{3b-3}f_{a,b}(x/3^{b-1})&=3^{3b-3}\left((x/3^{b-1})^3-2^{2a}(x/3^{b-1})-3^{b-1}\right)\\ &=x^3-2^{2a}3^{2b-2}x-3^{4b-4}\\ &=g_{a,b}(x), \end{align*} and \[\Delta(g_{a,b})=3^{6b-6}\left(2^{6a+2}-3^{2b+1}\right)=u^2d=u^3\delta=u^3\Delta(f_{a,b}),\] by Theorem \ref{Thm:Swan}. Moreover, $g_{a,b}(x)$ is irreducible over $\Q$ if and only if $(a,b)\ne (1,2)$, and in this case $f_{a,b}(x)$ and $g_{a,b}(x)$ generate the same cubic field. However, $g_{a,b}(x)$ is not monogenic, since condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} is not satisfied with the prime $q=3$. We see from \eqref{uwd} that $\gcd(u,w)=1$, $d$ is not a square (since $\delta<0$) and $3\nmid w$. Hence, it follows from Theorem \ref{Thm:KM} that $\Gal_{\Q}(f_{a,b})\simeq \Gal_{\Q}(g_{a,b})\simeq S_3$ and that $3\mid h_K$, where $h_K$ is the class number of the imaginary quadratic field $K=\Q(\sqrt{d})=\Q(\sqrt{\delta})$. Furthermore, since $\delta=2^{2(3a+1)}-3^{2b+1}<0$ and $(3a+1,2b+1)\ne (2,3)$, it follows from Theorem \ref{Thm:K} that $(2b+1)\mid h_K$. Consequently, we deduce that $(6b+3)/\gcd(3,2b+1)$ divides $h_K$, which completes the proof of the theorem. \end{proof} \begin{rem} We point out that in the proof of Theorem \ref{Thm:Main2} a more elementary approach, such as in \cite{H}, will suffice for every appeal to Mih\u{a}ilescu's theorem \cite{M}. \end{rem} \section{Proof of Theorem \ref{Thm:Main3}} \begin{proof} In Theorem \ref{Thm:Swan}, we have \begin{equation}\label{Eq:AB} A=-N(N-1)^{b} \quad \mbox{and}\quad B=-(N-1), \end{equation} so that \[\Delta(f_{N,b})=N^N(N-1)^{N-1}((N-1)^{bN}+1)=N^N(N-1)^{N-1}\delta.\] We first point out some important facts. Suppose that $q$ is a prime dividing $\gcd(A,\delta)$.
Then, $q\mid N$ so that \[\delta\equiv (-1)^{bN}+1 \equiv 2\pmod{q},\] since $2\mid N$. But $\delta\equiv 0 \pmod{q}$, and therefore, $q=2$. Since $N\equiv 2 \pmod{4}$, we see that $\delta\equiv 2 \pmod{4}$. Consequently, \begin{equation}\label{Eq:Fact} 2\mid\mid \delta\quad \mbox{and}\quad \gcd(A,\delta)=2. \end{equation} To establish \eqref{Eq:Mono3}, we let $q$ be a prime divisor of $\Delta(f_{N,b})$, and we use \eqref{Eq:AB} and Theorem \ref{Thm:JKS}. Observe that $q\mid A$ and $q\mid B$ if and only if $q\mid (N-1)$. Hence, condition \eqref{JKS:I1} of Theorem \ref{Thm:JKS} is satisfied for all prime divisors $q$ of $N-1$ if and only if $q^2\nmid (N-1)$, or equivalently, $N-1$ is squarefree. Next, observe that $q\mid A$ and $q\nmid B$ if and only if $q\mid N$. If $q=2$, then since $N\equiv 2 \pmod{4}$, we have $2\mid \mid N$, and in condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS}, we see that \begin{gather*} A_2=\frac{-N(N-1)^{b}}{2} \equiv 1 \pmod{2}\\ \mbox{and}\\ B_1=\frac{(N-1)^2-(N-1)}{2}=\frac{(N-1)(N-2)}{2}\equiv 0 \pmod{2}. \end{gather*} Hence, \[A_2(-BA_2^N-(-B_1)^N)\equiv 1 \pmod{2},\] so that condition \eqref{JKS:I2} is satisfied for $q=2$. Suppose then that $q\ne 2$ with $q^j\mid \mid N$ and write $N=q^jn$, where $q\nmid n$. Then \[A_2=\frac{-N(N-1)^b}{q}\equiv (-1)^{b+1}q^{j-1}n \pmod{q},\] \begin{align*} B_1&=\frac{(q^jn-1)^{q^j}-(q^jn-1)}{q}\\ &=\frac{(-1)^{q^j}+q^{2j}n(-1)^{q^j-1}+\sum_{k=2}^{q^j}\binom{q^j}{k}q^{jk}n^k(-1)^{q^j-k}-(q^jn-1)}{q}\\ &=q^{j-1}n(q^j-1)+\sum_{k=2}^{q^j}\binom{q^j}{k}q^{jk-1}n^k(-1)^{q^j-k}\\ &\equiv q^{j-1}n \pmod{q}, \end{align*} and therefore, \[A_2(-BA_2^N-(-B_1)^N)\equiv (-1)^{b+2}2q^{(j-1)(N+1)}n^N \not \equiv 0 \pmod{q}\] if and only if $j=1$. Thus, condition \eqref{JKS:I2} of Theorem \ref{Thm:JKS} is satisfied for all odd prime divisors $q$ of $N$ if and only if $N$ is squarefree. Since $q\nmid A$ and $q\mid B$ is impossible, we see that condition \eqref{JKS:I3} is not applicable.
If $q\nmid AB$, then $q\ne 2$ and $q\mid \delta$. Hence, condition \eqref{JKS:I5} is satisfied for all prime divisors $q\nmid AB$ if and only if $q^2\nmid \delta$, which is equivalent to $\delta$ being squarefree by \eqref{Eq:Fact}. Thus, we have established \eqref{Eq:Mono3}. Suppose that the monogenic trinomials $f_{N,b_1}(x)$ and $f_{N,b_2}(x)$ generate the same $N$th-degree extension of $\Q$. That is, $\Q(\alpha_1)=\Q(\alpha_2)$, where $f_{N,b_1}(\alpha_1)=f_{N,b_2}(\alpha_2)=0$. Then, by \eqref{Eq:Dis-Dis}, we have that \[(N-1)^{b_1N}+1=(N-1)^{b_2N}+1,\] which implies that $b_1=b_2$ and $f_{N,b_1}(x)=f_{N,b_2}(x)$. Finally, it follows from Theorem \ref{Thm:AC} that $bN/2$ divides the class number of the real quadratic field $\Q(\sqrt{\delta})$. \end{proof} \begin{rem} We point out that if $N-1$ is squarefree with $N\ge 6$, then $f_{N,b}(x)=x^N-N(N-1)^bx-(N-1)$ is $p$-Eisenstein for every prime divisor $p$ of $N-1$, and is hence irreducible over $\Q$. \end{rem} \section{Proof of Theorem \ref{Thm:Main4}} \begin{proof} In Theorem \ref{Thm:Swan}, we have \begin{equation}\label{Eq:AB4} A=-1 \quad \mbox{and}\quad B=-(N-1)N^b, \end{equation} so that \[\Delta(f_{N,b})=(N-1)^{N-1}(1-N^{(b+1)N-b})=(N-1)^{N-1}\delta.\] To establish \eqref{Eq:Mono4}, let $q$ be a prime divisor of $\Delta(f_{N,b})$. Using \eqref{Eq:AB4}, we proceed through Theorem \ref{Thm:JKS} to derive necessary and sufficient criteria to ensure the monogenicity of $f_{N,b}(x)$. Observe first that $q\mid A$ is impossible so that conditions \eqref{JKS:I1} and \eqref{JKS:I2} of Theorem \ref{Thm:JKS} are not applicable. If $q\nmid A$ and $q\mid B$, then $q$ must divide $N-1$ since $q\mid \Delta(f_{N,b})$. We see in condition \eqref{JKS:I3} of Theorem \ref{Thm:JKS} that $A_1=0$ and $B_2=-(N-1)N^b/q$. Thus, $q\nmid B_2$ if and only if $q^2\nmid (N-1)$. Hence, condition \eqref{JKS:I3} is satisfied for all prime divisors $q$ of $N-1$ if and only if $N-1$ is squarefree. If $q\nmid AB$, then $q\mid \delta$.
Hence, condition \eqref{JKS:I5} is satisfied for all prime divisors $q\nmid AB$ if and only if $q^2\nmid \delta$, which is equivalent to $\delta$ being squarefree. Since $\delta$ being squarefree implies that $N-1$ is squarefree, we have established \eqref{Eq:Mono4}. The proof that no two monogenic trinomials $f_{N,b}(x)$ generate the same $N$th-degree extension of $\Q$ is similar to the corresponding proof in Theorem \ref{Thm:Main3}, and we omit the details. Finally, it follows from Theorem \ref{Thm:Murty} that the ideal class group of the imaginary quadratic field $\Q(\sqrt{\delta})$ contains an element of order $(b+1)N-b$. \end{proof} \section{Examples to Illustrate the Theorems} All computations were done using Sage. \subsection{Theorem \ref{Thm:Main1}} For $\abs{w}\le 10$, we give in Table \ref{T:1} the values of $w$ and $d$ for trinomials $f_{1,w}(x)$ in $\FF_1$, and the class numbers $h_K$ of the unique quadratic subfield $K=\Q(\sqrt{d})$ of the splitting field of $f_{1,w}(x)$. \subsection{Theorem \ref{Thm:Main2}} For the seven smallest squarefree values of $\abs{\delta}$ with $1\le a\le 3$, $2\le b \le 6$ and $\delta<0$, we give in Table \ref{T:2} the values of $a$, $b$, $\delta$, the class number $h$ of the imaginary quadratic field $\Q(\sqrt{\delta})$, and the value of $n:=(6b+3)/\gcd(3,2b+1)$. \subsection{Theorem \ref{Thm:Main3}} For the two smallest squarefree values of $\delta$, such that $N(N-1)$ is squarefree, we give in Table \ref{T:3} the values of $N$, $b$, $\delta$, the class number $h$ of the real quadratic field $\Q(\sqrt{\delta})$, and $n:=bN/2$. \subsection{Theorem \ref{Thm:Main4}} For the four smallest squarefree values of $\abs{\delta}$, we give in Table \ref{T:4} the values of $N$, $b$, $\delta$, the class number $h$ of the imaginary quadratic field $\Q(\sqrt{\delta})$, and $n:=(b+1)N-b$.
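The monogenicity verdicts underlying these examples can be reproduced outside of Sage. The following self-contained Python sketch (function names are ours) implements Swan's discriminant formula (Theorem \ref{Thm:Swan}) and the four conditions of Theorem \ref{Thm:JKS}; irreducibility of the input trinomial is assumed, and the factorization step uses naive trial division, so it is only suitable for small examples.

```python
def swan_disc(N, A, B):
    """Discriminant of x^N + A*x + B by Swan's formula."""
    return (-1) ** (N * (N - 1) // 2) * (
        N**N * B ** (N - 1) - (-1) ** N * (N - 1) ** (N - 1) * A**N)

def prime_factors(n):
    """Distinct prime factors of |n| by trial division (small n only)."""
    n, ps, d = abs(n), [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def exact_power(n, q):
    """The exponent j with q^j || n."""
    j = 0
    while n % q == 0:
        n //= q
        j += 1
    return j

def is_monogenic_trinomial(N, A, B):
    """Monogenicity test for an *irreducible* x^N + A*x + B via the
    Jakhar--Khanduja--Sangwan conditions (irreducibility is assumed)."""
    D = swan_disc(N, A, B)
    for q in prime_factors(D):
        if A % q == 0 and B % q == 0:          # condition (1)
            if B % q**2 == 0:
                return False
        elif A % q == 0:                       # condition (2)
            j = exact_power(N, q)
            A2, B1 = A // q, (B + (-B) ** (q**j)) // q
            if not ((A2 % q == 0 and B1 % q != 0)
                    or (A2 * (-B * A2**N - (-B1) ** N)) % q != 0):
                return False
        elif B % q == 0:                       # condition (3)
            l = exact_power(N - 1, q)
            A1, B2 = (A + (-A) ** (q**l)) // q, B // q
            if not ((A1 % q == 0 and B2 % q != 0)
                    or (A1 * (-A * A1 ** (N - 1) - (-B2) ** (N - 1))) % q != 0):
                return False
        else:                                  # condition (4)
            if D % q**2 == 0:
                return False
    return True
```

For instance, $f_{1,4}(x)=x^3-4x-1$ (with $d=229$) and $f_{6,1}(x)=x^6-30x-5$ are found to be monogenic, while $g_{1,3}(x)=x^3-324x-6561$ from the proof of Theorem \ref{Thm:Main2} is not.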
\begin{table}[h] \begin{center} \begin{tabular}{c|ccccccccccc} $w$ & $-10$ & $-7$ & $-5$ & $-4$ & $-1$ & 1 & 4 & 5 & 7 & 8 & 10 \\ $d$ & $-4027$ & $-1399$ & $-527$ & $-283$ & $-31$ & $-23$ & 229 & 473 & 1345 & 2021 & 3973 \\ $h_K$ & 9 & 27 & 18 & 3 & 3 & 3 & 3 & 3 & 6 & 3 & 6 \end{tabular} \end{center} \caption{Values of $w$ and $d$ for $f_{1,w}(x)\in \FF_1$ and class numbers of $\Q(\sqrt{d})$} \label{T:1} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{c|ccccccc} $a$ & 1 & 2 & 1 & 2 & 1 & 3 & 2\\ $b$ & 3 & 4 & 4 & 5 & 5 & 6 & 6\\ $\delta$ & $-1931$ & $-3299$ & $-19427$ & $-160763$ & $-176891$ & $-545747$ & $-1577939$\\ $h$ & 21 & 27 & 27 & 66 & 132 & 273 & 624\\ $n$ & 21 & 9 & 9 & 33 & 33 & 39 & 39 \end{tabular} \end{center} \caption{Values of $a$, $b$, $\delta$, class numbers $h$ of $\Q(\sqrt{\delta})$ and $n$} \label{T:2} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{ccccc} $N$ & $b$ & $\delta$ & $h$ & $n$\\ \hline 6 & 1 & $15626$ & 24 & 3\\ 6 & 2 & $244140626$ & 1248 & 6 \end{tabular} \end{center} \caption{Values of $N$, $b$, $\delta$, class numbers $h$ of $\Q(\sqrt{\delta})$ and $n$} \label{T:3} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{ccccc} $N$ & $b$ & $\delta$ & $h$ & $n$\\ \hline 3 & 2 & $-2186$ & 42 & 7\\ 3 & 3 & $-19682$ & 108 & 9\\ 3 & 4 & $-177146$ & 396 & 11\\ 7 & 1 & $-96889010406$ & 196768 & 13 \end{tabular} \end{center} \caption{Values of $N$, $b$, $\delta$, class numbers $h$ of $\Q(\sqrt{\delta})$ and $n$} \label{T:4} \end{table} \begin{thebibliography}{99} \bibitem{AC} N. C. Ankeny and S. Chowla, {\em On the divisibility of the class number of quadratic fields}, Pacific J. Math. {\bf 5} (1955), 321--324. \bibitem{Cohen} H. Cohen, \emph{A Course in Computational Algebraic Number Theory}, {Springer-Verlag}, 2000. \bibitem{Erdos} P. Erd\H{o}s, {\em Arithmetical properties of polynomials}, J. London Math. Soc. {\bf 28} (1953), 416--425. \bibitem{Helfgott} H.
Helfgott, {\em Square-free values of $f(p)$, $f$ cubic}, Acta Math. {\bf 213} (2014), no. 1, 107--135. \bibitem{H} A. Herschfeld, {\em The equation $2^x-3^y=d$}, Bull. Amer. Math. Soc. {\bf 42} (1936), no. 4, 231--234. \bibitem{JKS2} A. Jakhar, S. Khanduja and N. Sangwan, \emph{Characterization of primes dividing the index of a trinomial}, Int. J. Number Theory {\bf 13} (2017), no. 10, 2505--2514. \bibitem{K} Y. Kishi, {\em Note on the divisibility of the class number of certain imaginary quadratic fields}, Glasgow Math. J. {\bf 51} (2009), 187--191; {\em Corrigendum}, Glasgow Math. J. {\bf 52} (2010), no. 1, 207--208. \bibitem{KM} Y. Kishi and K. Miyake, {\em Parametrization of the quadratic fields whose class numbers are divisible by three}, J. Number Theory {\bf 80} (2000), no. 2, 209--217. \bibitem{M} P. Mih\u{a}ilescu, {\em Primary cyclotomic units and a proof of Catalan's conjecture}, J. Reine Angew. Math. {\bf 572} (2004), 167--195. \bibitem{Murty} M. Ram Murty, {\em The abc conjecture and exponents of class groups of quadratic fields}, Number theory (Tiruchirapalli, 1996), 85--95, Contemp. Math., {\bf 210}, Amer. Math. Soc., Providence, RI, 1998. \bibitem{Swan} R. Swan, \emph{Factorization of polynomials over finite fields}, Pacific J. Math. {\bf 12} (1962), 1099--1106. \end{thebibliography} \end{document}
2412.20483v1
http://arxiv.org/abs/2412.20483v1
Curvature, area and Gauss-Bonnet formula of the Moyal sphere
\documentclass[a4paper,12pt]{article} \usepackage[left=3.17cm,right=3.17cm,top=2.54cm, headheight=0.5cm,headsep=0.54cm,bottom=2.54cm,footskip=0.79cm ]{geometry} \usepackage{amsmath,amssymb,dsfont,cite} \usepackage{graphicx,hyperref} \usepackage{amsthm,float,subfigure} \newtheorem{prop}{Property} \newtheorem{thm}{Theorem} \begin{document} \title{Curvature, area and Gauss-Bonnet formula of the Moyal sphere} \author{Han-Liang Chen$^{\ast}$, Bing-Sheng Lin$^{\dag}$\\ \small School of Mathematics, South China University of Technology, \small Guangzhou 510641, China\\ \small $^{\ast}$Email: [email protected]\\ \small $^{\dag}$Email: [email protected]\\ } \date{\today} \maketitle \begin{abstract} We studied some geometric properties of the Moyal sphere. Using the conformal metric of the sphere in ordinary space and the matrix basis, we calculated the scalar curvature, total curvature integral and area of the Moyal sphere. We found that when the noncommutative parameter approaches 0, the scalar curvature and area of the Moyal sphere return to those of the ordinary sphere. As the noncommutative parameter increases, the area of the Moyal sphere will decrease and eventually approach 0. We found that the total curvature integral of the two-dimensional Moyal sphere still satisfies the usual Gauss-Bonnet formula and does not depend on the noncommutative parameter. We also calculated the approximate expression of the conformal metric with a constant curvature and obtained the corresponding correction function. In addition, we also studied a type of generalized deformed Moyal sphere with two noncommutative parameters and obtained similar results. \end{abstract} \section{Introduction}\label{sec1} The idea of noncommutative spacetime in physics dates back to 1947\cite{Snyder}. In the 1980s, Connes formulated the mathematically rigorous framework of noncommutative geometry\cite{Connes}.
Using the language of noncommutative algebras, noncommutative geometry provides new perspectives on other branches of mathematics, such as operator algebras, differential geometry, algebraic geometry, K-theory, cyclic cohomology, number theory, measure theory, etc. It also has many novel and useful applications in physics\cite{Seiberg, Douglas, Chaichian, Ettefaghi, lhc, Calmet, Couch, Muhuri, lh, Frob, Maris}. Many mathematicians have studied noncommutative generalizations of differential geometry from different perspectives\cite{Madore, Kaygun, Tretkoff, Fathizadeh, Dabrowski, Brzezinski, Arnlind, Floricel, Sciandra}. For example, Connes studied the geometric properties of noncommutative manifolds by studying the noncommutative form of the so-called spectral triples corresponding to Riemannian manifolds. Some authors defined calculus on noncommutative algebras and then used the classical formulas to define and calculate the scalar curvature on some noncommutative spaces\cite{Khalkhali,Wilson}. Moyal space is one of the simplest noncommutative spaces and has many important applications in mathematics and physics\cite{Gayral}. We will study some geometric properties of the Moyal sphere. As is well known, the calculation of geometric quantities on a smooth manifold, such as its volume, curvature, vector fields, differential forms, tensor fields and vector bundles, depends on operations on the smooth functions defined on it. The noncommutative geometric ideas based on this are described in detail in Ref.~\cite{Connes}. In Ref.~\cite{Eckstein}, the authors introduced the conformal metric $g=\frac{1}{h^{2}}(dx^{1}dx^{1}+dx^{2}dx^{2})$ into the two-dimensional Moyal space. Using the Moyal matrix basis and the orthonormal system method, they obtained the scalar curvature formula, and proved that there is a family of constant curvature metrics. The authors called the Moyal space with such constant curvature metrics the Moyal sphere.
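For orientation, it is worth recalling the commutative counterpart of such conformal metrics: on $\mathbb{R}^2$, the conformal factor $4/(1+r^2)^2$ gives the stereographic metric of the unit sphere, whose total area is $4\pi$; this is the value recovered from the Moyal-sphere area in the commutative limit. The following small numerical sketch in Python (ours, purely illustrative) checks this by integrating the radial area density with Simpson's rule:

```python
import math

def sphere_area_numeric(R=200.0, n=20000):
    """Area of R^2 with the conformal metric 4/(1+r^2)^2 (dx^2 + dy^2):
    integrate the radial density 8*pi*r/(1+r^2)^2 over [0, R] with the
    composite Simpson rule (n must be even)."""
    f = lambda r: 8.0 * math.pi * r / (1.0 + r * r) ** 2
    h = R / n
    s = f(0.0) + f(R)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * f(k * h)
    return s * h / 3.0

# The exact truncated integral is 4*pi*R^2/(1+R^2), which tends to 4*pi.
assert abs(sphere_area_numeric() - 4.0 * math.pi) < 1e-3
```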
For the metric in a specific case, the authors of Ref.~\cite{Eckstein} also derived the same Gauss-Bonnet formula as that of the ordinary two-dimensional sphere. Since the classical sphere $\mathbf{S}^{n}$ is homeomorphic to $\mathbb{R}^{n}$ after removing a pole, and has a simple conformal metric $g=\frac{4}{(1+r^{2})^{2}}(dx^{1}dx^{1}+dx^{2}dx^{2}+\dots+dx^{n}dx^{n})$, one can use similar methods to generalize some results in Ref.~\cite{Eckstein} to high-dimensional cases. In this paper, we will study the generalizations of high-dimensional spheres in Moyal space from two different perspectives. On the one hand, we will study the $2M$-dimensional Moyal space with the spherical metric $g=\frac{4A^{4}}{(r^{2}+A^{2})^{2}}(dx^{1}dx^{1}+dx^{2}dx^{2}+\dots+dx^{2M}dx^{2M})$, and on the other hand, we will also study the Moyal space assigned a conformal metric with constant curvature similar to the classical sphere. These two noncommutative spaces both return to the classical sphere when the noncommutative parameter approaches 0, so they can be regarded as generalizations of the ordinary sphere in Moyal space, and we also call them \textit{Moyal spheres}. In this paper we will calculate the area, curvature and total curvature integral of the Moyal spheres in certain dimensions, and study the conformal metric of the Moyal sphere with constant curvature. This paper is organized as follows: In Sec.~\ref{sec2}, we review the definitions and properties of the Moyal star product and matrix basis. In Sec.~\ref{sec3}, we derive the curvature expression of the Moyal space. In Sec.~\ref{sec4}, we calculate the scalar curvature of the two-dimensional Moyal sphere. In Sec.~\ref{sec5}, we calculate the Gauss-Bonnet formula of the two-dimensional Moyal sphere, and find that the result is the same as that of the ordinary two-dimensional sphere. This implies that the Moyal star product structure of the smooth function space does not change some topological properties of the two-dimensional sphere.
In Sec.~\ref{sec6}, we calculate the scalar curvature expression of the four-dimensional Moyal sphere. In Sec.~\ref{sec7}, we calculate the area of any even-dimensional Moyal sphere and of a type of deformed Moyal sphere. We find that the area of the Moyal sphere decreases as the noncommutative parameter increases. When the noncommutative parameter approaches infinity, the area of the Moyal sphere approaches $0$. In Sec.~\ref{sec8}, we study the conformal metric with constant curvature in the Moyal space, and give an approximate expression of the conformal factor. The last section contains the conclusions. \section{Moyal star product and matrix basis}\label{sec2} The Moyal $*$-product in the space of complex-valued functions on $\mathbb{R}^{2M}$ (where $M$ is a positive integer) is defined as\cite{Gayral}: \begin{equation*} (f*g)(x):=(\pi \theta)^{-2M}\int_{\mathbb{R}^{2M}\times \mathbb{R}^{2M}}f(y)g(z)e^{\frac{2\mathbf{i}}{\theta}(x-y)\cdot\Lambda(x-z)}d^{2M}yd^{2M}z, \end{equation*} where the noncommutative parameter $\theta$ is a real number, the antisymmetric matrix $\Lambda=I_{M}\otimes \varepsilon_2$, $I_{M}$ is the $M\times M$ identity matrix, and $\varepsilon_2$ is the antisymmetric matrix $\varepsilon_2 =\begin{pmatrix} 0&1 \\ -1 & 0 \end{pmatrix}$. The smooth function space $C^{\infty}(\mathbb{R}^{2M})$ forms an algebra with respect to the Moyal star product, called the Moyal algebra. The corresponding noncommutative space is the so-called Moyal space, and its coordinate variables $\{x_{1},x_{2},\dots,x_{2M}\}$ satisfy the commutation relations: \begin{equation} \left[x_{2j-1},x_{2j}\right]_*:=x_{2j-1}*x_{2j}-x_{2j}*x_{2j-1} =-\left[x_{2j},x_{2j-1}\right]_*=\mathbf{i}\theta,\label{ms1} \end{equation} where $j=1,\dots,M$, and all other commutators vanish.
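For polynomials, the integral form of the $*$-product agrees with the standard asymptotic (Groenewold) expansion $f*g=\sum_{k\ge 0}\frac{1}{k!}\left(\frac{\mathbf{i}\theta}{2}\right)^{k}P^{k}(f,g)$, which terminates, so the commutation relation (\ref{ms1}) can be checked symbolically. The following sketch (Python with sympy; the helper name \texttt{moyal} and the truncation order are ours) verifies $[x_{1},x_{2}]_{*}=\mathbf{i}\theta$ on $\mathbb{R}^{2}$, together with associativity on sample polynomials:

```python
import sympy as sp

x1, x2, theta = sp.symbols('x1 x2 theta')

def moyal(f, g, order=6):
    # Groenewold-series form of the Moyal *-product on R^2;
    # for polynomials the series terminates, so a finite 'order' is exact.
    total = sp.Integer(0)
    for k in range(order + 1):
        Pk = sum(sp.binomial(k, j) * (-1)**j
                 * sp.diff(f, x1, k - j, x2, j)
                 * sp.diff(g, x1, j, x2, k - j)
                 for j in range(k + 1))
        total += (sp.I * theta / 2)**k / sp.factorial(k) * Pk
    return sp.expand(total)

# coordinate commutator, as in (ms1): [x1, x2]_* = i*theta
comm = moyal(x1, x2) - moyal(x2, x1)
assert sp.simplify(comm - sp.I * theta) == 0

# associativity spot check on sample polynomials
p, q, s = x1**2, x1 * x2, x2**3
assert sp.simplify(moyal(moyal(p, q), s) - moyal(p, moyal(q, s))) == 0
```

The truncation order only needs to exceed the polynomial degrees involved; for Schwartz functions the expansion is asymptotic rather than convergent.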
The Schwartz space $\mathcal{S}(\mathbb{R}^{2M})$ equipped with the $*$-product is a non-unital, associative, involutive Fr\'echet algebra $(\mathcal{S}(\mathbb{R}^{2M}),*)$, and it satisfies the following properties: \begin{prop} Let $f,g\in \mathcal{S}(\mathbb{R}^{2M})$, and $x=(x_{1},x_{2},\dots,x_{2M})$. Then\cite{Gayral} \begin{itemize} \item[(i)]$(f*g)(x)\in \mathcal{S} (\mathbb{R}^{2M})$. \item[(ii)]When two functions depend on disjoint sets of variables, their Moyal $*$-product reduces to the ordinary pointwise product, for example: $f(x_{1},x_{2})*g(x_{3},x_{4})=f(x_{1},x_{2})g(x_{3},x_{4})$. \item[(iii)]$\partial^{j}(f*g)=\partial^{j}f*g+f*\partial^{j}g,\ (j=1,2,\dots,2M)$. \item[(iv)]$\int_{\mathbb{R}^{2M}}(f*g)(x)d^{2M}x=\int_{\mathbb{R}^{2M}}f(x)g(x)d^{2M}x$. \end{itemize} \end{prop} Next, we introduce a set of functions on the Moyal space called the matrix basis $\left \{ f_{mn}\right \} _{m,n\in \mathbb{N}^{M}}$. First, we consider a function in the Schwartz space $\mathcal{S}(\mathbb{R}^{2M})$: $f_{00}=2^{M}e^{-\frac{1}{\theta}\sum_{j=1}^{2M}x_{j}^{2}}$. Let $m=(m_{1},m_{2},\dots,m_{M})$, and use the multi-index notations $\left | m\right | =\sum_{j=1}^{M}m_{j}$, $m!=\prod_{j=1}^{M}m_{j}!$, $\delta_{mn}=\prod_{j=1}^{M}\delta_{m_{j}n_{j}}$. The matrix basis $\left \{ f_{mn}\right \} _{m,n\in \mathbb{N}^{M}}$ can be defined as: \begin{equation*} f_{mn}=\frac{1}{\sqrt{\theta^{\left | m\right |+\left | n\right |}m!n!}}\bar{a}^{m}*f_{00}*a^{n}, \end{equation*} where $a^{n}=a_{1}^{n_{1}}\dots a_{M}^{n_{M}}$, and the annihilation and creation functions are $a_{l}=\frac{1}{\sqrt{2}}(x_{l}+\mathbf{i}x_{l+M})$ and $\bar{a}_{l}=\frac{1}{\sqrt{2}}(x_{l}-\mathbf{i}x_{l+M})$, respectively. \begin{prop} The matrix basis $\left \{ f_{mn}\right \} _{m,n\in \mathbb{N}^{M}}$ satisfies the following properties: \begin{itemize} \item[(i)]$f_{mn}*f_{kl}=\delta_{nk}f_{ml}$. \item[(ii)]$\bar{f}_{mn}=f_{nm}$.
\item[(iii)]$\int_{\mathbb{R}^{2M}}f_{mn} d^{2M}x=(2\pi \theta)^{M}\delta_{mn}$. \end{itemize} \end{prop} It is easy to see that $\left \{ f_{mn}\right \} _{m,n\in \mathbb{N}^{M}}$ is an orthogonal basis of the Schwartz space $\mathcal{S} (\mathbb{R}^{2M})$ under the $L^{2}$ norm. The matrix basis on $\mathcal{S} (\mathbb{R}^{2})$ has the following differential properties: \begin{prop} Consider the differential operators on $\mathcal{S} (\mathbb{R}^{2})$: $\partial=\frac{1}{\sqrt{2}}(\partial_{x_{1}}-\mathbf{i}\partial_{x_{2}}),\bar{\partial}=\frac{1}{\sqrt{2}}(\partial_{x_{1}}+\mathbf{i}\partial_{x_{2}})$. We have\cite{Eckstein} \begin{align*} \partial f_{mn} &= \sqrt{\frac{n}{\theta}}f_{m,n-1}-\sqrt{\frac{m+1}{\theta}}f_{m+1,n}, \\ \bar{\partial}f_{mn} &= \sqrt{\frac{m}{\theta}}f_{m-1,n}-\sqrt{\frac{n+1}{\theta}}f_{m,n+1}. \end{align*} \end{prop} The above differential formulas are valid for $m,n \ge 0$ (when a subscript is negative, the corresponding term is set to $0$). It is easy to obtain the following partial derivative formulas for the matrix basis, as well as the action of the Laplace operator on it: \begin{align*} \partial_{x_{1}}f_{mn} &=\frac{1}{\sqrt{2}}(\partial+\bar{\partial})f_{mn}\\ &=\frac{1}{\sqrt{2}}\left( \sqrt{\frac{n}{\theta}}f_{m,n-1}- \sqrt{\frac{m+1}{\theta}}f_{m+1,n}+\sqrt{\frac{m}{\theta}}f_{m-1,n}-\sqrt{\frac{n+1}{\theta}}f_{m,n+1}\right), \\ \partial_{x_{2}}f_{mn} &=\frac{1}{\sqrt{2}\mathbf{i}}(\bar{\partial}-\partial)f_{mn}\\ &= \frac{1}{\sqrt{2}\mathbf{i}}\left( \sqrt{\frac{n}{\theta}}f_{m,n-1}-\sqrt{\frac{m+1}{\theta}}f_{m+1,n}-\sqrt{\frac{m}{\theta}}f_{m-1,n}+\sqrt{\frac{n+1}{\theta}}f_{m,n+1}\right) , \\ \triangle f_{mn} &=2\partial\bar{\partial}f_{mn}\\ &=\frac{2}{\theta}\left[-(m+n+1)f_{mn}+\sqrt{(m+1)(n+1)}f_{m+1,n+1}+\sqrt{mn}f_{m-1,n-1}\right].
\end{align*} In addition, in the two-dimensional case (with $r^{2}=x_{1}^{2}+x_{2}^{2}$), the diagonal matrix basis $\{f_{mm}\}_{m=0}^{\infty}$ also satisfies the following very useful formulas: \begin{equation*} \sum_{m}f_{mm}=1 ,\qquad \sum_{m}mf_{mm}=\frac{1}{2}\left(\frac{r^{2}}{\theta}-1 \right). \end{equation*} \section{Curvature of the Moyal space}\label{sec3} For an $n$-dimensional Riemannian manifold $M$, one can choose a local orthonormal frame $\left\{e_{i}\right\}$ in a local coordinate system. Using the properties of the Levi-Civita connection $\nabla$, one can obtain the following expression for the curvature tensor: \begin{align*} R(e_{i},e_{j},e_{i},e_{j})&=\langle R(e_{i},e_{j})e_{i},e_{j}\rangle\\ &=\langle\nabla_{e_{j}}\nabla_{e_{i}}e_{i}-\nabla_{e_{i}}\nabla_{e_{j}}e_{i}+\nabla_{[e_{i},e_{j}]}e_{i},e_{j}\rangle \\ &=e_{j}c_{jii}+e_{i}c_{ijj}-\frac{1}{4}\sum_{k}(c_{kji}+c_{jik}+c_{kij})(c_{jik}+c_{ikj}+c_{jki})\\ &\qquad-\sum_{k}c_{kii}c_{kjj}+ \frac{1}{2}\sum_{k}c_{ijk}(c_{jki}+c_{kij}+c_{jik}), \end{align*} where $R(X,Y,Z,W)$ is the Riemann curvature tensor, $\langle \cdot ,\cdot \rangle$ is the Riemannian metric of the manifold, $[e_{i},e_{j}]=\displaystyle \sum_{k=1}^{n} c_{ijk}e_{k}$, and the $c_{ijk}$ are the structure constants of the commutators. The scalar curvature $S$ is: \begin{align*} S&=\sum_{i,j=1}^{n}R(e_{i},e_{j},e_{i},e_{j})\\ &=\sum_{i,j=1}^{n}\bigg[e_{j}c_{jii}+e_{i}c_{ijj}-\sum_{k}c_{kii}c_{kjj}-\frac{1}{4}\sum_{k}(c_{kji}+c_{jik}+c_{kij})(c_{jik}+c_{ikj}+c_{jki})\\ &\qquad +\frac{1}{2}\sum_{k}c_{ijk}(c_{jki}+c_{kij}+c_{jik})\bigg]\\ &=2\sum_{i,j=1}^{n}e_{i}c_{ijj}-\sum_{i,j,k=1}^{n}c_{kii}c_{kjj}-\frac{1}{2}\sum_{i,j,k=1}^{n}c_{ijk}c_{ikj}-\frac{1}{4}\sum_{i,j,k=1}^{n}c_{ijk}c_{ijk}. \end{align*} Considering the symmetry of the summation indices, and using the Einstein summation convention, one obtains the following scalar curvature formula: \begin{equation*} S=2e_{i}c_{ijj}-c_{kii}c_{kjj}-\frac{1}{4}c_{ijk}c_{ijk}-\frac{1}{2}c_{ijk}c_{ikj}.
\end{equation*} In the Moyal space, ordinary multiplications of functions are replaced by Moyal $*$-products. Similar to the result in Ref.~\cite{Eckstein}, the scalar curvature of the Moyal space is: \begin{equation}\label{scf} S_\theta=2e_{i}*c_{ijj}-c_{kii}*c_{kjj}-\frac{1}{4}c_{ijk}*c_{ijk}-\frac{1}{2}c_{ijk}*c_{ikj}. \end{equation} Due to the symmetry of the sums, inserting the $*$-product does not cause any ambiguity in this definition. Consider the conformal metric $g=h^{-2}(dx^{1}dx^{1}+dx^{2}dx^{2}+\dots+dx^{n}dx^{n})$, where $h$ is a smooth positive function. We will work with the local orthonormal frame $\left\{e_{i}=h\frac{\partial}{\partial x^{i}}\right\}$. Note that the scalar curvature (\ref{scf}) only contains the structure constants and their derivatives. So in order to obtain the scalar curvature, one only needs to calculate the structure constants with respect to this orthonormal frame: \begin{align*} [e_{i},e_{j}]_*&=h*\partial_{i}(h*\partial_{j})-h*\partial_{j}(h*\partial_{i})\\ &=h*\partial_{i}(h)*h_*^{-1}*(h*\partial_{j})-h*\partial_{j}(h)*h_*^{-1}*(h*\partial_{i})\\ &=h*\partial_{i}(h)*h_*^{-1}*e_{j}-h*\partial_{j}(h)*h_*^{-1}*e_{i}\\ &=\sum_{k=1}^{n} c_{ijk}e_{k}, \end{align*} where $h_*^{-1}$ is the inverse of $h$ with respect to the Moyal $*$-product, $h*h_*^{-1}=h_*^{-1}*h=1$. Note that $c_{ijk}\ne 0$ only when $i\ne j$ and $k=i$ or $j$, and we have $$c_{iji}=-h*\partial_{j}(h)*h_*^{-1}, \qquad c_{ijj}=h*\partial_{i}(h)*h_*^{-1},\qquad(i\ne j).$$ Substituting the above expressions into Eq.(\ref{scf}), one gets the scalar curvature expression of the $n$-dimensional Moyal space (with $n=2M$ even): \begin{align}\label{sss} S_\theta&=2(n-1)h_*^{2}*\partial_{ii}(h)*h_*^{-1}-(n^{2}-3n+2)h*\partial_{i}(h)_*^{2}*h_*^{-1}\nonumber\\ &\quad -2(n-1)h_*^{2}*\partial_{i}(h)*h_*^{-1}*\partial_{i}(h)*h_*^{-1}.
\end{align} Here we have used the Einstein summation convention, and the powers and inverses of functions in the above equation are taken in the sense of the Moyal $*$-product. It is easy to see that for the metric $g=h_*^{-2}(dx^{1}dx^{1}+dx^{2}dx^{2}+\dots+dx^{n}dx^{n})$, this generalization of the classical scalar curvature formula (\ref{scf}) is well defined (by the symmetry of the summation indices, interchanging the two factors of each star product in the sum leaves the result unchanged). \section{Scalar curvature of the two-dimensional Moyal sphere}\label{sec4} Let us recall that in ordinary space, the metric of a sphere of radius $A$ in stereographic coordinates is: \begin{equation*} g=\frac{4A^4}{(A^2+r^{2})^{2}}(dx^{1}dx^{1}+dx^{2}dx^{2}). \end{equation*} Therefore, the metric $g$ can be written as: \begin{equation*} g=\frac{1}{h^{2}}(dx^{1}dx^{1}+dx^{2}dx^{2}), \end{equation*} where $$h=\frac{A^2+r^{2}}{2A^2}=\sum_{m}\phi_{m}f_{mm},\qquad \phi_{m}=\frac{2\theta m+A^2+\theta}{2A^2},\quad(m=0,1,\dots).$$ After some straightforward calculations, one can obtain \begin{equation*} \triangle h=\frac{2}{\theta}\sum_{m}[-(2m+1)\phi_{m}+m\phi_{m-1}+(m+1)\phi_{m+1}]f_{mm}, \end{equation*} \begin{equation*} \partial_{i}(h)*h_*^{-1}*\partial_{i}(h) =\frac{1}{\theta}\sum_{m=0}\left[ \frac{m}{\phi_{m-1}}(\phi_{m}-\phi_{m-1})^{2}+\frac{m+1}{\phi_{m+1}}(\phi_{m+1}-\phi_{m})^{2}\right] f_{mm}.
\end{equation*} Substituting the above results into Eq.(\ref{sss}), one gets the scalar curvature of the two-dimensional Moyal sphere \begin{eqnarray} S_\theta&=&2[h_*^{2}*\partial_{ii}(h)*h_*^{-1}-h_*^{2}*\partial_{i}(h)*h_*^{-1}*\partial_{i}(h)*h_*^{-1}]\nonumber\\ &=&\frac{2}{\theta}\sum_{m}\phi_{m}\left[ \frac{m+1}{\phi_{m+1}}(\phi_{m+1}^{2}-\phi_{m}^{2})-\frac{m}{\phi_{m-1}}(\phi_{m}^{2}-\phi_{m-1}^{2})\right] f_{mm}\nonumber\\ &=&\sum_{m}\left[ \frac{2}{A^{2}}-\frac{2\theta(\theta+A^{2})}{A^{4}(2\theta m+A^{2}+3\theta)}+\frac{2\theta(A^{2}-\theta)}{A^{4}(2\theta m+A^{2}-\theta)}\right] f_{mm}\nonumber\\ &=&\frac{2}{A^{2}}-\frac{2\theta(A^{2}+\theta)}{A^{4}}(r^{2}+A^{2}+2\theta)_{*}^{-1}+\frac{2\theta(A^{2}-\theta)}{A^{4}}(r^{2}+A^{2}-2\theta)_{*}^{-1}.\label{sth} \end{eqnarray} Note that $S_\theta=S_{-\theta}$, so without loss of generality one can assume $\theta>0$. In order to avoid singularities, one can choose $\theta\in (0,\frac{A^2}{2})$. Obviously, when $\theta \to 0$, we have $S_\theta\to \frac{2}{A^2}$, which is the scalar curvature of the sphere $\mathbf{S}^{2}$ of radius $A$ in the classical case. When the noncommutative parameter $\theta$ is very small, using methods similar to those in Ref.~\cite{Eckstein}, one can keep terms up to order $\theta^{2}$.
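The passage from the second to the third line of (\ref{sth}) amounts to a rational identity in $m$, which can be spot-checked with exact arithmetic. The following sketch (Python; the sample values $\theta=1$, $A^{2}=5$ and the helper names are ours) verifies it for many $m$:

```python
from fractions import Fraction

def phi(m, theta, A2):
    # phi_m = (2*theta*m + A^2 + theta) / (2 A^2), as in Sec. 4
    return Fraction(2 * theta * m + A2 + theta, 2 * A2)

def second_line(m, theta, A2):
    # (2/theta) * phi_m * [ (m+1)/phi_{m+1} (phi_{m+1}^2 - phi_m^2)
    #                       - m/phi_{m-1} (phi_m^2 - phi_{m-1}^2) ]
    p, pp = phi(m, theta, A2), phi(m + 1, theta, A2)
    term = Fraction(m + 1) / pp * (pp**2 - p**2)
    if m > 0:                      # the m = 0 term carries a factor m = 0
        pm = phi(m - 1, theta, A2)
        term -= Fraction(m) / pm * (p**2 - pm**2)
    return Fraction(2, theta) * p * term

def third_line(m, theta, A2):
    # 2/A^2 - 2 theta (A^2+theta) / (A^4 (2 theta m + A^2 + 3 theta))
    #       + 2 theta (A^2-theta) / (A^4 (2 theta m + A^2 - theta))
    return (Fraction(2, A2)
            - Fraction(2 * theta * (A2 + theta),
                       A2**2 * (2 * theta * m + A2 + 3 * theta))
            + Fraction(2 * theta * (A2 - theta),
                       A2**2 * (2 * theta * m + A2 - theta)))

assert all(second_line(m, 1, 5) == third_line(m, 1, 5) for m in range(100))
```

Since the check uses exact rationals, agreement for these sample values is not subject to rounding error.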
The inverse of a radial function $f$ with respect to the Moyal $*$-product has the approximate expansion (in two dimensions): $$f_{*}^{-1}=f^{-1}+\frac{\theta^{2}}{4r}[(f')^{3}f^{-4}-f''f'f^{-3}].$$ Therefore, the scalar curvature of the Moyal sphere (\ref{sth}) can be approximately expressed as: \begin{align*} S_\theta&\approx \frac{2}{A^{2}}-\frac{2\theta(A^{2}+\theta)}{A^{4}(r^{2}+A^{2}+2\theta)} +\frac{2\theta(A^{2}-\theta)}{A^{4}(r^{2}+A^{2}-2\theta)}\\ &= \frac{2}{A^{2}}\cdot \eta_\theta(\lambda), \end{align*} where $\lambda=\frac{A^{2}}{\theta}$, and $$ \eta_\theta(\lambda)=1-\frac{\lambda+1}{\lambda\left( \frac{r^{2}}{\theta}+\lambda+2\right) } +\frac{\lambda-1}{\lambda\left( \frac{r^{2}}{\theta}+\lambda-2\right) }. $$ \begin{figure}[H] \centering \includegraphics[scale=0.4]{2dscalar.png} \caption{The curves of $\eta_\theta(\lambda)$ with different values of $\lambda$ ($\theta=0.1$).}\label{fig1} \end{figure} In Fig.~\ref{fig1}, we show the curves of $\eta_\theta(\lambda)$ for different values of $\lambda$, with noncommutative parameter $\theta=0.1$. One can see that for a given small $\theta$, when $\lambda \to \infty$, or equivalently, when the radius $A\to \infty$, the scalar curvature of the Moyal sphere approaches that of the ordinary sphere. When $\lambda \to 0$, or equivalently, when the radius $A\to 0$, the scalar curvature of the Moyal sphere differs significantly from that of the ordinary sphere for small $|r|$; but when $|r|$ is very large, the result again approaches that of the ordinary sphere. \section{Gauss-Bonnet formula for the Moyal sphere}\label{sec5} \begin{thm}[Gauss-Bonnet formula for compact surfaces] Let $M$ be a compact, oriented two-dimensional surface. Then \begin{equation*} \int_{M}Kd\sigma=2\pi \chi(M), \end{equation*} where $K$ is the Gaussian curvature of $M$, $d\sigma$ is the area element, and $\chi(M)$ is the Euler characteristic of $M$.
\end{thm} Since the scalar curvature of a two-dimensional surface satisfies $S=2K$, the Gauss-Bonnet formula can be written as: \begin{equation*} \int_{M}Sd\sigma=4\pi \chi(M). \end{equation*} In particular, the Euler characteristic of the two-dimensional sphere $\mathbf{S}^{2}$ is $\chi(\mathbf{S}^{2})=2$, and $\int_{\mathbf{S}^{2}}Sd\sigma=8\pi$. Now let us calculate the Gauss-Bonnet formula for the two-dimensional Moyal sphere. \begin{align*} \int_{\mathbf{S}^{2}}S_\theta*d\sigma &=\int_{\mathbb{R}^{2}}\sum_{m}\left[ \frac{2}{A^{2}}-\frac{2\theta(\theta+A^{2})}{A^{4}(2\theta m+A^{2}+3\theta)}+\frac{2\theta(A^{2}-\theta)}{A^{4}(2\theta m+A^{2}-\theta)}\right] f_{mm}*h_{*}^{-2} dxdy\\ &=\int_{\mathbb{R}^{2}}\left\{ \sum_{m}\left[ \frac{2}{A^{2}}-\frac{2\theta(\theta+A^{2})}{A^{4}(2\theta m+A^{2}+3\theta)}+\frac{2\theta(A^{2}-\theta)}{A^{4}(2\theta m+A^{2}-\theta)}\right] f_{mm}\right. \\ &\qquad\left. *\sum_{m}\frac{4A^{4}}{(2\theta m+A^{2}+\theta)^2}f_{mm}\right\} dxdy\\ &=\int_{\mathbb{R}^{2}}\sum_{m}\frac{8(2\theta A^{2}m+A^{4}-2\theta^{2}+\theta A^{2})}{(2\theta m+A^{2}+\theta)(2\theta m+A^{2}-\theta)(2\theta m+A^{2}+3\theta)}f_{mm}dxdy\\ &=2\pi\theta\sum_{m}\frac{8(2\theta A^{2}m+A^{4}-2\theta^{2}+\theta A^{2})}{(2\theta m+A^{2}+\theta)(2\theta m+A^{2}-\theta)(2\theta m+A^{2}+3\theta)}\\ &=8\pi. \end{align*} The above result shows that the Gauss-Bonnet formula for the two-dimensional Moyal sphere is independent of the value of the noncommutative parameter $\theta$: it is the same as that of the classical two-dimensional sphere $\mathbf{S}^{2}$ of radius $A$. Therefore, the Gauss-Bonnet integrals for two-dimensional Moyal spheres of different radii are all equal to $8\pi$. This means that the integral of the scalar curvature over the surface in the noncommutative sense is related only to the topological invariant (the Euler characteristic) of the surface, and is independent of the noncommutative parameter $\theta$.
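The final series above in fact telescopes, which makes the $\theta$-independence exact. A small sketch (Python; the antidifference $F$ below is ours, and $\theta=\frac{1}{10}$, $A^{2}=1$ are sample values) checks term by term that the sum equals $4/\theta$, so that $2\pi\theta\cdot 4/\theta=8\pi$:

```python
from fractions import Fraction as Fr

theta, A2 = Fr(1, 10), Fr(1)       # sample values with 0 < theta < A^2/2

def a(m):                          # shorthand a_m = 2*theta*m + A^2
    return 2 * theta * m + A2

def term(m):                       # summand of the series above
    return 8 * (A2 * a(m) + theta * A2 - 2 * theta**2) / (
        (a(m) + theta) * (a(m) - theta) * (a(m) + 3 * theta))

def F(m):                          # antidifference: term(m) = F(m) - F(m+1)
    return (4 * A2 * a(m) / theta - 4 * theta) / ((a(m) - theta) * (a(m) + theta))

assert all(term(m) == F(m) - F(m + 1) for m in range(500))
assert F(0) == 4 / theta           # hence 2*pi*theta * (sum over m) = 8*pi
```

Since $F(m)\to 0$ as $m\to\infty$, the partial sums converge to $F(0)=4/\theta$ exactly.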
This result is consistent with that in Ref.~\cite{Eckstein}. But it should be noted that an appropriate $\theta$ must be chosen here so that the inverse of the matrix basis series with respect to the Moyal star product exists. \section{Scalar curvature of the four-dimensional Moyal sphere}\label{sec6} Considering the conformal metric $g=h^{-2}(dx^{1}dx^{1}+dx^{2}dx^{2}+dx^{3}dx^{3}+dx^{4}dx^{4})$ of the 4-dimensional sphere, one can similarly assume $$h=\sum_{m}\phi_{m}f_{mm}(x_1,x_2)+\sum_{n}\psi_{n}g_{nn}(x_3,x_4)=\sum_{m,n=0}(\phi_{m}+\psi_{n})f_{mm}g_{nn}. $$ After some straightforward calculations, one can obtain $$ h_*^{-1}*\partial_{i}(h)^{2}=\frac{1}{\theta}\sum_{m,n=0}\frac{1}{\phi_{m}+\psi_{n}}[m(\phi_{m}-\phi_{m-1})^{2}+(m+1)(\phi_{m+1}-\phi_{m})^{2}]f_{mm}g_{nn}. $$ The scalar curvature of the four-dimensional Moyal space (\ref{sss}) is \begin{align*} S_\theta&=6[h^{2}*\partial_{ii}(h)*h^{-1}-h*\partial_{i}(h)^{2}*h^{-1}-h^{2}*\partial_{i}(h)*h^{-1}*\partial_{i}(h)*h^{-1}]\\ &=\frac{6}{\theta}\sum_{m,n=0}(\phi_{m}+\psi_{n})\Bigg\{[-(4m+2)\phi_{m}+2m\phi_{m-1}+2(m+1)\phi_{m+1}]\\ &\qquad-\bigg[m(\phi_{m}-\phi_{m-1})^{2}\bigg(\frac{1}{\phi_{m}+\psi_{n}}+\frac{1}{\phi_{m-1}+\psi_{n}}\bigg)\\ &\quad\qquad+(m+1)(\phi_{m+1}-\phi_{m})^{2}\bigg(\frac{1}{\phi_{m}+\psi_{n}}+\frac{1}{\phi_{m+1}+\psi_{n}}\bigg)\bigg]\Bigg\}f_{mm}g_{nn}\\ &\quad+\frac{6}{\theta}\sum_{m,n=0}(\phi_{m}+\psi_{n})\Bigg\{[-(4n+2)\psi_{n}+2n\psi_{n-1}+2(n+1)\psi_{n+1}]\\ &\qquad-\bigg[n(\psi_{n}-\psi_{n-1})^{2}\bigg(\frac{1}{\phi_{m}+\psi_{n}}+\frac{1}{\psi_{n-1}+\phi_{m}}\bigg)\\ &\quad\qquad+(n+1)(\psi_{n+1}-\psi_{n})^{2}\bigg(\frac{1}{\phi_{m}+\psi_{n}}+\frac{1}{\psi_{n+1}+\phi_{m}}\bigg)\bigg]\Bigg\}f_{mm}g_{nn}. \end{align*} For the conformal metric of the four-dimensional unit sphere, we have $$ h=\frac{r^{2}+1}{2}=\sum_{m,n}\frac{2\theta (m+n)+2\theta+1}{2}f_{mm}g_{nn}. $$ Therefore, one can take $\phi_{m}=\frac{4\theta m+2\theta+1}{4}$, $\psi_{n}=\frac{4\theta n+2\theta+1}{4}$.
So we have \begin{align*} S_\theta&=\sum_{m,n=0}\left[12+12\theta^{2}\left( \frac{m+n+2}{2\theta(m+n)+4\theta+1} -\frac{m+n}{2\theta(m+n)+1}\right)\right] f_{mm}g_{nn}\\ &=12+12\theta^{2}\left[2\theta+2(1+\theta+\sigma^{2})\left(\frac{\rho^{2}}{\theta}+1\right)_{*}^{-1}\right]_{*}^{-1}-12\theta^{2}\left[2\theta+(1-2\theta+\sigma^{2})\left(\frac{\rho^{2}}{\theta}-1\right)_{*}^{-1}\right]_{*}^{-1}\\ &\quad+12\theta^{2}\left[2\theta+2(1+\theta+\rho^{2})\left(\frac{\sigma^{2}}{\theta}+1\right)_{*}^{-1}\right]_{*}^{-1}-12\theta^{2}\left[2\theta+(1-2\theta+\rho^{2})\left(\frac{\sigma^{2}}{\theta}-1\right)_{*}^{-1}\right]_{*}^{-1}\\ &\approx 12+12\theta^{2}\left\{\left[2\theta+2(1+\theta+\sigma^{2})\left(\frac{\rho^{2}}{\theta}+1\right)^{-1}\right]^{-1}-\left[2\theta+(1-2\theta+\sigma^{2})\left(\frac{\rho^{2}}{\theta}-1\right)^{-1}\right]^{-1}\right.\\ &\quad\left.+\left[2\theta+2(1+\theta+\rho^{2})\left(\frac{\sigma^{2}}{\theta}+1\right)^{-1}\right]^{-1}-\left[2\theta+(1-2\theta+\rho^{2})\left(\frac{\sigma^{2}}{\theta}-1\right)^{-1}\right]^{-1}\right\}, \end{align*} where $\rho^{2}=x_{1}^{2}+x_{2}^{2}, \ \sigma^{2}=x_{3}^{2}+x_{4}^{2}$. The above expression is accurate up to the second-order term in $\theta$. In order to avoid singularities, one can choose $0<\theta <\frac{1}{4}$. Obviously, when $\theta \to 0$, we have $S_\theta\to 12$, which is the scalar curvature of the classical four-dimensional sphere. Taking $\theta=0.01$, the values of the curvature $S_\theta$ are shown in Fig.~\ref{fig2}. \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[scale=0.55]{curvaturesurface.png}}\qquad \subfigure[]{\includegraphics[scale=0.4]{densityplot.png}} \caption{The values of scalar curvature $S_\theta$ with respect to the variables $\rho$ and $\sigma$ ($\theta=0.01$).} \label{fig2} \end{figure} From these two figures, one can see that the values of the curvature are highly symmetric in $\rho$ and $\sigma$.
The curvature changes rapidly and is large near the origin and the coordinate axes, and it is small along the lines $\rho=\pm\sigma$. However, the values are all very close to 12, the curvature of the classical four-dimensional sphere. In addition, one can also find that in the sense of the second-order approximation, the curvature is not constant on the level sets where $\rho^{2}+\sigma^{2}=r^{2}$ is constant. \section{Area of the Moyal sphere}\label{sec7} In general, the conformal metric of the $2M$-dimensional Moyal sphere can be expressed as $g=h_{*}^{-2}(dx^{1}dx^{1}+\dots+dx^{2M}dx^{2M})$, where $$ h=\frac{r^{2}+A^{2}}{2A^{2}} =\sum_{m_{1},m_{2},\dots,m_{M}}\frac{2\theta (m_{1}+m_{2}+\dots+m_{M})+A^{2}+M\theta}{2A^{2}}f_{m_{1}m_{1}}f_{m_{2}m_{2}}\dots f_{m_{M}m_{M}}. $$ By virtue of the formula for the classical sphere, one can calculate the area of the $2M$-dimensional Moyal sphere: \begin{align}\label{s2m} &\int_{\mathbf{S}^{2M}}dvol_{*}=\int_{\mathbb{R}^{2M}}h_{*}^{-2M}d^{2M}x\nonumber\\ &=\int_{\mathbb{R}^{2M}}\sum_{m_{1},m_{2},\dots,m_{M}}\frac{(2A^{2})^{2M}}{\left[2\theta (m_{1}+m_{2}+\dots+m_{M})+A^{2}+M\theta\right]^{2M}}f_{m_{1}m_{1}}f_{m_{2}m_{2}}\dots f_{m_{M}m_{M}}d^{2M}x\nonumber\\ &=\left(2\pi \theta\right)^{M}\sum_{m_{1},m_{2},\dots,m_{M}}\frac{(2A^{2})^{2M}}{\left[2\theta (m_{1}+m_{2}+\dots+m_{M})+A^{2}+M\theta\right]^{2M}}\nonumber\\ &=\frac{(2\pi A^{4})^{M}}{\theta^{M}}\sum_{k=1}^{\infty}\frac{C_{k-2+M}^{M-1}}{\left(k+\frac{A^{2}}{2\theta}+\frac{M}{2}-1\right)^{2M}}\nonumber\\ &=\frac{2\pi^{M+\frac{1}{2}}A^{2M}}{\Gamma\left(M+\frac{1}{2}\right) }\cdot\frac{2^{M-1}\lambda^{M}}{\sqrt{\pi}}\Gamma\left(M+\frac{1}{2}\right)\sum_{k=0}^{\infty}\frac{C_{k-1+M}^{M-1}}{\left(k+\frac{\lambda}{2}+\frac{M}{2}\right)^{2M}}\nonumber\\ &=\frac{2\pi^{M+\frac{1}{2}}A^{2M}}{\Gamma\left(M+\frac{1}{2}\right) }\cdot\gamma_{M}(\lambda), \end{align} where $\Gamma(x)$ is the Gamma function, and \[
\gamma_{M}(\lambda)=\frac{2^{M-1}\lambda^{M}}{\sqrt{\pi}}\Gamma\left(M+\frac{1}{2}\right)\sum_{k=0}^{\infty}\frac{C_{k-1+M}^{M-1}}{\left( k+\frac{\lambda}{2}+\frac{M}{2}\right)^{2M}}. \] For a given $M$, $\gamma_{M}(\lambda)$ can also be expressed as a sum of polygamma functions. For example, for $M=1, 2$, we have \[ \gamma_{1}(\lambda)=\frac{\lambda}{2} \Psi^{(1)}\left(\frac{\lambda+1}{2}\right),\qquad \gamma_{2}(\lambda)=\frac{\lambda^{2}}{8}\left[-6\Psi^{(2)}\left(1+\frac{\lambda}{2}\right)-\lambda\Psi^{(3)}\left(1+\frac{\lambda}{2}\right)\right], \] where $\Psi^{(n)}(x)$ is a polygamma function. Therefore, the areas of the 2-dimensional and 4-dimensional Moyal spheres are \[ \int_{\mathbf{S}^{2}}dvol_{*}=4\pi A^{2}\cdot \gamma_{1}(\lambda) =2\pi A^{2}\lambda \Psi^{(1)}\left(\frac{\lambda+1}{2}\right), \] \[ \int_{\mathbf{S}^{4}}dvol_{*}=\frac{8\pi^{2}A^{4}}{3}\cdot\gamma_{2}(\lambda) =\frac{\pi^{2}A^{4}\lambda^{2}}{3}\left[-6\Psi^{(2)}\left(1+\frac{\lambda}{2}\right)-\lambda\Psi^{(3)}\left(1+\frac{\lambda}{2}\right)\right]. \] \begin{figure}[H] \centering \includegraphics[scale=0.35]{area.png} \caption{$\gamma_{M}(\lambda)$ with different values of $M$}\label{fig3} \end{figure} Fig.~\ref{fig3} shows the graphs of $\gamma_{M}(\lambda)$ for $M=1,2,4,6$. It is easy to see that for any $M$ and a given radius $A$, when $\theta \to 0^{+}$, or equivalently, $\lambda\to \infty$, we have $\gamma_{M}(\lambda)\to 1$, and the result (\ref{s2m}) returns to the area of the ordinary $2M$-dimensional sphere $\frac{2\pi^{M+\frac{1}{2}}A^{2M}}{\Gamma\left(M+\frac{1}{2}\right) }$. When the noncommutative parameter $\theta \to \infty$, or equivalently, $\lambda\to 0^+$, we have $\gamma_{M}(\lambda)\to 0$. That means that the area of the $2M$-dimensional Moyal sphere approaches $0$.
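The closed form $\gamma_{1}(\lambda)=\frac{\lambda}{2}\Psi^{(1)}\left(\frac{\lambda+1}{2}\right)$ follows from the series representation of the trigamma function, and can be cross-checked numerically. The following sketch (Python with scipy; the truncation length and tail estimate are our own choices) compares the defining series with the polygamma expression and checks the classical limit $\gamma_{1}(\lambda)\to 1$:

```python
import numpy as np
from scipy.special import polygamma

def gamma1_series(lam, terms=200000):
    # defining series: gamma_1(lam) = (lam/2) * sum_{k>=0} 1/(k+(lam+1)/2)^2
    k = np.arange(terms)
    s = np.sum(1.0 / (k + (lam + 1.0) / 2.0) ** 2)
    s += 1.0 / (terms + (lam + 1.0) / 2.0)   # integral estimate of the tail
    return lam / 2.0 * s

def gamma1_closed(lam):
    # closed form: gamma_1(lam) = (lam/2) * trigamma((lam+1)/2)
    return lam / 2.0 * polygamma(1, (lam + 1.0) / 2.0)

for lam in (0.5, 2.0, 10.0, 100.0):
    assert abs(gamma1_series(lam) - gamma1_closed(lam)) < 1e-6

# theta -> 0 (lambda -> infinity) recovers the classical area factor 1
assert abs(gamma1_closed(1e8) - 1.0) < 1e-6
```

The area of the two-dimensional Moyal sphere is then $4\pi A^{2}\gamma_{1}(\lambda)$, which visibly decreases to $0$ as $\lambda\to 0^{+}$.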
Analogous to the case of ordinary spaces, if we regard the region enclosed by the sphere as the corresponding higher-dimensional ball, then one sees that as the noncommutative parameter $\theta \to \infty$, the volume enclosed by the Moyal sphere also approaches $0$. Furthermore, let us consider a type of deformed Moyal space. For the $4n$-dimensional case, the deformed Moyal product is defined as \[ {f \star g = f \cdot e^{\frac{\mathbf{i}}{2}\sum_{i,j=1}^{4n} \Theta_{ij} \overset{\leftarrow}{\partial_i} \overset{\rightarrow}{\partial_j}}} \cdot g, \] where the noncommutative parameter matrix $\Theta$ is a $4n \times 4n$ antisymmetric matrix $ \Theta =\theta I_{2n}\otimes \varepsilon_2+\mu I_n\otimes \varepsilon_2\otimes \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}, $ and the parameter $\mu$ is a real number. With respect to the deformed Moyal product, the variables satisfy the following commutation relations: \begin{eqnarray} &&\left[x_{2j-1},x_{2j}\right]_{\star}=-\left[x_{2j},x_{2j-1}\right]_{\star}=\mathbf{i}\theta,\nonumber\\ &&\left[x_{4k},x_{4k-2}\right]_{\star}=-\left[x_{4k-2},x_{4k}\right]_{\star} =\left[x_{4k-3},x_{4k-1}\right]_{\star}=-\left[x_{4k-1},x_{4k-3}\right]_{\star}=\mathbf{i}\mu,\label{ms2} \end{eqnarray} where $j=1,\dots,2n$, $k=1,\dots,n$, and all other commutators vanish. Obviously, when $\mu=0$, this reduces to the ordinary $4n$-dimensional Moyal space (\ref{ms1}). One can consider the variable substitution $X'=DX$, where $X=(x_{1},x_{2},\dots,x_{4n})$, $X'=(x'_{1},x'_{2},\dots,x'_{4n})$, and the transformation matrix $D$ is \[ D=\sqrt{\frac{\theta + \sqrt{\theta^2 + \mu^2}}{2 \sqrt{\theta^2 +\mu^2}}} \left[I_{4n}+ \frac{\mu}{\theta + \sqrt{\theta^2 + \mu^2}} I_n\otimes\varepsilon_2\otimes \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\right].
\] The Moyal product now becomes \[ {f \ast' g = f\cdot e^{\frac{\mathbf{i}}{2}\sum_{i,j=1}^{4n} \Theta'_{ij} \overset{\leftarrow}{\partial'_i} \overset{\rightarrow}{\partial'_j}}} \cdot g, \] where $\Theta' = \theta' I_{2n}\otimes \varepsilon_2$. The variables $\{x'_{1},x'_{2},\dots,x'_{4n}\}$ satisfy the commutation relations: \begin{equation} \left[x'_{2j-1},x'_{2j}\right]_{*'}=-\left[x'_{2j},x'_{2j-1}\right]_{*'} =\mathbf{i} \theta', \qquad\theta'=\sqrt{\theta^2 + \mu^2}, \end{equation} where $j=1,\dots,2n$, and all other commutators vanish. Therefore, the area of the deformed $4n$-dimensional Moyal sphere (\ref{ms2}) is \begin{align*} \int_{\mathbf{S}^{4n}} dvol_{\star} & = \int_{\mathbb{R}^{4n}} h_{\star}^{-4n} d^{4n} x\\ & = \int_{{\mathbb{R}'}^{4n}} h_{\ast'}^{-4n} |D^{-1}| d^{4n} x' = \int_{{\mathbb{R}'}^{4n}} h_{\ast'}^{-4n} d^{4n} x'\\ &=\frac{(2\pi A^{4})^{2n}}{{\theta'}^{2n}}\sum_{k=0}^{\infty}\frac{C_{k+2n-1}^{2n-1}}{\left( k+\frac{A^{2}}{2\theta'}+n\right) ^{4n}}\\ &=\frac{2\pi^{2n+\frac{1}{2}}A^{4n}}{\Gamma\left(2n+\frac{1}{2}\right) }\cdot\gamma_{2n}(\lambda'), \end{align*} where $\lambda'=\frac{A^{2}}{\theta'}$. One can see that the area of the deformed $4n$-dimensional Moyal sphere (\ref{ms2}) depends only on the radius $A$ and on $\theta'=\sqrt{\theta^2 + \mu^2}$, not on $\theta$ or $\mu$ separately. Obviously, it has properties similar to those of the area of the ordinary $4n$-dimensional Moyal sphere (\ref{s2m}). \section{Constant curvature metrics of Moyal spaces}\label{sec8} Similar to Ref.~\cite{Eckstein}, here we will study the conformal metric with constant curvature of the 4-dimensional Moyal space, and calculate the expression of its conformal factor $h$.
In this case, the Moyal $*$-product is defined as: $$f(x_{1},x_{2},x_{3},x_{4})*g(x_{1},x_{2},x_{3},x_{4})=f\cdot e^{\frac{\mathbf{i}\theta}{2}(\overleftarrow{\partial_{1}}\overrightarrow{\partial_{2}}-\overleftarrow{\partial_{2}}\overrightarrow{\partial_{1}}+\overleftarrow{\partial_{3}}\overrightarrow{\partial_{4}}-\overleftarrow{\partial_{4}}\overrightarrow{\partial_{3}})}\cdot g.$$ Here we only consider the case where $f, g$ are both radial functions. Similarly, $f*g$ can be expanded up to the second-order term in $\theta$: $$ f*g =fg-\frac{\theta^{2}}{8r}\left( f''g'+f'g''+\frac{2}{r}f'g'\right) + O(\theta^{4}). $$ If $f$ is a radial function, then $f_{*}^{-1}$ is also a radial function. Ignoring terms of order higher than $\theta^{2}$, one gets \begin{equation*} f_{*}^{-1}=f^{-1}+\frac{\theta^{2}}{4r}\left[f^{-4}(f')^{3}-f^{-3}f'f''-\frac{1}{r}(f')^{2}f^{-3}\right]. \end{equation*} Consider the case where the scalar curvature of the 4-dimensional Moyal sphere is constant, that is, $$ S_\theta=6h_*^{2}*[\partial_{ii}(h)-h_*^{-1}\partial_{i}(h)^{2}-\partial_{i}(h)*h_*^{-1}*\partial_{i}(h)]*h_*^{-1}=C, $$ where $C$ is a constant. Then we have the constant curvature equation \begin{equation}\label{hh} h*[\partial_{ii}(h)-h_*^{-1}\partial_{i}(h)^{2}-\partial_{i}(h)*h_*^{-1}*\partial_{i}(h)]=\frac{C}{6}. \end{equation} Working to order $\theta^{2}$, we assume that $h=\frac{A^{2}+r^{2}}{2A^{2}}+\theta^{2}\epsilon(r)=h_0+\theta^{2}\epsilon(r)$, where $h_0=\frac{A^{2}+r^{2}}{2A^{2}}$; then $h_{*}^{-1}=h^{-1}-\frac{4\theta^{2}A^{4}}{(A^{2}+r^{2})^{4}}$. Substituting these into the constant curvature equation (\ref{hh}), after some straightforward calculations, one can obtain $$ \frac{2}{A^{2}}+\theta^{2}\left(\frac{A^{2}+r^{2}}{2A^{2}}\epsilon''(r)+\frac{3A^{2}-5r^{2}}{2A^{2}r}\epsilon'(r)+\frac{4}{A^{2}}\epsilon(r)+\frac{4}{A^{2}(A^{2}+r^{2})^{2}}\right)=\frac{C}{6}.
$$ If the curvature of the Moyal sphere is required to equal the curvature $C=\frac{12}{A^{2}}$ of the classical 4-dimensional sphere, then we obtain the equation: $$ \epsilon''(r)+\frac{3A^{2}-5r^{2}}{r(A^{2}+r^{2})}\epsilon'(r)+\frac{8}{A^{2}+r^{2}}\epsilon(r)+\frac{8}{(A^{2}+r^{2})^{3}}=0. $$ One can see that the error function $\epsilon(r)$ does not depend on the noncommutative parameter $\theta$. It is easy to find the general solution of the equation: \begin{align*} \epsilon(r)=&C_{1}(r^{2}-A^{2}) +\frac{C_{2}}{2r^{2}}\left(r^{6}-A^{2}r^{4}-17A^{4}r^{2}+A^{6}+12(r^{2}-A^{2})A^{2}r^{2}\ln r\right)\\ &-\frac{1}{30A^{6}r^{2}(A^{2}+r^{2})}\left(A^{6}-3A^{4}r^{2}+6A^{2}r^{4}+6(r^{4}-A^{4})r^{2}\ln\frac{r^2}{A^{2}+r^{2}}\right), \end{align*} where $C_{1}$, $C_{2}$ are integration constants. When $C_{2}=\frac{1}{15A^{8}}$, $\epsilon(r)$ is well defined on $[0,\infty)$. Furthermore, if $C_{1}=-\frac{13+12\ln A}{30A^6}$, then $\epsilon(0)=0$. \begin{figure}[H] \centering \includegraphics[scale=0.33]{constanthh.png} \caption{$\frac{h^{-2}(r)}{h_0^{-2}}$ with different radii $A$ ($\theta=0.1$)}\label{fig4} \end{figure} Fig.~\ref{fig4} shows the ratio $\frac{h^{-2}(r)}{h_0^{-2}}$ for different radii $A$ when $\theta=0.1$, $C=\frac{12}{A^{2}}$, $C_{1}=-\frac{13+12\ln A}{30A^6}$, $C_{2}=\frac{1}{15A^{8}}$. One can see that for a given sufficiently small $\theta$, the smaller the radius $A$ of the sphere, the greater the deviation of the constant curvature conformal metric of Moyal space from that of the classical case. Conversely, the larger the radius $A$ of the sphere, the closer the constant curvature conformal metric is to the result of the classical sphere. Similarly, one can also consider higher-dimensional cases.
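Before turning to higher dimensions, the general solution obtained above can be verified against the constant-curvature equation. A sympy sketch (our own verification, with the sample radius $A=1$ and the constants $C_{1}$, $C_{2}$ chosen as in the text; any $C_{1}, C_{2}$ would do, since those terms are homogeneous solutions) evaluates the residual numerically at a few radii:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A = 1                          # sample radius (so ln A = 0)
C1 = -sp.Rational(13, 30)      # C1 = -(13 + 12 ln A)/(30 A^6) at A = 1
C2 = sp.Rational(1, 15)        # C2 = 1/(15 A^8) at A = 1

eps = (C1 * (r**2 - A**2)
       + C2 / (2 * r**2) * (r**6 - A**2 * r**4 - 17 * A**4 * r**2 + A**6
                            + 12 * (r**2 - A**2) * A**2 * r**2 * sp.log(r))
       - (A**6 - 3 * A**4 * r**2 + 6 * A**2 * r**4
          + 6 * (r**4 - A**4) * r**2 * sp.log(r**2 / (A**2 + r**2)))
         / (30 * A**6 * r**2 * (A**2 + r**2)))

# residual of the constant-curvature equation for C = 12/A^2
residual = (sp.diff(eps, r, 2)
            + (3 * A**2 - 5 * r**2) / (r * (A**2 + r**2)) * sp.diff(eps, r)
            + 8 / (A**2 + r**2) * eps
            + 8 / (A**2 + r**2)**3)

for r0 in (sp.Rational(1, 2), sp.Integer(2), sp.Rational(7, 3)):
    assert abs(sp.N(residual.subs(r, r0), 30)) < 1e-15
```

A fully symbolic simplification of the residual is also possible but slower; pointwise high-precision evaluation already rules out any typo in the coefficients.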
For any $2m$-dimensional Moyal space (\ref{ms1}), ignoring terms of order higher than $\theta^{2}$, the approximate expansion of the Moyal product of radial functions $f, g$ is: \begin{equation*} f*g=fg-\frac{\theta^{2}}{8r}\left[f''g'+f'g''+\frac{2(m-1)}{r}f'g'\right]. \end{equation*} So the corresponding inverse with respect to the star product is: \begin{equation*} f_{*}^{-1}=f^{-1}+\frac{\theta^{2}}{4r}\left[f^{-4}(f')^{3}-f^{-3}f'f''-\frac{m-1}{r}(f')^{2}f^{-3}\right]. \end{equation*} Similar to the previous approach, one can assume $h=\frac{A^{2}+r^{2}}{2A^{2}}+\theta^{2}\epsilon(r)$. Substituting it into the corresponding constant curvature equation, it is not difficult to obtain the differential equation for the error function $\epsilon(r)$: \begin{equation*} \epsilon''(r)+\frac{A^{2}(2m-1)-(2m+1)r^{2}}{r(A^{2}+r^{2})}\epsilon'(r)+\frac{4m}{A^{2}+r^{2}}\epsilon(r)+\frac{8(2m-1)\left[A^{2}m+(m-2)r^{2}\right]}{A^{2}(A^{2}+r^{2})^{3}}=0. \end{equation*} Furthermore, one can also consider the constant curvature metric of the deformed $4n$-dimensional Moyal sphere (\ref{ms2}). The approximate expansion of the deformed Moyal product of radial functions $f,g$ is as follows: \begin{equation*} f\star g=fg-\frac{\theta^{2}+\mu^{2}}{8r}\left[f''g'+f'g''+\frac{2(2n-1)}{r}f'g'\right], \end{equation*} and \begin{equation*} f_{\star}^{-1}=f^{-1}+\frac{\theta^{2}+\mu^{2}}{4r}\left[f^{-4}(f')^{3}-f^{-3}f'f''-\frac{2n-1}{r}(f')^{2}f^{-3}\right]. \end{equation*} In this case, the approximate expression of the conformal metric factor $h$ can be assumed to be $h=\frac{A^{2}+r^{2}}{2A^{2}}+(\theta^{2}+\mu^2)\epsilon(r)$. Similarly, one can obtain the differential equation of the error function $\epsilon(r)$: \begin{equation*} \epsilon''(r)+\frac{A^{2}(4n-1)-(4n+1)r^{2}}{r(A^{2}+r^{2})}\epsilon'(r)+\frac{8n}{A^{2}+r^{2}}\epsilon(r)+\frac{16(4n-1)\left[A^{2}n+(n-1)r^{2}\right]}{A^{2}(A^{2}+r^{2})^{3}}=0.
\end{equation*} From the formulas for the error functions above, we can see that this type of deformed $4n$-dimensional Moyal space (\ref{ms2}) and the usual $4n$-dimensional Moyal space (\ref{ms1}) have the same error function $\epsilon(r)$. So for given noncommutative parameters $\theta, \mu$, their constant curvature conformal metrics have similar properties, which are also similar to those of the 4-dimensional Moyal sphere. Similarly, for given sufficiently small noncommutative parameters $\theta, \mu$, the smaller the radius $A$ of the Moyal sphere, the greater the deviation of its constant curvature conformal metric from that of the classical case; the larger the radius $A$ of the Moyal sphere, the closer its constant curvature conformal metric is to the result of the classical sphere. \section{Conclusions}\label{sec9} Introducing the Moyal product into the space $C^{\infty}(\mathbb{R}^{n})$ of smooth functions on the Euclidean space $\mathbb{R}^{n}$, one obtains the generalized function space $(C^{\infty}(\mathbb{R}^{n}),\ast)$ with a noncommutative product; the corresponding space is the noncommutative Moyal space. If we introduce on the Moyal space a metric similar to the spherical metric of ordinary space, we get a so-called Moyal sphere. In this paper, we studied the Moyal sphere from two different aspects. First, with the help of the conformal metric of the sphere in ordinary space, we introduced the Moyal product in the conformal factor and obtained the corresponding spherical metric in the noncommutative sense. We calculated the scalar curvature and area of the Moyal sphere in this case. We found that when the noncommutative parameter approaches 0, the Moyal sphere returns to an ordinary one. As the noncommutative parameter increases, the area of the Moyal sphere decreases. When the noncommutative parameter approaches infinity, the area of the Moyal sphere approaches 0.
Analogous to the case of ordinary space, if we consider the space enclosed by the sphere to be the volume of the corresponding higher-dimensional ball, then one can see that when the noncommutative parameter approaches infinity, the volume enclosed by the Moyal sphere also approaches 0. In addition, we found that the total curvature integral of the two-dimensional Moyal sphere exactly coincides with the Gauss-Bonnet formula of the two-dimensional Euclidean space, and the result does not depend on the noncommutative parameter $\theta$. This implies that the noncommutative algebraic structure of the smooth function space does not change some topological properties of the two-dimensional Moyal sphere. In the classical case, a sphere corresponds to a space of constant curvature. Therefore, we also studied the conformal metric of constant curvature of the Moyal space. We obtained the approximate expression of the constant curvature conformal metric of the Moyal space to second order in the noncommutative parameter. We found that for given sufficiently small noncommutative parameters, the smaller the radius $A$ of the Moyal sphere, the greater the deviation of its constant curvature conformal metric from that of the classical case; the larger the radius $A$ of the Moyal sphere, the closer its constant curvature conformal metric is to the result of the classical sphere. In addition, we also calculated the area and the constant curvature metric expression of a generalized deformed Moyal sphere with two noncommutative parameters, and obtained similar results. When these noncommutative parameters approach 0, the Moyal star product returns to the ordinary product, and these curvatures and areas return to the results for the ordinary sphere. These results are significant for the study of the mathematical structures and physical properties of noncommutative spaces. To our knowledge, these results have not been reported in the literature.
Further, we will study the Gauss-Bonnet-Chern formula for the 4-dimensional Moyal sphere; work in this direction is in progress. \section*{Acknowledgments} This work is partly supported by the National Natural Science Foundation of China (Grant No. 11911530750) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2019A1515011703). \begin{thebibliography}{100} \bibitem{Snyder} H. S. Snyder, ``Quantized space-time,'' {\it Phys. Rev.} {\bf 71}, 38 (1947). \bibitem{Connes} A. Connes, {\it Noncommutative geometry} (Academic Press, New York, 1994). \bibitem{Seiberg} N. Seiberg and E. Witten, ``String theory and noncommutative geometry,'' {\it J. High Energy Phys.} {\bf 09}, 032 (1999). \bibitem{Douglas} M. R. Douglas and N. A. Nekrasov, ``Noncommutative field theory,'' {\it Rev. Mod. Phys.} {\bf 73}, 977 (2001). \bibitem{Chaichian} M. Chaichian, P. Pre\v{s}najder, and A. Tureanu, ``New concept of relativistic invariance in noncommutative space-time: Twisted Poincar\'{e} symmetry and its implications,'' {\it Phys. Rev. Lett.} {\bf 94}, 151602 (2005). \bibitem{Ettefaghi} M. M. Ettefaghi and M. Haghighat, ``Massive neutrino in noncommutative space-time,'' {\it Phys. Rev. D} {\bf 77}, 056009 (2008). \bibitem{lhc} B. S. Lin, T. H. Heng, and W. Chen, ``Quantum field theory with a minimal length induced from noncommutative space,'' {\it Commun. Theor. Phys.} {\bf 61}, 605 (2014). \bibitem{Calmet} X. Calmet and C. Fritz, ``Inflation on a non-commutative space-time,'' {\it Phys. Lett. B} {\bf 747}, 406 (2015). \bibitem{Couch} J. Couch, S. Eccles, W. Fischler, and M.-L. Xiao, ``Holographic complexity and noncommutative gauge theory,'' {\it J. High Energy Phys.} {\bf 03}, 108 (2018). \bibitem{Muhuri} A. Muhuri, D. Sinha, and S. Ghosh, ``Entanglement induced by noncommutativity: anisotropic harmonic oscillator in noncommutative space,'' {\it Eur. Phys. J. Plus} {\bf 136}, 35 (2021). \bibitem{lh} B. S. Lin and T. H.
Heng, ``Connes spectral distance and nonlocality of generalized noncommutative phase spaces,'' {\it Eur. Phys. J. Plus} {\bf 137}, 899 (2022). \bibitem{Frob} M. B. Fr\"{o}b, A. Much, and K. Papadopoulos, ``Noncommutative geometry from perturbative quantum gravity,'' {\it Phys. Rev. D} {\bf 107}, 064041 (2023). \bibitem{Maris} V. Maris and J. C. Wallet, ``Gauge theory on $\rho$-Minkowski space-time,'' {\it J. High Energy Phys.} {\bf 2024}, 119 (2024). \bibitem{Madore} J. Madore, {\it An introduction to noncommutative differential geometry and its physical applications} (Cambridge University Press, Cambridge, 1999). \bibitem{Kaygun} A. Kaygun and M. Khalkhali, ``Hopf modules and noncommutative differential geometry,'' {\it Lett. Math. Phys.} {\bf 76}, 77 (2006). \bibitem{Tretkoff} A. Connes and P. Tretkoff, ``The Gauss-Bonnet theorem for the noncommutative two torus,'' in {\it Noncommutative geometry, arithmetic, and related topics,} edited by C. Consani and A. Connes (Johns Hopkins University Press, Maryland, 2012), pp. 141-158. \bibitem{Fathizadeh} F. Fathizadeh and M. Khalkhali, ``The Gauss-Bonnet theorem for noncommutative two tori with a general conformal structure,'' {\it J. Noncommut. Geom.} {\bf 6}, 457 (2012). \bibitem{Dabrowski} L. Dabrowski and A. Sitarz, ``Curved noncommutative torus and Gauss-Bonnet,'' {\it J. Math. Phys.} {\bf 54}, 013518 (2013). \bibitem{Brzezinski} T. Brzezi\'{n}ski, ``Noncommutative differential geometry of generalized Weyl algebras,'' {\it SIGMA} {\bf 12}, 059 (2016). \bibitem{Arnlind} J. Arnlind and M. Wilson, ``Riemannian curvature of the noncommutative 3-sphere,'' {\it J. Noncommut. Geom.} {\bf 11}, 507 (2017). \bibitem{Floricel} R. Floricel, A. Ghorbanpour, and M. Khalkhali, ``The Ricci curvature in noncommutative geometry,'' {\it J. Noncommut. Geom.} {\bf 13}, 269 (2019). \bibitem{Sciandra} A. Sciandra and T. Weber, ``Noncommutative differential geometry on crossed product algebras,'' {\it J. Algebra} {\bf 664}, 129 (2025).
\bibitem{Khalkhali} F. Fathizadeh and M. Khalkhali, ``Scalar curvature for the noncommutative two torus,'' {\it J. Noncommut. Geom.} {\bf 7}, 1145 (2013). \bibitem{Wilson} M. Wilson, ``Gauss-Bonnet-Chern type theorem for the noncommutative four-sphere,'' Doctoral dissertation, The University of Western Ontario, Canada (2016). \bibitem{Gayral} V. Gayral, J. M. Gracia-Bond\'{i}a, B. Iochum, T. Sch\"{u}cker, and J. C. V\'{a}rilly, ``Moyal planes are spectral triples,'' {\it Commun. Math. Phys.} {\bf 246}, 569 (2004). \bibitem{Eckstein} M. Eckstein, A. Sitarz, and R. Wulkenhaar, ``The Moyal sphere,'' {\it J. Math. Phys.} {\bf 57}, 112301 (2016). \end{thebibliography} \end{document}
\documentclass[11pt,reqno]{amsart} \usepackage[utf8]{inputenc} \usepackage{tikz-cd} \usepackage{pdfpages} \usepackage{graphicx} \usepackage{amssymb,amsmath, amscd, latexsym,amsfonts,bbm, amsthm,stmaryrd, enumerate,phaistos} \usepackage{yhmath} \usepackage{comment}\usepackage{todonotes}\usepackage[all]{xy} \usepackage[vcentermath]{youngtab} \usepackage{float} \renewcommand{\familydefault}{cmss} \usepackage{tabularx} \usepackage{amsmath, amssymb, latexsym, enumerate, graphicx,tikz, stmaryrd,pifont,ifsym} \usetikzlibrary{arrows,decorations,decorations.pathmorphing,positioning} \usepackage{mdframed} \usepackage{mdwlist} \usepackage[pdftex]{hyperref} \usepackage{amscd,mathrsfs,epic,empheq,float} \usepackage{bbm} \usepackage{cases} \usepackage[all]{xy} \def\theequation{\arabic{section}.\arabic{equation}} \usepackage{tikz} \setcounter{tocdepth}{1} \usepackage{tikz-cd} \newenvironment{defn}{\vspace{0.3cm}\par\noindent\refstepcounter{theorem}\begin{exafont}Definition \thetheorem.\end{exafont}\hspace{\labelsep}}{\vspace{0.3cm}\par} \theoremstyle{definition} \renewcommand{\familydefault}{cmss} \newcommand{\dist} {3mm} \newcommand{\Xn}{{\op{X}}_n} \newcommand{\cS}{\mathcal{S}} \newcommand{\mV}{\mathbb{V}} \newcommand{\bt}{\mathbf t} \newcommand{\bb}{\mathbf b} \newcommand{\bd}{\mathbf d} \newcommand{\oS}{\op{S}} \newcommand{\Func}{\op{Func}_n} \newcommand{\eu}{\op{eu}} \newcommand{\inc}{\op{inc}} \newcommand{\Supp}{\op{Supp}} \newcommand{\bw}{\op{w}} \newcommand{\ii}{\mathbf i} \newcommand{\RR}{\mathbb R} \newcommand{\CC}{\mathbb C} \newcommand{\QQ}{\mathbb Q} \newcommand{\ZZ}{\mathbb Z} \usepackage{pict2e} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amscd} \usepackage{amsmath} \def\bea{\begin{eqnarray}} \def\eea{\end{eqnarray}} \def\nn{\nonumber} \def\AA{\mathcal{A}} \def\ZZ{\mathbb{Z}} \def\CC{\mathbb{C}} \def\PP{\mathbb{P}} \def\RR{\mathbb{R}} \def\QQ{\mathbb{Q}} \def\GG{\mathbb{G}} \def\TT{\mathbb{T}} \def\kk{\Bbbk} \def\gg{\mathfrak{g}}
\usepackage{pict2e} \newtheorem{theorem}{Theorem}[section] \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{problem}[theorem]{Problem} \newenvironment{exafont}{\begin{bf}}{\end{bf}} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{conv}[theorem]{Conventions} \newtheorem{conjecture}[theorem]{Conjecture} \newcommand{\op}{\operatorname} \newcommand{\Mat}{\operatorname{Mat}} \newcommand{\mC}{\mathbb{C}} \newcommand{\ov}{\overline{\otimes}} \newcommand{\la}{\lambda} \newcommand{\bH}{{\bf H}} \newcommand{\bP}{{\bf P}} \newcommand{\End}{\op{End}} \newcommand{\cU}{\mathcal{U}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\mg}{\mathfrak{gl}_2} \newcommand{\modH}{{$\op{mod}$-$\bH$}} \begin{document} \author[V. Gorbounov]{V.~Gorbounov} \address{V.~G.: Faculty of Mathematics, National Research University Higher School of Economics, Usacheva 6, 119048 Moscow, Russia} \email{[email protected] } \author[A. Kazakov]{A.~Kazakov} \address{A.~K.: Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Russia, 119991, Moscow, GSP-1, 1 Leninskiye Gory, Main Building; Centre of Integrable Systems, P. G. Demidov Yaroslavl State University, Sovetskaya 14, 150003, Yaroslavl, Russia; Center of Pure Mathematics, Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russian Federation; Kazan Federal University, N.I. Lobachevsky Institute of Mathematics and Mechanics, Kazan, 420008, Russia} \email{[email protected]} \title{Electrical networks and data analysis in phylogenetics} \maketitle \begin{abstract} A classic problem in data analysis is studying the systems of subsets defined by either a similarity or a dissimilarity function on $X$ which is either observed directly or derived from a data set. 
For an electrical network there are two functions on the set of nodes, defined by the resistance matrix and the response matrix, either of which determines the network completely. We argue that these functions should be viewed as a similarity and a dissimilarity function on the set of nodes; moreover, they are related via the covariance mapping, also known as the Farris transform or the Gromov product. We will explore the properties of electrical networks from this point of view. It has been known for a while that the resistance matrix defines a metric on the nodes of an electrical network. Moreover, as was shown recently, for a circular electrical network this metric obeys the Kalmanson property. We will call such a metric an electrical Kalmanson metric. The main result of this paper is a new description of the electrical Kalmanson metrics in the set of all Kalmanson metrics in terms of the geometry of the positive Isotropic Grassmannian, whose connection to the theory of electrical networks was discovered earlier. One important area of applications where Kalmanson metrics are actively used is the theory of phylogenetic networks, which are a generalization of phylogenetic trees. Our results allow us to use in phylogenetics the powerful methods of reconstruction of the minimal graphs of electrical networks and possibly open the door for the methods of the theory of cluster algebras into data analysis. \end{abstract} \setcounter{tocdepth}{3} \tableofcontents MSC2020: 14M15, 82B20, 05E10, 05C50, 05C10, 92D15, 94C15, 90C05, 90C59, 05C12. Key words: Electrical networks, circular split metrics, Kalmanson metrics \section{Introduction} The theory of electrical networks goes back to the work of Gustav Kirchhoff around the mid-1800s, and since then it has been a source of remarkable achievements in combinatorics, algebra, geometry, mathematical physics, and electrical engineering.
An electrical network is a graph with positive weights (conductances) attached to the edges, and a chosen subset of the set of vertices which are called the boundary vertices or nodes. An important characteristic of an electrical network with only two nodes $i$ and $j$ is the effective resistance $R_{ij}$, that is, the voltage at node $i$ which, when node $j$ is held at zero volts, causes a unit current to flow through the circuit from node $i$ to node $j$. The effective resistance defines a metric on the set of nodes that is widely used in chemistry; see, for example, \cite{Kl}. For convenience, we organize the effective resistances $R_{ij}$ into a matrix $R$, setting $R_{ii}=0$ for all $i$. In the case where there are more than two nodes, there is an important generalization of $R_{ij}$. Given an electrical network with $n$ nodes, there is a linear endomorphism of the vector space of functions defined on the nodes, constructed as follows: for each such function, there is a unique extension of that function to all the vertices which satisfies Kirchhoff's current law at each interior vertex. This extension then gives the current $I$ in the network at the boundary vertices, defining a linear map which is called the Dirichlet-to-Neumann map or the network response. The matrix of this map is called the response matrix. It plays a key role in the theory and applications of electrical networks \cite{CIM}. The above two matrices determine each other, and moreover it is possible to reconstruct a planar circular electrical network if these matrices are known \cite{CIM}. A classic problem in data analysis is studying the systems of subsets defined by either a similarity or a dissimilarity function on a set $X$ which is either observed directly or derived from a data set. While the latter makes use of splits and split metrics, the key ingredients of the former are systems of clusters, subsets of $X$, and elementary similarity functions.
One can interpret splits as distinctive features and clusters as common features; see \cite{GandC} and \cite{MS} for an introduction to these ideas. We argue that the ``inverse'' of the response matrix and the resistance matrix should be viewed as a similarity and a dissimilarity function on the set of nodes of an electrical network, and as such they are related via the covariance mapping, also known as the Farris transform or the Gromov product; see Remark \ref{inverse} for more details. We will explore the properties of electrical networks from this point of view. In this paper we will work with the resistance matrix. The connection of the response matrix to data analysis will be presented in a future publication. In computational biology, species are naturally assigned collections of symbols from a given set, called characters, and one can construct a distance between two species (such as the Hamming distance) which records the proportion of characters where the two species differ. Such a record can be encoded by a real symmetric, nonnegative matrix called a dissimilarity matrix $M$. An important problem in phylogenetics is to reconstruct a weighted tree $T$ with the set of leaves equal to the set of species such that the matrix of the tree metric defined by $T$ on the set of leaves is equal to $M$; see \cite{MS} for a nice introduction to these ideas. In most cases, a tree structure is too constraining. The notion of a split network is a generalization of a tree in which a certain type of cycles is allowed. A geometric space of such networks was introduced in \cite{DP}, forming a natural extension of the work by Billera, Holmes, and Vogtmann on tree space. It has been studied from different points of view since it is related to a number of objects in mathematics: the compactification of the real moduli space of curves, the Balanced Minimal Evolution polytopes, and the Symmetric Traveling Salesman polytopes, to name a few.
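To make the dissimilarity-matrix setup concrete, here is a minimal sketch (with made-up aligned character sequences and species names, not data from any study) that builds the matrix $M$ of normalized Hamming distances:

```python
# hypothetical character sequences for four species (aligned, equal length)
species = {
    "s1": "AACGTT",
    "s2": "AACGCT",
    "s3": "TACGCA",
    "s4": "TTCGCA",
}

def hamming(a: str, b: str) -> float:
    """Proportion of character positions where the two sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = sorted(species)
# dissimilarity matrix M: symmetric, nonnegative, with zero diagonal
M = [[hamming(species[u], species[v]) for v in names] for u in names]

for row in M:
    print(row)
```

A tree-metric reconstruction algorithm would then take $M$ as its input.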
The appropriate metric on the set of species defined by a split network has a very special property, called the Kalmanson property, which distinguishes it completely in the set of all metrics \cite{BD1}, \cite{MS}. Stefan Forcey discovered recently that the resistance metric defined by a circular planar electrical network obeys the Kalmanson property \cite{F}. We will call such a metric an {\it electrical Kalmanson metric}. This important result sheds new light on the theory of electrical networks, connecting it with phylogenetics and metric geometry. The purpose of this paper is twofold: firstly, we want to collect in one place the relevant facts from the research areas mentioned above and indicate interesting connections between them; secondly, we provide a new description of the set of the electrical Kalmanson metrics inside the set of all Kalmanson metrics, given in Theorem \ref{th: dual}. For the latter we will exploit the connection between the space of circular planar electrical networks and the non-negative Isotropic Grassmannian $\mathrm{IG}_{\geq 0}(n-1, 2n)$ which uses the effective resistance matrix (\cite{BGGK}, Theorem $5.6$) as opposed to the response matrix. This description is a modification of the original construction given by Thomas Lam \cite{L} and developed further in \cite{BGKT}, \cite{CGS}. It turns out that the Kalmanson property itself is a consequence of this connection, and the description of the electrical Kalmanson metrics we provide is given entirely in terms of the geometry of the non-negative part of this projective variety. In our description, checking whether a given Kalmanson metric is an electrical Kalmanson metric amounts to checking that the Plücker coordinates of an explicitly defined point in the above Grassmannian are non-negative.
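To sketch what such a positivity check looks like in practice, the snippet below takes the explicit $4\times 8$ matrix $\Omega(\mathcal E)$ of the example network recalled in Section 2, drops its first row (the four rows satisfy one linear relation, so the last three already span the row space), and verifies with exact rational arithmetic that all maximal minors, i.e. the Plücker coordinates, have the same sign:

```python
from fractions import Fraction
from itertools import combinations

F = Fraction

# the 4x8 matrix Omega(E) of the example network in Section 2
omega = [
    [F(5, 8),  F(1), F(1, 8),  F(0), F(-1, 8), F(0), F(3, 8),  F(1)],
    [F(1, 8),  F(1), F(5, 8),  F(1), F(3, 8),  F(0), F(-1, 8), F(0)],
    [F(-1, 8), F(0), F(3, 8),  F(1), F(5, 8),  F(1), F(1, 8),  F(0)],
    [F(3, 8),  F(0), F(-1, 8), F(0), F(1, 8),  F(1), F(5, 8),  F(1)],
]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion (exact)."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

rows = omega[1:]  # rows 2-4 span the (n-1)-dimensional row space
pluecker = {cols: det3([[row[c] for c in cols] for row in rows])
            for cols in combinations(range(8), 3)}

nonneg = all(p >= 0 for p in pluecker.values())
nonpos = all(p <= 0 for p in pluecker.values())
print(nonneg or nonpos)           # all Pluecker coordinates share one sign
print(pluecker[(0, 2, 4)] != 0)   # Delta_{135} is nonzero
```

The same-sign condition is exactly membership in $\mathrm{Gr}_{\geq 0}(3,8)$, and the nonvanishing of $\Delta_{135}$ is the connectedness criterion from Lam's theorem.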
Moreover, using the results from \cite{L} we propose an algorithm for reconstructing a minimal circular planar electrical network out of a given resistance matrix, which might be useful for possible applications in phylogenetics. It is remarkable that the theory of positivity might play a role in studying phylogenetic networks, since it would allow us to apply the powerful machinery of the theory of cluster algebras developed for describing positivity in mathematical objects \cite{Z}. This story should be extended to the compactifications of the respective spaces, taking cactus networks, the known strata in the compactification of circular planar electrical networks, to the newly defined compactified split systems, as was already started in \cite{DF}. In this picture the cactus networks should correspond to the pseudometrics playing the role of dissimilarity functions. The connection between the tropical geometry of the Grassmannians and the space of trees and tree metrics found in \cite{SS} is another interesting direction for developing our work. Finally, building on an example from \cite{F}, we show in Remark \ref{nk} that the planarity condition on the network is far from being necessary to guarantee the Kalmanson condition for the resistance metric. This leaves the description of the set of electrical Kalmanson metrics an interesting open question. {\bf Acknowledgments.} For V. G. this article is an output of a research project implemented as part of the Basic Research Program at the National Research University Higher School of Economics (HSE University). Working on this project V. G. also visited the Max Planck Institute for Mathematics in the Sciences in Leipzig, Germany in the fall of 2024 and BIMSA in Beijing, China in the summer of 2024. Research of A.~K. on Section \ref{sec:kalman} was supported by the Russian Science Foundation project No. 20-71-10110 (https://rscf.ru/en/project/23-71-50012) which finances the work of A.K. at P. G. Demidov Yaroslavl State University.
Research of A.K. on Section \ref{sec:rec} was supported by the state assignment of MIPT (project FSMG-2023-0013). The authors are grateful to Borya Shapiro, Misha Shapiro, Anton Petrunin, Lazar Guterman and especially to Stefan Forcey for useful discussions and helpful suggestions. The authors are grateful to the referees for carefully reading the paper and suggesting a number of useful comments to make it better. \section{The space of electrical networks and the positive Isotropic Grassmannian} \subsection{The space of electrical networks} \begin{definition} An electrical network $\mathcal E$ is a graph $\Gamma$ with positive weights $\omega$ (conductances) attached to the edges and a chosen subset of the set of vertices which are called the boundary vertices or the nodes. \end{definition} In this paper we will denote by $n$ the number of the nodes. As explained in the introduction, the key result in the theory of electrical networks says that the boundary voltages and the currents are related to each other linearly via a matrix $M_R(\mathcal E)=(x_{ij})$ called the {\it response matrix} of a network. $M_R(\mathcal E)$ has the following properties: \begin{itemize} \item $M_R(\mathcal E)$ is an $n\times n$ symmetric matrix; \item All the non-diagonal entries $x_{ij}$ of $M_R(\mathcal E)$ are non-positive; \item For each row the sum of all its entries is equal to $0$. \end{itemize} Given an electrical network $\mathcal E$, the response matrix $M_R(\mathcal E)$ can be calculated as the Schur complement of a submatrix in the Laplacian matrix of the graph of $\mathcal E$ \cite{CIM}. Suppose that $M$ is a square matrix and $D$ is a non-singular square lower right-hand corner submatrix of $M$, so that $M$ has the block structure \[ \begin{pmatrix} A&B\\ C&D \end{pmatrix} \] The Schur complement of $D$ in $M$ is the matrix $M/D = A - BD^{-1} C$.
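As a numerical sketch of this Schur-complement computation (on a hypothetical star network, not an example from the paper): take one internal vertex joined to three boundary nodes by conductances $w_1,w_2,w_3$, form the weighted Laplacian with the internal vertex last, and eliminate the internal block.

```python
import numpy as np

# conductances of the three edges of a star network: nodes 1,2,3 on the
# boundary, one internal vertex joined to each of them
w = np.array([2.0, 3.0, 5.0])

# weighted graph Laplacian, boundary nodes first, internal vertex last
L = np.diag(np.append(w, w.sum()))
L[:3, 3] = -w
L[3, :3] = -w

# block structure L = [[A, B], [C, D]] with D the internal-vertex block
A, B, C, D = L[:3, :3], L[:3, 3:], L[3:, :3], L[3:, 3:]
M = A - B @ np.linalg.inv(D) @ C   # Schur complement L/D = response matrix

print(np.allclose(M, M.T))                     # symmetric
print(np.allclose(M.sum(axis=1), 0))           # zero row sums
print(np.all(M[~np.eye(3, dtype=bool)] <= 0))  # off-diagonal entries <= 0
```

For a star, the elimination reproduces the classical star-triangle rule: the off-diagonal entry is $x_{ij}=-w_iw_j/(w_1+w_2+w_3)$.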
The Schur complement satisfies the following identity: \[\det M = \det (M/D)\det D.\] Labeling the vertices starting from the nodes, we get the Laplacian matrix $L$ of the graph representing $\mathcal E$ in a two by two block form as above. The submatrix $D$ corresponds to the connections between the internal vertices and is known to be nondegenerate. Then $M_R(\mathcal E)=L/D$. There are many electrical networks which have the same response matrix; we will describe them now. The five local network transformations shown in Figure \ref{fig:el_trans} are called the {\it electrical transformations}. Two electrical networks are said to be equivalent if they can be obtained from each other by a sequence of the electrical transformations. This is of course an equivalence relation, so the set of electrical networks is partitioned into equivalence classes. \begin{theorem} \cite{CIM} \label{gen_el_1} The electrical transformations preserve the response matrix of an electrical network. \end{theorem} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{eltranseng.jpg} \hspace{-1.5cm} \caption{Electrical transformations } \label{fig:el_trans} \end{figure} In this paper we will deal with a particular type of connected electrical networks, the {\it connected circular planar electrical networks}, unless otherwise stated. For these we require in addition that the graph of $\mathcal E$ is planar, the nodes are located on a circle and enumerated clockwise, and the rest of the vertices are situated inside this circle. We will denote by $E_n$ the set of equivalence classes of the circular electrical networks in this paper. Note that the notation for this set of equivalence classes used in \cite{CIM} and in \cite{CIW} is $\Omega_n$. The set $E_n$ allows the following elegant description.
\begin{definition} Let $P=(p_1,\ldots,p_k)$ and $Q=(q_1,\ldots,q_k)$ be disjoint ordered subsets of the nodes arranged on a circle; then $(P;Q)=(p_1,\ldots,p_k;q_1,\ldots,q_k)$ is a {\it circular pair} if $p_1,\ldots, p_k,q_k,\ldots,q_1$ are in non-overlapping circular order around the boundary circle. Let $(P;Q)$ be a circular pair; then the determinant of the submatrix $M(P;Q)$ whose rows are labeled by $(p_1,\ldots,p_k)$ and whose columns are labeled by $(q_1,\ldots,q_k)$ is called the {\it circular minor} associated with the circular pair $(P;Q)$. \end{definition} \begin{theorem} \cite{CIM}, \cite{CdV} \label{Set of response matrices all network} The set of response matrices of the elements of $E_n$ is precisely the set of the matrices $M$ such that \begin{itemize} \item $M$ is a symmetric matrix; \item All the non-diagonal entries of $M$ are non-positive; \item For each row the sum of all its entries is equal to $0$; \item For any $k\times k$ circular minor $(-1)^k\det M(P;Q) \geq 0$; \item The kernel of $M$ is generated by the vector $(1,1,\dots,1)$. \end{itemize} Moreover, any two circular electrical networks which have the same response matrix are equivalent. \end{theorem} We will need the following notion later. \begin{definition} Let $\mathcal E$ be a connected circular electrical network on a graph $\Gamma$. The {\it dual electrical network} $\mathcal E^*$ is defined in the following way: \begin{itemize} \item the graph $\Gamma^*$ of $\mathcal E^*$ is the dual graph to $\Gamma$; \item for any pair of dual edges of $\Gamma$ and $\Gamma^*$ their conductances are reciprocal to each other; \item the labeling of the nodes of $\mathcal E^{*}$ is determined by the requirement that the first node of $\mathcal E^{*}$ lies between the first and second nodes of $\mathcal E$. \end{itemize} \end{definition} \subsection{Cactus electrical networks} Setting an edge conductance to zero or infinity in $\mathcal E$ makes sense.
By Ohm's law it means that we delete or contract this edge. Doing so, we either get isolated nodes or some nodes get glued together. We will consider the resulting network as a network with $n$ nodes, remembering how the nodes get identified. Such a network is called {\it a cactus electrical network} with $n$ nodes. One can think of it as a collection of ordinary circular electrical networks with the total number of nodes equal to $n$, glued along some of these nodes. Note that the graph of such a network is planar, but it does not have to be connected. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{cactuswithout.jpg} \caption{A cactus electrical network with 4 nodes } \label{fig:cactus} \end{figure} The electrical transformations can be applied to the cactus electrical networks. We denote by $\overline{E}_n$ the set of equivalence classes with respect to the electrical transformations of the cactus electrical networks with $n$ nodes. The definition of a cactus network was introduced in \cite{L}, where it was proved that the set $\overline{E}_n$ is a compactification of $E_n$ in the appropriate sense. Note that the space of cactus networks is denoted by $E_n$ in \cite{L}. We apologize for this unfortunate clash of notations. \subsection{Lam embedding} \label{sec: lam emb} Recall that the real Grassmannian $\mathrm{Gr}(k, n)$ is a differentiable manifold that parameterizes the set of all $k$-dimensional linear subspaces of the vector space $\mathbb{R}^n$. In fact it has the structure of a projective algebraic variety. The Plücker embedding is an embedding of the Grassmannian $\mathrm{Gr}(k, n)$ into the projectivization of the $k$-th exterior power of the vector space $\mathbb{R}^n$: \[\iota :\mathrm {Gr} (k,n)\rightarrow \mathrm {P}(\Lambda ^{k}\mathbb{R}^n)\] Suppose $W\subset \mathbb{R}^n$ is a $k$-dimensional subspace.
To define $\iota (W)$, choose a basis $(w_{1},\cdots ,w_{k})$ of $W$, and let $\iota (W)$ be the projectivization of the wedge product of these basis elements: $\iota (W)=[w_{1}\wedge \cdots \wedge w_{k}]$, where $ [\,\cdot \,]$ denotes the projective equivalence class. For practical calculations one can view the matrix whose rows are the coordinates of the basis vectors $(w_{1},\cdots ,w_{k})$ as a representative of this equivalence class. For any ordered sequence $I$ of $k$ positive integers $1\leq i_{1}<\cdots <i_{k}\leq n$, denote by $\Delta_I$ the determinant of the $k\times k$ submatrix of the above matrix with columns labeled by the numbers from $I$. The numbers $\Delta_I$ are called the Plücker coordinates of the point $W$ of $\mathrm{Gr}(k,n)$. They are defined up to a common nonzero factor. \begin{definition} The totally non-negative Grassmannian $\mathrm{Gr}_{\geq 0}(k, n)$ is the subset of points of the Grassmannian $\mathrm{Gr}(k, n)$ whose Plücker coordinates $\Delta_I$ all have the same sign or are equal to zero. \end{definition} The following theorem of T. Lam \cite{L} is one of the key results about the space of electrical networks. \begin{theorem} \label{th: main_gr} There is a bijection \[\overline {E}_n \cong \mathrm{Gr}_{\geq 0}(n-1,2n)\cap \mathbb{P}H\] where $H$ is a certain subspace of $\bigwedge^{n-1}\mathbb{R}^{2n}$ of dimension equal to the Catalan number $C_n$. Moreover, the image of the set $E_n$ under this bijection is exactly the set of points with the Plücker coordinates $\Delta_{24\dots 2n-2}$ and $\Delta_{13\dots 2n-3}$ not equal to zero. \end{theorem} We will recall the explicit construction of the embedding of ${E}_n$ obtained in \cite{BGKT}, which is induced by the above bijection.
Let $\mathcal E$ be a circular electrical network with the response matrix $M_R(\mathcal E)=(x_{ij}).$ The following $n\times 2n$ matrix \begin{equation} \label{omega_eq} \Omega(\mathcal E)=\left( \begin{array}{cccccccc} x_{11} & 1 & -x_{12} & 0 & x_{13} & 0 & \cdots & (-1)^n\\ -x_{21} & 1 & x_{22} & 1 & -x_{23} & 0 & \cdots & 0 \\ x_{31} & 0 & -x_{32} & 1 & x_{33} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \end{array} \right) \end{equation} gives the point in $\mathrm{Gr}_{\geq 0}(n-1,2n)\cap \mathbb{P}H$ which corresponds to $\mathcal E$ under the Lam bijection. Since the sum of the entries in each row of $M_R(\mathcal E)$ is equal to $0$ (see Theorem \ref{Set of response matrices all network}), the matrix $\Omega(\mathcal E)$ has rank $n-1$. Therefore, the dimension of the row space of $\Omega(\mathcal E)$ is equal to $n-1$, hence it defines a point in $\mathrm{Gr}(n-1,2n)$. The Plücker coordinates of the point associated with $\Omega(\mathcal E)$ can be calculated as the maximal size minors of the matrix $\Omega'(\mathcal E)$ obtained from $\Omega(\mathcal E)$ by deleting, for example, the first row. \begin{theorem} \cite{BGK} \label{about sur} Let $A=(a_{ij})$ be a matrix which satisfies the first three conditions of Theorem \ref{Set of response matrices all network} and let $\Omega(A)$ be the matrix constructed according to the formula \eqref{omega_eq}.
If $\Omega(A)$ defines a point in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ and the Plücker coordinate $\Delta_{13\dots 2n-3}\bigl(\Omega(A)\bigr)$ is not equal to zero, then there is a connected electrical network $\mathcal E \in E_n$ such that $A=M_R(\mathcal E).$ \end{theorem} \begin{example} For the network $\mathcal E \in E_4$ shown in Figure \ref{treedaul} the matrix $\Omega(\mathcal E)$ has the following form: \begin{equation*} \Omega(\mathcal E) = \left( \begin{array}{cccccccc} \dfrac{5}{8}& 1 & \dfrac{1}{8} & 0 & -\dfrac{1}{8} & 0 & \dfrac{3}{8} & 1 \\ & & & & & & & \\ \dfrac{1}{8}& 1 & \dfrac{5}{8} & 1 & \dfrac{3}{8} & 0 & -\dfrac{1}{8} & 0 \\ & & & & & & & \\ -\dfrac{1}{8}& 0 &\dfrac{3}{8} & 1 & \dfrac{5}{8} & 1 & \dfrac{1}{8} & 0 \\ & & & & & & & \\ \dfrac{3}{8}& 0 & -\dfrac{1}{8} & 0 & \dfrac{1}{8} & 1 & \dfrac{5}{8} & 1 \\ \end{array} \right). \end{equation*} \end{example} In fact, the row space of $\Omega(\mathcal E)$ is isotropic with respect to a particular symplectic form. This refines the above embedding to a submanifold $$\mathrm{IG}_{\geq 0}(n-1, 2n)\subset \mathrm{Gr}_{\geq 0}(n-1, 2n)$$ consisting of isotropic subspaces of $\mathbb{R}^{2n}$ \cite{BGKT}, \cite{CGS}. \section{Characterization of resistance distance} \label{sec:kalman} \subsection{Resistance metric} From now on we assume only that an electrical network is connected. \begin{definition} \label{def:eff-resist} Let $(\Gamma, \omega)$ be a connected graph with a weight function $\omega$ on the edges and let $i,\, j\in \Gamma$ be two of its vertices. Consider it as an electrical network $\mathcal E_{ij}(\Gamma, \omega)$ on the graph $\Gamma$ with the conductivity function $\omega$ by declaring the vertices $i$ and $j$ to be the nodes, while the remaining vertices are internal. Apply the voltages $U=(U_i, U_j)$ to the nodes of $\mathcal E_{ij}(\Gamma, \omega)$ such that they induce the boundary currents $I=(1, -1)$.
Then the {\it effective resistance} between the nodes $i$ and $j$ is defined as $$R_{ij}=|U_i-U_j|.$$ Obviously $R_{ij}=R_{ji}$. \end{definition} This definition can be restated in a beautiful way, due to Gustav Kirchhoff, using the combinatorics of the graph. Two pieces of notation are required. Recall that the generating function of spanning trees of $\Gamma$ is defined as \[T(\Gamma,\omega) = \sum_{T\in T(\Gamma)} \omega(T),\] where the sum is taken over the set $T(\Gamma)$ of all spanning trees of $\Gamma$ and $\omega(T)$ is defined as the product of the weights of the edges of $T$. For a graph $(\Gamma, \omega)$ as above and vertices $i,j \in V$, we let $\Gamma/ij$ denote the graph obtained by merging the two vertices $i$ and $j$ together into a single vertex. \begin{theorem}\label{KF} (Kirchhoff’s Formula \cite{MTT}, \cite{Wa}). Let $(\Gamma,\omega)$ be an electrical network, and let $i,j \in V$. The effective resistance between $i$ and $j$ in $\Gamma$ is \[R_{ij} =\frac{T(\Gamma/ij,\omega)} {T(\Gamma,\omega)}. \] \end{theorem} The following well-known lemma relates the effective resistances to the response matrix entries. \begin{lemma} \cite{KW 2011} \label{lem:eff-resist} Let $\mathcal E$ be a connected electrical network with $n$ nodes $\{1, \dots, i, \dots, j, \dots, n\} $, and let the boundary voltages $U = (U_1, \dots , U_n)$ be such that \begin{equation} \label{eq-resist} M_R(\mathcal E)U = -e_i + e_j, \end{equation} where $e_k, \ k \in \{1, \dots , n\}$ is the standard basis of $\mathbb{R}^n$. Then $$|U_i - U_j|=R_{ij}.$$ \end{lemma} \begin{proof} Indeed, if the boundary voltages $U$ are as in \eqref{eq-resist}, we can consider all the vertices except $i$ and $j$ as the inner vertices, and then $|U_i - U_j|$ is precisely as in Definition \ref{def:eff-resist}.
\end{proof} For convenience, we will organize the effective resistances $R_{ij}$ between the nodes of $\mathcal E$ into a symmetric matrix, setting $R_{ii}=0$ for all $i$. We call this matrix the {\it resistance matrix} of $\mathcal E$ and denote it by $R_{\mathcal E}$. From Lemma \ref{lem:eff-resist} it follows that $$R_{ij}=(-e_i+e_j)^t\bigl(M_R(\mathcal E)\bigr)^{-1}(-e_i+e_j),$$ where $\bigl(M_R(\mathcal E)\bigr)^{-1}(-e_i+e_j)$ denotes a vector $U$ which satisfies \eqref{eq-resist}; notice that such a vector always exists. \begin{proposition}\cite{KW 2011} \label{th: about inverse resp} Let $\mathcal E$ be a connected electrical network. Denote by $M'_R(\mathcal E)$ the matrix obtained from $M_R(\mathcal E)$ by deleting the last row and the last column; then $M'_R(\mathcal E)$ is invertible. The matrix elements of its inverse are given by the formula \begin{equation*} M'_R(\mathcal E)^{-1}_{ij}=\begin{cases} R_{in}, & \text{if } i=j, \\ \frac{1}{2}(R_{in}+R_{jn}-R_{ij}), & \text{if } i\not = j. \end{cases} \end{equation*} \end{proposition} Proposition \ref{th: about inverse resp} and Lemma \ref{lem:eff-resist} show that the resistance and the response matrices determine each other. Therefore, Theorem \ref{gen_el_1} would still hold if we replaced the response matrix with the resistance matrix in its statement. \begin{remark}\label{inverse} The formula from Proposition \ref{th: about inverse resp} is well known in different areas of mathematics. It has appeared in the literature under the names of the Gromov product, the Farris transform, and the covariance mapping between the cut and covariance cones; see \cite{GandC} for more information. This allows us to view $M'_R(\mathcal E)^{-1}$, the ``inverse'' of the response matrix, as a similarity function corresponding under the covariance mapping to a dissimilarity function represented by the resistance metric $R_{\mathcal E}$.
We are planning to explore the properties of the resistance matrix from these points of view in future publications. \end{remark} Proposition \ref{th: about inverse resp} provides a simple proof that the effective resistances $R_{ij}$ satisfy the triangle inequality. \begin{theorem} \label{th:about metric} Let $\mathcal E$ be an electrical network on a connected graph $\Gamma$. Then for any three nodes $k_1, k_2$ and $k_3$ the triangle inequality holds: $$R_{k_1k_3}+R_{k_2k_3}-R_{k_1k_2} \geq 0.$$ Hence the set of all $R_{vw}$ defines a metric on the nodes of $\Gamma$. \end{theorem} \begin{proof} Let $\mathcal E_{k_1k_2k_3}$ be the connected electrical network obtained from $\mathcal E$ by declaring the vertices $k_1, k_2, k_3$ to be the boundary nodes, while the remaining vertices are declared inner, and let $M_R(\mathcal E_{k_1k_2k_3})$ be its response matrix. Then according to Proposition \ref{th: about inverse resp} we have $$M'_R(\mathcal E_{k_1k_2k_3})^{-1}_{k_1k_2}=\frac{1}{2}(R_{k_1k_3}+R_{k_2k_3}-R_{k_1k_2}),$$ so to get the statement it is enough to verify that $M'_R(\mathcal E_{k_1k_2k_3})^{-1}_{k_1k_2} \geq 0.$ Indeed, the matrix $M'_R(\mathcal E_{k_1k_2k_3})$ has the following form: \begin{equation*} M'_R(\mathcal E_{k_1k_2k_3})=\left(\begin{matrix} x_{k_1k_1} & x_{k_1k_2} \\ x_{k_1k_2} & x_{k_2k_2} \end{matrix} \right) = \left(\begin{matrix} -x_{k_1k_2}-x_{k_1k_3} & x_{k_1k_2} \\ x_{k_1k_2} & -x_{k_1k_2}-x_{k_2k_3} \end{matrix} \right). \end{equation*} By a direct computation we obtain that \begin{equation*} M'_R(\mathcal E_{k_1k_2k_3})^{-1}_{k_1k_2}=\frac{-x_{k_1k_2}}{\det M'_R(\mathcal E_{k_1k_2k_3}) }=\frac{-x_{k_1k_2}}{x_{k_1k_2}x_{k_2k_3}+x_{k_1k_3}x_{k_1k_2}+x_{k_1k_3}x_{k_2k_3}}. \end{equation*} By the definition of the response matrix, all $x_{k_ik_j} \leq 0$ for $i\neq j$, which implies the statement of the theorem.
\end{proof} To describe the properties of the resistance metric associated with a connected circular electrical network, we will provide a formula for the embedding of $E_n$ into $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ described in Theorem \ref{th: main_gr} which uses the effective resistance matrix instead of the response matrix. Let $\mathcal E$ be a connected network with the resistance matrix $R_{\mathcal E}$, and define a point in $\mathrm{Gr}(n-1,2n)$ associated to it as the row space of the matrix: \begin{equation} \label{eq:omega_n,r} \Omega_{R}(\mathcal E)=\left(\begin{matrix} 1 & m_{11} & 1 & -m_{12} & 0 & m_{13} & 0 & \ldots \\ 0 & -m_{21} & 1 & m_{22} & 1 & -m_{23} & 0 & \ldots \\ 0 & m_{31} & 0 & -m_{32} & 1 & m_{33} & 1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix}\right), \end{equation} where $$m_{ij}= -\frac{1}{2}(R_{i,j}+R_{i+1,j+1}-R_{i,j+1}-R_{i+1,j}).$$ Notice that the matrix $M(R_{\mathcal E})=(m_{ij})$ is symmetric and the sum of the matrix entries in each row is zero. In other words, it looks like the response matrix of an electrical network. There is a reason for this, which was discovered by R. Kenyon and D. Wilson. \begin{theorem} \cite{KW 2011} \label{ken-wen} Let $\mathcal E$ be a connected circular electrical network and $\mathcal E^{*}$ be its dual; then the following holds: \begin{equation} \label{form:xij} x^{*}_{ij}=-\frac{1}{2}(R_{i,j}+R_{i+1,j+1}-R_{i,j+1}-R_{i+1,j}), \end{equation} \begin{equation} \label{form:rij} R^{*}_{ij}=-\sum \limits_{i'<j': \ D_{S_{ij}}(i', j') \neq 0}x_{i'j'}, \end{equation} where $x^{*}_{ij}$ are the matrix elements of the response matrix of the dual network $\mathcal E^*$ and $x_{i'j'}$ are those of $\mathcal E$.
\end{theorem} Since $M(R_{\mathcal E})$ is a degenerate matrix, the Plücker coordinates of the point of $\mathrm{Gr}(n-1, 2n)$ associated with $\Omega_{R}(\mathcal E)$ can be calculated as the maximal-size minors of the matrix $\Omega'_{ R}(\mathcal E)$ obtained from $\Omega_{R}(\mathcal E)$ by deleting, for example, the last row. \begin{theorem} \cite{BGGK} \label{th:aboout omeganr} The row space of $\Omega_{R}(\mathcal E)$ defines the same point in $\mathrm{Gr}_{\geq 0}(n-1,2n)$ as the point $\Omega(\mathcal E)$ defined by the Lam embedding, see Theorem \ref{th: main_gr}. In particular, the Plücker coordinate $\Delta_{24\dots 2n-2}(\Omega_{R}(\mathcal E))$ is nonzero if and only if $\mathcal E$ is a connected circular electrical network. Putting things together, \[ \Omega_{R}(\mathcal E)=\Omega(\mathcal E^{*})s=\Omega(\mathcal E),\] where $s \in \mathrm{Mat}_{2n \times 2n}$ is the shift operator \[s=\left(\begin{matrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ (-1)^{n} & 0 & 0 & 0 & \cdots & 0 \end{matrix}\right).\]\label{cyclic operator s} See the proof of \cite[Theorem 5.6]{BGGK} for more details. \end{theorem} Many interesting inequalities involving $R_{ij}$ follow from the positivity of the Plücker coordinates of the point represented by $\Omega_{R}(\mathcal E)$. For some of them we have found an explicit combinatorial meaning; others are still waiting to be interpreted. Below we will deduce the Kalmanson property for the metric $R_{\mathcal E}$ as a consequence of the positivity described above. This fact was established earlier in \cite{F} using different methods. \begin{theorem} \label{charkalm} Let $\mathcal E$ be a connected circular electrical network and let $i_1, i_2, i_3, i_4$ be any four nodes in the circular order, as shown in Figure \ref{kalmanson}.
Then the Kalmanson inequalities hold: \begin{equation} \label{kal_1} R_{i_1i_3}+R_{i_2i_4}\geq R_{i_2i_3}+R_{i_1i_4}, \end{equation} \begin{equation} \label{kal_2} R_{i_1i_3}+R_{i_2i_4}\geq R_{i_1i_2}+R_{i_3i_4}. \end{equation} \end{theorem} \begin{proof} Let $\mathcal E_{i_1i_2i_3i_4}$ be the electrical network obtained from $\mathcal E$ by declaring the vertices $i_1, i_2, i_3, i_4$ to be the boundary nodes, while the rest of the vertices are internal. Obviously $\mathcal E_{i_1i_2i_3i_4}$ is a connected circular electrical network in $E_4$; let $\Omega'_{R}(\mathcal E_{i_1i_2i_3i_4})$ be its matrix defined by the formula \eqref{eq:omega_n,r}. By direct computations we conclude that $$\Delta_{567}\bigl(\Omega'_{ R}(\mathcal E_{i_1i_2i_3i_4})\bigr)=\frac{1}{2}(R_{i_1i_3}+R_{i_2i_4} -R_{i_1i_2}-R_{i_3i_4}),$$ $$\Delta_{123}\bigl(\Omega'_{R}(\mathcal E_{i_1i_2i_3i_4})\bigr)=\frac{1}{2}(R_{i_1i_3}+R_{i_2i_4}- R_{i_2i_3}-R_{i_1i_4}).$$ Taking into account that all the minors $\Delta_{I}\bigl(\Omega'_{ R}(\mathcal E_{i_1i_2i_3i_4})\bigr)$ are non-negative, we obtain that the Kalmanson inequalities hold for the metric defined by the resistances. \end{proof} \begin{figure}[h!] \center \includegraphics[width=60mm]{kalmanson.jpg} \caption{$4$-point Kalmanson property} \label{kalmanson} \end{figure} \begin{remark} Note that the inequalities \eqref{kal_1} and \eqref{kal_2} together are equivalent to the tropical inequality: \[ R_{i_{1}i_{3}}+R_{i_{2}i_{4}}\geq\max(R_{i_{2}i_{3}}+R_{i_{1}i_{4}},\,R_{i_{1}i_{2}}+R_{i_{3}i_{4}}). \] \end{remark} \begin{definition} Let $X$ be a finite set and $D$ a metric on it. If there is a circular order on $X$ such that the inequalities from the theorem above hold for any four points of $X$ in this circular order, we call this metric a {\it Kalmanson metric}. \end{definition} Therefore the resistance metric $R_{\mathcal E}$ defined by a connected circular electrical network $\mathcal E$ is a Kalmanson metric.
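The four-point conditions above are easy to test mechanically: one runs over all quadruples taken in the given circular order. A small sketch (the function name and the optional reordering argument are ours):

```python
from itertools import combinations

def is_kalmanson(d, order=None):
    """Check the Kalmanson inequalities
        d[i1][i3] + d[i2][i4] >= max(d[i2][i3] + d[i1][i4],
                                     d[i1][i2] + d[i3][i4])
    for every quadruple i1, i2, i3, i4 taken in the given circular order
    (by default the order of the rows of d)."""
    n = len(d)
    order = order or list(range(n))
    for a, b, c, e in combinations(range(n), 4):
        i1, i2, i3, i4 = order[a], order[b], order[c], order[e]
        lhs = d[i1][i3] + d[i2][i4]
        if lhs < d[i2][i3] + d[i1][i4] or lhs < d[i1][i2] + d[i3][i4]:
            return False
    return True
```

A metric may fail the check in one labeling of the points yet satisfy it after a relabeling; this is why the definition asks for the existence of a suitable circular order, and why the `order` argument is useful in practice.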
\subsection{Circular split systems and electrical networks} \label{sec:splitmetr} \begin{definition} A {\it split} $S$ of a set $X=\{1, \dots, n\}$ is a partition of $X$ into two non-empty disjoint subsets $A$ and $B$, $ A \sqcup B = X$. A split is called trivial if either $A$ or $B$ has cardinality $1.$ A collection of splits $\mathcal{S}$ is called a {\it split system}. \end{definition} The pseudometric associated with a split $S=A|B$ is defined by the following matrix $D_{A|B}$: \begin{equation*} D_{A|B}(k,l)=\begin{cases} 1, \ \text{if} \ |A \cap \{k,l\}|=1, \\ 0, \ \text{otherwise.} \end{cases} \end{equation*} A circular order of $X$ can be drawn as a polygon with the elements of $X$ labeling the sides. A {\it circular split system} is a split system for which a circular order exists such that all the splits can be simultaneously drawn as sides or diagonals of the labeled polygon. Trivial splits are sides of the polygon, separating the label of that side from the rest, while a non-trivial split $A|B$ is a diagonal separating the sides labeled by $A$ from those labeled by $B$. Any circular split system can be visualized by such a polygonal representation, or instead by a visual representation using sets of parallel edges for each split; these representations are called {\it circular split networks}. A set of parallel edges displays a split $A|B$ if the removal of those edges leaves two connected components with respective sets of terminals $A$ and $B$, see Figure \ref{splex}. \begin{definition} We call a {\it weighted circular split system} a circular split system with a positive weight attached to each set of parallel edges displaying a split. Such a weighted circular split system defines a pseudometric on the set $X$ by the formula \begin{equation} \label{decom1} D_S=\sum \limits_{A|B \in S } \alpha_{A|B}D_{A|B}, \end{equation} where $\alpha_{A|B}$ is the weight of the set of parallel edges which defines the split $A|B$.
\end{definition} The split $(\{1,2,3,4\}|\{5,6,7,8\})$ of the set of the sides of the polygon on the left, defined by the yellow diagonal in Figure \ref{splex}, corresponds to the set of parallel edges with the weight $\alpha_1$ in the graph on the right. \begin{figure}[h!] \center \includegraphics[width=90mm]{splyweight.jpeg} \caption{A circular split system and its polygon representation. Removing any set of parallel edges defines a split of the set of leaves of the graph on the right which coincides with the split of the set of the sides of the polygon on the left obtained by removing an appropriate diagonal.} \label{splex} \end{figure} The following theorem gives a beautiful characterization of the set of Kalmanson metrics \cite{BD}. \begin{theorem} \label{BD} A metric $d$ is a Kalmanson metric with respect to a circular order $c$ if and only if $d = D_S$ for a unique weighted circular split system $S$ (not necessarily containing all trivial splits) with each split $A|B$ of $S$ having both parts contiguous in that circular order $c$. \end{theorem} Since the resistance metric defined by a planar circular electrical network satisfies the Kalmanson condition, it corresponds to a unique weighted circular split system, which we call an {\it electrical circular split system} following \cite{F} or, given Theorem \ref{charkalm}, an {\it electrical Kalmanson metric}. We will introduce a slightly different way of labeling the splits; it will be useful for us later. \begin{definition} \label{circ-system} Let $X$ be a set of nodes on a circle labeled clockwise by the symbols $1$ to $n$. Define the dual set $X^d$ consisting of nodes labeled by the symbols $\overline{1}$ to $\overline{n}$ in such a way that each $\overline{j}$ lies between $j$ and $j+1.$ Then each chord connecting $\overline{i}$ and $\overline{j}$ defines a split of $X$ which we will denote by $S_{ij}$.
It is obvious that the set of splits $S_{ij}$ forms a circular split system, which we will denote by $\mathcal{S}_c$. \end{definition} The following theorem gives a formula for the weights of the weighted circular split system of Theorem \ref{BD}. \begin{theorem} \cite{GandC}, \cite{HKP} \label{th:decomp} Let $X$ be a set as in Definition \ref{circ-system} and let $D=(d_{ij})$ be a Kalmanson metric defined on $X$. Then the following split decomposition holds: \begin{equation} \label{decom} D=\sum \limits_{S_{ij} \in \mathcal{S}_c } \omega_{ij}D_{S_{ij}}, \end{equation} where the coefficients $\omega_{ij}$ are defined by the formula $$\omega_{ij}=\frac{1}{2}(d_{i,j}+d_{i+1,j+1}-d_{i,j+1}-d_{i+1,j}).$$ \end{theorem} \begin{remark}\label{KW} The formula \eqref{decom} for $R_{ij}$ can be obtained from Theorem \ref{ken-wen} since taking the dual electrical network is an involution up to a shift: $\mathcal E^{**}=\mathcal E'$, where $\mathcal E'$ is obtained from $\mathcal E$ by shifting the numeration of the nodes of $\mathcal E$ by $1$ clockwise: \begin{align} \label{formdual} R_{ij}=R^{**}_{i-1,j-1}=-\sum \limits_{\bar{i}'<\bar{j}': \ D_{S_{i-1,j-1}}(\bar{i}', \bar{j}') \neq 0}x^{*}_{\bar{i}'\bar{j}'}=-\sum \limits_{i'<j': \ D_{S_{ij}}(i', j') \neq 0}x^{*}_{i'j'}= \\ \nonumber \sum \limits_{i'<j': \ D_{S_{ij}}(i', j') \neq 0}\frac{1}{2}(R_{i',j'}+R_{i'+1,j'+1}-R_{i',j'+1}-R_{i'+1,j'}), \end{align} where $\{1, \dots, n\}$ and $\{\overline1, \dots, \overline n\}$ are the labels of the nodes of $\mathcal E$ and $\mathcal E^{*}$ respectively. Thus Theorem \ref{ken-wen} in fact provides an alternative way to observe that the effective resistance distance is a Kalmanson metric.
\end{remark} \begin{figure}[H] \center \includegraphics[width=80mm]{toformualdual.jpg} \caption{Labeling in the formula \eqref{formdual}} \label{fig:treedaul} \end{figure} \begin{definition} Define a matrix $M(D)$ by \begin{align} M(D)_{ij} = \begin{cases} \sum_{k \not= i}\omega_{ik}, & \text{if } i = j,\\ -{\omega_{ij}}, & \text{if } i \not= j. \end{cases} \end{align} Notice that $\sum_{k \not= i}\omega_{ik}=-\omega_{ii}=d_{i,i+1}.$ \end{definition} Given a Kalmanson metric as a matrix $D$, one can decide whether it is an electrical Kalmanson metric in a straightforward but computationally heavy way. First one must reorder the matrix columns and rows so that its entries satisfy the Kalmanson inequalities, then find the unique response matrix $M$ that corresponds to the given Kalmanson metric matrix using the pseudoinverse formula, see \cite[Lemma 3.1]{F}, and then check all the circular minors of that $M$ for non-negativity. Here is our main result; it gives a new characterization, avoiding matrix inversion, of those Kalmanson metrics which are the resistance metrics of circular electrical networks. \begin{theorem} \label{th: dual} Let $D$ be a Kalmanson metric matrix with the correct order of rows and columns. Then $D$ is the effective resistance matrix of a connected circular electrical network $\mathcal E$ if and only if the matrix $\Omega_{D}$ constructed from $D$ according to the formula \eqref{eq:omega_n,r} defines a point $X$ in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ and the Plücker coordinate $\Delta_{24\dots 2n-2}(X)$ does not vanish. \end{theorem} \begin{proof} Necessity follows from Theorem \ref{th:aboout omeganr}. To prove sufficiency, assume that $\Omega_{D}$ defines a point $X$ in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ and $\Delta_{24\dots 2n-2}(\Omega'_{D}) \neq 0$. By direct computations we conclude that the matrix $\Omega_{D}s^{-1}$ has the form \eqref{omega_eq} and $\Delta_{13\dots 2n-3}\bigl((\Omega_{D}s^{-1})'\bigr)\neq 0$.
Since the action of $s$ preserves non-negativity according to \cite{L3}, the matrix $\Omega_{D}s^{-1}$ defines a point of $\mathrm{Gr}_{\geq 0}(n-1, 2n)$. Using the surjectivity of Lam's embedding (see Theorem \ref{about sur}) and Theorem \ref{th:aboout omeganr}, we obtain that both $\Omega_{D}$ and $\Omega_{D}s^{-1}$ are associated with connected networks. Finally, due to Theorem \ref{th:aboout omeganr} and Theorem \ref{ken-wen} we have that the matrix $M(D)=(m_{ij})$, where \[m_{ij}=-\dfrac{1}{2}(d_{i,j}+d_{i+1,j+1}-d_{i,j+1}-d_{i+1,j}),\] can be identified with the response matrix of a network $\mathcal E^{*}$ associated with $\Omega_{D}s^{-1}$. It remains to prove that the $d_{ij}$ are equal to the effective resistances $R_{ij}$ of the network $\mathcal E$ associated with $\Omega_{D}$. Since the metrics $d_{ij}$ and $R_{ij}$ are both Kalmanson and the weights in their split decompositions are equal, $\omega_{ij}=-m_{ij}$ (see formula \eqref{decom}), we conclude that: $$d_{ij}=-\sum \limits_{i'<j': \ D_{S_{ij}}(i', j') \neq 0} m_{i'j'}= R_{ij}.$$ \end{proof} The following theorem is also a convenient test for detecting electrical Kalmanson metrics. It falls into our lap due to Theorems \ref{ken-wen} and \ref{th:decomp}. \begin{theorem} \label{th dual2} Let $D$ be a Kalmanson metric on $X$. Then $D$ is an electrical Kalmanson metric of a connected circular electrical network $\mathcal E$ if and only if the rank of $M(D)$ is equal to $n-1$ and the circular minors of the matrix $M(D)$ are non-negative after multiplying by $(-1)^k$, where $k$ is the size of the minor. Given this, the matrix $M(D)$ is the response matrix of the dual of the electrical network $\mathcal E$. \end{theorem} \begin{proof} If every $k \times k$ circular minor of the matrix $M(D)$ is non-negative after multiplying by $(-1)^k$, then $M(D)$ defines a response matrix of a network $\mathcal E'$ due to Theorem \ref{Set of response matrices all network}.
Hence we conclude that $\mathcal E'$ is connected (see Theorem \ref{Set of response matrices all network}) given the condition on the rank of $M(D)$. The matrix $\Omega( \mathcal E')$ defines a point in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$, hence $\Omega(\mathcal E')s$ does as well. According to Theorem \ref{th:aboout omeganr}, $$\Omega(\mathcal E')s=\Omega(\mathcal E)=\Omega_{R}(\mathcal E)$$ for a connected network $\mathcal E$ with $M(R_{\mathcal E})=M(D)$; moreover $\mathcal E'=\mathcal E^{*}$. It remains to prove that $R_{ij}=d_{ij}$; this can be done as explained in the proof of Theorem \ref{th: dual}. Suppose now that $D$ comes from a circular electrical split system associated with a connected circular electrical network $\mathcal E$. Then due to Theorem \ref{th:aboout omeganr} we conclude that $\Omega_{R}(\mathcal E)s^{-1}$ defines a point in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$ associated with the connected network $\mathcal E^{*}$ and $M(D)=M_R(\mathcal E^*).$ The properties of $M(D)$ in the statement of the theorem follow from Theorem \ref{Set of response matrices all network}. \end{proof} A few remarks are in order. \begin{remark} Many statements in this paper can be extended to connected cactus networks. In particular, one can show that the effective resistances of a connected cactus network give rise to a Kalmanson pseudometric, and a Kalmanson pseudometric matrix $D$ can be identified with the resistance matrix of a connected cactus network if and only if the matrix $\Omega_{D}$ constructed from $D$ according to the formula \eqref{eq:omega_n,r} defines a point $X$ in $\mathrm{Gr}_{\geq 0}(n-1, 2n)$. \end{remark} \begin{remark}\label{nk} In \cite[Remark 4.3]{F}, S. Forcey gives an example of a non-planar network whose resistance metric is nevertheless Kalmanson. It is easy to generalise this example in the following way.
Let us start with a circular planar electrical network, whose resistance metric is therefore Kalmanson, such that all the inequalities required by the Kalmanson condition are strict. This means the resistance metric is represented by a full circular planar split system. Add an edge with a variable conductance $x$ to it in such a way that the network is no longer circular planar, as in Figure \ref{nkal}. It is clear from Kirchhoff's formula (Theorem \ref{KF}) that each effective resistance $R_{ij}$ changes as a function of $x$ according to the formula $$R^{\mathrm{new}}_{ij}=\frac{R_{ij}+r_{ij}x}{1+Tx},$$ where $r_{ij}$ and $T$ are non-negative numbers and $T$ does not depend on $i,\,j$. Therefore, for small $x$ the Kalmanson condition for the resistance metric will still hold for the new network, although it is no longer circular planar. This means that the circular planar Kalmanson metrics sit far away from the boundary of the space of all Kalmanson metrics. \end{remark} \begin{figure} \center \includegraphics[width=50mm]{non-plan-kalma.jpg} \caption{A non-planar electrical network with nodes \{1,\dots,5\}} \label{nkal} \end{figure} \begin{remark}\label{plueclerformetric} Let $X$ and $D$ be as in Theorem \ref{th:decomp}. Then the matrix $M(D)$ defined above is the response matrix of a connected electrical network, not necessarily circular planar. Namely, it is symmetric, all off-diagonal elements are negative, and the sum of the elements in any row is zero. The connection of the resistance matrix of this network to the original metric $D$ is an interesting question. Moreover, such a matrix defines a point of $$\mathrm{IG}(n-1, 2n)\subset \mathrm{Gr}(n-1, 2n)$$ according to Theorem \ref{th:aboout omeganr}. Its Plücker coordinates are interesting invariants of the metric $D$. We are planning to address these questions in a future publication.
\end{remark} \begin{remark} The resistance metric has the following important property: its square root $\sqrt {D(i,j)}$ is $L_2$ embeddable \cite{K}; hence the electrical Kalmanson metrics have that property. There is a well-known condition for $L_2$ embeddability stated in terms of the minors of the Cayley--Menger matrix \cite{GandC}. It would be interesting to see whether this condition also singles out the electrical Kalmanson metrics within the set of all Kalmanson metrics. \end{remark} \begin{figure}[h!] \center \includegraphics[width=70mm]{treedual.jpg} \caption{A tree network $\mathcal E$ and its dual network $\mathcal E^{*}$; \\ all the conductances are equal to $1$} \label{treedaul} \end{figure} We will provide an example to illustrate the theorem above. \begin{example} Let $T$ be a tree with four leaves as in Figure \ref{treedaul}, with all edge weights equal to one. The resistance metric matrix of the tree is \[D= \begin{pmatrix} 0& 3 & 3& 2 \\ 3& 0& 2& 3\\ 3& 2& 0& 3 \\ 2&3&3&0 \end{pmatrix}. \] It coincides with the geodesic distance matrix of the tree. Its split decomposition is easily seen to be \[D=D_{S_{12}}+D_{S_{13}}+D_{S_{14}}+D_{S_{23}}+D_{S_{34}}=D_{2|134}+D_{23|14}+D_{1|234}+D_{3|124}+D_{4|123}.\] The response matrix of the dual network multiplied by $-1$ is \[ \begin{pmatrix} -3& 1& 1& 1 \\ 1& -2& 1& 0\\ 1& 1& -3 & 1 \\ 1&0&1&-2 \end{pmatrix}. \] Up to sign it coincides with the Laplacian matrix of the dual graph, since there are no internal vertices. Our correspondence between the splits and pairs of numbers is given in the following table: \begin{center}\begin{tabular}{|c|c|c|c|c|} $(\bar1\bar2)$& $(\bar1\bar3)$& $(\bar1\bar4)$& $(\bar2\bar3)$&$ (\bar3\bar4)$ \\ \hline $(2|134)$ & $(14|23)$&$ (1|234)$&$(3|124)$&$(4|123)$ \\ \end{tabular}\end{center} It matches the coefficients in the split decomposition of $D$ with the coefficients of the response matrix of the dual network.
\end{example} \section{Reconstruction of network topology} \label{sec:rec} As we mentioned in the introduction, one of the important problems in applied mathematics can be formulated as follows: \begin{problem} \label{black-box} Suppose we are given a matrix $D=(d_{ij})$ whose entries are the distances between $n$ terminal nodes of an unknown weighted graph $G$. It is required to recover the graph $G$ and the edge weights which are consistent with the given matrix. \end{problem} If the terminal nodes are the boundary vertices of a circular electrical network $\mathcal E $ and $D=R_{\mathcal E}$, then due to Proposition \ref{th: about inverse resp} we conclude that Problem \ref{black-box} can be identified with the {\it black box problem}, see \cite{CIW}. It is also known as the discrete Calderón problem, or the discrete inverse electrical impedance tomography problem. If the graph $G$ is an unknown tree $T$ and the entries of $D_T=(d_{ij})$ are equal to the weights of the paths between vertices of $T$, then Problem \ref{black-box} can be identified with the minimal tree reconstruction problem, which plays an important role in phylogenetics \cite{HRS}. \begin{definition} We will call a matrix $D=(d_{ij})$ {\it tree realizable} if there is a tree $T$ such that \begin{itemize} \item $D=D_T$, i.e. $d_{ij}$ are equal to the weights of the paths between terminal nodes of $T$; \item the set of terminal nodes contains all leaves of $T$; \item there are no vertices of degree $2.$ \end{itemize} \end{definition} \begin{theorem} \cite{CR}, \cite{HY} \label{minimaltree} If a matrix $D$ is tree realizable, then there is a unique tree $T$ such that $ D=D_T.$ \end{theorem} It is not difficult to see that if the graph of a circular electrical network $\mathcal E $ is a tree $T$ and its boundary nodes contain all the leaves of the tree, then $R_{\mathcal E}=D_{\bar T}$, where $\bar T$ and $T$ are identical as unweighted trees and the weights of the corresponding edges are reciprocal.
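This observation, together with the split-weight formula, can be verified computationally on the worked tree example from the previous section. The sketch below (helper names are ours) assumes the tree of Figure \ref{treedaul} has two internal vertices $a$ and $b$ joined by an edge, with leaves $1,4$ attached to $a$ and $2,3$ attached to $b$ (our reading of the figure); on a tree the effective resistance is simply the sum of reciprocal conductances along the unique path:

```python
from fractions import Fraction

def tree_path_metric(edges, nodes):
    """Distance matrix between the given boundary nodes of a weighted tree;
    each edge (u, v, c) of conductance c contributes resistance 1/c."""
    adj = {}
    for u, v, c in edges:
        adj.setdefault(u, []).append((v, Fraction(1) / c))
        adj.setdefault(v, []).append((u, Fraction(1) / c))

    def dist(src, dst, prev=None):
        if src == dst:
            return Fraction(0)
        for nxt, r in adj[src]:
            if nxt != prev:
                rest = dist(nxt, dst, src)
                if rest is not None:   # dst found down this branch
                    return r + rest
        return None

    return [[dist(i, j) for j in nodes] for i in nodes]

def split_weight_matrix(d):
    """M(D)_{ij} = -(1/2)(d_{i,j} + d_{i+1,j+1} - d_{i,j+1} - d_{i+1,j}),
    indices mod n.  Off the diagonal this is minus the split weight
    omega_{ij}; on the diagonal it equals d_{i,i+1}, so rows sum to zero."""
    n = len(d)
    return [[-(d[i][j] + d[(i + 1) % n][(j + 1) % n]
               - d[i][(j + 1) % n] - d[(i + 1) % n][j]) * Fraction(1, 2)
             for j in range(n)] for i in range(n)]
```

For the example, `tree_path_metric` reproduces the matrix $D$ displayed above, and `split_weight_matrix(D)` reproduces the response matrix of the dual network, i.e. minus the matrix displayed in the example.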
This observation allows us to use in phylogenetics the algorithm \cite{L} for reconstructing an electrical network from a given resistance matrix. Our reconstruction method is different from the methods suggested in \cite{F}, \cite{FS}. \begin{definition} The median graph of a circular network $\mathcal E $ with the graph $\Gamma$ is the graph $\Gamma_M$ whose internal vertices are the midpoints of the edges of $\Gamma$, and two internal vertices are connected by an edge if the corresponding edges of the original graph $\Gamma$ are adjacent in one face. The boundary vertices of $\Gamma_M$ are defined as the intersections of the natural extensions of the edges of $\Gamma_M$ with the boundary circle. Since the interior vertices of the median graph have degree four, we can define the strands of the median graph as the paths which always go straight through any degree-four vertex. The strands naturally define a permutation $\tau(\mathcal E)$ on the set of points $\{1, \dots, 2n\}.$ \end{definition} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{strad.jpg} \caption{A star-shaped network, its median graph and the strand permutation $\tau(\mathcal E)=(14)(36)(25)$} \label{fig:triangle} \end{figure} \begin{theorem} \cite{CIW} \label{strand} A circular electrical network is defined uniquely, up to electrical transformations, by its strand permutation. \end{theorem} Denote by $A_i$ the columns of the matrix $\Omega_{ R}(\mathcal E)$ and define the column permutation $g(\mathcal E)$ as follows: $g(\mathcal E)(i)=j$ if $j$ is the minimal number such that $A_i \in \mathrm{span}(A_{i+1}, \dots, A_{j} )$, where the indices are taken modulo $2n$. \begin{theorem}\label{permutationj} \label{th:aboutstradsper} The following holds: $$g(\mathcal E)+1=\tau(\mathcal E).$$ \end{theorem} \begin{proof} This result follows from the fact that $\Omega_{ R}(\mathcal E)$ is an explicit parametrization of the Lam embedding, see \cite[Proposition 5.17]{L}.
\end{proof} \begin{definition} A circular electrical network is called minimal if the strands of its median graph have no self-intersections, any two strands intersect in at most one point, and the median graph has no loops or lenses, see Figure \ref{fig:loop}. \end{definition} \begin{figure}[h!] \centering \includegraphics[width=0.3\textwidth]{loop.jpg} \caption{A lens obtained by the intersection of the strands $ \alpha_1$ and $ \alpha_2$ } \label{fig:loop} \end{figure} Based on Theorem \ref{permutationj} we suggest the following reconstruction algorithm (see Example \ref{ex:rec-ex}): \begin{itemize} \item For a given matrix $R_{\mathcal E}$ construct the matrix $\Omega_{ R}(\mathcal E);$ \item Using $\Omega_{ R}(\mathcal E)$ calculate the strand permutation $\tau(\mathcal E)$; \item The permutation $\tau(\mathcal E)$ defines a strand diagram, which can be transformed into the median graph of a \textit{minimal} circular electrical network $\mathcal E$ using the procedure described in \cite{CIW}. \item From the median graph we recover the network $\mathcal E$ as in \cite{CIW} or \cite{F}. \end{itemize} \begin{theorem} \cite{CIW}\label{electrominim} Any circular electrical network is equivalent to a minimal network. Any two minimal circular electrical networks which share the same response matrix, and hence the same effective resistance matrix, can be converted to each other by star-triangle transformations alone. As a consequence, any two equivalent minimal circular electrical networks have the same number of edges, which is at most $\frac{n(n-1)}{2}$. If two minimal circular electrical networks are equivalent, they have the same strand permutation. \end{theorem} Informally, the theorem says that any network can be converted into a minimal network by removing all loops and internal vertices of degree $1$ and by reducing all parallel and series connections. Let us compare the last result with Theorem \ref{minimaltree}.
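The first two steps of the algorithm rest on Theorem \ref{permutationj}, and the rank tests involved are elementary linear algebra. The following sketch (our illustration, with hypothetical function names; it is not the implementation of \cite{L} or \cite{CIW}) computes the column permutation $g(\mathcal E)$ exactly over the rationals and reads off $\tau(\mathcal E)=g(\mathcal E)+1$ cyclically.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of integer vectors via exact Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def column_permutation(matrix):
    """g(i) = minimal j (cyclically) with A_i in span(A_{i+1}, ..., A_j)."""
    m = len(matrix[0])                      # number of columns, 2n
    cols = [[row[j] for row in matrix] for j in range(m)]
    g = []
    for i in range(m):
        span = []
        for step in range(1, m):
            j = (i + step) % m
            span.append(cols[j])
            # A_i lies in the span iff adding it does not raise the rank.
            if rank(span + [cols[i]]) == rank(span):
                g.append(j + 1)             # 1-based, one-window notation
                break
    return g
```

On the matrix $\Omega'_D$ of Example \ref{ex:rec-ex} this returns $g=[4\,6\,5\,7\,8\,2\,1\,3]$, and adding $1$ modulo $2n$ gives $\tau(\mathcal E)=(15)(27)(36)(48)$, in agreement with the example.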
\begin{theorem} \label{tomin} Let $\mathcal E$ be a minimal electrical network whose graph is a tree $T$; then such a tree is unique. Let $\mathcal{E}(T, \omega)$ be a circular planar electrical network on a tree $T$ such that all its leaves are nodes and there are no inner vertices of degree $2$; then $\mathcal{E}(T, \omega)$ is minimal. \end{theorem} \begin{proof} Indeed, it is easy to see that $R_T$ is tree realizable by $\bar T$. Therefore, $T$ is unique by Theorem \ref{minimaltree}. This proves the first statement. To prove the second statement, it is enough to verify that the median graph $T_M$ of $\mathcal{E}(T, \omega)$ has no lenses. Indeed, each strand of $T_M$ corresponds to a simple path between nodes in the original graph, see Figure \ref{fig:treepath}. Since $T$ has no inner vertices of degree $2$, these paths can intersect at most once. Therefore, $T_M$ has no lenses and $\mathcal{E}(T, \omega)$ is minimal. \end{proof} \begin{figure}[h!] \center \includegraphics[width=100mm]{pathtree.jpg} \caption{On the left: a path which corresponds to a strand; on the right: the strands around a node of degree $2$} \label{fig:treepath} \end{figure} We propose the following algorithm for the reconstruction of the minimal tree for a given tree metric $D_T$, based on Theorem \ref{tomin}: \begin{itemize} \item Do all steps of the algorithm described above to obtain a minimal network $\mathcal E$ such that $R_{\mathcal E}=D_T$; \item Transform $\mathcal E$ into a minimal tree by a sequence of star-triangle transformations. \end{itemize} Heuristically, by (\cite{Bu}, Theorem 1), we expect that the second step can be performed monotonically, converting all triangles into stars. \begin{remark} A more advanced technique, called the chamber ansatz, when applied to $\Omega_{ R}(\mathcal E)$ gives an algorithm for recovering not only the topology of the network but the weights of the edges as well. This method is described in \cite{Ka}.
\end{remark} We will illustrate our algorithm by an example. \begin{figure}[H] \centering \includegraphics[width=1.2\textwidth]{rectop.jpg} \caption{ An example of a reconstruction of a network topology} \label{fig:recon} \end{figure} \begin{example} \label{ex:rec-ex} Consider a dissimilarity matrix \[D= \begin{pmatrix} 0& 3 & 3& 2 \\ 3& 0& 2& 3\\ 3& 2& 0& 3 \\ 2&3&3&0 \end{pmatrix} \] Then the matrix $\Omega'_{ D}$ has the form \[\Omega'_D= \begin{pmatrix} 1& 3 & 1& 1 & 0 & -1& 0 &1 \\ 0& 1 & 1& 2 & 1 & 1& 0 &0\\ 0& -1 & 0& 1 & 1 & 3& 1 &1 \end{pmatrix} \] and satisfies Theorem \ref{th: dual}; therefore $\Omega'_{ D}=\Omega'_R(\mathcal E)$ for a network $\mathcal E \in E_4.$ By direct computation we verify that $g(\mathcal E)= [4\,6\,5\,7\,8\,2\,1\,3]$ in the one-window notation used in \cite{L}; therefore the strand permutation is $\tau(\mathcal E)=(15)(27)(36)(48).$ This strand permutation defines a minimal network as shown in Figure \ref{fig:recon}, which can be transformed into a tree by applying one star-triangle transformation. \end{example} \begin{thebibliography}{9999999} \bibitem{BD} H.-J. Bandelt, A. Dress, Split decomposition: a new and useful approach to phylogenetic analysis of distance data. Molecular Phylogenetics and Evolution, Vol. 1, pp. 242-252 (1992). \bibitem{BD1} H.-J. Bandelt, A. Dress, A canonical decomposition theory for metrics on a finite set. Advances in Mathematics, Vol. 92, No. 1, pp. 47-105 (1992). \bibitem{Bu} P. Buneman, A note on the metric properties of trees. J. Combin. Theory Ser. B, Vol. 17, No. 1, pp. 48-50 (1974). \bibitem{BGGK} B. Bychkov, V. Gorbounov, L. Guterman, A. Kazakov, Symplectic geometry of electrical networks. Journal of Geometry and Physics, Vol. 207, 105323 (2025). \bibitem{BGKT} B. Bychkov, V. Gorbounov, A. Kazakov, D.
Talalaev, Electrical networks, Lagrangian Grassmannians, and symplectic groups. Moscow Mathematical Journal, Vol. 23, No. 2, pp. 133-167 (2023). \bibitem{BGK} B. Bychkov, L. Guterman, A. Kazakov, Electrical networks via circular minors, in preparation. \bibitem{CGS} S. Chepuri, T. George, D. E. Speyer, Electrical networks and Lagrangian Grassmannians, https://arxiv.org/abs/2106.15418 (2021). \bibitem{CR} J. C. Culberson, P. Rudnicki, A fast algorithm for constructing trees from distance matrices. Information Processing Letters, Vol. 30, No. 4, pp. 215-220 (1989). \bibitem{CIM} E. B. Curtis, D. Ingerman, J. A. Morrow, Circular planar graphs and resistor networks. Linear Algebra and its Applications, Vol. 283, No. 1-3, pp. 115-150 (1998). \bibitem{CIW} E. B. Curtis, J. A. Morrow, Inverse problems for electrical networks. World Scientific, Vol. 13 (2000). \bibitem{GandC} M. Deza, M. Laurent, Geometry of Cuts and Metrics. Springer, Berlin, https://doi.org/10.1007/978-3-642-04295-9. \bibitem{CdV} Y. Colin de Verdi\`ere, R\'eseaux \'electriques planaires. I. Comment. Math. Helv., Vol. 69, No. 3, pp. 351-374 (1994). \bibitem{DF} S. Devadoss, S. Forcey, Compactifications of phylogenetic systems and electrical networks, https://doi.org/10.48550/arXiv.2408.03431. \bibitem{DP} S. L. Devadoss, S. Petti, A space of phylogenetic networks. SIAM Journal on Applied Algebra and Geometry, Vol. 1, No. 1, pp. 683-705 (2017). \bibitem{F} S. Forcey, Circular planar electrical networks, split systems, and phylogenetic networks, https://doi.org/10.48550/arXiv.2108.00550. \bibitem{FS} S. Forcey, D. Scalzo, Phylogenetic networks as circuits with resistance distance. Front. Genet., Vol. 11, 586664 (2020), https://doi.org/10.3389/fgene.2020.586664. \bibitem{HY} S. L. Hakimi, S. S. Yau, Distance matrix of a graph and its realizability. Quarterly of Applied Mathematics, Vol. 22, No. 4, pp. 305-317 (1965). \bibitem{HRS} D. H. Huson, R. Rupp, C.
Scornavacca, Phylogenetic Networks: Concepts, Algorithms and Applications. Cambridge University Press (2010). \bibitem{Ka} A. A. Kazakov, Inverse problems related to electrical networks and the geometry of non-negative Grassmannians. arXiv preprint arXiv:2502.16710 (2025). \bibitem{MTT} G. Kirchhoff, On the solution of the equations obtained from the investigation of the linear distribution of galvanic currents. IRE Transactions on Circuit Theory, Vol. 5, No. 1, pp. 4-7 (1958). \bibitem{KW 2011} R. W. Kenyon, D. B. Wilson, Boundary partitions in trees and dimers. Trans. Amer. Math. Soc., Vol. 363, pp. 1325-1364 (2011). \bibitem{Kl} D. J. Klein, M. Randić, Resistance distance. J. Math. Chem., Vol. 12, pp. 81-95 (1993), https://doi.org/10.1007/BF01164627. \bibitem{K} D. Klein, H. Zhu, Distances and volumina for graphs. Journal of Mathematical Chemistry, Vol. 23, pp. 179-195 (1998), https://doi.org/10.1023/A:1019108905697. \bibitem{HKP} A. Kleinman, M. Harel, L. Pachter, Affine and projective tree metric theorems. Ann. Comb., Vol. 17, pp. 205-228 (2013), https://doi.org/10.1007/s00026-012-0173-2. \bibitem{L} T. Lam, Electroid varieties and a compactification of the space of electrical networks. Adv. Math., Vol. 338, pp. 549-600 (2018). \bibitem{L3} T. Lam, Totally nonnegative Grassmannian and Grassmann polytopes. arXiv preprint arXiv:1506.00603 (2015). \bibitem{MS} M. A. Steel, Phylogeny: Discrete and Random Processes in Evolution (2016), https://api.semanticscholar.org/CorpusID:31002469. \bibitem{SS} D. Speyer, B. Sturmfels, The tropical Grassmannian. Adv. Geom., Vol. 4, pp. 389-411 (2004). \bibitem{Wa} D. G. Wagner, Combinatorics of electrical networks. Lecture Notes, Dept. of C and O, University of Waterloo (2009). \bibitem{Z} A. Zelevinsky, What is \dots\ a cluster algebra? AMS Notices, Vol. 54, No. 11, pp. 1494-1495 (2007). \end{thebibliography} \begin{remark} Almost all theorems in our work can be extended to connected cactus networks without any changes. In particular, we obtain that the effective resistances of a connected cactus network give rise to a Kalmanson pseudometric. A Kalmanson pseudometric $D$ can be identified with the effective resistances of a connected cactus network if and only if: \begin{itemize} \item $\Omega_D \in Gr_{\geq 0}(n-1, 2n);$ \item the circular minors of the matrix $M(D)$ are non-negative after multiplying by $(-1)^k$. \end{itemize} Analyzing this, we conclude that for Kalmanson metrics the non-degeneracy conditions in Theorem \ref{th: dual} and Theorem \ref{th dual2} are automatically satisfied, since $d_{lm}>0$ for all $l\neq m$. \end{remark} \end{document}
\documentclass{article} \usepackage{amssymb,latexsym,amsmath,amsthm,amsfonts,graphics} \usepackage{graphicx} \graphicspath{ {Figures/} } \usepackage{caption} \usepackage{subcaption} \usepackage[rightcaption]{sidecap} \usepackage{color} \usepackage{lineno} \usepackage{multirow} \usepackage{epstopdf} \usepackage{rotating} \usepackage{cite} \usepackage[a4paper, total={6.8in, 9in}]{geometry} \usepackage{hyperref} \usepackage{tikz} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{dfn}{Definition}[section] \newtheorem{ex}{Example}[section] \newtheorem{conj}{Conjecture}[section] \newtheorem{rem}{Remark}[section] \setcounter{MaxMatrixCols}{10} \newcommand{\marginlabel}[1]{\mbox{}\marginpar{\raggedleft\hspace{0pt}#1}} \newcommand{\h}{\mbox{$\cal H$}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\Field}{\mathbb{F}} \newcommand{\RPlus}{\Real^{+}} \captionsetup[figure]{name={Figure},labelsep=period} \captionsetup[table]{name={Table},labelsep=period} \makeatletter \def\ps@pprintTitle{ \let\@oddhead\@empty \let\@evenhead\@empty \def\@oddfoot{\centerline{\thepage}} \let\@evenfoot\@oddfoot} \makeatother \begin{document} \begin{center} {\bf {\Large Some Necessary and Sufficient Conditions for Diophantine Graphs}}\\ \end{center} \begin{center} { \bf M. A. Seoud*$^3$, \ A. Elsonbaty*$^2$, \ A. Nasr*$^1$, \ M. Anwar*$^4$} \vspace{3mm}\\ *Department of Mathematics, Faculty of Science, Ain Shams University, 11566, Abbassia, Cairo, Egypt. 
\vspace{3mm}\\ e-mails: $^1$ \ \href{mailto:[email protected]}{\url{[email protected]}}, $^2$ \ \href{mailto:[email protected]}{\url{[email protected]}},\\ \hspace{0.9cm}$^3$ \ \href{mailto:[email protected]}{\url{[email protected]}},\hspace{0.2cm} $^4$ \ \href{mailto:[email protected]}{\url{[email protected]}}, \end{center} \begin{center} MSC code: 05A10, 05C07, 05C78, 11A05, 11A25, 11B75, 11D04, 11D88. \end{center} \begin{abstract} A linear Diophantine equation $ax+by=n$ is solvable if and only if $\gcd(a,b)$ divides $n$. A graph $G$ of order $n$ is called Diophantine if there exists a labeling function $f$ of vertices such that $\gcd(f(u),f(v))$ divides $n$ for every two adjacent vertices $u,v$ in $G$. In this work, maximal Diophantine graphs on $n$ vertices, $D_n$, are defined, studied and generalized. The independence number, the number of vertices with full degree and the clique number of $D_n$ are computed. Each of these quantities is the basis of a necessary condition for the existence of such a labeling. \end{abstract} \begin{flushleft} \textbf{Keywords}: Diophantine graph, Maximal Diophantine graph, labeling isomorphism, $\gamma$-labeled graph. \end{flushleft} \section{Introduction} \hspace{0.5cm} Throughout this paper, a graph $G=(V, E)$ is a finite simple undirected graph with $|V|$ vertices and $|E|$ edges, where $V=V(G)$ is the vertex set, $E=E(G)$ is the edge set, $|V|$ is called the order of the graph $G$ and $|E|$ is called the size of the graph $G$. In general, $|X|$ denotes the cardinality of a set $X$. $\delta(G)$ denotes the minimum degree of the vertices in a graph $G$. A set of vertices $S$ of a graph $G$ is said to be an independent set or a free set if for all $u,v\in S$, $u,v$ are nonadjacent in $G$. The independence number, denoted by $\alpha(G)$, is the maximum order of an independent set of vertices of a graph $G$.
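For small graphs, the independence number just defined can be computed directly from the definition; the following brute-force sketch (ours, purely illustrative, and exponential in $|V|$) returns $\alpha(G)$.

```python
from itertools import combinations

def independence_number(vertices, edges):
    """alpha(G): maximum size of a set of pairwise nonadjacent vertices."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):          # try largest first
        for subset in combinations(vertices, size):
            # subset is independent iff no pair of its vertices is an edge
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return size
    return 0

# The cycle C_5 has independence number 2.
alpha_c5 = independence_number([1, 2, 3, 4, 5],
                               [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)])
```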
The operation of adding an edge $e=uv$ to a graph $G$ joining the vertices $u,v$ yields a new graph with the same vertex set $V(G)$ and edge set $E(G)\cup\{uv\}$, which is denoted by $G+\{uv\}$. The operation of deleting an edge $e=uv$ from a graph $G$ removes only that edge; the resulting graph is denoted by $G-\{uv\}$. A spanning subgraph of a graph $G$ is a subgraph of $G$ obtained by deleting edges only; adding edges to a graph $G$ yields a spanning supergraph of $G$. The join of two graphs $G$ and $H$ is denoted by $G+H$; it has vertex set $V(G+H)= V(G)\cup V(H)$ and edge set $E(G+H)=E(G)\cup E(H)\cup\{uv: u\in V(G) \ \mbox{and} \ v\in V(H)\}$. $K_n,\overline{K_n}$ and $C_n$ denote the complete graph, the null graph and the cycle graph of order $n$, respectively. We follow terminology and notation in graph theory as in A. Bickle \cite{Bickle}, J. L. Gross, J. Yellen and P. Zhang \cite{G-Y-Z}, F. Harary \cite{Harary} and K. H. Rosen \cite{Rosen2}. The concept of prime labeling was introduced by R. Entringer and was discussed in a paper by A. Tout \cite{Tout}. A graph $G$ is called a prime graph if there exists a bijective map $f:V\rightarrow \{1, 2, \dots, n\}$ such that for all $uv\in E$, $(f(u),f(v))=1$. Some authors investigated algorithms for prime labeling in \cite{sonbaty}, and necessary and sufficient conditions are studied in \cite{Seoud1}, \cite{Seoud-Y}. The notion of Diophantine labeling is an extension of that of prime labeling. In this paper, we give a brief summary of some definitions and some results pertaining to Diophantine graphs. A generalization encompassing prime graphs, Diophantine graphs and another type of graph labeling is introduced and discussed.
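The definition of a prime graph can be checked exhaustively for small orders. The sketch below (ours; it is not one of the algorithms of \cite{sonbaty}) simply searches all $n!$ bijections, so it is only practical for small $n$.

```python
from itertools import permutations
from math import gcd

def is_prime_graph(n, edges):
    """True iff some bijection f: V -> {1, ..., n} has gcd(f(u), f(v)) = 1
    on every edge.  Vertices are assumed to be 0, 1, ..., n-1."""
    for labels in permutations(range(1, n + 1)):
        # labels[v] is the label assigned to vertex v
        if all(gcd(labels[u], labels[v]) == 1 for u, v in edges):
            return True
    return False
```

For instance, $C_4$ is prime (label the cycle $1,2,3,4$ in order), while $K_4$ is not, since among $\{1,2,3,4\}$ the labels $2$ and $4$ are never coprime.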
In maximal Diophantine graphs, an arithmetic function is introduced to calculate the number of vertices with full degree and the order of the maximal clique (the maximal complete subgraph), the independence number is computed, and necessary and sufficient conditions are provided using these bounds. Moreover, an explicit formula for a vertex with minimum degree and minimum label is proved. Furthermore, a new perspective on degree sequences for establishing necessary conditions is presented. Relevant definitions and notations from number theory are mentioned; we follow the basic definitions and notation of number theory as in T. M. Apostol \cite{Apostol} and D. Burton \cite{Burton}. This manuscript is structured as follows. Section 2 provides some results on $\gamma$-labelings. Section 3 is partitioned into three subsections, each presenting some results related to maximal Diophantine graphs. Subsection 3.1 discusses some basic bounds and necessary and sufficient conditions for maximal Diophantine graphs. Subsections 3.2 and 3.3 provide some necessary conditions and explore properties of the minimum degree and the degree sequence in maximal Diophantine graphs. Section 4 includes some examples of non-Diophantine graphs to explain the relation among these necessary conditions. \begin{dfn}\label{dfn2}\cite{Nasr} Let $G$ be a graph with $n$ vertices. The graph $G$ is called a Diophantine graph if there exists a bijective map $f:V\rightarrow \{1, 2, \dots, n\}$ such that for all $uv\in E$, $(f(u),f(v))\mid n$. Such a map $f$ is called a Diophantine labeling of $G$. A maximal Diophantine graph with $n$ vertices, denoted by $(D_n,f)$, is a Diophantine graph such that adding any new edge yields a non-Diophantine graph. If there is no ambiguity, we drop $f$ from $(D_n,f)$ and write simply $D_n$. \end{dfn} Clearly, if a graph $G$ is Diophantine, then $|E(G)|\leq|E(D_n)|$. A formula that computes the number of edges of $D_n$ can be found in \cite{Nasr}.
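Definition \ref{dfn2} makes $D_n$ straightforward to generate: fixing the identity labeling of $\{1,\dots,n\}$, the pair $uv$ is an edge exactly when $(u,v)\mid n$. A small sketch (ours, with hypothetical function names):

```python
from math import gcd
from itertools import combinations

def maximal_diophantine_edges(n):
    """Edge set of D_n on the label set {1, ..., n}:
    uv is an edge iff gcd(u, v) divides n."""
    return [(u, v) for u, v in combinations(range(1, n + 1), 2)
            if n % gcd(u, v) == 0]

def full_degree_labels(n):
    """Labels adjacent to every other label in D_n (degree n - 1)."""
    deg = {u: 0 for u in range(1, n + 1)}
    for u, v in maximal_diophantine_edges(n):
        deg[u] += 1
        deg[v] += 1
    return [u for u, d in deg.items() if d == n - 1]
```

For $n=9$ this yields $30$ edges and the full degree labels $1,3,5,7,9$, matching the graph $D_9$ of Figure \ref{figure0}.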
Some maximal Diophantine graphs are given in the next example. \begin{ex} The following three graphs are examples of maximal Diophantine graphs. \begin{figure*}[h!] \centering \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v9) at (0,4) {$9$}; \node (v1) at (3,2.5) {$1$}; \node (v7) at (3.7,0) {$7$}; \node (v5) at (-3,2.5) {$5$}; \node (v3) at (-3.7,0) {$3$}; \node (v2)[circle,fill=red!20] at (-3,-2.5) {$2$}; \node (v4)[circle,fill=red!20] at (-1,-3) {$4$}; \node (v6)[circle,fill=red!20] at (1,-3) {$6$}; \node (v8)[circle,fill=red!20] at (3,-2.5) {$8$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v3) -- (v2); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v3) -- (v8); \draw (v3) -- (v9); \draw (v5) -- (v2); \draw (v5) -- (v4); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v7) -- (v2); \draw (v7) -- (v4); \draw (v7) -- (v6); \draw (v7) -- (v8); \draw (v7) -- (v9); \draw (v9) -- (v2); \draw (v9) -- (v4); \draw (v9) -- (v6); \draw (v9) -- (v8); \end{tikzpicture}\caption{Graph $D_9$} \end{subfigure} ~~~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v4) at (3.5,0) {$4$}; \node (v1) at (3.7,2) {$1$}; \node (v2) at (2.5,4) {$2$}; \node (v10) at (0,4.9) {$10$}; \node (v7) at (-2.5,4) {$7$}; \node (v5) at (-3.7,2) {$5$}; \node (v8) at (-3.5,0) {$8$}; \node (v3)[circle,fill=red!20] at (0,-2.5) {$3$}; \node (v6)[circle,fill=red!20] at (-2,-2) {$6$}; \node (v9)[circle,fill=red!20] at (2,-2) {$9$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v5) -- (v2); \draw (v5) 
-- (v3); \draw (v5) -- (v4); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v5) -- (v10); \draw (v7) -- (v2); \draw (v7) -- (v3); \draw (v7) -- (v4); \draw (v7) -- (v6); \draw (v7) -- (v8); \draw (v7) -- (v9); \draw (v7) -- (v10); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v6); \draw (v2) -- (v8); \draw (v2) -- (v9); \draw (v2) -- (v10); \draw (v10) -- (v3); \draw (v10) -- (v4); \draw (v10) -- (v6); \draw (v10) -- (v8); \draw (v10) -- (v9); \draw (v4) -- (v3); \draw (v4) -- (v6); \draw (v4) -- (v9); \draw (v8) -- (v3); \draw (v8) -- (v6); \draw (v8) -- (v9); \end{tikzpicture}\caption{Graph $D_{10}$} \end{subfigure} ~~ \begin{subfigure}{0.25\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v9) at (3.7,0) {$9$}; \node (v1) at (3,2.5) {$1$}; \node (v11) at (1.5,4) {$11$}; \node (v7) at (-1.5,4) {$7$}; \node (v5) at (-3,2.5) {$5$}; \node (v3) at (-3.7,0) {$3$}; \node (v2)[circle,fill=red!20] at (-3,-2.5) {$2$}; \node (v4)[circle,fill=red!20] at (-1.5,-3) {$4$}; \node (v6)[circle,fill=red!20] at (0,-3.5) {$6$}; \node (v8)[circle,fill=red!20] at (1.5,-3) {$8$}; \node (v10)[circle,fill=red!20] at (3,-2.5) {$10$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v1) -- (v11); \draw (v11) -- (v2); \draw (v11) -- (v3); \draw (v11) -- (v4); \draw (v11) -- (v5); \draw (v11) -- (v6); \draw (v11) -- (v7); \draw (v11) -- (v8); \draw (v11) -- (v9); \draw (v11) -- (v10); \draw (v7) -- (v2); \draw (v7) -- (v3); \draw (v7) -- (v4); \draw (v7) -- (v5); \draw (v7) -- (v6); \draw (v7) -- (v8); \draw (v7) -- (v9); \draw (v7) -- (v10); \draw (v5) -- (v2); \draw (v5) -- (v3); \draw (v5) -- (v4); \draw (v5) -- (v6); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v3) -- (v2); \draw (v3) -- (v4); \draw (v3) -- (v8); \draw (v3) -- (v10); \draw 
(v9) -- (v2); \draw (v9) -- (v4); \draw (v9) -- (v8); \draw (v9) -- (v10); \end{tikzpicture} \caption{Graph $D_{11}$} \end{subfigure}\caption{Some maximal Diophantine graphs $D_9$, $D_{10}$ and $D_{11}$}\label{figure0} \end{figure*} \end{ex} \begin{dfn}\cite{Nasr} For a given integer $n\in \Z^+$ and a prime $p\in \mathbb{P}$, the successor of the $p$-adic valuation is denoted by $\acute{v}_p(n):=v_p(n)+1$, where $v_p(n)$ is the $p$-adic valuation, $\Z^+$ is the set of positive integers and $\mathbb{P}$ is the set of prime numbers. The number $p^{\acute{v}_p(n)}$ is called the critical prime power number with respect to $p,n$. \end{dfn} In the rest of this paper, the following arithmetic functions $\pi,\omega$ and $\tau$ will be used (see \cite{Apostol}, \cite{Burton}): for $n\in \Z^+$, \begin{equation*} \pi(n):=\big|\{p\in\mathbb{P}: 2\leq p\leq n\}\big|, \quad \omega(n):=\big|\{p\in\mathbb{P}: p\mid n, \ 2\leq p\leq n\}\big|, \quad\tau(n):=\big|\{d\in \Z^+ : d\mid n\}\big|. \end{equation*} \begin{lem}\label{lem1}\cite{Nasr} Suppose $D_n$ is a maximal Diophantine graph of order $n$. For every $u,v\in V(D_n)$, $uv\notin E(D_n)$ if and only if there exists $p\in\mathbb{P}$ such that $$f(u), f(v)\in M_{p^{\acute{v}_{p}(n)}}:=\left\{kp^{\acute{v}_{p}(n)}: \ k=1,2,\dots,\left\lfloor\frac{n}{p^{\acute{v}_{p}(n)}}\right\rfloor\right\}.$$ \end{lem} \begin{thm}\label{lem2}\cite{Nasr} Suppose $D_n$ is a maximal Diophantine graph of order $n$. For every $u\in V(D_n)$, $$\deg(u)=n-1\quad\mbox{if and only if}\quad f(u)\mid n\quad\mbox{\textbf{or}}\quad \frac{n}{2}<f(u)=p^{\acute{v}_p(n)}<n,$$ where $p\in\mathbb{P}$; the exclusive \textbf{or} is typed in bold, while the inclusive or is as usual. \end{thm} The reduced label $f^*(u)$ of a vertex $u$ in a labeled graph $G$ with $n$ vertices is defined as $f^*(u):=\frac{f(u)}{(f(u), n)}.$ \begin{lem}\label{lem3}\cite{Nasr} Suppose $D_n$ is a maximal Diophantine graph of order $n$ and $u,v\in V(D_n)$.
If $f(u)\mid f(v)$, then $N(u)\supseteq N(v)$, where $N(s)$ denotes the neighborhood of $s$, that is, the set of all vertices in $D_n$ adjacent to the vertex $s$. \end{lem} \begin{thm}\label{thm_eq-deq2}\cite{Nasr} Suppose $D_n$ is a maximal Diophantine graph of order $n$. Let $u,v\in V(D_n)$ such that $f(u)\mid f(v)$, $f(v)$ is not a prime power number and $f^*(u)>1$. If $\deg(u)=\deg(v)$, then $f^*(u),f^*(v)$ have the same prime factors. \end{thm} \begin{cor}\label{cor1}\cite{Nasr} Suppose $D_n$ is a maximal Diophantine graph of order $n$ and $u,v\in V(D_n)$ such that $f(v)=tf(u)$ for some $t\geq1$. If $t\mid n$ and $(t, f(u))=1$, then $\deg(u)=\deg(v)$. \end{cor} \section{$\gamma$-Labelings of Graphs } \hspace{0.cm}The following definition is a generalization of Definition \ref{dfn2}. \begin{dfn}\label{dfn3} Let $G$ be a graph with $n$ vertices. The graph $G$ is called a $\gamma$-labeled graph if there exists a bijective map $f:V\rightarrow \{x_1, x_2, \dots, x_n\}$ such that $f(u),f(v)$ satisfy some conditions, where $\{x_1, x_2, \dots, x_n\}$ is any set of $n$ elements. Such a map $f$ is called a $\gamma$-labeling. A maximal $\gamma$-labeled graph with $n$ vertices, denoted by $(\Gamma_n,f)$, is a $\gamma$-labeled graph in which for all $uv\notin E(\Gamma_n)$, $\Gamma_n+\{uv\}$ is not a $\gamma$-labeled graph. \end{dfn} The reader should not confuse the notion of $\gamma$-labeling provided in Definition \ref{dfn3} with the concept of $\alpha$-valuation presented in the seminal work of A. Rosa \cite{Rosa}. \begin{dfn}\cite{S-C-L} Let $(G_1,f_1),(G_2,f_2)$ be two labeled graphs, where $f_1:V(G_1)\rightarrow \{x_1, x_2, \dots, x_n\}$ and $f_2:V(G_2)\rightarrow \{x_1, x_2, \dots, x_n\}$ are two bijective maps.
The labeled graphs $(G_1,f_1),(G_2,f_2)$ are said to be labeling isomorphic, denoted by $(G_1,f_1)\cong_l (G_2,f_2)$, if there exists a bijective map $\varphi:V(G_1)\rightarrow V(G_2)$ such that for all $u,v\in V(G_1)$, $uv\in E(G_1)$ if and only if $\varphi(u)\varphi(v)\in E(G_2)$ and $f_1(u)=\big(f_2\circ\varphi\big)(u).$ \end{dfn} \begin{thm}\label{thm-equivalance} A maximal $\gamma$-labeled graph $\Gamma_n$ is unique up to labeling isomorphism. \end{thm} \begin{proof} Suppose $(\Gamma_n,f_1)$ and $(\acute{\Gamma}_n,f_2)$ are two maximal $\gamma$-labeled graphs of order $n$, where the two maps $$f_1:V(\Gamma_n)\rightarrow \{x_1, x_2, \dots, x_n\}\quad \mbox{and}\quad f_2:V(\acute{\Gamma}_n)\rightarrow \{x_1, x_2, \dots, x_n\}$$ are $\gamma$-labelings of $\Gamma_n$ and $\acute{\Gamma}_n$ satisfying certain conditions, say condition $C$. Define a map $$\varphi:V(\Gamma_n)\rightarrow V(\acute{\Gamma}_n)\quad \mbox{by}\quad \varphi(u)=f_2^{-1}(f_1(u)).$$ The map $\varphi$ is one to one: if $\varphi(u)=\varphi(v)$ for some $u,v\in V(\Gamma_n)$, then $f_2^{-1}(f_1(u))=f_2^{-1}(f_1(v))$; accordingly, $f_1(u)=f_1(v)$ and consequently $u=v$. The map $\varphi$ is onto, since $\varphi$ is one to one and $|V(\Gamma_n)|=|V(\acute{\Gamma}_n)|=n$. Moreover, $\varphi$ preserves adjacency and non-adjacency: let $u,v\in V(\Gamma_n)$ with $uv\in E(\Gamma_n)$; then the two labels $f_1(u),f_1(v)$ satisfy $C$, and since $f_1(u)=f_2(\varphi(u))$ and $f_1(v)=f_2(\varphi(v))$ (see Figure \ref{fig.}), the labels $f_2(\varphi(u)),f_2(\varphi(v))$ satisfy $C$; consequently, $\varphi(u)\varphi(v)\in E(\acute{\Gamma}_n)$, and the converse is similar. Finally, for every $u\in V(\Gamma_n)$, $\varphi(u)=f_2^{-1}(f_1(u))$, and therefore $f_1(u)=f_2(\varphi(u))=(f_2\circ\varphi)(u)$. Hence, the two graphs $(\Gamma_n,f_1)$ and $(\acute{\Gamma}_n,f_2)$ are labeling isomorphic. \end{proof} \begin{figure*}[h!]
\centering \begin{tikzpicture} [scale=.8,auto=center] \node (v) at (0,1.33) {$\equiv$}; \node (v1) at (0,0) {$\{x_1, x_2, \dots, x_n\}$}; \node (v2) at (-2,2) {$V(\Gamma_n)$}; \node (v3) at (2,2) {$V(\acute{\Gamma}_n)$}; \path[->] (v2)edge [align=left, below] node {$f_1$} (v1); \path[->] (v3)edge [align=left, below] node {$f_2$} (v1); \path[->] (v2)edge [align=left, above] node {$\varphi$} (v3); \end{tikzpicture} \caption{$(\Gamma_n,f_1)\cong_l (\acute{\Gamma}_n,f_2)$}\label{fig.} \end{figure*} \begin{cor}\label{thm-equivalance1} The graphs $D_n$ are unique up to labeling isomorphism. \end{cor} \begin{thm} Suppose $G$ is a graph with order $n$ and $\Gamma_n$ is the maximal $\gamma$-labeled graph with order $n$. $G$ is a $\gamma$-labeled graph if and only if $G$ is labeling isomorphic to a spanning subgraph of $\Gamma_n$. \end{thm} \begin{proof} Suppose $\Gamma_n$ is the maximal $\gamma$-labeled graph with order $n$ and a graph $G$ is a $\gamma$-labeled graph with order $n$. Then there exists a bijective map $f:V(G)\rightarrow \{x_1, x_2, \dots, x_n\}$ such that $f(u),f(v)$ satisfy certain conditions, say condition $C$. Define $$T:=\{uv:uv\notin E(G) \ \mbox{and} \ f(u),f(v) \ \mbox{satisfy} \ C\}.$$ Consequently, the spanning supergraph $G+T$ of $G$ is a $\gamma$-labeled graph of order $n$, and the set $E(G)\cup T$ is the set of all edges such that $f(u),f(v)$ satisfy $C$. Let $\acute{u}\acute{v}\notin E(G)\cup T$. Then the two labels $f(\acute{u}),f(\acute{v})$ do not satisfy $C$. Therefore, the spanning supergraph $G+(T\cup\{\acute{u}\acute{v}\})$ of $G$ is not a $\gamma$-labeled graph with a $\gamma$-labeling satisfying $C$. Consequently, $G+T$ is the maximal $\gamma$-labeled graph of order $n$. Thus, using Theorem \ref{thm-equivalance}, we have that $G+T$ is labeling isomorphic to $\Gamma_n$.
Hence, the graph $G$ is labeling isomorphic to a spanning subgraph of the maximal $\gamma$-labeled graph $\Gamma_n$.\\ Conversely, suppose $\Gamma_n$ is the maximal $\gamma$-labeled graph with order $n$ and a graph $G$ is labeling isomorphic to a spanning subgraph of the maximal $\gamma$-labeled graph $\Gamma_n$. Let $T$ be the set of deleted edges of $\Gamma_n$ such that the graph $G$ is labeling isomorphic to $\Gamma_n-T$. Then we have $$|V(G)|=|V(\Gamma_n-T)|=|V(\Gamma_n)| \quad \mbox{and} \quad V(\Gamma_n)=V(\Gamma_n-T).$$ Therefore, using the same $\gamma$-labeling of $\Gamma_n$, we have that $\Gamma_n-T$ is a $\gamma$-labeled graph. Since the graph $G$ is labeling isomorphic to $\Gamma_n-T$, the graph $G$ is a $\gamma$-labeled graph. \end{proof} \begin{cor}\label{spanning-thm} A graph $G$ of order $n$ is Diophantine if and only if $G$ is labeling isomorphic to a spanning subgraph of $D_n$. \end{cor} \section{Basic Bounds of the Maximal Diophantine Graphs $D_n$} \subsection{Some Necessary and Sufficient Conditions for $D_n$ } \hspace{0.5cm} In what follows, let $(D_n,f)$ denote the maximal Diophantine graph of order $n$ with Diophantine labeling $f$, and let $F(G)$ denote the number of full degree vertices of a graph $G$. The next two theorems present two different methods that compute the quantity $F(D_n)$. \begin{thm}\label{fulldegree2} If $p_i^{\acute{v}_{p_i}(n)}<\frac{n}{2}$, $i=1, 2, \dots, r$, then the number of full degree vertices in $D_n$ is given by \begin{equation*} F(D_n) =n-\sum_{1\leq i\leq r}\left\lfloor\frac{n}{p_i^{\acute{v}_{p_i}(n)}}\right\rfloor +\sum_{1\leq i<j\leq r}\left\lfloor\frac{n}{p_i^{\acute{v}_{p_i}(n)}p_j^{\acute{v}_{p_j}(n)}}\right\rfloor -\dots +(-1)^{r}\left\lfloor\frac{n}{\prod\limits_{1\leq i\leq r}p_i^{\acute{v}_{p_i}(n)}}\right\rfloor, \end{equation*} where $p_1, p_2, \dots, p_r$ are distinct prime numbers.
\end{thm} The proof of Theorem \ref{fulldegree2} is straightforward by applying Lemma \ref{lem1}, Theorem \ref{lem2} and the inclusion-exclusion principle (see \cite{Rosen2}). For very large $n\in \Z^+$, the above formula does not provide efficient upper and lower bounds for the quantity $F(D_n)$. There is an alternative approach to determine the quantity $F(D_n)$ using the following arithmetic function $$\gamma_x(n):=\left|\left\{p^{\acute{v}_p(n)}: p\mid n, \ x<p^{\acute{v}_p(n)}<n, \ p\in\mathbb{P}\right\}\right|,$$ where $n\in \Z^+$ and $x<n$ is a positive real number. This function is utilized for computing not only the number of vertices with full degree in $D_n$ but also the order of the maximal clique of $D_n$, as shown in Theorems \ref{fulldegree} and \ref{complete_subgraph}. Obviously, $\gamma_1(n)\leq\omega(n)$ for every $n\in \Z^+$; $\gamma_x\left(p^k\right)=0$ for every $p\in\mathbb{P}$, $k\in \Z^+$ and positive real number $x<n$; and $\gamma_m(n)=\gamma_1(n)-\gamma_1(m)$ for every $n,m\in\Z^+$ with $m<n$. \begin{thm} \label{fulldegree} The number of vertices with full degree in $D_n$ is given by \begin{equation*} F(D_n)=\tau(n) + \pi(n-1)-\pi\left(\frac{n}{2}\right) + \gamma_{\frac{n}{2}}(n). \end{equation*} In particular, if $n$ is a prime number, we have $$F(D_n)=\pi(n)-\pi\left(\frac{n}{2}\right) +1.$$ \end{thm} \begin{proof} Let $D_n$ be the maximal Diophantine graph of order $n$. Define the following three sets \begin{equation*} S_1:=\{d\in \Z^+ : d\mid n\}, \quad S_2:=\left\{p\in\mathbb{P}: \frac{n}{2} < p < n\right\}, \quad S_3:=\left\{ p^{\acute{v}_p(n)} : p\mid n, \ \frac{n}{2}< p^{\acute{v}_p(n)} < n, \ p\in\mathbb{P} \right\}.
\end{equation*} Consequently, using Theorem \ref{lem2}, one can see that $ S_1\cup S_2\cup S_3$ is the set of labels of the full degree vertices in $D_n.$ Clearly, $S_1,S_2$ and $S_3$ are mutually disjoint sets and $$|S_1|=\tau(n),\quad |S_2|=\pi(n-1)-\pi\left(\frac{n}{2}\right)\quad \mbox{and}\quad |S_3|=\gamma_{\frac{n}{2}}(n),$$ and hence $$F(D_n)= \tau(n) + \pi(n-1)-\pi\left(\frac{n}{2}\right) + \gamma_{\frac{n}{2}}(n).$$ If $n$ is a prime number, then $F(D_n)= \pi(n)-\pi\left(\frac{n}{2}\right)+1$. \end{proof} \begin{cor}\label{corVI2} Let $G$ be a graph of order $n$. If the graph $G$ is Diophantine, then $F(G)\leq F(D_n)$. \end{cor} The clique number, denoted by $Cl(G)$, is the order of the maximal clique of a graph $G$. Although $\omega(G)$ is the standard notation of the clique number, we have chosen $Cl(G)$ in this study to prevent confusion with the arithmetic function $\omega(n)$. The following theorem gives the order of the maximal clique in $D_n$. \begin{thm}\label{complete_subgraph} The clique number of $D_n$ is given by $$Cl(D_n)= \tau(n) + \pi(n) - \omega(n) + \gamma_1(n).$$ In particular, if $n$ is a prime number, we have $$Cl(D_n)=\pi(n)+1.$$ \end{thm} \begin{proof} Let $D_n$ be the maximal Diophantine graph of order $n$. Define the following three sets \begin{equation*} S_1:=\{d\in \Z^+ : d\mid n\}, \quad S_2:=\{p\in\mathbb{P}: p\nmid n, \ 1 < p < n\}, \quad S_3:=\left\{p^{\acute{v}_p(n)}: p\mid n, \ 1<p^{\acute{v}_p(n)}<n, \ p\in\mathbb{P}\right\}. \end{equation*} Therefore, any two vertices in $V(D_n)$ that are labeled by integers from the set $S_1\cup S_2\cup S_3$ are adjacent, since for any two distinct labels $\ell_1,\ell_2$, we have \begin{equation*} \begin{cases} (\ell_1, \ell_2)=1, & \mbox{if} \ \ell_1, \ell_2\in S_2\cup S_3\\ &\\ (\ell_1, \ell_2)\mid n, & \mbox{if} \ \ell_1\in S_1.
\end{cases} \end{equation*} Consequently, one can see that $ S_1\cup S_2\cup S_3$ is the set of labels of vertices that are in the maximal clique of $D_n.$ Suppose, to the contrary, that $u\in V(D_n)$ is a vertex of the maximal clique in $D_n$ such that $f(u)\notin S_1\cup S_2\cup S_3.$ Then we have $f(u)\nmid n$. Therefore, there exists a prime number $p_0$ such that $p_0^{\acute{v}_{p_0}(n)}\mid f(u)$; otherwise, for every prime number $p$, $p^{\acute{v}_p(n)}\nmid f(u)$, so we get $v_p(f(u))<\acute{v}_p(n)=v_p(n)+1$. Consequently, $v_p(f(u))\leq v_p(n)$ for every prime number $p$, so $f(u)\mid n$, which contradicts $f(u)\nmid n$. Let $\ell=p_0^{\acute{v}_{p_0}(n)}$ be a certain label. Then we have $\ell\in S_2\cup S_3$, $\ell\mid f(u)$ and $\ell\neq f(u)$. So, $(f(u),\ell)=\ell\nmid n,$ which contradicts the completeness of the maximal clique in $D_n$. Therefore, the set $S_1\cup S_2\cup S_3$ has all labels of vertices in the maximal clique of $D_n$. Since $S_1, S_2$ and $S_3$ are mutually disjoint sets and $$|S_1|=\tau(n),\quad |S_2|=\pi(n)-\omega(n)\quad \mbox{and}\quad |S_3|=\gamma_1(n),$$ we obtain $$Cl(D_n)=\tau(n) + \pi(n) - \omega(n) + \gamma_1(n).$$ If $n$ is a prime number, then $Cl(D_n)=\pi(n)+1.$ \end{proof} \begin{cor} \label{corVI3} Let $G$ be a graph of order $n$. If the graph $G$ is Diophantine, then $Cl(G)\leq Cl(D_n)$. \end{cor} \begin{rem} Let $D_n$ be the maximal Diophantine graph of order $n$. Then \begin{itemize} \item[1.] $|E(D_n)|\geq\frac{1}{2}Cl(D_n)\big(Cl(D_n)-1\big)\geq \frac{1}{2}F(D_n)\big(F(D_n)-1\big),$ \item[2.] if $D_n$ is not a complete graph, then $F(D_n)\leq\delta(D_n)$, \item[3.] for every $n\in \Z^+$, $F(D_n)\leq Cl(D_n)\leq n$. \end{itemize} \end{rem} \begin{lem} Every prime number $p\leq\frac{n}{2}$ satisfies $p\mid n$ and $p^{\acute{v}_p(n)}>\frac{n}{2}$ if and only if $D_n$ is a complete graph. \end{lem} \begin{proof} Assume every prime number $p\leq\frac{n}{2}$ satisfies $p\mid n$ and $p^{\acute{v}_p(n)}>\frac{n}{2}$.
Suppose, to the contrary, that the maximal Diophantine graph $D_n$ is not a complete graph. Then there exist $u,v\in V(D_n)$ such that $uv\notin E(D_n)$. Therefore, using Lemma \ref{lem1}, there exists a prime number $p$ such that $f(u),f(v)\in M_{p^{\acute{v}_p(n)}}$. Let $f(u)=tp^{\acute{v}_p(n)}$ and $f(v)=s p^{\acute{v}_p(n)}$ for some $1\leq t<s$. Then $p^{\acute{v}_p(n)}\leq\frac{n}{s}\leq\frac{n}{2},$ which contradicts the assumption. Hence, $D_n$ is a complete graph.\\ Conversely, let $D_n$ be a complete graph and suppose, to the contrary, that there exists a prime number $p\leq\frac{n}{2}$ such that $p\nmid n$ or $p^{\acute{v}_p(n)}<\frac{n}{2}$; the remaining case $p^{\acute{v}_p(n)}=\frac{n}{2}$ is impossible, since then $p^{\acute{v}_p(n)}\mid n$, a contradiction. Then we have the following two cases. In the case of $p\leq\frac{n}{2}$ and $p\nmid n$, we obtain $2p<n$. Then we get $(p, 2p)=p\nmid n$. Therefore, $F(D_n)<n$. In the other case of $p^{\acute{v}_p(n)}<\frac{n}{2}$, we have $(p^{\acute{v}_p(n)}, 2p^{\acute{v}_p(n)})= p^{\acute{v}_p(n)}\nmid n$. Therefore, $F(D_n)<n$. Consequently, in both cases, $D_n$ is not a complete graph, which contradicts the hypothesis. \end{proof} \begin{thm} The independence number of $D_n$ is given by $$\alpha(D_n)=\max\limits_{2\leq p\leq n}\left\lfloor\frac{n}{p^{\acute{v}_p(n)}}\right\rfloor,$$ where $p\in\mathbb{P}$. In particular, if $n$ is odd, we have $$\alpha(D_n)=\left\lfloor\frac{n}{2}\right\rfloor.$$ \end{thm} \begin{proof} Let $f(u),f(v)$ be two labels, where $u,v\in V(D_n)$. Then, using Lemma \ref{lem1}, $uv\notin E(D_n)$ if and only if there exists $p\in\mathbb{P}$ such that $f(u), f(v)\in M_{p^{\acute{v}_{p}(n)}}.$ Therefore, the set of vertices of $D_n$ with labels in $M_{p^{\acute{v}_p(n)}}$ is an independent set.
Hence, $$\alpha(D_n)=\max\limits_{2\leq p\leq n}\left|M_{p^{\acute{v}_p(n)}}\right|=\max\limits_{2\leq p\leq n}\left\lfloor\frac{n}{p^{\acute{v}_p(n)}}\right\rfloor.$$ If $n$ is odd, then the set of nonadjacent vertices in $D_n$ with labels in $M_2=\left\{2, 4, 6, \dots, 2\left\lfloor\frac{n}{2}\right\rfloor\right\}$ is a maximal independent set. Hence, $\alpha(D_n)=\left\lfloor\frac{n}{2}\right\rfloor$. \end{proof} \begin{cor} \label{corVI4} Let $G$ be a graph of order $n$. If the graph $G$ is Diophantine, then $\alpha(G)\geq\alpha(D_n)$. \end{cor} \begin{thm}(\textbf{sufficient condition})\label{thm_sufficient} Let $G$ be a graph of order $n$. If $\alpha(G)\geq n-F(D_n)$, then $G$ is a Diophantine graph. \end{thm} \begin{proof} Let $G$ be a graph with $n$ vertices such that $\alpha(G)\geq n-F(D_n)$. Let $I$ be a maximum independent set of $G$ and let $S$ be the subgraph of $G$ induced by $V(G)\setminus I$; then $S$ has $n-\alpha(G)\leq F(D_n)$ vertices. Then, using Theorem \ref{lem2}, the numbers that are either divisors of $n$ or of the form $p^{\acute{v}_{p}(n)}$, where $\frac{n}{2}<p^{\acute{v}_{p}(n)}<n$, that is, the labels of full degree vertices of $D_n$, can be used as labels for the vertices of the subgraph $S$. Therefore, the remaining labels can be assigned to the $\alpha(G)$ independent vertices of $I$. Hence, $G$ is a Diophantine graph. \end{proof} \begin{ex} The graph $G=K_3+\overline{K_4}$ satisfies the sufficient condition in Theorem \ref{thm_sufficient} as $\alpha(G)\geq 7-F(D_7)$; accordingly, the graph $G$ is a Diophantine graph as seen in Figure \ref{figure1} part (a). \begin{figure*}[h!]
\centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture} [scale=.5,auto=center,every node/.style={circle,fill=blue!20}] \node (v1) at (0,3) {$1$}; \node (v5) at (3,4) {$5$}; \node (v7) at (-3,4) {$7$}; \node (v3)[circle,fill=red!20] at (3,0) {$3$}; \node (v2)[circle,fill=red!20] at (-3,0) {$2$}; \node (v6)[circle,fill=red!20] at (1,0) {$6$}; \node (v4)[circle,fill=red!20] at (-1,0) {$4$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v5) -- (v2); \draw (v5) -- (v3); \draw (v5) -- (v4); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v7) -- (v2); \draw (v7) -- (v3); \draw (v7) -- (v4); \draw (v7) -- (v6); \end{tikzpicture}\caption{The graph $G=K_3+\overline{K_4}$} \end{subfigure} ~~~~~~~ \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture} [scale=.5,auto=center,every node/.style={circle,fill=blue!20}] \node (v1) at (1,6) {$1$}; \node (v5) at (-1,6) {$5$}; \node (v7) at (-3,4) {$7$}; \node (v3) at (3,4) {$3$}; \node (v2)[circle,fill=red!20] at (-2,2) {$2$}; \node (v6)[circle,fill=red!20] at (0,1.5) {$6$}; \node (v4)[circle,fill=red!20] at (2,2) {$4$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v5) -- (v2); \draw (v5) -- (v3); \draw (v5) -- (v4); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v7) -- (v2); \draw (v7) -- (v3); \draw (v7) -- (v4); \draw (v7) -- (v6); \draw (v3) -- (v2); \draw (v3) -- (v4); \end{tikzpicture}\caption{The maximal Diophantine graph $D_7$} \end{subfigure}\caption{}\label{figure1} \end{figure*} \end{ex} This sufficient condition is not a necessary condition. For instance, $D_7$ is a Diophantine graph. However, $D_7$ does not meet the sufficient condition since $\alpha(D_7)<7-F(D_7)$. 
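Since $D_n$ is determined by a purely arithmetic adjacency rule, the quantities discussed above can be checked by direct computation. The following Python sketch is an illustration only (the function names are ours, not part of the paper); it assumes the defining adjacency rule of the maximal Diophantine graph from \cite{Nasr}, namely that two distinct labels $a,b\in\{1,2,\dots,n\}$ are adjacent exactly when $(a,b)\mid n$, and it recovers $S_{D_n}$, $F(D_n)$, $|E(D_n)|$ and $\alpha(D_n)$ for small $n$.

```python
from math import gcd

def diophantine_edges(n):
    # Assumed edge rule of the maximal Diophantine graph D_n:
    # labels a < b are adjacent iff gcd(a, b) divides n.
    return [(a, b) for a in range(1, n + 1) for b in range(a + 1, n + 1)
            if n % gcd(a, b) == 0]

def vertex_degree_sequence(n):
    # S_{D_n} = (d_0, ..., d_{n-1}), where d_i counts vertices of degree i.
    deg = [0] * (n + 1)
    for a, b in diophantine_edges(n):
        deg[a] += 1
        deg[b] += 1
    return [deg[1:].count(i) for i in range(n)]

def alpha_Dn(n):
    # alpha(D_n) = max over primes p <= n of floor(n / p^{v'_p(n)}),
    # where v'_p(n) = v_p(n) + 1 (the independence-number theorem above).
    best = 0
    for p in range(2, n + 1):
        if any(p % d == 0 for d in range(2, p)):
            continue  # p is not prime
        vp, m = 0, n
        while m % p == 0:
            vp += 1
            m //= p
        best = max(best, n // p ** (vp + 1))
    return best

n = 7
S = vertex_degree_sequence(n)
print(S)                          # [0, 0, 0, 1, 2, 1, 3] = S_{D_7}
print(S[n - 1])                   # 3 = F(D_7), full degree means degree n-1
print(len(diophantine_edges(n)))  # 17 = |E(D_7)|
print(alpha_Dn(n))                # 3 = alpha(D_7)
```

For $n=7$ this reproduces $S_{D_7}=(0,0,0,1,2,1,3)$, $F(D_7)=3$, $|E(D_7)|=17$ and $\alpha(D_7)=3$, matching the entries of Table \ref{table1}.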
\subsection{The Minimum Degree Vertex with Minimum Label} \hspace{0.5cm} Let $V_{\delta}(G)$ denote the set of vertices with minimum degree in $G$, i.e., $V_{\delta}(G):=\{u\in V(G):\deg(u)=\delta(G)\}$. Moreover, we write $f(u_0):=\min\{f(u): u\in V_{\delta}(D_n)\},$ in which $u_0$ is the vertex in $V_{\delta}(D_n)$ with minimum label, so that $f(u_0)$ is the smallest label of a vertex in $V_{\delta}(D_n)$. \begin{rem} Let $u_0$ be the vertex in $V_{\delta}(D_n)$ with minimum label. \begin{itemize} \item[1.] $f(u_0)=1$ if and only if $D_n$ is a complete graph. \item [2.] For every $n>3$, $\delta(D_n)\geq 3$. \end{itemize} \end{rem} \begin{thm} Suppose the maximal Diophantine graph $D_n$ is not a complete graph, and consider the finite sequence $$p_1^{\acute{v}_{p_{_1}}(n)}< p_2^{\acute{v}_{p_{_2}}(n)}<\dots< p_s^{\acute{v}_{p_s}(n)}<\frac{n}{2},$$ where $p_i$, $i = 1, 2, \dots, s$ are distinct prime numbers. If the vertex $u_0\in V_{\delta}(D_n)$ has minimum label, then there exists an integer $r<s$ such that $$f(u_0)=\prod\limits_{i=1}^{r}p_i^{\acute{v}_{p_i}(n)}<n\quad \mbox{and}\quad p_{r+1}^{\acute{v}_{p_{r+1}}(n)}f(u_0)>n.$$ \end{thm} \begin{proof} Suppose the maximal Diophantine graph $D_n$ of order $n$ is not a complete graph. Let $u_0$ be the vertex in $V_{\delta}(D_n)$ with minimum label. Then $$1<f(u_0)=\min\{f(u): u\in V_{\delta}(D_n)\}<n.$$ Therefore, we have \begin{equation}\label{eqn8} f(u_0)=\prod\limits_{j=1}^{r_1}p_j^{\alpha_j} \prod\limits_{i=1}^{r_2}q_i^{\beta_i}<n, \end{equation} where $p_j,q_i$ are distinct prime numbers and $\alpha_j,\beta_i$ are non-negative integers such that for every $j=1,2,\dots, r_1$ and $i=1,2,\dots, r_2$, \begin{equation}\label{eqn12} \alpha_j\geq \acute{v}_{p_j}(n)\quad\mbox{and}\quad 0\leq \beta_i<\acute{v}_{q_i}(n). \end{equation} Clearly, the two terms of equation \eqref{eqn8} are relatively prime and $\prod\limits_{i=1}^{r_2}q_i^{\beta_i}\mid n$.
Otherwise, if $\prod\limits_{i=1}^{r_2}q_i^{\beta_i}\nmid n$, then there exists a prime number $q_i\mid f(u_0)$ such that $q_i^{\beta_i}\nmid n$. Therefore, there exists $i\in\{1,2,\dots, r_2\}$ such that $\beta_i\geq\acute{v}_{q_i}(n)$, which contradicts equation \eqref{eqn12}. Let $u\in V(D_n)$ such that $f(u)=\prod\limits_{j=1}^{r_1}p_j^{\alpha_j}$. Then, using equation \eqref{eqn8}, we get \begin{equation}\label{eqn10} f(u_0)=f(u)\prod\limits_{i=1}^{r_2}q_i^{\beta_i}. \end{equation} Therefore, using Corollary \ref{cor1} and equation \eqref{eqn10}, $$f(u)\leq f(u_0)\quad \mbox{and}\quad \deg(u)=\deg(u_0).$$ This contradicts the minimality of the label of $u_0$ unless $\beta_i=0$ for every $i$ in equation \eqref{eqn10}. Thus, \begin{equation}\label{eqn11} f(u_0)=f(u)=\prod\limits_{j=1}^{r_1}p_j^{\alpha_j}, \end{equation} where $\alpha_j\geq \acute{v}_{p_j}(n)$. Then $\alpha_j=\acute{v}_{p_j}(n)+k_j$ for some $k_j\geq 0$. Consequently, using equation \eqref{eqn11}, we get \begin{equation}\label{eqn9} f(u_0)=\prod\limits_{j=1}^{r_1}p_j^{\acute{v}_{p_j}(n)+k_j}= \prod\limits_{j=1}^{r_1}p_j^{\acute{v}_{p_j}(n)} \prod\limits_{j=1}^{r_1}p_j^{k_j}. \end{equation} Let $v\in V(D_n)$ such that $f(v)=\prod\limits_{j=1}^{r_1}p_j^{\acute{v}_{p_j}(n)}.$ Then, using equation \eqref{eqn9}, we obtain \begin{equation}\label{eqn1} f(u_0)= \prod\limits_{j=1}^{r_1}p_j^{k_j} f(v). \end{equation} Thus, using Lemma \ref{lem3}, $\deg(v)\leq \deg(u_0)$. Since $u_0$ has minimum degree, we have $\deg(v)=\deg(u_0)$. Therefore, $$f(v)\leq f(u_0)\quad \mbox{and}\quad \deg(v)=\deg(u_0).$$ Since $f(u_0)$ is the minimum label and using equation \eqref{eqn1}, we get $k_j=0$ for every $j$. Consequently, we have \begin{equation}\label{eqn14} f(u_0)=f(v)=\prod\limits_{j=1}^{r_1}p_j^{\acute{v}_{p_j}(n)}<n. \end{equation} Given the finite sequence $1<p_1^{\acute{v}_{p_{_1}}(n)}<p_2^{\acute{v}_{p_{_2}}(n)}<\dots<p_s^{\acute{v}_{p_s}(n)}<\frac{n}{2},$ where $p_i$, $i = 1, 2, \dots, s$ are distinct primes.
Then we have \begin{equation*} \left|M_{p_1^{\acute{v}_{p_{_1}}(n)}}\right| \geq \left|M_{p_2^{\acute{v}_{p_{_2}}(n)}}\right| \geq \dots \geq \left|M_{p_s^{\acute{v}_{p_s}(n)}}\right|. \end{equation*} Since $p_i^{\acute{v}_{p_i}(n)}\mid f(u_0)$ for some $i\in\{1, 2, \dots, s\}$ and $u_0$ is the vertex in $V_{\delta}(D_n)$ with minimum label, equation \eqref{eqn14} and the minimality of $f(u_0)$ imply that the prime power factors of $f(u_0)$ are the first $r$ terms of the sequence for some $r<s$, that is, $f(u_0)\in M_{p_1^{\acute{v}_{p_{_1}}(n)}}, \ f(u_0)\in M_{p_2^{\acute{v}_{p_{_2}}(n)}}, \ \dots, \ f(u_0)\in M_{p_r^{\acute{v}_{p_r}(n)}}$ and $$f(u_0)=\prod\limits_{j=1}^{r}p_j^{\acute{v}_{p_j}(n)}<n.$$ Suppose, to the contrary, that $p_{r+1}^{\acute{v}_{p_{r+1}}(n)}f(u_0)<n.$ Let $w\in V(D_n)$ such that $f(w)=p_{r+1}^{\acute{v}_{p_{r+1}}(n)}f(u_0).$ Then, using Lemma \ref{lem3}, $\deg(w)\leq \deg(u_0)$. Since the degree of $u_0$ is minimum, we get $\deg(w)=\deg(u_0)$. However, since $$f^*(u_0)=\frac{f(u_0)}{(f(u_0),n)}=\prod\limits_{j=1}^{r}p_j\quad\mbox{and}\quad f^*(w)=\frac{f(w)}{(f(w),n)}=\prod\limits_{j=1}^{r+1}p_j,$$ the reduced labels $f^*(u_0),f^*(w)$ do not have the same prime factors, which contradicts Theorem \ref{thm_eq-deq2}. Hence, the proof follows.
\end{proof} Clearly, one can see that $\delta(D_n)=\deg(u_0)$, and the degree of every vertex in $D_n$ is provided by the following result. \begin{thm}[\cite{Nasr}]\label{deg(u)} If $f^*(u)=\prod\limits_{i=1}^{r}p_i^{k_i}$, where $u\in V(D_n)$, $p_i$, $i=1,2,\dots,r$ are distinct prime numbers and $k_i\geq0$, $i=1,2,\dots,r$, then \begin{equation*} \deg(u)=\left\{\begin{array}{ll} n-1, & \hbox{$f^*(u)=1.$} \\ n-\sum\limits_{1\leq i\leq r}\bigg\lfloor\frac{n}{p_i^{\acute{v}_{p_i}(n)}}\bigg\rfloor +\sum\limits_{1\leq i<j\leq r}\bigg\lfloor\frac{n}{p_i^{\acute{v}_{p_i}(n)}p_j^{\acute{v}_{p_j}(n)}}\bigg\rfloor -\dots +(-1)^{r}\Bigg\lfloor\frac{n}{\prod\limits_{1\leq i\leq r}p_i^{\acute{v}_{p_i}(n)}}\Bigg\rfloor, & \hbox{$f^*(u)>1.$} \end{array} \right. \end{equation*} \end{thm} \begin{cor}\label{thmVI1} Let $G$ be a graph of order $n$. If the graph $G$ is Diophantine, then $\delta(G)\leq \delta(D_n)$. \end{cor} \begin{proof} Let $G$ be a Diophantine graph of order $n$. Then, using Corollary \ref{spanning-thm}, the graph $G$ is labeling isomorphic to a spanning subgraph (say $\acute{G}$) of $D_n$, i.e., $G\cong_l\acute{G}$. Hence, $\delta(G)=\delta(\acute{G})\leq\delta(D_n)$. \end{proof} \begin{thm}\label{lem5} There exists a vertex $w\in V_{\delta}(D_n)$ such that $\frac{n}{2}< f(w)< n$. \end{thm} \begin{proof} Let $u\in V_{\delta}(D_n)$. Then two cases follow. In the case of $\frac{n}{2}< f(u)< n$, there is nothing to prove. In the other case of $1<f(u)<\frac{n}{2}$, we get $\frac{n}{2}< 2^t f(u)< n$ for some $t>0$. Let $f(w)=2^tf(u)$, where $w\in V(D_n)$, $t>0$. Then we have $\frac{n}{2}<f(w)< n$. Using Lemma \ref{lem3}, we get $N(u)\supseteq N(w)$. Therefore, $\deg(w)\leq \deg(u)$. Since $u\in V_{\delta}(D_n)$, we obtain $\deg(w)=\deg(u)=\delta(D_n)$. Hence, from the two cases, there exists a vertex $w\in V_{\delta}(D_n)$ such that $\frac{n}{2}< f(w)< n$. \end{proof} \subsection{The Degree Sequences of Graphs} \begin{dfn} Let $G$ be a graph of order $n$.
A finite sequence $S_G=\{g_i\}_{i=0}^{n-1}=(g_0, g_1, \dots, g_{n-1})$, where $g_i:=|\{v\in V(G):\deg(v)=i\}|$, $i=0,1, \dots, n-1$, is called the vertex-degree sequence of $G$. \end{dfn} Note that this differs from the standard notion of the degree sequence in the literature. Obviously, we obtain the following two equations $\sum\limits_{i=0}^{n-1}g_i=|V(G)|$ and $\sum\limits_{i=0}^{n-1} ig_i=2|E(G)|$. \begin{ex} $S_{D_7}=(0, 0, 0, 1, 2, 1, 3)$ and $S_{D_{11}}=(0,0,0,0,1,1,3,0,2,1,3)$. \end{ex} \begin{thm}\label{thmVI2} Let $G$ be a graph of order $n$ with $S_G=\{g_i\}_{i=0}^{n-1}$ and $D_n$ be the maximal Diophantine graph of order $n$ with $S_{D_n}=\{d_i\}_{i=0}^{n-1}$. If the graph $G$ is Diophantine, then for each $k\in\{0,1,\dots,n-1\}$, $$\sum\limits_{i=0}^{k} g_{i}\geq \sum\limits_{i=0}^{k} d_{i}.$$ \end{thm} \begin{proof} Assume $(D_n,f_1)$ is the maximal Diophantine graph of order $n$ with $S_{D_n}=\{d_i\}_{i=0}^{n-1}$ and $(G,f_2)$ is a Diophantine graph of order $n$ with $S_G=\{g_i\}_{i=0}^{n-1}$, where $$f_1:V(D_n)\rightarrow \{1,2,\dots,n\}, \quad f_2:V(G)\rightarrow \{1,2,\dots,n\}$$ are Diophantine labelings of $D_n,G$, respectively. Suppose, to the contrary, that there exists $k_0 \in\{0,1,\dots,n-1\}$ such that \begin{equation}\label{eqn5} \sum\limits_{i=0}^{k_0} g_{i}< \sum\limits_{i=0}^{k_0} d_{i}. \end{equation} Using Corollary \ref{spanning-thm}, we get that $(G,f_2)$ is labeling isomorphic to a spanning subgraph of $(D_n,f_1)$. Let $(\acute{G},f_1)$ be a spanning subgraph of $(D_n,f_1)$ such that $(G,f_2)\cong_l(\acute{G},f_1).$ Then there is a bijective map $\varphi:V(G)\rightarrow V(\acute{G})$ such that for all $u,\acute{u}\in V(G)$, $u\acute{u}\in E(G)$ if and only if $\varphi(u)\varphi(\acute{u})\in E(\acute{G})$ and $f_2(u)=f_1(\varphi(u)).$ Then for every $u\in V(G)$, \begin{equation}\label{eqn2} \deg(u)=\deg(\varphi(u)).
\end{equation} Define a map $\acute{\varphi}:V(G)\rightarrow V(D_n)$ such that for every $u\in V(G)$, $\acute{\varphi}(u):=\varphi(u).$ Then the map $\acute{\varphi}$ is bijective and for every $u\in V(G)$, $f_2(u)=f_1(\varphi(u))=f_1(\acute{\varphi}(u))$. Since $(\acute{G},f_1)$ is a spanning subgraph of $(D_n,f_1)$ and using equation \eqref{eqn2}, one can see that for every $u\in V(G)$, \begin{equation}\label{eqn3} \deg(u)\leq \deg(\acute{\varphi}(u)). \end{equation} Define the following two sets $$A_{D_n}(k):=\{v\in V(D_n):\deg(v)\leq k\}\quad\mbox{and} \quad A_G(k):=\{u\in V(G):\deg(u)\leq k\},$$ where $k=0,1,\dots,n-1$. Therefore, using equation \eqref{eqn5}, there exists $k_0\in\{ 0,1,\dots,n-1\}$ such that \begin{equation*} |A_{G}(k_0)|=\sum\limits_{i=0}^{k_0} g_{i}< \sum\limits_{i=0}^{k_0} d_{i}=|A_{D_n}(k_0)|. \end{equation*} Consequently, there exists a vertex $v\in A_{D_n}(k_0)\subseteq V(D_n)$ such that $u=\acute{\varphi}^{-1}(v)\notin A_{G}(k_0).$ Then, \begin{equation*} \deg(u)> k_0\geq\deg(v)=\deg(\acute{\varphi}(u)), \end{equation*} which contradicts equation \eqref{eqn3}. Hence, for every $k\in \{0,1,\dots,n-1\}$, \begin{equation*} \sum\limits_{i=0}^{k} g_{i}\geq \sum\limits_{i=0}^{k} d_{i}, \end{equation*} which completes the proof. \end{proof} \begin{rem}\label{cor2} Let a graph $G$ of order $n$ have $S_G=\{g_i\}_{i=0}^{n-1}$ and the maximal Diophantine graph $D_n$ have $S_{D_n}=\{d_i\}_{i=0}^{n-1}$. If $\sum\limits_{i=0}^{k} g_{i}\geq\sum\limits_{i=0}^{k} d_{i}$ holds for every $k=0,1,\dots,n-1$, then $$|E(G)|\leq|E(D_n)|\quad \mbox{and}\quad F(G)\leq F(D_n)\quad\mbox{and}\quad \delta(G)\leq\delta(D_n).$$ \end{rem} The proof of Remark \ref{cor2} closely resembles the corollaries in \cite{Seoud1}. The following table presents the quantities $|E(G)|$, $F(G)$, $Cl(G)$, $\alpha(G)$, $\delta(G)$ and $S_{G}$, where $G=D_n$, $n=4,\dots,20$.
\begin{table}[h] \centering \begin{tabular}{c c c c c c c} \hline $n$ & $|E(D_n)|$ & $F(D_n)$ & $Cl(D_n)$ & $\alpha(D_n)$ & $\delta(D_n)$ & $S_{D_n}=(d_i)$ \\ \hline\hline 4 & 6 & 4 & 4 & 1 & 3 & $(0,0,0,4)$ \\ 5 & 9 & 3 & 4 & 2 & 3 & $(0,0,0,2,3)$ \\ 6 & 15 & 6 & 6 & 1 & 5 & $(0,0,0,0,0,6)$ \\ 7 & 17 & 3 & 5 & 3 & 3 & $(0,0,0,1,2,1,3)$ \\ 8 & 27 & 6 & 7 & 2 & 6 & $(0,0,0,0,0,0,2,6)$ \\ 9 & 30 & 5 & 6 & 4 & 5 & $(0,0,0,0,0,4,0,0,5)$ \\ 10 & 41 & 5 & 7 & 3 & 7 & $(0,0,0,0,0,0,0,3,2,5)$ \\ 11 & 41 & 3 & 6 & 5 & 4 & $(0,0,0,0,1,1,3,0,2,1,3)$ \\ 12 & 65 & 10 & 11 & 2 & 10 & $(0,0,0,0,0,0,0,0,0,0,2,10)$ \\ 13 & 57 & 4 & 7 & 6 & 5 & $(0,0,0,0,0,2,1,3,0,2,0,1,4)$ \\ 14 & 81 & 6 & 8 & 4 & 8 & $(0,0,0,0,0,0,0,0,1,0,3,2,2,6)$ \\ 15 & 83 & 7 & 9 & 7 & 7 & $(0,0,0,0,0,0,0,1,6,0,0,0,0,1,7)$ \\ 16 & 106 & 7 & 10 & 5 & 9 & $(0,0,0,0,0,0,0,0,0,1,0,4,0,2,2,7)$ \\ 17 & 95 & 4 & 8 & 8 & 6 & $(0,0,0,0,0,0,2,1,1,4,1,0,2,0,1,1,4)$ \\ 18 & 143 & 9 & 11 & 4 & 14 & $(0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,9)$ \\ 19 & 119 & 5 & 9 & 9 & 7 & $(0,0,0,0,0,0,0,3,1,1,4,1,0,2,0,0,1,1,5)$ \\ 20 & 173 & 10 & 13 & 6 & 14 & $(0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,0,0,0,4,10)$ \\ \hline \end{tabular} \caption{Basic Bounds of $D_n$, $n=4,\dots,20$} \label{table1} \end{table} \section{Necessary Conditions for Diophantine Graphs} \hspace{0.5cm} According to Definition \ref{dfn2}, Corollaries \ref{corVI2}, \ref{corVI3}, \ref{corVI4}, \ref{thmVI1} and Theorem \ref{thmVI2}, each of the following six conditions $\textbf{C1}, \dots,\textbf{C6}$ listed below constitutes a necessary condition for the existence of a Diophantine labeling for a given graph $G$ of order $n$:\\ \textbf{C1}. $|E(G)|\leq|E(D_n)|$.\quad \textbf{C2}. $F(G)\leq F(D_n)$.$\quad$ \textbf{C3}. $Cl(G)\leq Cl(D_n)$.$\quad$ \textbf{C4}. $\alpha(G)\geq\alpha(D_n)$.\quad \textbf{C5}. $\delta(G)\leq\delta(D_n)$.\hspace{0.8cm} \textbf{C6}.
For every $k\in\{0, 1, \dots, n-1\}$, $\sum\limits_{i=0}^{k} g_{i}\geq\sum\limits_{i=0}^{k} d_{i}$.\\ Notice that, based on Remark \ref{cor2} and Example \ref{example1}, \textbf{C6} is stronger than each of the three conditions \textbf{Ci}, $i = 1, 2, 5$. Conditions \textbf{Ci}, $i = 1, 2, 3, 4, 5$, are mutually independent, while conditions \textbf{Ci}, $i = 3, 4, 6$, are also mutually independent. Additionally, Examples \ref{example1} and \ref{example2} illustrate some relations among these conditions. \begin{ex}\label{example1} In this example, the previous six necessary conditions are examined for the six graphs $G_i$, $i=1,\dots,6$, shown in Figure \ref{figure2}. The basic bounds for these graphs, together with the corresponding bounds of the maximal Diophantine graphs $D_7$ and $D_{11}$, are given in Table \ref{table2}. The graph $G_1$ does not satisfy \textbf{C4}, the graph $G_2=C_4+\overline{K_3}$ does not satisfy \textbf{C5}, \textbf{C6} $\left(\mbox{for} \ \sum\limits_{i=0}^{3}g_{i}<\sum\limits_{i=0}^{3}d_{i}\right)$, the graph $G_3$ does not satisfy \textbf{C3}, the graph $G_4=K_4+\overline{K_7}$ does not satisfy \textbf{C2}, \textbf{C6} $\left(\mbox{for} \ \sum\limits_{i=0}^{9} g_{i}<\sum\limits_{i=0}^{9} d_{i} \right)$, the graph $G_5$ does not satisfy \textbf{C1}, \textbf{C6} $\left(\mbox{for} \ \sum\limits_{i=0}^{5} g_{i}<\sum\limits_{i=0}^{5} d_{i} \right)$ and the graph $G_6$ does not satisfy \textbf{C6} $\left(\mbox{for} \ \sum\limits_{i=0}^{8}g_{i}<\sum\limits_{i=0}^{8}d_{i} \right)$. Therefore, the graphs $G_i$, $i=1,\dots,6$ are not Diophantine. However, all the other necessary conditions are satisfied by the graphs $G_i$, $i=1,2,\dots,6$. \end{ex} \begin{figure*}[h!]
\centering \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.7,auto=center,every node/.style={circle,fill=blue!20}] \node (v1)[circle,fill=red!20] at (2,0) {$4$}; \node (v2) at (0,2) {$2$}; \node (v3) at (-2,0) {$3$}; \node (v4) at (0,-2) {$1$}; \node (v5) at (2,-2) {$5$}; \node (v6)[circle,fill=red!20] at (4,0) {$6$}; \node (v7) at (2,2) {$7$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v7); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v7); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v7); \draw (v4) -- (v5); \draw (v5) -- (v6); \draw (v6) -- (v7); \end{tikzpicture}\caption{Graph $G_1$} \end{subfigure} ~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.7,auto=center,every node/.style={circle,fill=blue!20}] \node (v1) at (1.8,2) {$1$}; \node (v2) at (1.8,0.6) {$2$}; \node (v3) at (0,-2) {$3$}; \node (v4) at (0,-0.6) {$4$}; \node (v5)[circle,fill=red!20] at (6,1.5) {$5$}; \node (v6)[circle,fill=red!20] at (6,0) {$6$}; \node (v7)[circle,fill=red!20] at (6,-1.5) {$7$}; \draw (v1) -- (v2); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v2) -- (v3); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v7); \end{tikzpicture}\caption{Graph $G_2$} \end{subfigure} ~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v1)[circle,fill=red!20] at (2,2) {$1$}; \node (v2)[circle,fill=red!20] at (2,0.8) {$2$}; \node (v3)[circle,fill=red!20] at (2,-0.8) {$3$}; \node (v4)[circle,fill=red!20] at (2,-2) {$4$}; \node (v5) at (-2,-2) {$6$}; \node (v6) at (-1,0) {$7$}; \node (v7) at (-2,2) {$8$}; \node (v8)[circle,fill=red!20] at (-4,2.5) {$9$}; \node (v9) at (-6,1.3) {$10$}; \node (v10) at 
(-6,-1.3) {$11$}; \node (v11) at (-4,-2.5) {$5$}; \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v7); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v5) -- (v10); \draw (v5) -- (v11); \draw (v6) -- (v7); \draw (v6) -- (v8); \draw (v6) -- (v9); \draw (v6) -- (v10); \draw (v6) -- (v11); \draw (v7) -- (v8); \draw (v7) -- (v9); \draw (v7) -- (v10); \draw (v7) -- (v11); \draw (v8) -- (v9); \draw (v8) -- (v10); \draw (v8) -- (v11); \draw (v9) -- (v10); \draw (v9) -- (v11); \draw (v10) -- (v11); \end{tikzpicture}\caption{Graph $G_3$} \end{subfigure} ~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v1) at (3,1) {$1$}; \node (v2) at (1,3) {$2$}; \node (v3) at (-1,3) {$3$}; \node (v4) at (-3,1) {$4$}; \node (v5)[circle,fill=red!20] at (-3.5,-1) {$10$}; \node (v6)[circle,fill=red!20] at (-2.5,-2) {$5$}; \node (v7)[circle,fill=red!20] at (-1.3,-2.5) {$6$}; \node (v8)[circle,fill=red!20] at (0,-3) {$7$}; \node (v9)[circle,fill=red!20] at (1.3,-2.5) {$8$}; \node (v10)[circle,fill=red!20] at (2.5,-2) {$9$}; \node (v11)[circle,fill=red!20] at (3.5,-1) {$11$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v1) -- (v11); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v2) -- (v8); \draw (v2) -- (v9); \draw (v2) -- (v10); \draw (v2) -- (v11); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v3) -- (v8); \draw (v3) -- (v9); \draw (v3) -- (v10); \draw (v3) -- (v11); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v7); \draw (v4) -- 
(v8); \draw (v4) -- (v9); \draw (v4) -- (v10); \draw (v4) -- (v11); \end{tikzpicture}\caption{Graph $G_4$} \end{subfigure} ~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node (v1) at (3.7,0) {$1$}; \node (v2) at (3,2.5) {$2$}; \node (v3) at (1.5,4) {$3$}; \node (v4) at (-1.5,4) {$4$}; \node (v5) at (-3,2.5) {$5$}; \node (v6) at (-3.7,0) {$6$}; \node (v7)[circle,fill=red!20] at (-3,-2.5) {$10$}; \node (v8)[circle,fill=red!20] at (-1.5,-3) {$7$}; \node (v9)[circle,fill=red!20] at (0,-3) {$8$}; \node (v10)[circle,fill=red!20] at (1.5,-3) {$9$}; \node (v11)[circle,fill=red!20] at (3,-2.5) {$11$}; \draw (v1) -- (v2); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v1) -- (v11); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v7); \draw (v2) -- (v8); \draw (v2) -- (v9); \draw (v2) -- (v10); \draw (v2) -- (v11); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v3) -- (v8); \draw (v3) -- (v9); \draw (v3) -- (v10); \draw (v3) -- (v11); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v8); \draw (v4) -- (v9); \draw (v4) -- (v10); \draw (v4) -- (v11); \draw (v5) -- (v6); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v5) -- (v10); \draw (v5) -- (v11); \draw (v6) -- (v7); \draw (v6) -- (v8); \draw (v6) -- (v9); \draw (v6) -- (v10); \draw (v6) -- (v11); \end{tikzpicture}\caption{Graph $G_5$} \end{subfigure} ~ \begin{subfigure}{0.3\textwidth} \centering \begin{tikzpicture} [scale=.6,auto=center,every node/.style={circle,fill=blue!20}] \node[circle,fill=blue!20](v1) at (4,1) {$1$}; \node[circle,fill=blue!20](v2) at (2.5,3) {$2$}; \node[circle,fill=blue!20](v3) at (0,4) {$3$}; \node[circle,fill=blue!20](v4) at (-2.5,3) {$4$}; \node[circle,fill=blue!20](v5) at (-4,1) {$5$}; \node[circle,fill=red!20](v6) at 
(-4,-1) {$6$}; \node[circle,fill=red!20](v7) at (-2.5,-1.5) {$7$}; \node[circle,fill=red!20](v8) at (0,-2) {$8$}; \node[circle,fill=red!20](v9) at (2.5,-1.5) {$9$}; \node[circle,fill=red!20](v10) at (4,-1) {$10$}; \node[circle,fill=red!20](v11) at (0,5.5) {$11$}; \draw (v2) -- (v1); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v2) -- (v8); \draw (v2) -- (v9); \draw (v2) -- (v10); \draw (v2) -- (v11); \draw (v3) -- (v1); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v3) -- (v8); \draw (v3) -- (v9); \draw (v3) -- (v10); \draw (v3) -- (v11); \draw (v4) -- (v1); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v7); \draw (v4) -- (v8); \draw (v4) -- (v9); \draw (v4) -- (v10); \draw (v4) -- (v11); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v5) -- (v6); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v5) -- (v10); \end{tikzpicture}\caption{Graph $G_6$} \end{subfigure}\caption{The graphs $G_1,G_2,G_3,G_4,G_5$ and $G_6$ are non-Diophantine}\label{figure2} \end{figure*} \begin{table}[h!] 
\centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Graph $G$ & $|E(G)|$ & $F(G)$ & $Cl(G)$ & $\alpha(G)$ & $\delta(G)$ & $S_{G}$ \\ \hline $D_7$ & 17 & 3 & 5 & 3 & 3 & $(0,0,0,1,2,1,3)$ \\ \hline $G_1$ & 15 & 0 & 5 & \fbox{2} & 2 & $(0,0,1,0,2,4,0)$ \\ $G_2$ & 16 & 0 & 3 & 3 & \fbox{4} & $\fbox{(0,0,0,0,3,4,0)}$ \\ \hline $D_{11}$ & 41 & 3 & 6 & 5 & 4 & $(0,0,0,0,1,1,3,0,2,1,3)$ \\ \hline $G_3$ & 33 & 3 & \fbox{7} & 5 & 3 & $(0,0,0,4,0,0,4,0,0,0,3)$ \\ $G_4$ & 34 & \fbox{4} & 5 & 7 & 4 & $\fbox{(0,0,0,0,7,0,0,0,0,0,4)}$ \\ $G_5$ & \fbox{42} & 2 & 6 & 5 & 5 & $\fbox{(0,0,0,0,1,0,4,0,2,2,2)}$ \\ $G_6$ & 38 & 3 & 6 & 6 & 3 & $\fbox{(0,0,0,1,0,5,0,0,0,2,3)}$ \\ \hline \end{tabular} \caption{Some Basic Bounds of $G_i$, $i=1,\dots,6$ and The Corresponding Bounds of $D_7$ and $D_{11}$}\label{table2} \end{table} \begin{ex}\label{example2} The following graph $G$ in Figure \ref{figure3} is not Diophantine (see \cite{sonbaty}). However, condition \textbf{C1} is satisfied since $|E(G)|=37<41=|E(D_{11})|$, condition \textbf{C2} is satisfied since $F(G)=3=F(D_{11})$, condition \textbf{C3} is satisfied since $Cl(G)=6=Cl(D_{11})$, condition \textbf{C4} is satisfied since $\alpha(G)=6>5=\alpha(D_{11})$, condition \textbf{C5} is satisfied since $\delta(G)=3<4=\delta(D_{11})$, and condition \textbf{C6} is satisfied since the two sequences $$S_G=\{g_i\}_{i=0}^{10}=(0,0,0,1,1,4,0,0,1,1,3)\quad\mbox{and}\quad S_{D_{11}}=\{d_i\}_{i=0}^{10}=(0,0,0,0,1,1,3,0,2,1,3)$$ satisfy, for every $k=0,1,\dots,10$, $$\sum\limits_{i=0}^{k}g_{i}\geq\sum\limits_{i=0}^{k}d_{i}.$$ Hence, all six conditions \textbf{Ci}, $i=1, \dots, 6$, are necessary but not sufficient conditions. \end{ex} \begin{figure*}[h!]
\centering \begin{tikzpicture} [scale=0.6,auto=center] \node[circle,fill=blue!20](v1) at (4,1) {$5$}; \node[circle,fill=blue!20](v2) at (2.5,3) {$1$}; \node[circle,fill=blue!20](v3) at (0,4) {$11$}; \node[circle,fill=blue!20](v4) at (-2.5,3) {$7$}; \node[circle,fill=blue!20](v5) at (-4,1) {$3$}; \node[circle,fill=red!20](v8) at (-0.8,-1.5) {$2$}; \node[circle,fill=red!20](v6) at (-2.5,-1.5) {$6$}; \node[circle,fill=red!20](v9) at (0.8,-1.5) {$8$}; \node[circle,fill=red!20](v7) at (2.5,-1.5) {$4$}; \node[circle,fill=red!20](v11) at (-4,-1) {$9$}; \node[circle,fill=red!20](v10) at (4,-1) {$10$}; \draw (v2) -- (v1); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v2) -- (v8); \draw (v2) -- (v9); \draw (v2) -- (v10); \draw (v2) -- (v11); \draw (v3) -- (v1); \draw (v3) -- (v4); \draw (v3) -- (v5); \draw (v3) -- (v6); \draw (v3) -- (v7); \draw (v3) -- (v8); \draw (v3) -- (v9); \draw (v3) -- (v10); \draw (v3) -- (v11); \draw (v4) -- (v1); \draw (v4) -- (v5); \draw (v4) -- (v6); \draw (v4) -- (v7); \draw (v4) -- (v8); \draw (v4) -- (v9); \draw (v4) -- (v10); \draw (v4) -- (v11); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v1) -- (v8); \draw (v1) -- (v9); \draw (v1) -- (v10); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v5) -- (v9); \draw (v5) -- (v10); \end{tikzpicture} \caption{A graph $G$ that satisfies the six necessary conditions though $G$ is not Diophantine} \label{figure3} \end{figure*} \newpage \begin{flushleft} \Large\textbf{Declarations} \end{flushleft} \begin{flushleft} \textbf{Conflict of interest}: The authors declare that they have no conflict of interest. \end{flushleft} \begin{thebibliography}{9} \bibitem{Apostol} {\bf Apostol, T.M.},\newblock{\it~Introduction to Analytic Number Theory}. Springer Science+Business Media, New York, 1976. 
\bibitem{Bickle} {\bf Bickle, A.},\newblock{\it~Fundamentals of Graph Theory}, American Mathematical Society, Providence, Rhode Island, USA, 2020. \bibitem{Burton} {\bf Burton, D.M.},\newblock{\it~Elementary number theory,} 7th ed., McGraw-Hill, New York, 2011. \bibitem{sonbaty} {\bf Elsonbaty, A.; Al-harbi, E.},\newblock{\it~Iterative independence numbers for prime graphs}, Ars Combinatoria, Vol. 151, 2020. \bibitem{G-Y-Z} {\bf Gross, J.L.; Yellen, J.; Zhang, P.},\newblock{\it~Handbook of Graph Theory}, 2nd ed., CRC Press, Taylor \& Francis Group, USA, 2014. \bibitem{Harary} {\bf Harary, F.},\newblock{\it~Graph Theory}, Addison-Wesley Publishing Company, Reading, Massachusetts, 1969. \bibitem{S-C-L} {\bf Hsieh, S.-M.; Hsu, C.-C.; Hsu, L.-F.},\newblock{\it~Efficient Method to Perform Isomorphism Testing of Labeled Graphs}, Computational Science and Its Applications - ICCSA 2006, Part V, pp 422-431. \bibitem{Nasr} {\bf Nasr, A.; Elsonbaty, A.; Seoud, M.A.; Anwar, M.}, \newblock{\it~Diophantine graphs}, submitted for publication. \bibitem{Robert} {\bf Robert, A.M.},\newblock{\it~A Course in $p$-adic Analysis}, Springer Science+Business Media, LLC, New York, 2000. \bibitem{Rosa} {\bf Rosa, A.},\newblock{\it~On certain valuations of the vertices of a graph}, Theory of Graphs (Internat. Sympos., Rome, 1966), Gordon and Breach, New York, pp 349-355, 1967. \bibitem{Rosen2} {\bf Rosen, K.H.}, \newblock{\it~Discrete Mathematics and Its Applications,} 7th ed., McGraw-Hill, New York, 2012. \bibitem{Seoud1} {\bf Seoud, M.A.; Elsonbaty, A.; Mahran, A.E.A.}, \newblock{\it~On prime graphs}, Ars Combinatoria, Vol. 104, pp 241-260, 2012. \bibitem{Seoud-Y} {\bf Seoud, M.A.; Youssef, M.Z.}, \newblock{\it~On prime labelings of graphs}, Congressus Numerantium, Vol. 141, pp 203-215, 1999. \bibitem{Tout} {\bf Tout, A.; Dabboucy, A.N.; Howalla, K.}, \newblock{\it~Prime labeling of graphs}, National Academy Science Letters, Vol. 11, pp 365-368, 1982. 
\end{thebibliography} \end{document}
2412.20589v1
http://arxiv.org/abs/2412.20589v1
Countable models of weakly quasi-o-minimal theories I
\documentclass{amsart} \usepackage{amssymb,amsmath,amsthm} \usepackage{enumerate} \usepackage{eucal} \usepackage{mathabx} \usepackage{xcolor} \usepackage{tasks} \usepackage{hyperref} \hypersetup{ final, colorlinks, linkcolor={red!50!black}, citecolor={red!50!black}, urlcolor={red!50!black} } \theoremstyle{definition} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Claim}{Claim} \newtheorem*{Claim*}{Claim} \theoremstyle{definition} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Fact}[Theorem]{Fact} \newtheorem{Conjecture}[Theorem]{Conjecture} \newtheorem{Example}[Theorem]{Example} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Observation}[Theorem]{Observation} \newtheorem{Question}{Question} \newtheorem{Notation}[Theorem]{Notation} \newtheorem{Theorem1}{Theorem} \newtheorem{Corollary1}{Corollary} \usepackage{etoolbox}\usepackage{orcidlink} \def\forces{\vdash} \def\strok{\restriction} \def\wom{\textit{wom}} \def\Ind#1#2{#1\setbox0=\hbox{$#1x$}\kern\wd0\hbox to 0pt{\hss$#1\mid$\hss} \lower.9\ht0\hbox to 0pt{\hss$#1\smile$\hss}\kern\wd0} \def\ind{\mathop{\mathpalette\Ind{}}} \def\notind#1#2{#1\setbox0=\hbox{$#1x$}\kern\wd0 \hbox to 0pt{\mathchardef\nn=12854\hss$#1\nn$\kern1.4\wd0\hss} \hbox to 0pt{\hss$#1\mid$\hss}\lower.9\ht0 \hbox to 0pt{\hss$#1\smile$\hss}\kern\wd0} \def\dep{\mathop{\mathpalette\notind{}}} \DeclareMathOperator{\wor}{\perp\hspace*{-0.4em}^{\mathit w}} \DeclareMathOperator{\nwor}{\not\perp\hspace*{-0.4em}^{\mathit w}} \DeclareMathOperator{\fwor}{\perp\hspace*{-0.4em}^{\mathit f}} \DeclareMathOperator{\nfor}{\not\perp\hspace*{-0.4em}^{\mathit f}} \DeclareMathOperator{\Mon}{\mathfrak C} \DeclareMathOperator{\tp}{tp} \DeclareMathOperator{\itp}{itp} \DeclareMathOperator{\dcl}{dcl} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\Inv}{Inv} \DeclareMathOperator{\Th}{Th} \DeclareMathOperator{\Aut}{Aut} 
\usepackage[margin=30mm]{geometry} \title[Trivial types and shifts]{Countable models of weakly quasi-o-minimal theories I} \author[S.\ Moconja]{Slavko Moconja\ \orcidlink{0000-0003-4095-8830}} \address[S.\ Moconja]{University of Belgrade, Faculty of mathematics, Belgrade, Serbia} \email{[email protected]} \author[P.\ Tanovi\'c]{Predrag Tanovi\'c\ \orcidlink{0000-0003-0307-7508}} \address[P.\ Tanovi\'c]{Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, Serbia} \email{[email protected]} \thanks{The authors are supported by the Science Fund of the Republic of Serbia, grant 7750027--SMART} \begin{document} \begin{abstract} We introduce the notions of triviality and order-triviality for global invariant types in an arbitrary first-order theory and show that they are well behaved in the NIP context. We show that these two notions agree for invariant global extensions of a weakly o-minimal type, in which case we say that the type is trivial. In the o-minimal case, we prove that every definable complete 1-type over a model is trivial. We prove that the triviality has several favorable properties; in particular, it is preserved in nonforking extensions of a weakly o-minimal type and under weak nonorthogonality of weakly o-minimal types. We introduce the notion of a shift in a linearly ordered structure that generalizes the successor function. Then we apply the techniques developed to prove that every weakly quasi-o-minimal theory that admits a definable shift has $2^{\aleph_0}$ countable models. 
\end{abstract} \maketitle \section{Introduction} In this paper, we continue the work on the classification theory for countable models of weakly quasi-o-minimal theories; recall that a complete first-order theory is weakly quasi-o-minimal if for some (equivalently, all by \cite[Theorem 1(ii)]{MTwmon}) $0$-definable linear order $<$ every definable\footnote{Throughout the paper, definable means with parameters.} subset of $\Mon$ is a Boolean combination of unary $0$-definable sets and convex sets. In \cite{MT}, the classification for binary weakly quasi-o-minimal $T$ was completed; recall that $T$ is binary if every formula is equivalent to a Boolean combination of formulae in two free variables. Here, by classification, we mean: first, listing the relevant properties of $T$ that imply $I(T,\aleph_0)=2^{\aleph_0}$, and then, assuming that all these properties fail for $T$, finding a reasonable system of invariants for isomorphism types of countable models of $T$. By reasonable invariants we mean structures that can be easily counted, so that Vaught's conjecture for $T$ can be confirmed. In \cite[Theorem 1]{MT}, four conditions are listed, each of which guarantees $I(T,\aleph_0)=2^{\aleph_0}$ for binary weakly quasi-o-minimal $T$: \begin{enumerate}[\hspace{10pt}(C1)] \item $T$ is not small; \item There is a non-convex type $p\in S_1(T)$; \item There is a non-simple type $p\in S_1(T)$; \item There is a nonisolated forking extension of some $p\in S_1(T)$ over a 1-element domain. \end{enumerate} Then, in \cite[Theorem 2]{MT}, assuming that none of (C1)-(C4) holds, the invariants of countable models are described as certain sequences of dense order-types (including $\mathbf 0$ and $\mathbf 1$), and in \cite[Theorem 8.10]{MT} the exact value of $I(T,\aleph_0)$ was computed. Outside the binary context, the above list is inadequate; we know that there are weakly o-minimal (non-binary) theories with three countable models that satisfy conditions (C3) and (C4). 
As for the other conditions, (C1) trivially works for general $T$, while in the subsequent part of this article \cite{MTclass2}, we will prove that (C2) works too. The main theorem of this paper introduces a new, significant condition. \begin{Theorem1}\label{Theorem1_shift} If a weakly quasi-o-minimal theory $T$ has a definable shift on $(\Mon,<)$, then $I(T,\aleph_0)=2^{\aleph_0}$. \end{Theorem1} The non-existence of shifts, together with the failure of (C1) and (C2), will suffice to confirm Vaught's and Martin's conjecture for a wide subclass of the weakly quasi-o-minimal theories that includes all quasi-o-minimal and all almost $\aleph_0$-categorical theories (\cite{MTclass2}); however, for the whole class, it is unlikely that the three conditions suffice. We found the motivation for Theorem \ref{Theorem1_shift} in the work of Alibek, Baizhanov and Zambarnaya \cite{Alibek}, where they introduce the notion of quasi-successors as a generalization of the successor function; our notion of shifts is a minor modification of this notion. Shifts are defined as follows. Let $S(x,y)$ be a formula, $\mathcal S=(S(a,\Mon)\mid a\in \Mon)$, and let $c\in \Mon$. Then $S(c,\Mon)$ is an $\mathcal S$-shift if the following two conditions hold: \begin{enumerate}[\hspace{10pt}(1)] \item $S(a,\Mon)$ is a convex subset of $\Mon$ with minimum $a$ for all $a\in\Mon$; and \item $(S^{(n)}(c,\Mon)\mid n\in\mathbb N)$ is a strictly $\subset$-increasing family of sets, where $S^{(1)}(x,y):=S(x,y)$ and $S^{(n+1)}(x,y):=(\exists t)(S^{(n)}(x,t)\land S(t,y))$. \end{enumerate} Shifts generalize the successor functions in infinite discrete orders in the following sense. Let $S(x)$ denote the successor of $x$ if it exists. Consider the formula $R(x,y):= (y=x \vee y=S(x))$ and the family $\mathcal R=(R(a,\Mon)\mid a\in \Mon)$. Then $R(c,\Mon)$ is an $\mathcal R$-shift, provided that $S^{(n)}(c)$ is defined for all $n\in \mathbb N$. 
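The successor example can be made concrete on a toy model. The following sketch (our illustration only; the helper names `R` and `iterate` are ours, and the model is simply $\mathbb N$ with the usual order) computes the iterated compositions $S^{(n)}$ for the relation $R(x,y):=(y=x\vee y=S(x))$ and checks the strict $\subset$-increase required by condition (2):

```python
# Toy instance of the shift definition (an illustration, not code from the
# paper): on the natural numbers, take R(x, y) := (y = x or y = x + 1), so
# the fiber R(a, M) = {a, a + 1} is a convex set with minimum a, as in
# condition (1). Condition (2) is checked by iterating the composition
#   S^(n+1)(x, y) := (exists t)(S^(n)(x, t) and S(t, y)).

def R(a):
    """The fiber R(a, M) = {a, a + 1}."""
    return {a, a + 1}

def iterate(rel, c, n):
    """Return rel^(n)(c, M): compose the relation n times starting at c."""
    image = rel(c)
    for _ in range(n - 1):
        image = set().union(*(rel(t) for t in image))
    return image

c = 0
chain = [iterate(R, c, n) for n in range(1, 6)]
# R^(n)(c, M) = {c, c + 1, ..., c + n}, so the family is strictly
# increasing under inclusion and R(c, M) is an R-shift.
assert all(s < t for s, t in zip(chain, chain[1:]))
assert all(min(s) == c for s in chain)
```

Here `chain[n-1]` equals $\{c,c+1,\dots,c+n\}$, so the family is strictly $\subset$-increasing, exactly as condition (2) requires.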
In general, it is known that the existence of an infinite definable discrete order in a model of a countable complete first-order theory $T$ implies $I(T,\aleph_0)=2^{\aleph_0}$, see \cite{Tane}. Therefore, it is interesting to know whether Theorem \ref{Theorem1_shift} holds outside the weakly quasi-o-minimal context. \begin{Question} If $T=\Th(M,<,f,c,\ldots)$ is countable, where $<$ is a linear order, $f:M\to M$, and the sequence $(f^{(n)}(c)\mid n\in \mathbb N)$ strictly increases, must $T$ have $2^{\aleph_0}$ countable models? \end{Question} Throughout the paper, first-order model theory techniques will be used exclusively. We will rely on results from our recent paper \cite{MTwom}, in which the notion of weak o-minimality for complete types of an arbitrary first-order theory is introduced and several favorable model-theoretic properties of weakly o-minimal types are proved; these are reviewed in Section \ref{Section_preliminaries}. The key new notion in this work is that of trivial weakly o-minimal types. They play a central role in the proof of Theorem \ref{Theorem1_shift}, which is why the proof does not generalize outside the weakly quasi-o-minimal context. We also believe that trivial types will play a decisive role in the proof of Vaught's conjecture for weakly quasi-o-minimal theories. The paper is organized as follows. Section \ref{Section_preliminaries} contains preliminaries. We give a thorough overview of the results from \cite{MTwom} on weakly o-minimal types that will be used in this and subsequent articles. We also prove some properties of convex weakly o-minimal types ($p\in S(A)$ is convex if its locus is a convex subset of some $A$-definable linear order). In Section \ref{Section_trivial}, motivated by the notion of triviality for types in stable theories, which was introduced and studied by Baldwin and Harrington in \cite{BH}, we introduce the notions of triviality and order-triviality for invariant types in an arbitrary theory. 
The idea behind the triviality of a type in the stable case is that every pairwise independent set of realizations is independent (as a set). Instead of sets of realizations, we consider sequences of realizations of $\mathfrak p_{\restriction A}$, where $\mathfrak p$ is an $A$-invariant global type, and consider them independent if they are Morley in $\mathfrak p$ over $A$; thus $\mathfrak p$ is trivial over $A$ if $(a_i,a_j)\models \mathfrak p^2_{\restriction A}$ (for all $i<j<\omega$) implies that $(a_i\mid i\in\omega)$ is Morley in $\mathfrak p$ over $A$. For order triviality, we require that $(a_i,a_{i+1})\models \mathfrak p^2_{\restriction A}$ (for all $i\in\omega$) implies that $(a_i\mid i\in\omega)$ is Morley; this cannot occur in the stable case. In the NIP case, we show that the triviality of $\mathfrak p$ over $A$ transfers to any $B\supseteq A$. We also prove some nice properties of weak orthogonality ($\wor$) for trivial and order-trivial types. In Section \ref{Section_trivial_wom} we introduce and study triviality in the context of weakly o-minimal types. If the type $p=\mathfrak p_{\restriction A}$ is weakly o-minimal, we show that the triviality and order-triviality of $\mathfrak p$ agree; we also prove that the triviality of one $A$-invariant extension of $p$ implies the triviality of the other; in that case, we say that $p$ is trivial. Surprisingly, trivial types are common in o-minimal theories; we show that all definable types $p\in S_1(M)$ of an o-minimal structure $M$ are trivial. In Theorem \ref{Theorem_nwor preserves triviality} we prove that the triviality of $p$ transfers to its nonforking extensions, that triviality is preserved under $\nwor$ of weakly o-minimal types, and that $\wor$ of trivial types transfers to their nonforking extensions. We also prove that every trivial type in a theory with few countable models is both convex and simple. In Section \ref{Section_shifts}, we introduce shifts and prove Theorem \ref{Theorem1_shift}. 
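The stable-theoretic phenomenon that triviality rules out, pairwise independent realizations that are jointly dependent, has a familiar linear-algebra analogue. The following sketch (an analogy of ours with forking independence, not the paper's notion) exhibits it over the two-element field:

```python
# Our illustrative analogy (not from the paper): over GF(2), the vectors
# (1,0), (0,1), (1,1) are pairwise linearly independent, yet dependent as
# a set, since (1,0) + (0,1) = (1,1). Triviality of a type excludes the
# analogous behaviour for sequences of realizations.

from itertools import combinations

def rank_gf2(vectors):
    """Rank over GF(2) via Gaussian elimination (XOR row reduction)."""
    rows = [list(v) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

vs = [(1, 0), (0, 1), (1, 1)]
# Every pair has rank 2 (pairwise independent) ...
assert all(rank_gf2([u, w]) == 2 for u, w in combinations(vs, 2))
# ... but the whole set has rank 2 < 3 (jointly dependent).
assert rank_gf2(vs) == 2
```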
\section{Preliminaries}\label{Section_preliminaries} Throughout the paper, we use standard model-theoretical terminology and notation. We work in $\Mon$, a large, saturated (monster) model of a complete, first-order (possibly multi-sorted) theory $T$ in a language $L$. The singletons and tuples of elements from $\Mon$ are denoted by $a,b,c,\dots$. The letters $A,B,A', B',\dots$ are reserved for small (of cardinality $<|\Mon|$) subsets of the monster, while $C,D,C',D',\dots$ are used to denote arbitrary sets of tuples. By a formula $\phi(x)$, we mean one with parameters. For a formula $\phi(x)$ we denote $\phi(C)= \{c\in C\mid \models \phi(c)\}$, and similarly for partial types. A set $D$ is definable ($A$-definable) if $D=\phi(\Mon)$ for some formula $\phi(x)$ ($\phi(x)\in L_A$); similarly for type-definable sets. $S_n(C)$ denotes the set of all complete $n$-types over $C$ and $S(C)=\bigcup_{n\in\mathbb N}S_n(C)$. Types from $S(\Mon)$ are called global types. A global type $\mathfrak p(x)$ is {\em $A$-invariant} if $(\phi(x,a_1)\leftrightarrow \phi(x,a_2))\in\mathfrak p$ for all $L_A$-formulae $\phi(x,y)$ and all tuples $a_1,a_2$ with $a_1\equiv a_2\,(A)$; $\mathfrak p$ is {\em invariant} if it is $A$-invariant for some $A$. Let $\mathfrak p$ be $A$-invariant and let $(I,<)$ be a linear order; we say that the sequence of tuples $(a_i\mid i\in I)$ is a {\em Morley sequence} in $\mathfrak p$ over $A$ if $a_i\models \mathfrak p_{\restriction A\,a_{<i}}$ holds for all $i\in I$. Note that we allow Morley sequences to have arbitrary (even finite) order-type. Dividing and forking have the usual meaning. Complete types over $A$, $p(x)$ and $q(y)$, are weakly orthogonal, denoted by $p\wor q$, if $p(x)\cup q(y)$ determines a complete type; $p$ is forking orthogonal to $q$, denoted by $p\fwor q$, if $\tp(a/bA)$ does not fork over $A$ for all $a\models p$ and $b\models q$. 
A partial type $p(x)$ over $A$ is {\em NIP} if there are no formula $\varphi(x,y)$, $A$-indiscernible sequence $(a_n\mid n<\omega)$ in $p$, and tuple $b$ such that $\models\varphi(a_n,b)$ iff $n$ is even. A partial type $p(x)$ over $A$ is {\em dp-minimal} if there are no formula $\varphi( x, y)$, $A$-indiscernible sequence $I$ of tuples of length $|y|$, and $a\models p$ such that $\varphi( a, y)$ has at least four alternations on $I$. (This characterization of dp-minimal types is due to Kaplan, Onshuus, and Usvyatsov; see \cite[Proposition 2.8]{KOU}). Recall that every dp-minimal type is NIP. The notation related to linear orders is mainly standard. Let $(X,<)$ be a linear order; $<$ is defined on the power set of $X$ by $Y<Z$ iff $y<z$ for all $y\in Y$ and $z\in Z$. $D\subseteq X$ is an initial part of $X$ if $a\in D$ and $b<a$ imply $b\in D$; similarly, the final parts are defined. $D$ is convex if $a<c<b$ and $a,b\in D$ imply $c\in D$; $a\in X$ is an upper (lower) bound of $D$ if $D<a$ ($a<D$) holds. We emphasize the following notation for $D_1,D_2\subseteq X$. \begin{itemize} \item $\sup D_1\leqslant \sup D_2$ \ if and only if \ $D_2<x$ implies $D_1<x$ for all $x\in X$; \item $\sup D_1<\sup D_2$ if $\sup D_1\leqslant \sup D_2$ but $\sup D_2\not\leqslant \sup D_1$. \end{itemize} This differs a bit from the standard notion of $\sup$, which will not be used. \subsection{Relative definability} Here, we recall some facts and conventions from \cite{MTwom}. Let $p(x)$ be a partial type over $A$. A subset $X\subseteq p(\Mon)$ is {\em relatively $B$-definable within $p(\Mon)$} if $X=\phi(\Mon)\cap p(\Mon)$ for some $L_B$-formula $\phi(x)$ (usually we will have $A\subseteq B$); here, $\phi(x)$ is called a relative definition of $X$ within $p(\Mon)$, and we will also say that $X$ is relatively defined by $\phi(x)$ within $p(\Mon)$. The family of all relatively $B$-definable subsets of a type-definable set is closed under finite Boolean combinations. 
Also, if $X_i$ is relatively $B$-definable within $P_i$ ($1\leqslant i\leqslant n$), then $\prod_{1\leqslant i\leqslant n}X_i$ is relatively $B$-definable within $\prod_{1\leqslant i\leqslant n}P_i$. In particular, the relative definability of (finitary) relations within $P$ is well defined. We will say that a structure $(P,R_i)_{i\in I}$ is relatively definable if each relation $R_i$ is such within $P$. For example, if a relation $R\subseteq P^2$ is relatively defined by a formula $\phi(x,y)$ and if $(P,R)$ is a linear order, then we say that $\phi(x,y)$ relatively defines a linear order on $P$. Also, if $P$ and $Q$ are type-definable sets, then the relative definability of (graphs of) functions $f:P\to Q$ is well defined. The universal properties of relatively definable structures are expressed by certain $L_{\infty,\omega}$-sentences, which we will call {\em $\tp$-universal sentences} and informally denote by $(\forall x_1\models p_1)\dots(\forall x_n\models p_n)\ \psi(x_1,\dots,x_n)$, where $p_1(x_1),\dots,p_n(x_n)$ are partial types and $\psi(x_1,\dots,x_n)$ is a first-order formula; formally, $(\forall x\models p)\ \phi(x)$ is $(\forall x)(\bigwedge_{\theta\in p}\theta(x)\rightarrow \phi(x))$. The properties of relatively definable relations (and their defining formulae) expressed by these sentences are called {\em $\tp$-universal properties}. For example, ``the formula $x\leq y$ relatively defines a preorder on $p(\Mon)$'' and ``$\leq$ is a relatively definable preorder on $p(\Mon)$'' are $\tp$-universal properties. The following is a version of compactness that will be applied when dealing with $\tp$-universal properties. \begin{Fact}\label{Fact_L_P_sentence} Suppose that $p_1(x_1),\dots,p_n(x_n)$ are partial types that are closed under finite conjunctions and let $\phi(x_1,\dots,x_n)$ be an $L_{\Mon}$-formula such that $\Mon\models (\forall x_1\models p_1)\dots(\forall x_n\models p_n)\ \phi(x_1,\dots,x_n)$. 
Then there are $\theta_i(x_i)\in p_i$ ($1\leqslant i\leqslant n$) such that: $$\Mon\models (\forall x_1\dots x_n)\left(\bigwedge_{1\leqslant i\leqslant n}\theta_i'(x_i)\rightarrow \phi(x_1,\dots,x_n)\right)$$ for all formulae $\theta_i'(x_i)$ such that $\theta_i'(\Mon)\subseteq \theta_i(\Mon)$ ($1\leqslant i\leqslant n$). \end{Fact} \noindent {\bf Convention.} Whenever $<$ is a relatively definable order on $p(\Mon)$, we assume the formula $x<y$ is its relative definition; similarly, $E(x,y)$ is a relative definition of an equivalence relation $E$. \begin{itemize} \item Let $<$ be a relatively definable linear order on $p(\Mon)$. This is a tp-universal property, so by compactness, there is a formula $\theta(x)\in p$ such that the formula $x<y$ relatively defines a linear order on $\theta(\Mon)$, that is, $(\theta(\Mon),<)$ is a linear order. Orders of this form are called {\it definable extensions} of $(p(\Mon),<)$. \item Consider the structure $\mathcal P=(p(\Mon),<,E)$ where $<$ is a relatively definable linear order and $E$ a relatively definable convex equivalence relation on $p(\Mon)$. It is not hard to see that a finite conjunction of tp-universal properties is also tp-universal, so by compactness, there is a formula $\theta(x)\in p$ such that $(\theta(\Mon),<,E)$ is a definable extension of $\mathcal P$: $x<y$ defines a linear order and $E(x,y)$ a convex equivalence relation on $\theta(\Mon)$. \end{itemize} \subsection{Weakly o-minimal types} In this subsection, we survey basic properties of weakly o-minimal types. A complete type $p\in S(A)$ is {\it weakly o-minimal} if there is a relatively $A$-definable linear order $<$ on $p(\Mon)$ such that each relatively definable subset of $(p(\Mon),<)$ consists of a finite number of convex components; in that case, $(p,<)$ is called a {\it weakly o-minimal pair over $A$}. Weak o-minimality of the type does not depend on the particular choice of the order: By Corollary 3.3. 
from \cite{MTwom}, if $(p,<)$ is a weakly o-minimal pair over $A$, then so is $(p,<')$ for each relatively $A$-definable linear order $<'$ on $p(\Mon)$. We introduced so-types\footnote{Here, {\em so} stands for {\em stationarily ordered}.} and so-pairs in \cite{MT}. It is easy to see that weakly o-minimal types (pairs) are so-types (so-pairs), that is, if $\mathbf p=(p,<)$ is a weakly o-minimal pair over $A$, then for each relatively definable subset $D\subseteq p(\Mon)$ one of the sets $D$ and $p(\Mon)\smallsetminus D$ is right-eventual in $(p(\Mon),<)$ (contains a final part of $(p(\Mon),<)$) and one of them is left-eventual. In particular, we can define the left ($\mathbf p_l$) and the right ($\mathbf p_r$) globalization of $\mathbf p$: \begin{itemize} \item $\mathbf p_l(x):=\{\phi(x)\in L_{\Mon}\mid \phi(\Mon)\cap p(\Mon) \mbox{ is left-eventual in } (p(\Mon),<)\}$ \ and \item $\mathbf p_r(x):=\{\phi(x)\in L_{\Mon}\mid \phi(\Mon)\cap p(\Mon) \mbox{ is right-eventual in } (p(\Mon),<)\}$. \end{itemize} It is easy to see that $\mathbf p_l$ and $\mathbf p_r$ are global $A$-invariant extensions of $p$; by \cite[Fact 1.12]{MTwom}, they are the only such extensions. Moreover, by \cite[Corollary 4.2]{MTwom}, they are the only global nonforking extensions of $p$. Forking extensions of $p$ have a simple description: By \cite[Corollary 4.2]{MTwom}, for any formula $\phi(x)$ consistent with $p(x)$ we have: $p(x)\cup \{\phi(x)\}$ forks over $A$ iff $p(x)\cup \{\phi(x)\}$ divides over $A$ iff $\phi(x)$ relatively defines a bounded (from both sides) subset in $(p(\Mon),<)$. Such formulas are called {\it relatively $p$-bounded}. Note that the first two conditions do not refer to any specific order, so these formulae relatively define subsets that are bounded with respect to any relatively $A$-definable linear order. \begin{Fact}\label{Fact_basic_S_p_wom} Let $\mathbf p= (p,<_p)$ be a weakly o-minimal pair over $A$ and let $B\supseteq A$. 
Consider the set $S_p(B)=\{q(x)\in S_n(B)\mid p(x)\subseteq q(x)\}$. \begin{enumerate}[\hspace{10pt}(a)] \item $q(\Mon)$ is a convex subset of $p(\Mon)$ and $(q,<_p)$ is a weakly o-minimal pair over $B$ for each $q\in S_p(B)$. \item The set $S_p(B)$ is linearly ordered by: $q<_pq'$ iff $q(\Mon)<_pq'(\Mon)$; $(\mathbf p_{l})_{\restriction B}=\min S_p(B)$ and $(\mathbf p_{r})_{\restriction B}=\max S_p(B)$. \item For all $q\in S_p(B)$: $q$ forks over $A$ if and only if the locus $q(\Mon)$ is bounded (from both sides) in $(p(\Mon),<_p)$ if and only if $q$ contains a relatively $p$-bounded formula. \item $(\mathbf p_{l})_{\restriction B}$ and $(\mathbf p_{r})_{\restriction B}$ are the only nonforking extensions of $p$ in $S_p(B)$. \end{enumerate} \end{Fact} \begin{proof}The proofs for all the statements can be found in \cite{MTwom}: part (a) corresponds to Lemma 2.6(ii), part (b) to Corollary 2.7, and parts (c) and (d) are derived from Corollary 4.2. \end{proof} The key notion in studying the forking independence of weakly o-minimal types in \cite{MTwom} was the right (and left) $\mathbf p$-genericity. Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$ and let $B\supseteq A$. By Fact \ref{Fact_basic_S_p_wom}(a), the sets $(q(\Mon)\mid q\in S_p(B))$ form a convex partition of $(p(\Mon),<_p)$. The maximal, or the rightmost element of this partition, is $(\mathbf p_{r})_{\restriction B}(\Mon)$; this motivates the definition of the right $\mathbf p$-genericity. \begin{Definition}Let $\mathbf p=(p,<_p)$ be a weakly o-minimal pair over $A$ and let $B$ be {\it any} small set. \begin{enumerate}[\hspace{10pt}(a)] \item $a$ is {\em right $\mathbf p$-generic} over $B$, denoted by $B\triangleleft^{\mathbf p} a$, if $a\models (\mathbf p_{r})_{\restriction AB}$; \item $a$ is {\em left $\mathbf p$-generic} over $B$ if $a\models (\mathbf p_{l})_{\restriction AB}$; \item $\mathcal D_p(B):=\{a\in p(\Mon)\mid a\dep_A B\}$; \item $\mathcal D_p:=\{(x,y)\in p(\Mon)^2\mid x\dep_A y\}$. 
\end{enumerate} \end{Definition} \begin{Remark}\label{Remark_pgeneric_first} Let $\mathbf p=(p,<_p)$ be a weakly o-minimal pair over $A$. \begin{enumerate}[\hspace{10pt}(a)] \item To avoid possible confusion, no special notation for the left $\mathbf p$-genericity was chosen. However, $B\triangleleft^{\mathbf p^*}a$ means that $a$ is left $\mathbf p$-generic over $B$, where $\mathbf p^*=(p,>_p)$ is the reverse of $\mathbf p$. \item By Fact \ref{Fact_basic_S_p_wom}(d), the only nonforking extensions of $p$ in $S(AB)$ are $(\mathbf p_{l})_{\restriction AB}$ and $(\mathbf p_{r})_{\restriction AB}$. Therefore, $a\models p$ is (left or right) $\mathbf p$-generic over $B$ if and only if $a\ind_A B$; that is: ($\triangleleft^{\mathbf p}$-comparability) \ \ \ $a\ind_A B$ \ \ if and only if \ \ $B\triangleleft^{\mathbf p}a$ or $B\triangleleft^{\mathbf p^*}a$. \item By Fact \ref{Fact_basic_S_p_wom}(b) we have $(\mathbf p_{l})_{\restriction AB}=\min S_p(B)$, so the locus $(\mathbf p_{l})_{\restriction AB}(\Mon)$, which is the set of all left $\mathbf p$-generic elements over $B$, is an initial part of $p(\Mon)$. Similarly, the set of all right $\mathbf p$-generic elements over $B$, is a final part of $p(\Mon)$. \item By Fact \ref{Fact_basic_S_p_wom}(c), the set $\mathcal D_p(B)$ consists of all non-$\mathbf p$-generic elements. Since the left $\mathbf p$-generic elements form an initial part and the right $\mathbf p$-generic a final part of $(p(\Mon),<_p)$, the set $\mathcal D_p(B)$, if nonempty, is a convex subset of $p(\Mon)$; in that case, by part (c), $a<_p\mathcal D_p(B)<_p b$ holds whenever $a$ is left $\mathbf p$-generic and $b$ right $\mathbf p$-generic over $B$. In particular, $\mathcal D_p(a)$ is a convex, bounded subset of $p(\Mon)$ for all $a\models p$. 
\item The set $\mathcal D_p(B)$, if nonempty, is relatively $\bigvee$-definable over $AB$ within $p(\Mon)$, by which we mean that $\mathcal D_p(B)=p(\Mon)\cap\bigcup_{\phi\in\Phi}\phi(\Mon)$ holds for some family $\Phi$ of $L_{AB}$-formulas in variable $x$. Here, $\Phi$ can be taken to be the set of all relatively $p$-bounded $L_{AB}$-formulae, since by Fact \ref{Fact_basic_S_p_wom}(c) we have $a\in \mathcal D_p(B)$ if and only if $p(x)\cup \{\phi(x)\}\subseteq \tp(a/AB)$ for some relatively $p$-bounded formula $\phi(x)$. Therefore, $\mathcal D_p(B)$ is a convex relatively $\bigvee$-definable subset of $p(\Mon)$. \item $p\wor\tp(B/A)$ if and only if $(\mathbf p_{l})_{\restriction AB}=(\mathbf p_{r})_{\restriction AB}$. \end{enumerate} \end{Remark} The following theorem gathers key properties of right (left) $\mathbf p$-genericity, viewed as a binary relation on $p(\Mon)$; these are expressed as properties of the structure $(p(\Mon), \triangleleft^{\mathbf p},<_p,\mathcal D_p)$. \begin{Theorem}\label{Theorem4} Let $\mathbf p=(p,<_p)$ be a weakly o-minimal pair over $A$. \begin{enumerate}[\hspace{10pt}(a)] \item \ $b\models p$ is right $\mathbf p$-generic over $a\models p$ if and only if $a$ is left $\mathbf p$-generic over $b$ \ ($a\triangleleft^{\mathbf p} b\Leftrightarrow b\triangleleft^{\mathbf p^*} a$). \item $(p(\Mon), \triangleleft^{\mathbf{p}})$ is a strict partial order and $(p(\Mon), <_p)$ its linear extension. \item The relations $\mathcal D_{p}$ and the $\triangleleft^{\mathbf p}$-incomparability are the same, $<_p$-convex equivalence relation on $p(\Mon)$. In particular, for all $a,b\models p$, $\mathcal D_p(a)$ is a convex subset of $(p(\Mon),<_p)$ and: \begin{center}$a\in\mathcal D_p(b)\Leftrightarrow a\dep_A b\Leftrightarrow \lnot(a\triangleleft^{\mathbf p} b\vee b\triangleleft^{\mathbf p} a)\Leftrightarrow b\dep_A a\Leftrightarrow b\in\mathcal D_p(a)$. \end{center} \item The quotient $(p(\Mon) / \mathcal{D}_p, <_p)$ is a dense linear order. 
\end{enumerate} \end{Theorem} \begin{proof} (a) is \cite[Lemma 4.8(ii)]{MTwom}, (b)-(d) are from \cite[Theorem 4]{MTwom}. \end{proof} \begin{Remark}\label{Remark_trianglep_binary_basicprops} Let $\mathbf p=(p,<_p)$ be a weakly o-minimal pair over $A$ and let $a,b\models p$. \begin{enumerate}[\hspace{10pt}(a)] \item Theorem \ref{Theorem4}(c) implies the following version of $\triangleleft^{\mathbf p}$-comparability: $a\ind_A b\Leftrightarrow (a\triangleleft^{\mathbf p} b\vee b\triangleleft^{\mathbf p} a)$. \item By \cite[Remark 4.14]{MTwom}, each of the following conditions is equivalent to $a\triangleleft^{\mathbf p} b$: \ $(a,b)\models (\mathbf p_r^2)_{\restriction A}$; \ $(b,a)\models (\mathbf p_l^2)_{\restriction A}$; \ $a<_pb\land a\ind_A b$; \ $a<_p \mathcal D_p(b)$; \ $\mathcal D_p(a)<_p b$; \ $\mathcal D_p(a)<_p\mathcal D_p(b)$. \end{enumerate} \end{Remark} \begin{Definition} Two weakly o-minimal pairs over $A$, $\mathbf p=(p,<_p)$ and $\mathbf q=(q,<_q)$, are said to be {\it directly non-orthogonal}, denoted by $\delta_A(\mathbf p,\mathbf q)$, if $p\nwor q$ and whenever $a\models p$ is right $\mathbf p$-generic over $b\models q$, then $b$ is left $\mathbf q$-generic over $a$, and vice versa, whenever $b$ is right $\mathbf q$-generic over $a$, then $a$ is left $\mathbf p$-generic over $b$. \end{Definition} Intuitively, $\delta_A(\mathbf p,\mathbf q)$ describes that orders $<_p$ and $<_q$ ``have the same direction''. This is justified by the next theorem, where we prove that $\delta_A$ is an equivalence relation and that whenever $\mathbf p\nwor \mathbf q$ (meaning $p\nwor q$), the pair $\mathbf p$ has the same direction with exactly one of the pairs $\mathbf q$ and $\mathbf q^*$; that is, $\delta_A(\mathbf p,\mathbf q)$ or $\delta_A(\mathbf p,\mathbf q^*)$. \begin{Theorem}\label{Theorem_nonorthogonality}\cite[Theorem 5.9]{MTwom} Let $\mathcal{W}_A$ denote the set of all weakly o-minimal types over $A$ and $\mathcal P_A$ the set of all such pairs over $A$. 
Let $\mathcal{W}_A(\Mon)$ be the set of all realizations of types from $\mathcal{W}_A$. \begin{enumerate}[\hspace{10pt}(a)] \item $\nwor$ is an equivalence relation on both $\mathcal{W}_A$ and $\mathcal P_A$. \ \item $\delta_A$ is an equivalence relation on $\mathcal P_A$; $\delta_A$ refines $\nwor$ by splitting each class into two classes, each of them consisting of the reverses of pairs from the other class. \item $x\dep_A y$ is an equivalence relation on $\mathcal{W}_A(\Mon)$. \item $\nfor$ is an equivalence relation on $\mathcal{W}_A$. \end{enumerate} \end{Theorem} \begin{Fact}\label{Fact_notdelta} Suppose that $\mathbf p=(p,<_p)$ and $\mathbf q=(q,<_q)$ are weakly o-minimal pairs over $A$ and $\mathbf p\nwor\mathbf q$. \begin{enumerate}[\hspace{10pt}(a)] \item Exactly one of $\delta_A(\mathbf p,\mathbf q)$ and $\delta_A(\mathbf p,\mathbf q^*)$ holds. \item $\lnot\delta_A(\mathbf p,\mathbf q)$ if and only if $\delta_A(\mathbf p,\mathbf q^*)$ if and only if $a\triangleleft^{\mathbf q} b\triangleleft^{\mathbf p}a$ holds for some $a\models p$ and $b\models q$. \item If $p=q$, then $\delta_A(\mathbf p,\mathbf q)$ if and only if $\mathbf p_r=\mathbf q_r$ if and only if $\triangleleft^{\mathbf p}=\triangleleft^{\mathbf q}$ on $p(\Mon)$. \end{enumerate} \end{Fact} \begin{proof} (a) follows from Theorem \ref{Theorem_nonorthogonality}(b), (b) from \cite[Remark 5.3]{MTwom}, and (c) from \cite[Lemma 5.4]{MTwom}. \end{proof} As a consequence of Theorem \ref{Theorem_nonorthogonality} and Fact \ref{Fact_notdelta}(c), we have the following definition. \begin{Definition} Let $\mathcal F$ be a part of a $\delta_A$-class of weakly o-minimal pairs and let $\mathcal F(\Mon)$ be the set of all realizations of types from $\mathcal F$. 
\begin{enumerate}[\hspace{10pt}(a)] \item For $a,b\in \mathcal F(\Mon)$ define \ $a\triangleleft^{\mathcal F} b$ \ if and only if \ $a\triangleleft^{\mathbf q} b$ holds for some (or equivalently, {\it any} according to Fact \ref{Fact_notdelta}(c)) weakly o-minimal pair $\mathbf q=(q,<_q)\in\mathcal F$ such that $b\models q$. \item $\mathcal D_{\mathcal F}:=\{(x,y)\in \mathcal F(\Mon)^2\mid x\dep_A y\}$. \end{enumerate} \end{Definition} \begin{Theorem}[{\cite[Theorem 5]{MTwom}}]\label{Theorem5} Let $\mathcal F$ be a part of a $\delta_A$-class of non-algebraic weakly o-minimal pairs over $A$. Then there is an $A$-invariant order $<^{\mathcal F}$ on $\mathcal F(\Mon)$ such that the structure \ $(\mathcal F(\Mon),\triangleleft^{\mathcal F}, <^{\mathcal F},\mathcal D_{\mathcal F})$ satisfies the following conditions: \begin{enumerate}[\hspace{10pt} (a)] \item $(\mathcal{F}(\Mon), \triangleleft^{\mathcal F})$ is a strict partial order and $(\mathcal{F}(\Mon), <^{\mathcal F})$ is its linear extension; \item The relations $\mathcal D_{\mathcal F}$ and the $\triangleleft^{\mathcal F}$-incomparability are the same, $<^{\mathcal F}$-convex equivalence relation on $\mathcal{F}(\Mon)$. The quotient $(\mathcal{F}(\Mon)/\mathcal D_{\mathcal F},<^{\mathcal F})$ is a dense linear order. \item For each $p\in S(A)$ represented in $\mathcal F$, the order $<_p=(<^{\mathcal F})_{\restriction p(\Mon)}$ is relatively $A$-definable and the pair $(p,<_p)$ is directly non-orthogonal to pairs from $\mathcal F$. $(p(\Mon), \triangleleft^{\mathbf p},<_p,\mathcal D_p)$ is a substructure of $(\mathcal F(\Mon),\triangleleft^{\mathcal F}, <^{\mathcal F},\mathcal D_{\mathcal F})$. \end{enumerate} Moreover, if $(\mathbf p_i=(p_i,<_i)\mid i\in I)$ is a family of pairs from $\mathcal F$, with the types $(p_i\mid i\in I)$ mutually distinct, then the order $<^{\mathcal F}$ can be chosen to extend each $<_i$ for $i\in I$.
\end{Theorem} \begin{Theorem}\label{Theorem_triangle_mathcal F} Let $\mathcal F$ be part of a $\delta_A$-class of weakly o-minimal pairs over $A$ and let $<^{\mathcal F}$ be given by Theorem \ref{Theorem5}. Then for all small $B$, $\mathbf p=(p,<_p)\in\mathcal F$, and $a,b,c\in \mathcal{F}(\Mon)$ the following hold: \begin{enumerate}[\hspace{10pt} (a)] \item (Existence) \ There exists a $c\models p$ such that $B\triangleleft^{\mathcal F}c$; \item (Density) \ If $B\triangleleft^{\mathcal F} b$, then there is a $c$ such that $B\triangleleft^{\mathbf p} c\triangleleft^{\mathcal F} b$; \item (Symmetry) \ $a\ind_A b$ if and only if $b\ind_A a$; \item \ $a \triangleleft^{\mathcal F} b\Leftrightarrow b\triangleleft^{\mathcal F^*} a$ \ ($b$ is right $\mathcal F$-generic over $a$ if and only if $a$ is left $\mathcal F$-generic over $b$.) \item ($\triangleleft^{\mathcal F}$-comparability) \ $a\ind_A B\Leftrightarrow (B \triangleleft^{\mathcal F} a\vee B\triangleleft^{\mathcal F^*} a)$ \ and \ $a\ind_A b\Leftrightarrow (a \triangleleft^{\mathcal F} b\vee b\triangleleft^{\mathcal F} a)$; \item (Transitivity) \ $B\triangleleft^{\mathcal F} a \triangleleft^{\mathcal F} b$ \ implies \ $B\triangleleft^{\mathcal F} b$; \item $B\triangleleft^{\mathcal F} a \land a\dep_A b$ implies $B\triangleleft^{\mathcal F} b\land a\dep_{B}b$; \item $B\triangleleft^{\mathcal F} a<^{\mathcal F}b$ implies $B\triangleleft^{\mathcal F} b$; \item $B\triangleleft^{\mathcal F} a<^{\mathcal F}b\land \, a\ind_B b$ implies $Ba\triangleleft^{\mathcal F} b$; \item $B\triangleleft^{\mathcal F}a<^{\mathcal F}b<^{\mathcal F}c\land a\dep_{B}c$ implies $b\dep_{B} a \land b\dep_{B}c$. \end{enumerate} \end{Theorem} \begin{proof} (a)-(f) are Theorem 5.11(i)-(vi) of \cite{MTwom} and (g)-(j) are Proposition 5.12(iii)-(vi) of \cite{MTwom}.
\end{proof} Most applications of Theorem \ref{Theorem_triangle_mathcal F} will be in the case where $\mathcal F=\{\mathbf p,\mathbf q\}$ (with $\delta_A(\mathbf p,\mathbf q)$ assumed). This is summarized in the next corollary. \begin{Corollary}\label{Cor_triangle_mathbf_pq_properties} $\delta_A(\mathbf p,\mathbf q)$ implies that for all $B$, $a\models p$ and $b\models q$: \begin{enumerate}[\hspace{10pt} (a)] \item (Existence) \ There exists an $a'\models p$ such that $B\triangleleft^{\mathbf p}a'$; \item (Density) \ If $B\triangleleft^{\mathbf q} b$, then there is an $a'\models p$ such that $B\triangleleft^{\mathbf p} a'\triangleleft^{\mathbf q} b$; \item (Symmetry) \ $a\ind_A b$ if and only if $b\ind_A a$; \item \ $a \triangleleft^{\mathbf q} b\Leftrightarrow b\triangleleft^{\mathbf p^*} a$ \ ($b$ is right $\mathbf q$-generic over $a$ if and only if $a$ is left $\mathbf p$-generic over $b$.) \item ($\triangleleft^{\mathbf p}$-comparability) \ $a\ind_A B\Leftrightarrow (B \triangleleft^{\mathbf p} a\vee B\triangleleft^{\mathbf p^*} a)$ \ and \ $a\ind_A b\Leftrightarrow (a \triangleleft^{\mathbf q} b\vee b\triangleleft^{\mathbf p} a)$; \item (Transitivity) \ $B\triangleleft^{\mathbf p} a \triangleleft^{\mathbf q} b$ \ implies \ $B\triangleleft^{\mathbf q} b$; \item $B\triangleleft^{\mathbf p} a \land a\dep_A b$ implies $B\triangleleft^{\mathbf q} b\land a\dep_{B}b$. \end{enumerate} \end{Corollary} \begin{Corollary}\label{Corolary_a_1_equiv_a_2_AB} Suppose that $p\in S(A)$ is a weakly o-minimal type and $a_1,a_2\models p$. Then for all $B$: \begin{center} $a_1\dep_A a_2 \ \land \ a_1\ind_A B$ \ implies \ $a_1\equiv a_2 \,(AB)$. \end{center} \end{Corollary} \begin{proof} Assume $a_1\dep_A a_2 \ \land \ a_1\ind_A B$. Choose $\mathbf p=(p,<)$, a weakly o-minimal pair over $A$, such that $B\triangleleft^{\mathbf p} a_1$.
Then by Corollary \ref{Cor_triangle_mathbf_pq_properties}(g), applied to $\mathbf p=\mathbf q$, $B\triangleleft^{\mathbf p} a_1$ and $a_1\dep_A a_2$ together imply $B\triangleleft^{\mathbf p} a_2$. Therefore, $a_1\equiv a_2\,(AB)$. \end{proof} \begin{Fact}[{\cite[Corollary 5.14]{MTwom}}]\label{Fact_delta_implies_deltaB_fworB} Suppose that $\mathbf p,\mathbf q$ are weakly o-minimal pairs over $A$ with $\delta_A(\mathbf p,\mathbf q)$. Let $B\supseteq A$. Define $p_B=(\mathbf p_r)_{\restriction B}$, $\mathbf p_B=(p_B,<_p)$ and analogously define $q_B$ and $\mathbf q_B$. \begin{enumerate}[\hspace{10pt} (a)] \item $\delta_B(\mathbf p_B,\mathbf q_B)$ holds; in particular, $p_B\nwor q_B$ and $\mathbf p_r\nwor \mathbf q_r$; \item If $p\nfor q$, then $p_B\nfor q_B$ and, in particular, $\mathbf p_r\nfor\mathbf q_r$. \end{enumerate} \end{Fact} \begin{Fact}[{\cite[Proposition 6.4]{MTwom}}] Every weakly o-minimal type is dp-minimal, hence NIP. \end{Fact} \subsection{Interval types and convex types} Let $(D,<)$ be an $A$-definable linear order. A formula $\phi(x)$ with parameters is said to be {\em convex} if it implies $x\in D$ and defines a $<$-convex subset of $D$. For a convex formula $\phi(x)$, by $\phi^-(x)$ and $\phi^+(x)$ we will denote the formulae $x\in D\land x<\phi(\Mon)$ and $x\in D\land x>\phi(\Mon)$ respectively. Note that both $\phi^-(x)$ and $\phi^+(x)$ are convex formulae given over the same parameters as $\phi(x)$, and that $\phi^-(\Mon)<\phi(\Mon)<\phi^+(\Mon)$ is a convex partition of $D$. By an {\em interval type in $(D,<)$ over $B\supseteq A$} we mean a maximal partial type $\Pi(x)$ consisting of convex $L_B$-formulae.
Since for each convex $L_B$-formula $\phi(x)$ exactly one of $\phi^-(x)$, $\phi(x)$ and $\phi^+(x)$ belongs to an interval type in $(D,<)$ over $B$, an interval type in $(D,<)$ over $B$ may be described as a consistent set of convex $L_B$-formulae $\Pi(x)$ such that for each convex $L_B$-formula $\phi(x)$ either $\phi(x)\in \Pi(x)$ or $\phi(x)$ is inconsistent with $\Pi(x)$. The set of all interval types in $(D,<)$ over $B\supseteq A$ is denoted by $IT(B)$. Note that, although we do not emphasize it, $IT(B)$ depends on $A$, $D$ and $<$. However, these will always be clear from the context, so we feel that easing the notation is in order. The set $IT(B)$ is endowed with a compact Hausdorff topology in the usual way. For $a\in D$, {\em the interval type of $a$ in $(D,<)$ over $B\supseteq A$}, denoted by $\itp(a/B)$ (we again suppress $A$, $D$ and $<$), is the set of all convex $L_B$-formulas that are satisfied by $a$. It is easy to see that $\itp(a/B)\in IT(B)$. For a type $p\in S_x(B)$ implying $x\in D$, by $p^{conv}$ we denote the subtype of all convex $L_B$-formulae from $p$. Again, it is easy to see that $p^{conv}\in IT(B)$. In the following lemma, we list some of the basic facts about interval types. \begin{Lemma}\label{Lemma_interval_types} Let $B\supseteq A$. \begin{enumerate}[\hspace{10pt} (a)] \item For all $\Pi\in IT(B)$, $\Pi(\Mon)$ is a convex subset of $D$. \item For distinct $\Pi_1,\Pi_2\in IT(B)$, $\Pi_1(\Mon)$ and $\Pi_2(\Mon)$ are disjoint and $<$-comparable. \item The space $IT(B)$ is naturally linearly ordered by $<$. \item For all $p\in S_x(B)$ implying $x\in D$, $p^{conv}(\Mon)$ is the convex hull of $p(\Mon)$. \item If $\Pi\in IT(B)$ and if $p\in S_x(B)$ is a completion of $\Pi$, then $\Pi(\Mon)$ is the convex hull of $p(\Mon)$. \end{enumerate} \end{Lemma} \begin{proof} (a) Clearly, $\Pi(\Mon)$ is convex as an intersection of convex sets. 
For (b), if $\phi(x)\in\Pi_1\smallsetminus\Pi_2$, then $\phi(x)$ is inconsistent with $\Pi_2$, so $\Pi_1(\Mon)\cap\Pi_2(\Mon)=\emptyset$; being disjoint convex subsets of $(D,<)$, they are $<$-comparable. Now, (c) follows directly from (a) and (b). (d) It is clear that $p^{conv}(\Mon)$ contains the convex hull of $p(\Mon)$. If $a\in D$ is not in the convex hull of $p(\Mon)$, then $a<p(\Mon)$ or $a>p(\Mon)$. Without loss, suppose that the former holds. By compactness, there is a $\theta(x)\in p(x)$ such that $a<\theta(\Mon)$. Denote by $\theta^{conv}(x)$ the formula that describes the convex hull of $\theta(\Mon)\cap D$. Then $a<\theta^{conv}(\Mon)$ and $\theta^{conv}(x)\in p^{conv}$, so $a\notin p^{conv}(\Mon)$. (e) Clearly, $\Pi\subseteq p^{conv}$ holds, so $\Pi=p^{conv}$ follows by the maximality of $\Pi$, and the conclusion follows by (d). \end{proof} In the literature, the term ``convex type'' is commonly used to denote an interval type, but here we will follow \cite{MT} and assign it a different meaning. \begin{Definition}\label{Definition_convex} A complete type $p\in S(A)$ is {\it convex} if there exists an $A$-definable linear order such that $p(\Mon)$ is a convex subset of it. \end{Definition} Let $p\in S(A)$ be a convex type and let $<$ be a relatively $A$-definable linear order on $p(\Mon)$. A priori, there is no reason why there should be an $A$-definable extension of $(p(\Mon),<)$ in which $p(\Mon)$ is convex. However, if $p$ is also weakly o-minimal, then we will prove in Proposition \ref{Proposition_convex_type_witness} below that such extensions exist. We need the following lemma. \begin{Lemma}\label{Lemma_D_p_rel_def_convex} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$ and let $(D,<)$ be an $A$-definable extension of $(p(\Mon),<)$.
Then there is a family $\Phi(x,y)=\{\phi_i(x,y)\mid i\in I\}$ of $L_{A}$-formulae such that for all $a\models p$, the disjunction $\bigvee_{i\in I}\phi_i(x,a)$ relatively defines $\mathcal D_p(a)$ within $p(\Mon)$ and for all $i\in I$ the set $\phi_i(\Mon,a)$ is a convex subset of $D$. \end{Lemma} \begin{proof} Let $\Phi_0(x,a)$ denote the set of all relatively $p$-bounded $L_{Aa}$-formulae in variable $x$. By Remark \ref{Remark_pgeneric_first}(e), the disjunction $\bigvee \Phi_0(x,a)$ relatively defines $\mathcal D_p(a)$ within $p(\Mon)$. Define: \ \begin{center} $\Phi(x,a)=\{\phi(x,a)\in \Phi_0\mid \mbox{ $\phi(\Mon,a)$ is a convex subset of $D$}\}$. \end{center} We claim that the family $\Phi(x,y)$ satisfies the conclusion of the lemma: $\bigvee \Phi(x,a)$ relatively defines $\mathcal D_p(a)$ within $p(\Mon)$. To prove this, it suffices to show that for each formula $\phi(x,a)\in \Phi_0(x,a)$ there exists a $\phi^*(x,a)\in \Phi(x,a)$ such that $\phi(p(\Mon),a)\subseteq \phi^*(p(\Mon),a)$. Let $\phi(x,a)\in \Phi_0$ and let $b_1,b_2\models p$ satisfy $b_1\triangleleft^{\mathbf p}a\triangleleft^{\mathbf p}b_2$. By Remark \ref{Remark_trianglep_binary_basicprops} we have $b_1<\mathcal D_p(a)<b_2$. Since the set $\phi(p(\Mon),a)$ is a subset of $\mathcal D_p(a)$ we have: $p(x)\cup\{\phi(x,a)\}\vdash b_1<x<b_2$. By compactness, there is a formula $\theta(x)\in p$, without loss implying $x\in D$, such that $\models \theta(x)\land \phi(x,a)\rightarrow b_1<x<b_2$. Let $\phi^*(x,a)$ be a formula that defines the convex hull of $\phi(\Mon,a)\cap \theta(\Mon)$ in $(D,<)$. Clearly, $b_1<\phi^*(\Mon,a)<b_2$, so $\phi^*(x,a)$ is a relatively $p$-bounded formula and $\phi^*(x,a)\in \Phi(x,a)$. As $\theta(x)\in p$, the formula $\theta(x)\land \phi(x,a)$ relatively defines the same subset of $p(\Mon)$ as $\phi(x,a)$ does; clearly, this implies the desired conclusion $\phi^*(p(\Mon),a) \supseteq \phi(p(\Mon),a)$ and completes the proof of the lemma.
\end{proof} \begin{Proposition}\label{Proposition_convex_type_witness} Suppose that $\mathbf p=(p,<)$ is a weakly o-minimal pair over $A$ and that $p$ is convex. Then there is a formula $\theta(x)\in p$ such that $(\theta(\Mon),<)$ is an $A$-definable extension of $(p(\Mon),<)$ and $p(\Mon)$ is a convex subset of $\theta(\Mon)$; we will say that $\theta(x)$ witnesses the convexity of $\mathbf p$. \end{Proposition} \begin{proof} Let $(D,<)$ be an $A$-definable extension of $(p(\Mon),<)$. Since $p$ is convex, there is an $A$-definable linear order $(D_p,<_p)$ that contains $p(\Mon)$ as a convex subset; by replacing the domains $D$ and $D_p$ by their intersection, we may assume $D=D_p$. Denote $\mathbf p=(p,<)$ and $\mathfrak p=(p,<_p)$. Reversing the order $<_p$ if necessary, we may assume that $<$ and $<_p$ have the same orientation; that is, $\mathbf p_r=\mathfrak p_r$. Let $a\models p$ and let $\Phi(x,y)$ be the set of formulas given by Lemma \ref{Lemma_D_p_rel_def_convex}: $\bigvee\Phi(x,a)$ relatively defines $\mathcal D_p(a)$ within $p(\Mon)$ and for all $\phi(x,a)\in \Phi(x,a)$ the set $\phi(\Mon,a)$ is a convex subset of $D$. Then $\{\phi(\Mon,a)<x\mid \phi(x,a)\in \Phi(x,a)\}$ is a type, which we denote by $\bigvee\Phi(D,a)<x$. Similarly, define the type $x<\bigvee\Phi(D,a)$. Since $\bigvee\Phi(x,a)$ relatively defines the set $\mathcal D_p(a)$ within $p(\Mon)$, we have $p(x)\cup (\bigvee\Phi(D,a)<x)\vdash (\mathbf p_r)_{\restriction Aa}(x)$ and, since $\mathbf p_r=\mathfrak p_r$, $p(x)\cup (\bigvee\Phi(D,a)<x)\vdash a<_px$. Similarly, for all $b\models p$ we have $p(x)\cup (x<\bigvee\Phi(D,b))\vdash x<_p b$. So \begin{equation*}p(x)\cup \left(\bigvee\Phi(D,a)< x\right)\vdash a<_p x \ \mbox{ and } \ p(x)\cup \left(x<\bigvee\Phi(D,b)\right)\vdash x<_p b .
\end{equation*} By compactness, there is a formula $\theta(x)\in p(x)$ implying $x\in D$ such that \begin{equation}\{\theta(x)\}\cup \left(\bigvee\Phi(D,a)< x\right)\vdash a<_p x \ \mbox{ and } \ \{\theta(x)\}\cup \left(x<\bigvee\Phi(D,b)\right)\vdash x<_p b . \end{equation} \setcounter{equation}{0} We claim that $\theta(x)$ satisfies the conclusion of the proposition: $p(\Mon)$ is convex in $(\theta(\Mon),<)$. To prove it, assume $a',b'\models p$ and $\models a'<c<b'\land \theta(c)$; we need to prove $c\models p$. Choose $a,b\models p$ satisfying $a\triangleleft^{\mathbf p}a'$ and $b'\triangleleft^{\mathbf p}b$. Since each formula from $\Phi(x,a)$ defines a convex subset of $D$, the locus of the type $\bigvee\Phi(D,a)<x$ is a final part of $(D,<)$; clearly, it contains $a'$, so $a'<c\in D$ implies $c\models (\bigvee\Phi(D,a)<x)$, which together with $\models \theta(c)$, by (1), implies $a<_pc$. Similarly, $c<_pb$. Since $p(\Mon)$ is convex in $(\theta(\Mon),<_p)$ and $a<_p c<_p b$, we conclude $c\models p$, as desired. \end{proof} \setcounter{equation}{0} \begin{Corollary}\label{Corollary_convex_iff_isolated_in_IT} Let $(D,<)$ be an $A$-definable linear order, let $\Pi\in IT(A)$, and let $S_{\Pi}=\{p\in S_1(A)\mid \Pi(x)\subseteq p(x)\}$. A weakly o-minimal type $p\in S_{\Pi}$ is convex if and only if it is an isolated point of $S_{\Pi}$. \end{Corollary} \begin{proof} First, suppose that $p\in S_\Pi$ is convex. By Proposition \ref{Proposition_convex_type_witness} there is a formula $\theta(x)\in p$ such that $p(\Mon)$ is convex in $(\theta(\Mon),<)$. Replacing $\theta(x)$ by $\theta(x)\land x\in D$, we obtain $\theta(\Mon)\subseteq D$, so $\Pi(x)\cup \{\theta(x)\}\vdash p(x)$; $p$ is an isolated point of $S_{\Pi}$. To prove the other implication, assume that $\theta(x)\in p$ (implies $x\in D$ and) isolates $p$ within $S_\Pi$; then $p(\Mon)=\Pi(\Mon)\cap\theta(\Mon)$. As $\Pi(\Mon)$ is convex in $(D,<)$, it follows that $p(\Mon)$ is convex in $(\theta(\Mon),<)$.
Therefore, $p$ is a convex type. \end{proof} \begin{Lemma}\label{Lemma_wqom_interval_type} Suppose that $\Th(\Mon,<,\dots)$ is weakly quasi-o-minimal. \begin{enumerate}[\hspace{10pt}(a)] \item $\delta_A(\mathbf p,\mathbf q)$ holds for all $p,q\in S_1(A)$ extending the same interval type ($\mathbf p=(p,<),\mathbf q=(q,<)$). \item If $\mathcal F$ is a $\delta_A$-class, then the order $<^{\mathcal F}$ in Theorem \ref{Theorem5} can be chosen so that for each $\Pi\in IT(A)$ with $\Pi(\Mon)\cap\mathcal F(\Mon)\neq\emptyset$ one of $(<^{\mathcal F})_{\restriction \Pi(\Mon)}=\ <$ and $(<^{\mathcal F})_{\restriction \Pi(\Mon)}=\ >$ holds. \end{enumerate} \end{Lemma} \begin{proof} (a) Let $\Pi\in IT(A)$ and let $p,q\in S_{\Pi}$ be distinct completions of $\Pi$. First, we show $p\nwor q$. We know $p^{conv}=q^{conv}=\Pi$, so by Lemma \ref{Lemma_interval_types}(d) the sets $p(\Mon)$ and $q(\Mon)$ have the same convex hull in $(\Mon,<)$. Thus, for any $a\models p$ there are $b_1,b_2\models q$ such that $b_1<a<b_2$; clearly, $\tp(ab_1)\neq \tp(ab_2)$ witnesses $p\nwor q$. Next, we claim that $a\triangleleft^{\mathbf q} b$ implies $a<b$. Assume that $b$ is right $\mathbf q$-generic over $a$. By Remark \ref{Remark_pgeneric_first}(c), the set of all right $\mathbf q$-generic elements over $a$, which is the locus of $\tp(b/Aa)$, is a final part of $(q(\Mon),<)$; since $q(\Mon)$ is included in the convex hull of $p(\Mon)$, some right $\mathbf q$-generic element over $a$, say $b'$, satisfies $a<b'$; since $b'\equiv b\,(Aa)$, $a<b$ follows, proving the claim. Similarly, we see that $b\triangleleft^{\mathbf p}a$ implies $b<a$. Therefore, $a\triangleleft^{\mathbf q}b\triangleleft^{\mathbf p}a$ is impossible. By Fact \ref{Fact_notdelta}(b) this implies $\delta_A(\mathbf p,\mathbf q)$. (b) Follows from part (a) and the ``moreover'' part of Theorem \ref{Theorem5}.
\end{proof} \noindent {\bf Convention.} Whenever the theory $T=\Th(\Mon,<,\dots)$ is weakly quasi-o-minimal, $\mathcal F$ is part of a $\delta_A$-class, and $<^{\mathcal F}$ satisfies the conclusion of Theorem \ref{Theorem5}, we will assume that $<^{\mathcal F}$ also satisfies the conclusion of the previous lemma. \begin{Lemma}\label{Lemma_forking_on_interval_type} Suppose that $\Th(\Mon,<,\dots)$ is weakly quasi-o-minimal. For each $\Pi\in IT(A)$ define: \begin{center}$S_{\Pi}=\{p\in S_1(A)\mid \Pi(x)\subseteq p(x)\} \ \ \ \mbox{ and } \ \ \ \mathcal D_{\Pi}=\{(a,b)\in \Pi(\Mon)^2\mid a\dep_A b\}.$ \end{center} \begin{enumerate}[\hspace{10pt}(a)] \item $(a,b)\in\mathcal D_{\Pi}$ \ if and only if \ $a,b\in \Pi(\Mon)$ and $\tp(a/Ab)$ contains a $\Pi$-bounded formula. \item $\mathcal D_{\Pi}$ is a convex equivalence relation on $\Pi(\Mon)$. \end{enumerate} \end{Lemma} \begin{proof} (a) Let $(a,b)\in \Pi(\Mon)^2$ and let $p=\tp(a/A)$. First, suppose that $(a,b)\in\mathcal D_{\Pi}$. Then $a\dep_A b$, so there is a relatively $p$-bounded formula $\phi(x)\in\tp(a/Ab)$ in $(p(\Mon),<)$; that is, there are $a_1,a_2\models p$ such that $p(x)\cup \{\phi(x)\}\vdash a_1<x<a_2$. By compactness, there is $\theta(x)\in p$ such that $\theta(x)\land\phi(x)\vdash a_1<x<a_2$. Clearly, $\theta(x)\land \phi(x)\in \tp(a/Ab)$ is a $\Pi$-bounded formula, proving the left-to-right implication. To prove the other, it suffices to note that, since $\Pi(\Mon)$ is the convex hull of $p(\Mon)$, every $\Pi$-bounded formula $\phi(x)\in\tp(a/Ab)$ is also $p$-bounded, so $\phi(x)$ witnesses $a\dep_A b$. (b) Let $<^{\mathcal F}$ be given by Lemma \ref{Lemma_wqom_interval_type}(b). By Theorem \ref{Theorem5}(b), $\mathcal D_{\mathcal F}=\{(a,b)\in \mathcal F(\Mon)^2\mid a\dep_A b\}$ is a $<^{\mathcal F}$-convex equivalence relation on $\mathcal F(\Mon)$.
Since $(\mathcal D_{\mathcal F})_{\restriction \Pi(\Mon)}=\mathcal D_{\Pi}$ and $(<^{\mathcal F})$ is equal to $<$ or $>$ on $\Pi(\Mon)$, we conclude that $\mathcal D_{\Pi}$ is $<$-convex on $\Pi(\Mon)$. \end{proof} \section{Order-trivial types}\label{Section_trivial} The notion of triviality for types in stable theories was introduced and studied by Baldwin and Harrington in \cite{BH}: a stationary type $p\in S_x(A)$ is trivial if, for all $B\supseteq A$, any pairwise independent triple of realizations of the nonforking extension of $p$ to $B$ is independent (as a set) over $B$. They showed that triviality of the type $p$ is preserved in all nonforking extensions and restrictions, suggesting that triviality is a property of the global nonforking extension of $p$, which is also the unique $A$-invariant global extension of $p$. Therefore, a global type $\mathfrak p$ is trivial if for some (equivalently all) sets $A$ over which $\mathfrak p$ is invariant, any set of realizations of $\mathfrak p_{\restriction A}$ that is pairwise independent over $A$ is independent over $A$. Now, if we want to consider an $A$-invariant type $\mathfrak p$ in an arbitrary first-order theory, then it is natural to consider Morley sequences in $\mathfrak p$ over $A$ as independent. \begin{Definition}\label{Def_trivial_global_type} A global type $\mathfrak p\in S_x(\Mon)$ is {\em trivial over $A$} if it is non-algebraic, $A$-invariant and whenever the members of the sequence $I=(a_i\mid i\in \omega)$ are such that $(a_i,a_j)$ is a Morley sequence in $\mathfrak p$ over $A$ for all $i<j\in\omega$, then $I$ is a Morley sequence in $\mathfrak p$ over $A$. \end{Definition} \begin{Remark}\phantomsection\label{Remark_basic_trivial} \begin{enumerate}[\hspace{10pt}(a)] \item Let $T$ be stable and let $p\in S_x(A)$ be stationary and non-algebraic. Denote by $\mathfrak p$ the global nonforking extension of $p$.
Then $p$ is trivial in the Baldwin-Harrington sense if and only if $\mathfrak p$ is trivial over $A$ in the sense of Definition \ref{Def_trivial_global_type}. \item If $T$ is a binary theory, then any non-algebraic, $A$-invariant global type is trivial over $A$. Examples of binary theories are: the theory of random graphs, theories of colored orders, etc. \item If a global type $\mathfrak p$ is trivial over $A$, then so is every finite power $\mathfrak p^n$. \end{enumerate} \end{Remark} \begin{Remark}\label{Remark_trivial_basic2} Let $\mathfrak p\in S_x(\Mon)$ be a non-algebraic, $A$-invariant type. \begin{enumerate}[\hspace{10pt}(a)] \item An equivalent way to state that $\mathfrak p$ is trivial over $A$ is: for every $n\in\mathbb N$, the type \ $\bigcup_{i<j< n}(\mathfrak p^2)_{\restriction A}(x_i,x_j)$ \ implies a unique complete type over $A$ in variables $x_0,\dots,x_{n-1}$ (in which case this type must be $(\mathfrak p^n)_{\restriction A}(x_0,\dots,x_{n-1})$). \item As in \cite{BH}, by a {\em $\mathfrak p$-triangle} over $A$ we will mean a triple $(a_0,a_1,a_2)$ of realizations of $\mathfrak p_{\restriction A}$ which is not a Morley sequence in $\mathfrak p$ over $A$, but each pair $(a_i,a_j)$, $i<j<3$, is. When $\mathfrak p$ is not trivial over $A$, we can find a finite (possibly empty) Morley sequence $\bar a$ in $\mathfrak p$ over $A$ such that $\mathfrak p$ is not trivial over $A\bar a$, as witnessed by a triangle. To do this, choose minimal $n$ such that some sequence of size $n+3$, say $(a_0,a_1,\ldots, a_{n+2})$, realizes the type $\bigcup_{i<j< n+3}(\mathfrak p^2)_{\restriction A}(x_i,x_j)$ but does not realize $(\mathfrak p^{n+3})_{\restriction A}(x_0,\dots,x_{n+2})$. Then put $\bar a=(a_0,\ldots,a_{n-1})$ and observe that $(a_n,a_{n+1},a_{n+2})$ is a $\mathfrak p$-triangle over $A\bar a$. \end{enumerate} \end{Remark} Poizat in \cite{Goode} introduced triviality in the context of stable theories.
A stable theory $T$ is trivial if, for all parameter sets $A$, any family of pairwise independent tuples over $A$ is independent over $A$. It is easy to see that every global non-algebraic type in a trivial stable theory is trivial over any parameter set over which it is based. The following fact is worth noting, but since we will not use it later, we leave the proof to the reader. \begin{Fact} A complete theory $T$ is stable and trivial if and only if every global non-algebraic type is trivial (in the sense of Definition \ref{Def_trivial_global_type}) over some small set of parameters. \end{Fact} Below, we will introduce order-triviality, a strong form of triviality for global types. The motivation for this comes from the weakly o-minimal case: we will prove in Lemma \ref{Lemma wom left trivial iff right trivial} that for every weakly o-minimal pair $(p,<)$ over $A$, the type $\mathbf p_r$ is trivial over $A$ if and only if it is order-trivial over $A$. \begin{Definition} A global type $\mathfrak p\in S_x(\Mon)$ is {\em order-trivial over $A$} if it is non-algebraic, $A$-invariant and whenever the sequence $I=(a_i\mid i\in \omega)$ is such that $(a_i,a_{i+1})$ is a Morley sequence in $\mathfrak p$ over $A$ for all $i\in\omega$, then $I$ is a Morley sequence in $\mathfrak p$ over $A$. \end{Definition} \begin{Remark}\phantomsection\label{Remark_basic_ordertrivial} \begin{enumerate}[\hspace{10pt}(a)] \item A global $A$-invariant type $\mathfrak p\in S_x(\Mon)$ is order-trivial over $A$ if and only if for every $n\in\mathbb N$, the type $\bigcup_{i< n}(\mathfrak p^2)_{\restriction A}(x_i,x_{i+1})$ implies a unique complete type over $A$ in variables $x_0,\dots,x_{n}$ (in which case this type must be $(\mathfrak p^{n+1})_{\restriction A}(x_0,\dots,x_{n})$). \item If $\mathfrak p$ is order-trivial over $A$, then $\mathfrak p$ is trivial over $A$ and every power $\mathfrak p^n$ is order-trivial over $A$.
\item A non-algebraic symmetric global invariant type $\mathfrak p$ is {\it not} order-trivial over any small set of parameters over which it is invariant (where $\mathfrak p$ is symmetric if $\mathfrak p^2(x,y)=\mathfrak p^2(y,x)$). Indeed, if $\mathfrak p$ is symmetric and $A$-invariant, take $(a_0,a_1)\models \mathfrak p^2_{\restriction A}$ and consider the sequence $a_0,a_1,a_0,a_1,\dots$; clearly, this is not a Morley sequence in $\mathfrak p$ over $A$, although each consecutive pair is. In particular, there are no order-trivial types in stable theories or in the theory of random graphs. \end{enumerate} \end{Remark} In all examples of order-trivial types that we know of, Morley sequences are strictly increasing with respect to some definable partial order; yet, it remains uncertain whether this is always the case. \begin{Question} If $\mathfrak p(x)\in S_x(\Mon)$ is order-trivial over $A$, must there exist a $B\supseteq A$ and a $B$-definable partial order such that all Morley sequences in $\mathfrak p$ over $B$ are strictly increasing? \end{Question} \begin{Proposition}\label{Prop_basic_order_trivial} Suppose that $\mathfrak p\in S(\Mon)$ is order-trivial over $A$ and that the type $p=\mathfrak p_{\restriction A}$ is NIP. Let $I=(a_i\mid i\in \omega)$ be a Morley sequence in $\mathfrak p$ over $A$ and let $B\supseteq A$. Then $I$ is a Morley sequence in $\mathfrak p$ over $B$ if and only if $a_0\models \mathfrak p_{\restriction B}$. \end{Proposition} \begin{proof} The left-to-right implication is clear. For the converse, suppose on the contrary that $a_0\models \mathfrak p_{\restriction B}$ but there is $n\in\omega$ such that $\bar a=(a_0,\dots,a_{n-1})$ is not Morley in $\mathfrak p$ over $B$. Choose a formula $\phi(\bar x,b)\in \tp(\bar a/B)$ such that $\phi(\bar x,b)\notin \mathfrak p^n$. Define the sequence $J=(\bar c^i\mid i\in\omega)$, where $\bar c^i=(c_0^i,\dots,c_{n-1}^i)\models (\mathfrak p^n)_{\restriction A}$, in the following way: Let $\bar c^0=\bar a$.
Suppose that $J_{<k}=(\bar c^i\mid i<k)$ has already been defined. First choose $c_0^k\models \mathfrak p_{\restriction Bc_{n-1}^{k-1}}$ and then $\bar c^k=(c_0^k,\dots,c_{n-1}^k)$ such that: \begin{equation}\tag{$\dagger$}\label{eq prop 2.9} \bar c^{k}\equiv \bar a\,(B)\mbox{ if $k$ is even \ \ and \ \ }\bar c^{k}\models \mathfrak p^n_{\restriction B}\mbox{ if $k$ is odd}. \end{equation} Note that this is possible as $c_0^k,a_0\models\mathfrak p_{\restriction B}$. Consider the sequence: $$c_0^0,\dots,c_{n-1}^0,c_0^1,\dots,c_{n-1}^1,\dots, c_0^k,\dots,c_{n-1}^k,\dots.$$ Note that, by our choice of $c_0^k$ and $\bar c^k$, any pair of consecutive elements of this sequence realizes the type $(\mathfrak p^2)_{\restriction A}$, so, by order-triviality, the sequence is Morley in $\mathfrak p$ over $A$. It follows that $J$ is a Morley sequence in $\mathfrak p^n$ over $A$. By our choice of $\phi(\bar x,b)$, condition (\ref{eq prop 2.9}) implies: $\models \phi(\bar c^k,b)$ if and only if $k$ is even. Therefore, the indiscernible sequence $J$ and the formula $\phi(\bar x,b)$ show that the type $(\mathfrak p^n)_{\restriction A}$ is not NIP. Contradiction! \end{proof} \begin{Corollary}\label{Cor_otrivialoverA_otrivialoverB} Suppose that $\mathfrak p\in S(\Mon)$ is order-trivial over $A$ and that the type $p=\mathfrak p_{\restriction A}$ is NIP. Then $\mathfrak p$ is order-trivial over any $B\supseteq A$. \end{Corollary} \begin{proof} Let $I=(a_i\mid i\in\omega)$ be a sequence of realizations of $\mathfrak p_{\restriction B}$ such that $(a_i,a_{i+1})\models (\mathfrak p^2)_{\restriction B}$ for all $i\in\omega$. We need to show that $I$ is a Morley sequence in $\mathfrak p$ over $B$. Note that $(a_i,a_{i+1})\models(\mathfrak p^2)_{\restriction A}$ for each $i\in\omega$, so by order-triviality of $\mathfrak p$ over $A$, $I$ is a Morley sequence in $\mathfrak p$ over $A$. 
Since $a_0$ realizes $\mathfrak p_{\restriction B}$, by Proposition \ref{Prop_basic_order_trivial}, $I$ is a Morley sequence in $\mathfrak p$ over $B$. Therefore, $\mathfrak p$ is order-trivial over $B$. \end{proof} \begin{Corollary}\label{Cor wom lifts to omega for o trivial} Let $\mathfrak p,\mathfrak q\in S(\Mon)$ be order-trivial over $A$ such that $p=\mathfrak p_{\restriction A}$ and $q=\mathfrak q_{\restriction A}$ are NIP types. \begin{enumerate}[\hspace{10pt}(a)] \item If $r\in S(A)$ and $p\wor r$, then $(\mathfrak p^{\omega})_{\restriction A}\wor r$. \item If $p\wor q$, then $(\mathfrak p^{\omega})_{\restriction A}\wor (\mathfrak q^{\omega})_{\restriction A}$. \end{enumerate} \end{Corollary} \begin{proof} (a) Let $c\models r$, $\bar a=(a_0,a_1,\dots)\models(\mathfrak p^\omega)_{\restriction Ac}$ and $\bar a'=(a_0',a_1',\dots)\models(\mathfrak p^\omega)_{\restriction A}$; we need to prove $\bar a\equiv\bar a'\,(Ac)$. Since $p\wor r$, $a_0\equiv a_0'\,(Ac)$, so $a_0'\models \mathfrak p_{\restriction Ac}$. By Proposition \ref{Prop_basic_order_trivial}, $\bar a'\models(\mathfrak p^\omega)_{\restriction Ac}$, and we are done. (b) By (a), $(\mathfrak p^\omega)_{\restriction A}\wor q$, so, again by (a), $(\mathfrak p^\omega)_{\restriction A}\wor (\mathfrak q^\omega)_{\restriction A}$. \end{proof} \section{Trivial weakly o-minimal types}\label{Section_trivial_wom} \begin{Lemma}\label{Lemma wom left trivial iff right trivial} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$. The following are equivalent: \begin{enumerate}[\hspace{10pt}(1)] \item $\mathbf p_r$ is trivial over $A$; \item Every $\triangleleft^{\mathbf p}$-increasing sequence of realizations of $p$ is Morley in $\mathbf p_r$ over $A$; \item $\mathbf p_r$ is order-trivial over $A$; \item Every $\triangleleft^{\mathbf p}$-decreasing sequence of realizations of $p$ is Morley in $\mathbf p_l$ over $A$; \item $\mathbf p_l$ is order-trivial over $A$.
\end{enumerate} \end{Lemma} \begin{proof} Recall that $(p(\Mon), \triangleleft^{\mathbf p})$ is a strict partial order and that, by Remark \ref{Remark_trianglep_binary_basicprops}(b), $a\triangleleft^{\mathbf p}b$, $(a,b)\models (\mathbf p_r^2)_{\restriction A}$, and $(b,a)\models (\mathbf p_l^2)_{\restriction A}$ are mutually equivalent for all $a,b\models p$; in particular, $(\mathbf p_l^2)_{\restriction A}(x,y)=(\mathbf p_r^2)_{\restriction A}(y,x)$. (1)$\Rightarrow$(2) Suppose that $\mathbf p_r$ is trivial over $A$ and let $I=(a_i\mid i\in \omega)$ be a $\triangleleft^{\mathbf p}$-increasing sequence. Then for all $i<j<\omega$ we have $a_i\triangleleft^{\mathbf p} a_j$; hence, $(a_i,a_j)\models (\mathbf p_r^2)_{\restriction A}$. Due to the triviality of $\mathbf p_r$, $I$ is a Morley sequence in $\mathbf p_r$ over $A$. (2)$\Leftrightarrow$(3) Every $\triangleleft^{\mathbf p}$-increasing sequence is Morley in $\mathbf p_r$ over $A$ if and only if: \begin{enumerate} \item[(r)] For all $n\in \mathbb N$ the type $\bigcup_{i<n}(\mathbf p_r^2)_{\restriction A}(x_i,x_{i+1})$ has a unique completion over $A$. \end{enumerate} By Remark \ref{Remark_basic_ordertrivial}, this is equivalent to the order-triviality of $\mathbf p_r$ over $A$. As (3)$\Rightarrow$(1) is immediate from the definitions, we conclude (1)$\Leftrightarrow$(2)$\Leftrightarrow$(3). Similarly, (4)$\Leftrightarrow$(5). (2)$\Leftrightarrow$(4) Every $\triangleleft^{\mathbf p}$-increasing sequence of realizations of $p$ is Morley in $\mathbf p_r$ over $A$ if and only if condition (r) holds. As $(\mathbf p_l^2)_{\restriction A}(x,y)=(\mathbf p_r^2)_{\restriction A}(y,x)$, condition (r) is equivalent to: \begin{enumerate} \item[(l)] For all $n\in \mathbb N$ the type $\bigcup_{i<n}(\mathbf p_l^2)_{\restriction A}(x_i,x_{i+1})$ has a unique completion over $A$, \end{enumerate} which holds if and only if every $\triangleleft^{\mathbf p}$-decreasing sequence of realizations of $p$ is Morley in $\mathbf p_l$ over $A$.
\end{proof} \begin{Definition} A weakly o-minimal type $p\in S(A)$ is {\em trivial} if one (equivalently, both) of its $A$-invariant globalizations is (order-) trivial over $A$. \end{Definition} \noindent{\bf Convention.} \ For $\mathbf p=(p,<)$ a weakly o-minimal pair over $A$, by $x_1\triangleleft^{\mathbf p} x_2\triangleleft^{\mathbf p} \dots \triangleleft^{\mathbf p}x_{n}$ we will denote the partial type $\bigcup_{1\leqslant i< n} (\mathbf p_r^2)_{\restriction A}(x_i,x_{i+1})$. \begin{Remark}\label{Remark trivial wom expressing via triangleleft} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$. As a consequence of Lemma \ref{Lemma wom left trivial iff right trivial} we have the following: \begin{enumerate}[\hspace{10pt}(a)] \item $p$ is trivial if and only if $x_1\triangleleft^{\mathbf p} x_2\triangleleft^{\mathbf p} \dots \triangleleft^{\mathbf p}x_{n}$ determines a complete type for all $n\in\mathbb N$. \item If $p$ is trivial, then a sequence $I=(a_i\mid i< \omega)$ of realizations of $p$ is Morley in a nonforking globalization of $p$ over $A$ if and only if $I$ is strictly $\triangleleft^{\mathbf p}$-monotone for some (all) relatively $A$-definable order $<$; equivalently, if the sequence $(\mathcal D_p(a_i)\mid i<\omega)$ is strictly $<$-monotone. \end{enumerate} \end{Remark} \begin{Lemma}\label{Lemma_triv_preserved_in_nonforking} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$. Suppose that $p$ is trivial. \begin{enumerate}[\hspace{10pt}(a)] \item Every nonforking extension of $p$ is trivial. \item The sequence $I=(a_i\mid i< \omega)$ of realizations of $p$ is a Morley sequence in $\mathbf p_r$ over $B\supseteq A$ if and only if \ $B\triangleleft^{\mathbf p} a_0\triangleleft^{\mathbf p}a_1\triangleleft^{\mathbf p} a_2\triangleleft^{\mathbf p}\dots$. \end{enumerate} \end{Lemma} \begin{proof}(a) Let $q\in S(B)$ be a nonforking extension of $p$. Then $q=(\mathbf p_r)_{\restriction B}$ or $q=(\mathbf p_l)_{\restriction B}$. 
Since $p$ is trivial, according to Lemma \ref{Lemma wom left trivial iff right trivial} both $\mathbf p_r$ and $\mathbf p_l$ are order-trivial over $A$, so by Corollary \ref{Cor_otrivialoverA_otrivialoverB}, they are order-trivial over $B$. In particular, $q$ is trivial. (b) First, assume \ $B\triangleleft^{\mathbf p} a_0\triangleleft^{\mathbf p}a_1\triangleleft^{\mathbf p} a_2\triangleleft^{\mathbf p}\dots$. Since $I$ is strictly $\triangleleft^{\mathbf p}$-increasing, by Remark \ref{Remark trivial wom expressing via triangleleft}, it is a Morley sequence in $\mathbf p_r$ over $A$. Since $a_0\models (\mathbf p_r)_{\restriction B}$, Proposition \ref{Prop_basic_order_trivial} applies: $I$ is a Morley sequence in $\mathbf p_r$ over $B$. This proves one direction of the equivalence. To prove the other, assume $I$ is Morley in $\mathbf p_r$ over $B$. Then $a_0\models (\mathbf p_r)_{\restriction B}$ implies $B\triangleleft^{\mathbf p} a_0$. Since $I$ is Morley over $A$, $a_0\triangleleft^{\mathbf p}a_1\triangleleft^{\mathbf p} a_2\triangleleft^{\mathbf p}\dots$ follows. Therefore, \ $B\triangleleft^{\mathbf p}a_0\triangleleft^{\mathbf p}a_1\triangleleft^{\mathbf p} a_2\triangleleft^{\mathbf p}\dots$ \end{proof} In the following lemma, we establish an important property of trivial types, which will be referred to as the strong transitivity property later in the text. \begin{Lemma}[Strong transitivity]\label{Lemma_trivial_strong_trans} Suppose that $\mathbf p$ and $\mathbf q$ are directly nonorthogonal weakly o-minimal pairs over $A$ and that $p$ is trivial. Then $B\triangleleft^{\mathbf p}a\triangleleft^{\mathbf q} b$ implies $Ba\triangleleft^{\mathbf q} b$ for all parameters $B$. \end{Lemma} \begin{proof} Assume $B\triangleleft^{\mathbf p}a\triangleleft^{\mathbf q}b$. Choose $a'$ with $a\triangleleft^{\mathbf p}a'\triangleleft^{\mathbf q}b$; that is possible by the density property (Corollary \ref {Cor_triangle_mathbf_pq_properties}(b)). 
By Lemma \ref{Lemma_triv_preserved_in_nonforking}(b), $B\triangleleft^{\mathbf p}a\triangleleft^{\mathbf p}a'$ implies that $(a,a')$ is a Morley sequence in $\mathbf p_r$ over $B$, so $Ba\triangleleft^{\mathbf p} a'$ holds. By the transitivity property (Corollary \ref {Cor_triangle_mathbf_pq_properties}(6)) $Ba\triangleleft^{\mathbf p}a'\triangleleft^{\mathbf q}b$ implies $Ba\triangleleft^{\mathbf q}b$. \end{proof} \begin{Lemma}\label{Lemma_trivial_iff_Dp=Dq} A weakly o-minimal type $p\in S(A)$ is trivial if and only if $\mathcal D_q(a)=\mathcal D_p(a)$ for all nonforking extensions $q$ of $p$ and all $a\models q$. \end{Lemma} \begin{proof} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$. ($\Rightarrow$) Assume that $p$ is trivial. Let $q\in S(B)$ be a nonforking extension of $p$ and let $a\models q$; we need to prove $\mathcal D_p(a)=\mathcal D_q(a)$. As $q$ is a nonforking extension of $p$, $a$ is left or right $\mathbf p$-generic over $B$; without loss, assume $B\triangleleft^{\mathbf p} a$. By Theorem \ref{Theorem_triangle_mathcal F}(g) we know that for all $b\models p$, $B\triangleleft^{\mathbf p}a\land a\dep_A b$ implies $B\triangleleft^{\mathbf p}b\land a\dep_Bb$; hence, $\mathcal D_p(a)\subseteq \mathcal D_q(a)$. If this inclusion were proper, then there would be $a'\in\mathcal D_q(a)$ with $a\ind_A a'$; in particular, $a'\models q$ and thus $B\triangleleft^{\mathbf p}a'$. By the $\triangleleft^{\mathbf p}$-comparability property, Corollary \ref {Cor_triangle_mathbf_pq_properties}(5), $a\ind_A a'$ implies $a\triangleleft^{\mathbf p}a'$ or $a'\triangleleft^{\mathbf p}a$. In the first case, we have $B\triangleleft^{\mathbf p}a\triangleleft^{\mathbf p}a'$, which, by Lemma \ref{Lemma_triv_preserved_in_nonforking}(b), implies that $(a,a')$ is Morley over $B$ and, in particular, $a'\ind_B a$; this contradicts $a'\in \mathcal D_q(a)$. The second case is dealt with similarly. Therefore, $\mathcal D_p(a)=\mathcal D_q(a)$, as desired. 
($\Leftarrow$) Suppose that $\mathcal D_p(a)=\mathcal D_q(a)$ holds for all nonforking extensions $q$ and all $a\models q$. We will prove that every $\triangleleft^{\mathbf p}$-increasing sequence $I=(a_i\mid i\in \omega)$ of realizations of $p$ is Morley in $\mathbf p_r$ over $A$; by Lemma \ref{Lemma wom left trivial iff right trivial}, this implies that $p$ is trivial. By induction on $n>0$, we prove that $(a_0,a_1,\dots,a_n)$ is Morley (in $\mathbf p_r$ over $A$). The case $n=1$ is clear, so assume that $(a_0,a_1,\dots,a_n)$ is Morley for some $n>0$; in particular, $a_{<n}\triangleleft^{\mathbf p} a_n\triangleleft^{\mathbf p}a_{n+1}\triangleleft^{\mathbf p}\ldots$. Put $q=(\mathbf p_r)_{\restriction Aa_{<n}}$ and $\mathbf q=(q,<)$. Then $q$ is a nonforking extension of $p$, $\mathbf p_r=\mathbf q_r$, and $a_n,a_{n+1}\models q$. By the above assumption, we have $\mathcal D_p(a_n)=\mathcal D_q(a_{n})$ and $\mathcal D_p(a_{n+1})=\mathcal D_q(a_{n+1})$. As $I$ is $\triangleleft^{\mathbf p}$-increasing, we get $\mathcal D_p(a_n)<\mathcal D_p(a_{n+1})$; hence $\mathcal D_q(a_n)<\mathcal D_q(a_{n+1})$; the latter implies that $(a_n,a_{n+1})$ is Morley in $\mathbf q_r$ over $Aa_{<n}$, so $(a_0,\ldots,a_n,a_{n+1})$ is Morley in $\mathbf p_r$ over $A$, finishing the proof. \end{proof} \begin{Lemma}\label{Lemma trivial wom morley in nonforking is left or right morley} Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$ such that $p$ is trivial, and let $q\in S(B)$ be a nonforking extension of $p$. Suppose that $\mathfrak q$ is a global nonforking extension of $q$. Then every Morley sequence in $\mathfrak q$ over $B$ is a Morley sequence in $\mathbf p_r$ or $\mathbf p_l$ over $A$. \end{Lemma} \begin{proof}By Lemma \ref{Lemma_triv_preserved_in_nonforking}, the triviality of $p$ is preserved in nonforking extensions, so $q$ is trivial. Suppose that $I=(a_i\mid i\in\omega)$ is a Morley sequence in $\mathfrak q$ over $B$.
By Remark \ref{Remark trivial wom expressing via triangleleft}, the sequence $(\mathcal D_q(a_i)\mid i\in\omega)$ is strictly $<$-monotone, so by Lemma \ref{Lemma_trivial_iff_Dp=Dq} the sequence $(\mathcal D_p(a_i)\mid i\in\omega)$ is also strictly $<$-monotone; by Remark \ref{Remark trivial wom expressing via triangleleft} again, $I$ is a Morley sequence in $\mathbf p_r$ or $\mathbf p_l$ over $A$. \end{proof} \begin{Remark}\label{Remark_notrivial_wom_witness}Let $\mathbf p=(p,<)$ be a weakly o-minimal pair over $A$. \begin{enumerate}[\hspace{10pt}(a)] \item Recall that a sequence $(a_1,a_2,a_3)$ of realizations of $p$ is a $\mathbf p_r$-triangle over $A$ if it is not Morley in $\mathbf p_r$ over $A$, but every pair $(a_i,a_j) \ (i<j)$ is; that is, $a_1 \triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf p} a_3$ and $a_1a_2a_3\not\models (\mathbf p_r^3)_{\restriction A}(x,y,z)$. In other words, $(a_1,a_2,a_3)$ is a $\mathbf p_r$-triangle over $A$ if and only if the type $\tp_{xyz}(a_1a_2a_3/A)$ is a complete extension of the partial type $x\triangleleft^{\mathbf p} y\triangleleft^{\mathbf p}z$ that is different from $(\mathbf p_r^3)_{\restriction A}(x,y,z)$. In particular, the type $x\triangleleft^{\mathbf p} y\triangleleft^{\mathbf p}z$ is incomplete if and only if $\mathbf p_r$-triangles over $A$ exist. \item Similarly as in (a), we see that the types of $\mathbf p_l$-triangles over $A$ correspond to completions of $x\triangleleft^{\mathbf p} y\triangleleft^{\mathbf p}z$ that are different from $(\mathbf p_l^3)_{\restriction A}(z,y,x)$. In particular, $\mathbf p_r$-triangles over $A$ exist if and only if $\mathbf p_l$-triangles over $A$ exist. \item $(a_1,a_2,a_3)$ is a $\mathbf p_r$-triangle over $A$ if and only if $a_1 \triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf p} a_3$ and $a_1a_2\not\triangleleft^{\mathbf p} a_3$.
Since by Theorem \ref{Theorem_triangle_mathcal F}, $a_1 \triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf p} a_3$ and $a_2\ind _{Aa_1} a_3$ together imply $a_1a_2\triangleleft^{\mathbf p}a_3$, $(a_1,a_2,a_3)$ is a $\mathbf p_r$-triangle over $A$ if and only if \ $a_1 \triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf p} a_3 \ \ \mbox{and} \ \ a_2\dep_{Aa_1} a_3.$ \item By Remark \ref{Remark_trivial_basic2} the nontriviality of $\mathbf p_r$ over $A$ can be witnessed by a finite (possibly empty) Morley sequence $\bar b$ in $\mathbf p_r$ over $A$ and elements $a_1,a_2,a_3$ that form a $\mathbf p_r$-triangle over $A\bar b$. This can be expressed by: \ $\bar b\triangleleft^{\mathbf p} a_1 \triangleleft^{\mathbf p_{\bar b}}a_2\triangleleft^{\mathbf p_{\bar b}} a_3 \ \ \ \mbox{and} \ \ \ a_2\dep_{A\bar ba_1} a_3$ \ (where $\mathbf p_{\bar b}=((\mathbf p_r)_{\restriction A\bar b},<)$). \end{enumerate} \end{Remark} \begin{Lemma} The following are equivalent for a weakly o-minimal pair $\mathbf p=(p,<)$ over $A$. \begin{enumerate}[\hspace{10pt} (1)] \item There exists a $\mathbf p_r$-triangle over $A$; \item There exists a $\mathbf p_l$-triangle over $A$; \item $(\mathbf p_l)_{\restriction Aa}\nwor (\mathbf p_r)_{\restriction Aa}$ \ for all $a\models p$. \item The type $x\triangleleft^{\mathbf p}y\triangleleft^{\mathbf p}z$ has at least two completions in $S_{xyz}(A)$; \item $(\mathbf p_r^3)_{\restriction A}(x,y,z)\neq (\mathbf p_l^3)_{\restriction A}(z,y,x)$; \item If $(a,b,c)\models (\mathbf p_r^3)_{\restriction A}(x,y,z)$, then $(c,b,a)$ is a $\mathbf p_l$-triangle over $A$ (that is, $a\dep_{Ac}b$); \item If $(a,b,c)\models (\mathbf p_l^3)_{\restriction A}(x,y,z)$, then $(c,b,a)$ is a $\mathbf p_r$-triangle over $A$ (that is, $a\dep_{Ac}b$). \end{enumerate} \end{Lemma} \begin{proof} (1)$\Leftrightarrow$(2)$\Leftrightarrow$(4) and (5)$\Leftrightarrow$(6)$\Leftrightarrow$(7) follow by Remark \ref{Remark_notrivial_wom_witness}. (3)$\Leftrightarrow$(4) Let $a\models p$. 
The type $x\triangleleft^{\mathbf p}y\triangleleft^{\mathbf p}z$ has at least two completions in $S_{xyz}(A)$ if and only if the type $x\triangleleft^{\mathbf p}a\triangleleft^{\mathbf p}z$ has at least two completions in $S_{xz}(Aa)$. The latter is equivalent to $(\mathbf p_l)_{\restriction Aa}\nwor (\mathbf p_r)_{\restriction Aa}$, because $p(x)\cup \{x\triangleleft^{\mathbf p}a\}$ is equivalent to $(\mathbf p_l)_{\restriction Aa}(x)$ and $p(z)\cup \{a\triangleleft^{\mathbf p}z\}$ to $(\mathbf p_r)_{\restriction Aa}(z)$. (5)$\Rightarrow$(4) is immediate, so it remains to prove (4)$\Rightarrow$(5). Assume (4). Let $(a,b,c)$ be a Morley sequence in $\mathbf p_r$ over $A$ and let $(c,b',a)$ be a Morley sequence in $\mathbf p_l$ over $A$. We will prove $b\not\equiv b'\,(Aac)$; this implies (5). Denote $p_a=(\mathbf p_r)_{\restriction Aa}$ and $\mathbf p_a=(p_a,<)$, and consider $\Pi(x)=a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}c$ as a partial type over $Aac$. By Remark \ref{Remark_pgeneric_first}(c), the set $p_a(\Mon)$, which is defined by $a\triangleleft^{\mathbf p}x$, is a final part of $(p(\Mon),<)$. Similarly, $x\triangleleft^{\mathbf p}c$ defines an initial part of $(p(\Mon),<)$, so $\Pi(\Mon)$ is a convex subset of $(p(\Mon),<)$. If $q\in S_x(Aac)$ is a completion of $\Pi(x)$, then $q$ is an extension of $p$, so by the weak o-minimality of $p$, $q(\Mon)$ is convex in $(p(\Mon),<)$ and hence in $(\Pi(\Mon),<)$; by (4), $q(\Mon)$ is a proper subset of $\Pi(\Mon)$. Now, we claim that the locus of $\tp(b/Aac)$, call it $P$, is an initial part of $\Pi(\Mon)$. To prove this, note that the sequence $(b,c)$ is Morley in $\mathbf p_r$ over $Aa$, so $b\triangleleft^{\mathbf p_a}c$, that is, $b$ is left $\mathbf p_a$-generic over $c$. We conclude that $P$ consists of all left $\mathbf p_a$-generic elements over $c$; by Remark \ref{Remark_pgeneric_first}(c), $P$ is an initial part of $p_a(\Mon)$.
Since $\Pi(\Mon)$ is an initial part of $p_a(\Mon)$ and $P\subseteq \Pi(\Mon)$, we conclude that $P$ is an initial part of $\Pi(\Mon)$. Analogously, we see that the locus of $\tp(b'/Aac)$ is a proper final part of $\Pi(\Mon)$. Therefore, $b\not\equiv b'\,(Aac)$. \end{proof} \begin{Theorem}\label{Theorem_nwor preserves triviality} \begin{enumerate}[\hspace{10pt}(a)] \item Nonforking extensions of a trivial weakly o-minimal type are all trivial, but the triviality of some nonforking extension does not guarantee triviality of the type. \item Triviality is preserved under $\nwor$ of weakly o-minimal types over the same domain. \item Weak orthogonality of trivial types transfers to their nonforking extensions. \end{enumerate} \end{Theorem} \begin{proof}(a) The first clause is Lemma \ref{Lemma_triv_preserved_in_nonforking}(a); the second will be justified in Example \ref{Example_trivover0}. (b) Let $p$ and $q$ be weakly o-minimal types over $A$ with $p\nwor q$. Assuming that $p$ is nontrivial, we will prove that $q$ is also nontrivial. Choose relatively $A$-definable orders $<_p$ and $<_q$ such that the weakly o-minimal pairs $\mathbf p=(p,<_p)$ and $\mathbf q=(q,<_q)$ are directly nonorthogonal. By Remark \ref{Remark_notrivial_wom_witness}(d) there are $D\supseteq A$ and $a_1,a_2,a_3$ such that $D\triangleleft^{\mathbf p}a_1$, $Da_1\triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf p} a_3$ and $a_2\dep_{Da_1} a_3$. By the density property, Corollary \ref {Cor_triangle_mathbf_pq_properties}(b), there are $b,b'$ satisfying $a_2\triangleleft^{\mathbf q}b\triangleleft^{\mathbf q}b'\triangleleft^{\mathbf p}a_3$. By Theorem \ref{Theorem_triangle_mathcal F}, $Da_1\triangleleft^{\mathbf p}a_2\triangleleft^{\mathbf q} b\triangleleft^{\mathbf q}b'\triangleleft^{\mathbf p} a_3$ and $a_2\dep_{Da_1} a_3$ together imply $b\dep_{Da_1} b'$. 
If $q$ were trivial, then by the strong transitivity property, Lemma \ref{Lemma_trivial_strong_trans}, $Da_1\triangleleft^{\mathbf q}b\triangleleft^{\mathbf q}b'$ would imply $Da_1b\triangleleft^{\mathbf q}b'$, which contradicts $b\dep_{Da_1} b'$. Therefore, $q$ is nontrivial, proving (b). (c) Suppose that $p,q\in S(A)$ are trivial weakly o-minimal types with $p\wor q$ and that $p_1,q_1\in S(B)$ are their nonforking extensions. We need to prove $p_1\wor q_1$. Suppose on the contrary that $p_1\nwor q_1$. By part (a) of the theorem, both $p_1$ and $q_1$ are trivial, since $p$ and $q$ are. Choose relatively $A$-definable orders $<_p$ and $<_q$ such that the weakly o-minimal pairs $\mathbf p_1=(p_1,<_p)$ and $\mathbf q_1=(q_1,<_q)$ are directly nonorthogonal. Let $\mathbf p=(p,<_p)$ and $\mathbf q=(q,<_q)$. Choose $a_0,a_1\models p_1$ and $b_0,b_1\models q_1$ such that \ $a_0\triangleleft^{\mathbf q_1}b_0\triangleleft^{\mathbf q_1}b_1\triangleleft^{\mathbf p_1}a_1$. Then \ $\tp(a_0b_0/B)\neq \tp(a_1b_1/B)$. To see this, first observe that $p_1\nwor q_1$ implies $\tp(a_0b_0/B)\neq \tp(a_1b_0/B)$ as $a_0$ is left $\mathbf p_1$-generic and $a_1$ is right $\mathbf p_1$-generic over $b_0$; also observe that $b_0\triangleleft^{\mathbf q_1}b_1\triangleleft^{\mathbf p_1}a_1$ implies $\tp(a_1b_0/B)=\tp(a_1b_1/B)$; therefore, $\tp(a_0b_0/B)\neq \tp(a_1b_1/B)$. Witness this by a formula $\phi(x,y,c)\in \tp(a_0b_0/B)$ such that $\phi(x,y,c)\notin \tp(a_1b_1/B)$. Define sequences $I=(a_n\mid n<\omega)$ and $J=(b_n\mid n<\omega)$ such that the following holds for all $n\geqslant 1$: $$a_{2n-1}\triangleleft^{\mathbf p_1}a_{2n}\ \ \ \ \mbox{ and }\ \ \ \ a_0b_0a_1b_1\equiv a_{2n}b_{2n}a_{2n+1}b_{2n+1}\, (B).$$ Note that for all $n<\omega$, $a_{2n}\triangleleft^{\mathbf q_1}b_{2n}\triangleleft^{\mathbf q_1}b_{2n+1}\triangleleft^{\mathbf p_1}a_{2n+1}\triangleleft^{\mathbf p_1}a_{2n+2}$ holds.
Hence, $I$ is strictly $\triangleleft^{\mathbf p_1}$-increasing, so, by Remark \ref{Remark trivial wom expressing via triangleleft}, $I$ is a Morley sequence in $(\mathbf p_1)_r$ over $B$. Similarly, $J$ is a Morley sequence in $(\mathbf q_1)_r$ over $B$. By Lemma \ref{Lemma trivial wom morley in nonforking is left or right morley}, $I$ is a Morley sequence in a nonforking globalization of $p$ over $A$; as it is $<_p$-increasing, $I$ is a Morley sequence in $\mathbf p_r$ over $A$. Similarly, $J$ is a Morley sequence in $\mathbf q_r$ over $A$. By Corollary \ref{Cor wom lifts to omega for o trivial}(b), $\tp(I/A)\wor \tp(J/A)$ as $p\wor q$. In particular, $I$ and $J$ are mutually indiscernible over $A$ and the sequence $(a_nb_n\mid n<\omega)$ is indiscernible over $A$. Since $a_0b_0a_1b_1\equiv a_{2n}b_{2n}a_{2n+1}b_{2n+1}\,(B)$ implies $\models\phi(a_{2n},b_{2n},c)\land\lnot \phi(a_{2n+1},b_{2n+1},c)$, the type $\tp(a_0b_0/A)$ is not NIP. Contradiction. \end{proof} For directly nonorthogonal weakly o-minimal pairs we can strengthen part (c) of the theorem: weak and forking nonorthogonality transfer in both directions, from and to nonforking extensions. \begin{Corollary} Suppose that $\mathbf p=(p,<_p)$ and $\mathbf q=(q,<_q)$ are weakly o-minimal pairs over $A$. Let $B\supseteq A$, $p_B=(\mathbf p_r)_{\restriction B}$, and $q_B=(\mathbf q_r)_{\restriction B}$. Suppose that $p$ and $q$ are trivial types. \begin{enumerate}[\hspace{10pt}(a)] \item $p\wor q \Rightarrow p_B\wor q_B$ \ and \ $\delta_A(\mathbf p,\mathbf q) \Rightarrow p_B\nwor q_B$. \item $\delta_A(\mathbf p,\mathbf q)$ implies \ $p\fwor q \Leftrightarrow p_B\fwor q_B$. \end{enumerate} \end{Corollary} \begin{proof} (a) The first claim is Theorem \ref{Theorem_nwor preserves triviality}(c) and the second follows by Fact \ref{Fact_delta_implies_deltaB_fworB}(a). (b) Assume $\delta_A(\mathbf p,\mathbf q)$.
By Fact \ref{Fact_delta_implies_deltaB_fworB}(b) we have $p\nfwor q \Rightarrow p_B\nfwor q_B$. To prove the other direction, assume $p\fwor q$ and let $a\models p_B$ and $b\models q_B$; we need to prove $a\ind_B b$. $p\fwor q$ implies $a\ind_A b$, so $a\triangleleft^{\mathbf q} b$ or $b\triangleleft^{\mathbf p} a$. Without loss, assume $a\triangleleft^{\mathbf q} b$. By the strong transitivity property, $B\triangleleft^{\mathbf p}a\triangleleft^{\mathbf q}b$ implies $Ba\triangleleft^{\mathbf q}b$; hence, $b\ind_B a$, as desired. \end{proof} We have already shown that every $\triangleleft^{\mathbf p}$-increasing sequence of realizations of a trivial type is Morley in $\mathbf p_r$. This generalizes to realizations of trivial types from the same $\delta_A$-class in the following way: \begin{Definition} Let $\mathcal F$ be a $\delta_A$-class of weakly o-minimal pairs over $A$ and let $(I,<_I)$ be a linear order. A sequence $(a_i\mid i\in I)$ of realizations of types from $\mathcal F$ is {\em $\mathcal F$-independent over $A$} if $(a_{j}\mid j<i)\triangleleft^{\mathcal F} a_i$ holds for all $i\in I$. \end{Definition} \begin{Proposition} Let $\mathcal F$ be a $\delta_A$-class such that the types appearing in $\mathcal F$ are trivial. Then every $\triangleleft^{\mathcal F}$-increasing sequence of realizations of types from $\mathcal F$ is $\mathcal F$-independent over $A$. \end{Proposition} \begin{proof} It suffices to prove the claim for finite sequences. We proceed by induction on the length of a sequence. The claim is trivial for sequences of length $\leqslant 2$, so suppose that we have a $\triangleleft^{\mathcal F}$-increasing sequence $(a_1,\dots,a_n,a_{n+1})$ for $n\geqslant 2$. By the induction hypothesis, $(a_j\mid j<i)\triangleleft^{\mathcal F}a_i$ holds for all $i\leqslant n$. In particular, we have $(a_j\mid j<n)\triangleleft^{\mathcal{F}} a_n\triangleleft^{\mathcal{F}}a_{n+1}$.
By strong transitivity, Lemma \ref{Lemma_trivial_strong_trans}, we conclude $(a_j\mid j\leqslant n)\triangleleft^{\mathcal F}a_{n+1}$, and we are done. \end{proof} \subsection{Trivial 1-types in o-minimal theories} Throughout this subsection, $T$ denotes an o-minimal theory (with respect to some dense linear order $<$). Recall that an element $a$ of an o-minimal structure is said to be non-trivial if there is an open interval $I$ containing $a$ and a definable function $f:I\times I\to \Mon$ that is strictly increasing in both coordinates; otherwise, $a$ is trivial. By the Trichotomy Theorem (\cite{PeS}), $a$ is non-trivial if and only if there is a definable group structure on some open, convex neighborhood of $a$. On the other hand, as shown in \cite{MRS}, all elements of $\Mon$ are trivial if and only if $(\Mon,\dcl)$ is a degenerate pregeometry if and only if $T$ is binary. In fact, if $a$ is trivial, then it follows from Lemmas 2.1 and 2.2 of \cite{MRS} that $\tp(a)$ is trivial. In general, the converse is not true, as illustrated by the following example. \begin{Example}\phantomsection\label{Example_trivial_real_additive} Consider the ordered group of reals $(\mathbb R,+,<,0)$; clearly, it is an o-minimal structure. There is a unique type $p\in S_1(\emptyset)$ that contains $0<x$ and the pair $\mathbf p=(p,<)$ is weakly o-minimal. For all $A$, the type $(\mathbf p_{r})_{\restriction A}$ is determined by $Gp(A)<x$, where $Gp(A)$ is the subgroup generated by $A$. Using this, it is easy to see that $\mathbf p_r$ is order-trivial over $\emptyset$: an increasing sequence of realizations of $p$, $(a_i\mid i\in\omega)$, is Morley in $\mathbf p_r$ over $\emptyset$ if and only if $Gp(a_i)<a_{i+1}$ holds for all $i\in \omega$, or equivalently, if $a_i\triangleleft^{\mathbf p} a_{i+1}$ (for all $i\in\omega$). Therefore, $\mathbf p_r$ is trivial over $\emptyset$, but the $\mathrm{dcl}$-pregeometry on $p(\Mon)$ is not degenerate. 
\end{Example} Therefore, the triviality of the type is not equivalent to the triviality of its realizations. In the proof of Proposition \ref{Proposition_Ramakrishnan}, we will see that the triviality of $\tp(+\infty)$ is related to the existence of ``uniform bounds on growth'' of functions, studied by Friedman and Miller in \cite{FM} and Ramakrishnan in \cite{Ramakrishnan}; we adapt the existence of uniform bounds to the context of arbitrary types as follows: \begin{Definition}\label{Definition_p_germs_bdd} Let $\mathfrak p\in S_1(\Mon)$. We say that {\it $\mathfrak p$-germs are $A$-bounded} if $\mathfrak p$ is $A$-invariant and for all $B\supseteq A$ and all relatively $B$-definable functions $f:\mathfrak p_{\restriction B}(\Mon)\to \mathfrak p_{\restriction B}(\Mon)$, there exists a relatively $A$-definable function $g:p(\Mon)\to p(\Mon)$ such that $f(x)\leqslant g(x)$ for all $x\in \mathfrak p_{\restriction B}(\Mon)$. \end{Definition} Equivalently, $\mathfrak p$-germs are $A$-bounded if every relatively $\Mon$-definable function mapping the locus of $\mathfrak p$ in a larger monster into itself is bounded by a relatively $A$-definable such function. \begin{Remark}\label{Remark_boundedgerms} If $\mathfrak p$-germs are $A$-bounded and $B,f$ are as in Definition \ref{Definition_p_germs_bdd}, then there are relatively $A$-definable functions $g,g' :\mathfrak p_{\restriction B}(\Mon)\to \mathfrak p_{\restriction B}(\Mon)$ with $g'(x)\leqslant f(x)\leqslant g(x)$. By the Monotonicity Theorem, $f$ is strictly increasing, so there is a relatively $A$-definable $g_1$ with $f^{-1}(x)\leqslant g_1(x)$. Since $f$ and $g_1$ are strictly increasing, applying $g_1^{-1}$ to $x=f^{-1}(f(x))\leqslant g_1(f(x))$ yields $g_1^{-1}(x)\leqslant f(x)$, so $g'=g_1^{-1}$ works. \end{Remark} Let $p\in S_1(A)$ and let $a\models p$.
Recall that the set $\mathcal D_p(a)$ is the union of all relatively $Aa$-definable subsets of $p(\Mon)$ that are bounded in $(p(\Mon),<)$; since $p(\Mon)$ is a convex subset of the monster, each of these bounded subsets is included in a segment with endpoints in $\dcl(Aa)\cap p(\Mon)$. Therefore, the set $\mathcal D_p(a)$ equals the union of all segments $[f(a),g(a)]$, where $f,g:p(\Mon)\to p(\Mon)$ are relatively $A$-definable functions; $\mathcal D_p(a)$ can also be described as the convex hull of $\dcl(Aa)\cap p(\Mon)$. \begin{Lemma}\label{Lemma_ominimal} A type $p\in S_1(A)$ (in an o-minimal theory) is trivial if and only if $\mathbf p_r$-germs are $A$-bounded, if and only if $\mathbf p_l$-germs are $A$-bounded. \end{Lemma} \begin{proof}We will prove that $p\in S_1(A)$ is trivial if and only if $\mathbf p_r$-germs are $A$-bounded; a similar proof establishes the other equivalence. First, suppose that $p$ is trivial and $B\supseteq A$. Denote $q=(\mathbf p_r)_{\restriction B}$ and let $f:q(\Mon)\to q(\Mon)$ be relatively $B$-definable. By Lemma \ref{Lemma_trivial_iff_Dp=Dq}, for any $a\models q$ we have $\mathcal D_p(a)=\mathcal D_q(a)$. Hence, $f(a)\in \mathcal D_q(a)=\mathcal D_p(a)$, so there exists $b\in \mathcal D_p(a)\cap\dcl(Aa)$ such that $f(a)\leqslant b$; clearly, $b=g(a)$ for some relatively $A$-definable function $g$. To prove the other implication, suppose that $\mathbf p_r$-germs are $A$-bounded. First, we {\it claim} that $\mathcal D_{(\mathbf p_r)_{\restriction B}}(a)=\mathcal D_p(a)$ holds for all $B\supseteq A$ and all $a\models (\mathbf p_r)_{\restriction B}$. To prove it, let $[f_1(a),f_2(a)]\subseteq \mathcal D_{(\mathbf p_r)_{\restriction B}}(a)$, where $f_1(a),f_2(a)\in \dcl(Ba)$. By Remark \ref{Remark_boundedgerms}, there are $g_1(a),g_2(a)\in \dcl(Aa)\cap p(\Mon)$ such that $g_1(a)\leqslant f_1(a)<f_2(a)\leqslant g_2(a)$. Then $[f_1(a),f_2(a)]\subseteq [g_1(a),g_2(a)]\subseteq \mathcal D_p(a)$; the claim follows.
Now, we will prove that $\mathbf p_r$ is trivial over $A$; that is, assuming that $I=(a_i\mid i\in \omega)$ is a sequence of realizations of $p$ such that $\mathcal D_p(a_0)<\mathcal D_p(a_1)< \dots$, we will show that $I$ is Morley in $\mathbf p_r$ over $A$. Put $p_0=(\mathbf p_r)_{\restriction Aa_0}$. Clearly, $a_i\models p_0$ for all $i\geqslant 1$, so $\mathcal D_{p_0}(a_i)=\mathcal D_p(a_i)$ holds by the above claim. Then $\mathcal D_{p_0}(a_1)<\mathcal D_{p_0}(a_2)<\dots$ implies that $(a_1,a_2)$ is Morley in $\mathbf p_r$ over $Aa_0$, so $(a_0,a_1,a_2)$ is Morley in $\mathbf p_r$ over $A$. Continuing in this way (induction), we see that $I$ is Morley in $\mathbf p_r$ over $A$. Therefore, $\mathbf p_r$ is trivial over $A$ and $p$ is trivial. \end{proof} \begin{Proposition}\label{Proposition_Ramakrishnan} If $(M,<, \dots)$ is a densely ordered o-minimal structure, then every definable type $p\in S_1(M)$ is trivial. \end{Proposition} \begin{proof} Suppose that $p$ is definable and nonalgebraic. After possibly modifying the order $<$, while preserving the o-minimality, we may assume that $p=\tp(+\infty/M)$. We will show that $\mathbf p_r$-germs are $M$-bounded; consequently, by Lemma \ref{Lemma_ominimal}, $p$ is trivial. Let $B\supseteq M$ and let $f:(\mathbf p_r)_{\restriction B}(\Mon)\to (\mathbf p_r)_{\restriction B}(\Mon)$ be a relatively $b$-definable function ($b\in B^n$); the condition that $f$ maps $(\mathbf p_r)_{\restriction B}(\Mon)$ into itself is expressed by the formula $\lim_{x\to +\infty}f(x)=+\infty$. By compactness, there are an $M$-definable set $D\subseteq \Mon^n$ and an $M$-definable function $F:D\times \Mon\to \Mon$ such that $b\in D$, $F(b,x)=f(x)$, and $\lim_{x\to +\infty}F(d,x)=+\infty$ (for all $d\in D$). By \cite[Theorem 1.2]{Ramakrishnan}, there exists an $M$-definable function $g:\Mon\to\Mon$ such that for all $d\in D$: $F(d,x)\leqslant g(x)$ holds for all sufficiently large $x\in M$.
In particular, $f(x)=F(b,x)\leqslant g(x)$ holds for some (equivalently, all) $x\in (\mathbf p_r)_{\restriction B}(\Mon)$, as desired. \end{proof} \begin{Example} Consider the ordered group of rationales $(\mathbb Q,+,<,0)$, elementarily embedded in the group of reals. By Proposition \ref{Proposition_Ramakrishnan} we know that all definable types belonging to $S_1(\mathbb Q)$ are trivial; we will show that these are the only trivial types there. Let $p\in S_1(\mathbb Q)$ be non-definable. Then there is an irrational number $\alpha$ such that $p=\tp(\alpha/\mathbb Q)$. Since $\alpha$ is the unique realization of $p$ in $\mathbb R$, for each $a\in p(\Mon)$ we have $\dcl(\mathbb Qa)\cap p(\Mon)=\{a\}$. This implies that there are exactly three extensions of $p(x)\cup p(y)$ in $S_2(a)$, determined by $x<y$, $x=y$, and $y<x$, respectively. In particular, every two distinct realizations of $p$ are independent over $\emptyset$. It follows that if $a<b$ are distinct realizations of $p$, then $(a,\frac{a+b}{2},b)$ is a triangle of realizations of $p$, so $p$ is non-trivial. \end{Example} \begin{Example}[$\emptyset$-invariant type which is trivial over $\{0\}$, but not over $\emptyset$]\label{Example_trivover0} Consider the affine reduct of the additive ordered group of reals $(\mathbb R,m,<)$, where $m(x,y,z):=x-y+z$. This structure is o-minimal with a unique type $p\in S_1(\emptyset)$ and exactly three complete 2-types determined by $x<y$, $y<x$, and $x=y$, respectively; for example, if $a<b$, then $x\longmapsto \frac{x-a}{b-a}$ is an automorphism mapping $(a,b)$ to $(0,1)$, so $\tp(a,b)=\tp(0,1)$. In particular, every pair of distinct elements is independent. However, $p$ is non-trivial, since the triple $(0,1,2)$ is not a Morley sequence in $\mathbf p_r$ due to $m(1,0,1)=2$. Let $p_0(x)$ be the complete type over $\{0\}$ determined by $0<x$; clearly, $p_0(x)=(\mathbf p_r)_{\restriction \{0\}}$. 
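For completeness, the map $\varphi(x)=\frac{x-a}{b-a}$ used above preserves $m$ (and is increasing, as $b-a>0$), so it is indeed an automorphism of $(\mathbb R,m,<)$:
\[
\varphi(m(x,y,z))=\frac{(x-y+z)-a}{b-a}=\frac{x-a}{b-a}-\frac{y-a}{b-a}+\frac{z-a}{b-a}=m(\varphi(x),\varphi(y),\varphi(z)).
\]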
As in Example \ref{Example_trivial_real_additive}, we may conclude that $p_0$ is trivial; this is because the structures $(\mathbb R,m,<,0)$ and $(\mathbb R,+,<,0)$ are interdefinable. Therefore, the type $\mathbf p_r$ is trivial over $\{0\}$ but not over $\emptyset$. \end{Example} \subsection{Omitting trivial weakly o-minimal types} In this subsection, we will prove that in a countable theory with few countable models, every trivial weakly o-minimal type over a finite domain is both convex and simple. Recall that a weakly o-minimal type $p\in S(A)$ is {\it simple} if $x\dep_Ay$ is a relatively definable (equivalence) relation on $p(\Mon)$ (in which case, by $\Aut_{A}(\Mon)$-invariance, it is relatively $A$-definable); that is, $\mathcal D_p=\{(x,y)\in p(\Mon)^2\mid x\dep_A y\}$ is relatively definable within $p(\Mon)^2$ or, equivalently, $\mathcal D_p(a)$ is relatively definable for some/all $a\models p$. In the next proposition, we characterize weakly o-minimal types that are both convex and simple; there (and later) by $a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}b$ we denote the partial type $p(x)\cup(\mathcal D_p(a)<x)\cup (x<\mathcal D_p(b))$ (where $\mathbf p=(p,<)$ is a weakly o-minimal pair over $A$). \begin{Proposition}\label{Prop middle definable iff simple and convex} Suppose that $\mathbf p=(p,<)$ is a weakly o-minimal pair over $A$ and $A\triangleleft^{\mathbf p}a\triangleleft^{\mathbf p}b$. Then $p$ is both simple and convex if and only if the type $a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}b$ has a definable locus. \end{Proposition} \begin{proof}Let $D$ denote the locus of $a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}b$. Notice that $D$ is a $\mathcal D_p$-closed subset of $p(\Mon)$: if $c\in D$ then $\mathcal D_p(c)\subseteq D$; this holds since, by Remark \ref{Remark_trianglep_binary_basicprops}(b), $a\triangleleft^{\mathbf p}c\triangleleft^{\mathbf p}b$ is equivalent to $a<\mathcal D_p(c)<b$.
$(\Rightarrow)$ Suppose that $D$ is definable. Then $D$ is $Aab$-definable, since it is clearly $Aab$-invariant. Let $\theta(y,x,z)$ be an $L_A$-formula such that $\theta(a,x,b)$ defines $D$. First, we prove that $p$ is simple, that is, $\mathcal D_p(e)$ is relatively definable within $p(\Mon)$ for some/all $e\models p$. In fact, we prove more: $\mathcal D_p(e)$ is definable. Without loss of generality, assume $e\in D$. We {\it claim} that $\mathcal D_p(e)$ is defined by the following formula: $$\sigma(x):=\theta(a,x,b)\land\lnot \theta(a,x,e)\land\lnot\theta(e,x,b).$$ Indeed, note that $\sigma(x)$ is equivalent to $a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}b\land x\ntriangleleft^{\mathbf p}e \land e\ntriangleleft^{\mathbf p}x$. For all $x\models p$, $x\ntriangleleft^{\mathbf p}e \land e\ntriangleleft^{\mathbf p}x$ expresses the $<$-incompatibility of the classes $\mathcal D_p(x)$ and $\mathcal D_p(e)$, so by Theorem \ref{Theorem4}(c) it is equivalent to $x\in \mathcal D_p(e)$. As $\mathcal D_p(e)\subset D$, we conclude that $\sigma(x)$ is equivalent to $x\in \mathcal D_p(e)$. This proves the claim, so $p$ is simple. It remains to prove that $p$ is convex. Note that: $$\{\theta(a,x_1,b),\theta(a,x_2,b),x_1<x<x_2\}\cup p(x)\vdash \theta(a,x,b);$$ indeed, if $e_1,e,e_2$ satisfy the left-hand side (for $x_1,x,x_2$ respectively), then $e_1,e,e_2\models p$ and $a\triangleleft^{\mathbf p}e_1<e<e_2\triangleleft^{\mathbf p}b$, so $a\triangleleft^{\mathbf p}e\triangleleft^{\mathbf p}b$, and therefore $\models\theta(a,e,b)$. By compactness, find $\pi(x)\in p(x)$ such that: \begin{equation}\label{eq convex}\tag{$\dagger$} (\forall x_1,x,x_2)(\theta(y,x_1,z)\land\theta(y,x_2,z)\land x_1<x<x_2\land\pi(x)\rightarrow \theta(y,x,z))\in\tp_{y,z}(a,b/A). \end{equation} Moreover, we can assume that $\pi(\Mon)\subseteq D$. We now {\em claim} that $\pi(x)$ witnesses the convexity of $p$: $p(\Mon)$ is a $<$-convex subset of $(\pi(\Mon),<)$.
Take $e_1,e_2\models p$ and $d\in\pi(\Mon)$ such that $e_1<d<e_2$; we need to show $d\in p(\Mon)$. Clearly, $e_1,e_2\in p(\Mon)$. Choose $a',b'\models p$ so that $a'\triangleleft^{\mathbf p}e_1$ and $e_2\triangleleft^{\mathbf p}b'$; then $a'\triangleleft^{\mathbf p}e_1<d<e_2\triangleleft^{\mathbf p}b'$. This implies $a'\triangleleft^{\mathbf p}e_1\triangleleft^{\mathbf p}b'$ and $a'\triangleleft^{\mathbf p}e_2\triangleleft^{\mathbf p}b'$, so $\models\theta(a',e_1,b')\land\theta(a',e_2,b')$. Also $a'\triangleleft^{\mathbf p}b'$, so $a'b'\equiv ab\ (A)$, hence from (\ref{eq convex}) we have: $$\models (\theta(a',e_1,b')\land\theta(a',e_2,b')\land e_1<d<e_2\land\pi(d))\rightarrow \theta(a',d,b')\,.$$ The left-hand side of this implication holds, so $\models \theta(a',d,b')$. In particular, $d\models p$, and we are done. $(\Leftarrow)$ Suppose now that $p$ is simple and convex. By Lemma \ref{Proposition_convex_type_witness}, there is $\pi(x)\in p(x)$ witnessing the convexity: $(\pi(\Mon),<)$ is a definable extension of $(p(\Mon),<)$ and $p(\Mon)$ is convex in $\pi(\Mon)$. By the simplicity of $p$, for $e\models p$, $\mathcal D_p(e)$ is a relatively $Ae$-definable subset of $p(\Mon)$, defined by $\phi(x,e)$, say. Then the locus of $a\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}b$ is clearly defined by $\pi(x)\land \phi(\Mon,a)<x<\phi(\Mon,b)$. \end{proof} \begin{Theorem}\label{Theorem trivial implies simple and convex} Suppose that $T$ is countable and $I(\aleph_0,T)<2^{\aleph_0}$. Then every trivial weakly o-minimal type over a finite domain is simple and convex. \end{Theorem} \begin{proof} Assuming that $p$ is a trivial weakly o-minimal type over a finite domain that is not both convex and simple, we will prove $I(\aleph_0,T)=2^{\aleph_0}$. We may assume that $\mathbf p=(p,<)$ is a weakly o-minimal pair over $\emptyset$ (as for finite $A$, $I(\aleph_0,T_A)=2^{\aleph_0}$ implies $I(\aleph_0,T)=2^{\aleph_0}$).
For every countable endless linear order $\mathbb I=(I,<_I)$ we will find a countable $M_{\mathbb I}\models T$ such that $(p(M_{\mathbb I})/\mathcal D_p,<)\cong \mathbb I$; clearly, for nonisomorphic orders $\mathbb I$ and $\mathbb J$ the corresponding models $M_{\mathbb I}$ and $M_{\mathbb J}$ will be nonisomorphic. As there are $2^{\aleph_0}$ many pairwise nonisomorphic such orders, the desired conclusion follows. Fix $\mathbb I=(I,<_I)$ and let $A_I=(a_i\mid i\in I)$ be a $\triangleleft^{\mathbf p}$-increasing sequence of realizations of $p$; since $p$ is trivial, $A_I$ is a Morley sequence in $\mathbf p_r$ over $\emptyset$. Put \ $\Pi_I(x):=p(x)\cup\bigcup_{i\in I}x\notin\mathcal D_p(a_i)$. \begin{Claim*} There exists a countable model $M_{\mathbb I}\supseteq A_I$ that omits $\Pi_I(x)$. \end{Claim*} \noindent{\em Proof of Claim.} Suppose not. Then, by the Omitting Types Theorem, there is an $L$-formula $\theta(x,\bar y)$ and a finite subsequence of $A_I$, $\bar a=(a_{i_0},\dots,a_{i_n})$, such that $\theta(x,\bar a)$ is consistent and $\theta(x,\bar a)\vdash \Pi_I(x)$. Moreover, we can assume that $D=\theta(\Mon,\bar a)$ is a convex subset of $(p(\Mon),<)$; indeed, $\theta(x,\bar a)\vdash p(x)$ holds, so according to the weak o-minimality of $p$, $\theta(\Mon,\bar a)\subseteq p(\Mon) $ has finitely many convex components in $(p(\Mon),<)$, and we can replace $\theta(x,\bar a)$ with a formula that defines one of them so that $\theta(x,\bar a)\vdash \Pi_I(x) $ still holds. Therefore, $D=\theta(\Mon,\bar a)$ is a convex set that is disjoint from each of the (convex) sets $(\mathcal D_p(a_i)\mid i\in I)$. This implies that for each $i\in I$ either $\mathcal D_p(a_i)<D$ or $D<\mathcal D_p(a_i)$ holds, so every element $e\in D$ is $\triangleleft^{\mathbf p}$-comparable to each $a_i$, and we can arrange $A_{\mathbb I}\cup \{e\}$ into a $\triangleleft^{\mathbf p}$-increasing sequence.
Then the triviality of $p$ implies that this sequence is Morley in $\mathbf p_r$ over $\emptyset$, and that it remains such after replacing $e$ with any other $e'\in D$; in particular, $e\equiv e'\,(A_{\mathbb I})$ holds, so $\theta(x,\bar a)$ isolates a complete type $q\in S(A_{\mathbb I})$. Since $\theta(x,\bar a)$ uses only parameters $\bar a$, the type $q_{\restriction \bar a}\in S(\bar a)$ is isolated by $\theta(x,\bar a)$ ($q_{\restriction \bar a}(\Mon)=D$), and $q_{\restriction \bar a}\wor\tp(A_{\mathbb I}/\bar a)$ holds. Now we prove: \begin{equation} \mbox{ $\mathcal D_p(a_{i_k})<D<\mathcal D_p(a_{i_{k+1}})$ \ holds for some $k<n$. } \end{equation}\setcounter{equation}{0} We have a sequence of convex sets $\mathcal D_p(a_{i_0})<\mathcal D_p(a_{i_1})<\dots <\mathcal D_p(a_{i_n})$ that are disjoint from $D$ (which is also convex). Therefore, to prove (1), it suffices to exclude $D<\mathcal D_p(a_{i_0})$ and $\mathcal D_p(a_{i_n})<D$. Since $\mathbb I$ is endless, the type $\mathbf p_{r\restriction \bar a}$ is realized in $A_{\mathbb I}$, so $q_{\restriction \bar a}\wor\mathbf p_{r\restriction \bar a}$. We conclude that the locus $q_{\restriction \bar a}(\Mon)=D$ does not contain any elements above $\mathcal D_p(a_{i_n})$, as all realize $\mathbf p_{r\restriction \bar a}$. This excludes $\mathcal D_p(a_{i_n})<D$; similarly, $D<\mathcal D_p(a_{i_0})$ is ruled out, proving (1). Note that (1) implies $\theta(x,\bar a)\vdash a_{i_k}\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}a_{i_{k+1}}$. Also note that $a_{i_k}\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}a_{i_{k+1}}$ determines a complete type in $S(\bar a)$, as $a_{i_k}\triangleleft^{\mathbf p}d\triangleleft^{\mathbf p}a_{i_{k+1}}$ implies that $(a_{i_0},\ldots,a_{i_k},d,a_{i_{k+1}},\ldots,a_{i_n})$ is a Morley sequence in $\mathbf p_r$ over $\emptyset$; this type is $q_{\restriction \bar a}(x)$, which is isolated. 
In particular, the locus of $a_{i_k}\triangleleft^{\mathbf p}x\triangleleft^{\mathbf p}a_{i_{k+1}}$ is definable, so by Proposition \ref{Prop middle definable iff simple and convex}, $p$ is convex and simple. This contradicts the initial assumptions and completes the proof of Claim.\hfill$\blacksquare$ \smallskip Let the model $M_{\mathbb I}$ be given by the claim. To complete the proof of the theorem, it suffices to note that since $M_{\mathbb I}$ omits $\Pi_I(x)$ and since $A_I\subseteq M_{\mathbb I}$, the classes $(\mathcal D_p(a_i)\mid i\in I)$ are the only $\mathcal D_p$-classes represented in $p(M_{\mathbb I})$, so $(p(M_{\mathbb I})/\mathcal D_p,<)\cong \mathbb I$. \end{proof} \section{Semiintervals and shifts}\label{Section_shifts} In this section, we first introduce the notions of semiintervals and shifts and then prove Theorem \ref{Theorem1_shift}. Let $(D,<)$ be any linear order. We say that $S\subseteq D$ {\it is a semiinterval of $(D,<)$} if $S$ is convex in $(D,<)$ and $a=\min S$ exists; in that case, we also say that $S$ is a semiinterval with minimum $a$, and denote it by $S_a$ to emphasize $a=\min S_a$. \begin{Definition}\label{Definition_family_semiintervals} Let $(D,<)$ be a linear order and let $\mathcal S=(S_x\mid x\in D)$ be a family of semiintervals of $(D,<)$. \begin{enumerate}[\hspace{10pt}(a)] \item $\mathcal S$ is an {\it $A$-definable family of semiintervals of $(D,<)$} if $(D,<)$ is $A$-definable and there exists an $L_A$-formula $S(x,y)$ such that $S_a=S(a,\Mon)$ holds for all $a\in D$. \item The family $\mathcal S$ is {\it monotone} if $\sup S_x\leqslant \sup S_y$ for all elements $x\leqslant y$ of $D$. \end{enumerate} \end{Definition} Recall that by $\sup S_x\leqslant \sup S_y$ we mean that any upper bound of $S_y$ is also an upper bound of $S_x$. Suppose that $\mathcal S=(S_x\mid x\in D)$ is an $A$-definable family of semiintervals of $(D,<)$, defined by $S(x,y)$.
Recursively, define: \begin{center} $S^{(1)}(x,y):=S(x,y)$ \ \ and \ \ $S^{(n+1)}(x,y):=(\exists t)(S^{(n)}(x,t)\land S(t,y))$ \ ($n\in\mathbb N$). \end{center} Further in the text, we will use the following notation: \begin{center}$\mathcal S^{n}:=(S^{(n)}(x,D)\mid x\in D)$, \ \ $S_x^n:=S^{(n)}(x,D)$ \ \ and \ \ $S^{\omega}_x:=\bigcup_{n\in\mathbb N} S_x^n$. \end{center} \begin{Observation}\label{Lemma_semiint_verybasic1} Suppose that $\mathcal S=(S_x\mid x\in D)$ is an $A$-definable family of semiintervals of $(D,<)$. \begin{enumerate}[\hspace{10pt}(a)] \item $\mathcal S^{n}$ is an $A$-definable family of semiintervals of $(D,<)$; if $\mathcal S$ is monotone, then so is $\mathcal S^n$ for all $n\in \mathbb N$. \item $(S_x^n\mid n\in\mathbb N)$ is a $\subseteq$-increasing sequence for all $x\in D$. If $S_x^n=S_x^{n+1}$ for some $n\in \mathbb N$, then $S_x^n=S_x^m$ for all $m\geqslant n$, so $S_x^n=S_x^\omega$. Hence, the sequence is strictly increasing or eventually constant. \item $(S^{\omega}_x\mid x\in D)$ is a family of semiintervals of $(D,<)$; it is monotone if $\mathcal S$ is monotone. \item $\models S^{(n+m)}(x,y)\leftrightarrow (\exists t)(S^{(n)}(x,t)\land S^{(m)}(t,y))$, so $S^{n+m}_a=\bigcup_{t\in S_a^n}S_t^m$ for all $m\in\mathbb N$. \end{enumerate} \end{Observation} \begin{Definition}\label{Def_shift}\label{Definition_shift} Let $\mathcal S=(S_x\mid x\in D)$ be a family of semiintervals of $(D,<)$. We say that the semiinterval $S_a\in\mathcal S$ is an {\em $\mathcal S$-shift} if the sequence $(S_a^n\mid n\in\mathbb N)$ is strictly $\subseteq$-increasing. \end{Definition} \begin{Remark}\label{Remark_shift} Let $\mathcal S$ be a family of semiintervals of $(D,<)$ and let $S_a\in \mathcal S$. By Observation \ref{Lemma_semiint_verybasic1}(b) we have two options: either $S_a$ is an $\mathcal S$-shift, or $S_a^n=S^{\omega}_a$ holds for some $n\in\mathbb N$. 
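The dichotomy can be illustrated by tracking suprema numerically. The following Python sketch is an informal illustration only, using two hypothetical concrete families over the reals: $S_x=[x,x+\sqrt2)$, where each composition step pushes the supremum up by $\sqrt2$ (so the iterates strictly increase and $S_0$ is a shift), and $S_x=[x,\lfloor x\rfloor+1)$, a final part of a class of a convex equivalence relation, where the supremum is a fixed point of the step (so $S_a^n=S_a$ for all $n$).

```python
import math

SQRT2 = math.sqrt(2)

def sups(sup_S, step, a, n):
    """Return the suprema of S_a^1, ..., S_a^n, where sup_S(a) is the
    supremum of S_a and step maps sup(S_a^k) to sup(S_a^{k+1})."""
    out = [sup_S(a)]
    for _ in range(n - 1):
        out.append(step(out[-1]))
    return out

# Family S_x = [x, x + sqrt(2)): one composition step raises the supremum
# by sqrt(2), so sup(S_0^n) = n*sqrt(2) strictly increases -- a shift.
shift_sups = sups(lambda a: a + SQRT2, lambda h: h + SQRT2, 0.0, 5)

# Family S_x = [x, floor(x) + 1): the union of S_t over t in S_a stays
# inside the same class, so the supremum is a fixed point -- not a shift.
class_sups = sups(lambda a: math.floor(a) + 1, lambda h: h, 0.25, 5)
# -> [1, 1, 1, 1, 1]
```

The first family reappears below in the weakly o-minimal theory $\Th(\mathbb Q,<,S)$; the sketch merely makes the contrast between the two options of the Remark concrete.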
\end{Remark} \begin{Example} (1) \ Consider the theory $T=\Th(\mathbb Q,<,S)$ where $<$ is the usual order on the rationals and $S$ is defined by: \ $(x,y)\in S$ if and only if $x\leqslant y<x+\sqrt{2}$. Here, the formula $S(x,y)$ defines a monotone family of semiintervals with $S_0^n=\{y\mid 0\leqslant y<n\sqrt 2\}$, so $S_0$ is a shift. Since $T$ is weakly o-minimal, $I(T,\aleph_0)=2^{\aleph_0}$ follows by Theorem \ref{Theorem1_shift}; it also follows by Theorem \ref{Theorem trivial implies simple and convex}, since the unique type $p\in S_1(\emptyset)$ is trivial and non-simple. (2) \ A typical example of a monotone family without shifts is as follows. Let $(\Mon,<,E)$ be a linear order with a convex equivalence relation $E$. Define $S(x,y):=x\leqslant y\land E(x,y)$; $S_x$ is a final part of the class $[x]_E$ that begins at $x$. Here $S_x^n=S_x$ holds for all $n$, so $S_x$ is not a shift. \end{Example} \begin{Lemma}\label{Lemma_shift_properties} Suppose that $\mathcal S$ is a monotone $A$-definable family of semiintervals of $(D,<)$, $a\in D$, and $b\in S^{\omega}_a$. Then $S^{\omega}_b$ is a final part of $S^{\omega}_a$, and if $S_a$ is an $\mathcal S$-shift, then so is $S_b$. \end{Lemma} \begin{proof} Choose $n\in\mathbb N$ such that $b\in S^n_a$. As we observed in \ref{Lemma_semiint_verybasic1}(d), $S^{n+m}_a=\bigcup_{x\in S_a^n}S_x^m$ holds for all $m\in\mathbb N$. This, together with $b\in S_a^n$, implies $S^{m+n}_a\supseteq S^m_b$, so $S_a^\omega\supseteq S_b^\omega$ holds. Since the family $\mathcal S^m$ is monotone for all $m\in\mathbb N$, $a\leqslant b$ implies $\sup S_a^m\leqslant \sup S_b^m$ and thus $\sup S_a^\omega\leqslant \sup S_b^\omega$. This, together with $S_a^\omega\supseteq S_b^\omega$, implies that $S_b^\omega$ is a final part of $S_a^\omega$ and proves the first claim. To prove the second, assume that $S_b$ is not an $\mathcal S$-shift; we prove that $S_a$ is also not an $\mathcal S$-shift.
By Remark \ref{Remark_shift}, there is $k\in\mathbb N$ with $S_b^k=S_b^\omega$. By the above, we have $S^k_b\subseteq S_a^{k+n}$; since $S_b^\omega=S_b^k$ is a final part of $S_a^\omega$, we conclude that the set $S_a^{k+n}$ contains a final part of $S_a^\omega$. Clearly, this implies $S_a^{k+n}=S_a^\omega$, so $S_a$ is not an $\mathcal S$-shift by Remark \ref{Remark_shift}. This proves the second claim. \end{proof} \begin{Lemma}\label{Lemma_exists_monotonefamily} If some $A$-definable set $D$ admits an $A$-definable family of semiintervals (with respect to some linear order) such that $S_a$ is its shift, then it also admits an $Aa$-definable order $<$ with $a=\min D$, and an $Aa$-definable monotone family of semiintervals of $(D,<)$ that contains $S_a$ as a shift. \end{Lemma} \begin{proof} Let $\mathcal S=(S_x\mid x\in D)$ be an $A$-definable family of semiintervals such that $S_a$ is an $\mathcal S$-shift. Modify $<$ and $\mathcal S$ as follows. Using $a$ as a parameter, first modify the family $\mathcal S$ by setting $S_x=\{x\}$ for $x\in (-\infty,a)$, and then modify the order $<$ by moving the segment $(-\infty,a)$ to the right of $[a,+\infty)$. Notice that the modified family is $Aa$-definable, that $S_{a}$ is still an $\mathcal S$-shift, and that $a=\min D$. For $x\in D$ define: $R_x=\big(\bigcup_{t\leqslant x}S_t\big)\cap [x,+\infty)$. It is easy to see that $\mathcal R=(R_x\mid x\in D)$ is a monotone $Aa$-definable family of semiintervals of $(D,<)$ such that $R_a=S_a$, $S_x\subseteq R_x$ for all $x\in D$, and $S_a^k\subseteq R_a^k$ for all $k\in\mathbb N$. We will prove that $R_a$ is an $\mathcal R$-shift. \noindent{\em Claim.} \ \ $b\in S_a^n$ \ implies \ $R_b^k\subseteq S_a^{k(n+1)}$. By induction on $k$. For $b\in S_a^n$ we have \ $R_b\subseteq \bigcup_{x\leqslant b}S_x\subseteq \bigcup_{x\in S_a^n}S_x=S_a^{n+1}$; here, the first inclusion follows by the definition of $R_b$, the second by $b\in S_a^n$, and the equality by \ref{Lemma_semiint_verybasic1}(d).
This proves the case $k=1$. Assume that the claim holds for all $x\in S_a^n$ and some $k\in \mathbb N$: $R_x^k\subseteq S_a^{k(n+1)}$. We have: \begin{center} $R_b^{k+1}=\bigcup_{x\in R_b^k}R_x\subseteq \bigcup_{x\in S_a^{k(n+1)}}R_x \subseteq \bigcup_{x\in S_a^{k(n+1)}} S_x^{n+1}= S_a^{(k+1)(n+1)}$; \end{center} here, the equalities follow by \ref{Lemma_semiint_verybasic1}(d), the first inclusion by the induction hypothesis, and the second by the case $k=1$. This proves the claim. Therefore, $S_a^k\subseteq R_a^k\subseteq S_a^{(k+1)(n+1)}$ holds for all $k\in\mathbb N$. This, together with the fact that the sequence $(S_a^n\mid n\in\mathbb N)$ is strictly increasing, implies that the sequence $(R_a^k\mid k\in\mathbb N)$ must strictly increase, too; $R_a$ is an $\mathcal R$-shift. \end{proof} \subsection{Proof of Theorem \ref{Theorem1_shift}} In this subsection, we will prove Theorem \ref{Theorem1_shift}. Throughout, assume that $T$ is a countable weakly quasi-o-minimal theory with respect to $<$, $\mathcal S=(S_x\mid x\in \Mon)$ is a definable family of semiintervals of $(\Mon,<)$, $c_0\in \Mon$, and $S_{c_0}$ is an $\mathcal S$-shift. We need to prove $I(T,\aleph_0)=2^{\aleph_0}$. Absorb into the language the parameter $c_0$ and a finite set of parameters needed to define the family $\mathcal S$; note that this does not affect the desired conclusion $I(T,\aleph_0)=2^{\aleph_0}$. By Lemma \ref{Lemma_exists_monotonefamily}, after possibly modifying $<$ and $\mathcal S$, we can also assume that $\mathcal S$ is monotone with $c_0=\min \Mon$. Therefore, for the rest of this section, we will assume: \begin{itemize} \item $T$ is small; \item $\mathcal S=(S_x\mid x\in\Mon)$ \ is a $0$-definable, monotone family of semiintervals of $(\Mon,<)$; \item $c_0=\min \Mon$ \ and \ $S_{c_0}$ is an $\mathcal S$-shift; $S^{1}_{c_0}\subset S^{2}_{c_0}\subset S^{3}_{c_0}\subset\dots$ is a strictly increasing chain of initial parts of $\Mon$.
\end{itemize} The following notation is convenient: \begin{itemize} \item $\mathcal C:=S^{\omega}_{c_0} =\bigcup_{n\in\mathbb N} S^{n}_{c_0}$; \item $\Pi_A(x):=\{\phi(x)\in L_A\mid \mbox{$\phi(\Mon)$ is convex and contains a final part of $\mathcal C$}\}$; $\Pi(x):=\Pi_{\emptyset}(x)$; \ \ \item $S_{\Pi}(A):=\{p\in S_1(A)\mid \Pi_A(x)\subseteq p(x)\}$. \end{itemize} \begin{Remark} \begin{enumerate}[\hspace{10pt}(a)] \item $\mathcal C$ is an initial part of $(\Mon,<)$, and $\mathcal C<x$ is $\emptyset$-type-definable. For all $n\in \mathbb N$, the formula $S^n_{c_0}<x$ belongs to $\Pi_A(x)$. \item Lemma \ref{Lemma_shift_properties} implies that $S_c$ is an $\mathcal S$-shift and $S^\omega_c$ is a final part of $\mathcal C$ for all $c\in \mathcal C$. \item $\Pi_A(x)$ is a partial type, $(\mathcal C<x)\subseteq \Pi_A(x)$, and $\Pi_A(x)$ is finitely satisfiable in $\mathcal C$. \end{enumerate} \end{Remark} We will say that $p\in S_1(A)$ is a {\it $\mathcal C$-type} if $p$ is finitely satisfiable but not realized in $\mathcal C$. $\mathcal C$-types have the extension property: if $p\in S_1(A)$ is a $\mathcal C$-type and $A\subseteq B$, then there is a $\mathcal C$-type $q\in S_1(B)$ with $p\subseteq q$. Most of our efforts are aimed at proving Corollary \ref{Corollary_Ctype_trivial}, which states that every $\mathcal C$-type over a finite domain is trivial. Then, using the description of the $\mathcal C$-types proved in the next lemma, we will be able to quickly complete the proof of Theorem \ref{Theorem1_shift}. \begin{Lemma}\label{Lemma_Pi_basic} \begin{enumerate}[\hspace{10pt}(a)] \item For all $p\in S_1(A)$ the following conditions are equivalent: \begin{tasks}(3) \task[(1)] \ $p\in S_{\Pi}(A)$; \task[(2)] \ $p$ is a $\mathcal C$-type; \task[(3)] \ $\mathcal C=\{x\in\Mon\mid x< p(\Mon)\}$. \end{tasks} \item $\mathcal C\cup\Pi_A(\Mon)$ is an initial part of $\Mon$. \item $\Pi_A(x)$ is the minimal interval type from $IT(A)$ consistent with $\mathcal C<x$.
\item $S_{\Pi}(A)=\{(\mathbf p_l)_{\restriction A}\mid p\in S_{\Pi}(\emptyset)\}$. \end{enumerate} \end{Lemma} \begin{proof} (a) (1)$\Rightarrow$(2) Assume $p\in S_\Pi(A)$. Then $(\mathcal C<x)\subseteq \Pi_A(x)\subseteq p(x)$, so $p$ is not realized in $\mathcal C$. To prove that $p$ is a $\mathcal C$-type, it remains to show that $p(x)$ is finitely satisfiable in $\mathcal C$. Suppose that an $L_A$-formula $\phi(x)$ is not realized in $\mathcal C$. Then the set of realizations of (a formula) $x<\phi(\Mon)$ is convex in $\Mon$ and contains $\mathcal C$; hence $(x<\phi(\Mon))\in \Pi_A(x)\subseteq p(x)$ and thus $p(\Mon)<\phi(\Mon)$. In particular, $\phi(x)\notin p$. Therefore, $p$ contains only formulae that are satisfied in $\mathcal C$; $p$ is a $\mathcal C$-type. (2)$\Rightarrow$(3) Suppose that $p$ is a $\mathcal C$-type. We need to prove that $\mathcal C$ is the set of all lower bounds of $p(\Mon)$. Since $p$ is not realized in $\mathcal C$, which is an initial part of $\Mon$, we conclude $\mathcal C< p(\Mon)$, so any $c\in\mathcal C$ is a lower bound of $p(\Mon)$. To prove (3), it remains to show that there are no other lower bounds; assuming $\mathcal C<a$ we will prove $a\nless p(\Mon)$. Since $p$ is a $\mathcal C$-type, by the extension property there is a $b\in p(\Mon)$ such that $\tp(b/aA)$ is a $\mathcal C$-type. Then, since $\tp(b/aA)$ is finitely satisfiable in $\mathcal C$ and no element of $\mathcal C$ satisfies $a\leqslant x$, we have $(x<a)\in \tp(b/aA)$. Now, $b<a$ and $b\in p(\Mon)$ together imply $a\nless p(\Mon)$, so $a$ is not a lower bound of $p(\Mon)$. (3)$\Rightarrow$(1) Suppose that $\mathcal C$ is the set of all lower bounds of $p(\Mon)$. To prove $p\in S_{\Pi}(A)$, i.e. $\Pi_A(x)\subseteq p(x)$, assume $\phi(x)\in\Pi_A(x)$; we need to prove $\phi(x)\in p$. Since the set $\phi(\Mon)$ contains a final part of $\mathcal C$, there is an $n\in\mathbb N$ such that $\mathcal C\subseteq S^n_{c_0}\cup \phi(\Mon)$. Put $\psi(x):=(\forall y)(y\leqslant x\rightarrow y\in S^n_{c_0}\cup \phi(\Mon))$.
Clearly, $\mathcal C\subseteq \psi(\Mon)$, so by compactness $\{\psi(x)\land\ S^n_{c_0}<x\mid n\in\mathbb N\}$ is consistent; let $b$ realize it. Then $\mathcal C<b$, so $b$ is not a lower bound of $p(\Mon)$. Hence, there is $a\models p$ such that $a<b$. Then $\models \psi(b)$ implies $\models \phi(a)$, so $\phi(x)\in p$. Therefore, $\Pi_A(x)\subseteq p(x)$. (b) Assuming $b<a$ and $a\in \mathcal C\cup \Pi_A(\Mon)$ we need to prove $b\in \mathcal C\cup \Pi_A(\Mon)$. This is immediate if $a\in \mathcal C$ or $b\in\mathcal C$. So assume $a,b\notin\mathcal C$ and let $p=\tp(a/A), q=\tp(b/A)$. Then $p\in S_{\Pi}(A)$ as $a\in \Pi_A(\Mon)$. By (1)$\Leftrightarrow$(3) from part (a) of the lemma, $\mathcal C$ is the set of all lower bounds of $p(\Mon)$. In particular, $b$ is not a lower bound of $p(\Mon)$, so $a'<b<a$ holds for some $a'\models p$. It follows that $p$ and $q$ extend the same interval type from $IT(A)$ and that $p(\Mon)$ and $q(\Mon)$ have the same set of lower bounds, so $q\in S_{\Pi}(A)$ follows by (1)$\Leftrightarrow$(3) from part (a) of the lemma. Therefore, $b\in\Pi_A(\Mon)$, as desired. (c) We prove that $\Pi_A(x)$ is an interval type, i.e. a maximal consistent set of convex $L_A$-formulae. Let $\phi(x)$ be a convex $L_A$-formula such that $\phi(x)\notin \Pi_A(x)$; we show that $\phi(x)$ is inconsistent with $\Pi_A(x)$. As $\phi(x)$ is convex and $\phi(x)\notin \Pi_A(x)$, the set $\phi(\Mon)$ does not contain a final part of $\mathcal C$. Then one of the convex components of $\lnot\phi(\Mon)$, say $D$, contains a final part of $\mathcal C$. This implies that the formula expressing $x\in D$ belongs to $\Pi_A(x)$. Since $x\in D$ is inconsistent with $\phi(x)$, it follows that $\phi(x)$ is inconsistent with $\Pi_A(x)$, as desired. Therefore, $\Pi_A(x)$ is an interval type. The minimality of $\Pi_A(x)$ follows from (b). (d) Let $p\in S_{\Pi}(\emptyset)$. By (1)$\Leftrightarrow$(3) from part (a), $\mathcal C$ is the set of all the lower bounds of $p(\Mon)$. 
Let $p_A=(\mathbf p_l)_{\restriction A}$. By the definition of $\mathbf p_l$, the set $p_A(\Mon)$ is an initial part of $p(\Mon)$, so $p(\Mon)$ and $p_A(\Mon)$ share the same set, $\mathcal C$, of lower bounds; $p_A\in S_{\Pi}(A)$ follows by part (a) of the lemma. Therefore, $\{(\mathbf p_l)_{\restriction A}\mid p\in S_{\Pi}(\emptyset)\}\subseteq S_{\Pi}(A)$. To prove the converse, suppose that $q_A\in S_{\Pi}(A)$ and let $q=(q_A)_{\restriction \emptyset}$; clearly, $q\in S_{\Pi}(\emptyset)$. Now, part (a) of the lemma applies to both $q$ and $q_A$, so the sets $q(\Mon)$ and $q_A(\Mon)$ have the same set, $\mathcal C$, of lower bounds. This, together with the fact that $q_A(\Mon)$ is a convex subset of $q(\Mon)$, implies that $q_A(\Mon)$ is an initial part of $q(\Mon)$, so $q_A=(\mathbf q_l)_{\restriction A}$, proving the converse. Therefore, $S_{\Pi}(A)=\{(\mathbf p_l)_{\restriction A}\mid p\in S_{\Pi}(\emptyset)\}$. \end{proof} Next, we consider forking as a binary relation on $\Pi_A(\Mon)$: \begin{itemize} \item $\mathcal D_{\Pi_A}:=\{(x,y)\in\Pi_A(\Mon)^2\mid x\dep_A y\}$; \ \ \ \ $\mathcal D_{\Pi_A}(a):=\{x\in \Pi_A(\Mon)\mid (x,a)\in\mathcal D_{\Pi_{A}}\}$. \end{itemize} Recall that a set $D\subseteq \Mon$ is $\Pi_A$-bounded if there are $a_1,a_2\in \Pi_A(\Mon)$ such that $a_1<D<a_2$. \begin{Lemma}\label{Lemma_forking_on_D_Pi} \begin{enumerate}[\hspace{10pt}(a)] \item $(a,b)\in \mathcal D_{\Pi_{A}}$ \ if and only if \ $\tp(a/Ab)$ contains a $\Pi_A$-bounded formula; \item $\mathcal D_{\Pi_A}$ is a convex equivalence relation on $\Pi_A(\Mon)$. \item $S^{\omega}_a\subseteq \mathcal D_{\Pi_A}(a)$ \ holds for all $a\in\Pi_A(\Mon)$. \end{enumerate} \end{Lemma} \begin{proof} (a) and (b) follow from Lemma \ref{Lemma_forking_on_interval_type}. To prove (c), let $a\in \Pi_A(\Mon)$; we prove that $S_a$ is $\Pi_A$-bounded. Put $p=\tp(a/A)$. Choose $b\in p(\Mon)$ such that $a$ is left $\mathbf p$-generic over $b$.
By Lemma \ref{Lemma_Pi_basic}(a), $\tp(a/bA)$ is a $\mathcal C$-type. Note that for all $c\in\mathcal C$ we have $S_c<b$, so the formula (expressing) $S_x<b$ belongs to $\tp_x(a/bA)$. In particular, $S_a<b$ and thus $a\leqslant S_a<b$. Therefore, $x\in S_a$ is a $\Pi_A$-bounded formula, so $S_a\subseteq \mathcal D_{\Pi_A}(a)$; $S^{n}_a\subseteq \mathcal D_{\Pi_A}(a)$ follows by induction, hence $S^{\omega}_a\subseteq \mathcal D_{\Pi_A}(a)$. \end{proof} \begin{Definition} Let $D\subseteq \Mon$ and $a\in D$. We say that $a$ is an {\it $\mathcal S$-minimal element of $D$} if $a=c_0$ or if there exists $b< D$ such that $a\in S_b$. \end{Definition} Usually, when we say that $a\in D$ is $\mathcal S$-minimal, the minimality is with respect to $D$. Note that if $D$ is definable, then ``$x$ is an $\mathcal S$-minimal element of $D$'' is also definable. \begin{Lemma}\label{Lemma_S_minimal_exist} \begin{enumerate}[\hspace{10pt}(a)] \item Every definable set $D$ that intersects $\mathcal C\cup \Pi(\Mon)$ contains an $\mathcal S$-minimal element. Moreover, if $D$ is $\bar b$-definable, then there is an $\mathcal S$-minimal element $a\in D$ such that $\tp(a/\bar b)$ is isolated. \item Suppose that $D\subseteq \Pi(\Mon)$ is the locus of an isolated type (over some parameters). Then every element of $D$ is $\mathcal S$-minimal and, whenever $D$ intersects some $\mathcal D_{\Pi_A}$-class for some $A$, then it is completely contained within that class. \end{enumerate} \end{Lemma} \begin{proof} (a) First, consider the case $D\cap \mathcal C\neq \emptyset$. If $c_0\in D$, then $c_0$ is $\mathcal S$-minimal in $D$. Assume $c_0\notin D$ and let $n\in\mathbb N$ be minimal with $S^n_{c_0}\cap D\neq \emptyset$. We claim that every element $a\in S^n_{c_0}\cap D$ is $\mathcal S$-minimal in $D$. If $n=1$ then $c_0$ witnesses the $\mathcal S$-minimality of $a$ in $D$, as $c_0<D$ and $a\in S_{c_0}$ hold.
If $n>1$, then every element $a\in D\cap S^n_{c_0}$ is $\mathcal S$-minimal in $D$: $a\in S^n_{c_0}$ implies that $a\in S_d$ holds for some $d\in S^{n-1}_{c_0}$; the assumed minimality of $n$ implies $d<D$, so $d$ witnesses the $\mathcal S$-minimality of $a$ in $D$. The second case is $D\cap\Pi(\Mon)\neq \emptyset$. Let $\phi(x,\bar b)$ be a formula that defines $D$ and let $d\in D\cap \Pi(\Mon)$. Let $\sigma(y,\bar z)$ be an $L$-formula that says ``$y$ is an $\mathcal S$-minimal element of $\phi(\Mon,\bar z)$''. By the first case, for each $\bar b'\in \Mon^{|\bar b|}$ the following holds: if the set $\phi(\Mon,\bar b')$ meets $\mathcal C$ then $\phi(\Mon,\bar b')$ has an $\mathcal S$-minimal element. Therefore, for all $c\in\mathcal C$ we have: \ $\models (\forall \bar z)(\phi(c,\bar z) \rightarrow (\exists y)\, \sigma(y,\bar z))$. Since $\tp(d)$ is finitely satisfiable in $\mathcal C$, the same holds with $c$ replaced by $d$; $D$ contains an $\mathcal S$-minimal element. To prove the ``moreover" part, assume $D$ is $\bar b$-definable. Choose an $L_{\bar b}$-formula $\phi(x)$ that expresses ``$x$ is an $\mathcal S$-minimal element of $D$''; it is consistent by part (a), so, since $T$ is small, $\phi(x)$ belongs to some isolated type of $S_1(\bar b)$. (b) Let $\phi(x,\bar b)$ define $D$ and isolate a complete type in $S_1(\bar b)$. By (a), we know that some element of $D$ satisfies the formula ``$x$ is an $\mathcal S$-minimal element of $\phi(\Mon,\bar b)$''. Since $\phi(x,\bar b)$ isolates a complete type, all elements of $D$ satisfy that formula. This proves the first assertion. To prove the second, assume that $a\in D\cap \Pi_A(\Mon)$; we need to show $D\subseteq \mathcal D_{\Pi_A}(a)$. By the first claim, $a$ is $\mathcal S$-minimal in $D$, so there is a $b\in \Pi(\Mon)$ witnessing that: $b<D$ and $a\in S_b$.
By Lemma \ref{Lemma_forking_on_D_Pi}(c), we have $S^{\omega}_b\subseteq \mathcal D_{\Pi_A}(b)$, so $a\in S_b$ implies $\mathcal D_{\Pi_A}(a)=\mathcal D_{\Pi_A}(b)$. Therefore, to complete the proof, it suffices to prove $D\subseteq S_a\cup S_b$. Suppose on the contrary that $d\in D\smallsetminus (S_a\cup S_b)$. Note that $S_a\cup S_b$ is a convex set, so $b\in S_b$ and $b<D$ together imply $(S_b\cup S_a)<d$. As proved above, $d\in D$ is $\mathcal S$-minimal; let $b'\in \Pi_A(\Mon)$ be a witness for that: $b'<D$ and $d\in S_{b'}$. Then $b'<a$ as $b'<D\ni a$, so the monotonicity of $\mathcal S$ implies $\sup S_{b'}\leqslant \sup S_a$. On the other hand, $S_a<d$ and $d\in S_{b'}$ imply $\sup S_{a}<\sup S_{b'}$. Contradiction. \end{proof} \begin{Lemma}\label{Lemma_every_convexshift_istrivial} Every convex $\mathcal C$-type over a finite domain is trivial. \end{Lemma} \begin{proof} Toward a contradiction, assume that $A_0$ is finite and $r\in S_1(A_0)$ is a non-trivial convex $\mathcal C$-type. By Remark \ref{Remark_notrivial_wom_witness}, there is a finite Morley sequence, $\bar b$, in $\mathbf{r}_l$ over $A_0$ such that the type $p=(\mathbf{r}_l)_{\restriction A_0\bar b}$ is non-trivial, as witnessed by a triangle; put $A=A_0\bar b$. Note that Lemma \ref{Lemma_Pi_basic}(a) guarantees: (1) \ $p\in S_1(A)$ is a $\mathcal C$-type and $p\in S_{\Pi}(A)$. \noindent Also note that $p$ is convex, as a complete extension of the convex weakly o-minimal type $r$. Since $\Pi_A(x)$ is an interval type and since $p\in S_{\Pi}(A)$ is convex, $p$ is an isolated point of $S_{\Pi}(A)$. Then there is a formula $\theta(x)$ such that: (2) \ $\theta(x)\in p$ \ and \ $\Pi_A(x)\cup\{\theta(x)\}\vdash p(x)$. \noindent Note that this guarantees that $p(\Mon)$ is convex in $(\theta(\Mon),<)$. Let $(a_3,a_2,a_1)$ be a $\mathbf p_l$-triangle over $A$: (3) \ $\mathcal D_p(a_1)<\mathcal D_p(a_2) <\mathcal D_p(a_3)$, \ and \ $a_1\dep_{Aa_3} a_2$.
\noindent By Lemma \ref{Lemma_forking_on_D_Pi}, the classes $\mathcal D_{\Pi_A}(a_1),\mathcal D_{\Pi_A}(a_2)$, and $\mathcal D_{\Pi_A}(a_3)$ are convex; they are disjoint because $a_1,a_2$, and $a_3$ are pairwise independent over $A$. Therefore, $\mathcal D_{\Pi_A}(a_1)<\mathcal D_{\Pi_A}(a_2) <\mathcal D_{\Pi_A}(a_3)$. Now, we claim that a triple $(a_1,a_2,a_3)$ can be found to also satisfy the following: (4) \ $\tp(a_1/a_2a_3A)$ is isolated. \noindent To prove (4), start with a triple $(a_1',a_2,a_3)$ satisfying $\mathcal D_{\Pi_A}(a_1')<\mathcal D_{\Pi_A}(a_2) <\mathcal D_{\Pi_A}(a_3)$ and $a_1'\dep_{Aa_3} a_2$. Put $q=(\mathbf p_l)_{\restriction a_3A}$; clearly, $q(\Mon)$ is an initial part of $p(\Mon)$ and $a_1',a_2\in q(\Mon)$. Note that $q(\Mon)$ is a convex subset of $(\theta(\Mon),<)$, as $p(\Mon)$ is. Choose a formula $\phi(x,y)\in\tp(a_1'a_2/a_3A)$ that implies $x<y\land \theta(x)\land \theta(y)$ and witnesses $a_1'\dep_{Aa_3} a_2$: $\phi(\Mon,a_2)$ is a $q$-bounded subset of $\Mon$. Then $\phi(\Mon,a_2)\subseteq \theta(\Mon)$ implies that $\phi(\Mon,a_2)$ is a $q$-bounded subset of $(\theta(\Mon), <)$, so since $q(\Mon)$ is a convex subset of $(\theta(\Mon),<)$, we derive $\phi(\Mon,a_2)\subset q(\Mon)$. Let $a_1$ be an $\mathcal S$-minimal element of $\phi(\Mon,a_2)$ with $\tp(a_1/a_2a_3A)$ isolated; it exists by Lemma \ref{Lemma_S_minimal_exist}. Clearly, $a_1\models q$ and the triple $(a_1,a_2,a_3)$ satisfies condition (4). We will prove that (3) is also satisfied. Choose $b\in\Pi_A(\Mon)$ to witness the $\mathcal S$-minimality of $a_1$ in $\phi(\Mon,a_2)$: $b< \phi(\Mon,a_2)$ and $a_1\in S_b$. By Lemma \ref{Lemma_forking_on_D_Pi}(b), $a_1\in S_b$ implies $\mathcal D_{\Pi_A}(a_1)=\mathcal D_{\Pi_A}(b)$. In addition, $b<\phi(\Mon,a_2)$ and $\models \phi(a_1',a_2)$ imply $b\leqslant a_1'$, which together with $\mathcal D_{\Pi_A}(a_1')<\mathcal D_{\Pi_A}(a_2)$ implies $\mathcal D_{\Pi_A}(b)<\mathcal D_{\Pi_A}(a_2)$.
Hence $\mathcal D_{\Pi_A}(a_1)<\mathcal D_{\Pi_A}(a_2)<\mathcal D_{\Pi_A}(a_3)$. As $\mathcal D_p(a_i)\subseteq \mathcal D_{\Pi_A}(a_i)$ clearly holds for $i=1,2,3$, we conclude $\mathcal D_p(a_1)<\mathcal D_p(a_2)<\mathcal D_p(a_3)$. Since $a_1\in\phi(\Mon,a_2)$ and since the formula $\phi(x,a_2)$ has been chosen to witness $a_1'\dep_{a_3A} a_2$, we conclude $a_1\dep_{a_3A} a_2$. Therefore, the triple $(a_1,a_2,a_3)$ satisfies conditions (3) and (4). \smallskip From now on assume that $(a_1,a_2,a_3)$ satisfies conditions (3) and (4). Before stating the next claim, recall from the proof of (4) that the type $q=\tp(a_1/a_3A)=\tp(a_2/a_3A)$ is convex and that $q(\Mon)$ is convex in $(\theta(\Mon), <)$. We claim: (5) \ $\tp(a_2/a_1a_3A)$ is isolated. \noindent We know that $a_2\dep_{Aa_3} a_1$ holds and that the type $\tp(a_1/a_2a_3A)$ is isolated, so there is a formula $\phi'(x,y)\in\tp(a_1,a_2/a_3A)$ implying $x<y\land \theta(x)\land \theta(y)$ such that $\phi'(a_1,\Mon)$ is a $q$-bounded subset of $q(\Mon)$ and $\phi'(x,a_2)$ isolates $\tp(a_1/a_2a_3A)$. As $\phi'(a_1,\Mon)$ is a $q$-bounded subset of $\theta(\Mon)$ and $q(\Mon)$ is convex in $(\theta(\Mon),<)$, we conclude $\phi'(a_1,\Mon)\subset q(\Mon)$. Choose $a_2'\in\phi'(a_1,\Mon)$ such that $\tp(a_2'/a_1a_3A)$ is isolated; then $a_2'\in q(\Mon)$. Now, both $a_2$ and $a_2'$ realize the type $q\in S_1(a_3A)$, so $a_2\equiv a_2'\,(a_3A)$. Since the formula $\phi'(x,a_2)$ isolates a complete type in $S_1(a_2a_3A)$, the formula $\phi'(x,a_2')$ isolates the $a_3A$-conjugate of that type in $S_1(a_2'a_3A)$. Then $\models \phi'(a_1,a_2)\land \phi'(a_1,a_2')$ implies $a_1a_2\equiv a_1a_2'\,(a_3A)$. Since $\tp(a_2'/a_1a_3A)$ is isolated, so is $\tp(a_2/a_1a_3A)$; this proves (5). \smallskip Let $\psi(a_1,y,a_3)$ be a formula that isolates $\tp(a_2/a_1a_3A)$ and let $M$ be a countable model prime over $a_1a_3A$. We claim: (6) \ The set $\{\mathcal D_p(x)\mid x\in p(M)\}$ is densely ordered by $<$.
\noindent We need to prove that for all $b_1,b_3\in p(M)$ satisfying $\mathcal D_p(b_1)<\mathcal D_p(b_3)$ there exists a $b_2\in M$ with $\mathcal D_p(b_1)<\mathcal D_p(b_2)<\mathcal D_p(b_3)$. Note that $b_1b_3\equiv a_1a_3\,(A)$, so the formula $\psi(b_1,y,b_3)$ isolates a complete type in $S_1(b_1b_3A)$; that type is realized by some $b_2\in M$. Then $a_1a_2a_3\equiv b_1b_2b_3\, (A)$ and, in particular, $\mathcal D_p(b_1)<\mathcal D_p(b_2)<\mathcal D_p(b_3)$. This proves (6). \smallskip Let $(b_i\mid i\in\mathbb Q)$ be an increasing sequence of elements of $p(M)$ with $\mathcal D_p(b_i)<\mathcal D_p(b_j)$ for all rational numbers $i<j$. As $M$ is prime over $a_1a_3A$, for each $i\in\mathbb Q$ the type $\tp(b_i/a_1a_3A)$ is isolated, by $\phi_i(x)$ say. By Lemma \ref{Lemma_S_minimal_exist} we know $\phi_i(\Mon)\subseteq \mathcal D_p(b_i)$. This, together with $\mathcal D_p(b_i)<\mathcal D_p(b_j)$, implies $\phi_i(\Mon)<\phi_j(\Mon)$. It is easy to see that for each irrational number $r$ the set $$p_r(x):=\{\phi_i(\Mon)<x\mid i\in\mathbb Q \land i<r\}\cup \{x<\phi_j(\Mon)\mid j\in\mathbb Q\land r<j\}$$ is consistent. Our final claim in this proof is: (7) \ $p_r(x)\cup p_s(x)$ is inconsistent for all irrational numbers $r\neq s$. \noindent To prove the claim, suppose that $r<s$ are irrational. Choose $i\in \mathbb Q$ with $r<i<s$. Then $(x<\phi_i(\Mon))\in p_r$ and $(\phi_i(\Mon)<x)\in p_s$, so $p_r(x)\cup p_s(x)$ is inconsistent. We have found $2^{\aleph_0}$ pairwise inconsistent partial types over $a_1a_3A$, so $T$ is not small. Contradiction. \end{proof} \begin{Corollary}\label{Corollary_Ctype_trivial} Every $\mathcal C$-type over a finite domain is trivial. \end{Corollary} \begin{proof} Suppose that $A$ is finite and that $p\in S_1(A)$ is a $\mathcal C$-type. Since $T$ is small, there is a convex type $q$ that extends the same interval type as $p$; clearly, $p\nwor q$.
By Lemma \ref{Lemma_every_convexshift_istrivial}, $q$ is trivial, so by Theorem \ref{Theorem_nwor preserves triviality}(a), $p\nwor q$ implies that $p$ is also trivial. \end{proof} \begin{Lemma}[$I(\aleph_0,T)<2^{\aleph_0}$]\label{Lema_S_Pi_isfinite} Every type in $S_{\Pi}(\emptyset)$ is both convex and simple, and $S_{\Pi}(\emptyset)$ is a finite set. \end{Lemma} \begin{proof} By Lemma \ref{Lemma_Pi_basic} the set $S_{\Pi}(\emptyset)$ consists of all completions of the interval type $\Pi(x)$, so $S_{\Pi}(\emptyset)$ is a closed subspace of $S_1(\emptyset)$. Corollary \ref{Corollary_Ctype_trivial} guarantees that each $p\in S_{\Pi}(\emptyset)$ is trivial, and then Theorem \ref{Theorem trivial implies simple and convex} implies that such $p$ is both convex and simple. By Corollary \ref{Corollary_convex_iff_isolated_in_IT}, convex types are isolated points of $S_{\Pi}(\emptyset)$, so $S_{\Pi}(\emptyset)$ is a discrete closed subspace of the compact space $S_1(\emptyset)$; hence $S_{\Pi}(\emptyset)$ is finite. \end{proof} \begin{Lemma}[$I(\aleph_0,T)<2^{\aleph_0}$]\label{Lema_forking_reldef} $\mathcal D_{\Pi}$ is a relatively $\emptyset$-definable equivalence relation on $\Pi(\Mon)$. \end{Lemma} \begin{proof} We need to prove that the set $\mathcal D_{\Pi}(a)=\{x\in\Pi(\Mon)\mid x\dep a\}$ is $a$-definable for all $a\in \Pi(\Mon)$. By Lemma \ref{Lema_S_Pi_isfinite} every $p\in S_{\Pi}(\emptyset)$ is simple, so there exists a definable convex equivalence relation $E_p$ on $\Mon$ such that $(E_p)_{\restriction p(\Mon)}=\mathcal D_p$. For each $q\in S_{\Pi}(\emptyset)$ with $q\nfor \tp(a)$ choose an element $a_q\in q(\Mon)$ with $a_q\dep a$.
Note that the class $[a_q]_{E_q}$ contains exactly those realizations of $q$ that fork with $a$, so $\mathcal D_{\Pi}(a)=\bigcup \{[a_q]_{E_q}\mid q\nfor \tp(a), q\in S_{\Pi}(\emptyset)\}$; given that $S_{\Pi}(\emptyset)$ is finite, as proved in Lemma \ref{Lema_S_Pi_isfinite}, it follows that $\mathcal D_{\Pi}(a)$ is definable; since it is $\Aut_a(\Mon)$-invariant, it is $a$-definable. \end{proof} \begin{proof}[Proof of Theorem \ref{Theorem1_shift}] We need to prove $I(\aleph_0,T)=2^{\aleph_0}$. By way of contradiction, assume $I(\aleph_0,T)<2^{\aleph_0}$. Then Lemma \ref{Lema_forking_reldef} applies: $\mathcal D_{\Pi}$ is a convex relatively $\emptyset$-definable equivalence relation on $\Pi(\Mon)$. Since $\Pi(\Mon)$ is a convex subset of $\Mon$, there is a $\emptyset$-definable convex equivalence relation $E$ on $\Mon$ such that $E_{\restriction \Pi(\Mon)}=\mathcal D_{\Pi}$. Let $a\in \Pi(\Mon)$. Then $[a]_E=\mathcal D_{\Pi}(a)$ because $\mathcal D_{\Pi}(a)$ is a $\Pi$-bounded subset of $\Pi(\Mon)$. By Lemma \ref{Lemma_forking_on_D_Pi}, we know that $S_x\subset \mathcal D_{\Pi}(x)$ holds for all $x\in [a]_E$, so \ $\models \forall x(x\in [a]_E \rightarrow S_x \subseteq [a]_E)$. Since $\tp(a)$ is a $\mathcal C$-type, \ $\models (\forall x)(x\in [c]_E \rightarrow S_x \subseteq [c]_E)$ holds for some $c\in\mathcal C$. Then $S^{\omega}_c\subseteq [c]_E$. By Lemma \ref{Lemma_shift_properties}, $S_c$ is an $\mathcal S$-shift, so the sequence $(S_c^n\mid n\in\mathbb N)$ is strictly increasing. By compactness there exists a $b\in[c]_E$ such that $S_c^n<b$ holds for all $n\in\mathbb N$, that is, $S^{\omega}_c<b$. By Lemma \ref{Lemma_shift_properties} the set $S^{\omega}_c$ is a final part of $\mathcal C$, so we conclude $\mathcal C<b$. Then, since the class $[c]_E$ is convex, there exists $b'<b$ with $b'\in[c]_E\cap \Pi(\Mon)$. By our choice of $E$, the class $[b']_E=\mathcal D_\Pi(b')$ is disjoint from $\mathcal C$. Hence $c\notin[b']_E$. Contradiction.
The proof of Theorem \ref{Theorem1_shift} is now complete. \end{proof} \subsection{A generalization} Now, let $T$ be any complete countable theory with infinite models. A ``local'' context of weak quasi-o-minimality was introduced in \cite{MTwom}: A partial type $\Pi(x)$ is weakly quasi-o-minimal over $A$ if $\Pi(x)$ is over $A$ and every type $p\in S_x(A)$ containing $\Pi(x)$ is weakly o-minimal. In that case, we also say that the set $\Pi(\Mon)$ is weakly quasi-o-minimal over $A$. This definition was motivated (and justified) by the next proposition, whose corollary is that the theory $T$ is weakly quasi-o-minimal if and only if the formula $x=x$ is weakly quasi-o-minimal. \begin{Proposition}[{\cite[Proposition 3.9]{MTwom}}] Let $P$ be type-definable over $A$. Then $P$ is weakly quasi-o-minimal over $A$ if and only if there exists a relatively $A$-definable linear order $<$ on $P$ such that every relatively definable subset of $P$ is a Boolean combination of $<$-convex and relatively $A$-definable sets. In that case, the latter is true for any relatively $A$-definable linear order $<$ on $P$. \end{Proposition} Now, it is routine to generalize Theorem \ref{Theorem1_shift} to the local definable weakly quasi-o-minimal context. \begin{Theorem}Suppose that for some finite set of parameters $A$, there is a definable weakly quasi-o-minimal order over $A$ possessing a monotone $A$-definable family of semi-intervals which includes a shift. Then $I(\aleph_0,T)=2^{\aleph_0}$. \end{Theorem} \begin{thebibliography}{9} \bibitem{Alibek} Alibek, A. A., B. S. Baizhanov, and T. S. Zambarnaya. "Discrete order on a definable set and the number of models." Mathematical Journal 14.3 (2014): 5-13. \bibitem{BH} Baldwin, John T., and Leo Harrington. "Trivial pursuit: remarks on the main gap." Annals of Pure and Applied Logic 34.3 (1987): 209-230. \bibitem{Belegradek} Belegradek, O., Y. Peterzil, and F. Wagner. "Quasi-o-minimal structures." The Journal of Symbolic Logic 65.3 (2000): 1115-1132.
\bibitem{Goode} Goode, John B. "Some trivial considerations." The Journal of Symbolic Logic 56.2 (1991): 624-631. \bibitem{KOU} Kaplan, Itay, Alf Onshuus, and Alexander Usvyatsov. "Additivity of the dp-rank." Transactions of the American Mathematical Society 365.11 (2013): 5783-5804. \bibitem{FM} Friedman, Harvey, and Chris Miller. "Expansions of o-minimal structures by fast sequences." The Journal of Symbolic Logic 70.2 (2005): 410-418. \bibitem{MRS} Mekler, Alan, Matatyahu Rubin, and Charles Steinhorn. "Dedekind completeness and the algebraic complexity of o-minimal structures." Canadian Journal of Mathematics 44.4 (1992): 843-855. \bibitem{MT} Moconja, Slavko, and Predrag Tanovi\'c. "Stationarily ordered types and the number of countable models." Annals of Pure and Applied Logic 171.3 (2020): 102765. \bibitem{MTwmon} Moconja, Slavko, and Predrag Tanovi\'c. "Does weak quasi-o-minimality behave better than weak o-minimality?" Archive for Mathematical Logic 61.1 (2022): 81-103. \bibitem{MTwom} Moconja, Slavko, and Predrag Tanovi\'c. "Weakly o-minimal types." arXiv preprint arXiv:2404.08260 (2024). \bibitem{MTclass2} Moconja, Slavko, and Predrag Tanovi\'c. "Countable models of weakly quasi-o-minimal theories II." (in preparation) \bibitem{PeS} Peterzil, Ya'acov, and Sergei Starchenko. "A trichotomy theorem for o-minimal structures." Proceedings of the London Mathematical Society 77.3 (1998): 481-523. \bibitem{Ramakrishnan} Ramakrishnan, Janak. "Uniform bounds on growth in o-minimal structures." Mathematical Logic Quarterly 56.4 (2010): 406-408. \bibitem{Tane} Tanovi\'c, Predrag. "Vaught's Conjecture for Theories of Discretely Ordered Structures." Notre Dame Journal of Formal Logic 65.3 (2024): 247-257. \end{thebibliography} \end{document}
2412.20652v1
http://arxiv.org/abs/2412.20652v1
Hyperbolic knots with arbitrarily large torsion order in knot Floer homology
\pdfoutput=1 \documentclass{amsart} \usepackage{amssymb} \usepackage{graphicx} \usepackage{caption} \captionsetup[table]{skip=10pt} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{question}[theorem]{Question} \theoremstyle{definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \newcommand{\FL}{{\rm FL}} \begin{document} \title[Hyperbolic knots with large torsion order]{Hyperbolic knots with arbitrarily large torsion order in knot Floer homology} \author[K. Himeno]{Keisuke Himeno} \address{Graduate School of Advanced Science and Engineering, Hiroshima University, 1-3-1 Kagamiyama, Higashi-hiroshima, 7398526, Japan} \email{[email protected]} \thanks{The first author was supported by JST SPRING, Grant Number JPMJSP2132. } \author[M. Teragaito]{Masakazu Teragaito} \address{Department of Mathematics and Mathematics Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-hiroshima 7398524, Japan.} \email{[email protected]} \thanks{The second author has been partially supported by JSPS KAKENHI Grant Number JP20K03587.} \subjclass[2020]{Primary 57K10; Secondary 57K18} \date{\today} \commby{} \begin{abstract} In knot Floer homology, there are two types of torsion order. One is the minimal power of the action of the variable $U$ to annihilate the $\mathbb{F}_2[U]$-torsion submodule of the minus version of knot Floer homology $\mathrm{HFK}^-(K)$. This is introduced by Juh\'{a}sz, Miller and Zemke, and denoted by $\mathrm{Ord}(K)$. The other, $\mathrm{Ord}'(K)$, introduced by Gong and Marengon, is similarly defined for the $\mathbb{F}_2[U]$-torsion submodule of the unoriented knot Floer homology $\mathrm{HFK}'(K)$. 
For both torsion orders, it is known that arbitrarily large values are realized by torus knots. In this paper, we prove that they can be realized by hyperbolic knots, most of which are twisted torus knots. The two torsion orders are treated in a unified way by using the Upsilon torsion function introduced by Allen and Livingston. We also give the first infinite family of hyperbolic knots sharing a common Upsilon torsion function. \end{abstract} \keywords{twisted torus knot, torsion order, Upsilon torsion function, knot Floer homology} \maketitle \section{Introduction}\label{sec:intro} There are two types of torsion order in knot Floer homology. The first one was introduced by Juh\'{a}sz, Miller and Zemke \cite{JMZ}. Recall that the minus version of knot Floer homology $\mathrm{HFK}^-(K)$ is a finitely generated module over the polynomial ring $\mathbb{F}_2[U]$. Let us denote by $\mathrm{Tor}(\mathrm{HFK}^-(K))$ its $\mathbb{F}_2[U]$-torsion submodule. Then the torsion order of a knot $K$ is defined as \[ \mathrm{Ord}(K)=\min \{ k\ge 0 \mid U^k\cdot \mathrm{Tor}(\mathrm{HFK}^-(K))=0 \} \in \mathbb{N}\cup \{0\}. \] Of course, for the unknot $O$, $\mathrm{Ord}(O)=0$. Since knot Floer homology detects the unknot \cite{OS0}, $\mathrm{Ord}(K)\ge 1$ when $K$ is non-trivial. For example, for the torus knot $T(p,q)$ with $1<p<q$, $\mathrm{Ord}(T(p,q))=p-1$ \cite{JMZ}. Hence arbitrarily large values of torsion order can be realized by torus knots. There are several applications to knot cobordisms. See also \cite{HKP}. The second is similarly defined in \cite{GM} by using the torsion submodule of Ozsv\'{a}th, Stipsicz and Szab\'{o}'s unoriented knot Floer homology $\mathrm{HFK}'(K)$, which is also a module over $\mathbb{F}_2[U]$ (\cite{OSS}), instead of $\mathrm{HFK}^-(K)$. Hence \[ \mathrm{Ord}'(K)=\min \{ k\ge 0 \mid U^k\cdot \mathrm{Tor}(\mathrm{HFK}'(K))=0 \} \in \mathbb{N}\cup \{0\}. \] Again, $\mathrm{Ord}'(K)=0$ if and only if $K$ is trivial.
(Indeed, $\mathrm{HFK}'(O)=\mathbb{F}_2[U]$, which is torsion-free \cite[Corollary 2.15]{OSS}. Conversely, if $\mathrm{HFK}'(K)$ is torsion-free, then $\mathrm{HFK}'(K)=\mathbb{F}_2[U]= \mathrm{HFK}'(O)$ \cite[Proposition 3.5]{OSS}. So, the unoriented knot Floer complexes $\mathrm{CFK}'(K)$ and $\mathrm{CFK}'(O)$ share the same homology, which implies chain homotopy equivalence between them \cite[Proposition A.8.1]{OSS2}. Since setting $U=0$ reduces the complex into the hat version of knot Floer complex \cite[Proposition 2.4]{OSS}, we have $\widehat{\mathrm{HFK}}(K)\cong \widehat{\mathrm{HFK}}(O)$ by \cite[Proposition A.3.5]{OSS2}. This implies $K=O$.) Gong and Marengon \cite[Lemma 7.1]{GM} verified $\mathrm{Ord}'(T(p,p+1))=\lfloor \frac{p}{2} \rfloor$. Hence arbitrarily large values of this torsion order can again be realized by torus knots. As shown in \cite{AL}, the two types of torsion order can be unified in terms of the Upsilon torsion function $\Upsilon^{\mathrm{Tor}}_K(t)$, which is a piecewise linear continuous function defined on the interval $[0,2]$. The derivative of $\Upsilon^{\mathrm{Tor}}_K(t)$ near $0$ equals $\mathrm{Ord}(K)$, and $\Upsilon^{\mathrm{Tor}}_K(1)=\mathrm{Ord}'(K)$. We remark that the Upsilon torsion function and the two types of torsion order are not concordance invariants. The main purpose of this paper is to confirm that arbitrarily large values of these two types of torsion order can be realized by hyperbolic knots. Except for a few small values, we make use of twisted torus knots. \begin{theorem}\label{thm:main} Let $K$ be a twisted torus knot $T(p,kp+1;2,1)$ with $k\ge 1$. \begin{itemize} \item[(1)] If $p\ge 2$, then $\mathrm{Ord}(K)=p-1$. \item[(2)] If $p\ge 4$, then $\mathrm{Ord}'(K)=\lfloor\frac{p-2}{2}\rfloor$. \end{itemize} \end{theorem} Unfortunately, a twisted torus knot $T(p,kp+1;2,1)$ is not hyperbolic when $p<5$ (see Proposition \ref{prop:hyp}). However, an additional argument gives the following.
\begin{corollary}\label{cor:main} Let $N\ge 1$ be a positive integer. Then there exist infinitely many hyperbolic knots $K_1$ with $\mathrm{Ord}(K_1)=N$, and infinitely many hyperbolic knots $K_2$ with $\mathrm{Ord}'(K_2)=N$. \end{corollary} \begin{corollary}\label{cor:main2} There exist infinitely many hyperbolic knots that share the same Upsilon torsion function. \end{corollary} We pose a simple question. \begin{question} Let $M$ and $N$ be positive integers. Does there exist a knot $K$ with $(\mathrm{Ord}(K), \mathrm{Ord}'(K))=(M,N)$? \end{question} \begin{remark} For the two types of torsion order, the original symbols are $\mathrm{Ord}_v(K)$ and $\mathrm{Ord}_U(K)$ (see \cite{JMZ,GM}). \end{remark} \section{Twisted torus knots} A twisted torus knot is obtained from a torus knot of type $(p,q)$ by twisting $r$ adjacent strands by $s$ full twists. The resulting knot is denoted by $T(p,q;r,s)$ as in the literature \cite{L0,L1,L2,L3}. Throughout this section, let $K$ be the twisted torus knot $T(p,kp+1;2,1)$ with $p\ge 2, k\ge 1$. Clearly, if $p=2$, then $T(2,2k+1;2,1)=T(2,2k+3)$. Also, Lee \cite{L2,L3} shows that $T(3,3k+1;2,1)=T(3,3k+2)$, and $T(4,4k+1;2,1)$ is the $(2,8k+3)$-cable of $T(2,2k+1)$. We will show later that $T(p,kp+1;2,1)$ is hyperbolic if $p\ge 5$ (Proposition \ref{prop:hyp}). Since these knots are closures of positive braids, they are fibered by \cite{S}. In particular, the Seifert algorithm on a positive braided diagram gives a fiber, which is a minimal genus Seifert surface. Thus we know that $K$ has genus $(kp^2-kp+2)/2$. Hence $K$ is non-trivial.
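The genus value above can be checked directly. A minimal sketch (the function names are ours, not from the paper), assuming the standard fact that the Seifert algorithm applied to the closure of a positive braid on $n$ strands with $c$ crossings gives a fiber surface of Euler characteristic $n-c$, hence of genus $(c-n+1)/2$ when the closure is a knot:

```python
# Sanity check of the genus formula for K = T(p, kp+1; 2, 1).
# Assumption: for a knot that is the closure of a positive braid with
# c crossings on n strands, the Seifert algorithm gives a minimal genus
# surface of genus (c - n + 1)/2.

def positive_braid_genus(crossings, strands):
    """Genus of the knot obtained as the closure of a positive braid."""
    assert (crossings - strands + 1) % 2 == 0, "closure must be a knot"
    return (crossings - strands + 1) // 2

def twisted_torus_genus(p, k):
    # T(p, kp+1; 2, 1) is the closure of the p-strand braid
    # (sigma_1 ... sigma_{p-1})^{kp+1} sigma_1^2:
    # (p-1)(kp+1) crossings from the torus braid plus 2 from the extra twist.
    crossings = (p - 1) * (k * p + 1) + 2
    return positive_braid_genus(crossings, p)

# Compare with the closed formula (kp^2 - kp + 2)/2 stated in the text.
for p in range(2, 9):
    for k in range(1, 6):
        assert twisted_torus_genus(p, k) == (k * p * p - k * p + 2) // 2

# Example: the trefoil T(2,3), closure of sigma_1^3 on 2 strands, has genus 1.
print(positive_braid_genus(3, 2))  # 1
```

Note that $(p-1)(kp+1)+2-p+1=kp^2-kp+2$ is always even, so the parity assertion never fires for these knots.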
\begin{lemma}\label{lem:tunnel} $K$ is an L--space knot.\end{lemma} \begin{proof} This follows from \cite{V}.\end{proof} \begin{lemma}\label{lem:alex} The Alexander polynomial $\Delta_K(t)$ of $K$ is given by \[ \Delta_K(t)= \begin{cases} \begin{aligned} 1&+\sum_{i=1}^k(-t +t^{p})t^{(i-1)p}\\ &+\sum_{i=1}^{p-3} \sum_{j=1}^{k}( -t^{ikp+1}+t^{ikp+2}-t^{ikp+2+i}+t^{(ik+1)p} ) t^{(j-1)p}\\ &+\sum_{i=1}^{k+1}( -t^{kp(p-2)+1} + t^{kp(p-2)+2} )t^{(i-1)p}, \end{aligned} & \text{ if $p\ge 3$},\\ 1-t+t^2-\dots +t^{2k+2}, &\text{if $p=2$.} \end{cases} \] \end{lemma} \begin{proof} When $p=2$, it is well known that \[ \Delta_K(t)=\frac{ (1-t) (1-t^{2(2k+3)}) } { (1-t^2) (1-t^{2k+3} ) }=1-t+t^2-\dots +t^{2k+2}, \] since $K=T(2,2k+3)$ as mentioned before. Assume $p\ge 3$. The conclusion essentially follows from \cite{Mo}. In his notation, our knot $K$ is $\Delta(p,kp+1,2)$ with $r=p-1$. Hence \begin{align*} \Delta_K(t) &=\frac{1-t} {(1-t^p)(1-t^{kp+1}) }\cdot \\ & \qquad ( 1- (1-t) ( t^{(p-1)(kp+1)+1} + t^{kp+1} ) -t^{p(kp+1)+2} ). \end{align*} The second factor is changed as \[ \begin{split} 1 &- (1-t) ( t^{(p-1)(kp+1)+1} + t^{kp+1} ) -t^{p(kp+1)+2} = 1- t^{(p-1)(kp+1)+1}- t^{kp+1}\\ & + t^{(p-1)(kp+1)+2} + t^{kp+2} -t^{p(kp+1)+2} = (1-t^{kp+1})+\\ &+t^{kp+2}(1-t^{ (kp+1)(p-2)})+ t^{(p-1)(kp+1)+2}(1-t^{kp+1}). \end{split} \] Thus \[ \Delta_K(t)=\frac{1-t}{1-t^p}\cdot ( 1 +t^{kp+2} \sum_{i=0}^{p-3} t^{i(kp+1)}+t^{(p-1)(kp+1)+2} ) . \] We set \begin{align*} A&=\sum_{i=1}^k(-t +t^{p})t^{(i-1)p}, \\ B&=\sum_{i=1}^{p-3} \sum_{j=1}^{k}( -t^{ikp+1}+t^{ikp+2}-t^{ikp+2+i}+t^{(ik+1)p} ) t^{(j-1)p}, \\ C&=\sum_{i=1}^{k+1}( -t^{kp(p-2)+1} + t^{kp(p-2)+2} )t^{(i-1)p}. \end{align*} Then it is straightforward to calculate \begin{align*} (1-t^p)A&= -t+t^p+t^{kp+1}-t^{(k+1)p},\\ (1-t^p)B&= -t^{kp+1}+t^{(k+1)p}+(1-t) \sum_{i=0}^{p-3} t^{(i+1)kp+i+2}+t^{kp(p-2)+1}-t^{kp(p-2)+2}, \\ (1-t^p)C&= -t^{kp(p-2)+1} + t^{kp(p-2)+2} + t^{kp(p-2)+1+(k+1)p} - t^{kp(p-2)+2+(k+1)p}. 
\end{align*} Hence \[ \begin{split} (1-t^p)(1+A+B+C)&=1-t +(1-t) \sum_{i=0}^{p-3} t^{(i+1)kp+i+2} \\ &\qquad +(1-t) t^{kp(p-2)+1+(k+1)p}\\ &=(1-t)( 1+ t^{kp+2}\sum_{i=0}^{p-3}t^{i(kp+1)}+t^{ (p-1)(kp+1)+2 }). \end{split} \] This shows that $\Delta_K(t)=1+A+B+C$ as desired. \end{proof} \begin{corollary}\label{cor:gap} The gaps of the exponents of the Alexander polynomial of $K$ are \[ (1,p-1)^k,(1,1,1,p-3)^k,(1,1,2,p-4)^k,\dots, (1,1,p-3,1)^k,1,1,(p-1,1)^k\] if $p\ge 3$, and $1^{2k+2}$ if $p=2$. Here, the power indicates the repetition. (We remark that the above sequence is $(1,2)^k,1,1,(2,1)^k$ when $p=3$.) \end{corollary} To prove that our twisted torus knot $K=T(p,kp+1;2,1)$ is hyperbolic when $p\ge 5$, we give a more general result by using \cite{I}. A knot $k$ is called a \textit{fully positive braid knot\/} if it is the closure of a positive braid which contains at least one full twist. \begin{proposition}\label{prop:full} Let $k$ be a fully positive braid knot. If $k$ is a tunnel number one, satellite knot, then $k$ is a cable knot. \end{proposition} \begin{proof} By \cite{MS}, $k$ has a torus knot $T(r,s)$ as a companion. We may assume that $1<r<s$. Then Theorem 1.2 of \cite{I} claims that the pattern $P$ is represented by a positive braid in a solid torus. Let us recall the construction of \cite{MS}. Starting from a $2$-bridge link $K_1\cup K_2$, consider the solid torus $E(K_2)$ containing $K_1$. Remark that $K_1$ and $K_2$ are unknotted. For the companion knot $T(r,s)$, consider a homeomorphism from $E(K_2)$ to the tubular neighborhood of $T(r,s)$, which sends the preferred longitude of $E(K_2)$ to the regular fiber of the Seifert fibration in the exterior of $T(r,s)$. Hence our pattern knot $P$, which is defined under preserving preferred longitudes, is obtained from $K_1$ by adding $rs$ positive full twists to $E(K_2)$. 
Since $K_1$ is unknotted, we can set the pattern $P$ as the closure of a positive braid \[ \sigma_{i(1)} \sigma_{i(2)} \dots \sigma_{i(n-1)} (\sigma_1 \sigma_2 \dots \sigma_{n-1})^{nrs} \] for some $n\ge 2$, where $\{i(1), i(2), \dots, i(n-1)\}=\{1,2,\dots, n-1\}$. (If the initial part before $rs$ full twists contains more than $n-1$ generators, then the Seifert algorithm gives a fiber surface of the closure $K_1$, which has positive genus.) For two braids $\beta_1$ and $\beta_2$, we write $\beta_1\sim \beta_2$ if they are conjugate or equivalent. \begin{claim}\label{cl:braid} $\sigma_{i(1)} \sigma_{i(2)} \dots \sigma_{i(n-1)} (\sigma_1 \sigma_2 \dots \sigma_{n-1})^{nrs} \sim (\sigma_1\sigma_2 \dots \sigma_{n-1})^{nrs+1}$. \end{claim} \begin{proof}[Proof of Claim \ref{cl:braid}] Put $F=(\sigma_1 \sigma_2 \dots \sigma_{n-1})^{nrs}$, which is central in the braid group. First, write $\sigma_{i(1)} \sigma_{i(2)} \dots \sigma_{i(n-1)} F=U_1 \sigma_1 U_2 F$, where $U_i$ is a word without $\sigma_1$, which is possibly empty. Then $U_1\sigma_1 U_2 F \sim \sigma_1 U_2F U_1\sim \sigma_1 U_2U_1F$. Next, set $U_2U_1=V_1\sigma_2 V_2$, where $V_i$ is a (possibly, empty) word without $\sigma_1, \sigma_2$. Note that $\sigma_1$ and $V_1$ commute. Then \[ \sigma_1 U_2U_1F =\sigma_1 V_1\sigma_2 V_2 F \sim V_1\sigma_1 \sigma_2 V_2F\sim \sigma_1\sigma_2V_2FV_1 \sim \sigma_1\sigma_2 V_2V_1F. \] Repeating this procedure, we have the conclusion. \end{proof} Thus the pattern $P$ is the closure of a braid $(\sigma_1\sigma_2\dots \sigma_{n-1})^{nrs+1}$. This means that $k$ is a cable knot. \end{proof} \begin{remark} Lee \cite[Question 1.2]{L3} asks whether $T(p,q;r, s)$ is a cable knot, if it is a satellite knot under a condition that $1<p<q$, $r\ne q$, $r$ is not a multiple of $p$, $1<r\le p+q$ and $s>0$. Proposition \ref{prop:full} gives a positive answer if the knot has tunnel number one, which is known to be true when $r\in \{2,3\}$ (see \cite{JL}). 
\end{remark} \begin{proposition}\label{prop:hyp} If $p\ge 5$, then $K$ is hyperbolic. \end{proposition} \begin{proof} First, $K=T(p,kp+1;2,1)$ is a torus knot if and only if $p=2,3$ by \cite[Theorem 1.1]{L2}. Hence we know that our knot is not a torus knot. Assume, for a contradiction, that $K$ is a satellite knot. We remark that $K$ has tunnel number one. (A short arc at the extra full twist gives an unknotting tunnel.) Proposition \ref{prop:full} shows that $K$ is the $(n,nrs+1)$-cable of $T(r,s)$. Then $K=T(4,4m+1; 2,1)$ for some $m\ge 1$ by \cite{L3}. This is a contradiction, because of Lemma \ref{lem:alex} and $p\ge 5$. Thus we have shown that $K$ is neither a torus knot nor a satellite knot, so $K$ is hyperbolic. \end{proof} \section{Upsilon torsion function}\label{sec:upsilon-torsion} In this section, we determine the Upsilon torsion function $\Upsilon^{\mathrm{Tor}}_K(t)$ of $K=T(p,kp+1;2,1)$. Since $K$ is an L-space knot (Lemma \ref{lem:tunnel}), the full knot Floer complex $\mathrm{CFK}^\infty(K)$ is determined by the Alexander polynomial (\cite{OS}). It has the form of a staircase diagram described by the gaps of the Alexander polynomial. If the gaps are given as a sequence $a_1,a_2,\dots, a_n$, then the terms give the lengths of the horizontal and vertical steps. More precisely, let $g$ be the genus of $K$. Start at the vertex $(0,g)$ on the coordinate plane. Go right $a_1$ steps, and down $a_2$ steps, and so on. Finally, we reach $(g,0)$. By the symmetry of the Alexander polynomial, the staircase inherits the symmetry along the line $y=x$. We follow the process in \cite[Appendix]{AL}. However, we assign a modified filtration level $\FL$ to each generator of the complex. If a generator $x$ has the coordinates $(p,q)$, then $\FL(x)=tq+(2-t)p$. In fact, for any $t\in [0,2]$, $\FL$ defines a real-valued function on $\mathrm{CFK}^\infty(K)$. Then, for all $s\in \mathbb{R}$, $\mathcal{F}_s$ is spanned by all vectors $x \in \mathrm{CFK}^\infty(K)$ such that $\FL(x)\le s$.
The collection $\{\mathcal{F}_s\}$ gives a filtration on $\mathrm{CFK}^\infty(K)$. See \cite{L}. (Remark that this filtration level is just twice that used in \cite{AL}.) Since $\mathcal{F}_s\subset \mathcal{F}_u$ if $s\le u$, we may add $x_j\in \mathcal{F}_s$ to a generator $x_i\in \mathcal{F}_u$ without any change of the filtration level. That is, $\FL(x_i+x_j)=\FL(x_i)$. For the staircase complex, repeating a change of basis gradually splits the complex into a single isolated generator and separated arrows. Then the value of the Upsilon torsion function is given as the maximum difference between filtration levels among the arrows. Since the Upsilon torsion function, defined on $[0,2]$, is symmetric along $t=1$, it suffices to consider the domain $[0,1]$. As the simplest case, we demonstrate the process when $p=2$. \begin{example}\label{ex:p=2} Let $p=2$. Then $K=T(2,2k+3)$ as mentioned before, and we show that its Upsilon torsion function is $\Upsilon^{\mathrm{Tor}}_K(t)=t \ (0\le t\le 1)$, independent of $k$. By Corollary \ref{cor:gap}, the gaps of the exponents of the Alexander polynomial are $1,1,\dots, 1$ (repeated $2k+2$ times). Hence the staircase diagram has the form shown in Figure \ref{fig:p=2}, where $A_i$ has Maslov grading $0$, but $B_i$ has grading $1$, and each arrow has length one. \begin{figure}[ht] \centering \includegraphics*[scale=0.7]{p=2.pdf} \caption{Left: The staircase diagram when $p=2$. The generator $A_i$ has grading $0$, but $B_i$ has grading $1$. Each arrow has length one. Right: By adding $A_0$ to $A_1,\dots, A_{k+1}$, the generator $A_0$ is isolated from the complex.}\label{fig:p=2} \end{figure} Each generator is assigned the filtration level $\FL$. The difference between filtration levels among the generators is important. We have $\FL(B_{i+1})-\FL(A_i)=2-t$ and $\FL(B_i)-\FL(A_i)=t$, because each arrow has length one.
Thus we have \[ \FL(A_0)\le \FL(A_1)\le \dots \le \FL(A_{k+1}), \] where each equality occurs only when $t=1$. Hence $A_0$ has the lowest filtration level among the generators with grading $0$. Add $A_0$ to $A_1,\dots, A_{k+1}$. Then the generator $A_0$ is isolated from the complex as shown in Figure \ref{fig:p=2}. (Recall that we use $\mathbb{F}_2$ coefficients.) In the remaining part of the complex, $A_1+A_0$ is the lowest, since $\FL(A_i+A_0)=\FL(A_i)$ for $i=1,2,\dots,k+1$. To simplify the notation, we keep the same symbol $A_i$, instead of $A_i+A_0$, after this, if no confusion can arise. Add $A_1$ to the other generators with grading $0$, except $A_0$. Then the arrow $B_1\rightarrow A_1$ is split off from the complex. Repeating this process leads to the decomposition of the original staircase into one isolated generator $A_0$ and $k+1$ vertical arrows. For each arrow, the difference of filtration levels is equal to $t$, so the maximum difference is $t$ among the arrows. This shows $\Upsilon^{\mathrm{Tor}}_K(t)=t$. \end{example} \begin{theorem}\label{thm:upsilon-torsion} Let $p\ge 4$. The Upsilon torsion function $\Upsilon^{\mathrm{Tor}}_K(t)$ is given as \[ \Upsilon_K^{{\rm Tor}}(t)= \begin{cases} (p-1)t & (0\le t \le \frac{2}{p})\\ 2-t & (\frac{2}{p}\le t \le \frac{2}{p-2})\\ (p-3)t & (\frac{2}{p-2}\le t \le \frac{4}{p})\\ 2m+(-m-1)t & (\frac{2m}{p}\le t \le \frac{2m}{p-1},\ m=2,\dots, \lfloor\frac{p-1}{2}\rfloor)\\ (p-2-m)t & (\frac{2m}{p-1}\le t\le \frac{2(m+1)}{p},\ m=2,\dots,\lfloor\frac{p}{2}\rfloor-1). \end{cases} \] In particular, $\Upsilon^{\mathrm{Tor}}_K(1)=\lfloor \frac{p-2}{2}\rfloor$. \end{theorem} \begin{proof} Recall that the gaps are \[ (1,p-1)^k,(1,1,1,p-3)^k,(1,1,2,p-4)^k,\dots, (1,1,p-3,1)^k,1,1,(p-1,1)^k \] by Corollary \ref{cor:gap}. We name the generators of the staircase as in Figures \ref{Fig1}, \ref{Fig2} and \ref{Fig3}. 
\begin{figure}[ht] \centering \includegraphics*[scale=0.3]{Fig1.pdf} \caption{The first part corresponds to $(1,p-1)^k$. The generators $A_i$ have Maslov grading $0$, but $B_i$ have $1$. The number $p-1$ next to each vertical arrow indicates the length. Each horizontal arrow has length one. Here, $C^1_0=A_k$. }\label{Fig1} \end{figure} \begin{figure}[ht] \centering \includegraphics*[scale=0.3]{Fig2.pdf} \caption{Left: The second part corresponds to $(1,1,j,p-2-j)^k\ (j=1,\ldots,p-3)$. The generators $C_*^j$ and $E_*^j$ have Maslov grading $0$, but the others have $1$. Right: This is a connecting part between $(1,1,j,p-2-j)$ and $(1,1,j+1, p-2-(j+1))$.}\label{Fig2} \end{figure} \begin{figure}[ht] \centering \includegraphics*[scale=0.3]{Fig3.pdf} \caption{The last part corresponds to $1,1,(p-1,1)^k$. The generators $G$ and $A_*'$ have Maslov grading $0$, but the others have $1$. }\label{Fig3} \end{figure} In particular, we have the difference between filtration levels of certain generators with Maslov grading 0 as in Table \ref{table:filtration}. The argument is divided into 4 cases. \begin{table}[htbp] \renewcommand{\arraystretch}{1.2} \begin{tabular}{l |l} \hline Difference & Indices\\ \hline $\FL(A_i)-\FL(A_{i-1})=2-pt$ & $i=0,\ldots,k$\\ $\FL(E^j_i)-\FL(C^j_{i-1})=2-2t\ge 0$ & $i=1,\ldots,k;\ j=1,\ldots,p-3$\\ $\FL(C^j_i)-\FL(E^j_i)=(2-p)t+2j $& $i=1,\ldots,k;\ j=1,\ldots,p-3$\\ $\FL(C^j_i)-\FL(C^j_{i-1})=-pt+2(j+1)$ & $i=1,\ldots,k;\ j=1,\ldots,p-3$\\ $\FL(C^j_0)-\FL(C^{j-1}_0)=(-pt+2j)k$ & $j=2,\ldots,p-3$ \\ $\FL(A'_0)-\FL(G)=2-2t\ge 0$ & \\ $\FL(A'_i)-\FL(A'_{i-1})=-pt+2p-2>0$ & $i=1,\ldots,k$\\ \hline \end{tabular} \caption{Difference between filtration levels of the generators with Maslov grading $0$.} \label{table:filtration} \end{table} \medskip \textbf{Case 1. $0\le t \le \frac{2}{p}$.} Then any difference in Table \ref{table:filtration} is at least $0$. 
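Indeed, when $0\le t\le\frac{2}{p}$, each entry of the table is nonnegative:
\[
2-pt\ge 0,\qquad -pt+2(j+1)\ge 2-pt\ge 0,\qquad (2-p)t+2j\ge 2j-\frac{2(p-2)}{p}>0 \quad (j\ge 1),
\]
and similarly $(-pt+2j)k\ge (2j-2)k\ge 0$ for $j\ge 2$, while $-pt+2p-2\ge 2p-4>0$ for $p\ge 4$.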
Hence $A_0$ has the lowest filtration level among the generators with grading $0$, and the filtration levels increase as we move to the right. Exactly as in Example \ref{ex:p=2}, the staircase complex is decomposed into a single isolated generator $A_0$ and separated vertical arrows $B_i\rightarrow A_i\ (i=1,2,\dots,k)$. Hence the maximum difference of filtration levels on the arrows is $(p-1)t$. This gives $\Upsilon^{\mathrm{Tor}}_K(t)=(p-1)t$ for $0\le t \le 2/p$. \medskip \textbf{Case 2. $\frac{2}{p}\le t \le \frac{4}{p}$.} Then $\FL(A_0)\ge \FL(A_1)\ge \dots \ge \FL(A_k)$. After $A_k$, the filtration levels increase among the generators with grading $0$, so $A_k$ is the lowest. Add $A_k$ to the other generators with grading $0$. Then $A_k$ is isolated, and the complex splits into two parts. We say that the first part, which starts at $A_0$ and ends at $B_k$, is \textit{N-shaped}, while the second, which starts at $D_1^1$ and ends at $A_k'$, is \textit{mirror N-shaped}. In general, if a ``zigzag'' complex starts and ends at horizontal arrows, then it is N-shaped. If it starts and ends at vertical arrows, then it is mirror N-shaped. For the first part, add $A_{k-1}$ to the others with grading $0$, which splits the arrow $A_{k-1} \leftarrow B_k$ off. Repeat this as in Case 1. Then the N-shaped complex is decomposed into separated horizontal arrows $A_{i-1}\leftarrow B_i\ (i=1,2,\dots,k)$, each of which has difference $2-t$. The mirror N-shaped complex is similarly decomposed into vertical arrows. Thus the maximum difference among them is $(p-3)t$. Compare $2-t$ and $(p-3)t$. If $\frac{2}{p}\le t \le \frac{2}{p-2}$, then $2-t \ge (p-3)t$. If $\frac{2}{p-2}\le t \le \frac{4}{p}$, then $2-t\le (p-3)t$. Hence $\Upsilon^{\mathrm{Tor}}_K(t)=2-t$ for $\frac{2}{p}\le t\le \frac{2}{p-2}$, and $(p-3)t$ for $\frac{2}{p-2}\le t \le\frac{4}{p}$. \medskip \textbf{Case 3.
$\frac{2m}{p}\le t \le \frac{2m}{p-1}\ (m=2,\dots,\lfloor (p-1)/2\rfloor)$.} From Table \ref{table:filtration}, we see that $C_k^{m-1}=C_0^m$ is the lowest among the generators with grading $0$. Adding this to the others with grading $0$ decomposes the complex into one isolated generator $C_0^m$, the N-shaped one between $A_0$ and $F_k^{m-1}$ and the mirror N-shaped one between $D_1^m$ and $A_k'$. As before, the mirror N-shaped complex can be decomposed into vertical arrows. The longest arrow has length $(p-2-m)t$. \begin{figure}[ht] \centering \includegraphics*[scale=0.4]{3zigzag.pdf} \caption{The N-shaped complex between $A_0$ and $F_k^{m-1}$ after isolating the lowest vertex $C_0^m$, where $k=2$. The height indicates the filtration level of each generator. As before, we keep the same notation for generators after a change of basis. }\label{fig:case3} \end{figure} The N-shaped complex is described in Figure \ref{fig:case3}. We have \[ \begin{split} \FL(A_0)\ge \FL(A_1)\ge \cdots &\ge \FL(A_k=C_0^1)\ge \FL(C_1^1)\ge \dots \ge \FL(C_k^1=C_0^2)\\ &\ge \FL(C_1^2)\ge \dots \ge \FL(C_{k-1}^{m-1}) \end{split} \] and $\FL(E_i^j)\ge \FL(C_{i-1}^j)\ (i=1,2,\dots,k;\ j=1,2,\dots,m-1)$. Hence $C_{k-1}^{m-1}$ is the lowest. Adding this to the others with grading $0$ on the left splits an N-shaped complex $C_{k-1}^{m-1} \leftarrow D_k^{m-1} \rightarrow E_k^{m-1} \leftarrow F_k^{m-1}$ off. For the remaining part, the lowest is $C_{k-2}^{m-1}$. Again, adding this to the others with grading $0$ on the left splits an N-shaped complex $C_{k-2}^{m-1} \leftarrow D_{k-1}^{m-1} \rightarrow E_{k-1}^{m-1} \leftarrow F_{k-1}^{m-1}$ off. Repeating this, we obtain an N-shaped complex between $A_0$ and $B_k$, and N-shaped complexes $C_{i-1}^j \leftarrow D_{i}^{j} \rightarrow E_{i}^{j} \leftarrow F_{i}^{j} \ (i=1,\dots, k;\ j=1,\dots,m-1)$. For the former, the process as in Case 2 yields separated horizontal arrows, each of which has difference $2-t$.
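Each inequality in the displayed chain follows directly from Table \ref{table:filtration}: since $t\ge\frac{2m}{p}$ and $m\ge 2$ in this case, for $j\le m-1$ we have
\[
\FL(A_i)-\FL(A_{i-1})=2-pt\le 2-2m<0,\qquad \FL(C_i^j)-\FL(C_{i-1}^j)=-pt+2(j+1)\le 2(j+1)-2m\le 0,
\]
while $\FL(E^j_i)-\FL(C^j_{i-1})=2-2t\ge 0$ always holds for $t\le 1$.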
Let us consider the latter N-shaped ones. Since $\FL(F_i^j)-\FL(D_i^j)=-t+(2-t)j\ge 0$, add $D_i^j$ to $F_i^j$. After that, add $C_{i-1}^j$ to $E_i^j$. As shown in Figure \ref{fig:case3N}, this change of basis decomposes the complex into a pair of arrows. One has difference $2-t$, and the other has difference $\FL(F_i^j)-\FL(C_{i-1}^j)=2(j+1)+(-j-2)t$. Note $2-t\le 2(j+1)+(-j-2)t$. Furthermore, among $1\le j\le m-1$, the maximum value $2m+(-m-1)t$ is attained at $j=m-1$. \begin{figure}[ht] \centering \includegraphics*[scale=0.8]{3Nnew.pdf} \caption{A change of basis for an N-shaped complex. Add $D_i^j$ to $F_i^j$, and $C_{i-1}^j$ to $E_i^j$.}\label{fig:case3N} \end{figure} Hence we need to compare the values $(p-2-m)t$ and $2m+(-m-1)t$. Since $2m+(-m-1)t \ge (p-2-m)t$ when $t\le \frac{2m}{p-1}$, we have $\Upsilon^{\mathrm{Tor}}_K(t)=2m+(-m-1)t$ for this case. \medskip \textbf{Case 4. $\frac{2m}{p-1}\le t \le \frac{2(m+1)}{p}\ (m=2,\dots,\lfloor p/2\rfloor-1)$.} As in Case 3, $C_k^{m-1}=C_0^m$ is the lowest. So, adding this to the others with grading $0$ again decomposes the complex into one isolated generator $C_0^m$, the N-shaped one and the mirror N-shaped one. For the N-shaped complex, the situation is the same as in Case 3. Thus we have an arrow with maximum difference $2m+(-m-1)t$ from this N-shaped complex. However, we now need to handle the mirror N-shaped complex differently. First, consider the case where $\frac{2m}{p-1}\le t\le \frac{2m}{p-2}$. Then the filtration levels of the generators with grading $0$ increase as we go to the right. So, as in Case 3, this part can be decomposed into vertical arrows, and the longest has length $(p-2-m)t$. Second, consider the case where $\frac{2m}{p-2}\le t\le \frac{2(m+1)}{p}$. Then $\FL(E_i^m)\ge \FL(C_i^m)\ge \FL(C_{i-1}^m)\ (i=1,2,\dots,k)$, but the filtration levels of the remaining generators with grading $0$, $C_0^{m+1},E_1^{m+1},C_1^{m+1},\dots,G,A_0',\dots A_k'$, increase as we go to the right. See Figure \ref{fig:case4}.
\begin{figure}[ht] \centering \includegraphics*[scale=0.35]{4zigzag.pdf} \caption{The mirror N-shaped complex when $\frac{2m}{p-2}\le t\le \frac{2(m+1)}{p}$, where $k=2$. The generator $C_1^m$ is the lowest.}\label{fig:case4} \end{figure} Here, $C_1^m$ is the lowest. Adding this to the others with grading $0$ on the right splits a mirror N-shaped complex $D_1^m\rightarrow E_1^m \leftarrow F_1^m \rightarrow C_1^m$ off. Then $C_2^m$ is the lowest in the remaining part. Repeating this yields mirror N-shaped complexes $D_i^m\rightarrow E_i^m \leftarrow F_i^m \rightarrow C_i^m\ (i=1,2,\dots,k)$, and one more mirror N-shaped one between $D_1^{m+1}$ and $A_k'$. For the last one, the previous process gives vertical arrows. For each mirror N-shaped complex $D_i^m\rightarrow E_i^m \leftarrow F_i^m \rightarrow C_i^m$, we remark $\FL(F_i^m)-\FL(D_i^m)=(-m-1)t+2m\ge 0$. Hence adding $D_i^m$ to $F_i^m$ yields a pair of vertical arrows as shown in Figure \ref{fig:case4N}. Thus we have only vertical arrows, the longest of which has length $(p-2-m)t$. \begin{figure}[ht] \centering \includegraphics*[scale=0.8]{4Nnew.pdf} \caption{A change of basis for a mirror N-shaped complex. Adding $D_i^m$ to $F_i^m$ yields a pair of vertical arrows.}\label{fig:case4N} \end{figure} Finally, compare $2m+(-m-1)t$ and $(p-2-m)t$. Since $t\ge \frac{2m}{p-1}$, we have $(p-2-m)t\ge 2m+(-m-1)t$. Hence $\Upsilon^{\mathrm{Tor}}_K(t)=(p-2-m)t$ for this case. \end{proof} \begin{example} When $p=6$, \[ \Upsilon^{\mathrm{Tor}}_K(t)= \begin{cases} 5t & (0\le t\le \frac{1}{3}) \\ 2-t & (\frac{1}{3}\le t\le \frac{1}{2}) \\ 3t & (\frac{1}{2}\le t\le \frac{2}{3}) \\ 4-3t & (\frac{2}{3}\le t\le \frac{4}{5}) \\ 2t & (\frac{4}{5}\le t\le 1). \end{cases} \] See Figure \ref{fig:p=6}. \end{example} \begin{figure}[ht] \centering \includegraphics*[scale=0.7]{p=6.pdf} \caption{The Upsilon torsion function $\Upsilon^{\mathrm{Tor}}_K(t)$ of $K=T(6,6k+1;2,1)$.
Then $\Upsilon^{\mathrm{Tor}}_K(1)=2$.}\label{fig:p=6} \end{figure} \section{Torsion order} We are ready to prove Theorem \ref{thm:main}. \begin{proof}[Proof of Theorem \ref{thm:main}] By \cite{AL}, $\Upsilon'^{\mathrm{Tor}}_K(0)=\mathrm{Ord}(K)$ and $\Upsilon^{\mathrm{Tor}}_K(1)=\mathrm{Ord}'(K)$. Thus Theorem \ref{thm:upsilon-torsion} immediately gives $\mathrm{Ord}(K)=p-1$ and $\mathrm{Ord}'(K)=\lfloor (p-2)/2 \rfloor$ when $p\ge 4$. When $p\in \{2,3\}$, $K$ is a torus knot, and $\mathrm{Ord}(K)$ is equal to the longest gap in the exponents of the Alexander polynomial by \cite[Lemma 5.1]{JMZ}. Hence it is $p-1$ by Corollary \ref{cor:gap}. (Indeed, the latter argument proves $\mathrm{Ord}(K)=p-1$ for any $p\ge 2$.) \end{proof} \begin{proof}[Proof of Corollary \ref{cor:main}] By Proposition \ref{prop:hyp}, the twisted torus knot $K=T(p,kp+1;2,1)$ is hyperbolic if $p\ge 5$. Since $K$ has genus $(kp^2-kp+2)/2$, distinct choices of $k$, with a fixed $p$, give distinct knots. Set $K_2=K$ with $p=2N+3\ge 5$. Then $K_2$ is hyperbolic and $\mathrm{Ord}'(K_2)=\lfloor (p-2)/2\rfloor=N$. If $N\ge 4$, then set $K_1=K$ with $p=N+1$. Then $K_1$ is hyperbolic and $\mathrm{Ord}(K_1)=p-1=N$. To complete the proof, we need infinitely many hyperbolic knots $K_1$ whose $\mathrm{Ord}(K_1)$ takes each of the values $1,2, 3$. \begin{enumerate} \item By \cite[Corollary 1.8]{JMZ}, $\mathrm{Ord}(L)\le b(L)-1$ for any knot $L$. Hence if $K_1$ is a hyperbolic $2$-bridge knot, then $\mathrm{Ord}(K_1)=1$. \item Let $K_1=T(3,4;2,s)$ with $s\ge 2$. Then $K_1$ is an L--space knot (\cite{V}), and twist positive in the sense of \cite{KM}. In the proof of \cite[Theorem 1.3]{KM}, they show that $\mathrm{Ord}(K_1)=2$. By \cite{L1,L2}, $K_1$ is hyperbolic. Since $K_1$ has genus $s+3$, distinct choices of $s$ give distinct knots. \item Finally, there are infinitely many hyperbolic L--space knots $\{k_n\}$, defined in \cite[Section 2]{BK}, with $\mathrm{Ord}(k_n)=3$. (See \cite[Proposition 5.1]{KM}.) 
\end{enumerate} \end{proof} \begin{proof}[Proof of Corollary \ref{cor:main2}] Let $K=T(p,kp+1;2,1)$ with $p\ge 5$. Then $K$ is hyperbolic by Proposition \ref{prop:hyp}. After fixing $p$, Theorem \ref{thm:upsilon-torsion} shows that the Upsilon torsion function does not depend on $k$. \end{proof} \bibliographystyle{amsplain} \begin{thebibliography}{32} \bibitem{AL} S. Allen and C. Livingston, \textit{An Upsilon torsion function for knot Floer homology}, preprint. \texttt{arXiv:2208.04768}. \bibitem{BK} K. Baker and M. Kegel, \textit{Census L--space knots are braid positive, except for one that is not}, Algebr. Geom. Topol. \textbf{24} (2024), no. 1, 569--586. \bibitem{GM} S. Gong and M. Marengon, \textit{Nonorientable link cobordisms and torsion order in Floer homologies}, Algebr. Geom. Topol. \textbf{23} (2023), no. 6, 2627--2672. \bibitem{HKP} J. Hom, S. Kang and J. Park, \textit{Ribbon knots, cabling, and handle decompositions}, Math. Res. Lett. \textbf{28} (2021), no. 5, 1441--1457. \bibitem{I} T. Ito, \textit{Satellite fully positive braid links are braided satellite of fully positive braid links}, preprint. \texttt{arXiv:2402.01129}. \bibitem{JMZ} A. Juh\'{a}sz, M. Miller and I. Zemke, \textit{Knot cobordisms, bridge index, and torsion in Floer homology}, J. Topology \textbf{13} (2020), no. 4, 1701--1724. \bibitem{KM} S. Krishna and H. Morton, \textit{Twist positivity, L--space knots, and concordance}, preprint. \texttt{arXiv:2211.17109}. \bibitem{JL} J.~H. Lee, \textit{Twisted torus knots $T(p,q;3,s)$ are tunnel number one}, J. Knot Theory Ramifications {\bf 20} (2011), no.~6, 807--811. \bibitem{L0} S. Lee, \textit{Knot types of twisted torus knots}, J. Knot Theory Ramifications \textbf{26} (2017), no. 12, 1750074 (7 pages). \bibitem{L1} S. Lee, \textit{Satellite knots obtained by twisting torus knots: hyperbolicity of twisted torus knots}, Int. Math. Res. Not. IMRN (2018), no. 3, 785--815. \bibitem{L2} S.
Lee, \textit{Positively twisted torus knots which are torus knots}, J. Knot Theory Ramifications \textbf{28} (2019), no. 3, 1950023, 13 pp. \bibitem{L3} S. Lee, \textit{Cable knots obtained by positively twisting torus knots}, J. Knot Theory Ramifications \textbf{32} (2023), no. 3, Paper No. 2350018, 15 pp. \bibitem{L} C. Livingston, \textit{Notes on the knot concordance invariant upsilon}, Algebr. Geom. Topol. \textbf{17} (2017), no. 1, 111--130. \bibitem{MS} K. Morimoto and M. Sakuma, \textit{On unknotting tunnels for knots}, Math. Ann. \textbf{289} (1991), no. 1, 143--167. \bibitem{Mo} H. Morton, \textit{The Alexander polynomial of a torus knot with twists}, J. Knot Theory Ramifications \textbf{15} (2006), no.8, 1037--1047. \bibitem{OSS} P. Ozsv\'{a}th, A. Stipsicz and Z. Szab\'{o}, \textit{Unoriented knot Floer homology and the unoriented four-ball genus}, Int. Math. Res. Not. IMRN (2017), no.17, 5137--5181. \bibitem{OSS2} P. Ozsv\'{a}th, A. Stipsicz and Z. Szab\'{o}, \textit{Grid homology for knots and links}, Mathematical Surveys and Monographs, vol. 208, American Mathematical Society, Providence, RI, 2015. \bibitem{OS0} P. Ozsv\'{a}th and Z. Szab\'{o}, \textit{Holomorphic disks and genus bounds}, Geom. Topol. \textbf{8} (2004), 311--334. \bibitem{OS} P. Ozsv\'{a}th and Z. Szab\'{o}, \textit{On knot Floer homology and lens space surgeries}, Topology \textbf{44} (2005), 1281--1300. \bibitem{S} J. R. Stallings, \textit{ Constructions of fibred knots and links}, Proc. Sympos. Pure Math., XXXII American Mathematical Society, Providence, RI, 1978, pp. 55--60. \bibitem{V} F. Vafaee, \textit{On the knot Floer homology of twisted torus knots}, Int. Math. Res. Not. IMRN (2015), no. 15, 6516--6537. \end{thebibliography} \end{document}
2412.20642v2
http://arxiv.org/abs/2412.20642v2
Direct and inverse spectral problems for the Schrodinger operator with double generalized Regge boundary conditions
\documentclass[reqno,12pt,centertags]{article} \usepackage{amsmath,amsthm,amscd,amssymb,latexsym,upref,stmaryrd} \usepackage[cp1251]{inputenc} \usepackage[english]{babel} \usepackage{mathtext} \usepackage{amsfonts} \usepackage{graphicx} \usepackage[numbers,sort&compress]{natbib} \usepackage{amsmath} \newcommand{\highlight}[1]{\textbf{#1}} \numberwithin{equation}{section}\numberwithin{figure}{section} \usepackage{hyperref} \newcommand*{\mailto}[1]{\href{mailto:#1}{\nolinkurl{#1}}} \textwidth=17 cm \textheight=22cm \oddsidemargin=-5mm \evensidemargin=-5mm \mathsurround=2pt \topmargin=0cm \usepackage{pgfplots} \usepackage{amsmath} \usepackage{amsthm} \newtheorem{corollary}{Corollary} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \theoremstyle{proposition} \newtheorem{proposition}{Proposition} \begin{document} \thispagestyle{empty} {\center \large\bf Direct and inverse spectral problems for the Schr\"{o}dinger operator with double generalized Regge boundary conditions} \\[0.5cm] \noindent {\bf Xiao-Chuan Xu}\footnote{School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, 210044, Jiangsu, People's Republic of China, {\it Email: [email protected]}} \noindent {\bf and Yu-Ting Huang}\footnote{School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, 210044, Jiangsu, People's Republic of China, {\it Email: [email protected]}} \\ \noindent{\bf Abstract.} {In this paper, we study the direct and inverse spectral problems for the Schr\"{o}dinger operator with two generalized Regge boundary conditions. For the direct problem, we give the properties of the spectrum, including the asymptotic distribution of the eigenvalues. 
For the inverse problems, we prove several uniqueness theorems, covering the cases of an even potential and two spectra, as well as the general partial inverse problem.} \medskip \noindent {\it Keywords:} Schr\"{o}dinger operator, generalized Regge boundary condition, spectral asymptotics, inverse spectral problem \medskip \noindent {\it 2020 Mathematics Subject Classification:} 34A55; 34L20; 34L40; 34B07 \section{Introduction} Consider the following generalized Regge problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ \begin{equation}\label{k1} \ell y:=-y''+q(x)y=\lambda^2 y ,\quad x\in(0,a), \end{equation} \begin{equation}\label{k2} \quad y'\left(0\right)-\left(i\alpha_{0}\lambda + \beta_{0}\right)y\left(0\right)=0, \end{equation} \begin{equation}\label{k3} \quad y'\left(a\right)+\left(i\alpha\lambda +\beta\right)y\left(a\right)=0, \end{equation} where the complex-valued potential function $q\in L^2(0,a)$, $\alpha_{0}\ge0,\alpha>0$ and $\beta_{0},\beta\in\mathbb{C}$. The problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ arises in various mathematical and physical models. For example, the Liouville transformation can transform the problem of smooth inhomogeneous string vibration with viscous damping at both ends of the string into the problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ with $\alpha,\alpha_0>0$. When $\alpha_{0}=0$, it describes the problem of an inhomogeneous string vibration with no damping at the left end but with various damping at the right end \cite{MV, mv}. The problem $L\left(q,1,0,1,0\right)$ is the resonance scattering problem on the real line \cite{EK,ST,XF,Zo1}. In the direct and inverse spectral theories, the Schr\"{o}dinger operator with boundary conditions independent of the spectral parameter, i.e., the problem $L\left(q,0,\beta_{0},0,\beta\right)$, has been studied rather thoroughly (see the monographs \cite{GV,Bb,V}).
For the inverse spectral problem of $L\left(q,0,\beta_{0},0,\beta\right)$, in general, in order to recover the potential and the parameters in the boundary conditions, one needs to specify two spectra. However, in some cases, one spectrum is enough, such as: (i) the even inverse problem \cite{GV}: $q(x)=q(a-x)$ and $\beta=\beta_0$, and (ii) the half-inverse problem \cite{HB}: $\beta$ is given and $q(x)$ is known a priori on $(\frac{a}{2},a)$. After the half-inverse problem, many authors continued to study the general partial inverse problems, i.e., the potential is given on a subinterval $(b,a)$ with arbitrary $b\in (0,a)$. The uniqueness theorems of the general partial inverse problems were proved in \cite{DGS,FB,M,MH} and other works. One can refer to the review \cite{BP} for a general summary of partial inverse problems. When the boundary conditions contain the spectral parameter $\lambda$, the results change. There are many works related to the Schr\"{o}dinger operator with boundary conditions depending on $\lambda^2$ (see, e.g., \cite{BN1,BN2,FY0,G,G1,WW} and the references therein). The way the spectral parameter enters the boundary condition \eqref{k2} or \eqref{k3} is different from that in the works mentioned above. The boundary condition \eqref{k2} or \eqref{k3} is called the generalized Regge boundary condition. This kind of boundary condition has attracted much attention from many scholars (see, e.g., \cite{BY,BS,WY,mv,RS,VC,XP,x,X,Y}). When there is only one generalized Regge boundary condition, one spectrum uniquely recovers all the unknown coefficients \cite{mv,VC,XP,EK0,RS,x,X,Y}. When there are two generalized Regge boundary conditions, the situation becomes different; in particular, one spectrum is not enough to uniquely recover all the unknown coefficients. As far as we know, few works have considered the two generalized Regge boundary conditions \eqref{k2} and \eqref{k3} directly for the Schr\"{o}dinger operator.
These two boundary conditions, however, were studied by some scholars for the differential pencil (see \cite{BY,BS,WY}) \begin{equation}\label{hx9} -y''+(2\lambda p(x)+q(x))y=\lambda^2 y. \end{equation} In \cite{BY,WY}, the two-spectra theorem is proved for the problem \eqref{k2}-\eqref{hx9}: the spectrum of the problem \eqref{k2}-\eqref{hx9} and the spectrum of the problem \eqref{k3}, \eqref{hx9} and $y(0)=0$ uniquely determine $p$ and $q$ as well as all the unknown parameters in \eqref{k2} and \eqref{k3}. The uniqueness theorem of the half-inverse problem was proved in \cite{BS}. Although \eqref{k1} is a particular case of \eqref{hx9}, the two-spectra theorem in \cite{BY,WY} is not applicable to the problem $L\left(q,\alpha_0,\beta_{0},\alpha,\beta\right)$ considered here, because the spectrum of the problem \eqref{k3}, \eqref{hx9} and $y(0)=0$ with $p=0$ is already enough to uniquely determine all the unknowns \cite{VC,X}. Hence, the two-spectra theorem for the problem $L\left(q,\alpha_0,\beta_{0},\alpha,\beta\right)$ should be reformulated and proved. Moreover, there is no result for the even inverse problem or the general partial inverse problem of the problem $L\left(q,\alpha_0,\beta_{0},\alpha,\beta\right)$ with $(1-\alpha)(\alpha_0-1)\ne0$; these are also interesting and important issues that should be investigated. In this paper, we study the direct and inverse spectral problems for the boundary value problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$. For the direct problem, we give some properties of the eigenvalues, in particular, the asymptotic behavior. For the inverse spectral problem, we prove four uniqueness theorems, including the two-spectra theorem, the even inverse problem, as well as the general partial inverse problem. This paper is structured as follows. In the second section, we introduce three characteristic functions.
In the third section, we investigate the asymptotic distribution of the eigenvalues and the fundamental properties of the eigenvalues in the lower half-plane. In the last section, we consider the inverse problems and prove the uniqueness theorems. \section{Characteristic functions} In this section, we introduce three characteristic functions. Let $s\left(\lambda,x\right)$ and $c\left(\lambda,x\right)$ be the solutions of the initial value problems for \eqref{k1} with the initial conditions $s\left(\lambda,0\right)=0$, $s'\left(\lambda,0\right)=1$ and $c\left(\lambda,0\right)=1$, $c'\left(\lambda,0\right)=\beta_{0}$, respectively. It was shown in \cite[p.9]{V} that \begin{equation}\label{k4} s\left(\lambda,x\right)=\frac{\sin{\lambda x}}{\lambda}+\int_{0}^{x}K_{1}\left(x,t\right)\frac{\sin{\lambda t}}{\lambda}dt, \end{equation} \begin{equation}\label{k5} c\left(\lambda,x\right)=\cos{\lambda x}+\int_{0}^{x}G\left(x,t,\beta_{0}\right)\cos{\lambda t}dt, \end{equation} where \begin{equation}\label{k6} K_{1}\left(x,t\right)=K\left(x,t\right)-K\left(x,-t\right), \end{equation} \begin{equation} G\left(x,t,\beta_{0}\right)=\beta_{0}+K\left(x,t\right)+K\left(x,-t\right)+\beta_{0}\int_{t}^{x}[K\left(x,\xi\right)-K\left(x,-\xi\right)]d\xi, \end{equation} and $K\left(x,t\right)$ is the unique solution of the integral equation \begin{equation} \notag K\left(x,t\right)=\frac{1}{2}\int_{0}^{\frac{x+t}{2}}q(s)ds+ \int_{0}^{\frac{x+t}{2}}\int_{0}^{\frac{x-t}{2}}q\left(s+v\right)K\left(s+v,s-v\right)dv ds, \end{equation} in the region $\left\{\left(x,t\right)\in (0,a)\times(0,a):\lvert t \rvert\leq x \right\}$. Moreover, \begin{equation}\label{mza} G\left(x,x,\beta_0\right)=\beta_0+K_{1}\left(x,x\right),\quad K_{1}\left(x,x\right)=K\left(x,x\right)=\frac{1}{2}\int_{0}^{x}q\left(t\right)dt. \end{equation} Let \begin{equation}\label{hx4} y\left(\lambda,x\right)=c\left(\lambda,x\right)+i\alpha_{0}\lambda s\left(\lambda,x\right).
\end{equation} Then \begin{equation}\label{r2} y\left(\lambda,x\right)=\cos{\lambda x}+i\alpha_{0}\sin{\lambda x}+G(x,x,\beta_{0})\frac{\sin{\lambda x}}{\lambda}-i\alpha_{0}K_{1}\left(x,x\right)\frac{\cos{\lambda x}}{\lambda}+\frac{\Psi_{1}\left(\lambda,x\right)}{\lambda}, \end{equation} and \begin{equation}\label{k8} y'\left(\lambda,x\right)=-\lambda \sin{\lambda x}+\left[G(x,x,\beta_{0})+i\alpha_{0}\lambda\right]\cos{\lambda x}+i\alpha_{0}K_{1}\left(x,x\right)\sin{\lambda x}+\Psi_{2}\left(\lambda,x\right), \end{equation} where \begin{equation}\label{hx} \Psi_{1}\left(\lambda,x\right)=-\int_{0}^{x}G_{t}\left(x,t,\beta_{0}\right)\sin{\lambda t}dt+i\alpha_{0}\int_{0}^{x}K_{1t}\left(x,t\right)\cos{\lambda t}dt, \end{equation} and \begin{equation} \Psi_{2}\left(\lambda,x\right)=\int_{0}^{x}G_{x}\left(x,t,\beta_{0}\right)\cos{\lambda t}dt+i\alpha_{0}\int_{0}^{x}K_{1x}\left(x,t\right)\sin{\lambda t}dt. \end{equation} The spectrum of the problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ coincides with the set of zeros of the entire function \begin{equation}\label{k9} \Delta_+\left(\lambda\right)=y'\left(\lambda,a\right)+\left(i\alpha\lambda+\beta\right)y\left(\lambda,a\right), \end{equation} which is called the \emph{characteristic function} of the problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right).$ Denote $\left<y,z\right>=yz'-y'z$. It is easy to show that if $y$ and $z$ are solutions of \eqref{k1}, then $\left<y,z\right>$ is independent of $x$. Let $\phi\left(\lambda,x\right)$ be the solution of \eqref{k1} with the initial conditions $\phi\left(\lambda,a\right)=1$ and $\phi'\left(\lambda,a\right)=-\left(i\alpha\lambda+\beta\right).$ Then we have \begin{equation}\label{e10} \Delta_+\left(\lambda\right)=\left<\phi\left(\lambda,x\right),y\left(\lambda,x\right)\right>. \end{equation} Define \begin{equation}\label{e15} \Delta_{-}\left(\lambda\right)=y'\left(\lambda,a\right)-\left(i\alpha\lambda-\beta\right)y\left(\lambda,a\right).
\end{equation} Obviously, $\Delta_{-}\left(\lambda\right)$ is the \emph{characteristic function} of the problem $L\left(q,\alpha_{0},\beta_{0},-\alpha,\beta\right)$, i.e., the equation \eqref{k1} with the boundary conditions \eqref{k2} and $$y'\left(\lambda,a\right)-\left(i\alpha\lambda-\beta\right)y\left(\lambda,a\right)=0.$$ Using \eqref{k9} and \eqref{e15}, together with the observation that $\left<y\left(\lambda,\cdot\right),y\left(-\lambda,\cdot\right)\right>$ is independent of $x$ and equals $-2i\alpha_{0}\lambda$ at $x=0$, it is easy to verify \begin{equation}\label{e7} \Delta_+\left(\lambda\right)\Delta_+\left(-\lambda\right)-\Delta_{-}\left(\lambda\right)\Delta_{-}\left(-\lambda\right)=4\alpha\alpha_{0}\lambda^2. \end{equation} We also consider the function \begin{equation}\label{o30} \Delta_{0}(\lambda)=y\left(\lambda,a\right). \end{equation} It is easy to show that the function $\Delta_{0}(\lambda)$ is the \emph{characteristic function} of the following problem \begin{equation}\label{hx12} -y''+q_{1}(x)y=\lambda^2 y ,\quad x\in(0,a), \end{equation} \begin{equation}\label{hx13} y(0)=0, \end{equation} \begin{equation}\label{hx14} y'(a)+(i\alpha_{0}\lambda+\beta_{0})y(a)=0, \end{equation} where $q_{1}(x)=q(a-x)$. \section{Properties of eigenvalues} In this section, we investigate the properties of the spectrum of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$. Let us first give the asymptotic distribution of the eigenvalues.
\begin{theorem}\label{th1} $\left(1\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)>0$, then the eigenvalues $\{\lambda_{k}^\pm\}_{k\in\mathbb{Z}}$ of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$, respectively, have the following asymptotic behavior: \begin{equation}\label{k11} \begin{aligned} \lambda_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn} k+\frac{i P_0^\pm}{2a}+\frac{P}{k}+\frac{\beta_{k}}{k}, \quad \lvert k \rvert \to \infty, \end{aligned} \end{equation} where $\{\beta_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$ and where \begin{equation}\label{k10} \begin{aligned} P_0^\pm=\ln{\left(\frac{|\alpha_{0}\pm\alpha+1\pm\alpha\alpha_{0}|}{|\alpha_{0}\pm\alpha-(1\pm\alpha\alpha_{0})|}\right)},\quad P=\frac{\beta_0}{\pi\left(1-\alpha_{0}^2\right)}+\frac{\beta}{\pi\left(1-\alpha^2\right)}+\frac{K_{1}\left(a,a\right)}{\pi}. \end{aligned} \end{equation} $\left(2\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)<0$, then the eigenvalues $\{\lambda_{k}^\pm\}_{k\in\mathbb{Z}_{0}}$, $\mathbb{Z}_{0}=\mathbb{Z}\setminus\{0\}$, of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$, respectively, have the following asymptotic behavior: \begin{equation}\label{k12} \begin{aligned} \lambda_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn} k+\frac{iP_0^\pm}{2a}+\frac{P}{k}+\frac{\beta_{k}}{k}, \quad \lvert k \rvert \to \infty, \end{aligned} \end{equation} where $P_0^\pm$ and $P$ are given in \eqref{k10} and $\{\beta_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$.\\ $\left(3\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)=0$, then the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$ may have only finitely many eigenvalues. If there exist infinitely many eigenvalues $\{\lambda_{k}^+\}$, then ${\rm Im}\,\lambda_{k}^+ \to \infty$ as $\lvert k\rvert\to \infty$.
\end{theorem} \begin{proof} Using \eqref{r2}-\eqref{k8} we obtain \begin{align}\label{c16} \Delta_\pm\left(\lambda\right)=&\lambda[i\left(\alpha_{0}\pm\alpha\right)\cos{\lambda a}-\left(1\pm\alpha\alpha_{0}\right)\sin{\lambda a}]+i[\alpha_{0}\beta\pm\alpha \omega+\alpha_{0} K_{1}\left(a,a\right)]\sin{\lambda a}\\ \notag &+[\omega+\beta\pm\alpha_{0}\alpha K_{1}\left(a,a\right)]\cos{\lambda a}+\psi_\pm\left(\lambda\right), \end{align} where $\omega=G(a,a,\beta_{0})$ and \begin{equation}\label{o80} \psi_\pm\left(\lambda\right)=\omega\beta\frac{\sin{\lambda a}}{\lambda}-i\beta\alpha_{0}K_{1}(a,a)\frac{\cos{\lambda a}}{\lambda}+(\beta \pm i\alpha\lambda)\frac{\Psi_{1}\left(\lambda,a\right)}{\lambda}+\Psi_{2}\left(\lambda,a\right). \end{equation} It is obvious that the functions $\psi_\pm(\lambda)$ are entire functions of exponential type $\le a$ and belong to $L^2(-\infty,\infty)$. Rewrite \eqref{c16} as \begin{align}\label{k13} \notag -i\Delta_\pm\left(\lambda\right)=&\lambda[\left(\alpha_{0}\pm\alpha\right)\cos{\lambda a}+i\left(1\pm\alpha\alpha_{0}\right)\sin{\lambda a}]+[\alpha_{0}\beta\pm\alpha \omega+\alpha_{0} K_{1}\left(a,a\right)]\sin{\lambda a}\\ &-i[\omega+\beta\pm\alpha_{0}\alpha K_{1}\left(a,a\right)]\cos{\lambda a}-i\psi_\pm\left(\lambda\right). \end{align} The right hand side of \eqref{k13} is of the form \eqref{A.7} in Lemma \ref{LA2} with $M^\pm=\alpha_{0}\beta\pm\alpha \omega+\alpha_{0} K_{1}\left(a,a\right)$ and $N^\pm=\omega+\beta\pm\alpha_{0}\alpha K_{1}\left(a,a\right)$, and the number $P$ defined in \eqref{A.9} becomes \begin{equation} P^\pm=\frac{\left(1\pm\alpha_{0}\alpha\right)N^\pm-\left(\alpha_{0}\pm\alpha\right)M^\pm}{\pi[\left(1\pm\alpha_{0}\alpha\right)^2-\left(\alpha_{0}\pm\alpha\right)^2]} =\frac{(1-\alpha^2)\omega+(1-\alpha_0^2)\beta+(\alpha^2-1)\alpha_0^2K_1(a,a)}{\pi(1-\alpha_0^2)(1-\alpha^2)}. \end{equation} Using Lemma \ref{LA2} and noting $\omega=\beta_0+K_1(a,a)$, we obtain \eqref{k11}-\eqref{k12}.
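Indeed, substituting $\omega=\beta_0+K_1(a,a)$ into the above expression for $P^\pm$ gives
\[
P^\pm=\frac{(1-\alpha^2)\beta_0+(1-\alpha_0^2)\beta+(1-\alpha^2)(1-\alpha_0^2)K_1(a,a)}{\pi(1-\alpha_0^2)(1-\alpha^2)} =\frac{\beta_0}{\pi\left(1-\alpha_{0}^2\right)}+\frac{\beta}{\pi\left(1-\alpha^2\right)}+\frac{K_{1}(a,a)}{\pi}=P,
\]
which is independent of the choice of sign and agrees with \eqref{k10}.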
When $\left(\alpha_{0}-1\right)\left(1-\alpha\right)=0$, we have $\alpha_0=1$ or $\alpha=1$. In the case $\alpha_{0}=1$, we have \begin{equation}\label{o26} \Delta_\pm(\lambda)\!=\!\left[\!(1\pm\alpha)\!\left(i\lambda+\frac{\omega}{2}+\frac{K_{1}(a,a)}{2}\!\right)\!+\beta\!\right]e^{i\lambda a}+\frac{1\mp\alpha}{e^{i\lambda a}}\!\left[\frac{\omega}{2}-\frac{K_{1}(a,a)}{2}\right]\!+\psi_\pm(\lambda). \end{equation} When $q=0$ and $\beta_{0}=0$ in \eqref{o26}, we have \begin{equation} \Delta_\pm(\lambda)=[i\lambda(1\pm\alpha)+\beta]e^{i\lambda a}. \end{equation} This implies that the unique zero of $\Delta_+(\lambda)$ is $\frac{i\beta}{1+\alpha}$, and that $\Delta_-(\lambda)$ also has the unique zero $\frac{i\beta}{1-\alpha}$ if $\alpha\ne1$. If $\Delta_+(\lambda)$ has infinitely many zeros $\{\lambda_{k}^+\}$, then $\lvert\lambda_{k}^+\rvert\to\infty$ as $\lvert k\rvert\to\infty.$ Substituting $\lambda=\lambda_{k}^+$ into \eqref{o26} and noting that $\psi_+(\lambda_k^+)=o(e^{|{\rm Im}\lambda_k^+|a})$, we have \begin{equation}\label{o27} (1+\alpha)\!\!\left[i\lambda_{k}^++\frac{\omega}{2}+\frac{K_{1}(a,a)}{2}\right]+\beta\!=\!\frac{1-\alpha}{e^{2i\lambda_{k}^+ a}} \!\left[\!\frac{K_{1}(a,a)}{2}-\frac{\omega}{2}\!\right]\!\!+o(1)e^{\left(\lvert {\rm Im}\lambda_{k}^+\rvert+{\rm Im}\lambda_{k}^+\right)a}. \end{equation} If ${\rm Im}\lambda_{k}^+$ is bounded or tends to $-\infty$ as $\lvert k\rvert\to\infty$, then the left-hand side of \eqref{o27} is unbounded while the right-hand side is bounded, which is a contradiction. This shows that $\limsup_{\lvert k\rvert\to\infty} {\rm Im}\lambda_{k}^+=\infty.$ Moreover, if the left-hand side of \eqref{o27} tends to infinity, then so does the right-hand side, which forces $2{\rm Im}\lambda_{k}^+\to\infty$ or $\lvert {\rm Im}\lambda_{k}^+\rvert+{\rm Im}\lambda_{k}^+\to\infty$ as $\lvert k\rvert\to\infty$; in either case ${\rm Im}\lambda_{k}^+\to\infty$. The case $\alpha=1$ is treated similarly. The proof is complete.
\end{proof} \begin{theorem}\label{th2x} (1) There are at most finitely many eigenvalues of $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ lying in the closed lower half-plane. If $(\alpha_0-1)(1-\alpha)\ne0$ and $(\alpha_0+1)|\alpha-1|>(\alpha+1)|\alpha_0-1|$ ($(\alpha_0+1)|\alpha-1|<(\alpha+1)|\alpha_0-1|$), then there are at most finitely many eigenvalues of $L\left(q,\alpha_{0},\beta_{0},-\alpha,\beta\right)$ lying in the closed lower (upper) half-plane. \\ (2) If $q(x)$ is a real-valued function and $\beta,\beta_{0}\in\mathbb{R}$, then the eigenvalues of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$ have the following properties:\\ (i) If $0$ is an eigenvalue of the problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$, then it is simple.\\ (ii) If $\lambda$ is an eigenvalue of the problem $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$, then $-\overline{\lambda}$ is also an eigenvalue, where $\overline{\lambda}$ is the complex conjugate of $\lambda$.\\ (iii) All nonzero eigenvalues of the problem $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ in the lower half-plane lie on the negative imaginary semiaxis $i\mathbb{R}_-$ and are simple.\\ (iv) Assume that there are $\kappa$ eigenvalues of $L\left(q,\alpha_{0},\beta_{0},\alpha,\beta\right)$ on $i\mathbb{R}_-$, denoted by $\lambda_{-j}=-i\lvert \lambda_{-j} \rvert$, $j=1,\ldots,\kappa$, satisfying $\lvert \lambda_{-j} \rvert<\lvert \lambda_{-(j+1)}\rvert$. Then $$i\dot{\Delta}_+\left(-i\lvert\lambda_{-j}\rvert\right)(-1)^{\kappa-j}<0,\quad \Delta_{0}\left(-i\lvert\lambda_{-j}\rvert\right)(-1)^{\kappa-j}>0,\quad j=1,\ldots,\kappa,$$ where $\dot{\Delta}_+(\lambda)=\frac{d {\Delta_+}(\lambda)}{d\lambda}$. Moreover, the zeros of $\Delta_{0}(\lambda)$ and $\Delta_+(\lambda)$ on $i\mathbb{R}_-$ interlace each other.
\end{theorem} \begin{proof} (1) From Theorem \ref{th1}, it is easy to see that there are at most finitely many eigenvalues $\{\lambda_n^+\}$ lying in the closed lower half-plane. Assume that $(\alpha_0-1)(1-\alpha)\ne0$. (i) If $(\alpha_0+1)|\alpha-1|>(\alpha+1)|\alpha_0-1|$, then $\frac{|\alpha_{0}-\alpha+1-\alpha\alpha_{0}|}{|\alpha_{0}-\alpha-(1-\alpha\alpha_{0})|}>1$, which implies $P_0^->0$; (ii) if $(\alpha_0+1)|\alpha-1|<(\alpha+1)|\alpha_0-1|$, then $\frac{|\alpha_{0}-\alpha+1-\alpha\alpha_{0}|}{|\alpha_{0}-\alpha-(1-\alpha\alpha_{0})|}\in (0,1)$, which implies $P_0^-<0$. (2) (i) Differentiating \eqref{k1} with respect to $\lambda$ gives \begin{equation}\label{e12} -\dot{y}''\left(\lambda,x\right)+q\left(x\right)\dot{y}\left(\lambda,x\right)=2\lambda y\left(\lambda,x\right)+\lambda^2\dot{y}\left(\lambda,x\right). \end{equation} Multiplying \eqref{k1} and \eqref{e12} by $\dot{y}$ and $y$, respectively, and taking the difference, we get \begin{equation}\label{e5s} 2\lambda y^2\left(\lambda,x\right) =\left[y'\left(\lambda,x\right)\dot{y}\left(\lambda,x\right)-\dot{y}'\left(\lambda,x\right)y\left(\lambda,x\right)\right]'. \end{equation} Integrating both sides of \eqref{e5s} over $(0,a)$ and using \eqref{k2}, we have \begin{align} \notag 2\lambda\int_{0}^{a}y^2\left(\lambda,x\right)dx&=\int_{0}^{a}\left[y'\left(\lambda,x\right)\dot{y}\left(\lambda,x\right)-\dot{y}'\left(\lambda,x\right)y\left(\lambda,x\right)\right]'dx\\ &\notag=y'\left(\lambda,a\right)\dot{\Delta}_{0}\left(\lambda\right)-\dot{y}'\left(\lambda,a\right)\Delta_{0}\left(\lambda\right)+i\alpha_{0}\\ &\label{o1}=\Delta_+\left(\lambda\right)\dot{\Delta}_{0}\left(\lambda\right)-\dot{\Delta}_+\left(\lambda\right)\Delta_{0}\left(\lambda\right)+i\alpha \Delta_{0}^{2}(\lambda)+i\alpha_{0}.
\end{align} Letting $\lambda=0$ in \eqref{o1} and using $\Delta_+(0)=0$, we have \begin{equation} \dot{\Delta}_+\left(0\right)\Delta_{0}\left(0\right)=i\alpha \Delta_{0}^{2}(0)+i\alpha_{0}, \end{equation} which implies \begin{equation} -i\dot{\Delta}_+\left(0\right)\Delta_{0}\left(0\right)=\alpha \Delta_{0}^{2}(0)+\alpha_{0}>0. \end{equation} Hence, if $0$ is an eigenvalue, then it is simple. (ii) Note that \begin{equation*} \overline{ c^{(\nu)}\left(\lambda,x\right)}=c^{(\nu)}\left(-\bar{\lambda},x\right), \quad \overline{ s^{(\nu)}\left(\lambda,x\right)}=s^{(\nu)}\left(-\bar{\lambda},x\right),\quad \nu=0,1. \end{equation*} It follows from \eqref{hx4} that \begin{align}\label{k16} y^{(\nu)}\left(-\overline{\lambda},x\right)= \overline{y^{(\nu)}\left(\lambda,x\right)},\quad \nu=0,1, \end{align} which implies from \eqref{k9} and \eqref{e15} that \begin{equation}\label{m1s} \overline{\Delta_\pm \left(\lambda\right)}=\Delta_\pm\left(-\overline{\lambda}\right). \end{equation} The assertion in (ii) follows from \eqref{m1s}. (iii) Assume that there is an eigenvalue $\lambda_{0}=\sigma-i\tau$ $\left(\tau>0\right)$, i.e., $\Delta_+\left(\lambda_{0}\right)=0$.
Using the initial condition of $y(\lambda_0,x)$ and integration by parts, we calculate \begin{align}\label{e1} \notag &\lambda_{0}^2\int_{0}^{a}\lvert y\left(\lambda_{0},x\right)\rvert^2dx= \int_{0}^{a}\ell y(\lambda_0,x)y\left(-\overline{\lambda}_{0},x\right)dx\\ \notag= &i\alpha\left(\lambda_{0}+\overline{\lambda}_{0}\right)\lvert y\left(\lambda_{0},a\right)\rvert^2+i\alpha_0\left(\lambda_{0}+\overline{\lambda}_{0}\right)+ \int_{0}^{a} y(\lambda_0,x)\ell y\left(-\overline{\lambda}_{0},x\right)dx\\ =& 2i\sigma\left[\alpha\lvert y(\lambda_{0},a)\rvert^2+\alpha_{0}\right]+ \notag \overline{\lambda}_{0}^2\int_{0}^{a}\lvert y\left(\lambda_{0},x\right)\rvert^2dx, \end{align} which implies \begin{equation}\label{e3} - 2\sigma\tau\int_{0}^{a}\lvert y\left(\lambda_{0},x\right)\rvert^2dx=\alpha\sigma\lvert y(\lambda_{0},a)\rvert^2+\sigma\alpha_{0}. \end{equation} From \eqref{e3} we see that $\tau<0$ if $\sigma\neq0$, which contradicts the fact that $\tau>0$. Thus $\sigma=0$. Next, we verify that the eigenvalue $\lambda_{0}=-i\tau$ $\left(\tau>0\right)$ is simple. Letting $\lambda=\lambda_{0}=-i\tau$ in \eqref{o1} and using $\Delta_+\left(-i\tau\right)=0$, we have \begin{align} \dot{\Delta}_+\left(-i\tau\right)\Delta_{0}\left(-i\tau\right)=2i\tau\int_{0}^{a}y^2\left(-i\tau ,x\right)dx+i\alpha \Delta_{0}^{2}(-i\tau)+i\alpha_{0}, \end{align} which implies \begin{equation}\label{o2} -i\dot{\Delta}_+\left(-i\tau\right)\Delta_{0}\left(-i\tau\right)>0. \end{equation} Therefore, if $\lambda_{0}$ is a zero of $\Delta_+(\lambda)$ on the negative imaginary semiaxis, then it is simple. (iv) Note that \begin{equation}\label{m1} \cos({-i\tau a})=\frac{e^{-\tau a}+e^{\tau a}}{2},\quad \sin({-i\tau a})=\frac{-e^{-\tau a}+e^{\tau a}}{2i}.
\end{equation} Substituting \eqref{m1} into \eqref{c16}, we have \begin{align} \Delta_+(-i\tau)\notag=&\left[\left(\alpha+\alpha_{0}\right)\frac{e^{-\tau a}+e^{\tau a}}{2}+\left(1+\alpha\alpha_{0}\right)\frac{-e^{-\tau a}+e^{\tau a}}{2}\right]\tau\\&\notag+\left[\omega+\beta+\alpha\alpha_{0}K_{1}\left(a,a\right)\right]\frac{e^{-\tau a}+e^{\tau a}}{2}\\ \notag &+\left[\alpha_{0}\beta+\alpha\omega+\alpha_{0}K_{1}\left(a,a\right)\right]\frac{-e^{-\tau a}+e^{\tau a}}{2}+O\left(\frac{e^{\tau a}}{\tau}\right). \end{align} It follows that \begin{equation}\label{q9} \Delta_+(-i\tau)\to +\infty, \quad \tau\to +\infty. \end{equation} Note that $\Delta_+(\lambda)$ has only simple zeros on the negative imaginary semiaxis. It follows from \eqref{q9} that $-i\dot{\Delta}_+\left(-i\lvert\lambda_{-\kappa}\rvert\right)>0$. Consequently, we have \begin{equation} \notag -i\dot{\Delta}_+\left(-i\lvert\lambda_{-j}\rvert\right)(-1)^{\kappa-j}>0. \end{equation} Together with \eqref{o2}, we have $ \Delta_{0}\left(-i\lvert\lambda_{-j}\rvert\right)(-1)^{\kappa-j}>0.$ Substituting $\lambda=-i\tau$ into \eqref{o1}, we obtain \begin{align} \notag & \Delta_+\left(-i\tau \right)\frac{\mathrm{d}}{\mathrm{d}(-i\tau)}\Delta_{0}\left(-i\tau\right)-\Delta_{0}\left(-i\tau\right)\frac{\mathrm{d}}{\mathrm{d} (-i\tau)}\Delta_+\left(-i\tau\right)\\&=-2i\tau\int_{0}^{a}\lvert y\left( -i\tau,x\right)\rvert^2dx-i\alpha \Delta_{0}^{2}(-i\tau)-i\alpha_{0}.
\end{align} Consequently, since $\frac{\mathrm{d}}{\mathrm{d}\tau}f(-i\tau)=-i\dot{f}(-i\tau)$, \begin{align} \notag \frac{\mathrm{d}}{\mathrm{d}\tau }\left(\frac{\Delta_{0}\left(-i\tau\right)}{\Delta_+\left(-i\tau\right)}\right)&=-\frac{1}{\Delta_+^2\left(-i\tau\right)}\left[2\tau\int_{0}^{a}\lvert y\left(-i\tau,x\right)\rvert^2dx+\alpha\Delta_{0}^{2}(-i\tau)+\alpha_{0}\right]<0, \end{align} thus the function $\frac{\Delta_{0}\left(-i\tau\right)}{\Delta_+\left(-i\tau\right)}$ is monotonically decreasing for $\tau\in\mathbb{R}^{+}\setminus\left\{ \lvert \lambda_{-j} \rvert \,|\, j=1,\ldots,\kappa\right\}$ and, in view of \eqref{o2}, \begin{align*} &\lim_{\tau\to \lvert \lambda_{-j} \rvert^\pm}\frac{\Delta_{0}\left(-i\tau\right)}{\Delta_+\left(-i\tau\right)}=\pm\infty,\quad j=2,\dots,\kappa-1,\\ &\lim_{\tau\to \lvert \lambda_{-1} \rvert^{+}}\frac{\Delta_{0}\left(-i\tau\right)}{\Delta_+\left(-i\tau\right)}=+\infty,\quad \lim_{\tau\to \lvert \lambda_{-\kappa} \rvert^{-}}\frac{\Delta_{0}\left(-i\tau\right)}{\Delta_+\left(-i\tau\right)}=-\infty. \end{align*} Hence, $\Delta_{0}\left(-i\lvert \lambda_{-j} \rvert\right)\Delta_{0}\left(-i\lvert \lambda_{-(j+1)} \rvert\right)<0$. It follows that the function $\Delta_{0}(\lambda)$ has a zero in each interval $\left(-i\lvert \lambda_{-(j+1)} \rvert,-i\lvert \lambda_{-j} \rvert\right)$, and hence the zeros of $\Delta_+(\lambda)$ and $\Delta_{0}(\lambda)$ interlace on the negative imaginary semiaxis. The proof is complete. \end{proof} \section{Inverse problems} In this section, we study inverse spectral problems. We consider the even inverse problem, the two-spectra theorem and the general partial inverse problem, and prove four uniqueness theorems. \begin{lemma}\label{hx11} Let $f(z)$ be an entire function of exponential type with the asymptotics \begin{equation}\label{hx10} f(z)=z[c_1\cos z +c_2\sin z ]+O(e^{|{\rm Im}z|}),\quad |z|\to \infty, \end{equation} where $c_1$ and $c_2$ are constants.
If $|c_1|^2+|c_2|^2\ne0$ and some non-zero $c_0\in \{c_1,c_2,c_1+c_2,c_1-c_2\}$ is known, then $f(z)$ is uniquely determined by all its zeros (including multiplicity). \end{lemma} \begin{proof} Let $\{z_n\}$ (counted with multiplicities) be the non-zero zeros of the function $f(z)$. Hadamard's factorization theorem implies \begin{equation}\label{3.0} f(z)=ce^{bz}E(z),\quad E(z):=z^{s}\prod_{z_n\ne0}\left(1-\frac{z}{z_n}\right)e^{\frac{z}{z_n}}, \end{equation} where $b$ and $c$ are constants to be determined and $s\ge 0$ is the multiplicity of $z=0$ as a zero of $f(z)$. Without loss of generality, assume that $c_1\ne 0$ is known. Then from \eqref{hx10} we have \begin{equation*} \frac{f(2n\pi)}{2n\pi}=c_1(1+o(1)),\quad n\to\infty, \end{equation*} which implies from \eqref{3.0} that \begin{equation}\label{3.0s} b=-\lim_{n\to\infty} \frac{\ln E(2n\pi)}{2n\pi}, \quad c= c_1\left[\lim_{n\to\infty}\frac{e^{2n\pi b}E(2n \pi)}{2n\pi}\right]^{-1}. \end{equation} It follows from \eqref{3.0} and \eqref{3.0s} that all zeros of $f(z)$ uniquely determine $f(z)$. The proof is complete. \end{proof} In the following theorem, under the assumption that $0$ is not an eigenvalue, we prove a uniqueness result for the even inverse problem for the problem $L(q,\alpha,\beta,\alpha,\beta)$. \begin{theorem} Assume that $\Delta_+\left(0\right)\neq0$, $q\left(x\right)=q\left(a-x\right)$, $\alpha=\alpha_{0}$ and $\beta=\beta_{0}$. Then all the eigenvalues $\left\{\lambda_{k}^+\right\}$ (including multiplicity) uniquely determine $\alpha$, $\beta$ and $q(x)$ a.e. on $(0,a)$. \end{theorem} \begin{proof} Using Theorem \ref{th1} and $\alpha=\alpha_{0}$, we know that $\frac{2\alpha}{1+\alpha^2}$ is uniquely determined. Indeed, we can first determine whether $\alpha=1$ or not by observing the number and the imaginary parts of the eigenvalues $\left\{\lambda_{k}^+\right\}$. If $\alpha\ne1$, then we can use the asymptotics of the eigenvalues to recover $\frac{2\alpha}{1+\alpha^2}$.
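To make this step explicit: setting $\alpha_0=\alpha$ in \eqref{k10} gives
\begin{equation*}
e^{P_0^+}=\frac{(1+\alpha)^2}{(1-\alpha)^2}=\frac{1+s}{1-s},\qquad s:=\frac{2\alpha}{1+\alpha^2},
\end{equation*}
and $P_0^+$ can be read off from the asymptotics \eqref{k11} (or \eqref{k12}), so $s=\frac{2\alpha}{1+\alpha^2}$ is uniquely determined.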
Using Lemma \ref{hx11} and \eqref{c16}, whether $\alpha=1$ or not, we obtain that the function $\frac{\Delta_+(\lambda)}{1+\alpha^2}$ is uniquely determined by its zeros. Using \eqref{k9}, \eqref{e15} and \eqref{o30}, we have \begin{equation}\label{o31} \frac{\Delta_+(\lambda)}{1+\alpha^2}-\frac{\Delta_{-}(\lambda)}{1+\alpha^2}=\frac{2i\alpha\lambda\Delta_{0}(\lambda)}{1+\alpha^2}. \end{equation} It is known \cite{X} that all zeros of $\Delta_{0}(\lambda)$ uniquely determine $q(x)$, $\alpha$ and $\beta$ under the assumption $\alpha\ne 1$. When $\alpha=1$, one can use a similar argument to that in \cite[Theorem 3.1]{XP} to prove the uniqueness theorem. Thus, it is enough to prove that $\frac{\Delta_+(\lambda)}{1+\alpha^2}$ uniquely determines $\frac{\Delta_{-}(\lambda)}{1+\alpha^2}$. Define \begin{equation*} y_{a}\left(\lambda,x\right):=c\left(\lambda,a-x\right)-i\alpha\lambda s\left(\lambda,a-x\right). \end{equation*} Obviously, it satisfies \begin{align} \notag y_{a}\left(\lambda,x\right)= y(-\lambda,a-x),\quad y_{a}\left(\pm \lambda,a\right)=1,\quad y'_{a}\left(\pm \lambda,a\right)=\pm i\alpha\lambda-\beta. \end{align} Since $q(x)=q(a-x)$, $\alpha=\alpha_0$ and $\beta=\beta_0$, using \eqref{e15}, we have \begin{equation}\label{e8} \Delta_{-}\left(\lambda\right)=-\left<y\left(\lambda,x\right),y_{a}\left(\lambda,x\right)\right>. \end{equation} Letting $x=\frac{a}{2}$ in \eqref{e8}, we have \begin{align} \notag \Delta_{-}\left(\lambda\right)&=-y\left(\lambda,\frac{a}{2}\right)y'_{a}\left(\lambda,\frac{a}{2}\right)+y_{a}\left(\lambda,\frac{a}{2}\right)y'\left(\lambda,\frac{a}{2}\right)\\ &\notag=-y_{a}\left(-\lambda,\frac{a}{2}\right)y'_{a}\left(\lambda,\frac{a}{2}\right)-y_{a}\left(\lambda,\frac{a}{2}\right)y_{a}'\left(-\lambda,\frac{a}{2}\right). \end{align} It follows that $\Delta_{-}\left(-\lambda\right)=\Delta_{-}\left(\lambda\right)$.
Using \eqref{e7}, we have \begin{equation} \frac{\Delta_{-}(\lambda)}{1+\alpha^2}=\pm\sqrt{\frac{\Delta_+(\lambda)\Delta_+(-\lambda)}{(1+\alpha^2)^2}-\frac{4\alpha^2\lambda^2}{(1+\alpha^2)^2}}. \end{equation} To determine which branch is the right choice, we note that $\frac{\Delta_{-}(0)}{1+\alpha^2}=\frac{\Delta_+(0)}{1+\alpha^2}\neq0$, which follows from \eqref{k9} and \eqref{e15}. Thus, $\frac{\Delta_{-}(\lambda)}{1+\alpha^2}$ is uniquely determined by $\frac{\Delta_+(\lambda)}{1+\alpha^2}$. The proof is complete. \end{proof} \begin{remark} If $\Delta_+(0)=0$, then the uniqueness may not hold. It is known \cite{EK,Zo1} that for the problem $L(q,1,0,1,0)$, in the case $\Delta_+(0)=0$, there exist two even potentials corresponding to the same set of eigenvalues. \end{remark} Now, let us prove the so-called two-spectra theorem. Note that $\Delta_+(\lambda)$ and $\Delta_-(\lambda)$ have no common non-zero zeros. \begin{theorem}\label{tht} If $\alpha_0\ne1$ (or $\alpha\ne1$) and is known a priori, then all zeros of $\Delta_+(\lambda)$ and all non-zero zeros of $\Delta_-(\lambda)$ (including multiplicity) uniquely determine $\alpha$ ($\alpha_0$), $\beta_0$, $\beta$ and $q(x)$ a.e. on $(0,a)$. \end{theorem} \begin{proof} Since $\alpha_0\ne1$ (or $\alpha\ne1$) and is known a priori, using Theorem \ref{th1}, we can determine the sign of $\left(\alpha_{0}-1\right)\left(1-\alpha\right)$. Indeed, if there are only finitely many eigenvalues $\{\lambda_n^+\}$ or infinitely many eigenvalues $\{\lambda_n^+\}$ with unbounded imaginary parts, then $\left(\alpha_{0}-1\right)\left(1-\alpha\right)=0$. If there are infinitely many eigenvalues $\{\lambda_n^+\}$ with bounded imaginary parts, then $\left(\alpha_{0}-1\right)\left(1-\alpha\right)\ne0$.
In the latter case, if $|\sin (a{\rm Re}\lambda_n^+)|\to 1$ as $n\to\infty $, then $\left(\alpha_{0}-1\right)\left(1-\alpha\right)>0$; if $|\sin (a{\rm Re} \lambda_n^+)|\to 0$ as $n\to\infty $, then $\left(\alpha_{0}-1\right)\left(1-\alpha\right)<0$. In particular, we can determine whether $\alpha=1$ ($\alpha_0=1$) or not. If $\alpha\neq1$ ($\alpha_0\neq1$), then we can recover the term $\frac{\alpha+\alpha_0}{1+\alpha\alpha_0}$ from the asymptotics of the eigenvalues, and so $\alpha$ ($\alpha_0$) is uniquely determined. Using Lemma \ref{hx11} and \eqref{c16}, we obtain that the function $\Delta_+(\lambda)$ is uniquely determined by its zeros; if $\alpha=1$, this conclusion still holds, since $\alpha_0$ is known. Using \eqref{e7}, it is easy to decide whether $\lambda=0$ is a zero of $\Delta_-(\lambda)$. If $\Delta_-(0)=0$, then the multiplicity of $\lambda=0$ is also uniquely determined by using \eqref{e7}. Then, together with the condition of this theorem, we know that all zeros of $\Delta_-(\lambda)$ are known. Using Lemma \ref{hx11} and \eqref{c16} again, we know that $\Delta_-(\lambda)$ is uniquely determined by its zeros. Using \eqref{k9}, \eqref{e15} and \eqref{o30}, we have \begin{equation}\label{o31s} {\Delta_+(\lambda)}+{\Delta_{-}(\lambda)}=y'\left(\lambda,a\right)+\beta y\left(\lambda,a\right). \end{equation} The right-hand side of \eqref{o31s} is the characteristic function of the problem \eqref{hx12}, \eqref{hx14} and $y'(0)-\beta y(0)=0$. Using Theorem 3.1 in \cite{XP}, we complete the proof. \end{proof} \begin{remark}(1) If $q(x)$ is a real-valued function and $\beta,\beta_0\in \mathbb{R}$, then we can use the signs of the imaginary parts of the zeros of $\Delta_-(\lambda)$ instead of the zeros of $\Delta_-(\lambda)$.
Indeed, after obtaining $\Delta_+(\lambda)$ and $\alpha\alpha_0$, the function $g(\lambda):=\Delta_-(\lambda)\Delta_-(-\lambda)$ is determined (cf. \eqref{e7}). Let $\{\xi_n\}$ (counted with multiplicities) be the zeros of $g(\lambda)$ in $\overline{\mathbb{C}_+}:=\{\lambda: {\rm Im}\lambda\ge0\}$. Then using Theorem \ref{th2x} (2)(ii), we know that $\xi_n$ or $\bar{\xi}_n$ is a zero of $\Delta_-(\lambda)$. Define \begin{equation*} \sigma_n={\rm sgn } ({\rm Im} \lambda_n^-)=\left\{ \begin{split} +1,\quad &\text{if} \quad \lambda_n^-= \xi_n\in \mathbb{C}_+, \\ -1,\quad &\text{if} \quad \lambda_n^-=\bar{\xi}_n\in \mathbb{C}_-,\\ 0,\quad &\text{if} \quad \lambda_n^-={\xi}_n\in \mathbb{R}. \end{split}\right. \end{equation*} Using the set of signs $\{\sigma_n\}$ and the zeros of $g(\lambda)$ in $\overline{\mathbb{C}_+}$, we can uniquely recover all zeros of $\Delta_-(\lambda)$. We note that this property was first observed by Korotyaev \cite{EK} in studying the inverse resonance problem $L(q,1,0,1,0)$. By contrast, when $\alpha\ne1$ and $\alpha_0\ne1$, it is possible that all zeros of $\Delta_+(\lambda)$ together with at most a finite number of the signs $\{\sigma_n\}$ uniquely recover all zeros of $\Delta_-(\lambda)$. For example, if $\alpha$ and $\alpha_0$ are determined such that $P_0^->0$, then from the asymptotics of $\lambda_n^-$ we know that there exist only finitely many eigenvalues $\lambda_n^-$ in $\overline{\mathbb{C}_-}$. So, in this case, there exist at most finitely many elements of $\{\xi_n\}$ belonging to the set of zeros of $\Delta_-(-\lambda)$, and the other elements are zeros of $\Delta_-(\lambda)$. That is to say, in this case, we only need at most a finite number of signs to distinguish the zeros of $\Delta_-(-\lambda)$ and the zeros of $\Delta_-(\lambda)$ from all zeros of $g(\lambda)$ in $\overline{\mathbb{C}_+}$, and so all zeros of $\Delta_-(\lambda)$ are determined.
(2) If the known $\alpha_0$ is equal to $1$, it is unclear whether $\alpha$ can be determined. If $\alpha\ne1$ and is given, then this case is included in Theorem \ref{tht}. If $\alpha_0=1=\alpha$, then $\Delta_+(\lambda)$ is still uniquely determined by its zeros by using Lemma \ref{hx11} and \eqref{c16}. In contrast, $\Delta_-(\lambda)$ is uniquely determined by its zeros provided that some non-zero $b_0\in\{\beta,\beta_0,\beta+\beta_0,\beta-\beta_0\}$ is known. If $\beta=\beta_0=0$, i.e., in the case of the resonance problem $L(q,1,0,1,0)$, all zeros of $\Delta_-(\lambda)$ cannot uniquely determine $\Delta_-(\lambda)$ unless an additional sign is given (see \cite{EK}). (3) If $(\alpha_0-1)(\alpha-1)\ne0$ and the sign of either $\alpha_0-1$ or $\alpha-1$ is known a priori, then $\alpha$ and $\alpha_0$ can both be uniquely recovered from the asymptotics of the eigenvalues $\{\lambda_k^\pm\}$. Indeed, we can first recover the two numbers $P_0^\pm$, and then it follows from \eqref{k10} that \begin{equation*} e^{P_0^++P_0^-}=\frac{(\alpha_0+1)^2}{(\alpha_0-1)^2},\quad e^{P_0^+-P_0^-}=\frac{(\alpha+1)^2}{(\alpha-1)^2}. \end{equation*} If the sign of $\alpha_0-1$ (or $\alpha-1$) is known, then $\alpha_0$ ($\alpha$) is obtained uniquely. Substituting $\alpha_0$ (or $\alpha$) into $P_0^+$, we get $\alpha$ ($\alpha_0$). \end{remark} Next, let us give two uniqueness theorems for the general partial inverse problem, namely, the case where the potential is known on a subinterval $(b,a)$ with arbitrary $b\in (0,a)$.
\begin{lemma}[\textup{see \cite[p.173]{B}}]\label{le1} For any entire function $f(z)\not\equiv0$ of exponential type, the following inequality holds: \begin{equation} \mathop {\varliminf }\limits_{r \to \infty} \frac{n_f(r)}{r}\leq\frac{1}{2\pi}\int_0^{2\pi}h_f(\theta)d\theta, \end{equation} where $n_f(r)$ denotes the number of zeros of $f(z)$ in the disk $|z|\le r$, and $h_f(\theta)$ is the indicator function of $f(z)$ defined by \begin{equation*} h_f(\theta):=\mathop {\varlimsup }\limits_{r \to \infty }\frac{\ln |f(re^{i\theta})|}{r}. \end{equation*} \end{lemma} From Theorem \ref{th1}, we see that \begin{equation*} n_{\Delta_\pm}(r)=\frac{2tr}{\pi}(1+o(1)),\quad r\to\infty, \end{equation*} where $t= a$ if $(\alpha_0-1)(1-\alpha)\ne0$ and $t\le a$ if $(\alpha_0-1)(1-\alpha)=0$. Let $\Omega_\pm$ be the sets of the zeros of the functions $\Delta_\pm(\lambda)$, respectively. Denote $\Omega_0:=\Omega_+\cup\Omega_-$. Let $ n_{\Omega_0}(r)$ be the number of the points in $\Omega_0\cap \{\lambda:|\lambda|\le r\}.$ \begin{theorem}\label{th6} Assume that $q(x)$ is known a priori a.e. on $\left(b,a\right)$ with $b\in (0,a)$ and that $\alpha$ and $\beta$ are given. If $\Omega_0$ satisfies \begin{equation} n_{\Omega_0}(r)=\frac{2mr}{\pi}[1+o(1)],\quad r \to \infty,\quad m>2b, \end{equation} then $\Omega_0$ uniquely determines $\alpha_{0}$, $\beta_{0}$ and $q(x)$ a.e. on $\left(0,b\right)$. \end{theorem} \begin{proof} Suppose that there are two problems $L(q,\alpha_0,\beta_0,\alpha,\beta)$ and $L(\tilde{q},\tilde{\alpha}_0,\tilde{\beta}_0,\tilde{\alpha},\tilde{\beta})$ which satisfy $q(x)=\tilde{q}(x)$ a.e. on $\left(b,a\right)$, $\alpha=\tilde{\alpha}$ and $\beta=\tilde{\beta}$. Let us prove that $q(x)=\tilde{q}(x)$ a.e. on $\left(0,b\right)$, $\alpha_{0}=\tilde{\alpha}_{0}$ and $\beta_{0}=\tilde{\beta}_{0}$ if $\Omega_0=\tilde{\Omega}_0$.
Define \begin{equation}\label{p1} F(\lambda)=y\left(\lambda,b\right)\tilde{y}'\left(\lambda,b\right)-\tilde{y}\left(\lambda,b\right)y'\left(\lambda,b\right). \end{equation} Combining \eqref{r2} and \eqref{k8}, we have \begin{equation}\label{p9} \lvert F(\lambda)\rvert\leq C|\lambda|e^{2b\lvert {\rm Im}\lambda\rvert}, \quad \lvert\lambda\rvert\to\infty, \quad C>0. \end{equation} Let $\lambda=re^{i\theta}$, so that $\lvert {\rm Im}\lambda\rvert=r\lvert\sin\theta\rvert.$ Then using \eqref{p9}, we have \begin{equation} h_{F}(\theta):=\varlimsup\limits_{r \to \infty}\frac{\ln|F(re^{i\theta})|}{r} \le 2b|\sin \theta|, \end{equation} which implies \begin{equation} \int_{0}^{2\pi}h_{F}(\theta)d\theta \le 2b\int_{0}^{2\pi}|\sin \theta|d\theta = 4b\int_{0}^{\pi}\sin \theta d\theta = 8b. \end{equation} Since $q(x)=\tilde{q}(x)$ a.e. on $\left(b,a\right)$, $\alpha=\tilde{\alpha}$ and $\beta=\tilde{\beta}$, we have $\phi(\lambda,b)=\tilde{\phi}(\lambda,b)$ for all $\lambda\in \mathbb{C}$. Since $\phi(\lambda,x)$ satisfies the conditions $\phi(\lambda,a)=1$ and $\phi'(\lambda,a)=-(i\alpha\lambda+\beta)$, the function $\phi(-\lambda,x)$ satisfies the conditions $\phi(-\lambda,a)=1$ and $\phi'(-\lambda,a)=i\alpha\lambda-\beta$. Then we can rewrite \eqref{e15} as \begin{equation}\label{o64} \Delta_{-}(\lambda)=\left<\phi(-\lambda,x),y(\lambda,x)\right>. \end{equation} Letting $x=b$ in \eqref{e10} and \eqref{o64}, we get \begin{align}\label{w1} \Delta_\pm\left(\lambda\right)=y'\left(\lambda,b\right)\phi\left(\pm\lambda,b\right)-\phi'\left(\pm\lambda,b\right)y\left(\lambda,b\right), \end{align} \begin{equation}\label{w2} \tilde{\Delta}_\pm\left(\lambda\right)=\tilde{y}'\left(\lambda,b\right)\phi\left(\pm\lambda,b\right)-\phi'\left(\pm\lambda,b\right)\tilde{y}\left(\lambda,b\right).
\end{equation} Multiplying \eqref{w1} and \eqref{w2} by $\tilde{y}'\left(\lambda,b\right)$ and $y'\left(\lambda,b\right)$, respectively, and taking the difference, we get \begin{equation}\label{w3} F\left(\lambda\right)=\frac{y'\left(\lambda,b\right)\tilde{\Delta}_\pm \left(\lambda\right)-\tilde{y}'\left(\lambda,b\right)\Delta_\pm \left(\lambda\right)}{\phi'\left(\pm \lambda,b\right)}. \end{equation} Multiplying \eqref{w1} and \eqref{w2} by $\tilde{y}\left(\lambda,b\right)$ and $y\left(\lambda,b\right)$, respectively, and taking the difference, we get \begin{equation}\label{w4} F\left(\lambda\right)=\frac{y\left(\lambda,b\right)\tilde{\Delta}_\pm \left(\lambda\right)-\tilde{y}\left(\lambda,b\right)\Delta_\pm \left(\lambda\right)}{\phi\left(\pm \lambda,b\right)}. \end{equation} Considering \eqref{w3} and \eqref{w4}, since $\phi\left(\pm \lambda,b\right)$ and $\phi'\left(\pm\lambda,b\right)$ cannot vanish simultaneously, we conclude that all common zeros (including multiplicity) of $\Delta_\pm\left(\lambda\right)$ and $\tilde{\Delta}_\pm\left(\lambda\right)$ are zeros of $F\left(\lambda\right)$. Thus, we have \begin{equation} \varliminf\limits_{r \to \infty}\frac{n_{F}(r)}{r} \ge \lim_{r \to \infty}\frac{n_{\Omega_0}(r)}{r} = \frac{2m}{\pi}. \end{equation} Using Lemma \ref{le1}, if the entire function $F(\lambda)\not\equiv0$, then \begin{equation} \frac{2m}{\pi}\le \varliminf\limits_{r \to \infty}\frac{n_{F}(r)}{r} \le \frac{1}{2\pi}\int_0^{2\pi}h_{F}(\theta)d\theta \le \frac{4b}{\pi}, \end{equation} which implies $m\leq 2b$. This contradicts $m>2b$. Hence, $F(\lambda)\equiv0$, namely, $\frac{y'\left(\lambda,b\right)}{y\left(\lambda,b\right)}=\frac{\tilde{y}'\left(\lambda,b\right)}{\tilde{y}\left(\lambda,b\right)}$. Since $y'\left(\lambda,b\right)$ and $y\left(\lambda,b\right)$ do not have common zeros, the zeros of ${y\left(\lambda,b\right)}$ are uniquely determined. Using Theorem 3.2 in \cite{X}, we obtain that $q(x)=\tilde{q}(x)$ a.e.
on $\left(0,b\right)$, $\alpha_{0}=\tilde{\alpha}_{0}$ and $\beta_{0}=\tilde{\beta}_{0}$. The proof is complete. \end{proof} \begin{corollary}\label{th7} If $q(x)$ is known a priori a.e. on $\left(b,a\right)$ with $b<\frac{a}{2}$ and $\alpha$ and $\beta$ are known, then the subset $\Omega:=\Omega_+$ (or $\Omega:=\Omega_-$) satisfying \begin{equation} n_{\Omega}(r)=\frac{2mr}{\pi}[1+o(1)],\quad r \to \infty,\quad 2b<m\le a, \end{equation} uniquely determines $\alpha_{0}$, $\beta_{0}$ and $q(x)$ a.e. on $\left(0,b\right)$. \end{corollary} The above theorem requires $m$ to be greater than $2b$. In the following theorem, we consider the critical case $m=2b$. In this case, we assume $(\alpha_0-1)(1- \alpha)\ne0$. Denote \begin{equation}\label{xk1} g_\pm (\lambda):=\lambda[i\left(\alpha_{0}\pm \alpha\right)\cos{\lambda a}-\left(1\pm\alpha\alpha_{0}\right)\sin{\lambda a}]. \end{equation} Let $\{\mu_k^\pm\}$ be the zeros of $g_\pm(\lambda)$, respectively. It is obvious that \begin{equation}\label{o28} \begin{cases}& \!\!\!\!\mu_{0}^\pm=0 ,\; \mu_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn}k+\frac{iP_0^\pm}{2a},\; k\in\mathbb{Z}_0,\ \text{if}\ \left(\alpha_{0}-1\right)\left(1-\alpha\right)>0, \\ & \!\! \!\!\!\! \mu_{-1}^\pm=0,\;\mu_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn}k+\frac{iP_0^\pm}{2a},\; k\in\mathbb{Z}_{0}\setminus\{-1\} ,\ \text{if}\ \left(\alpha_{0}-1\right)\left(1-\alpha\right)<0. \end{cases} \end{equation} From Theorem \ref{th1}, we see that \begin{equation*} \lambda_k^\pm =\mu_k^\pm +O(1/k),\quad \lvert k\rvert\to\infty. \end{equation*} \begin{lemma}[\textup{\cite[Proposition B.6]{FB}}]\label{lem2} Assume that $E_0(\rho)$ is an entire function of order less than one. If $\lim_{\lvert t\rvert\to\infty, t\in\mathbb{R}}E_0(it)=0$, then $E_0(\rho)\equiv0$ on the whole complex plane. \end{lemma} \begin{theorem}\label{thxx} Assume that $(\alpha_0-1)(1-\alpha)\ne0$, and $q(x)$ is known a priori a.e.
on $(b,a)$ with $b\in (0,a)$, and $\alpha$ and $\beta$ are known. If the subset $\Omega_{1}:=\{\lambda_{k_{j}}^+\}_{j\in \mathbb{Z}_1}\cup \{\lambda_{k_{j}}^-\}_{j\in \mathbb{Z}_1}$ satisfies \begin{equation}\label{i1} \sum_{j\in \mathbb{Z}_1}\frac{\lvert\lambda_{k_{j}}^+-a\mu_{j}^+/b_+\rvert}{\lvert j\rvert+1}+\sum_{j\in \mathbb{Z}_1}\frac{\lvert\lambda_{k_{j}}^--a\mu_{j}^-/b_-\rvert}{\lvert j\rvert+1}<\infty,\quad b_++b_-=2b, \end{equation} where $\mathbb{Z}_1=\mathbb{Z}_0$ if $(\alpha_0-1)(1- \alpha)<0$ and $\mathbb{Z}_1=\mathbb{Z}$ if $(\alpha_0-1)(1- \alpha)>0$, then $\Omega_{1}$ uniquely determines $\alpha_{0}$, $\beta_{0}$ and $q(x)$ a.e. on $(0,b)$. \end{theorem} \begin{proof} We only deal with the case $(\alpha_0-1)(1- \alpha)>0$, i.e., $\mathbb{Z}_1=\mathbb{Z}$. Let $\rho=\lambda^{2}$. Define \begin{equation}\label{p2} G(\rho)=F(\lambda)F(-\lambda),\quad \Phi(\rho)=\prod_{j\in \mathbb{Z}}\left(1-\frac{\rho}{(\lambda_{k_{j}}^+)^{2}}\right)\left(1-\frac{\rho}{(\lambda_{k_{j}}^-)^{2}}\right), \quad E_0(\rho)=\frac{G(\rho)}{\Phi(\rho)}, \end{equation} where $F$ is defined in \eqref{p1}. Since $F(\lambda)$ is an entire function of order $\le1$ and $F(\lambda)F(-\lambda)$ is an even function of $\lambda$, the function $G(\rho)$ is an entire function of $\rho $ of order $\le\frac{1}{2}$. Let us show that $E_0(\rho)$ is also an entire function of $\rho$ of order $\le\frac{1}{2}$. Since all zeros of $\Phi(\rho) $ are zeros of $G(\rho)$, it is enough to show that $\Phi(\rho) $ is an entire function of $\rho $ of order $\le\frac{1}{2}$. By virtue of \eqref{k11} and \eqref{k12}, we have \begin{equation}\label{4.26} \frac{1}{(\lambda_{k_{j}}^\pm)^{2}}=O\left(\frac{1}{j^2}\right), \quad \lvert j\rvert\to\infty.
\end{equation} It follows that the series \begin{equation} \notag \sum_{j\in \mathbb{Z}}\left(\left\lvert \frac{\rho}{(\lambda_{k_{j}}^+)^{2}} \right\rvert+\left\lvert \frac{\rho}{(\lambda_{k_{j}}^-)^{2}}\right\rvert\right) \end{equation} converges uniformly on bounded subsets of $\mathbb{C}$. Consequently, the infinite product $\Phi(\rho)$ in \eqref{p2} converges to an entire function of $\rho$, with its roots being exactly $(\lambda_{k_{j}}^+)^{2}$ and $(\lambda_{k_{j}}^-)^{2}$, $j\in \mathbb{Z}$. Denote \begin{equation} \Xi_\Phi:=\inf \left\{p: \sum_{j\in \mathbb{Z}}\left(\frac{1}{\left \lvert(\lambda_{k_{j}}^+)^{2p}\right\rvert} +\frac{1}{\left\lvert (\lambda_{k_{j}}^-)^{2p}\right\rvert}\right)<\infty\right\}, \end{equation} which is called the \emph{convergence exponent of zeros} of the canonical product of $\Phi(\rho)$ in \eqref{p2}. Obviously, the estimate \eqref{4.26} implies $\Xi_\Phi\le\frac{1}{2}$. Note that the order of the canonical product of an entire function coincides with its convergence exponent of zeros (see \cite[p.16]{B}). It follows that the order of the canonical product of $\Phi(\rho)$ is less than or equal to $1/2$. Using Hadamard's factorization theorem, we know that the infinite product in \eqref{p2} is the canonical product of the function $\Phi(\rho)$, and so the order of $\Phi(\rho)$ is at most $1/2$. Using Lemma \ref{lem2} and the final part of the proof of Theorem \ref{th6}, to prove Theorem \ref{thxx} it is enough to show that \begin{equation} \lim_{\lvert t\rvert\to\infty,t\in\mathbb{R}}E_0(it)=0. \end{equation} Define \begin{equation} \Phi_{0}\left(\rho\right)=g_+\left(\frac{b_+\lambda}{a}\right)g_+\left(-\frac{b_+\lambda}{a}\right)g_-\left(\frac{b_-\lambda}{a}\right) g_-\left(-\frac{b_-\lambda}{a}\right), \end{equation} where the functions $g_\pm$ are defined in \eqref{xk1}.
Using the condition $b_++b_-=2b$, we have that for large $|t|$, there holds \begin{equation}\label{xk4} \lvert\Phi_{0}\left(it\right)\rvert\geq C\lvert t\rvert^{2}e^{4b\lvert {\rm Im} \sqrt{t}\rvert},\quad C>0. \end{equation} Using Hadamard's factorization theorem, we also have \begin{equation} \Phi_{0}\left(\rho\right)=C_1\rho^2\prod_{j\in \mathbb{Z}_0}\left(1-\frac{\rho}{(\zeta_{j}^+)^{2}}\right)\left(1-\frac{\rho}{(\zeta_{j}^-)^{2}}\right), \end{equation} where $\zeta_{j}^\pm=\frac{a}{b_\pm}\mu_{j}^\pm$ and $C_1$ is a constant. Then \begin{align*} \left |\frac{\Phi_{0}\left(it\right)}{\Phi\left(it\right)}\right|\notag&\le C\prod_{j\in \mathbb{Z}_0}\left|\frac{\left(1-\frac{it}{(\zeta_{j}^+)^{2}}\right)\left(1-\frac{it}{(\zeta_{j}^-)^{2}}\right)}{\left(1-\frac{it}{(\lambda_{k_{j}}^+)^{2}}\right)\left(1-\frac{it}{(\lambda_{k_{j}}^-)^{2}}\right)}\right|\\ &\leq C\!\!\prod_{j\in \mathbb{Z}_0}\!\!\left[1+\left|\frac{(\zeta_{j}^+)^{2}-(\lambda_{k_{j}}^+)^{2}}{(\lambda_{k_{j}}^+)^{2}-it}\right|\right]\!\!\left[1+\left|\frac{(\zeta_{j}^-)^{2}-(\lambda_{k_{j}}^-)^{2}}{(\lambda_{k_{j}}^-)^{2}-it}\right|\right]\prod_{j\in \mathbb{Z}_0}\left|\frac{(\lambda_{k_{j}}^+)^{2}}{(\zeta_{j}^+)^{2}}\right|\left|\frac{(\lambda_{k_{j}}^-)^{2}}{(\zeta_{j}^-)^{2}}\right|, \end{align*} which implies from \eqref{i1} that \begin{equation}\label{xk6} \left |\frac{\Phi_{0}\left(it\right)}{\Phi\left(it\right)}\right|=O(1), \quad\lvert t\rvert\to \infty. \end{equation} Using \eqref{p9}, \eqref{p2}, \eqref{xk4} and \eqref{xk6}, we get \begin{equation} \lvert E_0\left(it\right)\rvert=\left|\frac{G(it)\Phi_{0}(it)}{\Phi_{0}(it)\Phi(it)}\right|=O\left(\frac{1}{\lvert t\rvert}\right), \quad\lvert t\rvert\to \infty. \end{equation} The proof is complete. \end{proof} \begin{corollary}\label{cor2} Assume that $(\alpha_0-1)(1-\alpha)\ne0$. If $q(x)$ is known a priori a.e.
on $(b,a)$ with $b<\frac{a}{2}$, and $\alpha$ and $\beta$ are known, then the subset $\{\lambda_{k_{j}}^+\}_{j\in \mathbb{Z}_1}$ (or $ \{\lambda_{k_{j}}^-\}_{j\in \mathbb{Z}_1}$) satisfying \begin{equation*} \sum_{j\in \mathbb{Z}_1}\frac{\lvert\lambda_{k_{j}}^+-\frac{a\mu_{j}^+}{2b}\rvert}{\lvert j\rvert+1}<\infty,\quad \left(\text{or}\quad \sum_{j\in \mathbb{Z}_1}\frac{\lvert\lambda_{k_{j}}^--\frac{a\mu_{j}^-}{2b}\rvert}{\lvert j\rvert+1}<\infty,\right) \end{equation*} uniquely determines $\alpha_{0}$, $\beta_{0}$ and $q(x)$ a.e. on $\left(0,b\right)$. \end{corollary} \noindent\textbf{\large Appendix} \\[-2mm] \appendix \setcounter{equation}{0} \renewcommand\theequation{A.\arabic{equation}} \setcounter{lemma}{0} \renewcommand\thelemma{A.\arabic{lemma}} In this appendix, we give the asymptotics of the zeros of the entire function \begin{equation}\label{A.7} \varphi(\lambda)=\phi^{(1)}(\lambda)+M \sin{\lambda a}-iN \cos{\lambda a}+\psi(\lambda), \end{equation} where $M, N\in\mathbb{C}$, $\psi$ is an entire function of exponential type $\leq a$ that belongs to $L^2(-\infty,\infty)$, and \begin{equation}\label{A.1} \phi^{(1)}(\lambda)=\lambda(\sigma_1\cos{\lambda a}+i\sigma_2\sin{\lambda a}), \end{equation} with $\sigma_1,\sigma_2\in \mathbb{R}$. In \cite[p.178]{MV}, the asymptotics of the zeros of $\varphi(\lambda)$ are given under the conditions $\sigma_1\ne\sigma_2$ and $\sigma_1,\sigma_2\ge0$. In fact, these conditions can be weakened. Let us first prove the following lemma. \begin{lemma}\label{LA1} Let $\sigma_1$, $\sigma_2$ be real numbers with $\sigma_1\ne\pm\sigma_2$.
Then the zeros of the entire function $\phi^{(1)}(\lambda)$ defined in \eqref{A.1} have the following asymptotic representations.\\ $\left(1\right)$ If $\left(\sigma_2-\sigma_1\right)\left(\sigma_2+\sigma_1\right)<0$, then the zeros $\{\lambda_{k}^{(1)}\}_{k\in\mathbb{Z}}$ of $\phi^{(1)}(\lambda)$ are $\lambda_{0}^{(1)}=0$, \begin{equation}\label{XA1} \begin{aligned} \lambda_{k}^{(1)}=\frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn} k+\frac{i}{2a}\ln{\left(\frac{|\sigma_2+\sigma_1|}{|\sigma_2-\sigma_1|}\right)}, \quad k\in\mathbb{Z}_0,\quad \mathbb{Z}_0=\mathbb{Z}\setminus\{0\}, \end{aligned} \end{equation} $\left(2\right)$ If $\left(\sigma_2-\sigma_1\right)\left(\sigma_2+\sigma_1\right)>0$, then the zeros $\{\lambda_{k}^{(1)}\}_{k\in \mathbb{Z}_0}$ of $\phi^{(1)}(\lambda)$ are $\lambda_{-1}^{(1)}=0$: \begin{equation}\label{XA2} \begin{aligned} \lambda_{k}^{(1)}=\frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn} k+\frac{i}{2a}\ln{\left(\frac{|\sigma_2+\sigma_1|}{|\sigma_2-\sigma_1|}\right)}, \quad k\in \mathbb{Z}_0\setminus\{-1\}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Note that \begin{equation}\label{A7} \cos{\lambda a}=\frac{e^{i\lambda a}+e^{-i\lambda a}}{2}, \quad \sin{\lambda a}=\frac{e^{i\lambda a}-e^{-i\lambda a}}{2i}. \end{equation} Then $\frac{1}{\lambda}\phi^{(1)}(\lambda)=0$ is equivalent to \begin{equation}\label{A.5} \left(\sigma_2+\sigma_1\right)e^{i\lambda a}=\left(\sigma_2-\sigma_1\right)e^{-i\lambda a}. \end{equation} Since $\sigma_1\ne\pm \sigma_2$, it follows that \begin{equation}\label{A11} e^{2i\lambda a}=\frac{\sigma_2-\sigma_1}{\sigma_2+\sigma_1}. \end{equation} Taking the logarithm on both sides of \eqref{A11}, we have \begin{equation*} 2i\lambda_k^{(1)} a=\ln{\left(\frac{\sigma_2-\sigma_1}{\sigma_2+\sigma_1}\right)}+2k\pi i,\quad k\in \mathbb{Z}. \end{equation*} Note that $\ln z=\ln |z|+i\arg z$ with $\arg z\in (-\pi,\pi]$. It follows that \begin{equation} \lambda_{k}^{(1)}=\begin{cases}& \!\!\!\! 
\frac{k\pi}{a}+\frac{i}{2a}\ln{\left(\frac{\sigma_2+\sigma_1}{\sigma_2-\sigma_1}\right)},\; k\in\mathbb{Z},\ \text{if}\ \frac{\sigma_2+\sigma_1}{\sigma_2-\sigma_1}>0, \\ & \!\! \!\!\!\! \frac{\pi}{a}\left(k-\frac{1}{2}\right)+\frac{i}{2a}\ln{\left(\frac{|\sigma_2+\sigma_1|}{|\sigma_2-\sigma_1|}\right)},\; k\in\mathbb{Z},\ \text{if}\ \frac{\sigma_2+\sigma_1}{\sigma_2-\sigma_1}<0, \end{cases} \end{equation} which is equivalent to \eqref{XA1} and \eqref{XA2}. The proof is complete. \end{proof} Using Lemma \ref{LA1} and a similar argument to the proof of Lemma 7.1.3 in \cite{MV}, one can get the following lemma. \begin{lemma}\label{LA2} Let $\sigma_1$ and $\sigma_2$ be real numbers with $\sigma_1\ne\pm\sigma_2$, and let the entire function $\varphi(\lambda)$ be defined by \eqref{A.7}.\\ $\left(1\right)$ If $\left(\sigma_2-\sigma_1\right)\left(\sigma_2+\sigma_1\right)<0$, then the zeros $\{\lambda_{k}\}_{k\in\mathbb{Z}}$ of $\varphi(\lambda)$ have the following asymptotic behavior: \begin{equation} \begin{aligned} \lambda_{k}=\frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn} k+\frac{i}{2a}\ln{\left(\frac{|\sigma_2+\sigma_1|}{|\sigma_2-\sigma_1|}\right)}+\frac{P}{k}+\frac{\gamma_k}{k}, \quad \lvert k\rvert\to\infty, \end{aligned} \end{equation} where $\{\gamma_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$ and \begin{equation}\label{A.9} P=\frac{\sigma_2 N-\sigma_1 M}{\pi(\sigma_2^2-\sigma_1^2)}. \end{equation} $\left(2\right)$ If $\left(\sigma_2-\sigma_1\right)\left(\sigma_2+\sigma_1\right)>0$, then the zeros $\{\lambda_{k}\}_{k\in\mathbb{Z}_0}$ of $\varphi(\lambda)$ have the following asymptotic behavior: \begin{equation} \begin{aligned} \lambda_{k}=\frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn} k+\frac{i}{2a}\ln{\left(\frac{|\sigma_2+\sigma_1|}{|\sigma_2-\sigma_1|}\right)}+\frac{P}{k}+\frac{\gamma_k}{k}, \quad \lvert k\rvert\to\infty, \end{aligned} \end{equation} where $P$ is given in \eqref{A.9} and $\{\gamma_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$.
\end{lemma} \begin{proof} The proof is similar to that of Lemma 7.1.3 in \cite{MV}; for the convenience of the reader, we give its main idea. Denote $G_\delta:=\{\lambda\in\mathbb{C}: \lvert \lambda-\lambda_{k}^{(1)}\rvert\geq\delta>0, k\in\mathbb{Z}_0\}$. Clearly, there exist positive constants $C_\delta$ and $K$ such that \begin{equation}\label{A10} \lvert\phi^{(1)}(\lambda)\rvert\geq C_\delta\lvert \lambda \rvert e^{a\lvert{\rm Im}\lambda \rvert}, \quad \lambda \in G_\delta,\quad \lvert \lambda \rvert\geq \lambda^* , \end{equation} for sufficiently large $\lambda^*=\lambda^*(\delta)$, and \begin{equation}\label{A.12} \lvert \varphi(\lambda)-\phi^{(1)}(\lambda)\rvert\le Ke^{a\lvert{\rm Im}\lambda \rvert}. \end{equation} It follows that \begin{equation} \lvert\phi^{(1)}(\lambda)\rvert>\lvert \varphi(\lambda)-\phi^{(1)}(\lambda)\rvert,\quad \lambda\in G_\delta\cap \{\lambda:|\lambda|\ge \lambda^*\}, \end{equation} for sufficiently large $\lambda^*$. Therefore, there exist sufficiently large numbers $R_k$ and sufficiently small numbers $\delta_k>0$ such that \begin{equation} \varphi(\lambda)\ne0, \quad \lvert \phi^{(1)}(\lambda)\rvert>\lvert \varphi(\lambda)-\phi^{(1)}(\lambda)\rvert,\quad \lambda\in\Gamma_{R_k}\cup \gamma_{\delta_k}, \end{equation} where \begin{equation} \Gamma_{R_k}:=\{\lambda\in\mathbb{C}: \lvert \lambda \rvert=R_k\},\quad \gamma_{\delta_k}:=\{\lambda\in\mathbb{C}: \lvert \lambda-\lambda_{k}^{(1)}\rvert=\delta_k\}. \end{equation} Using Rouch\'{e}'s theorem, we conclude that $\varphi(\lambda)$ and $\phi^{(1)}(\lambda)$ have the same number of zeros, counted with multiplicity, inside $\Gamma_{R_k}$ and $\gamma_{\delta_k}$. In particular, inside $\gamma_{\delta_k}$ there is exactly one zero of $\varphi(\lambda)$. It follows that \begin{equation}\label{A12} \lambda_k=\lambda_{k}^{(1)}+\varepsilon_k,\quad \varepsilon_k=o(1),\quad |k|\to\infty.
\end{equation} Substituting \eqref{A12} into \eqref{A.7}, we get for large $|k|$ that \begin{align}\label{XC0} \notag 0=&\varphi(\lambda_k)=\phi^{(1)}(\lambda_{k}^{(1)}+\varepsilon_k)+M \sin{(\lambda_{k}^{(1)}+\varepsilon_k) a}-iN \cos{(\lambda_{k}^{(1)}+\varepsilon_k) a}+\psi(\lambda_k)\\ \notag=& (\sigma_1\lambda_{k}-iN)\cos{(\lambda_{k}^{(1)}+\varepsilon_k) a}+(i\sigma_2\lambda_{k}+M)\sin{(\lambda_{k}^{(1)}+\varepsilon_k) a}+\psi(\lambda_k)\\ \notag=&(\sigma_1\lambda_{k}-iN)\left[\cos{(\lambda_{k}^{(1)} a)}\cos{(\varepsilon_k a)}-\sin{(\lambda_{k}^{(1)} a)}\sin{(\varepsilon_k a)}\right]\\ \notag &+(i\sigma_2\lambda_{k}+M)\left[\sin{(\lambda_{k}^{(1)} a)}\cos{(\varepsilon_k a)}+\cos{(\lambda_{k}^{(1)} a)}\sin{(\varepsilon_k a)}\right]+\psi(\lambda_k)\\ \notag=&\psi(\lambda_k)+\frac{\lambda_k}{\lambda_k^{(1)}}\phi^{(1)}(\lambda_k^{(1)})\cos (\varepsilon_ka)+\lambda_k\left[-\sigma_1\sin{(\lambda_{k}^{(1)} a)}+i\sigma_2\cos{(\lambda_{k}^{(1)} a)}\right]\sin{(\varepsilon_k a)}\\ &+\left[M\sin{(\lambda_{k}^{(1)} a)}-iN\cos{(\lambda_{k}^{(1)} a)}\right]\cos{(\varepsilon_k a)}+\left[M\cos{(\lambda_{k}^{(1)} a)}+iN\sin{(\lambda_{k}^{(1)} a)}\right]\sin{(\varepsilon_k a)}. \end{align} Using \eqref{A11} and \eqref{A7}, we have for large $|k|$ that \begin{equation}\label{XC} -\sigma_1\sin{(\lambda_{k}^{(1)} a)}+i\sigma_2\cos{(\lambda_{k}^{(1)} a)}=ie^{-i\lambda_k^{(1)}a}(\sigma_2-\sigma_1), \end{equation} \begin{equation}\label{XC1} M\sin{(\lambda_{k}^{(1)} a)}-iN\cos{(\lambda_{k}^{(1)} a)}=ie^{-i\lambda_k^{(1)}a}\frac{M\sigma_1-N\sigma_2}{\sigma_2+\sigma_1}, \end{equation} \begin{equation}\label{XC2} M\cos{(\lambda_{k}^{(1)} a)}+iN\sin{(\lambda_{k}^{(1)} a)}=e^{-i\lambda_k^{(1)}a}\frac{M\sigma_2-N\sigma_1}{\sigma_2+\sigma_1}.
\end{equation} Substituting \eqref{XC}--\eqref{XC2} into \eqref{XC0}, and noting $ \phi^{(1)}(\lambda_{k}^{(1)})=0$, we have \begin{equation}\label{XC3} \sin{(\varepsilon_k a)}=\frac{i\left(N\sigma_2-M\sigma_1\right)\cos(\varepsilon_ka)-\psi(\lambda_k)(\sigma_2+\sigma_1)e^{i\lambda_k^{(1)}a}}{i\lambda_k(\sigma_2^2-\sigma_1^2)+\left(M\sigma_2-N\sigma_1\right)}. \end{equation} Note that \begin{equation}\label{XC4} \sin{(\varepsilon_k a)}=\varepsilon_k a+O(\lvert \varepsilon_k\rvert^{3}), \ \cos{(\varepsilon_k a)}=1+O(\lvert \varepsilon_k\rvert^{2}), \ \lambda_k=\frac{k\pi }{a}[1+O(1/k)], \ |k|\to \infty. \end{equation} Substituting \eqref{XC4} into \eqref{XC3}, and noting $\{\psi(\lambda_k)\}\in l_2$, we get \begin{equation} \varepsilon_k=\frac{P}{k}+\frac{\gamma_k}{k}. \end{equation} The proof is complete. \end{proof} \begin{thebibliography}{99} \bibitem{BN1} N.P. Bondarenko, Inverse Sturm-Liouville problem with analytical functions in the boundary condition, \emph{Open Mathematics} \textbf{18} (2020), 512-528. \bibitem{BN2} N.P. Bondarenko, Solvability and stability of the inverse Sturm-Liouville problem with analytical functions in the boundary condition. \emph{Math. Methods Appl. Sci.} \textbf{43} (2020) 7009-7021. \bibitem{BP} N.P. Bondarenko, Partial inverse Sturm-Liouville problems. \emph{Mathematics} \textbf{11} (2023), 2408. (44pp). \bibitem{BY} S. A. Buterin, V. A. Yurko, Inverse spectral problem for pencils of differential operators on a finite interval, \emph{Vestnik Bashkirskogo Universiteta} \textbf{4} (2006), 8-12. \bibitem{BS} S. A. Buterin, On half inverse problem for differential pencils with the spectral parameter in boundary conditions, \emph{Tamkang Journal of Mathematics} \textbf{42} (2011), 355-364. \bibitem{DGS} R. del Rio, F. Gesztesy, B. Simon, Inverse spectral analysis with partial information on the potential, III. Updating boundary conditions, \emph{Internat. Math. Res. Notices} \textbf{15} (1997), 751-758. \bibitem{GV} G. Freiling, V.
Yurko, Inverse Sturm-Liouville Problems and their Applications, NOVA Science Publishers, New York, 2001. \bibitem{FY0} G. Freiling, V. Yurko, Inverse problems for Sturm-Liouville equations with boundary conditions polynomially dependent on the spectral parameter, \emph{Inverse Problems} \textbf{26} (2010), 055003 (17pp). \bibitem{FB} F. Gesztesy, B. Simon, Inverse spectral analysis with partial information on the potential, II. The case of discrete spectrum, \emph{Trans. Amer. Math. Soc.} \textbf{352} (2000), 2765-2787. \bibitem{G} N. J. Guliyev, On two-spectra inverse problems. \emph{Proc. Amer. Math. Soc.}, \textbf{148} (2020), 4491-4502. \bibitem{G1} N. J. Guliyev, Essentially isospectral transformations and their applications. \emph{Ann. Mat. Pura Appl.} \textbf{199} (2020), 1621-1648. \bibitem{HB} H. Hochstadt, B. Lieberman, An inverse Sturm-Liouville problem with mixed given data, \emph{SIAM J. Appl. Math.} \textbf{34} (1978), 676-680. \bibitem{MH} M. Horv\'{a}th, Inverse spectral problems and closed exponential systems. \emph{Ann. Math.} {\bf 162} (2005), 885-918. \bibitem{M} M. Horv\'{a}th, On the inverse spectral theory of Schr\"{o}dinger and Dirac operators, \emph{Trans. Amer. Math. Soc.} \textbf{353} (2001), 4155-4171. \bibitem{EK} E. Korotyaev, Inverse resonance scattering on the real line, \emph{Inverse Problems} {\bf 21} (2005), 325-341. \bibitem{EK0} E. Korotyaev, Inverse resonance scattering for Schr\"{o}dinger operator on the half line. \emph{Asymptotic Anal.} \textbf{37} (2004), 215-226. \bibitem{Bb} B. M. Levitan, Inverse Sturm-Liouville Problems, Nauka, Moscow, 1987. \bibitem{B} B. Levin, Distribution of zeros of entire functions, AMS Transl. vol.5, Providence RI 1964. \bibitem{MV} M. M\"{o}ller, V. Pivovarchik, Spectral Theory of Operator Pencils, Hermite-Biehler Functions, and their Applications, OT 246, Birkh\"{a}user, Cham, 2015. \bibitem{mv} M. M\"{o}ller, V. Pivovarchik, Direct and inverse Robin-Regge problems, \emph{Electron. J.
Differential Equations} \textbf{2017} (2017), 1-18. \bibitem{V} V. A. Marchenko, Sturm-Liouville Operators and Applications, Oper. Theory Adv. Appl. 22, Birkh\"{a}user, Basel, 1986. \bibitem{VC} V. Pivovarchik, C. van der Mee, The inverse generalized Regge problem, \emph{Inverse Problems} \textbf{17} (2001), 1831-1845. \bibitem{RS} W. Rundell, P. Sacks, Numerical technique for the inverse resonance problem, \emph{J. Computational and Applied Mathematics} {\bf 170} (2004), 337-347. \bibitem{ST} S. A. Stepin, A. G. Tarasov, Asymptotic distribution of resonances for one-dimensional Schr\"{o}dinger operators with compactly supported potential, \emph{Mat. Sb.} \textbf{198} (2007), 1787-1804. \bibitem{WY} Y. P. Wang, V.A. Yurko, Inverse spectral problems for differential pencils with boundary conditions dependent on the spectral parameter, \emph{Math. Methods Appl. Sci.} \textbf{40} (2017), 3190-3196. \bibitem{WW} Z. Wei, G. Wei, On the uniqueness of inverse spectral problems associated with incomplete spectral data, \emph{J. Math. Anal. Appl.} \textbf{462} (2018), 697-711. \bibitem{XP} X.-C. Xu, Inverse spectral problems for the generalized Robin-Regge problem with complex coefficients, \emph{J. Geom. Phys.} \textbf{159} (2021), Paper No. 103936. \bibitem{X} X.-C. Xu, N. P. Bondarenko, On the local solvability and stability for the inverse spectral problem of the generalized Dirichlet-Regge problem, \emph{Acta Math. Sin. (Engl. Ser.)} \textbf{38} (2022), no. 7, 1229-1240. \bibitem{x} X.-C. Xu, N.P. Bondarenko, Local solvability and stability of the generalized inverse Robin-Regge problem with complex coefficients. \emph{Journal of Inverse and Ill-posed Problems} \textbf{31} (2023), 711-721. \bibitem{XF} X.-C. Xu, C.-F. Yang, Inverse resonance problems for the Schr\"{o}dinger operator on the real line with mixed given data, \emph{Lett. Math. Phys.} \textbf{108} (2018), 213-223. \bibitem{Y} V.A.
Yurko, On boundary value problems with a parameter in the boundary conditions, \emph{Soviet J. Contemporary Math. Anal.} \textbf{19} (1984), 62-73. \bibitem{Zo1} M. Zworski, A remark on isopolar potentials. \emph{SIAM J. Math. Anal.} \textbf{32} (2001), 1324-1326. \end{thebibliography} \end{document} Let us now prove a theorem of Ambarzumian type. \begin{theorem} If the eigenvalues of the problem $L(q,\alpha_{0},0,\alpha,0)$ (or $L(q,\alpha_{0},0,-\alpha,0)$) with real $q$ are given by \eqref{o28}, then $q(x)=0$ a.e. on $(0,a)$. \end{theorem} \begin{proof} From \eqref{k10} and the asymptotics of the eigenvalues, we see that $K_{1}(a,a)=0$. Let $y_{1}$ be an eigenfunction corresponding to the eigenvalue $\lambda=0$. Then we have \begin{equation*} -y''_{1}(x)+q(x)y_{1}(x)=0, \quad y'_{1}(0)=y'_{1}(a)=0. \end{equation*} Thus, $y_1$ is an eigenfunction corresponding to the first eigenvalue of the problem $L(q,0,0,0,0)$. Following the same argument as the proof of Theorem 1.4.1 in \cite{GV}, we can prove this theorem. \end{proof} \begin{theorem}\label{th1} $\left(1\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)>0$, then the eigenvalues $\{\lambda_{k}^\pm\}_{k\in\mathbb{Z}}$ of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$, respectively, have the following asymptotic behavior: \begin{equation}\label{k11} \begin{aligned} \lambda_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn} k+\frac{i P_0^\pm}{2a}+\frac{P}{k}+\frac{\beta_{k}}{k}, \quad \lvert k \rvert \to \infty, \end{aligned} \end{equation} where $\{\beta_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$ and where \begin{equation}\label{k10} \begin{aligned} P_0^\pm=\ln{\left(\frac{|\alpha_{0}\pm\alpha+1\pm\alpha\alpha_{0}|}{|\alpha_{0}\pm\alpha-(1\pm\alpha\alpha_{0})|}\right)},\quad P=\frac{\beta_0}{\pi\left(1-\alpha_{0}^2\right)}+\frac{\beta}{\pi\left(1-\alpha^2\right)}+\frac{K_{1}\left(a,a\right)}{\pi}.
\end{aligned} \end{equation} $\left(2\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)<0$, then the eigenvalues $\{\lambda_{k}^\pm\}_{k\in\mathbb{Z}_{0}}$, $\mathbb{Z}_{0}=\mathbb{Z}\setminus\{0\}$, of the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$, respectively, have the following asymptotic behavior: \begin{equation}\label{k12} \begin{aligned} \lambda_{k}^\pm=\frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn} k+\frac{iP_0^\pm}{2a}+\frac{P}{k}+\frac{\beta_{k}}{k}, \quad \lvert k \rvert \to \infty, \end{aligned} \end{equation} where $P_0^\pm$ and $P$ are given in \eqref{k10} and $\{\beta_{k}\}_{k=-\infty,k\neq0}^{\infty}\in l_{2}$.\\ $\left(3\right)$ If $\left(\alpha_{0}-1\right)\left(1-\alpha\right)=0$, then the problems $L\left(q,\alpha_{0},\beta_{0},\pm\alpha,\beta\right)$ may have only finitely many eigenvalues. If there exist infinitely many eigenvalues $\{\lambda_{k}^+\}$, then ${\rm Im}\,\lambda_{k}^+ \to \infty$ as $\lvert k\rvert\to \infty$. \end{theorem} or equivalently, \begin{equation} \notag \lambda_k^0=\begin{cases}& \!\!\!\! \frac{\pi}{a}\left(\lvert k\rvert-1\right){\rm sgn} k+\frac{iP_0^-}{2a},\; k\in\mathbb{Z}_{0},\ \text{if}\ \left(1-\alpha_{0}\right)\left(1-\alpha\right)>0, \\ & \!\! \!\!\!\! \frac{\pi}{a}\left(\lvert k\rvert-\frac{1}{2}\right){\rm sgn} k+\frac{i P_0^-}{2a},\; k\in\mathbb{Z},\ \text{if}\ \left(1-\alpha_{0}\right)\left(1-\alpha\right)<0, \end{cases} \end{equation}
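Outside the manuscript, the closed-form zeros \eqref{o28} can be checked numerically: with $P_0^\pm$ as in \eqref{k10}, the points $\mu_k^\pm$ should annihilate $g_\pm$ from \eqref{xk1}. The following Python sketch does this for the illustrative (hypothetical) choices $a=1$, $\alpha_0=3$, $\alpha=1/2$, which satisfy $\left(\alpha_{0}-1\right)\left(1-\alpha\right)>0$:

```python
import cmath
import math

a = 1.0
alpha0, alpha = 3.0, 0.5   # hypothetical parameters with (alpha0 - 1)(1 - alpha) > 0

def g(lam, sign):
    """g_pm(lambda) from (xk1): lambda*[i(alpha0 +- alpha)cos(lam a) - (1 +- alpha*alpha0)sin(lam a)]."""
    A = alpha0 + sign * alpha
    B = 1.0 + sign * alpha * alpha0
    return lam * (1j * A * cmath.cos(lam * a) - B * cmath.sin(lam * a))

def P0(sign):
    """P_0^pm from (k10)."""
    A = alpha0 + sign * alpha
    B = 1.0 + sign * alpha * alpha0
    return math.log(abs(A + B) / abs(A - B))

def mu(k, sign):
    """mu_k^pm from (o28), case (alpha0 - 1)(1 - alpha) > 0, k != 0."""
    sgn_k = 1 if k > 0 else -1
    return math.pi / a * (abs(k) - 0.5) * sgn_k + 1j * P0(sign) / (2.0 * a)

for sign in (+1, -1):
    assert abs(g(0.0, sign)) < 1e-12             # mu_0^pm = 0 is always a zero
    for k in (-5, -2, -1, 1, 3, 7):
        assert abs(g(mu(k, sign), sign)) < 1e-9  # closed-form zeros annihilate g_pm
```

The check reduces to the identity $e^{2i\mu a}=(B-A)/(A+B)$ with $A=\alpha_0\pm\alpha$, $B=1\pm\alpha\alpha_0$, which is exactly how \eqref{o28} is derived.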
2412.20703v2
http://arxiv.org/abs/2412.20703v2
The Restricted Inverse Optimal Value Problem under Weighted Bottleneck Hamming distance on trees
\documentclass[a4paper,fleqn]{cas-dc} \def\tsc#1{\csdef{#1}{\textsc{\lowercase{#1}}\xspace}} \tsc{WGM} \tsc{QE} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsmath} \usepackage{makeidx} \usepackage{graphicx} \usepackage{amsfonts, amssymb, amsmath, cite,bm} \usepackage{algorithm} \usepackage{algorithmic} \usepackage{color} \usepackage{float} \usepackage{orcidlink} \usepackage{multirow} \usepackage{multicol} \renewcommand{\algorithmicrequire}{ \textbf{Input:}} \renewcommand{\algorithmicensure}{ \textbf{Output:}} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{rem}[thm]{Remark} \newtheorem{alg}[thm]{Algorithm} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}{Definition} \newcommand{\subsubsubsection}[1]{\paragraph{#1}\mbox{}\\} \newcommand{\eq}{\begin{equation}} \newcommand{\nq}{\end{equation}} \newcommand\eqa{\begin{eqnarray}}\newcommand\nqa{\end{eqnarray}} \newcommand{\lbl}[1]{\label{#1}} \newcommand\bess{\begin{eqnarray*}}\newcommand\eess{\end{eqnarray*}} \newcommand{\nn}{\nonumber} \newcommand{\mb}{\mbox{} \hspace} \newcommand{\hb}{\hfill $\Box$} \newcommand{\ALG}{} \newenvironment{breakablealgorithm} { \begin{center} \refstepcounter{algorithm} \hrule height.8pt depth0pt \kern2pt \renewcommand{\caption}[2][\relax]{ {\raggedright\textbf{\ALG Algorithm~\thealgorithm} ##2\par} \ifx\relax##1\relax \addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##2} \else \addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##1} \kern2pt\hrule\kern2pt } }{ \kern2pt\hrule\relax \end{center} } \begin{document} \let\WriteBookmarks\relax \def\floatpagepagefraction{1} \def\textpagefraction{.001} \shorttitle{The Restricted Inverse Optimal Value Problem} \shortauthors{Q. Zhang et~al.} \title [mode = title]{The Restricted Inverse Optimal Value Problem on Shortest Path on Trees under Weighted Bottleneck Hamming distance } \tnotemark[1] \tnotetext[1]{The work of Q. 
Zhang was supported by National Natural Science Foundation of China (1230012046).} \author[1]{Qiao Zhang \orcidlink{0009-0004-4723-2017} }\cormark[1]\ead{[email protected]} \cortext[1]{Corresponding author} \author[2]{Xiao Li \orcidlink{0009-0003-4222-6577}}\ead{[email protected]} \author[3]{Xiucui Guan \orcidlink{0000-0002-2653-1868}}\ead{[email protected]} \address[1]{Aliyun School of Big Data, Changzhou University, Changzhou 213164, Jiangsu, China } \address[2]{Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA } \address[3]{School of Mathematics, Southeast University, Nanjing 210096, Jiangsu, China} \begin{abstract} We consider the Restricted Inverse Optimal Value Problem on Shortest Path on Trees (RIOVSPT$_{BH}$) under weighted bottleneck Hamming distance. The problem aims to adjust the lengths of some edges at minimum total cost under the weighted bottleneck Hamming distance such that the length of the shortest root-leaf path of the tree is lower-bounded by a given value. Additionally, the specified lower bound must correspond to the length of a particular root-leaf path. Through careful analysis of the problem's structural properties, we develop an algorithm with $O(n\log n)$ time complexity to solve (RIOVSPT$_{BH}$). Furthermore, by removing the path-length constraint, we derive the Minimum Cost Shortest Path Interdiction Problem on Trees, for which we present an $O(n\log n)$ time algorithm that operates under weighted bottleneck Hamming distance. Extensive computational experiments demonstrate the efficiency and effectiveness of both algorithms. \end{abstract} \begin{keywords} Inverse optimal value problem \sep Interdiction problem \sep Shortest path \sep Bottleneck Hamming distance \sep Tree \end{keywords} \maketitle \section{Introduction} The inverse shortest path problem \textbf{(ISP)} has been a foundational topic in inverse optimization since its introduction by Burton and Toint \cite{BT-MP-ISP-1992}.
Consider a weighted graph $G=(V,E,w)$ with a set of $s$-$t$ paths $P_j(j=0,1,\ldots,r-1)$, where $V$ and $E$ denote the sets of nodes and edges, respectively. The \textbf{(ISP)} seeks to minimally adjust the weight vector such that a prescribed path $P_0$ becomes the shortest path under some measurement criterion. For a vector $f$ and a path $P_j$, we define $f(P_j):=\sum_{e\in P_{j}}f(e)$. Then the mathematical model for the problem \textbf{(ISP)} \cite{BT-MP-ISP-1992} can be stated as follows. \begin{eqnarray} &\min_{\bar w}&\|\bar w-w\| \nonumber\\ \textbf{(ISP)} &s.t.& \bar w(P_0)\leq \bar w(P_j),j=1,2,\cdots, r-1,\nn\\ &&\bar w(e)\geq 0, e\in E.\nonumber \end{eqnarray} A more challenging variant, the inverse optimal value problem on shortest paths \textbf{(IOVSP)}, has emerged as an important research direction, though it has received comparatively less attention than \textbf{(ISP)}. In \textbf{(IOVSP)}, rather than providing a desired solution a priori, the objective is to achieve a specified optimal value. Given a weighted graph $G=(V,E,w)$, the \textbf{(IOVSP)} problem seeks to adjust the weight vector while minimizing the total adjustment cost, such that the shortest path length equals a prescribed value $D$. Next is the mathematical formulation of \textbf{(IOVSP)}~\cite{CH-N-ISPL}: \begin{eqnarray} &\min_{\tilde w}&\|\tilde w-w\| \nonumber\\ \textbf{(IOVSP)} &\text{s.t.}& \min\limits_{0\leq j\leq r-1} \tilde w(P_j)= D, \nonumber\\ &&\tilde w(e)\geq 0, \quad e\in E.\nonumber \end{eqnarray} Now we consider a restricted inverse optimal value problem on shortest path on trees (denoted by \textbf{(RIOVSPT)}). Let $T=(V, E, w)$ be an edge-weighted tree rooted at $v_1$, where $V:=\{v_1, v_2, \cdots, v_n\}$ and $E:=\{e_2, e_3, \cdots, e_{n}\}$ are the sets of nodes and edges, respectively. Let $Y:=\{t_0,t_1, \cdots$ $,t_{r-1}\}$ be the set of leaves. 
Let $w(e)$, $l(e)$ and $u(e)$ be the original weight, the lower and upper bounds of the adjusted weight for an edge $e\in E$, respectively, where $l(e)\leq w(e)\leq u(e)$. Let $c(e)>0$ be the cost to adjust the weight of edge $e\in E$. Denote by $P_{v_i, v_j}$ the unique path from $v_i$ to $v_j$ on the tree $T$, and write $P_k:=P_{v_1, t_k}$ for the ``root-leaf path'' of the leaf node $t_k$ $(k=0,1,2,\cdots,r-1)$ for brevity. The problem \textbf{(RIOVSPT)} aims at adjusting the weights of some edges to minimize the total cost under some measurement such that the length of the shortest path of the tree is lower-bounded by a given value $D$, with the additional restriction that the length of the given path $P_0$ equals $D$. The mathematical model can be stated below. \begin{eqnarray} &\min_{\tilde w}&\|\tilde w-w\| \nonumber\\ \textbf{ (RIOVSPT) }&s.t.&\tilde w(P_k)\geq D, t_k\in Y, \label{eq-RIOVSPT}\\ && \tilde w(P_0)=D,\nonumber\\ && l(e)\leq \tilde w(e)\leq u(e), e\in E.\nonumber \end{eqnarray} The feasible region of \textbf{(RIOVSPT)} can be characterized as the intersection of the feasible regions of \textbf{(ISP)} and \textbf{(IOVSP)} when restricted to tree structures. This structural relationship provides important insights into the problem's properties and solution space. Zhang et al.\cite{ZhangQ22} proposed an $O(n^2)$ time algorithm to solve the problem \textbf{(RIOVSPT$_1$)} under weighted $l_1$ norm. Meanwhile, they also solved the problem \textbf{(RIOVSPT$_1$)} under unit $l_1$ norm in linear time. In a seminal contribution to the field, Zhang et al.~\cite{ZGP-LBIOVMST-JGO,ZGZ-IOVMST-OL} introduced the inverse optimal value problem on minimum spanning trees \textbf{(RIOVMST)}. This problem incorporates dual constraints: ensuring a specified spanning tree $T^0$ maintains its minimum spanning tree status while simultaneously achieving a prescribed weight value $K$.
Their research under the unit $l_{\infty}$ norm for measuring adjustment costs yielded two strongly polynomial-time algorithms, both achieving $O(mn)$ time complexity---one for the unbounded case~\cite{ZGZ-IOVMST-OL} and another for the lower-bounded case~\cite{ZGP-LBIOVMST-JGO}, where $|V|:=n, |E|:=m$. Building upon this foundation, Jia et al.~\cite{Jia2023} made several significant contributions. For the $l_\infty$ norm variant, they developed an efficient binary search method achieving $O(m^2 \log n)$ time complexity, while also proposing a more efficient $O(mn)$ algorithm specifically for the unit $l_\infty$ norm case. Their subsequent work~\cite{Jia2024} addressed the more challenging weighted $l_1$ norm variant, presenting an algorithm with time complexity $O(m^2 n^2 \log n \log(n c_{\max}))$, where $c_{\max}$ denotes the maximum element in the cost vector $c$. Complementing these advances, Wang et al.~\cite{WGZ-CIOVMST-JOCO} introduced novel algorithmic techniques for the bounded \textbf{RIOVMST} problem under bottleneck Hamming distance, achieving an $O(mn \log m)$ time complexity. The practical significance of \textbf{(RIOVSPT)} is particularly evident in telecommunication bandwidth pricing, as highlighted by Paleologo and Takriti~\cite{PT-ANV-BT}. Consider a bandwidth provider pricing city-to-city links on a specialized bandwidth tree network (excluding node $v_0$) in \textbf{Figure~\ref{Fig1}}\cite{ZhangQ22}. Given an observed traded price $D$ for a bandwidth link between cities $v_1$ and $v_0$, the sum of prices along the cheapest path between these cities must equal $D$ to prevent arbitrage opportunities. Without this equilibrium, traders could exploit price differentials by purchasing the cheaper option (either the direct traded link or the sequence of links along the cheapest path) and selling the more expensive alternative.
In this context, suppose path $P_{11}$ through node $v_{11}$ represents the optimal route for emergency response scenarios, while alternative paths may be suboptimal due to construction budget constraints. To ensure reliable emergency communication between $v_1$ and $v_0$, the total price along path $P_{11}$ must equal $D$. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{Fig-background.png} \caption{A special bandwidth network between the cities $v_1$ and $v_0$ with prices on the links \cite{ZhangQ22}.} \label{Fig1} \end{figure} Zhang, Guan et al. \cite{ZhangQ20, ZhangQ21} studied the minimum cost shortest path interdiction problem by upgrading edges on trees (\textbf{MCSPIT}). In their work, they did not consider the restriction on the path \( P_0 \) as outlined in problem (\textbf{RIOVSPT}). The \textbf{(MCSPIT)} problem seeks to minimize the total cost of upgrading selected edges under a given measurement, ensuring that the length of the shortest path on the tree is bounded below by a specified value \( D \). The mathematical model for this problem can be formulated as follows. \begin{eqnarray} &\min_{\bar w}& \|\bar w-w\|\nonumber\\ \textbf{(MCSPIT)}&s.t.&\bar w(P_k)\geq D, t_k\in Y, \label{eq-MCSPIT}\\ && w(e)\leq \bar w(e)\leq u(e), e\in E.\nonumber \end{eqnarray} It is obvious that the optimal solution of the problem (\textbf{RIOVSPT}) is a feasible solution of the problem \textbf{(MCSPIT)}. Under the weighted $ l_1 $ norm, Zhang, Guan et al.\cite{ZhangQ20} proposed a primal-dual algorithm in $ O(n^2) $ time for the problem \textbf{(MCSPIT$_1$)}. Under the weighted Hamming distance, they\cite{ZhangQ21} demonstrated that the problem is $\mathcal{NP}$-hard. For the unit sum-Hamming distance, they developed an $ O(n^4\log n) $ time algorithm for the problem \textbf{(MCSPIT$_{\bm{USH}}$)} by employing dynamic programming. Subsequently, with some local optimization, Yi et al.
\cite{YiL22} improved upon this algorithm, achieving a reduced time complexity of $ O(n^3 \log n) $ for this problem. When the weighted bottleneck Hamming distance is applied to $\|\cdot\|$ in models (\ref{eq-RIOVSPT})-(\ref{eq-MCSPIT}), the problems \textbf{(RIOVSPT$\bm{_{BH}}$)} and \textbf{(MCSPIT$\bm{_{BH}}$)} can be formulated as follows. \begin{eqnarray} &\min_{\tilde w}& C(\tilde w):=\max_{e\in E}c(e)H(\tilde w(e),w(e)) \nonumber\\ \textbf{(RIOVSPT$\bm{_{BH}}$)} &s.t.&\tilde w(P_k)\geq D, t_k\in Y, \nonumber\\ && \tilde w(P_0)=D,\label{eq-RIOVSPTBH}\\ && l(e)\leq \tilde w(e)\leq u(e), e\in E.\nonumber \end{eqnarray} \begin{eqnarray} &\min_{\bar w}& M(\bar w):=\max_{e\in E}c(e)H(\bar w(e),w(e)) \nonumber\\ \textbf{(MCSPIT$\bm{_{BH}}$)}&s.t.&\bar w(P_k)\geq D, t_k\in Y, \label{eq-MCSPITBH}\\ && w(e)\leq \bar w(e)\leq u(e), e\in E.\nonumber \end{eqnarray} where the Hamming distance $H(\bar w(e),w(e))$ between $\bar w(e)$ and $w(e)$ is defined as $$H(\bar w(e),w(e))=\left\{\begin{array}{ll}1,&\bar w(e)\neq w(e),\\0,&\bar w(e)= w(e).\end{array}\right.$$ This paper is structured as follows: Section 2 presents our solution methodology for the Restricted Inverse Optimal Value Problem on Trees under weighted bottleneck Hamming distance (\textbf{RIOVSPT}$_{BH}$). Section 3 develops an algorithmic approach for the Minimum Cost Shortest Path Interdiction Problem (\textbf{MCSPIT}$_{BH}$). Section 4 evaluates the computational performance of both algorithms through numerical experiments. Finally, Section 5 concludes the paper with a summary of our findings and directions for future research. \section{Solving the problem (RIOVSPT$_{BH}$)} For the problem (RIOVSPT$_{BH}$) in the model (\ref{eq-RIOVSPTBH}), the optimal value must be one of the costs of edges. Notice that once an edge $e$ is adjusted, no matter how or by what amount, the relevant Hamming distance of $e$ is 1 and its adjusting cost is $c(e)$.
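As a concrete illustration of this observation, the bottleneck Hamming objective of a candidate weight vector can be evaluated in a single pass over the edges: only adjusted edges contribute, each at its full cost $c(e)$. A minimal Python sketch, with toy edge data that are purely illustrative and not taken from the paper:

```python
def bottleneck_hamming_cost(w_new, w, c):
    """max_{e in E} c(e) * H(w_new(e), w(e)): the Hamming distance H
    is 1 iff edge e was adjusted, so only adjusted edges contribute."""
    return max((c[e] for e in w if w_new[e] != w[e]), default=0)

# Toy instance (hypothetical edges): e1 and e3 are adjusted.
w     = {"e1": 7, "e2": 12, "e3": 8}
w_new = {"e1": 9, "e2": 12, "e3": 4}
c     = {"e1": 7, "e2": 2,  "e3": 4}

print(bottleneck_hamming_cost(w_new, w, c))  # 7: the costlier adjusted edge
```

Note that the objective is independent of the adjustment amounts, which is exactly why the optimal value must coincide with one of the edge costs.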
Hence, suppose $\tilde c$ is an objective value. Then any edge with cost no more than $\tilde c$ can be adjusted without exceeding the objective value. For convenience, we first sort the edges $e$ by their values of $c(e)$ in non-decreasing order and remove any duplicate values to produce the sequence $ \{\alpha_{n^*}\} $: \begin{equation} \label{eq:Sequence} c(e_{\alpha_1}) < c(e_{\alpha_2}) < \cdots < c(e_{\alpha_{n^*}}). \end{equation} Let $C_k:=c(e_{\alpha_k})$, $k=1,2,\cdots,{n^*}$. Then for any cost $C_k$, the feasibility of the problem \textbf{(RIOVSPT$\bm{_{BH}}$)} can be judged by the following theorem, where $E_k := \{e \in E \mid c(e) \leq C_k\}$. Let $\mathcal{C} := \{C_k \mid k=1,2,\cdots,{n^*}\}$ be the set of costs of edges in the sequence $ \{\alpha_{n^*}\} $. \begin{thm}\label{thm-fea=RIOVSPTBH} The problem \textbf{(RIOVSPT$\bm{_{BH}}$)} has a feasible solution with objective value $C_k \in \mathcal{C}$ if and only if: \begin{enumerate} \item $l(P_0 \cap E_k) + w(P_0 \setminus E_k) \leq D \leq u(P_i \cap E_k) + w(P_i \setminus E_k)$ for all $t_i \in Y$, and \item $l(P_0 \cap E_k) + w(P_0 \setminus E_k) \leq l(P_i \cap P_0 \cap E_k) \\ + u(P_i \setminus P_0 \cap E_k) + w(P_i \setminus E_k)$ for all $t_i \in Y$. \end{enumerate} \end{thm} \begin{proof} \textbf{Necessity.} If $\tilde{w}$ is a feasible solution of the problem \textbf{(RIOVSPT$\bm{_{BH}}$)}, then we have $l(e) \leq \tilde{w}(e) \leq u(e)$ for all $e \in E$. Moreover, $$ u(P_i \cap E_k) + w(P_i \setminus E_k) \geq \tilde{w}(P_i) \geq D, \quad \forall t_i \in Y \setminus \{t_0\}, $$ and $$\begin{aligned} u(P_0 \cap E_k) + w(P_0 \setminus E_k) &\geq \tilde{w}(P_0) = D \\&\geq l(P_0 \cap E_k) + w(P_0 \setminus E_k). \end{aligned} $$ Hence, condition 1 holds.
Next, suppose that $$\begin{aligned} &l(P_0 \cap E_k) + w(P_0 \setminus E_k)\\& > l(P_{t^*} \cap P_0 \cap E_k) + u(P_{t^*} \setminus P_0 \cap E_k) + w(P_{t^*} \setminus E_k),\end{aligned} $$ where $$ t^* := \arg\min_{t_i \in Y} \Big(l(P_i \cap P_0 \cap E_k) + u(P_i \setminus P_0 \cap E_k) + w(P_i \setminus E_k)\Big). $$ \textbf{Case (i):} If $P_{t^*} \cap P_0 = \emptyset$, then $$\begin{aligned} D &\geq l(P_0 \cap E_k) + w(P_0 \setminus E_k) \\&> u(P_{t^*} \cap E_k) + w(P_{t^*} \setminus E_k) \geq \tilde{w}(P_{t^*}),\end{aligned} $$ which contradicts the feasibility of $\tilde{w}$. \textbf{Case (ii):} If $P_{t^*} \cap P_0 \neq \emptyset$, we have \begin{eqnarray*} && u(P_{t^*} \setminus P_0 \cap E_k) + w(P_{t^*} \setminus P_0 \setminus E_k) < l(P_0 \cap E_k) \\&&+ w(P_0 \setminus E_k) - l(P_{t^*} \cap P_0 \cap E_k) - w(P_{t^*} \cap P_0 \setminus E_k)\\&=&l(P_0 \setminus P_{t^*} \cap E_k) + w(P_0 \setminus P_{t^*} \setminus E_k), \end{eqnarray*} and thus $$\begin{aligned} D &\leq \tilde{w}(P_{t^*}) = \tilde{w}(P_{t^*} \cap P_0) + \tilde{w}(P_{t^*} \setminus P_0) \\ &\leq \tilde{w}(P_{t^*} \cap P_0) + u(P_{t^*} \setminus P_0 \cap E_k) + w(P_{t^*} \setminus P_0 \setminus E_k) \\ &<\tilde{w}(P_{t^*} \cap P_0) + l(P_0 \setminus P_{t^*} \cap E_k) + w(P_0 \setminus P_{t^*} \setminus E_k) \\ &\leq \tilde{w}(P_0),\end{aligned} $$ which contradicts $\tilde{w}(P_0) = D$. Hence, condition 2 holds. \textbf{Sufficiency.} Suppose conditions 1-2 hold. Then we have \begin{eqnarray*} && l(P_0 \cap E_k) + w(P_0 \setminus E_k) \leq D\\&\leq&\min_{t_i \in Y}\big(u(P_i \cap E_k) + w(P_i \setminus E_k)\big) \\&\leq& u(P_0 \cap E_k) + w(P_0 \setminus E_k).
\end{eqnarray*} Let $ e_j = (v_i, v_j) \in P_0 \cap E_k $ be the first edge on $ P_0 \cap E_k $ from $ v_1 $ satisfying $$ \begin{aligned} & u(P_{v_1, v_i} \cap E_k) + w(P_{v_1, v_i} \setminus E_k) + l(P_{v_i, t_0} \cap E_k)\\ &+ w(P_{v_i, t_0} \setminus E_k) \leq D\leq u(P_{v_1, v_j} \cap E_k) + w(P_{v_1, v_j} \setminus E_k)\\& + l(P_{v_j, t_0} \cap E_k) + w(P_{v_j, t_0} \setminus E_k). \end{aligned} $$ Notice that the special cases when $ v_i = v_1 $ with $$\begin{aligned} &~~~~~u(P_{v_1, v_i} \cap E_k) + w(P_{v_1, v_i} \setminus E_k) \\&= u(P_{v_1, v_1} \cap E_k) + w(P_{v_1, v_1} \setminus E_k) = 0\end{aligned} $$ and when $ v_j = t_0 $ with $$\begin{aligned} &~~~~~l(P_{v_j, t_0} \cap E_k) + w(P_{v_j, t_0} \setminus E_k) \\&= l(P_{t_0, t_0} \cap E_k) + w(P_{t_0, t_0} \setminus E_k) = 0\end{aligned} $$ have been considered. Then we can construct \begin{equation} \label{fs-RIOVSPTBH} \tilde{w}(e) := \begin{cases} u(e), & e \in E \setminus P_{v_i, t_0} \cap E_k, \\ l(e), & e \in P_{v_j, t_0} \cap E_k, \\ \delta, & e = e_j = (v_i, v_j), \\ w(e), & \text{otherwise}, \end{cases} \end{equation} where $$\begin{aligned} \delta = {}& D - u(P_{v_1, v_i} \cap E_k) - w(P_{v_1, v_i} \setminus E_k) \\ & - l(P_{v_j, t_0} \cap E_k) - w(P_{v_j, t_0} \setminus E_k). \end{aligned}$$ Next, we show that $ \tilde{w} $ is a feasible solution of the problem \textbf{(RIOVSPT$\bm{_{BH}}$)}. \textbf{ (i)} For $ e \in E \setminus \{e_j\} $, we have $l(e) \leq \tilde{w}(e) \leq u(e).
$ For $ e_j = (v_i, v_j) \in E_k $, we have $$ \begin{aligned} & u(P_{v_1, v_i} \cap E_k) + w(P_{v_1, v_i} \setminus E_k) + l(v_i, v_j) + l(P_{v_j, t_0} \cap E_k)\\&~~~~ + w(P_{v_j, t_0} \setminus E_k) = u(P_{v_1, v_i} \cap E_k) + w(P_{v_1, v_i} \setminus E_k) \\ &~~~~+ l(P_{v_i, t_0} \cap E_k) + w(P_{v_i, t_0} \setminus E_k) \leq D \leq u(P_{v_1, v_j} \cap E_k) \\&~~~~+ w(P_{v_1, v_j} \setminus E_k) +l(P_{v_j, t_0} \cap E_k) + w(P_{v_j, t_0} \setminus E_k) \\& = u(P_{v_1, v_i} \cap E_k)+ w(P_{v_1, v_i} \setminus E_k) + u(v_i, v_j) \\&~~~~+ l(P_{v_j, t_0} \cap E_k) + w(P_{v_j, t_0} \setminus E_k). \end{aligned} $$ Thus, we have $ l(e_j) \leq \delta = \tilde{w}(e_j) \leq u(e_j). $ \textbf{ (ii)} Let $Y^1:=\{t'\in Y|E(P_{t'}\cap P_0)=\emptyset\}$ be the set of the leaf nodes whose root-leaf paths have no intersecting edges with the path $P_0$. For ${t'}\in Y^1$, $\tilde w(P_{t'})= u(P_{t'}\cap E_k)+w(P_{t'}\backslash E_k)\geq D$. $$\begin{aligned} &\tilde{w}(P_0)= \tilde w(P_{v_1,v_i})+ \tilde w(P_{v_j,t_0})+ \tilde w(v_i,v_j)\\ &=u(P_{v_1,v_i}\cap E_k)+w(P_{v_1,v_i}\backslash E_k) +l(P_{v_j,t_0}\cap E_k)\\&~~~~+w(P_{v_j,t_0}\backslash E_k)+\Big(D-u(P_{v_1,v_i}\cap E_k)-w(P_{v_1,v_i}\backslash E_k)\\&~~~~-l(P_{v_j,t_0}\cap E_k)-w(P_{v_j,t_0}\backslash E_k)\Big)=D. \end{aligned}$$ For ${t'}\in Y\backslash (Y^1\cup \{t_0\})$, let $v^*_t\in P_{t'}\cap P_0$ be the last intersection node on $P_{t'}\cap P_0$.
\textbf{(a)} If $v^*_t\in P_{v_1,v_i}$, then $\tilde w(P_{t'})=u(P_{t'}\cap E_k)+w(P_{t'}\backslash E_k)\geq D.$ \textbf{(b)} If $v^*_t=v_j$, we have $$ \begin{aligned} &l(P_{v_1,v_j}\cap E_k )+w(P_{v_1,v_j}\backslash E_k)+l(P_{v_j,t_0}\cap E_k)\\&~~~~+w(P_{v_j,t_0}\backslash E_k)=l(P_0\cap E_k)+w(P_0\backslash E_k)\\&\leq \min_{t'\in Y}(l(P_{t'}\cap P_0\cap E_k)+u(P_{t'}\backslash P_0\cap E_k)+w(P_{t'}\backslash E_k))\\&\leq l(P_{v_1,v^*_t}\cap E_k)+u(P_{v^*_t,{t'}}\cap E_k)+w(P_{t'}\backslash E_k)\\&=l(P_{v_1,v_j}\cap E_k)+w(P_{v_1,v_j}\backslash E_k)+u(P_{v_j,{t'}}\cap E_k)\\&~~~~+w(P_{v_j,{t'}}\backslash E_k) \end{aligned}$$ and thus $$l(P_{v_j,t_0}\cap E_k)+w(P_{v_j,t_0}\backslash E_k)\leq u(P_{v_j,{t'}}\cap E_k)+w(P_{v_j,{t'}}\backslash E_k).$$ Then $$\begin{aligned} &\tilde w(P_{t'})= \tilde w(P_{v_1,v_i})+ \tilde w(P_{v_j,{t'}})+\tilde w(v_i,v_j)\\&=u(P_{v_1,v_i}\cap E_k)+w(P_{v_1,v_i}\backslash E_k)+u(P_{v_j,{t'}}\cap E_k)\\&~~~~+w(P_{v_j,{t'}}\backslash E_k)+\Big(D-u(P_{v_1,v_i}\cap E_k)-w(P_{v_1,v_i}\backslash E_k)\\&~~~~-l(P_{v_j,t_0}\cap E_k)-w(P_{v_j,t_0}\backslash E_k)\Big)\geq D.
\end{aligned}$$ \textbf{(c)} If $v^*_t\in P_{v_j,t_0}\backslash \{v_j\}$, we have $$\begin{aligned} &l(P_{v_1,v_j}\cap E_k)+w(P_{v_1,v_j}\backslash E_k)+l(P_{v_j,t_0}\cap E_k)\\&~~~~+w(P_{v_j,t_0}\backslash E_k)=l(P_0\cap E_k)+w(P_0\backslash E_k)\\&\leq \min_{t'\in Y}(l(P_{t'}\cap P_0\cap E_k)+u(P_{t'}\backslash P_0\cap E_k)+w(P_{t'}\backslash E_k))\\&\leq l(P_{v_1,v_j}\cap E_k)+w(P_{v_1,v_j}\backslash E_k)+l(P_{v_j,v^*_t}\cap E_k)\\&~~~~+u(P_{v^*_t,{t'}}\cap E_k)+w(P_{v_j,{t'}}\backslash E_k) \end{aligned} $$ and thus $$l(P_{v_j,t_0}\cap E_k)+w(P_{v_j,t_0}\backslash E_k)\leq l(P_{v_j,v^*_t}\cap E_k)+u(P_{v^*_t,{t'}}\cap E_k)+w(P_{v_j,{t'}}\backslash E_k).$$ Then we have $$\begin{aligned} &\tilde w(P_{t'})= \tilde w(P_{v_1,v_i})+\tilde w(P_{v_j,v^*_t})+ \tilde w(P_{v^*_t,{t'}})+\tilde w(v_i,v_j)\\ &=u(P_{v_1,v_i}\cap E_k)+w(P_{v_1,v_i}\backslash E_k)+l(P_{v_j,v^*_t}\cap E_k)\\&~~~~+u(P_{v^*_t,{t'}}\cap E_k)+w(P_{v_j,{t'}}\backslash E_k)+\Big(D-u(P_{v_1,v_i}\cap E_k)\\&~~~~-w(P_{v_1,v_i}\backslash E_k)-l(P_{v_j,t_0}\cap E_k)-w(P_{v_j,t_0}\backslash E_k)\Big)\geq D. \end{aligned}$$ Therefore, the problem \textbf{(RIOVSPT$\bm{_{BH}}$)} is feasible. \end{proof} For convenience, we introduce the following definition. \begin{defn} If a cost $\tilde c\in \mathcal{C}$ satisfies the conditions 1-2 in \textbf{Theorem \ref{thm-fea=RIOVSPTBH}}, we call $\tilde c$ feasible. Otherwise, we call it infeasible. \end{defn} Then we can solve the problem \textbf{(RIOVSPT$\bm{_{BH}}$)} by finding the minimum feasible cost in the sequence $ \{\alpha_{n^*}\} $. The following lemma is obvious. \begin{lem} Suppose $\tilde c,\hat c\in \mathcal{C}$ and $\hat c\geq \tilde c $. If $\tilde c$ is feasible, so is $\hat c$. If $\hat c$ is infeasible, so is $\tilde c$. \end{lem} If the maximum cost $C_{n^*}$ in $\mathcal{C}$ is infeasible, we have the following theorem. \begin{thm} If $C_{n^*}$ is infeasible, then the problem \textbf{(RIOVSPT$\bm{_{BH}}$)} is infeasible.
\end{thm} Hence, for a feasible problem, we can search for the minimum feasible cost by a binary search method over the costs in $\mathcal{C}$ sorted in ascending order. Then a relevant optimal solution can be constructed by formula (\ref{fs-RIOVSPTBH}). Based on the analysis above, we propose \textbf{Algorithm \ref{alg-RIOVSPTBH}} to solve the problem \textbf{(RIOVSPT$\bm{_{BH}}$)}, followed by the corresponding time complexity analysis. \begin{breakablealgorithm} \renewcommand{\thealgorithm}{1} \caption{$[\tilde w, C(\tilde w)]$:=\textbf{RIOVSPT}$_{BH}(T, w,l, u, c, D,t_0)$ } \label{alg-RIOVSPTBH} \begin{algorithmic}[1] \REQUIRE The tree $T(V,E)$ rooted at a root node $v_1$ with the set $Y$ of leaf nodes; the cost vector $c$, three weight vectors $w,l,u$, the given leaf node $t_0$ and a given value $D$. \ENSURE An optimal solution $\tilde w$ and its relevant objective value $C(\tilde w)$. \IF {$\max_{e\in E}c(e)$ is infeasible} \RETURN ``The problem is infeasible!'' \ELSE \STATE Sort the values of $c(e)$ in non-decreasing order and remove any duplicates to produce the sequence $ \{\alpha_{n^*}\} $: \begin{equation*} c(e_{\alpha_1}) < c(e_{\alpha_2}) < \cdots < c(e_{\alpha_{n^*}}). \end{equation*} \STATE Let $C_j:=c(e_{\alpha_j})$, $j=1,2,\cdots,{n^*}$. \STATE Initialization: $a := 1, b := n^*$, $k^* := 0, k := \lfloor\frac{a+b}2\rfloor$. \WHILE{$k^* =0$} \IF {$a=b$ or $b=a+1$} \STATE $k^* := b$, \textbf{break.} \ENDIF \IF {$C_k$ is feasible} \STATE Update $b := k, k := \lfloor\frac{a+b}2\rfloor$. \ELSE \STATE Update $a := k, k := \lfloor\frac{a+b}2\rfloor$. \ENDIF \ENDWHILE \RETURN $\tilde w$ as formula (\ref{fs-RIOVSPTBH}), $C(\tilde w):=C_{k^*}$. \ENDIF \end{algorithmic} \end{breakablealgorithm} \begin{thm} The problem \textbf{(RIOVSPT$\bm{_{BH}}$)} can be solved by \textbf{Algorithm 1} in $O(n\log n)$ time. \end{thm} \begin{proof} The main calculations are in \textbf{Lines 4-18} in \textbf{Algorithm \ref{alg-RIOVSPTBH}}.
Sorting in \textbf{Line 4} costs $O(n\log n)$ time and there are $O(\log n )$ iterations of the binary search method in \textbf{Lines 7-16}, where it takes $O(n)$ time to check the feasibility of $C_k$ in \textbf{Line 11} in each iteration. Hence, the theorem holds. \end{proof} \section{Solve the problem \textbf{(MCSPIT$\bm{_{BH}}$)}} We next solve the problem \textbf{(MCSPIT$\bm{_{BH}}$)} based on solving the problem \textbf{(MSPIT$\bm{_{BH}}$)}, which aims to maximize the shortest root-leaf distance of a tree while ensuring that the cost of upgrading selected edges under weighted bottleneck Hamming distance does not exceed a given value $M$. The mathematical model for this problem can be formulated as follows. \begin{eqnarray} &\max_{\hat w}& D(\hat w):=\min_{t_k\in Y}\hat w(P_k) \label{eq-MSPITBH}\\ \textbf{(MSPIT$\bm{_{BH}}$)}&s.t.&\max_{e\in E}c(e)H(\hat w(e),w(e)) \leq M, \nonumber\\ && w(e)\leq \hat w(e)\leq u(e), e\in E.\nonumber \end{eqnarray} Here is a property of the optimal solutions of the two problems. \begin{thm}[\cite{ZhangQ21}, Theorem 4] In models (\ref{eq-MCSPITBH}) and (\ref{eq-MSPITBH}), for $e\in E$, if the edge length vector $\tilde w$ is optimal, so is the edge length vector $w^*$ defined as follows. $$w^*(e)=\left\{ \begin{array}{ll} w(e), &\text{if }\tilde {w}(e)=w(e),\\ u(e), &\text{if }\tilde {w}(e)\neq{w(e)}. \end{array}\right.$$ \end{thm} Hence, in the problems \textbf{(MSPIT$\bm{_{BH}}$)} and \textbf{(MCSPIT$\bm{_{BH}}$)}, if an edge is upgraded, then we can upgrade it to its upper bound. \begin{thm} The solution \begin{eqnarray}\hat w(e)=\left\{\begin{array}{ll}u(e),&c(e)\leq M,\\w(e),&c(e)>M,\end{array}\right.\label{os-MSPITBH}\end{eqnarray} is an optimal solution of the problem \textbf{(MSPIT$\bm{_{BH}}$)}. \end{thm} \begin{proof} \textbf{(1)} For any edge $ e \in E $ with $ c(e) \leq M $, we have $ w(e) \leq \hat{w}(e) = u(e) $, and $ c(e)H(\hat{w}(e), w(e)) = c(e) \leq M $.
For any edge $ e \in E $ with $ c(e) > M $, we have $ \hat{w}(e) = w(e) \leq u(e) $ and $ c(e)H(\hat{w}(e), w(e)) = 0 \leq M $. Therefore, $$ \max_{e \in E} c(e)H(\hat{w}(e), w(e)) \leq M. $$ This implies that $ \hat{w} $ is a feasible solution of the problem \textbf{(MSPIT$\bm{_{BH}}$)}. \textbf{(2)} Suppose $ \hat{w} $ is not optimal, and let $ w^* $ be an optimal solution. Then, $ D(w^*) = w^*(P_{t^*}) > D(\hat{w}) = \hat{w}(P_{\hat {t}}). $ \textbf{Case (i):} If $ t^* = \hat{t} $, then there exists an edge $ \hat{e} \in P_{t^*} $ such that $ u(\hat{e}) \geq w^*(\hat{e}) > \hat{w}(\hat{e}) = w(\hat{e}) $ as $ w^*(P_{t^*}) > \hat{w}(P_{t^*}) $. In this case, $ c(\hat{e})H(w^*(\hat{e}), w(\hat{e})) = c(\hat{e}) > M, $ which contradicts the feasibility of $ w^* $. \textbf{Case (ii):} If $ t^* \neq \hat{t} $, then $ w^*(P_{\hat{t}}) \geq w^*(P_{t^*}) > \hat{w}(P_{\hat{t}}) $. Consequently, there exists an edge $ \hat{e} \in P_{\hat{t}} $ such that $ u(\hat{e}) \geq w^*(\hat{e}) > \hat{w}(\hat{e}) = w(\hat{e}) $. Similarly, $$ c(\hat{e})H(w^*(\hat{e}), w(\hat{e})) = c(\hat{e}) > M, $$ which again contradicts the feasibility of $ w^* $. Thus, in both cases we reach a contradiction, so $ \hat{w} $ is an optimal solution of the problem \textbf{(MSPIT$\bm{_{BH}}$)}. \end{proof} \begin{thm} The problem \textbf{(MSPIT$\bm{_{BH}}$)} can be solved by formula (\ref{os-MSPITBH}) in $ O(n) $ time. \end{thm} We next address the problem \textbf{(MCSPIT$\bm{_{BH}}$)}. From model (\ref{eq-MCSPITBH}), the objective value is one of the costs of edges. For any cost $\tilde c\in \mathcal{C} := \{C_k \mid k=1,2,\cdots,{n^*}\}$, we consider the length of the shortest path based on this cost $\tilde c$, which is just the problem \textbf{(MSPIT$\bm{_{BH}}$)} with $M=\tilde c.$ \begin{lem} If $\hat c,\tilde c\in \mathcal{C}$ and $\hat c>\tilde c$, we can obtain $ \hat{w}$ with $\hat c$ and $\tilde{w}$ with $\tilde c$ from formula (\ref{os-MSPITBH}).
Then $D(\hat{w})\geq D(\tilde{w})$, where $D(\hat{w}):=\min_{t_k\in Y}\hat w(P_k)$. \end{lem} By a binary search method based on the sequence $\{\alpha_{n^*}\}$ in (\ref{eq:Sequence}), we can determine the minimum cost in $\mathcal{C}$ whose corresponding shortest path length is no less than $D$. Then the optimal objective value of the problem \textbf{(MCSPIT$\bm{_{BH}}$)} can be obtained, as can its optimal solution. The process begins with the ascending sequence $\{\alpha_{n^*}\} $ in (\ref{eq:Sequence}). For any $ C_j= c(e_{\alpha_j}) $, we can obtain an optimal solution $ \hat{w}^j $ of the problem \textbf{(MSPIT$\bm{_{BH}}$)} by replacing $M$ with $C_j$ in (\ref{os-MSPITBH}), as well as its corresponding shortest root-leaf distance $D_j:=D(\hat{w}^j ):=\min_{t_i\in Y}\hat{w}^j(P_i)$. Since $\{ C_j \}$ is increasing, the output $ D_j $ is obviously non-decreasing. Specifically, for $ C_{n^*} $ and $ C_1 $, from formula (\ref{os-MSPITBH}), we can achieve the relevant optimal solutions $\hat{w}^{n^*}$ with its shortest root-leaf distance $ D_{n^*}:=D(\hat{w}^{n^*})$ and $\hat{w}^1$ with $D_1:=D(\hat{w}^1)$, respectively. Based on the above analysis, we derive the following straightforward lemmas: \begin{lem} The optimal objective value of the problem \textbf{(MCSPIT$\bm{_{BH}}$)} lies within the range $ [0, C_{n^*}] $. \end{lem} \begin{lem}\label{lem-MCSPITinfty-feasible} If $ \bar {w} $ is an optimal solution to the problem \textbf{(MCSPIT$\bm{_{BH}}$)}, then the relevant shortest root-leaf distance $ D(\bar{w}) $ satisfies $ D(w) \leq D(\bar{w}) \leq D_{n^*} $. \end{lem} By \textbf{Lemma \ref{lem-MCSPITinfty-feasible}}, the problem \textbf{(MCSPIT$\bm{_{BH}}$)} is infeasible for $D > D_{n^*}$, as all the weights of edges have been upgraded to their maximum capacity. Conversely, for $D \le D(w)$, the optimal solution is $w$ with an optimal value $0$, since the weight vector $w$ already satisfies the feasibility conditions.
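The closed form (\ref{os-MSPITBH}) together with the binary search over the sorted distinct costs described above can be sketched in Python as follows. The tree is represented simply by its root-leaf paths as edge lists, and all instance data are hypothetical, chosen only to exercise the three cases (no upgrade needed, feasible, infeasible):

```python
def upgrade(M, w, u, c):
    """Formula (os-MSPITBH): upgrade exactly the edges whose cost fits budget M."""
    return {e: (u[e] if c[e] <= M else w[e]) for e in w}

def shortest_leaf_dist(weights, paths):
    """D(w) := min over leaves t_k of the root-leaf path length w(P_k)."""
    return min(sum(weights[e] for e in P) for P in paths)

def mcspit_bh(w, u, c, paths, D):
    """Minimum bottleneck-Hamming cost so that every root-leaf path is >= D;
    returns 0 if no upgrade is needed, None if the problem is infeasible."""
    if D <= shortest_leaf_dist(w, paths):
        return 0
    costs = sorted(set(c.values()))          # C_1 < C_2 < ... < C_{n*}
    if shortest_leaf_dist(upgrade(costs[-1], w, u, c), paths) < D:
        return None                          # infeasible even with all upgrades
    lo, hi = 0, len(costs) - 1               # binary search: minimal feasible cost
    while lo < hi:
        mid = (lo + hi) // 2
        if shortest_leaf_dist(upgrade(costs[mid], w, u, c), paths) >= D:
            hi = mid
        else:
            lo = mid + 1
    return costs[lo]

# Toy tree with two root-leaf paths (edge names hypothetical).
w = {"e1": 3, "e2": 4, "e3": 5}
u = {"e1": 6, "e2": 9, "e3": 7}
c = {"e1": 2, "e2": 5, "e3": 1}
paths = [["e1", "e2"], ["e1", "e3"]]
print(mcspit_bh(w, u, c, paths, D=10))  # 2: upgrading e1 and e3 suffices
```

Since $D_j$ is non-decreasing in $C_j$, the binary search is valid; each probe solves one \textbf{(MSPIT$\bm{_{BH}}$)} instance in linear time, matching the $O(n\log n)$ bound.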
Consequently, we can state the following theorems. \begin{thm} If $D> D_{n^*}$, the problem \textbf{(MCSPIT$\bm{_{BH}}$)} is infeasible. \end{thm} \begin{thm} If $D \le D(w)$, then $w$ is an optimal solution of the problem \textbf{(MCSPIT$\bm{_{BH}}$)} and the optimal value is $0$. \end{thm} Then we consider the problem \textbf{(MCSPIT$\bm{_{BH}}$)} when $D(w)<D\leq D_{n^*}$. To be specific, based on the ascending sequence in \eqref{eq:Sequence} and a binary search method, we can determine the index $k$ satisfying $D_k < D \leq D_{k+1}$, and then the corresponding $C_{k+1}$ is just the minimum cost to satisfy the constraints of the problem \textbf{(MCSPIT$\bm{_{BH}}$)}. Based on the analysis above, we have the following \textbf{Algorithm \ref{alg-MCSPITBH}} to solve the problem \textbf{(MCSPIT$\bm{_{BH}}$)}. \begin{breakablealgorithm} \renewcommand{\thealgorithm}{2} \caption{$[\bar w, M^*]:=\textbf{MCSPIT}_{\bm{BH}}(T, w, u, c, D)$} \label{alg-MCSPITBH} \begin{algorithmic}[1] \REQUIRE The tree $T(V,E)$ rooted at a root node $v_1$ with the set $Y$ of leaf nodes; the cost vector $c$, two weight vectors $w,u$ and a given value $D$. \ENSURE An optimal solution $\bar w$ and its relevant objective value $M^*$. \STATE Sort the edges $e$ by their values of $c(e)$ in ascending order (removing repeated values; suppose there are $n^*$ distinct values) and renumber them as follows: \\ $c(e_{\alpha_1})<c(e_{\alpha_2})< \cdots< c(e_{\alpha_{n^*}}).$ \STATE Let $C_j:=c(e_{\alpha_j})$, $j=1,2,\cdots,{n^*}$. \STATE $D(w):=\min_{t_k\in Y}w(P_k)$.\\ Obtain $\hat w^{n^*}$ with $C_{n^*}$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_{n^*}:=D(\hat w^{n^*})$. Obtain $\hat w^1$ with $C_1$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_1:=D(\hat w^1)$. \IF{$D\leq D(w)$} \RETURN ($ w$, $0$). \ELSIF {$D> D_{n^*}$} \RETURN ``The problem is infeasible!''
\ELSE \STATE Initialization: $a := 1, b := n^*$, $k^* := 0, k := \lfloor\frac{a+b}2\rfloor$. \WHILE{$k^* = 0$} \IF{$a = b$} \STATE $k^* := a$, \textbf{break.} \ENDIF \STATE Obtain $\hat w^k$ with $C_k$ and its relevant shortest root-leaf distance $D_k :=D(\hat w^k)$.\\ Obtain $\hat w^{k+1}$ with $ C_{k+1}$ and its relevant shortest root-leaf distance $D_{k+1} := D(\hat w^{k+1})$. \IF{$ D_{k+1} < D$} \STATE Update $a := k, k := \lfloor\frac{a+b}2\rfloor$. \ELSIF{$D_k \geq D$} \STATE Update $b := k, k := \lfloor\frac{a+b}2\rfloor$. \ELSIF{$D_{k}<D \le D_{k+1}$} \STATE $k^* := k+1$. \ENDIF \ENDWHILE \RETURN $\bar w:=\hat w^{k^*}, M^*:=C_{k^*}$. \ENDIF \end{algorithmic} \end{breakablealgorithm} The main calculations in \textbf{Algorithm \ref{alg-MCSPITBH}} are the iterations of the binary search method, where two \textbf{(MSPIT$\bm{_{BH}}$)} problems are solved in each iteration. Hence, \begin{thm} The problem \textbf{(MCSPIT$\bm{_{BH}}$)} can be solved by \textbf{Algorithm \ref{alg-MCSPITBH}} in $O(n\log n)$ time. \end{thm} \section{Numerical experiments} \subsection{An example to show the process of algorithms} For a better understanding of \textbf{Algorithm \ref{alg-RIOVSPTBH}} and \textbf{Algorithm \ref{alg-MCSPITBH}}, Example 1 is given to show the detailed computing process. \noindent\textbf{Example 1.} As illustrated in \textbf{Figure~\ref{Fig1}}, a special bandwidth network between the cities $v_1$ and $v_0$ is given with prices on the links, which is a tree network (excluding the node $v_0$) with an observed traded price $D:=39$. Let $V:=\{v_1,v_2,\cdots,v_{17}\}$, $E:=\{e_2,e_3,\cdots,e_{17}\}$.
Let the initial, lower bound and upper bound vectors of the prices be $$\begin{aligned} &w:=(7, 12, 8, 6, 1, 12, 14, 19, 11, 11, 17, 9, 38, 10, 14, 17),\\& l: =(3 , 7 , 4, 3, 0 , 6 , 4 , 16 , 10 , 3, 13 , 5 , 37, 9, 8 , 9),\\& u: =(9 , 18, 12, 9, 7, 14 , 16 , 22, 20 , 14, 26, 12 , 48, 14, 16 , 20), \end{aligned}$$ respectively. The penalty vector of adjusting the costs between cities is $$c:=(7,2,4,14,3,13,9,8, 2,7,2,1,15,12,13,14).$$ \textbf{(1)} Suppose path $P_{11}$ through node $v_{11}$ is the optimal route for emergency response scenarios. Try to determine a pricing scheme to prevent arbitrage opportunities and also to ensure reliable emergency communication. \textbf{(2)} Try to determine a pricing scheme just to prevent arbitrage opportunities. We first show the detailed process to solve the problem \textbf{(1)} by \textbf{Algorithm \ref{alg-RIOVSPTBH}}. \textbf{In Lines 1-2:} $\max_{e\in E}c(e)=15,$ $E'=\{e\in E|c(e)\leq 15\}=E$. Then in \textbf{Theorem \ref{thm-fea=RIOVSPTBH}}, on the one hand, $l(P_0 \cap E') + w(P_0 \setminus E') =l(P_0) =29\leq D:=39$ and $ \min_{t_k\in Y}u(P_k)=u(P_{v_{17}})=50\geq D:=39$, then for all $t_k \in Y, u(P_k \cap E') + w(P_k \setminus E')=u(P_k)\geq D:=39.$ On the other hand, $l(P_0 \cap E') + w(P_0 \setminus E') =l(P_0 )=29\leq \min_{t_k\in Y }\Big(l(P_k \cap P_0\cap E')+ u(P_k \setminus P_0 \cap E') + w(P_k \setminus E')\Big)=\min_{t_k\in Y }\Big(l(P_k \cap P_0)+ u(P_k \setminus P_0 ) \Big)=l(P_0)=29\leq l(P_k \cap P_0 \cap E')+ u(P_k \setminus P_0 \cap E') + w(P_k \setminus E')$ for all $t_k \in Y$. Hence, $\max_{e\in E}c(e)=15$ is feasible. \textbf{In Lines 4-5:} Construct the sequence $\{\alpha_{n^*}\}$: $C_1=c(e_{13})=1<C_2=c(e_{10})=2<C_3=c(e_{6})=3<C_4=c(e_{4})=4<C_5=c(e_{2})=7<C_6=c(e_{9})=8<C_7=c(e_{8})=9<C_8=c(e_{15})=12<C_9=c(e_{7})=13<C_{10}=c(e_{5})=14<C_{11}=c(e_{14})=15. $ \textbf{In Lines 7-16: a binary search method.} \\ Initialization: $a:=1,b:=n^*=11,k^*:=0,k:=6$.
\textbf{In the first iteration}, $C_6=8, E_6=\{e_2,e_3,e_4,e_6,e_9,$ $e_{10},e_{11},e_{12},e_{13}\}$. $l(P_0 \cap E_6) + w(P_0 \setminus E_6)=29\leq D:=39\leq \min_{t_k\in Y}\big( u(P_k \cap E_6) + w(P_k \setminus E_6)\big)=u(P_{v_{17}} \cap E_6) + w(P_{v_{17}} \setminus E_6)=41,$ $l(P_0 \cap E_6) + w(P_0 \setminus E_6)=29 \leq \min_{t_k\in Y}\big(l(P_k \cap P_0 \cap E_6)+ u(P_k \setminus P_0 \cap E_6) + w(P_k \setminus E_6)\big)=l(P_0)=29$. Hence, $C_6=8$ is feasible. Then update $b:=6,k:=3.$ \textbf{In the second iteration}, $C_3=3,E_3=\{e_3,e_6,e_{10},e_{12},e_{13}\}.$ $l(P_0 \cap E_3) + w(P_0 \setminus E_3)=40>D:=39$. Hence, $C_3=3$ is infeasible. Then update $a:=3,k:=4.$ \textbf{In the third iteration}, $C_4=4,E_4=\{e_3,e_4,e_6,e_{10},e_{12},$ $e_{13}\}.$ $l(P_0 \cap E_4) + w(P_0 \setminus E_4)=40> D:=39$. Hence, $C_4=4$ is infeasible. Then update $a:=4,k:=5.$ \textbf{In the fourth iteration}, $C_5=7, E_5=\{e_2,e_3,e_4,e_6,e_{10},$ $e_{11},e_{12},e_{13}\}$. $l(P_0 \cap E_5) + w(P_0 \setminus E_5)=32\leq D:=39\leq \min_{t_k\in Y}\big( u(P_k \cap E_5) + w(P_k \setminus E_5)\big)=u(P_{v_{17}} \cap E_5) + w(P_{v_{17}} \setminus E_5)=41,$ $l(P_0 \cap E_5) + w(P_0 \setminus E_5)=32 \leq \min_{t_k\in Y}\big(l(P_k \cap P_0 \cap E_5)+ u(P_k \setminus P_0 \cap E_5) + w(P_k \setminus E_5)\big)=l(P_0\cap E_5) + w(P_0 \setminus E_5)=32$. Hence, $C_5=7$ is feasible. Then update $b:=5,k:=4.$ Then $b=a+1,k^*:=b=5$. \textbf{In Line 17: determine an optimal solution and its corresponding objective value.} From (\ref{fs-RIOVSPTBH}), \\ $e_j=e_{10}=(v_9,v_{10})$. We can obtain an optimal solution $$\tilde w:=(\textbf{9}, \textbf{18}, \textbf{12}, 6, \textbf{7}, 12, 14, 19, \textbf{ 17}, \textbf{3}, 17, \textbf{12}, 38, 10, 14, 17)$$ and its corresponding objective value is $C(\tilde w):=C_{k^*}=C_5=7.$ Then $\tilde w$ is a pricing scheme to prevent arbitrage opportunities and also to ensure reliable emergency communication.
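The feasibility test of Theorem \ref{thm-fea=RIOVSPTBH} applied in each of the iterations above can be sketched as follows. Here the root-leaf paths are given as edge lists, $P_0$ as an edge set, and the small instance is hypothetical rather than read off Figure \ref{Fig1}:

```python
def feasible(Ck, D, w, l, u, c, P0, paths):
    """Conditions 1-2 of the feasibility theorem for a candidate cost Ck.
    Ek is the set of edges adjustable at bottleneck cost at most Ck."""
    Ek = {e for e in w if c[e] <= Ck}
    mix = lambda vec, P: sum(vec[e] if e in Ek else w[e] for e in P)
    lo0 = mix(l, P0)                     # l(P0 ∩ Ek) + w(P0 \ Ek)
    # Condition 1: lo0 <= D <= u(Pi ∩ Ek) + w(Pi \ Ek) for every leaf path Pi.
    if not lo0 <= D <= min(mix(u, P) for P in paths):
        return False
    # Condition 2: l on Pi ∩ P0 and u on Pi \ P0 within Ek, w outside Ek.
    for P in paths:
        bound = sum((l[e] if e in P0 else u[e]) if e in Ek else w[e] for e in P)
        if lo0 > bound:
            return False
    return True

# Hypothetical instance: P0 = {a, b}; a second leaf path branches off via x.
w = {"a": 5, "b": 6, "x": 4}
l = {"a": 2, "b": 3, "x": 1}
u = {"a": 8, "b": 9, "x": 7}
c = {"a": 1, "b": 3, "x": 2}
P0 = {"a", "b"}
paths = [["a", "b"], ["a", "x"]]

print(feasible(3, 11, w, l, u, c, P0, paths))  # True: all edges adjustable
print(feasible(1, 11, w, l, u, c, P0, paths))  # False: condition 2 fails
```

Each call runs in $O(n)$ time, which is the per-iteration cost quoted in the complexity analysis of Algorithm \ref{alg-RIOVSPTBH}.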
We next show the detailed process to solve the problem \textbf{(2)} by \textbf{Algorithm \ref{alg-MCSPITBH}}. \textbf{In Lines 1-2:} Construct the sequence $\{\alpha_{n^*}\}$ as above. \textbf{In Line 3:} Calculate $D(w):=\min_{t_k\in Y}w(P_k)=w(P_{v_6})=34$. Obtain $\hat w^{11}$ with $C_{11}=15$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_{11}:=D(\hat w^{11})=u(P_{v_{17}})=50$. Obtain $\hat w^1$ with $C_1=1$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_1:=D(\hat w^1)=\hat w^1(P_{v_{6}})=34$. \textbf{In Lines 4-7: checking the special cases.} $D_{11}=50>D:=39>D(w)=34$. Then the problem is feasible. \textbf{In Lines 9-22: a binary search method.} \\ Initialization: $a:=1,b:=n^*=11,k^*:=0,k:=6$. \textbf{In the first iteration}, obtain $\hat w^6$ with $C_6=8$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_6 :=D(\hat w^6)=\hat w^6(P_{v_{17}})=41$. Obtain $\hat w^{7}$ with $ C_{7}=9$ and its relevant shortest root-leaf distance $D_{7} := D(\hat w^{7})=\hat w^7(P_{v_{17}})=41$. Then $D_6>D:=39$. Update $b:=6, k:=3$. \textbf{In the second iteration}, obtain $\hat w^3$ with $C_3=3$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_3 :=D(\hat w^3)=\hat w^3(P_{v_{17}})=41$. Obtain $\hat w^{4}$ with $ C_{4}=4$ and its relevant shortest root-leaf distance $D_{4} := D(\hat w^{4})=\hat w^4(P_{v_{17}})=41$. Then $D_3>D:=39$. Update $b:=3, k:=2$. \textbf{In the third iteration}, obtain $\hat w^2$ with $C_2=2$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_2 :=D(\hat w^2)=\hat w^2(P_{v_{6}})=40$. Obtain $\hat w^{3}$ with $ C_{3}=3$ and its relevant shortest root-leaf distance $D_{3} := D(\hat w^{3})=\hat w^3(P_{v_{17}})=41$. Then $D_2>D:=39$. Update $b:=2, k:=1$.
\textbf{In the fourth iteration}, obtain $\hat w^1$ with $C_1=1$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $D_1 :=D(\hat w^1)=\hat w^1(P_{v_{6}})=40$. Obtain $\hat w^{2}$ with $ C_{2}=2$ and its relevant shortest root-leaf distance $D_{2} := D(\hat w^{2})=\hat w^2(P_{v_{6}})=40$. Then $D_1>D:=39$. Update $b:=1, k:=1$. Then $k^*:=a=1$ as $a=b=1$. \textbf{In Line 23: determine an optimal solution and its corresponding objective value.} Obtain $$\hat w^1=(7, 12, 8, 6, \textbf{7}, 12, 14, 19, 11, 11, 17, 9, 38, 10, 14, 17)$$ with $C_1=1$ from formula (\ref{os-MSPITBH}) and its relevant shortest root-leaf distance $$D_1 :=D(\hat w^1)=\hat w^1(P_{v_{6}})=40\geq D:=39.$$ Then $\bar w:=\hat w^1 $ is a pricing scheme just to prevent arbitrage opportunities. \subsection{Numerical experiments} We present the numerical experimental results for Algorithms \ref{alg-RIOVSPTBH} and \ref{alg-MCSPITBH} in Table \ref{table:ex}. We tested these algorithms on six randomly generated trees with vertex numbers ranging from 1000 to 5000. We randomly generated the vectors $u$, $c$ and $w$ such that $0\leq w \leq u$ and $c > 0$. We generated $D$ for each tree based on $n$ with randomness. In this table, the average, maximum and minimum CPU times are denoted by $T_i$, $T_i^{max}$ and $T_i^{min}$, $i=1,2$, respectively. The experimental results demonstrate that both Algorithms \ref{alg-RIOVSPTBH} and \ref{alg-MCSPITBH} perform efficiently on large-scale trees, with Algorithm \ref{alg-RIOVSPTBH} outperforming Algorithm \ref{alg-MCSPITBH} in terms of CPU time, since each iteration of the binary search in Algorithm \ref{alg-MCSPITBH} solves two \textbf{(MSPIT$\bm{_{BH}}$)} subproblems, whereas each iteration of Algorithm \ref{alg-RIOVSPTBH} performs a single feasibility check. \begin{table}[htbp!]
\centering \caption{Performance of Algorithms \ref{alg-RIOVSPTBH} and \ref{alg-MCSPITBH}.} \label{table:ex} \begin{tabular}{ccccc}\toprule \textbf{Complexity} & & \( n=1000 \) & \( n=3000 \) & \( n=5000 \) \\ \midrule \( O(n \log n) \) & \( T_1 \) & 0.3052 & 1.0427 & 1.8561 \\ & \( T_1^{\text{max}} \) & 0.5428 & 1.2613 & 2.1547 \\ & \( T_1^{\text{min}} \) & 0.1871 & 0.9701 & 1.4467 \\ \midrule \( O(n \log n) \) & \( T_2 \) & 0.5147 & 1.4928 & 2.5431 \\ & \( T_2^{\text{max}} \) & 0.6742 & 1.7624 & 2.9127 \\ & \( T_2^{\text{min}} \) & 0.3392 & 1.2691 & 2.1103 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and further research} This paper addresses the restricted inverse optimal value problem on shortest paths under weighted bottleneck Hamming distance on trees \textbf{(RIOVSPT\textsubscript{BH})}. This optimization problem has significant applications in transportation and communication networks. Through rigorous analysis of its mathematical properties, we develop an efficient $O(n \log n)$ algorithm based on binary search to solve \textbf{(RIOVSPT\textsubscript{BH})}. Additionally, we present a solution to the related \textbf{(MCSPIT\textsubscript{BH})} problem, achieving the same time complexity of $O(n \log n)$. Our findings establish a foundation for future research directions. Potential extensions include investigating the restricted inverse optimal value problem on shortest paths for more complex graph structures, particularly series-parallel graphs and block graphs. Furthermore, the methodology developed here could be adapted to examine restricted inverse optimal value problems for other network performance metrics, such as maximum flow and minimum cost flow problems. \begin{thebibliography}{00} \bibitem{BT-MP-ISP-1992} D. Burton, P.L. Toint, On an instance of the inverse shortest paths problem. Mathematical Programming. 1992; 53(1-3): 45-61. \bibitem{CH-N-ISPL} T. Cui, D.S.
Hochbaum, Complexity of some inverse shortest path lengths problems. Networks. 2010; 56(1): 20-29. \bibitem{Jia2024} J. Jia, X. Guan, B. Zhang, X. Qian, P.M. Pardalos, Combinatorial algorithms for restricted inverse optimal value problems on minimum spanning tree under weighted $l_1$ norm. Journal of Computational and Applied Mathematics. 2024; 451: 116110. \bibitem{Jia2023} J. Jia, X. Guan, H. Wang, B. Zhang, P.M. Pardalos, Combinatorial algorithms for solving the restricted bounded inverse optimal value problem on minimum spanning tree under weighted $l_\infty$ norm. Journal of Computational and Applied Mathematics. 2023; 419: 114754. \bibitem{YiL22} L. Yi, H. Shao, T. Wu, P.J. Liu, An accelerating algorithm for maximum shortest path interdiction problem by upgrading edges on trees under unit Hamming distance. Optimization Letters. 2022; 17(2): 453-469. \bibitem{PT-ANV-BT} G. Paleologo, S. Takriti, Bandwidth trading: A new market looking for help from the OR community. AIRO News. 2001; VI(3): 1-4. \bibitem{WGZ-CIOVMST-JOCO} H. Wang, X.C. Guan, Q. Zhang, B.W. Zhang, Capacitated inverse optimal value problem on minimum spanning tree under bottleneck Hamming distance. Journal of Combinatorial Optimization. 2021; 41(4): 861-887. \bibitem{ZGP-LBIOVMST-JGO} B.W. Zhang, X.C. Guan, P.M. Pardalos, H. Wang, Q. Zhang, Y. Liu, S.Y. Chen, The lower bounded inverse optimal value problem on minimum spanning tree under unit $l_{\infty}$ norm. Journal of Global Optimization. 2021; 79(3): 757-777. \bibitem{ZGZ-IOVMST-OL} B.W. Zhang, X.C. Guan, Q. Zhang, Inverse optimal value problem on minimum spanning tree under unit $l_{\infty}$ norm. Optimization Letters. 2020; 14(8): 2301-2322. \bibitem{ZhangQ20} Q. Zhang, X.C. Guan, P.M. Pardalos, Maximum shortest path interdiction problem by upgrading edges on trees under weighted $l_1$ norm. Journal of Global Optimization. 2021; 79(4): 959-987. \bibitem{ZhangQ21} Q. Zhang, X.C. Guan, H. Wang, P.M.
Pardalos, Maximum shortest path interdiction problem by upgrading edges on trees under Hamming distance. Optimization Letters. 2021; 15(8): 2661-2680. \bibitem{ZhangQ22} Q. Zhang, X.C. Guan, J.H. Jia, X.Q. Qian, P.M. Pardalos, The restricted inverse optimal value problem on shortest path under $l_1$ norm on trees. Journal of Global Optimization. 2023; 86(1): 251-284. \end{thebibliography} \end{document}
\pdfglyphtounicode{afii57454}{00A0} \pdfglyphtounicode{afii57455}{00A0} \pdfglyphtounicode{afii57456}{00A0} \pdfglyphtounicode{afii57457}{00A0} \pdfglyphtounicode{afii57458}{00A0} \pdfglyphtounicode{afii57470}{00A0} \pdfglyphtounicode{afii57505}{00A0} \pdfglyphtounicode{afii57506}{00A0} \pdfglyphtounicode{afii57507}{00A0} \pdfglyphtounicode{afii57508}{00A0} \pdfglyphtounicode{afii57509}{00A0} \pdfglyphtounicode{afii57511}{00A0} \pdfglyphtounicode{afii57512}{00A0} \pdfglyphtounicode{afii57513}{00A0} \pdfglyphtounicode{afii57514}{00A0} \pdfglyphtounicode{afii57519}{00A0} \pdfglyphtounicode{afii57534}{00A0} \pdfglyphtounicode{afii57636}{00A0} \pdfglyphtounicode{afii57645}{00A0} \pdfglyphtounicode{afii57658}{00A0} \pdfglyphtounicode{afii57664}{00A0} \pdfglyphtounicode{afii57665}{00A0} \pdfglyphtounicode{afii57666}{00A0} \pdfglyphtounicode{afii57667}{00A0} \pdfglyphtounicode{afii57668}{00A0} \pdfglyphtounicode{afii57669}{00A0} \pdfglyphtounicode{afii57670}{00A0} \pdfglyphtounicode{afii57671}{00A0} \pdfglyphtounicode{afii57672}{00A0} \pdfglyphtounicode{afii57673}{00A0} \pdfglyphtounicode{afii57674}{00A0} \pdfglyphtounicode{afii57675}{00A0} \pdfglyphtounicode{afii57676}{00A0} \pdfglyphtounicode{afii57677}{00A0} \pdfglyphtounicode{afii57678}{00A0} \pdfglyphtounicode{afii57679}{00A0} \pdfglyphtounicode{afii57680}{00A0} \pdfglyphtounicode{afii57681}{00A0} \pdfglyphtounicode{afii57682}{00A0} \pdfglyphtounicode{afii57683}{00A0} \pdfglyphtounicode{afii57684}{00A0} \pdfglyphtounicode{afii57685}{00A0} \pdfglyphtounicode{afii57686}{00A0} \pdfglyphtounicode{afii57687}{00A0} \pdfglyphtounicode{afii57688}{00A0} \pdfglyphtounicode{afii57689}{00A0} \pdfglyphtounicode{afii57690}{00A0} \pdfglyphtounicode{afii57694}{00A0} \pdfglyphtounicode{afii57695}{00A0} \pdfglyphtounicode{afii57700}{00A0} \pdfglyphtounicode{afii57705}{00A0} \pdfglyphtounicode{afii57716}{00A0} \pdfglyphtounicode{afii57717}{00A0} \pdfglyphtounicode{afii57718}{00A0} \pdfglyphtounicode{afii57723}{00A0} 
\pdfglyphtounicode{afii57793}{00A0} \pdfglyphtounicode{afii57794}{00A0} \pdfglyphtounicode{afii57795}{00A0} \pdfglyphtounicode{afii57796}{00A0} \pdfglyphtounicode{afii57797}{00A0} \pdfglyphtounicode{afii57798}{00A0} \pdfglyphtounicode{afii57799}{00A0} \pdfglyphtounicode{afii57800}{00A0} \pdfglyphtounicode{afii57801}{00A0} \pdfglyphtounicode{afii57802}{00A0} \pdfglyphtounicode{afii57803}{00A0} \pdfglyphtounicode{afii57804}{00A0} \pdfglyphtounicode{afii57806}{00A0} \pdfglyphtounicode{afii57807}{00A0} \pdfglyphtounicode{afii57839}{00A0} \pdfglyphtounicode{afii57841}{00A0} \pdfglyphtounicode{afii57842}{00A0} \pdfglyphtounicode{afii57929}{00A0} \pdfglyphtounicode{afii61248}{00A0} \pdfglyphtounicode{afii61289}{00A0} \pdfglyphtounicode{afii61352}{00A0} \pdfglyphtounicode{afii61573}{00A0} \pdfglyphtounicode{afii61574}{00A0} \pdfglyphtounicode{afii61575}{00A0} \pdfglyphtounicode{afii61664}{00A0} \pdfglyphtounicode{afii63167}{00A0} \pdfglyphtounicode{afii64937}{00A0} \pdfglyphtounicode{agrave}{00A0} \pdfglyphtounicode{agujarati}{00A0} \pdfglyphtounicode{agurmukhi}{00A0} \pdfglyphtounicode{ahiragana}{00A0} \pdfglyphtounicode{ahookabove}{00A0} \pdfglyphtounicode{aibengali}{00A0} \pdfglyphtounicode{aibopomofo}{00A0} \pdfglyphtounicode{aideva}{00A0} \pdfglyphtounicode{aiecyrillic}{00A0} \pdfglyphtounicode{aigujarati}{00A0} \pdfglyphtounicode{aigurmukhi}{00A0} \pdfglyphtounicode{aimatragurmukhi}{00A0} \pdfglyphtounicode{ainarabic}{00A0} \pdfglyphtounicode{ainfinalarabic}{00A0} \pdfglyphtounicode{aininitialarabic}{00A0} \pdfglyphtounicode{ainmedialarabic}{00A0} \pdfglyphtounicode{ainvertedbreve}{00A0} \pdfglyphtounicode{aivowelsignbengali}{00A0} \pdfglyphtounicode{aivowelsigndeva}{00A0} \pdfglyphtounicode{aivowelsigngujarati}{00A0} \pdfglyphtounicode{akatakana}{00A0} \pdfglyphtounicode{akatakanahalfwidth}{00A0} \pdfglyphtounicode{akorean}{00A0} \pdfglyphtounicode{alef}{00A0} \pdfglyphtounicode{alefarabic}{00A0} \pdfglyphtounicode{alefdageshhebrew}{00A0} 
\pdfglyphtounicode{aleffinalarabic}{00A0} \pdfglyphtounicode{alefhamzaabovearabic}{00A0} \pdfglyphtounicode{alefhamzaabovefinalarabic}{00A0} \pdfglyphtounicode{alefhamzabelowarabic}{00A0} \pdfglyphtounicode{alefhamzabelowfinalarabic}{00A0} \pdfglyphtounicode{alefhebrew}{00A0} \pdfglyphtounicode{aleflamedhebrew}{00A0} \pdfglyphtounicode{alefmaddaabovearabic}{00A0} \pdfglyphtounicode{alefmaddaabovefinalarabic}{00A0} \pdfglyphtounicode{alefmaksuraarabic}{00A0} \pdfglyphtounicode{alefmaksurafinalarabic}{00A0} \pdfglyphtounicode{alefmaksurainitialarabic}{00A0} \pdfglyphtounicode{alefmaksuramedialarabic}{00A0} \pdfglyphtounicode{alefpatahhebrew}{00A0} \pdfglyphtounicode{alefqamatshebrew}{00A0} \pdfglyphtounicode{aleph}{00A0} \pdfglyphtounicode{allequal}{00A0} \pdfglyphtounicode{alpha}{00A0} \pdfglyphtounicode{alphatonos}{00A0} \pdfglyphtounicode{amacron}{00A0} \pdfglyphtounicode{amonospace}{00A0} \pdfglyphtounicode{ampersand}{00A0} \pdfglyphtounicode{ampersandmonospace}{00A0} \pdfglyphtounicode{ampersandsmall}{00A0} \pdfglyphtounicode{amsquare}{00A0} \pdfglyphtounicode{anbopomofo}{00A0} \pdfglyphtounicode{angbopomofo}{00A0} \pdfglyphtounicode{angbracketleft}{00A0} \pdfglyphtounicode{angbracketright}{00A0} \pdfglyphtounicode{angkhankhuthai}{00A0} \pdfglyphtounicode{angle}{00A0} \pdfglyphtounicode{anglebracketleft}{00A0} \pdfglyphtounicode{anglebracketleftvertical}{00A0} \pdfglyphtounicode{anglebracketright}{00A0} \pdfglyphtounicode{anglebracketrightvertical}{00A0} \pdfglyphtounicode{angleleft}{00A0} \pdfglyphtounicode{angleright}{00A0} \pdfglyphtounicode{angstrom}{00A0} \pdfglyphtounicode{anoteleia}{00A0} \pdfglyphtounicode{anticlockwise}{00A0} \pdfglyphtounicode{anudattadeva}{00A0} \pdfglyphtounicode{anusvarabengali}{00A0} \pdfglyphtounicode{anusvaradeva}{00A0} \pdfglyphtounicode{anusvaragujarati}{00A0} \pdfglyphtounicode{aogonek}{00A0} \pdfglyphtounicode{apaatosquare}{00A0} \pdfglyphtounicode{aparen}{00A0} \pdfglyphtounicode{apostrophearmenian}{00A0} 
\pdfglyphtounicode{apostrophemod}{00A0} \pdfglyphtounicode{apple}{00A0} \pdfglyphtounicode{approaches}{00A0} \pdfglyphtounicode{approxequal}{00A0} \pdfglyphtounicode{approxequalorimage}{00A0} \pdfglyphtounicode{approximatelyequal}{00A0} \pdfglyphtounicode{approxorequal}{00A0} \pdfglyphtounicode{araeaekorean}{00A0} \pdfglyphtounicode{araeakorean}{00A0} \pdfglyphtounicode{arc}{00A0} \pdfglyphtounicode{archleftdown}{00A0} \pdfglyphtounicode{archrightdown}{00A0} \pdfglyphtounicode{arighthalfring}{00A0} \pdfglyphtounicode{aring}{00A0} \pdfglyphtounicode{aringacute}{00A0} \pdfglyphtounicode{aringbelow}{00A0} \pdfglyphtounicode{arrowboth}{00A0} \pdfglyphtounicode{arrowbothv}{00A0} \pdfglyphtounicode{arrowdashdown}{00A0} \pdfglyphtounicode{arrowdashleft}{00A0} \pdfglyphtounicode{arrowdashright}{00A0} \pdfglyphtounicode{arrowdashup}{00A0} \pdfglyphtounicode{arrowdblboth}{00A0} \pdfglyphtounicode{arrowdblbothv}{00A0} \pdfglyphtounicode{arrowdbldown}{00A0} \pdfglyphtounicode{arrowdblleft}{00A0} \pdfglyphtounicode{arrowdblright}{00A0} \pdfglyphtounicode{arrowdblup}{00A0} \pdfglyphtounicode{arrowdown}{00A0} \pdfglyphtounicode{arrowdownleft}{00A0} \pdfglyphtounicode{arrowdownright}{00A0} \pdfglyphtounicode{arrowdownwhite}{00A0} \pdfglyphtounicode{arrowheaddownmod}{00A0} \pdfglyphtounicode{arrowheadleftmod}{00A0} \pdfglyphtounicode{arrowheadrightmod}{00A0} \pdfglyphtounicode{arrowheadupmod}{00A0} \pdfglyphtounicode{arrowhorizex}{00A0} \pdfglyphtounicode{arrowleft}{00A0} \pdfglyphtounicode{arrowleftbothalf}{00A0} \pdfglyphtounicode{arrowleftdbl}{00A0} \pdfglyphtounicode{arrowleftdblstroke}{00A0} \pdfglyphtounicode{arrowleftoverright}{00A0} \pdfglyphtounicode{arrowlefttophalf}{00A0} \pdfglyphtounicode{arrowleftwhite}{00A0} \pdfglyphtounicode{arrownortheast}{00A0} \pdfglyphtounicode{arrownorthwest}{00A0} \pdfglyphtounicode{arrowparrleftright}{00A0} \pdfglyphtounicode{arrowparrrightleft}{00A0} \pdfglyphtounicode{arrowright}{00A0} \pdfglyphtounicode{arrowrightbothalf}{00A0} 
\pdfglyphtounicode{arrowrightdblstroke}{00A0} \pdfglyphtounicode{arrowrightheavy}{00A0} \pdfglyphtounicode{arrowrightoverleft}{00A0} \pdfglyphtounicode{arrowrighttophalf}{00A0} \pdfglyphtounicode{arrowrightwhite}{00A0} \pdfglyphtounicode{arrowsoutheast}{00A0} \pdfglyphtounicode{arrowsouthwest}{00A0} \pdfglyphtounicode{arrowtableft}{00A0} \pdfglyphtounicode{arrowtabright}{00A0} \pdfglyphtounicode{arrowtailleft}{00A0} \pdfglyphtounicode{arrowtailright}{00A0} \pdfglyphtounicode{arrowtripleleft}{00A0} \pdfglyphtounicode{arrowtripleright}{00A0} \pdfglyphtounicode{arrowup}{00A0} \pdfglyphtounicode{arrowupdn}{00A0} \pdfglyphtounicode{arrowupdnbse}{00A0} \pdfglyphtounicode{arrowupdownbase}{00A0} \pdfglyphtounicode{arrowupleft}{00A0} \pdfglyphtounicode{arrowupleftofdown}{00A0} \pdfglyphtounicode{arrowupright}{00A0} \pdfglyphtounicode{arrowupwhite}{00A0} \pdfglyphtounicode{arrowvertex}{00A0} \pdfglyphtounicode{asciicircum}{00A0} \pdfglyphtounicode{asciicircummonospace}{00A0} \pdfglyphtounicode{asciitilde}{00A0} \pdfglyphtounicode{asciitildemonospace}{00A0} \pdfglyphtounicode{ascript}{00A0} \pdfglyphtounicode{ascriptturned}{00A0} \pdfglyphtounicode{asmallhiragana}{00A0} \pdfglyphtounicode{asmallkatakana}{00A0} \pdfglyphtounicode{asmallkatakanahalfwidth}{00A0} \pdfglyphtounicode{asterisk}{00A0} \pdfglyphtounicode{asteriskaltonearabic}{00A0} \pdfglyphtounicode{asteriskarabic}{00A0} \pdfglyphtounicode{asteriskcentered}{00A0} \pdfglyphtounicode{asteriskmath}{00A0} \pdfglyphtounicode{asteriskmonospace}{00A0} \pdfglyphtounicode{asterisksmall}{00A0} \pdfglyphtounicode{asterism}{00A0} \pdfglyphtounicode{asuperior}{00A0} \pdfglyphtounicode{asymptoticallyequal}{00A0} \pdfglyphtounicode{at}{00A0} \pdfglyphtounicode{atilde}{00A0} \pdfglyphtounicode{atmonospace}{00A0} \pdfglyphtounicode{atsmall}{00A0} \pdfglyphtounicode{aturned}{00A0} \pdfglyphtounicode{aubengali}{00A0} \pdfglyphtounicode{aubopomofo}{00A0} \pdfglyphtounicode{audeva}{00A0} \pdfglyphtounicode{augujarati}{00A0} 
\pdfglyphtounicode{augurmukhi}{00A0} \pdfglyphtounicode{aulengthmarkbengali}{00A0} \pdfglyphtounicode{aumatragurmukhi}{00A0} \pdfglyphtounicode{auvowelsignbengali}{00A0} \pdfglyphtounicode{auvowelsigndeva}{00A0} \pdfglyphtounicode{auvowelsigngujarati}{00A0} \pdfglyphtounicode{avagrahadeva}{00A0} \pdfglyphtounicode{aybarmenian}{00A0} \pdfglyphtounicode{ayin}{00A0} \pdfglyphtounicode{ayinaltonehebrew}{00A0} \pdfglyphtounicode{ayinhebrew}{00A0} \pdfglyphtounicode{b}{00A0} \pdfglyphtounicode{babengali}{00A0} \pdfglyphtounicode{backslash}{00A0} \pdfglyphtounicode{backslashmonospace}{00A0} \pdfglyphtounicode{badeva}{00A0} \pdfglyphtounicode{bagujarati}{00A0} \pdfglyphtounicode{bagurmukhi}{00A0} \pdfglyphtounicode{bahiragana}{00A0} \pdfglyphtounicode{bahtthai}{00A0} \pdfglyphtounicode{bakatakana}{00A0} \pdfglyphtounicode{bar}{00A0} \pdfglyphtounicode{bardbl}{00A0} \pdfglyphtounicode{barmonospace}{00A0} \pdfglyphtounicode{bbopomofo}{00A0} \pdfglyphtounicode{bcircle}{00A0} \pdfglyphtounicode{bdotaccent}{00A0} \pdfglyphtounicode{bdotbelow}{00A0} \pdfglyphtounicode{beamedsixteenthnotes}{00A0} \pdfglyphtounicode{because}{00A0} \pdfglyphtounicode{becyrillic}{00A0} \pdfglyphtounicode{beharabic}{00A0} \pdfglyphtounicode{behfinalarabic}{00A0} \pdfglyphtounicode{behinitialarabic}{00A0} \pdfglyphtounicode{behiragana}{00A0} \pdfglyphtounicode{behmedialarabic}{00A0} \pdfglyphtounicode{behmeeminitialarabic}{00A0} \pdfglyphtounicode{behmeemisolatedarabic}{00A0} \pdfglyphtounicode{behnoonfinalarabic}{00A0} \pdfglyphtounicode{bekatakana}{00A0} \pdfglyphtounicode{benarmenian}{00A0} \pdfglyphtounicode{bet}{00A0} \pdfglyphtounicode{beta}{00A0} \pdfglyphtounicode{betasymbolgreek}{00A0} \pdfglyphtounicode{betdagesh}{00A0} \pdfglyphtounicode{betdageshhebrew}{00A0} \pdfglyphtounicode{beth}{00A0} \pdfglyphtounicode{bethebrew}{00A0} \pdfglyphtounicode{betrafehebrew}{00A0} \pdfglyphtounicode{between}{00A0} \pdfglyphtounicode{bhabengali}{00A0} \pdfglyphtounicode{bhadeva}{00A0} 
\pdfglyphtounicode{bhagujarati}{00A0} \pdfglyphtounicode{bhagurmukhi}{00A0} \pdfglyphtounicode{bhook}{00A0} \pdfglyphtounicode{bihiragana}{00A0} \pdfglyphtounicode{bikatakana}{00A0} \pdfglyphtounicode{bilabialclick}{00A0} \pdfglyphtounicode{bindigurmukhi}{00A0} \pdfglyphtounicode{birusquare}{00A0} \pdfglyphtounicode{blackcircle}{00A0} \pdfglyphtounicode{blackdiamond}{00A0} \pdfglyphtounicode{blackdownpointingtriangle}{00A0} \pdfglyphtounicode{blackleftpointingpointer}{00A0} \pdfglyphtounicode{blackleftpointingtriangle}{00A0} \pdfglyphtounicode{blacklenticularbracketleft}{00A0} \pdfglyphtounicode{blacklenticularbracketleftvertical}{00A0} \pdfglyphtounicode{blacklenticularbracketright}{00A0} \pdfglyphtounicode{blacklenticularbracketrightvertical}{00A0} \pdfglyphtounicode{blacklowerlefttriangle}{00A0} \pdfglyphtounicode{blacklowerrighttriangle}{00A0} \pdfglyphtounicode{blackrectangle}{00A0} \pdfglyphtounicode{blackrightpointingpointer}{00A0} \pdfglyphtounicode{blackrightpointingtriangle}{00A0} \pdfglyphtounicode{blacksmallsquare}{00A0} \pdfglyphtounicode{blacksmilingface}{00A0} \pdfglyphtounicode{blacksquare}{00A0} \pdfglyphtounicode{blackstar}{00A0} \pdfglyphtounicode{blackupperlefttriangle}{00A0} \pdfglyphtounicode{blackupperrighttriangle}{00A0} \pdfglyphtounicode{blackuppointingsmalltriangle}{00A0} \pdfglyphtounicode{blackuppointingtriangle}{00A0} \pdfglyphtounicode{blank}{00A0} \pdfglyphtounicode{blinebelow}{00A0} \pdfglyphtounicode{block}{00A0} \pdfglyphtounicode{bmonospace}{00A0} \pdfglyphtounicode{bobaimaithai}{00A0} \pdfglyphtounicode{bohiragana}{00A0} \pdfglyphtounicode{bokatakana}{00A0} \pdfglyphtounicode{bparen}{00A0} \pdfglyphtounicode{bqsquare}{00A0} \pdfglyphtounicode{braceex}{00A0} \pdfglyphtounicode{braceleft}{00A0} \pdfglyphtounicode{braceleftbt}{00A0} \pdfglyphtounicode{braceleftmid}{00A0} \pdfglyphtounicode{braceleftmonospace}{00A0} \pdfglyphtounicode{braceleftsmall}{00A0} \pdfglyphtounicode{bracelefttp}{00A0} 
\pdfglyphtounicode{braceleftvertical}{00A0} \pdfglyphtounicode{braceright}{00A0} \pdfglyphtounicode{bracerightbt}{00A0} \pdfglyphtounicode{bracerightmid}{00A0} \pdfglyphtounicode{bracerightmonospace}{00A0} \pdfglyphtounicode{bracerightsmall}{00A0} \pdfglyphtounicode{bracerighttp}{00A0} \pdfglyphtounicode{bracerightvertical}{00A0} \pdfglyphtounicode{bracketleft}{00A0} \pdfglyphtounicode{bracketleftbt}{00A0} \pdfglyphtounicode{bracketleftex}{00A0} \pdfglyphtounicode{bracketleftmonospace}{00A0} \pdfglyphtounicode{bracketlefttp}{00A0} \pdfglyphtounicode{bracketright}{00A0} \pdfglyphtounicode{bracketrightbt}{00A0} \pdfglyphtounicode{bracketrightex}{00A0} \pdfglyphtounicode{bracketrightmonospace}{00A0} \pdfglyphtounicode{bracketrighttp}{00A0} \pdfglyphtounicode{breve}{00A0} \pdfglyphtounicode{brevebelowcmb}{00A0} \pdfglyphtounicode{brevecmb}{00A0} \pdfglyphtounicode{breveinvertedbelowcmb}{00A0} \pdfglyphtounicode{breveinvertedcmb}{00A0} \pdfglyphtounicode{breveinverteddoublecmb}{00A0} \pdfglyphtounicode{bridgebelowcmb}{00A0} \pdfglyphtounicode{bridgeinvertedbelowcmb}{00A0} \pdfglyphtounicode{brokenbar}{00A0} \pdfglyphtounicode{bstroke}{00A0} \pdfglyphtounicode{bsuperior}{00A0} \pdfglyphtounicode{btopbar}{00A0} \pdfglyphtounicode{buhiragana}{00A0} \pdfglyphtounicode{bukatakana}{00A0} \pdfglyphtounicode{bullet}{00A0} \pdfglyphtounicode{bulletinverse}{00A0} \pdfglyphtounicode{bulletoperator}{00A0} \pdfglyphtounicode{bullseye}{00A0} \pdfglyphtounicode{c}{00A0} \pdfglyphtounicode{caarmenian}{00A0} \pdfglyphtounicode{cabengali}{00A0} \pdfglyphtounicode{cacute}{00A0} \pdfglyphtounicode{cadeva}{00A0} \pdfglyphtounicode{cagujarati}{00A0} \pdfglyphtounicode{cagurmukhi}{00A0} \pdfglyphtounicode{calsquare}{00A0} \pdfglyphtounicode{candrabindubengali}{00A0} \pdfglyphtounicode{candrabinducmb}{00A0} \pdfglyphtounicode{candrabindudeva}{00A0} \pdfglyphtounicode{candrabindugujarati}{00A0} \pdfglyphtounicode{capslock}{00A0} \pdfglyphtounicode{careof}{00A0} \pdfglyphtounicode{caron}{00A0} 
\pdfglyphtounicode{caronbelowcmb}{00A0} \pdfglyphtounicode{caroncmb}{00A0} \pdfglyphtounicode{carriagereturn}{00A0} \pdfglyphtounicode{cbopomofo}{00A0} \pdfglyphtounicode{ccaron}{00A0} \pdfglyphtounicode{ccedilla}{00A0} \pdfglyphtounicode{ccedillaacute}{00A0} \pdfglyphtounicode{ccircle}{00A0} \pdfglyphtounicode{ccircumflex}{00A0} \pdfglyphtounicode{ccurl}{00A0} \pdfglyphtounicode{cdot}{00A0} \pdfglyphtounicode{cdotaccent}{00A0} \pdfglyphtounicode{cdsquare}{00A0} \pdfglyphtounicode{cedilla}{00A0} \pdfglyphtounicode{cedillacmb}{00A0} \pdfglyphtounicode{ceilingleft}{00A0} \pdfglyphtounicode{ceilingright}{00A0} \pdfglyphtounicode{cent}{00A0} \pdfglyphtounicode{centigrade}{00A0} \pdfglyphtounicode{centinferior}{00A0} \pdfglyphtounicode{centmonospace}{00A0} \pdfglyphtounicode{centoldstyle}{00A0} \pdfglyphtounicode{centsuperior}{00A0} \pdfglyphtounicode{chaarmenian}{00A0} \pdfglyphtounicode{chabengali}{00A0} \pdfglyphtounicode{chadeva}{00A0} \pdfglyphtounicode{chagujarati}{00A0} \pdfglyphtounicode{chagurmukhi}{00A0} \pdfglyphtounicode{chbopomofo}{00A0} \pdfglyphtounicode{cheabkhasiancyrillic}{00A0} \pdfglyphtounicode{check}{00A0} \pdfglyphtounicode{checkmark}{00A0} \pdfglyphtounicode{checyrillic}{00A0} \pdfglyphtounicode{chedescenderabkhasiancyrillic}{00A0} \pdfglyphtounicode{chedescendercyrillic}{00A0} \pdfglyphtounicode{chedieresiscyrillic}{00A0} \pdfglyphtounicode{cheharmenian}{00A0} \pdfglyphtounicode{chekhakassiancyrillic}{00A0} \pdfglyphtounicode{cheverticalstrokecyrillic}{00A0} \pdfglyphtounicode{chi}{00A0} \pdfglyphtounicode{chieuchacirclekorean}{00A0} \pdfglyphtounicode{chieuchaparenkorean}{00A0} \pdfglyphtounicode{chieuchcirclekorean}{00A0} \pdfglyphtounicode{chieuchkorean}{00A0} \pdfglyphtounicode{chieuchparenkorean}{00A0} \pdfglyphtounicode{chochangthai}{00A0} \pdfglyphtounicode{chochanthai}{00A0} \pdfglyphtounicode{chochingthai}{00A0} \pdfglyphtounicode{chochoethai}{00A0} \pdfglyphtounicode{chook}{00A0} \pdfglyphtounicode{cieucacirclekorean}{00A0} 
\pdfglyphtounicode{cieucaparenkorean}{00A0} \pdfglyphtounicode{cieuccirclekorean}{00A0} \pdfglyphtounicode{cieuckorean}{00A0} \pdfglyphtounicode{cieucparenkorean}{00A0} \pdfglyphtounicode{cieucuparenkorean}{00A0} \pdfglyphtounicode{circle}{00A0} \pdfglyphtounicode{circleR}{00A0} \pdfglyphtounicode{circleS}{00A0} \pdfglyphtounicode{circleasterisk}{00A0} \pdfglyphtounicode{circlecopyrt}{00A0} \pdfglyphtounicode{circledivide}{00A0} \pdfglyphtounicode{circledot}{00A0} \pdfglyphtounicode{circleequal}{00A0} \pdfglyphtounicode{circleminus}{00A0} \pdfglyphtounicode{circlemultiply}{00A0} \pdfglyphtounicode{circleot}{00A0} \pdfglyphtounicode{circleplus}{00A0} \pdfglyphtounicode{circlepostalmark}{00A0} \pdfglyphtounicode{circlering}{00A0} \pdfglyphtounicode{circlewithlefthalfblack}{00A0} \pdfglyphtounicode{circlewithrighthalfblack}{00A0} \pdfglyphtounicode{circumflex}{00A0} \pdfglyphtounicode{circumflexbelowcmb}{00A0} \pdfglyphtounicode{circumflexcmb}{00A0} \pdfglyphtounicode{clear}{00A0} \pdfglyphtounicode{clickalveolar}{00A0} \pdfglyphtounicode{clickdental}{00A0} \pdfglyphtounicode{clicklateral}{00A0} \pdfglyphtounicode{clickretroflex}{00A0} \pdfglyphtounicode{clockwise}{00A0} \pdfglyphtounicode{club}{00A0} \pdfglyphtounicode{clubsuitblack}{00A0} \pdfglyphtounicode{clubsuitwhite}{00A0} \pdfglyphtounicode{cmcubedsquare}{00A0} \pdfglyphtounicode{cmonospace}{00A0} \pdfglyphtounicode{cmsquaredsquare}{00A0} \pdfglyphtounicode{coarmenian}{00A0} \pdfglyphtounicode{colon}{00A0} \pdfglyphtounicode{colonmonetary}{00A0} \pdfglyphtounicode{colonmonospace}{00A0} \pdfglyphtounicode{colonsign}{00A0} \pdfglyphtounicode{colonsmall}{00A0} \pdfglyphtounicode{colontriangularhalfmod}{00A0} \pdfglyphtounicode{colontriangularmod}{00A0} \pdfglyphtounicode{comma}{00A0} \pdfglyphtounicode{commaabovecmb}{00A0} \pdfglyphtounicode{commaaboverightcmb}{00A0} \pdfglyphtounicode{commaaccent}{00A0} \pdfglyphtounicode{commaarabic}{00A0} \pdfglyphtounicode{commaarmenian}{00A0} 
\pdfglyphtounicode{commainferior}{00A0} \pdfglyphtounicode{commamonospace}{00A0} \pdfglyphtounicode{commareversedabovecmb}{00A0} \pdfglyphtounicode{commareversedmod}{00A0} \pdfglyphtounicode{commasmall}{00A0} \pdfglyphtounicode{commasuperior}{00A0} \pdfglyphtounicode{commaturnedabovecmb}{00A0} \pdfglyphtounicode{commaturnedmod}{00A0} \pdfglyphtounicode{compass}{00A0} \pdfglyphtounicode{complement}{00A0} \pdfglyphtounicode{compwordmark}{00A0} \pdfglyphtounicode{congruent}{00A0} \pdfglyphtounicode{contourintegral}{00A0} \pdfglyphtounicode{control}{00A0} \pdfglyphtounicode{controlACK}{00A0} \pdfglyphtounicode{controlBEL}{00A0} \pdfglyphtounicode{controlBS}{00A0} \pdfglyphtounicode{controlCAN}{00A0} \pdfglyphtounicode{controlCR}{00A0} \pdfglyphtounicode{controlDC1}{00A0} \pdfglyphtounicode{controlDC2}{00A0} \pdfglyphtounicode{controlDC3}{00A0} \pdfglyphtounicode{controlDC4}{00A0} \pdfglyphtounicode{controlDEL}{00A0} \pdfglyphtounicode{controlDLE}{00A0} \pdfglyphtounicode{controlEM}{00A0} \pdfglyphtounicode{controlENQ}{00A0} \pdfglyphtounicode{controlEOT}{00A0} \pdfglyphtounicode{controlESC}{00A0} \pdfglyphtounicode{controlETB}{00A0} \pdfglyphtounicode{controlETX}{00A0} \pdfglyphtounicode{controlFF}{00A0} \pdfglyphtounicode{controlFS}{00A0} \pdfglyphtounicode{controlGS}{00A0} \pdfglyphtounicode{controlHT}{00A0} \pdfglyphtounicode{controlLF}{00A0} \pdfglyphtounicode{controlNAK}{00A0} \pdfglyphtounicode{controlRS}{00A0} \pdfglyphtounicode{controlSI}{00A0} \pdfglyphtounicode{controlSO}{00A0} \pdfglyphtounicode{controlSOT}{00A0} \pdfglyphtounicode{controlSTX}{00A0} \pdfglyphtounicode{controlSUB}{00A0} \pdfglyphtounicode{controlSYN}{00A0} \pdfglyphtounicode{controlUS}{00A0} \pdfglyphtounicode{controlVT}{00A0} \pdfglyphtounicode{coproduct}{00A0} \pdfglyphtounicode{copyright}{00A0} \pdfglyphtounicode{copyrightsans}{00A0} \pdfglyphtounicode{copyrightserif}{00A0} \pdfglyphtounicode{cornerbracketleft}{00A0} \pdfglyphtounicode{cornerbracketlefthalfwidth}{00A0} 
\pdfglyphtounicode{cornerbracketleftvertical}{00A0} \pdfglyphtounicode{cornerbracketright}{00A0} \pdfglyphtounicode{cornerbracketrighthalfwidth}{00A0} \pdfglyphtounicode{cornerbracketrightvertical}{00A0} \pdfglyphtounicode{corporationsquare}{00A0} \pdfglyphtounicode{cosquare}{00A0} \pdfglyphtounicode{coverkgsquare}{00A0} \pdfglyphtounicode{cparen}{00A0} \pdfglyphtounicode{cruzeiro}{00A0} \pdfglyphtounicode{cstretched}{00A0} \pdfglyphtounicode{ct}{00A0} \pdfglyphtounicode{curlyand}{00A0} \pdfglyphtounicode{curlyleft}{00A0} \pdfglyphtounicode{curlyor}{00A0} \pdfglyphtounicode{curlyright}{00A0} \pdfglyphtounicode{currency}{00A0} \pdfglyphtounicode{cwm}{00A0} \pdfglyphtounicode{cyrBreve}{00A0} \pdfglyphtounicode{cyrFlex}{00A0} \pdfglyphtounicode{cyrbreve}{00A0} \pdfglyphtounicode{cyrflex}{00A0} \pdfglyphtounicode{d}{00A0} \pdfglyphtounicode{daarmenian}{00A0} \pdfglyphtounicode{dabengali}{00A0} \pdfglyphtounicode{dadarabic}{00A0} \pdfglyphtounicode{dadeva}{00A0} \pdfglyphtounicode{dadfinalarabic}{00A0} \pdfglyphtounicode{dadinitialarabic}{00A0} \pdfglyphtounicode{dadmedialarabic}{00A0} \pdfglyphtounicode{dagesh}{00A0} \pdfglyphtounicode{dageshhebrew}{00A0} \pdfglyphtounicode{dagger}{00A0} \pdfglyphtounicode{daggerdbl}{00A0} \pdfglyphtounicode{dagujarati}{00A0} \pdfglyphtounicode{dagurmukhi}{00A0} \pdfglyphtounicode{dahiragana}{00A0} \pdfglyphtounicode{dakatakana}{00A0} \pdfglyphtounicode{dalarabic}{00A0} \pdfglyphtounicode{dalet}{00A0} \pdfglyphtounicode{daletdagesh}{00A0} \pdfglyphtounicode{daletdageshhebrew}{00A0} \pdfglyphtounicode{daleth}{00A0} \pdfglyphtounicode{dalethatafpatah}{00A0} \pdfglyphtounicode{dalethatafpatahhebrew}{00A0} \pdfglyphtounicode{dalethatafsegol}{00A0} \pdfglyphtounicode{dalethatafsegolhebrew}{00A0} \pdfglyphtounicode{dalethebrew}{00A0} \pdfglyphtounicode{dalethiriq}{00A0} \pdfglyphtounicode{dalethiriqhebrew}{00A0} \pdfglyphtounicode{daletholam}{00A0} \pdfglyphtounicode{daletholamhebrew}{00A0} \pdfglyphtounicode{daletpatah}{00A0} 
\pdfglyphtounicode{daletpatahhebrew}{00A0} \pdfglyphtounicode{daletqamats}{00A0} \pdfglyphtounicode{daletqamatshebrew}{00A0} \pdfglyphtounicode{daletqubuts}{00A0} \pdfglyphtounicode{daletqubutshebrew}{00A0} \pdfglyphtounicode{daletsegol}{00A0} \pdfglyphtounicode{daletsegolhebrew}{00A0} \pdfglyphtounicode{daletsheva}{00A0} \pdfglyphtounicode{daletshevahebrew}{00A0} \pdfglyphtounicode{dalettsere}{00A0} \pdfglyphtounicode{dalettserehebrew}{00A0} \pdfglyphtounicode{dalfinalarabic}{00A0} \pdfglyphtounicode{dammaarabic}{00A0} \pdfglyphtounicode{dammalowarabic}{00A0} \pdfglyphtounicode{dammatanaltonearabic}{00A0} \pdfglyphtounicode{dammatanarabic}{00A0} \pdfglyphtounicode{danda}{00A0} \pdfglyphtounicode{dargahebrew}{00A0} \pdfglyphtounicode{dargalefthebrew}{00A0} \pdfglyphtounicode{dasiapneumatacyrilliccmb}{00A0} \pdfglyphtounicode{dbar}{00A0} \pdfglyphtounicode{dblGrave}{00A0} \pdfglyphtounicode{dblanglebracketleft}{00A0} \pdfglyphtounicode{dblanglebracketleftvertical}{00A0} \pdfglyphtounicode{dblanglebracketright}{00A0} \pdfglyphtounicode{dblanglebracketrightvertical}{00A0} \pdfglyphtounicode{dblarchinvertedbelowcmb}{00A0} \pdfglyphtounicode{dblarrowdwn}{00A0} \pdfglyphtounicode{dblarrowheadleft}{00A0} \pdfglyphtounicode{dblarrowheadright}{00A0} \pdfglyphtounicode{dblarrowleft}{00A0} \pdfglyphtounicode{dblarrowright}{00A0} \pdfglyphtounicode{dblarrowup}{00A0} \pdfglyphtounicode{dblbracketleft}{00A0} \pdfglyphtounicode{dblbracketright}{00A0} \pdfglyphtounicode{dbldanda}{00A0} \pdfglyphtounicode{dblgrave}{00A0} \pdfglyphtounicode{dblgravecmb}{00A0} \pdfglyphtounicode{dblintegral}{00A0} \pdfglyphtounicode{dbllowline}{00A0} \pdfglyphtounicode{dbllowlinecmb}{00A0} \pdfglyphtounicode{dbloverlinecmb}{00A0} \pdfglyphtounicode{dblprimemod}{00A0} \pdfglyphtounicode{dblverticalbar}{00A0} \pdfglyphtounicode{dblverticallineabovecmb}{00A0} \pdfglyphtounicode{dbopomofo}{00A0} \pdfglyphtounicode{dbsquare}{00A0} \pdfglyphtounicode{dcaron}{00A0} \pdfglyphtounicode{dcedilla}{00A0} 
\pdfglyphtounicode{dcircle}{00A0} \pdfglyphtounicode{dcircumflexbelow}{00A0} \pdfglyphtounicode{dcroat}{00A0} \pdfglyphtounicode{ddabengali}{00A0} \pdfglyphtounicode{ddadeva}{00A0} \pdfglyphtounicode{ddagujarati}{00A0} \pdfglyphtounicode{ddagurmukhi}{00A0} \pdfglyphtounicode{ddalarabic}{00A0} \pdfglyphtounicode{ddalfinalarabic}{00A0} \pdfglyphtounicode{dddhadeva}{00A0} \pdfglyphtounicode{ddhabengali}{00A0} \pdfglyphtounicode{ddhadeva}{00A0} \pdfglyphtounicode{ddhagujarati}{00A0} \pdfglyphtounicode{ddhagurmukhi}{00A0} \pdfglyphtounicode{ddotaccent}{00A0} \pdfglyphtounicode{ddotbelow}{00A0} \pdfglyphtounicode{decimalseparatorarabic}{00A0} \pdfglyphtounicode{decimalseparatorpersian}{00A0} \pdfglyphtounicode{decyrillic}{00A0} \pdfglyphtounicode{defines}{00A0} \pdfglyphtounicode{degree}{00A0} \pdfglyphtounicode{dehihebrew}{00A0} \pdfglyphtounicode{dehiragana}{00A0} \pdfglyphtounicode{deicoptic}{00A0} \pdfglyphtounicode{dekatakana}{00A0} \pdfglyphtounicode{deleteleft}{00A0} \pdfglyphtounicode{deleteright}{00A0} \pdfglyphtounicode{delta}{00A0} \pdfglyphtounicode{deltaturned}{00A0} \pdfglyphtounicode{denominatorminusonenumeratorbengali}{00A0} \pdfglyphtounicode{dezh}{00A0} \pdfglyphtounicode{dhabengali}{00A0} \pdfglyphtounicode{dhadeva}{00A0} \pdfglyphtounicode{dhagujarati}{00A0} \pdfglyphtounicode{dhagurmukhi}{00A0} \pdfglyphtounicode{dhook}{00A0} \pdfglyphtounicode{dialytikatonos}{00A0} \pdfglyphtounicode{dialytikatonoscmb}{00A0} \pdfglyphtounicode{diamond}{00A0} \pdfglyphtounicode{diamondmath}{00A0} \pdfglyphtounicode{diamondsolid}{00A0} \pdfglyphtounicode{diamondsuitwhite}{00A0} \pdfglyphtounicode{dieresis}{00A0} \pdfglyphtounicode{dieresisacute}{00A0} \pdfglyphtounicode{dieresisbelowcmb}{00A0} \pdfglyphtounicode{dieresiscmb}{00A0} \pdfglyphtounicode{dieresisgrave}{00A0} \pdfglyphtounicode{dieresistonos}{00A0} \pdfglyphtounicode{difference}{00A0} \pdfglyphtounicode{dihiragana}{00A0} \pdfglyphtounicode{dikatakana}{00A0} \pdfglyphtounicode{dittomark}{00A0} 
\pdfglyphtounicode{divide}{00A0} \pdfglyphtounicode{dividemultiply}{00A0} \pdfglyphtounicode{divides}{00A0} \pdfglyphtounicode{divisionslash}{00A0} \pdfglyphtounicode{djecyrillic}{00A0} \pdfglyphtounicode{dkshade}{00A0} \pdfglyphtounicode{dlinebelow}{00A0} \pdfglyphtounicode{dlsquare}{00A0} \pdfglyphtounicode{dmacron}{00A0} \pdfglyphtounicode{dmonospace}{00A0} \pdfglyphtounicode{dnblock}{00A0} \pdfglyphtounicode{dochadathai}{00A0} \pdfglyphtounicode{dodekthai}{00A0} \pdfglyphtounicode{dohiragana}{00A0} \pdfglyphtounicode{dokatakana}{00A0} \pdfglyphtounicode{dollar}{00A0} \pdfglyphtounicode{dollarinferior}{00A0} \pdfglyphtounicode{dollarmonospace}{00A0} \pdfglyphtounicode{dollaroldstyle}{00A0} \pdfglyphtounicode{dollarsmall}{00A0} \pdfglyphtounicode{dollarsuperior}{00A0} \pdfglyphtounicode{dong}{00A0} \pdfglyphtounicode{dorusquare}{00A0} \pdfglyphtounicode{dotaccent}{00A0} \pdfglyphtounicode{dotaccentcmb}{00A0} \pdfglyphtounicode{dotbelowcmb}{00A0} \pdfglyphtounicode{dotbelowcomb}{00A0} \pdfglyphtounicode{dotkatakana}{00A0} \pdfglyphtounicode{dotlessi}{00A0} \pdfglyphtounicode{dotlessj}{00A0} \pdfglyphtounicode{dotlessjstrokehook}{00A0} \pdfglyphtounicode{dotmath}{00A0} \pdfglyphtounicode{dotplus}{00A0} \pdfglyphtounicode{dottedcircle}{00A0} \pdfglyphtounicode{doubleyodpatah}{00A0} \pdfglyphtounicode{doubleyodpatahhebrew}{00A0} \pdfglyphtounicode{downfall}{00A0} \pdfglyphtounicode{downslope}{00A0} \pdfglyphtounicode{downtackbelowcmb}{00A0} \pdfglyphtounicode{downtackmod}{00A0} \pdfglyphtounicode{dparen}{00A0} \pdfglyphtounicode{dsuperior}{00A0} \pdfglyphtounicode{dtail}{00A0} \pdfglyphtounicode{dtopbar}{00A0} \pdfglyphtounicode{duhiragana}{00A0} \pdfglyphtounicode{dukatakana}{00A0} \pdfglyphtounicode{dz}{00A0} \pdfglyphtounicode{dzaltone}{00A0} \pdfglyphtounicode{dzcaron}{00A0} \pdfglyphtounicode{dzcurl}{00A0} \pdfglyphtounicode{dzeabkhasiancyrillic}{00A0} \pdfglyphtounicode{dzecyrillic}{00A0} \pdfglyphtounicode{dzhecyrillic}{00A0} \pdfglyphtounicode{e}{00A0} 
\pdfglyphtounicode{eacute}{00A0} \pdfglyphtounicode{earth}{00A0} \pdfglyphtounicode{ebengali}{00A0} \pdfglyphtounicode{ebopomofo}{00A0} \pdfglyphtounicode{ebreve}{00A0} \pdfglyphtounicode{ecandradeva}{00A0} \pdfglyphtounicode{ecandragujarati}{00A0} \pdfglyphtounicode{ecandravowelsigndeva}{00A0} \pdfglyphtounicode{ecandravowelsigngujarati}{00A0} \pdfglyphtounicode{ecaron}{00A0} \pdfglyphtounicode{ecedillabreve}{00A0} \pdfglyphtounicode{echarmenian}{00A0} \pdfglyphtounicode{echyiwnarmenian}{00A0} \pdfglyphtounicode{ecircle}{00A0} \pdfglyphtounicode{ecircumflex}{00A0} \pdfglyphtounicode{ecircumflexacute}{00A0} \pdfglyphtounicode{ecircumflexbelow}{00A0} \pdfglyphtounicode{ecircumflexdotbelow}{00A0} \pdfglyphtounicode{ecircumflexgrave}{00A0} \pdfglyphtounicode{ecircumflexhookabove}{00A0} \pdfglyphtounicode{ecircumflextilde}{00A0} \pdfglyphtounicode{ecyrillic}{00A0} \pdfglyphtounicode{edblgrave}{00A0} \pdfglyphtounicode{edeva}{00A0} \pdfglyphtounicode{edieresis}{00A0} \pdfglyphtounicode{edot}{00A0} \pdfglyphtounicode{edotaccent}{00A0} \pdfglyphtounicode{edotbelow}{00A0} \pdfglyphtounicode{eegurmukhi}{00A0} \pdfglyphtounicode{eematragurmukhi}{00A0} \pdfglyphtounicode{efcyrillic}{00A0} \pdfglyphtounicode{egrave}{00A0} \pdfglyphtounicode{egujarati}{00A0} \pdfglyphtounicode{eharmenian}{00A0} \pdfglyphtounicode{ehbopomofo}{00A0} \pdfglyphtounicode{ehiragana}{00A0} \pdfglyphtounicode{ehookabove}{00A0} \pdfglyphtounicode{eibopomofo}{00A0} \pdfglyphtounicode{eight}{00A0} \pdfglyphtounicode{eightarabic}{00A0} \pdfglyphtounicode{eightbengali}{00A0} \pdfglyphtounicode{eightcircle}{00A0} \pdfglyphtounicode{eightcircleinversesansserif}{00A0} \pdfglyphtounicode{eightdeva}{00A0} \pdfglyphtounicode{eighteencircle}{00A0} \pdfglyphtounicode{eighteenparen}{00A0} \pdfglyphtounicode{eighteenperiod}{00A0} \pdfglyphtounicode{eightgujarati}{00A0} \pdfglyphtounicode{eightgurmukhi}{00A0} \pdfglyphtounicode{eighthackarabic}{00A0} \pdfglyphtounicode{eighthangzhou}{00A0} 
\pdfglyphtounicode{eighthnotebeamed}{00A0} \pdfglyphtounicode{eightideographicparen}{00A0} \pdfglyphtounicode{eightinferior}{00A0} \pdfglyphtounicode{eightmonospace}{00A0} \pdfglyphtounicode{eightoldstyle}{00A0} \pdfglyphtounicode{eightparen}{00A0} \pdfglyphtounicode{eightperiod}{00A0} \pdfglyphtounicode{eightpersian}{00A0} \pdfglyphtounicode{eightroman}{00A0} \pdfglyphtounicode{eightsuperior}{00A0} \pdfglyphtounicode{eightthai}{00A0} \pdfglyphtounicode{einvertedbreve}{00A0} \pdfglyphtounicode{eiotifiedcyrillic}{00A0} \pdfglyphtounicode{ekatakana}{00A0} \pdfglyphtounicode{ekatakanahalfwidth}{00A0} \pdfglyphtounicode{ekonkargurmukhi}{00A0} \pdfglyphtounicode{ekorean}{00A0} \pdfglyphtounicode{elcyrillic}{00A0} \pdfglyphtounicode{element}{00A0} \pdfglyphtounicode{elevencircle}{00A0} \pdfglyphtounicode{elevenparen}{00A0} \pdfglyphtounicode{elevenperiod}{00A0} \pdfglyphtounicode{elevenroman}{00A0} \pdfglyphtounicode{ellipsis}{00A0} \pdfglyphtounicode{ellipsisvertical}{00A0} \pdfglyphtounicode{emacron}{00A0} \pdfglyphtounicode{emacronacute}{00A0} \pdfglyphtounicode{emacrongrave}{00A0} \pdfglyphtounicode{emcyrillic}{00A0} \pdfglyphtounicode{emdash}{00A0} \pdfglyphtounicode{emdashvertical}{00A0} \pdfglyphtounicode{emonospace}{00A0} \pdfglyphtounicode{emphasismarkarmenian}{00A0} \pdfglyphtounicode{emptyset}{00A0} \pdfglyphtounicode{enbopomofo}{00A0} \pdfglyphtounicode{encyrillic}{00A0} \pdfglyphtounicode{endash}{00A0} \pdfglyphtounicode{endashvertical}{00A0} \pdfglyphtounicode{endescendercyrillic}{00A0} \pdfglyphtounicode{eng}{00A0} \pdfglyphtounicode{engbopomofo}{00A0} \pdfglyphtounicode{enghecyrillic}{00A0} \pdfglyphtounicode{enhookcyrillic}{00A0} \pdfglyphtounicode{enspace}{00A0} \pdfglyphtounicode{eogonek}{00A0} \pdfglyphtounicode{eokorean}{00A0} \pdfglyphtounicode{eopen}{00A0} \pdfglyphtounicode{eopenclosed}{00A0} \pdfglyphtounicode{eopenreversed}{00A0} \pdfglyphtounicode{eopenreversedclosed}{00A0} \pdfglyphtounicode{eopenreversedhook}{00A0} 
\pdfglyphtounicode{eparen}{00A0} \pdfglyphtounicode{epsilon}{00A0} \pdfglyphtounicode{epsilon1}{00A0} \pdfglyphtounicode{epsiloninv}{00A0} \pdfglyphtounicode{epsilontonos}{00A0} \pdfglyphtounicode{equal}{00A0} \pdfglyphtounicode{equaldotleftright}{00A0} \pdfglyphtounicode{equaldotrightleft}{00A0} \pdfglyphtounicode{equalmonospace}{00A0} \pdfglyphtounicode{equalorfollows}{00A0} \pdfglyphtounicode{equalorgreater}{00A0} \pdfglyphtounicode{equalorless}{00A0} \pdfglyphtounicode{equalorprecedes}{00A0} \pdfglyphtounicode{equalorsimilar}{00A0} \pdfglyphtounicode{equalsdots}{00A0} \pdfglyphtounicode{equalsmall}{00A0} \pdfglyphtounicode{equalsuperior}{00A0} \pdfglyphtounicode{equivalence}{00A0} \pdfglyphtounicode{equivasymptotic}{00A0} \pdfglyphtounicode{erbopomofo}{00A0} \pdfglyphtounicode{ercyrillic}{00A0} \pdfglyphtounicode{ereversed}{00A0} \pdfglyphtounicode{ereversedcyrillic}{00A0} \pdfglyphtounicode{escyrillic}{00A0} \pdfglyphtounicode{esdescendercyrillic}{00A0} \pdfglyphtounicode{esh}{00A0} \pdfglyphtounicode{eshcurl}{00A0} \pdfglyphtounicode{eshortdeva}{00A0} \pdfglyphtounicode{eshortvowelsigndeva}{00A0} \pdfglyphtounicode{eshreversedloop}{00A0} \pdfglyphtounicode{eshsquatreversed}{00A0} \pdfglyphtounicode{esmallhiragana}{00A0} \pdfglyphtounicode{esmallkatakana}{00A0} \pdfglyphtounicode{esmallkatakanahalfwidth}{00A0} \pdfglyphtounicode{estimated}{00A0} \pdfglyphtounicode{esuperior}{00A0} \pdfglyphtounicode{eta}{00A0} \pdfglyphtounicode{etarmenian}{00A0} \pdfglyphtounicode{etatonos}{00A0} \pdfglyphtounicode{eth}{00A0} \pdfglyphtounicode{etilde}{00A0} \pdfglyphtounicode{etildebelow}{00A0} \pdfglyphtounicode{etnahtafoukhhebrew}{00A0} \pdfglyphtounicode{etnahtafoukhlefthebrew}{00A0} \pdfglyphtounicode{etnahtahebrew}{00A0} \pdfglyphtounicode{etnahtalefthebrew}{00A0} \pdfglyphtounicode{eturned}{00A0} \pdfglyphtounicode{eukorean}{00A0} \pdfglyphtounicode{euro}{00A0} \pdfglyphtounicode{evowelsignbengali}{00A0} \pdfglyphtounicode{evowelsigndeva}{00A0} 
\pdfglyphtounicode{evowelsigngujarati}{00A0} \pdfglyphtounicode{exclam}{00A0} \pdfglyphtounicode{exclamarmenian}{00A0} \pdfglyphtounicode{exclamdbl}{00A0} \pdfglyphtounicode{exclamdown}{00A0} \pdfglyphtounicode{exclamdownsmall}{00A0} \pdfglyphtounicode{exclammonospace}{00A0} \pdfglyphtounicode{exclamsmall}{00A0} \pdfglyphtounicode{existential}{00A0} \pdfglyphtounicode{ezh}{00A0} \pdfglyphtounicode{ezhcaron}{00A0} \pdfglyphtounicode{ezhcurl}{00A0} \pdfglyphtounicode{ezhreversed}{00A0} \pdfglyphtounicode{ezhtail}{00A0} \pdfglyphtounicode{f}{00A0} \pdfglyphtounicode{fadeva}{00A0} \pdfglyphtounicode{fagurmukhi}{00A0} \pdfglyphtounicode{fahrenheit}{00A0} \pdfglyphtounicode{fathaarabic}{00A0} \pdfglyphtounicode{fathalowarabic}{00A0} \pdfglyphtounicode{fathatanarabic}{00A0} \pdfglyphtounicode{fbopomofo}{00A0} \pdfglyphtounicode{fcircle}{00A0} \pdfglyphtounicode{fdotaccent}{00A0} \pdfglyphtounicode{feharabic}{00A0} \pdfglyphtounicode{feharmenian}{00A0} \pdfglyphtounicode{fehfinalarabic}{00A0} \pdfglyphtounicode{fehinitialarabic}{00A0} \pdfglyphtounicode{fehmedialarabic}{00A0} \pdfglyphtounicode{feicoptic}{00A0} \pdfglyphtounicode{female}{00A0} \pdfglyphtounicode{ff}{00A0} \pdfglyphtounicode{ffi}{00A0} \pdfglyphtounicode{ffl}{00A0} \pdfglyphtounicode{fi}{00A0} \pdfglyphtounicode{fifteencircle}{00A0} \pdfglyphtounicode{fifteenparen}{00A0} \pdfglyphtounicode{fifteenperiod}{00A0} \pdfglyphtounicode{figuredash}{00A0} \pdfglyphtounicode{filledbox}{00A0} \pdfglyphtounicode{filledrect}{00A0} \pdfglyphtounicode{finalkaf}{00A0} \pdfglyphtounicode{finalkafdagesh}{00A0} \pdfglyphtounicode{finalkafdageshhebrew}{00A0} \pdfglyphtounicode{finalkafhebrew}{00A0} \pdfglyphtounicode{finalkafqamats}{00A0} \pdfglyphtounicode{finalkafqamatshebrew}{00A0} \pdfglyphtounicode{finalkafsheva}{00A0} \pdfglyphtounicode{finalkafshevahebrew}{00A0} \pdfglyphtounicode{finalmem}{00A0} \pdfglyphtounicode{finalmemhebrew}{00A0} \pdfglyphtounicode{finalnun}{00A0} \pdfglyphtounicode{finalnunhebrew}{00A0} 
\pdfglyphtounicode{finalpe}{00A0} \pdfglyphtounicode{finalpehebrew}{00A0} \pdfglyphtounicode{finaltsadi}{00A0} \pdfglyphtounicode{finaltsadihebrew}{00A0} \pdfglyphtounicode{firsttonechinese}{00A0} \pdfglyphtounicode{fisheye}{00A0} \pdfglyphtounicode{fitacyrillic}{00A0} \pdfglyphtounicode{five}{00A0} \pdfglyphtounicode{fivearabic}{00A0} \pdfglyphtounicode{fivebengali}{00A0} \pdfglyphtounicode{fivecircle}{00A0} \pdfglyphtounicode{fivecircleinversesansserif}{00A0} \pdfglyphtounicode{fivedeva}{00A0} \pdfglyphtounicode{fiveeighths}{00A0} \pdfglyphtounicode{fivegujarati}{00A0} \pdfglyphtounicode{fivegurmukhi}{00A0} \pdfglyphtounicode{fivehackarabic}{00A0} \pdfglyphtounicode{fivehangzhou}{00A0} \pdfglyphtounicode{fiveideographicparen}{00A0} \pdfglyphtounicode{fiveinferior}{00A0} \pdfglyphtounicode{fivemonospace}{00A0} \pdfglyphtounicode{fiveoldstyle}{00A0} \pdfglyphtounicode{fiveparen}{00A0} \pdfglyphtounicode{fiveperiod}{00A0} \pdfglyphtounicode{fivepersian}{00A0} \pdfglyphtounicode{fiveroman}{00A0} \pdfglyphtounicode{fivesuperior}{00A0} \pdfglyphtounicode{fivethai}{00A0} \pdfglyphtounicode{fl}{00A0} \pdfglyphtounicode{flat}{00A0} \pdfglyphtounicode{floorleft}{00A0} \pdfglyphtounicode{floorright}{00A0} \pdfglyphtounicode{florin}{00A0} \pdfglyphtounicode{fmonospace}{00A0} \pdfglyphtounicode{fmsquare}{00A0} \pdfglyphtounicode{fofanthai}{00A0} \pdfglyphtounicode{fofathai}{00A0} \pdfglyphtounicode{follownotdbleqv}{00A0} \pdfglyphtounicode{follownotslnteql}{00A0} \pdfglyphtounicode{followornoteqvlnt}{00A0} \pdfglyphtounicode{follows}{00A0} \pdfglyphtounicode{followsequal}{00A0} \pdfglyphtounicode{followsorcurly}{00A0} \pdfglyphtounicode{followsorequal}{00A0} \pdfglyphtounicode{fongmanthai}{00A0} \pdfglyphtounicode{forall}{00A0} \pdfglyphtounicode{forces}{00A0} \pdfglyphtounicode{forcesbar}{00A0} \pdfglyphtounicode{fork}{00A0} \pdfglyphtounicode{four}{00A0} \pdfglyphtounicode{fourarabic}{00A0} \pdfglyphtounicode{fourbengali}{00A0} \pdfglyphtounicode{fourcircle}{00A0} 
\pdfglyphtounicode{fourcircleinversesansserif}{00A0} \pdfglyphtounicode{fourdeva}{00A0} \pdfglyphtounicode{fourgujarati}{00A0} \pdfglyphtounicode{fourgurmukhi}{00A0} \pdfglyphtounicode{fourhackarabic}{00A0} \pdfglyphtounicode{fourhangzhou}{00A0} \pdfglyphtounicode{fourideographicparen}{00A0} \pdfglyphtounicode{fourinferior}{00A0} \pdfglyphtounicode{fourmonospace}{00A0} \pdfglyphtounicode{fournumeratorbengali}{00A0} \pdfglyphtounicode{fouroldstyle}{00A0} \pdfglyphtounicode{fourparen}{00A0} \pdfglyphtounicode{fourperiod}{00A0} \pdfglyphtounicode{fourpersian}{00A0} \pdfglyphtounicode{fourroman}{00A0} \pdfglyphtounicode{foursuperior}{00A0} \pdfglyphtounicode{fourteencircle}{00A0} \pdfglyphtounicode{fourteenparen}{00A0} \pdfglyphtounicode{fourteenperiod}{00A0} \pdfglyphtounicode{fourthai}{00A0} \pdfglyphtounicode{fourthtonechinese}{00A0} \pdfglyphtounicode{fparen}{00A0} \pdfglyphtounicode{fraction}{00A0} \pdfglyphtounicode{franc}{00A0} \pdfglyphtounicode{frown}{00A0} \pdfglyphtounicode{g}{00A0} \pdfglyphtounicode{gabengali}{00A0} \pdfglyphtounicode{gacute}{00A0} \pdfglyphtounicode{gadeva}{00A0} \pdfglyphtounicode{gafarabic}{00A0} \pdfglyphtounicode{gaffinalarabic}{00A0} \pdfglyphtounicode{gafinitialarabic}{00A0} \pdfglyphtounicode{gafmedialarabic}{00A0} \pdfglyphtounicode{gagujarati}{00A0} \pdfglyphtounicode{gagurmukhi}{00A0} \pdfglyphtounicode{gahiragana}{00A0} \pdfglyphtounicode{gakatakana}{00A0} \pdfglyphtounicode{gamma}{00A0} \pdfglyphtounicode{gammalatinsmall}{00A0} \pdfglyphtounicode{gammasuperior}{00A0} \pdfglyphtounicode{gangiacoptic}{00A0} \pdfglyphtounicode{gbopomofo}{00A0} \pdfglyphtounicode{gbreve}{00A0} \pdfglyphtounicode{gcaron}{00A0} \pdfglyphtounicode{gcedilla}{00A0} \pdfglyphtounicode{gcircle}{00A0} \pdfglyphtounicode{gcircumflex}{00A0} \pdfglyphtounicode{gcommaaccent}{00A0} \pdfglyphtounicode{gdot}{00A0} \pdfglyphtounicode{gdotaccent}{00A0} \pdfglyphtounicode{gecyrillic}{00A0} \pdfglyphtounicode{gehiragana}{00A0} \pdfglyphtounicode{gekatakana}{00A0} 
\pdfglyphtounicode{geomequivalent}{00A0} \pdfglyphtounicode{geometricallyequal}{00A0} \pdfglyphtounicode{gereshaccenthebrew}{00A0} \pdfglyphtounicode{gereshhebrew}{00A0} \pdfglyphtounicode{gereshmuqdamhebrew}{00A0} \pdfglyphtounicode{germandbls}{00A0} \pdfglyphtounicode{gershayimaccenthebrew}{00A0} \pdfglyphtounicode{gershayimhebrew}{00A0} \pdfglyphtounicode{getamark}{00A0} \pdfglyphtounicode{ghabengali}{00A0} \pdfglyphtounicode{ghadarmenian}{00A0} \pdfglyphtounicode{ghadeva}{00A0} \pdfglyphtounicode{ghagujarati}{00A0} \pdfglyphtounicode{ghagurmukhi}{00A0} \pdfglyphtounicode{ghainarabic}{00A0} \pdfglyphtounicode{ghainfinalarabic}{00A0} \pdfglyphtounicode{ghaininitialarabic}{00A0} \pdfglyphtounicode{ghainmedialarabic}{00A0} \pdfglyphtounicode{ghemiddlehookcyrillic}{00A0} \pdfglyphtounicode{ghestrokecyrillic}{00A0} \pdfglyphtounicode{gheupturncyrillic}{00A0} \pdfglyphtounicode{ghhadeva}{00A0} \pdfglyphtounicode{ghhagurmukhi}{00A0} \pdfglyphtounicode{ghook}{00A0} \pdfglyphtounicode{ghzsquare}{00A0} \pdfglyphtounicode{gihiragana}{00A0} \pdfglyphtounicode{gikatakana}{00A0} \pdfglyphtounicode{gimarmenian}{00A0} \pdfglyphtounicode{gimel}{00A0} \pdfglyphtounicode{gimeldagesh}{00A0} \pdfglyphtounicode{gimeldageshhebrew}{00A0} \pdfglyphtounicode{gimelhebrew}{00A0} \pdfglyphtounicode{gjecyrillic}{00A0} \pdfglyphtounicode{glottalinvertedstroke}{00A0} \pdfglyphtounicode{glottalstop}{00A0} \pdfglyphtounicode{glottalstopinverted}{00A0} \pdfglyphtounicode{glottalstopmod}{00A0} \pdfglyphtounicode{glottalstopreversed}{00A0} \pdfglyphtounicode{glottalstopreversedmod}{00A0} \pdfglyphtounicode{glottalstopreversedsuperior}{00A0} \pdfglyphtounicode{glottalstopstroke}{00A0} \pdfglyphtounicode{glottalstopstrokereversed}{00A0} \pdfglyphtounicode{gmacron}{00A0} \pdfglyphtounicode{gmonospace}{00A0} \pdfglyphtounicode{gohiragana}{00A0} \pdfglyphtounicode{gokatakana}{00A0} \pdfglyphtounicode{gparen}{00A0} \pdfglyphtounicode{gpasquare}{00A0} \pdfglyphtounicode{gradient}{00A0} 
\pdfglyphtounicode{grave}{00A0} \pdfglyphtounicode{gravebelowcmb}{00A0} \pdfglyphtounicode{gravecmb}{00A0} \pdfglyphtounicode{gravecomb}{00A0} \pdfglyphtounicode{gravedeva}{00A0} \pdfglyphtounicode{gravelowmod}{00A0} \pdfglyphtounicode{gravemonospace}{00A0} \pdfglyphtounicode{gravetonecmb}{00A0} \pdfglyphtounicode{greater}{00A0} \pdfglyphtounicode{greaterdbleqlless}{00A0} \pdfglyphtounicode{greaterdblequal}{00A0} \pdfglyphtounicode{greaterdot}{00A0} \pdfglyphtounicode{greaterequal}{00A0} \pdfglyphtounicode{greaterequalorless}{00A0} \pdfglyphtounicode{greaterlessequal}{00A0} \pdfglyphtounicode{greatermonospace}{00A0} \pdfglyphtounicode{greatermuch}{00A0} \pdfglyphtounicode{greaternotdblequal}{00A0} \pdfglyphtounicode{greaternotequal}{00A0} \pdfglyphtounicode{greaterorapproxeql}{00A0} \pdfglyphtounicode{greaterorequalslant}{00A0} \pdfglyphtounicode{greaterorequivalent}{00A0} \pdfglyphtounicode{greaterorless}{00A0} \pdfglyphtounicode{greaterornotdbleql}{00A0} \pdfglyphtounicode{greaterornotequal}{00A0} \pdfglyphtounicode{greaterorsimilar}{00A0} \pdfglyphtounicode{greateroverequal}{00A0} \pdfglyphtounicode{greatersmall}{00A0} \pdfglyphtounicode{gscript}{00A0} \pdfglyphtounicode{gstroke}{00A0} \pdfglyphtounicode{guhiragana}{00A0} \pdfglyphtounicode{guillemotleft}{00A0} \pdfglyphtounicode{guillemotright}{00A0} \pdfglyphtounicode{guilsinglleft}{00A0} \pdfglyphtounicode{guilsinglright}{00A0} \pdfglyphtounicode{gukatakana}{00A0} \pdfglyphtounicode{guramusquare}{00A0} \pdfglyphtounicode{gysquare}{00A0} \pdfglyphtounicode{h}{00A0} \pdfglyphtounicode{haabkhasiancyrillic}{00A0} \pdfglyphtounicode{haaltonearabic}{00A0} \pdfglyphtounicode{habengali}{00A0} \pdfglyphtounicode{hadescendercyrillic}{00A0} \pdfglyphtounicode{hadeva}{00A0} \pdfglyphtounicode{hagujarati}{00A0} \pdfglyphtounicode{hagurmukhi}{00A0} \pdfglyphtounicode{haharabic}{00A0} \pdfglyphtounicode{hahfinalarabic}{00A0} \pdfglyphtounicode{hahinitialarabic}{00A0} \pdfglyphtounicode{hahiragana}{00A0} 
\pdfglyphtounicode{hahmedialarabic}{00A0} \pdfglyphtounicode{haitusquare}{00A0} \pdfglyphtounicode{hakatakana}{00A0} \pdfglyphtounicode{hakatakanahalfwidth}{00A0} \pdfglyphtounicode{halantgurmukhi}{00A0} \pdfglyphtounicode{hamzaarabic}{00A0} \pdfglyphtounicode{hamzadammaarabic}{00A0} \pdfglyphtounicode{hamzadammatanarabic}{00A0} \pdfglyphtounicode{hamzafathaarabic}{00A0} \pdfglyphtounicode{hamzafathatanarabic}{00A0} \pdfglyphtounicode{hamzalowarabic}{00A0} \pdfglyphtounicode{hamzalowkasraarabic}{00A0} \pdfglyphtounicode{hamzalowkasratanarabic}{00A0} \pdfglyphtounicode{hamzasukunarabic}{00A0} \pdfglyphtounicode{hangulfiller}{00A0} \pdfglyphtounicode{hardsigncyrillic}{00A0} \pdfglyphtounicode{harpoondownleft}{00A0} \pdfglyphtounicode{harpoondownright}{00A0} \pdfglyphtounicode{harpoonleftbarbup}{00A0} \pdfglyphtounicode{harpoonleftright}{00A0} \pdfglyphtounicode{harpoonrightbarbup}{00A0} \pdfglyphtounicode{harpoonrightleft}{00A0} \pdfglyphtounicode{harpoonupleft}{00A0} \pdfglyphtounicode{harpoonupright}{00A0} \pdfglyphtounicode{hasquare}{00A0} \pdfglyphtounicode{hatafpatah}{00A0} \pdfglyphtounicode{hatafpatah16}{00A0} \pdfglyphtounicode{hatafpatah23}{00A0} \pdfglyphtounicode{hatafpatah2f}{00A0} \pdfglyphtounicode{hatafpatahhebrew}{00A0} \pdfglyphtounicode{hatafpatahnarrowhebrew}{00A0} \pdfglyphtounicode{hatafpatahquarterhebrew}{00A0} \pdfglyphtounicode{hatafpatahwidehebrew}{00A0} \pdfglyphtounicode{hatafqamats}{00A0} \pdfglyphtounicode{hatafqamats1b}{00A0} \pdfglyphtounicode{hatafqamats28}{00A0} \pdfglyphtounicode{hatafqamats34}{00A0} \pdfglyphtounicode{hatafqamatshebrew}{00A0} \pdfglyphtounicode{hatafqamatsnarrowhebrew}{00A0} \pdfglyphtounicode{hatafqamatsquarterhebrew}{00A0} \pdfglyphtounicode{hatafqamatswidehebrew}{00A0} \pdfglyphtounicode{hatafsegol}{00A0} \pdfglyphtounicode{hatafsegol17}{00A0} \pdfglyphtounicode{hatafsegol24}{00A0} \pdfglyphtounicode{hatafsegol30}{00A0} \pdfglyphtounicode{hatafsegolhebrew}{00A0} \pdfglyphtounicode{hatafsegolnarrowhebrew}{00A0} 
\pdfglyphtounicode{hatafsegolquarterhebrew}{00A0} \pdfglyphtounicode{hatafsegolwidehebrew}{00A0} \pdfglyphtounicode{hbar}{00A0} \pdfglyphtounicode{hbopomofo}{00A0} \pdfglyphtounicode{hbrevebelow}{00A0} \pdfglyphtounicode{hcedilla}{00A0} \pdfglyphtounicode{hcircle}{00A0} \pdfglyphtounicode{hcircumflex}{00A0} \pdfglyphtounicode{hdieresis}{00A0} \pdfglyphtounicode{hdotaccent}{00A0} \pdfglyphtounicode{hdotbelow}{00A0} \pdfglyphtounicode{he}{00A0} \pdfglyphtounicode{heart}{00A0} \pdfglyphtounicode{heartsuitblack}{00A0} \pdfglyphtounicode{heartsuitwhite}{00A0} \pdfglyphtounicode{hedagesh}{00A0} \pdfglyphtounicode{hedageshhebrew}{00A0} \pdfglyphtounicode{hehaltonearabic}{00A0} \pdfglyphtounicode{heharabic}{00A0} \pdfglyphtounicode{hehebrew}{00A0} \pdfglyphtounicode{hehfinalaltonearabic}{00A0} \pdfglyphtounicode{hehfinalalttwoarabic}{00A0} \pdfglyphtounicode{hehfinalarabic}{00A0} \pdfglyphtounicode{hehhamzaabovefinalarabic}{00A0} \pdfglyphtounicode{hehhamzaaboveisolatedarabic}{00A0} \pdfglyphtounicode{hehinitialaltonearabic}{00A0} \pdfglyphtounicode{hehinitialarabic}{00A0} \pdfglyphtounicode{hehiragana}{00A0} \pdfglyphtounicode{hehmedialaltonearabic}{00A0} \pdfglyphtounicode{hehmedialarabic}{00A0} \pdfglyphtounicode{heiseierasquare}{00A0} \pdfglyphtounicode{hekatakana}{00A0} \pdfglyphtounicode{hekatakanahalfwidth}{00A0} \pdfglyphtounicode{hekutaarusquare}{00A0} \pdfglyphtounicode{henghook}{00A0} \pdfglyphtounicode{herutusquare}{00A0} \pdfglyphtounicode{het}{00A0} \pdfglyphtounicode{hethebrew}{00A0} \pdfglyphtounicode{hhook}{00A0} \pdfglyphtounicode{hhooksuperior}{00A0} \pdfglyphtounicode{hieuhacirclekorean}{00A0} \pdfglyphtounicode{hieuhaparenkorean}{00A0} \pdfglyphtounicode{hieuhcirclekorean}{00A0} \pdfglyphtounicode{hieuhkorean}{00A0} \pdfglyphtounicode{hieuhparenkorean}{00A0} \pdfglyphtounicode{hihiragana}{00A0} \pdfglyphtounicode{hikatakana}{00A0} \pdfglyphtounicode{hikatakanahalfwidth}{00A0} \pdfglyphtounicode{hiriq}{00A0} \pdfglyphtounicode{hiriq14}{00A0} 
\pdfglyphtounicode{hiriq21}{00A0} \pdfglyphtounicode{hiriq2d}{00A0} \pdfglyphtounicode{hiriqhebrew}{00A0} \pdfglyphtounicode{hiriqnarrowhebrew}{00A0} \pdfglyphtounicode{hiriqquarterhebrew}{00A0} \pdfglyphtounicode{hiriqwidehebrew}{00A0} \pdfglyphtounicode{hlinebelow}{00A0} \pdfglyphtounicode{hmonospace}{00A0} \pdfglyphtounicode{hoarmenian}{00A0} \pdfglyphtounicode{hohipthai}{00A0} \pdfglyphtounicode{hohiragana}{00A0} \pdfglyphtounicode{hokatakana}{00A0} \pdfglyphtounicode{hokatakanahalfwidth}{00A0} \pdfglyphtounicode{holam}{00A0} \pdfglyphtounicode{holam19}{00A0} \pdfglyphtounicode{holam26}{00A0} \pdfglyphtounicode{holam32}{00A0} \pdfglyphtounicode{holamhebrew}{00A0} \pdfglyphtounicode{holamnarrowhebrew}{00A0} \pdfglyphtounicode{holamquarterhebrew}{00A0} \pdfglyphtounicode{holamwidehebrew}{00A0} \pdfglyphtounicode{honokhukthai}{00A0} \pdfglyphtounicode{hookabovecomb}{00A0} \pdfglyphtounicode{hookcmb}{00A0} \pdfglyphtounicode{hookpalatalizedbelowcmb}{00A0} \pdfglyphtounicode{hookretroflexbelowcmb}{00A0} \pdfglyphtounicode{hoonsquare}{00A0} \pdfglyphtounicode{horicoptic}{00A0} \pdfglyphtounicode{horizontalbar}{00A0} \pdfglyphtounicode{horncmb}{00A0} \pdfglyphtounicode{hotsprings}{00A0} \pdfglyphtounicode{house}{00A0} \pdfglyphtounicode{hparen}{00A0} \pdfglyphtounicode{hsuperior}{00A0} \pdfglyphtounicode{hturned}{00A0} \pdfglyphtounicode{huhiragana}{00A0} \pdfglyphtounicode{huiitosquare}{00A0} \pdfglyphtounicode{hukatakana}{00A0} \pdfglyphtounicode{hukatakanahalfwidth}{00A0} \pdfglyphtounicode{hungarumlaut}{00A0} \pdfglyphtounicode{hungarumlautcmb}{00A0} \pdfglyphtounicode{hv}{00A0} \pdfglyphtounicode{hyphen}{00A0} \pdfglyphtounicode{hyphenchar}{00A0} \pdfglyphtounicode{hypheninferior}{00A0} \pdfglyphtounicode{hyphenmonospace}{00A0} \pdfglyphtounicode{hyphensmall}{00A0} \pdfglyphtounicode{hyphensuperior}{00A0} \pdfglyphtounicode{hyphentwo}{00A0} \pdfglyphtounicode{i}{00A0} \pdfglyphtounicode{iacute}{00A0} \pdfglyphtounicode{iacyrillic}{00A0} 
\pdfglyphtounicode{ibengali}{00A0} \pdfglyphtounicode{ibopomofo}{00A0} \pdfglyphtounicode{ibreve}{00A0} \pdfglyphtounicode{icaron}{00A0} \pdfglyphtounicode{icircle}{00A0} \pdfglyphtounicode{icircumflex}{00A0} \pdfglyphtounicode{icyrillic}{00A0} \pdfglyphtounicode{idblgrave}{00A0} \pdfglyphtounicode{ideographearthcircle}{00A0} \pdfglyphtounicode{ideographfirecircle}{00A0} \pdfglyphtounicode{ideographicallianceparen}{00A0} \pdfglyphtounicode{ideographiccallparen}{00A0} \pdfglyphtounicode{ideographiccentrecircle}{00A0} \pdfglyphtounicode{ideographicclose}{00A0} \pdfglyphtounicode{ideographiccomma}{00A0} \pdfglyphtounicode{ideographiccommaleft}{00A0} \pdfglyphtounicode{ideographiccongratulationparen}{00A0} \pdfglyphtounicode{ideographiccorrectcircle}{00A0} \pdfglyphtounicode{ideographicearthparen}{00A0} \pdfglyphtounicode{ideographicenterpriseparen}{00A0} \pdfglyphtounicode{ideographicexcellentcircle}{00A0} \pdfglyphtounicode{ideographicfestivalparen}{00A0} \pdfglyphtounicode{ideographicfinancialcircle}{00A0} \pdfglyphtounicode{ideographicfinancialparen}{00A0} \pdfglyphtounicode{ideographicfireparen}{00A0} \pdfglyphtounicode{ideographichaveparen}{00A0} \pdfglyphtounicode{ideographichighcircle}{00A0} \pdfglyphtounicode{ideographiciterationmark}{00A0} \pdfglyphtounicode{ideographiclaborcircle}{00A0} \pdfglyphtounicode{ideographiclaborparen}{00A0} \pdfglyphtounicode{ideographicleftcircle}{00A0} \pdfglyphtounicode{ideographiclowcircle}{00A0} \pdfglyphtounicode{ideographicmedicinecircle}{00A0} \pdfglyphtounicode{ideographicmetalparen}{00A0} \pdfglyphtounicode{ideographicmoonparen}{00A0} \pdfglyphtounicode{ideographicnameparen}{00A0} \pdfglyphtounicode{ideographicperiod}{00A0} \pdfglyphtounicode{ideographicprintcircle}{00A0} \pdfglyphtounicode{ideographicreachparen}{00A0} \pdfglyphtounicode{ideographicrepresentparen}{00A0} \pdfglyphtounicode{ideographicresourceparen}{00A0} \pdfglyphtounicode{ideographicrightcircle}{00A0} \pdfglyphtounicode{ideographicsecretcircle}{00A0} 
\pdfglyphtounicode{ideographicselfparen}{00A0} \pdfglyphtounicode{ideographicsocietyparen}{00A0} \pdfglyphtounicode{ideographicspace}{00A0} \pdfglyphtounicode{ideographicspecialparen}{00A0} \pdfglyphtounicode{ideographicstockparen}{00A0} \pdfglyphtounicode{ideographicstudyparen}{00A0} \pdfglyphtounicode{ideographicsunparen}{00A0} \pdfglyphtounicode{ideographicsuperviseparen}{00A0} \pdfglyphtounicode{ideographicwaterparen}{00A0} \pdfglyphtounicode{ideographicwoodparen}{00A0} \pdfglyphtounicode{ideographiczero}{00A0} \pdfglyphtounicode{ideographmetalcircle}{00A0} \pdfglyphtounicode{ideographmooncircle}{00A0} \pdfglyphtounicode{ideographnamecircle}{00A0} \pdfglyphtounicode{ideographsuncircle}{00A0} \pdfglyphtounicode{ideographwatercircle}{00A0} \pdfglyphtounicode{ideographwoodcircle}{00A0} \pdfglyphtounicode{ideva}{00A0} \pdfglyphtounicode{idieresis}{00A0} \pdfglyphtounicode{idieresisacute}{00A0} \pdfglyphtounicode{idieresiscyrillic}{00A0} \pdfglyphtounicode{idotbelow}{00A0} \pdfglyphtounicode{iebrevecyrillic}{00A0} \pdfglyphtounicode{iecyrillic}{00A0} \pdfglyphtounicode{ieungacirclekorean}{00A0} \pdfglyphtounicode{ieungaparenkorean}{00A0} \pdfglyphtounicode{ieungcirclekorean}{00A0} \pdfglyphtounicode{ieungkorean}{00A0} \pdfglyphtounicode{ieungparenkorean}{00A0} \pdfglyphtounicode{igrave}{00A0} \pdfglyphtounicode{igujarati}{00A0} \pdfglyphtounicode{igurmukhi}{00A0} \pdfglyphtounicode{ihiragana}{00A0} \pdfglyphtounicode{ihookabove}{00A0} \pdfglyphtounicode{iibengali}{00A0} \pdfglyphtounicode{iicyrillic}{00A0} \pdfglyphtounicode{iideva}{00A0} \pdfglyphtounicode{iigujarati}{00A0} \pdfglyphtounicode{iigurmukhi}{00A0} \pdfglyphtounicode{iimatragurmukhi}{00A0} \pdfglyphtounicode{iinvertedbreve}{00A0} \pdfglyphtounicode{iishortcyrillic}{00A0} \pdfglyphtounicode{iivowelsignbengali}{00A0} \pdfglyphtounicode{iivowelsigndeva}{00A0} \pdfglyphtounicode{iivowelsigngujarati}{00A0} \pdfglyphtounicode{ij}{00A0} \pdfglyphtounicode{ikatakana}{00A0} 
\pdfglyphtounicode{ikatakanahalfwidth}{00A0} \pdfglyphtounicode{ikorean}{00A0} \pdfglyphtounicode{ilde}{00A0} \pdfglyphtounicode{iluyhebrew}{00A0} \pdfglyphtounicode{imacron}{00A0} \pdfglyphtounicode{imacroncyrillic}{00A0} \pdfglyphtounicode{imageorapproximatelyequal}{00A0} \pdfglyphtounicode{imatragurmukhi}{00A0} \pdfglyphtounicode{imonospace}{00A0} \pdfglyphtounicode{increment}{00A0} \pdfglyphtounicode{infinity}{00A0} \pdfglyphtounicode{iniarmenian}{00A0} \pdfglyphtounicode{integerdivide}{00A0} \pdfglyphtounicode{integral}{00A0} \pdfglyphtounicode{integralbottom}{00A0} \pdfglyphtounicode{integralbt}{00A0} \pdfglyphtounicode{integralex}{00A0} \pdfglyphtounicode{integraltop}{00A0} \pdfglyphtounicode{integraltp}{00A0} \pdfglyphtounicode{intercal}{00A0} \pdfglyphtounicode{interrobang}{00A0} \pdfglyphtounicode{interrobangdown}{00A0} \pdfglyphtounicode{intersection}{00A0} \pdfglyphtounicode{intersectiondbl}{00A0} \pdfglyphtounicode{intersectionsq}{00A0} \pdfglyphtounicode{intisquare}{00A0} \pdfglyphtounicode{invbullet}{00A0} \pdfglyphtounicode{invcircle}{00A0} \pdfglyphtounicode{invsmileface}{00A0} \pdfglyphtounicode{iocyrillic}{00A0} \pdfglyphtounicode{iogonek}{00A0} \pdfglyphtounicode{iota}{00A0} \pdfglyphtounicode{iotadieresis}{00A0} \pdfglyphtounicode{iotadieresistonos}{00A0} \pdfglyphtounicode{iotalatin}{00A0} \pdfglyphtounicode{iotatonos}{00A0} \pdfglyphtounicode{iparen}{00A0} \pdfglyphtounicode{irigurmukhi}{00A0} \pdfglyphtounicode{ismallhiragana}{00A0} \pdfglyphtounicode{ismallkatakana}{00A0} \pdfglyphtounicode{ismallkatakanahalfwidth}{00A0} \pdfglyphtounicode{issharbengali}{00A0} \pdfglyphtounicode{istroke}{00A0} \pdfglyphtounicode{isuperior}{00A0} \pdfglyphtounicode{iterationhiragana}{00A0} \pdfglyphtounicode{iterationkatakana}{00A0} \pdfglyphtounicode{itilde}{00A0} \pdfglyphtounicode{itildebelow}{00A0} \pdfglyphtounicode{iubopomofo}{00A0} \pdfglyphtounicode{iucyrillic}{00A0} \pdfglyphtounicode{ivowelsignbengali}{00A0} \pdfglyphtounicode{ivowelsigndeva}{00A0} 
\pdfglyphtounicode{ivowelsigngujarati}{00A0} \pdfglyphtounicode{izhitsacyrillic}{00A0} \pdfglyphtounicode{izhitsadblgravecyrillic}{00A0} \pdfglyphtounicode{j}{00A0} \pdfglyphtounicode{jaarmenian}{00A0} \pdfglyphtounicode{jabengali}{00A0} \pdfglyphtounicode{jadeva}{00A0} \pdfglyphtounicode{jagujarati}{00A0} \pdfglyphtounicode{jagurmukhi}{00A0} \pdfglyphtounicode{jbopomofo}{00A0} \pdfglyphtounicode{jcaron}{00A0} \pdfglyphtounicode{jcircle}{00A0} \pdfglyphtounicode{jcircumflex}{00A0} \pdfglyphtounicode{jcrossedtail}{00A0} \pdfglyphtounicode{jdotlessstroke}{00A0} \pdfglyphtounicode{jecyrillic}{00A0} \pdfglyphtounicode{jeemarabic}{00A0} \pdfglyphtounicode{jeemfinalarabic}{00A0} \pdfglyphtounicode{jeeminitialarabic}{00A0} \pdfglyphtounicode{jeemmedialarabic}{00A0} \pdfglyphtounicode{jeharabic}{00A0} \pdfglyphtounicode{jehfinalarabic}{00A0} \pdfglyphtounicode{jhabengali}{00A0} \pdfglyphtounicode{jhadeva}{00A0} \pdfglyphtounicode{jhagujarati}{00A0} \pdfglyphtounicode{jhagurmukhi}{00A0} \pdfglyphtounicode{jheharmenian}{00A0} \pdfglyphtounicode{jis}{00A0} \pdfglyphtounicode{jmonospace}{00A0} \pdfglyphtounicode{jparen}{00A0} \pdfglyphtounicode{jsuperior}{00A0} \pdfglyphtounicode{k}{00A0} \pdfglyphtounicode{kabashkircyrillic}{00A0} \pdfglyphtounicode{kabengali}{00A0} \pdfglyphtounicode{kacute}{00A0} \pdfglyphtounicode{kacyrillic}{00A0} \pdfglyphtounicode{kadescendercyrillic}{00A0} \pdfglyphtounicode{kadeva}{00A0} \pdfglyphtounicode{kaf}{00A0} \pdfglyphtounicode{kafarabic}{00A0} \pdfglyphtounicode{kafdagesh}{00A0} \pdfglyphtounicode{kafdageshhebrew}{00A0} \pdfglyphtounicode{kaffinalarabic}{00A0} \pdfglyphtounicode{kafhebrew}{00A0} \pdfglyphtounicode{kafinitialarabic}{00A0} \pdfglyphtounicode{kafmedialarabic}{00A0} \pdfglyphtounicode{kafrafehebrew}{00A0} \pdfglyphtounicode{kagujarati}{00A0} \pdfglyphtounicode{kagurmukhi}{00A0} \pdfglyphtounicode{kahiragana}{00A0} \pdfglyphtounicode{kahookcyrillic}{00A0} \pdfglyphtounicode{kakatakana}{00A0} 
\pdfglyphtounicode{rohiragana}{00A0} \pdfglyphtounicode{rokatakana}{00A0} \pdfglyphtounicode{rokatakanahalfwidth}{00A0} \pdfglyphtounicode{roruathai}{00A0} \pdfglyphtounicode{rparen}{00A0} \pdfglyphtounicode{rrabengali}{00A0} \pdfglyphtounicode{rradeva}{00A0} \pdfglyphtounicode{rragurmukhi}{00A0} \pdfglyphtounicode{rreharabic}{00A0} \pdfglyphtounicode{rrehfinalarabic}{00A0} \pdfglyphtounicode{rrvocalicbengali}{00A0} \pdfglyphtounicode{rrvocalicdeva}{00A0} \pdfglyphtounicode{rrvocalicgujarati}{00A0} \pdfglyphtounicode{rrvocalicvowelsignbengali}{00A0} \pdfglyphtounicode{rrvocalicvowelsigndeva}{00A0} \pdfglyphtounicode{rrvocalicvowelsigngujarati}{00A0} \pdfglyphtounicode{rsuperior}{00A0} \pdfglyphtounicode{rtblock}{00A0} \pdfglyphtounicode{rturned}{00A0} \pdfglyphtounicode{rturnedsuperior}{00A0} \pdfglyphtounicode{ruhiragana}{00A0} \pdfglyphtounicode{rukatakana}{00A0} \pdfglyphtounicode{rukatakanahalfwidth}{00A0} \pdfglyphtounicode{rupeemarkbengali}{00A0} \pdfglyphtounicode{rupeesignbengali}{00A0} \pdfglyphtounicode{rupiah}{00A0} \pdfglyphtounicode{ruthai}{00A0} \pdfglyphtounicode{rvocalicbengali}{00A0} \pdfglyphtounicode{rvocalicdeva}{00A0} \pdfglyphtounicode{rvocalicgujarati}{00A0} \pdfglyphtounicode{rvocalicvowelsignbengali}{00A0} \pdfglyphtounicode{rvocalicvowelsigndeva}{00A0} \pdfglyphtounicode{rvocalicvowelsigngujarati}{00A0} \pdfglyphtounicode{s}{00A0} \pdfglyphtounicode{sabengali}{00A0} \pdfglyphtounicode{sacute}{00A0} \pdfglyphtounicode{sacutedotaccent}{00A0} \pdfglyphtounicode{sadarabic}{00A0} \pdfglyphtounicode{sadeva}{00A0} \pdfglyphtounicode{sadfinalarabic}{00A0} \pdfglyphtounicode{sadinitialarabic}{00A0} \pdfglyphtounicode{sadmedialarabic}{00A0} \pdfglyphtounicode{sagujarati}{00A0} \pdfglyphtounicode{sagurmukhi}{00A0} \pdfglyphtounicode{sahiragana}{00A0} \pdfglyphtounicode{sakatakana}{00A0} \pdfglyphtounicode{sakatakanahalfwidth}{00A0} \pdfglyphtounicode{sallallahoualayhewasallamarabic}{00A0} \pdfglyphtounicode{samekh}{00A0} 
\pdfglyphtounicode{samekhdagesh}{00A0} \pdfglyphtounicode{samekhdageshhebrew}{00A0} \pdfglyphtounicode{samekhhebrew}{00A0} \pdfglyphtounicode{saraaathai}{00A0} \pdfglyphtounicode{saraaethai}{00A0} \pdfglyphtounicode{saraaimaimalaithai}{00A0} \pdfglyphtounicode{saraaimaimuanthai}{00A0} \pdfglyphtounicode{saraamthai}{00A0} \pdfglyphtounicode{saraathai}{00A0} \pdfglyphtounicode{saraethai}{00A0} \pdfglyphtounicode{saraiileftthai}{00A0} \pdfglyphtounicode{saraiithai}{00A0} \pdfglyphtounicode{saraileftthai}{00A0} \pdfglyphtounicode{saraithai}{00A0} \pdfglyphtounicode{saraothai}{00A0} \pdfglyphtounicode{saraueeleftthai}{00A0} \pdfglyphtounicode{saraueethai}{00A0} \pdfglyphtounicode{saraueleftthai}{00A0} \pdfglyphtounicode{sarauethai}{00A0} \pdfglyphtounicode{sarauthai}{00A0} \pdfglyphtounicode{sarauuthai}{00A0} \pdfglyphtounicode{satisfies}{00A0} \pdfglyphtounicode{sbopomofo}{00A0} \pdfglyphtounicode{scaron}{00A0} \pdfglyphtounicode{scarondotaccent}{00A0} \pdfglyphtounicode{scedilla}{00A0} \pdfglyphtounicode{schwa}{00A0} \pdfglyphtounicode{schwacyrillic}{00A0} \pdfglyphtounicode{schwadieresiscyrillic}{00A0} \pdfglyphtounicode{schwahook}{00A0} \pdfglyphtounicode{scircle}{00A0} \pdfglyphtounicode{scircumflex}{00A0} \pdfglyphtounicode{scommaaccent}{00A0} \pdfglyphtounicode{sdotaccent}{00A0} \pdfglyphtounicode{sdotbelow}{00A0} \pdfglyphtounicode{sdotbelowdotaccent}{00A0} \pdfglyphtounicode{seagullbelowcmb}{00A0} \pdfglyphtounicode{second}{00A0} \pdfglyphtounicode{secondtonechinese}{00A0} \pdfglyphtounicode{section}{00A0} \pdfglyphtounicode{seenarabic}{00A0} \pdfglyphtounicode{seenfinalarabic}{00A0} \pdfglyphtounicode{seeninitialarabic}{00A0} \pdfglyphtounicode{seenmedialarabic}{00A0} \pdfglyphtounicode{segol}{00A0} \pdfglyphtounicode{segol13}{00A0} \pdfglyphtounicode{segol1f}{00A0} \pdfglyphtounicode{segol2c}{00A0} \pdfglyphtounicode{segolhebrew}{00A0} \pdfglyphtounicode{segolnarrowhebrew}{00A0} \pdfglyphtounicode{segolquarterhebrew}{00A0} 
\pdfglyphtounicode{segoltahebrew}{00A0} \pdfglyphtounicode{segolwidehebrew}{00A0} \pdfglyphtounicode{seharmenian}{00A0} \pdfglyphtounicode{sehiragana}{00A0} \pdfglyphtounicode{sekatakana}{00A0} \pdfglyphtounicode{sekatakanahalfwidth}{00A0} \pdfglyphtounicode{semicolon}{00A0} \pdfglyphtounicode{semicolonarabic}{00A0} \pdfglyphtounicode{semicolonmonospace}{00A0} \pdfglyphtounicode{semicolonsmall}{00A0} \pdfglyphtounicode{semivoicedmarkkana}{00A0} \pdfglyphtounicode{semivoicedmarkkanahalfwidth}{00A0} \pdfglyphtounicode{sentisquare}{00A0} \pdfglyphtounicode{sentosquare}{00A0} \pdfglyphtounicode{seven}{00A0} \pdfglyphtounicode{sevenarabic}{00A0} \pdfglyphtounicode{sevenbengali}{00A0} \pdfglyphtounicode{sevencircle}{00A0} \pdfglyphtounicode{sevencircleinversesansserif}{00A0} \pdfglyphtounicode{sevendeva}{00A0} \pdfglyphtounicode{seveneighths}{00A0} \pdfglyphtounicode{sevengujarati}{00A0} \pdfglyphtounicode{sevengurmukhi}{00A0} \pdfglyphtounicode{sevenhackarabic}{00A0} \pdfglyphtounicode{sevenhangzhou}{00A0} \pdfglyphtounicode{sevenideographicparen}{00A0} \pdfglyphtounicode{seveninferior}{00A0} \pdfglyphtounicode{sevenmonospace}{00A0} \pdfglyphtounicode{sevenoldstyle}{00A0} \pdfglyphtounicode{sevenparen}{00A0} \pdfglyphtounicode{sevenperiod}{00A0} \pdfglyphtounicode{sevenpersian}{00A0} \pdfglyphtounicode{sevenroman}{00A0} \pdfglyphtounicode{sevensuperior}{00A0} \pdfglyphtounicode{seventeencircle}{00A0} \pdfglyphtounicode{seventeenparen}{00A0} \pdfglyphtounicode{seventeenperiod}{00A0} \pdfglyphtounicode{seventhai}{00A0} \pdfglyphtounicode{sfthyphen}{00A0} \pdfglyphtounicode{shaarmenian}{00A0} \pdfglyphtounicode{shabengali}{00A0} \pdfglyphtounicode{shacyrillic}{00A0} \pdfglyphtounicode{shaddaarabic}{00A0} \pdfglyphtounicode{shaddadammaarabic}{00A0} \pdfglyphtounicode{shaddadammatanarabic}{00A0} \pdfglyphtounicode{shaddafathaarabic}{00A0} \pdfglyphtounicode{shaddafathatanarabic}{00A0} \pdfglyphtounicode{shaddakasraarabic}{00A0} \pdfglyphtounicode{shaddakasratanarabic}{00A0} 
\pdfglyphtounicode{shade}{00A0} \pdfglyphtounicode{shadedark}{00A0} \pdfglyphtounicode{shadelight}{00A0} \pdfglyphtounicode{shademedium}{00A0} \pdfglyphtounicode{shadeva}{00A0} \pdfglyphtounicode{shagujarati}{00A0} \pdfglyphtounicode{shagurmukhi}{00A0} \pdfglyphtounicode{shalshelethebrew}{00A0} \pdfglyphtounicode{sharp}{00A0} \pdfglyphtounicode{shbopomofo}{00A0} \pdfglyphtounicode{shchacyrillic}{00A0} \pdfglyphtounicode{sheenarabic}{00A0} \pdfglyphtounicode{sheenfinalarabic}{00A0} \pdfglyphtounicode{sheeninitialarabic}{00A0} \pdfglyphtounicode{sheenmedialarabic}{00A0} \pdfglyphtounicode{sheicoptic}{00A0} \pdfglyphtounicode{sheqel}{00A0} \pdfglyphtounicode{sheqelhebrew}{00A0} \pdfglyphtounicode{sheva}{00A0} \pdfglyphtounicode{sheva115}{00A0} \pdfglyphtounicode{sheva15}{00A0} \pdfglyphtounicode{sheva22}{00A0} \pdfglyphtounicode{sheva2e}{00A0} \pdfglyphtounicode{shevahebrew}{00A0} \pdfglyphtounicode{shevanarrowhebrew}{00A0} \pdfglyphtounicode{shevaquarterhebrew}{00A0} \pdfglyphtounicode{shevawidehebrew}{00A0} \pdfglyphtounicode{shhacyrillic}{00A0} \pdfglyphtounicode{shiftleft}{00A0} \pdfglyphtounicode{shiftright}{00A0} \pdfglyphtounicode{shimacoptic}{00A0} \pdfglyphtounicode{shin}{00A0} \pdfglyphtounicode{shindagesh}{00A0} \pdfglyphtounicode{shindageshhebrew}{00A0} \pdfglyphtounicode{shindageshshindot}{00A0} \pdfglyphtounicode{shindageshshindothebrew}{00A0} \pdfglyphtounicode{shindageshsindot}{00A0} \pdfglyphtounicode{shindageshsindothebrew}{00A0} \pdfglyphtounicode{shindothebrew}{00A0} \pdfglyphtounicode{shinhebrew}{00A0} \pdfglyphtounicode{shinshindot}{00A0} \pdfglyphtounicode{shinshindothebrew}{00A0} \pdfglyphtounicode{shinsindot}{00A0} \pdfglyphtounicode{shinsindothebrew}{00A0} \pdfglyphtounicode{shook}{00A0} \pdfglyphtounicode{sigma}{00A0} \pdfglyphtounicode{sigma1}{00A0} \pdfglyphtounicode{sigmafinal}{00A0} \pdfglyphtounicode{sigmalunatesymbolgreek}{00A0} \pdfglyphtounicode{sihiragana}{00A0} \pdfglyphtounicode{sikatakana}{00A0} 
\pdfglyphtounicode{sikatakanahalfwidth}{00A0} \pdfglyphtounicode{siluqhebrew}{00A0} \pdfglyphtounicode{siluqlefthebrew}{00A0} \pdfglyphtounicode{similar}{00A0} \pdfglyphtounicode{similarequal}{00A0} \pdfglyphtounicode{sindothebrew}{00A0} \pdfglyphtounicode{siosacirclekorean}{00A0} \pdfglyphtounicode{siosaparenkorean}{00A0} \pdfglyphtounicode{sioscieuckorean}{00A0} \pdfglyphtounicode{sioscirclekorean}{00A0} \pdfglyphtounicode{sioskiyeokkorean}{00A0} \pdfglyphtounicode{sioskorean}{00A0} \pdfglyphtounicode{siosnieunkorean}{00A0} \pdfglyphtounicode{siosparenkorean}{00A0} \pdfglyphtounicode{siospieupkorean}{00A0} \pdfglyphtounicode{siostikeutkorean}{00A0} \pdfglyphtounicode{six}{00A0} \pdfglyphtounicode{sixarabic}{00A0} \pdfglyphtounicode{sixbengali}{00A0} \pdfglyphtounicode{sixcircle}{00A0} \pdfglyphtounicode{sixcircleinversesansserif}{00A0} \pdfglyphtounicode{sixdeva}{00A0} \pdfglyphtounicode{sixgujarati}{00A0} \pdfglyphtounicode{sixgurmukhi}{00A0} \pdfglyphtounicode{sixhackarabic}{00A0} \pdfglyphtounicode{sixhangzhou}{00A0} \pdfglyphtounicode{sixideographicparen}{00A0} \pdfglyphtounicode{sixinferior}{00A0} \pdfglyphtounicode{sixmonospace}{00A0} \pdfglyphtounicode{sixoldstyle}{00A0} \pdfglyphtounicode{sixparen}{00A0} \pdfglyphtounicode{sixperiod}{00A0} \pdfglyphtounicode{sixpersian}{00A0} \pdfglyphtounicode{sixroman}{00A0} \pdfglyphtounicode{sixsuperior}{00A0} \pdfglyphtounicode{sixteencircle}{00A0} \pdfglyphtounicode{sixteencurrencydenominatorbengali}{00A0} \pdfglyphtounicode{sixteenparen}{00A0} \pdfglyphtounicode{sixteenperiod}{00A0} \pdfglyphtounicode{sixthai}{00A0} \pdfglyphtounicode{slash}{00A0} \pdfglyphtounicode{slashmonospace}{00A0} \pdfglyphtounicode{slong}{00A0} \pdfglyphtounicode{slongdotaccent}{00A0} \pdfglyphtounicode{slurabove}{00A0} \pdfglyphtounicode{slurbelow}{00A0} \pdfglyphtounicode{smile}{00A0} \pdfglyphtounicode{smileface}{00A0} \pdfglyphtounicode{smonospace}{00A0} \pdfglyphtounicode{sofpasuqhebrew}{00A0} \pdfglyphtounicode{softhyphen}{00A0} 
\pdfglyphtounicode{softsigncyrillic}{00A0} \pdfglyphtounicode{sohiragana}{00A0} \pdfglyphtounicode{sokatakana}{00A0} \pdfglyphtounicode{sokatakanahalfwidth}{00A0} \pdfglyphtounicode{soliduslongoverlaycmb}{00A0} \pdfglyphtounicode{solidusshortoverlaycmb}{00A0} \pdfglyphtounicode{sorusithai}{00A0} \pdfglyphtounicode{sosalathai}{00A0} \pdfglyphtounicode{sosothai}{00A0} \pdfglyphtounicode{sosuathai}{00A0} \pdfglyphtounicode{space}{00A0} \pdfglyphtounicode{spacehackarabic}{00A0} \pdfglyphtounicode{spade}{00A0} \pdfglyphtounicode{spadesuitblack}{00A0} \pdfglyphtounicode{spadesuitwhite}{00A0} \pdfglyphtounicode{sparen}{00A0} \pdfglyphtounicode{sphericalangle}{00A0} \pdfglyphtounicode{square}{00A0} \pdfglyphtounicode{squarebelowcmb}{00A0} \pdfglyphtounicode{squarecc}{00A0} \pdfglyphtounicode{squarecm}{00A0} \pdfglyphtounicode{squarediagonalcrosshatchfill}{00A0} \pdfglyphtounicode{squaredot}{00A0} \pdfglyphtounicode{squarehorizontalfill}{00A0} \pdfglyphtounicode{squareimage}{00A0} \pdfglyphtounicode{squarekg}{00A0} \pdfglyphtounicode{squarekm}{00A0} \pdfglyphtounicode{squarekmcapital}{00A0} \pdfglyphtounicode{squareln}{00A0} \pdfglyphtounicode{squarelog}{00A0} \pdfglyphtounicode{squaremg}{00A0} \pdfglyphtounicode{squaremil}{00A0} \pdfglyphtounicode{squareminus}{00A0} \pdfglyphtounicode{squaremm}{00A0} \pdfglyphtounicode{squaremsquared}{00A0} \pdfglyphtounicode{squaremultiply}{00A0} \pdfglyphtounicode{squareoriginal}{00A0} \pdfglyphtounicode{squareorthogonalcrosshatchfill}{00A0} \pdfglyphtounicode{squareplus}{00A0} \pdfglyphtounicode{squaresolid}{00A0} \pdfglyphtounicode{squareupperlefttolowerrightfill}{00A0} \pdfglyphtounicode{squareupperrighttolowerleftfill}{00A0} \pdfglyphtounicode{squareverticalfill}{00A0} \pdfglyphtounicode{squarewhitewithsmallblack}{00A0} \pdfglyphtounicode{squiggleleftright}{00A0} \pdfglyphtounicode{squiggleright}{00A0} \pdfglyphtounicode{srsquare}{00A0} \pdfglyphtounicode{ssabengali}{00A0} \pdfglyphtounicode{ssadeva}{00A0} 
\pdfglyphtounicode{ssagujarati}{00A0} \pdfglyphtounicode{ssangcieuckorean}{00A0} \pdfglyphtounicode{ssanghieuhkorean}{00A0} \pdfglyphtounicode{ssangieungkorean}{00A0} \pdfglyphtounicode{ssangkiyeokkorean}{00A0} \pdfglyphtounicode{ssangnieunkorean}{00A0} \pdfglyphtounicode{ssangpieupkorean}{00A0} \pdfglyphtounicode{ssangsioskorean}{00A0} \pdfglyphtounicode{ssangtikeutkorean}{00A0} \pdfglyphtounicode{ssuperior}{00A0} \pdfglyphtounicode{st}{00A0} \pdfglyphtounicode{star}{00A0} \pdfglyphtounicode{sterling}{00A0} \pdfglyphtounicode{sterlingmonospace}{00A0} \pdfglyphtounicode{strokelongoverlaycmb}{00A0} \pdfglyphtounicode{strokeshortoverlaycmb}{00A0} \pdfglyphtounicode{subset}{00A0} \pdfglyphtounicode{subsetdbl}{00A0} \pdfglyphtounicode{subsetdblequal}{00A0} \pdfglyphtounicode{subsetnoteql}{00A0} \pdfglyphtounicode{subsetnotequal}{00A0} \pdfglyphtounicode{subsetorequal}{00A0} \pdfglyphtounicode{subsetornotdbleql}{00A0} \pdfglyphtounicode{subsetsqequal}{00A0} \pdfglyphtounicode{succeeds}{00A0} \pdfglyphtounicode{suchthat}{00A0} \pdfglyphtounicode{suhiragana}{00A0} \pdfglyphtounicode{sukatakana}{00A0} \pdfglyphtounicode{sukatakanahalfwidth}{00A0} \pdfglyphtounicode{sukunarabic}{00A0} \pdfglyphtounicode{summation}{00A0} \pdfglyphtounicode{sun}{00A0} \pdfglyphtounicode{superset}{00A0} \pdfglyphtounicode{supersetdbl}{00A0} \pdfglyphtounicode{supersetdblequal}{00A0} \pdfglyphtounicode{supersetnoteql}{00A0} \pdfglyphtounicode{supersetnotequal}{00A0} \pdfglyphtounicode{supersetorequal}{00A0} \pdfglyphtounicode{supersetornotdbleql}{00A0} \pdfglyphtounicode{supersetsqequal}{00A0} \pdfglyphtounicode{svsquare}{00A0} \pdfglyphtounicode{syouwaerasquare}{00A0} \pdfglyphtounicode{t}{00A0} \pdfglyphtounicode{tabengali}{00A0} \pdfglyphtounicode{tackdown}{00A0} \pdfglyphtounicode{tackleft}{00A0} \pdfglyphtounicode{tadeva}{00A0} \pdfglyphtounicode{tagujarati}{00A0} \pdfglyphtounicode{tagurmukhi}{00A0} \pdfglyphtounicode{taharabic}{00A0} \pdfglyphtounicode{tahfinalarabic}{00A0} 
\pdfglyphtounicode{tahinitialarabic}{00A0} \pdfglyphtounicode{tahiragana}{00A0} \pdfglyphtounicode{tahmedialarabic}{00A0} \pdfglyphtounicode{taisyouerasquare}{00A0} \pdfglyphtounicode{takatakana}{00A0} \pdfglyphtounicode{takatakanahalfwidth}{00A0} \pdfglyphtounicode{tatweelarabic}{00A0} \pdfglyphtounicode{tau}{00A0} \pdfglyphtounicode{tav}{00A0} \pdfglyphtounicode{tavdages}{00A0} \pdfglyphtounicode{tavdagesh}{00A0} \pdfglyphtounicode{tavdageshhebrew}{00A0} \pdfglyphtounicode{tavhebrew}{00A0} \pdfglyphtounicode{tbar}{00A0} \pdfglyphtounicode{tbopomofo}{00A0} \pdfglyphtounicode{tcaron}{00A0} \pdfglyphtounicode{tccurl}{00A0} \pdfglyphtounicode{tcedilla}{00A0} \pdfglyphtounicode{tcheharabic}{00A0} \pdfglyphtounicode{tchehfinalarabic}{00A0} \pdfglyphtounicode{tchehinitialarabic}{00A0} \pdfglyphtounicode{tchehmedialarabic}{00A0} \pdfglyphtounicode{tchehmeeminitialarabic}{00A0} \pdfglyphtounicode{tcircle}{00A0} \pdfglyphtounicode{tcircumflexbelow}{00A0} \pdfglyphtounicode{tcommaaccent}{00A0} \pdfglyphtounicode{tdieresis}{00A0} \pdfglyphtounicode{tdotaccent}{00A0} \pdfglyphtounicode{tdotbelow}{00A0} \pdfglyphtounicode{tecyrillic}{00A0} \pdfglyphtounicode{tedescendercyrillic}{00A0} \pdfglyphtounicode{teharabic}{00A0} \pdfglyphtounicode{tehfinalarabic}{00A0} \pdfglyphtounicode{tehhahinitialarabic}{00A0} \pdfglyphtounicode{tehhahisolatedarabic}{00A0} \pdfglyphtounicode{tehinitialarabic}{00A0} \pdfglyphtounicode{tehiragana}{00A0} \pdfglyphtounicode{tehjeeminitialarabic}{00A0} \pdfglyphtounicode{tehjeemisolatedarabic}{00A0} \pdfglyphtounicode{tehmarbutaarabic}{00A0} \pdfglyphtounicode{tehmarbutafinalarabic}{00A0} \pdfglyphtounicode{tehmedialarabic}{00A0} \pdfglyphtounicode{tehmeeminitialarabic}{00A0} \pdfglyphtounicode{tehmeemisolatedarabic}{00A0} \pdfglyphtounicode{tehnoonfinalarabic}{00A0} \pdfglyphtounicode{tekatakana}{00A0} \pdfglyphtounicode{tekatakanahalfwidth}{00A0} \pdfglyphtounicode{telephone}{00A0} \pdfglyphtounicode{telephoneblack}{00A0} 
\pdfglyphtounicode{telishagedolahebrew}{00A0} \pdfglyphtounicode{telishaqetanahebrew}{00A0} \pdfglyphtounicode{tencircle}{00A0} \pdfglyphtounicode{tenideographicparen}{00A0} \pdfglyphtounicode{tenparen}{00A0} \pdfglyphtounicode{tenperiod}{00A0} \pdfglyphtounicode{tenroman}{00A0} \pdfglyphtounicode{tesh}{00A0} \pdfglyphtounicode{tet}{00A0} \pdfglyphtounicode{tetdagesh}{00A0} \pdfglyphtounicode{tetdageshhebrew}{00A0} \pdfglyphtounicode{tethebrew}{00A0} \pdfglyphtounicode{tetsecyrillic}{00A0} \pdfglyphtounicode{tevirhebrew}{00A0} \pdfglyphtounicode{tevirlefthebrew}{00A0} \pdfglyphtounicode{tfm:cmbsy10/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy10/heart}{00A0} \pdfglyphtounicode{tfm:cmbsy5/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy5/heart}{00A0} \pdfglyphtounicode{tfm:cmbsy6/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy6/heart}{00A0} \pdfglyphtounicode{tfm:cmbsy7/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy7/heart}{00A0} \pdfglyphtounicode{tfm:cmbsy8/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy8/heart}{00A0} \pdfglyphtounicode{tfm:cmbsy9/diamond}{00A0} \pdfglyphtounicode{tfm:cmbsy9/heart}{00A0} \pdfglyphtounicode{tfm:cmmi10/phi}{00A0} \pdfglyphtounicode{tfm:cmmi10/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi12/phi}{00A0} \pdfglyphtounicode{tfm:cmmi12/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi5/phi}{00A0} \pdfglyphtounicode{tfm:cmmi5/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi6/phi}{00A0} \pdfglyphtounicode{tfm:cmmi6/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi7/phi}{00A0} \pdfglyphtounicode{tfm:cmmi7/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi8/phi}{00A0} \pdfglyphtounicode{tfm:cmmi8/phi1}{00A0} \pdfglyphtounicode{tfm:cmmi9/phi}{00A0} \pdfglyphtounicode{tfm:cmmi9/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib10/phi}{00A0} \pdfglyphtounicode{tfm:cmmib10/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib5/phi}{00A0} \pdfglyphtounicode{tfm:cmmib5/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib6/phi}{00A0} \pdfglyphtounicode{tfm:cmmib6/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib7/phi}{00A0} 
\pdfglyphtounicode{tfm:cmmib7/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib8/phi}{00A0} \pdfglyphtounicode{tfm:cmmib8/phi1}{00A0} \pdfglyphtounicode{tfm:cmmib9/phi}{00A0} \pdfglyphtounicode{tfm:cmmib9/phi1}{00A0} \pdfglyphtounicode{tfm:cmsy10/diamond}{00A0} \pdfglyphtounicode{tfm:cmsy10/heart}{00A0} \pdfglyphtounicode{tfm:cmsy5/heart}{00A0} \pdfglyphtounicode{tfm:cmsy6/diamond}{00A0} \pdfglyphtounicode{tfm:cmsy6/heart}{00A0} \pdfglyphtounicode{tfm:cmsy7/diamond}{00A0} \pdfglyphtounicode{tfm:cmsy7/heart}{00A0} \pdfglyphtounicode{tfm:cmsy8/diamond}{00A0} \pdfglyphtounicode{tfm:cmsy8/heart}{00A0} \pdfglyphtounicode{tfm:cmsy9/diamond}{00A0} \pdfglyphtounicode{tfm:cmsy9/heart}{00A0} \pdfglyphtounicode{tfm:eurb10/phi}{00A0} \pdfglyphtounicode{tfm:eurb10/phi1}{00A0} \pdfglyphtounicode{tfm:eurb5/phi}{00A0} \pdfglyphtounicode{tfm:eurb5/phi1}{00A0} \pdfglyphtounicode{tfm:eurb6/phi}{00A0} \pdfglyphtounicode{tfm:eurb6/phi1}{00A0} \pdfglyphtounicode{tfm:eurb7/phi}{00A0} \pdfglyphtounicode{tfm:eurb7/phi1}{00A0} \pdfglyphtounicode{tfm:eurb8/phi}{00A0} \pdfglyphtounicode{tfm:eurb8/phi1}{00A0} \pdfglyphtounicode{tfm:eurb9/phi}{00A0} \pdfglyphtounicode{tfm:eurb9/phi1}{00A0} \pdfglyphtounicode{tfm:eurm10/phi}{00A0} \pdfglyphtounicode{tfm:eurm10/phi1}{00A0} \pdfglyphtounicode{tfm:eurm5/phi}{00A0} \pdfglyphtounicode{tfm:eurm5/phi1}{00A0} \pdfglyphtounicode{tfm:eurm6/phi}{00A0} \pdfglyphtounicode{tfm:eurm6/phi1}{00A0} \pdfglyphtounicode{tfm:eurm7/phi}{00A0} \pdfglyphtounicode{tfm:eurm7/phi1}{00A0} \pdfglyphtounicode{tfm:eurm8/phi}{00A0} \pdfglyphtounicode{tfm:eurm8/phi1}{00A0} \pdfglyphtounicode{tfm:eurm9/phi}{00A0} \pdfglyphtounicode{tfm:eurm9/phi1}{00A0} \pdfglyphtounicode{tfm:fplmbi/phi}{00A0} \pdfglyphtounicode{tfm:fplmbi/phi1}{00A0} \pdfglyphtounicode{tfm:fplmri/phi}{00A0} \pdfglyphtounicode{tfm:fplmri/phi1}{00A0} \pdfglyphtounicode{tfm:lmbsy10/diamond}{00A0} \pdfglyphtounicode{tfm:lmbsy10/heart}{00A0} \pdfglyphtounicode{tfm:lmbsy5/diamond}{00A0} 
\pdfglyphtounicode{tfm:lmbsy5/heart}{00A0} \pdfglyphtounicode{tfm:lmbsy7/diamond}{00A0} \pdfglyphtounicode{tfm:lmbsy7/heart}{00A0} \pdfglyphtounicode{tfm:lmmi10/phi}{00A0} \pdfglyphtounicode{tfm:lmmi10/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi12/phi}{00A0} \pdfglyphtounicode{tfm:lmmi12/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi5/phi}{00A0} \pdfglyphtounicode{tfm:lmmi5/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi6/phi}{00A0} \pdfglyphtounicode{tfm:lmmi6/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi7/phi}{00A0} \pdfglyphtounicode{tfm:lmmi7/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi8/phi}{00A0} \pdfglyphtounicode{tfm:lmmi8/phi1}{00A0} \pdfglyphtounicode{tfm:lmmi9/phi}{00A0} \pdfglyphtounicode{tfm:lmmi9/phi1}{00A0} \pdfglyphtounicode{tfm:lmmib10/phi}{00A0} \pdfglyphtounicode{tfm:lmmib10/phi1}{00A0} \pdfglyphtounicode{tfm:lmmib5/phi}{00A0} \pdfglyphtounicode{tfm:lmmib5/phi1}{00A0} \pdfglyphtounicode{tfm:lmmib7/phi}{00A0} \pdfglyphtounicode{tfm:lmmib7/phi1}{00A0} \pdfglyphtounicode{tfm:lmsy10/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy10/heart}{00A0} \pdfglyphtounicode{tfm:lmsy5/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy5/heart}{00A0} \pdfglyphtounicode{tfm:lmsy6/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy6/heart}{00A0} \pdfglyphtounicode{tfm:lmsy7/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy7/heart}{00A0} \pdfglyphtounicode{tfm:lmsy8/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy8/heart}{00A0} \pdfglyphtounicode{tfm:lmsy9/diamond}{00A0} \pdfglyphtounicode{tfm:lmsy9/heart}{00A0} \pdfglyphtounicode{tfm:msam10/diamond}{00A0} \pdfglyphtounicode{tfm:msam5/diamond}{00A0} \pdfglyphtounicode{tfm:msam6/diamond}{00A0} \pdfglyphtounicode{tfm:msam7/diamond}{00A0} \pdfglyphtounicode{tfm:msam8/diamond}{00A0} \pdfglyphtounicode{tfm:msam9/diamond}{00A0} \pdfglyphtounicode{tfm:pxbmia/phi}{00A0} \pdfglyphtounicode{tfm:pxbmia/phi1}{00A0} \pdfglyphtounicode{tfm:pxbsy/diamond}{00A0} \pdfglyphtounicode{tfm:pxbsy/heart}{00A0} \pdfglyphtounicode{tfm:pxbsya/diamond}{00A0} \pdfglyphtounicode{tfm:pxmia/phi}{00A0} 
\pdfglyphtounicode{tfm:pxmia/phi1}{00A0} \pdfglyphtounicode{tfm:pxsy/diamond}{00A0} \pdfglyphtounicode{tfm:pxsy/heart}{00A0} \pdfglyphtounicode{tfm:pxsya/diamond}{00A0} \pdfglyphtounicode{tfm:pzdr/a1}{00A0} \pdfglyphtounicode{tfm:pzdr/a10}{00A0} \pdfglyphtounicode{tfm:pzdr/a100}{00A0} \pdfglyphtounicode{tfm:pzdr/a101}{00A0} \pdfglyphtounicode{tfm:pzdr/a102}{00A0} \pdfglyphtounicode{tfm:pzdr/a103}{00A0} \pdfglyphtounicode{tfm:pzdr/a104}{00A0} \pdfglyphtounicode{tfm:pzdr/a105}{00A0} \pdfglyphtounicode{tfm:pzdr/a106}{00A0} \pdfglyphtounicode{tfm:pzdr/a107}{00A0} \pdfglyphtounicode{tfm:pzdr/a108}{00A0} \pdfglyphtounicode{tfm:pzdr/a109}{00A0} \pdfglyphtounicode{tfm:pzdr/a11}{00A0} \pdfglyphtounicode{tfm:pzdr/a110}{00A0} \pdfglyphtounicode{tfm:pzdr/a111}{00A0} \pdfglyphtounicode{tfm:pzdr/a112}{00A0} \pdfglyphtounicode{tfm:pzdr/a117}{00A0} \pdfglyphtounicode{tfm:pzdr/a118}{00A0} \pdfglyphtounicode{tfm:pzdr/a119}{00A0} \pdfglyphtounicode{tfm:pzdr/a12}{00A0} \pdfglyphtounicode{tfm:pzdr/a120}{00A0} \pdfglyphtounicode{tfm:pzdr/a121}{00A0} \pdfglyphtounicode{tfm:pzdr/a122}{00A0} \pdfglyphtounicode{tfm:pzdr/a123}{00A0} \pdfglyphtounicode{tfm:pzdr/a124}{00A0} \pdfglyphtounicode{tfm:pzdr/a125}{00A0} \pdfglyphtounicode{tfm:pzdr/a126}{00A0} \pdfglyphtounicode{tfm:pzdr/a127}{00A0} \pdfglyphtounicode{tfm:pzdr/a128}{00A0} \pdfglyphtounicode{tfm:pzdr/a129}{00A0} \pdfglyphtounicode{tfm:pzdr/a13}{00A0} \pdfglyphtounicode{tfm:pzdr/a130}{00A0} \pdfglyphtounicode{tfm:pzdr/a131}{00A0} \pdfglyphtounicode{tfm:pzdr/a132}{00A0} \pdfglyphtounicode{tfm:pzdr/a133}{00A0} \pdfglyphtounicode{tfm:pzdr/a134}{00A0} \pdfglyphtounicode{tfm:pzdr/a135}{00A0} \pdfglyphtounicode{tfm:pzdr/a136}{00A0} \pdfglyphtounicode{tfm:pzdr/a137}{00A0} \pdfglyphtounicode{tfm:pzdr/a138}{00A0} \pdfglyphtounicode{tfm:pzdr/a139}{00A0} \pdfglyphtounicode{tfm:pzdr/a14}{00A0} \pdfglyphtounicode{tfm:pzdr/a140}{00A0} \pdfglyphtounicode{tfm:pzdr/a141}{00A0} \pdfglyphtounicode{tfm:pzdr/a142}{00A0} 
\pdfglyphtounicode{tfm:pzdr/a143}{00A0} \pdfglyphtounicode{tfm:pzdr/a144}{00A0} \pdfglyphtounicode{tfm:pzdr/a145}{00A0} \pdfglyphtounicode{tfm:pzdr/a146}{00A0} \pdfglyphtounicode{tfm:pzdr/a147}{00A0} \pdfglyphtounicode{tfm:pzdr/a148}{00A0} \pdfglyphtounicode{tfm:pzdr/a149}{00A0} \pdfglyphtounicode{tfm:pzdr/a15}{00A0} \pdfglyphtounicode{tfm:pzdr/a150}{00A0} \pdfglyphtounicode{tfm:pzdr/a151}{00A0} \pdfglyphtounicode{tfm:pzdr/a152}{00A0} \pdfglyphtounicode{tfm:pzdr/a153}{00A0} \pdfglyphtounicode{tfm:pzdr/a154}{00A0} \pdfglyphtounicode{tfm:pzdr/a155}{00A0} \pdfglyphtounicode{tfm:pzdr/a156}{00A0} \pdfglyphtounicode{tfm:pzdr/a157}{00A0} \pdfglyphtounicode{tfm:pzdr/a158}{00A0} \pdfglyphtounicode{tfm:pzdr/a159}{00A0} \pdfglyphtounicode{tfm:pzdr/a16}{00A0} \pdfglyphtounicode{tfm:pzdr/a160}{00A0} \pdfglyphtounicode{tfm:pzdr/a161}{00A0} \pdfglyphtounicode{tfm:pzdr/a162}{00A0} \pdfglyphtounicode{tfm:pzdr/a163}{00A0} \pdfglyphtounicode{tfm:pzdr/a164}{00A0} \pdfglyphtounicode{tfm:pzdr/a165}{00A0} \pdfglyphtounicode{tfm:pzdr/a166}{00A0} \pdfglyphtounicode{tfm:pzdr/a167}{00A0} \pdfglyphtounicode{tfm:pzdr/a168}{00A0} \pdfglyphtounicode{tfm:pzdr/a169}{00A0} \pdfglyphtounicode{tfm:pzdr/a17}{00A0} \pdfglyphtounicode{tfm:pzdr/a170}{00A0} \pdfglyphtounicode{tfm:pzdr/a171}{00A0} \pdfglyphtounicode{tfm:pzdr/a172}{00A0} \pdfglyphtounicode{tfm:pzdr/a173}{00A0} \pdfglyphtounicode{tfm:pzdr/a174}{00A0} \pdfglyphtounicode{tfm:pzdr/a175}{00A0} \pdfglyphtounicode{tfm:pzdr/a176}{00A0} \pdfglyphtounicode{tfm:pzdr/a177}{00A0} \pdfglyphtounicode{tfm:pzdr/a178}{00A0} \pdfglyphtounicode{tfm:pzdr/a179}{00A0} \pdfglyphtounicode{tfm:pzdr/a18}{00A0} \pdfglyphtounicode{tfm:pzdr/a180}{00A0} \pdfglyphtounicode{tfm:pzdr/a181}{00A0} \pdfglyphtounicode{tfm:pzdr/a182}{00A0} \pdfglyphtounicode{tfm:pzdr/a183}{00A0} \pdfglyphtounicode{tfm:pzdr/a184}{00A0} \pdfglyphtounicode{tfm:pzdr/a185}{00A0} \pdfglyphtounicode{tfm:pzdr/a186}{00A0} \pdfglyphtounicode{tfm:pzdr/a187}{00A0} \pdfglyphtounicode{tfm:pzdr/a188}{00A0} 
\pdfglyphtounicode{uugujarati}{00A0} \pdfglyphtounicode{uugurmukhi}{00A0} \pdfglyphtounicode{uumatragurmukhi}{00A0} \pdfglyphtounicode{uuvowelsignbengali}{00A0} \pdfglyphtounicode{uuvowelsigndeva}{00A0} \pdfglyphtounicode{uuvowelsigngujarati}{00A0} \pdfglyphtounicode{uvowelsignbengali}{00A0} \pdfglyphtounicode{uvowelsigndeva}{00A0} \pdfglyphtounicode{uvowelsigngujarati}{00A0} \pdfglyphtounicode{v}{00A0} \pdfglyphtounicode{vadeva}{00A0} \pdfglyphtounicode{vagujarati}{00A0} \pdfglyphtounicode{vagurmukhi}{00A0} \pdfglyphtounicode{vakatakana}{00A0} \pdfglyphtounicode{vav}{00A0} \pdfglyphtounicode{vavdagesh}{00A0} \pdfglyphtounicode{vavdagesh65}{00A0} \pdfglyphtounicode{vavdageshhebrew}{00A0} \pdfglyphtounicode{vavhebrew}{00A0} \pdfglyphtounicode{vavholam}{00A0} \pdfglyphtounicode{vavholamhebrew}{00A0} \pdfglyphtounicode{vavvavhebrew}{00A0} \pdfglyphtounicode{vavyodhebrew}{00A0} \pdfglyphtounicode{vcircle}{00A0} \pdfglyphtounicode{vdotbelow}{00A0} \pdfglyphtounicode{vector}{00A0} \pdfglyphtounicode{vecyrillic}{00A0} \pdfglyphtounicode{veharabic}{00A0} \pdfglyphtounicode{vehfinalarabic}{00A0} \pdfglyphtounicode{vehinitialarabic}{00A0} \pdfglyphtounicode{vehmedialarabic}{00A0} \pdfglyphtounicode{vekatakana}{00A0} \pdfglyphtounicode{venus}{00A0} \pdfglyphtounicode{verticalbar}{00A0} \pdfglyphtounicode{verticallineabovecmb}{00A0} \pdfglyphtounicode{verticallinebelowcmb}{00A0} \pdfglyphtounicode{verticallinelowmod}{00A0} \pdfglyphtounicode{verticallinemod}{00A0} \pdfglyphtounicode{vewarmenian}{00A0} \pdfglyphtounicode{vhook}{00A0} \pdfglyphtounicode{vikatakana}{00A0} \pdfglyphtounicode{viramabengali}{00A0} \pdfglyphtounicode{viramadeva}{00A0} \pdfglyphtounicode{viramagujarati}{00A0} \pdfglyphtounicode{visargabengali}{00A0} \pdfglyphtounicode{visargadeva}{00A0} \pdfglyphtounicode{visargagujarati}{00A0} \pdfglyphtounicode{visiblespace}{00A0} \pdfglyphtounicode{visualspace}{00A0} \pdfglyphtounicode{vmonospace}{00A0} \pdfglyphtounicode{voarmenian}{00A0} 
\pdfglyphtounicode{voicediterationhiragana}{00A0} \pdfglyphtounicode{voicediterationkatakana}{00A0} \pdfglyphtounicode{voicedmarkkana}{00A0} \pdfglyphtounicode{voicedmarkkanahalfwidth}{00A0} \pdfglyphtounicode{vokatakana}{00A0} \pdfglyphtounicode{vparen}{00A0} \pdfglyphtounicode{vtilde}{00A0} \pdfglyphtounicode{vturned}{00A0} \pdfglyphtounicode{vuhiragana}{00A0} \pdfglyphtounicode{vukatakana}{00A0} \pdfglyphtounicode{w}{00A0} \pdfglyphtounicode{wacute}{00A0} \pdfglyphtounicode{waekorean}{00A0} \pdfglyphtounicode{wahiragana}{00A0} \pdfglyphtounicode{wakatakana}{00A0} \pdfglyphtounicode{wakatakanahalfwidth}{00A0} \pdfglyphtounicode{wakorean}{00A0} \pdfglyphtounicode{wasmallhiragana}{00A0} \pdfglyphtounicode{wasmallkatakana}{00A0} \pdfglyphtounicode{wattosquare}{00A0} \pdfglyphtounicode{wavedash}{00A0} \pdfglyphtounicode{wavyunderscorevertical}{00A0} \pdfglyphtounicode{wawarabic}{00A0} \pdfglyphtounicode{wawfinalarabic}{00A0} \pdfglyphtounicode{wawhamzaabovearabic}{00A0} \pdfglyphtounicode{wawhamzaabovefinalarabic}{00A0} \pdfglyphtounicode{wbsquare}{00A0} \pdfglyphtounicode{wcircle}{00A0} \pdfglyphtounicode{wcircumflex}{00A0} \pdfglyphtounicode{wdieresis}{00A0} \pdfglyphtounicode{wdotaccent}{00A0} \pdfglyphtounicode{wdotbelow}{00A0} \pdfglyphtounicode{wehiragana}{00A0} \pdfglyphtounicode{weierstrass}{00A0} \pdfglyphtounicode{wekatakana}{00A0} \pdfglyphtounicode{wekorean}{00A0} \pdfglyphtounicode{weokorean}{00A0} \pdfglyphtounicode{wgrave}{00A0} \pdfglyphtounicode{whitebullet}{00A0} \pdfglyphtounicode{whitecircle}{00A0} \pdfglyphtounicode{whitecircleinverse}{00A0} \pdfglyphtounicode{whitecornerbracketleft}{00A0} \pdfglyphtounicode{whitecornerbracketleftvertical}{00A0} \pdfglyphtounicode{whitecornerbracketright}{00A0} \pdfglyphtounicode{whitecornerbracketrightvertical}{00A0} \pdfglyphtounicode{whitediamond}{00A0} \pdfglyphtounicode{whitediamondcontainingblacksmalldiamond}{00A0} \pdfglyphtounicode{whitedownpointingsmalltriangle}{00A0} 
\pdfglyphtounicode{whitedownpointingtriangle}{00A0} \pdfglyphtounicode{whiteleftpointingsmalltriangle}{00A0} \pdfglyphtounicode{whiteleftpointingtriangle}{00A0} \pdfglyphtounicode{whitelenticularbracketleft}{00A0} \pdfglyphtounicode{whitelenticularbracketright}{00A0} \pdfglyphtounicode{whiterightpointingsmalltriangle}{00A0} \pdfglyphtounicode{whiterightpointingtriangle}{00A0} \pdfglyphtounicode{whitesmallsquare}{00A0} \pdfglyphtounicode{whitesmilingface}{00A0} \pdfglyphtounicode{whitesquare}{00A0} \pdfglyphtounicode{whitestar}{00A0} \pdfglyphtounicode{whitetelephone}{00A0} \pdfglyphtounicode{whitetortoiseshellbracketleft}{00A0} \pdfglyphtounicode{whitetortoiseshellbracketright}{00A0} \pdfglyphtounicode{whiteuppointingsmalltriangle}{00A0} \pdfglyphtounicode{whiteuppointingtriangle}{00A0} \pdfglyphtounicode{wihiragana}{00A0} \pdfglyphtounicode{wikatakana}{00A0} \pdfglyphtounicode{wikorean}{00A0} \pdfglyphtounicode{wmonospace}{00A0} \pdfglyphtounicode{wohiragana}{00A0} \pdfglyphtounicode{wokatakana}{00A0} \pdfglyphtounicode{wokatakanahalfwidth}{00A0} \pdfglyphtounicode{won}{00A0} \pdfglyphtounicode{wonmonospace}{00A0} \pdfglyphtounicode{wowaenthai}{00A0} \pdfglyphtounicode{wparen}{00A0} \pdfglyphtounicode{wreathproduct}{00A0} \pdfglyphtounicode{wring}{00A0} \pdfglyphtounicode{wsuperior}{00A0} \pdfglyphtounicode{wturned}{00A0} \pdfglyphtounicode{wynn}{00A0} \pdfglyphtounicode{x}{00A0} \pdfglyphtounicode{xabovecmb}{00A0} \pdfglyphtounicode{xbopomofo}{00A0} \pdfglyphtounicode{xcircle}{00A0} \pdfglyphtounicode{xdieresis}{00A0} \pdfglyphtounicode{xdotaccent}{00A0} \pdfglyphtounicode{xeharmenian}{00A0} \pdfglyphtounicode{xi}{00A0} \pdfglyphtounicode{xmonospace}{00A0} \pdfglyphtounicode{xparen}{00A0} \pdfglyphtounicode{xsuperior}{00A0} \pdfglyphtounicode{y}{00A0} \pdfglyphtounicode{yaadosquare}{00A0} \pdfglyphtounicode{yabengali}{00A0} \pdfglyphtounicode{yacute}{00A0} \pdfglyphtounicode{yadeva}{00A0} \pdfglyphtounicode{yaekorean}{00A0} \pdfglyphtounicode{yagujarati}{00A0} 
\pdfglyphtounicode{yagurmukhi}{00A0} \pdfglyphtounicode{yahiragana}{00A0} \pdfglyphtounicode{yakatakana}{00A0} \pdfglyphtounicode{yakatakanahalfwidth}{00A0} \pdfglyphtounicode{yakorean}{00A0} \pdfglyphtounicode{yamakkanthai}{00A0} \pdfglyphtounicode{yasmallhiragana}{00A0} \pdfglyphtounicode{yasmallkatakana}{00A0} \pdfglyphtounicode{yasmallkatakanahalfwidth}{00A0} \pdfglyphtounicode{yatcyrillic}{00A0} \pdfglyphtounicode{ycircle}{00A0} \pdfglyphtounicode{ycircumflex}{00A0} \pdfglyphtounicode{ydieresis}{00A0} \pdfglyphtounicode{ydotaccent}{00A0} \pdfglyphtounicode{ydotbelow}{00A0} \pdfglyphtounicode{yeharabic}{00A0} \pdfglyphtounicode{yehbarreearabic}{00A0} \pdfglyphtounicode{yehbarreefinalarabic}{00A0} \pdfglyphtounicode{yehfinalarabic}{00A0} \pdfglyphtounicode{yehhamzaabovearabic}{00A0} \pdfglyphtounicode{yehhamzaabovefinalarabic}{00A0} \pdfglyphtounicode{yehhamzaaboveinitialarabic}{00A0} \pdfglyphtounicode{yehhamzaabovemedialarabic}{00A0} \pdfglyphtounicode{yehinitialarabic}{00A0} \pdfglyphtounicode{yehmedialarabic}{00A0} \pdfglyphtounicode{yehmeeminitialarabic}{00A0} \pdfglyphtounicode{yehmeemisolatedarabic}{00A0} \pdfglyphtounicode{yehnoonfinalarabic}{00A0} \pdfglyphtounicode{yehthreedotsbelowarabic}{00A0} \pdfglyphtounicode{yekorean}{00A0} \pdfglyphtounicode{yen}{00A0} \pdfglyphtounicode{yenmonospace}{00A0} \pdfglyphtounicode{yeokorean}{00A0} \pdfglyphtounicode{yeorinhieuhkorean}{00A0} \pdfglyphtounicode{yerahbenyomohebrew}{00A0} \pdfglyphtounicode{yerahbenyomolefthebrew}{00A0} \pdfglyphtounicode{yericyrillic}{00A0} \pdfglyphtounicode{yerudieresiscyrillic}{00A0} \pdfglyphtounicode{yesieungkorean}{00A0} \pdfglyphtounicode{yesieungpansioskorean}{00A0} \pdfglyphtounicode{yesieungsioskorean}{00A0} \pdfglyphtounicode{yetivhebrew}{00A0} \pdfglyphtounicode{ygrave}{00A0} \pdfglyphtounicode{yhook}{00A0} \pdfglyphtounicode{yhookabove}{00A0} \pdfglyphtounicode{yiarmenian}{00A0} \pdfglyphtounicode{yicyrillic}{00A0} \pdfglyphtounicode{yikorean}{00A0} 
\pdfglyphtounicode{yinyang}{00A0} \pdfglyphtounicode{yiwnarmenian}{00A0} \pdfglyphtounicode{ymonospace}{00A0} \pdfglyphtounicode{yod}{00A0} \pdfglyphtounicode{yoddagesh}{00A0} \pdfglyphtounicode{yoddageshhebrew}{00A0} \pdfglyphtounicode{yodhebrew}{00A0} \pdfglyphtounicode{yodyodhebrew}{00A0} \pdfglyphtounicode{yodyodpatahhebrew}{00A0} \pdfglyphtounicode{yohiragana}{00A0} \pdfglyphtounicode{yoikorean}{00A0} \pdfglyphtounicode{yokatakana}{00A0} \pdfglyphtounicode{yokatakanahalfwidth}{00A0} \pdfglyphtounicode{yokorean}{00A0} \pdfglyphtounicode{yosmallhiragana}{00A0} \pdfglyphtounicode{yosmallkatakana}{00A0} \pdfglyphtounicode{yosmallkatakanahalfwidth}{00A0} \pdfglyphtounicode{yotgreek}{00A0} \pdfglyphtounicode{yoyaekorean}{00A0} \pdfglyphtounicode{yoyakorean}{00A0} \pdfglyphtounicode{yoyakthai}{00A0} \pdfglyphtounicode{yoyingthai}{00A0} \pdfglyphtounicode{yparen}{00A0} \pdfglyphtounicode{ypogegrammeni}{00A0} \pdfglyphtounicode{ypogegrammenigreekcmb}{00A0} \pdfglyphtounicode{yr}{00A0} \pdfglyphtounicode{yring}{00A0} \pdfglyphtounicode{ysuperior}{00A0} \pdfglyphtounicode{ytilde}{00A0} \pdfglyphtounicode{yturned}{00A0} \pdfglyphtounicode{yuhiragana}{00A0} \pdfglyphtounicode{yuikorean}{00A0} \pdfglyphtounicode{yukatakana}{00A0} \pdfglyphtounicode{yukatakanahalfwidth}{00A0} \pdfglyphtounicode{yukorean}{00A0} \pdfglyphtounicode{yusbigcyrillic}{00A0} \pdfglyphtounicode{yusbigiotifiedcyrillic}{00A0} \pdfglyphtounicode{yuslittlecyrillic}{00A0} \pdfglyphtounicode{yuslittleiotifiedcyrillic}{00A0} \pdfglyphtounicode{yusmallhiragana}{00A0} \pdfglyphtounicode{yusmallkatakana}{00A0} \pdfglyphtounicode{yusmallkatakanahalfwidth}{00A0} \pdfglyphtounicode{yuyekorean}{00A0} \pdfglyphtounicode{yuyeokorean}{00A0} \pdfglyphtounicode{yyabengali}{00A0} \pdfglyphtounicode{yyadeva}{00A0} \pdfglyphtounicode{z}{00A0} \pdfglyphtounicode{zaarmenian}{00A0} \pdfglyphtounicode{zacute}{00A0} \pdfglyphtounicode{zadeva}{00A0} \pdfglyphtounicode{zagurmukhi}{00A0} \pdfglyphtounicode{zaharabic}{00A0} 
\pdfglyphtounicode{zahfinalarabic}{00A0} \pdfglyphtounicode{zahinitialarabic}{00A0} \pdfglyphtounicode{zahiragana}{00A0} \pdfglyphtounicode{zahmedialarabic}{00A0} \pdfglyphtounicode{zainarabic}{00A0} \pdfglyphtounicode{zainfinalarabic}{00A0} \pdfglyphtounicode{zakatakana}{00A0} \pdfglyphtounicode{zaqefgadolhebrew}{00A0} \pdfglyphtounicode{zaqefqatanhebrew}{00A0} \pdfglyphtounicode{zarqahebrew}{00A0} \pdfglyphtounicode{zayin}{00A0} \pdfglyphtounicode{zayindagesh}{00A0} \pdfglyphtounicode{zayindageshhebrew}{00A0} \pdfglyphtounicode{zayinhebrew}{00A0} \pdfglyphtounicode{zbopomofo}{00A0} \pdfglyphtounicode{zcaron}{00A0} \pdfglyphtounicode{zcircle}{00A0} \pdfglyphtounicode{zcircumflex}{00A0} \pdfglyphtounicode{zcurl}{00A0} \pdfglyphtounicode{zdot}{00A0} \pdfglyphtounicode{zdotaccent}{00A0} \pdfglyphtounicode{zdotbelow}{00A0} \pdfglyphtounicode{zecyrillic}{00A0} \pdfglyphtounicode{zedescendercyrillic}{00A0} \pdfglyphtounicode{zedieresiscyrillic}{00A0} \pdfglyphtounicode{zehiragana}{00A0} \pdfglyphtounicode{zekatakana}{00A0} \pdfglyphtounicode{zero}{00A0} \pdfglyphtounicode{zeroarabic}{00A0} \pdfglyphtounicode{zerobengali}{00A0} \pdfglyphtounicode{zerodeva}{00A0} \pdfglyphtounicode{zerogujarati}{00A0} \pdfglyphtounicode{zerogurmukhi}{00A0} \pdfglyphtounicode{zerohackarabic}{00A0} \pdfglyphtounicode{zeroinferior}{00A0} \pdfglyphtounicode{zeromonospace}{00A0} \pdfglyphtounicode{zerooldstyle}{00A0} \pdfglyphtounicode{zeropersian}{00A0} \pdfglyphtounicode{zerosuperior}{00A0} \pdfglyphtounicode{zerothai}{00A0} \pdfglyphtounicode{zerowidthjoiner}{00A0} \pdfglyphtounicode{zerowidthnonjoiner}{00A0} \pdfglyphtounicode{zerowidthspace}{00A0} \pdfglyphtounicode{zeta}{00A0} \pdfglyphtounicode{zhbopomofo}{00A0} \pdfglyphtounicode{zhearmenian}{00A0} \pdfglyphtounicode{zhebrevecyrillic}{00A0} \pdfglyphtounicode{zhecyrillic}{00A0} \pdfglyphtounicode{zhedescendercyrillic}{00A0} \pdfglyphtounicode{zhedieresiscyrillic}{00A0} \pdfglyphtounicode{zihiragana}{00A0} 
\documentclass[a4paper,12pt]{article} \input{glyphtounicode} \pdfgentounicode=1 \usepackage[xcolor,qtwo]{rvdtx} \usepackage{multicol} \usepackage{color} \usepackage{xspace} \usepackage{pdfwidgets} \usepackage{enumerate} \def\ttdefault{cmtt} \headsep4pc \makeatletter \def\bs{\expandafter\@gobble\string\\} \def\lb{\expandafter\@gobble\string\{} \def\rb{\expandafter\@gobble\string\}} \def\@pdfauthor{C.V.Radhakrishnan} \def\@pdftitle{CAS templates: A documentation} \def\@pdfsubject{Document formatting with CAS template} \def\@pdfkeywords{LaTeX, Elsevier Ltd, document class} \def\file#1{\textsf{#1}\xspace} \DeclareRobustCommand{\LaTeX}{L\kern-.26em {\sbox\z@ T \vbox to\ht\z@{\hbox{\check@mathfonts \fontsize\sf@size\z@ \math@fontsfalse\selectfont A\,} \vss} } \kern-.15em \TeX} \makeatother \renewcommand{\figurename}{Clip} \setcounter{tocdepth}{1} \AtBeginDocument{ \setcounter{topnumber}{2} \setcounter{bottomnumber}{2} \setcounter{totalnumber}{4} \renewcommand{\topfraction}{0.85} \renewcommand{\bottomfraction}{0.85} \renewcommand{\textfraction}{0.15} \renewcommand{\floatpagefraction}{0.7} } \begin{document} \def\testa{This is a specimen document. 
} \def\testc{\testa\testa\testa\testa} \def\testb{\testc\testc\testc\testc\testc} \long\def\test{\testb\par\testb\par\testb\par} \pinclude{\copy\contbox\printSq{\LastPage}} \title{Documentation for Elsevier's CAS \LaTeX\ template} \author{Elsevier Ltd} \contact{[email protected]} \version{2.3} \date{\today} \maketitle \section{Introduction} This bundle provides two class files, namely \verb+cas-sc.cls+ and \verb+cas-dc.cls+, together with corresponding template files for typesetting journal articles intended to go through Elsevier's updated workflow. \verb+cas-sc.cls+ is meant for one-column layout, and \verb+cas-dc.cls+ for two-column layout. These are now accepted for submitting articles both in Elsevier's electronic submission system and elsewhere. \subsection{Usage} \begin{enumerate} \item \verb+cas-sc.cls+ for single column journals. \begin{vquote} \documentclass[<options>]{cas-sc} \end{vquote} \item \verb+cas-dc.cls+ for double column journals. \begin{vquote} \documentclass[<options>]{cas-dc} \end{vquote} \end{enumerate} Both class files have an option \verb+longmktitle+ to handle long front matter. \section{Front matter} \begin{vquote} \title [mode = title]{This is a specimen $a_b$ title} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \begin{vquote} \author[1,3]{J.K. Krishnan}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-0000-0000] \cormark[1] \fnmark[1] \ead{[email protected]} \ead[url]{www.jkkrishnan.in} \credit{Conceptualization of this study, Methodology, Software} \affiliation[1]{organization={Department of Physics, J.K. 
Institute of Science}, addressline={Jawahar Nagar}, city={Trivandrum}, postcode={695013}, state={Kerala}, country={India}} \author[2,4]{Han Thane}[style=chinese] \author[2,3]{William {J. Hansen}}[ role=Co-ordinator, suffix=Jr, ] \fnmark[2] \ead{[email protected]} \ead[URL]{https://www.university.org} \credit{Data curation, Writing - Original draft preparation} \end{vquote} \begin{vquote} \affiliation[2]{organization={World Scientific University}, addressline={Street 29}, postcode={1011 NX}, postcodesep={}, city={Amsterdam}, country={The Netherlands}} \author[1,3]{T. Rafeeq} \cormark[2] \fnmark[1,3] \ead{[email protected]} \ead[URL]{www.campus.in} \affiliation[3]{organization={University of Intelligent Studies}, addressline={Street 15}, city={Jabaldesh}, postcode={825001}, state={Orissa}, country={India}} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the first author footnote, but is common to third author as well.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \nonumnote{This note has no numbers. In this work we demonstrate $a_b$ the formation Y\_1 of a new type of polariton on the interface between a cuprous oxide slab and a polystyrene micro-sphere placed on the slab. } \end{vquote} \begin{vquote} \begin{abstract}[S U M M A R Y] This template helps you to create a properly formatted \LaTeX\ manuscript. \begin{abstract} ... \end{abstract} and \begin{keyword} ... \end{keyword} which contain the abstract and keywords respectively. Each keyword shall be separated by a \sep command. 
\end{abstract} \begin{keywords} quadrupole exciton \sep polariton \sep WGM \sep BEC \end{keywords} \maketitle \end{vquote} \subsection{Title} The \verb+\title+ command has the following options: \begin{enumerate} \item \verb+title:+ Document title \item \verb+alt:+ Alternate title \item \verb+sub:+ Sub title \item \verb+trans:+ Translated title \item \verb+transsub:+ Translated sub title \end{enumerate} \begin{vquote} \title[mode=title]{This is a title} \title[mode=alt]{This is an alternate title} \title[mode=sub]{This is a sub title} \title[mode=trans]{This is a translated title} \title[mode=transsub]{This is a translated sub title} \end{vquote} \subsection{Author} The \verb+\author+ command has the following options: \begin{enumerate} \item \verb+auid:+ Author id \item \verb+bioid:+ Biography id \item \verb+alt:+ Alternate author \item \verb+style:+ Style of author name, e.g.\ chinese \item \verb+prefix:+ Prefix, e.g.\ Sir \item \verb+suffix:+ Suffix \item \verb+degree:+ Degree \item \verb+role:+ Role \item \verb+orcid:+ ORCID \item \verb+collab:+ Collaboration \item \verb+anon:+ Anonymous author \item \verb+deceased:+ Deceased author \item \verb+twitter:+ Twitter account \item \verb+facebook:+ Facebook account \item \verb+linkedin:+ LinkedIn account \item \verb+plus:+ Google plus account \item \verb+gplus:+ Google plus account \end{enumerate} \begin{vquote} \author[1,3]{Author Name}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-0000-0000, facebook=<facebook id>, twitter=<twitter id>, linkedin=<linkedin id>, gplus=<gplus id>] \end{vquote} \begin{figure} \includegraphics[width=\textwidth]{sc-sample.pdf} \caption{Single column output (classfile: cas-sc.cls).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{dc-sample.pdf} \caption{Double column output (classfile: cas-dc.cls).} \end{figure} \subsection{Various Marks in the Front Matter} The front matter becomes complicated due to various kinds of notes and marks to the title and author 
names. Marks in the title will be denoted by a star ($\star$) mark; footnotes are denoted by superscripted Arabic numerals, and the corresponding author by an asterisk (*) mark. \subsubsection{Title marks} A title mark can be entered with the command \verb+\tnotemark[<num>]+ and the corresponding text can be entered with the command \verb+\tnotetext[<num>]+ \verb+{<text>}+. An example: \begin{vquote} \title[mode=title]{Leveraging social media news to predict stock index movement using RNN-boost} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \verb+\tnotemark+ and \verb+\tnotetext+ can appear anywhere in the front matter, but should come before the \verb+\maketitle+ command. \subsubsection{Author marks} Author names can have many kinds of marks and notes: \begin{vquote} footnote mark : \fnmark[<num>] footnote text : \fntext[<num>]{<text>} affiliation mark : \author[<num>] email : \ead{<emailid>} url : \ead[url]{<url>} corresponding author mark : \cormark[<num>] corresponding author text : \cortext[<num>]{<text>} \end{vquote} \subsubsection{Other marks} At times, authors want footnotes which leave no marks in the author names. The note text shall be listed as part of the front matter notes. The class files provide \verb+\nonumnote+ for this purpose. The usage is \begin{vquote} \nonumnote{<text>} \end{vquote} \noindent It should be entered anywhere before the \verb+\maketitle+ command for this to take effect. \subsection{Abstract and Keywords} The abstract shall be entered in an environment that starts with\break \verb+\begin{abstract}+ and ends with \verb+\end{abstract}+. Longer abstracts spanning more than one page are also possible in the class file, even in double column mode. 
Invoke the \verb+longmktitle+ option in the class loading line for this to happen smoothly. The keywords are enclosed in a \verb+{keywords}+ environment. \begin{vquote} \begin{abstract} This is an abstract. \lipsum[3] \end{abstract} \begin{keywords} First keyword \sep Second keyword \sep Third keyword \sep Fourth keyword \end{keywords} \end{vquote} \section{Main Matter} The main matter contains sections, paragraphs, equations and floats like tables, figures, textboxes etc. \subsection{Tables} \subsubsection{Normal tables} \begin{vquote} \begin{table} \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LL@{} } \toprule Col 1 & Col 2\\ \midrule 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ \bottomrule \end{tabular*} \end{table} \end{vquote} \subsubsection{Span tables} \begin{vquote} \begin{table*}[width=.9\textwidth,cols=4,pos=h] \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLLLLL@{} } \toprule Col 1 & Col 2 & Col 3 & Col 4 & Col 5 & Col 6 & Col 7\\ \midrule 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ \bottomrule \end{tabular*} \end{table*} \end{vquote} \subsection{Figures} \subsubsection{Normal figures} \begin{vquote} \begin{figure} \centering \includegraphics[scale=.75]{Fig1.pdf} \caption{The evanescent light - $1S$ quadrupole coupling. See also Fig. 
\protect\ref{FIG:2}).} \label{FIG:1} \end{figure} \end{vquote} \subsubsection{Span figures} \begin{vquote} \begin{figure*} \centering \includegraphics[width=\textwidth,height=2in]{Fig2.pdf} \caption{Schematic of formation of the evanescent polariton on linear chain of \PMS.} \label{FIG:2} \end{figure*}\end{vquote} \subsection{Theorem and theorem like environments} The CAS class file provides a few hooks to format theorems and theorem like environments with ease. All options that are used with the \verb+\newtheorem+ command will work in exactly the same manner. The class file provides three commands to format theorem or theorem like environments: \begin{enumerate} \item The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style, with an italicized font for the theorem statement, bold weight for the theorem heading, and the theorem number typeset at the right of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. Here is an example coding and output: \begin{vquote} \newtheorem{theorem}{Theorem} \begin{theorem}\label{thm} The \WGM evanescent field penetration depth into the cuprous oxide adjacent crystal is much larger than the \QE radius: \begin{equation*} \lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1} \right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6 \mbox{ \AA} \end{equation*} \end{theorem} \end{vquote} \item The \verb+\newdefinition+ command does exactly the same thing as \verb+\newtheorem+ except that the body font is up-shape instead of italic. See the example below: \begin{vquote} \newdefinition{definition}{Definition} \begin{definition} The bulk and evanescent polaritons in cuprous oxide are formed through the quadrupole part of the light-matter interaction: \begin{equation*} H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s} \cdot {\bf p} \end{equation*} \end{definition} \end{vquote} \item The \verb+\newproof+ command helps to define proof and custom proof environments without counters, as provided in the example code. 
Given below is an example of a proof of a theorem. \begin{vquote} \newproof{pot}{Proof of Theorem \ref{thm}} \begin{pot} The photon part of the polariton trapped inside the \PMS moves as it would move in a micro-cavity of the effective modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it can escape through the evanescent field. This evanescent field essentially has a quantum origin and is due to tunneling through the potential caused by dielectric mismatch on the \PMS surface. Therefore, we define the \emph{evanescent} polariton (\EP) as an evanescent light - \QE coherent superposition. \end{pot} \end{vquote} \end{enumerate} \subsection{Enumerated and Itemized Lists} The CAS class files provide extended list processing macros which make usage a bit more user friendly than the default \LaTeX\ list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. The coding and the typeset copy are shown below. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.' so that the item counter will be suffixed by a period as in the optional argument. \item If you provide a closing parenthesis to the number in the optional argument, the output will have closing parenthesis for all the item counters. \item You can use `(a)' for alphabetical counter and `(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \end{vquote} \begin{vquote} \item Another one before we close the third level. \end{enumerate} \item Third item in second level. \end{enumerate} \item All list items conclude with this step. \end{enumerate} \section{Biography} The \verb+\bio+ command has the following options: \begin{enumerate} \item \verb+width:+ Width of the author photo (default is 1in). \item \verb+pos:+ Position of the author photo. 
\end{enumerate} \begin{vquote} \bio[width=10mm,pos=l]{tuglogo.jpg} \textbf{Another Biography:} Recent experimental \cite{HARA:2005} and theoretical \cite{DEYCH:2006} studies have shown that the \WGM can travel along the chain as "heavy photons". Therefore the \WGM acquires the spatial dispersion, and the evanescent quadrupole polariton has the form (See Fig.\ref{FIG:3}): \endbio \end{vquote} \section[CRediT...]{CRediT authorship contribution statement} Give the authorship contribution after each author as \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \end{vquote} To print the details use \verb+\printcredits+ \begin{vquote} \author[1,3]{J.K. Krishnan}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-0000-0000] \end{vquote} \begin{vquote} \cormark[1] \fnmark[1] \ead{[email protected]} \ead[url]{www.jkkrishnan.in} \credit{Conceptualization of this study, Methodology, Software} \affiliation[1]{organization={Department of Physics, J.K. Institute of Science}, addressline={Jawahar Nagar}, city={Trivandrum}, postcode={695013}, state={Kerala}, country={India}} \author[2,4]{Han Thane}[style=chinese] \author[2,3]{William {J. Hansen}}[ role=Co-ordinator, suffix=Jr, ] \fnmark[2] \ead{[email protected]} \ead[URL]{https://www.university.org} \credit{Data curation, Writing - Original draft preparation} . . . . . . . . . \printcredits \end{vquote} \section{Bibliography} For CAS categories, two reference models are recommended, namely \verb+cas-model1-num-names.bst+ and \verb+cas-model2-names.bst+. The former will format the reference list and citations according to a numbered scheme, whereas the latter will format them in the name-date (author-year) style. Authors are requested to choose one of these according to the journal style. These bst files are available for download at: \url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} \hfill $\Box$ \end{document}
2412.20700v1
http://arxiv.org/abs/2412.20700v1
Optimal rolling of fair dice using fair coins
\documentclass[12pt]{article} \usepackage[backend=biber,style=numeric,]{biblatex} \addbibresource{optimalDDG.bib} \usepackage{amsfonts, amsmath} \usepackage{amsthm} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \usepackage{booktabs} \usepackage{etoolbox} \preto\tabular{\setcounter{alglinecounter}{0}} \newcounter{alglinecounter} \newcommand{\gk}{\stepcounter{alglinecounter}\arabic{alglinecounter})} \usepackage{tikz} \newcommand{\unifdist}{\textsf{Unif}} \newcommand{\berndist}{\textsf{Bern}} \newcommand{\prob}{\mathbb{P}} \newcommand{\mean}{\mathbb{E}} \newcommand{\ind}{\mathbb{I}} \begin{document} \title{Optimal rolling of fair dice using fair coins} \author{Mark Huber and Danny Vargas} \maketitle \begin{abstract} In 1976, Knuth and Yao presented an algorithm for sampling from a finite distribution using flips of a fair coin that on average used the optimal number of flips. Here we show how to easily run their algorithm for the special case of rolling a fair die that uses memory linear in the input. Analysis of this algorithm yields a bound on the average number of coin flips needed that is slightly better than the original Knuth-Yao bound. This can then be extended to discrete distributions in a near optimal number of flips again using memory linear in the input. \end{abstract} \section{Introduction} A \emph{probabilistic Turing machine} is a Turing machine that has access to an independent, identically distributed (iid) sequence of flips of a fair coin. Specifically, say that \( B \) is the flip of a fair coin if \( B \) is equally likely to be 1 or 0. This means that \( B \) is uniform over \( \{0, 1\} \), written \( B \sim \unifdist(\{0, 1\}) \). Alternatively, say that \( B \) has the Bernoulli distribution with parameter \( 1 / 2 \), and write \( B \sim \berndist(1 / 2) \). A natural question to ask is how many flips of a fair coin would it take (on average) to generate a draw from a roll of a fair die?
To be precise, if \( \prob(X = i) = 1 / n \) for \( i \in \{1, \ldots, n \} \), say that \( X \) is the roll of a fair \( n \)-sided die. Write \( X \sim \unifdist(\{1, \ldots, n\}) \) or \( X \sim \text{d}n \) for short. In 1976, Knuth and Yao~\cite{knuth1976complexity} developed the optimal algorithm for generating a draw from any discrete probability distribution using fair coin flips. Various authors (such as~\cite{baidya2024efficient, saad2020fast, sinha2013high}) have looked at how to implement Knuth-Yao as efficiently as possible in different use cases. The purpose of this work is to show how to implement their algorithm for generating from a fair die in a very simple way using a \emph{randomness recycler}~\cite{fill2000randomness, huber2016perfect} approach. The memory used by the algorithm consists of storing the number \( n \), the original input, storing \( m \), a positive integer which is at most \( 2n - 1 \), storing \( X \), a positive integer at most \( m \), and one bit. Altogether, this uses \( 3 \lceil \log_2(n) \rceil + 3 \) bits of memory when the input uses \( \lceil \log_2(n) \rceil \) bits of memory. The algorithm operates as follows. The idea is to always have a state that is an ordered pair \( (X, m) \) of two random integers. At any time during the algorithm's run, provided the computation is driven by fair coin flips, \( [X | m] \sim \text{d}m \). If \( m = n \), then the algorithm is done, since \( X \sim \text{d}n \) as desired. If \( m < n \), the next step is to flip a fair coin \( B \) and change the state as follows. \[ (X, m) \mapsto (X + Bm, 2m). \] The new state has \( [X + Bm \mid 2m] \sim \text{d}(2m) \). This allows us to increase the number of sides of the die that \( X \) is a draw from until the number of sides reaches at least \( n \). At this point, if \( m \geq n \) and \( X \leq n \) then \[ (X, m) \mapsto (X, n). \] That is to say, accept \( X \) as a draw from \( \text{d}n \).
However, if \( X > n \), do not throw away the state and start over. Instead, note that \( [X - n \mid m - n] \sim \text{d}(m - n) \). In other words, what is left over after subtracting the die roll is still a roll of a fair die, just one with a smaller number \( m - n \) of sides. That is, \[ (X, m) \mapsto (X - n, m - n). \] Return to the doubling procedure and repeat until acceptance occurs. To illustrate, consider trying to generate a roll from a 5-sided die. Begin in the state \( (1, 1) \). Since a 1-sided die only has output 1, \( X = 1 \) is a valid draw from a \( \text{d}1 \). Now let \( B_1, B_2, B_3, \ldots \) be the iid flips of the fair coin used by the algorithm. First, the number of sides of the die is doubled to 2, and the current draw from the die is \( 1 + 1 \cdot B_1 \). So the new state is \( (1 + B_1, 2) \). Since \( 2 < 5 \), another coin is flipped to get to state \( (1 + B_1 + 2 B_2, 4) \). Since \( 4 < 5 \), flip another coin to get to state \( (1 + B_1 + 2 B_2 + 4 B_3, 8) \). Note that \( B_1 + 2B_2 + 4B_3 \) is uniform over \( \{0, 1, 2, \ldots, 7\} \), so \( X = 1 + B_1 + 2 B_2 + 4B_3 \sim \text{d}8 \). At this point, if \( X \leq 5 \), then this number is uniform over \( \{1, 2, 3, 4, 5\} \) and the algorithm terminates. If \( X > 5 \), then \( X \) is uniform over \( \{6, 7, 8\} \). Therefore \( X - 5 \) is uniform over \( \{1, 2, 3\} \), so \( X - 5 \) is a 3-sided die roll. Therefore the next state is \[ (X \ind(X \leq 5) + (X - 5) \ind(X > 5), 5 \ind(X \leq 5) + (8 - 5) \ind(X > 5)) \] where \( \ind(r) \) is the indicator function that evaluates to 1 if expression \( r \) is true, and 0 otherwise. At this point, if the second component is \( 5 \) the algorithm terminates, otherwise it continues by doubling the number of sides on the die from 3 to 6. Then it accepts if the first component is at most \( 5 \), otherwise returning to state \( (1, 1) \). The evolution of the state for \( n = 5 \) is given in Figure~\ref{FIG:example}.
At a given state, when there is one child, take that path, when there are two, choose the child uniformly from the two choices. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[xscale=0.8] \draw (7,0) node (A) {\( (1, 1) \)}; \draw (3, -1) node (B) {\( (1, 2) \)}; \draw (1, -2) node (D) {\( (1, 4) \)}; \draw (5, -2) node (E) {\( (3, 4) \)}; \draw (11, -1) node (C) {\( (2, 2) \)}; \draw (9, -2) node (F) {\( (2, 4) \)}; \draw (13, -2) node (G) {\( (4, 4) \)}; \draw (0, -3) node (H) {\( (1, 8) \)}; \draw (2, -3) node (I) {\( (5, 8) \)}; \draw (4, -3) node (J) {\( (3, 8) \)}; \draw (6, -3) node (K) {\( (7, 8) \)}; \draw (8, -3) node (L) {\( (2, 8) \)}; \draw (10, -3) node (M) {\( (6, 8) \)}; \draw (12, -3) node (N) {\( (4, 8) \)}; \draw (14, -3) node (O) {\( (8, 8) \)}; \draw (0, -4) node (P) {\( (1, 5) \)}; \draw (2, -4) node (Q) {\( (5, 5) \)}; \draw (4, -4) node (R) {\( (3, 5) \)}; \draw (6, -4) node (S) {\( (2, 3) \)}; \draw(5, -5) node (X) {\( (2, 6) \)}; \draw(5,-6) node(DD) {\( (2, 5) \)}; \draw(7, -5) node (Y) {\( (5, 6) \)}; \draw(7, -6) node (EE) {\( (5, 5) \)}; \draw (8, -4) node (T) {\( (2, 5) \)}; \draw (10, -4) node (U) {\( (1, 3) \)}; \draw(9, -5) node (Z) {\( (1, 6) \)}; \draw(11, -5) node (AA) {\( (4, 6) \)}; \draw(9, -6) node (FF) {\( (1, 5) \)}; \draw(11, -6) node (GG) {\( (4, 5) \)}; \draw (12, -4) node (V) {\( (4, 5) \)}; \draw (14, -4) node (W) {\( (3, 3) \)}; \draw(13, -5) node (BB) {\( (3, 6) \)}; \draw(15, -5) node (CC) {\( (6, 6) \)}; \draw(13, -6) node (HH) {\( (3, 5) \)}; \draw(15, -6) node (II) {\( (1, 1) \)}; \draw[->] (A) to (B); \draw[->] (A) to (C); \draw[->] (B) to (D); \draw[->] (B) to (E); \draw[->] (C) to (F); \draw[->] (C) to (G); \draw[->] (D) to (H); \draw[->] (D) to (I); \draw[->] (E) to (J); \draw[->] (E) to (K); \draw[->] (F) to (L); \draw[->] (F) to (M); \draw[->] (G) to (N); \draw[->] (G) to (O); \draw[->] (H) to (P); \draw[->] (I) to (Q); \draw[->] (J) to (R); \draw[->] (K) to (S); \draw[->] (L) to (T); \draw[->] (M) 
to (U); \draw[->] (N) to (V); \draw[->] (O) to (W); \draw[->] (S) to (X); \draw[->] (S) to (Y); \draw[->] (U) to (Z); \draw[->] (U) to (AA); \draw[->] (W) to (BB); \draw[->] (W) to (CC); \draw[->] (X) to (DD); \draw[->] (Y) to (EE); \draw[->] (Z) to (FF); \draw[->] (AA) to (GG); \draw[->] (BB) to (HH); \draw[->] (CC) to (II); \end{tikzpicture} \end{center} \caption{From each state, each child is equally likely to be chosen. When a state of the form \( (i, 5) \) is reached, output \( i \). When state \( (1, 1) \) is reached, begin again at the root.} \label{FIG:example} \end{figure} In general, the pseudocode for this algorithm is as follows. \vspace*{12pt} \begin{tabular}{rl} \toprule \multicolumn{2}{l}{\textbf{Algorithm} \textsf{optimal\_uniform}(n) } \\ \midrule \gk & \hspace{0em} \( m \leftarrow 1,\ X \leftarrow 1 \) \\ \gk & \hspace*{0em} while \( m < n \) \\ \gk & \hspace*{1em} draw \( B \) uniform over \( \{0, 1\} \) \\ \gk & \hspace*{1em} \( X \leftarrow X + Bm \), \( m \leftarrow 2m \) \\ \gk & \hspace*{1em} if \( m \geq n \) and \( X \leq n \) then \( m \leftarrow n \) \\ \gk & \hspace*{1em} if \( m \geq n \) and \( X > n \) then \( X \leftarrow X - n \), \( m \leftarrow m - n \) \\ \gk & \hspace*{0em} return \( X \) \\ \bottomrule \end{tabular} \vspace*{12pt} This type of algorithm, where either acceptance occurs, or rejection occurs and as much of the leftover randomness as possible is used in the next state, is called the \emph{randomness recycler} protocol~\cite{fill2000randomness, huber2016perfect}. This method works best with a fair die. The \emph{Alias method}~\cite{walker1977efficient} gives a close to optimal method of generating from a loaded die, that is, an arbitrary distribution over a finite number of states. The new method presented here can be extended to handle this situation as well, but it is less simple to implement. The rest of the work is organized as follows.
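As a concrete sanity check, the pseudocode above can be transcribed into a short Python sketch (an illustration added here, not part of the original paper); it returns both the roll and the number of coin flips consumed, so the uniformity of the output and the expected flip count can both be tested empirically.

```python
import random

def optimal_uniform(n, rng=random.random):
    """Roll a fair n-sided die using fair coin flips, following the
    randomness-recycler pseudocode in the text.  Returns the pair
    (roll, number of coin flips used)."""
    m, x, flips = 1, 1, 0              # invariant: [X | m] ~ dm
    while m < n:
        b = 1 if rng() < 0.5 else 0    # one fair coin flip
        flips += 1
        x, m = x + b * m, 2 * m        # double the die size
        if m >= n and x <= n:          # accept X as a dn roll
            m = n
        elif m >= n:                   # recycle: X - n ~ d(m - n)
            x, m = x - n, m - n
    return x, flips
```

For \( n = 5 \), averaging the flip counts over many runs gives a value near the \( \mean[N_{n = 5}] = 3.6 \) computed in Section~\ref{sec:three}.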
The next section contains a proof that the algorithm is correct and that it uses the optimal number of coin flips. Section~\ref{sec:three} shows an upper bound on the expected number of flips needed that is slightly better than that of~\cite{knuth1976complexity}. The last section then shows how this technique can be extended to loaded dice. \section{Proof of correctness} \begin{theorem} \label{thm:one} The output of \( \textsf{optimal\_uniform}(n) \) is uniform over \( \{1, \ldots, n \} \). \end{theorem} It helps to have the following fact in place. \begin{lemma} At the end of each line of the algorithm, for state \( (X, m) \) it holds that \( [X \mid m] \sim \text{d}m \). \end{lemma} \begin{proof} The state \( (X, m) \) is changed by lines 1, 4, 5, and 6. When \( (X, m) = (1, 1) \), it is trivially true that \( X \sim \text{d}1 \), so line 1 maintains this invariant. Suppose \( (X, m) \) has \( X \sim \text{d}m \) before running line 4, 5, or 6. For line 4, if \( X \sim \unifdist(\{1, \ldots, m\}) \) then \( X + m \sim \unifdist(\{m + 1, \ldots, 2m\}) \). Since \( B \) is equally likely to be \( 0 \) or \( 1 \), \( X + Bm \) is equally likely to be any element of \( \{1, \ldots, 2m \} \) and the invariant is maintained. For line 5, it is a well known fact that for \( X \sim \text{d}m \) where \( m \geq n \), \( [X | X \leq n] \sim \text{d}n \). Since \( X \leq n \) leads to \( m = n \), \( [X | m = n] \sim \text{d}n \). Similarly for line 6, if \( X \sim \text{d}m \) where \( m > n \), then \( [X \mid X > n] \sim \unifdist(\{n + 1, \ldots, m \}) \). So \( [X - n \mid X > n] \sim \unifdist(\{1, \ldots, m - n\}) \). An induction then completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:one}] The theory of the \emph{randomness recycler} (see for instance~\cite{fill2000randomness,huber2016perfect}) says that two conditions are necessary and sufficient to establish correctness.
The first is that \( [X | m] \sim \text{d}m \) throughout the running of the algorithm, which is true from the previous lemma. The second is that the algorithm terminates with probability 1. After at most \( \lceil \log_2(n) \rceil \) steps there will have been at least one opportunity for \( m \) to grow to be at least \( n \). When \( m \in \{n, n + 1, \ldots, 2n - 1\} \), there is at least a \( 1 / m \geq 1 / (2n) \) chance of termination. Hence the probability that the algorithm terminates approaches 1 as the number of steps goes to infinity. \end{proof} \section{The optimal DDG} In fact, this simple algorithm is not only correct, it is optimal in the following sense. Consider any algorithm that utilizes random bits to reach an outcome. This can be represented using a \emph{discrete distribution generating} tree, called a DDG for short. This is a rooted binary tree where each leaf of the tree is associated with an outcome. The algorithm works by starting with the current node equal to the root of the tree. A coin is flipped; if the result is 0, move to the left child of the current node, otherwise move to the right. If the resulting node is a leaf, output the outcome associated with that leaf. Otherwise make this the new current node, and repeat. Figure~\ref{fig:DDG} shows two DDGs that output a random variable with probabilities \( (3 / 8, 1 / 2, 1 / 8) \) over outcomes \( (1, 2, 3) \). The tree on the right will be preferred in practice because the average depth (and hence average number of coin flips used by the algorithm) is smaller. \begin{figure}[ht!]
\centering \begin{tikzpicture} \draw (0, 0) to (-1, -1); \draw (0, 0) to (1, -1); \draw (-1, -1) to (-1.5, -2); \draw (-1, -1) to (-0.5, -2); \draw (1, -1) to (1.5, -2); \draw (1, -1) to (0.5, -2); \draw (-1.5, -2) to (-1.75, -3) node[below] {1}; \draw (-1.5, -2) to (-1.25, -3) node[below] {1}; \draw (-0.5, -2) to (-0.25, -3) node[below] {1}; \draw (-0.5, -2) to (-0.75, -3) node[below] {2}; \draw (1.5, -2) to (1.75, -3) node[below] {2}; \draw (1.5, -2) to (1.25, -3) node[below] {2}; \draw (0.5, -2) to (0.25, -3) node[below] {2}; \draw (0.5, -2) to (0.75, -3) node[below] {3}; \end{tikzpicture} \hspace*{12pt} \begin{tikzpicture} \draw (0, 0) to (-1, -1) node[below] {2}; \draw (0, 0) to (1, -1); \draw (1, -1) to (1.5, -2); \draw (1, -1) to (0.5, -2) node[below] {1}; \draw (1.5, -2) to (1.75, -3) node[below] {1}; \draw (1.5, -2) to (1.25, -3) node[below] {3}; \end{tikzpicture} \caption{Two DDGs for \( (3 / 8, 1 / 2, 1 / 8) \). The DDG on the left will always use 3 flips to determine the outcome, while the one on the right uses 1 flip with probability \( 1 / 2 \), 2 flips with probability \( 1 / 4 \), or 3 flips with probability \( 1 / 4 \).} \label{fig:DDG} \end{figure} Knuth and Yao defined an optimal DDG for a probability distribution \( p = (p_1, \ldots, p_n) \) as follows. Let \( N(T) \) denote the random number of bits used by DDG \( T \). For \( p \), say that \( T \) is \emph{optimal} if for any other DDG \( T' \) with output \( p \), for all \( i \), \( \prob(N(T) > i) \leq \prob(N(T') > i) \). Another way to say this is that the number of flips used by the optimal tree is stochastically dominated by the number of flips used by any other tree. As usual, define the \emph{level} of a node to be the number of edges in the path from the root to the node. Knuth and Yao showed the following result~\cite{knuth1976complexity}. 
\begin{theorem} For probability vector \( (p_1, \ldots, p_n) \), a DDG is optimal if and only if outcome \( i \) appears on level \( j \) a number of times equal to \[ \lfloor 2^j p_i \rfloor - 2 \lfloor 2^{j - 1} p_i \rfloor \] (this is the \( j \)th bit in the binary expansion of \( p_i \), using the finite expansion when possible.) \end{theorem} This is equivalent to the following. \begin{lemma} For probability vector \( (p_1, \ldots, p_n) \), a DDG is optimal if and only if outcome \( i \) appears on any given level either 0 or 1 times, and there is no \( k \) such that outcome \( i \) appears at level \( k, k + 1, k + 2, \ldots \). \end{lemma} \begin{proof} If outcome \( i \) appears once at level \( j \), this leaf contributes \( 1 / 2^j \) to the probability of outcome \( i \). Since \( p_i \) has a unique binary expansion in which there are not an infinite number of trailing 1's, such a DDG must be equivalent to one in which \( i \) appears on level \( j \) if and only if the (finite when possible) binary expansion of \( p_i \) has a 1 in position \( j \). \end{proof} This characterization can show that \( \textsf{optimal\_uniform}(n) \) is optimal. \begin{theorem} The algorithm \( \textsf{optimal\_uniform}(n) \) yields an optimal DDG. \end{theorem} \begin{proof} Consider a state \( (X, m) \) at a level greater than 1 in the tree. Then there is exactly one state \( (X', m') \) and coin flip \( B' \) that moves to this state. If the previous level did not have any leaves, then \( m' = m / 2 \), \( B' = \ind(X > m') \), and \[ X' = X - m' \ind(X > m') \] is the only state and flip that leads to \( (X, m) \). If the previous level did have leaves, then the state that moves at line 6 to \( (X, m) \) is \( (X + n, m + n) \). Therefore, the state at the previous level that moves to this state has \( m' = (m + n) / 2 \), \( B' = \ind(X > m') \), and \[ X' = X + n - B' m' \] is the only option for the previous state and flip that leads to the current state.
But since the initial state is fixed at \( (1, 1) \), this forms the basis of an induction proof that any state \( (X, m) \) at a later level is determined by a unique set of flips. In particular, if \( (X, m, B) \) moves to a leaf at a particular level, then there is a unique set of flips that led to that state. That means that for any outcome at a particular level, there is exactly one leaf at that level that results in that state. Therefore each outcome appears on at most one leaf at each level. Now consider how many levels in a row the DDG can have a particular outcome as a leaf. A leaf only occurs when \( m \geq n \) and \( X \leq n \) at line 5. If \( X > n \), then at the next state \( m \leftarrow m - n \). Therefore, if there are \( r \) values of \( X \) that reject at that level, then at the next level there are at most \( 2r - n \) values of \( X \) that can reject at line 6. At the next level there will be at most \( 4r - 2n - n = 4r - 3n \) values of \( X \) that reject, then \( 8r - 7n \), then \( 16r - 15n \) and so on. After \( k \) levels there will be at most \( 2^k r - (2^k - 1)n = 2^k (r - (1 - 2^{-k}) n) \) values of \( X \) that lead to rejection. Because \( r < n \), this either reaches 0 or goes negative, indicating that rejection does not occur. If rejection does not occur at a particular level then there cannot be a leaf at that same level. Therefore, it cannot be the case that the DDG for this algorithm has an infinite sequence of consecutive levels with leaves. Therefore, by the previous lemma this DDG must be optimal. \end{proof} \section{Upper bounding the expected number of flips} \label{sec:three} It is straightforward to calculate exactly the average number of flips used by \( \textsf{optimal\_uniform}(n) \) for any particular \( n \). For instance, if \( n = 5 \) it takes three flips to make \( m = 8 \), at which point there is a \( 5 / 8 \) chance of accepting the result and terminating the algorithm.
If it does not accept then \( m = 8 - 5 = 3 \), and one more flip makes \( m = 6 \). At this point either the algorithm accepts with probability \( 5 / 6 \) or rejects and sets \( m = 6 - 5 = 1 \), so the algorithm starts from the beginning. Hence \[ \mean[N_{n = 5}] = 3 + \frac{3}{8}\left[1 + \frac{1}{6} \mean[N_{n = 5}] \right], \] so \( \mean[N_{n = 5}] = 3.6 \). Note that any DDG for a fair die will need at least \( \lceil \log_2(n) \rceil \) bits on average. Results in~\cite{knuth1976complexity} give that \( \log_2(n) + 2 \) bits suffice on average. The following result strengthens this slightly. \begin{theorem} The expected number of flips used by \( \textsf{optimal\_uniform}(n) \) is at most \( \lceil \log_2(n) \rceil + 1 \). \end{theorem} \begin{proof} Start the algorithm by flipping coins until the initial value \( m_1 = 1 \) has doubled to be at least \( n \). This will require \( f_1 = \lceil \log_2(n / m_1) \rceil = \lceil \log_2(n) \rceil \) flips. After these flips, say that the state is \( (X_2, m_2) \) where \( m_2 \in \{n, \ldots, 2n - 1\} \). To be specific, \[ m_2 = 2^{\lceil \log_2(n / m_1) \rceil}. \] The chance of accepting is \( n / m_2 \), so the chance that further work is needed is \( 1 - n / m_2 \). If further work is needed, then \( f_2 \) more flips are needed to double \( m_2 \) back up to be at least \( n \), and so on. Altogether the expected number of flips \( t \) is \[ t = f_1 + \left(1 - \frac{n}{m_2} \right) \left[ f_2 + \left(1 - \frac{n}{m_3}\right)\left[f_3 + \left( 1 - \frac{n}{m_4} \right) \cdots \right.\right. \] where \( f_k = \lceil \log_2(n / (m_{k} - n)) \rceil \) for \( k \geq 2 \). Suppose this is thought of as a function of \( m_2, m_3, \ldots \) chosen independently of each other in \( \{n, \ldots, 2n - 1\} \). Then once \( m_2 \) is chosen to maximize the result, \( m_3 \) should receive the same value because of the self-similarity of the expression.
Hence there is \( m^* \in \{n, \ldots, 2n - 1\} \) such that \[ t \leq f_1 + \left(1 - \frac{n}{m^*} \right) \left[ f_{m^*} + \left(1 - \frac{n}{m^*}\right)\left[f_{m^*} + \left( 1 - \frac{n}{m^*} \right) \cdots \right.\right. \] Setting \[ \alpha = \left(1 - \frac{n}{m^*} \right) \left[ f_{m^*} + \left(1 - \frac{n}{m^*}\right)\left[f_{m^*} + \left( 1 - \frac{n}{m^*} \right) \cdots \right.\right. \] gives \[ \alpha = \left(1 - \frac{n}{m^*} \right) \left[ f_{m^*} + \alpha \right], \] so \[ \alpha = \left( \frac{m^*}{n} - 1 \right) \left\lceil \log_2\left(\frac{n}{m^* - n} \right) \right\rceil. \] Let \( \beta = (m^* - n) / n \). If \( \beta \in (1 / 2, 1] \), then \( \alpha \leq 1 \cdot 1 = 1 \). If \( \beta \in (1 / 4, 1/ 2] \), then \( \alpha \leq (1 / 2) \cdot 2 = 1 \). If \( \beta \in (1 / 8, 1 / 4] \), \( \alpha \leq (1 / 4) \cdot 3 = 3 / 4 \), and for smaller \( \beta \) the value of \( \alpha \) only shrinks further. Hence \( t \leq \lceil \log_2(n) \rceil + 1 \). \end{proof} \section{Extending to discrete distributions} \label{sec:four} For the general algorithm, the idea is to also keep state \( (X, m) \), but use \( X \) to draw uniformly from the set of outcomes that have a 1 in their binary expansion at the bit position equal to the number of flips of the coin taken.
\vspace*{12pt} \begin{tabular}{rl} \toprule \multicolumn{2}{l}{\textbf{Algorithm} \( \textsf{optimal\_discrete}(p = (p_1, p_2, \ldots)) \) } \\ \midrule \gk & \hspace{0em} \( m \leftarrow 1,\ X \leftarrow 1, Y \leftarrow 0 \) \\ \gk & \hspace*{0em} while \( Y = 0 \) \\ \gk & \hspace*{1em} let \( A \leftarrow \{i:2p_i \geq 1\} \), \( n \leftarrow \#(A) \) \\ \gk & \hspace*{1em} for all \( i \), let \( p_i \leftarrow 2p_i - \ind(2p_i \geq 1)\) \\ \gk & \hspace*{1em} draw \( B \) uniform over \( \{0, 1\} \) \\ \gk & \hspace*{1em} \( X \leftarrow X + Bm \), \( m \leftarrow 2m \) \\ \gk & \hspace*{1em} if \( m \geq n \) and \( X \leq n \) then \( m \leftarrow n \), let \( Y \) be the \( X \)th element of \( A \). \\ \gk & \hspace*{1em} if \( m \geq n \) and \( X > n \) then \( X \leftarrow X - n \), \( m \leftarrow m - n \) \\ \gk & \hspace*{0em} return \( Y \) \\ \bottomrule \end{tabular} \vspace*{12pt} \begin{theorem} Algorithm \( \textsf{optimal\_discrete}(p) \) is optimal. \end{theorem} \begin{proof} Although the value of \( n \) changes from flip to flip, the arguments used in the uniform case remain the same. Here there is a unique set of flips that results in a particular state \( (X, m) \) and hence any particular leaf outcome appears only once at a level. Here \( A \) is chosen after \( d \) flips so that \( i \in A \) if and only if the \( d \)th bit in the binary expansion of the original \( p_i \) is 1, so the DDG must be optimal. \end{proof} Of course this algorithm is a bit more abstract than the first, given that actually implementing line 3 requires an analysis of the probability distribution being used to determine which of the outcomes have a 1 bit in the position equal to the number of flips. \section{Conclusion} The Knuth-Yao optimal sampler can be implemented using a randomness recycler type algorithm.
This allows for a very simple implementation of the algorithm for generating the roll of a fair die and can be extended to handle general discrete distributions given the ability to draw out the \( i \)th bit in the binary expansion of the probabilities. \printbibliography \end{document} For distributions with a finite probability vector \( (p_1, \ldots, p_n) \), this requires \( \Theta(N n) \) steps of work, where \( N \) is the number of bits used by the optimal DDG. A more concrete way of doing things is to use the \emph{Alias} method of Walker~\cite{walker1977efficient}. Given input \( (p_1, \ldots, p_n) \), Walker showed how to create in linear time (and memory) a table \( f:\{0, 1\} \times \{1, \ldots, n\} \rightarrow \{1, \ldots, n\} \) and a new probability vector \( (a_1, \ldots, a_n) \) such that if \( U \sim \text{d}n \) and \( [B \mid U] \sim \berndist(a_U) \), then \( f(B, U) \sim (p_1, \ldots, p_n) \). Note that flipping a \( \berndist(p) \) random variable can be done for any \( p \in (0, 1) \) using on average 2 flips with Knuth-Yao sampling. \vspace*{12pt} \begin{tabular}{rl} \toprule \multicolumn{2}{l}{\textbf{Algorithm} \( \textsf{optimal\_bern}(p) \) } \\ \midrule \gk & \hspace*{0em} repeat \\ \gk & \hspace*{1em} draw \( B \) uniform over \( \{0, 1\} \) \\ \gk & \hspace*{1em} if \( B = 1 \) and \( p \geq 1 / 2 \) then return(1) \\ \gk & \hspace*{1em} if \( B = 1 \) and \( p < 1 / 2 \) then return(0) \\ \gk & \hspace*{1em} if \( B = 0 \) and \(p \geq 1 / 2\) then \( p \leftarrow 2p - 1 \) \\ \gk & \hspace*{1em} if \( B = 0 \) and \(p < 1 / 2\) then \( p \leftarrow 2p \) \\ \bottomrule \end{tabular} \vspace*{12pt} Therefore, the number of flips needed to generate from \( (p_1, \ldots, p_n) \) (after linear time preprocessing)
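The \( \textsf{optimal\_bern} \) pseudocode above can be transcribed directly into Python (a sketch added for illustration, not part of the original paper); each flip with \( B = 1 \) settles the outcome using the current leading bit of \( p \), while \( B = 0 \) discards that bit, so the number of flips used is geometric with mean 2, matching the claim above.

```python
import random

def optimal_bern(p, rng=random.random):
    """Generate a Bernoulli(p) bit from fair coin flips, following the
    optimal_bern pseudocode: each flip either settles the outcome using
    the current leading bit of p, or shifts to the next bit of p."""
    while True:
        b = 1 if rng() < 0.5 else 0       # one fair coin flip
        if b == 1:
            return 1 if p >= 0.5 else 0   # outcome is the leading bit of p
        # b == 0: discard the leading bit of p and repeat
        p = 2 * p - 1 if p >= 0.5 else 2 * p
```

Correctness follows since the returned value is the \( k \)th bit of \( p \) with probability \( 2^{-k} \), so the chance of returning 1 is \( \sum_k 2^{-k} b_k(p) = p \); for dyadic \( p \) such as \( 3/4 \) the doublings are even exact in floating point.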
2412.20782v1
http://arxiv.org/abs/2412.20782v1
A randomisation method for mean-field control problems with common noise
\documentclass[ 10pt, a4paper ]{article} \usepackage{amsmath} \usepackage{color} \usepackage{amssymb} \usepackage{amsthm} \usepackage{accents} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{bbm} \usepackage{chngcntr} \usepackage{enumitem} \usepackage{listings} \usepackage{lmodern} \usepackage{mathtools} \usepackage[unicode=true,bookmarks=false,hypertexnames=false]{hyperref} \usepackage{graphicx,tikz,tabularx} \usepackage{csquotes} \usepackage[textwidth=16cm,textheight=23.5cm]{geometry} \usepackage[capitalise,noabbrev,nameinlink,sort]{cleveref} \let\etoolboxforlistloop\forlistloop \usepackage{autonum} \let\forlistloop\etoolboxforlistloop \cslet{blx@noerroretextools}\empty \counterwithin{equation}{section} \newcommand{\ubar}[1]{\underaccent{\bar}{#1}} \DeclarePairedDelimiter\abs{\lvert}{\rvert}\DeclarePairedDelimiter\norm{\lVert}{\rVert} \DeclareMathOperator*{\esssup}{ess\,sup} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \newcommand{\<}{\langle} \renewcommand{\>}{\rangle} \let\oldc\c \renewcommand{\c}[1]{\mathcal{#1}} \renewcommand{\b}[1]{\mathbb{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \renewcommand{\P}{\mathbb{P}} \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\Var}{Var} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Pc}{\mathcal{P}} \newcommand{\1}{\mathbbm{1}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \crefalias{enumi}{part} \newtheorem{assumption}{Assumption} \crefname{assumption}{Assumption}{Assumptions} \renewcommand{\theassumption}{\Alph{assumption}} \newlist{assumptionenum}{enumerate}{1} \setlist[assumptionenum]{label=(\theassumption\arabic*)}\crefalias{assumptionenumi}{assumption} \let\eqref\labelcref 
\makeatletter \autonum@generatePatchedReferenceCSL{eqref} \makeatother \def\blu#1{{\color{blue}#1}} \def\rd#1{{\color{red}#1}} \def\gr#1{{\color{green}#1}} \def\red#1{{\color{red}#1}} \def\ora#1{{\color{orange}#1}} \def\cy#1{{\color{cyan}#1}} \def\teal#1{{\color{teal}#1}} \begin{document} \title{A randomisation method for mean-field control problems with common noise} \author{Robert Denkert\thanks{Humboldt University of Berlin, Department of Mathematics, rob at denkert.eu; This author gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 410208580 -- IRTG2544 (``Stochastic Analysis in Interaction'').} \quad Idris Kharroubi\thanks{ LPSM, UMR CNRS 8001, Sorbonne Universit\'e and Universit\'e Paris Cit\'e, idris.kharroubi at sorbonne-universite.fr. Research of the author partially supported by Agence Nationale de la Recherche (ReLISCoP grant ANR-21-CE40-0001). } \quad Huy\^en Pham\thanks{Ecole Polytechnique, CMAP, huyen.pham at polytechnique.edu; This author is supported by the BNP-PAR Chair ``Futures of Quantitative Finance", and by FiME, Laboratoire de Finance des March\'es de l'Energie, and the ``Finance and Sustainable Development'' EDF - CACIB Chair} } \maketitle \begin{abstract} We study mean-field control (MFC) problems with common noise using the control randomisation framework, where we substitute the control process with an independent Poisson point process, controlling its intensity instead. To address the challenges posed by the mean-field interactions in this randomisation approach, we reformulate the admissible control as $L^0$-valued processes adapted only to the common noise. We then construct the randomised control problem from this reformulated control process, and show its equivalence to the original MFC problem. Thanks to this equivalence, we can represent the value function as the minimal solution to a backward stochastic differential equation (BSDE) with constrained jumps. 
Finally, using this probabilistic representation, we derive a randomised dynamic programming principle (DPP) for the value function, expressed as a supremum over equivalent probability measures. \end{abstract} \vspace{5mm} \noindent \textbf{MSC Classification}: 60H30; 60K35; 60K37; 93E20. \vspace{3mm} \noindent \textbf{Keywords:} Mean-field control with common noise; control randomisation; decomposition of processes; randomised dynamic programming principle; backward stochastic differential equations. \section{Introduction} We will consider mean-field control problems with the following state dynamics \begin{align} dX_s^{t,\xi,\alpha} &= b(s, \P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},X^{t,\xi,\alpha}_s,\alpha_s)ds + \sigma(s,\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},X^{t,\xi,\alpha}_s,\alpha_s) dW_s\\ &\quad + \sigma^0(s,\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},X^{t,\xi,\alpha}_s,\alpha_s) dB_s,\qquad X^{t,\xi,\alpha}_t = \xi, \end{align} with two independent Brownian motions $W,B$ on some probability space $(\Omega,\c F,\P)$, an independent given starting value $\xi$, and a control process $\alpha$ taking values in some action space $A$. Here $(\P^{\c F^{B,\P}_s}_{X^{t,\xi,\alpha}_s})_{s\in [t,T]}$ denotes the $\b F^{B,\P}$-predictable and $\P$-a.s.\@ continuous version of $(\c L(X^{t,\xi,\alpha}_s|\c F^{B,\P}_s))_{s\in [t,T]}$. Our goal is to maximise the reward functional \begin{align} J(t,\xi,\alpha) \coloneqq \E\Big[ g(X^{t,\xi,\alpha}_T, \P_{X^{t,\xi,\alpha}_T}^{{\c F}^{B,\P}_T}) + \int_t^T f(s,X^{t,\xi,\alpha}_s,\P_{X^{t,\xi,\alpha}_s}^{{\c F}^{B,\P}_s},\alpha_s) ds \Big]. \end{align} There are generally two main approaches to solving mean-field (or McKean-Vlasov) control problems in the literature. The first is the stochastic maximum principle, which characterises the optimal control by an adjoint backward stochastic differential equation (BSDE).
In the context of mean-field control problems, the stochastic maximum principle was first studied by \cite{buckdahn_general_2011,andersson_maximum_2011} for cases where the coefficients only depend on the law through its moments. Building on the notion of differentiability introduced by \cite{lions2007theorie} in his course at Coll\`ege de France, \cite{carmona_forwardbackward_2015} extended this approach to general MFC problems; see also the standard work \cite{carmona_probabilistic_1_2018} for a broader overview of the related literature. The second approach revolves around the dynamic programming principle (DPP), which breaks the original optimisation problem down into several local optimisation problems. Combined with an Itô formula for McKean-Vlasov diffusions, this method leads to a Hamilton-Jacobi-Bellman (HJB) equation on the Wasserstein space that characterises the value function. \cite{pham_bellman_2018} were the first to develop such a DPP in the general setting without common noise, while \cite{pham_dynamic_2017} derived a DPP for MFC problems with common noise, where the admissible controls are solely adapted to the common noise. To our knowledge, the most comprehensive results on mean-field control problems involving common noise can be found in the works of \cite{djete_mckeanvlasov_2022,djete_mckeanvlasov_2022-1}. In this paper, we establish a randomised dynamic programming principle based on a randomised formulation of the control problem. The idea is to replace the control process by an independent Poisson point process, and then to optimise over a set of equivalent probability measures, effectively making the intensity of the Poisson point process the new control.
The randomisation method was introduced by \cite{bouchard_stochastic_2009} to deal with optimal switching problems, with further work on optimal switching found in \cite{elie_bsde_2014,elie_adding_2014,elie_probabilistic_2010} and with \cite{fuhrman_optimal_2020} addressing the infinite-dimensional case. Additionally, it has been applied to impulse control problems in \cite{kharroubi_backward_2010} and to optimal stopping in non-Markovian settings in \cite{fuhrman_representation_2016}. \cite{fuhrman_randomized_2015} applied the randomisation method to non-Markovian regular control problems. In addition, \cite{kharroubi_feynmankac_2015} studied the constrained BSDE for the randomised value function across a wide range of control problems, establishing a general connection between a class of constrained BSDEs and HJB equations. The randomisation framework has also found applications beyond establishing a randomised DPP. Specifically, \cite{kharroubi_discrete_2015,kharroubi_numerical_2014} proposed a numerical scheme for fully non-linear HJB equations based on the constrained BSDEs arising from the control randomisation framework, leading to various applications including portfolio optimisation \cite{zhang_dynamic_2019} and algorithmic trading \cite{abergel_algorithmic_2020}. Additionally, \cite{denkert_control_2024} introduced a policy gradient method for continuous-time reinforcement learning based on the equivalence between the original problem and its randomised formulation. In a mean-field setting, the randomisation method has previously only been considered by \cite{bayraktar_randomized_2018}, based on a decoupled McKean-Vlasov representation introduced by \cite{buckdahn_mean-field_2017} in the uncontrolled case. \cite{bayraktar_randomized_2018} provided a randomisation method for mean-field control problems without common noise and used this to obtain a Feynman-Kac formulation as well as a randomised dynamic programming principle.
Their results apply to MFC problems with \emph{deterministic} initial laws of the form $\pi = \delta_x$, $x\in\R^d$. This result does not extend to general initial laws, as the following example illustrates. \begin{example} We consider the following controlled dynamics together with their decoupled counterpart \begin{align} dX^{\xi,\alpha}_s &= (\alpha_s + \E[X^{\xi,\alpha}_s]) ds,\qquad X^{\xi,\alpha}_0 = \xi \sim \frac 1 2 \delta_{-1} + \frac 1 2 \delta_1,\\ dX^{\xi,x,\alpha}_s &= (\alpha_s + \E[X^{\xi,\alpha}_s]) ds,\qquad X^{\xi,x,\alpha}_0 = x, \end{align} where the control process $\alpha$ takes values in $A=[-1,1]$, $x\in\R$, and $\xi$ is a square-integrable random variable with distribution $\xi\sim \frac 1 2 \delta_{-1} + \frac 1 2 \delta_1$. Further, we assume that $T=1$ and that the reward functional is given by \[ J(\xi,\alpha) = \E\big[(X^{\xi,\alpha}_1 - 1)_+ + (X^{\xi,\alpha}_1 - \tfrac 5 2)_-\big]. \] Then the value function is given by \[ V(\xi) = \sup_\alpha J(\xi,\alpha), \] and solving for the optimal control, we find $\alpha^*_s \equiv 1$ for all $s\in [0,1]$, which results in $V(\xi) = J(\xi,\alpha^*)
% = \frac 1 2 \cdot (e-1) + \frac 1 2 \cdot 0 = \frac 1 2 (e-1)
$. At the same time, the decoupled value function in \cite{bayraktar_randomized_2018} is defined by \[ V(\xi,x) = \sup_\alpha \E\big[(X^{\xi,x,\alpha}_1 - 1)_+ + (X^{\xi,x,\alpha}_1 - \tfrac 5 2)_-\big]. \] For the starting value $x=-1$, the optimal control is $\alpha^*_s \equiv -1$ for $s\in [0,1]$, resulting in $V(\xi,-1) = e - \frac 5 2$. For $x=1$, the optimal control is $\alpha^*_s \equiv 1$ for all $s\in [0,1]$, leading to $V(\xi,1) = e-1$. According to \cite[Proposition 2.2]{bayraktar_randomized_2018}, we would have $V(\xi) = \E_{\xi'\sim\xi}[V(\xi,\xi')] = \frac 1 2 V(\xi,-1) + \frac 1 2 V(\xi,1)$, which is not the case. \end{example} In this paper, we improve on the randomisation approach in \cite{bayraktar_randomized_2018} to handle general mean-field control problems and to allow for common noise.
Unlike the previous approach that relies on the decoupled SDEs, we directly work with the mean-field SDE. This approach presents a challenge because we must ensure that the mean-field interaction is preserved during the control randomisation. Our approach is based on a novel reformulation of the admissible control set as a set of $L^0$-valued processes, adapted solely to the common noise, where we require that at each time point $s\in [t,T]$, the reformulated admissible control takes values in $L^0(\Omega,\c G\lor\c F^W_s,\P;A)$. This decomposition of the original controls is motivated by the observation that, for McKean-Vlasov dynamics, if one views not each particle but instead the whole current conditional distribution as the state, the idiosyncratic noise becomes part of the ``deterministic'' state dynamics, and only the common noise remains as ``stochastic noise'' for this system. This viewpoint has the advantage that it leads to a very natural way to apply the control randomisation method, as randomising over such $L^0$-valued processes adapted solely to the common noise keeps the mean-field interaction, in the form of the conditional distribution of the state process given the common noise, intact. We establish an isomorphism between the original and the reformulated admissible control set, up to modification of processes. A key challenge lies in the way back from a reformulated control to an original control, as it presents a subtle measurability issue.
Specifically, given two filtrations $\b F = (\c F_t)_t$, $\b G = (\c G_t)_t$ and an $\b F$-progressive process $\hat\alpha:[0,T]\times\Omega\to L^0(\Omega',\c G_{T},\P;A)$ satisfying $\hat\alpha_t(\omega) \in L^0(\Omega',\c G_t,\P;A)$ for all $(t,\omega)\in [0,T]\times\Omega$, such as the reformulated control, we need to find an $\b F\otimes\b G$-progressive process $\alpha:[0,T]\times\Omega\times\Omega'\to A$ that is a modification of $\hat\alpha$, meaning that \[ \alpha_t(\omega,\cdot) = \hat\alpha_t(\omega)\text{ in }L^0(\Omega',\c G_{T},\P;A)\qquad\text{for all }(t,\omega)\in [0,T]\times\Omega, \] as detailed in \cref{proposition selecting a measurable process}. We introduce a randomised control problem based on the reformulated admissible control set, by replacing the reformulated control by an independent Poisson process taking values in $L^0$. By showing that we can again find a corresponding progressive $A$-valued control process, we are able to define the randomised state dynamics. The optimisation now takes place over a set of equivalent probability measures that vary the intensity of the Poisson point process. Crucially, the prior decomposition of the control ensures that the conditional law of the state given the common noise remains unchanged under different randomised controls. This allows us to apply the standard tools from the randomisation framework, albeit adapted to account for the additional layer of abstraction between the original and reformulated control. Our first main result shows that the randomised value function coincides with the value function of the original problem. Subsequently, we represent the value function as the minimal solution to a backward stochastic differential equation (BSDE) with constrained jumps. This result is obtained by studying a class of penalised BSDEs corresponding to value functions for a restricted set of randomised controls.
Finally, with this probabilistic representation of the value function, we directly derive a randomised dynamic programming principle (DPP), expressed as a supremum over equivalent probability measures. In the benchmark case where the value function is sufficiently regular, we recover the standard (non-randomised) DPP for mean-field control problems, which has been established in the comprehensive works \cite{djete_mckeanvlasov_2022,djete_mckeanvlasov_2022-1}. The remainder of this paper is organised as follows. In \cref{section original problem formulation} we formulate the mean-field control problem. In \cref{section problem l2 formulation}, we introduce the reformulated admissible control set and show its equivalence to the original problem. \cref{section randomised problem} constructs the randomised control problem and gives our first main result, \cref{theorem equivalence original and randomised value function}, on the equivalence of the non-randomised and the randomised problem. Finally, \cref{section bsde characterisation and randomised dpp} characterises the value function as the minimal solution to a constrained BSDE, enabling us to subsequently derive the randomised DPP. \paragraph{Notation} Given a random variable $X$, we will denote the (non-augmented) filtration generated by $X$ by $\b F^X$. Further, for a filtration $\b F$, we will denote the $\P$-augmented filtration by $\b F^{\P}$ and the predictable $\sigma$-algebra w.r.t. $\b F$ by $Pred(\b F)$. For a path $w:[0,T]\to A$, we denote its stopped path by $[w]_t \coloneqq (w_{s\land t})_{s\in [0,T]}$. For an integer $\ell$, we denote by $|\cdot|$ and $\cdot$ the Euclidean norm and the inner (or scalar) product on $\R^\ell$, respectively. We denote by $\c{P}(\R^d)$ the space of Borel probability measures on $\R^d$ and by $\c{P}_2(\R^d)$ the set of elements of $\c{P}(\R^d)$ having a finite second order moment.
We equip $\c{P}_2(\R^d)$ with the 2-Wasserstein distance $\c{W}_2$, under which it is itself a Polish space. We recall that the 2-Wasserstein distance on $\c{P}_2(\R^d)$ is defined by \begin{equation}\label{eq definition w_2 distance} \c{W}_2(\mu, \nu) = \Big( \inf_{X\sim\mu, Y\sim\nu} \E[|X-Y|^2] \Big)^{\frac 1 2} \end{equation} for $\mu, \nu \in \c{P}_2(\R^d)$. We also denote by $C^\ell$ the set of continuous functions from $[0,T]$ to $\R^\ell$ and by $\mathbf{C}^\ell=(\c C ^\ell_t)_{t \in [0,T]}$ its canonical filtration. Further, we denote the Wiener measure on $C^\ell$ by $\mu^\ell_W$. Finally, given a Polish space $E$, we write $D([0,T];E)$ for the space of $E$-valued càdlàg functions on $[0,T]$ equipped with the usual Skorokhod topology. \section{Setting}\label{section original problem formulation} Let $(\Omega,\c F,\P)$ be a complete probability space supporting an $m$-dimensional Brownian motion $W$ and an independent $n$-dimensional Brownian motion $B$. Further, we suppose that there exists a separable $\sigma$-algebra $\c G\subseteq \c F$ independent of $\b F^{W,B}$ and sufficiently large such that $\{\P_\xi \,|\, \xi \in L^2(\Omega,\c G,\P;\R^d) \} = \c P_2(\R^d)$. We define the set of admissible controls $\c A$ as all $\b F^{W,B}\lor \c G$-predictable\footnote{We could also allow all $(\b F^{W,B,\P}\lor\c G)$-predictable processes, since by \cite[Lemma 2.17(b)]{jacod_limit_2003}, every $(\b F^{W,B,\P}\lor\c G)$-predictable process is indistinguishable from an $(\b F^{W,B}\lor \c G)$-predictable process. We choose $(\b F^{W,B}\lor\c G)$-predictable processes only to slightly simplify the presentation in \cref{section problem l2 formulation}. } processes taking values in some Polish space $A$ equipped with a metric $\rho < 1$ (otherwise we replace it by the topologically equivalent bounded metric $\frac{\rho}{1+\rho}$).
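For intuition about the distance \eqref{eq definition w_2 distance}: in dimension $d=1$, for two equally weighted empirical measures with the same number of atoms, the infimum over couplings is attained by the monotone rearrangement, i.e.\ by matching sorted samples. The following Python sketch implements only this special case (the function name and the restriction to equal sample sizes are our own illustrative choices).

```python
import numpy as np

def w2_empirical_1d(x, y):
    """W_2 between two equally weighted empirical measures on R with the
    same number of atoms: the optimal coupling matches sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed"
    return float(np.sqrt(np.mean((x - y) ** 2)))
```

For instance, translating all atoms by a constant $c$ yields distance $|c|$, in line with $\c W_2(\mu,\mu\ast\delta_c)=|c|$.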
Let $b,\sigma,\sigma^0:~[0,T]\times \R^d\times \c{P}_2(\R^d)\times {A}\rightarrow \R^d,\R^{d\times m},\R^{d\times n}$ be three measurable functions that we suppose to satisfy the following assumption. \begin{assumption}\label{assumptions sde coefficients} \begin{assumptionenum} \item The functions $b,\sigma$ and $\sigma^0$ are continuous. \item There exists a constant $L$ such that \begin{align} &|b(t,x,\mu,a)-b(t,y,\nu,a)|+|\sigma(t,x,\mu,a)-\sigma(t,y,\nu,a)|+|\sigma^0(t,x,\mu,a)-\sigma^0(t,y,\nu,a)|\\ &\leq L\Big(|x-y|+\c{W}_2(\mu,\nu)\Big), \end{align} for all $x,y\in\R^d$, $\mu,\nu\in \c{P}_2(\R^d)$, $t\in[0,T]$ and $a\in A$. \item There exists a constant $M$ such that \begin{align} |b(t,0,\delta_0,a)|+|\sigma(t,0,\delta_0,a)|+|\sigma^0(t,0,\delta_0,a)| & \leq M, \end{align} for all $t\in[0,T]$ and $a\in A$. \end{assumptionenum} \end{assumption} Given an initial time $t\in[0,T]$, an initial condition $\xi\in L^2(\Omega,\c G\lor \c F^W_t,\P;\R^d)$ and an admissible control $\alpha\in \c A$, we consider the $\b F^{W,B,\P}\lor\c G$-progressive state process $(X^{t,\xi,\alpha}_s)_{s\in [t,T]}$ defined by \begin{align} X_r^{t,\xi,\alpha} &= \xi+\int_t^rb(s, X^{t,\xi,\alpha}_s,\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},\alpha_s)ds + \int_t^r \sigma(s,X^{t,\xi,\alpha}_s,\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},\alpha_s) dW_s\\ &\quad +\int_t^r \sigma^0(s,X^{t,\xi,\alpha}_s,\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\P}_s},\alpha_s) dB_s,\qquad r\in[t,T]\;. \label{eq mfc state dynamics} \end{align} Here we denote by $(\P^{\c F^{B,\P}_s}_{X^{t,\xi,\alpha}_s})_{s\in [t,T]}$ the $\b F^{B,\P}$-predictable and $\P$-a.s.\@ continuous version of $(\c L(X^{t,\xi,\alpha}_s|\c F^{B,\P}_s))_{s\in [t,T]}$, that is $\P^{\c F^{B,\P}_s}_{X^{t,\xi,\alpha}_s} = \c L(X^{t,\xi,\alpha}_s|\c F^{B,\P}_s)$, $\P$-a.s., for all $s\in [t,T]$. 
The existence of such a version is ensured by \cite[Lemma A.1]{djete_mckeanvlasov_2022-1}, since the sets of $\b F^{B,\P}$-optional and $\b F^{B,\P}$-predictable processes coincide by \cite[Chapter V, Corollary 3.3]{revuz_continuous_1999}. We note that under \cref{assumptions sde coefficients}, we have existence and uniqueness of $(X^{t,\xi,\alpha}_s)_{s\in [t,T]}$ and \begin{align} \sup_{\alpha\in \c A}\E\Big[\sup_{s\in[t,T]}|X^{t,\xi,\alpha}_s|^2\Big] & < \infty\;. \end{align} Let $f:~[0,T]\times \R^d\times \c{P}_2(\R^d)\times {A}\rightarrow \R$ and $g:~ \R^d\times \c{P}_2(\R^d)\rightarrow \R$ be two Borel-measurable functions on which we make the following assumption. \begin{assumption}\label{assumptions reward functional} There exists a constant $M>0$ such that \begin{align} |f(t,x,\mu,a)|+|g(x,\mu)| &\leq M\Big( 1+|x|+ \Big(\int_{\R^d}|y|^2\mu(dy)\Big)^{\frac 1 2} \Big) \end{align} for all $(t,x,\mu,a)\in[0,T]\times \R^d\times \c{P}_2(\R^d)\times A$. \end{assumption} \begin{remark} In certain results, such as the equivalence between the original and the randomised problem in \cref{theorem equivalence original and randomised value function}, the linear growth assumption on $f$ and $g$ can be relaxed to quadratic growth. Nevertheless, for the law invariance result of the value function by \cite{djete_mckeanvlasov_2022-1}, and to establish the (randomised) DPP in \cref{section randomised dpp} and the BSDE characterisation of the value function in \cref{section bsde characterisation}, we need the more restrictive linear growth assumption. \end{remark} We next define the function $J$ by \begin{align} J(t,\xi,\alpha) \coloneqq \E\Big[ g(X^{t,\xi,\alpha}_T, \P_{X^{t,\xi,\alpha}_T}^{{\c F}^{B,\P}_T}) + \int_t^T f(s,X^{t,\xi,\alpha}_s,\P_{X^{t,\xi,\alpha}_s}^{{\c F}^{B,\P}_s},\alpha_s) ds \Big] \end{align} for $t\in[0,T]$, $\xi\in L^2(\Omega,\c G\lor \c F^W_t,\P;\R^d)$ and $\alpha\in \c A$.
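To illustrate the objects just defined, the following Python sketch approximates the state dynamics \eqref{eq mfc state dynamics} and the functional $J$ by an Euler scheme for an interacting particle system: all particles share one common-noise increment while receiving independent idiosyncratic increments, and the empirical mean over particles stands in for (a statistic of) the conditional law $\P^{\c F^{B,\P}_s}_{X^{t,\xi,\alpha}_s}$. The concrete coefficients ($b(t,x,\mu,a)=a+\int y\,\mu(dy)$, $\sigma=\sigma^0=1$, $f=-a^2$, $g(x,\mu)=-|x|$), the Gaussian initial condition and the constant control are our own illustrative choices compatible with \cref{assumptions sde coefficients,assumptions reward functional}; none of them appear in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative model: b = a + conditional mean, sigma = sigma0 = 1,
# f(t, x, mu, a) = -a**2, g(x, mu) = -|x|  (our own choices).
N, n_steps, T = 5_000, 100, 1.0   # particles, Euler steps, horizon
dt = T / n_steps
a = 0.5                            # a constant control in A = [-1, 1]

X = rng.standard_normal(N)         # xi ~ N(0, 1), our illustrative choice
running = 0.0
for _ in range(n_steps):
    m = X.mean()                               # empirical conditional mean
    dW = np.sqrt(dt) * rng.standard_normal(N)  # idiosyncratic increments
    dB = np.sqrt(dt) * rng.standard_normal()   # one shared common increment
    running += -(a ** 2) * dt                  # f-contribution to the reward
    X = X + (a + m) * dt + dW + dB             # Euler step for all particles

J_estimate = running - np.abs(X).mean()        # add terminal reward g
```

For a fixed common-noise path this estimates a conditional version of $J$; averaging over independent common-noise paths would estimate $J$ itself.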
Under \cref{assumptions sde coefficients,assumptions reward functional}, the function $J$ is well defined. We consider the optimal value of $J$ over the set of admissible controls $\c A$. Consequently, we define the value function as \[ V(t,\xi) \coloneqq \sup_{\alpha\in \c A} J(t,\xi,\alpha) < \infty. \] We note that \cite{djete_mckeanvlasov_2022,djete_mckeanvlasov_2022-1} show that the value function does not depend on the choice of the probability space, and that it depends on $\xi$ only through its law $\P_\xi$.\footnote{They furthermore show that the strong and the weak formulation of the problem both lead to the same value function.} \begin{proposition}[{\cite[Proposition 2.4]{djete_mckeanvlasov_2022-1}}] \label{proposition value function law invariant and independent of probabilistic setting} The value function is \emph{law invariant}, that is, it depends on $\xi$ only through its law $\P_\xi$. Furthermore, the value function does not depend on the specific choice of $(\Omega,\c F,\P)$ and $\c G$, as long as $\c G$ is rich enough. \end{proposition} Therefore, with slight abuse of notation, we will also write \[ V(t,\P_\xi) = V(t,\xi). \] \section{Reformulating the set of admissible controls}\label{section problem l2 formulation} In this section, we derive an equivalent reformulation of the original mean-field control problem, which will form the basis for developing the randomised formulation in \cref{section randomised problem}. The key idea behind this reformulation is that for McKean-Vlasov dynamics, by treating the entire current conditional distribution as the state rather than focusing on individual particles, the idiosyncratic noise becomes part of the deterministic state dynamics, leaving the common noise as the only ``true noise'' affecting the system. This leads us to reframe the control set $\c A$ of $\b F^{W,B}\lor\c G$-predictable processes by connecting it to a suitable set $\hat{\c A}$ of only $\b F^B$-predictable processes.
This aligns the control set more closely with the traditional (non-mean-field) framework, where the control is adapted to the external noise. In this spirit, let us consider the sets \[ \c A_s \coloneqq L^0(\Omega,\c G\lor\c F^W_s,\P;A),\qquad s\in [0,T], \] of $\P$-equivalence classes of $\c G\lor\c F^W_s$-$\c B(A)$-measurable functions $\phi:\Omega\to A$ equipped with the metric \[ \hat\rho(\phi^1,\phi^2) \coloneqq \E^\P[\rho(\phi^1,\phi^2)] < 1 ,\qquad\text{for all } \phi^1,\phi^2\in \c A_s. \] They will now take the role of the action space, each at its corresponding time $s\in [0,T]$. We remark that $\c A_u\subseteq\c A_v$, for $u\leq v$. Further, since $A$ is a Polish space and $\c G$ and $\c F^W_s$ are separable, the spaces $\c A_s$, $s\in [0,T]$, are also Polish. Finally, we define the set $\hat{\c A}$ of all $\b F^B$-predictable processes $\hat\alpha:[0,T]\times\Omega\to \c A_T$ such that additionally $\hat\alpha_s\in\c A_s$ for $s\in [0,T]$. We will spend the rest of this section on the relation between the sets $\c A$ and $\hat{\c A}$. More precisely, in \cref{subsection connecting A and L0 A processes} we will introduce in which sense we can identify such processes $\hat\alpha:[0,T]\times\Omega\to \c A_T$ with the original control processes of the form $\alpha:[0,T]\times\Omega\to A$ in a slightly more general setting, which we will need in \cref{section randomised problem}. Afterwards, in \cref{subsection equivalent action set}, we will use this concept to establish an equivalence between the action sets $\c A$ and $\hat{\c A}$, which will aid in the upcoming construction of the randomised control problem in \cref{section randomised problem}.
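As a toy illustration of this decomposition, the following Python sketch fixes a single time point and treats a control value depending jointly on the idiosyncratic and the common noise as, for each frozen common-noise value, an $A$-valued random variable in the idiosyncratic noise, i.e.\ as an element of $\c A_s$; the metric $\hat\rho$ is then estimated by Monte Carlo. The concrete control $\tanh(w+b)$, the Gaussian noise samples and the bounded metric $\rho=|\cdot|\wedge 1$ are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_mc = 50_000                      # Monte Carlo sample size

# A control value depending jointly on the idiosyncratic noise w_s and the
# common noise b_s (a stand-in for an element of the original control set).
def phi(w_s, b_s):
    return np.tanh(w_s + b_s)      # takes values in A = (-1, 1)

def hat_alpha(b_s):
    """Decomposed control: freeze the common-noise value b_s and return
    the A-valued random variable w_s -> phi(w_s, b_s), playing the role
    of an element of the action space A_s = L^0."""
    return lambda w_s: phi(w_s, b_s)

w = rng.standard_normal(n_mc)      # samples of the idiosyncratic noise

def hat_rho(f1, f2):
    """Monte Carlo estimate of hat_rho = E[rho(., .)] with rho = min(|.|, 1)."""
    return float(np.minimum(np.abs(f1(w) - f2(w)), 1.0).mean())

d_same = hat_rho(hat_alpha(0.5), hat_alpha(0.5))   # identical L^0 elements
d_diff = hat_rho(hat_alpha(0.5), hat_alpha(-0.5))  # different common-noise values
```

Two decomposed controls built from the same frozen common-noise value have $\hat\rho$-distance zero, while different common-noise values generally give a strictly positive distance.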
\subsection{Identifying decomposed controls with original controls}\label{subsection connecting A and L0 A processes} Since in \cref{section randomised problem} we are going to extend our probability space for the randomised formulation to allow for an additional Poisson point process $\mu$, we will in this section consider more generally processes defined on an extension $(\hat\Omega,\hat{\c F},\hat\P)$ of our original probability space $(\Omega,\c F,\P)$. This means that $\hat \Omega = \Omega\times \tilde \Omega$, $\{F\times\tilde\Omega \,|\, F\in\c F\}\subseteq \hat{\c F}$ and $\hat \P(\cdot\times\tilde\Omega)=\P$. We denote by $\pi:\hat\Omega\to\Omega$ the canonical projection, which by definition is probability preserving. We next identify $\sigma$-algebras on $\Omega$ with $\sigma$-algebras on $\hat\Omega$ by replacing them with their preimages under $\pi$. We also extend the definition of random variables on $\Omega$ to $\hat \Omega$ by composing with $\pi$. For a suitably general class of processes $\hat\alpha:[0,T]\times\hat\Omega\to \c A_T$ and $\alpha:[0,T]\times\hat\Omega\to A$, we will define the identification relation as follows. \begin{definition} Let $(\hat\Omega,\hat{\c F},\hat\P)$ be an extension of $(\Omega,\c F,\P)$ and $\b F$ a filtration on the extension independent of $\c G,W$. Then we say that an $\b F^W\lor\c G\lor \b F$-progressive process $\alpha:[0,T]\times\hat\Omega\to A$ \emph{$\b F$-identifies} to an $\b F$-progressive process $\hat\alpha:[0,T]\times\hat\Omega\to \c A_T = L^0(\Omega,\c G\lor\c F^W_T,\P;A)$ if there exists a $(\c G\lor\b F^W)\otimes\b F$-progressive process $\Phi:[0,T]\times\Omega\times\hat\Omega\to A$ such that \[ \alpha_s(\hat\omega) = \Phi_s(\pi(\hat\omega),\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega, \] and, viewed as an equality in $L^0(\Omega,\c G\lor\c F^W_T,\P;A)$, \[ \hat\alpha_s(\hat\omega) = \Phi_s(\cdot,\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega.
\] \end{definition} We remark that it is not clear from the definition alone whether, given an $\alpha$, an $\hat\alpha$ identifying with $\alpha$ exists (and vice versa). However, we can show that, if it exists, $\hat\alpha$ given $\alpha$ (and similarly $\alpha$ given $\hat\alpha$) is unique up to modification, as the following \cref{lemma alpha bar alpha uniqueness} shows. \begin{lemma}\label{lemma alpha bar alpha uniqueness} Let $(\hat\Omega,\hat{\c F},\hat\P)$ be an extension of $(\Omega,\c F,\P)$ and for $i\in\{1,2\}$ let $\alpha^i$ be an $\b F^W\lor\c G\lor\b F^i$-progressive process $\b F^i$-identifying to an $\b F^i$-progressive process $\hat\alpha^i$. Then for all $s\in [0,T]$, \[ \E^{\hat\P}[\rho(\alpha^1_s,\alpha^2_s)] = \E^{\hat\P}[\hat\rho(\hat\alpha^1_s,\hat\alpha^2_s)]. \] In particular, $\alpha^1$ is a modification of $\alpha^2$ iff $\hat\alpha^1$ is a modification of $\hat\alpha^2$. \end{lemma} \begin{proof} For $i\in\{1,2\}$, by definition there exist $(\c G\lor\b F^W)\otimes\b F^i$-progressive processes $\Phi^i:[0,T]\times\Omega\times\hat\Omega\to A$ such that \[ \alpha^i_s(\hat\omega) = \Phi^i_s(\pi(\hat\omega),\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega, \] and, viewed as an equality in $L^0(\Omega,\c G\lor\c F^W_T,\P;A)$, \[ \hat\alpha^i_s(\hat\omega) = \Phi^i_s(\cdot,\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega. \] Defining the filtration $\b F\coloneqq \b F^1\lor\b F^2$, we note that $\b F$ is also independent of $\c G,W$ and that $\Phi^1,\Phi^2$ are also $(\c G\lor\b F^W)\otimes\b F$-progressive.
Thus, for all $s\in [0,T]$, \begin{align} \E^{\hat\P}[\rho(\alpha^1_s,\alpha^2_s)] &= \int_{\hat\Omega} \rho(\Phi^1_s(\pi(\hat\omega),\hat\omega),\Phi^2_s(\pi(\hat\omega),\hat\omega)) \hat\P(d\hat\omega)\\ &= \int_{\hat\Omega} \int_{\hat\Omega} \rho(\Phi^1_s(\pi(\hat\omega^1),\hat\omega^2),\Phi^2_s(\pi(\hat\omega^1),\hat\omega^2)) \hat\P(d\hat\omega^1)\hat\P(d\hat\omega^2) = \E^{\hat\P}[\hat\rho(\hat\alpha^1_s,\hat\alpha^2_s)], \end{align} where the second equality comes from the independence of $\c G$, $W$ and $\b F$. \end{proof} Moreover, in some cases, e.g.\ if the filtration $\b F$ is generated by a càdlàg process, we can even show that uniqueness in joint law holds for the identification; see the following \cref{lemma joint law N bar N map}, which will be useful in the case of Poisson processes in \cref{section randomised problem}. \begin{lemma}\label{lemma joint law N bar N map} Let $(\hat\Omega^i,\hat{\c F}^i,\hat\P^i)$, $i\in\{1,2\}$, be two extensions of $(\Omega,\c F,\P)$. Denote by $\c G^i,W^i$ the canonical extensions of $\c G,W$, and by $\pi^i:\hat\Omega^i\to\Omega$ the canonical projection, $i\in\{1,2\}$. Further, suppose that each $(\hat\Omega^i,\hat{\c F}^i,\hat\P^i)$ supports a càdlàg, $\b F^{N^i}$-progressive process $N^i$ independent of $\c G^i$ and $W^i$ taking values in a Polish space $E$, an $\b F^{W^i,N^i}\lor\c G^i$-progressive map $\alpha^i:[0,T]\times\hat\Omega^i\to A$ and an $\b F^{N^i}$-progressive process $\hat\alpha^i:[0,T]\times\hat\Omega^i\to\c A_T$ such that $\alpha^i$ $\b F^{N^i}$-identifies to $\hat\alpha^i$, $i\in\{1,2\}$.
Then the following statements (of equality in finite-dimensional distributions) are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{lemma 4.2 assumption i} $\c L^{\hat\P^1}(\pi^1,N^1,\alpha^1) = \c L^{\hat\P^2}(\pi^2,N^2,\alpha^2)$, \item\label{lemma 4.2 assumption ii} $\c L^{\hat\P^1}(\pi^1,N^1,\hat\alpha^1) = \c L^{\hat\P^2}(\pi^2,N^2,\hat\alpha^2)$, \item\label{lemma 4.2 assumption iii} $\c L^{\hat\P^1}(\pi^1,N^1,\alpha^1,\hat\alpha^1) = \c L^{\hat\P^2}(\pi^2,N^2,\alpha^2,\hat\alpha^2)$. \end{enumerate} \end{lemma} \begin{proof} By definition there exist $(\c G\lor\b F^W)\otimes\b F^{N^i}$-progressive processes $\Phi^i:[0,T]\times\Omega\times\hat\Omega^i\to A$ such that \[ \alpha^i_s(\hat\omega^i) = \Phi^i_s(\pi^i(\hat\omega^i),\hat\omega^i),\qquad\text{for all }(s,\hat\omega^i)\in [0,T]\times\hat\Omega^i, \] and, viewed as equality in $L^0(\Omega,\c G\lor\c F^W_T,\P;A)$, \[ \hat\alpha^i_s(\hat\omega^i) = \Phi^i_s(\cdot,\hat\omega^i),\qquad\text{for all }(s,\hat\omega^i)\in [0,T]\times\hat\Omega^i. \] Then the Doob-Dynkin Lemma implies that there exists a $\c B([0,T])\otimes(\c G\lor\c F^W_T)\otimes\c B(D([0,T];E))$-measurable $\Psi^i:[0,T]\times\Omega\times D([0,T];E)\to A$ such that \[ \Phi^i_s(\omega,\hat\omega^i) = \Psi^i_s(\omega,N^i(\hat\omega^i)),\qquad\text{for all }(s,\omega,\hat\omega^i)\in[0,T]\times\Omega\times\hat\Omega^i. \] Now we define ${\alpha^2}':[0,T]\times\hat\Omega^2\to A$ and ${\hat\alpha^2}':[0,T]\times\hat\Omega^2\to\c A_T$ via \[ {\alpha^2_s}'(\hat\omega^2) \coloneqq \Psi^1_s(\pi^2(\hat\omega^2),N^2(\hat\omega^2)),\qquad\text{for all }(s,\hat\omega^2)\in [0,T]\times\hat\Omega^2, \] and \[ {\hat\alpha^2_s}'(\hat\omega^2) \coloneqq \Psi^1_s(\cdot,N^2(\hat\omega^2)),\qquad\text{for all }(s,\hat\omega^2)\in [0,T]\times\hat\Omega^2. \] By construction, ${\hat\alpha^2}'$ is $\c B([0,T])\otimes\c F^{N^2}_T$-$\c B(A)$-measurable and ${\alpha^2}'$ is $\c B([0,T])\otimes(\c F^{W^2,N^2}_T\lor\c G^2)$-measurable.
Now let us assume that \labelcref{lemma 4.2 assumption i} holds, which means that $\c L^{\hat\P^1}(\pi^1,N^1,\alpha^1) = \c L^{\hat\P^2}(\pi^2,N^2,\alpha^2)$. Then by construction \begin{align} \c L^{\hat\P^1}(\pi^1,N^1,\alpha^1,\alpha^1,\hat\alpha^1) = \c L^{\hat\P^2}(\pi^2,N^2,\alpha^2,{\alpha^2}',{\hat\alpha^2}'), \label{eq lemma 4.2 joint law equality} \end{align} which shows that $\alpha^2_s = {\alpha^2_s}'$, $\hat\P^2$-a.s., for all $s\in [0,T]$, which means that $\alpha^2$ is a modification of ${\alpha^2}'$. Therefore, using that $\Psi^2(\pi^2(\cdot),N^2(\cdot))$ is $\c B([0,T])\otimes (\c G^2\lor\c F^{W^2}_T)\otimes \c F^{N^2}_T$-measurable and that $\c G^2\lor\c F^{W^2}_T$ and $N^2$ are independent, we obtain \begin{align} \E^{\hat\P^2}[\rho(\alpha^2_s,{\alpha^2_s}')] = \E^{\hat\P^2}[\rho(\Psi^2_s(\pi^2,N^2),\Psi^1_s(\pi^2,N^2))] = \E^{\hat\P^2}[\hat\rho(\hat\alpha^2_s,{\hat\alpha^2_s}')] = 0, \end{align} and hence $\hat\alpha^2_s = {\hat\alpha^2_s}'$, $\hat\P^2$-a.s., for all $s\in [0,T]$, which together with \eqref{eq lemma 4.2 joint law equality} implies \labelcref{lemma 4.2 assumption iii}. The direction \labelcref{lemma 4.2 assumption ii} $\Rightarrow$ \labelcref{lemma 4.2 assumption iii} can be proven similarly, while \labelcref{lemma 4.2 assumption iii} implies \labelcref{lemma 4.2 assumption i} and \labelcref{lemma 4.2 assumption ii} trivially by taking marginals. \end{proof} \begin{remark} It is not strictly necessary for there to be a càdlàg process $N$ generating the filtration $\b F=\b F^N$. The result \cref{lemma joint law N bar N map} holds more generally as long as the filtration $\b F$ is countably generated. However, in this paper, we focus on filtrations $\b F=\b F^N$ generated by càdlàg, piecewise constant processes $N$, such as Poisson processes, as these will be relevant to the randomisation framework outlined in \cref{section randomised problem}.
\end{remark} \subsection{Equivalence of the action sets $\c A$ and $\hat{\c A}$}\label{subsection equivalent action set} In this section we will show that for each $\alpha\in\c A$ there exists an $\hat\alpha\in\hat{\c A}$ such that $\alpha$ identifies with $\hat\alpha$, and vice versa. We equip $\c A$ and $\hat{\c A}$ with the following pseudometrics \[ d_{\c A}^{\hat\P}(\alpha^1,\alpha^2) \coloneqq \E^{\hat\P}\Big[\int_0^T \rho(\alpha^1_s,\alpha^2_s)ds \Big],\qquad d_{\hat{\c A}}^{\hat\P}(\hat\alpha^1,\hat\alpha^2) \coloneqq \E^{\hat\P}\Big[\int_0^T \hat\rho(\hat\alpha^1_s,\hat\alpha^2_s) ds\Big]. \] By \cref{lemma alpha bar alpha uniqueness}, this identification will then define an isomorphism between the induced quotient spaces $\c A/_\sim$ and $\hat{\c A}/_\sim$, where $\alpha^1\sim\alpha^2$ (resp. $\hat\alpha^1\sim\hat\alpha^2$) iff $d_{\c A}^{\hat\P}(\alpha^1,\alpha^2) = 0$ (resp. $d_{\hat{\c A}}^{\hat\P}(\hat\alpha^1,\hat\alpha^2) = 0$). In particular, this justifies considering the optimal control problem over the action set $\hat{\c A}$ rather than $\c A$, which will be crucial for \cref{section randomised problem}, as it provides a more natural starting point for the construction of the randomised control problem. \subsubsection{From $\c A$ to $\hat{\c A}$} For the construction of an $\hat\alpha\in\hat{\c A}$ such that a given $\alpha\in\c A$ $\b F^B$-identifies to $\hat\alpha$, we first prove the following general results about decomposing predictable processes. \begin{proposition}\label{proposition decompose filtrations of predictable processes} Let $(\Omega,\c F)$ be a measurable space with two filtrations $\b F$ and $\b G$ and let $A$ be a Polish space. Then a process $X:[0,T]\times\Omega\to A$ is $\b F\lor\b G$-predictable if and only if there exists an $\b F\otimes\b G$-predictable process $Y:[0,T]\times\Omega\times\Omega\to A$ defined on the space $(\Omega\times \Omega,\c F\otimes \c F)$ such that \[ X_t(\omega) = Y(t,\omega,\omega),\qquad\text{for all }(t,\omega)\in [0,T]\times\Omega.
\] \end{proposition} \begin{proof} Given $Y$, the $\b F\lor\b G$-predictability of $X$ is obvious. So let us focus on the other direction. Since the Polish space $A$ is Borel-isomorphic to a Borel subset of $\R$, we may w.l.o.g. assume that $A = \R$. We will prove this result using the functional monotone class theorem. To this end, we denote by $\c H$ the space of $\b F\lor\b G$-predictable functions $X:[0,T]\times\Omega\to\R$ such that a corresponding $Y^X$ exists. We note that if $X^1,X^2\in\c H$ and $c\in\R$, then also $X^1 X^2,X^1+c X^2\in\c H$, since we can choose $Y^{X^1 X^2} \coloneqq Y^{X^1}Y^{X^2}$ and $Y^{X^1+c X^2} \coloneqq Y^{X^1}+c Y^{X^2}$. Furthermore, if $(X^n)_n\subseteq \c H$ and $0\leq X^n\uparrow X$ to a bounded process $X$, then we can choose $Y^X\coloneqq \limsup_n Y^{X^n}$ to conclude that also $X\in\c H$. Thus, recalling the generator of $Pred(\b F\lor\b G)$, we only need to show that (i) $\1_{(t_1,t_2]\times B\cap C}\in\c H$ for all $0\leq t_1<t_2\leq T$ and $B\in \c F_{t_1},C\in\c G_{t_1}$, and (ii) $\1_{\{0\}\times B\cap C}\in\c H$ for all $B\in \c F_{0-},C\in\c G_{0-}$. By defining $Y^{\1_{(t_1,t_2]\times B\cap C}}\coloneqq \1_{(t_1,t_2]\times B\times C}$ (respectively $Y^{\1_{\{0\}\times B\cap C}}\coloneqq \1_{\{0\}\times B\times C}$) this is clear, and the result follows. \end{proof} \begin{proposition}\label{proposition l0 control is predictable} Let $(\Omega,\c F,\b F)$ be a filtered space, $(\Omega',\c G,\P)$ a probability space and $X:[0,T]\times\Omega\times\Omega'\to A$ an $\b F\otimes\c G$-predictable process taking values in some Polish space $A$. Then the process $Y:[0,T]\times\Omega\to L^0(\Omega',\c G,\P;A)$, where $L^0$ is meant w.r.t. $\P$-equivalence classes, defined by \[ Y_t(\omega) \coloneqq X_t(\omega,\cdot) \in L^0(\Omega',\c G,\P;A),\qquad (t,\omega)\in [0,T]\times\Omega, \] is $\b F$-predictable.
\end{proposition} \begin{proof} Note that it suffices to show the claim for $A=\R$ since every Polish space is Borel-isomorphic to a Borel subset of $\R$. Therefore we will only consider the case $A=\R$, and carry out the proof using the functional monotone class theorem. To this end, let $\c H$ be the set of such processes $X$ for which $Y^X$ is indeed $\b F$-predictable. Note that if $X\equiv 1$, then $Y^X\equiv 1$ is trivially $\b F$-predictable. Similarly, if $X^1,X^2\in \c H$ and $c\in\R$, then also $X^1 X^2, X^1+c X^2\in\c H$, since $Y^{X^1 X^2} = Y^{X^1} Y^{X^2}$ and $Y^{X^1 + c X^2} = Y^{X^1} + c Y^{X^2}$. Furthermore, if $(X^n)_n\subseteq \c H$ and $0\leq X^n\uparrow X$ to a bounded process $X$, then $Y^X = \lim_n Y^{X^n}$ is also $\b F$-predictable, and thus $X\in\c H$. Finally, we need to verify that $\c H$ contains a generating set for $Pred(\b F)\otimes\c G$. For this we note that both (i) $\1_{(t_1,t_2]\times B\times C}$ for $0 \leq t_1 < t_2\leq T$ and $B\in \c F_{t_1},C\in\c G$ and (ii) $\1_{\{0\}\times B\times C}$ for $B\in\c F_{0-},C\in\c G$ are in $\c H$, since the corresponding processes $Y$ are clearly $\b F$-predictable, and thus $\c H$ indeed contains all $\b F\otimes\c G$-predictable processes $X$. \end{proof} Putting these results together, we can now prove the following \cref{proposition bar alpha given alpha well defined}. \begin{proposition}\label{proposition bar alpha given alpha well defined} For every $\alpha\in\c A$, there exists an $\hat\alpha\in\hat{\c A}$ such that $\alpha$ $\b F^B$-identifies to $\hat\alpha$. \end{proposition} \begin{proof} We recall that $\alpha\in\c A$ is $\b F^{W,B}\lor\c G$-predictable, and thus by \cref{proposition decompose filtrations of predictable processes} there exists a $(\c G\lor\b F^W)\otimes\b F^B$-predictable process $\Psi:[0,T]\times\Omega\times\Omega \to A$ such that \[ \Psi_s(\omega,\omega) = \alpha_s(\omega),\qquad\text{for all }(s,\omega)\in [0,T]\times\Omega.
\] Next we note that we can canonically extend $\Psi$ to $\Phi:[0,T]\times\Omega\times\hat\Omega\to A$, that is, $\Phi_s(\omega,\hat\omega) \coloneqq \Psi_s(\omega,\pi(\hat\omega))$. In particular $\Phi:[0,T]\times\Omega\times\hat\Omega\to A$ is $(\c G\lor\b F^W)\otimes\b F^B$-predictable, and thus by \cref{proposition l0 control is predictable}, the process $\hat\alpha:[0,T]\times\hat\Omega\to L^0(\Omega,\c G\lor\c F^W_T,\P;A) = \c A_T$ defined by \[ \hat\alpha_s(\hat\omega) \coloneqq \Phi_s(\cdot,\hat\omega), \] is $\b F^B$-predictable and satisfies $\hat\alpha_s(\hat\omega)\in L^0(\Omega,\c G\lor\c F^W_s,\P;A) = \c A_s$ for all $s\in [0,T]$, $\hat\omega\in\hat\Omega$. Therefore $\hat\alpha\in\hat{\c A}$ and $\alpha$ $\b F^B$-identifies to $\hat\alpha$. \end{proof} \subsubsection{From $\hat{\c A}$ to $\c A$} Our goal in this section is to construct an $\alpha\in\c A$ which $\b F^{\hat\alpha}$-identifies to a given $\hat\alpha\in\hat{\c A}$. For this, we will need several auxiliary results. The first key tool is the following measurable selection result. \begin{proposition}\label{proposition selecting a measurable process} Let $(\Omega,\c F,\b F)$ be a filtered space and $(\Omega',\c G,\b G,\P)$ be a filtered probability space, such that $\c G$ is separable. Let $A$ be a Polish space and $X:[0,T]\times\Omega\to L^0(\Omega',\c G,\P;A)$ be a $(\c B([0,T])\otimes\c F)$-$\c B(L^0(\Omega',\c G,\P;A))$-measurable and $\b F$-adapted process. Further assume that $X_t(\omega) \in L^0(\Omega',\c G_t,\P;A)\subseteq L^0(\Omega',\c G,\P;A)$ for all $(t,\omega)\in [0,T]\times\Omega$. Then there exists a $(\c B([0,T])\otimes\c F\otimes\c G)$-$\c B(A)$-measurable and $\b F\otimes\b G$-adapted process $Y:[0,T]\times\Omega\times\Omega'\to A$ such that \[ Y_t(\omega,\cdot) = X_t(\omega)\text{ in }L^0(\Omega',\c G,\P;A),\qquad\text{for all }(t,\omega)\in [0,T]\times\Omega. \] Further, if $X$ is $\b F$-progressive, then $Y$ can also be chosen $\b F\otimes\b G$-progressive.
\end{proposition} \begin{proof} \begin{enumerate}[wide,label=(\roman*)] \item \label{proof proposition selecting a measurable process case A unit interval} Let us first consider the case that $A = [0,1]$. The following construction and proof is based on \cite[Theorem IV.30]{dellacherie_probabilities_1978} and its detailed version in \cite{kaden_progressive_2004}. Since $[0,1]$ is bounded, the notions of convergence in probability and convergence in $L^1$ coincide (and therefore induce the same topology), so we will in the following consider $L^1$ instead of $L^0$. Let $X:[0,T]\times\Omega\to L^1(\Omega',\c G,\P;[0,1])$ be a $(\c B([0,T])\otimes\c F)$-$\c B(L^1(\Omega',\c G,\P;[0,1]))$-measurable and $\b F$-adapted process. Then, since $\c G$ and thus also $L^1(\Omega',\c G,\P;[0,1])$ is separable, we can approximate $X$ uniformly by countable sums of simple processes as follows: Let $n\in\N$, and let $(A^n_k)_k\subset\c B(L^1(\Omega',\c G,\P;[0,1]))$ be Borel sets of diameter $d^n_k \coloneqq \sup_{x,y\in A^n_k} \norm{x-y}_{L^1(\Omega',\c G,\P;[0,1])} \leq 2^{-n}$ covering $L^1(\Omega',\c G,\P;[0,1])$, that is $\bigcup_k A^n_k = L^1(\Omega',\c G,\P;[0,1])$. Further let $(B^n_k)_k\subseteq \c B([0,T])\otimes \c F$ be the corresponding preimages under $X$, that is $B^n_k \coloneqq X^{-1}(A^n_k)$ for all $k\in\N$. We note that if $X$ is $\b F$-progressive, then $B^n_k\in Prog(\b F)$. Now for each $k\in\N$, we choose a $\c G$-measurable $\alpha^n_k:\Omega'\to [0,1]$ such that $\alpha^n_k\in A^n_k$ (more precisely, the $\P$-equivalence class of $\alpha^n_k$ is in $A^n_k$). Then we can define \[ \Phi^n_s(\omega,\omega') \coloneqq \sum_{k=1}^\infty \alpha^n_k(\omega') \1_{B^n_k}(s,\omega),\qquad (s,\omega,\omega')\in [0,T]\times\Omega\times\Omega'. \] Thus we observe that $\norm{\Phi^n_s(\omega,\cdot) - X_s(\omega)}_{L^1(\Omega',\c G,\P;[0,1])}\leq 2^{-n}\to 0$ uniformly in $(s,\omega) \in [0,T]\times\Omega$.
Moreover, the $\Phi^n$ are $\c B([0,T])\otimes \c F\otimes \c G$-measurable, but not yet adapted to $\b F\otimes\b G$, since we only have $\alpha^n_k\in L^1(\Omega',\c G,\P;[0,1])$. To fix this, we define $t^n_k\coloneqq \inf \{t \geq 0 \,|\, (t,\omega)\in B^n_k\text{ for some }\omega\in\Omega\}$ and distinguish two cases: If the infimum is attained, i.e. $t^n_k \in B^n_k(\omega)$ for some $\omega\in\Omega$, we note that $Z^n_k \coloneqq X_{t^n_k}(\omega) \in L^1(\Omega',\c G_{t^n_k},\P;[0,1])$. In particular there exists a $\c G_{t^n_k}$-measurable $\hat Z^n_k:\Omega'\to [0,1]$ such that $\hat Z^n_k(\cdot) = Z^n_k$ in $L^1(\Omega',\c G,\P;[0,1])$. Furthermore, by construction $\norm{\hat Z^n_k(\cdot) - \alpha^n_k(\cdot)}_{L^1(\Omega',\c G,\P;[0,1])}\leq 2^{-n}$. If instead $t^n_k \not\in B^n_k(\omega)$ for all $\omega\in\Omega$, then we can find $(s^n_{k,l},\omega^n_{k,l})_l\subseteq B^n_k$ such that $s^n_{k,l}\downarrow t^n_k$. We now consider the sequence $(Z^n_{k,l})_l \coloneqq (X_{s^n_{k,l}}(\omega^n_{k,l}))_l\subseteq L^1(\Omega',\c G,\P;[0,1])$. Note that by construction $Z^n_{k,l} \in L^1(\Omega',\c G_{s^n_{k,l}},\P;[0,1])$ for all $l\in\N$. Now since $(Z^n_{k,l})_l$ are uniformly bounded, by \cite[Theorem II.25]{dellacherie_probabilities_1978}, the sequence is relatively compact in the weak $\sigma(L^1,L^\infty)$ topology. Thus, \cite[Theorem II.24]{dellacherie_probabilities_1978} implies that the sequence $(Z^n_{k,l})_l$ converges (along a subsequence) in the weak $\sigma(L^1,L^\infty)$ topology to some $Z^n_k\in L^1(\Omega',\c G,\P;[0,1])$. Further, since $Z^n_{k,l}\in L^1(\Omega',\c G_{s^n_{k,l}},\P;[0,1])$ and $s^n_{k,l}\downarrow t^n_k$, we conclude that $Z^n_k \in L^1(\Omega',\c G_{t^n_k+},\P;[0,1])$, which means that we can find a $\c G_{t^n_k+}$-measurable $\hat Z^n_k:\Omega'\to [0,1]$ such that $\hat Z^n_k(\cdot) = Z^n_k$ in $L^1(\Omega',\c G,\P;[0,1])$.
At the same time, by construction $\norm{\hat Z^n_k(\cdot) - \alpha^n_k(\cdot)}_{L^1(\Omega',\c G,\P;[0,1])} \leq \liminf_{l\to\infty} \norm{Z^n_{k,l} - \alpha^n_k(\cdot)}_{L^1(\Omega',\c G,\P;[0,1])} \leq 2^{-n}$ since the $L^1$-norm is l.s.c.\@ w.r.t.\@ weak $\sigma(L^1,L^\infty)$ convergence. Therefore, by defining \[ Y^n_s(\omega,\omega') \coloneqq \sum_{k=1}^\infty \hat Z^n_k(\omega') \1_{B^n_k}(s,\omega),\qquad (s,\omega,\omega')\in [0,T]\times\Omega\times\Omega', \] we obtain $(\c B([0,T])\otimes\c F\otimes\c G)$-$\c B([0,1])$-measurable and $\b F\otimes\b G$-adapted processes $Y^n$ for $n\in\N$, which by construction satisfy for all $(s,\omega)\in [0,T]\times \Omega$, \begin{align} \norm{Y^n_s(\omega,\cdot) - \Phi^n_s(\omega,\cdot)}_{L^1(\Omega',\c G,\P;[0,1])} &\leq \sup_k \norm{\hat Z^n_k - \alpha^n_k}_{L^1(\Omega',\c G,\P;[0,1])} = \sup_k \norm{Z^n_k - \alpha^n_k}_{L^1(\Omega',\c G,\P;[0,1])} \leq 2^{-n}, \end{align} and thus \begin{align} &\norm{Y^n_s(\omega,\cdot) - X_s(\omega)}_{L^1(\Omega',\c G,\P;[0,1])}\\ &\leq \norm{Y^n_s(\omega,\cdot) - \Phi^n_s(\omega,\cdot)}_{L^1(\Omega',\c G,\P;[0,1])} + \norm{\Phi^n_s(\omega,\cdot) - X_s(\omega)}_{L^1(\Omega',\c G,\P;[0,1])} \leq 2^{-n+1} \to 0, \end{align} uniformly in $(s,\omega)\in [0,T]\times\Omega$. Finally, by defining the point-wise limit \[ Y_s(\omega,\omega') \coloneqq \liminf_{n\to\infty} Y^n_s(\omega,\omega'),\qquad (s,\omega,\omega') \in [0,T]\times\Omega\times\Omega', \] we obtain the desired $(\c B([0,T])\otimes\c F\otimes\c G)$-$\c B([0,1])$-measurable and $\b F\otimes\b G$-adapted process, which satisfies by construction $X_s(\omega) = Y_s(\omega,\cdot)$ in $L^1(\Omega',\c G,\P;[0,1])$ for all $(s,\omega)\in [0,T]\times\Omega$. Furthermore, if $X$ is $\b F$-progressive, then $B^n_k\in Prog(\b F)$ for $n,k\in\N$, and thus, using a similar argument as in \cite[Lemma 2.7]{kaden_progressive_2004}, we see that the constructed processes $\hat Z^n_k \1_{B^n_k}$ will be $\b F\otimes\b G$-progressive.
This in turn implies that then also $Y^n = \sum_k \hat Z^n_k \1_{B^n_k}$ and $Y = \liminf Y^n$ are $\b F\otimes\b G$-progressive. \item Let us now consider the case where $A$ is a general Polish space. By a standard result, there exists a Borel-subset $I \subseteq [0,1]$ and a Borel-isomorphism $\phi:A\to I\subseteq [0,1]$ between $A$ and $I$.\footnote{If $A$ is uncountable, one may choose $I=[0,1]$.} We define a left-inverse $\psi:[0,1]\to A$ to $\phi$ by $\psi = \1_I \phi^{-1} + \1_{I^c} a_0$, where $a_0\in A$ is a fixed but arbitrary element of the Polish space $A$. Clearly, $\psi$ is Borel-measurable and $\psi\circ\phi = \operatorname{id}_A$. Given this identification of $A$ with $I\subseteq [0,1]$, every process $X:[0,T]\times\Omega\to L^0(\Omega',\c G,\P;A)$ can be viewed as a process taking values in $L^0(\Omega',\c G,\P;[0,1])$. Specifically, we define $\bar X \coloneqq \phi(X):[0,T]\times\Omega\to L^0(\Omega',\c G,\P;I) \subseteq L^0(\Omega',\c G,\P;[0,1])$. Then $\bar X$ is a $(\c B([0,T])\otimes \c F)$-$(\c B(L^0(\Omega',\c G,\P;[0,1])))$-measurable and $\b F$-adapted process. Furthermore, it holds that $\bar X_t(\omega) \in L^0(\Omega',\c G_t,\P;[0,1])\subseteq L^0(\Omega',\c G,\P;[0,1])$ for all $(t,\omega)\in [0,T]\times\Omega$. By \cref{proof proposition selecting a measurable process case A unit interval}, there exists a corresponding $(\c B([0,T])\otimes \c F\otimes\c G)$-$\c B([0,1])$-measurable and $\b F\otimes\b G$-adapted process $\bar Y:[0,T]\times\Omega\times\Omega'\to [0,1]$, such that \[ \bar Y_t(\omega,\cdot) = \bar X_t(\omega)\text{ in }L^0(\Omega',\c G,\P;[0,1]),\qquad\text{for all }(t,\omega)\in [0,T]\times\Omega. \] If $\bar X$ is $\b F$-progressive, then $\bar Y$ can also be chosen to be $\b F\otimes\b G$-progressive. Next, we map this process back to $A$ by defining $Y\coloneqq \psi(\bar Y):[0,T]\times\Omega\times\Omega'\to A$. 
By construction, $Y$ is $(\c B([0,T])\otimes \c F\otimes\c G)$-$\c B(A)$-measurable and $\b F\otimes\b G$-adapted. Furthermore, if $X$, and thus $\bar X$, is $\b F$-progressive, then $\bar Y$, and thus $Y$, can also be chosen to be $\b F\otimes\b G$-progressive. Finally we have \[ Y_t(\omega,\cdot) = \psi(\bar Y_t(\omega,\cdot)) = \psi(\bar X_t(\omega)) = \psi(\phi(X_t(\omega))) = X_t(\omega)\text{ in }L^0(\Omega',\c G,\P;A),\qquad\text{for all }(t,\omega)\in [0,T]\times\Omega, \] which shows that $Y$ is the desired process. \end{enumerate} \end{proof} While the preceding measurable selection result \cref{proposition selecting a measurable process} allows us to select a progressive process for each $\hat\alpha\in\hat{\c A}$, the definition of the action set $\c A$ requires predictability. To obtain such a predictable process, we will leverage the fact that a large class of predictable processes can be mapped to the canonical space $(C^d, \c C^d_T, \mathbf C^d, \mu^d_W)$, see \cref{proposition canonical representation of predictable processes}. W.r.t.\@ this canonical filtration, the concepts of progressiveness and predictability then coincide, see also the upcoming \cref{lemma canonical space predictable adapted measurable equivalence}. Consequently, we will begin by proving these two auxiliary results, which are extensions of \cite[Propositions 10 and 9]{claisse_pseudo-markov_2016}. \begin{proposition}\label{proposition canonical representation of predictable processes} Let $(\Omega,\c F,\b F)$ be a filtered space, let $(\Omega',\c G)$ be a measurable space with a continuous process $X:[0,T]\times\Omega'\to\R^d$ and its generated filtration $\b G^X$, and let $A$ be a Polish space.
Then a process $Y:[0,T]\times\Omega\times\Omega'\to A$ is $\b F\otimes \b G^X$-predictable if and only if there exists an $\b F\otimes \mathbf C^d$-predictable process $\phi:[0,T]\times\Omega\times C^d\to A$ defined on the space $(\Omega\times C^d,\c F\otimes \c C^d_T)$, where $\mathbf C^d$ is the canonical filtration on $C^d$, such that \[ Y_t(\omega,\omega') = \phi(t,\omega,[X(\omega')]_t) = \phi(t,\omega,X(\omega')),\qquad\text{for all }(t,\omega,\omega')\in [0,T]\times\Omega\times\Omega', \] where $[w]_t \coloneqq (w_{s\land t})_{s\in [0,T]}$. \end{proposition} \begin{proof} We first prove that $\psi:(t,\omega,\omega')\mapsto (t,\omega,[X(\omega')]_t)$ is $Pred(\b F\otimes\b G^X)$-$Pred(\b F\otimes\mathbf C^d)$-measurable. To this end, we recall that $Pred(\b F\otimes\mathbf C^d)$ is generated by all sets of the form (i) $(t_1,t_2]\times C\times D$, where $0\leq t_1<t_2\leq T$ and $C\in\c F_{t_1}$, $D\in\c C^d_{t_1}$, and (ii) $\{0\}\times C\times D$ where $C\in \c F_{0-}$, $D\in\{\emptyset,C^d\}$. Thus, the measurability directly follows from noticing that $\psi^{-1}((t_1,t_2]\times C\times D) = (t_1,t_2]\times C\times X^{-1}(D)\in Pred(\b F\otimes\b G^X)$, since for $s\geq t_1$ and $D\in\c C^d_{t_1}$ we have $\{X\in D\} = \{[X]_s\in D\}$, respectively from $\psi^{-1}(\{0\}\times C\times D) = \{0\}\times C\times D'\in Pred(\b F\otimes\b G^X)$, where $D' = \emptyset$ if $D=\emptyset$ and $D' = \Omega'$ if $D=C^d$. At the same time, we observe that $\psi$ also generates the $\sigma$-algebra $Pred(\b F\otimes\b G^X)$, that is $Pred(\b F\otimes\b G^X) = \sigma(\psi)$. Recall that the measurability of $\psi$ shows that $\sigma(\psi)\subseteq Pred(\b F\otimes\b G^X)$.
For the other direction, we note that since $\c G^X_t = \sigma(X_s^{-1}(C)\,|\,s\in [0,t],C\in\c B(\R^d))$, the $\sigma$-algebra $Pred(\b F\otimes\b G^X)$ is generated by all sets of the form (i) $(t_2,t_3]\times B\times X_{t_1}^{-1}(C)$, where $0\leq t_1\leq t_2 < t_3\leq T$ and $B\in\c F_{t_2}$, $C\in\c B(\R^d)$, and (ii) $\{0\}\times B\times D$, where $B\in\c F_{0-}$ and $D\in \{\emptyset,\Omega'\}$. For (i), noting that for $s\geq t_1$, $X_{t_1} = ([X]_s)_{t_1}$ and $\{w^d_{t_1}\in C\}\in\c C^d_{t_1}\subseteq\c C^d_s$, we see that $(t_2,t_3]\times B\times X_{t_1}^{-1}(C) = \psi^{-1}((t_2,t_3]\times B\times \{w_{t_1}\in C\})\in\psi^{-1}(Pred(\b F\otimes\mathbf C^d))$. For (ii), we note that $\{0\}\times B\times D = \psi^{-1}(\{0\}\times B\times D')$, where $D' = \emptyset$ if $D=\emptyset$ and $D' = C^d$ if $D=\Omega'$, which implies that $\{0\}\times B\times D\in \psi^{-1}(Pred(\b F\otimes\mathbf C^d))$. Therefore also $Pred(\b F\otimes\b G^X)\subseteq \sigma(\psi)$, and we conclude that $Pred(\b F\otimes\b G^X) = \sigma(\psi)$. Using this auxiliary result, we can now prove both directions separately as follows. \begin{enumerate}[wide,label=(\roman*)] \item Let $Y:[0,T]\times\Omega\times\Omega'\to A$ be $\b F\otimes\b G^X$-predictable. Using that $\psi$ is $Pred(\b F\otimes\b G^X)$-$Pred(\b F\otimes\mathbf C^d)$-measurable, and that $Pred(\b F\otimes\b G^X) = \sigma(\psi)$, an application of the Doob-Dynkin Lemma gives us the desired $\b F\otimes\mathbf C^d$-predictable process $\phi:[0,T]\times\Omega\times C^d\to A$ with $Y(t,\omega,\omega') =(\phi\circ\psi)(t,\omega,\omega') = \phi(t,\omega,[X(\omega')]_t)$ for all $(t,\omega,\omega')\in [0,T]\times\Omega\times\Omega'$. Finally, since $[\cdot]_t\circ[\cdot]_t = [\cdot]_t$, we conclude that $\phi(t,\omega,X(\omega')) = Y_t(\omega,\omega') = \phi(t,\omega,[X(\omega')]_t)$ for all $(t,\omega,\omega')\in [0,T]\times\Omega\times\Omega'$.
\item For the other direction, let $\phi:[0,T]\times\Omega\times C^d\to A$ be such an $\b F\otimes\mathbf C^d$-predictable process. Then since $\psi%:(t,\omega,\omega')\mapsto (t,\omega,[X(\omega')]_t) $ is $Pred(\b F\otimes\b G^X)$-$Pred(\b F\otimes\mathbf C^d)$-measurable, we directly obtain that $Y:[0,T]\times\Omega\times\Omega'\to A$, which satisfies $Y_t(\omega,\omega') = \phi(t,\omega,[X(\omega')]_t) = (\phi \circ \psi)(t,\omega,\omega')$ for all $(t,\omega,\omega')\in [0,T]\times\Omega\times\Omega'$, is $\b F\otimes\b G^X$-predictable. \end{enumerate} \end{proof} The following generalisation of \cite[Proposition 9]{claisse_pseudo-markov_2016} is proven the same way as in \cite{claisse_pseudo-markov_2016}, but since the proof is quite short, we will include it for the reader's convenience. \begin{lemma}\label{lemma canonical space predictable adapted measurable equivalence} Let $(\Omega,\c F)$ be a measurable space and $(C^d,\c C^d_T,\mathbf C^d,\mu^d_W)$ the canonical filtered probability space of continuous $\R^d$-valued paths. Let $E$ be a Polish space equipped with its Borel $\sigma$-algebra $\c B(E)$. Then for a process $X:[0,T]\times\Omega\times C^d\to E$ the following statements are equivalent: \begin{enumerate}[label=(\roman*)] \item $X$ is $\c F\otimes \mathbf C^d$-predictable, \item $X$ is $\c F\otimes \mathbf C^d$-optional, \item $X$ is $\c F\otimes \mathbf C^d$-progressive, \item $X$ is $\c B([0,T])\otimes\c F\otimes\c C^d_T$-$\c B(E)$-measurable and $\c F\otimes \mathbf C^d$-adapted, \item $X$ is $\c B([0,T])\otimes\c F\otimes\c C^d_T$-$\c B(E)$-measurable and satisfies \[ X_s(\omega,w^d) = X_s(\omega,[w^d]_s),\qquad\text{for all }(s,\omega,w^d) \in [0,T]\times\Omega\times C^d. \] \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[wide,label=(\roman*)] \item It is known that (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv). 
\item Regarding (iv) $\Rightarrow$ (v), we first note that $\c F\otimes\c C^d_s = \sigma((\omega,w^d)\mapsto (\omega,[w^d]_s))$ for all $s\in [0,T]$. Thus using that $E$ is Polish together with $X_s$ being $\c F\otimes\c C^d_s$-measurable, we can apply the Doob-Dynkin Lemma to obtain an $\c F\otimes\c C^d_s$-measurable $Y_s:\Omega\times C^d\to E$ such that $X_s(\omega,w^d) = Y_s(\omega,[w^d]_s)$ for all $(\omega,w^d)\in \Omega\times C^d$. Now using that $[\cdot]_s\circ [\cdot]_s = [\cdot]_s$ for all $s\in [0,T]$, we conclude that $X_s(\omega,[w^d]_s) = Y_s(\omega,[w^d]_s) = X_s(\omega,w^d)$ for all $(s,\omega,w^d)\in [0,T]\times\Omega\times C^d$. \item Regarding (v) $\Rightarrow$ (i), we note that $\psi:(s,\omega,w^d)\mapsto (s,\omega,[w^d]_s)$ is $\c F\otimes\mathbf C^d$-predictable. To this end, we recall that, since $\c C^d_T = \sigma(\{w^d_t\in B\} \,|\, t\in [0,T], B\in\c B(\R^d))$, the $\sigma$-algebra $\c B([0,T])\otimes\c F\otimes\c C^d_T$ is generated by all sets of the form $D = A\times B\times \{w^d_t\in C\}$, where $A\in\c B([0,T])$, $B\in\c F$ and $t\in [0,T]$, $C\in\c B(\R^d)$. For such sets $D$, we have $\psi^{-1}(D) = \{(s,\omega,w^d)\in A\times B\times C^d \,|\, ([w^d]_s)_t \in C\} = \{(s,\omega,w^d)\in A\times B\times C^d \,|\, w^d_{s\land t}\in C\} \in Pred(\c F\otimes\mathbf C^d)$ since $(s,\omega,w^d)\mapsto w^d_{s\land t}$ is continuous and $\mathbf C^d$-adapted and hence $\mathbf C^d$-predictable. Therefore $\psi$ is $\c F\otimes\mathbf C^d$-predictable, and we can conclude that $X = X\circ \psi$ is also $\c F\otimes\mathbf C^d$-predictable. \end{enumerate} \end{proof} With these tools in hand, we are now ready to go back to the action sets $\c A$ and $\hat{\c A}$ and prove the following \cref{corollary alpha given alpha bar well defined}.
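Before doing so, the stopping map $[w^d]_s = (w^d_{u\land s})_{u\in [0,T]}$ at the heart of the two preceding results can be made concrete with a small sketch. The following toy code is an illustration only (paths are discretised on a finite grid, and all names are ours, not the paper's): it checks the idempotence $[\cdot]_s\circ[\cdot]_s = [\cdot]_s$ used repeatedly in both proofs, and verifies condition (v) of \cref{lemma canonical space predictable adapted measurable equivalence} for a non-anticipative functional, here the running maximum.

```python
import random

# Toy illustration of the stopping map (all names ours): a path is a
# list of values on the grid t_i = i*T/N, and [w]_t freezes the path at
# the last grid point <= t.
T, N = 1.0, 10
grid = [i * T / N for i in range(N + 1)]
rng = random.Random(0)
w = [0.0]
for _ in range(N):  # a random-walk sample path
    w.append(w[-1] + rng.gauss(0.0, (T / N) ** 0.5))

def stop(w, t):
    """[w]_t = (w_{s ∧ t})_s, with t snapped down to the grid."""
    i = max(j for j, s in enumerate(grid) if s <= t)
    return w[: i + 1] + [w[i]] * (N - i)

def running_max(s, w):
    """A non-anticipative functional: X_s(w) = max_{u <= s} w_u."""
    return max(v for v, u in zip(w, grid) if u <= s)

# Idempotence [.]_t ∘ [.]_t = [.]_t:
idempotent = all(stop(stop(w, t), t) == stop(w, t) for t in grid)
# Condition (v): X_s(w) = X_s([w]_s) for every grid time s:
adapted = all(running_max(s, w) == running_max(s, stop(w, s)) for s in grid)
```

That the running maximum passes the second check reflects the equivalence (iv) $\Leftrightarrow$ (v): an adapted measurable functional on path space depends on the path only through its stopped version.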
The key idea is noting that $\c A_s = L^0(\Omega,\c G\lor\c F^W_s,\P;A)$ and $L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ are isomorphic: by decomposing $\alpha\in\c A_s$ similarly to the preceding \cref{proposition decompose filtrations of predictable processes} and together with the Doob-Dynkin Lemma, we can find for each $\alpha\in\c A_s$ an $\tilde\alpha\in L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ such that $\alpha(\omega) = \tilde\alpha(\omega,W(\omega))$ for all $\omega\in\Omega$, and vice versa. Furthermore, since $\c G$ and $W$ are independent, we also see that $\norm{\alpha}_{\c A_s} = \norm{\tilde\alpha}_{L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)}$, that is, the respective norms coincide. Using this canonical probability space now has the advantage that we can use \cref{lemma canonical space predictable adapted measurable equivalence} to show the predictability of the process, which allows us to prove the following \cref{corollary alpha given alpha bar well defined}. \begin{proposition}\label{corollary alpha given alpha bar well defined} For every $\hat\alpha\in\hat{\c A}$, there exists an $\alpha\in\c A$ that $\b F^B$-identifies to $\hat\alpha$. \end{proposition} \begin{proof} Starting from $\hat\alpha\in\hat{\c A}$, recalling that $\hat\alpha$ is $\b F^B$-predictable, we can by \cite[Proposition 9]{claisse_pseudo-markov_2016} (or \cref{proposition canonical representation of predictable processes} with $\Omega$ being the trivial probability space) find a $\mathbf C^n$-predictable $\hat\Psi:[0,T]\times C^n \to \c A_T$ with $\hat\Psi(s,w)\in \c A_s$ for all $(s,w)\in [0,T]\times C^n$ such that \[ \hat\alpha_s(\hat\omega) = \hat\Psi_s([B(\hat\omega)]_s) = \hat\Psi_s(B(\hat\omega)),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega.
\] Furthermore, as indicated before, we use that $\c A_s$ is isomorphic to $L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ for all $s\in [0,T]$, and we can therefore find a $\mathbf C^n$-predictable map $\tilde\Psi:[0,T]\times C^n\to L^0(\Omega\times C^m,\c G\otimes\c C^m_T,\P\otimes\mu^m_W;A)$ such that (viewed as equality in $\c A_T$) \[ \tilde\Psi_s(B(\hat\omega))(\cdot,W(\cdot)) = \hat\Psi_s(B(\hat\omega))(\cdot),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega. \] Note that since $\hat\Psi_s(w)\in\c A_s$, also $\tilde\Psi_s(w)\in L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ for all $(s,w)\in [0,T]\times C^n$. Next we apply \cref{proposition selecting a measurable process}, which enables us to select a $\c G\otimes\mathbf C^m\otimes\mathbf C^n$-progressive process $\Psi:[0,T]\times\Omega\times C^m\times C^n\to A$ from the equivalence classes of $\tilde\Psi$ in the sense that (as equality in $L^0(\Omega\times C^m,\c G\otimes \c C^m_T,\P\otimes\mu^m_W;A)$) \[ \tilde\Psi_s(w) = \Psi_s(\cdot,\cdot,w),\qquad\text{for all }(s,w)\in [0,T]\times C^n. \] Now $\Psi$ is defined on the canonical space and hence \cref{lemma canonical space predictable adapted measurable equivalence} shows that $\Psi$ is also $\c G\otimes\mathbf C^m\otimes\mathbf C^n$-predictable. Finally, we define the process $\Phi:[0,T]\times\Omega\times\hat\Omega\to A$ by \[ \Phi_s(\omega,\hat\omega) \coloneqq \Psi_s(\omega,W(\omega),B(\hat\omega)) ,\qquad\text{for all }(s,\omega,\hat\omega)\in [0,T]\times\Omega\times\hat\Omega, \] which by \cref{proposition canonical representation of predictable processes,proposition decompose filtrations of predictable processes} is now $(\c G\lor\b F^W)\otimes\b F^B$-predictable, and satisfies by construction (viewed as equality in $\c A_T$) \[ \hat\alpha_s(\hat\omega) = \Phi_s(\cdot,\hat\omega),\qquad\text{for all }(s,\hat\omega)\in[0,T]\times\hat\Omega.
\] Now it only remains to define the process $\alpha:[0,T]\times\hat\Omega\to A$ by \[ \alpha_s(\hat\omega) \coloneqq \Phi_s(\pi(\hat\omega),\hat\omega),\qquad (s,\hat\omega)\in [0,T]\times\hat\Omega, \] and we obtain the desired $\b F^{W,B}\lor\c G$-predictable process $\alpha\in\c A$ which $\b F^B$-identifies to $\hat\alpha$. \end{proof} Finally, we conclude this section with the following \cref{lemma existence N bar N}, which establishes the existence of an $\alpha$ identifying to a given $\hat\alpha$ for a more general class of $\hat\alpha$. This result is particularly important for \cref{section randomised problem}, where we introduce the randomisation framework, as the Poisson process involved will neither be predictable nor adapted to $\c G\lor\b F^{W,B}$ and thus will not fit into the setting of \cref{corollary alpha given alpha bar well defined}. \begin{corollary}\label{lemma existence N bar N} Let $(\hat\Omega,\hat{\c F},\hat\P)$ be an extension of $(\Omega,\c F,\P)$ and $\b F$ be a filtration independent of $\c G,W$. Further let $\hat\alpha:[0,T]\times\hat\Omega\to\c A_T$ be an $\b F$-progressive process such that $\hat\alpha_s(\hat\omega) \in \c A_s$ for all $(s,\hat\omega)\in [0,T]\times\hat\Omega$. Then there exists an $\b F^W\lor\c G\lor\b F$-progressive and $\b F^W\lor\c G\lor\c F_T$-predictable process $\alpha$ which $\b F$-identifies to $\hat\alpha$.
\end{corollary} \begin{proof} Similarly to the construction in the proof of \cref{corollary alpha given alpha bar well defined}, we start by using that $\c A_s$ is isomorphic to $L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ for all $s\in [0,T]$, and we therefore can find a $\b F$-progressive process $\tilde\alpha:[0,T]\times\hat\Omega\to L^0(\Omega\times C^m,\c G\otimes\c C^m_T,\P\otimes\mu^m_W;A)$ such that (viewed as equality in $\c A_T$) \[ \tilde\alpha_s(\hat\omega)(\cdot,W(\cdot)) = \hat\alpha_s(\hat\omega)(\cdot),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega, \] and $\tilde\alpha_s(\hat\omega) \in L^0(\Omega\times C^m,\c G\otimes\c C^m_s,\P\otimes\mu^m_W;A)$ for all $(s,\hat\omega)\in [0,T]\times\hat\Omega$. Now using \cref{proposition selecting a measurable process}, we can select a $\c G\otimes\mathbf C^m\otimes\b F$-progressive process $\Psi:[0,T]\times\Omega\times C^m\times\hat\Omega\to A$ such that, viewed as equality in $\c A_T$, \[ \Psi_s(\cdot,\cdot,\hat\omega) = \tilde\alpha_s(\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [0,T]\times\hat\Omega. \] Since $\Psi$ is defined on the canonical space, \cref{lemma canonical space predictable adapted measurable equivalence} shows that $\Psi$ is also $\c G\otimes\mathbf C^m\otimes\c F_T$-predictable. Finally we define the process $\Phi:[0,T]\times\Omega\times\hat\Omega\to A$ by \[ \Phi_s(\omega,\hat\omega) \coloneqq \Psi_s(\omega,W(\omega),\hat\omega) ,\qquad\text{for all }(s,\omega,\hat\omega)\in [0,T]\times\Omega\times\hat\Omega, \] which is then $(\c G\lor\b F^W)\otimes\b F$-progressive and by \cref{proposition canonical representation of predictable processes,proposition decompose filtrations of predictable processes} also $(\c G\lor\b F^W)\otimes\c F_T$-predictable. 
Now we only need to define the process $\alpha:[0,T]\times\hat\Omega\to A$ by \[ \alpha_s(\hat\omega) \coloneqq \Phi_s(\pi(\hat\omega),\hat\omega),\qquad (s,\hat\omega)\in [0,T]\times\hat\Omega, \] which is then the desired $\b F^W\lor\c G\lor\b F$-progressive and $\b F^W\lor\c G\lor\c F_T$-predictable process which $\b F$-identifies to $\hat\alpha$. \end{proof} \section{Randomised problem}\label{section randomised problem} In this section, we derive a randomised formulation for mean-field control problems with common noise. The main idea of the randomisation method introduced in \cite{bouchard_stochastic_2009} is to replace the control process by an independent Poisson point process taking values in the action space, and then to control its intensity instead. While in the non-mean-field case the set $A$ would be the natural choice for the action space, in which the aforementioned Poisson process should take its values, in the mean-field case this choice runs into problems concerning the additional mean-field dependence of the state dynamics when changing the intensity of the Poisson point process. In fact, in \cite{bayraktar_randomized_2018} (without common noise), the authors instead introduced an additional space $\c A_{\text{step}}$ of all $\b F^W$-progressive step processes taking values in $A$, which they then took as action space for the randomising Poisson point process. The motivation behind this approach is that the conditional law process $(\P^{\c F^{B,\P}_s}_{X_s})_s$ is $\b F^{B,\P}$-progressive and as such well-behaved when tilting the Poisson point process $\mu$ w.r.t.\@ $\b F^{B,\mu}$-predictable intensities. In this paper, we take a similar approach.
On the one hand, we simplify the idea of \cite{bayraktar_randomized_2018} by considering the action spaces $\c A_s = L^0(\Omega,\c G\lor\c F^W_s,\P;A)$ defined in \cref{section problem l2 formulation}, which only represent the (conditional) distribution of $\alpha$ at time $s$, instead of considering a whole piece-wise constant process in $\c A_{\text{step}}$ as action at a given time point.\footnote{The idea behind $\c A_{\text{step}}$ could be interesting when further extending this setting to include path-dependence. However for now, while it seems to ease the construction of the randomising Poisson point process, it later complicates the proofs and the resulting BSDE formulation. Further, it seems to be hard to get rid of the dependence on the specific sequences used to construct $\c A_{\text{step}}$.} On the other hand, we extend the setting to additionally consider problems with common noise, which is aided by viewing $\hat{\c A}$ as the set of admissible controls instead of $\c A$. Our first step will be to define an $\c A_s$-valued Poisson point process. Defining a Poisson point process with marks taking values in different spaces $\c A_s$ depending on the current time $s\in [0,T]$ is difficult, but we are going to utilise that $\c A_0\subseteq\c A_s\subseteq\c A_T$ for all $s\in [0,T]$. \begin{assumption}\label{assumptions lambda family} We assume that $(\lambda_s)_{s\in [0,T]}$ is a family of finite measures on $\c A_T$ such that \begin{assumptionenum}[label=(C\arabic*)] \item the topological support of each $\lambda_s$ is given by $\c A_s$ for all $s\in [0,T]$, \item $\lambda_s\ll \lambda_r$ for all $0\leq s\leq r\leq T$, \item $\sup_{s\in [0,T]} \lambda_s(\c A_s) < \infty$. \end{assumptionenum} \end{assumption} Since it may not be obvious why such a family exists, we will provide a possible construction in \cref{remark construction lambda family}.
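The compatibility of (C1)--(C3) can also be sanity-checked with a toy numeric sketch of the mixture $\lambda_r = \sum_{s_n\leq r} 2^{-n}\kappa_{s_n}$ used in \cref{remark construction lambda family}. Purely for illustration we replace each $\kappa_{s_n}$ by a unit point mass and $[0,T]\cap\Q$ by a finite grid; these simplifications trivialise the support condition (C1), so only the monotonicity (C2) and the uniform mass bound (C3) are checked.

```python
from fractions import Fraction

# Toy sketch of the mixture lambda_r = sum_{s_n <= r} 2^{-n} kappa_{s_n}:
# each kappa_{s_n} is replaced by a unit point mass at the atom n (our
# simplification -- the actual kappa_{s_n} have full support on A_{s_n}).
T = Fraction(1)
s = [Fraction(k, 8) for k in range(9)]  # finite stand-in for [0, T] ∩ Q

def lam(r):
    """lambda_r, represented as a dict atom -> mass."""
    return {n: Fraction(1, 2 ** (n + 1)) for n, s_n in enumerate(s) if s_n <= r}

def abs_cont(mu, nu):
    """mu << nu for atomic measures: every atom of mu is an atom of nu."""
    return set(mu) <= set(nu)

times = [Fraction(k, 4) for k in range(5)]
monotone = all(abs_cont(lam(a), lam(b)) for a in times for b in times if a <= b)  # (C2)
bounded = max(sum(lam(r).values()) for r in times) < 1  # (C3): sum_n 2^{-(n+1)} < 1
```

For atomic measures whose atom masses do not change with $r$, absolute continuity reduces to inclusion of the atom sets, which is exactly how the mixture over $\{s_n\leq r\}$ grows in $r$.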
Note that since the topological support of $\lambda_s$ is given by $\c A_s$, we will view $\lambda_s$, depending on the context, as a measure on $\c A_T$ or on $\c A_s$ respectively. \begin{lemma}\label{remark construction lambda family} Such a family $(\lambda_s)_{s\in [0,T]}$ satisfying \cref{assumptions lambda family} exists. \end{lemma} \begin{proof} Since each $\c A_s$ is Polish,\footnote{Recall that every uncountable Polish space is Borel-isomorphic to $[0,1]$, which we can equip with the Lebesgue measure.} we can construct a family of probability measures $(\kappa_s)_{s\in [0,T]\cap\Q}$ on $\c A_T$ such that the topological support of each $\kappa_s$ is given by $\c A_s$ for all $s\in [0,T]\cap\Q$; being probability measures, they in particular satisfy $\sup_{s\in [0,T]\cap\Q} \kappa_s(\c A_s) < \infty$. Note that this is a countable family, and let $(s_n)_n$ be an enumeration of $[0,T]\cap\Q$. Now we define $\lambda_r \coloneqq \sum_{s_n\leq r} 2^{-n} \kappa_{s_n}$ for all $r\in [0,T]$. We see that by construction $\lambda_s\ll\lambda_r$ for all $0\leq s\leq r\leq T$. Further, we recall that the filtration $\c G\lor\b F^{W,\P}$ is left-continuous, see \cite[Chapter 2, Problem 7.6]{karatzas_brownian_1998}, and thus we can approximate every indicator function $\1_A \in\c A_r$, $A\in \c G\lor\c F^{W,\P}_r$, by a sequence of indicator functions $\1_{A_n}\in\c A_{q_n}$, $A_n\in\c G\lor\c F^{W,\P}_{q_n}$, with $q_n\leq r$, $q_n\in\Q$. Since such indicator functions generate $\c A_r = L^0(\Omega,\c G\lor\c F^W_r,\P;A)$, this shows that $\bigcup_{q\leq r, q\in\Q} \c A_q\subseteq\c A_r$ is dense in $\c A_r$, and thus the topological support of each $\lambda_r$ is indeed given by $\c A_r$.
\end{proof} To construct a randomised setting, let us suppose we are given such a family $(\lambda_s)_{s\in [0,T]}$, and that $(\hat\Omega,\hat{\c F},\hat\P)$ is (if needed) an extension of the original space $(\Omega,\c F,\P)$ such that there exists a Poisson random measure $\mu$ on $(0,T]\times\c A_T$, independent of $\c G,W,B$, with intensity $\frac{d\lambda_s}{d\lambda_T}(\alpha) \lambda_T(d\alpha)ds = \lambda_s(d\alpha) ds$. Note that with slight abuse of notation, we again denote the canonical extensions of the $\sigma$-algebra $\c G$, the (non-augmented) filtrations $\b F^W,\b F^{B}$, the random variable $\xi$ and the processes $W,B$ with the same symbols as their original versions. Further, we denote the canonical projection from $(\hat\Omega,\hat{\c F},\hat\P)$ to $(\Omega,\c F,\P)$ by $\pi:\hat\Omega\to\Omega$, and recall that by definition it is probability preserving, that is $\hat\P(\pi^{-1}(A)) = \P(A)$, for all $A\in\c F$. Given an initial time $t\in [0,T]$, we define the processes $B^t,\mu^t$ via \[ B^t\coloneqq (B_{\cdot\lor t} - B_t),\quad \mu^t\coloneqq \mu(\cdot\cap ((t,T]\times\c A_T)). \] Note that by construction $B^t,\mu^t$ are $\hat\P$-independent of $\c F^{B,\mu,\hat\P}_t$. Next, we construct the piece-wise constant $\b F^{\mu^t}$-progressive process $\hat I^{t,\alpha_t}:[t,T]\times\hat\Omega\to \c A_T$ as follows \begin{align} \hat I^{t,\alpha_t}_s \coloneqq \alpha_t + \int_{(t,s]}\int_{\c A_r} (\alpha - \hat{I}^{t,\alpha_t}_{r-}) \mu(dr,d\alpha),\qquad s\in [t,T], \label{eq randomised poisson point process} \end{align} where $\alpha_t\in\c A_t\subseteq\c A_T$ is a fixed chosen initial action. Note that we will later show that neither the particular choice of $\alpha_t\in\c A_t$ nor the extension $(\hat\Omega,\hat{\c F},\hat\P)$ of $(\Omega,\c F,\P)$ has any influence on the resulting value function.
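The defining equation \eqref{eq randomised poisson point process} simply says that $\hat I^{t,\alpha_t}$ starts in $\alpha_t$ and jumps to the mark $\alpha$ at every atom $(r,\alpha)$ of $\mu$; since $\sup_{s}\lambda_s(\c A_s)<\infty$, there are a.s. only finitely many jumps. The following minimal simulation sketch illustrates this mechanism under simplifying assumptions of ours (a time-homogeneous jump rate and a finite stand-in for the mark space $\c A_T$, neither of which is part of the paper's setting):

```python
import random

# Minimal simulation of the piecewise-constant randomised process: at
# each atom (r, a) of the Poisson random measure, the process jumps to
# the new mark a.  The constant rate `lam` and the finite mark list are
# our own simplifying assumptions, not the paper's setting.
rng = random.Random(1)

def simulate_I_hat(t, T, alpha_t, marks, lam):
    """Return jump times and post-jump values of the toy process on [t, T]."""
    times, values = [t], [alpha_t]
    r = t
    while True:
        r += rng.expovariate(lam)  # next atom of the Poisson random measure
        if r > T:
            break
        times.append(r)
        values.append(rng.choice(marks))  # jump to the new mark
    return times, values

times, values = simulate_I_hat(t=0.0, T=1.0, alpha_t="a0",
                               marks=["a1", "a2", "a3"], lam=5.0)
```

In the paper's setting, a jump occurring at time $s$ would instead draw its mark from the normalised measure $\lambda_s/\lambda_s(\c A_s)$, but the piecewise-constant structure of the resulting path is the same.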
Furthermore, we observe in the following \cref{corollary I given bar I is well defined} that there exists an $\b F^{W,\mu^t}\lor\c G$-progressive process $I^{t,\alpha_t}$ which $\b F^{\hat I^{t,\alpha_t}}$-identifies to $\hat I^{t,\alpha_t}$, and that this choice is even unique in joint law, independent of the chosen extension $(\hat\Omega,\hat{\c F},\hat\P)$ of $(\Omega,\c F,\P)$. In the following we will use the notation $\pi|_{\c G}:(\hat\Omega,\c G)\to (\Omega,\c G)$ to denote the $\c G$-measurable projection from $\hat\Omega$ to $\Omega$. \begin{corollary}\label{corollary I given bar I is well defined} Given $\hat I^{t,\alpha_t}$, we can choose a càdlàg, $\b F^W\lor\c G\lor\c F^{\hat I^{t,\alpha_t}}_T$-predictable and $\b F^{W,\hat I^{t,\alpha_t}}\lor\c G$-progressive process $I^{t,\alpha_t}$ which $\b F^{\hat I^{t,\alpha_t}}$-identifies to $\hat I^{t,\alpha_t}$, unique up to indistinguishability. In particular, $I^{t,\alpha_t}$ is also $\b F^W\lor\c G\lor\c F^{\mu^t}_T$-predictable and $\b F^{W,\mu^t}\lor\c G$-progressive. Further, this choice is even unique in the joint law $(\pi|_{\c G},\xi,W,B,\mu,\hat I^{t,\alpha_t},I^{t,\alpha_t})$, by which we mean that if $(\hat\Omega',\hat{\c F}',\hat\P')$ is another extension of $(\Omega,\c F,\P)$ on which suitable $\mu,\hat I^{t,\alpha_t},I^{t,\alpha_t}$ are defined, then also \[ \c L^{\hat\P}(\pi|_{\c G},\xi,W,B,\mu,\hat I^{t,\alpha_t},I^{t,\alpha_t}) = \c L^{\hat\P'}(\pi|_{\c G},\xi,W,B,\mu,\hat I^{t,\alpha_t},I^{t,\alpha_t}). \] \end{corollary} \begin{proof} From \cref{lemma existence N bar N}\footnote{To match the setting described in \cref{section problem l2 formulation}, we can define $\c G'\coloneqq\c G\lor\c F^W_t$ and consider $\hat I'\coloneqq \hat I^{t,\alpha_t}_{\cdot - t}:[0,T-t]\times\hat\Omega\to\c A_T$ and $W' \coloneqq W^t_{\cdot - t}$.
Note that, for example, $\hat I'$ is then $\c G'\lor\b F^{W'}$-progressive since $\c G'\lor\b F^{W'} = (\c G\lor\c F^W_{t+s})_{s\in [0,T-t]}$.} we obtain an $\b F^W\lor\c G\lor\c F^{\mu^t}_T$-predictable and $\b F^{W,\mu^t}\lor\c G$-progressive process $I:[t,T]\times\hat\Omega\to A$ that $\b F^{\hat I^{t,\alpha_t}}$-identifies to $\hat I^{t,\alpha_t}$. Next, we define \begin{align} I^{t,\alpha_t}_s\coloneqq I_t + \sum_{r\in(t,s],\hat I^{t,\alpha_t}_{r-}\not= \hat I^{t,\alpha_t}_r} (I_r - I_{r-}),\qquad s\in [t,T], \label{eq corollary 4.2 construction I t alpha_t} \end{align} which by construction is piecewise constant and in particular càdlàg. Furthermore, just like $I$, the process $I^{t,\alpha_t}$ is also $\b F^W\lor\c G\lor\c F^{\mu^t}_T$-predictable and $\b F^{W,\mu^t}\lor\c G$-progressive. Finally, since $I$ $\b F^{\hat I^{t,\alpha_t}}$-identifies to $\hat I^{t,\alpha_t}$, there exists a $(\c G\lor\b F^W)\otimes\b F^{\hat I^{t,\alpha_t}}$-progressive process $\Psi:[t,T]\times\Omega\times\hat\Omega\to A$ such that \begin{align} I_s(\hat\omega) = \Psi_s(\pi(\hat\omega),\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [t,T]\times\hat\Omega, \label{eq corollary 4.2 I Psi} \end{align} and, viewed as equality in $\c A_T$, \begin{align} \hat I^{t,\alpha_t}_s(\hat\omega) = \Psi_s(\cdot,\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [t,T]\times\hat\Omega.
\label{eq corollary 4.2 hat I Psi} \end{align} Now defining $\Phi:[t,T]\times\Omega\times\hat\Omega\to A$ via \[ \Phi_s\coloneqq \Psi_t + \sum_{r\in(t,s],\hat I^{t,\alpha_t}_{r-}\not= \hat I^{t,\alpha_t}_r} (\Psi_r - \Psi_{r-}) ,\qquad s\in [t,T], \] we obtain a $(\c G\lor\b F^W)\otimes\b F^{\hat I^{t,\alpha_t}}$-progressive process, which due to \eqref{eq randomised poisson point process}, \eqref{eq corollary 4.2 construction I t alpha_t}, \eqref{eq corollary 4.2 hat I Psi} and \eqref{eq corollary 4.2 I Psi} now also satisfies \[ I^{t,\alpha_t}_s(\hat\omega) = \Phi_s(\pi(\hat\omega),\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [t,T]\times\hat\Omega, \] and, viewed as equality in $\c A_T$, \[ \hat I^{t,\alpha_t}_s(\hat\omega) = \Phi_s(\cdot,\hat\omega),\qquad\text{for all }(s,\hat\omega)\in [t,T]\times\hat\Omega. \] Hence, $I^{t,\alpha_t}$ also $\b F^{\hat I^{t,\alpha_t}}$-identifies to $\hat I^{t,\alpha_t}$. Finally, the uniqueness result follows from \cref{lemma existence N bar N,lemma joint law N bar N map}, noting that $(\hat\Omega,\hat{\c F},\hat\P)$ and $(\hat\Omega',\hat{\c F}',\hat\P')$ are both extensions of $(\Omega,\c G\lor\c F^{W,B}_T,\P)$. \end{proof} Having a well-defined corresponding control process $I^{t,\alpha_t}$, we can now define the $\b F^{W,B^t,\mu^t,\hat\P}\lor\c G$-progressive state process $X^{t,\xi,\alpha_t}$ as the unique solution to the following uncontrolled dynamics \begin{align} dX^{t,\xi,\alpha_t}_s &= b(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds + \sigma(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) dW_s\\ &\qquad+ \sigma^0(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) dB_s,\qquad X^{t,\xi,\alpha_t}_t = \xi.
\label{eq sde randomised problem} \end{align} Again $(\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s})_{s\in [t,T]}$ shall denote the $\b F^{B^t,\mu^t,\hat\P}$-optional and $\hat\P$-a.s.\@ continuous version of the conditional law $(\c L(X^{t,\xi,\alpha_t}_s|\c F^{B,\mu,\hat\P}_s))_{s\in [t,T]}$, that is $\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s} = \c L(X^{t,\xi,\alpha_t}_s|\c F^{B,\mu,\hat\P}_s)$, $\hat\P$-a.s., for all $s\in [t,T]$. We recall that the existence of such a version is ensured by \cite[Lemma A.1]{djete_mckeanvlasov_2022-1}. Further, \cref{assumptions sde coefficients} ensures by standard arguments the existence and uniqueness of $X^{t,\xi,\alpha_t}$. As admissible controls for our randomised problem, we consider the set $\c V$ of all $Pred(\b F^{B,\mu})\otimes \c B(\c A_T)$-measurable processes which are strictly bounded away from $0$ and $\infty$, that is $0 < \inf_{[0,T]\times\hat\Omega\times \c A_T} \nu \leq \sup_{[0,T]\times\hat\Omega\times\c A_T} \nu < \infty$. Further, we will also introduce the subset $\c V_t\subseteq\c V$ of $Pred(\b F^{B^t,\mu^t})\otimes \c B(\c A_T)$-measurable processes. Note that by Girsanov's Theorem, see \cite[Proposition 14.4.I]{daley_introduction_2008}, we can define for each admissible control $\nu\in \c V$ a probability measure using the Doléans-Dade exponential $L^\nu$ as follows \begin{align}\label{eq randomised control girsanov formula} \frac{d\hat\P^\nu}{d\hat\P}\Big|_{\c F^{B,\mu,\hat\P}_s} \coloneqq L^\nu_s \coloneqq \exp\Big(\int_{(0,s]} \int_{\c A_r} \log \nu_r(\alpha) \mu(dr,d\alpha) - \int_0^s \int_{\c A_r} (\nu_r(\alpha) - 1) \lambda_r(d\alpha)dr \Big),\qquad s\in [0,T], \end{align} such that under $\hat\P^\nu$ the Poisson random measure $\mu$ has the intensity $\nu_s(\alpha)\lambda_s(d\alpha)ds$ (recall that $\mu((0,s],d\alpha)$ is supported on $\c A_s\subseteq\c A_T$).
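For orientation, here is a quick sanity check of the normalisation in \eqref{eq randomised control girsanov formula} in the simplest special case (not part of the argument): take $\nu\equiv c$ for a constant $c>0$, write $\Lambda_s\coloneqq\int_0^s \lambda_r(\c A_r) dr$ for the total intensity (assuming $\Lambda_s<\infty$), and note that $N_s\coloneqq\mu((0,s]\times\c A_T)$ is Poisson distributed with mean $\Lambda_s$, so that $L^\nu_s = c^{N_s} e^{-(c-1)\Lambda_s}$ and
\begin{align*}
\E^{\hat\P}[L^\nu_s]
= e^{-(c-1)\Lambda_s}\,\E^{\hat\P}\big[c^{N_s}\big]
= e^{-(c-1)\Lambda_s}\sum_{k=0}^{\infty} e^{-\Lambda_s}\frac{(c\Lambda_s)^k}{k!}
= e^{-(c-1)\Lambda_s}\,e^{(c-1)\Lambda_s}
= 1,
\end{align*}
consistent with $L^\nu$ being a martingale started at $L^\nu_0 = 1$.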
By the following \cref{lemma d P nu d P is square integrable}, which can be proved in the same way as \cite[Lemma 2.4]{kharroubi_feynmankac_2015}, we see that $\hat\P^\nu$ is indeed well-defined. \begin{lemma}\label{lemma d P nu d P is square integrable} For every $\nu\in\c V$ the process $L^\nu$ defined in \eqref{eq randomised control girsanov formula} is a uniformly integrable martingale and $L^\nu_T \in L^2(\hat\Omega,\c F^{B,\mu}_T,\hat\P;\R)$. \end{lemma} We note that by standard arguments, since $\hat\P_\xi = \hat\P^\nu_\xi$ for all $\nu\in\c V$, from \cref{assumptions sde coefficients} we obtain the following uniform estimate \begin{align}\label{eq randomised state dynamics basic estimate} \sup_{\alpha_t\in\c A_t} \sup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[\sup_{s\in [t,T]} |X^{t,\xi,\alpha_t}_s|^2 \Big] < \infty. \end{align} Our goal is now to maximise the following randomised reward functional \begin{align} J^{\c R}(t,\xi,\alpha_t,\nu) \coloneqq \E^{\hat\P^\nu}\Big[ g(\hat\P_{X^{t,\xi,\alpha_t}_T}^{\c F^{B,\mu,\hat\P}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P_{X^{t,\xi,\alpha_t}_s}^{\c F^{B,\mu,\hat\P}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds \Big], \end{align} where $\E^{\hat\P^\nu}$ denotes the expectation under the probability measure $\hat\P^\nu$ corresponding to $\nu\in\c V$. Finally, we also define the randomised value function via \[ V^{\c R}(t,\xi,\alpha_t) \coloneqq \sup_{\nu\in\c V} J^{\c R}(t,\xi,\alpha_t,\nu), \] which is finite, and hence well-defined, by \cref{assumptions reward functional} and \eqref{eq randomised state dynamics basic estimate}. We note in the following \cref{proposition equivalence v v_t randomised value function} that this is equivalent to optimising only over the controls in $\c V_t$. \begin{proposition}\label{proposition equivalence v v_t randomised value function} Let $t\in [0,T]$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$.
Defining the randomised problem over the restricted control set $\c V_t\subseteq\c V$ leads to the same value function, that is \[ V^{\c R}(t,\xi,\alpha_t) = \sup_{\nu\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\nu), \] for all $\alpha_t\in\c A_t\subseteq\c A_T$. \end{proposition} \begin{proof} We first note that $\sup_{\nu\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\nu) \leq \sup_{\nu\in\c V} J^{\c R}(t,\xi,\alpha_t,\nu) \eqqcolon V^{\c R}(t,\xi,\alpha_t)$ follows directly from $\c V_t\subseteq\c V$. For the other direction, let us fix $\nu\in\c V$. Since $\nu$ is $Pred(\b F^{B,\mu})\otimes\c B(\c A_T)$-measurable, by \cref{proposition decompose filtrations of predictable processes},\footnote{Note that $\b F^{B,\mu} = \b F^{B,\mu}_{t\land \cdot}\lor \b F^{B^t,\mu^t} \subseteq \c F^{B,\mu}_t\lor\b F^{B^t,\mu^t}$.} we can decompose $\nu$ into an $\c F^{B,\mu}_t\otimes Pred(\b F^{B^t,\mu^t})\otimes\c B(\c A_T)$-measurable process $\Upsilon:[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T\to(0,\infty)$ with $0<\inf_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon\leq \sup_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon < \infty$ such that $\nu(\hat\omega) = \Upsilon(\hat\omega,\hat\omega)$, for all $\hat\omega\in\hat\Omega$. Note that in particular $\Upsilon(\hat\omega,\cdot)\in\c V_t$ for all $\hat\omega\in\hat\Omega$.
Now, noting that $X^{t,\xi,\alpha_t}$ and $I^{t,\alpha_t}$ are $\hat\P$-independent of $\c F^{B,\mu,\hat\P}_t$, and using the freezing lemma, see e.g.\@ \cite[Lemma 4.1]{baldi_stochastic_2017}, we see that \begin{align} J^{\c R}(t,\xi,\alpha_t,\nu) &= \E^{\hat\P}\Big[L^{\nu}_T \Big(g(\hat\P_{X^{t,\xi,\alpha_t}_T}^{\c F^{B,\mu,\hat\P}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P_{X^{t,\xi,\alpha_t}_s}^{\c F^{B,\mu,\hat\P}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds \Big)\Big]\\ &= \E^{\hat\P}\Big[L^\nu_t \E^{\hat\P}\Big[\frac{L^\nu_T}{L^\nu_t} \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big| \c F^{B,\mu,\hat\P}_t\Big]\Big]\\ &= \int_{\hat\Omega} L^{\nu(\hat\omega)}_t \E^{\hat\P}\Big[\frac{L^{\Upsilon(\hat\omega,\cdot)}_T}{L^{\Upsilon(\hat\omega,\cdot)}_t} \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big] \hat\P(d\hat\omega)\\ &= \int_{\hat\Omega} L^{\nu(\hat\omega)}_t \E^{\hat\P}\Big[L^{\Upsilon(\hat\omega,\cdot)}_T \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big] \hat\P(d\hat\omega)\\ &= \int_{\hat\Omega} L^{\nu(\hat\omega)}_t J^{\c R}(t,\xi,\alpha_t,\Upsilon(\hat\omega,\cdot)) \hat\P(d\hat\omega)\\ &\leq \int_{\hat\Omega} L^{\nu(\hat\omega)}_t \sup_{\upsilon\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\upsilon) \ \hat\P(d\hat\omega) = \sup_{\upsilon\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\upsilon), \end{align} using that for every $\hat\omega\in\hat\Omega$ we have $\Upsilon(\hat\omega,\cdot)\in\c V_t\subseteq\c V$ and thus $\frac{L^{\Upsilon(\hat\omega,\cdot)}_T}{L^{\Upsilon(\hat\omega,\cdot)}_t}$ is $\hat\P$-independent of $\c
F^{B,\mu,\hat\P}_t$, together with $\E^{\hat\P}[L^{\upsilon}_t] = 1$ and $L^\upsilon_t$ being $\c F^{B,\mu,\hat\P}_t$-measurable, for all $\upsilon\in\c V$. \end{proof} \begin{remark}\label{remark V > 0 vs V geq 0} We could also relax the requirement $\inf_{[0,T]\times\hat\Omega\times \c A_T} \nu > 0$ to $\nu\geq 0$. This leads to the same value function $V^{\c R}$, as also noted in \cite[Remark 3.1]{bandini_backward_2018}. However, to simplify our arguments, especially later in the proof of \cref{theorem equivalence original and randomised value function}, we will consider only $\inf_{[0,T]\times\hat\Omega\times \c A_T}\nu > 0$ as this condition ensures that $\hat\P\sim\hat\P^\nu$. \end{remark} \begin{remark}\label{remark conditional distribution unchanged under measure change} Given $\nu\in\c V$, since $\nu$ is $\b F^{B,\mu}$-predictable and $\hat\P^\nu\sim\hat\P$, the process $\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s}$ is also $\b F^{B^t,\mu^t,\hat\P^\nu}$-optional and for all $s\in [t,T]$ satisfies $\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s} = \c L^{\hat\P^\nu}(X^{t,\xi,\alpha_t}_s | \c F^{B,\mu,\hat\P^\nu}_s)$, $\hat\P^\nu$-a.s., which hence justifies writing $\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s} = \hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s}$. \end{remark} \begin{lemma}\label{lemma J R continuous} The reward functional $J^{\c R}(t,\cdot,\cdot,\cdot):L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)\times\c A_t\times\c V\to \R$ is continuous for each $t\in [0,T]$, where we equip $\c V$ with the $L^\infty$-norm. \end{lemma} \begin{proof} Let $(\xi^n,\alpha_t^n,\nu^n)_n\subseteq L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)\times\c A_t\times\c V$ with $(\xi^n,\alpha_t^n,\nu^n) \to (\xi^\infty,\alpha_t^\infty,\nu^\infty) \in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)\times\c A_t\times\c V$ in $L^2(\P)\times L^0(\hat\P)\times L^\infty(ds\times\hat\P)$. Further let us define $\bar\nu \coloneqq \sup_{k\in\N\cup\{\infty\}} \nu^k$. 
Note that since $\nu^n\to\nu^\infty$ in $L^\infty(ds\times\hat\P)$, by construction $\bar\nu\in\c V$ and thus $\hat\P^{\bar\nu} \sim \hat\P$. By \cref{lemma alpha bar alpha uniqueness}, and using that $\rho < 1$ we see that also $d_{\c A}^{\hat\P^{\bar\nu}}(I^{t,\alpha_t^n},I^{t,\alpha_t^\infty}) = d_{\hat{\c A}}^{\hat\P^{\bar\nu}}(\hat I^{t,\alpha_t^n},\hat I^{t,\alpha_t^\infty}) \to 0$. Furthermore, since $\hat\P|_{\c G\lor\c F^W_t} = \hat\P^{\bar\nu}|_{\c G\lor\c F^W_t}$, we also have $\xi^n\to\xi^\infty$ in $L^2(\hat\P^{\bar\nu})$. Since $X^{t,\xi^n,\alpha_t^n}$ and $X^{t,\xi^\infty,\alpha_t^\infty}$ are $\c G\lor\b F^{W,B,\mu,\hat\P^{\bar\nu}}$-progressive, and since $\c G\lor\b F^W$ is independent of $\b F^{B,\mu}$ under $\hat\P^{\bar\nu}$, we see that, for all $s\in [t,T]$, $\hat\P^{\bar\nu}$-a.s., \[ \c W_2(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^n,\alpha_t^n}_s},\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^\infty,\alpha_t^\infty}_s})^2 \leq \E^{\hat\P^{\bar\nu}}[|X^{t,\xi^n,\alpha_t^n}_s - X^{t,\xi^\infty,\alpha_t^\infty}_s|^2 | \c F^{B,\mu,\hat\P^{\bar\nu}}_s] = \E^{\hat\P^{\bar\nu}}[|X^{t,\xi^n,\alpha_t^n}_s - X^{t,\xi^\infty,\alpha_t^\infty}_s|^2 | \c F^{B,\mu,\hat\P^{\bar\nu}}_T], \] which implies due to the $\hat\P^{\bar\nu}$-a.s.\@ continuity of $(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^n,\alpha_t^n}_s})_s$ and $(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^\infty,\alpha_t^\infty}_s})_s$ that \[ \sup_{s\in [t,T]} \c W_2(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^n,\alpha_t^n}_s},\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^\infty,\alpha_t^\infty}_s})^2 \leq \E^{\hat\P^{\bar\nu}}\Big[ \sup_{s\in [t,T]} |X^{t,\xi^n,\alpha_t^n}_s - X^{t,\xi^\infty,\alpha_t^\infty}_s|^2 \,\Big|\,\c F^{B,\mu,\hat\P^{\bar\nu}}_T\Big]. 
\] Thus from \cref{assumptions sde coefficients}, standard arguments using the BDG-inequality and Gronwall's Lemma show that \begin{align} \E^{\hat\P^{\bar\nu}}\Big[ \sup_{s\in [t,T]} \Big(|X^{t,\xi^n,\alpha_t^n}_s - X^{t,\xi^\infty,\alpha_t^\infty}_s|^2 + \c W_2(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^n,\alpha_t^n}_s},\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^\infty,\alpha_t^\infty}_s})^2\Big)\Big] \to 0. \label{eq lemma J R continuous convergence of X} \end{align} Finally, we see that for all $n\in\N$, \begin{align} &|J^{\c R}(t,\xi^n,\alpha_t^n,\nu^n) - J^{\c R}(t,\xi^\infty,\alpha_t^\infty,\nu^\infty)|\\ &\leq \E^{\hat\P^{\bar\nu}}\Big[\Big| \frac{L^{\nu^n}_T}{L^{\bar\nu}_T} \Big(g(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_T}_{X^{t,\xi^n,\alpha_t^n}_T},X^{t,\xi^n,\alpha_t^n}_T) + \int_t^T f(s,\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^n,\alpha_t^n}_s},X^{t,\xi^n,\alpha_t^n}_s,I^{t,\alpha_t^n}_s) ds \Big)\\ &\qquad - \frac{L^{\nu^\infty}_T}{L^{\bar\nu}_T} \Big(g(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_T}_{X^{t,\xi^\infty,\alpha_t^\infty}_T},X^{t,\xi^\infty,\alpha_t^\infty}_T) + \int_t^T f(s,\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\xi^\infty,\alpha_t^\infty}_s},X^{t,\xi^\infty,\alpha_t^\infty}_s,I^{t,\alpha_t^\infty}_s) ds \Big) \Big| \Big]. \end{align} Thus, together with, for all $k\in\N\cup\{\infty\}$, \begin{align} 0\leq \frac{L^{\nu^k}_T}{L^{\bar\nu}_T} &= \exp\Big(\int_{[0,T]}\int_{\c A_s} \log \frac{\nu^k_s(\alpha)}{\bar\nu_s(\alpha)} \mu(ds,d\alpha) - \int_0^T \int_{\c A_s} (\nu^k_s(\alpha) - \bar\nu_s(\alpha)) \lambda_s(d\alpha)ds \Big)\\ &\leq \exp\Big(\int_0^T \int_{\c A_s} \lambda_s(d\alpha)ds\Big)^{\norm{\bar\nu}_\infty} < \infty, \end{align} and \eqref{eq lemma J R continuous convergence of X}, \eqref{eq randomised state dynamics basic estimate} and \cref{assumptions reward functional}, we obtain the desired result by dominated convergence. 
\end{proof} Our first main result is that this randomised problem is in some sense equivalent to the original problem. Note that this then also allows us to transfer the properties of $V$ from \cref{proposition value function law invariant and independent of probabilistic setting} to $V^{\c R}$. We give the proof in \cref{section proof of equivalence randomised non randomised formulation}. \begin{theorem}\label{theorem equivalence original and randomised value function} The randomised problem is equivalent to the original MFC problem in the sense that for all $t\in [0,T]$ and all $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and $\alpha_t\in\c A_t\subseteq\c A_T$, \[ V(t,\xi) = V^{\c R}(t,\xi,\alpha_t). \] In particular, $V^{\c R}$ depends neither on the initial action $\alpha_t\in\c A_t$, nor on the family $(\lambda_s)_{s\in [0,T]}$, nor on the extension $(\hat\Omega,\hat{\c F},\hat\P)$ of $(\Omega,\c F,\P)$. \end{theorem} \begin{remark} By \cref{proposition value function law invariant and independent of probabilistic setting}, $V$ depends on $\xi$ only through its law $\P_\xi$ and is independent of the specific choice of $(\Omega,\c F,\P)$ and $\c G$. The equivalence result above thus implies that $V^{\c R}$ also depends only on $\P_\xi$ and remains the same for all spaces $(\hat\Omega,\hat{\c F},\hat\P)$ for which suitable $\c G,W,B,\mu$ exist, since every space is a trivial extension of itself. \end{remark} \section{Randomised dynamic programming}\label{section bsde characterisation and randomised dpp} In this section, we establish a randomised dynamic programming principle to characterise the value function of our mean-field control problem. Following the randomisation approach, we will first derive a BSDE representation for the (randomised) value function through penalisation techniques.
This representation, combined with the equivalence between the original and randomised formulations established in \cref{theorem equivalence original and randomised value function}, subsequently enables us to derive the (randomised) dynamic programming principle. \subsection{BSDE representation}\label{section bsde characterisation} We start by deriving a BSDE representation for the value function $V$ using its randomised formulation from \cref{theorem equivalence original and randomised value function}. For this, we first define the following spaces \begin{itemize} \item $\c S^2_{[t,T]}(\b F^{B,\mu,\hat\P})$, the set of all càdlàg $\b F^{B,\mu,\hat\P}$-adapted processes $Y:[t,T]\times\hat\Omega\to\R$ such that $\E^{\hat\P}[\int_t^T |Y_u|^2 du] < \infty$, \item $L^2_{W,[t,T]}(\b F^{B,\mu,\hat\P})$, the set of all $\b F^{B,\mu,\hat\P}$-predictable processes $Z:[t,T]\times\hat\Omega\to\R^n$ that satisfy $\E^{\hat\P}[\int_t^T |Z_u|^2 du] < \infty$, \item $L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})$, the set of all $Pred(\b F^{B,\mu,\hat\P})\otimes\c B(\c A_T)$-measurable processes $U:[t,T]\times\hat\Omega\times\c A_T\to\R$ such that $\E^{\hat\P}[\int_t^T \int_{\c A_u} |U_u(\alpha)|^2 \lambda_u(d\alpha)du] < \infty$, \item $\c K^2_{[t,T]}(\b F^{B,\mu,\hat\P})\subseteq\c S^2_{[t,T]}(\b F^{B,\mu,\hat\P})$, the subset of all processes $K:[t,T]\times\hat\Omega\to\R$ that are additionally nondecreasing, $\b F^{B,\mu,\hat\P}$-predictable and satisfy $K_t = 0$. \end{itemize} As is standard for the randomisation approach, we will characterise the value function as the minimal solution to a certain constrained BSDE. For this, our main tool will be \cite[Theorem 2.1]{kharroubi_feynmankac_2015} since it replaces the problem of characterising such a minimal solution by studying the limit of solutions to a class of penalised BSDEs.
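The space $L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})$ is tailored to integrals against $\mu$ and its compensator $\lambda_s(d\alpha)ds$. As a toy numerical illustration (not part of the argument) of the underlying compensation identity $\E[\int\int U_s(\alpha)\,\mu(ds,d\alpha)] = \E[\int\int U_s(\alpha)\,\lambda_s(d\alpha)ds]$, the following sketch checks Campbell's formula for a homogeneous Poisson process and a deterministic, mark-independent integrand; all names are hypothetical.

```python
import random

def poisson_integral_mean(rate, horizon, U, n_paths, rng):
    """Monte Carlo estimate of E[sum_{jump times r <= horizon} U(r)] for a
    homogeneous Poisson process; by compensation (Campbell's formula) this
    equals rate * integral_0^horizon U(s) ds."""
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while True:
            s += rng.expovariate(rate)
            if s > horizon:
                break
            total += U(s)
    return total / n_paths

rng = random.Random(42)
rate, horizon = 3.0, 2.0
U = lambda s: s * s
estimate = poisson_integral_mean(rate, horizon, U, 20_000, rng)
exact = rate * horizon**3 / 3  # rate * integral_0^2 s^2 ds = 8.0
```

With $20{,}000$ sample paths the Monte Carlo estimate agrees with the compensator value up to a small sampling error.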
Hence, we will first study in the following \cref{lemma penalised bsde minimal solution existence} the penalised BSDE \eqref{eq penalised bsde value function} and show that it corresponds to the optimisation over the following restricted randomised control set $\c V^n\subseteq\c V$ of all $Pred(\b F^{B,\mu})\otimes\c B(\c A_T)$-measurable processes $\nu:[0,T]\times\hat\Omega\times\c A_T\to (0,n]$ with $\inf_{[0,T]\times\hat\Omega\times\c A_T} \nu > 0$, based on the methodology by \cite{kharroubi_feynmankac_2015,bayraktar_randomized_2018,bandini_backward_2018}. The proof is given in \cref{appendix proof lemma penalised bsde minimal solution existence}. \begin{lemma}\label{lemma penalised bsde minimal solution existence} For $n\in\N$, $t\in [0,T]$, $\alpha_t\in\c A_t\subseteq\c A_T$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$, there exists a unique solution \[ (Y^{n,t,\xi,\alpha_t},Z^{n,t,\xi,\alpha_t},U^{n,t,\xi,\alpha_t})\in \c S^2_{[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{W,[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P}) \] to the following penalised BSDE, \begin{align}\label{eq penalised bsde value function} Y^{n,t,\xi,\alpha_t}_s &= \E^{\hat\P}[g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) | \c F^{B,\mu,\hat\P}_T] + \int_s^T \E^{\hat\P}[f(r,\hat\P^{\c F^{B,\mu,\hat\P}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) | \c F^{B,\mu,\hat\P}_r] dr\\ &\qquad - \int_s^T Z^{n,t,\xi,\alpha_t}_r dB_r + n \int_s^T \int_{\c A_r} (U^{n,t,\xi,\alpha_t}_r(\alpha))_+ \lambda_r(d\alpha) dr - \int_s^T \int_{\c A_r} U^{n,t,\xi,\alpha_t}_r(\alpha) \mu(dr,d\alpha).
\end{align} Further, for any $s\in [t,T]$ and $r\in [s,T]$, we also have the following representation, \begin{align}\label{eq Y n t xi snell envelope formula} Y^{n,t,\xi,\alpha_t}_s &= \esssup_{\nu\in\c V^n} \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big],\qquad\hat\P\text{-a.s.}, \end{align} and moreover for any $r\in [t,T]$, we have the following estimate \begin{align}\label{eq Y n t xi snell envelope upper bound at s=t} \E^{\hat\P}[Y^{n,t,\xi,\alpha_t}_t] \leq \sup_{\nu\in\c V^n} \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big]. \end{align} \end{lemma} Using the previous result on penalised BSDEs, we can now show the following BSDE formulation for our randomised control problem. Our approach is similar to \cite{bayraktar_randomized_2018,bandini_backward_2018} and based on \cite[Theorem 2.1]{kharroubi_feynmankac_2015} which connects penalised BSDEs like \eqref{eq penalised bsde value function} with constrained BSDEs of the form \eqref{eq constrained bsde value function}. The proof is given in \cref{appendix proof lemma bsde minimal solution existence}. \begin{theorem}\label{lemma bsde minimal solution existence} Let $(t,m)\in [0,T]\times\c P_2(\R^d)$. 
For any $\alpha_t\in\c A_t\subseteq\c A_T$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ such that $\P_\xi = m$, there exists a unique minimal solution \[ (Y^{t,\xi,\alpha_t},Z^{t,\xi,\alpha_t},U^{t,\xi,\alpha_t},K^{t,\xi,\alpha_t})\in \c S^2_{[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{W,[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})\times \c K^2_{[t,T]}(\b F^{B,\mu,\hat\P}) \] to the following constrained BSDE, minimal in the sense that any other solution $(\tilde Y^{t,\xi,\alpha_t},\tilde Z^{t,\xi,\alpha_t},\tilde U^{t,\xi,\alpha_t},\tilde K^{t,\xi,\alpha_t})$ satisfies $Y^{t,\xi,\alpha_t}\leq \tilde Y^{t,\xi,\alpha_t}$: \begin{align}\label{eq constrained bsde value function} Y^{t,\xi,\alpha_t}_s &= \E^{\hat\P}[g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) | \c F^{B,\mu,\hat\P}_T] + \int_s^T \E^{\hat\P}[f(r,\hat\P^{\c F^{B,\mu,\hat\P}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) | \c F^{B,\mu,\hat\P}_r] dr\\ &\qquad - \int_s^T Z^{t,\xi,\alpha_t}_r dB_r + K^{t,\xi,\alpha_t}_T - K^{t,\xi,\alpha_t}_s - \int_s^T \int_{\c A_r} U^{t,\xi,\alpha_t}_r(\alpha) \mu(dr,d\alpha),\\ U^{t,\xi,\alpha_t}_s(\alpha)&\leq 0,\qquad \text{for }\lambda_s(d\alpha)\text{-a.e. }\alpha \in \c A_s,\text{ for }ds\text{-a.e. }s\in [t,T]. \end{align} Further, for any $s\in [t,T]$ and $r\in [s,T]$ we also have the following representation, \begin{align}\label{eq recursive snell envelope formula} Y^{t,\xi,\alpha_t}_s &= \esssup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[Y^{t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big],\qquad \hat\P\text{-a.s.} \end{align} Finally, $Y^{t,\xi,\alpha_t}_t$ is $\hat\P$-a.s.
constant, and satisfies for any $r\in [t,T]$ also \begin{align}\label{eq recursive snell envelope formula at s=t} Y^{t,\xi,\alpha_t}_t &= \sup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[Y^{t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big],\qquad\hat\P\text{-a.s.}, \end{align} and thus $Y^{t,\xi,\alpha_t}_t = V(t,m)$, $\hat\P$-a.s. \end{theorem} \subsection{Randomised dynamic programming principle}\label{section randomised dpp} Recalling that in \cref{section randomised problem} we have established an equivalence between the value functions of the original and the randomised problem, we note that $V^{\c R} = V$ only depends on the time and the initial law of $X$. Thus the question of a dynamic programming principle is well-posed, and in this section we use the randomised formulation to establish such a DPP for $V^{\c R}$ and thus also for $V$. We start by studying the flow property of the randomised state dynamics \eqref{eq sde randomised problem}. For this, we will need to define solutions $X^{t,\zeta,\gamma_t}$ to \eqref{eq sde randomised problem} for more general initial conditions $\zeta\in L^2(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;\R^d)$ and $\gamma_t \in L^0(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;A)$. We note that as in \cref{section randomised problem} the existence and uniqueness of $X^{t,\zeta,\gamma_t}$ is standard, since the existence and uniqueness of $I^{t,\gamma_t}$ given $\hat I^{t,\gamma_t}$ from \cref{corollary I given bar I is well defined} also extend to such more general initial conditions. If moreover $\sup_{\nu\in\c V} \E^{\hat\P^\nu}[|\zeta|^2] < \infty$ holds, then similarly to the estimate in \eqref{eq randomised state dynamics basic estimate}, we obtain \begin{align} \sup_{\gamma_t}\sup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[\sup_{s\in [t,T]} |X^{t,\zeta,\gamma_t}_s|^2 \Big] < \infty.
\label{eq extended randomised state dynamics basic estimate} \end{align} This now allows us to write down the following flow property for \eqref{eq sde randomised problem}. \begin{lemma}\label{lemma flow property} Let $t\in [0,T]$, $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and $\alpha_t\in\c A_t$. Then for all $s\in [t,T]$, the following flow property holds \[ \big(X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u,I^{s,I^{t,\alpha_t}_s}_u,\hat\P^{\c F^{B,\mu,\hat\P}_u}_{X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u}\big)_{u\in [s,T]} = \big(X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u,\hat\P^{\c F^{B,\mu,\hat\P}_u}_{X^{t,\xi,\alpha_t}_u}\big)_{u\in [s,T]},\qquad\hat\P\text{-a.s.} \] \end{lemma} \begin{proof} We note that by \cref{corollary I given bar I is well defined}, $I^{t,\alpha_t}$ and $I^{s,I^{t,\alpha_t}_s}$ are indistinguishable on $[s,T]$. Thus $X^{t,\xi,\alpha_t}$ and $X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}$ both solve \eqref{eq sde randomised problem} on $[s,T]$, and hence the flow property follows from the strong uniqueness of solutions to \eqref{eq sde randomised problem}. \end{proof} Next, let us take a closer look at such randomised settings with a randomised strategy $\nu\in\c V$ and general initial conditions $\zeta\in L^2(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;\R^d)$, $\gamma_t\in L^0(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;A)$. We want to decompose both $\zeta,\gamma_t$ and $\nu$ along the $\sigma$-algebra $\c F^{B,\mu,\hat\P}_t$, to be able to relate this to our original setting with $\xi\in L^2(\hat\Omega,\c G\lor\c F^W_t,\hat\P;\R^d)$ and $\alpha_t\in\c A_t$, which we studied in the previous sections. For this, we recall that we introduced $B^t\coloneqq (B_{\cdot\lor t} - B_t)$ and $\mu^t\coloneqq \mu(\cdot\cap ((t,T]\times\c A_T))$ in \cref{section randomised problem}.
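The decomposition along $\c F^{B,\mu,\hat\P}_t$ described here, like the proof of \cref{proposition equivalence v v_t randomised value function}, ultimately rests on the freezing lemma: for independent $X\perp Y$ one has $\E[f(X,Y)\,|\,\sigma(X)] = g(X)$ with $g(x) = \E[f(x,Y)]$. A minimal Monte Carlo sanity check of this identity in a toy discrete setting (all names hypothetical):

```python
import random

def conditional_means(n, rng):
    """Estimate E[f(X, Y) | X = x] for f(x, y) = x * y**2 with independent
    X uniform on {1, 2} and Y standard normal; the freezing lemma gives
    g(x) = E[f(x, Y)] = x * E[Y^2] = x."""
    sums = {1: 0.0, 2: 0.0}
    counts = {1: 0, 2: 0}
    for _ in range(n):
        x = rng.choice([1, 2])
        y = rng.gauss(0.0, 1.0)
        sums[x] += x * y * y
        counts[x] += 1
    return {x: sums[x] / counts[x] for x in (1, 2)}

rng = random.Random(7)
cond_means = conditional_means(200_000, rng)  # close to g(1) = 1 and g(2) = 2
```

The empirical conditional means agree with $g(x) = x$ up to Monte Carlo error, which is exactly the mechanism exploited when the frozen component $\Upsilon(\hat\omega,\cdot)$ is independent of $\c F^{B,\mu,\hat\P}_t$.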
We then note that by \cref{proposition decompose filtrations of predictable processes}, we can decompose both $\zeta,\gamma_t$ and $\nu$ into unique $\Xi\in L^2(\hat\Omega\times\hat\Omega,\c F^{B,\mu}_t\otimes(\c G\lor\c F^W_t),\hat\P\otimes\hat\P;\R^d)$, $C_t\in L^0(\hat\Omega\times\hat\Omega,\c F^{B,\mu}_t\otimes(\c G\lor\c F^W_t),\hat\P\otimes\hat\P;A)$ and an $\c F^{B,\mu}_t\otimes Pred(\b F^{B^t,\mu^t})\otimes\c B(\c A_T)$-measurable process $\Upsilon$ with $0<\inf_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon\leq \sup_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon < \infty$ such that $(\zeta,\gamma_t,\nu)(\hat\omega) = (\Xi,C_t,\Upsilon)(\hat\omega,\hat\omega)$, for $\hat\P$-a.a.\@ $\hat\omega\in\hat\Omega$. This decomposition is the motivation for the following lemma, which studies how this decomposition reflects in the reward functional. \begin{lemma}\label{lemma generalised randomised payoff decomposition} Let $t\in [0,T]$, $\zeta\in L^2(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;\R^d)$, $\gamma_t\in L^0(\hat\Omega,\c G\lor\c F^{W,B,\mu}_t,\hat\P;A)$ and $\nu\in\c V$, and let $\Xi\in L^2(\hat\Omega\times\hat\Omega,\c F^{B,\mu}_t\otimes(\c G\lor\c F^W_t),\hat\P\otimes\hat\P;\R^d)$, $C_t\in L^0(\hat\Omega\times\hat\Omega,\c F^{B,\mu}_t\otimes(\c G\lor\c F^W_t),\hat\P\otimes\hat\P;A)$ and $\Upsilon$ be an $\c F^{B,\mu}_t\otimes Pred(\b F^{B^t,\mu^t})\otimes\c B(\c A_T)$-measurable process such that $0<\inf_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon\leq \sup_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon < \infty$ (in particular, $\Upsilon(\hat\omega,\cdot)\in\c V_t$ for every $\hat\omega\in\hat\Omega$), such that $(\zeta,\gamma_t,\nu)(\hat\omega) = (\Xi,C_t,\Upsilon)(\hat\omega,\hat\omega)$, for $\hat\P$-a.a.\@ $\hat\omega\in\hat\Omega$. 
Assume further that $\sup_{\upsilon\in\c V} \E^{\hat\P^\upsilon}[|\zeta|^2]<\infty$. Then it holds that \begin{align} &\E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\zeta,\gamma_t}_T},X^{t,\zeta,\gamma_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\zeta,\gamma_t}_s},X^{t,\zeta,\gamma_t}_s,I^{t,\gamma_t}_s) ds\Big| \c F^{B,\mu,\hat\P^\nu}_t\Big](\hat\omega)\\ &= \E^{\hat\P^\upsilon}\Big[g(\hat\P^{\upsilon,\c F^{B,\mu,\hat\P^\upsilon}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\upsilon,\c F^{B,\mu,\hat\P^\upsilon}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds \Big]\bigg|_{\xi = \Xi(\hat\omega,\cdot),\alpha_t = C_t(\hat\omega,\cdot),\upsilon = \Upsilon(\hat\omega,\cdot)}\\ &= J^{\c R}(t,\xi,\alpha_t,\upsilon)\Big|_{\xi = \Xi(\hat\omega,\cdot),\alpha_t = C_t(\hat\omega,\cdot), \upsilon = \Upsilon(\hat\omega,\cdot)},\qquad\text{ for }\hat\P\text{-a.a.\@ }\hat\omega\in\hat\Omega. \label{eq generalised randomised payoff decomposition} \end{align} \end{lemma} \begin{proof} To prove the formula \eqref{eq generalised randomised payoff decomposition}, we will first consider the case where $\hat\omega\mapsto (\Xi(\hat\omega,\cdot),C_t(\hat\omega,\cdot),\Upsilon(\hat\omega,\cdot))$ takes finitely many values, and then use the continuity of $J^{\c R}$ in $\xi,\alpha_t,\nu$ and of $(X,\P^{\c F^{B,\mu,\P}}_X)$ in $\Xi,C_t,\Upsilon$ to obtain the result for general $\Xi,C_t,\Upsilon$.
\begin{enumerate}[wide,label=(\roman*)] \item\label{lemma 7.2 step i} Let us assume that $\hat\omega\mapsto (\Xi(\hat\omega,\cdot),C_t(\hat\omega,\cdot),\Upsilon(\hat\omega,\cdot))$ only takes finitely many values (in the spaces $L^2(\hat\Omega,\c G\lor\c F^W_t,\hat\P;\R^d)$, $\c A_t$ and $\c V_t$), and is thus of the form \begin{align} &\Xi(\hat\omega^1,\hat\omega^2) = \sum_{k=1}^K \xi^k(\pi(\hat\omega^2))\1_{Z^k}(\hat\omega^1),\quad C_t(\hat\omega^1,\hat\omega^2) = \sum_{l=1}^L \alpha_t^l(\hat\omega^2)\1_{H^l}(\hat\omega^1),\\ &\Upsilon(\hat\omega^1,\hat\omega^2) = \sum_{m=1}^M \upsilon^m(\hat\omega^2)\1_{E^m}(\hat\omega^1) ,\quad\text{for }(\hat\omega^1,\hat\omega^2)\in\hat\Omega\times\hat\Omega, \end{align} where $(Z^k)_k$, $(H^l)_l$ and $(E^m)_m$ are each partitions of $\hat\Omega$ into disjoint sets, and $\xi^k\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$, $\alpha^l_t\in\c A_t$ and $\upsilon^m\in\c V_t$. Let us now consider the processes $\hat I^{t,\alpha^l_t}$ given by \eqref{eq randomised poisson point process}, and the corresponding $I^{t,\alpha^l_t}$ which $\b F^{\hat I^{t,\alpha^l_t}}$-identifies to $\hat I^{t,\alpha^l_t}$ from \cref{corollary I given bar I is well defined}. We then see that, by construction, $\sum_{l=1}^L I^{t,\alpha^l_t}\1_{H^l}$ $\b F^{\hat I^{t,\gamma_t}}$-identifies to $\hat I^{t,\gamma_t}$, and thus, again by \cref{corollary I given bar I is well defined}, is indistinguishable from $I^{t,\gamma_t}$ under $\hat\P\sim\hat\P^\nu$. Similarly, denoting the corresponding unique solutions to \eqref{eq sde randomised problem} with starting value $\xi^k$ by $X^{t,\xi^k,\alpha^l_t}$, we note that by strong uniqueness of solutions to \eqref{eq sde randomised problem} the processes $X^{t,\zeta,\gamma_t}$ and $\sum_{k=1}^K \sum_{l=1}^L X^{t,\xi^k,\alpha^l_t} \1_{Z^k\cap H^l}$ are indistinguishable under $\hat\P\sim\hat\P^\nu$.
Finally, we also define the tilted probability measures $\hat\P^{\upsilon^m}\sim\hat\P$ belonging to $\upsilon^m$ by $\frac{d\hat\P^{\upsilon^m}}{d\hat\P}\big|_{\c F^{B,\mu,\hat\P}_u} \coloneqq L^{\upsilon^m}_u$ for $u\in [0,T]$, see \eqref{eq randomised control girsanov formula}, and note that by construction $L^\nu = \sum_{m=1}^M L^{\upsilon^m} \1_{E^m}$. Altogether, this allows us to decompose the LHS of \eqref{eq generalised randomised payoff decomposition} as follows \begin{align} &\E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\zeta,\gamma_t}_T},X^{t,\zeta,\gamma_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\zeta,\gamma_t}_s},X^{t,\zeta,\gamma_t}_s,I^{t,\gamma_t}_s) ds\Big| \c F^{B,\mu,\hat\P^\nu}_t\Big]\\ &= \sum_{k=1}^K \sum_{l=1}^L \sum_{m=1}^M \1_{Z^k\cap H^l\cap E^m} \E^{\hat\P^{\upsilon^m}}\Big[g(\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_T}_{X^{t,\xi^k,\alpha_t^l}_T},X^{t,\xi^k,\alpha_t^l}_T)\\ &\hspace{145pt} + \int_t^T f(s,\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_s}_{X^{t,\xi^k,\alpha_t^l}_s},X^{t,\xi^k,\alpha_t^l}_s,I^{t,\alpha_t^l}_s) ds\Big| \c F^{B,\mu,\hat\P^{\upsilon^m}}_t\Big]. \label{eq lemma 7.2 decompose randomised payoff for simple functions} \end{align} For each summand, we note that since $\upsilon^m\in\c V_t$, by construction $\xi^k,\alpha^l_t,W,B^t,\mu^t$ are $\hat\P^{\upsilon^m}$-independent of $\c F_t^{B,\mu,\hat\P^{\upsilon^m}}$. Thus from the strong uniqueness of solutions to \eqref{eq sde randomised problem} we can deduce that $X^{t,\xi^k,\alpha_t^l}$ and $(\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_s}_{X^{t,\xi^k,\alpha_t^l}_s})_{s\in [t,T]}$ are $\hat\P^{\upsilon^m}$-independent of $\c F^{B,\mu,\hat\P^{\upsilon^m}}_t$.
For this, one can e.g.\@ note that the standard construction of a solution to \eqref{eq sde randomised problem} using a Picard iteration leads to a process $\hat\P^{\upsilon^m}$-independent of $\c F^{B,\mu,\hat\P^{\upsilon^m}}_t$ in every step and thus also in the limit. Therefore \begin{align} &\E^{\hat\P^{\upsilon^m}}\Big[g(\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_T}_{X^{t,\xi^k,\alpha_t^l}_T},X^{t,\xi^k,\alpha_t^l}_T) + \int_t^T f(s,\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_s}_{X^{t,\xi^k,\alpha_t^l}_s},X^{t,\xi^k,\alpha_t^l}_s,I^{t,\alpha_t^l}_s) ds\Big| \c F^{B,\mu,\hat\P^{\upsilon^m}}_t\Big]\\ &= \E^{\hat\P^{\upsilon^m}}\Big[g(\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_T}_{X^{t,\xi^k,\alpha_t^l}_T},X^{t,\xi^k,\alpha_t^l}_T) + \int_t^T f(s,\hat\P^{\upsilon^m,\c F^{B,\mu,\hat\P^{\upsilon^m}}_s}_{X^{t,\xi^k,\alpha_t^l}_s},X^{t,\xi^k,\alpha_t^l}_s,I^{t,\alpha_t^l}_s) ds\Big]\\ &= J^{\c R}(t,\xi^k,\alpha_t^l,\upsilon^m),\quad \hat\P\text{-a.s.}, \end{align} which, together with \eqref{eq lemma 7.2 decompose randomised payoff for simple functions}, shows that the claim \eqref{eq generalised randomised payoff decomposition} holds whenever $\hat\omega\mapsto (\Xi(\hat\omega,\cdot),C_t(\hat\omega,\cdot),\Upsilon(\hat\omega,\cdot))$ takes only finitely many values. \item Let us now consider general $(\Xi,C_t,\Upsilon)$. We first note that due to $\rho < 1$, and since $\Upsilon$ is essentially bounded, we can find a sequence $(\Xi^n,C_t^n,\Upsilon^n)_n$ such that $(\Xi^n,C_t^n,\Upsilon^n) \to (\Xi,C_t,\Upsilon)$ in $L^2\times L^\infty\times L^\infty$ and a.s., with $\sup_{\upsilon\in\c V}\E^{\hat\P^\upsilon\otimes\hat\P^\upsilon}[|\Xi^n|^2] < \infty$, and such that each $\hat\omega\mapsto (\Xi^n(\hat\omega,\cdot),C_t^n(\hat\omega,\cdot),\Upsilon^n(\hat\omega,\cdot))$ only takes finitely many values.
Correspondingly we define $\zeta^n,\gamma_t^n,\nu^n$ by $(\zeta^n,\gamma_t^n,\nu^n)(\hat\omega)\coloneqq (\Xi^n,C_t^n,\Upsilon^n)(\hat\omega,\hat\omega)$, which by construction then satisfy $(\zeta^n,\gamma_t^n,\nu^n)\to (\zeta,\gamma_t,\nu)$ in $L^2\times L^\infty\times L^\infty$ and a.s., and $\sup_{\upsilon\in\c V}\E^{\hat\P^\upsilon}[|\zeta^n|^2] < \infty$. Thus by \cref{lemma 7.2 step i}, we already know that \eqref{eq generalised randomised payoff decomposition} holds for each $(\Xi^n,C_t^n,\Upsilon^n)$ and $(\zeta^n,\gamma_t^n,\nu^n)$, and our task now is to show that \eqref{eq generalised randomised payoff decomposition} still holds in the limit, for which it is sufficient to show the convergence of the left- and right-hand sides as $n\to\infty$. To this end, for the right-hand side of \eqref{eq generalised randomised payoff decomposition}, we note that $J^{\c R}:[0,T]\times L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)\times\c A_t\times\c V\to \R$ is continuous in $\xi,\alpha_t,\nu$ by \cref{lemma J R continuous}, and thus \[ \lim_{n\to\infty} J^{\c R}(t,\Xi^n(\hat\omega,\cdot),C_t^n(\hat\omega,\cdot),\Upsilon^n(\hat\omega,\cdot)) = J^{\c R}(t,\Xi(\hat\omega,\cdot),C_t(\hat\omega,\cdot),\Upsilon(\hat\omega,\cdot)), \qquad\text{for $\hat\P$-a.a.\@ }\hat\omega\in\hat\Omega. \] For the left-hand side of \eqref{eq generalised randomised payoff decomposition}, we introduce $\bar\nu\coloneqq \nu\lor\sup_n \nu^n \in\c V$.
Then using the conditional Jensen's inequality, we obtain that \begin{align} &\E^{\hat\P^{\bar\nu}}\bigg[\bigg|\E^{\hat\P^{\nu^n}}\Big[g(\hat\P^{\nu^n,\c F^{B,\mu,\hat\P^{\nu^n}}_T}_{X^{t,\zeta^n,\gamma_t^n}_T},X^{t,\zeta^n,\gamma_t^n}_T) + \int_t^T f(s,\hat\P^{\nu^n,\c F^{B,\mu,\hat\P^{\nu^n}}_s}_{X^{t,\zeta^n,\gamma_t^n}_s},X^{t,\zeta^n,\gamma_t^n}_s,I^{t,\gamma_t^n}_s) ds\Big| \c F^{B,\mu,\hat\P^{\nu^n}}_t\Big]\\ &\qquad - \E^{\hat\P^{\nu}}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^{\nu}}_T}_{X^{t,\zeta,\gamma_t}_T},X^{t,\zeta,\gamma_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^{\nu}}_s}_{X^{t,\zeta,\gamma_t}_s},X^{t,\zeta,\gamma_t}_s,I^{t,\gamma_t}_s) ds\Big| \c F^{B,\mu,\hat\P^{\nu}}_t\Big]\bigg| \bigg]\\ &\leq \E^{\hat\P^{\bar\nu}}\bigg[\bigg|\frac{L^{\nu^n}_T}{L^{\bar\nu}_T} \Big(g(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_T}_{X^{t,\zeta^n,\gamma_t^n}_T},X^{t,\zeta^n,\gamma_t^n}_T) + \int_t^T f(s,\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\zeta^n,\gamma_t^n}_s},X^{t,\zeta^n,\gamma_t^n}_s,I^{t,\gamma_t^n}_s) ds\Big)\\ &\qquad - \frac{L^\nu_T}{L^{\bar\nu}_T} \Big(g(\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_T}_{X^{t,\zeta,\gamma_t}_T},X^{t,\zeta,\gamma_t}_T) + \int_t^T f(s,\hat\P^{\bar\nu,\c F^{B,\mu,\hat\P^{\bar\nu}}_s}_{X^{t,\zeta,\gamma_t}_s},X^{t,\zeta,\gamma_t}_s,I^{t,\gamma_t}_s) ds\Big)\bigg|\bigg], \end{align} and thus repeating the arguments in the proof of \cref{lemma J R continuous} using the estimate \eqref{eq extended randomised state dynamics basic estimate}, we conclude that \begin{align} &\E^{\hat\P^{\nu^n}}\Big[g(\hat\P^{\nu^n,\c F^{B,\mu,\hat\P^{\nu^n}}_T}_{X^{t,\zeta^n,\gamma_t^n}_T},X^{t,\zeta^n,\gamma_t^n}_T) + \int_t^T f(s,\hat\P^{\nu^n,\c F^{B,\mu,\hat\P^{\nu^n}}_s}_{X^{t,\zeta^n,\gamma_t^n}_s},X^{t,\zeta^n,\gamma_t^n}_s,I^{t,\gamma_t^n}_s) ds\Big| \c F^{B,\mu,\hat\P^{\nu^n}}_t\Big]\\ &\to \E^{\hat\P^{\nu}}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^{\nu}}_T}_{X^{t,\zeta,\gamma_t}_T},X^{t,\zeta,\gamma_t}_T) + \int_t^T 
f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^{\nu}}_s}_{X^{t,\zeta,\gamma_t}_s},X^{t,\zeta,\gamma_t}_s,I^{t,\gamma_t}_s) ds\Big| \c F^{B,\mu,\hat\P^{\nu}}_t\Big]\text{ in }L^1(\hat\P^{\bar\nu}). \end{align} Thus from the uniqueness of limits, we conclude that \eqref{eq generalised randomised payoff decomposition} also holds for general $(\Xi,C_t,\Upsilon)$, which completes the proof. \end{enumerate} \end{proof} This now allows us to obtain the following more general connection between the mean-field control problem and the BSDE \eqref{eq constrained bsde value function}. \begin{theorem}\label{theorem value function bsde characterisation} Let $t\in [0,T]$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$. Further let $\alpha_t\in\c A_t\subseteq \c A_T$ and let $(Y^{t,\xi,\alpha_t},Z^{t,\xi,\alpha_t},U^{t,\xi,\alpha_t},K^{t,\xi,\alpha_t})$ be the unique minimal solution to \eqref{eq constrained bsde value function} from \cref{lemma bsde minimal solution existence}. Then for any $s\in [t,T]$, it holds that \[ Y^{t,\xi,\alpha_t}_s = V(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s}),\qquad \hat\P\text{-a.s.} \] \end{theorem} \begin{proof} We start by decomposing $(X_s^{t,\xi,\alpha_t},I_s^{t,\alpha_t})$. Since $X_s^{t,\xi,\alpha_t}$ and $I_s^{t,\alpha_t}$ are both $\c G\lor\c F^{W,B,\mu,\hat\P}_s$-measurable, by e.g.\@ \cref{proposition decompose filtrations of predictable processes} there exist $\c F^{B,\mu}_s\otimes (\c G\lor\c F^W_s)$-measurable $\zeta:\hat\Omega\times\hat\Omega\to\R^d$ and $C_s:\hat\Omega\times\hat\Omega\to A$ such that \[ (X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s)(\hat\omega) = (\zeta,C_s)(\hat\omega,\hat\omega), \quad \text{for $\hat\P$-a.a.\@ }\hat\omega\in\hat\Omega.
\] Note that by construction $\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s}(\hat\omega) = \hat\P_{\zeta(\hat\omega,\cdot)}$, for $\hat\P$-a.a.\@ $\hat\omega\in\hat\Omega$, since $\c F^{B,\mu,\hat\P}_s$ and $\c G\lor\c F^{W,\hat\P}_s$ are independent under $\hat\P$. Hence, from \cref{proposition equivalence v v_t randomised value function,theorem equivalence original and randomised value function}, we now see that \[ V(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s}(\hat\omega)) = V^{\c R}(s,\zeta(\hat\omega,\cdot),C_s(\hat\omega,\cdot)) = \sup_{\nu\in\c V_s} J^{\c R} (s,\zeta(\hat\omega,\cdot),C_s(\hat\omega,\cdot),\nu), \ \text{for $\hat\P$-a.a.\@ }\hat\omega\in\hat\Omega. \] At the same time, we note that \cref{lemma bsde minimal solution existence} (for $r=T$) together with the flow property in \cref{lemma flow property} implies that $\hat\P$-a.s. \begin{align} Y^{t,\xi,\alpha_t}_s &= \esssup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_s^T f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]\\ &= \esssup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_T},X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_T)\\ &\hspace{65pt}+ \int_s^T f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u},X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u,I^{t,\alpha_t}_u) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]. \end{align} In particular, the claim now follows if we can show that $\hat\P$-a.s.
\begin{align} &\sup_{\nu\in\c V_s} J^{\c R} (s,\zeta(\hat\omega,\cdot),C_s(\hat\omega,\cdot),\nu)\\ &= \esssup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_T},X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_T)\\ &\hspace{65pt}+ \int_s^T f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u},X^{s,X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s}_u,I^{t,\alpha_t}_u) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]. \end{align} To this end, we note that by the estimate \eqref{eq randomised state dynamics basic estimate}, we have $\sup_{\nu\in\c V}\E^{\hat\P^\nu}[|\zeta|^2] = \sup_{\nu\in\c V}\E^{\hat\P^\nu}[|X^{t,\xi,\alpha_t}_s|^2] < \infty$. Hence the inequality ``$\leq$'' follows from \cref{lemma generalised randomised payoff decomposition} since $\c V_s\subseteq\c V$. On the other hand, for ``$\geq$'' it suffices to notice that by \cref{proposition decompose filtrations of predictable processes} we can decompose every $\nu\in\c V$ into an $\c F^{B,\mu}_s\otimes Pred(\b F^{B^s,\mu^s})\otimes\c B(\c A_T)$-measurable process $\Upsilon$ such that $\nu_r(\hat\omega,\alpha) = \Upsilon_r(\hat\omega,\hat\omega,\alpha)$ for all $\hat\omega\in\hat\Omega,\alpha\in\c A_T,r\in [0,T]$; this direction then also follows from \cref{lemma generalised randomised payoff decomposition}, since $\Upsilon(\hat\omega,\cdot)\in\c V_s$ for $\hat\P$-a.a.\@ $\hat\omega\in\hat\Omega$. \end{proof} Finally, we can now conclude the following randomised dynamic programming principle (DPP). \begin{theorem}\label{theorem randomised dpp} Let $(t,m) \in [0,T]\times \c P_2(\R^d)$.
Then for any choice of $\alpha_t\in\c A_t\subseteq\c A_T$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ with $\hat\P_\xi = m$, it holds for $s\in [t,T]$ that \begin{align} V(t,m) = \sup_{\nu\in \c V} \E^{\hat\P^\nu}\Big[ V(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s}) + \int_t^s f(r,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) dr \Big]. \label{eq randomised dpp} \end{align} \end{theorem} \begin{proof} This follows directly from \cref{lemma bsde minimal solution existence,theorem value function bsde characterisation}. \end{proof} \begin{remark} Assuming that the value function $V$ is sufficiently regular, this approach leads to the standard (non-randomised) DPP for mean-field control problems, which has been well-studied in the literature, with \cite{djete_mckeanvlasov_2022,djete_mckeanvlasov_2022-1} providing one of the most comprehensive studies. Specifically, we can interpret the right-hand side in \eqref{eq randomised dpp} as a new control problem with running reward $f$ and terminal reward $V$. Thus, as long as $V$ satisfies the assumptions on $g$ in \cref{assumptions reward functional}, we can apply the same theory developed in this paper again to this new problem. Consequently, by \cref{theorem equivalence original and randomised value function}, we then obtain \begin{align} V(t,m) &= \sup_{\nu\in \c V} \E^{\hat\P^\nu}\Big[ V(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s}) + \int_t^s f(r,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) dr \Big]\\ &= \sup_{\alpha\in\c A} \E^\P\Big[ V(s,\P^{\c F^{B,\P}_s}_{X^{t,\xi,\alpha}_s}) + \int_t^s f(r,\P^{\c F^{B,\P}_r}_{X^{t,\xi,\alpha}_r},X^{t,\xi,\alpha}_r,\alpha_r) dr \Big]. 
\end{align} \end{remark} \appendix \section{Proof of \cref{theorem equivalence original and randomised value function}} \label{section proof of equivalence randomised non randomised formulation} Our goal in this section is to prove that $V = V^{\c R}$. For this let us fix $t\in [0,T]$ and $\alpha_t\in\c A_t$. Furthermore, let us also fix the family $(\lambda_s)_{s\in [0,T]}$ satisfying \cref{assumptions lambda family}. As already outlined in \cref{theorem equivalence original and randomised value function}, we will obtain as a side result that $V^{\c R}$ is independent of the specific choice of $\alpha_t\in\c A_t$, which will come into play again later in the dynamic programming result of \cref{section randomised dpp}. In what follows, we will prove the two directions $V\leq V^{\c R}$ and $V^{\c R}\leq V$ separately. The general structure of the proofs is similar to \cite{bandini_backward_2018,bayraktar_randomized_2018}, although the detailed arguments are naturally different in our mean-field setting due to the presence of the mean-field interaction and the common noise. A crucial tool will be the following \cref{proposition randomised setting is independent of the extension}, which shows that the randomised value function $V^{\c R}$ is independent of the specific choice of the extension of the probability space $(\Omega,\c F,\P)$. This will allow us to simplify the upcoming proofs by considering suitable extensions of $(\Omega,\c F,\P)$ as needed for each direction. \begin{lemma}\label{proposition randomised setting is independent of the extension} The randomised value function $V^{\c R}$ does not depend on the specific extension $(\hat\Omega,\hat{\c F},\hat\P)$ of $(\Omega,\c F,\P)$. \end{lemma} \begin{proof} Let $t\in [0,T]$, $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and $\alpha_t\in\c A_t$.
Further let $(\lambda_s)_{s\in [0,T]}$ satisfy \cref{assumptions lambda family}, and let $(\hat\Omega^i,\hat{\c F}^i,\hat\P^i)$, $i\in\{1,2\}$, both be extensions of $(\Omega,\c F,\P)$, each carrying a Poisson point measure $\mu^i$ with intensity $\lambda_s(d\alpha)ds$ independent of $\c G^i,W^i,B^i$. We denote the respective canonical projections by $\pi^i:(\hat\Omega^i,\hat{\c F}^i)\to(\Omega,\c F)$. By the extension property, we have then that $\c L^{\hat\P^1}(\pi^1) = \c L^{\hat\P^2}(\pi^2)$. Therefore, by introducing $\pi^i|_{\c G^i}:(\hat\Omega^i,\hat{\c G}^i)\to (\Omega,\c G)$, $i\in \{1,2\}$, we observe that also $\c L^{\hat\P^1}(\pi^1|_{\c G^1},W^1,B^1) = \c L^{\hat\P^2}(\pi^2|_{\c G^2},W^2,B^2)$. Finally, using the independence of $\c G^i,W^i,B^i$ from $\mu^i$ and thus also from the corresponding $\hat I^{t,\alpha_t,i}$ defined by \eqref{eq randomised poisson point process}, together with $\c L^{\hat\P^1}(\mu^1) = \c L^{\hat\P^2}(\mu^2)$ this shows that \[ \c L^{\hat\P^1}(\pi^1|_{\c G^1},W^1,B^1,\hat I^{t,\alpha_t,1},\mu^1) = \c L^{\hat\P^2}(\pi^2|_{\c G^2},W^2,B^2,\hat I^{t,\alpha_t,2},\mu^2). \] Then \cref{corollary I given bar I is well defined} shows that the corresponding càdlàg, $\b F^{W^i}\lor\c G^i\lor\c F^{\mu^{t,i}}_T$-predictable and $\b F^{W^i,\mu^{t,i}}\lor\c G^i$-progressive $I^{t,\alpha_t,i}$ which $\b F^{\hat I^{t,\alpha_t,i}}$-identify to $\hat I^{t,\alpha_t,i}$, $i\in\{1,2\}$ are unique (up to indistinguishability) and moreover satisfy \[ \c L^{\hat\P^1}(\pi^1|_{\c G^1},W^1,B^1,\hat I^{t,\alpha_t,1},I^{t,\alpha_t,1},\mu^1) = \c L^{\hat\P^2}(\pi^2|_{\c G^2},W^2,B^2,\hat I^{t,\alpha_t,2},I^{t,\alpha_t,2},\mu^2).
\] Using the weak uniqueness of solutions to the SDE \eqref{eq sde randomised problem}, we hence obtain for the corresponding unique solutions $X^{t,\xi^i,\alpha_t,i}$ to \eqref{eq sde randomised problem}, $i\in\{1,2\}$, that \begin{align} \c L^{\hat\P^1}(X^{t,\xi^1,\alpha_t,1},\mu^1,I^{t,\alpha_t,1},(\hat\P^{1,\c F^{B^1,\mu^1,\hat\P^1}_s}_{X^{t,\xi^1,\alpha_t,1}_s})_s) = \c L^{\hat\P^2}(X^{t,\xi^2,\alpha_t,2},\mu^2,I^{t,\alpha_t,2},(\hat\P^{2,\c F^{B^2,\mu^2,\hat\P^2}_s}_{X^{t,\xi^2,\alpha_t,2}_s})_s). \label{eq proposition 4.9 law equivalence of state processes} \end{align} Now let $\nu^1\in\c V^1$ be a randomised control and $\hat\P^{1,\nu^1}\sim\hat\P^1$ be the corresponding tilted probability measure. Then using \cite[Lemma 4.2]{bandini_backward_2018} and \cref{proposition canonical representation of predictable processes} we can represent it in the following form, for all $(s,\hat\omega^1,a)\in [0,T]\times\hat\Omega^1\times\c A_T$, \begin{align} \label{eq proposition 4.9 representation nu 1} \nu^1_s(\hat\omega^1,a) &= \nu^{(0)}_s(B^1(\hat\omega^1),a) \ \1_{[0,\tau^1_1(\hat\omega^1)]}(s)\\ &\quad+ \sum_{k\geq 1} \nu^{(k)}_s(B^1(\hat\omega^1),(\tau^1_1(\hat\omega^1),\alpha^1_1(\hat\omega^1)),\dots,(\tau^1_k(\hat\omega^1),\alpha^1_k(\hat\omega^1)),a) \ \1_{(\tau^1_k(\hat\omega^1), \tau^1_{k+1}(\hat\omega^1)]}(s), \end{align} for $Pred(\mathbf C^n)\otimes \c B(([0,\infty)\times \c A_T)^k)\otimes\c B(\c A_T)$-$\c B((0,\infty))$-measurable processes $\nu^{(k)}:C^n\times ([0,\infty)\times \c A_T)^k\times \c A_T\to (0,\infty)$, for all $k\geq 0$, where $(\tau^1_k,\alpha^1_k)_{k\geq 1}$ are the events of $\mu^1$. 
This representation now allows us to transfer the control $\nu^1$ to $(\hat\Omega^2,\hat{\c F}^2,\hat\P^2)$ by defining, for all $(s,\hat\omega^2,a)\in [0,T]\times\hat\Omega^2\times\c A_T$, \begin{align} \label{eq proposition 4.9 representation nu 2} \nu^2_s(\hat\omega^2,a) &= \nu^{(0)}_s(B^2(\hat\omega^2),a) \ \1_{[0,\tau^2_1(\hat\omega^2)]}(s)\\ &\quad + \sum_{k\geq 1} \nu^{(k)}_s(B^2(\hat\omega^2),(\tau^2_1(\hat\omega^2),\alpha^2_1(\hat\omega^2)),\dots,(\tau^2_k(\hat\omega^2),\alpha^2_k(\hat\omega^2)),a) \ \1_{(\tau^2_k(\hat\omega^2), \tau^2_{k+1}(\hat\omega^2)]}(s), \end{align} where $(\tau^2_k,\alpha^2_k)_{k\geq 1}$ are the events of $\mu^2$. We note that by \cref{proposition canonical representation of predictable processes}, $\nu^2$ is $Pred(\b F^{B^2,\mu^2})\otimes\c B(\c A_T)$-measurable and thus $\nu^2\in\c V^2$. Furthermore, the representations \eqref{eq proposition 4.9 representation nu 1} and \eqref{eq proposition 4.9 representation nu 2} together with \eqref{eq proposition 4.9 law equivalence of state processes} show that \begin{align} \c L^{\hat\P^1}(X^{t,\xi^1,\alpha_t,1},\mu^1,I^{t,\alpha_t,1},(\hat\P^{1,\c F^{B^1,\mu^1,\hat\P^1}_s}_{X^{t,\xi^1,\alpha_t,1}_s})_s,\nu^1) = \c L^{\hat\P^2}(X^{t,\xi^2,\alpha_t,2},\mu^2,I^{t,\alpha_t,2},(\hat\P^{2,\c F^{B^2,\mu^2,\hat\P^2}_s}_{X^{t,\xi^2,\alpha_t,2}_s})_s,\nu^2).
\end{align} Finally, this allows us to conclude that \begin{align} J^{\c R,1}(t,\xi,\alpha_t,\nu^1) &= \E^{\hat\P^1}\Big[L^{\nu^1}_T g(\hat\P^{1,\c F^{B^1,\mu^1,\hat\P^1}_T}_{X^{t,\xi^1,\alpha_t,1}_T},X^{t,\xi^1,\alpha_t,1}_T) + \int_t^T f(s,\hat\P^{1,\c F^{B^1,\mu^1,\hat\P^1}_s}_{X^{t,\xi^1,\alpha_t,1}_s},X^{t,\xi^1,\alpha_t,1}_s,I^{t,\alpha_t,1}_s) ds \Big]\\ &= \E^{\hat\P^2}\Big[L^{\nu^2}_T g(\hat\P^{2,\c F^{B^2,\mu^2,\hat\P^2}_T}_{X^{t,\xi^2,\alpha_t,2}_T},X^{t,\xi^2,\alpha_t,2}_T) + \int_t^T f(s,\hat\P^{2,\c F^{B^2,\mu^2,\hat\P^2}_s}_{X^{t,\xi^2,\alpha_t,2}_s},X^{t,\xi^2,\alpha_t,2}_s,I^{t,\alpha_t,2}_s) ds \Big]\\ &= J^{\c R,2}(t,\xi,\alpha_t,\nu^2), \end{align} where $L^{\nu^i}_T$ is given by \eqref{eq randomised control girsanov formula}. Since $\nu^1\in\c V^1$ was arbitrary, this shows that \[ V^{\c R,1}(t,\xi,\alpha_t) = \sup_{\nu^1\in{\c V}^1} J^{\c R,1}(t,\xi,\alpha_t,\nu^1) \leq \sup_{\nu^2\in{\c V}^2} J^{\c R,2}(t,\xi,\alpha_t,\nu^2) = V^{\c R,2}(t,\xi,\alpha_t). \] By the same arguments we can also show that $V^{\c R,2}(t,\xi,\alpha_t)\leq V^{\c R,1}(t,\xi,\alpha_t)$, and thus we conclude that $V^{\c R,1} \equiv V^{\c R,2}$. \end{proof} \subsection{Proof of $V\leq V^{\c R}$}\label{subsection V leq V^R} This direction relies on the result in \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version}\footnote{\cite{bandini_randomisation_2016_extended_arxiv_version} is the extended version of the paper \cite{bandini_backward_2018}, and \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version} is a more general version of \cite[Proposition 4.1]{bandini_backward_2018}.} which allows us to approximate general progressive controls with suitable point processes having both full support and a bounded intensity.
Note that this requires viewing the admissible control set as the set of $\b F^B$-predictable processes taking values in $\c A_T$, as described in \cref{section problem l2 formulation}, since this allows us to view the mean-field dynamics as a standard (albeit infinite-dimensional) control problem, for which we are now able to use the above approximation result. Let us fix $t\in [0,T]$, $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and some $\alpha_t\in\c A_t$. Our goal is now to show that for any control $\alpha\in\c A$, it holds that $J(t,\xi,\alpha) \leq V^{\c R}(t,\xi,\alpha_t)$. To this end, let us fix some arbitrary $\alpha\in\c A$; then by \cref{proposition bar alpha given alpha well defined} there exists $\hat\alpha\in\hat{\c A}$ which $\b F^B$-identifies to $\alpha$, and furthermore by \cref{lemma alpha bar alpha uniqueness} this $\hat\alpha\in\hat{\c A}$ is unique up to modification. Our first step is to construct approximating $\c A_T$-valued Poisson point processes for the $\c A_T$-valued control process $\hat\alpha\in\hat{\c A}$. We recall that these processes are supposed to take the role of $\hat I^{t,\alpha_t}$ in the randomised setting which we are going to construct, and thus we want them to start in $\alpha_t$ at time $t$. For this reason, we will first establish an approximation on $[t,T]$, and afterwards extend the underlying Poisson random measure to $(0,T]$. We note that $(\hat\alpha_s)_{s\in [t,T]}$ is $\b F^B$-progressive (in fact, by the construction in \cref{proposition bar alpha given alpha well defined}, it is even $\b F^B$-predictable), and that $\hat\alpha_s\in\c A_s\subseteq\c A_T$ for all $s\in [t,T]$.
Thus, by \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version}, there exists a suitable extension $(\Omega\times\Omega',(\c F\otimes\c F')^{\P\otimes\P'},\P\otimes\P')$ of our probability space $(\Omega,\c F,\P)$, such that for any $\delta > 0$ there exists a marked point process $(\tau^\delta_k,\alpha^\delta_k)_{k\geq 1}$ with the corresponding Poisson random measure $\mu^\delta = \sum_{k\geq 1} \delta_{(\tau^\delta_k,\alpha^\delta_k)}$ on $(t,T]\times\c A_T$ such that \begin{enumerate}[label=(\roman*)] \item $\c F^{\mu^\delta,\P\otimes\P'}_T\subseteq \c F^{B,\P\otimes\P'}_T\lor\c F'$,\footnote{With slight abuse of notation, we identify $\c F'$ with its canonical extension to $\Omega\times\Omega'$.} so in particular $\mu^\delta$ is independent of $\c G$ and $W$ under $\P\otimes\P'$,\footnote{The independence follows from the explicit construction in \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version} which uses only $\b F^{B,\P\otimes\P'}$ together with a fixed family of random variables defined on $(\Omega',\c F',\P')$, which is independent of $\delta$.} \item the process $\hat I^{t,\alpha_t,\delta}$ defined by \[ \hat I^{t,\alpha_t,\delta}_s \coloneqq \alpha_t \1_{[t,\tau^\delta_1)}(s) + \sum_{k\geq 1} \alpha^\delta_k \1_{[\tau^\delta_k,\tau^\delta_{k+1})}(s) = \alpha_t + \int_{(t,s]}\int_{\c A_T} (\alpha - \hat I^{t,\alpha_t,\delta}_{r-}) \mu^\delta(dr,d\alpha),\qquad s\in [t,T], \] satisfies $d_{\hat{\c A},[t,T]}^{\P\otimes\P'}(\hat\alpha,\hat I^{t,\alpha_t,\delta}) < \delta$, \item the $\b F^{B,\mu^\delta,\P\otimes\P'}$-compensator of $\mu^\delta$ is absolutely continuous with respect to $\lambda_s(d\alpha)ds$ and can be written as \[ \nu^\delta_s(\alpha)\lambda_s(d\alpha)ds, \] where $\nu^\delta$ is $Pred(\b F^{B,\mu^\delta})\otimes\c B(\c A_T)$-measurable and satisfies \[ 0 < \inf_{[t,T]\times\Omega\times\Omega'\times\c A_T} \nu^\delta \leq \sup_{[t,T]\times\Omega\times\Omega'\times\c A_T} \nu^\delta < \infty, \] \item
$\alpha^\delta_k\in\c A_{\tau^\delta_k}$, for all $k\geq 1$. \end{enumerate} Note that this statement is slightly different from \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version}. At first glance, property (i) may seem new, but it directly follows from the explicit construction in \cite[Proposition A.1]{bandini_randomisation_2016_extended_arxiv_version}, which constructs $\mu^\delta$ from $\b F^{B,\P\otimes\P'}$ and a family of random variables defined on $(\Omega',\c F',\P')$, which is independent of $\delta$. The more crucial difference, however, is that while the action set $A$ in \cite{bandini_randomisation_2016_extended_arxiv_version} is constant over time, our action sets $\c A_s$ vary with $s\in [t,T]$, which now leads to the adapted form of the intensity process in (iii) and the additional property (iv), which are necessary for consistency with the control randomisation introduced in \cref{section randomised problem}. Fortunately, we can extend its proof to cover our use case with minimal changes. Our admissible controls $\hat\alpha\in\hat{\c A}$ additionally satisfy $\hat\alpha_s\in\c A_s$ for all $s\in [0,T]$, which allows us to choose the step function approximation $\bar\alpha = \sum_{n=0}^\infty \alpha_n \1_{[t_n,t_{n+1})}$ on \cite[pp.\@ 41-42]{bandini_randomisation_2016_extended_arxiv_version} such that each $\alpha_n\in\c A_{t_n}\subseteq\c A_T$. We then replace the kernel $q^m$ on \cite[p.\@ 42]{bandini_randomisation_2016_extended_arxiv_version} with a family of kernels $(q^m_{s})_{s\in [0,T]}$, where each $q^m_{s}$ is constructed from $\lambda_{s}$ instead of $\lambda$ and thus supported on $\c A_s$ instead of $\c A_T$. The definition of $\beta^m_n$ on \cite[p.\@ 42]{bandini_randomisation_2016_extended_arxiv_version} is also modified to include the time, so $\beta^m_n\coloneqq q^m_{R^m_n}(\alpha_n,U^m_n)$.
The estimates on \cite[pp.\@ 42-43]{bandini_randomisation_2016_extended_arxiv_version} remain valid, and we obtain a marked point process $\kappa^m$ with the compensator $\tilde\kappa^m(dt,d\alpha) = \sum_{n\geq 1} \1_{(S^m_n,R^m_n]}(t) q^m_t(\alpha_n,d\alpha)\lambda_{nm} dt$, replacing $q^m$ by $q^m_t$ in \cite[Lemma A.1]{bandini_randomisation_2016_extended_arxiv_version} and its proof. Finally, by replacing $\lambda(d\alpha)dt$ by $\lambda_t(d\alpha)dt$ in the intensity of the additional ``small'' intensity Poisson process $\pi^k$ on \cite[p.\@ 45]{bandini_randomisation_2016_extended_arxiv_version}, we can obtain the desired point measure $\mu^\delta = \kappa^m + \pi^k$ without further modifications. To extend the Poisson random measure $\mu^\delta$ to $[0,T]\times\c A_T$, we will extend the probability space again by a probability space $(\Omega'',\c F'',\P'')$ carrying a Poisson random measure $\mu$ on $(0,t]\times\c A_T$ with $\b F^{\mu^\delta}$-intensity $\lambda_s(d\alpha)ds$. Hence we obtain $(\hat\Omega,\hat{\c F},\hat\Q) \coloneqq (\Omega\times\Omega'\times\Omega'',(\c F\otimes\c F'\otimes\c F'')^{\P\otimes\P'\otimes\P''},\P\otimes\P'\otimes\P'')$, such that (keeping, with slight abuse of notation, the name $\mu^\delta$) the extended measure $\mu^\delta \coloneqq \mu(\cdot \cap ((0,t]\times\c A_T)) + \mu^\delta(\cdot\cap((t,T]\times\c A_T))$ has the $\b F^{B,\mu^\delta}$-intensity $\nu^\delta_s(\alpha)\lambda_s(d\alpha)ds$, where we extend $\nu^\delta$ to $[0,t]\times\c A_T$ with $\nu^\delta|_{[0,t]\times\c A_T}\equiv 1$. As before, with slight abuse of notation, we identify each $\sigma$-algebra $\c F$ with its canonical extension to the product space. Note that property (i) also extends to $\mu^\delta$ on $[0,T]$, that is, $\c F^{\mu^\delta,\hat\Q}_T \subseteq \c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''$, and in particular $\mu^\delta$ on $[0,T]$ is again independent of $\c G$ and $W$ under $\hat\Q$.
Having defined approximations $(\hat I^{t,\alpha_t,\delta})_\delta$ to $\hat\alpha$, we will now also show that the corresponding state processes converge. We first note that by \cref{corollary I given bar I is well defined}, for every $\delta > 0$ there exists an $\b F^{W,\mu^\delta}\lor\c G$-progressive, càdlàg process $I^{t,\alpha_t,\delta}$ that $\b F^{\hat I^{t,\alpha_t,\delta}}$-identifies to $\hat I^{t,\alpha_t,\delta}$. Moreover, by \cref{lemma alpha bar alpha uniqueness}, we have \begin{align} d_{\c A,[t,T]}^{\hat\Q}(\alpha,I^{t,\alpha_t,\delta}) = d_{\hat{\c A},[t,T]}^{\hat\Q}(\hat\alpha,\hat I^{t,\alpha_t,\delta}) = d_{\hat{\c A},[t,T]}^{\P\otimes\P'}(\hat\alpha,\hat I^{t,\alpha_t,\delta}) < \delta,\quad\text{for all }\delta>0. \label{eq proof V leq V^R alpha I delta close} \end{align} Correspondingly, we now define $X^{t,\xi,\alpha_t,\delta}$ as the unique solution to the SDE \begin{align} dX^{t,\xi,\alpha_t,\delta}_s &= b(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) ds + \sigma(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) dW_s\\ &\qquad+ \sigma^0(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) dB_s,\qquad X^{t,\xi,\alpha_t,\delta}_t = \xi. \label{eq randomised state dynamics X t xi alpha_t delta} \end{align} We next show that $X^{t,\xi,\alpha_t,\delta}$ converges, as $\delta\to 0$, to the state process $X^{t,\xi,\alpha}$ for our control $\alpha\in\c A$ in the original control problem.
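The SDE above is of conditional McKean-Vlasov type; purely as an illustration, the following Python sketch implements an Euler-Maruyama particle approximation in which the conditional law given the common noise $B$ is replaced by the empirical measure of particles driven by independent idiosyncratic noises but one shared common increment. The coefficients are toy choices standing in for $b,\sigma,\sigma^0$, not the paper's coefficients:

```python
import numpy as np

def particle_euler(N, n_steps, T, x0, control, rng):
    """Euler-Maruyama particle sketch for a conditional McKean-Vlasov SDE
    dX = b(L(X | F^B), X, I) ds + sigma dW + sigma0 dB.

    The conditional law given the common noise B is approximated by the
    empirical measure of N particles driven by independent W^i but the
    SAME increment of B.  The coefficients below (mean-reverting toy
    drift, constant volatilities) are illustrative choices."""
    dt = T / n_steps
    X = np.full(N, x0, dtype=float)
    for k in range(n_steps):
        s = k * dt
        dW = rng.normal(0.0, np.sqrt(dt), size=N)  # idiosyncratic noises
        dB = rng.normal(0.0, np.sqrt(dt))          # one common increment
        m = X.mean()                               # empirical conditional mean
        a = control(s)                             # current control value
        X = X + (m - X + a) * dt + 0.3 * dW + 0.2 * dB
    return X

rng = np.random.default_rng(1)
X_T = particle_euler(N=500, n_steps=100, T=1.0, x0=0.0,
                     control=lambda s: 1.0, rng=rng)
```

With the toy drift $b(m,x,a) = m - x + a$, the empirical mean behaves like $d m = a\,ds + \sigma^0 dB$, so the particle cloud drifts with the control while fluctuating with the common noise.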
To this end, we note that we can rewrite the conditional distributions occurring in \eqref{eq mfc state dynamics,eq randomised state dynamics X t xi alpha_t delta} as follows: Since $X^{t,\xi,\alpha}$ is $\c G\lor\b F^{W,B,\hat\Q}$-progressive and by construction $\c G,W,B$ and $\c F'\otimes\c F''$ are independent under $\hat\Q$, we see that $\hat\Q^{\c F^{B,\hat\Q}_s}_{X^{t,\xi,\alpha}_s} = \hat\Q^{\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''}_{X^{t,\xi,\alpha}_s}$, $\hat\Q$-a.s., for all $s\in [t,T]$. For the $\c G\lor\b F^{W,B,\mu^\delta,\hat\Q}$-progressive process $X^{t,\xi,\alpha_t,\delta}$, we note that by property (i) of $\mu^\delta$, we have $\c F^{\mu^\delta,\hat\Q}_T\subseteq\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''$, which together with the independence of $\c G,W$ from $B,\c F'\otimes\c F''$ under $\hat\Q$ shows that $\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s} = \hat\Q^{\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''}_{X^{t,\xi,\alpha_t,\delta}_s}$, $\hat\Q$-a.s., for all $s\in [t,T]$. Therefore we can estimate, for all $s\in [t,T]$, $\hat\Q$-a.s., \[ \c W_2(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s}, \hat\Q^{\c F^{B,\hat\Q}_s}_{X^{t,\xi,\alpha}_s})^2 = \c W_2(\hat\Q^{\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''}_{X^{t,\xi,\alpha_t,\delta}_s},\hat\Q^{\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''}_{X^{t,\xi,\alpha}_s})^2 \leq \E^{\hat\Q}\big[|X^{t,\xi,\alpha_t,\delta}_s - X^{t,\xi,\alpha}_s|^2 \,\big|\,\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F''\big], \] which implies that \[ \sup_{s\in[t,T]} \c W_2(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},\hat\Q^{\c F^{B,\hat\Q}_s}_{X^{t,\xi,\alpha}_s})^2 \leq \E^{\hat\Q}\Big[\sup_{s\in[t,T]} |X^{t,\xi,\alpha_t,\delta}_s - X^{t,\xi,\alpha}_s|^2 \,\Big|\,\c F^{B,\hat\Q}_T\lor\c F'\otimes\c F'' \Big]. 
\] Now, from \eqref{eq proof V leq V^R alpha I delta close}, and using \cref{assumptions sde coefficients}, standard arguments involving the BDG-inequality and Gronwall's Lemma lead to the following estimate \[ \E^{\hat\Q}\Big[\sup_{s\in[t,T]} \Big(|X^{t,\xi,\alpha_t,\delta}_s - X^{t,\xi,\alpha}_s|^2 + \c W_2(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},\hat\Q^{\c F^{B,\hat\Q}_s}_{X^{t,\xi,\alpha}_s})^2 \Big) \Big] \to 0, \] and thus by \cref{assumptions reward functional} that \begin{align} &\E^{\hat\Q}\Big[g(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_T}_{X^{t,\xi,\alpha_t,\delta}_T},X^{t,\xi,\alpha_t,\delta}_T) + \int_t^T f(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) ds \Big]\\ &\to \E^{\hat\Q}\Big[g(\hat\Q^{\c F^{B,\hat\Q}_T}_{X^{t,\xi,\alpha}_T},X^{t,\xi,\alpha}_T) + \int_t^T f(s,\hat\Q^{\c F^{B,\hat\Q}_s}_{X^{t,\xi,\alpha}_s},X^{t,\xi,\alpha}_s,\alpha_s) ds \Big] = J(t,\xi,\alpha). \end{align} Therefore, the claim follows if we prove that \[ \E^{\hat\Q}\Big[g(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_T}_{X^{t,\xi,\alpha_t,\delta}_T},X^{t,\xi,\alpha_t,\delta}_T) + \int_t^T f(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) ds \Big] \leq V^{\c R}(t,\xi,\alpha_t),\qquad\text{for all }\delta > 0. \] For this, we will express the left-hand side as the pay-off of a randomised control, using that by \cref{proposition randomised setting is independent of the extension} we can freely choose our randomised probability setting. 
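For the reader's convenience, we sketch the Gronwall step schematically; here $C$ denotes a constant depending only on $T$ and the Lipschitz constants from \cref{assumptions sde coefficients}, and may change from line to line. Setting $\varphi(r) \coloneqq \E^{\hat\Q}\big[\sup_{s\in[t,r]} |X^{t,\xi,\alpha_t,\delta}_s - X^{t,\xi,\alpha}_s|^2\big]$, the Lipschitz continuity of $b,\sigma,\sigma^0$, the BDG-inequality and the conditional Wasserstein bound above yield
\[
\varphi(r) \leq C \int_t^r \varphi(u) du + C \eps(\delta),\qquad r\in [t,T],
\]
where $\eps(\delta)\to 0$ as $\delta\to 0$ collects the contribution of the control distance in \eqref{eq proof V leq V^R alpha I delta close}, so that Gronwall's Lemma gives $\varphi(T) \leq C \eps(\delta) e^{C(T-t)} \to 0$.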
We will define a new probability measure $\hat\P^{(\delta)}\sim\hat\Q$ on $(\hat\Omega,\hat{\c F})$ via \[ \frac{d\hat\P^{(\delta)}}{d\hat\Q}\Big|_{\c F^{B,\mu^\delta,\hat\Q}_s} \coloneqq L^{\frac 1 {\nu^\delta}}_s = \exp\Big(\int_{(0,s]} \int_{\c A_r} \log \frac 1 {\nu^\delta_r(\alpha)} \ \mu^\delta(dr,d\alpha) - \int_0^s \int_{\c A_r} \Big(\frac 1 {\nu^\delta_r(\alpha)} - 1\Big) \lambda_r(d\alpha)dr \Big),\quad s\in [0,T]. \] By \cref{lemma d P nu d P is square integrable} and Girsanov's Theorem, see \cite[Proposition 14.4.I]{daley_introduction_2008}, we note that $\mu^\delta$ has under $\hat\P^{(\delta)}$ the intensity $\lambda_s(d\alpha)ds$, since by property (iii) the process $\nu^\delta$ and thus also $\frac 1 {\nu^\delta}$ are strictly bounded away from $0$ and $\infty$. Further, the same arguments as on \cite[p.\@ 2155]{fuhrman_randomized_2015} show that $\mu^\delta$ is independent of $W,B$ and that $B$ resp.\@ $W$ are still $\b F^{B,\mu^\delta,\hat\P^{(\delta)}}$- resp.\@ $\b F^{W,\mu^\delta,\hat\P^{(\delta)}}$-Brownian motions under $\hat\P^{(\delta)}$. Finally, since $\nu^\delta$ is $Pred(\b F^{B,\mu^\delta})\otimes\c B(\c A_T)$-measurable and thus independent of $\c G$ under $\hat\Q$, this shows that $\mu^\delta$ stays independent of $\c G$ under $\hat\P^{(\delta)}$. Next, we observe that the Radon-Nikodym derivative given by $\frac{d\hat\P^{(\delta)}}{d\hat\Q}|_{\c F^{B,\mu^\delta,\hat\Q}_\cdot} = L^{\frac 1 {\nu^\delta}}$ is by construction $\b F^{B,\mu^\delta}$-progressive and that $\E^{\P'\otimes\P''}[\frac{d\hat\P^{(\delta)}}{d\hat\Q}|_{\c F^{B,\mu^\delta,\hat\Q}_T}(\omega,\cdot)] = \E^{\P'\otimes\P''}[L^{\frac 1 {\nu^\delta}}_T(\omega,\cdot)] \equiv 1$, for $\P$-a.e.\@ $\omega\in\Omega$, which shows that $(\hat\Omega,\hat{\c F},\hat\P^{(\delta)})$ is an extension of $(\Omega,\c F,\P)$.
Finally, as outlined in \cref{remark conditional distribution unchanged under measure change}, we see that $X^{t,\xi,\alpha_t,\delta}$ satisfies the (uncontrolled) randomised state equation \eqref{eq sde randomised problem} under $\hat\P^{(\delta)}$, and thus $(\hat\Omega,\hat{\c F},\hat\P^{(\delta)})$ forms a randomised setting as defined in \cref{section randomised problem}. From the properties (iii) and (iv) of $\mu^\delta$, we note that $\nu^\delta\in\c V$ is an admissible randomised control for this randomised setting. Further, by construction $\hat\Q = (\hat\P^{(\delta)})^{\nu^\delta}$ is the probability measure belonging to the randomised control $\nu^\delta\in\c V$, and thus we conclude by \cref{proposition randomised setting is independent of the extension} that \[ \E^{\hat\Q}\Big[g(\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_T}_{X^{t,\xi,\alpha_t,\delta}_T},X^{t,\xi,\alpha_t,\delta}_T) + \int_t^T f(s,\hat\Q^{\c F^{B,\mu^\delta,\hat\Q}_s}_{X^{t,\xi,\alpha_t,\delta}_s},X^{t,\xi,\alpha_t,\delta}_s,I^{t,\alpha_t,\delta}_s) ds \Big] = J^{\c R}_{(\hat\Omega,\hat{\c F},\hat\P^{(\delta)})}(t,\xi,\alpha_t,\nu^\delta) \leq V^{\c R}(t,\xi,\alpha_t). \] Since $\alpha\in\c A$ was arbitrary, we conclude that $V(t,\xi) = \sup_{\alpha\in\c A} J(t,\xi,\alpha) \leq V^{\c R}(t,\xi,\alpha_t)$ for all $\alpha_t\in\c A_t$. \subsection{Proof of $V^{\c R}\leq V$} Based on the approach in \cite{bandini_backward_2018,bayraktar_randomized_2018}, we will consider a canonical extension $(\hat\Omega,\hat{\c F},\hat\P)$ of $(\Omega,\c F,\P)$, using that by \cref{proposition randomised setting is independent of the extension} we are free to choose the extension we work on. Together with $(\hat\Omega,\hat{\c F},\hat\P)$, we will define in \cref{subsection extended auxiliary problem} an auxiliary problem, which essentially extends the original control problem to this extended space.
Denoting its value function by $V^{\c E}$, our goal for this section will be proving that $V^{\c R}\leq V^{\c E}\leq V$. \subsubsection{The canonical extended space and an auxiliary problem}\label{subsection extended auxiliary problem} We will start by constructing the space $(\hat\Omega,\hat{\c F},\hat\P)$, which we will need for the later proofs in this section. We recall from \cref{section problem l2 formulation} that $(\lambda_s)_{s\in [0,T]}$ is a family of measures on $\c A_T$ such that each $\lambda_s$ is fully supported on $\c A_s$, and $\lambda_s\ll\lambda_r$ for $0\leq s\leq r\leq T$. Now, following essentially the construction in \cite{bandini_backward_2018}, we can find a surjective, measurable map $\pi_{\c A_T}:\R\to\c A_T$ and a family of measures $(\lambda'_s)_{s\in [0,T]}$ on $\R$ such that $\lambda_s = \lambda'_s\circ\pi_{\c A_T}^{-1}$ and $\lambda'_s\ll\lambda'_r$ for $0\leq s\leq r\leq T$. This allows us to consider the canonical space $(\Omega',\c F',\P')$ of a nonexplosive Poisson point process on $(0,\infty)\times\R$ with intensity $\lambda'_s(dr)ds$, which is constructed as follows: Let $\Omega'$ be the set of all sequences $\omega' = (\tau_k,r_k)_{k\geq 1}\subseteq (0,\infty)\times\R$ with $\tau_k < \tau_{k+1} < \infty$ and $\tau_k\to\infty$. Next we denote the canonical process by $(T_k,R_k)_{k\geq 1}$ and let $\mu' \coloneqq \sum_{k\geq 1} \delta_{(T_k,R_k)}$ be the corresponding point measure. Letting $\P'$ be the (unique) probability measure under which $\mu'$ is a Poisson random measure with intensity $\lambda'_s(dr)ds$ (recall that $\lambda_s$ and thus $\lambda'_s$ are bounded), we define $\c F'\coloneqq \sigma(T_k,R_k|k\geq 1)\lor\c N^{\P'}$, where $\c N^{\P'}$ denotes all $\P'$-null sets. We note that by construction $\c F' = \c F^{\mu',\P'}_\infty$. Finally, we define $A_k\coloneqq \pi_{\c A_T}(R_k)\in\c A_T$ for each $k\geq 1$ and note that by construction $A_k\in\c A_{T_k}$, $\P'$-a.s.
Therefore we define the random point measure $\mu \coloneqq \sum_{k\geq 1} \delta_{(T_k,A_k)}$, which by construction has intensity $\lambda_s(d\alpha)ds$ under $\P'$. In the following we will denote by $(\hat\Omega,\hat{\c F},\hat\P) \coloneqq (\Omega\times\Omega',(\c F\otimes\c F')^{\P\otimes\P'},\P\otimes\P')$ the extension of $\Omega$ with $\Omega'$. As usual, we denote the canonical extension of $\c G,\xi,W,B,\c F'$ to $(\hat\Omega,\hat{\c F},\hat\P)$ with the same symbol. Next we will define an auxiliary problem on this extended space. For this, we define $\c A^{\c E}$ as the set of all $\b F^{W,B}\lor\c G\lor\c F'$-predictable processes $\alpha$. Then for each $t\in [0,T]$, $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and $\alpha\in\c A^{\c E}$, we denote by $X^{t,\xi,\alpha}$ the unique $\b F^{W,B,\hat\P}\lor\c G\lor\c F'$-progressive process solving the state dynamics \begin{align} dX_s^{t,\xi,\alpha} &= b(s, \hat\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\hat\P}_s\lor\c F'},X^{t,\xi,\alpha}_s,\alpha_s)ds + \sigma(s,\hat\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\hat\P}_s\lor\c F'},X^{t,\xi,\alpha}_s,\alpha_s) dW_s\\ &\quad + \sigma^0(s,\hat\P_{X^{t,\xi,\alpha}_s}^{\c F^{B,\hat\P}_s\lor\c F'},X^{t,\xi,\alpha}_s,\alpha_s) dB_s,\qquad X^{t,\xi,\alpha}_t = \xi. \label{eq extended problem state dynamics} \end{align} As before, here $(\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s})_{s\in [t,T]}$ shall denote the $\b F^{B,\hat\P}\lor\c F'$-predictable and $\hat\P$-a.s.\@ continuous version of $(\c L(X^{t,\xi,\alpha}_s|\c F^{B,\hat\P}_s\lor\c F'))_{s\in [t,T]}$, that is $\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s} = \c L(X^{t,\xi,\alpha}_s|\c F^{B,\hat\P}_s\lor\c F')$, $\hat\P$-a.s., for all $s\in [t,T]$. We recall that the existence of such a version is ensured by \cite[Lemma A.1]{djete_mckeanvlasov_2022-1}, since $(t,\omega,\omega')\mapsto (B_t(\omega),\omega')$ is a continuous Hunt process and thus by \cite[Chapter 3, Section 2.3, pp.
30–32]{chung_introduction_1990}, $O(\b F^{B,\hat\P}\lor\c F') = Pred(\b F^{B,\hat\P}\lor\c F')$. Again we note that by standard results, the existence and uniqueness of $X^{t,\xi,\alpha}$ follow from \cref{assumptions sde coefficients}, together with the basic estimate \begin{align} \sup_{\alpha\in\c A^{\c E}} \E^{\hat\P}\Big[\sup_{s\in [t,T]} |X^{t,\xi,\alpha}_s|^2 \Big] < \infty. \end{align} Finally, by \cref{assumptions reward functional} the following reward functional \[ J^{\c E}(t,\xi,\alpha) \coloneqq \E^{\hat\P}\Big[ g(\hat\P_{X^{t,\xi,\alpha}_T}^{{\c F}^{B,\hat\P}_T\lor\c F'},X^{t,\xi,\alpha}_T) + \int_t^T f(s,\hat\P_{X^{t,\xi,\alpha}_s}^{{\c F}^{B,\hat\P}_s\lor\c F'},X^{t,\xi,\alpha}_s,\alpha_s) ds \Big] \] is again well-defined, and we can introduce the corresponding value function as \[ V^{\c E}(t,\xi) \coloneqq \sup_{\alpha\in\c A^{\c E}} J^{\c E}(t,\xi,\alpha) < \infty. \] \subsubsection{Proof of $V^{\c E}\leq V$}\label{subsection V^E leq V} Let $(\hat\Omega,\hat{\c F},\hat\P)$ be the canonical extended space constructed in \cref{subsection extended auxiliary problem}, and let us fix $t\in [0,T]$ and $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$. Further, let $\alpha\in\c A^{\c E}$ be an admissible control for the extended problem and $X^{t,\xi,\alpha}$ the corresponding state process satisfying \eqref{eq extended problem state dynamics}. Since by definition $\alpha:[0,T]\times\Omega\times\Omega'\to A$ is $(\b F^{W,B}\lor\c G)\otimes\c F'$-predictable, we see that for all $\omega'\in\Omega'$, the process $\alpha^{\omega'}\coloneqq \alpha(\cdot,\omega')$ is $\b F^{W,B}\lor\c G$-predictable and in particular $\alpha^{\omega'}\in\c A$.
This motivates us to consider for each $\omega'\in\Omega'$ the corresponding state process $X^{t,\xi,\alpha^{\omega'}}$ to $\alpha^{\omega'}$ satisfying the state dynamics \eqref{eq mfc state dynamics}, and our next goal is to show that \[ X^{t,\xi,\alpha}(\cdot,\omega') = X^{t,\xi,\alpha^{\omega'}},\ \P\text{-a.s.},\quad \text{for }\P'\text{-a.a. }\omega'\in\Omega'. \] To this end, we note that since $(\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s})_{s\in [t,T]}$ is $\b F^{B,\hat\P}\lor\c F'$-predictable and for our canonical extended space $\b F^{B,\hat\P}\lor\c F' = (\b F^B\otimes\c F')^{\hat\P}$, it follows that for $\P'$-a.a.\@ $\omega'\in\Omega'$, the process $(\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s}(\cdot,\omega'))_{s\in [t,T]}$ is $\b F^{B,\P}$-predictable. At the same time, we see that since $\hat\P = \P\otimes\P'$, for all $s\in [t,T]$ and for $\P'$-a.a.\@ $\omega'\in\Omega'$, \[ \hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s}(\cdot,\omega') = \c L^{\hat\P}(X^{t,\xi,\alpha}_s | (\c F^B_s\otimes\c F')^{\hat\P})(\cdot,\omega') = \c L^{\P}(X^{t,\xi,\alpha}_s(\cdot,\omega') | \c F^{B,\P}_s),\quad\P\text{-a.s.}, \] and thus $(\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s}(\cdot,\omega'))_{s\in [t,T]}$ is a suitable $\P$-a.s.\@ continuous and $\b F^{B,\P}$-predictable version of the conditional law process $(\c L^{\P}(X^{t,\xi,\alpha}_s(\cdot,\omega') | \c F^{B,\P}_s))_{s\in [t,T]}$, for $\P'$-a.a.\@ $\omega'\in\Omega'$. Finally, this shows that for $\P'$-a.a.\@ $\omega'\in\Omega'$, the process $X^{t,\xi,\alpha}(\cdot,\omega')$ also satisfies the SDE \eqref{eq mfc state dynamics} with the control $\alpha^{\omega'} = \alpha(\cdot,\omega')$, and thus by the uniqueness of strong solutions to \eqref{eq mfc state dynamics}, we deduce that $X^{t,\xi,\alpha}(\cdot,\omega') = X^{t,\xi,\alpha^{\omega'}}$, $\P$-a.s.
This now allows us to rewrite the extended reward functional $J^{\c E}$ in terms of $J$ as follows, \begin{align} J^{\c E}(t,\xi,\alpha) &= \int_{\Omega'} \E\Big[g(\hat\P^{\c F^{B,\hat\P}_T\lor\c F'}_{X^{t,\xi,\alpha}_T}(\cdot,\omega'),X^{t,\xi,\alpha}_T(\cdot,\omega'))\\ &\qquad + \int_t^T f(s, \hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha}_s}(\cdot,\omega'),X^{t,\xi,\alpha}_s(\cdot,\omega'),\alpha_s(\cdot,\omega')) ds\Big] \P'(d\omega')\\ &= \int_{\Omega'} J(t,\xi,\alpha^{\omega'}) \P'(d\omega') \leq V(t,\xi), \end{align} and since $\alpha\in\c A^{\c E}$ was chosen arbitrarily, this shows that $V^{\c E}(t,\xi) = \sup_{\alpha\in\c A^{\c E}} J^{\c E}(t,\xi,\alpha)\leq V(t,\xi)$. \subsubsection{Proof of $V^{\c R}\leq V^{\c E}$}\label{subsection V^R leq V^E} Recalling that by \cref{proposition randomised setting is independent of the extension} we are free to choose our randomised probability space, we will consider the canonical extended space $(\hat\Omega,\hat{\c F},\hat\P) = (\Omega \times \Omega',(\c F\otimes\c F')^{\P\otimes\P'},\P\otimes\P')$ from \cref{subsection extended auxiliary problem}. Let us further fix $t\in [0,T]$, $\xi\in L^2(\Omega,\c G\lor\c F^W_t,\P;\R^d)$ and $\alpha_t\in\c A_t$. We now need to show that for any $\nu\in\c V$, we have $J^{\c R}(t,\xi,\alpha_t,\nu) \leq V^{\c E}(t,\xi)$. To this end, let us fix $\nu\in\c V$ and let $\hat\P^\nu\sim\hat\P$ be the corresponding tilted probability measure under which $\mu$ has the intensity $\nu_s(\alpha)\lambda_s(d\alpha)ds$, defined in \eqref{eq randomised control girsanov formula}. Following the approach of \cite{bandini_backward_2018}, we will use \cite[Lemma 4.3]{bandini_backward_2018} to construct a piecewise constant process $\hat\alpha^\nu\in\c A^{\c E}$ that has the same distribution under $\hat\P$ as the Poisson point process $\hat I^{t,\alpha_t}$ has under $\hat\P^\nu$.
To set this up, we define for each $\omega\in\Omega$ the process $\nu^\omega:[0,T]\times\Omega'\times\c A_T\to (0,\infty)$ by $\nu^\omega_s(\omega',\alpha)\coloneqq \nu_s(\omega,\omega',\alpha)$. This $\nu^\omega$ is then $Pred(\b F^\mu)\otimes\c B(\c A_T)$-measurable and satisfies $0 < \inf_{[0,T]\times\Omega'\times\c A_T} \nu^\omega\leq \sup_{[0,T]\times\Omega'\times\c A_T} \nu^\omega < \infty$, see also \cite[Lemma 4.2]{bandini_backward_2018}. Correspondingly, we define probability measures ${\P'}^{\nu^\omega}\sim\P'$ on $(\Omega',{\c F}')$ via $\frac{d{\P'}^{\nu^\omega}}{d\P'}|_{\c F^\mu_T} = L^{\nu^\omega}_T$, where $L^{\nu^\omega}$ is given by \eqref{eq randomised control girsanov formula}. Using an analogous result to \cref{lemma d P nu d P is square integrable}, we note that ${\P'}^{\nu^\omega}$ is indeed well-defined, and moreover we observe that by construction $\hat\P^\nu(d\omega,d\omega') = {\P'}^{\nu^\omega}(d\omega')\P(d\omega)$. Next we apply \cite[Lemma 4.3]{bandini_backward_2018} and obtain a process \[ \hat\alpha'^\nu = \alpha_t \1_{[t,\tau'^\nu_1)} + \sum_{k\geq 1} \alpha'^\nu_k \1_{[\tau'^\nu_k,\tau'^\nu_{k+1})}, \] where $\tau'^\nu_k\geq t$ are $\b F^{B,\hat\P}\lor\c F'$-stopping times and $\alpha'^\nu_k$ are $\c F^{B,\hat\P}_{\tau'^\nu_k}\lor\c F'$-measurable, $k\geq 1$, such that \[ \c L^{\P'}(\hat\alpha'^\nu(\omega,\cdot)) = \c L^{{\P'}^{\nu^\omega}}(\hat I^{t,\alpha_t}),\qquad\text{for }\P\text{-a.a. }\omega\in\Omega. \] Now we note that $(t,\omega,\omega')\mapsto (B_t(\omega),\omega')$ is a continuous Hunt process and thus by \cite[Chapter 3, Section 2.3, pp. 30–32]{chung_introduction_1990} every $\b F^{B,\hat\P}\lor\c F'$-optional time is also $\b F^{B,\hat\P}\lor\c F'$-predictable.
This allows us to use the arguments in \cite[Lemma 6]{claisse_pseudo-markov_2016} to conclude that there exist $\b F^B\lor\c F'$-predictable stopping times $\tau^\nu_k$ and $\c F^B_{\tau^\nu_k}\lor\c F'$-measurable $\alpha^\nu_k$ such that $\tau^\nu_k = \tau'^\nu_k$ and $\alpha^\nu_k = \alpha'^\nu_k$, $\hat\P$-a.s. Therefore, the process \[ \hat\alpha^\nu \coloneqq \alpha_t \1_{[t,\tau^\nu_1)} + \sum_{k\geq 1} \alpha^\nu_k \1_{[\tau^\nu_k,\tau^\nu_{k+1})} \] is $\b F^B\lor\c F'$-predictable and indistinguishable from $\hat\alpha'^\nu$, and thus we have \[ \c L^{\P'}(\hat\alpha^\nu(\omega,\cdot)) = \c L^{\P'}(\hat\alpha'^\nu(\omega,\cdot)) = \c L^{{\P'}^{\nu^\omega}}(\hat I^{t,\alpha_t}),\qquad\text{for }\P\text{-a.a. }\omega\in\Omega. \] We note that this implies that \begin{align} \c L^{\hat\P}(\pi|_{\c G},\xi,W,B,\hat\alpha^\nu) = \c L^{\hat\P^\nu}(\pi|_{\c G},\xi,W,B,\hat I^{t,\alpha_t}), \label{eq proof V R <= V E law equality} \end{align} where $\pi|_{\c G}:(\hat\Omega,\c G)\to (\Omega,\c G)$ is the canonical $\c G$-measurable projection, since for all bounded, measurable functions $\phi:(\Omega\times\R^d\times\R^m\times\R^n,\c G\times\c B(\R^d\times\R^m\times\R^n))\to\R$ and $\psi:(\c A_T,\c B(\c A_T))\to\R$ we have \begin{align} &\E^{\hat\P^\nu}[\phi(\pi|_{\c G},\xi,W,B) \psi(\hat I^{t,\alpha_t})] \\ &= \int_{\Omega\times\Omega'} \phi(\omega,\xi(\omega),W(\omega),B(\omega)) \psi(\hat I^{t,\alpha_t}(\omega,\omega')) \hat\P^\nu(d\omega,d\omega')\\ &= \int_\Omega \phi(\omega,\xi(\omega),W(\omega),B(\omega)) \int_{\Omega'} \psi(\hat I^{t,\alpha_t}(\omega,\omega')) {\P'}^{\nu^\omega}(d\omega') \P(d\omega)\\ &= \int_\Omega \phi(\omega,\xi(\omega),W(\omega),B(\omega)) \int_{\Omega'} \psi(\hat\alpha^\nu(\omega,\omega')) \P'(d\omega') \P(d\omega)\\ &= \E^{\hat\P}[\phi(\pi|_{\c G},\xi,W,B) \psi(\hat\alpha^\nu)]. 
\end{align} As a next step, we recall that by \cref{corollary I given bar I is well defined} there is a unique (up to indistinguishability) càdlàg, $\b F^{W}\lor\c G\lor\c F^{\hat I^{t,\alpha_t}}_T$-predictable and $\b F^{W,\hat I^{t,\alpha_t}}\lor\c G$-progressive process $I^{t,\alpha_t}$ $\b F^{\hat I^{t,\alpha_t}}$-identifying to $\hat I^{t,\alpha_t}$. Similarly, \cref{lemma existence N bar N,lemma alpha bar alpha uniqueness} shows that there is a unique (up to modification) $\b F^{W}\lor\c G\lor\c F^{\hat\alpha^\nu}_T$-predictable and $\b F^{W,\hat\alpha^\nu}\lor\c G$-progressive process $\alpha^\nu$ which $\b F^{\hat\alpha^\nu}$-identifies to $\hat\alpha^\nu$, and moreover, together with \eqref{eq proof V R <= V E law equality}, we obtain by \cref{lemma joint law N bar N map} that \begin{align} \c L^{\hat\P}(\pi|_{\c G},\xi,W,B,\hat\alpha^\nu,\alpha^\nu) = \c L^{\hat\P^\nu}(\pi|_{\c G},\xi,W,B,\hat I^{t,\alpha_t},I^{t,\alpha_t}). \label{eq proof V R <= V E second law equality} \end{align} We note that $\alpha^\nu$ is $\b F^{W,B}\lor\c G\lor\c F'$-predictable and hence $\alpha^\nu\in\c A^{\c E}$. Since $I^{t,\alpha_t}$ is $\b F^{W,\hat I^{t,\alpha_t}}\lor\c G$-progressive, there exists a unique $\b F^{W,B^t,\hat I^{t,\alpha_t},\hat\P}\lor\c G$-progressive process $X^{t,\xi,\alpha_t}$ solving the randomised state dynamics \eqref{eq sde randomised problem}, and similarly, since $\alpha^\nu$ is $\b F^{W,\hat\alpha^\nu}\lor\c G$-progressive, there exists a unique $\b F^{W,B,\hat\P}\lor\c G\lor\c F'$-progressive process $X^{t,\xi,\alpha^\nu}$ solving \eqref{eq extended problem state dynamics}.
The corresponding conditional law process $(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s})_{s\in [t,T]}$ is therefore $\b F^{B,\hat I^{t,\alpha_t},\hat\P^\nu}$-progressive, since $\c G,W$ are $\hat\P^\nu$-independent of $B,\mu$, and similarly $(\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha^\nu}_s})_{s\in [t,T]}$ is $\b F^{B,\hat\alpha^\nu,\hat\P}$-progressive, since $\c G,W$ are $\hat\P$-independent of $B,\c F'$. This implies that \[ \hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s} = \hat\P^{\nu,\c F^{B,\hat I^{t,\alpha_t},\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s}, \qquad \hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha^\nu}_s} = \hat\P^{\c F^{B,\hat\alpha^\nu,\hat\P}_s}_{X^{t,\xi,\alpha^\nu}_s},\qquad\hat\P\text{-a.s.}, \] and therefore both $X^{t,\xi,\alpha_t}$ and $X^{t,\xi,\alpha^\nu}$ satisfy the following equation \begin{align} dX_s &= b(s,\hat\P^{\c F^{B,\hat\alpha,\hat\P}_s}_{X_s},X_s,\alpha_s) ds + \sigma(s,\hat\P^{\c F^{B,\hat\alpha,\hat\P}_s}_{X_s},X_s,\alpha_s) dW_s + \sigma^0(s,\hat\P^{\c F^{B,\hat\alpha,\hat\P}_s}_{X_s},X_s,\alpha_s) dB_s,\ X_t = \xi, \label{eq proof V R <= V E auxiliary SDE} \end{align} with control $\hat\alpha = \hat I^{t,\alpha_t}$ and $\hat\alpha = \hat\alpha^\nu$, respectively. Consequently, together with \eqref{eq proof V R <= V E second law equality}, the weak uniqueness of solutions to \eqref{eq proof V R <= V E auxiliary SDE} now shows that \[ \c L^{\hat\P}((\hat\P^{\c F^{B,\hat\P}_s\lor\c F'}_{X^{t,\xi,\alpha^\nu}_s})_{s\in [t,T]}, X^{t,\xi,\alpha^\nu},\alpha^\nu) = \c L^{\hat\P^\nu}((\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s})_{s\in [t,T]}, X^{t,\xi,\alpha_t},I^{t,\alpha_t}).
\] Finally, this implies that \begin{align} J^{\c R}(t,\xi,\alpha_t,\nu) &= \E^{\hat\P^\nu}\Big[ g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds \Big]\\ &= \E^{\hat\P}\Big[g(\hat\P_{X^{t,\xi,\alpha^\nu}_T}^{{\c F}^{B,\hat\P}_T\lor\c F'},X^{t,\xi,\alpha^\nu}_T) + \int_t^T f(s,\hat\P_{X^{t,\xi,\alpha^\nu}_s}^{{\c F}^{B,\hat\P}_s\lor\c F'},X^{t,\xi,\alpha^\nu}_s,\alpha^\nu_s) ds \Big] = J^{\c E}(t,\xi,\alpha^\nu), \end{align} and since $\nu\in\c V$ was arbitrary, this shows that $V^{\c R}(t,\xi,\alpha_t) = \sup_{\nu\in\c V} J^{\c R}(t,\xi,\alpha_t,\nu) \leq V^{\c E}(t,\xi)$. \section{Proof of \cref{lemma penalised bsde minimal solution existence}} \label{appendix proof lemma penalised bsde minimal solution existence} \begin{proof} \begin{enumerate}[wide,label=(\roman*)] \item We first note that we can view the penalised BSDE \eqref{eq penalised bsde value function} as a BSDE on $[0,T]$ with the driver \[ F(s,\hat\omega,u) \coloneqq \1_{[t,T]}(s) \Big(H_{s-}(\hat\omega) + n \int_{\c A_s} (u(\alpha))_+ \lambda_s(d\alpha)\Big), \] where $H_r \coloneqq \E^{\hat\P}[f(r,\hat\P^{\c F^{B,\mu,\hat\P}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) | \c F^{B,\mu,\hat\P}_r]$, and terminal value \[ G \coloneqq \E^{\hat\P}[g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) | \c F^{B,\mu,\hat\P}_T].
\] Then by \cite[Lemma 2.4]{tang_necessary_1994}, there exists a unique solution $(Y^{n,t,\xi,\alpha_t},Z^{n,t,\xi,\alpha_t},U^{n,t,\xi,\alpha_t}) \in \c S^2_{[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{W,[t,T]}(\b F^{B,\mu,\hat\P})\times L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})$ to \eqref{eq penalised bsde value function}. While our driver $F$ does not satisfy the boundedness condition \cite[(2.15)]{tang_necessary_1994}, this assumption is only needed to show that, for each $\bar U\in L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})$,\begin{align} \E^{\hat\P}\Big[\Big|G + \int_0^T F(s,\bar U_s) ds \Big|^2\Big] < \infty, \end{align} which in our setting holds, since by \cref{assumptions reward functional} together with \eqref{eq randomised state dynamics basic estimate}, we have \begin{align} &\E^{\hat\P}\Big[\Big|G + \int_0^T F(s,\bar U_s) ds \Big|^2\Big]\\ &\leq 2\E^{\hat\P}\Big[|\E^{\hat\P}[g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) | \c F^{B,\mu,\hat\P}_T]|^2\\ &\qquad + T \int_t^T \Big|\E^{\hat\P}[f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) | \c F^{B,\mu,\hat\P}_s] + n \int_{\c A_s} (\bar U_s(\alpha))_+ \lambda_s(d\alpha) \Big|^2 ds\Big]\\ &\leq 2\E^{\hat\P}\Big[|g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T)|^2\Big] + 4T \E^{\hat\P}\Big[\int_t^T |f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s)|^2 ds\Big]\\ &\qquad + 4 n^2 T \sup_{s\in [0,T]} \lambda_s(\c A_s)\, \E^{\hat\P}\Big[\int_t^T \int_{\c A_s} |\bar U_s(\alpha)|^2 \lambda_s(d\alpha) ds \Big] < \infty. \end{align} \item\label{proof lemma 6.1 step 2} Next we will prove the representation \eqref{eq Y n t xi snell envelope formula}.
From the BSDE \eqref{eq penalised bsde value function}, for any $\nu\in\c V$, by taking the conditional expectation under $\hat\P^\nu$ given $\c F^{B,\mu,\hat\P^\nu}_s$, we obtain for all $s\in [t,T]$ and $r\in [s,T]$, \begin{align} Y^{n,t,\xi,\alpha_t}_s &= \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\\ &\qquad + \int_s^r\int_{\c A_u} \big(n (U^{n,t,\xi,\alpha_t}_u(\alpha))_+ - \nu_u(\alpha) U^{n,t,\xi,\alpha_t}_u(\alpha)\big) \lambda_u(d\alpha) du\Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]. \label{eq Y n t xi expectation} \end{align} Noting that for $\nu\in\c V^n$ we have $0 < \nu\leq n$ and thus $n(U^{n,t,\xi,\alpha_t})_+ - \nu U^{n,t,\xi,\alpha_t} \geq 0$, we see that \eqref{eq Y n t xi expectation} shows that for all $\nu\in\c V^n$, $s\in [t,T]$ and $r\in [s,T]$, \[ Y^{n,t,\xi,\alpha_t}_s \geq \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big| \c F^{B,\mu,\hat\P^\nu}_s\Big], \] and hence \[ Y^{n,t,\xi,\alpha_t}_s \geq \esssup_{\nu\in\c V^n} \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]. \] For the other direction, we define for each $\varepsilon\in (0,1]$ the randomised control $\nu^{n,\varepsilon}\in\c V^n$ via \begin{align} \nu^{n,\varepsilon}_s(\alpha) \coloneqq \begin{cases*} n\1_{\{U^{n,t,\xi,\alpha_t}_s(\alpha) \geq 0\}} + \varepsilon\1_{\{U^{n,t,\xi,\alpha_t}_s(\alpha) < 0\}}&, on $[t,T]\times\c A_T$\\ 1&, on $[0,t)\times\c A_T$. 
\end{cases*} \label{eq Y n t xi definition of nu n varepsilon} \end{align} Correspondingly, we define the probability measures $\hat\P^{\nu^{n,\varepsilon}}\sim\hat\P$ via $\frac{d\hat\P^{\nu^{n,\varepsilon}}}{d\hat\P}\big|_{\c F^{B,\mu,\hat\P}_u} \coloneqq L^{\nu^{n,\varepsilon}}_u$ for all $u\in [0,T]$, see \eqref{eq randomised control girsanov formula}. Thus from \eqref{eq Y n t xi expectation} we obtain for all $s\in [t,T]$ and $r\in [s,T]$, \begin{align} Y^{n,t,\xi,\alpha_t}_s &\leq \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu^{n,\varepsilon},\c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big| \c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_s\Big]\\ &\qquad + \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[\varepsilon\int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_s \Big]\\ &\leq \esssup_{\nu\in\c V^n} \E^{\hat\P^\nu}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_s^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big| \c F^{B,\mu,\hat\P^\nu}_s\Big]\\ &\qquad + \varepsilon \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[\int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_s \Big]. \label{eq Y n t xi upper bound} \end{align} Therefore the desired result follows by letting $\varepsilon\downarrow 0$, if we can show that the last term vanishes in the limit.
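The first inequality in the preceding display rests on the elementary pointwise identity behind the definition \eqref{eq Y n t xi definition of nu n varepsilon}: for every $u\in\R$,
\[
n u_+ - \big(n\1_{\{u\geq 0\}} + \varepsilon\1_{\{u < 0\}}\big) u = \varepsilon u_-,
\]
since for $u\geq 0$ both terms cancel, while for $u<0$ only $-\varepsilon u = \varepsilon u_-$ remains. Applied with $u = U^{n,t,\xi,\alpha_t}_s(\alpha)$, this turns the penalisation term in \eqref{eq Y n t xi expectation} into $\varepsilon (U^{n,t,\xi,\alpha_t}_s(\alpha))_-$ under $\nu^{n,\varepsilon}$.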
To estimate the last term, we observe that since by construction $0 < \nu^{n,\varepsilon} \leq \nu^{n,1}$ for all $\varepsilon\in (0,1]$, we have that for all $s\in [t,T]$, \begin{align} 0\leq \frac{L^{\nu^{n,\varepsilon}}_T}{L^{\nu^{n,\varepsilon}}_s} \frac{L^{\nu^{n,1}}_s}{L^{\nu^{n,1}}_T} &= \exp\Big(\int_{(s,T]}\int_{\c A_r} \log \frac{\nu^{n,\varepsilon}_r(\alpha)}{\nu^{n,1}_r(\alpha)} \mu(dr,d\alpha) - \int_s^T \int_{\c A_r} (\nu^{n,\varepsilon}_r(\alpha) - \nu^{n,1}_r(\alpha)) \lambda_r(d\alpha)dr \Big)\\ &\leq \exp\Big(\int_0^T \int_{\c A_r} \lambda_r(d\alpha)dr\Big)^{\norm{\nu^{n,1}}_\infty} \eqqcolon C < \infty. \end{align} This allows us to bound the last term in \eqref{eq Y n t xi upper bound} as follows \begin{align} &\E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[\int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_s \Big]\\ &= \E^{\hat\P}\Big[\frac{L^{\nu^{n,\varepsilon}}_T}{L^{\nu^{n,\varepsilon}}_s} \int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P}_s \Big]\\ &\leq C \E^{\hat\P}\Big[\frac{L^{\nu^{n,1}}_T}{L^{\nu^{n,1}}_s} \int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P}_s \Big]\\ &= C \E^{\hat\P^{\nu^{n,1}}}\Big[ \int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P^{\nu^{n,1}}}_s \Big].
\end{align} This term is now $L^1(\hat\P^{\nu^{n,1}})$-integrable, and due to $\hat\P^{\nu^{n,1}}\sim\hat\P$ also $\hat\P$-a.s.\@ finite, since \begin{align} &\E^{\hat\P^{\nu^{n,1}}}\Big[ \E^{\hat\P^{\nu^{n,1}}}\Big[ \int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big|\c F^{B,\mu,\hat\P^{\nu^{n,1}}}_s \Big] \Big]\\ &\leq \bigg(\E^{\hat\P}[ |L^{\nu^{n,1}}_T|^2 ] \E^{\hat\P}\Big[\Big(\int_s^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du\Big)^2\Big] \bigg)^{\frac 1 2}\\ &\leq \bigg(\E^{\hat\P}[|L^{\nu^{n,1}}_T|^2 ] \E^{\hat\P}\Big[\int_s^r \int_{\c A_u} |U^{n,t,\xi,\alpha_t}_u(\alpha)|^2 \lambda_u(d\alpha) du \Big] \int_s^r \lambda_u(\c A_u) du \bigg)^{\frac 1 2} < \infty, \end{align} since $U^{n,t,\xi,\alpha_t}\in L^2_{\lambda,[t,T]}(\b F^{B,\mu,\hat\P})$ and $L^{\nu^{n,1}}_T \in L^2(\hat\Omega,\c F^{B,\mu}_T,\hat\P;\R)$ by \cref{lemma d P nu d P is square integrable}. \item Finally, let us turn to \eqref{eq Y n t xi snell envelope upper bound at s=t}. By taking the expectation $\E^{\hat\P^{\nu^{n,\varepsilon}}}$ in \eqref{eq Y n t xi upper bound} for $s=t$, we obtain that for all $r\in [t,T]$ and $\varepsilon\in (0,1]$, \begin{align} \E^{\hat\P^{\nu^{n,\varepsilon}}}[Y^{n,t,\xi,\alpha_t}_t] &\leq \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu^{n,\varepsilon},\c F^{B,\mu,\hat\P^{\nu^{n,\varepsilon}}}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big]\\ &\qquad + \varepsilon \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[\int_t^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big].
\end{align} Since by construction $\frac{d\hat\P^{\nu^{n,\varepsilon}}}{d\hat\P}|_{\c F^{B,\mu,\hat\P}_t} = L^{\nu^{n,\varepsilon}}_t \equiv 1$, we see that $\E^{\hat\P^{\nu^{n,\varepsilon}}}[Y^{n,t,\xi,\alpha_t}_t] = \E^{\hat\P}[Y^{n,t,\xi,\alpha_t}_t]$, and thus \begin{align} \E^{\hat\P}[Y^{n,t,\xi,\alpha_t}_t] = \E^{\hat\P^{\nu^{n,\varepsilon}}}[Y^{n,t,\xi,\alpha_t}_t] &\leq \sup_{\nu\in\c V^n} \E^{\hat\P^{\nu}}\Big[Y^{n,t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^{\nu}}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du \Big]\\ &\qquad + \varepsilon \E^{\hat\P^{\nu^{n,\varepsilon}}}\Big[\int_t^r \int_{\c A_u} (U^{n,t,\xi,\alpha_t}_u(\alpha))_- \lambda_u(d\alpha) du \Big]. \end{align} Finally, by letting $\varepsilon\downarrow 0$, we obtain \eqref{eq Y n t xi snell envelope upper bound at s=t}, recalling that we have already shown in \cref{proof lemma 6.1 step 2} that the last term will vanish. \end{enumerate} \end{proof} \section{Proof of \cref{lemma bsde minimal solution existence}} \label{appendix proof lemma bsde minimal solution existence} \begin{proof} \begin{enumerate}[wide,label=(\roman*)] \item\label{proof theorem 6.2 part i} We will start by proving the existence and uniqueness of such a minimal solution to \eqref{eq constrained bsde value function} using \cite[Theorem 2.1]{kharroubi_feynmankac_2015}, which tells us that this unique minimal solution to \eqref{eq constrained bsde value function} should be the increasing limit of the solutions to the penalised BSDEs \eqref{eq penalised bsde value function} obtained from \cref{lemma penalised bsde minimal solution existence}. 
For this we view \eqref{eq constrained bsde value function} and the penalised versions \eqref{eq penalised bsde value function} as BSDEs on $[0,T]$ with the driver $F(s,\hat\omega) \coloneqq \1_{[t,T]}(s) H_{s-}(\hat\omega)$, where $H_r \coloneqq \E^{\hat\P}[f(r,\hat\P^{\c F^{B,\mu,\hat\P}_r}_{X^{t,\xi,\alpha_t}_r},X^{t,\xi,\alpha_t}_r,I^{t,\alpha_t}_r) | \c F^{B,\mu,\hat\P}_r]$, and the terminal value $G \coloneqq \E^{\hat\P}[g(X^{t,\xi,\alpha_t}_T,\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T}) | \c F^{B,\mu,\hat\P}_T]$. Then we note that \cite[Assumption (H0)]{kharroubi_feynmankac_2015} holds: (ii) and (iii) are naturally satisfied as the generator $F$ does not depend on $Y,Z,U$, and (i) holds since $\xi\in L^2(\hat\Omega,\c G\lor\c F^W_t,\hat\P;\R^d)$ together with \cref{assumptions reward functional} and \eqref{eq randomised state dynamics basic estimate} show that \begin{align} \E^{\hat\P}\Big[|G|^2 + \int_0^T |F(s)|^2 ds\Big] &\leq \E^{\hat\P}\Big[|g(X^{t,\xi,\alpha_t}_T,\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T})|^2 + \int_t^T |f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s)|^2 ds\Big] < \infty. \end{align} We remark that \cite[Assumption (H1)]{kharroubi_feynmankac_2015} is only needed to show the following estimate \begin{align} \sup_n \E^{\hat\P}\Big[\sup_{s\in [t,T]} |Y^{n,t,\xi,\alpha_t}_s|^2\Big] < \infty. \label{eq theorem 6.2 H1 estimate} \end{align} Thus, instead of verifying \cite[Assumption (H1)]{kharroubi_feynmankac_2015}, we will directly show that \eqref{eq theorem 6.2 H1 estimate} holds.
For this, we note that following the same arguments as \cite[Lemma 4.9]{fuhrman_randomized_2015}, from the representation \eqref{eq Y n t xi snell envelope formula} in \cref{lemma penalised bsde minimal solution existence} together with \cref{assumptions sde coefficients,assumptions reward functional}, we can show that \[ \sup_{s\in [t,T]} |Y^{n,t,\xi,\alpha_t}_s|^2 \leq C\Big( 1 + \E^{\hat\P}\Big[ \sup_{s\in [t,T]} |X^{t,\xi,\alpha_t}_s|^2 \Big|\c F^{B,\mu,\hat\P}_T\Big]\Big),\quad\hat\P\text{-a.s.}, \] for a constant $C$ independent of $n$, which together with \eqref{eq randomised state dynamics basic estimate} implies \eqref{eq theorem 6.2 H1 estimate}. Thus by \cite[Theorem 2.1]{kharroubi_feynmankac_2015}, the BSDE \eqref{eq constrained bsde value function} has a unique minimal solution \[ (Y^{t,\xi,\alpha_t},Z^{t,\xi,\alpha_t},U^{t,\xi,\alpha_t},K^{t,\xi,\alpha_t})\in \c S^2_{[0,T]}(\b F^{B,\mu,\hat\P})\times L^2_{W,[0,T]}(\b F^{B,\mu,\hat\P})\times L^2_{\lambda,[0,T]}(\b F^{B,\mu,\hat\P})\times \c K^2_{[0,T]}(\b F^{B,\mu,\hat\P}), \] when viewed as BSDE on the whole time interval $[0,T]$, and which satisfies $Y^{n,t,\xi,\alpha_t}_s\uparrow Y^{t,\xi,\alpha_t}_s$, for all $s\in [t,T]$, $\hat\P$-a.s.\@ and in $L^2(ds\otimes\hat\P)$. Therefore, as a last step we need to show that also $K^{t,\xi,\alpha_t}_t = 0$ and thus $K^{t,\xi,\alpha_t}\in \c K^2_{[t,T]}(\b F^{B,\mu,\hat\P})$, which we will postpone to \cref{proof theorem 6.2 part iii} of the proof as it will follow from showing that $Y^{t,\xi,\alpha_t}_t$ is $\hat\P$-a.s.\@ constant. \item Regarding the representation \eqref{eq recursive snell envelope formula}, we first note that by \cref{lemma penalised bsde minimal solution existence}, we have the formula \eqref{eq Y n t xi snell envelope formula} for $Y^{n,t,\xi,\alpha_t}$ for all $n\in\N$. 
Thus, using that $\c V_n\subseteq\c V_{n+1}\subseteq\c V$ and $\bigcup_{n\in\N} \c V_n = \c V$, we conclude from $Y^{n,t,\xi,\alpha_t}_s\uparrow Y^{t,\xi,\alpha_t}_s$, for all $s\in [t,T]$, $\hat\P$-a.s., by monotone convergence that \eqref{eq recursive snell envelope formula} holds for $Y^{t,\xi,\alpha_t}$. \item\label{proof theorem 6.2 part iii} Next we will prove that $Y^{t,\xi,\alpha_t}_t$ is $\hat\P$-a.s.\@ constant, and with it that $K^{t,\xi,\alpha_t}_t = 0$ and hence $K^{t,\xi,\alpha_t}\in\c K^2_{[t,T]}(\b F^{B,\mu,\hat\P})$, which also completes the construction in \cref{proof theorem 6.2 part i}. For this, we will show that $Y^{t,\xi,\alpha_t}_t = V(t,m)$, $\hat\P$-a.s. First, we note that \eqref{eq recursive snell envelope formula} together with \eqref{eq constrained bsde value function} implies that \begin{align} Y^{t,\xi,\alpha_t}_t &= \esssup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big| \c F^{B,\mu,\hat\P^\nu}_t\Big],\ \hat\P\text{-a.s.} \label{eq Y t xi reward representation} \end{align} Next, we observe that by \cref{proposition decompose filtrations of predictable processes} we can decompose each $\nu\in\c V$ into an $\c F^{B,\mu}_t\otimes Pred(\b F^{B^t,\mu^t})\otimes\c B(\c A_T)$-measurable process $\Upsilon$ with $0<\inf_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon\leq \sup_{[0,T]\times\hat\Omega\times\hat\Omega\times\c A_T} \Upsilon < \infty$ such that $\nu(\hat\omega) = \Upsilon(\hat\omega,\hat\omega)$, for all $\hat\omega\in\hat\Omega$. In particular, we have $\Upsilon(\hat\omega,\cdot)\in\c V_t$ for every $\hat\omega\in\hat\Omega$. 
Now recalling that $X^{t,\xi,\alpha_t}$ and $I^{t,\alpha_t}$ are $\hat\P$-independent of $\c F^{B,\mu,\hat\P}_t$, we can apply the freezing lemma, see e.g.\@ \cite[Lemma 4.1]{baldi_stochastic_2017}, to rewrite the conditional expectation in \eqref{eq Y t xi reward representation} as follows, \begin{align} &\E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big| \c F^{B,\mu,\hat\P^\nu}_t\Big]\\ &= \E^{\hat\P}\Big[\frac{L^\nu_T}{L^\nu_t} \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big| \c F^{B,\mu,\hat\P}_t\Big]\\ &= \E^{\hat\P}\Big[\frac{L^\upsilon_T}{L^\upsilon_t} \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big]\bigg|_{\upsilon = \Upsilon(\hat\omega,\cdot)}\\ &= \E^{\hat\P}\Big[L^\upsilon_T \Big(g(\hat\P^{\c F^{B,\mu,\hat\P}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\c F^{B,\mu,\hat\P}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big)\Big]\bigg|_{\upsilon = \Upsilon(\hat\omega,\cdot)}\\ &= J^{\c R}(t,\xi,\alpha_t,\upsilon)\big|_{\upsilon = \Upsilon(\hat\omega,\cdot)},\qquad\hat\P\text{-a.s.}, \end{align} since for every $\hat\omega\in\hat\Omega$ we have $\upsilon = \Upsilon(\hat\omega,\cdot)\in\c V_t$ and thus $\frac{L^\upsilon_T}{L^\upsilon_t}$ is $\hat\P$-independent of $\c F^{B,\mu,\hat\P}_t$ together with $\E^{\hat\P}[L^{\upsilon}_t] = 1$ and $L^\upsilon_t$ being $\c F^{B,\mu,\hat\P}_t$-measurable. 
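For the reader's convenience, the freezing lemma used in the computation above can be stated, in generic notation not tied to the filtration setup of this paper, as follows: if a random element $X$ is independent of a sub-$\sigma$-algebra $\mathcal{G}$, $Y$ is $\mathcal{G}$-measurable, and $\Phi$ is bounded and measurable, then

```latex
% Freezing lemma (standard statement; cf. Lemma 4.1 in baldi_stochastic_2017):
% conditioning on $\mathcal{G}$ amounts to freezing the $\mathcal{G}$-measurable argument.
\begin{equation*}
  \mathbb{E}\big[\Phi(X,Y)\,\big|\,\mathcal{G}\big] = \varphi(Y)\quad\text{a.s.},
  \qquad\text{where}\quad \varphi(y) \coloneqq \mathbb{E}\big[\Phi(X,y)\big].
\end{equation*}
```

In the application above, the role of $X$ is informally played by the quantities that are $\hat\P$-independent of $\c F^{B,\mu,\hat\P}_t$, namely $(X^{t,\xi,\alpha_t},I^{t,\alpha_t})$ and the density ratio $\frac{L^\upsilon_T}{L^\upsilon_t}$, while $Y$ corresponds to the $\c F^{B,\mu}_t$-measurable first argument of $\Upsilon$.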
Therefore by using \eqref{eq Y t xi reward representation} together with \cref{proposition equivalence v v_t randomised value function,theorem equivalence original and randomised value function} we obtain on the one hand \begin{align} V(t,m) &= \sup_{\nu\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\nu)\\ &= \sup_{\nu\in\c V_t} \E^{\hat\P^\nu}\Big[g(\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_T}_{X^{t,\xi,\alpha_t}_T},X^{t,\xi,\alpha_t}_T) + \int_t^T f(s,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_s}_{X^{t,\xi,\alpha_t}_s},X^{t,\xi,\alpha_t}_s,I^{t,\alpha_t}_s) ds\Big| \c F^{B,\mu,\hat\P^\nu}_t\Big] \leq Y^{t,\xi,\alpha_t}_t, \end{align} and on the other hand also \begin{align} Y^{t,\xi,\alpha_t}_t &= \sup_{\nu\in\c V} J^{\c R}(t,\xi,\alpha_t,\upsilon)\big|_{\upsilon = \Upsilon(\hat\omega,\cdot)} \leq \sup_{\upsilon\in\c V_t} J^{\c R}(t,\xi,\alpha_t,\upsilon) = V(t,m). \end{align} This shows that $Y^{t,\xi,\alpha_t}_t = V(t,m)$, $\hat\P$-a.s., and in particular that $Y^{t,\xi,\alpha_t}_t$ is $\hat\P$-a.s.\@ constant. Finally, we note that this also implies $K^{t,\xi,\alpha_t}_t = 0$, which shows that $K^{t,\xi,\alpha_t}\in\c K^2_{[t,T]}(\b F^{B,\mu,\hat\P})$. \item Finally, let us prove the representation \eqref{eq recursive snell envelope formula at s=t}. From the BSDE \eqref{eq constrained bsde value function}, by taking the conditional expectation under $\hat\P^\nu$, we obtain for all $r\in [t,T]$ and $\nu\in\c V$, \[ Y^{t,\xi,\alpha_t}_t \geq \E^{\hat\P^\nu}[Y^{t,\xi,\alpha_t}_t] \geq \E^{\hat\P^\nu}\Big[Y^{t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big], \] since $U^{t,\xi,\alpha_t}$ is nonnegative and $K^{t,\xi,\alpha_t}$ is nondecreasing, and $Y^{t,\xi,\alpha_t}_t$ is $\hat\P$-a.s.\@ constant.
Taking the supremum over all $\nu\in\c V$ then shows that \[ Y^{t,\xi,\alpha_t}_t \geq \sup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[Y^{t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big]. \] For the other direction, we recall that by \cref{lemma penalised bsde minimal solution existence}, we have the upper estimate \eqref{eq Y n t xi snell envelope upper bound at s=t} for $Y^{n,t,\xi,\alpha_t}$, for all $n\in\N$. Thus, using that $\c V_n\subseteq\c V_{n+1}\subseteq\c V$ and $\bigcup_{n\in\N} \c V_n = \c V$, we deduce from $Y^{n,t,\xi,\alpha_t}_s\uparrow Y^{t,\xi,\alpha_t}_s$, for all $s\in [t,T]$, $\hat\P$-a.s., by monotone convergence that, for all $r\in [t,T]$, \[ Y^{t,\xi,\alpha_t}_t = \E^{\hat\P}[Y^{t,\xi,\alpha_t}_t] \leq \sup_{\nu\in\c V} \E^{\hat\P^\nu}\Big[Y^{t,\xi,\alpha_t}_r + \int_t^r f(u,\hat\P^{\nu,\c F^{B,\mu,\hat\P^\nu}_u}_{X^{t,\xi,\alpha_t}_u},X^{t,\xi,\alpha_t}_u,I^{t,\alpha_t}_u) du\Big]. \] \end{enumerate} \end{proof} \let\c\oldc \bibliographystyle{plain} \bibliography{bibliography} \end{document}
2412.20778v2
http://arxiv.org/abs/2412.20778v2
An accurate approach to determining the spatiotemporal vehicle load on bridges based on measured boundary slopes
\documentclass[preprint,12pt]{elsarticle} \usepackage{amssymb} \usepackage{amsmath,amssymb,amsfonts} \usepackage{tikz} \usetikzlibrary{decorations.pathmorphing, patterns, decorations.markings} \newtheorem{thm}{Theorem} \newtheorem{Cor}{Corollary} \newtheorem{Lem}{Lemma} \newdefinition{rmk}{Remark} \newproof{proof}{Proof} \hoffset=-.6in \voffset=-1.1in \setlength{\textheight}{9.3in} \setlength{\textwidth}{6.8in} \journal{Applied Mathematical Modelling} \begin{document} \begin{frontmatter} \title{An accurate approach to determining the spatiotemporal vehicle load on bridges based on measured boundary slopes} \author[1]{Alemdar Hasanov\fnref{fn1}} \ead{[email protected]} \author[2]{Onur Baysal\corref{cor1} \fnref{fn2}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \fntext[fn1]{Emeritus Professor. \c{S}ehit Ekrem District, Altun\c{s}ehir Str., Ayazma Villalari, No: 22. Bah\c{c}ecik - Ba\c{s}iskele, Kocaeli, 41030} \fntext[fn2]{Senior Lecturer, Department of Mathematics, University of Malta, Malta} \address[1]{Kocaeli University, 41001, \.{I}zmit/Kocaeli, T\"{u}rkiye} \address[2]{University of Malta, Msida, Malta} \begin{abstract} In this paper, a novel mathematical model is developed to evaluate the spatiotemporal vehicle loads on long bridges from slope measurements made at the ends of a bridge, based on the Euler-Bernoulli beam model with internal and external damping. The mathematical modelling of this phenomenon leads to the inverse source problem of determining the spatiotemporal vehicle load $F(x,t)$ in the variable coefficient Euler-Bernoulli equation $\rho_A(x)u_{tt}+\mu(x) u_{t}+(r(x)u_{xx})_{xx}+(\kappa(x)u_{xxt})_{xx}=F(x,t)$, $(x,t)\in \Omega_T:=(0,\ell)\times (0,T)$, subject to the ``simply supported'' boundary conditions $u(0,t)=(r(x)u_{xx}+\kappa(x)u_{xxt})_{x=0}=0$, $u(\ell,t)=(r(x)u_{xx}+\kappa(x)u_{xxt})_{x=\ell}=0$, from both measured outputs: $\theta_1(t):=u_x(0,t)$ and $\theta_2(t):=u_x(\ell,t)$, that is, the measured boundary slopes.
It is shown that the input-output maps $(\Phi F)(t):=u_x(0,t;F)$, $(\Psi F)(t):=u_x(\ell,t;F)$, $F \in \mathcal{F}\subset L^2(\Omega_T)$, corresponding to the inverse problem, are compact and Lipschitz continuous. Then the Tikhonov functional $J(F)=\Vert \Phi F-\theta_1 \Vert_{L^2(0,T)}^2+\Vert \Psi F-\theta_2 \Vert_{L^2(0,T)}^2$ is introduced to prove the existence of a quasi-solution to the inverse problem. An explicit gradient formula for the Fr\'{e}chet derivative of the Tikhonov functional is derived. The Lipschitz continuity of the Fr\'{e}chet gradient, which guarantees the monotonicity of iterations in gradient methods, is proved. \end{abstract} \begin{keyword} Spatiotemporal vehicle load identification, damped Euler-Bernoulli beam, inverse source problem, solvability of the inverse problem, Fr\'{e}chet gradient. \end{keyword} \end{frontmatter} \section{Introduction} Incorrectly assessing vehicle-induced loads can lead to fatigue cracking or even bridge collapse. Furthermore, establishing relationships between these loads and experimentally measurable data is an important basis for bridge design, safety assessment, maintenance, and reinforcement. The statistical data obtained from these relationships can help us better understand the behavior of bridges under vehicle loads. Therefore, accurate and reliable estimation of vehicle loads is very important. The dynamic effects of moving loads on bridges were not recognized until the mid-19th century. The analysis and history of moving load problems are described by Timoshenko in his book \cite{Timoshenko:1953}. The monograph \cite{Fryba:1972} was the first to give the analysis of numerous basic moving load problems and their analytical solutions. Later, these problems were extensively reviewed in the book \cite{Yang:2004}. An overview, analysis and history of dynamic problems caused by moving loads are given in the article \cite{Ouyang:2011}.
An isogeometric approach to dynamic analysis of beam constructions subjected to moving vehicles was proposed in \cite{VanDo:2017}, based on the Euler-Bernoulli model. The primary live loads on bridges are vehicle loads, which are critical parameters for bridge health monitoring. However, traditional Weigh-in-Motion (WIM) systems \cite{Lydon:2016} require the installation of weighing devices embedded in the road surface, which requires traffic interruptions during installation. Moreover, this process is time-consuming and costly, which prevents the method from being widely implemented. Subsequently, Bridge Weigh-in-Motion (BWIM) systems \cite{Moses:1979}, developed in the 1970s, offer relatively easy installation and somewhat lower installation costs compared to traditional WIM systems. Over the last 40 years, BWIM systems have achieved a high level of research maturity, established technical methodologies, and high measurement accuracy. However, they still require special instrumentation and have complex installation requirements. Furthermore, the specificity of BWIM systems, where each bridge requires a special system, results in high initial installation and subsequent maintenance costs. This limits the deployment of dynamic weighing systems to only a few specific bridges. In recent years, major developments in computer vision and image processing technologies have attracted a lot of attention in the field of civil engineering. In this context, the use of computer vision to identify vehicle loads on bridges has also been explored in various papers. In \cite{Ojio:2016} the authors propose a method that uses a controller to simultaneously control the cameras on the bridge deck and underneath the bridge. The camera on the bridge deck can measure the dynamic displacement response of the bridge and receive the vehicle's axle information.
They proved the applicability of employing computer vision to determine vehicle weight by identifying vehicle loads through the analysis of both sets of data. In \cite{Martini:2022}, a computer vision-based approach to determine the tire loads of moving cars, the load positions on the bridge, and the displacement response of the bridge was proposed. A bridge influence line model was then built in accordance with these findings. This technique made use of multiple cooperating cameras. By identifying tire types and getting tire pressure information from a database, the on-bridge cameras calculated vehicle loads. Vehicle load positions and the bridge displacement response were captured by the under-bridge cameras. They created a bridge influence line model by integrating these data, offering a fresh method for computer vision-based bridge structure monitoring. The use of computer vision for non-contact, target-free displacement measurements was discussed in detail in \cite{Khuc:2018}. Here, an iterative approximation algorithm to construct the displacement unit influence surface (UIS) was proposed. This algorithm allows one to estimate the equivalent moving loads on the bridge under multiple vehicle loads using camera data and computer vision algorithms. To solve for vehicle load, a spatiotemporal connection model of structural displacement, vehicle load, and load distribution was developed in \cite{Tang:2024}. To confirm the viability of the proposed method, engineering practice experiments and model bridge testing under varied loading scenarios were carried out. According to the results of the model bridge tests, the structural displacement determined by traffic video measurement can accurately represent the displacement characteristics of the structure.
As an alternative to the above methods and approaches, in this paper we propose a new mathematical model of vehicle-bridge interaction based on the damped Euler–Bernoulli beam, which takes into account all the main physical parameters, and a new solution algorithm based on weak solution theory for the forward problem and a quasi-solution approach combined with the adjoint method for the inverse problem. In this model, the angles at the ends of the bridge are used as measurable data. The advantage of this approach is that it allows for almost zero-cost load estimation on bridges equipped with traffic surveillance, meets more practical needs with extremely low monitoring costs, and promotes the widespread application of vehicle load estimation. In addition, unlike known methods, the error margin of this method in load estimation is below $10\%$. The rest of the paper is organized as follows. In Section 2, we describe the mathematical model of bridge-vehicle load interaction, and formulate the identification problem. The weak solution of the corresponding forward problem is analysed, with a priori estimates, in Section 3. In Section 4, we introduce the input-output operators and prove some properties of these operators. The corresponding Tikhonov functional is introduced in Section 5. The Fr\'echet derivative of this functional is derived through a suitable adjoint problem in Section 6. In Section 7, the Lipschitz continuity of the Fr\'{e}chet gradient is proved.
\section{The mathematical model of bridge-vehicle load interaction} Within the Euler-Bernoulli damped beam model, the vibration of a simply supported long bridge under the spatiotemporal load $F(x,t)$ is described by the following mathematical model: \begin{eqnarray}\label{1} \left\{ \begin{array}{ll} \rho_A(x) u_{tt}+\mu(x)u_{t}-(T_r(x)u_{x})_{x}+ (r(x)u_{xx}+\kappa(x)u_{xxt})_{xx} =F(x,t),\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (x,t)\in \Omega_{T}:=(0,\ell)\times (0,T);\\ [2pt] u(x,0)=u_{t}(x,0)=0, ~x \in (0,\ell); \\ [2pt] u(0,t)=\left(r(x)u_{xx}+\kappa(x)u_{xxt}\right)_{x=0}=0; \\ \qquad \qquad \qquad \qquad u(\ell,t)=\left (r(x)u_{xx}+\kappa(x)u_{xxt}\right)_{x=\ell}=0,~t \in [0,T]. \end{array} \right. \end{eqnarray} Here and below, $u(x,t)$ is the transverse deflection at position $x\in (0,\ell)$ and time $t\in [0,T]$, while $T>0$ is the final time instant, which may be small enough, and $\ell>0$ is the length of the beam. Further, $\rho_A(x):=\rho(x)A_s(x)$, while $\rho(x)>0$ and $A_s(x)>0$ are the mass density and the cross-sectional area, $r(x):=E(x)I(x)>0$ is the flexural rigidity (or bending stiffness) of a nonhomogeneous beam while $E(x)>0$ is the elasticity modulus and $I(x)>0$ is the moment of inertia. $T_r(x)\ge 0$ is the axial tensile force. The external and internal damping mechanisms are given by the terms $\mu(x)u_t$ and $(\kappa(x)u_{xxt})_{xx}$, respectively. The coefficients $\mu(x)\ge 0$ and $\kappa(x)>0$ are called the viscous (external) damping and the strain-rate or Kelvin-Voigt damping coefficients, respectively. The coefficient $\kappa(x):=c_d(x)I(x)$ represents energy dissipated by friction internal to the beam, while $c_d>0$ is the strain-rate damping coefficient. The function $F(x,t)$ is the spatiotemporal load expressing the effect of the moving car on the bridge. Figure \ref{Fig-1} represents the geometry of the problem (\ref{1}).
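To illustrate the forward problem (\ref{1}), the following minimal numerical sketch (illustrative only, not part of the paper's analysis) solves the constant-coefficient special case by the sine-series modal decomposition that the simply supported boundary conditions admit, and returns the two boundary slopes $u_x(0,t)$ and $u_x(\ell,t)$ that serve as measured data in the sequel. The function name and all numerical parameter values are assumptions made for this sketch.

```python
import numpy as np

def forward_slopes(F, ell=1.0, T=1.0, rhoA=1.0, mu=0.5, r=1.0, kappa=0.1,
                   Tr=0.0, n_modes=20, n_steps=2000, n_x=201):
    """Constant-coefficient special case of the forward problem (1):
    expand u(x,t) = sum_n q_n(t) sin(n*pi*x/ell), which satisfies the
    simply supported boundary conditions automatically, and integrate each
    modal ODE  rhoA*q'' + (mu + kappa*k^4)*q' + (r*k^4 + Tr*k^2)*q = f_n
    with a semi-implicit Euler scheme.  Returns (t, u_x(0,t), u_x(ell,t))."""
    x = np.linspace(0.0, ell, n_x)
    dx = x[1] - x[0]
    t = np.linspace(0.0, T, n_steps + 1)
    dt = t[1] - t[0]
    theta0 = np.zeros_like(t)          # boundary slope u_x(0, t)
    thetaL = np.zeros_like(t)          # boundary slope u_x(ell, t)
    for n in range(1, n_modes + 1):
        k = n * np.pi / ell
        phi = np.sin(k * x)

        def modal_load(ti):
            # f_n(t) = (2/ell) * int_0^ell F(x,t) sin(kx) dx (trapezoidal rule)
            g = F(x, ti) * phi
            return (2.0 / ell) * 0.5 * np.sum(g[:-1] + g[1:]) * dx

        fn = np.array([modal_load(ti) for ti in t])
        c = mu + kappa * k**4          # modal damping (viscous + Kelvin-Voigt)
        s = r * k**4 + Tr * k**2       # modal stiffness (bending + axial force)
        q, v = 0.0, 0.0                # homogeneous initial conditions
        qn = np.zeros_like(t)
        for j in range(n_steps):       # implicit in the stiff damping term
            v = (v + dt * (fn[j + 1] - s * q) / rhoA) / (1.0 + dt * c / rhoA)
            q += dt * v
            qn[j + 1] = q
        theta0 += qn * k                      # d/dx sin(kx) at x = 0
        thetaL += qn * k * np.cos(n * np.pi)  # d/dx sin(kx) at x = ell
    return t, theta0, thetaL
```

For the uniform load $F(x,t)\equiv 1$ with $\ell=r=1$, the computed slopes settle, once the damped transients decay, near the static values $\pm\ell^3/(24r)$ of a simply supported beam under a uniformly distributed load, which provides a quick consistency check of the scheme.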
Suppose that the slopes $\theta_0(t)$ and $\theta_\ell(t)$ at the ends $x=0$ and $x=\ell$ of the bridge are given as measurable data: \begin{eqnarray}\label{2} \theta_0(t):=u_x(0,t),~\theta_\ell(t):=u_x(\ell,t),~ t \in [0,T]. \end{eqnarray} Within the framework of the mathematical model (\ref{1})-(\ref{2}), the problem of determining the unknown spatiotemporal load $F(x,t)$ based on these data is defined as follows:\\ \emph{Find the unknown spatiotemporal load $F(x,t)$ in (\ref{1}) from the measured slopes $\theta_0(t)$ and $\theta_\ell(t)$ introduced in (\ref{2}).} The problem (\ref{1})-(\ref{2}) is defined as a spatiotemporal load identification problem or an inverse source problem with two Dirichlet measured outputs $\theta_0(t)$ and $\theta_\ell(t)$, according to generally accepted terminology. For a given function $F(x,t)$ from some class of admissible loads, the initial boundary value problem (\ref{1}) will be referred to as the \emph{forward problem}. \begin{figure} \centering \hspace*{0.6cm} \begin{tikzpicture}[scale=.85] \draw[color=black,ultra thick] (0.0,0.0) -- (-0.2,-0.2); \draw[color=black,ultra thick] (0.0,0.0) -- (0.2,-0.2); \draw[color=black,ultra thick] (-0.50,-0.20) -- (0.55,-0.20); \draw[color=gray] (-0.50,-0.45) -- (-0.25,-0.20); \draw[color=gray] (-0.25,-0.45) -- (0,-0.20); \draw[color=gray] (0,-0.45) -- (0.25,-0.20); \draw[color=black] (0.25,-0.45) -- (0.50,-0.20); \draw [line width=0.2mm, black] (0.0,0.0) -- (3.2,0.8); \draw [line width=0.2mm, black] (8.5,0.75) -- (12.7,0.0); \node[label=right:{\scriptsize $ \theta_0(t):=u_x(0,t)$}] at (2.4,0.18) {}; \node[label=right:{\scriptsize $ \theta_\ell(t):=u_x(\ell,t)$}] at (8.0,0.18) {}; \draw[thin, ->] (2.7,0) arc (0:85:0.5); \draw[thin, <-] (9.1,0.63) arc (0:100:-0.6); \draw[dashed] (0,0) -- (4*3.14,0); \draw[thick, ->] (5.0,1.1)-- (5.5,1.1); \draw[thick, <-] (5.0,0.9)-- (5.5,0.9); \draw[thick, ->] (7.8,1.1)-- (8.3,1.1); \draw[thick, <-] (7.8,0.9)-- (8.3,0.9); \node[label=right:{\small$ F(x,t)$}]
at (5.6,1.4) {}; \draw[thick, ->] (6.2,1.1)-- (6.2,0.78); \draw[thick, ->] (6.5,1.1)-- (6.5,0.78); \draw[thick, ->] (6.8,1.1)-- (6.8,0.78); \draw[thick,->] (4*3.14,0) -- (14,0) node [right]{$x$}; \draw[thick,->] (0,-1.5) -- (0,1.75) node [left]{$u$}; \draw[color=black,ultra thick,smooth,domain=0:{4*3.14}] plot (\x,{sin(0.25*deg(\x))/1.5}); \node[label=right:{\scriptsize$ u(0,t)=0$}] at (-.3,-0.7) {}; \node[label=right:{\scriptsize$(r(x)u_{xx}+\kappa(x)u_{xxt})_{x=0}=0$}] at (-.3,-1.1) {}; \node[label=left:{\scriptsize$u(\ell,t)=0$}] at (12,-0.7) {}; \node[label=left:{\scriptsize $(r(x)u_{xx}+\kappa(x)u_{xxt})_{x=\ell}=0$}] at (14.5,-1.1) {}; \draw[color=black,ultra thick] (12.5,0.0) -- (-0.2+12.5,-0.2); \draw[color=black,ultra thick] (12.5,0.0) -- (0.2+12.5,-0.2); \draw[color=black,ultra thick] (12.0,-0.20) -- (13,-0.20); \draw[color=gray] (12.5-0.50,-0.45) -- (12.5-0.25,-0.20); \draw[color=gray] (12.5-0.25,-0.45) -- (12.5,-0.20); \draw[color=gray] (12.5,-0.45) -- (12.5+0.25,-0.20); \draw[color=black] (12.5+0.25,-0.45) -- (12.5+0.50,-0.20); \end{tikzpicture} \caption{Geometry of the spatiotemporal vehicle load on a long bridge based on the damped Euler-Bernoulli beam model} \label{Fig-1} \end{figure} \section{Analysis of the forward problem and a priori estimates} We assume that the inputs and outputs in (\ref{1}) and (\ref{2}) satisfy the following physically justified basic conditions with a minimum requirement of smoothness: \begin{eqnarray} \label{3} \left \{ \begin{array}{ll} \rho_A, \mu, T_r, r, \kappa \in L^{\infty}(0,\ell),~F \in L^2(\Omega_T),\\ [1pt] \theta_0,\theta_\ell \in L^2(0,T),\\ [1pt] 0 < \rho_0 \le \rho_A (x)\le \rho_1,~0\le T_{r0} \le T_r(x) \le T_{r1},~0< r_0 \le r(x) \le r_1, \\ [1pt] 0 \le \mu_0\le \mu(x)\le \mu_1,~0 < \kappa_0 \le \kappa(x)\le \kappa_1,~x \in (0,\ell). \end{array} \right.
\end{eqnarray} We use the weak solution theory developed in \cite{Baysal:2019} and \cite{Hasanov:Romanov:2021} to derive some a priori estimates for the weak solution of the Euler-Bernoulli beam equation subject to clamped boundary conditions. For cantilever beams with Kelvin-Voigt damping, similar estimates have been derived in \cite{Sakthivel:2024}. \begin{thm}\label{Theorem-1} Assume that the basic conditions (\ref{3}) hold. Then for the weak solution $u\in L^2(0,T;\mathcal{V}^2(0,\ell))$ with $u_t\in L^2(0,T;L^2(0,\ell))$, $u_{tt}\in L^2(0,T;H^{-2}(0,\ell))$ of the forward problem (\ref{1}), the following estimates are satisfied: \begin{eqnarray}\label{4} \left. \begin{array}{ll} \displaystyle \Vert u_t \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{1}{\rho_0}\,C_e^2\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \Vert u_t \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq (C_e^2 -1) \,\Vert F \Vert^2_{L^2(\Omega_T)},\\ [12pt] \displaystyle \Vert u_{xx} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{1}{r_0}\,C_e^2\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert u_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\rho_0}{r_0}\,(C_e^2 -1)\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [13pt] \displaystyle \Vert u_{xxt} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{1}{\kappa_0}\,C_e^2\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert u_{xxt} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\rho_0}{\kappa_0}\,(C_e^2 -1)\, \Vert F \Vert^2_{L^2(\Omega_T)}, \end{array} \right. \end{eqnarray} where $\mathcal{V}^2(0,\ell):=\{v\in H^2(0,\ell):~ v(0)=v(\ell)=0\}$, \begin{eqnarray}\label{5} C_e^2=\exp(T/\rho_0), \end{eqnarray} and $\rho_0,r_0>0$ are the constants introduced in (\ref{3}). \end{thm} {\bf Proof.} Utilize the following identities \begin{eqnarray}\label{6} \left.
\begin{array}{ll} \displaystyle 2\int_0^t \int_0^\ell \left (r(x)u_{xx}\right )_{xx} u_{\tau} dx d\tau \\ [6pt] \qquad = \displaystyle 2\int_0^t \int_0^\ell \left \{\left [(r(x)u_{xx})_x u_{\tau}-r(x)u_{xx} u_{x\tau}\right ]_x+\frac{1}{2}\,\left (r(x)u_{xx}^2\right )_{\tau} \right \} dx d\tau,\\ [16pt] \displaystyle 2\int_0^t \int_0^\ell \left (\kappa(x)u_{xx\tau}\right)_{xx} u_{\tau} dx d\tau \\ [6pt] \qquad = \displaystyle 2\int_0^t \int_0^\ell \left \{ \left [(\kappa (x)u_{xx\tau})_x u_{\tau} -\kappa(x)u_{xx\tau} u_{x\tau}\right ]_x+\kappa (x)u^2_{xx\tau} \right \}dx d\tau, \end{array} \right. \end{eqnarray} after multiplying both sides of equation (\ref{1}) by $2u_t(x,t)$, integrating the result over $\Omega_t:=(0,\ell)\times (0,t)$, $t\in (0,T)$. In view of the homogeneous initial and boundary conditions, next we get the following \emph{energy identity}: \begin{eqnarray}\label{7} \int_0^\ell \left [\rho_A(x) u_t^2 +r(x)u_{xx}^2+\kappa(x)u_{xx\tau}^2 +T_r(x)u_x^2\right ]dx \qquad \qquad \qquad \quad \nonumber \\ [1pt] \qquad \qquad +2 \int_0^t \int_0^\ell \mu (x)u_{\tau}^2 dx d \tau = 2\int_0^t \int_0^\ell F(x,\tau) u_{\tau}dx d \tau,~t\in[0,T]. \end{eqnarray} The following main integral inequality is derived from the identity (\ref{7}) under the basic conditions (3): \begin{eqnarray}\label{8} \rho_0 \int_0^\ell u_t^2dx +r_0 \int_0^\ell u_{xx}^2dx+\kappa_0 \int_0^\ell u_{xx\tau}^2 dx +T_{r0} \int_0^\ell u_x^2dx \qquad \qquad \nonumber \\ [1pt] \qquad +2 \int_0^t \int_0^\ell \mu (x) u_{\tau}^2 dx d \tau \le \int_0^t \int_0^\ell u^2_{\tau}dx d \tau +\int_0^t \int_0^\ell F^2(x,\tau)dx d \tau,~ \end{eqnarray} for all $t\in [0,T]$. The first consequence of (\ref{8}) is the following inequality: \begin{eqnarray*} \rho_0 \int_0^\ell u_t^2dx \le \int_0^t \int_0^\ell u^2_{\tau}dx d \tau +\int_0^t \int_0^\ell F^2(x,\tau)dx d \tau,~t\in [0,T]. 
\end{eqnarray*} Here the Gr\"onwall-Bellman inequality is used to get: \begin{eqnarray}\label{9} \displaystyle \int_0^\ell u_t^2dx \le \frac{1}{\rho_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} \exp(t/\rho_0),~t\in [0,T]. \end{eqnarray} The first two estimates in (\ref{4}) are derived from this inequality. As the second consequence of the main integral inequality (\ref{8}), we deduce that \begin{eqnarray*} r_0 \int_0^\ell u_{xx}^2dx \le \int_0^t \int_0^\ell u^2_{\tau}dx d \tau +\int_0^t \int_0^\ell F^2(x,\tau)dx d \tau. \end{eqnarray*} Combined with the estimate \begin{eqnarray}\label{10} \displaystyle \int_0^t \int_0^\ell u_{\tau}^2 dx d\tau \le \Vert F \Vert^2_{L^2(\Omega_T)} \left [\exp(t/\rho_0)-1\right ],~t\in [0,T], \end{eqnarray} which is obtained by integration of the inequality (\ref{9}), this inequality yields: \begin{eqnarray*} r_0 \int_0^\ell u_{xx}^2dx \le \Vert F \Vert^2_{L^2(\Omega_T)}\exp(t/\rho_0),~t\in [0,T]. \end{eqnarray*} This inequality readily yields the third and fourth estimates in (\ref{4}). Finally, we use estimate (\ref{10}) with the third consequence \begin{eqnarray*} \kappa_0 \int_0^\ell u_{xx\tau}^2 dx \le \int_0^t \int_0^\ell u^2_{\tau}dx d \tau +\int_0^t \int_0^\ell F^2(x,\tau)dx d \tau \end{eqnarray*} of the main integral inequality to obtain the following estimate: \begin{eqnarray*} \kappa_0 \int_0^\ell u_{xx\tau}^2 dx \le \Vert F \Vert^2_{L^2(\Omega_T)}\exp(t/\rho_0),~t\in [0,T]. \end{eqnarray*} The fifth and sixth estimates in (\ref{4}) are easily obtained from this inequality. \hfill$\Box$ \begin{rmk}\label{Remark-1} This theorem, which obviously generalizes Theorem 11.1.2 of \cite{Hasanov:Romanov:2021} to the case when the Kelvin-Voigt damping coefficient $\kappa(x)$ is present in the equation, allows us to obtain important trace estimates. In the case when $\kappa(x)\equiv 0$, the fifth and sixth estimates in (\ref{4}) can be obtained only for the regular weak solution of the forward problem (\ref{1}).
Therefore, the Kelvin-Voigt damping coefficient can also be interpreted as increasing the regularity of the weak solution \cite{Sakthivel:2024}. \end{rmk} \begin{Cor}\label{Corollary-1} If conditions of Theorem \ref{Theorem-1} are satisfied, then the following trace estimates are true: \begin{eqnarray}\label{11} \left. \begin{array}{ll} \displaystyle \Vert u_{x}(0,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert u_{xt}(0,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)},\\ [12pt] \displaystyle \Vert u_{x}(\ell,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert u_{xt}(\ell,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{\kappa_0}\,\Vert F \Vert^2_{L^2(\Omega_T)}, \end{array} \right. \end{eqnarray} where \begin{eqnarray}\label{12} \displaystyle C_1^2=\frac{5\ell \rho_0}{3}\,(C_e^2 -1), \end{eqnarray} and $C_e>0$ is the constant introduced in (\ref{5}). \end{Cor} {\bf Proof.} With the aid of Rolle's theorem, we have the following inequality: \begin{eqnarray}\label{13} \displaystyle \int_0^\ell v_x^2(x) dx \le \frac{\ell^2}{2} \int_0^\ell v_{xx}^2(x) dx, \end{eqnarray} for a function $v \in H^2(0,\ell)$ satisfying the conditions $v(0)=v(\ell)=0$. On the other hand, the identity \begin{eqnarray*} \displaystyle u_x(0,t)=- \frac{1}{\ell} \int_0^\ell \left ((\ell-x)u_x(x,t) \right )_{x}dx \end{eqnarray*} for the weak solution of the forward problem (\ref{1}) implies: \begin{eqnarray*} \displaystyle u^2_x(0,t) \le \frac{2}{\ell^2} \left (\int_0^\ell u_x(x,t) dx\right )^2+ \frac{2}{\ell^2} \left (\int_0^\ell (\ell-x)u_{xx}(x,t)dx\right )^2 \\ [8pt] \displaystyle \qquad \qquad \le \frac{2}{\ell} \int_0^\ell u^2_x(x,t) dx+ \frac{2\ell}{3} \int_0^\ell u^2_{xx}(x,t)dx.
\end{eqnarray*} Hence, \begin{eqnarray*} \displaystyle \int_0^T u^2_x(0,t) dt \le \frac{2}{\ell} \int_0^T \int_0^\ell u^2_x(x,t) dx dt+ \frac{2\ell}{3} \int_0^T \int_0^\ell u^2_{xx}(x,t)dx dt. \end{eqnarray*} In view of (\ref{13}), applied to the weak solution of the forward problem (\ref{1}), this yields: \begin{eqnarray}\label{14} \displaystyle \int_0^T u^2_x(0,t) dt \le \frac{5 \ell}{3} \int_0^T \int_0^\ell u^2_{xx}(x,t)dx dt. \end{eqnarray} In the same way, we deduce that \begin{eqnarray}\label{15} \displaystyle \int_0^T u^2_{xt}(0,t) dt \le \frac{5 \ell}{3} \int_0^T \int_0^\ell u^2_{xxt}(x,t)dx dt. \end{eqnarray} It is evident that estimates similar to (\ref{14}) and (\ref{15}) are also valid for the norms $\Vert u_{x}(\ell,\,\cdot) \Vert^2_{L^2(0,T)}$ and $\Vert u_{xt}(\ell,\,\cdot) \Vert^2_{L^2(0,T)}$, respectively.  The desired estimates (\ref{11}) are obtained if we take into account the third, fourth, fifth and sixth estimates in (\ref{4}) in inequalities (\ref{14}) and (\ref{15}) above. \hfill$\Box$ \section{The input-output operators} We define the \emph{set of admissible loads (inputs)} as follows: \begin{eqnarray}\label{16} \mathcal{F}:=\{F \in L^2(\Omega_T):\, \Vert F \Vert^2_{L^2(\Omega_T)} \leq C_F\}, \end{eqnarray} where $C_F>0$ is a constant independent of $F(x,t)$. Let $F \in \mathcal{F}$ be given. Denote by $u(x,t;F)$ the unique weak solution of the forward problem (\ref{1}) corresponding to this input. Then $u_x(0,t;F)$ and $u_x(\ell,t;F)$ are the outputs, and we introduce the input-output operators as follows: \begin{eqnarray}\label{17} \left. \begin{array}{ll} \Phi_0(F)(t):=u_x(0,t;F),~F \in \mathcal{F},\, t\in [0,T], \\ [6pt] \Phi_0: \mathcal{F} \subset L^2(\Omega_T) \mapsto L^2(0,T); \\[10pt] \Phi_\ell(F)(t):=u_x(\ell,t;F),~F \in \mathcal{F},\, t\in [0,T], \\ [6pt] \Phi_\ell : \mathcal{F} \subset L^2(\Omega_T) \mapsto L^2(0,T). \end{array} \right.
\end{eqnarray} In view of these operators, the inverse source problem (\ref{1})-(\ref{2}) can be reformulated as the following system of operator equations: \begin{eqnarray}\label{18} \left \{ \begin{array}{ll} \Phi_0(F)(t)=\theta_0(t),~\theta_0 \in L^2(0,T), \\ [6pt] \Phi_\ell(F)(t)=\theta_\ell(t),~\theta_\ell \in L^2(0,T),~F \in \mathcal{F},\, t\in [0,T]. \end{array} \right. \end{eqnarray} The following lemma will make it possible for us to assert the ill-posedness of the considered inverse problem. \begin{Lem}\label{Lemma-1} Assume that conditions of Theorem \ref{Theorem-1} are satisfied. Then the input-output operators introduced in (\ref{17}) are compact. \end{Lem} {\bf Proof.} Let $\{F^{(m)} \}\subset \mathcal{F}$, $m=1,2,3,\, ...$, be a sequence of admissible loads. Denote by $\{\Phi_0(F^{(m)}) \}, \{\Phi_\ell(F^{(m)}) \}\subset L^2(0,T)$ the sequences of outputs: $\Phi_0(F^{(m)})(t)=u_x(0,t;F^{(m)})$, $\Phi_\ell(F^{(m)})(t)=u_x(\ell,t;F^{(m)})$. From the estimates (\ref{11}), it follows that these sequences are bounded in the norm of the Sobolev space $H^1(0,T)$, and hence precompact in $L^2(0,T)$. This means that the input-output operators transform the set $\{F^{(m)} \}$, $m=1,2,3,\, ...$, which is bounded in $L^2(\Omega_T)$, into the sets $\{\Phi_0(F^{(m)}) \}$ and $\{\Phi_\ell(F^{(m)}) \}$, which are precompact in $L^2(0,T)$. Hence, these operators are compact. \hfill$\Box$ \begin{rmk}\label{Remark-2} Here, unlike Lemma 11.2.1 of \cite{Hasanov:Romanov:2021}, the compactness property is obtained without assuming the existence of a \emph{regular} weak solution, thanks to the presence of the Kelvin-Voigt damping term in the mathematical model (\ref{1}). This feature was first found in the study \cite{Sakthivel:2024}. \end{rmk} From Lemma \ref{Lemma-1}, it follows, in particular, that the \emph{inverse problem (\ref{1})-(\ref{2}) is ill-posed}.
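For completeness, we note that the $H^1(0,T)$-boundedness used in the proof of Lemma \ref{Lemma-1} can be written out explicitly. Combining the trace estimates (\ref{11}) with the definition (\ref{16}) of the set of admissible loads, we have, for instance for $\Phi_0$, \begin{eqnarray*} \displaystyle \Vert \Phi_0(F^{(m)}) \Vert^2_{H^1(0,T)}= \Vert u_{x}(0,\,\cdot\,;F^{(m)}) \Vert^2_{L^2(0,T)}+ \Vert u_{xt}(0,\,\cdot\,;F^{(m)}) \Vert^2_{L^2(0,T)} \le C_1^2 \left (\frac{1}{r_0}+\frac{1}{\kappa_0}\right ) C_F, \end{eqnarray*} and similarly for $\Phi_\ell$. The compactness of the embedding of $H^1(0,T)$ into $L^2(0,T)$ then yields the precompactness of the output sequences in $L^2(0,T)$.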
\begin{Lem}\label{Lemma-2} Under the conditions of Theorem \ref{Theorem-1}, the input-output operators (\ref{17}) are Lipschitz continuous, that is, \begin{eqnarray}\label{19} \left. \begin{array}{ll} \displaystyle \Vert \Phi_0(F_1)-\Phi_0(F_2) \Vert_{L^2(0,T)} \leq C_L \Vert F_1-F_2 \Vert_{L^2(\Omega_T)}, \\ [8pt] \displaystyle \Vert \Phi_\ell(F_1)-\Phi_\ell(F_2) \Vert_{L^2(0,T)} \leq C_L \Vert F_1-F_2 \Vert_{L^2(\Omega_T)}, ~\forall F_1,F_2 \in \mathcal{F}, \end{array} \right. \end{eqnarray} where $C_L=C_1/\sqrt{r_0} >0$ is the Lipschitz constant, and $C_1>0$ is the constant introduced in (\ref{12}). \end{Lem} {\bf Proof.} Denote by $\delta u(x,t):=u(x,t;F_1) -u(x,t;F_2)$ the weak solution of the forward problem (\ref{1}) with $F(x,t)$ replaced by $\delta F(x,t):=F_1(x,t)-F_2(x,t)$, $F_1,F_2 \in \mathcal{F}$. Then, from estimates (\ref{11}) it follows that \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle \Vert \delta u_{x}(0,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert \delta F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert \delta u_{x}(\ell,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert \delta F \Vert^2_{L^2(\Omega_T)}. \end{array} \right. \end{eqnarray*} By definition of the input-output operators, $\delta u_{x}(0,t)=\Phi_0(F_1)-\Phi_0(F_2)$, $\delta u_{x}(\ell,t)=\Phi_\ell (F_1)-\Phi_\ell (F_2)$, and these estimates lead to the required statements (\ref{19}). \hfill$\Box$ We will show that this Lipschitz continuity of the input-output operators yields the Lipschitz continuity of the Tikhonov functional, which will be introduced below. \section{The Tikhonov functional and existence of a quasi-solution} Due to random noise in the measured outputs $\theta_0(t)$ and $\theta_\ell(t)$, it is evident that exact equalities in the system of equations (\ref{18}) are not achievable in practice.
As a consequence, one needs to introduce the Tikhonov functional \begin{eqnarray}\label{20} \displaystyle J(F):=\frac{1}{2} \int_0^T \left [\Phi_0(F)(t) -\theta_0(t) \right ]^2dt + \frac{1}{2} \int_0^T \left [\Phi_\ell(F)(t) -\theta_\ell(t) \right ]^2dt \quad \nonumber \\ [6pt] \qquad \equiv \frac{1}{2} \int_0^T \left [u_x(0,t;F) -\theta_0(t) \right ]^2dt + \frac{1}{2} \int_0^T \left [u_x(\ell,t;F) -\theta_\ell(t) \right ]^2dt, \end{eqnarray} $F \in \mathcal{F}$, and consider the following minimization problem for this functional: \begin{eqnarray}\label{21} J(F_*)=\inf_{F \in \mathcal{F}} J(F). \end{eqnarray} A solution of the minimization problem (\ref{21}) is called a quasi-solution of the inverse source problem (\ref{1})-(\ref{2}). \begin{Lem}\label{Lemma-3} Let conditions of Theorem \ref{Theorem-1} hold. Then the Tikhonov functional (\ref{20}) is Lipschitz continuous: \begin{eqnarray}\label{22} \displaystyle \vert J(F_1)-J(F_2) \vert \leq C_J \Vert F_1-F_2 \Vert_{L^2(\Omega_T)}, ~\forall F_1,F_2 \in \mathcal{F}, \end{eqnarray} where \begin{eqnarray*} \displaystyle C_J=\left [\frac{2C_1 C_F}{\sqrt{r_0}} +\Vert \theta_0 \Vert_{L^2(0,T)}+\Vert \theta_\ell \Vert_{L^2(0,T)}\right ] C_L \end{eqnarray*} is the Lipschitz constant, and $C_1>0$ and $C_F>0$ are the constants introduced in (\ref{12}) and (\ref{16}), respectively.
\end{Lem} {\bf Proof.} We can easily prove that \begin{eqnarray*} \displaystyle \vert J(F_1)-J(F_2) \vert \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ [8pt] \qquad \leq \frac{1}{2} \left [ \Vert \Phi_0(F_1) \Vert_{L^2(0,T)} +\Vert \Phi_0(F_2) \Vert_{L^2(0,T)} +2\Vert \theta_0 \Vert_{L^2(0,T)}\right ]\Vert \delta \Phi_0(F)\Vert_{L^2(0,T)} \\ [8pt] \qquad +\displaystyle \frac{1}{2} \left [ \Vert \Phi_\ell(F_1) \Vert_{L^2(0,T)} +\Vert \Phi_\ell(F_2) \Vert_{L^2(0,T)} +2\Vert \theta_\ell \Vert_{L^2(0,T)}\right ] \Vert \delta \Phi_\ell(F)\Vert_{L^2(0,T)}, \end{eqnarray*} for all $F_1,F_2 \in \mathcal{F}$, where $\Vert \delta \Phi_0(F)\Vert_{L^2(0,T)}= \Vert \Phi_0(F_1)-\Phi_0(F_2) \Vert_{L^2(0,T)}$ and $\Vert \delta \Phi_\ell(F)\Vert_{L^2(0,T)}= \Vert \Phi_\ell(F_1)-\Phi_\ell(F_2) \Vert_{L^2(0,T)}$, respectively. Using here (\ref{11}) and (\ref{16}) we obtain: \begin{eqnarray*} \displaystyle \vert J(F_1)-J(F_2) \vert \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ [8pt] \qquad \leq \left [\frac{C_1 C_F}{\sqrt{r_0}}+ \Vert \theta_0\Vert_{L^2(0,T)}\right ]\Vert \delta \Phi_0(F)\Vert_{L^2(0,T)} \\ [8pt] \qquad +\displaystyle \left [\frac{C_1 C_F}{\sqrt{r_0}}+ \Vert \theta_\ell \Vert_{L^2(0,T)}\right ] \Vert \delta \Phi_\ell(F)\Vert_{L^2(0,T)}. \end{eqnarray*} Together with the Lipschitz continuity (\ref{19}) of the input-output operators, this leads to the required result (\ref{22}). \hfill$\Box$ \begin{thm}\label{Theorem-2} Assume that the basic conditions (\ref{3}) hold, and $F \in \mathcal{F}$, where $\mathcal{F}$ is the set of admissible loads. Then the minimization problem (\ref{21}) for the Tikhonov functional has at least one solution in $\mathcal{F}$, that is, the inverse source problem (\ref{1})-(\ref{2}) has a quasi-solution. \end{thm} The proof of this theorem follows the same procedure as that of Theorem 10.1.11 of \cite{Hasanov:Romanov:2021}.
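We note that the first inequality in the proof of Lemma \ref{Lemma-3} rests on elementary arguments: with $a:=\Vert \Phi_0(F_1)-\theta_0 \Vert_{L^2(0,T)}$ and $b:=\Vert \Phi_0(F_2)-\theta_0 \Vert_{L^2(0,T)}$, the identity $a^2-b^2=(a+b)(a-b)$ and the triangle inequality imply \begin{eqnarray*} \displaystyle \frac{1}{2}\,a^2-\frac{1}{2}\,b^2 \le \frac{1}{2} \left [ \Vert \Phi_0(F_1) \Vert_{L^2(0,T)} +\Vert \Phi_0(F_2) \Vert_{L^2(0,T)} +2\Vert \theta_0 \Vert_{L^2(0,T)}\right ] \Vert \delta \Phi_0(F)\Vert_{L^2(0,T)}, \end{eqnarray*} since $a+b \le \Vert \Phi_0(F_1) \Vert_{L^2(0,T)}+\Vert \Phi_0(F_2) \Vert_{L^2(0,T)}+2\Vert \theta_0 \Vert_{L^2(0,T)}$ and $\vert a-b \vert \le \Vert \delta \Phi_0(F)\Vert_{L^2(0,T)}$; the second term of the Tikhonov functional (\ref{20}) is treated in the same way.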
\section{Fr\'echet derivative of the Tikhonov functional} Let $F, F+\delta F \in \mathcal{F}$. Denote by $\delta J(F):= J(F+\delta F )-J(F)$ the increment of the Tikhonov functional introduced in (\ref{20}). Then, \begin{eqnarray}\label{23} \displaystyle \delta J(F)= \int_0^T \left [u_x(0,t;F) -\theta_0(t) \right ]\delta u_x(0,t;F) dt +\frac{1}{2} \int_0^T \left (\delta u_x(0,t;F) \right )^2 dt \nonumber \\ [8pt] \quad \displaystyle + \int_0^T \left [u_x(\ell,t;F) -\theta_\ell(t) \right ] \delta u_x(\ell,t;F) dt +\frac{1}{2} \int_0^T \left (\delta u_x(\ell,t;F) \right )^2 dt,~~ \end{eqnarray} where $\delta u(x,t)$ denotes the weak solution of the forward problem (\ref{1}) with $F(x,t)$ replaced by $\delta F(x,t)$. \begin{Lem}\label{Lemma-4} Let the basic conditions (\ref{3}) hold. Then the following integral relationship holds: \begin{eqnarray}\label{24} \displaystyle \int_0^T p(t)\delta u_x(0,t)dt +\int_0^T q(t)\delta u_x(\ell,t)dt = \int_0^T \int_0^\ell \delta F(x,t) \varphi (x,t) dx dt, \end{eqnarray} for all $p,q \in L^2(0,T)$, where the function $\varphi (x,t)$ is the weak solution of the following backward problem: \begin{eqnarray}\label{25} \left\{ \begin{array}{ll} \rho_A(x) \varphi_{tt}-\mu(x)\varphi_{t}-(T_r(x)\varphi_{x})_{x}+ (r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt})_{xx} =0,\\ [2pt] \qquad \qquad \qquad \qquad \qquad \qquad \qquad (x,t)\in \Omega_{T}:=(0,\ell)\times (0,T);\\ [4pt] \varphi(x,T)=\varphi_{t}(x,T)=0, ~x \in (0,\ell); \\ [4pt] \varphi(0,t)=0,~-\left(r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt}\right)_{x=0}=p(t); \\ [2pt] \qquad \qquad \varphi(\ell,t)=0,~\left (r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt}\right)_{x=\ell}=q(t),~t \in [0,T], \end{array} \right. \end{eqnarray} with the inputs $p,q \in L^2(0,T)$, and $\delta u(x,t)$ is the weak solution of the forward problem (\ref{1}) with $F(x,t)$ replaced by $\delta F(x,t)$. \end{Lem} {\bf Proof.} Apply the integration by parts formula several times after multiplying both sides of equation (\ref{1}) for $ \delta u (x,t)$ by an arbitrary function $\varphi(x,t)$ and integrating over $\Omega_T$.
Next, we get: \begin{eqnarray*} \int_0^T \int_0^{\ell} \left [\rho_A(x)\varphi_{tt}-\mu(x)\varphi_{t}+\left (r(x)\varphi_{xx}- \kappa (x)\varphi_{xxt} \right)_{xx}- \left (T_r(x)\varphi_{x}\right)_{x}\right] \delta u \,dx dt ~\nonumber \\ [2pt] \quad + \int_0^\ell \left [\rho_A(x) \delta u_t \varphi-\rho_A(x) \delta u \varphi_t +\mu(x) \delta u \varphi - \delta u \left(\kappa(x)\varphi_{xx}\right)_{xx}\right ]_{t=0}^{t=T} \,dx \nonumber \\ [2pt] ~~ + \int_0^T \left [\left(r(x) \delta u_{xx}\right)_{x}\varphi-r(x)\delta u_{xx} \varphi_{x}+ r(x)\delta u_{x} \varphi_{xx}-\delta u \left (r(x)\varphi_{xx}\right)_{x} \right ]_{x=0}^{x=\ell} \,dt \nonumber \\ [2pt] ~+ \int_0^T \left [\left(\kappa(x) \delta u_{xxt}\right)_{x}\varphi-\kappa(x)\delta u_{xxt} \varphi_{x}- \kappa(x)\delta u_{x} \varphi_{xxt}+\delta u \left (\kappa(x)\varphi_{xxt}\right)_{x} \right ]_{x=0}^{x=\ell} \,dt \nonumber \\ [2pt] \quad + \int_0^T \left [T_r(x) \varphi_x \delta u-T_r(x)\varphi \delta u_{x} \right ]_{x=0}^{x=\ell} \,dt = \int_0^T \int_0^\ell \delta F(x,t) \varphi (x,t) dx dt. \end{eqnarray*} Here, we assume that the function $\varphi(x,t)$ solves the backward problem (\ref{25}). In view of the homogeneous boundary, initial, and final conditions in (\ref{1}) and (\ref{25}), the required integral relationship (\ref{24}) is then obtained. \hfill$\Box$ Considering the difference between the outputs $u_x(0,t;F)$ and $u_x(\ell,t;F)$, corresponding to the admissible $F \in \mathcal{F}$, and the measured outputs $\theta_0(t)$ and $\theta_\ell(t)$ in the Tikhonov functional (\ref{20}), we choose the inputs to the backward problem (\ref{25}) based on these differences, as follows: \begin{eqnarray}\label{26} \left\{ \begin{array}{ll} p(t)=u_x(0,t;F)-\theta_0(t),\\ [2pt] q(t)=u_x(\ell,t;F)-\theta_\ell(t),~t \in [0,T],\, F \in \mathcal{F}. \end{array} \right. 
\end{eqnarray} The backward problem (\ref{25}) with the inputs (\ref{26}) is defined as the \emph{adjoint problem, corresponding to the inverse problem (\ref{1})-(\ref{2}):} \begin{eqnarray}\label{27} \left\{ \begin{array}{ll} \rho_A(x) \varphi_{tt}-\mu(x)\varphi_{t}-(T_r(x)\varphi_{x})_{x}+ (r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt})_{xx} =0,\\ [2pt] \qquad \qquad \qquad \qquad \qquad \qquad \qquad (x,t)\in \Omega_{T}:=(0,\ell)\times (0,T);\\ [4pt] \varphi(x,T)=\varphi_{t}(x,T)=0, \, x \in (0,\ell); \\ [4pt] \varphi(0,t)=0,\,-\left(r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt}\right)_{x=0}=u_x(0,t;F)-\theta_0(t); \\ [2pt] \quad \varphi(\ell,t)=0,\,\left (r(x)\varphi_{xx}-\kappa(x)\varphi_{xxt}\right)_{x=\ell}=u_x(\ell,t;F)-\theta_\ell(t),\,t \in [0,T]. \end{array} \right. \end{eqnarray} The adjoint problem (\ref{27}), as well as the backward problem (\ref{25}), are well-posed problems, as the change of the variable $\tau=T-t$ shows. Furthermore, the integral relationship (\ref{24}) with the inputs introduced in (\ref{26}) is defined as the \emph{input-output relationship}: \begin{eqnarray}\label{28} \displaystyle \int_0^T \left [u_x(0,t;F)-\theta_0(t) \right ]\delta u_x(0,t)dt +\int_0^T \left [u_x(\ell,t;F)-\theta_\ell(t) \right ]\delta u_x(\ell,t)dt \nonumber \\ [2pt] \qquad \qquad \qquad = \int_0^T \int_0^\ell \delta F(x,t) \varphi (x,t;F) dx dt. \end{eqnarray} This integral identity expresses, as its name suggests, a relationship between the input $F(x,t)$, the outputs $u_x(0,t;F)$, $u_x(\ell,t;F)$, and the measured outputs $\theta_0(t)$, $\theta_\ell(t)$ of the inverse problem through the solution $\varphi (x,t;F)$ of the adjoint problem (\ref{27}).
As a consequence of the increment formula (\ref{23}) for the Tikhonov functional and the input-output relationship (\ref{28}), we obtain the following formula \begin{eqnarray}\label{29} \displaystyle \delta J(F)= \int_0^T \int_0^\ell \varphi (x,t;F)\,\delta F(x,t)dx dt \qquad \qquad \qquad \qquad \qquad ~ \nonumber \\ [2pt] ~\qquad \qquad +\frac{1}{2} \int_0^T \left (\delta u_x(0,t;F) \right )^2 dt +\frac{1}{2} \int_0^T \left (\delta u_x(\ell,t;F) \right )^2 dt,~~ \end{eqnarray} which implies \emph{the formal gradient formula}: \begin{eqnarray}\label{30} \displaystyle J'(F)= \varphi (x,t;F),\, \mbox {a.e.} ~(x,t)\in \Omega_T,\, F \in \mathcal{F}. \end{eqnarray} To justify the gradient formula (\ref{30}), we need to examine the adjoint problem (\ref{27}) in more detail, obtaining also the necessary estimates for the weak solution. As we will see below, this formula may not be valid for every measured output data $\theta_0, \theta_\ell \in L^2(0,T)$, which are, at the same time, inputs for the adjoint problem (\ref{27}). We employ the change of variable $\tau=T-t$, $t\in [0,T]$, to transform the backward problem (\ref{25}) for $\varphi (x,t) \equiv \varphi (x,t;F)$ to the following initial boundary value problem \begin{eqnarray}\label{31} \left\{ \begin{array}{ll} \rho_A(x) \phi_{\tau \tau}+\mu(x)\phi_{\tau}-(T_r(x)\phi_{x})_{x}+ (r(x)\phi_{xx}+\kappa(x)\phi_{xx\tau})_{xx} =0,\\ [2pt] \qquad \qquad \qquad \qquad \qquad \qquad \qquad (x,\tau)\in \Omega_{T}:=(0,\ell)\times (0,T);\\ [4pt] \phi(x,0)=\phi_{\tau}(x,0)=0, ~x \in (0,\ell); \\ [4pt] \phi(0,\tau)=0,~-\left(r(x)\phi_{xx}+\kappa(x)\phi_{xx\tau}\right)_{x=0}=\widetilde{p}(\tau); \\ [2pt] \qquad \qquad \phi(\ell,\tau)=0,~\left (r(x)\phi_{xx}+\kappa(x)\phi_{xx\tau}\right)_{x=\ell}=\widetilde{q}(\tau),~\tau \in [0,T], \end{array} \right. \end{eqnarray} for the function $\phi (x,\tau) = \varphi (x,t)$, with the inputs $\widetilde{p}(\tau)=p(t)$ and $\widetilde{q}(\tau)=q(t)$.
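The bookkeeping behind this transformation is elementary: writing $\phi(x,\tau)=\varphi(x,T-\tau)$, the chain rule gives \begin{eqnarray*} \displaystyle \phi_{\tau}(x,\tau)=-\varphi_{t}(x,T-\tau),~~ \phi_{\tau\tau}(x,\tau)=\varphi_{tt}(x,T-\tau),~~ \phi_{xx\tau}(x,\tau)=-\varphi_{xxt}(x,T-\tau), \end{eqnarray*} so that every odd-order time derivative changes sign, while the even-order ones are unchanged. In particular, the final conditions at $t=T$ in (\ref{25}) turn into homogeneous initial conditions at $\tau=0$, and the $L^2$-norms over $\Omega_T$ of the corresponding derivatives of $\varphi$ and $\phi$ coincide.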
\begin{Lem}\label{Lemma-5} Assume that in addition to the basic conditions (\ref{3}), the inputs $\widetilde{p}(\tau)$ and $\widetilde{q}(\tau)$ in (\ref{31}) satisfy the following regularity conditions: \begin{eqnarray}\label{32} \left. \begin{array}{ll} \widetilde{p},\widetilde{q} \in H^1(0,T). \end{array} \right. \end{eqnarray} Then for the weak solution $\phi\in L^2(0,T;\mathcal{V}^2(0,\ell))$ with $\phi_\tau\in L^2(0,T;L^2(0,\ell))$, $\phi_{\tau\tau}\in L^2(0,T;H^{-2}(0,\ell))$ of the transformed problem (\ref{31}), the following estimates hold: \begin{eqnarray}\label{33} \left. \begin{array}{ll} \displaystyle \Vert \phi_{xx} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \exp (T)\, C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \\ [8pt] \displaystyle \Vert \phi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \left (\exp (T)-1\right ) C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \\ [10pt] \displaystyle \Vert \phi_\tau \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp (T)r_0}{2\rho_0}\,C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \\ [12pt] \displaystyle \Vert \phi_\tau \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp (T)-1\right )r_0}{2\rho_0}\,C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \\ [16pt] \displaystyle \Vert \phi_{xx\tau} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp (T)r_0}{2\kappa_0}\,C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \\ [12pt] \displaystyle \Vert \phi_{xx\tau} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp (T)-1\right )r_0 }{2\kappa_0}\,C_0^2 \, \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}, \end{array} \right. \end{eqnarray} where \begin{eqnarray}\label{34} \left. \begin{array}{ll} \displaystyle C_0^2= \frac{20 \ell \,C_T}{3r^2_0},~ C_T=\max (2/T,\, 1+2T/3), \\ [13pt] \Vert \widetilde{Q}\,' \Vert^2_{L^2(0,T)}= \Vert \widetilde{p}\,' \Vert^2_{L^2(0,T)}+\Vert \widetilde{q}\,' \Vert^2_{L^2(0,T)}. \end{array} \right.
\end{eqnarray} \end{Lem} {\bf Proof.} Multiply both sides of equation (\ref{31}) by $2\phi_\tau(x,\tau)$, integrate it over $\Omega_\tau:=(0,\ell)\times (0,\tau)$, $\tau \in [0,T]$, and use the identities (\ref{6}). Applying the integration by parts formula, using the initial and boundary conditions, after elementary transformations we obtain: \begin{eqnarray}\label{35} \int_0^\ell \left [\rho_A(x) \phi_\tau^2 +r(x)\phi_{xx}^2+\kappa(x)\phi_{xx\tau}^2 +T_r(x)\phi_x^2\right ]dx+2 \int_0^\tau \int_0^\ell \mu (x)\phi_{\eta}^2 dx d \eta \nonumber \\ [1pt] =2 \int_0^\tau \widetilde{p}(\eta)\phi_{x\eta}(0,\eta) d\eta+ 2 \int_0^\tau \widetilde{q}(\eta)\phi_{x\eta}(\ell,\eta) d\eta,~\tau \in[0,T].~ \end{eqnarray} Using the $\varepsilon$-inequality and the consequence $\phi_{x}(0,0)=\phi_{x}(\ell,0)=0$ of the homogeneous initial conditions, we estimate the right-hand side integrals, as follows: \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle 2 \int_0^\tau \widetilde{p}(\eta)\phi_{x\eta}(0,\eta) d\eta= -2 \int_0^\tau \widetilde{p}\,'(\eta)\phi_{x}(0,\eta) d\eta+2 \widetilde{p}(\tau)\phi_{x}(0,\tau) \qquad \qquad\\ [8pt] \displaystyle \qquad \le \varepsilon \left [\int_0^\tau \phi^2_{x}(0,\eta) d\eta +\phi^2_{x}(0,\tau)\right ]+ \frac{1}{\varepsilon}\, \left [\int_0^T \left (\widetilde{p}\,'(\tau)\right)^2 d\tau + \left (\widetilde{p}(\tau)\right)^2\right ],~\\ [14pt] \displaystyle 2 \int_0^\tau \widetilde{q}(\eta)\phi_{x\eta}(\ell,\eta) d\eta=-2 \int_0^\tau \widetilde{q}\,'(\eta)\phi_{x}(\ell,\eta) d\eta+2 \widetilde{q}(\tau)\phi_{x}(\ell,\tau) \qquad \qquad \\ [8pt] \displaystyle \qquad \le \varepsilon \left [\int_0^\tau \phi^2_{x}(\ell,\eta) d\eta +\phi^2_{x}(\ell,\tau)\right ]+ \frac{1}{\varepsilon}\, \left [\int_0^T \left (\widetilde{q}\,'(\tau)\right)^2 d\tau + \left (\widetilde{q}(\tau)\right)^2\right ], \end{array} \right. \end{eqnarray*} for all $\tau \in[0,T]$.
Here we use the identities \begin{eqnarray*} \displaystyle \widetilde{p}(\tau)=\frac {1}{\tau}\int_0^\tau \left (\eta \widetilde{p}(\eta)\right)' d\eta,~ \displaystyle \widetilde{q}(\tau)=\frac {1}{\tau}\int_0^\tau \left (\eta \widetilde{q}(\eta)\right)' d\eta,~\tau \in(0,T] \end{eqnarray*} to deduce that \begin{eqnarray*} \displaystyle \widetilde{p}^{\,2}(\tau) \le \frac {2}{\tau} \int_0^\tau \left (\widetilde{p}(\eta)\right)^2 d\eta+ \frac {2\tau}{3} \int_0^\tau \left (\widetilde{p}\,'(\eta)\right)^2 d\eta,\\ [1pt] \displaystyle \widetilde{q}^{\,2}(\tau) \le \frac {2}{\tau} \int_0^\tau \left (\widetilde{q}(\eta)\right)^2 d\eta+ \frac {2\tau}{3} \int_0^\tau \left (\widetilde{q}\,'(\eta)\right)^2 d\eta. \end{eqnarray*} Hence, \begin{eqnarray}\label{36} \left. \begin{array}{ll} \displaystyle 2 \int_0^\tau \widetilde{p}(\eta)\phi_{x\eta}(0,\eta) d\eta \le \varepsilon \left [\int_0^\tau \phi^2_{x}(0,\eta) d\eta +\phi^2_{x}(0,\tau)\right ]\\ [8pt] \displaystyle \qquad \qquad + \frac{1}{\varepsilon}\, \left [\frac {2}{\tau} \int_0^\tau \left (\widetilde{p}(\eta)\right)^2 d\eta+ \left (1+\frac {2\tau}{3}\right ) \int_0^\tau \left (\widetilde{p}\,'(\eta)\right)^2 d\eta\right ],~\\ [14pt] \displaystyle 2 \int_0^\tau \widetilde{q}(\eta)\phi_{x\eta}(\ell,\eta) d\eta \le \varepsilon \left [\int_0^\tau \phi^2_{x}(\ell,\eta) d\eta +\phi^2_{x}(\ell,\tau)\right ] \\ [8pt] \displaystyle \qquad \qquad + \frac{1}{\varepsilon}\, \left [\frac {2}{\tau} \int_0^\tau \left (\widetilde{q}(\eta)\right)^2 d\eta+ \left (1+\frac {2\tau}{3}\right ) \int_0^\tau \left (\widetilde{q}\,'(\eta)\right)^2 d\eta\right ]. \end{array} \right. \end{eqnarray} For the terms in the first right-hand side square brackets, we use the trace estimate (\ref{14}) and its analogue for $x=\ell$. Then we get: \begin{eqnarray}\label{37} \left.
\begin{array}{ll} \displaystyle \int_0^\tau \phi^2_{x}(0,\eta) d\eta +\phi^2_{x}(0,\tau) \le \frac{5\ell}{3}\left [ \int_0^\tau \int_0^\ell \phi^2_{xx}(x,\eta) dx\, d\eta +\int_0^\ell \phi^2_{xx}(x,\tau) dx \right ],\\ [14pt] \displaystyle \int_0^\tau \phi^2_{x}(\ell,\eta) d\eta +\phi^2_{x}(\ell,\tau) \le \frac{5\ell}{3}\left [ \int_0^\tau \int_0^\ell \phi^2_{xx}(x,\eta) dx\, d\eta +\int_0^\ell \phi^2_{xx}(x,\tau) dx\right ], \end{array} \right. \end{eqnarray} for all $\tau \in[0,T]$. Further, we estimate the terms in the second right-hand side square brackets of (\ref{36}) as follows: \begin{eqnarray}\label{38} \left. \begin{array}{ll} \displaystyle \frac {2}{\tau} \int_0^\tau \left (\widetilde{p}(\eta)\right)^2 d\eta+ \left (1+\frac {2\tau}{3}\right ) \int_0^\tau \left (\widetilde{p}\,'(\eta)\right)^2 d\eta\le C_T\int_0^T \left (\widetilde{p}\,'(\eta)\right)^2 d\eta,~\\ [10pt] \displaystyle \frac {2}{\tau} \int_0^\tau \left (\widetilde{q}(\eta)\right)^2 d\eta+ \left (1+\frac {2\tau}{3}\right ) \int_0^\tau \left (\widetilde{q}\,'(\eta)\right)^2 d\eta \le C_T \int_0^T \left (\widetilde{q}\,'(\eta)\right)^2 d\eta, \end{array} \right. \end{eqnarray} $ \tau \in[0,T]$, with $C_T>0$ introduced in (\ref{34}). Substituting (\ref{37}) and (\ref{38}) into (\ref{36}), and then the result into (\ref{35}), we deduce that  \begin{eqnarray}\label{39} \displaystyle \int_0^\ell \rho_A(x) \phi_\tau^2dx +\left (r_0-\frac{5\ell \varepsilon}{3}\right)\int_0^\ell \phi_{xx}^2 dx+\kappa_0 \int_0^\ell \phi_{xx\tau}^2 dx \qquad \qquad \quad \nonumber \\ [1pt] \qquad +\int_0^\ell T_r(x)\phi_x^2dx +2 \int_0^\tau \int_0^\ell \mu (x)\phi_{\eta}^2 dx d \eta \qquad \qquad \qquad \qquad \qquad \nonumber \\ [1pt] \qquad \qquad \le \frac{5\ell\varepsilon}{3} \int_0^\tau \int_0^\ell\phi_{xx}^2 dx d \eta + \frac{C_T}{\varepsilon} \left [\Vert \widetilde{p}\,' \Vert^2_{L^2(0,T)}+\Vert \widetilde{q}\,' \Vert^2_{L^2(0,T)} \right ], \end{eqnarray} for all $\tau \in[0,T]$.
We choose the arbitrary parameter $\varepsilon>0$ from the condition $r_0-5\ell \varepsilon/3>0$, as follows: \begin{eqnarray*} \varepsilon= \frac{3r_0}{10\ell}. \end{eqnarray*} Then (\ref{39}) yields: \begin{eqnarray}\label{40} \displaystyle \int_0^\ell \rho_A(x) \phi_\tau^2dx +\frac{r_0}{2}\int_0^\ell \phi_{xx}^2 dx+\kappa_0 \int_0^\ell \phi_{xx\tau}^2 dx \qquad \qquad \qquad \qquad \qquad \nonumber \\ [1pt] +\int_0^\ell T_r(x)\phi_x^2dx +2 \int_0^\tau \int_0^\ell \mu (x)\phi_{\eta}^2 dx d \eta \qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\ [1pt] \qquad \le \frac{r_0}{2} \int_0^\tau \int_0^\ell \phi_{xx}^2 dx d \eta + \frac{10\ell C_T}{3r_0} \left [\Vert \widetilde{p}\,' \Vert^2_{L^2(0,T)}+\Vert \widetilde{q}\,' \Vert^2_{L^2(0,T)} \right ], \end{eqnarray} for all $\tau \in[0,T]$. The required estimates (\ref{33}) are derived from the main inequality (\ref{40}), as in the proof of Theorem \ref{Theorem-1}. \hfill$\Box$ In view of (\ref{26}), \begin{eqnarray*} \left. \begin{array}{ll} \Vert \widetilde{p}\,' \Vert^2_{L^2(0,T)}=\Vert p' \Vert^2_{L^2(0,T)}= \Vert u_{xt}(0,\,\cdot) -\theta'_0\Vert^2_{L^2(0,T)}, \\ [8pt] \Vert \widetilde{q}\,' \Vert^2_{L^2(0,T)}=\Vert q' \Vert^2_{L^2(0,T)}= \Vert u_{xt}(\ell,\,\cdot) -\theta'_\ell \Vert^2_{L^2(0,T)}. \end{array} \right. \end{eqnarray*} With the inequalities \begin{eqnarray*} \left. \begin{array}{ll} \Vert u_{xt}(0,\,\cdot) -\theta'_0\Vert^2_{L^2(0,T)}\le 2\Vert u_{xt}(0,\,\cdot) \Vert^2_{L^2(0,T)}+ 2\Vert \theta'_0\Vert^2_{L^2(0,T)}, \\ [8pt] \Vert u_{xt}(\ell,\,\cdot) -\theta'_\ell \Vert^2_{L^2(0,T)}\le 2\Vert u_{xt}(\ell,\,\cdot)\Vert^2_{L^2(0,T)}+2\Vert \theta'_\ell \Vert^2_{L^2(0,T)}, \end{array} \right. \end{eqnarray*} and the trace estimates in (\ref{11}) this leads to the following estimates: \begin{eqnarray}\label{41} \left. 
\begin{array}{ll} \displaystyle \Vert p' \Vert^2_{L^2(0,T)}\le \frac{2C_1^2}{\kappa_0} \Vert F\Vert^2_{L^2(\Omega_T)}+ 2\Vert \theta'_0\Vert^2_{L^2(0,T)}, \\ [10pt] \displaystyle \Vert q' \Vert^2_{L^2(0,T)}\le \frac{2C_1^2}{\kappa_0} \Vert F\Vert^2_{L^2(\Omega_T)}+ 2\Vert \theta'_\ell \Vert^2_{L^2(0,T)}. \end{array} \right. \end{eqnarray} The right-hand side norms $\Vert \theta'_0\Vert_{L^2(0,T)}$ and $\Vert \theta'_\ell\Vert_{L^2(0,T)}$ in estimates (\ref{41}) provide insight into the necessary conditions for existence of the weak solution to the adjoint problem (\ref{27}). Namely, the measured outputs $\theta_0(t)$ and $\theta_\ell(t)$ must belong not merely to the space $L^2(0,T)$, but to the space $H^1(0,T)$ of smoother functions. \begin{thm}\label{Theorem-3} Assume that the basic conditions (\ref{3}) are satisfied. Suppose, in addition, that the measured outputs satisfy the regularity conditions $\theta_0, \theta_\ell \in H^1(0,T)$, so that the inputs (\ref{26}) satisfy (\ref{32}). Then there exists a weak solution $\varphi \in L^2(0,T;\mathcal{V}^2(0,\ell))$ with $\varphi_t\in L^2(0,T;L^2(0,\ell))$, $\varphi_{tt}\in L^2(0,T;H^{-2}(0,\ell))$ of the adjoint problem (\ref{27}), and the following a priori estimates hold: \begin{eqnarray}\label{42} \left.
\begin{array}{ll} \displaystyle \Vert \varphi_{xx}\Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq 2\exp(T) C_0^2\left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ], \\ [10pt] \displaystyle \Vert \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq 2\left (\exp(T)-1\right) C_0^2 \left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ],\\ [12pt] \displaystyle \Vert \varphi_{t} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp(T)r_0}{\rho_0}\,C_0^2 \left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ], \\ [10pt] \displaystyle \Vert \varphi_{t} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp(T)-1\right )r_0}{\rho_0}\,C_0^2 \left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ], \\ [13pt] \displaystyle \Vert \varphi_{xxt} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp(T)r_0}{\kappa_0}\,C_0^2 \left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ], \\ [10pt] \displaystyle \Vert \varphi_{xxt} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp(T)-1\right )r_0}{\kappa_0}\,C_0^2 \left [\frac{2C_1^2}{\kappa_0}\, \Vert F \Vert^2_{L^2(\Omega_T)} + \Vert \Theta' \Vert^2_{L^2(0,T)} \right ], \end{array} \right. \end{eqnarray} where $\mathcal{V}^2(0,\ell)$ is the subspace of the Sobolev space $H^2(0,\ell)$ introduced in Theorem \ref{Theorem-1}, \begin{eqnarray}\label{43} \Vert \Theta' \Vert^2_{L^2(0,T)}= \Vert \theta'_0 \Vert^2_{L^2(0,T)}+ \Vert \theta'_\ell \Vert^2_{L^2(0,T)}, \end{eqnarray} and $C_0,C_1>0$ are the constants defined in (\ref{34}) and (\ref{12}), respectively. \end{thm} {\bf Proof.} The proof of the existence of the weak solution can be done in a similar way to the proof of the related theorems in \cite{Baysal:2019} and \cite{Sakthivel:2024}.
Estimates (\ref{33}) clearly hold for the corresponding norms of the weak solution $\varphi(x,t)$ of the adjoint problem (\ref{27}), with $\Vert \widetilde {Q}\,' \Vert^2_{L^2(0,T)}$ replaced by \begin{eqnarray*} \displaystyle \Vert Q' \Vert^2_{L^2(0,T)}= \Vert p' \Vert^2_{L^2(0,T)}+ \Vert q' \Vert^2_{L^2(0,T)}\,. \end{eqnarray*} Namely, \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle \Vert \varphi_{xx} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \exp (T)\, C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}, \\ [8pt] \displaystyle \Vert \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \left (\exp (T)-1\right ) C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}, \\ [10pt] \displaystyle \Vert \varphi_t \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp (T)r_0}{2\rho_0}\,C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}, \\ [8pt] \displaystyle \Vert \varphi_t \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp (T)-1\right )r_0}{2\rho_0}\,C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}, \\ [10pt] \displaystyle \Vert \varphi_{xxt} \Vert^2_{L^{\infty}(0,T;L^2(0,\ell))} \leq \frac{\exp (T)r_0}{2\kappa_0}\,C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}, \\ [8pt] \displaystyle \Vert \varphi_{xxt} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\left (\exp (T)-1\right )r_0}{2\kappa_0}\,C_0^2 \, \Vert Q' \Vert^2_{L^2(0,T)}. \end{array} \right. \end{eqnarray*} Further, as a consequence of (\ref{41}), we deduce that \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle \Vert Q' \Vert^2_{L^2(0,T)} \le \frac{4C_1^2}{\kappa_0} \Vert F\Vert^2_{L^2(\Omega_T)}+ 2\left [\Vert \theta'_0\Vert^2_{L^2(0,T)}+\Vert \theta'_\ell \Vert^2_{L^2(0,T)}\right ]. \end{array} \right. \end{eqnarray*} We obtain the necessary estimates (\ref{42}) by substituting this in the aforementioned inequalities. \hfill$\Box$ We can now use Theorem \ref{Theorem-3} and the estimates (\ref{11}) to justify the formal gradient formula (\ref{30}). \begin{thm}\label{Theorem-4} Suppose that conditions of Theorem \ref{Theorem-3} are satisfied.
Then the Tikhonov functional introduced in (\ref{20}) is Fr\'{e}chet differentiable, and for the Fr\'{e}chet gradient of this functional, the gradient formula (\ref{30}) is valid. \end{thm} {\bf Proof.} Denote by $\delta u(x,t)$ the solution of problem (\ref{1}) with $F(x,t)$ replaced by $\delta F(x,t)$. Then, it follows from the estimates (\ref{11}) applied to the solution $\delta u(x,t)$ that \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle \Vert \delta u_{x}(0,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert \delta F \Vert^2_{L^2(\Omega_T)}, \\ [10pt] \displaystyle \Vert \delta u_{x}(\ell,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{r_0}\, \Vert \delta F \Vert^2_{L^2(\Omega_T)}. \end{array} \right. \end{eqnarray*} Hence, the last two right-hand side integrals in (\ref{29}) are of the order $\mathcal{O}\left ( \Vert \delta F \Vert_{L^2(\Omega_T)}^2\right )$, which implies that the Tikhonov functional is Fr\'{e}chet differentiable and the formula (\ref{30}) is well-defined. This completes the proof. \hfill$\Box$ \section{The Lipschitz continuity of the Fr\'{e}chet gradient and monotonicity of iterations} It is a well-known fact that important properties, such as the monotonicity of iterations and the rate of convergence in gradient methods, are consequences of the Lipschitz continuity of the Fr\'{e}chet gradient $J'(F)$, $F \in \mathcal{F}$, of the Tikhonov functional \cite{Hasanov:2007}. Thus, for example, in the Landweber iteration algorithm $F^{n+1}=F^{n}-\omega_n J'(F^{n})$, applied to the inverse problem (\ref{1})-(\ref{2}), the relaxation parameter $\omega_n>0$ can be estimated through the Lipschitz constant \cite{Hasanov:Romanov:2021, Hasanov:2007}. \begin{thm}\label{Theorem-5} Assume that the conditions of Theorem \ref{Theorem-3} are satisfied.
Then the Fr\'{e}chet gradient of the Tikhonov functional, defined in (\ref{30}), is Lipschitz continuous, \begin{eqnarray}\label{44} \Vert J'(F+\delta F)- J'(F)\Vert_{L^2(\Omega_T)} \leq L_G \,\Vert \delta F \Vert_{L^2(\Omega_T)}, \end{eqnarray} with the Lipschitz constant \begin{eqnarray}\label{45} \displaystyle L_G = \sqrt {\frac{\exp (T)-1}{2\kappa_0}} \,\ell^2 \,C_0 \,C_1, \end{eqnarray} where the constants $C_1>0$ and $C_0>0$ are defined in (\ref{12}) and (\ref{34}), respectively. \end{thm} {\bf Proof.} By definition, \begin{eqnarray}\label{46} \Vert J'(F+\delta F)- J'(F)\Vert^2_{L^2(\Omega_T)}= \Vert \delta \varphi \Vert^2_{L^2(0,T;L^2(0,\ell))}, \end{eqnarray} where $\delta \varphi (x,t):= \varphi (x,t;F+\delta F)-\varphi (x,t;F)$. Here, $\varphi (x,t;F+\delta F)$ and $\varphi (x,t;F)$ are the weak solutions of the adjoint problem (\ref{27}) for given $F +\delta F \in \mathcal{F}$ and $F \in \mathcal{F}$, respectively, and $\delta \varphi (x,t)$ solves the following problem: \begin{eqnarray}\label{47} \left\{ \begin{array}{ll} \rho_A(x) \delta \varphi_{tt}-\mu(x)\delta \varphi_{t}-(T(x)\delta \varphi_{x})_{x}+ (r(x)\delta \varphi_{xx}-\kappa(x)\delta \varphi_{xxt})_{xx} =0,\\ [2pt] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (x,t)\in \Omega_{T}:=(0,\ell)\times (0,T);\\ [4pt] \delta \varphi(x,T)=\delta \varphi_{t}(x,T)=0, \, x \in (0,\ell); \\ [4pt] \delta \varphi(0,t)=0,\, \left(r(x)\delta \varphi_{xx}-\kappa(x)\delta \varphi_{xxt}\right)_{x=0}=\delta u_x(0,t); \\ [2pt] \quad \delta \varphi(\ell,t)=0,\,\left (r(x)\delta \varphi_{xx}-\kappa(x)\delta \varphi_{xxt}\right)_{x=\ell}= \delta u_x(\ell,t),\,t \in [0,T], \end{array} \right. \end{eqnarray} with the inputs $\delta u_x(0,t):=u_x(0,t;F+\delta F)-u_x(0,t;F)$ and $\delta u_x(\ell,t):=u_x(\ell,t;F+\delta F)-u_x(\ell,t;F)$.
From the second estimate of (\ref{33}) applied to the solution $\delta \varphi (x,t)$ of the problem (\ref{47}), we deduce that \begin{eqnarray}\label{48} \displaystyle \Vert \delta \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\ [1pt] \leq \left (\exp (T)-1\right ) C_0^2 \, \left [\Vert \delta u_{xt}(0,\cdot) \Vert^2_{L^2(0,T)}+\Vert \delta u_{xt}(\ell,\cdot) \Vert^2_{L^2(0,T)} \right ]. \end{eqnarray} We employ the second and fourth trace estimates in (\ref{11}) for the weak solution $\delta u(x,t):=u(x,t;F+\delta F) -u(x,t;F)$ of the problem (\ref{1}) with $F(x,t)$ replaced by $\delta F(x,t)$. Then, we have \begin{eqnarray*} \left. \begin{array}{ll} \displaystyle \Vert \delta u_{xt}(0,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{\kappa_0}\, \Vert \delta F \Vert^2_{L^2(\Omega_T)},\\ [10pt] \displaystyle \Vert \delta u_{xt}(\ell,\,\cdot) \Vert^2_{L^2(0,T)} \leq \frac{C_1^2}{\kappa_0}\,\Vert \delta F \Vert^2_{L^2(\Omega_T)}. \end{array} \right. \end{eqnarray*} Hence, by (\ref{48}), \begin{eqnarray}\label{49} \displaystyle \Vert \delta \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{2\left (\exp (T)-1\right )}{\kappa_0} \,C_0^2 C_1^2 \, \Vert \delta F \Vert^2_{L^2(\Omega_T)}. \end{eqnarray} On the other hand, by the inequality (\ref{13}), applied to the solution of (\ref{47}), \begin{eqnarray*} \displaystyle \Vert \delta \varphi_{x} \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\ell^2}{2}\, \Vert \delta \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))}.
\end{eqnarray*} Combining this with the inequality \begin{eqnarray*} \displaystyle \Vert \delta \varphi \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\ell^2}{2}\, \Vert \delta \varphi_{x} \Vert^2_{L^2(0,T;L^2(0,\ell))}, \end{eqnarray*} which can easily be proven for the function $\delta \varphi (x,t)$ satisfying the condition $\delta \varphi (0,t)=0$, we obtain \begin{eqnarray*} \displaystyle \Vert \delta \varphi \Vert^2_{L^2(0,T;L^2(0,\ell))} \leq \frac{\ell^4}{4}\, \Vert \delta \varphi_{xx} \Vert^2_{L^2(0,T;L^2(0,\ell))}. \end{eqnarray*} Substituting this inequality into the estimate (\ref{49}), we arrive at the desired result. \hfill$\Box$ \begin{thebibliography}{6} \bibitem{Timoshenko:1953} S.P. Timoshenko, History of Strength of Materials: With a Brief Account of the History of Theory of Elasticity and Theory of Structures, McGraw-Hill, New York, 1953. \bibitem{Fryba:1972} L. Fryba, Vibration of Solids and Structures under Moving Loads, Noordhoff, Groningen, 1972. \bibitem{Yang:2004} Y.B. Yang, J.D. Yau, Y.S. Wu, Vehicle-Bridge Interaction Dynamics, World Scientific Publishing Co., 2004. \bibitem{Ouyang:2011} H. Ouyang, Moving-load dynamic problems: A tutorial (with a brief overview), Mechanical Systems and Signal Processing, 25 (2011) 2039–2060. \bibitem{VanDo:2017} V.N. Van Do, T.H. Ong, and C.H. Thai, Dynamic responses of Euler–Bernoulli beam subjected to moving vehicles using isogeometric approach, Applied Mathematical Modelling 51 (2017) 405-428. \bibitem{Lydon:2016} M. Lydon, S. E. Taylor, D. Robinson, A. Mufti, and E.J. OBrien, Recent developments in bridge weigh-in-motion (B-WIM), Journal of Civil Structural Health Monitoring 6 (2016) 69-81. \bibitem{Moses:1979} F. Moses, Weigh-in-motion system using instrumented bridges, Transportation Engineering Journal of ASCE, 105(3) (1979) 233-249. \bibitem{Ojio:2016} T. Ojio, C.H. Carey, E.J. OBrien, C. Doherty, and S.E. Taylor, Contactless bridge weigh-in-motion.
Journal of Bridge Engineering, 21(7) (2016) 04016032. \bibitem{Martini:2022} A. Martini, E.M. Tronci, M.Q. Feng, and R.Y. Leung, A computer vision-based method for bridge model updating using displacement influence lines, Engineering Structures 259 (2022) 114129. \bibitem{Khuc:2018} T. Khuc and F.N. Catbas, Structural identification using computer vision–based bridge health monitoring, Journal of Structural Engineering 144(2) (2018) 04017202. \bibitem{Tang:2024} L. Tang, X.B. Liu, Y.J. Liu, et al., A bridge dynamic response analysis and load recognition method using traffic imaging, Scientific Reports 14 (2024) 18742. \bibitem{Hasanov:Romanov:2021} A. Hasanov and V. G. Romanov, Introduction to Inverse Problems for Differential Equations, 2nd ed., Springer, New York, 2021. \bibitem{Baysal:2019} O. Baysal and A. Hasanov, Solvability of the clamped Euler–Bernoulli beam equation, Applied Mathematics Letters 93 (2019) 85-90. \bibitem{Sakthivel:2024} K. Sakthivel, A. Hasanov and A. Dileep, Inverse problems of identifying the unknown transverse shear force in the Euler-Bernoulli beam with Kelvin-Voigt damping, Journal of Inverse and Ill-Posed Problems 32(1) (2024) 75-106. \bibitem{Hasanov:2007} A. Hasanov, Simultaneous determination of source terms in a linear parabolic problem from the final overdetermination: Weak solution approach, Journal of Mathematical Analysis and Applications 330 (2007) 766-779. \end{thebibliography} \end{document}
2412.20775v6
http://arxiv.org/abs/2412.20775v6
On Spectral Graph Determination
\documentclass[11pt,twoside,reqno]{amsart} \linespread{1.05} \usepackage[colorlinks=true,citecolor=blue]{hyperref} \numberwithin{equation}{section} \DeclareMathOperator*{\essinf}{ess\,inf} \makeatletter \makeatother \newcommand{\ep}{\varepsilon} \newcommand{\eps}[1]{{#1}_{\varepsilon}} \usepackage{amsmath, amssymb, amsthm, amsfonts, cite, dsfont, enumerate, epsfig, float, geometry, doi, infwarerr, mathrsfs, mathtools, relsize, stmaryrd, tabularx, txfonts, nicefrac, subfig} \usepackage[normalem]{ulem} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{question}[theorem]{Question} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{problem}[theorem]{Problem} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newcommand{\doilink}[1]{\href{https://doi.org/#1}{#1}} \newcommand{\prob}{\ensuremath{\mathbb{P}}} \newcommand{\integers}{\ensuremath{\mathbb{Z}}} \newcommand{\expectation}{\ensuremath{\mathbb{E}}} \newcommand{\Var}{\mathrm{Var}} \newcommand{\supp}{\mathop{\mathrm{supp}}} \newcommand{\dint}{\displaystyle\int} \newcommand{\es}{\varnothing} \newcommand{\naturals}{\ensuremath{\mathbb{N}}} \newcommand{\rationals}{\ensuremath{\mathbb{Q}}} \newcommand{\Reals}{\ensuremath{\mathbb{R}}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\set}{\ensuremath{\mathcal}} \newcommand{\cset}[1]{\mathcal{#1}^{\textnormal{c}}} \newcommand{\Field}{\ensuremath{\mathbb{F}}} \newcommand{\OneTo}[1]{[#1]} \newcommand{\eqdef}{\triangleq} \newcommand{\card}[1]{|#1|} \newcommand{\bigcard}[1]{\bigl|#1\bigr|} \newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu}
\DeclareMathOperator{\Vertex}{\mathsf{V}} \DeclareMathOperator{\Edge}{\mathsf{E}} \DeclareMathOperator{\Adjacency}{\mathbf{A}} \DeclareMathOperator{\Laplacian}{\mathbf{L}} \DeclareMathOperator{\SignlessLaplacian}{\mathbf{Q}} \DeclareMathOperator{\AllOne}{\mathbf{J}} \DeclareMathOperator{\Identity}{\mathbf{I}} \DeclareMathOperator{\Independentset}{\set{I}} \DeclareMathOperator{\Kneser}{\mathsf{K}} \DeclareMathOperator{\Complete}{\mathsf{K}} \DeclareMathOperator{\Friendship}{\mathsf{F}} \DeclareMathOperator{\Empty}{\mathsf{E}} \DeclareMathOperator{\Lattice}{\mathsf{L}} \DeclareMathOperator{\Path}{\mathsf{P}} \DeclareMathOperator{\Cycle}{\mathsf{C}} \DeclareMathOperator{\SRG}{\mathsf{srg}} \DeclareMathOperator{\Sp}{\mathsf{Sp}} \DeclareMathOperator{\Star}{\mathsf{S}} \DeclareMathOperator{\Clique}{\omega} \DeclareMathOperator{\Chromatic}{\chi} \newcommand{\Gr}[1]{\mathsf{#1}} \newcommand{\CGr}[1]{\overline{\mathsf{#1}}} \newcommand{\V}[1]{\Vertex(#1)} \newcommand{\E}[1]{\Edge(#1)} \newcommand{\A}{\Adjacency} \newcommand{\LM}{\Laplacian} \newcommand{\Q}{\SignlessLaplacian} \newcommand{\D}{\mathbf{D}} \newcommand{\Ng}[1]{\mathcal{N}(#1)} \newcommand{\J}[1]{\AllOne_{#1}} \newcommand{\I}[1]{\Identity_{#1}} \newcommand{\AG}[1]{\Adjacency(#1)} \newcommand{\LPG}[1]{\Laplacian(#1)} \newcommand{\QG}[1]{\SignlessLaplacian(#1)} \newcommand{\NLG}[1]{\mathcal{L}(#1)} \newcommand{\indset}[1]{\Independentset(#1)} \newcommand{\indsetmax}[1]{\Independentset_{\max}(#1)} \newcommand{\indnum}[1]{\alpha(#1)} \newcommand{\indnumbig}[1]{\alpha\bigl(#1\bigr)} \newcommand{\indnumBig}[1]{\alpha\Bigl(#1\Bigr)} \newcommand{\indnumbigg}[1]{\alpha\biggl(#1\biggr)} \newcommand{\indnumBigg}[1]{\alpha\Biggl(#1\Biggr)} \newcommand{\findnum}[1]{\alpha_{\mathrm{f}}(#1)} \newcommand{\relfindnum}[2]{\alpha_{\mathrm{f}}(#1|#2)} \newcommand{\clnum}[1]{\Clique(#1)} \newcommand{\clnumbig}[1]{\Clique\bigl(#1\bigr)} \newcommand{\clnumBig}[1]{\Clique\Bigl(#1\Bigr)} \newcommand{\clnumbigg}[1]{\Clique\biggl(#1\biggr)}
\newcommand{\clnumBigg}[1]{\Clique\Biggl(#1\Biggr)} \newcommand{\fclnum}[1]{\Clique_{\mathrm{f}}(#1)} \newcommand{\chrnum}[1]{\Chromatic(#1)} \newcommand{\chrnumbig}[1]{\Chromatic\bigl(#1\bigr)} \newcommand{\chrnumBig}[1]{\Chromatic\Bigl(#1\Bigr)} \newcommand{\chrnumbigg}[1]{\Chromatic\biggl(#1\biggr)} \newcommand{\chrnumBigg}[1]{\Chromatic\Biggl(#1\Biggr)} \newcommand{\fchrnum}[1]{\Chromatic_{\mathrm{f}}(#1)} \newcommand{\fchrnumbig}[1]{\Chromatic_{\mathrm{f}}\bigl(#1\bigr)} \newcommand{\vchrnum}[1]{\Chromatic_{\mathrm{v}}(#1)} \newcommand{\vchrnumbig}[1]{\Chromatic_{\mathrm{v}}\bigl(#1\bigr)} \newcommand{\svchrnum}[1]{\Chromatic_{\mathrm{sv}}(#1)} \newcommand{\svchrnumbig}[1]{\Chromatic_{\mathrm{sv}}\bigl(#1\bigr)} \newcommand{\Eigval}[2]{\lambda_{#1}(#2)} \newcommand{\CoG}[1]{\Complete_{#1}} \newcommand{\CoBG}[2]{\Complete_{#1,#2}} \newcommand{\FG}[1]{\Friendship_{#1}} \newcommand{\GFG}[2]{\Friendship_{#1,#2}} \newcommand{\EmG}[1]{\Empty_{#1}} \newcommand{\KG}[2]{\Kneser(#1,#2)} \newcommand{\CCG}[2]{\Complete_{#1/#2}} \newcommand{\PathG}[1]{\Path_{#1}} \newcommand{\CG}[1]{\Cycle_{#1}} \newcommand{\CircG}[2]{\Cycle_{#1,#2}} \newcommand{\SG}[1]{\Star_{#1}} \newcommand{\srg}[4]{\SRG(#1,#2,#3,#4)} \newcommand{\DU}{\hspace{0.1em} \dot{\cup} \hspace{0.1em}} \newcommand{\NS}{\, \underline{\vee} \,} \newcommand{\NNS}{\, \uuline{\vee} \,} \newcommand{\DuplicationGraph}[1]{\mathrm{Du}(#1)} \newcommand{\Corona}[2]{{#1} \circ {#2}} \newcommand{\EdgeCorona}[2]{{#1} \diamondsuit \hspace*{0.03cm} {#2}} \newcommand{\DuplicationCorona}[2]{{#1} \boxminus {#2}} \newcommand{\DuplicationEdgeCorona}[2]{{#1} \boxplus {#2}} \newcommand{\ClosedNeighborhoodCorona}[2]{{#1} \, \underline{\boxtimes} \, {#2}} \newcommand{\SubdivisionGraph}[1]{\mathrm{S}(#1)} \newcommand{\BipartiteIncidenceGraph}[1]{\mathrm{B}(#1)} \newcommand{\SVBVJ}[2]{\SubdivisionGraph{#1} \, \ddot{\vee} \, \BipartiteIncidenceGraph{#2}} \newcommand{\SEBEJ}[2]{\SubdivisionGraph{#1} \, \overset{\raisebox{-0.1cm}{$=$}}{\vee} 
\, \BipartiteIncidenceGraph{#2}} \newcommand{\SEBVJ}[2]{\SubdivisionGraph{#1} \, {\overset{\raisebox{-0.25cm}{$\stackrel{\rule{0.25cm}{0.05mm}}{\cdot}$}}{\vee}} \hspace*{0.1cm} \BipartiteIncidenceGraph{#2}} \newcommand{\SVBEJ}[2]{\SubdivisionGraph{#1} \, {\overset{\raisebox{0.0cm}{$\stackrel{\cdot}{\rule{0.25cm}{0.05mm}}$}}{\vee}} \hspace*{0.1cm} \BipartiteIncidenceGraph{#2}} \DeclareMathOperator{\Neighbors}{\set{N}} \DeclareMathOperator{\Degree}{\text{d}} \newcommand{\Ngb}[1]{\Neighbors(#1)} \newcommand{\dgr}[1]{\Degree_{#1}} \newcommand{\trace}[1]{\text{Tr}{(#1)}} \newcommand{\Gmats}{\{\A,\LM,\Q, {\bf{\mathcal{L}}}, \overline{\A}, \overline{\LM}, \overline{\Q}, \overline{{\bf{\mathcal{L}}}}\} } \newcommand{\AM}[1]{\text{AM}{(#1)}} \newcommand{\GM}[1]{\text{GM}{(#1)}} \newcommand{\diag}[1]{\operatorname{diag}\bigl(#1\bigr)} \newcommand\qfrac[3][1pt]{\frac{ \ThisStyle{\addstackgap[#1]{\SavedStyle#2}}}{ \ThisStyle{\addstackgap[#1]{\SavedStyle#3}}}} \MHInternalSyntaxOn \renewcommand{\dcases} { \MT_start_cases:nnnn {\quad} {$\m@th\displaystyle##$\hfil} {$\m@th\displaystyle##$\hfil} {\lbrace} } \MHInternalSyntaxOff \geometry{left=1in, right=1in, top=1in, bottom=1in} \makeatletter\c@MaxMatrixCols=15\makeatother \begin{document} \setlength{\baselineskip}{1.15\baselineskip} \title{On Spectral Graph Determination} \author{Igal Sason \and Noam Krupnik \and Suleiman Hamud \and Abraham Berman} \maketitle \thispagestyle{empty} \vspace*{-0.8cm} \begin{center} {\em Technion - Israel Institute of Technology, Technion City, Haifa 3200003, Israel} \end{center} \vskip 4mm {\noindent {\bf Abstract.} The study of spectral graph determination is a fascinating area of research in spectral graph theory and algebraic combinatorics. 
This field focuses on examining the spectral characterization of various classes of graphs, developing methods to construct or distinguish cospectral nonisomorphic graphs, and analyzing the conditions under which a graph's spectrum uniquely determines its structure. This paper presents an overview of both classical and recent advancements in these topics, along with newly obtained proofs of some existing results, which offer additional insights. \vspace*{0.2cm} \noindent {\bf Keywords.} Spectral graph theory, spectral graph determination, cospectral nonisomorphic graphs, Haemers' conjecture, Tur\'{a}n graphs, graph operations. \vspace*{0.2cm} \noindent {\bf 2020 Mathematics Subject Classification.} 05C50, 05C75, 05C76. \vspace*{0.2cm} \noindent {\bf Correspondence}: Igal Sason, Technion - Israel Institute of Technology, Technion City, Haifa 3200003, Israel. Email: [email protected]; Tel: +97248294699. \tableofcontents{} \section{Introduction} \label{section: Introduction} Spectral graph theory lies at the intersection of combinatorics and matrix theory, exploring the structural and combinatorial properties of graphs through the analysis of the eigenvalues and eigenvectors of matrices associated with these graphs \cite{BrouwerH2011,Chung1997,CvetkovicDS1995,CvetkovicRS2010,GodsilR2001}. Spectral properties of graphs offer powerful insights into a variety of useful graph characteristics, enabling the determination or estimation of features that are notoriously hard to compute, such as the independence number, clique number, chromatic number, and the Shannon capacity of graphs. A particularly intriguing topic in spectral graph theory is the study of cospectral graphs, i.e., graphs that share identical multisets of eigenvalues with respect to one or more matrix representations. While isomorphic graphs are always cospectral, non-isomorphic graphs may also share spectra, leading to the study of non-isomorphic cospectral (NICS) graphs.
This phenomenon raises profound questions about the extent to which a graph’s spectrum encodes its structural properties. Conversely, graphs determined by their spectrum (DS graphs) are uniquely identifiable, up to isomorphism, by their eigenvalues. In other words, a graph is DS if and only if no other non-isomorphic graph shares the same spectrum. The problem of spectral graph determination and the characterization of DS graphs dates back to the pioneering 1956 paper by G\"{u}nthard and Primas \cite{GunthardP56}, which explored the interplay between graph theory and chemistry. This paper posed the question of whether graphs can be uniquely determined by their spectra with respect to their adjacency matrix $\A$. While every graph is fully determined by its adjacency matrix (and hence by its eigenvalues together with a basis of corresponding eigenvectors), the characterization of graphs for which the eigenvalues alone suffice for identification forms a fertile area of research in spectral graph theory. This research holds both theoretical interest and practical implications. Subsequent studies have broadened the scope of this question to include determination by the spectra of other significant matrices, such as the Laplacian matrix ($\LM$), signless Laplacian matrix ($\Q$), and normalized Laplacian matrix (${\bf{\mathcal{L}}}$), among many other matrices associated with graphs. The study of cospectral and DS graphs with respect to these matrices has become a cornerstone of spectral graph theory. This line of research has far-reaching applications in diverse fields, including chemistry and molecular structure analysis, physics and quantum computing, network communication theory, machine learning, and data science. One of the most prominent conjectures in this area is Haemers' conjecture \cite{Haemers2016,Haemers2024}, which posits that most graphs are determined by the spectrum of their adjacency matrices ($\A$-DS).
The conjecture remains open; recent theoretical and experimental progress toward it is presented in \cite{KovalK2024,WangW2024}, while graphs and graph families that are not DS continue to be discovered. Haemers’ conjecture has spurred significant interest in classifying DS graphs and understanding the factors that influence spectral determination, particularly among special families of graphs such as regular graphs, strongly regular graphs, trees, graphs of pyramids, as well as the construction of NICS graphs by a variety of graph operations. Studies in these directions of research have been covered in the seminal works by Schwenk \cite{Schwenk1973}, and by van Dam and Haemers \cite{vanDamH03,vanDamH09}, as well as in more recent studies (in part by the authors) such as \cite{AbdianBTKO21,AbiadH2012,AbiadBBCGV2022,Butler2010,ButlerJ2011,BermanCCLZ2018,Butler2016,ButlerH2016,BuZ2012,BuZ2012b, CamaraH14,DasP2013,DuttaA20,GodsilM1982,HamidzadeK2010,HamudB24,JinZ2014,KannanPW22,KoolenHI2016,KoolenHI2016b,KovalK2024,KrupnikB2024, LinLX2019,LiuZG2008,MaRen2010,OboudiAAB2021,OmidiT2007,OmidiV2010,Sason2024,YeLS2025,ZhangLY09,ZhangLZY09,ZhouBu2012}, and references therein. Specific contributions of these papers to the problem of the spectral determination of graphs are addressed in the continuation of this article. This paper surveys both classical and recent results on spectral graph determination, also presenting newly obtained proofs of some existing results, which offer additional insights. The paper emphasizes the significance of adjacency spectra ($\A$-spectra), and it provides conditions for $\A$-cospectrality, $\A$-NICS, and $\A$-DS graphs, offering examples that support or refute Haemers’ conjecture. We furthermore address the cospectrality of graphs with respect to the Laplacian, signless Laplacian, and normalized Laplacian matrices.
For regular graphs, cospectrality with respect to any one of these matrices (or the adjacency matrix) implies cospectrality with respect to all the others, enabling a unified framework for studying DS and NICS graphs across different matrix representations. However, for irregular graphs, cospectrality with respect to one matrix does not necessarily imply cospectrality with respect to another. This distinction underscores the complexity of analyzing spectral properties in irregular graphs, where the interplay among different matrix representations becomes more intricate and often necessitates distinct techniques for characterization and comparison. The structure of the paper is as follows: Section~\ref{section: preliminaries} provides preliminary material in matrix theory, graph theory, and graph-associated matrices. Section~\ref{section: DS graphs} focuses on graphs determined by their spectra (with respect to one or multiple matrices). Section~\ref{section: special families of graphs} examines special families of graphs and their determination by adjacency spectra. Section~\ref{section: graph operations} analyzes unary and binary graph operations, emphasizing their impact on spectral determination and construction of NICS graphs. Finally, Section~\ref{section: summary and outlook} concludes the paper with open questions and an outlook on spectral graph determination, highlighting areas for further research. \section{Preliminaries} \label{section: preliminaries} The present section provides preliminary material and notation in matrix theory, graph theory, and graph-associated matrices, which serve as background for the rest of the paper.
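The equivalence of the cospectrality notions for $d$-regular graphs stems from the identities $\LM = d\,\I{n} - \A$, $\Q = d\,\I{n} + \A$, and ${\bf{\mathcal{L}}} = \I{n} - \tfrac{1}{d}\A$, so each spectrum is an affine image of the adjacency spectrum. A minimal numerical sketch of this fact (not from the paper; the choice of the 5-cycle and the use of NumPy are ours):

```python
# For a d-regular graph, L = d*I - A and Q = d*I + A, so the spectra of
# A, L, and Q determine one another.  We check this for the 2-regular
# 5-cycle C_5.
import numpy as np

n, d = 5, 2
A = np.zeros((n, n))
for i in range(n):                    # adjacency matrix of the cycle C_5
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

L = d * np.eye(n) - A                 # Laplacian of a d-regular graph
Q = d * np.eye(n) + A                 # signless Laplacian

a = np.sort(np.linalg.eigvalsh(A))    # eigenvalues in ascending order
l = np.sort(np.linalg.eigvalsh(L))
q = np.sort(np.linalg.eigvalsh(Q))

# spectral shifts: eigenvalues of L are d - lambda, those of Q are d + lambda
assert np.allclose(np.sort(d - a), l)
assert np.allclose(np.sort(d + a), q)
```

The same affine relations fail for irregular graphs, since the degree matrix is then no longer a multiple of the identity; this is exactly the distinction discussed above.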
\subsection{Matrix Theory Preliminaries} \label{subsection: Matrix Theory Preliminaries} The following standard notation in matrix theory is used in this paper: \begin{itemize} \item $\Reals^{n\times m}$ denotes the set of all $n \times m$ matrices with real entries, \item $\Reals^{n} \triangleq \Reals^{n\times 1}$ denotes the set of all $n$-dimensional column vectors with real entries, \item $\I{n}\in\Reals^{n\times n}$ denotes the $n \times n$ identity matrix, \item $\mathbf{0}_{k,m} \in\Reals^{k\times m}$ denotes the $k \times m$ all-zero matrix, \item $\J{k,m}\in\Reals^{k\times m}$ denotes the $k \times m$ all-ones matrix, \item $\mathbf{1}_n \triangleq \J{n,1} \in \Reals^n$ denotes the $n$-dimensional column vector of ones. \end{itemize} Throughout this paper, we deal with real matrices. The concepts of \emph{Schur complement} and \emph{interlacing of eigenvalues} are useful in papers on spectral graph determination and cospectral graphs, and are also addressed in this paper. \begin{definition} \label{definition: Schur complement} Let $\mathbf{M}$ be a block matrix \begin{align} \mathbf{M}= \begin{pmatrix} \mathbf{A} & \mathbf{B}\\ \mathbf{C} & \mathbf{D} \end{pmatrix}, \end{align} where the block $\mathbf{D}$ is invertible. The \emph{Schur complement of $\mathbf{D}$ in $\mathbf{M}$} is \begin{align} \label{definition: eq - Schur complement} \mathbf{M/D}= \mathbf{A}-\mathbf{BD}^{-1}\mathbf{C}. \end{align} \end{definition} Schur proved the following remarkable theorem: \begin{theorem}[Theorem on the Schur complement \cite{Schur1917}] \label{theorem: Schur complement} If $\mathbf{D}$ is invertible, then \begin{align} \label{eq: Schur's formula} \det{\mathbf{M}} & =\det(\mathbf{M/D}) \, \det{\mathbf{D}}.
\end{align} \end{theorem} \begin{theorem}[Cauchy Interlacing Theorem \cite{ParlettB1998}] \label{thm:interlacing} Let $\lambda_{1} \ge \ldots \ge \lambda_{n}$ be the eigenvalues of a symmetric matrix $\mathbf{M}$, and let $\mu_{1}\ge\ldots\ge\mu_{m}$ be the eigenvalues of a \emph{principal $m \times m$ submatrix of $\mathbf{M}$} (i.e., a submatrix that is obtained by deleting the same set of rows and columns from $\mathbf{M}$). Then, $\lambda_{i}\ge\mu_{i}\ge\lambda_{n-m+i}$ for $i=1,\ldots,m$. \end{theorem} \begin{definition}[Completely Positive Matrices] \label{definition: completely positive matrix} A matrix $\A \in \Reals^{n \times n}$ is called {\em completely positive} if there exists a matrix ${\mathbf{B}} \in \Reals^{n \times m}$, all of whose entries are nonnegative, such that $\A = {\mathbf{B}} {\mathbf{B}}^\mathrm{T}$. \end{definition} A completely positive matrix is therefore symmetric and all its entries are nonnegative. The interested reader is referred to the textbook \cite{ShakedBbook19} on completely positive matrices, also addressing their connections to graph theory. \begin{definition}[Positive Semidefinite Matrices] \label{definition: positive semidefinite matrix} A matrix $\A \in \Reals^{n \times n}$ is called {\em positive semidefinite} if $\A$ is symmetric, and the inequality $\underline{x}^{\mathrm{T}} \A \underline{x} \geq 0$ holds for every column vector $\underline{x} \in \Reals^n$. \end{definition} \begin{proposition} \label{proposition: positive semidefinite matrix} A symmetric matrix is positive semidefinite if and only if one of the following equivalent conditions holds: \begin{enumerate} \item All its eigenvalues are nonnegative (real) numbers. \item There exists a matrix ${\mathbf{B}} \in \Reals^{n \times m}$ such that $\A = {\mathbf{B}} {\mathbf{B}}^\mathrm{T}$. \end{enumerate} \end{proposition} The next result readily follows. \begin{corollary} \label{corollary: c.p. yields p.s.} A completely positive matrix is positive semidefinite.
\end{corollary} \begin{remark} \label{remark: matrix of order 5} Regarding Corollary~\ref{corollary: c.p. yields p.s.}, it is natural to ask whether, under certain conditions, a positive semidefinite matrix all of whose entries are nonnegative is also completely positive. By \cite[Theorem~3.35]{ShakedBbook19}, this holds for all square matrices of order $n \leq 4$. Moreover, \cite[Example~3.45]{ShakedBbook19} also presents an explicit example of a matrix of order~5 that is positive semidefinite with all nonnegative entries but is not completely positive. \end{remark} \subsection{Graph Theory Preliminaries} \label{subsection: Graph Theory Preliminaries} A graph $\Gr{G} = (\V{\Gr{G}}, \E{\Gr{G}})$ forms a pair where $\V{\Gr{G}}$ is a set of vertices and $\E{\Gr{G}}\subseteq \V{\Gr{G}} \times \V{\Gr{G}}$ is a set of edges. In this paper, all graphs are assumed to be \begin{itemize} \item {\em finite} - $\bigcard{\V{\Gr{G}}}<\infty$, \item {\em simple} - $\Gr{G}$ has no parallel edges and no self-loops, \item {\em undirected} - the edges in $\Gr{G}$ are undirected. \end{itemize} We use the following terminology: \begin{itemize} \item The {\em degree}, $d(v)$, of a vertex $v\in \V{\Gr{G}}$ is the number of vertices in $\Gr{G}$ that are adjacent to $v$. \item A {\em walk} in a graph $\Gr{G}$ is a sequence of vertices in $\Gr{G}$, where every two consecutive vertices in the sequence are adjacent in $\Gr{G}$. \item A {\em path} in a graph is a walk with no repeated vertices. \item A {\em cycle} $\Cycle$ is a closed walk, obtained by adding an edge to a path in $\Gr{G}$. \item The {\em length of a path or a cycle} is equal to its number of edges. A {\em triangle} is a cycle of length~3. \item A {\em connected graph} is a graph in which every pair of distinct vertices is connected by a path. \item The {\em distance} between two vertices in a connected graph is the length of a shortest path that connects them.
\item The {\em diameter} of a connected graph is the maximum distance between any two vertices in the graph, and the diameter of a disconnected graph is set to be infinity. \item The {\em connected component} of a vertex $v \in \V{\Gr{G}}$ is the subgraph whose vertex set $\set{U} \subseteq \V{\Gr{G}}$ consists of all the vertices that are connected to $v$ by any path (including the vertex $v$ itself), and its edge set consists of all the edges in $\E{\Gr{G}}$ whose two endpoints are contained in the vertex set $\set{U}$. \item A {\em tree} is a connected graph that has no cycles (i.e., it is a connected and {\em acyclic} graph). \item A {\em spanning tree} of a connected graph $\Gr{G}$ is a tree with the vertex set $\V{\Gr{G}}$ and some of the edges of~$\Gr{G}$. \item A graph is {\em regular} if all its vertices have the same degree. \item A {\em $d$-regular} graph is a regular graph all of whose vertices have degree $d$. \item A {\em bipartite graph} is a graph $\Gr{G}$ whose vertex set is a disjoint union of two subsets such that no two vertices in the same subset are adjacent. \item A {\em complete bipartite graph} is a bipartite graph where every vertex in each of the two partite sets is adjacent to all the vertices in the other partite set. \end{itemize} \begin{definition}[Complement of a graph] The \emph{complement} of a graph $\Gr{G}$, denoted by $\CGr{G}$, is a graph whose vertex set is $\V{\Gr{G}}$, and its edge set is the complement set $\CGr{\E{\Gr{G}}}$. Every vertex in $\V{\Gr{G}}$ is nonadjacent to itself in $\Gr{G}$ and $\CGr{G}$, so $\{i,j\} \in \E{\CGr{G}}$ if and only if $\{i, j\} \notin \E{\Gr{G}}$ with $i \neq j$. \end{definition} \begin{definition}[Disjoint union of graphs] \label{def:disjoint_union_graphs} Let $\Gr{G}_1, \ldots, \Gr{G}_k$ be graphs.
If the vertex sets in these graphs are not pairwise disjoint, let $\Gr{G}'_2, \ldots, \Gr{G}'_k$ be isomorphic copies of $\Gr{G}_2, \ldots, \Gr{G}_k$, respectively, such that none of the graphs $\Gr{G}_1, \Gr{G}'_2, \ldots, \Gr{G}'_k$ have a vertex in common. The disjoint union of these graphs, denoted by $\Gr{G} = \Gr{G}_1 \DU \ldots \DU \Gr{G}_k$, is a graph whose vertex and edge sets are equal to the disjoint unions of the vertex and edge sets of $\Gr{G}_1, \Gr{G}'_2, \ldots, \Gr{G}'_k$ ($\Gr{G}$ is defined up to an isomorphism). \end{definition} \begin{definition} Let $k\in \naturals$ and let $\Gr{G}$ be a graph. Define $k \Gr{G} = \Gr{G} \DU \Gr{G} \DU \ldots \DU \Gr{G}$ to be the disjoint union of $k$ copies of $\Gr{G}$. \end{definition} \begin{definition}[Join of graphs] \label{definition: join of graphs} Let $\Gr{G}$ and $\Gr{H}$ be two graphs with disjoint vertex sets. The join of $\Gr{G}$ and $\Gr{H}$ is defined to be their disjoint union, together with all the edges that connect the vertices in $\Gr{G}$ with the vertices in $\Gr{H}$. It is denoted by $\Gr{G} \vee \Gr{H}$. \end{definition} \begin{definition}[Induced subgraphs] \label{definition: Induced subgraphs} Let $\Gr{G}=(\Vertex,\Edge)$ be a graph, and let $\set{U} \subseteq \Vertex$. The \emph{subgraph of $\Gr{G}$ induced by $\set{U}$} is the graph formed by the vertices in $\set{U}$ and the edges in $\Gr{G}$ that have both endpoints in $\set{U}$. We say that $\Gr{H}$ is an \emph{induced subgraph of $\Gr{G}$} if it is induced by some $\set{U} \subseteq \Vertex$. \end{definition} \begin{definition}[Strongly regular graphs] \label{definition: strongly regular graphs} A regular graph $\Gr{G}$ that is neither complete nor empty is called a {\em strongly regular} graph with parameters $(n,d,\lambda,\mu)$, where $\lambda$ and $\mu$ are nonnegative integers, if the following conditions hold: \begin{enumerate}[(1)] \item \label{Item 1 - definition of SRG} $\Gr{G}$ is a $d$-regular graph on $n$ vertices.
\item \label{Item 2 - definition of SRG} Every two adjacent vertices in $\Gr{G}$ have exactly $\lambda$ common neighbors. \item \label{Item 3 - definition of SRG} Every two distinct and nonadjacent vertices in $\Gr{G}$ have exactly $\mu$ common neighbors. \end{enumerate} The family of strongly regular graphs with these four specified parameters is denoted by $\srg{n}{d}{\lambda}{\mu}$. It is important to note that a family of the form $\srg{n}{d}{\lambda}{\mu}$ may contain multiple nonisomorphic strongly regular graphs. Throughout this work, we refer to a strongly regular graph as $\srg{n}{d}{\lambda}{\mu}$ if it belongs to this family. \end{definition} \begin{proposition}[Feasible parameter vectors of strongly regular graphs] \label{proposition: necessary condition for the parameter vector of SRGs} The four parameters of a strongly regular graph $\srg{n}{d}{\lambda}{\mu}$ satisfy the equality \begin{align} \label{eq: necessary condition for the parameter vector of SRGs} (n-d-1)\mu = d(d-\lambda-1). \end{align} \end{proposition} \begin{remark} \label{remark: necessary condition for the parameter vector of SRGs} Equality~\eqref{eq: necessary condition for the parameter vector of SRGs} provides a necessary, but not sufficient, condition for the existence of a strongly regular graph $\srg{n}{d}{\lambda}{\mu}$. For example, as shown in \cite{Haemers93}, no $(76,21,2,7)$ strongly regular graph exists, even though the condition $(n-d-1)\mu = 378 = d(d-\lambda-1)$ is satisfied in this case. \end{remark} \begin{notation}[Classes of graphs] \noindent \begin{itemize} \item $\CoG{n}$ is the complete graph on $n$ vertices. \item $\PathG{n}$ is the path graph on $n$ vertices. \item $\CoBG{\ell}{r}$ is the complete bipartite graph whose partite sets have sizes $\ell$ and $r$, respectively (with possible equality between $\ell$ and $r$). \item $\SG{n}$ is the star graph on $n$ vertices, $\SG{n} = \CoBG{1}{n-1}$.
\end{itemize} \end{notation} \begin{definition}[Integer-valued functions of a graph] \noindent \begin{itemize} \item Let $k \in \naturals $. A \emph{proper} $k$-\emph{coloring} of a graph $\Gr{G}$ is a function $c \colon \V{\Gr{G}} \to \{1,2,\ldots,k\}$, where $c(v) \ne c(u)$ for every $\{u,v\}\in \E{\Gr{G}}$. The \emph{chromatic number} of $\Gr{G}$, denoted by $\chrnum{\Gr{G}}$, is the smallest $k$ for which there exists a proper $k$-coloring of $\Gr{G}$. \item A \emph{clique} in a graph $\Gr{G}$ is a subset of vertices $U\subseteq \V{\Gr{G}}$ where the subgraph induced by $U$ is a complete graph. The \emph{clique number} of $\Gr{G}$, denoted by $\omega(\Gr{G})$, is the largest size of a clique in $\Gr{G}$; i.e., it is the largest order of an induced complete subgraph in $\Gr{G}$. \item An \emph{independent set} in a graph $\Gr{G}$ is a subset of vertices $U\subseteq \V{\Gr{G}}$, where $\{u,v\} \notin \E{\Gr{G}}$ for every $u,v \in U$. The \emph{independence number} of $\Gr{G}$, denoted by $\indnum{\Gr{G}}$, is the largest size of an independent set in $\Gr{G}$. \end{itemize} \end{definition} \begin{definition}[Orthogonal and orthonormal representations of a graph] \label{def: orthogonal representation} Let $\Gr{G}$ be a finite, simple, and undirected graph, and let $d \in \naturals$. \begin{itemize} \item An {\em orthogonal representation} of the graph $\Gr{G}$ in the $d$-dimensional Euclidean space $\Reals^d$ assigns to each vertex $i \in \V{\Gr{G}}$ a nonzero vector ${\bf{u}}_i \in \Reals^d$ such that ${\bf{u}}_i^{\mathrm{T}} {\bf{u}}_j = 0$ for every $\{i, j\} \notin \E{\Gr{G}}$ with $i \neq j$. In other words, for every two distinct and nonadjacent vertices in the graph, their assigned nonzero vectors should be orthogonal in $\Reals^d$. \item An {\em orthonormal representation} of $\Gr{G}$ is an orthogonal representation whose vectors are, in addition, unit vectors, i.e., $\| {\bf{u}}_i \| = 1$ for all $i \in \V{\Gr{G}}$.
\item In an orthogonal (orthonormal) representation of $\Gr{G}$, every two nonadjacent vertices in $\Gr{G}$ are, by definition, mapped to orthogonal (respectively, orthonormal) vectors, whereas adjacent vertices need not be mapped to nonorthogonal vectors. If ${\bf{u}}_i^{\mathrm{T}} {\bf{u}}_j \neq 0$ for all $\{i, j\} \in \E{\Gr{G}}$, then such a representation of $\Gr{G}$ is called {\em faithful}. \end{itemize} \end{definition} \begin{definition}[Lov\'{a}sz $\vartheta$-function \cite{Lovasz79_IT}] \label{definition: Lovasz theta function} Let $\Gr{G}$ be a finite, simple, and undirected graph. Then, the {\em Lov\'{a}sz $\vartheta$-function of $\Gr{G}$} is defined as \begin{eqnarray} \label{eq: Lovasz theta function} \vartheta(\Gr{G}) \triangleq \min_{{\bf{c}}, \{{\bf{u}}_i\}} \, \max_{i \in \V{\Gr{G}}} \, \frac1{\bigl( {\bf{c}}^{\mathrm{T}} {\bf{u}}_i \bigr)^2} \, , \end{eqnarray} where the minimum on the right-hand side of \eqref{eq: Lovasz theta function} is taken over all unit vectors ${\bf{c}}$ and all orthonormal representations $\{{\bf{u}}_i: i \in \V{\Gr{G}} \}$ of $\Gr{G}$. In \eqref{eq: Lovasz theta function}, it suffices to consider orthonormal representations in a space of dimension at most $n = \card{\V{\Gr{G}}}$. \end{definition} The Lov\'{a}sz $\vartheta$-function of a graph $\Gr{G}$ can be calculated by solving (numerically) a convex optimization problem. Let ${\bf{A}} = (A_{i,j})$ be the $n \times n$ adjacency matrix of $\Gr{G}$ with $n \triangleq \card{\V{\Gr{G}}}$. The Lov\'{a}sz $\vartheta$-function $\vartheta(\Gr{G})$ can be expressed as the solution of the following semidefinite programming (SDP) problem: \vspace*{0.2cm} \begin{eqnarray} \label{eq: SDP problem - Lovasz theta-function} \mbox{\fbox{$ \begin{array}{l} \text{maximize} \; \; \mathrm{Tr}({\bf{B}} \J{n}) \\ \text{subject to} \\ \begin{cases} {\bf{B}} \succeq 0, \\ \mathrm{Tr}({\bf{B}}) = 1, \\ A_{i,j} = 1 \; \Rightarrow \; B_{i,j} = 0, \quad i,j \in \OneTo{n}.
\end{cases} \end{array}$}} \end{eqnarray} \vspace*{0.1cm} There exist efficient convex optimization algorithms (e.g., interior-point methods) to compute $\vartheta(\Gr{G})$, for every graph $\Gr{G}$, with a precision of $r$ decimal digits and a computational complexity that is polynomial in $n$ and $r$. The reader is referred to Section~2.5 of \cite{Sason2024} for an account of the various interesting properties of the Lov\'{a}sz $\vartheta$-function. Among these properties, the sandwich theorem states that for every graph $\Gr{G}$, the following inequalities hold: \begin{align} \label{eq1: sandwich} \indnum{\Gr{G}} \leq \vartheta(\Gr{G}) \leq \chrnum{\CGr{G}}, \\ \label{eq2: sandwich} \clnum{\Gr{G}} \leq \vartheta(\CGr{G}) \leq \chrnum{\Gr{G}}. \end{align} The usefulness of \eqref{eq1: sandwich} and \eqref{eq2: sandwich} lies in the fact that while the independence, clique, and chromatic numbers of a graph are NP-hard to compute, the Lov\'{a}sz $\vartheta$-function can be efficiently computed as a bound in these inequalities by solving the convex optimization problem in \eqref{eq: SDP problem - Lovasz theta-function}. \bigskip \subsection{Matrices associated with a graph} \label{subsection: Matrices associated with a graph} \subsubsection{Four matrices associated with a graph} \noindent \vspace*{0.1cm} Let $\Gr{G}=(\Vertex,\Edge)$ be a graph with vertices $\left\{ v_1, \ldots, v_n \right\}$. There are several matrices associated with $\Gr{G}$. In this survey, we consider four of them, all of which are symmetric matrices in $\mathbb{R}^{n\times n}$: the \emph{adjacency matrix} ($\A$), the \emph{Laplacian matrix} ($\LM$), the \emph{signless Laplacian matrix} ($\Q$), and the \emph{normalized Laplacian matrix} (${\bf{\mathcal{L}}}$).
\begin{enumerate} \item The adjacency matrix of a graph $\Gr{G}$, denoted by $\A = \A(\Gr{G})$, has the binary-valued entries \begin{align} \label{eq: adjacency matrix} (\A(\Gr{G}))_{i,j}= \begin{cases} 1 & \mbox{if} \, \{v_i,v_j\} \in \E{\Gr{G}}, \\ 0 & \mbox{if} \, \{v_i,v_j\} \notin \E{\Gr{G}}. \end{cases} \end{align} \item The Laplacian matrix of a graph $\Gr{G}$, denoted by $\LM = \LM(\Gr{G})$, is given by \begin{align} \LM(\Gr{G}) = \D(\Gr{G})-\A(\Gr{G}), \end{align} where \begin{align} \D(\Gr{G}) = \diag{d(v_1), d(v_2), \ldots ,d(v_n)} \end{align} is the {\em diagonal matrix} whose entries in the principal diagonal are the degrees of the $n$ vertices of $\Gr{G}$. \item The signless Laplacian matrix of a graph $\Gr{G}$, denoted by $\Q = \Q(\Gr{G})$, is given by \begin{align} \label{eq: signless Laplacian martix} \Q(\Gr{G}) = \D(\Gr{G})+\A(\Gr{G}). \end{align} \item The normalized Laplacian matrix of a graph $\Gr{G}$, denoted by $\mathcal{L}(\Gr{G})$, is given by \begin{align} \label{eq: normalized Laplacian matrix} \mathcal{L}(\Gr{G}) = \D^{-\frac12}(\Gr{G}) \, \LM(\Gr{G}) \, \D^{-\frac12}(\Gr{G}), \end{align} where \begin{align} \D^{-\frac12}(\Gr{G}) = \diag{d^{-\frac12}(v_1), d^{-\frac12}(v_2), \ldots, d^{-\frac12}(v_n)}, \end{align} with the convention that if $v \in \V{\Gr{G}}$ is an isolated vertex in $\Gr{G}$ (i.e., $d(v)=0$), then $d^{-\frac12}(v) = 0$. The entries of ${\bf{\mathcal{L}}} = (\mathcal{L}_{i,j})$ are given by \begin{align} \mathcal{L}_{i,j} = \begin{dcases} \begin{array}{cl} 1, \quad & \mbox{if $i=j$ and $d(v_i) \neq 0$,} \\[0.2cm] -\dfrac{1}{\sqrt{d(v_i) \, d(v_j)}}, \quad & \mbox{if $i \neq j$ and $\{v_i,v_j\} \in \E{\Gr{G}}$}, \\[0.5cm] 0, \quad & \mbox{otherwise}. \end{array} \end{dcases} \end{align} \end{enumerate} In the continuation of this section, we also occasionally refer to two other matrices that are associated with undirected graphs.
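As a concrete illustration of the four definitions above, the matrices can be assembled directly from the adjacency matrix. The following is a minimal \texttt{numpy} sketch (not part of the formal development; the variable names are ours) for the path graph $\PathG{3}$:

```python
import numpy as np

# Adjacency matrix of the path graph P_3 (vertices v1 - v2 - v3).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

deg = A.sum(axis=1)        # vertex degrees: (1, 2, 1)
D = np.diag(deg)           # diagonal degree matrix
L = D - A                  # Laplacian matrix
Q = D + A                  # signless Laplacian matrix

# D^{-1/2}, with the convention d^{-1/2}(v) = 0 for isolated vertices.
d_inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5 * (deg > 0)
calL = np.diag(d_inv_sqrt) @ L @ np.diag(d_inv_sqrt)   # normalized Laplacian

assert np.allclose(L.sum(axis=1), 0)      # each row of L sums to zero
assert np.allclose(np.diag(calL), 1.0)    # diagonal of calL is 1 (no isolated vertices)
```

The row sums of $\LM$ vanish because each diagonal entry $d(v_i)$ cancels the $-1$ entries in its row, in agreement with the definitions above.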
\begin{definition} \label{definition: incidence matrix} Let $\Gr{G}$ be a graph with $n$ vertices and $m$ edges. The {\em incidence matrix} of $\Gr{G}$, denoted by ${\mathbf{B}} = {\mathbf{B}}(\Gr{G})$, is an $n \times m$ matrix with binary entries, defined as follows: \begin{align} B_{i,j} = \begin{cases} 1 & \text{if vertex \(v_i \in \V{\Gr{G}}\) is incident to edge \(e_j \in \E{\Gr{G}}\)}, \\ 0 & \text{if vertex \(v_i \in \V{\Gr{G}}\) is not incident to edge \(e_j \in \E{\Gr{G}}\)}. \end{cases} \end{align} For an undirected graph, each edge $e_j$ connects two vertices $v_i$ and $v_k$, and the corresponding column in $\mathbf{B}$ has exactly two $1$'s, one for each vertex. \end{definition} \begin{definition} \label{definition: oriented incidence matrix} Let $\Gr{G}$ be a graph with $n$ vertices and $m$ edges. An {\em oriented incidence matrix} of $\Gr{G}$, denoted by ${\mathbf{N}} = {\mathbf{N}}(\Gr{G})$, is an $n \times m$ matrix with ternary entries from $\{-1, 0, 1\}$, defined as follows. One first assigns an arbitrary orientation to each edge in $\Gr{G}$, and then defines \begin{align} N_{i,j} = \begin{cases} -1 & \text{if vertex \(v_i \in \V{\Gr{G}}\) is the tail (starting vertex) of edge \(e_j \in \E{\Gr{G}}\)}, \\ +1 & \text{if vertex \(v_i \in \V{\Gr{G}}\) is the head (ending vertex) of edge \(e_j \in \E{\Gr{G}}\)}, \\ \hspace*{0.2cm} 0 & \text{if vertex \(v_i \in \V{\Gr{G}}\) is not incident to edge \(e_j \in \E{\Gr{G}}\)}. \end{cases} \end{align} Consequently, each column of $\mathbf{N}$ contains exactly one entry equal to 1 and one entry equal to $-1$, representing the head and tail of the corresponding oriented edge in the graph, respectively, with all other entries in the column being zeros. \end{definition} For $X\in \left\{ \A, \LM, \Q, \mathcal{L} \right\}$, the \emph{$X$-spectrum} of a graph $\Gr{G}$, denoted by $\sigma_X(\Gr{G})$, is the multiset of the eigenvalues of $X(\Gr{G})$.
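As a numerical check, the four spectra of a small graph can be computed directly. The following \texttt{numpy} sketch (the helper \texttt{spectra} is ours, and it assumes a graph with no isolated vertices) computes them for the complete bipartite graph $\CoBG{2}{3}$:

```python
import numpy as np

def spectra(A):
    """Return the sorted A-, L-, Q-, and normalized-Laplacian spectra of the
    graph whose adjacency matrix is A (assumed to have no isolated vertices)."""
    deg = A.sum(axis=1)
    D = np.diag(deg)
    L, Q = D - A, D + A
    half = np.diag(deg ** -0.5)         # D^{-1/2}
    calL = half @ L @ half
    return {name: np.sort(np.linalg.eigvalsh(M))
            for name, M in (("A", A), ("L", L), ("Q", Q), ("calL", calL))}

# Adjacency matrix of K_{2,3} in block form.
A = np.block([[np.zeros((2, 2)), np.ones((2, 3))],
              [np.ones((3, 2)), np.zeros((3, 3))]])
s = spectra(A)

assert np.allclose(s["A"], [-np.sqrt(6), 0, 0, 0, np.sqrt(6)])
assert np.allclose(s["L"], [0, 2, 2, 3, 5])
assert np.allclose(s["Q"], [0, 2, 2, 3, 5])     # equals the L-spectrum (bipartite graph)
assert np.allclose(s["calL"], [0, 1, 1, 1, 2])
```

Since $\CoBG{2}{3}$ is bipartite, its $\LM$- and $\Q$-spectra coincide, as the assertions confirm.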
We denote the eigenvalues of $\A$, $\LM$, $\Q$, and $\mathcal{L}$, respectively, by \begin{align} \label{eq2:26.09.23} & \Eigval{1}{\Gr{G}} \geq \Eigval{2}{\Gr{G}} \geq \ldots \geq \Eigval{n}{\Gr{G}}, \\ \label{eq3:26.09.23} & \mu_1(\Gr{G}) \leq \mu_2(\Gr{G}) \leq \ldots \leq \mu_n(\Gr{G}), \\ \label{eq4:26.09.23} & \nu_1(\Gr{G}) \geq \nu_2(\Gr{G}) \geq \ldots \geq \nu_n(\Gr{G}), \\ \label{eq5:26.09.23} & \delta_1(\Gr{G}) \leq \delta_2(\Gr{G}) \leq \ldots \leq \delta_n(\Gr{G}). \end{align} \begin{example} Consider the complete bipartite graph $\Gr{G} = \CoBG{2}{3}$ with the adjacency matrix $$\A(\Gr{G})= \begin{pmatrix} {\bf{0}}_{2,2} & \J{2,3} \\ \J{3,2} & {\bf{0}}_{3,3} \end{pmatrix}.$$ The spectra of $\Gr{G}$ can be verified to be as follows: \begin{enumerate} \item The $\A$-spectrum of $\Gr{G}$ is \begin{align} \sigma_{\A}(\Gr{G})=\Bigl\{ -\sqrt{6}, [0]^{3}, \sqrt{6}\Bigr\}, \end{align} where the notation $[\lambda]^m$ means that $\lambda$ is an eigenvalue with multiplicity $m$. \item The $\LM$-spectrum of $\Gr{G}$ is \begin{align} \sigma_{\LM}(\Gr{G})=\Bigl\{ 0, [2]^{2}, 3, 5\Bigr\} . \end{align} \item The $\Q$-spectrum of $\Gr{G}$ is \begin{align} \sigma_{\Q}(\Gr{G})=\Bigl\{ 0, [2]^{2}, 3, 5\Bigr\} . \end{align} \item The ${\bf{\mathcal{L}}}$-spectrum of $\Gr{G}$ is \begin{align} \sigma_{{\bf{\mathcal{L}}}}(\Gr{G})=\Bigl\{ 0, [1]^{3}, 2 \Bigr\} . \end{align} \end{enumerate} \end{example} \begin{remark} If $\Gr{H}$ is an induced subgraph of a graph $\Gr{G}$, then $\A(\Gr{H})$ is a principal submatrix of $\A(\Gr{G})$. However, since the degrees of the remaining vertices are affected by the removal of vertices when forming the induced subgraph $\Gr{H}$ from the graph $\Gr{G}$, this property does not hold for the other three associated matrices discussed in this paper (namely, the Laplacian, signless Laplacian, and normalized Laplacian matrices).
\end{remark} \begin{definition} Let $\Gr{G}$ be a graph, and let $\CGr{G}$ be the complement graph of $\Gr{G}$. Define the following matrices: \begin{enumerate} \item $\overline{\A}(\Gr{G}) = \A(\overline{\Gr{G}})$. \item $\overline{\LM}(\Gr{G}) = \LM(\overline{\Gr{G}})$. \item $\overline{\Q}(\Gr{G}) = \Q(\overline{\Gr{G}})$. \item $\overline{{\bf{\mathcal{L}}}}(\Gr{G}) = {\bf{\mathcal{L}}}(\overline{\Gr{G}})$. \end{enumerate} \end{definition} \begin{definition} Let $\mathcal{X} \subseteq \Gmats$. The $\mathcal{X}$-spectrum of a graph $\Gr{G}$ is the list of the spectra $\sigma_X(\Gr{G})$ for every $X\in \mathcal{X}$. \end{definition} Observe that if $\mathcal{X} = \{ X \}$ is a singleton, then the $\mathcal{X}$-spectrum is equal to the $X$-spectrum. We now describe some important applications of the four matrices. \subsubsection{Properties of the adjacency matrix} \begin{theorem}[Number of walks of a given length between two fixed vertices] \label{thm: number of walks of a given length} Let $\Gr{G} = (\Vertex, \Edge)$ be a graph with a vertex set $\Vertex = \V{\Gr{G}} = \{ v_1, \ldots, v_n\}$, and let $\A = \A(\Gr{G})$ be the adjacency matrix of $\Gr{G}$. Then, the number of walks of length $\ell$ with the fixed endpoints $v_i$ and $v_j$ is equal to $(\A^\ell)_{i,j}$. \end{theorem} \begin{corollary}[Number of closed walks of a given length] \label{corollary: Number of Closed Walks of a Given Length} Let $\Gr{G} = (\Vertex, \Edge)$ be a simple undirected graph on $n$ vertices with an adjacency matrix $\A = \A(\Gr{G})$, and let its spectrum (with respect to $\A$) be given by $\{\lambda_j\}_{j=1}^n$. Then, for all $\ell \in \naturals$, the number of closed walks of length $\ell$ in $\Gr{G}$ is equal to $\sum_{j=1}^n \lambda_j^{\ell}$.
\end{corollary} \begin{corollary}[Number of edges and triangles in a graph] \label{corollary: number of edges and triangles in a graph} Let $\Gr{G}$ be a simple undirected graph with $n = \card{\V{\Gr{G}}}$ vertices, $e = \card{\E{\Gr{G}}}$ edges, and $t$ triangles. Let $\A = \A(\Gr{G})$ be the adjacency matrix of $\Gr{G}$, and let $\{\lambda_j\}_{j=1}^n$ be its adjacency spectrum. Then, \begin{align} & \sum_{j=1}^n \lambda_j = \mathrm{tr}(\A) = 0, \label{eq: trace of A is zero} \\ & \sum_{j=1}^n \lambda_j^2 = \mathrm{tr}(\A^2) = 2 e, \label{eq: number of edges from A} \\ & \sum_{j=1}^n \lambda_j^3 = \mathrm{tr}(\A^3) = 6 t. \label{eq: number of triangles from A} \end{align} \end{corollary} For a $d$-regular graph, the largest eigenvalue of its adjacency matrix is equal to~$d$. Consequently, by Eq.~\eqref{eq: number of edges from A}, for $d$-regular graphs, $\sum_j \lambda_j^2 = 2e = nd = n \lambda_1$. Interestingly, this turns out to be a necessary and sufficient condition for the regularity of a graph, which means that the adjacency spectrum enables one to identify whether a graph is regular. \begin{theorem} \cite[Corollary~3.2.2]{CvetkovicRS2010} \label{theorem: graph regularity from A-spectrum} A graph $\Gr{G}$ on $n$ vertices is regular if and only if \begin{align} \sum_{i=1}^n \lambda_i^2 = n \lambda_1, \end{align} where $\lambda_1$ is the largest eigenvalue of the adjacency matrix of $\Gr{G}$. \end{theorem} \begin{theorem}[The eigenvalues of strongly regular graphs] \label{theorem: eigenvalues of srg} The following spectral properties are satisfied by the family of strongly regular graphs: \begin{enumerate}[(1)] \item \label{Item 1: eigenvalues of srg} A strongly regular graph has at most three distinct eigenvalues. \item \label{Item 2: eigenvalues of srg} Let $\Gr{G}$ be a connected strongly regular graph, and let its parameters be $\SRG(n,d,\lambda,\mu)$.
Then, the largest eigenvalue of its adjacency matrix is $\Eigval{1}{\Gr{G}} = d$ with multiplicity~1, and the other two distinct eigenvalues of its adjacency matrix are given by \begin{align} \label{eigs-SRG} p_{1,2} = \tfrac12 \, \Biggl( \lambda - \mu \pm \sqrt{ (\lambda-\mu)^2 + 4(d-\mu) } \, \Biggr), \end{align} with the respective multiplicities \begin{align} \label{eig-multiplicities-SRG} m_{1,2} = \tfrac12 \, \Biggl( n-1 \mp \frac{2d+(n-1)(\lambda-\mu)}{\sqrt{(\lambda-\mu)^2+4(d-\mu)}} \, \Biggr). \end{align} \item \label{Item 3: eigenvalues of srg} A connected regular graph with exactly three distinct eigenvalues is strongly regular. \item \label{Item 4: eigenvalues of srg} Strongly regular graphs for which $2d+(n-1)(\lambda-\mu) \neq 0$ have integral eigenvalues and the multiplicities of $p_{1,2}$ are distinct. \item \label{Item 5: eigenvalues of srg} A connected regular graph is strongly regular if and only if it has three distinct eigenvalues, where the largest eigenvalue is of multiplicity~1. \item \label{Item 6: eigenvalues of srg} A disconnected strongly regular graph is a disjoint union of $m$ identical complete graphs $\CoG{r}$, where $m \geq 2$ and $r \in \naturals$. It belongs to the family $\srg{mr}{r-1}{r-2}{0}$, and its adjacency spectrum is $\{ (r-1)^{[m]}, (-1)^{[m(r-1)]} \}$, where superscripts indicate the multiplicities of the eigenvalues, thus having two distinct eigenvalues. \end{enumerate} \end{theorem} The following result follows readily from Theorem~\ref{theorem: eigenvalues of srg}. \begin{corollary} \label{corollary: cospectral SRGs} Strongly regular graphs with identical parameters $(n,d,\lambda,\mu)$ are cospectral. \end{corollary} \begin{remark} \label{remark: NICS SRGs} Strongly regular graphs having identical parameters $(n, d, \lambda, \mu)$ are cospectral but may not be isomorphic. 
For instance, Chang graphs form a set of three nonisomorphic strongly regular graphs with identical parameters $\srg{28}{12}{6}{4}$ \cite[Section~10.11]{BrouwerM22}. Consequently, the three Chang graphs are strongly regular NICS graphs. \end{remark} An important class of strongly regular graphs, for which $2d+(n-1)(\lambda-\mu)=0$, is given by the family of conference graphs. \begin{definition}[Conference graphs] \label{definition: conference graphs} A conference graph on $n$ vertices is a strongly regular graph with the parameters $\srg{n}{\tfrac12(n-1)}{\tfrac14(n-5)}{\tfrac14(n-1)}$, where $n$ must satisfy $n=4k+1$ with $k \in \naturals$. \end{definition} If $\Gr{G}$ is a conference graph on $n$ vertices, then so is its complement $\CGr{G}$; it is, however, not necessarily self-complementary. By Theorem~\ref{theorem: eigenvalues of srg}, the distinct eigenvalues of the adjacency matrix of $\Gr{G}$ are given by $\tfrac12 (n-1)$, $\tfrac12 (\hspace*{-0.1cm} \sqrt{n}-1)$, and $-\tfrac12 (\hspace*{-0.1cm} \sqrt{n}+1)$ with multiplicities $1, \tfrac12 (n-1)$, and $\tfrac12 (n-1)$, respectively. In contrast to Item~\ref{Item 4: eigenvalues of srg} of Theorem~\ref{theorem: eigenvalues of srg}, the eigenvalues $\tfrac12 (\hspace*{-0.1cm} \sqrt{n}-1)$ and $-\tfrac12 (\hspace*{-0.1cm} \sqrt{n}+1)$ are not necessarily integers. For instance, the cycle graph $\CG{5}$, which is a conference graph, has an adjacency spectrum $\bigl\{2, \bigl[\tfrac12 (\hspace*{-0.1cm} \sqrt{5}-1) \bigr]^{2}, \bigl[-\tfrac12 (\hspace*{-0.1cm} \sqrt{5}+1) \bigr]^{2} \bigr\}$. Thus, apart from the largest eigenvalue, the other eigenvalues are irrational numbers. \subsubsection{Properties of the Laplacian matrix} \begin{theorem} \label{theorem: On the Laplacian matrix of a graph} Let $\Gr{G}$ be a finite, simple, and undirected graph, and let $\LM$ be the Laplacian matrix of $\Gr{G}$.
Then, \begin{enumerate} \item \label{Item 1: Laplacian matrix of a graph} The Laplacian matrix $\LM = {\mathbf{N}} {\mathbf{N}}^{\mathrm{T}}$ is positive semidefinite, where ${\mathbf{N}}$ is the oriented incidence matrix of $\Gr{G}$ (see Definition~\ref{definition: oriented incidence matrix} and \cite[p.~185]{CvetkovicRS2010}). \item \label{Item 2: Laplacian matrix of a graph} The smallest eigenvalue of $\, \LM$ is zero, with a multiplicity equal to the number of components in $\Gr{G}$ (see \cite[Theorem~7.1.2]{CvetkovicRS2010}). \item \label{Item 3: Laplacian matrix of a graph} The size of the graph, $\bigcard{\E{\Gr{G}}}$, equals one-half of the sum of the eigenvalues of $\, \LM$, counted with multiplicities (see \cite[Eq.~(7.4)]{CvetkovicRS2010}). \end{enumerate} \end{theorem} The following celebrated theorem provides an operational meaning of the $\LM$-spectrum of graphs in counting their number of spanning trees. \begin{theorem}[Kirchhoff's Matrix-Tree Theorem \cite{Kirchhoff1958}] \label{theorem: number of spanning trees} The number of spanning trees in a connected and simple graph $\Gr{G}$ on $n$ vertices is determined by the $n-1$ nonzero eigenvalues of the Laplacian matrix, and it is equal to $\frac{1}{n} \prod_{\ell=2}^{n} \mu_\ell(\Gr{G})$. \end{theorem} \begin{corollary}[Cayley's Formula \cite{Cayley1889}] \label{corollary: number of spanning trees} The number of spanning trees of $\CoG{n}$ is $n^{n-2}$. \end{corollary} \begin{proof} The $\LM$-spectrum of $\CoG{n}$ is given by $\{0, [n]^{n-1}\}$, and the result readily follows from Theorem~\ref{theorem: number of spanning trees}. \end{proof} \subsubsection{Properties of the signless Laplacian matrix} \begin{theorem} \label{theorem: On the signless Laplacian matrix of a graph} Let $\Gr{G}$ be a finite, simple, and undirected graph, and let $\Q$ be the signless Laplacian matrix of $\Gr{G}$.
Then, \begin{enumerate} \item \label{Item 1: signless Laplacian matrix of a graph} The matrix $\Q$ is positive semidefinite. Moreover, it is a completely positive matrix, expressed as $\Q = {\mathbf{B}} {\mathbf{B}}^{\mathrm{T}}$, where ${\mathbf{B}}$ is the incidence matrix of $\Gr{G}$ (see Definition~\ref{definition: incidence matrix} and \cite[Section~2.4]{CvetkovicRS2010}). \item \label{Item 2: signless Laplacian matrix of a graph} If $\Gr{G}$ is a connected graph, then it is bipartite if and only if the least eigenvalue of $\Q$ is equal to zero. In this case, $0$ is a simple $\Q$-eigenvalue (see \cite[Theorem~7.8.1]{CvetkovicRS2010}). \item \label{Item 3: signless Laplacian matrix of a graph} The multiplicity of 0 as an eigenvalue of $\Q$ is equal to the number of bipartite components in $\Gr{G}$ (see \cite[Corollary~7.8.2]{CvetkovicRS2010}). \item \label{Item 4: signless Laplacian matrix of a graph} The size of the graph, $\bigcard{\E{\Gr{G}}}$, is equal to one-half of the sum of the eigenvalues of~$\Q$, counted with multiplicities (see \cite[Corollary~7.8.9]{CvetkovicRS2010}). \end{enumerate} \end{theorem} The interested reader is referred to \cite{OliveiraLAK2010} for bounds on the $\Q$-spread (i.e., the difference between the largest and smallest eigenvalues of the signless Laplacian matrix), expressed as a function of the number of vertices in the graph. In regard to Item~\ref{Item 2: signless Laplacian matrix of a graph} of Theorem~\ref{theorem: On the signless Laplacian matrix of a graph}, the interested reader is referred to \cite{Cardoso2008} for a lower bound on the least eigenvalue of the signless Laplacian matrix for connected non-bipartite graphs, and to \cite{ChenH2018} for a lower bound on the least eigenvalue of the signless Laplacian matrix for a general simple graph with a fixed number of vertices and edges.
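Items~\ref{Item 1: signless Laplacian matrix of a graph} and~\ref{Item 2: signless Laplacian matrix of a graph} of Theorem~\ref{theorem: On the signless Laplacian matrix of a graph} can be checked numerically on small examples. The following \texttt{numpy} sketch (the helper names are ours) contrasts the bipartite cycle $\CG{4}$ with the odd cycle $\CG{5}$:

```python
import numpy as np

def incidence(A):
    """(Unoriented) incidence matrix B of the graph with adjacency matrix A."""
    n = A.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
    B = np.zeros((n, len(edges)))
    for k, (i, j) in enumerate(edges):
        B[i, k] = B[j, k] = 1.0   # each column has exactly two 1's
    return B

def cycle(n):
    """Adjacency matrix of the cycle graph C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

for n, bipartite in [(4, True), (5, False)]:
    A = cycle(n)
    Q = np.diag(A.sum(axis=1)) + A
    B = incidence(A)
    assert np.allclose(Q, B @ B.T)                 # Q = B B^T, hence psd
    least = np.linalg.eigvalsh(Q)[0]
    assert np.isclose(least, 0.0) == bipartite     # bipartite <=> least Q-eigenvalue is 0
```

For $\CG{5}$, the least $\Q$-eigenvalue equals $2 + 2\cos(4\pi/5) \approx 0.382 > 0$, consistent with the graph being non-bipartite.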
\subsubsection{Properties of the normalized Laplacian matrix} The normalized Laplacian matrix of a graph, defined in \eqref{eq: normalized Laplacian matrix}, exhibits several interesting spectral properties, which are introduced below. \begin{theorem} \cite{CvetkovicRS2010,CvetkovicRS2007} \label{theorem: On the normalized Laplacian matrix of a graph} Let $\Gr{G}$ be a finite, simple, and undirected graph, and let ${\bf{\mathcal{L}}}$ be the normalized Laplacian matrix of $\Gr{G}$. Then, \begin{enumerate} \item \label{Item 1: normalized Laplacian matrix of a graph} The eigenvalues of ${\bf{\mathcal{L}}}$ lie in the interval $[0,2]$ (see \cite[Section~7.7]{CvetkovicRS2010}). \item \label{Item 2: normalized Laplacian matrix of a graph} The number of components in $\Gr{G}$ is equal to the multiplicity of~0 as an eigenvalue of ${\bf{\mathcal{L}}}$ (see \cite[Theorem~7.7.3]{CvetkovicRS2010}). \item \label{Item 3: normalized Laplacian matrix of a graph} The largest eigenvalue of ${\bf{\mathcal{L}}}$ is equal to~2 if and only if the graph has a bipartite component (see \cite[Theorem~7.7.2(v)]{CvetkovicRS2010}). Furthermore, the number of the bipartite components of $\Gr{G}$ is equal to the multiplicity of~2 as an eigenvalue of~${\bf{\mathcal{L}}}$. \item \label{Item 4: normalized Laplacian matrix of a graph} The sum of its eigenvalues (including multiplicities) is less than or equal to the graph order $(n)$, with equality if and only if the graph has no isolated vertices (see \cite[Theorem~7.7.2(i)]{CvetkovicRS2010}). \end{enumerate} \end{theorem} \subsubsection{More on the spectral properties of the four associated matrices} \noindent The following theorem considers equivalent spectral properties of bipartite graphs. \begin{theorem} \label{theorem: equivalences for bipartite graphs} Let $\Gr{G}$ be a graph. The following are equivalent: \begin{enumerate} \item \label{Item 1: TFAE bipartite graphs} $\Gr{G}$ is a bipartite graph. 
\item \label{Item 2: TFAE bipartite graphs} $\Gr{G}$ does not have cycles of odd length. \item \label{Item 3: TFAE bipartite graphs} The $\A$-spectrum of $\Gr{G}$ is symmetric around zero; i.e., for every eigenvalue $\lambda$ of $\A(\Gr{G})$, $-\lambda$ is also an eigenvalue of the same multiplicity \cite[Theorem~3.2.3]{CvetkovicRS2010}. \item \label{Item 4: TFAE bipartite graphs} The $\LM$-spectrum and $\Q$-spectrum are identical (see \cite[Proposition~7.8.4]{CvetkovicRS2010}). \item \label{Item 5: TFAE bipartite graphs} In the ${\bf{\mathcal{L}}}$-spectrum, the eigenvalues $0$ and $2$ have the same multiplicity (see \cite[Corollary~7.7.4]{CvetkovicRS2010}). \end{enumerate} \end{theorem} \begin{remark} \label{remark: on connected bipartite graphs} Item~\ref{Item 3: TFAE bipartite graphs} of Theorem~\ref{theorem: equivalences for bipartite graphs} can be strengthened if $\Gr{G}$ is a connected graph. In that case, $\Gr{G}$ is bipartite if and only if $\lambda_1 = -\lambda_n$ (see \cite[Theorem~3.2.4]{CvetkovicRS2010}). \end{remark} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Matrix} & \textbf{\# edges} & \textbf{bipartite} & \textbf{\# components} & \textbf{\# bipartite components} & \textbf{\# of closed walks} \\ \hline $\A$ & Yes & Yes & No & No & Yes \\ \hline $\LM$ & Yes & No & Yes & No & No \\ \hline $\Q$ & Yes & No & No & Yes & No \\ \hline ${\bf{\mathcal{L}}}$ & No & Yes & Yes & Yes & No \\ \hline \end{tabular} \caption{Some properties of a finite, simple, and undirected graph that one can or cannot determine by the $X$-spectrum for $X\in \{\A,\LM,\Q, {\bf{\mathcal{L}}} \}$} \label{table:properties_determined by the spectrum} \end{table} Table~\ref{table:properties_determined by the spectrum}, borrowed from \cite{Butler2014}, lists properties of a graph that can or cannot be determined by the $X$-spectrum for $X\in \{\A, \LM, \Q, \bf{\mathcal{L}}\}$.
From the $\A$-spectrum of a graph $\Gr{G}$, one can determine the number of edges and the number of triangles in $\Gr{G}$ (by Eqs.~\eqref{eq: number of edges from A} and \eqref{eq: number of triangles from A}, respectively), and whether the graph is bipartite or not (by Item~\ref{Item 3: TFAE bipartite graphs} of Theorem~\ref{theorem: equivalences for bipartite graphs}). However, the $\A$-spectrum does not indicate the number of components (see Example~\ref{example: ANICS graphs with 5 vertices}). From the $\LM$-spectrum of a graph $\Gr{G}$, one can determine the number of edges (by Item~\ref{Item 3: Laplacian matrix of a graph} of Theorem~\ref{theorem: On the Laplacian matrix of a graph}), the number of spanning trees (by Theorem~\ref{theorem: number of spanning trees}), and the number of components of $\Gr{G}$ (by Item~\ref{Item 2: Laplacian matrix of a graph} of Theorem~\ref{theorem: On the Laplacian matrix of a graph}), but neither the number of its triangles nor whether the graph $\Gr{G}$ is bipartite. From the $\Q$-spectrum, one can determine the number of bipartite components and the number of edges (by Items~\ref{Item 3: signless Laplacian matrix of a graph} and~\ref{Item 4: signless Laplacian matrix of a graph} of Theorem~\ref{theorem: On the signless Laplacian matrix of a graph}, respectively), but neither the number of components of the graph nor whether the graph is bipartite (see Remark~\ref{remark: bipartiteness}). From the ${\bf{\mathcal{L}}}$-spectrum, one can determine the number of components and the number of bipartite components in $\Gr{G}$ (by Theorem~\ref{theorem: On the normalized Laplacian matrix of a graph}), and whether the graph is bipartite (by Items~\ref{Item 1: TFAE bipartite graphs} and~\ref{Item 5: TFAE bipartite graphs} of Theorem~\ref{theorem: equivalences for bipartite graphs}).
The number of closed walks in $\Gr{G}$ is determined by the $\A$-spectrum (by Corollary~\ref{corollary: Number of Closed Walks of a Given Length}), but not by the spectra with respect to the other three matrices. \begin{remark} \label{remark: bipartiteness} By Item~\ref{Item 2: signless Laplacian matrix of a graph} of Theorem~\ref{theorem: On the signless Laplacian matrix of a graph}, a connected graph is bipartite if and only if the least eigenvalue of its signless Laplacian matrix is equal to zero. If the graph is disconnected, with a bipartite component and a non-bipartite component, then the least eigenvalue of its signless Laplacian matrix is equal to zero, although the graph is not bipartite. Accordingly, as indicated in Table~\ref{table:properties_determined by the spectrum}, the $\Q$-spectrum alone does not determine whether a graph is bipartite. This is due to the fact that the $\Q$-spectrum does not provide information about the number of components in the graph or whether the graph is connected. It is worth noting that while neither the $\LM$-spectrum nor the $\Q$-spectrum independently determines whether a graph is bipartite, the combination of these spectra does. Specifically, by Item~\ref{Item 4: TFAE bipartite graphs} of Theorem~\ref{theorem: equivalences for bipartite graphs}, the combined knowledge of both spectra enables one to establish this property. \end{remark} \section{Graphs determined by their spectra} \label{section: DS graphs} The spectral determination of graphs has long been a central topic in spectral graph theory. A major open question in this area is: ``Which graphs are determined by their spectrum (DS)?'' This section begins our survey of both classical and recent results on spectral graph determination. We explore the spectral characterization of various graph classes, methods for constructing or distinguishing cospectral nonisomorphic graphs, and conditions under which a graph's spectrum uniquely determines its structure.
Additionally, we present newly obtained proofs of existing results, offering further insights into this field. \begin{definition} Let $\Gr{G},\Gr{H}$ be two graphs. A bijection $\phi \colon \V{\Gr{G}} \rightarrow \V{\Gr{H}}$ is a \emph{graph isomorphism} if \begin{align} \{u,v\} \in \E{\Gr{G}} \iff \bigl\{ \phi(u),\phi(v) \bigr\} \in \E{\Gr{H}}. \end{align} If there is an isomorphism between $\Gr{G}$ and $\Gr{H}$, we say that these graphs are \emph{isomorphic}. \end{definition} \begin{definition} A \emph{permutation matrix} is a $\{0,1\}$-matrix in which each row and each column contains exactly one entry equal to $1$. \end{definition} \begin{remark} In terms of the adjacency matrix of a graph, $\Gr{G}$ and $\Gr{H}$ are cospectral graphs if $\A(\Gr{G})$ and $\A(\Gr{H})$ are similar matrices, and $\Gr{G}$ and $\Gr{H}$ are isomorphic if the similarity of their adjacency matrices is through a permutation matrix ${\bf{P}}$, i.e., \begin{align} \A(\Gr{G}) = {\bf{P}} \, \A(\Gr{H}) \, {\bf{P}}^{-1}. \end{align} \end{remark} \subsection{Graphs determined by their adjacency spectrum (DS graphs)} \label{subsection: Graphs determined by their adjacency spectrum} \begin{theorem} \cite{vanDamH03} \label{theorem: van Dam and Haemers, 2003 - thm1} All of the graphs with less than five vertices are DS. \end{theorem} \begin{example} \label{example: ANICS graphs with 5 vertices} The star graph $\SG{4}$ and a graph formed by the disjoint union of a length-4 cycle and an isolated vertex, $\CG{4} \DU \CoG{1}$, have the same $\A$-spectrum $\{-2 , [0]^3 , 2\}$. They are, however, not isomorphic since $\SG{4}$ is connected and $\CG{4} \DU \CoG{1}$ is disconnected (see Figure~\ref{fig:graphs with 5 vertices}).
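The cospectrality asserted in this example is easy to verify numerically; the following sketch (an illustration added here, assuming \texttt{numpy}) also shows that the Laplacian spectra of the two graphs differ, in agreement with the fact that the $\LM$-spectrum determines the number of components:

```python
# Illustrative sketch (assumes numpy): the two graphs of this example are
# A-cospectral, yet their Laplacian spectra reveal the different numbers
# of components.
import numpy as np

def adjacency(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

star = adjacency(5, [(0, i) for i in range(1, 5)])      # the star K_{1,4}
c4k1 = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 0)])   # C_4 plus an isolated vertex

spec_star = np.sort(np.linalg.eigvalsh(star))
spec_c4k1 = np.sort(np.linalg.eigvalsh(c4k1))
assert np.allclose(spec_star, [-2, 0, 0, 0, 2])         # the common A-spectrum
assert np.allclose(spec_star, spec_c4k1)                # A-cospectral graphs

def zero_multiplicity_laplacian(A):
    mu = np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)
    return int(np.sum(np.abs(mu) < 1e-9))               # = number of components

print(zero_multiplicity_laplacian(star), zero_multiplicity_laplacian(c4k1))  # 1 2
```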
\vspace*{-0.1cm} \begin{figure}[hbt] \centering \includegraphics[width=8cm]{ANICS_graph_with_5_vertices.png} \caption{The graphs $\SG{4} = \CoBG{1}{4}$ and $\CG{4} \DU \CoG{1}$ (i.e., a disjoint union of a cycle of length~4 and an isolated vertex) are cospectral and nonisomorphic graphs ($\A$-NICS graphs) on five vertices. These two graphs therefore cannot be determined by their adjacency spectrum.} \label{fig:graphs with 5 vertices} \end{figure} It can be verified computationally that all the connected nonisomorphic graphs on five vertices can be distinguished by their $\A$-spectrum (see \cite[Appendix~A1]{CvetkovicRS2010}). \end{example} \begin{theorem} \cite{vanDamH03} \label{theorem: van Dam and Haemers, 2003 - thm2} All the regular graphs with less than ten vertices are DS (and, as will be clarified later, also $\mathcal{X}$-DS for every $\mathcal{X} \subseteq \{\A, \LM, \Q\}$). \end{theorem} \begin{example} \label{example: NICS regular graphs on 10 vertices} \cite{vanDamH03} The following two regular graphs in Figure~\ref{fig:graphs with 10 vertices} are $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS. \begin{figure}[hbt] \centering \includegraphics[width=12cm]{cospectral_and_nonisomorphic_4-regular_graphs.png} \caption{$\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS regular graphs with $10$ vertices. These cospectral graphs are nonisomorphic because each of the two blue edges in $\Gr{G}$ belongs to three triangles, whereas no such edge exists in $\Gr{H}$.}\label{fig:graphs with 10 vertices} \end{figure} The regular graphs $\Gr{G}$ and $\Gr{H}$ in Figure~\ref{fig:graphs with 10 vertices} can be verified to be cospectral with the common characteristic polynomial $$P(x)= x^{10} - 20x^8 - 16x^7 + 110x^6 + 136x^5 - 180x^4 - 320x^3 + 9x^2 + 200x + 80.$$ These graphs are also nonisomorphic because each of the two blue edges in $\Gr{G}$ belongs to three triangles, whereas no such edge exists in $\Gr{H}$.
Furthermore, it is shown in Example~4.18 of \cite{Sason2024} that each pair of the regular NICS graphs on 10~vertices, denoted by $\{\Gr{G}, \Gr{H}\}$ and $\{\CGr{G}, \CGr{H}\}$, exhibits distinct values of the Lov\'{a}sz $\vartheta$-functions, whereas the graphs $\Gr{G}$, $\CGr{G}$, $\Gr{H}$, and $\CGr{H}$ share identical independence numbers~(3), clique numbers~(3), and chromatic numbers~(4). Moreover, based on these two pairs of graphs, it is constructively shown in Theorem~4.19 of \cite{Sason2024} that for every even integer $n \geq 14$, there exist connected, irregular, cospectral, and nonisomorphic graphs on $n$ vertices, being jointly cospectral with respect to their adjacency, Laplacian, signless Laplacian, and normalized Laplacian matrices, while also sharing identical independence, clique, and chromatic numbers, but being distinguished by their Lov\'{a}sz $\vartheta$-functions. \end{example} \begin{remark} \label{remark: relations to Igal's paper 2023} In continuation of Example~\ref{example: NICS regular graphs on 10 vertices}, it is worth noting that closed-form expressions for the Lov\'{a}sz $\vartheta$-functions of regular graphs, which are edge-transitive or strongly regular, were derived in \cite[Theorem~9]{Lovasz79_IT} and \cite[Proposition~1]{Sason23}, respectively. In particular, it follows from \cite[Proposition~1]{Sason23} that strongly regular graphs with the same four parameters $(n,d,\lambda,\mu)$ are cospectral and have identical Lov\'{a}sz $\vartheta$-numbers, although they are not necessarily isomorphic. For such an explicit counterexample, the reader is referred to \cite[Remark~3]{Sason23}. \end{remark} We next introduce friendship graphs to address their possible determination by their spectra with respect to several associated matrices. \begin{definition} \label{definition: friendship graph} Let $p\in \naturals$.
\emph{The friendship graph} $\FG{p}$, also known as the \emph{windmill graph}, is the graph on $2p+1$ vertices that consists of $p$ triangles sharing a single common vertex (the central vertex). Thus, the central vertex is adjacent to all the other $2p$ vertices, these $2p$ vertices form $p$ disjoint edges, and every pair of vertices shares exactly one common neighbor (see Figure~\ref{fig:friendship graph F4}). This graph has $3p$ edges and $p$ triangles. \end{definition} \begin{figure}[H] \centering \includegraphics[width=3cm]{F4.png} \caption{The friendship (windmill) graph $\FG{4}$ has 9~vertices, 12 edges, and~4 triangles.}\label{fig:friendship graph F4} \end{figure} The term friendship graph in Definition~\ref{definition: friendship graph} originates from the \emph{Friendship Theorem} \cite{Erdos1963}. This theorem states that if $\Gr{G}$ is a finite graph where any two vertices share exactly one common neighbor, then there exists a vertex that is adjacent to all other vertices. In this context, the adjacency of vertices in the graph can be interpreted socially as a representation of friendship between the individuals represented by the vertices (assuming friendship is a mutual relationship). For a nice exposition of the proof of the Friendship Theorem, the interested reader is referred to Chapter~44 of \cite{AignerZ18}. \begin{theorem} \label{theorem: special classes of DS graphs} The following graphs are DS: \begin{enumerate}[1.] \item \label{item 1: DS graphs} All graphs with less than five vertices, and also all regular graphs with less than 10 vertices \cite{vanDamH03} (recall Theorems~\ref{theorem: van Dam and Haemers, 2003 - thm1} and~\ref{theorem: van Dam and Haemers, 2003 - thm2}). \item \label{item 2: DS graphs} The graphs $\CoG{n}$, $\CG{n}$, $\PathG{n}$, $\CoBG{m}{m}$ and $\CGr{\CoG{n}}$ \cite{vanDamH03}. \item \label{item 3: DS graphs} The complement of the path graph $\CGr{\PathG{n}}$ \cite{DoobH02}.
\item \label{item 4: DS graphs} The disjoint union of $k$ path graphs with no isolated vertices, the disjoint union of $k$ complete graphs with no isolated vertices, and the disjoint union of $k$ cycles (i.e., every 2-regular graph) \cite{vanDamH03}. \item \label{item 5: DS graphs} The complement graph of a DS regular graph \cite{CvetkovicRS2010}. \item \label{item 6: DS graphs} Every $(n-3)$-regular graph on $n$ vertices \cite{CvetkovicRS2010}. \item \label{item 7: DS graphs} The friendship graph $\FG{p}$ for $p \ne 16$ \cite{CioabaHVW2015}. \item \label{item 8: DS graphs} Sandglass graphs, which are obtained by appending a triangle to each of the pendant (i.e., degree-1) vertices of a path \cite{LuLYY09}. \item \label{item 9: DS graphs} If $\Gr{H}$ is a subgraph of a graph $\Gr{G}$, and $\Gr{G} \setminus \Gr{H}$ denotes the graph obtained from $\Gr{G}$ by deleting the edges of $\Gr{H}$, then also the following graphs are DS \cite{CamaraH14}: \begin{itemize} \item $\CoG{n} \setminus (\ell \CoG{2})$ and $\CoG{n} \setminus \CoG{m}$, where $m \leq n-2$, \item $\CoG{n} \setminus \CoBG{\ell}{m}$, \item $\CoG{n} \setminus \Gr{H}$, where $\Gr{H}$ has at most four edges. \end{itemize} \end{enumerate} \end{theorem} \subsection{Graphs determined by their spectra with respect to various matrices (X-DS graphs)} \label{subsection: Graphs determined by their X-DS spectrum} \noindent In this section, we consider graphs that are determined by the spectra of various associated matrices beyond the adjacency matrix. \begin{definition} Let $\Gr{G} , \Gr{H}$ be two graphs and let $\mathcal{X} \subseteq \Gmats$. \begin{enumerate} \item $\Gr{G}$ and $\Gr{H}$ are said to be \emph{$\mathcal{X}$-cospectral} if they have the same $X$-spectrum for every $X \in \mathcal{X}$, i.e., $\sigma_X(\Gr{G}) = \sigma_X(\Gr{H})$ for every $X \in \mathcal{X}$.
\item Nonisomorphic graphs $\Gr{G}$ and $\Gr{H}$ that are $\mathcal{X}$-cospectral are said to be \emph{$\mathcal{X}$-NICS}, where {\em NICS} is an abbreviation of {\em non-isomorphic and cospectral}. \item A graph $\Gr{G}$ is said to be \emph{determined by its $\mathcal{X}$-spectrum ($\mathcal{X}$-DS)} if every graph that is $\mathcal{X}$-cospectral to $\Gr{G}$ is also isomorphic to $\Gr{G}$. \end{enumerate} \end{definition} \begin{notation} For a singleton $\mathcal{X} = \{ X \}$, we abbreviate $\{ X \} $-cospectral, $\{X\}$-DS and $\{X\}$-NICS by $X$-cospectral, $X$-DS and $X$-NICS, respectively. For the adjacency matrix, we will abbreviate $\A$-DS by DS. \end{notation} \begin{remark} \label{remark: X,Y cospectrality} Let $\mathcal{X} \subseteq \mathcal{Y} \subseteq \Gmats$. The following holds by definition: \begin{itemize} \item If two graph $\Gr{G}, \Gr{H}$ are $\mathcal{Y}$-cospectral, then they are $\mathcal{X}$-cospectral. \item If a graph $\Gr{G}$ is $\mathcal{X}$-DS, then it is $\mathcal{Y}$-DS. \end{itemize} \end{remark} \begin{definition} \label{definition: generalized spectrum} Let $\Gr{G}$ be a graph. The \emph{generalized spectrum} of $\Gr{G}$ is the $\{\A, \overline{\A}\}$-spectrum of $\Gr{G}$. \end{definition} The following result on the cospectrality of regular graphs can be readily verified. \begin{proposition} \label{proposition: regular graphs cospectrality} Let $\Gr{G}$ and $\Gr{H}$ be regular graphs that are $\mathcal{X}$-cospectral for {\em some} $\mathcal{X} \subseteq \{\A, \LM, \Q, \bf{\mathcal{L}}\}$. Then, $\Gr{G}$ and $\Gr{H}$ are $\mathcal{Y}$-cospectral for {\em every} $\mathcal{Y} \subseteq \{\A, \overline{\A}, \LM, \overline{\LM}, \Q, \overline{\Q}, {\bf{\mathcal{L}}}, \overline{{\bf{\mathcal{L}}}} \}$. In particular, the cospectrality of regular graphs (and their complements) stays unaffected by the chosen matrix among $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$. 
\end{proposition} \begin{definition} \label{definition: DGS} A graph $\Gr{G}$ is said to be \emph{determined by its generalized spectrum (DGS)} if it is uniquely determined by its generalized spectrum. In other words, a graph $\Gr{G}$ is DGS if and only if every graph $\Gr{H}$ with the same $\{\A, \overline{\A}\}$-spectrum as $\Gr{G}$ is necessarily isomorphic to $\Gr{G}$. \end{definition} If a graph is not DS, it may still be DGS, as additional spectral information is available. Conversely, every DS graph is also DGS. For further insights into DGS graphs, including various characterizations, conjectures, and studies, we refer the reader to \cite{WangXu06,Wang13,Wang17}. \vspace*{0.2cm} The continuation of this section characterizes graphs that are $X$-DS, where $X \in \{\LM, \Q, \mathcal{L}\}$, with pointers to various studies. We first consider regular DS graphs. \begin{theorem} \cite[Proposition~3]{vanDamH03} \label{theorem: regular DS graphs} For regular graphs, the properties of being DS, $\LM$-DS, and $\Q$-DS are equivalent. \end{theorem} \begin{remark} \label{remark: recurring approach} To avoid any potential confusion, it is important to emphasize that in statements such as Theorem~\ref{theorem: regular DS graphs}, the only available information is the spectrum of the graph. There is no indication or prior knowledge that the spectrum corresponds to a regular graph. In such cases, the regularity of the graph is not part of the revealed information and, therefore, cannot be used to determine the graph. This recurring approach --- stating that $\Gr{G}$ is stated to be a graph satisfying certain properties (e.g., regularity, strong regularity, etc.) and then examining whether the graph can be determined from its spectrum --- appears throughout this paper. It should be understood that the only available information is the spectrum of the graph, and no additional properties of the graph beyond its spectrum are disclosed. 
\end{remark} \begin{remark} \label{remark: DS regular graphs are not necessarily DS w.r.t. normalized Laplacian} The crux of the proof of Theorem~\ref{theorem: regular DS graphs} is that there are no two NICS graphs, with respect to either $\A$, $\LM$, or $\Q$, where one graph is regular and the other is irregular (see \cite[Proposition~2.2]{vanDamH03}). This, however, does not extend to NICS graphs with respect to the normalized Laplacian matrix $\mathcal{L}$, and regular DS graphs are not necessarily $\mathcal{L}$-DS. For instance, the cycle $\CG{4}$ and the complete bipartite graph $\CoBG{1}{3}$ (i.e., $\SG{3}$) share the same $\mathcal{L}$-spectrum, which is given by $\{0, [1]^2, 2\}$, but these graphs are nonisomorphic (as $\CG{4}$ is regular, in contrast to $\CoBG{1}{3}$). It therefore follows that the 2-regular graph $\CG{4}$ is {\em not} $\mathcal{L}$-DS, although it is DS (see Item~\ref{item 2: DS graphs} of Theorem~\ref{theorem: special classes of DS graphs}). More generally, it is conjectured in \cite{Butler2016} that $\CG{n}$ is $\mathcal{L}$-DS if and only if $n>4$ and $4 \nmid n$. \end{remark} \begin{theorem} \label{theorem: L-DS graphs} The following graphs are $\LM$-DS: \begin{enumerate}[1.] \item $\PathG{n}$, $\CG{n}$, $\CoG{n}$, $\CoBG{m}{m}$, and their complements \cite{vanDamH03}. \item The disjoint union of $k$ paths, $\PathG{n_1} \DU \PathG{n_2} \DU \ldots \DU \PathG{n_k}$, each having at least one edge \cite{vanDamH03}. \item The complete bipartite graph $\CoBG{m}{n}$ with $m,n\geq2$ and $\frac{5}{3}n<m$ \cite{Boulet2009}. \item \label{stars: L-DS} The star graphs $\SG{n}$ with $n \neq 3$ \cite{OmidiT2007,LiuZG2008}. \item Trees with a single vertex having a degree greater than~2 (referred to as starlike trees) \cite{OmidiT2007,LiuZG2008}. \item The friendship graph $\FG{p}$ \cite{LiuZG2008}.
\item The path-friendship graphs, where a friendship graph and a starlike tree are joined by merging their vertices of degree greater than~2 \cite{OboudiAAB2021}. \item The wheel graph $\Gr{W}_{n+1} \triangleq \CoG{1} \vee \CG{n}$ for $n \neq 7$ (otherwise, if $n=7$, then it is not $\LM$-DS) \cite{ZhangLY09}. \item The join of a clique and an independent set on $n$ vertices, $\CoG{n-m} \vee \, \CGr{\CoG{m}}$, where $m \in \OneTo{n-1}$ \cite{DasL2016}. \item Sandglass graphs (see also Item~\ref{item 8: DS graphs} in Theorem~\ref{theorem: special classes of DS graphs}) \cite{LuLYY09}. \item The join graph $\Gr{G} \vee \CoG{m}$, for every $m \in \naturals$, where $\Gr{G}$ is a disconnected graph \cite{ZhouBu2012}. \item The join graph $\Gr{G} \vee \CoG{m}$, for every $m \in \naturals$, where $\Gr{G}$ is an $\LM$-DS connected graph on $n$ vertices and $e$ edges with $e \leq 2n-6$, $\CGr{G}$ is a connected graph, and either one of the following conditions holds \cite{ZhouBu2012}: \begin{itemize} \item $\Gr{G} \vee \CoG{1}$ is $\LM$-DS; \item the maximum degree of $\Gr{G}$ is smaller than $\tfrac12 (n-2)$. \end{itemize} \item In particular, the join graph $\Gr{G} \vee \CoG{m}$, for every $m \in \naturals$, where $\Gr{G}$ is an $\LM$-DS tree on $n \geq 5$ vertices (the equality $e=n-1$ holds for a tree on $n$ vertices and $e$ edges, so $e \leq 2n-6$ whenever $n \geq 5$) \cite{ZhouBu2012}. \end{enumerate} \end{theorem} \begin{remark} In general, a disjoint union of complete graphs is not determined by its Laplacian spectrum. \end{remark} \begin{theorem} \label{theorem: Q-DS graphs} The following graphs are $\Q$-DS: \begin{enumerate}[1.] \item The disjoint union of $k$ paths, $\PathG{n_1} \DU \PathG{n_2} \DU \ldots \DU \PathG{n_k}$, each having at least one edge \cite{vanDamH03}. \item The star graphs $\SG{n}$ with $n \geq 3$ \cite{BuZ2012b,OmidiV2010}. \item Trees with a single vertex having a degree greater than~2 \cite{BuZ2012b,OmidiV2010}. \item The friendship graph $\FG{k}$ \cite{WangBHB2010}.
\item The lollipop graphs, where the lollipop graph $\mathrm{H}_{n,p}$, with $n,p \in \naturals$ and $p<n$, is obtained by appending a cycle $\CG{p}$ to a pendant vertex of a path $\PathG{n-p}$ \cite{HamidzadeK2010,ZhangLZY09}. \item $\Gr{G} \vee \CoG{1}$, where $\Gr{G}$ is either a $1$-regular graph, an $(n-2)$-regular graph of order $n$, or a $2$-regular graph with at least $11$ vertices \cite{BuZ2012}. \item If $n \geq 21$ and $0 \leq q \leq n-1$, then $\CoG{1} \vee (\PathG{q} \DU \, (n-q-1) \CoG{1})$ \cite{YeLS2025}. \item If $n \geq 21$ and $3 \leq q \leq n-1$, then $\CoG{1} \vee (\CG{q} \DU \, (n-q-1) \CoG{1})$ is $\Q$-DS if and only if $q \neq 3$ \cite{YeLS2025}. \item The join of a clique and an independent set on $n$ vertices, $\CoG{n-m} \vee \, \CGr{\CoG{m}}$, where $m \in \OneTo{n-1}$ and $m \neq 3$ \cite{DasL2016}. \end{enumerate} \end{theorem} Since the regular graphs $\CoG{n}$, $\CGr{\CoG{n}}$, $\CoBG{m}{m}$ and $\CG{n}$ are DS, they are also $\mathcal{X}$-DS for every $\mathcal{X} \subseteq \{\A, \LM, \Q \}$ (see Theorem~\ref{theorem: regular DS graphs}). This, however, does not carry over to the normalized Laplacian matrix, as regular DS graphs are not necessarily ${\bf{\mathcal{L}}}$-DS (see Remark~\ref{remark: DS regular graphs are not necessarily DS w.r.t. normalized Laplacian}); regular ${\bf{\mathcal{L}}}$-DS graphs are next addressed. \begin{theorem} \label{theorem: X-DS friendship graphs} The following graphs are ${\bf{\mathcal{L}}}$-DS: \begin{itemize} \item $\CoG{n}$, for every $n \in \naturals$ \cite{ButlerH2016}. \item The friendship graph $\FG{k}$, for $k \geq 2$ \cite[Corollary~1]{BermanCCLZ2018}. \item More generally, $\mathrm{F}_{p,q} = \CoG{1} \vee (p \CoG{q})$ if $q \geq 2$, or $q=1$ and $p \geq 2$ \cite[Theorem~1]{BermanCCLZ2018}. \end{itemize} \end{theorem} \noindent \section{Special families of graphs} \label{section: special families of graphs} This section introduces special families of structured graphs and states conditions for their unique determination by their spectra.
\subsection{Stars and graphs of pyramids} \label{subsection: Stars and graphs of pyramids} \noindent \begin{definition} \label{definition: graphs of pyramids} For every $k,n \in \naturals$ with $k<n$, define the graph $T_{n,k}=\CoG{k} \vee \, \overline{\CoG{n-k}}$. For $k=1$, the graph $T_{n,k}$ represents the \emph{star graph} $\SG{n}$. For $k=2$, it represents a graph comprising $n-2$ triangles sharing a common edge, referred to as a \emph{crown}. For $n,k$ satisfying $1<k<n$, the graphs $T_{n,k}$ are referred to as \emph{graphs of pyramids} \cite{KrupnikB2024}. \end{definition} \begin{theorem} \cite{KrupnikB2024} \label{thm: KrupnikB2024 - pyramids are DS} The graphs of pyramids are DS for every $1<k<n$. \end{theorem} \begin{theorem} \cite{KrupnikB2024} \label{thm: KrupnikB2024 - DS star graphs} The star graph $\SG{n}$ is DS if and only if $n-1$ is prime. \end{theorem} To prove these theorems, a closed-form expression for the spectrum of $T_{n,k}$ is derived in \cite{KrupnikB2024}, which also presents a generalized result. Subsequently, using Theorem~\ref{thm: number of walks of a given length}, the number of edges and triangles in any graph cospectral with $T_{n,k}$ are calculated. Finally, Schur's theorem (Theorem~\ref{theorem: Schur complement}) and Cauchy's interlacing theorem (Theorem~\ref{thm:interlacing}) are applied in \cite{KrupnikB2024} to prove Theorems~\ref{thm: KrupnikB2024 - pyramids are DS} and~\ref{thm: KrupnikB2024 - DS star graphs}. \subsection{Complete bipartite graphs} \label{subsection: Complete bipartite graphs} By Theorem~\ref{thm: KrupnikB2024 - DS star graphs}, the star graph $\SG{n}=\CoBG{1}{n-1}$ is DS if and only if $n-1$ is prime. By Theorem~\ref{theorem: special classes of DS graphs}, the regular complete bipartite graph $\CoBG{m}{m}$ is DS for every $m \in \naturals$. Here, we generalize these results and provide a characterization for the DS property of $\CoBG{p}{q}$ for every $p,q\in \naturals$. 
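As a numerical sanity check (an illustration added here, assuming \texttt{numpy}; not part of the original development), the spectrum of $\CoBG{p}{q}$ stated in the next theorem, as well as a cospectral pair of the kind underlying the forthcoming DS characterization, can be verified as follows:

```python
# Illustrative sketch (assumes numpy): the A-spectrum of K_{p,q} is
# {-sqrt(pq), [0]^{p+q-2}, sqrt(pq)}.
import numpy as np

def complete_bipartite(p, q):
    """Adjacency matrix of K_{p,q}."""
    A = np.zeros((p + q, p + q))
    A[:p, p:] = 1.0
    A[p:, :p] = 1.0
    return A

def spectrum(A):
    return np.sort(np.linalg.eigvalsh(A))

for p, q in [(1, 4), (2, 3), (3, 3), (2, 8)]:
    expected = np.array([-np.sqrt(p * q)] + [0.0] * (p + q - 2) + [np.sqrt(p * q)])
    assert np.allclose(spectrum(complete_bipartite(p, q)), expected)

# K_{2,8} and K_{4,4} plus two isolated vertices are cospectral but
# nonisomorphic: both products equal 16, so both spectra are {-4, [0]^8, 4}.
K28 = complete_bipartite(2, 8)
K44_plus_2K1 = np.zeros((10, 10))
K44_plus_2K1[:8, :8] = complete_bipartite(4, 4)
cospectral = np.allclose(spectrum(K28), spectrum(K44_plus_2K1))
print(cospectral)  # True
```

In the language introduced below, this pair arises because $\GM{2,8}=\GM{4,4}=4$ while $\AM{4,4}<\AM{2,8}$, so $\{2,8\}$ is not an AM-minimizer.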
\begin{theorem} \cite{vanDamH03} \label{thm:spectrum of CoBG} The spectrum of the complete bipartite graph $\CoBG{p}{q}$ is $\bigl\{-\sqrt{pq}, [0]^{p+q-2}, \sqrt{pq} \bigr\}$. \end{theorem} This theorem can be proved by Theorem~\ref{theorem: Schur complement}. An alternative simple proof is next presented. \begin{proof} The adjacency matrix of $\CoBG{p}{q}$ is given by \begin{align} \A(\CoBG{p}{q}) = \begin{pmatrix} \mathbf{0}_{p,p} & \J{p,q}\\ \J{q,p} & \mathbf{0}_{q,q} \end{pmatrix} \in \Reals^{(p+q) \times (p+q)}. \end{align} The rank of $\A(\CoBG{p}{q})$ is equal to 2, so the multiplicity of $0$ as an eigenvalue is $p+q-2$. Since the eigenvalues sum to $\trace{\A(\CoBG{p}{q})}=0$, the two remaining eigenvalues are given by $\pm \lambda$ for some $\lambda \geq 0$. Furthermore, by Corollary~\ref{corollary: number of edges and triangles in a graph}, \begin{align} 2\lambda^2 = \sum_{i=1}^{p+q} \lambda_i^2 = 2 \, \card{\E{\CoBG{p}{q}}} = 2pq, \end{align} so $\lambda = \sqrt{pq}$. \end{proof} For $p,q \in \mathbb{N}$, the arithmetic and geometric means of $p,q$ are, respectively, given by $\AM{p,q}=\tfrac12 (p+q)$ and $\GM{p,q}= \sqrt{ pq}$. The AM-GM inequality states that for every $p,q \in \naturals$, we have $\GM{p,q} \le \AM{p,q}$ with equality if and only if $p=q$. \begin{definition} \label{definition: AM minimizer} Let $p,q \in \naturals$. The two-element multiset $\{p,q\}$ is said to be an \emph{AM-minimizer} if it attains the minimum arithmetic mean for its geometric mean, i.e., \begin{align} \label{eq: AM minimizer} \AM{p,q} &= \min \Bigl\{\AM{a,b}: \; a,b \in \mathbb{N}, \, \GM{a,b}=\GM{p,q} \Bigr\} \\ \label{eq2: AM minimizer} &= \min \Bigl\{\tfrac12 (a+b): \; a,b \in \mathbb{N}, \, ab=pq \Bigr\}. \end{align} \end{definition} \begin{example} \label{example: AM minimizer} The following are AM-minimizers: \begin{itemize} \item $\{k,k\}$ for every $k\in \naturals $. By the AM-GM inequality, it is the only case where $\GM{p,q} = \AM{p,q}$.
\item $\{p,q\}$ where $p,q$ are prime numbers. In this case, the family of multisets \begin{align} \Bigl\{ \{a,b\} : \, a,b \in \mathbb{N}, \; \GM{a,b}=\GM{p,q} \Bigr\} \end{align} only contains the two multisets $\{p,q\}$ and $\{pq,1\}$, and since $p,q \geq 2$, we have $p+q \leq pq < pq+1$, so $\{p,q\}$ attains the minimum arithmetic mean. \item $\{1,q\}$ where $q$ is a prime number. \end{itemize} \end{example} \begin{theorem} \label{thm:when CoBG is DS?} The following holds for every $p,q \in \naturals$: \begin{enumerate} \item \label{thm:when CoBG is DS? - part1} Let $\Gr{G}$ be a graph that is cospectral with $\CoBG{p}{q}$. Then, up to isomorphism, $\Gr{G} = \CoBG{a}{b} \cup \Gr{H}$ (i.e., $\Gr{G}$ is a disjoint union of the two graphs $\CoBG{a}{b}$ and $\Gr{H}$), where $\Gr{H}$ is an empty graph and $a,b \in \naturals$ satisfy $\GM{a,b} = \GM{p,q}$. \item \label{thm:when CoBG is DS? - part2} The complete bipartite graph $\CoBG{p}{q}$ is DS if and only if $\{p,q\}$ is an AM-minimizer. \end{enumerate} \end{theorem} \begin{remark} \label{remark: complete bipartite graphs} Item~\ref{thm:when CoBG is DS? - part2} of Theorem~\ref{thm:when CoBG is DS?} is equivalent to Corollary~3.1 of \cite{MaRen2010}, for which an alternative proof is presented here. \end{remark} \begin{proof} (Proof of Theorem~\ref{thm:when CoBG is DS?}): \begin{enumerate} \item Let $\Gr{G}$ be a graph cospectral with $\CoBG{p}{q}$. The number of edges in $\Gr{G}$ equals the number of edges in $\CoBG{p}{q}$, which is $pq$. As $\CoBG{p}{q}$ is bipartite, so is $\Gr{G}$. Since $\A(\Gr{G})$ has rank $2$, whereas $\A(\PathG{3})$ has rank $3$, and the rank of a matrix is at least the rank of any of its principal submatrices, it follows that $\PathG{3}$ is not an induced subgraph of $\Gr{G}$. \newline It is claimed that $\Gr{G}$ has a single nonempty connected component. Suppose to the contrary that $\Gr{G}$ has (at least) two nonempty connected components $\Gr{H}_1,\Gr{H}_2$.
For $i\in \{1,2\}$, since $\Gr{H}_i$ is a non-empty graph, $\A(\Gr{H}_i)$ has at least one eigenvalue $\lambda \ne 0$. Since $\Gr{G}$ is a simple graph, the sum of the eigenvalues of $\A(\Gr{H}_i)$ is $\trace{\A(\Gr{H}_i)}=0$, so $\Gr{H}_i$ has at least one positive eigenvalue. Thus, the induced subgraph $\Gr{H}_1 \cup \Gr{H}_2$ has at least two positive eigenvalues, while $\Gr{G}$ has only one positive eigenvalue, contradicting Cauchy's Interlacing Theorem. \\ Hence, $\Gr{G}$ can be decomposed as $\Gr{G} = \CoBG{a}{b} \cup \, \Gr{H}$ where $\Gr{H}$ is an empty graph. Since $\Gr{G}$ and $\CoBG{p}{q}$ have the same number of edges, $pq=ab$, so $\GM{p,q}=\GM{a,b}$. \item First, we will show that if $\{p,q\}$ is not an AM-minimizer, then the graph $\CoBG{p}{q}$ is not $\A$-DS. This is done by finding a graph that is nonisomorphic to $\CoBG{p}{q}$ and $\A$-cospectral with it. By assumption, since $\{p,q\}$ is not an AM-minimizer, there exist $a, b \in \naturals$ satisfying $\GM{a,b} = \GM{p,q}$ and $a + b < p+q$. Define the graph $\Gr{G}=\CoBG{a}{b} \cup \, \overline{\CoG{r}}$ where $r=p+q-a-b$. Observe that $r \in \naturals$. The $\A$-spectrum of both of these graphs is given by \begin{align} \sigma_{\A}(\Gr{G}) = \sigma_{\A}(\CoBG{p}{q}) = \bigl\{-\sqrt{pq},[0]^{p+q-2},\sqrt{pq} \bigr\}, \end{align} so these two graphs are cospectral and nonisomorphic (e.g., $\Gr{G}$ is disconnected since $r \geq 1$, whereas $\CoBG{p}{q}$ is connected), which means that $\CoBG{p}{q}$ is not $\A$-DS. \newline We next prove that if $\{p,q\}$ is an AM-minimizer, then $\CoBG{p}{q}$ is $\A$-DS. Let $\Gr{G}$ be a graph that is cospectral with $\CoBG{p}{q}$. From the first part of this theorem, $\Gr{G}=\CoBG{a}{b} \cup \, \Gr{H}$ where $\GM{a,b} = \GM{p,q}$ and $\Gr{H}$ is an empty graph. Since $\Gr{G}$ and $\CoBG{p}{q}$ have the same number of vertices, $a+b \leq p+q$, and consequently $\AM{a,b}=\tfrac12(a+b) \leq \tfrac12(p+q) = \AM{p,q}$. Since $\{p,q\}$ is assumed to be an AM-minimizer, it follows that $\AM{a,b} \ge \AM{p,q}$, and thus equality holds.
Both equalities $\GM{a,b} = \GM{p,q}$ and $\AM{a,b} = \AM{p,q}$ can be satisfied simultaneously if and only if $\{ a , b \} = \{ p , q \}$, so $r=p+q-a-b=0$ and $\Gr{G}=\CoBG{p}{q}$. \end{enumerate} \end{proof} \begin{corollary} \label{cor: bipartite not DS} Almost all of the complete bipartite graphs are not DS. More specifically, for every $n \in \naturals$, there exists exactly one complete bipartite graph with $n$ edges that is DS. \end{corollary} \begin{proof} Let $n \in \naturals$. By the \emph{fundamental theorem of arithmetic}, there is a unique decomposition $n = \prod_{i=1}^{k} p_i$ where $k\in \naturals$ and $\{p_i\}$ are prime numbers for every $1 \le i \le k$. Consider the family of multisets \begin{align} \set{D} = \Bigl\{ \{a,b\} : a,b \in \mathbb{N} , \GM{a,b}=\sqrt{n} \Bigr\}. \end{align} This family has at most $2^k$ members, since every prime factor $p_i$ of $n$ should be in the prime decomposition of $a$ or of $b$. Since a two-element multiset $\{a,b\}$ is uniquely determined by its arithmetic and geometric means ($a$ and $b$ are the two roots of the quadratic $x^2 - 2\AM{a,b} \, x + \GM{a,b}^2 = 0$), only one of the multisets in the family $\set{D}$ is an AM-minimizer. Thus, if $n = \prod_{i=1}^{k} p_i$, then the number of complete bipartite graphs with $n$ edges is $O(2^k)$, and (by Item~\ref{thm:when CoBG is DS? - part2} of Theorem~\ref{thm:when CoBG is DS?}) only one of them is DS. \end{proof} \subsection{Tur\'{a}n graphs} \label{subsection: Turan graphs} The Tur\'{a}n graphs are a significant and well-studied class of graphs in extremal graph theory, forming an important family of complete multipartite graphs. Tur\'{a}n graphs are particularly known for their role in Tur\'{a}n's theorem, which provides a solution to the problem of finding the maximum number of edges in a graph that does not contain a complete subgraph of a given order \cite{Turan1941}.
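As a small self-contained check (an added illustration using only the standard library), the edge count of the Tur\'{a}n graph $T(n,k)$, computed directly from its part sizes, agrees with the closed form appearing in Tur\'{a}n's theorem below:

```python
# Illustrative sketch (standard library only): |E(T(n,k))| computed from the
# part sizes versus the closed form (1 - 1/k)(n^2 - s^2)/2 + C(s,2),
# with s = n - k*floor(n/k).
from math import comb

def turan_edges(n, k):
    """Edge count from the construction: k parts, as equal as possible."""
    q, s = divmod(n, k)
    parts = [q] * (k - s) + [q + 1] * s
    # complete multipartite graph: |E| = (n^2 - sum of n_i^2) / 2
    return (n * n - sum(p * p for p in parts)) // 2

def turan_formula(n, k):
    """Closed-form edge count from Turan's theorem."""
    s = n - k * (n // k)
    return round((1 - 1 / k) * (n * n - s * s) / 2) + comb(s, 2)

agree = all(turan_edges(n, k) == turan_formula(n, k)
            for n in range(2, 40) for k in range(2, n + 1))
print(agree)  # True
```

For instance, $T(10,3)$ has parts of sizes $3,3,4$ and hence $33$ edges, which both functions return.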
Before delving into formal definitions, it is noted that what distinguishes the Tur\'{a}n graphs among the complete multipartite graphs is that they are as balanced as possible, ensuring their vertex sets are divided into parts of nearly equal size. \begin{definition} Let $n_1, \ldots, n_k$ be natural numbers. Define the \emph{complete $k$-partite graph} \begin{align} \CoG{n_1, \ldots, n_k}= \bigvee_{i=1}^{k}\overline{\CoG{n_i}}. \end{align} A graph is multipartite if it is $k$-partite for some $k \geq 2$. \end{definition} \begin{definition} \label{definition: Turan graph} Let $2 \le k \le n$. The \emph{Tur\'{a}n graph} $T(n,k)$ (not to be confused with the graph of pyramids $T_{n,k}$) is formed by partitioning a set of $n$ vertices into $k$ subsets, with sizes as equal as possible, and then every two vertices are adjacent in that graph if and only if they belong to different subsets. It is therefore expressed as the complete $k$-partite graph $\CoG{n_1,\dots,n_k}$, where $|n_i-n_j| \leq 1$ for all $i,j \in \OneTo{k}$ with $i \neq j$. Let $q$ and $s$ be the quotient and remainder, respectively, of dividing $n$ by $k$ (i.e., $n = qk+s$, $s \in \{0,1, \ldots, k-1\}$), and let $n_1 \leq \ldots \leq n_k$. Then, \begin{align} \label{eq: n_i in Turan's graph} n_i= \begin{cases} q, & 1\leq i \leq k-s,\\ q+1, & k-s+1 \leq i \leq k. \end{cases} \end{align} By construction, the graph $T(n,k)$ has a clique of order $k$ (any subset of vertices with a single representative from each of the $k$ subsets is a clique of order $k$), but it cannot have a clique of order $k+1$ (since vertices from the same subset are nonadjacent). Note also that, by \eqref{eq: n_i in Turan's graph}, the Tur\'{a}n graph $T(n,k)$ is an $(n-q)$-regular graph if and only if $n$ is divisible by $k$, and then $q = \frac{n}{k}$. \end{definition} \begin{definition} Let $q,k \in \naturals$.
Define the \emph{regular complete multipartite graph} $\mathrm{K}_{q}^{k} \triangleq \overset{k}{\underset{i=1}{\bigvee}} \overline{\CoG{q}}$ to be the $k$-partite graph with $q$ vertices in each part. Observe that $\mathrm{K}_{q}^{k} = T(kq,k)$. \end{definition} Let $\Gr{G}$ be a simple graph on $n$ vertices that does not contain a clique of order greater than a fixed number $k \in \naturals$. Tur\'{a}n investigated a fundamental problem in extremal graph theory of determining the maximum number of edges that $\Gr{G}$ can have \cite{Turan1941}. \begin{theorem}[Tur\'{a}n's Graph Theorem] \label{theorem: Turan's theorem} Let $\Gr{G}$ be a graph on $n$ vertices with clique number $\omega(\Gr{G}) \leq k$ for some $k \in \naturals$. Then, \begin{align} \card{\E{\Gr{G}}} &\leq \card{\E{T(n,k)}} \\ &= \biggl(1-\frac1k\biggr) \, \frac{n^2-s^2}{2} + \binom{s}{2}, \quad s \triangleq n - k \bigg\lfloor \frac{n}{k} \bigg\rfloor. \end{align} \end{theorem} For a nice exposition of five different proofs of Tur\'{a}n's Graph Theorem, the interested reader is referred to Chapter~41 of \cite{AignerZ18}. \begin{corollary} \label{corollary:turan} Let $k \in \naturals$, and let $\Gr{G}$ be a graph on $n$ vertices where $\omega(\Gr{G})\le k$ and $\card{\E{\Gr{G}}}=\card{\E{T(n,k)}}$. Let $\Gr{G}_{1}$ be a graph obtained by adding an arbitrary edge to $\Gr{G}$. Then $\omega(\Gr{G}_1)>k$. \end{corollary} \subsubsection{The spectrum of the Tur\'{a}n graph} \begin{theorem} \cite{EsserH1980} \label{theorem: spectrum of multipartite graphs} Let $k\in\naturals$, and let $n_1 \leq n_2 \leq \ldots \leq n_k$ be natural numbers. Let $\Gr{G} = \CoG{n_1,n_2, \dots, n_k}$ be a complete multipartite graph on $n = n_1 + \ldots + n_k$ vertices. Then, \begin{itemize} \item $\Gr{G}$ has one positive eigenvalue, i.e., $\lambda_1(\Gr{G}) > 0$ and $\lambda_2(\Gr{G})\le 0$. \item $\Gr{G}$ has $0$ as an eigenvalue with multiplicity $n-k$.
\item $\Gr{G}$ has $k-1$ negative eigenvalues, and \begin{align} n_1 \leq -\lambda_{n-k+2}(\Gr{G}) \leq n_2 \leq -\lambda_{n-k+3}(\Gr{G}) \le n_3 \leq \ldots \leq n_{k-1} \leq -\lambda_{n}(\Gr{G}) \le n_{k}. \end{align} \end{itemize} \end{theorem} \begin{corollary} \label{corollary:Kqk-spectrum} The spectrum of the regular complete $k$-partite graph $\CoG{q, \ldots, q} \triangleq \CoG{q}^k$ is given by \begin{align} \sigma_{\A}(\CoG{q}^{k})=\Bigl\{ [-q]^{k-1}, [0]^{(q-1)k}, q(k-1) \Bigr\}. \end{align} \end{corollary} \begin{proof} This readily follows from Theorem~\ref{theorem: spectrum of multipartite graphs} by setting $n_1 = \ldots = n_k = q$. \end{proof} \begin{lemma} \label{lemma: Join-A-Spec} \cite{Butler2008} Let $\Gr{G}_{i}$ be $r_{i}$-regular graphs on $n_{i}$ vertices for $i\in \{1,2\}$, with the adjacency spectra $\sigma_{\A}(\Gr{G}_1)=(r_{1}=\mu_{1}\ge\mu_{2}\ge \ldots \ge\mu_{n_1})$ and $\sigma_{\A}(\Gr{G}_2) = (r_{2}=\nu_{1}\ge\nu_{2}\ge \ldots \ge\nu_{n_2})$. The $\A$-spectrum of $\Gr{G}_1\vee \Gr{G}_2$ is given by \begin{align} \sigma_{\A}(\Gr{G}_{1}\vee \Gr{G}_{2})=\{ \mu_{i} \}_{i=2}^{n_{1}} \cup \{ \nu_{i}\}_{i=2}^{n_{2}} \cup \left\{ \frac{r_1+r_2 \pm\sqrt{(r_1-r_2)^{2}+4 n_1 n_2}}{2} \right\}. \end{align} \end{lemma} \begin{theorem} \label{theorem: A-spectrum of Turan graph} Let $q,s\in \naturals$ be such that $n=kq+s$ and $0 \le s \leq k-1$. The following holds with respect to the $\A$-spectrum of $T(n,k)$: \begin{enumerate} \item \label{item: irregular Turan graph} If $1 \leq s \leq k-1$, then the $\A$-spectrum of the irregular Tur\'{a}n graph $T(n,k)$ is given by \begin{align} \sigma_{\A}(T(n,k))=& \biggl\{ [-q-1]^{s-1}, [-q]^{k-s-1}, [0]^{n-k} \biggr\} \nonumber \\ \label{eq: A-spectrum of irregular Turan graph} & \cup \Biggl\{\tfrac12 \biggl[n-2q-1\pm \sqrt{\Bigl(n-2(q+1)s+1\Bigr)^2+4q(q+1)s(k-s)} \biggr] \Biggr\}.
\end{align} \item \label{item: regular Turan graph} If $s=0$, then $q = \frac{n}{k}$, and the $\A$-spectrum of the regular Tur\'{a}n graph $T(n,k)$ is given by \begin{align} \label{eq: A-spectrum of regular Turan graph} \sigma_{\A}(T(n,k))=& \Bigl\{ [-q]^{k-1}, [0]^{n-k}, (k-1)q \Bigr\}. \end{align} \end{enumerate} \end{theorem} \begin{proof} We first derive the $\A$-spectrum of an irregular Tur\'{a}n graph $T(n,k)$, as stated in Item~\ref{item: irregular Turan graph} of this theorem (i.e., the case where $s \neq 0$, so $n$ is not divisible by $k$); let $1 \leq s \leq k-1$. By Corollary~\ref{corollary:Kqk-spectrum}, the spectra of the regular graphs $\CoG{q}^{k-s}$ and $\CoG{q+1}^{s}$ are given by \begin{align} & \sigma_{\A}(\CoG{q}^{k-s})=\left\{ [-q]^{k-s-1}, [0]^{(q-1)(k-s)}, q(k-s-1) \right\}, \\ & \sigma_{\A}(\CoG{q+1}^{s})=\left\{ [-q-1]^{s-1}, [0]^{qs}, (q+1)(s-1) \right\}. \end{align} The $(k-s)$-partite graph $\CoG{q}^{k-s}$ is $r_1$-regular with $r_1=q(k-s-1)$, the $s$-partite graph $\CoG{q+1}^{s}$ is $r_2$-regular with $r_2 = (q+1)(s-1)$, and by Definition~\ref{definition: Turan graph}, we have $T(n,k) = \CoG{q}^{k-s} \vee \CoG{q+1}^{s}$.
Hence, by Lemma~\ref{lemma: Join-A-Spec}, the adjacency spectrum of $T(n,k)$ is given by \begin{align} \sigma_{\A}(T(n,k)) &= \sigma_{\A}(\CoG{q}^{k-s} \vee \CoG{q+1}^{s}) \nonumber \\ \label{eq0: 23.12.2024} &=\set{S}_1 \cup \set{S}_2 \cup \set{S}_3, \end{align} where \begin{align} \label{eq1: 23.12.2024} \set{S}_1 &= \Bigl\{ [-q]^{k-s-1}, [0]^{(q-1)(k-s)} \Bigr\}, \\ \label{eq2: 23.12.2024} \set{S}_2 &= \Bigl\{ [-q-1]^{s-1}, [0]^{qs} \Bigr\}, \\ \set{S}_3 &= \biggl\{ \frac{r_1+r_2 \pm \sqrt{(r_1-r_2)^2 + 4 n_1 n_2}}{2} \biggr\} \nonumber \\ \label{eq3: 23.12.2024} &= \Biggl\{\tfrac12 \biggl[n-2q-1\pm \sqrt{\Bigl(n-2(q+1)s+1\Bigr)^2+4q(q+1)s(k-s)} \biggr] \Biggr\}, \end{align} with $n_1 = q(k-s)$ and $n_2 = (q+1)s$ denoting the orders of the two joined graphs; the last equality holds since, by the equality $n=kq+s$ and the above expressions of $r_1$ and $r_2$, it can be readily verified that $r_1+r_2 = n-2q-1$ and $r_1-r_2 = n-2(q+1)s+1$. Finally, combining \eqref{eq0: 23.12.2024}--\eqref{eq3: 23.12.2024} gives the $\A$-spectrum in \eqref{eq: A-spectrum of irregular Turan graph} of an irregular Tur\'{a}n graph $T(n,k)$. We next prove Item~\ref{item: regular Turan graph} of this theorem, referring to a regular Tur\'{a}n graph $T(n,k)$ (i.e., $k|n$ or, equivalently, $s=0$). In that case, we have $T(n,k)=\CoG{q}^{k}$ with $q = \frac{n}{k}$, so the $\A$-spectrum in \eqref{eq: A-spectrum of regular Turan graph} holds by Corollary~\ref{corollary:Kqk-spectrum}. \end{proof} \begin{remark} \label{remark: Turan - multiplicity of negative eigenvalues} In light of Theorem~\ref{theorem: A-spectrum of Turan graph}, if $k \geq 2$, then the number of negative eigenvalues (including multiplicities) of the adjacency matrix of the Tur\'{a}n graph $T(n,k)$ is $k-1$, irrespective of whether the graph is regular. Indeed, if $k|n$, this is immediate from \eqref{eq: A-spectrum of regular Turan graph}; otherwise, the $k-2$ negative eigenvalues $-q-1$ and $-q$ in \eqref{eq: A-spectrum of irregular Turan graph} are complemented by the smaller of the two eigenvalues in the second set of \eqref{eq: A-spectrum of irregular Turan graph}, which is negative since $r_1 r_2 < n_1 n_2$. This count is consistent with Theorem~\ref{theorem: spectrum of multipartite graphs}. If $k=1$, which corresponds to an empty graph (having no edges), then all eigenvalues are zeros (having no negative eigenvalues).
Furthermore, the adjacency matrix of $T(n,k)$ always has a single positive eigenvalue, which is of multiplicity~1, irrespective of the values of $n$ and $k$. We rely on these properties later in this paper (see Section~\ref{subsubsection: Turan graph is DS}). \end{remark} \begin{example} \label{example: A-spectrum of Turan graph} Let us use Theorem~\ref{theorem: A-spectrum of Turan graph} to calculate the $\A$-spectrum of the Tur\'{a}n graph $T(17,7)$. Having $n=17$ and $k=7$ gives $q=2$ and $s=3$, which by Theorem~\ref{theorem: A-spectrum of Turan graph} implies that \begin{align} \label{eq4: 23.12.2024} \sigma_{\A}\bigl(T(17,7)\bigr)= \Bigl\{ [-3]^2, [-2]^3, [0]^{10}, \, 6(1+\sqrt{2}), \, 6(1-\sqrt{2}) \Bigr\}. \end{align} This spectrum was verified numerically with the SageMath software \cite{SageMath}. \end{example} \subsubsection{Tur\'{a}n graphs are DS} \label{subsubsection: Turan graph is DS} The main result of this subsection establishes that all Tur\'{a}n graphs are determined by their $\A$-spectrum. This result was established as Theorem~3.3 in \cite{MaRen2010}; the proof presented here is an alternative one that offers additional insights. \begin{theorem} \label{theorem: Turan's graph is DS} The Tur\'{a}n graph $T(n,k)$ is $\A$-DS. \end{theorem} In order to prove Theorem~\ref{theorem: Turan's graph is DS}, we first introduce an auxiliary result from \cite{Smith1970}, followed by several other lemmata. \begin{theorem} \cite[Theorem~1]{Smith1970} \label{theorem:Smith_theorem_1} Let $\Gr{G}$ be a graph. Then, the following statements are equivalent: \begin{itemize} \item $\Gr{G}$ has exactly one positive eigenvalue. \item $\Gr{G}=\Gr{H} \cup \CGr{\CoG{m}}$ for some $m$, where $\Gr{H}$ is a nonempty complete multipartite graph. In other words, the non-isolated vertices of $\Gr{G}$ form a complete multipartite graph.
\end{itemize} \end{theorem} \begin{proof}[Proof of Theorem~\ref{theorem: Turan's graph is DS}] Let $\Gr{G}$ be a graph that is $\A$-cospectral with $T(n,k)$. Denote $n=qk+s$ for $s,q\in \naturals$ such that $0\le s<k$. \begin{lemma} \label{lemma:Tnk_DS_lemma_1} The graph $\Gr{G}$ does not have a clique of order $k+1$. \end{lemma} \begin{proof} Suppose to the contrary that the graph $\Gr{G}$ has a clique of order $k+1$, which means that $\CoG{k+1}$ is an induced subgraph of $\Gr{G}$. The complete graph $\CoG{k+1}$ has $k$ negative eigenvalues ($-1$ with a multiplicity of $k$). On the other hand, $\Gr{G}$ has at most $k-1$ negative eigenvalues, $n-k$ zero eigenvalues, and exactly one positive eigenvalue; indeed, this follows from Theorem~\ref{theorem: A-spectrum of Turan graph} (see Remark~\ref{remark: Turan - multiplicity of negative eigenvalues}), and since $\Gr{G}$ and $T(n,k)$ are $\A$-cospectral graphs. Hence, by Cauchy's Interlacing Theorem, every induced subgraph $\Gr{H}$ of $\Gr{G}$ on $k+1$ vertices has at most $k-1$ negative eigenvalues; indeed, interlacing gives $\lambda_2(\Gr{H}) \geq \lambda_{2+n-(k+1)}(\Gr{G}) = \lambda_{n-k+1}(\Gr{G}) \geq 0$, where the rightmost inequality holds since at most the $k-1$ smallest eigenvalues of $\Gr{G}$ are negative. This contradicts the assumed existence of a clique on $k+1$ vertices, in light of the $k$ negative eigenvalues of $\CoG{k+1}$. \end{proof} \begin{lemma} \label{lemma:Tnk_DS_lemma_2} The graph $\Gr{G}$ is a complete multipartite graph. \end{lemma} \begin{proof} Since $\Gr{G}$ has exactly one positive eigenvalue, we get from Theorem~\ref{theorem:Smith_theorem_1} that $\Gr{G} = \Gr{H} \cup \CGr{\CoG{\ell}}$ for some $\ell \in \naturals$, where $\Gr{H}$ is a nonempty complete multipartite graph. We next show that $\ell=0$. Suppose to the contrary that $\ell \geq 1$, and let $v$ be an isolated vertex of $\CGr{\CoG{\ell}}$. Since $\Gr{H}$ is a nonempty graph, there exists a vertex $u\in \V{\Gr{H}}$.
Let $\Gr{G}_1$ be the graph obtained from $\Gr{G}$ by adding the single edge $\{v,u\}$. By Lemma~\ref{lemma:Tnk_DS_lemma_1}, $\Gr{G}$ does not have a clique of order $k+1$. Neither does $\Gr{G}_1$, since a clique of order $k+1 \geq 3$ in $\Gr{G}_1$ would have to contain the new edge, whereas the vertex $v$ has degree~1 in $\Gr{G}_1$. Since $\Gr{G}$ is $\A$-cospectral with $T(n,k)$, it has the same number of edges as $T(n,k)$, so this contradicts Corollary~\ref{corollary:turan}. Hence, $\Gr{G}=\Gr{H}$. \end{proof} \begin{lemma} \label{lemma:Tnk_DS_lemma_3} The graph $\Gr{G}$ is a complete $k$-partite graph. \end{lemma} \begin{proof} By Lemma \ref{lemma:Tnk_DS_lemma_2}, $\Gr{G}$ is a complete multipartite graph. Let $r$ be the number of partite subsets in the vertex set $\V{\Gr{G}}$. We show that $r=k$, which then gives that $\Gr{G}$ is a complete $k$-partite graph. By Lemma~\ref{lemma:Tnk_DS_lemma_1}, $\Gr{G}$ does not have a clique of order $k+1$. Hence, $r\le k$. Suppose to the contrary that $r<k$. Since $\Gr{G}$ is a complete $r$-partite graph, the largest order of a clique in $\Gr{G}$ is $r$. Let $\Gr{G}_{1}$ be a graph obtained from $\Gr{G}$ by adding an edge between two vertices within the same partite subset. The graph $\Gr{G}_1$ is then an (at most) $(r+1)$-partite graph. Consequently, the maximum order of a clique in $\Gr{G}_1$ is at most $r+1 \leq k$. The graph $\Gr{G}_1$ has exactly one more edge than $\Gr{G}$. Since $\Gr{G}$ is $\A$-cospectral to $T(n,k)$, it has the same number of edges as $T(n,k)$. Hence, $\Gr{G}_1$ contains more edges than $T(n,k)$, while also lacking a clique of order $k+1$. This contradicts Corollary~\ref{corollary:turan}, so we conclude that $r=k$. \end{proof} Let $n_1, n_2, \dots , n_k \in \naturals$ be the numbers of vertices in the partite subsets of the complete $k$-partite graph $\Gr{G}$, i.e., $\Gr{G} = \CoG{n_1, n_2, \dots , n_k}$. The next two lemmata then hold. \begin{lemma} \label{lemma:Tnk_DS_lemma_4} For all $i \in \OneTo{k}$, $n_i \le q+1$. \end{lemma} \begin{proof} Suppose to the contrary that there exists a partite subset in the complete $k$-partite graph $\Gr{G}$ with more than $q+1$ vertices.
Let $\set{P}_1$ be such a partite subset, and suppose without loss of generality that $n_1= \card{\set{P}_1} \geq q+2$. By the pigeonhole principle, there exists a partite subset of $\Gr{G}$ with at most $q$ vertices (since $\sum_{i \in \OneTo{k}} n_i = n = kq+s$, where $0\leq s \leq k-1$). Let $\set{P}_2$ be such a partite subset of $\Gr{G}$, and suppose without loss of generality that $n_2 = \card{\set{P}_2} \leq q$. Let $\Gr{G}_{1}$ be the graph obtained from $\Gr{G}$ by removing a vertex $v \in \set{P}_1$, adding a new vertex $u$ to $\set{P}_2$, and connecting $u$ to all the vertices outside its partite subset. The new graph $\Gr{G}_1$ is also $k$-partite, so it does not contain a clique of order greater than $k$. Furthermore, by construction, $\Gr{G}_{1}$ has more edges than $\Gr{G}$ (the number of edges increases by $(n_1-1)-n_2 \geq 1$), so \begin{align} \label{eq1: 24.12.2024} \bigcard{\E{\Gr{G}_1}} > \bigcard{\E{\Gr{G}}} = \bigcard{\E{T(n,k)}}. \end{align} Hence, $\Gr{G}_{1}$ is a graph with no clique of order greater than $k$, and it has more edges than $T(n,k)$. That contradicts Theorem~\ref{theorem: Turan's theorem}, so $\{n_i\}_{i=1}^k$ cannot include any element that is larger than $q+1$. \end{proof} \begin{lemma} \label{lemma:Tnk_DS_lemma_5} For all $i \in \OneTo{k}$, $n_i \geq q$. \end{lemma} \begin{proof} The proof of this lemma is analogous to the proof of Lemma~\ref{lemma:Tnk_DS_lemma_4}. Suppose to the contrary that there exists a partite subset of $\Gr{G}$ with fewer than $q$ vertices. Let $\set{P}_1$ be such a partite subset, so $p_1 \triangleq \bigcard{\set{P}_1}<q$. By the pigeonhole principle, there exists a partite subset with at least $q+1$ vertices. Let $\set{P}_2$ be such a partite subset, whose number of vertices is denoted by $p_{2} \triangleq \card{\set{P}_2} \geq q+1$. Let $\Gr{G}_{1}$ be the graph obtained by removing a vertex $v \in \set{P}_2$, adding a new vertex $u$ to $\set{P}_1$, and connecting the vertex $u$ to all the vertices outside its partite subset.
$\Gr{G}_{1}$ is $k$-partite, so it does not contain a clique of order greater than $k$, and $\Gr{G}_{1}$ has more edges than $\Gr{G}$ (the number of edges increases by $(p_2-1)-p_1 \geq 1$), so \eqref{eq1: 24.12.2024} holds. Hence, $\Gr{G}_1$ is a graph with no clique of order greater than $k$, and it has more edges than $T(n,k)$. That contradicts Theorem~\ref{theorem: Turan's theorem}, so $\{n_i\}_{i=1}^k$ cannot include any element that is smaller than $q$. \end{proof} By Lemmata~\ref{lemma:Tnk_DS_lemma_4} and~\ref{lemma:Tnk_DS_lemma_5}, we conclude that $n_i \in \{q,q+1\}$ for every $1 \le i \le k$. Let $\alpha$ be the number of partite subsets with $q$ vertices, and let $\beta$ be the number of partite subsets with $q+1$ vertices. Since $\Gr{G}$ has $n$ vertices, i.e., $\sum_{i=1}^{k} n_{i}=n$, it follows that $q\alpha + (q+1)\beta = n$. Moreover, $\Gr{G}$ is $k$-partite, so $\alpha+\beta=k$. This gives the linear system of equations \begin{align} \begin{pmatrix}q & q+1\\ 1 & 1 \end{pmatrix}\begin{pmatrix}\alpha\\ \beta \end{pmatrix}=\begin{pmatrix}n\\ k \end{pmatrix}, \end{align} which has the unique solution \begin{align} \alpha = k-s, \quad \beta=n-qk=s. \end{align} Hence, $\Gr{G} = T(n,k)$, which completes the proof of Theorem~\ref{theorem: Turan's graph is DS}. \end{proof} \begin{remark} \label{remark: on the alternative proof by Ma and Ren} The proof of Theorem~\ref{theorem: Turan's graph is DS} is an alternative to the proof of Theorem~3.3 in \cite{MaRen2010}. While both proofs rely on Theorem~\ref{theorem:Smith_theorem_1}, which is Theorem~1 of \cite{Smith1970}, our proof relies on the adjacency spectral characterization in Theorem~\ref{theorem: A-spectrum of Turan graph}, which is noteworthy in its own right, and further builds upon the sequence of results presented in Lemmata~\ref{lemma:Tnk_DS_lemma_1}--\ref{lemma:Tnk_DS_lemma_5}.
On the other hand, the proof of Theorem~3.3 in \cite{MaRen2010} relies on Theorem~\ref{theorem:Smith_theorem_1}, but then deviates substantially from our proof (see Lemmata~2.4 and~2.5 and Theorem~3.1 in \cite{MaRen2010}, which serve there to prove Theorem~\ref{theorem: Turan's graph is DS}). \end{remark} \subsection{Line graphs} \label{subsection: Line graphs} Among the various transformations on graphs, the line graph is one of the most extensively studied \cite{BeinekeB21}. We first introduce its definition, and then address spectral graph determination properties. \begin{definition} \label{definition: line graph} The {\em line graph} of a graph $\Gr{G}$, denoted by $\ell(\Gr{G})$, is a graph whose vertices are the edges of $\Gr{G}$, and two vertices are adjacent in $\ell(\Gr{G})$ if and only if the corresponding edges share a common endpoint in $\Gr{G}$. \end{definition} A notable spectral property of line graphs is that all the eigenvalues of their adjacency matrix are greater than or equal to~$-2$ (see, e.g., \cite[Theorem~4.6]{BeinekeB21}). For the determination of all graphs whose spectrum is bounded from below by $-2$, the interested reader is referred to \cite[Section~4.5]{BeinekeB21}. The following theorem characterizes some families of line graphs that are DS.
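As a brief numerical aside before stating it, the lower bound of $-2$ on the adjacency eigenvalues of a line graph is easy to check computationally. The following is a minimal sketch in Python with NumPy (assumed available); the helper \texttt{line\_graph\_adjacency} is ours for illustration and is not taken from the cited references.

```python
import itertools
import numpy as np

def line_graph_adjacency(edges):
    # Vertices of the line graph are the edges of G; two of them are
    # adjacent if and only if the corresponding edges share an endpoint.
    edges = [frozenset(e) for e in edges]
    m = len(edges)
    A = np.zeros((m, m), dtype=int)
    for i, j in itertools.combinations(range(m), 2):
        if edges[i] & edges[j]:
            A[i, j] = A[j, i] = 1
    return A

# Line graph of the complete graph K_5 (the triangular graph):
# 10 vertices, 6-regular, with least adjacency eigenvalue exactly -2.
A = line_graph_adjacency(itertools.combinations(range(5), 2))
eigs = np.linalg.eigvalsh(A)
assert A.shape == (10, 10) and A.sum() == 10 * 6
assert eigs.min() >= -2 - 1e-9
```

The same check can be repeated for the line graph of any small graph by changing the edge list.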
\begin{theorem} \label{theorem: DS line graphs} The following line graphs are DS: \begin{enumerate}[1] \item \label{Item 1: DS line graphs} The line graph of the complete graph $\CoG{k}$, where $k \geq 4$ and $k \neq 8$ (see \cite[Theorem~4.1.7]{CvetkovicRS2010}), \item \label{Item 2: DS line graphs} The line graph of the complete bipartite graph $\CoBG{k}{k}$, where $k \geq 2$ and $k \neq 4$ (see \cite[Theorem~4.1.8]{CvetkovicRS2010}), \item \label{Item 3: DS line graphs} The line graph $\ell(\CGr{\CG{6}})$ (see \cite[Proposition~4.1.5]{CvetkovicRS2010}), \item \label{Item 4: DS line graphs} The line graph of the complete bipartite graph $\CoBG{m}{n}$, where $m+n \geq 19$ and $\{m,n\} \neq \{2s^2+s, 2s^2-s\}$ with $s \in \naturals$ (see \cite[Proposition~4.1.18]{CvetkovicRS2010}). \end{enumerate} \end{theorem} \begin{remark} \label{remark: triangular graphs} In regard to Item~\ref{Item 1: DS line graphs} of Theorem~\ref{theorem: DS line graphs}, the line graphs of complete graphs are referred to as triangular graphs. These are strongly regular graphs with the parameters $\srg{\tfrac12 k(k-1)}{2(k-2)}{k-2}{4}$, where $k \geq 4$. For $k=8$, the corresponding triangular graph is cospectral and nonisomorphic to the three Chang graphs (see Remark~\ref{remark: NICS SRGs}), which are strongly regular graphs $\srg{28}{12}{6}{4}$. \end{remark} We next prove the following result in regard to the Petersen graph, which appears in \cite[Problem~4.3]{CvetkovicRS2010} and \cite[Section~10.3]{BrouwerM22} (without a proof). \begin{corollary} \label{corollary: Petersen graph is DS} The Petersen graph is DS. \end{corollary} \begin{proof} The Petersen graph is known to be isomorphic to the complement of the line graph of the complete graph $\CoG{5}$ (i.e., it is isomorphic to $\CGr{\ell(\CoG{5})}$). By Item~\ref{Item 1: DS line graphs} of Theorem~\ref{theorem: DS line graphs}, the line graph $\ell(\CoG{5})$ is DS.
It is also a 6-regular graph (as the line graph of a $d$-regular graph is $(2d-2)$-regular, and $\CoG{5}$ is a 4-regular graph). Consequently, by Item~\ref{item 5: DS graphs} of Theorem~\ref{theorem: special classes of DS graphs}, the complement of $\ell(\CoG{5})$ is also DS. \end{proof} The following definition and theorem provide further elaboration on Item~\ref{Item 2: DS line graphs} of Theorem~\ref{theorem: DS line graphs}. \begin{definition} \cite[Section~1.1.8]{BrouwerM22} \label{definition: Lattice graphs} The {\em Hamming graph} $\mathrm{H}(2,q)$, where $q \geq 2$, has the vertex set $\OneTo{q} \times \OneTo{q}$, and any two vertices are adjacent if and only if they differ in exactly one coordinate (i.e., their Hamming distance is equal to~1). These graphs are also referred to as {\em lattice graphs}, and denoted by $\mathrm{L}_2(q)$. The lattice graph $\mathrm{L}_2(q)$, where $q \geq 2$, is also the line graph of the complete bipartite graph $\CoBG{q}{q}$, and it is a strongly regular graph with the parameters $\srg{q^2}{2(q-1)}{q-2}{2}$. \end{definition} \begin{theorem} \cite{Shrikhande59} \label{theorem: DS Lattice graphs} The lattice graph $\mathrm{L}_2(q)$ is a strongly regular DS graph for all $q \neq 4$. For $q=4$, the graph $\mathrm{L}_2(4)$ is not DS since it is cospectral and nonisomorphic to the Shrikhande graph; these two are nonisomorphic strongly regular graphs with the common parameters $\srg{16}{6}{4}{2}$. \end{theorem} The following result provides an interesting connection between the $\A$- and $\Q$-cospectrality of graphs. \begin{theorem} \cite[Proposition~7.8.5]{CvetkovicRS2010} If two graphs are $\Q$-cospectral, then their line graphs are $\A$-cospectral. \end{theorem} \subsection{Nice graphs} \label{subsection: Nice graphs} The family referred to as ``nice graphs'' was recently introduced in \cite{KovalK2024}.
\begin{definition} \label{definition: nice graphs} \noindent \begin{itemize} \item A graph $\Gr{G}$ is \emph{sunlike} if it is connected, and can be obtained from a cycle $\mathrm{C}$ by adding some vertices and connecting each of them to some vertex in $\mathrm{C}$. \item Let $l,k\in \naturals$. A sunlike graph $\Gr{G}$ is \emph{$(l,k)$-nice} if it can be obtained from a cycle $\CG{l}$ such that the following holds: \begin{itemize} \item There is a single vertex $v_1 \in \V{\CG{l}}$ of degree $3$ in $\Gr{G}$. \item There are $k$ vertices $u_1, \ldots, u_k \in \V{\CG{l}}$ of degree $4$ in $\Gr{G}$. Let $\set{U} = \{u_1, \ldots, u_k\}$. \item Starting a walk on $\CG{l}$ from $v_1$ in one of the two possible orientations, after 4 or 6 steps we reach a vertex $u_1 \in \set{U}$. Then, after another $4$ or $6$ steps from $u_1$ we reach $u_2 \in \set{U}$, and so on, until we reach the vertex $u_k \in \set{U}$. \end{itemize} \end{itemize} \end{definition} \begin{theorem} \cite{KovalK2024} \label{theorem: Koval and Kwan, 2024} Let $l,k \in \naturals$ be such that $l \equiv 2 \, (\text{mod } 4)$. Let $\Gr{G}$ be an $(l,k)$-nice graph. If the order of $\Gr{G}$ is a prime number greater than some $n_0 \in \naturals$, then the line graph $\ell(\Gr{G})$ is DS. \end{theorem} A more general class of graphs is introduced in \cite{KovalK2024}, where it is shown that, for every sufficiently large $n \in \naturals$, the number of nonisomorphic $n$-vertex DS graphs is at least $e^{cn}$ for some positive constant $c$ (see \cite[Theorem~1.4]{KovalK2024}). This recent result represents a significant advancement in the study of Haemers' conjecture because the earlier lower bounds on the number of nonisomorphic $n$-vertex DS graphs were all of the form $e^{c \sqrt{n}}$, for some positive constant $c$.
As noted in \cite{KovalK2024}, the first lower bound of that form was derived by van Dam and Haemers \cite[Proposition~6]{vanDamH09}, who proved that a graph $\Gr{G}$ is DS if every connected component of $\Gr{G}$ is a complete graph, leading to a lower bound that is approximately of the form $e^{c \sqrt{n}}$ with $c = \sqrt{\tfrac{2}{3}} \, \pi$. Therefore, the transition to a lower bound in \cite{KovalK2024} that scales exponentially with $n$, rather than with $\sqrt{n}$, is both remarkable and noteworthy. \subsection{Friendship graphs and their generalization} \label{subsection: The friendship graphs} The next theorem considers whether friendship graphs (see Definition~\ref{definition: friendship graph}) can be uniquely determined by the spectra of four of their associated matrices. \begin{theorem} \label{theorem: friendship graphs are X-DS} The friendship graph $\FG{p}$ satisfies the following properties: It is DS if and only if $p \ne 16$ (i.e., the friendship graph is DS unless it has 16~triangles) \cite{CioabaHVW2015}, $\LM$-DS \cite{LinSM2010}, $\Q$-DS \cite{WangBHB2010}, and ${\bf{\mathcal{L}}}$-DS \cite{BermanCCLZ2018}. \end{theorem} The friendship graph $\FG{p}$, where $p \in \naturals$, can be expressed in the form $\FG{p}=\CoG{1} \vee (p\CoG{2})$ (see Figure~\ref{fig:friendship graph F4}). This expression is a special case of the generalized friendship graphs, which are defined as follows. \begin{definition} Let $p,q\in \naturals$. The \emph{generalized friendship graph} is given by $\GFG{p}{q} = \CoG{1} \vee (p \CoG{q})$. Note that $\GFG{p}{2} = \FG{p}$. \end{definition} The following theorem addresses the conditions under which generalized friendship graphs can be uniquely determined by the spectra of their normalized Laplacian matrix.
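Before stating it, we digress briefly to the adjacency spectrum of $\FG{p}$: since $\FG{p}=\CoG{1} \vee (p\CoG{2})$, where $p\CoG{2}$ is a 1-regular graph with $\A$-spectrum $\{[1]^p, [-1]^p\}$, Lemma~\ref{lemma: Join-A-Spec} yields the two eigenvalues $\tfrac12\bigl(1 \pm \sqrt{8p+1}\bigr)$, together with $1$ (multiplicity $p-1$) and $-1$ (multiplicity $p$). A minimal numerical check in Python with NumPy (assumed available; the helper \texttt{friendship\_adjacency} is ours for illustration):

```python
import numpy as np

def friendship_adjacency(p):
    # F_p = K_1 joined to p disjoint edges: p triangles sharing one hub vertex.
    n = 2 * p + 1
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = A[1:, 0] = 1             # the hub is joined to all other vertices
    for t in range(p):                   # the p disjoint edges of p*K_2
        i, j = 1 + 2 * t, 2 + 2 * t
        A[i, j] = A[j, i] = 1
    return A

p = 3
eigs = np.sort(np.linalg.eigvalsh(friendship_adjacency(p)))
expected = np.sort([(1 + np.sqrt(8 * p + 1)) / 2,
                    (1 - np.sqrt(8 * p + 1)) / 2]
                   + [1] * (p - 1) + [-1] * p)
assert np.allclose(eigs, expected)
```

For $p=3$, the computed spectrum is $\{-2, [-1]^3, [1]^2, 3\}$, in agreement with the closed form above.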
\begin{theorem} \label{theorem: Berman's generalized friendship graph} The generalized friendship graph $\GFG{p}{q}$ is ${\bf{\mathcal{L}}}$-DS if and only if $q \ge 2$, or $q=1$ and $p=2$ \cite{BermanCCLZ2018}. \end{theorem} \begin{corollary} \label{corollary: friendship graph is DS w.r.t. normalized Laplacian} The friendship graph $\FG{p}$ is ${\bf{\mathcal{L}}}$-DS \cite{BermanCCLZ2018}. \end{corollary} \subsection{Strongly regular graphs} \label{subsection: strongly regular graphs} Strongly regular graphs with an identical vector of parameters $(n,d,\lambda,\mu)$ are cospectral, but they may not be isomorphic (see Corollary~\ref{corollary: cospectral SRGs}, Remark~\ref{remark: NICS SRGs}, and Theorem~\ref{theorem: DS Lattice graphs}). For that reason, strongly regular graphs are not necessarily determined by their spectrum. There are, however, infinite families of strongly regular DS graphs: \begin{theorem} \cite[Proposition~14.5.1]{BrouwerH2011} \label{theorem: DS SRG} If $n \neq 8$, $m \neq 4$, $a \geq 2$, and $b \geq 2$, then the disjoint union of identical complete graphs $a \CoG{b}$, the line graph of a complete graph $\ell(\CoG{n})$, and the line graph of a complete bipartite graph with partite sets of equal size $\ell(\CoBG{m}{m})$, as well as their complements, are strongly regular DS graphs. \end{theorem} We next show that, although connected strongly regular graphs are not generally DS, the property of strong regularity, as well as the four parameters that characterize strongly regular graphs, can be determined by the spectrum of their adjacency matrix. \begin{theorem} \label{theorem: on A-spectrum and SRGs} Let $\Gr{G}$ be a connected strongly regular graph. Then, its strong regularity, the vector of parameters $(n,d,\lambda,\mu)$, Lov\'{a}sz $\vartheta$-function $\vartheta(\Gr{G})$, number of edges and triangles, girth, and diameter can all be determined by its $\A$-spectrum.
\end{theorem} \begin{proof} The order $n$ of a graph is determined by the $\A$-spectrum, being the number of eigenvalues (including multiplicities). By Theorem~\ref{theorem: graph regularity from A-spectrum}, the regularity of a graph is determined by its $\A$-spectrum. By Item~\ref{Item 5: eigenvalues of srg} of Theorem~\ref{theorem: eigenvalues of srg}, a connected regular graph is strongly regular if and only if it has three distinct eigenvalues. Hence, the strong regularity property of $\Gr{G}$ is determined by its $\A$-spectrum. For such a connected regular graph, the largest eigenvalue is simple, $\lambda_1=d$, and the other two distinct eigenvalues of the adjacency matrix of $\Gr{G}$ are given by $\lambda_2$ and $\lambda_n$ with $\lambda_n<\lambda_2$. We next show that the number of common neighbors of any pair of adjacent vertices ($\lambda$) and the number of common neighbors of any pair of nonadjacent vertices ($\mu$) in $\Gr{G}$ are, respectively, given by \begin{align} \label{eq1:23.09.23} & \lambda = \lambda_1 + (1+\lambda_2)(1+\lambda_n) - 1, \\ \label{eq2:23.09.23} & \mu = \lambda_1 + \lambda_2 \lambda_n, \end{align} so these parameters are explicitly expressed in terms of the adjacency spectrum of the strongly regular graph. Indeed, by Theorem~\ref{theorem: eigenvalues of srg}, the second-largest and least eigenvalues of the adjacency matrix of $\Gr{G}$ are given by \begin{align} \label{eq1:21.01.25} \begin{dcases} \lambda_2 = \tfrac12 \Bigl( \lambda - \mu + \sqrt{(\lambda-\mu)^2 + 4(d-\mu)} \Bigr), \\[0.1cm] \lambda_n = \tfrac12 \Bigl( \lambda - \mu - \sqrt{(\lambda-\mu)^2 + 4(d-\mu)} \Bigr), \end{dcases} \end{align} from which it follows that (noting that $d = \lambda_1$) \begin{align} \label{eq2:21.01.25} \begin{dcases} \lambda_2 + \lambda_n = \lambda - \mu, \\ \lambda_2 \lambda_n = \mu - \lambda_1.
\end{dcases} \end{align} This gives \eqref{eq2:23.09.23} directly from the second equality in \eqref{eq2:21.01.25}, and \eqref{eq1:23.09.23} by adding the two equalities in \eqref{eq2:21.01.25}. The Lov\'{a}sz $\vartheta$-function of a strongly regular graph is given by (see \cite[Proposition~1]{Sason23}) \begin{align} \label{eq: Lovasz SRG} \vartheta(\Gr{G}) = -\frac{n \lambda_n}{d - \lambda_n}, \end{align} so $\vartheta(\Gr{G})$ is determined by the $\A$-spectrum, once the strong regularity of $\Gr{G}$ has been established from that spectrum. The number of edges of the $d$-regular graph $\Gr{G}$ is $\tfrac12 nd$, and the number of triangles of the strongly regular graph $\Gr{G}$ is $\tfrac16 nd \lambda$, so they are both determined once the four parameters of the strongly regular graph are revealed. The diameter of a connected strongly regular graph is equal to~2 (note that complete graphs are excluded from the family of strongly regular graphs). Finally, the girth of the strongly regular graph $\Gr{G}$ is determined as follows \cite{CameronL2010}: \begin{enumerate} \item If $\lambda > 0$, then the girth of $\Gr{G}$ is equal to~3; \item If $\lambda=0$ and $\mu \geq 2$, then the girth of $\Gr{G}$ is equal to~4; \item If $\lambda=0$ and $\mu=1$, then the girth of $\Gr{G}$ is equal to~5. \end{enumerate} \end{proof} \begin{remark} \label{remark: connected SRG} A strongly regular graph is connected if and only if $\mu > 0$. \end{remark} By \cite[Proposition~2]{vanDamH03}, no pair of $\A$-cospectral graphs exists where one graph is regular and the other is not. The following result extends this observation to strong regularity. \begin{corollary} \label{corollary: cospectral graphs - one is SRG} There are no two $\A$-cospectral connected graphs where one is strongly regular and the other is not. \end{corollary} \begin{proof} By Theorem~\ref{theorem: on A-spectrum and SRGs}, the strong regularity of a connected graph is determined by its $\A$-spectrum; hence, a connected graph that is $\A$-cospectral with a connected strongly regular graph must itself be strongly regular.
\end{proof} Another corollary that follows from Theorem~\ref{theorem: on A-spectrum and SRGs} applies to strongly regular DS graphs. \begin{corollary} \label{corollary: SRG DS graphs} Let $\Gr{G}$ be a connected strongly regular graph such that there is no other nonisomorphic strongly regular graph with an identical vector of parameters $(n, d, \lambda, \mu)$. Then, $\Gr{G}$ is a DS graph. \end{corollary} Corollary~\ref{corollary: SRG DS graphs} naturally raises the following question. \begin{question} \label{question: DS SRG} Which connected strongly regular graphs are determined by their vector of parameters $(n, d, \lambda, \mu)$? \end{question} A partial answer to Question~\ref{question: DS SRG} is provided below. By Corollary~\ref{corollary: SRG DS graphs}, for connected strongly regular graphs, there exists an equivalence between their spectral determination (due to their regularity and in light of Theorem~\ref{theorem: regular DS graphs}, based on the spectrum of their adjacency, Laplacian, or signless Laplacian matrices) and the uniqueness of these graphs for the given parameter vector $(n, d, \lambda, \mu)$. The study of the number of nonisomorphic strongly regular graphs corresponding to a given set of parameters has been extensively explored. For example, by Theorem~\ref{theorem: DS Lattice graphs}, there is a unique (up to isomorphism) strongly regular graph of the form $\srg{q^2}{2(q-1)}{q-2}{2}$ for any given $q \geq 2$ with $q \neq 4$. Specifically, this implies the uniqueness of $\srg{36}{10}{4}{2}$ (setting $q=6$). On the other hand, a computer search by McKay and Spence established that there are $32,548$ strongly regular graphs of the form $\srg{36}{15}{6}{6}$, so none of them is DS (by Corollary~\ref{corollary: SRG DS graphs}). Further results on the uniqueness or non-uniqueness of strongly regular graphs with a given parameter vector $(n, d, \lambda, \mu)$ can be found in \cite{Cameron2003} and the references therein. 
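As a numerical illustration of Theorem~\ref{theorem: on A-spectrum and SRGs}, the parameters $\lambda$ and $\mu$ in \eqref{eq1:23.09.23}--\eqref{eq2:23.09.23} and the Lov\'{a}sz $\vartheta$-function in \eqref{eq: Lovasz SRG} can be recovered directly from the $\A$-spectrum of the Petersen graph, which is $\srg{10}{3}{0}{1}$ with $\vartheta=4$. A minimal sketch in Python with NumPy (assumed available; the Kneser-graph construction of the Petersen graph is standard):

```python
import itertools
import numpy as np

# Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
# of {0,...,4}, with two subsets adjacent if and only if they are disjoint.
verts = list(itertools.combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i, j in itertools.combinations(range(n), 2):
    if not set(verts[i]) & set(verts[j]):
        A[i, j] = A[j, i] = 1

eigs = np.sort(np.linalg.eigvalsh(A))
l1, l2, ln = eigs[-1], eigs[-2], eigs[0]    # 3, 1, -2
lam = l1 + (1 + l2) * (1 + ln) - 1          # common neighbors of adjacent pairs
mu = l1 + l2 * ln                           # common neighbors of nonadjacent pairs
theta = -n * ln / (l1 - ln)                 # Lovasz theta-function of an SRG
assert round(lam) == 0 and round(mu) == 1 and abs(theta - 4) < 1e-9
```

The recovered parameters $(n,d,\lambda,\mu) = (10,3,0,1)$ and $\vartheta = 4$ agree with the known values for the Petersen graph.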
Infinite families of strongly regular DS graphs are presented in Theorem~\ref{theorem: DS SRG}. Some known sporadic strongly regular DS graphs are listed in \cite[Table~2]{vanDamH03}, with an update in \cite[Table~1]{vanDamH09}. The uniqueness of further strongly regular graphs with given parameter vectors, making them therefore DS graphs, was established, e.g., in \cite{Brouwer83,BrouwerH92,StevanovicM2009,Coolsaet2006,Mesner56}. A combination of \cite[Table~1]{vanDamH09} and Corollary~\ref{corollary: Petersen graph is DS} implies that, apart from complete graphs on fewer than three vertices and all complete bipartite regular graphs (which are known to be DS, as stated in Item~\ref{item 2: DS graphs} of Theorem~\ref{theorem: special classes of DS graphs}), all seven currently known triangle-free strongly regular graphs (see \cite{StruikB2010}) are also DS. These include: \begin{itemize} \item The pentagon graph $\CG{5}$, which is $\srg{5}{2}{0}{1}$ (by Item~\ref{item 2: DS graphs} of Theorem~\ref{theorem: special classes of DS graphs}, and see \cite[Section~10.1]{BrouwerM22}), \item The Petersen graph $\srg{10}{3}{0}{1}$ (by Corollary~\ref{corollary: Petersen graph is DS}, and see \cite[Section~10.3]{BrouwerM22}), \item The Clebsch graph $\srg{16}{5}{0}{2}$ (see \cite[Section~10.7]{BrouwerM22}), \item The Hoffman--Singleton graph $\srg{50}{7}{0}{1}$ (see \cite[Section~10.19]{BrouwerM22}), \item The Gewirtz graph $\srg{56}{10}{0}{2}$ (see \cite[Section~10.20]{BrouwerM22}), \item The Mesner ($\mathrm{M}_{22}$) graph $\srg{77}{16}{0}{4}$ (see \cite[Section~10.27]{BrouwerM22} and \cite{Brouwer83}), \item The Higman--Sims graph $\srg{100}{22}{0}{6}$ (see \cite[Section~10.31]{BrouwerM22} and \cite{Mesner56}). \end{itemize} An up-to-date list of strongly regular DS graphs --- strongly regular graphs that are uniquely determined by their parameter vectors --- as well as the number of strongly regular NICS graphs for given parameter vectors, is available on Brouwer's website \cite{Brouwer}.
An exclamation mark placed to the left of a parameter vector $(n,d,\lambda,\mu)$, without a preceding number, indicates a strongly regular DS graph. In contrast, an exclamation mark preceded by a natural number greater than~1 specifies the number of strongly regular NICS graphs with the corresponding parameter vector. For example, as shown in \cite{Brouwer}, strongly regular graphs with the parameter vectors $(13,6,2,3)$, $(15,6,1,3)$, $(17,8,3,4)$, and $(21,10,3,6)$, among others, are DS graphs. On the other hand, according to \cite{Brouwer}, there are 15 strongly regular NICS graphs with the parameter vector $(25,12,5,6)$, 10 strongly regular NICS graphs with the parameter vector $(26,10,3,4)$, and so forth. To conclude, since strongly regular NICS graphs are not DS, $\LM$-DS, or $\Q$-DS, we mention ongoing research by Cioaba \textit{et al.} \cite{CioabaGJM25}, of which we were recently informed, that investigates the spectral properties of higher-order Laplacian matrices associated with these graphs. That research demonstrates that the spectra of these new matrices can distinguish some of the strongly regular NICS graphs; in other cases, however, strongly regular NICS graphs remain indistinguishable even by the spectra of these higher-order Laplacian matrices. \section{Graph operations for the construction of cospectral graphs} \label{section: graph operations} This section presents graph operations, focusing on unary and binary operations that enable the systematic construction of cospectral graphs. These operations preserve the spectrum of the original graph while potentially altering its structure, thereby producing nonisomorphic graphs with identical eigenvalues.
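Throughout this section, claimed cospectralities can be verified numerically by comparing sorted eigenvalue lists of the relevant matrices. The following Python sketch is our own illustration (it assumes the NumPy library, and the helper names are ours); it checks the classical pair of nonisomorphic $\A$-cospectral graphs on five vertices, $\CG{4} \cup \CoG{1}$ and the star $\CoBG{1}{4}$, whose common adjacency spectrum is $\{2, 0, 0, 0, -2\}$:

```python
import numpy as np

def adjacency(n, edges):
    """0/1 adjacency matrix of a simple graph on vertices 0..n-1."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def cospectral(M1, M2, tol=1e-9):
    """True if two real symmetric matrices share the same sorted spectrum."""
    if M1.shape != M2.shape:
        return False
    return np.allclose(np.sort(np.linalg.eigvalsh(M1)),
                       np.sort(np.linalg.eigvalsh(M2)), atol=tol)

# C4 together with an isolated vertex, versus the star K_{1,4}:
A1 = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 0)])
A2 = adjacency(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
```

Since only the star has a vertex of degree four, the two graphs are nonisomorphic, so they form a NICS pair with respect to $\A$; they are not $\LM$-cospectral, which also illustrates that cospectrality with respect to one matrix need not imply it with respect to another.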
By employing these techniques, one can generate diverse examples of cospectral graphs, thereby offering valuable tools for investigating the limitations of spectral characterization and for exploring the boundary between graphs that are, or are not, determined by their spectrum. This relates the scope of the present section to Section~\ref{section: special families of graphs}, which deals with graphs or graph families that are determined by their spectrum. \subsection{Coalescence} \label{subsection: Coalescence} \noindent A construction of cospectral trees was offered in \cite{Schwenk1973}, implying that almost all trees are not DS. \begin{definition} \label{definition: Coalescence} Let $\Gr{G}_1 = (V_1,E_1)$ and $\Gr{G}_2=(V_2,E_2)$ be two graphs with $n_1$ and $n_2$ vertices, respectively, and let $v_1 \in V_1$ and $v_2 \in V_2$ be arbitrarily chosen vertices. The \emph{coalescence of $\Gr{G}_1$ and $\Gr{G}_2$ with respect to $v_1$ and $v_2$} is the graph on $n_1 + n_2 - 1$ vertices obtained from the disjoint union of $\Gr{G}_1$ and $\Gr{G}_2$ by identifying $v_1$ and $v_2$ as a single vertex. \end{definition} \begin{theorem} \label{theorem: Coalescence} Let $\Gr{G}_1 = (V_1,E_1)$ and $\Gr{G}_2=(V_2,E_2)$ be two cospectral graphs, and let $v_1 \in V_1$ and $v_2 \in V_2$ be arbitrarily chosen vertices. Let $\Gr{H}_1$ and $\Gr{H}_2$ be the subgraphs of $\Gr{G}_1$ and $\Gr{G}_2$ that are induced by $V_1 \setminus \{v_1\}$ and $V_2 \setminus \{v_2\}$, respectively. Let $\Gamma$ be a graph, and let $u\in V(\Gamma)$. If $\Gr{H}_1$ and $\Gr{H}_2$ are cospectral, then the coalescence of $\Gr{G}_1$ and $\Gamma$ with respect to $v_1$ and $u$ is cospectral to the coalescence of $\Gr{G}_2$ and $\Gamma$ with respect to $v_2$ and $u$.
\end{theorem} Combinatorial arguments that rely on the coalescence operation lead to a striking asymptotic result in \cite{Schwenk1973}: the fraction of $n$-vertex trees having cospectral and nonisomorphic mates, which are also trees, approaches one as $n$ tends to infinity. Consequently, the fraction of the $n$-vertex nonisomorphic trees that are determined by their spectrum (DS) approaches zero as $n$ tends to infinity; in other words, almost all trees are not DS (with respect to their adjacency matrix) \cite{Schwenk1973}. \subsection{Seidel switching} \label{subsection: Seidel switching} \noindent Seidel switching is one of the well-known methods for the construction of cospectral graphs. \begin{definition} \label{definition: Seidel switching} Let $\Gr{G}$ be a graph, and let $\set{U} \subseteq \V{\Gr{G}}$. The graph $\Gr{G}_{\set{U}}$ is constructed by preserving all the edges of $\Gr{G}$ between vertices within $\set{U}$, as well as all the edges of $\Gr{G}$ between vertices within the complement set $\cset{U} = \V{\Gr{G}} \setminus \set{U}$, while toggling adjacency between every pair of vertices with one vertex in $\set{U}$ and the other in $\cset{U}$ (edges become non-edges, and vice versa). The graph $\Gr{G}_{\set{U}}$ (up to isomorphism) is referred to as the \emph{Seidel switching} of $\Gr{G}$ with respect to $\set{U}$. \end{definition} By Definition~\ref{definition: Seidel switching}, the Seidel switching of $\Gr{G}$ with respect to $\set{U}$ is identical to its Seidel switching with respect to $\cset{U}$. Let $\A(\Gr{G})$ and $\A(\Gr{G}_{\set{U}})$ be the adjacency matrices of a graph $\Gr{G}$ and its Seidel switching $\Gr{G}_{\set{U}}$, and let $\A_{\set{U}}$ and $\A_{\cset{U}}$ be the submatrices of $\A(\Gr{G})$ that are, respectively, the adjacency matrices of the subgraphs of $\Gr{G}$ induced by $\set{U}$ and $\cset{U}$.
Then, for some $\mathbf{B} \in \{0,1\}^{\card{\cset{U}} \times \card{\set{U}}}$, we get \begin{align} \A(\Gr{G})= \begin{pmatrix} \A_{\set{U}} & \mathbf{B}^{\mathrm{T}}\\ \mathbf{B} & \A_{\cset{U}} \end{pmatrix}, \end{align} and by Definition~\ref{definition: Seidel switching}, \begin{align} \A(\Gr{G}_{\set{U}})= \begin{pmatrix} \A_{\set{U}} & \overline{\mathbf{B}}^{\mathrm{T}}\\ \overline{\mathbf{B}} & \A_{\cset{U}} \end{pmatrix}, \end{align} where $\overline{\mathbf{B}}$ is obtained from $\mathbf{B}$ by interchanging zeros and ones. If $\Gr{G}$ is a regular graph, the following theorem provides a necessary and sufficient condition for $\Gr{G}_{\set{U}}$ to be a regular graph of the same degree. \begin{theorem} \cite[Proposition~1.1.7]{CvetkovicRS2010} \label{theorem: d-regular graphs in Seidel switching} Let $\Gr{G}$ be a $d$-regular graph on $n$ vertices. Then, $\Gr{G}_{\set{U}}$ is also $d$-regular if and only if $\set{U}$ induces a regular subgraph of degree $k$, where $\card{\set{U}} = n - 2(d-k)$. \end{theorem} The next result shows the relevance of Seidel switching for the construction of regular and cospectral graphs. \begin{theorem} \cite[Proposition~1.1.8]{CvetkovicRS2010} \label{theorem: Seidel switching} Let $\Gr{G}$ be a $d$-regular graph, $\set{U} \subseteq \V{\Gr{G}}$, and let $\Gr{G}_{\set{U}}$ be obtained from $\Gr{G}$ by Seidel switching. If $\Gr{G}_{\set{U}}$ is also a $d$-regular graph, then $\Gr{G}$ and $\Gr{G}_{\set{U}}$ are cospectral (and due to their regularity, they are $\mathcal{X}$-cospectral for every $\mathcal{X} \in \{\A, \LM, \Q, \bf{\mathcal{L}}\}$). \end{theorem} \begin{remark} \label{remark: Seidel switching} Theorem~\ref{theorem: Seidel switching} provides a method for finding cospectral regular graphs. These graphs may, however, also be isomorphic. If the graphs are nonisomorphic, then it gives a pair of NICS graphs.
\end{remark} \begin{remark} \label{remark: limitation of Seidel switching} A regular graph $\Gr{G}$ on $n$ vertices cannot be switched into another regular graph if $n$ is odd (see \cite[Corollary~4.1.10]{CvetkovicRS2010}), which means that the conditions in Theorem~\ref{theorem: Seidel switching} cannot be satisfied for any regular graph of an odd order. \end{remark} \begin{remark} \label{remark: equivalence relation of Seidel switching} Seidel switching determines an equivalence relation on graphs. This follows from the fact that switching with respect to a subset $\set{U} \subseteq \V{\Gr{G}}$, and then with respect to a subset $\set{V} \subseteq \V{\Gr{G}}$, is the same as switching with respect to $(\set{V} \setminus \set{U}) \cup (\set{U} \setminus \set{V})$ (see \cite[p.~18]{CvetkovicRS2010}). \end{remark} \begin{example} \label{example: NICS graphs by Seidel switching} The Shrikhande graph can be obtained through Seidel switching applied to the line graph $\ell(\CoBG{4}{4})$ with respect to four independent vertices of the latter (see \cite[Example~1.2.4]{CvetkovicRS2010}). Both are 6-regular graphs (hence, they are cospectral graphs by Theorem~\ref{theorem: Seidel switching}), and both are strongly regular graphs $\srg{16}{6}{2}{2}$. They are, however, nonisomorphic: the neighborhood of every vertex in the Shrikhande graph induces a $6$-cycle, whereas in $\ell(\CoBG{4}{4})$ it induces two disjoint triangles. Consequently, these are nonisomorphic and cospectral (NICS) 6-regular graphs on 16 vertices. \end{example} \subsection{The Godsil and McKay method} \label{subsection: Godsil and McKay method} \noindent Another construction of cospectral pairs of graphs was offered by Godsil and McKay in \cite{GodsilM1982}. \begin{theorem} \label{theorem: Godsil and McKay method} Let $\Gr{G}$ be a graph with an adjacency matrix of the form \begin{align} \A(\Gr{G})= \begin{pmatrix} {\mathbf{B}} & {\mathbf{N}} \\ {\mathbf{N}}^{\mathrm{T}} & {\mathbf{C}} \end{pmatrix} \end{align} where ${\mathbf{B}} \in \{0,1\}^{b \times b}$ has constant row sums (i.e., the subgraph induced by the corresponding $b$ vertices is regular), and the sum of each column in ${\mathbf{N}} \in \{0,1\}^{b \times c}$ is either $0$, $b$, or $\frac{b}{2}$.
Let $\widehat{{\mathbf{N}}}$ be the matrix obtained by replacing each column $\underline{c}$ in ${\mathbf{N}}$ whose sum of elements is $\frac{b}{2}$ with its complement $\mathbf{1}_b - \underline{c}$. Then, the modified graph $\widehat{\Gr{G}}$ whose adjacency matrix is given by \begin{align} \A(\widehat{\Gr{G}}) = \begin{pmatrix} {\mathbf{B}} & \widehat{\mathbf{N}} \\ {\widehat{\mathbf{N}}}^{\mathrm{T}} & {\mathbf{C}} \end{pmatrix} \end{align} is cospectral with $\Gr{G}$. \end{theorem} Two examples of pairs of NICS graphs obtained by this method are presented in Section~1.8.3 of \cite{BrouwerH2011}. \subsection{Graphs resulting from the duplication and corona graphs} \label{subsection: Graphs resulting from the duplication and corona graphs} \begin{definition} \cite{SampathkumarW1980} \label{definition: duplication graph} Let $\Gr{G}$ be a graph with a vertex set $\V{\Gr{G}}=\{v_1, \ldots, v_n\}$, and consider a copy of $\Gr{G}$ with a vertex set $\mathrm{U}(\Gr{G}) = \{u_1, \ldots, u_n\}$, where $u_i$ is a duplicate of the vertex $v_i$. For each $i \in \OneTo{n}$, connect the vertex $u_i$ to all the neighbors of $v_i$ in $\Gr{G}$, and then delete all the edges within $\Gr{G}$ and within its copy. The resulting bipartite graph, which has $2n$ vertices, is called the \emph{duplication graph} of $\Gr{G}$, and is denoted by $\DuplicationGraph{\Gr{G}}$ (see Figure~\ref{fig:DG(C5)}). \end{definition} \begin{figure}[hbt] \centering \includegraphics[width=0.30\textwidth]{DG_C5.png} \caption{\label{fig:DG(C5)} The duplication graph $\DuplicationGraph{\CG{5}}$ (see Definition~\ref{definition: duplication graph}).} \end{figure} \begin{definition} \cite{FruchtH1970} \label{definition: corona graph} Let $\Gr{G}_1$ and $\Gr{G}_2$ be graphs on disjoint vertex sets of $n_1$ and $n_2$ vertices, and with $m_{1}$ and $m_{2}$ edges, respectively.
The \emph{corona} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\Corona{\Gr{G}_1}{\Gr{G}_2}$, is a graph on $n_1+n_1 n_2$ vertices obtained by taking one copy of $\Gr{G}_1$ and $n_1$ copies of $\Gr{G}_2$, and then connecting, for each $i \in \OneTo{n_1}$, the $i$-th vertex of $\Gr{G}_1$ to each vertex in the $i$-th copy of $\Gr{G}_2$ (see Figure~\ref{fig:C4 CORONA 2K1}). \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{C4_CORONA_2K1.png} \par\end{centering} \caption{\label{fig:C4 CORONA 2K1} The corona graph $\Corona{\CG{4}}{(2\CoG{1})}$ (see Definition~\ref{definition: corona graph}) consists of a single copy of $\CG{4}$ (represented by the black vertices) and four copies of $2\CoG{1}$ (represented by the red vertices).} \end{figure} \end{definition} \begin{definition} \cite{HouS2010} \label{definition: edge corona graph} The \emph{edge corona} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\EdgeCorona{\Gr{G}_1}{\Gr{G}_2}$, is defined as the graph obtained by taking one copy of $\Gr{G}_1$ and $m_1 = \bigcard{\E{\Gr{G}_1}}$ copies of $\Gr{G}_2$, and then connecting, for each $j \in \OneTo{m_1}$, the two end-vertices of the $j$-th edge of $\Gr{G}_1$ to every vertex in the $j$-th copy of $\Gr{G}_2$ (see Figure~\ref{fig:C4 edge corona P3}). \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{C4_edge_corona_P3.png} \par\end{centering} \caption{\label{fig:C4 edge corona P3} The edge-corona graph $\EdgeCorona{\CG{4}}{\PathG{3}}$ (see Definition~\ref{definition: edge corona graph}).} \end{figure} \end{definition} \begin{definition} \label{definition: duplication corona graph} Let $\Gr{G}_1$ and $\Gr{G}_2$ be graphs with disjoint vertex sets of $n_1$ and $n_2$ vertices, respectively.
Let $\DuplicationGraph{\Gr{G}_1}$ be the duplication graph of $\Gr{G}_1$ with vertex set $\V{\Gr{G}_1} \cup \mathrm{U}(\Gr{G}_1)$, where $\V{\Gr{G}_1}=\{v_{1},\ldots,v_{n_1}\}$ and $\mathrm{U}(\Gr{G}_1)=\{u_{1},\ldots,u_{n_{1}}\}$ is the duplicated vertex set (see Definition~\ref{definition: duplication graph}). The \emph{duplication corona graph}, denoted by $\DuplicationCorona{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\DuplicationGraph{\Gr{G}_1}$ and $n_1$ copies of $\Gr{G}_2$ by connecting, for each $i \in \OneTo{n_1}$, the vertex $v_{i} \in \V{\Gr{G}_1}$ of the graph $\DuplicationGraph{\Gr{G}_1}$ to every vertex in the $i$-th copy of $\Gr{G}_2$ (see Figure~\ref{fig:D.C}). \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{D.C.png} \par\end{centering} \caption{\label{fig:D.C} The duplication corona graph $\DuplicationCorona{\CG{4}}{\CoG{2}}$ (see Definition~\ref{definition: duplication corona graph}).} \end{figure} \end{definition} \begin{definition} \label{definition: duplication neighborhood corona} Let $\Gr{G}_1$ and $\Gr{G}_2$ be graphs with disjoint vertex sets of $n_1$ and $n_2$ vertices, respectively. Let $\DuplicationGraph{\Gr{G}_1}$ be the duplication graph of $\Gr{G}_1$ with the vertex set $\mathrm{V}(\Gr{G}_1)\cup \mathrm{U}(\Gr{G}_1)$, where $\mathrm{V}(\Gr{G}_1)=\{v_1, \ldots, v_{n_1}\}$ and the duplicated vertex set $\mathrm{U}(\Gr{G}_1)=\{u_1,\ldots,u_{n_1}\}$ (see Definition~\ref{definition: duplication graph}). The \emph{duplication neighborhood corona}, denoted by $\Gr{G}_1 \boxbar \Gr{G}_2$, is the graph obtained from $\DuplicationGraph{\Gr{G}_1}$ and $n_1$ copies of $\Gr{G}_2$ by connecting the neighbors of the vertex $v_i \in \V{\Gr{G}_1}$ of $\DuplicationGraph{\Gr{G}_1}$ to every vertex in the $i$-th copy of $\Gr{G}_2$ for $i \in \OneTo{n_1}$ (see Figure~\ref{fig:D.N.C}).
\begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{Duplication_neighbours_corona.png} \par\end{centering} \caption{\label{fig:D.N.C} The duplication neighborhood corona $\CoG{3} \boxbar \CoG{2}$ (see Definition~\ref{definition: duplication neighborhood corona}).} \end{figure} \end{definition} \begin{definition} \label{definition: duplication edge corona} Let $\Gr{G}_1$ and $\Gr{G}_2$ be graphs with disjoint vertex sets of $n_1$ and $n_2$ vertices, respectively. Let $\DuplicationGraph{\Gr{G}_1}$ be the duplication graph of $\Gr{G}_1$ with vertex set $\V{\Gr{G}_1} \cup \mathrm{U}(\Gr{G}_1)$, where $\V{\Gr{G}_1}=\{v_1, \ldots, v_{n_1}\}$ is the vertex set of $\Gr{G}_1$ and $\mathrm{U}(\Gr{G}_1)=\{u_1, \ldots, u_{n_{1}}\}$ is the duplicated vertex set. The \emph{duplication edge corona}, denoted by $\DuplicationEdgeCorona{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\DuplicationGraph{\Gr{G}_1}$ and $\bigcard{\E{\Gr{G}_1}}$ copies of $\Gr{G}_2$ by connecting each of the two vertices $v_i, v_j \in \V{\Gr{G}_1}$ of $\DuplicationGraph{\Gr{G}_1}$ to every vertex in the $k$-th copy of $\Gr{G}_2$ whenever $\{v_i, v_j\}=e_k \in \E{\Gr{G}_1}$ (see Figure~\ref{fig:D.E.C}). \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{D.E.C.png} \par\end{centering} \centering{} \caption{\label{fig:D.E.C} The duplication edge corona $\DuplicationEdgeCorona{\CoG{3}}{\CoG{2}}$ (see Definition~\ref{definition: duplication edge corona}).} \end{figure} \end{definition} \begin{definition} \label{definition: closed neighborhood corona} Consider two graphs $\Gr{G}_1$ and $\Gr{G}_2$ with $n_{1}$ and $n_{2}$ vertices, respectively. The \emph{closed neighborhood corona} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\ClosedNeighborhoodCorona{\Gr{G}_1}{\Gr{G}_2}$, is a new graph obtained by creating $n_{1}$ copies of $\Gr{G}_2$.
Each vertex of the $i$-th copy of $\Gr{G}_2$ is then connected to the $i$-th vertex of $\Gr{G}_1$ and to all the neighbors of that vertex in $\Gr{G}_1$ (see Figure~\ref{fig:CNCP}). \end{definition} \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.30\textwidth]{C4andk3.png} \par\end{centering} \caption{\label{fig:CNCP} \centering{The closed neighborhood corona of the length-4 cycle $\CG{4}$ and the triangle $\CoG{3}$, denoted by $\ClosedNeighborhoodCorona{\CG{4}}{\CoG{3}}$ (see Definition~\ref{definition: closed neighborhood corona}).}} \end{figure} \begin{theorem} \cite{AdigaRS2018} \label{theorem: NICS graphs - 1} Let $\Gr{G}_1$ and $\Gr{H}_1$ be $r_1$-regular cospectral graphs, and let $\Gr{G}_2$ and $\Gr{H}_2$ be $r_2$-regular, cospectral, and nonisomorphic (NICS) graphs. Then, the following holds: \begin{itemize} \item The duplication corona graphs $\DuplicationCorona{\Gr{G}_1}{\Gr{G}_2}$ and $\DuplicationCorona{\Gr{H}_1}{\Gr{H}_2}$ are $\{ \A, \LM, \Q \}$-NICS irregular graphs. \item The duplication neighborhood coronas $\Gr{G}_1 \boxbar \Gr{G}_2$ and $\Gr{H}_1 \boxbar \Gr{H}_2$ are $\{ \A, \LM, \Q \}$-NICS irregular graphs. \item The duplication edge coronas $\DuplicationEdgeCorona{\Gr{G}_1}{\Gr{G}_2}$ and $\DuplicationEdgeCorona{\Gr{H}_1}{\Gr{H}_2}$ are $\{ \A, \LM, \Q \}$-NICS irregular graphs. \end{itemize} \end{theorem} \begin{question} \label{question: NICS graphs - 1} Are the irregular graphs in Theorem~\ref{theorem: NICS graphs - 1} also cospectral with respect to the normalized Laplacian matrix? \end{question} \begin{theorem} \label{theorem: NICS graphs - 2} \cite{SonarS2024} Let $\Gr{G}_1$ and $\Gr{G}_2$ be cospectral, nonisomorphic, regular graphs, and let $\Gr{H}$ be an arbitrary graph. Then, the following holds: \begin{itemize} \item The closed neighborhood coronas $\ClosedNeighborhoodCorona{\Gr{G}_1}{\Gr{H}}$ and $\ClosedNeighborhoodCorona{\Gr{G}_2}{\Gr{H}}$ are $\{ \A, \LM, \Q \}$-NICS irregular graphs.
\item The closed neighborhood coronas $\ClosedNeighborhoodCorona{\Gr{H}}{\Gr{G}_{1}}$ and $\ClosedNeighborhoodCorona{\Gr{H}}{\Gr{G}_{2}}$ are $\{ \A, \LM, \Q \}$-NICS irregular graphs. \end{itemize} \end{theorem} \begin{question} \label{question: NICS graphs - 2} Are the irregular graphs in Theorem~\ref{theorem: NICS graphs - 2} also cospectral with respect to the normalized Laplacian matrix? \end{question} \subsection{Graph constructions based on the subdivision and bipartite incidence graphs} \label{subsection: Graphs constructions based on the subdivision and bipartite incidence graphs} \begin{definition} \cite{CvetkovicRS2010} \label{definition: subdivision graph} Let $\Gr{G}$ be a graph. The \emph{subdivision graph} of $\Gr{G}$, denoted by $\SubdivisionGraph{\Gr{G}}$, is obtained from $\Gr{G}$ by inserting a new vertex into every edge of $\Gr{G}$; that is, each edge is split into two edges connected in series through its new vertex (see Figure~\ref{fig:subdivition}). \begin{figure}[hbt] \centering{}\includegraphics[width=0.30\textwidth]{S_C4.png} \caption{\label{fig:subdivition} \centering{The subdivision graph of a $4$-length cycle, denoted by $\SubdivisionGraph{\CG{4}}$, which is an $8$-length cycle $\CG{8}$ (see Definition~\ref{definition: subdivision graph}).}} \end{figure} \end{definition} \begin{definition} \cite{PisankiS13} \label{definition: bipartite incidence graph} Let $\Gr{G}$ be a graph. The \emph{bipartite incidence graph} of $\Gr{G}$, denoted by $\BipartiteIncidenceGraph{\Gr{G}}$, is constructed as follows: for each edge $e \in \E{\Gr{G}}$, a new vertex $u_{e}$ is added to the vertex set of $\Gr{G}$, and $u_e$ is made adjacent to both endpoints of $e$ in $\Gr{G}$ (see Figure~\ref{fig:R(C4)-1}).
\begin{figure}[H] \begin{centering} \includegraphics[width=0.30\textwidth]{R_C4.png} \par\end{centering} \centering{}\caption{\label{fig:R(C4)-1} \centering{The bipartite incidence graph of a length-4 cycle $\CG{4}$, denoted by $\BipartiteIncidenceGraph{\CG{4}}$ (see Definition~\ref{definition: bipartite incidence graph}).}} \end{figure} \end{definition} Let $\Gr{G}_1$ and $\Gr{G}_2$ be two vertex-disjoint graphs with $n_1$ and $n_2$ vertices, and $m_1$ and $m_2$ edges, respectively. Four binary operations on these graphs are defined as follows. \begin{definition} \label{definition: subdivision-vertex-bipartite-vertex join} The \emph{subdivision-vertex-bipartite-vertex join} of graphs $\Gr{G}_1$ and $\Gr{G}_2$, denoted $\SVBVJ{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\SubdivisionGraph{\Gr{G}_1}$ and $\BipartiteIncidenceGraph{\Gr{G}_2}$ by connecting each vertex of $\V{\Gr{G}_1}$ (that is the subset of the original vertices in the vertex set of $\SubdivisionGraph{\Gr{G}_1}$) with every vertex of $\V{\Gr{G}_2}$ (that is the subset of the original vertices in the vertex set of $\BipartiteIncidenceGraph{\Gr{G}_2}$). \end{definition} By Definitions~\ref{definition: subdivision graph}, \ref{definition: bipartite incidence graph}, and~\ref{definition: subdivision-vertex-bipartite-vertex join}, it follows that the graph $\SVBVJ{\Gr{G}_1}{\Gr{G}_2}$ has $n_1+n_2+m_1+m_2$ vertices and $n_1 n_2+2m_1+3m_2$ edges. Figure~\ref{fig:subdivision-vertex-bipartite-vertex join} displays the graph $\SVBVJ{\CG{4}}{\PathG{3}}$. \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.40\textwidth]{C_c4_and_R_p3.png} \par\end{centering} \caption{\label{fig:subdivision-vertex-bipartite-vertex join} The graph $\SVBVJ{\CG{4}}{\PathG{3}}$ (see Definition~\ref{definition: subdivision-vertex-bipartite-vertex join}). The black vertices represent the vertices of the length-4 cycle $\CG{4}$ and the vertices of the path $\PathG{3}$. 
The additional vertices in the subdivision graph $\SubdivisionGraph{\CG{4}}$ are the four red vertices located at the bottom of this figure (as shown in Figure~\ref{fig:subdivition}). Similarly, the additional vertices in the bipartite incidence graph of the path $\PathG{3}$ are the two red vertices located at the top of this figure. The edges of the graph include the following: edges connecting each black vertex of $\CG{4}$ to each black vertex of $\PathG{3}$, edges between the black and red vertices at the bottom of the figure (that correspond to the subdivision of $\CG{4}$), and the four (light blue) edges connecting the two top red vertices to the top black vertices of the figure (that correspond to the bipartite incidence graph of $\PathG{3}$).} \end{figure} \begin{definition} \label{definition: subdivision-edge-bipartite-edge join} The \emph{subdivision-edge-bipartite-edge join} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\SEBEJ{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\SubdivisionGraph{\Gr{G}_1}$ and $\BipartiteIncidenceGraph{\Gr{G}_2}$ by connecting each vertex of $\V{\SubdivisionGraph{\Gr{G}_1}} \setminus \V{\Gr{G}_1}$ (that is the subset of the added vertices in the vertex set of $\SubdivisionGraph{\Gr{G}_1}$) with every vertex of $\V{\BipartiteIncidenceGraph{\Gr{G}_2}} \setminus \V{\Gr{G}_2}$ (that is the subset of the added vertices in the vertex set of $\BipartiteIncidenceGraph{\Gr{G}_2}$). \end{definition} By Definitions~\ref{definition: subdivision graph}, \ref{definition: bipartite incidence graph}, and~\ref{definition: subdivision-edge-bipartite-edge join}, it follows that the graph $\SEBEJ{\Gr{G}_1}{\Gr{G}_2}$ has $n_1+n_2+m_1+m_2$ vertices and $m_1 m_2 + 2m_1 + 3m_2$ edges. Figure~\ref{fig:subdivision-edge-R-edge join} displays the graph $\SEBEJ{\CG{4}}{\PathG{3}}$. 
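The vertex and edge counts stated above can be verified computationally. The following Python sketch is our own illustration (it assumes NumPy, and the helper names are ours): it builds $\SubdivisionGraph{\Gr{G}}$ and $\BipartiteIncidenceGraph{\Gr{G}}$ from an adjacency matrix, and then forms a join between chosen vertex subsets of the two graphs:

```python
import numpy as np

def _edges(A):
    """Edge list {i, j} with i < j of a simple graph given by a 0/1 matrix."""
    n = A.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]

def add_edge_vertices(A, keep_original_edges):
    """One new vertex per edge, joined to the edge's two endpoints.
    Dropping the original edges gives the subdivision graph S(G);
    keeping them gives the bipartite incidence graph R(G).
    Returns the new adjacency matrix and the indices of the added vertices."""
    n, E = A.shape[0], _edges(A)
    m = len(E)
    B = np.zeros((n + m, n + m))
    if keep_original_edges:
        B[:n, :n] = A
    for k, (i, j) in enumerate(E):
        B[i, n + k] = B[n + k, i] = 1.0
        B[j, n + k] = B[n + k, j] = 1.0
    return B, list(range(n, n + m))

def join(A1, part1, A2, part2):
    """Disjoint union of the two graphs, plus all edges between the
    vertex subset part1 of the first graph and part2 of the second."""
    n1, n2 = A1.shape[0], A2.shape[0]
    J = np.zeros((n1 + n2, n1 + n2))
    J[:n1, :n1] = A1
    J[n1:, n1:] = A2
    for i in part1:
        for j in part2:
            J[i, n1 + j] = J[n1 + j, i] = 1.0
    return J
```

For $\Gr{G}_1 = \CG{4}$ (with $n_1 = m_1 = 4$) and $\Gr{G}_2 = \PathG{3}$ (with $n_2 = 3$, $m_2 = 2$), joining the original vertex sets yields $\SVBVJ{\CG{4}}{\PathG{3}}$ with $n_1 n_2 + 2m_1 + 3m_2 = 26$ edges, while joining the added vertex sets yields $\SEBEJ{\CG{4}}{\PathG{3}}$ with $m_1 m_2 + 2m_1 + 3m_2 = 22$ edges, both on $n_1 + n_2 + m_1 + m_2 = 13$ vertices.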
\begin{figure}[hbt] \begin{centering} \includegraphics[width=0.40\textwidth]{subdivision-edge-R-edge-join.png} \par\end{centering} \caption{\label{fig:subdivision-edge-R-edge join} The graph $\SEBEJ{\CG{4}}{\PathG{3}}$ (see Definition~\ref{definition: subdivision-edge-bipartite-edge join}). In comparison to Figure~\ref{fig:subdivision-vertex-bipartite-vertex join}, edges connecting each black vertex of $\CG{4}$ to each black vertex of $\PathG{3}$ are deleted and replaced in this figure by edges connecting each red vertex of $\CG{4}$ to each red vertex of $\PathG{3}$.} \end{figure} \begin{definition} \label{definition: subdivision-edge-bipartite-vertex join} The \emph{subdivision-edge-bipartite-vertex join} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\SEBVJ{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\SubdivisionGraph{\Gr{G}_1}$ and $\BipartiteIncidenceGraph{\Gr{G}_2}$ by connecting each vertex of $\V{\SubdivisionGraph{\Gr{G}_1}} \setminus \V{\Gr{G}_1}$ (that is the subset of the added vertices in the vertex set of $\SubdivisionGraph{\Gr{G}_1}$) with every vertex of $\V{\Gr{G}_2}$ (that is the subset of the original vertices in the vertex set of $\BipartiteIncidenceGraph{\Gr{G}_2}$). \end{definition} By Definitions~\ref{definition: subdivision graph}, \ref{definition: bipartite incidence graph}, and~\ref{definition: subdivision-edge-bipartite-vertex join}, it follows that the graph $\SEBVJ{\Gr{G}_1}{\Gr{G}_2}$ has $n_1+n_2+m_1+m_2$ vertices and $m_1 n_2 + 2m_1 + 3m_2$ edges. Figure~\ref{fig:subdivision-edge-R-vertex join} displays the graph $\SEBVJ{\CG{4}}{\PathG{3}}$. \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.40\textwidth]{subdivision-edge-R-vertex-join.png} \par\end{centering} \caption{\label{fig:subdivision-edge-R-vertex join} The graph $\SEBVJ{\CG{4}}{\PathG{3}}$ (see Definition~\ref{definition: subdivision-edge-bipartite-vertex join}). 
In comparison to Figure~\ref{fig:subdivision-vertex-bipartite-vertex join}, edges connecting each black vertex of $\CG{4}$ to each black vertex of $\PathG{3}$ are deleted and replaced in this figure by edges connecting each red vertex of $\CG{4}$ to each black vertex of $\PathG{3}$.} \end{figure} \begin{definition} \label{definition: subdivision-vertex-bipartite-edge join} The \emph{subdivision-vertex-bipartite-edge join} of $\Gr{G}_1$ and $\Gr{G}_2$, denoted by $\SVBEJ{\Gr{G}_1}{\Gr{G}_2}$, is the graph obtained from $\SubdivisionGraph{\Gr{G}_1}$ and $\BipartiteIncidenceGraph{\Gr{G}_2}$ by connecting each vertex of $\V{\Gr{G}_1}$ (that is the subset of the original vertices in the vertex set of $\SubdivisionGraph{\Gr{G}_1}$) with every vertex of $\V{\BipartiteIncidenceGraph{\Gr{G}_2}} \setminus \V{\Gr{G}_2}$ (that is the subset of the added vertices in the vertex set of $\BipartiteIncidenceGraph{\Gr{G}_2}$). \end{definition} By Definitions~\ref{definition: subdivision graph}, \ref{definition: bipartite incidence graph}, and~\ref{definition: subdivision-vertex-bipartite-edge join}, it follows that the graph $\SVBEJ{\Gr{G}_1}{\Gr{G}_2}$ has $n_1+n_2+m_1+m_2$ vertices and $n_1 m_2 + 2m_1 + 3m_2$ edges. Figure~\ref{fig:subdivision-vertex-R-edge join} displays the graph $\SVBEJ{\CG{4}}{\PathG{3}}$. \begin{figure}[hbt] \begin{centering} \includegraphics[width=0.40\textwidth]{subdivision-vertex-R-edge-join.png} \par\end{centering} \caption{\label{fig:subdivision-vertex-R-edge join} The graph $\SVBEJ{\CG{4}}{\PathG{3}}$ (see Definition~\ref{definition: subdivision-vertex-bipartite-edge join}). 
In comparison to Figure~\ref{fig:subdivision-vertex-bipartite-vertex join}, edges connecting each black vertex of $\CG{4}$ to each black vertex of $\PathG{3}$ are deleted and replaced in this figure by edges connecting each black vertex of $\CG{4}$ to each red vertex of $\PathG{3}$.} \end{figure} We next present the main result of this subsection, which motivates the four binary graph operations introduced in Definitions~\ref{definition: subdivision-vertex-bipartite-vertex join}--\ref{definition: subdivision-vertex-bipartite-edge join}. \begin{theorem} \cite{DasP2013} \label{theorem: NICS - DasP2013} Let $\Gr{G}_i$ and $\Gr{H}_i$, where $i \in \{1,2\}$, be regular graphs, where $\Gr{G}_1$ need not be distinct from $\Gr{H}_1$. If $\Gr{G}_1$ and $\Gr{H}_1$ are $\A$-cospectral, and $\Gr{G}_2$ and $\Gr{H}_2$ are $\A$-cospectral, then \begin{itemize} \item $\SVBVJ{\Gr{G}_1}{\Gr{G}_2}$ and $\SVBVJ{\Gr{H}_1}{\Gr{H}_2}$ are irregular $\{ \A, \LM, {\bf{\mathcal{L}}} \} $-NICS graphs. \item $\SEBEJ{\Gr{G}_1}{\Gr{G}_2}$ and $\SEBEJ{\Gr{H}_1}{\Gr{H}_2}$ are irregular $\{ \A, \LM, {\bf{\mathcal{L}}} \} $-NICS graphs. \item $\SEBVJ{\Gr{G}_1}{\Gr{G}_2}$ and $\SEBVJ{\Gr{H}_1}{\Gr{H}_2}$ are irregular $\{ \A, \LM, {\bf{\mathcal{L}}} \} $-NICS graphs. \item $\SVBEJ{\Gr{G}_1}{\Gr{G}_2}$ and $\SVBEJ{\Gr{H}_1}{\Gr{H}_2}$ are irregular $\{ \A, \LM, {\bf{\mathcal{L}}} \} $-NICS graphs. \end{itemize} \end{theorem} In light of Theorem~\ref{theorem: NICS - DasP2013}, the following question naturally arises. \begin{question} \label{question: NICS - DasP2013} Are the graphs in Theorem~\ref{theorem: NICS - DasP2013} also cospectral with respect to the signless Laplacian matrix (i.e., $\Q$-cospectral)?
\end{question} \subsection{Connected irregular NICS graphs} \label{subsection: Connected irregular NICS graphs} \noindent The (joint) cospectrality of regular graphs with respect to their adjacency, Laplacian, signless Laplacian, and normalized Laplacian matrices can be asserted by verifying their cospectrality with respect to only one of these matrices. In other words, regular graphs are $X$-cospectral for some $X \in \{\A, \LM, \Q, {\bf{\mathcal{L}}} \}$ if and only if they are cospectral with respect to all these matrices (see Proposition~\ref{proposition: regular graphs cospectrality}). For irregular graphs, this does not hold in general. Following \cite{Butler2010}, it is natural to ask the following question: \begin{question} \label{question: Butler2010} Are there pairs of irregular graphs that are $\{\A,\LM,\Q,{\bf{\mathcal{L}}}\}$-NICS, i.e., $X$-NICS with respect to every $X \in \{\A, \LM, \Q, {\bf{\mathcal{L}}}\}$? \end{question} This question remained open until two coauthors of this paper recently proposed a method for constructing pairs of irregular graphs that are $X$-cospectral with respect to every $X \in \{\A, \LM, \Q, {\bf{\mathcal{L}}}\}$, providing explicit constructions \cite{HamudB24}. Building on that work, another coauthor of this paper demonstrated in \cite{Sason2024} that for every even integer $n \geq 14$, there exist two connected, irregular $\{\A,\LM,\Q,{\bf{\mathcal{L}}}\}$-NICS graphs on $n$ vertices with identical independence, clique, and chromatic numbers, yet distinct Lov{\'a}sz $\vartheta$-functions. We now present the preliminary definitions required to outline the relevant results in \cite{HamudB24} and \cite{Sason2024}, and the construction of such cospectral irregular $X$-NICS graphs for all $X \in \{\A, \LM, \Q, {\bf{\mathcal{L}}}\}$. 
\begin{definition}[Neighbors splitting join of graphs] \label{definition: neighbors splitting join of graphs} \cite{LuMZ2023} Let $\Gr{G}$ and $\Gr{H}$ be graphs with disjoint vertex sets, and let $\V{\Gr{G}} = \{v_1, \ldots, v_n\}$. The {\em neighbors splitting (NS) join} of $\Gr{G}$ and $\Gr{H}$ is obtained by adding vertices $v'_1, \ldots, v'_n$ to the vertex set of $\Gr{G} \vee \Gr{H}$ and connecting $v'_i$ to $v_j$ if and only if $\{v_i, v_j\} \in \E{\Gr{G}}$. The NS join of $\Gr{G}$ and $\Gr{H}$ is denoted by $\Gr{G} \NS \Gr{H}$. \end{definition} \begin{definition}[Nonneighbors splitting join of graphs \cite{Hamud23,HamudB24}] \label{definition: nonneighbors splitting join of graphs} Let $\Gr{G}$ and $\Gr{H}$ be graphs with disjoint vertex sets, and let $\V{\Gr{G}} = \{v_1, \ldots, v_n\}$. The {\em nonneighbors splitting (NNS) join} of $\Gr{G}$ and $\Gr{H}$ is obtained by adding vertices $v'_1, \ldots, v'_n$ to the vertex set of $\Gr{G} \vee \Gr{H}$ and connecting $v'_i$ to $v_j$, with $i \neq j$, if and only if $\{v_i, v_j\} \not\in \E{\Gr{G}}$. The NNS join of $\Gr{G}$ and $\Gr{H}$ is denoted by $\Gr{G} \NNS \Gr{H}$. \end{definition} \begin{remark} In general, $\Gr{G} \NS \Gr{H} \not\cong \Gr{H} \NS \Gr{G}$ and $\Gr{G} \NNS \Gr{H} \not\cong \Gr{H} \NNS \Gr{G}$ (unless $\Gr{G} \cong \Gr{H}$). \end{remark} \begin{example}[NS and NNS join of graphs \cite{HamudB24}] \label{example: NS and NNS Join of Graphs} Figure~\ref{fig:BermanH23-Fig. 2.2} shows the NS and NNS joins of the path graphs $\PathG{4}$ and $\PathG{2}$, denoted by $\PathG{4} \NS \PathG{2}$ and $\PathG{4} \NNS \PathG{2}$, respectively. \vspace*{-0.1cm} \begin{figure}[H] \centering \includegraphics[width=11cm]{Hamud_and_Berman_Figure0.png} \caption{The neighbors splitting (NS) and nonneighbors splitting (NNS) joins of the path graphs $\PathG{4}$ and $\PathG{2}$ are depicted, respectively, on the left and right-hand sides of this figure. 
The NS and NNS joins of graphs are, respectively, denoted by $\PathG{4} \NS \PathG{2}$ and $\PathG{4} \NNS \PathG{2}$ \cite{HamudB24}.}\label{fig:BermanH23-Fig. 2.2} \end{figure} \end{example} \begin{theorem}[Irregular $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS graphs] \label{theorem: Berman and Hamud, 2023} Let $\Gr{G}_1$ and $\Gr{H}_1$ be regular and cospectral graphs, and let $\Gr{G}_2$ and $\Gr{H}_2$ be regular, nonisomorphic, and cospectral (NICS) graphs. Then, the following statements hold: \begin{enumerate}[(1)] \item \label{Item 1: Berman and Hamud, 2023} The NS-join graphs $\Gr{G}_1 \NS \Gr{G}_2$ and $\Gr{H}_1 \NS \Gr{H}_2$ are irregular $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS graphs. \item \label{Item 2: Berman and Hamud, 2023} The NNS join graphs $\Gr{G}_1 \NNS \Gr{G}_2$ and $\Gr{H}_1 \NNS \Gr{H}_2$ are irregular $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS graphs. \end{enumerate} \end{theorem} The proof of Theorem~\ref{theorem: Berman and Hamud, 2023} is provided in \cite{Hamud23,HamudB24}, and it relies heavily on the notion of the Schur complement (see Theorem~\ref{theorem: Schur complement}). The interested reader is referred to the recently published paper \cite{HamudB24} for further details. Relying on Theorem~\ref{theorem: Berman and Hamud, 2023}, the following result is stated and proved in \cite{Sason2024}. \begin{theorem}[On irregular NICS graphs] \label{theorem: Sason - existence of NICS graphs with distinct theta functions} For every even integer $n \geq 14$, there exist two connected, irregular $\{\A, \LM, \Q, \bf{\mathcal{L}}\}$-NICS graphs on $n$ vertices with identical independence, clique, and chromatic numbers, yet their Lov\'{a}sz $\vartheta$-functions are distinct. \end{theorem} That result is in fact strengthened in Section~4.2 of \cite{Sason2024}, and the interested reader is referred to that paper for further details. 
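The NS and NNS joins of Definitions~\ref{definition: neighbors splitting join of graphs} and~\ref{definition: nonneighbors splitting join of graphs} are straightforward to implement. The following Python sketch is illustrative only (it is not code from \cite{HamudB24}); it represents a graph as a pair of a vertex list and a set of unordered edges, and builds the two joins for the graphs of Example~\ref{example: NS and NNS Join of Graphs}.

```python
# Illustrative sketch of the NS and NNS joins (plain Python, no dependencies).
# A graph is a pair (vertex list, set of frozenset edges); vertex names are ours.

def join(G, H):
    """G v H: disjoint union of G and H plus all edges between V(G) and V(H)."""
    VG, EG = G
    VH, EH = H
    edges = set(EG) | set(EH) | {frozenset((u, v)) for u in VG for v in VH}
    return list(VG) + list(VH), edges

def ns_join(G, H):
    """NS join: add a copy v' of each vertex v of G, joined to the
    G-neighbors of v inside G v H."""
    VG, EG = G
    V, E = join(G, H)
    for v in VG:
        V.append((v, "prime"))
        for u in VG:
            if frozenset((u, v)) in EG:
                E.add(frozenset(((v, "prime"), u)))
    return V, E

def nns_join(G, H):
    """NNS join: v' is joined to the G-nonneighbors u of v (u != v)."""
    VG, EG = G
    V, E = join(G, H)
    for v in VG:
        V.append((v, "prime"))
        for u in VG:
            if u != v and frozenset((u, v)) not in EG:
                E.add(frozenset(((v, "prime"), u)))
    return V, E

# Example from the figure: P4 NS P2 and P4 NNS P2.
P4 = ([1, 2, 3, 4], {frozenset((1, 2)), frozenset((2, 3)), frozenset((3, 4))})
P2 = (["a", "b"], {frozenset(("a", "b"))})

V1, E1 = ns_join(P4, P2)
V2, E2 = nns_join(P4, P2)
print(len(V1), len(E1))   # 10 18
print(len(V2), len(E2))   # 10 18
```

In this small example, both $\PathG{4} \NS \PathG{2}$ and $\PathG{4} \NNS \PathG{2}$ have $10$ vertices and, as it happens, $18$ edges each.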
The proof of Theorem~\ref{theorem: Sason - existence of NICS graphs with distinct theta functions} is also constructive, providing explicit such graphs. \section{Open questions and outlook} \label{section: summary and outlook} We conclude this paper by highlighting some of the most significant open questions in the fascinating area of research related to cospectral nonisomorphic graphs and graphs determined by their spectrum, leaving them as topics for future study. \subsection{Haemers' conjecture} \label{subsection: Haemers' conjecture} Haemers' conjecture \cite{Haemers2016,Haemers2024}, a prominent topic in spectral graph theory, posits that almost all graphs are uniquely determined by their adjacency spectrum. In other words, it suggests that the probability that two nonisomorphic graphs on a large number of vertices share the same adjacency spectrum is negligible. The conjecture has inspired extensive research, including studies on specific graph families, cospectrality, and algebraic graph invariants, contributing to deeper insights into the relationship between graph structure and eigenvalues. Haemers' conjecture is stated formally as follows. \begin{definition} \label{definition: Haemers' conjecture} For $n \in \mathbb{N}$, let $I(n)$ be the number of distinct graphs on $n$ vertices, up to isomorphism. For $\mathcal{X} \subseteq \Gmats$, let $\alpha_{\mathcal{X}}(n)$ be the number of $\mathcal{X}$-DS graphs on $n$ vertices, up to isomorphism, and (for simplicity of notation) let $\alpha(n) \triangleq \alpha_{\{ A \}}(n)$. \end{definition} \begin{conjecture} \label{conjecture: Haemers' conjecture} \cite{Haemers2016} For sufficiently large $n$, almost all graphs are DS, i.e., \begin{align} \label{eq: Heamers' conjecture} \lim_{n \rightarrow \infty} \frac{\alpha(n)}{I(n)} = 1.
\end{align} \end{conjecture} Several results lend support to this conjecture \cite{Haemers2016, KovalK2024, WangW2024}, but a complete proof in its full generality remains elusive. By \cite{GodsilR2001}, the number of graphs on $n$ vertices, up to isomorphism, is \begin{align} I(n) = \bigl( 1 - \epsilon(n) \bigr) \, \frac{2^{n(n-1)/2}}{n!}, \end{align} where $\lim_{n \rightarrow \infty} \epsilon(n) = 0$. By Stirling's approximation for $n!$ and straightforward algebra, it can be verified that \begin{align} I(n) = 2^{\frac{n(n-1)}{2} \, \bigl(1 - \epsilon(n)\bigr)}, \end{align} where (with a slight abuse of notation) $\epsilon(n)$ denotes another function that vanishes as $n \rightarrow \infty$. It is shown in \cite{KovalK2024} that the number of DS graphs on $n$ vertices is at least $e^{cn}$ for some constant $c>0$ (see also the discussion in Section~\ref{subsection: Nice graphs}). In light of Remark~\ref{remark: X,Y cospectrality}, Conjecture~\ref{conjecture: Haemers' conjecture} leads to the following question. \begin{question} \label{question: generalized Haemers' conjecture} For what minimal subset $\mathcal{X} \subset \Gmats$ (if any) does the limit \begin{align} \label{eq: generalized Haemers' conjecture} \lim_{n\rightarrow \infty} \frac{\alpha_{\mathcal{X}}(n)}{I(n)} = 1 \end{align} hold? \end{question} Computational results also lend some support to Haemers' conjecture. Approximately $80\%$ of the graphs on $12$ vertices are DS \cite{BrouwerS2009}. A new class of graphs is defined in \cite{WangW2024}, offering an algorithmic method for finding all the $\{\A,\overline{\A}\}$-cospectral mates of a graph $\Gr{G}$ in this class (i.e., graphs that are $\{\A,\overline{\A}\}$-cospectral with $\Gr{G}$). Using this algorithm, the authors of \cite{WangW2024} found that, out of $10{,}000$ graphs on $50$ vertices chosen uniformly at random from this class, at least $99.5\%$ were $\{\A, \overline{\A}\}$-DS.
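The asymptotic formula for $I(n)$ above is easy to probe numerically. The sketch below (standard-library Python only) compares $2^{n(n-1)/2}/n!$ with the exact numbers of graphs on $n \leq 8$ vertices up to isomorphism, taken from OEIS sequence A000088; the ratio tends to $1$ only slowly.

```python
# Compare the asymptotic count 2^{n(n-1)/2} / n! with the exact number of
# graphs on n vertices up to isomorphism (exact values: OEIS A000088).
from math import factorial

exact = {1: 1, 2: 2, 3: 4, 4: 11, 5: 34, 6: 156, 7: 1044, 8: 12346}

for n, I_n in exact.items():
    approx = 2 ** (n * (n - 1) // 2) / factorial(n)
    # print n, the exact count, the asymptotic estimate, and their ratio
    print(n, I_n, round(approx, 1), round(approx / I_n, 3))
```

The ratio grows towards $1$ very slowly (it is still only about $0.54$ at $n=8$), consistent with the vanishing correction term $\epsilon(n)$ in the formula.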
\subsection{DS properties of structured graphs} \label{subsection: DS properties of structured graphs} This paper explores various structures of graph families determined by their spectrum with respect to one or more of their associated matrices, as well as the related problem of constructing pairs of nonisomorphic graphs that are cospectral with respect to some or all of these matrices. Several questions are interspersed throughout the paper (see Questions~\ref{question: DS SRG}, \ref{question: NICS graphs - 1}, \ref{question: NICS graphs - 2}, \ref{question: NICS - DasP2013}, and~\ref{question: Butler2010}), which remain open for further study. In addition to serving as a survey on spectral graph determination, this paper suggests a new alternative proof of Theorem~3.3 in \cite{MaRen2010}, asserting that all Tur\'{a}n graphs are determined by their adjacency spectrum (see Theorem~\ref{theorem: Turan's graph is DS}). This proof is based on a derivation of the adjacency spectrum of the family of Tur\'{a}n graphs (see Theorem~\ref{theorem: A-spectrum of Turan graph}). Since these graphs are generally bi-regular multipartite graphs (i.e., their vertices have only two possible degrees, which in this case are consecutive integers), their determination by the adjacency spectrum does not necessarily imply that they are determined by the spectra of other associated matrices, such as the Laplacian, signless Laplacian, or normalized Laplacian matrices. Determining whether this holds is an open question. The distance matrix of a connected graph is the symmetric matrix whose rows and columns are indexed by the graph vertices, and whose entries are equal to the pairwise distances between the corresponding vertices.
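The distance matrix of a connected graph can be computed by a breadth-first search from every vertex, and its eigenvalues then give the distance spectrum discussed next. The following is a minimal sketch (assuming numpy; the helper function is ours, not from the paper), illustrated on the $5$-cycle.

```python
# Distance matrix of a connected graph via BFS from every vertex,
# and its eigenvalues (a minimal illustrative sketch, assuming numpy).
from collections import deque
import numpy as np

def distance_matrix(adj):
    """adj: dict mapping each vertex to an iterable of its neighbors.
    Returns (vertex order, distance matrix D)."""
    order = sorted(adj)
    idx = {v: i for i, v in enumerate(order)}
    D = np.zeros((len(order), len(order)), dtype=int)
    for s in order:
        dist = {s: 0}
        q = deque([s])
        while q:                      # standard BFS from the source s
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            D[idx[s], idx[v]] = d
    return order, D

# Example: the 5-cycle C5; each row of D sums to 0+1+2+2+1 = 6.
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
_, D = distance_matrix(C5)
spec = np.sort(np.linalg.eigvalsh(D))[::-1]
print(np.round(spec, 4))   # largest distance eigenvalue of C5 is 6
```

Since the distance matrix of a vertex-transitive graph has constant row sums, its Perron eigenvalue equals that row sum ($6$ for $\CycleG{5}$), and the trace (hence the eigenvalue sum) is zero.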
The distance spectrum is the multiset of eigenvalues of the distance matrix, and its characterization has been a subject of fruitful research (see \cite{AouchicheH14,HuiqiuJJY21} for comprehensive surveys on the distance spectra of graphs, and \cite{Banerjee23,HogbenR22} on the spectra of graphs with respect to variants of the distance matrix). Nonisomorphic graphs may share an identical adjacency spectrum, as well as an identical distance spectrum, and are therefore determined by neither their adjacency nor their distance spectrum. This holds, e.g., for the Shrikhande graph and the line graph of the complete bipartite graph $\CoBG{4}{4}$, which are nonisomorphic strongly regular graphs with identical parameters $(16,6,2,2)$, sharing an identical $\A$-spectrum given by $\{6, [2]^6, [-2]^6\}$ and an identical distance spectrum given by $\{24, [0]^9, [-4]^6\}$. There exist, however, graphs that are determined by their distance spectrum but are not determined by their adjacency spectrum. In this context, \cite{JinZ2014} proves that complete multipartite graphs are uniquely determined by their distance spectra (this result confirms a conjecture proposed in \cite{LinHW2013} and extends the special case established there for complete bipartite graphs). A Tur\'{a}n graph is, in particular, determined by its distance spectrum \cite{JinZ2014}, as well as by its adjacency spectrum (see Theorem~\ref{theorem: Turan's graph is DS} here). On the other hand, while complete multipartite graphs are not generally determined by their adjacency spectra (e.g., for complete bipartite graphs, see Theorem~\ref{thm:when CoBG is DS?}), they are necessarily determined by their distance spectra \cite{JinZ2014}. Another family of graphs that are determined by their distance spectrum but not by their adjacency spectrum, namely the $d$-cube graphs, is provided in \cite{KoolenHI2016,KoolenHI2016b}.
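For the $d$-cube graphs just mentioned, the distance between two vertices is the Hamming distance between their binary $d$-tuples, so the distance matrix can be written down directly, and the fact that it has exactly three distinct eigenvalues can be checked numerically for small $d$. A quick sketch (assuming numpy; illustrative code, not from \cite{KoolenHI2016,KoolenHI2016b}):

```python
# Numerical check that the d-cube has exactly three distinct distance
# eigenvalues (a sketch, assuming numpy; the distance between two vertices
# of the hypercube is the Hamming distance between their 0/1 d-tuples).
import numpy as np
from itertools import product

def hypercube_distance_spectrum(d):
    verts = list(product((0, 1), repeat=d))
    D = np.array([[sum(a != b for a, b in zip(u, v)) for v in verts]
                  for u in verts], dtype=float)
    return np.linalg.eigvalsh(D)

for d in (3, 4, 5):
    spec = hypercube_distance_spectrum(d)
    distinct = sorted(set(np.round(spec, 6)))
    print(d, distinct)   # three values: -2^{d-1}, 0, and d * 2^{d-1}
```

The three values observed numerically are $-2^{d-1}$ (multiplicity $d$), $0$ (multiplicity $2^d-d-1$), and the Perron root $d\,2^{d-1}$; note that they sum (with multiplicities) to the zero trace of the distance matrix.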
These graphs have their vertices represented by binary $d$-tuples, where any two vertices are adjacent if and only if their corresponding binary $d$-tuples differ in exactly one coordinate; it is also shown that these graphs have exactly three distinct distance eigenvalues. The question of which multipartite graphs, or graphs in general, are determined by their distance spectra remains open. Another newly obtained proof presented in this paper refers to a necessary and sufficient condition in \cite{MaRen2010} for complete bipartite graphs to be $\A$-DS (see Theorem~\ref{thm:when CoBG is DS?} and Remark~\ref{remark: complete bipartite graphs} here). These graphs are also bi-regular, and their two possible vertex degrees can differ by more than one. Both of these newly obtained proofs, discussed in Section~\ref{section: special families of graphs}, provide insights into the broader question of which (structured) multipartite graphs are determined by their adjacency spectrum or, more generally, by the spectra of some of their associated matrices. Even if Haemers' conjecture is eventually proved in its full generality, it remains surprising when a new family of structured graphs is shown to be DS (or $X$-DS, more generally). This is because the spectra of certain structured graphs, such as strongly regular graphs and trees, often fail to uniquely determine them \cite{vanDamH03,vanDamH09}. This stark contrast between the fact that almost all random graphs of high order are likely to be DS and the existence of interesting structured graphs that are not DS has significant implications. In addition to the questions posed earlier in this paper, we raise the following additional concrete question: \begin{question} {\em By Theorem~\ref{theorem: Berman's generalized friendship graph}, the family of generalized friendship graphs $F_{p,q}$ is ${\bf{\mathcal{L}}}$-DS.
Are these graphs also DS with respect to their other associated matrices?} \end{question} We speculate that the DS property of a graph correlates, to some extent, with the number of symmetries that the graph possesses, and we hypothesize that the size of the automorphism group of a graph can partially indicate whether it is DS. A justification for this claim is that the automorphism group of a graph reflects its symmetries, which can influence the eigenvalues of its adjacency matrix. Highly symmetric graphs (i.e., those with large automorphism groups) often exhibit eigenvalue multiplicities and patterns that are shared by other nonisomorphic graphs, making such graphs less likely to be DS. Conversely, graphs with trivial automorphism groups lack such symmetries and may have eigenvalues that uniquely determine their structure, increasing the likelihood that they are DS. As noted in \cite{GodsilR2001}, almost all graphs have trivial automorphism groups. This observation aligns with the conjecture that most graphs are DS, as the absence of symmetry reduces the likelihood of two nonisomorphic graphs sharing the same spectrum. It is noted, however, that the DS property of graphs is not solely dictated by their automorphism groups. Specifically, a graph with a large automorphism group can still be DS if its eigenvalues uniquely encode its structure (see Section~\ref{subsection: strongly regular graphs}). In contrast to these DS graphs, other graphs with trivial automorphism groups are not guaranteed to be DS; in such cases, the spectrum might not capture enough structural information to uniquely determine the graph. Graphs with a small number of distinct eigenvalues typically seem to be the hardest graphs to distinguish by their spectrum.
As noted in \cite{vanDamO11}, it seems that most graphs with few eigenvalues (e.g., most of the strongly regular graphs) are not determined by their spectrum, which was one of the motivations for the work in \cite{vanDamO11} on graphs whose normalized Laplacian has three eigenvalues. To conclude, the size of the automorphism group of a graph can provide some indication of whether it is DS, but it is not a definitive criterion. While large automorphism groups often correlate with the graph not being DS due to shared eigenvalues among nonisomorphic graphs, this is not an absolute rule. Therefore, the claim should be understood as a general observation that requires qualification to account for exceptions. \vspace*{-0.3cm} \begin{thebibliography}{999} \bibitem{BrouwerH2011} A. E. Brouwer and W. H. Haemers, {\em Spectra of Graphs}, Springer Science \& Business Media, 2011. \doilink{https://doi.org/10.1007/978-1-4614-1939-6} \bibitem{Chung1997} F. R. K. Chung, {\em Spectral Graph Theory}, American Mathematical Society, 1997. \doilink{https://doi.org/10.1090/cbms/092} \bibitem{CvetkovicDS1995} D. M. Cvetkovi\'c, M. Doob, and H. Sachs, {\em Spectra of Graphs: Theory and Applications}, Johann Ambrosius Barth Verlag, third edition, 1995. \bibitem{CvetkovicRS2010} D. M. Cvetkovi\'c, P. Rowlinson, and S. Simi\'c, {\em An Introduction to the Theory of Graph Spectra}, Cambridge University Press, 2009. \doilink{https://doi.org/10.1017/CBO9780511801518} \bibitem{GodsilR2001} C. Godsil and G. Royle, {\em Algebraic Graph Theory}, Graduate Texts in Mathematics, vol.~207, Springer, New York, 2001. \doilink{https://doi.org/10.1007/978-1-4613-0163-9} \bibitem{GunthardP56} H. H. G\"{u}nthard and H. Primas, ``Zusammenhang von Graphentheorie und MO-Theorie von Molekeln mit Systemen konjugierter Bindungen,'' {\em Helv. Chim. Acta}, vol.~39, no.~6, pp.~1645--1653, 1956. \doilink{https://doi.org/10.1002/hlca.19560390623} \bibitem{Haemers2016} W. H.
Haemers, ``Are almost all graphs determined by their spectrum?,'' {\em Notices of the South African Mathematical Society}, vol.~47, no.~1, pp.~42--45, 2016. Available at \url{https://www.researchgate.net/publication/304747396} \bibitem{Haemers2024} W. H. Haemers, ``Proving spectral uniqueness of graphs,'' plenary talk in {\em Combinatorics 2024}, Carovigno, Italy, June 2024. \bibitem{KovalK2024} I. Koval and M. Kwan, ``Exponentially many graphs are determined by their spectrum,'' {\em Quarterly Journal of Mathematics}, vol.~75, no.~3, pp.~869--899, September 2024. \doilink{https://doi.org/10.1093/qmath/haae030} \bibitem{WangW2024} W. Wang and W. Wang, ``Haemers' conjecture: an algorithmic perspective,'' {\em Experimental Mathematics}, pp.~1--28, April 2024. \doilink{https://doi.org/10.1080/10586458.2024.2337229} \bibitem{Schwenk1973} A. J. Schwenk, ``Almost all trees are cospectral,'' in F. Harary (Ed.), {\em New Directions in the Theory of Graphs}, Academic Press, New York, pp.~275--307, 1973. \bibitem{vanDamH03} E. R. van Dam and W. H. Haemers, ``Which graphs are determined by their spectrum?,'' {\em Linear Algebra and its Applications}, vol.~373, pp.~241--272, November 2003. \doilink{https://doi.org/10.1016/S0024-3795(03)00483-X} \bibitem{vanDamH09} E. R. van Dam and W. H. Haemers, ``Developments on spectral characterizations of graphs,'' {\em Discrete Mathematics}, vol.~309, no.~3, pp.~576--586, February 2009. \doilink{http://dx.doi.org/10.1016/j.disc.2008.08.019} \bibitem{BermanCCLZ2018} A. Berman, D. M. Chen, Z. B. Chen, W. Z. Liang, and X. D. Zhang, ``A family of graphs that are determined by their normalized Laplacian spectra,'' {\em Linear Algebra and its Applications}, vol.~548, pp.~66--76, July 2018. \doilink{https://doi.org/10.1016/j.laa.2018.03.001} \bibitem{BuZ2012} C. Bu and J. Zhou, ``Signless Laplacian spectral characterization of the cones over some regular graphs,'' {\em Linear Algebra and its Applications}, vol.~436, no.~9, pp.~3634--3641, 2012.
\doilink{https://doi.org/10.1016/j.laa.2011.12.035} \bibitem{BuZ2012b} C. Bu and J. Zhou, ``Starlike trees whose maximum degree exceed~4 are determined by their $Q$-spectra,'' {\em Linear Algebra and its Applications}, vol.~436, no.~1, pp.~143--151, January~2012. \doilink{https://doi.org/10.1016/j.laa.2011.06.028} \bibitem{Butler2010} S. Butler, ``A note about cospectral graphs for the adjacency and normalized Laplacian matrices,'' {\em Linear and Multilinear Algebra}, vol.~58, no.~3, pp.~387--390, 2010. \doilink{https://doi.org/10.1080/03081080902722741} \bibitem{ButlerJ2011} S. Butler and J. Grout, ``A construction of cospectral graphs for the normalized Laplacian,'' {\em Electronic Journal of Combinatorics}, vol.~18, no.~1, paper~P231, pp.~1--20, December 2011. \doilink{https://doi.org/10.37236/718} \bibitem{Butler2016} S. Butler, ``Algebraic aspects of the normalized Laplacian,'' in {\em Recent Trends in Combinatorics}, pp.~295--315, Springer, 2016. \doilink{https://doi.org/10.1007/978-3-319-24298-9_13} \bibitem{ButlerH2016} S. Butler and K. Heysse, ``A cospectral family of graphs for the normalized Laplacian found by toggling,'' {\em Linear Algebra and its Applications}, vol.~507, pp.~499--512, October 2016. \doilink{https://doi.org/10.1016/j.laa.2016.06.033} \bibitem{CamaraH14} M. Camara and W. H. Haemers, ``Spectral characterizations of almost complete graphs,'' {\em Discrete Applied Mathematics}, vol.~176, pp.~19--23, October 2014. \doilink{https://doi.org/10.1016/j.dam.2013.08.002} \bibitem{DasP2013} A. Das and P. Panigrahi, ``Construction of simultaneous cospectral graphs for adjacency, Laplacian and normalized Laplacian matrices,'' {\em Kragujevac Journal of Mathematics}, vol.~47, no.~6, pp.~947--964, 2023. \doilink{http://dx.doi.org/10.46793/KgJMat2306.947D} \bibitem{DuttaA20} S. Dutta and B. Adhikari, ``Construction of cospectral graphs,'' {\em Journal of Algebraic Combinatorics}, vol.~52, pp.~215--235, September 2020. \doilink{https://doi.org/10.1007/s10801-019-00900-y} \bibitem{GodsilM1982} C.
Godsil and B. McKay, ``Constructing cospectral graphs,'' {\em Aequationes Mathematicae}, vol.~25, pp.~257--268, December 1982. \doilink{https://doi.org/10.1007/BF02189621} \bibitem{HamudB24} S. Hamud and A. Berman, ``New constructions of nonregular cospectral graphs,'' {\em Special Matrices}, vol.~12, pp.~1--21, February 2024. \doilink{https://doi.org/10.1515/spma-2023-0109} \bibitem{HamidzadeK2010} H. Hamidzade and D. Kiani, ``Erratum to `The lollipop graph is determined by its $Q$-spectrum','' {\em Discrete Mathematics}, vol.~310, no.~10--11, p.~1649, June 2010. \doilink{https://doi.org/10.1016/j.disc.2010.01.013} \bibitem{JinZ2014} Y. L. Jin and X. D. Zhang, ``Complete multipartite graphs are determined by their distance spectra,'' {\em Linear Algebra and its Applications}, vol.~448, pp.~285--291, February 2014. \doilink{https://doi.org/10.1016/j.laa.2014.01.029} \bibitem{KannanPW22} M. R. Kannan, S. Pragada, and H. Wankhede, ``On the construction of cospectral nonisomorphic bipartite graphs,'' {\em Discrete Mathematics}, vol.~345, no.~8, pp.~1--8, August 2022. \doilink{https://doi.org/10.1016/j.disc.2022.112916} \bibitem{KoolenHI2016} J. H. Koolen, S. Hayat, and Q. Iqbal, ``Hypercubes are determined by their distance spectra,'' {\em Linear Algebra and its Applications}, vol.~505, pp.~97--108, September 2016. \doilink{http://dx.doi.org/10.1016/j.laa.2016.04.036} \bibitem{KoolenHI2016b} J. H. Koolen, S. Hayat, and Q. Iqbal, ``Corrigendum to `Hypercubes are determined by their distance spectra','' {\em Linear Algebra and its Applications}, vol.~506, pp.~628--629, October 2016. \doilink{http://dx.doi.org/10.1016/j.laa.2016.06.043} \bibitem{KrupnikB2024} N. Krupnik and A. Berman, ``The graphs of pyramids are determined by their spectrum,'' {\em Linear Algebra and its Applications}, 2024. \doilink{https://doi.org/10.1016/j.laa.2024.04.029} \bibitem{LinLX2019} H. Lin, X. Liu, and J.
Xue, ``Graphs determined by their $A_\alpha$-spectra,'' {\em Discrete Mathematics}, vol.~342, no.~2, pp.~441--450, February 2019. \doilink{https://doi.org/10.1016/j.disc.2018.10.006} \bibitem{AbdianBTKO21} A. Z. Abdian, L. W. Beineke, K. Thulasiraman, R. T. Khorami, and M. R. Oboudi, ``The spectral determination of the connected multicone graphs,'' {\em AKCE International Journal of Graphs and Combinatorics}, vol.~18, no.~1, pp.~47--52, 2021. \doilink{https://doi.org/10.1080/09728600.2021.1917974} \bibitem{AbiadH2012} A. Abiad and W. H. Haemers, ``Cospectral graphs and regular orthogonal matrices of level~2,'' {\em Electronic Journal of Combinatorics}, vol.~19, no.~3, paper~P13, 2012. \doilink{https://doi.org/10.37236/2383} \bibitem{AbiadBBCGV2022} A. Abiad, B. Brimkov, J. Breen, T. R. Cameron, H. Gupta, and R. Villagran, ``Constructions of cospectral graphs with different zero forcing numbers,'' {\em Electronic Journal of Linear Algebra}, vol.~38, pp.~280--294, May 2022. \doilink{https://doi.org/10.13001/ela.2022.6737} \bibitem{LiuZG2008} X. Liu, Y. Zhang, and X. Gui, ``The multi-fan graphs are determined by their Laplacian spectra,'' {\em Discrete Mathematics}, vol.~308, no.~18, pp.~4267--4271, September 2008. \doilink{https://doi.org/10.1016/j.disc.2007.08.002} \bibitem{MaRen2010} H. Ma and H. Ren, ``On the spectral characterization of the union of complete multipartite graph and some isolated vertices,'' {\em Discrete Mathematics}, vol.~310, pp.~3648--3652, September 2010. \doilink{https://doi.org/10.1016/j.disc.2010.09.004} \bibitem{OboudiAAB2021} M. R. Oboudi, A. Z. Abdian, A. R. Ashrafi, and L. W. Beineke, ``Laplacian spectral determination of path-friendship graphs,'' {\em AKCE International Journal of Graphs and Combinatorics}, vol.~18, no.~1, pp.~33--38, 2021. \doilink{https://doi.org/10.1080/09728600.2021.1917321} \bibitem{OmidiT2007} G. R. Omidi and K.
Tajbakhsh, ``Starlike trees are determined by their Laplacian spectrum,'' {\em Linear Algebra and its Applications}, vol.~422, no.~2--3, pp.~654--658, April 2007. \doilink{https://doi.org/10.1016/j.laa.2006.11.028} \bibitem{OmidiV2010} G. R. Omidi and E. Vatandoost, ``Starlike trees with maximum degree~4 are determined by their signless Laplacian spectra,'' {\em Electronic Journal of Linear Algebra}, vol.~20, pp.~274--290, May 2010. \doilink{https://doi.org/10.13001/1081-3810.1373} \bibitem{Sason2024} I. Sason, ``Observations on graph invariants with the Lov{\'a}sz $\vartheta$-function,'' {\em AIMS Mathematics}, vol.~9, no.~6, pp.~15385--15468, June 2024. \doilink{https://doi.org/10.3934/math.2024747} \bibitem{YeLS2025} J. Ye, M. Liu, and Z. Stani\'{c}, ``Two classes of graphs determined by their signless Laplacian spectrum,'' {\em Linear Algebra and its Applications}, vol.~708, pp.~159--172, March 2025. \doilink{https://doi.org/10.1016/j.laa.2024.10.029} \bibitem{ZhangLY09} Y. Zhang, X. Liu, and X. Yong, ``Which wheel graphs are determined by their Laplacian spectra?'' {\em Computers and Mathematics with Applications}, vol.~58, pp.~1887--1890, November 2009. \doilink{http://dx.doi.org/10.1016/j.camwa.2009.07.028} \bibitem{ZhangLZY09} Y. Zhang, X. Liu, B. Zhang, and X. Yong, ``The lollipop graph is determined by its $Q$-spectrum,'' {\em Discrete Mathematics}, vol.~309, no.~10, pp.~3364--3369, May 2009. \doilink{https://doi.org/10.1016/j.disc.2008.09.052} \bibitem{ZhouBu2012} J. Zhou and C. Bu, ``Laplacian spectral characterization of some graphs obtained by product operation,'' {\em Discrete Mathematics}, vol.~312, pp.~1591--1595, May 2012. \doilink{http://dx.doi.org/10.1016/j.disc.2012.02.002} \bibitem{Schur1917} J. Schur, ``Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind,'' {\em Journal für die reine und angewandte Mathematik}, vol.~147, pp.~205--232, 1917. \bibitem{ParlettB1998} B. N. Parlett, {\em The Symmetric Eigenvalue Problem}, Classics in Applied Mathematics, SIAM, Philadelphia, 1998.
\doilink{http://dx.doi.org/10.1137/1.9781611971163} \bibitem{ShakedBbook19} N. Shaked-Monderer and A. Berman, {\em Copositive and Completely Positive Matrices}, World Scientific Publishing Co. Pte. Ltd., 2021. \doilink{https://doi.org/10.1142/11386} \bibitem{Haemers93} W. Haemers, ``There exists no $(76,21,2,7)$ strongly regular graph,'' in {\em Finite Geometry and Combinatorics}, F. De Clerck and J. Hirschfeld (Eds.), LMS Lecture Notes Series~191, Cambridge University Press, pp.~175--176, 1993. \doilink{https://doi.org/10.1017/CBO9780511526336.018} \bibitem{Lovasz79_IT} L. Lov\'{a}sz, ``On the Shannon capacity of a graph,'' {\em IEEE Transactions on Information Theory}, vol.~25, no.~1, pp.~1--7, January 1979. \doilink{https://doi.org/10.1109/TIT.1979.1055985} \bibitem{BrouwerM22} A. E. Brouwer and H. Van Maldeghem, {\em Strongly Regular Graphs}, Cambridge University Press (Encyclopedia of Mathematics and its Applications, Series Number 182), 2022. \bibitem{Kirchhoff1958} G. Kirchhoff, ``On the solution of the equations obtained from the investigation of the linear distribution of galvanic currents,'' {\em IRE Transactions on Circuit Theory}, vol.~5, no.~1, pp.~4--7, 1958. \doilink{https://doi.org/10.1109/TCT.1958.1086426} \bibitem{Cayley1889} A. Cayley, ``A theorem on trees,'' {\em Quart. J. Pure Appl. Math.}, vol.~23, pp.~376--378, 1889. \bibitem{OliveiraLAK2010} C. S. Oliveira, L. S. de Lima, N. M. M. de Abreu, and S. Kirkland, ``Bounds on the $Q$-spread of a graph,'' {\em Linear Algebra and its Applications}, vol.~432, no.~9, pp.~2342--2351, April 2010. \doilink{https://doi.org/10.1016/j.laa.2009.06.011} \bibitem{Cardoso2008} D. M. Cardoso, D. Cvetkovi\'c, P. Rowlinson, and S. Simi\'c, ``A sharp lower bound for the least eigenvalue of the signless Laplacian of a non-bipartite graph,'' {\em Linear Algebra and its Applications}, vol.~429, no.~11--12, pp.~2770--2780, December 2008. \doilink{https://doi.org/10.1016/j.laa.2008.05.017} \bibitem{ChenH2018} X. Chen and Y.
Hou, ``A sharp lower bound on the least signless Laplacian eigenvalue of a graph,'' {\em Bulletin of the Malaysian Mathematical Sciences Society}, vol.~41, pp.~2011--2018, October 2018. \doilink{https://doi.org/10.1007/s40840-016-0440-1} \bibitem{CvetkovicRS2007} D. M. Cvetkovi\'c, P. Rowlinson, and S. Simi\'c, ``Signless Laplacians of finite graphs,'' {\em Linear Algebra and its Applications}, vol.~423, no.~1, pp.~155--171, May 2007. \doilink{https://doi.org/10.1016/j.laa.2007.01.009} \bibitem{Butler2014} S. Butler, ``A gentle introduction to the normalized Laplacian,'' {\em IMAGE}, no.~53, pp.~19--27, Fall 2014. Available from \url{https://www.stevebutler.org/research/publications} \bibitem{Sason23} I. Sason, ``Observations on Lov\'{a}sz $\vartheta$-function, graph capacity, eigenvalues, and strong products,'' {\em Entropy}, vol.~25, paper~104, pp.~1--40, January 2023. \doilink{https://doi.org/10.3390/e25010104} \bibitem{Erdos1963} P. Erd\H{o}s, ``On a problem in graph theory,'' {\em The Mathematical Gazette}, vol.~47, pp.~220--223, October 1963. \doilink{https://doi.org/10.2307/3613396} \bibitem{AignerZ18} M. Aigner and G. M. Ziegler, {\em Proofs from THE BOOK}, Sixth Edition, Springer, Berlin, Germany, 2018. Available from \url{https://link.springer.com/book/10.1007/978-3-662-57265-8} \bibitem{DoobH02} M. Doob and W. H. Haemers, ``The complement of the path is determined by its spectrum,'' {\em Linear Algebra and its Applications}, vol.~356, no.~1--3, pp.~57--65, November 2002. \doilink{https://doi.org/10.1016/S0024-3795(02)00323-3} \bibitem{CioabaHVW2015} S. M. Cioab\u{a}, W. H. Haemers, J. R. Vermette, and W. Wong, ``The graphs with all but two eigenvalues equal to $\pm 1$,'' {\em Journal of Algebraic Combinatorics}, vol.~41, no.~3, pp.~887--897, May 2015. \doilink{https://doi.org/10.1007/s10801-014-0557-y} \bibitem{LuLYY09} P. Lu, X. Liu, Z. Yuan, and X.
Yong, ``Spectral characterizations of sandglass graphs,'' {\em Applied Mathematics Letters}, vol.~22, no.~8, pp.~1225--1230, August 2009. \doilink{http://dx.doi.org/10.1016/j.aml.2009.01.050} \bibitem{WangXu06} W. Wang and C. X. Xu, ``A sufficient condition for a family of graphs being determined by their generalized spectra,'' {\em European Journal of Combinatorics}, vol.~27, no.~6, pp.~826--840, August 2006. \doilink{https://doi.org/10.1016/j.ejc.2005.05.004} \bibitem{Wang13} W. Wang, ``Generalized spectral characterization of graphs revisited,'' {\em The Electronic Journal of Combinatorics}, vol.~20, no.~4, paper~P4, pp.~1--13, October 2013. \doilink{https://doi.org/10.37236/3748} \bibitem{Wang17} W. Wang, ``A simple arithmetic criterion for graphs being determined by their generalized spectra,'' {\em Journal of Combinatorial Theory, Series B}, vol.~122, pp.~438--451, January 2017. \doilink{https://doi.org/10.1016/j.jctb.2016.07.004} \bibitem{Boulet2009} R. Boulet, ``Disjoint unions of complete graphs characterized by their Laplacian spectrum,'' {\em Electronic Journal of Linear Algebra}, vol.~18, no.~1, pp.~773--783, January 2009. \doilink{https://doi.org/10.13001/1081-3810.1344} \bibitem{DasL2016} K. Ch. Das and M. Liu, ``Complete split graph determined by its (signless) Laplacian spectrum,'' {\em Discrete Applied Mathematics}, vol.~205, pp.~45--51, May 2016. \doilink{https://doi.org/10.1016/j.dam.2016.01.003} \bibitem{WangBHB2010} J. Wang, F. Belardo, Q. Huang, and B. Borovi\'{c}anin, ``On the two largest $Q$-eigenvalues of graphs,'' {\em Discrete Mathematics}, vol.~310, no.~21, pp.~2858--2866, November 2010. \doilink{https://doi.org/10.1016/j.disc.2010.06.030} \bibitem{Turan1941} P. Tur\'{a}n, ``On an extremal problem in graph theory,'' {\em Mat. Fiz. Lapok}, vol.~48, pp.~436--452, 1941. \bibitem{EsserH1980} F. Esser and F. Harary, ``On the spectrum of a complete multipartite graph,'' {\em European Journal of Combinatorics}, vol.~1, no.~3, pp.~211--218, September 1980.
\doilink{https://doi.org/10.1016/S0195-6698(80)80004-7} \bibitem{Butler2008} S. Butler, {\em Eigenvalues and Structures of Graphs}, PhD dissertation, University of California, San Diego, 2008. Available from \url{https://escholarship.org/uc/item/3qd9g26t} \bibitem{SageMath} {\it The Sage Developers}, SageMath, the Sage Mathematics Software System, Version 9.3, 2021. \bibitem{Smith1970} J. H. Smith, ``Some properties of the spectrum of a graph,'' in {\em Combinatorial Structures and Their Applications}, R. K. Guy, H. Hanani, N. Sauer, and J. Schönheim (Eds.), Gordon and Breach, New York, pp.~403--406, 1970. \bibitem{BeinekeB21} L. W. Beineke and J. S. Bagga, {\em Line Graphs and Line Digraphs}, Springer, 2021. \doilink{https://doi.org/10.1007/978-3-030-81386-4} \bibitem{Shrikhande59} S. S. Shrikhande, ``The uniqueness of the $\mathrm{L}_2$ association scheme,'' {\em The Annals of Mathematical Statistics}, vol.~30, no.~3, pp.~781--791, September 1959. \doilink{https://doi.org/10.1214/aoms/1177706207} \bibitem{LinSM2010} Y. Lin, J. Shu, and Y. Meng, ``Laplacian spectrum characterization of extensions of vertices of wheel graphs and multi-fan graphs,'' {\em Computers \& Mathematics with Applications}, vol.~60, no.~7, pp.~2003--2008, October 2010. \doilink{https://doi.org/10.1016/j.camwa.2010.07.035} \bibitem{CameronL2010} P. J. Cameron and J. H. van Lint, ``Strongly regular graphs with no triangles,'' in {\em Graphs, Codes and Designs}, Chapter~5, pp.~37--44, Cambridge University Press, 1980. \doilink{https://doi.org/10.1017/CBO9780511662140.006} \bibitem{Cameron2003} P. J. Cameron, ``Random strongly regular graphs?,'' {\em Discrete Mathematics}, vol.~273, no.~1--3, pp.~103--114, December 2003. \doilink{https://doi.org/10.1016/S0012-365X(03)00231-0} \bibitem{Brouwer83} A. E. Brouwer, ``The uniqueness of the strongly regular graph on 77 points,'' {\em Journal of Graph Theory}, vol.~7, no.~4, pp.~455--461, December 1983.
\doilink{https://doi.org/10.1002/jgt.3190070411} \bibitem{BrouwerH92} A. E. Brouwer and W. H. Haemers, ``Structure and uniqueness of the $(81, 20, 1, 6)$ strongly regular graph,'' {\em Discrete Mathematics}, vol.~106--107, pp.~77--82, September 1992. \doilink{https://doi.org/10.1016/0012-365X(92)90532-K} \bibitem{StevanovicM2009} D. Stevanovi\'{c} and M. Milo\v{s}evi\'{c},``A spectral proof of the uniqueness of a strongly regular graph with parameters $(81, 20, 1, 6)$,'' {\em European Journal of Combinatorics}, vol.~30, no.~4, pp.~957--968, May 2009. \doilink{https://doi.org/10.1016/j.ejc.2008.07.021} \bibitem{Coolsaet2006} K. Coolsaet, ``The uniqueness of the strongly regular graph $\mathrm{srg}(105, 32, 4, 12)$,'' {\em Bulletin of the Belgian Mathematical Society - Simon Stevin}, vol.~12, no.~5, pp.~707--718, January 2006. \doilink{https://doi.org/10.36045/bbms/1136902608} \bibitem{Mesner56} D. M. Mesner, ``An investigation of certain combinatorial properties of partially balanced incomplete block experimental designs and association schemes, with a detailed study of designs of Latin square and related types,'' PhD dissertation, Department of Statistics, Michigan State University, 1956. \doilink{https://doi.org/doi:10.25335/3z1w-1b88} \bibitem{StruikB2010} T. Struik and W. Bosma, {\em Seven known examples of triangle-free strongly-regular graphs}, 2010. Available at \url{https://www.math.ru.nl/OpenGraphProblems/Tjapko/30.html} \bibitem{Brouwer} A. E. Brouwer, tables of paramters of strongly regular graphs. \url{https://aeb.win.tue.nl/graphs/srg/} \bibitem{CioabaGJM25} S. Cioaba, K. Guo, C. Ji, and M. Mim, ``Clique complexes of strongly regular graphs and their eigenvalues,'' {\em in preparation}, 2025. \bibitem{SampathkumarW1980} E. Sampathkumar and H. B. Walikar, ``On the splitting graph of a graph,'' {\em Karnatak Univ. Sci}, vol.~13, pp.~13--16, 1980. Available from \url{https://www.researchgate.net/publication/269007309} \bibitem{FruchtH1970} R. Frucht and F. 
Harary, ``On the corona of two graphs,'' {\em Aequationes Mathematicae}, vol.~4, pp.~322-325, 1970. \doilink{https://doi.org/10.1007/BF01844162} \bibitem{HouS2010} Y. Hou and W. C. Shiu, The spectrum of the edge corona of two graphs. {\em The Electronic Journal of Linear Algebra}, vol.~20, pp.~586--594, September 2010. \doilink{https://doi.org/10.13001/1081-3810.1395} \bibitem{AdigaRS2018} C. Adiga, B. R. Rakshith, and K. N. Subba Krishna, ``Spectra of some new graph operations and some new class of integral graphs,'' {\em Iranian Journal of Mathematical Sciences and Informatics}, vol.~13, no.~1, pp.~51--65, May 2018. Available from \url{http://ijmsi.ir/article-1-755-en.html} \bibitem{SonarS2024} B. Sonar and R. Srivastava, ``On the spectrum of closed neighborhood corona product of graph and its application,'' July 2024. \doilink{http://dx.doi.org/10.48550/arXiv.2407.05653} \bibitem{PisankiS13} T. Pisanski and B. Servatius, {\em Configurations from a Graphical Viewpoint}, Springer, 2013. \doilink{https://doi.org/10.1007/978-0-8176-8364-1} \bibitem{LuMZ2023} Z. Lu, X. Ma, and M. Zhang, ``Spectra of graph operations based on splitting graph,'' {\em Journal of Applied Analysis and Computation}, vol.~13, no.~1, pp.~133--155, February 2023. \doilink{https://doi.org/10.11948/20210446} \bibitem{Hamud23} S. Hamud, {\em Contributions to Spectral Graph Theory}, Ph.D. dissertation, Technion-Israel Institute of Technology, Haifa, Israel, December 2023. \bibitem{BrouwerS2009} A. E. Brouwer and E. Spence, ``Cospectral graphs on 12 vertices,'' {\em Electronic Journal of Combinatorics}, vol.~16, no.~1, paper~20, pp.~1--3, June 2009. \doilink{https://doi.org/10.37236/258} \bibitem{AouchicheH14} M. Aouchiche and P. Hansen, ``Distance spectra of graphs: A survey,'' {\em Linear Algebra and its Applications}, vol.~458, pp.~301--386, June 2014. \doilink{https://doi.org/10.1016/j.laa.2014.06.010} \bibitem{HuiqiuJJY21} L. Huiqiu, S. Jinlong, X. Jie, and Z. 
Yuke, ``A survey on distance spectra of graphs,'' {\em Advances in Mathematics (China)}, vol.~50, no.~1, January 2021. \doilink{https://doi.org/10.11845/SXJZ.2020012A} \bibitem{Banerjee23} S. Banerjee, ``Distance Laplacian spectra of various graph operations and its application to graphs on algebraic structures,'' {\em Journal of Algebra and Its Applications}, vol.~22, no.~1, paper~2350022, pp.~1--26, 2023. \doilink{https://doi.org/10.1142/S0219498823500226} \bibitem{HogbenR22} L. Hogben and C. Reinhart, ``Spectra of variants of distance matrices of graphs and digraphs: a survey,'' {\em La Matematica}, vol.~1, pp.~186--224, January 2022. \doilink{https://doi.org/10.1007/s44007-021-00012-9} \bibitem{LinHW2013} H. Lin, Y. Hong, J. Wang, and J. Shu, ``On the distance spectrum of graphs,'' {\em Linear Algebra and its Applications}, vol.~439, pp.~1662--1669, May 2013. \doilink{https://doi.org/10.1016/j.laa.2013.04.019} \bibitem{vanDamO11} E. R. van Dam and G. R. Omidi, ``Graphs whose normalized Laplacian has three eigenvalues,'' {\em Linear Algebra and its Applications}, vol.~435, no.~10, pp.~2560--2569, November 2011. \doilink{https://doi.org/10.1016/j.laa.2011.02.005} \end{thebibliography} \end{document}
\documentclass[12pt]{article} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{float} \usepackage{cite} \usepackage{esint} \usepackage[all]{xy} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{color} \usepackage{graphicx} \usepackage{hyperref} \usepackage{bm} \usepackage{indentfirst} \usepackage{geometry} \geometry{a4paper,scale=0.7} \theoremstyle{plain}\newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{ques}[thm]{Question} \newtheorem{property}[thm]{Property} \newtheorem{cor}[thm]{Corollary} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \newtheorem{exer}[thm]{Exercise} \newtheorem{exmp}[thm]{Example} \makeatletter \makeatother \numberwithin{equation}{section} \begin{document} \title{Compact harmonic RCD$(K,N)$ spaces are harmonic manifolds} \author{\it Zhangkai Huang \thanks{Sun Yat-sen University: [email protected]}} \date{\small\today} \maketitle \begin{abstract} This paper focuses on the properties and characterizations of harmonic RCD$(K,N)$ spaces, which are the counterparts of harmonic Riemannian manifolds in the non-smooth setting. We prove that a compact RCD$(K,N)$ space is isometric to a locally harmonic Riemannian manifold if it satisfies either of the following harmonicity conditions: \begin{enumerate} \item the heat kernel $\rho(x,y,t)$ depends only on the variable $t$ and the distance between points $x$ and $y$; \item the volume of the intersection of two geodesic balls depends only on their radii and the distance between their centers. \end{enumerate} \end{abstract} \tableofcontents \section{Introduction} In $n$-dimensional Euclidean space $\mathbb{R}^n$, there exist harmonic functions that depend only on the geodesic distance. For example, when $n>2$, the function $f(x_1,\ldots,x_n)=(x_1^2+\cdots+x_n^2)^{1-n/2}$ is harmonic on $\mathbb{R}^n\setminus \{0\}$.
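The harmonicity of these radial powers is elementary, and it can also be observed numerically. The following sketch is purely illustrative and not part of the argument: it approximates the Laplacian of $|x|^{2-n}$ by central differences and checks that it vanishes away from the origin, with $|x|^2$ as a non-harmonic control.

```python
# Illustrative numerical check (not part of the paper's argument): for n > 2,
# f(x) = (x_1^2 + ... + x_n^2)^(1 - n/2) = |x|^(2-n) is harmonic away from 0.
# The Laplacian is approximated by second-order central differences.

def fd_laplacian(f, p, h=1e-3):
    """Central-difference approximation of (Delta f)(p)."""
    lap = 0.0
    for i in range(len(p)):
        q_plus, q_minus = list(p), list(p)
        q_plus[i] += h
        q_minus[i] -= h
        lap += (f(q_plus) + f(q_minus) - 2.0 * f(p)) / h**2
    return lap

def radial_power(n):
    """f(x) = |x|^(2 - n), defined away from the origin."""
    return lambda x: sum(t * t for t in x) ** (1.0 - n / 2.0)

for n in (3, 4, 5):
    print(n, fd_laplacian(radial_power(n), [1.0] * n))  # all close to 0

# Control: g(x) = |x|^2 has Delta g = 2n, far from zero.
print(fd_laplacian(lambda x: sum(t * t for t in x), [1.0, 1.0, 1.0]))
```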
Inspired by this property, in 1930 Ruse sought to identify harmonic functions on Riemannian manifolds with similar characteristics. However, it turned out that such radial harmonic functions exist only in very special cases. This observation led to the historical definition of this special class of manifolds, which is stated as follows. \begin{defn} A Riemannian manifold $(M^n,\mathrm{g})$ is said to be \textit{harmonic} if the volume density function $\theta_p:=\sqrt{|\det (g_{ij})|}$ in normal coordinates at each point $p$ is a radial function. \end{defn} Today, numerous equivalent characterizations of harmonic manifolds have been established (see for instance \cite{B78,DR92,W50}). Below, we list a few of these characterizations for clarity. \begin{thm} A complete $n$-dimensional Riemannian manifold $(M^n,\mathrm{g})$ is harmonic if and only if any of the following conditions holds. \begin{enumerate} \item[$(1)$] For any point $p\in M^n$ and the distance function $d_p:=\mathsf{d}_{\mathrm{g}}(p,\cdot)$, $\Delta d_p^2$ is radial on some small ball $B_r(p)$. \item[$(2)$] For any $p\in M^n$ there exists a nonconstant radial harmonic function in a punctured neighborhood of $p$. \item[$(3)$] Every small geodesic sphere in $M^n$ has constant mean curvature. \item[$(4)$] Every harmonic function satisfies the mean value property. \end{enumerate} \end{thm} When the space is simply connected, we have other equivalent characterizations of the space's harmonicity. \begin{thm}[{\cite[Thm. 3]{CH11}}, {\cite[Thm. 1]{CH12}}]\label{harm1} A connected, simply connected and complete Riemannian manifold is harmonic if and only if the volume of the intersection of two geodesic balls depends only on their radii and the distance between their centers. \end{thm} \begin{thm}[{\cite[Thm.
1.1]{S90}}]\label{harm2} A connected, simply connected and complete Riemannian manifold $(M^n,\mathrm{g})$ is harmonic if and only if the heat kernel $\rho(x,y,t)$ depends only on the variable $t$ and the distance between points $x$ and $y$, i.e., it is of the form $\rho(x,y,t) = \rho(\mathsf{d}_{\mathrm{g}}(x,y),t)$. \end{thm} Given a Riemannian manifold $(M^n,\mathrm{g})$, let $H_i$ ($i=1,2$) be two functions on $M^n\times M^n$ such that for any $x\in M^n$ the functions $H_i^x(\cdot):=H_i(x,\cdot)$ ($i=1,2$) are $L^2$-integrable. Under these conditions, the convolution of $H_1$ and $H_2$, denoted by $H_1 \ast H_2$, is defined by \[ H_1 \ast H_2(x, y) =\int_{M^n} H_1(x,z)H_2(y,z) \,\mathrm{d}\mathrm{vol}_{\mathrm{g}}(z). \] A function $H:M^n\times M^n\rightarrow \mathbb{R}$ is called a radial kernel if $H(x, y)$ depends only on the geodesic distance between $x$ and $y$, that is, if $H =h\circ \mathsf{d}_{\mathrm{g}}$, where $h:\mathbb{R}^+\rightarrow \mathbb{R}$ is an arbitrary function. As an application of Theorem \ref{harm2}, by approximation with characteristic functions, we can immediately deduce the following theorem. \begin{thm}[{\cite[Prop. 2.1]{S90}}]\label{harm3} A connected, simply connected and complete Riemannian manifold is harmonic if and only if the convolution of the radial kernel functions $H_1=h_1\circ \mathsf{d}_{\mathrm{g}}$ and $H_2=h_2\circ \mathsf{d}_{\mathrm{g}}$ is a radial kernel function whenever $h_1,h_2$ are $L^2$-integrable functions on $\mathbb{R}$ with compact support. \end{thm} The primary objective of this paper is to characterize, in the non-smooth context, $\mathrm{RCD}(K,N)$ metric measure spaces (defined precisely in the following subsection) that satisfy the conditions outlined in Theorems \ref{harm1}--\ref{harm3}.
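The convolution statement of Theorem \ref{harm3} can be observed concretely in the simplest harmonic model, the real line with Lebesgue measure. The following sketch is illustrative only (the Gaussian profiles $h_1,h_2$ are our choice): it approximates $H_1\ast H_2$ by a trapezoidal rule and confirms that the value depends only on $|x-y|$.

```python
import math

# Illustration of the convolution statement in the model case (R, Lebesgue),
# which is harmonic: the convolution of two radial kernels is again radial.
# The Gaussian profiles h1, h2 below are our choice for the example.

def conv(x, y, h1, h2, half_width=12.0, steps=6000):
    """Trapezoidal approximation of the integral of h1(|x-z|) h2(|y-z|) dz."""
    dz = 2.0 * half_width / steps
    total = 0.0
    for k in range(steps + 1):
        z = -half_width + k * dz
        weight = 0.5 if k in (0, steps) else 1.0
        total += weight * h1(abs(x - z)) * h2(abs(y - z)) * dz
    return total

h1 = lambda t: math.exp(-t * t)
h2 = lambda t: math.exp(-2.0 * t * t)

# Pairs of points at the same mutual distance yield the same value ...
print(conv(0.0, 1.0, h1, h2), conv(4.0, 5.0, h1, h2), conv(-2.0, -1.0, h1, h2))
# ... while changing the distance changes the value.
print(conv(0.0, 2.0, h1, h2))
```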
\subsection{RCD spaces} In this paper, by a metric measure space we mean a triple $(X, \mathsf{d},\mathfrak{m})$ such that $(X, \mathsf{d})$ is a complete separable metric space and $\mathfrak{m}$ is a nonnegative Borel measure with full support on $X$ that is finite on any bounded subset of $X$. The curvature-dimension condition CD$(K,N)$, which encodes a lower Ricci curvature bound $K\in\mathbb{R}$ and an upper dimension bound $N\in [1,\infty)$ for metric measure spaces in a synthetic sense, was first introduced in \cite{St06a,St06b} by Sturm and in \cite{LV09} by Lott-Villani, respectively. Subsequently, by adding a Riemannian structure to CD$(K,N)$ metric measure spaces, Ambrosio-Gigli-Savar\'{e} \cite{AGS14a}, Gigli \cite{G13,G15}, Erbar-Kuwada-Sturm \cite{EKS15} and Ambrosio-Mondino-Savar\'{e} \cite{AMS19} introduced the notion of RCD$(K, N)$ spaces. To be precise, a metric measure space is classified as an RCD$(K, N)$ space if it satisfies the CD$(K,N)$ condition and its $H^{1,2}$-Sobolev space is Hilbertian. Although RCD$(K,N)$ spaces are generally not smooth, they still possess some desirable analytical properties. For instance, Jiang-Li-Zhang \cite{JLZ16} established Gaussian estimates for the heat kernels on RCD$(K,N)$ spaces, which imply the existence of locally Lipschitz representatives of the heat kernels. Furthermore, Bru\`e-Semola \cite{BS20} demonstrated that each RCD$(K,N)$ space $({X},\mathsf{d},\mathfrak{m})$ admits a unique integer $n\in [1,N]$, called its essential dimension and denoted by $n:=\mathrm{dim}_{\mathsf{d},\mathfrak{m}}({X})$, such that the tangent space at $\mathfrak{m}$-almost every point is $\mathbb{R}^n$.
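For orientation, the model example behind the heat-kernel discussion above, and behind the first harmonicity condition of the abstract, is Euclidean space, where $\rho(x,y,t)=(4\pi t)^{-n/2}\exp\left(-\mathsf{d}^2(x,y)/4t\right)$ is an explicit radial kernel. The following purely illustrative sketch checks radiality and the semigroup identity $\int\rho(x,z,s)\rho(z,y,t)\,\mathrm{d}z=\rho(x,y,s+t)$ in one dimension.

```python
import math

# Model example: on Euclidean R^n the heat kernel is the explicit radial
# kernel rho(x,y,t) = (4*pi*t)^(-n/2) * exp(-|x-y|^2 / (4*t)), and it
# satisfies the semigroup identity rho_s * rho_t = rho_{s+t}.
# Purely illustrative; not part of the paper's argument.

def rho(n, d2, t):
    """Euclidean heat kernel as a function of squared distance d2 and time t."""
    return (4.0 * math.pi * t) ** (-n / 2.0) * math.exp(-d2 / (4.0 * t))

def dist2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# Radiality: pairs of points at equal distance give equal kernel values.
x1, y1 = (0.0, 0.0, 0.0), (1.0, 2.0, 2.0)   # distance 3
x2, y2 = (5.0, 0.0, 1.0), (5.0, 3.0, 1.0)   # distance 3
print(rho(3, dist2(x1, y1), 0.7), rho(3, dist2(x2, y2), 0.7))

# Semigroup identity in one dimension, via trapezoidal integration.
def convolve(s, t, x, y, half_width=15.0, steps=12000):
    dz = 2.0 * half_width / steps
    total = 0.0
    for k in range(steps + 1):
        z = -half_width + k * dz
        w = 0.5 if k in (0, steps) else 1.0
        total += w * rho(1, (x - z) ** 2, s) * rho(1, (z - y) ** 2, t) * dz
    return total

print(convolve(0.3, 0.5, 0.0, 1.0), rho(1, 1.0, 0.8))   # nearly equal
```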
It is worth mentioning that De Philippis-Gigli \cite{DG18} introduced a synthetic counterpart of volume non-collapsed Gromov-Hausdorff limit spaces of Riemannian manifolds with a constant dimension and a lower Ricci curvature bound, namely non-collapsed RCD$(K,N)$ spaces, meaning that the reference measure equals the $N$-dimensional Hausdorff measure $\mathscr{H}^N$. In this case it can be readily checked that $N$ must be an integer and $N=\mathrm{dim}_{\mathsf{d},\mathscr{H}^N}({X})$. It is natural to explore additional conditions that could improve the regularity of general RCD$(K,N)$ spaces. Recently, Brena-Gigli-Honda-Zhu \cite{BGHZ23} demonstrated the equivalence between non-collapsed RCD$(K,N)$ spaces and weakly non-collapsed RCD$(K,N)$ spaces. Furthermore, the author \cite{H23} proved that every RCD$(K,N)$ space satisfying an isometric heat kernel immersion condition is non-collapsed. Additionally, in \cite{H23}, the author demonstrated that a compact non-collapsed RCD$(K,n)$ space with eigenfunctions that realize an isometric immersion into Euclidean space is, in fact, smooth. \subsection{Main results} In this paper, we focus on the following specific subclasses of RCD$(K,N)$ spaces. \begin{defn}[Strongly harmonic RCD$(K,N)$ space] An RCD$(K,N)$ space $(X,\mathsf{d},\mathfrak{m})$ is said to be \textit{strongly harmonic} if there exists a real-valued function $H:[0,\infty)\times [0,\infty)\rightarrow \mathbb{R}$ such that the heat kernel $\rho$ of $(X,\mathsf{d},\mathfrak{m})$ satisfies \begin{equation}\label{eeqn1.1} \rho(x,y,t)=H(\mathsf{d}\left(x,y\right),t),\ \forall x,y\in X,\ \forall t>0.
\end{equation} \end{defn} \begin{defn}[Radial symmetric RCD$(K,N)$ space] An RCD$(K,N)$ space $(X,\mathsf{d},\mathfrak{m})$ is said to be \textit{radial symmetric} if there exist a real-valued function $F:[0,\infty)\rightarrow \mathbb{R}$ and non-constant eigenfunctions $\{\phi_i\}_{i=1}^m$ such that \begin{equation}\label{eqn1.2} \sum\limits_{i=1}^m \phi_i(x)\phi_i(y)=F(\mathsf{d}\left(x,y\right)),\ \forall x,y\in X. \end{equation} \end{defn} We prove the following three regularity results for the above subclasses of RCD$(K,N)$ spaces. \begin{thm}\label{mainthm2} Let $(X,\mathsf{d},\mathfrak{m})$ be a strongly harmonic $\mathrm{RCD}(K,N)$ space. Then $\mathfrak{m}=c\mathscr{H}^n$ for some constant $c>0$, where $n=\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)$ is the essential dimension of the space. In particular, $(X,\mathsf{d},\mathscr{H}^n)$ is a non-collapsed $\mathrm{RCD}(K,n)$ space. \end{thm} \begin{thm}\label{mainthm1} Let $(X,\mathsf{d},\mathscr{H}^n)$ be a non-collapsed radial symmetric $\mathrm{RCD}(K,n)$ space. Then $(X,\mathsf{d})$ is isometric to an $n$-dimensional smooth closed Riemannian manifold $(M^n,\mathrm{g})$. \end{thm} \begin{cor}\label{cor1.5} Assume $(X,\mathsf{d},\mathfrak{m})$ is a compact strongly harmonic $\mathrm{RCD}(K,N)$ space with $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)=n$. Then $(X,\mathsf{d})$ is isometric to an $n$-dimensional smooth closed Riemannian manifold $(M^n,\mathrm{g})$. \end{cor} As an application of Corollary \ref{cor1.5}, we are positioned to establish the following theorem, which bears a direct relationship with Theorem \ref{harm1}. \begin{thm}\label{thm1.11} Let $(X,\mathsf{d},\mathfrak{m})$ be a compact $\mathrm{RCD}(K,N)$ space. If in addition the measure of the intersection of two geodesic balls depends only on their radii and the distance between their centers, then the following holds.
\begin{enumerate} \item[$(1)$] $\mathfrak{m}=c\mathscr{H}^n$ for some constant $c>0$ and $(X,\mathsf{d},\mathscr{H}^n)$ is a non-collapsed $\mathrm{RCD}(K,n)$ space; \item[$(2)$] $(X,\mathsf{d})$ is isometric to an $n$-dimensional smooth closed Riemannian manifold $(M^n,\mathrm{g})$; \item[$(3)$] moreover, if in addition $X$ is simply connected, then it is a harmonic manifold. \end{enumerate} \end{thm} \subsection{Outline of the proofs} In Section \ref{sec3}, we initially focus on the smoothness of compact, non-collapsed, radial symmetric RCD$(K,n)$ spaces $(X,\mathsf{d},\mathscr{H}^n)$. Our main goal is to establish that the map $(\phi_1,\ldots,\phi_m):X\rightarrow \mathbb{R}^m$, constructed from eigenfunctions meeting the conditions in (\ref{eqn1.2}), is an isometric immersion. The smoothness is then inferred from \cite[Thm. 1.10]{H23}. Regarding the proof for the regularity of strongly harmonic RCD$(K,N)$ spaces $(X,\mathsf{d},\mathfrak{m})$, we employ a blow-up argument to demonstrate their non-collapsing nature and to improve the upper bound $N$ to the essential dimension $n:=\mathrm{dim}_{\mathsf{d},\mathfrak{m}}({X})$. Subsequently, we establish the smoothness of compact, non-collapsed, strongly harmonic RCD$(K,n)$ spaces, leveraging the observation that they are radial symmetric. In Section \ref{sec4}, we explore regularity properties of RCD$(K,N)$ spaces $(X,\mathsf{d},\mathfrak{m})$ in which the volume of the intersection of two geodesic balls is determined only by their radii and the distance between their centers. Utilizing a blow-up argument, we demonstrate that these spaces are non-collapsed and that the upper bound $N$ can be refined to the essential dimension $n:=\mathrm{dim}_{\mathsf{d},\mathfrak{m}}({X})$. Furthermore, under the additional assumption of compactness for $X$, we examine the smoothness of the space. We observe that any geodesic can be extended to a length equal to the diameter of the space.
This observation allows us to construct bi-Lipschitz coordinate charts using distance functions, which shares some similarity with the bi-Lipschitz coordinates near regular points in Alexandrov spaces (see for instance \cite{BGP92}). By mapping back to Euclidean space, we derive a key equation for any two distinct points $x,\bar{x} \in X$. \[ \lim_{r\rightarrow 0} \,\frac{1}{r}\left(\frac{\mathscr{H}^n\left(B_r(x)\setminus B_{\mathsf{d}(x,\bar{x})}(\bar{x})\right)}{\omega_n r^n}-\frac{1}{2}\right)=\Delta \mathsf{d}_{\bar{x}}(x). \] Notably, as shown in \cite{HT03}, in the smooth setting, the second term in this equation corresponds to the mean curvature of the geodesic sphere of radius $\mathsf{d}(x,\bar{x})$ centered at $\bar{x}$. Finally, under polar coordinates, the volume density function is found to be radial, which implies that the space is strongly harmonic and, consequently, smooth. \ \ \ \textbf{Acknowledgements} The author acknowledges the support of the Fundamental Research Funds for the Central Universities, Sun Yat-sen University, Grant Number 24qnpy105. He is thankful to Prof. Shouhei Honda and Prof. Huichun Zhang for their insightful discussions. \section{Preliminaries} In this paper we consistently employ $C=C(k_1,\ldots,k_m)$ to denote a positive constant that depends solely on the parameters $k_1,\ldots,k_m$. For convenience, we may occasionally abbreviate this to $C$, acknowledging that it may change from line to line if necessary. \subsection{Metric space} In this subsection, we review some fundamental notation pertaining to metric spaces. Let $(X, \mathsf{d})$ be a metric space that is both complete and separable. \begin{itemize} \item We denote by $C(X)$ and $\mathrm{Lip}(X,\mathsf{d})$ the sets of continuous functions and Lipschitz functions on $(X,\mathsf{d})$, respectively.
The subsets $\mathrm{Lip}_b(X,\mathsf{d})$ and $\mathrm{Lip}_c(X,\mathsf{d})$ consist of bounded Lipschitz functions and compactly supported Lipschitz functions, respectively. \item For any $f\in \mathrm{Lip}(X,\mathsf{d})$, the Lipschitz constant of $f$ is defined by \[ \mathrm{Lip}\mathop{f}:=\sup_{x\neq y} \frac{|f(y)-f(x)|}{\mathsf{d}\left(y,x\right)}; \] the pointwise Lipschitz constant of $f$ at a point $x\in X$ is defined by \[ \mathrm{lip}\mathop{f}(x):=\limsup_{y\rightarrow x} \frac{|f(y)-f(x)|}{\mathsf{d}\left(y,x\right)} \] if $x$ is not isolated, and is understood to be $0$ if $x$ is isolated. \item We denote by $B_r^X(x)$ (or briefly $B_r(x)$) the open ball of radius $r>0$ centered at $x$, i.e. $\{y\in X:\mathsf{d}\left(y,x\right)<r\}$, and by $A_{r_1,r_2}^X(x)$ (or briefly $A_{r_1,r_2}(x)$) the annulus $\{y\in X:r_1<\mathsf{d}\left(x,y\right)<r_2\}$. \item For any $n\in [1,\infty)$, we define $\mathscr{H}^n_\mathsf{d}$ (or simply $\mathscr{H}^n$ when the context is clear and there is no risk of confusion) as the $n$-dimensional Hausdorff measure on $(X,\mathsf{d})$. Specifically, we denote by $\mathscr{L}^n$ the standard Lebesgue measure on $\mathbb{R}^n$, which is consistent with the $n$-dimensional Hausdorff measure. Additionally, we let $\omega_n$ denote the Lebesgue measure of the unit ball $B_1(0_n)$ in $\mathbb{R}^n$. \item If $(X,\mathsf{d})$ is compact, then we define its diameter as \[ \mathrm{diam}(X,\mathsf{d}):=\sup_{x,y\in X}\mathsf{d}\left(x,y\right). \] \end{itemize} \subsection{RCD space and heat kernel} Throughout this paper, when referring to a triple $(X,\mathsf{d},\mathfrak{m})$ as a metric measure space, we mean that $(X,\mathsf{d})$ is a complete and separable metric space and $\mathfrak{m}$ is a non-negative Borel measure which is finite on bounded sets. Now, let us consider a metric measure space $(X,\mathsf{d},\mathfrak{m})$.
The Cheeger energy $\mathrm{Ch}:L^2(\mathfrak{m})\rightarrow [0,\infty]$ is a convex and lower semi-continuous functional defined as \[ \mathrm{Ch}(f):=\inf_{\{f_i\}}\left\{\liminf_{i\rightarrow\infty}\int_X \left(\mathrm{lip}\mathop{f_i}\right)^2\mathrm{d}\mathfrak{m}\right\}, \] where the infimum is taken among all sequences $\{f_i\}\subset \mathrm{Lip}_b(X,\mathsf{d})\cap L^2(\mathfrak{m})$ such that $\|f_i-f\|_{L^2(\mathfrak{m})}\rightarrow 0$. The Sobolev space $H^{1,2}(X,\mathsf{d},\mathfrak{m})$ is then defined as the set of $L^2$-integrable functions with finite Cheeger energy. Subsequently, unless it leads to ambiguity, we will use the simplified notations $L^p$ for $L^p(X,\mathfrak{m})$ and $H^{1,2}$ for $H^{1,2}(X,\mathsf{d},\mathfrak{m})$. For any function $f\in H^{1,2}$, we can employ a minimizing sequence and apply Mazur's Lemma to uniquely identify a minimal relaxed slope $|\nabla f|\in L^2$ such that \[ \mathrm{Ch}(f)=\int_X |\nabla f|^2\, \mathrm{d}\mathfrak{m}. \] This minimal relaxed slope possesses the locality property, meaning that $|\nabla f|=|\nabla h|$ $\mathfrak{m}$-a.e. on $\{f=h\}$. Let us further make the convention that all metric measure spaces under consideration in this paper are \textit{infinitesimally Hilbertian}. This term signifies that the associated Sobolev spaces are Hilbert spaces. It is noteworthy that, as demonstrated in \cite{AGS14a,G15}, under this condition, for any two functions $f,h\in H^{1,2}$, the following function is well-defined: \[ \langle\nabla f,\nabla h\rangle:=\lim_{\epsilon\rightarrow 0}\frac{|\nabla (f+\epsilon h)|^2-|\nabla f|^2}{2\epsilon}\in L^1. \] Furthermore, the domain of the Laplacian, denoted by $D(\Delta)$, can be defined as the set of functions $f\in H^{1,2}$ for which the equation \[ \int_X \langle \nabla f,\nabla \varphi\rangle\, \mathrm{d}\mathfrak{m}=-\int_X h\varphi\, \mathrm{d}\mathfrak{m},\ \forall \varphi\in H^{1,2}, \] holds for some $h\in L^2$. This $h$ is unique and is denoted by $\Delta f$.
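In the smooth one-dimensional model $(\mathbb{R},|\cdot|,\mathscr{L}^1)$, where $\Delta f=f''$, the defining identity of the Laplacian reduces to integration by parts against a compactly supported test function. The following numerical sketch is illustrative only (the bump function $\varphi$ is our choice): it checks $\int f'\varphi'\,\mathrm{d}x=-\int f''\varphi\,\mathrm{d}x$ for $f=\sin$.

```python
import math

# In the model case (R, Lebesgue) the defining identity of the Laplacian,
#   int <grad f, grad phi> dm = - int (Delta f) phi dm,
# is integration by parts.  We verify it numerically with f = sin and a
# smooth bump phi supported in (1, 3), using trapezoidal integration.

def bump(x):
    """Smooth function compactly supported in (1, 3)."""
    if 1.0 < x < 3.0:
        return math.exp(-1.0 / ((x - 1.0) * (3.0 - x)))
    return 0.0

def trapezoid(values, dx):
    return dx * (sum(values) - 0.5 * (values[0] + values[-1]))

N = 40000
a, b = 0.0, 4.0
dx = (b - a) / N
xs = [a + k * dx for k in range(N + 1)]

phi = [bump(x) for x in xs]
# Central-difference derivative of phi (phi vanishes near both endpoints).
dphi = ([0.0] + [(phi[k + 1] - phi[k - 1]) / (2.0 * dx) for k in range(1, N)]
        + [0.0])

lhs = trapezoid([math.cos(x) * dp for x, dp in zip(xs, dphi)], dx)  # int f' phi'
rhs = -trapezoid([-math.sin(x) * p for x, p in zip(xs, phi)], dx)   # -int f'' phi
print(lhs, rhs)   # the two values agree up to discretization error
```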
As a localized version, for any open set $U\subset X$ we denote by $D(\Delta,U)$ the set of all $f\in H^{1,2}(U,\mathsf{d},\mathfrak{m})$ such that the following holds for some $h\in L^2(U)$, which is denoted by $\Delta_U f$, \[ \int_X \langle \nabla f,\nabla \varphi\rangle \,\mathrm{d}\mathfrak{m}=-\int_X h\varphi\, \mathrm{d}\mathfrak{m}, \ \forall \varphi\in \mathrm{Lip}_c(U,\mathsf{d}). \] Here by $f\in H^{1,2}(U,\mathsf{d},\mathfrak{m})$ we mean that $\varphi f\in H^{1,2}$ for any $\varphi\in \mathrm{Lip}_c(U,\mathsf{d})$ and $f,|\nabla f|\in L^2(U)$. We are now in a position to introduce the definition of RCD$(K,N)$ spaces. See \cite{AGS15,AMS19,EKS15,G15} for details. \begin{defn} For any $K\in \mathbb{R}$, $N\in [1,\infty]$, an infinitesimally Hilbertian metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be an RCD$(K,N)$ space if it satisfies the following conditions. \begin{enumerate} \item There exist $x\in X$ and $C>1$ such that for any $r>0$ we have $\mathfrak{m}(B_r(x))\leqslant C\exp(Cr^2)$. \item Any $f\in H^{1,2}$ satisfying $|\nabla f|\leqslant 1$ $\mathfrak{m}$-a.e. has a 1-Lipschitz representative. \item For any $f\in D(\Delta)$ with $\Delta f\in H^{1,2}$ and any $\varphi \in \mathrm{Test}F\left({X},\mathsf{d},\mathfrak{m}\right)$ with $ \varphi \geqslant 0$, we have \[ \frac{1}{2}\int_X |\nabla f|^2 \Delta \varphi\mathop{\mathrm{d}\mathfrak{m}}\geqslant \int_X \varphi \left(\frac{(\Delta f)^2}{N}+\langle \nabla f,\nabla \Delta f\rangle+K|\nabla f|^2 \right)\mathrm{d}\mathfrak{m}, \] where $\mathrm{Test}F({X},\mathsf{d},\mathfrak{m})$ is the class of test functions defined by \[ \mathrm{Test}F({X},\mathsf{d},\mathfrak{m}):=\left\{\varphi\in \text{Lip}({X},\mathsf{d})\cap D(\Delta)\cap L^\infty:\Delta \varphi\in H^{1,2}\cap L^\infty\right\}. \] \end{enumerate} If in addition $\mathfrak{m}=\mathscr{H}^N$, then $(X,\mathsf{d},\mathfrak{m})$ is said to be a non-collapsed RCD$(K,N)$ space.
\end{defn} Throughout this paper, when referring to an RCD$(K,N)$ space, we always mean that $N \in (1,\infty)$. For the remainder of this subsection, let $(X, \mathsf{d}, \mathfrak{m})$ be an RCD$(K,N)$ space. It should be emphasized that RCD$(K,N)$ spaces satisfy the following Bishop-Gromov volume growth inequality (see \cite[Thm. 5.31]{LV09}, \cite[Thm. 2.3]{St06b}). \begin{thm}\label{BGineq} For any $x\in X$ and any $R>r>0$ $(R\leqslant \pi\sqrt{(N-1)/K}$ if $K>0)$ it holds that \[ \dfrac{\mathfrak{m}\left(B_R(x)\right)}{\mathfrak{m}\left(B_r(x)\right)}\leqslant \frac{V_{K,N}(R)}{ V_{K,N}(r)}, \] where $V_{K,N} (r)$ denotes the volume of a ball of radius $r$ in the $N$-dimensional model space with Ricci curvature $K$. \end{thm} Building on the contributions of Sturm (see \cite[Prop. 2.3]{St95}, \cite[Cor. 3.3]{St96}) and Jiang-Li-Zhang (as detailed in Theorem \ref{JLZ} below), it is established that RCD$(K,N)$ spaces possess locally Lipschitz continuous heat kernels. More precisely, there exists a non-negative function $\rho$ on $X \times X \times (0,\infty)$ such that the unique solution to the heat equation can be expressed as follows: \[ \mathrm{h}_t f=\int_X \rho(\cdot,y,t)f(y)\,\mathrm{d}\mathfrak{m}(y),\ \forall f\in L^2,\ \forall t>0, \] where by the heat equation we mean that $\mathrm{h}_t f$ satisfies \[ \frac{d }{d t} \mathrm{h}_t f=\Delta \mathrm{h}_t f,\ \text{in }L^2;\ \ \lim_{t\downarrow 0}\| \mathrm{h}_t f-f\|_{L^2}=0. \] \begin{thm}[Gaussian estimates for the heat kernel {\cite[Thm. 1.1, Thm. 1.2]{JLZ16}}]\label{JLZ} Let $\rho$ be the heat kernel of $(X,\mathsf{d},\mathfrak{m})$.
Then given any $\epsilon>0$, there exist $C_i=C_i(K,N,\epsilon)$ $(i=1,2,3,4)$ such that \begin{equation}\label{JLZineq} C_1^{-1}\exp\left(-\frac{\mathsf{d}^2\left(x,y\right)}{(4-\epsilon )t}-C_2 t\right)\leqslant \mathfrak{m}(B_{\sqrt{t}}(x))\rho(x,y,t)\leqslant C_1\exp\left(-\frac{\mathsf{d}^2\left(x,y\right)}{(4+\epsilon )t}+C_2 t\right) \end{equation} holds for any $x,y\in X$ and \[ |\nabla_x \rho(x,y,t)|\leqslant \frac{C_3}{\sqrt{t}\mathop{\mathfrak{m}(B_{\sqrt{t}}(x))}}\exp\left(-\frac{\mathsf{d}^2\left(x,y\right)}{(4+\epsilon)t}+C_4 t\right) \] holds for $\mathfrak{m}$-a.e. $x,y\in X$. In particular, if $K=0$, then there exists $C_5=C_5(N,\epsilon)$ such that $(\ref{JLZineq})$ can be improved to \[ C_5^{-1}\exp\left(-\frac{\mathsf{d}^2\left(x,y\right)}{(4-\epsilon )t}\right)\leqslant \mathfrak{m}(B_{\sqrt{t}}(x))\rho(x,y,t)\leqslant C_5\exp\left(-\frac{\mathsf{d}^2\left(x,y\right)}{(4+\epsilon )t}\right), \] for any $x,y\in X$. \end{thm} \begin{remark}[Rescaled RCD spaces] For any $a, b\in (0,\infty)$, the rescaled metric measure space $(X, a\mathsf{d}, b\mathfrak{m})$ is an RCD$(a^{-2}K, N)$ space, and its heat kernel $\tilde{\rho}$ can be expressed as \[ \begin{aligned} \tilde{\rho}:X\times X\times (0,\infty)&\longrightarrow (0,\infty)\\ (x,y,t)&\longmapsto b^{-1}\rho(x,y,a^{-2}t). \end{aligned} \] \end{remark} \subsection{Convergence of RCD spaces} We omit the precise definition of pointed measured Gromov-Hausdorff (pmGH) convergence. The details about this definition and the following theorem can be found for instance in \cite[Sec. 3]{GMS15}. \begin{thm}\label{GMS} Assume that $\{(X_i,\mathsf{d}_i,\mathfrak{m}_i,x_i)\}$ is a sequence of pointed $\mathrm{RCD}(K,N)$ spaces such that \[ 0<\liminf_{i\rightarrow \infty}\mathfrak{m}_i(B_1^{X_i}(x_i))\leqslant \limsup_{i\rightarrow \infty}\mathfrak{m}_i(B_1^{X_i}(x_i))<\infty.
\] Then this sequence has a subsequence $\{(X_{i(j)},\mathsf{d}_{i(j)},\mathfrak{m}_{i(j)},x_{i(j)})\}$ which $\mathrm{pmGH}$ converges to a pointed $\mathrm{RCD}(K,N)$ space $(X,\mathsf{d},\mathfrak{m},x)$. \end{thm} Consequently, we can define the concept of \textit{regular sets} as follows. \begin{defn}[Regular set]\label{regularset} Let $(X,\mathsf{d},\mathfrak{m})$ be an RCD$(K,N)$ space. The tangent space at $x\in X$, denoted by $\mathrm{Tan}(X,\mathsf{d},\mathfrak{m},x)$, is defined as \[ \left\{(Y,\mathsf{d}_Y,\mathfrak{m}_Y,y):\exists r_i\downarrow 0 \text{, s.t. }\left(X,\frac{1}{r_i}\mathsf{d},\frac{\mathfrak{m}}{\mathfrak{m}(B_{r_i}(x))},x\right)\xrightarrow{\mathrm{pmGH}}(Y,\mathsf{d}_Y,\mathfrak{m}_Y,y)\right\}. \] The $k$-dimensional regular set is then defined as \[ \mathcal{R}_k:=\left\{x\in X:\mathrm{Tan}(X,\mathsf{d},\mathfrak{m},x)=\left\{\left(\mathbb{R}^k,\mathsf{d}_{\mathbb{R}^k},\frac{1}{\omega_k}\mathscr{L}^k,0_k\right)\right\}\right\}. \] \end{defn} For the subsequent result concerning the existence of the \textit{essential dimension} in RCD spaces, we refer to \cite[Thm. 0.1]{BS20}. \begin{thm}\label{BS} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\mathrm{RCD}(K,N)$ space. There exists a unique integer $n\in [1,N]$ such that $\mathfrak{m}\left(X\setminus \mathcal{R}_n\right)=0$. This integer $n$ is referred to as the essential dimension of $(X,\mathsf{d},\mathfrak{m})$ and is denoted by $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)$. In particular, $\mathfrak{m}$ can be represented as $\theta \mathscr{H}^n\llcorner \mathcal{R}_n$ for some Borel function $\theta$ defined on $X$. \end{thm} \begin{remark}\label{AHT} Under the assumption of Theorem \ref{BS}, we define the set \[ \mathcal{R}_n^\ast:=\left\{x\in \mathcal{R}_n: \exists\lim_{r\downarrow 0}\frac{\mathfrak{m}(B_r(x))}{r^n}\in (0,\infty)\right\}, \] and assert that $\mathfrak{m}\left(\mathcal{R}_n\setminus \mathcal{R}_n^\ast\right)=0$. Furthermore, for $\mathfrak{m}\text{-a.e. 
}x\in \mathcal{R}_n^\ast$, the density function $\theta$ is given by \[ \theta(x)=\lim_{r\downarrow 0}\frac{\mathfrak{m}(B_r(x))}{\omega_n r^n}. \] For further details, refer to \cite[Thm. 4.1]{AHT18}. \end{remark} Specifically, in the case of non-collapsed RCD$(K,n)$ spaces, the following assertion regarding the density function and the essential dimension is valid. \begin{thm}[{\cite[Cor. 1.7]{DG18}}]\label{DG18cor1.7} Let $(X,\mathsf{d},\mathscr{H}^n)$ be a non-collapsed $\mathrm{RCD}(K, n)$ space. Then its essential dimension $\mathrm{dim}_{\mathsf{d},\mathscr{H}^n}(X)$ equals $n$ (notably $n$ is an integer), and it holds that $\theta(x)\leqslant 1$ for any $x\in X$. Moreover, equality $\theta(x)=1$ occurs if and only if $x\in \mathcal{R}_n$. \end{thm} In the subsequent part of this subsection, we consider a sequence of pointed RCD$(K,N)$ spaces, denoted by $\{(X_i, \mathsf{d}_i, \mathfrak{m}_i, x_i)\}$, which converges in the pointed measured Gromov-Hausdorff (pmGH) sense to another pointed RCD$(K,N)$ space, namely $(X, \mathsf{d}, \mathfrak{m}, x)$. We assume that readers are familiar with the definitions of $L^2$-weak, $L^2$-strong and $L^2_{\mathrm{loc}}$ convergence (and their counterparts for Sobolev functions, namely $H^{1,2}$-weak and $H^{1,2}$-strong convergence, and $H^{1,2}_{\mathrm{loc}}$-weak and $H^{1,2}_{\mathrm{loc}}$-strong convergence) on varying spaces. For reference, see \cite{AST17,AH17,GMS15} and \cite[Def. 1.1]{H15}. We conclude this subsection by presenting some useful results related to this topic. \begin{thm}[Pointwise convergence of heat kernels {\cite[Thm. 3.3]{AHT18}}]\label{pchk} The heat kernels $\rho_i$ of $(X_i,\mathsf{d}_i,\mathfrak{m}_i)$ satisfy \[ \lim_{i\rightarrow \infty} \rho_i(x_i,y_i,t_i)=\rho(x,y,t) \] whenever $X_i\times X_i\times (0,\infty)\ni (x_i,y_i,t_i)\rightarrow (x,y,t)\in X\times X\times (0,\infty)$.
\end{thm} \begin{thm}[Arzel\`{a}-Ascoli theorem]\label{AAthm} Assume that $f_i\in C(X_i)$ $(i\in\mathbb{N})$ and that the sequence $\{f_i\}$ satisfies the following two conditions. \begin{enumerate} \item[$1.$](Locally uniformly bounded) For any $R>0$ it holds that \[ \sup_i \sup_{y_i\in B_R(x_i)}|f_i(y_i)|<\infty. \] \item[$2.$](Locally equicontinuous) For any $\epsilon,R\in (0,\infty)$, there exists $\delta\in (0,1)$ such that for any $i\in \mathbb{N}$ it holds that \[ |f_i(y_i)-f_i(z_i)|<\epsilon, \ \forall y_i,z_i\in B_R(x_i) \text{ such that } \mathsf{d}_i(y_i,z_i)<\delta. \] \end{enumerate} Then after passing to a subsequence, there exists $f\in C(X)$ such that $\{f_i\}$ converges pointwise to $f$ in the following sense: \[ f_i(y_i)\rightarrow f(y) \text{\ whenever\ } X_i\ni y_i\rightarrow y\in X. \] \end{thm} Utilizing the Arzelà-Ascoli theorem discussed above, we can demonstrate the subsequent theorem concerning precompactness in the context of $L^2$-weak convergence. \begin{thm} Assume that $f_i\in L^2(\mathfrak{m}_i)$ $(i\in\mathbb{N})$ satisfy $\sup_i \|f_i\|_{L^2(\mathfrak{m}_i)}<\infty$. Then, after passing to a subsequence, $\{f_i\}$ $L^2$-weakly converges to some $f\in L^2(\mathfrak{m})$. \end{thm} The forthcoming theorem is instrumental in our subsequent arguments and is derived from \cite{AH18}. \begin{thm}[Stability of Laplacian on balls]\label{AH18} Let $f_i\in D(\Delta, B_R(x_i))$ $(i\in\mathbb{N})$ be such that $\sup_i \|\Delta f_i\|_{L^2\left(B_R(x_i)\right)}<\infty$ and $\{f_i\}$ $L^2$-strongly converges to some $f\in L^2(B_R(x),\mathfrak{m})$ on $B_R(x)$. Then for any $r\in (0,R)$ the following holds. \begin{enumerate} \item[1.] $f|_{B_r(x)}\in D(\Delta,B_r(x))$. \item[2.] $\{\Delta_i f_i\}$ $L^2$-weakly converges to $\Delta f$ on $B_r(x)$. \item[3.] $\{f_i\}$ $H^{1,2}$-strongly converges to $f$ on $B_r(x)$. \end{enumerate} \end{thm} \subsection{Calculus on RCD$(K,N)$ spaces} Let $(X,\mathsf{d},\mathfrak{m})$ be an RCD$(K,N)$ space.
For the sake of brevity, we forgo detailing the definitions of the spaces of $L^p$-one forms and $L^p$-tensor fields of type $(0,2)$ over a Borel set $A\subset X$, denoted by $L^p(T^\ast(A,\mathsf{d},\mathfrak{m}))$ and $L^p((T^\ast)^{\otimes 2}(A,\mathsf{d},\mathfrak{m}))$, respectively. We also omit the definition of the pointwise Hilbert-Schmidt norm, denoted by $|\cdot|_{\mathsf{HS}}$, for $L^p$-tensor fields. Instead, we only report some crucial results on the calculus within RCD$(K,N)$ spaces. Details can be found in, for instance, \cite{G18a,G18b}. \begin{thm}[Exterior derivative] The linear operator $d$, called the exterior derivative, defined by \[ \begin{aligned} d:H^{1,2}&\longrightarrow L^2(T^\ast(X,\mathsf{d},\mathfrak{m}))\\ f&\longmapsto d\,f \end{aligned} \] satisfies that $|d\, f|=|\nabla f|$ $\mathfrak{m}$-a.e. for any $f\in H^{1,2}$. Moreover, the set $\{d\, f:f\in H^{1,2}\}$ is dense in $L^2(T^\ast(X,\mathsf{d},\mathfrak{m}))$. \end{thm} \begin{thm}[The Hessian] For any $f\in \mathrm{Test}F\left({X},\mathsf{d},\mathfrak{m}\right)$, there exists a unique $T\in L^2\left((T^\ast)^{\otimes 2}({X},\mathsf{d},\mathfrak{m})\right)$, called the Hessian of $f$, denoted by $ \mathop{\mathrm{Hess}}f$, such that for any $f_i\in \mathrm{Test}F\left({X},\mathsf{d},\mathfrak{m}\right)$ $(i=1,2)$, \begin{equation} 2T(\nabla f_1,\nabla f_2)= \langle \nabla f_1,\nabla\langle \nabla f_2,\nabla f\rangle\rangle +\langle \nabla f_2,\nabla\langle \nabla f_1,\nabla f\rangle\rangle-\langle \nabla f,\nabla\langle \nabla f_1,\nabla f_2\rangle\rangle \end{equation} holds for $\mathfrak{m}$-a.e. $x\in {X}$. Moreover, the following holds for any $f\in \mathrm{Test}F\left({X},\mathsf{d},\mathfrak{m}\right)$, $\varphi\in \mathrm{Test}F({X},\mathsf{d},\mathfrak{m})$ with $\varphi\geqslant 0$.
\begin{equation}\label{abc2.14} \frac{1}{2}\int_{X} \Delta \varphi \cdot |\nabla f|^2\,\mathrm{d}\mathfrak{m}\geqslant \int_{X}\varphi \left(|\mathop{\mathrm{Hess}}f|_{\mathsf{HS}}^2+ \langle \nabla \Delta f,\nabla f\rangle+K|\nabla f|^2\right) \mathrm{d}\mathfrak{m}. \end{equation} \end{thm} \begin{remark}\label{rmk2.16} Since $\mathrm{Test}F({X},\mathsf{d},\mathfrak{m})$ is dense in $D(\Delta)$, for any $f\in D(\Delta)$, the Hessian $\mathop{\mathrm{Hess}}f$ is well-defined and belongs to $ L^2\left((T^\ast)^{\otimes 2}({X},\mathsf{d},\mathfrak{m})\right)$. Furthermore, if $f_i\in D(\Delta)\cap \mathrm{Lip}(X,\mathsf{d})$ $(i=1,2)$, then we have $\langle \nabla f_1,\nabla f_2 \rangle\in H^{1,2}$ and $\mathrm{Hess}\, f_1=\mathrm{Hess}\, f_2$ $\mathfrak{m}$-a.e. on $\{f_1=f_2\}$. Additionally, the following holds for any $ \varphi\in H^{1,2}$. \begin{equation}\label{11eqn2.16} \langle \nabla \varphi, \nabla \langle \nabla f_1,\nabla f_2 \rangle \rangle= \mathop{\mathrm{Hess}}f_1\left(\nabla f_2,\nabla\varphi\right)+ \mathop{\mathrm{Hess}}f_2\left(\nabla f_1,\nabla\varphi\right) \ \ \mathfrak{m}\text{-a.e.} \end{equation} We note that when $\mathrm{diam}(X,\mathsf{d})>2$, for any $f\in D(\Delta)$, the local $L^2$-norm of the Hessian of $f$ can be estimated as follows: \begin{equation}\label{eeeeeeqn2.5} \begin{aligned} \int_{B_1(x)}\left|\mathrm{Hess}\, f\right|_{\mathsf{HS}}^2\mathrm{d}\mathfrak{m}\leqslant&\, C(K,N)\left(\int_{B_2(x)}\left(\Delta f\right)^2\mathrm{d}\mathfrak{m}+\inf_{m\in\mathbb{R}}\int_{B_2(x)}\left||\nabla f|^2-m\right| \mathrm{d}\mathfrak{m}\right)\\ &-2(K\wedge0)\int_{B_2(x)}|\nabla f|^2\,\mathrm{d}\mathfrak{m}. 
\end{aligned} \end{equation} It is important to highlight that, due to the locality of the Hessian, even if a function $f$ belongs to $D(\Delta,B_2(x))$, its Hessian on $B_1(x)$, which is still denoted by $\mathrm{Hess}\, f$, can be interpreted as $\mathrm{Hess}\,(\varphi f)$, where $\varphi$ is a non-negative cut-off function in $\mathrm{Test}F(X,\mathsf{d},\mathfrak{m})$ such that $\varphi=1$ on $B_1(x)$, that the support of $\varphi$ is contained within $ B_2(x)$, and that $|\Delta \varphi|+|\nabla \varphi|\leqslant C(K,N)$ $\mathfrak{m}$-a.e. on $B_2(x)$. See \cite[Thm. 6.7]{AMS14}, \cite{G18b} and \cite[Lem. 3.1]{MN14} for details. Let us consider a function $f\in D(\Delta,B_{2r}(x))$ for some $0<r< \mathrm{diam}(X,\mathsf{d})/2$, and examine $f/r$ on the rescaled RCD$(r^2K,N)$ space $\left(X,\mathsf{d}/r,\mathfrak{m}/\mathfrak{m}(B_r(x))\right)$. Then it immediately follows from (\ref{eeeeeeqn2.5}) that \begin{equation}\label{eeeeeeqn2.6} \begin{aligned} r^2\int_{B_r(x)}\left|\mathrm{Hess}\, f\right|_{\mathsf{HS}}^2\mathrm{d}\mathfrak{m}\leqslant& C(K,N)\left(r^2\int_{B_{2r}(x)}\left(\Delta f\right)^2\mathrm{d}\mathfrak{m}+\inf_{m\in\mathbb{R}}\int_{B_{2r}(x)}\left||\nabla f|^2-m\right| \mathrm{d}\mathfrak{m}\right)\\ &-2r^2(K\wedge0)\int_{B_{2r}(x)}|\nabla f|^2\,\mathrm{d}\mathfrak{m}. \end{aligned} \end{equation} \end{remark} Equation (\ref{eeeeeeqn2.5}) was first introduced in \cite[Sec. 1.3]{BPS23}. We express our gratitude to Elia Bru\'{e} for providing a comprehensive proof of this equation, which is reproduced below for the reader's convenience. \begin{proof}[Proof of (\ref{eeeeeeqn2.5})] Without loss of generality, we may assume that $K$ is non-positive. Consider the auxiliary problem \[ \left\{ \begin{aligned} \Delta \tilde{f}=0\ \ \text{on }B_2(x),\ \ \ \ \\ \tilde{f}-f\in H_0^{1,2}(B_2(x)). \end{aligned} \right. \] The existence of such a solution is guaranteed by \cite[Cor. 1.2]{BM95} and \cite[Thm 10.12]{BB11}. 
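Before proceeding, let us record the energy orthogonality of the harmonic replacement, an immediate consequence of testing $\Delta \tilde{f}=0$ against $f-\tilde{f}\in H_0^{1,2}(B_2(x))$; this identity is used repeatedly in the estimates below:
\[
\int_{B_2(x)}\langle \nabla \tilde{f},\nabla (f-\tilde{f})\rangle\,\mathrm{d}\mathfrak{m}=0,\qquad \text{hence}\qquad \int_{B_2(x)}\left|\nabla \tilde{f}\right|^2\mathrm{d}\mathfrak{m}=\int_{B_2(x)}\left(|\nabla f|^2-\left|\nabla (f-\tilde{f})\right|^2\right)\mathrm{d}\mathfrak{m}.
\]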
Since RCD$(K,N)$ spaces satisfy the $(2,2)$-Poincar\'{e} inequality, they consequently fulfill the Sobolev inequality (see \cite[Thm. 1]{R12}, \cite[Thm. 5.1]{HK00} and \cite[Thm. 5.51]{BB11}). This allows us to directly estimate that \[ \begin{aligned} \int_{B_2(x)}|\nabla (f-\tilde{f})|^2\,\mathrm{d}\mathfrak{m}&=-\int_{B_2(x)}( f- \tilde{f})\,\Delta f\,\mathrm{d}\mathfrak{m}\\ &\leqslant \left(\int_{B_2(x)}( f- \tilde{f})^2\,\mathrm{d}\mathfrak{m}\right)^{\frac{1}{2}}\left(\int_{B_2(x)}(\Delta f)^2\,\mathrm{d}\mathfrak{m}\right)^{\frac{1}{2}}\\ &\leqslant C(K,N)\left(\int_{B_2(x)}|\nabla( f- \tilde{f})|^2\,\mathrm{d}\mathfrak{m}\right)^{\frac{1}{2}}\left(\int_{B_2(x)}(\Delta f)^2\,\mathrm{d}\mathfrak{m}\right)^{\frac{1}{2}}, \end{aligned} \] which implies \begin{equation}\label{eeeeqn2.6} \int_{B_2(x)}|\nabla (f-\tilde{f})|^2\,\mathrm{d}\mathfrak{m}\leqslant C(K,N)\int_{B_2(x)}(\Delta f)^2\,\mathrm{d}\mathfrak{m}. \end{equation} Using (\ref{abc2.14}) and (\ref{eeeeqn2.6}), taking the cut-off function $\varphi$ as above, and applying integration by parts, we see \begin{equation}\label{eeeeeeeqn2.7} \begin{aligned} &\int_{B_1(x)}\left|\mathrm{Hess}\, (f-\tilde{f})\right|_{\mathsf{HS}}^2\,\mathrm{d}\mathfrak{m}\leqslant \int_{B_2(x)}\varphi\left|\mathrm{Hess}\, (f-\tilde{f})\right|_{\mathsf{HS}}^2\,\mathrm{d}\mathfrak{m}\\ \leqslant &\, C(K,N)\int_{B_2(x)}\left|\nabla (f-\tilde{f})\right|^2\,\mathrm{d}\mathfrak{m}-\int_{B_2(x)}\varphi\, \langle \nabla \Delta (f-\tilde{f}),\nabla (f-\tilde{f}) \rangle\,\mathrm{d}\mathfrak{m}\\ =&\,C(K,N)\int_{B_2(x)}\left|\nabla (f-\tilde{f})\right|^2\,\mathrm{d}\mathfrak{m}+\int_{B_2(x)}\left(\varphi\, (\Delta f)^2+\langle\nabla \varphi,\nabla (f-\tilde{f})\rangle\,\Delta f\right)\,\mathrm{d}\mathfrak{m}\\ \leqslant &\,C(K,N)\int_{B_2(x)}(\Delta f)^2\,\mathrm{d}\mathfrak{m}. \end{aligned} \end{equation} Similarly, given that $\Delta \tilde{f}=0$ and applying (\ref{eeeeqn2.6}), we deduce that for any $m\in\mathbb{R}$ the following estimate holds.
\begin{equation}\label{eeeeeeeeeeeqn2.7} \begin{aligned} &\int_{B_1(x)}\left|\mathrm{Hess}\,\tilde{f}\right|_{\mathsf{HS}}^2\,\mathrm{d}\mathfrak{m}\\ \leqslant&\ \frac{1}{2} \int_{B_2(x)}\Delta \varphi\left( \left|\nabla \tilde{f}\right|^2-m\right)\,\mathrm{d}\mathfrak{m}-K\int_{B_2(x)}\varphi\left|\nabla \tilde{f}\right|^2\,\mathrm{d}\mathfrak{m}\\ \leqslant& \ C(K,N)\int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-m\right|\,\mathrm{d}\mathfrak{m}-K\int_{B_2(x)}\left|\nabla \tilde{f}\right|^2\,\mathrm{d}\mathfrak{m}\\ =& \ C(K,N)\int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-m\right|\,\mathrm{d}\mathfrak{m}-K\int_{B_2(x)}\left(|\nabla f|^2-\left|\nabla (f-\tilde{f})\right|^2\right)\,\mathrm{d}\mathfrak{m}\\ \leqslant & \ C(K,N)\left(\int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-m\right|\,\mathrm{d}\mathfrak{m}+\int_{B_2(x)}{(\Delta f)}^2\,\mathrm{d}\mathfrak{m}\right)-K\int_{B_2(x)}{|\nabla f|}^2\, \mathrm{d}\mathfrak{m}. \end{aligned} \end{equation} Since \begin{equation}\label{eeeeeqn2.9} \int_{B_1(x)}\left|\mathrm{Hess}\, f\right|_{\mathsf{HS}}^2\mathrm{d}\mathfrak{m}\leqslant 2\int_{B_1(x)}\left(\left|\mathrm{Hess}\, (f-\tilde{f})\right|_{\mathsf{HS}}^2+\left|\mathrm{Hess}\,\tilde{f}\right|_{\mathsf{HS}}^2\right)\mathrm{d}\mathfrak{m}, \end{equation} to conclude it suffices to show that \begin{equation}\label{eeeeqn2.10} \inf_{m\in\mathbb{R}}\int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-m\right|\mathrm{d}\mathfrak{m}\leqslant \inf_{m\in\mathbb{R}}\int_{B_2(x)}\left| |\nabla f|^2-m\right|\mathrm{d}\mathfrak{m}+C(K,N)\int_{B_2(x)}(\Delta f)^2\,\mathrm{d}\mathfrak{m}. \end{equation} Let $\hat{m}$ be the real number such that \[ \int_{B_2(x)}\left| \left|\nabla f\right|^2-\hat{m}\right|\mathrm{d}\mathfrak{m}=\inf_{m\in\mathbb{R}}\int_{B_2(x)}\left| \left|\nabla f\right|^2-m\right|\mathrm{d}\mathfrak{m}. 
\] Then the harmonicity of $\tilde{f}$ gives that \[ \begin{aligned} &\inf_{m\in\mathbb{R}}\int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-m\right|\,\mathrm{d}\mathfrak{m}\leqslant \int_{B_2(x)}\left| \left|\nabla \tilde{f}\right|^2-\hat{m}\right|\,\mathrm{d}\mathfrak{m}\\ \leqslant &\int_{B_2(x)}\left| \left|\nabla f\right|^2-\hat{m}\right|\,\mathrm{d}\mathfrak{m}+\int_{B_2(x)}\left| \left|\nabla f\right|^2-\left|\nabla\tilde{f}\right|^2\right| \,\mathrm{d}\mathfrak{m}\\ =&\int_{B_2(x)}\left| \left|\nabla f\right|^2-\hat{m}\right|\,\mathrm{d}\mathfrak{m}+\int_{B_2(x)}\left|\nabla (f-\tilde{f})\right|^2\,\mathrm{d}\mathfrak{m}, \end{aligned} \] which together with (\ref{eeeeqn2.6}) yields (\ref{eeeeqn2.10}). Finally, the conclusion is derived from (\ref{eeeeeeeeeeeqn2.7})-(\ref{eeeeqn2.10}). \end{proof} \begin{thm}[Canonical Riemannian metric \cite{AHPT21,GP22}] There exists a unique Riemannian metric $\mathrm{g}\in L^\infty((T^\ast)^{\otimes 2}(X,\mathsf{d},\mathfrak{m}))$ such that $|\mathrm{g}|_{\mathsf{HS}}=\sqrt{n}$ $\mathfrak{m}$-a.e. and that for any $f_1, f_2\in H^{1,2} $ it holds that \[ \mathrm{g}(\nabla f_1,\nabla f_2)=\langle \nabla f_1,\nabla f_2\rangle,\ \mathfrak{m}\text{-a.e.} \] \end{thm} We are now prepared to present a recent result in \cite[Thm. 1.10]{H23}, which concludes this subsection. The theorem establishes that compact non-collapsed RCD$(K,n)$ spaces equipped with isometrically immersing eigenmaps are, in fact, smooth. \begin{thm}\label{H22} Let $(X,\mathsf{d},\mathscr{H}^n)$ be a compact non-collapsed $\mathrm{RCD}(K,n)$ space and $g$ be the canonical Riemannian metric on it. If there exist a finite number of non-constant eigenfunctions $\{\phi_i\}_{i=1}^m$ such that \[ \mathrm{g}=\sum\limits_{i=1}^m d\phi_i\otimes d\phi_i , \] then $(X,\mathsf{d})$ is isometric to an $n$-dimensional smooth closed Riemannian manifold $\left(M^n,\mathrm{g}\right)$. 
\end{thm} This theorem played an important role in proving the smoothness of compact isometrically heat kernel immersing RCD$(K,N)$ spaces as demonstrated in \cite{H23}. By ``isometrically heat kernel immersing'' we mean that there exists a function $t\mapsto c(t)$ on $(0,\infty)$ such that for each $t > 0$, the function $x \mapsto (y \mapsto \sqrt{c(t)}\, \rho(x,y,t))$ provides an isometric immersion into $L^2$. \section{Symmetric and harmonic RCD$(K,N)$ spaces}\label{sec3} The first part of this section is aimed at proving Theorem \ref{mainthm1}. To achieve this, we require several key estimates concerning eigenvalues and eigenfunctions. For further details and background, refer to \cite[Appendix]{AHPT21} and \cite{ZZ19}. \begin{prop}\label{prop3.1} Let $(X, \mathsf{d}, \mathfrak{m})$ be a compact $\mathrm{RCD}(K, N)$ space with $\mathfrak{m}(X)=1$. Let $0=\lambda_0<\lambda_1\leqslant \lambda_2\leqslant \cdots \rightarrow \infty$ be all its eigenvalues counted with multiplicities and let $\{\varphi_i\}_{i=0}^\infty$ be the corresponding eigenfunctions, which form an $L^2$-orthonormal basis. Then, there exists a constant $C= C(K,N,\mathrm{diam}(X, \mathsf{d}))$, such that for all $i \geqslant 1$ we have \[ \|\varphi_i\|_{L^\infty}\leqslant C\,\lambda_i^{\frac{N}{4}},\ \| |\nabla \varphi_i |\|_{L^\infty}\leqslant C\,\lambda_i^{\frac{N+2}{4}},\ C^{-1} \,i^{\frac{2}{N}}\leqslant \lambda_i\leqslant C\,i^2. \] \end{prop} In the rest of this section we use the notation of (\ref{eqn1.2}). We let $\mu_i$ be the eigenvalue corresponding to $\phi_i$ ($i=1,\ldots,m$) and use $C$ to denote the constant \[ C=C\left(K,m,n,\mathrm{diam}({X},\mathsf{d}),\mathscr{H}^n({X}),\mu_1,\ldots,\mu_m,\left\|\phi_1\right\|_{L^2},\ldots,\left\|\phi_m\right\|_{L^2}\right). \] \begin{proof}[Proof of Theorem \ref{mainthm1}] Let us first show that \begin{equation}\label{eqn3.1} \left|\frac{F(0)-F(\mathsf{d}\left(x,y\right))}{\mathsf{d}^2\left(x,y\right)}\right|\leqslant C,\ \forall x,y\in X.
\end{equation} Letting $x=y$ in (\ref{eqn1.2}) we know \begin{equation}\label{eqn3.2} \sum\limits_{i=1}^m {\phi_i}^2(x)=F(0),\ \forall x\in X. \end{equation} Therefore, it clearly follows from (\ref{eqn1.2}) and (\ref{eqn3.2}) that \begin{equation}\label{eqn3.3} \sum\limits_{i=1}^m (\phi_i(x)-\phi_i(y))^2=2\,\big(F(0)-F(\mathsf{d}\left(x,y\right))\big),\ \forall x,y\in X. \end{equation} As a result, we have \[ \frac{F(0)-F\left(\mathsf{d}\left(x,y\right)\right)}{\mathsf{d}^2\left(x,y\right)}=\frac{1}{2}\sum\limits_{i=1}^m \left(\frac{\phi_i(x)-\phi_i(y)}{\mathsf{d}\left(x,y\right)}\right)^2,\ \forall x,y\in X, \] which together with Proposition \ref{prop3.1} yields (\ref{eqn3.1}). From now on let us take an arbitrary but fixed \[ x_0\in \mathcal{R}_n(X)\cap \bigcap\limits_{i,j=1}^m \mathrm{Leb}(\langle\nabla \phi_i,\nabla\phi_j\rangle), \] where $\mathrm{Leb}(\langle\nabla \phi_i,\nabla\phi_j\rangle)$ denotes the set of Lebesgue points of the function $\langle\nabla \phi_i,\nabla\phi_j\rangle$. We first claim that \begin{equation}\label{eqn3.4} \frac{1}{2} \liminf_{r\rightarrow 0} \frac{F(0)-F(r)}{r^2}>0. \end{equation} We argue by contradiction. Assume there exists a sequence $\{r_l\}\subset (0,\infty)$ such that $r_l\rightarrow 0$, and that ${r_l}^{-2}(F(0)-F(r_l))\to 0$ as $l\to \infty$. After passing to a subsequence, we have \[ (X_l,\mathsf{d}_l,\mathscr{H}^n,x_0):=\left(X,\frac{1}{r_l}\mathsf{d},\frac{1}{r_l^n}\mathscr{H}^n_\mathsf{d},x_0\right)\xrightarrow{\mathrm{pmGH}} \left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n},\mathscr{L}^n,0_n\right). \] For convenience, we denote the gradient and the Laplacian on $(X_l,\mathsf{d}_l,\mathscr{H}^n)$ by $\nabla_l,\Delta_l$ respectively. Additionally, we use $B_r^l(x_0)$ to represent the ball $B_r^{X_l}(x_0)$. For each $i\in \{1,\cdots,m\}$ and $l\in \mathbb{N}$, let us set $\varphi_{i,l}:=(\phi_i-\phi_i(x_0))/r_l$.
Then we have \[ |\nabla_l \,\varphi_{i,l}|=|\nabla \phi_i|,\ \Delta_l\,\varphi_{i,l} =r_l\,\Delta\, \phi_i=-r_l\, \mu_i \,\phi_i. \] Particularly, the forthcoming estimates are straightforwardly extracted from Proposition \ref{prop3.1}: \begin{equation}\label{eqn3.5} \left\|\left|\nabla_l\, \varphi_{i,l}\right|\right\|_{L^\infty(X_l)}\leqslant C;\ \ \left\|\Delta_l \,\varphi_{i,l}\right\|_{L^\infty(X_l)}\leqslant C\,r_l\rightarrow 0\ \ \text{as }l\rightarrow \infty. \end{equation} According to the Arzel\`{a}-Ascoli theorem (Theorem \ref{AAthm}), for any $R>0$, and for each $i=1,\ldots,m$, the sequence $\{\varphi_{i,l}\}_l$ uniformly converges to a Lipschitz continuous function $\varphi_i$ on $B_R(0_n)\subset \mathbb{R}^n$. Furthermore, (\ref{eqn3.5}) and Theorem \ref{AH18} indicate that each $\varphi_i$ is a harmonic function satisfying $|\nabla^{\mathbb{R}^n} \varphi_i|\leqslant C$, and thus is a linear function on $\mathbb{R}^n$. Therefore, for each $i,j$, since $x_0\in \mathrm{Leb}(\langle\nabla \phi_i,\nabla\phi_j\rangle)$, we have \[ \begin{aligned} \langle\nabla \phi_i,\nabla\phi_j\rangle(x_0)&=\lim_{r\rightarrow 0}\frac{1}{\mathscr{H}^n(B_r(x_0))}\int_{B_r(x_0)}\langle\nabla \phi_i,\nabla \phi_j\rangle\mathop{\mathrm{d}\mathscr{H}^n}\\ \ &=\frac{1}{\omega_n}\lim_{l\rightarrow \infty}\int_{B^l_1(x_0)}\langle\nabla_l\, \varphi_{i,l},\nabla_l\, \varphi_{j,l}\rangle \mathop{\mathrm{d}\mathscr{H}^n}=\langle\nabla^{\mathbb{R}^n} \varphi_i,\nabla^{\mathbb{R}^n} \varphi_j\rangle.
\end{aligned} \] In particular, for any $y\in \partial B_1(0_n)\subset \mathbb{R}^n$, by taking $y_l\in \partial B^l_1(x_0)=\partial B_{r_l}(x_0)$ such that $y_l\rightarrow y$, we observe from (\ref{eqn3.3}) and the fact $\mathsf{d}_l\left(y_l,x_0\right)\rightarrow \mathsf{d}_{\mathbb{R}^n}(y,0_n)$ that \begin{equation}\label{eqn3.7} \begin{aligned} \lim_{l\rightarrow \infty} \frac{F(0)-F\left(\mathsf{d}\left(x_0,y_l\right)\right)}{\mathsf{d}^2\left(x_0,y_l\right)}&=\frac{1}{2}\lim_{l\rightarrow \infty}\left(\frac{r_l}{\mathsf{d}\left(x_0,y_l\right)}\right)^2\sum\limits_{i=1}^m \left(\frac{\phi_i(x_0)-\phi_i(y_l)}{r_l}\right)^2\\ \ &=\frac{1}{2}\sum\limits_{i=1}^m {\varphi_i}^2(y)=0. \end{aligned} \end{equation} As a result, $|\nabla^{\mathbb{R}^n} \varphi_1|\equiv|\nabla \phi_1|(x_0)=0$. This is impossible since one can in addition choose $x_0$ to be a point of density of the set $\{y\in X: |\nabla \phi_1|(y)\neq 0\}$. This completes the proof of (\ref{eqn3.4}). Additionally, from the above discussion we know $\sum_i |\nabla \phi_i|^2(x)>0$ for $\mathscr{H}^n$-a.e. $x\in X$. Next let us consider an arbitrary sequence $\{r_l\}$ with $r_l\to 0$ such that the following limit exists. \[ 2 \,\lim_{l\to\infty } \frac{F(0)-F(r_l)}{{r_l}^2}=c>0. \] Then according to the linearity of $\varphi_i$ and (\ref{eqn3.7}), on $\mathbb{R}^n$ it holds that $\sum_i {\varphi_i}^2(y)=c\,{|y|}^2$. It is easy to see that $\sum_i |\nabla \phi_i|^2(x_0)=\sum_i |\nabla^{\mathbb{R}^n} \varphi_i|^2\equiv nc$. Hence we deduce that \begin{equation}\label{eqn33.8} \lim_{r \downarrow 0} \frac{F(0)-F(r)}{r^2}=\frac{1}{2n}\sum\limits_{i=1}^m |\nabla \phi_i|^2(x_0)=\frac{c}{2}>0, \end{equation} because the sequence $\{r_l\}$ is taken to be arbitrary.
Finally, we assert that \begin{equation}\label{eqn3.8} \left|\sum\limits_{i=1}^m d\,\phi_i\otimes d\,\phi_i-c\,\mathrm{g}\right|_\mathsf{HS}=0,\ \mathscr{H}^n\text{-a.e.}, \end{equation} where $c$ is the constant obtained in (\ref{eqn33.8}), and $\mathrm{g}$ is the canonical Riemannian metric on $(X,\mathsf{d},\mathscr{H}^n)$. Now let $x_0$ be an arbitrary but fixed point in $\mathrm{Leb}\left(\left|c\,\mathrm{g}-\sum\limits_{i=1}^m d\,\phi_i\otimes d\,\phi_i\right|_{\mathsf{HS}}^2\right)\cap \mathcal{R}_n$ and use the notations as above. Then since on $\mathbb{R}^n$ it holds that $\sum_i {\varphi_i}^2(y)=c\,{|y|}^2$, by applying the linearity of each $\varphi_i$ and taking partial derivatives we know \[ \sum_{i=1}^m\frac{\partial \varphi_i }{\partial y^\alpha}\,\frac{\partial \varphi_i }{ \partial y^\beta}=c\,\delta_{\alpha \beta}, \] which implies that $c\,\mathrm{g}_{\mathbb{R}^n}=\sum_i d\,\varphi_i\otimes d\,\varphi_i$. For each $i$, the $H^{1,2}$-strong convergence of the sequence $\{\varphi_{i,l}\}_l$ on any $B_R(0_n)\subset\mathbb{R}^n$ (Theorem \ref{AH18}) as well as (\ref{eqn3.5}) implies that \[ \begin{aligned} &\left|c\,\mathrm{g}-\sum\limits_{i=1}^m d\,\phi_i\otimes d\,\phi_i\right|_\mathsf{HS}^2(x_0)=\lim_{r\rightarrow 0}\frac{1}{\mathscr{H}^n(B_r(x_0))}\int_{B_r(x_0)}\left|c\,\mathrm{g}-\sum\limits_{i=1}^m d\,\phi_i\otimes d\,\phi_i\right|_\mathsf{HS}^2\mathrm{d}\mathscr{H}^n\\ =\ &\frac{1}{\omega_n}\lim_{l\rightarrow\infty}\int_{B_1^l(x_0)}\left|c\,\mathrm{g}_{X_l}-\sum\limits_{i=1}^m d\,\varphi_{i,l}\otimes d\,\varphi_{i,l}\right|_\mathsf{HS}^2\mathrm{d}\mathscr{H}^n\\ =\ &\frac{1}{\omega_n}\int_{B_1(0_n)}\left|c\,\mathrm{g}_{\mathbb{R}^n}-\sum\limits_{i=1}^m d\,\varphi_i\otimes d\,\varphi_i\right|_\mathsf{HS}^2\mathrm{d}\mathscr{L}^n=0. \end{aligned} \] Since $x_0\in\mathrm{Leb}\left(\left|c\, \mathrm{g}-\sum\limits_{i=1}^m d\,\phi_i\otimes d\,\phi_i\right|_{\mathsf{HS}}^2\right)\cap \mathcal{R}_n$ is arbitrary, we have completed the proof of (\ref{eqn3.8}).
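To make the role of (\ref{eqn3.8}) in the final step explicit, note that since $c>0$, the identity can be rewritten in terms of the rescaled eigenfunctions $\phi_i/\sqrt{c}$, which are still non-constant eigenfunctions:
\[
\mathrm{g}=\sum\limits_{i=1}^m d\left(\frac{\phi_i}{\sqrt{c}}\right)\otimes d\left(\frac{\phi_i}{\sqrt{c}}\right),\ \ \mathscr{H}^n\text{-a.e.},
\]
which is exactly the isometrically immersing eigenmap condition assumed in Theorem \ref{H22}.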
It now remains to apply Theorems \ref{BS} and \ref{H22} to reach our conclusion. \end{proof} \begin{remark} In fact, the condition of radial symmetry in Theorem \ref{mainthm1} can be relaxed to the existence of a real-valued function $F : [0,\infty)\rightarrow\mathbb{R}$ and non-constant eigenfunctions $\{\phi_i\}_{i=1}^m$ such that for any $x\in X$ there exists $\epsilon_x>0$ such that \[ \sum_{i=1}^m \phi_i(x)\phi_i(y) = F\left(\mathsf{d}\left(x, y\right)\right), \forall y\in B_{\epsilon_x}(x). \] \end{remark} Next, we focus on strongly harmonic RCD$(K,N)$ spaces. Specifically, we consider the RCD$(K,N)$ space $(X,\mathsf{d},\mathfrak{m})$ with $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)=n$ for the remainder of this section. Let us recall the definition of a strongly harmonic RCD$(K,N)$ space: there exists a real-valued function $H:[0,\infty)\times [0,\infty)\rightarrow \mathbb{R}$ such that the heat kernel $\rho$ of $(X,\mathsf{d},\mathfrak{m})$ satisfies \begin{equation}\label{eeqn4.1} \rho(x,y,t)=H\left(\mathsf{d}\left(x,y\right),t\right),\ \forall x,y\in X,\ \forall t>0. \end{equation} We begin with the proof of Theorem \ref{mainthm2}, which parallels the proof of \cite[Thm. 1.7]{H23}. \begin{proof}[Proof of Theorem \ref{mainthm2}] We first claim that there exists a constant $\tilde{c}>0$ such that \begin{equation}\label{eqn4.1} \lim_{t\downarrow 0} t^n H(rt,t^2)=\tilde{c}\mathop{(4\pi )^{-\frac{n}{2}}} \exp\left(-\frac{r^2}{4}\right), \ \forall r>0. \end{equation} Take an arbitrary but fixed $x_0\in \mathcal{R}_n^\ast(X)$. For any $\{r_i\}\subset (0,\infty)$ with $r_i\rightarrow 0$, after passing to a subsequence, consider the following pmGH convergence. \[ (X_i,\mathsf{d}_i,\mathfrak{m}_i,x_0):=\left(X,\frac{1}{r_i}\mathsf{d},\frac{\mathfrak{m}}{\mathfrak{m}(B_{r_i}(x_0))},x_0\right)\xrightarrow{\mathrm{pmGH}} \left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n},\frac{1}{\omega_n}\mathscr{L}^n,0_n\right).
\] On each $(X_i,\mathsf{d}_i,\mathfrak{m}_i)$, the heat kernel $\rho_i$ satisfies that \begin{equation}\label{eeeqn4.2} \rho_i(x_i,y_i,1)=\mathfrak{m}(B_{r_i}(x_0))\,\rho(x_i,y_i,r_i^2), \ \forall x_i,y_i\in X_i. \end{equation} Given any $r>0$, for each $i$ we can take $x_i,y_i\in X_i$ such that $\mathsf{d}_i\left(x_i,y_i\right)=r$ and $\mathsf{d}_i\left(x_i,x_0\right)\leqslant 2r$. Then after passing to a subsequence, we may assume that $X_i\ni x_i\rightarrow x\in \mathbb{R}^n$, $X_i\ni y_i\rightarrow y\in \mathbb{R}^n$. Due to Theorem \ref{pchk}, we have \begin{equation}\label{eeeqn4.3} \lim_{i\rightarrow \infty}\rho_i(x_i,y_i,1)=(4\pi )^{-\frac{n}{2}}\exp\left(-\frac{|x-y|^2}{4}\right). \end{equation} Combining (\ref{eeqn4.1}), (\ref{eeeqn4.2}) and (\ref{eeeqn4.3}) then gives \begin{equation}\label{eqnaaa5.13} \begin{aligned} \ &\vartheta_n(X,\mathsf{d},\mathfrak{m})(x_0)\lim_{i\rightarrow \infty} r_i^{n} H(r_i|x-y|,r_i^2)\\ =\ &\lim_{i\rightarrow \infty}\mathfrak{m}(B_{r_i}(x_0))H\left(r_i|x-y|,r_i^2\right)=(4\pi )^{-\frac{n}{2}}\exp\left(-\frac{|x-y|^2}{4}\right). \end{aligned} \end{equation} Since the aforementioned equality does not depend on the choice of the sequence $r_i\downarrow 0$, and the limit $\lim\limits_{t\downarrow 0} t^n H(rt,t^2)$ does not depend on the choice of $x_0\in \mathcal{R}_n^\ast(X)$, we thereby complete the proof of (\ref{eqn4.1}). Indeed, we have also proved that \[ \vartheta_n(X,\mathsf{d},\mathfrak{m})(x)=\tilde{c}^{-1},\ \mathfrak{m}\text{-a.e. }x\in \mathcal{R}_n^\ast(X). \] Moreover, if we initially take $x,y\in \mathbb{R}^n$ and then choose sequences $\{x_i\}$, $\{y_i\}$ such that $X_i\ni x_i\rightarrow x\in \mathbb{R}^n$, $X_i\ni y_i\rightarrow y\in \mathbb{R}^n$ in the above argument, we can find that \[ \frac{\mathsf{d}\left(x_i,y_i\right)}{r_i}=\mathsf{d}_i\left(x_i,y_i\right)\rightarrow |x-y|, \ \text{as}\ i\rightarrow \infty.
\] As a result, by following the calculations in (\ref{eqnaaa5.13}), (\ref{eqn4.1}) can be refined to \begin{equation}\label{717aaaaaa} \lim_{t\downarrow 0} t^n H(rt+o(t),t^2)=\tilde{c}\mathop{(4\pi )^{-\frac{n}{2}}} \exp\left(-\frac{r^2}{4}\right),\ \forall r>0. \end{equation} Now fix an arbitrary point $x_0\in X$. For any $r_i\downarrow 0$, after passing to a subsequence, there exists a pointed RCD$(0,n)$ space $\left(X_\infty,\mathsf{d}_\infty,\mathfrak{m}_\infty,x_\infty\right)$ such that \[ (X_i,\mathsf{d}_i,\mathfrak{m}_i,x_0):=\left(X,\frac{1}{r_i}\mathsf{d},\frac{\mathfrak{m}}{\mathfrak{m}(B_{r_i}(x_0))},x_0\right)\xrightarrow{\mathrm{pmGH}} \left(X_\infty,\mathsf{d}_\infty,\mathfrak{m}_\infty,x_\infty\right). \] Let $ \rho_\infty$ be the heat kernel on $(X_\infty,\mathsf{d}_\infty,\mathfrak{m}_\infty)$. For any $z_\infty,w_\infty \in X_\infty$, we can select two sequences $\{z_i\}$ and $\{w_i\}$ such that $X_i\ni z_i\rightarrow z_\infty$, $X_i\ni w_i\rightarrow w_\infty$. Similarly, we can show that the heat kernel on the limit space satisfies that \begin{equation}\label{eeqn4.5} \begin{aligned} \rho_\infty(z_\infty,w_\infty,1)&=\lim_{i\rightarrow \infty}\rho_i(z_i,w_i,1)\\ &=\lim_{i\rightarrow \infty}\mathfrak{m}(B_{r_i}(x_0))\mathop{\rho(z_i,w_i,r_i^2)}\\ &=\lim_{i\rightarrow \infty}\mathfrak{m}(B_{r_i}(x_0))\mathop{H(r_i\,\mathsf{d}_i\left(z_i,w_i\right),r_i^2).} \end{aligned} \end{equation} Owing to Theorem \ref{JLZ} and (\ref{717aaaaaa}), by letting $z_\infty=x_\infty$ and selecting $w_\infty\in \partial B_1(x_\infty)$, we observe from (\ref{eeqn4.5}) that \[ (\tilde{c}\mathop{C})^{-1}\leqslant\lim_{i\rightarrow \infty}\frac{\mathfrak{m}(B_{r_i}(x_0))}{r_i^n}\leqslant {\tilde{c}}^{-1} \mathop{C}, \] where $C=C(K,N)$ is a positive constant.
Specifically, we know \[ (\tilde{c}\mathop{C})^{-1}\leqslant\liminf_{r\rightarrow 0}\frac{\mathfrak{m}(B_{r}(x_0))}{r^n}\leqslant \limsup_{r\rightarrow 0}\frac{\mathfrak{m}(B_{r}(x_0))}{r^n}\leqslant {\tilde{c}}^{-1} \mathop{C}. \] Therefore, applying \cite[Thm. 2.4.3]{AT04} implies that $\mathscr{H}^n \ll \mathfrak{m}$. This result, combined with Remark \ref{AHT}, then shows that $\mathfrak{m}=\tilde{c}^{-1}\mathscr{H}^n$. Ultimately, it follows from \cite[Thm. 1.5 and 2.22]{BGHZ23} that $(X,\mathsf{d},\mathscr{H}^n)$ is a non-collapsed RCD$(K,n)$ space. \end{proof} \begin{proof}[Proof of Corollary \ref{cor1.5}] According to Theorem \ref{mainthm2}, $\mathfrak{m}=c\mathscr{H}^n$ for some $c>0$, and $(X,\mathsf{d},\mathscr{H}^n)$ is an RCD$(K,n)$ space. Since the space can be rescaled, without loss of generality we may assume that $\mathscr{H}^n(X)=1$. To conclude, it suffices to verify that $(X,\mathsf{d},\mathscr{H}^n)$ is a radial symmetric RCD$(K,n)$ space. Recall that for any $x,y\in X$ and any $t>0$, our assumption implies that \[ \sum\limits_{i=0}^\infty e^{-\mu_i t}\phi_i(x)\phi_i(y)=\rho(x,y,t)=H\left(\mathsf{d}\left(x,y\right),t\right). \] Let $\phi_1,\ldots,\phi_m$ be an $L^2$-orthonormal basis of the eigenspace corresponding to the eigenvalue $\mu_1$. Given any two points $x,y\in X$, for any $t>0$ we calculate that \begin{equation}\label{eeqn4.3} \sum\limits_{i=1}^m\phi_i(x)\phi_i(y)=e^{\mu_1 t}(H(\mathsf{d}(x,y),t)-1)-\sum\limits_{i=m+1}^\infty e^{(\mu_1-\mu_i) t}\phi_i(x)\phi_i(y). \end{equation} Let $N_0$ be an integer such that \[ \mu_i\geqslant 2\, C_1(K,n)\mathop{i^{\frac{2}{n}}}\geqslant 2\mu_1,\ \forall i\geqslant N_0.
\] Then the second term of the right-hand side of (\ref{eeqn4.3}) satisfies that \begin{equation}\label{eeqn4.4} \begin{aligned} \left|\sum\limits_{i=m+1}^\infty e^{(\mu_1-\mu_i)t}\phi_i(x)\phi_i(y)\right|&\leqslant C_2(K,N) \sum\limits_{i=m+1}^\infty e^{(\mu_1-\mu_i )t}\mathop{i^{\frac{n}{2}}}\\ \ &=C_2(K,N)\left(\sum\limits_{i=m+1}^{N_0} e^{(\mu_1-\mu_i) t}\mathop{i^{\frac{n}{2}}}+\sum\limits_{i=N_0+1}^{\infty} e^{(\mu_1-\mu_i) t}\mathop{i^{\frac{n}{2}}}\right)\\ \ &\leqslant C_2(K,N)\left(\sum\limits_{i=m+1}^{N_0} e^{(\mu_1-\mu_i) t}\mathop{i^{\frac{n}{2}}}+\sum\limits_{i=N_0+1}^{\infty} e^{-C_1(K,n)\,i^{\frac{2}{n}}t}\mathop{i^{\frac{n}{2}}}\right)\rightarrow 0\ \ \text{as }t\rightarrow \infty. \end{aligned} \end{equation} Finally it follows from (\ref{eeqn4.3}) and (\ref{eeqn4.4}) that \[ \sum\limits_{i=1}^m\phi_i(x)\phi_i(y)=\lim_{t\rightarrow \infty} e^{\mu_1 t}(H\left(\mathsf{d}\left(x,y\right),t\right)-1):=F\left(\mathsf{d}\left(x,y\right)\right), \] which shows that $(X,\mathsf{d},\mathscr{H}^n)$ is a radial symmetric RCD$(K,n)$ space. This completes the proof. \end{proof} \section{Smoothness Theorem \ref{thm1.11}}\label{sec4} The core of our analysis in this section is the proof of Theorem \ref{thm1.11}. We first present several useful results. \subsection{Co-area formula and disintegration formula on RCD spaces} In this section, we will frequently utilize the coarea formula. Let us review the version of the statement that we will require. For a more detailed discussion, refer to \cite[Prop. 4.2]{M03}, and for the representation of the perimeter measure using Hausdorff measures, see \cite{ABS19, BPS23}. Furthermore, we refer, for instance, to \cite[Rmk. 3.5]{GH16} for an explanation of how the total variation measure of a Lipschitz function coincides with the product of its weak upper gradient and the reference measure. \begin{thm}\label{coareafor}Let $(X, \mathsf{d}, \mathscr{H}^n)$ be a non-collapsed $\mathrm{RCD}(K, n)$ space. Let $v$ be a Lipschitz function on $X$.
Then $\{v > t\}$ has finite perimeter for $\mathscr{L}^1$-a.e. $t> 0$, and for every Borel function $f: X \rightarrow [0, \infty]$ it holds that \[ \int_X f \,|\nabla v| \, \mathrm{d}\mathscr{H}^n = \int_{-\infty}^\infty \left( \int_{\{v=t\}} f \, \mathrm{d}\mathscr{H}^{n-1} \right) \mathrm{d}t. \] \end{thm} We also need some facts concerning the disintegration formula of the distance function $\mathsf{d}_x:=\mathsf{d}\left(x,\cdot\right)$ on a compact RCD$(K,N)$ space $(X,\mathsf{d},\mathfrak{m})$, where $x\in X$ is a fixed point. We refer, for instance, to \cite[Sec. 3]{CM20}. It is worth pointing out the following geodesically non-branching property in the RCD setting. \begin{thm}[{\cite[Thm. 1.3]{D20}}]\label{RCDnb} $\mathrm{RCD}(K,N)$ spaces are non-branching. That is, there do not exist two distinct unit speed geodesics $\gamma_1\neq \gamma_2$, both parameterized on the unit interval, such that there exists $t\in (0,1)$ with \[ \gamma_1(s)=\gamma_2(s),\ \ \forall s\in [0,t]. \] \end{thm} For the remainder of this subsection, we assume that $(X,\mathsf{d},\mathfrak{m})$ is a compact RCD$(K,N)$ space with $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)\geqslant 2$. Let us fix a point $x\in X$ and define the set $\Gamma$ as follows: \[ \Gamma:=\{(y,z)\in X\times X:\mathsf{d}_x(y)-\mathsf{d}_x(z)=\mathsf{d}\left(y,z\right)\}. \] The transpose of $\Gamma$ is given by $\Gamma^{-1}$, defined as $\Gamma^{-1}:=\{(y,z)\in X\times X: (z,y)\in \Gamma\}$. We then define the transport relation $R$ as $R:=\Gamma\cup \Gamma^{-1}$. Furthermore, we denote by $\Gamma(y)=\{z\in X:(y,z)\in \Gamma\}$ the section of $\Gamma$ through $y$ in the first coordinate, and similarly for $R(y)$ (through either coordinate by symmetry). Subsequently, we introduce the forward branching point set, following the approach outlined in \cite{C14}: \[ {\mathcal{E}=\{w\in X:\exists\, (y,z)\in \left(\Gamma(w)\times \Gamma(w)\right) \setminus R\}.
} \] We note here that by combining Theorem \ref{RCDnb} with our assumption that $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)\geqslant 2$, the backward branching point set in our context is precisely $\{x\}$. Let us consider the non-branched transport set $\mathcal{T}:=X\setminus (\mathcal{E}\cup\{x\})$; here $\mathcal{E}\cup \{x\}$ is a $\sigma$-compact set with $\mathfrak{m}$-measure zero by \cite[Lem. 4.3, Prop. 4.5]{C14}. We also define the non-branched transport relation $\mathcal{R}:=R\cap (\mathcal{T}\times \mathcal{T})$. According to \cite[Thm. 4.6]{C14} (see also \cite{BC13}), $\mathcal{R}$ is an equivalence relation over $\mathcal{T}$, and for any $y\in \mathcal{T}$, $R(y)\subset (X,\mathsf{d})$ is (the image of) a unit speed geodesic defined on a closed interval, starting from $x$. As a result, the non-branched transport set $\mathcal{T}$ is partitioned into a disjoint family of equivalence classes, denoted by $\{X_\alpha\}_{\alpha\in Q}$, where $Q$ is an index set. It is natural to introduce the quotient map $\mathfrak{Q}:\mathcal{T}\rightarrow Q$ induced by this partition: \[ \alpha=\mathfrak{Q}(y) \iff y\in X_\alpha. \] Furthermore, we can equip $Q$ with the quotient $\sigma$-algebra $\mathscr{Q}$, which is derived from the $\sigma$-algebra $\mathscr{X}$ over $X$ consisting of $\mathfrak{m}$-measurable sets. This quotient $\sigma$-algebra is defined by \[ A\in \mathscr{Q}\iff \mathfrak{Q}^{-1}(A)\in \mathscr{X}, \] and it is the finest $\sigma$-algebra on $Q$ for which $\mathfrak{Q}$ is measurable. Owing to \cite[Thm. 5.5]{C14}, for RCD$(K,N)$ spaces we have $\mathfrak{m}(X\setminus \mathcal{T})=0$. After normalization, we can introduce the quotient measure $\mathfrak{q}$ as $\mathfrak{q}:=\mathfrak{Q}_\sharp \left(\mathfrak{m}(X)^{-1}\mathfrak{m}\right)$, meaning that $\mathfrak{q}$ is obtained by pushing forward the normalized measure $\mathfrak{m}(X)^{-1}\mathfrak{m}$ via the quotient map $\mathfrak{Q}$. In particular, $\mathfrak{Q}_\sharp \mathfrak{m}\ll \mathfrak{q}$.
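To fix the ideas, let us illustrate this disintegration in a standard model case (included here only for orientation, and not used in the sequel): take $X=\mathbb{S}^n$ with its round metric ($n\geqslant 2$), $\mathfrak{m}=\mathscr{H}^n$, and fix $x\in \mathbb{S}^n$. Then $\mathcal{T}=\mathbb{S}^n\setminus \{x,-x\}$, each equivalence class $X_\alpha$ is an open meridian from $x$ to its antipode $-x$, and $Q$ may be identified with the equator $\partial B_{\pi/2}(x)\cong \mathbb{S}^{n-1}$, with $\mathfrak{q}$ the normalized $\mathscr{H}^{n-1}$-measure on it. Writing $t=\mathsf{d}_x$ along each meridian, polar coordinates give
\[
\mathfrak{m}_\alpha=h_\alpha\,\mathscr{H}^1\llcorner_{X_\alpha},\qquad h_\alpha(t)=\mathscr{H}^{n-1}\left(\mathbb{S}^{n-1}\right)\sin^{n-1}(t),\ \ t\in (0,\pi),
\]
and each $h_\alpha$ is a $\mathrm{CD}(n-1,n)$ density on $(0,\pi)$, consistent with the fact that $\mathbb{S}^n$ is an $\mathrm{RCD}(n-1,n)$ space.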
Indeed, there is a rather explicit construction of the quotient map $\mathfrak{Q}$ which embeds $Q$ as a subset of $X$ and ensures that the map itself is $\mathcal{A}$-measurable, where $\mathcal{A}$ is the $\sigma$-algebra generated by the analytic sets of $X$, as detailed in \cite{CM17} and \cite[Lem. 3.8]{CM201}. Thus, by combining the classical disintegration theorem \cite[Prop. 452F]{F03} with recent localization results (\cite[Thm. 9.5]{BC13} and \cite[Thm. 5.1]{CM17b}), we can summarize the above findings in the following theorem. \begin{thm}\label{thmdisinte} Let $(X,\mathsf{d},\mathfrak{m})$ be a compact $\mathrm{RCD}(K,N)$ space with $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)\geqslant 2$ and let $x\in X$ be a fixed point. Then the measure $\mathfrak{m}$ restricted to the non-branched transport set $\mathcal{T}$ admits the following disintegration formula: \begin{equation}\label{disintefor} \mathfrak{m}\llcorner_\mathcal{T}=\int_Q \mathfrak{m}_\alpha \mathop{\mathrm{d}\mathfrak{q}( \alpha)}, \end{equation} where $\mathfrak{q}$ is a Borel probability measure over $Q\subset X$ such that $\mathfrak{Q}_\sharp \mathfrak{m}\ll \mathfrak{q}$. Moreover, $\{\mathfrak{m}_\alpha\}_{\alpha\in Q}$ satisfies the following properties. \begin{enumerate} \item For $\mathfrak{q}$-a.e. $\alpha\in Q$, there exists a non-negative Borel function $h_\alpha$ on $X_\alpha$ such that $\mathfrak{m}_\alpha=h_\alpha \mathscr{H}^1\llcorner_{X_\alpha}$ and that $h_\alpha$ is a $\mathrm{CD}(K,N)$ density on $X_\alpha$ (or equivalently, writing $\overline{X}_\alpha$ for the closure of $X_\alpha$, the space $\left(\overline{X}_\alpha,\mathsf{d},h_\alpha\mathscr{H}^1\right)$ verifies the $\mathrm{CD}(K,N)$ condition). \item For any $\mathfrak{m}$-measurable set $B$, the map $\alpha\mapsto \mathfrak{m}_\alpha(B)$ is $\mathfrak{q}$-measurable.
\item For any $\mathfrak{m}$-measurable set $B$ and $\mathfrak{q}$-measurable set $A$, the following disintegration formula holds: \[ \mathfrak{m}(B\cap \mathfrak{Q}^{-1}(A))=\int_A \int_{X_\alpha} \chi_B h_\alpha\mathop{\mathrm{d}\mathscr{H}^1}\mathop{\mathrm{d}\mathfrak{q}(\alpha).} \] \end{enumerate} \end{thm} \begin{remark}\label{rmk4.6aaaaaaaaa} In the first property of the family $\{\mathfrak{m}_\alpha\}_{\alpha\in Q}$, we omit the precise definition of CD$(K,N)$ metric measure spaces, referring instead to \cite{LV09,St06a,St06b} for details. We note that for all $\alpha$ such that $h_\alpha$ is a $\mathrm{CD}(K,N)$ density on $X_\alpha$, $h_\alpha$ is locally semi-concave in the interior of $X_\alpha$. That is, for every $z_0$ in the interior of $X_\alpha$, there exists a constant $C(z_0,\alpha)\in \mathbb{R}$ such that the function $z\mapsto h_\alpha(z)-C(z_0,\alpha)z^2$ is concave in a neighborhood (in $X_\alpha$) of $z_0$. Finally, we recall that if $\left([a,b],\mathsf{d}_{\mathbb{R}},h\mathscr{H}^1\right)$ is a CD$(K,N)$ metric measure space, then the derivative of $\log h$ is bounded as follows: \begin{equation}\label{eqn4.6cccc} -(N-1)\frac{V_{K,N}'(b-t)}{V_{K,N}(b-t)}\leqslant (\log h)'(t)\leqslant (N-1)\frac{V_{K,N}'(t-a)}{V_{K,N}(t-a)}, \end{equation} for every point of differentiability $t \in (a,b)$ of $h$. See, for instance \cite[Lem. A.9]{CM21}. \end{remark} \subsection{Proof of Theorem \ref{thm1.11}} For the reader's convenience, we recall Theorem \ref{thm1.11} as follows. \begin{thm}\label{thm5.1} Let $(X,\mathsf{d},\mathfrak{m})$ be a compact $\mathrm{RCD}(K,N)$ space with $\mathrm{dim}_{\mathsf{d},\mathfrak{m}}(X)=n$.
If in addition the measure of the intersection of two geodesic balls depends only on their radii and the distance between their centers, then \begin{enumerate} \item[$(1)$] $\mathfrak{m}=c\mathscr{H}^n$ for some constant $c>0$ and $(X,\mathsf{d},\mathscr{H}^n)$ is a non-collapsed $\mathrm{RCD}(K,n)$ space; \item[$(2)$] $(X,\mathsf{d})$ is isometric to an $n$-dimensional smooth closed Riemannian manifold $(M^n,\mathrm{g})$. \end{enumerate} \end{thm} \begin{remark}\label{rmk4.8aaa} By approximating with characteristic functions, one can show that the assumption of Theorem \ref{thm5.1} is equivalent to the following condition: the convolution of two radial kernel functions $H_1=h_1\circ \mathsf{d}$ and $H_2=h_2\circ \mathsf{d}$ is again a radial kernel function, provided that $h_1,h_2$ are $L^2$-integrable functions on $\mathbb{R}$ with compact support. This offers a generalization of Theorem \ref{harm3} within the RCD framework. \end{remark} \begin{proof}[Proof of Theorem \ref{thm5.1}(1)] From our assumption we deduce the existence of a function $\theta:[0,\mathrm{diam}(X,\mathsf{d})]\rightarrow [0,\infty)$ such that \[ \mathfrak{m}(B_r(x))=\theta(r),\ \forall x\in X,\ \forall r\in [0,\mathrm{diam}(X,\mathsf{d})]. \] Then Theorem \ref{BS}, in conjunction with Remark \ref{AHT}, indicates the existence of a constant $c>0$ such that \begin{equation}\label{vpxbbctt} \lim_{r\rightarrow 0}\frac{\mathfrak{m}(B_r(x))}{\omega_n r^n}=\lim_{r\rightarrow 0}\frac{\theta(r)}{\omega_n r^n}=c. \end{equation} Finally, following the argument as in the proof of Theorem \ref{mainthm2}, we complete the proof of statement (1). \end{proof} \begin{remark} A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be ball-homogeneous if there exists a function $\theta:[0,\infty)\rightarrow [0,\infty)$ such that \[ \mathfrak{m}(B_r(x))=\theta(r), \, \forall x\in X,\ \forall r\in [0,\infty). \] To the best of the author's knowledge, the following proposition represents the state of the art regarding the regularity of ball-homogeneous RCD$(K,N)$ spaces.
Nevertheless, it remains an open question whether a compact ball-homogeneous RCD$(K,N)$ space is isometric to a smooth closed Riemannian manifold, or even a Lipschitz manifold. \end{remark} \begin{prop} Assume $(X,\mathsf{d},\mathfrak{m})$ is a ball-homogeneous $\mathrm{RCD}(K,N)$ space. Then statement $(1)$ of Theorem \ref{thm5.1} also holds and $(X,\mathsf{d},\mathscr{H}^n)$ is a $C^\alpha$-manifold, where $\alpha$ is a constant depending only on $n$. \end{prop} \begin{proof} The first part can be derived using the same proof as that of statement (1) in Theorem \ref{thm5.1}. As for the second part, let us write $\left(X_r,\mathsf{d}_r,\mathscr{H}^n,x\right):=\left(X,\mathsf{d}/r,\mathscr{H}^n_{\mathsf{d}}/r^n,x\right)$ for convenience and claim that \begin{equation}\label{aaaaannnnnneqn5.6} \lim_{r\rightarrow 0}\mathop{ \sup_{x\in X}}\mathsf{d}_{\mathrm{pmGH}}\left(\left(X_r,\mathsf{d}_r,\mathscr{H}^n,x\right),\left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n},\mathscr{L}^n,0_n\right)\right)= 0. \end{equation} Assume the contrary, i.e. there exist $\epsilon>0$, $r_i\rightarrow 0$ and $\{x_i\}\subset X$ such that \begin{equation}\label{11122222222222222} \mathsf{d}_{\mathrm{pmGH}}\left(\left(X_{r_i},\mathsf{d}_{r_i},\mathscr{H}^n,x_i\right),\left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n},\mathscr{L}^n,0_n\right)\right)>\epsilon. \end{equation} After passing to a subsequence, there exists an RCD$(0,n)$ space $(Y,\mathsf{d}_Y,\mathfrak{m}_Y,y)$ such that \[ \left(X_{r_i},\mathsf{d}_{r_i},\mathscr{H}^n,x_i\right)\xrightarrow{\mathrm{pmGH}} \left(Y,\mathsf{d}_Y,\mathfrak{m}_Y,y\right). \] From (\ref{vpxbbctt}) we know that for any $X_{r_i} \ni z_i\rightarrow z\in Y$, it holds \[ \mathfrak{m}_{Y}\left(B^Y_r(z)\right)=\lim_{i\rightarrow \infty} \frac{\mathscr{H}_{\mathsf{d}}^n(B^{X}_{r_i r}(z_i))}{r_i^n}=c\ \omega_n r^n, \ \forall r>0.
\] As a consequence, following the same argument as in the proof of Theorem \ref{thm5.1} (1), we deduce that $\mathfrak{m}_Y=c\mathscr{H}^n$ and that $\left(Y,\mathsf{d}_Y,\mathscr{H}^n\right)$ is a non-collapsed RCD$(0,n)$ space. Consequently, by \cite[Thm. 1.6]{DG18}, it follows that $\left(Y,\mathsf{d}_Y\right)$ is isometric to $\left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n}\right)$. However, this leads to a contradiction with (\ref{11122222222222222}). By Theorem \ref{DG18cor1.7} we know $X=\mathcal{R}_n$. Finally, applying \cite[Thm. A.1.2]{ChCo1} completes the proof. \end{proof} In the remainder of this section, we focus on the second statement of Theorem \ref{thm5.1}. \begin{lem}\label{step1} Under the assumption of Theorem \ref{thm5.1}, any geodesic with length less than the diameter of $X$ is locally extendible. \end{lem} \begin{proof} Let $D=\mathrm{diam}(X,\mathsf{d})$ and choose $x_0,y_0\in X$ such that $\mathsf{d}\left(x_0,y_0\right)=D$. Consider the unit speed geodesic $\gamma_0:[0,D]\rightarrow X$ from $x_0$ to $y_0$. Note that for any $r,s,t>0$ satisfying $r+s+t<D$, since $\gamma_0(r+s+\frac{t}{2})\in B_{r+t}(\gamma_0(s))\setminus \overline{B}_{r+s}(x_0)$, which is a non-empty open set, we have $ \mathscr{H}^n\left( B_{r+t}(\gamma_0(s))\setminus B_{r+s}(x_0)\right)>0$. Given two arbitrary but fixed points $x,y\in X$ such that $e:=\mathsf{d}\left(x,y\right)<D$, let $\gamma:[0,e]\rightarrow X$ be a unit speed geodesic from $x$ to $y$. For any $\epsilon,e' \in \left(0,\min \{e,D-e\} /4\right)$, it follows from the fact $\mathscr{H}^n \left( B_{e+\epsilon}(\gamma(e'))\setminus B_{e+e'}(x)\right)>0$ that $B_{e+\epsilon}(\gamma(e'))\setminus B_{e+e'}(x)\neq \emptyset$. Consequently, for each $i\in \mathbb{N}$, we can select a point $z_i\in B_{e+\frac{1}{i}}(\gamma(e'))\setminus B_{e+e'}(x)$. By taking a subsequence, we may assume that $\{z_i\}$ converges to a point $z\in X$.
Thus we have \[ \lim_{i\to\infty}\left(e+\frac{1}{i}+e'\right)\geqslant \lim_{i\to\infty}\mathsf{d}(z_i,\gamma(e'))+e'\geqslant \lim_{i\to\infty}\mathsf{d}(z_i,x)=\mathsf{d}(z,x)\geqslant e+e', \] so that $\mathsf{d}\left(x,z\right)=e+e'$. If we let $\gamma':[0,e']\rightarrow X$ be the unit speed geodesic from $y$ to $z$, then for any $s\in [0,e]$, $s'\in [0,e']$ it holds that \[ \begin{aligned} e+e'&= \mathsf{d}\left(x,z\right)\leqslant \mathsf{d}\left(x,\gamma(s)\right)+\mathsf{d}\left(\gamma(s),\gamma'(s')\right)+\mathsf{d}\left(\gamma'(s'),z\right)\\ \ &=\mathsf{d}\left(\gamma(s),\gamma'(s')\right)+s+e'-s'. \end{aligned} \] On the other hand, \[ \mathsf{d}\left(\gamma(s),\gamma'(s')\right)\leqslant \mathsf{d}\left(\gamma(s),y\right)+\mathsf{d}\left(y,\gamma'(s')\right)=e-s+s'. \] Therefore, $\mathsf{d}\left(\gamma(s),\gamma'(s')\right)=e-s+s'$. Finally, owing to Theorem \ref{RCDnb}, the curve $\tilde{\gamma}:[0,e+e']\rightarrow X$ defined by \[ \tilde{\gamma}(s)= \left\{\begin{array}{ll} \gamma(s) &\text{if $s\in [0,e]$},\\ \gamma'(s-e)&\text{if $s>e$,} \end{array}\right. \] is a minimal geodesic from $x$ to $z$. \end{proof} \begin{lem}\label{step2} Under the assumption of Theorem \ref{thm5.1}, the injectivity radius of $X$ equals its diameter. \end{lem} \begin{proof} First, we demonstrate that for any $x,y\in X$ with $0<\mathsf{d}\left(x,y\right)=l<D$, there exists a unique unit speed geodesic from $x$ to $y$, denoted by $\gamma_{x,y}$ for the remainder of this section. Suppose, for the sake of contradiction, that there are two distinct geodesics $\gamma_1,\gamma_2$ from $x$ to $y$. By Lemma \ref{step1} we can assume that the domain of $\gamma_1$ can be extended to $[0,D]$ such that $\gamma_1(0)=x$ and $\gamma_1(l)=y$. For convenience, the extended curve is still denoted by $\gamma_1$, and it remains a unit speed geodesic.
Given any $\delta\in (0,D-l)$, we calculate that \[ l+\delta=\mathsf{d}\left(\gamma_1(l+\delta),\gamma_2(0)\right)\leqslant \mathsf{d}\left(\gamma_1(l+\delta),\gamma_2(l)\right)+\mathsf{d}\left(\gamma_2(0),\gamma_2(l)\right)=l+\delta. \] Let \[ \eta(s)= \left\{\begin{array}{ll} \gamma_2(s) &\text{if $s\in [0,l]$},\\ \gamma_1(s)&\text{if $l<s\leqslant D$.} \end{array}\right. \] Therefore $\eta$ is also a minimal geodesic from $x$ to $\gamma_1(D)$ which agrees with $\gamma_2$ on $[0,l]$. However, this leads to a contradiction with the non-branching property of $\mathrm{RCD}(K,N)$ spaces (Theorem \ref{RCDnb}). \end{proof} \begin{lem}\label{step3} Under the assumption of Theorem \ref{thm5.1}, given any fixed point $x_0\in X$, we have the disintegration formula (\ref{disintefor}) on $X\setminus \mathcal{T}_{x_0}$ with respect to $\mathsf{d}_{x_0}$, where $\mathcal{T}_{x_0}=\{x_0\}\cup\partial B_D(x_0)$ and $Q=\partial B_{D/2}(x_0)$. \end{lem} \begin{proof} For any $0<r_1<r_2<D$, let us set \[ \begin{aligned} F_{r_2\rightarrow r_1}:\partial B_{r_2}(x_0)&\longrightarrow \partial B_{r_1}(x_0)\\ y&\longmapsto \gamma_{x_0,y}(r_1), \end{aligned} \] which is well defined and is a bijection due to Lemmas \ref{step1} and \ref{step2}. Moreover, if $\partial B_{r_2}(x_0)\ni y_i\rightarrow y\in \partial B_{r_2}(x_0)$, by Lemma \ref{step2} we know $\gamma_{x_0,y_i}$ must converge to $\gamma_{x_0,y}$, which means that $F_{r_2\rightarrow r_1}(y_i)\rightarrow F_{r_2\rightarrow r_1}(y)$. Hence $F_{r_2\rightarrow r_1}$ is continuous. Similarly $\left( F_{r_2\rightarrow r_1}\right)^{-1}$ (which is denoted by $F_{r_1\rightarrow r_2}$ later on for convenience) is also continuous. Therefore $F_{r_2\rightarrow r_1}$ is a homeomorphism. Indeed, arguing by contradiction, one can further show that $F_{r_2\rightarrow r_1}$ and $F_{r_1\rightarrow r_2}$ are uniformly continuous.
As a result, the map \[ \begin{aligned} \mathfrak{Q}:X\setminus\mathcal{T}_{x_0}&\longrightarrow Q:=\partial B_{D/2}(x_0)\\ y&\longmapsto F_{\mathsf{d}\left(y,x_0\right)\rightarrow D/2}(y), \end{aligned} \] is continuous and thus Borel. We also observe that in our context, each $\alpha\in \partial B_{D/2}(x_0)$ uniquely determines a geodesic starting from $x_0$, which gives a partition \[ X\setminus\mathcal{T}_{x_0}=\bigcup_{\alpha\in \partial B_{D/2}(x_0)}X_\alpha, \] where $X_\alpha$ is the image of the map \[ \begin{aligned}(0,D)&\longrightarrow X\\ t&\longmapsto F_{D/2\rightarrow t}(\alpha). \end{aligned} \] In addition, Theorem \ref{BGineq} ensures that $\mathscr{H}^n\left(\partial B_D(x_0)\right)=0$, which permits the application of Theorem \ref{thmdisinte} to $X\setminus \mathcal{T}_{x_0}$. Consequently, using the identification \[ X\setminus \mathcal{T}_{x_0}\ni y\mapsto \left(F_{\mathsf{d}\left(x_0,y\right)\rightarrow D/2}(y),\mathsf{d}\left(x_0,y\right)\right)\in \partial B_{D/2}(x_0)\times (0,D), \] we can reformulate the disintegration formula as \begin{equation}\label{eqn4.10aa} \int_X f\mathop{\mathrm{d}\mathscr{H}^n}=\int_{\partial B_{D/2}(x_0)}\int_{(0,D)}f\left(\alpha,t\right)\sigma_{x_0}\left(\alpha,t\right)\mathop{\mathrm{d}t}\mathop{\mathrm{d}\mathfrak{q}_{x_0}(\alpha),}\ \forall f\in L^1. \end{equation} \end{proof} \begin{remark}\label{rmk4.16} As a direct consequence of (\ref{eqn4.10aa}) and Theorem \ref{coareafor}, we see that $\partial B_R(x_0)$ is $\mathscr{H}^{n-1}$-measurable for any $R\in (0,D)$. Moreover, for any continuous function $f$ on $X$ it holds that \[ \int_{\partial B_{R}(x_0)}f\mathop{\mathrm{d}\mathscr{H}^{n-1}}=\int_{\partial B_{D/2}(x_0)}f(\alpha, R)\sigma_{x_0}(\alpha,R)\mathop{\mathrm{d}\mathfrak{q}_{x_0}(\alpha)}, \ \forall R\in (0,D).
\] We also observe that, according to Theorem \ref{BGineq}, for any fixed point $y_0\in X$, the function $r\mapsto \mathscr{H}^n(B_r(y_0))/V_{K,n}(r)$ is monotone decreasing. Therefore its derivative is non-positive, yielding that \begin{equation}\label{eqn4.13aaa} \mathscr{H}^{n-1}(\partial B_r(y_0))\leqslant\frac{V'_{K,n}(r)\mathscr{H}^{n}( B_r(y_0))}{V_{K,n}(r)}\leqslant C(K,n,D)r^{n-1}. \end{equation} \end{remark} \begin{remark}\label{rmk4.17} According to Remark \ref{rmk4.6aaaaaaaaa} and \cite[Prop. 4.7]{CM20}, it is known that for any $x\in X$ and any $\delta\in (0,\mathsf{d}(x_0,x))$, the following inequality holds: \[ \|\Delta \,\mathsf{d}_{x_0}\|_{ L^\infty(B_{\delta/2}(x))}\leqslant C(\mathsf{d}(x_0,x),\delta,K,N). \] Furthermore, applying Theorem \ref{BGineq} and (\ref{eeeeeeqn2.6}) implies that \[ \fint_{B_r(y)} |\mathrm{Hess} \mathop{\mathsf{d}_{x_0}} |_{\mathsf{HS}}^2\mathop{\mathrm{d}\mathscr{H}^n}\leqslant C(\mathsf{d}(x_0,x),\delta,K,N),\ \forall y\in B_{\delta/4}(x), \ \forall r\in (0,\delta/4). \] As a result, by letting $r\rightarrow 0$ we deduce that \[ \left\||\mathrm{Hess}\mathop{\mathsf{d}_{x_0}}|_\mathsf{HS}\right\|_{L^\infty(B_{\delta/4}(x))}\leqslant C(\mathsf{d}(x_0,x),\delta,K,n). \] \end{remark} \begin{lem}\label{step4} Under the assumption of Theorem \ref{thm5.1}, there exists a positive real number $\epsilon_0$ depending on $K,n,D$ satisfying the following property: given any fixed point $x_0\in X$, and any $\epsilon\in (0,\epsilon_0)$, one can find $\delta>0$ and $n$ points $\{p_i\}\subset \partial B_{D/3}(x_0)$ such that by letting $u_i:=\mathsf{d}_{p_i}-D/3$, the map \[ \begin{aligned} \textbf{U}:B_\delta(x_0)&\longrightarrow \mathbb{R}^n\\ y&\longmapsto(u_1(y),\ldots,u_n(y)), \end{aligned} \] is bijective. Moreover, $\textbf{U}$ and $\textbf{U}^{-1}|_{\textbf{U}(B_\delta(x_0))}$ are both $(1+C(K,n,D)\epsilon)$-Lipschitz continuous.
\end{lem} \begin{proof} For any $p\in \partial B_{D/3}(x_0)$, $\gamma_{p,x_0}$ can be extended to length $2D/3$. Moreover, Remarks \ref{rmk4.16} and \ref{rmk4.17} guarantee the existence of a constant $C=C(K,n,D)>1$ such that \begin{equation}\label{fffdajsflajlfad} \|\Delta \,\mathsf{d}_{p}\|_{ L^\infty(B_{D/6}(x_0))} +\left\||\mathrm{Hess}\mathop{\mathsf{d}_{p}}|_\mathsf{HS}\right\|_{L^\infty(B_{D/6}(x_0))}\leqslant C. \end{equation} Let us first fix a point $p_1\in \partial B_{D/3}(x_0)$. Then with the aid of Theorems \ref{AAthm} and \ref{AH18}, $(\mathsf{d}_{p_1}-D/3)/r$ uniformly converges to a linear function of norm 1 under the convergence (\ref{aaaaannnnnneqn5.6}) as $r\rightarrow 0$. In addition, by Gromov-Hausdorff approximation, there exist $r_1<\epsilon$ and a point $p_2'\in \partial B_{r_1}(x_0)$ such that \[ \left|\mathsf{d}_{p_1}(p_2')-\frac{D}{3}\right|\leqslant \epsilon r_1. \] Set $p_2=F_{r_1\rightarrow D/3}(p_2')$. Then the $C$-Lipschitz continuity of $\langle \nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2} \rangle$ on $B_{r_1}(x_0)$ follows from the combination of (\ref{11eqn2.16}) and (\ref{fffdajsflajlfad}). Furthermore, the following calculation implies that $\left|\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle\right|(x_0)\leqslant C\epsilon$. \begin{equation}\label{fdsajildajhf} \begin{aligned} &\ \left|\mathsf{d}_{p_1}(p_2')-\frac{D}{3}\right|\\ =&\ \left|\int_0^{r_1}\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(\gamma_{x_0,p_2}(t))\mathop{\mathrm{d}t} \right|\\ \geqslant&\ r_1\left|\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(x_0)\right|-\left|\int_0^{r_1}\left(\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(\gamma_{x_0,p_2}(t)) -\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(x_0)\right)\mathop{\mathrm{d}t}\right|\\ \geqslant &\ r_1\left|\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(x_0)\right|-\frac{C}{2}(r_1)^2.
\end{aligned} \end{equation} Let $\gamma_1,\gamma_2$ be limits of $\gamma_{x_0,p_1}$, $\gamma_{x_0,p_2}$ under the convergence (\ref{aaaaannnnnneqn5.6}). Then thanks to the Lipschitz continuity of $\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle$ and \cite{HM17}, the angle $\angle \gamma_1 0_n \gamma_2$ is uniquely determined by $\langle\nabla \mathsf{d}_{p_1},\nabla \mathsf{d}_{p_2}\rangle(x_0)$ and satisfies that $\left|\angle \gamma_1 0_n \gamma_2-\pi/2\right|\leqslant C\epsilon$. Similarly, one can identify points $p_3,\ldots,p_n\in \partial B_{D/3}(x_0)$ such that \[ \left|\langle \nabla\mathsf{d}_{p_i},\nabla \mathsf{d}_{p_j} \rangle\right|(x_0)\leqslant C\epsilon,\ \forall i\neq j. \] Therefore, combining (\ref{fffdajsflajlfad}) we have \[ \max_{y\in B_{4\epsilon}(x_0)}\left|\langle \nabla\mathsf{d}_{p_i},\nabla \mathsf{d}_{p_j} \rangle\right|(y)\leqslant 5\,C\epsilon,\ \forall i\neq j, \] which yields \[ \max_{y\in B_{4\epsilon}(x_0)}\left|g-\sum_{i=1}^{n} d\, \mathsf{d}_{p_i}\otimes d\, \mathsf{d}_{p_i}\right|(y)\leqslant 5\,nC\epsilon. \] We now let $u_i:=\mathsf{d}_{p_i}-D/3$ and $\textbf{U}:=(u_1,\ldots,u_n)$ and claim that \begin{equation}\label{4.14} \lim_{r\rightarrow 0}\sup_{y\in B_{2\epsilon}(x_0)}\max_{w\in \partial B_{r}(y)} \frac{|\textbf{U}(w)-\textbf{U}(y)|}{r}\geqslant 1-C\epsilon. \end{equation} We argue by contradiction. Assume there exist two sequences of points $\{w_i\}$ and $\{y_i\}$ such that $r_i=\mathsf{d}(w_i,y_i)\rightarrow 0$ and that $|\textbf{U}(w_i)-\textbf{U}(y_i)|<(1-C\epsilon )r_i$. Then using (\ref{aaaaannnnnneqn5.6}) and passing to a subsequence, we deduce that \[ (X_i,\mathsf{d}_i,\mathscr{H}^n,y_i):=\left(X,\frac{1}{r_i}\mathsf{d},\frac{1}{r_i^n}\mathscr{H}^n,y_i\right)\xrightarrow{\mathrm{pmGH}}\left(\mathbb{R}^n,\mathsf{d}_{\mathbb{R}^n},\mathscr{L}^n,0_n\right).
\] Owing to (\ref{fffdajsflajlfad}) and Theorem \ref{AH18}, for each $j=1,\ldots,n$, $\{u_j^i:=u_j/r_i\}$ uniformly converges to a linear function $f_j:x\mapsto\alpha_j\cdot x$ with $|\alpha_j|=1$ on $\mathbb{R}^n$. Therefore, we can estimate that \[ \begin{aligned} \omega_n \sum_{j\neq l} {(\alpha_j\cdot\alpha_l)}^2=& \int_{B_1(0_n)}\left|\mathrm{g}_{\mathbb{R}^n}-\sum_{j=1}^n d\,f_j\otimes d\,f_j\right|^2\mathrm{d}\mathscr{L}^n\\ =&\lim_{i\rightarrow \infty}\int_{B_1^{X_i}(y_i)}\left|g^{X_i}-\sum_{j=1}^n d\,u_j^i\otimes d\,u_j^i\right|_{\mathsf{HS}}^2 \mathrm{d}\mathscr{H}^n\\ =&\lim_{i\rightarrow \infty}\frac{1}{r_i^n}\int_{B_{r_i}(y_i)}\left|g-\sum_{j=1}^n d\,u_j\otimes d\,u_j\right|_{\mathsf{HS}}^2 \mathrm{d}\mathscr{H}^n\leqslant 25\,\omega_n C^2 \epsilon^2. \end{aligned} \] Let us denote $b=(b_1,\ldots,b_n)\in \partial B_1(0_n)$ as the limit point of $w_i$. Then \[ \begin{aligned} (1-C\epsilon)^2\geqslant&\ \lim_{i\rightarrow \infty}\left(\frac{\left|\textbf{U}(w_i)-\textbf{U}(y_i)\right|}{r_i}\right)^2\\ =&\ \sum_{j=1}^n \left(\alpha_j\cdot b\right)^2= 1+\sum_{k\neq l}b_k b_l\, {(\alpha_k\cdot\alpha_l)}^2\\ \geqslant& \,1-4C^2\epsilon^2\sum_{k\neq l}\frac{b_k^2+b_l^2}{2}\geqslant 1-25nC^2\epsilon^2. \end{aligned} \] Therefore to deduce the contradiction it suffices to set $\epsilon_0=1/(25nC+C)$. Similarly, one can show that \begin{equation}\label{4.15} \lim_{r\rightarrow 0}\sup_{y\in B_{2\epsilon}(x_0)}\max_{w\in \partial B_{r}(y)} \frac{|\textbf{U}(w)-\textbf{U}(y)|}{r}\leqslant 1+C\epsilon. \end{equation} Finally by (\ref{4.14}) and (\ref{4.15}) it suffices to choose $\delta>0$ such that \[ 1-2C\epsilon\leqslant \sup_{y\in B_{2\epsilon}(x_0)}\max_{w\in \partial B_{r}(y)} \frac{|\textbf{U}(w)-\textbf{U}(y)|}{r}\leqslant 1+2C\epsilon,\ \forall r<\delta. \] \end{proof} \begin{remark} Indeed, by employing a blow-up argument, one can show that the choice of $\delta$ is independent of $x_0$.
\end{remark} We are now in a position to prove the following key proposition. \begin{prop}\label{step5} Under the assumption of Theorem \ref{thm5.1}, given any fixed point $\bar{x}\in X$, it holds that \begin{equation}\label{1120} \Delta \,\mathsf{d}_{\bar{x}}(y)=\Delta\, \mathsf{d}_{\bar{x}}(z), \ \forall y,z \in \partial B_R(\bar{x}),\ \forall R\in (0,D). \end{equation} In other words, the value of $\Delta\, \mathsf{d}_{\bar{x}}$ only depends on the value of $\mathsf{d}_{\bar{x}}$. \end{prop} \begin{proof} Since we have the freedom to rescale the space, it suffices to consider equation (\ref{1120}) near a point $x_0\in X$ with $\mathsf{d}(\bar{x},x_0)=D/3$. Let us define $u_n=\mathsf{d}_{\bar{x}}-D/3$. Subsequently, we can choose a small $\delta>0$ and construct a bi-Lipschitz map $\textbf{U}=(u_1,\ldots,u_n):B_{\delta}(x_0)\rightarrow \mathbb{R}^n$ as described in Lemma \ref{step4}. We claim that for $\mathscr{H}^n$-a.e. $x\in B_\delta(x_0)$ the following holds. \begin{equation}\label{aaa4.16} \lim_{t\rightarrow 0} \frac{1}{t}\left(\frac{\mathscr{H}^n\left(B_t(x)\setminus B_{\mathsf{d}(\bar{x},x)}(\bar{x})\right)}{\omega_n t^n}-\frac{1}{2}\right)=\Delta \,\mathsf{d}_{\bar{x}}(x). \end{equation} Let us take \[ x\in \bigcap_{j,k=1}^n \mathrm{Leb}(\langle \mathrm{Hess} \mathop{u_j},\mathrm{Hess}\mathop{ u_k}\rangle)\cap \bigcap_{j,k,l=1}^n\mathrm{Leb}(\mathrm{Hess}\mathop{u_j}(\nabla u_k,\nabla u_l)) \cap B_\delta(x_0), \] and constants $a_{ij}$ ($i,j=1,\ldots,n$) and $b^i_{jk}$ ($i,j,k=1,\ldots,n$) such that the functions $v_i=\sum_{j=1}^n a_{ij}(u_j-u_j(x))+\sum_{j,k=1}^n b^i_{jk}(u_j-u_j(x)) (u_k-u_k(x))$ satisfy $v_i(x)=0$, $\langle\nabla v_i,\nabla v_j\rangle(x)=\delta_{ij}$ and \begin{equation}\label{4.17aaaaaa} \lim_{t\rightarrow 0}\fint_{B_t(x)} |\mathrm{Hess}\mathop{v_i}|_{\mathsf{HS}}^2\,\mathrm{d}\mathscr{H}^n=0,\ i=1,\ldots,n.
\end{equation} To determine these constants, it is sufficient to solve the following system of equations, the solution of which can be approached in a manner analogous to that described in \cite[Sec. 4]{BMS23}. \begin{equation}\label{4.18c} \sum_{k,l=1}^n a_{ik}a_{jl}\langle \nabla u_k,\nabla u_l\rangle(x)=\delta_{ij}, \end{equation} and for each $i=1,\ldots,n$, \begin{equation}\label{4.19c} \lim_{t\rightarrow 0}\fint_{B_t(x)}\left(\sum_{j,k,l=1}^n a_{ij}b^i_{kl} \mathrm{Hess}\mathop{u_j}(\nabla u_k,\nabla u_l)+\sum_{j,k=1}^n a_{ij}a_{ik}\langle \mathrm{Hess} \mathop{u_j},\mathrm{Hess}\mathop{ u_k}\rangle\right)\mathop{\mathrm{d}\mathscr{H}^n}=0. \end{equation} By (\ref{fffdajsflajlfad}), we may assume that \begin{equation}\label{vestimate} \sum_{i=1}^n\left(\| |\mathrm{Hess}\mathop{v_i}|_{\mathsf{HS}}\|_{L^\infty(B_{t_0}(x))}+\| |\nabla v_i|\|_{L^\infty(B_{t_0}(x))}+\| \Delta v_i\|_{L^\infty(B_{t_0}(x))}\right)\leqslant \tilde{C} \end{equation} for some constant $\tilde{C}>0$ (which may vary from line to line) and some sufficiently small $t_0>0$. The subsequent discussion is predicated on the assumption that $t\ll t_0$. For convenience, let $\textbf{V}=(v_1,\ldots,v_n)$ and $r:=|\textbf{V}|$. We recall that, from (\ref{4.17aaaaaa}), (\ref{4.18c}) and \cite[Prop. 4.8]{BMS23}, it follows that \begin{equation}\label{eqn4.22} \fint_{B_t(x)}\left(\sum_{i,j}|\langle\nabla v_i,\nabla v_j\rangle-\delta_{ij}|+\left||\nabla r|-1\right|+\left|\Delta r^2-2n\right|\right)\mathop{\mathrm{d}\mathscr{H}^n}\leqslant \tilde{C}t\mathop{\xi(t)}, \end{equation} where $\xi$ is a positive function that satisfies $\lim_{t\rightarrow 0}\xi(t)=0$ and can be taken comparable to \[ s\mapsto \sum_{i=1}^n\left(\fint_{B_s(x)} |\mathrm{Hess}\mathop{v_i}|_{\mathsf{HS}}^2\mathop{\mathrm{d}\mathscr{H}^n}\right)^{\frac{1}{2}}. \] To conclude, we will need the following three lemmas.
\end{proof} \begin{lem}\label{hfouiaehofie} Under the assumption of Theorem \ref{thm5.1} and the setting in the proof of Proposition \ref{step5} we have \begin{equation}\label{aabb4.24} \begin{aligned} &\lim_{t\rightarrow 0}\frac{1}{\omega_n t^{n+1}}\int_{B_{2t}(x)} \left|\left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{\frac{1}{2}}-1\right|\mathop{\mathrm{d}\mathscr{H}^n}\\ =&\lim_{t\rightarrow 0}\frac{1}{\omega_n t^{n+1}}\int_{\textbf{V}(B_{t}(x))\cap \{u_n\geqslant u_n(x) \}} \left|\left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{-\frac{1}{2}}-1\right|\mathop{\mathrm{d}\mathscr{L}^n}=0. \end{aligned} \end{equation} Therefore, it holds that \begin{equation}\label{aabb4.25} \mathscr{H}^n\left(B_{t}(x)\setminus B_{\mathsf{d}(x,\bar{x})}(\bar{x})\right)=\mathscr{L}^n\left(\textbf{V}\mathop{(B_t(x))}\cap\ \{u_n\geqslant u_n(x)\}\right)+o(t^{n+1}). \end{equation} \end{lem} \begin{proof} For the first limit in (\ref{aabb4.24}), since $\langle\nabla v_i,\nabla v_j\rangle(x)=\delta_{ij}$, it follows from (\ref{vestimate}) that the matrix $\left(\langle\nabla v_i,\nabla v_j \rangle\right)$ is positive definite $\mathscr{H}^n$-a.e. in a sufficiently small ball around $x$. Then the limit can be deduced directly from (\ref{eqn4.22}). As for the second limit in (\ref{aabb4.24}), first following the argument used in the proof of Lemma \ref{step4}, we can assume that the restrictions $\textbf{V}|_{B_{t}(x)}$ and $\textbf{V}^{-1}|_{\textbf{V}(B_{t}(x))}$ are both $(1+\tilde{C}t)$-Lipschitz maps. Then it suffices to apply a proof similar to that of \cite[Lem. 4.7]{H23} to verify that \[ \textbf{V}_\sharp (\mathscr{H}^{n})=\left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{-\frac{1}{2}}\mathscr{L}^n. \] This together with (\ref{aabb4.24}) then implies (\ref{aabb4.25}).
\end{proof} \begin{lem}\label{step6} Under the assumption of Theorem \ref{thm5.1} and the setting in the proof of Proposition \ref{step5} we have \[ \begin{aligned} &\int_{B_t(0_n)\cap \{u_n\geqslant u_n(x)\}} \left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{-\frac{1}{2}}\mathop{\mathrm{d}\mathscr{L}^n}\\ =&\int_{\textbf{V}(B_t(x))\cap \{u_n\geqslant u_n(x)\}} \left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{-\frac{1}{2}}\mathop{\mathrm{d}\mathscr{L}^n}+ o(t^{n+1}). \end{aligned} \] \end{lem} \begin{proof} The proof closely follows the approach outlined in \cite[Lem. 3.5]{BMS23}. Let $\tilde{r}=\max\{r,\mathsf{d}_x\}$ and $\Omega_t:=B_t(x)\cap \textbf{V}^{-1}(B_t(0_n))$. Clearly we have $\Omega_t=\{\tilde{r}< t\}$ and $B_{t-\tilde{C}t^2}(x)\subset\Omega_t\subset B_{t+\tilde{C}t^2}(x)$. We claim that $\mathscr{H}^n(\Omega_t)-\omega_n t^n\geqslant -t^{n+1}\xi(2t)$. Using Theorem \ref{coareafor} we see \[ \int_{\Omega_t}\frac{r|\nabla \tilde{r}|}{\tilde{r}^{n+1}}|\nabla(\tilde{r}-r)|^2\mathop{\mathrm{d}\mathscr{H}^n}=\int_0^t \int_{\partial \Omega_s} \frac{r}{s^{n+1}}|\nabla(\tilde{r}-r)|^2\mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}=I_1+I_2, \] where \[ I_1=\int_0^t \int_{\partial \Omega_s}\frac{r}{s^{n+1}}(|\nabla \tilde{r}|^2+|\nabla r|^2) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}, \ \ I_2=-2\int_0^t \int_{\partial \Omega_s}\frac{r}{s^{n+1}}\langle\nabla r,\nabla \tilde{r}\rangle \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}. \] Due to the locality of minimal relaxed slopes, we know $|\nabla \tilde{r}|=\chi_{\{r\leqslant \mathsf{d}_x\}}+\chi_{\{r\geqslant \mathsf{d}_x\}}|\nabla r|$ $\mathscr{H}^n$-a.e.
Therefore applying (\ref{eqn4.22}) we have \[ \begin{aligned} I_1=2\int_0^t \int_{\partial \Omega_s}\frac{r}{s^{n+1}} \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}+\int_0^t \int_{\partial \Omega_s}\frac{r}{s^{n+1}}(|\nabla \tilde{r}|^2+|\nabla r|^2-2) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}=I_3+I_4, \end{aligned} \] where \[ \begin{aligned} |I_4|&=\left|\int_0^t \int_{\partial \Omega_s}\frac{r}{s^{n+1}}(|\nabla \tilde{r}|^2+|\nabla r|^2-2) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\right|\\ &\leqslant 2\int_0^t \int_{\partial \Omega_s \cap\{\tilde{r}\geqslant \mathsf{d}_x\}}\frac{r}{s^{n+1}}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\\ &\leqslant 2\int_0^t \int_{\partial \Omega_s }\frac{r}{s^{n+1}}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}= 2\int_{ \Omega_t }\frac{r|\nabla \tilde{r}|}{\tilde{r}^{n+1}}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n}}\\ &\leqslant \tilde{C}\int_{ \Omega_t }\frac{1}{{\tilde{r}}^{n}}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n}}\leqslant \tilde{C}\int_{B_{2t}(x)}\frac{1}{{\mathsf{d}_x}^{n}}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n}}\\ &\leqslant \frac{\tilde{C}}{t^n} \sum_{i=0}^\infty 2^{in}\int_{A_{2^{-i}\, t,2^{-(i-1)}\,t}(x)}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n}}\\ &\leqslant \frac{ \tilde{C}}{t^n} \sum_{i=0}^\infty 2^{in}\int_{B_{2^{-(i-1)}\,t}(x)}||\nabla r|^2-1| \mathop{\mathrm{d}\mathscr{H}^{n}}\leqslant \tilde{C}t\mathop{\xi(2t)}. \end{aligned} \] Regarding $I_3$, we begin by substituting $r$ with $\tilde{r}$ in the proof of \cite[Prop.
4.8 i)]{BMS23} to observe that \begin{equation}\label{fauhfeouahfuoia} \fint_{\partial B_s(x)}\tilde{r}\mathop{\mathrm{d}\mathscr{H}^{n-1}}\leqslant s+s^2\mathop{\xi(s)}. \end{equation} Therefore according to (\ref{eqn4.13aaa}) and (\ref{fauhfeouahfuoia}) we know \[ \begin{aligned} I_3=2\int_0^t \frac{1}{s^{n+1}}\int_{\partial \Omega_s}(r-s) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}+2\int_0^t \frac{\mathscr{H}^{n-1}(\partial \Omega_s)}{s^{n}}\mathop{\mathrm{d}s}, \end{aligned} \] with \[ \begin{aligned} &\left|\int_0^t \frac{1}{s^{n+1}}\int_{\partial \Omega_s}(r-s) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\right|\\ =& \int_0^t \frac{1}{s^{n+1}}\int_{\partial \Omega_s}(s-r) \mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s} ={\int_{\Omega_t} \frac{|\nabla \tilde{r}|}{\tilde{r}^{n+1}}(\tilde{r}-r)\mathop{\mathrm{d}\mathscr{H}^{n}}}\leqslant \int_{B_{2t}(x)} \frac{\tilde{C}}{{\mathsf{d}_x}^{n+1}}(\tilde{r}-r)\mathop{\mathrm{d}\mathscr{H}^{n}}\\ =&\,\tilde{C}\int_0^{2t} \frac{1}{s^{n+1}}\int_{\partial B_s(x)} (\tilde{r}-r)\mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\leqslant \tilde{C}\int_0^{2t} \frac{1}{s^{n-1}}\mathscr{H}^{n-1}(\partial B_s(x))\xi(s)\mathop{\mathrm{d}s}\leqslant \tilde{C}t\mathop{\xi(2t)}. \end{aligned} \] Recalling that for $\mathscr{L}^1$-a.e. $s\in (0,t)$, the exterior unit normal of $\partial \Omega_s$ coincides $\mathscr{H}^{n-1}$-a.e. with $\nabla \tilde{r}$ (as shown in \cite[Prop.
6.1]{BPS23b}), and applying the Gauss-Green formula from \cite{BPS23} along with (\ref{eqn4.22}), we obtain \[ \begin{aligned} I_2&=-\int_0^t \frac{1}{s^{n+1}}\int_{\Omega_s} \Delta r^2 \mathop{\mathrm{d}\mathscr{H}^{n}}\mathop{\mathrm{d}s}\\ &=-2n\int_0^t \frac{\mathscr{H}^n(\Omega_s)}{s^{n+1}}\mathop{\mathrm{d}s}-\int_0^t \frac{1}{s^{n+1}} \int_{\Omega_s} (\Delta r^2-2n) \mathop{\mathrm{d}\mathscr{H}^{n}}\mathop{\mathrm{d}s}, \end{aligned} \] with \[ \left|\int_0^t \frac{1}{s^{n+1}} \int_{\Omega_s} (\Delta r^2-2n) \mathop{\mathrm{d}\mathscr{H}^{n}}\mathop{\mathrm{d}s}\right|\leqslant \tilde{C}t\xi(2t). \] Now by recalling the conclusion of \cite[Lem. 3.5]{BMS23} we get \[ \begin{aligned} &\int_{\Omega_t}\frac{r|\nabla \tilde{r}|}{\tilde{r}^{n+1}}|\nabla(\tilde{r}-r)|^2\mathop{\mathrm{d}\mathscr{H}^{n}}\leqslant \tilde{C}\int_{B_{2t}(x)}\frac{r}{\tilde{r}^{n+1}}|\nabla(\tilde{r}-r)|^2\mathop{\mathrm{d}\mathscr{H}^{n}}\\ \leqslant&\ \tilde{C}\int_0^{2t} \frac{1}{s^{n+1}}\int_{\partial B_s(x)}r|\nabla(\tilde{r}-r)|^2\mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\\ \leqslant&\ \tilde{C} \int_0^{2t} \frac{1}{s^{n+1}}\int_{\partial B_s(x)}r|\nabla(\mathsf{d}_x-r)|^2\mathop{\mathrm{d}\mathscr{H}^{n-1}}\mathop{\mathrm{d}s}\leqslant \tilde{C}t\xi(2t). 
\end{aligned} \] Finally, from the Newton-Leibniz rule for $\mathscr{L}^1$-integrable functions we know \[ \begin{aligned} &\frac{1}{t^n}\int_{\Omega_t}|\nabla\tilde{r}|\mathop{\mathrm{d}\mathscr{H}^n}-\lim_{s\to 0}\frac{1}{s^n}\int_{\Omega_s}|\nabla\tilde{r}|\mathop{\mathrm{d}\mathscr{H}^n}\\ =&-n\int_0^t \frac{1}{s^{n+1}}\int_{\Omega_s}|\nabla \tilde{r}|\mathop{\mathrm{d}\mathscr{H}^n}\mathop{\mathrm{d}s}+\int_0^t \frac{\mathscr{H}^{n-1}(\partial \Omega_s)}{s^{n}}\mathop{\mathrm{d}s}\\ =&\ n\int_0^t \frac{1}{s^{n+1}}\int_{\Omega_s}\left||\nabla \tilde{r}|-1\right|\mathop{\mathrm{d}\mathscr{H}^n}\mathop{\mathrm{d}s}-\int_0^t \frac{n\,\mathscr{H}^n(\Omega_s)}{s^{n+1}}\mathop{\mathrm{d}s}+\int_0^t \frac{\mathscr{H}^{n-1}(\partial \Omega_s)}{s^{n}}\mathop{\mathrm{d}s}\\ \leqslant&\ n\int_0^t \frac{1}{s^{n+1}}\int_{B_{2s}(x)}\left||\nabla \tilde{r}|-1\right|\mathop{\mathrm{d}\mathscr{H}^n}\mathop{\mathrm{d}s}-\int_0^t \frac{n\,\mathscr{H}^n(\Omega_s)}{s^{n+1}}\mathop{\mathrm{d}s}+\int_0^t \frac{\mathscr{H}^{n-1}(\partial \Omega_s)}{s^{n}}\mathop{\mathrm{d}s}\\ \leqslant&\ \tilde{C}t\xi(2t). \end{aligned} \] Since a combination of (\ref{vestimate}) and the $(1+\tilde{C}t)$-bi-Lipschitz property of $\mathbf{V}$ implies that $\lim_{t\to 0}t^{-n}\int_{\Omega_t}|\nabla \tilde{r}|\mathop{\mathrm{d}\mathscr{H}^n}=\omega_n$, and $t^{-n}\int_{\Omega_t}||\nabla\tilde{r}|-1|\mathop{\mathrm{d}\mathscr{H}^n}\leqslant \tilde{C}t\xi(2t)$, we obtain $\mathscr{H}^n(\Omega_t)-\omega_n t^n\leqslant \tilde{C}t^{n+1}\xi(2t)$. Following the argument presented in the proof of Lemma \ref{hfouiaehofie}, we deduce that $|\mathscr{H}^n(\textbf{V}^{-1}B_t(0_n))-\omega_n t^n|\leqslant t^{n+1}\xi(t)$. Additionally, since it is established in \cite[Prop.
4.1]{BMS23} that $|\mathscr{H}^n(B_t(x))-\omega_n t^n|\leqslant t^{n+1}\xi(t)$, we conclude the proof by observing that \[ \begin{aligned} \int_{B_t(0_n)\triangle \textbf{V}(B_t(x))} \left(\det \left(\langle\nabla v_i,\nabla v_j \rangle\right)\right)^{-\frac{1}{2}}\mathop{\mathrm{d}\mathscr{L}^n}= \mathscr{H}^n\left(\textbf{V}^{-1}(B_t(0_n))\triangle B_t(x)\right)=o(t^{n+1}). \end{aligned} \] \end{proof} \begin{lem} Under the assumptions of Theorem \ref{thm5.1} and the setting in the proof of Proposition \ref{step5}, for $\mathscr{H}^n$-a.e. $x\in B_\delta(x_0)$ it holds that \begin{equation}\label{a4.26} 2\mathscr{L}^n\left(B_t(0_n)\cap \{u_n\geqslant u_n(x)\}\right)-\omega_n t^n=2\,\omega_n \Delta\, \mathsf{d}_{\bar{x}}(x)(t^{n+1}+o(t^{n+1})). \end{equation} \end{lem} \begin{proof} To facilitate the subsequent discussion, let us first fix the notation. We write $c=u_n(x)$, $f=u_n=\mathsf{d}_{\bar{x}}-D/3$, $\{u_n\geqslant c\}=\{f\geqslant c\}$ for simplicity, and set $A=(a_{ij})$, $A^{-1}=(\tilde{a}_{ij})$, $g^{ij}=\langle\nabla u_i ,\nabla u_j\rangle(x)$, $(g_{ij})=(g^{ij})^{-1}_{ij}$. Due to \cite{HT03}, the left-hand side of (\ref{a4.26}) corresponds precisely to the mean curvature of the level set $\{f=c\}$ at the origin in Euclidean space. This mean curvature can be expressed as follows (with all partial derivatives evaluated at the origin): \begin{equation}\label{e4.29} \begin{aligned} &\left(\left|\nabla^{\mathbb{R}^n} f\right|\right)^{-1}\Delta^{\mathbb{R}^n} f-\left(\left|\nabla^{\mathbb{R}^n} f\right|\right)^{-3}\mathrm{Hess}^{\mathbb{R}^n}\mathop{f}\left(\nabla^{\mathbb{R}^n} f,\nabla^{\mathbb{R}^n} f\right)\\ =&\left(\sum_i {f_i}^2\right)^{-\frac{1}{2}}\sum_i f_{ii}-\left(\sum_i {f_i}^2\right)^{-\frac{3}{2}}\sum_{ij}f_{ij}f_i f_j. \end{aligned} \end{equation} According to (\ref{4.18c}) and (\ref{4.19c}), we see \begin{equation}\label{4.30} f_i=\tilde{a}_{ni},\ f_{ij}=-2\tilde{a}_{nk}\tilde{a}_{\alpha j}b^k_{\alpha\beta}\tilde{a}_{\beta i}. \end{equation} Moreover, since $a_{ij}g^{jk}a_{kl}=\delta_{il}$, we know that $\sum_i {f_i}^2=g^{nn}=1$. Let us recall that in \cite{H18}, Han proved that \[ \Delta v_i=\mathrm{Tr}(\mathrm{Hess}\, v_i)=\langle \mathrm{Hess} \,v_i ,\mathrm{g}\rangle,\ \mathscr{H}^n\mathrm{-a.e.}, \] which yields that for each $i$ \[ \fint_{B_t(x)}\Delta v_i\mathop{\mathrm{d}\mathscr{H}^n}=\sum_{j} a_{ij}\Delta u_j(x)+2\sum_{j,k} b^{i}_{jk}g^{jk}. \] As a result, one may check that \[ \sum_{i} f_{ii}=\Delta u_n (x). \] For the second term in (\ref{e4.29}), for each $i,\tilde{\alpha},\tilde{\beta}$, notice that \[ \mathrm{Hess} \mathop{v_i}(\nabla u_{\tilde{\alpha}},\nabla u_{\tilde{\beta}})=\sum_{j}a_{ij}\mathrm{Hess}\mathop{u_j}(\nabla u_{\tilde{\alpha}},\nabla u_{\tilde{\beta}})+2\sum_{k,l}b^i_{kl}g^{\tilde{\alpha}k}g^{\tilde{\beta}l}. \] Since \[ \fint_{B_t(x)}\mathrm{Hess} \mathop{v_i}(\nabla u_{\tilde{\alpha}},\nabla u_{\tilde{\beta}})\mathop{\mathrm{d}\mathscr{H}^n}=0, \] combining (\ref{4.18c}), (\ref{4.19c}) and (\ref{4.30}) we calculate that \[ \begin{aligned} \sum_{i,j} f_{ij}f_i f_j&=-2\sum_{i,j,k,\alpha,\beta}\tilde{a}_{ni}\tilde{a}_{nj}\tilde{a}_{nk}\tilde{a}_{\alpha j}b^k_{\alpha\beta}\tilde{a}_{\beta i}\\ &=\sum_{i,j,k,l,\alpha,\beta,\tilde{\alpha},\tilde{\beta}}\tilde{a}_{ni}\tilde{a}_{nj}\tilde{a}_{nk}\tilde{a}_{\alpha j}\tilde{a}_{\beta i}g_{\tilde{\alpha}\alpha}g_{\tilde{\beta}\beta}a_{kl}\mathrm{Hess}\mathop{u_l}(\nabla u_{\tilde{\alpha}},\nabla u_{\tilde{\beta}})(x)\\ &=\sum_{i,j,k,l,\alpha,\beta,\tilde{\alpha},\tilde{\beta}}\tilde{a}_{ni}\tilde{a}_{nj}\tilde{a}_{\alpha j}\tilde{a}_{\beta i}g_{\tilde{\alpha}\alpha}g_{\tilde{\beta}\beta}\mathrm{Hess}\mathop{u_n}(\nabla u_{\tilde{\alpha}},\nabla u_{\tilde{\beta}})(x)\\ &=\mathop{\mathrm{Hess}\mathop{u_n}}(\nabla u_n,\nabla u_n)(x)=0. \end{aligned} \] Therefore we conclude. \end{proof} \begin{proof}[Proof of Theorem \ref{thm5.1}(2)] According to Lemmas \ref{step3} and \ref{step4}, Remark \ref{rmk4.16}, Proposition \ref{step5} and \cite[Prop. 4.7]{CM20}, there exists a positive Lipschitz function $\omega$, which is independent of the choice of the base point $x\in X$, defined on $(0,D)$ such that for any $r\in (0,D)$ it holds that \[ \mathrm{d}\mathscr{H}^{n-1}\llcorner_{\partial B_r(x)}=\omega(r)\,\mathrm{d}\mathfrak{q}_x. \] Let us set \[ \begin{aligned} R_x:\mathrm{Lip}(X,\mathsf{d})&\longrightarrow C([0,D])\\ f&\longmapsto \left(r\mapsto \fint_{\partial B_r(x)}f\,\mathrm{d}\mathscr{H}^{n-1}\right), \end{aligned} \] and \[ \begin{aligned} A_x:C([0,D])&\longrightarrow \mathrm{Lip}(X,\mathsf{d})\\ f&\longmapsto \left(y\mapsto f(\mathsf{d}(x,y))\right). \end{aligned} \] We claim that $\tilde{\rho}:(x,y,t)\mapsto R_x A_x \rho(x,\cdot,t)(y)$ is also a heat kernel.
First we observe that for any test function $f$, since $\int_{\partial B_{D/2}(x)}\,\mathrm{d}\mathfrak{q}_x(\alpha)=1$, we have \[ \fint_{\partial B_{\mathsf{d}(x,y)}(x)} f \mathop{\mathrm{d}\mathscr{H}^{n-1}} = \int_{\partial B_{D/2}(x)} f(\alpha,\mathsf{d}(x,y))\,\mathrm{d}\mathfrak{q}_x(\alpha), \] which implies that $R_x A_x f\in \mathrm{Lip}(X,\mathsf{d})$. Moreover, for any test function $\varphi$, applying the co-area formula with base point $x$ and the Gauss-Green formula in \cite{BPS23} shows \[ \begin{aligned} &\int_X \Delta \varphi \, R_x A_x f\mathop{\mathrm{d}\mathscr{H}^{n}}=\int_X \langle\nabla \varphi,\nabla \mathsf{d}_x\rangle (A_x f)'(\mathsf{d}_x)\mathop{\mathrm{d}\mathscr{H}^{n}}\\ =&\int_0^D \omega(r)\left[\int_{\partial B_{D/2}(x)}\frac{\partial}{\partial r} f(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)} \int_{\partial B_{D/2}(x)}\frac{\partial}{\partial r}\varphi(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)}\right]\mathop{\mathrm{d}r}\\ =&\int_0^D \left[\int_{\partial B_r(x)}\langle\nabla f,\nabla \mathsf{d}_x\rangle\mathop{\mathrm{d}\mathscr{H}^{n-1}} \frac{\partial}{\partial r}\left(\int_{\partial B_{D/2}(x)}\varphi(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)}\right)\right]\mathop{\mathrm{d}r}\\ =&\int_0^D \left[\int_{B_r(x)} \Delta f\mathop{\mathrm{d}\mathscr{H}^{n}} \frac{\partial}{\partial r}\left(\int_{\partial B_{D/2}(x)}\varphi(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)}\right)\right]\mathop{\mathrm{d}r}\\ =&-\int_0^D \left[\frac{\partial}{\partial r}\left(\int_{B_r(x)} \Delta f\mathop{\mathrm{d}\mathscr{H}^{n}}\right)\int_{\partial B_{D/2}(x)}\varphi(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)}\right]\mathop{\mathrm{d}r}\\ =&-\int_0^D \left[\int_{\partial B_r(x)} \Delta f\mathop{\mathrm{d}\mathscr{H}^{n-1}}\int_{\partial B_{D/2}(x)}\varphi(\alpha,r)\mathop{\mathrm{d}\mathfrak{q}_x(\alpha)}\right]\mathop{\mathrm{d}r}=\int_X \varphi\, R_x A_x \Delta f\mathop{\mathrm{d}\mathscr{H}^{n}}. \end{aligned} \] Recall that the set of test functions is dense in $H^{1,2}$.
Therefore, we deduce that $R_x A_x f\in D(\Delta)$ and $\Delta R_x A_x f=R_x A_x \Delta f$. This finding verifies that \begin{equation}\label{4.34} \frac{\partial}{\partial t} \tilde{\rho}(x,\cdot,t)=\frac{\partial}{\partial t}R_x A_x\rho(x,\cdot,t)=R_x A_x\frac{\partial}{\partial t}\rho(x,\cdot,t)=R_x A_x \Delta \rho(x,\cdot,t)=\Delta \tilde{\rho}(x,\cdot,t). \end{equation} In particular, for any $f\in \mathrm{Lip}(X,\mathsf{d})$, setting \[ \tilde{\mathrm{h}}_t f:x\mapsto \int_X \tilde{\rho}(y,x,t)f(y)\mathop{\mathrm{d}\mathscr{H}^n(y)}, \] we will show that \begin{equation}\label{4.32} \left\| \tilde{\mathrm{h}}_t f-f\right\|_{L^2}\rightarrow 0\ \ \text{ as $t\rightarrow 0$}. \end{equation} The first step is to show that \begin{equation}\label{4.31} \left|\int_X \varphi \, \tilde{\mathrm{h}}_t f \mathop{\mathrm{d}\mathscr{H}^n}-\int_X \varphi f \mathop{\mathrm{d}\mathscr{H}^n}\right|\rightarrow 0\ \ \text{as }\ t\rightarrow 0,\ \forall \varphi\in L^2. \end{equation} For any $y\in X$, it is clear that $\mathrm{Lip}(R_y A_y f)\leqslant \mathrm{Lip}\,f$. Then the Bakry-\'Emery estimate (see for instance \cite[Thm. 6.1.4]{G18a}) implies $\left\|\left|\nabla \mathrm{h}_t (R_y A_y f)\right|\right\|_{L^\infty}\leqslant \exp(-Kt)\left\| \mathrm{h}_t(|\nabla R_y A_y f|^2)\right\|_{L^\infty}^{\frac{1}{2}}\leqslant \exp(-Kt)\,\mathrm{Lip}\,f$. As a result, the $L^2$-convergence $\mathrm{h}_t (R_y A_y f)\rightarrow R_y A_y f$ can be improved to uniform convergence. In particular, we have $(\mathrm{h}_t (R_y A_y f))(y)\rightarrow (R_y A_y f)(y)=\lim_{z\rightarrow y} (R_y A_y f)(z)=f(y)$.
According to Fubini's theorem, \[ \begin{aligned} \int_X \varphi \, \tilde{\mathrm{h}}_t f \mathop{\mathrm{d}\mathscr{H}^n}&=\int_X\int_X \varphi(y) R_y A_y\rho(y,\cdot,t)(x)f(x) \mathop{\mathrm{d}\mathscr{H}^n(x)}\mathop{\mathrm{d}\mathscr{H}^n(y)}\\ &=\int_X \varphi(y)\int_0^D \fint_{\partial B_r(y)} \rho(y,z,t)\mathop{\mathrm{d}\mathscr{H}^{n-1}(z)}\int_{\partial B_r(y)}f(x) \mathop{\mathrm{d}\mathscr{H}^{n-1}(x)}\mathop{\mathrm{d}r}\mathop{\mathrm{d}\mathscr{H}^n(y)}\\ &=\int_X \varphi(y) \int_0^D\int_{\partial B_r(y)} \rho(y,z,t)(R_y A_y f)(z)\mathop{\mathrm{d}\mathscr{H}^{n-1}(z)}\mathop{\mathrm{d}r}\mathop{\mathrm{d}\mathscr{H}^n(y)}\\ &=\int_X \varphi(y) (\mathrm{h}_t (R_y A_y f))(y)\mathop{\mathrm{d}\mathscr{H}^n(y)}. \end{aligned} \] Thus (\ref{4.31}) is derived from the observation that $\int_X (\mathrm{h}_t (R_y A_y f))^2(y)\mathop{\mathrm{d}\mathscr{H}^n(y)}\leqslant \mathscr{H}^n(X)(\sup_X f)^2<\infty$, combined with the dominated convergence theorem. In particular, one can prove (\ref{4.32}) through the following calculation. \[ \begin{aligned} \int_X (\tilde{\mathrm{h}}_t f)^2 \mathop{\mathrm{d}\mathscr{H}^n}&=\int_X \tilde{\mathrm{h}}_t f(y) (\mathrm{h}_t (R_y A_y f))(y)\mathop{\mathrm{d}\mathscr{H}^n(y)}\\ &=\int_X (\mathrm{h}_t (R_y A_y f))^2(y)\mathop{\mathrm{d}\mathscr{H}^n(y)}\rightarrow \int_X f^2\mathop{\mathrm{d}\mathscr{H}^n},\ \ \text{as}\ t\rightarrow 0. \end{aligned} \] Due to (\ref{4.34}), (\ref{4.32}) and the uniqueness of the solution to the heat equation, for any $s,t>0$, since $\mathrm{h}_s f$ is a test function, it follows that $\tilde{\mathrm{h}}_t\mathrm{h}_s f=\mathrm{h}_{t+s}f$. Moreover, by letting $s\rightarrow 0$ and using the dominated convergence theorem again, we see $\tilde{\mathrm{h}}_t f=\mathrm{h}_{t}f$. Ultimately, considering the density of the set of test functions in $H^{1,2}$ (and hence in $L^2$) we can conclude \[ \rho(x,y,t)=\tilde{\rho}(y,x,t),\ \forall x,y\in X,\ \forall t>0.
\] This verifies the strong harmonicity of $(X,\mathsf{d},\mathscr{H}^n)$, which, in fact, implies smoothness due to Corollary \ref{cor1.5}. \end{proof} \bibliographystyle{alpha} \bibliography{reff} \bigskip \end{document}
2501.04026v2
http://arxiv.org/abs/2501.04026v2
Counting the number of integral fixed points of a discrete dynamical system with applications from arithmetic statistics, I
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{dirtytalk} \usepackage{tikz} \usepackage{geometry} \geometry{ a4paper, total={170mm,257mm}, left=20mm, top=20mm, } \usepackage{graphicx} \usepackage{enumitem} \usepackage{parskip} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{xcolor} \usepackage{hyperref} \newlist{myitemize}{itemize}{1} \setlist[myitemize,1]{leftmargin = 0.5in} \hypersetup{colorlinks = true, linkcolor = red, linktocpage} \newcommand{\eqname}[1]{\tag*{#1}} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem*{thm*}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{hyp}[thm]{Hypothesis} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{exmp}[thm]{Example} \newtheorem{rem}[thm]{Remark} \newtheorem*{note}{Note} \setlength{\parindent}{1cm} \title{\textbf{\small{COUNTING THE NUMBER OF INTEGRAL FIXED POINTS OF A DISCRETE DYNAMICAL SYSTEM WITH APPLICATIONS FROM ARITHMETIC STATISTICS, I}}} \author{\footnotesize{BRIAN KINTU}} \date{\small{\textit{Happily Dedicated: Dept. of MCS \& Presidency of Meric S. Gertler at the University of Toronto}}} \begin{document} \maketitle \begin{abstract} \small{In this article, we inspect a surprising relationship between the set of fixed points of a polynomial map $\varphi_{d, c}$ defined by $\varphi_{d, c}(z) = z^d + c$ for all $c, z \in \mathbb{Z}$ and the coefficient $c$, where $d > 2$ is an integer.
Inspired greatly by the elegance of the counting problems along with the very striking results of Bhargava-Shankar-Tsimerman and their collaborators in arithmetic statistics and by an interesting point-counting result of Narkiewicz on rational periodic points of $\varphi_{d, c}$ for any odd degree $d>2$ in arithmetic dynamics, we prove that for any given prime integer $p\geq 3$, the average number of distinct integral fixed points of any $\varphi_{p, c}$ modulo $p$ is 3 or 0, as $p$ tends to infinity. Motivated further by the same down-to-earth work of (BST) and by a conjecture of Hutz on rational periodic points of $\varphi_{p-1, c}$ for any given prime integer $p\geq 5$ in arithmetic dynamics, we then also prove unconditionally that the average number of distinct integral fixed points of any $\varphi_{p-1, c}$ modulo $p$ is $1$ or $2$ or $0$, as $p$ tends to infinity. Moreover, we also show that a density of $0\%$ of integer polynomials $\varphi_{p, c}(x)$ have three fixed points modulo $p$, as $c$ tends to infinity; and also show this same density on integer polynomials $\varphi_{p-1, c}(x)$ with one or two fixed points modulo $p$. Consequently, a density of $100\%$ of integer polynomials $f(x): = x^p -x + c$ gives rise to odd prime degree-$p$ number fields $K_{f}:=\mathbb{Q}[x]\slash (f(x))$; and similarly a density of $100\%$ of integer polynomials $g(x): = x^{p-1}-x +c$ induces even degree-$p-1$ number fields $L_{g}:=\mathbb{Q}[x]\slash (g(x))$. 
Since each of the fields $K_{f}$ and $L_{g}$ comes naturally with the ring $\mathcal{O}_{K_{f}}$ and $\mathcal{O}_{L_{g}}$ of integers, resp., applying a density result of Bhargava-Shankar-Wang on our integer irreducibles $f(x)$ and $g(x)$ yields that a density equal to $\zeta(2)^{-1}$ of polynomials $f(x)$ is such that $\mathbb{Z}[x]\slash (f(x))$ is the ring $\mathcal{O}_{K_{f}}$; and similarly a density equal to $\zeta(2)^{-1}$ of polynomials $g(x)$ is such that $\mathbb{Z}[x]\slash (g(x))$ is the ring $\mathcal{O}_{L_{g}}$ of integers.} \end{abstract} \begin{center} \tableofcontents \end{center} \begin{center} \section{Introduction} \end{center} \noindent Consider any morphism $\varphi: {\mathbb{P}^N(K)} \rightarrow {\mathbb{P}^N(K)} $ of degree $d \geq 2$ defined on a projective space ${\mathbb{P}^N(K)}$ of dimension $N$, where $K$ is a number field. Then for any $n\in\mathbb{Z}_{\geq 0}$ and $\alpha\in\mathbb{P}^N(K)$, we call $\varphi^n = \underbrace{\varphi \circ \varphi \circ \cdots \circ \varphi}_\text{$n$ times}$ the $n^{th}$ \textit{iterate of $\varphi$} and call $\varphi^n(\alpha)$ the \textit{$n^{th}$ iteration of $\varphi$ on $\alpha$}. By convention, $\varphi^{0}$ acts as the identity map, i.e., $\varphi^{0}(\alpha) = \alpha$ for every point $\alpha\in {\mathbb{P}^N(K)}$. The everyday philosopher may want to know (quoting here Devaney \cite{Dev}): \say{\textit{Where do points $\alpha, \varphi(\alpha), \varphi^2(\alpha), \ \cdots\ ,\varphi^n(\alpha)$ go as $n$ becomes large, and what do they do when they get there?}} So now, for any given integer $n\geq 0$ and any given point $\alpha\in {\mathbb{P}^N(K)}$, we then call the set consisting of all the iterates $\varphi^n(\alpha)$ the \textit{(forward) orbit of $\alpha$}, which in the theory of dynamical systems we usually denote by: \begin{equation} \mathcal{O}^{+}(\alpha):= \{\varphi^n(\alpha) : n \in \mathbb{Z}_{\geq 0} \}.
\end{equation} One of the principal goals in the area of \say{arithmetic dynamics}, a newly emerging area of mathematics concerned with studying number-theoretic properties of discrete dynamical systems, is to classify all the points $\alpha\in\mathbb{P}^N(K)$ according to the behavior of their forward orbits $\mathcal{O}^{+}(\alpha)$. In this direction, we call any point $\alpha\in {\mathbb{P}^N(K)}$ a \textit{periodic point of $\varphi$}, whenever $\varphi^n (\alpha) = \alpha$ for some $n\in \mathbb{Z}_{\geq 1}$; and call any integer $n\geq 1$ such that $\varphi^n (\alpha) = \alpha$ a \textit{period of $\alpha$}, and the smallest such integer is then called the \textit{exact period of $\alpha$}. We denote the set of all periodic points of $\varphi$ by Per$(\varphi, {\mathbb{P}^N(K)})$; and for any given point $\alpha\in$Per$(\varphi, {\mathbb{P}^N(K)})$ we then call the set of all iterates of $\varphi$ on $\alpha$, \textit{a periodic orbit of $\alpha$}. In Sect. \ref{sec4}, \ref{sec5}, \ref{sec6}, \ref{sec7} and \ref{sec8}, we study counting questions that are greatly inspired by all the beautiful work of Bhargava-Shankar-Tsimerman and their collaborators in the area of \say{arithmetic statistics}, an area of mathematics concerned with studying the distributions of arithmetic objects (quantities); and among such questions is the natural question: \say{\textit{How many distinct fixed orbits can any $\varphi_{p,c}$ and $\varphi_{p-1,c}$ acting independently on the space $\mathbb{Z} / p\mathbb{Z}$ via iteration have on average, as $p\to \infty$?}} In doing so, we first prove that conditioning on a theorem of Narkiewicz (Theorem \ref{theorem 3.2.1}) yields the following main theorem on maps $\varphi_{p,c}$ for any given prime $p\geq 3$; which we state later more precisely as Theorem \ref{2.2}: \begin{thm}\label{Binder-Brian1} Let $p\geq 3$ be any fixed prime integer, and assume Theorem \ref{theorem 3.2.1}.
Let $\varphi_{p, c}$ be a map defined by $\varphi_{p, c}(z) = z^p+c$ for all $c, z\in\mathbb{Z}$. Then the number of distinct integral fixed points of any $\varphi_{p,c}$ modulo $p$ is $3$ or $0$. \end{thm} Inspired further by the activity and by all the down-to-earth work of Bhargava-Shankar-Tsimerman in arithmetic statistics, by an intriguing unresolved conjecture of Hutz (Conjecture \ref{conjecture 3.2.1}) on rational periodic points of maps $\varphi_{d, c}$ of any even degree $d> 2$ (though not, in our case, attempting to prove his Conjecture \ref{conjecture 3.2.1}), and by recent work of Panraksa \cite{par2} in arithmetic dynamics, we revisit the setting in Sect. \ref{sec2} and consider in Sect. \ref{sec3} any $\varphi_{p-1,c}$ of any even degree $p-1$ iterated on $\mathbb{Z}\slash p\mathbb{Z}$ for any prime integer $p\geq 5$. In doing so, we prove unconditionally the following main theorem on any $\varphi_{p-1,c}$; which we state more precisely as Theorem \ref{6.0.2}: \begin{thm}\label{Binder-Brian2} Let $p\geq 5$ be any fixed prime integer, and let $\varphi_{p-1, c}$ be a map defined by $\varphi_{p-1, c}(z) = z^{p-1}+c$ for all $c, z\in\mathbb{Z}$. Then the number of distinct integral fixed points of any map $\varphi_{p-1,c}$ modulo $p$ is $1$ or $2$ or $0$. \end{thm} \noindent Notice that the obtained count in Thm \ref{Binder-Brian2} on the number of distinct integral fixed points of any $\varphi_{p-1,c}$ modulo $p$ is independent of $p$ (and hence independent of the degree of $\varphi_{p-1,c}$) in each of the three possibilities.
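The count in Theorem \ref{Binder-Brian2} is easy to sanity-check numerically: by Fermat's little theorem, $z^{p-1}\equiv 1 \pmod{p}$ when $p\nmid z$ and $z^{p-1}\equiv 0 \pmod{p}$ otherwise, so a fixed residue must satisfy $z\equiv c+1$ (if $p\nmid z$) or $z\equiv 0$ with $p\mid c$. The following brute-force sketch (our own illustration, not part of the paper's argument) confirms that the count is always $1$, $2$ or $0$ for small primes:

```python
# Brute-force sanity check (illustration only, not part of the paper's proof):
# count the residues z mod p fixed by phi_{p-1,c}(z) = z^{p-1} + c, for every c mod p.

def fixed_point_count(d, c, p):
    """Number of z in {0, ..., p-1} with z^d + c == z (mod p)."""
    return sum(1 for z in range(p) if (pow(z, d, p) + c) % p == z)

for p in (5, 7, 11, 13):
    counts = [fixed_point_count(p - 1, c, p) for c in range(p)]
    # By Fermat's little theorem one finds 2 fixed points when p | c,
    # 0 when c is congruent to p - 1, and exactly 1 otherwise.
    assert all(n in (0, 1, 2) for n in counts)
```

Running the loop over larger primes behaves the same way, matching the independence of $p$ observed above.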
Moreover, we may also observe that the expected total count (namely, $1+2+0 =3$) in Thm \ref{Binder-Brian2} on the number of distinct integral fixed points in the whole family of maps $\varphi_{p-1,c}$ modulo $p$ is also independent of $p$ and deg$(\varphi_{p-1,c})$; an observation which somewhat surprisingly coincides not only with a similar observation on the count in each of the two possibilities in Thm \ref{Binder-Brian1} but also with a similar observation on the expected total count (namely, $3+0 =3$) on the number of distinct integral fixed points in the whole family of maps $\varphi_{p,c}$ modulo $p$. We know the inclusion $\mathbb{Z}\hookrightarrow\mathbb{Z}_{p}$ of rings, and so the space $\mathbb{Z}_{p}$ of all $p$-adic integers is evidently a much larger space than $\mathbb{Z}$. So then, inspired by the work of Adam-Fares \cite{Ada} in arithmetic dynamics and again by a \say{counting-application} philosophy in arithmetic statistics, we again inspect in the paper \cite{BK3} the aforementioned relationship in the setting where it is $\mathbb{Z}_{p}$ that's considered. Interestingly, we again obtain the same counting and asymptotics in the setting when $\varphi_{p-1,c}$ is iterated on $\mathbb{Z}_{p}\slash p\mathbb{Z}_{p}$, and get very different counting and asymptotics in the case when $\varphi_{p,c}$ is iterated on $\mathbb{Z}_{p}\slash p\mathbb{Z}_{p}$. Motivated by a $K$-rational periodic point-counting result of Narkiewicz \cite{Narkie} on maps $\varphi_{p,c}$ defined over any real algebraic number field $K$ of degree $n\geq 2$, in a forthcoming work \cite{BK2}, we revisit the counting setting in Section \ref{sec2} and then consider any $\varphi_{p^{\ell},c}$ defined over any real algebraic number field $K\slash \mathbb{Q}$ of any degree $n\geq 2$ for any prime $p\geq 3$ and any integer $\ell \geq 1$.
In doing so, we show that conditioning on Narkiewicz' theorem and again using the same elementary counting technique, we can obtain a fixed integral point-counting result that is not only independent of the degree $n$ of any real $K$ and any degree $p^{\ell}$ of any map $\varphi_{p^{\ell},c}$, but is also very analogous to Theorem \ref{Binder-Brian1} for any real number field $K$, any odd prime $p$ and any integer $\ell$. Moreover, in that work \cite{BK2}, we again revisit the counting setting in Section \ref{sec3} and then consider any polynomial map $\varphi_{(p-1)^{\ell},c}$ defined over any algebraic number field $K\slash \mathbb{Q}$ (where $K$ needn't be a real algebraic number field) of any degree $n\geq 2$, for any prime integer $p\geq 5$ and any integer $\ell \geq 1$. In doing so, we find that we can again obtain a fixed integral point-counting result that's not only independent of both the degree $n$ of any number field $K$ and any prime $p$, but is also very similar to Theorem \ref{Binder-Brian2} for any given algebraic number field $K$, any prime $p$ and any integer $\ell$. It's worth mentioning that there are several authors in the literature who have also done an explicit study on orbits; among such authors are Silverman \cite{Sil}, on the number of integral points in forward orbits of rational functions of degree $\geq 2$ defined over the field $\mathbb{C}$, and Wade \cite{Wade}, on the average number of integral points in forward orbits of rational functions of degree $\geq 2$ defined over $K$. In his beautiful 1993 paper \cite{Sil}, Silverman showed [\cite{Sil}, Theorem A] that the forward orbit of a rational function of degree $\geq 2$ whose second iterate is not a polynomial over $\mathbb{C}$, contains only finitely many integer points.
Motivated by a point-counting result of Silverman \cite{Sil} and conditioning on a standard height uniformity conjecture in arithmetic geometry, Wade \cite{Wade} established a zero-average result on the number of integral points in the forward orbit of a rational function of degree $\geq 2$ defined over $K$. Now observe that in each of the works \cite{Sil} and \cite{Wade}, the focus is on understanding very well the integral points lying in forward orbits. In our case, the focus is on understanding a somewhat less thorny problem, namely, as we see from the above (and as we also did in \cite{BK0}, where we considered any arbitrary family of quadratic maps $\varphi_{2,c}$ iterating on $\mathbb{Z}\slash p\mathbb{Z}$ for any prime $p\geq 2$): the problem of counting fixed orbits (on average), through which we then hope to understand the statistical behavior, along with the meaning, of the achieved count. Such a problem may be of some interest in the area of classical dynamical systems, since one of the main objectives in that area is to understand \textit{all} orbits via topological and analytic techniques. In doing so, one may not only find that orbits can easily get very complicated, but also that interesting and important statistical questions concerning measuring the complexity of a dynamical system, in particular, the question of determining the topological entropy of a given system (which the reader need not worry about at all, since we won't be inspecting such a question in this article), may become intractable. \begin{rem} Loosely speaking, recall that \say{topological entropy} is a nonnegative statistic that gives a way of measuring the exponential growth rate of distinguishable orbits of a dynamical system as time progresses. More details about topological entropy may be found independently in the work of Adler \cite{Adl} and Bowen \cite{Ruf}.
\end{rem} Now, before we proceed any further in our discussion, let's first take a very quick look at the following classical example of a polynomial morphism with a rational periodic point and a rational periodic orbit: \begin{exmp} \label{ex 1.1} Consider $\varphi_{2, -21/16}$ defined by $\varphi_{2, -21/16}(z) = z^2 -21/16$ for all $z \in\mathbb{Q}$. Then if we compute all iterations of $\varphi_{2, -21/16}$ on $z = 1/4$, we get that $\mathcal{O}^{+}(1/4)$ consists of points $\varphi^0(1/4) = 1/4$, $\varphi^1(1/4) = -5/4$, $\varphi^2(1/4) = 1/4$ and so on; and moreover the points in $\mathcal{O}^{+}(1/4)$ form a sequence of the following form: \begin{center} $\{1/4 \longrightarrow -5/4 \} \longrightarrow \{1/4 \longrightarrow -5/4\}\longrightarrow \{1/4 \longrightarrow -5/4\} \longrightarrow \cdots$ \end{center} So in this example, $z = 1/4$ is a periodic point of the map $\varphi_{2, -21/16}$ with exact period 2, and from the above definition, $\mathcal{O}^{+}(1/4)$ is a rational periodic orbit (i.e., $\mathcal{O}^{+}(1/4)$ is a periodic orbit consisting of rational points). \end{exmp} In addition to the notion of a periodic point and a periodic orbit, we also have in dynamical systems a more complicated but somewhat related notion of a preperiodic point and a preperiodic orbit. We call a point $\alpha\in {\mathbb{P}^N(K)}$ a \textit{preperiodic point of $\varphi$}, whenever $\varphi^{m+n}(\alpha) = \varphi^{m}(\alpha)$ for some integers $m\geq 0$ and $n\geq 1$. In this case, the smallest integers $m\geq 0$ and $n\geq 1$ such that the equation $\varphi^{m+n}(\alpha) = \varphi^{m}(\alpha)$ happens, are called the \textit{preperiod} and \textit{eventual period of $\alpha$}, respectively. Again, we denote the set of preperiodic points of $\varphi$ by PrePer$(\varphi, {\mathbb{P}^N(K)})$. For any given preperiodic point $\alpha$ of $\varphi$, we then call the set of all iterates of $\varphi$ on $\alpha$, \textit{the preperiodic orbit of $\alpha$}.
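The orbit computation in Example \ref{ex 1.1} is easy to verify with exact rational arithmetic; the following brute-force sketch (our own illustration, not part of the text) iterates $\varphi_{2, -21/16}$ on $z = 1/4$:

```python
# Exact-arithmetic check of Example 1.1 (illustration only, not part of the text).
from fractions import Fraction

def phi(z, c=Fraction(-21, 16)):
    """The quadratic map phi_{2,c}(z) = z^2 + c."""
    return z * z + c

orbit = [Fraction(1, 4)]
for _ in range(4):
    orbit.append(phi(orbit[-1]))

# The forward orbit cycles 1/4 -> -5/4 -> 1/4 -> ..., so z = 1/4 has exact period 2.
assert orbit[:4] == [Fraction(1, 4), Fraction(-5, 4), Fraction(1, 4), Fraction(-5, 4)]
```

Using `fractions.Fraction` avoids floating-point round-off, so the detected cycle is exact.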
Now observe that for $m=0$ we have $\varphi^{n}(\alpha) = \alpha $ and so $\alpha$ is a periodic point of period $n$. Hence, every periodic point of $\varphi$ is also a preperiodic point of $\varphi$, i.e., Per$(\varphi, {\mathbb{P}^N(K)}) \subseteq$ PrePer$(\varphi, {\mathbb{P}^N(K)})$; however, it need not be that PrePer$(\varphi, {\mathbb{P}^N(K)})\subseteq$ Per$(\varphi, {\mathbb{P}^N(K)})$ as illustrated by the following classical example: \begin{exmp} Consider $\varphi_{2, -29/16}$ defined by $\varphi_{2, -29/16}(z) = z^2 -29/16$ for all $z\in \mathbb{Q}$. Then if we compute all iterations of $\varphi_{2, -29/16}$ on $z = 3/4$, we get that $\mathcal{O}^{+}(3/4)$ consists of points $\varphi^0(3/4) = 3/4$, $\varphi^1(3/4) = -5/4$, $\varphi^2(3/4) = -1/4$ and so on; and moreover the points in $\mathcal{O}^{+}(3/4)$ form a sequence of the following form: \begin{center} $3/4 \longrightarrow -5/4 \longrightarrow \{-1/4 \longrightarrow -7/4 \longrightarrow 5/4\} \longrightarrow \{-1/4 \longrightarrow -7/4 \longrightarrow 5/4\}\longrightarrow \cdots$ \end{center} So in this example, we conclude that $z = 3/4$ is a rational preperiodic point of $\varphi_{2, -29/16}$ with preperiod $m=2$ and eventual period $n=3$; and $\mathcal{O}^{+}(3/4)$ is a preperiodic orbit. Also, since the preperiod is $m= 2 \neq 0$, $z = 3/4$ is not a periodic point. As another example of a preperiodic point, recall in Example \ref{ex 1.1} that $z = 1/4$ is a periodic point of $\varphi_{2, -21/16}$, and since every periodic point is a preperiodic point, then $z = 1/4$ is a $\mathbb{Q}$-preperiodic point of $\varphi_{2, -21/16}$. Other interesting examples of $\mathbb{Q}$-preperiodic points may be found in Poonen's work \cite{Poonen}.
\end{exmp} In 1950, Northcott \cite{North} used the theory of height functions to show not only that the set PrePer$(\varphi, {\mathbb{P}^N(K)})$ is always finite, but also that for a given morphism $\varphi$ the set PrePer$(\varphi, {\mathbb{P}^N(K)})$ can be computed effectively. Forty-five years later, in 1995, Morton and Silverman conjectured that the size of PrePer$(\varphi, \mathbb{P}^N(K))$ can be bounded in terms of the degree $d$ of $\varphi$, the degree $D$ of $K$, and the dimension $N$ of the space ${\mathbb{P}^N(K)}$. This celebrated conjecture is called the \textit{Uniform Boundedness Conjecture}; we restate it here as the following conjecture: \begin{conj} \label{silver-morton}[\cite{Morton}] Fix integers $D \geq 1$, $N \geq 1$, and $d \geq 2$. There exists a constant $C'= C'(D, N, d)$ such that for all number fields $K/{\mathbb{Q}}$ of degree at most $D$, and all morphisms $\varphi: {\mathbb{P}^N}(K) \rightarrow {\mathbb{P}^N}(K)$ of degree $d$ defined over $K$, the total number of preperiodic points of $\varphi$ is at most $C'$, i.e., \#PrePer$(\varphi, \mathbb{P}^N(K)) \leq C'$. \end{conj} \noindent Note that a special case of Conjecture \ref{silver-morton} is the case in which the degree of the number field $K$ is $D = 1$, the dimension of the space $\mathbb{P}^N(K)$ is $N = 1$, and the degree of the morphism $\varphi$ is $d = 2$. In this case, if $\varphi$ is a polynomial morphism, then it is a quadratic map defined over the field $\mathbb{Q}$. Moreover, in this very special case, in 1995, Flynn, Poonen, and Schaefer conjectured that a quadratic map has no points $z\in\mathbb{Q}$ with exact period more than 3.
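For instance, exact period $3$ does occur over $\mathbb{Q}$: the 3-cycle $\{-1/4, -7/4, 5/4\}$ of $\varphi_{2, -29/16}$ from the preceding example can be verified directly; a short Python sketch (ours, for illustration):

```python
from fractions import Fraction

# Sketch: verify that -1/4 lies on a cycle of exact period 3 under
# phi(z) = z^2 - 29/16, i.e. the rational 3-cycle {-1/4, -7/4, 5/4}.
c = Fraction(-29, 16)
phi = lambda z: z * z + c

z0 = Fraction(-1, 4)
assert phi(phi(phi(z0))) == z0   # the period divides 3
assert phi(z0) != z0             # the period is not 1; as 3 is prime,
                                 # the exact period is therefore 3
print(phi(z0), phi(phi(z0)))     # -7/4 5/4
```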
This conjecture of Flynn-Poonen-Schaefer \cite{Flynn} has been resolved for the cases $n = 4$, $5$ in \cite{mor, Flynn}, respectively, and conditionally for $n=6$ in \cite{Stoll}; it is, however, still open for all cases $n\geq 7$, and Hutz-Ingram \cite{Ingram} gave strong computational evidence supporting it. Note that in this same special case, rational points of exact period $n\in \{1, 2, 3\}$ were first found in 1994 by Russo-Walde \cite{Russo} and were also found in 1995 by Poonen \cite{Poonen} using a different set of techniques. We restate the conjecture of Flynn-Poonen-Schaefer as the following conjecture: \begin{conj} \label{conj:2.4.1}[\cite{Flynn}, Conjecture 2] If $n \geq 4$, then there is no quadratic polynomial $\varphi_{2,c }(z) = z^2 + c\in \mathbb{Q}[z]$ with a rational point of exact period $n$. \end{conj} By assuming Conjecture \ref{conj:2.4.1} and also establishing interesting results on preperiodic points, Poonen \cite{Poonen} concluded in 1998 that the total number of rational preperiodic points of any quadratic polynomial $\varphi_{2, c}(z)$ is at most nine. We restate Poonen's result as the following corollary: \begin{cor}\label{cor2}[\cite{Poonen}, Corollary 1] If Conjecture \ref{conj:2.4.1} holds, then $\#$PrePer$(\varphi_{2,c}, \mathbb{Q}) \leq 9$ for all quadratic maps $\varphi_{2, c}$ defined by $\varphi_{2, c}(z) = z^2 + c$ with $c, z\in\mathbb{Q}$. \end{cor} Since Per$(\varphi, {\mathbb{P}^N(K)}) \subseteq$ PrePer$(\varphi, {\mathbb{P}^N(K)})$, if the size of PrePer$(\varphi, \mathbb{P}^N(K))$ is bounded above, then the size of Per$(\varphi, \mathbb{P}^N(K))$ is also bounded above, and by the same upper bound.
We may therefore extract a periodic version of Conjecture \ref{silver-morton}; the reason we do so is that in Sections \ref{sec2} and \ref{sec3} we study a dynamical setting in which $K$ is replaced with $\mathbb{Z}$, $N=1$, and the degree is $d=p$ for any given prime integer $p> 2$ or $d = p-1$ for any given prime $p>3$, respectively, in an attempt to understand (without claiming to prove) the possibility and validity of a periodic version of Conjecture \ref{silver-morton}. \subsection*{History on the Connection Between the Size of Per$(\varphi_{d, c}, K)$ and the Coefficient $c$} In 1994, Walde and Russo proved [\cite{Russo}, Corollary 4] not only that for a quadratic map $\varphi_{2,c}$ defined over $\mathbb{Q}$ with a periodic point, the denominator of the rational point $c$, denoted den$(c)$, is a square, but also that den$(c)$ is even whenever $\varphi_{2,c}$ admits a rational cycle of length $\ell \geq 3$. Moreover, Walde-Russo also proved [\cite{Russo}, Cor. 6, Thm 8 and Cor. 7] that \#Per$(\varphi_{2, c}, \mathbb{Q})\leq 2$ whenever den$(c)$ is an odd integer. Three years later, in 1997, Call-Goldstine \cite{Call} proved that the size of the set PrePer$(\varphi_{2,c},\mathbb{Q})$ of rational preperiodic points of a quadratic map $\varphi_{2, c}$ can be bounded above in terms of the number of distinct odd primes dividing the denominator of a point $c\in\mathbb{Q}$. We restate this result of Call and Goldstine as the following theorem. Note that in this theorem, $GCD(a, e)$ refers to the greatest common divisor of $a$, $e \in \mathbb{Z}$: \begin{thm}\label{2.3.1}[\cite{Call}, Theorem 6.9] Let $e>0$ be an integer and let $s$ be the number of distinct odd prime factors of $e$. Define $\varepsilon = 0$ if $4\nmid e$; $\varepsilon = 1$ if $4\mid e$ and $8 \nmid e$; and $\varepsilon = 2$ if $8 \mid e$. Let $c = a/e^2$, where $a\in \mathbb{Z}$ and $GCD(a, e) = 1$.
If $c \neq -2$, then the total number of $\mathbb{Q}$-preperiodic points of $\varphi_{2, c}$ is at most $2^{s + 2 + \varepsilon} + 1$. Moreover, the quadratic map $\varphi_{2, -2}$ has exactly six rational preperiodic points. \end{thm} \noindent Recall that the set Per$(\varphi_{2,c}, \mathbb{Q})$ is always a subset of PrePer$(\varphi_{2,c}, \mathbb{Q})$, and by Theorem \ref{2.3.1} the number of elements in PrePer$(\varphi_{2,c}, \mathbb{Q})$ is at most $2^{s + 2 + \varepsilon} + 1$ for any rational point $c\neq -2$. Hence, for any $c\neq -2$, the size of Per$(\varphi_{2,c}, \mathbb{Q})$ is also bounded above by $2^{s + 2 + \varepsilon} + 1$, a constant depending only on the number of distinct odd prime factors of den$(c)$ and on the quantity $\varepsilon$. On the other hand, for $c = -2$ (in which case den$(c) = 1$, which has no prime factors), Theorem \ref{2.3.1} tells us that \#PrePer$(\varphi_{2,-2}, \mathbb{Q}) = 6$ and so \#Per$(\varphi_{2,-2}, \mathbb{Q}) \leq 6$. Eight years after the work of Call-Goldstine, in 2005, Benedetto \cite{detto} conducted a detailed analysis of the filled Julia set and substantially improved the bounds on the size of PrePer$(\varphi, K)$ that had been established by the earlier work of Call-Goldstine and of several others in the literature.
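To make the bound in Theorem \ref{2.3.1} concrete, the quantity $2^{s+2+\varepsilon}+1$ can be computed directly from $e$; the following Python sketch (the function name is ours, for illustration only) does so:

```python
from math import gcd

# Sketch of the bound in Theorem 6.9 of Call-Goldstine: for c = a/e^2
# in lowest terms, #PrePer(phi_{2,c}, Q) <= 2^(s + 2 + eps) + 1, where
# s counts the distinct odd prime factors of e.
def call_goldstine_bound(a, e):
    assert e > 0 and gcd(abs(a), e) == 1
    s, m = 0, e
    while m % 2 == 0:          # strip the even part of e
        m //= 2
    p = 3
    while p * p <= m:          # count distinct odd prime factors
        if m % p == 0:
            s += 1
            while m % p == 0:
                m //= p
        p += 2
    if m > 1:
        s += 1
    eps = 0 if e % 4 else (1 if e % 8 else 2)
    return 2 ** (s + 2 + eps) + 1

print(call_goldstine_bound(-21, 4))  # c = -21/16: bound 2^(0+2+1) + 1 = 9
```

For instance, $c = -21/16$ has $e = 4$, $s = 0$ and $\varepsilon = 1$, giving the bound $2^{3}+1 = 9$, consistent with Poonen's conjectural bound of nine above.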
In particular, working only with polynomial maps $\varphi$ and only in dimension $N=1$, but allowing arbitrary degree $d\geq 2$ and an arbitrary global field $K$ (i.e., $K$ is a number field or a function field defined over a finite field), Benedetto established the following remarkable result on the relationship between the size of the set PrePer$(\varphi, K)$ and the number of bad primes of $\varphi$ in $K$ (note that when the morphism $\varphi = \varphi_{d,c}$ for a point $c\in K$, the \textit{bad primes} of $\varphi_{d,c}$ in $K$ are precisely the primes dividing the denominator of the $K$-point $c$): \begin{thm}\label{main} [\cite{detto}, Main Theorem] Let $K$ be a global field, $\varphi\in K[z]$ be a polynomial of degree $d\geq 2$ and $s$ be the number of bad primes of $\varphi$ in $K$. The number of preperiodic points of $\varphi$ in $\mathbb{P}^1(K)$ is at most $O(s \log s)$. \end{thm} \noindent Since from our earlier discussion the set of periodic points of any given morphism $\varphi$ is always a subset of the set of preperiodic points of $\varphi$, it follows that if $\varphi$ is a polynomial morphism of degree $d\geq 2$ defined over a global field $K$, then Thm \ref{main} also shows that the total number of periodic points of $\varphi$ is $\ll s \log s$, i.e., at most a positive constant times $s \log s$. Notice that, setting $\varphi(z) = \varphi_{d, c}(z) = z^d + c\in K[z]$ and $K = \mathbb{Q}$, Thm \ref{main} shows that the size of Per$(\varphi_{d, c}, \mathbb{Q})$ is $\ll s \log s$.
Moreover, since Thm \ref{main} applies to any $\varphi(z) = \varphi_{d, c}(z)$ of arbitrary degree $d\geq 2$ defined over $\mathbb{Q}$, it can in particular be applied to any $\varphi_{d, c}(z)$ of prime or even degree $d\geq 2$; one then obtains the upper bound in Thm \ref{main} on the number of $\mathbb{Q}$-periodic points and hence on the number of $\mathbb{Z}$-periodic points. Hence, we again see from Thm \ref{main} that the size of the set Per$(\varphi_{d, c}, \mathbb{Q})$, and thus of Per$(\varphi_{d, c}, \mathbb{Z})$, is bounded above using arithmetic information on the prime divisors of the coefficients of the polynomial $\varphi_{d, c}(z)$. Seven years after the work of Benedetto, in 2012, Narkiewicz \cite{Narkie} not only showed that any $\varphi_{d,c}$ defined over $\mathbb{Q}$ with odd degree $d\geq 3$ has no rational periodic points of exact period $n > 1$, but also showed that the total number of $\mathbb{Q}$-preperiodic points is at most 4. We restate this result here as the following: \begin{thm} \label{theorem 3.2.1}\cite{Narkie} For any integer $n > 1$ and any odd integer $d\geq 3$, there is no $c\in \mathbb{Q}$ such that $\varphi_{d,c}$ defined by $\varphi_{d, c}(z) = z^d + c$ for all $z \in \mathbb{Q}$ has rational periodic points of exact period $n$. Moreover, $\#PrePer(\varphi_{d, c}, \mathbb{Q}) \leq 4$. \end{thm} \begin{rem} The first part of Thm \ref{theorem 3.2.1} is proved by observing that for each odd degree $d\geq 3$, the polynomial $\varphi_{d, c}(z)$ is non-decreasing on $\mathbb{Q}$, and so by elementary mathematical analysis the forward orbit $\mathcal{O}_{\varphi_{d,c}} (z)$ of each rational point $z$ forms a monotone sequence of iterates. Hence, the polynomial $\varphi_{d, c}(z)$ can only have rational points of exact period $n = 1$ (with no preperiod).
The upper bound 4 is obtained by counting the number of rational roots of $\varphi_{d, c}(z)-z$. Notice that Thm \ref{theorem 3.2.1} shows that the number of rational points $z$ satisfying the equation $z^d - z + c = 0$ is bounded above by 4; in particular, the number of such rational points, and hence of such $\mathbb{Z}$-points, is at most 4. \end{rem} Three years after the work \cite{Narkie}, in 2015, Hutz \cite{Hutz} developed an algorithm for determining all $\mathbb{Q}$-preperiodic points of morphisms over ${\mathbb{P}^N(K)}$, and then made the following conjecture: \begin{conj} \label{conjecture 3.2.1}[\cite{Hutz}, Conjecture 1a] For any integer $n > 2$, there is no even degree $d > 2$ and no point $c \in \mathbb{Q}$ such that the polynomial map $\varphi_{d, c}$ has rational points of exact period $n$. Moreover, \#PrePer$(\varphi_{d, c}, \mathbb{Q}) \leq 4$. \end{conj} \noindent As for theoretical progress on Conjecture \ref{conjecture 3.2.1}, Panraksa \cite{par2} recently proved, among many other results, that the quartic polynomial $\varphi_{4,c}(z)\in\mathbb{Q}[z]$ has rational points of exact period $n = 2$. Moreover, he also proved that $\varphi_{d,c}(z)\in\mathbb{Q}[z]$ has no rational points of exact period $n = 2$ for any $c \in \mathbb{Q}$ with $c \neq -1$ and $d = 6$, $2k$ with $3 \mid 2k-1$. The interested reader may find these results of Panraksa in his unconditional Thms 2.1, 2.4, and in his Thm 1.7 conditioned on the abc-conjecture, in \cite{par2}.
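Relatedly, the root count behind Theorem \ref{theorem 3.2.1} can be explored computationally: the integral fixed points of $\varphi_{d,c}$ are the integer roots of the monic polynomial $z^d - z + c$, and by the rational root theorem any such root divides the constant term $c$. A minimal Python sketch (the helper is ours, for illustration):

```python
# Sketch: integral fixed points of phi_{d,c}(z) = z^d + c are the integer
# roots of z^d - z + c; as the polynomial is monic, any rational root is
# an integer dividing the constant term c (rational root theorem).
def integral_fixed_points(d, c):
    if c == 0:
        return [z for z in (-1, 0, 1) if z ** d - z == 0]
    candidates = {s * t for t in range(1, abs(c) + 1) if c % t == 0
                  for s in (1, -1)}
    return sorted(z for z in candidates if z ** d - z + c == 0)

print(integral_fixed_points(3, 0))    # z^3 - z: fixed points -1, 0, 1
print(integral_fixed_points(3, -6))   # z^3 - z - 6: only z = 2
```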
Twenty-eight years after the work of Walde-Russo, in 2022, Eliahou and Fares proved [\cite{Shalom2}, Theorem 2.12] that the denominator of a rational point $-c$, denoted den$(-c)$, is divisible by 16 whenever a quadratic map $\varphi_{2,-c}$ defined by $\varphi_{2, -c}(z) = z^2 - c$ for all $z\in \mathbb{Q}$ admits a rational cycle of length $\ell \geq 3$. Moreover, Eliahou and Fares also proved [\cite{Shalom2}, Proposition 2.8] that \#Per$(\varphi_{2, -c}, \mathbb{Q})\leq 2$ whenever den$(-c)$ is an odd integer. Motivated also by the work of Call-Goldstine, Eliahou-Fares \cite{Shalom2} proved further that the size of the set Per$(\varphi_{2, -c}, \mathbb{Q})$ can be bounded above simply by using information on den$(-c)$, namely, in terms of the number of distinct primes dividing den$(-c)$. Moreover, the same authors \cite{Shalom1} also showed that the upper bound is 4 whenever $c\in \mathbb{Q^*} = \mathbb{Q}\setminus\{0\}$. We restate their results as the following: \begin{cor}\label{sha}[\cite{Shalom2, Shalom1}, Corollary 3.11 and Corollary 4.4, resp.] Let $c\in \mathbb{Q}$ be such that den$(c) = d^2$ with $d\in 4 \mathbb{N}$, and let $s$ be the number of distinct primes dividing $d$. Then the total number of $\mathbb{Q}$-periodic points of $\varphi_{2, -c}$ is at most $2^s + 2$. Moreover, if $c\in \mathbb{Q^*}$ is such that den$(c)$ is a power of a prime number, then $\#$Per$(\varphi_{2, c}, \mathbb{Q}) \leq 4$. \end{cor} \begin{rem} Since every point in a given $\mathbb{Q}$-periodic cycle of $\varphi_{2,c}$ is again a $\mathbb{Q}$-periodic point of $\varphi_{2,c}$, it seems reasonable to infer from the foregoing history that what Walde-Russo initially proved, and Eliahou-Fares later improved, is precisely the claim that den$(c)$ is divisible by 2 whenever $\#$Per$(\varphi_{2, c}, \mathbb{Q})\geq 3$ or $\#$Per$(\varphi_{2, -c}, \mathbb{Q})\geq 3$.
On the other hand, what both Walde-Russo and Eliahou-Fares also proved is the claim that $\#$Per$(\varphi_{2, c}, \mathbb{Q})\leq 2$ or $\#$Per$(\varphi_{2, -c}, \mathbb{Q})\leq 2$, respectively, whenever den$(c)$ is an odd integer. It is thus fair to say that what the authors in \cite{Russo, Call, Shalom1, Shalom2} have studied is a surprising relationship between the total number of elements in Per$(\varphi_{2, c}, \mathbb{Q})$ or Per$(\varphi_{2, -c},\mathbb{Q})$ and the denominator of the coefficient $c$ or $-c$ of $\varphi_{2, c}(z)$ or $\varphi_{2, -c}(z)$, respectively. For $d\geq 2$, we have seen that what the authors in \cite{detto, Narkie, Hutz, par2} studied is this same relationship between the size of Per$(\varphi_{d, c}, \mathbb{Q})$ and the coefficients of $\varphi_{d, c}(z)$. \end{rem} \noindent The purpose of this article is to inspect this connection further, independently in the case of polynomial maps $\varphi_{p, c}$ of odd prime degree $p$ defined over $\mathbb{Z}$ for any given prime integer $p\geq 3$, and in the case of polynomial maps $\varphi_{p-1, c}$ of even degree $p-1$ defined over $\mathbb{Z}$ for any given prime integer $p\geq 5$, in a spirit inspired and guided by some of the many striking developments in arithmetic statistics. \section{On the Number of Integral Fixed Points of any Family of Polynomial Maps $\varphi_{p,c}$}\label{sec2} In this section, we use elementary number-theoretic facts to count the number of distinct integral fixed points of any integral polynomial map $\varphi_{p,c}$, first modulo 3 and then modulo $p$ for any given prime integer $p\geq 3$. For any given integer $c\in \mathbb{Z}$ and any given prime integer $p\geq 3$, we define a counting function \begin{equation}\label{N_{c}} N_{c}(p) := \#\{ z\in \mathbb{Z} / p\mathbb{Z} : \varphi_{p,c}(z) - z \equiv 0 \ \text{(mod $p$)}\}.
\end{equation}\noindent We first prove the following theorem on the number of distinct integral fixed points of any $\varphi_{3, c}$ modulo $3$: \begin{thm} \label{2.1} Consider the family of cubic maps $\varphi_{3, c}$ defined by $\varphi_{3, c}(z) = z^3 + c$ for all $c, z\in\mathbb{Z}$, and let $N_{c}(3)$ be defined as in \textnormal{(\ref{N_{c}})}. Then for any coefficient $c = 3t$ with $t\in\mathbb{Z}$, we have $N_{c}(3) = 3$; otherwise, $N_{c}(3) = 0$ for every $c \neq 3t$. \end{thm} \begin{proof} Let $f(z)= \varphi_{3, c}(z)-z = z^3 - z + c$; for every coefficient $c = 3t$, we then have $f(z)= z^3 - z + 3t$. Reducing $f(z)$ modulo 3, we have $f(z)\equiv z^3 - z$ (mod 3). For any $z\in \mathbb{Z}$, Fermat's Little Theorem gives $z^3-z\equiv 0$ (mod 3), i.e., $z(z^2-1)\equiv 0$ (mod 3). Since $\mathbb{Z} / 3\mathbb{Z}$ is an integral domain, this means $z\equiv 0$ (mod 3) or $z^2 - 1\equiv 0$ (mod 3). Moreover, by a standard fact in elementary number theory, $z^2 -1 \equiv 0$ (mod 3) implies $z\equiv \pm 1$ (mod $3$). Thus the congruence $f(z)\equiv 0$ (mod 3) has exactly three solutions $z\equiv -1, 0, 1$ (mod 3); and so every integer solution is of the form $z_{1} = 3r-1$, $z_{2} = 3s$ or $z_{3} = 3u+1$ for some $r, s, u\in \mathbb{Z}$. Hence, $\#\{ z\in \mathbb{Z} / 3\mathbb{Z} : \varphi_{3,c}(z) - z \equiv 0 \text{ (mod 3)}\} = 3$, and thus $N_{c}(3) = 3$. To see the second part, first note that since the coefficient $c\neq 3t$ for any integer $t$, we have $c\not \equiv 0$ (mod $3$). But then, since $z^3 - z\equiv 0$ (mod 3) for every $z$, the cubic polynomial $z^3 - z + c\equiv c\not \equiv 0$ (mod $3$) for every point $z\in \mathbb{Z} / 3\mathbb{Z}$, and so $f(z)\not \equiv 0$ (mod 3) for every $z\in \mathbb{Z} / 3\mathbb{Z}$. Hence, the cubic polynomial $\varphi_{3,c}(x)-x$ has no roots in $\mathbb{Z} / 3\mathbb{Z}$ for every $c$ indivisible by $3$, and so $N_{c}(3) = 0$. This completes the whole proof, as desired.
\end{proof} We now generalize Theorem \ref{2.1} to any map $\varphi_{p, c}$ for any given prime integer $p\geq 3$. That is, assuming Theorem \ref{theorem 3.2.1}, we show that the number of distinct integral fixed points of any $\varphi_{p, c}$ modulo $p$ is either 3 or 0: \begin{thm} \label{2.2} Let $p\geq 3$ be any fixed prime integer and assume Theorem \ref{theorem 3.2.1}. Consider any family of polynomial maps $\varphi_{p, c}$ defined by $\varphi_{p, c}(z) = z^p + c$ for all points $c, z\in\mathbb{Z}$, and let the number $N_{c}(p)$ be defined as in \textnormal{(\ref{N_{c}})}. Then for every coefficient $c = pt$ with $t\in\mathbb{Z}$, we have $N_{c}(p) = 3$; otherwise, we have $N_{c}(p) = 0$ for every $c \neq pt$. \end{thm} \begin{proof} We give an argument similar to the foregoing proof. Again, let $f(z)= z^p - z + c$; for every $c = pt$, we have $f(z)= z^p - z + pt$. Reducing $f(z)$ modulo $p$, we have $f(z)\equiv z^p - z$ (mod $p$). For any $z\in \mathbb{Z}$, Fermat's Little Theorem gives $z^p-z\equiv 0$ (mod $p$), i.e., $z(z^{p-1}-1)\equiv 0$ (mod $p$). Since $\mathbb{Z} / p\mathbb{Z}$ is an integral domain, this means $z\equiv 0$ (mod $p$) or $z^{p-1} - 1\equiv 0$ (mod $p$). Moreover, a standard fact in elementary number theory shows that $z^{p-1} -1 \equiv 0$ (mod $p$) holds for all $z\in \{1, 2, \cdots, p-1 \}$; and so $f(z)\equiv 0$ (mod $p$) for all $z\in \{1, 2, \cdots, p-1 \}$. Now recall from Theorem \ref{theorem 3.2.1} that $\#\{ z\in \mathbb{Z} : \varphi_{p,c}(z) - z = 0\}\leq 4$, and since good reduction (mod $p$) preserves periodicity of points [\cite{Silverman}, Corollary 2.20], together with the solution $z\equiv 0$ (mod $p$), at most three additional values of $z\in \{1, 2, \cdots, p-1 \}$ can arise as solutions of the congruence $f(z)\equiv 0$ (mod $p$).
But then, by the first part of Theorem \ref{2.1}, for all primes $p\geq 3$ the polynomial $f(z)$ (mod $p$) has three roots in $\mathbb{Z} / p\mathbb{Z}$. Hence, $\#\{ z\in \mathbb{Z} / p\mathbb{Z} : \varphi_{p,c}(z) - z \equiv 0 \text{ (mod $p$)}\} = 3$, and so $N_{c}(p) = 3$, as needed. To see the second part, first note that since the coefficient $c\neq pt$ for any integer $t$, we have $c\not \equiv 0$ (mod $p$). But then, the polynomial $z^p - z + c\not \equiv 0$ (mod $p$) for every point $z\in \mathbb{Z} / p\mathbb{Z}$, and so $f(z)\not \equiv 0$ (mod $p$) for every integral point $z\in \mathbb{Z} / p\mathbb{Z}$. Hence, the polynomial $\varphi_{p,c}(x)-x$ has no roots in $\mathbb{Z} / p\mathbb{Z}$ for every $c$ indivisible by $p$, and thus $N_{c}(p) = 0$, as also needed. This completes the whole proof, as required. \end{proof} \begin{rem}\label{rem2.3} With Theorem \ref{2.2} in hand, we may associate to each distinct integral fixed point of $\varphi_{p,c}$ an integral fixed orbit. A dynamical translation of Theorem \ref{2.2} is then the claim that the number of distinct integral fixed orbits that $\varphi_{p,c}$ has when iterated on the space $\mathbb{Z} / p\mathbb{Z}$ is 3 or 0. Notice that in both coefficient cases $c\equiv 0$ (mod $p)$ and $c\not \equiv 0$ (mod $p)$ considered in Theorem \ref{2.2}, it may also follow that the expected total number of distinct integral fixed points (fixed orbits) in the whole family of maps $\varphi_{p,c}$ modulo $p$ is $3 + 0 =3$. \end{rem} \section{On Number of Integral Fixed Points of any Family of Polynomial Maps $\varphi_{p-1,c}$}\label{sec3} As in Section \ref{sec2}, we wish in this section to count the number of distinct integral fixed points of any $\varphi_{p-1,c}$, first modulo $5$ and then modulo $p$ for any given fixed prime integer $p\geq 5$; unlike in Section \ref{sec2}, however, the counts here are unconditional.
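The small cases of these fixed-point counts can be sanity-checked by brute force over $\mathbb{Z}/p\mathbb{Z}$; a minimal Python sketch (the helper name is ours) checks the cubic count modulo $3$ from Theorem \ref{2.1} and the quartic counts modulo $5$ established below:

```python
# Sketch: brute-force count of #{z in Z/pZ : z^k - z + c = 0 (mod p)};
# the exponent k is left general, so k = p recovers the count for
# phi_{p,c} and k = p - 1 the count for phi_{p-1,c}.
def fixed_point_count(k, c, p):
    return sum((pow(z, k, p) - z + c) % p == 0 for z in range(p))

# Cubic case modulo 3: three roots when 3 | c, none otherwise.
print(fixed_point_count(3, 6, 3), fixed_point_count(3, 1, 3))   # 3 0
# Quartic case modulo 5: 2, 1, 0 roots for c = 0, 1, -1 (mod 5).
print(fixed_point_count(4, 5, 5), fixed_point_count(4, 6, 5),
      fixed_point_count(4, 4, 5))                               # 2 1 0
```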
To this end, for any given $c\in \mathbb{Z}$ and any given prime $p\geq 5$, we define a counting function \begin{equation}\label{M_{c}} M_{c}(p) := \#\{ z\in \mathbb{Z} / p\mathbb{Z} : \varphi_{p-1,c}(z) - z \equiv 0 \ \text{(mod $p$)}\}. \end{equation}\noindent We first prove the following theorem on the number of distinct integral fixed points of any $\varphi_{4, c}$ modulo $5$: \begin{thm} \label{6.0.1} Let $\varphi_{4, c}$ be defined by $\varphi_{4, c}(z) = z^4 + c$ for all $c, z\in\mathbb{Z}$, and let $M_{c}(5)$ be defined as in \textnormal{(\ref{M_{c}})}. Then $M_{c}(5) = 1$ or $2$ for every $c\equiv 1 \ (mod \ 5)$ or $c = 5t$, respectively; and $M_{c}(5) = 0$ for every $c\equiv -1\ (mod \ 5)$. \end{thm} \begin{proof} Let $g(z)= \varphi_{4,c}(z)-z = z^4 - z + c$. For every coefficient $c = 5t$, reducing $g(z)$ modulo 5 gives $g(z)\equiv z^4 - z$ (mod 5); the reduced polynomial $g(z)$ modulo $5$ is thus a polynomial defined over the finite field $\mathbb{Z}\slash5\mathbb{Z}$ of 5 distinct elements. It is a well-known fact about polynomials over finite fields that the quartic polynomial $h(x):=x^4 -1$ has 4 distinct nonzero roots in $\mathbb{Z}\slash5\mathbb{Z}$; and so $z^4 = 1$ for every nonzero $z\in \mathbb{Z}\slash5\mathbb{Z}$. Hence the reduced polynomial satisfies $g(z)\equiv 1 - z$ (mod 5) for every nonzero $z\in \mathbb{Z}\slash5\mathbb{Z}$, from which it follows that $g(z)$ has a nonzero root in $\mathbb{Z}\slash5\mathbb{Z}$, namely, $z\equiv 1$ (mod 5). Moreover, since $z$ is also a linear factor of $g(z)\equiv z(z^3 - 1)$ (mod 5), it also follows that $z\equiv 0$ (mod 5) is a root of $g(z)$ modulo $5$. Hence, $\#\{ z\in \mathbb{Z} / 5\mathbb{Z}: \varphi_{4,c}(z) - z \equiv 0 \text{ (mod 5)}\} = 2$ for every coefficient $c\equiv 0$ (mod $5$), and so $M_{c}(5) = 2$.
To see that $M_{c}(5) = 1$ for every coefficient $c\equiv 1$ (mod 5), first note that with $c\equiv 1$ (mod 5), and recalling that $z^4 = 1$ for every nonzero $z\in \mathbb{Z}\slash5\mathbb{Z}$, reducing $g(z)= \varphi_{4,c}(z)-z = z^4 - z + c$ modulo $5$ gives $g(z)\equiv 2 - z$ (mod 5) for every nonzero $z$; hence $g(z)$ has exactly one nonzero root in $\mathbb{Z}\slash5\mathbb{Z}$, namely, $z\equiv 2$ (mod 5), while $g(0)\equiv c\equiv 1\not\equiv 0$ (mod 5). Thus $M_{c}(5) = 1$. To see the second part, note that with $c \equiv -1$ (mod 5) we obtain $z^4 - z + c\equiv -z$ (mod 5), since $z^4 = 1$ for every nonzero $z\in \mathbb{Z}\slash5\mathbb{Z}$; and so $g(z) \equiv -z\not\equiv 0$ (mod 5) for every nonzero $z$. Moreover, for the point $z\equiv 0$ (mod 5) we have $g(0)\equiv c\equiv -1\not\equiv 0$ (mod $5$). Hence, the quartic polynomial $\varphi_{4,c}(x)-x$ has no roots in $\mathbb{Z} / 5\mathbb{Z}$ for every coefficient $c\equiv -1$ (mod $5$), and so $M_{c}(5) = 0$, as desired. \end{proof} We now generalize Theorem \ref{6.0.1} to any $\varphi_{p-1, c}$ for any given prime integer $p\geq 5$. Specifically, we show that the number of distinct integral fixed points of any polynomial map $\varphi_{p-1, c}$ modulo $p$ is equal to $1$ or $2$ or $0$: \begin{thm} \label{6.0.2} Let $p\geq 5$ be any fixed prime integer, and let $\varphi_{p-1, c}$ be any polynomial map defined by $\varphi_{p-1, c}(z) = z^{p-1}+c$ for all $c, z\in\mathbb{Z}$. Let the number $M_{c}(p)$ be defined as in \textnormal{(\ref{M_{c}})}. Then we have $M_{c}(p) = 1$ or $2$ for every coefficient $c\equiv 1 \ (mod \ p)$ or $c = pt$, respectively; and we have $M_{c}(p) = 0$ for every point $c\equiv -1\ (mod \ p)$. \end{thm} \begin{proof} We give an argument similar to the foregoing proof.
As before, let $g(z)= z^{p-1} - z + c$. For every coefficient $c = pt$, reducing $g(z)$ modulo $p$ gives $g(z)\equiv z^{p-1} - z$ (mod $p$); the reduced polynomial $g(z)$ modulo $p$ is thus a polynomial defined over the finite field $\mathbb{Z}\slash p\mathbb{Z}$ of $p$ distinct elements. As before, the polynomial $h(x):=x^{p-1} -1$ has $p-1$ distinct nonzero roots in $\mathbb{Z}\slash p\mathbb{Z}$, and so $z^{p-1} = 1$ for every nonzero $z\in \mathbb{Z}\slash p\mathbb{Z}$. Hence the reduced polynomial satisfies $g(z)\equiv 1 - z$ (mod $p$) for every nonzero point $z\in \mathbb{Z}\slash p\mathbb{Z}$, and so $g(z)$ has a nonzero root in $\mathbb{Z}\slash p\mathbb{Z}$, namely, $z\equiv 1$ (mod $p$). Moreover, since $z$ is also a linear factor of the reduced polynomial $g(z)\equiv z(z^{p-2} - 1)$ (mod $p$), it also follows that $z\equiv 0$ (mod $p$) is a root of the polynomial $g(z)$ modulo $p$. Thus, $\#\{ z\in \mathbb{Z} / p\mathbb{Z} : \varphi_{p-1,c}(z) - z \equiv 0 \text{ (mod $p$)}\} = 2$ for every $c\equiv 0$ (mod $p$), and so $M_{c}(p) = 2$. To see that $M_{c}(p) = 1$ for every coefficient $c\equiv 1$ (mod $p$), note that with $c\equiv 1$ (mod $p$), reducing $g(z)= \varphi_{p-1,c}(z)-z = z^{p-1} - z + c$ modulo $p$ gives $g(z)\equiv 2 - z$ (mod $p$), since $z^{p-1} = 1$ for every nonzero point $z\in \mathbb{Z}\slash p\mathbb{Z}$; hence $g(z)$ has a root in $\mathbb{Z}\slash p\mathbb{Z}$, namely, $z\equiv 2$ (mod $p$), and so $M_{c}(p) = 1$. To see the second part, note that with $c \equiv -1$ (mod $p$) we obtain $z^{p-1} - z + c\equiv -z$ (mod $p$), since $z^{p-1} = 1$ for every nonzero $z\in \mathbb{Z}\slash p\mathbb{Z}$; and so $g(z) \equiv -z$ (mod $p$) for every nonzero $z$.
In particular, $g(z)\equiv -z\not\equiv 0$ (mod $p$) for every nonzero $z$; and for the point $z\equiv 0$ (mod $p$), we have $g(0)\equiv c\equiv -1\not\equiv 0$ (mod $p$). Hence, $\varphi_{p-1,c}(x)-x$ has no roots in $\mathbb{Z} / p\mathbb{Z}$ for every coefficient $c\equiv -1$ (mod $p$), and so $M_{c}(p) = 0$, as desired. \end{proof} \begin{rem} With Theorem \ref{6.0.2} in hand, we may associate to each distinct integral fixed point of $\varphi_{p-1,c}$ an integral fixed orbit. A dynamical translation of Theorem \ref{6.0.2} is then the claim that the number of distinct integral fixed orbits that $\varphi_{p-1,c}$ has when iterated on the space $\mathbb{Z} / p\mathbb{Z}$ is equal to $1$ or $2$ or $0$. Again, observe that in all of the possible coefficient cases $c\equiv 1, 0, -1$ (mod $p)$ considered in Theorem \ref{6.0.2}, it may also follow that the expected total number of distinct integral fixed points (fixed orbits) in the whole family of maps $\varphi_{p-1,c}$ modulo $p$ is $1 + 2 + 0 =3$; this somewhat surprisingly coincides with the expected total number three of distinct integral fixed points (fixed orbits) in the whole family of maps $\varphi_{p,c}$ (mod $p$) noted in Remark \ref{rem2.3}. \end{rem} \section{On the Average Number of Fixed Points of any Polynomial Map $\varphi_{p,c}$ \& $\varphi_{p-1,c}$}\label{sec4} In this section, we wish to inspect independently the behavior of the counting functions $N_{c}(p)$ and $M_{c}(p)$ as $p$ tends to infinity. First, we ask and answer precisely the question: \say{\textit{What is the average value of $N_{c}(p)$ as $p \to \infty$?}} The following corollary shows that the average value of the function $N_{c}(p)$ is $3$ or 0, as $p\to \infty$: \begin{cor}\label{4.1} Let $p\geq 3$ be any prime integer. Then the average value of the function $N_{c}(p)$ exists and is equal to $3$ or $0$, as $p \to \infty$.
More precisely, we have \begin{myitemize} \item[\textnormal{(a)}] Avg $N_{c = pt}(p) := \lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 3, \ p\mid c}N_{c}(p)}{\Large{\sum\limits_{p\geq 3, \ p\mid c}1}}} = 3.$ \item[\textnormal{(b)}] \text{Avg} $N_{c\neq pt}(p):= \lim\limits_{p \to\infty} \Large{\frac{\sum\limits_{p\geq 3, \ p\nmid c}N_{c}(p)}{\Large{\sum\limits_{p\geq 3, \ p\nmid c}1}}} = 0$. \end{myitemize} \end{cor} \begin{proof} Since from Theorem \ref{2.2} we know that $N_{c}(p) = 3$ for any prime $p\mid c$, we obtain $\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 3, \ p\mid c}N_{c}(p)}{\Large{\sum\limits_{p\geq 3, \ p\mid c}1}}} = 3\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 3, \ p\mid c}1}{\Large{\sum\limits_{p\geq 3, \ p\mid c}1}}} = 3$. Hence, the average value of $N_{c}(p)$, namely, Avg $N_{c = pt}(p)$, is equal to 3, which shows $\textnormal{(a)}$. To see $\textnormal{(b)}$, recall from Theorem \ref{2.2} that $N_{c}(p) = 0$ for any prime $p\nmid c$; and so we obtain $\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 3, \ p\nmid c}N_{c}(p)}{\Large{\sum\limits_{p\geq 3, \ p\nmid c}1}}} = 0$, and hence Avg $N_{c\neq pt}(p) = 0$. This completes the whole proof, as desired. \end{proof} \begin{rem} \label{re5} From arithmetic statistics to arithmetic dynamics, Corollary \ref{4.1} shows that any map $\varphi_{p,c}$ iterated on the space $\mathbb{Z} / p\mathbb{Z}$ has on average three or no distinct integral fixed orbits, as the degree $p$ tends to infinity. \end{rem} We now also ask and answer precisely the question: \say{\textit{What is the average value of $M_{c}(p)$ as $p \to \infty$?}} The following corollary shows that the average value of the function $M_{c}(p)$ is $1$ or $2$ or 0, as $p\to \infty$: \begin{cor}\label{cor5} Let $p\geq 5$ be any prime integer. Then the average value of the function $M_{c}(p)$ exists and is equal to $1$ or $2$ or $0$, as $p\to\infty$.
Specifically, we have \begin{myitemize} \item[\textnormal{(a)}] Avg $M_{c-1 = pt}(p) := \lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid (c-1)}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid (c-1)}1}}} = 1.$ \item[\textnormal{(b)}] Avg $M_{c= pt}(p) := \lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid c}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid c}1}}} = 2.$ \item[\textnormal{(c)}] \text{Avg} $M_{c+1= pt}(p):= \lim\limits_{p \to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid (c+1)}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid (c+1)}1}}} = 0$. \end{myitemize} \end{cor} \begin{proof} Since from Theorem \ref{6.0.2} we know that the number $M_{c}(p) = 1$ for any prime $p$ such that $p\mid (c-1)$, we then obtain $\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid (c-1)}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid (c-1)}1}}} = \lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid (c-1)}1}{\Large{\sum\limits_{p\geq 5, \ p\mid (c-1)}1}}} = 1$; and so the average value of $M_{c}(p)$, namely, Avg $M_{c-1 = pt}(p) = 1$. Similarly, since $M_{c}(p) = 2$ or $0$ for any prime $p$ with $p\mid c$ or $p\mid (c+1)$, respectively, we then also obtain $\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid c}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid c}1}}} = 2$ or $\lim\limits_{p\to\infty} \Large{\frac{\sum\limits_{p\geq 5, \ p\mid (c+1)}M_{c}(p)}{\Large{\sum\limits_{p\geq 5, \ p\mid (c+1)}1}}} = 0$, respectively; and so the average value Avg $M_{c = pt}(p) = 2$ or Avg $M_{c+1 = pt}(p) = 0$, respectively, as desired. This then completes the whole proof, as required. \end{proof} \begin{rem} \label{re5b} As before, from arithmetic statistics to arithmetic dynamics, Corollary \ref{cor5} demonstrates that any map $\varphi_{p-1,c}$ iterated on the space $\mathbb{Z} / p\mathbb{Z}$ has on average one or two or no fixed orbits, as $p$ tends to infinity.
\end{rem} \section{On the Density of Monic Integer Polynomials $\varphi_{p,c}(x)$ with the number $N_{c}(p) = 3$}\label{sec5} In this section, we wish to ask and answer: \say{\textit{For $p\geq 3$ a prime integer, what is the density of monic integer polynomials $\varphi_{p, c}(x) = x^p + c$ with exactly three integral fixed points modulo $p$?}} The following corollary shows that there are very few integer polynomials $\varphi_{p,c}(x) = x^p + c$ with exactly three integral fixed points modulo $p$: \begin{cor}\label{Thm 4.1} Let $p\geq 3$ be a prime integer. Then the density of monic polynomials $\varphi_{p,c}(x) = x^p + c\in \mathbb{Z}[x]$ with the number $N_{c}(p) = 3$ exists and is equal to $0 \%$, as $c\to \infty$. More precisely, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} = \ 0.$ \end{center} \end{cor} \begin{proof} Since the defining condition $N_{c}(p) = 3$ is, as we proved in Theorem \ref{2.2}, satisfied precisely when the coefficient $c$ is divisible by the given prime $p\geq 3$, we may count $\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] : 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}$ by simply counting the number $\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] : 3\leq p\leq c \ \text{and} \ p\mid c \ \text{for \ any \ fixed} \ c \}$. Thus, we then have that \begin{center} $\Large{\frac{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} = \Large{\frac{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ p\mid c \ \text{for any fixed} \ c \}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}}$.
\end{center}\indent Moreover, for any fixed $c$, we also have that the size \begin{center} $\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] : 3\leq p\leq c \ \text{and} \ p\mid c \} = \# \{p : 3\leq p\leq c \text{ and } p\mid c \} = \sum_{3\leq p\leq c, \ p\mid c}1 = \omega (c)$, \end{center}where the counting function $\omega(c)$ is the number of distinct prime factors of $c$. And if we also rewrite the size $\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] : 3\leq p\leq c \} = \sum_{3\leq p\leq c} 1 = \pi(c)$, where $\pi(\cdot)$ is the prime-counting function, we then obtain \begin{center} $\Large{\frac{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ p\mid c \ \text{for any fixed} \ c \}}{\Large{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} = \frac{\omega(c)}{\pi(c)}$. \end{center}Now recall from analytic number theory that for any positive integer $c$, we have $2^{\omega(c)}\leq d(c) \leq 2^{\Omega(c)}$, where $d(c)$ is the number-of-divisors function and $\Omega(c)$ counts the total number of prime factors of $c$ with multiplicity. Taking logarithms, the left-hand inequality $2^{\omega(c)}\leq d(c)$ yields $\omega(c)\leq \frac{\log d(c)}{\log 2}$; and hence $\frac{\omega(c)}{\pi(c)} \leq \frac{\log d(c)}{\log 2 \cdot \pi(c)}$. Moreover, for every fixed $\epsilon >0$, we know $d(c) = o(c^{\epsilon})$, so that $\log d(c) \leq \epsilon \log c$ for all sufficiently large $c$; this yields $\frac{\omega(c)}{\pi(c)} \leq \frac{\epsilon \log c}{\log 2 \cdot \pi(c)}$ for all sufficiently large $c$. Since $\pi(c)\sim c/\log c$ by the prime number theorem, taking the limit as $c\to \infty$ gives $\lim\limits_{c\to\infty} \frac{\epsilon \log c}{\log 2 \cdot \pi(c)} = 0$, and so we have $\lim\limits_{c\to\infty} \frac{\omega(c)}{\pi(c)} \leq 0$.
Hence, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} \leq 0$. \end{center}Moreover, the quantity \begin{center} $\Large{\frac{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}}\geq 0$ \end{center}trivially, and so $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} \geq 0$; combining this with the upper bound above, we then obtain $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 3\}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} = \ 0$. This then completes the whole proof, as desired. \end{proof}\noindent Note that one may also interpret Corollary \ref{Thm 4.1} as saying that the probability of choosing randomly a monic integer polynomial $\varphi_{p,c}(x)$ in the space $\mathbb{Z}[x]$ with exactly three integral fixed points modulo $p$ is zero. \section{On Densities of Monic Integer Polynomials $\varphi_{p-1,c}(x)$ with $M_{c}(p) = 1$ \& $M_{c}(p) = 2$}\label{sec6} In this section, motivated by the question: \say{\textit{What is the density of polynomials $\varphi_{p-1,c}(x)\in \mathbb{Z}[x]$ with two integral fixed points modulo $p$?}}, we prove in the corollary below that the density of such polynomials $\varphi_{p-1,c}(x)$ exists and is zero: \begin{cor}\label{6.1} Let $p\geq 5$ be a prime integer.
The density of monic polynomials $\varphi_{p-1,c}(x)=x^{p-1}+c\in \mathbb{Z}[x]$ with the number $M_{c}(p) = 2$ exists and is equal to $0\%$, as $c\to \infty$. More precisely, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \ \text{and} \ M_{c}(p) \ = \ 2\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \}}}} = \ 0.$ \end{center} \end{cor} \begin{proof} By applying exactly the same reasoning as in the proof of Corollary \ref{Thm 4.1}, we obtain the desired limit. \end{proof} \noindent Note that one may again interpret Corollary \ref{6.1} as saying that the probability of choosing randomly a monic polynomial $\varphi_{p-1,c}(x) = x^{p-1}+c$ in the space $\mathbb{Z}[x]$ with exactly two integral fixed points modulo $p$ is zero. The following immediate corollary shows that the probability of choosing randomly a monic integer polynomial $\varphi_{p-1,c}(x)=x^{p-1}+c$ in the space $\mathbb{Z}[x]$ with exactly one integral fixed point modulo $p$ is also zero: \begin{cor}\label{6.2} Let $p\geq 5$ be a prime integer. The density of monic polynomials $\varphi_{p-1,c}(x)=x^{p-1}+c\in \mathbb{Z}[x]$ with the number $M_{c}(p) = 1$ exists and is equal to $0\%$, as $c\to \infty$. More precisely, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \ \text{and} \ M_{c}(p) \ = \ 1\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \}}}} = \ 0.$ \end{center} \end{cor} \begin{proof} As before, the condition $M_{c}(p) = 1$ is, as we proved in Theorem \ref{6.0.2}, satisfied precisely when $c-1$ is divisible by the given prime $p\geq 5$; and so we may count $\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] : 5\leq p\leq c \ \text{and} \ M_{c}(p) \ = \ 1\}$ by simply counting the number $\# \{\varphi_{p-1,c}(x)\in \mathbb{Z}[x] : 5\leq p\leq c \ \text{and} \ p\mid (c-1) \ \text{for \ any \ fixed} \ c \}$.
Suppose first that the number $\# \{p : 5\leq p\leq c \ \text{and} \ p\mid (c-1) \}\leq \# \{p : 5\leq p\leq c \ \text{and} \ p\mid c \}$. We then have that \begin{center} $\Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ p\mid (c-1) \ \text{for any fixed} \ c\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}} \leq \Large{\frac{\# \{\varphi_{p-1,c}(x)\in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ p\mid c \ \text{for any fixed} \ c \}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}}.$ \end{center}Letting $c$ tend to infinity on both sides of the above inequality and then applying Corollary \ref{6.1}, we then have \begin{center} $\lim\limits_{c\to\infty}\Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ p\mid (c-1)\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}} \leq \lim\limits_{c\to\infty}\Large{\frac{\# \{\varphi_{p-1,c}(x)\in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ p\mid c \}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}} = 0; \end{center}from which it then follows that \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \ \text{and} \ M_{c}(p) \ = \ 1\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x]\ : \ 5\leq p\leq c \}}}} = \ 0$, \end{center}and hence the limit is as desired in this case. Otherwise, the number $\# \{p : 5\leq p\leq c \ \text{and} \ p\mid c \}< \# \{p : 5\leq p\leq c \ \text{and} \ p\mid (c-1) \}$. In this case, applying the same argument as in the proof of Corollary \ref{Thm 4.1} (with the counting function $\omega$ now applied to $c-1$) to obtain an upper bound of zero, we then obtain \begin{center} $\lim\limits_{c\to\infty}\Large{\frac{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ p\mid (c-1)\}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}} = 0$, \end{center}as also needed. This then completes the whole proof. \end{proof} \section{On Density of Integer Monics $\varphi_{p,c}(x)$ with $N_{c}(p) = 0$ and $\varphi_{p-1,c}(x)$ with $M_{c}(p) = 0$}\label{sec7} Recall from Corollary \ref{Thm 4.1} that a density of $0\%$ of monic integer (and hence monic rational) polynomials $\varphi_{p,c}(x)$ have exactly three integral fixed points modulo $p$; and so the density of monic integer polynomials $\varphi_{p,c}(x)$ that are reducible modulo $p$ is $0\%$. Now, we also ask: \say{\textit{What is the density of monic integer polynomials $\varphi_{p,c}(x)$ that have no integral fixed points modulo $p$?}} The corollary below shows that the probability of choosing randomly an integer polynomial $\varphi_{p,c}(x) = x^p+c$ such that $\mathbb{Q}[x]\slash (\varphi_{p, c}(x)-x)$ is a degree-$p$ number field, is 1: \begin{cor}\label{8.1} Let $p\geq 3$ be a prime integer.
Then the density of monic polynomials $\varphi_{p,c}(x) = x^p + c\in \mathbb{Z}[x]$ with the number $N_{c}(p) = 0$ exists and is equal to $100 \%$, as $c\to \infty$. More precisely, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p,c}(x)\in \mathbb{Z}[x] \ : \ 3\leq p\leq c \ \text{and} \ N_{c}(p) \ = \ 0 \}}{\Large{\# \{\varphi_{p,c}(x) \in \mathbb{Z}[x] \ : \ 3\leq p\leq c \}}}} = \ 1.$ \end{center} \end{cor} \begin{proof} Since the number $N_{c}(p) = 3$ or $0$ for any given prime integer $p\geq 3$, and since we proved the complementary density in Corollary \ref{Thm 4.1}, we then obtain the desired density (i.e., the limit exists and is equal to 1). \end{proof} \noindent Note that the foregoing corollary also shows that there are infinitely many polynomials $\varphi_{p,c}(x)$ over $\mathbb{Q}$ such that for $f(x) := \varphi_{p,c}(x)-x = x^p-x+c$, the induced quotient ring $K_{f} = \mathbb{Q}[x]\slash (f(x))$ is a number field of odd prime degree $p$. Comparing the densities in Corollaries \ref{Thm 4.1} and \ref{8.1}, one may then observe that in the whole family of monic integer polynomials $\varphi_{p,c}(x) = x^p +c$, almost all such monics have no integral fixed points modulo $p$ (i.e., have no rational roots); and hence almost all such monic polynomials $f(x)$ are irreducible over $\mathbb{Q}$. This then suggests that the average value of $N_{c}(p)$ in the whole family of monic polynomials $\varphi_{p,c}(x)$ is zero. We also recall from Corollaries \ref{6.1} and \ref{6.2} that a density of $0\%$ of monic integer (and thus rational) polynomials $\varphi_{p-1,c}(x)$ have $M_{c}(p) = 2$ or $1$, respectively; and so the density of polynomials $\varphi_{p-1, c}(x)\in \mathbb{Z}[x]$ that are reducible modulo $p$ is $0\%$.
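The mechanism behind Corollary \ref{8.1} is Fermat's little theorem: $x^p\equiv x \pmod{p}$, so $\varphi_{p,c}(x)-x = x^p-x+c\equiv c \pmod{p}$ has no roots whenever $p\nmid c$. As a purely illustrative aside (the snippet below is not part of the paper's proofs), this is easy to confirm by brute force:

```python
# Illustration: count fixed points of phi_{p,c}(x) = x^p + c modulo p.
# By Fermat's little theorem x^p ≡ x (mod p), so x^p - x + c ≡ c (mod p),
# which is nonzero mod p whenever p does not divide c.
def num_roots(p, c):
    """Number of x in Z/pZ with x^p + c ≡ x (mod p)."""
    return sum(1 for x in range(p) if (pow(x, p, p) + c) % p == x)

for p in (3, 5, 7, 11, 13):
    for c in range(1, 40):
        if c % p != 0:
            assert num_roots(p, c) == 0  # no fixed points mod p when p ∤ c
```

The check uses Python's three-argument `pow` for modular exponentiation, so it remains fast even for larger primes.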
We now ask: \say{\textit{What is the density of monic polynomials $\varphi_{p-1,c}(x)\in \mathbb{Z}[x]$ with no integral fixed points (mod $p$)?}} The following corollary shows that the probability of choosing randomly a monic integer polynomial $\varphi_{p-1,c}(x) = x^{p-1}+c\in \mathbb{Z}[x]$ so that $\mathbb{Q}[x]\slash (\varphi_{p-1, c}(x)-x)$ is a number field of even degree $p-1$, is 1: \begin{cor} \label{5.1} Let $p\geq 5$ be a prime integer. The density of monic polynomials $\varphi_{p-1, c}(x) = x^{p-1}+c\in \mathbb{Z}[x]$ with the number $M_{c}(p) = 0$ exists and is equal to $100 \%$, as $c\to \infty$. More precisely, we have \begin{center} $\lim\limits_{c\to\infty} \Large{\frac{\# \{\varphi_{p-1, c}(x)\in \mathbb{Z}[x] \ : \ 5\leq p\leq c \ \text{and} \ M_{c}(p) \ = \ 0 \}}{\Large{\# \{\varphi_{p-1,c}(x) \in \mathbb{Z}[x] \ : \ 5\leq p\leq c \}}}} = \ 1.$ \end{center} \end{cor} \begin{proof} Since the number $M_{c}(p) = 1$, $2$, or $0$ for any given prime $p\geq 5$, and since we proved the complementary densities in Corollaries \ref{6.1} and \ref{6.2}, we now obtain the desired density (i.e., the limit exists and is equal to 1). \end{proof} \noindent As before, Corollary \ref{5.1} also shows that there are infinitely many monic polynomials $\varphi_{p-1,c}(x)$ over $\mathbb{Q}$ such that for $g(x) := \varphi_{p-1,c}(x)-x = x^{p-1}-x+c$, the induced quotient ring $L_{g} = \mathbb{Q}[x]\slash (g(x))$ is an algebraic number field of even degree $p-1$. Again, if we compare the densities in Corollaries \ref{6.1}, \ref{6.2}, and \ref{5.1}, we may then see that in the whole family of monic polynomials $\varphi_{p-1,c}(x) = x^{p-1} +c\in \mathbb{Z}[x]$, almost all such monics have no integral fixed points modulo $p$ (i.e., have no $\mathbb{Q}$-roots); and so almost all monic polynomials $g(x)$ are irreducible over $\mathbb{Q}$. Consequently, this suggests that the average value of $M_{c}(p)$ in the whole family of monics $\varphi_{p-1,c}(x)$ is zero.
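Similarly, the trichotomy $M_{c}(p)\in\{2,1,0\}$ for $c\equiv 0, 1, -1 \pmod{p}$ from Theorem \ref{6.0.2} can be checked numerically: for $x\not\equiv 0$ we have $x^{p-1}\equiv 1 \pmod{p}$, so any nonzero fixed point of $\varphi_{p-1,c}$ must satisfy $x\equiv c+1 \pmod{p}$. The following brute-force sketch (an illustration only, not part of the arguments above) verifies the pattern for small primes:

```python
# Illustration: M_c(p) counts x in Z/pZ with x^(p-1) + c ≡ x (mod p).
# For x ≢ 0 (mod p), Fermat gives x^(p-1) ≡ 1, forcing x ≡ c + 1 (mod p);
# for x ≡ 0 the congruence forces c ≡ 0 (mod p).
def M(p, c):
    """Brute-force count of fixed points of phi_{p-1,c} modulo p."""
    return sum(1 for x in range(p) if (pow(x, p - 1, p) + c) % p == x)

for p in (5, 7, 11, 13):
    assert M(p, 0) == 2        # c ≡ 0 (mod p): roots x = 0 and x = 1
    assert M(p, 1) == 1        # c ≡ 1 (mod p): single root x = 2
    assert M(p, p - 1) == 0    # c ≡ -1 (mod p): no roots
```

Note that `pow(0, p - 1, p)` returns `0` since $p-1\geq 1$, which is exactly the $x\equiv 0$ case handled separately in the comment.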
It is a central theme in algebraic number theory that, whenever one studies an algebraic number field $K$ of some interest, one should simultaneously try to describe very precisely the associated ring $\mathcal{O}_{K}$ of integers; this is because $\mathcal{O}_{K}$ is classically known to naturally describe the arithmetic of the underlying number field. However, accessing $\mathcal{O}_{K}$ in practice from a computational point of view is known to be an extremely involved problem. In our case here, the following corollary shows that the probability of choosing randomly a monic integer polynomial $f(x)$ such that the quotient $\mathbb{Z}[x]\slash (f(x))$ is the ring of integers of $K_{f}$, is $\approx 60.7927\%$: \begin{cor} \label{8.2} Assume Corollary \ref{8.1}. When integer polynomials $f(x)$ are ordered by height $H(f) = |c|^{1\slash p}$ as defined in \cite{sch1}, the density of polynomials $f(x)$ such that $\mathbb{Z}[x]\slash (f(x))$ is the ring of integers of $K_{f}$ is $\zeta(2)^{-1}$. \end{cor} \begin{proof} From Corollary \ref{8.1} we know that there are infinitely many monic polynomials $f(x)$ over $\mathbb{Q}$ (and hence over $\mathbb{Z}$) such that $K_{f} = \mathbb{Q}[x]\slash (f(x))$ is an algebraic number field of degree $\deg(f)$, with associated ring $\mathcal{O}_{K_{f}}$ of integers. So now, applying a remarkable result of Bhargava--Shankar--Wang [\cite{sch1}, Theorem 1.2] to the underlying family of monic integer polynomials $f(x)$ ordered by height $H(f) = |c|^{1\slash p}$ such that $\mathcal{O}_{K_{f}} = \mathbb{Z}[x]\slash (f(x))$, we then obtain that the density of such polynomials $f(x)$ is $\zeta(2)^{-1} \approx 60.7927\%$. \end{proof} As with $K_{f}$, every number field $L_{g}$ induced by a polynomial $g$ is naturally equipped with its ring of integers $\mathcal{O}_{L_{g}}$, which, as we again mention, may be difficult to compute in practice.
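The constant $\zeta(2)^{-1} = 6/\pi^{2} \approx 0.6079$ appearing in Corollary \ref{8.2} is the same constant as the natural density of squarefree integers, which gives one heuristic way to remember the Bhargava--Shankar--Wang density. The short computation below (purely illustrative, and not part of the arguments here) compares the two numerically:

```python
import math

# Illustration: the density 1/zeta(2) = 6/pi^2 from Corollary 8.2 equals
# the natural density of squarefree integers; compare both numerically.
def is_squarefree(n):
    """True if no prime square divides n (trial division up to sqrt(n))."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

N = 20_000
density = sum(is_squarefree(n) for n in range(1, N + 1)) / N
print(f"squarefree density up to {N}: {density:.5f}")
print(f"1/zeta(2) = 6/pi^2        : {6 / math.pi**2:.5f}")
```

The two printed values agree to roughly three decimal places already at this modest cutoff.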
In the following corollary, we again take advantage of the density result of Bhargava--Shankar--Wang [\cite{sch1}, Theorem 1.2] to show that the probability of choosing randomly a polynomial $g(x)\in \mathbb{Z}[x]$ such that $\mathbb{Z}[x]\slash (g(x))$ is the ring of integers of $L_{g}$, is $\approx 60.7927\%$: \begin{cor} Assume Corollary \ref{5.1}. When integer polynomials $g(x)$ are ordered by height $H(g) = |c|^{1\slash (p-1)}$ as defined in \cite{sch1}, the density of polynomials $g(x)$ such that $\mathbb{Z}[x]\slash (g(x))$ is the ring of integers of $L_{g}$ is $\zeta(2)^{-1}$. \end{cor} \begin{proof} By applying the same reasoning as in the proof of Corollary \ref{8.2}, we then obtain the desired density. \end{proof} \section{On the Number of Number Fields $K_{f}$ \& $L_{g}$ with Bounded Absolute Discriminant}\label{sec8} Recall from Corollary \ref{8.1} that there is an infinite family of irreducible monic integer polynomials $f(x) = x^p-x + c$ such that the field $K_{f}$ associated to $f$ is an algebraic number field of odd prime degree $p$. Moreover, we also saw from Corollary \ref{5.1} that one can always find an infinite family of irreducible monic integer polynomials $g(x) = x^{p-1}-x + c$ such that the field extension $L_{g}$ over $\mathbb{Q}$ arising from $g$ is an algebraic number field of even degree $p-1\geq 4$. In this section, we wish to study the problem of counting number fields, a problem of central interest in arithmetic statistics. Inspired by the work \cite{sch}, we count here primitive number fields with bounded absolute discriminant, and as a result we have the following: \begin{cor}\label{cor6} Assume Corollary \ref{8.1}, and let $K_{f} = \mathbb{Q}[x]\slash (f(x))$ be a primitive number field with discriminant $\Delta(K_{f})$. Then up to isomorphism classes of number fields, we have that $\# \{ K_{f} \ : \ |\Delta(K_{f})| < X \} \ll X^{p\slash(2p-2)}$.
\end{cor} \begin{proof} From Corollary \ref{8.1}, we know that there are infinitely many monics $f(x) = x^p - x + c$ over $\mathbb{Q}$ (and so over $\mathbb{Z}$) such that $K_{f}$ is a number field of odd prime degree $p\geq 3$. But now, when the underlying number fields $K_{f}$ are primitive, we may use the reasoning of Bhargava--Shankar--Wang in \cite{sch} to show that, up to isomorphism classes of number fields, the total number of such primitive number fields with $|\Delta(K_{f})| < X$ is bounded above by the number of monic integer polynomials $f(x)$ of degree $p\geq 3$ with height $H(f) \ll X^{1\slash(2p-2)}$ and vanishing subleading coefficient. Now since $H(f) = |c|^{1\slash p}$, the bound $H(f) \ll X^{1\slash(2p-2)}$ gives $|c| \ll X^{p\slash(2p-2)}$, and so the size $\# \{f(x)\in \mathbb{Z}[x] \ : H(f) \ll X^{1\slash(2p-2)} \} = \# \{f(x)\in \mathbb{Z}[x] \ : |c| \ll X^{p\slash(2p-2)} \} \ll X^{p\slash(2p-2)}$. Hence, we obtain that the number $\# \{ K_{f} : |\Delta(K_{f})| < X \} \ll X^{p\slash(2p-2)}$ up to isomorphism classes of number fields, as needed. \end{proof} By applying a similar counting argument of Bhargava--Shankar--Wang in \cite{sch} as in Corollary \ref{cor6}, we then also obtain the following corollary on the number of primitive number fields $L_{g}$ with bounded absolute discriminant: \begin{cor} Assume Corollary \ref{5.1}, and let $L_{g} = \mathbb{Q}[x]\slash (g(x))$ be a primitive number field with discriminant $\Delta(L_{g})$. Then up to isomorphism classes of number fields, we have that $\# \{ L_{g} \ : \ |\Delta(L_{g})| < X \} \ll X^{(p-1)\slash(2p-4)}$. \end{cor} We recall in algebraic number theory that an algebraic number field $K$ is called \say{\textit{monogenic}} if there exists an algebraic number $\alpha \in K$ such that the ring $\mathcal{O}_{K}$ of integers is the subring $\mathbb{Z}[\alpha]$ generated by $\alpha$ over $\mathbb{Z}$, i.e., $\mathcal{O}_{K}= \mathbb{Z}[\alpha]$.
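As a side remark relevant to the monogenicity counts below (and not used in the proofs): a classical sufficient criterion is that if $\mathrm{disc}(f)$ is squarefree, then $\mathbb{Z}[x]/(f(x)) = \mathcal{O}_{K_{f}}$ and so $K_{f}$ is monogenic, since $\mathrm{disc}(f) = [\mathcal{O}_{K_{f}} : \mathbb{Z}[\alpha]]^{2}\,\Delta(K_{f})$. For trinomials $f(x) = x^{n} + ax + b$ the discriminant has the classical closed form $\mathrm{disc}(f) = (-1)^{n(n-1)/2}\big(n^{n}b^{n-1} - (-1)^{n}(n-1)^{n-1}a^{n}\big)$, so for $f(x) = x^{p} - x + c$ it can be evaluated directly; a small illustrative sketch:

```python
# Discriminant of the trinomial x^n + a*x + b via the classical closed form
# disc = (-1)^(n(n-1)/2) * (n^n * b^(n-1) - (-1)^n * (n-1)^(n-1) * a^n).
def trinomial_disc(n, a, b):
    sign = -1 if (n * (n - 1) // 2) % 2 else 1
    return sign * (n**n * b**(n - 1) - (-1)**n * (n - 1)**(n - 1) * a**n)

def is_squarefree(m):
    """True if no prime square divides |m|."""
    m = abs(m)
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

# Example: f(x) = x^3 - x + 1 (i.e., n = 3, a = -1, b = c = 1) has
# discriminant -23, which is squarefree; hence Z[x]/(f) is the full ring
# of integers of K_f and K_f is monogenic.
d = trinomial_disc(3, -1, 1)
print(d, is_squarefree(d))  # prints: -23 True
```

The formula is easy to sanity-check on quadratics, e.g. $x^{2}+x+1$ has discriminant $1-4=-3$, which the function reproduces.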
In Corollary \ref{cor6}, we counted primitive number fields $K_{f}$ (i.e., fields with no proper subfields other than $\mathbb{Q}$) with $|\Delta(K_{f})| < X$. Now we wish to count those number fields $K_{f}$ that are monogenic with $|\Delta(K_{f})| < X$ and whose associated Galois group Gal$(K_{f}\slash \mathbb{Q})$ is the symmetric group $S_{p}$. This is a hard counting task that we tackle here by taking advantage of a result of Bhargava--Shankar--Wang [\cite{sch1}, Corollary 1.3]: \begin{cor}\label{8.3} Assume Corollary \ref{8.1}. Then the number of isomorphism classes of algebraic number fields $K_{f}$ of degree $p\geq 3$ and with $|\Delta(K_{f})| < X$ that are monogenic and have associated Galois group $S_{p}$ is $\gg X^{\frac{1}{2} + \frac{1}{p}}$. \end{cor} \begin{proof} Corollary \ref{8.1} implies the existence of infinitely many monic polynomials $f(x)$ over $\mathbb{Q}$ (and so over $\mathbb{Z}$) such that $K_{f}$ is a prime degree-$p$ number field. Now, applying Bhargava--Shankar--Wang's result [\cite{sch1}, Corollary 1.3] to the underlying number fields $K_{f}$ with $|\Delta(K_{f})| < X$ that are monogenic and have associated Galois group $S_{p}$, we then obtain that the number of isomorphism classes of such number fields $K_{f}$ is $\gg X^{\frac{1}{2} + \frac{1}{p}}$, as desired. \end{proof} We again take advantage of the same result of Bhargava--Shankar--Wang [\cite{sch1}, Corollary 1.3] to also count, in the following corollary, the number of fields $L_{g}$ that are monogenic with absolute discriminant $|\Delta(L_{g})| < X$ and whose associated Galois group Gal$(L_{g}\slash \mathbb{Q})$ is the symmetric group $S_{p-1}$: \begin{cor} Assume Corollary \ref{5.1}. Then the number of isomorphism classes of algebraic number fields $L_{g}$ of degree $p-1\geq 4$ and $|\Delta(L_{g})| < X$ that are monogenic and have associated Galois group $S_{p-1}$ is $\gg X^{\frac{1}{2} + \frac{1}{p-1}}$.
\end{cor} \begin{proof} By applying the same reasoning as in the proof of Corollary \ref{8.3}, we then obtain the desired count. \end{proof} \addcontentsline{toc}{section}{Acknowledgements} \section*{\textbf{Acknowledgements}} I'm deeply indebted to my long-time great advisors, Dr. Ilia Binder and Dr. Arul Shankar, for all their boundless generosity, friendship, and for all the inspiring weekly conversations, along with Dr. Jacob Tsimerman for always very uncompromisingly supporting my professional and philosophical-mathematical research endeavours. I'm very grateful to Dr. Shankar for bringing my attention to a Number Theory course (MAT1200HS, 2023/24) and for strongly suggesting it to me. This suggestion would never have been so easily realized had the course instructor at that time not been unsurpassably generous in allowing anyone to attend his class, and in his friendliness; and so I'm once again deeply indebted to Dr. Shankar for everything, and to that instructor, namely, Dr. Tsimerman, for his very enlightening Algebraic Number Theory class and conversations. Not to forget, I am very grateful to the very vibrant Dept. of Mathematical and Computational Sciences (MCS) at the University of Toronto, Mississauga; and in particular, I'm very grateful and deeply indebted to Dr. Yael Karshon (for the truly invaluable advice about the state of professional mathematics), Dr. Ilia Binder, Dr. Arul Shankar, Dr. Marina Tvalavadze, Dr. Alex Rennet, Dr. Michael Gr\"{o}echenig, Dr. Julie Desjardins, Dr. Duncan Dauvergne, Dr. Ke Zhang, and Dr. Jaimal Thind, for everything. I'm truly very grateful to the Office of the President at the UofT for the amazing hospitality during my in-person visit to converse with President Meric S. Gertler on \say{Leadership and Duty}. I'm very grateful to Dr.
Meric for the very enlightening great conversations on the immeasurable importance of collaboration, inclusion, and diversity in educational settings, and, more importantly, for not only envisioning, with Rose M. Patten, more representation of minority groups in U of T Science and Mathematics departments, but also for taking very serious practical steps toward realizing such a vision with integrity, as I've thoroughly witnessed during the fantastic and progressive chairship of Dr. Binder with his team in the Dept. of MCS. Lastly, I'm very grateful and deeply indebted to the former U of T Mississauga Registrar and Director of Enrolment Management, namely, Lorretta Neebar, for everything. Last but not least, I'm truly very grateful and indebted to Dr. Michael Bumby for the great life-philosophical conversations, and, more importantly, for truly being an amazing life coach and a great friend. As a graduate research student, this work and my studies are hugely and wholeheartedly funded by Dr. Binder and Dr. Shankar. As part of the first harvest of an upcoming long harvest, I very happily dedicate this article to the Department of Mathematical and Computational Sciences (MCS) and the Presidency of Dr. Meric S. Gertler at the University of Toronto! Any opinions expressed in this article belong solely to me, the author, Brian Kintu; and should never be taken at all as a reflection of the views of anyone that's been happily acknowledged by me. \bibliography{References} \bibliographystyle{plain} \noindent Dept. of Math. and Comp. Sciences (MCS), University of Toronto, Mississauga, Canada \newline \textit{E-mail address:} \textbf{[email protected]}\newline \date{\small{\textit{January 1 \& 30, 2025.}}} \end{document}
\documentclass[11pt,a4paper]{article} \usepackage{soul} \usepackage[normalem]{ulem} \usepackage{epsf,epsfig,amsfonts,amsgen,amsmath,amstext,amsbsy,amsopn,lineno} \usepackage{amsmath,amsthm,amsbsy} \usepackage{color} \usepackage{cite} \usepackage{subfig} \usepackage{float} \usepackage{graphicx,tikz} \usepackage{mathrsfs} \usepackage[colorlinks=true,citecolor=black,linkcolor=black,urlcolor=black]{hyperref} \usepackage{hyperref} \hypersetup{colorlinks, linkcolor=blue, anchorcolor=blue, citecolor=blue} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{chngpage} \usepackage{mathtools} \renewcommand{\algorithmicrequire}{ \textbf{Input:}} \renewcommand{\algorithmicensure}{ \textbf{Output:}} \newcommand{\doi}[1]{\href{http://dx.doi.org/#1}{\texttt{doi:#1}}} \newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{\texttt{arXiv:#1}}} \newcommand{\me}{\mathrm{e}} \newcommand{\mi}{\mathrm{i}} \newcommand{\dif}{\mathrm{d}} \newcommand{\rank}{\mathrm{rank}} \newcommand{\trace}{\mathrm{trace}} \newcommand{\Var}{\mathrm{Var}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\diag}{\mathrm{diag}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{comment}[theorem]{Comment} \newtheorem{construction}[theorem]{Construction} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{example}[theorem]{Example} \newtheorem{fact}[theorem]{Fact} \newtheorem{problem}[theorem]{Problem} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{observation}[theorem]{Observation} \theoremstyle{definition} \newcommand\SmallMatrix[1]{{ \tiny\arraycolsep=0.3\arraycolsep\ensuremath{\begin{pmatrix}#1\end{pmatrix}}}} \newtheorem{contraction}{Contraction Rule} \newtheorem*{splitting}{Splitting Rule} \DeclareMathOperator{\Circ}{circ} \DeclareMathOperator{\cod}{cod} 
\DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\dgr}{dgr} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\som}{sum} \DeclareMathOperator{\spec}{sp} \newcommand{\qqed}{\hfill$\Box$\medskip} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Cay}{Cay} \DeclareMathOperator{\ecc}{ecc} \DeclareMathOperator{\Irep}{Irep} \DeclareMathOperator{\rad}{rad} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\Sym}{Sym} \def\Cal{\mbox{$\cal C$}} \def\Par{\pi} \def\exc{\varepsilon} \def\G{\Gamma} \def\Re{\mathbb R} \def\Z{\mathbb Z} \def\e{\mbox{\boldmath $e$}} \def\j{\mbox{\boldmath $j$}} \def\p{\mbox{\boldmath $p$}} \def\r{\mbox{\boldmath $r$}} \def\s{\mbox{\boldmath $s$}} \def\u{\mbox{\boldmath $u$}} \def\x{\mbox{\boldmath $x$}} \def\y{\mbox{\boldmath $y$}} \def\z{\mbox{\boldmath $z$}} \def\w{\mbox{\boldmath $w$}} \def\vecnu{\mbox{\boldmath $\nu$}} \def\vecrho{\mbox{\boldmath $\rho$}} \def\vecalpha{\mbox{\boldmath $\alpha$}} \def\vece{\mbox{\boldmath $e$}} \def\vecu{\mbox{\boldmath $u$}} \def\u{\mbox{\boldmath $u$}} \def\vecv{\mbox{\boldmath $v$}} \def\v{\mbox{\boldmath $v$}} \def\vecz{\mbox{\boldmath $z$}} \def\vecm{\mbox{\boldmath $m$}} \def\vecj{\mbox{\boldmath $j$}} \def\vecx{\mbox{\boldmath $x$}} \def\vecxi{\mbox{\boldmath $\xi$}} \def\vec0{\mbox{\boldmath $0$}} \def\A{\mbox{\boldmath $A$}} \def\B{\mbox{\boldmath $B$}} \def\C{\mbox{\boldmath $C$}} \def\D{\mbox{\boldmath $D$}} \def\E{\mbox{\boldmath $E$}} \def\H{\mbox{\boldmath $H$}} \def\I{\mbox{\boldmath $I$}} \def\J{\mbox{\boldmath $J$}} \def\L{\mbox{\boldmath $L$}} \def\M{\mbox{\boldmath $M$}} \def\N{\mbox{\boldmath $N$}} \def\O{\mbox{\boldmath $O$}} \def\Q{\mbox{\boldmath $Q$}} \def\R{\mbox{\boldmath $R$}} \def\SS{\mbox{\boldmath $S$}} \def\MS{\mbox{\boldmath $S$}} \def\X{\mbox{\boldmath $X$}} \def\exc{\mbox{$\varepsilon$}} \def\G{\Gamma} \def\Re{\mathbb R} \def\Z{\mathbb Z} 
\def\magenta{\textcolor{magenta}} \tikzstyle{vertex}=[circle, draw, inner sep=0pt, minimum size=3pt] \newcommand{\vertex}{\node[vertex]} \baselineskip 15pt \renewcommand{\baselinestretch}{1.5} \setlength{\textwidth}{150mm} \setlength{\oddsidemargin}{7mm} \setlength{\evensidemargin}{7mm} \setlength{\topmargin}{-5mm} \setlength{\textheight}{245mm} \topmargin -18mm \numberwithin{equation}{section} \allowdisplaybreaks \def\proof{\par\noindent{\bf Proof.~}} \def\Tproof{\par\noindent{\bf Proof}} \def\qed{\hfill$\Box$\vspace{12pt}} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \newcommand{\purple}[1]{\textcolor{purple}{#1}} \newcommand{\green}[1]{\textcolor[rgb]{0.00,0.50,0.50}{#1}} \newcommand{\orange}[1]{\textcolor[rgb]{1.00,0.50,0.00}{#1}} \makeatletter \newenvironment{breakablealgorithm} { \begin{center} \refstepcounter{algorithm} \hrule height.8pt depth0pt \kern2pt \renewcommand{\caption}[2][\relax]{ {\raggedright\textbf{\ALG@name~\thealgorithm} ##2\par} \ifx\relax##1\relax \addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##2} \else \addcontentsline{loa}{algorithm}{\protect\numberline{\thealgorithm}##1} \kern2pt\hrule\kern2pt } }{ \kern2pt\hrule\relax \end{center} } \makeatother \begin{document} \title{On the Algebraic Connectivity of Token Graphs\\ and Graphs under Perturbations\thanks{This research has been supported by National Natural Science Foundation of China (No.~12471334, No.~12131013), and Shaanxi Fundamental Science Research Project for Mathematics and Physics (No. 22JSZ009). C. Dalf\'o and M. A. Fiol are funded by AGAUR from the Catalan Government under project 2021SGR00434 and MICINN from the Spanish Government under project PID2020-115442RB-I00. M. A. Fiol's research is also supported by a grant from the Universitat Polit\`ecnica de Catalunya, reference AGRUPS-2024.}\\ {\small Dedicated to Professor Fuji Zhang on the occasion of his 88th birthday.}} \author{X. Song$^{a,b}$, C. Dalf\'o$^b$, M. A. 
Fiol$^c$, S. Zhang$^{a}$\\ $^a${\small School of Mathematics and Statistics, Northwestern Polytechnical University}\\ {\small Xi'an-Budapest Joint Research Center for Combinatorics, Northwestern Polytechnical University} \\ {\small Xi'an, Shaanxi, P.R. China, {\tt [email protected], [email protected]}}\\ $^b${\small Departament de Matem\`atica, Universitat de Lleida} \\ {\small Igualada (Barcelona), Catalonia, {\tt{[email protected]}}}\\ $^c${\small Departament de Matem\`atiques, Universitat Polit\`ecnica de Catalunya} \\ {\small Barcelona Graduate School, Institut de Matem\`atiques de la UPC-BarcelonaTech (IMTech)}\\ {\small Barcelona, Catalonia, {\tt{[email protected]}}} } \date{} \maketitle \begin{abstract} Given a graph $G=(V,E)$ on $n$ vertices and an integer $k$ between 1 and $n-1$, the $k$-token graph $F_k(G)$ has vertices representing the $k$-subsets of $V$, and two vertices are adjacent if their symmetric difference is the two end-vertices of an edge in $E$. Using the theory of Markov chains of random walks and the interchange process, it was proved that the algebraic connectivities (second smallest Laplacian eigenvalues) of $G$ and $F_k(G)$ coincide, but a combinatorial/algebraic proof has remained elusive. In this paper, we use the latter approach and prove that this equality holds for different new classes of graphs under perturbations, such as extended cycles, extended complete bipartite graphs, kite graphs, and graphs with a cut clique. Kite graphs are formed by a graph (the head) with several paths (the tail) rooted at the same vertex, and they have interesting properties. For instance, we show that the distinct eigenvalues of a kite graph are also eigenvalues of its perturbed graph obtained by adding edges. Moreover, as a particular case of one of our theorems, we generalize a recent result of Barik and Verma \cite{bv24} about graphs with a cut vertex of degree $n-1$.
Along the way, we give conditions under which the perturbed graph $G+uv$, with $uv\notin E$, has the same algebraic connectivity as $G$. \end{abstract} \noindent{\em Keywords:} Token graph, Laplacian spectrum, Algebraic connectivity, Binomial matrix, Kite graph, Cut clique. \noindent{\em MSC2010:} 05C15, 05C10, 05C50. \section{Introduction} Let $G=(V,E)$ be a graph on $n=|V|$ vertices, with Laplacian matrix $\L=\L(G)$. This is a symmetric positive semidefinite and singular matrix with eigenvalues $\lambda_1(=0)\le\lambda_2\le\cdots\le\lambda_n$. Since its introduction by Fiedler \cite{Fiedler_1973}, the algebraic connectivity of $G$, $\alpha(G)=\lambda_2$, together with its eigenvector $\x$ (or Fiedler vector), has received much attention in the literature. See, for instance, the comprehensive survey by Abreu \cite{a07}. Moreover, the behavior of $\alpha(G)$ and $\x$ under graph perturbations has been considered. Recently, some papers have dealt with the algebraic connectivity of token graphs. The $k$-token graph $F_k(G)$ has vertices representing the different configurations of $k$ indistinguishable tokens placed at distinct vertices of $G$. Two configurations are adjacent if one can be obtained from the other by moving a token along an edge from its current position to an unoccupied vertex. It was shown that the Laplacian spectrum of a graph $G$ is contained in the spectrum of its $k$-token graph; see Dalf\'o, Duque, Fabila-Monroy, Fiol, Huemer, Trujillo-Negrete, and Zaragoza Mart\'inez \cite{Dalfo_2021}. Moreover, the authors proved that $\alpha(F_k(G)) = \alpha(G)$ for some families of graphs, such as the complete graphs and the paths. From their results, they conjectured that this was always the case. Later, they realized that this corresponded to the so-called `Aldous spectral graph conjecture' already proved by Caputo, Liggett, and Richthammer \cite{clr10}. The proof was based on Markov chains of random walks and the interchange process.
However, until now, a combinatorial/algebraic proof has remained elusive. Some advances towards such a proof have been made recently; see, for instance, Barik and Verma \cite{bv24}, Dalf\'o and Fiol \cite{Dalfo_2024}, and Reyes, Dalf\'o, Fiol, and Messegu\'e \cite{Reyes_2025}. In this article, our contribution is a further step in this direction: we prove the equality $\alpha(F_k(G))=\alpha(G)$ for different new infinite families of graphs. Moreover, along the way, we give conditions under which the perturbed graph $G+uv$ (where $uv$ is a new edge) has the same algebraic connectivity as $G$. This paper is structured as follows. The following section gives the preliminary results and notation used in this work. The new results are presented in the subsequent sections. Thus, in Section \ref{sec:+edges}, we derive results about the algebraic connectivities of a graph and the same graph after adding one or more new edges. In Section \ref{sec:kite}, we give some results about the so-called kite graphs, that is, graphs with a `head', which is a general graph, and a `tail', which is a starlike tree (roughly speaking, several paths rooted at a common vertex). In the same section, we provide conditions under which the algebraic connectivity does not change when kite graphs are `perturbed' by adding some edges. In Section \ref{sec:cut-clique}, we compute the algebraic connectivity of graphs with a cut clique $K_r$ with maximum degree. This result generalizes a theorem of Barik and Verma \cite{bv24} about the algebraic connectivity and cut vertices. \section{Preliminaries} \label{sec:prelim} Let $G=(V,E)$ be a graph on $n$ vertices. Let $V'\subset V$. Denote by $G[V']$ the {\em induced subgraph} of $G$ whose vertex set is $V'$ and whose edge set consists of all edges of $G$ with both ends in $V'$. If $G'=(V',E')$ is an induced subgraph of $G$, the {\em degree} of $G'$, denoted by $d_{G}(G')$, is the number of edges between $V'$ and $V\setminus V'$.
In particular, the degree of a vertex $v$ is denoted by $d_G(v)$. We denote by $G_1\cup G_2$ the {\em union} of two simple graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$, with vertex set $V_1\cup V_2$ and edge set $E_1\cup E_2$. If $G_1$ and $G_2$ are disjoint, we refer to their union as a {\em disjoint union}. Given an integer $k$ such that $1\le k\le n-1$, the {\em $k$-token graph} of $G$, denoted by $F_k(G)$, has ${n\choose k}$ vertices representing the configurations of $k$ indistinguishable tokens placed at distinct vertices of $G$. Moreover, two configurations are adjacent whenever one configuration can be reached from the other by moving one token along an edge of $G$ from its current position to an unoccupied vertex, see Fabila-Monroy, Flores-Peñaloza, Huemer, Hurtado, Urrutia, and Wood \cite{ffhhuw12}. As an example, Figure \ref{fig:Y+F2(Y)} shows the graph $Y$ on 5 vertices, and its 2-token graph $F_2(Y)$ with ${5\choose 2}=10$ vertices. \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (1) at (0.4,2.25) [fill,label=above:{$1$}]{}; \vertex (2) at (1.6,2.25) [fill,label=above:{$2$}]{}; \vertex (3) at (1,1) [fill,label=left:{$3$}]{}; \vertex (4) at (1,-0.2) [fill,label=left:{$4$}]{}; \vertex (5) at (1,-1.4) [fill,label=left:{$5$}]{}; \draw[line width=0.6pt](1)--(3); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](4)--(5); \vertex (12) at (6.5,2.55) [fill,label=above:{$12$}]{}; \vertex (34) at (6.5,1.3) [fill,label=above:{$34$}]{}; \vertex (35) at (6.5,0.1) [fill]{}; \node () at (6.6,-0.05) [label=above left:{$35$}]{}; \vertex (45) at (6.5,-1.5) [fill,label=left:{$45$}]{}; \vertex (13) at (5.2,2.05) [fill,label=left:{$13$}]{}; \vertex (14) at (5.2,0.8) [fill,label=left:{$14$}]{}; \vertex (15) at (5.2,-0.4) [fill,label=left:{$15$}]{}; \vertex (23) at (7.8,2.05) [fill,label=right:{$23$}]{}; \vertex (24) at (7.8,0.8) [fill,label=right:{$24$}]{}; \vertex (25) at (7.8,-0.4) 
[fill,label=right:{$25$}]{}; \draw[line width=0.6pt](12)--(13); \draw[line width=0.6pt](12)--(23); \draw[line width=0.6pt](23)--(24); \draw[line width=0.6pt](13)--(14); \draw[line width=0.6pt](14)--(34); \draw[line width=0.6pt](34)--(24); \draw[line width=0.6pt](24)--(25); \draw[line width=0.6pt](34)--(35); \draw[line width=0.6pt](14)--(15); \draw[line width=0.6pt](35)--(15); \draw[line width=0.6pt](35)--(25); \draw[line width=0.6pt](35)--(45); \end{tikzpicture} \caption{The graph $Y$ and its $2$-token graph $F_2(Y)$.}\label{fig:Y+F2(Y)} \end{center} \end{figure} For a matrix $\M$ with order $n$, let $\lambda_1(\M)\le \lambda_2(\M)\le \cdots \le \lambda_n(\M)$ be its eigenvalues. Denote by $\Phi(\M)$ the characteristic polynomial of $\M$. Let $\L=\L(G)$ be the Laplacian matrix of $G$. This matrix is positive semidefinite and singular, with eigenvalues $(0=)\lambda_1 \le \lambda_2\le \cdots\le \lambda_n$. Its second smallest eigenvalue $\lambda_2$ is known as the {\em algebraic connectivity} of $G$ (see Fiedler \cite{Fiedler_1973}), and we denote it by $\alpha(G)$. An eigenvector corresponding to the algebraic connectivity is called a {\em Fiedler vector}. Let $\j$ be the all-1 (column) vector. Given a graph $G$ of order $n$, we define a vector $\x$ to be an \emph{embedding} of $G$ if $\x \in W_n$, where $W_n:=\{\y:\y^{\top}\j = 0\}$. For a vertex $v\in V$, the entry of $\x$ corresponding to $v$ is denoted by $\x_v$. The value $\frac{\x^{\top}\L(G)\x}{\x^{\top}\x}$ is known as the \emph{Rayleigh quotient}, and \begin{equation} \alpha(G)\le \frac{\x^{\top}\L(G)\x}{\x^{\top}\x}=\frac{\sum_{uv\in E(G)}(\x_u-\x_v)^2}{\x^{\top}\x}\label{eq-rayleigh} \end{equation} for any vector $\x\in W_n$ and $\x\ne \vec0$, where equality holds if and only if $\x$ is a Fiedler vector of $G$. 
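The $k$-token graph can be generated directly from its definition: two $k$-subsets of $V$ are adjacent exactly when their symmetric difference is an edge of $G$. The following minimal Python sketch (our illustration, not part of the paper's apparatus; vertices are $0$-indexed and the function name \texttt{token\_graph} is ours) reproduces the vertex and edge counts of $F_2(Y)$ for the graph $Y$ of Figure \ref{fig:Y+F2(Y)}.

```python
from itertools import combinations

def token_graph(n, edges, k):
    """Build F_k(G): vertices are the k-subsets of {0,...,n-1};
    two subsets A, B are adjacent iff A ^ B (symmetric difference)
    is exactly the pair of endpoints of an edge of G."""
    E = {frozenset(e) for e in edges}
    V = [frozenset(c) for c in combinations(range(n), k)]
    Ek = [(A, B) for A, B in combinations(V, 2) if A ^ B in E]
    return V, Ek

# The graph Y of Figure 1, with vertices 1..5 relabelled as 0..4
# (edges 13, 23, 34, 45 become (0,2), (1,2), (2,3), (3,4)).
Y = [(0, 2), (1, 2), (2, 3), (3, 4)]
V2, E2 = token_graph(5, Y, 2)
print(len(V2), len(E2))  # 10 vertices = C(5,2), and 12 edges
```

Each edge $uv$ of $G$ lifts to ${n-2\choose k-1}$ edges of $F_k(G)$ (one per placement of the $k-1$ remaining tokens), so here $F_2(Y)$ has $4\cdot{3\choose 1}=12$ edges, matching Figure \ref{fig:Y+F2(Y)}.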
Given some integers $n$ and $k$ (with $k\in [n]=\{1,2,\ldots,n\}$), the $(n,k)$-\emph{binomial matrix} $\B$ is an ${n \choose k}\times n$ matrix whose rows are the characteristic vectors of the $k$-subsets of $[n]$ in a given order. Thus, if the $i$-th $k$-subset is $A$, then $$ (\B)_{ij}= \left\lbrace \begin{array}{ll} 1 & \mbox{if } j\in A,\\ 0 & \mbox{otherwise.} \end{array} \right. $$ Denote by $\O$ the all-$0$ matrix with the appropriate dimension. Let $\I_n$ and $\J_n=\j\j^{\top}$ be the identity matrix and the all-$1$ matrix of order $n$, respectively. Now, let us describe a pair of results on the interlacing of graph eigenvalues. \begin{lemma}[Horn and Johnson \cite{Horn_2013}] \label{le-principal_submatrix} Let $\M'$ be a Hermitian square matrix of order $n-1$, $\z\in \mathbb{C}^{n-1}$, and $a\in \mathbb{R}$, and let $$ \M=\left({\begin{array}{cccc} \M'&\z\\ \z^*&a \end{array}} \right), $$ where $\z^*$ is the conjugate transpose of $\z$. Then, the eigenvalues of $\M'$ interlace the eigenvalues of $\M$. That is, we have the inequalities \[ \lambda_1(\M)\le \lambda_1(\M')\le \lambda_2(\M)\le \cdots\le \lambda_{n-1}(\M)\le \lambda_{n-1}(\M')\le \lambda_n(\M), \] in which $\lambda_i(\M')=\lambda_{i+1}(\M)$ if and only if there is a nonzero vector $\w \in \mathbb{C}^{n-1}$ such that $\M'\w=\lambda_i(\M')\w$, with $\z^*\w=0$, and $\M'\w=\lambda_{i+1}(\M)\w$. \end{lemma} \begin{lemma}[Horn and Johnson \cite{Horn_2013}] \label{le-matrix_add} Let $\M_1$ and $\M_2$ be two Hermitian matrices of order $n$.
Then, \[ \lambda_i(\M_1)+\lambda_1(\M_2)\le \lambda_i(\M_1+\M_2)\le \lambda_i(\M_1)+\lambda_n(\M_2) \] for $i=1,\ldots,n$, with equality in the upper bound if and only if there is a nonzero vector $\x$ such that $\M_1\x=\lambda_i(\M_1)\x$, $\M_2\x=\lambda_n(\M_2)\x$, and $(\M_1+\M_2)\x=\lambda_i(\M_1+\M_2)\x$; equality in the lower bound occurs if and only if there is a nonzero vector $\x$ such that $\M_1\x=\lambda_i(\M_1)\x$, $\M_2\x=\lambda_1(\M_2)\x$, and $(\M_1+\M_2)\x=\lambda_i(\M_1+\M_2)\x$. \end{lemma} Let us now show some results concerning a graph after adding a new edge $uv$ to it. \begin{lemma}[Cvetkovi\'c, Doob, and Sachs \cite{Cvetkovic_1980}] \label{le-interlacing} Let $G$ be a graph and $G'=G+uv$. Then, the Laplacian eigenvalues of $G$ and $G'$ interlace, that is, \[ 0=\lambda_1(G)= \lambda_1(G')\le \lambda_2(G)\le \lambda_2(G')\le \cdots\le \lambda_n(G) \le \lambda_{n}(G'). \] \end{lemma} \begin{lemma}[Xue, Lin, and Shu \cite{Xue_2019}] \label{le-Xue} Let $G$ be a graph and $G'=G+uv$. If $\alpha(G')=\alpha(G)$, then there exists a Fiedler vector $\x$ of $G$ such that $\x_u=\x_v$. \end{lemma} \begin{lemma}[Merris \cite{Merris_1998}] \label{le-Merris} Let $G$ be a graph and $G'=G + uv$ or $G'=G - uv$, that is, the graph obtained from $G$ by adding or deleting the edge $uv$ depending on whether or not it is an edge of $G$. Let $\lambda$ be a Laplacian eigenvalue of $G$ corresponding to an eigenvector $\x$. If $\x_u=\x_v$, then $\lambda$ is also a Laplacian eigenvalue of $G'$ with eigenvector $\x$. \end{lemma} From Lemma \ref{le-Merris}, we get that $\lambda=\alpha(G)$ is also a Laplacian eigenvalue of $G+uv$ corresponding to $\x$ if $\x_u=\x_v$. Then, it must be $\alpha(G+uv)\le \alpha(G)$ and, since Lemma \ref{le-interlacing} gives $\alpha(G+uv)\ge \alpha(G)$, we conclude that $\alpha(G)=\alpha(G+uv)$. Combining Lemmas \ref{le-Xue} and \ref{le-Merris}, we can obtain the following lemma, which gives a \textbf{necessary and sufficient condition} for $\alpha(G)=\alpha(G+uv)$.
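This condition can be checked numerically on small graphs: compute a Fiedler vector, verify that two vertices carry equal entries, and confirm that joining them leaves $\alpha$ unchanged. Below is a short NumPy sketch (our illustration, not part of the paper; vertices are $0$-indexed, so vertices $1,2$ of the graph $Y$ in Figure \ref{fig:Y+F2(Y)} become $0,1$).

```python
import numpy as np

def laplacian(n, edges):
    # L = D - A for a simple graph on vertices 0..n-1
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def fiedler(n, edges):
    vals, vecs = np.linalg.eigh(laplacian(n, edges))
    return vals[1], vecs[:, 1]  # algebraic connectivity and a Fiedler vector

# Graph Y of Figure 1 (vertices 1..5 relabelled 0..4): edges 13, 23, 34, 45
Y = [(0, 2), (1, 2), (2, 3), (3, 4)]
a, x = fiedler(5, Y)
print(round(a, 4))                  # alpha(Y) = 0.5188
print(abs(x[0] - x[1]) < 1e-8)      # the Fiedler vector satisfies x_1 = x_2
a_plus, _ = fiedler(5, Y + [(0, 1)])
print(bool(np.isclose(a, a_plus)))  # alpha(Y + 12) = alpha(Y)
```

Since $\alpha(Y)$ is a simple eigenvalue here, the second column of the orthonormal eigenbasis returned by \texttt{eigh} is a Fiedler vector, and the check is unambiguous.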
\begin{lemma} \label{le-adding_edge_iff} Let $G$ be a graph on $n$ vertices. Then, $\alpha(G)=\alpha(G+uv)$ if and only if there exists a Fiedler vector $\x$ of $G$ such that $\x_u=\x_v$. \end{lemma} Of course, by repeatedly applying Lemma \ref{le-adding_edge_iff}, if $G'$ is obtained from $G$ by adding $r$ edges $u_iv_i$, we have that $\alpha(G)=\alpha(G')$ if and only if there exists a Fiedler vector $\x$ of $G$ such that $\x_{u_i}=\x_{v_i}$ for $i=1,\ldots,r$. Some preliminary results related to token graphs are presented below. \begin{lemma}[Dalf\'o, Duque, Fabila-Monroy, Fiol, Huemer, Trujillo-Negrete, and Zaragoza Mart\'{\i}nez \cite{Dalfo_2021}] \label{le-Dalfo2021} Consider a graph $G(\cong F_1(G))$ and its $k$-token graph $F_k(G)$, with corresponding Laplacian matrices $\L(G)$ and $\L(F_k(G))$, and the $(n,k)$-binomial matrix $\B$. The following statements hold: \begin{itemize} \item[$(i)$] If $\x$ is a $\lambda$-eigenvector of $\L(G)$, then $\B\x$ is a $\lambda$-eigenvector of $\L(F_k(G))$. \item[$(ii)$] If $\w$ is a $\lambda$-eigenvector of $\L(F_k(G))$ and $\B^{\top}\w\ne 0$, then $\B^{\top}\w$ is a $\lambda$-eigenvector of $\L(G)$. \end{itemize} \end{lemma} As a consequence of Lemma \ref{le-Dalfo2021}, we have the following result. \begin{theorem}[Dalf\'o, Duque, Fabila-Monroy, Fiol, Huemer, Trujillo-Negrete, and Zaragoza Mart\'{\i}nez \cite{Dalfo_2021}] \label{th-Dalfo2021} Let $G$ be a graph on $n$ vertices and let $k$ be an integer such that $1\le k\le n-1$. Then, the spectrum of $G$ is contained in the spectrum of its $k$-token graph $F_k(G)$. \end{theorem} \begin{theorem}[Dalf\'o and Fiol \cite{Dalfo_2024}, Dalf\'o, Duque, Fabila-Monroy, Fiol, Huemer, Trujillo-Negrete, and Zaragoza Mart\'{\i}nez \cite{Dalfo_2021}, Reyes, Dalf\'o, Fiol, and Messegu\'e \cite{Reyes_2025}] \label{th-Dalfo2024} For each of the following classes of graphs, the algebraic connectivity of a token graph $F_k(G)$ satisfies the following statements.
\begin{itemize} \item[$(i)$] Let $T_n$ be a tree on $n$ vertices. Then, $\alpha(F_k(T_n))=\alpha(T_n)$ for every $n$ and $k=1,\ldots,n-1$. \item[$(ii)$] Let $G$ be a graph such that $\alpha(F_k(G))=\alpha(G)$. Let $T_G$ be a graph in which each vertex of $G$ is the root vertex of some (possibly empty) tree. Then, $\alpha(F_k(T_G))=\alpha(T_G)$. \item[$(iii)$] Let $G=K_{n_1,n_2}$ be a complete bipartite graph on $n=n_1+n_2$ vertices, with $n_1\le n_2$. Then, $\alpha(F_k(G))=\alpha(G)=n_1$ for every $n_1,n_2$ and $k=1,\ldots,n-1$. \item[$(iv)$] Let $G=K_n$ be a complete graph on $n$ vertices. Then, $\alpha(F_k(G))=\alpha(G)=n$ for every $k=1,\ldots,n-1$. \item[$(v)$] Let $C_n$ be a cycle on $n$ vertices. Then, $\alpha(F_k(C_n))=\alpha(C_n)$ for $k=1,2$. \end{itemize} \end{theorem} \begin{lemma}[Barik and Verma \cite{bv24}] Let $G$ be a graph on $n\ge 4$ vertices, and $H$ be the graph formed by adding a pendant vertex (say $v$, with the corresponding edge) to the graph $G$. Then, for any integer $k$ such that $2\le k\le n/2$, $$ \alpha(F_k(H)) \le \min\{\alpha(F_{k-1}(G)) + 1, \alpha(F_k(G)) + 1\}. $$ \end{lemma} \begin{theorem}[Barik and Verma \cite{bv24}] \label{theBarik} Let $G$ be a graph on $n$ vertices and $v$ be a cut vertex in $G$ such that $d_G(v)=n-1$. Then, for any integer $k$ such that $2\le k\le \frac{n}{2}$, $$ \alpha(F_k(G))=\alpha(G)=1. $$ \end{theorem} \section{Adding edges} \label{sec:+edges} In this section and the following ones, we present the new results of the paper. We begin by deriving results about the algebraic connectivities of a graph and the same graph after adding a new edge $uv$. \subsection{A basic result} \begin{theorem} \label{th-add_edge} Let $G=(V,E)$ be a graph of order $n$ such that, for some $k$ with $2\le k\le n/2$, we have $\alpha(F_k(G))=\alpha(G)$. Consider adding an edge $uv$, for $u,v\in V$, obtaining the new graph $G+uv$. If $\alpha(G+uv)=\alpha(G)$, then $\alpha(F_k(G+uv))=\alpha(G+uv)$.
\end{theorem} \begin{proof} Note that $F_k(G)$ is a spanning subgraph of $F_k(G+uv)$ with \[ E(F_k(G+uv))\backslash E(F_k(G))=\{A_{r}A_{s} : A_{r}=\{u,u_1,\ldots,u_{k-1}\}, A_{s}=\{v,u_1,\ldots,u_{k-1}\}\}, \] where $u_1,\ldots,u_{k-1}\in V(G)\backslash \{u,v\}$. Then, by Lemma \ref{le-interlacing}, \begin{equation} \alpha(F_k(G+uv)){\ge}\alpha(F_k(G)).\label{eqthe2'} \end{equation} Since $\alpha(G+uv)=\alpha(G)$, by Lemma \ref{le-adding_edge_iff}, there exists a Fiedler vector $\x$ of $G$ such that $\x_{u}=\x_{v}$. Let $\y=\B\x$, where $\B$ is the $(n,k)$-binomial matrix. It follows from the hypothesis $\alpha(F_k(G))=\alpha(G)$ and Lemma \ref{le-Dalfo2021}($i$) that $\y$ is a Fiedler vector of $F_k(G)$. Moreover, observe that $$ \y_{A_r}-\y_{A_s}=\x_u+\sum_{i=1}^{k-1}\x_{u_i}-\left(\x_v+\sum_{i=1}^{k-1}\x_{u_i}\right)=\x_u-\x_v=0. $$ Hence, we get \begin{align} \alpha(F_k(G))=\frac{\y^{\top}\L(F_k(G))\y}{\y^{\top}\y}&=\frac{\sum_{A_rA_s\in E(F_k(G))}(\y_{A_r}-\y_{A_s})^2}{\y^{\top}\y}\notag\\ &=\frac{\sum_{A_rA_s\in E(F_k(G+uv))}(\y_{A_r}-\y_{A_s})^2}{\y^{\top}\y}\notag\\ &\ge \alpha(F_k(G+uv)).\label{eqthe3'} \end{align} Thus, from \eqref{eqthe2'}, \eqref{eqthe3'}, and the hypothesis, we conclude that $\alpha(F_k(G+uv))=\alpha(F_k(G))=\alpha(G)=\alpha(G+uv)$. Alternatively, we have \begin{equation*} \alpha(F_k(G+uv))\ge\alpha(F_k(G))=\alpha(G)=\alpha(G+uv). \label{eq1'} \end{equation*} Then, the result follows since, by Theorem \ref{th-Dalfo2021}, it must be $\alpha(F_k(G+uv))\le \alpha(G+uv)$.\qed \end{proof} \begin{example} The graph $Y=(V,E)$ of Figure \ref{fig:Y+F2(Y)} has algebraic connectivity $\alpha(Y)=0.5188$ with corresponding eigenvector $\x=(-0.5969,-0.5969,-0.2873,0.4812,1)^{\top}$. Then, since $\x_1=\x_2$, its `extended' graph $Y+12$ has the same algebraic connectivity $\alpha(Y+12)=\alpha(Y)$, with the same eigenvector $\x$. This graph is shown in Figure \ref{fig:F2(Y+12)} together with its 2-token graph $F_2(Y+12)$.
Notice that $F_2(Y)$ is a spanning subgraph of $F_2(Y+12)$, where the `new' edges induced by $12$ are $A_rA_s\in \{13\sim 23, 14\sim 24, 15\sim 25\}$. Since $\alpha(F_2(Y))=\alpha(Y)$, Theorem \ref{th-add_edge} implies that $\alpha(F_2(Y+12))=\alpha(Y+12)=\alpha(Y)=0.5188$. \end{example} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (1) at (0.4,2.25) [fill,label=above:{$1$}]{}; \vertex (2) at (1.6,2.25) [fill,label=above:{$2$}]{}; \vertex (3) at (1,1) [fill,label=left:{$3$}]{}; \vertex (4) at (1,-0.2) [fill,label=left:{$4$}]{}; \vertex (5) at (1,-1.4) [fill,label=left:{$5$}]{}; \draw[line width=0.6pt](1)--(3); \draw[blue,line width=0.6pt](1)--(2); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](4)--(5); \vertex (12) at (6.5,2.55) [fill,label=above:{$12$}]{}; \vertex (34) at (6.5,1.3) [fill,label=above:{$34$}]{}; \vertex (35) at (6.5,0.1) [fill]{}; \node () at (6.6,-0.05) [label=above left:{$35$}]{}; \vertex (45) at (6.5,-1.5) [fill,label=left:{$45$}]{}; \vertex (13) at (5.2,2.05) [fill,label=left:{$13$}]{}; \vertex (14) at (5.2,0.8) [fill,label=left:{$14$}]{}; \vertex (15) at (5.2,-0.4) [fill,label=left:{$15$}]{}; \vertex (23) at (7.8,2.05) [fill,label=right:{$23$}]{}; \vertex (24) at (7.8,0.8) [fill,label=right:{$24$}]{}; \vertex (25) at (7.8,-0.4) [fill,label=right:{$25$}]{}; \draw[line width=0.6pt](12)--(13); \draw[line width=0.6pt](12)--(23); \draw[line width=0.6pt](23)--(24); \draw[line width=0.6pt](13)--(14); \draw[line width=0.6pt](14)--(34); \draw[line width=0.6pt](34)--(24); \draw[line width=0.6pt](24)--(25); \draw[line width=0.6pt](34)--(35); \draw[line width=0.6pt](14)--(15); \draw[line width=0.6pt](35)--(15); \draw[line width=0.6pt](35)--(25); \draw[line width=0.6pt](35)--(45); \draw[blue,line width=0.6pt](13)--(23); \draw[blue,line width=0.6pt](14)--(24); \draw[blue,line width=0.6pt](15)--(25); \end{tikzpicture} \caption{The graph $Y+12$ and its 2-token graph 
$F_2(Y+12)$.}\label{fig:F2(Y+12)} \end{center} \end{figure} \subsection{Extended graphs with pendant vertices} From the result in Theorem \ref{th-add_edge}, it is natural to consider graphs satisfying $\alpha(G)=\alpha(G+uv)$ for some edge $uv$. A family of such graphs is given in the following Lemma \ref{le-Shao2008}, whose statement can be made more explicit by first computing the value of a particular eigenvalue. With this aim, given a vertex subset $V'\subseteq V$, let $\L_{[V']}(G)$ denote the principal submatrix of $\L(G)$ whose rows and columns correspond to $V'$. When $G$ is a path $P_{r+1}$ with vertices $u_0,u_1,\ldots,u_r$ and $V_r=\{u_1,\ldots,u_r\}$, $\L_{[V_r]}(G)$ is the $r\times r$ tridiagonal matrix $$ \L_{[V_r]}(P_{r+1}) = {\scriptsize\left( \begin{array}{cccccc} 2 & -1 & 0 & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & 0 & 0\\ 0 & -1 & \ddots & \ddots & 0 & 0\\ 0 & 0 & \ddots & \ddots & -1 & 0\\ 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 0 & -1 & 1 \end{array} \right)}, $$ with eigenvalues \begin{equation} \label{eq:theta-k} \theta_k=2+2\cos\left(\frac{2k\pi}{2r+1}\right),\quad\mbox{for $k=1,\ldots,r$}. \end{equation} (See Yueh \cite[Th. 1]{y05}). The minimum eigenvalue is obtained when $k=r$ (since $\theta_k$ is decreasing with $k$), so we have that $\lambda_1(\L_{[V_r]}(P_{r+1}))=\theta_r$, and the following result of Shao, Guo, and Shan \cite{Shao_2008} reads as follows. \begin{lemma}[Shao, Guo, and Shan \cite{Shao_2008}] \label{le-Shao2008} Let $v$ be a vertex in a connected graph $H$ and suppose that $s(\ge 2)$ new paths (with equal length $r$) $vv_{i1}v_{i2}\cdots v_{ir}$ (for $i=1,\ldots,s$ and $r\ge 1$) are attached to $H$ at $v$ to form a new graph $G$. Let $G^+$ be the graph obtained from $G$ by arbitrarily adding edges among the vertices $v_{1j},v_{2j},\ldots,v_{sj}$ for any given $j=1,\ldots,r$. If $\alpha(G)\neq 2+2\cos\left(\frac{2r\pi}{2r+1}\right)$, then $\alpha(G^+)=\alpha(G)$. 
\end{lemma} \begin{example} A simple example of Lemma \ref{le-Shao2008} is obtained when $G=Y$, the graph of Figure \ref{fig:Y+F2(Y)}, where $s=2$ and $r=1$. Then, as $\alpha(Y)=0.5188\neq \theta_1=1$, the graph $G^+=Y+12$ of Figure \ref{fig:F2(Y+12)} satisfies $\alpha(G^+)=\alpha(G)$. Moreover, Figure \ref{fig:example} shows an example where the conclusion of Lemma \ref{le-Shao2008} does not hold because its hypothesis fails. In the latter graph, we have $\alpha(G^+)=0.2679$, which is different from $\alpha(G^+-v_{13}v_{23})=0.1981$. This is because $\alpha(G)=\alpha(G^+-\{v_{13}v_{23}, v_{11}v_{31}\})=\lambda_1(\L_{[V_r]}(P_{r+1}))=\theta_r=0.1981$ (see Table \ref{tab1} for $r=3$). \begin{table} \begin{center} {\small \begin{tabular}{|c||cccccccccc|} \hline $r$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline $\theta_r$ & 1 & 0.3820 & 0.1981 & 0.1206 & 0.0810 & 0.0581 & 0.0437 & 0.0341 & 0.0273 & 0.0223\\ \hline \end{tabular} } \end{center} \caption{The values of $\theta_r=\lambda_1(\L_{[V_r]}(P_{r+1}))$ for different values of $r$.} \label{tab1} \end{table} \end{example} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (0) at (1,1) [fill,label=above:{$v$}]{}; \vertex (11) at (2.2,1.8) [fill,label=above:{$v_{11}$}]{}; \vertex (12) at (3.4,2.3) [fill,label=above:{$v_{12}$}]{}; \vertex (13) at (4.6,2.7) [fill,label=above:{$v_{13}$}]{}; \vertex (21) at (2.7,1.3) [fill,label=above:{$v_{21}$}]{}; \vertex (22) at (3.8,1.45) [fill,label=above:{$v_{22}$}]{}; \vertex (23) at (5,1.5) [fill,label=right:{$v_{23}$}]{}; \vertex (31) at (2.3,0.7) [fill,label=below:{$v_{31}$}]{}; \vertex (32) at (3.5,0.4) [fill,label=below:{$v_{32}$}]{}; \vertex (33) at (4.6,0.1) [fill,label=below:{$v_{33}$}]{}; \vertex (41) at (-0.2,1.2) [fill,label=above:{$v_{41}$}]{}; \vertex (51) at (0,0.2) [fill,label=below:{$v_{51}$}]{}; \draw[line width=0.6pt](0)--(11); \draw[line width=0.6pt](0)--(21); \draw[line width=0.6pt](0)--(31); \draw[line width=0.6pt](0)--(41);
\draw[line width=0.6pt](0)--(51); \draw[line width=0.6pt](11)--(12); \draw[line width=0.6pt](12)--(13); \draw[line width=0.6pt](13)--(23); \draw[line width=0.6pt](11)--(31); \draw[line width=0.6pt](21)--(22); \draw[line width=0.6pt](22)--(23); \draw[line width=0.6pt](31)--(32); \draw[line width=0.6pt](32)--(33); \draw[line width=0.6pt](41)--(51); \end{tikzpicture} \caption{An example of $G^+$ where $H=K_3$ and $s=r=3$ in Lemma \ref{le-Shao2008}.}\label{fig:example} \end{center} \end{figure} It has been shown in Kirkland \cite{Kirkland_2000} that, for a connected graph $G$ with $n$ vertices and a cut vertex $v$, we have $\alpha(G)\le 1$, and the equality holds if and only if $d_G(v)=n-1$. We get the following theorem by combining this with previous results. \begin{theorem} \label{th-addedge-pendant} Let $G$ be a graph on $n$ vertices. Suppose that $v_1,v_2,\ldots,v_s$ (for $s\ge2$) are $s$ pendant vertices of $G$ adjacent to a common vertex $v$ with $d_G(v)\ne n-1$. Let $G^+$ be a graph obtained from $G$ by adding any $t$ $\left(\mbox{for } 0\le t\le \frac{s(s-1)}{2}\right)$ edges among $v_1,v_2,\ldots,v_s$. If $\alpha(F_k(G))=\alpha(G)$, then $\alpha(F_k(G^+))=\alpha(G^+)$. \end{theorem} \begin{proof} Note that $v$ is a cut vertex of $G$. Since $d_G(v)\ne n-1$, we have $\alpha(G)<1=\theta_1$. It follows from Lemma \ref{le-Shao2008} and Theorem \ref{th-add_edge} that the result holds.\qed \end{proof} The following corollary holds directly from Theorems \ref{th-Dalfo2024}$(i)$ and \ref{th-addedge-pendant}. \begin{corollary} Let $T$ be a tree on $n$ vertices. Suppose that $v_1,v_2,\ldots,v_s$ (for $s\ge2$) are $s$ pendant vertices of $T$ adjacent to a common vertex $v$ with $d_T(v)\ne n-1$. Let $T^+$ be a graph obtained from $T$ by adding any $t$ $\left(\mbox{for } 0\le t\le \frac{s(s-1)}{2}\right)$ edges among $v_1,v_2,\ldots,v_s$. Then, $\alpha(F_k(T^+))=\alpha(T^+)$.
\end{corollary} \subsection{Extended cycles} Let $G=(V,E)=C_n$ be a cycle with $n$ vertices, $v_0,v_1,\ldots,v_{n-1}$, and let $C^+_n=(V,E^+)$ be a graph obtained from $C_n$ by adding some edges in $E_0=\{v_iv_j:i+j=\nu\}$, where $\nu=n$ if $n$ is odd and $\nu\in \{n,n-1\}$ otherwise, so that $E^+=E\cup E_0$. Guo, Zhang, and Yu \cite{Guo_2018} showed that there exists a Fiedler vector $\x$ of $C_n$ such that $\x_{v_i}=\x_{v_{n-i}}$ (for $i=1,\ldots,(n-1)/2$), and an integer $s$ (with $1\le s\le (n-3)/2$) such that $\x_{v_0}>\x_{v_1}>\cdots>\x_{v_s}>0\ge \x_{v_{s+1}}>\cdots>\x_{v_{\frac{n-1}{2}}}$ for odd $n$. (For even $n$, there is a similar result.) Consequently, they proved that adding the edges in $E_0$ does not change the algebraic connectivity: $\alpha(C_n)=\alpha(C^+_n)$. Combining this fact with Theorems \ref{th-add_edge} and \ref{th-Dalfo2024}$(v)$, we obtain the following result. \begin{theorem} Let $C_n$ be a cycle with $n$ vertices, and $C^+_n$ a graph obtained from $C_n$ by adding some edges in $E_0=\{v_iv_j:i+j=\nu\}$. Then, $\alpha(F_{k}(C^+_n))=\alpha(C^+_n)$ for $k=1,2$. \end{theorem} \subsection{Extended complete bipartite graphs} Now, we consider adding new edges to complete bipartite graphs. \begin{theorem} \label{th-bip-addedge1} Let $K_{n_1,n_2}$ be a complete bipartite graph on $n=n_1+n_2$ vertices, with bipartition $(X,Y)$ such that $|X|=n_1\le n_2=|Y|$. Let $K^+_{n_1,n_2}$ be a graph obtained from $K_{n_1,n_2}$ by adding any $t$ $\left(\mbox{for } 0\le t\le \frac{n_1(n_1-1)}{2}\right)$ edges among $X$. Then, $\alpha(F_k(K^+_{n_1,n_2}))=\alpha(K^+_{n_1,n_2})=n_1$ for every $n_1,n_2$ and $k=1,\ldots,n-1$. \end{theorem} \begin{proof} Denote by $K^*_{n_1,n_2}$ the graph obtained from $K_{n_1,n_2}$ by adding all edges among $X$, that is, $K^*_{n_1,n_2}[X]\cong K_{n_1}$. It can be shown that the spectrum of $K^*_{n_1,n_2}$ is $\{0,n_1^{n_2-1},n^{n_1}\}$. Thus, $\alpha(K^*_{n_1,n_2})=n_1$.
It follows from Lemma \ref{le-interlacing} that \begin{equation*} \alpha(K^*_{n_1,n_2})\ge \alpha(K^+_{n_1,n_2})\ge\alpha(K_{n_1,n_2}). \end{equation*} Note that $\alpha(K^*_{n_1,n_2})=\alpha(K_{n_1,n_2})=n_1$. We obtain $\alpha(K^+_{n_1,n_2})=n_1=\alpha(K_{n_1,n_2})$. Then, the result follows from Theorems \ref{th-Dalfo2024}$(iii)$ and \ref{th-add_edge}.\qed \end{proof} How about adding edges among the vertices in $Y$ in the complete bipartite graph? We consider adding all edges among $Y$ based on the following lemma. For a vertex $v\in V(G)$, let $S_v:=\{A\in V(F_k(G)):v\in A\}$ and $S_v':=\{B\in V(F_k(G)):v\notin B\}$. Let $H_v$ and $H_v'$ be the subgraphs of $F_k(G)$ induced by $S_v$ and $S_v'$, respectively. Note that $H_v\cong F_{k-1}(G-v)$ and $H_v'\cong F_k(G-v)$. \begin{lemma}[Dalf\'o and Fiol \cite{Dalfo_2024}] \label{le-embedding} Given a vertex $v\in V(G)$ and an eigenvector $\w$ of $F_k(G)$ such that $\B^{\top}\w=\vec0$, let $\y_v:=\w|_{S_v}$ and $\y_v':=\w|_{S_v'}$. Then, $\y_v$ and $\y_v'$ are embeddings of $H_v$ and $H_v'$, respectively. \end{lemma} \begin{theorem} \label{th-bip-addedge2} Let $G=K_{n_1,n_2}$ be a complete bipartite graph on $n=n_1+n_2$ vertices, with bipartition $(X,Y)$ such that $2\le n_1\le n_2$, where $|X|=n_1$ and $|Y|=n_2$. Let $K^*_{n_1,n_2}$ be a graph obtained from $K_{n_1,n_2}$ by adding all edges among $Y$. Then, $\alpha(F_k(K^*_{n_1,n_2}))=\alpha(K^*_{n_1,n_2})=n_2$ for every $n_1,n_2$ and $k=1,\ldots,n-1$. \end{theorem} \begin{proof} Note that the spectrum of $K^*_{n_1,n_2}$ is $\{0,n_2^{n_1-1},n^{n_2}\}$. Thus, $\alpha(K^*_{n_1,n_2})=n_2$. Recall that $\alpha(F_k(K^*_{n_1,n_2}))\le \alpha(K^*_{n_1,n_2})=n_2$. Let $\w$ be a Fiedler vector of $F_k(K^*_{n_1,n_2})$. Without loss of generality, suppose that $\w^{\top}\w=1$. 
It follows from Lemma \ref{le-Dalfo2021}($ii$) that, if $\B^{\top}\w\ne\vec0$, then $\alpha(F_k(K^*_{n_1,n_2}))$ is also an eigenvalue of $K^*_{n_1,n_2}$, so that $\alpha(F_k(K^*_{n_1,n_2}))=\w^{\top}\L(F_k(K^*_{n_1,n_2}))\w\ge \alpha(K^*_{n_1,n_2})=n_2$. Then, it suffices to show that $\alpha(F_k(K^*_{n_1,n_2}))\ge n_2$ when $\B^{\top}\w=\vec0$. We prove it by induction on $n_1(\ge2)$ and $k$. For $k=1$, the claim holds, since $F_1(K^*_{n_1,n_2})\cong K^*_{n_1,n_2}$. Consider first the case $n_1=2$, and assume that $\alpha(F_{k-1}(K^*_{n_1,n_2}))\ge n_2$. Let $v$ be a vertex in $X$. Thus, $G-v\cong K_{n_2+1}$ is a complete graph. Let $\y_v:=\w|_{S_v}$ and $\y_v':=\w|_{S_v'}$, which, by Lemma \ref{le-embedding}, are embeddings of $H_v\cong F_{k-1}(G-v)$ and $H_v'\cong F_k(G-v)$, respectively. Then, from (\ref{eq-rayleigh}), the hypothesis, and Theorem \ref{th-Dalfo2024}$(iv)$, we have \[ \frac{\sum_{(A,B)\in E(H_v)}\left((\y_v)_A-(\y_v)_B\right)^2}{\sum_{A\in V(H_v)}((\y_v)_A)^2}\ge \alpha(H_v)=\alpha(F_{k-1}(G-v))\ge n_2 \] and \[ \frac{\sum_{(A,B)\in E(H_v')}((\y_v')_A-(\y_v')_B)^2}{\sum_{A\in V(H_v')}((\y_v')_A)^2}\ge \alpha(H_v')=\alpha(F_k(G-v))=n_2+1. \] Thus, \begin{eqnarray*} \alpha(F_{k}(K^*_{n_1,n_2}))&=&\w^{\top}\L(F_k(K^*_{n_1,n_2}))\w\\ &=&\sum_{(A,B)\in E(F_k(K^*_{n_1,n_2}))}(\w_A-\w_B)^2\\ &\ge &\sum_{(A,B)\in E(H_v)}\left((\y_v)_A-(\y_v)_B\right)^2+\sum_{(A,B)\in E(H_v')}\left((\y_v')_A-(\y_v')_B\right)^2\\ &\ge &\alpha(F_{k-1}(G-v))\sum_{A\in V(H_v)}\left((\y_v)_A\right)^2+\alpha(F_k(G-v))\sum_{A\in V(H_v')}\left((\y_v')_A\right)^2\\ &\ge &n_2\left[\sum_{A\in V(H_v)}(\w_A)^2+\sum_{A\in V(H_v')}(\w_A)^2\right]\\ &=&n_2. \end{eqnarray*} For $n_1>2$ and $k>1$, we obtain the claim using a similar approach, now with $G-v\cong K^*_{n_1-1,n_2}$ and the induction hypothesis for $n_1-1$. Together with $\alpha(F_k(K^*_{n_1,n_2}))\le n_2$, the result holds. \qed \end{proof} \section{Kite graphs} \label{sec:kite} The graphs in Lemma \ref{le-Shao2008} suggest the following definition.
\begin{definition} A {\em kite} $K(H,T)$ is a graph consisting of a {\em head} $H$ that is a general graph, and a {\em tail} $T$ consisting of $s\ge 2$ paths of equal length $r$ with the common end-vertex $v\in V(H)$. \end{definition} Examples of kite graphs are shown in Figures \ref{fig:Kite2}, \ref{fig:kite}, and \ref{fig:ex-th-kite}. Thus, general kite graphs appear in Lemma \ref{le-Shao2008}, whereas the graphs in Theorem \ref{th-addedge-pendant} correspond to kite graphs with tail length $r=1$. Before proving the main theorems about kite graphs, we first need some results. \begin{lemma}[Bapat and Pati \cite{Bapat_1998}] \label{le-Bapat1998} Let $G$ be a connected graph. Let $W$ be a set of vertices of $G$ such that $G- W$ is disconnected. Let $G_1$ and $G_2$ be two components of $G-W$. Suppose that $\lambda_1(\L_{[V(G_1)]}(G))\le \lambda_1(\L_{[V(G_2)]}(G))$. Then, one of the two following conditions holds: \begin{itemize} \item[$(i)$] $\alpha(G)< \lambda_1(\L_{[V(G_2)]}(G))$; \item[$(ii)$] $\alpha(G)=\lambda_1(\L_{[V(G_1)]}(G))= \lambda_1(\L_{[V(G_2)]}(G))$. \end{itemize} \end{lemma} The following result gives a sufficient condition for having the equality in Lemma \ref{le-Bapat1998}$(ii)$. \begin{lemma}\label{le-suffcondition} Let $G$ be a connected graph with a cut vertex $v$. Let $G_1,G_2,\ldots,G_r$ be $r$ components of $G-v$ with $\lambda_1(\L_{[V(G_1)]}(G))\le \lambda_1(\L_{[V(G_2)]}(G))\le \cdots \le\lambda_1(\L_{[V(G_r)]}(G))$. If $\lambda_1(\L_{[V(G_1)]}(G))=\lambda_1(\L_{[V(G_2)]}(G))$, then $\alpha(G)=\lambda_1(\L_{[V(G_1)]}(G))=\lambda_1(\L_{[V(G_2)]}(G))$. \end{lemma} \begin{proof} It follows from Lemma \ref{le-principal_submatrix} that \begin{equation} \label{eq:ineq-lambdas} \lambda_1(\L(G))\le \lambda_1(\L_{[V(G-v)]}(G))\le \lambda_2(\L(G))\le \lambda_2(\L_{[V(G-v)]}(G)). \end{equation} Since $\lambda_1(\L_{[V(G_1)]}(G))=\lambda_1(\L_{[V(G_2)]}(G))$, we have $\lambda_1(\L_{[V(G-v)]}(G))= \lambda_2(\L_{[V(G-v)]}(G))$. 
This is because $\L_{[V(G-v)]}(G)$ is a block diagonal matrix with blocks $\L_{[V(G_i)]}(G)$ for $i=1,\ldots,r$ and, hence, the spectrum of the former is the union of the spectra of the latter. Then, \eqref{eq:ineq-lambdas} gives $\alpha(G)=\lambda_2(\L(G))=\lambda_1(\L_{[V(G_1)]}(G))=\lambda_1(\L_{[V(G_2)]}(G))$. \qed \end{proof} We next give a necessary and sufficient condition for $\alpha (G)=2+2\cos(\frac{2r\pi}{2r+1})$ if $G$ is a kite graph. \begin{lemma} \label{le-kite-iff} Let $G=K(H,T)$ be a kite graph with a tail of length $r$ formed with $s(\ge 2)$ paths. Then, $\alpha (G)=2+2\cos(\frac{2r\pi}{2r+1})$ if and only if $\lambda_1(\L_{[V(H-v)]}(H))\ge 2+2\cos(\frac{2r\pi}{2r+1}).$ \end{lemma} \begin{proof} Note that \[\L_{[V(G-v)]}(G)=\diag\left(\L_{[V(H-v)]}(H),\L_{[V_{r}]}(P_{r+1}),\ldots,\L_{[V_{r}]}(P_{r+1})\right),\] and $\lambda_1(\L_{[V_{r}]}(P_{r+1}))=2+2\cos(\frac{2r\pi}{2r+1})$. If $\lambda_1(\L_{[V(H-v)]}(H))\ge \lambda_1(\L_{[V_{r}]}(P_{r+1}))$, then it follows from Lemma \ref{le-suffcondition} that $\alpha(G)=\lambda_1(\L_{[V_{r}]}(P_{r+1}))=2+2\cos(\frac{2r\pi}{2r+1})$. Conversely, suppose, by way of contradiction, that $\lambda_1(\L_{[V(H-v)]}(H))<\lambda_1(\L_{[V_{r}]}(P_{r+1}))$. Then, it follows from Lemma \ref{le-Bapat1998} that $\alpha(G)<\lambda_1(\L_{[V_{r}]}(P_{r+1}))=2+2\cos(\frac{2r\pi}{2r+1})$, a contradiction. Hence, we obtain $\lambda_1(\L_{[V(H-v)]}(H))\ge 2+2\cos(\frac{2r\pi}{2r+1})$.\qed \end{proof} \begin{proposition} \label{prop:LvsS} Let $G$ be a kite graph $K(H,T)$ with head $H$ and tail $T$ with $s(\ge 2)$ paths of length $r$, with vertices $v_{i1},v_{i2},\ldots,v_{ir}$ for $i=1,\ldots,s$. Let $G^+$ be a graph obtained from $G$ by adding some edges among the vertices of the sets $U_j=\{v_{2j},v_{3j},\ldots,v_{sj}\}$, for $j\in \{1,\ldots,r\}$. Let $\L$ be the Laplacian matrix of $G$.
Let $\SS=(s_{ij})$ be the matrix, indexed also by the vertices of $G$, with entries $$ s_{uv}=\left\{ \begin{array}{lll} 1 & \mbox{if } u=v\in V(H)\cup\{v_{11},v_{12}, \ldots,v_{1r}\},\\ \frac{1}{s-1} & \mbox{if } u,v\in U_j\quad \mbox{for some } j\in\{1,\ldots,r\},\\ 0 & \mbox{elsewhere.} \end{array} \right. $$ Then, the following statements hold. \begin{itemize} \item[$(i)$] The matrices $\L$ and $\SS$ commute: $\L\SS=\SS\L$. \item[$(ii)$] For every eigenvalue $\lambda$ of $G$, there exists a corresponding eigenvector $\x$ of $G$ such that $\x_{v_{2j}}=\x_{v_{3j}}=\cdots=\x_{v_{sj}}$ for $1\le j\le r$. \item[$(iii)$] Every distinct eigenvalue of $G$ appears also as an eigenvalue of $G^{+}$ (not necessarily including multiplicities). In particular, $\alpha(G^{+})=\alpha(G)$. \end{itemize} \end{proposition} \begin{proof} $(i)$ For simplicity, assume that $V(H)=\{v\}$ (if not, the proof is basically the same). Then, the block matrices $\L$ and $\SS$ are $$ \L= {\scriptsize{\left( \begin{array}{cccccc} d_v & -\j^{\top} & \vec0 & \vec0 & \cdots & \vec0\\ -\j & 2\I & -\I & \O & \cdots & \O\\ \vec0 & -\I & \ddots & \ddots & \cdots & \vdots\\ \vec0 & \O & \ddots & \ddots & -\I & \O\\ \vdots & \vdots & \vdots & -\I & 2\I & -\I \\ \vec0 & \O & \cdots & \O & -\I & \I \end{array} \right)}},\quad\mbox{and}\quad \SS= \diag(1,\SS_{11},\ldots,\SS_{rr}) $$ where $\SS_{11}=\cdots=\SS_{rr}=\diag\left(1,\frac{1}{s-1}\J\right)$. Then, the computation of $\L\SS$ and $\SS\L$ shows that the commutativity comes from $\j^{\top}\SS_{11}=\j^{\top}$ and $\SS_{11}\j=\j$.\\ $(ii)$ Let $\y$ be a $\lambda$-eigenvector of $G$ so that $\L\y=\lambda\y$. First, notice that the vector $\x=\SS\y$ satisfies the condition, since \begin{align} \x_{u} &=\y_{u},\quad\text{ if } u\in V(H)\cup\{v_{11},v_{12},\ldots,v_{1r}\}; \label{eq-kite-vec1}\\ \x_{v_{2j}} &=\x_{v_{3j}}=\cdots=\x_{v_{sj}}=\frac{\y_{v_{2j}}+\cdots+\y_{v_{sj}}}{s-1},\quad \text{ if } j=1,\ldots, r.
\label{eq-kite-vec2} \end{align} Moreover, since $\y\neq \vec0$ and all the paths of the tail are isomorphic, there is no loss of generality if we assume that $\y_u\neq 0$ for some $u$ in \eqref{eq-kite-vec1}. Then, $\x\neq \vec0$, and $$ \L\x=\L\SS\y=\SS\L\y=\lambda\SS\y=\lambda\x, $$ so that $\x$ is also a $\lambda$-eigenvector of $G$.\\ $(iii)$ As happens for the algebraic connectivity in Lemma \ref{le-adding_edge_iff}, if a $\lambda$-eigenvector $\x$ of $G$ has equal components $\x_u=\x_v$, as in \eqref{eq-kite-vec2}, the graph $G^{+}$, obtained by adding the edge $uv$, has the same eigenvector $\x$ and eigenvalue $\lambda$. \qed \end{proof} \begin{example} The kite graph $G$ of Figure \ref{fig:Kite2} (left) \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (1) at (0,1) [fill,label=above:{$1$}]{}; \vertex (2) at (-1,2) [fill,label=above:{$2$}]{}; \vertex (3) at (-2,1) [fill,label=left:{$3$}]{}; \vertex (4) at (-1,0) [fill,label=below:{$4$}]{}; \vertex (5) at (1,2) [fill,label=above:{$5$}]{}; \vertex (8) at (2,2) [fill,label=above:{$8$}]{}; \vertex (11) at (3,2) [fill,label=above:{$11$}]{}; \vertex (6) at (1,1) [fill,label=above:{$6$}]{}; \vertex (9) at (2,1) [fill,label=above:{$9$}]{}; \vertex (12) at (3,1) [fill,label=above:{$12$}]{}; \vertex (7) at (1,0) [fill,label=below:{$7$}]{}; \vertex (10) at (2,0) [fill,label=below:{$10$}]{}; \vertex (13) at (3,0) [fill,label=below:{$13$}]{}; \draw[line width=0.6pt](1)--(2); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](1)--(4); \draw[line width=0.6pt](1)--(5); \draw[line width=0.6pt](1)--(6); \draw[line width=0.6pt](1)--(7); \draw[line width=0.6pt](5)--(8); \draw[line width=0.6pt](8)--(11); \draw[line width=0.6pt](6)--(9); \draw[line width=0.6pt](9)--(12); \draw[line width=0.6pt](7)--(10); \draw[line width=0.6pt](10)--(13); \vertex (1') at (7,1) [fill,label=above:{$1$}]{}; \vertex (2') at (6,2) [fill,label=above:{$2$}]{}; \vertex (3') at (5,1) 
[fill,label=left:{$3$}]{}; \vertex (4') at (6,0) [fill,label=below:{$4$}]{}; \vertex (5') at (8,2) [fill,label=above:{$5$}]{}; \vertex (8') at (9,2) [fill,label=above:{$8$}]{}; \vertex (11') at (10,2) [fill,label=above:{$11$}]{}; \vertex (6') at (8,1) [fill,label=above:{$6$}]{}; \vertex (9') at (9,1) [fill,label=above:{$9$}]{}; \vertex (12') at (10,1) [fill,label=above:{$12$}]{}; \vertex (7') at (8,0) [fill,label=below:{$7$}]{}; \vertex (10') at (9,0) [fill,label=below:{$10$}]{}; \vertex (13') at (10,0) [fill,label=below:{$13$}]{}; \draw[line width=0.6pt](1')--(2'); \draw[line width=0.6pt](2')--(3'); \draw[line width=0.6pt](3')--(4'); \draw[line width=0.6pt](1')--(4'); \draw[line width=0.6pt](1')--(5'); \draw[line width=0.6pt](1')--(6'); \draw[line width=0.6pt](1')--(7'); \draw[line width=0.6pt](5')--(8'); \draw[line width=0.6pt](8')--(11'); \draw[line width=0.6pt](6')--(9'); \draw[line width=0.6pt](9')--(12'); \draw[line width=0.6pt](7')--(10'); \draw[line width=0.6pt](10')--(13'); \draw[line width=0.6pt](12')--(13'); \draw[line width=0.6pt](6')--(7'); \draw[line width=0.6pt](9')--(10'); \end{tikzpicture} \caption{A kite graph $G=(H,T)$ with head $H=C_4$ and tail $T$ with $s=3$ paths of length $r=3$, and its extended graph $G^+$.}\label{fig:Kite2} \end{center} \end{figure} has Laplacian matrix $$ \L = \left({\scriptsize\begin{array}{ccccccccccccc} 5 & -1 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & -1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 2 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 2 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 2 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 2 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 
& 0 & 0 & -1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 \end{array}}\right) $$ with algebraic connectivity $\alpha(G)=0.19806$. Then, we can add the edges $6\sim 7$, $9\sim 10$, and $12\sim 13$ to obtain $G^+$, shown in the same figure (right), keeping the same value of $\alpha(G)$, as guaranteed by Proposition \ref{prop:LvsS}$(iii)$. A Fiedler vector is $\y =(0,0,0,0,-0.44504,0,0.44504,-0.80193,0,0.80193,-1,0,1)^{\top}$. If we multiply it by the matrix $\SS$ $$ \SS = \left({\scriptsize\begin{array}{ccccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}}\right), $$ then $\L\SS=\SS\L$ and we get \begin{align*} &\x=\SS\y=\\ &(0,0,0,0,-0.44504,0.22252,0.22252,-0.80193,0.40096,0.40096,-1,0.5,0.5)^{\top}. \end{align*} In fact, every eigenvalue in $$ \spec G=\{6.271, 3.342, 3.246^2, 2.810, 2, 1.554^2, 1.211, 0.364, 0.198^2,0\} $$ appears (in boldface) in \begin{align*} &\spec G^+=\\ &\{{\bf 6.271}, 5.246, 3.554, {\bf 3.342}, {\bf 3.246}, {\bf 2.810}, 2.198, {\bf 2}, {\bf 1.554}, {\bf 1.211}, {\bf 0.364}, {\bf 0.198}, {\bf 0}\}. \end{align*} \end{example} Now, we are ready to obtain the result on the token graph of kite graphs.
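As a numerical aside (assuming Python with numpy; vertex labels as in Figure \ref{fig:Kite2}), the commutation $\L\SS=\SS\L$ of Proposition \ref{prop:LvsS}$(i)$ and the value $\alpha(G)=2+2\cos\left(\frac{6\pi}{7}\right)\approx 0.198$ for the example above can be cross-checked as follows:

```python
import numpy as np

# Kite G of the example: head C_4 = 1-2-3-4 and tail paths 1-5-8-11,
# 1-6-9-12, 1-7-10-13 (labels as in the figure; 0-based indices below).
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 5), (1, 6), (1, 7),
         (5, 8), (8, 11), (6, 9), (9, 12), (7, 10), (10, 13)]
n = 13
L = np.zeros((n, n))
for u, v in edges:
    u, v = u - 1, v - 1
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# The matrix S of the Proposition: identity on the head and the first path,
# averaging on each pair U_j = {6,7}, {9,10}, {12,13}.
S = np.eye(n)
for a, b in [(6, 7), (9, 10), (12, 13)]:
    a, b = a - 1, b - 1
    S[a, a] = S[b, b] = S[a, b] = S[b, a] = 0.5

assert np.allclose(L @ S, S @ L)    # statement (i): L and S commute
alpha = np.sort(np.linalg.eigvalsh(L))[1]
assert abs(alpha - (2 + 2 * np.cos(6 * np.pi / 7))) < 1e-8   # ~ 0.198
```

Here the commutation follows because the averaging matrix is $\frac{1}{2}(\I+\mathbf{P})$ for the automorphism $\mathbf{P}$ swapping the two symmetric tail paths.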
\begin{theorem} \label{th-kite} Let $G=K(H,T)$ be a kite graph with a tail of length $r$ formed with $s(\ge 2)$ paths $v,v_{i1},v_{i2},\ldots,v_{ir}$ for $i=1,\ldots,s$. Let $K(H,T^+)$ be the graph obtained from $G$ by adding some edges between the vertices $v_{1j},v_{2j},\ldots,v_{sj}$, for each $j\in\{1,\ldots,r\}$, and $K^+(H,T)$ be the graph obtained from $G$ by adding some edges to $H$, and between the vertices $v_{2j},\ldots,v_{sj}$ with $1\le j\le r$. Suppose that $\alpha(H)=\alpha(F_k(H))$. Then, the following statements hold. \begin{itemize} \item[$(i)$] If $\alpha (G)\neq2+2\cos(\frac{2r\pi}{2r+1})$, then $\alpha(F_k(K(H,T^+)))=\alpha(K(H,T^+))=\alpha(G)$. \item[$(ii)$] If $\alpha (G)=2+2\cos(\frac{2r\pi}{2r+1})$, then $\alpha(F_k(K^+(H,T)))=\alpha(K^+(H,T))=\alpha(G)=2+2\cos(\frac{2r\pi}{2r+1})$. \end{itemize} \end{theorem} \begin{proof} ($i$) Note that the result by Shao, Guo, and Shan \cite{Shao_2008} in Lemma \ref{le-Shao2008} reads as $\alpha(K(H,T^+))=\alpha(G)$. Combining this with Theorems \ref{th-add_edge} and \ref{th-Dalfo2024}$(ii)$, and the hypothesis, we obtain $\alpha(F_k(K(H,T^+)))=\alpha(K(H,T^+))=\alpha(G)$. ($ii$) Let $G'\cong K(H^+,T)$ be the graph obtained from $G$ by adding some edges to $H$, where $|V(H)|=h$. We first prove that $\alpha(K(H^+,T))=\alpha(K(H,T))$. From Lemma \ref{le-kite-iff}, we get that $\lambda_1(\L_{[V(H-v)]}(H))\ge \lambda_1(\L_{[V_{r}]}(P_{r+1}))=2+2\cos(\frac{2r\pi}{2r+1})$. Denote the additional edges of $G'$ by $E'$ if they are between the vertices of $H-v$, and by $E''$ if they have the end-vertex $v$. Let $H'$ be the graph with vertex set $V(H')=V(H-v)$ and edge set $E'$. Then, $\L_{[V(H-v)]}(G')=\L_{[V(H-v)]}(H)+\L(H')+\I''$, where $\I''$ is a diagonal matrix of order $h-1$, with $|E''|$ ones and $h-1-|E''|(\ge 1)$ zeros.
It follows from Lemma \ref{le-matrix_add} with $i=1$ and the hypothesis that \begin{align} \lambda_1(\L_{[V(H-v)]}(G')) &\ge \lambda_1(\L_{[V(H-v)]}(H))+\lambda_1(\L(H'))+\lambda_1(\I'')\nonumber\\ & \ge\lambda_1(\L_{[V(H-v)]}(H))\ge2+2\cos\left(\frac{2r\pi}{2r+1}\right). \label{eq:2nd-ineq} \end{align} Then, from \eqref{eq:2nd-ineq} and Lemma \ref{le-suffcondition}, we have that $$ \alpha(G)=2+2\cos\left(\frac{2r\pi}{2r+1}\right)=\alpha(G'). $$ Next, let $G^+\cong K^+(H,T)$ be the graph obtained from $G'$ by adding some edges between the vertices $v_{2j},\ldots,v_{sj}$ with $1\le j\le r$. It follows from Proposition \ref{prop:LvsS}$(iii)$ that $\alpha(G^+)=\alpha(G')=\alpha(G)$. Together with Theorem \ref{th-Dalfo2024}$(ii)$ (since $\alpha(H)=\alpha(F_k(H))$ implies $\alpha(G)=\alpha(F_k(G))$) and Theorem \ref{th-add_edge}, we obtain $\alpha(F_k(K^+(H,T)))=\alpha(K^+(H,T))=\alpha(G)=2+2\cos\left(\frac{2r\pi}{2r+1}\right)$. \qed \end{proof} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.9,auto,swap] \vertex (v) at (0,1) [fill]{}; \node () at (-0.1,1) [label=above:{$v$}]{}; \vertex (11) at (-1,1) [fill,label=above:{$v_{11}$}]{}; \vertex (12) at (-2,0.9) [fill,label=above:{$v_{12}$}]{}; \vertex (13) at (-3,0.9) [fill,label=above:{$v_{13}$}]{}; \vertex (14) at (-4,0.35) [fill,label=above:{$v_{14}$}]{}; \vertex (21) at (-1,0.5) [fill]{}; \node () at (-1,0.35) [label=above:{$v_{21}$}]{}; \vertex (22) at (-2,0.3) [fill,label=above:{$v_{22}$}]{}; \vertex (23) at (-2.8,0.05) [fill,label=above:{$v_{23}$}]{}; \vertex (24) at (-3.3,-0.5) [fill]{}; \node () at (-3.4,-0.5) [label=above:{$v_{24}$}]{}; \vertex (31) at (-0.45,0) [fill,label=below:{$v_{31}$}]{}; \vertex (32) at (-1.5,-0.2) [fill,label=below:{$v_{32}$}]{}; \vertex (33) at (-2.3,-0.4) [fill]{}; \node () at (-2.2,-0.3) [label=below:{$v_{33}$}]{}; \vertex (34) at (-2.7,-0.9) [fill,label=below:{$v_{34}$}]{}; \vertex (1) at (0.7,2.3) [fill,label=above:{$u_{1}$}]{}; \vertex (2) at (2,2.9) 
[fill,label=above:{$u_{2}$}]{}; \vertex (3) at (3.4,2.4) [fill,label=above:{$u_{3}$}]{}; \vertex (4) at (2.8,1) [fill,label=below:{$u_{4}$}]{}; \vertex (5) at (1.5,0.4) [fill,label=below:{$u_{5}$}]{}; \draw[line width=0.6pt](v)--(11); \draw[line width=0.6pt](11)--(12); \draw[line width=0.6pt](12)--(13); \draw[line width=0.6pt](13)--(14); \draw[line width=0.6pt](v)--(21); \draw[line width=0.6pt](21)--(22); \draw[line width=0.6pt](22)--(23); \draw[line width=0.6pt](23)--(24); \draw[line width=0.6pt](v)--(31); \draw[line width=0.6pt](31)--(32); \draw[line width=0.6pt](32)--(33); \draw[line width=0.6pt](33)--(34); \draw[line width=0.6pt](v)--(1); \draw[line width=0.6pt](1)--(2); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](4)--(5); \draw[line width=0.6pt](v)--(5); \draw[line width=0.6pt](1)--(5); \draw[line width=0.6pt](1)--(3); \draw[line width=0.6pt](2)--(4); \draw[line width=0.6pt](2)--(5); \draw[line width=0.6pt](v)--(4); \vertex (v') at (8,1) [fill]{}; \node () at (7.9,1) [label=above:{$v$}]{}; \vertex (11') at (7,1) [fill,label=above:{$v_{11}$}]{}; \vertex (12') at (6,0.9) [fill,label=above:{$v_{12}$}]{}; \vertex (13') at (5,0.9) [fill,label=above:{$v_{13}$}]{}; \vertex (14') at (4,0.35) [fill,label=above:{$v_{14}$}]{}; \vertex (21') at (7,0.5) [fill]{}; \node () at (7,0.35) [label=above:{$v_{21}$}]{}; \vertex (22') at (6,0.3) [fill,label=above:{$v_{22}$}]{}; \vertex (23') at (5.2,0.05) [fill,label=above:{$v_{23}$}]{}; \vertex (24') at (4.7,-0.5) [fill]{}; \node () at (4.6,-0.5) [label=above:{$v_{24}$}]{}; \vertex (31') at (7.45,0) [fill,label=below:{$v_{31}$}]{}; \vertex (32') at (6.5,-0.2) [fill,label=below:{$v_{32}$}]{}; \vertex (33') at (5.7,-0.4) [fill]{}; \node () at (5.8,-0.3) [label=below:{$v_{33}$}]{}; \vertex (34') at (5.3,-0.9) [fill,label=below:{$v_{34}$}]{}; \vertex (1') at (8.7,2.3) [fill,label=above:{$u_1$}]{}; \vertex (2') at (10,2.9) [fill,label=above:{$u_2$}]{}; \vertex (3') at (11.4,2.4) 
[fill,label=above:{$u_3$}]{}; \vertex (4') at (10.8,1) [fill,label=below:{$u_{4}$}]{}; \vertex (5') at (9.5,0.4) [fill,label=below:{$u_{5}$}]{}; \draw[line width=0.6pt](v')--(11'); \draw[line width=0.6pt](11')--(12'); \draw[line width=0.6pt](12')--(13'); \draw[line width=0.6pt](13')--(14'); \draw[line width=0.6pt](v')--(21'); \draw[line width=0.6pt](21')--(22'); \draw[line width=0.6pt](22')--(23'); \draw[line width=0.6pt](23')--(24'); \draw[line width=0.6pt](v')--(31'); \draw[line width=0.6pt](31')--(32'); \draw[line width=0.6pt](32')--(33'); \draw[line width=0.6pt](33')--(34'); \draw[line width=0.6pt](v')--(1'); \draw[line width=0.6pt](1')--(2'); \draw[line width=0.6pt](2')--(3'); \draw[line width=0.6pt](3')--(4'); \draw[line width=0.6pt](4')--(5'); \draw[line width=0.6pt](v')--(5'); \draw[line width=0.6pt](1')--(5'); \draw[line width=0.6pt](1')--(3'); \draw[line width=0.6pt](2')--(4'); \draw[line width=0.6pt](2')--(5'); \draw[line width=0.6pt](v')--(4'); \draw[blue,line width=0.6pt](21')--(31'); \draw[blue,line width=0.6pt](23')--(33'); \draw[blue,line width=0.6pt](24')--(34'); \draw[blue,line width=0.6pt](3')--(v'); \draw[blue,line width=0.6pt](2')--(5'); \draw[blue,line width=0.6pt](4')--(1'); \end{tikzpicture} \caption{A kite graph $G=K(H,T)$ and its extended graph $G^+$.}\label{fig:kite} \end{center} \end{figure} \begin{example} Figure \ref{fig:ex-th-kite} is an example of \eqref{eq:2nd-ineq} with $E'=\{36,46\}$ and $E''=\{13,14\}$. 
Note that $\L_{[V(H-v)]}(G^+)=\L_{[V(H-v)]}(H)+\L(H')+\I''$ with \begin{eqnarray*} \L_{[V(H-v)]}(H) &=&{\scriptsize{\left({\begin{array}{ccccc} 3&-1&0&-1&0\\ -1&2&-1&0&0\\ 0&-1&2&-1&0\\ -1&0&-1&3&-1\\ 0&0&0&-1&2 \end{array}} \right)}},\\ \L(H')&=&{\scriptsize{\left({\begin{array}{ccccc} 0&0&0&0&0\\ 0&1&0&0&-1\\ 0&0&1&0&-1\\ 0&0&0&0&0\\ 0&-1&-1&0&2 \end{array}} \right)}}, ~\text{and } \I''={\scriptsize{\left({\begin{array}{ccccc} 0&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{array}} \right).}} \end{eqnarray*} Moreover, $\lambda_1(\L_{[V(H-v)]}(G^+))=0.747>0.284=\lambda_1(\L_{[V(H-v)]}(H))$. \end{example} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (1) at (0,1) [fill]{}; \node () at (0.1,1) [label=above:{$1$}]{}; \node () at (0.1,1) [label=below:{$v$}]{}; \vertex (01) at (1,1.2) [fill]{}; \vertex (02) at (2,1.4) [fill]{}; \vertex (03) at (3,1.6) [fill]{}; \vertex (04) at (1,0.8) [fill]{}; \vertex (05) at (2,0.6) [fill]{}; \vertex (06) at (3,0.4) [fill]{}; \vertex (2) at (-0.7,1.8) [fill,label=above:{$2$}]{}; \vertex (3) at (-1.7,1.8) [fill,label=above:{$3$}]{}; \vertex (4) at (-2.4,1) [fill,label=left:{$4$}]{}; \vertex (5) at (-1.7,0.2) [fill,label=below:{$5$}]{}; \vertex (6) at (-0.7,0.2) [fill,label=below:{$6$}]{}; \node () at (-2.4,1.5) [label=above:{$H$}]{}; \node () at (1.2,1.5) [label=above:{$T$}]{}; \draw[line width=0.6pt](1)--(01); \draw[line width=0.6pt](01)--(02); \draw[line width=0.6pt](02)--(03); \draw[line width=0.6pt](1)--(04); \draw[line width=0.6pt](04)--(05); \draw[line width=0.6pt](05)--(06); \draw[line width=0.6pt](1)--(2); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](4)--(5); \draw[line width=0.6pt](5)--(6); \draw[line width=0.6pt](1)--(6); \draw[line width=0.6pt](2)--(5); \draw[blue,line width=0.6pt](1)--(3); \draw[blue,line width=0.6pt](1)--(4); \draw[blue,line width=0.6pt](3)--(6); \draw[blue,line width=0.6pt](4)--(6); \end{tikzpicture} \caption{A 
graph illustrating Theorem \ref{th-kite}.}\label{fig:ex-th-kite} \end{center} \end{figure} This theorem and the following ones in this section can be generalized to superkite graphs. A superkite graph $K(H,{\cal T})$ has head $H$ and supertail ${\cal T}=Q^s$ formed by $s(\ge 2)$ trees isomorphic to $Q$ with the same root $v\in V(H)$. Then, essentially the same proof of Theorem \ref{th-kite} leads to the following result. \begin{theorem} \label{th-superkite} Let $G$ be a superkite graph $K(H,{\cal T})$ with supertail ${\cal T}=Q^s$, such that the head $H$, on $h=|V(H)|$ vertices, is not a complete graph, with $\alpha(H)=\alpha(F_k(H))$ and $$ \lambda_1(\L_{[V(H-v)]}(H))\ge \lambda_1(\L_{[V(Q-v)]}(Q)). $$ Let $G^+$ be the graph obtained from $G$ by adding some edges to $H$. Then, $\alpha(G^+)=\alpha(F_k(G^+))$. \end{theorem} The following two corollaries deal with the cases in which the head of a kite graph is a cycle and a complete bipartite graph, respectively. \begin{corollary} \label{th-kite-cycle} Let $G$ be a kite graph $K(H,T)$ with tail length $r$, such that $H\cong C_h$ is a cycle of order $h$ $(\mbox{for } h\le 2r+1)$ and $\alpha(H)=\alpha(F_k(H))$. Let $G^+$ be the graph obtained from $G$ by adding some edges to $H$, and between the vertices $v_{2j},\ldots,v_{sj}$ with $1\le j\le r$. Then, $\alpha(G^+)=\alpha(F_k(G^+))$. \end{corollary} \begin{proof} It is known that the characteristic polynomials of $\L_{[V']}(C_{h})$, where $V'=V(C_h-v)$, and $\L(P_{h})$ are related as $x\cdot\Phi(\L_{[V']}(C_{h}))=\Phi(\L(P_{h}))$ (see Guo \cite{Guo_2008}). Hence, $\lambda_i(\L_{[V']}(C_{h}))=\lambda_{i+1}(\L(P_{h}))$ for $i=1,\ldots,h-1$.
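As an aside (not part of the proof), this eigenvalue relation between the principal submatrix of the cycle and the Laplacian of the path can be checked numerically, e.g. with a short numpy sketch:

```python
import numpy as np

def laplacian_cycle(h):
    """Laplacian of the cycle C_h."""
    L = 2 * np.eye(h)
    for i in range(h):
        L[i, (i + 1) % h] -= 1
        L[(i + 1) % h, i] -= 1
    return L

def laplacian_path(h):
    """Laplacian of the path P_h on h vertices."""
    L = 2 * np.eye(h)
    L[0, 0] = L[-1, -1] = 1
    for i in range(h - 1):
        L[i, i + 1] = L[i + 1, i] = -1
    return L

h = 7
# principal submatrix of L(C_h) obtained by deleting one vertex v
mu = np.sort(np.linalg.eigvalsh(laplacian_cycle(h)[1:, 1:]))
# Laplacian eigenvalues of P_h: 2 - 2cos(k*pi/h), k = 0,...,h-1
tau = np.sort(np.linalg.eigvalsh(laplacian_path(h)))
assert np.allclose(mu, tau[1:])   # lambda_i of the submatrix = lambda_{i+1}(P_h)
```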
Then, since the Laplacian eigenvalues of the path $P_{\ell}$ on $\ell$ vertices are $\tau_k=2-2\cos(\frac{k\pi}{\ell})$ for $k=0,1,\ldots,\ell-1$, we have, with $H=C_h$ and $h\le 2r+1$, \begin{align*} \lambda_1(\L_{[V(H-v)]}(H))=\lambda_2(\L(P_h))&\ge\lambda_2(\L(P_{2r+1}))=2-2\cos\left(\frac{\pi}{2r+1}\right)\\ &=2+2\cos\left(\frac{2r\pi}{2r+1}\right)= \lambda_1(\L_{[V_r]}(P_{r+1})). \end{align*} From Lemma \ref{le-kite-iff} and Theorem \ref{th-kite}, the result holds. \qed \end{proof} \begin{corollary} \label{th-kite-bipartite} Let $G$ be a kite graph $K(H,T)$ with root vertex $v$, tail length $r$, and head $H\cong K_{h_1,h_2}$, a complete bipartite graph with vertex set $V=V_1\cup V_2$, on $h=h_1+h_2$ vertices, where $h_i=|V_i|$. Let $G^+$ be the graph obtained from $G$ by adding some edges to $H$, and between the vertices $v_{2j},\ldots,v_{sj}$ with $1\le j\le r$. If $v\in V_i$ for some $i\in\{1,2\}$, and $$ \frac{1}{2}\left(h-\sqrt{h^2-4h_j}\right)\ge 2+2\cos\left(\frac{2r\pi}{2r+1}\right), $$ where $j$ is the other index (that is, $\{i,j\}=\{1,2\}$), then $\alpha(G^+)=\alpha(F_k(G^+))$. \end{corollary} \begin{proof} It follows from Theorem \ref{th-Dalfo2024}($iii$) that $\alpha(F_k(K_{h_1,h_2}))=\alpha(K_{h_1,h_2})$. Moreover, it can be shown that, if $v\in V_i$, then $\lambda_1(\L_{[V(H-v)]}(H))=\frac{1}{2}\left(h-\sqrt{h^2-4h_j}\right)$, with $\{i,j\}=\{1,2\}$. Then, by the hypothesis, we have $\lambda_1(\L_{[V(H-v)]}(H))\ge 2+2\cos(\frac{2r\pi}{2r+1})=\lambda_1(\L_{[V_{r}]}(P_{r+1}))$. Therefore, the result holds by Lemma \ref{le-kite-iff} and Theorem \ref{th-kite}.
\qed \end{proof} \begin{figure} \begin{center} \begin{tikzpicture}[scale=1.1,auto,swap] \vertex (v) at (1,1) [fill]{}; \vertex (11) at (0,1) [fill]{}; \vertex (12) at (-1,0.9) [fill]{}; \vertex (13) at (-2,0.9) [fill]{}; \vertex (21) at (0,0.5) [fill]{}; \vertex (22) at (-1,0.3) [fill]{}; \vertex (23) at (-1.8,0.05) [fill]{}; \vertex (31) at (0.45,0) [fill]{}; \vertex (32) at (-0.5,-0.2) [fill]{}; \vertex (33) at (-1.3,-0.4) [fill]{}; \vertex (1) at (1.8,1.9) [fill]{}; \vertex (2) at (3,1.5) [fill]{}; \vertex (3) at (3,0.5) [fill]{}; \vertex (4) at (1.8,0.1) [fill]{}; \draw[line width=0.6pt](v)--(11); \draw[line width=0.6pt](11)--(12); \draw[line width=0.6pt](12)--(13); \draw[line width=0.6pt](v)--(21); \draw[line width=0.6pt](21)--(22); \draw[line width=0.6pt](22)--(23); \draw[line width=0.6pt](v)--(31); \draw[line width=0.6pt](31)--(32); \draw[line width=0.6pt](32)--(33); \draw[line width=0.6pt](v)--(1); \draw[line width=0.6pt](1)--(2); \draw[line width=0.6pt](2)--(3); \draw[line width=0.6pt](3)--(4); \draw[line width=0.6pt](4)--(v); \vertex (2v) at (7,1) [fill]{}; \vertex (211) at (6,1) [fill]{}; \vertex (212) at (5,0.9) [fill]{}; \vertex (213) at (4,0.9) [fill]{}; \vertex (221) at (6,0.5) [fill]{}; \vertex (222) at (5,0.3) [fill]{}; \vertex (223) at (4.2,0.05) [fill]{}; \vertex (231) at (6.45,0) [fill]{}; \vertex (232) at (5.5,-0.2) [fill]{}; \vertex (233) at (4.7,-0.4) [fill]{}; \vertex (21) at (7.35,1.9) [fill]{}; \vertex (22) at (8.4,1.9) [fill]{}; \vertex (23) at (8.2,1) [fill]{}; \vertex (24) at (7.9,0.2) [fill]{}; \draw[line width=0.6pt](2v)--(211); \draw[line width=0.6pt](211)--(212); \draw[line width=0.6pt](212)--(213); \draw[line width=0.6pt](2v)--(221); \draw[line width=0.6pt](221)--(222); \draw[line width=0.6pt](222)--(223); \draw[line width=0.6pt](2v)--(231); \draw[line width=0.6pt](231)--(232); \draw[line width=0.6pt](232)--(233); \draw[line width=0.6pt](2v)--(23); \draw[line width=0.6pt](2v)--(22); \draw[line width=0.6pt](21)--(22); \draw[line 
width=0.6pt](21)--(23); \draw[line width=0.6pt](21)--(24); \draw[line width=0.6pt](24)--(2v); \end{tikzpicture} \caption{The examples in Corollaries \ref{th-kite-cycle} (left) and \ref{th-kite-bipartite} (right) with $r=3$ and $s=3$.}\label{fig:kite_example} \end{center} \end{figure} \section{Graphs with a cut clique} \label{sec:cut-clique} The notion of cut clique generalizes the idea of cut vertex. Thus, a clique $K_r$ of a graph $G$ is a {\em cut clique} if the removal of (the vertices and edges of) $K_r$ disconnects $G$ into two or more components. Then, the following result is a natural generalization of Kirkland's result \cite{Kirkland_2000} for cut vertices. \begin{proposition} \label{prop-cut-clique} For a connected graph $G$ with $n$ vertices and a cut clique $K_r$, we have $\alpha(G)\le r$, with equality if and only if $d_G(K_r)=r(n-r)$, where $d_G(K_r)$ denotes the number of edges between $V(K_r)$ and $V(G)\setminus V(K_r)$. \end{proposition} \begin{proof} It follows from Fiedler \cite[\S 3.12]{Fiedler_1973} that $$ \alpha(G)\le \min\{\alpha(G-K_r)+|V(K_r)|,\alpha(K_r)+|V(G-K_r)|\}. $$ Then, since $G-K_r$ is disconnected, we have $\alpha(G)\le r$. We next show that, if $d_G(K_r)=r(n-r)$, then $\alpha(G)=r$. Let $G_1,\ldots,G_s$ be all components of $G-K_r$. Notice that $\L(G)=\L_1+\L_2$, where $\L_1$ is the Laplacian matrix of the graph with vertex set $V(G)$ and edge set $E(G_1\cup\cdots\cup G_s)$, and $\L_2$ is the Laplacian matrix of $G$ after removing all the edges of $G_1\cup\cdots\cup G_s$. That is, $$\L(G)={\scriptsize{\left({\begin{array}{ccccccc} \L(G_1)&&&0&\ldots&0\\ &\ddots&&&\ddots&\\ &&\L(G_s)&0&\ldots&0\\ 0&\ldots&0&0&\ldots&0\\ &\ddots&& &\ddots&\\ 0&\ldots&0& 0& \ldots&0 \end{array}} \right)}}+{\scriptsize{\left({\begin{array}{ccccccc} r\I& & &-1&\ldots&-1\\ &\ddots & &&\ddots&\\ &&r\I&-1&\ldots&-1\\ -1&\ldots&-1&n-1&\ldots&-1\\ &\ddots&& &\ddots&\\ -1&\ldots&-1& -1& \ldots&n-1 \end{array}} \right)}}. $$ Since the spectrum of $\L_2$ is $\{0^1,r^{n-r-1},n^r\}$, we have that $\lambda_2(\L_2)=r$.
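As a numerical aside (assuming Python with numpy; the small instance below is hypothetical, chosen for illustration), both directions of this proposition can be observed on a graph with a cut clique $K_2$ and two path components:

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian matrix from an edge list on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# Cut clique K_r on {0,1}; components G_1 = path 2-3 and G_2 = path 4-5-6,
# joined to the clique by all r(n-r) possible edges.
r, n = 2, 7
edges = [(0, 1), (2, 3), (4, 5), (5, 6)]
edges += [(u, v) for u in range(r) for v in range(r, n)]
alpha = np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]
assert abs(alpha - r) < 1e-9          # d_G(K_r) = r(n-r) gives alpha(G) = r

# Removing one clique-to-component edge makes the inequality strict.
edges.remove((0, 2))
alpha2 = np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]
assert alpha2 < r - 1e-6              # now alpha(G) < r
```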
It follows from Lemma \ref{le-matrix_add} with $i=2$ that $\alpha(G)=\lambda_2(\L(G))\ge\lambda_1(\L_1)+\lambda_2(\L_2)=r.$ Together with $\alpha(G)\le r$, we conclude that $\alpha(G)=r$. Otherwise, if $d_{G}(K_r)\ne r(n-r)$, then there exist $u\in V(K_r)$ and $v\in V(G_1\cup\cdots\cup G_s)$ such that $uv\notin E(G)$. Denote by $n_1$ and $n_2$ the orders of $G_1\cup\cdots\cup G_{s-1}$ and $G_s$, respectively. Let $G^*$ be a graph obtained from $G$ by adding edges such that $G^*[V(G_1\cup\cdots\cup G_{s-1})]$ and $G^*[V(G_{s})]$ are complete graphs, and $d_{G^*}(K_r)=r(n-r)$. Note that $$\L(G^*)=\left({\begin{array}{ccc} (n_1+r)\I_{n_1}-\J_{n_1}&\O&-\J_{n_1\times r}\\ \O&(n_2+r)\I_{n_2}-\J_{n_2}&-\J_{n_2\times r}\\ -\J_{r\times n_1}&-\J_{r\times n_2}&n\I_{r}-\J_{r} \end{array}} \right).$$ It can be shown that $\Phi(\L(G^*))=x(x-r)(x-r-n_1)^{n_1-1} (x-r-n_2)^{n_2-1}(x-n)^r$. Thus, the spectrum of $\L(G^*)$ is $\{0^1,r^1,(r+n_1)^{n_1-1},(r+n_2)^{n_2-1},n^r\}$. Moreover, it is easily checked that \[ \x=\Big(\underbrace{1,\ldots,1}_{n_1},\underbrace{-\frac{n_1}{n_2},\ldots,-\frac{n_1}{n_2}}_{n_2},\underbrace{0,\ldots,0}_r\Big)^{\top} \] is a Fiedler vector of $G^*$. Furthermore, since $r$ has multiplicity 1, any eigenvector $\y$ of $G^*$ with eigenvalue $\alpha(G^*)$ must be a multiple of $\x$. It follows from Lemma \ref{le-interlacing} that $\alpha(G^*)\ge\alpha(G^*-uv)$. If the equality holds and $\w$ is a Fiedler vector of $G^*-uv$, we have $\w_u=\w_v$ from Lemma \ref{le-adding_edge_iff}. Moreover, $\w$ is also a Fiedler vector of $G^*$ with $\w_u=\w_v$, a contradiction, since every Fiedler vector of $G^*$ is a multiple of $\x$, and $\x_u=0\neq\x_v$. Hence, the inequality must be strict. Finally, since $G^*-uv$ can be obtained by adding edges in $G$, we conclude that $\alpha(G)\le\alpha(G^*-uv)<\alpha(G^*)=r.$ \qed \end{proof} \begin{proposition} \label{prop-subgraph} Let $G_1$ and $G_2$ be two edge-disjoint graphs on the same set of $n$ vertices. Let $G=G_1\cup G_2$. Then, $F_k(G)=F_k(G_1)\cup F_k(G_2)$.
\end{proposition} \begin{proof} By induction on the number of edges of $G_2$, it suffices to consider the case where $G_2$ has exactly one edge $uv$, that is, $G_1=G-uv$. The result holds since $F_k(G-uv)$ is a spanning subgraph of $F_k(G)$ with \begin{align*} E(F_k(G))\backslash E(F_k(G-uv)) &=\{A_{r}A_{s} : A_{r}=\{u,u_1,\ldots,u_{k-1}\}, A_{s}=\{v,u_1,\ldots,u_{k-1}\}\}\\ & =E(F_k(G_2)). \mbox{\hskip 7.8cm \qed} \end{align*} \end{proof} \begin{theorem} \label{th-clique} Let $G$ be a graph on $n$ vertices, with a cut clique $K_r$ such that $G-K_r$ results in some disjoint graphs $G_1,\ldots,G_s$, for $s\ge 2$, and degree $d_G(K_r)=r(n-r)$ (so that {\bf all} the edges between the vertices of $K_r$ and the vertices of each $G_i$ are present). Then, $\alpha(G)=\alpha(F_k(G))=r$. \end{theorem} \begin{proof} By Theorem \ref{th-Dalfo2021} and Proposition \ref{prop-cut-clique}, we get \begin{equation} \alpha(F_k(G))\le \alpha(G)=r.\label{eq-th-clique_1} \end{equation} Let $H_1$ be the graph with vertex set $V(G)$ and edge set $E(H_1)=E(G_1\cup\cdots\cup G_s)$, and let $H_2$ be the graph obtained from $G$ by removing all the edges of $E(H_1)$. Note that $H_1$ and $H_2$ are the graphs with Laplacian matrices $\L_1$ and $\L_2$, respectively, given in the proof of Proposition \ref{prop-cut-clique}, and $H_2$ is the graph obtained from a complete bipartite graph by adding all the edges within the partite set of order $r$. By Theorems \ref{th-bip-addedge1} and \ref{th-bip-addedge2}, we obtain $\alpha(F_k(H_2))=r$. From Proposition \ref{prop-subgraph}, we have $\L(F_k(G))=\L(F_k(H_1))+\L(F_k(H_2))$. Then, it follows from Lemma \ref{le-matrix_add} with $i=2$ that \begin{equation*} \alpha(F_k(G))=\lambda_2(F_k(G))\ge \lambda_1(F_k(H_1))+\lambda_2(F_k(H_2))=\alpha(F_k(H_2))=r.
\end{equation*} Combining this with (\ref{eq-th-clique_1}) completes the proof.\qed \end{proof} In particular, with $r=1$, the vertex of $K_1$ is a cut vertex of degree $n-1$, and we reobtain a result of Barik and Verma \cite{bv24} (see Theorem \ref{theBarik} in the Preliminaries). \begin{thebibliography}{99} \bibitem{a07} N. M. M. de Abreu, Old and new results on algebraic connectivity of graphs, {\em Linear Algebra Appl.} {\bf 423} (2007) 53--73. \bibitem{Bapat_1998} R. B. Bapat and S. Pati, Algebraic connectivity and the characteristic set of a graph, {\em Linear Multilinear Algebra} {\bf 45} (1998) 247--273. \bibitem{bv24} S. Barik and P. Verma, Spectral properties of token graphs, {\em Linear Algebra Appl.} {\bf 687} (2024) 181--206. \bibitem{clr10} P. Caputo, T. M. Liggett, and T. Richthammer, Proof of Aldous' spectral gap conjecture, {\em J. Amer. Math. Soc.} {\bf 23(3)} (2010) 831--851. \bibitem{Cvetkovic_1980} D. M. Cvetkovi\'c, M. Doob, and H. Sachs, {\em Spectra of Graphs}, Academic Press, New York, 1980. \bibitem{Dalfo_2021} C. Dalf\'o, F. Duque, R. Fabila-Monroy, M. A. Fiol, C. Huemer, A. L. Trujillo-Negrete, and F. J. Zaragoza Mart\'inez, On the Laplacian spectra of token graphs, {\em Linear Algebra Appl.} {\bf 625} (2021) 322--348. \bibitem{Dalfo_2024} C. Dalf\'o and M. A. Fiol, On the algebraic connectivity of some token graphs, {\em J. Algebraic Combin.} {\bf 60} (2024) 45--56. \bibitem{ffhhuw12} R. Fabila-Monroy, D. Flores-Peñaloza, C. Huemer, F. Hurtado, J. Urrutia, and D. R. Wood, Token graphs, {\em Graphs Combin.} {\bf 28(3)} (2012) 365--380. \bibitem{Fiedler_1973} M. Fiedler, Algebraic connectivity of graphs, {\em Czech. Math. J.} {\bf 23(2)} (1973) 298--305. \bibitem{Guo_2008} J. Guo, A conjecture on the algebraic connectivity of connected graphs with fixed girth, {\em Discrete Math.} {\bf 308} (2008) 5702--5711. \bibitem{Guo_2018} S. Guo, R. Zhang, and G.
Yu, Hamiltonian graphs of given order and minimum algebraic connectivity, {\em Linear Multilinear Algebra} {\bf 66(3)} (2018) 459--468. \bibitem{Horn_2013} R. A. Horn and C. R. Johnson, {\em Matrix Analysis}, Cambridge University Press, Cambridge, 2013. \bibitem{Kirkland_2000} S. Kirkland, A bound on the algebraic connectivity of a graph in terms of the number of cutpoints, {\em Linear Multilinear Algebra} {\bf 47} (2000) 93--103. \bibitem{Merris_1998} R. Merris, Laplacian graph eigenvectors, {\em Linear Algebra Appl.} {\bf 278} (1998) 221--236. \bibitem{Reyes_2025} M. A. Reyes, C. Dalf\'o, M. A. Fiol, and A. Messegu\'e, A general method to find the spectrum and eigenspaces of the $k$-token graph of a cycle, and $2$-token through continuous fractions, {\em Discrete Appl. Math.} {\bf 360} (2025) 353--365. \bibitem{Shao_2008} J. Shao, J. Guo, and H. Shan, The ordering of trees and connected graphs by algebraic connectivity, {\em Linear Algebra Appl.} {\bf 428} (2008) 1421--1438. \bibitem{Xue_2019} J. Xue, H. Lin, and J. Shu, The algebraic connectivity of graphs with given circumference, {\em Theoret. Comput. Sci.} {\bf 772} (2019) 123--131. \bibitem{y05} W.-C. Yueh, Eigenvalues of several tridiagonal matrices, {\em Appl. Math. E-Notes} {\bf 5} (2005) 66--74. \end{thebibliography} \end{document}